source_id (int64, 1–4.64M) | question (string, lengths 0–28.4k) | response (string, lengths 0–28.8k) | metadata (dict) |
---|---|---|---|
59,089 |
Is there a known connection between Weierstrass' function $W_\alpha (x) = \sum_{n=0}^\infty b^{- n \alpha} \cos(b^n x)$ and Brownian motion? Specifically, when $\alpha = 1/2$, the Weierstrass function has the same Hölder continuity properties that Brownian sample paths do. Some crude Matlab experiments seem to suggest this function has linear quadratic variation for low values of $b$, too. The expression reminds me a little of the Karhunen-Loève expansion of Brownian motion, but I don't see how the two might relate. Many thanks.
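For readers who want to reproduce the kind of experiment described above, here is a minimal Python sketch (my addition, not part of the original post, which used Matlab; the base $b$, truncation level, and grid size are arbitrary choices) that sums a truncated Weierstrass series and estimates its quadratic variation on $[0, 2\pi]$.

```python
import numpy as np

def weierstrass(x, b=2.0, alpha=0.5, N=25):
    """Truncated Weierstrass-type sum: sum_{n<N} b^(-n*alpha) * cos(b^n * x)."""
    n = np.arange(N)
    # rows index the terms of the series, columns index the sample points
    return np.sum(b ** (-alpha * n)[:, None] * np.cos(np.outer(b ** n, x)), axis=0)

# sample on a fine grid and estimate the quadratic variation sum (W(t_{k+1}) - W(t_k))^2
t = np.linspace(0.0, 2 * np.pi, 200001)
W = weierstrass(t)
qv = np.sum(np.diff(W) ** 2)
print("estimated quadratic variation on [0, 2*pi]:", qv)
```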
|
I don't know the history at all, but I have to imagine that the language was invented to provide a context for talking about solution operators for differential equations. Consider, for example, the PDE $D f = f_0$ where $D$ is a nice differential operator. Taking Fourier transforms, this says that $P(\xi)\hat{f} = \hat{f}_0$, where $P$ is the principal symbol of $D$ (a polynomial). Everyone in the world just wants to write $\hat{f} = \frac{1}{P(\xi)}\hat{f}_0$ and take inverse Fourier transforms. In other words, solving the PDE is the same thing as finding an operator $S$ whose Fourier multiplier is $\frac{1}{P(\xi)}$. This most likely fails to be a polynomial, so $S$ is evidently not a differential operator. As far as I can tell, many of those big fat books on pseudodifferential operator theory are all about how to invert as many operators as possible in this sense while salvaging as much regularity as you can. It gets extremely subtle, but I think the motivation is fairly close to the surface. Aside from that, you might also be led to invent pseudodifferential operators if you cared deeply about the spectral theory of differential operators. The spectral theorem for an operator $T$ is more or less equivalent to the existence of a "functional calculus", i.e. a sensible way to form operators $f(T)$ out of various classes of functions $f$ on the spectrum of $T$. For differential operators (especially on non-compact domains where there need not be a nice eigenspace decomposition), the functional calculus is often obtained via the Fourier transform, and the pseudodifferential calculus manifests itself.
|
{
"source": [
"https://mathoverflow.net/questions/59089",
"https://mathoverflow.net",
"https://mathoverflow.net/users/9564/"
]
}
|
59,149 |
Why can the elements of the dual space of $\ell^\infty(\mathbb N)$ be represented as sums of elements of $\ell^1(\mathbb N)$ and Null$(c_0)$? EDIT: As confirmed in the comments, the OP intended to ask about this sentence
"$f\in\ell_\infty^*$ is the sum of an element of $\ell_1$ and an element null on $c_0$" from the paper D. H. Fremlin and M. Talagrand: A Gaussian Measure on $l^\infty$ http://jstor.org/stable/2243023 (This is a different claim from what was in the original version of the question.)
|
Obviously, the OP intended to ask about this sentence
"$f\in\ell_\infty^*$ is the sum of an element of $\ell_1$ and an element null on $c_0$"
from the paper D. H. Fremlin and M. Talagrand: A Gaussian Measure on $l^\infty$ http://jstor.org/stable/2243023 (This is a different claim from what was in the question.) The authors refer to the book Day, M. (1973). Normed Linear Spaces. Springer, Berlin. I was not able to find the exact place in Day's book where this is shown, but I think that for this special case it is relatively easy. For $f\in\ell_\infty^*$ put $a_i=f(e^i)$. Then the sequence $a=(a_i)$ belongs to $\ell_1$. (Since $\sum\limits_{i=1}^n |a_i| = \sum\limits_{i=1}^n |f(e^i)| = f(\sum\limits_{i=1}^n \varepsilon_ie^i) \le \lVert f \rVert$, where $\varepsilon_i=\pm1$ are chosen according to the signs of $f(e^i)$.) Now, if $x_n\to 0$, then
$$f(x)-a^*(x)= \lim\limits_{n\to\infty}\Bigl( f\bigl(\sum\limits_{i=1}^n x_ie^i\bigr)-\sum\limits_{i=1}^n a_ix_i\Bigr)=0.$$ I hope I haven't overlooked something and that someone will provide the reference to the result (probably more general) which the authors of the above-mentioned paper had in mind.
|
{
"source": [
"https://mathoverflow.net/questions/59149",
"https://mathoverflow.net",
"https://mathoverflow.net/users/13799/"
]
}
|
59,157 |
I'm looking for references for the following facts concerning the ultrafilter lemma (~ "there exist non-principal ultrafilters"): The ultrafilter lemma is independent of ZF. ZF + the ultrafilter lemma does not imply the Axiom of Choice. I would prefer an overview article / book that links to the original papers instead of the original papers themselves. That's because I don't rely on these facts; I only want to give some context for my readers.
|
In any of the formulations mentioned (so far) in the comments, the ultrafilter lemma is independent of ZF but weaker than AC. That the strongest form (all filters can be extended to ultrafilters) doesn't imply AC is a theorem of J.D. Halpern and A. Lévy ["The Boolean prime ideal theorem does not imply the axiom of choice" in Axiomatic Set Theory, Proc. Symp. Pure Math. XIII part 1, pp. 83-134]. That ZF doesn't prove even the weakest form (there exists a nonprincipal ultrafilter on some set) is a theorem of mine ["A model without ultrafilters," Bull. Acad. Polon. Sci. 25 (1977) pp. 329-331], building on S. Feferman's construction of a model with no non-principal ultrafilters on the set of natural numbers ["Some applications of the notions of forcing and generic sets," Fundamenta Mathematicae 55 (1965) pp. 325-345].
|
{
"source": [
"https://mathoverflow.net/questions/59157",
"https://mathoverflow.net",
"https://mathoverflow.net/users/8494/"
]
}
|
59,213 |
Here is a very natural question: Q: Is it always possible to generate a finite simple group with only $2$ elements? In all the examples that I can think of the answer is yes. If the answer is positive, how does one prove it? Is it possible to prove it without using the
classification of finite simple groups?
|
The answer to your question is yes. Moreover, if you pick two random elements from a finite simple group, then they generate the whole group with probability which tends to 1 as the size of the group grows. There are even stronger results in this direction, but I am not an expert in the subject so you will have to look for it yourself. You should look for papers by Liebeck and Shalev, Lubotzky, Kantor, and there are others who I am not sure about now. All of these results require the classification. There are very few results regarding finite simple groups which do not require the classification. Edit: here is a link to a fairly old survey paper in the Notices: http://www.ams.org/notices/200104/fea-shalev.pdf . There are many new developments in the last decade.
|
{
"source": [
"https://mathoverflow.net/questions/59213",
"https://mathoverflow.net",
"https://mathoverflow.net/users/11765/"
]
}
|
59,262 |
There are various open problems in the subject of logical number theory concerning the possibility of proving this or that well-known standard result over this or that weak theory of arithmetic, usually weakened by restricting the quantifier complexity of the formulas for which one has an induction axiom. In particular, the question of proving the infinitude of the primes in Bounded Arithmetic has received attention. Does this question make known contact with "workaday number theory" - number theory not informed by concepts from logic and model theory? I understand that a proof of the infinitude of the primes in bounded arithmetic could not use any functions that grow exponentially (since the theory doesn't have the power to prove the totality of any such function). So especially I mean to ask: 1) If one had such a proof, would it have consequences about the primes or anything else in the standard model of arithmetic?
2) If one proves that no such proof exists, would that have consequences...
3) Do any purely number-theoretic conjectures, if settled in the right way, settle this question or its kin? As a side question, I'd be interested to know the history of this question. I first heard about it from Angus Macintyre and that must have been 25 years ago.
|
First, let me discuss the precise open question and why it is interesting to logicians. Then I will discuss potential ramifications outside of pure logic. The main open question is whether the theory $I\Delta_0$ can prove the existence of arbitrarily large primes. The theory $I\Delta_0$ is the theory over the language of basic arithmetic ($0,1,+,\times,{\leq}$, together with their usual defining axioms) where induction is limited to bounded formulas ($\Delta_0$ formulas): formulas wherein all quantifiers are of the bounded form $\forall x(x \leq t \rightarrow \cdots)$ and $\exists x(x \leq t \land \cdots)$ where $t$ is a term of the language possibly involving variables other than $x$. While this theory has been around for quite some time, one of the earliest papers studying this system in detail is Paris & Wilkie, On the scheme of induction for bounded arithmetic formulas [Ann. Pure Appl. Logic 35 (1987), no. 3, 261–302, MR904326]. The currently open question regarding the unboundedness of primes occurs there, probably for the first time in print. Why is this question interesting to logicians? There is a standard fact about the proof theory of ∀∃-formulas which explains the interest. Over $I\Delta_0$, this fact takes the following form due to Parikh: Fact. If $I\Delta_0$ proves $\forall x \exists y \phi(x,y)$, where $\phi(x,y)$ is a bounded formula, then $I\Delta_0$ proves $\forall x \exists y(y \leq t(x) \land \phi(x,y))$ where $t(x)$ is a term of the language (i.e. a polynomial in $x$). Thus, $I\Delta_0$ cannot prove certain statements such as $\forall x \exists y (2^x = y)$ since $2^x$ grows faster than any polynomial. (Here, $2^x = y$ is an abbreviation for a bounded formula that expresses the properties of the exponential function $2^x$. That formula is rather complex, so I will not explain it here.) In fact, any proof that involves functions of exponential growth at intermediate stages cannot be formalized in $I\Delta_0$ because of this. This applies in particular to Euclid's proof, which takes a rather large product of primes relative to the input $x$ in order to prove that there is a prime larger than $x$. However, by Bertrand's Postulate, the formula $\forall x \exists y(\mathrm{prime}(y) \land y \geq x)$ does not by itself impose exponential growth. Thus, the fact leaves open the question whether some other proof of the infinitude of primes can be formalized in $I\Delta_0$. What are potential ramifications outside of logic? A natural question to ask about primes is whether there is some kind of efficient formula for producing large primes. (Cf. the Polymath 4 Project.) One can ask further about the growth rate and the computational complexity of such a function. Proof mining techniques show that if $I\Delta_0$ proves the infinitude of primes, then there must be a relatively simple function of moderate growth rate to produce large primes. In fact, there must be such a function that produces primes provably in $I\Delta_0$. The existence of such a function should not be a surprise. Indeed, the simple function $p(x)$ that returns the first prime greater than $x$ has very moderate growth rate by Bertrand's Postulate. However, the function produced by a proof of the unboundedness of primes in $I\Delta_0$ will have other important characteristics. In particular, it will have rather low computational complexity. The precise relationship between $I\Delta_0$ and computational complexity is a little subtle and I don't want to make this statement more precise here. (Some details were added below.)
However, there are good reasons to believe that such a proof could yield a polynomial time computable function of moderate growth that produces large primes, which would be a major breakthrough. What can we say if there is no $I\Delta_0$ proof of the existence of arbitrarily large primes? Well, for one thing, a model of $I\Delta_0$ where primes are bounded would be a rather interesting object with potential applications to the structure theory of integral domains. However, it turns out that there is not much one can say here about the standard model, but the reason is subtle. The key problem is the definition of prime number. In the discussion above, I implicitly assumed the standard definition $\mathrm{prime}(x)$ iff $$x > 1 \land (\forall y, z \leq x)(x = yz \rightarrow y = 1 \lor z = 1).$$ However, there are other potential definitions of primes. For example, one could define prime numbers as those which pass the AKS primality test. These definitions, though equivalent in the standard model, are not necessarily equivalent over $I\Delta_0$. Therefore, it is possible that $I\Delta_0$ fails to prove the existence of arbitrarily large "traditional primes" but that $I\Delta_0$ does prove the existence of arbitrarily large "AKS primes." Since the standard model can't tell the difference between these two notions of primality, this could still give an efficient way to produce large primes as discussed above. Let me clarify the relationship between proofs in $I\Delta_0$ and computational complexity. The polynomial hierarchy is closely tied to slightly different systems of bounded arithmetic, namely the systems $S^n_2$, $T^n_2$ pioneered by Sam Buss. Note that these are somewhat orthogonal to $I\Delta_0$. On the one hand, these systems include the smash function $x \# y = 2^{|x||y|}$ which is not provably total in $I\Delta_0$. On the other hand, the amount of induction in these systems is even more restricted than in $I\Delta_0$. It is hard to give a precise relationship between time complexity and proofs in $I\Delta_0$. The claims I made above are rather based on the (admittedly optimistic) expected success of proof mining on a hypothetical proof of the unboundedness of primes in $I\Delta_0$. In any case, there is an inherent weakness to this approach. Complexity theorists do not impose limits on the amount of induction they use in their proofs. For example, complexity theorists will freely use the equivalence of "traditional primes" and "AKS primes" in their arguments. Therefore, the existence of a simple way to generate large primes is not at all equivalent to the existence of a proof of the existence of arbitrarily large primes in $I\Delta_0$ or other systems of bounded arithmetic.
|
{
"source": [
"https://mathoverflow.net/questions/59262",
"https://mathoverflow.net",
"https://mathoverflow.net/users/10909/"
]
}
|
59,263 |
For any commutative ring $A$, the set of idempotents of $A$ will be denoted as $E(A)$. This set has a (canonical) ring structure, with addition defined by
$$e+'f=e(1-f)+f(1-e)$$
where $+$ and $-$ are the operations in the ring itself. The multiplication operation is the same as in the ring itself. Suppose now I have a commutative unital ring $A$ and let $B$ be another commutative ring that is an integral extension of $A$. Then clearly $E(B)$ is an over-ring of $E(A)$, but is it clear that $E(B)$ is an integral extension of $E(A)$? Are there easy counterexamples for this? Would it help if I assumed that $B$ is a finite integral extension of $A$? Edit: The question above had a trivial answer as Todd pointed out. Now I'm curious what happens if I really want $E(B)$ to be a finite integral extension of $E(A)$. Does $B$ being a finite integral extension of $A$ guarantee that?
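As a sanity check on the addition $e +' f = e(1-f)+f(1-e)$, here is a small Python sketch (my addition, not part of the question; the modulus $210$ is an arbitrary choice) that brute-forces the idempotents of $\mathbb{Z}/210\mathbb{Z}$ and verifies that $E(A)$ is closed under $+'$ and under multiplication, with $0$ as additive identity and every element its own additive inverse.

```python
n = 210  # 2 * 3 * 5 * 7, so Z/210Z has 2**4 = 16 idempotents by CRT

idempotents = [e for e in range(n) if (e * e) % n == e]
assert len(idempotents) == 16

def add(e, f):
    # e +' f = e(1 - f) + f(1 - e), all arithmetic mod n
    return (e * (1 - f) + f * (1 - e)) % n

for e in idempotents:
    assert add(e, 0) == e              # 0 is the additive identity
    assert add(e, e) == 0              # every element is its own additive inverse
    for f in idempotents:
        s, p = add(e, f), (e * f) % n
        assert (s * s) % n == s        # E(A) is closed under +'
        assert (p * p) % n == p        # E(A) is closed under multiplication
        assert add(e, f) == add(f, e)  # +' is commutative

print("E(Z/210Z) has", len(idempotents), "elements and all checks pass")
```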
|
|
{
"source": [
"https://mathoverflow.net/questions/59263",
"https://mathoverflow.net",
"https://mathoverflow.net/users/1245/"
]
}
|
59,291 |
For a poset $P$ there exists an embedding $y$ into a complete and cocomplete poset $\hat{P}$ of downward closed subsets of $P$. It is easy to verify that the embedding preserves all existing limits and no non-trivial colimits --- i.e. colimits are freely generated. $\hat{P}$ may be equally described as the poset of all monotonic functions from $P^{op}$ to $2$, where $2$ is the two-valued Boolean algebra. Then we see that $P$ is nothing more than a $2$-enriched category, $2^{P^{op}}$ the $2$-enriched category of presheaves over $P$, and that $y$ is just the Yoneda functor for $2$-enriched categories. However, for a poset $P$ there is also a completion that preserves both limits and colimits --- namely --- the Dedekind-MacNeille completion, embedding $P$ into the poset of up-down-subsets of $P$. Is it possible to carry the latter construction over to the categorical setting and reach something like a limit- and colimit-preserving embedding of any category $\mathbb{C}$ into a complete and cocomplete category?
|
Yes, it's a general construction which is related to so-called Isbell conjugation. Let $C$ be a small category. It is well-known that the free colimit cocompletion is given by the Yoneda embedding into presheaves on $C$, $y: C \to Set^{C^{op}}$. The presheaf category is also complete. Dually, the free limit-completion is given by the dual Yoneda embedding $y^{op}: C \to (Set^C)^{op}$. The co-presheaf category is also cocomplete. Therefore there is a cocontinuous functor $L: Set^{C^{op}} \to (Set^C)^{op}$ which extends $y^{op}$ along $y$. This is a left adjoint; its right adjoint is the (unique up to isomorphism) functor $R: (Set^C)^{op} \to Set^{C^{op}}$ which extends $y$ continuously along $y^{op}$. This adjoint pair is called Isbell conjugation . As is the case for any adjoint pair, this restricts to an adjoint equivalence between the full subcategories consisting, on one side, of objects $F$ of $Set^{C^{op}}$ such that the unit component $F \to R L F$ is an iso, and on the other side of objects $G$ of $(Set^C)^{op}$ such that the counit $L R G \to G$ is an iso. Either side of this equivalence gives the Dedekind-MacNeille completion of $C$. By the Yoneda lemma, $y: C \to Set^{C^{op}}$ factors through the full subcategory of DM objects as a functor $C \to DM(C)$ which preserves any limits that exist in $C$, and dually $y^{op}: C \to (Set^C)^{op}$ factors as the same functor $C \to DM(C)$ which preserves any colimits that exist in $C$. Edit: Perhaps it might help to spell this out a little more. The classical Dedekind-MacNeille completion is obtained by taking fixed points of a Galois connection between upward-closed sets and downward-closed sets of a poset $P$. So, if $A$ is downward-closed (i.e., a functor $A: P^{op} \to \mathbf{2}$), and $B: P \to \mathbf{2}$ is upward-closed, we define $$A^u = \{p \in P: \forall_{x \in P} x \in A \Rightarrow x \leq p\}$$ $$B^d = \{q \in P: \forall_{y \in P} y \in B \Rightarrow q \leq y\}$$ and one has $$A \subseteq B^d \qquad \text{iff} \qquad A \times B \subseteq (\leq) \qquad \text{iff} \qquad B \subseteq A^u$$ We thus have an adjunction $$(L = (-)^u: \mathbf{2}^{P^{op}} \to (\mathbf{2}^P)^{op}) \qquad \dashv \qquad (R = (-)^d: (\mathbf{2}^P)^{op} \to \mathbf{2}^{P^{op}})$$ and the poset of downward-closed sets $A$ for which $A = (A^u)^d$ is isomorphic to the poset of upward-closed sets $B$ for which $(B^d)^u = B$. All of this can be "categorified" so as to hold in a general enriched setting, where the base of enrichment is a complete, cocomplete, symmetric monoidal closed category $V$. We may take for example $V = Set$. Analogous to the formation of $B^d$, we may define half of the Isbell conjugation $R: (Set^C)^{op} \to Set^{C^{op}}$ by the formula $$R(G) = \int_{d \in C} \hom(-, d)^{G(d)}$$ where $\hom$ plays the role of the poset relation $\leq$, exponentiation or cotensor plays the role of the implication operator, and the end plays the role of the universal quantifier. The other half $L: Set^{C^{op}} \to (Set^C)^{op}$ is also defined, at the object level, by $$L(F) = \int_{c \in C} \hom(c, -)^{F(c)}$$ (the right-hand side is a set-valued functor $C \to Set$; when we interpret this in $(Set^C)^{op}$, the end is interpreted as a coend, and the cotensor is interpreted as a tensor). 
In any event, given $F: C^{op} \to Set$ and $G: C \to Set$, we have natural bijections between morphisms $$\{F \to R(G)\} \qquad \cong \qquad \{F \times G \to \hom\} \qquad \cong \qquad \{G \to L(F)\}$$ and the analogue of the MacNeille completion is obtained by taking "fixed points" of the adjunction $L \dashv R$, as described above by full subcategories where the unit and counit $F \to RLF$ and $LRG \to G$ become isomorphisms. These full subcategories are equivalent; one side of the equivalence is complete because it is the category of algebras for an idempotent monad associated with $RL$, and the other side is cocomplete because it is the category of coalgebras for an idempotent comonad associated with $LR$, and thus both sides are complete and cocomplete. Edit: It has been pointed out that there is a mistake in the argument at the end of the prior edit, asserting that the fixed points of the monad coincide with the algebras of an associated idempotent monad. See Michal's answer (posted 9/13/2013).
|
{
"source": [
"https://mathoverflow.net/questions/59291",
"https://mathoverflow.net",
"https://mathoverflow.net/users/13480/"
]
}
|
59,357 |
Given two morphisms between chain complexes $f_\bullet,g_\bullet\colon\,C_\bullet\longrightarrow D_\bullet$, a chain homotopy between them is a sequence of maps $\psi_n\colon\,C_n\longrightarrow D_{n+1}$ such that $f_n-g_n= \partial_D \psi_n+\psi_{n-1}\partial_C$. I can motivate this definition only when the chain complexes are associated to some topological space. For example if $C_\bullet$ and $D_\bullet$ are simplicial chain complexes, then for a simplex $\sigma$, a homotopy between $f(\sigma)$ and $g(\sigma)$ is something like $\psi(\sigma)\approx\sigma\times [0,1]$, whose boundary is $f(\sigma)-g(\sigma)-\psi(\partial\sigma)$. But chain homotopy also features in contexts where there is no topological space lurking in the background. Examples are chain homotopy of complexes of graphs (e.g. Conant-Schneiderman-Teichner) or chain homotopy of complexes in Khovanov homology. In such contexts, the motivation outlined in the previous paragraph makes no sense, with "the boundary of a cylinder being the top, bottom, and sides", because there's no such thing as a "cylinder". Thus, surely, the "cylinder motivation" isn't the most fundamental reason that chain homotopy is "the right" relation to study on chain complexes. It's an embarrassing question, but: What is the fundamental (algebraic) reason that chain homotopy is relevant when studying chain complexes?
|
Here's one way to look at it: There is a chain complex $Hom(C,D)$ in which the $n$th chain group is the product over $k$ of $Hom(C_k,D_{n+k})$ and the boundary is given by $\partial (f(c))=(\partial f)(c)+(-1)^{|f|}f(\partial c)$. A chain map is a $0$-cycle, and two of them are chain homotopic if they differ by a boundary. EDIT: And then a chain map $B\to Hom(C,D)$ corresponds precisely to a chain map $B\otimes C\to D$, where $B\otimes C$ is defined using the usual convention $\partial (b\otimes c)=(\partial b)\otimes c + (-1)^{|b|}b\otimes \partial c$
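To make the translation back to the question's definitions explicit (a short check, my addition): for $f$ of degree $0$ the boundary formula reads $(\partial f)(c) = \partial(f(c)) - f(\partial c)$, so $\partial f = 0$ precisely when $f\partial = \partial f$, i.e. when $f$ is a chain map; and for $\psi$ of degree $1$ it reads $(\partial \psi)(c) = \partial(\psi(c)) + \psi(\partial c)$, so $f - g = \partial\psi$ unpacks to exactly the condition $f_n - g_n = \partial_D\psi_n + \psi_{n-1}\partial_C$ from the question.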
|
{
"source": [
"https://mathoverflow.net/questions/59357",
"https://mathoverflow.net",
"https://mathoverflow.net/users/2051/"
]
}
|
59,456 |
It seems that commutative diagrams appeared sometime in the late 1940s -- for example, the Eilenberg-MacLane (1943) group cohomology paper does not have any, while the 1953 Hochschild-Serre paper does. Does anyone know who started using them (and how they convinced the printers to do this)?
|
I can muddy the waters...! According to editor E. Scholz of Hausdorff’s Collected Works (2008, p. 884): In a note of 3/20/1933 (Nachlass, fasc. 449) and in a further undated note (fasc. 571), Hausdorff symbolized the functoriality property of homology (in our later terminology) with a commutative diagram of homomorphisms between the terms of two sequences of groups $(A_n)_{n\in\mathbf N}$, $(A'_n)_{n\in\mathbf N}$: (Nachlass, fasc. 571, leaf 1). Could this have, somehow, made its way out of Bonn (where Hausdorff lectured on combinatorial topology that year) and to Hurewicz, Eilenberg, Steenrod, et al.? This seems not impossible, as e.g. Tucker (1932, footnote 4) mentions discussion of Hausdorff letters in the Princeton seminar. Maybe also noteworthy: Hausdorff’s colleague and mentor in Bonn was Eduard Study, author of another early commutative diagram. Addition: Six years earlier, Fritz London (son of yet another Bonn mathematician) already had, in Winkelvariable und kanonische Transformationen in der Undulationsmechanik, Z. f. Physik 40 (Dec. 1926) 193-210: (For the story of the arrows themselves, see the relevant questions on $\to$ and $\mapsto$.)
|
{
"source": [
"https://mathoverflow.net/questions/59456",
"https://mathoverflow.net",
"https://mathoverflow.net/users/11142/"
]
}
|
59,520 |
Less tongue in cheek, is it known what the relative consistency is for theorems proved with an automatic theorem prover? Of course this depends somewhat on what assumptions one makes with respect to predicativity and so on. But for concreteness take one of the popular packages with its standard installation. Perhaps this is a can of worms, or a piece of string of indeterminate length, but the recent surge of interest in Voevodsky's univalent foundations raises questions about the consistency strength of the system HoTT he (and others) propose.
|
For systems like Coq that are based on type theory, this question is trickier to answer than you might expect. First of all, what does it take to "know" the consistency strength of some system? Classically, the most thoroughly studied logical systems are based on first-order logic, using either the language of elementary arithmetic or the language of set theory. So if you are able to say, "System X is equiconsistent with ZF" (or with PA, or PRA, or ZFC + infinitely many inaccessibles, etc.), then most people will feel that they "know" the consistency strength of X, because you have calibrated it against a familiar hierarchy of systems. Coq, however, is based on something called the Calculus of Inductive Constructions (CIC). Without going into a detailed explanation of what this is, let me just mention that the core of CIC doesn't have any axioms, but typically people add axioms as needed. For example, if you want classical logic, then you can add the law of the excluded middle as an axiom. To get more power you can add more axioms (though you have to be careful because certain combinations of axioms are known to be inconsistent). But trying to line up the various systems you can get this way against more familiar set-theoretic or arithmetic systems is a tricky business. Typically, we cannot expect an exact calibration, but we can interpret various fragments of set theory in type theory and vice versa, showing that the consistency of CIC plus certain axioms is sandwiched between two different systems on the set-theoretic side. If you want to delve into the details, I'd recommend the paper Sets in Coq, Coq in Sets by Bruno Barras as a starting point.
|
{
"source": [
"https://mathoverflow.net/questions/59520",
"https://mathoverflow.net",
"https://mathoverflow.net/users/4177/"
]
}
|
59,563 |
The space of configurations of $k$ distinct points in the plane
$$F(\mathbb{R}^2,k)=\lbrace(z_1,\ldots , z_k)\mid z_i\in \mathbb{R}^2, i\neq j\implies z_i\neq z_j\rbrace$$
is a well-studied object from several points of view. Paths in this space correspond to motions of a set of point particles moving around avoiding collisions, and its fundamental group is the pure braid group. It is not hard to prove that this space is homotopy equivalent to the configuration space of the unit disk
$$F(D^2,k)=\lbrace(z_1,\ldots , z_k)\mid z_i\in D^2, i\neq j\implies z_i\neq z_j\rbrace$$ In real life however, particles, or any other objects which move around in some bounded domain without occupying the same space, have a positive radius, and so would be more realistically modelled by disks rather than points. This motivates the study of the spaces
$$F(D^2,k;r)= \lbrace(z_1,\ldots , z_k)\mid z_i\in D^2, i\neq j\implies |z_i - z_j|>r\rbrace$$
where $r>0$. The homotopy type of this space is a function of $r$ and $k$. Fixing $k$ and varying $r$ gives a spectrum (is this the right word?) of homotopy types between $F(\mathbb{R}^2,k)$ and the empty space. It seems like an interesting (and difficult) problem to study the homotopy invariants as functions of $r$. For instance, for $k=3$ what is $\beta_1(r)$, the first Betti number of $F(D^2,k;r)$? Question: Have these spaces and their homotopy invariants been studied before? If so, where? Of course one can also ask the same question with disks of arbitrary dimensions.
|
I have a number of results on hard disks in various types of regions, and preprints are in progress. The terminology "hard spheres" (or "hard disks" in dimension 2) comes from statistical mechanics, and I believe Fred Cohen is following my lead on this. (See for example the hard disks section of Persi Diaconis' survey article.) --- With Gunnar Carlsson and Jackson Gorham, we did numerical experiments and computed the number of path components for 5 disks in a box, as the radius varies over all possible values. This is quite a complicated story already, as the number is not monotone or even unimodal in the radius. (This preprint is almost done, a rough copy is available on request.) --- With Yuliy Baryshnikov and Peter Bubenik we develop a general Morse-theoretic framework and proved that in a square, if $r < 1/(2n)$ then the configuration space of $n$ disks of radius $r$ is homotopy equivalent to the configuration space of $n$ points in the plane. On the other hand this is tight: if $r > 1/(2n)$ then the natural inclusion map is not a homotopy equivalence. (Again this preprint is getting close to being posted to the arXiv...) There is a much more general statement here. --- I also have some results with Bob MacPherson about hard disks in a square and also in (the easier case of) an infinite strip. We have been talking to Fred Cohen about this a bit lately, who believes there may be connections to more classical configuration spaces. I am slightly self-conscious about claiming results here, without first having posted the preprints to the arXiv, but I just wanted to state that there are a number of things known now, and I am working hard to get everything written up in a timely fashion. In the meantime I have some slides up from a talk I recently gave at UPenn. Here is a concrete example, since that might be more satisfying. It turns out for $3$ disks in a unit square: For $0.25433 < r$, the configuration space is empty, for $0.25000 < r < 0.25433$ it is homotopy equivalent to $24$ points, for $0.20711 < r < 0.25000$ it is homotopy equivalent to $2$ circles, for $0.16667 < r < 0.20711$ it is homotopy equivalent to a wedge of $13$ circles, and for $r < 0.16667$ it is homotopy equivalent to the configuration space of $3$ points in the plane. For $4$ disks in a square it looks like the topology changes $9$ or $10$ times, and for $5$ disks it looks like the topology might change $25$-$30$ times or more. The general idea is that certain types of "jammed" configurations act like critical points of a Morse function, and mark the only places where the topology can change. Update: two preprints have been posted to the arXiv. Min-type Morse theory for configuration spaces of hard spheres (Baryshnikov, Bubenik, Kahle): http://arxiv.org/abs/1108.3061 Computational topology for configuration spaces of hard disks (Carlsson, Gorham, Kahle, and Mason): http://arxiv.org/abs/1108.5719 Update (January 2022): A lot has happened in the past year or two. Here are some more preprints that have been recently posted to the arXiv. Configuration spaces of disks in an infinite strip (Alpert, Kahle, MacPherson) https://arxiv.org/abs/1908.04241 Configuration spaces of disks in a strip, twisted algebras, persistence, and other stories
(Alpert and Manin) https://arxiv.org/abs/2107.04574 Homology of configuration spaces of hard squares in a rectangle (Alpert, Bauer, Kahle, MacPherson, Spendlove) https://arxiv.org/abs/2010.14480
|
{
"source": [
"https://mathoverflow.net/questions/59563",
"https://mathoverflow.net",
"https://mathoverflow.net/users/8103/"
]
}
|
59,738 |
The title quote is from p.221 of the 2010 book, The Shape of Inner Space: String Theory and the Geometry of the Universe's Hidden Dimensions by Shing-Tung Yau and Steve Nadis. "Nash's theorem" here refers to the Nash embedding theorem
(discussed in an earlier MO question: " Nash embedding theorem for 2D manifolds ").
I would appreciate any pointer to literature that explains where and why embedding fails.
This is well-known in the right circles, but I am having difficulty locating sources. Thanks!
|
The failure is actually more profound than you might guess at first glance: There are conformal metrics on the Poincare disk that cannot (even locally) be isometrically induced by embedding in $\mathbb{C}^n$ by any holomorphic mapping. For example, there is no complex curve in $\mathbb{C}^n$ for which the induced metric has either curvature that is positive somewhere or constant negative curvature. You can get around the positivity problem by looking for complex curves in $\mathbb{P}^n$ (with the Fubini-Study metric, say), but even there, there are no complex curves with constant negative curvature. More generally, for any Kahler metric $g$ on an $n$-dimensional complex manifold $M$, there always exist many metrics on the Poincare disk that cannot be isometrically induced on the disk via a holomorphic embedding into $M$. This should not be surprising if you are willing to be a little heuristic: Holomorphic mappings of a disk into an $n$-dimensional complex manifold essentially depend on choosing $n$ holomorphic functions of one complex variable and each such holomorphic function essentially depends on choosing two (analytic) real functions of a single real variable. However, the conformal metrics on the disk depend essentially on one (positive) smooth function of two variables, which is too much `generality' for any finite number of functions of a single variable to provide.
|
{
"source": [
"https://mathoverflow.net/questions/59738",
"https://mathoverflow.net",
"https://mathoverflow.net/users/6094/"
]
}
|
59,741 |
The title says it all ... Obviously, any such triple must be of the form
$(4a+1,4a+2,4a+3)$ where $a$ is an integer. Has this problem
already been studied before? The result would follow from Dickson's
conjecture on prime patterns, which implies that there are
infinitely many integers $b$ such that $4(9b)+1,2(9b)+1$ and $4(3b)+1$ are all prime
(take $a=9b$). A related question: Question on consecutive integers with similar prime factorizations
|
To expand on the answer in my comment, the proportion of integers $a$ for which $4a+1,4a+2,4a+3$ are all squarefree is $\prod_{p\not=2}(1-3/p^2)$, with the product taken over all odd primes $p$. As this product converges to a positive limit, there are infinitely many such $a$. A quick heuristic is to look modulo $p^2$. For odd prime $p$, precisely $p^2-3$ of the possible $p^2$ values of $a$ mod $p^2$ lead to $4a+1,4a+2,4a+3$ all being nonzero mod $p^2$, so this has probability $1-3/p^2$. Independence of mod $p^2$ arithmetic as $p$ runs through the primes suggests the claimed limit. More precisely, if $\phi(n)$ is the number of positive integers $a\le n$ with $4a+1,4a+2,4a+3$ squarefree then
$$
\frac{\phi(n)}{n}\to\prod_{p\not=2}(1-3/p^2).\qquad\qquad{\rm(1)}
$$
It's not too hard to turn this heuristic into a rigorous argument. If we let $\phi_N(n)$ denote the number of $a\le n$ such that none of $4a+1,4a+2,4a+3$ is a multiple of $p^2$ for a prime $p < N$, then the Chinese remainder theorem says that we get equality
$$
\frac{\phi_N(n)}{n}=\prod_{\substack{p\not=2,\\ p < N}}(1-3/p^2).\qquad\qquad{\rm(2)}
$$
whenever $n$ is a multiple of $\prod_{\substack{p\not=2,\\ p < N}}p^2$ and, therefore, the error in (2) is of order $1/n$ for arbitrary $n$. It only needs to be shown that ignoring primes $p\ge N$ leads to an error which is vanishingly small as $N$ is made large. In fact, the number of $a \le n$ which are a multiple of $p^2$ is $\left\lfloor\frac{n}{p^2}\right\rfloor\le \frac{n}{p^2}$. The number of $a\le n$ which are a multiple of $p^2$ for some prime $p\ge N$ is bounded by $n\sum_{p\ge N}p^{-2}$. So, the proportion of $a\le n$ for which one of $4a+1,4a+2,4a+3$ is a multiple of $p^2$ for an odd prime $p\ge N$ is bounded by $3\frac{4n+3}{n}\sum_{p\ge N}p^{-2}\sim3(4+3/n)/(N\log N)$. This means that $\phi_N(n)/n\to\phi(n)/n$ uniformly in $n$ as $N\to\infty$, and the limit (1) follows from approximating by $\phi_N$.
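For what it's worth, here is a quick numerical illustration of the limit (1) (my addition, not part of the original answer; the cutoff $n = 10^6$ and the prime bound $10^5$ are arbitrary choices):

```python
def squarefree_flags(limit):
    """Boolean list sf with sf[m] True iff m is squarefree, for 0 <= m <= limit."""
    sf = [True] * (limit + 1)
    d = 2
    while d * d <= limit:
        for m in range(d * d, limit + 1, d * d):
            sf[m] = False
        d += 1
    return sf

def odd_primes(limit):
    sieve = [True] * (limit + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(limit ** 0.5) + 1):
        if sieve[p]:
            for m in range(p * p, limit + 1, p):
                sieve[m] = False
    return [p for p in range(3, limit + 1) if sieve[p]]

n = 10 ** 6
sf = squarefree_flags(4 * n + 3)
count = sum(1 for a in range(1, n + 1) if sf[4*a+1] and sf[4*a+2] and sf[4*a+3])

product = 1.0
for p in odd_primes(10 ** 5):
    product *= 1.0 - 3.0 / (p * p)

print("empirical density of good a up to 10^6 :", count / n)
print("truncated Euler product over odd primes:", product)
# the two numbers should be close (the empirical count fluctuates a little)
```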
|
{
"source": [
"https://mathoverflow.net/questions/59741",
"https://mathoverflow.net",
"https://mathoverflow.net/users/2389/"
]
}
|
59,770 |
A few years ago, I found on arXiv an article whose authors (I think there were at least two of them) claimed to have proven that the non-trivial zeros of the Riemann zeta function were all simple, using the concept of Riemann surfaces. But unfortunately, I just can't find it again. Does someone know if such a result has been published and widely accepted by the mathematical community?
Thank you in advance.
|
This is wide open. Moreover, I think we will prove the Riemann Hypothesis much earlier than the simplicity of the zeros (if true). The latter is somehow much more accidental, the only reasonable argument I know in favor of it is "why would two zeros ever coincide"? Note, however, that some automorphic $L$-functions do have multiple zeros. If I recall correctly, even a Dedekind $L$-function can have a multiple zero at the center.
|
{
"source": [
"https://mathoverflow.net/questions/59770",
"https://mathoverflow.net",
"https://mathoverflow.net/users/13625/"
]
}
|
59,918 |
Let $\|\cdot\|_F$ and $\|\cdot\|_2$ be the Frobenius norm and the spectral norm, respectively. I'm reading Ji-Guang Sun's paper 'Perturbation Bounds for the Cholesky and QR Factorizations' from BIT 31 in 1991.
While deriving the perturbation bound on the Cholesky factorization $A=LL^T$ with $A\in\mathbb{R}^{n\times k}$ of rank $k$, he begins with the perturbed matrix $A+E$ and its Cholesky factorization: $$A+E=(L+G)(L+G)^T.$$ Expanding the RHS and subtracting $A$ from both sides, we have $E = GL^T + LG^T + GG^T$. Then he claims in (2.12) of his paper that $$\|E\|_F\leq 2\|L\|_2\|G\|_F + \|G\|_F^2\,.$$ If I read $\|L\|_F$ instead of $\|L\|_2$, everything seems perfectly normal to me by the following three properties of any matrix norm: (i) $\|X^T\|=\|X\|$ for any matrix norm; (ii) subadditivity: $\|X+Y\|_p\leq\|X\|_p + \|Y\|_p$; (iii) submultiplicativity: $\|XY\|_p\leq \|X\|_p\cdot\|Y\|_p$, for $p=2$ or $F$. But I'm not sure why $\|L\|$ is a spectral norm in this equation even though all other norms are Frobenius norms. Does $\|XY\|_F\leq \|X\|_2\|Y\|_F$ always hold?
|
This inequality is true. But let me first make a comment. When you say that any matrix norm is submultiplicative ($\|XY\|\le\|X\|\cdot\|Y\|$), you implicitly assume that a matrix norm over $M_n(\mathbb C)$ is subordinated to a norm of $\mathbb C^n$:
$$\|A\|:=\sup_{x\ne0}\frac{\|Ax\|}{\|x\|}.$$
But the Frobenius norm is not subordinated, for instance because $\|I_n\|_F=\sqrt{n}$, whereas $\|I_n\|=1$ for any subordinated norm. The reason for which the Frobenius norm is submultiplicative is therefore specific; it is more or less a consequence of the Cauchy-Schwarz inequality. Now, back to your question. Both norms are unitarily invariant, in the sense that $\|UAV\|=\|A\|$ whenever $U$ and $V$ are unitary matrices. Therefore we may assume that $A$ is diagonal, with diagonal entries $a_1,\ldots,a_n$. Now, we have $\|A\|_2=\max_i|a_i|$ and therefore
$$\|AB\|_F^2 = \sum |a_i b_{ij}|^2\le \|A\|_2^2 \sum |b_{ij}|^2 =\|A\|_2^2\|B\|_F^2.$$
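As a quick numerical sanity check of the inequality just proved (my addition; the matrix sizes, seed, and number of trials are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
for _ in range(1000):
    X = rng.standard_normal((6, 6))
    Y = rng.standard_normal((6, 4))
    lhs = np.linalg.norm(X @ Y, "fro")               # ||XY||_F
    rhs = np.linalg.norm(X, 2) * np.linalg.norm(Y, "fro")  # ||X||_2 * ||Y||_F
    assert lhs <= rhs + 1e-12
print("||XY||_F <= ||X||_2 ||Y||_F held in all trials")
```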
|
{
"source": [
"https://mathoverflow.net/questions/59918",
"https://mathoverflow.net",
"https://mathoverflow.net/users/11361/"
]
}
|
59,939 |
The standard version of this puzzle is as follows: you have $1000$ bottles of wine, one of which is poisoned. You also have a supply of rats (say). You want to determine which bottle is the poisoned one by feeding the wines to the rats. The poisoned wine takes exactly one hour to work and is undetectable before then. How many rats are necessary to find the poisoned bottle in one hour? It is not hard to see that the answer is $10$. Way back when math.SE first started, a generalization was considered where more than one bottle of wine is poisoned. The strategy that works for the standard version fails for this version, and I could only find a solution for the case of $2$ poisoned bottles that requires $65$ rats. Asymptotically my solution requires $O(\log^2 N)$ rats to detect $2$ poisoned bottles out of $N$ bottles. Can anyone do better asymptotically and/or prove that their answer is optimal and/or find a solution that works for more poisoned bottles? The number of poisoned bottles, I guess, should be kept constant while the total number of bottles is allowed to become large for asymptotic estimates.
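For reference, here is a minimal sketch (mine, not part of the original puzzle statement) of the standard binary-indexing strategy behind the answer $10$: bottle $i$ is tasted by exactly those rats whose bit is set in $i$, so after an hour the set of dead rats spells out the poisoned bottle's index in binary.

```python
import random

NUM_BOTTLES = 1000
NUM_RATS = 10  # 2**10 = 1024 >= 1000

def dead_rats(poisoned):
    """Rat r dies iff it tasted the poisoned bottle, i.e. bit r of `poisoned` is 1."""
    return {r for r in range(NUM_RATS) if (poisoned >> r) & 1}

def identify(dead):
    """Reassemble the bottle index from the set of dead rats."""
    return sum(1 << r for r in dead)

poisoned = random.randrange(NUM_BOTTLES)
assert identify(dead_rats(poisoned)) == poisoned
print("recovered bottle", identify(dead_rats(poisoned)))
```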
|
Each bottle of wine corresponds to the set of rats who tasted it. Let $\mathcal{F}$ be the family of the resulting sets. If bottles corresponding to sets $A$ and $B$ are poisoned then
$A \cup B$ is the set of dead rats. Therefore we can identify the poisoned bottles as long as for all $A,B,C,D \in \mathcal{F}$ such that $A \cup B = C \cup D$ we have $\{ A, B \} = \{ C, D \}$. Families with this property are called (strongly) union-free and the maximum possible size $f(n)$ of a union-free family $\mathcal{F} \subset 2^{[n]}$ has been studied in extremal combinatorics. In the question context, $f(n)$ is the maximum number of bottles of wine which can be tested by $n$ rats. In the paper "Union-free Hypergraphs and Probability Theory" Frankl and Füredi show that
$$2^{(n-3)/4} \leq f(n) \leq 2^{(n+1)/2}.$$
The proof of the lower bound is algebraic, constructive, and, I think, very elegant.
In particular, one can find $2$ poisoned bottles out of $1000$ with $43$ rats (since $2^{(43-3)/4}=2^{10}=1024\ge 1000$).
|
{
"source": [
"https://mathoverflow.net/questions/59939",
"https://mathoverflow.net",
"https://mathoverflow.net/users/290/"
]
}
|
59,978 |
Does anyone know if there is a classification of the subgroups of the real numbers taken under addition? If not, can anyone point me in the direction of any papers/materials which discuss properties of or interesting facts about these subgroups?
|
If you would like to classify the subgroups in the sense of Lebesgue measure, you may find the following facts helpful. (1) Any measurable proper subgroup of the real line is of measure $0$. (2) Any non-measurable subgroup $G$ of the real line charges fully everywhere, i.e., for any interval $I$, $m^{\ast}(G \cap I)=|I|$, where $m^{\ast}(\cdot)$ denotes the outer Lebesgue measure. (3) Non-measurable subgroups of the real line exist.
|
{
"source": [
"https://mathoverflow.net/questions/59978",
"https://mathoverflow.net",
"https://mathoverflow.net/users/13997/"
]
}
|
60,108 |
I am curious if the setup for (co)homology theory appears outside the realm of pure mathematics. The idea of a family of groups linked by a series of arrows such that the composition of consecutive arrows is zero seems like a fairly general notion, but I have not come across it in fields like biology, economics, etc. Are there examples of non-trivial (co)homology appearing outside of pure mathematics? I think Hatcher has a couple illustrations of homology in his textbook involving electric circuits. This is the type of thing I'm looking for, but it still feels like topology since it is about closed loops. Since the relation $d^2=0$ seems so simple to state, I would imagine this setup to be ubiquitous. Is it? And if not, why is it so special to topology and related fields?
|
Robert Ghrist is all about applied topology: Sensor Network, Signal Processing, and Fluid Dynamics. (homepage: http://www.math.upenn.edu/~ghrist/index.html ). For instance, we want to use the least number of sensors to cover a certain area, such that when we remove one sensor, a part of that area is undetectable. We can form a complex of these sensors and hence its nerve, and use homology to determine whether there are any gaps in the sensor-collection. I've met with him in person and he expressed confidence that this is going to be a big thing of the future. There are also applications of cohomology to Crystallography (see Howard Hiller) and Quasicrystals in physics (see Benji Fisher and David Rabson). In particular, it uses cohomology in connection with Fourier space to reformulate the language of quasicrystals/physics in terms of cohomology... Extinctions in x-ray diffraction patterns and degeneracy of electronic levels are interpreted as physical manifestations of nonzero homology classes. Another application is on fermion lattices ( http://arxiv.org/abs/0804.0174v2 ), using homology combinatorially. We want to see how fermions can align themselves in a lattice, noting that by the Pauli Exclusion principle we cannot put a bunch of fermions next to each other. Homology is defined on the patterns of fermion-distributions.
|
{
"source": [
"https://mathoverflow.net/questions/60108",
"https://mathoverflow.net",
"https://mathoverflow.net/users/10930/"
]
}
|
60,113 |
Where can I find the proof of the following fact: If $M$ is a contractible manifold of dimension $n\ge 5$, then the direct product of $M$ and $\mathbb{R}^{n+1}$ is homeomorphic to $\mathbb{R}^{2n+1}$?
|
|
{
"source": [
"https://mathoverflow.net/questions/60113",
"https://mathoverflow.net",
"https://mathoverflow.net/users/-1/"
]
}
|
60,160 |
This is not my area of research, but I am curious. Let $G=\left< X|R \right>$ be a finitely presented group, where $X$ and $R$ are finite. There are many questions which are undecidable for all such $G$, for example whether $G$ is trivial or whether a particular word is trivial in $G$. Is there any non-trivial question (by trivial I mean that the answer is always yes or always no) which is decidable? For instance, is there a class $S$ (non-empty and not equal to all the finitely presented groups) such that you can always decide whether $G$ is in this class?
|
The problem whether $G$ is perfect, that is $G=[G,G]$, is decidable, because you need to abelianize all relations (replace the operation by "+" in every relation) and solve a system of linear equations over $\mathbb Z$. For example, if the defining relations are $xy^{-1}xxy^5x^{-8}=1, x^{-3}y^{-2}xyx^5 = 1$, then the Abelianization gives $-5x+4y=0, 3x-y=0$. Now you need to check if these two relations "kill" $\mathbb{Z^2}$. That means for some integers $a,b,c,d$ we should have $x = a(-5x+4y)+b(3x-y)$, $y = c(-5x+4y)+d(3x-y)$. This gives 4 integer equations with four unknowns: $1=-5a+3b$, $0=4a-b$, $1=4c-d$, $0=-5c+3d$. This system does not have an integer solution (it implies $7a=1$), so $G\ne [G,G]$.
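To illustrate the procedure concretely, here is a small Python sketch (my own addition; the encoding of relators as (generator, exponent) pairs is just for illustration) that abelianizes the two relators above and checks whether the resulting integer matrix presents the trivial abelian group. For a square relation matrix this reduces to asking whether the determinant is $\pm 1$; in general one would compute the Smith normal form.

```python
def abelianize(relator, num_gens):
    """Sum the exponents of each generator in a word given as (generator, exponent) pairs."""
    row = [0] * num_gens
    for gen, exp in relator:
        row[gen] += exp
    return row

# x = generator 0, y = generator 1
r1 = [(0, 1), (1, -1), (0, 1), (0, 1), (1, 5), (0, -8)]   # x y^-1 x x y^5 x^-8
r2 = [(0, -3), (1, -2), (0, 1), (1, 1), (0, 5)]           # x^-3 y^-2 x y x^5

M = [abelianize(r, 2) for r in (r1, r2)]
print(M)  # [[-5, 4], [3, -1]], matching the relations -5x+4y=0 and 3x-y=0

# For a square relation matrix, G/[G,G] is trivial iff |det M| = 1;
# here |det M| = 7, so the abelianization is Z/7 and G is not perfect.
det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
print("abelianization is trivial?", abs(det) == 1)  # False
```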
|
{
"source": [
"https://mathoverflow.net/questions/60160",
"https://mathoverflow.net",
"https://mathoverflow.net/users/5034/"
]
}
|
60,201 |
What are good ways to think about Lagrangian submanifolds? Why should one care about them? More generally: same questions about (co)isotropic ones.
Answers from a classical mechanics point of view would be especially welcome.
|
Lagrangian submanifolds arise naturally in Hamiltonian Mechanics, because of the classical Arnold-Liouville theorem. Let me state it here: Theorem (Arnold-Liouville). Let $(M, \omega, H)$ be an integrable system of dimension $2n$ with integrals of motion $f_1=H$, $f_2, \ldots, f_n$. Let $c \in \mathbb{R}^n$ be a regular value of $f:=(f_1, \ldots, f_n)$. Then the corresponding level $f^{-1}(c)$ is a Lagrangian submanifold of $M$. Geometrically this means that, locally around the regular value $c$, the map $f \colon M \to \mathbb{R}^n$ collecting the integrals of motion is a Lagrangian fibration, i.e. it is locally trivial and the fibres are Lagrangian submanifolds. Furthermore, one also shows that the connected components of $f^{-1}(c)$ are of the form $\mathbb{R}^{n-k} \times \mathbb{T}^k$, where $0 \leq k \leq n$ and $\mathbb{T}^k$ is a $k$-dimensional torus. In particular, every compact component must be a Lagrangian torus. For a proof of this result, see for instance the book by Ana Cannas da Silva, "Lectures on Symplectic Geometry".
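A minimal illustration (my addition, not part of the original answer): take $M = \mathbb{R}^2$ with $\omega = dp \wedge dq$ and the harmonic oscillator $H = \frac{1}{2}(p^2+q^2)$, an integrable system with $n=1$ and $f_1 = H$. Every value $c>0$ is regular, and the level set $H^{-1}(c)$ is the circle $p^2+q^2=2c$: a compact, connected Lagrangian submanifold, i.e. a torus $\mathbb{T}^1$, exactly as the theorem predicts, and the Hamiltonian flow simply rotates points along these circles.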
|
{
"source": [
"https://mathoverflow.net/questions/60201",
"https://mathoverflow.net",
"https://mathoverflow.net/users/2837/"
]
}
|
60,375 |
The other day, I was idly considering when a topological space has a square root. That is, what spaces are homeomorphic to $X \times X$ for some space $X$. $\mathbb{R}$ is not such a space: If $X \times X$ were homeomorphic to $\mathbb{R}$, then $X$ would be path connected. But then $X \times X$ minus a point would also be path connected. But $\mathbb{R}$ minus a point is not path connected. A next natural space to consider is $\mathbb{R}^3$. My intuition is that $\mathbb{R}^3$ also doesn't have a square root. And I'm guessing there's a nice algebraic topology proof. But that's not technology I'm much practiced with. And I don't trust my intuition too much for questions like this. So, is there a space $X$ so that $X \times X$ is homeomorphic to $\mathbb{R}^3$?
|
No such space exists. Even better, let's generalize your proof by converting information about path components into homology groups. For an open inclusion of spaces $X \setminus \{x\} \subset X$ and a field $k$, we have isomorphisms (the relative Kunneth formula)
$$
H_n(X \times X, X \times X \setminus \{(x,x)\}; k) \cong \bigoplus_{p+q=n} H_p(X,X \setminus \{x\};k) \otimes_k H_q(X, X \setminus \{x\};k).
$$
If the product is $\mathbb{R}^3$, then the left-hand side is $k$ in degree 3 and zero otherwise, so something on the right-hand side must be nontrivial. However, if $H_p(X, X \setminus \{x\};k)$ were nontrivial in degree $n$, then the left-hand side must be nontrivial in degree $2n$. Since the left-hand side is concentrated in the odd degree $3$, it vanishes in every even degree $2n$, so the groups $H_n(X, X \setminus \{x\};k)$ would all have to vanish, contradicting the fact that something on the right-hand side must be nontrivial. Hence no such $X$ exists.
|
{
"source": [
"https://mathoverflow.net/questions/60375",
"https://mathoverflow.net",
"https://mathoverflow.net/users/27/"
]
}
|
60,376 |
Even if the answer is no, I am interested in a more specific question. Let $\Sigma$ be a set of operations of finite arity, $E$ be a set of equations over $\Sigma$ and $\mathcal{A}(\Sigma,E)$ be the respective category of algebras and algebra morphisms. Also denote the free algebra functor by $F: \mathsf{Set} \to \mathcal{A}(\Sigma,E)$. If $f : A \to B$ is a monomorphism in such a category i.e. an injective algebra morphism, and also $X$ is set, does it follow that $FX + f : FX + A \to FX + B$ is also injective? Any help much appreciated.
|
|
{
"source": [
"https://mathoverflow.net/questions/60376",
"https://mathoverflow.net",
"https://mathoverflow.net/users/5152/"
]
}
|
60,457 |
Imagine yourself in front of a class of very good undergraduates
who plan to do mathematics (professionally) in the future.
You have 30 minutes, and after that you will not see these students again.
You need to present a theorem which will be 100% useful for them. What would you do? One theorem per answer please. Try to be realistic. For example: 30 min is more than enough to introduce metric spaces,
prove the existence of partitions of unity,
and explain how it can be used later. P.S. Many of you criticized the vague formulation of the question. I agree. I was trying to make it short --- I do not read questions if they are longer than half a page. Still I think it is a good approximation to what I really wanted to ask. Here is another formulation of the same question, but it might be even more vague. Before, I liked jewelry-type theorems; those I can put in my pocket and look at when I want to.
Now I like tool-type theorems; those which can be used to dig a hole or build a wall.
It turns out that some theorems are jewelry-type and tool-type at the same time.
I know a few and I want to know more.
|
The Banach fixed point theorem .
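Since the answer is just the name of the theorem, here is a tiny numerical illustration of it (my own addition, not part of the answer): on $[0,1]$ the map $\cos$ is a contraction (its derivative is bounded by $\sin 1 < 1$ there), so iterating it converges to the unique fixed point, the solution of $\cos x = x$.

```python
# Fixed-point iteration for the contraction x -> cos(x) on [0, 1].
from math import cos, sin

x = 0.5
for _ in range(200):
    x = cos(x)

print(x)                 # ~0.7390851332151607, the unique solution of cos(x) = x
print(abs(cos(x) - x))   # residual, essentially at machine precision
print(sin(1))            # a contraction constant: |cos'(t)| = |sin(t)| <= sin(1) < 1 on [0, 1]
```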
|
{
"source": [
"https://mathoverflow.net/questions/60457",
"https://mathoverflow.net",
"https://mathoverflow.net/users/1441/"
]
}
|
60,478 |
In Prospects in Mathematics (AM-70) , Hirzebruch gives a nice discussion of why the formal power series $f(x) = 1 + b_1 x + b_2 x^2 + \dots$ defining the Todd class must be what it is. In particular, the key relation $f(x)$ must satisfy is that ($\star$) the coefficient of $x^n$ in $(f(x))^{n+1}$ is 1 for all $n$. As Hirzebruch observes, there is only one power series with constant term 1 satisfying that requirement, namely
$$f(x) = \frac{x}{1-e^{-x}} = 1 + \frac{x}{2}+\sum_{k\geq 2}{B_{k}\frac{x^{k}}{k!}} = 1 + \frac{x}{2} + \frac{1}{6}\frac{x^2}{2} - \frac{1}{30}\frac{x^4}{24} + \dots,$$
where the $B_k$ are the Bernoulli numbers . The only approach I see to reach this conclusion is: Use ($\star$) to find the first several terms: $b_1 = 1/2, b_2 = 1/12, b_3 = 0, b_4 = -1/720$. Notice that they look suspiciously like the coefficients in the exponential generating function for the Bernoulli numbers, so guess that $f(x) = \frac{x}{1-e^{-x}}$. Do a residue calculation to check that this guess does satisfy ($\star$). My question is whether anyone knows of a less guess-and-check way to deduce from ($\star$) that $f(x) = \frac{x}{1-e^{-x}}$.
|
Since you mention playing around with residues, I'm probably not telling you anything you don't already know. But there is a systematic way to extract the power series $f$ from
the coefficients of $x^{n-1}$ in $f(x)^{n}$, which goes by the name of the Lagrange inversion formula. Assume that the constant term of $f$ is invertible, and define $g(x) = \frac{x}{f(x)}$.
Then $g(x)$ is a power series which has a compositional inverse. Denote this inverse by $h$, so that if $y = g(x)$ then $x = h(y)$. Write $h(y) = c_1 y + c_2 y^2 + c_3 y^3 + \cdots$.
For every integer $n$, the product $n c_n$ is the residue of the differential
$\frac{1}{y^n} h'(y) dy = \frac{1}{g(x)^{n}} dx = \frac{ f(x)^{n} }{x^n} dx$, which is the coefficient of $x^{n-1}$ in $f(x)^{n}$. In your example, you get $c_n = \frac{1}{n}$, so that $h(y) = y + \frac{y^2}{2} + \frac{y^3}{3} + \cdots = - \log(1-y)$. Then $g(x) = 1 - e^{-x}$, so that
$f(x) = \frac{x}{1-e^{-x}}$.
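A quick symbolic sanity check of ($\star$) and of the resulting series (my own addition, using SymPy; it is not part of the question or the answer). The coefficient of $x^n$ in the truncated power of $f$ is exact as long as each factor is expanded past degree $n$.

```python
# Check that the coefficient of x^n in (x/(1 - e^{-x}))^{n+1} equals 1 for small n.
import sympy as sp

x = sp.symbols('x')
f = x / (1 - sp.exp(-x))

N = 8
f_trunc = sp.series(f, x, 0, N + 1).removeO()      # power series of f through x^N

print(sp.series(f, x, 0, 6))                        # 1 + x/2 + x**2/12 - x**4/720 + ...
for n in range(N):
    coeff = sp.expand(f_trunc ** (n + 1)).coeff(x, n)
    print(n, coeff)                                 # prints 1 for every n
```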
|
{
"source": [
"https://mathoverflow.net/questions/60478",
"https://mathoverflow.net",
"https://mathoverflow.net/users/6005/"
]
}
|
60,550 |
What is a topological feature, that a (some) TQFT (e.g. in 3 or 4 dim) sees but homology/cohomology/homotopy groups don't? Or: what is an example where using classical theories is hard, but using a TQFT is comparatively easy?
|
All the answers so far have focused on 3 dimensions, but the answer is much more striking in 4 dimensions. Freedman's theorem tells you that classical homology invariants give you complete information about topological, simply-connected 4-manifolds. These classical invariants cannot, however, distinguish between distinct smooth structures on the same topological 4-manifold, and essentially our only technique for distinguishing smooth 4-manifolds is Donaldson's invariant or the Seiberg-Witten invariant or their relatives. These do not quite form a TQFT, but are related to TQFTs. Edit: On request, a little about how the 4-manifold invariants are related to a TQFT. This is all nicely explained in the beginning of Kronheimer and Mrowka's book Monopoles and 3-manifolds . There are actually three different theories, denoted $\widehat{\mathit{HM}}$ ("HM-from"), $\check{\mathit{HM}}$ ("HM-to", unfortunately typeset badly here), and $\overline{\mathit{HM}}$. All are close to satisfying axioms for a TQFT assigning a vector space to a 3-manifold and maps to a 4-manifold, at least for connected manifolds. (The vector spaces are infinite dimensional, but finite in each graded piece.) Unfortunately, however you slice it, in each case the invariant associated to a closed 4-manifold in the usual TQFT way (when defined) is zero. Instead, you use the fact that there is an exact triangle
$$
\cdots \longrightarrow \widehat{\mathit{HM}} \longrightarrow \overline{\mathit{HM}} \longrightarrow \check{\mathit{HM}}\longrightarrow \cdots
$$
(with right mapping to left), and the map $\overline{\mathit{HM}}(W)$ is $0$ for $b_2^+(W) \ge 1$. If you have a 4-manifold $W$ with $b_2^+(W) \ge 2$, you factor it as two cobordisms $W = W_1 \cup_Y W_2$ for some 3-manifold $Y$, with $b_2^+(W_i) \ge 1$. Then the properties above let you map from $\check{\mathit{HM}}(S^3)$, to $\check{\mathit{HM}}(Y)$, backwards in the exact triangle to $\widehat{\mathit{HM}}(Y)$, and then forwards to $\widehat{\mathit{HM}}(S^3)$. The resulting map (from $\check{\mathit{HM}}(S^3)$ to $\widehat{\mathit{HM}}(S^3)$) gives the interesting Seiberg-Witten invariants of $W$.
|
{
"source": [
"https://mathoverflow.net/questions/60550",
"https://mathoverflow.net",
"https://mathoverflow.net/users/14123/"
]
}
|
60,598 |
I have searched for such a question and didn't find it. I recently had a presentation in which I introduced $p$-Sylow subgroups and proved Sylow's theorems. I will have another one soon, concerning applications of Sylow's theorem. My question is: Are there any spectacular applications of Sylow's theorem in group theory and other fields of mathematics (which are of course related to groups)?
|
If you are introducing Sylow subgroups and the Sylow theorems, then your audience likely does not have an extensive mathematical background (otherwise I imagine they would have seen the Sylow theorems at some point in their studies, at least in North America and Western Europe). When I taught the Sylow theorems in an undergraduate abstract algebra class, I applied them to show converses of two basic properties of cyclic groups: (1) If a finite group has at most one subgroup of each size then the group is cyclic. [Edit: There is a proof of this in the comments below which bypasses the Sylow theorems.] (2) If a finite group has the property that for each positive integer $n$ the equation $x^n = 1$ has at most $n$ solutions in the group, then the group is cyclic. In both proofs, you use the existence of $p$ -Sylow subgroups to reduce yourself to the case of finite $p$ -groups, and that case is then settled by other techniques (not using the Sylow theorems). Proofs of the above, together with other applications of the Sylow theorems can be found in my notes at https://kconrad.math.uconn.edu/blurbs/grouptheory/sylowmore.pdf .
Of course these are not "spectacular" applications, but I think it's cute that you can use the existence of Sylow subgroups to show either of those features of finite cyclic groups really characterize cyclic groups among all finite groups. In a more advanced direction, Sylow subgroups are used to prove theorems about the cohomology of general finite groups. See Chapter IX of Serre's Local Fields (e.g., Theorems 12 and 13). This application is perhaps too much for your audience. The Schur-Zassenhaus theorem about finite groups (see https://en.wikipedia.org/wiki/Schur%E2%80%93Zassenhaus_theorem for the statement) is proved using the Sylow theorems -- and not just the existence part of the theorems -- along with other techniques. I wrote up the simpler aspects of the proof in https://kconrad.math.uconn.edu/blurbs/grouptheory/schurzass.pdf . A basic result about finite group actions is the Frattini argument: if a finite group $G$ acts on a finite set $X$ and a subgroup $H$ of $G$ acts transitively on $X$ then for every $x$ in $X$ we have $G = HS_x = S_xH$ , where $S_x$ is the stabilizer of $x$ in $G$ . As an example of this, fix a prime $p$ and a finite group $G$ . Let $K$ be a normal subgroup of $G$ and use for $X$ the set of all $p$ -Sylow subgroups of $K$ (not of $G$ !), which is a set on which $G$ acts by conjugation since $K$ is normal in $G$ . That $K$ acts transitively on $X$ is a special case of the conjugacy part of the Sylow theorems (for the group $K$ ). Then Frattini's argument tells us that for any $p$ -Sylow subgroup $P$ of $K$ , we have $G = KN_G(P)$ , where $N_G(P)$ is the normalizer subgroup of $P$ in $G$ , since $N_G(P)$ is the stabilizer of the "point" $P$ in the conjugation action of $G$ on $X$ . This special case of the Frattini argument (which I think was the original version of Frattini himself)
can be used to show the equivalence of several different characterizations of finite nilpotent groups. It might be hard to convince students new to the Sylow theorems that this special case of the Frattini argument is a "spectacular" thing, but you ought to find it in any text on finite groups. Finally, I think it would be good to place some of the basic features of the Sylow theorems in a broader context. I have in mind the following: existence (for any $p$ there is a $p$ -Sylow subgroup), extension (any $p$ -subgroup lies in a $p$ -Sylow subgroup), and conjugacy (any two $p$ -Sylow subgroups are conjugate). These aspects of $p$ -Sylow subgroups for a fixed prime $p$ occur in other classes of groups, such as the maximal tori in connected compact Lie groups or connected linear algebraic groups. In the article "A Lie approach to finite groups" (see https://link.springer.com/chapter/10.1007/BFb0100726 ), Alperin sets out an analogy between Lie groups and finite groups. See the table on page 4. In particular, he notes that Borel subgroups of Lie groups are analogous to normalizers of Sylow subgroups of a finite group.
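Not an application, but a small computational sanity check one could also show such an audience (my own addition, plain Python, not part of the answer): brute-force the Sylow subgroups of $S_4$ and watch the counting theorem come out. There are $3$ subgroups of order $8$ (so $n_2 \mid 3$ and $n_2 \equiv 1 \bmod 2$) and $4$ subgroups of order $3$ (so $n_3 \mid 8$ and $n_3 \equiv 1 \bmod 3$).

```python
# Brute-force the Sylow subgroups of S_4 (permutations of {0,1,2,3} stored as tuples).
from itertools import permutations

def compose(f, g):                          # (f*g)(i) = f(g(i))
    return tuple(f[g[i]] for i in range(len(g)))

def generated(gens):
    """Closure of a finite set of permutations under composition (a subgroup)."""
    elems = {tuple(range(4))}               # start from the identity
    frontier = set(gens)
    while frontier:
        new = set()
        for a in frontier:
            for b in elems | frontier:
                for c in (compose(a, b), compose(b, a)):
                    if c not in elems and c not in frontier:
                        new.add(c)
        elems |= frontier
        frontier = new
    return frozenset(elems)

S4 = [tuple(p) for p in permutations(range(4))]

two_generated = {generated([a, b]) for a in S4 for b in S4}
print(len({H for H in two_generated if len(H) == 8}))   # 3 Sylow 2-subgroups
cyclic = {generated([a]) for a in S4}
print(len({H for H in cyclic if len(H) == 3}))          # 4 Sylow 3-subgroups
```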
|
{
"source": [
"https://mathoverflow.net/questions/60598",
"https://mathoverflow.net",
"https://mathoverflow.net/users/13093/"
]
}
|
60,615 |
Let $\{r_i\}_{i \in \mathbb{N}}$ be a sequence of integers such that, for some $t \in \mathbb{N}$ and all $i \in \mathbb{N}$, we have $r_i = r_{i+t}$. My question: Can $\displaystyle \sum_{i=1}^n \dfrac{r_i}{i}$ converge to $0$ as $n \rightarrow \infty$ for a non-trivial choice of the $r_i$ and $t$? Or does $\displaystyle \sum_{i=1}^\infty \dfrac{r_i}{i} = 0$ imply $r_i = 0$ for all $i \in \mathbb{N}$?
|
Yes . All the $r_i$ must equal $0$ if the period is prime, however. Consider for example $$f(s)=(1-p^{1-s})^2 \zeta(s),$$ which is periodic with period $p^2$, at $s=1$. I should probably expand on this answer a bit. The case where $t$ is prime is an old conjecture of Chowla, which was resolved by Baker, Birch, and Wirsing (all the $r_i=0$ in this case) in the paper I link to in the first word of this answer. They give the Dirichlet series for $f(s)$ above as a counterexample when $t$ is not prime. To see that $f(s)$ has the desired properties, I'll work it out in a bit more detail for $p=2$. Expanding $f$ out as a Dirichlet series gives $$f(s)=\sum_{n=0}^\infty \frac{1}{(4n+1)^s}-\frac{3}{(4n+2)^s}+\frac{1}{(4n+3)^s}+\frac{1}{(4n+4)^s}$$ as Woett remarks in the comments. On the other hand, $(1-2^{1-s})^2$ has a double zero at $s=1$, whereas the zeta function $\zeta(s)$ has a simple pole at $s=1$; so $f(1)=0$. So taking the limit as $s\to 1^+$ gives that the OP's series converges to $f(1)=0$ for $r_1=r_3=r_4=1,~ r_2=-3, ~t=4$, as desired.
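A quick numerical sanity check (my own addition, not part of the answer): the partial sums of $1/1 - 3/2 + 1/3 + 1/4 + 1/5 - 3/6 + \dots$ do drift to $0$.

```python
# Partial sums of the period-4 series with coefficients 1, -3, 1, 1.
def r(m):
    return -3 if m % 4 == 2 else 1

for N in (10**3, 10**5, 10**6):
    print(N, sum(r(m) / m for m in range(1, N + 1)))
# the printed partial sums shrink towards 0 as N grows
```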
|
{
"source": [
"https://mathoverflow.net/questions/60615",
"https://mathoverflow.net",
"https://mathoverflow.net/users/6698/"
]
}
|
60,641 |
I have heard it said more than once—on Wikipedia , for example—that the étale topology on the category of, say, smooth varieties over $\mathbb{C}$, is equivalent to the Euclidean topology. I have not seen a good explanation for this statement, however. If we consider the relatively simple example of $\mathbb{P}^1_\mathbb{C}$, it seems to me that an étale map is just a branched cover by a Riemann surface, together with a Zariski open subset of $\mathbb{P}^1_\mathbb{C}$ that is disjoint from the ramification locus. (If there is a misconception there, small or large, please let me know) The connection to the Euclidean topology on $\mathbb{P}^1_\mathbb{C}$, however, is not obvious to me. What is the correct formulation of the statement that the two topologies are equivalent, or what is a good way to compare them?
|
Saying that the étale topology is equivalent to the euclidean topology is vastly overstating the case. For example, if you compute the cohomology of a complex algebraic variety with coefficients in $\mathbb Q$ in the étale topology, typically you get 0. On the other hand, it is a deep result that the étale cohomology of such a variety with coefficients in a finite abelian group coincides with its cohomology in the euclidean topology. Similarly, you can't capture the whole fundamental group with the étale topology, but only its finite quotients (and the fact that you can indeed describe the finite quotients of the fundamental group via étale covers is, again, a deep result).
|
{
"source": [
"https://mathoverflow.net/questions/60641",
"https://mathoverflow.net",
"https://mathoverflow.net/users/1770/"
]
}
|
60,912 |
This question is about dense sphere packings in euclidean space $\mathbb R^n$. By a sphere packing I understand any arrangement of mutually disjoint solid open spheres in $\mathbb R^n$, all of the same radius. The density of a packing is
$$\mathrm{lim}_{R \to \infty}\frac{\mathrm{vol }(B(0,R) \cap \mathrm{spheres})}{\mathrm{vol } B(0,R)} $$
if it exists. Here, $B(0,R)$ is the open ball of radius $R$ centered at $0 \in \mathbb R^n$. In low dimensions, the highest possible densities of sphere packings are known to be attained by lattice packings, that is, packings such that the centers of the spheres form a discrete subgroup of $\mathbb R^n$ of rank $n$. One could speculate that this is so in all dimensions, but I doubt it very much... Is it true that for some (possibly very large) integer $n$, there is a sphere packing in $\mathbb R^n$ which has a higher density than any lattice packing? Edit -- Note: I didn't mean to ask about an explicit $n$, let alone about explicit packings. So I'm completely satisfied if somebody tells me that there is asymptotically such and such upper bound for lattice packing densities and this and that lower bound for general densest sphere packing densities.
|
In ten dimensions the best packing known is the Best packing, which is not a lattice packing. Marc Best found a nonlinear $40$-element binary code of block length $10$ and minimal Hamming distance $4$, and one can turn it into a sphere packing in $\mathbb{R}^{10}$ by centering spheres at all the points in $\mathbb{Z}^{10}$ that reduce to it modulo $2$. This packing seems to be better than any lattice packing, but no proof is known. The best lattice packings up through $\mathbb{R}^8$ were determined by the 1930's, but even $\mathbb{R}^9$ isn't known, let alone $\mathbb{R}^{10}$, and there aren't even good enough bounds to prove that nothing is as good as the Best packing. For some reason, good non-lattice packings are more likely to be known in even dimensions than odd dimensions, at least for dimensions a little less than $24$. For example, (hypothetical) answers are known in $\mathbb{R}^{18}$, $\mathbb{R}^{20}$, and $\mathbb{R}^{22}$, but not in between. I imagine this is an artifact of coding-theory-based constructions. Probably lattices are suboptimal in all sufficiently high dimensions, but nobody really understands how to think about this problem asymptotically. The best existence results in high dimensions all produce lattices, but that's presumably just because lattices are more tractable than non-lattice packings.
|
{
"source": [
"https://mathoverflow.net/questions/60912",
"https://mathoverflow.net",
"https://mathoverflow.net/users/5952/"
]
}
|
60,925 |
(This problem appeared when I was trying to generalize my theory of (binary) funcoids to the theory of $n$-ary funcoids (I call them "multifuncoids") for arbitrary $n$.) Let $I$ be some indexing set. By filters I will mean (not necessarily proper) filters on some fixed set $U$. I will call a multifuncoid an $I$-ary relation $f$ between subsets of $U$ such that for
every $k \in I$, subsets $A$ and $B$ of $U$, and family $L = L_{i \in I \setminus \{ k \}}$ of subsets of $U$ we have
$$ f ( L \cup \{ (k ; A \cup B) \} ) \Leftrightarrow f ( L \cup \{ (k ; A) \} ) \vee f ( L \cup \{ (k ; B) \} ) . $$ for every $k \in I$ and family $L = L_{i \in I}$ we have $L_k=\emptyset \Rightarrow \neg f (L)$. Let $a = a_{i \in I}$ be some family of filters. I will call the funcoidal product $\prod a$ of a family $a = a_{i \in I}$ of filters an $I$-ary relation between subsets of $U$ such that for every
family $R = R_{i \in I}$ of sets we have
$$ \left( \prod a \right) R \Leftrightarrow \forall i \in I \forall A \in a_i
: A \cap R_i \neq \emptyset . $$
It is simple to show that the funcoidal product is a multifuncoid. Conjecture: For every non-empty multifuncoid $f$ there exists a family $a = a_{i \in I}$ of ultrafilters such that $f \supseteq \prod a$. If this conjecture is false, under which additional conditions will it be true? (I know that it is true for a finite set $I$, but am also interested in the infinite case.) Addition:
I think that the following condition may be necessary:
$f(\{(i;A_i\cup B_i) | i\in I\}) \Leftrightarrow f(\{(i;A_i) | i\in I\}) \vee f(\{(i;B_i) | i\in I\})$ for all families $A=A_{i\in I}$ and $B=B_{i\in I}$ of subsets of $U$.
|
For $a$ an $I$-indexed family of filters and $S$ an $I$-indexed family of subsets of $U$ such that $U\smallsetminus S_i\notin a_i$ for every $i\in I$, define the restricted product $\prod^Sa$ by
$$\left(\prod\nolimits^Sa\right)R\Leftrightarrow\left(\prod a\right)R\land\{i\in I:R_i\ne S_i\}\text{ is finite.}$$
This is again a nonempty multifuncoid. Then: Every nonempty multifuncoid $f$ contains a restricted product of ultrafilters. Fix $S$ such that $f(S)$. For every $J\subseteq I$ finite, let $A_J$ be the set of sequences $a$ of ultrafilters such that $S_i\in a_i$ for every $i$, and $f(R)$ holds for every $R$ where $R_i\in a_i$ for $i\in J$, and $R_i=S_i$ for $i\notin J$. Then $A_J$ is closed in $(\beta U)^I$, $A_J\cap A_{J'}\supseteq A_{J\cup J'}$, and $A_J\ne\varnothing$ by the finite case, hence there exists $a\in\bigcap_JA_J$ by compactness of $(\beta U)^I$. Then $f\supseteq\prod^Sa$. The restricted product of an infinite family of ultrafilters does not contain any product of a family of ultrafilters (assuming $U$ has more than one element), thus refuting the original wording of your conjecture. Indeed, if $f=\prod^Sa$ and $f(R)$, then $R_i=S_i$ for all but finitely many $i$, whereas if $g=\prod b$ is a product of a family of ultrafilters, we can for every $i\in I$ fix $R_i\in b_i$ such that $R_i\ne S_i$; then $g(R)$, but not $f(R)$, so $g\nsubseteq f$. Point 1 says that the intuition behind the conjecture is basically sound, but the notion of the product has to be modified to make it really work to take into account that the axioms of multifuncoids only concern local behaviour when a single (or finitely many, by iteration) coordinate is changed, they do not imply anything about what happens when infinitely many coordinates change. Since the proof above refers to the case of finitely many coordinates in a stronger form than what is claimed to hold in the question, I may as well give a self-contained proof of 1. As before, fix $S$ such that $f(S)$. By definition, $S_i\ne\varnothing$ for every $i$. If $a$ is a family of filters such that $S_i\in a_i$ for all $i\in I$, consider a modified product
\begin{align}
\left(\prod\nolimits_ma\right)R&\Leftrightarrow(\forall i\in I)\,R_i\in a_i,\\
\left(\prod\nolimits_m^Sa\right)R&\Leftrightarrow\left(\prod\nolimits_ma\right)R\land\{i\in I:R_i\ne S_i\}\text{ is finite.}
\end{align}
Note that if all $a_i$ are ultrafilters, then $\prod_ma=\prod a$, and $\prod_m^Sa=\prod^Sa$. It thus suffices to find $a$ such that $\prod_m^Sa\subseteq f$, and all $a_i$ are ultrafilters. Let $P$ be the set of all families $a$ of proper filters such that $S_i\in a_i$ for all $i$, and $\prod_m^Sa\subseteq f$. We define a partial order on $P$ by $a\le b$ iff $a_i\subseteq b_i$ for all $i\in I$. It is easy to see from the definition of a multifuncoid that: (*) Whenever $f(R)$, $R_i\subseteq R'_i$ for every $i$, and $R_i=R'_i$ for all but finitely many $i$, then $f(R')$. It follows that $P$ is nonempty, since $a\in P$, where $a_i$ is the filter generated by $S_i$. Since the pointwise union of any chain in $P$ is an element of $P$, Zorn’s lemma implies that there exists a maximal element $a\in P$. I claim that every $a_j$ is an ultrafilter. Assume for contradiction that it is not, and let $X\subseteq U$ be such that $X,U\smallsetminus X\notin a_j$. Define $b$ by $b_i=a_i$ for $i\ne j$, and $b_j$ is the filter generated by $a_j\cup\{X\}$. Since $a< b$, we have $b\notin P$, thus there exists $R$ such that $\neg f(R)$, $R_i=S_i$ for all but finitely many $i$, $R_i\in a_i$ for all $i\ne j$, and $X\cap Y\subseteq R_j$ for some $Y\in a_j$. Symmetrically, there exists $R'$ and $Y'\in a_j$ such that $\neg f(R')$, $R'_i=S_i$ for all but finitely many $i$, $R'_i\in a_i$ for $i\ne j$, and $(U\smallsetminus X)\cap Y'\subseteq R'_j$. Using (*) and the closure of $a_i$ under intersections, we can replace $R_i$ with $R_i\cap R'_i$ for all $i\ne j$, and the same for $R'_i$. Thus, without loss of generality, $R_i=R'_i$ for all $i\ne j$. But then by the definition of a multifuncoid, $\neg f(R'')$, where $R''_i=R_i=R'_i$ for $i\ne j$, and $R''_j=R_j\cup R'_j$. However, $R''_j\supseteq Y\cap Y'\in a_j$, hence $R''\in\prod_m^Sa\subseteq f$, a contradiction. $%%%%$
|
{
"source": [
"https://mathoverflow.net/questions/60925",
"https://mathoverflow.net",
"https://mathoverflow.net/users/4086/"
]
}
|
60,987 |
Given a fiber bundle $S\hookrightarrow M \rightarrow S^1$ with $M$ a 3-manifold (assumed compact, closed, connected and oriented) and $S$ a compact connected surface, it follows from the exact homotopy sequence that $\pi_1(S)\hookrightarrow \pi_1(M)$. Does this imply that the "fiber" of a 3-manifold which fibers over $S^1$ is well defined? The answer should be NO, so I am asking: are there simple examples of 3-manifolds which are the total space of two fiber bundles over $S^1$ with fibers two non-homeomorphic surfaces? EDIT: the answer is NO (see Autumn Kent's answer). I'm just looking for a "practical" example to visualize how this phenomenon can happen.
|
There are simple examples with $M = F \times S^1$ for $F$ a closed surface of genus $2$ or more. Choose a nonseparating simple closed curve $C$ in $F$, then take $n$ fibers $F_1,\cdots,F_n$ of $F\times S^1$, cut these fibers along the torus $T=C\times S^1$, and reglue the resulting cut surfaces so that $F_i$ connects to $F_{i+1}$ when it crosses $T$, with subscripts taken mod $n$. The resulting connected surface is an $n$-sheeted cover of $F$ and is a fiber of a new fibering of $M$ over $S^1$. The monodromy of this fibering is a periodic homeomorphism of the new fiber, of period $n$.
|
{
"source": [
"https://mathoverflow.net/questions/60987",
"https://mathoverflow.net",
"https://mathoverflow.net/users/12952/"
]
}
|
61,252 |
I have seen Bernoulli numbers many times, and sometimes very surprisingly. They appear in my textbook on complex analysis, in algebraic topology, and of course, number theory. Things like the criteria for regular primes, or their appearance in the Todd class, zeta value at even numbers looks really mysterious for me. (I remember in Milnor's notes about characteristic class there is something on homotopy group that has to do with Bernoulli numbers, too, but I don't recall precisely what that is. I think they also arise in higher K-theory.) The list can go on forever. And the wikipedia page of Bernoulli number is already quite long. My question is, why do they arise everywhere? Are they a natural thing to consider? ========================================== p.s.----(maybe this should be asked in a separate question) Also, I've been wondering why it is defined as the taylor coefficient of the particular function $\frac{x}{e^x-1}$, was this function important? e.g. I could have taken the coefficient of the series that defines the L-genus, namely $\dfrac{\sqrt{z}}{\text{tanh}\sqrt{z}}$, which only amounts to change the Bernoulli numbers by some powers of 2 and some factorial. I guess many similar functions will give you the Bernoulli numbers up to some factor. Why it happen to be the function $\frac{x}{e^x-1}$?
|
I don't know of a universal theory of all places where Bernoulli numbers arise, but Euler-Maclaurin summation explains many of their more down-to-earth occurrences. The heuristic explanation (due to Lagrange) is as follows. The first difference operator defined by $\Delta f(n) = f(n+1)-f(n)$ and summation are inverses, in the same sense in which differentiation and integration are inverses. This just amounts to a telescoping series: $\sum_{a \le i < b} \Delta f(i) = f(b) - f(a)$. Now by Taylor's theorem, $f(n+1) = \sum_{k \ge 0} f^{(k)}(n)/k!$ (under suitable hypotheses, of course). If we let $D$ denote the differentation operator defined by $Df = f'$, and $S$ denote the shift operator defined by $Sf(n) = f(n+1)$, then Taylor's theorem tells us that $S = e^D$. Thus, because $\Delta = S-1$, we have $\Delta = e^D - 1$. Now summing amounts to inverting $\Delta$, or equivalently applying $(e^D-1)^{-1}$. If we expand this in terms of powers of $D$, the coefficients are Bernoulli numbers (divided by factorials). Because of the singularity at "$D=0$", the initial term involves antidifferentiation $D^{-1}$, i.e., integration. Thus, we have expanded a sum as an integral plus correction terms involving higher derivatives, with Bernoulli number coefficients. Specifically,
$$
\sum_{a \le i < b} f(i) = \int_a^b f(x) \, dx + \sum_{k \ge 1} \frac{B_k}{k!} (f^{(k-1)}(b) - f^{(k-1)}(a)).
$$
(Subtracting the values at $b$ and $a$ just amounts to the analogue of turning an indefinite integral into a definite integral.) This equation isn't literally true in general: the infinite sum usually won't converge and there's a missing error term. However, it is true when $f$ is a polynomial, and one can bootstrap from this case to the general one using the Peano kernel trick. So from this perspective, the reason why $t/(e^t-1)$ is a natural generating function to consider is that we sometimes want to invert $e^t-1$ (the factor of $t$ is just to make it holomorphic), and the most important reason I know of to invert it is that we want to invert $\Delta = e^D-1$.
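Since the formula is exact for polynomials, it is easy to check symbolically (my own addition, using SymPy; the Bernoulli numbers are read off from $t/(e^t-1)$, so $B_1 = -1/2$, which is the convention the displayed formula uses).

```python
import sympy as sp

x, t = sp.symbols('x t')

# Bernoulli numbers as coefficients of the generating function t/(e^t - 1).
K = 8
gen = sp.series(t / (sp.exp(t) - 1), t, 0, K + 1).removeO()
B = [gen.coeff(t, k) * sp.factorial(k) for k in range(K + 1)]   # B[1] == -1/2

f = x**3                       # any polynomial works; the formula is then exact
a, b = 0, 10
deg = int(sp.degree(f, x))

lhs = sum(f.subs(x, i) for i in range(a, b))                    # sum over a <= i < b
rhs = sp.integrate(f, (x, a, b)) + sum(
    (B[k] / sp.factorial(k)) *
    (sp.diff(f, x, k - 1).subs(x, b) - sp.diff(f, x, k - 1).subs(x, a))
    for k in range(1, deg + 2))

print(lhs, sp.simplify(rhs))   # both equal 2025
```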
|
{
"source": [
"https://mathoverflow.net/questions/61252",
"https://mathoverflow.net",
"https://mathoverflow.net/users/11286/"
]
}
|
61,263 |
Consider the space of newforms $S^{\mathrm{new}}_k(\Gamma_1(q))$ of weight $k$ and level $q$ for the congruence subgroup $\Gamma_1(q)$ of $\mathrm{SL}_2(\mathbb{Z})$; for simplicity's sake, let's assume that $q$ is prime. Then for $k \geq 2$, it is known via Riemann-Roch that
$$\dim S^{\mathrm{new}}_k(\Gamma_1(q)) = \frac{k - 1}{24} (q^2 - 1) + E(q,k)$$
for an error term $E(q,k)$. This error term can be calculated explicitly (though not particularly neatly): see Theorem 13 of http://www.math.ubc.ca/~gerg/papers/downloads/DSCFN.pdf . So for $k \geq 2$, it is certainly possible to determine $\dim S^{\mathrm{new}}_k(\Gamma_1(q))$ precisely. For $k = 1$, on the other hand, no such precise equations seem to exist, as the method used to prove the $k \geq 2$ case breaks down. Instead, it is conjectured (see Conjecture 2.1 of http://arxiv.org/pdf/0906.4579v1 ) that
$$\dim S^{\mathrm{new}}_1(\Gamma_1(q)) = \frac{q - 2}{2} h(K_q) + O_{\varepsilon}(q^{\varepsilon}),$$
for any $\varepsilon > 0$ with the error term is uniform in $q$, and where $h(K_q)$ is the class number of $\mathbb{Q}(\sqrt{-q})$; here the leading term comes from the dihedral modular forms, while the error term is due to the others (icosahedral etc.). Now note that the leading term in the formula for $S^{\mathrm{new}}_k(\Gamma_1(q))$ for $k \geq 2$ vanishes when $k = 1$, so if that formula where to be valid for $k = 1$, we would be left with the error term $E(q,k)$, which we can explicitly compute. Question : Is there a reason why we should not expect $\dim S^{\mathrm{new}}_1(\Gamma_1(q)) = E(q,1)$? Obviously a quick check on Magma or Sage should prove that this is not the case, but unfortunately I don't have either installed. If not, is there any chance that we will one day find a closed form for $\dim S^{\mathrm{new}}_1(\Gamma_1(q))$?
|
The formula for the dimension of $S_k$ when $k \geq 2$ can be thought of as a Riemann--Roch calculation, applied to an appropriately chosen line bundle on the modular curve. The point is that when $k \geq 2$, this line bundle is positive, so positive that the $H^1$-term in Riemann--Roch vanishes. Thus the dimension of the space of cuspforms coincides with the degree of the line bundle ($+ 1 - g$, where $g$ is the genus of the modular curve),
which is linear in $k$. On the other hand, when $k = 1$, the $H^1$ term in the analogous Riemann--Roch formula need not vanish. In fact, what one finds is that the $H^0$ and $H^1$ terms essentially cancel each other, but one gains no information about whether $H^0$ is actually non-zero. It is whatever it is. This is the basic reason that it is hard to give a formula for the dimension of spaces of weight one forms.
|
{
"source": [
"https://mathoverflow.net/questions/61263",
"https://mathoverflow.net",
"https://mathoverflow.net/users/3803/"
]
}
|
61,408 |
For something I'm writing -- I'm interested in examples of bad arguments which involve the application of mathematical theorems in non-mathematical contexts. E.G. folks who make theological arguments based on (what they take to be) Godel's theorem, or Bayesian arguments for creationism. (If necessary I'm willing to extend the net to physics, to include bad applications of the second law of thermodynamics or the Uncertainty Principle, if you know any really amusing ones.)
|
Here are some examples, ranging from the comical to the debatable. Comical : Pretty much any mention of mathematics in Jacques Lacan. To give you an idea, here is a typical passage: This diagram [the Möbius strip] can be considered the basis of a sort of essential inscription at the origin, in the knot which constitutes the subject. This goes much further than you may think at first, because you can search for the sort of surface able to receive such inscriptions. You can perhaps see that the sphere, that old symbol for totality, is unsuitable. A torus, a Klein bottle, a cross-cut surface, are able to receive such a cut. And this diversity is very important as it explains many things about the structure of mental disease . If one can symbolize the subject by this fundamental cut, in the same way one can show that a cut on a torus corresponds to the neurotic subject, and on a cross-cut surface to another sort of mental disease. [Lacan (1970), pp. 192-193] And here's another one: Thus, by calculating that signification according to the algebraic method used here, namely $$\frac{S(\text{Signifier})}{s(\text{signified})} = s(\text{the statement})$$ with $S=(-1)$ produces $s=\sqrt{-1}$[...] Thus the erectile organ comes to symbolize the place of jouissance, not in itself, or even in the form of an image, but as a part lacking in the desired image: that is why it is equivalent to the of the signification produced above, of the jouissance that it restores by the coefficient of its statement to the function of the lack of signifier -1 . [Lacan (1971); seminar held in 1960.] Interesting/Rigorous but still quite a stretch : The work of Alain Badiou on set theory, although more rigorous and advanced, also provides a very good resource for misapplications of formal mathematics in order to draw non-mathematical conclusions, cf. especially Being and Event which is his magnum opus, in which he uses set theory to support the tagline that 'Mathematics is Ontology'. Unlike Lacan, Badiou at least knows his stuff when it comes to the statement and development of formal results. That said, his interpretations and conclusions are often huge stretches. Here's a related MO post on Badiou: Badiou and Mathematics Interesting/Philosophy : I don't know if you'd call these misapplications, but they are certainly attempts to use formal results to draw philosophical conclusions that are not in any formal way entailed by those results. Here are some examples: Michael Dummett on how Godel Incompleteness might/might not threaten the thesis that meaning is use (philosophical anti-realism): The philosophical significance of Gödel's theorem, M Dummett - Ratio, 1963 Hilary Putnam on how the Lowenheim-Skolem Theorem proves that reference is underdetermined by all possible theoretical or operation constraints (i.e. that the meaning of our mathematical vocabulary can never be accurately understood in order to fix an intended model): http://www.jstor.org/stable/2273415 Pretty much anything philosophical that has been written about the so-called Skolem Paradox involves formal-to-informal entailments. Roger Penrose in The Emperor's New Mind again using Godel to draw conclusions about consciousness and mechanism
|
{
"source": [
"https://mathoverflow.net/questions/61408",
"https://mathoverflow.net",
"https://mathoverflow.net/users/431/"
]
}
|
61,446 |
Nakayama's lemma is mentioned in the majority of books on algebraic geometry that treat varieties. So I think Ihave read the formulation of this lemma at least 20 times (and read the proof maybe around 10 times) in my life. But for some reason I just cannot get this lemma, i.e. I have tendency to forget it. Last time this happened just a couple of days ago, in the book of Shafarevich (Basic Algebraic geometry in 1.5.3.) This lemma is used to prove that for finite maps between quasiprojective varieties the image of a closed set is closed, and again this lemma sounded as something foreign to me (so again I went through the proof of the lemma)... Question. Is there a path to get some stable understanding of Nakayama's lemma and its corollaries? I would be especially happy if there were some geometric intuition underlying this lemma. Or some geometric example. Or maybe there is a nice article of this topic? Some mnemonic rule? (or one just needs to get used to the lemma?)
|
It's sort of like the inverse function theorem, and that is why it is so strong. If you have $n$ functions vanishing at the origin of $k^n$ and want to know if they give a local coordinate system, you ask if their differentials are independent at the origin. Or equivalently if their differentials generate the cotangent space at the origin. So in a [not necessarily noetherian, thanks Georges!] local ring $(\mathcal{O},\mathfrak{m})$, Nakayama's lemma says you can detect that elements of the maximal ideal generate that ideal, hence act sort of like coordinate functions, just by knowing their differentials, i.e. their residues in the Zariski cotangent space $\mathfrak{m}/\mathfrak{m}^2$, generate that linear space. Those versions of the lemma you linked to are almost unrecognizable forms of this simple statement, but that's the way abstract math goes as we know. But the idea is the same, you have a hypotheses about a truncated version of your statement, and you get out the fuller version. The Jacobson radical stuff is there to disguise the fact that it doesn't say much unless you are in a local setting. I.e. in a local ring the Jacobson radical is pretty big and you get a better result. In a polynomial ring with tiny Jacobson radical you get nothing.
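For reference, the precise statement being invoked (a standard formulation, recorded here only for convenience; it is not part of the original answer): if $(\mathcal{O},\mathfrak{m})$ is a local ring and $M$ is a finitely generated $\mathcal{O}$-module, then
$$\mathfrak{m}M = M \implies M = 0,$$
and equivalently, elements $x_1, \dots, x_r \in M$ generate $M$ as soon as their residues generate the vector space $M/\mathfrak{m}M$ over $\mathcal{O}/\mathfrak{m}$. The "differentials" in the discussion above are the residues in $\mathfrak{m}/\mathfrak{m}^2$, i.e. the case $M = \mathfrak{m}$ (when $\mathfrak{m}$ is finitely generated).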
|
{
"source": [
"https://mathoverflow.net/questions/61446",
"https://mathoverflow.net",
"https://mathoverflow.net/users/13441/"
]
}
|
61,451 |
Hey, Suppose I want to establish a theory of the category $C$ (vector spaces or whatever), but what I really have is $D$, some precisely known category. This is to say, I know all the axioms of $D$, but I only have an intuition for $C$ and I want to develop a theory of $C$. I would normally have diagrams in $C$ by mapping cpos or Domains into $C$. But instead, I want to do it with $D$. What I do is define the largest category, $J$, of Domains in $D$. I do this by defining a dcpo with objects as elements in $D$ and relations as arrows in $D$. Then any functor will map the domains in $J$ to diagrams in $C$. It seems like I am just inserting a category $D$ in the normal diagram functor $J \rightarrow C$ resulting in $J \rightarrow D \rightarrow C$ which seems to miss the point of the exercise. The point of the exercise, I think, is to try to do a lot of category theory when you have to live in some category $D$. We start by saying that we "have access to" all diagrams in $D$. Further, we say that we have access to none of the morphisms in any other category. So if we want to talk about a category $C$, it will have to be in terms of diagrams in $D$. Next, we intuit the existence of a category $C$ (I am using this restricted language to reflect the notion that we do don't have access to $C$). Next, we consider endofuntors of $D$, but we really see them as diagrams in $D$ indexed by the domains we constructed in $D$ by $J$. These endofunctors are meant to mimic functors from $D$ to $C$. We are pretending to have access to $C$, by attempting a construction of $C$ in $D$. Sorry that this is so unclear, especially the idea of having an "intuition of C" and "attempting a construction of". I think that this is an expression of a Topos, and so I have some questions. Firstly, what kind of minimum structure do we need in $D$ to really start doing some work? Second, if we really want to say that we only have access to $D$, then we cannot present $D$ as a set of morphisms and a set of objects because that would imply we are actually in SET, not $D$. Is there any way to start working only in $D$? This goes back to the first question (although thinking about this too much is a bit of a morass).
|
|
{
"source": [
"https://mathoverflow.net/questions/61451",
"https://mathoverflow.net",
"https://mathoverflow.net/users/10007/"
]
}
|
61,632 |
The utopian situation in mathematics would be that the statement and the proof of every result would live "in the same world", at the same level of mathematical complexity (in a broad sense), unless there were a good conceptual reason for the contrary. The typical situation would be for a proof in finite combinatorics to be proven purely within the realm of finite combinatorics, a statement about integers to be proven using only the rationals (perhaps together with some formal symbols such as $\sqrt {2}$ and $\sqrt{-1}$), and so on. When the typical situation breaks down, the reason would be well-known and celebrated. The prototypical field where things don't seem to work this way is Number Theory. Kronecker famously stated that "God invented the integers; all else is the work of man."; and yet, the real numbers (often in the guise of complex analysis) are ubiquitous all over Number Theory. I am sure that this question is hopelessly naïve and standard but: What is the high-concept explanation for why real numbers are useful in number theory? What is the "minimal example" of a statement in number theory, for whose "best possible" proof the introduction of real numbers is obviously useful? An alternative way of framing the question would be to ask how you would refute the following hypothetical argument: "We know that calculus works well, so we are tempted to apply it to anything and everything. But perhaps it is in fact the wrong tool for Number Theory. Perhaps there exists a rational-number-based approach to Number Theory waiting to be discovered, whose discoverer will win a Fields Medal, which will replace all the analytic tools in Number Theory with dicrete tools." (This question is a byproduct of a discussion we had today at Dror Bar-Natan's LazyKnots seminar.) Update : (REWRITTEN) There has been some discussion in the comments concerning whether proofs and statements living in the same realm is "utopian". The philosophical idea underlying this question is that, in my opinion, part of mathematics is to understand proofs, including understanding which tools are optimal for a proof and why. If the proof is a formal manipulation of definitions used in the statement of the claim (e.g. proof of the snake lemma), then there is nothing to explain. If, on the other hand, the proof makes essential use of concepts from beyond the realm of the statement of the theorem (e.g. a proof of a statement about integers which uses real numbers, or proof of Poincare Duality for simplicial complexes which uses CW complexes) then we ought to understand why. Is there no other way to prove it?Why? Would another way to prove it necessarily be move clumsy? Why? Or is it just an accident of history, the first thing the prover thought of, with no claim of being an "optimally tooled proof" in any sense? For one think, if a proof of a result involving integers essentially uses properties of the real numbers (or complex numbers), such a proof would not work in a formal somehow analogous setting where there are no real numbers, such as knots as analogues for primes. For another, by understanding why the tool of the proof is optimal, we're learning something really fundamental about integers. I'm interested not in "what would be the fastest way to find a first proof", but rather in "what would be the most intuitive way to understand a mathematical phenomenon in hindsight". 
So one thing that would make me happy would be a result for integers which is "obviously" a projection or restriction of some easy fact for real numbers, and is readily understood that way, but remains mysterious if real numbers/ complex analysis aren't introduced.
|
The Gödel Speedup Theorem provides some explanation why real numbers (and variants) are useful in proving statements in number theory. Real numbers, complex numbers, and $p$-adic numbers are second-order objects over the natural numbers. Thus a proof of a number theoretic fact using such analytical devices is formally a proof of that fact in second-order arithmetic . The Gödel Speedup Theorem shows that there is a definite advantage to using second-order arithmetic to prove elementary number theoretic facts. Gödel Speedup Theorem. Let $h$ be any computable function. There is an infinite family $\mathcal{H}$ of first-order (indeed $\Pi^0_2$) statements such that if $\phi \in \mathcal{H}$, then $\phi$ is provable in first-order arithmetic and if $k$ is the length of the shortest proof of $\phi$ in second-order arithmetic, then the shortest proof of $\phi$ in first-order arithmetic has length at least $h(k)$. Since computable functions can grow very fast, this shows that there are true number theoretic facts that one can prove using second-order methods (e.g. complex analysis, $p$-adic numbers, etc.) but any first-order (a.k.a. elementary) proof is unfathomably long. Admittedly, the statements produced by Gödel to verify the theorem are very unnatural from a number theoretic point of view. However, it is a general fact that second-order proofs can be much much shorter and easier to understand than first-order proofs. Addendum. This excellent post by Emil Jeřábek demonstrates another speedup theorem, which is in many ways more striking. The method of going from a first-order $T$ to a second-order $T^+$ is conservative, meaning that $T^+$ cannot prove more first-order theorems than $T$. However, the mere act of allowing sets to replace formulas and introducing the possibility of quantifying over such sets introduces speedups faster than any exponential tower. Introducing $\mathbb{R}$, $\mathbb{C}$, $\mathbb{Q}_p$ and so forth has a similar effect where one can package complicated ideas into conceptually simpler ones (e.g. replacing $\forall\exists$ statements by the higher-level idea of continuity) can lead to monumentally shorter proofs!
|
{
"source": [
"https://mathoverflow.net/questions/61632",
"https://mathoverflow.net",
"https://mathoverflow.net/users/2051/"
]
}
|
61,678 |
Algebraic geometry is quite new for me, so this question may be too naive. therefore, I will also be happy to get answers explaining why this is a bad question. I understand that the basic philosophy begins with considering an abstract commutative ring as a function space of a certain "geometric" object (the spectrum of the ring). I also understand that at least certain types of modules correspond to well known geometric constructions. For example, projective modules should be thought of as vector bundles over the spectrum (and that there are formal statements such as the Serre-Swan theorem which make this correspondence precise in certain categories). My question is, what is the general geometric counterpart of modules? This is not a formal mathematical question, and I am not looking for the formal scheme-theoretic concept (of sheaves of certain type and so on), but for the geometric picture that I should keep in mind when working with modules. I will appreciate any kind of insight or even just a particularly Enlightening example.
|
Roughly, a module can be thought of as a vector bundle on the spectrum, where the dimension of the fibers may vary. Let me give some examples and facts: A free module corresponds to a trivial vector bundle, or more generally projective modules correspond to vector bundles as you already pointed out. Let $R$ be the coordinate ring of a variety and $I$ a radical ideal. Then the $R$-module $R/I$ corresponds to attaching a one dimensional vector space to each point of $Z(I)$ and the zero vector space everywhere else. For example $R=k[x,y]$ and $I=(x,y)$ gives the skyscraper sheaf at the origin. $I=(x)$ gives the trivial one dimensional bundle on the y-axis, etc. If your ideal is not radical, the situation is slightly more complicated. $R/I$ can be thought of as the trivial bundle on an infinitesimal neighborhood of $Z(I)$. Another nice example is a geometric explanation why the tensor product $\mathbb Z/p \otimes_{\mathbb Z} \mathbb Z/q$ for say $p,q$ without common divisor vanishes. Our space
$spec(\mathbb Z)$ consists just of a point for each prime (and a generic point). Now with the above intuition in mind,
our two modules are geometrically just one dimensional vector spaces attached
to infinitesimal neighborhoods of the prime divisors of $p,q$. Since $p$ and $q$ have no common divisor there is no point where both $\mathbb Z/p$ and $\mathbb Z/q$ have nonzero fiber.
As with vector bundles, the tensor product of modules can be thought of geometrically as the fiberwise tensor product ($i^*$ commutes with $\otimes$). But of course the fiberwise tensor product vanishes because there are no points where both modules have nonzero fibers, so $\mathbb Z/p \otimes \mathbb Z/q=0$. Finally any finitely generated module (more generally a coherent sheaf on a noetherian scheme) is built up of vector bundles on subspaces in the following way: there exists a stratification of $\mathrm{spec}(R)$ such that the module pulled back to each stratum is a vector bundle. This follows from Hartshorne Ex. II.5.8.
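As a concrete instance of the tensor product picture (a standard computation, added here only as an illustration): $\mathbb Z/6 \otimes_{\mathbb Z} \mathbb Z/10 \cong \mathbb Z/\gcd(6,10) = \mathbb Z/2$. The module $\mathbb Z/6$ lives on infinitesimal neighborhoods of the points $(2)$ and $(3)$ of $\mathrm{spec}(\mathbb Z)$, while $\mathbb Z/10$ lives on $(2)$ and $(5)$, so the fiberwise tensor product survives exactly over the one common point $(2)$.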
|
{
"source": [
"https://mathoverflow.net/questions/61678",
"https://mathoverflow.net",
"https://mathoverflow.net/users/14379/"
]
}
|
61,840 |
Do you believe P=NP? I've seen some mathematicians say that if P=NP their work would be worthless and restricted to enunciating theorems. They seem to believe that there exist an almost philosophical impediment to P=NP. Do you agree with that? Does the possibility of P=NP bother you?
|
Contrary to a popular misunderstanding: if P = NP, then the proof of any statement $A$ can be found by an algorithm in time polynomial in the length of the shortest proof of $A$ , not in the length of $A$ itself. Moreover, the exponent of the polynomial could easily be so large as to make this algorithm practically worthless. But most importantly: the shortest, machine-generated, proof of some theorem is highly unlikely to be the most elegant, illuminating, or just human-comprehensible, proof. Thus this idea that under P = NP, mathematics would be reduced to “enunciating theorems”, is completely misguided.
|
{
"source": [
"https://mathoverflow.net/questions/61840",
"https://mathoverflow.net",
"https://mathoverflow.net/users/14312/"
]
}
|
61,842 |
Let's consider a composite natural number $n$ greater than or equal to $4$. Goldbach's conjecture is equivalent to the following statement: "there is at least one natural number $r$ such that $(n-r)$ and $(n+r)$ are both primes".
For obvious reasons $r\leq n-3$. Such a number $r$ will be called a "primality radius" of $n$. Now let's define the number $ord_{C}(n)$, which depends on $n$, in the following way: $ord_C(n):=\pi(\sqrt{2n-3})$, where $\pi(x)$ is the number of primes less than or equal to $x$. $(n+r)$ is a prime only if for every prime $p$ less than or equal to $\sqrt{2n-3}$, $p$ doesn't divide $(n+r)$. There are exactly $ord_{C}(n)$ such primes. The number $ord_{C}(n)$ will be called the "natural configuration order" of $n$.
Now let's define the "$k$-order configuration" of an integer $m$, denoted $C_{k}(m)$, as the sequence $(m \ \ mod \ \ 2, \ \ m \ \ mod \ \ 3,...,m \ \ mod \ \ p_{k})$.
For example $C_{4}(10)=(10\ \ mod \ \ 2,\ \ 10 \ \ mod \ \ 3, \ \ 10 \ \ mod \ \ 5, \ \ 10 \ \ mod \ \ 7)=(0,1,0,3)$ .
I call $C_{ord_{C}(n)}(n)$ the "natural configuration" of $n$ . A sufficient condition to make $r$ be a primality radius of $n$ is that for all integer $i$ such that $1\leq i\leq ord_{C}(n)$ , $(n-r) \ \ mod \ \ p_{i}$ differs from $0$ and $(n+r) \ \ mod \ \ p_{i}$ differs from $0$ . If this statement is true, $r$ will be called a "potential typical primality radius" of $n$ .
Moreover, if $r\leq n-3$ , then $r$ will be called a "typical primality radius" of $n$ . Now let's define $N_{1}(n)$ as the number of potential typical primality radii of $n$ less than $P_{ord_{C}(n)}$ , where $P_{ord_{C}(n)}=2\times 3\times...\times p_{ord_{C}(n)}$ , $N_{2}(n)$ as the number of typical primality radii of $n$ , and $\alpha_{n}$ by the following equality: $N_{2}(n)=\dfrac{n.N_{1}(n)}{P_{ord_{C}(n)}}\left(1+\dfrac{\alpha_{n}}{n}\right)$ It is quite easy to give an exact expression of $N_{1}(n)$ and to show that: $\dfrac{n.N_{1}(n)}{P_{ord_{C}(n)}}>\left(c.\dfrac{n}{\log(n)^{2}}\right)\left(1+o(1)\right)$ , where $c$ is a positive constant. A statistical heuristics makes me think that $\forall \varepsilon>0, \ \ \alpha_{n}=O_{\varepsilon}\left(n^{\frac{1}{2}+\varepsilon}\right)$ . I would like to know whether this is equivalent to the Riemann Hypothesis or not. If so, it would mean that RH implies that every large enough even number is the sum of two primes. Thank you in advance for your feedback. EDIT October 13th 2013: to answer Gerry Myerson's question below, the statistical heuristics I refer to is $\vert p−f\vert\leqslant\dfrac{1}{\sqrt{n}}$ with $p$ the "probability" of an integer less than $P_{ord_{C}(n)}$ to be a potential typical primality radius of $n$ , hence $p=\dfrac{N_{1}(n)}{P_{ord_{C}(n)}}$ and $f$ the "frequency" of the event "being a typical primality radius of $n$ ", hence $f=\dfrac{N_{2}(n)}{n}$ . This gives $\alpha_{n}=O(\sqrt{n}\log^{2}n)$ , which is, up to the implied constant, the error term in the explicit formula of $\psi(n)$ under RH. Edit August 6th 2014: denoting by $r_{0}(n)$ the smallest typical potential primality radius of $n$ , is there a rather rigorous way to figure out what the probability of the event $r_{0}(n)=1$ should be? Edit January 7th 2015: it appears that the considered equivalence might be obtained from the conjunction of the statements $r_{0}(n)\leq\left(\dfrac{P_{ord_c(n)}}{N_1(n)}\right)^{2}\ll \log^4 n$ and $\alpha_{n}\ll\sqrt{nr_{0}(n)}$ . I didn't manage to prove the latter but any help would be greatly appreciated. Edit April 8th 2015: it appears that the upper bound $\alpha_{n}=O_{\varepsilon}(n^{1/2+\varepsilon})$ would follow from the following reasonable assumption: $N_{2}(n)$ is the nearest integer to $N_{1}(n)\dfrac{n-\sqrt{2n-3}}{P_{ord_{C}}(n)-\sqrt{2n-3}}$ , which follows from the very definition of what a typical primality radius is. Indeed, writing $N_{2}(n)=\dfrac{n.N_{1}(n)}{P_{ord_{C}}(n)}=N_{1}(n)\dfrac{n-\sqrt{2n-3}}{P_{ord_{C}(n)}-\sqrt{2n-3}}+O(1)$ , one gets $\dfrac{n.N_{1}(n)}{P_{ord_{C}(n)}}(1+\dfrac{\alpha_{n}}{n})=N_{1}(n)\dfrac{n-\sqrt{2n-3}}{P_{ord_{C}(n)}-\sqrt{2n-3}}+O(1)$ , hence $1+\dfrac{\alpha_{n}}{n}=\dfrac{P_{ord_{C}(n)}}{n}\left(\dfrac{n-\sqrt{2n-3}}{P_{ord_{C}(n)}-\sqrt{2n-3}}\right)+O(\dfrac{P_{ord_{C}(n)}}{n.N_{1}(n)})$ , i.e. $\dfrac{\alpha_{n}}{n}=\dfrac{P_{ord_{C}(n)}}{n}\dfrac{n-\sqrt{2n-3}}{P_{ord_{C}(n)}-\sqrt{2n-3}}-\dfrac{n(P_{ord_{C}(n)}-\sqrt{2n-3})}{n(P_{ord_{C}(n)}-\sqrt{2n-3})}+O(\dfrac{\log^{2} n}{n})$ . Thus $\alpha_{n}=\dfrac{(n-P_{ord_{C}(n)})\sqrt{2n-3}}{P_{ord_{C}(n)}-′\sqrt{2n-3}}+O(\log^{2} n)$ so $\alpha_{n}=(\sqrt{2n})^{1+\varepsilon}+O(\log^{2}n)=O_{\varepsilon}(n^{1/2+\varepsilon})$ . Édit June 5th 2015: it turns out that the previous assumption is false. Nevertheless I would like to know whether a suitable generalization of the central limit theorem could be useful to show that, if $\alpha_{n}=o(n)$ , then $\alpha_{n}=O(\sqrt{n}\log^{2} n)$ .
Indeed, writing $N_{2}(n)=\sum_{i=1}^{n}X_{i}(n)$ with $X_{i}(n)\in\{0,1\}$ for all $i$ , one should be able to define a variance $\sigma^{2}$ as $\dfrac{1}{n}(N_{2}(n)-\dfrac{n.N_{1}(n)}{P_{ord_{C}(n)}})^{2}$ which should tend to $1$ for $n$ large enough, entailing the desired upper bound. Any ideas/insights/references are welcome. Edit March 5th, 2016: Writing as above $N_{2}(n)=\displaystyle{\sum_{i=1}^{n}X_{i}(n)}$ with $X_{i}(n)\in\{0,1\}$ , there is, among all possible realizations of the Binomial distribution of parameters $n$ and $p=\dfrac{N_{1}(n)}{P_{ord_{C}(n)}}$ , exactly one that coincides with the sequence $(u_{i})_{i\le n}$ of general term $1_{i\ \ is\ \ a \ \ typical\ \ primality \ \ radius \ \ of \ \ n}$ . Defining the quantity $\varepsilon_{i}$ as $\vert X_{i}-\frac{N_{2}(n)}{n}\vert$ , then the norm $\| x\|_{1}$ of the vector $x$ whose $i$ -th component is $u_{i}$ is $\displaystyle{\| x\|_{1}=\sum_{i=1}^{n}\varepsilon_{i}}$ , while $\displaystyle{\| x\|_{2}=\left(\sum_{i=1}^{n}\varepsilon_{i}^{2}\right)^{1/2}}$ . From $\alpha_{n}=\dfrac{P_{ord_{C}(n)}}{N_{1}(n)}\left(N_{2}(n)-\dfrac{n.N_{1}(n)}{P_{ord_{C}(n)}}\right)$ , it follows that $|\alpha_{n}|\le \dfrac{P_{ord_{C}(n)}}{N_{1}(n)}\| x\|_{1}\le\sqrt{n}\log^{2}n\| x\|_{2}$ . All that remains to be done is proving that $\| x\|_{2}=O(1)$ . Edit January 22nd 2019: it seems that the stronger assumption $ \vert p-f\vert\lesssim\frac{p}{\sqrt{n}} $ holds numerically, at least for small values of $ n $ . A proof thereof would entail that $ \alpha_{n}\lesssim\sqrt{n} $ , which may be stronger than RH. Actually, writing $ \dfrac{\alpha_{n}}{n}=\dfrac{1}{R_{n}} $ one gets $ R_{n}=\dfrac{n.N_{1}(n)}{N_{2}(n).P_{ord_{C}(n)}-n.N_{1}(n)} $ . Replacing in the latter $N_{2}(n)$ by the approximation thereof derived from the Hardy-Littlewood $k$-tuple conjecture, times $ \frac{n-p_{ord_{C}(n)}}{n} $ to eliminate non-typical primality radii, should provide (conditionally) the desired result. Edit May 14th 2019: can one use Theorem 1 in https://arxiv.org/abs/1809.01409 to establish rigorously that $\dfrac{\alpha_{n}}{n}=o(1)$ ? As the sequence of primes is arbitrarily close to arbitrarily long arithmetic progressions, one can expect the same to hold for the sequence of primality radii of a given integer $n$ . The idea is that the considered sequence behaves 'almost' like an arithmetic progression of gap size $\Delta:=\dfrac{P_{ord_{C}(n)}}{N_{1}(n)}$ , and that, were it actually such an arithmetic progression, the quantity $\alpha_{n}$ would vanish. Edit June 10th 2019: from $p=n-r$ and $q=n+r$ it follows that a prime $l$ dividing $2r$ is such that $p\equiv q\pmod l$ . But as $p$ and $q$ are prime, both of them are coprime with any prime dividing $P_{ord_{C}(n)}$ and thus if $l\mid 2r$ then $l\mid P_{ord_{C}(n)}$ . Of course the $l$ -adic valuation of $r_{0}(n)$ can be greater than 1, but this property of being divisible only by primes in a prescribed finite set makes me think that, maybe, $r_{0}(n)$ can be interpreted as the conductor of some 'deep' arithmetic object associated to $n$ , like an L-function or an elliptic curve (and thus, via the modularity theorem, to the level of some arithmetic subgroup of the modular group). This has of course an interest in itself but may also be used to provide an upper bound of $r_{0}(n)$ in terms of $n$ . Edit June 12, 2020: can this preprint by Maynard: https://arxiv.org/abs/2006.06572 shed some light on the conjectured relation $\alpha_{n}=o(n)$ ?
Edit March 20th, 2021: I started learning Python 3 and wrote a program related to this question which supports the main conjecture, namely that $\alpha_{n}\ll n^{1/2}\log^{2}n$. (Two screenshots of example output, in French, were attached here.) The code is far from optimal as I'm not well-versed in computer science, but I can share it with whoever is interested. The arithmeticity coefficient of $n$ is defined as $1-\frac{\vert\alpha_{n}\vert}{n}$ and measures how close the sequence of primality radii of $n$ is to an arithmetic progression, whose arithmeticity coefficient would be $1$.
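For concreteness, here is a minimal Python sketch of the quantities involved, under one explicit assumption: the definition of $ord_{C}(n)$ is given earlier in the question and is not repeated here, so the cut-off used below (the primes $p$ with $p^{2}\leq 2n$) is only a hypothetical stand-in for it, and all function names are illustrative. The script computes $N_{1}(n)$ by a Chinese Remainder count over one full period (which may differ from the question's range convention by a boundary term), $N_{2}(n)$ by direct enumeration of $r\leq n-3$, and $\alpha_{n}$ from the defining identity $N_{2}(n)=\dfrac{n.N_{1}(n)}{P_{ord_{C}(n)}}\left(1+\dfrac{\alpha_{n}}{n}\right)$.

```python
from fractions import Fraction

def primes_up_to(m):
    """Sieve of Eratosthenes."""
    sieve = [True] * (m + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(m ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = [False] * len(sieve[p * p::p])
    return [p for p, flag in enumerate(sieve) if flag]

def alpha_n(n):
    # Hypothetical stand-in for ord_C(n): use the primes p with p^2 <= 2n.
    small_primes = [p for p in primes_up_to(int((2 * n) ** 0.5) + 2) if p * p <= 2 * n]
    P = 1
    for p in small_primes:
        P *= p                                    # the primorial P_{ord_C(n)}
    # N1: residues r mod P with r != n and r != -n modulo every small prime,
    # counted over one full period 0 <= r < P by the Chinese Remainder Theorem.
    N1 = 1
    for p in small_primes:
        N1 *= p - len({n % p, (-n) % p})
    # N2: typical radii, i.e. admissible r with 1 <= r <= n - 3, counted directly.
    def admissible(r):
        return all((n - r) % p and (n + r) % p for p in small_primes)
    N2 = sum(1 for r in range(1, n - 2) if admissible(r))
    alpha = float(n * (Fraction(N2 * P, n * N1) - 1))   # from N2 = (n*N1/P)(1 + alpha/n)
    return N1, N2, alpha

for n in (100, 1000, 5000):
    print(n, alpha_n(n))
```

Dividing the printed $\alpha_{n}$ by $\sqrt{n}\log^{2}n$ then gives a quick empirical check of the conjectured bound, though nothing here depends on the particular (assumed) choice of $ord_{C}(n)$ beyond illustration.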
|
I think it could be a safe assumption that this is not equivalent to RH in a simple way (assuming the other assertions of the question are true). Here is why: to show that RH implies Goldbach (at least asymptotically) is not at all an unnatural idea, which however as far as I know is open. For example, in ' Refinements of Goldbach's Conjecture, and the
Generalized Riemann Hypothesis ' Granville discusses questions close to this.
However, it seems to me that the asymptotic counts of the number of solutions to 'the Goldbach equations' are related to the RH (and GRH). Another example would be Deshouillers, Effinger, te Riele, Zinoviev 'A complete Vinogradov $3$-primes theorem under the Riemann hypothesis. Electron. Res. Announc. Amer. Math. Soc. 3 (1997), 99–104.' who showed that ternary Goldbach follows from GRH.
This is less directly related as ternary Goldbach is long known asymptotically, and this is thus about eliminating 'small' counterexamples. So, it just seems more than a bit unlikely that 'RH implies asymptotic Goldbach' can be solved with a half-page argument and an equivalence argument direct enough that somebody might simply supply it here. In addition, despite an explicit request (made a while ago) there is still no information/evidence provided why this should be equivalent to RH, which I interpret as the absence of any such evidence. Finally, since being equivalent is a bit of a vague notion (if both were true they were equivalent even if totally unrelated) and since this is too long for a comment anyway, I thought I give these generalities as an answer.
|
{
"source": [
"https://mathoverflow.net/questions/61842",
"https://mathoverflow.net",
"https://mathoverflow.net/users/13625/"
]
}
|
62,088 |
The conjugacy classes of the permutation group $S_n$ are indexed by partitions like $[6]$ and $[2,2,2] = [2^3]$ describing the cycle type. What happens when you take products of two whole conjugacy classes? I saw in a paper,
$$[6][2^3] = 6[3,1^3] + 8[2^2,1^2]+5[5,1]+4[4,2]+3[3^2]$$
Which I take to mean that if you multiply a 6-cycle ( abcdef ) and a product of three disjoint 2-cycles ( pq )( rs )( tv ), you can get a three-cycle ( abc ), two two-cycles ( ab )( cd ), a five-cycle ( abcde ), a four-cycle and a two-cycle ( abcd )( ef ), or two three-cycles ( abc )( def ), with certain multiplicities. Is it predictable what kinds of conjugacy classes you get? Is there an interpretation of this as the intersection cohomology of some moduli space?
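The displayed identity can be checked by brute force in $S_6$: multiply every $6$-cycle by every permutation of type $[2^3]$, tally the cycle types of the products, and divide each tally by the size of the corresponding conjugacy class (by centrality of class sums this quotient is exactly the structure constant). The sketch below is only illustrative, and the function names are not from the paper.

```python
from itertools import permutations
from collections import Counter

def cycle_type(p):
    """Cycle type of the permutation p, given as a tuple with p[i] = image of i."""
    seen, lengths = set(), []
    for i in range(len(p)):
        if i not in seen:
            j, length = i, 0
            while j not in seen:
                seen.add(j)
                j = p[j]
                length += 1
            lengths.append(length)
    return tuple(sorted(lengths, reverse=True))

def compose(p, q):
    """(p*q)(i) = p(q(i))."""
    return tuple(p[q[i]] for i in range(len(p)))

all_perms = list(permutations(range(6)))
class_size = Counter(cycle_type(p) for p in all_perms)

six_cycles = [p for p in all_perms if cycle_type(p) == (6,)]
triple_transpositions = [p for p in all_perms if cycle_type(p) == (2, 2, 2)]

tally = Counter(cycle_type(compose(a, b))
                for a in six_cycles for b in triple_transpositions)

# structure constant C^lambda = (number of products of type lambda) / (size of class lambda)
for lam in sorted(tally):
    print(lam, tally[lam] // class_size[lam])
```

It should print the five cycle types above with multiplicities $8, 6, 3, 4, 5$ (in the sorted order of the types), matching the coefficients in the displayed identity.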
|
Short answer: Yes, on Hurwitz spaces. Let's set these numbers up as the structure constants of $Z_d=Z(\mathbb{C}[S_d])$, the center of the group ring of the symmetric group $S_d$. The ring $Z_d$ has basis $K_{\mu}$, where $\mu$ is a partition of $d$, and $K_\mu$ represents the sum of all permutations of cycle type $\mu$. Then multiplication gives $$K_\mu K_\nu=\sum_{\lambda} C^\lambda_{\mu,\nu} K_\lambda$$
for some numbers $ C^\lambda_{\mu,\nu}$, which are what you're interested in. I've seen these called connection coefficients , in work of Goulden and Jackson, for instance, their paper Transitive Factorisations into Transpositions and Holomorphic Mappings on the Sphere , which starts to get you a simple connection to geometry: by looking at ramified covers of the sphere. I'll talk about this a bit first, giving a rough sketch and some pointers, and then I'll address some of Mariano's comments. This easiest connection to geometry is what you asked about in your previous question as "Hurwitz encoding", and David's answer there was good so I'll take that as background. You can start turning this into a problem about intersection theory by looking at Hurwitz spaces. You can make various flavours of these, but let's call the most basic one $H_{g,d}$, the moduli space of all holomorphic maps $\pi:\Sigma\to \mathbb{P}^1$ of degree $d$ from a smooth genus $g$ Riemann surface $\Sigma$ to the Riemann sphere $\mathbb{P}^1$. Generically, such maps will all have simple ramification, and by the Riemann-Hurwitz formula there will be $r=2g-2+2d$ such points of ramification, and so we see that $H_{g,d}$ will have complex dimension $r$. We will be able to view your numbers as suitable intersections on the Hurwitz space $H_{g,d}$. There is a map from $H_{g,d}$ to $\mathbb{P}^r=(\mathbb{P}^1)^r/S_r$ that forgets $\Sigma$ and just remembers the $r$ branch points (the critical values of $\pi$), counted with multiplicity. This is sometimes called the branch map, and I believe it is essentially what is known as the Lyashko-Looijenga map, and so I'll call this map LL. The degree of the map LL is what is known as a Hurwitz number, and translating everything into monodromy we see that it counts the number of tuples of $r$ transpositions $t_i$ in $S_d$ with the product of the $t_i$ being the identity, divided by $d!$ coming from automorphisms of the cover, or choosing a labeling of the $d$ sheets of the cover, depending on your viewpoint. To understand your connection coefficients geometrically, for a partition $\mu$ and a point $p\in \mathbb{P}^1$ we could define a cohomology class $\alpha(\mu, p)$ to consist of those maps in $H_{g,d}$ where $\pi$ has ramification profile $\mu$ over $p$. Then, if we've set up $g$ correctly with respect to $\mu, \nu$ and $\lambda$, the numbers $C^{\lambda}_{\mu,\nu}$ should be, again, up to some factor of automorphisms, the number of points in the triple intersection $\alpha(\mu,p_1)\cap \alpha(\nu,p_2)\cap \alpha(\lambda, p_3)$. I'm not addressing some stuff (for instance, connected versus disconnected covers) or necessarily giving you the most useful view in practice, but this is the simplest way to something like what you want, I think -- the Hurwitz space $H_{g,d}$ is not compact, and we'd want to compactify it (admissible covers is the first way, but this winds up not being normal, and you can use some orbifold Gromov-Witten theory and compactify with twisted stable maps to get the normalization). But hopefully that's some idea of how this would go. To see this viewpoint used in practice, there are, for instance, papers of Lando and Zvonkine on the arXiv -- I'm not sure where exactly you'd want to start. Through something known as the ELSV formula this story gets connected to intersection numbers on the moduli space of curves, which might be what you had in mind...
To connect into what Mariano was saying in comments, you'd want to get into the length of a permutation $\sigma$ -- the minimal number of transpositions $\sigma$ factors into. Let's call this the weight of $\sigma$ -- for a permutation of cycle type $\mu$, it is equal to $|\mu|-\ell(\mu)$, where $\ell(\mu)$ is the number of parts of $\mu$. The center of the group ring $Z_d$ is filtered by the weight, and the "top" coefficients are ones where the weight adds -- where $d-\ell(\lambda)=(d-\ell(\mu))+(d-\ell(\nu))$. In our geometric viewpoint, the weight is the amount of ramification above a certain point, and the top coefficients correspond to covers where all components are genus zero, the coefficients where the weight is off by two mean we have a genus one cover, and similarly -- this filtration is geometrically filtering by the genus of our cover. The "top" coefficients are particularly nice in that they are independent of $d$, and so when you take the associated graded for each $S_d$ this plays well with the natural inclusions between the $S_d$, and you get some universal ring out of all the $S_d$: the Farahat-Higman ring. Mariano's mention of the Hilbert scheme of points in the plane is a bit of a different, longer story here -- the brief outline, as I like to think about it, is that we can view $Z_d$ as the Chen--Ruan orbifold cohomology of $\mathcal{B}S_d=\mathrm{point}/S_d$. The stack $\mathbb{C}^{d}/S_d$ will have the same vector space of cohomology, but the grading will be different -- this is what algebraic geometers call "age", which Mariano referred to. This age induces exactly the filtration above. The filtration is just doubled for $\mathbb{C}^{2d}$, and the Hilbert scheme of points is a crepant resolution of this space, and so you get the relation on homology above. This is a long story, and seems slightly off from what you want.
|
{
"source": [
"https://mathoverflow.net/questions/62088",
"https://mathoverflow.net",
"https://mathoverflow.net/users/1358/"
]
}
|
62,092 |
Hello, as I'm not an analyst, I'm having difficulties with the following, certainly well-known problem: one is given the PDE $\Delta u(x,y)=\sqrt{x^2+y^2}$ in the "region" $x^2+y^2\leq1$ with the boundary condition $u(x,y)=0$ whenever $x^2+y^2=1$. The most obvious "answer" would be $u(x,y)=\sqrt{x^2+y^2}$, but the partial derivatives of $u(x,y)$ are not defined at $(x,y)=(0,0)$ (the singularity w.r.t. polar coordinates). Am I overlooking something, i.e. is there a well-behaved solution? Any help would be greatly appreciated! Kind regards, Stephan.
|
|
{
"source": [
"https://mathoverflow.net/questions/62092",
"https://mathoverflow.net",
"https://mathoverflow.net/users/13462/"
]
}
|
62,125 |
I have heard the following statement several times and I suspect that there is an easy and elegant proof of this fact which I am just not seeing. Question : Why is it true that an invertible nxn matrix with non-negative integer entries, whose inverse also has non-negative integer entries, is necessarily a permutation matrix? The reason I am interested in this has to do with categorification. There is an important 2-category, the 2-category of Kapranov–Voevodsky 2-vector spaces , which in one incarnation has objects given by the natural numbers and 1-morphisms from n to m are mxn matrices of vector spaces. Composition is like the usual matrix composition, but using the direct sum and tensor product of vector spaces. The 2-morphisms are matrices of linear maps. The above fact implies that the only equivalences in this 2-category are "permutation matrices" i.e. those matrices of vector spaces which look like permutation matrices, but where each "1" is replaced by a 1-dimensional vector space. It is easy to see why the above fact implies this.
Given a matrix of vector spaces, you can apply "dim" to get a matrix of non-negative integers. Dimension respects tensor product and direct sum and so this association is compatible with the composition in 2-Vect. Thus if a matrix of vector spaces is weakly invertible, then its matrix of dimensions is also invertible, and moreover both this matrix and its inverse have non-negative integer entries. Thus, by the above fact, the matrix of dimensions must be a permutation matrix. But why is the above fact true?
|
Proof: The condition that $M$ has nonnegative integer entries means that it maps the monoid $\mathbb{Z}_{\geq 0}^n$ to itself. The condition that $M^{-1}$ is likewise means that $M$ is an automorphism of this monoid. The basis elements $(0,0,\ldots,0,1,0,\ldots, 0)$ in $\mathbb{Z}_{\geq 0}^n$ are the only nonzero elements which cannot be written as $u+v$ for some nonzero $u$ and $v$ in $\mathbb{Z}_{\geq 0}^n$. This description makes it clear that any automorphism of $\mathbb{Z}_{\geq 0}^n$ must permute this basis. So $M$ is a permutation matrix.
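A finite sanity check of the statement, purely illustrative (the entry bound $\{0,1,2\}$ is an arbitrary choice to keep the search small): enumerate all $3\times 3$ matrices with entries in $\{0,1,2\}$, keep those whose inverse exists and is a non-negative integer matrix, and confirm that every survivor is a permutation matrix.

```python
import numpy as np
from itertools import product

identity_rows = sorted(map(tuple, np.eye(3, dtype=int)))

def nonneg_integer_inverse(M):
    """The inverse of M as an integer matrix, if it exists and is entrywise >= 0; else None."""
    if abs(np.linalg.det(M)) < 0.5:                  # integer matrix: det is 0 or |det| >= 1
        return None
    cand = np.rint(np.linalg.inv(M)).astype(int)     # round, then verify exactly in integers
    if np.array_equal(M @ cand, np.eye(3, dtype=int)) and (cand >= 0).all():
        return cand
    return None

survivors = []
for entries in product(range(3), repeat=9):          # all 3x3 matrices with entries in {0,1,2}
    M = np.array(entries, dtype=int).reshape(3, 3)
    if nonneg_integer_inverse(M) is not None:
        survivors.append(M)

all_permutation_matrices = all(sorted(map(tuple, M)) == identity_rows for M in survivors)
print(len(survivors), all_permutation_matrices)      # expected: 6 True
```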
|
{
"source": [
"https://mathoverflow.net/questions/62125",
"https://mathoverflow.net",
"https://mathoverflow.net/users/184/"
]
}
|
62,144 |
I am an inexperienced logician, so I may be completely missing something major in this question. I also may be misconstruing the idea of decidability. However, I was wondering if all 6 of the remaining Millennium Prize Problems are decidable in the sense of Gödel.
If any of the associated theories were not decidable, wouldn't that have far-reaching applications in the world of mathematics?
Thanks in advance, and I hope that my question makes sense.
|
There are very few results which allow us to know that a mathematical claim will be provable or disprovable within ZFC without actually proving or disproving it. To the best of my knowledge, the only exceptions are theories which have quantifier elimination . Few [1] open mathematical problems which people are interested in are of this sort, and none of the Millennium problems are. So any of the Millennium problems could be independent of ZFC (except for the Poincaré conjecture, because it has been proved!) You might be particularly interested in Scott Aaronson's survey on whether or not it is likely that $P \neq NP$ is independent of ZFC.
[1] Here is an example of a question which I know is decidable in ZFC, yet whether the answer is "yes" or "no" is open. Do there exist $44$ vectors $(u_i, v_i, w_i, x_i, y_i)$ in $\mathbb{R}^5$, each with length $1$, and with the dot product between each pair $\leq 1/2$? See Wikipedia for background. This is a first-order question about real numbers, so it is decidable by Tarski's theorem . The analogous result for four-dimensional vectors was only obtained in 2003 ; if you can get the answer for $5$ dimensions, it should be publishable in a good journal. I think this is about as interesting a question as one can find which is definitely settleable in ZFC, yet still open. Most questions mathematicians care about are not of this form (and, in my opinion, are much more interesting).
|
{
"source": [
"https://mathoverflow.net/questions/62144",
"https://mathoverflow.net",
"https://mathoverflow.net/users/14484/"
]
}
|
62,155 |
The cohomology of Shimura varieties and Drinfeld shtukas is conjectured to realize the representations sought for in the Langlands programme/conjectures, the cohomology of Deligne-Lusztig varieties realizes representations of the classical groups over finite fields: How did people find those varieties?
|
Regarding Shimura varieties: One has to first consider the case of modular curves, which has served throughout as an impetus and inspiration for the general theory. The study of modular curves (in various guises) goes back to the 19th century, with the work
of Jacobi and others on modular equations (which from a modern viewpoint are explicit equations for the modular curves $X_0(N)$). The fact that these curves are defined over $\mathbb Q$ (or even $\mathbb Z$) also goes back (in some form) to the 19th century, in so far as it was noticed that modular equations have rational or integral coefficients. There is also the (strongly related) fact that interesting modular functions/forms have rational or integral $q$-expansion coefficients. Finally, there are the facts related to Kronecker's Jugendtraum, that modular functions/forms with integral Fourier coefficients, when evaluated at quadratic imaginary points in the upper half-plane, give algebraic numbers lying in abelian extensions of quadratic imaginary fields. These all go back to the 19th century in various forms, although complete theories/interpretations/explanations weren't known until well into the 20th century. The idea that the cohomology of modular curves would be Galois theoretically interesting is more recent. I think that it goes back to Eichler, with Igusa, Ihara, Shimura, Serre, and then Deligne all playing important roles. It seems to be non-trivial to trace the history, in part because the intuitive idea seems to predate the formal introduction of etale cohomology (which is necessary to make the idea completely precise and general). Thus Ihara's work considers zeta-functions of modular curves (or of the Kuga--Satake varieties over them) rather than cohomology. (The zeta-function is a way of incarnating the information carried in cohomology without talking directly about cohomology). Shimura worked just with weight two modular forms (related to cohomology with constant coefficients), and instead of talking directly about etale cohomology worked with the Jacobians of the modular curves. (He explained how the Hecke operators break up the
Jacobian into a product of abelian varieties attached to Hecke eigenforms.) [Added: In fact,
I should add that Shimura also had an argument, via congruences, which reduced the
study of cohomology attached to higher weight forms to the case of weight two forms; this was elaborated on by Ohta. These kinds of arguments were then rediscovered and further developed by Hida, and have since been used by lots of people to relate modular forms of
different weights to one another.] The basic idea, which must have been understood in some form by all these people, is
that a given Hecke eigenform $f$ contributes two dimensions to cohomology, represented by the two differential forms $f d\tau$ and $\overline{f}d\tau$. Thus Hecke eigenspaces in
cohomology of modular curves are two-dimensional. Since the Hecke operators are defined over $\mathbb Q$, these eigenspaces are preserved by the Galois action on etale cohomology, and so we get two-dimensional Galois reps. attached to modular forms. As far as I understand, Shimura's introduction of general Shimura varieties grew out of
thinking about the theory of modular curves, and in particular, the way in which that
theory interacted with the theory of complex multiplication elliptic curves. In particular,
he and Taniyama developed the general theory of CM abelian varieties, and it was natural to try to embed that more general theory into a theory of moduli spaces generalizing the modular curves. A particular challenge was to try to give a sense to the idea that the
resulting varieties (i.e. Shimura varieties in modern terminology) had canonical models over number fields. This could no longer be done by studying rationality of $q$-expansions (since they could be compact, say, and hence have no cusps around which to form Fourier
expansions). Shimura introduced the Shimura reciprocity law, i.e. the description of the
Galois action on the special points (the points corresponding to CM abelian varieties) as the basic tool for characterizing and studying rationality questions for Shimura varieties. In particular, Shimura varieties were introduced prior to the development of the Langlands programme, and for reasons other than the construction of Galois representations. However,
once one had these varieties, naturally defined over number fields, and having their origins in the theory of algebraic groups and automorphic forms, it was natural to try to calculate their zeta-functions, or more generally, to calculate the Galois action on their cohomology, and Langlands turned to this problem in the early 1970s. (Incidentally, my understanding is that it was he who introduced the terminology Shimura varieties .) The first question he tried to answer was: how many dimensions does a given Hecke eigenspace contribute to the cohomology.
He realized that the answer to this --- at least typically --- was given by Harish-Chandra's theory of (what are now called) discrete series $L$-packets, as is explained in his letters to Lang ; the relationship of the resulting Galois representations to the Langlands program is not obvious --- in particular, it is not obvious how the dual group intervenes --- and this (namely, the intervention of the dual group) is the main topic of the letters to Lang. These letters to Lang are just the beginning of the story, of course. (For example, the typical situation does not always occur; there is the phenomenon of endoscopy. And then there is the problem of actually proving that the Galois action on cohomology gives what one expects it to!) Regarding Drinfeld and Deligne--Lusztig varieties: I've studied these cases in much less detail, but I think
that Drinfeld was inspired by the case of Shimura varieties, and (as Jim Humphreys has noted) Deligne--Lusztig drew inspiration from Drinfeld. What can one conclude? These theoretically intricate objects grew out of a long and involved history, with multiple motivations driving their creation and the investigations of their properties. If you want to find a unifying (not necessarily historical) theme,
one could also note that Deligne--Lusztig varieties are built out of flag varieties in a certain sense, in fact as locally closed regions of flag varieties, and that Shimura varieties are also built out of (in the sense that they are
quotients of) symmetric spaces, which are again open regions in (partial) flag varieties.
This suggests a well-known conclusion, namely that the geometry of reductive groups and the various spaces associated to them seems to be very rich.
|
{
"source": [
"https://mathoverflow.net/questions/62155",
"https://mathoverflow.net",
"https://mathoverflow.net/users/-1/"
]
}
|
62,156 |
This question comes to me via a friend, and apparently has something to do with quantum physics. However, stripped of all physics, it seems interesting enough on its own. I assume someone has asked this question before, but I have no idea what to search for: Suppose we have points $P$ and $Q$ on $S^2$, and two available rotations: specifically, I am interested in rotations by $\pi/4$ radians about the $x$ and $z$ axes. Given $\varepsilon > 0$, is there an effective algorithm for applying these rotations to $P$ so that it is within Euclidean distance $\varepsilon$ of $Q$? Edit: Update retracted. I'm curious about the general situation as well, where the two rotations are arbitrary (and, obviously, send $P$ to a dense subset of $S^2$).
|
* More comments at end on finding group elements with small word length * If you're willing to accept an element of the group (as distinguished from a word expressing an element) there is an algorithm that will produce such an element moving $P$ to within $\epsilon$ of $Q$ that is polynomial as a function of the number of bits of $\epsilon$, that is, $|\log(\epsilon)|$. If you need a word, there are algorithms that are at least as good as polynomial in $1/\epsilon$. First: this group is dense, because the only finite subgroups of $SO(3)$ that are not contained in $O(2) \times O(1)$ have elements of orders only 2, 3, 4 and 5. There are good descriptions of all such subgroups in various places, but I won't go over this here --- I can explain if pressed. A rotation by $\pi/4$ has order 8, and the group, by inspection, does not preserve a splitting. Therefore, the group is infinite. Any infinite subgroup of any Lie group is dense in some closed subgroup. The only possibility is that it is dense in $SO(3)$. The basic phenomenon that helps for approximation is that in any Lie group with any Riemannian metric, there is a radius $r$ such that for $g, h \in B_r$ (the ball of radius $r$), the commutator $[g, h]$ is contained in the ball $B_{2 r^2}$, which is much smaller if $r$ is small. This follows from Taylor approximation of the commutator in a neighborhood of $[1,1]$. More concretely,
$|[g,h]| < 2|g| |h|$, where $|\cdot|$ denotes distance from the identity. Furthermore, for two isometries of $S^2$ that move a point $X$ a small distance, $[g,h]$ is nearly a translation, since the group $SO(2)$ of possible first derivatives with respect to some local frame field is abelian. Using this, you can get from $P$ to $Q$ by successive approximation. If we denote the two generators $X$ and $Z$, we can first take $Q$ roughly to within striking distance of $P$ by some element of the form
$X^k Z^j$. Or, use some other word; this first step has fixed finite cost, and can be done by exhaustive search through some set of words in $X$ and $Z$. Now, use commutators of smallish elements to find still smaller elements, and use these
to bring $Q$ still closer to $P$. It is easy to generate approximate translations of all scales, by judiciously choosing commutators of elements on larger scales. The computations in $O(3)$ to desired accuracy can be done in polynomial time. One concrete tool to actually implement this would be to reduce the question to the case that $P$ is a fixed point of an element $W$ of infinite order. (It is easy to find such elements, using a $p$-adic valuation). Given one approximate translation that moves $P$ a small distance, conjugates of it by powers of $W$ are translations approximating any desired direction. If you take the commutator with a fixed approximate translation of medium size, it is an approximate translation of size approximately, say, $1/3$ the original. In this process, elements of the group are generated recursively, and the word length in original generators typically grows exponentially in the number of steps, but they have exponentially increasing accuracy of moving $Q$ to $P$. If you unroll the process, it takes polynomial-length words in $1/\epsilon$ to move $Q$ to within $\epsilon$ of $P$. Addendum. Any finitely-generated subgroup of $SO(3)$ generated by matrices with algebraic integer entries (such as this) has a faithful discrete action on a product of spheres, hyperbolic planes, and hyperbolic 3-spaces, by a general construction for algebraic groups, from which it follows that it is either virtually abelian or has exponential growth and in fact it contains a free subgroup. It seems likely that these group elements have orbits on $S^2$ that are reasonably uniformly distributed, although I don't know what's proven about it. If so, by just counting elements it would follow that $P$ can be taken to within $\epsilon$ of $Q$ using a word of length linear in $|\log(\epsilon)|$. It looks like an interesting challenge to try to find a polynomial-time algorithm that will find such a word. Once $P$ is within a small neighborhood of $Q$ and you have a modest selection of moderate-length words that look almost like translations in a magnification of this small neighborhood, one strategy would be to make a first approximation of getting closer by adding vectors. But they are not exactly additions of vectors, and the many different possible orders in which you could multiply them give many different results.
If you could systematically analyze and control the effects of changing the order it might be possible to systematically improve the approximation. In other words: instead of multiplying
by higher and higher commutators at the end, actually commute elements in the product. However, this is more of a challenge than I feel like plunging in to here.
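A crude illustration of the 'exhaustive search through words' step mentioned above (a sketch whose depth cut-off and rounding-based deduplication are arbitrary illustrative choices, not part of the algorithm described in this answer): enumerate words in the two rotations and their inverses, apply them to $P$, and keep the word whose image lands closest to $Q$.

```python
import numpy as np

c, s = np.cos(np.pi / 4), np.sin(np.pi / 4)
X = np.array([[1, 0, 0], [0, c, -s], [0, s, c]])   # rotation by pi/4 about the x-axis
Z = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])   # rotation by pi/4 about the z-axis
gens = {'X': X, 'Z': Z, 'x': X.T, 'z': Z.T}        # lower case = inverse rotation

def closest_word(P, Q, max_len=8):
    """Breadth-first search over words of length <= max_len in the four generators."""
    best_word, best_dist = '', np.linalg.norm(P - Q)
    frontier = {tuple(np.round(P, 4)): ('', P)}
    for _ in range(max_len):
        new_frontier = {}
        for word, point in frontier.values():
            for name, g in gens.items():
                p2 = g @ point
                d = np.linalg.norm(p2 - Q)
                if d < best_dist:
                    best_word, best_dist = word + name, d
                key = tuple(np.round(p2, 4))       # crude deduplication of nearby images
                new_frontier.setdefault(key, (word + name, p2))
        frontier = new_frontier
    return best_word, best_dist

P = np.array([0.0, 0.0, 1.0])                      # north pole
Q = np.array([1.0, 1.0, 1.0]) / np.sqrt(3.0)
print(closest_word(P, Q))
```

Raising max_len improves the approximation only slowly, which is consistent with the remarks above: getting small $\epsilon$ efficiently is exactly what the commutator-based successive approximation is for.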
|
{
"source": [
"https://mathoverflow.net/questions/62156",
"https://mathoverflow.net",
"https://mathoverflow.net/users/35336/"
]
}
|
62,159 |
Hi, I'm just starting to learn about deformation theory (via Hartshorne's Deformation theory, as well as Fantechi's section of FGA explained), and I feel like I'm confused about fundamental concepts. So please indulge me even if the question is well-known, or trivial. So suppose we have a projective variety $Y \subseteq \mathbb{P}_k^n$. Then we can consider two natural objects associated to it, namely, the Hilbert scheme, which parametrizes all the objects with the same Hilbert polynomial as $Y$, and the (versal?) deformation space, which represents the deformation functor $F: (Art)_k \to (Sets)$. My question is, is there a relationship between these two spaces? My hunch is that the Hilbert scheme is included in some way (perhaps via immersion, although that sounds kind of strong) to the deformation space - I guess this means that any deformation over an Artin ring doesn't change the Hilbert polynomial - but I can't formulate any coherent and believable conjecture right now. (If the question doesn't make sense, could you answer the right question that most closely approximates it?) Thank you for reading.
|
|
{
"source": [
"https://mathoverflow.net/questions/62159",
"https://mathoverflow.net",
"https://mathoverflow.net/users/14449/"
]
}
|
62,218 |
There are quite a few german mathematical theorems or notions which usually are not translated into other languages. For example, Nullstellensatz , Hauptvermutung , Freiheitssatz , Eigenvector (the "Eigen" part), Verschiebung . For me, as a German, this is quite entertaining. Do you know other examples? Please one per answer, please give a reference for the term or a short explanation of what it means. It would be great to see an explanation why there is no translation. EDIT: Some more examples can be found at Wikipedia : Ansatz, Entscheidungsproblem, Grossencharakter, Hauptmodul, Möbius band, quadratfrei, Stützgerade, Vierergruppe, Nebentype.
|
Führerdiskriminantenproduktformel.
|
{
"source": [
"https://mathoverflow.net/questions/62218",
"https://mathoverflow.net",
"https://mathoverflow.net/users/2841/"
]
}
|
62,312 |
A non-principal ultrafilter $\mathcal{U}$ on $\omega$ is a p-point (or weakly selective ) iff for every partition $\omega = \bigsqcup _{n < \omega} Z_n$ into null sets, i.e each $Z_n \not \in \mathcal{U}$, there exists a measure one set $S \in \mathcal{U}$ such that $S \cap Z_n$ is finite for each $n$. A non-principal ultrafilter $\mathcal{U}$ on $\omega$ is Ramsey (or selective ) iff for every partition as above, there exists a measure one set $S$ such that $|S \cap Z_n| = 1$ for each $n$. Clearly, every Ramsey ultrafilter is a p-point. What is known about the converse? I couldn't find anything, not even a consistency result, in any searches I've done or sources I've checked. Is very little known/published about the converse?
|
Amit: The converse is not true, not even under MA. This is a result of Kunen, and the paper you want to look at is "Some points in $\beta{\mathbb N}$", Math. Proc. Cambridge Philos. Soc. 80 (1976), no. 3, 385–398. There is a related notion, called $q$-point . These are ultrafilters such that any finite-to-one $f:\omega\to\omega$ is injective on a set in the ultrafilter. A Ramsey ultrafilter is one that is simultaneously a $p$-point, and a $q$-point. Miller proved ("There are no $Q$-points in Laver's model for the Borel conjecture", Proc. Amer. Math. Soc. 78 (1980), no. 1, 103–106) that it is consistent that there are no $q$-points. The consistency of the non-existence of $p$-points is significantly harder, and due to Shelah (see for example Chapter VI of his "Proper and improper forcing"). There is a fairly extensive literature on related results. You may want to start by looking at Blass' article in the Handbook of Set Theory, "Combinatorial Cardinal Characteristics of the Continuum".
|
{
"source": [
"https://mathoverflow.net/questions/62312",
"https://mathoverflow.net",
"https://mathoverflow.net/users/7521/"
]
}
|
62,642 |
Does anyone know why $d_3: H^* (X, K^0(point))\rightarrow H^{*+3}(X,K^0(point))$ is actually $Sq^3$ extended to $\mathbb{Z}$ coefficients?
|
This follows from the following considerations: This differential in the Atiyah-Hirzebruch spectral sequence must be a stable cohomology operation for general nonsense reasons (the first nonvanishing differential always is, no matter what the generalized cohomology theory is). There are exactly two stable cohomology operations $H^*(X) \to H^{*+3}(X)$ with integer coefficients. One of them is zero, and the other is $\beta \circ Sq^2 \circ r$, where $r$ is reduction mod 2 and $\beta$ is the Bockstein from mod-2 cohomology to integer cohomology. This comes from a calculation of the cohomology of Eilenberg-Mac Lane spaces, which describe all possible cohomology operations; for n sufficiently large we have $H^{n+3}(K(\mathbb{Z},n)) = \mathbb{Z}/2$. The $d_3$ differential is not the zero cohomology operation. For this, it suffices to find one space for which this differential is nontrivial (and you can find this by actually calculating the complex K-groups). I believe that you can find this for $\mathbb{RP}^2 \times \mathbb{RP}^4$; perhaps someone more industrious can flesh this out?
|
{
"source": [
"https://mathoverflow.net/questions/62642",
"https://mathoverflow.net",
"https://mathoverflow.net/users/14354/"
]
}
|
62,790 |
Among several possible definitions of ordered pairs - see below - I find Kuratowski's the least compelling: its membership graph (2) has one node more than necessary (compared to (1)), it is not as "symmetric" as possible (compared to (3) and (4)), and it is not as "intuitive" as (4) - which captures the intuition that an ordered pair has a first and a second element. Membership graphs for possible definitions of ordered pairs (≙ top node, arrow heads omitted) 1: (x,y) := { x , { x , y } }
2: (x,y) := { { x } , { x , y } } (Kuratowski's definition)
3: (x,y) := { { x } , { { x } , y } }
4: (x,y) := { { x , 0 } , { 1 , y } } (Hausdorff's definition)
So my question is: Are there good reasons to choose Kuratowski's definition (or did Kuratowski himself give any) instead of one of the more "elegant" - sparing, symmetric, or intuitive - alternatives?
|
Kuratowski's definition arose naturally out of Kuratowski's idea for representing any linear order of a set $S$ in terms of just sets, not ordered pairs. The idea was that a linear ordering of $S$ can be represented by the set of initial segments of $S$. Here "initial segment" means a nonempty subset of $S$ closed under predecessors in the ordering. When applied to the special case of two-element sets $S$, this gives the Kuratowski ordered pair.
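A small illustration in Python (frozensets standing in for sets; the helper names are illustrative): for the two-element order $x<y$, the nonempty initial segments are exactly $\{x\}$ and $\{x,y\}$, which is the Kuratowski pair, and the pair satisfies the characteristic property $(a,b)=(c,d)$ iff $a=c$ and $b=d$ on a small sample of values.

```python
from itertools import combinations

def kuratowski(a, b):
    return frozenset({frozenset({a}), frozenset({a, b})})

def initial_segments(elements, le):
    """Nonempty subsets of `elements` closed under predecessors for the linear order `le`."""
    segs = set()
    for k in range(1, len(elements) + 1):
        for sub in combinations(elements, k):
            if all(x in sub for y in sub for x in elements if le(x, y)):
                segs.add(frozenset(sub))
    return segs

# the two-element order 0 < 1: its nonempty initial segments form the Kuratowski pair (0, 1)
print(initial_segments([0, 1], lambda a, b: a <= b) == kuratowski(0, 1))

# characteristic property of ordered pairs, checked on a small sample
sample = range(3)
print(all((kuratowski(a, b) == kuratowski(c, d)) == ((a, b) == (c, d))
          for a in sample for b in sample for c in sample for d in sample))
```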
|
{
"source": [
"https://mathoverflow.net/questions/62790",
"https://mathoverflow.net",
"https://mathoverflow.net/users/2672/"
]
}
|
62,818 |
It is well known that intuitive set theory (or naive set theory) is characterized by having paradoxes, e.g. Russell's paradox, Cantor's paradox, etc. To avoid these and any other discovered or undiscovered potential paradoxes, the ZFC axioms impose constraints on the existence of a set. But ZFC set theory is built on mathematical logic, i.e., first-order language. For example, the axiom of extensionality is the wff $\forall A B(\forall x(x\in A\leftrightarrow x\in B)\rightarrow A=B)$. But mathematical logic also uses the concept of sets, e.g. the alphabet (a set of symbols), the set of variables, the set of formulas, the set of terms, as well as functions and relations that are in essence sets. However, I find that these sets are used freely without worrying about the existence issues or paradoxes that occur in intuitive set theory. That is to say, mathematical logic is using intuitive set theory. So, is there any paradox in mathematical logic? If not, why not, and by what reasoning can we exclude this possibility? This reasoning should not be ZFC (or any other analogue) and should lie beyond current mathematical logic, because otherwise ZFC depends on mathematical logic while mathematical logic depends on ZFC, constituting circular reasoning. If yes, what should we do? Since we cannot tolerate paradoxes in intuitive set theory, neither should we tolerate paradoxes in mathematical logic, which is considered the very foundation of the whole of mathematics. Of course we have the third answer: we do not know yes or no, until one day a genius finds a paradox in the intuitive set theory used at will in mathematical logic, and then the entire edifice of mathematics collapses. This problem has puzzled me for a long time, and I will appreciate any answer that can dissipate my apprehension. Thanks!
|
I have been asked this question several times in my logic or set theory classes.
The conclusion that I have arrived at is that you need to assume that we know how to deal
with finite strings over a finite alphabet. This is enough to code the countably many
variables we usually use in first order logic (and finitely or countably many constant,
relation, and function symbols). So basically you have to assume that you can write down things.
You have to start somewhere, and this is, I guess, a starting point that most mathematicians
would be happy with.
Do you fear any contradictions showing up when manipulating finite strings over a finite
alphabet? What mathematical logic does is to analyze the concept of proof using mathematical methods.
So, we have some intuitive understanding of how to do maths, and then we develop mathematical
logic and return and consider what we are actually doing when doing mathematics.
This is the hermeneutic circle that we have to go through since we cannot build something from nothing. We strongly believe that if there were any serious problems with the foundations of mathematics (more substantial than just assuming a too strong collection of axioms), the problems would show up in the logical analysis of mathematics described above.
|
{
"source": [
"https://mathoverflow.net/questions/62818",
"https://mathoverflow.net",
"https://mathoverflow.net/users/5072/"
]
}
|
62,843 |
Let $X$ be a variety. Then, is $X$ path connected? And by path connected, I mean any two closed points $P, Q$ on the variety can be connected by the image of a finite number of non-singular curves.
|
Given any two points on a projective variety, blow them up and re-embed the blown-up variety in P^N. Then by Bertini, any general linear section of the right codimension will meet the variety in an irreducible curve which also meets both exceptional divisors. Then blowing back down gives an irreducible curve connecting the original two points. Normalizing that curve gives a map from just one smooth connected curve that connects your two points. (I learned this trick from David Mumford.) – roy smith 11 hours ago
|
{
"source": [
"https://mathoverflow.net/questions/62843",
"https://mathoverflow.net",
"https://mathoverflow.net/users/9035/"
]
}
|
62,949 |
Bochner's theorem states that a positive definite function is the Fourier transform of a finite Borel measure. As well, an easy converse of this is that a Fourier transform must be positive definite. My question is: is there a high-brow explanation for why positive definiteness and Fourier transforms go hand-in-hand? As I understand it, positive definiteness imposes wonderfully strong regularity conditions on the function. We immediately deduce that the function is bounded above at its value at 0, that it is non-negative at 0 and that continuity at 0 implies continuity everywhere. A leading example I have in mind comes from probability. One can show (Levy's Theorem) that a sum of iid rv converges weakly to some probability distribution by considering the product of characteristic functions and showing that its tail converges to 1 around an interval containing 0, so by positive definiteness and by the identity $1-\mbox{Re} \phi(2t) \leq 4(1-\mbox{Re} \phi(t))$ this implies convergence to a degenerate distribution. It just seems rather mysterious to me how this kind of local regularity becomes global. Edit: To be a little more specific, I understand that the Radon Nikodym derivative is positive and $e^{ix}$ is positive definite. I am more interested in consequences of positive-definiteness on the regularity of the function. For example, if one takes the 2x2 positive definite matrix associated with the function and considers its determinant, it follows that $|f(x)|\leq |f(0)|$. If I take the 3x3 positive definite matrix, I can conclude that if $f$ is continuous at 0, it is then continuous everywhere. My issue is that these types of arguments give me no intuition at all as to what positive definiteness is. Let me thus add an additional question: what is it about positive definiteness that adds such regularity conditions?
|
Perhaps the phenomenon you are asking about is: why is the definition of a positive-definite function natural? One answer is that positive-definite functions are exactly coefficients of group representations, in the following sense. If $\pi : \mathbb{R}\to U(H)$ is a unitary representation of $\mathbb{R}$ on some Hilbert space $H$, and $h\in H$ is a vector, then the function $$t\mapsto \langle \pi (t) h, h\rangle$$ is positive-definite. Conversely, given a positive-definite function $\phi$, there exists a Hilbert space $H$, a vector $h\in H$ and a unitary representation $\pi$ of $\mathbb{R}$ on $H$, for which $\phi(t)=\langle \pi(t)h,h\rangle$. Indeed, the $n\times n$ matrix occurring in the definition of a positive definite function is nothing more than the Gram matrix of inner products $\langle \pi (t_i) h, \pi (t_j) h\rangle$; and positivity of this matrix is just a reflection of the fact that the inner product of $H$, restricted to the linear span of $\pi(t_i)h$, $i=1,\dots,n$, is positive-definite. The Fourier transform goes from the functions on the group to functions on the space of irreducible unitary representations of the group, and thus switches positivity and complete positivity.
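A quick numerical illustration of the Gram-matrix remark (the particular $\phi$ below, the characteristic function of a Gaussian, is just a convenient example): for any points $t_1,\dots,t_n$, the matrix $\big(\phi(t_j-t_k)\big)_{j,k}$ is Hermitian and positive semidefinite.

```python
import numpy as np

def phi(t, a=0.7):
    """Characteristic function of a Gaussian with mean a and variance 1."""
    return np.exp(1j * a * t - t ** 2 / 2.0)

rng = np.random.default_rng(0)
t = rng.uniform(-5, 5, size=40)
A = phi(t[:, None] - t[None, :])              # Gram matrix A_{jk} = phi(t_j - t_k)

print(np.allclose(A, A.conj().T))             # Hermitian
print(np.linalg.eigvalsh(A).min() >= -1e-10)  # eigenvalues are (numerically) non-negative
```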
|
{
"source": [
"https://mathoverflow.net/questions/62949",
"https://mathoverflow.net",
"https://mathoverflow.net/users/934/"
]
}
|
62,972 |
This question is possibly ill-advised. (If it is not right for this site I will delete it.) I, suddenly, have students. It is very clear to me that there is nothing in my education that has prepared me for the task of training graduate students. Yes, I know that graduate school is the place where one finally assumes full responsibility for one's own mathematical progress. It is also equally clear to me that there are innumerable things that an advisor might do, unwittingly, to irrevocably damage the career of their own student. This is keeping me up at night. And unlike searching for advice on, say, parenting, it appears that most people keep their opinions on the process to themselves, especially with respect to issues specific to training mathematicians . The more senior people I have approached have generally told me that "things work themselves out". I see people I know, not so much younger than me, for whom the job market is not working itself out. I was very lucky, and as a result I have many questions about things I didn't deal with myself. I don't know how to strike the balance between a doable research project and a significant one. I don't know how to help students move from reading background into exploring on their own. I don't know when and how much to help when they are struggling, or what to say when they become unhappy about their progress. And I don't know where to find resources to do so. As I've said, sometimes I don't know that people take my concerns seriously... my own mentors deal with students at an n'th rate university, rather than an 8n'th. Any direction would be appreciated. (This question is anonymous, but not for my own sake.)
|
One important thing is to make sure your students talk enough to other mathematicians, by introducing them to people at conferences or visitors to your university, encouraging them to talk regularly with other faculty, making sure they get to know some of your friends and collaborators, trying to help them find other mentors, etc. Ideally, they should have substantive interactions with a mixture of other specialists in their area and mathematicians in other areas. Aside from the obvious intellectual benefits (learning from many people and developing one's own identity as a researcher) and career benefits (getting good letters of recommendation), this directly addresses one of the biggest advising issues, namely the Rumsfeldian unknown unknowns. You may not know what your blind spots are as an advisor, or how to fix them even if you can identify them, but talking to other mathematicians will help your students fill in any gaps.
|
{
"source": [
"https://mathoverflow.net/questions/62972",
"https://mathoverflow.net",
"https://mathoverflow.net/users/14678/"
]
}
|
62,996 |
It is a standard fact that for any finite morphism of proper Noetherian $A$-schemes ($A$ being Noetherian), the pullback of an ample line bundle is ample. The usual proof of this fact is via Serre's cohomological criterion for ampleness. However, since the statement seems, on its face, to have nothing to do with cohomology, I thought the following question worth asking: Does anyone know a reasonable proof of this fact that does not go through cohomology?
|
Unfortunately, this is too long for a comment. Can't we directly show that for every coherent sheaf $F$ on $X$ we have that $F\otimes (f^*L)^m$ is generated by global sections for $m\gg 0$? Since $f$ is finite and $L$ is ample, we have that $f_*F\otimes L^m$ is generated by global sections for $m\gg 0$. So there is a surjection $O_Y^{(I)} \twoheadrightarrow f_* F\otimes L^m$. Note that $f_* F\otimes L^m=f_* (F\otimes f^* (L^m))$ by the projection formula. Since pullback is right exact and commutes with the tensor product, we get an induced surjective map $O_X^{(I)}=f^*O_Y^{(I)}\twoheadrightarrow f^* f_* (F\otimes (f^* L)^m)$. Finally, the natural map $f^* f_* (F\otimes (f^* L)^m)\to F\otimes (f^* L)^m$ is surjective since $f$ is affine. These two maps together give the desired surjection $O_X^{(I)}\twoheadrightarrow F\otimes (f^* L)^m$.
|
{
"source": [
"https://mathoverflow.net/questions/62996",
"https://mathoverflow.net",
"https://mathoverflow.net/users/5094/"
]
}
|
63,064 |
This question arose when I was trying to use this answer to understand Reid's "Young Person's guide to Canonical Singularities". In particular, on page 352, when computing the blow-up $Y\rightarrow A^2/\mu_3$ of the quotient of the affine plane by the cyclic group of order 3, one arrives at the conclusion that the exceptional divisor is $E\sim P^1$ (no problems there) and $\mathcal{O}_E(-E)\sim \mathcal{O}(3)$ (problems here). Given a variety $Y$ and an effective Cartier divisor $D$ on it, there seems to be a pretty standard exact sequence: $$0 \longrightarrow \mathcal{O}_Y \longrightarrow \mathcal{O}_Y(D) \longrightarrow\mathcal{O}_D(D)\longrightarrow 0 $$
As far as I understand, if $U$ is an open set in $Y$ and $D\cap U = div(g)_U$ (for $D$ a hypersurface, if you want, and extend by linearity), then $$ \mathcal{O}_Y(D)(U)= \{g \in \mathcal{O}_Y(U) \vert div(g)\geq D \}$$ or equivalently $g/f$ is regular. The first map must be something like $g\rightarrow gf$ maybe with some order. A good answer to my question would include: Is this correct? What is $\mathcal{O}_D(D)$? What is the second map? What does $\mathcal{O}_D(-D)$ mean? Why $\mathcal{O}_E(-E) \sim \mathcal{O}(3)$? I understood the RHS is generated by polynomials of degree 3? I am aware this is a simple question and probably everyone knows why, but I could not find a proper answer for it.
|
Expanding the comment of Donu Arapura, let $X$ be a variety and $Y\subset X$ a subvariety.
Then, you have a short exact sequence of sheaves
$$
0\to\mathcal I_Y\to\mathcal O_X\to\mathcal O_X/\mathcal I_Y\to 0,
$$
where $\mathcal I_Y$ is the ideal sheaf of $Y$. By definition, $\mathcal O_X/\mathcal I_Y=\mathcal O_Y$ is the structure sheaf of $Y$. If $\mathcal F$ is any invertible sheaf, then tensoring by $\mathcal F$ leaves the sequence exact, so that you have a short exact sequence
$$
0\to\mathcal I_Y\otimes\mathcal F\to\mathcal F\to\mathcal O_Y\otimes\mathcal F\to 0
$$
and $\mathcal O_Y\otimes\mathcal F$ is just the restriction of $\mathcal F$ to $Y$. Now, suppose that your $Y=D$ is a (Cartier) divisor, and $\mathcal F=\mathcal O_X(D)$ is its associated (invertible) sheaf of sections (meromorphic functions with poles allowed along $D$). In this case, $\mathcal I_D=\mathcal O_X(-D)$ and the above-mentioned short exact sequence becomes
$$
0\to\mathcal O_X\to\mathcal O_X(D)\to\mathcal O_D(D)\to 0,
$$
and $\mathcal O_D(D)$ is nothing but the restriction $\mathcal O_X(D)\otimes\mathcal O_D$ of the invertible sheaf $\mathcal O_X(D)$ to the hypersurface $D$. You can argue dually for $\mathcal O_D(-D)$, which is thus just the restriction to $D$ of the invertible sheaf $\mathcal O_X(-D)$. So your "second map" is just the restriction map. For your last question, a heuristic explanation is the following: blow up a smooth point on a surface to obtain a new surface $\widetilde X$, and call the exceptional divisor $E$. Then $\mathcal O_{\widetilde X}(-E)$ restricted to $E$, which is precisely $\mathcal O_E(-E)$, can be easily shown to be isomorphic to the (anti)tautological line bundle $\mathcal O(1)$ over $\mathbb P^1\simeq E$ (you can find this in any introductory book on algebraic geometry). Now, you are blowing up a singular point which is an isolated quotient singularity of order three, thus in some sense you are "counting three times" your point, so that $\mathcal O_E(-E)$ now becomes isomorphic to $\mathcal O(3)$.
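(Added gloss, not part of the original answer.) One can make the "counting three times" heuristic quantitative by computing degrees. For a curve $E$ on a surface,
$$
\deg \mathcal O_E(-E) \;=\; (-E)\cdot E \;=\; -E^2 .
$$
For the blow-up of a smooth point the exceptional curve satisfies $E^2=-1$, giving $\mathcal O_E(-E)\simeq\mathcal O_{\mathbb P^1}(1)$, while for the resolution of $\mathbb A^2/\mu_3$ (assuming the standard diagonal $\frac13(1,1)$-action, which is the case Reid computes) the exceptional curve has $E^2=-3$, so $\mathcal O_E(-E)\simeq\mathcal O_{\mathbb P^1}(3)$.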
|
{
"source": [
"https://mathoverflow.net/questions/63064",
"https://mathoverflow.net",
"https://mathoverflow.net/users/1887/"
]
}
|
63,095 |
I tried to answer an earlier question as to uses of GRR, just from my reading, although I do not understand GRR. Today I tried to understand the possible idea behind GRR. After editing my answer accordingly, it occurred to me I was asking a question instead of giving an answer. My question is roughly whether the following speculation is in the ball park as to the purpose of GRR. I've been thinking about Riemann–Roch today, and reading Riemann. After dealing with a fixed divisor D, Riemann observes that his result proves every divisor of degree g+1 dominates the pole divisor of a non constant meromorphic function. Then he says that it may be possible to find a special divisor of even lower degree that dominates the poles of a non constant function. I.e. he begins to vary the divisor. By a rank calculation he shows one cannot expect a non constant function unless the pole divisor has degree at least (g/2)+1. Now following his lead, we are led to vary the curve instead of the divisor. E.g. we might consider the family of curves over the moduli space. Then a good Riemann–Roch theorem should let us relate the Riemann–Roch theorem for the curve fibers to a conclusion for a related sheaf on the base space, like a Künneth type formula relating the cohomology of the base space, the total space and the fiber. I.e. a nice divisor like the canonical divisor on a curve should be cut out on each curve fiber by a divisor on the total space, by intersecting it with each curve. (E.g. we could restrict the sheaf O(1) on the plane to every curve of degree 4.) Then we can push this sheaf from the total space down to the base space, i.e. the moduli space of curves. A good relative Riemann–Roch theorem would then relate the universal canonical sheaf on the total space, the canonical sheaves on the curve fibers, and the cohomology of the push down of the universal sheaf to the base space, the moduli space of curves. Ideally such a relation would let one compute invariants of sheaves on the moduli space that arise by pushing down sheaves on the total space of curves. Hopeful applications might include finding ample sheaves on Mg, hence proving projectivity, and computing invariants of the canonical sheaf on Mg, hence potentially estimating the Kodaira dimension. Now this is all speculation since I do not understand even the statement of GRR, and have not read the paper of Harris-Mumford in which the application I cited above is made. Moreover I have never seen any proof of the Kodaira dimension of Mg using this method. Perhaps someone more knowledgeable will comment on these speculative applications? Is this roughly the idea behind GRR and Mumford's applications of it? I.e. is the idea of GRR to understand the cohomology of a sheaf on a base space which arises as a push down, by restricting it to the fibers of the map? And how helpful is this in practice? Specific question: if chi(O) is constant on fibers, does GRR allow one to determine chi(O) of either the total space or the base space from the other?
|
Here is how I think about G-R-R in the context of moduli of curves. I realize now that I wrote something quite long. Let me recall first the definition of the tautological ring. It is a consequence of the results on the birational geometry of $\overline M_g$ that there is no hope of understanding the whole Chow ring of $\overline M_g$ -- for instance, unlike in misleading low genus examples, the Chow ring will in general be infinite-dimensional. In David Mumford's "Towards an enumerative geometry..." he introduces a finite-dimensional subring of the Chow ring which contains all "geometrically natural" classes in the Chow ring and proposes studying it instead; this subring is called the tautological ring. Let me quote: "Whenever a variety or topological space is defined by some universal property, one expects that by virtue of its defining property, it possesses certain cohomology classes called tautological classes. The standard example is a Grassmannian [...] by its very definition, there is a universal bundle $E$ on Grass of rank $k$, and this induces Chern classes $c_l(E)$ in both the cohomology ring and Chow ring of Grass." Let me expand on the meaning of "geometrically natural". There are several possible definitions of the tautological ring. The one used by Mumford is that it is the subring generated by the so-called $\kappa$-classes, which is not really the right one: you should also for instance consider the boundary divisors as tautological classes (but this is implicit already in Mumford's paper). A nice definition is the one of Faber and Pandharipande, which defines the tautological ring for all spaces $\overline M_{g,n}$ simultaneously: it is the minimal system of subrings which contains all fundamental classes, is closed under all gluing morphisms, and is closed under all forgetting-points morphisms. Morally what this means is that: (i) for any "natural" bundle you can write down directly in terms of the moduli functor, its Chern classes are going to be tautological; (ii) any sort of "natural" gluing procedure on curves is going to keep you inside of the tautological ring. For example, the $\lambda$-classes (Chern classes of the Hodge bundle) are tautological, the $\psi$-classes are tautological (the line bundles given by the cotangent line at a marked point), and the $\kappa$-classes are tautological. OK, so let us return to G-R-R. Let $f \colon X \to Y$ be a proper morphism. On one side of the equation you have the Chern character of the derived pushforward $Rf_\ast F$. On the other side you have the pushforward of the Chern character of $F$ and the Todd class of the relative tangent sheaf $T_f$. The point is that $F$, $Rf_\ast F$ and $T_f$ can all be made sense of by working locally/fiberwise: we don't need to know anything about the global structure of $Y$ to apply G-R-R to $f$ and $F$. But this is also how the tautological ring was set up: the classes in the tautological ring are exactly those that can be defined by pushing around classes of "fiberwise" defined bundles, which means that these are exactly the classes that can be defined without making any reference to any "global" structure of the moduli space. So in hindsight Grothendieck-Riemann-Roch seems tailor-made for the study of tautological rings. On the other hand, this is also a limitation of G-R-R: it will produce lots of relations and identities relating tautological classes to each other, but it will never prove any "global" statement about any of them.
As an example, it is possible to algorithmically compute the intersection number on $\overline M_{g,n}$ for any polynomial in boundary strata and $\lambda$-, $\psi$- and $\kappa$-classes. First you express the $\kappa$-classes as pushforwards of $\psi$-classes, then G-R-R can be used to express the $\lambda$-classes in terms of pushforwards of $\psi$-classes, which will finally reduce your computation to an intersection number only involving $\psi$-classes. All this was completely formal, but sooner or later you are going to need to use some global geometric property of $\overline M_{g,n}$ to find an actual number, and this is where it comes in: the Witten conjecture/Kontsevich's theorem tells you how to compute any intersection of $\psi$-classes. So let me finally talk a bit about the article of Harris and Mumford. The first application of G-R-R in their article is to derive the formula $K_{\overline{M}_g} = 13\lambda_1 - 2\delta_0 - 3\delta_{1} - 2\delta_2 - \ldots - 2\delta_{\lfloor g/2\rfloor}$ in the tautological ring. This is done by applying GRR to the projection from the universal curve and truncating after the first term. Incidentally, if you don't truncate after the first term, you get Mumford's formula (derived in "Towards an enumerative geometry...") expressing the Chern character of the Hodge bundle in terms of $\kappa$-classes and pushforwards of $\psi$-classes from the boundary strata. But again, GRR will not tell you any global geometric information, such as whether a class is big or ample. The idea is then to find an effective divisor $D$ such that $mK_{\overline M_g} = D + a\lambda_1$ with $a > 0$. It turns out that this is possible for $D$ equal to the locus of $k$-gonal curves, where they pick $g = 2k-1$. They describe in the article how they came up with this particular choice of $D$ by trying to generalize the work of Freitag on the Kodaira dimension of $A_g$ for $g$ large; in particular I think that there should be a Siegel modular form whose pullback to $M_g$ conjecturally would have $D$ as its vanishing locus. I don't know if this was actually worked out in later work. Then $nK_{\overline M_g}$ for large enough $n$ defines a birational map, using the fact that $\lambda_1$ is ample on $A_g$, ultimately because the Satake compactification is the Proj of the ring of Siegel modular forms, i.e. the sections of powers of the determinant of the Hodge bundle. (However $\lambda_1$ is not ample on $\overline M_g$!) This part is clarified by the later article of Cornalba and Harris showing that a linear combination $a\lambda - b\delta$ is ample if and only if $a > 11b$. The rational Picard group of $\overline M_g$ is generated by $\lambda_1$ and the boundary divisors, so any effective divisor has an expression of the form $a\lambda - \sum b_i \delta_i$, so estimating the Kodaira dimension of $\overline M_g$ really comes down to finding effective divisors such that the slopes $a/b_i$ are small. Anyway, the second application of GRR in their article is to show that on the open part $M_g$, the $k$-gonal locus is a multiple of $\lambda_1$. Actually, this part uses even more crucially Porteous's formula: once they express $k$-gonality in terms of a morphism of bundles having lower than expected rank, the class of the $k$-gonal locus can be expressed in terms of Chern classes of the two bundles, i.e. in terms of tautological classes. It follows then that $D$ is the sum of a multiple of $\lambda_1$ and an integral linear combination of boundary divisors.
Finally these integers are determined by evaluating the divisors on suitable "test curves". They conclude that $\overline M_g$ is of general type for big $g$.
|
{
"source": [
"https://mathoverflow.net/questions/63095",
"https://mathoverflow.net",
"https://mathoverflow.net/users/9449/"
]
}
|
63,142 |
The question is in the title, but here is some background/reminders: A subgroup $H\neq\{1\}$ of a finite group $G$ is called a Frobenius complement if $H\cap H^g = \{1\}$ for all $g\in G\backslash H$. Given such a Frobenius complement, the corresponding Frobenius kernel is defined by
$$
N = \left(G\backslash\bigcup_{x \in G}H^x\right)\cup\{1\}.
$$
Frobenius proved that $N$ is a normal subgroup of $G$, from which it follows immediately that $G$ is a semidirect product of $N$ and $H$. Frobenius's proof is a little gem of mathematics, using character theory. It is now over 100 years old and, at least at the beginning of this century, no alternative proof was known. My question is just a confirmation request, lest I should say something false in my upcoming representation theory lecture: Is there still no proof not using character theory of the fact that a Frobenius kernel is a normal subgroup?
|
Nothing much to say here. There is (as of now) no proof of this fact without character theory. Although I think there is a direct counting proof when $H$ has even order, and a transfer argument
tells you that in a minimal counterexample, $H$ must be perfect (since $H$ is a Hall subgroup
of $G$). Hence in a minimal counterexample, $H$ must be a non-trivial perfect group of odd order. There is no such group, but proving that requires a lot more character theory than the proof
of Frobenius.
|
{
"source": [
"https://mathoverflow.net/questions/63142",
"https://mathoverflow.net",
"https://mathoverflow.net/users/35416/"
]
}
|
63,221 |
[Please stop upvoting. I don't want to get a gold badge out of this...] [Too late... is there a way to give a medal back?] Dear community, the VU University Amsterdam , my employer, is planning to shut down its pure math section and fire four tenured faculty members, including me. This is a very drastic step for a department to take, and sadly, this kind of thing is becoming more and more common (Rochester, the Schrödinger institute, Bangor, Utrecht (CS) come to mind). I will explain about our particular situation a bit more later on. I apologize for abusing MO in this way, but I think this is an issue that we as the mathematically active community must try to stop or else many of our departments will soon be run solely on a business-oriented basis and pure research will give way to an industry of fundraising and revenue generation, eventually rendering our universities' work irrelevant for society. In our case, we have tried fighting this with creating pressure on all decision-taking levels of our university from the department to the president by rallying for support from mathematicians both offline and online in the hope that a public outcry will make an impression. We have involved the union to represent us and try to stop or delay the firings. Question: what do you think are other good measures to fight something like that? Apart from asking for your ideas, I would also like to ask you to consider supporting us in an online petition we have set up. If you do decide to support us, keep in mind that an anonymous signature isn't as helpful. Here is what is happening at our university (it's from the online petition): As with most universities in the Netherlands, the VU University Amsterdam suffers from financial underfunding. All faculties and all departments at the VU are asked to take measures to deal with this problem. For the Department of Mathematics a committee of applied mathematicians has put forward a proposal to close the Geometry Section, which consists of six tenured positions and focuses on algebraic K theory, algebraic topology, and general/geometric topology. At the same time, some of the funds freed up by the abolition of the Geometry Section are to be used for the creation of two additional positions in the Analysis Section. This proposal has received the endorsement of the Dean of the Faculty of Sciences and of the Executive Board of the university. Two members of the Geometry Section will retire in the next two years and closure of the section will allow for termination of the other four tenured positions. Thus, the proposal's drastic measures will merely cut the total number of positions by two. Of the four positions slated for termination, one is in general/geometric topology and has been held since 2001 by Jan Dijkstra. The other three people were appointed less than four years ago: Dietrich Notbohm, Rob de Jeu, and Tilman Bauer. This introduced algebraic K-theory and algebraic topology as new research subjects at the VU. In 2010, a research evaluation of all Dutch mathematics departments by an international committee took place. The committee welcomed these changes very much, stating that strong young people provided new impetus to the group in mainstream mathematics and offered promise for the future. What are the consequences of the closure of the Geometry Section for the university? Algebra, algebraic topology, and general/geometric topology will vanish. 
Algebraic K-theory and general/geometric topology will cease to exist in the Netherlands, and only Utrecht will be left with research in algebraic topology. No pure mathematicians will be on the staff anymore. The university will give up central areas of mathematics and adopt a narrow research profile. The education of students offered at the VU will also become much narrower, which may lead to a drop in the yearly intake of students, and will certainly compromise the academic chances for VU graduates.
|
Many applied mathematicians (based on no empirical evidence, I'd guess a vast majority) feel that pure mathematics is absolutely necessary because they apply pure mathematics to the real world. A university that is hostile to pure mathematics may thus find it difficult to maintain a strong status in the applied mathematics community. So I suggest that you try to find some high profile applied mathematicians who will support your cause. Hopefully some such people are reading MO these days...
|
{
"source": [
"https://mathoverflow.net/questions/63221",
"https://mathoverflow.net",
"https://mathoverflow.net/users/4183/"
]
}
|
63,301 |
Let $f:X\rightarrow Y$ be a morphism of schemes. When $\operatorname{Pic} Y\rightarrow \operatorname{Pic} X$ is an embedding and $f_{*}\mathscr{O}_{X}$ is invertible, it is the structure sheaf of $Y$. In the proof of Zariski's Main Theorem, we have: if $f$ is birational, finite, integral, and $Y$ is normal, then $f_{*}\mathscr{O}_{X}$ is the structure sheaf of $Y$. My questions are: 1) What exactly prevents $f_{*}\mathscr{O}_{X}$ from being the structure sheaf? 2) Are there any necessary and sufficient conditions guaranteeing that $f_{*}\mathscr{O}_{X}$ is the structure sheaf?
|
Q : Exactly what information is contained in $f_*\mathscr O_X$? Look at the
definition. For any $U\subseteq Y$ open, $f_*\mathscr O_X(U) = \mathscr O_X(f^{-1}(U))$ =
regular functions on $f^{-1}(U)$. So the information in $f_*\mathscr O_X$ is related
to the sets in $X$ of form $f^{-1}(U)$. Cases where $f_*\mathscr O_X$ contains as little information about $X$ as possible. If $X$ is irreducible and projective and $f$ is constant, e.g. if $Y$ is affine, then
the only non empty set of form $f^{-1}(U)$ in $X$ is $X$ itself. In this case
$f_*\mathscr O_X$ is a skyscraper sheaf with stalk $k$ supported on the image point
of $f$ in $Y$. There is very little information here about $X$, but perhaps we do
see that $f$ is constant and that $X$ is connected. More generally, if $Z$ is a
projective variety, $Y$ is any variety, and $X = Z\times Y$, and $f:Z\times Y\to Y$
is the projection, then $f^{-1}(U) = Z\times U$, so an element of $f_*\mathscr
O_X(U)$, i.e. a regular function on $f^{-1}(U)$, is determined by its restriction to
$\{p\}\times U$ for any $p\in Z$, i.e., a regular function on $U$ in $Y$. Thus in
this case we have $f_*\mathscr O_X = \mathscr O_Y$. Consequently in this case
$f_*\mathscr O_X$ recovers $Y$, but contains no information at all about $X$. In general, if $f:X\to Y$ is a projective morphism with every fiber connected, and
$Y$ is any normal variety, then $f_*\mathscr O_X = \mathscr O_Y$, so again
$f_*\mathscr O_X$ contains little information about $X$. Recall that if $X$ is a
projective variety then every morphism out of $X$ is a projective morphism, and more
generally a projective morphism $X\to Y$ is one that factors via an isomorphism of X
with a closed subvariety of $\mathbb P^n\times Y$, followed by the projection
$\mathbb P^n\times Y\to Y$. Suppose that $f:X\to Y$ is any projective morphism.
Then the fibers $f^{-1}(y)$ over points $y \in Y$ are all finite unions of projective
varieties. Therefore for any open set $U\subseteq Y$ containing the point $y$, the
only regular functions in $\mathscr O_X(f^{-1}(U)) = f_*\mathscr O_X(U)$ are constant
on every connected component of the fiber $f^{-1}(y)$. Thus $f_*\mathscr O_X$ can
contain little information about $X$ and $f$, other than at most the connected
components of the fibers. We shall see below that it contains exactly this
information. Cases where $f_*\mathscr O_X$ contains as much information about $X$ as possible. If $f:X\to Y$ is a map of affine varieties, then the global sections of $f_*\mathscr
O_X$ determine $X$ completely, since then $H^0(Y,f_*\mathscr O_X) = H^0(X,\mathscr
O_X)$, and then $X = \mathrm{Spec}\,H^0(X,\mathscr O_X)$ is the unique affine variety
with coordinate ring $H^0(X,\mathscr O_X)$. The generalization of this case is that
of any affine map $f:X\to Y$, since then $X$ can be recovered by patching together
the analogous construction from $H^0(U,f_*\mathscr O_X)$ for affine open sets
$U\subseteq Y$. Thus $X$ is completely determined by $f_*\mathscr O_X$ for any
affine map $f:X\to Y$, and this is essentially the only case. I.e. in general
$f_*\mathscr O_X$ is always a quasi coherent $\mathscr O_Y$ algebra, and if we want
it to determine a variety, as opposed to a "scheme", it is reasonable to assume for
all $U\subseteq Y$ affine open, that $f_*\mathscr O_X(U)$ is a finitely generated k
algebra, as well as an $\mathscr O_Y(U)$ algebra. We may call temporarily such an
$\mathscr O_Y$ algebra "of finite type". Thus if $f:X\to Y$ is any morphism such that
$f_*\mathscr O_X$ is of finite type, then the patching construction above yields not
necessarily $X$, but a variety $Z$ and an affine map $h:Z\to Y$ which factors via a
map $g:X\to Z$, where $f = h\circ g$, and where $g_*(\mathscr O_X) = \mathscr O_Z$.
In particular then, we have $f_*\mathscr O_X = (h\circ g)_*(\mathscr O_X) =
h_*(g_*(\mathscr O_X))= h_*(\mathscr O_Z)$. So since $h$ is affine, $f_*\mathscr O_X =
h_*(\mathscr O_Z)$ determines not $X$, but $Z$. (Kempf, section 6.5.) The case of an arbitrary projective morphism. Now when $f:X\to Y$ is any projective morphism, then $f_*\mathscr O_X$ is a coherent
$\mathscr O_Y$-module, hence we get a factorization of $f$ as $h\circ g:X\to Z\to Y$,
where $h:Z\to Y$ is affine, and where also $h_*(\mathscr O_Z) = f_*\mathscr O_X$.
Then $h$ is not only an affine map, but since $h_*(\mathscr O_Z)$ is a coherent $\mathscr
O_Y$-module, $h$ is also a finite map. Moreover $g:X\to Z$ is also projective and since
$g_*(\mathscr O_X) = \mathscr O_Z$, it can be shown that the fibers of $g$ are connected.
Hence an arbitrary projective map $f$ factors through a projective map g with connected
fibers, followed by a finite map $h$. Thus in this case, the algebra $f_*\mathscr O_X$
determines exactly the finite part $h:Z\to Y$ of $f$, whose points over $y$ are precisely
the connected components of the fiber $f^{-1}(y)$. One corollary of this is "Zariski's connectedness theorem". If $f:X\to Y$ is projective
and birational, and $Y$ is normal then $f_*\mathscr O_X= \mathscr O_Y$, and all fibers
of $f$ are connected, since in this case $Z = Y$ in the Stein factorization described
above. If we assume in addition that $f$ is quasi finite, i.e. has finite fibers, then
$f$ is an isomorphism. More generally, if $Y$ is normal and $f:X\to Y$ is any birational,
quasi - finite, morphism, then $f$ is an embedding onto an open subset of $Y$ ("Zariski's
'main theorem' "). More generally still, any quasi finite morphism factors through
an open embedding and a finite morphism.
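(Supplementary example, my own addition and not part of the original answer.) A concrete illustration of what can prevent $f_*\mathscr O_X$ from being $\mathscr O_Y$ is failure of normality of $Y$. Take the cuspidal cubic $Y=\{y^2=x^3\}\subset\mathbb A^2$ and its normalization
$$
f:\mathbb A^1\to Y,\qquad t\mapsto (t^2,t^3),
$$
which is finite and birational. Globally, $f_*\mathscr O_{\mathbb A^1}$ corresponds to $k[t]$, while $\mathscr O_Y$ corresponds to the proper subring $k[t^2,t^3]$; the extra section $t=y/x$ is integral over $\mathscr O_Y$ but not regular on $Y$. So here $f_*\mathscr O_X\supsetneq\mathscr O_Y$, in agreement with the statement above that normality of $Y$ is exactly what is needed in the finite birational case.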
|
{
"source": [
"https://mathoverflow.net/questions/63301",
"https://mathoverflow.net",
"https://mathoverflow.net/users/14008/"
]
}
|
63,347 |
When a math department lays off tenured staff, people cry out loud.
But, 10 years later, such memories are no longer popular discussion subjects, and so the information doesn't always spread. Those who lived through it will of course remember. But will the younger get to know of the troubled past of a given university? I would like to record incidents of universities laying off tenured math faculty for financial reasons. If you know of such an event, please write the name of the university, the year when it happened, and the number of tenured faculty that got laid off. Other relevant information, such whether or not there was a lawsuit, aggravating circumstances, etc. should also be included. (This is a follow up on this discussion about the VU Amsterdam laying off people.)
|
Two tenured professors at the University of Uppsala, Oleg Viro and Burglind Joricke, were forced to resign in 2007. The reason seems to have been a disagreement with the rector of the University, Anders Hallberg, over an appointment of an applied maths professor. (As far as I know, there weren't financial reasons involved, but still I thought it might be worthwhile to mention this here.) More details can be found here http://www.pdmi.ras.ru/~olegviro/Uppsala-8-2-2007.html
|
{
"source": [
"https://mathoverflow.net/questions/63347",
"https://mathoverflow.net",
"https://mathoverflow.net/users/5690/"
]
}
|
63,373 |
I'm looking for an elegant proof that any closed, oriented $3$-manifold $M$ is the boundary of some oriented $4$-manifold $B$.
|
I know of several different arguments. You can decide which one you think is most elegant... Rohlin's argument, which is actually quite geometric. You start with an immersion of the 3-manifold in $\mathbb{R}^5$ . You modify the immersion by a cobordism until it is an embedding, and then find an explicit 4-manifold bounding it. This is nicely explained in "A la recherche de la topologie perdue" . I believe this is also Autumn Kent's answer above. Thom's argument, with lots of algebraic topology. This is probably not the most elegant route if you only want this piece, although of course Thom tells you much more. Rourke's argument as sketched by Daniel Moskovich above. Indeed, any proof that the mapping class group is generated by Dehn twists also gives a proof that $\Omega_3 = 0$ . Dehn and Lickorish also have proofs of this. I also have a proof with Francesco Costantino, also direct and geometric. You take the compact 3-manifold and look at a generic map to $\mathbb{R}^2$ . The preimage of a generic point is a disjoint union of circles, which bounds a convenient canonical surface (a union of disks). Take these disks as the start of your 4-manifold. In codimension one singularities, two of these circles can merge, and the preimage of a little transversal is a pair of pants, which can be filled in with a 3-sphere (together with the disks already attached). In codimension 2, there are only two different interesting local models, and both can be filled in canonically with a 4-ball. The reason to prefer our proof (number 4) is that it is more efficient, in that (e.g.) for a 3-manifold triangulated with $n$ tetrahedra, it gives a 4-manifold with bounded geometry with $O(n^2)$ simplices. By comparison, the mapping-class group arguments of (3) tend to give a 4-manifold of complexity at least exponential in $n$ , and usually a tower of exponentials. (You can see this already in the inductive argument sketched out in Daniel Moskovich's answer.) Thom's proof (2) is completely non-explicit; I don't know how to extract any bounds from it. Rohlin's proof (1) can, I believe, be shown to give a 4-manifold with $O(n^4)$ simplices, although I never worked out all the details.
|
{
"source": [
"https://mathoverflow.net/questions/63373",
"https://mathoverflow.net",
"https://mathoverflow.net/users/13132/"
]
}
|
63,412 |
Let $s(n)$ denote the sum of primes less than or equal to n. Clearly, $s(n)$ is bounded from above by the sum of the first $n/2$ odd integers $+1$. $s(n)$ is also bounded by the sum of the first $n$ primes, which is asymptotically equivalent to $\frac{n^2}{2\log{n}}$. It should thus be possible to find estimates for $s(n)$ using the fact that for an $\epsilon > 0$ and $n$ large enough $s(n) < (1+\epsilon)\frac{n^2}{\log{n}}.$ I would like to know if there are any known sharp upper bounds for $s(n)$. That is, I am looking for a function $f(n)$ such that for every $n > N_0$ $$ s(n) \leq f(n)$$ As a way of relaxing the question, $s(n)$ could be regarded as the sum of the primes in the interval $[c,n]$ given a constant $c$.
|
By partial summation
$$ s(n) = n\pi(n)-\sum_{m=2}^{n-1}\pi(m) $$
so by the Prime Number Theorem
$$ s(n) = \frac{n^2}{\log n}-\sum_{m=2}^{n-1}\frac{m}{\log m}+O\left(\frac{n^2}{\log^2 n}\right). $$
The sum on the right is
$$ \sum_{m=2}^{n-1}\frac{m}{\log m} = \int_2^n \frac{x}{\log x}dx + O\left(\frac{n}{\log n}\right) $$
using the monotonicity properties of the integrand. Now the integral equals, by partial integration,
$$ \int_2^n \frac{x}{\log x}dx = \left[\frac{x^2}{2\log x}\right]_2^n + \int_2^n \frac{x}{2\log^2 x}dx = \frac{n^2}{2\log n} + O\left(\frac{n^2}{\log^2 n}\right).$$
Altogether we have
$$ s(n) = \frac{n^2}{2\log n} + O\left(\frac{n^2}{\log^2 n}\right).$$
This can be made more precise both numerically and theoretically.
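For what it is worth, here is a quick numerical check of the main term (my own addition, not part of the original answer); it only assumes a standard Python 3 installation.

from math import log

def sum_primes_upto(n):
    # sieve of Eratosthenes, then sum the primes up to n
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = bytearray(len(sieve[p * p::p]))
    return sum(i for i in range(2, n + 1) if sieve[i])

for n in (10 ** 4, 10 ** 5, 10 ** 6):
    print(n, sum_primes_upto(n) / (n ** 2 / (2 * log(n))))   # ratios slowly approach 1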
|
{
"source": [
"https://mathoverflow.net/questions/63412",
"https://mathoverflow.net",
"https://mathoverflow.net/users/1737/"
]
}
|
63,423 |
Is there a chess position with a finite number of pieces on the infinite chess board $\mathbb{Z}^2$ such that White to move has a forced win, but Black can stave off mate for at least $n$ moves for every $n$? This question is motivated by a question posed here a few months ago by Richard Stanley. He asked whether chess with finitely many pieces on $\mathbb{Z}^2$ is decidable. A compactness observation is that if Black has only short-range pieces (no bishops, rooks or queens), then the statement "White can force mate" is equivalent to "There is some $n$ such that White can force mate in at most $n$ moves". This probably won't lead to an answer to Stanley's question, because even if there are only short-range pieces, there is no general reason the game should be decidable. It is well-known that a finite automaton with a finite number of "counters" can emulate a Turing machine, and there seems to be no obvious reason why such an automaton could not be emulated by a chess problem, even if we allow only knights and the two kings. But it might still be of interest to have an explicit counterexample to the idea that being able to force a win means being able to do so in some specified number of moves. Such an example must involve a long-range piece for the losing side, and one idea is that Black has to move a rook (or bishop) out of the way to make room for his king, after which White forces Black's king towards the rook with a series of checks, finally mating thanks to the rook blocking a square for the king. If there are such examples, we can go on and define "mate in $\alpha$" for an arbitrary ordinal $\alpha$. To say that White has a forced mate in $\alpha$ means that White has a move such that after any response by Black, White has a forced mate in $\beta$ for some $\beta<\alpha$. For instance, mate in $\omega$ means that after Black's first move, White is able to force mate in $n$ for some finite $n$, while mate in $2\omega + 3$ means that after Black's fourth move, White will be able to specify how many more moves it will take until he can specify how long it will take to mate. With this definition, we can ask exactly how long-winded the solution to a chess problem can be: What is the smallest ordinal $\gamma$ such that having a forced mate implies having a forced mate in $\alpha$ for some $\alpha<\gamma$? Obviously $\gamma$ is infinite, and since there are only countably many positions, $\gamma$ must be countable. Can anyone give better bounds?
|
Here is my first try at a solution. Your idea was a good one, but
bishops are better than rooks, I surmise. The two pictures here are placed in some distinct parts of the infinite board.
The first just ensures it is White to move (in check), and that White's king
will never play a role, as capturing a black unit, which are nearly stalemated as is,
will release heavy pieces. [Diagram 1: http://www.freeimagehosting.net/uploads/3c8e277e7d.jpg] [Diagram 2: http://www.freeimagehosting.net/uploads/72ef1c9b7e.jpg] So White is left to checkmate with the four bishops and pawns.
White threatens checkmate via a check from below on the northwest diagonal,
and Black can only avoid this by moving the bishop northeast some amount.
Upon Black moving this bishop, White then makes the bishop check anyways,
the Black king moves where the Black bishop was, the pawn moves with check,
the Black king again retreats northeast along the diagonal, and then White
alternately moves the dark-square bishops, giving checks until the Black
bishop is reached when it is mate. The point of this second picture is that White cannot checkmate Black
unless the Black bishop plays a role. Four bishops are not enough to
checkmate a king on an infinite board, and hopefully I have set it up so
that the White pawns play no part once Black starts the king running northeast.
Pawns are not too valuable when they cannot become queens. In extended chess notation, White plays 1. Ke5 on board A,
then Black plays 1...Bz26 on board B, followed by
2. Bg3+ Kf6 3. e5+ Kg7 3. Bi5+ Kh8 4. Bf10+ Ki9 5. Bk7+ Kj10 6. Bh12+ ...,
as White successively cuts off NW-SE diagonals until the Black bishop
is reached. By moving the bishop X squares northeast on move 1, Black
can delay the checkmate for X moves, if I set this up proper. Other plans by White should be beatable by moving the Black king off
the long diagonal or capturing the light White bishop with the pawn.
Once Black's king exits the area with the pawns, the Black bishop
must be a part of the mating pattern. I don't think the Black king
can be forced back to that area. Well, this is a first try.
|
{
"source": [
"https://mathoverflow.net/questions/63423",
"https://mathoverflow.net",
"https://mathoverflow.net/users/14302/"
]
}
|
63,440 |
The action of a group $G$ on a topological space $X$ can be viewed as a functor $F: G \to \mathcal{Top}$ with $F(*)=X$. (Here I'm viewing a group as a category with one object, $ * $, and the morphisms are isomorphisms labeled by the group elements.) We can extend this idea and define the action of a groupoid $\mathcal{G}$ on a space to be a functor $F:\mathcal{G} \to\mathcal{Top}$. Are there any naturally occurring examples of a groupoid action on a space? (Other than the ones where the groupoid is actually a group.)
|
Perhaps the most natural example is given by universal covers? Let $X$ be a "nice" space. For a point $x\in X$ let $\tilde X_x$ be the universal covering of
$X$ taken at $x$ (the fiber at $y \in X$ is the homotopy classes of paths $[0,1]\to X$ which start
at $x$ and end at $y$, where we are taking homotopy classes relative to $\lbrace0,1\rbrace$). Let $\pi$ be the fundamental groupoid of $X$. Then there is a functor $\pi\to \text{Top}$ given on objects by $x\mapsto \tilde X_x$. On morphisms of $\pi$ from $x$ to $y$, the functor is given by the map $\tilde X_x \to \tilde X_y$ that is induced by concatenating paths.
|
{
"source": [
"https://mathoverflow.net/questions/63440",
"https://mathoverflow.net",
"https://mathoverflow.net/users/5000/"
]
}
|
63,519 |
(Sorry if this is too elementary for this site) I’m having some trouble understanding sheaf cohomology. It’s supposed to provide a theory of cohomology “with local coefficient”, and allow easy comparison between different theories like singular, Cech, de Rham and Alexander Spanier. What I don’t understand is: what’s all the fuss with coefficients that vary with each open set? Indeed what’s all the fuss with changing coefficients in an ordinary cohomology theory as in Eilenberg Steenrod? Homology is trying to measure the “holes” of a space; wouldn’t integer coefficients suffice already? I’m not really sure what cohomology is trying to measure; at least I think the first singular group is trying to measure some kind of “potential difference”, like explained in Hatcher’s book. It gets worse for me when the coefficient group isn’t the integers. But when I get to sheaf cohomology I’m totally dumbstruck as to what it’s trying to measure, and what useful information of the space can be extracted from it. Now if it’s just about comparisons of different theories I can live with that… Can someone please give me an intuitive explanation of the fuss with all these different coefficients? Please start off with why we even use different coefficients in Eilenberg Steenrod. Sorry if this is too elementary.
|
This (elementary and perfectly standard) example might help show the power of sheaves with non-constant coefficients: First, think about the circle $S^1$. Suppose you want to understand (real) line bundles on the circle. You can certainly cover the circle with two open contractible subsets $U_1$ and $U_2$ (which you can take to be the complements of the north and south poles), and we know that any line bundle on a contractible space is trivial. So if you've got a line bundle $L$ over $S^1$, you can restrict it to either $U_i$ and get a trivial bundle $L_i$. $L$ is built from these $L_i$ and the way they they are patched together over $U_1\cap U_2$. Now what does it mean to patch the $L_i$ together over $U_{12}=U_1\cap U_2$? It means choosing an isomorphism $L_1|U_{12}\rightarrow L_2|U_{12}$. For any $x\in U_{12}$, the restriction of this isomorphism to the fiber $L_x$ over $x$ is an isomorphism between 1-dimensional vector spaces, and so (after choosing bases) can be identified with an element of ${\bf R}^*$ (the non-zero reals). Therefore your patching consists of a continuous map $$U_{12}\rightarrow {\mathbb R}^*$$ which is to say, a Cech 1-cocycle for the sheaf of continuous ${\bf R}^{*}$-valued functions. Now of course you could build a line bundle in some other way, say by starting with two different contractible sets $U_1$ and $U_2$. When do two sets of patching data give isomorphic line bundles? A little thought reveals that the answer is: When and only when the corresponding cocycles give the same class in $$H^1(S^1,G^{*})$$ with $ G^{*} $ being the sheaf of continuous ${\bf R}^*$-valued functions. Therefore line bundles are classified by $H^1(S^1,G^{*})$. Now consider the exact sequence of sheaves $$0 \rightarrow G \rightarrow G^*\rightarrow {\bf Z}/2{\bf Z}\rightarrow 0$$ where $G$ is the sheaf of continuous ${\bf R}$ valued functions, and the map on the left is exponentiation. Follow the long exact sequence of cohomology, use the fact that $G$ is acyclic, and conclude that $H^1(S^1,G^*)=H^1(S^1,{\bf Z}/2{\bf Z})={\bf Z}/2{\bf Z}$. In other words, there are exactly two real line bundles over $S^1$ --- and indeed there are: the cylinder and the Mobius strip. Exercise: Do a similar calculation for ${\bf CP}^1$ (the Riemann sphere). Conclude that the set of (complex) line bundles is in one-one correspondence with $H^2({\bf CP}^1,{\bf Z})={\bf Z}$.
|
{
"source": [
"https://mathoverflow.net/questions/63519",
"https://mathoverflow.net",
"https://mathoverflow.net/users/14800/"
]
}
|
63,633 |
(This question came up in a conversation with my professor last week.) Let $\langle G,\cdot \rangle$ be a group. Let $x$ be an element of $G$. Is there always an isomorphism $f : G \to G$ such that $f(x) = x^{-1}$ ? What if $G$ is finite?
|
The Mathieu group $M_{11}$ does not have this property. A quote from Example 2.16 in this paper : "Hence there is no automorphism of $M_{11}$ that maps $x$ to $x^{−1}$." Background on how I found this quote, as I am no group theorist: I used Google on "groups with no outer automorphism" which led me to this Wikipedia article , and from there I jumped to this other Wikipedia article . So I learned that $M_{11}$ has no outer automorphism. Then I used Google again on "elements conjugate to their inverse in the mathieu group" which led me to the above-mentioned paper. EDIT: Following Geoff Robinson's comment, let me show that any element $x\in M_{11}$ of order 11 has this property, using only basic group theory and the above Wikipedia article . The article tells us that $M_{11}$ has 7920 elements, of which 1440 have order 11. So $M_{11}$ has 1440/10=144 Sylow 11-subgroups, each cyclic of order 11. These subgroups are conjugate to each other by one of the Sylow theorems, so each of them has a normalizer subgroup of order 7920/144=55. In particular, if $x$ and $x^{-1}$ were conjugate to each other, then they would be so by an element of odd order. This, however, is impossible, as any element of odd order acts trivially on a 2-element set.
|
{
"source": [
"https://mathoverflow.net/questions/63633",
"https://mathoverflow.net",
"https://mathoverflow.net/users/-1/"
]
}
|
63,641 |
Here is the problem: Two mathematicians meet at a bar. They like each other and tend to collaborate. But it is not so clear what problems could be of common interest to both of them. Of course, the traditional way is that they keep describing their work or their field in general so that hopefully they catch something at the end. But is there any reference, graph, table or whatever that they can use to help them? This, of course, makes sense only when such a reference is kept updated based on the continuous production in mathematics.
|
|
{
"source": [
"https://mathoverflow.net/questions/63641",
"https://mathoverflow.net",
"https://mathoverflow.net/users/3641/"
]
}
|
63,714 |
It is known that the Euler product formula converges for $\Re(s)>1$
(and there it represents the Riemann zeta function). My question: Is the Euler product always divergent for
$0 < \Re(s) < 1$ ? I thought that the absolute value of the Euler product formula is positively divergent under the above condition. Is it apparent?
|
Let
$$t_P = \sum_{p < P} \log \left| \frac{1}{1-p^{-s}} \right|$$
with $s=\sigma+it$, $\sigma \in (0,1)$ and $t$ a nonzero real.
The point of this answer is to show that the $t_P$ jump around a great deal. Specifically, for any $M$ and $N$, there are $P$ and $Q$ with $N < P < Q$ such that $t_Q - t_P > M$, and other $P'$ and $Q'$ with $N < P' < Q'$ such that $t_{Q'} - t_{P'} < -M$. Thus $t_P$ cannot approach any finite limit. It could still approach $\pm \infty$; think of $\sum (-1)^n (3+(-1)^n)^n$, which has arbitrarily large increases and decreases, but does climb to $\infty$. However, this result still means you should be very suspicious of any numerical data which seems to indicate that $t_P$ has a definite trend: There is always enough future oscillation remaining to wipe out any gains you have made towards $\pm \infty$. Obviously, this implies the analogous statements about $\prod \left| \frac{1}{1-p^{-s}} \right|$: It cannot approach a finite limit, and you should not trust numerical evidence that it is going to $0$ or $\infty$. And, of course, life is only more complicated if you keep track of the argument of the Euler product as well as its magnitude. So, a proof. We will treat $\sigma$ and $t$ as completely fixed, so constants in $O$'s can depend on them. Choose a small positive real $\delta$. This will be a once and for all choice, but I will record dependences on it explicitly, because I need to see that I can take a small enough choice to make everything work. Let $(P,Q)$ be of the form
$$(e^{(2 \pi k-\delta)/t}, e^{(2 \pi k+\delta)/t})$$
for some positive integer $k$. By choosing $k$ large, we can arrange that $P$ and $Q$ are larger than any required $N$. For any prime $p$ in this range,
$$|1-p^{-s}| = |1-p^{-\sigma} e^{i \theta}|$$
for some $\theta \in (2 \pi k - \delta, 2 \pi k + \delta)$. So this is
$$1-p^{-\sigma}(1 + O(\delta^2))$$
and
$$ \log \left| \frac{1}{1-p^{-s}} \right| = p^{-\sigma} (1+O(\delta^2))(1+O(p^{-\sigma}))$$
If $(P,Q)$ is large enough, the first error term dominates and
$$t_Q - t_P \geq \sum_{e^{(2 \pi k - \delta)/t} < p < e^{(2 \pi k + \delta)/t}} p^{-\sigma}(1+O(\delta^2)) = \# \{p: e^{(2 \pi k - \delta)/t} < p < e^{(2 \pi k + \delta)/t} \} e^{-2 \pi k \sigma/t} (1+O(\delta)).$$
(The error term has changed because the new dominant error is approximating $e^{\delta \sigma/t}$ as $1+O(\delta)$.) By the prime number theorem, the number of primes in this range is
$$\left( e^{(2 \pi k + \delta)/t} - e^{(2 \pi k - \delta)/t} \right) \frac{1}{2 \pi k/t} (1 + O(1/k)) = \frac{2 \delta e^{2 \pi k/t}}{(2 \pi k/t)} (1+O(\delta)+O(1/k)).$$ In short, we have bounded $t_Q - t_P$ below by
$$\frac{\delta t e^{2 \pi k(1-\sigma)/t}}{2 \pi k}(1+O(\delta) + O(1/k)).$$
Assuming our initial choice of $\delta$ was small enough, and using $\sigma<1$, this goes to $\infty$. Now, repeat the argument with $(P,Q) = (e^{((2k+1)\pi -\delta)/t}, e^{((2k+1)\pi +\delta)/t})$ to show that $t_Q - t_P$ can be arbitrarily negative as well. I don't have a gut instinct for whether this sum goes to $- \infty$, goes to $\infty$, or oscillates indefinitely. However, it should be clear that this sum is very far from being the $\zeta$ function.
|
{
"source": [
"https://mathoverflow.net/questions/63714",
"https://mathoverflow.net",
"https://mathoverflow.net/users/14464/"
]
}
|
63,816 |
Technically, it is possible to prove anything in the Coq proof assistant [1] (at least on Linux) due to a programming feature (or bug). This seems tractable when validating large proofs. Human analysis may catch this. Question: What are the consequences of technically proving anything in Coq? More or less, the question has in mind dishonest human provers presenting seemingly valid, long, computer-readable proofs. Later Coq developers replied `little hack'. I saw no official plans or replies about breaking the well-formed proof (possibly because backward compatibility is a priority). The basic technical details are: hostile OCaml plugins (possibly disguised as proofs in FILE.v) can generate their own .vo proofs (of trivial statements), thus subverting coqchk and not giving "coqc" a chance to even see the "anything proof". The plugin does this with exit(2) after writing the .vo. This scenario seems interesting when validating large archives. Here is a sample session, including links to full source code:
joro@j:/tmp/test1$ tar xvf ../proof.tar
fib5.v
bLOB
joro@j:/tmp/test1$ ls -l
total 16
-rwxr-xr-x 1 joro joro 10301 2011-05-03 12:53 bLOB
-rw-r--r-- 1 joro joro 125 2011-05-03 12:53 fib5.v
joro@j:/tmp/test1$ coqc fib5.v
Trivially true. coqchk may pass
joro@j:/tmp/test1$ ls -l
total 24
-rwxr-xr-x 1 joro joro 10301 2011-05-03 12:53 bLOB
-rw-r--r-- 1 joro joro 51 2011-05-03 12:55 fib5.glob
-rw-r--r-- 1 joro joro 125 2011-05-03 12:53 fib5.v
-rw------- 1 joro joro 812 2011-05-03 12:55 fib5.vo
joro@j:/tmp/test1$ coqchk fib5
Welcome to Chicken 8.2pl1 (February 2010)
[intern /tmp/test1/fib5.vo ... done]
...snip...
Checking library: fib5
*** vo structure validated ***
checking cst: <>.fib5.thm1
checking cst: <>.fib5.really
Modules were successfully checked
joro@j:/tmp/test1$cat fib5.v
Theorem thm1: True.
Proof.
auto.
Qed.
Declare ML Module "bLOB" .
Theorem really: True = False.
Proof.
intuition.
Qed.
#note: there is a zero byte at the end of "bLOB". in addition "bLOB" may be "aux.v"
joro@j:/tmp/test1$ coqchk -v
The Coq Proof Checker, version 8.2pl1 (February 2010)
compiled on Feb 27 2010 16:09:50 to compile the plugin:
ocamlopt -o bLOB -shared a.ml the plugin writes valid but unrelated .vo proof and then does exit(2) to prevent coqc from analyzing the "anything proof". a.ml is here .
tar with the proof is here
Possibly the most likely exploit scenario is: Here is a proof of X consisting of about $10^5$ files. lemma2817.v is the plugin and it doesn't stop the final proof.
[1] http://coq.inria.fr/ (crossposted on M.SE)
UPDATE: a.ml (OCAML) here:
open Printf;;
(* #load "unix.cma";; *)
(* compile: ocamlopt -o bLOB -shared a.ml*)
open Unix;;
let x = 1 + 2 ;;
printf "Trivially true. coqchk may pass\n";;
let fd = Unix.openfile "./fib5.vo" [O_RDWR ; O_CREAT] 0o600 in
ftruncate fd 0 ;
write fd "\x00\x00\x20\x08\x84\x95\xa6\xbe\x00\x00\x02\xef\x00\x00\x00\xe6\x00\x00\x02\xbe\x00\x00\x02\x91\xd0\xa0\x24\x66\x69\x62\x35\x40\xc0\x04\x03\xd0\x90\xa2\xb0\x42\x24\x66\x69\x62\x35\x40\xa0\xa0\x24\x74\x68\x6d\x31\x90\xf0\x40\x90\x90\x90\x9c\xa0\xa0\xb0\x90\xa0\x25\x4c\x6f\x67\x69\x63\xa0\x24\x49\x6e\x69\x74\xa0\x23\x43\x6f\x71\x40\x40\x24\x54\x72\x75\x65\x40\x41\x90\x9b\xa0\x04\x0c\x40\x90\x90\x40\x40\x41\x40\xa0\xa0\x26\x72\x65\x61\x6c\x6c\x79\x90\xf0\x40\x90\x90\x90\x9c\xa0\xa0\xb0\x90\xa0\x04\x19\xa0\x04\x18\xa0\x04\x17\x40\x40\x04\x16\x40\x41\x90\x9b\xa0\x04\x08\x40\x90\x90\x40\x40\x41\x40\x40\x40\x40\x40\x40\xa0\xa0\xa0\x2a\x4c\x6f\x67\x69\x63\x5f\x54\x79\x70\x65\xa0\x24\x49\x6e\x69\x74\xa0\x23\x43\x6f\x71\x40\x30\x5f\xb0\x16\xdd\x26\xc4\x94\xf9\x07\x89\x51\x91\x03\xf1\xd3\x06\xa0\xa0\xa0\x27\x50\x72\x65\x6c\x75\x64\x65\xa0\x24\x49\x6e\x69\x74\xa0\x23\x43\x6f\x71\x40\x30\x3f\x7c\xa2\x88\x4a\x72\x6c\x12\x85\x67\x03\x1c\x07\xf8\x24\x86\xa0\xa0\xa0\x27\x54\x61\x63\x74\x69\x63\x73\xa0\x24\x49\x6e\x69\x74\xa0\x23\x43\x6f\x71\x40\x30\x27\xf0\x1f\x3c\xa4\x0a\x05\x89\x0c\xe5\xc4\xb6\x96\xec\x49\x5a\xa0\xa0\xa0\x22\x57\x66\xa0\x24\x49\x6e\x69\x74\xa0\x23\x43\x6f\x71\x40\x30\xb0\xd6\xf6\x26\x6c\xdd\x54\x3f\x23\x97\x33\x9e\xa3\xec\x62\xf1\xa0\xa0\xa0\x25\x50\x65\x61\x6e\x6f\xa0\x24\x49\x6e\x69\x74\xa0\x23\x43\x6f\x71\x40\x30\x07\x94\x7b\x89\xa1\x80\xa3\x50\xc0\x48\x2e\xb7\x0c\x6b\xf3\x1e\xa0\xa0\xa0\x26\x53\x70\x65\x63\x69\x66\xa0\x24\x49\x6e\x69\x74\xa0\x23\x43\x6f\x71\x40\x30\x0d\x59\x83\x8d\x26\x27\x56\x53\x31\x41\x6b\x00\x71\x08\x50\x6b\xa0\xa0\xa0\x29\x44\x61\x74\x61\x74\x79\x70\x65\x73\xa0\x24\x49\x6e\x69\x74\xa0\x23\x43\x6f\x71\x40\x30\xb6\x9e\x13\xbf\x29\x8c\x28\xd9\xe1\x77\xc9\x93\xca\xdf\xae\xfb\xa0\xa0\xa0\x04\x62\xa0\x04\x61\xa0\x04\x60\x40\x30\x10\xcd\x50\xdb\x7b\x31\x4d\xb5\xd1\x18\x3f\x03\x19\xfa\xda\x1d\xa0\xa0\xa0\x29\x4e\x6f\x74\x61\x74\x69\x6f\x6e\x73\xa0\x24\x49\x6e\x69\x74\xa0\x23\x43\x6f\x71\x40\x30\xf0\xb3\xe5\xfb\x02\x4b\x8f\x8c\xdb\x44\x61\x6d\x64\x44\x74\x0c\x40\x40\xb0\x04\x7f\xa0\xa0\x04\x7d\xa0\x28\x43\x4f\x4e\x53\x54\x41\x4e\x54\xb0\x90\x91\xa0\x94\x90\x41\x40\x40\x92\x40\xa0\xa0\x22\x5f\x37\xa0\x29\x49\x4d\x50\x4c\x49\x43\x49\x54\x53\xa0\xa0\xb0\x92\x04\x93\x40\x04\x8f\xe0\x40\x41\x40\x40\x40\x40\xa0\xa0\x91\x04\x06\x40\x40\xa0\xa0\x22\x5f\x38\xa0\x24\x48\x45\x41\x44\xa0\x91\x04\x0d\x90\x90\x04\x0f\xa0\xa0\x22\x5f\x39\xa0\x2f\x41\x52\x47\x55\x4d\x45\x4e\x54\x53\x2d\x53\x43\x4f\x50\x45\xb0\x40\x91\x04\x16\x40\xa0\xa0\x04\x8c\xa0\x04\x28\xb0\x04\x27\x40\x92\x40\xa0\xa0\x23\x5f\x31\x30\xa0\x04\x22\xa0\xa0\xb0\x04\x21\x40\x04\x96\x04\x20\xa0\xa0\x91\x04\x04\x40\x40\xa0\xa0\x23\x5f\x31\x31\xa0\x04\x1f\xa0\x91\x04\x0a\x90\x90\x04\x0c\xa0\xa0\x23\x5f\x31\x32\xa0\x04\x1e\xb0\x40\x91\x04\x12\x40\x40\x40\xa0\xa0\x04\x4f\x04\x49\xa0\xa0\x04\x57\x04\x54\xa0\xa0\x04\x62\x04\x5c\xa0\xa0\x04\x6d\x04\x67\xa0\xa0\x04\x78\x04\x72\xa0\xa0\x04\x83\x04\x7d\xa0\xa0\x04\x8e\x04\x88\xa0\xa0\x04\x99\x04\x93\xa0\xa0\x04\xa4\x04\x9e\x40\xa0\x04\x60\xa0\x04\x70\xa0\x04\x7a\xa0\x04\x84\xa0\x04\x8e\xa0\x04\x98\xa0\x04\xa2\xa0\x04\x6d\xa0\x04\xad\x40\x84\x95\xa6\xbe\x00\x00\x00\x11\x00\x00\x00\x01\x00\x00\x00\x06\x00\x00\x00\x04\x30\x92\x3c\xc2\xa2\xba\xb3\x7b\x2f\x1b\xe7\xea\x03\x44\x0e\x9f\x1b" 0 812 ;
exit 4 ;;
Update: Possibly a more portable solution (Coq):
Theorem really: True = False.
Proof.
external "/bin/sh" "ESCAPE_SEQ; write_vo_proof; nicely_kill_coq ;" True.
(* this invokes /bin/sh *)
Qed.
tactic external
|
For the innocent observers, let me explain what joro did. He has tricked Coq into thinking that True = False is a theorem by providing an external piece of code (i.e., something that Coq does not check but simply loads into memory) that breaks Coq loading mechanism. This of course should not happen, Coq should realize that it's loading corrupted external code. Thus, we do not have here a straight inconsistency in the Coq theorem prover, but rather a way of breaking Coq through its interaction with the external environment. There are in fact many ways in which this can be done, such as: Expose the computer to cosmic rays that cause the CPU to malfunction occasionally. (If you think this is a joke you should read what sort of ideas computer security experts have.) Coq depends on the runtime environment of the operating system and an extensive runtime library (to manipulate memory, strings, to communicate with the user, etc.) which is almost never bug-free. Any such bugs can in principle be used to make Coq think it proved something senseless. If the CPU has bugs then you cannot trust the execution of any program. (Who remembers the Intel Pentium division bug? Did it stop us from using computers?) If the compiler which was used to compile Coq has bugs, you cannot trust Coq to work correctly. These are all valid concerns, some more than others. People put serious thought into making sure that their theorem provers work correctly. In particular, I think there is an ongoing effort to formally prove that the Coq core algorithm does what it is supposed to do. It is hard enough to deal with the core algorithm, let alone consider what happens when people start linking in external libraries. Now to answer joro: I think you can trust Coq more than you can trust the average mathematician. The biological equivalent of what you did to Coq would be to give a mathematician an illegible photocopy of a paper with results that he relies on to prove theorems. If you want to trust your Coq code then you can take several precautions: Make sure you do not use any external libraries or experimental features. (You can syntactically check whether the Coq code links in any modules or uses certain experimental features.) Run your Coq code on several different operating systems. Run your Coq code on a computer in a vault inside a mountain to prevent cosmic rays from reaching your computer (and don't put any radioactive bananas inside the computer either). Think about what Coq is proving and see whether it makes sense. After all, your brain is not totally clueless about math and it should be able to assess what level of credence the results deserve. (If you're proving with Coq 30000 small theorems which all look alike that's a different matter. Your brain is useless then.) Ask other people to prove the same result, but do not tell them how you did it. Compare notes. Always remember that mathematics (still) is a human activity. Even if we use machines to do it.
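To illustrate the first precaution above (this sketch is my own addition, not part of the original answer, and the list of flagged commands is only a guess at what one would want to catch), a purely syntactic screen over a batch of .v files could look like:

import pathlib, re, sys

# Vernacular commands that pull in or run external code; extend as needed (my guess).
SUSPICIOUS = [r"Declare\s+ML\s+Module", r"\bexternal\b"]

for name in sys.argv[1:]:
    text = pathlib.Path(name).read_text(errors="replace")
    hits = [pat for pat in SUSPICIOUS if re.search(pat, text)]
    if hits:
        print(f"{name}: matches {hits} -- inspect before trusting")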
|
{
"source": [
"https://mathoverflow.net/questions/63816",
"https://mathoverflow.net",
"https://mathoverflow.net/users/12481/"
]
}
|
63,847 |
As a relatively new abstraction, matroids clearly enjoy a rich theory unto themselves and also offer a viewpoint that suggests interesting analogies and clarifies aspects of the foundations of venerable subjects. All that said, a very harsh metric by which to judge such an abstraction might ask what important results in other areas reasonably seem to depend in an essential way upon insights first gleaned from the pursuit of the pure theory. So I'd like to know, please, what specific results a matroid theory partisan would likely cite as the best demonstrations of the power of matroid theory within the larger arena of mathematics. (I realize that mathematicians in one field will sometimes absorb ideas from another field, then translate back to their preferred language possibly obscuring the debt. So important papers that somehow could not exist without matroid theory should count here even if they never explicitly mention matroids.)
|
Here are two such triumphs. There are many others. (1) Oriented matroids were used by Gelfand and MacPherson to give a
combinatorial formula for Pontrjagin classes, a long-open problem.
See http://www.ams.org/journals/bull/1992-26-02/S0273-0979-1992-00282-3/home.html .
There have been many further developments in this area. (2) Quoting from the first sentence of the Math Review 88f:14045, "in
this paper the authors discover a remarkable connection between the
geometry of the Schubert cells in a Grassmannian manifold, matroid
theory, and convex polyhedra." The authors are Gelfand, Goresky,
MacPherson, and Serganova. This paper began (I believe) the study of
matroid polytopes, which has grown into a big industry.
|
{
"source": [
"https://mathoverflow.net/questions/63847",
"https://mathoverflow.net",
"https://mathoverflow.net/users/10909/"
]
}
|
63,868 |
I am gathering material for an exposition and I note that some texts (e.g. Ise and Takeuchi, "Lie Groups I & II", Stillwell, "Naive Lie Theory", Hall, "Lie Groups, Lie Algebras, and Representations") define "Matrix Lie Groups" with the unwonted requirement that the group should be a closed subgroup of $GL\left(V\right)$ (with $V = \mathbb{R}^n, \mathbb{C}^n$). I don't want to make such a restriction in my exposition - it seems a bit clunky to me and is certainly not needed. So the basic point of my question is - is the restriction just a simplification to make a first exposition easier to read (e.g. allows more readily grasped techniques to be used in proofs)? Or is there some deeper justification for it - e.g. the answer to my question if this answer is indeed yes? A simple example is the irrational slope one-parameter subgroup of the 2-torus - it is of course isomorphic to $\left(\mathbb{R},+\right)$. By "isomorphic" I mean of course isomorphic as Lie groups, not just as abstract groups, everywhere in this question. Furthermore, for the purposes of this question, by "Lie Subgroup" I mean in Rossmann's (Rossmann "Lie Groups, An introduction through linear groups") sense: for a subgroup you use the topology generated by sets of the form $\exp\left(U\right)$ where $U$ is open in the Lie subalgebra of the subgroup you are considering - this is generally not the same as the relative topology gotten from $GL\left(V\right)$ if the subgroup is not closed. If you like, I have seen the term "Virtual Lie Subgroup" for what I mean by "Lie Subgroup" here. Thus, the irrational slope one-parameter subgroup of the 2-torus is not a submanifold of $GL\left(V\right)$, but if Rossmann's group topology is used, you've got a (virtual) Lie Subgroup. Moreover, I'm not groping here for something like the closed subgroup theorem (Rossmann, section 2.7). One can argue that we study closed $GL\left(V\right)$ subgroups because this theorem guarantees they are Lie groups. I'm interested in whether ALL Lie subgroups of $GL\left(V\right)$ can be thought of as closed matrix groups after a suitable isomorphism. If you like, the isomorphism would be a "change of co-ordinates" to make the problem easier. There is an MO discussion line that seems related here wherein Greg Kuperberg adapts a proof of Ado's theorem to show that every Lie algebra is the algebra of some closed subgroup of $GL\left(V\right)$. So that means that either my arbitrary group is covered by or covers a closed subgroup of $GL\left(V\right)$ - maybe it's trivial, but I can't see whether this line of reasoning can or can't be furthered to my suspected result.
|
Any linear Lie group is Lie-isomorphic to a closed subgroup of $\operatorname{GL}(V)$ : that's a result of Morikuni Goto: Faithful representations of Lie groups. II. Nagoya Math. J. 1, (1950). 91–107. From the review in MR: "A Lie group $G$ is called faithfully representable (f.r.) if there exists a topological isomorphism $\phi$ of $G$ into the general linear group of suitable degree $n$ . It is shown ultimately that if $\phi$ exists, then $\phi$ can be chosen so that $\phi(G)$ is closed."
|
{
"source": [
"https://mathoverflow.net/questions/63868",
"https://mathoverflow.net",
"https://mathoverflow.net/users/14510/"
]
}
|
64,071 |
Just exactly what the title says; often, in mathematics, particularly in the vicinity of Grothendieck, I see reference to "the yoga of...". What exactly does the term "yoga" mean in these contexts?
|
I've taken "yoga" to mean a part of the body of mathematics which does not consist of many actual theorems or results -- or in fact could not be formalized as just a few theorems -- but rather a collection of principles and techniques that one needs to wrap one's head around completely, after which one will be able to use them almost effortlessly. As an example, I would say that there is a yoga of generating functions in combinatorics. (Perhaps this is the simplest example of a yoga.)
|
{
"source": [
"https://mathoverflow.net/questions/64071",
"https://mathoverflow.net",
"https://mathoverflow.net/users/3902/"
]
}
|
64,072 |
For purposes of this question "terminal symplectic variety" means a normal variety which is symplectic (in the usual sense of symplectic singularities) and whose singular locus has codimension $\geq 4$ (the equivalence of this with the usual definition of terminal is a theorem of Namikawa). Symplectic varieties have lots of nice properties; for example, they are always Cohen-Macaulay, which is a condition about niceness with respect to depth. So, one can hope for others. Is a terminal symplectic variety necessarily $S_4$? I should warn any potential answerers that my understanding of the $S_4$ property is very poor (it's an exceptionally tough thing to Google, since it's not even the dominant use of that term in mathematics). I hesitate to even give a definition for fear of messing it up; I believe it means that every ideal sheaf of codimension $\leq 3$ has depth equal to its codimension.
|
I've taken "yoga" to mean a part of the body of mathematics which does not consist of many actual theorems or results -- or in fact could not be formalized as just a few theorems -- but rather a collection of principles and techniques that one needs to wrap one's head around completely, after which one will be able to use them almost effortlessly. As an example, I would say that there is a yoga of generating functions in combinatorics. (Perhaps this is the simplest example of a yoga.)
|
{
"source": [
"https://mathoverflow.net/questions/64072",
"https://mathoverflow.net",
"https://mathoverflow.net/users/66/"
]
}
|
64,083 |
I just heard that Daniel Quillen passed on. I am not familiar with his work
and want to celebrate his life by reading some of his papers. Which one(s?)
should I read? I am an algebraic geometer who is comfortable with cohomological methods in his field, but knows almost nothing about homotopy theory. My goal is to gain a better appreciation of Quillen's work,
not to advance my own research. Here I tagged this question as "at.algebraic-topology, algebraic-k-theory" because I think these are the main fields in which Quillen worked. Please add or change this if other tags are appropriate.
|
Can I be the first to recommend Elementary proofs of some results of cobordism theory using Steenrod operations, Advances in Math. 7 (1971), 29–56. From the MR review: "In this important and elegant paper the author gives new elementary proofs of the structure theorems for the unoriented cobordism ring $N^\ast$ and the complex cobordism ring $U^\ast$, together with new results and methods. Everyone working in cobordism theory should read this paper." The paper was revolutionary in (at least) two ways.
1) The proofs are almost entirely geometric, with cobordism classes represented by proper oriented maps of manifolds. Quillen cites Grothendieck as inspiration for this, and such methods should appeal to algebraic geometers familiar with the Chow ring.
2) Formal group methods are used to prove results in stable homotopy theory. It's hard to overestimate the impact this has had. Indeed almost all of the modern connections between homotopy theory and algebraic geometry seem to go through formal groups, drawing influence from Quillen's idea.
|
{
"source": [
"https://mathoverflow.net/questions/64083",
"https://mathoverflow.net",
"https://mathoverflow.net/users/5337/"
]
}
|
64,099 |
Encouraged by the progress made in a recently posted MO problem , here is a "conceptually related" problem originating from a 2003 joint paper of Sergei Konyagin and myself. Suppose we are given $n$ points $z_1,...,z_n$ on the unit circle $U=\{z\colon |z|=1\}$ and $n$ weights $p_1,...,p_n\ge 0$ such that $p_1+....+p_n=n$, and we want to find yet another point $z\in U$ to maximize the product
$$ \prod_{i=1}^n |z-z_i|^{p_i}. $$
How large can we make this product by the optimal choice of $z$? Conjecture. For any given $z_1,...,z_n\in U$ and $p_1,...,p_n\ge 0$ with $p_1+...+p_n=n$, there exists $z\in U$ with
$$ \prod_{i=1}^n |z-z_i|^{p_i} \ge 2. $$
Here are some comments.
- If true, the estimate of the conjecture is best possible, as evidenced by the situation where the points are equally spaced on $U$ and all weights are equal to $1$ (see the computation below).
- We were able to resolve a number of particular cases; say, that where the points $z_i$ are equally spaced on $U$, and also that where all weights are equal to $1$.
- The case $n=2$ is almost trivial, but already the case $n=3$ is wide open.
- In the general case we have shown that the maximum is larger than some absolute constant exceeding $1$.
- Although this is not obvious at first glance, this conjecture is actually about the maxima of polynomials on the unit circle.
I would be very interested to see any further progress!
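For concreteness, here is the standard computation behind the first comment (added for illustration): if $z_j=e^{2\pi i j/n}$ for $j=1,\dots,n$ and all $p_j=1$, then
$$ \prod_{j=1}^n |z-z_j| = |z^n-1| \le |z|^n+1 = 2 \qquad (|z|=1), $$
with equality attained at any $z$ with $z^n=-1$, so no choice of $z$ does better than $2$ in this configuration.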
|
Let's create a proof a la Koosis. All the techniques used below can be found in his book "The Logarithmic Integral". Take $a>1$ and put $f(z)=a\prod_j(1+z/z_j)^{p_j}$. That is a nice analytic function and, while its absolute value is somewhat hard to understand, its argument is very simple: it is an $n$-piece piecewise linear function on the circle with slope $\frac 12 n$ (with respect to the usual circle length) and jumps at $z_j$. Let $I$ be the image of one of the arcs between two adjacent points $z_j$ and $z_{j+1}$ under the mapping $z\mapsto \operatorname{arg}f(z)$. We can transplant all functions defined on the circle arc $[z_j,z_{j+1}]$ to $I$ using this mapping. Note that the integral of any function over the arc with respect to the circle length is just $2/n$ times the integral of its transplant over $I$ with respect to the line length. Let $\Phi$ be the transplant of $|f|$. Assume that $\Phi<2$ on $I$. The transplant of $f$ is then just $F(t)=\Phi(t)e^{it}$. The key observation is the following:
$$
\int_I \log|2-F(t)|\,dt\ \ge\ (\log 2)\,(|I|-\pi).
$$
Assuming that it is true, we conclude that the full integral of $\log|2-f|$ over the unit circle is at least $2/n$ times the sum of the right hand sides over the intervals corresponding to all arcs, which is $0$. On the other hand, if $a>1$, then $\log|2-f(0)|=\log|2-a|<0$, so $2-f$ must have a root inside the disk and the maximum principle finishes the story. Now let us prove the observation claim. The only thing we really know about $\Phi$ is that it is log-concave and, thereby, unimodal. Fortunately, that's all we need. So, in what follows, $\Phi$ will be just any unimodal function on $I$ with values in $[0,2]$. Since we can always extend $\Phi$ by $0$ outside $I$, we can switch to any larger interval we want without making the inequality easier. So, WLOG, $I=[-2\pi n-\frac\pi 2,2\pi n+\frac\pi 2]$ Now, let us observe that for every fixed $t$, the integrand is minimized for $\Phi(t)=2\max(0,\cos t)$ (that is just the nearest point on the line) and that the farther we go away from this optimal value, the larger the integrand is. Therefore, to minimize the left hand side, we need to stay as close to the black regime on the picture (the graph of $2\cos_+ t$) as we can. Suppose that the actual $\Phi$ is given by the blue line. Then, replacing $\Phi$ by the red line $\Psi$, we come closer to the optimum at every point. But the red line consists of several full periods (horizontal pieces) and several pieces that together constitute one full positive arc of $\cos t$. Now, each full period means running over some circle around the origin, so the average value of $\log|2-\Psi(t)e^{it}|$ over each full period is exactly $\log 2$. At last, the $2\cos t$ part gives $\int_0^\pi\log (2|\sin t|)dt=0$, which is exactly the loss of $\pi \log 2$ compared to $\log 2$ times its length $\pi$. That's it. Feel free to comment and/or ask questions.
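For completeness, here is the classical evaluation used in the last step (standard; spelled out for illustration). With $J=\int_0^{\pi/2}\log\sin t\,dt=\int_0^{\pi/2}\log\cos t\,dt$,
$$ 2J=\int_0^{\pi/2}\log\Big(\tfrac12\sin 2t\Big)\,dt=-\tfrac\pi2\log 2+\tfrac12\int_0^{\pi}\log\sin s\,ds=-\tfrac\pi2\log 2+J, $$
so $J=-\tfrac\pi2\log 2$, and hence $\int_0^\pi\log(2|\sin t|)\,dt=\pi\log 2+2J=0$.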
|
{
"source": [
"https://mathoverflow.net/questions/64099",
"https://mathoverflow.net",
"https://mathoverflow.net/users/9924/"
]
}
|
64,131 |
In the paper Foundations of the theory of bounded cohomology, by N.V. Ivanov, the author considers the complex of bounded singular cochains on a simply connected CW-complex $X$, and constructs a chain homotopy between the identity and the null map. The construction of this homotopy involves the description of a Postnikov system for the space considered. In some sense, $S^2$ represents the easiest nontrivial case of interest for this construction, and I was just trying to figure out what is happening in this case. Since the existence of a contracting homotopy obviously implies the vanishing of bounded cohomology, this is somewhat related to understanding why the bounded cohomology of $S^2$ vanishes. A first step in constructing the needed Postnikov system is the computation of the homotopy groups of $X$, so the following question came into my mind: Do there exist integers $n\neq 0,1$ such that $\pi_n(S^2)=0$? I have looked around, and I did not find the answer to this question, but I am not an expert on the subject, so I don't even know if this is an open problem. In Berrick, A. J., Cohen, F. R., Wong, Y. L., Wu, J.,
Configurations, braids, and homotopy groups,
J. Amer. Math. Soc. 19 (2006), no. 2, 265–326 it is stated that $\pi_n(S^2)$ is known for every $n\leq 64$, and Wikipedia's table http://en.wikipedia.org/wiki/Homotopy_groups_of_spheres#Table_of_homotopy_groups shows that $\pi_n (S^2)$ is non-trivial for $n\leq 21$.
|
Sergei O. Ivanov, Roman Mikhailov, and Jie Wu have recently (2 June 2015) posted a paper on the arXiv giving a proof that $\pi_n(S^2)$ is non-zero for all $n\geq 2$.
You can look at it at the following link: Sergei O. Ivanov, Roman Mikhailov, Jie Wu, On nontriviality of homotopy groups of spheres, arXiv:1506.00952
|
{
"source": [
"https://mathoverflow.net/questions/64131",
"https://mathoverflow.net",
"https://mathoverflow.net/users/6206/"
]
}
|
64,195 |
I have a smattering of knowledge and disconnected facts about this question, so I would like to clarify the following discussion, and I also seek references and citations supporting this knowledge. Please see my specific questions at the end, after "discussion".

Discussion and Background

In practice, groups that do not have any faithful linear representation seem to be rare (in the sense that I believe it was not till the late 1930s that anyone found any). By Ado's theorem, every abstract finite dimensional Lie algebra over $\mathbb{R}, \mathbb{C}$ is the Lie algebra of some matrix Lie group. All Lie groups with a given Lie algebra are covers of one another, so even groups that are not subsets of $GL\left(V\right)$ ($V = \mathbb{R}, \mathbb{C}$) are covers of matrix groups. I know that the metaplectic groups (double covers of the symplectic groups $Sp_{2 n}$) are not matrix groups. And I daresay it is known (although I don't know) exactly which covers of semisimple groups have faithful linear representations, thanks to the Cartan classification of all semisimple groups. But is there a known general reason (i.e. a theorem showing) why particular groups lack linear representations? I believe a group must be noncompact to lack linear representations, because the connected components of all compact ones are the exponentials of the Lie algebra (actually, if someone could point me to a reference to a proof of this fact, if indeed I have gotten my facts straight, I would appreciate that too). But conversely, do noncompact groups always have covers which lack faithful linear representations? Therefore, here are my specific questions:

Specific Questions

Firm answers with citations to any of the following would be highly helpful:
1) Is there a general theorem telling one exactly when a finite dimensional Lie group lacks a faithful linear representation;
2) Alternatively, which of the (Cartan-classified) semisimple Lie groups have covers lacking faithful linear representations;
3) Who first exhibited a Lie group without a faithful representation and when;
4) Is compactness a key factor here? Am I correct that a complex group is always the exponential of its Lie algebra (please give a citation for this)? Does a noncompact group always have a cover lacking a faithful linear representation?

Many thanks in advance.
|
Most of the answers can be found in Hochschild's book on the structure of Lie groups.
a) Every complex semisimple group has a faithful rep (Thm 3.2 in Chap. XVII).
b) A connected Lie group with Levi decomposition $G=RS$ ($R$ the solvable radical, $S$ a semisimple Levi factor) is linear iff both $R$ and $S$ are linear (Thm 4.2 in Chap. XVIII).
c) A solvable Lie group $G$ is linear iff its commutator subgroup $G'$ is closed, and $G'$ has no non-trivial compact subgroup (Thm 3.2 in Chap. XVIII).
Now, let $G$ be a semi-simple Lie group. Assume that $G$ is simply connected. Then $G$ admits a greatest linear quotient. Indeed, let $G_{\mathbb{C}}$ be the simply connected complex group corresponding to the complexified Lie algebra of $G$. Let $L$ be the kernel of the canonical homomorphism $G\rightarrow G_{\mathbb{C}}$; so $L$ is a finite index subgroup of the center of $G$. Then $G/L$ is the greatest linear quotient of $G$, in the sense that, if $H$ is locally isomorphic to $G$ and $p:G\rightarrow H$ is a universal covering, the group $H$ is linear iff $p$ factors through $G/L$.
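To spell out one instance of this criterion (my own illustration, not part of the original answer): take $G$ to be the universal cover of $SL_2(\mathbb{R})$. Here $G_{\mathbb{C}}=SL_2(\mathbb{C})$, and the kernel $L$ of $G\rightarrow SL_2(\mathbb{C})$ is $\pi_1(SL_2(\mathbb{R}))\cong\mathbb{Z}$, so the greatest linear quotient is $G/L\cong SL_2(\mathbb{R})$. Consequently the only linear groups locally isomorphic to $SL_2(\mathbb{R})$ are $SL_2(\mathbb{R})$ and $PSL_2(\mathbb{R})$; in particular no nontrivial cover of $SL_2(\mathbb{R})$, such as the metaplectic group mentioned in the question, admits a faithful finite-dimensional representation.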
|
{
"source": [
"https://mathoverflow.net/questions/64195",
"https://mathoverflow.net",
"https://mathoverflow.net/users/14510/"
]
}
|
64,370 |
What are the simplest examples of
rings that are not isomorphic to their
opposite rings? Is there a science to constructing them? The only simple example known to me: In Jacobson's Basic Algebra (vol. 1), Section 2.8, there is an exercise that goes as follows: Let $u=\begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1\\ 0 & 0 & 0 \end{pmatrix}\in M_3(\mathbf Q)$ and let $x=\begin{pmatrix} u & 0 \\ 0 & u^2 \end{pmatrix}$,
$y=\begin{pmatrix}0&1\\0&0\end{pmatrix}$, where $u$ is as indicated and $0$ and $1$ are zero and unit matrices in $M_3(\mathbf Q)$. Hence $x,y\in M_6(\mathbf Q)$. Jacobson gives hints to prove that the subring of $M_6(\mathbf Q)$ generated by $x$ and $y$ is not isomorphic to its opposite. Examples seem to be well-known to the operator algebras crowd: See for example the paper: "A Simple Separable C*-Algebra Not Isomorphic to Its Opposite Algebra" by N. Christopher Phillips, Proceedings of the American Mathematical Society
Vol. 132, No. 10 (Oct., 2004), pp. 2997-3005.
|
Here's a factory for making examples. If $\Gamma$ is a quiver, and $k$ a field, then we get a quiver algebra $k\Gamma$. If $\Gamma$ has no oriented cycles, we can recover $\Gamma$ from $k\Gamma$ by taking the Ext-construction. Also, the opposite algebra of a quiver algebra is obtained by reversing all the arrows in the quiver. Hence you can produce an example by taking the quiver algebra of any quiver with no oriented cycles, which is not isomorphic to its reverse. It's easy to construct lots of quivers with these properties.
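A minimal concrete instance of this recipe (added for illustration, and easy to check): let $\Gamma$ be the quiver with vertices $1,2,3$ and arrows $1\to 2$, $1\to 3$. Its reverse has a vertex with two incoming arrows and no vertex with two outgoing arrows, so $\Gamma$ is not isomorphic to its reverse; since $\Gamma$ has no oriented cycles, the recovery of the quiver from the algebra shows $k\Gamma\not\cong(k\Gamma)^{\mathrm{op}}$. This $k\Gamma$ is only $5$-dimensional over $k$ (three vertex idempotents and two arrows).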
|
{
"source": [
"https://mathoverflow.net/questions/64370",
"https://mathoverflow.net",
"https://mathoverflow.net/users/9672/"
]
}
|
64,982 |
I'm a grad student getting close to submitting my first journal article (which will be single-authored). My understanding is that it's standard practice for authors to transfer the copyright of their paper to the journal in which it is published. I want my article to be published in a journal, but I don't want to transfer the copyright -- I want the article to be in the public domain. How will editors behave towards such a request? Also, when should I bring up the topic: when I submit the article, or after it's accepted and I'm asked to sign a copyright transfer? Some grants apparently have a stipulation that articles written as part of the grant research must be released into the public domain (e.g. grants funded by the US government). In this case, authors presumably sign a consent to publish, instead of a copyright transfer. Hence, there's at least some precedent for what I want to do, though I want my article in the public domain purely because of my personal views on the ethics of copyright. I couldn't find too much information about this topic by googling. Oleg Pikhurko has a page discussing his attempt to have his articles revert to the public domain after a period of years, as opposed to instantly. It didn't work out particularly well in his case. I'm not sure how much I'm willing to have my ethical ideals damage my career (e.g. by having publications delayed and/or being banned from submitting to journals).
|
Most journals in math allow you to publish a version of the paper which was previously posted to arXiv.org. They often ask you to transfer the copyright only for the published version, which differs just slightly from the arXiv version. So there is not much difference between having it public or having a slightly different version public. Some journals, on the other hand, are free anyway and forever in their public versions, e.g. Theory and Applications of Categories. If you choose a journal carefully you solve most of your concerns. Some publishers are notorious for being nasty, expensive, proprietary, nonresponsive to author needs, etc. You do not want to publish in expensive envelopes of crap, like Elsevier's Chaos, Solitons and Fractals used to be.
|
{
"source": [
"https://mathoverflow.net/questions/64982",
"https://mathoverflow.net",
"https://mathoverflow.net/users/15105/"
]
}
|
65,424 |
Say $A$ and $B$ are symmetric, positive definite matrices. I've proved that $$\det(A+B) \ge \det(A) + \det(B)$$ in the case that $A$ and $B$ are two dimensional. Is this true in general for $n$-dimensional matrices? Is the following even true? $$\det(A+B) \ge \det(A)$$ This would also be enough. Thanks.
|
The inequality
$$\det(A+B)\geq \det A +\det B$$
is implied by the Minkowski determinant theorem
$$(\det(A+B))^{1/n}\geq (\det A)^{1/n}+(\det B)^{1/n}$$
which holds true for any non-negative $n\times n$ Hermitian matrices $A$ and $B$. The latter inequality is equivalent to the fact that the function $A\mapsto(\det A )^{1/n}$ is concave on the set of $n\times n$ non-negative Hermitian matrices (see e.g., A Survey of Matrix Theory and Matrix Inequalities by Marcus and Minc, Dover, 1992, p. 115, and also the previous MO thread).
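For completeness, here is the one-line deduction of the inequalities asked about (standard; spelled out for illustration). Writing $a=(\det A)^{1/n}\ge 0$ and $b=(\det B)^{1/n}\ge 0$, the Minkowski determinant theorem gives
$$\det(A+B)\ \ge\ (a+b)^n\ =\ \sum_{k=0}^{n}\binom{n}{k}a^k b^{n-k}\ \ge\ a^n+b^n\ =\ \det A+\det B,$$
since every term of the binomial expansion is non-negative; in particular $\det(A+B)\ge\det A$ as well.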
|
{
"source": [
"https://mathoverflow.net/questions/65424",
"https://mathoverflow.net",
"https://mathoverflow.net/users/15221/"
]
}
|
65,729 |
This talk is about a theory of "perfectoid spaces", which "compares objects in characteristic p with objects in characteristic 0". What are those spaces, and where can one read about them? Edit: A bit more info can be found in Peter Scholze's seminar description and in Bhargav Bhatt's. Edit: Peter Scholze posted yesterday this beautiful overview on the arXiv. Edit: Peter Scholze posted today this new survey on the arXiv.
|
Update: The lecture notes of the CAGA lecture series on perfectoid spaces at the IHES can now be found online, cf. http://www.ihes.fr/~abbes/CAGA/scholze.html . It seems that it's my job to answer this question, so let me just briefly explain everything. A more detailed account will be online soon. We start with a complete non-archimedean field $K$ of mixed characteristic $(0,p)$ (i.e., $K$ has characteristic $0$, but its residue field has characteristic $p$), equipped with a non-discrete valuation of rank $1$, such that (and this is the crucial condition) Frobenius is surjective on $K^+/p$, where $K^+\subset K$ is the subring of elements of norm $\leq 1$. Some authors, e.g. Gabber-Ramero in their book on Almost ring theory, call such fields deeply ramified (they do not require that they are complete, anyway). Just think of $K$ as the completion of the field $\mathbb{Q}_p(p^{1/p^\infty})$, or alternatively as the completion of the field $\mathbb{Q}_p(\mu_{p^\infty})$. In this situation, one can form the field $K^\prime$, given as the fraction field of $K^{\prime +} = \varprojlim K^+/p$, where the transition maps are given by Frobenius. Concretely, in the first example it is given by the completion of $\mathbb{F} _p((t^{1/p^\infty}))$, where $t$ is the element $(p,p^{\frac 1p},p^{\frac 1{p^2}},\ldots)$ in $K^{\prime +}=\varprojlim K^+/p$. Now we have the following theorem, due to Fontaine-Wintenberger in the examples I gave, and deduced from the book of Gabber-Ramero in general: Theorem: There is a canonical isomorphism of absolute Galois group $G_K\cong G_{K^\prime}$. At this point, it may be instructive to explain this theorem a little, in the example where $K$ is the completion of $\mathbb{Q}_p(p^{1/p^\infty})$ (this assumption will be made whenever examples are discussed). It says that there is a natural equivalence of categories between the category of finite extensions $L$ of $K$ and the category of finite extensions $L^\prime$ of $K^\prime$. Let us give an example, say $L^\prime$ is given by adjoining a root of $X^2 - 7t X + t^5$. Basically, the idea is that one replaces $t$ by $p$, so that one would like to define $L$ as the field given by adjoining a root of $X^2 - 7p X + p^5$. However, this is obviously not well-defined: If $p=3$, then $X^2 - 7t X + t^5=X^2 - t X + t^5$, but $X^2 - 7p X + p^5\neq X^2 - p X + p^5$, and one will not expect that the fields given by adjoining roots of these different polynomials are the same. However, there is the following way out: $L^\prime$ can be defined as the splitting field of $X^2 - 7t^{1/p^n} X + t^{5/p^n}$ for all $n\geq 0$, and if we choose $n$ very large, then one can see that the fields $L_n$ given as the splitting field of $X^2 - 7p^{1/p^n} X + p^{5/p^n}$ will stabilize as $n\rightarrow \infty$; this is the desired field $L$. Basically, the point is that the discriminant of the polynomials considered becomes very small, and the difference between any two different choices one might make when replacing $t$ by $p$ become comparably small. This argument can be made precise by using Faltings's almost mathematics, as developed systematically by Gabber-Ramero. Consider $K\supset K^+\supset \mathfrak{m}$, where $\mathfrak{m}$ is the maximal ideal; in the example, it is the one generated by all $p^{1/p^n}$, and it satisfies $\mathfrak{m}^2 = \mathfrak{m}$, because the valuation on $K$ is non-discrete. 
We have a sequence of localization functors: $K^+$-mod $\rightarrow$ $K^+$-mod / $\mathfrak{m}$-torsion $\rightarrow$ $K^+ $-mod / $p$-power torsion. The last category is equivalent to $K$-mod, and the composition of the two functors is like taking the generic fibre of an object with an integral structure. In this sense, the category in the middle can be seen as a slightly generic fibre, sitting strictly between an integral structure and an object over the generic fibre. Moreover, an object like $K^+/p$ is nonzero in this middle category, so one can talk about torsion objects, neglecting only very small objects. The official name for this middle category is $K^{+a}$-mod: almost $K^+$-modules. This category is an abelian tensor category, and hence one can define in the usual way the notion of a $K^{+a}$-algebra (= almost $K^+$-algebra), etc. . With some work, one also has notions of almost finitely presented modules and (almost) étale maps. In the following, we will often need the notion of an almost finitely presented étale map, which is the almost analogue of a finite étale cover. Theorem (Tate, Gabber-Ramero): If $L/K$ finite extension, then $L^+/K^+$ is almost finitely presented étale. Similarly, if $L^\prime/K^\prime$ finite, then $L^{\prime +}/K^{\prime +}$ is almost finitely presented étale. Here, $L^+$ is the valuation subring of $L$. As an example, assume $p\neq 2$ and $L=K(p^\frac 12)$. For convenience, we look at the situation at a finite level, so let $K_n=\mathbb{Q}_p(p^{1/p^n})$ and $L_n=K_n(p^\frac 12)$. Then $L_n^+ = K_n^+[X] / (X^2 - p^{1/p^n})$. To check whether this is étale, look at $f(X)= X^2 - p^{1/p^n}$ and look at the ideal generated by $f$ and its derivative $f^\prime$. This contains $p^{1/p^n}$, so in some sense $L_n^+$ is étale over $K_n^+$ up to $p^{1/p^n}$-torsion. Now take the limit as $n\rightarrow \infty$ to see that $L^+$ is almost étale over $K^+$. Now we can prove the theorem above: Finite étale covers of $K$ = almost finitely presented étale covers of $K^+$ = almost finitely presented étale covers of $K^+/p$ [because (almost) finite étale covers lift uniquely over nilpotents] = almost finitely presented étale covers of $K^{\prime +}/t$ [because $K^+/p = K^{\prime +}/t$, cf. the example] = almost finitely presented étale covers of $K^{\prime +}$ = finite étale covers of $K^\prime$. After we understand this theory on the base, we want to generalize to the relative situation. Here, let me make the following claim. Claim: $\mathbb{A}^1_{K^\prime}$ "equals" $\varprojlim \mathbb{A}^1_K$, where the transition maps are the $p$-th power map. As a first step towards understanding this, let us check this on points. Here it says that $K^\prime = \varprojlim K$. In particular, there should be map $K^\prime\rightarrow K$ by projection to the last coordinate, which I usually denote $x^\prime\mapsto [x^\prime]$ (because it is a related to Teichmüller representatives) and again this can be explained in an example: Say $x^\prime = t^{-1} + 5 + t^3$. Basically, we want to replace $t$ by $p$, but this is not well-defined. But we have just learned that this problem becomes less serious as we take $p$-power roots. So we look at $t^{-1/p^n} + 5 + t^{3/p^n}$, replace $t$ by $p$, get $p^{-1/p^n} + 5 + p^{3/p^n}$, and then we take the $p^n$-th power again, so that the expression has the chance of being independent of $n$. Now, it is in fact not difficult to see that $\lim_{n\rightarrow \infty} (p^{-1/p^n} + 5 + p^{3/p^n})^{p^n}$ exists, and this defined $[x^\prime]\in K$. 
Now the map $K^\prime\rightarrow \varprojlim K$ is given by $x^\prime\mapsto ([x^\prime],[x^{\prime 1/p}],[x^{\prime 1/p^2}],\ldots)$. In order to prove that this is a bijection, just note that $K^{\prime +} = \varprojlim K^{\prime +}/t^{p^n} = \varprojlim K^{\prime +}/t = \varprojlim K^+/p \leftarrow \varprojlim K^+$. Here, the last map is the obvious projection, and in fact is a bijection, which amounts to the same verification as that the limit above exists. Afterwards, just invert $t$ to get the desired identification. In fact, the good way of approaching this stuff in general is to use some framework of rigid geometry. In the papers of Kedlaya and Liu, where they are doing extremely related stuff, they choose to work with Berkovich spaces; I favor the language of Huber's adic spaces, as this language is capable of expressing more (e.g., Berkovich only considers rank-$1$-valuations, whereas Huber considers also the valuations of higher rank). In the language of adic spaces, the spaces are actually locally ringed topological spaces (equipped with valuations) (and affinoids are open, in contrast to Berkovich's theory, making it easier to glue), and there is an analytification functor $X\mapsto X^{\mathrm{ad}}$ from schemes of finite type over $K$ to adic spaces over $K$ (similar to the functor associating to a scheme of finite type over $C$ a complex-analytic space). Then we have the following theorem: Theorem: We have a homeomorphism of underlying topological spaces $|(\mathbb{A}^1_{K^\prime})^{\mathrm{ad}}|\cong \varprojlim |(\mathbb{A}^1_K)^{\mathrm{ad}}|$. At this point, the following question naturally arises: Both sides of this homeomorphism are locally ringed topological spaces: So is it possible to compare the structure sheaves? There is the obvious problem that on the left-hand side, we have characteristic $p$-rings, whereas on the right-hand side, we have characteristic $0$-rings. How can one possibly pass from one to the other side? Definition: A perfectoid $K$-algebra is a complete Banach $K$-algebra $R$ such that the set of power-bounded elements $R^\circ\subset R$ is open and bounded and Frobenius induces an isomorphism $R^\circ/p^{\frac 1p}\cong R^\circ/p$. Similarly, one defines perfectoid $K^\prime$-algebras $R^\prime$, putting a prime everywhere, and replacing $p$ by $t$. The last condition is then equivalent to requiring $R^\prime$ perfect, whence the name. Examples are $K$, any finite extension $L$ of $K$, and $K\langle T^{1/p^\infty}\rangle$, by which I mean: Take the $p$-adic completion of $K^+[T^{1/p^\infty}]$, and then invert $p$. Recall that in classical rigid geometry, one considers rings like $K\langle T\rangle$, which is interpreted as the ring of convergent power series on the closed annulus $|x|\leq 1$. Now in the example of the $\mathbb{A}^1$ above, we take $p$-power roots of the coordinate, so after completion the rings on the inverse limit are in fact perfectoid. In characteristic $p$, one can pass from usual affinoid algebras to perfectoid algebras by taking the completed perfection; the difference between the two is small, at least as regards topological information on associated spaces: Frobenius is a homeomorphism on topological spaces, and even on étale topoi. [This is why we don't have to take $\varprojlim \mathbb{A}^1_{K^\prime}$: It does not change the topological spaces. In order to compare structure sheaves, one should however take this inverse limit.] 
The really exciting theorem is the following, which I call the tilting equivalence: Theorem: The category of perfectoid $K$-algebras and the category of perfectoid $K^\prime$-algebras are equivalent. The functor is given by $R^\prime = (\varprojlim R^\circ/p)[t^{-1}]$. Again, one also has $R^\prime = \varprojlim R$, where the transition maps are the $p$-th power map, giving also the map $R^\prime\rightarrow R$, $f^\prime\mapsto [f^\prime]$. There are two different proofs for this. One is to write down the inverse functor, given by $R^\prime\mapsto W(R^{\prime \circ})\otimes_{W(K^{\prime +})} K$, using the map $\theta: W(K^{\prime +})\rightarrow K$ known from $p$-adic Hodge theory. The other proof is similar to what we did above for finite étale covers: perfectoid $K$-algebras = almost $K^{+}$-algebras $A$ s.t. $A$ is flat, $p$-adically complete and Frobenius induces isom $A/p^{1/p}\cong A/p$ = almost $K^+/p$-algebras $\overline{A}$ s.t. $\overline{A}$ is flat and Frobenius induces isom $\overline{A}/p^{\frac 1p}\cong \overline{A}$, and then going over to the other side. Here, the first identification is not difficult; the second relies on the astonishing fact (already in the book by Gabber-Ramero) that the cotangent complex $\mathbb{L}_{\overline{A}/(K^+/p)}$ vanishes, and hence one gets unique deformations of objects and morphisms. At least on differentials $\Omega^1$, one can believe this: Every element $x$ has the form $y^p$ because Frobenius is surjective; but then $dx = dy^p = pdy = 0$ because $p=0$ in $\overline{A}$. Now let me just briefly summarize the main theorems on the basic nature of perfectoid spaces. First off, an affinoid perfectoid space is associated to an affinoid perfectoid $K$-algebra, which is a pair $(R,R^+)$ consisting of a perfectoid $K$-algebra $R$ and an open and integrally closed subring $R^+\subset R^\circ$ (it follows that $\mathfrak{m} R^\circ\subset R^+$, so $R^+$ is almost equal to $R^\circ$; in most cases, one will just take $R^+=R^\circ$). Then also the categories of affinoid perfectoid $K$-algebras and of affinoid perfectoid $K^\prime$-algebras are equivalent. Huber associates to such pairs $(R,R^+)$ a topological spaces $X=\mathrm{Spa}(R,R^+)$ consisting of continuous valuations on $R$ that are $\leq 1$ on $R^+$, with the topology generated by the rational subsets $\{x\in X\mid \forall i: |f_i(x)|\leq |g(x)|\}$, where $f_1,\ldots,f_n,g\in R$ generate the unit ideal. Moreover, he defines a structure pre sheaf $\mathcal{O}_X$, and the sub pre sheaf $\mathcal{O}_X^+$, consisting of functions which have absolute value $\leq 1$ everywhere. Theorem: Let $(R,R^+)$ be an affinoid perfectoid $K$-algebra, with tilt $(R^\prime,R^{\prime +})$. Let $X=\mathrm{Spa}(R,R^+)$, with $\mathcal{O}_X$ etc., and $X^\prime = \mathrm{Spa}(R^\prime,R^{\prime +})$, etc. .
i) We have a canonical homeomorphism $X\cong X^\prime$, given by mapping $x$ to $x^\prime$ defined via $|f^\prime(x^\prime)| = |[f^\prime] (x)|$. Rational subsets are identified under this homeomorphism.
ii) For any rational subset $U\subset X$, the pair $(\mathcal{O}_X(U),\mathcal{O}_X^+(U))$ is affinoid perfectoid with tilt $(\mathcal{O}_{X^\prime}(U),\mathcal{O}_{X^\prime}^+(U))$.
iii) The presheaves $\mathcal{O}_X$, $\mathcal{O}_X^+$ are sheaves.
iv) For all $i>0$, the cohomology group $H^i(X,\mathcal{O}_X)=0$; even better, the cohomology group $H^i(X,\mathcal{O}_X^+)$ is almost zero, i.e. $\mathfrak{m}$-torsion. This allows one to define general perfectoid spaces by gluing affinoid perfectoid spaces. Further, one can define étale morphisms of perfectoid spaces, and then étale topoi. This leads to an improvement on Faltings's almost purity theorem: Theorem: Let $R$ be a perfectoid $K$-algebra, and let $S/R$ be finite étale. Then $S$ is perfectoid and $S^\circ$ is almost finitely presented étale over $R^\circ$. In particular, no sort of semistable reduction hypothesis is required anymore. Also, the proof is much easier, cf. the book project by Gabber-Ramero. Tilting also identifies the étale topoi of a perfectoid space and its tilt, and as an application, one gets the following theorem. Theorem: We have an equivalence of étale topoi of adic spaces: $(\mathbb{P}^n_{K^\prime})^{\mathrm{ad}}_{\mathrm{et}}\cong \varprojlim (\mathbb{P}^n_K)^{\mathrm{ad}}_{\mathrm{et}}$. Here the transition maps are again the $p$-th power map on coordinates. Let me end this discussion by mentioning one application. Let $X\subset \mathbb{P}^n_K$ be a smooth hypersurface. By a theorem of Huber, we can find a small open neighborhood $\tilde{X}$ of $X$ with the same étale cohomology. Moreover, we have the projection $\pi: \mathbb{P}^n_{K^\prime}\rightarrow \mathbb{P}^n_K$, at least on topological spaces or étale topoi. Within the preimage $\pi^{-1}(\tilde{X})$, it is possible to find a smooth hypersurface (of possibly much larger degree) $X^\prime$. This gives a map from the cohomology of $X$ to the cohomology of $X^\prime$, thereby comparing the étale cohomology of a variety in characteristic $0$ with the étale cohomology of characteristic $p$. Using this, it is easy to verify the weight-monodromy conjecture for $X$.
|
{
"source": [
"https://mathoverflow.net/questions/65729",
"https://mathoverflow.net",
"https://mathoverflow.net/users/451/"
]
}
|
65,841 |
I expect this question has a very simple answer. We all know from primary school that there are no non-trivial continuous homomorphisms from $\hat{\mathbb{Z}}$ to $\mathbb{Z}$. What if we forget continuity: can anybody give an explicit example of a homomorphism? Note that $\hat{\mathbb{Z}}$ is torsion-free, and not divisible (since it's isomorphic to $\prod_p \mathbb{Z}_p$ and $\mathbb{Z}_p$ is not divisible by $p$). There is the canonical injection $\mathbb{Z} \to \hat{\mathbb{Z}}$; is there some abstract reason why it ought to have a left inverse, and if so can we write it down?
|
Let $\phi:\hat{\mathbb{Z}}\to\mathbb{Z}$ be a nontrivial homomorphism. As every nontrivial subgroup of $\mathbb{Z}$ is isomorphic to $\mathbb{Z}$, we may suppose that $\phi$ is surjective, with kernel $K$ say. Now $\phi$ induces a surjective homomorphism $\phi_n:\hat{\mathbb{Z}}/n\hat{\mathbb{Z}}\to\mathbb{Z}/n\mathbb{Z}$, but it is standard that $\hat{\mathbb{Z}}/n\hat{\mathbb{Z}}$ has order $n$, so $\phi_n$ must be an isomorphism. This implies that $K\leq n\hat{\mathbb{Z}}$ for all $n$, but $\bigcap_n n\hat{\mathbb{Z}}=0$, so $\phi$ is injective, which is clearly impossible.
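For completeness, one way to see the standard fact used above (added for illustration): since $\hat{\mathbb{Z}}\cong\prod_p\mathbb{Z}_p$ and $n\mathbb{Z}_p=p^{v_p(n)}\mathbb{Z}_p$, we get $\hat{\mathbb{Z}}/n\hat{\mathbb{Z}}\cong\prod_p\mathbb{Z}/p^{v_p(n)}\mathbb{Z}\cong\mathbb{Z}/n\mathbb{Z}$ by the Chinese remainder theorem, which indeed has order $n$.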
|
{
"source": [
"https://mathoverflow.net/questions/65841",
"https://mathoverflow.net",
"https://mathoverflow.net/users/3753/"
]
}
|
65,858 |
For most of the mathematical concepts I learn, it has more or less always been possible to find (at least google and find) unsolved problems pertaining to that specific concept. Keeping a bag of unsolved problems on most topics I know has been to my benefit in that it reassures me that mathematics is a thriving subject. Coming to the point, I am unable to find an elementary series of the kind we see in real analysis courses whose convergence is an unsolved problem. Please share if you have any. Thanks.
|
$1/\zeta(s)=\sum_{n>0}\frac{\mu(n)}{n^s}$ where $\mu$ is the Moebius function. This series is known to converge for $s\ge 1$ and diverge for $s\le 1/2$.
Its convergence is unknown if $1/2< s< 1$ (convergence in this interval is essentially the Riemann hypothesis).
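As a purely numerical illustration (my own addition; numerics can of course neither establish convergence nor divergence, and the cutoff and exponents below are arbitrary choices), one can watch how the partial sums drift for exponents in the critical range:

# Partial sums of sum_{n<=N} mu(n)/n^s for a few exponents s.
def mobius_sieve(N):
    """Moebius function mu(0..N) via a linear sieve."""
    mu = [1] * (N + 1)
    mu[0] = 0
    is_comp = [False] * (N + 1)
    primes = []
    for i in range(2, N + 1):
        if not is_comp[i]:
            primes.append(i)
            mu[i] = -1
        for p in primes:
            if i * p > N:
                break
            is_comp[i * p] = True
            if i % p == 0:
                mu[i * p] = 0
                break
            mu[i * p] = -mu[i]
    return mu

N = 10**5
mu = mobius_sieve(N)
for s in (1.0, 0.9, 0.75, 0.6):
    total = sum(mu[n] / n**s for n in range(1, N + 1))
    print(f"s = {s}: sum over n <= {N} is {total:+.6f}")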
|
{
"source": [
"https://mathoverflow.net/questions/65858",
"https://mathoverflow.net",
"https://mathoverflow.net/users/6770/"
]
}
|
66,007 |
I am wondering if there are non-homeomorphic spaces $X$ and $Y$ such that $X^2$ is homeomorphic to $Y^2$.
|
Here is an extract from MR0562824 (81d:54005), Trnková, V. Homeomorphisms of products of spaces. (Russian) Uspekhi Mat. Nauk 34 (1979), no. 6(210), 124–138: S. Ulam raised the following question in 1933: Is there a space $X$ which has nonhomeomorphic square roots, i.e., $X\cong A\times A\cong B\times B$ for some nonhomeomorphic $A,B$? This problem was solved by R. H. Fox in 1947: he constructed two nonhomeomorphic four-dimensional manifolds $A$ and $B$ such that $A\times A\cong B\times B$. upd: The reference is Fox, R. H. On a problem of S. Ulam concerning Cartesian products. Fund. Math. 34, (1947). 278–287. The answer to Ulam's question for 3-manifolds is positive as well, see Glimm, James
Two Cartesian products which are Euclidean spaces. Bull. Soc. Math. France 88 1960 131–135. The answer for 2-polyhedra is negative, see W. Rosicki, "On a problem of S. Ulam concerning Cartesian squares of 2-dimensional polyhedra.", Fund. Math. 127 (1987), no. 2, 101–125. This paper also gives the following elementary example: Take $A$ to be the disjoint union of the Hilbert cube and $\mathbb{N}$ and $B$ to be the disjoint union of two copies of the Hilbert cube and $\mathbb{N}$. Then both $A^2$ and $B^2$ are homeomorphic to the disjoint union of a countable family of Hilbert cubes and $\mathbb{N}$. Finally, in this example one can replace the Hilbert cube by any space homeomorphic to its square and not homeomorphic to two copies of itself, e.g., by $\left\{1/n\mid n\in\mathbb{Z}_{>0} \right\}\cup\{0\}$.
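Spelled out (added for illustration), with $Q$ denoting the Hilbert cube: if $A=Q\sqcup\mathbb{N}$ and $B=Q\sqcup Q\sqcup\mathbb{N}$, then
$$A^2\cong Q^2\sqcup(Q\times\mathbb{N})\sqcup(\mathbb{N}\times Q)\sqcup\mathbb{N}^2\cong\Big(\bigsqcup_{\mathbb{N}}Q\Big)\sqcup\mathbb{N},$$
using $Q^2\cong Q$, $Q\times\mathbb{N}\cong\bigsqcup_{\mathbb{N}}Q$ and $\mathbb{N}^2\cong\mathbb{N}$; the same computation gives the same answer for $B^2$. But $A\not\cong B$, since $A$ has exactly one connected component homeomorphic to $Q$ while $B$ has two.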
|
{
"source": [
"https://mathoverflow.net/questions/66007",
"https://mathoverflow.net",
"https://mathoverflow.net/users/10052/"
]
}
|
66,048 |
How do you prove that any two simply-connected domains in the plane are homeomorphic without using the Riemann mapping theorem? An elementary proof would be nice.
|
Here's how one proof goes. I'm omitting some details, at least for now. If $U$ is a simply-connected domain in the plane $\mathbb C$, then define the function
$r: U \to \mathbb R_+$ by letting $r(z)$ be the maximum radius of a (round) open disk centered at $z$, in the standard Euclidean metric, whose interior is contained in $U$. If $r(z) = \infty$ for any $z$,
then $U$ is the entire plane, so there is nothing to prove. Otherwise, define a new Riemannian metric using the conformal factor $1/r$, that is, $ds' = 1/r\ ds$ where $ds$ is arc length in the standard metric. Note that if $U$ is the upper half-plane, the metric coincides with the hyperbolic metric.
If $U = \mathbb C \setminus \{0\}$,
then all complex linear automorphisms of $\mathbb C$ preserve the metric, and the metric is that of a cylinder that is parametrized isometrically by
$\log(z) / \left \langle 2 \pi i \right \rangle$. If $U$ is a round disk, then $ds'$ is a smooth negatively-curved metric except at the center of the disk, where there is a non-smooth point, but no cone angle: if the disk has radius 1, then a circle about the center of radius $\epsilon$ has length $2 \pi \epsilon / (1-\epsilon)$ In general, although the metric $ds'$ need not be smooth, it always has non-positive curvature. Intuitively: the $1/r$ factor means it takes infinite arc length to reach the boundary, and shortest $ds'$ geodesics try to thread their way into any bays and inlets of $U$ keeping far from the shoreline, since the speed limit is drastically reduced near the shore. In particular, there is a unique $ds'$ geodesic between any two points in $U$, and geodesics have the unique continuation property, they are determined by the tangent vector at the beginning and the length. To parametrize $U$ by $\mathbb R^2$, choose any point $z_0$ in $U$. The tangent space to $U$ at $z_0$ parametrizes $U$, by $V$ goes to the geodesic through $V$ whose length is the length of $V$. Sorry for leaving off details. Somewhere else on MO I believe I posted an alternate way to do this, using the convex hull of $S^2 \setminus U'$, where $U'$ is the stereographic image of $U$ on a sphere; in the projective hyperbolic metric, this boundary of the convex hull always is isometric to the hyperbolic plane, from which a proof is easy.
|
{
"source": [
"https://mathoverflow.net/questions/66048",
"https://mathoverflow.net",
"https://mathoverflow.net/users/36038/"
]
}
|
66,075 |
Suppose you prove a theorem, and then sleep well at night knowing that future generations will remember your name in conjunction with the great advance in human wisdom. In fact, sadly, it seems that someone will publish the same (or almost the same) thing $n \ll \infty$ years later. I was wondering about what examples of this people might have. Here are two: Bill Thurston had remarked in the late seventies that Andre'ev's theorem implies the Circle Packing Theorem. The same result was proved half a century earlier by Koebe (so the theorem is now known as the Koebe-Andre'ev-Thurston Circle Packing Theorem). However, in the book Croft, Hallard T.(4-CAMBP); Falconer, Kenneth J.(4-BRST); Guy, Richard K.(3-CALG)
Unsolved problems in geometry.
Problem Books in Mathematics. Unsolved Problems in Intuitive Mathematics, II. Springer-Verlag, New York, 1991. xvi+198 pp. ISBN: 0-387-97506-3 the question of the existence of a mid-scribed polyhedron (which is obviously equivalent to the existence of a circle packing) with the prescribed combinatorics is listed as an open problem. Another example: In the early 2000s, I noticed that every element in ${\frak A}_n$ is actually a commutator, and Henry Cejtin and I proved this in arXiv:math/0303036
A property of alternating groups
Henry Cejtin, Igor Rivin
Subjects: Group Theory (math.GR) However, this result was already published by O. Ore a few years earlier: Ore, Oystein
Some remarks on commutators.
Proc. Amer. Math. Soc. 2, (1951). 307–314. But that's not all: in D. Husemoller's thesis, published as: Husemoller, Dale H.
Ramified coverings of Riemann surfaces.
Duke Math. J. 29 1962 167–174. Only a few years after Ore's paper, this result is reproved (by Andy Gleason) -- this is actually the key result of the paper. Another example (which actually inspired me to ask the question): If you look at the comments to (un)decidability in matrix groups , you will find a result proved by S. Humphries in the 1980s reproved by other people in the 2000s (and I believe there are other proofs in between). It would be interesting to have a list of such occurrences (hopefully made less frequent by the existence of MO).
|
This is maybe an extreme example. I don't remember if it was a joke or not, but I recall receiving an e-mail announcement about someone recently inventing the trapezoidal method for approximating Riemann integrals. Here's the Wikipedia page about the paper / controversy: http://en.wikipedia.org/wiki/Tai%27s_method
|
{
"source": [
"https://mathoverflow.net/questions/66075",
"https://mathoverflow.net",
"https://mathoverflow.net/users/11142/"
]
}
|
66,084 |
Since the old days, many mathematicians have been attaching monetary rewards to problems they admit are difficult. Their reasons could be to draw other mathematicians' attention, to express their belief in the magnitude of the difficulty of the problem, to challenge others, "to elevate in the consciousness of the general public the fact that in mathematics, the frontier is still open and abounds in important unsolved problems. 1 ", etc. Current major instances are The Millennium Prize Problems Beal's conjecture Other problems with money rewards Kimberling's list of problems Question: What others are there? To put some order into the answers, let's put a threshold prize money of 100 USD. I expect there are more mathematicians who have tucked problems in their web-pages with some prizes. What this question does not intend to achieve: once offered but then collected or withdrawn offers new pledges of sums of money just here P.S. Some may be interested in the psychological aspects of money rewards. However, to keep the question focused, I hope this topic won't be ignited here. One more, I understand that mathematicians do not work merely for money.
|
Two which are for food rather than cash:

Let $f = t^{2d} + f_1 t^{2d-1} + f_2 t^{2d-2}+ \cdots + f_d t^d + \cdots+ f_2 t^2 +f_1 t + 1$ be a palindromic polynomial, so the roots of $f$ are of the form $\lambda_1$, $\lambda_2$, ..., $\lambda_d$, $\lambda_1^{-1}$, $\lambda_2^{-1}$, ..., $\lambda_d^{-1}$. Set $r_k = \prod_{j=1}^d (\lambda_j^k-1)(\lambda_j^{-k} -1)$. Conjecture: The coefficients of $f$ are uniquely determined by the values of $r_1$, $r_2$, ..., $r_{d+1}$. Motivation: When computing the zeta function of a genus $d$ curve over $\mathbb{F}_q$, the numerator is essentially of the form $f$. (More precisely, it is of the form $q^d f(t/\sqrt{q})$ for $f$ of this form.) Certain algorithms proceed by computing the $r_k$ and recovering the coefficients of $f$ from them. Note that you have to recover $d$ numbers, so you need at least $r_1$ through $r_d$; it is known that you need at least one more and the conjecture is that exactly one more is enough. Reward: Sturmfels and Zworski will buy you dinner at Chez Panisse if you solve it.

Consider the following probabilistic model: We choose an infinite string, call it $\mathcal{A}$, of $A$'s, $C$'s, $G$'s and $T$'s. Each letter of the string is chosen independently at random, with probabilities $p_A$, $p_C$, $p_G$ and $p_T$. Next, we copy the string $\mathcal{A}$ to form a new string $\mathcal{D}_1$. In the copying process, for each pair $(X, Y)$ of symbols in $\{ A, C, G, T \}$, there is some probability $p_1(X \to Y)$ that we will miscopy an $X$ as a $Y$. (The $16$ probabilities stay constant for the entire copying procedure.) We repeat the procedure to form two more strings $\mathcal{D}_2$ and $\mathcal{D}_3$, using new probability matrices $p_2(X \to Y)$ and $p_3(X \to Y)$. We then forget the ancestral string $\mathcal{A}$ and measure the $64$ frequencies with which the various possible joint distributions of $\{ A, C, G, T \}$ occur in the descendant strings $(\mathcal{D}_1, \mathcal{D}_2, \mathcal{D}_3)$. Our procedure depended on $4+3 \times 16$ inputs: the $(p_A, p_C, p_G, p_T)$ and the $p_i(X \to Y)$. When you remember that probabilities should add up to $1$, there are actually only $39$ independent parameters here, and we are getting $63$ measurements (one less than $64$ because probabilities add up to $1$). So the set of possible outputs is a semialgebraic set of codimension $24$. Conjecture: Elizabeth Allman has a conjectured list of generators for the Zariski closure of the set of possible measurements. Motivation: Obviously, this is a model of evolution, and one which (some) biologists actually use. Allman and Rhodes have shown that, if you know generators for the ideal for this particular case, then they can tell you generators for every possible evolutionary history. (More descendants, known sequence of branching, etc.) There are techniques in statistics where knowing this Zariski closure would be helpful progress. Reward: Elizabeth Allman will personally catch, clean, smoke and ship an Alaskan Salmon to you if you find the generators. (Or serve it to you fresh, if you visit her in Alaska.)
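One remark that may be useful for experimenting with the first problem (my own observation, easily checked from the definition, and not part of the original statement): since the roots of $f$ come in pairs $\lambda_j,\lambda_j^{-1}$ and $(\lambda^{-1})^k-1=\lambda^{-k}-1$, we have
$$ r_k=\prod_{f(\mu)=0}(\mu^k-1)=\mathrm{Res}\big(f(t),\,t^k-1\big), $$
so each $r_k$ is an explicit integer polynomial in the coefficients $f_1,\dots,f_d$.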
|
{
"source": [
"https://mathoverflow.net/questions/66084",
"https://mathoverflow.net",
"https://mathoverflow.net/users/5627/"
]
}
|
66,121 |
1) (By Goedel's theorem) One cannot prove, in PA, a formula that can be interpreted to express the consistency of PA. (Hopefully I said it right. Specialists, correct me, please.)
2) There are proofs (although for the purpose of this question I should put it in quotation marks) of the consistency of PA. The questions are:
A) Is the consistency of PA still a mathematical question that can be considered open?
B) Is it a mathematical question? (To this I dare to say that it is a mathematical question. Goedel himself translated it into a specific formula, but then I have question C.)
C) Is accepting the proofs of the consistency of PA as conclusive a mathematically justified act, or an act of taking a philosophical stance? Motivation: There is a discussion in the mailing list FOM (Foundations Of Mathematics) about this topic, in part motivated by this talk . I thought a discussion about this fundamental matter concerns most mathematicians and wanted to bring it to a wider audience. Edit: It is simple. Either:
1) Consistency of PA is proved and has a proof (as claimed by some in FOM) as valid as any other theorem in math, or
2) On top of the existing proofs a philosophical choice is needed (which explains the length of the discussions in FOM and justifies closing this question, but contradicts what is being claimed emphatically by some in FOM). But you see: if 1) is the case then there is no need for the lengthy discussions, and this is a concrete math question like any other, terminating with a proof.

Edit 2: Thank you all. Although I had seen some of these arguments at FOM, now I think I have my ideas more organized and I can make my question more concrete. I would like to try to put aside what involves 'beliefs'. In, I think, all the answers shown, there has been this action entering the argument quite soon, e.g. in Chow's: (approx.) if you believe in the existence of the naturals then con(PA) follows; in Friedman's: (approx.) if you believe in (about a dozen basic axioms) + (1/n subsequences) then con(PA) follows. I want to put aside that initial action because (1): it is a philosophical question and that is not what I want to discuss; (2): because, if I believe (propositional logic) + (p/-p), then I believe ... for example (everything you can say); and maybe (3): because I, personally, don't do math to believe what I prove. When I show P->Q, in a sequence of self-imposed constrained steps, I don't do it with the purpose of showing that, and at the end I don't have a complete conviction that, Q is a property of whatever could be a real world. But that is just philosophy, and philosophy allows for any sort of choices. That is why I want to put it aside, at least until the moment in which it is inevitably needed. My question is: Is any of the systems that prove con(PA) a system that has itself been proven consistent? Why ask this question? Regardless of how your feelings are about the ontological nature of what you prove, we can say that, since an inconsistent system proves everything, a consistent system is a bit more interesting for not doing so. At least if it is because there is not always a proof in which you use modus ponens twice (after you have found p/-p) for everything that you want to prove. I guess that also, to answer the question above, it should be clarified what to accept as a consistency proof. Let's leave it kind of open and just try to delay the need for a philosophical stance as much as possible.
|
EDIT: I have written a paper that greatly expands on my answer here, and that in particular contains sketches of Gentzen's proof and Friedman's proof, as well as a discussion of formalism. I have already posted an answer but in light of the discussion and the kinds of confusions that have emerged, I believe that this additional answer will be helpful. Let us first note that the consistency of PA, or more precisely a certain formalized version of it that I will call "Con(PA)," is a theorem of Zermelo-Fraenkel set theory (ZF). Conceptually, the simplest ZF proof is obtained by formalizing the easy and almost trivial argument that N, the natural numbers, is a model of PA. ZF is an extremely powerful system, and the full power of ZF is not needed for proving Con(PA). Famously, Gentzen showed that primitive recursive arithmetic (PRA), a very weak system, can prove Con(PA) if you add the ability to do induction up to the countable ordinal $\epsilon_0$ . Other ways to prove Con(PA) are available. Let B-W denote the statement that "every bounded sequence of rational numbers contains a subsequence $(q_i)$ such that for all $n$ , $|q_i - q_j| < 1/n$ for all $i, j > n$ ." Then B-W implies every axiom of PA, and this implication can be proven in the system RCA $_0$ , yielding a relative consistency proof. Moreover, according to Harvey Friedman, RCA $_0$ can be replaced by SRM (strict reverse mathematics). Most mathematical statements are no longer considered "open problems" once a proof has been published or otherwise made widely available, and checked and confirmed by experts to be correct. Note that published proofs, and expert verification, usually make no explicit reference to any particular underlying formal system such as ZF or PRA. Mathematicians are trained to recognize correct proofs when they see them, even if no set of axioms is explicitly specified. If pressed to specify an axiomatic system, a common choice is ZF, or ZFC (ZF plus the axiom of choice). If a proof is available that is explicitly formalizable in ZF, that is normally regarded as more than sufficient for settling an assertion and removing its "open problem" status. In the case of Con(PA), the aforementioned "normal conditions" for removing its "open problem" status have been met, and in fact exceeded. Nevertheless, some debate continues over its status, most likely because Con(PA) is widely perceived to be a somewhat unusual mathematical statement, having connections to philosophical questions in the foundations of mathematics. For example, some people, whom I will loosely call "formalists" or "ultrafinitists," maintain that many ordinary mathematical statements (e.g., "every differentiable function is continuous") have no concrete meaning, and the only concrete thing that can be said about them is whether they can or cannot be proved in this or that formal system; however, a statement such as "PA is consistent" is regarded as having a direct, concrete meaning. Roughly speaking this is because "PA is inconsistent," unlike infinitary mathematical statements, can be assigned a quasi-physical meaning as the existence of a certain finite sequence of symbols that we can physically apprehend. While the formalist agrees with all the above facts about the provability of Con(PA) in this or that formal system, such formal proofs don't necessarily carry any weight with the formalist as far as establishing the consistency of PA (in what I've called the "quasi-physical" sense) goes. 
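To make the formalized sentence concrete (this is the standard arithmetization, not anything specific to the answer above):
$$
\mathrm{Con}(\mathrm{PA}) \;:\equiv\; \forall p\ \neg\,\mathrm{Prf}_{\mathrm{PA}}\bigl(p,\ \ulcorner 0=1 \urcorner\bigr),
$$
where $\mathrm{Prf}_{\mathrm{PA}}(p,q)$ is a primitive recursive predicate expressing that $p$ codes a PA-derivation of the formula coded by $q$. In particular Con(PA) is a $\Pi_1$ sentence about natural numbers: an inconsistency, if one existed, would be witnessed by a concrete finite object (a derivation of $0=1$), whereas consistency has no comparably finite witness.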
Formalists will generally agree that explicitly exhibiting a contradiction in PA will definitively establish its inconsistency , but may differ regarding what, if anything, would definitively establish its consistency . There are others who are not formalists but who reject the commonly accepted standard of ZF(C) and only accept proofs that are formalizable in much weaker systems. For example, someone with strong constructivist leanings might only accept proofs that are formalizable in RCA $_0$ . For such a person, the proof of Con(PA) in ZF carries no weight. Roughly speaking, the usual ZF proof, that proceeds by showing that N is a model of the axioms of PA, assumes that any first-order formula defines a set of natural numbers, and this assumption is unprovable on the basis of RCA $_0$ alone. In fact, one can prove that Con(PA) is unprovable in RCA $_0$ . Such a person might regard the consistency of PA as permanently unknowable (in a way similar to those who regard the continuum hypothesis as permanently unknowable since it has been proved independent of ZFC). Note, by the way, that this person would also regard a sizable portion of generally-accepted mathematics (including Brouwer's fixed-point theorem, the Bolzano-Weierstrass theorem, etc.) as being "unproved" or "unprovable." To summarize, the consistency of PA is not an open problem in the usual sense of the term "open problem." Some people do nevertheless assert that it is an open problem, or that it has not been proven. When you encounter such an assertion, you should be aware that most likely, the person is using the term "open problem" in a somewhat nonstandard fashion, and/or holds to certain standards of proof that are more stringent than those that are commonly accepted in the mathematical community. Finally, to answer the new question that Franklin has asked, about whether the consistency of any of the systems in which Con(PA) has been proved has been proved: The answer is, "not in any sense that you would likely find satisfying." For example, one can "prove" that PRA + induction up to $\epsilon_0$ is consistent, in the sense that the consistency proof can be formalized in ZF, which as I said above is the usual standard for settling mathematical questions. If, however, the reason that you're asking the question is that you doubt the consistency of PA, and are hoping that you can settle those doubts by proving the consistency of PA in some "weaker" system that can then be proved consistent using "weaker" assumptions that you don't have any doubts about, then you're basically out of luck. This, roughly speaking, was Hilbert's program for eliminating doubts about the consistency of infinitary set theory. The hope was that one could prove the consistency of (say) ZF on the basis of a weak system such as (say) PRA, about which we had no doubts. But Goedel showed that not only is this impossible, but even if we allow all of ZF into our arsenal, we still can't prove the consistency of ZF. For better or for worse, this tempting road out of skepticism about consistency is intrinsically blocked.
|
{
"source": [
"https://mathoverflow.net/questions/66121",
"https://mathoverflow.net",
"https://mathoverflow.net/users/5506/"
]
}
|
66,377 |
It is often said that "Differentiation is mechanics, integration is art." We have more or less simple rules in one direction but not in the other (e.g. product rule/simple <-> integration by parts/u-substitution/often tricky). There are all kinds of anecdotes alluding to this fact (see e.g. this nice one from Feynman). Another consequence of this is that differentiation is easily automated within a CAS but integration often is not. My question: We know that there is a deep symmetry based on the Fundamental theorem of calculus, yet there seems to be another fundamental structural asymmetry. What is going on here...and why? Thank you. EDIT: Some people asked for clarification, so I try to give it. The main objection to the question is that asymmetry between two inverse operations is more the rule than the exception in math, so people are not very surprised by this behaviour. There is no doubt about that - but, and that is a big but, there is always a good reason for that kind of behaviour! E.g. multiplying prime numbers is obviously easier than factoring the result, since you have to test for the factors when doing the latter. Here it is understandable how you define the original operation and its inverse. With symbolic differentiation and integration the case doesn't seem to be that clear cut - this is why there are so many good discussions taking place in this thread (which by the way please me very much). It is this Why at the bottom of things that I am trying to understand. Thank you all again!
|
One relevant thing here is that you are referring to differentiating and integrating within the class of so-called elementary functions, which are built recursively from polynomials and complex exponential and logarithmic functions by taking the closure under the arithmetic operations and composition. Here one can argue by recursion to show that the derivative of an elementary function is elementary, but the antiderivatives might not be elementary. This should surprise one no more than the fact that the square of a rational number is rational, but the square root of a rational number might be irrational. (The analogy isn't completely idle, as shown by differential Galois theory.) In other words, the symmetry you refer to is really based on much wider classes of functions (e.g. continuous and continuously differentiable functions), far beyond the purview of the class of elementary functions. But let's put that aside. The question might be: is there a mechanical procedure which will decide when an elementary function has an elementary antiderivative (and if it does, exhibit that antiderivative)? There is an almost-answer to this, the so-called Risch algorithm, which I believe is a basis for many symbolic integration packages. But see particularly the issues mentioned in the section "Decidability". There is another interesting asymmetry: in first-order logic, derivatives are definable in the sense that given some expansion of the structure of real numbers, say for example the real numbers as an exponential field, the derivative of a definable function is again definable by a first-order formula. But in general there is no purely first-order construction of, for example, the Riemann integral (involving quantification over finer and finer meshes). I seem to recall that there are similar difficulties in getting a completely satisfactory notion of integration for recursively defined functions on the surreals, due in part to the incompleteness (i.e., the many holes) in the surreal number line.
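A small computational illustration of this asymmetry (a sketch assuming the SymPy library is available; its integrator implements, among other things, pieces of the Risch approach mentioned above):

```python
import sympy as sp

x = sp.symbols('x')
f = sp.exp(x) / x

# Differentiation is a finite, purely syntactic recursion over the expression tree.
print(sp.diff(f, x))                   # exp(x)/x - exp(x)/x**2

# The antiderivative is not elementary; SymPy answers in terms of the
# special function Ei (the exponential integral).
print(sp.integrate(f, x))              # Ei(x)

# Same phenomenon for the Gaussian: the answer needs erf.
print(sp.integrate(sp.exp(-x**2), x))  # sqrt(pi)*erf(x)/2
```

That $\mathrm{Ei}$ and $\operatorname{erf}$ genuinely cannot be rewritten as elementary functions is exactly the kind of statement made precise by Liouville's theorem and differential Galois theory.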
|
{
"source": [
"https://mathoverflow.net/questions/66377",
"https://mathoverflow.net",
"https://mathoverflow.net/users/1047/"
]
}
|
66,500 |
I've heard many people say that class field theory is the same as the Langlands conjectures for GL_1 (and more specifically, that local Langlands for GL_1 is the same as local class field theory). Could someone please explain why this is true? My background is as follows: I understand the statements of class field theory (in other words, that abelian extensions correspond to open subgroups of the idele class group, and the quotient is the Galois group of that abelian extension). I know what modular forms are and what a group representation is, but not much more than that. So I'm looking to see why the statement of class field theory that I know is essentially the same as a certain statement about L-functions, representations, or automorphic forms, in such a way that a more advanced mathematician could easily recognize the latter statement as Langlands in dimension 1.
|
What you are looking for is the correspondence between algebraic Hecke characters over a number field $F$ and compatible families of $l$-adic characters of the absolute Galois group of $F$. This is laid out beautifully in the first section of Laurent Fargues's notes here . EDIT: In more detail, as Kevin notes in the comments above, an automorphic representation of $GL(1)$ over $F$ is nothing but a Hecke character; that is, a continuous character
$$\chi:F^\times\setminus\mathbb{A}_F^\times\to\mathbb{C}^\times$$
of the idele class group of $F$. You can associate $L$-functions to these things: they admit analytic continuation and satisfy a functional equation. This is the automorphic side of global Langlands for $GL(1)$. How to go from here to the Galois side? Well, let's start with the local story. Fix some prime $v$ of $F$; then the automorphic side is concerned with characters
$$\chi_v:F_v^\times\to\mathbb{C}^\times$$
Local class field theory gives you the reciprocity isomorphism
$$rec_v:W_{F_v}\to F_v^\times,$$
where $W_{F_v}$ is the Weil group of $F_v$. Then $\chi_v\circ rec_v$ gives you a character of $W_{F_v}$. This is local Langlands for $GL(1)$. Matching up the local $L$-functions and $\epsilon$-factors is basically tautological. We return to our global Hecke character $\chi$. Recall that global class field theory can be interpreted as giving a map (the Artin reciprocity map)
$$Art_F:F^\times\setminus\mathbb{A}_F^\times\to Gal(F^{ab}/F),$$
where $F^{ab}$ is the maximal abelian extension of $F$. Local-global compatibility here means that, for each prime $v$ of $F$, the restriction $Art_F\vert_{F_v^\times}$ agrees with the inverse of the local reciprocity map $rec_v$. Since $Art_F$ is not an isomorphism, we do not expect every Hecke character to be associated with a Galois representation. What is true is that $Art_F$ induces an isomorphism from the group of connected components of the idele class group to $Gal(F^{ab}/F)$. In particular, any Hecke character with finite image will factor through the reciprocity map, and so will give rise to a character of $Gal(F^{ab}/F)$. This is global Langlands for Dirichlet characters (or abelian Artin motives). But we can say more, supposing that we have a certain algebraicity (or arithmeticity) condition on our Hecke character $\chi$ at infinity. The notes of Fargues referenced above have a precise definition of this condition; I believe the original idea is due to Weil. The basic idea is that the obstruction to $\chi$ factoring through the group of connected components of the idele class group (and hence through the abelianized Galois group) lies entirely at infinity. The algebraicity condition lets us "move" this persnickety infinite part over to the $l$-primary ideles (for some prime $l$), at the cost of replacing our field of coefficients $\mathbb{C}$ by some finite extension $E_\lambda$ of $\mathbb{Q}_l$. This produces a character $$\chi_l:F^\times\setminus\mathbb{A}_F^\times\to E_\lambda^\times$$ that shares its local factors away from $l$ and $\infty$ with $\chi$, but now factors through $Art_F$. Varying over $l$ gives us a compatible family of $l$-adic characters associated with our automorphic representation $\chi$ of $GL(1)$. The $L$-functions match up since their local factors do.
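To make this concrete in the simplest case (a standard example for $F=\mathbb{Q}$; the precise powers and inverses depend on how one normalizes the reciprocity map, so read the statement below up to that choice): every algebraic Hecke character of $\mathbb{Q}$ has the form
$$
\chi = \chi_0 \cdot \lvert\cdot\rvert_{\mathbb{A}}^{\,n}, \qquad \chi_0 \ \text{of finite order},\ n \in \mathbb{Z}.
$$
Under the dictionary above, the finite-order part $\chi_0$ factors through $Art_{\mathbb{Q}}$ and corresponds to a Dirichlet character regarded as a character of $Gal(\overline{\mathbb{Q}}/\mathbb{Q})$, while the adelic norm $\lvert\cdot\rvert_{\mathbb{A}}$ corresponds to the $l$-adic cyclotomic character (or its inverse). So the compatible family attached to $\chi$ is $\chi_0 \cdot \chi_{\mathrm{cyc}}^{\,n}$: global Langlands for $GL(1)$ over $\mathbb{Q}$ recovers the classical picture of Dirichlet characters twisted by powers of the cyclotomic character.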
|
{
"source": [
"https://mathoverflow.net/questions/66500",
"https://mathoverflow.net",
"https://mathoverflow.net/users/1355/"
]
}
|
66,669 |
Where can I find a proof (in English) of the Krylov-Bogoliubov theorem, which states if $X$ is a compact metric space and $T\colon X \to X$ is continuous, then there is a $T$ -invariant Borel probability measure? The only reference I've seen is on the Wikipedia page, but that reference is to a journal that I cannot find. Of course, feel free to answer this question by providing your own proof.
|
First, fix $x \in X$ and let $\mu_1 := \delta_x$ be the Dirac measure supported at $x$. Then define a sequence of probability measures $\mu_n$ such that for any $f \in C^0 (X)$,
$$ \int_X f(y) \mathrm{d} \mu_n (y) = \frac{1}{n} \sum_{k=0}^{n-1} \int_X f \circ T^k (y) \mathrm{d} \mu_1 (y). $$
Apply the Banach-Alaoglu theorem (together with the fact that $C(X)$ is separable, so the weak-$\star$ topology is metrizable on bounded sets) to deduce that there exists a subsequence $\mu_{n_j}$ which converges in the weak-$\star$ topology. It is then very easy to prove that this limit measure is in fact T-invariant, using the formulation that $\mu$ is T-invariant if and only if $$\int_X f \circ T \,\mathrm{d} \mu = \int_X f \,\mathrm{d}\mu$$ for all continuous $f$.
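As a toy numerical illustration of the averaging construction (my own sketch, not part of any reference; it assumes nothing beyond a standard Python installation), take the irrational rotation $T(x) = x + \alpha \bmod 1$ of the circle. The integrals $\int f\,\mathrm{d}\mu_n$ from the proof are then just Birkhoff averages along the orbit of $x$, and they converge to the integral of $f$ against the invariant (Lebesgue) measure:

```python
import math

alpha = math.sqrt(2) - 1           # an irrational rotation number
T = lambda x: (x + alpha) % 1.0    # the rotation T : [0,1) -> [0,1)
f = lambda x: x * x                # test observable; its Lebesgue integral is 1/3

x0 = 0.3                           # base point of mu_1 = delta_{x0}
for n in (10, 1000, 100000):
    y, acc = x0, 0.0
    for _ in range(n):
        acc += f(y)                # acc / n = integral of f against mu_n
        y = T(y)
    print(n, acc / n)              # approaches 1/3 as n grows
```

For this particular map the limit is the same for every starting point (unique ergodicity); in general the theorem only guarantees some invariant limit measure along a subsequence.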
|
{
"source": [
"https://mathoverflow.net/questions/66669",
"https://mathoverflow.net",
"https://mathoverflow.net/users/8382/"
]
}
|
66,681 |
A lot of notions in differential geometry have direct meaning in Physics. For example: A Riemannian metric is a way to encode distances on a manifold and in Physics it is the gravitational field. The curvature of the Levi-Civita connection gives the strength of the gravitation in a certain sense. A principal $G$-connection is an object that allows us to do parallel transport conveniently with respect to an action of a certain Lie group $G$, and in Physics it is a gauge field, that is, a field that is related to a fundamental interaction; for instance a principal $U(1)$-connection can be seen as the electromagnetic field. The curvature of the connection gives the field strength, in a way. I would like to have an interpretation, in classical differential geometry, of what a spinor field is (when the manifold on which we are working admits a spin structure), that is, a section of the spinor bundle. By classical differential geometry I mean typical manifolds, not supermanifolds. This is because, for me, spinors in the theory of supermanifolds play a different role, since in a way they are "odd spacetime coordinates". I am interested in the geometry of classical fields: a spinor field represents "matter" (fermions) whereas gauge fields (that is, principal connections) represent "forces" (bosons). But this is Physics. I am interested in a mathematical interpretation like: Riemannian metric = gravitational field = a way to measure distances, Principal connection = gauge field = a way to do parallel transport, Spinor field = matter field = what in Mathematics? So my questions are: In classical differential geometry (that is, ordinary manifolds), how can we interpret spinor fields geometrically? How can we interpret the spin connection and its curvature? Thanks. EDIT: In a comment below I was saying that spinor geometry is of fundamental importance to the Atiyah-Singer theorem. So perhaps this gives a lead to other people to help me with the interpretation of spinors in classical differential geometry.
|
As far as I know, this sort of structure was first invoked by Dirac in order to take a square root of the Laplacian, and this he was doing in order to write down Lorentz invariant Klein-Gordon equations. It is a useful exercise to try to solve the equation $D^2 = \Delta$ on a Euclidean space $V$ for a first order operator $D$; you will find that the coefficients have to satisfy certain relations that cannot be satisfied by ordinary real or complex numbers. The algebraic structure required to obtain these relations is provided by an algebra $A$ with $V$ as a linear subspace such that $v^2 = -||v||^2 1$ in the algebra. In other words, you need to take a "square root" of your quadratic form. In brief, a spinor bundle on a Riemannian manifold is a setting for taking a square root of the Riemannian metric. To be precise, it is a bundle $S$ on which tangent vectors act as bundle morphisms in such a way that $v^2 s = -||v||^2 s$. In Dirac's equation, the coefficients of $D$ were given by certain matrices (the "Pauli spin matrices"), and thus he was thinking of $D$ as taking values in a vector space which carries a representation of the algebra $A$. Thus the spinor bundle is a global version of that vector space. That tells you what properties the spinor bundle is supposed to have, but it doesn't tell you what the bundle actually is. If you look it up in a book, you will find that the spinor bundle is an associated bundle to a principal $Spin(n)$ (or $Spin^c(n)$) bundle via the spin representation, but to me that is only a little more helpful than defining a Riemannian metric to be a reduction of structure group from the principal $GL(n)$-bundle of frames to an $O(n)$-bundle. Here is what I would consider to be a more concrete and well-motivated description. Let us return to the algebra $A$ associated to a Euclidean space $V = \mathbb{R}^n$ as above. The universal example of such an algebra is the Clifford algebra $Cl(V)$, equipped with a natural left action of $V$. Choosing an orthonormal basis for $V$, one can describe $\mathbb{R}_n := Cl(V)$ as the universal algebra over $\mathbb{R}$ generated by symbols $e_1, \ldots, e_n$ subject to the relations $e_j^2 = -1$ and $e_j e_k = -e_k e_j$ for $j \neq k$. It is not hard to see that $Cl(V)$ is isomorphic as a vector space (but not as an algebra) to the exterior algebra of $V$, and thus $Cl(V)$ inherits a natural $\mathbb{Z}/2\mathbb{Z}$ grading, given by products of even / odd numbers of generators. Notice that right multiplication by the $j$th generator is an odd anti-involution, so a choice of orthonormal basis for $V$ gives $Cl(V)$ the structure of an $n$-multigraded super algebra. We can define a (real) spinor bundle of an $n$-manifold to be a bundle which is locally isomorphic to the trivial bundle whose fibers are given by $\mathbb{R}_n$ equipped with a left action of the tangent bundle and an $n$-multigrading structure coming from a choice of local orthonormal frame. There is an obvious notion of complex spinor bundle as well: just use the complex Clifford algebra $\mathbb{C}_n$. Note that the fiber dimension of this bundle will be twice that of the bundle obtained via the spin representation, but the multigrading operators can be used to "reduce" my version of the spinor bundle down to the usual version.
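To spell out the "useful exercise" above (a quick sketch; I take the sign convention $\Delta = -\sum_j \partial_j^2$, which matches $v^2 = -\lVert v \rVert^2$): write $D = \sum_j e_j \partial_j$ with constant coefficients $e_j$. Then
$$
D^2 = \sum_{j,k} e_j e_k\, \partial_j \partial_k = \sum_j e_j^2\, \partial_j^2 + \sum_{j<k} \left(e_j e_k + e_k e_j\right) \partial_j \partial_k ,
$$
so $D^2 = \Delta$ forces $e_j^2 = -1$ and $e_j e_k = -e_k e_j$ for $j \neq k$. These are exactly the Clifford relations, and for $n \geq 2$ they cannot be satisfied by commuting real or complex scalars, which is what forces the algebra $A$ (and, globally, the spinor bundle) into the picture.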
There are lots of reasons why I believe it is more convenient to think of a spinor bundle as a bundle of Clifford algebras with extra supersymmetry data, but I will briefly focus on a topological reason that I think cuts to the heart of the matter. The existence of a real spinor bundle on a manifold $M$ (a "Spin structure") is a rather severe condition. The complexification of a real spinor bundle is a complex spinor bundle, but not all complex spinor bundles ("Spin$^c$ structures") arise in this way. For example, any complex manifold has a spin$^c$ structure, but even $\mathbb{C}P^2$ fails to have a spin structure. An orientation on $M$ can be recovered from a choice of spin$^c$ structure, and indeed "spin$^c$-able" is only a little bit stronger than orientable - most orientable manifolds that you can name are probably spin$^c$-able. My point in bringing this up is to relate spinor bundles to K-homology, the generalized homology theory dual to topological K-theory. In ordinary homology theory, a choice of orientation on an $n$-manifold $M$ is the same thing as a choice of fundamental class in $H_n(M)$. Similarly, a choice of real / complex spinor bundle on a $n$-manifold $M$ is the same thing as a choice of fundamental class in the $n$th degree real / complex K-homology of $M$ (the multigrading data are crucial here). This observation is the starting point for some of the more conceptual proofs of the Atiyah-Singer index theorem, but this answer has gone on long enough. I hope it helps!
|
{
"source": [
"https://mathoverflow.net/questions/66681",
"https://mathoverflow.net",
"https://mathoverflow.net/users/14806/"
]
}
|
66,699 |
The usual mathematical formulation of a 2d (closed) TQFT is as a functor from the category of 2-dim cobordisms between 1-dim manifolds to the category of vector spaces (satisfying various properties). For example, a pair of pants (a morphism from $S^1$ to $S^1 \sqcup S^1$) is mapped to a linear map $f:V\to V\otimes V$; similarly a pair of pants in the other direction, a morphism from $S^1 \sqcup S^1$ to $S^1$, is mapped to a linear map $g:V\otimes V\to V$. Then there are axioms (of the symmetric monoidal category) which say that $f$ and $g$ are essentially the same, reflecting the fact that both $f$ and $g$ came from the same pair of pants. For me (as a quantum field theorist) all this seems very roundabout. The extra axioms are there to ensure what is obvious from the point of view of the two-dimensional field theory; the extra axioms were necessary because the boundaries are arbitrarily grouped into "the source" and "the target" of a morphism, by picking the direction of time inside the 2d surface. (It's called "the Hamiltonian formulation" in physics.) I think you shouldn't introduce the time direction in the first place, or in the physics terminology, you should just use the "Lagrangian formulation". In some sense, the idea of "morphism" itself implies an implicit choice of the direction of time. However, you shouldn't introduce the direction of time in a Euclidean quantum field theory. So, you shouldn't use the concept of morphism. The idea of "arrow" itself is so passé, it's a pre-relativity concept which puts paramount importance on "time" as something distinct from "space". So, I would just formulate a 2d TQFT as an association of $f_k:Sym^k V \to K$ to a Riemann surface having $k$ $S^1$ boundaries, and an axiom relating $f_{k}$ and $f_{l}$ to $f_{k+l-2}$. Why is this not preferred in mathematics? Yes, in the physics literature too, the transition from the Hamiltonian framework (pre Feynman) to the Lagrangian framework (post Feynman) took quite a long time... Or is the higher-category theory (of which I don't know anything) exactly the "Lagrangian formulation" of the TQFT?
|
Mathematicians have sometimes defined TQFTs in the way Yuji suggests. Indeed, Getzler and Kapranov define the notion of "modular operad" for precisely this purpose (it formalizes the relations between $f_k$, $f_l$ and $f_{k+l-2}$, as well as between $f_k$ and $f_{k-2}$). Earlier, Kontsevich and Manin axiomatized Gromov-Witten invariants along these lines (without distinguishing between incoming and outgoing). Perhaps the main reason that mathematicians use the language of symmetric monoidal categories is that this is very familiar to them. If you want to explain the idea of a TQFT to the average mathematician, it's easier to say "it's a functor" than to say "it's a collection of linear maps $f_k$ satisfying these relations..." In addition, there are many very basic examples where the distinction between incoming and outgoing is really important. For example, if $A$ is any associative algebra, then the Hochschild cohomology $HH(A)$ of $A$ carries maps $HH(A)^{\otimes n} \to HH(A)$ indexed by Riemann surfaces of genus $0$, with $n$ incoming and one outgoing boundary components. However, $A$ needs to have a great deal of additional structure -- it needs to be a Calabi-Yau algebra -- in order for this to extend to a fully-fledged TQFT. As for Yuji's last point, I wouldn't think of the higher-categorical formulation of TQFT as a version of the Lagrangian formalism. After all, for $0+1$ dimensional TQFTs, the higher-categorical formulation reduces to the usual Hamiltonian formalism.
|
{
"source": [
"https://mathoverflow.net/questions/66699",
"https://mathoverflow.net",
"https://mathoverflow.net/users/5420/"
]
}
|
66,812 |
The wikipedia page on Srinivasa Ramanujan gives a very strange formula: Ramanujan: If $0 < a < b + \frac{1}{2}$ then, $$\int\limits_{0}^{\infty} \frac{ 1 + x^{2}/(b+1)^{2}}{ 1 + x^{2}/a^{2}} \times \frac{ 1 + x^{2}/(b+2)^{2}}{ 1 + x^{2}/(a+1)^{2}} \times \cdots \ \textrm{dx} = \frac{\sqrt{\pi}}{2} \small{\frac{ \Gamma(a+\frac{1}{2}) \cdot \Gamma(b+1)\: \Gamma(b-a+\frac{1}{2})}{\Gamma(a) \cdot \Gamma(b+\frac{1}{2}) \cdot \Gamma(b-a+1)}}$$ The question I would like to pose to this community is: What could be the intuition behind discovering this formula? Next, I see that Ramanujan has discovered a lot of formulas for expressing $\pi$ as series. May I know what the advantage is of having the same number expressed as a series in different ways. Is it useful at all? From what I know Ramanujan basically worked on infinite series, continued fractions, $\cdots$ etc. I have never seen applications of continued fractions in the real world. I would also like to know if continued fractions have any applications. Hope I haven't asked too many questions. As I was posting this question the last question on application of continued fractions popped up and I thought it would be a good idea to pose it here, instead of posing it as a new question.
|
This is one of those precious cases when Ramanujan himself provided (a sketch of) a proof. The identity was published in his paper "Some definite integrals" ( Mess. Math. 44 (1915), pp. 10-18) together with several related formulae. It might be instructive to look first at the simpler identity (i.e. the limiting case when $b\to\infty$; the identity mentioned in the original question can be obtained by a similar approach):
$$\int\limits_{0}^{\infty} \prod_{k=0}^{\infty}\frac{1}{ 1 + x^{2}/(a+k)^{2}}dx = \frac{\sqrt{\pi}}{2} \frac{ \Gamma(a+\frac{1}{2})}{\Gamma(a)},\quad a>0.\qquad\qquad\qquad(1)$$
Ramanujan derives (1) by using a partial fraction decomposition of the product $\prod_{k=0}^{n}\frac{1}{ 1 + x^{2}/(a+k)^{2}}$, integrating term-wise, and passing to the limit $n\to\infty$. He also indicates that alternatively (1) is implied by the factorization
$$\prod_{k=0}^{\infty}\left[1+\frac{x^2}{(a+k)^2}\right] = \frac{ [\Gamma(a)]^2}{\Gamma(a+ix)\Gamma(a-ix)},$$
which follows readily from Euler's product formula for the gamma function. Thus (1) is equivalent to the formula
$$\int\limits_{0}^{\infty}\Gamma(a+ix)\Gamma(a-ix)dx=\frac{\sqrt{\pi}}{2} \Gamma(a)\Gamma\left(a+\frac{1}{2}\right).$$ There is a nice paper "Wallis-Ramanujan-Schur-Feynman" by Amdeberhan et al ( American Mathematical Monthly 117 (2010), pp. 618-632) that discusses interesting combinatorial aspects of formula (1) and its generalizations.
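For readers who want to see the identity in action, here is a quick numerical check of the last formula (a sketch assuming the mpmath library; the choice $a = 3/4$ is arbitrary, and $\Gamma(a+ix)\Gamma(a-ix) = |\Gamma(a+ix)|^2$ for real $a$ and $x$):

```python
from mpmath import mp, mpf, gamma, quad, inf, sqrt, pi

mp.dps = 30
a = mpf(3) / 4   # any a > 0 will do

# Left-hand side: integral of Gamma(a+ix)*Gamma(a-ix) = |Gamma(a+ix)|^2 over [0, inf)
lhs = quad(lambda x: abs(gamma(a + 1j * x)) ** 2, [0, inf])

# Right-hand side: sqrt(pi)/2 * Gamma(a) * Gamma(a + 1/2)
rhs = sqrt(pi) / 2 * gamma(a) * gamma(a + mpf(1) / 2)

print(lhs)   # the two values agree to the working precision
print(rhs)
```

The same kind of check works for the two-parameter identity in the question, using the Gamma-quotient form of the infinite product.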
|
{
"source": [
"https://mathoverflow.net/questions/66812",
"https://mathoverflow.net",
"https://mathoverflow.net/users/1483/"
]
}
|
67,214 |
Let ZF 1 = ZF, ZF k+1 = ZF + the assumption that ZF 1 ,...,ZF k are consistent, ZF ω = ZF + the assumption that ZF k is consistent for every positive integer k, ... and similarly define ZF α for every computable ordinal α. Then a commenter on my blog asked a question that boils down to the following: can we give an example of a Π 1 -sentence (i.e., a universally-quantified sentence about integers) that's provably independent of ZF α for every computable ordinal α? (AC and CH don't count, since they're not Π 1 -sentences.) An equivalent question is whether, for every positive integer k, there exists a computable ordinal α such that the value of BB(k) (the k th Busy Beaver number) is provable in ZF α . I apologize if I'm overlooking something obvious. Update: I'm grateful to François Dorais and the other answerers for pointing out the ambiguity in even defining ZF α , as well as the fact that this issue was investigated in Turing's thesis. Emil Jeřábek writes: "Basically, the executive summary is that once you manage to make the question sufficiently formal to make sense, then every true Π 1 formula follows from some iterated consistency statement." So, I now have a followup question: given a positive integer k, can we say something concrete about which iterated consistency statements suffice to prove the halting or non-halting of every k-state Turing machine? (For example, would it suffice to use ZF α for some encoding of α, where α is the largest computable ordinal that can be defined using a k-state Turing machine?)
|
In 1939, Alan Turing investigated such questions [ Systems of logic based on ordinals , Proc. London Math. Soc. 45, 161-228]. It turns out that one runs into problems rather quickly due to the fact that the $(\omega+1)$-th such theory is not completely well-defined. Indeed, there are many ordinal notations for $\omega+1$ and these can be used to code a lot of information. Turing's Completeness Theorem. If $\phi$ is a true $\Pi_1$ sentence in the language of arithmetic, then there is an ordinal notation $a$ such that $|a| = \omega+1$ and $T_a$ proves $\phi$. This result applies to any sound recursively axiomatized extension $T$ of $PA$. In particular, this applies to (the arithmetical part of) $ZF$. To avoid this, one might carefully choose a path through the ordinal notations, but this leads to a variety of other problems [S. Feferman and C. Spector, Incompleteness along paths in progressions of theories , J. Symbolic Logic 27 (1962), 383–390].
|
{
"source": [
"https://mathoverflow.net/questions/67214",
"https://mathoverflow.net",
"https://mathoverflow.net/users/2575/"
]
}
|
67,271 |
This came up in a question on the xkcd forums. Is it possible to have a nonconstructive metaproof, i.e. a proof that there exists a proof in some formal system which does not construct said proof? Are there any known examples, preferably with some well-known formal system like PA? Conversely, is it possible to prove a meta-metatheorem saying that any metaproof can be used to find a proof?
|
In theory, David’s answer is correct. Nevertheless, in practice it is perfectly possible to prove the existence of a proof non-constructively (such as by manipulating models and then appealing to the completeness theorem) where no one has a clue how to actually find the proof. One example which springs to mind is Jacobson’s theorem: if $R$ is a ring such that for every $a\in R$ there exists an integer $n > 1$ such that $a=a^n$, then $R$ is commutative. By completeness of equational logic, this implies that for any $n > 1$, there exists an equational derivation of $xy=yx$ from the axioms of rings and $x^n=x$. Already finding such derivation for $n=3$ is a nontrivial exercise; explicit derivations are known for some $n$, but not in general.
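For contrast, here is the classical equational derivation in the easiest case $n=2$ (this is standard, and it is not meant to suggest that the higher cases are comparable in difficulty): assume $x^2 = x$ holds identically in the ring. Then
$$
2x = (x+x)^2 = 4x^2 = 4x \;\Longrightarrow\; 2x = 0,
$$
$$
x + y = (x+y)^2 = x^2 + xy + yx + y^2 = x + xy + yx + y \;\Longrightarrow\; xy + yx = 0,
$$
and combining the two, $xy = -yx = yx$, so the ring is commutative. The analogous derivation for $n=3$ already requires considerably more work, which illustrates the remark above.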
|
{
"source": [
"https://mathoverflow.net/questions/67271",
"https://mathoverflow.net",
"https://mathoverflow.net/users/3410/"
]
}
|
67,275 |
Let $X$ be a compact set in a Polish space (metric, complete, separable) and let $G\subseteq X\times X$ be open. For $x\in X$ we define the section of $G$:
$$
s(x) = \{\, y\in X \mid \langle x,y \rangle \in \bar{G} \,\}.
$$ Here $\bar{G}$ is the closure of $G$. The set $A'\subseteq X$ is invariant if $s(x)\subset A'$ holds for all $x\in A'$. How can one verify whether there are non-empty invariant subsets of a given compact set $A\subset X$? Maybe there are known equivalent problems? It would be helpful even in the case $X = [0,1]$. I also asked it here, however I haven't received an answer.
|
|
{
"source": [
"https://mathoverflow.net/questions/67275",
"https://mathoverflow.net",
"https://mathoverflow.net/users/11768/"
]
}
|