Columns: source_id (int64, 1 to 4.64M), question (string, 0 to 28.4k chars), response (string, 0 to 28.8k chars), metadata (dict)
415,469
I am trying to understand some things about Condensed Mathematics and the Liquid Tensor Experiment . The aim of the LTE is to provide a formalised proof of Theorem 9.4 in Scholze's paper Lectures on Analytic Geometry (describing joint work with Clausen). Part of the LTE is a blueprint of the general approach. Theorem 9.4 is stated for a rather general class of chain complexes, but the blueprint works with a more restricted class which is presumably sufficient for the intended application; I have not yet understood this reduction. Specifically, the blueprint considers complexes defined with the aid of a structure called a Breen-Deligne package (Definition 2.11 in the blueprint). As pointed out in Definition 2.13 of the blueprint, there is a default example of a Breen-Deligne package. My question is: is it sufficient to consider this default example, or is there a real need to consider all possible examples? If it is sufficient to consider the default example, then it seems to me that the whole proof can be substantially simplified. (However, I am just beginning to get to grips with these ideas, so I could easily be mistaken.)
The comments have already given the answers, but let me assemble them here with my account of the story. When Scholze first posted the Liquid Tensor Experiment , it was quickly identified (by both Peter and Reid, somewhat independently I think) that Breen–Deligne resolutions would be the largest input in terms of prerequisites that needed to be formalised. Back then, I had no idea what these resolutions were, or how to prove that they existed. But they were needed for the statement of Theorem 9.4, which seemed to be a very natural first milestone to aim for. So I set out to formalise the statement of the existence of Breen–Deligne resolutions, with the expectation that we would never prove the result in Lean, but just assume it as a black box. Let me recall the statement: Theorem (Breen–Deligne). For an abelian group $A$ , there exists a resolution of the form $$ \dots → \bigoplus_{j = 0}^{n_i} ℤ[A^{r_{i,j}}] → \dots → ℤ[A^2] → ℤ[A] → A → 0 $$ that is functorial in $A$ . On the proof. See appendix to Section IV of Condensed.pdf . As Reid remarked in the comments, the proof relies on the fact that stable homotopy groups of spheres are finitely generated. ∎ (Aside: for the proof of Theorem 9.4, we really need a version that applies to condensed abelian groups $A$ , so in practice we want to abstract to abelian sheaves.) In the rest of the Lecture notes, Scholze often uses a somewhat different form of the above resolution, by assuming $n_i = 1$ and effectively dropping all the $\bigoplus$ 's. I think this was done mostly for presentational reasons. I did the same thing when I axiomatized Breen–Deligne resolutions in Lean. So really, they weren't axiomatized at all. I don't know of any reason to expect that there exists a resolution with the property that $n_i = 1$ for all $i$ , but I also don't see any reason why there shouldn't. Anyway, I needed a name for the axioms that I did put into Lean, and I chose Breen–Deligne package for that. So what is that exactly? Well, a functorial map $ℤ[A^m] → ℤ[A^n]$ is just a formal sum of matrices with coefficients in $ℤ$ . So as a first approximation, we record the natural numbers $r_j = r_{1,j}$ (since $n_i = 1$ ) and, for every $j$ , a formal sum of $(r_{j+1}, r_j)$ -matrices with coefficients in $ℤ$ . But we need one more property of Breen–Deligne resolutions: if $C(A)$ denotes the complex, then there are two maps induced by addition. There is the map $σ \colon C(A^2) → C(A)$ that comes from the functoriality of $C$ applied to the addition map $A^2 → A$ . But there is also the map $π \colon C(A^2) → C(A)$ that comes from addition in the objects of the complex. (All objects are of the form $ℤ[A^k]$ and we can simply add elements of these free abelian groups.) The final axiom of a Breen–Deligne package is that there is a functorial homotopy between $σ$ and $π$ . While playing around with the axioms, I noticed that I could write down inductively a somewhat non-trivial example of such a package. When I discussed this example with Peter, he suggested that it might in fact be suitable as a replacement for Breen–Deligne resolutions in all applications in his lecture notes so far. Several months later we found out that I had rediscovered MacLane's $Q'$ -construction. So, let's denote by $Q'$ the complex corresponding to the example package. The following result is, as far as I know, original: Lemma. Let $A$ and $B$ be (condensed) abelian groups. If $\text{Ext}^i(Q'(A),B) = 0$ for all $i ≥ 0$ , then $\text{Ext}^i(A,B) = 0$ for all $i ≥ 0$ . Proof.
I've written up a proof sketch on Zulip ( public archive of that thread ). ∎ We are in the process of formalizing the necessary homological algebra to verify the proof in Lean. It's the last major milestone left to complete the challenge in Peter's original blogpost. After that, we mostly need some glue. See this blogpost for an update on the formalisation effort. (Edit: see also https://math.commelin.net/files/LTE.pdf for a more precise roadmap of what remains to be done to complete the challenge.) Once everything is done, it should all be written up in some paper. So now, let me turn to the question: is it sufficient to consider this default example, or is there a real need to consider all possible examples? It is indeed sufficient to consider this default example. The reasons for working with the abstract concept of Breen–Deligne packages are (1) historical: I had formalised the statement of Theorem 9.4 and some other material in terms of BD packages before I realised that this default example was indeed sufficient for our purposes; and (2) practical: as Remy points out in the comments, it is (i) helpful to abstract away concrete details into a conceptual object, and (ii) there is a chance that there might be better examples leading to better constants. What would be really interesting is an example of a BD package that gives resolutions instead of merely functorial complexes. It is known that the $Q'$ construction is not a resolution. Indeed, one can easily show inductively that $H_i(Q'(ℤ))$ contains a copy of $ℤ^{2^i}$ . With more work (relying again on abstract homotopy theory), Peter gave a proof that $Q'(A) ≅ Q'(A)^{⊕2}[-1] ⊕ ℤ ⊗_{\mathbb S} A$ on Zulip ( public archive ).
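To make the shape of this data concrete, here is a purely schematic Lean 4 sketch of the raw data just described (the names and the naive encoding of formal sums are made up for illustration; this is not the actual code of the Liquid Tensor Experiment, and the final axiom, the functorial homotopy between $σ$ and $π$ , is omitted):

```lean
-- Schematic only, not the LTE/mathlib definition.  `rank j` plays the role of
-- r_j, and `diff j` is a formal ℤ-linear combination (encoded naively as a
-- list of coefficient–matrix pairs) of (r_{j+1}, r_j) integer matrices,
-- standing for the universal map ℤ[A^{r_{j+1}}] → ℤ[A^{r_j}] of the complex.
structure BDPackageData where
  rank : Nat → Nat
  diff : (j : Nat) → List (Int × (Fin (rank (j + 1)) → Fin (rank j) → Int))
```

The actual formalisation of course packages this differently and imposes the homotopy condition on top of the data.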
{ "source": [ "https://mathoverflow.net/questions/415469", "https://mathoverflow.net", "https://mathoverflow.net/users/10366/" ] }
415,507
This page is an overview of some of the types of "Galois theories" there are. One of the most basic types is the fundamental theorem of covering spaces , which says, roughly, that for each topological space $X$ , there is an equivalence of categories $$\mathrm{Cov}(X)\simeq \pi_1(X)\mathbf{Set}.$$ Grothendieck proved an analogue of that statement for schemes $X$ : $$\mathrm{EtCov}(X)\simeq \pi_1(X)\mathbf{Set}.$$ (This is again just a very rough formulation and omits some of the assumptions, but you know what I mean.) I am interested in "topos-theoretic Galois theory" . Unfortunately, this section of the nLab page isn't filled out ("(...)"), but I guess that the topos-theoretic formulation of Galois theory states, roughly, that for each topos $\mathcal E$ , $$\mathrm{Gal}(\mathcal E)\simeq \pi_1(\mathcal E)\mathbf{Set},\qquad (\ast)$$ where $\mathrm{Gal}(\mathcal E)$ is the full subcategory of $\mathcal E$ consisting of locally constant objects in $\mathcal E$ , and $\pi_1(\mathcal E)$ is the fundamental group of $\mathcal E$ . (This is suggested by the nLab section "Reformulation of classical Galois theory".) Question: Is there a reference for $(\ast)$ (and the definition of the fundamental group of a topos which is used here)? Is this in SGA 4? The linked nLab page fundamental group of a topos refers to (and is mostly copy-pasted from) Porter's paper Abstract Homotopy Theory: The interaction of category theory and homotopy theory , which contains a section called "The fundamental group of a topos", which in turn refers to SGA 1. This is weird, because SGA 1 doesn't discuss topoi, so in particular not the fundamental group of a topos! The nLab also refers to SGA 4 Exposé IV Exercice 2.7.5 for the definition of the fundamental group and SGA 4 Exposé VIII Proposition 2.1 for, I guess, $(\ast)$ in the special case that $\mathcal E$ is the étale topos of the scheme $X=\mathrm{Spec}(k)$ for some field $k$ . (But this is really just a guess - I can't read French. So correct me if I'm wrong.) Is there more of "topos-theoretic Galois theory" in SGA 4 or are these the only two paragraphs about that topic? Concerning the definition of the fundamental group of a topos, there is a construction in Moerdijk's Classifying Spaces and Classifying Topoi , in which he nevertheless remarks: The profinite fundamental group is discussed in SGA1. This suggests there are two versions of the fundamental group of a topos: the one he discusses and the "profinite" version. However, as I said, topoi don't occur in SGA 1, so I wonder where I can find the definition of the "profinite" fundamental group, if that's the notion that should be used in $(\ast)$ . (The definition used in $(\ast)$ should of course have the property that if $\mathcal E$ is the étale topos of a scheme $X$ , then $\pi_1(\mathcal E)$ is isomorphic to the étale fundamental group of $X$ .)
{ "source": [ "https://mathoverflow.net/questions/415507", "https://mathoverflow.net", "https://mathoverflow.net/users/476516/" ] }
415,703
I am thinking about the Axiom of Choice and I am trying to understand the Axiom with some, but only a little, progress. Many questions are arising in my head. So, I know that there exists a model of ZF set theory in which the set of real numbers, which is provably uncountable, is a countable union of countable sets. Question: does there exist a model of ZF set theory for which there exists a collection $A_n$ , $n\in\mathbb{N}$ , of pairwise disjoint two-element sets such that their union is not countable? Some thoughts. Let $A_n$ , $n\in\mathbb{N}$ , be a collection of pairwise disjoint two-element sets. Then for every $n\in\mathbb{N}$ there exists a bijection $f:\{1,2\}\to A_n$ . But when we want to prove that $\bigcup_{n\in\mathbb{N}}A_n$ is countable, we have to choose a countable number of bijections $f_n:\{1,2\}\to A_n$ , $n\in\mathbb{N}$ , at once (simultaneously). After this we can plainly define the bijection $f:\mathbb{N}\to\bigcup_{n\in\mathbb{N}}A_n$ by $f(1):=f_1(1)$ , $f(2):=f_1(2)$ , $f(3):=f_2(1)$ , $f(4):= f_2(2)$ , and so on. Rigorously, we write $f(k)=f_l(1)$ if $k=2l-1$ and $f(k)=f_l(2)$ if $k=2l$ . Clearly, $f$ is a bijection and we are done. But without the Axiom of Countable Choice we cannot choose $f_n$ , $n\in\mathbb{N}$ , simultaneously and the argument does not work. It is worth mentioning that if the $A_n$ are subsets of $\mathbb{R}$ , then we can choose $f_n$ , $n\in\mathbb{N}$ , simultaneously. Indeed, we can define $f_n(1):=\min A_n$ and $f_n(2):=\max A_n$ , $n\in\mathbb{N}$ , and the natural proof given above works. So if a counterexample exists, the sets $A_n$ , $n\in\mathbb{N}$ , have to be "abstract", say pairs of socks.
Yes, it is possible. This phenomenon is sometimes called Russell's socks, named after an analogy due to Russell about how one can pick out a shoe from each pair in an infinite set of pairs of shoes, but not a sock, since the socks in a pair are indistinguishable. Horst Herrlich, Eleftherios Tachtsis, On the number of Russell’s socks or 2 + 2 + 2 + . . . = ? is a nice overview which proves some basic properties, including the consistency of the existence of Russell's socks.
{ "source": [ "https://mathoverflow.net/questions/415703", "https://mathoverflow.net", "https://mathoverflow.net/users/48157/" ] }
416,937
There might be just enough time to pick another location, but I am curious what mathematicians think. Will Ukrainian mathematicians be able to attend a conference in Russia if Russia no longer recognizes their passports? To be clear: I love Russia, and I am not trying to hurt the feelings of Russian mathematicians or people. The International Mathematical Union (IMU) have made a decision on moving the ICM to a virtual event, but there is still the (less exciting) decision to make concerning the location of IMU General Assembly.
To answer the question in the title: "No." And I would imagine that Ukrainian mathematicians would boycott any ICM held in Russia, in these times. So the question of whether Russia would honor their passports will probably not arise.
{ "source": [ "https://mathoverflow.net/questions/416937", "https://mathoverflow.net", "https://mathoverflow.net/users/13268/" ] }
417,690
Teaching group theory this semester, I found myself laboring through a proof that the sign of a permutation is a well-defined homomorphism $\operatorname{sgn} : \Sigma_n \to \Sigma_2$ . An insightful student has pressed me for a more illuminating proof, and I'm realizing that this is a great question, and I don't know a satisfying answer. There are many ways of phrasing this question: Question: Is there a conceptually illuminating reason explaining any of the following essentially equivalent statements? The symmetric group $\Sigma_n$ has a subgroup $A_n$ of index 2. The symmetric group $\Sigma_n$ is not simple. There exists a nontrivial group homomorphism $\Sigma_n \to \Sigma_2$ . The identity permutation $(1) \in \Sigma_n$ is not the product of an odd number of transpositions. The function $\operatorname{sgn} : \Sigma_n \to \Sigma_2$ which counts the number of transpositions "in" a permutation mod 2, is well-defined. There is a nontrivial "determinant" homomorphism $\det : \operatorname{GL}_n(k) \to \operatorname{GL}_1(k)$ . …. Of course, there are many proofs of these facts available, and the most pedagogically efficient will vary by background. In this question, I'm not primarily interested in the pedagogical merits of different proofs, but rather in finding an argument where the existence of the sign homomorphism looks inevitable , rather than a contingency which boils down to some sort of auxiliary computation. The closest thing I've found to a survey article on this question is a 1972 note "An Historical Note on the Parity of Permutations" by TL Bartlow in the American Mathematical Monthly. However, although Bartlow gives references to several different proofs of these facts, he doesn't comprehensively review and compare all the arguments himself. Here are a few possible avenues: $\Sigma_n$ is a Coxeter group, and as such it has a presentation by generators (the adjacent transpositions) and relations where each relation respects the number of words mod 2. But just from the definition of $\Sigma_n$ as the group of automorphisms of a finite set, it's not obvious that it should admit such a presentation, so this is not fully satisfying. Using a decomposition into disjoint cycles, one can simply compute what happens when multiplying by a transposition. This is not bad, but here the sign still feels like an ex machina sort of formula. Defining the sign homomorphism in terms of the number of pairs whose order is swapped likewise boils down to a not-terrible computation to see that the sign function is a homomorphism. But it still feels like magic. Proofs involving polynomials again feel like magic to me. Some sort of topological proof might be illuminating to me.
(This is a variant of Cartier's argument mentioned by Dan Ramras.) Let $X$ be a finite set of size at least $2$ . Let $E$ be the set of edges of the complete graph on $X$ . The set $D$ of ways of directing those edges is a torsor under $\{\pm1\}^E$ . Let $G$ be the kernel of the product homomorphism $\{\pm1\}^E \to \{\pm1\}$ . Since $(\{\pm1\}^E:G)=2$ , the set $D/G$ of $G$ -orbits in $D$ has size $2$ . The symmetric group $\operatorname{Sym}(X)$ acts on $X$ , $D$ , and $D/G$ , so we get a homomorphism $\operatorname{Sym}(X) \to \operatorname{Sym}(D/G) \simeq \{\pm 1\}$ . Each transposition $(ij)$ maps to $-1$ because if $d \in D$ has all edges at $i$ and $j$ outward except for the edge from $i$ to $j$ , then $(ij)d$ equals $d$ except for the direction of the edge between $i$ and $j$ .
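For readers who want to see the construction above in coordinates, here is a small self-contained check (an illustration only, not part of the proof): with the standard orientation $d$ of the complete graph on $\{0,\dots,n-1\}$ , the edges on which the permuted orientation disagrees with $d$ correspond to the inversions of the permutation, so the parity of the flipped edges recovers the usual sign computed from a decomposition into transpositions.

```python
# Illustration only: compare the "flipped edges" parity with the sign obtained
# by decomposing a permutation into transpositions.
from itertools import combinations, permutations

def sign_via_flipped_edges(perm):
    # with the standard orientation (edge {i, j} directed i -> j for i < j),
    # the flipped edges of the permuted orientation are exactly the inversions
    flips = sum(1 for i, j in combinations(range(len(perm)), 2) if perm[i] > perm[j])
    return -1 if flips % 2 else 1

def sign_via_transpositions(perm):
    # sort the permutation by swaps; each swap is a transposition
    p, sgn = list(perm), 1
    for i in range(len(p)):
        while p[i] != i:
            j = p[i]
            p[i], p[j] = p[j], p[i]
            sgn = -sgn
    return sgn

assert all(sign_via_flipped_edges(s) == sign_via_transpositions(s)
           for s in permutations(range(5)))
print("signs agree on all permutations of 5 letters")
```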
{ "source": [ "https://mathoverflow.net/questions/417690", "https://mathoverflow.net", "https://mathoverflow.net/users/2362/" ] }
417,800
I am working on a paper which will extend a result in my thesis and have boiled one problem down to the following: show that the symmetric matrix $M_p$ , whose definition follows, is invertible for all odd primes $p$ . Letting $p>3$ be prime and $\ell = \frac{p-1}{2}$ , we define $$M_p = \begin{pmatrix} 2ij - p - 2p\left\lfloor\frac{ij}{p}\right\rfloor\end{pmatrix}_{1\leq i,j\leq \ell}$$ Examples: For $p=5$ we have $M_5 = \begin{pmatrix} -3 & -1 \\ -1 & 3 \end{pmatrix}$ and $\det(M_5) = -1\cdot 2\cdot 5$ . For $p=7$ we have $M_7 = \begin{pmatrix} -5 & -3 & -1 \\ -3 & 1 & 5 \\ -1 & 5 & -3 \end{pmatrix}$ and $\det(M_7) = 2^2 \cdot 7^2$ . For $p=11$ we have $M_{11} = \begin{pmatrix} -9 & - 7 & -5 & -3 & -1 \\ -7 & -3 & 1 & 5 & 9 \\ -5 & 1 & 7 & -9 & -3 \\ -3 & 5 & -9 & -1 & 7 \\ -1 & 9 & -3 & 7 & -5 \end{pmatrix}$ and $\det(M_{11}) = -1\cdot 2^4\cdot 11^4$ . This (seemingly) nice formula that we see above fails for primes greater than 19, though the determinant has been checked to be non-zero for all primes less than 1100. (My apologies if this question is not as motivated or as well discussed as is desired. If there are any questions or if further clarification is needed just let me know!)
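As a quick independent check of the examples above (a Python sketch using only the standard library; this is not the author's original computation), one can build $M_p$ and compute its determinant exactly:

```python
# Reproduce the examples: build M_p and compute det(M_p) exactly.
from fractions import Fraction

def M(p):
    ell = (p - 1) // 2
    return [[2 * i * j - p - 2 * p * ((i * j) // p)
             for j in range(1, ell + 1)]
            for i in range(1, ell + 1)]

def det(mat):
    # exact Gaussian elimination over the rationals
    a = [[Fraction(x) for x in row] for row in mat]
    n, sign, d = len(a), 1, Fraction(1)
    for k in range(n):
        piv = next((r for r in range(k, n) if a[r][k] != 0), None)
        if piv is None:
            return 0
        if piv != k:
            a[k], a[piv] = a[piv], a[k]
            sign = -sign
        d *= a[k][k]
        for r in range(k + 1, n):
            f = a[r][k] / a[k][k]
            for c in range(k, n):
                a[r][c] -= f * a[k][c]
    return int(sign * d)

for p in [5, 7, 11]:
    print(p, det(M(p)))   # the question reports -1*2*5, 2^2*7^2, -1*2^4*11^4
```

For $p=5$ this gives $-10$ , matching the value stated above.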
Experimentally, we have the following formula for $p$ prime: $$\det(M_p)=(-1)^{(p^2-1)/8}(2p)^{(p-3)/2}h_p^-\;,$$ where $h_p^-$ is the minus part of the class number of the $p$ -th cyclotomic field, itself essentially equal to a product of $\chi$ -Bernoulli numbers. I have not tried to prove this, but since there are many determinant formulas for $h_p^-$ in the literature, it should be possible.
{ "source": [ "https://mathoverflow.net/questions/417800", "https://mathoverflow.net", "https://mathoverflow.net/users/477847/" ] }
417,896
Given a connected smooth manifold $M$ of dimension $m>1$ , points $p_1,\dots,p_n\in M$ and positive values $\{d_{i,j};1\leq i<j\leq n\}$ satisfying the strict triangle inequalities $d_{i,j}<d_{i,k}+d_{k,j}$ , can we give $M$ a complete Riemannian metric $g$ so that $d_g(p_i,p_j)=d_{i,j}$ , where $d_g$ is the geodesic distance? This can fail in dimension $2$ , as shown in the answer by André Henriques. I'm pretty sure it has to be true for $m\geq3$ , but I have not been able to prove it. Some comments: This occurred to me while answering Equidistant points on a compact Riemannian manifold ; my answer to that question contains the ideas I tried for $m\geq3$ . By homogeneity of manifolds you can suppose the points $p_1,\dotsc,p_n$ are any set of $n$ points of $M$ , and using that it is not hard to reduce the problem to the case of $M$ being diffeomorphic to $\mathbb{R}^m$ . In particular if you prove it for $\mathbb{R}^3$ you will have proved it for any manifold of dimension $\geq 3$ . One of the first ideas which comes to mind is trying to somehow imbed $M$ in $\mathbb{R}^N$ for some big $N$ , but triangle inequalities are not sufficient for a finite set to be isometrically imbedded in some $\mathbb{R}^N$ . What if we change the strict triangle inequalities for the usual ones?
It is not possible to find $5$ points $x_1,\ldots,x_5$ on a genus zero Riemannian 2-manifold (a sphere) such that $d(x_i,x_j)=1$ for all $i,j$ . The reason is that the complete graph $K_5$ is not planar. Assume by contradiction that we have $5$ points $x_1,\ldots,x_5$ with $d(x_i,x_j)=1$ . Up to permuting the points, we may assume that the minimal geodesic connecting $x_1$ and $x_3$ crosses the minimal geodesic connecting $x_2$ and $x_4$ . Let $y$ be the point at which these two geodesics intersect. Then \begin{align*} 2&=d(x_1,x_3)+d(x_2,x_4)\\ &=d(x_1,y)+d(y,x_3)+d(x_2,y)+d(y,x_4)\\ &=\tfrac12\big((d(x_1,y)+d(y,x_2))+\\ &\qquad(d(x_2,y)+d(y,x_3))+\\ &\qquad(d(x_3,y)+d(y,x_4))+\\ &\qquad(d(x_4,y)+d(y,x_1)) \big)\\ &\ge\tfrac12\big(d(x_1,x_2)+d(x_2,x_3)+d(x_3,x_4)+d(x_4,x_1)\big)=2 \end{align*} with equality iff $y$ lies on all six geodesics (between $x_i$ and $x_j$ $\forall i,j\in\{1,2,3,4\})$ . But if $y$ lies on all six geodesics, then these six geodesics are all part of a single geodesic line (i.e. the points are "aligned"), which is clearly impossible. The crucial thing that I'm using here is the fact that geodesics admit unique extensions. If the ambient space were a graph, then my argument for deriving a contradiction wouldn't work as I wouldn't be able to conclude that the points are "aligned". In higher dimensions, the answer is yes. Take the complete graph $K_n$ on your set of points. Embed it in $M$ . Then put a metric on $M$ that agrees with your desired metric in a neighbourhood of the graph, and which is extremely huge away from the graph. Then minimal geodesics will essentially follow the graph. This solves the problem "up to $\varepsilon$ ", as the geodesics don't exactly follow the graph, but do so only approximately. To finish the argument, do the same thing in families, and invoke some version of the intermediate value theorem. Here's how the argument goes. Let $D$ be the space of metrics on your fixed finite set. Instead of doing the above construction for a single choice $d\in D$ of distances between the points $x_i$ , imagine that we adapt it to instead construct a family of Riemannian metrics on $M$ parametrised by the space $D$ . Starting from $d\in D$ , the geodesic distance between the $x_i$ produces another element $d'\in D$ . So we get a self-map $D\to D$ which is $\varepsilon$ -away from the identity map on $D$ . Now, $D$ is itself a manifold, and any self-map that's $\varepsilon$ -away from the identity is surjective. [added later: the answer is no] Error in the above argument: $D$ is in fact a manifold with boundary. My argument works for metrics $d\in D\setminus \partial D$ . I.e., metrics where the triangle inequality holds strictly. A counterexample is provided by the metric on $\{x_1,x_2,x_3,x_4\}$ given by $d(x_1,x_i)=1$ , $d(x_i,x_j)=2$ (where $i,j\in\{2,3,4\}$ ) [added even later: all is good] Ha ha! I hadn't noticed that you had assumed the strict triangle inequality to hold. So all is good, and this is a valid argument.
{ "source": [ "https://mathoverflow.net/questions/417896", "https://mathoverflow.net", "https://mathoverflow.net/users/172802/" ] }
420,094
https://userpages.monmouth.com/~colonel/nrectcover/index.html For a polyomino with no holes that cannot tile the plane, we may ask what are the maximal rectangles and infinite strips that it can cover without overlapping, allowing the tiles to extend beyond the region's perimeter. (An example appears on the site linked above.) But can there be such a polyomino that can cover an arbitrarily large square, but not the whole plane? And is this even possible for an arbitrary tile at all? The answer is probably no, but why exactly? In other words, can a polyomino which cannot tile the plane cover an infinite sequence of squares of increasing edge length?
Suppose you have a sequence $S^0 = (s_1, s_2, \ldots)$ of partial tilings, where $s_i$ covers a square of side length $2i-1$ centered at the origin. Now let's consider two of these partial tilings equivalent if they tile the central $1\times 1$ square in the same manner. Each equivalence class is a subsequence of $S^0$ . Since there are only finitely many ways to cover a $1 \times 1$ square, there must be an infinite equivalence class and subsequence $S^1 \subset S^0$ of tilings which all tile the central $1 \times 1$ square in the same way. Now let's create a new equivalence relation on $S^1 - \{s_1\}$ , considering two of the partial tilings in $S^1$ equivalent if they tile the central $3 \times 3$ square in the same manner. By the same logic as before, there must be an infinite subsequence $S^2 \subset S^1$ of tilings which all tile the central $3\times 3$ square in the same way. Repeat this procedure, and you obtain an infinite descending sequence of infinite sequences $S^i$ . Now we can construct a tiling as follows: Tile the central $1 \times 1$ square just like the tilings from $S^1$ do, then tile the central $3 \times 3$ square like the tilings from $S^2$ do, and so on. By our choice of $S^i$ , each extension agrees with the previous ones, and we end up with a valid tiling of the entire plane.
{ "source": [ "https://mathoverflow.net/questions/420094", "https://mathoverflow.net", "https://mathoverflow.net/users/160004/" ] }
420,158
I would like to ask a question inspired by the title of a book by Sir Roger Penrose ([1]). The germ of this is to ask about the role, if any, of fashion in research in pure and applied mathematics. I'm going to focus the post (and modulate my genuine idea) on an aspect that I think can be discussed here from an historical and mathematical point of view, according to the following: Question. I would like to know what are examples of remarkable achievements (in your research subject or another that you know) that arose against the general view/work of the mathematical community since the year 1900 up to the year 1975. Refer to the literature if you need it. Many thanks. An example is the remark made by the author of [2] (as I interpret it) about Lennart Carleson and a conjecture due to Lusin in the second paragraph of page 671 (the article is in Spanish). Your answer can refer to (for the research of pure or applied mathematics, and mathematical physics) unexpected proofs of old unsolved problems, surprising examples or counterexamples, approaches or mathematical methods that defied the contemporary (ordinary, mainstream) approaches, incredible modelizations solving difficult problems,... all these in the context of the question, that is: the proponents/teams of these solutions and ideas swam against the work of the contemporary mathematics that they knew at the time. *You can refer to the literature for the statements of the theorems, examples, methods,... if you need it. Also from my side it is welcome if you want to add some of your own historical remarks about the mathematical context concerning the answer that you provide us: that is, historical remarks (if there is some philosophical issue, also that) emphasizing why the novel work of the mathematician that you evoke was swimming against the tide of the contemporary ideas of those years. References: [1] Roger Penrose, Fashion, Faith, and Fantasy in the New Physics of the Universe , Princeton University Press (2016). [2] Javier Duoandikoetxea, 200 años de convergencia de las series de Fourier , La Gaceta de la Real Sociedad Matematica Española, Vol. 10, Nº 3, (2007), pages 651-677.
After mathematicians had been taught for decades that a consistent theory of the calculus based on infinitesimals was impossible, Abraham Robinson was certainly swimming against the tide when he proved otherwise. Robinson, A. (1961): Non-standard analysis , Indagationes Mathematicae 23, pp. 432-440. Robinson, A. (1966): Non-standard Analysis , North-Holland Publishing Company, Amsterdam.
{ "source": [ "https://mathoverflow.net/questions/420158", "https://mathoverflow.net", "https://mathoverflow.net/users/142929/" ] }
420,253
Let's take the knapsack problem: the weights $A_1,\ldots ,A_n$ are integers, and we want to know if we can achieve a total weight of exactly $V$ . We take $$I=\dfrac{1}{2\pi}\int_0^{2\pi} \exp(-iVt)\times (1+\exp(iA_1t))\times\cdots\times(1+\exp(iA_nt)) \, dt.$$ The question then becomes whether $I=0$ or $I\geq 1$ . Can we not approximate the value of $I$ with the Monte Carlo method, or other methods? Why do these approaches not succeed? Remark: it's not difficult to find a good approximation of $B\times t \bmod 2\pi$ , where $B$ is a big integer and $t\in [0,2\pi]$ , because we know excellent approximations of $\pi$ .
The size of the problem is measured by the number of digits it takes to specify $V$ and $A_1,\ldots,A_n$ . If each is less than $n$ bits then the input size is $< n^2$ but each of the factors $(1 + \exp(i A_k t))$ oscillates an exponential number of times (almost $2^n$ ) in $[0,2\pi]$ . We don't know how to tell in polynomial time whether such an integral is zero or $\geq 1$ .
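For what it's worth, here is a toy numerical sketch (with made-up weights) of the identity the integral encodes: expanding the product shows that $I$ equals the number of subsets of $\{A_1,\dots,A_n\}$ whose sum is exactly $V$ , since $\int_0^{2\pi}\exp(imt)\,dt=0$ for every nonzero integer $m$ . A simple quadrature recovers that count for a small instance; the point made above about exponential oscillation is exactly why this does not scale to large instances.

```python
# Toy check with made-up weights: the integral counts subsets of A summing to V.
import cmath, math
from itertools import combinations

A = [3, 5, 9, 11]   # made-up weights
V = 14              # made-up target

def integrand(t):
    prod = complex(1.0)
    for a in A:
        prod *= 1 + cmath.exp(1j * a * t)
    return (cmath.exp(-1j * V * t) * prod).real

N = 4096                      # enough sample points for this tiny bandwidth
h = 2 * math.pi / N
I = sum(integrand(k * h) for k in range(N)) * h / (2 * math.pi)

count = sum(1 for r in range(len(A) + 1)
            for S in combinations(A, r) if sum(S) == V)
print(round(I), count)        # both equal 2 for this instance
```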
{ "source": [ "https://mathoverflow.net/questions/420253", "https://mathoverflow.net", "https://mathoverflow.net/users/110301/" ] }
420,254
I am looking for an algorithm with polynomial complexity where, given a strongly connected edge-weighted digraph I can find the minimal subgraph which connects some root vertex v to a known set of other vertices. As an example, given a strongly connected edge-weighted digraph with vertices labeled a-z, I want to find the minimal subgraph rooted at node j that includes nodes b, f, g, and p. In my case I am going to be using this to determine an optimal pipeline for doing image manipulation (I already have a strongly connected edge-weighted digraph for this application).
{ "source": [ "https://mathoverflow.net/questions/420254", "https://mathoverflow.net", "https://mathoverflow.net/users/480464/" ] }
420,853
Living in France, I am sometimes asked about Cédric Villani, a very popular figure here. Will he come back to mathematics? The question becomes more relevant with the coming parliamentary elections (he is not expected to run, having split with the president's party). My impression is that it would be difficult for him. Even after a two-week vacation, I find it a bit difficult to think hard about a mathematical problem; I just cannot imagine stopping for a whole year. Are there examples of mathematicians who stopped mathematics for a while (at least several years) and then resumed and achieved valuable results in their second career? Notice that the break may have various causes, such as nervous breakdown, imprisonment, wartime, ... Of course, Jean Leray doesn't count, as he kept doing maths in an oflag (and what maths!). Besides the case of women who stopped because of motherhood (I should have thought of it from the beginning; thanks to Fedor), let me mention that of Chinese mathematicians who were sent to the countryside during the Cultural Revolution (e.g. Hsiao Ling).
Alice Roth became a mathematics teacher after her Ph.D. in 1938, and only returned to research after her retirement in 1971. Her 1976 paper on the "fusion lemma" is said to have "influenced a new generation of mathematicians worldwide". Further listening: 8 minute portrait Further reading: Alice in Switzerland: The life and mathematics of Alice Roth Alice Roth remained at the Humboldtianum [high school] until her retirement in 1971. It appears that shortly before retirement she had begun her transition back to work in mathematics. After announcing her plans to return to research to friends and relatives, she was told by one of them that in his field of medicine it would be impossible to return after so long an absence. Surely, most mathematicians would agree that it is impossible in the field of mathematics as well. And so Alice Roth would seem an unlikely candidate for success. Yet much had changed in the thirty years that she had been teaching. In particular, Roth's area of research – begun over thirty years earlier – had become fashionable. [...] At last Alice Roth had time on her side and was able to put her mathematical creativity to work. She was now "am chnobble" (pondering a problem) full-time, gave talks to other mathematicians at universities, and made good progress – at the cutting edge of contemporary mathematics. Roth's past as well as future work was to have a strong and lasting influence on mathematicians working in this area. Her Swiss cheese set has been modified (to an entire variety of cheeses); the fusion lemma which appeared in her 1976 paper influenced a new generation of mathematicians worldwide.
{ "source": [ "https://mathoverflow.net/questions/420853", "https://mathoverflow.net", "https://mathoverflow.net/users/8799/" ] }
420,896
Do there exist integers $x,y,z$ such that $$ xy(x+y)=7z^2 + 1 ? $$ The motivation is simple. Together with Aubrey de Grey, we developed a computer program that incorporates all standard methods we know (Hasse principle, quadratic reciprocity, Vieta jumping, search for large solutions, etc.) to try to decide the solvability of Diophantine equations, and this equation is one of the nicest (if not the nicest) cubic equations that our program cannot solve.
There is no solution . It is clear that at least one of $x$ and $y$ is positive and that neither is divisible by 7. We can assume that $a := x > 0$ . The equation implies that there are integers $X$ , $Y$ such that $$ X^2 - 7 a Y^2 = a (4 + a^3) $$ (with $X = a (a + 2y)$ and $Y = 2z$ ). First consider the case that $a$ is odd. Then $4 + a^3$ is also odd (and positive), so we can consider the Jacobi symbol $$ \left(\frac{7a}{4+a^3}\right) \,. $$ One of the two numbers involved is ${} \equiv 1 \bmod 4$ , so by quadratic reciprocity, $$ \left(\frac{7a}{4+a^3}\right) = \left(\frac{4+a^3}{7}\right) \left(\frac{4+a^3}{a}\right) = \left(\frac{4+a^3}{7}\right) $$ ( $4 + a^3$ is a square mod $a$ ). Since $7 \nmid a$ , we have $4 + a^3 \equiv 3$ or $5 \bmod 7$ , both of which are nonsquares $\bmod 7$ , so the symbol is $-1$ . This implies that there is an odd prime $p$ having odd exponent in $4 + a^3$ and such that $7a$ is a quadratic nonresidue $\bmod p$ . This gives a contradiction (note that $p \nmid a$ ). Now consider the case $a = 2b$ even; write $b = 2^{v_2(b)} b'$ . Then we have that $4 + a^3 = 4 (1 + 2 b^3)$ and $$ \left(\frac{7a}{1 + 2b^3}\right) = \left(\frac{14b}{1 + 2b^3}\right) = \left(\frac{2}{1 + 2b^3}\right)^{1+v_2(b)} \left(\frac{7b'}{1 + 2b^3}\right) \,. $$ If $b$ is odd, then this is $$ \left(\frac{2}{1 + 2b^3}\right) (-\left(\frac{-1}{b}\right)) \left(\frac{1 + 2b^3}{7}\right) \left(\frac{1 + 2b^3}{b}\right) \,, $$ which is always $-1$ (the product of the first two factors is $1$ ; then conclude similarly as above). We obtain again a contradiction. Finally, if $b$ is even, then $$ \left(\frac{2}{1 + 2b^3}\right)^{1+v_2(b)} \left(\frac{7b'}{1 + 2b^3}\right) = \left(\frac{1 + 2b^3}{7}\right) \left(\frac{1 + 2b^3}{b'}\right) = -1$$ again (the first symbol is $1$ , and quadratic reciprocity holds with the positive sign), and the result is the same. Here is an alternative proof using the product formula for the quadratic Hilbert symbol. If $(a,y,z)$ is a solution (with $a > 0$ ), then for all places $v$ of $\mathbb Q$ , we must have $(7a, a(4+a^3))_v = 1$ . We can rewrite the symbol as follows. $$ (7a, a(4+a^3))_v = (-7, a (4 + a^3))_v (-a, a)_v (-a, 4+a^3)_v = (-7, a(4 + a^3))_v $$ (the last two symbols in the middle expression are $+1$ ). So it follows that $$ (-7, a)_v = (-7, 4 + a^3)_v \,.$$ When $v = \infty$ , the symbols are $+1$ , since $a > 0$ . When $v = 2$ , the symbols are $+1$ , since $-7$ is a $2$ -adic square. When $v = p \ne 7$ is an odd prime, one of the symbols is $+1$ (and therefore both are), since $a$ and $4 + a^3$ have no common odd prime factors. Finally, when $v = 7$ , the symbol on the right is $$ (-7, 4 + a^3)_7 = \left(\frac{4 + a^3}{7}\right) = -1 $$ as in the first proof. Putting these together, we obtain a contradiction to the product formula for the Hilbert symbol.
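As a quick sanity check complementing the proof above (an independent sketch, not the program mentioned in the question), a brute-force search over a small box finds no solutions, as expected:

```python
# Brute-force search for integers x, y, z with x*y*(x+y) = 7*z^2 + 1
# inside a small box; consistent with the proof above, nothing is found.
from math import isqrt

B = 200   # half-width of the search box (arbitrary small bound)
hits = []
for x in range(-B, B + 1):
    for y in range(-B, B + 1):
        w = x * y * (x + y) - 1
        if w >= 0 and w % 7 == 0:
            z2 = w // 7
            z = isqrt(z2)
            if z * z == z2:
                hits.append((x, y, z))
print(hits)   # expected: []
```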
{ "source": [ "https://mathoverflow.net/questions/420896", "https://mathoverflow.net", "https://mathoverflow.net/users/89064/" ] }
420,949
Context: In celebrating the centenary of Ramanujan's birth, Freeman Dyson presented the following career advice for talented young physicists [1]: My dream is that I will live to see the day when our young physicists, struggling to bring the predictions of superstring theory into correspondence with the facts of nature, will be led to enlarge their analytic machinery to include not only theta-functions but mock theta-functions … But before this can happen, the purely mathematical exploration of the mock- modular forms and their mock-symmetries must be carried a great deal further. —Freeman Dyson Question: Was Freeman Dyson guided by physical intuitions that could have convinced top-notch quantum field theorists of his generation, such as Richard Feynman? Though I am aware that Freeman Dyson and Richard Feynman collaborated on Feynman's approach to quantum field theory, the precursor to string theory, I doubt that Feynman would have advanced the hypothesis that Ramanujan's work had any important consequences for theoretical physics. Complementary insights: In parallel, I wonder whether it may not be equally sensible to reconcile quantum theory with the facts of probabilistic number theory where probabilistic events are of a deterministic and frequentist nature. Upon closer inspection, this would be a complementary effort but I don't know of a systematic research program aimed at this particular objective although a large number of physicists appear to have a strong interest in the pair correlation conjecture which emerged from a tea-time discussion between Freeman Dyson and Hugh Montgomery. These are related observations, which may be relevant for a couple reasons: (1) The theory of modular forms potentially enters mathematical physics via the analysis of the Pair-Correlation conjecture. (2) By John Bell's own admission, his 1964 theorem known as Bell's theorem was motivated by the super-deterministic theory proposed by De Broglie and Bohm. Furthermore, I suspect that Erdős is often quoted saying: God may not play dice with the universe, but something strange is going on with the prime numbers. because all mathematical systems may be constructed from Peano Arithmetic, and the prime numbers are the atomic units of the integers, so the distribution of the prime numbers may be viewed as fundamental scientific data. Based on a recent discussion with Max Tegmark [7], who believes that a physicist can only understand the mathematical relations between things, this perspective is worth consideration if we assume that the mathematical structure of the Universe emerged from an information-theoretic singularity(i.e. Big Bang Cosmology). Note: Contrary to those who are voting to close this question, I believe that if there are fundamental physical insights which motivated Freeman Dyson's hypothesis then this question is of interest to the MathOverflow community. References: Jeffrey A. Harvey. Ramanujan’s influence on string theory, black holes and moonshine. 2019. Hardy, G. H.; Ramanujan, S. “The normal number of prime factors of a number n”, Quarterly Journal of Mathematics. 1917. Erdős, Paul; Kac, Mark. “The Gaussian law of errors in the theory of additive number theoretic functions”. American Journal of Mathematics. 1940. Montgomery, Hugh L. "The pair correlation of zeros of the zeta function", Analytic number theory, Proc. Sympos. Pure Math. 1973. Bell, J.S.“On the Einstein-Podolsky-Rosen paradox,” Physics. 1964. Tegmark, Max. "The Mathematical Universe". Foundations of Physics. Arxiv. 2008. 
Email discussion with Max Tegmark on tabletop experiments for the Mathematical Universe Hypothesis via Probabilistic Number Theory. Dec 18 2021.
Dyson's A walk through Ramanujan's garden gives the background of this comment: He explains that the "seeds from Ramanujan's garden have been blowing on the wind and have been sprouting all over the landscape. Some of the seeds even blew over into physics." He then writes that he received a preprint from a superstring theorist entitled Atkin-Lehner symmetry , in which modular forms entered physics in ways that mathematicians never dreamt of, and concludes that "Perhaps we may one day see a preprint written by a physicist with the title Mock Atkin-Lehner Symmetry ." Dyson also indicates in the same text that he knows little about superstring theory, so my answer to the question: "Was Freeman Dyson guided by physical intuitions" is: No, he was guided by his experience that fundamental math often makes it into physics in unexpected ways.
{ "source": [ "https://mathoverflow.net/questions/420949", "https://mathoverflow.net", "https://mathoverflow.net/users/56328/" ] }
421,321
A prime $p$ is called a Sophie Germain prime if $2p+1$ is also prime: OEIS A005384 . Whether there are an infinite number of such primes is unsolved. My question is: If there are an infinite number of Germain primes, is the sum of the reciprocals of these primes known to converge, or diverge? Of course if there are only a finite number of Germain primes, the sum is finite. And a lower bound on any infinite sum can be calculated. But it is conceivable that it is known that the sum either converges or is a finite sum. And maybe even an upper bound is known? (My connection to this topic is via this question: "Why are this operator's primes the Sophie Germain primes?" .)
Here is a general result. For a sequence of nonnegative numbers $\{a_n\}$ , let $A(x) = \sum_{n \leq x} a_n$ . For example, if $S \subset \mathbf Z^+$ and we set $a_n = 1$ when $n\in S$ and $a_n = 0$ when $n \not\in S$ , then $A(x)$ is the number of elements of $S$ that are $\leq x$ . Exercise: If $A(x) = O(x/(\log x)^r)$ for a positive integer $r$ and all $x \geq 2$ , then $\sum_{n \leq x} a_n/n$ converges as $x \to \infty$ if $r \geq 2$ and $\sum_{n \leq x} a_n/n = O(\log \log x)$ for $r = 1$ . Example: if $f_1(T), \ldots, f_r(T)$ are polynomials with integer coefficients that fit the hypotheses of the Bateman-Horn conjecture (twin primes are $f_1(T) = T$ and $f_2(T) = T+2$ , while Sophie Germain primes are $f_1(T) = T$ and $f_2(T) = 2T+1$ ), then Bateman and Stemmler showed $60$ years ago that the number of $n \leq x$ such that $f_1(n), \ldots, f_r(n)$ are all prime is $O(x/(\log x)^r)$ , where the $O$ -constant depends on $f_1, \ldots, f_r$ . Therefore if above we take $S$ to be the $n \in \mathbf Z^+$ such that $f_1(n), \ldots, f_r(n)$ are all prime and define $a_n$ to be $1$ or $0$ according to $n \in S$ or $n \not\in S$ , then the exercise above says the sum of all $1/n$ for $n \in S$ converges if $r \geq 2$ . So for any sequence of pairs of primes $p$ and $ap+b$ that are expected to occur infinitely often ( $p$ and $p+2$ , or $p$ and $2p+1$ , or $\ldots$ ), the sum of $1/p$ for such primes converges. That the sum of the reciprocals of the twin primes converges indicates that this summation is the wrong thing to be looking at. We want a strategy to prove the infinitude of twin primes, and that suggests a better sum. The Bateman-Horn conjecture predicts the number of $n \leq x$ such that $f_1(n), \ldots, f_r(n)$ are all prime is asymptotic to $Cx/(\log x)^r$ where $C$ is a positive constant depending on $f_1, \ldots, f_r$ , and if $A(x) \sim cx/(\log x)^r$ as $x \to \infty$ for some $c > 0$ then $\sum_{n \leq x} a_n(\log n)^{r-1}/n \sim c\log\log x$ . Therefore we expect (but have never proved) that the sum of $(\log p)/p$ over prime $p \leq x$ such that $p$ and $p+2$ are prime should grow like $c\log\log x$ for some constant $c > 0$ , and a similar asymptotic estimate (for a different constant $c$ ) should hold for the sum of $(\log p)/p$ over all prime $p \leq x$ such that $p$ and $2p+1$ are prime.
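As a rough numerical illustration of the two behaviours described above (a self-contained sketch, not part of the argument): the partial sums of $1/p$ over Sophie Germain primes $p \le N$ grow extremely slowly, consistent with convergence, while the partial sums of $(\log p)/p$ can be compared with $\log\log N$ , the growth rate expected above but not proved.

```python
# Partial sums over Sophie Germain primes p <= N of 1/p and (log p)/p.
from math import log

def prime_sieve(n):
    is_p = bytearray([1]) * (n + 1)
    is_p[0] = is_p[1] = 0
    for i in range(2, int(n ** 0.5) + 1):
        if is_p[i]:
            is_p[i * i::i] = bytearray(len(range(i * i, n + 1, i)))
    return is_p

N = 10**6
is_p = prime_sieve(2 * N + 1)        # need primality of 2p+1 for p up to N
s_recip = s_logp = 0.0
for p in range(2, N + 1):
    if is_p[p] and is_p[2 * p + 1]:
        s_recip += 1.0 / p
        s_logp += log(p) / p
print(s_recip, s_logp, log(log(N)))
```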
{ "source": [ "https://mathoverflow.net/questions/421321", "https://mathoverflow.net", "https://mathoverflow.net/users/6094/" ] }
421,358
After writing many proofs, most of which contained errors in their initial form, I have developed some simple techniques for "debugging" my proofs. Of course, a good way to detect errors in proofs is to send them to a colleague for review. But even friendly and helpful colleagues tend to focus on the more interesting proofs and ignore the more technical ones, which are prone to errors. So it is important to have techniques for self-debugging. Some simple techniques that I use are: Verify that every assumption made in the theorem is used at least once in the proof . Mathematically this is not strictly required, since if "A" implies "C", then "A and B" implies "C" too. But, if an assumption is made and not used, it may indicate an error [there is an analogous technique in programming: many compilers will raise a warning if they detect that a variable is declared but not used.] Rewrite the proof in the opposite direction . For example, if the proof is by contradiction, rewrite it as a direct proof, and vice-versa. Mathematically it should not matter, but the process of rewrite may help to discover hidden errors. Read the proof in print . I have no rational explanation for this, but I found out that, when I read proofs in print, I often detect errors that evaded my eyes when I read them on the computer screen. Question: what other techniques do you find useful for detecting errors in mathematical proofs? I am looking for general techniques, that you would recommend to your research students.
Several basic suggestions. First, put your manuscript into a drawer, and forget about it for a couple of months. You will discover a whole lot of exciting new things when you take it out of the drawer and re-read it anew. Second, I have found that 80% of all mistakes are conveniently marked by the authors with words like "evidently" and "clearly", and phrases like "it is easy to see", etc. Double-check all occurrences of these words/phrases in your manuscript. The final suggestion: make your manuscript public. Post it to the arXiv, mail it to a colleague, submit it to a journal. You are guaranteed to find a bunch of mistakes in the following ten minutes.
{ "source": [ "https://mathoverflow.net/questions/421358", "https://mathoverflow.net", "https://mathoverflow.net/users/34461/" ] }
421,889
In the study of elliptic and parabolic equations, the Schauder estimate is one of the most important tools. In this setting, we bound a norm of higher regularity on a smaller ball by a weaker norm on a bigger ball. That is, for the elliptic equation $ \operatorname{div}(A(x)\nabla u)=0 $ , we have estimates like $ \left\|u\right\|_{C^{0,\alpha}(B_1)}\leq C\left\|u\right\|_{L^2(B_2)} $ , where $ B_r=B(0,r) $ is the ball with center $ 0 $ and radius $ r $ . I want to ask why we do not study such estimates for hyperbolic equations.
Why we do not study such estimates for hyperbolic equations? Because they are false. Now: you may ask "why are they false?" This is a fairly deep question, and answers often involve discussion of propagation of singularities and characteristics. Quite a few chapters in Hörmander's Analysis of Linear Partial Differential Operators are devoted to this and similar questions.
{ "source": [ "https://mathoverflow.net/questions/421889", "https://mathoverflow.net", "https://mathoverflow.net/users/241460/" ] }
421,918
Given a polytope $P$ , what do the points of the secondary polytope correspond to? I know that the vertices of the secondary polytope correspond to regular triangulations of $P$ . But what do the interior points of the secondary polytope correspond to?
{ "source": [ "https://mathoverflow.net/questions/421918", "https://mathoverflow.net", "https://mathoverflow.net/users/5690/" ] }
421,928
Let $A$ be a unital $C^*$ -algebra and let $K$ be an inner product space (not necessarily complete!). Let $\pi: A \to \operatorname{End}_{\mathbb{C}}(K)$ be a unital algebra homomorphism such that $$\langle \pi(a)\xi, \eta\rangle = \langle \xi, \pi(a^*)\eta\rangle$$ for all $a \in A$ (i.e. the adjoint of $\pi(a)$ exists and equals $\pi(a^*)$ ). Is it true that $\|\pi(a)\|\le \|a\|$ for all $a \in A$ ? If $K$ is a Hilbert space, this result is well-known. However, since $K$ is no longer complete, $B(K)$ is not Banach and in particular not a $C^*$ -algebra. Does the result remain true? I'm mainly interested in knowing the answer for the $C^*$ -algebra $A= \ell^\infty\prod_{i \in I} M_{n_i}(\mathbb{C})$ . Thanks for your help!
{ "source": [ "https://mathoverflow.net/questions/421928", "https://mathoverflow.net", "https://mathoverflow.net/users/216007/" ] }
421,951
I posted this question on math.stackexchange earlier, but didn't see any response. So, I am posting it here, in case someone else has an answer. Original question: https://math.stackexchange.com/questions/4443845/on-finding-an-upper-bound-on-the-error-of-a-sparse-approximation $x \in R^n$ is a non-negative vector such that $ \sum_{i=1}^n x_i = 1$ ( $\forall i, 0 \le x_i \le 1$ ). The components are ordered: $x_1 \ge x_2 \ldots \ge x_n$ . We are also given : $ \sum_{i=1}^n x_i^2 \ge t$ for some constant $t$ ( $0 \le t \le 1$ ). Clearly, the larger the constant $t$ , the more concentrated the components $x_i$ are going to become. I want to make a claim on the approximate sparsity of $x$ . In other words, I want to place an upper bound on the total energy taken up by the smallest $(n-K)$ components. Say, something like : there exists an integer $K(t)$ , $1 \le K \le n$ , such that $$ \sum_{i =K+1}^n x_i^2 \le \phi(t) $$ where $\phi(t)$ is some decreasing function of $t$ . How do I get such a relation?
{ "source": [ "https://mathoverflow.net/questions/421951", "https://mathoverflow.net", "https://mathoverflow.net/users/176364/" ] }
422,196
After a long reflection, I've decided I won't go to graduate school and do a thesis, among other things. I personally can't cope with the pressure and uncertainty of an academic job. I will therefore move towards a master's degree in engineering and probably work in industry. However, I'm still passionate about math, and will continue to attend seminars, conferences, and work with people in the field closest to my heart (i.e. algebraic geometry and number theory). My question is: Is it viable? Won't I be "ostracized from the math community"? Eventually, could I still publish work?
This is possible. I have at least two friends who studied mathematics (in graduate school), did not defend their PhD, and found jobs not related to mathematics. Still they do research, and publish papers from time to time. Probably the most famous modern mathematician who never studied mathematics at the graduate level was Marjorie Rice . She made an important contribution. The main problem, from my point of view, is not being "ostracized by the math community" but the lack of time for concentration on mathematics. Those two of my friends who started publishing are both retired; one had a career in business, the other in computer programming. Several of my other friends, who did have a PhD, had to switch to other activities simply because they could not find jobs in mathematics. Many of them were intending to continue their math research "in free time". But the problem is that there is usually no free time if you do another job.
{ "source": [ "https://mathoverflow.net/questions/422196", "https://mathoverflow.net", "https://mathoverflow.net/users/141230/" ] }
422,279
I was looking at a bio-movie of Ramanujan last night. Very poignant. Also impressed by Jeremy Irons' portrayal of G.H. Hardy. In G.H. Hardy's wiki page, we read: "Hardy cited as his most important influence his independent study of Cours d'analyse de l'École Polytechnique by the French mathematician Camille Jordan, through which he became acquainted with the more precise mathematics tradition in continental Europe." and "Hardy is credited with reforming British mathematics by bringing rigour into it, which was previously a characteristic of French, Swiss and German mathematics. British mathematicians had remained largely in the tradition of applied mathematics, in thrall to the reputation of Isaac Newton (see Cambridge Mathematical Tripos). Hardy was more in tune with the cours d'analyse methods dominant in France, and aggressively promoted his conception of pure mathematics, in particular against the hydrodynamics that was an important part of Cambridge mathematics." Are we to understand from this that up to the late 1800s, British mathematics used only partial or inductive proofs, or what? On the face of it, this would have been quite a state of affairs. What exactly, in general or by a specific example, did Hardy bring to mathematics by way of rigour that had previously been absent? If someone introduced a new and sketchily proven theorem in the days of Hardy's childhood, and we are talking about Victorian times here (...), then surely all the mean old men of the profession would have disapproved of it and obstructed its publication?
Rigor and Clarity: Foundations of Mathematics in France and England, 1800-1840 explains in some detail how British mathematicians in the early 19th century viewed the role of rigor in the formulation and proof of mathematical theorems. Rigor is now accepted as a universal good in mathematics. The differences between the French and the English at the turn of the century indicate that this was not always the case. [...] For Cauchy mathematical rigor was achieved when mathematical terms were defined unambiguously, so that they could be confidently used in subsequent proofs. The English did not agree that the essence of mathematics was captured in the abstract notion of rigor advocated by Cauchy and his school. For the nineteenth-century English, mathematical theorems, no matter how beautifully proved, did not stand alone. Their validity lay in the concepts they illuminated; these concepts existed independently of the systems describing them. In this view mathematics was not created, it was discovered, and the value of the discovery lay in the understandings it generated rather than in the mathematical structure itself. The English constructed for the subject a conceptual foundation that they found both strong and appropriate. Rigor as Cauchy and his followers understood it failed to capture the true spirit of legitimate mathematical development. The English would have agreed with the French that mathematics must be exact, but for them exactitude concerned the fit of mathematical definition to underlying concept, rather than precision in use. This way of seeing the issue supported an English style, just as Cauchy's notions of rigor came to support a French style, throughout the century.
{ "source": [ "https://mathoverflow.net/questions/422279", "https://mathoverflow.net", "https://mathoverflow.net/users/216345/" ] }
422,582
When I was an undergrad, the field of spherical trigonometry was cited as a once-popular area of math that has since died. Is this true? Are the results from spherical trigonometry relevant for contemporary research?
It is not. As a proof, I will mention three relatively recent papers where I am a co-author: M. Bonk and A. Eremenko, Covering properties of meromorphic functions, negative curvature and spherical geometry , Ann of Math. 152 (2000), 551-592. A. Eremenko, Metrics of positive curvature with conic singularities on the sphere, Proc. AMS, 132 (2004), 11, 3349--3355. A. Eremenko and A. Gabrielov, The space of Schwarz--Klein spherical triangles , Journal of Mathematical Physics, Analysis and Geometry, 16, 3 (2020) 263-282. As you see, they are all published in mainstream math journals. All contain some new results on spherical triangles. And I am not the only person who is involved in this business: Feng Luo, A characterization of spherical polyhedral surfaces , J. Differential Geom. 74(3): 407-424. Edit. To address one comment: here is a forthcoming conference on spherical geometry
{ "source": [ "https://mathoverflow.net/questions/422582", "https://mathoverflow.net", "https://mathoverflow.net/users/128876/" ] }
423,323
I've been wondering for a while: how should mathematicians read an article in order to "take the most" from it? For example, when I did my Master's thesis I based it on an article (I'm into analysis) and of course I analyzed each and every part of it, extending some results in there and filling some gaps that were "left to the reader" I guess, or were thought to be sufficiently trivial by the authors. Now I'm doing my PhD and I have to choose a specific topic (more or less I have an idea but I haven't exactly set up my mind yet). The fact is I've had to look up some articles to get some ideas and possible topics of research, and I quickly understood that reading and trying to understand the whole thing is impossible. I mean they often have like 60+ articles/books in the bibliography, so I mainly read the introduction and the results, skipping proofs entirely (or almost entirely). Basically what I try to do is to get an idea of the general path taken by the authors, skipping all the technicalities (assuming they are OK) and noting on the side the techniques/results mentioned that I don't know about. Then I quickly look up on the internet what the idea of these techniques is, and that's it. Obviously, a couple of days pass and most of it is gone, except maybe the very general idea/result it obtained (but only if it isn't too technical). I mean it seems a bit shallow, but I can't come up with better ways to read them; they are so stuffed I really can't keep up. So how would a professional mathematician read an article about a topic he's interested in without going crazy, while trying to learn the most from it? Maybe $n$ years from now, if one continues on this path, one can expect to be so well-versed in a very specific topic that research articles about it become way easier to read?
I like this anecdote involving Hassler Whitney. I worked with Hassler Whitney for two years at the Institute for Advanced Study, and the mark he left on me goes deep. I guess I'm attracted to unconventional, original sorts, and Hass was surely that. His undergraduate days were at Yale, so one might expect that here was just about the most incredible math major ever. But his major was music, not math. Well, then, he must have taken all sorts of math courses, and.... Actually, he took almost no math courses. Mathematically, he was largely self-taught. Anyway, one day, in his office, I happened to mention Bézout's theorem, which basically says that two curves of degree $n$ and $m$ intersect in $nm$ points. He says he never heard of it (Bézout's theorem is in fact highly under-appreciated), and seems galvanized by it. He jumps up and heads toward the blackboard, saying "Let's see if I can disprove that!" Disprove it?! "Wait a minute!" I say, "That theorem is nearly two centuries old! You can't disprove anything... really..." As he begins working on some counterexamples at the blackboard, I see that my well-meant words are simply static. His first tries were easy to demolish, but he was a fast learner, and ideas soon surfaced about the complex line at infinity, and how to count multiple points of intersection. After a while, it got harder for me to justify the theorem, and when he asked, "What about two concentric circles?" I had no answer. He argued his way through, and eventually found all four points. Finally he was satisfied, and the piece of chalk was given a rest. He backed away from the blackboard and said. "Well, well—that is quite a theorem, isn't it?" I think I mostly kept my cool during all this, but after I left his office, I realized I was pretty shaken. I remember thinking to myself, "Golly, Kendig, you just saw how one of the giants does it!" He'd taken the theorem to the mat, wrestled it, and the theorem won. I'd known about that result for at least two years, and I realized that in 15 or 20 minutes, he'd gained a deeper appreciation of it than I'd ever had. In retrospect, it represented a turning point for me: I began to think examples, examples. Whitney worked by finding an example that contained the essential crux of a problem, and then worked relentlessly on it until he cracked it. He left it to others to generalize. It is to Hass that I affectionately dedicate this book.
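For readers who want the concentric-circles computation spelled out, here is one way to see the four points (this reconstruction is mine, not Kendig's): the circles $x^2+y^2=1$ and $x^2+y^2=4$ homogenize in the complex projective plane to $x^2+y^2=z^2$ and $x^2+y^2=4z^2$. Subtracting gives $3z^2=0$, so every common point lies on the line at infinity $z=0$, where both curves pass through the two circular points $[1:\pm i:0]$ (the solutions of $x^2+y^2=0$). By Bézout the intersection multiplicities at these two points add up to $2\cdot 2=4$, and by the symmetry $y\mapsto -y$ they are equal, so each circular point counts with multiplicity $2$: all four intersection points sit at infinity.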
{ "source": [ "https://mathoverflow.net/questions/423323", "https://mathoverflow.net", "https://mathoverflow.net/users/109382/" ] }
423,507
Has there ever been a set theory without an empty set? Is this possible? I ask because we usually take the empty set to exist axiomatically or obtain it through separation and a nonempty set together with the standard parameter-free predicate $X\neq X$, but it seems possible to have a 'set theory' without an axiom asserting the existence of an empty set or an axiom of separation. I put 'set theory' in quotations because such a nonstandard axiomatization might not really deserve to be called a set theory per se (it wouldn't prove the existence of intersections of disjoint sets), but more formally I mean: Has a theory in the language of set theory whose axioms do not prove the existence of an empty set ever been explored?
For a good discussion of this matter, see: Kanamori, Akihiro, The empty set, the singleton, and the ordered pair. Bull. Symbolic Logic 9 (2003), no. 3, 273–298.
{ "source": [ "https://mathoverflow.net/questions/423507", "https://mathoverflow.net", "https://mathoverflow.net/users/92164/" ] }
424,215
Is there a nontrivial link in a big solid torus that is trivial in the ambient Euclidean space, such that each circle is unknotted and has a sufficiently small length? It is motivated by a question that has bothered me since my childhood: Is it possible to wrap a suitcase with hair ties without tying them together? Comments: The answer of Larsen Linov is accepted, but it remains to prove formally that the link meets the conditions. The latter is equivalent to nontriviality of the following link; the example of Larsen Linov (or a similar one) can be obtained by stating that one of the circles is a meridian of the solid torus. Another question: is it possible to do the same with large rotational symmetry?
This configuration should work: Edit (to provide credit/context): Michael Freedman's solution (see Ian Agol's post) is the original one. Ian directed me to this problem and gave me the hint that Michael had already confirmed it was possible.
{ "source": [ "https://mathoverflow.net/questions/424215", "https://mathoverflow.net", "https://mathoverflow.net/users/1441/" ] }
424,269
At the end of the Abstract of the paper Minsky and Papert - Unrecognizable Sets of Numbers, the authors write "…for every infinite regular set $A$ there is a nonregular set $A'$ for which $$ \lvert\pi_A(n)-\pi_{A'}(n)\rvert\leq 1\text{",} $$ where $\pi_A(n)$ is the counting function for $A$. But I cannot find a reference for this in the paper. Also I want to know whether the following statement is true or not: "…for every infinite nonregular set $B$ there is a regular set $B'$ for which $$ \lvert\pi_B(n)-\pi_{B'}(n)\rvert\leq 1\text{."} $$ If I understand correctly, "regular set" in this paper means "automatic set".
{ "source": [ "https://mathoverflow.net/questions/424269", "https://mathoverflow.net", "https://mathoverflow.net/users/159935/" ] }
424,545
I sent a paper to an elite journal (the top in the field). Two weeks later I got a "reject" decision, but the editors added that "we believe it should deserve a good publicity and publication". The paper was described by the associate editor as "of very good quality", and the reason for the rejection was that "the techniques used look rather far to me from the journal readership". It should be mentioned that the main results of the paper are very much related to the scope of the journal. Moreover, this journal has already published more than 5 papers on the subject with results similar to those of mine (but in rather special cases, and according to a few experts in the field there is no doubt that my new result is a significant step forward). My questions are: Is it common that a paper that contains results of high enough quality that are well in line with the journal scope is rejected because the techniques are not familiar to the readership? If it is common, could anyone explain the reason behind this policy? To me it seems odd, as in mathematics applying tools from one subject to solve problems in another subject, as long as it is done correctly, is considered to be a good development.
When you submit to an elite journal, expect a rejection most of the time. Then submit to a less-prestigious journal. It is a waste of your time to attempt an analysis of the reasons given for rejection. Yes, it is common for journals that receive far more submissions than they can publish to reject most of them — sometimes for boilerplate reasons, sometimes for no reason at all.
{ "source": [ "https://mathoverflow.net/questions/424545", "https://mathoverflow.net", "https://mathoverflow.net/users/161837/" ] }
424,613
Mark Hovey maintains a list of open problems in model category theory . I think this list is quite old, and I don't know if Hovey is still updating it or not. My question is: i) which of the 13 problems in that list are still considered open and still important in homotopy theory? ii) or dually, which of the problems are considered settled or are perceived as no longer relevant given the current status of homotopy theory?
I am a former student of Mark Hovey's, and during grad school, I wrote a document giving an update on the status of the 13 problems (as of 2012 or 2013, I guess). I just briefly went through it a moment ago to give some updates, but it's still in rough shape. I apologize for what I'm sure will be many omissions, as so much work has been done in this area over the past 20 years. Nevertheless, here you go (using Hovey's words to set up each of the problems in case his website goes down again): The safest sort of problem to work on with model categories is building one of interest in applications. The essential idea is: whenever someone uses the word homology, there ought to be a model category around. I like this idea a great deal, and it might lead to expansion of algebraic topology into many different areas. The simplest example that I personally do not understand is complexes of (quasi-coherent?) sheaves over a scheme. There is certainly a model structure here, and it is probably even known. But I think it would be good to find this out, and find out how the model structure is built. I believe this should be a symmetric monoidal model category. I also have the impression that one can not generalize the usual model structure on chain complexes over a ring, because you won't have projectives. But these two impressions sort of contradict each other, since the second one would lead you to generalize the injective model structure on chain complexes over a ring, but this model structure is not symmetric monoidal. So there is something for me at least to learn here. The machinery to resolve this question was developed by Mark Hovey (2007) in “ Cotorsion pairs, model category structures, and representation theory ,” which provides a method for building an abelian model structure on a bicomplete abelian category with prescribed classes of acyclic, cofibrant, and fibrant objects $(W,C,F)$ such that $W$ is thick and the pairs $(C,F \cap W)$ and $(C\cap W, F)$ are complete cotorsion pairs. An excellent survey is Hovey’s “ Cotorsion Pairs and Model Categories ,” where this appears as Theorem 2.5. Conditions are also given to make this model structure monoidal (Theorem 4.2). Hovey used this theory to build a model structure on unbounded chain complexes of quasi-coherent sheaves over a (nice) scheme where weak equivalences are quasi-isomorphisms and fibrations are dimensionwise split surjections with dimensionwise injective kernel (see Hovey “ Model Category Structures on Chain Complexes of Sheaves ” Theorem 4.4). He also built a model structure on chain complexes of modules over a nice ringed space (Theorem 5.2) and found conditions to make this model structure satisfy the pushout product and monoid axioms (5.7, 5.9, 5.10). Jim Gillespie then proved a theorem which allows one to build a model structure on Ch(A) where A is an abelian model category with a nice cotorsion pair (Theorem 7.9). Gillespie’s work generalizes the above and can be found in “ The Flat Model Structure on Complexes of Sheaves ” and “ Cotorsion Pairs and Degreewise Homological Model Structures .” Gillespie continued to generalize and hone this theory, resulting in more than 30 papers on the topic . A great survey is his paper " Hereditary abelian model categories ." These kinds of model structures, that come from cotorsion pairs, are called abelian model structures and nowadays a compatible pair of cotorsion pairs (the type that give rise to an abelian model structure) is known as a Hovey triple . 
Many other authors have also worked in this area, including Sergio Estrada, Daniel Bravo, Jan Stovicek, Sinem Odabasi, Hanno Becker, and probably many more people I should mention. A scheme is a generalization of a ring, in the same way that a manifold is a generalization of R^n. So maybe there is some kind of model structure on sheaves over a manifold? Presumably this is where de Rham cohomology comes from, but I don't know. It doesn't seem like homotopy theory has made much of a dent in analysis, but I think this is partly due to our lack of trying. Floer homology, quantum cohomology--do these things come from model structures? These problems appear to still be open as stated. There are good reasons why one cannot have a model category of manifolds, and these are discussed in my expository paper “ On Colimits in Various Categories of Manifolds ” among other places. I don’t know whether or not there are similar obstructions for sheaves over a manifold. If so then the methods of getting around this obstruction mentioned in the expository paper may apply in those settings as well (i.e., using Voevodsky style enlargement or using Diffeologic Spaces). The note advertises Dan Dugger's nice paper " Sheaves and Homotopy Theory ." I should disclaim that I wrote this paper early in my grad student career, so it's probably badly written, naive, and might even have errors. The years since Hovey originally wrote his problem list have seen a massive development in the theory of infinity categories, and a partial answer can be given in that language. Consider the category of smooth $\infty$ -groupoids, which contains the categories of smooth manifolds and Lie groupoids. Making use of the global model structure on simplicial presheaves and the fact that BG is a fibrant object therein, one can recover the de Rham cohomology of a smooth manifold as $\pi_0$ of a mapping space in the category of smooth $\infty$ -groupoids. Similarly one can recover Cech hypercohomology. A nice survey of these results can be found at the nLab page on “smooth infinity-groupoid structures.” So, although it remains unknown whether there is a model category of sheaves over a manifold, at least de Rham cohomology can be recovered from model category theoretic considerations. There does not appear to have been any work done to recover Floer homology or quantum cohomology from a model category. UPDATE: Tyler Lawson has done nice work on homotopy theory for Floer homology. And I still think a model structure in this context is possible. It’s something I’d like to work on someday. Mark Hovey and I used to talk about this and we had some ideas I hope to work out one day. Every stable homotopy category I know of comes from a model category. Well, that used to be true, but it is no longer. Given a flat Hopf algebroid, Strickland and I have constructed a stable homotopy category of comodules over it. This clearly ought to be the homotopy category of a model structure on the category of chain complexes of comodules, but we have been unable to build such a model structure. My work with Strickland is still in progress, so you will have to contact me for details. This was solved by Mark Hovey in “ Homotopy Theory of Comodules over a Hopf Algebroid ” (published in contemp. math.) 
where he constructs for a given Hopf algebroid $(A,\Gamma)$ a model structure on the category of unbounded chain complexes of $\Gamma$ -comodules with weak equivalences the homotopy isomorphisms (not the homology isomorphisms) and the cofibrations are the degreewise split monomorphisms whose cokernel is a complex of relative projectives with no differential. See Theorems 2.1.1, 2.1.3, 5.1.4, and (for the monoidal structure) 5.1.5. Regarding the general philosophy behind this question, it is interesting to note that examples have been given for triangulated categories which do not arise as the homotopy category of a model category. The most well-known appears in “ Triangulated Categories without Models ” by Muro, Schwede, and Strickland. Note however that the definition of triangulated category used in the book Model Categories is different from the standard definition. Hovey’s definition of triangulated category T requires T to come with an action of Ho(sSet). Such triangulated categories are the ones which come up in homotopy theory, but they have not been studied systematically other than in Hovey’s book. So, technically, you could ask if every triangulated category in Hovey's sense comes from a model category. And the answer is probably "no." Since the triangulated category community kinda rejected Hovey's definition of "triangulated category", I don't think an explicit counterexample would generate much interest. Given a symmetric monoidal model category C, Schwede and Shipley have given conditions under which the category of monoids in C is again a model category (with underlying fibrations and weak equivalences). On the other hand, the category of commutative monoids seems to be much more subtle. It is well-known that the category of commutative differential graded algebras over Z cannot be a model category with underlying fibrations and weak equivalences (= homology isos). On the other hand, the solution to this is also pretty well-known--you are supposed to be using E-infinity DGAs, not commutative ones. Find a generalization of this statement. Here is how I think this should go, broken down into steps. The first step: find a model structure on the category of operads on a given model category. (Has this already been done? Charles Rezk is the person I would ask). We probably have to assume the model category is cofibrantly generated. This has pretty much been solved. My thesis gave conditions under which a category of commutative monoids has a transferred model structure, and also under which it’s equivalent to E-infinity algebras. Also, there is a model structure on categories of operads, due to Berger and Moerdijk (2003) , if M satisfies some conditions, like having a nicely behaved interval object. For more general M, say just cofibrantly generated, there’s a semi-model structure on operads due to Spitzweck (a published reference is Fresse’s book on operads and modules ). The second step: show that the category of algebras over a cofibrant operad admits a model structure, where the fibrations and weak equivalences are the underlying ones. Show that a weak equivalence of cofibrant operads induces a Quillen equivalence of the categories of algebras. Show that an E-infinity operad is just a cofibrant approximation to the commutative ring operad. (This latter statement is probably known, since to me it seems to be the whole point of E-infinity). Let M be a monoidal model category. 
If M is nice (like, simplicial sets, equivariant topological spaces, chain complexes over a field of characteristic zero, symmetric spectra, equivariant orthogonal spectra, etc) then algebras over any operad have a transferred model structure. If $M$ is less nice, you have a transferred semi-model structure. Many authors did work in this direction, including Spitzweck, Berger–Moerdijk, Elmendorf–Mandell (for spectra), Fresse, Harper, Hess, Casacuberta, Gutierrez, Moerdijk, Vogt, Caviglia, Harper, Hornbostel (in a motivic setting), Muro, and Pavlov-Scholbach. I give some history in my papers with Donald Yau, and also what we consider to be the most general approach with the weakest hypotheses on $M$ . Let's focus on full model structures instead of semi-model structures. Our first paper proves that all operads are admissible (have a transferred full model structure) in chain complexes, spaces, and symmetric spectra, and our second paper covered equivariant spaces and spectra, simplicial abelian groups, the category of small categories, the stable module category, etc. If M is only cofibrantly generated, then you have a semi-model structure on algebras over a Sigma-cofibrant operad, again by Spitzweck and Fresse. My first paper with Donald Yau generalizes this to a wider class of operads (just entrywise cofibrant), and our third paper handles rectification (a weak equivalence of operads inducing a Quillen equivalence of categories of algebras) in greater generality. I should point out that many authors had rectification results for $\Sigma$ -cofibrant operads (including Berger-Moerdijk and Fresse), and Pavlov-Scholbach had admissibility and rectification results under different assumptions on $M$ . Find conditions under which algebras over a noncofibrant operad admit a model structure that generalize the monoid axiom of Schwede-Shipley. This would include the case where everything is fibrant, for example. Show that, under some more conditions, a weak equivalence of operads induces a Quillen equivalence of the algebra categories. Thus, sometimes you can use commutative, sometimes you can't, but you can always use E-infinity. And using E-infinity will not hurt you when you can use commutative. I did this in my thesis, inventing the Commutative Monoid Axiom to get a model structure on commutative monoids. This also included the situation of rectification with E-infinity (indeed, under some more conditions, because it’s not true in the monoidal model category of compactly generated topological spaces). I generalized this in my work with Donald Yau. In our first paper, we get conditions on a model category M so that all operads are admissible (meaning: have a transferred model structure), or weaker conditions so that entrywise cofibrant operads are semi-admissible (have a transferred semi-model structure), and doing rectification in our third paper. To respond to Dmitri Pavlov's answer, let me remark that Jacob Lurie also had a result that ends up with a model structure on commutative monoids, but under much stronger conditions on $M$ . Mark and I were not aware of Lurie's approach until after mine was done, and back then there was actually an error in Lurie's approach that wiped out the applications (it was only applicable to chain complexes over a field of characteristic zero). 
Since my approach worked for all known examples (where commutative monoids have a transferred model structure) plus some new ones, we decided to call my condition the "commutative monoid axiom" instead of giving Lurie's condition that name. Let A be a cofibrant operad as above. Use the above results to construct spectral sequences that converge to the homotopy groups of the space of A-algebra structures on a given object X, and to the homotopy groups of the mapping space of A-algebra maps between two given A-algebras. These spectral sequences for the A-infinity operad are the key formal ingredients to the Hopkins-Miller proof that Morava E-theory admits an action by the stabilizer group. This has been done. Rezk got the program started in part 2 of his thesis , setting up a spectral sequence to compute the homotopy groups of the moduli space of $A$ -algebra structures on $X$ , when $A$ is a cofibrant operad (in simplicial sets). Vigleik Angeltveit did it in the context of spectra. A great paper by Niles Johnson and Justin Noel " Lifting homotopy T-algebra maps to strict maps " sets up a Bousfield-Kan spectral sequence that converges to the homotopy groups of the space of $A$ -algebra maps between two spaces. Here $A$ is a simplicial monad and $M$ is a simplicial model category. Possibly there is room here for generalization to non-simplicial cases, but I doubt that Hovey was asking for an answer in that level of generality. My general theory is that the category of model categories is not itself a model category, but a 2-model category. Weak equivalences of model categories are Quillen equivalences, and weak equivalences of Quillen functors are natural weak equivalences. Define a 2-model category and show the 2-category of model categories is one. Note that the homotopy 2-category at least makes sense (in a higher universe): we can just invert the Quillen equivalences and the natural weak equivalences. This localization process for an n-category has been studied by Andre Hirschowitz and Carlos Simpson in descent pour les n-champs, on xxx. It will be debatable whether or not this problem has been satisfactorily solved. I’ve got a recent paper with Boris Chorny that can be thought of as seeking the internal hom of the “2-model category of model categories” (meaning, a model structure on a category of functors between two model categories). Boris has done a lot of work in the direction of this problem and would not think it’s solved as of now. However, I think there’s a strong argument that Reid Barton’s thesis essentially solves this problem, or rather a slight but necessary weakening of what Hovey was asking for. Barton makes the argument himself, in section 1 of his thesis, laying out why he thinks this is a solution to what Hovey was asking for, and why what Hovey was asking for was literally impossible. The 2-category of simplicial model categories is supposed to be (according to me) 2-Quillen equivalent to the 2-category of model categories. Even without having all the definitions one can try to find out if this is true. For example, Dan Dugger has shown that every model category (with some hypotheses--surely cofibrantly generated at least) is Quillen equivalent to a simplicial model category. Understand his result in the context of the preceding two problems. That is, does Dugger's construction in fact give a 2-functor from model categories to simplicial model categories? 
Does it preserve enough structure to make it clear that it will induce some kind of equivalences on the homotopy 2-categories? Again, people will debate if this has been solved. But if you restrict attention to combinatorial model categories then essentially it has been. Dugger proved that every combinatorial model category is Quillen equivalent to a simplicial combinatorial model category (Batanin and I recently proved the same for combinatorial semi-model categories). Back when Clark Barwick was writing a paper about “partial model categories” I convinced myself that the collection of them was equivalent to partial simplicial model categories. I have a hazy recollection that Lennart Meier did some work related to this in 2015. Barton’s thesis also has Theorem 1.3.4 which is like a 2-model category of simplicial combinatorial premodel categories. Is every monoidal model category Quillen equivalent to a simplicial monoidal model category? This would remove the loose end in my book on model categories, where I am unable to show that the homotopy category of a monoidal model category is a central algebra over the homotopy category of simplicial sets. The centrality is the problem, and I can cope with this problem for simplicial monoidal model categories. Essentially yes. The relevant paper here is " Admissible replacements for simplicial monoidal model categories " by Bayinder and Chorny. Also, the centrality problem was resolved. I learned one solution from Jerome Scherer, who told me it's in his 2008 paper (with Chacholski) Representations of Spaces . Denis-Charles Cisinski points out that he also solved it in 2002 . Charles Rezk has a homotopy theory of homotopy theories. This is just a category, though it is large. The objects are generalizations of categories where composition is not associative on the nose--that is, they are some kind of simplicial spaces. Understand the relationship between Rezk's point of view and mine on the 2-category of model categories. They should be equivalent in some sense. Again, modulo the fact that what Hovey had in mind doesn’t exactly exist, this problem has been solved. There are many, many models for the "homotopy theory of homotopy theories" , including quasi-categories, relative categories, simplicial categories, topological categories, Segal categories, Segal spaces, complete Segal spaces, $A_\infty$ -categories, etc, etc. Each has a model structure and these model structures are all Quillen equivalent, even in a coherent way. See work of Toen and Barwick and Schommer-Pries . There are also models for $\infty$ -categories with extra structure, like the category of cofibration categories (whose homotopy theory was worked out beautifully by Karol Szumilo ) or the category of fibration categories. There's also a model-free approach due to Riehl and Verity, called $\infty$ -cosmoi . However, the theory of model categories is not equivalent to the theory of $(\infty,1)$ -categories, because not every infinity category comes from a model category. If an infinity category comes from a model category, it must have all limits and colimits. There are two ways to proceed. You can weaken what you mean by "model category" or you can look for an equivalence with $(\infty,1)$ -categories with extra structure. Both approaches work. 
For example, if you weaken from "model category" to, say, "partial model category", then Barwick and Kan's paper on the subject (linked above) shows how the theory of partial model categories is equivalent to the theory of $\infty$-categories. The same is true if you weaken from "model category" to "relative category". As for $(\infty,1)$-categories with extra structure, the right notion is "presentable $(\infty,1)$-categories." The theory of combinatorial model categories is equivalent in a strong way to the theory of presentable infinity categories. In the appendix to my book on model categories, I said maybe what we are doing in associating to a model category its homotopy category is the wrong thing. Maybe we should be associating to a model category C the homotopy categories of all the diagram categories C^I, together with all the adjunctions induced by functors I --> J. This would make homotopy limits and colimits part of the structure. Does this viewpoint have any value? I think what Mark had in mind here was basically the theory of derivators, first conceived by Grothendieck in 1983, and worked out by many authors over the past fifteen years or so. I would say it's widely accepted that the viewpoint of derivators does have value and is an acceptable way to do homotopy theory. I don't claim that it's equivalent in a formal way to the theory of model categories or infinity categories. Find a model category you can prove is not cofibrantly generated. This is just an annoyance, not a very significant problem, but it has been bugging me for a while. The obvious candidate for this is the simplest nontrivial model category, the one on chain complexes where weak equivalences are chain homotopy equivalences. Mike Cole is, so far as I know, the first to write down a description of this model category, though one certainly has the feeling that Quillen must have known about it. But how do you prove something is not cofibrantly generated? Many examples have now been given. I'll list all the ones I know: George Raptis proves in Remark 4.7 of "Homotopy Theory for Posets" that Strom's model structure on Top is not cofibrantly generated. This answers a question which was implicit in Hovey's book when he discusses this model structure, but which he never explicitly asked. Christensen and Hovey's paper "Quillen model structures for relative homological algebra" proves that the absolute model structure on relative chain complexes is not cofibrantly generated (see section 5). Isaksen's model structure on pro-simplicial sets is not cofibrantly generated, and the category of simplicial sets is not fibrantly generated. For many more examples see Scott Balchin's recent book A Handbook of Model Categories. Chorny showed that the model category of maps of spaces or simplicial sets (i.e. the arrow category) is not cofibrantly generated. This might have been the first example known. Adamek, Herrlich, Rosicky, and Tholen produce a model structure on the category of small categories that is not cofibrantly generated. Chorny-Rosicky "class combinatorial model categories" includes examples of Pro spaces and Ind spaces, and about Fun(M,N), that are not cofibrantly generated in the classical sense for set-theoretic reasons. There are several examples in Emily Riehl's book. Chapters 12 and 13 discuss "category cofibrantly generated" when the left class is a category not a set. The left class must be defined as a retract-closure, but the right is automatically closed under retracts.
Examples include Hurewicz models. She also covers "enriched cofibrantly generated" more generally. Barthel and Riehl: Cole's mixed model structure; this is an example like the above for topological model categories. Barthel, May, and Riehl: an example like the above in the chain-enriched setting. Lack's 2007 paper has a non-cofibrantly generated model structure on Arr(Cat), but it is cofibrantly generated in Riehl's sense. Bourke and Garner have a notion of "cofibrantly generated by a double category", plus an example that is not cofibrantly generated as a category (because the right class is not closed under retracts). UPDATE: In my paper with Donald Yau about arrow categories, we list several examples of monoidal model categories that are not cofibrantly generated.
{ "source": [ "https://mathoverflow.net/questions/424613", "https://mathoverflow.net", "https://mathoverflow.net/users/139854/" ] }
424,694
Let $p$ be a prime, and consider $$S_p(a)=\sum_{\substack{1\le j\le a-1\\(p-1)\mid j}}\binom{a}{j}\;.$$ I have a rather complicated (15 lines) proof that $S_p(a)\equiv0\pmod{p}$ . This must be extremely classical: is there a simple direct proof ?
Let $$P(x)=(1+x)^a-1-x^a=\sum_{1 \le j \le a-1} \binom{a}{j}x^j.$$ Working in a field $F$ where $|\{\mu \in F: \mu^{p-1}=1\}|=p-1$ (roots of unity of order $p-1$ exist), we have $$ \frac{1}{p-1}\sum_{\mu^{p-1}=1}P(x \mu) = \sum_{\substack{1 \le j \le a-1\\(p-1)\mid j}} \binom{a}{j}x^j.$$ We now specialize the field to $\mathbb{F}_p$, and let $x=1$: $$ -\sum_{\mu \in \mathbb{F}_p^{\times}}P(\mu) = \sum_{\substack{1 \le j \le a-1\\(p-1)\mid j}} \binom{a}{j}.$$ To conclude, observe that $P(0)=0$ and $|\mathbb{F}_p| = p$, so $$\sum_{\mu \in \mathbb{F}_p^{\times}} P(\mu) = \sum_{\mu \in \mathbb{F}_p} P(\mu) = \sum_{\mu \in \mathbb{F}_p} ((1+\mu)^a - \mu^a)=0,$$ because $\mu\mapsto \mu+1$ is a permutation of $\mathbb{F}_p$. A slightly more general congruence is due to Glaisher (1899), as I've found in a survey by Granville, see equation (11) here. The precise reference is Glaisher, J. W. L., "A congruence theorem relating to sums of binomial-theorem coefficients.", Quart. J. 30, 150-156, 349-360, 361-383 (1899). See here for the zbMath review in German. There is no entry in MathSciNet. Namely, Glaisher proved that for any $1 \le j_0 \le p-1$ we have $$\sum_{\substack{1 \le j \le a \\ j \equiv j_0 \bmod (p-1)}} \binom{a}{j} \equiv \binom{k}{j_0} \bmod p,$$ where $k$ is the integer with $1 \le k \le p-1$ and $k \equiv a \bmod (p-1)$. Applying this with $j_0=p-1$: if $(p-1)\mid a$ then the right-hand side equals $1$ and the term $j=a$ contributes $1 \bmod p$, while if $(p-1)\nmid a$ then the right-hand side equals $0$ and the term $j=a$ does not occur in the sum; in both cases your result follows. Clicking on equation (11) in the link above leads to a detailed proof which utilizes Lucas' theorem.
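As a quick numerical sanity check of the congruence $S_p(a)\equiv 0 \pmod p$ and of the Glaisher-type congruence above, here is a short script; the code and the helper names are mine, added for illustration, and are not part of the original argument.

```python
from math import comb

def S(p, a):
    # S_p(a) = sum of C(a, j) over 1 <= j <= a-1 with (p-1) | j
    return sum(comb(a, j) for j in range(1, a) if j % (p - 1) == 0)

def glaisher_sum(p, a, j0):
    # sum of C(a, j) over 1 <= j <= a with j congruent to j0 mod (p-1)
    return sum(comb(a, j) for j in range(1, a + 1) if (j - j0) % (p - 1) == 0)

for p in (3, 5, 7, 11):
    for a in range(1, 80):
        assert S(p, a) % p == 0
        # k is the representative of a mod (p-1) in {1, ..., p-1};
        # the check below is the Glaisher congruence in the form stated above.
        k = (a - 1) % (p - 1) + 1
        for j0 in range(1, p):
            assert glaisher_sum(p, a, j0) % p == comb(k, j0) % p
print("S_p(a) and the Glaisher congruence check out on the sampled range")
```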
{ "source": [ "https://mathoverflow.net/questions/424694", "https://mathoverflow.net", "https://mathoverflow.net/users/81776/" ] }
425,920
Does there exist a pair of finite groups $G$ and $H$ satisfying both of the short exact sequences $1 \rightarrow G \rightarrow H \rightarrow A_4 \rightarrow 1$ and $1 \rightarrow G \rightarrow H \rightarrow D_6 \rightarrow 1$ ? Of course the homomorphisms $G \to H$ in these short exact sequences are not the same.
Call two finite groups $Q_1$ and $Q_2$ compatible if there exists a finite group $G$ with two isomorphic normal subgroups $N_1$ and $N_2$ such that $G/N_i\cong Q_i$ . One can show the following: Proposition: If two groups are compatible, then they have subnormal series of the same length with the same factor groups appearing in the same order. Proof: Let $Q_1$ and $Q_2$ be compatible with $G$ a witness of minimal order, and $N_1$ and $N_2$ the two corresponding isomorphic normal subgroups and let $\alpha$ be an isomorphism from $N_1$ to $N_2$ . Let $M=N_1\cap N_2$ . Note that $M$ and $\alpha(M)$ are isomorphic and normal in $N_2$ , so $N_2/M$ and $N_2/\alpha(M)$ are compatible, with $N_2$ as a witness. But $N_2/M\cong N_1N_2/N_1$ while $N_2/\alpha(M)\cong N_1/M\cong N_1N_2/N_2$ . Minimality of $G$ implies that $N_1N_2<G$ , so that $Q_1$ and $Q_2$ have $G/N_1N_2$ as a non-trivial common quotient, but moreover the corresponding normal subgroups are compatible, so the result follows by induction. $\square$ I've read somewhere that the above argument (which is in some sense a generalisation of the one by Robert) is due to Sims, but I'm not sure the argument itself is actually written anywhere. In particular, it shows that $A_4$ and $D_6$ are not compatible, because they don't have such subnormal series. (Any series for $A_4$ has a $C_3$ "on top", and in $D_6$ , a $C_2$ "on top".) I've been interested in the question of determining which groups are compatible for a while. I think it's an interesting question and the answer is not known. See Giudici, Glasby, Li, Verret, Arc-transitive digraphs with quasiprimitive local actions, Journal of Pure and Applied Algebra 223 (2019) 1217-1226 for some motivation and further results. See also https://math.stackexchange.com/questions/4295186/which-pairs-of-groups-are-quotients-of-some-group-by-isomorphic-subgroups/4296206#4296206
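To make the parenthetical remark about the top factors concrete, here is a small computational check; this sketch is mine (it is not part of the original answer), it uses sympy, and it takes $D_6$ to mean the dihedral group of order 12 (the order-6 convention $D_6=S_3$ leads to the same conclusion).

```python
# For a finite solvable group, the quotient by any maximal normal subgroup is
# cyclic of prime order, hence a quotient of the abelianization G/[G,G]; so the
# primes that can occur "on top" of a subnormal series with simple factors
# divide the order of the abelianization.
from sympy.combinatorics.named_groups import AlternatingGroup, DihedralGroup

for name, G in [("A_4", AlternatingGroup(4)), ("D_6", DihedralGroup(6))]:
    ab = G.order() // G.derived_subgroup().order()
    print(name, "has abelianization of order", ab)

# Prints 3 for A_4 and 4 for D_6: every subnormal series of A_4 with simple
# factors has C_3 on top, while for D_6 the top factor is C_2, so the
# Proposition rules out their compatibility.
```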
{ "source": [ "https://mathoverflow.net/questions/425920", "https://mathoverflow.net", "https://mathoverflow.net/users/172799/" ] }
426,302
A relation $R$ is implicitly definable in a structure $M$ if there is a formula $\varphi(\dot R)$ in the first-order language of $M$ expanded to include relation $R$ , such that $M\models\varphi(\dot R)$ only when $\dot R$ is interpreted as $R$ and not as any other relation. In other words, the relation $R$ has a first-order expressible property that only it has. (Model theorists please note that this is implicit definability in a model , which is not the same as the notion used in Beth's implicit definability theorem .) Implicit definability is a very weak form of second-order definability, one which involves no second-order quantifiers. Said this way, an implicitly definable relation $R$ is one that is definable in the full second-order Henkin structure of the model, but using a formula with only first-order quantifiers. Examples. Here are some examples of relations that are implicitly definable in a structure, but not definable. The predicate $E$ for being even is implicitly definable in the language of arithmetic with successor, $\langle\mathbb{N},S,0\rangle$ . It is implicitly defined by the property that $0$ is even and evenness alternates with successor: $$E0\wedge \forall x\ (Ex\leftrightarrow\neg ESx).$$ Meanwhile, being even is not explicitly definable in $\langle\mathbb{N},S,0\rangle$ , as that theory admits elimination of quantifiers, and all definable sets are either finite or cofinite. Addition also is implicitly definable in that model, by the usual recursion $a+0=a$ and $a+(Sb)=S(a+b)$ . But addition is not explicitly definable, again because of the elimination of quantifiers argument. Multiplication is implicitly definable from addition in the standard model of Presburger arithmetic $\langle\mathbb{N},+,0,1\rangle$ . This is again because of the usual recursion, $a\cdot 0=0$ , $a\cdot(b+1)=a\cdot b+a$ . But it is not explicitly definable, because this theory admits a relative QE result down to the language with congruence mod $n$ for every $n$ . First-order truth is implicitly definable in the standard model of arithmetic $\langle\mathbb{N},+,\cdot,0,1,<\rangle$ . The Tarski recursion expresses properties of the truth predicate that completely determine it in the standard model, but by Tarski's theorem on the nondefinability of truth, this is not a definable predicate. My question concerns iterated applications of implicit definability. We saw that addition was implicitly definable over successor, and multiplication is implicitly definable over addition, but I don't see any way to show that multiplication is implicitly definable over successor. Question. Is multiplication implicitly definable in $\langle\mathbb{N},S,0\rangle$ ? In other words, can we express a property of multiplication $a\cdot b=c$ in its relation to successor, which completely determines it in the standard model? I expect the answer is No , but I don't know how to prove this. Update. I wanted to mention a promising idea of Clemens Grabmayer for a Yes answer (see his tweet ). The idea is that evidently addition is definable from multiplication and successor (as first proved in Julia Robinson's thesis , and more conveniently available in Boolos/Jeffrey, Computability & Logic, Sect. 21). We might hope to use this to form an implicit definition of multiplication from successor. Namely, multiplication will be an operation that obeys the usual recursion over addition, but replacing the instances of $+$ in this definition with the notion of addition defined from multiplication in this unusual way. 
What would remain to be shown is that there can't be a fake version of multiplication that provides a fake addition, with respect to which it fulfills the recursive definition of multiplication over addition.
Contrary to my initial expectation, the answer is Yes. This answer is based on the idea of Clemens Grabmayer, which makes the observation that addition $+$ is definable from multiplication $\cdot$ and successor. The idea generalizes to the following: Theorem. Suppose that relation $R$ is implicitly definable in model $M$ , that $S$ is implicitly definable in the expansion $\langle M,R\rangle$ , and that $R$ is explicitly definable in $\langle M,S\rangle$ . Then $S$ is implicitly definable in $M$ . Proof. Suppose that $R$ is the unique relation fulfilling sentence $\varphi(\dot R)$ in $M$ , in the language expanded with predicate $\dot R$ . Suppose $S$ is the unique relation fulfilling sentence $\psi(R,\dot S)$ in $\langle M,R\rangle$ . And suppose that $R$ is definable by formula $\theta(x,S)$ in $\langle M,S\rangle$ , in that $Rx\leftrightarrow\theta(x,S)$ . Let $\Phi(\dot S)$ be the sentence asserting: $\varphi(\theta(x,\dot S))$ , that is, the relation defined by $\theta(x,\dot S)$ fulfills property $\varphi$ , and $\psi(\theta(x,\dot S),\dot S)$ holds, that is, the assertion $\psi(\dot R,\dot S)$ holds where $\dot R$ is interpreted by the relation defined by $\theta(x,\dot S)$ . I claim that this is an implicit definition of $S$ in $M$ . The reason is that whatever relation interpretation is given to $\dot S$ , it will have the property that the relation extracted from it via $\theta(x,\dot S)$ will have to be $R$ , since it fulfills the implicit definition of $R$ given by $\varphi$ . And further, since $\Phi$ asserts that $\psi$ is fulfilled by $\dot S$ relative to that relation, it follows that $\dot S$ must be $S$ . $\Box$ The corollary is that: Corollary. Multiplication is implicitly definable from successor. Proof. Addition is implicitly definable in $\langle\mathbb{N},S,0\rangle$ , and multiplication is implicictly definable over addition $\langle\mathbb{N},S,0,+\rangle$ , and by the Boolos/Jeffrey observation, addition is explicitly definable from multiplication and successor. So we are in the case of the theorem. $\Box$ A more striking instance might be: Corollary. First-order arithmetic truth for the standard model of arithmetic $\langle\mathbb{N},+,\cdot,0,1<\rangle$ is implicitly definable just from successor $\langle\mathbb{N},S,0\rangle$ . Proof. I intend to use the trinary truth predicate $\text{Tr}(\varphi,x,y,z)$ , holding when $\mathbb{N}\models\varphi[x,y,z]$ . This truth predicate is uniquely characterized on the standard model $\mathbb{N}$ by fulfilling the Tarski recursion, and so it is implicitly definable in $\langle\mathbb{N},+,\cdot\rangle$ . But both addition and multiplication are definable from the truth predicate (this is why we use the trinary version, since with just successor we don't initially have any coding, but once we get $+$ and $\times$ , then the usual coding kicks in), and they themselves are implicitly definable from successor. So by the theorem, truth is implicitly definable from successor. $\Box$ And one can of course iterate this by forming the predicate for truth-about-truth, and truth-about-truth-about-truth and so on, proceeding transfinitely up the hierarchy for quite some way. But lastly, let me mention that the theorem falls short of proving that the property of being implicitly-definable-over is transitive. That seems to be false in light of counterexamples discussed in the comments.
{ "source": [ "https://mathoverflow.net/questions/426302", "https://mathoverflow.net", "https://mathoverflow.net/users/1946/" ] }
426,742
(I am not sure if this is a mathematics or physics question so I am not sure where to post it. I am posting it here because the chief subject is an unreal universe that is purely a subject of mathematical theoretical analysis, but there are quite obvious and connections to the study of the real-life physics in our own universe.) This is something I have wondered about. Conway's game of life provides a fictional universe where that it seems that many highly complex behaviors are possible (including what could arguably be considered to some extent a form of life, i.e. self-replicating information processing systems) but objects are also subject interestingly to extreme instability and rapid entropization, in that a small perturbation of an ordered system, such as the addition or removal of a single "cell" or filled grid square, will cause rapid disintegration of that system. And this naturally invites comparisons with our own real-life universe: in this one, fortuitously, vanishingly small perturbations of systems do not generally lead to catastrophic entropy increases, but there is still a tendency toward increasing entropy in some way, and it seems thus that the CGoL universe could be thought of as perhaps in some regards having a "more aggressive" version of the 2nd law of thermodynamics. However, in other regards, it appears to strongly violate the laws of thermodynamics that work in our universe: machines or life forms generally will run forever without any changes, and you can even create infinite streams of "matter" such as with a glider gun. It would seem that, in particular, energy is not conserved, and what we call as "perpetual motion" is possible in the CGoL universe but is not possible in our own. But I wonder why this is, from a viewpoint of the mathematical structures of the two, in particular regarding the constraints on perpetual motion we know of in our universe. In particular, the typical retort as to why a perpetual motion machine is impossible is some variation on the first and second laws of thermodynamics (not sure what a "third law of thermodynamics" machine would be - I'd presume that's a machine that could refrigerate something to exactly absolute zero, then obtain 100% efficiency by using it as a cold bath). The typical reason it is asserted the first law is inviolate is Noether's theorem: dynamics is symmetric in temporal translation, meaning that, for a given configuration of particles, their future history does not depend on whether that configuration is created now or created (say) ten thousand years from now. But Conway's universe also has this temporal translation symmetry property. The rules have no explicit time dependence. They are a discrete-time dynamical system (DTDS), sure, but the temporal symmetry group is maximal, so arbitrary time translations can be accessed (a counterexample would be a universe where that behavior is one way for even generation numbers and another for odd generation numbers), and thus you can propagate a quantity along a streamline in phase space, so I am not sure why that the same ways you could argue for Noether in our universe wouldn't still go through for the most part. Is this correct? Does the temporal translation symmetry of Conway's universe give rise to a conserved quantity, that we might be able to call an "energy"? If so, how does or does not the ready and easy appearance of perpetual motion machines jive with its conservation? 
Moreover, could this also imply that if, say, hypothetically and someday a loophole were found in our universe that could permit what anyone else might call as perpetual motion, it would not necessarily imply a Noether temporal symmetry failure, but perhaps just a suitable redefinition or expansion of the idea of energy? And if not, why not? By the way, here are some observations on what a possible "energy" function might have to look like. One interesting property of Conway patterns is that they can not only expand perpetually, as in the glider gun, but they can also disappear completely . If we are to assume a conserved energy functional for a particular pattern, it would stand to reason that any pattern that eventually dies completely must have energy equal to that of the vacuum (presumably, we could just set this to 0 conways). It would be nice if, at least under some suitable well-defined circumstances, energies are additive, i.e. if we put two patterns next to each other on a suitable grid and they do not interact with each other, the total energy in the grid should be equal to the sum of energies of the two patterns. Note that non-interactivity is vital: we could imagine a couple of patterns that, by themselves, have a positive energy, but when suitably composed even if not immediately overlapping, would result in a pattern that disappears completely, and thus the composition must have energy equal to the vacuum energy. Presumably, the smallest stable pattern, which is 3 cells in a horizontal or vertical bar (technically it's not "stable" in the strictest sense because it oscillates between horizontal and vertical orientation, but I'd call it stable because it never dies and moreover it maintains its shape), should have the least positive energy, but it's also possible this may contend with the square (4 cells) because, while it has one more cell, it doesn't move. Does such an energy function exist? If so, but it is not unique, what further conditions could potentially single out a unique one? If it does not exist, which conditions should we relax and/or replace, and with what? And what would such energy functions suggest when applied to the situations that would seem like violations of conservation by the standards of the real-life universe, like the glider gun? Note that I'd also personally be inclined to think the above conditions are too restrictive in some ways because it's ostensibly still trying to assume or salvage a connection between energy and cell count and if anything we should take the existence of guns, patterns with changing cell count, and so forth as a big red flag suggesting "don't even bother" with that approach. However, then we need a different strategy for trying to find useful axioms. (Note, clearly there's always a trivial energy function, given by assigning the same value to every pattern. That's also clearly not what we want, but I'm not sure how to eliminate it.) It's also important to note the many differences between the symmetries of the Conway universe's dynamics and those of what we know so far about the dynamics of our own. For one, the Conway universe's symmetries are discrete, as I mentioned above. That may, alone, be enough to provide complete explaining power as to how that time translation can sit alongside infinite glider guns, but I am not sure. For another, some key symmetries that are present in ours are lacking: for one, rotational symmetry is not present except in the simple four-fold way. 
For another, boost symmetry is not present for any reasonable notion of boost - that is to say, the Conway universe admits a preferred, absolute rest frame: namely, the one in which a 2x2 square stays put. It seems to me any of these might frustrate or at least require a radical rethinking of what "energy" would have to mean. Perhaps these differences wholly and irrevocably sabotage the idea of energy?
Q: Does the temporal translation symmetry of Conway's universe give rise to a conserved quantity, that we might be able to call an "energy"? As noticed in the earliest studies of Conway's Game of Life, it has no local conservation law --- it is not possible to define a locally conserved energy functional. The dynamics does have temporal translation symmetry, but Noether's theorem (which ties a symmetry to a conservation law) does not apply firstly because the dynamics is discretized in space and time, and secondly because the dynamics is not based on a Lagrangian. So even a generalization along the lines of SmoothLife would not be sufficient to apply Noether's theorem.
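To make the non-conservation of "matter" discussed above concrete, here is a minimal Python sketch (my own addition, not part of the question or answer) that tracks the live-cell count of a few small patterns. The blinker keeps a constant population forever, a lone pair of cells vanishes immediately, and the R-pentomino's population fluctuates, so the naive candidate "energy = cell count" is clearly not conserved.

```python
# A minimal Game of Life simulator (live cells stored as a set of (x, y) pairs),
# used here only to illustrate that the live-cell count is not a conserved
# quantity: some patterns die out entirely, others grow or fluctuate.
from collections import Counter

def step(cells):
    """One generation of Conway's Game of Life on an unbounded grid."""
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {
        cell
        for cell, n in neighbor_counts.items()
        if n == 3 or (n == 2 and cell in cells)
    }

blinker = {(0, 0), (1, 0), (2, 0)}                       # period-2 oscillator: count stays 3
r_pentomino = {(1, 0), (2, 0), (0, 1), (1, 1), (1, 2)}   # evolves chaotically for a long time
pair = {(0, 0), (1, 0)}                                  # two lone cells: vanish after one step

for name, pattern in [("blinker", blinker), ("r-pentomino", r_pentomino), ("pair", pair)]:
    cells = set(pattern)
    counts = []
    for _ in range(20):
        counts.append(len(cells))
        cells = step(cells)
    print(name, counts)
```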
{ "source": [ "https://mathoverflow.net/questions/426742", "https://mathoverflow.net", "https://mathoverflow.net/users/11576/" ] }
427,842
Consider the following Turing machine $M$ : it searches over valid ZFC proofs, in lexicographic order, and if it finds a proof that $M$ halts, then it halts. If we fix a particular model of Turing machine (say a single-tape Turing machine), and if we fix an algorithm to verify that a given string is a valid ZFC proof of the fact that $M$ halts, this should constitute an unambiguous description of a Turing machine $M$ . (Standard arguments in computability theory, i.e., Kleene's recursion theorem, allow $M$ to compute functions of its own description.) Does $M$ halt? I find this question puzzling because there's no apparent logical contradiction either way. There could be a proof, in which case it will halt. If there is no proof, then it doesn't halt. What would the answer "depend" on? $M$ either halts or doesn't halt, but could its behavior be independent of ZFC? I should note that a closely related Turing machine $M'$ can be used to give a simple proof of Gödel's incompleteness theorem. It's much more "rebellious" in its behavior, where if it finds a proof that it halts, it doesn't halt, and if it finds a proof that it doesn't halt, it halts. It follows that there cannot be a proof of its halting or non-halting in ZFC (unless ZFC is inconsistent). However $M$ is just earnestly trying to figure out its fate. Which is it?
It is a very nice question. The answer is yes, the machine will find a proof of its own halting nature, and it will halt when it does so. I claim this is a consequence of Löb's theorem . Let $M$ be a Turing machine such as you describe. Note that it is not quite correct to say "the" Turing machine that does what you say, since there will be infinitely many different machines $M$ that search for proofs that they themselves halt. It may not be clear initially that they all have the same behavior, but let me show that indeed they do all halt. Let $\psi$ be the assertion " $M$ halts." Thus, we can prove in ZFC that if $\psi$ is provable, then it is true, since $M$ would discover the proof. Thus, ZFC proves $\text{Pr}_{ZFC}(\ulcorner\psi\urcorner)\to\psi$ . But this is exactly the situation that Löb's theorem is about, and it tells us that we can prove $\psi$ directly in ZFC. So we can prove in ZFC that $M$ halts, as I claimed. It follows that we can prove in PA and much less that $M$ halts, since once we have the actual ZFC proof that it halts, then we can prove in a very weak theory that the actual Turing machine computation halts in whatever specific number of steps it would take to verify the finding of it. That argument uses the ZFC version of Löb's theorem, but we can get by with the standard PA version, even though M is searching for proofs in ZFC. The reason is that in PA we can prove that $\text{Pr}_{PA}(\ulcorner\psi\urcorner)\to\psi$ , since if PA proves that $M$ halts, then we can prove that ZFC will prove it as well, and so $M$ will halt. Thus, we need only the standard PA version of Löb's theorem to see that PA proves that $M$ halts. Incidentally, regarding the negated version and the proof of the incompleteness theorem you mention at the end of the post, these ideas are also the basis of the universal algorithm. See my paper The modal logic of arithmetic potentialism and the universal algorithm .
{ "source": [ "https://mathoverflow.net/questions/427842", "https://mathoverflow.net", "https://mathoverflow.net/users/5534/" ] }
427,891
Does there exist a finite set of points on the Euclidean plane, such that: (1) no 3 points are collinear, and (2) every one of the points has (at least) three other points in the set at the same distance from it? It seems to me that the answer should be No, but my naïve attempts to prove it have failed.
[The original answer apparently consisted only of an image, which did not survive text extraction; no text content is recoverable here.]
{ "source": [ "https://mathoverflow.net/questions/427891", "https://mathoverflow.net", "https://mathoverflow.net/users/489097/" ] }
427,942
For an integer $n$, let $\ell(n)$ denote the maximal number of consecutive $1$s in the binary expansion of $n$. For instance, $$ \ell(71_{10}) = \ell(1000111_2) = 3. $$ Consider the set $E$ of all integers $n \in \mathbb{N}$ such that $\ell(n)$ is even. It seems intuitively obvious that $E$ should have natural density $1/2$: $$ d(E) = \lim_{N\to \infty} \frac{|E \cap [0,N)|}{N} = 1/2.$$ Can one prove that this is actually the case? Less ambitiously, can one show that $$ \bar{d}(E) = \limsup_{N\to \infty} \frac{|E \cap [0,N)|}{N} > 0?$$ Edit to add: The following sketch of an argument seems to show that $$1/3 \leq \underline{d}(E) \leq \bar{d}(E) \leq 2/3.$$ Note that the binary expansion of any integer $n$ can be written as $(n)_2 = u 1^{\ell(n)}v$, where $u,v \in \{0,1\}^*$, $u$ is either empty or ends with a $0$, $v$ is either empty or begins with a zero, and the longest block of consecutive $1$s in $u$ has length strictly less than $\ell(n)$. Divide $E$ into three sets, depending on $v$ in the decomposition above: $E_{bad}$ consists of $n\in E$ for which $|v| \leq 1$, $E_{0}$ consists of $n\in E$ for which $v = 00v'$, and $E_1$ consists of $n \in E$ for which $v = 01v'$. The set $E_{bad}$ is small enough that we don't need to worry about it. Now, define the map $\phi_0 \colon\ E_0 \to \mathbb{N} \setminus E$ by (using the expansion above): $$ (\phi_0(n))_2 = u1^{\ell(n)+1}0v'.$$ Similarly, define $\phi_1 \colon\ E_1 \to \mathbb{N} \setminus E$ by $$ (\phi_1(n))_2 = u1^{\ell(n)+1}0v'.$$ It is not hard to show that these maps are both injective. Additionally, for almost all $n$, $\phi_0(n)/n$ is close to $1$, and likewise for $\phi_1$. From here, one can infer that $$ \bar d(E) \leq 2(1 -\bar d(E)), $$ and consequently $\bar d(E) \leq 2/3$. A symmetric argument shows that $\underline d(E) \geq 1/3$.
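As a quick numerical probe of the question (my own illustration, not part of the original post), one can compute the counting ratio $|E \cap [0,N)|/N$ for a few values of $N$; it hovers near $1/2$, although, as the answer below explains, it does not actually converge.

```python
# Quick numerical experiment (illustration only): estimate |E ∩ [0, N)| / N,
# where E is the set of n whose longest run of 1s in binary has even length.
def max_run_of_ones(n: int) -> int:
    return max((len(block) for block in bin(n)[2:].split("0")), default=0)

for k in range(10, 21, 2):
    N = 2 ** k
    count = sum(1 for n in range(N) if max_run_of_ones(n) % 2 == 0)
    print(f"N = 2^{k:2d}: |E ∩ [0,N)| / N = {count / N:.6f}")
```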
Perhaps surprisingly, the random variable $\ell(n)$ (with $n$ drawn uniformly from $[0,N)$) concentrates too much around $\log_2\log_2 N$ (where $\log_2$ denotes the logarithm to base $2$) to have a limiting parity probability - the variance stays bounded as $N \to \infty$, as opposed to growing to infinity. One only recovers a limiting law when the fractional part $\{\frac{1}{2} \log_2 \log_2 N \}$ of half the double logarithm of $N$ converges to a limit, and when one does so the parity probability will usually converge to a limit that deviates slightly from $1/2$. To simplify the calculations a little let us assume that $N$ is of the form $N = 2^{2^{2k+1}}$ (I'll leave it as an exercise to the reader to handle the general case) in the asymptotic regime $k \to \infty$. Then the binary expansion of a randomly chosen element $n$ of $[0,N)$ consists of $2^{2k+1}$ independent Bernoulli variables (each taking the values $0$ and $1$ with probability $1/2$). We think of this as the initial segment of an infinite sequence of Bernoulli variables. Now we perform the standard trick of viewing this sequence as a renewal process. After each $0$, the number of $1$s one encounters before one reaches the next $0$ is $a-1$ where $a$ is distributed according to a geometric distribution of expectation $2$. One can thus interpret this sequence as $a_1-1$ ones followed by a zero, then $a_2-1$ ones followed by a zero, and so forth ad infinitum, where $a_1,a_2,\dots$ are iid geometric distributions of expectation $2$. By the law of large numbers, we see with probability $1-o(1)$ that the first $t$ for which $a_1+\dots+a_t$ exceeds $2^{2k+1}$ will lie in the range $[2^{2k}-2^{4k/3},2^{2k}+2^{4k/3}]$ (say). Also, by symmetry we see that with probability $1-o(1)$, the maximum value of the $a_i$ for $i \leq 2^{2k}+2^{4k/3}$ will already be attained for $i \leq 2^{2k}-2^{4k/3}$. Putting these two together, we see that with probability $1-o(1)$, $\ell(n)$ will equal $\sup_{1 \leq i \leq 2^{2k}} a_i-1$. So asymptotically we just need to understand the distribution of $\sup_{1 \leq i \leq 2^{2k}} a_i-1$. We have the exact formula $$ {\bf P}( \sup_{1 \leq i \leq 2^{2k}} a_i-1 < t ) = \prod_{i=1}^{2^{2k}} {\bf P}(a_i-1 < t)$$ $$ = (1-2^{-t})^{2^{2k}}$$ for any positive integer $t$, so in particular $$ {\bf P}( \sup_{1 \leq i \leq 2^{2k}} a_i-1 - 2k < s ) = \exp( - 2^{-s} ) + o(1)$$ for any fixed $s$. Thus in the limit $k \to \infty$, $\ell(n) - 2k$ converges in distribution to a discrete random variable $X$ with distribution function $$ {\bf P}( X < s ) = \exp( - 2^{-s} ).$$ (Is there a name for this sort of random variable? EDIT: it is a discrete Gumbel distribution, see update below.) The quantity $\frac{|E \cap [0,N)|}{N}$ then converges to the probability that $X$ is even, which is $$ \sum_{j \in {\bf Z}} \exp(-2^{-2j-1}) - \exp(-2^{-2j}) = 0.4998402\dots$$ which is very slightly less than $1/2$. (If one picked a different subsequence of $N$ one would obtain a different limit; for instance if $N = 2^{2^{2k}}$ then the same analysis would ultimately give the complementary limiting probability of $0.500157\dots$.)
UPDATE: after a tip in the comments, I'll remark that a refinement of the above analysis will eventually show that the distribution of $\ell(n)$ is asymptotic to the integer part $\lfloor \mathrm{Gumbel}(\log_2 \log_2 N, \log_2 e)\rfloor$ of a Gumbel distribution , in the sense that the Levy metric (for instance) between the two distributions goes to zero as $N \to \infty$ (without any further restriction on the natural number $N$ ). In retrospect this sort of answer was a natural guess, given the usual role of the Gumbel distribution in extreme value theory . Some references for further reading (gathered from following links in the comments): Gordon, Louis; Schilling, Mark F.; Waterman, Michael S. , An extreme value theory for long head runs , Probab. Theory Relat. Fields 72, 279-287 (1986). ZBL0587.60031 . Chakraborty, Subrata; Chakravarty, Dhrubajyoti; Mazucheli, Josmar; Bertoli, Wesley , A discrete analog of Gumbel distribution: properties, parameter estimation and applications, ZBL07482747 .
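As a numerical sanity check of the two limiting values quoted above (my own addition, not part of the answer), the series can be summed directly; truncating to $|j| \le 40$ is far more than enough for double precision.

```python
# Numerically evaluate the limiting parity probabilities from the answer:
#   P(X even) = sum_j exp(-2^(-2j-1)) - exp(-2^(-2j))   ~ 0.49984...
# and the complementary probability
#   P(X odd)  = sum_j exp(-2^(-2j))   - exp(-2^(-2j+1)) ~ 0.50015...
import math

def term(a, b):
    # exp(-2^a) - exp(-2^b); huge negative arguments simply underflow to 0.0
    return math.exp(-2.0 ** a) - math.exp(-2.0 ** b)

p_even = sum(term(-2 * j - 1, -2 * j) for j in range(-40, 41))
p_odd = sum(term(-2 * j, -2 * j + 1) for j in range(-40, 41))
print(p_even)   # ~ 0.4998402..., matching the limit quoted in the answer
print(p_odd)    # ~ 0.500159..., matching the complementary value 0.500157... (rounded)
```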
{ "source": [ "https://mathoverflow.net/questions/427942", "https://mathoverflow.net", "https://mathoverflow.net/users/14988/" ] }
428,885
The title says it all. I'm wondering if the power series ring $\mathbb{Q}[[X]]$ (with rational coefficients) embeds as a ring into the field of real numbers. There are various topologies one might consider here, but I'm curious if there is an algebraic embedding.
$\def\QQ{\mathbb{Q}}\def\RR{\mathbb{R}}$ The answer is no! Lemma Let $f(x) \in \QQ[[x]]$ with $f(0) =c^2$ for some nonzero rational $c$ . Then $f(x)$ is a square in $\QQ[[x]]$ . Proof Use the Taylor series for $\sqrt{c^2+u}$ about $u=0$ . $\square$ Therefore, if $\phi : \QQ[[x]] \to \RR$ is a ring homomorphism, then $\phi(1/n^2 + x) = 1/n^2 + \phi(x)$ must be a square for every positive integer $n$ , and so $1/n^2 + \phi(x) \geq 0$ for every positive integer $n$ . Similarly, $\phi(1/n^2 - x) = 1/n^2 - \phi(x) \geq 0$ for every positive integer $n$ . So $-1/n^2 \leq \phi(x) \leq 1/n^2$ and we conclude that $\phi(x)=0$ , so $\phi$ cannot be an embedding.
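A quick symbolic check of the lemma's mechanism (my own illustration, using SymPy; the choices $c = 1/3$ and truncation order $6$ are arbitrary): the Taylor series of $\sqrt{c^2 + x}$ has rational coefficients, so $1/n^2 + x$ really is a square in $\mathbb{Q}[[x]]$ to any finite order one cares to verify.

```python
# Illustration of the lemma: sqrt(1/9 + x) has a power series with rational
# coefficients, so 1/9 + x is a square in Q[[x]] up to the chosen order.
import sympy as sp

x = sp.symbols('x')
g = sp.sqrt(sp.Rational(1, 9) + x).series(x, 0, 6).removeO()
print(g)                                            # 1/3 + 3*x/2 - 27*x**2/8 + ...
print(sp.expand(g**2 - (sp.Rational(1, 9) + x)))    # only terms of degree >= 6 remain
assert all(c.is_rational for c in sp.Poly(g, x).all_coeffs())
```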
{ "source": [ "https://mathoverflow.net/questions/428885", "https://mathoverflow.net", "https://mathoverflow.net/users/482287/" ] }
429,619
A function $f:X\to X$ on a group $X$ is called a polynomial if there exist $n\in\mathbb N=\{1,2,3,\dots\}$ and elements $a_0,a_1,\dots,a_n\in X$ such that $f(x)=a_0xa_1x\cdots xa_n$ for all $x\in X$. The smallest possible number $n$ in this representation is called the degree of the polynomial $f$ and is denoted by $\deg(f)$. Let $\mathrm{Poly}(X)$ be the set of all polynomials on a group $X$. In fact, $\mathrm{Poly}(X)$ is a submonoid of the monoid $X^X$ of all self-maps of $X$, endowed with the operation of composition of functions. So, $|\mathrm{Poly}(X)|\le|X^X|=|X|^{|X|}$. If the group $X$ is commutative, then each polynomial is of the form $f(x)=ax^n$ for some $a\in X$ and $n\in\mathbb N$. This implies that the number of semigroup polynomials on a finite Abelian group $X$ is equal to $|X|\cdot\exp(X)\le |X|^2$ where $\exp(X)=\min\{n\in\mathbb N:\forall x\in X\; (x^n=1)\}$. Question 1. Is there any reasonable upper bound on the number of polynomials on a finite group $X$? For example, is $|\mathrm{Poly}(X)|=o(|X|^{|X|})$? Each polynomial $f:X\to X$ on a finite Abelian group $X$ has degree $\deg(f)\le\exp(X)$. Question 2. Is $\deg(f)\le\exp(X)$ for any polynomial $f:X\to X$ on a finite group $X$? Remark 2. An affirmative answer to Question 2 would imply that $$|\mathrm{Poly}(X)|\le \sum_{n=1}^{\exp(X)}|X|^{n+1}=\frac{|X|^{\exp(X)+2}-|X|^2}{|X|-1}.$$ Remark 3. Finite groups $X$ with $|\mathrm{Poly}(X)|=|X|\cdot\exp(X)$ are characterized in the following theorem. Theorem. A finite group $X$ has $|\mathrm{Poly}(X)|=|X|\cdot\exp(X)$ if and only if $X$ is either commutative or is isomorphic to $Q_8\times A$ for some nontrivial commutative group $A$ of odd order. Proof. To prove the ``if'' part, assume that $X$ is either commutative or $X$ is isomorphic to $Q_8\times A$ for some nontrivial commutative group $A$ of odd order. If $X$ is commutative, then the equality $|\mathrm{Poly}(X)|=|X|\cdot\exp(X)$ is clear. Now assume that $X=Q_8\times A$ for some nontrivial commutative group $A$ of odd order. GAP-calculations of Peter Taylor show that the group $Q_8$ has exactly 32 polynomials of each degree $k\in\{1,2,3,4\}$. This implies that $$|\mathrm{Poly}(Q_8\times A)|=32\cdot|\mathrm{Poly}(A)|=32\cdot |A|\cdot\exp(A)=4\cdot|X|\cdot\exp(A)=|X|\cdot\exp(X).$$ To prove the ``only if'' part, assume that $X$ is a finite non-commutative group with $|\mathrm{Poly}(X)|=|X|\cdot\exp(X)$. For every $a\in X$ and $n\in\mathbb N$, consider the polynomial $p_{a,n}(x)=ax^n$. The definition of $\exp(X)$ implies that the set $\mathrm{Pol}(X):=\{p_{a,n}:a\in X,\;1\le n\le \exp(X)\}$ has cardinality $|X|\cdot\exp(X)$ and hence coincides with the set $\mathrm{Poly}(X)$. So, for any $a\in X$ there exists $n\le\exp(X)$ such that $axa^{-1}=x^n$ for all $x\in X$. This implies that every subgroup of $X$ is normal, so $X$ is a Dedekind group. By the classical Dedekind result, $X$ is isomorphic to the product $Q_8\times A\times B$ where $A$ is an Abelian group of odd order and $B$ is a Boolean group, i.e., a group of exponent $\exp(B)\le 2$. If the groups $A$ and $B$ are trivial, then $|\mathrm{Poly}(X)|=|\mathrm{Poly}(Q_8)|=128\ne |X|\cdot\exp(X)=32$. Next, assume that the group $A$ is trivial and $B$ is not trivial. Then $|\mathrm{Poly}(B)|=|\{a,ax:a\in B\}|=2|B|$. GAP-calculations of Peter Taylor show that the group $Q_8$ has exactly 32 polynomials of each degree $k\in\{1,2,3,4\}$. In particular, $Q_8$ has exactly 64 polynomials of even degree and 64 polynomials of odd degree.
This implies that $|\mathrm{Poly}(X)|=64\cdot 2|B|=16|Q_8\times B|=16|X|\ne 4|X|=|X|\cdot\exp(X)=|\mathrm{Poly}(X)|$. This contradiction shows that the group $A$ is nontrivial. Taking into account that the group $Q_8$ has exactly 32 polynomials of each degree $k\in\{1,2,3,4\}$, we conclude that $$|X|\cdot\exp(X)=|\mathrm{Poly}(X)|=|\mathrm{Poly}(Q_8\times A\times B)|=32\times|\mathrm{Poly}(A\times B)|=32\times |A\times B|\times \exp(A\times B)=4\times|Q_8\times A\times B|\times \exp(A\times B)=4\cdot |X|\cdot\exp(A\times B)$$ and hence $\exp(Q_8\times A\times B)=\exp(X)=4\exp(A\times B)$. Since $\exp(Q_8\times A\times B)=4\exp(A)$, this implies that the Boolean group $B$ is trivial and hence $X=Q_8\times A$. $\square$
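For readers who want to reproduce the per-degree counts for $Q_8$ cited above without GAP, here is a brute-force sketch in Python (my own, not Peter Taylor's GAP code); it enumerates all words $a_0xa_1\cdots xa_n$ for $n\le 4$ over a hand-coded quaternion multiplication table, so both the table and the resulting counts should be double-checked before relying on them.

```python
# Brute-force count of polynomial maps on Q8, grouped by degree (smallest n
# realizing the map).  Q8 is represented by the unit quaternions +-1, +-i, +-j, +-k
# stored as integer 4-tuples (a, b, c, d) meaning a + bi + cj + dk.
from itertools import product

def qmul(p, q):
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

units = [(1,0,0,0), (-1,0,0,0), (0,1,0,0), (0,-1,0,0),
         (0,0,1,0), (0,0,-1,0), (0,0,0,1), (0,0,0,-1)]

def poly_map(consts):
    """The self-map x -> a0 x a1 x ... x an, returned as the tuple of its 8 values."""
    values = []
    for x in units:
        acc = consts[0]
        for a in consts[1:]:
            acc = qmul(qmul(acc, x), a)
        values.append(acc)
    return tuple(values)

seen = set()
for n in range(1, 5):
    new_maps = {poly_map(c) for c in product(units, repeat=n + 1)} - seen
    print(f"polynomial maps of degree exactly {n}: {len(new_maps)}")
    seen |= new_maps
print("total:", len(seen))
```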
$\DeclareMathOperator\Poly{Poly}$ Proposition. If $G$ is a simple non-abelian finite group, then $\Poly(G)=G^G$. (Edit: this observation appears as the main theorem in this paper by Maurer and Rhodes, Proc. AMS 1965. See also Theorem 2 here by Schneider-Thom. Thanks to Benjamin Steinberg for the reference.) Here is the proof. It uses no machinery. Lemma. There exists $f\in\Poly(G)$ whose support is a singleton. [Here the support of $f$ means $f^{-1}(G\smallsetminus\{1\})$.] Indeed, let $f$ have support $\{g\}$. Considering $x\mapsto hf(x)h^{-1}$ we see that all values in a single nontrivial conjugacy class are achieved by polynomials supported by $\{g\}$. By simplicity and taking products, we see that all maps supported by $\{g\}$ are definable as polynomials. Moreover after considering $x\mapsto f(gh^{-1}x)$ we obtain all functions supported by $\{h\}$. Since an arbitrary map is a product of maps supported by singletons, we obtain the proposition. Now let us prove the lemma. Let $X$ be a minimal subset among nonempty supports of elements of $\Poly(G)$ ($X$ exists because there exists a polynomial not constant $=1$). Say $X$ is the support of $f$. We have to show that $X$ is a singleton. Fix $g\in X$. So $u(x)=g^{-1}x$ is a polynomial. Also for each $h\in G$, the self-map $v$ defined by $v(x)=hf(x)h^{-1}$ is a polynomial. Then $w_h:x\mapsto [u(x),v(x)]$ is a polynomial as well. Its support is contained in $X\smallsetminus\{g\}$. So we obtain a contradiction (a strictly smaller nonempty support), unless $w_h$ is constant equal to $1$ for each choice of $h$. The latter means that for each $x\in X\smallsetminus\{g\}$, the element $g^{-1}x$ commutes with $hf(x)h^{-1}$. That is, the nontrivial element $g^{-1}x$ commutes with a whole nontrivial conjugacy class. But the centralizer of a nontrivial conjugacy class is trivial (it is a normal subgroup, and can't be the whole group because the center is trivial). This is a contradiction unless $X\smallsetminus\{g\}$ is empty, which is precisely what we want. The proof is complete. Remark (after Taras' comment, and also in the above Maurer-Rhodes reference): conversely, for a finite group $G$, the property $\Poly(G)=G^G$ implies that $G$ is simple non-abelian or $|G|\le 2$. Indeed if $G$ is non-trivial and non-simple, then it has a non-trivial proper normal subgroup $N$, and polynomials have the nontrivial constraint $f(N)\subset f(1)N$. Otherwise $G=\mathbf{Z}/p\mathbf{Z}$ for $p$ prime or $1$. For such a group, a "polynomial" has the form (using additive notation) $x\mapsto a+bx$ for some $a,b\in\mathbf{Z}/p\mathbf{Z}$ (i.e. is an affine self-map in this ring). There are thus $p^2$ such functions. And $p^2<p^p$ iff $p>2$.
{ "source": [ "https://mathoverflow.net/questions/429619", "https://mathoverflow.net", "https://mathoverflow.net/users/61536/" ] }
430,365
Quadratic forms play a huge role in math. This leads one to wonder: Is there a theory of cubic forms, quartic forms, quintic forms and so on? I have failed to discover any. Is there any such theory? If not, is it because: (1) it is not as interesting as quadratic forms? (2) it is so hard that no-one has yet written about such a theory? (3) it is already deeply infiltrated in math and only some smart people know about it?
I once asked André Weil the same question. When I was in college, taking a course that discussed quadratic forms, Weil gave a guest lecture to the students about that topic. After the talk, I raised my hand and asked him why there was such a big deal in math about quadratic forms while it seemed there was nothing comparable for higher-degree forms. Weil gave an answer, but to my regret I could not understand it (difficulty hearing him) and I did not ask him later to repeat what he had said. Now many years later, I can offer an answer that I think my former student self would have found satisfactory. Before I begin, let me point out that we all know one important higher-degree form: the determinant form of degree $n$. So it is reasonable to ask what kind of general theory there could be for higher degree forms. First let's see that the bijection between quadratic forms and symmetric bilinear forms generalizes to higher degree. Recall that for a field $F$ not of characteristic $2$, there is a bijection between quadratic forms $Q : F^n \to F$ and symmetric bilinear forms $B : F^n \times F^n \to F$ by $Q(\mathbf x) = B(\mathbf x,\mathbf x)$ and $$ B(\mathbf x,\mathbf y) = \frac{1}{2}(Q(\mathbf x + \mathbf y) - Q(\mathbf x) - Q(\mathbf y)). $$ Replacing $2$ by a degree $d \geq 1$, for a field $F$ where $d! \not= 0$ (meaning $F$ has characteristic $0$ or characteristic $p$ for $p > d$) there is a bijection between forms $f : F^n \to F$ of degree $d$ and symmetric $d$-multilinear maps $\Phi : \underbrace{F^n \times \cdots \times F^n}_{d \ {\sf copies}} \to F$ where $f(\mathbf x) = \Phi(\mathbf x,\ldots,\mathbf x)$ and $$ \Phi(\mathbf x_1,\ldots,\mathbf x_d) = \frac{1}{d!}\sum_{\substack{J \subset \{1,\ldots, d\} \\ J \not= \emptyset}} (-1)^{d - |J|}f\left(\sum_{j \in J} \mathbf x_j\right), $$ (You could include $J = \emptyset$ in the sum by the usual convention that an empty sum is $\mathbf 0$, since $f(\mathbf 0) = 0$.) For example, when $f$ is a cubic form ($d = 3$, $n$ arbitrary), the associated symmetric trilinear form is $$ \Phi(\mathbf x,\mathbf y,\mathbf z) = \frac{1}{6}(f(\mathbf x + \mathbf y + \mathbf z) - f(\mathbf x + \mathbf y) - f(\mathbf x + \mathbf z) - f(\mathbf y + \mathbf z) + f(\mathbf x) + f(\mathbf y) + f(\mathbf z)). $$ For example, if $f : F^3 \to F$ by $f(x_1,x_2,x_3) = x_1^3+x_2^3+x_3^3$ then $\Phi(\mathbf x,\mathbf y,\mathbf z) = x_1y_1z_1 + x_2y_2z_2+x_3y_3z_3$. The general formula for $\Phi$ in terms of $f$ shows why we want $d! \not= 0$ in $F$. Over fields of characteristic $0$, I think this bijection is due to Weyl. Using this bijection, we call a form $f$ of degree $d$ nondegenerate if, for the corresponding symmetric multilinear form $\Phi$, we have $\Phi(\mathbf x,\mathbf y, \ldots, \mathbf y) = 0$ for all $\mathbf y$ in $F^n$ only when $\mathbf x = \mathbf 0$. (Equivalently, we have $\Phi(\mathbf x,\mathbf x_2, \ldots, \mathbf x_d) = 0$ for all $\mathbf x_2, \ldots, \mathbf x_d$ in $F^n$ only when $\mathbf x = \mathbf 0$.) When $d = 2$ (the case of quadratic forms), this is the usual notion of a nondegenerate quadratic form (or nondegenerate symmetric bilinear form). That the bijection between quadratic forms and symmetric bilinear forms can be extended to higher degrees suggests there might be a general theory in higher degree that's just like the quadratic case, but it turns out there really are significant differences between quadratic forms and forms of higher degree. Here are two of them. Diagonalizability.
Outside characteristic 2, a quadratic form can be diagonalized after a linear change of variables, but for $n \geq 3$ , a form of degree $n$ might not be diagonalizable after any linear change of variables. While any nondegenerate binary cubic form over $\mathbf C$ can be diagonalized (see the start of the proof of Lemma 1.7 here ; in the binary case, nondegeneracy of a cubic form is equivalent to the dehomogenization being a cubic polynomial with nonzero discriminant), nondegenerate cubic forms over $\mathbf C$ in more than two variables need not be diagonalizable. For example, the three-variable cubic form $x^3 - y^2z - xz^2$ is nondegenerate and can't be diagonalized over $\mathbf C$ for a reason related to elliptic curves: see my comments on the MO pages here and here . For each $d \geq 3$ and $n \geq 2$ except for $d=3$ and $n = 2$ , there are nondegenerate forms of degree $d$ in $n$ variables over $\mathbf C$ that are smooth away from $(0,0,\ldots,0)$ and are not diagonalizable. Note the diagonal form $x_1^d + \cdots + x_n^d$ is smooth away from the origin. Group theory. For a form $f(x_1,\ldots,x_n)$ over a field $F$ , its orthogonal group is the linear changes of variables on $F^n$ that preserve it: $$ O(f) = \{A \in {\rm GL}_n(F) : f(A\mathbf v) = f(\mathbf v) \ {\rm for \ all } \ \mathbf v \in F^n\}. $$ Nondegenerate quadratic forms have a rich orthogonal group (many reflections) and some higher-degree forms have a large orthogonal group: if $f$ is the determinant form of degree $n$ then its orthogonal group is ${\rm SL}_n(F)$ . But for a form $f$ of degree $d \geq 3$ over an algebraically closed field of characteristic $0$ , $O(f)$ is sometimes a finite group. This happens if the corresponding symmetric $d$ -multilinear form $\Phi$ satisfies $\Phi(\mathbf x, \ldots, \mathbf x,\mathbf y) = 0$ for all $\mathbf y$ in $F^n$ only when $\mathbf x = 0$ . When $d \geq 3$ this condition is different from nondegeneracy as defined above. Let's say such $f$ and $\Phi$ are nonsingular . That nonsingular forms of degree $d$ have a finite orthogonal group over $\mathbf C$ is due to Jordan. It also holds over algebraically closed fields of characteristic $p$ when $p > d$ (so $d! \not= 0$ in the field). As an example, the orthogonal group of $x_1^d + \cdots + x_n^d$ over $\mathbf C$ when $d \geq 3$ has order $d^n n!$ : it contains only the compositions of $n!$ coordinate permutations and scaling of each of the $n$ coordinates by $d$ th roots of unity. Taking $n = 2$ , this reveals a basic difference between the concrete binary forms $x^2 + y^2$ and $x^d + y^d$ for $d \geq 3$ that you can tell anyone who asks you in the future how higher degree forms are different from quadratic forms. In retrospect, the label used for the second topic ("Group theory") really applies to both topics. For a field $F$ , the group ${\rm GL}_n(F)$ acts on the forms of degree $d$ in $n$ variables with coefficients in $F$ , and the first topic is about the orbit of $x_1^d + \cdots + x_n^d$ under this action while the second topic is about the stabilizer of $f(x_1,\ldots,x_n)$ under this action. Concerning papers and books, I'll just mention one of each. There is Harrison's paper "A Grothendieck ring of higher degree forms" in J. Algebra 35 (1978), 123-138 here and Manin’s book Cubic forms: algebra, geometry, arithmetic . Manin mentioned a recurring nightmare he had about this book, soon after he finished it, in an interview with Eisenbud here .
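As a small symbolic check of the cubic polarization formula above (my own illustration, using SymPy, not part of the answer), one can verify the claimed identity for $f(x_1,x_2,x_3) = x_1^3 + x_2^3 + x_3^3$:

```python
# Verify that the d = 3 polarization formula recovers the symmetric trilinear
# form x1*y1*z1 + x2*y2*z2 + x3*y3*z3 from f(v) = v1^3 + v2^3 + v3^3.
import sympy as sp

x = sp.Matrix(sp.symbols('x1 x2 x3'))
y = sp.Matrix(sp.symbols('y1 y2 y3'))
z = sp.Matrix(sp.symbols('z1 z2 z3'))

def f(v):
    return v[0]**3 + v[1]**3 + v[2]**3

Phi = sp.Rational(1, 6) * (f(x + y + z) - f(x + y) - f(x + z) - f(y + z)
                           + f(x) + f(y) + f(z))
expected = sum(x[i] * y[i] * z[i] for i in range(3))
assert sp.expand(Phi - expected) == 0
print("polarization identity verified")
```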
{ "source": [ "https://mathoverflow.net/questions/430365", "https://mathoverflow.net", "https://mathoverflow.net/users/173315/" ] }
431,083
Given $\ell\ge 1$, we say a graph $G$ is $\ell$-good if for each $u,v\in G$ (not necessarily distinct), the number of walks of length $\ell$ from $u$ to $v$ is odd. We say a graph $G$ is good if it is $\ell$-good for some $\ell\ge 1$. Do good graphs exist? For clarity, I am only talking about simple graphs (which lack loops and multiple edges). Context: In Stanley's book on Algebraic Combinatorics, Exercise 1.13 is about proving an interesting property that holds for all good graphs. A friend of mine told me that after solving the exercise, he realized he didn't know of any example of such graphs. I too am stumped about whether such graphs can exist. A computer search revealed that none exist with $7$ or fewer vertices. I am unclear about the specifics of the search, as it was done by my friend.
A graph without loops cannot be good. Assume the contrary, let $G$ have $n$ vertices and be good. Let $A$ be the adjacency matrix of $G$ , let $\lambda_1,\ldots,\lambda_n$ be its eigenvalues over some extension of $\mathbb{F}_2$ . We have $\sum_{i=1}^n \lambda_i=\mathrm{tr} A=0$ . That $A$ is good means that $A^\ell$ is an all-1 matrix over $\mathbb{F}_2$ . It has rank 1, thus at least $n-1$ eigenvalues of $A^\ell$ are 0. On the other hand, the eigenvalues of $A^\ell$ are $\lambda_1^\ell,\ldots,\lambda_n^\ell$ . Therefore, at least $n-1$ $\lambda_i$ 's are zero, and, since $\sum \lambda_i =0$ , all $\lambda_i$ 's are 0. Thus $A$ is nilpotent. Since $A^\ell$ has rank 1, we get $A^{\ell+1}=0$ (indeed, denote $\mathrm{im} A^{\ell}:=X$ , then $\dim X=1$ . We have $\mathrm{im} A^{\ell+1}\subset \mathrm{im} A^{\ell}=X$ , and also $\mathrm{im} A^{\ell+1}=AX$ . Since $\dim X=1$ , either $\mathrm{im} A^{\ell+1}=\{0\}$ , or $AX=\mathrm{im} A^{\ell+1}=X$ ; in the latter case $A$ is not nilpotent since $A^kX=X\ne \{0\}$ for all $k=0,1,2,\ldots$ ). So, $A\cdot A^\ell=0$ , that means that the sum of entries in every row of $A$ is even, i.e., every vertex in $G$ must have even degree. Now pick a vertex $v$ and let $W$ be the set of all walks of length $\ell$ from $v$ to $v$ . The cardinality of $W$ is odd by hypothesis. The operation $\rho$ of reversing a walk is an involution on $W$ , so the number of fixed points of $\rho$ is odd; these fixed points consists of walks of the form "take any walk of length $\ell/2$ starting at $v$ and then retrace your steps back to $v$ " (so in particular, $\ell$ must be even). But because every vertex has even degree, in particular there is an even number of choices for the last step of the walk of length $\ell/2$ , so the total number of walks of length $\ell/2$ must be even. This is a contradiction.
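The question mentions a computer search up to $7$ vertices; here is a small self-contained sketch of such a check (my own addition, restricted to at most $5$ vertices to keep it fast). For each graph it iterates powers of the adjacency matrix over $\mathbb{F}_2$ until the sequence of powers repeats, which makes the test finite and exact; consistent with the proof above, no all-ones power ever appears.

```python
# Exhaustive check that no simple graph on <= 5 vertices is l-good for any l:
# over F_2 the sequence A, A^2, A^3, ... is eventually periodic, so it suffices
# to iterate until a power repeats and check whether the all-ones matrix shows up.
from itertools import combinations, product
import numpy as np

def is_good(adj):
    n = adj.shape[0]
    ones = np.ones((n, n), dtype=np.uint8)
    power = adj.copy()
    seen = set()
    while power.tobytes() not in seen:
        if np.array_equal(power, ones):
            return True
        seen.add(power.tobytes())
        power = (power @ adj) % 2
    return False

for n in range(1, 6):
    pairs = list(combinations(range(n), 2))
    found = 0
    for edges in product([0, 1], repeat=len(pairs)):
        adj = np.zeros((n, n), dtype=np.uint8)
        for (i, j), e in zip(pairs, edges):
            adj[i, j] = adj[j, i] = e
        found += is_good(adj)
    print(f"n = {n}: good graphs found = {found}")
```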
{ "source": [ "https://mathoverflow.net/questions/431083", "https://mathoverflow.net", "https://mathoverflow.net/users/130484/" ] }
431,642
Let $M$ , $N$ be connected nondiscrete compact smooth manifolds. Can the ring of continuous functions on $M$ be isomorphic to the ring of smooth functions on $N$ ?
No. In both the smooth function ring and the continuous function ring, a maximal ideal $\frak m$ consists of the functions vanishing at some point. In the smooth case $\frak m/\frak m^2$ is the cotangent space of the manifold at that point, which is nonzero since the manifold is nondiscrete, while in the continuous case $\frak m^2=\frak m$ (every continuous $f$ vanishing at the point factors as $f=gh$ with $g=|f|^{1/2}$ and $h=f/|f|^{1/2}$, extended by $0$, both continuous and vanishing there). So no ring isomorphism can match up these maximal ideals.
{ "source": [ "https://mathoverflow.net/questions/431642", "https://mathoverflow.net", "https://mathoverflow.net/users/148161/" ] }
432,470
I came across a post by Ron Maimon on physics.SE that makes what seems to me to be a very interesting conjecture I've never seen before about what it would take to settle every question of arithmetic. First I'll try to be more precise: a question of arithmetic is a first-order statement in Peano arithmetic, e.g. a statement about whether some Turing machine halts. I believe these are exactly the mathematical statements which, for example, Scott Aaronson regards as having definite truth values independent of our ability to prove or disprove them from any particular system of axioms, unlike e.g. the continuum hypothesis. If I've understood Ron correctly, he seems to believe the following: Conjecture: Every question of arithmetic is settled by the claim that some sufficiently large computable ordinal $\alpha$ is well-founded. For example, Gentzen showed that the well-foundedness of $\alpha = \epsilon_0$ can prove the consistency of PA. Question: Has this been stated as a conjecture somewhere in the literature? Do people expect it to be true? A possibly more helpfully specific version of this question: does there exist for every positive integer $n$ a computable ordinal $\alpha_n$ whose well-foundedness determines the value of the Busy Beaver number $BB(n)$ ?
The question of whether a computable linear order is well-founded is $\Pi^1_1$ -complete, so this is true in a sense: There is a computable function $F$ such that, for every sentence $\varphi$ in the language of arithmetic with Godel number $\ulcorner\varphi\urcorner$ , $F(\ulcorner\varphi\urcorner)$ is an index for a computable well-ordering iff $\varphi$ is true. (To be precise, this is provable in - say - $\mathsf{ZF}$ or indeed much less.) Here's one way to visualize $F$ : There is a computable tree $\mathcal{T}\subseteq\mathbb{N}^{<\mathbb{N}}$ with a unique path $p$ which codes the set of true arithmetic sentences. Essentially, a node of height $k$ on $\mathcal{T}$ consists of a truth assignment to the first $k$ -many sentences in the language of arithmetic and additional "partial Skolemization data" which so far looks consistent (the details are a bit tedious). Given a sentence $\varphi$ , let $\mathcal{T}_\varphi$ be the subtree of $\mathcal{T}$ consisting of all nodes on $\mathcal{T}$ which (when "read" in the appropriate way) do not declare $\varphi$ to be true; this is a computable subtree of $\mathcal{T}$ , uniformly in $\varphi$ , and is well-founded iff $\varphi$ is true. We then set $F(\ulcorner\varphi\urcorner)$ to be the Kleene-Brouwer ordering of $\mathcal{T}_\varphi$ . Of course, this is all rather artificial. To be clear, the map $F$ itself is perfectly natural/interesting/important, but the result $F(\ulcorner\varphi\urcorner)$ is not particularly interesting to me. Contrast the construction above, where the connection between $\varphi$ and $F(\ulcorner\varphi\urcorner)$ is boringly tautological, with Gentzen's theorem that well-foundedness of (the usual notation for) $\epsilon_0$ implies $Con(PA)$ . Even if one doesn't buy this as making $Con(PA)$ more believable - and I don't - it's certainly a deep and interesting fact. The interesting version of the conjecture, to me, would be: "For every sentence of arithmetic $\varphi$ there is a computable linear order $\alpha$ such that $(i)$ $WF(\alpha)\leftrightarrow\varphi$ and $(ii)$ knowing this somehow sheds light on $\varphi$ (unless $\varphi$ was already so simple as to be boring)." And nothing like what I've described can possibly do that, obviously.
{ "source": [ "https://mathoverflow.net/questions/432470", "https://mathoverflow.net", "https://mathoverflow.net/users/290/" ] }
432,931
I don't know whether my question is in the appropriate place. I studied physics, and then did a PhD in (pure) math and 2 postdocs. I definitely love math research, but I am not ready to apply all over the world hoping to find a position somewhere, sometime. Therefore I am looking for a job. I don't have any interest in anything from society. I only love math for its beauty. I am wondering what happened to the world. All the jobs I find with a "math diploma" requirement seem to be in data science or finance. I hate this stuff and don't see the relation with math, at least the math that I like. I cannot see any beauty in data science and, worse, in finance. Does anyone have an idea of not-so-sad job openings? Is it our fate to change our career paths to finance if we had a pure math-physics academic background? Sorry for these desperate questions, but I feel so lost and sad….
I am sorry that the OP feels "desperate and sad." I agree with the comments suggesting that happiness in life is very different from achieving some specific career. I also think a lot has to do with mindset. That said, there are zillions of jobs for mathematicians (far from data science and finance being the only options) and many of them involve working with beautiful mathematical concepts. Here are some examples, in no particular order: Use math to identify cases of gerrymandering and help create maps that are fair. This involves graph theory, geometry, metric spaces, and more. It's very cool and super relevant. Become a senior scientist or research mathematician at a tech company, like the sort that hired Jennifer Chayes, Laszlo Lovasz, Katalin Vesztergombi, etc. There is plenty of beautiful work to do in graph theory. Social network analysis is a lovely blend of mathematics and sociology. I saw a great talk by Strogatz on this topic once. I imagine companies like Meta might have teams of mathematicians studying social network graphs. Topological data analysis (TDA) is beautiful to a lot of people, and involves mathematical concepts such as graphs, metric spaces, Betti numbers, and a whole lot more. There are government and industry research groups based on TDA, and it's a growing area. Lots of jobs. Work for a government intelligence service. Plenty of connections to graph theory, number theory, etc. If you like your government and believe its mission is protecting people, then this kind of work can be immensely rewarding. Work for a government contractor, like the IDA in the USA. I know people in jobs like that who spend most of their time thinking about elliptic curves, group laws, error correcting codes, etc. Be an actuary. If you like probability theory and probability models, there are really fun topics that come up in this setting. I push back against the idea that there is no beauty in data science. Many data mining algorithms involve beautiful mathematics, like principal component analysis (eigenvectors, change of basis), singular value decomposition and separating hyperplanes, graph clustering algorithms, etc. Many companies have realized that if they want to get their modeling right, it's beneficial to have a trained mathematician onboard rather than only people who know how to run commands and have no idea why the algorithm works. I know data scientists who spend their time tweaking these algorithms to work in new settings, which means they are constantly playing with these beautiful concepts. Additionally, there is tremendous satisfaction in feeling like you created something that has the ability to really help a large number of people in their lives, e.g., statistical models to inform government policy and help lift people out of poverty, match people to jobs they will enjoy, help people who use drugs to get out of a state of addiction, etc. I know a lot of people who think Fourier analysis is beautiful and there's a whole branch of data science (spectral theory, time series models) where you get to play with this every day. Same for working for companies like Sound Hound or Shazam, and probably many others that I haven't listed (Zoom? Skype? How do they denoise? Some beautiful math must be in the background.) I concur with comments who said secondary school teaching can be a very fulfilling job, and one full of opportunities to enjoy (and share) the beauty of math. That's especially true if you work with the IMO team, programs for gifted high school students, etc. 
Such students can even do cool research and there have been lots of MO questions about that topic. I believe certain types of engineering use fairly sophisticated tools from analysis. Sadly, I'm not an expert in this. Text analysis, e.g., using and developing algorithms for determining authorship, extracting summaries, etc. Imagine developing an algorithm that can use Twitter data to figure out when an emergency is happening and then dynamically allocate government resources to help. Mathematical art, both creating it and using math to connect people with art in new ways (e.g., Google Deep Dream) Using math to create improved epidemiological models, e.g., while working for a hospital system, government, etc. Others have compiled better lists than this, e.g., the AMS has a list including the following and also a list of other lists. Climate study Animated films Astronomy and space exploration I guess the message I want to impart to the OP is that there's a lot to be excited about and a lot to look forward to. Now that you're a trained mathematician, you can go in many directions. For almost any passion, there is a way to connect it to mathematics and to bring the beauty of math into that world. Go explore and play!
{ "source": [ "https://mathoverflow.net/questions/432931", "https://mathoverflow.net", "https://mathoverflow.net/users/192560/" ] }
433,062
Does there exist a group $G$ such that: (1) for any finite group $K$ there is a monomorphism $K \to G$; (2) for any $H$ with property (1) there is a monomorphism $G \to H$? If yes, is it the only one?
No. To show that it doesn't exist it is enough to produce two groups $G,H$ which contain isomorphic copies of all finite groups, but such that no group containing isomorphic copies of all finite groups embeds into both $G$ and $H$ . Let $(G_n)$ be an enumeration of all finite groups. Let $G=\bigoplus G_n$ be the restricted direct sum and $H={\Large\ast}_nG_n$ the free product. If $K$ is a subgroup of $G$ then $K$ is locally finite, hence freely indecomposable. Hence if $K$ is also isomorphic to a subgroup of $H$ , then by Kurosh's subgroup theorem, $K$ is finite. In particular, $K$ doesn't contain isomorphic copies of all finite groups.
{ "source": [ "https://mathoverflow.net/questions/433062", "https://mathoverflow.net", "https://mathoverflow.net/users/148161/" ] }
433,226
Are the categories of sets, abelian groups, and commutative rings unique? Independence results like the independence of the generalized continuum hypothesis, the Whitehead problem, and the global dimension of $\prod_{n = 1}^\infty \mathbb{F}_2$ from ZFC seem to indicate no. And yet we don't say "Let $\mathbf{Set}$ be a category of sets" instead of "Let $\mathbf{Set}$ be the category of sets," etc. If these categories are not unique, then could they be if we wanted them to? And shouldn't they be unique, if the terms "set," "function," "abelian group," "commutative ring," etc., are to be well-defined? EDIT: Questions have arisen as to what I mean by "unique". Unique as in one and only one? Unique as in unique up to equivalence of categories? Unique as in unique up to isomorphism of categories? Unique as in unique up to unique isomorphism of categories? Honestly, any of those four senses of "unique" is fine with me. Just pick one and answer that particular question. Does it make sense to consider $\mathbf{Set}$ as a category that is unique in some sense described above? EDIT (SUMMARY): Answers to this question seem to have been of one of the following flavors. (1) Monism allows $\mathbf{Set}$ to be unique, but pluralism does not. Some mathematicians are monists, some are pluralists, while others think that both monism and pluralism are respectable philosophies of mathematics. Call these views "monism," "pluralism," and "monist-pluralist dualism." (2) Just as $\mathbb{Z}$ is unique up to iso in the category of rings, $\mathbf{Set}$ and $\mathbf{Ring}$ are unique up to category equivalence in the meta-category of categories, but there is no sense of uniqueness in an absolute sense. $\mathbb{Z}$ and $\mathbf{Set}$ and $\mathbf{Ring}$ are not unique in an absolute sense. If this is the case, then I'd argue that mathematical truth, too, is not absolute, but rather relative to a theory. Call this view "relativism." This view is new to me. I am not a relativist, at least as yet. (3) $\mathbb{Z}$ is unique up to iso, $\mathbf{Set}$ and $\mathbf{Ring}$ are unique up to category equivalence, etc. Mathematical truth is absolute and does not mean truth within a theory, and uniqueness is also absolute. Call this view "absolutism." (4) Conventionally, mathematicians today are operating under the assumptions of ZFC, and ZFC doesn't answer the question in the positive or in the negative, and so the question doesn't have a mathematical answer. Call this view "conventionism". The answer one gives to the question apparently depends on what philosophical views one holds. But the same is true about questions like, "Is the axiom of choice true," "Does every nontrivial commutative ring have a maximal ideal," "Does every vector space have a basis?" A relativist would say, yes, if one is working under the assumptions of ZFC and the ordinary rules of mathematical proof. An absolutist would just say yes (if they believed that the axioms and theorems of ZFC were true). Others might just say yes, under the tacit assumption that most mathematicians today are operating under the assumptions of ZFC, while remaining agnostic about the status of mathematical truth. Absolutists might see the question I posted as a valid mathematical question. A problem with absolutism is that many important questions, like GCH and the Whitehead problem, have not been settled, at least as yet. A problem with relativism is, why are "if-then" statements true? What makes Boolean first-order logic absolute but not the rest of mathematics?
Why are any mathematical proofs valid at all? Why not assume some non-Boolean constructive or intuitionistic logic? If my question is not a purely mathematical one and is partly "philosophical" and "open to interpretation", then isn't every mathematical question thus? These aren't additional questions I'm asking for discussion here. I'm just trying to explain why I thought my question was a valid mathematical question and not a purely philosophical one. Thank you to those who tried to answer it, as your answers have much clarified my thinking about the problem. I've accepted Hamkins' answer because it was least biased, but I welcome other answers to the question and can always change what answer I accept. I felt pressured to accept an answer because of a vote to close the question as inappropriate for MO.
Introduction to pluralism: A version of this question lies at the heart of the ongoing dispute on pluralism in the philosophy of mathematics. Is there at bottom just one mathematical reality? Does every mathematical question, whether about arithmetic, about the real continuum, or about set theory, have a definite mathematical answer? Most mathematicians (but not all) take the view that arithmetic assertions, for example, assertions such as the Riemann hypothesis or the question whether there are infinitely many prime pairs, have a definite answer. Either there are infinitely many prime pairs or there are not, full stop. According to this view, arithmetic questions have a determinate answer, whether we shall ever come to know it. We know by Gödel's theorem, of course, that no effective axiomatic system (whether formalized in set theory, category theory, HoTT, or what have you) will be able to establish the truth of all the true arithmetic assertions. But this observation can be taken to be merely about the weakness of our formal theories, rather than necessarily about any kind of pluralism in the arithmetic facts of the matter. One can admit that any given theory is weak, even if one holds that true arithmetic is meaningful. Peano arithmetic PA is incomplete, but ZFC proves more arithmetic truths, and ZFC + large cardinals settles still more. Characterizing structures in second-order logic: Support for this determinate view of arithmetic truth is often taken from the categoricity results. Dedekind proved, namely, that the natural number structure $\langle\mathbb{N},S,0\rangle$ is uniquely determined up to isomorphism by the three Dedekind axioms, that $0$ is not a successor, that the successor function is one-to-one, and that every number is generated from $0$ by successor, in the sense that every set of numbers containing $0$ and closed under successor contains all the numbers. Once one knows that the axioms determine the structure, then one knows that those axioms determine all arithmetic truth. I have argued that Dedekind's categoricity result is the beginning of structuralism in mathematics. Other mathematicians observe that we have such categorical accounts of essentially all our familiar mathematical structures. The integer ring is uniquely determined up to isomorphism from the natural numbers, and the rational field. The real field is the unique complete ordered field. The complex numbers are the unique algebraic closure of the real field, or alternatively, they are the unique algebraically closed field of size continuum. Thus, all our familiar mathematical structures are characterized uniquely in second-order logic. This can be taken as support for the view that mathematical truth generally is determinate in nature. The structures are determined by the (second-order) axioms, and thus the truths are determined. Characterizing structures in first-order set theory: All these familiar categoricity arguments can be viewed as taking place in first-order set theory, provable as theorems of ZFC. In a sense, the first-order theory of sets provides a natural interpretation of the second-order logic of any particular structure, by providing the sets that will constitute the second-order part. So ZFC proves that there is a unique structure of arithmetic up to isomorphism, a unique real-closed field with a unique algebraic closure. Similarly, ZFC proves that the category of groups and the category of sets and so on is unique up to suitable isomorphism.
The philosophical difficulty here is that, meanwhile, ZFC is a first-order theory and thus subject to the incompleteness phenomenon. We know, for example, that if ZFC is consistent, then there can be models $M_0$ and $M_1$ of ZFC whose natural number structures $\mathbb{N}^{M_0}$ and $\mathbb{N}^{M_1}$ are not isomorphic to each other, even though each of them is thought to be the unique natural number structure in those respective set-theoretic universes. The philosophical difficulty is that, although ZFC proves that arithmetic truth is definite, nevertheless different models of ZFC can think that it is different arithmetic truths that come to be part of the definitely true arithmetic theory. Characterizing structures in second-order set theory: In light of this, some logicians and philosophers of mathematics seek to apply the second-order categoricity results to set theory itself. Indeed, Zermelo proved that the models of second-order $\text{ZFC}_2$ enjoy a quasi-categoricity result—they all agree on their common initial segments. Specifically, Zermelo proved that the models of second-order $\text{ZFC}_2$ are exactly the models $V_\kappa$ for an inaccessible cardinal $\kappa$, or in other words, exactly the Zermelo-Grothendieck universes. These set-theoretic worlds are linearly ordered and all agree on the assertions expressible in their common parts. Kreisel famously pointed out that indeed essentially all questions of classical mathematics are expressible as sentences that are absolute to low-level ranks of the cumulative hierarchy, which have the same truth value in all these Zermelo-Grothendieck universes. Thus, according to Kreisel, the continuum hypothesis has a determinate truth value in second-order set theory. It is either definitely true or definitely false, as a matter of (second-order) logic. The universe view of sets: This is the beginning of the universe view, which holds that there is a unique set-theoretic reality underlying mathematics, and all statements have a definite truth value in this unique set-theoretic realm. This is the set-theoretic universe arising from the cumulative set-building process, where one iteratively computes the $V_\alpha$ hierarchy by adding all subsets at each stage and iterating through the ordinals. On this view, the answer to your question is Yes, there is a unique category of all groups, and it is the category of groups as defined in this final true set-theoretic universe $V$, and similarly with the category of rings and what have you. Critics point out that second-order logic is simply a species of set theory. How can we establish the definiteness of our concept of finiteness by appealing to the comparatively murky concept of arbitrary set required in the second-order induction axiom? It seems hopeless to ground our concept of the finite this way. See my essay, A question for the mathematics oracle. The multiverse view of sets: Set-theoretic pluralism offers an alternative perspective. According to the multiverse view, there are many concepts of set, each giving rise to a different set-theoretic realm. The continuum hypothesis might hold in some and not in others, and this situation is itself a kind of answer to the CH question—it holds and fails throughout the set-theoretic multiverse in a way that is quite deeply understood.
In regard to your question, these different set-theoretic universes each have their own (unique in that universe) categories of groups and categories of rings and so on, and these categories are not always isomorphic to each other across the universes. On the pluralist view, the answer to your question is negative. Indeed, the question of whether they are isomorphic or not presumes a certain degree of set theory in the metatheory where those universes exist, and in this way one is led to the idea that there is a hierarchy of metatheoretic contexts. Indeed, every model of set theory provides a meta-theoretic context for the theories and models and categories which exist within that model. In this way, the traditional object-theory/meta-theory distinction is seen to break down as naive or crude, for we actually have a rich hierarchy of theories, each serving also as a metatheory. More extreme and more moderate alternatives: Strong forms of the pluralist view extend to pluralism even in arithmetic as well as higher set theory. Many mathematicians prefer a kind of compromise position, taking arithmetic truth as definite, but allowing indeterminacy in higher set-theoretic truths. The universists hold that the set-theoretic universe is determinate all the way up, and the large cardinal hierarchy is pointing the way toward the one road upward. So there are philosophical positions taken on all sides of this issue. I have written at length on these topics in various venues, but perhaps you might look at: My paper: Hamkins, Joel David, The set-theoretic multiverse, Rev. Symb. Log. 5, No. 3, 416-449 (2012). ZBL1260.03103. Chapter 8 of my book: Hamkins, Joel David, Lectures on the philosophy of mathematics, MIT Press, 2021.
{ "source": [ "https://mathoverflow.net/questions/433226", "https://mathoverflow.net", "https://mathoverflow.net/users/17218/" ] }
433,278
It seems like the article "The Twin Primes Conjecture is True in the Standard Model of Peano Arithmetic: Applications of Rasiowa–Sikorski Lemma in Arithmetic (I)" by Janusz Czelakowski published in Studia Logica yesterday, claims to have proven that the twin prime conjecture holds in the standard model of Peano arithmetic using the technique of forcing . This seems like a very significant achievement (if the claim is not erroneous) but I am by no means an expert in logic or number theory, and therefore I'm not qualified enough to understand and evaluate the contents of this paper. So I would appreciate others' inputs on whether this claim has merit.
The error in the paper is in the proof of Theorem 7.2. The proof of Theorem 7.2 is immediately suspicious because of how vague it is in places and because of how lofty the expository text before and after it is. In the proof, the author claims that because we can identify the set of variables $(v_i)_{i \in \mathbb{N}}$ with the natural numbers, the induction scheme $(\beta)$ If $\mathbf{P} \Vdash \varphi(0)$ and for every variable $v_i$ , $\mathbf{P} \Vdash \varphi(v_i)\Rightarrow \mathbf{P} \Vdash \varphi(S(v_i))$ , then for every variable $v_i$ , $\mathbf{P} \Vdash \varphi(v_i)$ . is just an instance of ordinary induction in $\text{ZFC}$ , but this is ridiculous. $0$ and $S(v_i)$ are not variables; they're terms. Furthermore, even if the assumptions of Theorem 7.2 were enough to ensure that for every $i$ , $\textbf{P}\Vdash S(v_i) \approx v_j$ for some $j$ (and they're not), that would in no way ensure that $\mathbf{P}\Vdash S(v_i) \approx v_{i+1}$ . That said, we need to step through a fair amount of the paper if we want to show conclusively that Theorem 7.2 is wrong. After all, maybe the assumptions of the theorem are inconsistent or otherwise overly strong and the error is really elsewhere in the paper. What makes this really tedious though is the forcing machinery, which only manages to make the proof more confusing and technical. It's pretty clear given the forcing posets being used (discrete posets and singleton posets) that the forcing can't really be doing anything that couldn't be described more simply in some other way. We have the Lindenbaum-Tarski algebra of $\text{PA}$ , written $\mathbf{B}_{\text{PA}}(L)$ , which is the Boolean algebra of formulas in some fixed countable collection of variables modulo logical equivalence over $\text{PA}$ . We write $[\varphi]_{\text{PA}}$ for the set of formulas that are logically equivalent to $\varphi$ over $\text{PA}$ . The forcing posets $\mathbf{P}=(P,\subseteq)$ considered are certain families $P$ of non-empty subsets of $\mathbf{B}_{\text{PA}}(L)$ (page 14). But the poset considered in Theorem 7.2 is a singleton, which means that all of the forcing machinery isn't really doing anything in Theorem 7.2. Nevertheless, let's go through some of the paper and track what the assumption that $P = \{p\}$ means (where $p$ is some non-empty subset of $\mathbf{B}_{\text{PA}}(L)$ ). On page 15 we get to the definition of a condition $p$ forcing an atomic formula $\sigma$ . Again, since $P = \{p\}$ , what this definition collapses to is just $p \Vdash \sigma$ if and only if $[\sigma]_{\text{PA}} \in p$ . We then extend this to arbitrary formulas in the standard way, but again everything collapses: $p \Vdash \neg \varphi$ if and only if $p \Vdash \varphi$ fails. $p \Vdash \varphi \wedge \psi$ if and only if $p\Vdash \varphi$ and $p\Vdash \psi$ . $p \Vdash \exists x \varphi$ if and only if there is a variable $y$ such that $p \Vdash \varphi(x//y)$ (where $\varphi(x//y)$ is $\varphi$ with instances of $x$ substituted by $y$ and existing instances of $y$ in $\varphi$ changed to some fresh variable to avoid binding). $p \Vdash \varphi \vee \psi$ if and only if $p\Vdash \varphi$ or $p \Vdash \psi$ . $p \Vdash \varphi \to \psi$ if and only if $p\Vdash\neg \varphi$ or $p\Vdash \psi$ . $p\Vdash (\forall x)\varphi$ if and only if $p\Vdash\varphi(x//y)$ for all variables $y$ . (The author mentions this simplification at the end of page 15 and the beginning of page 16.) 
Finally, we write $\mathbf{P}\Vdash \varphi$ to mean that $p \Vdash \varphi$ for all $p \in P$, which in our case is just equivalent to $p \Vdash \varphi$. We say that $\mathbf{P}$ is compatible with equality axioms if $[x \approx x]_{\text{PA}} \in p$ for some variable $x$, whenever $[x \approx y]_{\text{PA}}$ and $[R(...,x,...)]_{\text{PA}}$ are in $p$, then $[R(...,y,...)]_{\text{PA}}$ is in $p$ for any relation symbol $R$, and if $[x \approx y]_{\text{PA}} \in p$, then $[F(...,x,...)\approx F(...,y,...)]_{\text{PA}} \in p$ for any function symbol $F$. This is essentially just what you need to ensure that $p$ forces the standard axioms of equality. (Transitivity and symmetry follow from special cases of the second condition.) We say that $\mathbf{P}$ is standard if $\mathbf{P}$ is compatible with equality axioms and, for any atomic formula $\sigma$, if $\text{PA}\vdash \neg \sigma$, then $[\sigma]_{\text{PA}} \notin p$. (Remember, we're assuming $P = \{p\}$.) This is all we need to understand the statement of Theorem 7.2, which claims that if $\mathbf{P} = (\{p\},\subseteq)$ is standard, then $\mathbf{P}\Vdash \mathrm{Ind}(x;\varphi)$ for every formula $\varphi$ (where $\mathrm{Ind}(x;\varphi)$ is $\forall \bar{z}[\varphi(0,\bar{z}) \wedge \forall x(\varphi(x,\bar{z})\to\varphi(S(x),\bar{z})) \to \forall x\varphi(x,\bar{z})]$, which is induction for the formula $\varphi(x,\bar{z})$). There is a typo in the statement of Theorem 7.2, but it's clear from the proof that the statement is meant to be $\mathbf{P} \Vdash \mathrm{Ind}(x;\varphi)$, not $\mathbf{P} \Vdash \mathrm{Ind}(x;\sigma)$. The argument (suppressing the other free variables) proceeds by showing that $\mathbf{P} \Vdash \mathrm{Ind}(x;\varphi)$ if and only if the following holds: $(\beta)$ If $\mathbf{P} \Vdash \varphi(0)$ and for every variable $y$, $\mathbf{P} \Vdash \varphi(y)\Rightarrow \mathbf{P} \Vdash \varphi(S(y))$, then for every variable $y$, $\mathbf{P} \Vdash \varphi(y)$. As discussed above, $(\beta)$ does not work. Let's see a concrete example of Theorem 7.2 failing. Fix an enumeration $(v_i)_{i \in \mathbb{N}}$ of our variable symbols. From now on we'll write $\varphi(y)$ for $\varphi(x//y)$, where $x$ is established by context to be the relevant free variable of $\varphi$. Let $p$ be $$\{[\sigma]_{\text{PA}}: \text{PA} \cup \{v_0 \approx 0\}\vdash \sigma,~\sigma~\text{atomic}\}.$$ It is easy to check that $\mathbf{P} = (\{p\},\subseteq)$ is standard. Consider the formula $$\varphi(v_1) = \exists v_2(v_2 + v_2 \approx v_1 \vee S(v_2 + v_2) \approx v_1),$$ i.e., "$v_1$ is either even or odd." First, let's see that $\mathbf{P} \Vdash \varphi(0)$ (i.e., $p\Vdash \varphi(0)$). We have that $[v_0+v_0\approx 0]_{\text{PA}} \in p$, so $p \Vdash v_0+v_0 \approx 0$. Therefore $p \Vdash v_0+v_0\approx 0 \vee S(v_0+v_0) \approx 0$ and $p \Vdash \exists v_2( v_2+v_2\approx 0 \vee S(v_2+v_2) \approx 0)$. Now fix a variable $v_i$. There are two cases. Either $i = 0$ or $i \neq 0$. If $i = 0$, then we have that $[S(v_0+v_0) \approx S(v_0)]_{\text{PA}} \in p$, so $p \Vdash S(v_0+v_0)\approx S(v_0)$ and $p \Vdash v_0 + v_0 \approx S(v_0) \vee S(v_0+v_0) \approx S(v_0)$. Therefore $p \Vdash \exists v_2(v_2+v_2 \approx S(v_0) \vee S(v_2+v_2) \approx S(v_0))$, i.e., $p \Vdash \varphi(S(v_0))$. If $i \neq 0$, then I claim that $p \not \Vdash \varphi(v_i)$ (i.e., $p \not \Vdash \exists v_2(v_2 + v_2 \approx v_i \vee S(v_2+v_2) \approx v_i)$). Fix a variable $v_j$.
Since $i \neq 0$ , we have that $v_j+v_j \not \approx v_i\wedge S(v_j+v_j)\not\approx v_i$ is consistent with $\text{PA} \cup \{v_0 \approx 0\}$ (even if $j = 0$ ), so $[v_j+v_j \approx v_i]_{\text{PA}} \notin p$ and $[S(v_j+v_j)\approx v_i]_{\text{PA}} \notin p$ . Since we can do this for any $j$ , we have that $p \not \Vdash \varphi(v_i)$ . So in any case, we have that if $p\Vdash \varphi(v_i)$ , then $p\Vdash \varphi(S(v_i))$ , but as we just established, $p \not \Vdash \varphi(v_1)$ , contradicting Theorem 7.2. Incidentally, this also shows that the assumptions of Theorem 7.2 are not enough to ensure that for every $i$ , $\mathbf{P} \Vdash S(v_i)\approx v_j$ for some $j$ .
{ "source": [ "https://mathoverflow.net/questions/433278", "https://mathoverflow.net", "https://mathoverflow.net/users/156061/" ] }
433,292
The Riemann hypothesis for finite fields can be stated as follows: take a smooth projective variety X of finite type over the finite field $\mathbb{F}_q$ for some $q=p^n$ . Then the eigenvalues $\alpha_j$ of the action of the Frobenius automorphism on the $i$ th $\ell$ -adic étale cohomology (are algebraic numbers and) have norm $q^{i/2}$ . This is part of the general philosophy that led to the proof of the Weil conjectures. What I don't understand, and pardon me for saying this, is why we care about these eigenvalues. As a homotopy theorist, I've found the other three conjectures (rationality, Betti numbers, and the functional equation) to be helpful in understanding zeta functions from a geometric perspective. Together, they describe the relationship between the combinatorics and the cohomology of a variety. The Riemann hypothesis, however, doesn't seem to admit a direct interpretation of this kind; it's unclear what role the $\alpha$ s play in the analogy. Do they have an interpretation as the arithmetic version of a classical geometric/topological object, like how the degrees are Betti numbers? If not, how should I understand them?
The Riemann hypothesis is very important for the relationship between the cohomology and combinatorics of the variety. First, the Riemann hypothesis lets us read off the Betti numbers from the point counts over finite fields, i.e. the $i$'th Betti number is the number of zeroes/poles of $$e^{ \sum_j \# X(\mathbb F_{q^j}) u^j / j }$$ of absolute value $q^{-i/2}$. Without the Riemann hypothesis, and with just the other Weil conjectures, it's not possible to calculate the Betti numbers in this way, because you can't distinguish which zeroes or poles are coming from which $P_i$s or, worse, rule out the case that zeroes and poles will cancel. Without the Riemann hypothesis, one can only calculate the Euler characteristic. Second, the Riemann hypothesis lets us get information about point counts over finite fields from the Betti numbers. The simplest of these is the upper bound $$|X(\mathbb F_q)| \leq \sum_{i=0}^{2n} \dim H^i(X) q^{i/2}.$$ Without the Riemann hypothesis, only much weaker results of this form could be proven (maybe one could replace $q^{i/2}$ with $q^{\max(i,n)}$ or something like that). Without even a crude bound, even knowing exactly the Betti numbers won't typically rule out any particular value for the number of points over a given field. I would even say this is much more direct than the relationship between geometry and combinatorics obtained from the remaining Weil conjectures. In terms of an analogue in classical geometry / topology, the obvious thing would be the eigenvalues of the action of a map on the cohomology! Of course, one usually doesn't have an a priori exact formula for the absolute value of the eigenvalues, but if one did, it would certainly be useful for understanding the fixed points of the map. So the Riemann hypothesis is a new phenomenon that doesn't have an analogue in topology (except for Serre's analogue of the Weil conjectures for Kähler manifolds), but the eigenvalues of operators acting on cohomology were a pre-existing notion. Lefschetz certainly wasn't thinking about the Frobenius when he proved his original fixed point formula! Maybe one should mention also that the eigenvalues of the mapping class of a surface acting on its cohomology give you information on where that mapping class sits in the Nielsen-Thurston classification. There is one aspect to classical analogues that I think deserves mentioning because it's of great importance: The Riemann hypothesis in the Weil conjectures tells us that calculating the high-degree (compactly-supported) or low-degree (if the variety is smooth) cohomology groups of a variety in topology is analogous to obtaining an approximate estimate for the number of points in arithmetic. This is the starting point for deep connections between stable homology and other topological methods for calculating the low-degree cohomology groups without necessarily calculating every cohomology group, and analytic number theory or other fields where quantities are calculated approximately! So RH is not an analogue of anything classical in topology but it tells us what the analogues of some classical statements in topology are.
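[Editorial addition, not part of the original answer.] A small worked example of the first point: for $X = \mathbb{P}^1$ over $\mathbb{F}_q$ we have $\# X(\mathbb{F}_{q^j}) = q^j + 1$, so $$ \exp\Big( \sum_{j\ge 1} \# X(\mathbb{F}_{q^j})\, \frac{u^j}{j} \Big) = \exp\big( -\log(1-u) - \log(1-qu) \big) = \frac{1}{(1-u)(1-qu)}. $$ There is one pole of absolute value $1 = q^{-0/2}$, one pole of absolute value $q^{-1} = q^{-2/2}$, and no zeroes, so the recipe above returns the Betti numbers $b_0 = b_2 = 1$ and $b_1 = 0$, which are indeed the Betti numbers of $\mathbb{CP}^1 \cong S^2$.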
{ "source": [ "https://mathoverflow.net/questions/433292", "https://mathoverflow.net", "https://mathoverflow.net/users/158123/" ] }
433,554
I've often seen Lurie's Higher Topos Theory praised as the next "great" mathematical book. As someone who isn't particularly up-to-date on the state of modern homotopy theory, the book seems like a lot of abstract nonsense and the initial developments unmotivated. I'm interested in what the tools developed concretely allow us to do. What does HTT let us do that we previously were unable to? Please note that I'm looking for concrete examples or theorems that can be expressed in terms of math that one doesn't need higher topos theory to understand. I'd also be interested in ways that the book has changed pre-existing perspectives on homotopy theory.
It seems there are really two questions here: (1) Why higher category theory? What questions can you pose without the language of higher category theory which are best answered using higher category theory? (2) Why does Lurie's work specifically set the standard for the foundations of higher category theory? These are really distinct questions. I'll leave it to others to address (1), and focus on (2). For this, I will refer back to an old answer of mine for a summary of some of the contents of HTT and HA. There, I said: In Higher Topos Theory, Lurie accomplishes many things. Let me highlight a few: A study of the Joyal model structure and comparison to the Bergner model structure. A study of cartesian fibrations and straightening / unstraightening, the $\infty$-categorical analog of the Grothendieck construction. This is often viewed as the technical heart of Lurie's theory, since cartesian fibrations are used systematically to avoid writing down all the higher coherence data involved in $qCat$-valued functors. A development of the fundamental notions of category theory -- (co)limits, Kan extensions, cofinality, etc, allowing one to "do category theory" in the $\infty$-categorical setting. A development of the theory of presentable $\infty$-categories. The point here is to get access to (the most important instances of) Freyd's adjoint functor theorem in the $\infty$-categorical setting, and in particular the theory of localizations. The theory of (Grothendieck) $\infty$-toposes. In the context of foundations, maybe it's worth also mentioning some of the contents of Higher Algebra: The Barr-Beck monadicity theorem. I tend to think of this, along with the adjoint functor theorem, as "the only real theorems" of basic ordinary category theory. A theory of operads, allowing one to "do algebra" $\infty$-categorically. The theory of stable $\infty$-categories, playing roughly the roles of abelian categories and triangulated categories in the $\infty$-categorical setting. So the reason that HTT and HA set the standard for the foundations of category theory is pretty self-evident. Nowhere else can one find such a comprehensive treatment! The 2500 pages in these books are there for a reason. Some pieces of HTT/HA were previously available in various sources, but some were not, and moreover HTT/HA synthesize them into a coherent account. So you don't have to spend as much time as you otherwise would have to patching together results proven in slightly different frameworks using model-comparison results. This is particularly striking from a historical perspective: in the days when HTT first appeared (almost 10 years ago now -- to call it "the next great mathematical book" is already a little behind the times I think: it's a current great mathematical book!), all of this was a dream. Lurie made it a reality. I'll add on a personal note that my own most common mode of doing higher category theory is to pretend that everything is an ordinary category and freely use all the tools available there, until I've worked out a complete argument. After that I go through the process of looking up $\infty$-categorical analogs of each of the 1-categorical tools I've used in my argument. This works better than one might expect, because it's reasonable today to trust that most of these tools will indeed be available in the literature. That's thanks in large part to Lurie's work.
Before Lurie, you could do something like this, but only if you were content to end up with incomplete arguments contingent on the dream of higher category theory working out. Today, I'd argue that higher category theory, among various mathematical disciplines, actually has a relatively high standard of rigor. That's thanks in part to Lurie's work setting the standard. Let me close by sharing a sort of testimonial from Clark Barwick ( originally from the homotopy theory chat room here on MO, in the context of another MO question ). Thanks to user1092847 for digging this up in the comments below! Clark Barwick on Lurie's impact with HTT: ... I feel a need to defend Jacob Lurie's writing. Let me take a rather selfish perspective, because I grew up alongside higher categories in some sense. I read preprints and papers of Rezk, Hirschowitz-Simpson, Simpson, Tamsamani, Toen, Joyal, Jacob's HTT-prototype on the arXiv, and others as a grad student (2001-05). All of these works had the same feature: they were all organised around a specific goal, leaving the more serious work of a complete theory for a later time. There were all sorts of homotopy coherence issues that were left hanging. So I developed my own point of view about these things and started writing a manuscript, the first little bit of which was my thesis. By the time I was halfway through my first postdoc, I'd written a pile of 'prenotes' that did enough foundational work to ensure, e.g., that there was no confusion over 'how unique' an adjoint between ∞-categories is, how to prove the existence of all colimits in an ∞-category from, say, geometric realisations and coproducts, a theory of what we now call ∞-operads, etc., etc. It all involved layer upon layer of giant combinatorial gadgets, and they were often fragile enough that I wasn't sure I had them layered correctly. At around that time I met Jacob at a conference, and he mentioned that he'd revised the text he put on the arXiv to add a little more detail. I said that I'd love to see it. He sent me a PDF of 600 pages or so of HTT. To my surprise and horror, he'd done everything I'd done, but more of it and far far better. He'd understood issues like cofinality in a way I didn't have access to with the models I was using. In his text, the proofs worked because of some very compact, very robust models he chose early on, following Joyal. Those models required him to do a lot of pretty tedious technical labour in the first few sections, but it ensured that if something existed up to homotopy, it 'really' existed. (This always came down to selecting a section of a trivial fibration.) This meant that it was genuinely easy to understand the arguments. When you look at a proof in HTT or HA or SAG, it's all there. He doesn't tell you that you 'can' find the argument – he gives you the argument! That's the real advantage of Jacob's arguments (and Joyal's before him) – they're completely convincing. You can actually check (and in rare cases, yes, correct) his proofs, because every individual object is so concrete. (Cf. claims about $A_{\infty}$ -categories like the Fukaya category.) After a few sleepless nights, I just gave up on what I was trying to develop. I was not going to try to compete. On the other hand, I didn't feel comfortable enough in the Joyal/Lurie perspective to really use their model, so I tried to do things in a model-independent way, as Rune suggests. 
But even simple things, like constructing a symmetric monoidal functor between two symmetric monoidal ∞-categories when there isn't one for formal reasons, is very difficult from that perspective: the only path I saw was to check an infinite hierarchy of coherences. It took me a long time to realise that the fibrational perspective was exactly designed to make it easy (or at least convincing) to write these things down. Jacob is actually providing you with the tools to perform explicit, nontrivial, non-formal constructions with higher categories in a precise, legible, and convincing way. That's what results like HTT.3.2.2.13 are all about. Jacob's done this continually: at every turn, he's done an incredible service to the community by carving out not just a narrow path to a desired application, but an expansive tunnel through which a lot of us can travel. He offers incredibly refined, interlinking technologies that are ideal for people like me, at least. I'm in a particularly good position to appreciate that kind of labour, because I attempted it and failed where he succeeded. Does he solve every problem or define every conceivable object? No, of course not. (And if it's tough to read now, what would it be like if he did? (However, I will point out that he does deal with general pro-objects in SAG.E.2.)) Is it possible to sharpen his results or use little techniques to get improvements on his results? Sure. But overall, I think that the precision, clarity, and thoroughness of Jacob's writing is something to which homotopy theory should aspire.
{ "source": [ "https://mathoverflow.net/questions/433554", "https://mathoverflow.net", "https://mathoverflow.net/users/99902/" ] }
433,686
Do editors for top math journals ever read a submitted paper, agree that there are no mistakes and the result is new, yet still reject it on the basis that this is a top math journal and someone could've done that before but chose not to? Maybe some arrogant mathematician goes "I could've proven that in a day or week but didn't because there's better stuff to do." I'm wondering because this seems to potentially fall into the category of results that are correct but not important enough. It appears the importance of a theorem depends not just on how many people care about it, how much it connects to other results, and how it can be applied, but also, as a byproduct, on how many people have tried to prove it and failed. This last point is where the previous paragraph is relevant. Note that I'm only counting attempts by mathematicians (let's say at least a degree in math or peer-reviewed research for starters) since some of the most famous conjectures receive tons of crackpot attempts after becoming famous, in which case cause and effect are reversed. In fact, most problems in the scope of this question would be slightly famous at best. If only a few people (or perhaps just 1) have tried and failed, does that discount whoever eventually succeeds? There are way more questions than there are people and hours around to answer them, so perhaps lots of people would like an answer (in the sense that we would like an answer to many questions but cannot attempt every question we're interested in) but only a few people are putting in the time. In the case where few people try because they believe it's too difficult, the paper probably will be accepted. However, if people think it's within their reach and don't try for other reasons, we may end up with a situation similar (but more respectful) to the one in the first paragraph.
As Sam Hopkins comments, the short answer to the stated question is "yes, all the time." You'd be hard-pressed to find a professional mathematician who hasn't received a referee report that basically boils down to your first paragraph. Often, the referee or editor can't find anything mathematically wrong with the result, but they reject the paper on the basis that it's not at the right level for the journal, meaning the result or techniques used are not interesting or novel enough, in their view. Essentially, this means they think the work could have been done by many people but wasn't really worth the effort. The rest of the OP seems to be asking about whether or not it matters if someone else has tried and failed. In fact, it does matter, and it makes the paper more likely to be published if someone else has tried and failed, rather than less likely as the OP suggests. Let me give you a concrete example. In 2017, my coauthor Donald Yau and I wrote the paper Arrow Categories of Monoidal Model Categories . This paper was published in 2019 in Math Scandinavica . In it, we proved a fact that I would not normally have thought would be worthy of a paper in its own right. However, because the statement had been left as an Open Question by a well-known mathematician in the field (Mark Hovey), we were able to frame the paper as "answering a question of Mark Hovey" and I think that probably helped it get published. For an example in the other direction, my co-author Michael Batanin and I wrote a paper, Left Bousfield localization without left properness , that I think is definitely worthy of a publication. It shows how to side-step a problem that has bedeviled mathematicians in the field for a long time, and has zillions of examples illustrating the power of the approach. However, because it was left as a remark (4.13) in a paper by Clark Barwick , it has been much harder to get this paper published. I got a rejection that essentially boiled down to "Clark Barwick knew how to prove this and didn't think it was worth writing down." It is worth noting that the paper in question was one of Clark's earliest, and he later wrote a great essay about The Future of Homotopy Theory where he lamented this kind of thing. He wrote: We do not have a good culture of problems and conjectures. The people at the top of our field do not, as a rule, issue problems or programs of conjectures that shape our subject for years to come. In fact, in many cases, they simply announce results with only an outline of proof – and never generate a complete proof. Then, when others work to develop proofs, they are not said to have solved a problem of So-and-So; rather, they have completed the write-up of So-and-So’s proof or given a new proof of So-and-So’s theorem. The ossification of a caste system – in which one group has the general ideas and vision while another toils to realize that vision(6) – is no way for the subject to flourish. Other subjects have high-status visionaries who are no sketchier in details than those in homotopy theory, but whose unproved insights are nevertheless known as conjectures, problems, and programs. He even includes a side-note (6) saying: only to have their paper rejected with lines like the following, from a colleague: "After So-and-So’s [sketchy] work, it was essentially obvious that such a result would be possible, given the right framework."
So, based on that, I have to conclude that if he had a time machine, Clark probably would have written his Remark 4.13 as a Conjecture and then I could have published my paper saying I "proved a conjecture of Clark Barwick." I confess that I'm guilty of very much the same kind of behavior. I put a paper on arxiv in 2014 announcing a result that wasn't on arxiv till 2017 and one researcher told me my remark discouraged him from working on the project. I regret that. Nowadays I try to put many more Questions, Conjectures, and Problems in my papers, e.g., this one that just got accepted for publication. So, to conclude, I call upon anyone who has read this far to include named/numbered Conjectures, Questions, and Problems, and at all costs avoid Remarks where you claim things are true but don't write out the proof. Let's make the field friendlier to young people and help them get their work published, while at the same time incentivizing them to build on our work by answering questions we explicitly leave. I wrote something before to this effect here.
{ "source": [ "https://mathoverflow.net/questions/433686", "https://mathoverflow.net", "https://mathoverflow.net/users/127521/" ] }
433,698
A Riemannian manifold $(M, g)$ is said to be an almost Ricci soliton if there exists a complete vector field $X \in \Gamma(TM)$ and a smooth function $\lambda: M \to \mathbb{R}$ such that $$\operatorname{Ric} + \frac{1}{2}\mathscr{L}_{X} g = \lambda g.$$ When this vector field is the gradient of a smooth function $f: M \to \mathbb{R}$, we say $M$ is a gradient almost Ricci soliton, and this equation becomes: $$\operatorname{Ric} + \operatorname{Hess}(f) = \lambda g.$$ Obviously, any Einstein manifold is a Ricci soliton and hence an almost Ricci soliton (gradient as well, trivially), so these are trivial examples. If $M$ satisfies $$\operatorname{div}({\operatorname{Rm}}) = 0,$$ we then say $M$ has harmonic curvature (notice this happens if and only if $M$ has harmonic Weyl curvature and constant scalar curvature). I think that part of some work I've been doing with some other people shows that any gradient almost Ricci soliton with harmonic curvature satisfies the property that for any $p \in M$, there is a neighborhood $U_p \ni p$ such that $U_p$ has constant sectional curvature (and is therefore necessarily Einstein) (EDIT ON NOVEMBER 27: this supposes the dimension is $\geq 4$. Also, I've come to realize since the initial writing of this post that the Einstein examples might not be exhaustive). As a sanity check, I'm looking for some explicit examples of nontrivial (i.e., not Einstein and with nonconstant $\lambda$) gradient almost Ricci solitons (preferably of dimension $\geq 5$) with harmonic curvature. Can anyone here provide some examples? I'd appreciate any help. Thanks in advance!
{ "source": [ "https://mathoverflow.net/questions/433698", "https://mathoverflow.net", "https://mathoverflow.net/users/119418/" ] }
433,949
Very recently, Yitang Zhang gave a (virtual) talk about his work on Landau-Siegel zeros at Shandong University on the morning of 5 November (China time). He will also give a talk on 8 November at Peking University. The 111-page preprint can now be found on the internet, and it seems this version will be published on arXiv soon. (UPDATE: now it's on arXiv .) This paper shows that for a real primitive character $\chi$ to the modulus $D$, $$ L(1, \chi) > c_{1}(\log D)^{-2022} $$ where $c_{1} > 0$ is an absolute, effectively computable constant. Assuming this result is correct, what are some significant number-theoretic consequences that would follow? For example, what would be the impact on PNT error estimates, arithmetic progressions, and other related problems?
There will be many important consequences of Zhang's result, if correct. One specific result is that it will reduce one of the last open problems from the era of Gauss and Euler to a finite amount of computation, namely the classification of discriminants of binary quadratic forms with one class per genus. The congruence class of a prime number $p$ modulo $d$ determines which form of discriminant $-d<0$ represents $p$ if and only if there is one class per genus. Such discriminants which are congruent to $0$ modulo $4$ are Euler's numeri idonei or idoneal numbers. Euler expected there would be infinitely many such discriminants. [ Edit : Apparently I'm mis-remembering this. See the remarks of KConrad below.] It was Gauss who conjectured that the only such discriminants are the 65 examples (not necessarily fundamental) known to Euler. There are also 65 known fundamental discriminants (not necessarily even) with one class per genus. The existence of a 66th is still an open problem. By genus theory we know that for discriminants with one class per genus, the class group satisfies $$ \mathcal C(-d)\cong \left(\mathbb Z/2\right)^{g-1}, $$ where $g$ is the number of prime divisors of $d$. Obviously $d$ is bigger than the absolute value of the smallest fundamental discriminant with $g$ prime divisors, $$ d_g\overset{\text{def.}}=3\cdot4\cdot5\cdot7\cdots p_g. $$ From lower bounds on the size of $p_g$, the $g$-th prime, and on $\theta(x)=\sum_{p\leq x}\log(p)$, one can show that $$ d_g>g^g. $$ Since $2^{g-1} \ll \sqrt{g^g}$, lower bounds for the class number which we expect to be true rule out the possibility of one class per genus for large $g$. In 1973, Peter Weinberger showed that on GRH, no fundamental discriminant $-d<-5460$ has one class per genus, and unconditionally there is at most one more such $d$. In contrast, Oesterle explicitly observed that the lower bound due to Goldfeld-Gross-Zagier is not strong enough to finish the classification of discriminants with one class per genus: $\log(g^g)$ is $\ll 2^{g-1}$. Iwaniec and Kowalski observed that even the full strength of the Birch Swinnerton-Dyer conjecture, "the best effective lower bounds which current technology allows us to hope for", would not suffice, as $\log(g^g)^r$ is $\ll 2^{g-1}$ for any $r$. In fact, the outlook is still more bleak: Watkins observed that if the discriminant $-d$ is divisible by all the primes up to $(\log\log d)^3$ (as $d_g$ certainly is), the product over primes dividing $d$ in the Goldfeld-Gross-Zagier lower bound is so small the resulting bound is worse than the trivial bound. If the implied constant is made explicit, Zhang's result would eliminate the possibility of one class per genus for discriminants above some bound. For example, neglecting the constant, the bound rules out one class per genus for discriminants with more than 6007 prime divisors. This works out to $d>3\cdot 10^{25734}$.
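A quick numerical sanity check of the last two claims (an editorial sketch, not part of the original answer; the function name `d` and the use of sympy are my own choices, and the exact cutoff 6007 of course depends on the implied constant, which is simply being neglected here):

```python
# Editorial sketch: check d_g > g^g for a range of small g, and estimate the size
# of d_g for g = 6007, the cutoff quoted at the end of the answer.
from sympy import prime, primerange

def d(g):
    """d_g = 3*4*5*7*...*p_g: the factor 4 for the prime 2, then the odd primes up to the g-th prime."""
    p_g = prime(g)                      # the g-th prime
    result = 4
    for p in primerange(3, p_g + 1):    # odd primes 3, 5, ..., p_g
        result *= p
    return result

# the inequality d_g > g^g used in the answer, checked for small g
print(all(d(g) > g**g for g in range(2, 200)))   # True for this range

# order of magnitude of d_g for g = 6007: roughly 25,700 decimal digits,
# of the same order as the bound 3*10^25734 quoted above
print(len(str(d(6007))))
```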
{ "source": [ "https://mathoverflow.net/questions/433949", "https://mathoverflow.net", "https://mathoverflow.net/users/115910/" ] }
434,276
Consider a function $h$ defined on the real numbers which is not of the form $kx+b$, i.e. not a linear function. If $h$ maps rational numbers to rational numbers and maps irrational numbers to irrational numbers, could $h$ be analytic? If so, how can one give an example?
Answering a question of Erdős, Barth and Schneider proved that for any two countable dense sets $A$ and $B$ in the complex plane, there exists an entire function $f$ such that $f(z)\in B$ if and only if $z\in A$. K. Barth and W. Schneider, Entire functions mapping arbitrary countable dense sets and their complements to each other, J. London Math. Soc., 4 (1971/72) 482-488. Another paper by the same authors concerns the case when $A$ and $B$ are on the real line. They prove that for every two such countable dense sets, there is a transcendental entire function that maps $A$ into $B$ monotonically. MR0269834 Barth, K. F.; Schneider, W. J. Entire functions mapping countable dense subsets of the reals onto each other monotonically. J. London Math. Soc. (2) 2 (1970), 620–626.
{ "source": [ "https://mathoverflow.net/questions/434276", "https://mathoverflow.net", "https://mathoverflow.net/users/494497/" ] }
435,110
Today, somebody posted on the nLab a link to Kirti Joshi's preprint on the arXiv from last month: https://arxiv.org/abs/2210.11635 In that preprint, Kirti Joshi claims that (1) he agrees with Scholze and Stix that Mochizuki's proof of ABC is incomplete, (2) Scholze and Stix's rigidity claim in Remark 9 of their paper " Why abc is still a conjecture " is wrong, and (3) "This paper provides the first proof of Mochizuki’s non-redundancy claim by establishing that the isomorphs are of distinct arithmetic-geometric provenance (and even continuous families of isomorphs exist) and therefore are non-redundant". If these results are confirmed, what are the consequences of this preprint on the validity of IUT as a theory and Mochizuki's proof of the ABC conjecture?
I should point out that Joshi's paper does not falsify Remark 9 of our note. In Joshi's Theorem 4.8 (which he claims to falsify our Remark 9) the curve $X/E$ stays the same (and hence of course its tempered fundamental group stays the same). The only thing that changes is how $E$ is embedded into an untilt $K$ of an auxiliary characteristic $p$ perfectoid field $F$ . But this extra data also doesn't have anything to do whatsoever with the situation -- of course one can't reconstruct it from the tempered fundamental group, as the latter doesn't even know about this extra data...
{ "source": [ "https://mathoverflow.net/questions/435110", "https://mathoverflow.net", "https://mathoverflow.net/users/483446/" ] }
435,919
In 2002, the discovery of the AKS algorithm proved that it is possible to determine whether an integer is prime in polynomial time deterministically. However, it is still not known whether there is an algorithm for factoring an integer in polynomial time. To me, this is the most counter-intuitive observation in mathematics. If one can know for certain that a given integer is composite, why is it apparently so difficult to find its factors? Why doesn’t knowing that something exists give one a recipe for determining what that something is? One way to resolve this problem would be to find a polynomial time algorithm to factor integers. However, if this were possible, it appears that a completely new idea would be needed to do so. My question is: is there an example of a problem similar to integer factorization in which it has been proven that an algorithm can determine the existence of a certain entity in polynomial time, but there is no algorithm that computes that entity in polynomial time?
What I think you're asking for are examples of search problems that seem to be hard, while a corresponding decision problem is solvable in polynomial time (but not totally trivial). It is true that such problems do not arise in practice very often; typically, an efficient decision procedure can be turned into an efficient search. For example, if you can determine whether or not an arbitrary SAT instance is satisfiable, then you can find satisfying assignments easily, just by taking each variable in turn, trying the two possible settings in turn (TRUE or FALSE), and asking if the smaller instance is satisfiable. Or, for an optimization problem, if you can solve the decision problem ("is there a solution with cost at most $k$ ?") then you can find the optimum value by performing a binary search on $k$ . You might find examples of what you're looking for on the CS Theory StackExchange: Easy decision problem, hard search problem . But perhaps none of the examples there is as convincing as factorization. It should be pointed out, however, that "the decision version of factorization" is (arguably) not primality testing, but the following problem: Given a positive integer $n$ and a bound $k$ , does there exist $p$ ( $1 < p < k$ ) such that $p\mid n$ ? A fast algorithm for this decision problem would indeed yield a fast algorithm for factoring. So arguably, what's special about factoring is that there is "a" decision problem (primality testing) that looks very close to "the" decision problem for factoring, but which seemingly cannot be parlayed into a solution to "the" decision problem. Stated this way, it's perhaps less surprising that there appears to be a computational gap between the two decision problems. An analogy here might be subgraph isomorphism , which is $\mathsf{NP}$ -hard, while graph isomorphism appears to be much easier.
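To make the first paragraph concrete, here is a small editorial sketch (not part of the original answer) of the self-reducibility of SAT: a decision procedure yields a search procedure by fixing one variable at a time. The function names are my own, and the brute-force `satisfiable` below merely stands in for a hypothetical polynomial-time decision oracle.

```python
from itertools import product

def satisfiable(clauses, n, fixed):
    """Decide whether the CNF `clauses` over variables 1..n has a satisfying
    assignment extending the partial assignment `fixed` (a dict: var -> bool).
    Brute force here; it stands in for a hypothetical polynomial-time oracle."""
    free = [v for v in range(1, n + 1) if v not in fixed]
    for bits in product([False, True], repeat=len(free)):
        assign = {**fixed, **dict(zip(free, bits))}
        if all(any(assign[abs(lit)] == (lit > 0) for lit in clause) for clause in clauses):
            return True
    return False

def find_assignment(clauses, n):
    """Turn the decision procedure into a search procedure by fixing variables one at a time."""
    if not satisfiable(clauses, n, {}):
        return None
    fixed = {}
    for v in range(1, n + 1):
        fixed[v] = True
        if not satisfiable(clauses, n, fixed):   # if TRUE does not extend, FALSE must
            fixed[v] = False
    return fixed

# literals as signed integers: (x1 or not x2) and (x2 or x3) and (not x1 or not x3)
print(find_assignment([[1, -2], [2, 3], [-1, -3]], 3))   # expected: {1: True, 2: True, 3: False}
```

The search makes one oracle call per variable, so a polynomial-time decider really would give a polynomial-time search, which is exactly why the factoring situation (where the natural decision problem seems just as hard) is the interesting case.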
{ "source": [ "https://mathoverflow.net/questions/435919", "https://mathoverflow.net", "https://mathoverflow.net/users/7089/" ] }
436,346
It is often said that instead of proving a great theorem a mathematician's fondest dream is to prove a great lemma. Something like Kőnig's tree lemma, or Yoneda's lemma, or really anything from this list . When I was first learning algebra, one of the key lemmas we were taught was Zorn's lemma . It was almost magical in its power and utility. However, I can't remember the last time Zorn's lemma appeared in one of my papers (even though I'm an algebraist). In pondering why this is, a few reasons occurred to me, which I'll list below. I don't want to lose my old friend Zorn, and so my question is: What are some reasons to keep (or, perhaps in line with my thoughts, abandon) Zorn's lemma? Edited to add : One purpose to this question is to know whether or not I should be rewriting my proofs to use Zorn's lemma, instead of my usual practice of using transfinite recursion, if there is a mathematical reason to prefer one over the other. Hopefully this clarifies the mathematical content of this question. To motivate the discussion, let me give an example of how I would now teach undergraduates a result that was taught to me using Zorn's lemma. Theorem : Every vector space $V$ has a basis. Proof : First, fix a well-ordering for $V$ . We will recursively work our way through the ordering, deciding whether to keep or discard elements of $V$ . Suppose we have reached a vector $v$ ; we keep it if it is linearly independent from the previously kept vectors (equivalently, it is not in their span), otherwise we discard it. If $B$ is the set of kept vectors we see it is a basis as follows. Any vector $v\in V$ is in the span of $B$ , because it is either in $B$ or in the span of the vectors previously kept. On the other hand, the elements of $B$ are linearly independent because a nontrivial combination $c_1 v_1 + \dotsb +c_k v_k=0$ , where $v_1<v_2<\dotsb<v_k$ and $c_k\neq 0$ , can be rearranged so $v_k$ is a linear combination of the previous vectors, so $v_k$ cannot belong to $B$ after all. $\quad\square$ (A finite-dimensional sketch of this greedy construction is included after the list below.) Here are some of the benefits I see for this type of proof over the usual Zorn's lemma argument. 1. The use of choice is disentangled from the other parts of the proof. When applying Zorn's lemma, it is difficult to see exactly how the axiom of choice is being used to reach the conclusion of a maximal element. One way to visualize its use is that Zorn's lemma lets us recursively build a maximal chain through the poset. This chain must have a greatest element. However, this construction is hidden behind the magic words "Abracadabra Zornify". Is it a historical artifact that choice is hidden this way? 2. We can more easily see whether or not to use a choice principle. In the proof above, if $V$ is already well-orderable (without AC), then we don't need to ever use the axiom of choice. 3. Zorn's lemma is no easier than transfinite recursion. Each part of transfinite recursion already (implicitly) occurs in most Zorn's lemma arguments. The base case of the recursion corresponds, roughly, to showing that the poset is nonempty (i.e., has some starting point). The successor ordinal step often occurs at the end; after asserting that some maximal element of the poset exists, we show that this maximal element has some claimed property by working by contradiction, and then passing to a slightly bigger element of the poset (i.e., the next successor). The limit ordinal step occurs when we show that chains have upper bounds. 4. Zorn's lemma often includes unnecessary complications.
In the proof I gave above, there is no need to define a complicated set, together with a poset relation. We can use strong induction, to avoid differentiating between the zero, successor, and nonzero limit steps. We don't need to combine the contradiction at the end with any successor step; they are entirely separated. 5. Transfinite recursion is a more fundamental principle. As a matter of pedagogy, shouldn't we teach students about transfinite induction before we teach them a version of it that is also combined with AC, and that requires the construction of a complicated poset? 6. Transfinite recursion applies to situations where Zorn's lemma does not. To give just one example: There are some recursions that continue along all of the ordinals (for a proper class amount of time). Zorn's requires, as a hypothesis, an end.
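[Editorial sketch, not part of the original question.] Here is the finite-dimensional analogue of the greedy construction in the proof above, where no transfinite machinery is needed: walk through the vectors in a fixed order and keep each one that is not in the span of those already kept. The function name `greedy_basis` and the use of exact rational arithmetic are my own choices.

```python
from fractions import Fraction

def greedy_basis(vectors):
    """Walk through `vectors` in the given (well-)order and keep each one that is
    not in the span of the vectors kept so far.  The kept vectors span the same
    subspace as the input and are linearly independent."""
    kept, reduced = [], []          # reduced[i] is an echelonized copy of kept[i]
    for v in vectors:
        w = [Fraction(x) for x in v]
        for r in reduced:           # eliminate against the pivots found so far
            pivot = next(i for i, x in enumerate(r) if x != 0)
            if w[pivot] != 0:
                c = w[pivot] / r[pivot]
                w = [wi - c * ri for wi, ri in zip(w, r)]
        if any(x != 0 for x in w):  # v is independent of the kept vectors: keep it
            kept.append(v)
            reduced.append(w)
    return kept

print(greedy_basis([(1, 2, 3), (2, 4, 6), (0, 1, 1), (1, 3, 4)]))
# expected: [(1, 2, 3), (0, 1, 1)] -- the other two vectors depend on these
```

In the infinite-dimensional case, the well-ordering of $V$ plays the role of the iteration order, and transfinite recursion (rather than a finite loop) carries the construction through limit stages.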
I agree with almost everything in your post. But still, I believe I know why people use Zorn's lemma. My answer. Zorn's lemma encapsulates succinctly many of the consequences of AC via transfinite recursion, but without requiring any involvement of the ordinals or knowledge of transfinite recursion to be used. To those who are deeply familiar with transfinite recursion, of course, every use of Zorn's lemma can be seen as sublimating the underlying construction, which achieves the maximal elements by a transfinite process that simultaneously explains why they exist. To appeal to Zorn seems to hide this essential explanatory underlying mechanism. And yet, the alternative perspective is that Zorn's lemma abstracts away from the recursive process, producing in the end a simpler argument that relies only on the core consequences of the recursive process, which do not rely on any explicit engagement with ordinals or recursion. And precisely because of that feature, Zorn's lemma arguments can be undertaken and understood by mathematicians who are unfamiliar with the ordinals and transfinite recursion. In the vector space example, to show every vector space has a basis, one can mount a transfinite recursive process: you pick an element, and then if it doesn't span, you pick another, and so on transfinitely until you have a basis. (My view of this example is a little different from how you described it, since I view the choice function as more primitive than the well order — I would build the basis by choosing amongst the elements not yet in the span — indeed I prefer to view WOP itself as the outcome of recursively choosing elements.) With Zorn's lemma, however, there is no need for ordinals or transfinite recursion, and the Zorn's lemma argument instead encapsulates abstractly their effects — the partial order consists in effect of partial undertakings of the recursive process. In this sense, the Zorn argument is simpler, abstracting away from the transfinite constructive "process". I find the situation to be analogous to Martin's axiom and forcing. Martin's axiom is the poor mathematician's forcing, just as Zorn's lemma is the poor mathematician's choice+transfinite recursion. My personal view is that the ordinals and transfinite recursion are one of the wonders of mathematics, a sublime achievement of the intellect resulting in many beautiful arguments and constructions. I tend to prefer the transfinite recursive arguments as providing a deeply explanatory account of the consequences of Zorn's lemma. (Even the well-order theorem seems fundamentally less mysterious when explained via transfinite recursion — pick any element as the least element, and now pick a next element, and a next, and so on transfinitely.) Further, although I recognize that many mathematicians have little involvement or experience with the ordinals and transfinite recursion, I also believe that their mathematical life would be improved by knowing more of them.
{ "source": [ "https://mathoverflow.net/questions/436346", "https://mathoverflow.net", "https://mathoverflow.net/users/3199/" ] }
436,925
When writing a paper, it's possible that some auxiliary results hold in more generality or in a stronger version than what's actually needed to prove the main results of the article. And so here comes the question: Should one state and prove the exact auxiliary result that is used, or should one sharpen it to its best possible version? I can think of pros and cons of both approaches: Proving better results cannot be a bad thing in itself, but spending time proving a too strong and not-so-interesting Lemma might be distracting and not worth the effort. Even if it's not so hard to improve the Lemma it might be confusing to the reader to use a weaker version of what's stated. Example: Suppose I need to use a Lemma of the form: For every $\varepsilon>0$ there exists a sequence $(x_n)_n$ with property $(P)$ such that $|x_n|<\varepsilon$ for all $n\in\mathbb{N}$ . However, looking at the proof of this Lemma I (and most likely the referee and the reader) noticed that slightly changing the proof a stronger version holds: For every sequence of positive numbers $(\varepsilon_n)_n$ there exists a sequence $(x_n)_n$ with property $(P)$ such that $|x_n|<\varepsilon_n$ for all $n\in\mathbb{N}$ . Which version should I include if I only need the first (and weaker) statement?
It depends on context. Here are some relevant considerations: How much more difficult is the stronger Lemma to prove? If the proof is nearly identical, then stating the strongest one may be a good idea. But it may not be helpful to spend a lot of time on making a minor Lemma slightly stronger if it takes a lot of effort and distracts from the main exposition. One thing I've occasionally done is include two versions of a Lemma, explicitly calling the stronger one a "Proposition" and making clear that it is a separate question just how far we can push the Lemma. (Which may be an interesting question in its own right.) One thing to also keep in mind is that at least stating the strongest version may help others in two other ways. First, if they extend, generalize, or improve your result, having a stronger version of the Lemma may also be helpful. Second, if someone is trying to understand what the limiting steps are in your main result (e.g. why does this only apply to p-groups but not nilpotent groups, or why can some general inequality not be strengthened, etc.) then having a Lemma which is stronger than you need can help them see that that Lemma is not where the obstruction to making the result stronger lies. Another consideration is that a stronger Lemma may benefit you later. If you come back to a problem years later, you might not remember the stronger version, or might remember it but might not remember the proof. So if you decide not to include the proof in the final paper, it may be a good idea to keep a written up version in your own copy, or possibly just commented out in the LaTeX.
{ "source": [ "https://mathoverflow.net/questions/436925", "https://mathoverflow.net", "https://mathoverflow.net/users/123450/" ] }
437,256
I have been a user of category theory for a long time. I recently started studying a rigorous treatment of categories within ZFC+U. Then I began to question the effect of the smallness of sets. We first fix notations. Let $\mathbb{U}$ be a Grothendieck universe. A set $x$ is called a $\mathbb{U}$ -set if $x\in \mathbb{U}$ . A set is said to be $\mathbb{U}$ -small if it is isomorphic to a $\mathbb{U}$ -set. We denote by $\operatorname{Set}_{\mathbb{U}}$ the category of $\mathbb{U}$ -sets. A category is called a locally $\mathbb{U}$ -category (resp. locally $\mathbb{U}$ -small ) if its hom-sets are $\mathbb{U}$ -sets (resp. $\mathbb{U}$ -small). There seem to be various advantages to a set being a $\mathbb{U}$ -set. For example, $\mathbb{U}$ -sets belong to $\operatorname{Set}_{\mathbb{U}}$ by definition. This is important when we construct the Yoneda embedding $\mathcal{C} \to \operatorname{Fun}(\mathcal{C}^{\operatorname{op}},\operatorname{Set}_{\mathbb{U}})$ of a locally $\mathbb{U}$ -category. In contrast, we cannot construct such an embedding canonically for locally $\mathbb{U}$ -small categories. In various books, the fact that a set is small is treated as an important thing. For example, let $\mathcal{A}$ be an abelian category. Then its Grothendieck group $K_0(\mathcal{A})$ is defined as the free abelian group generated by the isomorphism classes $[X]$ of objects modulo the Euler relation: $[B]=[A]+[C]$ if there exists a short exact sequence $0\to A \to B\to C\to 0$ . In many books (for example, Weibel's K-book), skeletal $\mathbb{U}$ -smallness is imposed on $\mathcal{A}$ . That is, suppose that the set of isomorphism classes of objects of $\mathcal{A}$ is $\mathbb{U}$ -small. This guarantees that $K_0(\mathcal{A})$ is a $\mathbb{U}$ -small abelian group. However, how does this benefit us? Note: a category is a tuple $(\mathcal{C}_0,\mathcal{C}_1,s,t,e,\circ)$ of sets and maps satisfying some conditions. Thus I think that the Grothendieck group can be defined without imposing any conditions on an abelian category $\mathcal{A}$ .
First, it is important to distinguish the problems related to the foundation you are using from the problems that are inherent to category theory. For example, the distinction between $\mathbb{U}$ -small and $\mathbb{U}$ -set is something that has to do with the set-theoretic foundation - in category theory, we don't consider properties that distinguish isomorphic objects, so the notion of $\mathbb{U}$ -set doesn't make sense (only $\mathbb{U}$ -small). Now, from the category-theoretic perspective, the only important thing is simply to keep track of what "size" the objects you work with are ("small" vs "large", though in some contexts one might need more than two sizes, "very large", etc...). The problem isn't that you are not allowed to make some constructions - there are foundations that let you do pretty much everything you want with as many different sizes as you want - but only that you want to know in what categories the constructions you are making take values - do they produce small sets, large sets, "very large sets", etc... Some foundations might prevent you from doing some constructions of course - but if you only focus on things that are "foundation independent" you can just change foundation if you run into this sort of problem - we do that very often. To come back to your specific problem, it is completely fine to consider the $K_0$ of a "large" abelian category and get a "large" group. As pointed out by Andrej Bauer, in some foundations this might cause problems, but there definitely are foundations that can handle that sort of thing fine - for example the one you are talking about, where everything is a set and size issues are handled with Grothendieck universes, is indeed completely fine with this. The thing is, if you build a large $K_0$ , then you have a large group, but it is not an element of your "category of groups". If you want to make the $K_0$ construction into a functor you have to put some kind of size restriction on the category you apply it to. And if you want the category of "large groups" to be a set, you are really going to need some kind of size restriction... Now, there are many examples where not keeping track of size actually leads to problems. The most common is probably in the definition of limits and colimits: when one says that a category has all limits or colimits we always mean that it has all small limits or colimits. In fact it is a theorem that if a category has products (or coproducts) indexed by a set of the same size as itself then it is a poset! So when doing arguments involving limits and colimits you always need to make sure the diagrams you are taking limits and colimits of are small. For example, the following argument is false because we are not careful with the size problems: Fake Proposition: Every category with limits has an initial object. Fake proof: If $C$ has all limits then one can take the limit $L$ of the identity functor $C \to C$ . We will show that $L$ is an initial object. By construction, for every object $X \in C$ there is a map $f_X:L \to X$ and for every arrow $v:X \to Y$ we have $v f_X = f_Y$ . Then for every other arrow $k:L \to X$ we have $k f_L = f_X$ . In particular, taking $k=f_X$ you get $f_X f_L = f_X$ . It follows that $f_X f_L = f_X Id_L$ for all objects $X$ , and hence by the uniqueness part of the universal property of the limit $f_L = Id_L$ , hence the equation above gives $k=f_X$ , so there is a unique map $L \to X$ for each object $X$ .
Note: The correct argument here of course shows that if a category has limits indexed by itself then it has an initial object, but in the traditional ZFC foundation, categories having all limits of their own size are posets, as mentioned above. (There are, however, alternative foundations incompatible with ZFC in which this result applies to categories that aren't posets.) Another situation is when building adjoint functors. It is pretty frequent that some would-be adjoint functors we want to exist actually don't, because they take values in categories of "large objects" (for example large sets or large groups instead of the category of groups and sets) instead of the domain of the functor they are an adjoint of. This happens for example with the forgetful functor from complete Boolean algebras to the category of sets - where the would-be left adjoint applied to any infinite set gives rise to something the size of the universe. Of course in the case of $K_0$ you have two options: either only apply it to small categories, or apply it to "large" categories and consider that it takes values in the category of "large groups" - but then you need one more size because the category of "large groups" is itself "very large". In the case of $K_0$ the reason why everybody makes the first choice and not the second is because in practice the $K_0$ construction is only interesting for categories like "finitely presented modules" or "finite-dimensional bundles/vector spaces", which are all essentially small categories. As soon as you allow infinite-dimensional objects in your category you end up with objects satisfying equations of the form $X \oplus X =X$ and $X \oplus Y = X$ for $Y$ finite-dimensional, which makes your $K_0$ trivial. One can probably engineer interesting examples of large abelian categories with interesting $K_0$ , but all the naive examples (like all groups, all bundles or all vector spaces) have trivial $K_0$ for this reason.
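[Editorial addition, not part of the original answer.] A standard concrete counterexample to the Fake Proposition, illustrating exactly this size issue: take $\mathbf{Ord}^{\mathrm{op}}$, the opposite of the ordered class of ordinals, viewed as a (large, locally small) poset category. Every small diagram in it has a limit — a limit there is just a supremum of a set of ordinals computed in $\mathbf{Ord}$, which always exists, and the empty limit is the ordinal $0$ — but it has no initial object, since that would be a largest ordinal. The fake proof breaks down precisely because the identity functor of $\mathbf{Ord}^{\mathrm{op}}$ is a diagram indexed by a proper class rather than by a set.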
{ "source": [ "https://mathoverflow.net/questions/437256", "https://mathoverflow.net", "https://mathoverflow.net/users/137654/" ] }
437,274
There are a lot of known examples of undecidable problems, a large number of them not directly related to Turing machines or equivalent models of computation, for example here: https://en.m.wikipedia.org/wiki/List_of_undecidable_problems It is also known that the structure of the Turing degrees is extremely complicated, and that there are undecidable problems not reducible to the halting problem, the most basic example being "Does a given Turing machine with an oracle for the halting problem halt?" So my question is: Is there an example of a problem not related to abstract machines that is undecidable but not reducible to the halting problem? (That is, one whose Turing degree differs from $0'$.)
The problems reducible to the halting problem are exactly the problems of complexity $\Delta^0_2$ in the arithmetic hierarchy , and there are indeed many natural problems outside of this class. In this sense, you are asking for natural examples of decision problems of high arithmetic complexity. Arithmetic truth, to decide if a given arithmetic sentence $\sigma$ is true in the standard model $\langle\mathbb{N},+,\cdot,0,1,<\rangle$ , is undecidable, but not reducible to the halting problem. $\Sigma^0_n$ truth, restricted to sentences of this complexity, for $n\geq 2$ is undecidable, but not reducible to the halting problem. Projective truth, to decide if a given sentence $\sigma$ is true in the real field $\langle\mathbb{R},+,\cdot,0,1,<,\mathbb{Z}\rangle$ , with the integers as a unary predicate, is undecidable, but not reducible to the halting problem. Set-theoretic truth in various models, to decide if a given sentence is true in the structure of hereditarily countable sets $\langle H_{\omega_1},\in\rangle$ or in the least Zermelo universe $V_{\omega+\omega}$ or the least Zermelo-Grothendieck universe $\langle V_\kappa,\in\rangle$ , if there is one, is undecidable, but not reducible to the halting problem. To decide if a given c.e. group presentation is trivial generally has complexity $\Pi^0_2$ (because one must say every generator is trivial), and this will be undecidable, but not reducible to the halting problem. (Note, for c.e. presentations with finitely many generators, this is reducible to the halting problem.) To decide if a given c.e. graph on the natural numbers is connected generally has complexity $\Pi^0_2$ , making it undecidable and not reducible to the halting problem. To decide if a given computable function is total has complexity $\Pi^0_2$ , since one must say every input has a halting computation, and this is complete for that level of complexity, making it undecidable, but not reducible to the halting problem. To decide if a given computable function is surjective has complexity complete $\Pi^0_2$ , and so this is undecidable, but not reducible to the halting problem. (Meanwhile, to decide if it is injective is $\Pi^0_1$ and hence reducible to the halting problem.) There are many more examples. See for example the hierarchy of degrees of irrationality . All the examples higher in the hierarchy amount to undecidable decision problems that are not reducible to the halting problem. A dual to your question. There is a dual version of your question that is fascinating and the subject of a research program in computability theory. Namely, are there natural decision problems that are undecidable, but such that the halting problem does not reduce to them? To be sure, the Friedberg-Muchnik solution of Post's problem shows that there are undecidable c.e. Turing degrees strictly below the halting problem, and so there are indeed undecidable decision problems strictly below the halting problem. But these problems are constructed especially for this purpose, and in this sense, are not seen as "naturally" arising. Furthermore, it is widely regarded as an open question whether there are natural decision problems in this class (one proposal: the set of differences of primes). Although I often find such uses of "natural" to be empty, in computability theory there is a research program to formulate substantive versions of the question, via Martin's conjecture and other approaches. 
See my further discussion of this in my paper, Linearity and illfoundedness in the hierarchy of large cardinal consistency strength, especially sections 9, 10, and 11, which focus on naturality and computability theory.
{ "source": [ "https://mathoverflow.net/questions/437274", "https://mathoverflow.net", "https://mathoverflow.net/users/496934/" ] }
438,258
(This must have been asked before and exists somewhere in Community Wiki, but I can't find it...) Where can you post open (math) problems? And what are the advantages and disadvantages? Example: This place (and Math StackExchange), duh. Example: The journal AMM had a corner "Unsolved Problems", but no longer. (Editor Stan Wagon told me that he doesn't know of any math journal accepting unsolved problems.) Example: But then, probably any journal will accept a small "Open Questions" section after a research article. Example: USENET once had a useful group. The group still exists. (Yes, I'm that old...) Example: Other social media, e.g. Reddit has a math sub.
If you can motivate the problem and make some partial progress on it, you can try to publish it as a paper in a specialized journal, or at the very least upload it to the arXiv. If you only have empirical evidence, there are journals that are receptive to this kind of thing ("Mathematics of Computation" and "Experimental Mathematics" spring to mind). If it concerns elementary mathematics and is relevant to a wide enough audience, you can try a popular journal such as "American Mathematical Monthly". Other than that, you can: put it on your website, if you have one; share it with experts, as they have the highest chance of solving it, and at the very least of assessing its importance and difficulty and possibly guiding you towards relevant literature or a proof; or share it in the problem session of a relevant conference. Problems from such sessions tend to be published in the form of conference proceedings.
{ "source": [ "https://mathoverflow.net/questions/438258", "https://mathoverflow.net", "https://mathoverflow.net/users/11504/" ] }
438,263
Is there a concrete example of a $4$-tensor $R_{ijkl}$ with the same symmetries as the Riemannian curvature tensor, i.e. \begin{gather*} R_{ijkl} = - R_{ijlk},\quad R_{ijkl} = R_{jikl},\quad R_{ijkl} = R_{klij}, \\ R_{ijkl} + R_{iklj} + R_{iljk} = 0, \end{gather*} such that there is no metric whose Riemannian curvature tensor it is? The existence of such a curvature tensor was already shown by Robert Bryant; however, I'm looking for a concrete example.
{ "source": [ "https://mathoverflow.net/questions/438263", "https://mathoverflow.net", "https://mathoverflow.net/users/497575/" ] }
438,270
Suppose some circular coins (not necessarily the same size) are in a frame. The coins may be immobile, as in this example: On the other hand, they may be free to move, as in these examples (in which the coins can move simultaneously): It is rather tedious to show algebraically that the coins can move, so I tried to find some general principles that allow us to simply look at diagrams like these and know whether the coins can move. Conjecture: If circular coins (not necessarily the same size) are in a convex polygonal frame, with each coin touching exactly one edge, then all the coins can move. Is my conjecture true? Remarks about my conjecture The frame must be a polygon, otherwise there would be a counter-example: two coins in the region bounded by $y=x^2-1$ and $y=1-x^2$ , as shown below. The frame must be convex, otherwise there would be a counter-example, as shown below. Every coin must touch an edge, otherwise there would be a counter-example, as shown below. EDIT Zach Teitler has given a counter-example . I have proposed a second conjecture that avoids this counter-example. EDIT2 My second conjecture also has a counter-example . I have asked another question asking for general principles that are useful in determining whether coins can move.
The following seems like a counterexample to the conjecture as originally stated, allowing different size coins. It doesn't seem like the big coin, with diameter $1-\epsilon$, can move right, up, or down. (I apologize for the poor drawing.) (I haven't done "formal" algebra to verify this, but just looking at it, it seems to be so.) Edit by OP: Here's another look at your idea.
{ "source": [ "https://mathoverflow.net/questions/438270", "https://mathoverflow.net", "https://mathoverflow.net/users/494920/" ] }
438,925
There are many statements in abstract algebra, often asked by beginners, which are just too good to be true . For example, if $N$ is a normal subgroup of a group $G$ , is $G/N$ isomorphic to a subgroup of $G$ ? As an experienced mathematician, we see immediately that there is no reason for this to be true — even without thinking about this in detail. Often we can quickly come up with counterexamples. Sometimes, it is hard to find counterexamples. Many questions fall into this category, for example: If $f : R \to S$ is a ring homomorphism and $I \subseteq R $ is an ideal, is then $f(I) \subseteq S$ an ideal? ( SE/2200335 ) $\DeclareMathOperator\Aut{Aut}$ If $G,H$ are groups, do we have $\Aut(G \times H) \cong \Aut(G) \times \Aut(H)$ ? ( SE/1236571 ) Is every submodule of a finitely generated module also finitely generated? ( SE/83078 ) If $A$ is an abelian group with $A^3 \cong A$ , does this imply $A^2 \cong A$ ? ( MO/10128 ) If $A$ is an abelian group with $A \oplus \mathbb{Z}^2 \cong A$ , does this imply $A \oplus \mathbb{Z} \cong A$ ? ( MO/218113 ) If $G$ , $H$ are groups whose group algebras $ \mathbb{Q}[G]$ , $\mathbb{Q}[H]$ are isomorphic, are then $G$ , $H$ isomorphic? ( SE/1342851 ) see also MO/23478 for common false beliefs in mathematics But my question is actually about situations where, for some strange reason, our first gut feeling is not correct and a wrong-looking statement turns out to be true . Examples will be abundant, which is why I want to restrict this question to examples coming from abstract algebra (you are welcome to open similar questions for other branches and flavors of mathematics, and please let me know if there are already questions of this type). Here are some examples which come to my mind: Every group homomorphism $\mathbb{Z}^{\mathbb{N}} \to \mathbb{Z}$ is a finite linear combination of projections. In fact, $(\mathbb{Z}^{ \mathbb{N}})^* \cong \mathbb{Z}^{\oplus \mathbb{N}}$ . ( Specker 1950 ) If $A$ , $B$ are finitely generated abelian groups (more generally, finitely generated modules over a commutative Noetherian ring) and $f : A \to A \oplus B$ , $g : A \oplus B \to B$ are homomorphisms such that $0 \to A \xrightarrow{f} A \oplus B \xrightarrow{g} B \to 0$ is exact, then it is split exact. If $A$ , $B$ , $C$ are finite groups such that $A \times B \cong A \times C$ , then $B \cong C$ . ( SE/3579745 ) every negation of the examples mentioned above, for example: There is an abelian group $A$ with $A \cong A^3$ and $A \not\cong A^2$ . (However, I am more interested in "positive" results.) I am looking for statements in abstract algebra where this is your reaction when you learn that they are actually true. Please try to include a reference for the statement and proof.
The free group with infinitely many generators is a subgroup of the free group with two generators. (For instance, in the free group on $a, b$, the elements $b^{-n} a b^{n}$ for $n \geq 0$ freely generate a subgroup of infinite rank.)
{ "source": [ "https://mathoverflow.net/questions/438925", "https://mathoverflow.net", "https://mathoverflow.net/users/2841/" ] }
440,181
I was thinking about the idea that succession, addition, multiplication, exponentiation, tetration and so on form a sequence of operations where each is defined as a repeated self-application of the previous one. And then it struck me that the first two binary operations in this sequence (addition and multiplication) are commutative, but this breaks at exponentiation. What exactly breaks? When is repeated self-application of a commutative operation itself commutative, and when is it not? That is, for an operation: $$f: \mathcal{N} \times \mathcal{N} \to \mathcal{N}$$ If I define: $$g(m, 2) = f(m, m)$$ $$g(m, n) = f(m, g(m, n-1))$$ for any $n \geq 2$. What conditions must $f$ satisfy for $g$ to be commutative? That is, for $$g(a,b) = g(b,a)$$ to hold for all $a$ and $b$?
Not really an answer, but too long for a comment: it's worth noting that if we assume that $f$ is associative, $g$ is associative, $g$ is cancellative for at least one $a$ , meaning that $g(a,u)=g(a,v)$ implies $u=v$ for this particular $a$ , then $f$ and $g$ must be addition and multiplication. Indeed, let $a,p,q$ be natural numbers, with $a$ such that $g(a,—)$ is cancellative. Then $g(a,pq)$ is the application of $f$ to $pq$ terms all equal to $a$ . By associativity of $f$ we can group this as $q$ terms all of which are the application of $f$ to $p$ terms, i.e., $g(a,p)$ , so that $g(a,pq) = g(g(a,p),q)$ . By associativity of $g$ , we can rewrite this as $g(a,g(p,q))$ . By the cancellativity assumption, we get $pq = g(p,q)$ . We then have $g(1,n) = n$ , and since $g(1,m+n)=f(g(1,m),g(1,n))$ (again, by associativity of $f$ ) we get $m+n=f(m,n)$ , as claimed. ∎ Update: similarly, if we assume that $f$ is associative, $g$ has a unit element $e$ , meaning that $g(e,n) = n$ for all $n$ , then the same conclusion holds. The proof is pretty much the same: as above, $g(e,pq) = g(g(e,p),q)$ so the fact that $e$ is a unit for $g$ means $pq = g(p,q)$ , and the rest of the proof is identical. ∎ (Note that for all this I'm assuming that $g(c,1) = c$ , which is logical if $g(c,n)$ means “ $n$ -fold application of $f$ to $c$ ”, but you didn't actually make this part of your definition. I suppose it was an oversight.)
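To make this concrete, here is a small Python sketch (my own illustration, not part of the original answer). Following the note at the end of the answer, it takes $g(m,1)=m$ as the base case, so that $g(m,2)=f(m,m)$ as in the question. It checks numerically that iterating addition gives multiplication (commutative), while iterating multiplication gives exponentiation, where commutativity first fails.

```python
# Numerical sketch of the recursion in the question.
# Assumption (as noted in the answer): we take g(m, 1) = m as the base case.

def g(f, m, n):
    """n-fold application of the binary operation f to m: g(m,1)=m, g(m,n)=f(m, g(m,n-1))."""
    result = m
    for _ in range(n - 1):
        result = f(m, result)
    return result

add = lambda a, b: a + b
mul = lambda a, b: a * b

# Iterating addition gives multiplication, which is commutative:
assert all(g(add, a, b) == a * b == g(add, b, a) for a in range(1, 8) for b in range(1, 8))

# Iterating multiplication gives exponentiation, which is not commutative:
assert g(mul, 2, 3) == 8 and g(mul, 3, 2) == 9   # 2^3 != 3^2
```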
{ "source": [ "https://mathoverflow.net/questions/440181", "https://mathoverflow.net", "https://mathoverflow.net/users/757/" ] }
440,682
Several years ago, when I was just starting undergrad, I ran across an instructional text on chalking beautiful mathematical diagrams while killing time in the college library. In my infinite wisdom, I decided that I should remember this book's name, and find it again whenever I had the time to go through it. A few weeks back, I finally thought about it again, but this event apparently preceded my book list—I've forgotten everything except its existence. Is anyone aware of anything that might be it? The only other things I remember are some very nice fold-out illustrations and that it was at the tail end of the QAs, close to the astronomy and physics collections.
Georges K. Francis, A Topological Picturebook. An excerpt from the color plates:
{ "source": [ "https://mathoverflow.net/questions/440682", "https://mathoverflow.net", "https://mathoverflow.net/users/160917/" ] }
440,726
Let $\Sigma_n$ be a genus $n$ surface, let $\mathcal{H}_n$ be a genus $n$ handle body, and let $F_n$ be a free group of rank $n$ . Fix an identification of $\pi_1(\mathcal{H}_n)$ with $F_n$ . I know several proofs of the following result: Theorem : Let $\phi\colon \pi_1(\Sigma_n) \rightarrow F_n$ be a surjection. Then there exists an orientation-preserving homeomorphism $\psi\colon \Sigma_n \rightarrow \partial \mathcal{H}_n$ such that $\phi$ factors as $$\pi_1(\Sigma_n) \stackrel{\psi_{\ast}}{\longrightarrow} \pi_1(\partial \mathcal{H}_n) \longrightarrow \pi_1(\mathcal{H}_n) = F_n.$$ However, I do not know any references for it, nor who to attribute it to. Does anyone know any references, preferably the original one?
{ "source": [ "https://mathoverflow.net/questions/440726", "https://mathoverflow.net", "https://mathoverflow.net/users/499341/" ] }
441,577
Is there some deeper meaning to the following derivation (or rather one-parameter family of derivations) associating the divergent series $1+2+3+4+…$ with the value $-\frac 1 8$ (as opposed to the value $-\frac 1 {12}$ obtained by zeta-function regularization)? Or is it just a curiosity? Formally put $x=1+2+3+4+\dotsb$ . Writing $$x-1=(2+3+4)+(5+6+7)+\dotsb=9+18+27+\dotsb=9x$$ we get $8x=-1$ . Or, writing $$x-1-2=(3+4+5+6+7)+(8+9+10+11+12)+\dotsb=25+50+75+\dotsb=25x$$ we get $24x=-3$ . Or, writing $$x-1-2-3=(4+\dotsb+10)+(11+\dotsb+17)+\dotsb=49+98+147+\dotsb=49x$$ we get $48x=-6$ . Etc. It is not surprising that values other than $-\frac 1 {12}$ can be obtained as “values” of this divergent series. What surprises me is that all of these methods of grouping terms give the same answer. It makes me wonder whether there is a larger story here.
Yes there is, in $p$-adics. You are probably familiar with the relation $$8T(n)+1=(2n+1)^2.$$ Now for any $p$ except $2$ (which has to be excluded because of the non-unit coefficients in the above relation) we can identify a subsequence of whole numbers $n$ such that the squared quantity on the right converges $p$-adically to zero. Then $T(n)$ follows suit, converging to $-1/8$. For instance, we may put in $p=3$, in which case $-1/8$ is rendered as the $3$-adic integer $\overline{01}$. Then using base $3$ arithmetic we develop the sequence \begin{align*} \newcommand\pdots{{\ldots}} & T(\pdots1)=\pdots01 \\ & T(\pdots11)=\pdots0101 \\ & T(\pdots111)=\pdots010101 \end{align*} where the argument on the left side (written in base $3$) is set up to converge to $-1/2$ (for which the corresponding square is zero), and the right side then converges quadratically to $-1/8$ in the subsequence. The quadratic convergence to $-1/8$ is unique to that target value because of the critical value of the corresponding square. This quadratic convergence leads to ordinary integer triangular numbers being "attracted" to the $3$-adic representation $\overline{01}=-1/8$. Below is a table wherein the columns represent possible two-digit endings for any $81$ consecutive triangular numbers in base $3$; the rows represent possible values for the preceding two digits, and the entries describe how many triangular numbers out of the block of $81$ will end with the resulting four-digit pattern. Combinations not shown correspond to no triangular numbers represented in base 3. The table shows that there is an excess of triangular numbers ending with $...01$ in base $3$ ($27/81$ versus $18/81$ for the other possible two-digit endings), and among those triangular numbers ending with $...01$, the four-digit ending $...0101$ is further overrepresented. The overrepresentation of patterns matching $\overline{01}$ grows when we consider longer strings of terminal digits in base $3$. Similar attraction is seen to $\overline{03}$ in $5$-adics, $\overline{06}$ in $7$-adics, and so on. Triangular numbers are not the only ones with this property. We can set up similar patterns with any polygonal number pattern, for instance octagonal numbers quadratically converging to $-1/3$ and favoring that fraction in $p$-adic subsequences where the prime $p\ne3$.
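For readers who want to see the "attraction" numerically, here is a small Python sketch (my own illustration, not part of the original answer). It uses the truncations $n_k = (3^k-1)/2$ of $-1/2$, i.e. the base-3 numerals $1, 11, 111, \ldots$, and prints the last $2k$ base-3 digits of $T(n_k)$, which come out as $01$ repeated $k$ times, exactly the pattern $\overline{01} = -1/8$ described above.

```python
# Sketch: triangular numbers T(n) for n = 1, 11, 111, ... (base 3) end in ...0101 (base 3),
# i.e. they converge 3-adically to -1/8, since 8*T(n) + 1 = (2n+1)^2 = 3^(2k) -> 0.

def to_base3(m, digits):
    """Return the last `digits` base-3 digits of m, most significant first."""
    return ''.join(str((m // 3**i) % 3) for i in reversed(range(digits)))

for k in range(1, 6):
    n = (3**k - 1) // 2              # base-3 numeral 11...1 (k ones), a truncation of -1/2
    T = n * (n + 1) // 2             # triangular number
    assert (8 * T + 1) % 3**(2 * k) == 0   # (2n+1)^2 = 3^(2k)
    print(k, to_base3(T, 2 * k))     # last 2k base-3 digits: "01" repeated k times
```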
{ "source": [ "https://mathoverflow.net/questions/441577", "https://mathoverflow.net", "https://mathoverflow.net/users/3621/" ] }
12
My daily commute changed from 9 miles to 9 feet when I started working remotely. That means I very rarely drive my car much. After a particularly hectic month, I didn't drive for about 4 weeks and the car's battery had died the next time I came to drive it. So, what should I do to keep the car in good, running condition? What can keep the battery charged? Are trickle chargers recommended? How about oil? Should I make sure I take the car for a reasonable drive every week? Two weeks? What other aspects of the car should I keep in mind? Edit: Thanks to all who've answered to date. I've upvoted everyone, and am accepting Patrick's answer as the most informative. Edit 2: Also of interest to this topic: Reviving a vehicle that has been idle for a long time How long does it take for gas to go bad?
Rule(s) of thumb:

1. Drive the car once a week, long enough to get the engine to normal operating temperature.
2. Change the oil every 3,000 miles, or, since you are not driving much, every 3 months.
3. If your car stays stationary for very long times you need to be concerned about dry rot of the rubber on the tires, but if you follow #1, you won't need to worry about this.
4. About once a month, drive the car for an extended time (>30 minutes); there can be water accumulation on sensors and exhaust as well as rust on brake rotors. You want to heat up the entire car, including the engine and exhaust system, to evaporate any condensation and prevent rust buildup, and to clean the brake rotors to maintain the ability to stop.
5. If the gas in the tank will be there for more than a month, you need to put a gas stabilizer in the tank (à la STA-BIL) so that the gas does not degrade.
{ "source": [ "https://mechanics.stackexchange.com/questions/12", "https://mechanics.stackexchange.com", "https://mechanics.stackexchange.com/users/20/" ] }
39
Here in the UK it's common for petrol (gas) stations to offer both regular and "premium" unleaded petrol. The premium stuff is (naturally) more expensive, and I've often wondered what the actual difference is. Would I need to fill up with premium every time to enjoy these benefits, or does treating the car to a periodic tankful help in some way? Edit : It looks as though there are regional differences with respect to fuels marketed as "premium". S_Niles says in the answer below that in the US, "premium" implies "high octane", pure and simple. Here in the UK, "premium" implies "high octane plus additives". ( Here's an example ) It's that "high octane plus additives" stuff that I'm interested in.
There is absolutely no reason to use higher-octane fuel unless your car explicitly requires it. The higher the octane, the more compression/heat required to combust the fuel. In high-performance engines (turbo-charged, high compression cylinders, etc), a higher octane fuel is needed so the fuel doesn't combust prematurely (knocking). If you put this fuel in a "normal" engine, it may even have detrimental effects, since the engine will have a harder time combusting the higher octane fuel. Even if your car requests a higher octane fuel, it may be possible to use a lower octane fuel because of variable timing and other magic. Your manual would state that. However, it will not have as much performance, since the timing is being retarded to prevent knocking. Note: I am assuming that in the UK, premium = high-octane, the same as here in the States. Over here, premium is ONLY a designation of octane rating, and has nothing to do with the additive package, etc.
{ "source": [ "https://mechanics.stackexchange.com/questions/39", "https://mechanics.stackexchange.com", "https://mechanics.stackexchange.com/users/9/" ] }
73
I have heard it is proper practice to replace or resurface your brake rotors every time you do a brake pad replacement. Does this need to be done every single time, or is it overkill? Maybe it is just an easy, "might as well replace it while you are in there" type of thing?
The decision to replace is largely based on the thickness. The repair manual should tell you the minimum thickness, below which you should replace the rotors when doing the repair. Use a pair of calipers and measure the rotor thickness; if you're below this number you need to replace the rotors. You may also wish to replace the rotors if you have particularly heavy use planned and you are getting close to the limit. For example, if you live in the mountains, do a lot of towing, are planning to attend track days... You definitely need to get them resurfaced if they are warped or damaged. Usually you can feel if they are warped through the brake pedal when stopping -- instead of a smooth stop it will kind of vibrate or pulsate when braking at higher speeds. It's very noticeable. This can be measured with a dial gauge and checked against the repair manual's recommendations for "runout". You will need a dial gauge, and some sort of a mount to hold the gauge steady while you spin the rotor. Damage is usually caused by the old brake pads wearing completely through, which tends to leave a very rough surface on the disc. These should definitely be turned, if possible. Before having a damaged or warped set of rotors turned, check their thickness. If they're close to the minimum, resurfacing them will leave you with rotors that are too thin. If you're at this point, you should have rotors that are thick enough and not damaged. Many people recommend resurfacing them so the pads and rotors can better mate and wear into each other. I tend to agree with this, but I have replaced pads on cars that I drive less spiritedly without resurfacing them, and have not had problems.
{ "source": [ "https://mechanics.stackexchange.com/questions/73", "https://mechanics.stackexchange.com", "https://mechanics.stackexchange.com/users/64/" ] }
86
I'm curious as to what are the particular difference between the various drive systems available in modern vehicles. FWD and RWD are pretty straightforward - Front Wheel Drive cars put the power to the front wheels, Rear Wheel Drive puts it in the rear. However, the remaining variants are less clear to me. What is the difference between 4WD and "Part-Time" 4WD? How is AWD different from any 4WD? Are there any systems I've missed?
To expand a little on what Eric said... I consider 4WD versus AWD to largely be marketing terms, differentiating vehicles with additional ground clearance and plating underneath to protect sensitive components while going over rough terrain off-road (4WD) from systems that are targeted more towards on-road travel (AWD). Though this is confused more by companies making their own names for all wheel drive systems like quattro, Syncro, SH-AWD, Hydratrak, etc... I think it's worth differentiating between these flavors of drive:

Full time AWD. These systems are designed to be left engaged even when on dry roads.

Part-time AWD. These systems usually have a locked center differential (AKA transfer case), which will cause excessive wear on the drivetrain and/or tires if left engaged on dry roads. It's meant only to be engaged in reduced-traction situations like snow, ice, or gravel.

Hi/Lo AWD. This is probably the most likely to be called 4WD, because it's usually fitted on vehicles meant for off-road use. This will either be part time or full time, and on older vehicles would also have "locking hubs" up front. This has an additional set of gears after the transmission, which can increase the gear ratio, particularly for off-roading or other high-torque applications.

RWD. Only the rear wheels are driven.

FWD. Only the front wheels are driven.

However, beyond AWD, FWD, and RWD, there are the front and rear differentials that can make a dramatic difference in how well a vehicle can handle low-traction conditions. The differential is what connects the drive-line to two output shafts, say the ones going to the wheels on the left and right sides. It allows the wheels to turn at different rates as you go around corners. The inside wheels will describe a smaller circle, and therefore travel less distance than the outside wheels.

Not all differentials are created equal. Normally, differentials are of a type called "open". A sad artifact of these differentials is that power goes to the wheel with the least traction. So if one wheel is on ice, and one is on dry pavement, you will sit there spinning your wheels.

There is also a huge variety of "limited slip" or "locking" differentials which either allow you to manually lock and unlock them, or automatically distribute power to both wheels. These are more frequently found on sports cars, sometimes as additional-price option packs or upgrades, sometimes standard, in RWD and FWD cars. They are also used in better AWD systems in the center, front, and/or rear differentials.

There are also some hybrid systems, such as Audi's quattro system. Some models of this use a Torsen center limited slip differential and open differentials in the front and back, but they do this trick where they use the ABS sensors to detect wheel spin, and then apply the brakes to the spinning wheel, forcing power to go to the other wheels. While not literally a limited slip differential in the front and back, this is still extremely effective. Sadly, the manufacturers don't really make it clear what exactly their system is.

In short: If there is a lever or button to turn on AWD/4WD, it is a part-time system. AWD versus 4WD are basically the same, but 4WD tends to be used on off-road oriented vehicles. Not all AWD systems are created equal, so you'll have to research if you want specific functionality out of the system.
{ "source": [ "https://mechanics.stackexchange.com/questions/86", "https://mechanics.stackexchange.com", "https://mechanics.stackexchange.com/users/78/" ] }
92
My Honda Civic has a D gear and a D3, which apparently prevents it from going into 4th. Why would I want to do this? What is it good for? Does it make my fuel efficiency better or worse? Does it make the transmission's job easier or wear it out faster?
This is analogous to downshifting in a manual. This is a lower gear for the transmission, which means the engine revolves at a higher rate, producing more back pressure at the same speed than a higher gear would. When going down a hill, downshifting will reduce the demands on the braking system, due to the back pressure. You will often see truck drivers downshifting on long hills so that their brakes do not overheat. You would only use D3 while going downhill with a load so you can use your brakes less. Your mileage would be worse since the engine is running at a higher rpm. Only if you used D3 a lot would it cause any appreciable wear on your transmission.
{ "source": [ "https://mechanics.stackexchange.com/questions/92", "https://mechanics.stackexchange.com", "https://mechanics.stackexchange.com/users/31/" ] }
116
Most of the time when I first turn on the A/C in our 2008 Toyota Sienna, the air that blows out has a very distinct chemical - nasty smell. After a while it seems to go away. I'm not sure if I just get used to the smell or it actually goes away. When I just turn on the fan with no A/C there is no smell. Also if I drive somewhere then leave shortly thereafter, it doesn't stink when I fire it up the 2nd (or subsequent) time. Should I be concerned for my health or safety?
This is a common problem for all air conditioners (in a car or not), and is caused by mildew growth. In cars it often happens when people run their A/C on the recirculation all of the time, or the drain gets clogged. The system doesn't dry out completely and mildew starts to grow. You should be concerned about your health, especially if you have allergies. Just imagine all that mildew and god knows what else growing in there and being spewed in your face every time you turn the A/C on... Here's a link to US EPA page describing how mold may affect health, if you are still not convinced. The things you should do to remove the cause of your problem and prevent it from happening again: Run it on recirculation only when something stinks outside, or you want it to cool down quickly. Fresh air from outside will help it dry out better. Make sure that your A/C drain isn't clogged and there is no water building up. And this is what you could do to remove the unpleasant effects: Run the heater on full for a while, that will dry out the system and might 'cook' the mildew. Change your cabin air filter (if you have one). There are special sprays sold to remove the mildew from the A/C system (read the instructions carefully before using them). Just using Lysol or some other stuff like that will work too, but the smell will be more unpleasant. I suggest that you do all of this, and in the specified order.
{ "source": [ "https://mechanics.stackexchange.com/questions/116", "https://mechanics.stackexchange.com", "https://mechanics.stackexchange.com/users/96/" ] }
119
Approximately how long does it take for unleaded gasoline in the tank and fuel lines to become unusable in a car that is not being used? What causes this, and what can be done to prevent or remedy the issue? UPDATE: The car that prompted this question ('96 Lincoln Mark VIII, parked for about 6-9 months without preparation) has started successfully. The engine really seems like it doesn't like the gas, but it runs.
I left my Mazda Protege unused in my garage for over a year, and the fuel filter was clogged when I went to start it again. It started fine, but would not rev and was basically undriveable. I'm sure it wasn't great for the fuel injectors, too. Also, I once bought a motorcycle with 10 year old gas in the tank, and the bike wouldn't run at all. I could hear a cylinder occasionally fire, but the gas was totally unusable. I poured it into a metal bucket and lit it on fire. The carbs were very badly gummed up and clogged, too, and had to be torn apart and soaked in carb cleaner. If you use STA-BIL you can store gas for over a year without issue. Just make sure you run the engine for a few minutes to circulate the STA-BIL through the car. Once gas goes slightly bad, you can still use it if you put a little (1 gallon) in an (almost) full tank of fresh gas. The stuff I pulled out of the motorcycle was much darker than normal, and smelled so bad that I didn't even consider using it in an engine. If you're planning to store a vehicle for a very long time, it makes sense to spend the time to totally drain the fuel system. It's cheaper than replacing the fuel system, later on, due to dried up, gummy gas deposits. Here is some info I found on the STA-BIL website: Q: How long will STA-BIL Fuel Stabilizer keep fuel fresh? A: For 12 months when mixed into fresh gasoline. Doubling the dosage will keep fuel fresh for up to 2 years.
{ "source": [ "https://mechanics.stackexchange.com/questions/119", "https://mechanics.stackexchange.com", "https://mechanics.stackexchange.com/users/78/" ] }
125
Some related topics: How to maintain a sometimes-used vehicle? How long does it take for gas to go bad? The first related topic addresses preparing a vehicle for long-term disuse, and maintaining it during those periods. The second related topic addresses one of the effects of a long-term inactivity on a vehicle. Here, I'm looking to cover what should be done after the fact. Say a vehicle has been parked for several months and was neither prepared for this, nor maintained during that period. What particular adverse effects of long-term un-maintained storage should one be aware of when trying to revive a car in this situation? What things should be checked or touched on before starting the car? What measures can be taken at this point to prevent further damage to the vehicle during its first re-start, and facilitate bringing it back to a safe and healthy operational status? UPDATE: Good news. The car ('96 Lincoln Mark VIII) that prompted me to ask this question has started successfully. It had been sitting for probably 6-9 months without any preparation. The battery is definitely toast, and the engine really doesn't like the condition of the gas, but it started pretty well and got around the block a few times. For the first start, I swapped in the battery from my '94 Jeep Grand Cherokee. Then, after pulling the Mark VIII out to where I could reach it with jumper cables, I put the original battery back in. It's had to be jump-started every time since. The engine is running pretty rough, and I've got a constant (and occasionally flashing) CEL going. It also turns out that I've got about 3/4 of a tank to burn through before it'll get any really good gas. I threw in a couple gallons of premium (91 octane around here) and a can of SeaFoam to hopefully help though. Thanks for the tips, guys!
If you have one available, use a trickle-charger to bring the battery back up slowly, instead of jump-starting it. Check the tires. They are probably pretty low at this point. See if they are dry rotted (all cracked and ready to wear quickly). Check all of the fluids in the typical way. Note that it's okay if the oil shows a little low since it's not warm yet. Check the serpentine belt - like the tires, it is rubber and susceptible to dry rot. If there is room in the fuel tank, I'd go ahead and fill it up sooner rather than later to get some fresh fuel mixed in with the old. Keep a baseball bat handy for the pissed raccoon that just lost its house. Hmm, that's actually a good point - if you live out in the boonies, bang around on stuff to scare off the critters. It's a pain to clean animal parts off the engine fan. Start it up and see how it drives. Take it easy while it warms up. I'd drive it around for a while, and when it's nice and hot take it to the gas station as mentioned earlier. Then check your warm levels (engine oil and transmission oil) and check again for any leaks now that it's been running for a while. I'd probably go ahead and change the engine oil and whatever else is due based on the usual time/mileage. All in all, nothing particularly different from what should be happening at any normal oil change, since you are talking about months and not years.
{ "source": [ "https://mechanics.stackexchange.com/questions/125", "https://mechanics.stackexchange.com", "https://mechanics.stackexchange.com/users/78/" ] }
312
I'm shopping for a new car battery for a 1998 Subaru Outback (2.5l). For the physical dimensions, the battery has to match. But I see some variance in capacity (Ah) rating, and in price too, of course. The intuition goes, as usual, "higher is better". The obvious advantage: if I replace my current 55Ah battery with a 60Ah one, I'll be able to, say, play music with the engine off for longer, and still be able to start the car. Are there disadvantages of higher capacity though? Also, apart from capacity, which, if any, of the other characteristics (CA, CCA, HCA, RCM, etc.) make a significant difference in practice and should be paid attention to?
In most cases, the stock-size battery is correct, and that's what you should stick with. A smaller battery is likely to fail you sooner, unless you live somewhere without a winter (Hilo?). A larger battery is an extra expense, extra toxins, extra weight, and won't give you dramatically longer life. CCA (cold cranking amps) is the main thing to pay attention to. This is a measure of how much current the battery can provide at 0 deg F, as @Pangea points out. Batteries work better when they are warm, so the colder it gets, the less current it can provide. Starting an engine requires a lot of current, over a very short time. Meanwhile, cold engines are harder to start, requiring more power to get them going. As batteries age, their internal resistance increases, reducing CCA. Total capacity decreases at the same time. This is why a newer battery can start your car in any weather, even if you leave the headlights on overnight, while an old battery might only start the car when fully charged, in good weather. Starter batteries are optimized for short-duration, high-current draws, followed immediately by recharging. They are built with thinner plates with more surface area, which speeds up the chemical reactions that release the energy. If you draw them down to 1/2 charge, and then leave them like that for a while, hard crystals will form on the plates, which won't re-desolve easily on recharging. That reduces capacity and CCA over time. So, keep your starter battery fully charged, and avoid dropping below 80% during non-starter usage. (Deep-cycle batteries, like on golf carts, fork lifts, some boats, some RVs are able to tolerate being discharged deeper and left that way longer than starter batteries, but they really don't like a high-current draw like for starting an engine. Best to draw on them gently, and never below 50% if you can help it.) So , a regular car battery is not a good device for running appliances for a long time, without the engine running. However, a stock car stereo doesn't draw much current, so you can probably get away with it for a while without a problem. But if you leave it on all night, you may drain your starter battery enough to damage it. Increasing your battery size will allow you to run those appliances for a little longer, but it's still not the right technology. In an RV or boat, you have a starter battery and a separate "house" battery, which is deep-cycle. Then use a battery isolator switch to use only the starter to start, then recharge it, then recharge the house battery from the alternator. (You can also use the house battery to assist the starter when the starter battery is low.) Some people build these in to their cars if they have really big stereos, or a small camping setup with fridge in a van or pick-up camper. You can learn more from RV articles, like this one: http://www.phrannie.org/battery.html
{ "source": [ "https://mechanics.stackexchange.com/questions/312", "https://mechanics.stackexchange.com", "https://mechanics.stackexchange.com/users/104/" ] }
330
I'm trying to disassemble the exhaust system on a 2000-2003 Nissan Maxima and I'm having trouble removing one last bolt. It's got a 14mm head and it's just plain STUCK. I've tried soaking it overnight with PB Blaster, then the next morning I tried removing it with a 2ft long breaker bar + a 2ft extension. After making absolutely positive I was trying to turn it in the proper direction, I braced myself by putting both feet on a frame rail and pushing with all my strength. And then, my socket shattered into a billion pieces. Bolt: 1, Me: 0.
My general rule for exhaust bolts/nuts is to order new ones with the part you are replacing, knowing full well that there is a good chance you'll destroy the ones you are removing. Better to be prepared ahead of time than to be stuck waiting for new parts from another country after you've disassembled everything. Personally, I have a rule of ordering every bolt and nut that is attached to the exhaust member I'll be replacing. Even if there isn't much rust, I never know when a bolt will snap. First task is to let the exhaust completely cool. It's possible that the parts are made of different types of metal which expand and contract at different rates. Now, how to get the rusted bolts off: Use an impact wrench. Sometimes the vibration and hammering action works better to "knock" a bolt loose, compared to wrench/breaker bar. In my experience, this is the first tool of choice for 95% of techs working on an exhaust system. Use a big breaker bar. 2 ft is okay, 4 ft is more like it. Use high quality, name brand tools that won't flex or shatter on you (or at least give you free replacements when they do shatter). The flexing is important. If the cheap breaker bar is flexing instead of transferring the full torque, you may as well not have it. Sometimes there is just too much material rusted off of the bolt for either of the above to work. Or one of the above techniques may round out the bolt before it comes out. In this case, the bolt needs to be destroyed so you can move on with life. Some options are: Die grinder with appropriate wheel. Hacksaw / airsaw. Air hammer with chisel bit. In extreme cases, torch. As always, use high temperature anti-seize when putting in the new bolts. They will go in easier, and the next person down the road will thank you. Safety Note: All of the above methods assume you know the safety procedures for the tool. Of particular importance is having the vehicle properly raised on the car lift. Some of these methods involve violently pulling stuff around, which will cause the car to move.
{ "source": [ "https://mechanics.stackexchange.com/questions/330", "https://mechanics.stackexchange.com", "https://mechanics.stackexchange.com/users/136/" ] }
334
The painted edge of a side mirror on another car recently scraped the side of my vehicle, leaving a scuff of paint about half a meter long. There is no indentation or scratch, but the other vehicle left its mark behind in a very difficult-to-remove paint scuff. I can remove it by carefully scratching with fingernails, so I am sure it will come off, but I'm not sure how best to fully remove it, as any paint-removing chemicals surely will damage the coat I don't want to remove. What do you recommend for removing paint scuffs like this?
If it's deep enough that merely wiping it doesn't remove it, the scuff is deeper than just the very top surface of your paint. First, try Meguiars Scratch-x with a microfiber cloth. Rub it in. Try two or three passes to see if this removes the scuff mark. Doing so by hand won't remove any of your paint unless it's been compromised (cracked, flaking, peeling, etc). If this doesn't work, have a pro detailer take a pass with a random orbital (or rotary, if they know what they're doing) polisher plus some compound. This will take it right out and leave the paint pretty shiny. The problem with this is that it will be shinier than the rest of your car, and you may be tempted to just have them do the whole thing.
{ "source": [ "https://mechanics.stackexchange.com/questions/334", "https://mechanics.stackexchange.com", "https://mechanics.stackexchange.com/users/170/" ] }
406
My car still starts fine, but when I turn off the car (just the motor... the key is still in and "on") I noticed the headlights dim to almost off. It's an older car and the headlights don't turn off automatically. The battery is over 4 years old, and in Arizona "they say" batteries only last 2 or 3 years. Should I go and get a new battery, or wait until the car won't start? Can I test the battery with a simple tester to see if it's still delivering 12 volts?
Step 1: Perform an open circuit voltage test with the vehicle off and the battery disconnected. Check the voltage with a DVOM (Digital Volt Ohm Meter):

12.66 = 100% state of charge
12.45 = 75% state of charge
12.24 = 50% state of charge
12.06 = 25% state of charge
11.89 = 0% state of charge
10.45 - 10.65 = bad cell, battery should be replaced

If the battery is at or near 100%, go to step 5. If the battery is less than 100%, go to step 2.

Step 2: If the battery is sealed (maintenance free), go to step 5. If not, check the electrolyte level: if it is low, add distilled water and go to step 4; if not, go to step 3.

Step 3: Hydrometer test. Check the specific gravity of each cell; they should all be at 1.265 or more if the battery is in good condition. If all the cells are below 1.225, go to step 4. The reading from the highest cell should be no more than 0.050 above the lowest cell; if it's more, replace the battery, you have a weak cell. Otherwise go to step 5.

Step 4: Charge the battery. First do a 3 minute charge test: hook up the charger to the battery, set the charger to 30 - 40 amps, and at 3 minutes check the voltage while charging. If it's above 15.5 volts, replace the battery, it will not accept a charge. If it's below 15.5 volts, reduce the setting on the charger to the 2 - 10 amp range (less is better, fast charging reduces the overall life of the battery) and continue to charge the battery until the gauge on the battery charger shows close to zero amps. Note this could take 8 or more hours depending on the state of charge and condition of the battery. Unhook the charger, wait 2 hours for the surface charge to dissipate, then go to step 1.

Step 5: Load testing is done with a specialized tester; most of your local parts or battery stores will have one and provide free testing. First I will cover how to load test with a load tester, and then a test you can do without one.

With a load tester: load the battery to 1/2 the CCA (Cold Cranking Amps) for 15 seconds while watching the voltage; it should not drop below 9.6 volts, and if it does, replace the battery. Once the load is removed, the battery voltage should recover to at least 12.24 volts within 5 minutes of the load test; if not, replace the battery.

Testing without a load tester: this doesn't put as much of a load on the battery as the test above, so if the battery passes this it could still be bad, but if it fails you can be sure it is bad. Disable the vehicle from cranking by disconnecting the coil wire or fuel pump relay. Turn on the bright headlights and crank the vehicle for 15 seconds (no longer than this, and do not repeat this test within 3 minutes or damage to the starter can occur). With a DVOM on the battery, see if the voltage drops below 9.6 volts or does not recover after the test as mentioned above. If either of those two fail, the battery is bad.

If the battery checks out, the next test would be the charging system.
{ "source": [ "https://mechanics.stackexchange.com/questions/406", "https://mechanics.stackexchange.com", "https://mechanics.stackexchange.com/users/244/" ] }
417
I just bought new tires for my car, and included was free tire rotation every 7,500 miles for the life of the tires. I have heard many things about tire rotation, including: that it's a worthless practice, don't bother; that tires should never be rotated from one side of the car to the other (something about the tires rotating in the opposite direction and the belts inside shifting); and that tires should always be rotated in an X pattern (front tires straight back, back tires get swapped and put on the front). Is there any real value in rotating the tires? Is it just about evenly distributing the wear of the tread on the tires? Can this be achieved just through careful monitoring of the tire pressure and alignment of the car?
While I believe the tire shop gives free rotation to get you in the habit of coming back to them every few months so they can sell you more, it can be important to rotate your tires. It all depends on the wear of the tires: I have had sets of tires that wore extremely evenly and I only rotated them once; other sets of tires I have had wore very unevenly and had to be rotated multiple times during the life of the set.

For the question of rotation arrangement: it depends on your car's manual, and it depends on your specific tires. As other people have said, some tires are unidirectional, and if you put the tires on the other side of the car, you will have problems. Most of the recent cars I have driven tell you to rotate your tires front to back and NOT across the car. I believe many manufacturers are going to unidirectional tires.

The real value that comes from rotating your tires is that when the tires wear unevenly, rotating the tires spreads the wear out more evenly on each tire, as well as over the set.

On each tire — some cars and/or drivers' habits cause the front tires to wear more in certain places of the tire; moving the tires to the rear axle generally causes those worn places to not get used as hard and wears the other places on the tires. (For example, a lot of FWD cars' front tires wear a good bit on the corners since they are used for steering and power.)

Over the set — the tires on the powered axle tend to wear out faster since more force is exerted through them to the pavement. To maximize the life of the tires, you switch the tires on the powered axle to the non-powered axle so that those less-used tires (with more tread) are used to even out the wear.
{ "source": [ "https://mechanics.stackexchange.com/questions/417", "https://mechanics.stackexchange.com", "https://mechanics.stackexchange.com/users/251/" ] }
430
Recently they replaced my stock radiator with an aftermarket one (at the dealership, so I hope they knew what they were doing), and since then I feel that the car is not warming up enough. Even if I warm it up completely in the garage, as soon as I start driving the temperature drops. Driving at high speed drops it even more, and when the temperature outside is cold it gets worse too. I don't have a precise temperature gauge though (only 4 digital bars on the gauge) so I can't be sure. This new radiator is smaller/thinner. Can a radiator be too "good" and cool down a car too much? The thermostat should completely disconnect the radiator from the circuit if the car is cold, shouldn't it? (At the dealership they say it all behaves as it should, and the car is not over-cooling.) The car in question is a Honda Prelude '92.
If the thermostat is operating properly, the radiator will only come into play when the thermostat opens, when the engine is at normal operating temperature (around 190 degrees, give or take). Adding a gigantic radiator won't make a bit of difference, because if the engine gets too cold, the thermostat will close, causing the engine to heat up again. In the winter, the cabin heater will draw around 20% of the engine's heat in order to heat you. This will cause the car to heat up slightly slower, but once it's warm it will stay that way. It's possible that the thermostat isn't fully closing when you are driving, but works OK at idle. You might consider getting a high quality thermostat and replacing the existing one. It's also possible that the coolant temp sensor is on its way out. On the highway, a car produces lots of heat (at a higher RPM), which indicates to me that the thermostat is opening more (or failing to close properly) at higher RPMs, since you are seeing a temp decrease at speed. Again, a new high quality thermostat should fix this.
{ "source": [ "https://mechanics.stackexchange.com/questions/430", "https://mechanics.stackexchange.com", "https://mechanics.stackexchange.com/users/164/" ] }
498
Background I've been trying to do a lot of research on boost lately because I'm planning on running a moderate turbo setup on my daily/light-duty-autox car in the future. I'm trying to get into the physics of things so that when I do my build, I'm not just slapping parts on and hoping for the best, but instead engineering a motor to work. The question My main question is this. I've been reading this article and while it's deepened my understanding of compression, it leaves me with this question: I know that engines running a higher static compression ratio require higher octane fuels to prevent detonation, so why do motors with higher effective compression ratios not seem to require higher octane fuels? I usually hear about people running turbo setups and simply using regular pump gas and having no trouble, even though the effective compression ratio would be much higher than most naturally aspirated engines. For instance, the setup that I was considering would be a turbocharged Honda d16a6, which has a static compression ratio of 9.1:1, with 10 psi of boost, giving it roughly 15:1 effective compression.
tl;dr: They do. It's just harder to tell how much. The longer answer is that they do and that effective compression is failing you as an approximation for actual effects. Think about detonation (AKA premature ignition of the fuel-air mixture). Normally we consider two causes: compression (the change in the space enclosed by the cylinder as the piston moves up and down) and temperature (e.g., measured temperature of the intake air). In reality, there is only temperature. Let's back all the way to the ideal gas law : PV = nRT where P is pressure, V is volume and T is temperature (in degrees Kelvin, remember!) and the rest are interesting constants that aren't germane to this discussion. Compression causes that V value to decrease and P to increase. In an ideal world, that would be the end of it: the compression of the cylinder would be a 100% efficient process without temperature increase. Unfortunately, we live in an actual rather than an ideal world. The best simple model for what's happening in the engine is that it is a system of constant entropy . This means that we are restricted by the heat capacity ratio of the gases in the system. If we use a heat capacity ratio of 1.3 and an example compression ratio of 10:1, we are looking at an approximate doubling in temperature (degrees Kelvin!). In short, compression makes gases hotter. Why is this bad, though? Think of it this way: you have a fixed temperature budget for a certain octane gas. If T gets higher than T_ignition , bang. So, as you point out, you can add an intercooler to the system, reducing the input air temperature. Likewise, you can change the amount that V changes. This increases the amount of temperature increase that your engine can tolerate before detonating. Now, adding a turbo on the intake air compresses the normal atmospheric pressure to something significantly higher, resulting in a change in those other constants that I previously brushed off (check turbo volumetric efficiency for more information) and increases the temperature. That eats into my temperature budget. If I used lower octane gas, that would lower the threshold for detonation and, at boost, I could be looking at engine damage. So, after all that, what do you do? Research research research: don't build in a vacuum. Copy other people's layouts or improve upon them. Measure your air intake temperature, before and after the turbo. Find the best gas that you can. Tune the engine computer to keep your engine from blowing up. On tuning: one thing the ECU can do is add extra fuel to the mixture, thereby cooling the mixture down. Admittedly, using fuel as a coolant is not conducive to absolute efficiency but shouldn't be an issue when driving around out of the boost. As always, less right foot = less gas spent. All of the above is discussed in Corky Bell's Turbocharging book Maximum Boost - a very entertaining read for geeky people like myself. Following up some time later : I just noticed the specific question about 9.1 static compression ratio running 10 psi of boost. As an example, my WRX runs 8:1 at about 13.5 psi so, on the face of it, 9:1 with 10 psi seems achievable. Let's look at one of the more arguably sensible equations for effective compression ratio (which, as we noted, is still an approximation of fairly complex thermodynamics): ECR = sqrt((boost+14.7)/14.7) * CR Where ECR is the "effective compression ratio" and CR is the "static compression ratio" (what you started with before adding boost). boost is measured in psi (pounds per square inch). 
Remember, the goal of this equation is to tell us whether our proposed setup is feasible at all, and whether it will be able to run on gas that I can purchase on the street rather than at the racetrack. So, using my car as an example:

ECR = sqrt((13.5 + 14.7) / 14.7) * 8 = sqrt(1.92) * 8 = 11.08

Using this equation, the implication is that my effective compression ratio is about 11:1 at peak boost. That's within the bounds of what you could expect to run a normally aspirated motor on with pump gas (93 octane). And, proof by existence, my car does run on 93 octane just fine. So, let's look at the setup in question:

ECR = sqrt((10 + 14.7) / 14.7) * 9.1 = sqrt(1.68) * 9.1 = 11.79

As cited in the reference, 12:1 is really about as far as you can go with a street car, so this setup would still be within those limits.

For completeness, we should note that there is also another ECR equation that wanders around the internet which omits the square root. There are two problems with that version: First, it would result in an ECR for my car of 15:1. That's a bit ridiculous: I wouldn't even want to start a motor like that on street gas. Second, ECR is an approximation anyway: the real answer to the question of "how much boost can I run?" is derived from critical factors such as intake air temperature and compressor efficiency. If you're using an approximation, don't use one that immediately gives you useless answers (see point 1).
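To make the two back-of-the-envelope calculations above concrete (the rough temperature doubling under compression, and the ECR formula), here is a minimal sketch. The heat capacity ratio, intake temperature and 14.7 psi atmospheric pressure are illustrative assumptions, not measured values for any specific engine.

```python
import math

# A minimal sketch of the two estimates discussed above. The heat capacity
# ratio (1.3), intake temperature (300 K) and 14.7 psi atmospheric pressure
# are illustrative assumptions, not measurements.

def temp_after_compression(t_intake_k, compression_ratio, gamma=1.3):
    """Charge temperature (K) after isentropic compression."""
    return t_intake_k * compression_ratio ** (gamma - 1)

def ecr(static_cr, boost_psi, atm_psi=14.7):
    """Effective compression ratio per the square-root formula quoted above."""
    return math.sqrt((boost_psi + atm_psi) / atm_psi) * static_cr

print(temp_after_compression(300.0, 10.0))  # ~598 K: roughly double, as claimed
print(ecr(8.0, 13.5))                       # ~11.1: the WRX example
print(ecr(9.1, 10.0))                       # ~11.8: the proposed 9.1:1 + 10 psi setup
print((13.5 + 14.7) / 14.7 * 8.0)           # ~15.3: the no-square-root variant, clearly off
```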
{ "source": [ "https://mechanics.stackexchange.com/questions/498", "https://mechanics.stackexchange.com", "https://mechanics.stackexchange.com/users/92/" ] }
718
What is the difference between DOT3 and DOT4 brake fluids? What could possibly go wrong if I use a (4 times cheaper) DOT3 brake fluid instead of the DOT4 which the manufacturer recommends (but allows adding "some" DOT3 occasionally)? What will happen if I just use only DOT3?
If the manual says small amounts of DOT3 can be used, what it most likely means is that if you find yourself with low brake fluid and only DOT3 is available, it is better to use that than to have no brake fluid. Once you get back home, though, you need to get the recommended DOT4 back in the system by bleeding the system and refilling with DOT4.

As already stated, DOT4 handles higher heat. If your car calls for DOT4, that means the manufacturer is not confident that the braking system will keep the brake fluid below a temperature that DOT3 can handle.

Another point to make here is that there are two boiling temperatures for brake fluid, Dry and Wet. When you've just replaced your brake fluid and the system has been properly bled, you are working at the Dry boiling temperature. Over time, water works its way into the system through age, heat cycling, permeation through the hoses, etc. You are then working at the Wet boiling point for the fluid.

So again, if you get stuck in a spot where you need to put some new brake fluid into the system and all you have available is DOT3, most likely your DOT4 has degraded some by then and the fresh DOT3 will be close to where the DOT4 is at. But this won't stay true: the DOT3 will degrade once it's in the lines, so the advice above to replace it as soon as possible still holds.
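For a rough sense of the heat margins involved, the commonly cited FMVSS 116 minimum boiling points are sketched below. These are regulatory minimums and real products often exceed them, so treat the figures as reference values rather than as the spec for any particular bottle of fluid.

```python
# Commonly cited FMVSS 116 minimum boiling points (deg C). Regulatory minimums
# only; any specific fluid may be rated higher.
BOILING_POINTS_C = {
    "DOT3": {"dry": 205, "wet": 140},
    "DOT4": {"dry": 230, "wet": 155},
}

for grade, bp in BOILING_POINTS_C.items():
    print(f"{grade}: dry {bp['dry']} C, wet {bp['wet']} C")
```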
{ "source": [ "https://mechanics.stackexchange.com/questions/718", "https://mechanics.stackexchange.com", "https://mechanics.stackexchange.com/users/185/" ] }
793
I own a 2010 Prius. It has a dead battery and is parked facing forward in my garage. I do not have long enough jumper cables. I’ve searched the Internet. Did Toyota really create a car that cannot be put into neutral if the battery is dead? I need to move it out of the garage. Am I missing something here?
According to the 2010 Prius Emergency Response Guide (page 10):

Being electronic, the gearshift selector and the park systems depend on the low voltage 12 Volt auxiliary battery for power. If the 12 Volt auxiliary battery is discharged or disconnected, the vehicle cannot be started and cannot be shifted out of park.

The auxiliary battery is located in the cargo area. It is concealed by a fabric cover on the passenger side rear quarter well (page 15).

Sounds like you can get to the auxiliary battery from the back of your car and charge or jump it there, so there is no longer a need to put the car into neutral.
{ "source": [ "https://mechanics.stackexchange.com/questions/793", "https://mechanics.stackexchange.com", "https://mechanics.stackexchange.com/users/515/" ] }
911
AFAIK vehicle manuals say the owner must replace spark plugs every N miles. Suppose he fails to do so and a single spark plug unexpectedly dies in an engine with multiple cylinders, right on the freeway. What is likely to happen to the vehicle?
If you keep driving it that way for very long, the fuel that's pumping through the non-firing cylinder will contaminate the catalytic converter. That can result in the catalyst overheating and melting, possibly blocking the exhaust in the process (BTDT and cats are not cheap...). If you're really having a bad day, the cat could theoretically catch on fire and possibly result in a full-blown vehicle fire.
{ "source": [ "https://mechanics.stackexchange.com/questions/911", "https://mechanics.stackexchange.com", "https://mechanics.stackexchange.com/users/433/" ] }
1,125
I have way too much oil in my Acura TSX. Do I need to take care of it without turning it on or can I drive it to a service station? If I need to do it in place, I don't have the ramps or a pan or the daylight left, and my wife needs the car in the morning. Can I siphon the oil out of the top? If so, what type of tubing won't be dissolved by the oil?

========

If you want to laugh, yes I'm an idiot. The car's been great for years and years and I've never been good about checking the oil. Just regular oil changes and there's been no problem. The second to last oil change, they sold me on synthetic and said I'd be good for a year. Well, after a year and many miles, it turned out the engine was almost dry! Apparently "one year" was a bad recommendation. No damage done, but close. Anyway, now I've been anxious about running out of oil, so I checked it and saw very little oil on the dipstick. Since it's been so long, I'd forgotten what to expect - "that's a long piece of metal with nothing on it", I think. So I get a big jug, enough for a full oil change, and empty it into my engine. Only when the oil level didn't rise nearly as far as I expected did I realize I had made a mistake. My only excuse is that the dipstick level indicators are very subtle on the TSX, just two holes. I was looking at the big bends (what are they for anyway?).
You haven't wrecked the car but you should get the oil down to the appropriate level. If the oil isn't hot, almost any sort of plastic tubing can be used for siphoning. It's easiest to go in via the dip stick. Remember not to use the "suck start" siphoning method as you don't want a mouthful of oil. If you have a long enough piece of tubing, you can stick the excess down the dipstick tube, put your thumb over the end, pull out the slack (that is now full of oil) and you'll have an immediate siphon. Also remember to siphon the oil back into the original container - you might as well keep it until you do need it. My vote is that you take care of it yourself - I hate letting other people know that I made a goof, especially if I can fix it myself.
{ "source": [ "https://mechanics.stackexchange.com/questions/1125", "https://mechanics.stackexchange.com", "https://mechanics.stackexchange.com/users/693/" ] }
1,210
Does the act of downshifting to slow your vehicle down have any negative side effects? I don't downshift more than one or two gears at a time and I let out the clutch slowly enough that I don't rev very high. My redline is at 5.5k and my revs are around 3 to 3.5k when I do it. This process makes my brakes last much longer, but I'd like to ensure it's not at the cost of something else.

Edit: Thanks everyone for your answers, but I'm seeing very conflicting information. I found this on Wikipedia, but then again, it's Wikipedia. Does anyone have anything to support their thought one way or another?

Engine braking passively reduces wear on brakes and helps a driver maintain control of the vehicle. Active use of engine braking (shifting into a lower gear) is advantageous when it is necessary to control speed while driving down very steep and long slopes. It should be applied before regular disk or drum brakes have been used, leaving the brakes available to make emergency stops. The desired speed is maintained by using engine braking to counteract the gravitational acceleration.
Most of the time when you drive, you're putting a load (and causing wear) on what I'm going to call the "forward" face of each tooth on each gear in your drivetrain. The front of a tooth on the crankshaft pushes against the back of a tooth on the next gear in line, which pushes the next gear, etc. When you use "engine braking", all you are doing is engaging the teeth in the opposite direction, and putting force and wear on the faces that normally are just along for the ride. Now, does that mean you're wearing your engine out faster? Marginally... but the parts you're wearing out would normally have to be replaced (if at all) because they'd worn out from the other side; you're wearing surfaces that would usually be thrown out with hardly any wear at all. To borrow a phrase from the medical field, your engine/transmission will die with that wear, not of it. To the people who say that you're transferring the wear from your brakes to your clutch, all I can say is... you're doing it wrong! If you downshift as quickly and smoothly as you upshift, then the added wear and tear on your clutch will be a statistical blip - seriously, how many times do you downshift for this reason, as opposed to normal shifting? (If your answer is "at every light", then the poster who advised you to calm down your driving habits had a point.) Having said that, there's a seriously wrong way to do this; I used to do it when I was first learning to drive stick, and it was incredibly stupid: pushing the clutch all the way in and letting the RPMs fall to idle, then letting the clutch out and allowing the engine to slow the car down in the same gear. If that's how you're doing it, STOP IT! That way wears out the clutch very fast (which might be what the other posters had in mind), drops your speed dramatically without lighting up your brakelights (I confess, that's why I started doing it - trying to sneak-slow past a cop), and runs a high risk of stalling the engine and seriously ^&*@ing something up. Don't be that guy.
{ "source": [ "https://mechanics.stackexchange.com/questions/1210", "https://mechanics.stackexchange.com", "https://mechanics.stackexchange.com/users/720/" ] }
1,410
Most of the advice I've seen for jump starting cars instructs to connect the black wire to a bare piece of metal on the car with the dead battery. However, I've always just hooked up both poles to the corresponding poles on the other car's battery. I've never experienced any negative consequences, and it has always worked fine. Out of curiosity, does it matter, and if so, why?
From an electrical perspective, it doesn't matter. However, a lead-acid battery that is charging or discharging rapidly will give off hydrogen, which is highly explosive. Since you generally make the ground connection last, there's a good chance that you'll get a spark, which is enough to ignite the hydrogen. So while it's unlikely that you'll have anything explode, under extreme conditions it's possible. Making the ground connection away from the battery eliminates the possibility.
{ "source": [ "https://mechanics.stackexchange.com/questions/1410", "https://mechanics.stackexchange.com", "https://mechanics.stackexchange.com/users/442/" ] }
1,463
I'm looking to buy a used car. What do you think I should check (mechanically) before buying it (especially if it's diesel)?
Inside the car
- Manual: Check the manual for the service history. Was it serviced regularly at an authorized dealership?
- Ash tray: Smells like cigarettes? The previous owner was a smoker; a deal-breaker for me personally.
- Interior: Does the amount of wear correspond with the expected amount of wear for a car of that age and mileage?
- Trunk: Spare tire present? Jack present? Condition of the carpet?

Under the hood
- Engine: Does it look too clean? If the garage cleans the engine, it most likely has a defect like an oil leak.
- Oil cap: White creamy stuff on the inside of the cap? Can indicate a broken head gasket, or that the car was used for a lot of very short trips. Walk away; it'll cost you too much money in repairs.

Exterior
- Paint condition: Lots of small dents and scratches indicate a sloppy previous owner. Pay attention to door edges and fender corners.
- Front fender: Look at the underside; lots of scratches present? The previous owner didn't slow down for speed bumps. Can cost you money for new shocks, ball joints, etc.
- Door handles: Lots of scratches around the door handles? A woman owned the car before you; expect to find some small toys and jewelry in the car. ;)
- Parking damage on the rims: Lots of damage? Walk away; rim repair is expensive and the steering parts will be more worn than with careful drivers.

Test-drive
- Steering: Accelerate to 30 mph (50 km/h) on an empty straight road and take your hands off the wheel. Does the car continue to drive in a straight line?
- Brakes: Go to an empty parking lot, accelerate to 30 mph (50 km/h) and press the brake really hard. Does the car brake in a straight line without tugging on the wheel? Do you feel the vibration from the antilock brakes (if present)?
- Gearbox/clutch: Does it shift smoothly? Is the clutch worn out? (Engages very late; the car does not stall when letting the clutch out without applying some gas.)
- Engine noise: How does it sound? No weird noises?
- Shocks: Listen for squeaky noises or thumping sounds.
- Engine temperature: Does the car reach its normal operating temperature after a few miles/kilometers?
- Air conditioner/heater: Is the air conditioner really cold? Make sure the ventilation fan doesn't make weird noises.
- Lights: Does everything work?

Price negotiation
- Timing belt almost due? Try to get a new one (including the water pump) included in the price.
- Air conditioner not cold enough? Ask the dealer to recharge it with refrigerant.
- Worn tires? No new (good!) tires = no deal.
- And I think a full tank of gas is part of the deal. :)
{ "source": [ "https://mechanics.stackexchange.com/questions/1463", "https://mechanics.stackexchange.com", "https://mechanics.stackexchange.com/users/719/" ] }
1,494
I've heard from some people that high octane fuel will increase gas mileage. Around here, we have the basic unleaded (87), mid-grade (89), and premium (93 or 91). I've been using the basic 87 unleaded forever because it was the cheapest. Also, my car's manual says the lowest grade to use is 87 (2004 Grand Am V6). On a car with no modifications, will high octane fuel in general improve gas mileage, decrease it, or have no effect?
Use the recommended gas for your car. Going lower than the recommended may reduce fuel economy as the engine may have to retard timing to avoid detonation. Going higher than recommended won't help as your engine is unable to take full advantage of it, as well as the fact that higher octane fuels actually contain slightly less energy (they just offer a more controlled burn that higher compression engines can take advantage of).
{ "source": [ "https://mechanics.stackexchange.com/questions/1494", "https://mechanics.stackexchange.com", "https://mechanics.stackexchange.com/users/783/" ] }
1,856
I've just bought a new 2011-plate car. I've heard stories of people 'burning in' engines for the first few hundred or couple of thousand miles using methods such as:

- Not driving above 50 mph.
- Varying the revs more than usual throughout the entire range.
- Keeping the revs below 3.5k.

I've also heard that modern car engines are broken in at the factory and the above is complete rubbish. Is 'burn-in' necessary on modern cars? If so, how should I 'burn in' a new engine?
You should perform a new car break-in per the manufacturer's instructions. That way, if any warranty issues come up, you've done what they said to do. Generally speaking though (according to the engine builders I've talked to), you do want to vary the RPMs a lot, don't cruise at a steady speed for too long. None of them hold to the drive slow theory, nor the low RPM only idea (some of them do the initial break-in on the dyno with a series of WOT runs through the first few gears). More frequent oil changes are a good idea as a new engine will shed a little bit of metal at first. Usually it's something like initial break-in dyno runs, change oil, drive for 500 miles, change oil, drive for 1000 more miles, change oil, drive for 1500 miles this time, change oil, then onto your normal schedule.
{ "source": [ "https://mechanics.stackexchange.com/questions/1856", "https://mechanics.stackexchange.com", "https://mechanics.stackexchange.com/users/1004/" ] }
1,876
As long as there is some fuel in the tank, does the fuel level matter? In other words, is it advisable to make sure the tank is mostly full most of the time, or is this just silly? Put another way, are there valid reasons why you should avoid running on an almost empty tank (other than the risk of actually running empty)?
Both previous posts are pretty good. I'll add a few more considerations though. On a low tank, during hard cornering, some cars will uncover the fuel pump pickup and starve for fuel. There's been some discussion for years now about keeping 1/4 tank as your minimum because the fuel provides cooling for the fuel pump. Some people argue that the additional cooling will extend the life of the pump. My take on it is that indeed it does, but that it doesn't matter as the pump won't overheat either way (OEM fuel pump failures are exceedingly rare nowadays; nearly all fuel pump replacements that I've seen done turned out not to solve the problem the owner was having...). Any floating debris will stay safely above the pump intake if the tank level is kept up. Run it down far enough and that stuff will get sucked in and contribute to either clogging the filter eventually, or even physical damage.
{ "source": [ "https://mechanics.stackexchange.com/questions/1876", "https://mechanics.stackexchange.com", "https://mechanics.stackexchange.com/users/1011/" ] }
2,013
What is the difference (besides price) between normal, platinum, and iridium spark plugs?
Copper conducts better and is generally used in higher-performance/modified engines. In dedicated race cars resistor-less copper plugs are used. Iridium and platinum plugs are chosen for their longevity only. You shouldn't gap iridiums because of potential damage to the tips. For that reason and their inferior conductivity, they aren't used in modified engines. Keep in mind their price, as well. Any claims of more power or fuel efficiency of one type over another are pretty much baseless unless you were using the wrong plugs to begin with. Stick with what your owner's manual calls for unless you have a reason to upgrade/modify. For example, if you have a turbocharged car but are running more boost you might gap your plugs a little smaller to accommodate. The claims by parts stores to upgrade to the Triple-Spark Ultimate Unobtainium plugs are just upsells.
{ "source": [ "https://mechanics.stackexchange.com/questions/2013", "https://mechanics.stackexchange.com", "https://mechanics.stackexchange.com/users/1083/" ] }
2,025
I was installing a boost gauge, and must've shorted something out, because now my turn signals, fog lights, AC blower, and wipers don't work. I checked the fuses but they weren't blown. How can I pinpoint what the problem is?

UPDATE: This past Saturday I took my time to tackle this problem. I pulled every fuse; every one checked out. Then I started pulling relays, testing them by wrapping some wire around the 85/86 poles and connecting them to the battery, then testing for continuity between the other poles. Alas, I found two dead relays. One, I believe, is the switched power relay and the other is the wiper relay. The stores around me only had the switched power relay. I bought it and replaced it. Lo and behold, I start the car and the first thing I hear is the beep from the boost gauge! Everything is now working except for the blower. I'm not sure if replacing the other relay will fix things, as I read that there is none for my car, just the resistor that sits on the blower motor. The car is a '99 Audi A4 1.8TQ. Any ideas?

Found this wiring diagram: http://autolib.diakom.ru/CAR/Audi/1997/A4/SYSTEM%20WIRING%20DIAGRAMS/fig02.pdf http://autolib.diakom.ru/CAR/Audi/1997/A4/SYSTEM%20WIRING%20DIAGRAMS/fig01.pdf

PS: It's kind of ironic; had I just rushed and put everything back together before testing, everything would've been fine!
{ "source": [ "https://mechanics.stackexchange.com/questions/2025", "https://mechanics.stackexchange.com", "https://mechanics.stackexchange.com/users/997/" ] }
2,204
How do I loosen car tire lug nuts (so that I can change a tire) when they are really really stuck? I have tried turning the provided wrench, even standing and jumping on it. This worked for 4 of the lug nuts, but not the bottom-most one. I have heard of using a rust remover/blaster , but I do not see much rust at all, and the tires are not too old if I recall correctly. Another recommendation I see is to use a long pipe on the handle of the wrench for more torque. But even with just the wrench I seem to be warping the stock wrench with my efforts! Some forums recommend using a 4-way lug wrench , but they do not say how to use one, or why they are better than the stock wrench. Can they provide me more torque than jumping on a standard wrench? Finally, I am hopeful for an answer other than take it to a shop. I know I can do that, but I am trying to avoid the expense of a tow.
Remember that lug nuts are exposed to literally every element that could possibly cause corrosion. It sounds like your last nut is stuck due to some rust or oxidation that you can't see. Here's how I generally approach a badly stuck nut: Check your safety gear: eye protection, jack stands, everything to keep yourself from getting killed when this wheel finally comes loose. Get out the penetrating oil (AKA rust blaster). Really soak the bolt and nut. Now walk away and let it soak in, possibly for hours. Affix the correct socket to your breaker bar . This is a totally different beast from the stock tire iron. Its handle is much more durable and is very unlikely to bend under the torque that you're about to apply. Remember, think carefully about what's going to happen when the nut lets go. If you're pulling, it's not hard to end up punching yourself in the face. If you're pushing, don't let your fingers bash into the garage floor or other components. I've hurt myself using both methods when battling bolts (never worse than giving my wife an excuse to eyeroll me, thankfully). Try getting the nut off. Didn't work? Take a longish piece of steel pipe, stick it over the end of the breaker bar to increase the moment arm of the lever and try again. Once I get to this point, I usually cycle between penetrating oil and a super long breaker bar. Things eventually come loose after a sufficiently long period of HULK SMASH time. NOTE: when working with exhaust nuts and bolts, the bolt will eventually snap under enough torque. This is less likely with the much more robust wheel studs.
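To put a rough number on why the pipe helps: torque scales linearly with the length of the lever, so the pipe multiplies your effort. A minimal sketch follows, with made-up round numbers for the force and lengths.

```python
# Torque = force x lever length. Doubling or tripling the effective bar length
# multiplies the torque for the same effort. The force and lengths below are
# illustrative round numbers, not measurements.
def torque_lb_ft(force_lb, lever_ft):
    return force_lb * lever_ft

print(torque_lb_ft(150, 1.0))  # 150 lb-ft: leaning hard on a ~1 ft tire iron
print(torque_lb_ft(150, 3.0))  # 450 lb-ft: same effort with a 3 ft pipe over the breaker bar
```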
{ "source": [ "https://mechanics.stackexchange.com/questions/2204", "https://mechanics.stackexchange.com", "https://mechanics.stackexchange.com/users/1165/" ] }
2,411
The gear pattern is selected by clicking a lever with your left foot and is typically laid out as follows:

6th gear (if applicable)
5th gear
4th gear
3rd gear
2nd gear
NEUTRAL
1st gear

What is the technical reason engineers decided motorcycle gear patterns should reflect the above? More precisely, why is NEUTRAL placed between first and second gear?
Two of the useful features of this setup (I have no evidence to prove they were the design reasons) are:

When braking in a hurry, stamping down until you reach the bottom will leave you in first, NOT neutral. This is much safer in many respects than being left with no power in an emergency situation.

When starting from neutral, there is no risk of ending up in the wrong gear; one kick down leaves you in first gear. I have ridden very old bikes where neutral was the bottom gear, and sometimes the first click up would leave me in second - where I would stall, not being prepared for this.

I have also ridden a bike where the gears were the other way round, with 1st at the top, then neutral, then 2nd, 3rd, etc. - kicking down to change up a gear felt less natural, and very odd when accelerating hard.
{ "source": [ "https://mechanics.stackexchange.com/questions/2411", "https://mechanics.stackexchange.com", "https://mechanics.stackexchange.com/users/1252/" ] }
3,572
I grew up flushing the coolant twice yearly. In the Spring, you drain the antifreeze and fill with straight water for the summer. In the Fall, you drain the water and put in a 50/50 antifreeze/water mixture. I have heard (from an auto store clerk) that running just water will cause overheating. The clerk also said that antifreeze prevents corrosion and sediment build up and cleans the coolant system. Despite years of using water in the summer, I have never experienced any problems that were obviously related. Do I need to start using antifreeze, even in the summer?
I have heard (from an auto store clerk) that running just water will cause overheating.

Well, that's not true. Water isn't the cause of overheating. Your coolant mixture (of whatever proportion) and radiator work together to get rid of the heat. If it's not hot, you won't overheat. However, when it is hot, the coolant can only absorb heat up to its boiling point.

Here's a super high level summary of a cooling system:

1. The cool coolant is placed in contact with the metal of the hot engine.
2. Heat is transferred from the metal of the engine to the liquid coolant, heating it up.
3. Hot coolant is pumped to the radiator, making room for cooler coolant to move into the engine.
4. Hot coolant is placed in contact with the metal of the cool radiator, cooling it off.

Liquid cooling requires the best contact possible between the metal and the liquid for the most efficient heat transfer. Problems occur as the coolant approaches its boiling point: steam bubbles start to form, especially at hot metal surfaces. Each one of those bubbles is a less efficient point of heat transfer. That means less heat leaving the engine, meaning a hotter engine, more spots where bubbles will form, repeating until steam starts coming out of the hood.

So, one of your main goals in assembling a useful cooling system is to ensure that the boiling point of the coolant is high, in order to prevent a high temperature disaster. Water's boiling point is 100 C = 212 F. Straight ethylene glycol's boiling point is 197.3 C = 387 F. Of course, you shouldn't use straight ethylene glycol in the radiator either, for the sake of efficiency.

The clerk also said that antifreeze prevents corrosion and sediment build up and cleans the coolant system.

That depends on the product. Quite a lot of the coolants on today's market will inhibit corrosion and minimize sediments. Some, like Water Wetter, will actually increase the cooling system's ability to carry away heat.

Despite years of using water in the summer, I have never experienced any problems that were obviously related.

Just remember that lack of evidence doesn't necessarily indicate absence of the phenomenon.

Do I need to start using antifreeze, even in the summer?

As always, it's your car. You need to make the call. I can't be bothered to flush out my coolant just to change it from green to clear. When it's dirty, I flush it, not before.

NOTE: I know that a pressurized radiator system changes the physics from this simple "boiling point and no higher" explanation. This is a reasonable first-order approximation for the purposes of discussion.

EDIT: @Paulster2 was kind enough to post a picture of what happens to a water pump when it is run with straight water, without the corrosion prevention of coolant + water (picture not reproduced here). I submit that the one on the left can no longer be considered a pump.
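To put some rough numbers on the "temperature budget" idea, here is a minimal sketch. The base boiling points and the pressure rule of thumb are commonly quoted approximations, not engineering data for any specific cooling system.

```python
# Rough rule-of-thumb figures only: a 50/50 ethylene glycol mix boils around
# 106 C at atmospheric pressure, and each psi of radiator-cap pressure adds
# very roughly 1.4 C to the boiling point. Real systems vary; treat this as
# an illustration of the "temperature budget", not as engineering data.
def approx_boiling_point_c(base_boil_c, cap_pressure_psi, c_per_psi=1.4):
    return base_boil_c + c_per_psi * cap_pressure_psi

print(approx_boiling_point_c(100.0, 15))  # straight water under a 15 psi cap: ~121 C
print(approx_boiling_point_c(106.0, 15))  # 50/50 mix under the same cap: ~127 C
```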
{ "source": [ "https://mechanics.stackexchange.com/questions/3572", "https://mechanics.stackexchange.com", "https://mechanics.stackexchange.com/users/406/" ] }
3,942
When you come to a stop with a manual transmission, you have to push the clutch in or take it out of gear to keep the engine from stalling. What keeps the engine from stalling in an automatic transmission? Even when I rev the engine while holding the brake, the engine won't stall; I can tell the car is in gear because it's trying to move forward, but the engine stays running. If I tried this with a manual transmission, the engine would stall.
The reason that an automatic doesn't stall out while "in gear" and at a stop, while a manual transmission does, is that automatic transmissions use a hydraulic torque converter to connect the engine to the transmission, while manual transmissions use a friction clutch. These two systems do a similar job in a very different way. A torque converter uses fluid to transfer the power, and so it can "slip," effectively disengaging the engine from the drive wheels at low speeds. When you come to a complete stop in an automatic transmission car, the torque converter starts slipping, allowing the engine to keep turning even though the wheels have stopped. The only way to get the friction clutch of a manual transmission to slip is to depress the clutch pedal. If you come to a stop in a manual transmission car without depressing the clutch pedal, the engine stops turning when the wheels stop turning, and the car stalls. See Wikipedia's articles on torque converters and clutches for a more thorough treatment.
{ "source": [ "https://mechanics.stackexchange.com/questions/3942", "https://mechanics.stackexchange.com", "https://mechanics.stackexchange.com/users/1994/" ] }
4,606
I have a two-wheeler, i.e. a bike (a four-stroke Honda), which has four gears. While riding my bike, when I come to downhill slopes, I shift the motorcycle to neutral to save fuel. (The slopes are not very long, so I can't turn off the engine, because I have no self-starter to quickly restart it :) ). But one of my friends suggested not to use neutral, but only to hold the clutch in while the bike is in gear. So I want to ask: which of these two saves more fuel, shifting to neutral or just holding the clutch? Or are both of these useless?
Holding the clutch in is generally not a good idea. The clutch is designed to be used for very short periods between gears, and for holding in first when you are about to pull away. So if you are wanting to coast you should definitely do it in neutral. The difference between these two from a fuel consumption perspective should be marginal. From a safety perspective, however, I would suggest this would be a mistake: You are very vulnerable on a motorbike, so using all safety mechanisms at your disposal should be encouraged. Your engine is a safety mechanism when going downhill - you can accelerate out of danger, or you can use engine braking in addition to your brakes in order to slow down safely. My advice - only use neutral when stationary, and only use the clutch to change gears or to prepare to pull away.
{ "source": [ "https://mechanics.stackexchange.com/questions/4606", "https://mechanics.stackexchange.com", "https://mechanics.stackexchange.com/users/2357/" ] }
5,129
It’s mentioned in the user manual of my motorbike that when not using the vehicle, I should turn the petrol tap (the one that has three positions, “On”, “Off” and “Reserve”) to the “Off” position. I don’t understand why that’s necessary (and I need to understand it for my own knowledge, and because my brother won't do it unless I can give him a convincing reason). So, three questions:

Is it because petrol may evaporate or flow to the engine unnecessarily when the vehicle is parked and not used for a while (like overnight or just for an hour or two)?

Is it recommended to always turn off the petrol tap whenever I park my vehicle somewhere, or just when I park it in the garage for the night?

Also, as an additional related question: Why do some people ride their bikes with the petrol tap always kept on “Reserve”? My uncle does it, but refuses to explain it.
The reason is that motorcycles traditionally have the fuel tank higher than the carburetor, and the fuel feeds with gravity alone. What risks does this introduce that necessitate a manual shutoff? Without the shutoff, if the carburetor float failed to close the valve tightly enough to stop the fuel flow, then gas would continue to trickle into the carb, overflowing the bowl and flowing down into the intake tract. If the intake valve were open, it would fill the cylinder. Then, upon attempting to start the motor, the incompressible liquid gasoline would cause a catastrophic hydraulic lock . More modern motorcycles have changed some of this in a few ways: the fuel petcock could be vacuum operated, closing with the engine off. the fuel tank could be lower than the carbs, making a fuel pump necessary. fuel injection has eliminated the use of carburetors and floats to control fuel flow.
{ "source": [ "https://mechanics.stackexchange.com/questions/5129", "https://mechanics.stackexchange.com", "https://mechanics.stackexchange.com/users/1539/" ] }
6,103
I'm relatively new to motorcycles; my flatmate taught me how to ride his two Royal Enfields around New Delhi, and for the past year and a half I've been taking care of them as they break down ;) I was just curious what the 'cc' actually means when you say a Hero Honda has 150 cc, an Enfield has 350 cc, or a real beast of an Enfield has 500 cc. I know vaguely that more cc means more power, but am curious as to how one could actually measure cc.
cc is the size of the engine, in cubic centimeters - literally the volume of the cylinders. A larger cylinder can ingest more air (and more fuel), thus converting more energy per cycle than a smaller one, so making more power - assuming all other factors are the same, and there are many factors that affect power output. You can measure it by a simple volume calculation - area of the piston (pi x radius squared) x stroke x number of cylinders.
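Turning that calculation into a couple of lines makes it easy to sanity-check the figure a manufacturer quotes. The bore and stroke used here are approximate values for a classic single-cylinder 350 and are only an example; substitute your own engine's specs.

```python
import math

# Displacement from bore, stroke and cylinder count, using the formula above
# (pi/4 * bore^2 is the piston area). The 70 mm bore and 90 mm stroke are
# approximate figures for a classic single-cylinder Bullet 350, used purely
# as an illustration.
def displacement_cc(bore_mm, stroke_mm, cylinders):
    bore_cm, stroke_cm = bore_mm / 10.0, stroke_mm / 10.0
    return math.pi / 4.0 * bore_cm ** 2 * stroke_cm * cylinders

print(f"{displacement_cc(70, 90, 1):.0f} cc")  # ~346 cc, i.e. a "350"
```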
{ "source": [ "https://mechanics.stackexchange.com/questions/6103", "https://mechanics.stackexchange.com", "https://mechanics.stackexchange.com/users/3153/" ] }