source_id | question | response | metadata
---|---|---|---|
272,381 |
While contending with a certain Fourier series, I stumbled on an incredibly simple evaluation (numerically) of a slightly complicated-looking sin-integral. So, I wish to ask: Question. Is this really true? If so, any proof? $$I:=\int_0^{\frac{\pi}2}\frac{\sin x}{1+\sqrt{\sin 2x}}\,dx=\frac{\pi}2-1.$$ ADDED. I'm an experimentalist and I find many, many results. Some, after checking the literature, I found had been discovered earlier. For others, either I don't find them easily or I might be tired of looking and hope someone else points them out to me. I'm mostly interested in sharing and having fun, not seeking recognition of any sort. However, one thing is for sure: I don't give oxygen to rude comments.
|
We have \begin{align}
& 2\int_0^{\pi/2}\frac{\sin x}{1+\sqrt{\sin 2x}} \, dx=\int_0^{\pi/2}\frac{\sin x+\cos x}{1+\sqrt{\sin 2x}} \, dx=\frac12\int_0^\pi\frac{\sqrt{1+\sin y}}{1+\sqrt{\sin y}} \, dy \\[6pt]
= {} &\int_0^{\pi/2}\frac{\sqrt{1+\sin y}}{1+\sqrt{\sin y}} \, dy =\int_0^1\frac{\sqrt{1+t}}{(1+\sqrt{t})\sqrt{1-t^2}} \, dt=\int_0^1\frac{dt}{(1+\sqrt{t})\sqrt{1-t}} \\[6pt]
= {} &2\int_0^{\pi/2}\frac{\cos z}{1+\cos z} \, dz=\pi-2\int_0^{\pi/2}\frac1{1+\cos z} \,dz= \pi-2\tan\frac{z}2\bigg|_0^{\pi/2}=\pi-2,
\end{align}
where we used substitutions $y=2x$, $t=\sin y$, $t=\cos^2 z$.
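As a quick numerical sanity check of the identity (a minimal sketch of my own, separate from the proof above, assuming SciPy is available):

```python
# Numerically verify I = pi/2 - 1 for I = int_0^{pi/2} sin x / (1 + sqrt(sin 2x)) dx.
import numpy as np
from scipy.integrate import quad

val, err = quad(lambda x: np.sin(x) / (1.0 + np.sqrt(np.sin(2.0 * x))), 0.0, np.pi / 2)
print(val)                          # 0.5707963267948966...
print(np.pi / 2 - 1)                # 0.5707963267948966...
print(abs(val - (np.pi / 2 - 1)))   # difference is at the level of the quadrature error
```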
|
{
"source": [
"https://mathoverflow.net/questions/272381",
"https://mathoverflow.net",
"https://mathoverflow.net/users/66131/"
]
}
|
272,385 |
For what values of $n \neq 1,3,7$ is the tangent bundle $TS^n$ of the $n$-sphere diffeomorphic to an open subset of $\mathbb{R}^{2n}$?
|
|
{
"source": [
"https://mathoverflow.net/questions/272385",
"https://mathoverflow.net",
"https://mathoverflow.net/users/36688/"
]
}
|
272,868 |
Let $d \geq 2$ be an integer and $\xi=\exp(\frac{2\pi i}{d})$. I am trying to compute the determinant of the matrix
$$
(\xi^{ij}-1)_{1 \leq i, j \leq d-1}.
$$ Let me call it $\Delta(d)$. For small values of $d$ I get $\Delta(2)=-2$, $\Delta(3)=-3\sqrt{3}i$, $\Delta(4)=-16i$, but I can't seem to find a general formula. How can I do this?
|
Using the earlier responses and comments, I confirm the formula suggested by Neil Strickland:
$$\Delta(d)=d^{d/2}i^{m(d)}\qquad\text{with}\qquad m(d): = 1 + d(7-d)/2\in\mathbb{Z}.$$
Consider the $d\times d$ Vandermonde matrix
$$\Phi(d):=(\xi^{ij})_{0\leq i,j \leq d-1}.$$
Subtracting the first column from each other column, we get a matrix with first row equal to $(1,0,\dots,0)$ and lower right $(d-1)\times(d-1)$ block equal to the OP's matrix. Therefore,
$$\Delta(d)=\det\Phi(d).$$
It is straightforward to check that $\Phi(d)^\ast\cdot\Phi(d)$ equals $d$ times the identity matrix, therefore
$$ |\det\Phi(d)|^2=d^d.$$
In other words, $|\det\Phi(d)|=d^{d/2}$, and we are left with determining
$$\frac{\det\Phi(d)}{|\det\Phi(d)|}=\prod_{0\leq i<j\leq d-1}\frac{\xi^j-\xi^i}{|\xi^j-\xi^i|}.$$
Let me use the notation $e(t):=e^{2\pi it}$, familiar from analytic number theory. Then we see that
$$\xi^j-\xi^i=e\left(\frac{j}{d}\right)-e\left(\frac{i}{d}\right)
=e\left(\frac{i+j}{2d}\right)\left(e\left(\frac{j-i}{2d}\right)-e\left(\frac{i-j}{2d}\right)\right).$$
On the right hand side, $0<\frac{j-i}{2d}<\frac{1}{2}$, hence $e\left(\frac{j-i}{2d}\right)$ lies in the upper half-plane. As a result,
$$\frac{\xi^j-\xi^i}{|\xi^j-\xi^i|}=e\left(\frac{i+j}{2d}\right)i.$$
We need to calculate the product of the right hand side over the $\binom{d}{2}$ pairs $0\leq i<j\leq d-1$. By symmetry (or by brute-force calculation), the average of $i+j$ equals $d-1$, whence
$$\prod_{0\leq i<j\leq d-1}\frac{\xi^j-\xi^i}{|\xi^j-\xi^i|}=\left(e\left(\frac{d-1}{2d}\right)i\right)^{\binom{d}{2}}=e\left(\left(\frac{d-1}{2d}+\frac{1}{4}\right)\binom{d}{2}\right).$$
We calculate
$$\left(\frac{d-1}{2d}+\frac{1}{4}\right)\binom{d}{2}=\frac{(3d-2)(d-1)}{8},$$
therefore in the end
$$\Delta(d)=d^{d/2}i^{n(d)}\qquad\text{with}\qquad n(d):=(3d-2)(d-1)/2\in\mathbb{Z}.$$
This agrees with Neil Strickland's formula, upon noting that $m(d)\equiv n(d)\pmod{4}$, i.e.,
$$2+d(7-d)\equiv (3d-2)(d-1)\pmod{8}.$$
Added 1. As Alexey Ustinov remarked, $\Phi(d)$ is known as a Schur matrix. As Carlitz wrote in his 1959 paper, "this matrix is familiar in connection with Schur's derivation of the value of Gauss's sum". In fact, on page 295 of this paper, Carlitz uses the known value of Gauss's sum to find the eigenvalues of this matrix (which are all of the form $\pm\sqrt{d}$ and $\pm i\sqrt{d}$, hence one only needs to find the 4 multiplicities). This can be regarded as a refinement and an alternate proof of the above result, since the product of the eigenvalues is the determinant.
Added 2. Carlitz referred to Landau's Vorlesungen, whose relevant part appeared in English as Landau: Elementary Number Theory (Chelsea, 1958). So I looked up this translation, and to my surprise on pages 211-212 I found essentially the same calculation as above. In fact all this is in Schur's 1921 paper, thanks to Alexey Ustinov for locating it for me (alternative link here). As Landau explains (following Schur), the determinant calculation leads to an evaluation of Gauss's sum, at least for odd $d$. The point is that one can figure out the 4 eigenvalue multiplicities from the determinant, and hence one obtains the formula for the trace of $\Phi(d)$ as well. However, this trace is nothing but Gauss's sum!
Added 3. For a more recent treatment of Schur's evaluation of the Gauss sum, see Section 6.3 in Rose: A course in number theory (2nd ed., Oxford University Press, 1994).
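A quick numerical cross-check of the closed form (an illustration of my own, not part of the argument):

```python
# Compare det((xi^{ij} - 1)_{1<=i,j<=d-1}) with d^(d/2) * i^((3d-2)(d-1)/2) for small d.
import numpy as np

for d in range(2, 9):
    xi = np.exp(2j * np.pi / d)
    M = np.array([[xi ** (i * j) - 1 for j in range(1, d)] for i in range(1, d)])
    delta = np.linalg.det(M)
    n_d = (3 * d - 2) * (d - 1) // 2
    predicted = d ** (d / 2) * 1j ** n_d
    print(d, np.round(delta, 6), np.round(predicted, 6))
```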
|
{
"source": [
"https://mathoverflow.net/questions/272868",
"https://mathoverflow.net",
"https://mathoverflow.net/users/111500/"
]
}
|
272,941 |
Most academic jobs involve some amount of teaching. Post-docs generally do not, but they are only short-term positions. Question: in which countries can one obtain a research-only permanent position in mathematics? Please provide a link to a relevant website if possible. Please mention only one country per answer, and since there is obviously no best answer, this is a community-wiki question.
|
In the US, you could try to follow in the footsteps of Gödel and land a job at the Institute for Advanced Study. But do keep in mind what Richard Feynman thought about such research positions without teaching duties: "When I was at Princeton in the 1940s I could see what happened to those great minds at the Institute for Advanced Study, who had been specially selected for their tremendous brains and were now given this opportunity to sit in this lovely house by the woods there, with no classes to teach, with no obligations whatsoever. These poor bastards could now sit and think clearly all by themselves, OK? So they don't get any ideas for a while: They have every opportunity to do something, and they are not getting any ideas. I believe that in a situation like this a kind of guilt or depression worms inside of you, and you begin to worry about not getting any ideas. And nothing happens. Still no ideas come. Nothing happens because there's not enough real activity and challenge: You're not in contact with the experimental guys. You don't have to think how to answer questions from the students. Nothing!" From: "Surely You're Joking, Mr. Feynman! – Adventures of a Curious Character"
|
{
"source": [
"https://mathoverflow.net/questions/272941",
"https://mathoverflow.net",
"https://mathoverflow.net/users/83733/"
]
}
|
273,067 |
I have spend some time with the geometric approach of framed cobordisms to compute homotopy classes, due to Pontryagin. He computed $\pi_{n+1}(S^n)$ and $\pi_{n+2}(S^n)$. After surveying the literature (not too deeply) I was under the impression that the computation of $\pi_{n+3}(S^n)\cong \mathbb{Z}/24\mathbb{Z}$ for $n\rightarrow \infty$ with similar methods is due to Rohlin in the following paper: MR0046043 (13,674d) Reviewed Rohlin, V. A. Classification of mappings of an (n+3)-dimensional sphere into an n-dimensional one. (Russian) Doklady Akad. Nauk SSSR (N.S.) 81, (1951). 19–22. 56.0X It came a bit of a surprise to me that in the review of this paper on Mathscinet, Hilton states that the results in this paper are incorrect. Does the error only concern the unstable groups? Is it fair to cite this paper for the first computation of the third stable homotopy group of spheres, or should I cite papers by Barrat-Paechter, Massey-Whitehead and Serre? As I understand it these methods are much more algebraic and further removed from the applications that I have in mind.
|
The error is that Rokhlin claimed that $\pi_6(S^3)=\mathbb{Z}/6$, but Hilton, in his review , points out that the paper instead shows that $\pi_6(S^3)/\pi_5(S^2) = \mathbb{Z}/6$. The error lies in a prior calculation (reviewed here ) that Rokhlin claimed showed $\eta^3=0$, but in fact this element is 2-torsion. Rokhlin corrects his mistake and calculates the stable homotopy group $\pi_3^s$ in Rohlin, V. A. MR0052101 New results in the theory of four-dimensional manifolds. (Russian) Doklady Akad. Nauk SSSR (N.S.) 84, (1952). 221–224. The review states that this result "agrees with, and were anticipated by, results of Massey, G. W. Whitehead, Barratt, Paechter and Serre." Serre's CR note Sur les groupes d'Eilenberg-MacLane. C. R. Acad. Sci. Paris 234, (1952). 1243–1245 ( BnF ) found the correct $\pi_6(S^3)$ by homotopical means. Barratt and Paechter found an element of order 4 in $\pi_{3+k}(S^k)$ when $k\geq 2$. The reference to Massey-Whitehead is a result presented at the 1951 Summer Meeting of the AMS at Minneapolis; all we have is the abstract in the Bulletin of the AMS 57 , no. 6 If one wants to analyse 'dates received' to establish priority, then by all means.
|
{
"source": [
"https://mathoverflow.net/questions/273067",
"https://mathoverflow.net",
"https://mathoverflow.net/users/12156/"
]
}
|
273,632 |
In some circumstances, an injective (one-to-one) map is automatically surjective (onto). For example:
Set theory: An injective map between two finite sets with the same cardinality is surjective.
Linear algebra: An injective linear map between two finite dimensional vector spaces of the same dimension is surjective.
General topology: An injective continuous map between two finite dimensional connected compact manifolds of the same dimension is surjective.
Are you aware of other results in the same spirit? Is there a general framework that somehow encompasses all these results?
|
A famous result in this spirit is the Ax-Grothendieck theorem , whose statement is the following: Theorem. If $f \colon \mathbb{C}^n \to \mathbb{C}^n$ is an injective polynomial function then $f$ is bijective.
|
{
"source": [
"https://mathoverflow.net/questions/273632",
"https://mathoverflow.net",
"https://mathoverflow.net/users/6129/"
]
}
|
273,635 |
Let $M$ be a smooth compact manifold, and let $X$ be a smooth vector field of $M$ that is nowhere vanishing, thus one can think of the pair $(M,X)$ as a smooth flow with no fixed points. Let us say that a smooth $1$-form $\theta$ on $M$ is adapted to this flow if $\theta(X)$ is everywhere positive; and The Lie derivative ${\mathcal L}_X \theta$ is an exact $1$-form. (By the way, I'd be happy to take suggestions for a better name than "adapted".
Most adjectives such as "calibrated", "polarised", etc. are unfortunately already taken.) Question. Is it true that every smooth flow with no fixed points has at least one $1$-form adapted to it? At first I was sure that there must be counterexamples (perhaps many such), but every construction of a smooth flow I tried to make ended up having at least one adapted $1$-form. Some examples: If the flow is isometric (that is, it preserves some Riemannian metric $g$), one can take $\theta$ to be the $1$-form dual to $X$ with respect to the metric $g$. If the flow is an Anosov flow, one can take $\theta$ to be the canonical $1$-form. If $M$ is the cosphere bundle of some compact Riemannian manifold $N$ and $(M,X)$ is the geodesic flow, then one can again take $\theta$ to be the canonical $1$-form. (This example can be extended to a number of other Hamiltonian flows, such as flows that describe a particle in a potential well, which was in fact the original context in which this question arose for me.) If the flow is a suspension, one can take $\theta$ to be $dt$, where $t$ is the time variable (in local coordinates). If there is a morphism $\phi: M \to M'$ from the flow $(M,X)$ to another flow $(M',X')$ (thus $\phi$ maps trajectories to trajectories), and the latter flow has an adapted $1$-form $\theta'$, then the pullback $\phi^* \theta'$ of that form will be adapted to $(M,X)$. Some simple remarks: If $\theta$ is adapted to a flow $(M,X)$, then so is $(e^{tX})^* \theta$ for any time $t$, where $e^{tX}: M \to M$ denotes the time evolution map along $X$ by $t$. In many cases this allows one to average along the flow and restrict attention to cases where $\theta$ is $X$-invariant. In the case when the flow is ergodic, this would imply in particular that we could restrict attention to the case when $\theta(X)$ is constant. Conversely, in the ergodic case one can almost (but not quite) use the ergodic theorem to relax the requirement that $\theta(X)$ be positive to the requirement that $\theta(X)$ have positive mean with respect to the invariant measure. The condition that ${\mathcal L}_X \theta$ be exact implies that $d\theta$ is $X$-invariant, and is in turn implied by $\theta$ being closed. For many vector fields $X$ it is already easy to find a closed $1$-form $\theta$ with $\theta(X) > 0$, but this is not always possible in general, in particular if $X$ is the divergence of a $2$-vector field with respect to some volume form, in which case the integral of $\theta(X)$ along this form must vanish when $\theta$ is closed. However, in all the cases in which this occurs, I was able to locate a non-closed example of $\theta$ that was adapted to the flow. (But perhaps if the flow is sufficiently "chaotic" then one can rule out non-closed examples also?)
|
If I understand correctly, there is already a counterexample on the torus: On the $xy$-plane $\mathbb{R}^2$, let $X$ be the vector field
$$
X = \sin x\,\frac{\partial\ }{\partial x} + \cos x\,\frac{\partial\ }{\partial y}.
$$
Now let $T^2=\mathbb{R}^2/\Lambda$ where $\Lambda$ is the lattice generated by $(2\pi,0)$ and $(0,2\pi)$. Since $X$ is invariant under this lattice, it gives rise to a well-defined, nowhere vanishing vector field on $T^2$, which I will also call $X$. It has two closed orbits $C_0$ defined by $x\equiv0\mod 2\pi$ and $C_1$ defined by $x\equiv \pi\mod 2\pi$, while every other orbit is nonclosed and has $C_0$ as its $\alpha$-limit and $C_1$ as its $\omega$-limit. In particular, the only functions on $T^2$ that are constant on the $X$-orbits are the constant functions. Now suppose that $\theta$ is a $1$-form on $T^2$ such that $\mathcal{L}_X\theta$ is exact. Then, by Cartan's formula,
$$
\mathrm{d}\bigl(\theta(X)\bigr) + i(X)(\mathrm{d}\theta) = \mathrm{d}h
$$
for some function $h$ on $T^2$. I.e.,
$$
i(X)(\mathrm{d}\theta) = \mathrm{d}\bigl(h-\theta(X)\bigr),
$$
implying that the function $h-\theta(X)$ is constant on the flow lines of $X$ and hence must be constant. Thus, $i(X)(\mathrm{d}\theta)=0$. Since $X$ is nowhere vanishing, $\mathrm{d}\theta$ must vanish identically, so $\theta$ must be closed. Now, the integrals of $\theta$ over the two closed orbits (oriented so that $\mathrm{d}y>0$, say) must be equal, since, oriented this way, they are homologous in $T^2$. However, $X$ orients $C_0$ and $C_1$ so that they are opposite in homology, so the integrals using $X$ to orient them positively must have opposite signs (or vanish). Hence, it cannot be that $\theta(X)>0$ everywhere. Added on July 29: A request has been made for an example of such a vector field without any closed leaves. This is easy to provide, as follows: Let $\mathbb{T}^3 = \mathbb{R}^3/(2\pi\mathbb{Z}^3)$ be the 'square' $3$-dimensional torus, i.e., $xyz$-space where $x$, $y$, and $z$ are $2\pi$-periodic. Let
$$
X = \sin x\,\frac{\partial\ }{\partial x} + \cos x\,\frac{\partial\ }{\partial y}
+ \sqrt 2\,\cos x\,\frac{\partial\ }{\partial z}\,,
$$
which is a well-defined vector field on $\mathbb{T}^3$. The $2$-tori $C_0$ (defined by $x\equiv0\,\mathrm{mod}\,2\pi$) and $C_\pi$ (defined by $x\equiv\pi\,\mathrm{mod}\,2\pi$) are invariant under $X$ and all the flow-lines of $X$ in $C_0$ and $C_\pi$ are dense in their respective $2$-tori. Meanwhile, every other flow-line of $X$ $\alpha$-limits to $C_0$ and $\omega$-limits to $C_\pi$. In particular, any function on $\mathbb{T}^3$ that is constant on the flow-lines of $X$ is necessarily constant. Just as above, it follows that if $\theta$ is a $1$-form on $\mathbb{T}^3$ that is adapted to $X$, then $i(X)(\mathrm{d}\theta)=0$. It follows that there is a smooth function $\lambda$ on $\mathbb{T}^3$ such that
$$
\mathrm{d}\theta = \lambda\,i(X)(\mathrm{d}x\wedge\mathrm{d}y\wedge\mathrm{d}z)
= \lambda\,\bigl(\sin x\,\mathrm{d}y\wedge\mathrm{d}z
+ \cos x\, \mathrm{d}z\wedge\mathrm{d}x
+ \sqrt2\,\cos x\,\mathrm{d}x\wedge\mathrm{d}y \bigr)
$$
Taking the exterior derivative of both sides of this equation yields the identity
$$
0 = \mathrm{d}\lambda(X) + \lambda\,\cos x\,.
$$
Consequently,
$$
\mathrm{d}(\lambda\,\sin x)(X) = 0
$$
Thus, $\lambda\,\sin x$ is constant along the flow lines of $X$ and hence is constant. Since it vanishes on $C_0$ and $C_\pi$, the function $\lambda\,\sin x$ must vanish identically. Hence $\lambda$ vanishes identically, i.e., $\mathrm{d}\theta = 0$. Since $\theta$ is closed on $\mathbb{T}^3$, its integral over any two homologous closed oriented curves must be equal. However, just as in the $2$-dimensional case, using the hypothesis that $\theta(X)>0$, one can easily construct a closed oriented curve $\gamma_0$ in $C_0$ on which $\theta$ pulls back to be positive while its translate by $\pi$ in the $x$-direction, say $\gamma_\pi$ lies in $C_\pi$ and has the property that $\theta$ pulls back to $\gamma_\pi$ to be negative, making it impossible for the integrals of $\theta$ over the two curves to be equal.
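For readers who want to double-check the two local computations above (that the coefficient of $\mathrm{d}(\mathrm{d}\theta)$ is $\mathrm{d}\lambda(X)+\lambda\cos x$, and that this identity forces $\mathrm{d}(\lambda\sin x)(X)=0$), here is a small symbolic sketch of my own, assuming SymPy is available; it is not part of the answer.

```python
# Symbolic check of the two identities used above, for X = sin x d/dx + cos x d/dy + sqrt(2) cos x d/dz.
import sympy as sp

x, y, z = sp.symbols('x y z')
lam = sp.Function('lam')(x, y, z)

def X(f):
    # the vector field X acting on a function f
    return sp.sin(x)*sp.diff(f, x) + sp.cos(x)*sp.diff(f, y) + sp.sqrt(2)*sp.cos(x)*sp.diff(f, z)

# coefficient of dx^dy^dz in d(d theta), where d theta = lam * i(X)(dx^dy^dz)
ddtheta = sp.diff(lam*sp.sin(x), x) + sp.diff(lam*sp.cos(x), y) + sp.diff(sp.sqrt(2)*lam*sp.cos(x), z)

print(sp.simplify(ddtheta - (X(lam) + lam*sp.cos(x))))                      # 0
print(sp.simplify(X(lam*sp.sin(x)) - sp.sin(x)*(X(lam) + lam*sp.cos(x))))   # 0
```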
|
{
"source": [
"https://mathoverflow.net/questions/273635",
"https://mathoverflow.net",
"https://mathoverflow.net/users/766/"
]
}
|
273,636 |
Let $H_{n,d}=\mathbb{R}_d[x_1,\dots,x_n]$ be the space of $n$-variate homogeneous degree $d$ polynomials, $D=D^\top\in \mathbb{N}^{m\times m}$ a symmetric $m\times m$ matrix. Consider the space $P_D$ of symmetric $m\times m$ matrices with $(i,j)$-entries in $H_{n,D_{ij}}$. Is there a natural (matrix?) inner product structure on $P_D$? In my application I have a cone $C$ of positive semidefinite (globally, for any value of $x=(x_1,\dots,x_n)$) matrices in $P_D$, and I'd like to find defining inequalities for it - i.e. the dual cone $C^*$. One possibility might be to use the Fischer-Fock inner product $[,]$ on the entries, so that for $f,g\in P_D$ one has $p=\langle f,g\rangle\in\mathbb{R}^{m\times m}$ with $p_{ij}=[f_{ij},g_{ij}]$, and nonnegativity (resp. positivity) of $p$ understood as $p$ being positive semidefinite (resp. definite). Is it true that $\langle f,f\rangle$ is p.s.d. whenever $f$ is p.s.d.? Or am I on the wrong track? Note: for $u,v\in H_{n,d}$ the Fischer-Fock product (see below) is defined as
$$
[u,v]:=\sum_{|\alpha|=d}\binom{|\alpha|}{\alpha} u_\alpha v_\alpha,
$$
where the usual multinomial notation $\alpha:=(\alpha_1,\dots,\alpha_n)$, $|\alpha|:=\sum_k\alpha_k$, $\binom{|\alpha|}{\alpha}:=\frac{|\alpha|!}{\prod_k\alpha_k!}$, $x^\alpha:=\prod_k x_k^{\alpha_k}$, $u(x)=\sum_\alpha \binom{|\alpha|}{\alpha}u_\alpha x^\alpha$ is used. In particular, $u(y)=[u,(\sum_k y_kx_k)^d]$.
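The definition is concrete enough to compute with. Below is a small sketch of my own (the helper `fischer_fock` is hypothetical, not from any library); it works with ordinary monomial coefficients, i.e. writing $u=\sum_\alpha c_\alpha x^\alpha$ so that $[u,v]=\sum_\alpha c_\alpha(u)\,c_\alpha(v)/\binom{d}{\alpha}$, which is my reading of the normalisation above, and it checks the reproducing property $u(y)=[u,(\sum_k y_kx_k)^d]$ on an example.

```python
# Fischer-Fock (Bombieri) product of two homogeneous degree-d polynomials, via ordinary coefficients.
import sympy as sp
from math import factorial

def fischer_fock(u, v, vars_, d):
    cu = dict(sp.Poly(u, *vars_).terms())   # {exponent tuple alpha: coefficient c_alpha}
    cv = dict(sp.Poly(v, *vars_).terms())
    total = sp.Integer(0)
    for alpha, c in cu.items():
        multinom = factorial(d)
        for a in alpha:
            multinom //= factorial(a)
        total += c * cv.get(alpha, 0) * sp.Rational(1, multinom)
    return total

x, y = sp.symbols('x y')
u = x**2 + 3*x*y
# reproducing property: [u, (2x + 5y)^2] should equal u(2, 5) = 34
print(fischer_fock(u, (2*x + 5*y)**2, (x, y), 2))
print(u.subs({x: 2, y: 5}))
```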
|
|
{
"source": [
"https://mathoverflow.net/questions/273636",
"https://mathoverflow.net",
"https://mathoverflow.net/users/11100/"
]
}
|
273,670 |
I don't know whether the following is a known result, but it would be very useful to me in my research if it were true. Conjecture: Let $G$ be a planar graph. The sum $$
\sum_{\{x,y\} \in E(G)}{\min(\deg(x),\deg(y))}
$$ is at most linear in the number of vertices. What I know about this problem: This conjecture would be false if one replaces the minimum by the average - the star graph is a counterexample, in which the sum is quadratic. I can prove an upper bound of $O(n \log(n))$ as follows. Let $A_i = \{v : 2^i \leq \deg(v) < 2^{i+1}\}$ , and let $$
E_i = \{\{x,y\} \in E(G): x \in A_i ,y \in \cup_{j \geq i}{A_j}\}.
$$ Now, $E(G)$ is the union of the $E_i$'s, and the contribution of an edge from $E_i$ is at most $2^{i+1}$. On the other hand, as $G$ is planar, the size of $E_i$ is at most $3|\cup_{j \geq i} A_j|$. Now, as the average degree in a planar graph is at most 6, the number of vertices whose degree is at least $2^i$ is at most $6n/2^i$. Therefore $|E_i| \leq 18 n/ 2^i$. We have $$
\sum_{\{x,y\} \in E(G)}{\min(\deg(x),\deg(y))} \leq
\sum_{i=0}^{\log_2(n)} \sum_{\{x,y\} \in E_i}{\min(\deg(x),\deg(y))}
$$ $$
\leq\sum_{i=0}^{\log_2(n)} |E_i| \cdot 2^{i+1} \leq
\sum_{i=0}^{\log_2(n)} (18 n/ 2^i) \cdot 2^{i+1} = 36 n \log_2(n).
$$
|
Let $L(G)=\sum_{xy\in E(G)} \min\lbrace\deg(x),\deg(y)\rbrace$. THM. For a simple planar graph with $n$ vertices, $L(G)\le 18n-36$ for $n\ge 3$. PROOF. Recall that a simple planar graph with $k\ge 3$ vertices cannot have more than $3k-6$ edges, achieved by a triangulation. Let the degrees of the vertices be $d_1\ge d_2\ge\dots\ge d_n$. We want to choose $3n-6$ pairs $(v_i,w_i)$ for $v_i\lt w_i$ and we want to maximize $\sum_i d_{w_i}$. This is achieved by pushing the pairs to the left as much as possible, but we have the constraint that for $k\ge 3$ the number of pairs lying in $\lbrace 1,\ldots,k\rbrace$ is at most $3k-6$. So the best we can hope for is to choose the pairs $(1,2)$, $(1,3)$ and $(2,3)$, then for $j\ge 4$ choose 3 pairs $(x,j)$ for $x\lt j$. This gives $$ L(G) \le d_2 + 2d_3 + 3(d_4+\cdots+d_n)
\le 3\sum_i d_i \le 3(6n-12).$$ The actual maximums from $n=3$ to $n=18$ are: 6, 18, 30, 48, 60, 78, 93, 112, 127, 150, 162, 180, 198, 216, 234, 252, which are comfortably within the bound. The duals of fullerenes show that the constant $18$ cannot be improved, but the constant 36 can be. Note that I dropped $3d_1+2d_2+d_3$ in the calculation, which can definitely be used to push the bound down by a constant. For large enough $n$ , $18n-72$ is a correct bound and I conjecture that it is the exact value for $n\ge 13$ .
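As a quick empirical illustration (my own, not part of the proof; the graphs below are standard examples rather than the extremal triangulations discussed above), one can compare $L(G)$ with $18n-36$ for a few planar graphs, assuming networkx is available:

```python
# Compute L(G) = sum over edges of min(deg(x), deg(y)) and compare with 18n - 36.
import networkx as nx

def L(G):
    return sum(min(G.degree(u), G.degree(v)) for u, v in G.edges())

examples = [("octahedron", nx.octahedral_graph()),
            ("icosahedron", nx.icosahedral_graph()),
            ("dodecahedron", nx.dodecahedral_graph()),
            ("30x30 grid", nx.grid_2d_graph(30, 30))]

for name, G in examples:
    n = G.number_of_nodes()
    print(f"{name:12s}  n={n:4d}  L(G)={L(G):6d}  18n-36={18*n - 36:6d}")
```

(The octahedron and icosahedron are triangulations and give $L(G)=48$ and $150$, matching the listed maxima for $n=6$ and $n=12$.)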
|
{
"source": [
"https://mathoverflow.net/questions/273670",
"https://mathoverflow.net",
"https://mathoverflow.net/users/17599/"
]
}
|
273,834 |
I have seen lots of descriptions of this map in the literature but never seen it nicely drawn anywhere. I could try to do it myself but I really lack expertise, hence am afraid to miss something or do it wrong. Let me just provide some glimpses, and maybe somebody can nicely tie them together. At the "initial end" there is the stable (co)homotopy, corresponding to the sphere spectrum. At the "terminal end" there is the rational cohomology (maybe extending further to real, complex, etc.) From that terminal end, chains of complex oriented theories emanate, one for each prime; the place in the chain corresponds to the height of the formal group attached. Now here I am already uncertain what to place at each spot - Morava $K$-theories? Or $E$-theories? At the limit of each chain there is something, and again I am not sure whether it is $BP$ or cohomology with coefficients in the prime field. Next, there is complex cobordism mapping to all of those (reflecting the fact that the complex orientation means a $MU$-algebra structure). But all this up to now only happens in the halfplane. There are now some Galois group-like actions on each of these, with the homotopy fixed point spectra jumping out of the plane and giving things like $KO$ and $TMF$ towards the terminal end and $MSpin$, $MSU$, $MSp$, $MString$, etc. above $MU$. Here I have vague feeling that moving up from $MU$ is closely related to moving in the plane from the terminal end (as $MString$, which is sort of "two steps upwards" from $MU$, corresponds to elliptic cohomologies which are "two steps to the left" from $H\mathbb Q$) but I know nothing precise about this connection. As you see my picture is quite vague and uncertain. For example, I have no idea where to place things like $H\mathbb Z$ and what is in the huge blind spot between the sphere and $MU$. From the little I was able to understand from the work of Devinatz-Hopkins-Smith, $MU$ is something like homotopy quotient of the sphere by the nilradical. Is it correct? If so, things between the sphere and $MU$ must display some "infinitesimal" variations. Is there anything right after the sphere? Also, can there be something above the sphere? How does connectivity-non-connectivity business and chromatic features enter the picture? What place do "non-affine" phenomena related to algebroids, etc. have? There are also some maps, like assigning to a vector bundle the corresponding sphere bundle, which seem to go backwards, and I cannot really fit them anywhere. Have I missed something essential? Or all this is just rubbish? Can anyone help with the map, or give a nice reference?
|
I'm not sure I understand what "the" map is here, but I'll attempt to answer
the questions that were asked in the body of the question. Sorry if I'm just
saying things that you already know.
$\newcommand{\Sp}{\mathrm{Sp}}\newcommand{\Mfg}{\mathscr{M}_{\textbf{fg}}}\newcommand{\QCoh}{\mathrm{QCoh}}\newcommand{\Eoo}{\mathbf{E}_\infty}$ Quillen's theorem says that the Lazard ring $L$ (which classifies formal group
laws over rings, so that a fgl over a ring $R$ is a ring map $L\to R$) is
canonically isomorphic to $MU_\ast$ via the universal complex orientation
$MU\to MU$. The key idea driving chromatic homotopy theory is that there's a
functor $\Sp \to \QCoh(\Mfg)$, given by sending a spectrum $X$ to its
$MU$-homology, which is naturally a $(MU_\ast, MU_\ast MU)$-comodule. The stack
$\mathscr{M}_{(MU_\ast, MU_\ast MU)}$ associated to the Hopf algebroid
$(MU_\ast, MU_\ast MU)$ is exactly $\Mfg$. Now, if $(A,\Gamma)$ is a Hopf
algebroid, then $\QCoh(\mathscr{M}_{(A,\Gamma)})$ is exactly the category of
$(A,\Gamma)$-comodules. All of this tells us that the $MU$-homology of a
spectrum is a quasicoherent sheaf over $\Mfg$. Chromotopists have adopted the philosophy that this functor is a rather good
approximation of $\Sp$. Morava $K$-theories and $E$-theories come from this
philosophy. The main tool utilized here is the Landweber exact functor theorem,
which can be phrased as follows: if $\text{Spec }R\to \Mfg$ is a flat map, then
the functor $X\mapsto MU_\ast(X)\otimes_{MU_\ast} R$ is a cohomology theory.
This is reasonable, since for that functor to be a cohomology theory, we don't
need $\mathrm{Tor}_{MU_\ast}(R,N)$ to vanish for every $MU_\ast$-module $N$ ---
we just need it to vanish for $(MU_\ast,MU_\ast MU)$-comodules. A theorem of Lazard's says that over an algebraically closed field $k$ of
characteristic $p$ (for some prime $p$ that'll be fixed forever), there is a
unique (up to isomorphism) formal group law of height $n$ for each $n$. People
call (a choice of) such a formal group law the Honda formal group law of height
$n$. At height $1$, the multiplicative formal group law $x+y+xy$ provides an
example. (Over a field of characteristic $0$, everything is isomorphic to the
additive formal group law; this is what the logarithm does). In particular,
there's a unique geometric point of $\Mfg$ (over $k$) for each $n$. We'd like to use the Landweber exact functor theorem to produce a cohomology
theory from this geometric point (corresponding to the integer $n$, say) ---
but the inclusion of a geometric point into something is rarely ever flat.
Instead, we can look at the infinitesimal neighborhood of this point, and
consider its inclusion into $\Mfg$. The structure of this infinitesimal
neighborhood was determined by Lubin and Tate: it is (noncanonically)
isomorphic to $\text{Spf }W(k)[[u_1,\cdots,u_{n-1}]]$. The ring
$W(k)[[u_1,\cdots,u_{n-1}]]$ is complete local, with maximal ideal
$\mathfrak{m}$ generated by the regular sequence $p, u_1, \cdots, u_{n-1}$. The
map $\text{Spf }W(k)[[u_1,\cdots,u_{n-1}]]\to \Mfg$ satisfies the hypotheses of
Landweber's theorem, providing us with a spectrum $E_n$, called Morava
$E$-theory, with $\pi_\ast E_n \simeq W(k)[[u_1,\cdots,u_{n-1}]][\beta^{\pm
1}]$, where $\beta$ is a class living in degree $2$. For instance, when $n=1$,
Morava $E$-theory is precisely $p$-adic complex $K$-theory $KU^\wedge_p$. A priori , there's no reason for $E$-theory to be a multiplicative cohomology
theory (i.e., an $\Eoo$-ring spectrum). But Goerss, Hopkins, and Miller proved
with what's known as Goerss-Hopkins obstruction theory (I livetexed notes from
this year's Talbot workshop here , which was on obstruction
theory, but you should check the Talbot website for the official and edited
notes) that $E_n$ really is an $\Eoo$-ring spectrum! (It seems appropriate to remark here that Lurie has recently given an alternative moduli-theoretic proof of this result; see here .) They also proved something
more: if $\mathbf{G}_n$ denotes the profinite group of automorphisms of the
geometric point, then there is a lift of the action of $\mathbf{G}_n$ to an
action on $E$-theory via $\Eoo$-ring maps. Moreover, $\mathrm{Aut}(E_n) \simeq
\mathbf{G}_n$. (For instance, at height $1$, the group $\mathbf{G}_1 \simeq
\mathbf{Z}_p^\times$, and the action of $\mathbf{G}_1$ on $E_1 = KU^\wedge_p$
is given by the Adams operations.) We can now realize the geometric point itself, by quotienting out the ideal
$\mathfrak{m}$. This is a general procedure that you can do in homotopy theory:
if $R$ is a ring spectrum, and $I\subseteq \pi_\ast R$ is an ideal generated by
a regular sequence, you can form the quotient $R/I$ (by taking iterated
cofibers). But if $R$ is an $\Eoo$-ring, there's no guarantee that $R/I$ will
also be an $\Eoo$-ring: this is true with Morava $E$-theory and the ideal
$\mathfrak{m}$. The quotient $E_n/\mathfrak{m}$ is denoted $K(n)$, and is
called Morava $K$-theory. (For instance, when $n=1$, Morava $K$-theory is
essentially $K$-theory modulo $p$.) The spectrum $K(n)$ is not an $\Eoo$-ring
--- it is only an $A_\infty$-ring, i.e., an $\mathbf{E}_1$-ring spectrum. Note,
also, that $K(n)$, while complex-orientable, is not Landweber exact. I should mention here that I'm really
talking about the 2-periodic versions of all these cohomology theories, but
this'll suffice for now. Why do chromotopists care, though? For this, we need to embark on a brief
detour. The moduli stack $\Mfg$ admits a filtration by height. If $\Mfg^{\geq
n}$ denotes the moduli stack parametrizing formal groups of height at least
$n$, we have an exhaustive filtration of closed substacks $$\cdots\subset
\Mfg^{\geq 2}\subset \Mfg^{\geq 1}\subset \Mfg.$$ Note that the complement of
each of these inclusions is open, hence flat. It follows from the Landweber
exact functor theorem that there's a spectrum corresponding to
$\Mfg^{<n}\hookrightarrow \Mfg$. This spectrum turns out to be intimately
related to Morava $E$-theory (for instance, they have the same Bousfield class). It turns out that we can replicate this filtration in the category of spectra
via the functor $\Sp\to \QCoh(\Mfg)$ described above. This is the content of
the Ravenel conjectures. Let's write $L_n X$ for the Bousfield localization (I
wrote another answer here that might be useful) of $X$ with respect to $E$-theory, and $L_{K(n)} X$ for
the Bousfield localization of $X$ with respect to Morava $K$-theory. When you
work in the $K(n)$-local stable homotopy category, the action of $\mathbf{G}_n$
on $E$-theory becomes a continuous action. There are four remarkable theorems relating the structure of the stable
homotopy category to $\Mfg$. Chromatic convergence: Let $X$ be a finite $p$-local spectrum. Then $X$ is the (homotopy) limit of its chromatic tower
$$\cdots\to L_2 X\to L_1 X\to L_0 X.$$ The thick subcategory theorem: There's an exhaustive filtration of "thick subcategories" (i.e., a subcategory that's closed under retracts, finite limits, and finite colimits)
$$\cdots\subset \mathscr{C}_2\subset \mathscr{C}_1\subset \mathscr{C}_0 =
\Sp^\omega,$$
such that any thick subcategory of the category of spectra is one of the
$\mathscr{C}_k$.
Moreover, each of the subcategories $\mathscr{C}_n$ is defined to contain those spectra for which the $K(m)$-homology is zero for $m<n$.
Note the similarity to the height filtration!
(The similarity is not unexpected, since a spectrum is in $\mathscr{C}_k$ when
its associated sheaf is supported on $\Mfg^{\geq k}$.) Chromatic fracture: There's a (homotopy) pullback square
$$\require{AMScd}
\begin{CD}
L_n X @>>> L_{K(n)}X \\
@VVV @VVV\\
L_{n-1} X @>>> L_{n-1}L_{K(n)}X.
\end{CD}$$ The Devinatz-Hopkins fixed points theorem: the continuous homotopy fixed points $E_n^{h\mathbf{G}_n}$ of the $\mathbf{G}_n$-action on $E_n$ is equivalent to $L_{K(n)} S$. This gives rise to a homotopy fixed point spectral sequence (sometimes called the Morava spectral sequence)
$$E_2^{s,t} = H^s_c(\mathbf{G}_n,\pi_t E_n) \Rightarrow \pi_{t-s} L_{K(n)} S.$$ Combining all this, we see that the first step in computing $\pi_\ast S$ would
be to compute $\pi_\ast L_{K(n)} S$, which'd follow from the Morava spectral
sequence. It turns out that this is exceedingly hard, but (as usual) height $1$
is manageable. See Henn's notes on the
arXiv, which works out this case. Instead of attempting to compute the group cohomology of this huge profinite
group, we can try to detect classes by looking at homotopy fixed points with
respect to smaller subgroups. If $G\subseteq \mathbf{G}_n$ is a finite
subgroup, we can consider the homotopy fixed points $E_n^{hG}$, and there's a
map $L_{K(n)} S\to E_n^{hG}$, which gives a composite homomorphism $\pi_\ast S
\to \pi_\ast L_{K(n)} S \to \pi_\ast E_n^{hG}$. This is particularly
interesting when $G$ is a maximal finite subgroup, because we recover some
well-known spectra. At height $1$ and the prime $2$, we know that $\mathbf{G}_1 \simeq
\mathbf{Z}_2^\times \simeq \mathbf{Z}_2 \times \mathbf{Z}/2$, so the maximal
finite subgroup is $\mathbf{Z}/2$. The group action on $E_1 = KU^\wedge_2$ is
given by complex conjugation, so $E_1^{h\mathbf{Z}/2}$ is the universally loved
spectrum $KO^\wedge_2$. At height $2$, I recall reading somewhere that the
fixed points $E_2^{hG}$ (for $G$ a maximal finite subgroup of $\mathbf{G}_n$)
is related to $TMF$ via
$$L_{K(2)} TMF \simeq \prod_{\# S_p}E_2^{hG},$$
where $S_p$ is the set of isomorphism classes of supersingular elliptic curves
over $\overline{\mathbf{F}_p}$. This follows essentially by construction; an
analogue at higher chromatic height is described in Chapter 14 of Behrens-Lawson . But $KO$ and $TMF$ are not complex-oriented! Instead, they admit orientations
from $MSpin$ and $MString$: there are $\Eoo$-maps $MSpin \to KO$ and $MString
\to TMF$ that lift the Atiyah-Bott-Shapiro orientation and the Witten genus.
This is in Ando-Hopkins-Rezk , but
it's hard to work through. There's an overview in Chapter 10 of the TMF book
(see here ), and some
notes in Appendix A.3 of Eric Peterson's book
project . Let me now try to answer some questions in your eighth paragraph. The
nilpotence theorem says that elements in the kernel of $\pi_\ast R \to MU_\ast
R$ are nilpotent. (A simple corollary is Nishida's nilpotence theorem: if
$R=S$, then everything in $\pi_\ast S$ is torsion, and since $MU_\ast$ is
torsion-free, the kernel of $\pi_\ast S\to MU_\ast$ is the whole of $\pi_\ast
S$, so anything in $\pi_\ast S$ is nilpotent.) The proof of this theorem goes
by filtering the map $S\to MU$, which is presumably what you mean by "things
between $S$ and $MU$". (I'm not sure what you mean by "above" the sphere: it is
the initial object in the category of spectra.) We have a sequence of maps $\ast\to \Omega SU(2) \to \cdots\to \Omega SU
\xrightarrow{\sim} BU$ (the last equivalence is thanks to Bott periodicity).
Consequently, we get maps $\Omega SU(n) \to BU$ for every $n$, and the Thom
spectrum of the corresponding complex vector bundle over $\Omega SU(n)$ is
denoted $X(n)$. For instance, $X(1) = S$ and $X(\infty) = MU$. This is a
homotopy commutative ring spectrum, but since the map $\Omega SU(n) \to BU$ is
a $2$-fold loop map, it is at best (for $n\neq 0,\infty$) an
$\mathbf{E}_2$-ring spectrum. (It's not an $\mathbf{E}_3$-ring spectrum, see here .)
Each $X(n)$ admits a canonical map from $S$ and to $MU$; moreover, the map
$X(n) \to MU$ is an equivalence below degree $2n+1$. The proof of the
nilpotence theorem now reduces to showing that if the image of $\alpha$ under
$h(n):\pi_\ast R \to X(n)_\ast R$ is zero, then the image of $\alpha$ under
$h(n+1)$ is also zero. I ran a seminar last month on this stuff; I wrote detailed notes at http://www.mit.edu/~sanathd/iap-2018.pdf , which expand on the discussion
above. Good sources to learn this stuff are Jacob Lurie's course from eight years ago and COCTALOS .
For more references, check out this
page . I hope this helps;
let me know if there's something I should add/talk more about.
|
{
"source": [
"https://mathoverflow.net/questions/273834",
"https://mathoverflow.net",
"https://mathoverflow.net/users/41291/"
]
}
|
275,251 |
The title of the question more or less says it all.
The question asks for precise and scientific descriptions (submission-rules, editor-behavior, referee-recruitment, anonymity issues in an age where handwriting was still the norm, all the way to rejection or acceptance-and-concomitant-galley-proof-process), examples (maybe even scans of some handwritten referee reports from this journal and this time), or pointers to articles in historical journals on precisely this topic. The motivation is mainly historical interest, together with my having to deliver a referee report posing some problems, and some hope on my part that even from seemingly-ancient examples one can learn new (or at least be reminded of known) useful ideas for the perennial problem of refereeing mathematical papers. Remarks. It would not surprise me if (a question equivalent to) the present question had been asked recently on the web, but I did not find it (though not searching around for long). It would also not surprise me (see below) if there already was a book like "The Acta Mathematica---Then and Now", or something like that, in folio format and replete with high-resolution photographic reproductions. But I did not find one. There is a biography on Mittag-Leffler by A. Stubhaug. There is a book on letters between Poincaré and Mittag-Leffler by P. Nabonnand, but both seem not to contain information on the refereeing process of the Acta Mathematica in the early days, let alone a dedicated study. I decided against phrasing the question in the form "How were mathematical referee reports written in the pre-typewriter-age?" since this would involve the ill-defined concept pre-typewriter-age. Typewriters gradually grew into being during the 1800s, notable hot-beds having been Italy and the USA. One could have phrased the question as "What are concrete, well-documented examples of non-typewritten mathematical referee reports?" but it seemed to me that localizing at the Acta Mathematica could be a more fruitful and focused historical question. This journal's history is likely to be very well documented. The late 1800s is a period which sits squarely within what is conventionally called modernity, and many original documents are bound to be extant even today, especially in a country as---relatively---untouched by the turmoils of the twentieth century as Sweden.
I chose the end-point 1918 of the time-interval asked about for no precise reason. Most importantly, the journal is operational to this day, and there may be many Swedish mathematicians among MO users who are in the know, or even actively involved in the Acta Mathematica. I did decide upon asking it here---and not in some other, more historical forum---consciously.
|
Looking over the following references, I believe that they contain some of what is being asked after: Nickerson, Sylvia (2012). Referees, publisher’s readers and the image of mathematics in nineteenth century England. Publishing History, 71 , 27-67. Link . (Sample quotation: "The article examines the processes of refereeing, and the role of the editor, at nineteenth century mathematical journals by looking at the Cambridge Mathematical Journal and Acta Mathematica ," p. 27.) One of the references in the above manuscript is Chapter 8 (pp. 139-164) in an earlier work: Parshall, K. H. (2002). Mathematics Unbound: The evolution of an international mathematical research community, 1800-1945 (No. 23). American Mathematical Society. Chicago. Link . (Chapter 8 is entitled: "Gosta Mittag-Leffler and the Foundation and Administration of Acta Mathematica .") And among the many references in this latter chapter is the following article of potential interest: Domar, Y. (1982). On the foundation of Acta Mathematica. Acta mathematica, 148 (1), 3-8. Chicago. Link .
|
{
"source": [
"https://mathoverflow.net/questions/275251",
"https://mathoverflow.net",
"https://mathoverflow.net/users/108556/"
]
}
|
275,601 |
Title edited I thank მამუკა ჯიბლაძე and Corbennick for their suggestion on the title of this question. I changed the title based on the suggestion of Corbennick. What is an example of a manifold $M$ which does not admit an atlas $\mathcal{A}$ with the following property?: For every two charts $(\phi,U)$ and $(\psi,V)$ in $\mathcal{A}$, $\psi \circ \phi^{-1}$ is a polynomial map.(Its components are polynomial functions). Motivation: The motivation for this question comes from the concept "Affine manifolds" . I am indebted to Mike Cocos, for learning this concept and its related problem, that is Chern conjecture.
|
If I remember correctly, this is impossible for any (nonempty) simply-connected compact manifold of positive dimension. In particular, $S^2$ cannot have such an atlas. Off the top of my head, I don't remember where I saw this statement, but I'll try to find a reference. Followup: I still don't remember the reference, but I think that the proof goes something like this: By hypothesis, both $\tau=\psi\circ\phi^{-1}$ and $\sigma = \phi\circ\psi^{-1}$ are polynomial maps, and hence they are defined everywhere on $\mathbb{R}^n$. By hypothesis there is a nonempty open domain $D_1\subset\mathbb{R}^n$ such that $\sigma(\tau(x)) = x$ for all $x\in D_1$ and a nonempty open domain $D_2\subset\mathbb{R}^n$ such that $\tau(\sigma(y)) = y$ for all $y\in D_2$. Since $\sigma$ and $\tau$ are polynomial mappings, it follows that $\sigma(\tau(x)) = x$ for all $x\in\mathbb{R}^n$ and $\tau(\sigma(y)) = y$ for all $y\in\mathbb{R}^n$. Thus, $\sigma$ and $\tau$ are globally invertible. Moreover, by the chain rule, we have $\sigma'(\tau(x))\tau'(x) = I$ for all $x\in \mathbb{R}^n$, and, taking determinants, it follows that $\det(\tau'(x))$ and $\det(\sigma'(\tau(x)))$, which are polynomial in $x$ must actually be (nonzero) constants. Consequently, it follows that $\Lambda^n(T^*M)$ carries a (unique) flat connection such that, if $\mathrm{d}x$ is the standard volume form on $\mathbb{R}^n$, then $\psi^*(\mathrm{d}x)$ is a (local) parallel section of $\Lambda^n(T^*M)$ for each $(\psi,U)$ in the atlas $\mathcal{A}$. By simple-connectivity, there is a global parallel volume form $\mu$ on $M$, and we can, composing the members of our atlas with linear transformations of $\mathbb{R}^n$, get a new atlas with polynomial transitions that satisfies $\psi^*(\mathrm{d}x) = \mu$ on $U$ for all $(\psi,U)$ in the new atlas. There is also a way to analytically continue the map $\psi$ coming from a chart $(\psi,U)$ to the domain $U\cup V$ for any chart $(\phi,V)$ for which $U\cap V$ is non-empty: Since $\tau$ is defined on all of $\mathbb{R}^n$, and since $\psi = \tau\circ\phi$ on $U\cap V$, one can take $\psi(q) = \tau(\phi(q))$ for all $q\in V$, which extends $\psi$ to $V$. Using this construction, any $(\psi,U)\in\mathcal{A}$ can be uniquely analytically continued along any smooth path in $M$, and one easily verifies that this extension depends only on the homotopy class of the path with fixed endpoints. Since $M$ is simply connected, this means that $\psi$ can be extended uniquely analytically to all of $M$, and hence $\psi:M\to\mathbb{R}^n$ is a smooth map that satisfies $\psi^*(\mathrm{d}x) = \mu$ on all of $M$. Obviously, this is impossible.
|
{
"source": [
"https://mathoverflow.net/questions/275601",
"https://mathoverflow.net",
"https://mathoverflow.net/users/36688/"
]
}
|
277,069 |
Disclaimer: I don't feel qualified to ask this question and yet it's been troubling me for some time now and I lost my patience and decided to ask to get some kind of answer. If there are any stupid mistakes please treat them as such and try to focus on the main issue raised if at all possible. As the title suggests I'm struggling with the meaning of "Homology" . In particular how are "Homology" and "Cohomology" related. By the end of my question I hope it will be clear what I mean. Let me start with some of the possible interpretations I'm (somewhat) familiar with, and after that let me say what troubles me. (All categories and functors are $\infty$ unless stated otherwise) Cohomology $\sim \operatorname{Hom}$ — Homology $\sim \otimes$ To make this precise consider the suspension $\infty$ -functor sending spaces to their suspension spectra $\Sigma^{\infty}_+ :\mathrm{Spaces} \to \mathrm{Sp}$ . The category of spectra is a symmteric monoidal $\infty$ -category so for every space $X$ and spectrum $E$ one can
define the $E$ -homology of $X$ as the homotopy groups of the smash
product $E_*X\mathrel{:=}\pi_*(\Sigma^{\infty}_+X \otimes_{\mathbb{S}} E)$ .
The $E$ -cohomology of $X$ in this picture is the homotopy groups of
the mapping spectrum $E^*X\mathrel{:=}\pi_*(\operatorname{Map}(\Sigma^{\infty}_+X,E))$ . Homology $\sim$ Abelianization To make this precise one can consider the tangent category to $\mathrm{Spaces}$ which is the fiberwise stabilization of the codomain fibration $\mathrm{Spaces}$ . The fiber over a space $X$ will be the category spectra parametrized by $X$ . Then one can define the Homology of $X$ as the image of the identity map $X \to X$ under the stabilization procedure. This is the "absolute cotangent complex" $L_X$ . One has a kind of shriek pushforward for these parametrized spectra which for the case $X \to \mathrm{pt}$ sends $L_X$ to $\Sigma^{\infty}_+X$ and one recovers some of the above from this viewpoint (I'm not so sure about this statement suddenly, is this true?). In a sense this is the relative setting for the above. Cohomology $\sim \mathrm{limits}$ - Homology $\sim \mathrm{colimits}$ To make this precise start with a local system over a space $X$ . Let's take as a definition for a local system a functor from $X$ considered as an infinity groupoid to some category of coefficients (say spectra). Take this local system $L:X \to \mathrm{Sp}$ and define $L$ -cohomology of X to be $\operatorname{Lim} L$ (this coincides with the sheaf cohomology definition) and $L$ -homology to be $\operatorname{Colim} L$ (giving the same answer as 1 for the case of a constant functor $L=E$ ). Homology $\sim$ dual to Cohomology This is the most cheeky definition. There are many flavors of this I believe the basic archetype being the Poincaré duality for oriented manifolds $H^i_{\mathrm c}(M) \cong H_{n-i}(M)$ . The main idea is to define homology in such a way that one gets "Poincaré duality". For example in Verdier duality for locally compact (sufficiently nice) spaces one can define homology with coefficients in a sheaf $F$ as the compactly supported cohomology with coefficients in the Verdier dual of $F$ . For example on a manifold if $F= \mathbb{Z}$ is the constant sheaf then the Verdier dual will be $\operatorname{OR}_M$ the orientation sheaf (perhaps shifted depends on one's conventions). The point is that this definition is concocted so that one always has a duality between homology and cohomology. This can be done in any cohomology theory which has good duality properties (i.e. six functors). Why am I not satisfied? Here are my concerns. Some of the interpretations above answer some of the concerns but none of them answer all of the concerns in a satisfactory way: Lack of convenient relative framework: For sheaf cohomology one has a very convenient framework for working in a relative situation (push/pull) in any context no matter how general. All one needs is a site and one immediately can ask questions about how cohomology behaves in this site, what kind of properties does it satisfy? Does it have 6 functor formalism? If not maybe at least 5 or 4? Does it have any interesting dualities? etc.… For Homology one seems to run into several persistent problems when trying to translate the above interpretations into a relative general setting like this. Using duality as a crutch: As much as I like dualities sometimes I feel like we're being a bit unfair to "Homology" treating it like a deformed creature which only has a right to exist as a dual to cohmology when in fact homology is the older brother of the two! Asymmetry between co/homology: In cohomology one has sheaves, sections, resolutions etc.… What do we have in homology? 
I'm kind of wishing that all the homology business is part of a bigger story Cosheaf Homology — Sheaf Cohomology . Unfortunately I have no idea what the words in the left hand side mean or even what they should mean. I just wish there was some way to put homology and cohomology on an equal footing. Only locally constant data : This is related to the above point. Why is there no "Constructible Homology" or "Coherent Homology" ? Why doesn't Homology deserve these variants? I hope by now I've made it clear what's my "problem" with my current understanding of Homology. As I said I don't feel like I'm qualified to ask this question so if anyone has any suggestion for an edit or a revision please don't even ask permission just edit away!
|
Let's take coefficients in a field $k$, for simplicity. On 2): the singular cohomology of a topological space $X$ is the dual of its singular homology, almost by definition. But if $X$ is a space for which singular cohomology is not the same as sheaf cohomology, then the sheaf cohomology of $X$ need not have a predual. For example, if $X$ is the Cantor set, then the sheaf cohomology of $X$ with coefficients in the constant sheaf $\underline{k}$ is the vector space of locally constant functions from $X$ into $k$. This is a vector space of countable dimension over $k$, so it cannot arise as the dual of anything. On 1) and 4): part of the point of the six-functor formalism is that it incorporates things like homology automatically. For nice spaces $X$, singular cohomology = sheaf cohomology with coefficients in the constant sheaf, and singular homology = compactly supported sheaf cohomology with coefficients in the dualizing sheaf. Or, in six-functor notation, Cohomology of $X$ = $f_* f^* k$ and homology of $X$ = $f_! f^! k$
(here $f$ is the projection map from $X$ to a point, and all functors are derived). These constructions are related as follows: a) If the topological space $X$ is locally nice (so that the constant sheaf satisfies Verdier biduality), then cohomology $f_* f^* k$ is the dual of homology $f_! f^! k$. This is satisfied for many spaces of interest (for example, finite simplicial complexes, underlying topological spaces of complex algebraic varieties, ...) b) If the topological space $X$ is compact, then homology $f_! f^! k$ is the dual of cohomology $f_* f^* k$. This applies even when $X$ is locally very badly behaved, like the Cantor set. If $X$ is both compact and locally nice, then both of these arguments apply, and the homology and cohomology of $X$ are forced to be finite-dimensional.
|
{
"source": [
"https://mathoverflow.net/questions/277069",
"https://mathoverflow.net",
"https://mathoverflow.net/users/22810/"
]
}
|
278,130 |
Do you think the letter $\wp$ has a name? It may depend on community - the language, region, speciality, etc, so if you don't mind, please be specific about yours. (Mainly I'd like to know the English names, if any, but other information is welcome.) If yes, when and how did you come to know it? When, how, and how often do you mention it? (See below.) What's the origin of the letter? In computing, various names, many of which are bad, have been given to $\wp$ . See my answer . Background: (Sorry for being a bit chatty.) Originally I raised a related question at Wikipedia. The user Momotaro answered that in math community it's called "Weierstrass-p". Momotaro also gave a nice reference to the book The Brauer-Hasse-Noether Theorem in Historical Perspective by Peter Roquette. The author's claim supports Momotaro. (The episode in the book about the use of $\wp$ by Hasse and Emmy Noether is very interesting - history amuses - but it's off topic. Read the above link to Wikipedia. :) However I'm not completely sure yet, because the occasions on which the letter's name becomes a topic must be quite limited. For example perhaps in the classroom a professor draws $\wp$ , and students giggle by witnessing such a weird symbol and mastery of handwriting it; then the professor solemnly announces "this letter is called Weierstrass-p", like that? And "Weierstrass-p" is never an alias of the p-function? After reading Momotaro's comment, I think I've read somewhere that the letter was invented by Weierstrass himself, but my memory about it is quite vague. Does anyone know something about it? Is it a mere folklore, or any reference? I don't think mathoverflow is a place for votes, but if it were, I'd like one: "Have you ever heard of the name of the letter $\wp$ ? Slightly off-topic, about the p-function's name in Japanese; In Japanese, the names of the Latin alphabets are mostly of English origin, エー, ビー, シー... (eh, bee, cee, etc.) But $\wp$ -function is called ペー (peh), indicating its German origin. See e.g. 岩波 数学公式 III, p34, footnote 2 I don't know the name of the letter in Japanese. (In fact, most non-English European languages read "p" as "peh"...) EDIT Examples of typography in some early literature (off-topic, but interesting): First see the excellent comment below by Francois Ziegler $\wp$ that looks like the original (?) and today's glyph: Elliptische Functionen. Theorie und geschichte. (sic) (1890) by Alfred Enneper with Felix Müller , p60 . The first ed (1876) by Enneper alone does not seem to mention $\wp$ -function. A Course of Modern Analysis (1902) by Whittaker, 1st ed, p322 . (Famous Whittaker & Watson, but the 1st ed was by Whittaker alone.) Similar to Kurrent/Sütterlin (see the answer below by Manfred Weis ) lowercase p. All were published by the publisher Gauthier-Villars in Paris, in French: Traité des fonctions elliptiques et de leurs applications (1886) by Georges Henri Halphen , p 355 . Éléments de la Théorie des Fonctions Elliptiques (1893) by Jules Tannery and Jules Molk , vol 1, page 156 . Principes de la théorie des fonctions elliptiques et applications (1897) by Emile Lacour and Paul Appell , p 22 . BTW Abramowitz & Stegun uses $\mathscr{P}$ . Wow. See p 629 .
|
Apparently first introduced by Weierstrass in Winter 1862/63 lectures published by H. A. Schwarz (1881, 1885 , 1892 , 1893 ), §9: Mit der Sigma -Function $\mathfrak Su$ ist die Pe -Function $\wp u=\wp(u\mid\omega,\omega')=\wp(u;g_2,g_3)$ durch die Gleichung
$$
\wp u=-\frac{d^2}{du^2}\log\mathfrak S u=\frac{(\mathfrak S'u)^2-\mathfrak S u\mathfrak S''u}{\mathfrak S^2u}
$$
verbunden. (...) The letter and a reference to Schwarz's notes also appear on the first page of Weierstrass's paper Zur Theorie der elliptischen Functionen ( 1882 ). Attribution in e.g. (Schwarz student) H. Hancock's Lectures on the Theory of Elliptic Functions ( 1910 ), p. 309: (...) the function which we thus have was called by Weierstrass the Pe-function and denoted by
$$
\wp(u)\qquad\text{or more simply}\qquad\wp u
$$ or R. Godement's Analysis I ( 2004 ), p. 181: (...) the famous function
$$
\wp(u)=1/u^2+\sum_{\omega\ne0}\left[1/(u-\omega)^2-1/\omega^2\right]
$$
of Weierstrass (it already appeared in Eisenstein), with a $p$ which smacks of the gothic, of the italic and of the cursive, chosen by the inventor 65 and retained by posterity. (...) 65 His biography in the DSB tells us that in the course of his fourteen years of high-school teaching he had to teach mathematics, physics, German, botany, geography, history, gymnastics “and even calligraphy”. Note added: While I don’t know of a handwritten specimen by Weierstrass himself (asked about in comments by @NateEldredge and @ManfredWeis), there are a few in lecture notes of S. Pincherle who had studied with Weierstrass in Berlin: ( 1899-1900, Chap. XXII ).
|
{
"source": [
"https://mathoverflow.net/questions/278130",
"https://mathoverflow.net",
"https://mathoverflow.net/users/56062/"
]
}
|
278,134 |
Using the French convention, the content of the $i \times j$ box in the Young diagram of a partition $\lambda$ is $i-j$. Now if
$\lambda$ is partition of $n$ and $\sigma_\lambda: S_n \longrightarrow V_\lambda$ is the corresponding irreducible representation of the symmetric group $S_n$ then the sum of contents of all boxes in the young diagram of $\lambda$ equals \begin{equation} {\text{tr} \, \sigma_\lambda \big( t \big) \over
{\text{dim} V_\lambda}} \cdot \big| T \big|
\end{equation} where $T$ is the conjugacy class consisting of all transpositions and $t$
is any choice of transposition. Moreover each Young tableau
$Y_\lambda$ encodes an eigenvector in $V_\lambda$ for the operator $\sigma_\lambda \big( J_k \big)$ with eigenvalue $c_k$ where \begin{equation} J_k \, := \, \sum_{i=1}^{k-1} \, \big(i,k \big)
\, \in \Bbb{C} \big[ S_n \big]
\end{equation} and $c_k$ is the content of the box in $Y_\lambda$ labeled by $k$. Consider now the Young-Fibonacci lattice whose elements consist of words $w = a_1 \cdots a_d$
taken from the alphabet $\{1,2 \}$ which can be visualised by stacking boxes into adjacent vertical columns going from left to right such that the number of boxes in the $i$-th column is $a_i$. The rank $|w|$ of a word is simply the number of boxes in such a picture; equivalently $|w|$
equals $a_1 \, + \, \cdots \, + \, a_d$.
I won't describe the covering relations that give the lattice structure --- but suffice it to say each word $w$ of rank $n$ encodes an irreducible representation $V_w$ of the Okada algebra $\mathcal{A}_n$ and each complete chain $Y_w$ ending at $w$ indexes a basis vector in $V_w$. Question: (1) Are there pairwise commuting operators $\tilde{J_1}, \dots, \tilde{J_n}$ within the Okada algebra $\mathcal{A}_n$ for which each complete chain $Y_w$ (viewed as a basis vector in $V_w$) is a simultaneous eigenvector and (2) is there
a notion of content (a value for each covering relation in the Young-Fibonacci lattice) so that the $k$-th content $c_k$ of $Y_w$ (viewed as a complete chain) is the eigenvalue of $\tilde{J_k}$ corresponding to $Y_w$ and (3) will the sum of such contents along any complete chain $Y_w$ be constant ? regards, A. Leverkühn
|
|
{
"source": [
"https://mathoverflow.net/questions/278134",
"https://mathoverflow.net",
"https://mathoverflow.net/users/43889/"
]
}
|
278,268 |
My question is prompted by this illustration from Eugenia Cheng’s book Beyond Infinity , where it appears in reference to the Basel problem. Is it known whether the infinite set of squares of side $\frac{1}{2}, \frac{1}{3}, \frac{1}{4},...$ can be packed into three quarters of a unit square? (It doesn't seem obvious that the illustrated packing could be continued to infinity.)
|
The standard simple proof that $\sum_{n=1}^\infty \frac1{n^2}$ converges is to round each $n$ down to the nearest $2^k$; this rounds each $\frac1{n^2}$ up to the nearest $\frac1{2^{2k}}$. In fact, one gets $2^k$ copies of $\frac1{2^{2k}}$ for each $k$; hence
$$\sum_{n=1}^\infty n^{-2} < \sum_{k=0}^\infty 2^k\frac1{2^{2k}} = \sum_{k=0}^\infty \frac1{2^{k}} = 2.$$ This is a sloppy estimate, and one way of improving it is to round each $n$ down to the nearest $2^k$ OR $3\cdot2^{k}$. The first cluster of $\frac1{2^{2k}}$s, comprising a single $\frac1{2^0}$, is unaffected; all subsequent clusters cleave into $2^{k-1}$ copies of $\frac1{2^{2k}}$ and $2^{k-1}$ copies of $\frac1{(3\cdot2^{k-1})^2}$. In this way, we get
$$\sum_{n=1}^\infty n^{-2}
< 1+\sum_{k=1}^\infty 2^{k-1}\frac1{2^{2k}} + \sum_{k=1}^\infty 2^{k-1}\frac1{(3\cdot2^{k-1})^2}
\\ = 1+\sum_{k=1}^\infty \frac1{2^{k+1}} + \frac19\sum_{k=1}^\infty \frac1{2^{k-1}} = 1 + \frac12+\frac29.$$ This sum (minus the irrelevant first term) can be represented geometrically as in the following image, which is reasonably similar to the one you posted. (But the shaded region has area $\frac16+\frac19=\frac5{18}>\frac14$.)
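For orientation, the numbers involved are easy to evaluate; a small Python check of the quantities appearing above (a numerical sketch only, which says nothing about the geometry of the packing itself):

```python
from math import pi

total_area = pi**2 / 6 - 1                      # total area of the squares of side 1/2, 1/3, 1/4, ...
partial = sum(1 / n**2 for n in range(2, 10**5))  # partial sum, as a cross-check
bound = 1/2 + 2/9                               # the estimate above, minus the irrelevant first term

print(f"sum of square areas : {total_area:.6f} (partial sum {partial:.6f})")
print(f"estimate 1/2 + 2/9  : {bound:.6f}")
print(f"three quarters      : {0.75:.6f}")
```

The printed values are roughly $0.645$, $0.722$ and $0.75$.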
|
{
"source": [
"https://mathoverflow.net/questions/278268",
"https://mathoverflow.net",
"https://mathoverflow.net/users/8217/"
]
}
|
278,280 |
Gromov's waist inequality for unit n-sphere $\mathbb{S}^{n}$ says:
For any continuous function $f: \mathbb{S}^{n} \rightarrow \mathbb{R}^{m} $,
there is some $y \in \mathbb{R}^{m}$ s.t. $Vol_{n-m}(f^{-1}(y)) \geq Vol_{n-m}(\mathbb{S}^{n-m}) $. I'm wondering if there is an averaged version of the inequality,
comparing the averaged fiber volume and some averaged $\mathbb{S}^{n-m}$ volume.
For example,
is it true that for some constant $C(n, m)$ depending only on dimensions: $ \int_{f(\mathbb{S}^{n})} Vol_{n-m}(f^{-1}(y))
\geq C(n, m) Vol_{m}(f(\mathbb{S}^{n})) Vol_{n-m}(\mathbb{S}^{n-m}) $ It is, of course, interesting to consider the tubular version: $ \int_{f(\mathbb{S}^{n})} Vol_{n}(f^{-1}(y) + \epsilon )
\geq C(n, m) Vol_{m}(f(\mathbb{S}^{n})) Vol_{n}(\mathbb{S}^{n-m} + \epsilon ) $, as well as similar inequalities for ball, cube, etc. in place of sphere.
|
|
{
"source": [
"https://mathoverflow.net/questions/278280",
"https://mathoverflow.net",
"https://mathoverflow.net/users/105627/"
]
}
|
278,629 |
I hope this is a suitable MO question. In a research project, my collaborator and I came across some combinatorial expressions. I used my computer to test a few numbers and the pattern was suggesting the following equation for fixed integers $K\geq n>0$. $$\dfrac{K!}{n!K^{K-n}}\sum\limits_{ \begin{subarray}{c} k_1+\dotsb+k_{n}=K \\ k_i \geq 1 \end{subarray}}
\prod\limits_{i=1}^n \dfrac{k_i^{k_i-2}}{(k_i-1)!}=\displaystyle {K-1\choose n-1}.$$ We tried to think of a proof but failed. One can probably move these $K!, n!$ to the right and rewrite the RHS, or maybe move $K!$ into the summation to form combinatorial numbers like $K\choose k_1,k_2,\dotsc,k_n$. We don't know which is better. The questions are: Anyone knows a proof for this identity? In fact the expression that appears in our work is $\sum\limits_{ \begin{subarray}{c} k_1+\dotsb+k_{n}=K \\ k_i \geq 1 \end{subarray}}
\sigma_p(k_1,\dotsc,k_n) \prod\limits_{i=1}^n \dfrac{k_i^{k_i-2}}{(k_i-1)!}$, where $p$ is a fixed integer and $\sigma_p(\dotsc)$ is the $p$-th elementary symmetric polynomial. The equation in the beginning simplifies this expression for $p=0,1$. Is there a similar identity for general $p$? ----------Update---------- Question 2 is perhaps too vague, and I'd like to make it a bit more specific. Probably I should have written this down in the beginning, but I feared this is too long and unmotivated. But after seeing people's skills, I'm very tempted to leave it here in case somebody has remarks. In fact, question 2 partly comes from the effort to find a proof for the following (verified by computer). $$
\frac{1}{K!} \prod_{r=1}^{K} (r+1 -x)=
\sum_{n=1}^K \frac{(-1)^n}{n!} \left[ \sum_{p=0}^n K^{n-p} \prod_{r=1}^p (x +r-4)
\left(
\sum\limits_{ \begin{subarray}{c} k_1+\dotsb+k_{n}=K \\ k_i \geq 1 \end{subarray}}
\sigma_p(k_1,\dotsc,k_{n}) \prod\limits_{i=1}^n \dfrac{k_i^{k_i-2}}{(k_i-1)!}
\right) \right],
$$
where $x$ is a fixed number (in our case, an integer).
|
This is the answer to the first question; I wrote a long answer to Question 2 as a separate answer. Note that $A:=\sum_{k_i>0,k_1+\dots+k_n=K}\frac{K!}{n!k_1!\dots k_n!} \prod k_i^{k_i-1}$ is the number of forests on the ground set $\{1,2,\dots,K\}$ having exactly $n$ connected components and with a marked vertex in each component (the $k_i$ correspond to the sizes of the components). Add a vertex 0 and join it with the marked vertices. Then we have to count the number of trees on $\{0,1,\dots,K\}$ in which $0$ has degree $n$. Remember that the sum of $z_0^{d_0-1}\dots z_K^{d_K-1}$ over all trees on $\{0,\dots,K\}$, where $d_i$ is the degree of $i$, equals $(z_0+\dots+z_K)^{K-1}$. Substitute $z_{1}=\dots=z_K=1$ and take the coefficient of $z_0^{n-1}$. It is $\binom{K-1}{n-1}K^{K-n}$.
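For what it's worth, the identity is also easy to confirm by brute force for small parameters; a quick Python check (my own sketch, with the helper name `lhs` chosen here, independent of the forest-counting argument above):

```python
from math import comb, factorial
from itertools import product

def lhs(K, n):
    """Left-hand side of the identity: K!/(n! K^(K-n)) times the sum over compositions."""
    total = 0.0
    for ks in product(range(1, K - n + 2), repeat=n):   # candidate parts k_i >= 1
        if sum(ks) != K:
            continue
        term = 1.0
        for k in ks:
            term *= k**(k - 2) / factorial(k - 1)
        total += term
    return factorial(K) / (factorial(n) * K**(K - n)) * total

for K in range(1, 8):
    for n in range(1, K + 1):
        assert abs(lhs(K, n) - comb(K - 1, n - 1)) < 1e-6
print("identity checked for all 1 <= n <= K <= 7")
```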
|
{
"source": [
"https://mathoverflow.net/questions/278629",
"https://mathoverflow.net",
"https://mathoverflow.net/users/10333/"
]
}
|
278,786 |
In Recoltes et Semailles, Grothendieck remarks that the theory of motives is related to anabelian geometry and Galois-Teichmuller theory. My understanding of these subjects is not very solid at this moment, but this is what I understand: Anabelian geometry tries to ask how much information about a variety is contained in its etale fundamental group. In particular, there exist "anabelian varieties" which should be completely determined by the etale fundamental group (up to isomorphism). The determination of these anabelian varieties is currently ongoing. Galois-Teichmuller theory tries to understand the absolute Galois group $\text{Gal}(\bar{\mathbb{Q}}/\mathbb{Q})$ in terms of the automorphisms of the "Teichmuller tower", which is constructed as follows. We begin with the moduli stacks of curves with genus $g$ and $\nu$ marked points. These moduli stacks $\mathcal{M}_{g,\nu}$ have homomorphisms to each other, which correspond to "erasing" marked points and "gluing". The Teichmuller tower $\hat{T}_{g,\nu}$ comes from the profinite fundamental groupoids of these moduli stacks. The theory of motives is some sort of "universal cohomology theory" in the sense that any Weil cohomology theory (which is a functor from smooth projective varieties to graded algebras over a field) factors through it. This is obtained from some process of "linearization" of algebraic varieties (considering correspondences as morphisms, followed by the process of "passing to the pseudo-abelian envelope", and formally inverting the Lefschetz motive). Related to the theory of motives is the concept of a Tannakian category , which provides a kind of higher-dimensional analogue of Galois theory. I think the category of motives is conjectured to be a Tannakian category, via Grothendieck's standard conjectures on algebraic cycles (please correct me if I am wrong about this). So I'm guessing Tannakian categories might provide the link between the theory of motives and anabelian geometry and Galois-Teichmuller theory (which are both related to Galois theory) that Grothendieck was talking about in Recoltes et Semailles, but I'm not really sure. Either way, the ideas are still not very clear to me, and I'd like to understand the connections more explicitly.
|
Two clarifications: For anabelian geometry, you should ask how much information about a variety is contained in the Galois action on its etale fundamental group. While it's true that the motivic Galois group is a higher-dimensional analogue of the Galois group, it also should be true that motives are "just" a special kind of Galois representation, i.e. under the Tate conjecture the $\ell$-adic realization functor should give a faithful functor from motives to $\ell$-adic Galois representations, so the category of motives is the category of Galois representations with some restrictions placed on the objects and morphisms. Of course these restrictions are highly nontrivial. Only for the irreducible Galois representations do we have a good conjectural description of which ones come from motives, via the Fontaine-Mazur conjecture. So we can see that all 3 of these relate to Galois actions - the first two to Galois actions on fundamental groups, and the last to Galois actions on $\ell$-adic vector spaces. However, Galois actions are used in different ways in the three. Thinking about the three concepts, you might be led to questions like these: Can we construct Galois representations from the Galois action on the fundamental group of a curve? (this would be the first step in relating motives to anabelian geometry) Do these Galois representations arise from motives? (this would be the second step) Are these motives related to the geometry of a curve? (seeking a deeper connection to anabelian geometry) Can the class of motives arising this way be used to construct or describe the motivic Galois group? (now we bring in Grothendieck-Teichmuller theory) I think these questions at least touch on the beginning of what Grothendieck was thinking of. Since Grothendieck, people have heavily studied these questions, primarily in the case of unipotent quotients of the fundamental group, starting with the paper of Deligne on the fundamental group of the projective line minus three points. I think it's fair to say that the answer to all these questions is yes, with the largest caveat for the last question - I believe we can understand certain very special quotients of the motivic Galois group this way, but I don't think anyone has a strategy to construct the whole thing. The story goes something like this: Deligne looked at the maximal pro-$\ell$ quotient of the geometric fundamental group of the projective line minus three points. This is naturally an $\ell$-adic analytic group, and has a Lie algebra, which is an $\ell$-adic representation, and admits an action of the Galois group. This is supposed to be the $\ell$-adic realization of a motive, and Deligne worked to find the other realizations, including Hodge theory. This is a mixed motive, not a pure motive, so isn't constructed directly from linearizing algebraic varieties. All motives generated this way are mixed Tate motives, i.e. extensions of powers of the Tate motive (the inverse of the Lefschetz motive), and are everywhere unramified. One can define the Tannakian category of everywhere unramified mixed Tate motives, and the Tannakian fundamental group is known to act faithfully on the limit of these unipotent completions.
|
{
"source": [
"https://mathoverflow.net/questions/278786",
"https://mathoverflow.net",
"https://mathoverflow.net/users/85392/"
]
}
|
278,922 |
As we know, there are lots of consequences that follow from the presupposition of the Riemann Hypothesis. Similarly, are there any important consequences of the presupposition of $\mathbf{P} \neq \mathbf{NP}$? An alternative statement of $\mathbf{P} \neq \mathbf{NP}$ is the extended Church-Turing Thesis. So if we have a speedup algorithm in some model other than the classical Turing machine, do we have to find a new algorithm for the Turing machine under the assumption $\mathbf{P} \neq \mathbf{NP}$? That would mean we have to find new speedup algorithms for factoring and the like.
|
Because there are natural computational problems involving many mathematical objects, there are a bunch of implications of complexity class separations like $\mathrm{P} \neq \mathrm{NP}$. I think the first paper to investigate this idea is probably Mike Freedman's Complexity classes as mathematical axioms, which assumes a complexity class separation (namely $\mathrm{NP} \neq \mathrm{P}^{\#\mathrm{P}}$, which is stronger than $\mathrm{P} \neq \mathrm{NP}$) to prove that knots with certain properties exist. The main idea of all these arguments is to prove an implication like "If all objects of type $T$ satisfy property $P$, then there is an efficient algorithm for a problem which we assumed has no efficient algorithm." You can then deduce the existence of objects of type $T$ which satisfy property $\neg P$. (Here the meaning of "efficient" depends on the class separation you assume.) The exact thing Freedman proves is a little esoteric, so let me give two other examples that have a somewhat similar flavor. The systole of a metric manifold is the length of the shortest non-contractible loop on it (I'll also use the word for the shortest loop itself). Probably the first manifold whose systole you'd try to understand is a flat torus $\mathbb{T}^d = \mathbb{R}^d / \Lambda$ for some lattice $\Lambda = \langle v_1, \dots, v_d \rangle$, since these are pretty much the simplest metric manifolds. One natural thing might be to try to say something about the word length of the systole $\gamma$ when considered as an element of $\pi_1(\mathbb{T}^d)$ equipped with the generating set $\{ v_1^*, \dots, v_d^* \}$ where $v_i^*$ is the loop in $\mathbb{T}^d$ naturally associated to $v_i$. For example, maybe the systole always has a relatively short word expressing it. Say, maybe we can always write $\gamma =_{\pi_1(\mathbb{T}^d)} \sum_i n_i v_i^*$ with $\sum_i |n_i| < \sqrt{d}$ or something. It turns out that a modest strengthening of $\mathrm{P} \neq \mathrm{NP}$ actually rules this out. Specifically, assuming that $\mathrm{NP}$-hard problems do not have time $2^{o(n)}$ bounded-error probabilistic algorithms, we can prove the following: For any $\ell(d) = o(d / \log d)$, there exist infinitely many $d$ and $\Lambda=(v_1, \dots, v_d)$ such that every systolic loop $\gamma$ on the torus $\mathbb{R}^d / \Lambda$ has word length at least $\ell(d)$ in the generating set $\{ v_1^*, \dots, v_d^* \}$. The idea of the argument is just that such a bound would imply a sub-exponential time algorithm for the $\mathrm{NP}$-hard problem of computing the systole: namely, just enumerate all possible words in the generators of the given word length and pick the one which is represented by the shortest loop (this is easy to compute). You can use a similar argument also to show that faithful representations of $S_n$ have dimension at least $n^{\varepsilon}$ for some $\varepsilon > 0$ (assuming some version of the strong exponential time hypothesis). The basic idea is given here -- the argument is very simple. These arguments I think give some idea of why proving class separations should be hard. They immediately imply that mathematical objects of all kinds must have certain kinds of complexity, or else they could be used to give algorithms contradicting the class separations. So, the class separation simultaneously demonstrates the existence of such complexity in all such mathematical objects at once.
Bonus: Tim Roughgarden and Inbal Talgam-Cohen have some writing along these lines as well showing that class separations imply markets in which certain kinds of equilibria do not exist.
|
{
"source": [
"https://mathoverflow.net/questions/278922",
"https://mathoverflow.net",
"https://mathoverflow.net/users/14024/"
]
}
|
279,150 |
Apologies for the vagueness of the question. Background this thread has some nice examples of presheaves failing to be sheaves. Question Is there a generic way to measure "how badly" a presheaf fails at being a sheaf? Something like an invariant that "counts", up to some notion of equivalence, sections that fail to glue or restrict properly? Discussion Can we do this by comparing some invariant of a presheaf $P$ and its sheafification $\tilde P$? The comments on this old math SE thread make an attempt to argue that the Cech cohomologies (taking the cover refinement limit) of the two are equal. But is there something else that we can compare between $P$ and $\tilde P$?
|
This answer is inspired by the Embedding Calculus (aka Manifold Calculus) of Weiss and Goodwillie. This is a framework for studying certain presheaves on manifolds. The idea is that sheafification of a presheaf is analogous to the linearization of a function. From this point of view, sheafification is just the first in a sequence of approximations - for each $n$ there is the universal approximation of degree $n$. What I am doing below is describe the difference between the quadratic and the linear approximation, which one may think of as the principal part of the difference between a presheaf and its sheafification. I am not sure if this approach is useful in the context of algebraic geometry, or for the applications that you have in mind. But let me put it out here, FWIW. Let ${\mathcal F}$ be a presheaf on $X$. Suppose $x$ and $y$ are two points in $X$ that can be separated by disjoint open sets. Let us define the "bi-stalk" of $\mathcal F$ at $(x,y)$ as ${\mathcal F}_{(x,y)}=\operatorname{colim}_{U,V} \mathcal F(U\cup V)$, where $(U, V)$ range over pairs of disjoint neighborhoods of $x$ and $y$. There is a natural homomorphism from the bi-stalk to the product of stalks ${\mathcal F}_{(x,y)}\to {\mathcal F}_{x}\times {\mathcal F}_{y}$. If $\mathcal F$ is a sheaf then this homomorphism is an isomorphism. So you have a homomorphism for each such pair that measures the failure of $\mathcal F$ to be a sheaf. Here is a perhaps slightly more sophisticated version of this idea. We can use $\mathcal F$ to define some new presheaves on $X\times X$. We will define them on basic sets of the form $U\times V$. There is an evident diagram of presheaves $$\begin{array}{ccc}
{\mathcal F}(U \cup V) & \to & {\mathcal F}(U)\\
\downarrow & & \downarrow \\
{\mathcal F}(V) & \to & {\mathcal F}(U\cap V)
\end{array}$$ If $\mathcal F$ is a sheaf then this is a pullback square for every $U, V$. Define $\mathcal F_2$ to be the total homotopy fiber of this square, i.e. the homotopy fiber of the homomorphism from the initial corner to the pullback of the rest. We may want to think of $\mathcal F_2$ as a presheaf of chain complexes on $X\times X$. The cohomology of the associated sheaf is an invariant that measures the deviation of $\mathcal F$ from being a sheaf (roughly speaking - see next paragraph). If this invariant vanishes, one can construct similar invariants of higher order by looking at higher "cross-effects" of $\mathcal F$. In fact, the restriction of $\mathcal F_2$ to the diagonal is trivial, so we really want to consider cohomology relative to the diagonal. Also, there is a $\Sigma_2$ symmetry to this set-up, and we probably want to consider equivariant cohomology.
|
{
"source": [
"https://mathoverflow.net/questions/279150",
"https://mathoverflow.net",
"https://mathoverflow.net/users/74739/"
]
}
|
279,173 |
Excuse me for the concern, but I want to ask you a question. In 2002 Professor John Baez had published a few articles on his page regarding the possibility of applying $q$-mathematics in the science of physics (see [1]). The scripts were interesting but so far I could not find any article where $q$-mathematics was applied. Is there any published article where $q$-mathematics is applied? [1] http://math.ucr.edu/home/baez/week183.html
|
There has been quite a lot of literature on the applications of $q$-numbers, $q$-derivatives, $q$-deformations, etc., to various algebraic models of physics. Such applications range from $q$-deformations of simple harmonic oscillator(s) and angular momentum algebras to the development of quantum groups and their applications in nuclear physics, particle physics and field theories. They can be -roughly- divided into two broad categories (although experts might argue that such classifications can be made much more fine): Applications of a phenomenological nature: Various $q$-deformed oscillators or $q$-deformed rotators have been used. Their spectra, transition rates, matrix elements, etc. are proved to depend on the deformation parameter(s). "Playing" with the parameters allows curve fitting to experimental data. In lots of cases deformations based on more than one parameter (i.e. $q,p,..$-deformations) have been used. An interesting characteristic of such applications is that in many cases -for example in the fitting of the vibrational and rotational spectra of nuclei and molecules- the number of phenomenological $q$-parameters needed is significantly smaller than the number of traditional phenomenological parameters required to fit the same spectral data. (In many cases, the physical interpretation of such deformation parameters still lingers -see p.179- and imposes ideas of a more fundamental nature). In modern times, such ideas are also finding applications in semi-phenomenological cosmological models. See for example 9 where the dark energy is essentially considered to be a $q$-deformed scalar field. Just to mention a few papers in this category (my former PhD advisor has quite some work on the field): Generalized deformed oscillator and nonlinear algebras, C Daskaloyannis 1991 J. Phys. A: Math. Gen. 24 L789 Coupled $Q$-oscillators as a model for vibrations of polyatomic molecules, D.Bonatsos, C.Daskaloyannis, The Journal of Chemical Physics 106, 605 (1997) Quantum groups and their applications in nuclear physics, D.Bonatsos, C.Daskaloyannis, Progress in Particle and Nuclear Physics, v.43, 1999, p. 537-618 (see also here for the arxiv version). The many-body problem for $q$-oscillators, E G Floratos, Journal of Physics A: Mathematical and General, Volume 24, Number 20, 1991 Dynamical algebra of the $q$-deformed three-dimensional oscillator, J. Van der Jeugt, J. of Math. Phys. 34, 1799 (1993) WKB equivalent potentials for the $q$-deformed harmonic oscillator,
D Bonatsos, C Daskaloyannis and K Kokkotas, J. of Phys. A: Math. and Gen., Volume 24, Number 15, 1991 Introduction to Quantum Algebras, Maurice R. Kibler, arXiv:hep-th/9409012 (see especially the discussion in sections 7,8 and the references). An Introduction to Quantum Algebras and Their Applications, R. Jaganathan, arXiv:math-ph/0003018 (see the discussion of p.11-13) Interacting Dark Matter and $q$-Deformed Dark Energy Nonminimally Coupled to Gravity, Emre Dil, Advances in High Energy Physics, Volume 2016 (2016), Article ID 7380372 Applications of a more conceptual or fundamental nature: -->Such as for example the introduction and study of non-commutative space-times. Here, the non-commutativity is frequently expressed through deformed commutation relations between the coordinates or the functions on the space-time manifolds, leading to deformed algebras and non-commutative geometries. Fundamental quantization problems are frequently discussed in this setting: See for example this paper , this paper and this one (among lots of works in a similar spirit). The relation of $q$-mathematics and quantum groups to deformation quantization is another hot topic with a significant number -imo- of open questions. In Minimal areas from q-deformed oscillator algebras the authors argue that non-commutative space-times with dynamical commutation relations between the coordinates imply $q$-deformed algebras of observables and a kind of converse argument is also supplied. --> The quantum inverse scattering method and the definition(s) and study of quantum integrability (see p. 269) have given rise to quantum groups and quantum algebras . The mathematical developments associated with them have been greatly inspired and have -in return- significantly contributed to such directions of study. Jule Lamers' answer provides further details and references on that point. S. Majid's book is also worth to be mentioned as it contains a cataclysm of such ideas and applications. -->The case of deformed particles (deformed bosons or fermions) which interpolate between different statistics has been another line of interesting applications of this kind. A significant amount of related literature can be found at mathematical physics journals such as the Journal of Mathematical Physics, Journal of Physics A: Mathematical and General, Communications of Mathematical Physics, SIGMA etc.
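For readers who have not met these objects before, the most basic ingredient is the $q$-number; here is a minimal numerical illustration (using the common symmetric convention $[n]_q=(q^n-q^{-n})/(q-q^{-1})$, which reduces to the ordinary integer $n$ as $q\to 1$; individual papers above may use other variants):

```python
def q_number(n, q):
    """Symmetric q-number [n]_q = (q^n - q^-n) / (q - q^-1); tends to n as q -> 1."""
    return (q**n - q**(-n)) / (q - q**(-1))

# A q-deformed oscillator spectrum E_n ~ [n]_q interpolates away from the evenly
# spaced levels of the ordinary oscillator as q moves away from 1.
for q in (1.001, 1.05, 1.2):
    print(q, [round(q_number(n, q), 4) for n in range(1, 6)])
```

As $q\to 1$ the printed values approach $1,2,3,4,5$, the undeformed ladder of levels.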
|
{
"source": [
"https://mathoverflow.net/questions/279173",
"https://mathoverflow.net",
"https://mathoverflow.net/users/110748/"
]
}
|
279,558 |
I've asked a related question about nine months ago here , however, apparently, I lacked expertise to ask the precise question I want to ask here, as I wish to revisit the matter of universes. I hope it will not count as a duplicate. One way to deal with set-theoretic difficulties when studying large categories are Grothendieck universes.
In practice, it means that instead of studying, for example, the category of all abelian groups $\mathrm{Ab}$ we "fix" an arbitrary universe $\mathrm{U}$ and study the category $\mathrm{U\text{-}Ab}$ of all abelian groups which belong to $\mathrm{U}$.
As $\mathrm{U}$ was arbitrary, everything we prove about $\mathrm{U\text{-}Ab}$ will be true for all categories $\mathrm{V\text{-}Ab}$ for any universe $\mathrm{V}$. The thing that bothers me is: how can we circumvent the issue that we're not working with all abelian groups . Sure, adopting the axiom of universes ("every set is contained in some universe"), for every abelian group $G$ we have a an inverse $\mathrm{U}$ containing it, and that means that we have a category $\mathrm{U\text{-}Ab}$ which contains $G$ an object. But it doesn't solve all issues, at least not right away. Mike Shulman in his notes says: But then I don't understand why we can get away with this. Before reading this I though that any property of a $\mathrm{U}$-category can be directly translated to the "full" category, but it appears to be not the case. Why can we study, for examply, the category of $\mathrm{U}$-small algebraic varieties in algebraic geometry instead of the "full" category of all algebraic varieties? Or the category of compactly generated $\mathrm{U}$-small topological spaces in algebraic topology? Or the category of $\mathrm{U}$-small vector spaces over a field $F$? Also, in this comment to his answer to my aforementioned question HeinrichD said that "You won't be able to do many constructions with this "entire" category. And no, we don't really have to look at them. Also, it is common practice to just write "groups" when one uses U-groups if the context permits this." My question here is why doesn't one need to study the "full" categories of schemes, topological spaces, vector spaces, symmetric spectra (and of any set satisfying a property $\phi(x)$, for that matter) together with a property for what it means to be a morphism between $x$ and $y$ such that $\phi(x)$ and $\phi(y)$, and can get away with studying they $\mathrm{U}$-versions (of $\mathrm{U}$-small sets $x$ satisfying $\phi(x)$ and so on...). For any large category $\mathrm{C}$, does its version "restricted" to (an arbitrary) $\mathrm{U}$ suffice? $\mathrm{P.S.}$ In the linked notes, apparently, Mike Shulman wrote a section concerning those limitations of universe, but his exposition is well over my head and seems to be aimed at logicians or at least logic students, besides, he seems to propose another foundation for large categories, from what I understand, an extension of $\mathrm{ZFC\text{+}U}$.
|
Let me try to answer as a set theorist, rather than as a category
theorist, since I think that your question concerns at bottom a
matter often considered in set theory. Namely, the essence of your question, to my way of thinking,
revolves around the fact that Grothendieck universes (or
Grothendieck-Zermelo universes, as one might call them) need not
all agree with each other or with the background ambient
set-theoretic universe $V$ on mathematical assertions. Something
can be true in one universe and false in another. First of all, let me point out that indeed, this is the case. For
example, if $\kappa$ is the least inaccessible cardinal, then
$V_\kappa$ is the smallest Grothendieck universe, and one of the
statements that is true in $V_\kappa$ and not true in any larger
Grothendieck universe is the assertion, "there are no Grothendieck
universes." This will cause certain kinds of limits to behave
differently in $V_\kappa$ than in larger universes. For example,
Easton-support limits are the same as inverse limits in this
$V_\kappa$, but not in any larger universe, and I believe that one
can translate this to category-theoretic instances which would make
this distinction important. Similarly, if $\kappa_1$ is the next inaccessible cardinal, then
$V_{\kappa_1}$ will think that there is precisely one Grothendieck
universe, but no other Grothendieck universe will think that this
statement is true, and so again the truth differs. In order to resolve this issue, what you really want is not merely
a hierarchy of Grothendieck universes, but an elementary chain of
universes $$V_{\gamma}\prec V_{\delta}\prec\cdots\prec
V_{\lambda}\prec\cdots$$ where $\prec$ refers to the relation of elementary
submodel .
What $V_\gamma\prec V_\delta$ means is that any statement
$\varphi(a)$ expressible in the first-order language of set theory
that is true in $V_\gamma$ will also be true in $V_\delta$.
Ultimately, with an elementary chain, all the universes agree on
the truth of any particular assertion, and furthermore they agree
with the full ambient background universe $V$ on such truths. If one has a proper class $I$ of cardinals $\delta$ that form such
an elementary chain $V_\delta$ for $\delta\in I$, then you can use
these Grothendieck universes and be confident that any statement
true in one of them is true in all of them (or all of them that
contain the objects about which the statement is made, if there are
parameters in your statement). This feature would apply to any
statement made in the language of set theory. (But it would not
apply to statements made in the language with the class $I$, since
of course, the least element of $I$ will again exhibit the
phenomenon as before.) The consistency strength of having such an elementary chain is a
slight upgrade from the Grothendieck universe axiom, but still
strictly below the existence of a Mahlo cardinal, which is still
rather low in the large cardinal hierarchy. Let us formalize it
with a definition. Definition. The elementary-chain universe axiom is the
assertion that there is a proper class $I$ of inaccessible
cardinals that form a chain of elementary substructures, in the sense that the truth of any particular formula $\varphi(a)$ with parameters is the same in all $V_\gamma$ for $\gamma\in I$. It follows by a standard result in model theory that every $V_\gamma$ also agrees with truth in $V$ for these assertions $\varphi(a)$. The axiom is stated as a scheme, with a separate assertion for each $\varphi$. One should think of this axiom as a large cardinal axiom, and we
can place it into the hierarchy of large cardinal strength as
follows. Theorem. The following theories are equiconsistent over ZFC. The elementary-chain universe axiom. The assertion "Ord is Mahlo", which asserts that every definable closed unbounded class of ordinals contains a regular cardinal. Proof. ($1\to 2$) If we have an elementary chain and $C$ is a
definable club in the ordinals, then above the parameters used in
the definition, $C$ will be unbounded in every element of $I$ and
therefore contain all elements of $I$ above those parameters. Thus,
$C$ will contain a regular cardinal. ($2\to 1$) If ZFC + "Ord is Mahlo" is consistent, then consider the
theory $T$ asserting $I$ is an unbounded class of inaccessible
cardinals and $V_\delta\prec V$ for each $\delta\in I$. The
assertion $V_\delta\prec V$ is asserted as a scheme, with a
separate statement asserting agreement between $V_\delta$ and $V$
for each formula $\varphi$ separately. This theory is finitely
consistent, since every formula reflects on a club of ordinals,
which must contain an unbounded proper class of inaccessible
cardinals by the assumption "Ord is Mahlo". Thus, it is consistent.
And so we have a model of statement 1. $\Box$ It follows as an immediate corollary that if $\kappa$ is a Mahlo
cardinal, then there is a class $I\subset\kappa$ making $\langle
V_\kappa,\in,I\rangle$ a model of the elementary chain universe axiom.
Thus, the strength of this axiom is strictly less in consistency
strength than the existence of a Mahlo cardinal. A stronger version of the elementary chain universe axiom would be to assert that there is a unbounded class $I$ of inaccessible cardinals for which $V_\gamma\prec V_\delta$ for any $\gamma<\delta$ in $I$. This can be stated as a single axiom, not a scheme, in Gödel-Bernays set theory GBC. It is very similar to the axiom above, but strictly stronger, since it implies the existence of a truth-predicate for first-order truth. The difference between this axiom and the one above is whether or not you get elementarity $V_\gamma\prec V_\delta$ in nonstandard models of the theory, that is, with respect to nonstandard formulas. Conclusion. If you assume the elementary-chain universe axiom, and prove a set-theoretically expressible statement true in an arbitrary Grothendieck universe, then it will be true in the full ambient set-theoretic universe as well.
|
{
"source": [
"https://mathoverflow.net/questions/279558",
"https://mathoverflow.net",
"https://mathoverflow.net/users/83143/"
]
}
|
279,656 |
Question 1 Is there a winning strategy (algorithm to play infinitely) in Tetris,
or is there a sequence of bricks which is impossible to pack without holes? Consider generalized Tetris in which Young diagrams (for some $n$) are falling down. Question 2 Is there a winning strategy?
If not - consider some probability measure on Young diagrams (e.g. uniform). What will the "losing speed" be? I.e. how fast will the height of uncancelled rows grow for the best possible algorithm? Question 3 Can one relate such Tetris-like questions on Young diagrams to some conceptual/conventional theories where Young diagrams appear - representation theory of the symmetric group or something else?
|
Heidi Burgiel's first paper "How to Lose at Tetris" (which she wrote towards the end of our time in grad school in Seattle) answers Question 1 in the negative. It was published in the Mathematical Gazette volume 81 (1997) 194--200. If you want to see the published version, that volume is still on JSTOR where you can access some number of articles for free, http://www.jstor.org/stable/3619195 . (Current Gazettes are on the Cambridge journals site.) As mentioned in the comments, there's a version at the old Geometry Center site, http://www.geom.uiuc.edu/java/tetris/tetris.ps . A related article "Tetris is Hard, Even to Approximate" by Erik Demaine, Susan Hohenberger, and David Liben-Nowell shows the difficulty of dealing even with a finite sequence known in advance. Although it generalizes the board size, it keeps Tetris pieces. The document http://publications.csail.mit.edu/lcs/pubs/pdf/MIT-LCS-TR-865.pdf is much longer than the published http://erikdemaine.org/papers/Tetris_COCOON2003/paper.pdf .
|
{
"source": [
"https://mathoverflow.net/questions/279656",
"https://mathoverflow.net",
"https://mathoverflow.net/users/10446/"
]
}
|
279,870 |
In the course of discussing another MO question we realized that we did not know the answer to a more basic question, namely: Is it true that for every positive integer $k$ there exists a balanced bipartite graph with exactly $k$ perfect matchings? Equivalently, as stated in the title, is every positive integer the permanent of some 0-1 matrix? The answer is surely yes, but it is not clear to me how to prove it. Entry A089479 of the OEIS reports the number $T(n,k)$ of times the permanent of a real $n\times n$ zero-one matrix takes the value $k$ but does not address the question of whether, for every $k$, there exists $n$ such that $T(n,k)\ne 0$. Assuming the answer is yes, the followup question is, what else can we say about the values of $n$ for which $T(n,k)\ne 0$ (e.g., upper and lower bounds)?
|
The answer to the question is yes. Given $k$, the 0-1 matrix
$$\begin{pmatrix}
1 & 1 & \cdots & 1 & 0 & \cdots & 0\\
0 & 1 & 1 & 0 & \cdots & \cdots & 0\\
0 & 0 & 1 & 1 & 0 & \cdots & 0\\
\vdots & & & \ddots & \ddots & & \vdots\\
0 & 0 & \cdots & & 0 & 1 & 1\\
1 & 0 & \cdots & & \cdots & 0 & 1
\end{pmatrix},$$
where the first row has precisely $k$ entries equal to $1$, evidently has permanent equal to $k$. For $k=1$ the matrix is $(1)$. For $k=2$ the matrix is
$$\begin{pmatrix}1 & 1\\ 1 & 1\end{pmatrix}.$$
For $k=3$ the matrix is
$$\begin{pmatrix}1 & 1 & 1\\ 0 & 1 & 1\\ 1 & 0 & 1\end{pmatrix}.$$
For $k=4$ the matrix is
$$\begin{pmatrix}1 & 1 & 1 & 1\\ 0 & 1 & 1 & 0\\ 0 & 0 & 1 & 1\\ 1 & 0 & 0 & 1\end{pmatrix},$$
which evidently has permanent equal to $4$. Please note also that, for each given $k$ and $1\leq \ell \leq k$, this construction can be tweaked to give an explicit $k\times k$ sized $0$-$1$-matrix having permanent precisely $\ell$, just by making the first row have precisely $\ell$ entries equal to $1$. Please also note that my construction does not have any bearing on the interesting and apparently difficult question which was cited in the present OP: my construction is too wasteful: it utilizes a $k\times k$ matrix, which is far too large when it comes to meeting the demands of the OP in said question.
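As a sanity check, the construction can be verified by brute force for small $k$; a short Python sketch (the helper `construction` below is my own encoding of the matrix pattern described above, and `permanent` is a naive evaluator):

```python
from itertools import permutations

def permanent(M):
    """Brute-force permanent of a square 0-1 matrix."""
    n = len(M)
    total = 0
    for sigma in permutations(range(n)):
        p = 1
        for i in range(n):
            p *= M[i][sigma[i]]
        total += p
    return total

def construction(k, ones=None):
    """The k x k matrix above; its first row has `ones` entries equal to 1 (default: all k)."""
    ones = k if ones is None else ones
    M = [[0] * k for _ in range(k)]
    M[0] = [1] * ones + [0] * (k - ones)
    for i in range(1, k - 1):          # middle rows: two consecutive 1s marching down the diagonal
        M[i][i] = M[i][i + 1] = 1
    M[k - 1][0] = M[k - 1][k - 1] = 1  # last row: 1s in the first and last columns
    return M

for k in range(1, 8):
    assert permanent(construction(k)) == k
    for l in range(1, k + 1):
        assert permanent(construction(k, l)) == l
print("all permanent checks pass for k <= 7")
```

The second assertion also confirms the tweaked version: with $\ell$ ones in the first row the permanent is exactly $\ell$.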
|
{
"source": [
"https://mathoverflow.net/questions/279870",
"https://mathoverflow.net",
"https://mathoverflow.net/users/3106/"
]
}
|
279,914 |
If definitions themselves are informally just maps from words to collections of other words. Then in order for one to define anything, they must inherently already have a notion of a function. I mean of course one could ask what the exact "things" our functions are mapping to and from, and then revert back to set theory or something else to represent those "things" but why not just define everything to be a function so we are mapping functions to functions? (perhaps with some axiom guaranteeing the existence of at least one function which we can use to recursively build up a bunch of other functions allowing us to express most of contemporary mathematics in the same way I've seen this done iteratively with the emptyset in ZFC) I mean how can one read any language, much less a mathematics textbook unless they are capable of mapping words to other words i.e. using functions (albeit even if one doesn't mentally recognize it, their brain is still implementing something akin to a function). Even in formulations of ZFC it seems functions are implicitly being used, for example the Axiom of intersection says given any two sets $A$ and $B$ there exists a set containing all the elements they have in common that one denotes by $A\cap B$. However we essentially have just defined a function $\cap$ of two variables, taking any two sets to another set - their intersection. Similarly in propositional logic when one defines the logical conjunction $\land$ we are again defining a two variable function from truth values to other truth values. Pre-facing a statement about mathematical objects that are being mapped to something with the words "we represent this by" or "we denote it by" merely conceals the fact one has just defined a function by hiding it as a statement in the English language. So re-iterating I don't see how anyone can learn anything without having some mental grasp of what a map/function/arrow etc. is. For example a dictionary can be crudely expressed as a functional relation $f$ where we might have: $$f(\texttt{apple})≡\texttt{a round fruit that grows on trees}$$ So how is it one can accept any mathematical axioms without first taking functions as a primitive, when reading and interpreting words/symbols in of itself requires their usage. If to formally define a function you must use functions then isn't that circular reasoning? How can a person understand a simple map or visual diagram when interpreting objects, color-intensity etc. requires mentally creating a bijective map between the object being labeled and what it represents. I mean the use of any variables for that matter requires creating what amounts to a bijection, at least mentally between the letters/symbols on paper and the objects/ideas they represent. If understanding everything from written language to cave drawings requires some mental notion of a functional relation, then shouldn't it be used as a starting point? Also from an aesthetic point of view, it seems a lot simpler to just accept some mathematical variant of a map/arrow/morphism/function etc. then to define a large number of other objects or "syntactic abbreviations" (I'm not familiar with the proper term but this is what Peter Heinig calls it) which appear to me as objects that hold almost all the same characteristics of functions. Lastly if one gets slightly looser with this idea, couldn't you argue understanding any cause and effect relationship essentially requires some mental abstraction that when modeled symbolically is functional? 
This could be argued as much simpler as then even non-humans would be capable of grasping similar notions, e.g. gorillas capable of rudimentary sign language can understand that by "inputting" a configuration of their hands they can "output" a banana from their owner. In any case though before I get off topic and venture into philosophy, I want to add I'm not too well versed in mathematical logic, so I'm guessing this exact thing has been covered elsewhere. In which case I would appreciate any references.
|
Let me explain one sense in which using functions or sets provides
exactly equivalent foundations of mathematics, in a way that is
connected with some deep ideas in set theory. There is a
translation back and forth between these foundational choices. For example, it is a standard exercise in set theory to consider
how we might construct the set-theoretic universe using
characteristic functions, rather than sets, since as you noted, the
characteristic function seems to provide all the necessary
information about a set. We want to replace every set $A$ with a
function $\chi_A$, which will have value $1$ for the "elements" of
$A$ and value $0$ for objects outside $A$. But notice that we don't really mean the ordinary elements of $A$,
since we want to found the entire universe using only these
functions, and so we should mean the functions that represent those
elements of $A$. So this should be a recursive transfinite
hereditary process. Specifically, we can build a functional version of the
set-theoretic universe as follows, in a way that perhaps fulfills the idea in your comment at the end of the first paragraph in your question. Namely, just as the set-theoretic
universe is built up in a cumulative hiearchy $V_\alpha$ by
iterating the power set, we can undertake a similar construction
for the functional universe. We start with nothing
$V_0^{\{0,1\}}=\emptyset$. If $V_\alpha^{\{0,1\}}$ is defined, then
we define $V^{\{0,1\}}_{\alpha+1}$ as the set of all functions with domain contained in $V_\alpha^{\{0,1\}}$ and range contained in $\{0,1\}$. At limit stages,
we take unions $V_\lambda^{\{0,1\}}=\bigcup_{\alpha<\lambda}
V_\alpha^{\{0,1\}}$. The final universe $V^{\{0,1\}}$ is any
function that arises in this hierarchy. One thing to notice here is that because we allowed value $0$, we
will have the same "set" being represented or named by more than
one function, since $0$-values indicate non-membership. So we can
define by transfinite recursion a value for equality on the names
$[\![f=g]\!]$ in the natural hereditary manner: the value will be
$1$, if the "members" of $f$ and $g$ are also equivalent with value
$1$. And we can define the truth value of the membership relation $[\![f\in g]\!]$, which will be $1$ if $f$ is equivalent to some $f'$ for which $g(f')=1$. Meanwhile, every object $a$ in the original set-theoretic
universe has a canonical name $\check a$, which is the constant $1$
function on the collection $\{\check b\mid b\in a\}$. You can extend the truth value assignments to all assertions in the
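To make the first stages concrete, here is a small computational sketch (my own
illustration, with ad hoc helper names; it plays no role in the argument) that
builds the first levels of $V^{\{0,1\}}$ and checks that several distinct names
can represent the same set:

```python
from itertools import product

def levels_of_names(height):
    """Hereditary {0,1}-valued functions: V^{0,1}_0, ..., V^{0,1}_height, built level by level.

    A name is encoded as a frozenset of (name, value) pairs, i.e. a function
    from finitely many previously built names into {0, 1}.
    """
    level = set()                                      # V^{0,1}_0 is empty
    for _ in range(height):
        prev = list(level)
        new = set()
        for mask in range(2 ** len(prev)):             # choose a domain inside the previous level
            domain = [x for i, x in enumerate(prev) if (mask >> i) & 1]
            for values in product((0, 1), repeat=len(domain)):
                new.add(frozenset(zip(domain, values)))
        level = new
    return level

def members(f):
    """Names that f sends to 1 -- its 'elements'."""
    return [x for x, v in f if v == 1]

def same_set(f, g):
    """Truth value [[f = g]]: the 1-valued members match up in both directions."""
    return (all(any(same_set(x, y) for y in members(g)) for x in members(f)) and
            all(any(same_set(y, x) for x in members(f)) for y in members(g)))

names = levels_of_names(2)
print(len(names))                                      # 3 names at stage two ...
distinct = []
for f in names:
    if not any(same_set(f, g) for g in distinct):
        distinct.append(f)
print(len(distinct))                                   # ... but only 2 sets: the empty set and {emptyset}
```

Running it prints 3 and then 2: at stage two there are three names but only two
sets, matching the remark above that allowing the value $0$ means the same set
is named by more than one function.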
language of set theory $[\![\varphi]\!]$, and the fundamental fact
to prove for this particular way of undertaking the functional universe construction is that a set-theoretic statement is true in the original
set-theoretic universe if and only if the value of the statement on
the corresponding names is $1$.
$$V\models\varphi[a]\quad\iff\quad [\![\varphi(\check a)]\!]=1.$$
The truth values on the right are all about the functional universe
$V^{\{0,1\}}$, and this equivalence expresses the sense in which truth in the
set-theoretic universe is exactly copied over to the functional
universe. Conversely, given any functional universe, one can extract a
corresponding set-theoretic universe. So the two approaches are
essentially equivalent, the set-theoretic universe or the
characteristic-functional universe. A further thing to notice next is that the functional approach to
foundations opens up an intriguing possibility. Namely, although we
had used truth values $\{0,1\}$, which amounts to using classical
logic, suppose that we had used another logic? For example, if we had used functions into
$\newcommand\B{\mathbb{B}}\B$, a complete Boolean algebra, then we
would have constructed the universe $V^{\B}$ of $\B$-valued sets,
which are hereditary functions into $\B$. Set-theorists will recognize this as the core of the forcing
method, for the elements of $V^{\B}$ are precisely the $\B$-names
commonly considered in forcing. The main point is that Forcing is the method of using functional foundations, but
where one uses a complete Boolean algebra $\B$ in place of the classical logic $\{0,1\}$. The amazing thing about forcing is that it turns out that for any
complete Boolean algebra $\B$, the ZFC axioms still come out fully
true, so that $[\![\varphi]\!]=1$ for any axiom $\varphi$ of ZFC.
But meanwhile, other set-theoretic statements such as the continuum
hypothesis or what have you, can get value $0$ or intermediate
values. This is how one establishes independence results in set
theory. I find it remarkable, a profound truth about mathematical
foundations, that the nature of set-theoretic truth, concerning the
continuum hypothesis or other set-theoretic statements, simply
flows out of the choice of which logic of truth values to have when
you are building the set-theoretic universe. Finally, let me mention that one can undertake exactly the same
process using other algebras of truth values, which are not
necessarily Boolean algebras. For example, using a Heyting algebra
gives rise to topos theory. And using paraconsistent logics, such
as the three-element logic, gives rise to paraconsistent set
theory.
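To make the two-valued construction concrete, here is a minimal Python sketch of the hereditary names and of the values $[\![f=g]\!]$ and $[\![f\in g]\!]$, restricted to hereditarily finite names. The encoding of a name as a tuple of (subname, bit) pairs, and all helper names, are my own choices for illustration; they are not part of the construction described above.

```python
# A "name" is a finite function from names to {0,1}, encoded here as a tuple of (subname, bit) pairs.

def eq(f, g):
    # [[ f = g ]]: every 1-member of f is equivalent to some 1-member of g, and conversely.
    return all(any(eq(u, v) for v, b in g if b == 1) for u, a in f if a == 1) and \
           all(any(eq(v, u) for u, a in f if a == 1) for v, b in g if b == 1)

def mem(f, g):
    # [[ f in g ]]: f is equivalent to some name u with g(u) = 1.
    return any(eq(f, u) for u, b in g if b == 1)

def check_name(a):
    # Canonical name of a hereditarily finite set a: the constant-1 function on the names of its members.
    return tuple((check_name(b), 1) for b in a)

zero = frozenset()
one = frozenset({zero})
two = frozenset({zero, one})
# A second name for {0}: it also assigns value 0 to the name of 2, and 0-values indicate non-membership.
other_one = ((check_name(zero), 1), (check_name(two), 0))
print(mem(check_name(zero), check_name(two)), eq(other_one, check_name(one)))  # True True
```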
|
{
"source": [
"https://mathoverflow.net/questions/279914",
"https://mathoverflow.net",
"https://mathoverflow.net/users/38626/"
]
}
|
279,969 |
I am seeking a Certificate of Positivity for the AM-GM inequality in five variables
$$a^5+b^5+c^5+d^5+e^5-5abcde\;\ge 0\qquad\forall\,a,b,c,d,e\ge 0\,.$$ Can one write the LHS as a sum
$\,\sum_i h_i\,s_i\,$ with real polynomials
$\,h_i(a,b,c,d,e)\,$ and $\,s_i(a,b,c,d,e)$, where each $\,h_i\,$ is homogeneous of degree $1$ and positive (for arguments $\ge0\,$), and each $\,s_i\,$ is a square? In the case of $3$ variables the answer would be yes by the common factorisation
$$a^3+b^3+c^3-3abc\;=\;\frac 12(a+b+c)\left[(a-b)^2+(b-c)^2+(c-a)^2\right].$$ This is a Cross-post from math.SE after a decent period of waiting ... Remark: From David's comment to this post the $n=5$ expression does not factor according to Maple, contrary to the preceding $\,n=3\,$ case. Added in edit: I am really delighted by the community's rich spectrum of reactions, such a Math Overflow within the 12 hours after posting! Thanks a lot! In particular I've gotten a more general answer than hoped for, covering the specific issue addressed.
If you'd like to see a specific five-variable certificate as initially sought after, then you may follow the above "Cross-post" link, where a corresponding answer has been added.
|
The following paper: Fujiwara, Kazumasa, and Tohru Ozawa, "Identities for the Difference between the Arithmetic and Geometric Means" (2014), proves the following representation for odd $n$: \begin{equation*}
\frac{1}{n}\sum_i x_i^n - \prod_i x_i = \sum_{i=1}^n x_i\sum_{j \in J(n)} (P_{ij}(x_1,\ldots,x_n))^2,
\end{equation*}
for suitable polynomials $P_{ij}$. For even $n$, an SOS representation is available in Ch. 2 of Hardy, Littlewood, Pólya.
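As a quick sanity check, independent of the cited paper, one can verify symbolically that the $n=3$ certificate quoted in the question is indeed an identity; a short SymPy sketch:

```python
# Symbolic check of the classical n = 3 certificate quoted in the question.
import sympy as sp

a, b, c = sp.symbols('a b c')
lhs = a**3 + b**3 + c**3 - 3*a*b*c
rhs = sp.Rational(1, 2) * (a + b + c) * ((a - b)**2 + (b - c)**2 + (c - a)**2)
print(sp.expand(lhs - rhs) == 0)   # True
```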
|
{
"source": [
"https://mathoverflow.net/questions/279969",
"https://mathoverflow.net",
"https://mathoverflow.net/users/89757/"
]
}
|
280,314 |
Are there simple, undirected graphs $G, H$ that are non-isomorphic, but there exist graph homomorphisms $f_1: G\to H$ and $f_2: H\to G$ which are bijective set-maps $V(G)\rightarrow V(H)$ and $V(H)\rightarrow V(G)$? Notes. By the argument in Tobias Fritz's comment below, $G, H$ have to be infinite. As suggested by a commenter, one should make it unambiguously clear that here, 'simple, undirected graph'='irreflexive symmetric binary relation on a set'.
|
As vertex set, take $V=V'\cup V''$, the disjoint union of two infinite sets. For $G$, take all edges except those joining pairs of vertices from $V''$. For $H$, add one extra edge, between a pair of vertices $u,v\in V''$. Then $G\not\cong H$, since if two vertices of $G$ are adjacent, then at least one of them is adjacent to every vertex, but that is not true for the vertices $u,v$ of $H$. The identity map on $V$ is a bijective homomorphism $G\to H$. There is a bijective homomorphism $H\to G$ given by choosing arbitrary bijections $V'\cup\{u\}\to V'$ and $V''\setminus\{u\}\to V''$.
|
{
"source": [
"https://mathoverflow.net/questions/280314",
"https://mathoverflow.net",
"https://mathoverflow.net/users/8628/"
]
}
|
280,678 |
By not interpreting arithmetic, I mean that it does not interpret enough arithmetic for Gödel's argument (coding the syntax, finding the fixed point) to go through. In other words, are there any other methods to prove that a theory does not have a computable consistent complete extension, or can we prove the converse, that every such theory interprets arithmetic? Relatedly, is there a class of structures whose first-order theory is not computable and in which arithmetic is not interpreted?
|
Any theory that can represent all recursive functions has no consistent decidable extension, however there are such theories that do not interpret even as weak an arithmetic as Robinson’s theory $R$ , see my paper Recursive functions and existentially closed structures ( arXiv:1710.09864 [math.LO], to appear in Journal of Mathematical Logic). You don’t even need to represent all recursive functions. Fix a recursively inseparable pair of r.e. predicates $A,B\subseteq\mathbb N$ . Let $L$ be the language consisting of one unary predicate $P$ , and constants $\overline n$ for every $n\in\mathbb N$ , and let $T$ be the (recursively axiomatizable) theory axiomatized by $P(\overline n)$ for $n\in A$ , and $\neg P(\overline n)$ for $n\in B$ . Then $T$ has no decidable consistent extension, and it does not interpret much of anything (in particular, if $S$ is a theory in a finite language interpretable in $T$ , then $S$ is also interpretable in a finite-language fragment of $T$ , and as such does have a decidable extension).
|
{
"source": [
"https://mathoverflow.net/questions/280678",
"https://mathoverflow.net",
"https://mathoverflow.net/users/18879/"
]
}
|
280,760 |
Recently I have been reading Donaldson's Geometry of four manifolds. It seems to me that the book requires a lot of background. Additionally, the proofs in the book are quite sketchy, without much detail. I had a really hard time digesting the content of the book. Is there another textbook treating the same topics with more detail and background?
|
Please do not ignore the other author, Peter Kronheimer. Based on all of the material I've read, I do not agree with your belief about the book. I think it is more detailed than anything else you will find that covers all of that material. Here are some useful alternatives, though: Take that book and replace the structure group $SU(2)$ by $SO(3)$. This route is taken in Petrie-Randall's "Connections, definite forms, and four-manifolds" . If you care more for easy digestion and less about whether it is sketchy, the closest thing I can think of to Donaldson-Kronheimer's book is Freed-Uhlenbeck's "Instantons and four-manifolds" . For rigorous yet digestible background, I recommend Booss-Bleecker's "Topology and analysis: the Atiyah-Singer index formula and gauge-theoretic physics" . Other relevant material, which has a lot of explanations/comments but is not complete and not always rigorous, includes Gompf-Stipsicz's "4-manifolds and Kirby calculus" and Scorpan's "The wild world of 4-manifolds" .
|
{
"source": [
"https://mathoverflow.net/questions/280760",
"https://mathoverflow.net",
"https://mathoverflow.net/users/110479/"
]
}
|
280,820 |
My first language is not English. How can I improve my mathematical writing? I feel like the only things I can write down are numbers and equations. Are there any good suggestions for improving writing, especially mathematical writing (math-philosophy)?
|
I want to highlight two tools for learning: imitation and practice. Read a lot of mathematics.
You will find that some texts are easier to follow than others.
What makes you like a text?
What texts do you like most?
When you write, try to write as your favorite author would.
If you keep on writing mathematics long enough, you will find your own voice, but imitation is a necessary first step. Write a lot.
You say you can write numbers and equations.
What are they about?
Explain.
It doesn't have to be perfect, but explain it in your own words.
Tell a story about your calculation.
What would you say out loud to explain your work to a fellow student?
Write that down.
Try to make a habit out of making explained calculations that anyone could read. Lastly, if you don't know how to write something, ask for help.
Composing good mathematical prose is not trivial, and learning it is an important part of any degree in mathematics.
You are entitled to help with it, not only with your calculations. If I understand correctly, your problem is in writing relatively simple and short things.
Writing and structuring a thesis, a paper, or other extended piece of work is a story I will exclude here.
|
{
"source": [
"https://mathoverflow.net/questions/280820",
"https://mathoverflow.net",
"https://mathoverflow.net/users/114351/"
]
}
|
281,124 |
Consider a set of fractions $\left\{1, \frac{1}{2}, \frac{1}{3}, \ldots, \frac{1}{n}\right\}$. How many subsets of this set have sum at most 1? I'm interested in the asymptotics of this number. Clearly, any subset of $\left\{\frac{1}{\lceil n/2 \rceil}, \ldots, \frac{1}{n}\right\}$ works, hence the answer is $\Omega(2^{n / 2})$. Can we show $\Omega(2^{\beta n})$ for $\beta > \frac{1}{2}$? Can we determine $\beta$ exactly? From numeric estimates of OEIS sequence $\beta$ seems to at least $0.88$ (link to the sequence and correction of the estimate due to Max Alekseyev). The question arose while I was thinking about upper bounds for this question . Clearly, every divisibility antichain $I$ of $[n]$ must satisfy $\sum_{x \in I} \lfloor\frac{n}{x}\rfloor \leq n$, which is a very similar condition. Post-mortem : while I accepted Lucia's answer (simply because it was the first to contain the correct answer and some reasoning to why it is correct), the whole discussion here is very valuable. Be sure to also check out js21's answer with an approach based on large deviations method, and RaphaelB4's answer for a more off-the-ground explanation of the method. In a comment Jay Pantone shared a link to a paper on series analysis, in particular, the differential approximation method allows to obtain the same answer with high precision and is, without doubt, a great practical tool. Kudos to all of you guys! What a great day to learn.
|
Let $n_0$ be the smallest number such that the sum of the reciprocals of the integers from $n_0+1$ to $n$ is $<2$. It is easy to see that $n_0 \approx n/e^2$, since $\sum_{n/e^2 < j \le n} 1/j \approx \log n - \log (n/e^2)
=2$. Now for any subset $A$ of $\{n_0 +1, \ldots, n\}$ either the sum of the reciprocals of elements in $A$ or the sum of the reciprocals of its complement must be $<1$. Therefore there are at least
$$
\frac 12 2^{n-n_0} \asymp 2^{n(1-1/e^2)}
$$
possible sets. My guess is that this exponent $1-1/e^2$ is correct -- note that $1-1/e^2 = 0.86466\ldots$. Maybe my first guess is not right! Here's an upper bound, which gives an exponent around $0.91\ldots$ (my numerical calculations are pretty rough). For any positive $x$, an upper bound on the quantity we want is
$$
e^x \prod_{j=1}^n (1+e^{-x/j}).
$$
To see this, just expand out the product and terms with sum of reciprocals less than $1$ will contribute at least $1$, and the rest are positive. Now choose $x$ so as to minimize the above (a standard idea, known in analytic number theory as Rankin's trick). Calculus shows that one must choose $x$ so that
$$
1= \sum_{j=1}^{n} \frac 1j \frac{1}{1+e^{x/j}}.
$$
It is natural to guess that $x$ is of the shape $\alpha n$ for a
constant $\alpha$, and then for large $n$ the condition on $\alpha$ becomes
$$
1= \int_0^1 \frac{1}{1+e^{\alpha/y}} \frac{dy}{y} = \int_1^\infty \frac{1}{1+e^{\alpha y}} \frac{dy}{y}.
$$
If I calculated right, this gives $\alpha \approx 0.1273$. For this choice of $\alpha$ (and so $x$), one obtains the bound (approximately)
$$
\exp\Big(n\Big( \alpha + \int_0^1 \log (1+e^{-\alpha/y}) dy\Big)\Big),
$$
which seems to be about
$$
\exp(0.631n) \approx 2^{0.911n}.
$$
(I won't swear to the numerics -- someone should check.) My second guess is that the upper bound is tight (and I think this could be proved with some effort). The idea is to choose $j$ to be in your set with probability $1/(1+\exp(x/j))$ with the same $x$ as in the upper bound. The expected value of $1/j$ with this distribution is $1$, by the choice of $x$. An entropy calculation for this distribution then gives the exponent. (More generally, in all the situations I know, the Rankin upper bound is pretty close to optimal.)
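For anyone who wants to double-check these numerics, here is a short SciPy sketch (variable names are my own) that solves the stationarity condition for $\alpha$ and evaluates the exponent in the upper bound:

```python
# Solve  1 = \int_1^\infty dy / ( y (1 + e^{alpha y}) )  for alpha,
# then evaluate  alpha + \int_0^1 log(1 + e^{-alpha/y}) dy.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def stationarity(alpha):
    # integrand rewritten with e^{-alpha y} to avoid overflow at large y
    f = lambda y: np.exp(-alpha * y) / (y * (1.0 + np.exp(-alpha * y)))
    return quad(f, 1.0, np.inf)[0] - 1.0

alpha = brentq(stationarity, 0.01, 1.0)
exponent = alpha + quad(lambda y: np.log1p(np.exp(-alpha / y)), 0.0, 1.0)[0]
print(alpha, exponent, exponent / np.log(2))  # roughly 0.127, 0.63, 0.91 if the numerics above are right
```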
|
{
"source": [
"https://mathoverflow.net/questions/281124",
"https://mathoverflow.net",
"https://mathoverflow.net/users/106512/"
]
}
|
281,447 |
Background: My daughter is 6 years old now. Once I wanted to think about some math (about some Young diagrams), but she wanted to play with me...
How can both of us get to do what we want? I guess that question comes up for everybody who has children.
Okay, I said to her: let's play a game which I called "Young diagram" for her:
we took a sheet of paper and I tried to explain to her what a Young diagram is, and asked her to draw all the diagrams of some size n=1,2,3,4,5... Question: Do you have some experience/proposals of "games" which you can play with your children,
which on the one hand would be fun for them, on the other would
somehow develop their logical/thinking/mathematical skills,
and would also be of at least some interest to adult mathematicians? Related MO questions: “Mathematics talk” for five year olds: it is quite related to the present question, but slightly different -
it is about a single presentation to children, while the present question
is about your own children, with whom you play every day and whom you can slightly "push",
and so on... How do you approach your child's math education? it is also related, but the present question has a slightly different focus:
games interesting for children and adults. The book by Alexandre Zvonkine, "Math for little ones" (in Russian here), recommended in an answer there - is really
something related to the present question. Which popular games are the most mathematical? is NOT directly related,
but may serve as kind of inspiration for answers... I think Allen Knutson's answer on “Mathematics talk” for five year olds: I've spoken (to 5+ years old) about the "puzzles" that Terry Tao and I
developed for Schubert calculus, like the left two here: can be a nice example of an answer to the present question as well:
on the one hand there is something to explain to the child and some colorful pictures,
and on the other hand that is about research level math ...
|
The game " Set " seems to fit the bill. It's a card game where there are cards that show images which have four different features, each of which comes in three possibilities: number (1, 2, or 3 objects) color (green, blue, pink) shape (diamonds, rounded rectangles, "tildes") filling (empty, filled, half-filled) so there are $3^4 = 81$ cards. You lay a certain number of cards open on the table and the players need to find "sets" of cards, where a set is three cards such that on these three cards each feature is either the same or appears in all three versions. So, this picture shows a set: In mathematical terms, you are looking for lines in four-dimensional space over the field with three elements. Granted, it's not easy for 5-year-olds, but I've met some kids at that age who could play it and had fun. One successful way to play it with kids even as young as 4 is to first find the set yourself, and then hand two of the cards of it to the kid. And let them find the third card. You coach them along: "What color is this? What color is this (second card)? So, what color will the third card have to be?" If it's too hard for them, let them play with a reduced deck for a while: use all the solid cards only, to make a deck of 27 cards, and play with that. Then all the single-shape cards (again 27), so they get used to spotting differences of shading. If you're going to be playing with younger children for a while, you could consider getting Set Junior . It only includes the solid cards, and the cards are thicker cardboard tiles. It also includes an easier variation, where one is just trying to match the cards in one's hand to existing Sets on a game board.
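For the mathematically inclined parent, the "lines over the field with three elements" description is easy to check by computer; a small sketch of my own (the encoding of the four features as coordinates mod 3 is an arbitrary choice):

```python
# Each card is a point of F_3^4: (number, color, shape, filling), each entry in {0, 1, 2}.
# Three cards form a Set exactly when, in every coordinate, the values are all equal
# or all distinct -- equivalently, the three points sum to 0 mod 3 in each coordinate,
# i.e. they form an affine line in F_3^4.
from itertools import product, combinations

def is_set(a, b, c):
    return all((x + y + z) % 3 == 0 for x, y, z in zip(a, b, c))

deck = list(product(range(3), repeat=4))           # all 81 cards
n_sets = sum(is_set(*t) for t in combinations(deck, 3))
print(n_sets)                                      # 1080 = number of lines in F_3^4
```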
|
{
"source": [
"https://mathoverflow.net/questions/281447",
"https://mathoverflow.net",
"https://mathoverflow.net/users/10446/"
]
}
|
282,259 |
Problem. Is the series $$\sum_{n=1}^\infty\frac{|\sin(n)|^n}n$$ convergent? (The problem was posed on 22.06.2017 by PhD students of the H. Steinhaus Center of Wroclaw Polytechnica. The promised prize for a solution is "butelka miodu pitnego", see page 37 of Volume 1 of the Lviv Scottish Book .
To get the prize, write to the e-mail: [email protected]).
|
Note that if $\pi$ were rational (with even numerator), then $\sin(n)$ would equal $1$ periodically, so the series would diverge. Similarly if $\pi$ were a sufficiently strong Liouville number . Thus, to establish convergence, one must use some quantitative measure of the irrationality of $\pi$. It is known that the irrationality measure $\mu$ of $\pi$ is finite (indeed, the current best bound is $\mu \leq 7.60630853$). Thus, one has a lower bound
$$ | \pi - \frac{p}{q} | \gg \frac{1}{q^{\mu+\varepsilon}}$$
for all $p,q$ and any fixed $\varepsilon>0$. This implies that
$$ \mathrm{dist}( p/\pi, \mathbf{Z}) \gg \frac{1}{p^{\mu-1+\varepsilon}},$$
for all large $p$ (apply the previous bound with $q$ the nearest integer to $p/\pi$, multiply by $q/\pi$, and note that $q$ is comparable to $p$). In particular, if $I \subset {\bf R}/{\bf Z}$ is an arc of length $0 < \delta < 1$, the set of $n$ for which $n/\pi \hbox{ mod } 1 \in I$ is $\gg \delta^{-1/(\mu-1+\varepsilon)}$-separated. This implies, for any natural number $k$, that the number of $n$ in $[2^k,2^{k+1}]$ such that $|\sin(n)|$ lies in any given interval $J$ of length $2^{-k}$ (which forces $n/\pi \hbox{ mod } 1$ to lie in the union of at most two intervals of length at most $O(2^{-k/2})$) is at most $\ll 2^{k(1 - \frac{1}{2(\mu-1+\varepsilon)})}$, the key point being that this is a "power saving" over the trivial bound of $2^k$. Noting (from Taylor expansion) that $|\sin(n)|^n \ll \exp( - j)$ if $n \in [2^k,2^{k+1}]$ and $|\sin(n)| \in [1 - \frac{j+1}{2^k}, 1-\frac{j}{2^k}]$, we conclude on summing in $j$ that
$$ \sum_{2^k \leq n < 2^{k+1}} |\sin(n)|^n \ll 2^{k(1 - \frac{1}{2(\mu-1+\varepsilon)})}$$
and hence
$$ \sum_{2^k \leq n < 2^{k+1}} \frac{|\sin(n)|^n}{n} \ll 2^{- k\frac{1}{2(\mu-1+\varepsilon)}}.$$
The geometric series on the RHS is summable in $k$, so the series $\sum_{n=1}^\infty \frac{|\sin(n)|^n}{n}$ is convergent. (In fact the argument also shows the stronger claim that $\sum_{n=1}^\infty \frac{|\sin(n)|^n}{n^{1-\frac{1}{2(\mu-1+\varepsilon)}}}$ is convergent for any $\varepsilon>0$.) EDIT: the apparent numerical divergence of the series may possibly be due to the reasonably good rational approximation $\pi \approx 22/7$, which is causing $|\sin(n)|$ to be close to $1$ for $n$ that are reasonably small odd multiples of $11$. UPDATE: I now agree with Will that it is the growth of $-2^{3/2}/\pi^{1/2} n^{1/2}$, rather than any rational approximant to $1/\pi$, which was responsible for the apparent numerical divergence at medium values of $n$, as is made clear by the updated numerics on another answer to this question.
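For readers who want to experiment with the numerics mentioned in the update, here is a quick script of my own (not taken from any of the answers) that prints a few partial sums and illustrates how slowly they grow:

```python
# Partial sums S_N = sum_{n <= N} |sin n|^n / n.
import numpy as np

N = 10**6
n = np.arange(1, N + 1, dtype=np.float64)
partial = np.cumsum(np.abs(np.sin(n)) ** n / n)
for k in (10**2, 10**3, 10**4, 10**5, 10**6):
    print(k, partial[k - 1])
```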
|
{
"source": [
"https://mathoverflow.net/questions/282259",
"https://mathoverflow.net",
"https://mathoverflow.net/users/105651/"
]
}
|
282,526 |
Let $\lambda$ denote the Lebesgue-measure on $\mathbb{R}^n$, and let $C\subset\mathbb{R}^n$ be a convex region. My question is about
$$f(C):=\int_{C} \lambda(C \cap (x + C) ) \mathrm{d} x.$$
How large can $f(C)$ be? Of course, there is the trivial bound $f(C)\leq \lambda(C)^2$, but I would rather expect something like $f(C) = O( \lambda(C) )$ at least. This question has probably been answered somewhere, and I suspect that the following might be very well known: for regions $C$ of a given measure, the value of $f(C)$ is maximal if $C$ is an $n$-ball (or something similar). I couldn't find any useful references regarding this question. If someone could help me out with some reference, or even a simple solution, I would be very grateful.
|
The following proposition answers OP's question regarding the upper bound of $$\tau(C) \Doteq f(C)/\lambda^2(C).$$ Let $B_n$ be the closed Euclidean unit ball of $\mathbb{R^n}$ centred at $0$ , that is $$B_n = \{ (x_1,\dots, x_n) \in \mathbb{R^n} \, \vert \,\,
x_1^2 + \cdots + x_n^2 \le 1\},$$ and let $\tau_n = \tau(B_n)$ . Proposition. The Euclidean unit ball $B_n \subset \mathbb{R}^n$ maximizes $\tau(C)$ among all convex subsets of finite and positive Lebesgue measure, i.e., $\tau(C) \le \tau_n$ holds for any such $C$ . Claim 3 below gives a way to compute efficiently $\tau_n$ for all $n > 0$ . The proof of the proposition follows Fedja's idea. The first part consists in proving that $\tau$ doesn't decrease after [Steiner symmetrization][2]. This is Claim 4 below. The second part uses [1, Theorem 7.1.10] showing that some Euclidean ball centred at $0$ can be obtained as the limit of Steiner symmetrizations of $C$ for any compact convex $C$ of positive Lebesgue measure (equivalently, with non-empty interior). When $n = 1$ , the result is slightly stronger. Claim 1. Let $C \subset \mathbb{R}$ be a convex subset of finite Lebesgue measure $\lambda(C) > 0$ . Then $\tau(C) \le \frac{3}{4}$ and equality holds if and only if $C$ is centred at $0$ . So, the above claim settles OP's question when $n = 1$ . Unsurprisingly, the result generalizes to $\mathbb{R}^n$ in the following way. Claim 2. Let $C \subset \mathbb{R}^n$ be a parallelotope of positive Lebesgue measure. Then $\tau(C) \le (\frac{3}{4})^n$ and equality holds if and only if $C$ is centred at $0$ . Computing the maximum $\tau_n$ of $\tau$ can be achieved via the formula Claim 3. We have $$ \tau_n = \frac{3}{2} \frac{\int^{\pi/3}_0 \sin^n(\theta) d\theta}{\int^{\pi/2}_0 \sin^n(\theta) d\theta}.$$ In particular, $\tau_1 = \frac{3}{4}, \, \tau_2 = 1 - \frac{3 \sqrt{3}}{4 \pi}, \tau_3 = \frac{15}{32}, \, \tau_4 = 1 - \frac{9 \sqrt{3}}{8 \pi},
\tau_5 = \frac{159}{512}, \, \tau_6 = 1 - \frac{27 \sqrt{3}}{20 \pi}
$ . The above formula can be rephrased as $\tau_n = \frac{3}{2}\frac{W_n'}{W_n}$ where $W_n$ is the $n$ -th [Wallis' integral][3] and $W_n'$ is recursively defined by $W'_0 = \frac{\pi}{3}, \,W_1' = \frac{1}{2}$ and $W_n' = - \frac{1}{2n}(\frac{\sqrt{3}}{2})^{n - 1} + \frac{n - 1}{n}W'_{n - 2}$ . The latter formula allows an efficient computation of $\tau_n$ for any $n > 0$ . As $W_n' \le \frac{\pi}{3}(\frac{\sqrt{3}}{2})^n$ and $W_n \sim \sqrt{\frac{\pi}{2n}}$ , the sequence $(\tau_n)_n$ converges to $0$ exponentially fast (we have for instance $10^{-7} < \tau_{91} < 10^{-6}$ ). Before proving the claims, let us state some obvious, but important, remarks. Remark 1. Let $A \in GL_n(\mathbb{R})$ and $d \in \mathbb{R}^n$ . Let $C$ be any subset of $\mathbb{R}^n$ . Then we have $$f(AC) = \vert \det(A) \vert^2f(C), \quad f(C + d) = \int_C \lambda\left(C \cap (C + d + x) \right)d\lambda(x).$$ In particular the ratio $\tau(C)$ is $GL_n(\mathbb{R})$ -invariant but it is not invariant under translation . Indeed, if $C$ is a convex subset of finite positive Lebesgue measure whose interior contains $0$ , then the function $f_C(d): d \mapsto f(C + d)$ has a non-empty and bounded support. The following tightly relates to OP's question. Question. Let $C \subset \mathbb{R}^n $ be a convex subset of finite positive Lebesgue measure. For which $d$ is $f_C(d)$ maximal? The following relates $\tau$ to Steiner symmetrization. Claim 4. Let $C \subset \mathbb{R}^n $ be a convex subset of finite positive Lebesgue measure. Let $H =\{0\} \times \mathbb{R}^{n -1}$ and let $C' \Doteq \text{St}_H(C)$ be the [Steiner symmetrization][2] of $C$ with respect to $H$ . Then we have $\tau(C) \le \tau(C')$ . We now turn to the proofs of the above claims. Proof of Claim 1. Because of Remark 1, we can restrict, without loss of generality, to intervals of the form $[-1, 1] + d$ with $d \ge 0$ .
A direct computation yields $$
f([-1, 1] + d) = \int_{-1}^1 (2 - \vert d + x \vert)^{+} dx =
\left\{\begin{array}{cc}
3 -d^2 & \text{ if } 0 \le d \le 1, \\
\frac{1}{2}(3 - d)^2 & \text{ if } 1 \le d \le 3, \\
0 & \text{ if } d \ge 3.
\end{array}\right.
$$ where $g^{+} \Doteq \max(g ,0)$ for $g$ a real-valued function. The result follows. Proof of Claim 2. By Remark 1, we can restrict to the convex sets of the form $C = [-1, 1]^n + d$ with $d = (d_1, \dots, d_n) \in \mathbb{R}^n$ .
We have $f(C) = \int_{[-1, 1]^n} (2 - \vert x_1 + d_1 \vert)^{+} \cdots (2 - \vert x_n + d_n\vert)^{+}dx_1 \cdots dx_n$ . It follows from Claim 1, that $f(C) \le \int_{[-1, 1]^n} (2 - \vert x_1 \vert)^{+} \cdots (2 - \vert x_n \vert)^{+}dx_1 \cdots dx_n = 3^n$ and the inequality is strict whenever one of the $d_i$ 's is not zero. The result follows. We denote by $V_n(r)$ the Lebesgue measure of the Euclidean ball of radius $r \ge 0$ , that is, $V_n(r) = r^n\frac{\pi^{\frac{n}{2}}}{\Gamma(\frac{n}{2} + 1)}$ , and let $\nu_n = V_n(1) = \lambda(B_n)$ .
Given a real number $h$ such that $-1 \le h \le 1$ , we denote by $B_n^{\text{cap}}(h)$ the hyperspherical cap of height $h$ along $x_n$ , that is the set of points $x = (x_1, \dots, x_n) \in B_n$ such that $x_n \ge h$ . We also set $V_n^{\text{cap}}(h) = \lambda(B_n^{\text{cap}}(h))$ . Proof of Claim 3. By definition, we have $f(B_n) = \int_{B_n} \lambda(B_n \cap (B_n + x)) d\lambda(x) = 2 \int_{B_n} V_n^{\text{cap}}(\frac{\vert x \vert}{2}) d\lambda(x)$ where $\vert \cdot \vert$ stands for the Euclidean norm of $\mathbb{R}^n$ . Changing to [ $n$ -dimensional spherical coordinates][4], we get $$f(B_n) = 2 \int_{D_n} V_n^{\text{cap}}(\frac{r}{2})r^{n - 1} \sin^{n - 2}(\phi_1)\sin^{n - 3}(\phi_2) \cdots \sin(\phi_{n - 1}) dr d\phi_1 d\phi_2 \cdots d \phi_{n - 1}$$ where $D_n = [0,2\pi] \times [0, \pi] \times \cdots \times [0, \pi] \times [0, 1]$ .
Using the classical computation of $\nu_n = \lambda(B_n)$ , we deduce that $$f(B_n) = 2n\nu_n \int_{[0, 1]} V_n^{\text{cap}}(\frac{r}{2})r^{n - 1}dr
= 2^{n + 1}n\nu_n \int_{[0, \frac{1}{2}]} V_n^{\text{cap}}(s)s^{n - 1}ds
.$$ Since $V_n^{\text{cap}}(s) = \int_s^1 V_{n - 1}(\sqrt{1 - h^2})dh =
\nu_{n - 1} \int_0^{\text{arcos}(s)} \sin^n(\theta)d \theta$ , we obtain $$f(B_n) = 2^{n + 1}n\nu_n \nu_{n - 1} \int_{[0, \frac{1}{2}]} J_n(\text{arcos}(s))s^{n - 1}ds$$ where $J_n(\phi) = \int_0^{\phi}\sin^n(\theta)d\theta$ . Using the substitution $\theta = \text{arcos}(s)$ , we get $$
\begin{eqnarray}
\frac{f(B_n)}{2^{n + 1}n\nu_n \nu_{n - 1}} &=& \int_{[\frac{\pi}{3}, \frac{\pi}{2}]} J_n(\theta)\cos(\theta)^{n - 1} \sin(\theta)d\theta \\
&=& \frac{[J_n(\theta) \cos^n(\theta)]^{\frac{\pi}{3}}_{\frac{\pi}{2}} + \int^{\frac{\pi}{2}}_{\frac{\pi}{3}} (\cos(\theta)\sin(\theta))^n d\theta}{n} \\
&=& \frac{2J_n(\frac{\pi}{3}) + \int^{\pi}_{\frac{2\pi}{3}} \sin(\theta)^n d\theta}{2^{n + 1} n} \\
&=& \frac{3J_n(\frac{\pi}{3})}{2^{n + 1} n}.
\end{eqnarray}
$$ Now the result follows from the identity $\frac{\nu_{n - 1}}{\nu_n} = \frac{1}{2 W_n}$ . Claim 4 is a specialization of Fedja's Lemma. Let $$\mathcal{I}(C, D, E) \Doteq \int_C \lambda(D \cap (E + x)) d\lambda(x)$$ where $C$ , $D$ and $E$ are convex subsets of $\mathbb{R}^n$ of positive finite Lebesgue measure. Then $\mathcal{I}(C, D, E) \le \mathcal{I}(C', D', E')$ , where $X'$ denotes the Steiner symmetrization of $X$ with respect to $H = \{0\} \times \mathbb{R}^{n - 1}$ . The proof of Fedja's Lemma can be reduced to the case $n = 1$ . The proof of the case $n = 1$ is eased off by Lemma 1. Let $g: \mathbb{R} \rightarrow \mathbb{R}^{+} \cup \{0\}$ be an even non-negative function which is concave on its support and non-decreasing on the negative real numbers. Then we have $$\int_I g d\lambda \le \int_{I'} g d\lambda $$ for every convex subset $I$ of $\mathbb{R}$ where $I'$ is the closed interval of measure $\lambda(I)$ centred at $0$ . Proof of Lemma 1. The non-negative number $\int_I g d\lambda$ is the area of convex subset $C(I) \subset \mathbb{R}^2$ located between the $x$ -axis and the graph of $g$ . By hypothesis, we have $C(I)' \subset C(I') = \int_{I'} g d\lambda$ , where $C(I)'$ is Steiner symmetrization of $C(I)$ with respect to $H = \{0\} \times \mathbb{R}$ . As $C(I)$ and $C(I)'$ have the same area, the result follows. We are now in position to prove Fedja's Lemma. Proof of Fedja's Lemma. Let us assume first that $n = 1$ .
As $\mathcal{I}(C + x, D, E) = \mathcal{I}(C, D, E + x)$ and $\mathcal{I}(C, D + x, E + x) = \mathcal{I}(C, D, E)$ , we can assume, without loss of generality, that $C$ and $D$ are centred at zero.
Let us write $E = E’ + \eta$ and let $g(x) \Doteq \lambda(D \cap (E' + x))$ .
Then we have $$
g(x) = \left\lbrace \begin{array}{ccc}
\min(d, e) & \text{ if } \vert x \vert \le \frac{ \vert d - e \vert }{2}, \\
\frac{d + e}{2} - \vert x \vert & \text{ if } \frac{ \vert d - e \vert }{2} \le \vert x \vert \le \frac{d + e}{2}, \\
0 & \text{ if } \vert x \vert \ge \frac{d + e}{2},
\end{array}\right.
$$ where $d = \lambda(D) > 0$ and $e = \lambda(E) > 0$ .
The function $g$ satisfies the hypothesis of Lemma $1$ , thus $$
\mathcal{I}(C, D, E) = \mathcal{I}(C + \eta, D, E’) = \int_{C + \eta} g(x) d\lambda(x) \le \int_{C} g(x) d\lambda(x) = \mathcal{I}(C, D, E’)$$ which proves the result for $n = 1$ .
Assume now that $n > 1$ . We denote by $\lambda_{n - 1}$ the Lebesgue measure on $\mathbb{R}^{n - 1}$ and we identify $H$ with $\mathbb{R}^{n - 1}$ .
For $\xi \in \mathbb{R^n}$ , we write $\xi = (\xi_1, \zeta)$ , with $\xi_1 \in \mathbb{R}$ and $\zeta \in H$ .
Given $A \subset \mathbb{R}^n$ and $\zeta \in H$ , we set $A^{\zeta} \Doteq \{ \xi_1 \, \vert \, (\xi_1, \zeta) \in A\} \subset \mathbb{R}$ . Let $x = (x_1, z) \in \mathbb{R} \times H$ .
Observe that $(A^{\zeta})’ = (A’)^{\zeta}$ and $(A + x)^{\zeta} = A^{\zeta - z} + x_1$ .
By the Fubini-Tonelli Theorem, we have $\lambda_n(D \cap (E + x)) = \int_H \lambda_1(D^{\zeta} \cap (E^{\zeta - z} + x_1)) d \lambda_{n - 1}(\zeta)$ .
We shall use the abbreviations $dz = d \lambda_{n - 1}(z)$ and $d\zeta = d \lambda_{n - 1}(\zeta)$ .
Using Fubini-Tonelli’s Theorem and the case $n = 1$ , we obtain $$
\begin{eqnarray}
\mathcal{I}(C, D, E) & = & \int_{H^2}
\mathcal{I}(C^{z}, D^{\zeta}, E^{\zeta - z})d\zeta dz \\
& \le & \int_{H^2} \mathcal{I}((C’)^{z}, (D’)^{\zeta}, (E’)^{\zeta - z}) d\zeta dz \\
& = & \mathcal{I}(C’, D’, E’).
\end{eqnarray}
$$ Eventually, we can prove the proposition. Proposition's proof . By [1, Theorem 7.1.10], there is a Euclidean ball $B$ centred at zero which is the limit in the Hausdorff distance topology of Steiner symmetrizations of $C$ with respect to linear hyperplanes. As $\lambda$ , $f$ and hence $\tau$ are continuous with respect to the Hausdorff distance topology (we use that $C$ has non-empty interior), we have $\lambda(B) = \lambda(C) > 0$ and we infer from Claim 4 that $\tau(C) \le \tau(B)$ . By Remark 1, we have $\tau(B) = \tau_n$ , which completes the proof. [1] S. G. Krantz, H. R. Parks, "The Geometry of Domains in Space", 1999. [2]: https://en.wikipedia.org/wiki/Symmetrization_methods [3]: https://en.wikipedia.org/wiki/Wallis%27_integrals [4]: https://en.wikipedia.org/wiki/N-sphere
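As an independent numerical sanity check of the closed forms listed in Claim 3, one can evaluate the integral formula directly; a short script of my own:

```python
# Check Claim 3:  tau_n = (3/2) * \int_0^{pi/3} sin^n / \int_0^{pi/2} sin^n.
import numpy as np
from scipy.integrate import quad

def tau(n):
    top = quad(lambda t: np.sin(t) ** n, 0.0, np.pi / 3)[0]
    bot = quad(lambda t: np.sin(t) ** n, 0.0, np.pi / 2)[0]
    return 1.5 * top / bot

closed_forms = {
    1: 3 / 4,
    2: 1 - 3 * np.sqrt(3) / (4 * np.pi),
    3: 15 / 32,
    4: 1 - 9 * np.sqrt(3) / (8 * np.pi),
    5: 159 / 512,
    6: 1 - 27 * np.sqrt(3) / (20 * np.pi),
}
for n, value in closed_forms.items():
    print(n, tau(n), value)   # the two columns agree to quadrature accuracy
```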
|
{
"source": [
"https://mathoverflow.net/questions/282526",
"https://mathoverflow.net",
"https://mathoverflow.net/users/74957/"
]
}
|
282,742 |
In principle, a mathematical paper should be complete and correct. New statements should be supported by appropriate proofs. But this is only theory. Because we often cannot enter into the smallest details, we "prove" wrong statements now and then. I plead guilty, having myself published one or two false (fortunately minor) papers. So far, this is not harmful. The research community is able to point out incorrect statements, at least among those which have some importance in the development of mathematics. In time, the errors are fixed; it is the role of monographs to present a universally accepted state of the art of a topic. But sometimes, hopefully rarely, the technicalities are such that a consensus does not emerge and a controversy arises between the author and their critics. I have an example in the realm of wave stability in PDE models for fluid dynamics. The controversy has lasted for a decade or two and I don't see how it can ever be resolved; it could just kill the topic. Are there famous endless controversies about the correctness of a significant paper? Are there significant mathematical questions that remain unsettled because people disagree on the status of released proofs? What should we do in order to salvage mathematical topics that suffer from such tensions? In this question, I am not concerned with other kinds of controversy, about priority or citations.
|
Edit: a (methodo-)logical proposal to make this thread more transparent It can be argued that, broadly, there are three quite distinct 'types' of such controversies (and I propose that each answer in here gets tagged, by the respective contributor, if so inclined, by one of the following three tags): (non-sequitur) This is the nightmare of anyone who has had to referee a long submitted paper and felt the responsibility to make a judgemental statement about the 'Is it true?'-part of a referee's three Littlewoodian responsitbilities: the proof contains many true things, but the goal seems not to be reached, but it is so difficult to justify why one is not convinced , all one can say is 'I am not convinced'. (propositional-contradiction) By this I mean that the result contradicts, on the coarse boolean level of propositional logic , another published result, and both proofs are long , so ferreting out the error is literally a dilemma , a διλήμματος, with two horns (which most of the time, sadly, are not so easy as to be formalizable in Horn logic ). This the dream of anyone who has to refereee a long paper , since then there is an undisputable and documentable reason why one cannot give the go-ahead, if the traditional standards of truth are to be upheld at all (which they should), namely that propositional logic is a conditio-sine-qua-non, something like 'checking an arithmetical calculation modulo two'. (many-small-gaps) By this I mean that neither (non-sequitur) nor (propositional-contradition) are applicable; the overall line of argumentation is convincing, and, by itself, the claimed conclusion seems credible, too, especially as there is no other proposition proved elsewhere which would propositionally contradict it, but there are lots of small mistakes . This is something between the dream and the nightmare: one can then with good conscious recommend publication, or at least, a second round, but the task of patching up all the small errors still is nightmarishly work-intensive. None of the above three seems to imply any of the others. On a rough intuitive level, these seem mutually distinct 'types' of controversies around a manuscript (in my experience). I'll 'tag' my proposed contribution to this thread with the second-named 'type'. A proposed contribution to this thread. (propositional-contradiction) With trepidation (since I am only beginning to understand what the real issues are), and due respect, let me mention one of the most famous examples these days. To repeat myself: I know that there are many many others round here whom it would behoove more to mention this. Endlessly fascinatingly- and fertilely-controversial is: M. M. Kapranov, V. A. Voevodsky: ∞-groupoids and homotopy types .
Cahiers de Topologie et Géométrie Différentielle Catégoriques (1991)
Volume: 32, Issue: 1, page 29-46 Now the question is of course whether this qualifies as 'endless controversy' since even one of the authors readily acknowledged that there was an error, but a fruitful error , indicative of the traditional methods (both the formal-methods and the social-methods) being inadequate to give 'durable wings' with which to do the 'flights of fancy' (in a positive sense) of higher category theory. But, while still learning some of the relevant subject matter (and, myself, being mostly working to understand the comparably humble example of the unambiguous interpretability of pasting schemes in good-old-bicategories ), I think I can recognize that the above example satisfies each of the requirements famous (why? look around...) endless (why? since this dedicated MO thread seems so unconclusive (to me); after as yet 2624 views on a professional focused site, said thread contains only a "guess" and detailed confirmation *that there is an incorrectness in the sense of propositional logic but it still seems not clear (to me) how to pin down the reason for why the authors 'went wrong' . controversial (why? since one of the authors himself in public lectures said that at first he did not take Simpson's statement that something was wrong serious, rather thought that it was wrong to state that something was wrong; what is endlessly fascinating about this example is the expressiveness of the mathematics which gave rise to this 'controversy' ) significant (why? because, similar to e.g. Poincaré fertile errors in 'Analysis Situs' and the 5 subsequent 'patches', Kapranov-Voevodsky's error turned out to be a fertile error , for example by motivating one of the authors to find an alternative formal system for mathematics) A micro-summary is given on a page hosted by the Institute of Advanced Study in Princeton : During these lectures, Voevodsky identified a mistake in the proof of a key lemma in his paper. Around the same time, another mathematician claimed that the main result of Kapranov and Voevodsky’s “∞-groupoids” paper could not be true, a flaw that Voevodsky confirmed fifteen years later. Examples of mathematical errors in his work and the work of other mathematicians became a growing concern for Voevodsky, especially as he began working in a new area of research that he called 2-theories, which involved discovering new higher-dimensional structures that were not direct extensions of those in lower dimensions. “Who would ensure that I did not forget something and did not make a mistake, if even the mistakes in much more simple arguments take years to uncover?” asked Voevodsky in a public lecture he gave at the Institute on the origins and motivations of his work on univalent foundations. Voevodsky determined that he needed to use computers to verify his abstract, logical, and mathematical constructions. The primary challenge, according to Voevodsky, was that the received foundations of mathematics (based on set theory) were far removed from the actual practice of mathematicians, so that proof verifications based on them would be useless. The fifteen years later seems to approximate "endless" rather closely. Again, my apologies if this is off-topic for some reason that I do not see, and I know it is debatable whether this counts as endless controversy , maybe indefinite fertility would be a more fitting heading for this example.
|
{
"source": [
"https://mathoverflow.net/questions/282742",
"https://mathoverflow.net",
"https://mathoverflow.net/users/8799/"
]
}
|
282,780 |
Let $R = k[x_1, \ldots, x_n]$ for $k$ a field of characteristic zero and let $S \subset R$ be a graded sub-$k$-algebra (for the standard grading: $\deg x_i = 1$) such that $R$ is a free $S$-module of finite rank. Does this imply $S \cong k[y_1,\ldots,y_n]$?
|
Note that $R$ and $S$ each have only one graded maximal ideal, so they are local in the graded sense, so most of the standard results for ungraded local rings are applicable. There is an obvious finite resolution of $k$ by modules that are finitely generated and free over $R$ and thus also over $S$. This implies that $S$ has finite global dimension, and so is a regular local ring by a theorem of Serre (Theorem 19.2 in Matsumura). Together with the grading assumptions this forces $S$ to be polynomial.
|
{
"source": [
"https://mathoverflow.net/questions/282780",
"https://mathoverflow.net",
"https://mathoverflow.net/users/297/"
]
}
|
282,854 |
I seem to remember reading once a story that some mathematician had written to justify the use of categories, or isomorphisms or equivalences, or something like that. The story goes something like this: Once upon a time, people did not know what equality was. Instead, they only thought about things up to isomorphism. For example, they did not say that two sets had the same number of elements, but that they were in bijection. Today, with category theory, we go back to these roots. Does anyone have an idea of who told this story and what the full story is?
|
This sounds an awful lot like TWF week 121 : To understand this, the following parable may be useful. Long ago, when shepherds wanted to see if two herds of sheep were isomorphic, they would look for an explicit isomorphism. In other words, they would line up both herds and try to match each sheep in one herd with a sheep in the other. But one day, along came a shepherd who invented decategorification. She realized one could take each herd and "count" it, setting up an isomorphism between it and some set of "numbers", which were nonsense words like "one, two, three,..." specially designed for this purpose. By comparing the resulting numbers, she could show that two herds were isomorphic without explicitly establishing an isomorphism! In short, by decategorifying the category of finite sets, the set of natural numbers was invented.
|
{
"source": [
"https://mathoverflow.net/questions/282854",
"https://mathoverflow.net",
"https://mathoverflow.net/users/68468/"
]
}
|
282,869 |
One of the most salient aspects of the discipline of number theory is that from a very small number of definitions, entities and axioms one is led to an extraordinary wealth and diversity of theorems, relations and problems--some of which can be easily stated yet are so complex that they take centuries of concerted efforts by the best mathematicians to find a proof (Fermat's Last Theorem, ...), or have resisted such efforts (Goldbach's Conjecture, distribution of primes, ...), or lead to mathematical entities of astounding complexity, or required extraordinary collective effort, or have been characterized by Paul Erdös as "Mathematics is not ready for such problems" (Collatz Conjecture). (Of course all branches of mathematics have this property to some extent, but number theory seems the prototypical case because it has attracted some of the most thorough efforts at axiomization (Russell and Whitehead), that Gödel's work is most relevant to the foundations of number theory (more than, say, topology), and that there has been a great deal of work on quantifying complexity for number theory--more so than differential equations, say.) Related questions, such as one exploring the relation between Gödel's Theorem and the complexity of mathematics , have been highly reviewed and somehow avoided any efforts to close. This current problem seems even more focused on specific references, theorems, and such. How do professional mathematicians best understand the foundational source of the complexity in number theory? I don't think answers such as "once relations are non-linear things get complicated" or its ilk are intellectually satisfying. One can refer to Gödel's Theorem to "explain" that number theory is so complicated that no finite axiomitization will capture all its theorems, but should we consider this theorem as in some sense the source of such complexity? This is not an "opinion-based" question (though admittedly it may be more appropriate for metamathematics or philosophy of mathematics): I'm seeking theorems and concepts that professional mathematicians (particularly number theorists) recognize as being central to our understanding the source of the breadth and complexity of number theory. Why isn't number theory trivial? What references, especially books, have been devoted to specifically addressing the source of the deep roots of the diversity and complexity of number theory? By contrast, I think physicists can point to a number of sources of the extraordinary wealth and variety of physical phenomena: It is because certain forces (somehow) act on fundamentally different length and time scales. The rules of quantum mechanics govern the interaction of a small number of subatomic particles. At sufficiently larger scales, though, quantum mechanics effective "shuts off" and classical mechanics dominates, including in most statistical mechanics, where it is the large number of particles is relevant. At yet larger scales (e.g., celestial dynamics), we ignore quantum effects. Yes, physicists are trying to unify the laws so that even the quantum mechanics that describes the interactions of quarks is somehow unified with gravitation, which governs at the largest scales... but the fact that these forces have different natural scales leads to qualitatively different classes of phenomena at the different scales, and hence the complexity and variety of physical phenomena.
|
I'm not a number theorist, but FWIW: I would talk, not so much about Gödel's Theorem itself, but about the wider phenomenon that Gödel's Theorem was pointing to, although the terminology didn't yet exist when the theorem was published in 1931. Namely, number theory is already a universal computer. Or more precisely: when we ask whether a given equation has an integer solution, that's already equivalent to asking whether an arbitrary computer program halts. (A strong form of that statement, where the equations need to be polynomial Diophantine equations, was Hilbert's Tenth Problem , and was only proven by the famous MRDP Theorem in 1970. But a weaker form of the statement is contained in Gödel's Theorem itself.) Once you understand that, and you also understand what arbitrary computer programs can do, I think it's no surprise that number theory would seem to contain unlimited amounts of complexity. The real surprise, of course, is that "simple" number theory questions, like Fermat's Last Theorem or the Goldbach Conjecture, can already show so much of the complexity, that one already sees it with questions that I intend to explain to my daughter before she's nine. This is the number-theoretic counterpart of the well-known phenomenon that short computer programs already give rise to absurdly complicated behavior. (As an example, there are 5-state Turing machines, with a single tape and a binary alphabet, for which no one has yet proved whether they halt or not, when run on a tape that's initially all zeroes. This is equivalent to saying that we don't yet know the value of the fifth Busy Beaver number .) Here, I think a crucial role is played by a selection effect. Above, I didn't talk about the overwhelming majority of 5-state Turing machines for which we do know whether they halt, nor did I talk about 10,000-state TMs---because those wouldn't have made my point. Likewise, the number-theory questions that become famous, are overwhelmingly skewed toward those that are easiest to state yet hardest to solve. So it's enough for some such questions to exist ---or more precisely, for some to exist that are discoverable by humans---to give rise to what you're asking about. Another way to look at it is that number theorists, in the course of their work, are naturally going to be pushed toward the "complexity frontier"---as one example, to the most complicated types of Diophantine equations about which they can still make deep and nontrivial statements, and aren't completely in Gödel/Turing swampland. E.g., my layperson's caricature is that linear and then quadratic Diophantine equations were understood quite some time ago, so then next up are the cubic ones, which are the kind that give rise to elliptic curves, which are of course where number theory still expends much of its effort today. Meanwhile, we know that if you go up to sufficiently higher complexity--- apparently , a degree-4 equation in 9 unknowns suffices; it's not known whether that's optimal---then you've already entered the terrain of the MRDP Theorem and hence Gödel-undecidability (at least for arbitrary equations of that type). In summary, if there is a borderland between triviality and undecidability, where questions can still be resolved but only by spending centuries to develop profound theoretical machinery, then number theory seems to have a pretty natural mechanism that would cause it to end up there! 
(One sees something similar in low-dimensional topology: classification of 2-manifolds is classical; 4-manifold homeomorphism is known to be at least as hard as the halting problem; so then that leaves classification of 3-manifolds. I originally wrote that this was achieved by Perelman's proof of geometrization, but I've since learned this is still open, although geometrization does lead to a decision procedure for 3-manifold homeomorphism.) In some sense I agree with Gerhard Paseman's answer, except that I think that Wolfram came several generations too late to be credited for the basic insight, and that there's too much wrong with A New Kind of Science for it to be recommended without strong warnings. The pictures of cellular automata are fun, though, and do help to make the point about just how few steps you need to take through the space of rule-systems before you're already at the edge of the abyss.
|
{
"source": [
"https://mathoverflow.net/questions/282869",
"https://mathoverflow.net",
"https://mathoverflow.net/users/89654/"
]
}
|
282,989 |
It is a well-known fact that one can derive some spectacular identities, e.g. $\sum^{n-1}_{m=1}\sigma_3(m)\sigma_3(n-m)=\frac {\sigma_7(n)-\sigma_3(n)}{120}$ $\sum^{n-1}_{m=1}\sigma_3(m)\sigma_9(n-m)=\frac {\sigma_{13}(n)-11\sigma_9(n)+10\sigma_3(n)}{2640}
$ just by equating several modular forms with each other. In the book The 1-2-3 of Modular Forms Don Zagier writes: "It is not easy to obtain any of these identities by direct number-theoretical reasoning (although in fact it can be done)". Does anybody know how to derive these identities "the hard way", or can someone at least point me to some discussion?
|
Ramanujan's original paper On certain arithmetical functions gives a direct proof. The ideas behind this proof are closely related to the usual modular forms proof, but the words Eisenstein, vector space, modular forms are not mentioned. Indeed Ramanujan says explicitly that his identities "are of course really results in the theory of elliptic functions" and that "the elementary proof of these formulae given in the preceding sections seems to be of interest in itself." Briefly, Ramanujan writes down the power series for $P$ (which is $E_2$), $Q$ (which is $E_4$) and $R$ (which is $E_6$), and shows that
$$
\Phi_{r,s}(x) = \sum_{n=1}^{\infty} x^n \Big(\sum_{ab=n} a^r b^s\Big)
$$
can be expressed by polynomials in $Q$ and $R$. Then he multiplies $\Phi_{0,r}$ and $\Phi_{0,s}$ and compares the answer with $\Phi_{1,r+s}$ and $\Phi_{0,r+s+1}$.
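For what it is worth, the first identity in the question is easy to test numerically; a small sketch of my own with a naive divisor-sum function:

```python
# Check  sum_{m=1}^{n-1} sigma_3(m) sigma_3(n-m) = (sigma_7(n) - sigma_3(n)) / 120.
def sigma(k, n):
    return sum(d ** k for d in range(1, n + 1) if n % d == 0)

for n in range(2, 50):
    lhs = sum(sigma(3, m) * sigma(3, n - m) for m in range(1, n))
    assert 120 * lhs == sigma(7, n) - sigma(3, n)
print("identity verified for n = 2, ..., 49")
```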
|
{
"source": [
"https://mathoverflow.net/questions/282989",
"https://mathoverflow.net",
"https://mathoverflow.net/users/114143/"
]
}
|
283,142 |
Let $\Sigma$ be a compact Riemann surface equipped with a spin structure (a square root of $\Omega^1_\Sigma$, denoted $\Omega^{1/2}_\Sigma$). Let $\Gamma(\Omega^1_\Sigma)$ be the space of holomorphic differentials on $\Sigma$, and let $\Gamma(\Omega^{1/2}_\Sigma)$ be the space of holomorphic
$\frac12$-forms. I believe that there is a canonical isomorphism $$
\Lambda^{top}\big(\Gamma(\Omega^{1/2}_\Sigma)\big)^{\otimes 2}
\cong
\Lambda^{top}\big(\Gamma(\Omega^1_\Sigma)\big).
$$ Did I get this right? [I got it wrong! See the answers below.] What is the isomorphism?
Where can I read about it? PS: Both $\Lambda^{top}(\Gamma(\Omega^{1/2}_\Sigma))$ and $\Lambda^{top}(\Gamma(\Omega^1_\Sigma))$ are line bundles over the moduli stack of spin Riemann surfaces.
|
There is no such isomorphism (at least for $g \geq 9$). In O. Randal-Williams, The Picard group of the moduli space of r-Spin Riemann surfaces . Advances in Mathematics 231 (1) (2012) 482-515. I computed the Picard groups of moduli spaces of Spin Riemann surfaces (for $g \geq 9$). Grothendieck--Riemann--Roch shows that, in the notation of that paper, the right hand side of your formula is the class $\lambda$, and the left-hand side is the class $\lambda^{1/2}=2\mu$. (See page 511 of the published version for the calculation of the latter; the preprint has some mistakes at this point.) But the Picard group has presentation $\langle \lambda, \mu \,\vert\, 4(\lambda + 4\mu)\rangle$ as an abelian group, so these are not equal (even modulo torsion). You should not really need my calculation to see this: you can calculate the rational first Chern class of both sides by GRR, and see that they are distinct multiplies of the Miller--Morita--Mumford class $\kappa_1$; all that remains to know is that $\kappa_1 \neq 0$, which was shown in J. L. Harer, The rational Picard group of the moduli space of Riemann surfaces with spin structure . Mapping class groups and moduli spaces of Riemann surfaces (Göttingen, 1991/Seattle, WA, 1991), 107–136, Contemp. Math., 150, Amer. Math. Soc., Providence, RI, 1993. EDIT: To answer the question in the comments. Yes, I think that (in my notation) the relation $4(\lambda + 4\mu)=0$ holds for $g \geq 3$ (for $g < 3$ it can probably be checked by hand). To see this, let me shorten notation by writing $\mathcal{M}_g = \mathcal{M}_g^{1/2}[\epsilon]$ for the moduli space of Spin Riemann surfaces of Arf invatriant $\epsilon \in \{0,1\}$, $\pi : \mathcal{M}_g^1 \to \mathcal{M}_g$ for the universal family (i.e. $\mathcal{M}_g^1$ is the moduli space of Spin Riemann surfaces with one marked point), and $\mathcal{M}_{g,1}$ for the moduli space of Spin Riemann surfaces with one boundary component. Firstly, the Serre spectral sequence for $\pi$ has
$$E_2^{0,1} = H^0(\mathcal{M}_g ; H^1(\Sigma_g ; \mathbb{Z}))$$
and one can show that this is zero: the fundamental group of $\mathcal{M}_g$ acts on $H^1(\Sigma_g ; \mathbb{Z}) = \mathbb{Z}^{2g}$ via a surjection onto a finite-index subgroup of $\mathrm{Sp}_{2g}(\mathbb{Z})$, but this finite-index subgroup will still be Zariski-dense in $\mathrm{Sp}_{2g}(\mathbb{C})$, so the (complexified) invariants will be zero. It follows from the Serre spectral sequence that
$$\pi^* : H^2(\mathcal{M}_g; \mathbb{Z}) \to H^2(\mathcal{M}_g^1; \mathbb{Z})$$
is injective, so it is enough to prove the relation when there is a marked point. In fact, it even follows that
$$H^2(\mathcal{M}_g; \mathbb{Z}) \oplus \mathbb{Z}\cdot e \to H^2(\mathcal{M}_g^1; \mathbb{Z})$$
is injective, where $e$ denotes the Euler class of the vertical tangent bundle of $\pi$. Now there is a fibration sequence
$$\mathcal{M}_{g,1} \to \mathcal{M}_g^1 \overset{\frac{e}{2}}\to BSpin(2)$$
and so, from the Serre spectral sequence, an exact sequence
$$0 \to \mathbb{Z}\cdot \frac{e}{2} \to H^2(\mathcal{M}_g^1;\mathbb{Z}) \to H^2(\mathcal{M}_{g,1};\mathbb{Z}) \overset{d^2}\to H^1(\mathcal{M}_{g,1};\mathbb{Z}).$$
Now it follows from A. Putman, A note on the abelianizations of finite-index subgroups of the mapping class group , Proc. Amer. Math. Soc. 138 (2010) 753-758. that for $g \geq 3$ the fundamental group of $\mathcal{M}_{g,1}$ has torsion abelianisation, so its first cohomology is zero. Putting it all together, we get an injection
$$H^2(\mathcal{M}_g; \mathbb{Z}) \to H^2(\mathcal{M}_{g,1}; \mathbb{Z}),$$
so it is enough to verify the relation $4(\lambda + 4\mu)=0$ on $\mathcal{M}_{g,1}$. But for $g \leq 9$ there is a map
$$\mathcal{M}_{g,1} \to \mathcal{M}_{9,1} \to \mathcal{M}_9,$$
given by gluing on 2-holed tori then a disc, so the relation holds because it holds on $\mathcal{M}_9$.
|
{
"source": [
"https://mathoverflow.net/questions/283142",
"https://mathoverflow.net",
"https://mathoverflow.net/users/5690/"
]
}
|
283,172 |
This question is partly inspired by David Stork's recent question about the enigmatic complexity of number theory . Are there algebraic systems which are similar enough to the integers that one can formulate a "Riemann hypothesis" but for which the Riemann hypothesis is false? One motivation for constructing such things would be to illustrate the barriers to proving the Riemann hypothesis, and another one would be to illustrate how "delicate" the Riemann hypothesis is (i.e., that it's not something that automatically follows from very general considerations). I've run across various zeta functions over the years, but I seem to recall that either the Riemann hypothesis is probably/provably true, or the zeta function is too unlike the classical zeta function to yield much insight. More generally, what happens if we replace "Riemann hypothesis" with some other famous theorem or conjecture of number theory that seems to be "delicate"? Can we construct interesting systems where the result fails to hold?
|
One way of making "fake integers" explicit is a Beurling generalized number system , which is the multiplicative semigroup $Z$ generated by a (multi)set $P$ of real numbers exceeding $1$; lots of research has been done on the relationship between the counting function of $P$ (the Beurling primes ) and the counting function of $Z$ itself. In this context, it is certainly known that the Riemann hypothesis can fail; see for example this paper of Diamond, Montgomery, and Vorhauer . If this is not "similar enough to the integers", then I think you should more clearly define what you mean by that phrase.
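Just to make the definition concrete, here is a toy illustration of my own (the "primes" below are made up and have nothing to do with the construction in the cited paper): one can enumerate a small Beurling system and its counting function.

```python
# The multiplicative semigroup generated by a finite multiset of real "primes" > 1,
# listed up to a bound; len(Z) is the value of the counting function N(bound).
def beurling_integers(primes, bound):
    nums = [1.0]
    for p in primes:
        extended = []
        for x in nums:
            y = x
            while y <= bound:
                extended.append(y)
                y *= p
        nums = extended
    return sorted(nums)

P = [1.9, 3.1, 4.7, 6.2]          # made-up Beurling primes
Z = beurling_integers(P, 100.0)
print(len(Z), Z[:8])
```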
|
{
"source": [
"https://mathoverflow.net/questions/283172",
"https://mathoverflow.net",
"https://mathoverflow.net/users/3106/"
]
}
|
283,753 |
Question. Does there exist a graph $G$ with $(\Delta(G),\chi(G),\omega(G))=(8,8,6)$? Remarks. Here, 'graph'='undirected simple graph'='irreflexive symmetric relation on a set' any number of vertices is permitted in the question (though, trivially, at least 8 vertices (and quite a bit more) vertices are necessary finiteness of the graph is not required, but finite graphs are the main focus $\Delta(G)$ $=$ maximum vertex degree of $G$ $\chi(G)$ $=$ chromatic number of $G$ $\omega(G)$ $=$ clique number of $G$ Needless to say, what I am asking for is not ruled out by the trivial bounds $\omega\leq\chi\leq\Delta+1$. One should also note that what I am asking for can be regarded as a 'critical instance' for Brooks' theorem (since $(\chi(G),\omega(G))=(8,6)$ by itself implies that $G$ is neither a complete graph nor a circuit, Brooks theorem applies and guarantees that $\chi(G)\leq 8$, hence what is being asked is an explicit example of a graph achieving Brooks' bound; what seems to make it difficult is that the clique number is required to be six.) Needless to say, the condition that $\Delta(G)=8$ rules out most of the 'usual' 'named graphs (e.g., all strongly regular graph that the English Wikipedia currently lists as a named example are not 8-regular, so do not satisfy $\Delta(G)=8$ While I won't go into details to try to 'prove' that I did, I think I did try quite a few things to find an example. This question is motivated both by research on triangle-free graphs (of course, there, $\omega=2)$ and in particular by my writing a comprehensive answer to this interesting research question of 'C.F.G.' . I seem to need a graph as in the question, to make my answer 'more complete', if that's grammatically possible. The particular instance in the question isn't really necessary to my answer, but it would be very nice to have a construction, or proof of non-existence of, such a graph. Let me also mention that constructions of, or proofs of impossibility of, graphs $G$ with (axiom.0) $\quad\Delta(G)\geq 8$ (axiom.1) $\quad\omega(G)\leq\lceil\frac12\Delta(G)\rceil+2$ (axiom.2) $\begin{cases}
3+\frac12\Delta(G)\hspace{25pt} < \quad\chi(G)\quad \leq 2+\frac34(\Delta(G)+0) & \text{if $\Delta(G)\ \mathrm{mod}\ 4\quad = 0$ } \\
2+\frac12(\Delta(G)+3) < \quad\chi(G)\quad \leq 2+\frac34(\Delta(G)+3) & \text{if $\Delta(G)\ \mathrm{mod}\ 4\quad = 1$ } \\
3+\frac12\Delta(G)\hspace{25pt} < \quad\chi(G)\quad \leq 0+\frac34(\Delta(G)+2) & \text{if $\Delta(G)\ \mathrm{mod}\ 4\quad = 2$ } \\
3+\frac12(\Delta(G)+1) < \quad\chi(G)\quad \leq 1+\frac34(\Delta(G)+1) & \text{if $\Delta(G)\ \mathrm{mod}\ 4\quad = 3$ }
\end{cases}$ would be helpful for writing the answer to C.F.G.'s question, too, though I chose to make the actual question simpler, by picking out the instance $\Delta(G)=8$, whereupon the first line of the cases above becomes $7<\chi(G)\leq8$, equivalently, $\chi(G)=8$. Note also that then $\lceil\frac12\Delta(G)\rceil+2=6$, explaining the $\omega$-value in the question. Again, an answer to the present answer does not simply imply an answer to the cited thread ; I am not asking someone else's question be answered again here; the present question arises as a relevant technical sub-issue in my answer. For the above more general specification of graphs, planar graph evidently are no solution (since they have $\chi(G)\leq4$), and dense random graphs are not a solution either because, very roughly speaking, for them asymptotically almost surely (axiom.1) is satisfied but (axiom.2) is not (because, very very roughly, $G(n,\frac12)$ has $\Delta\in\Theta(\frac12 n)$ but $\chi\in\Theta(\frac{n}{\log n})$, making it impossible to satisfy the lower bound in (axiom.2) . Results of McDiarmid, Müller and Penrose make it possible to 'try out' random geometric graphs , but for those, sadly, again not all axioms seem to be satisfied (though getting comprehensible results on the maximum degree in random geometric graph seems to be difficult). Random regular graphs also do not provide an answer, e.g. because [Coja-Oghlan--Efthymiou--Hetterich: On the chromatic number of random regular graphs Journal of Combinatorial Theory, Series B, 116:367-439] implies that for this it is necessary that $8 \geq (2\cdot 8-1)\log(8) - 1$, which is false.
|
Yes, it exists. Take 5 triangles $T_1,\dots,T_5$ (all 15 vertices are distinct) and draw also all edges between $T_i$ and $T_{i+1}$, $i=1,2,3,4$, and between $T_5$ and $T_1$. All degrees are equal to 8, maximal clique is formed by two neighboring triangles, and $\chi=8$. Indeed, each color may appear at most twice (in at most two triangles), thus 7 colors are not enough. But 8 colors are of course enough (by Brooks theorem, if you want.)
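For readers who want to double-check the construction by computer, here is a short brute-force verification sketch in Python (the vertex labelling and the backtracking colouring routine are arbitrary choices):

```python
from itertools import combinations

# Vertices 0..14; triangle T_i = {3i, 3i+1, 3i+2}.  Edges: inside each triangle,
# and all edges between consecutive triangles T_i, T_{i+1} (indices mod 5).
tri = [set(range(3 * i, 3 * i + 3)) for i in range(5)]
adj = {v: set() for v in range(15)}
for i in range(5):
    for u in tri[i]:
        adj[u] |= (tri[i] | tri[(i + 1) % 5] | tri[(i - 1) % 5]) - {u}

assert all(len(adj[v]) == 8 for v in adj)            # Delta(G) = 8

def is_clique(S):
    return all(v in adj[u] for u, v in combinations(S, 2))

def is_independent(S):
    return all(v not in adj[u] for u, v in combinations(S, 2))

assert any(is_clique(S) for S in combinations(range(15), 6))       # omega >= 6
assert not any(is_clique(S) for S in combinations(range(15), 7))   # omega <= 6

# No independent set of size 3: a colour class has at most 2 vertices,
# hence chi >= ceil(15/2) = 8.
assert not any(is_independent(S) for S in combinations(range(15), 3))

def colourable(k):
    """Backtracking test for a proper k-colouring."""
    col = {}
    def go(v):
        if v == 15:
            return True
        for c in range(k):
            if all(col.get(u) != c for u in adj[v]):
                col[v] = c
                if go(v + 1):
                    return True
                del col[v]
        return False
    return go(0)

assert colourable(8)   # together with the lower bound above, chi(G) = 8
```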
|
{
"source": [
"https://mathoverflow.net/questions/283753",
"https://mathoverflow.net",
"https://mathoverflow.net/users/108556/"
]
}
|
284,041 |
If someone asked me the question for the fundamental group, I would answer as follows: The connection to classification of covering spaces. The fundamental group of many spaces is an object of independent interest. For instance, for an elliptic curve over the complex numbers, the fundamental group is the lattice defining the curve or, equivalently, it is related to the torsion points on the curve. Related to 1, the arithmetic fundamental group is closely related and the arithmetic fundamental group is itself very important. For instance, the Galois groups of fields are an example. The fundamental group offers very natural proofs of fundamental theorems like the fundamental theorem of algebra. However, for the higher homotopy groups, the best answer I could give would be something along the lines of the long exact sequence of homotopy groups for fibrations. Maybe the Hurewicz theorem is also an answer to my question except that I think the Hurewicz theorem is usually used to get information about the homotopy groups from the homology groups. If this is not true, that would be an interesting answer too. I am almost sure this is entirely due to my background (in arithmetic geometry) and lack of formal training in algebraic topology and that the higher homotopy groups are indeed a natural object to study. Ideally, I would appreciate answers that either connect the higher homotopy groups to important invariants of spaces that were already studied (1, 2, 3 above) or proofs of statements not about the higher homotopy groups that however use the higher homotopy groups in an essential way (4 above, and I guess the long exact sequence comes under here).
|
Some good answers have already been given. To my mind, though: a really obvious thing one wants to do in topology is to classify topological spaces, or more reasonably, to classify CW-complexes, at least up to homotopy equivalence. You build CW-complexes by attaching cells. To attach a cell, you just have to specify the attaching map, i.e.,
the continuous map from the boundary of the cell to the skeleton
you've already constructed. The boundary of a cell is a sphere. The homotopy type of the space you get by attaching a cell depends
only on the homotopy class of the attaching map. So homotopy groups are telling you the ways you can attach a cell. So the question of how many homotopy types of CW-complexes you can build with some given property typically comes down to a computational problem about homotopy groups. For example: it sounds like you are an arithmetic geometer who already is convinced of the utility of homology groups and the fundamental groups, so suppose you are doing your work in arithmetic geometry, and you find yourself confronted with two different smooth schemes over Spec Z whose underlying analytic spaces are simply-connected and have homology groups isomorphic to Z in degrees 0, 8, and 14, and trivial in all other degrees. You knock on the door of the friendly topologist down the hall and ask the topologist whether your two analytic spaces are necessarily homotopy-equivalent. The topologist first checks with you to make sure there's some general theorem which ensures that these spaces are homotopy-equivalent to CW-complexes, and then observes that a minimal CW-decomposition of any such space ought to have a 0-cell, an 8-cell, and an 14-cell, since anything else would give you the wrong homology. The 8-cell has to be attached trivially to the 0-cell for silly reasons, so the 8-skeleton of any such CW-complex must be S^8. Then the topologist points out that the attaching map for the 14-cell must be a map from its boundary, a 13-sphere, to an 8-sphere. The homotopy group $\pi_{13}(S^8)$ is in the stable range, by the Freudenthal suspension theorem, so the topologist shows you a 2-primary Adams spectral sequence chart and points to the empty 5-column, and says "So the 5-stem is 2-locally trivial." Then the topologist tells you a bit (maybe more than you wanted to hear) about how the alpha family and beta_1 work at odd primes, ending with the conclusion that the 5-stem (the fifth stable homotopy group of spheres) also vanishes at all odd primes, and hence $\pi_{13}(S^8)$ is trivial. Consequently there is only one homotopy class of attaching map for that 14-cell which has been attached to S^8. So your two analytic spaces are homotopy-equivalent. If you have more than just two cells in positive dimensions, Toda brackets are a convenient way to organize the algebra of the attaching maps, in order to reduce these kinds of classification problems to algebraic problems in the homotopy groups of spheres. Never used this site before--hope I didn't write anything too critically stupid. Sorry if I did.
|
{
"source": [
"https://mathoverflow.net/questions/284041",
"https://mathoverflow.net",
"https://mathoverflow.net/users/58001/"
]
}
|
284,230 |
Here's a familiar conversation: Me: Do you think Conjecture A and Conjecture B are equivalent? Friend: Yes, because I think they're both true. Me: [eye roll] You know what I mean... Does there exist a rigorous notion of what I mean? Perhaps something about the existence of a proof of $p \Leftrightarrow q$ that doesn't "come close" to revealing the truth value of either $p$ or $q$? Unfortunately, the idea of "coming close" sounds rather subjective and reminiscent of Timothy Chow's answer here . Perhaps there's another approach? Edit: Judging by the immediate response, it seems that my question requires additional clarification. I am wondering whether there is a rigorous notion in logic for pairs of statements that are logically equivalent in a nontrivial sense. Observe that the fundamental theorem of algebra is equivalent to $0=0$, but in a trivial sense since they are both true. On the other hand, one might consider $p\Leftrightarrow q$ to be "nontrivially true" if there exists a proof of $p\Leftrightarrow q$ despite $p$ not being provable. Is there a more general theory that applies in the case where $p$ is provable?
|
First of all, in practice when we say "Conjecture A is equivalent to Conjecture B," what we mean is "We have a proof that Conjecture A is true iff Conjecture B is true." We can have such a proof without having a proof of either conjecture, so this is a meaningful situation. Of course, it will (hopefully) later become trivial, when we prove or disprove the conjectures (and so reduce it to "they're both true" or "they're both false"). But your question has more to it than that: suppose we want to say that two theorems we already know to be true are equivalent. How can we do that? (Note that this is something we in fact do all the time - e.g. when we say "the compactness of the real numbers is equivalent to their satisfying the extreme value theorem.") The simplest approach to this is by considering extremely weak axiom systems, which aren't strong enough to prove either result but can prove the equivalence. That is, we work over some very weak "base theory." Historically, of course, the most well-known example is the study of equivalences/implications between versions of the axiom of choice over the theory ZF; as a fun fact, there's a famous story that when Tarski tried to publish a certain equivalence over ZF, one editor rejected it on the grounds that the equivalence between two true statements isn't interesting and the other rejected it on the grounds that the equivalence between two false statements isn't interesting. (I believe there were also hints of interest in equivalences between true principles in the study of absolute geometry , but I'm not certain - it's been a while since I looked at the history of non-Euclidean geometry.) However, ZF is "too strong" for most statements of mathematical interest, so we want to go deeper into things. This is one of the motivations behind reverse mathematics : we look at equivalences/implications/nonimplications over a very weak theory, RCA$_0$, which intuitively corresponds to "computable" mathematics with "finitistic" induction only. For example, here are some statements which are all equivalent to each other in the sense of reverse mathematics: Every commutative ring which is not a field or the zero ring has a nontrivial proper ideal. $[0,1]$ is compact. Every infinite binary tree has an infinite path. (There is actually a serious issue here which doesn't really crop up when proving equivalences over ZF, namely that we have to figure out how to express the statements we care about in the language of our base theory; this is an issue with weak theories like RCA$_0$ whose language is quite limited. I'm ignoring this issue completely here.) And we sometimes want to go weaker still. Equivalences over theories much weaker than RCA$_0$ have been studied, albeit not (in my understanding) as extensively.
|
{
"source": [
"https://mathoverflow.net/questions/284230",
"https://mathoverflow.net",
"https://mathoverflow.net/users/29873/"
]
}
|
284,262 |
Let $(T_1,..., T_n)\in \mathcal{L}(E)^n$ be a tuple of commuting normal operators (i.e. each $T_k$ is normal and $T_iT_j=T_jT_i$ for all $i,j$), where $E$ is a complex Hilbert space. I want to show that
$$\displaystyle\sup_{\|x\|=1}\bigg(\displaystyle\sum_{i=1}^n|\langle T_ix,x\rangle|^2\bigg)\geq\displaystyle\sup_{\|x\|=1}\bigg(\displaystyle\sum_{i=1}^n\|T_ix\|^2\bigg)\;.$$
|
|
{
"source": [
"https://mathoverflow.net/questions/284262",
"https://mathoverflow.net",
"https://mathoverflow.net/users/113054/"
]
}
|
284,292 |
Does there exist any non-trivial space (i.e. one that does not deformation retract onto a point) in $\mathbb R^n$ such that any continuous map from the space onto itself has a fixed point? I highly suspect that the quasi circle in $\mathbb R^2$ is an example, yet I've not written down the (dirty) proof. But in this case all its homotopy groups are trivial. So if I assume my space is a manifold, then (QUESTION:) does this fixed point property force it to become a contractible manifold? I read somewhere that there exists a contractible compact manifold which does not satisfy this fixed point property. So does there exist any non-contractible manifold (compact) where this property follows? Or otherwise can anyone please provide an outline of how to prove that such a manifold is contractible?
|
Take the space $\mathbb{CP}^2$. Its cohomology ring is given by $\mathbb{Z}[a]/a^3$, where $a$ has degree $2$. A map $f:\mathbb{CP}^2\rightarrow \mathbb{CP}^2$ induces a map on the second cohomology group with $f^*(a)=k a$ with $k\in \mathbb{Z}$. From this you can compute the action on (co)homology on the other degrees. In degree zero it is the identity and on the fourth degree it is given by multiplying with $k^2$. Then the Lefschetz number of this map is seen to be $L(f)=k^2+k+1$. This number is never zero. A non-zero Lefschetz number implies a fixed point by the Lefschetz fixed point Theorem.
|
{
"source": [
"https://mathoverflow.net/questions/284292",
"https://mathoverflow.net",
"https://mathoverflow.net/users/33064/"
]
}
|
284,433 |
I learned from a colleague that if one sums translates of the Gaussian density $f(x)=(2\pi)^{-1/2}e^{-x^2/2}$ translated by the integers (i.e. one considers $F(x)=\sum_{n\in\mathbb Z}f(x+n)$), the resulting function is remarkably constant: that is, the function differs from its average value by less than one part in $10^8$. The significance of translating by multiples of 1 is that the inflection point of the Gaussian occurs at $\pm1$. I attempted to repeat this for $g(x)=e^{-3x^4/4}$ which also has an inflection point at $\pm 1$, and found that the resulting sum is constant to one part in 200. Can anyone offer any conceptual explanation for the remarkable degree of constancy of $F(x)$, or is this just a fluke?
|
First of all this has nothing to do with the inflection point of $e^{-\alpha x^2}$. According to Poisson summation formula (see Whittaker, Watson, Modern analysis, chapter 21.51)
$$
\sum_{n=-\infty}^\infty e^{-\alpha (x-n)^2}={\sqrt{\frac{\pi}{\alpha}}}\left(1+2\sum_{n=1}^\infty e^{-\frac{\pi^2}{\alpha}n^2}\cos2\pi n x\right),\quad \text{Re}~\alpha>0.\tag{1}
$$
From this formula one can see that when $\alpha>0$ is small, the Fourier coefficients $a_n=2{\sqrt{\frac{\pi}{\alpha}}}e^{-\frac{\pi^2}{\alpha}n^2}$ of the resulting function decrease rapidly with increasing $n$. When $\alpha=1/2$ one has
$$
\frac{a_1}{a_0}=2e^{-2\pi^2}\approx 5.4\times 10^{-9}.
$$ I don't know any conceptual explanation for this in mathematics, but there is such an explanation that comes from physics. Consider a quantum particle with mass $m$ on a ring of radius $a$. We assume the ring is pierced with magnetic flux $\phi$. We want to calculate partition function of this system at temperature $T>0$ in two different ways. On the one hand, it is known that the energy spectrum of the particle is given by
$$
E_n=\frac{\hbar^2}{2ma^2}\left(n+\frac{\phi}{\phi_0}\right)^2,
$$
where $\phi_0$ is the so called magnetic flux quantum . Then partition function is given as the Gibbs sum $$
Z=\sum_{n=-\infty}^\infty e^{-E_n/T}=\sum_{n=-\infty}^\infty e^{-\frac{\hbar^2}{2ma^2T}\left(n+{\phi}/{\phi_0}\right)^2}.\tag{2}
$$
Here one immediately recognizes the sum analogous to the LHS of $(1)$. On the other hand, partition function is related to the trace of the density matrix $\rho_{\theta_1,\theta_2}$, i.e. to $\int\rho_{\theta,\theta}d\theta$. It is possible to calculate the density matrix of this system in imaginary time representation by solving a certain differential equation. Details of this calculation can be found for example in this book , chapter 4.3. Consider first the more simple case of unbounded line $-\infty<\theta<+\infty$; then the answer is
$$
\rho_{\theta_1,\theta_2}=Ce^{-\frac{mTa^2(\theta_1-\theta_2)^2}{2\hbar^2}},
$$
where $C$ is some normalization constant. Magnetic flux does not enter this expression because there is not any nontrivial loop in the system. On a ring there is a nontrivial loop, and let $n$ be the winding number. It is known that the partition function is a sum over all homotopy classes multiplied by corresponding phase factor. In this case the phase factor comes from the magnetic flux ( Aharonov-Bohm phase ) and equals $e^{2\pi in\phi/\phi_0}$.
As a result we have
\begin{align}
Z&=\sum_{n=-\infty}^\infty e^{2\pi in\phi/\phi_0}\int\rho_{\theta,\theta+2\pi n}d\theta\\
&=2\pi C \sum_{n=-\infty}^\infty e^{-\frac{2\pi^2mTa^2}{\hbar^2}n^2+2\pi in\phi/\phi_0}.\tag{3}
\end{align}
Combining $(2)$ and $(3)$ one obtains the transformation in $(1)$. One can see from this physical interpretation that when the temperature $T$ increases, then the higher Fourier harmonics decrease. This corresponds to the physical intuition that the higher the temperature the more chaotic the system becomes and the effect of the Aharonov-Bohm phase averages out due to thermal fluctuations.
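As a numerical sanity check of the size of the leading correction in $(1)$ for the Gaussian of the question, here is a small Python sketch (straightforward truncated summation; the grid and truncation are arbitrary choices):

```python
import math

def F(x, terms=20):
    """Sum of standard Gaussian densities translated by the integers."""
    return sum(math.exp(-(x + n) ** 2 / 2) / math.sqrt(2 * math.pi)
               for n in range(-terms, terms + 1))

xs = [k / 1000 for k in range(1001)]
vals = [F(x) for x in xs]
mean = sum(vals) / len(vals)
print(max(abs(v - mean) for v in vals))   # ~ 5.4e-9, dominated by the n = 1 term of (1)
print(2 * math.exp(-2 * math.pi ** 2))    # the predicted ratio a_1/a_0 from above
```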
|
{
"source": [
"https://mathoverflow.net/questions/284433",
"https://mathoverflow.net",
"https://mathoverflow.net/users/11054/"
]
}
|
284,458 |
Start with a circle and draw two tangent circles inside. The (black) inner tangent lines to the smaller circles intersect the large circle. The (red) lines through these intersection points are parallel to the (green) outer tangents to the small circles. A long time ago I worked on this theorem, but I never knew the name. Without a name it's difficult to find more information. Does anyone know if this theorem has a name and where I can find more information about it?
|
Even more is true for this theorem. Check out the drawing from Arseniy Akopyan's wonderful book Geometry in Figures (second, extended edition, 2017): on page 65 we find Figure 4.7.29. In the foreword, Arseniy Akopyan writes "It is commonly very hard to determine who the author of a certain result is." He nevertheless provides sources for many of the figures at the end of his book. Unfortunately, for Figure 4.7.29 he doesn't provide such a reference. This leads me to the answer: probably it doesn't have a name (like many "geometry theorems").
|
{
"source": [
"https://mathoverflow.net/questions/284458",
"https://mathoverflow.net",
"https://mathoverflow.net/users/116470/"
]
}
|
284,855 |
4-dimensional Smale conjecture claims the following: The inclusion $SO(5)$ → $SDiff(S^4)$ is a homotopy equivalence. or Does $Diff(S^4)$ have the homotopy-type of $O(5)$ ?. The inclusion $SO(n + 1$) → $SDiff(S^n)$ is a homotopy equivalence for n = 1 (trivial proof), n = 2 [1004,Smale,1959,Proc. Amer. Math. Soc.], n = 3 [464,Hatcher, 1983,Ann. of Math.], and is not a homotopy equivalence for n ≥ 5 [41,Antonelli, Burghelea, & Kahn,1972,Topology] and [164,Burghelea & Lashof,1974,Trans. Amer. Math. Soc.]. I looked everywhere but I could not find anything. Is this problem still open? Thanks.
|
This problem is completely open.
|
{
"source": [
"https://mathoverflow.net/questions/284855",
"https://mathoverflow.net",
"https://mathoverflow.net/users/99280/"
]
}
|
285,186 |
I have been interested in fractional calculus for some time now, and I have seen "lots" of definitions of the $\frac {d^\alpha} {dx^\alpha}$ operator. I started with the book The Fractional Calculus by Oldham and Spanier, and it comes as no surprise that I favor the Grünwald-Leitnikov derivative. It seems to me a great definition, because it directly generalizes the basic definition of the derivative $\frac {df} {dx}=\lim_{h \rightarrow 0} \frac {f(x)-f(x-h)} {h}$. And it also produces the integral when $\alpha$ is set to be a negative number. Another (which I think is the Liouville definition, but I'm not sure) generalizes the property of differentiating an exponential $\frac {d^k} {dx^k} e^{rx} = r^ke^{rx}$ and thus if $\frac {d^\alpha} {dx^\alpha}f(x)=\sum A_ne^{nx}$ then $f(x)=\sum A_n n^\alpha e^{nx}$. A definition, which is used really often for some reason, is the Caputo derivative. Lot of people find it natural that $\frac {d^{\frac 1 2}} {dx^{\frac 1 2}} [1]=0$, but I think it is "evident" that it should be proportional to $x^{-\frac 1 2}$. Now comes the actual question. Why are there so many definitions of the fractional derivative? Are some of them "better" than the others in some sense? And lastly, is there a general framework, wherein "functions" of differential operators, maybe more general than (fractional) powers, can be given an explicit meaning?
|
The reason is that the fractional derivative is not a local operator. The usual derivative is a local operator in the sense that the value of the derivative at one point only depends on the values of the function in a neighborhood of that point. This is not the case for the fractional derivative, and it cannot be, by a general theoretical result due to Peetre. So the definition depends on the domain of definition of the functions under scrutiny. It is not the same definition if we are looking at functions defined on ${\bf R}$ or on $[0,1]$ or on $[0,\infty)$, and of course the derivative of, say, $\sin$ is not the same in these three cases.
Same for the derivative of the constant function. Fractional derivatives are a particular example of operators obtained using the functional calculus on some operator space. The result of such operation of course depends on the functional space under consideration, which itself is dictated by the context and the problems at hand. tl;dr: there is not a best definition and the fractional derivatives do not share the nice local properties of the usual derivative, so beware.
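To make the remark about the constant function concrete, here is a small numerical sketch of the Grünwald–Letnikov definition recalled in the question, taken with lower terminal $0$ (step size and test point are arbitrary choices); it reproduces $x^{-1/2}/\Gamma(1/2)$ for the half-derivative of the constant $1$, rather than $0$, illustrating how the answer depends on the chosen domain:

```python
import math

def gl_derivative(f, alpha, x, h=1e-4):
    """Grunwald-Letnikov derivative of order alpha at x, with lower terminal 0."""
    n = int(x / h)
    total, w = 0.0, 1.0              # w_k = (-1)^k * binomial(alpha, k), w_0 = 1
    for k in range(n + 1):
        total += w * f(x - k * h)
        w *= (k - alpha) / (k + 1)   # recursion giving the next weight w_{k+1}
    return total / h ** alpha

x = 1.0
print(gl_derivative(lambda t: 1.0, 0.5, x))   # ~ 0.564
print(x ** -0.5 / math.gamma(0.5))            # x^(-1/2) / Gamma(1/2) ~ 0.564
```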
|
{
"source": [
"https://mathoverflow.net/questions/285186",
"https://mathoverflow.net",
"https://mathoverflow.net/users/114143/"
]
}
|
285,304 |
I've always wondered how some "mathematical coincidences" are published, or spread to a wider audience. For instance, the almost integer: $$
e^\pi -\pi = 19.9990999\ldots
$$ was " noticed almost simultaneously around 1988 by N. J. A. Sloane, J. H. Conway, and S. Plouffe " ( Weisstein ). Were these "observations" published somewhere, and if so, what motivated such a publication, knowing that there was no explanation for the coincidence?
|
You answered your question yourself: it is published on the web site that you refer to. The author of the web site cites his sources, in most cases these are personal communications. Many results of this sort are spread by correspondence, on Internet, and by oral personal communication. There is also a journal " Experimental Mathematics " which publishes these kinds of observations, even those for which the author has no explanation.
|
{
"source": [
"https://mathoverflow.net/questions/285304",
"https://mathoverflow.net",
"https://mathoverflow.net/users/103722/"
]
}
|
285,955 |
I'm looking for an explanation or a reference to why there is this equivalence of triangulated categories: $${D}^b(\mathrm {Coh}(\Bbb P^1))\simeq {D}^b(\mathrm {Rep}(\bullet\rightrightarrows \bullet))$$
It is my understanding that the only reason why $\Bbb P^1$ appears at all is because it is used to index the regular irreducible representations of the Kronecker quiver. I have also heard that this equivalence can be used to understand the geometry of $\Bbb P^1$. I suppose more generally, adding arrows to the Kronecker quiver gives a similar result for $\Bbb P^n$.
|
Let $\mathcal O$ be the structure sheaf of $\mathbb P^1$. Then $\mathcal O \oplus \mathcal O(1)$ is rigid and generates the derived category of coherent sheaves on $\mathbb P^1$. Thus, it is a tilting object, and so the derived category is equivalent to the category of modules over its endomorphism ring, which is the path algebra of the Kronecker quiver. For $\mathbb P^n$, you need $n+1$ objects to generate the derived category. You can take $\mathcal O(i)$ for $0\leq i \leq n$; this will be a tilting object again, but its endomorphism ring will not be hereditary; you will get a quiver with relations. This shouldn't be surprising, though: path algebras of quivers have global dimension one, so you shouldn't expect their derived categories to agree with derived categories of sheaves on higher-dimensional varieties.
|
{
"source": [
"https://mathoverflow.net/questions/285955",
"https://mathoverflow.net",
"https://mathoverflow.net/users/54401/"
]
}
|
286,197 |
Articles from the Proceedings of the International Congress of Mathematicians , Seoul, 2014 don't appear to be on Mathscinet. Why is this? (Someone pointed this out to me recently, and I was reminded of it today when I tried to cite a lecture.)
|
We have had difficulty obtaining the requisite permissions from the publisher. The ICM2014 website has the Legal Disclaimer: "The Seoul ICM Organizing Committee, the legal copyright owner of the articles in the proceedings, hearby grants unlimited noncommercial download and use of the articles." This is not sufficient for our purposes. The most recent communication with them was this week. (Note: I'm the person @DanRamras wrote to.) Update (2018.02.22). We have now received permission from the publisher, as well as physical copies of the proceedings. The papers are now in the database and available in MathSciNet. Further processing (such as reviews) is still to come. See ICM2014, Volume I , ICM2014, Volume II , ICM2014, Volume III , and ICM2014, Volume IV .
|
{
"source": [
"https://mathoverflow.net/questions/286197",
"https://mathoverflow.net",
"https://mathoverflow.net/users/919/"
]
}
|
286,626 |
This is an immediate successor of Chebyshev polynomials of the first kind and primality testing and does not have any other motivation - although original motivation seems to be huge since a positive answer (if not too complicated) would give a very efficient primality test (see the linked question for details). Recall that the Chebyshev polynomials $T_n(x)$ are defined by $T_0(x)=1$, $T_1(x)=x$ and $T_{n+1}(x)=2xT_n(x)-T_{n-1}(x)$, and there are several explicit expressions for their coefficients. Rather than writing them down (you can find them at the Wikipedia link anyway), let me just give a couple of examples:
$$
T_{15}(x)=-15x(1-4\frac{7\cdot8}{2\cdot3}x^2+4^2\frac{6\cdot7\cdot8\cdot9}{2\cdot3\cdot4\cdot5}x^4-4^3\frac{8\cdot9\cdot10}{2\cdot3\cdot4}x^6+4^4\frac{10\cdot11}{2\cdot3}x^8-4^5\frac{12}{2}x^{10}+4^6x^{12})+4^7x^{15}
$$
$$
T_{17}(x)=17x(1-4\frac{8\cdot9}{2\cdot3}x^2+4^2\frac{7\cdot8\cdot9\cdot10}{2\cdot3\cdot4\cdot5}x^4-4^3\frac{8\cdot9\cdot10\cdot11}{2\cdot3\cdot4\cdot5}x^6+4^4\frac{10\cdot11\cdot12}{2\cdot3\cdot4}x^8-4^5\frac{12\cdot13}{2\cdot3}x^{10}+4^6\frac{14}{2}x^{12}-4^7x^{14})+4^8x^{17}
$$
It seems that $n$ is a prime if and only if all the ratios in the parentheses are integers; this is most likely well known and easy to show. The algorithm described in the above question requires determining whether, for an odd $n$, coefficients of the remainder from dividing $T_n(x)-x^n$ by $x^r-1$, for some fairly small prime $r$ (roughly $\sim\log n$) are all divisible by $n$. In other words, denoting by $a_j$, $j=0,1,2,...$ the coefficients of $T_n(x)-x^n$, we have to find out whether the sum $s_j:=a_j+a_{j+r}+a_{j+2r}+...$ is divisible by $n$ for each $j=0,1,...,r-1$. The question then is: given $r$ and $n$ as above ($n$ odd, $r$ a prime much smaller than $n$), is there an efficient method to find these sums $s_j$ without calculating all $a_j$? I. e., can one compute $T_n(x)$ modulo $x^r-1$ (i. e. in a ring where $x^r=1$) essentially easier than first computing the whole $T_n(x)$ and then dividing by $x^r-1$ in the ring of polynomials? (As already said, only the question of divisibility of the result by $n$ is required; also $r$ is explicitly given (it is the smallest prime with $n$ not $\pm1$ modulo $r$). This might be easier to answer than computing the whole polynomials mod $x^r-1$.)
|
There's a rapid algorithm to compute $T_n(x)$ modulo $(n,x^r-1)$. Note that
$$
\pmatrix{T_n(x) \\ T_{n-1}(x)} = \pmatrix { 2x & -1 \\ 1&0} \pmatrix{T_{n-1}(x) \\ T_{n-2}(x)} = \pmatrix { 2x & -1 \\ 1&0}^{n-1} \pmatrix{ x\\ 1}.
$$
Now you can compute these matrix powers all modulo $(n, x^{r}-1)$ rapidly by repeated squaring. Clearly $O(\log n)$ multiplications (of $2\times 2$ matrices) are required, and the matrices have entries that are polynomials of degree at most $r$ and coefficients bounded by $n$. So the complexity is a polynomial in $r$ and $\log n$.
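A minimal Python sketch of this matrix-power computation (naive polynomial multiplication modulo $(n, x^r-1)$; with FFT-based multiplication the same scheme gives the stated complexity):

```python
def poly_mul(a, b, n, r):
    """Multiply two polynomials, given as length-r coefficient lists, mod (n, x^r - 1)."""
    c = [0] * r
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                c[(i + j) % r] = (c[(i + j) % r] + ai * bj) % n
    return c

def poly_add(a, b, n):
    return [(x + y) % n for x, y in zip(a, b)]

def mat_mul(A, B, n, r):
    """Multiply 2x2 matrices with entries in Z[x]/(n, x^r - 1)."""
    return [[poly_add(poly_mul(A[i][0], B[0][j], n, r),
                      poly_mul(A[i][1], B[1][j], n, r), n) for j in range(2)]
            for i in range(2)]

def chebyshev_mod(m, n, r):
    """Coefficient list of T_m(x) modulo (n, x^r - 1), via M^(m-1) (x, 1)^T."""
    one, zero = [1 % n] + [0] * (r - 1), [0] * r
    x = [0] * r
    x[1 % r] = (x[1 % r] + 1) % n                                  # the polynomial x
    M = [[[(2 * c) % n for c in x], [(-1) % n] + [0] * (r - 1)],   # [[2x, -1],
         [one, zero]]                                              #  [ 1,  0]]
    R = [[one, zero], [zero, one]]                                 # identity matrix
    e = m - 1
    while e:                                                       # repeated squaring
        if e & 1:
            R = mat_mul(R, M, n, r)
        M = mat_mul(M, M, n, r)
        e >>= 1
    return poly_add(poly_mul(R[0][0], x, n, r), R[0][1], n)        # first entry of M^(m-1)(x,1)^T

print(chebyshev_mod(5, 7, 3))   # [1, 5, 2]: T_5 = 16x^5 - 20x^3 + 5x is 2x^2 + 5x + 1 mod (7, x^3 - 1)
```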
|
{
"source": [
"https://mathoverflow.net/questions/286626",
"https://mathoverflow.net",
"https://mathoverflow.net/users/41291/"
]
}
|
286,732 |
I would like to ask if anyone could share any specific experiences of
discovering nonequivalent definitions in their field of mathematical research. By that I mean discovering that in different places in the literature, the same name is used for two different mathematical objects.
This can happen when the mathematical literature grows quickly and becomes chaotic and of course this could be a source of serious errors.
I have heard of such complaints by colleagues, mostly with respect to definitions of various spaces and operators, but I do not recall the specific (and very specialised) examples. My reason for asking is that I'm currently experimenting with verification using proof assistants and I'd like to test some cases that might be a source of future errors. EDIT: Obviously, I would be interested to see as many instances of this issue as possible so I would like to ask for more answers. Also, it would be even more helpful if you could provide references.
|
Perhaps the mother of all examples is "natural number". You can start an internet flame war by asking whether zero is a natural number.
|
{
"source": [
"https://mathoverflow.net/questions/286732",
"https://mathoverflow.net",
"https://mathoverflow.net/users/117549/"
]
}
|
286,742 |
I'm searching for a proof of Witt's result that a biquadratic extension $K(\sqrt{a},\sqrt{b})/K$ extends to a Galois extension $L/K$ with quaternion group $Q_8$ iff the quadratic forms $<a,b,\frac{1}{ab}>, <1,1,1>$ are equivalent iff $(a,b)(a,a)(b,b) = 0 \in Br(K)$. I know there is a proof in his original paper "Konstruktion von galoisschen Körpern der Charakteristik p zu vorgegebener Gruppe der Ordnung pf". However, I am searching for a proof in English. I would also like to find proofs from a more modern perspective as well as generalizations of the result. Thank you.
|
Perhaps the mother of all examples is "natural number". You can start an internet flame war by asking whether zero is a natural number.
|
{
"source": [
"https://mathoverflow.net/questions/286742",
"https://mathoverflow.net",
"https://mathoverflow.net/users/70019/"
]
}
|
286,804 |
This post is a sequel to: Collaboration or acknowledgment? The following has come to my attention. A senior mathematician (let us call him or her Alice) suggested a problem to a young mathematician (Bobby) who proceeded to solve it on her own and wrote up the result. Bobby agreed (*) to let Alice be listed as a coauthor, but Alice also insisted to include her PhD student (Charlotte) as a coauthor because they were thinking about the same problem, despite the fact that Alice and Charlotte did not even have partial results. (*) Bobby had no problem with Alice joining her as a coauthor for the reasons mentioned by Igor Rivin below (I include you as a coauthor, you write me a good recommendation). Thus, the credit was unfairly diluted by including Charlotte who had not contributed. Question: Is there a way for Bobby to manage such a situation without creating conflict? Full disclosure: I am Bobby's PhD advisor. I can not interfere directly because Alice is a powerful person in the field known for aggressive backing of her PhD students and I do not want to inadvertently hurt Bobby's career. Update: Following the advice of Joel David Hamkins, Bobby will be the sole author. There is nothing in the paper that Alice and Charlotte could point to as an idea they already had in mind. Looking back, what bothered me the most was not that Bobby's credit would be diluted but that someone who did not deserve it would be rewarded. I will award bounty points to JDH for his uplifting Thanksgiving Day Answer , but, of course, any new comments are welcome.
|
Well, of course the young mathematician should simply discuss the
matter with the senior mathematician and perhaps the student until
they can come to an agreeable arrangement. My advice is that they
should all talk about it. Co-authorship is a matter upon which all authors must agree. What other answer could there be? If it seemed that the professor or the student had little or no
contribution, then the young mathematician should say so and
inquire why the professor should be co-author, or why the student
should be co-author. If there was not sufficient contribution, then
the young mathematician should simply say so and there should be a
discussion about it. Perhaps the senior mathematician will point
out that the contribution was greater than realized, or that there
were other aspects of the collaboration of which the young
mathematician is not aware. Or perhaps not, and the senior
mathematician will agree that the young mathematician should
proceed solo. But apart from the particular situation described in the post, let
me now mention several further reactions that I have more generally
to the issues about co-authorship that this question raises. The first and most important thing to say is that collaborative
research is one of the great joys of mathematical life, and I
strongly recommend it. To discuss a mathematical idea with another
mathematician, who can understand what you are saying and who has
thought deeply about the very same topic, gives enormous
satisfaction and meaning to one's life as a mathematician.
Collaborative research is our mathematical social life. For my own part, I am
thankful on today, Thanksgiving Day, for the opportunity that I
have had to interact with all my collaborators; I have
learned so much from them. (See the list of my
collaborators .) Therefore, my advice is that one should seek out collaborations
wherever they are to be found. Often, after one has proved a
theorem, then in joint work it becomes much better, strengthened or
simplified, or a collaborator finds new applications or uses. If
someone asks a question and you answer it, then perhaps the
mathematics is not yet finished, but only begun. Aren't there
further natural questions arising from the result or its proof?
This could be the beginning of a collaboration rather than the end
of one. Another part of my view is that one should be relaxed about
collaboration and co-authorship. Except in extraordinary cases, the
stakes are low. A mathematician seems to get basically as much
respect and credit for a result, whether or not there are
co-authors on the paper, and so I question whether there really is
any meaningful "dilution" as mentioned in the post. It is simply no
big deal to have co-authors or not. Therefore, why not be generous? If someone has made a contribution
to your project, even when the contribution is light, then invite
them as co-author. Few mathematical collaborations are perfectly
balanced contributions, and in most collaborations one person has
had a more important insight or made a larger contribution than the
other. But so what? Perhaps the co-author invitation will be
declined, and that is fine, or perhaps they will join and then
proceed to make your result even better. I have had many
collaborations where at first we had a result, which seemed fine
and complete, but then in writing the joint paper we were able
together to improve the result or give further applications, which
wouldn't have happened without the joint interaction. I think you
will often be surprised. Another point, as I mentioned in the comments, is that asking a
good question in my view is often sufficient for co-authorship. I
have several joint papers that arose from someone asking me a
question (in some cases on MathOverflow), which I answered, and
then asked them to join as co-author. And I've had some the other
way around as well. I find it more natural in such a case, however,
for the theorem-prover to be suggesting the idea of co-authorship,
rather than the question-asker, which is part of why the situation
in the OP seems wrong to me. Another point is that it sometimes happens that person A, perhaps a
junior person, asks a question that person B, perhaps a senior
person, answers, settling the question; but the situation is that
person A simply cares more about it or has a stronger vision of
what the result can become than person B, who is not as interested.
In this case, the solution is that person A should do the work of
writing the paper, with person B as co-author, even though the
result may have been due to person B. The point is that person B
would not bother to write this paper on their own, but the joint
authorship brings the mathematical paper into existence. The result
can be a great paper, and I know of many papers following this
pattern. In summary, pursue collaborations; be generous about co-authorship;
be relaxed about co-authorship; enjoy joint mathematical
interaction; make great mathematics.
|
{
"source": [
"https://mathoverflow.net/questions/286804",
"https://mathoverflow.net",
"https://mathoverflow.net/users/111456/"
]
}
|
286,872 |
When Paul Gordan became a professor in 1875 he could show the binary form in any degree has some finite complete system of (general linear) invariants, but he could not actually give a complete system above degree 6. He discussed this limitation that year in Uber das Formensystem binaerer Formen (B.G. Tuebner, Leipzig). So far as I can tell (by reading Kung, Sturmfels, Derksen, and Eisenbud and some correspondence with them) advances in theoretical and computer algebra have not really changed that. The complexity of calculation rises so quickly with degree that the limit of degree 6 has not been passed by much if at all. But perhaps my information is incomplete or out of date. Is it now possible to calculate specific complete systems of invariants in higher degrees? What is the state of that? Answers to the question Algorithms in Invariant Theory give relevant references but they do not give any clear answer. One leads to an arXiv article which describes one computer package this way. The package calculate the set of irreducible invariants up to degree
min(18, βd), but in all known computable cases this set coincides with
a minimal generating set, see, for example, Brouwer’s webpage http://www.win.tue.nl/~aeb/math/invar/invarm.html That refers to the degree of the invariants, not the degree of the form they are invariant for. I did not find a description there of which cases are computable. And that link no longer works.
|
Let $F$ be a binary form of degree $d$ , namely, a homogeneous polynomial of the form $$
F(\mathbb{x})=\sum_{i=0}^{d}\left(\begin{array}{c}d\\ i \end{array}\right)f_i\ x_1^{d-i}x_2^i
$$ where $\mathbb{x}$ denotes the pair of variables $(x_1,x_2)$ .
For $g=(g_{ij})_{1\le i,j\le 2}$ in $GL_2$ , define the corresponding left action on the variables by $g\mathbb{x}=(g_{11}x_1+g_{12}x_2, g_{21}x_1+g_{22}x_2)$ . This gives an action on binary forms via $$
(gF)(\mathbb{x}):=F(g^{-1}\mathbb{x})\ .
$$ Now consider $C(F,\mathbb{x})=C(f_0,\ldots,f_d;x_1,x_2)$ a polynomial in these $d+3$ variables. It is classically called a covariant of the (generic) binary form $F$ if it satisfies $$
C(gF,g\mathbb{x})=C(F,\mathbb{x})
$$ for all matrices $g$ in $SL_2$ .
Such polynomials form a ring ${\rm Cov}_d$ . It has a subring ${\rm Inv}_d$ made of polynomials in the coefficients of the form $f_0,\ldots,f_d$ only. This is the ring of invariants. It is well known that invariants do not separate orbits. However, covariants separate orbits. It is also obvious that if one knows a minimal system of generators for ${\rm Cov}_d$ then it will contain (as the degree zero in $\mathbb{x}$ subset) a minimal system for ${\rm Inv}_d$ . The minimal systems for the rings ${\rm Cov}_5$ and ${\rm Cov}_6$ were determined by Gordan in his 1868 article (and not in 1875).
Then von Gall determined ${\rm Cov}_8$ around 1880 and later the harder case ${\rm Cov}_7$ in 1888 . In 1967 , Shioda rederived a minimal system for ${\rm Inv}_8$ and also found all the syzygies among these generators. von Gall's system for the septimic was generating but not minimal. Six elements in his list were in fact reducible. The determination of a truly minimal system of 147 covariants for ${\rm Cov}_7$ is
due to Holger Cröni (2002 Ph.D. thesis) and Bedratyuk in J. symb. Comp. 2009 .
In 2010, Brouwer and Popoviciu obtained the minimal systems of generators for ${\rm Inv}_9$ (92 invariants) and ${\rm Inv}_{10}$ (106 invariants). Only very recently, Lercier and Olive managed to go beyond von Gall's 1888 results and determined the minimal systems of generators for ${\rm Cov}_9$ (476 covariants) and ${\rm Cov}_{10}$ (510 covariants). Addendum: Recently, for $d$ divisible by four, I produced an explicit list of invariants of degrees $2,3,\ldots,\frac{d}{2}+1$ and proved that they are algebraically independent. See my article "An algebraic independence result related to a conjecture of Dixmier on binary form invariants".
|
{
"source": [
"https://mathoverflow.net/questions/286872",
"https://mathoverflow.net",
"https://mathoverflow.net/users/38783/"
]
}
|
286,874 |
It was an ambitious project of Vladimir Voevodsky 's to provide new foundations for mathematics with univalent foundations (UF) to eventually replace set theory (ST). Part of what makes ST so appealing is its incredible conciseness: the only undefined symbol it uses is the element membership $\in$. UF with its type theory and parts of higher category theory seems to be a vastly bigger body to build the foundation of mathematics. To draw from a (certainly very imperfect) analogy from programming: ST is like the C programming language (about which Brian Kernighan wrote : "C is not a big language, and it is not well served by a big book"), but UF seem more like the vast language of Java with all its object-oriented ballast. Questions. Why should mathematicians study UF, and in what respect could UF be superior to ST as a foundation of mathematics?
|
I like your analogy with programming languages. If we think of ST as a low-level programming language and UF as a high-level one, then one advantage of UF is obvious: it is more convenient to write proofs (programs) in a high-level language. It is feasible to write proofs in UF, but it's virtually impossible to write down even statements of theorems in plain ST. This argument shows that UF is more convenient in practice than ST (well, I didn't give any proofs of this statement, but they can be found somewhere else, see this question and this list for example), but your question is about foundations and not about practice, so let me address this aspect. Let me give two reasons why UF may be better as a foundation of mathematics than ST. The first reason is that all the constructions in homotopy type theory are stable under isomorphism, so if you prove a property of (let's say) a group, then this property is true for any isomorphic group. This is not true in ST and this problem is usually swept under the carpet. Another argument is that the category theory in UF is more well-behaved. For example, we have the concept of anafunctors in ST, but we don't need them in UF since they coincide with ordinary functors for univalent categories. Finally, let's discuss the problem that UF is more complicated than ST. I claim that it's not actually true: ST isn't much smaller than UF. ZFC has (about) 10 axioms, but it is based on the formalism of first-order logic which has quite a few rules too. A types theory combines the rules of the logic and axioms of a set theory into one system (which, I think, is more elegant, but it's a matter of taste I guess). To prove that the number of constructions of a type theory is roughly the same as FOL+ZFC, let me just list some of them. On the left we have a type theoretic construction and on the right I write a FOL+ZFC construction which is analogous or similar (this correspondence is very informal and isn't precise). Sigma types / Existential quantifier, Axiom of union Pi types / Universal quantifier, Axiom of power set Sum types / Disjunctions, Axiom of pairing Identity types / Equality Natural numbers types / Axiom of infinity Universes / Large cardinals Univalence axiom (for propositions) / Axiom of extensionality I could continue this list, but this relationship isn't precise, so let me stop here. You can see that some type theoretic constructions correspond to two different constructions on the FOL+ZFC side. This is because the logic and the set theory are fused together in TT, propositions are just a special kind of type. So one construction in TT may correspond to two constructions in FOL+ZFC. Thus the basic (homotopy) type theory has less constructions than FOL+ZFC. You can extends TT with other constructions such as (higher) inductive types, but you don't have to. The basic version of TT with the axiom of choice is roughly equivalent to ZFC (this statement can be made precise, but that's beside the point). So you get an equivalent theory with less constructions and (arguably) more elegant presentation. Moreover, you can get not only a set theory, but also a theory of homotopy types almost for free, you just need to add a very simple extension (universes + the univalence axiom).
|
{
"source": [
"https://mathoverflow.net/questions/286874",
"https://mathoverflow.net",
"https://mathoverflow.net/users/8628/"
]
}
|
286,970 |
I don't know if MO is the right place to ask such a question, but anyway it's my only hope to get an answer, and it's very important for me (not to say 'vital'); so let's try. I'm at this time a Ph.D. student, and I plan to defend in the spring of 2018. I'm currently looking for a postdoc position for next year. I am, at this time, known as a man in the mathematical community, but I'm actually a trans woman, beginning my gender transition. I have two problems. Firstly, I will have to come out as transgender, in at most a few years, in the mathematical community, and I'm quite fearful about the consequences (for example, for my career). Secondly, I have to ensure before applying for a postdoc that in the country where I apply, I will be able to pursue my transition, I will be accepted as I am at the university, and that there won't be any major threat to my security (because of the policy of the country regarding trans people, for instance). For this reason, having contacts in these countries who are researchers in maths and are familiar with transidentity questions would be very helpful for me, as I have no other means to get the info I need. So my first question is: are there, here, trans mathematicians who would be willing to talk with me, in private, about how they came out (if they had to) in the mathematical community, how it was accepted, what have been the consequences for their career, and more generally what was their experience as trans mathematicians? (I also have other specific questions like, for instance, how to deal with a change of your first name when you already have published under your former name?) Even if you're not trans, if you have information about all of this (if you know a trans mathematician for example), I would be interested. My second question is: in the countries where I am interested in applying for a postdoc, that is Spain, the Czech Republic, Canada, the US, and Brazil, do you have any contacts, in the academic world, who are familiar with LGBT questions, and who could give me an idea about the situation of trans people in their country, and especially at the university? (In order for me to know if it's safe to apply there or not?) If some of you are asking yourselves why I don't ask these questions directly to researchers of the universities where I want to apply: that's simply because it's not safe. Trans people have to face a lot of discrimination and you never know if speaking about your transidentity with someone you don't know is safe or not - that's the reason why I chose to ask it anonymously here, first. (You can contact me in private at rdm.v[at]yahoo[dot]com.)
|
Spectra is an organization for LGBT mathematicians. I hope that you can find people to safely discuss your questions with there.
|
{
"source": [
"https://mathoverflow.net/questions/286970",
"https://mathoverflow.net",
"https://mathoverflow.net/users/117681/"
]
}
|
287,011 |
For $n,m \geq 3$, define $ P_n = \{ p : p$ is a prime such that $ p\leq n$ and $ p \nmid n \}$ . For example :
$P_3= \{ 2 \}$
$P_4= \{ 3 \}$
$P_5= \{ 2, 3 \}$,
$P_6= \{ 5 \}$ and so on. Claim: $P_n \neq P_m$ for $m\neq n$. While working on prime numbers I formulated this problem and it has eluded me for a while so I decided to post it here. I am not sure if this is an open problem or solved one. I couldn't find anything that looks like it.
My attempts haven't come to fruition though I have been trying to prove it for a while. If $m$ and $n$ are different primes then it's clear. If $m \geq 2n$, I think we can find a prime in between so that case is also taken care of. My opinion is that it eventually boils down to proving this statement for integers that share the same prime factors. My coding is kind of rusty so would appreciate anybody checking if there is a counterexample to this claim. Any ideas if this might be true or false? Thanks. PS: I asked this question on mathstackexchage and somebody recommended I post it here as well. Here is the link to the original post https://math.stackexchange.com/questions/2536176/a-conjecture-regarding-prime-numbers
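Since the question invites a computational check, here is a quick brute-force verification of the claim for small $n$ (a Python sketch; the bound $5000$ is an arbitrary choice):

```python
N = 5000
sieve = [True] * (N + 1)
sieve[0] = sieve[1] = False
for p in range(2, int(N ** 0.5) + 1):
    if sieve[p]:
        for q in range(p * p, N + 1, p):
            sieve[q] = False
primes = [p for p in range(2, N + 1) if sieve[p]]

seen, found = {}, False
for n in range(3, N + 1):
    P_n = frozenset(p for p in primes if p <= n and n % p != 0)
    if P_n in seen:
        print("counterexample:", seen[P_n], n)
        found = True
    seen[P_n] = n
if not found:
    print("no counterexample for 3 <= m < n <=", N)
```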
|
This is true (for large $m$ and $n$) under ABC plus the assumption that there is a prime in $[x,x+x^{1/2-\delta}]$ for some positive $\delta$ (which is widely believed, but beyond RH). To see this, suppose $m <n$
and that they have the same radical $r$. Write $m=gM$ and $n=gN$ where $g$ is the gcd of $m$ and $n$, so that $M$ and $N$ are coprime. Applying the ABC conjecture to $M + (N-M) = N$, we conclude that
$$
(N-M) r \ge N^{1-\epsilon},
$$
so that
$$
n-m \ge n^{1-\epsilon}/r.
$$
On the other hand, clearly $n-m$ is also $\ge r$ (since it is divisible by $r$). It follows that $n-m \ge n^{1/2-\epsilon}$, and the assumption that there are primes in short intervals finishes the (conditional) proof. The problem is likely very hard, as fedja's observation in the comments already shows. There is a conjecture of Hall that $|x^3-y^2| \gg x^{1/2-\epsilon}$ which is wide open. The best results that are known here (going back to Baker's method) are of the flavor $|x^3-y^2| \gg (\log x)^C$ for some $C$. If $x^3-y^2$ does get as small as in the Baker result, then take $n=x^4y$ and $m=xy^3$, which clearly have the same radical and then $|n-m|$ is of size essentially $n^{5/11}$. In other words, either you have to improve work towards Hall's conjecture, or work towards gaps between primes! Added Thanks to Pasten's comment, I learned that this problem is already in the literature and is known as Dressler's conjecture. The conditional proof above is recorded in work of Cochrane and Dressler who give more information on the conjecture.
|
{
"source": [
"https://mathoverflow.net/questions/287011",
"https://mathoverflow.net",
"https://mathoverflow.net/users/117699/"
]
}
|
287,109 |
I know that $\cos(\pi/n)$ is a root of the Chebyshev polynomial $(T_n + 1)$, in fact it is the largest root of that polynomial, but often that polynomial factors. For example, if $n = 2 k$ then $\cos(\pi/n)$ is the largest root of $T_k$, which is a polynomial of lower degree, and if $n = 3$ then $\cos(\pi/n)$ is a root of $2 x - 1$, again lower degree than $T_3 + 1$. How can I compute, for a given $n$, a polynomial in $\mathbb{Q}[x]$ of minimal degree that $\cos(\pi/n)$ is a root of?
|
The paper "The minimal polynomial of $\cos(2\pi/n)$" by William Watkins and Joel Zeitlin (The American Mathematical Monthly, Vol. 100, No. 5 (May, 1993), pp. 471-474) treats this matter in full clarity (just take their result for even $n$ to resolve your case).
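For readers who just want the polynomial for a specific $n$, a small sketch along these lines (assuming sympy is available) computes it directly and cross-checks the degree $\varphi(2n)/2$ predicted by the Watkins-Zeitlin result, using $\cos(\pi/n)=\cos(2\pi/(2n))$:

```python
from sympy import Symbol, cos, pi, minimal_polynomial, totient

x = Symbol('x')
for n in range(2, 13):
    p = minimal_polynomial(cos(pi / n), x)
    # Watkins-Zeitlin: the minimal polynomial of cos(2*pi/m) has degree phi(m)/2 for m >= 3,
    # and cos(pi/n) = cos(2*pi/(2n)).
    assert p.as_poly(x).degree() == totient(2 * n) // 2
    print(n, p)
```

For example, $n=3$ returns $2x-1$ and $n=4$ returns $2x^2-1$, matching the cases mentioned in the question.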
|
{
"source": [
"https://mathoverflow.net/questions/287109",
"https://mathoverflow.net",
"https://mathoverflow.net/users/3319/"
]
}
|
287,608 |
I've heard about two ways mathematicians describe Feynman diagrams: They can be seen as "string diagrams" describing various types of arrows (and/or composition operations on them) in a monoidal closed category. They are combinatorial tools that allow one to give formulas for the asymptotic expansion of integrals of the form: $$ \int_{\mathbb{R}^n} g(x) e^{-S(x)/\hbar}\,dx $$ when $\hbar \rightarrow 0$ in terms of asymptotic expansions for $g$ and $S$ around $0$ (with $S$ having a unique minimum at $0$ and increasing quickly enough at $\infty$ and often with a very simple $g$, like a product of linear forms), as well as some variation of this idea, or for the slightly more subtle ``oscillating integral'' version of it, with $e^{-i S(x)/\hbar}$. My question is: is there a relation between the two? I guess what I would like to see is a "high level" proof of the kind of formula we get in the second point in terms of monoidal categories which explains the link between the terms appearing in the expansion and arrows in a monoidal category... But maybe there is another way to understand it...
|
If I understand the question correctly, the search is for a calculation of the asymptotic expansion of Gaussian integrals using concepts and techniques from category theory. Here is one such calculation: Feynman diagrams via graphical calculus (2001) There is a very close connection between the graphical formalism for
ribbon categories and Feynman diagrams. Although this correspondence
is frequently implied, we know of no systematic exposition
in existing literature; the aim of this paper is to provide such an
account. In particular, in deriving Feynman diagrams expansion of
Gaussian integrals as an application of the graphical formalism for
symmetric monoidal categories, we discuss in detail how different
kinds of interactions give rise to different families of graphs and
show how symmetric and cyclic interactions lead to “ordinary” and
“ribbon” graphs respectively.
|
{
"source": [
"https://mathoverflow.net/questions/287608",
"https://mathoverflow.net",
"https://mathoverflow.net/users/22131/"
]
}
|
287,917 |
Historically, the current "standard" set of chess pieces wasn't the only existing alternative or even the standard one. For instance, the famous Al-Suli's Diamond Problem (which remained open for more than one millennium before getting solved by Grandmaster Yuri Averbakh ) was formulated in an ancient Persian variant of chess, called Shatranj , using a fairy chess piece , called Wazir (Persian: counsellor), rather than the conventional queen. There is a long-standing discussion amongst chess players concerning the best possible configuration of chess pieces which makes the game more exciting and complicated. Also, one might be interested in knowing whether, in a fixed position on the infinitary chessboard, the game value could be changed into an arbitrary ordinal merely by replacing the pieces with new (possibly unconventional) ones rather than changing their positions. In order to address such questions one first needs to have a reasonable mathematical definition of the notion of a "chess piece" in hand. Maybe a promising approach inspired by Rook , Knight , and King's graphs is to simply consider a chess piece a graph which satisfies certain properties. Though, due to the different nature of all "reasonable" chess pieces, it seems a little bit hard to find principles which unify all of them into one single "neat" definition. For example, some pieces can move only in one direction, some others can jump out of the barriers, some have a/an finite/infinite range, some can only move among positions of a certain color, etc. Here the following question arises: Question. What are examples of mathematical papers (or unpublished notes) which present an abstract mathematical definition of a chess piece? Is such a definition unique or there are several variants? Update 1. In view of Todd and Terry's comments ( here and here ), it seems a more generalized question could be of some interest. The problem simply is to formulate an abstract mathematical definition of a "game piece" in general. Are there any references addressing such a problem? Update 2. As a continuation of this line of thought, Joel has asked the following question as well: When is a game tree the game tree of a board game?
|
In terms of mathematical analysis and combinatorial game theory,
the essence of any game is captured by its game tree, the tree
whose nodes represent the current game state, and to make a move in
the game is to move from a node in this tree to a child node.
Terminal nodes are labeled as a win for one player or the other, or
a draw (and in the case of infinite games, the winner is determined
by consulting the set of winning plays, which in a sense defines
the game). In chess, the current game state is not merely a description of
what is on the board, for one must also know whose turn it is and
also a little about the history of the play, in order to determine
whether castling or en passant is allowed or to determine draws by
repetition or the 50-move rule. Once one has the game-tree perspective, the concept of chess pieces
tends to fall away, and one might look upon the concept of a chess
piece as epi-phenomenal to the actual game, a convenient way to describe the game tree: strategic
considerations concern at bottom only the game tree, not pieces. In the case of chess, for example, the computer chess programs are
definitely analyzing and searching the game tree. You ask for references, and any text in combinatorial game theory
will discuss the game tree and prove what I call the fundamental
theorem of finite games. Fundamental theorem of finite games . (Zermelo, 1913) In any finite
game, one of the players has a winning strategy or both players
have drawing strategies. (Zermelo's actual result was something a little different than this; see the comments below and the interesting paper, Schwalbe and Walker, Zermelo and the early history of game theory .) This theorem is generalized by the Gale-Stewart theorem (1953), which
shows also that every open game is determined, and this is
generalized to Borel determinacy and more, and one then enters a
realm of sophisticated results in set theory. Let me mention an example showing how two games can look very
different in terms of how they are played, yet at bottom be
essentially the same game, with isomorphic game trees. Consider the game 15 , in which players take turns to select
distinct numbers from the numbers 1, 2, ..., 9. Once one player
takes a number, it is no longer available to the other player.
Whichever player can make 15 as the sum of three distinct numbers
is the winner. Please give the game a try! After a while, the game might begin to be familiar, for we can
realize that it is exactly the same game as tic-tac-toe, as can be
seen via the following magic square. $$\begin{array}{ccc}
8 & 1 & 6 \\
3 & 5 & 7 \\
4 & 9 & 2 \\
\end{array}$$ At the MoMath museum in New York, they have this game set up with a
two-sided display. On one side, for the parents, you see only the
numbers in a row. On the other side, for the kids, you see the
tic-tac-toe arrangement. How amazed the parents are to be beaten
soundly by their kids — all the kids are geniuses! My point with this is that game of 15 and the game of tic-tac-toe are essentially identical as games, yet in tic-tac-toe there is directly no concept of number or selecting a number, and in 15 there is directly no concept of a corner square or center square. The nature of the number 5 in 15 is game-theoretically similar to the nature of the center square in tic-tac-toe, and this is revealed by the fact that the game trees are isomorphic. Chess pieces are like that.
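The correspondence can even be checked mechanically: the three-element subsets of $\{1,\dots,9\}$ summing to $15$ are exactly the eight lines of the magic square above. A tiny sketch in plain Python (no dependencies) illustrating this:

```python
from itertools import combinations

# All 3-element subsets of {1,...,9} summing to 15: the winning sets of the game "15".
wins_15 = {frozenset(t) for t in combinations(range(1, 10), 3) if sum(t) == 15}

# The magic square from the answer, row by row, and the 8 tic-tac-toe lines.
square = [8, 1, 6,
          3, 5, 7,
          4, 9, 2]
lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8),      # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),      # columns
         (0, 4, 8), (2, 4, 6)]                 # diagonals
wins_ttt = {frozenset(square[i] for i in line) for line in lines}

print(len(wins_15), wins_15 == wins_ttt)   # 8 True
```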
|
{
"source": [
"https://mathoverflow.net/questions/287917",
"https://mathoverflow.net",
"https://mathoverflow.net/users/82843/"
]
}
|
287,947 |
For example, $\sqrt 2 = 2 \cos (\pi/4)$, $\sqrt 3 = 2 \cos(\pi/6)$, and $\sqrt 5 = 4 \cos(\pi/5) + 1$. Is it true that any integer's square root can be expressed as a (rational) linear combinations of the cosines of rational multiples of $\pi$? Products of linear combinations of cosines of rational multiples of $\pi$ are themselves such linear combinations, so it only needs to be true of primes. But I do not know, for example, a representation of $\sqrt 7$ in this form.
|
Someone should actually record the formula. If $p$ is a prime $\equiv 1 \bmod 4$, then
$$\sqrt{p} = \sum_{k=1}^{p-1} \left( \frac{k}{p} \right) \cos \frac{2 k \pi}{p}$$
where $\left( \tfrac{k}{p} \right)$ is the quadratic residue symbol. Note that $\left( \tfrac{k}{p} \right) = \left( \tfrac{p-k}{p} \right)$, so every term appears twice. Similarly, if $p \equiv 3 \bmod 4$, then
$$\sqrt{p} = \sum_{k=1}^{p-1} \left( \frac{k}{p} \right) \sin \frac{2 k \pi}{p}.$$
Again, $k$ and $p-k$ make the same contribution. These are usually both written together as
$$\sqrt{(-1)^{(p-1)/2} p} = \sum_{k=1}^{p-1} \left( \frac{k}{p} \right) \exp \frac{2 k \pi i}{p}.$$
This is a formula of Gauss .
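As a quick numerical illustration of the formula (a sketch only, computing the Legendre symbol via Euler's criterion and testing a handful of small primes):

```python
from math import cos, sin, pi, sqrt, isclose

def legendre(k, p):
    """Legendre symbol (k/p) for an odd prime p, via Euler's criterion."""
    r = pow(k, (p - 1) // 2, p)
    return -1 if r == p - 1 else r

for p in [5, 13, 17, 29, 7, 11, 19, 23, 31]:
    if p % 4 == 1:
        s = sum(legendre(k, p) * cos(2 * pi * k / p) for k in range(1, p))
    else:
        s = sum(legendre(k, p) * sin(2 * pi * k / p) for k in range(1, p))
    assert isclose(s, sqrt(p), rel_tol=1e-9), (p, s)
print("quadratic Gauss sum formula checked numerically")
```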
|
{
"source": [
"https://mathoverflow.net/questions/287947",
"https://mathoverflow.net",
"https://mathoverflow.net/users/3319/"
]
}
|
288,052 |
So I am inspired by unitary matrices which preserve the $\ell^2$-norm of all vectors, so in particular the unit norm vectors. But then I saw that the $\ell^1$-norm of probability vectors is preserved by matrices whose columns are probability vectors. And this got me thinking: But what are the matrices preserving the $\ell^1$-norm of arbitrary real unit $\ell^1$-norm vectors? So basically we extend a probability vector to also allow a sign, but ignoring the signs, this should still be a probability vector; and then we ask for the corresponding structure-preserving matrices. It is already clear that the columns of such a matrix should be this 'extended' kind of probability vector, because we can multiply the matrix with a standard basis vector which has $\ell^1$-norm 1. But not all of such matrices preserve this, take for example $$ M = \frac{1}{2} \left(\begin{matrix} 1 & 1\\ 1 & -1 \end{matrix}\right) $$ and $$ x = \left( \begin{matrix} 0.3 \\ -0.7 \end{matrix} \right) $$ Then we have $$ Mx = \left(\begin{matrix} -0.2 \\ 0.5 \end{matrix}\right) $$ which fails the test.
|
As pointed out by YCor in the comments, the following theorem is true: Theorem 1 Let $p \in [1,\infty] \setminus \{2\}$. If a matrix $A \in \mathbb{R}^{n \times n}$ is an isometry on $\mathbb{R}^n$ with respect to the $p$-norm, then $A$ is a signed permutation matrix, i.e. a permutation matrix where some of the ones are replaced with $-1$. For the proof, first note that the case $p = \infty$ follows from $p = 1$ by duality, so we only have to show the theorem for $p \in [1,\infty) \setminus \{2\}$. Now we use the following lemma: Lemma 2 Let $p \in [1,\infty) \setminus \{2\}$ and let $(\Omega_1,\mu_1)$ and $(\Omega_2,\mu_2)$ be two measure spaces. If $T: L^p(\Omega_1,\mu_1) \to L^p(\Omega_2,\mu_2)$ is an isometric linear mapping, then $T$ is disjointness preserving, i.e. for all $f,g \in L^p(\Omega_1,\mu_1)$ which fulfil $fg = 0$, we also have $(Tf)(Tg) = 0$. In a more general form, this lemma goes originally back to Lamperti ("On the isometries of certain function spaces", Pacific J. Math. 8 (1958), 459–466.). A very clear proof of the lemma in the above form can be found in Lemma 4.2.2 of S. Fackler's PhD dissertation (DOI: 10.18725/OPARU-3268). If we apply Lemma 2 to $L^p(\Omega_1,\mu_1) = L^p(\Omega_2,\mu_2) = \mathbb{R}^n$, it follows that every matrix $A \in \mathbb{R}^{n \times n}$ which is isometric with respect to the $p$-norm is automatically disjointness preserving. Hence, every row of $A$ contains exactly one non-zero entry. Since $A$ is invertible, this implies that every column of $A$ also contains exactly one non-zero entry. Thus, $A$ is of the form $A = DP$, where $P$ is a permutation matrix and $D$ is a diagonal matrix. Using again that $A$ is isometric, we can see that $D$ can only have the numbers $1$ and $-1$ on its diagonal. Remarks: (a) Lemma 2 is of course quite general compared to the finite dimensional question. However, I don't think that a finite dimensional version of Lemma 2 is easier to prove. (b) Using Lemma 2 above, one can also obtain a description of isometries on general $L^p$-spaces; see Theorem 3.1 in Lamperti's article quoted above.
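To see Theorem 1 at work in the finite-dimensional setting of the question, here is a small numerical illustration (assuming numpy; the dimension and seed are arbitrary): a random signed permutation matrix preserves the $\ell^1$-norm of every test vector, while the matrix $M$ from the question, whose columns do have unit $\ell^1$-norm, does not.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6

# A random signed permutation matrix: permuted rows of the identity times a +-1 diagonal.
A = np.eye(n)[rng.permutation(n)] @ np.diag(rng.choice([-1.0, 1.0], size=n))

for _ in range(1000):
    x = rng.normal(size=n)
    assert np.isclose(np.abs(A @ x).sum(), np.abs(x).sum())

# The matrix from the question: unit-l1-norm columns, but not an isometry.
M = 0.5 * np.array([[1.0, 1.0], [1.0, -1.0]])
x = np.array([0.3, -0.7])
print(np.abs(x).sum(), np.abs(M @ x).sum())   # 1.0 0.7
```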
|
{
"source": [
"https://mathoverflow.net/questions/288052",
"https://mathoverflow.net",
"https://mathoverflow.net/users/113001/"
]
}
|
288,259 |
Enumerate the rationals as $b_1,b_2,\dots$ and define the (set) function:
$$f(x) = (x-b_1)^2 + (x-b_1)^2(x-b_2)^2 + \dots.$$
At any particular $x$, only finitely many terms are non zero so this is perfectly well defined as a (set) function but surely, it is not equal to any polynomial! (or is it?) How do I show that there is no polynomial $p(t) \in \mathbb Q[t]$ such that $p(x) = f(x)$ for all $x \in \mathbb Q$? If $f(x)$ were defined as $(x-b_1) + (x-b_1)(x-b_2) + \dots$, then this question is not so hard. If $p(x)$ has degree $n$, then testing on $b_1,\dots,b_n$ would show that $p(x)$ is necessarily $(x-b_1) + (x-b_1)(x-b_2) + \dots + (x-b_1)\dots(x-b_n)$ but then $x=b_{n+1}$ derives a contradiction. I don't know how to adapt this approach. Trying to guess the polynomial seems hard even if we think $p(x)$ is degree $1$. I posted a follow up to this question here: (Variation of an old question) Are these functions polynomials? .
|
For each positive integer $n$ and any rational $x$, we have $$f(x)\geq (x-b_1)^2(x-b_2)^2\dots(x-b_n)^2.$$
For large $x$, we then have $f(x)\gg x^{2n}$, which implies that if $f$ is a polynomial, it must have degree $\geq 2n$. Since $n$ is arbitrary, no polynomial can agree with $f$ on all rationals.
|
{
"source": [
"https://mathoverflow.net/questions/288259",
"https://mathoverflow.net",
"https://mathoverflow.net/users/58001/"
]
}
|
288,410 |
Quoting his Wikipedia page ( current revision ): compiled nearly 3,900 results Nearly all his claims have now been proven correct Which of his claims have been disproven, can any insight be gained from the mistakes of this genius?
|
Hardy wrote some things about this, as I learned when writing this blog post . Here is a mistake which was even featured in the Ramanujan movie: in his letters to Hardy, Ramanujan claimed to have found an exact formula for the prime counting function $\pi(n)$, but (in Hardy's words) Ramanujan’s theory of primes was vitiated by his ignorance of the theory of functions of a complex variable. It was (so to say) what the theory might be if the Zeta-function had no complex zeros. His method depended upon a wholesale use of divergent series… That his proofs should have been invalid was only to be expected. But the mistakes went deeper than that, and many of the actual results were false. He had obtained the dominant terms of the classical formulae, although by invalid methods; but none of them are such close approximations as he supposed. Based on the second sentence in particular it sounds like what happened, although I haven't checked, was that Ramanujan's formula was the explicit formula but missing the contribution from the complex zeroes of the zeta function.
|
{
"source": [
"https://mathoverflow.net/questions/288410",
"https://mathoverflow.net",
"https://mathoverflow.net/users/118526/"
]
}
|
288,723 |
In his Midrasha Mathematicae lectures ("In Search of Ultimate $L$", BSL 23 [2017]: 1–109), Woodin notes that $V = \textit{Ultimate }L$ implies $\textrm{CH}$ (Theorem 7.26, p.103). Is it known whether $V = \textit{Ultimate }L$ implies $\textrm{GCH}$?
|
In his slide Absolutely ordinal definable sets John Steel writes: At the same time, one hopes that V = ultimate L will yield a detailed fine structure theory for V, removing the incompleteness that large cardinal hypotheses by themselves can never remove. It is known that V = ultimate L implies the CH, and many instances of the GCH. Whether it implies the full GCH is a crucial open problem
|
{
"source": [
"https://mathoverflow.net/questions/288723",
"https://mathoverflow.net",
"https://mathoverflow.net/users/91635/"
]
}
|
289,031 |
Stably, phantom maps (nonzero maps which are zero on homotopy) exist, but it's not known if they exist between finite complexes (Freyd's Generating Hypothesis). Unstably, it's easy to find maps which are the same on homotopy but not homotopic (even between finite complexes), however I don't know an example of a map which is the identity on homotopy but not homotopic to the identity. I presume I could take $\Omega^\infty(1+f)$ where $f$ is a known stable phantom map -- and I would be interested to see the details worked out -- but known stable phantom maps seem to hinge crucially on $\varprojlim^1$ issues, and so are "inherently infinitary". I'd like to see an unstable example which is not "inherently infinitary"; concretely it would be nice to see an example on a finite complex, but I'm not wedded to this interpretation of "not inherently infinitary". To sum up: Question: What is an example of a self-map $f: X \to X$ of a (pointed, connected) CW complex which is not homotopic to the identity but such that $\pi_\ast(f): \pi_\ast(X) \to \pi_\ast(X)$ is the identity? Bonus points if $X$ is a finite complex. An equivalent question is: what's an example of a space $X$ such that $Aut(X)$ doesn't act faithfully on $\pi_\ast(X)$, where $Aut(X)$ is the group of homotopy classes of self-homotopy-equivalences of $X$? Another way of putting this is: Whitehead's theorem tells us that the functor $\pi_\ast$ reflects isomorphisms, but does it reflect identities , i.e. is it faithful with respect to isomorphisms? For that matter, I'm having trouble coming up with a space $X$ such that $Aut(X)$ doesn't act faithfully on its homology , either.
|
Pick a degree $1$ map $h: T^3 \to S^3$ from the $3$-torus to the sphere and define
$$f: T^3 \times S^3 \to T^3 \times S^3; \; f(x,y):=(x, yh(x)).$$
This map induces the identity on homotopy groups, but not on homology.
|
{
"source": [
"https://mathoverflow.net/questions/289031",
"https://mathoverflow.net",
"https://mathoverflow.net/users/2362/"
]
}
|
289,042 |
Is there any global constant $C$ such that $$C<\frac{\sum_{i=1}^{n}x_{i}\log x_{i}-x_{i}+(1-x_{i})\log(1-x_{i})}{\sum_{i=1}^{n}x_{i}}+\log(\sum_{i=1}^{n}x_{i})-\log(\sum_{i=1}^{n}x_{i}^{2})$$ for all vectors $x\in\mathbb{R}^n$ such that $0<x_i<1$ for all $i$? The main sticking point seems to be that the last term is not convex, so I can't just fix $\sum_i x_i$ and apply symmetry.
|
Pick a degree $1$ map $h: T^3 \to S^3$ from the $3$-torus to the sphere and define
$$f: T^3 \times S^3 \to T^3 \times S^3; \; f(x,y):=(x, yh(x)).$$
This map induces the identity on homotopy groups, but not on homology.
|
{
"source": [
"https://mathoverflow.net/questions/289042",
"https://mathoverflow.net",
"https://mathoverflow.net/users/70190/"
]
}
|
289,052 |
Let $A$ be a ring. Recall that a module $M$ is called reflexive in case the canonical evaluation map $f_M : M \rightarrow M^{**}$ is an isomorphism. Here $(-)^{*}$ denotes the functor $Hom_A(-,A)$ . In the following I restrict to Artin algebras, but the questions are interesting for more general rings. Questions: What are the Artin algebras $A$ such that every simple $A$ -module is reflexive? Is every such algebra selfinjective? In the answers below you find good evidence that the answer to 2. might be yes. Being reflexive implies being a 2. syzygy module, so I would guess that 2. is correct. In https://www.sciencedirect.com/science/article/pii/002186938590198X a positive answer was given in case $A$ has additionally dominant dimension at least two.
In fact one can prove that $A$ with dominant dimension at least one and all simple modules reflexive implies that $A$ must be selfinjective:
Being reflexive implies being a 2. syzygy module and thus every simple module is a submodule of a projective module, which is equivalent to having dominant dimension at least one for algebras with dominant dimension at least one. But the socle of an indecomposable injective non-projective module has dominant dimension zero. Thus $A$ has to be selfinjective.
|
Pick a degree $1$ map $h: T^3 \to S^3$ from the $3$-torus to the sphere and define
$$f: T^3 \times S^3 \to T^3 \times S^3; \; f(x,y):=(x, yh(x)).$$
This map induces the identity on homotopy groups, but not on homology.
|
{
"source": [
"https://mathoverflow.net/questions/289052",
"https://mathoverflow.net",
"https://mathoverflow.net/users/61949/"
]
}
|
289,084 |
Is 47 the largest number which has a unique partition into five parts (15, 10, 10, 6, 6), no two of which are relatively prime?
|
Yes. Suppose $n>47$. If $2\mid n$, we can take $(n-8,2,2,2,2),(n-10,4,2,2,2)$, which are distinct partitions for $n\geq 14$. If $3\mid n$, we can take $(n-12,3,3,3,3),(n-15,6,3,3,3)$, which are distinct partitions for $n\geq 21$. If $n\equiv 1\pmod 6$, we can take $(n-37,15,10,6,6),(n-43,15,12,10,6)$, which are distinct partitions for $n\geq 55$, and $(21,7,7,7,7),(14,14,7,7,7)$ for $n=49$. If $n\equiv 5\pmod 6$, we can take $(n-41,15,10,10,6),(n-47,15,12,10,10)$, which are distinct partitions for $n\geq 59$, and $(20,15,6,6,6),(15,12,10,10,6)$ for $n=53$. (Thanks to Gerhard for pointing out we can finish the argument quickly from what I wrote before)
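A brute-force check of the claim up to a modest bound is also easy to script; the sketch below (plain Python, bound 120 chosen arbitrarily) lists every $n$ admitting exactly one such partition, and the largest value it prints is $47$ with the partition $(15,10,10,6,6)$.

```python
from math import gcd
from itertools import combinations

def good_partitions(n):
    """Partitions of n into 5 parts, no two of which are coprime (so every part is >= 2)."""
    found = []
    for a in range(2, n - 7):
        for b in range(2, min(a, n - a - 5) + 1):
            for c in range(2, min(b, n - a - b - 3) + 1):
                for d in range(2, min(c, n - a - b - c - 1) + 1):
                    e = n - a - b - c - d
                    if 2 <= e <= d:
                        parts = (a, b, c, d, e)
                        if all(gcd(x, y) > 1 for x, y in combinations(parts, 2)):
                            found.append(parts)
    return found

for n in range(10, 121):
    ps = good_partitions(n)
    if len(ps) == 1:
        print(n, ps[0])   # expected: the largest n printed is 47 (15, 10, 10, 6, 6)
```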
|
{
"source": [
"https://mathoverflow.net/questions/289084",
"https://mathoverflow.net",
"https://mathoverflow.net/users/60732/"
]
}
|
289,259 |
I'm currently a young, not-so-young mathematician, finishing its second postdoc. I developed an interest for rather different topics in the last few years but constantly, slowly converged towards something that has to do with (but at this point I'm quite unsure is ) category theory and its applications. What motivated me in the study of mathematics back in the days was the desire to understand the mechanisms ruling algebraic topology; then the word "functor" came in, and I fell into the rabbit-hole. At this point most of you would expect I'm not unsatisfied with the shape category theory has nowadays: isn't a blurry mixture of homotopy theory and category theory precisely what I'm tackling? I'm instead profoundly disappointed by the drift that categorical thinking has taken in the last ten years or so. And this is because the more I dwell into what "higher category theory" and "formal homotopy theory" became, the less I like both of them (I will somewhat refer as both with the portmanteau term "HTT"; I hereby stress that this acronym has no particular meaning whatsoever): It is still absolutely unclear what good is HTT for category theorists. To my eye, it is certainly a masterpiece of applied mathematics (in the sense that its tasks rest on the use of conceptualization as a tool, not as a target), but it doesn't seem to add a single grain of sand to the sea of category theory; instead, it re-does all the things you need to know to behave "as if" your homotopy-things were things , or to compactly bookkeep an infinite amount of data into a finite amount of space. These are honest practical motivations, addressed in a way I'm unable to judge; what I am able to judge, is the impact this impressive amount of material is having on category theory intended not as a part of mathematics, but as a way to look at mathematics from the outside. I feel this impact is near to zero. Not to mention that to my eye you do category theory only the australian way; everyone else is applying category theory towards the solution of a specific mathematical problem. And yet, I couldn't think of two more distant languages than Australian CT and HTT; what's wrong with me? What's wrong with the community? Sure there have been attempts to circumvent this ; I feel this is a beginning, and somehow the first example of HTT done by truly categorical means. But in the end, you open and read these papers, only to find that you still need to know simplicial sets and homotopy theory and the lingo of topologists. This is not what I'm after. When you use HTT, you are not providing a foundation for (higher) category theory; instead, you are relying quite heavily on the structure of a single category (simplicial sets), and on its quite complicated combinatorics. I am perplexed by the naivety of people that believe HTT can serve as a foundation for higher category theory; I am frightened by the fact that these people seems to be satisfied by what they have. So should I? Or shall I look for more? And where? Struggling with the books I had, I haven't been able to find a single convincing word about neither of these terms (foundation, category, theory). Again, it seems that HTT is a framework to perform computations (be them in stable homotopy theory or intersection theory or something else), instead than a language explaining the profound reason why you already know what things intimately are (this is what category theory does, to me). 
It is also quite schizophrenic that HTT exhibits the double nature of a device taking (almost all) category theory for granted, and at the same time it wants to rebuild it from scratch. Do I have to already know this stuff, to learn this stuff? There's a rather deep asymmetry between category theory and homotopy theory: these two fields, although intimately linked, live different planets when it comes to outreach and learning. By its very nature, categorical thinking is trivial; there are few things to prove, and all of them are done with the same toolset, and instead there's an extreme effort in carving deep definitions that can turn into milestones of thought (I take "elementary topos" as an example of such a definition). On the contrary, homotopy theory is a scattered set of results, fragmented in a cloud of subfields, speaking different dialects; every proof is technically a mess, uses ad-hoc ideas, complicated constructions, forces to re-learn things from scratch... in a few words, there is no Bourbaki for algebraic topology [edit: now I know there is one, but it's evidently insufficient]. This double nature entails that there's no way to learn HTT if you (like me) are not so acquainted with the use of concrete and painful arguments; in a few words, if you are not a good enough mathematician. The complexity of techniques you are requested to master is daunting and leaves outside some beginners, as well as some people caught at the wrong time in their formation process. Sure, the situation is changing; but it's doing it slowly, too slow to perceive a real change in the pace, or in the sensibility, or in the sense of priority of the community. Until now, every single attempt I made to enter the field failed in the most painful way. I feel there's no way I can understand fragmented, uncanny arguments like those. The few I can follow, I'd be absolutely unable to repeat, or reshape to prove something I need: they simply lie outside the language I'm comfortable with. Every time I have to check whether something is true, I have absolutely no clue how to operate, apart from pretending that what I do happens in/for a 1-category. And this disability is not conceptual, it is utterly practical, and seemingly unsolvable. Learning HTT requires to abandon categorical thinking from time to time; you are forced to show that something is true in a specific model , using a rather specific and particular technique, without relying on completely formal arguments. It is an unsatisfying, poor language from the point of view of a category theorist and people seem to avoid tackling foundations to do geometry and topology. Which is fine, but not my cup of tea. It is at this point extremely likely that, by lack of ability, or simply because I can't recognize myself in (the absence of) their philosophy, I won't be part of the crew of people that will be remembered for their contributions to higher category theory. What shall I do then? The echo-chamber where I live in seems to suggest a "love or leave it" approach, without any space for people that couldn't care less about chromatic homotopy theory, algebraic geometry, differential geometry, deformation theory... So, what shall I do? I can list a few answers, all equally frightening: Settle down, learn my lesson, and fake to be a real mathematician, even though I know barely anything about the above mentioned homotopy theory, algebraic geometry, differential geometry, deformation theory? 
To a certain extent, it is working: my thesis received surprisingly positive reports, I happen to be able to maintain a position, even though scattered and temporary. But I'm also full of discomfort; I fear that my nature is preventing me from becoming a good mathematician; I am unsatisfied and I feel I'm denying my true self. What's worse: I feel I have to deny it, posting this rant with a throwaway account, because the ideas I proposed here are unpopular and could cost my academic life. Shall I quit mathematics, since at this point there's no time to learn something new (I have to employ my time writing to avoid death)? I have to do mathematics with what I have; I feel what I have, what I know at the deep level I want, is barely nothing. And I can't use things I don't know, that's the rule. Shall I face the fact that I've been defeated in my deepest desire, becoming exactly the kind of mathematician (and human being) I've always hated, the one who uses a theorem like a black box and makes guesses about things he ignores the true meaning of? But mathematics works this way: there is no point in knowing that something is true, until you ignore why it's true. Following a quite common idea among category theorists, I would like to go further, knowing why something is trivial . I don't want to know a definition, I want to know why that definition is the only possible way to speak about the definiendum. And if it's not, I want to be aware of the totality of such ways: does this totality carry a structure? The presence/absence of it have a meaning? Is there a totality of totalities, and how it behaves? When I first approached HTT I thought that answering these very questions was its main task. You can see how deeply I'm disappointed. And you can see the source of my sense of defeat: I feel stupid, way more limited, distracted from learning technicalities, way more than people that do not tackle this search for an absolute meaning. Younger than me, many colleagues began studying HTT, rapidly reaching a certain command of the basic words and subsequently began producing mathematics out of this command. To them, category theory is just another piece of mathematics, not different from another (maybe more beautiful); you do your exercises, learn to prove theorems, that's it. To me, category theory is the only satisfying way to think. Am I burdened by this belief to the point that it's preventing me from being a good mathematician? The questions I raised at point 3 do not pertain mathematics; I should do something else. In fact, the only reason why I tried to become a mathematician was that I felt that mathematics is the only correct meaning of the word "philosophy", and the only correct way to pursue it. But turning to philosophy would be, if possible, even a more unfortunate choice: philosophers tend to be silly, ignorant people who claim to be able to explain ethics (=a complicated and elusive task) ignoring linear algebra (=something that shall be the common core of knowledge of every learned person). One of the answers below advises me to "give HTT another try". This is what to do. I've no clue about how , and this is why I'm looking for mathematical help. I can't find a way out of this cul-de-sac: doing new, unpolished mathematics is a social event, but I've lived the years of my PhD isolated and without a precise guidance aside from myself.
|
Higher category theory is, roughly speaking, where category theory meets homotopy coherent mathematics. It is hence relevant to those problems in which categorical structures and homotopy coherent phenomena play a significant role. Many areas of algebraic topology and algebraic geometry have this property. There are also many such areas who don't. From what I understood from your question, you like category theory, but not so much homotopy coherent mathematics. So far I would say you don't actually have any problem, since ordinary category theory itself is not, at least in my opinion, a domain in which homotopy coherent mathematics is crucially needed. This is mainly because the coherence issues that arise in category theory are very low-dimensional, to the extent that it is more cost effective to do them by hand (or simply neglect them), then to use fancy machinery. This leads me to the first possible solution to your problem: Do category theory. It has really not been my impression that this field is anywhere close to finished. This is especially true if you consider 2-category theory as an acceptable extension (here the coherence is again usually simple enough to do by hand). It also has many interactions with domains such as logic, set theory and foundations of mathematics. You will find many interesting discussions of all these topics, as well as links to state-of-the-art research, in the n-category café . It will also not surprise me if you will find people there who share your mathematical taste. If you still maintain, for whatever reason, that it is imperative that you do things related to higher category theory, I can tell you that there are many domains in this topic which are very 1-categorical in flavor. For example, you can Do model category theory. This notion, one of many brilliant ideas of Quillen, allows one to magically reduce homotopy coherence issues into a 1-categorical framework. Model categories also share many of the aesthetic features of ordinary category theory, in the sense that everything seems to fit together very nicely, while still being extremely useful for real world homotopy coherent mathematics. A bit less known, but also very categorical in flavor are derivators . You might also look into triangulated categories. Finally, as many of the comments above suggest, it's possible that the things that you don't like in homotopy coherent mathematics are actually not essential properties of the field, but rather of its young age. You may hence consider to Give HTT another try. In doing so, you may want to take into account the following: I strongly believe that no one has ever written a technical simplex-by-simplex combinatorial proof of an HTT-type result without knowing in advance that what they want to prove is true, and moreover why it is true. This is because, despite the technicality of some proofs, higher categories do behave according to fundamental principles. Sometimes these principles are the same as the 1-categorical case, but sometimes they're different. As a result, it may take a bit of time to acquire a guiding intuition for what should be true and when. It is, nonetheless, certainly doable. I would then suggest that, before reading a given proof, you try to think first why the announced result should be true. In addition, think how you would prove, say, the 1-categorical case, and then try to extend the proof to higher categories dimension by dimension, and see where this leads you. Then read the simplex-by-simplex argument. 
It may suddenly look very clear.
|
{
"source": [
"https://mathoverflow.net/questions/289259",
"https://mathoverflow.net",
"https://mathoverflow.net/users/118946/"
]
}
|
289,303 |
According to this post Intuition for group homology , I wonder what is the intuition for Hochschild homology. The Hochschild homology is defined as the homology of this complex
chain.
Given a ring $A$ and a bimodule $M$.
Define an $A$-module by setting \begin{equation} \displaystyle
\notag C_{n}(A,M)=M\otimes_{k}A^{\otimes n} \end{equation}
and for each $n$ the maps $d_{i}:C_{n}(A,M)\rightarrow C_{n-1}(A,M)$
as follows. Define \begin{equation} \displaystyle d_{0}(m\otimes
a_{1}\otimes\cdots\otimes a_{n})=ma_{1}\otimes
a_{2}\otimes\cdots\otimes a_{n} \end{equation}
and \begin{equation} \displaystyle d_{i}(m\otimes
a_{1}\otimes\cdots\otimes a_{n})=m\otimes a_{1}\otimes\cdots\otimes
a_{i}a_{i+1}\otimes\cdots\otimes a_{n},\quad 1\leq i\leq n-1 \end{equation}
and \begin{equation} \displaystyle d_{n}(m\otimes a_{1}\otimes \cdots
\otimes a_{n})=a_{n}m\otimes a_{1}\otimes\cdots\otimes a_{n-1}. \end{equation}
When $i<j$, one can check that $d_{i}d_{j}=d_{j-1}d_{i}$. We define a
linear operator \begin{equation} \displaystyle
b=\sum_{i=0}^{n}(-1)^{i}d_{i} \end{equation}
and an $ A$-module \begin{equation} \displaystyle
C_{*}(A,M)=\bigoplus_{n\geq 0}C_{n}(A,M). \end{equation}
One can verify that $b:C_{*}(A,M)\rightarrow C_{*}(A,M)$ is a
differential. The differential complex $(C_{*}(A,M),b)$ is called a
Hochschild complex. The homology theory defined by a Hochschild
complex is denoted by $H_{*}(A,M)$ and called a Hochschild homology.
If $M=A$, the Hochschild homology is also denoted by $HH_{*}(A)$.
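As a concrete sanity check on the definition, one can build the matrices of $b$ from the structure constants of a small algebra and verify $b\circ b=0$ numerically in low degrees. Here is a minimal sketch (assuming numpy; the algebra $k[\varepsilon]/(\varepsilon^2)$ with $M=A$ and the degrees checked are arbitrary choices):

```python
import numpy as np
from itertools import product

# Structure constants of A = k[eps]/(eps^2) in the basis (1, eps):
# mult[i, j, :] are the coordinates of e_i * e_j.
dim = 2
mult = np.zeros((dim, dim, dim))
mult[0, 0, 0] = 1.0   # 1 * 1   = 1
mult[0, 1, 1] = 1.0   # 1 * eps = eps
mult[1, 0, 1] = 1.0   # eps * 1 = eps
#                       eps*eps = 0

def index(t):
    """Position of the basis tensor e_{t_0} (x) ... (x) e_{t_k} in the lexicographic basis."""
    r = 0
    for i in t:
        r = r * dim + i
    return r

def boundary(n):
    """Matrix of b : C_n -> C_{n-1}, where C_n = M (x) A^{(x) n} with M = A."""
    B = np.zeros((dim ** n, dim ** (n + 1)))
    for col, t in enumerate(product(range(dim), repeat=n + 1)):   # t = (m, a_1, ..., a_n)
        for i in range(n + 1):
            if i == 0:                      # d_0: (m a_1) (x) a_2 (x) ... (x) a_n
                coeffs = mult[t[0], t[1]]
                make = lambda c, t=t: (c,) + t[2:]
            elif i < n:                     # d_i: m (x) ... (x) (a_i a_{i+1}) (x) ... (x) a_n
                coeffs = mult[t[i], t[i + 1]]
                make = lambda c, i=i, t=t: t[:i] + (c,) + t[i + 2:]
            else:                           # d_n: (a_n m) (x) a_1 (x) ... (x) a_{n-1}
                coeffs = mult[t[n], t[0]]
                make = lambda c, t=t, n=n: (c,) + t[1:n]
            for c in range(dim):
                if coeffs[c]:
                    B[index(make(c)), col] += (-1) ** i * coeffs[c]
    return B

for n in (2, 3):
    assert np.allclose(boundary(n - 1) @ boundary(n), 0)
print("b o b = 0 verified on C_3 -> C_2 -> C_1 and C_2 -> C_1 -> C_0")
```

Swapping in the structure constants of any other finite-dimensional algebra and bimodule gives the same check there.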
|
If you have a right $A$ -module $M_A$ and a left $A$ -module $_AN$ , then you can form their tensor product $$M\otimes_AN:=\operatorname{coker}(M\otimes_kA\otimes_kN\xrightarrow{(m,a,n)\mapsto(ma,n)-(m,an)}M\otimes_kN).$$ There is also a "derived" version of the tensor product, i.e. a chain complex whose homology groups are the $\operatorname{Tor}^i_A(M,N)$ groups, given by $$M\otimes_A^{\mathbb L}N:=\bigoplus_{i=0}^\infty M\otimes_kA^{\otimes_ki}\otimes_kN.$$ Notice that the $i=0$ and $i=1$ terms are exactly those appearing in the definition of the ordinary tensor product. [As noted in the comments, if the base ring $k$ is not a field, then one should be careful with this definition if $M$ , $N$ , and $A$ are not all flat over $k$ .] Now, if you have an $(A,A)$ -bimodule $_AB_A$ , then you can "tensor together the right and left actions of $A$ on $B$ " (recovering the above for $_AB_A={}_AN\otimes_kM_A$ ). The derived version of this is precisely the Hochschild homology chain complex. You can think of $HH_\bullet(A,B)$ as " $B\otimes_A^{\mathbb L}{}$ " written on annular paper so that to the right of $\otimes_A^{\mathbb L}$ is again (the same copy of) $B$ . This makes apparent identities such as $$HH_\bullet(R,M\otimes_S^{\mathbb L}N)=HH_\bullet(S,N\otimes_R^{\mathbb L}M)$$ for bimodules $_RM_S$ and $_SN_R$ . It also means that $HH_\bullet(A,A)$ has a "circle action" in a homotopical sense.
|
{
"source": [
"https://mathoverflow.net/questions/289303",
"https://mathoverflow.net",
"https://mathoverflow.net/users/19072/"
]
}
|
289,402 |
In Wikipedia's page for Bertrand's postulate , it is said that its (2n,3n) version was proved by El Bachraoui in 2006. Seems likely that it was first proved way before than that! Can anyone point to the first source, or at least to a previous one? Analogously, Wikipedia said until recently that the (3n,4n) version was due to Andy Loo in 2011. I'm aware of a proof by Denis Hanson in 1973, so I have updated the page with that info, but I don't know if his proof is the first one. Do you know of previous proofs?
|
I have finally found the following papers and results, which predate Nagura's paper of 1952. I cite them from newest to oldest: (Molsen, 1941): For $n\geq 118$ there are primes in $(n,\frac43n)$ congruent to 1,5,7,11 modulo 12. For $n\geq 199$ there are primes in $(n,\frac87n)$ congruent to 1,2 modulo 3. This result implies that of Nagura. K. Molsen, Zur Verallgemeinerung des Bertrandschen Postulates , Deutsche Math. 6 (1941), 248-256. MR0017770 (Breusch, 1932): For $n\geq 7$ there are primes in $(n,2n)$ congruent to 1,2 modulo 3 and to 1,3 modulo 4. For $n\geq 48$ there is a prime in $(n,\frac98n)$. This result implies those of Nagura and Molsen (but not the congruences part). R. Breusch, Zur Verallgemeinerung des Bertrandschen Postulates, dass zwischen x und 2x stets Primzahlen liegen , Math. Z. 34 (1932), 505–526. MR1545270 (Schur, 1929, according to Breusch in the previous paper): For $n\geq 24$ there is a prime in $(n,\frac54n)$. This result already implies those of Hanson and El Bachraoui. I. Schur, Einige Sätze über Primzahlen mit Anwendungen auf Irreduzibilitätsfragen I , Sitzungsberichte der preuss. Akad. d. Wissensch., phys.-math. Klasse 1929, S.128.
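The two quantitative statements that imply the results of Nagura, Hanson and El Bachraoui are easy to spot-check numerically; here is a minimal sketch (assuming sympy; the bound 50000 is arbitrary):

```python
from sympy import nextprime

# Breusch (1932): a prime in (n, 9n/8) for n >= 48;  Schur (1929): a prime in (n, 5n/4) for n >= 24.
for n in range(48, 50000):
    assert 8 * nextprime(n) < 9 * n
for n in range(24, 50000):
    assert 4 * nextprime(n) < 5 * n
print("both statements verified for n < 50000")
```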
|
{
"source": [
"https://mathoverflow.net/questions/289402",
"https://mathoverflow.net",
"https://mathoverflow.net/users/1234/"
]
}
|
289,476 |
The $(\infty, 1)$ category $Sp$ of spectra as defined by Lurie in Higher Algebra has the structure of a symmetric monoidal category. Although I know the definition of symmetric monoidal category in the $(\infty, 1)$ setting and can reasonably follow Lurie's arguments in Higher Algebra as to why $Sp$ has such a structure, I don't understand it well enough to think about it intuitively. So my question is, what does the symmetric monoidal structure on $Sp$ look like? How is this related to the symmetric monoidal structure on symmetric or orthogonal spectra in the ordinary categorical setting? How may I picture ring spectra and other such objects arising from the monoidal structure?
|
Lurie characterizes the symmetric monoidal structure on $\mathsf{Sp}$ by a universal property (HA.4.8.2.19): it is uniquely determined up to a contractible space of choices by the property that $S^0$ is the unit and $\wedge$ commutes with homotopy colimits in both variables. I think on first glance this sounds like one of those formal things that doesn't help you 'compute' anything, but I claim that (i) you can get all the computational mileage out of this that you usually get from model categories of spectra and (ii) if you like, you can easily show that your favorite symmetric monoidal model category of spectra models this symmetric monoidal structure. We'll start with (ii) since that's what you asked about, and it's easier. Suppose you have a model category $\mathbf{M}$ which models spectra (which is something else that can be checked easily from universal properties by its relationship to some model for $\mathsf{Spaces}$, for example). Suppose further that it's a symmetric monoidal model category. Then the underlying $\infty$-category $\mathsf{M}[W^{-1}] \cong \mathsf{Sp}$ inherits a symmetric monoidal structure. To compare it with the 'universal' one, one needs to check that the tensor product commutes with homotopy colimits and that the unit is weakly equivalent to the sphere spectrum. The unit requirement is just always satisfied (otherwise what sort of spectra are you using??) and the homotopy colimit requirement is also always satisfied, because part of the definition of being a symmetric monoidal model category is that $\otimes$ be a left Quillen bifunctor- and that forces the tensor product to commute with homotopy colimits in both variables. Now let's talk about (i). What does it even mean to 'understand' the smash product of spectra? Well, usually the way a spectrum is handed to us is as a sequence of (say, pointed) spaces $(X_k)$ and maps $\Sigma X_k \to X_{k+1}$. Every model ever includes at least that much data, often with all sorts of extra requirements and structure. But in any case, this data presents a spectrum $X = \mathrm{hocolim}_k \Sigma^{-k}\Sigma^{\infty}X_k$. So, from the universal property we learn that $$ X \wedge Y = \underset{j,k}{\mathrm{hocolim}}\, \Sigma^{-j-k}\left( \Sigma^{\infty}X_j \wedge \Sigma^{\infty}Y_k\right). $$ So now we'd better figure out how to compute $\Sigma^{\infty}A \wedge \Sigma^{\infty} B$ for pointed spaces $A$ and $B$. It's possible to argue very abstractly that $\Sigma^{\infty}$ must be symmetric monoidal, using the methods in HA.4.8.2, but you can also argue as follows: first reduce to the unpointed case, so we want to compute $\Sigma^{\infty}_+A \wedge \Sigma^{\infty}_+B$. But $A$ and $B$ are homotopy colimits of constant diagrams shaped like $A$ and $B$, respectively. And $\Sigma^{\infty}_+$ commutes with homotopy colimits. A little string of equivalences and the fact that $\Sigma^{\infty}_+(*) = S^0$ is the unit gives the result. Excellent, so just from nonsense we learn that $\Sigma^{\infty}$ is symmetric monoidal. 
Moving back to our original formula we learn that a 'concrete' computation for the smash product is: $$X \wedge Y = \underset{j,k}{\mathrm{hocolim}}\, \Sigma^{-j-k}\left( \Sigma^{\infty}X_j \wedge \Sigma^{\infty}Y_k\right) = \underset{j,k}{\mathrm{hocolim}}\, \Sigma^{-j-k}\left( \Sigma^{\infty}(X_j \wedge Y_k)\right).\quad (1)$$ If you like, then at this point you could pick your favorite cofinal copy of $\mathbb{N}$ inside $\mathbb{N} \times \mathbb{N}$ and present this smash product as a sequence of spaces with maps from the suspension of one to the next. This way of thinking about the smash product is the very original one, going back to Boardman and Adams. Of course they ran into all sorts of technical issues verifying all the properties they wanted out of a symmetric monoidal structure. What happened to those? Well, when dealing with a symmetric monoidal structure one would like (a) properties and (b) the ability to 'compute' what the thing does. The original approach was to begin with (b) and then work hard to verify (a). In the present situation, one begins with (a) using various levels of sophistication and then deduces (b). Of course, lots of technical work went into producing the symmetric monoidal $\infty$-category $\mathsf{Sp}$! But the work pays off: you get a much stronger theorem and a more flexible theory. Let me go into a bit more detail about comparing with, say, the formula for the smash product one finds for orthogonal spectra. I claim that it is a particular model-categorical presentation of precisely the formula (1). To justify that, I'm going to compare the two formulae directly. So I'll need to review a bit about orthogonal spectra. Recall that an orthogonal spectrum consists of a sequence of pointed spaces $X_n$ equipped with $O(n)$-actions together with compatible, $O(n)\times O(m)$-equivariant, based maps $X_n \wedge S^m \to X_{n+m}$. Given an orthogonal spectrum, one would like to know how to describe the corresponding object in $\mathsf{Sp}$ and how to understand smash products. We'll take the $\infty$-category $\mathsf{Spaces}$ as 'understood' and the functor $\Sigma^{\infty}$ as also understood. Then an orthogonal spectrum $(X_k)$ presents an object of $\mathsf{Sp}$ by the formula $$X = \underset{\mathbb{N}}{\mathrm{hocolim}}\, \Sigma^{-k}\Sigma^{\infty}X_k$$ Of course, we ignored the orthogonal group action. Luckily, here's a fun fact: Fun fact. Let $\mathsf{Orth}$ denote the $\infty$-category of real inner product spaces and isometric embeddings. Then the inclusion $\mathbb{N} \to \mathsf{Orth}$ is homotopy final. It follows that we may compute the homotopy colimit either over $\mathsf{Orth}$ or over $\mathbb{N}$. That's important, because the formula for the smash product of orthogonal spectra is really trying to be a formula for a homotopy colimit over $\mathsf{Orth} \times \mathsf{Orth}$. I won't bother typesetting the formula here (see page 5 of Schwede's book , for example) but unless I'm mistaken one arrives at this formula as follows: Take the formula in (1) and replace the homotopy colimit over $\mathbb{N} \times \mathbb{N}$ by a homotopy colimit over $\mathsf{Orth} \times \mathsf{Orth}$. To compute that homotopy colimit, we are free to first left Kan extend along $\oplus: \mathsf{Orth} \times \mathsf{Orth} \to \mathsf{Orth}$. Stare at the pointwise formula for the left Kan extension evaluated at $\mathbb{R}^n$, and you become interested in a homotopy colimit along all pairs $p+q = n$ together with actions of $O(p) \times O(q)$ etc. 
Now suspend everything n times, erase the $\Sigma^{\infty}$'s, and use one of the standard formulas for computing a homotopy colimit as an ordinary colimit in some model category, and you should get (with cofibrancy conditions if you want the right homotopy type) the previously mentioned formula in the linked book. The same thing works in symmetric spectra, except that the claim about finality is false. Instead, one uses a cofibrancy condition that allows a lemma of Bökstedt to apply so you can replace $\mathbb{N}$ with the category of finite sets and injections.
|
{
"source": [
"https://mathoverflow.net/questions/289476",
"https://mathoverflow.net",
"https://mathoverflow.net/users/101861/"
]
}
|
289,495 |
Assume I need to solve an NP-complete problem, for which problem-specific methods (e.g. efficient heuristics or exponential algorithms faster than naive one) are not well developed. If the size of input is n, then, in theory, I could reduce the problem to SAT of size P(n), where P is some polynomial, and use SAT solvers. Or I could reduce it to other NP-complete problem with well-developed algorithms available. Of course, I would like to use reduction with P(n) being polynomial with as low degree as possible. 1) Is there a (reasonably recent) book/survey/webpage in which I can learn what are the most efficient known reductions from some (as many as possible) NP-complete problems to (say) SAT? 2) I am sure many such reductions has already been implemented, some of them open source. Is there a webpage collecting links to such implementations?
|
Lurie characterizes the symmetric monoidal structure on $\mathsf{Sp}$ by a universal property (HA.4.8.2.19): it is uniquely determined up to a contractible space of choices by the property that $S^0$ is the unit and $\wedge$ commutes with homotopy colimits in both variables. I think on first glance this sounds like one of those formal things that doesn't help you 'compute' anything, but I claim that (i) you can get all the computational mileage out of this that you usually get from model categories of spectra and (ii) if you like, you can easily show that your favorite symmetric monoidal model category of spectra models this symmetric monoidal structure. We'll start with (ii) since that's what you asked about, and it's easier. Suppose you have a model category $\mathbf{M}$ which models spectra (which is something else that can be checked easily from universal properties by its relationship to some model for $\mathsf{Spaces}$, for example). Suppose further that it's a symmetric monoidal model category. Then the underlying $\infty$-category $\mathsf{M}[W^{-1}] \cong \mathsf{Sp}$ inherits a symmetric monoidal structure. To compare it with the 'universal' one, one needs to check that the tensor product commutes with homotopy colimits and that the unit is weakly equivalent to the sphere spectrum. The unit requirement is just always satisfied (otherwise what sort of spectra are you using??) and the homotopy colimit requirement is also always satisfied, because part of the definition of being a symmetric monoidal model category is that $\otimes$ be a left Quillen bifunctor- and that forces the tensor product to commute with homotopy colimits in both variables. Now let's talk about (i). What does it even mean to 'understand' the smash product of spectra? Well, usually the way a spectrum is handed to us is as a sequence of (say, pointed) spaces $(X_k)$ and maps $\Sigma X_k \to X_{k+1}$. Every model ever includes at least that much data, often with all sorts of extra requirements and structure. But in any case, this data presents a spectrum $X = \mathrm{hocolim}_k \Sigma^{-k}\Sigma^{\infty}X_k$. So, from the universal property we learn that $$ X \wedge Y = \underset{j,k}{\mathrm{hocolim}}\, \Sigma^{-j-k}\left( \Sigma^{\infty}X_j \wedge \Sigma^{\infty}Y_k\right). $$ So now we'd better figure out how to compute $\Sigma^{\infty}A \wedge \Sigma^{\infty} B$ for pointed spaces $A$ and $B$. It's possible to argue very abstractly that $\Sigma^{\infty}$ must be symmetric monoidal, using the methods in HA.4.8.2, but you can also argue as follows: first reduce to the unpointed case, so we want to compute $\Sigma^{\infty}_+A \wedge \Sigma^{\infty}_+B$. But $A$ and $B$ are homotopy colimits of constant diagrams shaped like $A$ and $B$, respectively. And $\Sigma^{\infty}_+$ commutes with homotopy colimits. A little string of equivalences and the fact that $\Sigma^{\infty}_+(*) = S^0$ is the unit gives the result. Excellent, so just from nonsense we learn that $\Sigma^{\infty}$ is symmetric monoidal. 
Moving back to our original formula we learn that a 'concrete' computation for the smash product is: $$X \wedge Y = \underset{j,k}{\mathrm{hocolim}}\, \Sigma^{-j-k}\left( \Sigma^{\infty}X_j \wedge \Sigma^{\infty}Y_k\right) = \underset{j,k}{\mathrm{hocolim}}\, \Sigma^{-j-k}\left( \Sigma^{\infty}(X_j \wedge Y_k)\right).\quad (1)$$ If you like, then at this point you could pick your favorite cofinal copy of $\mathbb{N}$ inside $\mathbb{N} \times \mathbb{N}$ and present this smash product as a sequence of spaces with maps from the suspension of one to the next. This way of thinking about the smash product is the very original one, going back to Boardman and Adams. Of course they ran into all sorts of technical issues verifying all the properties they wanted out of a symmetric monoidal structure. What happened to those? Well, when dealing with a symmetric monoidal structure one would like (a) properties and (b) the ability to 'compute' what the thing does. The original approach was to begin with (b) and then work hard to verify (a). In the present situation, one begins with (a) using various levels of sophistication and then deduces (b). Of course, lots of technical work went into producing the symmetric monoidal $\infty$-category $\mathsf{Sp}$! But the work pays off: you get a much stronger theorem and a more flexible theory. Let me go into a bit more detail about comparing with, say, the formula for the smash product one finds for orthogonal spectra. I claim that it is a particular model-categorical presentation of precisely the formula (1). To justify that, I'm going to compare the two formulae directly. So I'll need to review a bit about orthogonal spectra. Recall that an orthogonal spectrum consists of a sequence of pointed spaces $X_n$ equipped with $O(n)$-actions together with compatible, $O(n)\times O(m)$-equivariant, based maps $X_n \wedge S^m \to X_{n+m}$. Given an orthogonal spectrum, one would like to know how to describe the corresponding object in $\mathsf{Sp}$ and how to understand smash products. We'll take the $\infty$-category $\mathsf{Spaces}$ as 'understood' and the functor $\Sigma^{\infty}$ as also understood. Then an orthogonal spectrum $(X_k)$ presents an object of $\mathsf{Sp}$ by the formula $$X = \underset{\mathbb{N}}{\mathrm{hocolim}}\, \Sigma^{-k}\Sigma^{\infty}X_k$$ Of course, we ignored the orthogonal group action. Luckily, here's a fun fact: Fun fact. Let $\mathsf{Orth}$ denote the $\infty$-category of real inner product spaces and isometric embeddings. Then the inclusion $\mathbb{N} \to \mathsf{Orth}$ is homotopy final. It follows that we may compute the homotopy colimit either over $\mathsf{Orth}$ or over $\mathbb{N}$. That's important, because the formula for the smash product of orthogonal spectra is really trying to be a formula for a homotopy colimit over $\mathsf{Orth} \times \mathsf{Orth}$. I won't bother typesetting the formula here (see page 5 of Schwede's book , for example) but unless I'm mistaken one arrives at this formula as follows: Take the formula in (1) and replace the homotopy colimit over $\mathbb{N} \times \mathbb{N}$ by a homotopy colimit over $\mathsf{Orth} \times \mathsf{Orth}$. To compute that homotopy colimit, we are free to first left Kan extend along $\oplus: \mathsf{Orth} \times \mathsf{Orth} \to \mathsf{Orth}$. Stare at the pointwise formula for the left Kan extension evaluated at $\mathbb{R}^n$, and you become interested in a homotopy colimit along all pairs $p+q = n$ together with actions of $O(p) \times O(q)$ etc. 
Now suspend everything n times, erase the $\Sigma^{\infty}$'s, and use one of the standard formulas for computing a homotopy colimit as an ordinary colimit in some model category, and you should get (with cofibrancy conditions if you want the right homotopy type) the previously mentioned formula in the linked book. The same thing works in symmetric spectra, except that the claim about finality is false. Instead, one uses a cofibrancy condition that allows a lemma of Bökstedt to apply so you can replace $\mathbb{N}$ with the category of finite sets and injections.
|
{
"source": [
"https://mathoverflow.net/questions/289495",
"https://mathoverflow.net",
"https://mathoverflow.net/users/31472/"
]
}
|
289,711 |
From a recent answer by Mike Shulman, I read: "HoTT is (among other things) a foundational theory, on roughly the same ontological level as ZFC, whose basic objects can be regarded as $\infty$-groupoids" Now, $\infty$-groupoids are the same thing as spaces, and I have a couple of spaces which I like. Here's one which I like a lot: the Lie group $SU(n)$. Question: How do I say "let $X:=SU(n)$" in HoTT? Here, $n$ can be either a variable, or a definite number (say $3$). Disclaimer: I (unfortunately) don't know much about HoTT. So please be gentle.
|
This isn't easy to do, and the reason it isn't easy is the step "$\infty$-groupoids are the same thing as spaces." Of course the homotopy hypothesis tells you that any $\infty$-groupoid is equivalent to the fundamental $\infty$-groupoid of a space, but that doesn't mean that they're literally the same thing. The way that HoTT approaches $\infty$-groupoids is not by thinking of them as spaces. What's easy in HoTT is to do something like, "consider the $\infty$-groupoid freely generated by a single point and a single 1-path from that point to itself." This $\infty$-groupoid is equivalent to the fundamental $\infty$-groupoid of the classical circle, but we haven't constructed it that way; instead we've constructed it algebraically via generators/relations. Ok, so the first thing you could do is find an "algebraic" description of the fundamental $\infty$-groupoid of $\mathrm{SU}(n)$. Essentially this is giving an explicit CW-decomposition of $\mathrm{SU}(n)$. I'm not sure how hard this is off the top of my head, but one needs to be a bit careful with such an approach to make sure to get something that's just as useful as $\mathrm{SU}(n)$ in practice. For example, as suggested in comments it would be better to construct $B\mathrm{SU}(n)$ instead so that when you take the loop space you know you have a group structure. But you also want to be able to handle things like the relationship between $\mathrm{SU}(n)$ and complex vector bundles, and if your construction is sufficiently messy and computational then it might not be good enough. Instead, you could try to build up the theory of manifolds (or something similar) inside HoTT and then define a notion of fundamental $\infty$-groupoid. If I understand things right, this is something that real cohesive type theory will let you do. But there's a lot you need to do in order to get there: for a first step you need to define $\mathbb{R}$ inside HoTT, which has some tricky points because HoTT is naturally constructive, but the Cauchy reals and the Dedekind reals are different constructively. In HoTT $\infty$-groupoids are foundational native objects, but $\mathbb{R}$ is still just as complicated (well, slightly more complicated) as it was classically. From there you can start trying to define manifolds (or some more general notion of a topological space of similar flavor to manifolds) and then defining the fundamental $\infty$-groupoid of such a space. All of this is done in Mike's paper where he develops a type-theory for topological $\infty$-groupoids. The theory there is developed to the point where he shows that $\{(x,y)\in \mathbb{R}^2: x^2+y^2=1\}$ is something of which you can take the fundamental $\infty$-groupoid (or "shape"), which outputs an $\infty$-groupoid, and that the answer you get is equivalent to the type-theoretic circle. If I understand everything correctly, you should be able to use this to define the shape of any real algebraic manifold (including $\mathrm{SU}(n)$), as well as many other "geometrically" defined topological spaces. In conclusion, HoTT does let you deal with $\infty$-groupoids nicely and directly, but doesn't easily give you access to $\infty$-groupoids that are of a topological nature rather than an algebraic nature. In order to access those you still need to build up real analysis and topology or some replacement for them, which takes a bit of work (as it does classically) and also has a few tricky points due to the constructive nature of HoTT.
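(For concreteness, the "freely generated" $\infty$-groupoid mentioned above is the higher inductive type written in HoTT-book notation with constructors $$\mathsf{base}:S^1,\qquad \mathsf{loop}:\mathsf{base}=_{S^1}\mathsf{base},$$ and the statement about the unit circle is that its shape is equivalent to this $S^1$; this is just the standard presentation, spelled out for illustration.)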
|
{
"source": [
"https://mathoverflow.net/questions/289711",
"https://mathoverflow.net",
"https://mathoverflow.net/users/5690/"
]
}
|
289,810 |
An $n$-polyplet is a collection of $n$ cells on a grid which are orthogonally or diagonally connected. The number of $n$-polyplets is given by the OEIS sequence A030222: $1, 2, 5, 22, 94, 524, 3031, \dots$ See below the five $3$-polyplets: A polyplet will be called $1$-step vanishing on Conway's Game of Life if every cell dies after one step. We observed that for $n\le 4$, an $n$-polyplet is $1$-step vanishing iff $n \le 2$. We found $1$-step vanishing polyplets with $n=9, 12$, see below: Question: Is there another $1$-step vanishing $n$-polyplet with $n \le 12$? If yes, what are they? (Note that there are exactly $37963911$ $n$-polyplets with $n \le 12$.) We can build infinitely many $1$-step vanishing polyplets with such a pattern, see below with $n=142$: Bonus question: Are there $1$-step vanishing polyplets of another kind?
|
Here are two more vanishing 12-plets similar to yours: $$
\substack{
\displaystyle{◻◻◼◻◻◻} \cr
\displaystyle{◻◻◻◼◻◻} \cr
\displaystyle{◻◻◼◼◼◼} \cr
\displaystyle{◼◼◼◼◻◻} \cr
\displaystyle{◻◻◼◻◻◻} \cr
\displaystyle{◻◻◻◼◻◻}
}
\quad
\substack{
\displaystyle{◻◻◼◻◻◻} \cr
\displaystyle{◻◻◻◼◻◻} \cr
\displaystyle{◻◻◼◼◼◼} \cr
\displaystyle{◼◼◼◼◻◻} \cr
\displaystyle{◻◻◼◻◻◻} \cr
\displaystyle{◻◻◼◻◻◻}
}
$$ I found these using JavaLifeSearch , combined with manual filtering of the search results to skip any non-polyplet patterns. I'm pretty sure that, together with your 9- and 12-plet and the 9- and 10-plets found by Noam D. Elkies, these (and their rotations and mirror images) are the only vanishing polyplets with 5 to 12 cells in Conway's Game of Life. That said, it's always possible that I've made some kind of a mistake in my search or filtering, so independent confirmation would be nice to have. (It's perhaps worth noting that 1-step vanishing patterns in standard GoL ( rule B3/S23) are exactly the same as still lifes in the "semi-complementary" alternative rule B3/S0145678, i.e. where the birth rules are the same, but live cells survive if and only if they would not survive in standard GoL. Thus, any existing software for exhaustively enumerating still lifes — or oscillators or spaceships, of which still lifes are a special case — in Life-like cellular automata could be used for this, at least as long as it doesn't have the standard GoL rules hardcoded.) As for your bonus question, I'm not sure what you mean by "of an other kind", but it's pretty easy to use tools like JLS to construct large vanishing polyplets with arbitrarily complex boundaries and inner structure, like this somewhat whimsical example: $$\substack{
\displaystyle{◼◻◼◼◻◻◼◼◻◼◼◻◻◼◼◻◻◼◼◻◼◼◻◻◼◼◻◼} \cr
\displaystyle{◻◼◼◼◼◼◼◼◼◼◼◼◼◼◼◼◼◼◼◼◼◼◼◼◼◼◼◻} \cr
\displaystyle{◼◼◻◼◻◼◼◻◻◻◼◼◻◼◼◼◼◻◼◼◼◼◼◻◻◼◼◼} \cr
\displaystyle{◼◼◻◼◻◼◼◻◼◼◼◼◻◼◼◼◼◻◼◼◼◼◻◼◼◻◼◼} \cr
\displaystyle{◻◼◻◻◻◼◼◻◻◻◼◼◻◼◼◼◼◻◼◼◼◼◻◼◼◻◼◻} \cr
\displaystyle{◼◼◻◼◻◼◼◻◼◼◼◼◻◼◼◼◼◻◼◼◼◼◻◼◼◻◼◼} \cr
\displaystyle{◼◼◻◼◻◼◼◻◻◻◼◼◼◻◻◻◼◼◻◻◻◼◼◻◻◼◼◼} \cr
\displaystyle{◻◼◼◼◼◼◼◼◼◼◼◼◼◼◼◼◼◼◼◼◼◼◼◼◼◼◼◻} \cr
\displaystyle{◼◻◼◼◻◻◼◼◻◼◼◻◻◼◼◻◻◼◼◻◼◼◻◻◼◼◻◼}
}$$ Update: I finally went and wrote a basic exhaustive search script ( available on GitHub here ) to find these patterns. The actual search code is currently written in (pretty awful) Python; I may (or may not) clean it up later, and maybe rewrite it in some more efficient language like C or C++. (There's also a simple Perl script to filter equivalent rotated and mirror image patterns from the output, and to sort them by live cell count.) The search script uses a simple depth first search to fill in an $N+2$ times $N$ cell grid with live and dead cells, starting from a single live cell at the top left, 1 and backtracking if: the number of live cells plus the number of connected components exceeds $N+1$, 2 the number of live cells plus the actual minimum number of additional cells needed to join the components together 3 exceeds $N$, a connected component is closed off by dead cells, so that it cannot be extended and joined with the rest of the pattern, 4 or the most recently added cell or one of its neighbors cannot be part of a valid still life / one-step vanishing pattern according to the CA rule. A lot of those checks are inefficiently implemented, and in any case Python is not a very fast language to begin with. Even so, it only took me an hour or so on my old laptop to enumerate all the one-step vanishing polyplets in GoL with up to $N = 16$ cells , and a couple of days to get up to $N = 20$ . The total number of distinct polyplets of various sizes found by my script (not counting rotated and mirror image versions separately) are: $$
\begin{array}{r|r}
\text{cells} & 1 & 2 & 9 & 10 & 12 & 14 & 15 & 16 & 17 & 18 & 19 & 20 \\ \hline
\text{polyplets} & 1 & 2 & 2 & 1 & 3 & 10 & 1 & 45 & 27 & 70 & 98 & 285
\end{array}
$$ Here's a picture showing all the 18-cell and smaller vanishing polyplets: $\hspace{14px}$ As I rather expected, while some of the larger polyplets are nicely symmetric or feature obviously generalizable repetitive motifs, as the permitted number of cells gets larger more and more of the patterns look like random agglomerations of cells with no obvious structure or symmetry. While it should not be difficult to exhibit families of vanishing polyplets that can have any (sufficiently large) cell count, perimiter or enclosed area, a nontrivial classification of all vanishing polyplets in GoL seems as hopeless as attempting to classify all still lifes (or oscillators or spaceships). 1) The grid wraps around from left to right, so the pattern can in fact expand both ways from the starting cell. (It does so in a somewhat peculiar way, so that the right side of each row wraps to the left side of the next row; this effectively makes the 2-dimensional GoL lattice equivalent to a 1-dimensional CA lattice with a funny neighborhood shape.) Making the grid $N+2$ cells wide ensures that an $N$-cell pattern cannot actually reach around the whole grid, and the output code takes care to shift the grid so that the leftmost column of the pattern is actually printed on the first column of the output. 2) Each additional connected component beyond the first necessarily requires at least one additional live cell to connect it to the rest of the pattern. The top-down left-to-right filling order ensures that adding a new live cell can never connect more than two components. 3) This is a somewhat slower calculation, and so the cruder lower bound of one cell per component is checked first. This is also the part that took me the longest to debug, and if there are any remaining bugs in the code, my money would be on this being where they are. That said, I've checked that, at least up to $N = 14$, disabling this check does not actually change the output, so I'm fairly confident that it works. 4) If this was the only connected component, then we've just completed a valid polyplet and will call the output code. Either way, the search will still backtrack the same way regardless. Ps. To answer a question posed in the comments, yes, vanishing $n$-plets (and, in fact, $n$-ominoes) exist for all $n \ge 14$. For example, the following family of vanishing 16 to 23 cell polyominoes: $\hspace{82px}$ is easily extensible to all higher cell counts, as shown below for 24 to 32 cells: $\hspace{82px}$ (In fact, even the 20 to 23 cell polyominoes above are just simple extensions of the 16 to 19 cell ones.) Together with the 14 and 15 cell polyominoes already found by the brute force search, these cover all the sizes from 14 cells up.
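For readers who want to double-check individual patterns without running a search program, the defining condition is easy to test directly. Here is a minimal Python sketch (standard library only; it is a verification helper, not the search script linked above, and all the function names are mine). As a usage example it checks the coordinates read off from the first 12-plet displayed at the top of this answer.

    from itertools import product

    def neighbors(cell):
        # the eight orthogonally or diagonally adjacent cells
        x, y = cell
        return [(x + dx, y + dy) for dx, dy in product((-1, 0, 1), repeat=2) if (dx, dy) != (0, 0)]

    def is_polyplet(cells):
        # king-move connectivity check
        cells = set(cells)
        if not cells:
            return False
        seen, stack = set(), [next(iter(cells))]
        while stack:
            c = stack.pop()
            if c not in seen:
                seen.add(c)
                stack.extend(n for n in neighbors(c) if n in cells)
        return seen == cells

    def vanishes_in_one_step(cells):
        # True iff the next B3/S23 generation is empty: no live cell has 2 or 3
        # live neighbours, and no dead cell has exactly 3 live neighbours
        cells = set(cells)
        candidates = cells | {n for c in cells for n in neighbors(c)}
        for c in candidates:
            live = sum(n in cells for n in neighbors(c))
            if c in cells and live in (2, 3):
                return False
            if c not in cells and live == 3:
                return False
        return True

    # (column, row) coordinates of the first 12-plet shown above
    twelve = [(2, 0), (3, 1), (2, 2), (3, 2), (4, 2), (5, 2),
              (0, 3), (1, 3), (2, 3), (3, 3), (2, 4), (3, 5)]
    print(is_polyplet(twelve), vanishes_in_one_step(twelve))  # True True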
|
{
"source": [
"https://mathoverflow.net/questions/289810",
"https://mathoverflow.net",
"https://mathoverflow.net/users/34538/"
]
}
|
289,976 |
For the time being, the OEIS website contains almost $300000$ sequences. Each of these sequences is the mark of a specific mathematical concept. Sometimes two (or more) distinct concepts have the same mark, which suggests a connection between a priori independent mathematical areas. The most famous example like that is perhaps the Catalan numbers sequence: A000108 . Question : What are the examples of pair of integer sequences coinciding on all the known terms, but for which the coincidence for all the terms is unknown? Cheating is not allowed. By cheating I mean artificial examples like: $u_n = v_n =n$ for $n \neq 10$, and if RH is true then $u_{10} = v_{10} = 10$, else $u_{10}+1 = v_{10} = 1$. The existence of an OEIS entry could act as safety. EDIT : I would like to point out that all the answers below are about pair of integer sequences which were already conjectured to be the same, and of course they are on-topic (and some of them are very nice). Note that such examples can be found by searching something like "conjectured to be identical" on OEIS, as I did for some of my own examples below... Now, a more surprising kind of answer would be a (non-cheating) pair of integer sequences which are the same on the known entries, but for which there is no evidence a priori that they are the same for all the entries or that they are related (i.e. the precise meaning of a coincidence ). Such examples, also on-topic, could reveal some unexpected connections in mathematics, but could be harder to find...
|
A historical example, in the sense that the conjectural equality has been refuted: A180632 (Minimum length of a string of letters that contains every permutation of $n$ letters as sub-strings) was conjectured equal to A007489 ($\sum_{k=1}^n k!$). The exact value of A180632 at $n=6$ is still unknown, but it must be less than the conjectured value of $1!+2!+\cdots+6!=873$, because the following string of length 872 contains every permutation of 123456 as a substring: 12345612345162345126345123645132645136245136425136452136451234651234156234152634152364152346152341652341256341253641253461253416253412653412356412354612354162354126354123654132654312645316243516243156243165243162543162453164253146253142653142563142536142531645231465231456231452631452361452316453216453126435126431526431256432156423154623154263154236154231654231564213564215362415362145362154362153462135462134562134652134625134621536421563421653421635421634521634251634215643251643256143256413256431265432165432615342613542613452613425613426513426153246513246531246351246315246312546321546325146325416325461325463124563214563241563245163245613245631246532146532416532461532641532614532615432651436251436521435621435261435216435214635214365124361524361254361245361243561243651423561423516423514623514263514236514326541362541365241356241352641352461352416352413654213654123
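Claims of this kind are easy to check mechanically. A minimal Python sketch follows (the function name is mine); applying it with $n=6$ to the 872-digit string above confirms that every one of the $720$ permutations occurs.

    from itertools import permutations

    def contains_all_permutations(s, n):
        # does the digit string s contain every permutation of 1..n as a contiguous substring?
        return all("".join(p) in s for p in permutations("123456789"[:n]))

    # small sanity check: the classical length-9 superpermutation on 3 symbols
    print(contains_all_permutations("123121321", 3))  # True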
|
{
"source": [
"https://mathoverflow.net/questions/289976",
"https://mathoverflow.net",
"https://mathoverflow.net/users/34538/"
]
}
|
289,991 |
Are there simple proofs of some concrete special cases of Faltings's theorem ? Any help would be appreciated.
|
Based on the OP's comment clarifying his question, I fear that the answer is no, there are no concrete special cases in which one can follow the approach of Faltings' proof that yield any significant simplifications. Faltings' proof is very indirect. First one uses rational points in $C(K)$ to construct coverings of $C$ that have good reduction outside a certain finite set of primes $S$. (This idea is, I believe, due to Parshin.) Taking Jacobians yields abelian varieties with good reduction outside $S$. One can, in principle, do this for a specific curve fairly concretely. But now one is reduced to Faltings proof of the Shafarevich conjecture, that there are only finitely many (suffices to do principally polarized) abelian varieties of a given dimension having good reduction outside $S$. And the proof of that is via reversing an argument of Tate to show it suffices to prove the isogeny conjecture, which gives a Galois-theoretic interpretation of isogenies between abelian varieties. And the proof of the isogeny conjecture is sufficiently complicated that I won't try to summarize it here. Anyway, bottom line is it does not appear that applying Faltings' ideas to a single curve would simplify the argument very much. Having said that, there are more elementary methods that can prove that is $C(K)$ finite in some cases. There's the method of Dem'janenko : If there are independent maps $f_1,...,f_n:C\to E$ for some elliptic curve $E$, and if $n>\text{rank } E(K)$, then $C(K)$ is finite. This is done via a height calculation. It was generalized by Manin, who applied it to towers of modular curves $X_1(p^k)$. There is the method of Chabauty (strengthened to what is now usually called the Chabauty-Coleman method ): Let $C/K$ and $J=\text{Jac}(C)$. If $\text{genus}(C)>\text{rank }J(K)$, then $C(K)$ is finite. The proof is via $p$-adic analytic methods. Coleman made the method quite precise (using Coleman integration ), so in many cases it allows one to actually compute $C(K)$. Note that Faltings original proof, and the Vojta-Bombieri-Faltings alternative proof via Diophantine approximation, are ineffective.
|
{
"source": [
"https://mathoverflow.net/questions/289991",
"https://mathoverflow.net",
"https://mathoverflow.net/users/14024/"
]
}
|
290,085 |
Let $k$ be a field, and suppose $G$ is a group-scheme over $k$ (I am happy to assume that $k=\mathbb{Q}$ and that $G$ is affine). A $G$-torsor over $k$ is a non-empty $k$-scheme $T$ equipped with an action $a:G\times T\to T$, such that $(a,\pi_2):G\times T\to T\times T$ is an isomorphism. There is an induced morphism
$$
\begin{align*}
T\times T\times T&\to T,\\
(t_1,t_2,t_3)&\mapsto a\bigg(\pi_1\big((a,\pi_2)^{-1}(t_1,t_2)\big),t_3\bigg)=``t_1t_2^{-1}t_3".
\end{align*}
$$
I would like to know to what extent the map $T\times T\times T\to T$ determines $G$. I mean the following: For which pairs of groups $G$ and $G'$ does there exist a $G$-torsor $T$, a $G'$-torsor $T'$, and an isomorphism $T\cong T'$ compatible with the maps $T\times T\times T\to T$ and $T'\times T'\times T'\to T'$?
|
Let $b:T\times T\times T\to T$ be the map in question. We can view it as a morphism of functors (actually fpqc sheaves) on $k$-schemes:
$$\begin{array}{rcl}
T\times T & \longrightarrow & \operatorname{\underline{Aut}}(T)\\
(t_1,t_2) & \longmapsto & \left(\,t_3\mapsto b(t_1,t_2,t_3)\,\right)
\end{array}$$
where $\underline{\mathrm{Aut}}$ stands for the sheaf of $k$-automorphisms. Thus, this map sends $(t_1,t_2)$ to the automorphism of $T$ given by the action of ``$t_1t_2^{-1}$''. So you can recover $G$ as the subsheaf of $\operatorname{\underline{Aut}}(T)$ which is the image of this map.
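As a purely illustrative toy check (finite sets over a point rather than schemes, and with the trivial torsor $T=G$), one can verify directly that the maps $t_3\mapsto b(t_1,t_2,t_3)$ are exactly the left translations, so their collection recovers $G$; here is a short Python sketch with $G=S_3$ (all names are mine, chosen for the example).

    from itertools import permutations, product

    G = list(permutations(range(3)))            # the symmetric group S_3, elements as tuples

    def compose(g, h):
        # (g o h)(i) = g[h[i]]
        return tuple(g[i] for i in h)

    def inverse(g):
        inv = [0] * len(g)
        for i, gi in enumerate(g):
            inv[gi] = i
        return tuple(inv)

    def b(t1, t2, t3):
        # "t1 t2^{-1} t3" for the trivial torsor T = G
        return compose(compose(t1, inverse(t2)), t3)

    maps_from_pairs = {tuple(b(t1, t2, t3) for t3 in G) for t1, t2 in product(G, G)}
    left_translations = {tuple(compose(g, t) for t in G) for g in G}
    print(maps_from_pairs == left_translations, len(maps_from_pairs) == len(G))  # True True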
|
{
"source": [
"https://mathoverflow.net/questions/290085",
"https://mathoverflow.net",
"https://mathoverflow.net/users/5263/"
]
}
|
290,196 |
For topological spaces $X,Y$ let $\text{Cont}(X,Y)$ be the collection of continuous functions $f:X\to Y.$ We endow $\text{Cont}(X,Y)$ with the topology inherited from the product topology on $Y^X.$ Are there spaces $X,Y$ such that $X$ has more than one point and $Y\not\cong\mathbb{R}$ such that $\mathbb{R}\cong\text{Cont}(X,Y)$?
|
I must admit that I hesitated answering this question, but here it is. The answer is "no".
Assume there exist topological spaces $X$ and $Y$ such that $C(X,Y)\simeq \mathbb{R}$ and $Y\not\simeq\mathbb{R}$.
Identifying $C(X,Y)$ with $\mathbb{R}$ and $Y$ with the constant functions in $Y^X$, we consider $Y$ as a closed subspace of $\mathbb{R}$. Considering the surjection $\mathbb{R} \to C(X,Y)\to C(\{x\},Y) \to Y$, we see that $Y$ is connected.
Thus $Y\subset\mathbb{R}$ is a closed convex subset. As $Y\not\simeq \mathbb{R}$ there must exist an extreme point $y\in Y$.
Note that $C(X,Y)$ is a convex subset of $C(X,\mathbb{R})$ and the constant function $y$ is an extreme point of it. It follows that $C(X,Y)-\{y\}$ is also convex, hence contractible. But $C(X,Y)-\{y\}$ is homeomorphic to $\mathbb{R}$ minus a point. This is a contradiction.
|
{
"source": [
"https://mathoverflow.net/questions/290196",
"https://mathoverflow.net",
"https://mathoverflow.net/users/8628/"
]
}
|
290,459 |
Let $k$ be a field and $V$ a $k$-vector space. Then there is a map $V \to V^{\ast \ast}$, where $V^{\ast}$ is the dual vector space. If we are in ZFC and $\dim V$ is infinite, then this map is not surjective. As we learned in this question , there are models of $ZF$ where $V \to V^{\ast \ast}$ is an isomorphism when $V$ has a countable basis. I think the same argument shows that it is consistent with ZF that this is an isomorphism whenever $V$ has a basis. Is it consistent with $ZF$ that $V \to V^{\ast \ast}$ is an isomorphism for all vector spaces $V$? I ask because I'm teaching a rigorous undergrad analysis class. My students keep asking me whether they have to believe that $V \to V^{\ast \ast}$ can fail to be an isomorphism. Of course, I'm trying to change their intuition to point out why most mathematicians find the failure of isomorphism plausible and point out that there are more subtle ways to salvage the claim, such as Hilbert spaces, but I'd also love to be able to give them a choice free proof that there is some vector space where this issue comes up.
|
No, it’s not consistent. Let $V=k^{(\omega)}$ be the vector space of finite sequences of elements of $k$. Then $V^*$ can be identified with the vector space $k^\omega$ of all sequences, and elements of the image of the natural map $V\to V^{**}$, considered as maps $k^\omega\to k$, are determined by their restriction to $k^{(\omega)}$. So if $V\to V^{**}$ is an isomorphism, then, taking $W=k^\omega/k^{(\omega)}$, there are no nonzero linear maps $W\to k$, and hence $W^{**}=0$. But $W$ is nonzero. So the map to the double dual must fail to be an isomorphism either for $V$ or for $W$.
|
{
"source": [
"https://mathoverflow.net/questions/290459",
"https://mathoverflow.net",
"https://mathoverflow.net/users/297/"
]
}
|
290,522 |
This might be a very naive question. But what is quantum algebra, really? Wikipedia defines quantum algebra as "one of the top-level mathematics categories used by the arXiv". Surely this cannot be a satisfying definition. The arXiv admins didn't create a field of mathematics by choosing a name out of nowhere. Wikipedia (and, in fact, the MathOverflow tag wiki ) also lists some subjects: quantum groups, skein theories, operadic and diagrammatic algebra, quantum field theory. But again, I don't find this very satisfying, as I feel this doesn't tell me what the overarching idea of quantum algebra is. (For example, inspired by the table of of contents of the Wikipedia article I could define algebraic topology to be "homotopy, homology, manifolds, knots and complexes". But first, I have certainly missed many subfields of algebraic topology, and second, this is missing the overarching idea, contained in the introduction of the Wikipedia article: algebraic topology is the use of "tools from abstract algebra to study topological spaces" . It immediately makes the link behind all the themes I listed clearer, and if I encounter a new theme I can tell if it is AT or not using this criterion.) This MO question is looking for the intuition behind quantum algebra and relations to quantum mechanics. The main thing I gathered from the answers (that I more-or-less knew already) is that "quantum = classical + ħ", or less informally that we are looking at noncommutative deformations of commutative, classical objects. But this doesn't seem to account for all of quantum algebra. For example, a TQFT is a functor from a category of cobordisms to some algebraic category. Where's ħ? Operadic algebra is also listed as one of the components of quantum algebra, but one can study operads a lot without talking about noncommutative deformations. In fact, I've seen and read many papers about operads listed in math.QA that don't seem to have anything to do with this picture. In brief: What could be a one-sentence definition of quantum algebra? (In the spirit of the definition of algebraic topology above.)
|
Quantum algebra is an umbrella term used to describe a number of different mathematical ideas, all of which are linked back to the original realisation that in quantum physics, one finds noncommutativity. The areas now encompassed by the term "quantum algebra" are not necessarily directly or obviously related to each other (and this is even more true for publications tagged math.QA on the arXiv, since arXiv classifications are intended to flag work as "of interest to people in area X", not that "this work is in area X"; the Mathematics Subject Classification is better suited to this, but is naturally a much finer classification, and most items have multiple tags). The original quantum groups (more precisely, deformation quantizations of enveloping and coordinate algebras) are one example, but their study has largely been absorbed into a wider area of noncommutative geometry (usually with qualifiers: algebraic, projective, differential, ...). One also finds Hopf algebra theory and thence categorical approaches to noncommutative geometry (symmetric and braided monoidal categories, for starters). These lead you towards TQFTs, operads, knot invariants and many other things. There are plenty of good places to read about what different people think the area encompasses, one being Majid's summary in the article "Quantum groups" (p.272-275) in Gowers, Timothy (ed.); Barrow-Green, June (ed.); Leader, Imre (ed.) , The Princeton companion to mathematics., Princeton, NJ: Princeton University Press (ISBN 978-0-691-11880-2/hbk; 978-1-400-83039-8/ebook). xx, 1034 p. (2008). ZBL1242.00016 . I would say that a one sentence summary that covers even 80% or so of "quantum algebra" is going to be tricky, but the closest I think you'll get is something along the lines of The study of noncommutative analogues and generalisations of
commutative algebras, especially those arising in Lie theory. Some might prefer an additional mention of the original link to mathematical physics, but my personal view is that in some directions we have moved very far away from being directly applicable to mathematical physics (my own areas of interest are really purely algebra), so I have chosen not to include this.
|
{
"source": [
"https://mathoverflow.net/questions/290522",
"https://mathoverflow.net",
"https://mathoverflow.net/users/36146/"
]
}
|
291,018 |
What are some examples of mathematical conjectures that applied mathematicians assume to be true in applications, despite it being unknown whether or not they are true?
|
An important specific conjecture is that you cannot factor large integers fast. Many security systems for Internet and other transactions, depend on this.
|
{
"source": [
"https://mathoverflow.net/questions/291018",
"https://mathoverflow.net",
"https://mathoverflow.net/users/65915/"
]
}
|
291,158 |
Are there examples of originally widely accepted proofs that were later discovered to be wrong by attempting to formalize them using a proof assistant (e.g. Coq, Agda, Lean, Isabelle, HOL, Metamath, Mizar)?
|
Since this question was asked in January there have been some developments. I would like to argue that the scenario raised in the question has now actually happened. Indeed, Sébastien Gouëzel, when formalising Vladimir Shchur's work on bounds for the Morse Lemma for Gromov-hyperbolic spaces, found an actual inequality which was transposed at some point causing the proof (which had been published in 2013 in J. Funct. Anal., a good journal) to collapse. Gouëzel then worked with Shchur to find a new and correct (and in places far more complex) argument, which they wrote up as a joint paper. http://www.math.sciences.univ-nantes.fr/~gouezel/articles/morse_lemma.pdf The details are in the introduction. Anyone who reads it will see that this is not a "mistake" in the literature in the weak sense defined by
Manuel Eberl's very clear answer -- this was an actual error which was discovered because of a formalization process.
|
{
"source": [
"https://mathoverflow.net/questions/291158",
"https://mathoverflow.net",
"https://mathoverflow.net/users/119858/"
]
}
|
291,738 |
Is the following identity known? $$\sum\limits_{k=0}^n\frac{(-1)^k}{2k+1}\binom{n+k}{n-k}\binom{2k}{k}=
\frac{1}{2n+1}$$ I have not found it in the following book: Henry Wadsworth Gould, Combinatorial identities: a standardized set of tables listing 500 binomial coefficient summations , Morgantown, West Virginia, 1972.
|
In terms of hypergeometric series, the sum is $_3F_2(-n, 1+n, 1/2;1,3/2;1)$ and the identity is a special case of Saalschütz's theorem (also called the Pfaff-Saalschütz theorem), one of the standard hypergeometric series identities. A more general identity, also a special case of Saalschütz's theorem, is
$$\sum_{k=0}^n (-1)^k\frac{a}{a+k}\binom{n+k+b}{n-k}\binom{2k+b}{k}
= \binom{n+b-a}{n}\biggm/\binom{n+a}{n}.$$
The O.P.'s identity is the case $a=1/2, b=0$.
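For anyone who wants a quick sanity check, the O.P.'s identity is easy to confirm exactly for small $n$ in Python (rational arithmetic, so no floating-point issues; the helper name is mine):

    from fractions import Fraction
    from math import comb

    def lhs(n):
        # the left-hand side of the identity in the question
        return sum(Fraction((-1) ** k, 2 * k + 1) * comb(n + k, n - k) * comb(2 * k, k)
                   for k in range(n + 1))

    print(all(lhs(n) == Fraction(1, 2 * n + 1) for n in range(30)))  # True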
|
{
"source": [
"https://mathoverflow.net/questions/291738",
"https://mathoverflow.net",
"https://mathoverflow.net/users/32389/"
]
}
|
292,044 |
I was wondering about the distribution of $\sqrt{p}$ mod $1$ this morning, as one does while brushing one's teeth. I remembered the paper of Elkies and McMullen (Duke Math. J. 123 (2004), no. 1, 95–139.) about $\sqrt{n}$ mod $1$, but hadn't really thought about it before. Question 1: is $\sqrt{p}$ equidistributed mod $1$, as $p$ varies over all prime numbers? Is this known? Within range of current techniques? Question 2: What about subtler statistics of $\sqrt{p}$ (and $\sqrt{n}$) mod $1$? I made three plots, giving histograms of $\sqrt{n}$ mod $1$ (for natural numbers up to 100,000) and $\sqrt{p}$ mod $1$ (for primes up to 1 million) and (for comparison) a histogram of 100,000 samples drawn uniformly at random from {0,1,...,999}. Here they are for your enjoyment. There's some wild stuff going on, I think! Question 2(a): What's up with these sharp peak/valleys at rational numbers, in the distribution of $\sqrt{n}$ mod $1$? They are especially prominent at fractions of the form $a / 2^{e}$. How tall are these peaks near rational numbers? They persist when sampling from the primes, i.e., in the distribution of $\sqrt{p}$ mod $1$ too. Question 2(b): Outside of those funky spots in 2(a), the distribution of $\sqrt{n}$ mod $1$ is far flatter than one would expect, e.g., from samples drawn uniformly at random as displayed in the bottom histogram. This must have been noticed and quantified before... what's the relevant quantitative result here? Question 2(c): The distribution of $\sqrt{p}$ mod 1 displays the same funky spots near rational numbers, but otherwise seems much closer (in noise-volume) to the random samples at the bottom. Maybe for a larger sample, the funkiness goes away... I don't know. Explanations or conjectures are welcome. Question 3: These seem like natural images to look at. If you know a reference where others have drawn such pictures or studied similar phenomena, I'd love to take a look! -------------Update after answers below----------------- It looks like the answer to Question 1 is YES. Lucia's answer below explains this, and also some of the flatness evident in the $\sqrt{n}$ distribution mod 1. Igor and Aaron discuss the "spikes" around rational numbers. This seems related to binning: if our bins have width 1/1000, we see spikes at multiples of 1/2, 1/4, 1/5, 1/10, etc., related to divisors of 1000. Here's a new picture, which might help us understand the behavior of the distribution of $\sqrt{n}$ mod 1 near rational numbers. I've intentionally drawn the bins so that their endpoints lie on rational numbers with denominator up to 60. (I call this Farey-binning). This seems to bring the "spikes" around rational numbers down to the same size (independent of denominator). I think I'll accept Lucia's answer soon, because it answers the most direct Question 1. But more insights are welcome.
|
You can certainly use Vinogradov's method to show that $\sqrt{p}$ is equidistributed $\pmod 1$. I haven't thought about more subtle properties, such as the gap spacing considered by Elkies and McMullen (or your other questions). For the equidistribution, by Weyl's criterion it is enough to show cancellation in sums of the form
$$
\sum_{n\le x} \Lambda(n) e(k\sqrt{n})
$$
for non-zero integers $k$. This is exactly the kind of sum to which Vinogradov's method applies. For example, see Exercise 2 on page 348 of Iwaniec-Kowalski which invites you to show that this sum is $\ll_k x^{\frac 56+\epsilon}$. Sums like this also appeared in the IHES paper of Iwaniec, Luo and Sarnak, where they show that better bounds for this sum (like $O(x^{\frac 12+\epsilon})$) have implications for the Riemann hypothesis for $GL(2)$ $L$-functions. One should expect the exponential sum over primes above to be on the scale of $O(x^{\frac 12+\epsilon})$. This is in keeping with the plots for $\sqrt{p}$ looking like random noise. To see why $\sqrt{n}$ looks different and more flat, note that the number of $n\le N^2$ with $\{ \sqrt{n} \} \in (\alpha,\beta)$ is given by
$$
\sum_{k\le N} \sum_{(k+\alpha)^2 < n <(k+\beta)^2} 1 = \sum_{k\le N} (\lfloor 2k\beta+\beta^2 \rfloor - \lfloor 2k \alpha + \alpha^2\rfloor).
$$
Since the distribution of $\{ 2k\alpha+\alpha^2\}$ (and similarly for $\beta$) is extremely regular, one should expect this to be nailed down much more precisely than for primes. Finally, suppose for example that $\alpha=a/q$ is a rational number (in lowest terms) with small denominator $q$, which let us assume odd for simplicity. Write $\alpha^2 = b/q + c/q^2$ with $0<c <q$. Note that $\{ 2k\alpha+\alpha^2\}$ will run over $c/q^2$, $1/q+c/q^2$, $\ldots$, $(q-1)/q+c/q^2$, and its average value will be $(q-1)/(2q) + c/q^2$. This can be noticeably different from the average value of $\{ x\}$, which is $1/2$, explaining the "spikes" near small rational numbers.
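To connect this back to the plots in the question, here is a small numerical illustration (nothing more than an illustration) of the Weyl criterion: the normalized sums $\frac{1}{\pi(x)}\sum_{p\le x} e(k\sqrt{p})$ should be small for fixed non-zero $k$. The sketch below uses only the standard library.

    from cmath import exp, pi

    def primes_up_to(n):
        # simple sieve of Eratosthenes
        sieve = bytearray([1]) * (n + 1)
        sieve[0] = sieve[1] = 0
        for i in range(2, int(n ** 0.5) + 1):
            if sieve[i]:
                sieve[i * i::i] = bytearray(len(range(i * i, n + 1, i)))
        return [i for i in range(n + 1) if sieve[i]]

    ps = primes_up_to(10 ** 6)
    for k in (1, 2, 3):
        s = sum(exp(2j * pi * k * p ** 0.5) for p in ps)
        print(k, abs(s) / len(ps))   # small values, consistent with equidistribution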
|
{
"source": [
"https://mathoverflow.net/questions/292044",
"https://mathoverflow.net",
"https://mathoverflow.net/users/3545/"
]
}
|
292,582 |
A few years ago, I came up with this proof of Perron's theorem for a class presentation: https://pi.math.cornell.edu/~web6720/Perron-Frobenius_Hannah%20Cairns.pdf I've written an outline of it below so that you don't have to read a link. It's close in spirit to Wielandt's proof using $$\rho := \sup_{\substack{x \ge 0\\|x| = 1}} \min_j {|{\sum x_i A_{ij}}| \over x_j},$$ but I think it's simpler. (In particular, you don't have to divide by anything or take the sup min of anything.) The exception is the part where you prove that the spectral radius has only one eigenvector, which is exactly the same as Wieland's proof. I believe it's correct, but it hasn't passed through any kind of verification process aside from being presented in class. And I've had a couple people write me about it, and, for God knows what reason, it comes up on Google in the first couple pages if you search for "Perron-Frobenius." So I'd appreciate it if you would look at this and see if you see anything wrong with it. And if you don't, I'd like to know if it's original, because if so then I get to feel proud of myself. Here is the proof: Let $A > 0$ be a positive $n \times n$ matrix with eigenvalues $\lambda_1, \ldots, \lambda_n$ , counted with multiplicity. Let $\rho = \max |\lambda_i|$ be the spectral radius. We want to prove that $\rho$ is a simple eigenvalue of $A$ with a positive eigenvector, and that every other eigenvalue is strictly smaller in absolute value. Let $\lambda$ be an eigenvalue with $|\lambda| = \rho$ , and finally let $\psi$ an eigenvector for $\lambda$ . Consider $$\Psi := |\psi| = (|\psi_1|, \ldots, |\psi_n|).$$ Then $A \Psi = A |\psi| \ge |A \psi| = |\lambda \psi| = \rho |\psi| = \rho \Psi$ , where " $x \ge y$ " means that each coordinate of $x$ is greater than or equal to each coordinate of $y$ . Suppose $A \Psi \ne \rho \Psi$ . Then by positivity we have $A^2 \Psi > \rho A \Psi$ , which means that by continuity there is some $\varepsilon > 0$ with $A^2 \Psi \ge (\rho + \varepsilon) A \Psi$ . Therefore \begin{align*}A^{n+1} \Psi &\ge (\rho + \varepsilon) A^n \Psi \\&\cdots\\&\ge (\rho + \varepsilon)^n A \Psi \ge 0\end{align*} and taking norms we get $\Vert A^{n+1} \Psi \Vert_1 \ge (\rho + \varepsilon)^n \Vert A \Psi \Vert_1$ , so the operator 1-norm of $A^n$ is at least $(\rho + \varepsilon)^n$ , which is a contradiction with Gelfand's formula $\lim \Vert A^n \Vert^{1/n} = \rho$ . Therefore $A \Psi = \rho \Psi$ and $\rho$ is an eigenvalue with positive eigenvector $\rho \Psi = A \Psi > 0$ . Suppose there is an eigenvalue $\lambda$ with $|\lambda| = \rho$ . Let $\psi$ be an eigenvector for $\lambda$ . We have seen above that $A \Psi = \rho \Psi = |A \psi|$ or $\sum_j A_{ij} |\psi_j| = |\sum_{ij} A_{ij} \psi_j|$ . Fix an index $i$ . Then $A_{ij} > 0$ for each row $j$ , so $\sum_{ij} A_{ij} \psi_j$ is a weighted sum of $\psi_j$ where all the weights are positive, and its absolute value is the weighted sum of $|\psi_j|$ with the same weights. Those two things can only be equal if all the summands $\psi_j$ all have the same complex argument, so $\psi = e^{i\theta} \psi'$ where $\psi' \ge 0$ , and $\lambda \psi' = A \psi' > 0$ , so $\lambda > 0$ . Therefore $\lambda = \rho$ . Now we know that every eigenvalue with $|\lambda| = \rho$ is $\rho$ , and it has one positive eigenvector (and possibly more), but we don't know how many times $\rho$ appears in the list of eigenvalues. That is, we don't know whether it's simple or not. 
We can prove that $\rho$ has only one eigenvector by the same argument as in Wielandt's proof. We know $\Psi$ is a positive eigenvector. Suppose that there is another linearly independent eigenvector $\psi$. We can pick $\psi$ to be real (because $\mathop{\rm Re} \psi$ and $\mathop{\rm Im} \psi$ are eigenvectors or zero and at least one is linearly independent of $\Psi$). Choose $c$ so $\Psi + c \psi$ is nonnegative and has one zero entry. Then $\rho (\Psi + c \psi) = A(\Psi + c \psi) > 0$ by positivity, but it has one zero entry, which is a contradiction. So there's no other linearly independent eigenvector. Now that we know there's only one eigenvector, we can prove that $\rho$ is a simple eigenvalue. By the previous reasoning, there is a positive left eigenvector $\Pi$ of $\rho$, so $\Pi A = \rho \Pi$. Then $\Pi > 0$ and $\Psi > 0$, so $\Pi \Psi \ne 0$. Then $\Pi^0 := \{x: \Pi x = 0\}$ is an $(n-1)$-dimensional subspace of $\mathbb R^n$ and $\Psi \notin \Pi^0$, so we can decompose $\mathbb R^n$ into the direct sum $$\mathbb R^n = \mathop{\text{span}}\{\Psi\} \oplus \Pi^0.$$ Both of these spaces are invariant under $A$, because $A \Psi = \rho \Psi$ and $\Pi A x = \rho \Pi x = 0$. Let $x_2, \ldots, x_n$ be a basis of $\Pi^0$. Let $$X = \begin{bmatrix}\Psi&x_2&x_3&\cdots&x_n\end{bmatrix}.$$ Then the invariance means that $$X^{-1}AX = \begin{bmatrix}\rho&0\\0&Y\end{bmatrix}$$ where the top right $0$ says $\Pi^0$ is invariant under $A$ and the lower left $0$ says $\mathop{\text{span}}\{\Psi\}$ is invariant under $A$. Here $Y$ is some unknown $(n-1) \times (n-1)$ matrix. $A$ is similar to the above block matrix, so the eigenvalues of $A$ are $\rho$ followed by the eigenvalues of $Y$. If $\rho$ is not a simple eigenvalue, then it must be an eigenvalue of $Y$. Suppose $\rho$ is an eigenvalue of $Y$. Let $\psi'$ be an eigenvector with $Y \psi' = \rho \psi'$. Then $A X {0 \choose \psi'} = \rho X {0 \choose \psi'}$ and $X{0 \choose \psi'}$ is linearly independent of $\Psi = X {1 \choose \mathbf{0}}$. We've already proved that $A$ has only one eigenvector for $\rho$, so that is impossible. Therefore, $\rho$ is not an eigenvalue of $Y$, so $\rho$ is a simple eigenvalue of $A$. That's the last thing we had to prove. Extending to $A \ge 0$ with $A^n > 0$ works as usual. Thanks!
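(If it helps anyone reading along: the statement itself, as opposed to the proof, is easy to sanity-check numerically. The sketch below verifies for one random strictly positive matrix that the spectral radius is attained by a single real eigenvalue with a positive eigenvector; it assumes numpy is available and is only an illustration.)

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.random((6, 6)) + 0.1                 # a strictly positive 6 x 6 matrix
    eigvals, eigvecs = np.linalg.eig(A)
    i = int(np.argmax(np.abs(eigvals)))
    rho = eigvals[i]
    v = np.real(eigvecs[:, i])
    v = v / v[np.argmax(np.abs(v))]              # normalize so the largest entry is +1
    print(np.isclose(np.imag(rho), 0))                            # True: rho is real
    print(bool(np.all(v > 0)))                                    # True: positive eigenvector
    print(int(np.sum(np.isclose(np.abs(eigvals), np.abs(rho)))))  # 1: no other eigenvalue of that modulus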
|
(1) Correctness: I read all arguments in detail and couldn't find anything wrong with them. Of course, this doesn't mean too much... (2) Orginality: I think in a topic which has such an extensive historical record as Perron-Frobenius theory does, the question of "originality" or "novelty" of any particular proof is a very delicate one. There is an enormous amount of literature out there which all deals with spectral properties of positive matrices in one way or another. I recommend to have a look at MacCluer's survey article [MacCluer: The Many Proofs and Applications of Perron’s Theorem, 2000, SIAM Review, Vol. 42, No. 3, pp. 487–498] for an overview over some proofs of Perron's theorem and for references pointing to several further proofs. But even if somebody had an overview over all the relevant literature and could thus decide with sufficiently high probability whether the OP's proof (or a very similar one) is written somewhere in the literature, we would still face the problem that, even if the proof as a whole was new, this does not necessarily mean that any of the single arguments in the proof is new. In fact, in addition to all the articles and textbooks with proofs of Perron's theorem, there have been extensive (and successful) attempts to generalise Perron-Frobenius theory in various directions (for instance, to matrices which leave invariant a cone in $\mathbb{R}^n$ , to eventually positive matrices, to Krein-Rutman type theorems on ordered Banach spaces whose cone has non-empty interior, and to Perron-Frobenius theory for positive operators on Banach lattices, in order to mention just four of them), and many arguments used in those theories are variations of techniques from the classical theory. Hence, it is quite save to say that, for any argument used in any "new" proof of Perron's theorem, we can find a similar argument somewhere in the related literature. Here are a few examples to back up this claim (in the following, I assume for simplicity that the spectral radius equals $1$ ; this is no loss of generality since we can replace $A$ with $A/\rho$ ): The second bullet point in the question essentially says that, for every eigenvector $\psi$ belonging to a unimodular eigenvalue, the modulus $|\psi|$ is a super-fixed point of $A$ . This observation is essential for many arguments in Perron-Frobenius theory on Banach lattices (see for instance [Schaefer: Banach Lattices and Positive Operators, 1974, Springer, Proposition V.4.6]) The argument in the third bullet point in the question is for instance used in [Karlin, Positive Operators, 1959, Lemma 3 and Theorem 8 on page 921]. In the subsequent corollary, Karlin uses this argument in the same way as the OP to deduce the same result (on infinite-dimensional spaces, though). The argument in the seventh bullet point in the question (which shows that the spectral radius is a geometrically simple eigenvalue and which is attributed to Wielandt by the OP) can for instance be found in [Karlin, op. cit., Theorem 9 on p. 922], where the argument is in turn attributed to Krein and Rutman (but I don't know who was earlier). 
The OP's subsequent argument (which proves algebraic simplicity) actually shows the following general spectral theoretic observation (which is independent of any positivity assumptions): If $\lambda$ is a geometrically simple eigenvalue of a matrix $A$ and if there exists an eigenvector $\Psi$ and a dual eigenvector $\Pi$ such that $\Pi\Psi \not= 0$ , then $\lambda$ is also algebraically simple (positivity of $A$ is only used to show the existence of such $\Psi$ and $\Pi$ and to deduce the geometric simplicity). [Actually, I wasn't aware of this spectral theoretic fact, and the question brought this to my attention - so let me express my gratitude to the OP for that.] I do not know any place in the literature where this can be found, but it seems very likely that this is known. Maybe somebody else can help out with a reference here? Of course, one could argue that most proofs published (even of new results) are just a recombination of known arguments from various branches in mathematics, but here we have the very special situation that the known arguments which are combined all stem from essentially one field, namely from the spectral theory of positive matrices and its generalisations - and that they were all used in the literature to prove results which are very closely related to the already known theorem under consideration. Thus, I would argue that one should rather not consider the OP's proof to be really "novel", even if it might not be written down explicitly in the literature. Those things said, I feel obliged to add the following three points: It is certainly rewarding, though, to seek for versions and variations of proofs of Perron's theorem. A proof which efficiently combines a few elegant arguments - as the OP's proof definitely does - can be very helpful in teaching (and after all, that's where the proof under discussion comes from, if I understood the OP correctly) I personally find the OP's version of the proof quite appealing. It's very clear and easy to follow. Concerning your motivation to "feel proud of yourself": well, you certainly should feel proud of yourself - you found that proof, and a good one at that.
|
{
"source": [
"https://mathoverflow.net/questions/292582",
"https://mathoverflow.net",
"https://mathoverflow.net/users/120600/"
]
}
|
292,833 |
Two simple remarks: The polynomial $x^k-1$ can be factorised over the integers as a product of (irreducible) cyclotomic polynomials: $$x^k-1 = \prod_{d|k}\Phi_d(x).$$
If we choose $k$ to be a number that has a lot of divisors, then $x^k-1$ will have a lot of factors. For example, if $k$ is a product of $b$ distinct primes then $x^k-1$ has $2^b$ factors. Suppose we are given some largeish natural number $n$, and we want to factorise it. One way to find a factor of $n$ would be to compute the product $a$ of lots of different numbers $a_i$, modulo $n,$ and then compute $\gcd(n, a)$. If we are lucky and some of the $a_i$ are factors of $n$ but $a$ is not a multiple of $n$, then the gcd will be a non-trivial factor of $n$. Putting these together, one might imagine that a good way to find factors of $n$ would be to compute $\gcd(n, x^k-1\mathrm{\ mod\ } n)$ with $k$ a product of distinct primes and $x>1$. This is equivalent to testing $\Phi_d(x)$ for a common factor with $n$ for each of the exponentially-many divisors $d|k$. So it might naively be hoped that one could thereby factorise $n$ in time $O(\log n)$ or thereabouts. Of course this does not actually work – one cannot thus factorise enormous numbers in the blink of an eye. I justify this claim on the non-mathematical grounds that a) if such a simple method worked then someone would have noticed by now; and the heuristic grounds that b) I tried it, and it didn’t. What I would like to know is why it doesn’t work. Presumably the problem is that the $2^b$ values $\Phi_d(x)$ are distributed in a sufficiently non-uniform way to stymie the procedure. What’s going on here? I can’t see much hint of this non-uniformity in examples that are small enough to compute all the $\Phi_d(x)$ explicitly in a reasonable amount of time. For example, if $n=61\times71=4331$ and $k=9699690$ is the product of the first eight primes, then the expression $\Phi_d(2)$ takes 244 different values as $d$ ranges over the $2^8=256$ divisors of $k$. (And as it happens, $\Phi_{35}(2)$ is a multiple of 71, and so computing $\gcd(4331, 2^{9699690}-1\mathrm{\ mod\ } 4331)$ reveals this factor.) Added : Thanks to David E Speyer for a clear concise answer. Just to make it explicit, the “non-uniformity” at play here is that each prime factor of $\Phi_d(x)$ is congruent to 1 modulo $d$.
|
Let's suppose you're in the worst case scenario: $N=pq$ where $p=2p'+1$ and $q=2q'+1$ with $p$, $q$, $p'$ and $q'$ all prime and $p$ and $q$ roughly the same size. Then the order of $x$ modulo $p$ will be either $p'$ or $2p'$ for any $x \neq \pm 1 \bmod p$, and similarly modulo $q$. So we expect $\gcd(N,x^k-1)$ to be nontrivial only if $p'$ or $q'$ divides $k$. The odds of this happening are roughly $1/p'+1/q' \approx 4/\sqrt{N}$, so you need to try about $\sqrt{N}$ values of $(x,k)$ before you expect a hit.
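Both points are easy to see in action with the standard library only. The first line below reproduces the toy example from the question; the second illustrates why safe-prime products resist the method (the modulus $1019\cdot 1187$ is just an arbitrary small example of that shape, chosen by me for illustration).

    from math import gcd

    n, k = 61 * 71, 9699690                 # 4331 and the product of the first 8 primes
    print(gcd(n, pow(2, k, n) - 1))         # 71, since ord_71(2) = 35 divides k

    # N = p*q with p = 2*509 + 1, q = 2*593 + 1 and 509, 593 prime:
    N = 1019 * 1187
    hits = sum(gcd(N, pow(2, j, N) - 1) > 1 for j in range(2, 2000))
    print(hits)                             # only a couple of exponents give a nontrivial gcd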
|
{
"source": [
"https://mathoverflow.net/questions/292833",
"https://mathoverflow.net",
"https://mathoverflow.net/users/8217/"
]
}
|
293,382 |
Let $X$ and $Y$ be reasonable spaces. Since $\mathbb{R}^{\infty}$ is contractible,
$$
X \times \mathbb{R}^{\infty} \cong Y \times \mathbb{R}^{\infty} \;\;\; \implies \;\;\; X \simeq Y.
$$ Is the converse also true? My vague intuition: the factors of $\mathbb{R}^{\infty}$ provide so much extra room that there will never be a geometric obstruction to producing a homeomorphism. Evidently, there is no homotopy-theoretic obstruction, so maybe the converse is true. On the other hand, I really have no idea and could be missing something basic. For example, the plane with two punctures is homotopy equivalent to a wedge of two circles. However, I do not know about a homeomorphism
$$
(\mathbb{C} - \{0, 1\}) \times \mathbb{R}^{\infty} \overset{?}{\cong} (S^1 \vee S^1) \times \mathbb{R}^{\infty}.
$$ Clarification about the meaning of $\mathbb{R}^{\infty}$ and the intent of the question When I wrote the question, I had in mind the infinite union $\cup_n \mathbb{R}^n$ inside the product $\mathbb{R}^{\mathbb{N}}$. However, since I would like the answer to the question to be "yes," I am also interested in other versions of $\mathbb{R}^{\infty}$. I asked the question because of an algebraic limiting construction in a paper I'm writing, and I felt that a topological version of the limit would be satisfying. The algebraic version is already working for spaces like $X \times \mathbb{C}^n$ for large-enough $n$, and converges algebraically to some limiting group, but this doesn't give too many hints about the topology I should use on $\mathbb{C}^{\infty}$, or if the limit can even be considered topologically. My application involves singular homology, and the direct limit topology is well-suited to this application, but other choices may be as well.
|
This question is answered by two classical theorems of infinite-dimensional topology, which can be found in the books of Bessaga and Pelczynski , Chigogidze or Sakai . Factor Theorem. For any Polish absolute neighborhood retract $X$ (= neighborhood retract of $\mathbb R^\omega$ ) the product $X\times\mathbb R^\omega$ is an $\ell_2$ -manifold. Classification Theorem. Two $\ell_2$ -manifolds are homeomorphic if and only if they are homotopy equivalent. So, the reasonable space in your question should read as a Polish absolute neighborhood retract . By the way, this class of spaces includes all (countable locally) finite simplicial complexes, mentioned in the answer of Igor Rivin. Remark on possible generalizations. Chapter IX of the book of Bessaga and Pelczynski contains generalizations of the above two theorems to manifolds modeled on normed spaces $E$ , which are homeomorphic to $E^\omega$ or $E^{\omega}_0:=\{(x_n)_{n\in\omega}\in E^\omega:\exists n\in\omega\;\forall k\ge n\;(x_k=0)\}$ . Chapter 7 of Chigogidze's book contains generalizations of the Factor and Classification Theorems to manifolds modeled on uncountable products of lines. Chapter 5 of Sakai's book contains generalizations of the Factor and Classification Theorems to manifolds modeled on the direct limits $\mathbb R^\infty$ and $Q^\infty$ of Euclidean spaces and Hilbert cubes, respectively. So, typical generalizations of Factor and Classification Theorems look as follows: Theorem. Let $E$ be a reasonable model space (usually it is an infinite-dimensional locally convex space with some additional properties). $\bullet$ For any neighborhood retract $X$ of $E$ the product $X\times E$ is an $E$ -manifold. $\bullet$ Two $E$ -manifolds are homeomorphic if and only if they are homotopically equivalent. Depending on the meaning of your model space $\mathbb R^\infty$ the meaning of a reasonable space also changes. If $\mathbb R^\infty$ is the direct limit of Euclidean spaces, then a reasonable space means an absolute neighborhood extensors which is a direct limits of finite dimensional compacta. If $\mathbb R^\infty$ is the union of the Euclidean spaces in the countable product of lines, then a reasonable space is a locally contractible space which is the countable union of finite-dimensional compact subsets.
|
{
"source": [
"https://mathoverflow.net/questions/293382",
"https://mathoverflow.net",
"https://mathoverflow.net/users/9068/"
]
}
|
294,119 |
When I first read Set Theory by Jech, I came under the impression that the Universe of Sets, $V$ was a fixed, well defined object like $\pi$ or the Klein four group. However as I have read on, I am beginning to have my doubts. We define \begin{align}
V_0 & :=\emptyset. \\[10pt]
V_{\beta +1} & :={\mathcal {P}}(V_\beta). \\[10pt]
V_\lambda & :=\bigcup_{\beta <\lambda} V_\beta \text{ for any limit ordinal } \lambda,
\end{align} and finish by saying
$$V:=\bigcup_{\alpha \in \operatorname{Ord}} V_\alpha.$$ However this definition seems to create many problems. I can see at least two immediately: First of all, the Power Set operation is not absolute, that is it varies between models of ZFC. Secondly (and more importantly) this definition seems to be completely circular as we do not know a priori what the ordinals actually are. For instance, if we assume some two, mutually inaccessible large cardinals $\kappa , \kappa'$ to exist, and model ZFC as $V_\kappa, V_{\kappa'}$
respectively, then we get two completely different sets of ordinals! So we seem to be at an impasse: In order to define the Universe of Sets we must begin with a concept of ordinals, but in order to define the ordinals we need to have a concept of the Universe of Sets! So my question is to ask: Is this definition circular? The only solution I can think of is that when we define $V$, we implicitly assume a model of ZFC to begin with. Then after constructing the ordinals in this model, we construct $V$ off of them, so to speak. Is this what is being assumed here?
|
As you noticed, the iterative conception of sets requires a pre-existing universe of sets, and ordinals with which we can label the stages. So if you work within ZFC itself, in other words within an existing model of ZFC, you can perform that iterative construction to obtain $V$ . Like Asaf Karagila says here , you cannot get nothing from nothing. Typically, in set theory you work in ZFC, where you have ordinals, and construct $V_k$ for each ordinal $k$ . Note that $V$ is not a set, but the entire universe (the one you are working in). I still think your question is mainly philosophical, your comment to Nik Weaver notwithstanding. After all, you ask: First of all, the Power Set operation is not absolute, that is it varies between models of ZFC. Of course, since ZFC proves that $V$ is the whole universe, different models of ZFC will have different $V$ . If ZFC is consistent, then it has a countable model $V$ . That should not be surprising; one can never 'pin down' the set-theoretic universe, not to say using $V$ . The same issue shows up with the natural numbers, as Asaf alluded to in a comment; second-order PA with full semantics does not 'pin them down' because our meta-system MS must always be computable, and hence even if MS has (proves existence of) a full semantics model of second-order PA, there are models of MS whose interpretation of the naturals are not isomorphic (if there are any models at all). In order to define the Universe of Sets we must begin with a concept of ordinals, but in order to define the ordinals we need to have a concept of the Universe of Sets! So my question is to ask: Is this definition circular? We cannot define anything , much less whatever is meant by a "universe of sets", without already working under some assumptions. It is not necessary that you work in ZFC, but what other alternative meta-system do you have in mind? Remember that to construct $V$ you need all those set-theoretic operations that you are using, plus the collection of ordinals, and any meta-system that supports all these is going to look very much like ZF or some extension of it. Boolos noted the same philosophical circularity in this paper (page 15) (which I rephrased to the language used here and emphasized some points): There is an extension of the stage theory from which the axioms of replacement could have been derived. We could have taken as axioms all instances of a principle which may be put, 'If each set is correlated with at least one stage (no matter how), then for any set $z$ there is a stage $s$ such that for each member $w$ of $z$ , $s$ is later than some stage with which $w$ is correlated'. This bounding or cofinality principle is an attractive further thought about the interrelation of sets and stages, but it does seem to us to be a further thought, and not one that can be said to have been meant in the rough description of the iterative conception. For that there are exactly $ω_1$ stages does not seem to be excluded by anything said in the rough description; it would seem that $V_{ω_1}$ (see below) is a model for any statement that can (fairly) be said to have been implied by the rough description, and not all of the axioms of replacement hold in $V_{ω_1}$ . (*) Thus the axioms of replacement do not seem to us to follow from the iterative conception. (*) Worse yet, $V_{δ_1}$ would also seem to be such a model. ( $δ_1$ is the first uncomputable ordinal.) 
To put it more cogently, if you take for granted the power-set operation as a primitive, and start with the empty-set, and also take for granted the ability to consolidate into a single operation the union of any iteratively generated sequence of sets, which may itself be used iteratively to generate more sets, then what you can generate appears to be entirely contained within $V_{δ_1}$ . And if you additionally allow taking union of arbitrary (countable) and potentially indescribable sequences, then what you can generate is still contained within $V_{ω_1}$ . The crux is that you cannot generate $V_{ω_1}$ without essentially having $ω_1$ . And this corresponds to two logical facts: that there is no countable sequence of ordinals before $ω_1$ whose union is $ω_1$ , and that $V_{ω_1}$ is a model of ZF with replacement restricted to countable sequences. More philosophically, if you envision the stages as being generated rather than pre-existing , then necessarily you cannot generate stage $ω_1$ until you have generated all the stages corresponding to countable ordinals. But there is no way to generate all countable stages without having a generation process that already has length at least $ω_1$ . And since $ω_1$ does not appear in any stage up to $V_{ω_1}$ , you have no choice but to assume the ability to 'run a generation process' of length $ω_1$ if you want to obtain $V_{ω_1}$ and further stages, which implies that the iterative conception cannot give ontological justification for the existence of $ω_1$ . Just to add, it is true that uncountable well-orderings do appear much earlier than $V_{ω_1}$ , but the very fact that $ω_1$ does not appear even at stage $ω_1$ (union of all prior stages) should be a warning that one should not consider all well-orderings of the same length to be on equal footing. In particular, to have a well-ordering as a binary relation on a set that makes it totally ordered with no strictly descending sequence is not the same as being able to iterate along it. Perhaps someone may find a non-circular way to justify ZFC philosophically, but the iterative conception seems to get us no further than countable replacement.
|
{
"source": [
"https://mathoverflow.net/questions/294119",
"https://mathoverflow.net",
"https://mathoverflow.net/users/120841/"
]
}
|
294,828 |
It was asked at the Bulletin of the American Mathematical Society Volume 64, Number 2, 1958 , as a Research Problem, if a Hurwitz polynomial with real coefficients (i.e. all of its zeros have negative real parts) can be divided into the arithmetic sum of two or three polynomials, each of which has positive coefficients and only nonpositive real roots. I would like to know if the following problem is known and how it can be solved: Can any polynomial with complex coefficients and degree $n$ be divided into the arithmetic sum of three complex polynomials, each of which has degree at most $n$ and only real roots? Any help would be appreciated.
|
To address the original question about the polynomials with complex coefficients. Given a polynomial $P\in\mathbb C[x]$ of degree $n$, write $P=Q+iR$ with $Q,R\in\mathbb R[x]$, and fix arbitrarily a polynomial $S\in\mathbb R[x]$ of degree $n$ with all roots real and pairwise distinct. As observed in Eremenko's answer, for any $\varepsilon\ne 0$ sufficiently small in absolute value, the polynomials $Q_\varepsilon:=S+\varepsilon Q$ and $R_\varepsilon:=S+\varepsilon R$ will also have all their roots real, and then
\begin{align*}
P &= Q+iR \\
&= \varepsilon^{-1}(Q_\varepsilon-S)+i\varepsilon^{-1}(R_\varepsilon-S) \\
&= \varepsilon^{-1}Q_\varepsilon+i\varepsilon^{-1}R_\varepsilon-(1+i)\varepsilon^{-1}S
\end{align*}
is the required representation of $P$ as a sum of three polynomials of degree at most $n$ with all their roots real.
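As a quick illustration (a minimal numerical sketch; the specific $P$, $S$ and $\varepsilon$ below are arbitrary choices, and "sufficiently small" is only checked numerically):
```python
# Numerical illustration: split a complex polynomial P into three polynomials
# of the same degree with only real roots, following the construction above.
import numpy as np

P = np.array([2 + 1j, 0, -1, 3j])        # sample P = (2+i)x^3 - x + 3i (highest degree first)
Q, R = P.real, P.imag                    # P = Q + iR with Q, R real
S = np.poly([-1.0, 0.0, 1.0])            # S = x^3 - x: degree 3, distinct real roots

eps = 1e-3                               # assumed "sufficiently small" for this P and S
Q_eps = S + eps * Q
R_eps = S + eps * R

summands = [Q_eps / eps, 1j * R_eps / eps, -(1 + 1j) * S / eps]

print(np.allclose(sum(summands), P))     # True: the three summands add up to P
for poly in summands:
    print(np.max(np.abs(np.roots(poly).imag)))   # ~0: each summand has only real roots
```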
|
{
"source": [
"https://mathoverflow.net/questions/294828",
"https://mathoverflow.net",
"https://mathoverflow.net/users/70464/"
]
}
|
294,919 |
Suppose I'm an aspiring mathematician-to-be who has started doing research. Although this is really what I love doing, I find that one disturbing point is that there's always the pressure of discovering something new. When I'm doing mathematical research, there's always the fear in the back of my mind that maybe I won't get new results. In the past, I could think freely about mathematics, without the pressure, but now that it's "my job", I have these problems. How do I handle this? Since this site is for mathematicians from the graduate level onwards, maybe somebody has a good suggestion. Note that, although this is a mathematics forum, I think this question is appropriate here, since it perfectly matches the description of the "soft question"-tag.
|
This is ancient history, and considering my age, I may have told this story here before. I started at Harvard graduate school in 1957, the same year that Hironaka arrived there to work with Zariski. He was already an accomplished mathematician, even if he didn’t yet have a PhD. Early that year, I must have said to him that I couldn’t imagine ever doing research, and he said, in essence, Oh, you learn about some subject, think about it in depth, and before you know it, you’re proving Theorems. I thought, This guy must be in Cloud Cuckoo Land, I’ll never do that. But of course, that’s exactly what happens.
|
{
"source": [
"https://mathoverflow.net/questions/294919",
"https://mathoverflow.net",
"https://mathoverflow.net/users/-1/"
]
}
|
295,114 |
I'd like to prove the following: $$ a + b + 4 \sqrt{1 + a^{2} + b^{2}} \leq 4 \sqrt{a^{2} + b^{2}} +
\sqrt{1+b^{2}} + \sqrt{1+a^{2}} + 2 $$ for all $a, b \in \mathbb{R}_{>0}$. Question: is there a way to prove this inequality? Motivation: In a paper, Borm et al. work out the theory behind stochastic cooperative game theory, which is a generalisation of the classical (deterministic) cooperative game theory developed in the 1950s. For my master's thesis, I am currently trying to apply their model to production prediction problems for renewable energy sources. Owners of renewable energy sources need to report to the network operator how much energy they think their wind mills or solar panels will produce in the next hour/day. Often, they need to pay a fine when the actual production levels deviate from their predictions. In my thesis, I assume that the normalized error distribution of the predictions is a normal distribution $N(0, \sigma^{2})$. Furthermore, I assume that the costs of the fine increase linearly with the absolute value of the error made (as is the case in Spain: see this article). So the fine/cost distribution is a so-called half-normal distribution $H(0, \sigma^{2})$. The owners of the renewable energy sources can cooperate to reduce the fine they need to pay. In one variant of cooperation, they can combine their predictions to minimise their total fine, after which they distribute the costs among all owners. In the three-player game, I assume that owner 1 has an error distribution that goes like $X_{1} \sim N(0, \sigma_{1}^{2})$. For owner 2 we have $X_{2} \sim N(0, \sigma_{2}^{2})$ and for owner 3 we have $X_{3} \sim N(0, \sigma_{3}^{2})$. Furthermore, I assume that $\sigma_{2} = a \sigma_{1} = a \sigma$, and $\sigma_{3} = b \sigma_{1} = b \sigma$. So for our cost game, the stochastic costs are distributed as follows: $$R(S) = \left\{
\begin{array}{lllllll}
|X_{1}| & \mbox{for } S = \{1\} \\
|X_{2}| & \mbox{for } S = \{2\} \\
|X_{3}| & \mbox{for } S = \{3\} \\
|X_{1} + X_{2}| & \mbox{for } S= \{1,2\} \\
|X_{1} + X_{3}| & \mbox{for } S = \{1,3\} \\
|X_{2} + X_{3}| & \mbox{for } S = \{2,3\} \\
|X_{1} + X_{2} + X_{3}| & \mbox{for } S= \{1,2,3\} = N
\end{array}
\right.$$ Keep in mind that for normally distributed random variables $X_{1}$ and $X_{2}$, we can compute the convolution: $$X_{1} + X_{2} \sim N(0, \sigma_{1}^{2} + \sigma_{2}^{2}) = N(0, \sigma^{2}(1+a^{2})).$$ This means $$|X_{1} + X_{2}| \sim H(0, \sigma^{2}(1+a^{2})).$$ Analogously, we can compute the convolutions for the other coalitions, too. Now, I assume that all players of the game have expectational preferences (see page 5 of the paper by Borm et al.). This means that for a player $i$, we have $X \succsim_{i} Y$ if and only if $E(X) \geq E(Y)$. Players compare the stochastic payoffs with their preferences and the so-called tracking function (my terminology, not theirs). If player $i$ receives the $p_{i}$-th part of the payoff (or costs, in my case) of the stochastic coalitional value $R(S)$, there exists a number $\alpha_{i}$ such that $p_{i} R(S) \sim_{i} \alpha_{i} R(T)$ for the set $T \supset S$. Notice that $\alpha_{i}$ depends on $S$, $T$ and $p_{i}$, so it is a function of these entities. This is the tracking function. In case all players have expectational preferences, we have $$\alpha_{i} (S, T, p) = p \frac{E(R(S))}{E(R(T))}$$ for $i=1,2,3$. The core of a stochastic cooperative game can be computed too. It consists of those efficient (which means that $\sum_{i} p_{i} = 1$) allocations $p$ such that $\sum_{i \in S} p_{i} / \alpha_{i} (S, N, 1) \geq 1$ for all coalitions $S \in 2^{N}$. For a cost game (my situation) the inequalities are flipped. I computed the core for the game I described above. It consists of those efficient allocations $p \in \mathbb{R}^{3}$ such that: $$p_{1} \leq \frac{1}{\sqrt{1+a^{2}+b^{2}}}, \quad p_{2} \leq \frac{a}{\sqrt{1+a^{2}+b^{2}}}, \quad p_{3} \leq \frac{b}{\sqrt{1+a^{2}+b^{2}}},\\ p_{1} + p_{2} \leq \sqrt{\frac{1+a^{2}}{1+a^{2}+b^{2}}}, \quad p_{1} + p_{3} \leq \sqrt{\frac{1+b^{2}}{1+a^{2}+b^{2}}}, \quad p_{2} + p_{3} \leq \sqrt{\frac{a^{2}+b^{2}}{1+a^{2}+b^{2}}}. $$ I then calculated the Shapley value for this game. The procedure for the computation of this value is explained on pages 9 and 10 of Borm et al.'s paper. It is as follows: \begin{align*}
\phi (\alpha) &= \frac{1}{n!} \sum_{\sigma \in \Pi(N)} m^{\sigma} (\alpha) \\
&= \frac{1}{3!} \big{(} m^{(1,2,3)} + m^{(1,3,2)} + m^{(2,1,3)} + m^{(2,3,1)} + m^{(3,1,2)} + m^{(3,2,1)} \big{)} \\
&= \frac{1}{6\sqrt{1+a^{2}+b^{2}}} \Big{(} 2 + \sqrt{1+a^{2}} + \sqrt{1+b^{2}} + 2 \sqrt{1+a^{2}+b^{2}} - 2 \sqrt{a^{2}+b^{2}} - a - b, \\
& \qquad \qquad \qquad \quad 2a + \sqrt{a^{2} + 1} + \sqrt{a^{2}+b^{2}} + 2 \sqrt{1+a^{2}+b^{2}} - 2 \sqrt{1+b^{2}} - 1 - b, \\
& \qquad \qquad \qquad \quad 2b + \sqrt{b^{2}+1} + \sqrt{b^{2}+a^{2}} + 2 \sqrt{1+a^{2}+b^{2}} - 2 \sqrt{1+a^{2}} - 1 - a \Big{)}
\end{align*} I am now trying to verify whether this Shapley value lies in the core of the game. In particular, I am trying to verify the sixth (and last) inequality of the core. So if we take the second and third coordinates of the Shapley value and add them together, then this sum mustn't exceed the value $\sqrt{\dfrac{a^{2}+b^{2}}{1+a^{2}+b^{2}}} $. With a little bit of basic algebra, you will arrive at the inequality at the top of this question. What I tried : I already tried applying the triangle inequality and the AM-GM inequality, but that did not seem to work. I also asked a similar question on MSE on the fourth inequality for the determination of whether the Shapley value is in the core. Two users on that page were kind enough to provide me with an answer to that question. However, I don't think I can generalise Alex Francisco's answer to this inequality, and I don't understand Piquito's answer fully yet. I wonder whether there are other ways to approach this problem.
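For what it's worth, a rough numerical check (the $(a,b)$ grid below is an arbitrary choice) suggests that the second and third Shapley coordinates do satisfy the sixth core inequality on a sampled range:
```python
# Rough numerical check that the 2nd+3rd Shapley coordinates stay below the
# sixth core bound sqrt((a^2+b^2)/(1+a^2+b^2)); the (a, b) grid is arbitrary.
import numpy as np

def shapley_23_sum(a, b):
    """Sum of the second and third coordinates of the Shapley value above."""
    s = np.sqrt(1 + a**2 + b**2)
    phi2 = (2*a + np.sqrt(a**2 + 1) + np.sqrt(a**2 + b**2)
            + 2*s - 2*np.sqrt(1 + b**2) - 1 - b) / (6*s)
    phi3 = (2*b + np.sqrt(b**2 + 1) + np.sqrt(b**2 + a**2)
            + 2*s - 2*np.sqrt(1 + a**2) - 1 - a) / (6*s)
    return phi2 + phi3

violations = 0
for a in np.linspace(0.01, 10, 200):
    for b in np.linspace(0.01, 10, 200):
        bound = np.sqrt((a**2 + b**2) / (1 + a**2 + b**2))
        if shapley_23_sum(a, b) > bound + 1e-12:
            violations += 1
print("violations:", violations)  # prints 0 on this grid
```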
|
This rewrites as $4f(a^2+b^2)\le f(a^2)+f(b^2)+2f(0)$ for the decreasing function $f(x)=\sqrt{x+1}-\sqrt{x}=1/(\sqrt{x+1}+\sqrt{x})$.
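In more detail (just expanding the one-line rewrite above): since $a,b>0$, the inequality rearranges to $$4\left(\sqrt{1+a^{2}+b^{2}}-\sqrt{a^{2}+b^{2}}\right)\leq\left(\sqrt{1+a^{2}}-\sqrt{a^{2}}\right)+\left(\sqrt{1+b^{2}}-\sqrt{b^{2}}\right)+2\left(\sqrt{1}-\sqrt{0}\right),$$ which is exactly $4f(a^{2}+b^{2})\leq f(a^{2})+f(b^{2})+2f(0)$. Because $f$ is decreasing and $a^{2}+b^{2}\geq a^{2},\,b^{2},\,0$, each of $f(a^{2})$, $f(b^{2})$ and $f(0)$ is at least $f(a^{2}+b^{2})$, and adding these four bounds gives the claim.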
|
{
"source": [
"https://mathoverflow.net/questions/295114",
"https://mathoverflow.net",
"https://mathoverflow.net/users/93724/"
]
}
|
295,180 |
The search for a neat Theory of Everything (ToE) which unifies the entire set of fundamental forces of the universe (as well as the rules which govern dark energy, dark matter and anti-matter realms) has been the subject of a long-standing all-out effort of physicists since the early 20th century. Some complicated theories such as Quantum Field Theory and M-Theory have been developed along these lines. However, the ultimate theory of everything still seems far out of reach and highly controversial. (A related debate: 1, 2.) The difficulty of the hopeless situation brought some physicists, such as Hawking, to the verge of total disappointment. Their general idea was that maybe such an ultimate theory is not only out of reach (with respect to our current knowledge of the universe) but also fundamentally non-existent. In his Gödel and the End of the Universe lecture Hawking stated: "Some people will be very disappointed if there is not an ultimate theory that can be formulated as a finite number of principles. I used to belong to that camp, but I have changed my mind. I'm now glad that our search for understanding will never come to an end, and that we will always have the challenge of new discovery. Without it, we would stagnate. Gödel’s theorem ensured there would always be a job for mathematicians. I think M-theory will do the same for physicists. I'm sure Dirac would have approved." A glimpse of Hawking's lecture makes it clear that the argument which he uses for refuting the possibility of achieving the Theory of Everything in his lecture is loosely (inspired by and) based on Gödel's incompleteness theorems in mathematics. Not to mention that Hawking is not the only person who has brought up such an argument against ToE using Gödel's theorems. For a fairly complete list see here and here. Also, some arguments of the same nature (using large cardinal axioms) can be found in this related MathOverflow post. Hawking's view also shares some points with Lucas-Penrose's argument against AI using Gödel's incompleteness theorems, indicating that the human mind is not a Turing machine (computer) and so the Computational Theory of Mind's hope of constructing an ultimate machine that has the same cognitive abilities as humans will eventually fail. There has been a lot of criticism of Lucas-Penrose's argument as well as of the presumptions of the Computational Theory of Mind. Here, I would like to ask about possible critical reviews of Hawking's relatively new idea. Question: Is there any critical review of Hawking's argument against the Theory of Everything in his "Gödel and the End of the Universe" lecture, illustrating whether it is a valid conclusion of Gödel's incompleteness theorem in theoretical physics or just yet another philosophical abuse of mathematical theorems out of context? Articles and lectures by researchers of various backgrounds, including mathematicians, physicists, and philosophers, are welcome.
|
Alon Amit: "There are some things that break my heart more thoroughly than reading nonsensical conclusions from Gödel's Theorems to the limitations of physics published by eminent scientists, but they are few." Johannes Koelman: "Stating that Gödel (or Turing, or gravity) implies the logical impossibility of a TOE, is the same as stating that because of the incompleteness theorem an axiomatic logic can not be constructed. This is simply wrong." Stanley Jaki: a more favorable review by the scientist who first argued, in 1966, that because any "theory of everything" will certainly be a consistent non-trivial mathematical theory, it must be incomplete.
|
{
"source": [
"https://mathoverflow.net/questions/295180",
"https://mathoverflow.net",
"https://mathoverflow.net/users/82843/"
]
}
|
295,875 |
Question Is there a connection between Abel and Galois theories of polynomial equations? Recall that for every polynomial $p(x)\in \mathbb{Q}[x]$ (say, without the free coefficient), Abel considered the monodromy group of the Riemann surface of the analytic function $w(z)$ defined by $p(w(z))+z=0$. There is an expression of $w(z)$ in radicals if the monodromy group is solvable (is it an "if and only if" statement?). On the other hand, for every polynomial $p(x)\in \mathbb{Q}[x]$, Galois considered the automorphism group (the Galois group) of the splitting field of $p$. The roots of $p(x)$ are expressed in radicals if and only if the Galois group is solvable. The question asks whether there are any known connections between the monodromy group of $w(z)$ and the Galois group of $p(x)+z$ (considered as a polynomial in $x$). I am pretty sure this is well known. I just cannot find it in the literature. Update 1. What I called "Abel's proof" of Abel's theorem is in fact Arnold's proof written by Alexeev (an English translation can be found here). Abel's proof was based on different ideas, see this text. So some instances of the word "Abel" above should be replaced by "Arnold". Added "Arnold" to the title. Update 2. I found a very nice book by Askold Khovanskii, "Topological Galois Theory", where Arnold's proof and its strong generalizations to other types of equations, including differential equations, are explained in detail. Highly recommended.
|
The action of the monodromy group of $w(z)$ on the fiber $p^{-1}(a)$ for a non-critical value $a$ of $p$ (that is, $|p^{-1}(a)|=\deg p$) is the same as the action of the Galois group of $p(x)+z$ over $\mathbb C(z)$ on the roots of $p(x)+z$ in some splitting field. One can see this by comparing each of these groups with the deck transformation group of the cover $X\to\mathbb P^1\mathbb C$, where $X$ is the normal hull of the cover $\mathbb P^1\mathbb C\to\mathbb P^1\mathbb C$, $x\mapsto p(x)$. (Remark: Though it doesn't make a difference, it is slightly more convenient to work with $p(x)-z$ instead of $p(x)+z$.)
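For a concrete sanity check of this correspondence in the simplest case, take $p(x)=x^n$ with $n\ge 2$ (an illustrative example, not needed for the argument above). The roots of $p(x)+z=x^n+z$ are the $n$-th roots of $-z$; over $\mathbb C(z)$ the splitting field is $\mathbb C(w)$ with $w^n=-z$, and $\operatorname{Gal}(\mathbb C(w)/\mathbb C(z))\cong\mathbb Z/n\mathbb Z$ permutes the roots $\zeta_n^k w$ cyclically. On the analytic side, the only finite branch point is $z=0$, and a small loop around it permutes the $n$ branches of $w(z)=(-z)^{1/n}$ by the same $n$-cycle, so the monodromy group is again cyclic of order $n$ with the same action on the fiber. Both groups are solvable, matching the fact that $w$ is literally a radical.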
|
{
"source": [
"https://mathoverflow.net/questions/295875",
"https://mathoverflow.net",
"https://mathoverflow.net/users/-1/"
]
}
|