\subsection{The $\mathfrak{so}(6)$ KV polynomial and the $\mathfrak{sl}(4)$ MOY polynomial} In this section, we prove that the $\mathfrak{so}(6)$ KV polynomial of a planar $4$-valent graph is equal to the $\mathfrak{sl}(4)$ MOY polynomial of ``mostly $2$-colored'' MOY graphs. \begin{figure}\label{mostly-2-colored-MOY-local-figure} \end{figure} \begin{definition}\label{def-mostly-2-colored-MOY} We call a MOY graph mostly $2$-colored if all of its edges are colored by $2$ except in local configurations of the two types given in Figure \ref{mostly-2-colored-MOY-local-figure}. Note that all vertices of a mostly $2$-colored MOY graph are contained in such local configurations. \end{definition} \begin{figure}\label{mostly-2-colored-MOY-to-4-valent-figure} \end{figure} The following is the main theorem of this subsection. \begin{theorem}\label{thm-mostly-2-colored-MOY-to-4-valent} Given a mostly $2$-colored MOY graph $\Gamma$, we remove the $4$-colored edge in every type one local configuration, shrink the square in every type two configuration to a vertex, remove color and orientation, and smooth out all the vertices of valence $2$. (See Figure \ref{mostly-2-colored-MOY-to-4-valent-figure}.) This gives an unoriented $4$-valent graph $G(\Gamma)$ embedded in $\mathbb{R}^2$. Then \begin{equation}\label{eq-mostly-2-colored-MOY-to-4-valent} \left\langle \Gamma \right\rangle_4 = P_6(G(\Gamma)). \end{equation} \end{theorem} Note that the $\mathfrak{so}(6)$ Jaeger formula expresses $P_6(G(\Gamma))$ as a state sum of $\mathfrak{sl}(3)$ MOY polynomials and the $\mathfrak{sl}(1)\times\mathfrak{sl}(3)\hookrightarrow \mathfrak{sl}(4)$ composition product expresses $\left\langle \Gamma \right\rangle_4$ as a state sum of $\mathfrak{sl}(3)$ MOY polynomials. We prove equation \eqref{eq-mostly-2-colored-MOY-to-4-valent} by showing that these two state sums are essentially the same. Several notions of rotation numbers are involved in these two formulas. We need Lemma \ref{lemma-rotation-numbers-oc-reverse} below to track the rotation numbers of MOY graphs. \begin{figure}\label{rotation-numbers-oc-reverse-index-fig} \end{figure} \begin{lemma}\label{lemma-rotation-numbers-oc-reverse} Let $\Gamma$ be a MOY graph and $\Delta$ a simple circuit of $\Gamma$. Denote by $\Gamma'$ the MOY graph obtained from $\Gamma$ by reversing the orientation and the color of edges (with respect to $N$) along $\Delta$. Recall that the rotation number of a MOY graph is defined in equation \eqref{eq-rot-gamma}. We view $\Delta$ as an uncolored oriented circle embedded in $\mathbb{R}^2$ and define $\mathrm{rot}\Delta$ to be the usual rotation number of this circle. Then \begin{equation}\label{eq-rotation-numbers-oc-reverse} \mathrm{rot} \Gamma' = \mathrm{rot} \Gamma -N \mathrm{rot} \Delta + \sum_v d(v \leadsto v'), \end{equation} where $v$ runs through all vertices of $\Delta$, and $d(v\leadsto v')$ is defined in Figure \ref{rotation-numbers-oc-reverse-index-fig}. \end{lemma} \begin{proof} We prove Lemma \ref{lemma-rotation-numbers-oc-reverse} using a localization of the rotation number similar to that used in the proof of Theorem \ref{thm-oc-reverse}. Cut each edge of $\Gamma$ at one point in its interior. This divides $\Gamma$ into a collection of neighborhoods of its vertices, each of which is a vertex with three adjacent half-edges. (See Figure \ref{fig-MOY-vertex-angles}, where $e$, $e_1$ and $e_2$ are the three half-edges.) 
For a vertex of $\Gamma$, if it is of the form $v$ in Figure \ref{fig-MOY-vertex-angles}, we denote by $\alpha$ the directed angle from $e_1$ to $e$ and by $\beta$ the directed angle from $e_2$ to $e$. We define \begin{equation}\label{gamma-rot-def-local-v} \mathrm{rot}(v) = \frac{m+n}{2\pi} \int_{e}\kappa ds +\frac{m}{2\pi}\left(\alpha + \int_{e_1}\kappa ds\right) + \frac{n}{2\pi}\left(\beta + \int_{e_2}\kappa ds\right), \end{equation} where $\kappa$ is the signed curvature of a plane curve. If the vertex is of the form $\hat{v}$ in Figure \ref{fig-MOY-vertex-angles}, we denote by $\hat{\alpha}$ the directed angle from $e$ to $e_1$ and by $\hat{\beta}$ the directed angle from $e$ to $e_2$. We define \begin{equation}\label{gamma-rot-def-local-v-prime} \mathrm{rot}(\hat{v}) = \frac{m+n}{2\pi}\int_{e}\kappa ds + \frac{m}{2\pi} \left(\hat{\alpha}+\int_{e_1}\kappa ds\right) + \frac{n}{2\pi}\left(\hat{\beta}+\int_{e_2}\kappa ds\right). \end{equation} By the Gauss-Bonnet Theorem, one can easily see that \begin{equation} \label{eq-gamma-rot-sum} \mathrm{rot}(\Gamma) = \sum_{v \in V(\Gamma)} \mathrm{rot}(v). \end{equation} \begin{figure}\label{delta-rot-local-fig} \end{figure} For a vertex $v$ of $\Delta$, denote by $e_1$ and $e_2$ the two half-edges incident at $v$ belonging to $\Delta$. Assume that $e_1$ points into $v$, $e_2$ points out of $v$, and the directed angle from $e_1$ to $e_2$ is $\theta$. (See Figure \ref{delta-rot-local-fig}.) Define \begin{equation}\label{eq-delta-rot-local} \mathrm{rot}_\Delta (v) = \frac{1}{2\pi}\left(\int_{e_1}\kappa ds +\theta + \int_{e_2}\kappa ds\right). \end{equation} By the Gauss-Bonnet Theorem, we know $\mathrm{rot} \Delta = \sum_v \mathrm{rot}_\Delta (v)$, where $v$ runs through all vertices of $\Delta$. For a vertex $v$ of $\Gamma$ contained in $\Delta$, denote by $v'$ the vertex of $\Gamma'$ corresponding to $v$. We claim \begin{equation}\label{eq-local-rot-change} \mathrm{rot}(v') = \mathrm{rot}(v)- N \mathrm{rot}_\Delta (v) + d(v \leadsto v'). \end{equation} Clearly, the lemma follows from \eqref{eq-local-rot-change}. To prove \eqref{eq-local-rot-change}, one needs to check that it is true for all four cases listed in Figure \ref{rotation-numbers-oc-reverse-index-fig}. Since the proofs in all four cases are very similar, we only check the first case here and leave the other three to the reader. In the first case, $v$ and $v'$ are depicted in Figure \ref{fig-MOY-vertex-change}. As before, denote by $\alpha$ the directed angle from $e_1$ to $e$, by $\beta$ the directed angle from $e_2$ to $e$ and by $\gamma$ the directed angle from $e_2'$ to $e_1'$. Then \begin{eqnarray*} \mathrm{rot}(v') & = & \frac{N-m}{2 \pi} \int_{e_1'}\kappa ds + \frac{N-m-n}{2\pi} \left(-\alpha + \int_{e'}\kappa ds\right) + \frac{n}{2\pi} \left(\gamma + \int_{e_2'}\kappa ds \right) \\ & = & \frac{m}{2\pi} \left(\alpha + \int_{e_1}\kappa ds \right) + \frac{n}{2\pi} \left(\beta + \int_{e_2}\kappa ds \right) + \frac{m+n}{2\pi} \left(\int_{e}\kappa ds\right) \\ && - \frac{N}{2\pi} \left( \int_{e_1}\kappa ds + \alpha + \int_{e}\kappa ds\right) + \frac{\alpha + \gamma -\beta}{2\pi} n \\ & = & \mathrm{rot}(v)- N \mathrm{rot}_\Delta (v) + \frac{n}{2}, \end{eqnarray*} where, in the last step, we used the fact that $\alpha + \gamma -\beta = \pi$. This proves \eqref{eq-local-rot-change} in the first case in Figure \ref{rotation-numbers-oc-reverse-index-fig}. \end{proof}
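As a quick consistency check of \eqref{eq-rotation-numbers-oc-reverse}, consider the degenerate situation where $\Delta$ is a circle component of $\Gamma$ colored by $m$ and containing no vertices. By the localization used in the proof (or directly from \eqref{eq-rot-gamma}), this component contributes $\frac{m}{2\pi}\oint\kappa ds = m\,\mathrm{rot}\Delta$ to $\mathrm{rot}\Gamma$. In $\Gamma'$ it becomes a circle colored by $N-m$ with the opposite orientation, contributing $-(N-m)\,\mathrm{rot}\Delta = m\,\mathrm{rot}\Delta - N\,\mathrm{rot}\Delta$. This is exactly what \eqref{eq-rotation-numbers-oc-reverse} predicts, since in this situation there are no correction terms $d(v\leadsto v')$.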
Now we are ready to prove Theorem \ref{thm-mostly-2-colored-MOY-to-4-valent}. \begin{proof}[Proof of Theorem \ref{thm-mostly-2-colored-MOY-to-4-valent}] By the composition product \eqref{eq-composition-product}, we know that \[ \left\langle \Gamma \right\rangle_4 = \sum_{\mathsf{f} \in \mathcal{L}(\Gamma)} q^{\sigma_{1,3}(\Gamma,\mathsf{f})} \cdot \left\langle \Gamma_{\mathsf{f}} \right\rangle_1 \cdot \left\langle \Gamma_{\bar{\mathsf{f}}} \right\rangle_3, \] where \begin{equation}\label{eq-composition-product-1+3-power-label} \sigma_{1,3}(\Gamma,\mathsf{f}) = \mathrm{rot}(\Gamma_{\bar{\mathsf{f}}}) - 3 \cdot \mathrm{rot}(\Gamma_{\mathsf{f}}) + \sum_{v\in V(\Gamma)} [v|\Gamma|\mathsf{f}]. \end{equation} Note that, in order for the product $\left\langle \Gamma_{\mathsf{f}} \right\rangle_1 \cdot \left\langle \Gamma_{\bar{\mathsf{f}}} \right\rangle_3$ to be non-zero, we must have $\mathsf{f}(e)=0,1$ and $0\leq\bar{\mathsf{f}}(e) \leq 3$ for all edges $e$ of $\Gamma$. Define \[ \mathcal{L}_{\neq0}(\Gamma) = \{\mathsf{f} \in \mathcal{L}(\Gamma)~|~ \mathsf{f}(e)=0,1,~ 0\leq\bar{\mathsf{f}}(e) \leq 3 ~\forall e \in E(\Gamma)\}. \] Then \begin{equation}\label{eq-composition-product-1+3} \left\langle \Gamma \right\rangle_4 = \sum_{\mathsf{f} \in \mathcal{L}_{\neq0}(\Gamma)} q^{\sigma_{1,3}(\Gamma,\mathsf{f})} \cdot \left\langle \Gamma_{\bar{\mathsf{f}}} \right\rangle_3, \end{equation} where we used the fact that $\left\langle \Gamma_{\mathsf{f}} \right\rangle_1=1$ for all $\mathsf{f} \in \mathcal{L}_{\neq0}(\Gamma)$. Denote by $E_2(\Gamma)$ the set of edges of $\Gamma$ colored by $2$ and by $E(G(\Gamma))$ the set of edges of $G(\Gamma)$. Then there is a surjective function $g:E_2(\Gamma) \rightarrow E(G(\Gamma))$ such that $g(e)$ is the edge of $G(\Gamma)$ ``containing'' $e$ for every $e\in E_2(\Gamma)$. Note that, for any $\mathsf{f} \in \mathcal{L}_{\neq0}(\Gamma)$, $\Gamma_{\mathsf{f}}$ is a collection of pairwise disjoint embedded circles colored by $1$ (after erasing edges colored by $0$). All possible intersections of $\Gamma_{\mathsf{f}}$ with type one and type two local configurations (defined in Figure \ref{mostly-2-colored-MOY-local-figure}) are described in Figures \ref{local-weights-type-1-fig} and \ref{local-weights-type-2-fig} in Appendix \ref{app-figures}, where edges belonging to $\Gamma_{\mathsf{f}}$ are traced out by \textcolor{BrickRed}{red} paths. For any $\mathsf{f} \in \mathcal{L}_{\neq0}(\Gamma)$, we define an edge orientation $\varrho_{\mathsf{f}}$ of $G(\Gamma)$ such that, for all $e \in E_2(\Gamma)$, \[ \varrho_{\mathsf{f}}(g(e)) = \begin{cases} \text{the orientation of } e & \text{if } \mathsf{f}(e) =1, \\ \text{the opposite of the orientation of } e & \text{if } \mathsf{f}(e) =0. \end{cases} \] It is easy to check that $\varrho_{\mathsf{f}}$ is a well-defined balanced edge orientation of $G(\Gamma)$ and that the mapping $\mathsf{f} \mapsto \varrho_{\mathsf{f}}$ is a surjection $\mathcal{L}_{\neq0}(\Gamma) \rightarrow \mathcal{O}(G(\Gamma))$. 
Given a labeling $\mathsf{f} \in \mathcal{L}_{\neq0}(\Gamma)$, we define a subgraph $\Delta_{\mathsf{f}}$ of $\Gamma$ (and therefore of $\Gamma_{\bar{\mathsf{f}}}$) such that \begin{itemize} \item if $e \in E_2(\Gamma)$, then $e$ is in $\Delta_{\mathsf{f}}$ if and only if $\mathsf{f}(e)=0$; \item edges of $\Delta_{\mathsf{f}}$ of other colors (which are all contained in local configurations of type one or two) are traced out by \textcolor{SkyBlue}{blue} paths in Figures \ref{local-weights-type-1-fig} and \ref{local-weights-type-2-fig} in Appendix \ref{app-figures}. \end{itemize} It is easy to see that $\Delta_{\mathsf{f}}$ is a union of pairwise disjoint simple circuits of $\Gamma$ (and therefore of $\Gamma_{\bar{\mathsf{f}}}$). Reversing the orientation and the color (with respect to $3$) of edges of $\Gamma_{\bar{\mathsf{f}}}$ along $\Delta_{\mathsf{f}}$, we get a MOY graph $\Gamma_{\bar{\mathsf{f}}}'$. By Theorem \ref{thm-oc-reverse}, we have \begin{equation}\label{eq-Gamma-bar-Gamma-bar-prime} \left\langle \Gamma_{\bar{\mathsf{f}}}\right\rangle_3 = \left\langle \Gamma_{\bar{\mathsf{f}}}' \right\rangle_3. \end{equation} In $\Gamma_{\bar{\mathsf{f}}}'$, delete all edges colored by $0$, contract all edges colored by $2$ and smooth out all vertices of valence $2$. This gives a partial resolution $G(\Gamma)_{\varrho_{\mathsf{f}},\varsigma_{\mathsf{f}}}$ of $G(\Gamma)_{\varrho_{\mathsf{f}}}$, where $\varsigma_{\mathsf{f}}$ resolves all vertices of $G(\Gamma)_{\varrho_{\mathsf{f}}}$ except those corresponding to the last case in Figure \ref{local-weights-type-2-fig}. In this last case, we need to further choose the resolution. Note that, by \cite[Lemma 2.4]{MOY}, we have \begin{equation}\label{eq-MOY-3-3} \left\langle \setlength{\unitlength}{1.75pt} \begin{picture}(20,10)(-10,7) \put(10,20){\vector(-1,-1){5}} \put(-10,0){\vector(1,1){5}} \put(-5,15){\vector(-1,1){5}} \put(5,5){\vector(1,-1){5}} \put(-5,15){\vector(0,-1){10}} \put(5,5){\vector(0,1){10}} \put(-5,5){\vector(1,0){10}} \put(5,15){\vector(-1,0){10}} \put(-4,9){\tiny{$1$}} \put(3,9){\tiny{$1$}} \put(0,6){\tiny{$2$}} \put(0,12){\tiny{$2$}} \put(8,15){\tiny{$1$}} \put(6,0){\tiny{$1$}} \put(-8,18){\tiny{$1$}} \put(-8,0){\tiny{$1$}} \end{picture} \right\rangle_3 = \left\langle \setlength{\unitlength}{1.75pt} \begin{picture}(20,10)(-10,7) \qbezier(-10,0)(0,10)(-10,20) \qbezier(10,0)(0,10)(10,20) \put(-10,20){\vector(-1,1){0}} \put(10,0){\vector(1,-1){0}} \put(-5,15){\tiny{$1$}} \put(3,3){\tiny{$1$}} \end{picture} \right\rangle_3 + \left\langle \setlength{\unitlength}{1.75pt} \begin{picture}(20,10)(-10,7) \qbezier(-10,0)(0,10)(10,0) \qbezier(-10,20)(0,10)(10,20) \put(-10,20){\vector(-1,1){0}} \put(10,0){\vector(1,-1){0}} \put(-5,12){\tiny{$1$}} \put(3,6){\tiny{$1$}} \end{picture} \right\rangle_3. \end{equation} For $\mathsf{f} \in \mathcal{L}_{\neq0}(\Gamma)$ and $\varsigma \in \Sigma(G(\Gamma)_{\varrho_{\mathsf{f}}})$, we say that $\varsigma$ is compatible with $\mathsf{f}$ if, on all the vertices that are resolved by $\varsigma_{\mathsf{f}}$, $\varsigma$ agrees with $\varsigma_{\mathsf{f}}$. Denote by $\Sigma_{\mathsf{f}}(G(\Gamma)_{\varrho_{\mathsf{f}}})$ the set of resolutions of $G(\Gamma)_{\varrho_{\mathsf{f}}}$ that are compatible with $\mathsf{f}$. 
Then the mapping $(\mathsf{f}, \varsigma) \mapsto (\varrho_{\mathsf{f}},\varsigma)$ gives a bijection \begin{equation}\label{bijection-labeling-orientation} \{(\mathsf{f}, \varsigma)~|~ \mathsf{f} \in \mathcal{L}_{\neq0}(\Gamma),~ \varsigma \in \Sigma_{\mathsf{f}}(G(\Gamma)_{\varrho_{\mathsf{f}}})\} \rightarrow \{(\varrho,\varsigma) ~|~ \varrho \in \mathcal{O}(G(\Gamma)),~ \varsigma \in \Sigma (G(\Gamma)_\varrho)\}. \end{equation} Let $C$ be a local configuration (of type one or type two in Figure \ref{mostly-2-colored-MOY-local-figure}) in $\Gamma$. For $\mathsf{f} \in \mathcal{L}_{\neq0}(\Gamma)$ and $\varsigma \in \Sigma_{\mathsf{f}}(G(\Gamma)_{\varrho_{\mathsf{f}}})$, we define two indices $t_{\mathsf{f},\varsigma}(C)$ and $r_{\mathsf{f},\varsigma}(C)$. The values of these two indices are given in Figures \ref{local-weights-type-1-fig} and \ref{local-weights-type-2-fig} in Appendix \ref{app-figures}. It is straightforward to check that \begin{eqnarray} \label{eq-rot-change-Gamma-prime-G} \mathrm{rot} (G(\Gamma)_{\varrho_{\mathsf{f}},\varsigma}) & = & \mathrm{rot} (\Gamma_{\bar{\mathsf{f}}}') + \sum_C t_{\mathsf{f},\varsigma}(C), \\ \label{eq-rot-change-rot-f-G} \mathrm{rot}(G(\Gamma)_{\varrho_{\mathsf{f}},\varsigma}) & = & \mathrm{rot} (\Gamma_{\mathsf{f}}) - \mathrm{rot} (\Delta_{\mathsf{f}}) + \sum_C r_{\mathsf{f},\varsigma}(C), \end{eqnarray} where $C$ runs through all local configurations of type one or two in $\Gamma$. Combining the above and Lemma \ref{lemma-rotation-numbers-oc-reverse}, we have that, for any $\mathsf{f} \in \mathcal{L}_{\neq0}(\Gamma)$ and $\varsigma \in \Sigma_{\mathsf{f}}(G(\Gamma)_{\varrho_{\mathsf{f}}})$, \begin{eqnarray} \label{eq-sigma-13-simplified-1} && \sigma_{1,3}(\Gamma,\mathsf{f}) \\ & = & \mathrm{rot}(\Gamma_{\bar{\mathsf{f}}}) - 3 \mathrm{rot}(\Gamma_{\mathsf{f}}) + \sum_{v\in V(\Gamma)} [v|\Gamma|\mathsf{f}] \nonumber \\ & = & \mathrm{rot}(\Gamma_{\bar{\mathsf{f}}}') - 3 (\mathrm{rot}(\Gamma_{\mathsf{f}})- \mathrm{rot} (\Delta_{\mathsf{f}})) + \sum_{v\in V(\Gamma)} ([v|\Gamma|\mathsf{f}] -d(v\leadsto v')) \nonumber \\ & = & -2\mathrm{rot} (G(\Gamma)_{\varrho_{\mathsf{f}},\varsigma}) + \sum_C (3r_{\mathsf{f},\varsigma}(C)- t_{\mathsf{f},\varsigma}(C)) + \sum_{v\in V(\Gamma)} ([v|\Gamma|\mathsf{f}]-d(v\leadsto v')) \nonumber \\ & = & -2\mathrm{rot} (G(\Gamma)_{\varrho_{\mathsf{f}},\varsigma}) +\sum_C \left( 3r_{\mathsf{f},\varsigma}(C)- t_{\mathsf{f},\varsigma}(C) + \sum_{v\in V(C)} ([v|\Gamma|\mathsf{f}]-d(v\leadsto v')) \right), \nonumber \end{eqnarray} where $C$ runs through all local configurations of type one or two in $\Gamma$, $V(C)$ is the set of vertices of $C$, and we use the convention that $d(v\leadsto v')=0$ if the vertex $v$ is unchanged in $\Gamma_{\bar{\mathsf{f}}} \leadsto \Gamma_{\bar{\mathsf{f}}}'$. The values of $r_{\mathsf{f},\varsigma}(C)$, $t_{\mathsf{f},\varsigma}(C)$, $\sum_{v\in V(C)} [v|\Gamma|\mathsf{f}]$ and $\sum_{v\in V(C)}d(v\leadsto v')$ are recorded in Figures \ref{local-weights-type-1-fig} and \ref{local-weights-type-2-fig} in Appendix \ref{app-figures}. 
One can verify case by case that \begin{eqnarray} \label{eq-sigma-13-simplified-2} && 3r_{\mathsf{f},\varsigma}(C)- t_{\mathsf{f},\varsigma}(C) + \sum_{v\in V(C)} ([v|\Gamma|\mathsf{f}]-d(v\leadsto v')) \\ & = & \begin{cases} 1 & \text{if } C \text{ is of type two and } \varsigma \text{ applies } L \\ & \text{to the corresponding vertex in } G(\Gamma)_{\varrho_{\mathsf{f}}}, \\ & \\ -1 & \text{if } C \text{ is of type two and } \varsigma \text{ applies } R \\ & \text{to the corresponding vertex in } G(\Gamma)_{\varrho_{\mathsf{f}}}, \\ & \\ 0 & \text{otherwise.} \end{cases} \nonumber \end{eqnarray} Equations \eqref{eq-sigma-13-simplified-1} and \eqref{eq-sigma-13-simplified-2} imply that, for any $\mathsf{f} \in \mathcal{L}_{\neq0}(\Gamma)$ and $\varsigma \in \Sigma_{\mathsf{f}}(G(\Gamma)_{\varrho_{\mathsf{f}}})$, \begin{equation}\label{CP-Jaeger-rot-weight-match} q^{\sigma_{1,3}(\Gamma,\mathsf{f})} = q^{-2\mathrm{rot} (G(\Gamma)_{\varrho_{\mathsf{f}},\varsigma})} \cdot [G(\Gamma)_{\varrho_{\mathsf{f}}},\varsigma]. \end{equation}
In view of bijection \eqref{bijection-labeling-orientation}, it follows from \eqref{eq-MOY-HOMFLY}, \eqref{eq-composition-product-1+3}, \eqref{eq-Gamma-bar-Gamma-bar-prime}, \eqref{eq-MOY-3-3} and \eqref{CP-Jaeger-rot-weight-match} that \begin{eqnarray} \label{Jaeger-CP-MOY-4-2} \left\langle \Gamma \right\rangle_4 & = & \sum_{\mathsf{f} \in \mathcal{L}_{\neq0}(\Gamma)} \sum_{\varsigma \in \Sigma_{\mathsf{f}}(G(\Gamma)_{\varrho_{\mathsf{f}}})} q^{-2\mathrm{rot} (G(\Gamma)_{\varrho_{\mathsf{f}},\varsigma})} \cdot [G(\Gamma)_{\varrho_{\mathsf{f}}},\varsigma] \cdot R_3( G(\Gamma)_{\varrho_{\mathsf{f}},\varsigma}) \\ &= & \sum_{\varrho \in \mathcal{O}(G(\Gamma))} \sum_{\varsigma \in \Sigma(G(\Gamma)_{\varrho})} q^{-2\mathrm{rot} (G(\Gamma)_{\varrho,\varsigma})} \cdot [G(\Gamma)_{\varrho},\varsigma] \cdot R_3( G(\Gamma)_{\varrho,\varsigma}). \nonumber \end{eqnarray} Comparing the right hand side of \eqref{Jaeger-CP-MOY-4-2} to the Jaeger Formula \eqref{eq-Jaeger-formula-N-graph} in Theorem \ref{thm-Jaeger-formula-graph}, we get $\left\langle \Gamma \right\rangle_4 = P_6(G(\Gamma))$. \end{proof}
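As a simple illustration of Theorem \ref{thm-mostly-2-colored-MOY-to-4-valent}, let $\Gamma$ be a single $2$-colored circle with no vertices. Then $G(\Gamma)$ is this circle viewed as a crossingless unknot diagram, and, with the usual normalization of the MOY polynomial in which an $m$-colored circle evaluates to the quantum binomial coefficient, equation \eqref{eq-mostly-2-colored-MOY-to-4-valent} reads \[ P_6(G(\Gamma)) = \left\langle \Gamma \right\rangle_4 = q^{4}+q^{2}+2+q^{-2}+q^{-4}, \] the quantum dimension of $\wedge^2\mathbb{C}^4\cong\mathbb{C}^6$, the vector representation of $\mathfrak{so}(6)\cong\mathfrak{sl}(4)$.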
\subsection{An explicit $\mathfrak{so}(6)$ Kauffman homology} Webster \cite{Webster1,Webster2} has categorified, for any simple complex Lie algebra $\mathfrak{g}$, the quantum $\mathfrak{g}$ invariant for links colored by finite-dimensional representations of $\mathfrak{g}$. But his categorification is very abstract. For applications in knot theory, it would help to have categorifications that are concrete and explicit. For quantum $\mathfrak{sl}(N)$ link invariants, examples of such categorifications can be found in \cite{K1,KR1,Wu-color}. We know much less about explicit categorifications of quantum $\mathfrak{so}(N)$ link invariants. Khovanov and Rozansky \cite{KR3} proposed a categorification of the $\mathfrak{so}(2N)$ Kauffman polynomial. But its invariance under Reidemeister move (III) is still open. They did, however, point out that the $\mathfrak{so}(4)$ version of their homology is isomorphic to the tensor square of the Khovanov homology \cite{K1} and is therefore a link invariant. More recently, Cooper, Hogancamp and Krushkal \cite{Cooper-Hogancamp-Krushkal} gave an explicit categorification of the $\mathfrak{so}(3)$ Kauffman polynomial. Theorem \ref{thm-mostly-2-colored-MOY-to-4-valent} allows us to give an explicit categorification of the $\mathfrak{so}(6)$ Kauffman polynomial. More precisely, we have the following theorem. \begin{theorem}\label{thm-2-4-MOY-6-Kauffman} Let $L$ be an unoriented framed link in $S^3$. Fix an orientation $\varrho$ of $L$, color all components of $L$ by $2$ and denote the resulting colored oriented framed link by $L_\varrho^{(2)}$. Then \begin{equation}\label{eq-2-4-MOY-6-Kauffman} \widetilde{R}_{4}(L_\varrho^{(2)}) = (-1)^m P_6(\overline{L}), \end{equation} where the polynomial $\widetilde{R}_{4}$ is defined in Definition \ref{def-2N-homology-renormalized}, $m$ is the number of crossings in $L$ (only its residue modulo $2$ matters), and $\overline{L}$ is the mirror image of $L$, that is, $L$ with the upper and lower branches switched at every crossing. Consequently, the renormalized $\mathfrak{sl}(4)$ homology $\widetilde{H}_{4}(L_\varrho^{(2)})$ categorifies $(-1)^m P_6(\overline{L})$. \end{theorem} \begin{figure} \caption{MOY resolutions} \label{MOY-resolutions-figure} \end{figure} \begin{figure} \caption{KV resolutions} \label{KV-resolutions-figure} \end{figure} \begin{proof} Replace every crossing of $L_\varrho^{(2)}$ by one of the three local configurations in Figure \ref{MOY-resolutions-figure}. We call the result a MOY resolution of $L_\varrho^{(2)}$ and denote by $\mathcal{MOY}(L_\varrho^{(2)})$ the set of MOY resolutions of $L_\varrho^{(2)}$. Replace every crossing of $L$ by one of the three local configurations in Figure \ref{KV-resolutions-figure}. We call the result a KV resolution of $L$ and denote by $\mathcal{KV}(L)$ the set of KV resolutions of $L$. Note that the mapping $\Gamma \mapsto G(\Gamma)$ defined in Theorem \ref{thm-mostly-2-colored-MOY-to-4-valent} gives a bijection $\mathcal{MOY}(L_\varrho^{(2)}) \rightarrow \mathcal{KV}(L)$. 
Compare the skein relations \begin{eqnarray*} \widetilde{R}_{4} (\setlength{\unitlength}{2pt} \begin{picture}(20,20)(-10,7) \put(-10,0){\vector(1,1){20}} \put(10,0){\line(-1,1){8}} \put(-2,12){\vector(-1,1){8}} \put(7,15){\tiny{$2$}} \put(-9,15){\tiny{$2$}} \end{picture}) & = & -q^{-1} \widetilde{R}_{4} (\setlength{\unitlength}{2pt} \begin{picture}(20,20)(-10,7) \qbezier(-10,0)(0,10)(-10,20) \qbezier(10,0)(0,10)(10,20) \put(-10,20){\vector(-1,1){0}} \put(10,20){\vector(1,1){0}} \put(4,15){\tiny{$2$}} \put(-6,15){\tiny{$2$}} \end{picture}) + \widetilde{R}_{4} (\setlength{\unitlength}{2pt} \begin{picture}(20,20)(-10,7) \put(5,15){\vector(1,1){5}} \put(-10,0){\vector(1,1){5}} \put(-5,15){\vector(-1,1){5}} \put(10,0){\vector(-1,1){5}} \put(-5,5){\vector(0,1){10}} \put(5,5){\vector(0,1){10}} \put(5,5){\vector(-1,0){10}} \put(-5,15){\vector(1,0){10}} \put(-4,9){\tiny{$3$}} \put(3,9){\tiny{$1$}} \put(0,6){\tiny{$1$}} \put(0,12){\tiny{$1$}} \put(6,18){\tiny{$2$}} \put(6,0){\tiny{$2$}} \put(-8,18){\tiny{$2$}} \put(-8,0){\tiny{$2$}} \end{picture}) - q \widetilde{R}_{4} (\setlength{\unitlength}{2pt} \begin{picture}(20,20)(-10,7) \put(0,15){\vector(2,1){10}} \put(-10,0){\vector(2,1){10}} \put(0,15){\vector(-2,1){10}} \put(10,0){\vector(-2,1){10}} \put(0,5){\vector(0,1){10}} \put(1.5,9){\tiny{$4$}} \put(5,15){\tiny{$2$}} \put(5,4){\tiny{$2$}} \put(-6,15){\tiny{$2$}} \put(-6,4){\tiny{$2$}} \end{picture}), \\ \widetilde{R}_{4} (\setlength{\unitlength}{2pt} \begin{picture}(20,20)(-10,7) \put(-10,0){\vector(1,1){20}} \put(2,8){\vector(1,-1){8}} \put(-2,12){\line(-1,1){8}} \put(7,15){\tiny{$2$}} \put(-9,15){\tiny{$2$}} \end{picture}) & = & -q^{-1} \widetilde{R}_{4} (\setlength{\unitlength}{2pt} \begin{picture}(20,20)(-10,7) \put(-10,0){\vector(1,2){5}} \put(-10,20){\vector(1,-2){5}} \put(-5,10){\vector(1,0){10}} \put(5,10){\vector(1,2){5}} \put(5,10){\vector(1,-2){5}} \put(0,11){\tiny{$4$}} \put(5,15){\tiny{$2$}} \put(5,4){\tiny{$2$}} \put(-6,15){\tiny{$2$}} \put(-6,4){\tiny{$2$}} \end{picture}) + \widetilde{R}_{4} (\setlength{\unitlength}{2pt} \begin{picture}(20,20)(-10,7) \put(5,15){\vector(1,1){5}} \put(-10,0){\vector(1,1){5}} \put(-10,20){\vector(1,-1){5}} \put(5,5){\vector(1,-1){5}} \put(-5,5){\vector(0,1){10}} \put(5,15){\vector(0,-1){10}} \put(-5,5){\vector(1,0){10}} \put(-5,15){\vector(1,0){10}} \put(-4,9){\tiny{$1$}} \put(3,9){\tiny{$1$}} \put(0,6){\tiny{$1$}} \put(0,12){\tiny{$3$}} \put(6,18){\tiny{$2$}} \put(6,0){\tiny{$2$}} \put(-8,18){\tiny{$2$}} \put(-8,0){\tiny{$2$}} \end{picture}) - q \widetilde{R}_{4} (\setlength{\unitlength}{2pt} \begin{picture}(20,20)(-10,7) \qbezier(-10,0)(0,10)(10,0) \qbezier(-10,20)(0,10)(10,20) \put(10,20){\vector(1,1){0}} \put(10,0){\vector(1,-1){0}} \put(4,12){\tiny{$2$}} \put(4,5){\tiny{$2$}} \end{picture}), \\ P_6 (\setlength{\unitlength}{2pt} \begin{picture}(20,20)(-10,7) \put(10,0){\line(-1,1){20}} \put(-10,0){\line(1,1){8}} \put(2,12){\line(1,1){8}} \end{picture}) & = & q^{-1}P_6(\setlength{\unitlength}{2pt} \begin{picture}(20,20)(-10,7) \qbezier(-10,0)(0,10)(-10,20) \qbezier(10,0)(0,10)(10,20) \end{picture}) - P_6(\setlength{\unitlength}{2pt} \begin{picture}(20,20)(-10,7) \put(0,10){\line(1,1){10}} \put(-10,0){\line(1,1){10}} \put(0,10){\line(-1,1){10}} \put(10,0){\line(-1,1){10}} 
\end{picture}) + qP_6(\setlength{\unitlength}{2pt} \begin{picture}(20,20)(-10,7) \qbezier(-10,0)(0,10)(10,0) \qbezier(-10,20)(0,10)(10,20) \end{picture}). \end{eqnarray*} \noindent Under the bijection $\Gamma\mapsto G(\Gamma)$, the three MOY resolutions of a crossing of $L_\varrho^{(2)}$ match the three KV resolutions of the corresponding crossing of $\overline{L}$ term by term, and the coefficients in the two sets of skein relations differ by an overall sign at each crossing, which accounts for the factor $(-1)^m$. It is then easy to see that equation \eqref{eq-2-4-MOY-6-Kauffman} follows from equation \eqref{eq-mostly-2-colored-MOY-to-4-valent} in Theorem \ref{thm-mostly-2-colored-MOY-to-4-valent}. \end{proof} \begin{question} Is $\widetilde{H}_{4}(L_\varrho^{(2)})$ isomorphic to the $\mathfrak{so}(6)$ version of the homology defined in \cite{KR3}? \end{question} \appendix
\section{Figures Used in the Proof of Theorem \ref{thm-mostly-2-colored-MOY-to-4-valent}}\label{app-figures} In Figures \ref{local-weights-type-1-fig} and \ref{local-weights-type-2-fig}, edges along \textcolor{BrickRed}{red paths} belong to $\Gamma_{\mathsf{f}}$ and edges along \textcolor{SkyBlue}{blue paths} belong to $\Delta_{\mathsf{f}}$. \begin{figure}\label{local-weights-type-1-fig} \end{figure} \begin{figure}\label{local-weights-type-2-fig} \end{figure} \end{document}
\begin{document} \title{Hochschild homology and cohomology of generalized Weyl algebras} \tableofcontents \footnotetext[1] {Dto. de Matem\'atica, Facultad de Cs. Exactas y Naturales. Universidad de Buenos Aires. Ciudad Universitaria Pab I. 1428, Buenos Aires - Argentina. e-mail: \texttt{[email protected]}, \texttt{[email protected]}, \texttt{[email protected]}\\ Research partially supported by UBACYT TW69 and CONICET.} \footnotetext[2] {Research member of CONICET (Argentina).} \section*{Introduction} The relevance of algebras such as the Weyl algebra $A_n({\mathbb{C}})$, the enveloping algebra ${\cal U}(\mathfrak{sl}_2)$ and its primitive quotients $B_{\lambda}$ and other algebras related to algebras of differential operators is already well-known. Recently, several articles in which their Hochschild homology and cohomology play an important role have appeared (see for example \cite{afls}, \cite{al1}, \cite{al2}, \cite{gui-eti}, \cite{odile}, \cite{mariano}, \cite{marianococprim}). Both the results obtained in \cite{afls} and in \cite{marianococprim} seem to depend strongly on intrinsic properties of $A_1({\mathbb{C}})$ and ${\cal U}(\mathfrak{sl}_2)$. However, this is not entirely the case. In this article we consider a class of algebras, called generalized Weyl algebras (GWA for short), defined by V.~Bavula in \cite{Bav} and studied by him and his collaborators in a series of papers (see for example \cite{Bav}, \cite{BavJor}, \cite{BavLen}) from the point of view of ring theory. Our aim is to compute the Hochschild homology and cohomology groups of these algebras and to study whether there is a duality between these groups. Examples of GWA are, as we said before, the $n$-th Weyl algebras, ${\cal U}(\mathfrak{sl}_2)$, primitive quotients of ${\cal U}(\mathfrak{sl}_2)$, and also the subalgebras of invariants of these algebras under the action of finite cyclic subgroups of automorphisms. As a consequence we recover in a simple way the results of \cite{afls} and we also complete results of \cite{odile}, \cite{michelinkassel} and of \cite{marianococprim}, giving at the same time a unified method for GWA. The article is organized as follows: In section \ref{sect:GWA} we recall from \cite{Bav} the definition of generalized Weyl algebras and state the main theorems. In section \ref{sect:res} we describe the resolution used afterwards in order to compute the Hochschild homology and cohomology groups. We prove a ``reduction'' result (Proposition \ref{prop:reduction}) and finally we prove the main theorem for homology using a spectral sequence argument. Section \ref{sect:coho} is devoted to the computation of Hochschild cohomology of GWAs. As a consequence of the results we obtain, we notice that the hypotheses of Theorem 1 of \cite{VdB} are not sufficient to ensure duality between Hochschild homology and cohomology. We state here hypotheses under which duality holds. In section \ref{sect:inv} we consider subalgebras of invariants of the previous ones under diagonalizable cyclic actions of finite order. We first show that these subalgebras are also GWA and we state the main theorem concerning subalgebras of invariants. Finally, in section \ref{sect:apps}, we begin by describing some applications of the above results. The first application is to specialize the results to the usual Weyl algebra. Secondly, we consider the primitive quotients of ${\cal U}(\mathfrak{sl}_2)$ and, using the Cartan involution $\Omega$, we answer a question of Bavula (\cite{BavJor}, Remark 3.30) and finish the proof of the main theorem. 
The formula for the dimension of $H\!H_*(A^G)$ explains, in particular, the computations made by O.~Fleury for $H\!H_0(B_{\lambda}^G)$. We will work over a field $k$ of characteristic zero and all algebras will be $k$-algebras. Given a $k$-algebra $A$, $\Aut_k(A)$ will always denote the group of $k$-algebra automorphisms of $A$. \section{Generalized Weyl Algebras}\label{sect:GWA} We recall the definition of generalized Weyl algebras given by Bavula in \cite{Bav}. Let $R$ be an algebra, fix a central element $a\in{\cal Z}(R)$ and $\sigma\in \Aut_k(R)$. The generalized Weyl algebra $A=A(R,a,\sigma)$ is the $k$-algebra generated by $R$ and two new free variables $x$ and $y$ subject to the relations: \begin{alignat*}{2} yx&=a &\qquad&xy=\sigma(a) \\ \intertext{and} xr&=\sigma(r)x &&ry=y\sigma(r) \end{alignat*} for all $r\in R$. \noindent\textbf{Examples:} \begin{enumerate} \item If $R=k[h]$, $a=h$ and $\sigma\in \Aut_k(k[h])$ is the unique automorphism determined by $\sigma(h)=h-1$, then $A(k[h],a,\sigma)\cong A_1(k)$, the usual Weyl algebra, generated by $x$ and $y$ subject to the relation $[x,y]=1$. \item Let $R=k[h,c]$, $\sigma(h)=h-1$ and $\sigma(c)=c$, and define $a:=c-h(h+1)$. Then $A(k[h,c],\sigma,a)\cong {\cal U}(\mathfrak{sl}_2)$. Under the obvious isomorphism (choosing $x$, $y$ and $h$ as the standard generators of $\mathfrak{sl}_2$) the image of the element $c$ corresponds to the Casimir element. \item Given $\lambda \in k$, the maximal primitive quotients of ${\cal U}(\mathfrak{sl}_2)$ are the algebras $B_{\lambda}:={\cal U}(\mathfrak{sl}_2)/\langle c-\lambda\rangle$, cf.~\cite{Dixmier}. They can also be obtained as generalized Weyl algebras because $B_{\lambda}\cong A(k[h],\sigma,a=\lambda - h(h+1))$. \end{enumerate} We will focus on the family of examples $A=A(k[h],a,\sigma)$ with $a=\sum_{i=0}^na_ih^i\in k[h]$ a non-constant polynomial, and the automorphism $\sigma$ defined by $\sigma(h)=h-h_0$, with $h_0\in k\setminus\{0\}$. There is a filtration on $A$ which assigns to the generators $x$ and $y$ degree $n$ and to $h$ degree $2$; the associated graded algebra is, with an obvious notation, $k[x,y,h]/(yx-a_nh^n)$. This is the coordinate ring of a Klein surface. We remark that this is a complete intersection, hence it is a Gorenstein algebra. There is also a graduation on $A$, which we will refer to as weight, such that $\deg x=1$, $\deg y=-1$ and $\deg h=0$. For polynomials $a, b\in k[h]$, we will denote by $\deg a$ the degree of $a$, by $a'=\frac{\partial a}{\partial h}$ the formal derivative of $a$, and by $(a;b)$ the greatest common divisor of $a$ and $b$. Our main results are the following theorems, whose proofs will be given in the next sections. \begin{teo}\label{thm:hom} Let $a\in k[h]$ be a non-constant polynomial, $\sigma\in \Aut_k(k[h])$ defined by $\sigma(h)=h-h_0$ with $0\neq h_0\in k$. Consider $A=A(k[h],a,\sigma)$ and $n=\deg a$, $d=\deg(a;a')$. \begin{itemize} \item If $(a;a')=1$ (i.e. $d=0$), then $\dim_kH\!H_0(A)=n-1$, $\dim_kH\!H_2(A)=1$, and $H\!H_i(A)=0$ for $i\neq 0,2$. \item If $d\geq 1$ then $\dim_kH\!H_0(A)=n-1$, $\dim_kH\!H_1(A)=d-1$, and $\dim_kH\!H_i(A)=d$ for $i\geq 2$. \end{itemize} \end{teo} \begin{teo}\label{thm:coho} Let $a\in k[h]$ be a non-constant polynomial, $\sigma\in \Aut_k(k[h])$ defined by $\sigma(h)=h-h_0$ with $0\neq h_0\in k$. Consider $A=A(k[h],a,\sigma)$ and $n=\deg a$, $d=\deg(a;a')$. \begin{itemize} \item If $(a;a')=1$ (i.e. $d=0$), then $\dim_kH\!H^0(A)=1$, $\dim_kH\!H^2(A)=n-1$, and $H\!H^i(A)=0$ for $i\neq 0,2$. 
\item If $d\geq 1$ then $\dim_kH\!H^0(A)=1$, $\dim_kH\!H^1(A)=0$, $\dim_kH\!H^2(A)=n-1$, and $\dim_kH\!H^i(A)=d$ for $i\geq 3$. \end{itemize} \end{teo}
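For instance, take $a=h^2$, so that $n=2$, $a'=2h$ and $d=\deg(a;a')=1$. The theorems then give $\dim_kH\!H_0(A)=1$, $H\!H_1(A)=0$ and $\dim_kH\!H_i(A)=1$ for all $i\geq 2$, while $\dim_kH\!H^0(A)=1$, $H\!H^1(A)=0$, $\dim_kH\!H^2(A)=1$ and $\dim_kH\!H^i(A)=1$ for all $i\geq 3$. In particular, both homology and cohomology are non-zero in every degree $\geq 2$; we come back to this phenomenon at the end of section \ref{sect:coho}.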
\section{A resolution for $A$ and proof of the first theorem}\label{sect:res} In this section we will construct a complex of free $A^e$-modules and we will prove, using an appropriate filtration, that this complex is actually a resolution of $A$. The construction of the resolution is performed in two steps. First we consider an algebra $B$ above $A$ which has ``one relation less'' than $A$. Then we use the Koszul resolution of $B$ and obtain a resolution of $A$ mimicking the construction of the resolution for the coordinate ring of an affine hypersurface done in \cite{burg-vigue}. Let $V$ be the $k$-vector space with basis $\{e_x, e_y, e_h\}$ and consider the following complex of free $A^e$-modules: \begin{equation}\label{dag} 0\to A\ot \Lambda^3V\ot A \to A\ot \Lambda^2V\ot A \to A\ot V\ot A \to A\ot A \to 0 \tag{\dag} \end{equation} In order to define the differential, we consider the elements $\lambda_k\in k$ such that \[ \sigma(a)-a=\sum_{k=0}^{n-1}\lambda_kh^k \] and let \[ e_{[x,y]}=\sum_{k,i}\lambda_kh^ie_hh^{k-i-1}, \: e_{[x,h]}=-e_x, \: e_{[y,h]}=e_y \in A\ot V\ot A. \] For simplicity, in these formulas we have written for example $xe_y$ instead of $x\ot e_y\ot 1\in A\ot V\ot A$ and similarly in the other degrees. The differential in \eqref{dag} is formally, with these notations, the Chevalley-Eilenberg differential; for example, \[ d(\alpha e_x\wedge e_y \beta) = \alpha x e_y \beta - \alpha e_y x \beta - \alpha y e_x \beta + \alpha e_x y \beta - \alpha e_{[x,y]} \beta. \] \begin{lem} The homology of the complex \eqref{dag} is isomorphic to $A$ in degree $0$ and $1$, and zero elsewhere. \end{lem} \begin{proof} We consider the $k$-algebra $B$ freely generated by $x,y,h$ modulo relations \[ xh=\sigma(h)x, \qquad hy=y\sigma(h), \qquad xy-yx=\sigma(a)-a. \] Let $f:=yx-a\in B$; observe that $f$ is central in $B$. This algebra $B$ has been studied by S.~Smith in \cite{smith}. Our interest in it comes from the fact that $A$ is the quotient of $B$ by the two sided ideal generated by $f$. In particular, $B$ has a filtration induced by the filtration on $A$, and it is clear that the associated graded algebra is simply $k[x,y,h]$. We claim that a complex similar to \eqref{dag} but with $A$ replaced throughout by $B$ is a resolution of $B$ by free $B^e$-modules. Indeed, the filtration on $B$ extends to a filtration on this complex, and the associated graded object is acyclic, because it coincides with the usual Koszul resolution of the polynomial algebra $k[x,y,h]$ as a bimodule over itself. The original complex can be recovered by tensoring over $B$ with $A$ on the right and on the left, or, equivalently, by tensoring over $B^e$ with $A^e$. As a result, the homology of the resulting complex computes $\Tor_*^{B^e}(B,A^e)\cong \Tor_*^B(A,A)$ (see for example \textsc{IX}.\S4.4 of \cite{CE}). Consider now the free resolution of $A$ as a left $B$-module \[ 0\to B\to B\to A\to 0 \] with the map $B\to A$ being the natural projection and the other one multiplication by $f$. This can be used to compute $\Tor^B_*(A,A)$, and the proof of the lemma is finished. 
\end{proof} In order to kill the homology of the complex \eqref{dag}, we consider a resolution of the following type: \begin{equation} \xymatrix@C-10pt@-10pt{ &&0\ar[r]& A\ot \Lambda^3V\ot A \ar[r]& A\ot \Lambda^2V\ot A \ar[r] & A\ot V\ot A \ar[r]& A\ot A \ar[r]& 0\\ &0\ar[r]& A\ot \Lambda^3V\ot A \ar[u]\ar[r]& A\ot \Lambda^2V\ot A\ar[u] \ar[r]& A\ot V\ot A \ar[u]\ar[r]& A\ot A \ar[u]\ar[r]& 0\ar[u]&\\ 0\ar[r]& A\ot \Lambda^3V\ot A \ar[u]\ar[r]& A\ot \Lambda^2V\ot A\ar[u] \ar[r]& A\ot V\ot A \ar[u]\ar[r]& A\ot A \ar[u]\ar[r]& 0\ar[u]&&\\ &\ar[u] &\ar[u]&\ar[u]&\ar[u] && } \tag{\ddag}\label{ddag} \end{equation} The horizontal differentials are the same as before, and the vertical ones--denoted ``$.df$''--are defined as follows: \begin{align*} .df:A\ot A &\to A\ot V\ot A\\ 1\ot 1&\mapsto y e_x+e_y x-\sum_{i,k}a_kh^i e_hh^{k-i-1}\\ \end{align*} \begin{align*} .df:A\ot V\ot A &\to A\ot \Lambda^2V\ot A\\ e_x&\mapsto -e_x\wedge e_y x +\sum_{i,k}a_k\sigma(h^i) e_x\wedge e_hh^{k-i-1}\\ e_y&\mapsto -y e_y\wedge e_x +\sum_{i,k}a_kh^i e_y\wedge e_h\sigma(h^{k-i-1})\\ e_h&\mapsto -y e_h\wedge e_x- e_h\wedge e_y x\\ \end{align*} \begin{align*} .df:A\ot \Lambda^2V\ot A &\to A\ot \Lambda^3V\ot A\\ e_y\wedge e_h&\mapsto y e_y\wedge e_h\wedge e_x \\ e_x\wedge e_h&\mapsto e_x\wedge e_h\wedge e_y x\\ e_x\wedge e_y&\mapsto -\sum_{i,k}a_k\sigma(h^i) e_x\wedge e_y\wedge e_h \sigma(h^{k-i-1}) \end{align*} \begin{prop} The total complex associated to \eqref{ddag} is a resolution of $A$ as $A^e$-module. \end{prop} \begin{proof} That it is a double complex follows from a straightforward computation. We consider again the filtration on $A$ and the filtration induced by it on \eqref{ddag}. All maps respect it, so it will suffice to see that the associated graded complex is a resolution of $\gr A$. Filtering this new complex by rows, we know that the homology of the rows computes $\Tor_*^{\gr B}(\gr(A),\gr(A))$. The only thing to be checked now is that the differential on the $E^1$ term can be identified with $.df$, and this is easily done. \end{proof} In order to compute $H\!H_*(A)$ we can compute the homology of the complex $(A\ot_{A^e}(A\ot \Lambda^*V\ot A),.df,d_{CE})\cong (A\ot \Lambda^*V,.df,d_{CE})$. This is a double complex which can be filtered by the rows, as usual, so we obtain a spectral sequence converging to the homology of the total complex. Of course, the first term is just the homology of the rows. The computation of Hochschild homology can be done in a direct way; however, it is worth noticing that this procedure can be considerably reduced. Let $X_*=(A\ot\Lambda^*V,d)$ be the complex obtained by tensoring the rows in \eqref{ddag} with $A$ over $A^e$, and let $X^0_*$ be the zero weight component of $X_*$. \begin{prop}\label{prop:reduction} The inclusion map $X^0_*\to X_*$ induces an isomorphism in homology. \end{prop} \begin{proof} Let us define the map $s:A\ot \Lambda^*V\to A\ot \Lambda^{*+1}V$ by $s(w\ot v_1\wedge \dots\wedge v_k):= w\ot v_1\wedge \dots\wedge v_k\wedge e_h$. A computation shows that \begin{equation}\label{eq:htpy} (d_{CE}s+sd_{CE})(w\ot v_1\wedge \dots\wedge v_k)= \mathrm{weight}(w\ot v_1\wedge \dots\wedge v_k)\cdot(w\ot v_1\wedge \dots\wedge v_k). \end{equation} Since $\chr(k)=0$, this ``Euler'' map is an isomorphism for non-zero weights, but it is the zero map in homology, because \eqref{eq:htpy} shows that it is homotopic to zero. \end{proof}
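Explicitly, since $A_0=k[h]$, $A_1=k[h]x$ and $A_{-1}=k[h]y$ (see the beginning of the next subsection), and since $e_x$, $e_y$ and $e_h$ carry weights $1$, $-1$ and $0$ respectively (so that all the differentials preserve the weight), the weight zero subcomplex is \begin{align*} X^0_0&=k[h], & X^0_1&=k[h]y\,e_x\oplus k[h]x\,e_y\oplus k[h]\,e_h,\\ X^0_2&=k[h]\,e_x\wedge e_y\oplus k[h]y\,e_x\wedge e_h\oplus k[h]x\,e_y\wedge e_h, & X^0_3&=k[h]\,e_x\wedge e_y\wedge e_h. \end{align*} These are exactly the chains that appear in the computations of the next subsection.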
\subsection{The term $E^1$} \paragraph{Computation of $H\!H_0(A)$.} We remark that in the graduation by weight $A=\oplus_{n\in{\mathbb{Z}}}A_n$ we have $A_0=k[h]$, and, for $n>0$, $A_n=k[h]x^n$ and $A_{-n}=k[h]y^n$. We have to compute $H\!H_0(A)=A/[A,A]=A/([A,x]+[A,y]+[A,h])$, and this is, according to proposition \ref{prop:reduction}, the same as $A_0/([A_{-1},x]+[A_{1},y]+[A_0,h])$. Since $A_0=k[h]$, $[A_0,h]=0$. Because $A_{-1}=k[h]y$, a system of linear generators of $[A_{-1},x]$ is given by commutators of the form $[h^iy,x]=h^ia-\sigma(h^ia)=(I\!d-\sigma)(h^ia)$. On the other hand, $A_1=k[h]x$, so $[A_1,y]$ is spanned by the $[h^jx,y]=h^j\sigma(a)-\sigma^{-1}(h^j)a=(I\!d-\sigma)(-\sigma^{-1}(h^j)a)$ for $j\geq0$. As a consequence, $[A_1,y]+[A_{-1},x]=[A_{-1},x]$ is the subspace of $A_0=k[h]$ of all polynomials $pa-\sigma(pa)$ with $p\in k[h]$. The $k$-linear map $I\!d-\sigma:k[h]\to k[h]$ is an epimorphism, and its kernel is the one dimensional subspace consisting of constant polynomials. The subspace of multiples of $a$ has codimension $\deg a=n$, so the image of the restriction of $I\!d-\sigma$ to this subspace has codimension $n-1$. Then we conclude that $\dim_kH\!H_0(A)=n-1$; a basis is given for example by the set of homology classes $\{[1], [h], [h^2],\dots ,[h^{n-2}]\}$. \paragraph{Homology of the row in degree $1$.} Recall that we only have to consider the subcomplex of elements of weight zero. Let us then suppose that $c=sye_x+txe_y+ue_h$ is a $1$-cycle in the row complex of weight zero. This implies that \[ d(sye_x+txe_y+ue_h)= \sigma\left( (\sigma^{-1}(t)-s)a \right) - (\sigma^{-1}(t)-s)a = 0. \] As a consequence, $(\sigma^{-1}(t)-s)a\in \Ker(I\!d-\sigma)=k$. But $a$ is not a constant polynomial, so $\sigma^{-1}(t)-s=0$. In other words, $s=\sigma^{-1}(t)$ and the cycle can be written in the form \[ \sigma^{-1}(t)ye_x+txe_y+ue_h. \] The horizontal boundary of a $2$-chain $pe_x\wedge e_y +qye_x\wedge e_h +rxe_y\wedge e_h$ is \begin{multline*} d_{CE}(pe_x\wedge e_y +qye_x\wedge e_h +rxe_y\wedge e_h)= \\ =(p-\sigma(p))xe_y + (\sigma^{-1}(p)-p)ye_x +\left(-p (\sigma(a')-a') +(q-\sigma^{-1}(r))a - \sigma((q-\sigma^{-1}(r))a)\right)e_h. \end{multline*} We can choose $p$ such that $\sigma(p)-p=t$, so that, adding $d(pe_x\wedge e_y)$ to $c$, we obtain a cycle homologous to $c$, in which the only possibly non-zero coefficient is the one corresponding to $e_h$. We can then simply assume that $c$ is of the form $ue_h$ to begin with, and we want to know if it is a boundary or not. The equation $d(pe_x\wedge e_y +qye_x\wedge e_h +rxe_y\wedge e_h)=ue_h$ implies $p=\sigma(p)$, so $p\in k$, and \begin{equation}\label{yyy} u+ \left(-(\sigma(a')-a')p \right) =- \sigma((q-\sigma^{-1}(r))a) +(q-\sigma^{-1}(r))a. \end{equation} If $n=\deg a=1$, then we are in the special case of the usual Weyl algebra. In this case $a'\in k$ and $\sigma(a')-a'=0$, so there is one term less on the left hand side of \eqref{yyy} and the homology of the row in this degree is zero. Suppose now $n\geq 2$; if $p=0$ then $u\in\mathrm{Im}((I\!d-\sigma)|_{a.k[h]})$, and we have, as in degree zero, that $\{[1],\dots,[h^{n-2}]\}$ is a basis of the quotient. If $p\neq 0$, we have to mod out an $(n-1)$-dimensional space by the space spanned by a non-zero element, so we obtain an $(n-2)$-dimensional space. We notice that $\deg(\sigma(a')-a')=\deg a-2$, and since $n\geq 2$, the element $\sigma(a')-a'$ is a non-zero element of $k[h]/(I\!d-\sigma)(a.k[h])$. 
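For example, take $a=h^2$ and $\sigma(h)=h-1$ (that is, $h_0=1$; the case of a general $h_0\neq 0$ is analogous). The image $(I\!d-\sigma)(a.k[h])$ is spanned by the polynomials $(I\!d-\sigma)(h^{2+j})$, $j\geq 0$, which have degrees $1,2,3,\dots$, so it has codimension $1$ in $k[h]$ and $H\!H_0(A)$ is one dimensional, spanned by the class of $1$. Moreover $\sigma(a')-a'=2(h-1)-2h=-2$, so its class is a non-zero multiple of $[1]$, and the homology of the row in degree $1$ vanishes, in accordance with the value $n-2=0$ appearing in the table below.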
\paragraph{Homology of the row in degree $2$.} The boundary of a weight zero element $w:=se_x\wedge e_y + tye_x\wedge e_h +uxe_y\wedge e_h$ in degree two is: \[ d(w)= (s-\sigma(s))x e_y - (s-\sigma^{-1}(s))y e_x +\left( (a'-\sigma(a'))s +(t-\sigma^{-1}(u))a - \sigma((t-\sigma^{-1}(u))a) \right)e_h. \] If $d(w)=0$, we must have that $s=\sigma(s)$, so $s\in k$, and that \[ s(\sigma(a')-a')=(t-\sigma^{-1}(u))a -\sigma((t-\sigma^{-1}(u))a). \] The expression on the left is either zero or a polynomial of degree $n-2$, while the expression on the right is either zero or a polynomial of degree $n+\deg(t-\sigma^{-1}(u))-1\geq n-1$. Comparing degrees, both sides must be zero; hence $s=0$ and, since $a$ is not constant, $\sigma(t)=u$. We mention that in the case $\deg a=1$ (i.e. the usual Weyl algebra), the expression on the left is always zero independently of $s$, so the argument is not really different in this case. Now we compute the $2$-boundaries: as $p$ varies in $k[h]$, they are the elements \begin{align*} d(p e_x \wedge e_y \wedge e_h)&= [p,x]e_y\wedge e_h -[p,y]e_x\wedge e_h +[p,h]e_x\wedge e_y\\ &=(p-\sigma(p))x e_y\wedge e_h - (p-\sigma^{-1}(p))y e_x\wedge e_h. \end{align*} Given $u$, there is a $p$ such that $(I\!d-\sigma)(p)=u$, and this $p$ automatically satisfies $\sigma^{-1}(p)-p=t$. We remark that the coefficient corresponding to $e_x\wedge e_y$ in a $0$-weight boundary is always zero. As a consequence, in the case of the usual Weyl algebra, the class of $e_x\wedge e_y$ is a generator of the homology. On the other hand, if $n\geq 2$ the homology is zero. \paragraph{Homology of the row in degree $3$.} The homology in degree three is the kernel of the map $A\ot \Lambda^3V \to A\ot \Lambda^2V$ given by \[ w\mapsto [w,x]e_y\wedge e_h -[w,y]e_x\wedge e_h +[w,h]e_x\wedge e_y. \] It is clearly isomorphic to the center of $A$, which is known to be $k$ (see for example \cite{Bav}). A basis of the homology is given by the class of $e_x\wedge e_y\wedge e_h$. \paragraph{Summary.} We summarize the previous computations in the following table showing the dimensions of the vector spaces in the term $E^1$. In each case, the boxed entry has coordinates $(0,0)$. \[ \begin{array}{ccc} \begin{array}{ccccccc} & & &1 &0 & n-2&\fbox{$n-1$}\\ & &1 &0 &n-2&n-1&\\ &1& 0 & n-2 &n-1& &\\ 1&0&n-2& n-1 & & &\\ \end{array} & \qquad & \begin{array}{ccccccc} & & &1 &1 & 0&\fbox{$0$}\\ & &1 &1 &0&0&\\ &1& 1 & 0 &0& &\\ 1&1&0& 0 & & &\\ \end{array} \\ \\ n\geq2 && \text{The Weyl algebra ($n=1$)} \end{array} \] \subsection{The term $E^2$} The differential $d^1$ corresponds to the vertical differential in the original complex. Let $n\geq 2$. The only relevant component is the map $.df:A\to A\ot V$; we recall that it is defined by \[ .df(b)= bye_x +\sigma(b)xe_y-ba'e_h. \] Adding $d_{CE}(pe_x\wedge e_y)$, where $p$ is such that $b=\sigma(p)-p$, we see that the expression $bye_x +\sigma(b)xe_y-ba'e_h$ is homologous to $\left(-\sigma\left(\sigma^{-1}(p) a'\right)+\left(\sigma^{-1}(p) a'\right)\right)e_h$, and, since the homology of the row in the place corresponding to $A\ot V$ is isomorphic to $k[h]/(I\!d-\sigma)(a.k[h])e_h$, we conclude that the cokernel of the first differential of the spectral sequence (in the same place) is isomorphic to $k[h]/(I\!d-\sigma)(a.k[h] +a'k[h])e_h$. The subspace $ak[h]+a'k[h]$ has codimension $d=\deg(a;a')$, so $(I\!d-\sigma)(ak[h]+a'k[h])$ has codimension $d-1$ (or zero if $d=0$). By linear algebra arguments, the dimension of the kernel of this differential is $d$. 
The corresponding table at this step of the spectral sequence is the following: \[ \begin{array}{ccc} \begin{array}{ccccccc} & & &1 &0 & d-1&\fbox{$n-1$}\\ & &1 &0 &d-1&d&\\ &1& 0 &d-1 &d& &\\ 1&0&d-1&d & & & \end{array} & \qquad & \begin{array}{ccccccc} & & &1 &0 & 0&\fbox{$n-1$}\\ & &1 &0 &0&1&\\ &1& 0 &0 &1& &\\ 1&0&0&1 & & & \end{array} \\ &&\\ d\geq1 && d=0 \end{array} \] In the case $n=1$, the homology of the Weyl algebra is well-known (see for example \cite{kassel} or \cite{sri}), but for completeness we include it. The only relevant differential is the one corresponding to the map $A\ot \Lambda^2V\to A\ot \Lambda^3V$. The generator of the homology of the row in the place corresponding to $A\ot\Lambda^2V$ is $e_x\wedge e_y$, and $.df(e_x\wedge e_y)=-\sigma(a')e_x\wedge e_y\wedge e_h$. But $\deg a=1$ so $a'$ is a non-zero constant; as a consequence $d^1$ is an epimorphism and hence an isomorphism. The table of the dimensions is in this case: \[ \begin{array}{c} \begin{array}{ccccccc} & & &0 &1 & 0&\fbox{$0$}\\ & &0 &0 &0&0&\\ &0& 0 &0 &0& &\\ 0&0&0&0 & & &\\ \end{array} \\ \\ n=1 \end{array} \] We thus recover the known result that $H\!H_2(A_1(k))\cong k$ and $H\!H_i(A_1(k))=0$ for $i\neq 2$.
\subsection{The term $E^3$}\label{subsect:e3} Since $d^2$ has bidegree $(-2,1)$, its only possibly non-zero component has as target a vector space of dimension one, and there are two possibilities: either it is zero, or it is an epimorphism. In order to decide whether it is an epimorphism, it is sufficient to determine if the element $e_x\wedge e_y\wedge e_h$ is a boundary or not. In other words, we want to know if there exist $z_1=\alpha xe_y\wedge e_h+\beta ye_x\wedge e_h+\gamma e_x\wedge e_y$ and $z_2=p$ such that $.df(z_1)=e_x\wedge e_y \wedge e_h$ and \begin{equation}\label{eq:cond} d_{CE}(z_1)+.df(z_2)=0. \end{equation} We have that $.df(z_1)=\left((\alpha-\sigma(\beta))\sigma(a)-\gamma\sigma(a')\right)e_x\wedge e_y\wedge e_h$; so $.df(z_1)=e_x\wedge e_y\wedge e_h$ if and only if \begin{equation}\label{eq:natural} (\alpha-\sigma(\beta))\sigma(a)-\gamma\sigma(a')=1, \end{equation} if and only if \[ (\sigma^{-1}(\alpha)-\beta)a-\sigma^{-1}(\gamma) a'=1. \] A necessary and sufficient condition for a solution to this equation to exist is that $(a;a')=1$, in other words, that $a$ have only simple roots. If this is the case, let $(\alpha,\beta,\gamma)$ be a solution. 
We have \begin{multline*} d_{CE}(z_1)= (\gamma- \sigma(\gamma))xe_y- (\gamma- \sigma^{-1}(\gamma))ye_x+\\ +\left(-\gamma(\sigma(a')-a')+ ( \beta- \sigma^{-1}(\alpha))a- \sigma(( \beta- \sigma^{-1}(\alpha))a)\right)e_h;\qquad\qquad\qquad \end{multline*} and using \eqref{eq:natural} we see that this is equal to \[ (\gamma- \sigma(\gamma))xe_y- (\gamma- \sigma^{-1}(\gamma))ye_x +(\gamma-\sigma^{-1}(\gamma))a'e_h. \] On the other hand, $.df(z_2)=.df(p)= pye_x +\sigma(p)xe_y-pa'e_h$. It is then enough to choose $p=\sigma^{-1}(\gamma)-\gamma$ to have equation \eqref{eq:cond} satisfied. We conclude that $d^2$ is an epimorphism if $(a;a')=1$, and zero if not. We can summarize the results of the above computations in the following table containing the dimensions of $H\!H_p(A)$: \[ \begin{array}{||c|c|c||} \hline p&(a;a')=1&\deg(a;a')=d\geq 1\\ \hline \hline 0 & n-1 & n-1 \\ \hline 1 & 0 & d-1 \\ \hline 2 & 1 & d \\ \hline \geq 3 &0 & d \\ \hline \end{array} \] We note that this proves theorem \ref{thm:hom}.
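As an illustration, consider the algebras $B_\lambda\cong A(k[h],\sigma,a=\lambda-h(h+1))$ from section \ref{sect:GWA}. Here $n=2$ and $a'=-2h-1$, so $(a;a')\neq 1$ exactly when $a(-1/2)=0$, that is, when $\lambda=-1/4$ (in which case $d=1$). The table therefore gives $H\!H_0(B_\lambda)\cong H\!H_2(B_\lambda)\cong k$ and $H\!H_i(B_\lambda)=0$ otherwise when $\lambda\neq -1/4$, while $H\!H_0(B_{-1/4})\cong k$, $H\!H_1(B_{-1/4})=0$ and $H\!H_i(B_{-1/4})\cong k$ for all $i\geq 2$.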
\section{Cohomology}\label{sect:coho} The aim of this section is to compute the Hochschild cohomology of GWA. Once we have done this, we compare the obtained dimensions with duality results. We use resolution \eqref{ddag} of $A$ as an $A^e$-module to compute cohomology. We apply the functor $\Hom_{A^e}(-,A)$ and make the following identifications: \[ \Hom_{A^e}(A\ot \Lambda^kV\ot A,A)\cong \Hom(\Lambda^kV,A)\cong (\Lambda^kV)^*\ot A\cong \Lambda^{3-k}V\ot A. \] Here we are identifying $(\Lambda^{k}V)^*\cong \Lambda^{3-k}V$ using the pairing $\Lambda^{k}V\ot\Lambda^{3-k}V\to\Lambda^3V\cong k$ given by exterior multiplication. Using superscripts for the dual basis, the correspondence between the basis of $(\Lambda^{3-k}V)^*$ and the basis of $\Lambda^k V$ is: \begin{align*} e^h&\mapsto e_x\wedge e_y & e^x\wedge e^h&\mapsto-e_y & 1&\mapsto e_x\wedge e_y\wedge e_h \\ e^y&\mapsto-e_x\wedge e_h &e^x\wedge e^y&\mapsto e_h \\ e^x&\mapsto e_y\wedge e_h &e^y\wedge e^h&\mapsto e_x \end{align*} In this way, we obtain the following double complex, whose total homology computes $H\!H^*(A)$: \[ \xymatrix@C-10pt@R-10pt{ 0\ar[r]& A\ot \Lambda^3V \ar[d]\ar[r]& A\ot \Lambda^2V\ar[d] \ar[r]& A\ot V \ar[d]\ar[r]& A \ar[d]\ar[r]& 0\ar[d]&&\\ &0\ar[r]& A\ot \Lambda^3V \ar[d]\ar[r]& A\ot \Lambda^2V\ar[d] \ar[r]& A\ot V \ar[d]\ar[r]& A \ar[d]\ar[r]& 0\ar[d]&\\ &&0\ar[r]& A\ot \Lambda^3V \ar[d]\ar[r]& A\ot \Lambda^2V\ar[d] \ar[r]& A\ot V \ar[d]\ar[r]& A \ar[d]\ar[r]& 0\\ &&&&&&&} \] The horizontal differentials are, up to sign, {\em exactly} the same as in homology. The vertical differentials are also essentially the same; for example, $.df^*:A\to A\ot V$ is such that \[ .df^*(b)=yb\ot e_x+ bx\ot e_y-\sum_{i,k}h^{k-i-1}ba_kh^i\ot e_h. \] \begin{remark} The differences (and similarities) between the above formulas and the corresponding ones in homology may be explained as follows. Given the map $A^e\to A^e$ defined by $a\ot b\mapsto az\ot wb$, the induced maps on the tensor product and $\Hom$ are related in the following way: when we use the tensor product functor we obtain: \begin{align*} A\cong A\ot_{A^e}A^e&\to A\ot_{A^e}A^e \cong A\\ b&\mapsto wbz. \end{align*} When, on the other hand, we use the $\Hom$ functor we get: \begin{align*} A\cong \Hom_{A^e}(A^e,A)&\to \Hom_{A^e}(A^e,A) \cong A\\ b&\mapsto zbw. \end{align*} \end{remark} This fact implies that we already know the homology of the rows, up to reindexing. However, it is worth noticing that there is a change of degree with respect to the previous computation (now degree increases from left to right). Schematically, the dimensions of these homologies are: \[ \begin{array}{ccccccc} \fbox{$1$} &0 & n-2& n-1&&&\\ &1 &0 & n-2& n-1&&\\ &&1 &0 & n-2& n-1&\\ &&&1 &0 & n-2& n-1 \end{array} \] From this, it follows that $\dim_kH\!H^0(A)=1$ and $\dim_kH\!H^1(A)=0$, independently of the polynomial $a$. Also $\dim_kH\!H^2(A)=n-1=\deg a-1$, because of the form of the $E_1$ term in the spectral sequence. As before, there are two different cases: either \textsc{(i)} $(a;a')=1$ or \textsc{(ii)} $1\leq \deg(a;a')=d\leq n-1$. 
The following tables give the dimensions of the spaces in the $E_2$ terms of the spectral sequence, in both situations: \[ \begin{array}{ccc} \begin{array}{ccccccc} \fbox{$1$} &0 & n-2& 1&&&\\ &1 &0 & 0& 1&&\\ &&1 &0 & 0& 1&\\ &&&1 &0 &0 & 1\\ \end{array} &\qquad& \begin{array}{ccccccc} \fbox{$1$} &0 & n-2& d&&&\\ &1 &0 & d-1& d&&\\ &&1 &0 & d-1& d&\\ &&&1 &0 &d-1 &d \\ \end{array} \\ \\ \text{\textsc{(i)}}&&\text{\textsc{(ii)}} \end{array} \] In each case, the differential $d_2$ is the same, up to our identifications, as the one considered in section \ref{subsect:e3}. As a consequence, we have: in case \textsc{(i)}, the $E_3$ term has the form \[ \begin{array}{ccccccc} \fbox{$1$} &0 & n-2& 0&&&\\ &1 &0 & 0& 0&&\\ &&0 &0 & 0& 0&\\ &&&0 &0 &0 & 0\\ \end{array} \] so $E_\infty=E_3$; in case \textsc{(ii)}, $d_2=0$, and we see that $E_\infty=E_2$. We summarize the results in the following table containing the dimensions of $H\!H^p(A)$: \[ \begin{array}{||c|c|c||} \hline p&(a;a')=1&\deg(a;a')=d\geq 1\\ \hline \hline 0 & 1 & 1 \\ \hline 1 & 0 & 0 \\ \hline 2 & n-1 & n-1 \\ \hline \geq 3 &0 & d \\ \hline \end{array} \] This proves theorem \ref{thm:coho} concerning cohomology. It is clear that, when the polynomial $a$ has multiple roots, there is no duality between Hochschild homology and cohomology, contrary to what one might expect after \cite{VdB}. This is explained by the fact that in this case the algebra $A^e$ has infinite left global dimension; in this situation, theorem 1 in \cite{VdB} fails: one cannot in general replace $\ot$ by $\ot^L$ in the first line of Van den Bergh's proof. One can retain, however, the conclusion in the theorem if one adds the hypothesis that either the $A^e$-module $A$ or the module of coefficients has finite projective dimension. This is explained in detail in \cite{VdBerratum}.
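For orientation (again a sketch based on the identification of $A_1(k)$ with the generalized Weyl algebra given by $a(h)=h$, so $n=1$ and $a$ has only simple roots), the cohomology table specializes to \[ \dim_kH\!H^0(A_1)=1,\qquad \dim_kH\!H^p(A_1)=0 \ \text{ for }p\geq 1, \] and a comparison with the homology table shows that $\dim_kH\!H^p(A_1)=\dim_kH\!H_{2-p}(A_1)$ for all $p$, so the duality of \cite{VdB} does hold in this case of simple roots, as expected.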
2,176
22,342
en
train
0.104.5
\section{Invariants under finite group actions}\label{sect:inv} The algebraic torus $k^*=k\setminus\{0\}$ acts on generalized Weyl algebras by diagonal automorphisms. More precisely, given $w\in k^*$, there is an automorphism of algebras uniquely determined by \[ x\mapsto wx, \qquad y\mapsto w^{-1}y, \qquad h\mapsto h. \] This defines a morphism of groups $k^*\hookrightarrow\Aut_k(A)$ which we will consider as an inclusion. The automorphism defined by $w\in k^*$ is of finite order if and only if $w$ is a root of unity, and, in this case, the subalgebra of invariants can easily be seen to be generated by $\{h,x^m,y^m\}$, where $m$ is the order of $w$. The following lemma is a statement of the fact that the process of taking invariants with respect to finite subgroups of $k^*$ for this action does not lead out of the class of GWA. This enables us to obtain almost immediately the Hochschild homology and cohomology of the invariants. \begin{lem}\label{lemma:inv} Let $A=A(k[h],\sigma, a)$ be a generalized Weyl algebra and let $G:={\mathbb{Z}}/r.{\mathbb{Z}}$ act on $A$ by powers of the diagonal automorphism induced by a primitive $r$-th root of unity. The subalgebra of invariants $A^G$ is isomorphic to the generalized Weyl algebra $A^G=A(k[H],\tau,\wt{a})$, where $\tau(H)=H-1$ and $\wt{a}(H)=\sigma^{-r+1}(a)(rH)\cdots\sigma^{-1}(a)(rH)a(rH)$. \end{lem} \begin{proof} We know that $A^G=\langle h, x^r,y^r \rangle$. Let us write $X:=x^r$, $Y:=y^r$ and $H:=h/r$. Then $XH=x^rh/r=\sigma^r(h/r)x^r=\tau(H)X$ and similarly $HY=Y\tau(H)$. Now \[ YX=y^rx^r=y^{r-1}yxx^{r-1}=y^{r-1}a(h)x^{r-1}= \sigma^{-r+1}(a)(h)y^{r-1}x^{r-1} = \sigma^{-r+1}(a)(rH)y^{r-1}x^{r-1}. \] so clearly the equality $y^rx^r=\wt a(H)$ follows by induction on $r$. \end{proof} The idea to compute (co)homology of $A^G$ is to replace it by the crossed product $A*G$. This change does not affect the homology provided that $A^G$ and $A*G$ are Morita equivalent; this is discussed in detail in \cite{afls}. In particular, this is the case when the polynomial $a$ has no pair of different roots conjugated to each other by $\sigma$---that is, there do not exist $\mu\in{\mathbb{C}}$ and $j\in{\mathbb{Z}}$ such that $a(\mu)=a(\mu+j)=0$---because in this situation the algebra $A=A(k[h],\sigma,a)$ is simple, as proved by Bavula in~\cite{Bav}. We state this as \begin{prop}\label{prop:desc} Let $a\in k[h]$ be a polynomial such that no pair of its roots are conjugated by $\sigma$ in the sense explained above. Let $G$ be any finite subgroup of $\Aut_k(k[h])$. Then there are isomorphisms \[ H\!H^*(A^G) \cong H^*(A,A*G)^G \cong \bigoplus_{\cl{g}\in\cl{G}}H^*(A,Ag)^{{\cal Z}_g}, \] where the sum is over the set $\cl G$ of conjugacy classes $\cl g$ of $G$, and, for each $g\in G$, ${\cal Z}_g$ is the centralizer of $g$ in $G$. Also, there are duality isomorphisms $H\!H_*(A^G)\congH\!H^{2-*}(A^G)$. \end{prop} \begin{proof} Given the hypotheses in the statement, we are in a situation similar to the one considered in \cite{afls}. The proposition follows from the arguments presented there. The last part concerning homology follows from the duality theorem of \cite{VdB}, since the global dimensional of $A^G$ is finite; see also section~7 in~\cite{afls}. \end{proof} Under appropriate conditions on the $g\in G$, we are able to compute the ${\cal Z}_g$-module $H_*(A,Ag)$. 
This module has always finite dimension as $k$-vector space (see proposition \ref{prop:coef}) and the action of ${\cal Z}_g$ is determined by an element $\Omega\in \Aut_k(A)$ (see proposition \ref{prop:action}). This automorphism $\Omega$ is a generalization of the Cartan involution of ${\frak sl}_2$, and is explained in more detail in section \ref{section:cartan}. We state the theorem, but since we need to know some facts about the group $\Aut_k(A)$, its proof will finish in section \ref{section:end}. \begin{teo}\label{teoG} Let us consider a GWA $A=A(k[h],\sigma, a)$ which is simple. Let $G\subset \Aut_k(A)$ be a finite subgroup such that every element $g$ of $G$ is conjugated in $\Aut_k(A)$ to an element in the torus $k^*$. Let us define $a_1:=\#\{\cl{g}\in\cl{G}\setminus\{I\!d\}\hbox{ such that } \Omega\notin{\cal Z}_g\}$ and $a_2:=\#\{\cl{g}\in\cl{G}\setminus\{I\!d\}\hbox{ such that }\Omega\in{\cal Z}_g\}$. We have that \[ \dim_kH\!H^p(A^G)= \begin{cases} 1 & \text{if $p=0$}\\ (n-1)+na_1+[(n+1)/2]a_2 & \text{if $p=2$}\\ 0 & \text{if $p=1$ or $p>2$}\\ \end{cases} \] \end{teo} \begin{remark} In particular, if the action of ${\cal Z}_g$ is trivial, the formula for $H\!H^2$ gives $\dim H\!H^2(A^G)=n.\#\cl{G}-1$. \end{remark} \begin{proof} Using the hypotheses and the above proposition, the proof will follow from the computation of the dimensions of $H^*(A,Ag)$ (proposition \ref{prop:coef}) and the characterization of the action (proposition \ref{prop:action}). \end{proof} We state the following proposition for automorphisms $g$ of $A$ diagonalizable but not necessarily of finite order, although we do not need such generality. \begin{prop}\label{prop:coef} Let $g\in \Aut_k(A)$ different from the identity and conjugated to an element of $k^*$. Then $H^0(A,Ag)=H^1(A,Ag)=0$, $\dim_kH^2(A,Ag)=n$, and $\dim_kH^*(A,Ag)=d$ for each $*>2$, where $d=\deg(a;a')$. Also, $\dim_kH_0(A,Ag)=\deg a=n$, and, for all $*>0$, $\dim_kH_*(A,Ag)=d$. \end{prop} We can assume that $g\in\Aut_k(A)$ is in fact in $k^*$, since, by Morita invariance, $H^*(A,Ag)\cong H^*(A,Ahgh^{-1})$ for all $h\in\Aut_k(A)$. The groups $H^*(A,Ag)$ can be computed using the complex obtained by applying to the resolution \eqref{ddag} the functor $\Hom_{A^e}({-},Ag)$; it can be identified, using the same idea as in section \ref{sect:coho}, to the double complex \[ \xymatrix@C-10pt@R-10pt{ 0\ar[r]& Ag\ot \Lambda^3V \ar[d]\ar[r]& Ag\ot \Lambda^2V\ar[d] \ar[r]& Ag\ot V \ar[d]\ar[r]& Ag \ar[d]\ar[r]& 0\ar[d]&&\\ &0\ar[r]& Ag\ot \Lambda^3V \ar[d]\ar[r]& Ag\ot \Lambda^2V\ar[d] \ar[r]& Ag\ot V \ar[d]\ar[r]& Ag \ar[d]\ar[r]& 0\ar[d]&\\ &&0\ar[r]& Ag\ot \Lambda^3V \ar[d]\ar[r]& Ag\ot \Lambda^2V\ar[d] \ar[r]& Ag\ot V \ar[d]\ar[r]& Ag \ar[d]\ar[r]& 0\\ &&&&&&&} \] This complex is graded (setting $weight(g)=0$) in an analogous way to the complex which computes $H\!H^*(A)$. It is straightforward to verify that the homotopy defined in \ref{prop:reduction} may be also used in this case. As a consequence, the cohomology of the rows is concentrated in weight zero. \subsection{The term $E_1$} \paragraph{Computation of $H^0(A,Ag)$.} Let $p\in k[h]$ and assume $pge_x\wedge e_y\wedge e_h\in \Ker(d_{CE})$, that is, that \[ (\sigma(p)-wp)xge_y\wedge e_h- (\sigma^{-1}(p)-w^{-1}p)yge_x\wedge e_h+ 0e_x\wedge e_y=0 \] Then $(\sigma-w.I\!d)(p)=0$, so $p=0$, and the cohomology in degree zero vanishes. 
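The invertibility of $\sigma-w.I\!d$ on $k[h]$, used here and repeatedly below, deserves a one-line justification (added for convenience; recall that $\sigma(h)=h-1$ and that $w\neq 1$ because $g\neq I\!d$): for $p=\sum_{j\leq \ell}c_jh^j$ with $c_\ell\neq 0$ we have \[ (\sigma-w.I\!d)(p)=(1-w)c_\ell h^\ell+(\text{terms of degree}<\ell), \] so $\sigma-w.I\!d$ preserves the degree filtration and is injective on each finite dimensional piece $k[h]_{\leq \ell}$, hence bijective on $k[h]$.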
\paragraph{Homology of the rows, degree $1$.} Given $u$, $v$, $t$ in $k[h]$, a computation shows that $uge_x\wedge e_y+ vyge_x\wedge e_h+ txge_y\wedge e_h\in \Ker(d_{CE})$ if and only if \begin{multline*} (\sigma^{-1}(u)-w^{-1}u)yge_x +(wu-\sigma(u))xge_y+\\ +(w^{-1}t\sigma(a)-\sigma^{-1}(t)a -u(\sigma(a')-a') +wva-\sigma(va))g e_h=0.\qquad\qquad\qquad \end{multline*} Using again that $\sigma-w.I\!d$ is an isomorphism, we conclude that in order for the coefficient of $e_y$ to vanish, $u$ must be zero. Looking at the coefficient of $e_h$, we obtain the only other condition, namely \[ w^{-1}t\sigma(a)-\sigma^{-1}(t)a +wva-\sigma(va)= (\sigma-w.I\!d)((w^{-1}\sigma^{-1}(t)-v)a)=0, \] so $w^{-1}\sigma^{-1}(t)-v=0$. We conclude that any $1$-cocycle is of the form \[ d_{CE}(pge_x\wedge e_y\wedge e_h)= vyge_x\wedge e_h+ w\sigma(v)xge_y\wedge e_h, \] where $p\in k[h]$ is chosen so that $\sigma(p)-wp=w\sigma(v)$. It follows immediately that the cohomology of the rows in degree $1$ is zero. \paragraph{Homology of the rows, degree $2$.} The $2$-coboundaries are expressions of the form \begin{multline}\label{eq:2bd} (\sigma^{-1}(u)-w^{-1}u)yge_x +(wu-\sigma(u))xge_y+ \\ +(w^{-1}t\sigma(a)-\sigma^{-1}(t)a -u(\sigma(a')-a') +wva-\sigma(va))g e_h \end{multline} A $2$-cochain $\alpha=pyge_x+qxge_y+rge_h$, with $p$, $q$, $r\in k[h]$, is a cocycle if and only if $q=w\sigma(p)$ holds; in particular, this imposes no conditions on $r$. We can then assume that $p=q=0$ in $\alpha$, because one can add to $\alpha$ a coboundary of the form $d(ug e_x\wedge e_y)$ with $u\in k[h]$. We now want to decide when such a $2$-cocycle is a coboundary. In view of \eqref{eq:2bd} and the fact that $\sigma-w.I\!d$ is an isomorphism, we see at once that $u$ must be zero. We are reduced to solving the equation \[ w^{-1}t\sigma(a)-\sigma^{-1}(t)a +wva-\sigma(va)=(\sigma-w.I\!d)((w^{-1}\sigma^{-1}(t)-v)a)= r. \] This can be solved if and only if $(\sigma-w.I\!d)^{-1}(r)$ is a multiple of $a$, so the space of those $r$ for which it is solvable has codimension $\deg a=n$; in other words, the dimension of the cohomology of the rows in degree $2$ is $n$. \paragraph{Homology of the rows, degree $3$.} The coboundaries of weight zero are of the form \[ d(pyge_x+qxge_y+rge_h)= (wpa-\sigma(pa)+w^{-1}q\sigma(a)-\sigma^{-1}(q)a)g= (\sigma-w.I\!d)((w^{-1}\sigma^{-1}(q)-p)a)g \] with $p$, $q$, $r$ in $k[h]$. Every polynomial in $k[h]$ can be written as $w^{-1}\sigma^{-1}(q)-p$ for some $p, q\in k[h]$, so, since the map $\sigma-w.I\!d:k[h]\to k[h]$ is an isomorphism, we see that the dimension of the cohomology of the row complex in degree $3$ is equal to $\dim_kk[h]/ak[h]=\deg a=n$.
3,859
22,342
en
train
0.104.6
\subsection{The term $E_2$} In view of the above computations, the dimensions of the components of the $E_1$-term of the spectral sequence are as follows: \[ \begin{array}{cccccc} \fbox{$0$}&0&n&n& \\ &0&0&n&n& \\ &&0&0&n&n \\ \end{array} \] Consequently, the only relevant vertical differential is \begin{align*} .df: Ag&\to Ag\ot V\\ bg&\mapsto bw^{-1}yge_x+\sigma(b)xge_y-ba'ge_h. \end{align*} Adding $d(qge_x\wedge e_y)$ one sees that this element is cohomologous to $(-ba'+ q(\sigma(a')-a') )ge_h$, where $q\in k[h]$ is such that $(\sigma-w.I\!d)(q)=-\sigma(b)$.
But then $b=w\sigma^{-1}(q)-q$, and therefore \[ .df(bg)=(-w\sigma^{-1}(q)a'+ q\sigma(a') )ge_h= (\sigma-w.I\!d)(\sigma^{-1}(q)a')ge_h. \] On the other hand, the target of $.df$ has already been shown to be isomorphic to $k[h]/(\sigma-w.I\!d)(a.k[h])$. Under this isomorphism, the cokernel of $.df$ is isomorphic to $k[h]/(\sigma-w.I\!d)(a.k[h]+a'.k[h])$. Since we have assumed that $(a;a')=1$, the cokernel of $.df$ is zero, and by counting dimensions, the kernel of $.df$ also vanishes. This proves the first part of proposition \ref{prop:coef}; the rest of the statements thereof follow from similar computations, which we omit. In particular, theorem \ref{teoG} follows. We observe that, if one assumes that the action of ${\cal Z}_g$ is trivial, then the full strength of proposition \ref{prop:coef} is not needed to prove theorem \ref{teoG}. Indeed, let $g\in G$. Using proposition \ref{prop:desc} for the cyclic subgroup $C=(g)\subset\Aut_k(A)$ generated by $g$, and the hypothesis on the triviality of the action of the centralizers in homology, we see that the following relation holds for all $p\geq0$: \begin{align} \dim_kH\!H_p(A^{C}) & = \dim_kH\!H_p(A)^{C} +\sum_{1\leq i<|g|} \dim_kH_p(A,Ag^i)^{C}\label{eq:rel}\\ & = \dim_kH\!H_p(A) +\sum_{1\leq i<|g|} \dim_kH_p(A,Ag^i).\notag \end{align} Now, in view of lemma \ref{lemma:inv}, the algebra $A^C$ is a GWA, so we already know its homology. For $p=1$ or $p\geq3$ it vanishes; in particular, $H_p(A,Ag)=0$. For $p=2$, $\dim_kH\!H_2(A^{C})=1=\dim_kH\!H_2(A)$, so again we have $H_2(A,Ag)=0$. Finally, computing $g$-commutators as in the end of section 5.1, it is easy to see that $\dim_kH_0(A,Ag^i)\leq n$ for all $1\leq i<|g|$; since $\dim_kH\!H_0(A^{C})=|g|n-1$ and $\dim_kH\!H_0(A)=n-1$, relation \eqref{eq:rel} forces $\dim_kH_0(A,Ag^i)=n$.
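As a sketch of a consistency check (using once more the identification of $A_1(k)$ with the generalized Weyl algebra given by $a(h)=h$, so $n=1$ and $[(n+1)/2]=1$), the formula of theorem \ref{teoG} reduces to \[ \dim_kH\!H^2\bigl(A_1(k)^G\bigr)=(n-1)+na_1+[(n+1)/2]a_2=a_1+a_2=\#\bigl(\cl{G}\setminus\{I\!d\}\bigr), \] the number of nontrivial conjugacy classes of $G$, which is consistent with the results of \cite{afls} recovered in the next section.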
2,351
22,342
en
train
0.104.7
\section{Applications}\label{sect:apps} \subsection{The usual Weyl algebra} The results of the previous sections apply to the case when $A=A_1(k)$ ($\chr k=0$) and $G$ is an arbitrary finite subgroup of $\Aut_k(A)$, because in this case the finite order automorphisms of $A$ are always diagonalizable, and ${\cal Z}_g$ acts trivially on $H_*(A,Ag)$ for all such $g$. We then recover the results of \cite{afls}. \subsection{Primitive quotients of ${\cal U}(\mathfrak{sl}_2)$} If the polynomial $a$ is of degree two, then $A$ is isomorphic to one of the maximal primitive quotients of ${\cal U}(\mathfrak{sl}_2)$. In this case, O.~Fleury \cite{odile} has proved that the group of automorphisms is isomorphic to the amalgamated product of $\mathit{PSL}(2,{\mathbb{C}})$ with a torsion-free group. The action of $\mathit{PSL}(2,{\mathbb{C}})$ is the one coming from the adjoint action of $\mathit{SL}(2,{\mathbb{C}})$ on ${\cal U}(\mathfrak{sl}_2)$. There is then a simple classification, up to conjugacy, of all finite groups of automorphisms of $A$: they are the cyclic groups $A_n$, the binary dihedral groups $D_n$, and the binary polyhedral groups $E_6$, $E_7$ and $E_8$; cf.~\cite{Springer}. In her thesis, for the regular case, O.~Fleury \cite{odile} computed, case by case, the action of the centralizers, and in this way she obtained $H\!H_0(A^G)$. For positive degrees, following proposition \ref{prop:desc} one has to compute $H_*(A,Ag)$ and the action of ${\cal Z}_g$ on it. By proposition \ref{prop:coef} one knows that $H_*(A,Ag)=0$ for $*>0$ and $g\neq 1$, so, in positive degrees, $H\!H_*(A^G)=H\!H_*(A)^G$. But the only positive degree in which $H\!H_*(A)$ is nonzero is $*=2$, and $H\!H_2(A)\cong H\!H^0(A)={\cal Z}(A)=k$. Since the duality isomorphism is $G$-equivariant, the action of $G$ on $H\!H_2(A)$ is trivial and we conclude that $H\!H_*(A^G)=0$ for $*\neq 0,2$ and $H\!H_2(A^G)=k$. Using the duality, one obtains the cohomology. For the non-regular case, what we are able to compute is not $H\!H_*(A^G)$ but $H\!H_*(A\#G)$. The computation of the action of the centralizers is discussed in the next sections, because those actions can be described in general and also ``explain'' the computations made by O.~Fleury. \subsection{The Cartan involution\label{section:cartan}} In the case of ${\cal U}({\frak sl}_2)$ there is a special automorphism (which descends to $B_{\lambda}$) defined by $e\mapsto f$, $f\mapsto e$ and $h\mapsto -h$. For an arbitrary GWA $A$ with defining polynomial $a(h)$ of degree $n$, there are some particular cases in which a similar automorphism is defined. In \cite{BavJor} the authors find generators of the automorphism group of GWAs. It turns out that $\Aut_k(A)$ is generated by the torus action and exponentials of inner derivations, and, in the case that there is $\rho\in {\mathbb{C}}$ such that $a(\rho -h)=(-1)^{n}a(h)$, a generalization of the Cartan involution --still called $\Omega$-- is defined as follows: \[x\mapsto y \qquad;\qquad y\mapsto (-1)^{n}x\qquad ; \qquad h\mapsto 1+\rho-h\] In this section, let us call ${\cal G}$ the subgroup of $\Aut_k(A)$ generated by the torus and the exponentials. When the polynomial is reflective (i.e. $a(\rho-h)=(-1)^na(h)$), the group generated by ${\cal G}$ and $\Omega$ coincides with $\Aut_k(A)$. If the polynomial is not reflective, ${\cal G}=\Aut_k(A)$. In \cite{Dixmier} Dixmier shows, in the case of ${\cal U}({\frak sl}_2)$, that $\Omega$ belongs to ${\cal G}$ (and of course this fact descends to the primitive quotients).
This situation corresponds to $\deg(a)=2$. In \cite{BavJor} (remark 3.30), the authors ask whether this automorphism $\Omega$ belongs to the subgroup generated by the torus and exponentials. We will answer this question by looking at the action of the group of automorphisms on $H\!H_0(A)$, so we begin by recalling the expression of the exponential-type automorphisms. Let $\lambda\in {\mathbb{C}}$ and $m\in {\mathbb{N}}_0$; the two exponential automorphisms associated to them are defined as follows: \begin{align*} \phi_{m,\lambda}:&=\exp(\lambda \ad( y^m))\\ x&\mapsto x+\sum_{i=1}^n\frac{(-\lambda)^i}{i!}(\ad y^m)^i(x)\\ y&\mapsto y\\ h&\mapsto h+m\lambda y^{m} \end{align*} \begin{align*} \psi_{m,\lambda}:&=\exp(\lambda \ad( x^m))\\ x&\mapsto x\\ y&\mapsto y+\sum_{i=1}^n\frac{\lambda^i}{i!}(\ad x^m)^i(y)\\ h&\mapsto h-m\lambda x^{m} \end{align*} We know that $H\!H_0(A)$ has $\{1,h,h^2,\dots,h^{n-2}\}$ as a basis. If one assumes $n>2$ then the action of $\Omega$ on $H\!H_0(A)$ is not trivial. On the other hand, since the homogeneous components of weight different from zero are commutators, the action of $\psi_{m,\lambda}$ and of $\phi_{m,\lambda}$ is trivial on $H\!H_0(A)$. To see this we consider for example \[\phi_{m,\lambda}(h^i)= \phi_{m,\lambda}(h)^i= (h+m\lambda y^{m})^i\] and it is clear that the weight-zero component of $(h+m\lambda y^m)^i$ equals $h^i$. The action of the torus is also trivial on $H\!H_0(A)$ (it is already trivial on $k[h]$). We conclude that for $n>2$, the automorphism $\Omega$ cannot belong to ${\cal G}$. \subsection{End of the proof of Theorem \ref{teoG}\label{section:end}} The hypotheses of theorem \ref{teoG} are that, in the finite group $G$ under consideration, every element $g$ is conjugated (in $\Aut_k(A)$) to an element of the torus. We will show that the triviality of the action of ${\cal Z}_g$ on $H_*(A,Ag)$ is generically satisfied, and that whether the action of ${\cal Z}_g$ is trivial depends only on whether $\Omega$ belongs to ${\cal Z}_g$. \begin{prop}\label{prop:action} Let $A$ be a GWA, $G$ a finite subgroup of $\Aut_k(A)$ such that every $g\in G$ is conjugated to an element of the torus. If $\Omega\notin {\cal Z}_g$ then the action of ${\cal Z}_g$ on $H_*(A,Ag)$ is trivial. \end{prop} \begin{proof} Let us suppose that $a$ is not reflective and let $g\in G$. If $g$ is the identity, then the centralizer of $g$ in $\Aut_k(A)$ is $\Aut_k(A)$ itself, and the triviality of the action on $H\!H_0$ was explained in the previous subsection. For $H\!H_2(A)\cong H\!H^0(A)={\cal Z}(A)=k$ the action is always trivial. When $*\neq 0,2$, $H\!H_*(A)=0$. Let us now consider $g\neq I\!d$. Up to conjugation we may assume that $g(h)=h$, $g(x)=wx$ and $g(y)=w^{-1}y$, where $w$ is a root of unity. By proposition \ref{prop:coef}, the only non-zero homology group is $H_0(A,Ag)$, which has basis $\{g, hg, h^2g,\dots,h^{n-1}g\}$. If $g'\in \Aut_k(A)$ commutes with $g$ then it induces an automorphism $g'|_{A^g}:A^g\to A^g$. This defines a map ${\cal Z}_g\to \Aut_k(A^g)$. But $A^g=\langle h, x^{|w|}, y^{|w|} \rangle$ is again a GWA, so one knows the generators of its group of automorphisms. It is not hard to see that if $a$ is not reflective then the polynomial $\wt{a}$ associated to $A^g$ is not reflective either. In this case $\Aut_k(A^g)$ is generated by the torus and exponentials of $\lambda\ad x^{|w|.m}$ and $\lambda\ad y^{|w|.m}$.
On the other hand, an automorphism $g'$ induces the identity on $A^g$ if and only if it fixes $h$, $x^{|w|}$ and $y^{|w|}$, and by degree considerations it is clear that $g'(x)$ must be a multiple of $x$, and analogously for $y$. We conclude that the group of elements (in $\Aut_k(A)$) commuting with $g$ is generated by the torus and exponentials of $\lambda \ad x^{m.|w|}$ and $\lambda \ad y^{m.|w|}$. It is clear that the torus acts trivially on the vector space spanned by $\{g, hg, h^2g,\dots,h^{n-1}g\}$. A computation similar to the one done at the end of section \ref{section:cartan} shows that $\phi_{m,\lambda}(h^ig)=h^ig$ modulo $g$-commutators, and analogously for $\psi$. If the polynomial $a$ is reflective then $\Aut_k(A)$ is generated by ${\cal G}$ and $\Omega$. We have just seen the triviality of the action for the generators of ${\cal G}$, and $\Omega$ is excluded by hypothesis. \end{proof} We now finish the proof of theorem \ref{teoG} by explaining the formula $\dim H\!H_0(A^G)=(n-1)+n.a_1+[(n+1)/2].a_2$. From the decomposition $H\!H_0(A^G)=\bigoplus_{\cl{g}\in \cl{G}}H_0(A,Ag)^{{\cal Z}_g}$, the first ``$n-1$'' comes from the summand corresponding to the identity element, which contributes $\dim H\!H_0(A)^G=\dim H\!H_0(A)=n-1$. The ``$n.a_1$'' comes from the terms corresponding to conjugacy classes $\cl{g}$ such that ${\cal Z}_g$ does not contain $\Omega$, because in this case $\dim H_0(A,Ag)^{{\cal Z}_g}= \dim H_0(A,Ag)=n$. Finally, the summand ``$[(n+1)/2].a_2$'' corresponds to the conjugacy classes having $\Omega$ in their centralizers; in these cases the dimension of $(kg\oplus khg\oplus kh^2g\oplus \dots \oplus kh^{n-1}g)^{\Omega}$ is the integer part of $(n+1)/2$. \begin{remark} This result explains, for $n=2$, the case by case computations of $H\!H_0(B_{\lambda}^G)$ made by O.~Fleury in \cite{odile}; see also \cite{odile2}. \end{remark} \end{document}
3,204
22,342
en
train
0.105.0
\begin{document} \begin{abstract} We investigate uniform, strong, weak and almost weak stability of multiplication semigroups on Banach space valued $L^p$-spaces. We show that, under certain conditions, these properties can be characterized by analogous ones of the pointwise semigroups. Using techniques from selector theory, we prove a spectral mapping theorem for the point spectra of the pointwise and global semigroups and apply this as a major tool for determining almost weak stability. \end{abstract} \maketitle One of the significant features of the Fourier transform is that it converts a differential operator into a multiplication operator induced by some scalar-valued function. The properties of the original operator are then determined by the values of this function. The same holds if a system of differential operators is transformed into a matrix-valued multiplication operator on a vector-valued function space. This motivates the systematic investigation of multiplication operators on Banach space valued function spaces. Such operators (and semigroups generated by them) have been studied by, e.g., Holderrieth \cite{Hol91} as well as Hardt and Wagenf\"uhrer \cite{HaWa96} for matrix multiplication semigroups and by Arendt and Thomaschewski \cite{ThoAr05} and Graser \cite{Gra97} in the infinite dimensional case. See also \cite[Section 4]{MuNi11} and \cite{Neerven93}. Qualitative properties of such semigroups, e.g. various stability concepts, are of great interest in control theory (see e.g. \cite{CIZ09}). In fact, motivated by these applications, Hans Zwart has proved a characterisation of strong stability of a multiplication semigroup in the finite dimensional case (see \cite{Zwart}), while so-called polynomial stability of multiplication semigroups is characterized in Theorem 4.4 of \cite{BEP06}. The general question in this context is to what extent the global properties of a multiplication operator are determined by the local properties of the pointwise operators. In this paper we systematically investigate spectral and stability properties of multiplication semigroups. Our aim is thus to understand how these properties are related to those of the corresponding pointwise operators or semigroups, as explained below. As a major tool, and also as a result of independent interest, we obtain a complete characterisation of the eigenvalues of multiplication operators (Theorem \ref{thmPtSpM}). Furthermore, we indicate how the global and local stability properties are related for uniform (Theorem \ref{thmUniform}), strong (Theorem \ref{thmSG_STRONG}), weak (Proposition \ref{propWeak}) and almost weak (Theorem \ref{thmAwsSg}) convergence. Finally, we include a section in which we state analogous stability results for the powers of multiplication operators. Throughout this text we assume that the measure space $(\Omega,\Sigma, \mu)$ is $\sigma$-finite. For a separable Banach space $X$, $L^p(\Omega,X)$ denotes the Bochner space $L^p(\Omega,\Sigma,\mu;X)$ for a fixed value of $1\leq p \leq\infty$ (see e.g.\ \cite{abhn01} or \cite{DU77}). \begin{defn}[Multiplication Operator, Pointwise Operator]\label{defMULT} Let $X$ be a separable Banach space and let $M:\Omega\rightarrow\Lop$ be such that, for every $x\in X$, the function $$ \Omega \ni s\mapsto M(s)x \in X $$ is Bochner measurable.
The {\em multiplication operator} ${\mathcal{M}}$ on $L^p(\Omega,X)$, with $1\leq p \leq\infty$, is defined by $$ ({\mathcal{M}} f)(s) := M(s)f(s) \text{ for all } s\in\Omega $$ with $$ D({\mathcal{M}})=\Bigl\{f\in L^p(\Omega,X)\ :\ M(\cdot)f(\cdot)\in L^p(\Omega,X) \Bigr\}. $$ In this context, we call the operators $M(s)$ with $s\in\Omega$ the {\em pointwise operators} on $X$. \end{defn} \begin{rem} The Bochner measurability of $s\mapsto M(s)x$ for all $x \in X$ implies that the function $M(\cdot)f(\cdot)$ is also measurable if $f\in L^p(\Omega,X)$, see \cite[Lemma 2.2.9]{Tho03}. \end{rem} \begin{defn}[Multiplication Semigroup]{\cite[Definition 2.3.8, p.\,45]{Tho03}} If a $C_0$-semigroup $\bigl({\mathcal{M}}(t)\bigr)_{t\geq 0}$ on $L^p(\Omega,X)$ consists of multiplication operators on $L^p(\Omega,X)$, it is called a {\em multiplication semigroup}. (For the general theory of $C_0$-semigroups, we refer to \cite{enna00}.) \end{defn} \begin{rem}\lambdabel{remNOT} If $\bigl({\mathcal{A}},D({\mathcal{A}})\bigr)$ is the generator of the multiplication semigroup $\bigl({\mathcal{M}}(t)\bigr)_{t\geq 0}$, then there exists a family of operators $A(s)$ with domain $D(A(s))$ on $X$ such that $A(s)$ is the generator of a $C_0$-semigroup on $X$ for all $s\in\Omega\begin{align}ckslash N$ for some null set $N$ (see \cite[Theorem 2.3.15, p.\,49]{Tho03}). We call these semigroups on $X$ the {\em pointwise semigroups}. We denote the multiplication semigroup $\bigl({\mathcal{M}}(t)\bigr)_{t\geq 0}$ by $\left(e^{t{\mathcal{A}}}\right)_{t\geq 0}$ and the pointwise semigroups by $\bigl(e^{tA(s)}\bigr)_{t\geq 0}$ for all $s\in\Omega\begin{align}ckslash N$. Furthermore, for every $t\geq 0$, the function from $\Omega$ to $\Lop$, $s\mapsto e^{tA(s)}$, is measurable and the operator $e^{t{\mathcal{A}}}$ is the corresponding multiplication operator on $L^p(\Omega,X)$, cf. \cite[Proposition 2.3.12, p.\,48]{Tho03}. \end{rem} Before discussing stability properties, we recall the characterisation of boundedness of a multiplication operator by Thomaschewski (cf. \cite{Tho03}) or Klaus-J.\ Engel (cf. \cite[Chapter IX, Proposition 1.3]{en97}). \begin{lem}\lambdabel{lemNORM}\cite[Proposition 2.2.14, p.\ 35]{Tho03} Let ${\mathcal{M}}$ be a multiplication operator on $L^p(\Omega,X)$ induced by the function $M(\cdot)$ as in Definition \ref{defMULT}. The operator ${\mathcal{M}}$ is bounded if and only if the function $M(\cdot)$ is essentially bounded{, i.e. if the function $M(\cdot)$ is bounded up to a set of a measure zero. In this case}, we have that \begin{align*} \|{\mathcal{M}}\| &= \essup_{s\in\Omega}{\|M(s)\|} \\ &:= \inf\Bigl\{ C \geq 0\ :\ \mu\bigl(\left\{s\in \Omega\ :\ \|M(s)\| > C\right\}\bigr) = 0 \Bigr\}.\\ \end{align*} \end{lem} We are interested in the extent to which stability properties of the pointwise semigroups determine stability properties (see the sections below) of the corresponding multiplication semigroup. This is not always the case as is illustrated by a simple modification of Zabzyk's classical counterexample to the spectral mapping theorem for $C_0$-semigroups (cf. \cite{enna00} p. 273, Counterexample 3.4). Other examples are discussed in \cite[Section 2]{CIZ09}.
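Before turning to spectral theory, the following elementary illustration (a sketch added here; it is not one of the examples of \cite{CIZ09}) shows what can go wrong: take $\Omega=(0,1]$ with Lebesgue measure, $X=\mathbb{C}$ and $A(s):=-s$, so that $e^{tA(s)}=e^{-ts}$. Each pointwise semigroup satisfies $\|e^{tA(s)}\|=e^{-ts}\rightarrow 0$ as $t\rightarrow\infty$, but $$ \essup_{s\in(0,1]}{\left\|e^{tA(s)}\right\|} = \essup_{s\in(0,1]}{e^{-ts}} = 1 \quad\text{ for every } t\geq 0, $$ so, by Lemma \ref{lemNORM}, $\|e^{t{\mathcal{A}}}\|=1$ for all $t\geq 0$: the pointwise decay in operator norm is not inherited by the multiplication semigroup on $L^p((0,1],\mathbb{C})$. The results below make precise which stability properties do transfer, and in which direction.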
2,043
15,560
en
train
0.105.1
\section{Eigenvalues of a multiplication operator} Since many stability properties can be characterised by spectral properties (see, for instance, \cite[Chapter IV and V]{enna00}), we start by investigating the relation between the pointwise spectra $\sigma(M(s))$ with $s\in\Omega$ and the global spectrum $\sigma({\mathcal{M}})$. As a first and important step, we characterise the eigenvalues of ${\mathcal{M}}$ via $M(s)$ with $s\in\Omega$. We take note of the fact that the nontrivial implication in the following theorem is essentially a selector result. See, for instance, \cite{KurRyll_65} for a standard selector theorem. It is possible to present a proof based on methods from this theory, but our proof is more elementary and not less elegant. We would like to mention that the idea of the following proof has been sparked by discussions with Hans Zwart. \begin{thm}\lambdabel{thmPtSpM} For a multiplication operator ${\mathcal{M}}$ on $L^p\left(\Omega,X\right)$ with $1\leq p\leq\infty$ induced by the pointwise operators $\{M(s)\ :\ s\in\Omega\}$ and an arbitrary $\lambdambda \in \mathbb{C}$, the following equivalence holds: $$ \lambda \in \PtSp{{\mathcal{M}}} \Longleftrightarrow \lambda \in \PtSp{M(s)} \mbox{ for $s \in Z$, } $$ where $Z$ is a measurable subset of $\Omega$ with $\mu(Z)>0$. \end{thm} \begin{proof} Assume that the pointwise operators $M(s)$ are injective, i.e., the kernels of $M(s)$ are zero for almost all $s\in\Omega$. Choose an arbitrary function $0\nonumberot=f\in L^p(\Omega,X)$. It is clear that ${\mathcal{M}} f \nonumberot= 0$, hence the kernel of ${\mathcal{M}}$ is zero. Assume now that $M(s)$ is not injective for all $s$ in a set $N$ such that $\mu(N) > 0$. Without loss of generality, take $N = \Omega$. \begin{description} \item[{Idea of proof}] \\ We will construct a sequence of countably-valued measurable functions $(f_m)$ in $L^p(\Omega,X)$, converging pointwise to some non-zero function such that the sequence of functions $({\mathcal{M}} f_m)$ converges to zero in norm. Choose $$ W := \{w_n : n\in\mathbb{N}\} $$ as a countable dense subset of the unit sphere of $X$. We construct for every $m\in\mathbb{N}$ countably many disjoint, measurable sets $\tilde\Omega_{(n_1,n_2,\hdots,n_m)}$ indexed by $(n_1,n_2,\hdots,n_m) \in \prod_{k=1}^m\mathbb{N}$ such that \begin{equation} \lambdabel{eq:normM} \|M(s)w_{n_m}\| \leq \frac{1}{m} \text{ for all } s \in \tilde\Omega_{(n_1,n_2,\hdots,n_m)}, \end{equation} \begin{align}\lambdabel{eq:inclusion} \Omega = \bigcup_{n_1\in\mathbb{N}} \bigcup_{n_2\in\mathbb{N}}\cdots \bigcup_{n_m\in\mathbb{N}} \tilde\Omega_{(n_1,n_2,\hdots,n_m)}, \end{align} and \begin{equation} \lambdabel{eq:subset} \tilde\Omega_{(n_1,n_2,\hdots,n_m)} \subseteq \tilde\Omega_{(n_1,n_2,\hdots,n_{m-1})}. \end{equation} Furthermore, if $s \in \tilde\Omega_{(n_1,n_2,\hdots,n_m)}$ then, for every $m \leq r \in \mathbb{N}$, there exists a convergent sequence $(a_k)_{k\in\mathbb{N}} \subseteq W$ with $a_k = w_{n_k}$ for $1 \leq k \leq m$, such that $\| M(s)a_k \| \leq \frac{1}{k}$ for $1 \leq k \leq r$. Then, for every $m\in\mathbb{N}$, we define $f_m$ as $$ f_m := \sum_{(n_1,n_2,\hdots,n_{m}) \in \prod_{k=1}^{m}\mathbb{N}} {\mathbbm{1}_{\tilde\Omega_{(n_1,n_2,\hdots,n_m)}} w_{n_m}}. $$ For every $s\in\Omega$, the sequence $$ (f_m(s))_{m\in\mathbb{N}} = (w_{n_m})_{m\in\mathbb{N}} $$ converges, $\|f_m(s)\| = \| w_{n_m} \| = 1 $ for all $m\in\mathbb{N}$ and $$ \| M(s)f_m(s) \| = \| M(s)w_{n_m} \| \leq \frac{1}{m}. 
$$ \item[{Definitions}] \\ Consider the following set of (convergent) sequences $$ \mathcal{W} := \left\{ \begin{array}{ll} (a_{k})_{k\in\mathbb{N}} : & a_{k} \in W \text{ for all } k\in\mathbb{N}, \text{ and } \\ & \text{ for each } q\in\mathbb{N} , \|a_q - a_{q+j}\| < \frac{1}{q} \\ & \text{ for all } j\in\mathbb{N} \end{array} \right\}. $$ Note that, to each sequence $(a_{k})_{k\in\mathbb{N}} \in \mathcal{W}$, there is a corresponding function \begin{equation} \lambdabel{eq:beta} \beta : \mathbb{N} \rightarrow \mathbb{N} \end{equation} such that $a_k = w_{\beta(k)}$ for all $k\in\mathbb{N}$. Hence we can write $(a_k) = (w_{\beta(k)})$. For each $m\in\mathbb{N}$ define the set $\mathcal{W}_{(n_1,n_2,\ldots,n_m)} \subseteq \mathcal{W}$ for each $m$-tuple $(n_1,n_2,\ldots,n_m) \in \prod_{j=1}^{m}\mathbb{N}$ as $$ \mathcal{W}_{(n_1,n_2,\ldots,n_m)} := \left\{ \begin{array}{ll} (a_{k})_{k\in\mathbb{N}} \in \mathcal{W} : & a_{k} = w_{n_k} \text{ for } 1 \leq k \leq m \end{array} \right\}. $$ Now, for all $n,j \in \mathbb{N}$, define the subsets $$ \Omega_{n,j} := \left\{s \in \Omega : \|M(s)w_n\| \leq \frac{1}{j}\right\} $$ of $\Omega$. \item[{Construction}] \\ Take $m=1$. We now define the sets $\tilde\Omega_{(n_1)}$ for each $n_1 \in \mathbb{N}$ as \begin{align} \Omega_{(n_1)} &:= \left\{ \begin{array}{ll} s \in \Omega\ : & \text{ for each } r\in\mathbb{N} \text{ there exists a sequence } \\ & (a_{k})_{k\in\mathbb{N}} = (w_{\beta(k)})_{k\in\mathbb{N}} \in \mathcal{W}_{(n_1)} \\ & \text{ such that } s\in\bigcap_{j=1}^r{\Omega_{\beta(j),j}} \end{array} \right\} \nonumberotag\\ &\ = \bigcap_{r \in \mathbb{N}} \left( \bigcup_{ (w_{\beta(k)})_{k\in\mathbb{N}} \in \mathcal{W}_{(n_1)} } \left( \bigcap_{j=1}^r{\Omega_{\beta(j),j}} \right) \right) \lambdabel{eq:cntble} \intertext{and} \tilde\Omega_{(n_1)} &:= \Omega_{(n_1)} \begin{align}ckslash \bigcup_{q < n_1}\tilde\Omega_{(q)}.\nonumberotag \end{align} The set $ \Omega_{(n_1)}$ is the countable intersection of the union of certain measurable sets of the form $\bigcap_{j=1}^r{\Omega_{\beta(j),j}}$. As we see in \eqref{eq:cntble} above, this union consists of those sets, for each of which the corresponding sequence $ (w_{\beta(j)})_{j\in\mathbb{N}}$ is in $\mathcal{W}_{(n_1,n_2,\ldots,n_m)} $ which is an uncountable set. However, there are only countably many sets of the form $\bigcap_{j=1}^r{\Omega_{\beta(j),j}}$ for a fixed $r\in\mathbb{N}$ and hence the union can be written as a countable union. Hence $\Omega_{(n_1)}$ and also $\tilde\Omega_{(n_1)}$ are measurable. It now holds that \begin{equation}n \|M(s)w_{n_1}\| \leq 1 \end{equation}n for all $s \in \tilde\Omega_{(n_1)}$ and \begin{align*} \Omega = \bigcup_{n_1\in\mathbb{N}} \tilde\Omega_{(n_1)}. \end{align*} Furthermore, if $s \in \tilde\Omega_{(n_1)}$ then, for every $r\in\mathbb{N}$, there exists a (convergent) sequence $(a_k)_{k\in\mathbb{N}} \in \mathcal{W}$ with $a_1 = w_{n_1}$ and $\| M(s)a_k \| \leq \frac{1}{k}$ for $1 \leq k \leq r$. Now define the function $$ f_1 := \sum_{n_1 \in \mathbb{N}}\mathbbm{1}_{\tilde\Omega_{(n_1)}} w_{n_1}. $$ Observe that $\|f_1\| = 1$ and $\|{\mathcal{M}} f_1\| \leq 1$ as desired. \vspace*{5pt} The recursive definition of the sets $\tilde\Omega_{(n_1,n_2,\hdots,n_m)}$ now follows. 
Let $m \geq 2$ and assume that we have disjoint sets $\tilde\Omega_{(n_1,n_2,\hdots,n_{m-1})}$ with $(n_1,n_2,\hdots,n_{m-1}) \in \prod_{k=1}^{m-1}\mathbb{N}$ such that \begin{equation}n \|M(s)w_{n_{m-1}}\| \leq \frac{1}{m-1} \text{ for all } s \in \tilde\Omega_{(n_1,n_2,\hdots,n_{m-1})}, \end{equation}n \begin{align*} \Omega = \bigcup_{n_1\in\mathbb{N}} \bigcup_{n_2\in\mathbb{N}}\cdots \bigcup_{n_{m-1}\in\mathbb{N}} \tilde\Omega_{(n_1,n_2,\hdots,n_{m-1})}, \end{align*} and \begin{equation}n \tilde\Omega_{(n_1,n_2,\hdots,n_{m-1})} \subseteq \tilde\Omega_{(n_1,n_2,\hdots,n_{m-2})}. \end{equation}n Moreover, for every $s \in \tilde\Omega_{(n_1,n_2,\hdots,n_{m-1})}$ and every $m-1 \leq r \in \mathbb{N}$, there exists a sequence $(a_k) \in \mathcal{W}$ with $a_k = w_{n_k}$ for $1 \leq k \leq m-1$ and $\| M(s)a_k \| \leq \frac{1}{k}$ for $1 \leq k \leq r$. Then, for every $(n_1,n_2,\hdots,n_{m-1}) \in \prod_{k=1}^{m-1}\mathbb{N}$ and $n_m \in \mathbb{N}$, define the measurable sets \begin{align*} \Omega_{(n_1,n_2,\hdots,n_m)} &:= \left\{ \begin{array}{ll} s \in \Omega\ : &\text{ for all } r\in\mathbb{N} \text{ there exists a sequence} \\ & (a_k)_{k\in\mathbb{N}} = (w_{\beta(k)})_{k\in\mathbb{N}}\in \mathcal{W}_{(n_1,n_2,\hdots,n_m)} \\ & \text{ such that } s\in\bigcap_{j=1}^r{\Omega_{\beta(j),j}} \end{array} \right\}\\ \intertext{and} \tilde\Omega_{(n_1,n_2,\hdots,n_m)} &:= \tilde\Omega_{n_1,n_2,\hdots,n_{m-1}} \cap \left( \Omega_{(n_1,n_2,\hdots,n_m)} \begin{align}ckslash \bigcup_{q < n_m}\tilde\Omega_{(n_1,n_2,\hdots,n_{m-1},q)}\right). \end{align*} For every $\tilde\Omega_{(n_1,n_2,\hdots,n_m)}$, the properties \eqref{eq:normM}, \eqref{eq:inclusion} and \eqref{eq:subset} hold and if $s \in \tilde\Omega_{(n_1,n_2,\hdots,n_m)}$ then, for every $m \leq r \in \mathbb{N}$, there exists a sequence $(a_k) \in \mathcal{W}$ with $a_k = w_{n_k}$ for $1 \leq k \leq m$ and $\| M(s)a_k \| \leq \frac{1}{k}$ for $1 \leq k \leq r$. Define the function $$ f_m := \sum_{(n_1,n_2,\hdots,n_m) \in \prod_{k=1}^m\mathbb{N}}{\mathbbm{1}_{\tilde\Omega_{(n_1,n_2,\hdots,n_m)}} w_{n_m}}. $$ Thus the sequence $(f_m)_{m\in\mathbb{N}}$ has been constructed with the desired properties.
3,941
15,560
en
train
0.105.2
Indeed, for every $m\in\mathbb{N}$, we have $\|f_m\| \geq 1$, because of \eqref{eq:inclusion} and the fact that, if $s\in \tilde\Omega_{(n_1,n_2,\hdots,n_m)}$, then $\|f_m(s)\| = \|w_{n_m}\| = 1$. We also have that $\|{\mathcal{M}} f_m\| \leq \frac{1}{m}$. Furthermore, the sequences $(f_m(s))_{m\in\mathbb{N}}$ are convergent for every $s\in\Omega$. The pointwise limit $f$ is measurable, nonzero and in the kernel of ${\mathcal{M}}$. \end{description} \end{proof}
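To make the statement concrete in the simplest situation (a sketch added here, not part of the original argument): if $X=\mathbb{C}$, so that each $M(s)$ is multiplication by a scalar $m(s)$ with $s\mapsto m(s)$ measurable, then for any $\lambda\in\mathbb{C}$ $$ \lambda \in \PtSp{{\mathcal{M}}} \Longleftrightarrow \mu\bigl(\{s\in\Omega\ :\ m(s)=\lambda\}\bigr)>0, $$ since $({\mathcal{M}}-\lambda)f=0$ forces $f$ to vanish almost everywhere outside $\{m=\lambda\}$, while any nonzero $f\in L^p(\Omega,\mathbb{C})$ supported in a finite-measure subset of $\{m=\lambda\}$ is an eigenfunction.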
1,514
15,560
en
train
0.105.3
We obtain the following corollary directly from the above theorem by using the spectral mapping theorem for the point spectrum of the resolvent of a closed operator (see, for instance, \cite[Chapter IV, Theorem 1.13]{enna00}). \begin{cor}\label{corPtSpA} If $\left(e^{t{\mathcal{A}}}\right)_{t\geq 0}$ is a multiplication semigroup with generator ${\mathcal{A}}$ and $\lambda\in\mathbb{C}$, then the following equivalence holds. $$ \lambda \in \PtSp{{\mathcal{A}}} \Longleftrightarrow \lambda \in \PtSp{A(s)} \mbox{ for $s \in Z$, } $$ where $Z$ is a measurable subset of $\Omega$ with $\mu(Z)>0$. \end{cor}
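As a brief illustration (added here, anticipating Example \ref{exm1} below): if $\Omega=[0,1]$, $X=\mathbb{C}$ and $A(s)=is$, then $\PtSp{A(s)}=\{is\}$ for every $s$, so for any fixed $\lambda\in\mathbb{C}$ the set $\{s\in[0,1]\ :\ \lambda\in\PtSp{A(s)}\}$ contains at most one point and is therefore a null set. The corollary then gives $\PtSp{{\mathcal{A}}}=\emptyset$, although every pointwise generator has nonempty point spectrum.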
204
15,560
en
train
0.105.4
\section{Uniform Stability} This short section is devoted to the strongest notion of stability, namely uniform stability. \begin{defn}\cite[Definition V.1.1, p. 296]{enna00} A $C_0$-semigroup $\SG$ of operators on a Banach space is called {\em uniformly stable} if $\|T(t)\|\rightarrow 0$ as $t\rightarrow\infty$. \end{defn} Lemma \ref{lemNORM} immediately leads to the following characterisation of uniform stability of multiplication semigroups. \begin{thm} \label{thmUniform} Let $\bigl(e^{t{\mathcal{A}}}\bigr)_{t\geq 0}$ be a multiplication semigroup on $L^p(\Omega,X)$ with $1\leq p \leq \infty$. Then the following are equivalent. \begin{enumerate} \item[(a)] The semigroup $\bigl(e^{t{\mathcal{A}}}\bigr)_{t\geq 0}$ is uniformly stable. \item[(b)] The pointwise semigroups converge to $0$ in norm, uniformly in $s$, i.e. $$ \essup_{s\in\Omega}{\left\|e^{t A(s)}\right\|} \rightarrow 0\ \text{ as }\ t\rightarrow\infty. $$ \item[(c)] At some $t_0 > 0$, the spectral radii $\sr{e^{t_0 A(s)}}$ of the pointwise semigroups satisfy $\essup_{s\in\Omega}{\sr{e^{t_0 A(s)}}} < 1$. \item[(d)] There exist constants $M \geq 1$ and $\epsilon > 0$ such that $\|e^{tA(s)}\| \leq Me^{-t\epsilon}$ for all $t\geq 0$ and almost all $s\in\Omega$. \end{enumerate} \end{thm} \begin{rem} Note that, due to Remark 6, the uniform stability of a multiplication semigroup is independent of the value of $p$ as long as $1 \leq p \leq \infty$. \end{rem} \section{Strong Stability} In this section we study strong stability in our context. \begin{defn}\cite[Definition V.1.1, p. 296]{enna00} A $C_0$-semigroup $\SG$ of operators on a Banach space is called {\em strongly stable} if $\|T(t)x\|\rightarrow 0$ as $t\rightarrow\infty$ for all $x \in X$. \end{defn} We now show that a multiplication semigroup is strongly stable if and only if the pointwise semigroups are uniformly bounded in $s$ and strongly stable almost everywhere. The backward implication was proved by Curtain-Iftime-Zwart in \cite{CIZ09} for the special case where $\Omega=[0,1],\ p=2$ and $X= \mathbb{C}^n$. The other implication was conjectured in the same paper and has since been proved by Hans Zwart (see the Theorem on p.~3 in \cite{Zwart}), again for $X=\mathbb{C}^n$. \begin{thm} \label{thmSG_STRONG} Suppose that the multiplication operator ${\mathcal{A}}$ generates a $C_0$-semigroup of multiplication operators $(e^{t{\mathcal{A}}})_{t \geq 0}$ on $L^p(\Omega,X)$ with $1 \leq p < \infty$ such that $\|e^{t{\mathcal{A}}}\| \leq C$ for all $t\geq 0$ and some constant $C >0$. Then the following are equivalent. \begin{enumerate} \item[(a)] The semigroup $\left(e^{t{\mathcal{A}}}\right)_{t\geq 0}$ is strongly stable. \item[(b)] The pointwise semigroups $\left(e^{tA(s)}\right)_{t\geq 0}$ on $X$ are strongly stable for almost all $s\in \Omega$. \end{enumerate} \end{thm} \begin{proof} Note that, by Lemma \ref{lemNORM}, we have $\|e^{t{\mathcal{A}}}\| \leq C$ for all $t\geq 0$ if and only if $\|e^{tA(s)}\| \leq C$ for almost all $s\in \Omega$ and for all $t\geq 0$. (a)\,$\implies$\,(b): Assume that $\|e^{t{\mathcal{A}}}f\|\rightarrow 0$ as $t\rightarrow\infty$ for all $f\in L^p(\Omega,X)$. Choose an arbitrary $x\in X$ and define the function $f_x:\Omega\rightarrow X$ by $$ f_x(s):= x $$ for all $s\in\Omega$. Since $\Omega$ is $\sigma$-finite, we can write $\Omega=\cup_{n\in\mathbb{N}}{\Omega_n}$ where $\mu(\Omega_n) < \infty$ for every $n\in\mathbb{N}$. Then $\mathbbm{1}_{\Omega_n}{f_x} \in L^p(\Omega,X)$ for every $n\in\mathbb{N}$.
By assumption, we have that $$ \left\| e^{t{\mathcal{A}}} \bigl( \mathbbm{1}_{\Omega_n}{f_x} \bigr) \right\|\rightarrow 0 \quad \text{as} \quad t\rightarrow\infty $$ for every $n\in\mathbb{N}$. Therefore, the Riesz subsequence theorem (see e.g. proof of \cite[Chapter I, Theorem 3.12]{RudAna}) implies that, for every $n\in\mathbb{N}$, there exists a sequence $(t_k)_{k \in \mathbb{N}} \subset \mathbb{R}_+$ tending to infinity as $k\rightarrow\infty$, such that the functions $e^{t_k{\mathcal{A}}}\bigl( \mathbbm{1}_{\Omega_n}{f_x} \bigr)$ converge pointwise to $0$, almost everywhere, i.e. \begin{align}\lambdabel{cvg} \left\|e^{t_kA(s)} \bigl( \mathbbm{1}_{\Omega_n}(s){f_x}(s) \bigr) \right\|_{ X} = \left\|e^{t_kA(s)}x\right\|_{ X} \rightarrow 0 \quad \text{as} \quad k\rightarrow\infty \end{align} for $s\in \Omega_n\begin{align}ckslash N_{(x,n)}$, where $N_{(x,n)}$ is a subset of $\Omega$ of measure $0$. We now show that \eqref{cvg} implies that $\left\|e^{tA(s)}x\right\|_{ X} \rightarrow 0$ as $t\rightarrow\infty$ for $s\in \Omega_n\begin{align}ckslash N_{(x,n)}$. Let $\epsilon>0$. Then $\left\|e^{t_kA(s)}x\right\|_{ X}<\frac{1}{C}\epsilon$ for all $k$ greater than or equal to some $k_\epsilon\in\mathbb{N}$. For each $t>t_{k_\epsilon}$, we can write $t=t_{k_\epsilon}+r$ where $r\in\mathbb{R}_+$. Then we have that \begin{align*} \left\|e^{tA(s)}x\right\|_{ X} &= \left\|e^{(r + t_{k_\epsilon})A(s)}x\right\|_{ X} \\ &= \left\| \left(e^{rA(s)}\right) \left(e^{t_{k_\epsilon}A(s)}x\right) \right\|_{ X} \\ &\leq \left\|e^{rA(s)}\right\| \left\|e^{t_{k_\epsilon}A(s)}x\right\|_{ X} \\ &\leq C \left\|e^{t_{k_\epsilon}A(s)}x\right\|_{ X} \\ &\leq C \frac{1}{C}\epsilon \\ &= \epsilon. \end{align*} Hence, \begin{align}\lambdabel{cvg2} \left\|e^{tA(s)}x\right\|_{ X}\rightarrow 0 \quad \text{as} \quad t\rightarrow\infty \end{align} for all $s\in \Omega_n\begin{align}ckslash N_{(x,n)}$. It follows that \eqref{cvg2} holds for all $s\in\Omega\begin{align}ckslash{\left(\cup_{n\in\mathbb{N}}N_{(x,n)} \right)}$. Observe that $N_x := \cup_{n\in\mathbb{N}}N_{(x,n)}$, being a countable union of null sets, is also a null set and hence we have the convergence \eqref{cvg2} almost everywhere. For each $x \in X$, \eqref{cvg2} holds for all $s\in \Omega\begin{align}ckslash N_x$. Now, choose any countable dense subset $C\subset X$. Then \eqref{cvg2} holds for each $x\in C$, for all $s\in \Omega\begin{align}ckslash\left(\cup_{x\in{C}}{N_x}\right)$. Note that $\cup_{x\in{C}}{N_x}$ is also null set. So we have that \eqref{cvg2} holds for all $x$ in a dense subset of $ X$, almost everywhere. Because the semigroups $\left(e^{tA(s)}\right)_{t\geq 0}$ are bounded, it follows that \eqref{cvg2} holds for all $x\in X$, almost everywhere, which is what we wanted to prove. (b)\,$\implies$\,(a): Assume that $\left\|e^{tA(s)}x\right\|_{ X}\rightarrow 0$ as $t\rightarrow\infty$ for all $x\in X$ and almost all $s\in \Omega$. Choose an arbitrary function $f\in L^p(\Omega, X)$. Then $$ \left\|e^{tA(s)}f(s)\right\|_{ X}^p \leq C^p\left\|f(s)\right\|_{ X}^p $$ for almost all $s\in \Omega$. Hence, the functions $\left\|e^{t{\mathcal{A}}}f(\cdot)\right\|_{ X}^p$ are dominated by the integrable function $\left\|Mf(\cdot)\right\|_{ X}^p$. Because of our assumption, we know that\\ $\left\|e^{tA(s)}f(s)\right\|_{ X}^p\rightarrow 0$ as $t\rightarrow\infty$ for almost all $s\in \Omega$. 
It now follows from Lebesgue's dominated convergence theorem that $$\int_{\Omega}{\left\|e^{tA(s)}f(s)\right\|_{ X}^p{\text d}s}=\left\|e^{t{\mathcal{A}}}f\right\|^p\rightarrow 0\quad\text{ as }\quad t\rightarrow\infty.$$ Thus the proof is complete. \end{proof} \begin{rem} As before, strong stability of a multiplication semigroup is independent of the value $p$, as long as $1\leq p < \infty$. \end{rem} \section{Weak Stability} The concept of weak stability is the most difficult to investigate in this context. We include a trivial example in the scalar case, where the multiplication semigroup is weakly stable, but where none of the pointwise semigroups are. \begin{exm}\lambdabel{exm1} We use the notation introduced in Remark \ref{remNOT} with $\Omega=[0,1]$, $X=\mathbb{C}$ and $A(s):=is$ for all $s\in [0,1]$. Then the semigroup $\left(e^{t{\mathcal{A}}}\right)_{t\geq 0}$ is weakly stable, but none of the pointwise semigroups are. \end{exm} We show in the proposition below that the weak stability of almost every pointwise semigroup $\left(e^{tA(s)}\right)_{t\geq 0}$ does imply that the multiplication semigroup $\left(e^{t{\mathcal{A}}}\right)_{t\geq 0}$ is weakly stable. \begin{prop} \lambdabel{propWeak} Let $\left(e^{t{\mathcal{A}}}\right)_{t\geq 0}$ be a bounded multiplication semigroup on the space $L^p(\Omega,X)$, with $1\leq p<\infty$, where $X$ is a reflexive Banach space. If the pointwise semigroups $\left(e^{tA(s)}\right)_{t\geq 0}$ are weakly stable for almost all $s\in\Omega$, then the multiplication semigroup $\left(e^{t{\mathcal{A}}}\right)_{t\geq 0}$ is weakly stable. \end{prop} \begin{proof} Assume that the pointwise semigroups $\left(e^{t A(s)}\right)_{t\geq 0}$ are weakly stable for almost all $s\in\Omega$. Then there exists a null set $N\subset\Omega$ such that $$ \psi\bigl(e^{tA(s)}x\bigr) \rightarrow 0 \text{ as } t\rightarrow\infty $$ for all $x\in X$, $\psi\in X'$, and for all $s\in\Omega\begin{align}ckslash N$, where $X'$ denotes the continuous dual space of $X$. Since $X$ is a reflexive Banach space, the dual space of $L^p(\Omega,X)$ is $L^q(\Omega,X')$, where $\frac{1}{p} + \frac{1}{q} = 1$ (see \cite[Theorem 8.20.5, p.\,607]{Don95}). Choose arbitrary functions $f\in L^p(\Omega,X)$ and $g\in L^q(\Omega,X')$ (or $g\in L^\infty(\Omega,X')$, if $p=1$). Then \begin{align*} \left| g(s) \biggl( e^{tA(s)}f(s) \biggr) \right| & \leq \|g(s)\|_{X'} \|e^{t A(s)}f(s)\|_X \\ & \leq C \|g(s)\|_{X'} \|f(s)\|_X\\ \end{align*} almost everywhere. Since the real valued function $\|g(\cdot)\|_{X'}$ is in $L^q$ (or in $L^\infty$) and the function $\|f(\cdot)\|_X$ is in $L^p$, we have that the function $C \|g(\cdot)\| \|f(\cdot)\|_X$ is in $L^1$ and hence integrable. The functions $ g(\cdot)\Bigl(e^{tA(\cdot)}f(\cdot)\Bigr) $ are integrable, converge pointwise almost everywhere to $0$ as $t\rightarrow \infty$ and are dominated by the function $C \|g(\cdot)\|_{X'} \|f(\cdot)\|_X$. It follows from Lebesgue's Dominated Convergence Theorem that $$ \int_\Omega{ g(s)\bigl(e^{tA(s)}f(s)\bigr) \text{d}}\mu(s)\rightarrow 0\text{\ \ as\ \ }t\rightarrow\infty. $$ Hence, the semigroup $\left(e^{t{\mathcal{A}}}\right)_{t\geq 0}$ is weakly stable. \end{proof} \begin{rem} This result is independent of the value of $p$, as long as $1\leq p<\infty$. \end{rem}
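One way to see the weak stability claimed in Example \ref{exm1} (a short justification added here): for $f\in L^p([0,1])$ and $g\in L^q([0,1])$ the product $gf$ lies in $L^1([0,1])$ by H\"older's inequality, so the Riemann--Lebesgue lemma yields $$ \int_0^1{g(s)e^{\mathrm{i}ts}f(s)\ \text{d}s} \rightarrow 0 \quad\text{ as }\quad t\rightarrow\infty, $$ which is precisely the weak convergence $e^{t{\mathcal{A}}}f\rightarrow 0$; on the other hand, $|e^{\mathrm{i}ts}|=1$ for all $t$, so none of the pointwise semigroups can be weakly stable.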
3,890
15,560
en
train
0.105.5
\section{Almost Weak Stability} We now investigate the almost weak stability of the multiplication semigroup $\bigl(e^{t{\mathcal{A}}}\bigr)_{t\geq 0}$ and of the pointwise semigroups $\bigl(e^{t A(s)}\bigr)_{t\geq 0}$ with $s\in\Omega$. In order to define this stability concept, we mention that the density of a Lebesgue measurable subset $Z$ of $\mathbb{R}_+$ is $d := \lim_{t\rightarrow\infty}{\frac{\mu(Z\cap[0,t])}{t}}$ ($\mu$ being the Lebesgue measure), whenever the limit exists. \begin{defn}[Almost Weak Stability] Let $\SG$ be a $C_0$-semigroup on a reflexive Banach space $X$.
Then $\SG$ is called {\em almost weakly stable} if there exists a Lebesgue measurable set $Z\subset \mathbb{R}_+$ of density $1$ such that $$ T(t) \rightarrow 0 \quad \text{as} \quad t \to \infty \,,\ t\in Z, $$ in the weak operator topology. \end{defn} The main result of this section is a characterisation of almost weak stability of a multiplication semigroup via Ces\`aro stability (see Definition \ref{defCesaroStable}) of the pointwise semigroups. This is remarkable since it is not true that almost weak stability of a multiplication semigroup and that of the pointwise semigroups are equivalent. We introduce Ces\`aro stability which seems to be the appropriate concept. \begin{defn}[Ces\`aro mean; Mean Ergodic Semigroup, Mean Ergodic Projection](\cite[p.\,20, Chapter I, Definition 2.18, 2.20]{eis10}) Let $\SG$ be a $C_0$-semigroup of operators on a Banach space $X$ with generator $\bigl(A,D(A)\bigr)$. For every $t > 0$, the {\em Ces\`aro mean} $S(t)\in\Lop[X]$ is defined by $$ S(t)x := \frac{1}{t}\int_{0}^t{T(s)x\text{d}s} $$ for all $x\in X$. The semigroup $\SG$ is called {\em mean ergodic} if the Ces\`aro means converge pointwise as $t$ tends to $\infty$. In this case the operator $P \in \Lop[X]$ defined by $$ Px := \lim_{t\to \infty}{S(t)x} $$ is called the {\em mean ergodic projection} corresponding to $\SG$. \end{defn} \begin{rem}\cite[p.\,21-22; Chapter I; Remark 2.21, Proposition 2.24 and Theorem 2.25]{eis10} The mean ergodic projection $P$ commutes with $\SG$ and is indeed a projection. A bounded $\SG$ is mean ergodic if and only if $X = \ker{A} \oplus \cl[\ran{A}]$, where $\ker{A}$ and $\ran{A}$ denote, respectively, the kernel and range of $A$. One also has that $\ran{P}=\ker{A}=\fix{\SG}$ and $\ker{P}=\cl[\ran{A}]$ where $\fix{\SG}$ is the fixed space of $\SG$. \end{rem} \begin{defn}[Ces\`aro Stability of Semigroups]\label{defCesaroStable} A semigroup $\SG$ is called {\em Ces\`aro stable} if the Ces\`aro means converge to 0 as $t\rightarrow\infty$, i.e. the mean ergodic projection is the $0$ operator. \end{defn} \begin{thm}\label{thmAwsSg} Let $X$ be a reflexive, separable Banach space. Take $({\mathcal{A}},D({\mathcal{A}}))$ to be the generator of a bounded multiplication semigroup $\bigl(e^{t{\mathcal{A}}}\bigr)_{t\geq 0}$ on $L^p(\Omega,X)$ with $1<p<\infty$, and let $A(s)$ with $s\in\Omega$ be the family of operators corresponding to ${\mathcal{A}}$. The semigroup $\bigl(e^{t{\mathcal{A}}}\bigr)_{t\geq 0}$ is almost weakly stable if and only if, for each $ir \in i\mathbb{R}$, the rescaled pointwise semigroups $\bigl(e^{irt}e^{tA(s)}\bigr)_{t\geq 0}$ are Ces\`aro stable, almost everywhere. \end{thm} \begin{rem} If the pointwise semigroups are almost weakly stable, almost everywhere, then the rescaled pointwise semigroups $\bigl(e^{irt}e^{tA(s)}\bigr)_{t\geq 0}$ are Ces\`aro stable for each $ir \in i\mathbb{R}$ almost everywhere which implies, by the above theorem, that the multiplication semigroup $\bigl(e^{t{\mathcal{A}}}\bigr)_{t\geq 0}$ is almost weakly stable. \end{rem} In order to prove Theorem \ref{thmAwsSg} we need the following characterisation of almost weak stability of a semigroup on a reflexive Banach space via the point spectrum of its generator. \begin{lem}{\cite[Chapter II, Theorem 4.1]{eis10}}\label{lemAwsSg} Let $\bigl(T(t)\bigr)_{t\geq 0}$ be a bounded $C_0$-semigroup with generator $(A, D(A))$ on a reflexive, separable Banach space $X$. Then the following are equivalent. 
\begin{enumerate} \item $\bigl(T(t)\bigr)_{t\geq 0}$ is almost weakly stable. \item $\PtSp{A}\cap i\mathbb{R} = \emptyset$, where $\PtSp{A}$ is the point spectrum of $A$. \end{enumerate} \end{lem} We are now ready to prove Theorem \ref{thmAwsSg} by using the above characterisation as well as the spectral mapping result of Theorem \ref{thmPtSpM}. \begin{proof} Using Lemma \ref{lemAwsSg}, almost weak stability of $\bigl(e^{t{\mathcal{A}}}\bigr)_{t\geq 0}$ is equivalent to $\PtSp{{\mathcal{A}}}\cap i\mathbb{R} = \emptyset$. By Corollary \ref{corPtSpA}, this is equivalent to the following: for each $ir\in i\mathbb{R}$, $\mu\bigl(\{ s\in\Omega | ir \in \PtSp{A(s)} \}\bigr) = 0$. This in turn is equivalent to the fact that, for each $ir\in i\mathbb{R}$, the rescaled pointwise semigroups $\bigl(e^{irt}e^{tA(s)}\bigr)_{t\geq 0}$ are Ces\`aro stable almost everywhere. \end{proof} \begin{exm} Once again, we consider the multiplication semigroup of Example \ref{exm1}. In this example we have for the point spectrum that $\PtSp{A(s)} \cap i\mathbb{R} \neq\emptyset$ for each $s\in\Omega$, since $is\in\PtSp{A(s)}$ for all $s\in[0,1]$. It follows that none of the pointwise semigroups are almost weakly stable, whereas the point spectrum of ${\mathcal{A}}$ is empty, which means that the semigroup $\bigl(e^{t{\mathcal{A}}}\bigr)_{t\geq 0}$ is almost weakly stable. \end{exm} \begin{rem} Since the stability is determined by the pointwise semigroups, the value of $p$ is irrelevant, as long as $1<p<\infty$. \end{rem} \section{Stability of Multiplication Operators} It is possible to develop analogous results for time-discrete semigroups of the form $\{{\mathcal{M}}^n \mid \ n \in \mathbb{N}\}$ for a bounded multiplication operator ${\mathcal{M}}$ on $L^p(\Omega,X)$. The relevant stability properties are the following. \begin{defn}Let $T$ be an operator on a Banach space $X$. Then $T$ is \begin{itemize} \item[(i)] {\em uniformly stable} if $\left\|T^n\right\|\longrightarrow 0$ as $n\rightarrow\infty$. \item[(ii)] {\em strongly stable} if $\left\|T^nf\right\|\longrightarrow 0$ as $n\rightarrow\infty$ for all $f \in X$. \item[(iii)] {\em weakly stable} if $\psi\left(T^nf\right)\longrightarrow 0$ as $n\rightarrow\infty$ for all $f \in X$ and all $\psi\in X'$. \item[(iv)] {\em almost weakly stable} if $\psi(T^{n_k}f)\longrightarrow 0 \mbox{ as } k\rightarrow\infty$ for all $f \in X, \psi \in X'$, and sequences $(n_k)_{k\in\mathbb{N}} \subset \mathbb{N}$ of density 1. \end{itemize} \end{defn} Recall that the {\em density of a sequence} $(n_k)_{k\in\mathbb{N}} \subset \mathbb{N}$ is $$ d := \lim_{n\rightarrow\infty}{\frac{|\{ k\ :\ n_k < n \}|}{n}}, $$ if the limit exists and that a bounded linear operator $T$ on a Banach space is called {\em power bounded} if $\sup_{n \in \mathbb{N}}\left\|T^n\right\| < \infty$. By using methods analogous to those developed in Sections 1 -- 4 we can characterise these stability properties of a power bounded multiplication operator ${\mathcal{M}}$ through the pointwise operators $M(\cdot)$. \begin{thm} Let ${\mathcal{M}}$ be a power-bounded multiplication operator on $L^p(\Omega,X)$ with $1<p<\infty$. \begin{itemize} \item[(i)] Then ${\mathcal{M}}$ is uniformly stable if and only if $M(s)$ is uniformly stable for almost all $s \in \Omega$ and $\essup_{s\in\Omega}{\left\|M(s)^n\right\|} \rightarrow 0$ as $n\rightarrow\infty$, or, equivalently, $\essup_{s\in\Omega}{\sr{M(s)}} < 1$. 
\item[(ii)] If $\Omega$ is $\sigma$-finite, then ${\mathcal{M}}$ is strongly stable if and only if $M(s)$ is strongly stable for almost all $s \in \Omega$. \item[(iii)] If the Banach space $X$ is separable, then ${\mathcal{M}}$ is weakly stable if the pointwise operators $M(s)$ are weakly stable almost everywhere. \item[(iv)] Let the measure space $(\Omega,\Sigma,\mu)$ be separable and let $X$ be a reflexive, separable Banach space. Then ${\mathcal{M}}$ is almost weakly stable if and only if, for each $\lambda\in\mathbb{C}$ with $|\lambda| = 1$, the rescaled pointwise operators $\lambda M(s)$ are Ces\`aro stable for almost all $s \in \Omega$. \end{itemize} \end{thm} \end{document}
3,968
15,560
en
train
0.106.0
\begin{document} \title{Reducing quadrangulations of the sphere and the projective plane} \begin{abstract} We show that every quadrangulation of the sphere can be transformed into a $4$-cycle by deletions of degree-$2$ vertices and by $t$-contractions at degree-$3$ vertices. A $t$-contraction simultaneously contracts all incident edges at a vertex with stable neighbourhood. The operation is mainly used in the field of $t$-perfect graphs. \\ We further show that a non-bipartite quadrangulation of the projective plane can be transformed into an odd wheel by $t$-contractions and deletions of degree-$2$ vertices. \\ We deduce that a quadrangulation of the projective plane is (strongly) $t$-perfect if and only if the graph is bipartite. \end{abstract} \section{Introduction} For characterising quadrangulations of the sphere, it is very useful to transform a quadrangulation into a slightly smaller one. Such reductions are mainly based on the following idea: Given a class of quadrangulations, a sequence of particular face-contractions transforms every member of the class into a $4$-cycle; see eg Brinkmann et~al~\cite{BGGMTW05}, Nakamoto~\cite{Nakamoto99}, Negami and Nakamoto~\cite{NeNa93}, and Broersma et al.~\cite{BDG93}. A \emph{face-contraction} identifies two non-adjacent vertices $v_1, v_3$ of a $4$-face $v_1, v_2,v_3,v_4$ in which the common neighbours of $v_1$ and $v_3$ are only $v_2$ and $v_4$. A somewhat different approach was made by Bau et al.~\cite{BMNZ14}. They showed that any quadrangulation of the sphere can be transformed into a $4$-cycle by a sequence of deletions of degree-$2$ vertices and so called hexagonal contractions. The obtained graph is a minor of the previous graph. Both operations can be obtained from face-contractions. We provide a new way to reduce arbitrary quadrangulations of the sphere to a $4$-cycle. Our operations are minor-operations --- in contrast to face-contractions. We use deletions of degree-$2$ vertices and $t$-contractions. A \emph{$t$-contraction} simultaneously contracts all incident edges of a vertex with stable neighbourhood and deletes all multiple edges. The operation is mainly used in the field of $t$-perfection. Face-contractions cannot be obtained from $t$-contractions. We restrict ourselves to $t$-contractions at vertices that are only contained in $4$-cycles whose interior does not contain a vertex. \begin{gather} \mbox{These $t$-contractions and deletions of degree-$2$ vertices } \nonumber \sloppy \\ \mbox{ can be obtained from a sequence of face-contractions.} \label{eq:operations} \sloppy \end{gather} Figure~\ref{fig:facecon} illustrates this. The restriction on the applicable $t$-contractions makes sure that all face-contractions can be applied, ie that all identified vertices are non-adjacent and have no common neighbours besides the two other vertices of their $4$-face. \begin{figure} \caption{Face-contractions that give a deletion of a degree-$2$ vertex, a $t$-contraction at a degree-$3$ and a degree-$6$ vertex} \label{fig:facecon} \end{figure} We prove: \begin{theorem}\label{thm:plane_C4_irreducible} Let $G$ be a quadrangulation of the sphere. Then, there is a sequence of $t$-contractions at degree-$3$ vertices and deletions of degree-$2$ vertices that transforms $G$ into a $4$-cycle. During the whole process, the graph remains a quadrangulation. \end{theorem} The proof of Theorem~\ref{thm:plane_C4_irreducible} can be found in Section~\ref{sec:quadrangulations}. 
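The reduction of Theorem~\ref{thm:plane_C4_irreducible} can be followed on a small example; the details are routine and we only sketch them. Consider the cube graph $Q_3$, a $3$-regular bipartite quadrangulation of the sphere with six quadrilateral faces, in which every $4$-cycle bounds a face. A $t$-contraction at any vertex $v$ of $Q_3$ is therefore admissible: the neighbourhood of $v$ is stable, and identifying $v$ with its three neighbours (and deleting the arising parallel edges) yields $K_{2,3}$, which is again a quadrangulation of the sphere, now with three quadrilateral faces. Deleting one of the three degree-$2$ vertices of $K_{2,3}$ leaves $K_{2,2}=C_4$, as predicted by the theorem.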
It is easy to see that both operations used in Theorem~\ref{thm:plane_C4_irreducible} are necessary. By \eqref{eq:operations}, Theorem~\ref{thm:plane_C4_irreducible} implies: \begin{gather*} \mbox{Any quadrangulation of the sphere can be transformed } \sloppy \\ \mbox{into a $4$-cycle by a sequence of face-contractions.} \sloppy \end{gather*} Via the dual graph, quadrangulations of the sphere are in one-to-one correspondence with planar $4$-regular (not necessarily simple) graphs. Theorem~\ref{thm:plane_C4_irreducible} thus implies a method to reduce all $4$-regular planar graphs to the graph on two vertices and four parallel edges. Broersma et al.~\cite{BDG93}, Lehel~\cite{Le81}, and Manca~\cite{Ma79} analysed methods to reduce $4$-regular planar graphs to the octahedron graph. In the second part of this paper, we consider quadrangulations of the projective plane. We use Theorem~\ref{thm:plane_C4_irreducible} to reduce all non-bipartite quadrangulations of the projective plane to an odd wheel. A \emph{$p$-wheel} $W_p$ is a graph consisting of a cycle $(w_1, \ldots, w_p,w_1)$ and a vertex $v$ adjacent to all vertices of the cycle. A wheel $W_p$ is an \emph{odd wheel}, if $p$ is odd. Figure~\ref{fig:wheels} shows some odd wheels. \begin{theorem}\label{thm:pp_odd_wheels_irreducible} Let $G$ be a non-bipartite quadrangulation of the projective plane. Then, there is a sequence of $t$-contractions and deletions of degree-$2$ vertices that transforms $G$ into an odd wheel. During the whole process, the graph remains a non-bipartite quadrangulation. \end{theorem} The proof of this theorem can be found in Section~\ref{sec:quadrangulations}. It is easy to see that both operations used in this theorem are necessary. \begin{figure} \caption{The odd wheels $W_3, W_5$ and $W_7$} \label{fig:wheels} \end{figure} \begin{figure} \caption{An even embedding of $W_5$ in the projective plane and face-contractions that produce a smaller odd wheel. Opposite points on the dotted cycle are identified. } \label{fig:pp_wheels} \end{figure} Negami and Nakamoto~\cite{NeNa93} showed that any non-bipartite quadrangulation of the projective plane can be transformed into a $K_4$ by a sequence of face-contractions. This result can be deduced from Theorem~\ref{thm:pp_odd_wheels_irreducible}: By \eqref{eq:operations}, Theorem~\ref{thm:pp_odd_wheels_irreducible} implies that any non-bipartite quadrangulation of the projective plane can be transformed into an odd wheel by a sequence of face-contractions. The odd wheel $W_{2k+1}$ can now be transformed into $W_{2k-1}$ --- and finally into $W_3=K_4$ --- by face-contractions (see Figure~\ref{fig:pp_wheels}). Nakamoto~\cite{Nakamoto99} gave a reduction method based on face-contractions and so called $4$-cycle deletions for non-bipartite quadrangulations of the projective plane with minimum degree $3$. Matsumoto et al.~\cite{MNY16} analysed quadrangulations of the projective plane with respect to hexagonal contractions while Nakamoto considered face-contractions for quadrangulations of the Klein~bottle~\cite{Nakamoto95} and the torus~\cite{Nakamoto96}. Youngs~\cite{You96} and Esperet and Stehl{\'{\i}}k~\cite{Es_Steh15} considered non-bipartite quadrangulations of the projective plane with regard to vertex-colourings and width-parameters. Theorem~\ref{thm:pp_odd_wheels_irreducible} allows an application to the theory of $t$-perfection. A graph $G$ is \emph{$t$-perfect} if its stable set polytope $\textrm{\rm SSP}(G)$ equals the polyhedron $\textrm{\rm TSTAB}(G)$. 
The \emph{stable set polytope} $\textrm{\rm SSP}(G)$ is the convex hull of stable sets of $G$; the polyhedron \emph{$\textrm{\rm TSTAB}(G)$} is defined via non-negativity-, edge- and odd-cycle inequalities (see Section~\ref{sec:t-perf} for a precise definition). If the system of inequalities defining $\textrm{\rm TSTAB}(G)$ is totally dual integral, the graph $G$ is called \emph{strongly $t$-perfect}. Evidently, strong $t$-perfection implies $t$-perfection. It is not known whether the converse is also true. \begin{theorem}\label{thm:t-perfect} For every quadrangulation $G$ of the projective plane the following assertions are equivalent: \begin{enumerate}[\rm(a)] \item $G$ is $t$-perfect \label{item:t-perfect} \item $G$ is strongly $t$-perfect \label{item:strongly_t-perfect} \item $G$ is bipartite \label{item:bipartite} \end{enumerate} \end{theorem} See Section~\ref{sec:t-perf} for precise definitions and for the proof. A general treatment on $t$-perfect graphs may be found in Gr\"otschel, Lov\'asz and Schrijver~\cite[Ch.~9.1]{GLS88} as well as in Schrijver~\cite[Ch.~68]{LexBible}. We showed that triangulations of the projective plane are (strongly) $t$-perfect if and only if they are perfect and do not contain the complete graph $K_4$~\cite{triang17}. Bruhn and Benchetrit analysed $t$-perfection of triangulations of the sphere~\cite{Ben_Bru15}. Boulala and Uhry~\cite{BouUhr79} established the $t$-perfection of series-parallel graphs. Ge\-rards~\cite{Gerards89} extended this to graphs that do not contain an \emph{odd-$K_4$} as a subgraph (an odd-$K_4$ is a subdivision of $K_4$ in which every triangle becomes an odd circuit). Ge\-rards and Shepherd~\cite{GS98} characterised the graphs with all subgraphs $t$-perfect, while Barahona and Mahjoub~\cite{BM94} described the $t$-imperfect subdivisions of $K_4$. Bruhn and Fuchs~\cite{Bru_Fu15} characterised $t$-perfection of $P_5$-free graphs by forbidden $t$-minors.
2,785
9,534
en
train
0.106.1
\section{Quadrangulations} \label{sec:quadrangulations} All the graphs mentioned here are finite and simple. We follow the notation of Diestel~\cite{Diestel}. We begin by recalling several useful definitions related to surface-embedded graphs. For further background on topological graph theory, we refer the reader to Gross and Tucker~\cite{Gross_Tucker} or Mohar and Thomassen~\cite{Mohar_Thomassen}. An \emph{embedding} of a simple graph $G$ on a surface is a continuous one-to-one function from a topological representation of $G$ into the surface. For our purpose, it is convenient to abuse the terminology by referring to the image of $G$ as \emph{the graph $G$}. The \emph{faces} of an embedding are the connected components of the complement of $G$. An embedding $G$ is \emph{even} if all faces are bounded by an even circuit. A \emph{quadrangulation} is an embedding where each face is bounded by a circuit of length $4$. A cycle $C$ is \emph{contractible} if $C$ separates the surface into two sets $S_C$ and $\overline{S_C}$ where $S_C$ is homeomorphic to an open disk in $\ensuremath{\mathbb{R}}^2$. Note that for the sphere, $S_C$ and $\overline{S_C}$ are homeomorphic to an open disk. In contrast, for the plane and the projective plane, $\overline{S_C}$ is not homeomorphic to an open disk. For the plane and the projective plane, we call $S_C$ the \emph{interior} of $C$ and $\overline{S_C}$ the \emph{exterior} of $C$. Using the stereographic projection, it is easy to switch between embeddings in the sphere and the plane. In order to have an interior and an exterior of a contractible cycle, we will concentrate on quadrangulations of the plane (and the projective plane). Note that by the Jordan curve theorem, \begin{equation} \label{eq:plane->contractible_cycles} \text{all cycles in the plane are contractible.} \end{equation} A cycle in a non-bipartite quadrangulation of the projective plane is contractible if and only if it has even length (see e.g.~\cite[Lemma 3.1]{Kai_Steh15}). As every non-bipartite even embedding is a subgraph of a non-bipartite quadrangulation, one can easily generalise this result. \begin{observation} \label{obs:cycles_in_pp} A cycle in a non-bipartite even embedding in the projective plane is contractible if and only if it has even length. \end{observation} An embedding is a \emph{$2$-cell embedding} if each face is homeomorphic to an open disk. It is well-known that embeddings of $2$-connected graphs in the plane are $2$-cell embeddings. A non-bipartite quadrangulation of the projective plane contains a non-contractible cycle; see Observation~\ref{obs:cycles_in_pp}. The complement of this cycle in the projective plane is homeomorphic to an open disk. Thus, we observe: \begin{observation} \label{obs:closed-cell-embedding} Every quadrangulation of the plane and every non-bipartite quadrangulation of the projective plane is a $2$-cell embedding. \end{observation} This observation makes sure that we can apply Euler's formula to all the considered quadrangulations. A simple graph cannot contain a $4$-circuit that is not a $4$-cycle. Thus, note that every face of a quadrangulation is bounded by a cycle. It is easy to see that \begin{equation} \label{eq:quad_plane->bipartite} \text{all quadrangulations of the plane are bipartite.} \end{equation} We first take a closer look at deletions of degree-$2$ vertices in graphs that are not the $4$-cycle $C_4$. 
\begin{observation} \label{obs:deg2_vertex} Let $G\not= C_4$ be a quadrangulation of the plane or the projective plane that contains a vertex $v$ of degree $2$. Then, $G-v$ is again a quadrangulation. \end{observation} \begin{proof} Let $u$ and $u'$ be the two neighbours of $v$. Then, there are distinct vertices $s,t$ such that the cycles $(u,v,u',s,u) $ and $(u,v,u',t,u)$ are bounding a face. Thus, $(u,s,u',t,u)$ is a contractible $4$-cycle whose interior contains only $v$ and $G-v$ is again a quadrangulation. \end{proof} We now take a closer look at $t$-contractions. \begin{lemma} \label{lem:quad_stays_quad} Let $G$ be a quadrangulation of the plane or a non-bipartite quadrangulation of the projective plane. Let $G'$ be obtained from $G$ by a $t$-contraction at $v$. If $v$ is not a vertex of a contractible $4$-cycle with some vertices in its interior, then $G'$ is again a quadrangu\-lation. \end{lemma} \begin{proof} Let $G''$ be obtained from $G$ by the operation that identifies $v$ with all its neighbours but does not delete multiple edges. This operation leaves every cycle not containing $v$ untouched, transforms every other cycle $C$ into a cycle of length $|C|-2$, and creates no new cycles. Therefore, all cycles bounding faces of $G''$ are of size $4$ or $2$. The graphs $G'$ and $G''$ differ only in the property that $G''$ has some double edges. These double edges form $2$-cycles that arise from $4$-cycles containing $v$. As all these $4$-cycles are contractible (see~\eqref{eq:plane->contractible_cycles} and Observation~\ref{obs:cycles_in_pp}) with no vertex in their interior, the $2$-cycles are also contractible and contain no vertex in its interior. Deletion of all double edges now gives $G'$ --- an embedded graph where all faces are of size $4$. \end{proof} Lemma~\ref{lem:quad_stays_quad} enables us to prove the following statement that directly implies Theorem~\ref{thm:plane_C4_irreducible}. \begin{lemma}\label{lem:plane_C4_irreducible} Let $G$ be a quadrangulation of the plane. Then, there is a sequence of \begin{itemize} \item $t$-contractions at degree-$3$ vertices that are only contained in $4$-cycles whose interior does not contain a vertex. \item deletions of degree-$2$ vertices \end{itemize} that transforms $G$ into a $4$-cycle. During the whole process, the graph remains a quadrangulation. \end{lemma} \begin{proof} Let $\mathcal{C}$ be the set of all contractible $4$-cycles whose interior contains some vertices of $G$. Note that $\mathcal{C}$ contains the $4$-cycle bounding the outer face unless $G=C_4$. Let $C \in \mathcal{C}$ be a contractible $4$-cycle whose interior does not contain another element of $\mathcal{C}$. We will first see that the interior of $C$ contains a vertex of degree $2$ or $3$: Deletion of all vertices in the exterior of $C$ gives a quadrangulation $G'$ of the plane. As $G$ is connected, one of the vertices in $C$ must have a neighbour in the interior of $C$ and thus must have degree at least $3$. Euler's formula now implies that $ \sum_{v \in V(G')} \deg(v)=2|E(G')| \leq 4|V(G')| -8. $ As no vertex in $G'$ has degree $0$ or $1$, there must be a vertex of degree $2$ or $3$ in $V(G')-V(C)$. This vertex has the same degree in $G$ and is contained in the interior of $C$. We use deletions of degree-$2$ vertices and $t$-contractions at degree-$3$ vertices in the interior of the smallest cycle of $\mathcal{C} $ to successively get rid of all vertices in the interior of $4$-cycles. 
By Observation~\ref{obs:deg2_vertex} and Lemma~\ref{lem:quad_stays_quad}, the obtained graphs are quadrangulations. Now, suppose that no more $t$-contraction at a degree-$3$ vertex and no more deletion of a degree-$2$ vertex is possible. Assume that the obtained graph is not a $4$-cycle. Then, there is a cycle $C'\in \mathcal{C}$ whose interior does not contain another cycle of $\mathcal{C}$. As we have seen above, $C' \in \mathcal{C}$ contains a vertex $v$ of degree $3$. Since no $t$-contraction can be applied to $v$, the vertex $v$ has two adjacent neighbours. This contradicts~\eqref{eq:quad_plane->bipartite}. \end{proof} In the rest of the paper, we will consider the projective plane. A quadrangulation of the projective plane is~\emph{nice} if no vertex is contained in the interior of a contractible $4$-cycle. \begin{lemma}\label{lem:make_quad_nice} Let $G$ be a non-bipartite quadrangulation of the projective plane. Then, there is a sequence of $t$-contractions and deletions of vertices of degree $2$ that transforms $G$ into a nice quadrangulation. During the whole process, the graph remains a quadrangulation. \end{lemma} \begin{proof} Let $C$ be a contractible $4$-cycle whose interior contains at least one vertex. Delete all vertices that are contained in the exterior of $C$. The obtained graph is a quadrangulation of the plane. By Lemma~\ref{lem:plane_C4_irreducible}, there is a sequence of $t$-contractions (as described in Lemma~\ref{lem:quad_stays_quad}) and deletions of degree-$2$ vertices that eliminates all vertices in the interior of $C$. With this method, it is possible to transform $G$ into a nice quadrangulation. \end{proof} Similar as in the proof of Theorem~\ref{thm:plane_C4_irreducible}, Euler's formula implies that a non-bipartite quadrangulation of the projective plane contains a vertex of degree $2$ or $3$. As no nice quadrangulation has a degree-$2$ vertex (see Observation~\ref{obs:deg2_vertex}), we deduce: \begin{observation}\label{obs:nice_quad_min_degree} Every nice non-bipartite quadrangulation of the projective plane has minimal degree $3$. \end{observation} In an even embedding of an odd wheel $W$, every odd cycle must be non-contractible (see Observation~\ref{obs:cycles_in_pp}). Thus, it is easy to see that there is only one way (up to topological isomorphy) to embed an odd wheel in the projective plane. (This can easily be deduced from~\cite{MoRoVi96} --- a paper dealing with embeddings of planar graphs in the projective plane.) The embedding is illustrated in Figure~\ref{fig:pp_wheels}. Noting that this embedding is a quadrangulation, we observe: \begin{observation} \label{obs:odd_wheel_nice_quad} Let $G$ be a quadrangulation of the projective plane that contains an odd wheel $W$. If $G$ is nice, then $G$ equals $W$. \end{observation} Note that every graph containing an odd wheel also contains an induced odd wheel. Now, we consider even wheels. \begin{lemma} \label{lem:even_wheel_no_embedding} Even wheels $W_{2k}$ for $k\geq 2$ do not have an even embedding in the projective plane. \end{lemma} The statement follows directly from~\cite{MoRoVi96}. We nevertheless give an elementary proof of the lemma. \begin{proof} First assume that the $4$-wheel $W_4$ has an even embedding. As all triangles of $W_4-{w_3w_4}$ must be non-contractible by Observation~\ref{obs:cycles_in_pp}, it is easy to see that the graph must be embedded as in Figure~\ref{fig:pp_evenwheels}. Since the insertion of $w_3w_4$ will create an odd face, $W_4$ is not evenly embeddable. 
Now assume that $W_{2k}$ for $k \geq 3$ is evenly embedded. Delete the edges $vw_i$ for $i=5, \ldots, 2k$ and note that $w_5, \ldots, w_{2k}$ are now of degree $2$, ie the path $P=(w_4, w_5, \ldots, w_{2k}, w_1)$ bounds two faces or one face from two sides. Deletion of the edges $vw_i$ preserves the even embedding: Deletion of an edge bounding two faces $F_1, F_2$ merges the faces into a new face of size $|F_1|+|F_2|-2$. Deletion of an edge bounding a face $F$ from two sides leads to a new face of size $|F|-2$. In both cases, all other faces are left untouched. Next, replace the odd path $P$ by the edge $w_4w_1$. The two faces $F_3, F_4$ adjacent to $P$ are transformed into two new faces of size $|F_3|- (2k-3)+1$ and $|F_4|- (2k-3)+1$. This yields an even embedding of $W_4$ which is a contradiction. \end{proof}
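Let us also record the Euler-formula computation used in the degree arguments above; it is a routine check, included only for convenience. For a non-bipartite quadrangulation $G$ of the projective plane, every face is bounded by a $4$-cycle and every edge lies on exactly two face boundaries, so $2|E(G)|=4|F(G)|$; combined with Euler's formula $|V(G)|-|E(G)|+|F(G)|=1$, this gives $|E(G)|=2|V(G)|-2$. Hence $\sum_{v\in V(G)}\deg(v)=2|E(G)|=4|V(G)|-4<4|V(G)|$, so some vertex has degree at most $3$; as a quadrangulation contains no vertex of degree $0$ or $1$, this vertex has degree $2$ or $3$.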
3,455
9,534
en
train
0.106.2
\begin{figure} \caption{The only even embedding of $W_4 - w_3w_4 $ in the projective plane. Opposite points on the dotted cycle are identified.} \label{fig:pp_evenwheels} \end{figure} Note that a $t$-contraction at a vertex $v$ is only allowed if its neighbourhood is stable, that is, if $v$ is not contained in a triangle. The next lemma characterises the quadrangulations to which no $t$-contraction can be applied. \begin{lemma} \label{lem:irred_oddwheel} Let $G$ be a non-bipartite nice quadrangulation of the projective plane where each vertex is contained in a triangle. Then $G$ is an odd wheel. \end{lemma} \begin{proof} By Observation~\ref{obs:nice_quad_min_degree}, there is a vertex $v$ of degree $3$ in $G$. Let $\{x_1,x_2,x_3\}$ be its neighbourhood and let $x_1$, $x_2$ and $v$ form a triangle. Recall that every triangle is non-contractible (see~Observation~\ref{obs:cycles_in_pp}). Consequently, any two triangles intersect. As $x_3$ is contained in a triangle intersecting the triangle $(v,x_1,x_2)$ and as $v$ has no further neighbour, we can suppose without loss of generality that $x_3$ is adjacent to $x_1$. The graph formed by the two triangles $(v,x_1,x_2)$ and $(x_1,v,x_3)$ alone is not a quadrangulation. If the edge $x_2x_3$ is present, then $v$, $x_1$, $x_2$ and $x_3$ induce a $K_4$ and, by Observation~\ref{obs:odd_wheel_nice_quad}, $G$ equals the odd wheel $W_3=K_4$. Otherwise, the graph contains a further vertex and this vertex is contained in a further triangle $T$. Since the vertex $v$ has degree $3$, it is not contained in $T$. If further $x_1 \notin V(T)$, then the vertices $x_2$ and $x_3$ must be contained in $T$. But then $x_2x_3\in E(G)$ and, as above, $v$, $x_1$, $x_2$ and $x_3$ form a $K_4$. Therefore, $x_1$ is contained in $T$ and consequently in every triangle of $G$. Since every vertex is contained in a triangle, $x_1$ must be adjacent to all vertices of $ G-x_1$. As $|E(G)|=2|V(G)|-2$ by Euler's formula, the graph $G-x_1$ has $2|V(G)|-2-(|V(G)|-1)=|V(G)|-1=|V(G-x_1)|$ many edges. By Observation~\ref{obs:nice_quad_min_degree}, no vertex in $G$ has degree smaller than $3$. Consequently, no vertex in $G-x_1$ has degree smaller than $2$. Thus, $G-x_1$ is a cycle and $G$ is a wheel. By Lemma~\ref{lem:even_wheel_no_embedding}, $G$ is an odd wheel. \end{proof} Finally, we can prove our second main result: \begin{proof}[Proof of Theorem~\ref{thm:pp_odd_wheels_irreducible}] Transform $G$ into a nice quadrangulation (Lemma~\ref{lem:make_quad_nice}). Now, consecutively apply $t$-contractions (as described in Lemma~\ref{lem:quad_stays_quad}) as long as possible. In each step, the obtained graph is a quadrangulation. By Lemma~\ref{lem:make_quad_nice} we can assume that the quadrangulation is nice. If no more $t$-contraction can be applied, then every vertex is contained in a triangle. By Lemma~\ref{lem:irred_oddwheel}, the obtained quadrangulation is an odd wheel. \end{proof}
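For the smallest case of Theorem~\ref{thm:pp_odd_wheels_irreducible}, namely $W_3=K_4$, the counts above can be verified directly (we record this only as a sanity check): $K_4$ has $4$ vertices and $6=2\cdot 4-2$ edges, and its quadrangular embedding in the projective plane has exactly $3$ quadrilateral faces. It can be obtained, for instance, from the cube quadrangulation of the sphere by identifying antipodal vertices and faces.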
2,196
9,534
en
train
0.106.3
\section{(Strong) $t$-perfection}\label{sec:t-perf} The \emph{stable set polytope} $\textrm{\rm SSP}(G)\subseteq\mathbb R^{V}$ of a graph $G=(V,E)$ is defined as the convex hull of the characteristic vectors of stable, ie independent, subsets of $V$. The characteristic vector of a subset $S$ of the set $V$ is the vector $\charf{S}\in \{0,1\}^{V}$ with $\charf{S}(v)=1$ if $v\in S$ and $0$ otherwise. We define a second polytope $\textrm{\rm TSTAB}(G)\subseteq\mathbb R^V$ for $G$, given by \begin{eqnarray} \label{inequ} &&x\geq 0,\notag\\ &&x_u+x_v\leq 1\text{ for every edge }uv\in E,\\ &&\sum_{v\in V(C)}x_v\leq \left\lfloor\frac{ |C|}{2}\right\rfloor\text{ for every induced odd cycle }C \text{ in }G.\notag \end{eqnarray} These inequalities are respectively known as non-negativity, edge and odd-cycle inequalities. Clearly, $\textrm{\rm SSP}(G)\subseteq \textrm{\rm TSTAB}(G)$. The graph $G$ is called \emph{$t$-perfect} if $\textrm{\rm SSP}(G)$ and $\textrm{\rm TSTAB}(G)$ coincide. Equivalently, $G$ is $t$-perfect if and only if $\textrm{\rm TSTAB}(G)$ is an integral polytope, ie if all its vertices are integral vectors. The graph $G$ is called \emph{strongly $t$-perfect} if the system \eqref{inequ} of inequalities is totally dual integral. That is, if for each weight vector $w \in \ensuremath{\mathbb{Z}}^{V}$, the linear program of maximizing $w^Tx$ over~\eqref{inequ} has an integer optimum dual solution. This property implies that $\textrm{\rm TSTAB}(G)$ is integral. Therefore, strong $t$-perfection implies $t$-perfection. It is an open question whether every $t$-perfect graph is strongly $t$-perfect. The question is briefly discussed in Schrijver~\cite[Vol. B, Ch. 68]{LexBible}. It is easy to see that all bipartite graphs are (strongly) $t$-perfect (see eg Schrijver~\cite[Ch.~68]{LexBible}) and that vertex deletion preserves (strong) $t$-perfection. Another operation that preserves (strong) $t$-perfection (see eg~\cite[Vol.~B,~Ch.~68.4]{LexBible}) was found by Gerards and Shepherd~\cite{GS98}: the $t$-contraction. Odd wheels $W_{2k+1}$ for $k \geq 1$ are not (strongly) $t$-perfect. Indeed, the vector $(1 \slash 3, \ldots, 1 \slash 3)$ is contained in $\textrm{\rm TSTAB}(W_{2k+1})$ but not in $\textrm{\rm SSP}(W_{2k+1})$. With this knowledge, the proof of Theorem~\ref{thm:t-perfect} follows directly from Theorem~\ref{thm:pp_odd_wheels_irreducible}. \begin{proof} [Proof of Theorem~\ref{thm:t-perfect}] If $G$ is bipartite, then $G$ is (strongly) $t$-perfect. Let $G$ be non-bipartite. Then, there is a sequence of $t$-contractions and deletions of vertices that transforms $G$ into an odd wheel (Theorem~\ref{thm:pp_odd_wheels_irreducible}). As odd wheels are not (strongly) $t$-perfect and as vertex deletion and $t$-contraction preserve (strong) $t$-perfection, $G$ is not (strongly) $t$-perfect. \end{proof} \noindent Elke Fuchs {\tt <[email protected]>}\\ Laura Gellert {\tt <[email protected]>}\\ Institut f\"ur Optimierung und Operations Research\\ Universit\"at Ulm, Ulm\\ Germany\\ \end{document}
1,098
9,534
en
train
0.107.0
\begin{document} \begin{abstract} We consider cardinal invariants related to Shelah's model-theoretic tree properties and the relations that obtain between them. From strong colorings, we construct theories $T$ with $\kappa_{\text{cdt}}(T) > \kappa_{\text{sct}}(T) + \kappa_{\text{inp}}(T)$. We show that these invariants have distinct structural consequences, by investigating their effect on the decay of saturation in ultrapowers. This answers some questions of Shelah. \end{abstract} \title{Invariants related to the tree property} \setcounter{tocdepth}{1} \tableofcontents \section{Introduction} One of the fundamental discoveries in stability theory is that stability is local: a theory is stable if and only if no formula has the order property. Among the stable theories, one can obtain a measure of complexity by associating to each theory $T$ its \emph{stability spectrum}, namely, the class of cardinals $\lambda$ such that $T$ is stable in $\lambda$. A classification of stability spectra was given by Shelah in \cite[Chapter 3]{shelah1990classification}. Part of this analysis amounts showing that stable theories do not have the tree property and, consequently, that forking satisfies local character. But a crucial component of that work was studying the approximations to the tree property which can exist in stable theories and what structural consequences they have. These approximations were measured by a cardinal invariant of the theory called $\kappa(T)$, and Shelah's stability spectrum theorem gives an explicit description of the cardinals in which a given theory $T$ is stable in terms of the cardinality of the set of types in finitely many variables over the empty set and $\kappa(T)$. Shelah used the definition of $\kappa(T)$ as a template for quantifying the global approximations to other tree properties in introducing the invariants $\kappa_{\text{cdt}}(T)$, $\kappa_{\text{sct}}(T)$, and $\kappa_{\text{inp}}(T)$ (see Definition \ref{patterns} below) which bound approximations to the tree property (TP), the tree property of the first kind (TP$_{1}$), and the tree property of the second kind (TP$_{2}$), respectively. Eventually, the local condition that a theory does not have the tree property (\emph{simplicity}), and the global condition that $\kappa(T) = \kappa_{cdt}(T) = \aleph_{0}$ (\emph{supersimplicity}) proved to mark substantial dividing lines. These invariants provide a coarse measure of the complexity of the theory, providing a ``quantitative" description of the patterns that can arise among forking formulas. They are likely to continue to play a role in the development of a structure theory for tame classes of non-simple theories. Motivated by some questions from \cite{shelah1990classification}, we explore which relationships known to hold between the \emph{local} properties TP, TP$_{1}$, and TP$_{2}$ also hold for the \emph{global} invariants $\kappa_{\text{cdt}}(T)$, $\kappa_{\text{sct}}(T)$, and $\kappa_{\text{inp}}(T)$. In short, we are pursuing the following analogy: \begin{table}[h!] 
\begin{center} \label{tab:table1} \begin{tabular}{c|c|c|c} local & TP & TP$_{1}$ & TP$_{2}$ \\ \hline global & $\kappa_{\text{cdt}}$ & $\kappa_{\text{sct}}$ & $\kappa_{\text{inp}}$ \\ \end{tabular} \end{center} \end{table} \noindent This continues the work done in \cite{ArtemNick}, where, with Artem Chernikov, we considered a global analogue of the following theorem of Shelah: \begin{theorem*}{\cite[III.7.11]{shelah1990classification}} For a complete theory $T$, $\kappa_{\text{cdt}}(T) = \infty$ if and only if $\kappa_{\text{sct}}(T) = \infty$ or $\kappa_{\text{inp}}(T) = \infty$. That is, $T$ has the tree property if and only if it has the tree property of the first kind or the tree property of the second kind. \end{theorem*} \noindent Shelah then asked if $\kappa_{\text{cdt}}(T) = \kappa_{\text{sct}}(T) + \kappa_{\text{inp}}(T)$ in general \cite[Question III.7.14]{shelah1990classification}\footnote{This formulation is somewhat inaccurate. Shelah defines, for $x \in \{\text{cdt},\text{inp},\text{sct}\}$, the cardinal invariant $\kappa r_{x}$, which is the least regular cardinal $\geq \kappa_{x}$. Shelah's precise question was about the possible equality $\kappa r_{\text{cdt}} = \kappa r_{\text{sct}} + \kappa r_{\text{inp}}$. For our purposes, we will only need to consider theories in which $\kappa_{x}$ is a successor cardinal, so we will not need to distinguish between these two variations.}. In \cite{ArtemNick}, we showed that this is true under the assumption that $T$ is countable. For a countable theory $T$, the only possible values of these invariants are $\aleph_{0}, \aleph_{1}$, and $\infty$\textemdash our proof handled each cardinal separately using a different argument in each case. Here we consider this question without any hypothesis on the cardinality of $T$, answering the general question negatively (Theorem \ref{first main theorem} below): \begin{theorem*} There is a stable theory \(T\) so that \(\kappa_{\text{cdt}}(T) > \kappa_{\text{sct}}(T) + \kappa_{\text{inp}}(T)\). Moreover, it is consistent with ZFC that for every regular uncountable \(\kappa\), there is a stable theory \(T\) with \(|T| = \kappa\) and \(\kappa_{\text{cdt}}(T) > \kappa_{\text{sct}}(T) + \kappa_{\text{inp}}(T)\). \end{theorem*} To construct a theory $T$ so that $\kappa_{\text{cdt}}(T) \neq \kappa_{\text{sct}}(T) + \kappa_{\text{inp}}(T)$, we use results on \emph{strong colorings} constructed by Galvin under GCH and later by Shelah in ZFC. These results show that, at suitable regular cardinals, Ramsey's theorem fails in a particularly dramatic way. The statement $\kappa_{\text{cdt}}(T) = \kappa_{\text{sct}}(T) + \kappa_{\text{inp}}(T)$ amounts to saying that a certain large global configuration gives rise to another large configuration which is moreover very uniform. This has the feel of many statements in the partition calculus and we show that, in fact, a coloring $f: [\kappa]^{2} \to 2$ can be used to construct a theory $T^{*}_{\kappa, f}$ such that the existence of a large inp- or sct-pattern relative to $T^{*}_{\kappa,f}$ implies some homogeneity for the coloring $f$. The theories built from the strong colorings of Galvin and Shelah, then, furnish ZFC counter-examples to Shelah's question, and also show that, consistently, for every regular uncountable cardinal $\kappa$, there is a theory $T$ with $|T| = \kappa$ and $\kappa_{\text{cdt}}(T) \neq \kappa_{\text{sct}}(T) + \kappa_{\text{inp}}(T)$. 
This suggests that the aforementioned result of \cite{ArtemNick} for countable theories is in some sense the optimal result possible in ZFC. Our second theorem is motivated by the following theorem of Shelah: \begin{theorem*}{\cite[VI.4.7]{shelah1990classification}}\label{saturation} If $T$ is not simple, $\mathcal{D}$ is a regular ultrafilter over $I$, $M$ is an $|I|^{++}$-saturated model of $T$, then $M^{I}/\mathcal{D}$ is not $|I|^{++}$-compact. \end{theorem*} \noindent In an exercise, Shelah claims that the hypothesis that $T$ is not simple in the above theorem may be replaced by the condition $\kappa_{\text{inp}}(T) > |I|^{+}$ and asks if $\kappa_{\text{cdt}}(T) > |I|^{+}$ suffices \cite[Question VI.4.20]{shelah1990classification}. We prove, in Corollary \ref{second main theorem} and Theorem \ref{second main theorem part 2} respectively, the following: \begin{theorem*} There is a theory $T$ such that $\kappa_{\text{inp}}(T) = \lambda^{++}$ yet for any regular ultrafilter $\mathcal{D}$ on $\lambda$ and $\lambda^{++}$-saturated model $M \models T$, $M^{\lambda}/\mathcal{D}$ is $\lambda^{++}$-saturated. \end{theorem*} \begin{theorem*} If $\lambda = \lambda^{<\lambda}$ and $\kappa_{\text{sct}}(T) > \lambda^{+}$, $M$ is an $\lambda^{++}$-saturated model of $T$ and $\mathcal{D}$ is a regular ultrafilter over $\lambda$, then $M^{\lambda}/\mathcal{D}$ is not $\lambda^{++}$-compact. \end{theorem*} \noindent The first of these results contradicts Shelah's Exercise VI.4.19 and \emph{a fortiori} answers Question VI.4.20 negatively. Although $\kappa_{\text{inp}}(T) > |I|^{+}$ and hence $\kappa_{\text{cdt}}(T) > |I|^{+}$ do not suffice to guarantee a loss of saturation in the ultrapower, one can ask if $\kappa_{\text{sct}}(T) > |I|^{+}$ does suffice. Shelah's original argument for Theorem \ref{saturation} does not generalize, but fortunately a recent new proof due to Malliaris and Shelah \cite{Malliaris:2012aa} does and we point out in the second of these two theorems how the revised question can be answered, modulo a mild set-theoretic hypothesis, by an easy and direct adaptation of their argument. These results suggest that the rough-scale asymptotic structure revealed by studying the $\lambda^{++}$-compactness of ultrapowers on $\lambda$ is global in nature and differs from the picture suggested by the local case considered by Shelah. In order to construct these examples, it is necessary to build a theory capable of coding a complicated strong coloring yet simple enough that the invariants are still computable. This was accomplished by a method inspired by Medvedev's $\mathbb{Q}$ACFA construction \cite{Medvedev:2015aa}, realizing the theory as a union of theories in a system of finite reducts each of which is the theory of a Fra\"iss\'e limit. The theories in the finite reducts are $\aleph_{0}$-categorical and eliminate quantifiers and one may apply the $\Delta$-system lemma to the finite reducts arising in global configurations. Altogether, this makes computing the invariants tractable. \textbf{Acknowledgements:} This is work done as part of our dissertation under the supervision of Thomas Scanlon. We would additionally like to acknowledge very helpful input from Artem Chernikov, Leo Harrington, Alex Kruckman, and Maryanthe Malliaris, as well as Assaf Rinot, from whom we first learned of Galvin's work on strong colorings. Finally we would like to thank the anonymous referee for more than one especially thorough reading which did a great deal to improve this paper.
2,852
42,656
en
train
0.107.1
\section{Preliminaries}
8
42,656
en
train
0.107.2
\subsection{Notions from Classification Theory} For the most part, we follow standard model-theoretic notation. We may write $x$ or $a$ to denote a tuple of variables or elements, which may not have length 1. If $x$ is a tuple of variables we write $l(x)$ to denote its length and for each $l < l(x)$, we write $(x)_{l}$ to denote the $l$th coordinate of $x$. If $\varphi(x)$ is a formula and $t \in \{0,1\}$, we write $\varphi(x)^{t}$ to denote $\varphi(x)$ if $t = 1$ and $\neg \varphi(x)$ if $t = 0$. In the following definitions, we will refer to collections of tuples indexed by arrays and trees. For cardinals $\kappa$ and $\lambda$, we use the notation $\unlhd$, $<_{lex}$, $\wedge$, and $\perp$ to refer to the tree partial order, the lexicographic order, the binary meet function, and the relation of incomparability on $\kappa^{<\lambda}$, respectively. Given an element $\eta \in \kappa^{<\lambda}$, we write $l(\eta)$ to denote the length of $\eta$\textemdash that is, the unique $\alpha < \lambda$ such that $\eta \in \kappa^{\alpha}$\textemdash and if $l(\eta)\geq \beta$, we write $\eta | \beta$ for the unique $\nu \unlhd \eta$ with $l(\nu) = \beta$. \begin{defn} \label{patterns} \cite[Definitions III.7.2, III.7.3, III.7.5]{shelah1990classification} \begin{enumerate} \item A \emph{cdt-pattern of height} \(\kappa\) is a sequence of formulas \(\varphi_{i}(x;y_{i})\) (\(i < \kappa, i \text{ successor}\)) and numbers \(n_{i} < \omega\), and a tree of tuples \((a_{\eta})_{\eta \in \omega^{<\kappa}}\) for which \begin{enumerate} \item \(p_{\eta} = \{\varphi_{i}(x;a_{\eta | i}) : i \text{ successor }, i < \kappa\}\) is consistent for \(\eta \in \omega^{\kappa}\). \item \(\{\varphi_{i} (x;a_{\eta \frown \langle \alpha \rangle}) : \alpha < \omega , i = l(\eta) + 1\}\) is \(n_{i}\)-inconsistent. \end{enumerate} \item An \emph{inp-pattern of height} \(\kappa\) is a sequence of formulas \(\varphi_{i}(x;y_{i})\) \((i < \kappa)\), sequences \((a_{i,\alpha}: \alpha < \omega)\), and numbers \(n_{i} <\omega\) such that \begin{enumerate} \item For any \(\eta \in \omega^{\kappa}\), \(\{ \varphi_{i}(x;a_{i,\eta(i)}) : i < \kappa\}\) is consistent. \item For any \(i < \kappa\), \(\{\varphi_{i}(x;a_{i,\alpha}) : \alpha < \omega\}\) is \(n_{i}\)-inconsistent. \end{enumerate} \item An \emph{sct-pattern of height} \(\kappa\) is a sequence of formulas \(\varphi_{i}(x;y_{i})\) \((i < \kappa)\) and a tree of tuples \((a_{\eta})_{\eta \in \omega^{<\kappa}}\) such that \begin{enumerate} \item For every \(\eta \in \omega^{\kappa}\), \(\{\varphi_{\alpha}(x;a_{\eta | \alpha}) : 0 < \alpha < \kappa, \alpha \text{ successor}\}\) is consistent. \item If \(\eta \in \omega^{\alpha}\), \(\nu \in \omega^{\beta}\), \(\alpha, \beta\) are successors, and \(\nu \perp \eta\) then \(\{\varphi_{\alpha}(x;a_{\eta}), \varphi_{\beta}(x;a_{\nu})\}\) are inconsistent. \end{enumerate} \item For \(X \in \{\text{cdt}, \text{sct}, \text{inp}\}\), we define \(\kappa_{X}^{n}(T)\) be the first cardinal \(\kappa\) such that there is no \(X\)-pattern of height \(\kappa\) in \(n\) free variables. We define \(\kappa_{X}(T) = \sup_{n} \{\kappa_{X}^{n}\}\). \end{enumerate} \end{defn} When introducing these definitions, Shelah notes that cdt stands for ``contradictory types" and inp stands for ``independent partitions." He does not explain the meaning of sct, but presumably it is intended to abbreviate something like ``strongly contradictory types". 
\begin{fact} \label{easy inequalities} \cite[Observation 3.1]{ArtemNick} Suppose $T$ is a complete theory in the language $L$. \begin{enumerate} \item If $T$ is stable, then $\kappa_{\mathrm{cdt}}(T) \leq |L|^{+}$. \item $\kappa_{\mathrm{sct}}(T) \leq \kappa_{\mathrm{cdt}}(T)$ and $\kappa_{\mathrm{inp}}(T) \leq \kappa_{\mathrm{cdt}}(T)$. \end{enumerate} \end{fact} \begin{exmp} Fix a regular uncountable cardinal \(\kappa\) and let \(L = \langle E_{\alpha} : \alpha < \kappa \rangle\) be a language consisting of $\kappa$ many binary relations. Let $T_{\text{sct}}$ be the model companion of the $L$-theory asserting that each $E_{\alpha}$ is an equivalence relation and $\alpha < \beta$ implies $E_{\beta}$ refines $E_{\alpha}$. Let $T_{\text{inp}}$ be the model companion of the $L$-theory which only asserts that each $E_{\alpha}$ is an equivalence relation. In other words, $T_{\text{sct}}$ is the generic theory of $\kappa$ refining equivalence relations and $T_{\text{inp}}$ is the generic theory of $\kappa$ independent equivalence relations. Now \(\kappa_{\text{cdt}}(T_{\text{sct}}) = \kappa_{\text{cdt}}(T_{\text{inp}}) = \kappa^{+}\), and further \(\kappa_{\text{sct}}(T_{\text{sct}}) = \kappa_{\text{inp}}(T_{\text{inp}}) = \kappa^{+}\). However, we have \(\kappa_{\text{inp}}(T_{\text{sct}}) = \aleph_{0}\) and \(\kappa_{\text{sct}}(T_{\text{inp}}) = \aleph_{1}\). Computing each of the invariants is straightforward using quantifier elimination for $T_{\text{inp}}$ and $T_{\text{sct}}$ with the exception of $\kappa_{\text{sct}}(T_{\text{inp}}) = \aleph_{1}$. The fact that $\kappa_{\text{cdt}}(T_{\text{inp}}) \geq \aleph_{1}$ implies that $\kappa_{\text{sct}}(T_{\text{inp}}) \geq \aleph_{1}$ by \cite[Proposition 3.14]{ArtemNick}. If $\kappa_{\text{sct}}(T_{\text{inp}}) > \aleph_{1}$ then there is an sct-pattern $(\varphi_{\alpha}(x;y_{\alpha}) : \alpha < \omega_{1})$, $(a_{\eta})_{\eta \in \omega^{<\omega_{1}}}$. Let $w_{\alpha}$ be the finite set of indices $\beta$ such that the symbol $E_{\beta}$ appears in $\varphi_{\alpha}(x;y_{\alpha})$. After passing to an sct-pattern of the same size, we may assume that the $w_{\alpha}$ form a $\Delta$-system (see Fact \ref{delta-system lemma} below), using that $\kappa$ is regular and uncountable. Now it is easy to check using quantifier elimination for $T_{\text{inp}}$ that there are incomparable $\eta \in \omega^{\alpha}, \nu \in \omega^{\beta}$ for some $\alpha,\beta < \omega_{1}$ such that $\{\varphi_{\alpha}(x;a_{\eta}), \varphi_{\beta}(x;a_{\nu})\}$ is consistent, a contradiction.\end{exmp} The following simple observation will be useful: \begin{lem} \label{no equalities} Suppose $\kappa$ is an infinite cardinal. \begin{enumerate} \item Suppose $(\varphi_{\alpha}(x;y_{\alpha}) : \alpha < \kappa)$, $(a_{\alpha,i})_{\alpha < \kappa, i < \omega}$, $(k_{\alpha})_{\alpha < \kappa}$ is an inp-pattern with $l(x) = 1$. Then each formula $\varphi_{\alpha}(x;a_{\alpha,i})$ is non-algebraic. \item Suppose $(\varphi_{\alpha}(x;y_{\alpha}) : \alpha < \kappa)$, $(a_{\eta})_{\eta \in \omega^{<\kappa}}$ is an sct-pattern such that $l(x)$ is minimal among sct-patterns of height $\kappa$ modulo $T$. Then no formula $\varphi_{\alpha}(x;a_{\eta})$ with $\eta \in \omega^{\alpha}$ implies $(x)_{l} = c$ for some $l < l(x)$ and parameter $c$. 
\end{enumerate} \end{lem} \begin{proof} (1) Given any $\alpha < \kappa$ and $i < \omega$, we may, for each $j < \omega$, choose a realization $c_{j} \models \{\varphi_{\alpha}(x;a_{\alpha,i}), \varphi_{\alpha+1}(x;a_{\alpha+1,j})\}$, which is consistent by the definition of an inp-pattern. Since $\{\varphi_{\alpha+1}(x;a_{\alpha+1,j}) : j < \omega\}$ is $k_{\alpha+1}$-inconsistent, each $c_{j}$ can realize at most $k_{\alpha+1}-1$ many formulas in this set, so $\{c_{j} : j < \omega\}$ must be an infinite set of realizations of $\varphi_{\alpha}(x;a_{\alpha,i})$, which shows $\varphi_{\alpha}(x;a_{\alpha,i})$ is non-algebraic. (2) Suppose not, so there are $\alpha < \kappa$, $\eta \in \omega^{\alpha}$, and $l < l(x)$ so that $\varphi_{\alpha}(x;a_{\eta}) \vdash (x)_{l} = c$ for some parameter $c$; without loss of generality $l = l(x) - 1$. If $l(x) = 1$, then it follows from the fact that $\{\varphi_{\alpha}(x;a_{\eta}),\varphi_{\alpha+1}(x;a_{\eta \frown \langle i \rangle})\}$ is consistent for each $i < \omega$ that $c \models \{\varphi_{\alpha+1}(x;a_{\eta \frown \langle i \rangle}) : i < \omega\}$, contradicting the fact that this set of formulas is $2$-inconsistent. On the other hand, if $l(x) > 1$, we will let $x' = (x_{0}, \ldots, x_{l(x)-2})$, so that $x = (x',x_{l(x)-1})$ and let $b_{\nu} = (c,a_{\eta \frown \nu})$ for all $\nu \in \omega^{<\kappa}$. Finally, we set $\psi_{\beta}(x';z_{\beta}) = \varphi_{\alpha + \beta}(x';x_{l(x)-1},y_{\alpha+\beta})$. Since for any $\nu \in \omega^{\kappa}$, $\{\varphi_{\alpha + \beta}(x;a_{\eta \frown (\nu | \beta)}) : \beta < \kappa\}$ is consistent and any realization will be of the form $(c',c)$ for some $c'$, it follows that $\{\psi_{\beta}(x';b_{\nu | \beta}) : \beta < \kappa\}$ is consistent. The inconsistency requirement is immediate so it follows that $(\psi_{\beta}(x';z_{\beta}))_{\beta < \kappa}$, $(b_{\nu})_{\nu \in \omega^{<\kappa}}$ is an sct-pattern of height $\kappa$ in fewer than $l(x)$ variables, contradicting the minimality of $l(x)$. \end{proof}
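As an illustration of part (1) of Lemma \ref{no equalities} (a sketch only, in the theories of the example above): in $T_{\text{inp}}$, taking $\varphi_{\alpha}(x;y_{\alpha})=E_{\alpha}(x,y_{\alpha})$ together with parameters $(a_{\alpha,i})_{i<\omega}$ lying in pairwise distinct $E_{\alpha}$-classes yields an inp-pattern of height $\kappa$ with $k_{\alpha}=2$, and each instance $E_{\alpha}(x,a_{\alpha,i})$ is indeed non-algebraic, since every $E_{\alpha}$-class in a model of $T_{\text{inp}}$ is infinite.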
3,033
42,656
en
train
0.107.3
\begin{rem} Note that by \cite[Corollary 2.9]{ChernikovNTP2}, if $T$ has an inp-pattern of height $\kappa$, then there is also an inp-pattern of height $\kappa$ in a single free variable, so the hypothesis in (1) that $l(x) = 1$ is equivalent to the requirement that $l(x)$ be minimal among inp-patterns of height $\kappa$. \end{rem} In order to simplify many of the arguments below, it will be useful to work with indiscernible trees and arrays. Define a language \(L_{s,\lambda} = \{\vartriangleleft, \wedge, <_{lex}, P_{\alpha} : \alpha < \lambda\}\) where \(\lambda\) is a cardinal. We may view the tree \(\kappa^{<\lambda}\) as an \(L_{s,\lambda}\)-structure in a natural way, giving \(\vartriangleleft\), \(\wedge\), and \(<_{lex}\) their eponymous interpretations, and interpreting \(P_{\alpha}\) as a predicate which identifies the \(\alpha\)th level. Note that we may define the relation $\eta \perp \nu$ in this language by $\neg (\eta \unlhd \nu) \wedge \neg (\nu \unlhd \eta)$. See \cite{ArtemNick} and \cite{KimKimScow} for a detailed treatment. \begin{defn} \text{ } \begin{enumerate} \item We say \((a_{\eta})_{\eta \in \kappa^{<\lambda}}\) is an \(s\)\emph{-indiscernible tree over A} if \[ \text{qftp}_{L_{s,\lambda}}(\eta_{0}, \ldots, \eta_{n-1}) = \text{qftp}_{L_{s,\lambda}}(\nu_{0}, \ldots, \nu_{n-1}) \] implies \(\text{tp}(a_{\eta_{0}}, \ldots, a_{\eta_{n-1}}/A) = \text{tp}(a_{\nu_{0}}, \ldots, a_{\nu_{n-1}}/A)\). \item We say \((a_{\alpha,i})_{\alpha < \kappa, i < \omega}\) is a \emph{mutually indiscernible array} over $A$ if, for all $\alpha < \kappa$, $(a_{\alpha, i})_{i < \omega}$ is a sequence indiscernible over $A \cup \{a_{\beta,j} : \beta < \kappa, \beta \neq \alpha, j < \omega\}$. \end{enumerate} \end{defn} \begin{fact} \cite[Theorem 4.3]{KimKimScow} \label{s-indiscernible extraction} Given a collection of tuples $(a_{\eta})_{\eta \in \omega^{<\omega}}$, there is $(b_{\eta})_{\eta \in \omega^{<\omega}}$ which is $s$-indiscernible and \emph{locally based} on $(a_{\eta})_{\eta \in \omega^{<\omega}}$, that is, given any $\overline{\eta} = (\eta_{0},\ldots, \eta_{k-1}) \in \omega^{<\omega}$ and $\varphi(x_{0},\ldots, x_{n-1})$ such that $\models \varphi(b_{\eta_{0}},\ldots, b_{\eta_{k-1}})$, there is $\overline{\nu} = (\nu_{0},\ldots, \nu_{n-1}) \in \omega^{<\omega}$ with $\mathrm{qftp}_{L_{s,\omega}}(\overline{\eta}) = \mathrm{qftp}_{L_{s,\omega}}(\overline{\nu})$ and $\models \varphi(a_{\nu_{0}},\ldots, a_{\nu_{n-1}})$. \end{fact} \begin{fact} \cite[Lemma 1.2(2)]{ChernikovNTP2} \label{artem's lemma} Let $(a_{\alpha,i})_{\alpha < n, i < \omega}$ be an array of parameters. Given a finite set of formulas $\Delta$ and $N < \omega$, we can find, for each $\alpha < n$, $i_{\alpha,0} < i_{\alpha,1} < \ldots < i_{\alpha,N-1}$ so that $(a_{\alpha,i_{\alpha,j}})_{\alpha < n, j < N}$ is $\Delta$-mutually indiscernible array\textemdash i.e. for all $\alpha < n$, $(a_{\alpha,i_{\alpha,j}})_{j < N}$ is $\Delta$-indiscernible over $\{a_{\beta,i_{\beta,j}} : \beta \neq \alpha, j < N\}$. \end{fact} \begin{fact} \label{s-IndiscTreeProp} \cite[Lemma 2.2]{ArtemNick} Let $(a_\eta : \eta \in \kappa^{<\lambda})$ be a tree s-indiscernible over a set of parameters $C$. \begin{enumerate} \item All paths have the same type over $C$: for any $\eta, \nu \in \kappa^{\lambda}$, $tp((a_{\eta | \alpha})_{\alpha < \lambda}/C) = tp((a_{\nu|\alpha})_{\alpha < \lambda}/C)$. 
\item Suppose $\{\eta_{\alpha} : \alpha < \gamma\} \subseteq \kappa^{<\lambda}$ satisfies $\eta_{\alpha} \perp \eta_{\alpha'}$ whenever $\alpha \neq \alpha'$. Then the array $(b_{\alpha, \beta})_{\alpha < \gamma, \beta < \kappa}$ defined by $$ b_{\alpha, \beta} = a_{\eta_{\alpha} \frown \langle \beta \rangle} $$ is mutually indiscernible over $C$. \end{enumerate} \end{fact} Parts (1) and (2) of the following lemma are essentially \cite[Lemma 2.2]{ChernikovNTP2} and \cite[Lemma 3.1(1)]{ArtemNick}, respectively, but we sketch the argument in order to point out that, from an inp- or sct-pattern of height $\kappa$, we can find one with appropriately indiscernible parameters, leaving the formulas fixed. \begin{lem} \label{witness} \begin{enumerate} \item If there is an inp-pattern $(\varphi_{\alpha}(x;y_{\alpha}) : \alpha < \kappa)$, $(a_{\alpha,i})_{\alpha < \kappa, i < \omega}$, $(k_{\alpha})_{\alpha < \kappa}$ of height $\kappa$ modulo $T$, then there is an inp-pattern $(\varphi_{\alpha}(x;y_{\alpha}) : \alpha < \kappa)$, $(a'_{\alpha, i})_{\alpha < \kappa, i < \omega}$, $(k_{\alpha})_{\alpha < \kappa}$ such that $(a'_{\alpha, i})_{\alpha < \kappa, i < \omega}$ is a mutually indiscernible array. \item If there is an sct-pattern (cdt-pattern) of height $\kappa$ modulo $T$, then there is an sct-pattern (cdt-pattern) $\varphi_{\alpha}(x;y_{\alpha})$, $(a_{\eta})_{\eta \in \omega^{<\kappa}}$ such that $(a_{\eta})_{\eta \in \omega^{<\kappa}}$ is an $s$-indiscernible tree. \end{enumerate} \end{lem} \begin{proof} (1) Given an inp-pattern $(\varphi_{\alpha}(x;y_{\alpha}) : \alpha < \kappa)$, $(a_{\alpha,i})_{\alpha < \kappa, i < \omega}$, $(k_{\alpha})_{\alpha < \kappa}$, let $\Gamma(z_{\alpha,i} : \alpha < \kappa, i < \omega)$ be a partial type that naturally expresses the following: \begin{itemize} \item $(z_{\alpha,i})_{\alpha < \kappa, i < \omega}$ is a mutually indiscernible array. \item For every $\alpha < \kappa$, $\{\varphi_{\alpha}(x;z_{\alpha,i}) : i < \omega\}$ is $k_{\alpha}$-inconsistent. \item For every $f: \kappa \to \omega$, $\{\varphi_{\alpha}(x;z_{\alpha,f(\alpha)}) : \alpha < \kappa\}$ is consistent. \end{itemize} By Fact \ref{artem's lemma}, any finite subset of $\Gamma$ can be satisfied by an array chosen from $(a_{\alpha,i})_{\alpha < \kappa, i < \omega}$, and therefore $\Gamma$ is consistent by compactness. A realization $(a'_{\alpha,i})_{\alpha < \kappa, i < \omega}$ yields the desired inp-pattern. (2) is entirely similar: given an sct-pattern $\varphi_{\alpha}(x;y_{\alpha})$, $(a_{\eta})_{\eta \in \omega^{<\kappa}}$, apply Fact \ref{s-indiscernible extraction} and compactness to obtain $(b_{\eta})_{\eta \in \omega^{<\kappa}}$, which is $s$-indiscernible and has the property that for any formula $\varphi(x_{0},\ldots, x_{n-1})$ and any tuple $\overline{\eta} = (\eta_{0},\ldots, \eta_{n-1})$ of elements of $\omega^{<\kappa}$, if $\models \varphi(b_{\eta_{0}},\ldots, b_{\eta_{n-1}})$, there is $\overline{\nu} = (\nu_{0},\ldots, \nu_{n-1})$ with $\mathrm{qftp}_{L_{s,\kappa}}(\overline{\eta}) = \mathrm{qftp}_{L_{s,\kappa}}(\overline{\nu})$ such that $\models \varphi(a_{\nu_{0}},\ldots, a_{\nu_{n-1}})$. From this property, it easily follows that, for all $\alpha < \kappa$ and $\eta \in \omega^{\alpha}$, $\{\varphi_{\alpha+1}(x;b_{\eta \frown \langle i \rangle}) : i < \omega \}$ is $k_{\alpha+1}$-inconsistent and, for all $\eta \in \omega^{\kappa}$, $\{\varphi_{\alpha}(x;b_{\eta | \alpha}) : \alpha < \kappa\}$ is consistent. Therefore $(\varphi_{\alpha}(x;y_{\alpha}) : \alpha < \kappa)$, $(b_{\eta})_{\eta \in \omega^{<\kappa}}$ is the desired sct-pattern.
\end{proof}
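As an aside, the relations of $L_{s,\lambda}$ are easy to realize concretely on the finite tree $\omega^{<\omega}$. The following Python sketch is purely illustrative and is not used anywhere in the proofs; it codes nodes as tuples of natural numbers and implements $\unlhd$, $\wedge$, $<_{lex}$, the level predicates $P_{\alpha}$, and the derived relation $\perp$ (the function names are ours).
\begin{verbatim}
# Illustrative only: the L_{s,lambda}-structure on omega^{<omega}, with nodes
# coded as tuples of natural numbers (the empty tuple is the root).

def ancestor(eta, nu):
    """eta unlhd nu : eta is an initial segment of nu."""
    return nu[:len(eta)] == eta

def meet(eta, nu):
    """eta wedge nu : the longest common initial segment."""
    i = 0
    while i < min(len(eta), len(nu)) and eta[i] == nu[i]:
        i += 1
    return eta[:i]

def lex(eta, nu):
    """eta <_lex nu : ancestors precede descendants, otherwise compare at the
    first coordinate after the meet (this is Python's tuple order)."""
    return eta != nu and eta < nu

def level(eta, alpha):
    """P_alpha(eta) : eta lies on the alpha-th level."""
    return len(eta) == alpha

def perp(eta, nu):
    """eta perp nu : neither node is an initial segment of the other."""
    return not ancestor(eta, nu) and not ancestor(nu, eta)

# Example: two immediate successors of a node are incomparable.
assert perp((0, 1), (0, 2)) and meet((0, 1), (0, 2)) == (0,)
\end{verbatim}
Agreement of these relations on two tuples of nodes, together with the relations among the associated meets, is what agreement of quantifier-free $L_{s,\omega}$-types amounts to.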
\subsection{Fra\"iss\'e Theory} We will recall some basic facts from Fra\"iss\'e theory, from \cite[Section 7.1]{hodges1993model}. Let \(L\) be a finite language and let \(\mathbb{K}\) be a non-empty finite or countable set of finitely generated \(L\)-structures which has HP, JEP, and AP. Such a class $\mathbb{K}$ is called a \emph{Fra\"iss\'e class}. Then there is an \(L\)-structure \(D\), unique up to isomorphism, such that \(D\) has cardinality \(\leq \aleph_{0}\), \(\mathbb{K}\) is the age of \(D\), and \(D\) is ultrahomogeneous. We call $D$ the \emph{Fra\"iss\'e limit} of $\mathbb{K}$, which we sometimes denote $\text{Flim}(\mathbb{K})$. Given a subset $A$ of the $L$-structure $C$, we write $\langle A \rangle^{C}_{L}$ for the $L$-substructure of $C$ generated by $A$. We say that $\mathbb{K}$ is \emph{uniformly locally finite} if there is a function $g: \omega \to \omega$ such that a structure in $\mathbb{K}$ generated by $n$ elements has cardinality at most $g(n)$. If \(\mathbb{K}\) is a countable uniformly locally finite set of finitely generated \(L\)-structures and $T = \text{Th}(D)$, then \(T\) is \(\aleph_{0}\)-categorical and has quantifier elimination. The following equivalent formulation of ultrahomogeneity is well-known, see, e.g., \cite[Proposition 2.3]{KPT}: \begin{fact} Let \(A\) be a countable structure. Then \(A\) is ultrahomogeneous if and only if it satisfies the following extension property: if \(B,C\) are finitely generated and can be embedded into \(A\), \(f: B \to A\), \(g: B \to C\) are embeddings then there is an embedding \(h: C \to A\) such that \(h \circ g = f\). \end{fact} The following is a straight-forward generalization of \cite[Proposition 5.2]{KPT}: \begin{lem}\label{KPTredux} Suppose \(L \subseteq L'\), and \(\mathbb{K}\) is a Fra\"iss\'e class of \(L\)-structures and \(\mathbb{K}'\) is a Fra\"iss\'e class of \(L'\)-structures satisfying the following two conditions: \begin{enumerate} \item \(A \in \mathbb{K}\) if and only if there is a \(D' \in \mathbb{K}'\) such that \(A\) is an $L$-substructure of \(D' \upharpoonright L\). \item If \(A,B \in \mathbb{K}\), \(\pi: A \to B\) is an \(L\)-embedding, and \(C \in \mathbb{K}'\) with \(C = \langle A \rangle^{C}_{L'}\), then there is a \(D \in \mathbb{K}'\), such that $B$ is an $L$-substructure of $D\upharpoonright L$, and an \(L'\)-embedding \(\tilde{\pi}: C \to D\) extending \(\pi\). \end{enumerate} Then \(\mathrm{Flim}(\mathbb{K}') \upharpoonright L = \mathrm{Flim}(\mathbb{K})\). \end{lem} \begin{proof} Let \(F' = \text{Flim}(\mathbb{K}')\) and suppose \(F = F' \upharpoonright L\). Fix \(A_{0}, B_{0} \in \mathbb{K}\) and an \(L\)-embedding \(\pi: A_{0} \to B_{0}\). Suppose \(\varphi: A_{0} \to F\) is an \(L\)-embedding. Let \(E = \langle \varphi(A_{0}) \rangle^{F'}_{L'}\). Up to isomorphism over \(A_{0}\), there is a unique \(C \in \mathbb{K}'\) containing \(A_{0}\) such that \(C = \langle A_{0} \rangle^{C}_{L'}\) and \(\tilde{\varphi} : C \to F'\) is an \(L'\)-embedding extending \(\varphi\) with \(E = \tilde{\varphi}(C)\), since given another such $C'$ and $\tilde{\varphi}' :C' \to F'$, we have $\tilde{\varphi}'^{-1} \circ \tilde{\varphi} : C \to C'$ is an $L'$-isomorphism which is the identity on $A_{0}$. By (2), there is some \(D \in \mathbb{K}'\) with \(B_{0} \subseteq D \upharpoonright L\) and and there is an \(L'\)-embedding \(\tilde{\pi}: C \to D\) extending \(\pi\). 
By the extension property for \(F'\), there is an \(L'\)-embedding \(\psi: D \to F'\) such that \(\psi \circ \tilde{\pi} = \tilde{\varphi}\) and hence \(\psi \circ \pi = \varphi\). As \(\psi \upharpoonright B_{0}\) is an \(L\)-embedding, this shows the extension property for \(F\). So \(F\) is ultrahomogeneous, and \(\text{Age}(F) = \mathbb{K}\) by (1) so \(F \cong \text{Flim}(\mathbb{K})\), which completes the proof. \end{proof}
\subsection{Strong Colorings} \begin{defn} \cite[Definition A.1.2]{Sh:g} Given cardinals $\lambda, \mu, \theta,$ and $\chi$, we write \(\text{Pr}_{1}(\lambda, \mu, \theta, \chi)\) for the assertion: there is a coloring \(c: [\lambda]^{2} \to \theta\) such that for any \(A \subseteq [\lambda]^{<\chi}\) of size \(\mu\) consisting of pairwise disjoint subsets of \(\lambda\) and any color \(\gamma < \theta\), there are \(a,b \in A\) with \(\max(a) < \min(b)\) such that \(c(\{\alpha, \beta\}) = \gamma\) for all \(\alpha \in a\), \(\beta \in b\). \end{defn} Note, for example, that $\text{Pr}_{1}(\lambda, \lambda, 2, 2)$ holds if and only if $\lambda \not\to (\lambda)^{2}_{2}$, i.e., $\lambda$ is not weakly compact. \begin{obs} \label{monotonicity} For fixed $\lambda$, if $\mu \leq \mu'$, $\theta' \leq \theta$, $\chi' \leq \chi$, then $$ \text{Pr}_{1}(\lambda, \mu, \theta, \chi) \implies \text{Pr}_{1}(\lambda, \mu', \theta', \chi'). $$ \end{obs} \begin{proof} Fix $c: [\lambda]^{2} \to \theta$ witnessing $\text{Pr}_{1}(\lambda, \mu, \theta, \chi)$. Define a new coloring $c': [\lambda]^{2} \to \theta'$ by $c'(\{\alpha, \beta\}) = c(\{\alpha, \beta\})$ if $c(\{\alpha, \beta\}) < \theta'$ and $c'(\{\alpha, \beta\}) = 0$ otherwise. Now suppose $A \subseteq [\lambda]^{< \chi'}$ is a family of pairwise disjoint sets of size $\mu'$. Then $A \subseteq [\lambda]^{<\chi}$ and, since $\mu \leq \mu'$, we may choose a subfamily $A_{0} \subseteq A$ of size $\mu$. Given any $\gamma < \theta'$, we have $\gamma < \theta$, so, by $\text{Pr}_{1}(\lambda, \mu, \theta, \chi)$ applied to $A_{0}$, there are $a,b \in A_{0} \subseteq A$ with $\max(a) < \min(b)$ such that $c(\{\alpha,\beta\}) = \gamma$ for all $\alpha \in a$, $\beta \in b$; since $\gamma < \theta'$, the definition of $c'$ gives $c'(\{\alpha,\beta\}) = \gamma$ for all such pairs as well. This shows that $c'$ witnesses $\text{Pr}_{1}(\lambda, \mu',\theta',\chi')$. \end{proof} In the arguments that follow, we will only make use of instances of $\mathrm{Pr}_{1}(\lambda^{+},\lambda^{+}, 2, \aleph_{0})$, which we will obtain from stronger results of Galvin and of Shelah, using Observation \ref{monotonicity}. Galvin proved that \(\text{Pr}_{1}\) holds in some form at arbitrary successor cardinals from instances of GCH. Considerably later, Shelah proved that $\text{Pr}_{1}$ holds in a strong form for the double successors of arbitrary regular cardinals in ZFC. \begin{fact}\cite[Conclusion 4.2]{Sh:572} \label{ShelahPr} The principle \(\text{Pr}_{1}(\lambda^{++}, \lambda^{++}, \lambda^{++}, \lambda)\) holds for every regular cardinal \(\lambda\). \end{fact} The above theorem of Shelah suffices to produce a ZFC counterexample to the equality $\kappa_{\mathrm{cdt}}(T) = \kappa_{\mathrm{inp}}(T) + \kappa_{\mathrm{sct}}(T)$, but we will need Galvin's result on arbitrary successor cardinals in order to get the consistency result contained in Theorem \ref{first main theorem}. Unfortunately, Galvin's result is only implicit in a certain construction in \cite[Lemma 4.1]{Galvin80}, and the argument there refers to earlier sections of his paper. So, following a suggestion of the referee, we have opted to provide a self-contained proof. The argument below merely consolidates Galvin's argument in \cite[Lemma 4.1]{Galvin80} and recasts it in Shelah's $\mathrm{Pr}_{1}$ notation, adding no new ideas. It will be useful to introduce the following notation: given sets $X$ and $Y$, let $X \otimes Y = \{\{x,y\} : x \in X,y \in Y\}$. \begin{lem} \cite[Lemma 3.1]{Galvin80} \label{matrix lemma} Let $\lambda$ be an infinite cardinal and $A$ be a set.
Suppose that, for each $\rho < \lambda$, we have a set $I_{\rho}$ with $|I_{\rho}| = \lambda$ and finite sets $E^{\xi}_{\rho} \subseteq A$ $(\xi \in I_{\rho})$ so that for any $a \in A$, $|\{\xi \in I_{\rho} : a \in E^{\xi}_{\rho}\}| < \aleph_{0}$. Then there are pairwise disjoint sets $(A_{\nu} : \nu < \lambda)$ so that for all $\nu < \lambda$ and $\rho < \lambda$ $$ |\{\xi \in I_{\rho} : E^{\xi}_{\rho} \subseteq A_{\nu}\}| = \lambda. $$ \end{lem} \begin{proof} Identify $I_{\rho}$ with $\lambda$ for all $\rho$ and let $<^{*}$ be a well-ordering of $\lambda \times \lambda$ in order-type $\lambda$. By recursion on $(\lambda \times \lambda, <^{*})$, define $(\xi_{(\alpha, \beta)} : (\alpha, \beta) \in \lambda \times \lambda)$ as follows: if $(\xi_{(\gamma, \delta)} : (\gamma, \delta) <^{*} (\alpha, \beta))$ has been defined, choose $\xi_{(\alpha, \beta)}$ to be the least $\xi \in I_{\alpha}$ so that $$ E^{\xi}_{\alpha} \cap \left( \bigcup_{\substack{(\gamma, \delta) <^{*} (\alpha, \beta) \\ \delta \neq \beta}} E^{\xi_{(\gamma, \delta)}}_{\gamma} \right) = \emptyset. $$ There is such a $\xi$ by the pigeonhole principle, given our assumption that $|\{\xi \in I_{\rho} : a \in E^{\xi}_{\rho}\}| < \aleph_{0}$ for all $a \in A$. Now define the sequence of sets $(A_{\nu} : \nu < \lambda)$ by $$ A_{\nu} = \bigcup_{\alpha < \lambda} E^{\xi_{(\alpha, \nu)}}_{\alpha}. $$ It is easy to check that this satisfies the requirements; in particular, the $A_{\nu}$ are pairwise disjoint because each $E^{\xi_{(\alpha,\beta)}}_{\alpha}$ was chosen to be disjoint from all previously chosen sets assigned to a different index $\beta$. \end{proof} \begin{thm} \cite[Lemma 4.1]{Galvin80} \label{GalvinPr} If \(\lambda\) is an infinite cardinal and \(2^{\lambda} = \lambda^{+}\), then \(\text{Pr}_{1}(\lambda^{+}, \lambda^{+}, \lambda^{+}, \aleph_{0})\). \end{thm} \begin{proof} Let $\langle \overline{B}_{\gamma} : \gamma < \lambda^{+} \rangle$ enumerate all $\lambda$-sequences $\overline{B} = \langle B_{\xi} : \xi < \lambda\rangle$ of pairwise disjoint finite subsets of $\lambda^{+}$. This is possible as $2^{\lambda} = \lambda^{+}$. \textbf{Claim 1}: There is a sequence of pairwise disjoint sets $\langle K_{\nu} : \nu < \lambda^{+} \rangle$ so that, for all $\nu < \lambda^{+}$, $K_{\nu} \subseteq [\lambda^{+}]^{2}$ and, for all $\alpha < \lambda^{+}$, we have $(A)$ implies $(B)$, where: \begin{enumerate} \item[(A)] $\gamma < \alpha$, $\bigcup_{\xi < \lambda} B_{\gamma, \xi} \subseteq \alpha$, $X \in [\alpha]^{<\aleph_{0}}$, and $|\{\xi : B_{\gamma,\xi} \otimes X \subseteq K_{\nu}\}| = \lambda$. \item[(B)] $|\{ \xi : B_{\gamma,\xi} \otimes (X \cup \{\alpha\}) \subseteq K_{\nu} \}| = \lambda$. \end{enumerate} \emph{Proof of claim}: By induction on $\alpha < \lambda^{+}$, we will construct for every $\nu < \lambda$, a set $K_{\nu}(\alpha) \subseteq \alpha$ and define $K_{\nu} = \{\{\beta, \alpha\} : \alpha < \lambda^{+}, \beta \in K_{\nu}(\alpha)\}$. We will define the sets $K_{\nu}(\alpha)$ to be pairwise disjoint and so that: \begin{enumerate} \item[(*)] Whenever $\gamma < \alpha$, $\bigcup_{\xi < \lambda} B_{\gamma,\xi} \subseteq \alpha$, $X \in [\alpha]^{<\aleph_{0}}$, and $|\{\xi : B_{\gamma,\xi} \otimes X \subseteq K_{\nu}\}| = \lambda$, then $|\{\xi : B_{\gamma,\xi} \otimes (X \cup \{\alpha\}) \subseteq K_{\nu}\}| = \lambda$. \end{enumerate} Note that if $\bigcup_{\xi < \lambda} B_{\gamma,\xi} \subseteq \alpha$, $X \in [\alpha]^{<\aleph_{0}}$, then it makes sense to write $\{\xi : B_{\gamma,\xi} \otimes X \subseteq K_{\nu}\}$, since $K_{\nu} \cap [\alpha]^{2}$ has already been defined. Suppose we have constructed $K_{\nu}(\beta)$ for every $\nu < \lambda$ and $\beta < \alpha$.
Let $\langle (\nu_{\rho},\gamma_{\rho},X_{\rho}) : \rho < \lambda \rangle$ enumerate all triples $(\nu,\gamma,X)$ satisfying the hypothesis of $(*)$ for $\alpha$. Apply Lemma \ref{matrix lemma} with $A = \alpha$, $I_{\rho} = \{\xi : B_{\gamma_{\rho},\xi} \otimes X_{\rho} \subseteq K_{\nu_{\rho}}\}$, and $E^{\xi}_{\rho} = B_{\gamma_{\rho},\xi}$ to obtain the disjoint sets $A_{\nu} := K_{\nu}(\alpha)$ for all $\nu < \lambda$. Then for all $\nu < \lambda$, we have that if $\gamma < \alpha$, $\bigcup_{\xi < \lambda} B_{\gamma,\xi} \subseteq \alpha$, $X \in [\alpha]^{<\aleph_{0}}$, and $|\{\xi : B_{\gamma,\xi} \otimes X \subseteq K_{\nu}\}| = \lambda$, then $|\{\xi : B_{\gamma,\xi} \otimes X \subseteq K_{\nu} \text{ and } B_{\gamma,\xi} \subseteq K_{\nu}(\alpha)\}| = \lambda$. Since the set $\{\xi : B_{\gamma,\xi} \otimes (X \cup \{\alpha\}) \subseteq K_{\nu}\}$ is equal to the set $\{\xi : B_{\gamma,\xi} \otimes X \subseteq K_{\nu} \text{ and } B_{\gamma,\xi} \subseteq K_{\nu}(\alpha)\}$, by the definition of $K_{\nu}(\alpha)$, this completes the proof of the claim. \qed \textbf{Claim 2}: If $\nu < \lambda$ and $\langle v_{\xi} : \xi < \lambda^{+} \rangle$ is a sequence of pairwise disjoint finite subsets of $\lambda^{+}$, then there are $\xi < \eta < \lambda^{+}$ so that $v_{\xi} \otimes v_{\eta} \subseteq K_{\nu}$. \emph{Proof of claim}: There is an index $\gamma < \lambda^{+}$ such that $B_{\gamma,\xi} = v_{\xi}$ for all $\xi < \lambda$. By the regularity of $\lambda^{+}$, there is some $\beta < \lambda^{+}$ so that $\bigcup_{\xi < \lambda} v_{\xi} \subseteq \beta$ and we may further choose $\beta$ so that $\gamma < \beta$. Since the sets $v_{\xi}$ are pairwise disjoint, there is some $\eta$ with $\lambda \leq \eta < \lambda^{+}$ so that $v_{\eta} \cap \beta = \emptyset$. It follows that $\gamma < \alpha$ and $\bigcup_{\xi < \lambda} B_{\gamma,\xi} \subseteq \alpha$ for all $\alpha \in v_{\eta}$. List $v_{\eta} = \{\alpha_{0} < \ldots < \alpha_{m-1}\}$. Applying the implication (A)$\implies$(B) of Claim 1 $m$ times, with $\alpha_{0},\ldots, \alpha_{m-1}$ playing the role of $\alpha$ and $\emptyset$, $\{\alpha_{0}\}$, \ldots, $\{\alpha_{0},\ldots, \alpha_{m-2}\}$ playing the role of $X$ in (A), we get that $$ |\{\xi < \lambda : B_{\gamma, \xi} \otimes v_{\eta} \subseteq K_{\nu}\} | = \lambda. $$ In particular, there is some $\xi < \lambda \leq \eta$ so that $v_{\xi} \otimes v_{\eta} \subseteq K_{\nu}$. \qed Now to complete the proof, we must construct a coloring. By replacing $K_{0}$ with $[\lambda^{+}]^{2} \setminus \left(\bigcup_{\nu > 0} K_{\nu}\right)$, we may assume that $\bigcup K_{\nu} = [\lambda^{+}]^{2}$. We define a coloring $c : [\lambda^{+}]^{2} \to \lambda^{+}$ by $c(\{\alpha,\beta\}) = \nu$ if and only if $\{\alpha,\beta\} \in K_{\nu}$, for all $\nu < \lambda^{+}$, which is well-defined since the $K_{\nu}$ are pairwise disjoint with union $[\lambda^{+}]^{2}$. Given any sequence $\langle v_{\xi} : \xi < \lambda^{+}\rangle$ of pairwise disjoint finite subsets of $\lambda^{+}$, we know by the regularity of $\lambda^{+}$ that there is a subsequence $\langle v_{\xi_{\rho}} : \rho < \lambda^{+} \rangle$ so that $\rho < \rho'$ implies $\max (v_{\xi_{\rho}}) < \min (v_{\xi_{\rho'}})$, so, replacing the given sequence by a subsequence, we may assume $\xi < \xi'$ implies $\max(v_{\xi}) < \min(v_{\xi'})$.
Given $\nu < \lambda^{+}$, we know, by Claim 2, there are $\xi < \eta< \lambda^{+}$ so that $v_{\xi} \otimes v_{\eta} \subseteq K_{\nu}$ or, in other words, $c(\{\alpha,\beta\}) = \nu$ for all $\alpha \in v_{\xi}$ and $\beta \in v_{\eta}$ which shows $c$ witnesses $\mathrm{Pr}_{1}(\lambda^{+},\lambda^{+},\lambda^{+},\aleph_{0})$. \end{proof}
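Although $\mathrm{Pr}_{1}$ is a statement about uncountable cardinals and cannot be verified by any finite computation, the quantifier pattern in its definition can be illustrated on a finite scale. The Python sketch below is ours and purely illustrative: given a coloring $c$ of pairs from $\{0,\ldots,n-1\}$, a family $A$ of pairwise disjoint finite sets, and a number of colors, it checks the inner condition of the definition, namely that every color is realized on some rectangle $a \otimes b$ with $\max(a) < \min(b)$.
\begin{verbatim}
# Illustrative only: the inner condition in the definition of Pr_1, on a
# finite scale.  c maps frozensets {x,y} to a color < theta; A is a family of
# pairwise disjoint finite subsets of range(n).

def pr1_instance_holds(c, A, theta):
    """For every color gamma, is there a, b in A with max(a) < min(b) such
    that every cross pair {x, y} with x in a, y in b gets color gamma?"""
    A = [sorted(s) for s in A]
    for gamma in range(theta):
        if not any(
            a[-1] < b[0]
            and all(c[frozenset((x, y))] == gamma for x in a for y in b)
            for a in A for b in A
        ):
            return False
    return True
\end{verbatim}
In the actual principle this condition is required of every family of size $\mu$, which is where all of the set-theoretic content lies.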
\section{The main construction} From strong colorings, we construct theories with \(\kappa_{\text{sct}}(T) + \kappa_{\text{inp}}(T) < \kappa_{\text{cdt}}(T)\). For each regular uncountable cardinal \(\kappa\) and coloring \(f: [\kappa]^{2} \to 2\) we build a theory \(T^{*}_{\kappa,f}\) which comes equipped with a canonical cdt-pattern of height \(\kappa\), in which the consistency of two incomparable nodes, one on level \(\alpha\) and another on level \(\beta\), is determined by the value of the coloring \(f(\{\alpha, \beta\})\). In the next section, we then analyze the possible inp- and sct-patterns that arise in models of \(T^{*}_{\kappa,f}\) and show that the combinatorial properties of the function $f$ are reflected in the values of the cardinal invariants $\kappa_{\mathrm{inp}}$ and $\kappa_{\mathrm{sct}}$.
\subsection{Building a Theory} Suppose \(\kappa\) is a regular uncountable cardinal. We define a language \(L_{\kappa} = \langle O,P_{\alpha}, f_{\alpha \beta}, p_{\alpha} : \alpha \leq \beta < \kappa\rangle\), where \(O\) and all the \(P_{\alpha}\) are unary predicates and the \(f_{\alpha \beta}\) and \(p_{\alpha}\) are unary functions. Given a subset \(w \subseteq \kappa\), let \(L_{w} = \langle O, P_{\alpha}, f_{\alpha \beta},p_{\alpha} : \alpha \leq \beta, \alpha,\beta \in w\rangle\). Given a function \(f: [\kappa]^{2} \to 2\), we define a universal theory \(T_{\kappa, f}\) with the following axiom schemas: \begin{enumerate} \item The predicates \(O\) and \((P_{\alpha})_{\alpha < \kappa}\) are pairwise disjoint; \item For all $\alpha < \kappa$, \(f_{\alpha\alpha}\) is the identity function; for all \(\alpha < \beta<\kappa\), \[ (\forall x)\left[ (x \not\in P_{\beta} \to f_{\alpha \beta}(x) = x) \wedge (x \in P_{\beta} \to f_{\alpha \beta}(x) \in P_{\alpha})\right], \] and if \(\alpha < \beta < \gamma<\kappa\), then \[ (\forall x \in P_{\gamma})[f_{\alpha \gamma} (x) = (f_{\alpha \beta} \circ f_{\beta \gamma})(x)]. \] \item For all \(\alpha < \kappa\), \[ (\forall x)\left[(x \not\in O \to p_{\alpha}(x) = x) \wedge (p_{\alpha}(x) \neq x \to p_{\alpha}(x) \in P_{\alpha})\right]. \] \item For all \(\alpha < \beta < \kappa\) satisfying \(f(\{\alpha , \beta\}) = 0\), we have the axiom $$ (\forall z \in O)[p_{\alpha}(z) \neq z \wedge p_{\beta}(z) \neq z \to p_{\alpha}(z) = (f_{\alpha \beta} \circ p_{\beta})(z)]. $$ \end{enumerate} The predicate \(O\) is for ``objects'' and \(\bigcup P_{\alpha}\) is a tree of ``parameters'' where each \(P_{\alpha}\) names nodes of level \(\alpha\). The functions \(f_{\alpha \beta}\) map elements of the tree at level \(\beta\) to their unique ancestor at level \(\alpha\). So the tree partial order is coded in a highly non-uniform way, for each pair of levels. The \(p_{\alpha}\)'s should be considered as partial functions on \(O\) which connect objects to elements of the tree: we will write $\text{dom}(p_{\alpha})$ for the set $\{x \in O : p_{\alpha}(x) \neq x\}$. Axiom \((4)\) says, in essence, that if \(f(\{\alpha, \beta\}) = 0\), then the only way for an object in both $\text{dom}(p_{\alpha})$ and $\text{dom}(p_{\beta})$ to connect to a node on level \(\alpha\) and a node on level \(\beta\) is if these two nodes lie along a path in the tree. \begin{lem} Define a class of finite structures \[ \mathbb{K}_{w} = \{ \text{ finite models of }T_{\kappa,f} \upharpoonright L_{w}\}. \] Then for finite \(w\), \(\mathbb{K}_{w}\) is a Fra\"iss\'e class and, moreover, it is uniformly locally finite. \end{lem} \begin{proof} The axioms for \(T_{\kappa,f}\) are universal so HP is clear. JEP and AP are proved similarly, so we will give the argument for AP only. Suppose \(A\) embeds into \(B\) and \(C\), where \(A,B,C \in \mathbb{K}_{w}\) and \(B \cap C = A\). Because all the symbols of the language are unary, \(B \cup C\) may be viewed as an \(L_{w}\)-structure by interpreting each predicate \(Q\) of \(L_{w}\) so that \(Q^{B \cup C} = Q^{B} \cup Q^{C}\) and similarly interpreting \(g^{B \cup C} = g^{B} \cup g^{C}\) for all the function symbols \(g \in L_{w}\). It is easy to check that \(B \cup C\) is a model of \(T_{\kappa,f} \upharpoonright L_{w}\). To see uniform local finiteness, observe that, in virtue of the way the functions are defined, a set of size \(n\) generates a substructure of size at most \((|w|^{2}+|w|+1)n\).
\end{proof} Hence, for each finite \(w \subset \kappa\), there is a countable ultrahomogeneous \(L_{w}\)-structure \(M_{w}\) with \(\text{Age}(M_{w}) = \mathbb{K}_{w}\). Let \(T^{*}_{w} = \text{Th}(M_{w})\). In the following lemmas, we will establish the properties needed to apply Lemma \ref{KPTredux} in order to show the $T^{*}_{w}$ cohere. \begin{lem} \label{first condition} Suppose $w \subseteq v$ are finite subsets of $\kappa$ and $A \in \mathbb{K}_{w}$. Then there is an $L_{v}$-structure $D \in \mathbb{K}_{v}$ such that $A \subseteq D \upharpoonright L_{w}$. \end{lem} \begin{proof} We may enumerate $w$ in increasing order as $w = \{\alpha_{0} < \alpha_{1} < \ldots < \alpha_{n-1}\}$. By induction, it suffices to consider the case when $v = w \cup \{\gamma\}$ for some $\gamma \in \kappa \setminus w$. We consider two cases: \textbf{Case 1}: $\alpha_{n-1} < \gamma$ or $w = \emptyset$. In this case, the new symbols in $L_{v}$ not in $L_{w}$ consist of the predicate $P_{\gamma}$, the function $p_{\gamma}$, and the functions $f_{\alpha_{j}\gamma}$ for $j < n$ and $f_{\gamma \gamma}$. We define the underlying set of $D$ to be $A$, and give the symbols of $L_{w}$ their interpretation in $A$. Then we interpret $P_{\gamma}^{D} = \emptyset$, and interpret $p_{\gamma}^{D}$, $f_{\alpha_{j}\gamma}^{D}$ for $j < n$, and $f^{D}_{\gamma \gamma}$ to be the identity function on $D$. Clearly $A = D \upharpoonright L_{w}$ and it is easy to check $D \in \mathbb{K}_{v}$. \textbf{Case 2}: $\gamma < \alpha_{n-1}$. Let $i$ be least such that $\gamma < \alpha_{i}$. We define the underlying set of $D$ to be $A \cup \{*_{d} : d \in P_{\alpha_{i}}^{A}\}$, where the $*_{d}$ denote new formal elements. We interpret all the predicates of $L_{w}$ on $D$ to have the same interpretation as on $A$, and we interpret each function of $L_{w}$ to be the identity on $\{*_{d}: d \in P^{A}_{\alpha_{i}}\}$ and, when restricted to $A$, to have the same interpretation as in $A$. The new symbols in $L_{v}$ not in $L_{w}$ are: the predicate $P_{\gamma}$, the function $p_{\gamma}$, the functions $f_{\alpha_{j}\gamma}$ for $j < i$, the function $f_{\gamma \gamma}$, and the functions $f_{\gamma \alpha_{j}}$ for $i \leq j < n$. We remark that it is possible that $i = 0$, in which case there are no such $j < i$ so our conditions on $f_{\alpha_{j}\gamma}$ below say nothing. We interpret $P_{\gamma}^{D} = \{*_{d} : d \in P^{A}_{\alpha_{i}}\}$ and $p_{\gamma}^{D}$ as the identity function on $D$. Informally speaking, we will interpret the remaining functions so that $*_{d}$ becomes the ancestor of $d$ at level $\gamma$. More precisely, for $j < i$, we set $f^{D}_{\alpha_{j} \gamma}(*_{d}) = f^{A}_{\alpha_{j}\alpha_{i}}(d)$ and define $f^{D}_{\alpha_{j}\gamma}$ to be the identity on the complement of $\{*_{d} : d \in P^{A}_{\alpha_{i}}\}$. Likewise, if $i \leq j < n$ and $e \in P^{D}_{\alpha_{j}}$, we set $f^{D}_{\gamma \alpha_{j}}(e) = *_{f_{\alpha_{i}\alpha_{j}}^{A}(e)}$ and we define $f^{D}_{\gamma \alpha_{j}}$ to be the identity on the complement of $P^{D}_{\alpha_{j}}$. Finally, we set $f^{D}_{\gamma \gamma} = \text{id}_{D}$, which completes the definition of the $L_{v}$-structure $D$. Now we check that $D \in \mathbb{K}_{v}$. By construction and the fact that $A \in \mathbb{K}_{w}$, all the axioms are clear except that, in order to establish (2), we must check that if $\beta < \beta' < \beta''$ are from $v$, then for all $x \in P^{D}_{\beta''}$, $(f^{D}_{\beta \beta'} \circ f^{D}_{\beta'\beta''})(x) = f^{D}_{\beta \beta''}(x)$.
We may assume $\gamma \in \{\beta,\beta',\beta''\}$. If $\gamma = \beta''$, then every element of $P_{\gamma}^{D}$ is of the form $*_{d}$ for some $d \in P^{A}_{\alpha_{i}}$ and we have \begin{eqnarray*} (f_{\beta\beta'}^{D} \circ f^{D}_{\beta' \gamma})(*_{d}) &=& (f^{D}_{\beta \beta'} \circ f^{D}_{\beta' \alpha_{i}})(d)\\ &=& f^{D}_{\beta \alpha_{i}}(d) \\ &=& f^{D}_{\beta \gamma}(*_{d}), \end{eqnarray*} by the definition of $f^{D}_{\alpha_{j}\gamma}$ for $j < i$ and the fact that $D$ extends $A$, which satisfies axiom (2). Similarly, if $\gamma = \beta'$ and $x \in P^{D}_{\beta''}$, we have \begin{eqnarray*} (f^{D}_{\beta \gamma} \circ f^{D}_{\gamma \beta''})(x)&=& f^{D}_{\beta \gamma}(*_{f^{D}_{\alpha_{i}\beta''}(x)})\\ &=& f^{D}_{\beta \alpha_{i}}(f^{D}_{\alpha_{i}\beta''}(x)) \\ &=& f^{D}_{\beta \beta''}(x). \end{eqnarray*} Finally, if $\beta = \gamma$ and $x \in P^{D}_{\beta''}$, we have \begin{eqnarray*} f^{D}_{\gamma \beta'}(f^{D}_{\beta' \beta''}(x)) &=& *_{f^{D}_{\alpha_{i}\beta'}(f^{D}_{\beta' \beta''}(x))} \\ &=& *_{f^{D}_{\alpha_{i}\beta''}(x)} \\ &=& f^{D}_{\gamma \beta''}(x), \end{eqnarray*} which verifies that (2) holds of $D$ and therefore $D \in \mathbb{K}_{v}$. \end{proof}
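To keep the intended semantics of $T_{\kappa,f}$ in view, here is a small, purely illustrative finite $L_{w}$-structure for $w = \{0,1\}$, written in Python, together with a brute-force check of axiom schemas (1)--(4); the element names and the encoding are ours and play no role in the formal development (the identity functions $f_{00}$ and $f_{11}$ are omitted).
\begin{verbatim}
# Illustrative only: a finite L_w-structure for w = {0,1} and a brute-force
# check of axiom schemas (1)-(4) of T_{kappa,f} restricted to L_w.

O = {"x"}                        # objects
P = {0: {"s", "s2"}, 1: {"t"}}   # tree nodes on levels 0 and 1
universe = O | P[0] | P[1]

# f_{01}: level-1 nodes go to their level-0 ancestor, everything else is fixed.
f01 = {a: ("s2" if a == "t" else a) for a in universe}
# p_0, p_1: the object x is attached to s on level 0 and to t on level 1.
att = {0: {"x": "s"}, 1: {"x": "t"}}
def p(alpha, a):
    return att[alpha].get(a, a) if a in O else a

def check_axioms(color01):
    # (1) O and the P_alpha are pairwise disjoint.
    assert O.isdisjoint(P[0]) and O.isdisjoint(P[1]) and P[0].isdisjoint(P[1])
    # (2) f_{01} fixes the complement of P_1 and maps P_1 into P_0.
    assert all(f01[a] == a for a in universe - P[1])
    assert all(f01[a] in P[0] for a in P[1])
    # (3) p_alpha fixes the complement of O and lands in P_alpha when it moves.
    for alpha in (0, 1):
        assert all(p(alpha, a) == a for a in universe - O)
        assert all(p(alpha, a) == a or p(alpha, a) in P[alpha] for a in O)
    # (4) if f({0,1}) = 0, doubly attached objects attach along a path.
    if color01 == 0:
        assert all(p(0, a) == f01[p(1, a)]
                   for a in O if p(0, a) != a and p(1, a) != a)

check_axioms(color01=1)   # passes: the two attachments need not cohere
# check_axioms(color01=0) fails: p_0(x) = "s" but f_{01}(p_1(x)) = "s2"
\end{verbatim}
The final two lines illustrate exactly the role of the coloring: whether the two attachments of an object are forced to lie along a path is switched on and off by the value of $f$ on the pair of levels.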
\begin{lem} \label{second condition} Suppose $w \subseteq v$ are finite subsets of $\kappa$, $A,B \in \mathbb{K}_{w}$, and $\pi: A \to B$ is an $L_{w}$-embedding. Then given any $C \in \mathbb{K}_{v}$ with $C = \langle A \rangle^{C}_{L_{v}}$, there is $D \in \mathbb{K}_{v}$ and an $L_{v}$-embedding $\tilde{\pi}: C \to D$ extending $\pi$. \end{lem} \begin{proof} As in the proof of Lemma \ref{first condition}, we will list $w$ in increasing order as $w = \{\alpha_{0} < \alpha_{1} < \ldots < \alpha_{n-1}\}$ and assume that $v = w \cup \{\gamma\}$ for some $\gamma \in \kappa \setminus w$. We suppose we are given $A,B,C$, and $\pi$ as in the statement and we will construct $D$ and $\tilde{\pi}$. We may assume $B \cap C = \emptyset$. Note that the condition that $C = \langle A \rangle^{C}_{L_{v}}$ entails that the only elements of $C \setminus A$ are contained in $P_{\gamma}^{C}$ and similarly for $B$ and $D$. \textbf{Case 1}: $\alpha_{n-1} < \gamma$ or $w = \emptyset$. We define the underlying set of $D$ to be $B \cup P^{C}_{\gamma}$ and we define $\tilde{\pi}: C \to D$ so that $\tilde{\pi}\upharpoonright A = \pi$ and $\tilde{\pi} \upharpoonright P^{C}_{\gamma} = \text{id}_{P^{C}_{\gamma}}$. Interpret the predicates of $L_{w}$ on $D$ so that they agree with their interpretation on $B$ and interpret the functions of $L_{w}$ on $D$ so that they are the identity on $P^{C}_{\gamma}$ and so that, when restricted to $B$, they agree with their interpretation on $B$. This will ensure that $D \upharpoonright L_{w}$ is an extension of $B$. Finally, interpret $P_{\gamma}$ so that $P_{\gamma}^{D} = P^{C}_{\gamma}$ and define $f^{D}_{\gamma \gamma} = \text{id}_{D}$. Then for each $j < n$, we interpret $f_{\alpha_{j}\gamma}$ on $D$ so that, if $c \in P^{C}_{\gamma}$, then $f^{D}_{\alpha_{j} \gamma}(c) = \pi(f^{C}_{\alpha_{j}\gamma}(c))$, and if $c \in D \setminus P^{C}_{\gamma}$, then $f^{D}_{\alpha_{j}\gamma}(c) = c$. Note that $\tilde{\pi}(f^{C}_{\alpha_{j}\gamma}(c)) = f^{D}_{\alpha_{j}\gamma}(\tilde{\pi}(c))$ for all $c \in C$. Finally, interpret $p_{\gamma}$ so that, if $d = \pi(c) \in \pi(O^{C}) \subseteq O^{D}$ and $p^{C}_{\gamma}(c) \neq c$, then $p^{D}_{\gamma}(d) = p^{C}_{\gamma}(c)$, and otherwise $p^{D}_{\gamma}(d) = d$. It is clear from the definitions that $\tilde{\pi}(p_{\gamma}^{C}(c)) = p_{\gamma}^{D}(\tilde{\pi}(c))$ for all $c \in C$, so $\tilde{\pi}$ is an $L_{v}$-embedding. We are left with showing that $D \in \mathbb{K}_{v}$. Axioms (1) and (3) are clear from the construction, and to check (2), we just need to establish that if $\beta < \beta'$ are from $v$ and $c \in P^{C}_{\gamma}$, then $(f^{D}_{\beta \beta'} \circ f^{D}_{\beta' \gamma})(c) = f^{D}_{\beta \gamma}(c)$. For this, we unravel the definitions and make use of the fact that (2) is true in $C$: \begin{eqnarray*} f^{D}_{\beta \beta'}(f^{D}_{\beta'\gamma}(c)) &=& f^{D}_{\beta \beta'}(\pi(f^{C}_{\beta'\gamma}(c))) \\ &=& \pi (f^{C}_{\beta \beta'}(f^{C}_{\beta' \gamma}(c)))\\ &=& \pi(f^{C}_{\beta \gamma}(c)) \\ &=& f^{D}_{\beta \gamma}(c), \end{eqnarray*} which verifies (2).
Likewise, to show that (4) holds, we note that if $f(\{\beta,\gamma\}) = 0$, $p^{D}_{\gamma}(d) \neq d$, and $p^{D}_{\beta}(d) \neq d$ for some $\beta \in v$, then, by definition of $p^{D}_{\gamma}$, $d = \tilde{\pi}(c)$ for some $c \in O^{C}$ with $p^{C}_{\gamma}(c) \neq c$ and $p^{C}_{\beta}(c) \neq c$. Since $C$ satisfies (4), $p^{C}_{\beta}(c) = (f_{\beta \gamma}^{C} \circ p^{C}_{\gamma})(c)$, and hence, as $\tilde{\pi}$ is an embedding, $p^{D}_{\beta}(d) = (f_{\beta \gamma}^{D} \circ p_{\gamma}^{D})(d)$. This shows (4) and thus $D \in \mathbb{K}_{v}$. \textbf{Case 2}: $\gamma < \alpha_{n-1}$. Let $i$ be least such that $\gamma < \alpha_{i}$. The underlying set of $D$ will be $B \cup P_{\gamma}^{C} \cup \{*_{d} : d \in P^{B}_{\alpha_{i}} \setminus \pi(P^{A}_{\alpha_{i}})\}$, where each $*_{d}$ denotes a new formal element and we will define $\tilde{\pi}: C \to D$ to be $\pi \cup \text{id}_{P_{\gamma}^{C}}$. As in the previous case, we interpret the predicates of $L_{w}$ on $D$ so that they agree with their interpretation on $B$ and interpret the functions of $L_{w}$ on $D$ so that they are the identity on $P^{C}_{\gamma} \cup \{*_{d} : d \in P^{B}_{\alpha_{i}} \setminus \pi(P^{A}_{\alpha_{i}})\} $ and so that, when restricted to $B$, they agree with their interpretation on $B$. We will interpret $P_{\gamma}$ so that $$ P^{D}_{\gamma} = P^{C}_{\gamma} \cup \{*_{d} : d \in P^{B}_{\alpha_{i}} \setminus \pi(P^{A}_{\alpha_{i}})\}. $$ The map $\pi$ will dictate how we have to define the ancestors and descendants at level $\gamma$ of the elements in the image of $\pi$, and, for those elements not in the image of $\pi$, we define the interpretations so that $*_{d}$ will be the ancestor at level $\gamma$ of $d \in P^{B}_{\alpha_{i}} \setminus \pi(P^{A}_{\alpha_{i}})$, as in the previous lemma. For $j < i$, we define $f^{D}_{\alpha_{j}\gamma}$ so that, if $c \in P^{C}_{\gamma}$, $f^{D}_{\alpha_{j}\gamma}(c) = \pi(f^{C}_{\alpha_{j}\gamma}(c))$, and if $d \in P^{B}_{\alpha_{i}} \setminus \pi(P^{A}_{\alpha_{i}})$, then $f^{D}_{\alpha_{j}\gamma}(*_{d}) = f^{B}_{\alpha_{j}\alpha_{i}}(d)$. This defines $f^{D}_{\alpha_{j}\gamma}$ on $P^{D}_{\gamma}$ and we define $f^{D}_{\alpha_{j}\gamma}$ to be the identity on the complement of $P^{D}_{\gamma}$ in $D$. Next, we define $f^{D}_{\gamma \alpha_{i}}$ as follows: if $d = \pi(c) \in \pi(P^{C}_{\alpha_{i}}) \subseteq P^{B}_{\alpha_{i}}$, we put $f_{\gamma \alpha_{i}}^{D}(d) = f^{C}_{\gamma \alpha_{i}}(c)$, and if $e \in P^{B}_{\alpha_{i}} \setminus \pi(P^{C}_{\alpha_{i}})$, then we set $f^{D}_{\gamma \alpha_{i}}(e) = *_{e}$. This defines $f^{D}_{\gamma \alpha_{i}}$ on $P^{D}_{\alpha_{i}}$ and we define $f^{D}_{\gamma \alpha_{i}}$ to be the identity on the complement of $P^{D}_{\alpha_{i}}$ in $D$. For $j > i$, we put $f^{D}_{\gamma \alpha_{j}} = f^{D}_{\gamma \alpha_{i}} \circ f^{D}_{\alpha_{i} \alpha_{j}}$. Then we define $f^{D}_{\gamma \gamma} = \text{id}_{D}$. Lastly, we define $p^{D}_{\gamma}$ to be the identity on all elements in the complement of $\pi(O^{A})$ and if $d = \pi(c)$, we put $p_{\gamma}^{D}(d) = d$ if $p^{C}_{\gamma}(c) = c$ and we put $p_{\gamma}^{D}(d) = p^{C}_{\gamma}(c)$ if $p^{C}_{\gamma}(c) \neq c$. This completes the construction. It follows from the definitions that $\tilde{\pi}$ is an $L_{v}$-embedding, so we must check $D \in \mathbb{K}_{v}$. Axioms (1) and (3) are clear from the construction.
To show (2), we note that if $\beta < \beta' < \beta''$ and $c \in P^{D}_{\beta''}$, then either $c$ is in the image of $\tilde{\pi}$, in which case it is easy to check that $(f^{D}_{\beta \beta'} \circ f^{D}_{\beta' \beta''})(c) = f^{D}_{\beta \beta''}(c)$ using that (2) is satisfied in $C$ and $\tilde{\pi}$ is an embedding, or $c$ is not in the image of $\pi$, in which case the verification of (2) is identical to the verification of (2) in Case 2 of Lemma \ref{first condition}. The argument for (4) is identical to the argument for (4) in Case 1. We conclude that $D \in \mathbb{K}_{v}$, completing the proof. \end{proof} \begin{cor}\label{dirlim} Suppose \(w \subseteq v \subseteq \kappa\) and \(v,w\) are both finite. Then \(T^{*}_{w} \subseteq T^{*}_{v}\). \end{cor} \begin{proof} We will show $\mathrm{Flim}(\mathbb{K}_{v})\upharpoonright L_{w} = \mathrm{Flim}(\mathbb{K}_{w})$ by applying Lemma \ref{KPTredux}. Condition (1) in the Lemma is proved in Lemma \ref{first condition} and Condition (2) is proved in Lemma \ref{second condition}. \end{proof} Using Corollary \ref{dirlim}, we may define the theory \(T^{*}_{\kappa,f}\) as the union of the $T^{*}_{w}$ for all finite $w \subset \kappa$ and the resulting theory is consistent. Because each $T^{*}_{w}$ is complete and eliminates quantifiers, it follows that $T^{*}_{\kappa,f}$ is a complete theory extending $T_{\kappa,f}$ which eliminates quantifiers. The following lemmas will be useful in analyzing the possible formulas that could appear in the various patterns under consideration. Recall that, for all $\alpha < \kappa$, we write $\text{dom}(p_{\alpha})$ for the definable set $\{x \in O : p_{\alpha}(x) \neq x\}$, or equivalently $\{x \in O : p_{\alpha}(x) \in P_{\alpha}\}$. \begin{lem} \label{nice functions on P} Suppose $w \subseteq \kappa$ is a finite set containing $\beta$ and $\varphi(x)$ is an $L_{w}$-formula with $\varphi(x) \vdash x \in P_{\beta}$. Then for any $L_{w}$-term $t(x)$, there is $\alpha \leq \beta$ in $w$ such that $\varphi(x) \vdash t(x) =f_{\alpha \beta}(x)$. \end{lem} \begin{proof} The proof is by induction on terms. The conclusion holds for the term $x$ since $(\forall x)[f_{\beta \beta}(x) =x]$ is an axiom of $T_{\kappa,f}$. Now suppose $t(x)$ is a term such that $\varphi(x) \vdash t(x) = f_{\alpha \beta}(x)$ for some $\alpha \leq \beta$ from $w$. Then because $\varphi(x) \vdash x \in P_{\beta}$, $\varphi(x) \vdash t(x) \in P_{\alpha}$. It follows that for any $\delta \leq \gamma$ from $w$, $\varphi(x) \vdash p_{\gamma}(t(x)) = t(x)$ and $\varphi(x) \vdash f_{\delta \gamma}(t(x)) = t(x)$ when $\gamma \neq \alpha$. Additionally, if $\delta \leq \alpha$ is from $w$, then $\varphi(x) \vdash f_{\delta \alpha}(t(x)) = (f_{\delta \alpha} \circ f_{\alpha \beta})(x) = f_{\delta \beta}(x)$, which is of the desired form, completing the induction. \end{proof} \begin{lem} \label{nice functions on O} Suppose $w \subseteq \kappa$ is finite and $\varphi(x)$ is a complete $L_{w}$-formula with $\varphi(x) \vdash x \in O$. Then for any term $t(x)$ of $L_{w}$, we have one of the following: \begin{enumerate} \item $\varphi(x) \vdash t(x) =x$. \item $\varphi(x) \vdash t(x) = (f_{\alpha \beta} \circ p_{\beta})(x)$ for some $\alpha \leq \beta$ from $w$. \end{enumerate} \end{lem} \begin{proof} The proof is by induction on terms. Clearly the conclusion holds for the term $t(x) = x$. Now suppose we have established the conclusion for the term $t(x)$. 
We must prove that it also holds for the terms $p_{\gamma}(t(x))$ and $f_{\delta \gamma}(t(x))$ for $\delta \leq \gamma$ from $w$. If $\varphi(x) \vdash t(x) = x$, then $\varphi(x) \vdash p_{\gamma}(t(x)) = (f_{\gamma \gamma} \circ p_{\gamma})(x)$, which falls under case (2), and $\varphi(x) \vdash f_{\delta \gamma}(t(x)) = x$, since $\varphi(x) \vdash t(x) \in O$ which is under case (1). Now suppose $\varphi(x) \vdash t(x) = (f_{\alpha \beta} \circ p_{\beta})(x)$. Since we already handled terms falling under case (1), we may, by completeness of $\varphi$, assume $\varphi(x) \vdash x \in \text{dom}(p_{\beta})$ and hence $\varphi(x) \vdash t(x) \in P_{\alpha}$. It follows that $\varphi(x) \vdash p_{\gamma}(t(x)) = t(x)$ and $\varphi(x) \vdash f_{\delta\gamma}(t(x)) = t(x)$ when $\gamma \neq \alpha$, which remain under case (2). Finally, we have $\varphi(x) \vdash f_{\delta \alpha}(t(x)) = (f_{\delta\alpha} \circ f_{\alpha \beta} \circ p_{\beta})(x) = (f_{\delta \beta} \circ p_{\beta})(x)$, which also remains under case (2), completing the induction. \end{proof}
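The two lemmas above are, in effect, a normal form result for terms in one variable. The following Python sketch (ours, purely illustrative) mirrors the induction in the proof of Lemma \ref{nice functions on O}: a term value is either the variable $x$ itself or a pair recording $(f_{\alpha\beta} \circ p_{\beta})(x)$, and applying $p_{\gamma}$ or $f_{\delta\gamma}$ keeps it in this form, under the assumptions recorded in the proof, namely that $x \in O$ and that $x \in \mathrm{dom}(p_{\beta})$ whenever $p_{\beta}$ has been applied.
\begin{verbatim}
# Illustrative only: term reduction under the assumptions of the lemma.
# A reduced term is "x" or ("fp", alpha, beta), standing for
# (f_{alpha beta} o p_beta)(x).

def apply_p(gamma, t):
    if t == "x":
        return ("fp", gamma, gamma)  # p_gamma(x) = (f_{gamma gamma} o p_gamma)(x)
    return t                         # a value in some P_alpha is fixed by p_gamma

def apply_f(delta, gamma, t):        # apply f_{delta gamma}, where delta <= gamma
    if t == "x":
        return "x"                   # x is in O, so f_{delta gamma} fixes it
    alpha, beta = t[1], t[2]
    if gamma != alpha:
        return t                     # the value lies in P_alpha, not in P_gamma
    return ("fp", delta, beta)       # f_{delta alpha} o f_{alpha beta} = f_{delta beta}

# Example: f_{01}(p_2(f_{13}(p_3(x)))) reduces to (f_{03} o p_3)(x).
assert apply_f(0, 1, apply_p(2, apply_f(1, 3, apply_p(3, "x")))) == ("fp", 0, 3)
\end{verbatim}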
\section{Analysis of the invariants} In this section, we analyze the possible values of the cardinal invariants under consideration in $T^{*}_{\kappa,f}$ for a coloring $f:[\kappa]^{2} \to 2$. In the first subsection, we show that any $\mathrm{inp}$- and $\mathrm{sct}$-pattern of height $\kappa$ in $T^{*}_{\kappa, f}$ gives rise to one of a particularly uniform and controlled form, which we call \emph{rectified}. In the second subsection, we show $\kappa_{\mathrm{cdt}}(T^{*}_{\kappa,f}) = \kappa^{+}$, independently of the choice of $f$. Then, making heavy use of rectification, we show in the next two subsections that if $\kappa_{\text{sct}}(T^{*}_{\kappa,f})$ or $\kappa_{\text{inp}}(T^{*}_{\kappa,f})$ is equal to $\kappa^{+}$, then this has combinatorial consequences for the coloring $f$. More precisely, we show in the third subsection that if there is an inp-pattern of height \(\kappa\), we can conclude that \(f\) has a homogeneous set of size \(\kappa\). In the case that there is an sct-pattern of height \(\kappa\), we cannot quite get a homogeneous set, but we get something nearly as good: we prove that in this case there is precisely the kind of homogeneity which a strong coloring witnessing $\text{Pr}_{1}(\kappa, \kappa, 2, \aleph_{0})$ explicitly prohibits. The theory associated to such a coloring, then, gives the desired counterexample. For the entirety of this section, we will fix a regular uncountable cardinal $\kappa$, a coloring $f:[\kappa]^{2} \to 2$, and a monster model \(\mathbb{M} \models T^{*}_{\kappa,f}\). \subsection{Rectification} Recall that, given a set $X$, a family of subsets $\mathcal{B} \subseteq \mathcal{P}(X)$ is called a \emph{$\Delta$-system} (of subsets of $X$) if there is some $r \subseteq X$ such that for all distinct $x,y \in \mathcal{B}$, $x \cap y = r$. Given a $\Delta$-system, the common intersection of any two distinct sets is called the \emph{root} of the $\Delta$-system. The following fact gives a condition under which $\Delta$-systems may be shown to exist: \begin{fact} \cite[Lemma III.2.6]{kunen2014set} \label{delta-system lemma} Suppose that $\lambda$ is a regular uncountable cardinal and $\mathcal{A}$ is a family of finite subsets of $\lambda$ with $|\mathcal{A}| = \lambda$. Then there is $\mathcal{B} \subseteq \mathcal{A}$ with $|\mathcal{B}| = \lambda$ and which forms a $\Delta$-system. \end{fact} We note that the definitions below are specific to $T^{*}_{\kappa,f}$. Recall that, given a subset \(w \subseteq \kappa\), we define \(L_{w} = \langle O, P_{\alpha}, f_{\alpha \beta},p_{\alpha} : \alpha \leq \beta, \alpha,\beta \in w\rangle\). \begin{defn} Given $X \in \{\mathrm{inp},\mathrm{sct}\}$, we define a \emph{rectified $X$-pattern} as follows: \begin{enumerate} \item A \emph{rectified $\mathrm{sct}$-pattern of height $\kappa$} is a triple $(\overline{\varphi},(a_{\eta})_{\eta \in \omega^{<\kappa}},\overline{w})$ satisfying the following: \begin{enumerate} \item $(a_{\eta})_{\eta \in \omega^{<\kappa}}$ is an $s$-indiscernible tree of parameters. \item $\overline{\varphi}$ is a sequence of formulas $(\varphi_{\alpha}(x;y_{\alpha}) : \alpha < \kappa)$ which, together with the parameters $(a_{\eta})_{\eta \in \omega^{<\kappa}}$ forms an $\mathrm{sct}$-pattern of height $\kappa$.
\item $\overline{w} = (w_{\alpha})_{\alpha < \kappa}$ is a $\Delta$-system of finite subsets of $\kappa$ with root $r$ such that every $w_{\alpha}$ has the same cardinality, $\max r < \min(w_{\alpha} \setminus r)$ for all $\alpha < \kappa$, and $\max(w_{\alpha} \setminus r) < \min(w_{\alpha'} \setminus r)$ for all $\alpha < \alpha' < \kappa$. \item For all $\alpha < \kappa$, the formula $\varphi_{\alpha}(x;y_{\alpha})$ is in the language $L_{w_{\alpha}}$ and isolates a complete $L_{w_{\alpha}}$-type over $\emptyset$ in the variables $xy_{\alpha}$. Additionally, for all $\alpha < \kappa$ and $\eta \in \omega^{\alpha}$, the tuple $a_{\eta}$ enumerates an $L_{w_{\alpha}}$-substructure of $\mathbb{M}$. \end{enumerate} \item We define a \emph{rectified $\mathrm{inp}$-pattern of height $\kappa$} to be a quadruple $(\overline{\varphi},\overline{k},(a_{\alpha,i})_{\alpha < \kappa, i < \omega},\overline{w})$ satisfying the following: \begin{enumerate} \item $(a_{\alpha ,i})_{\alpha < \kappa, i < \omega}$ is a mutually indiscernible array of parameters. \item $\overline{\varphi}$ is a sequence of formulas $(\varphi_{\alpha}(x;y_{\alpha}) : \alpha < \kappa)$ and $\overline{k} = (k_{\alpha})_{\alpha < \kappa}$ is a sequence of natural numbers which, together with the parameters $(a_{\alpha,i})_{\alpha < \kappa, i < \omega}$ form an $\mathrm{inp}$-pattern of height $\kappa$. \item $\overline{w} = (w_{\alpha})_{\alpha < \kappa}$ is a $\Delta$-system of finite subsets of $\kappa$ with root $r$ such that every $w_{\alpha}$ has the same cardinality, $\max r < \min(w_{\alpha} \setminus r)$ for all $\alpha < \kappa$, and $\max(w_{\alpha} \setminus r) < \min(w_{\alpha'} \setminus r)$ for all $\alpha < \alpha' < \kappa$. \item For all $\alpha < \kappa$, the formula $\varphi_{\alpha}(x;y_{\alpha})$ is in the language $L_{w_{\alpha}}$ and isolates a complete $L_{w_{\alpha}}$-type over $\emptyset$ in the variables $xy_{\alpha}$. Additionally, for all $\alpha < \kappa$ and $i < \omega$, the tuple $a_{\alpha,i}$ enumerates an $L_{w_{\alpha}}$-substructure of $\mathbb{M}$. \end{enumerate} \item We will refer to $\overline{w}$ in the above definitions as the \emph{associated $\Delta$-system} of the rectified $X$-pattern. We will consistently denote the root $r = \{\zeta_{i} : i < n\}$ and the sets $v_{\alpha} = w_{\alpha} \setminus r = \{\beta_{\alpha, i} : i < m \}$, where the enumerations are increasing. \end{enumerate} \end{defn} \begin{lem}\label{tending} Given \(X \in \{\text{inp},\text{sct}\}\), if there is an \(X\)-pattern of height \(\kappa\) in \(T\), there is a rectified one. \end{lem} \begin{proof} Given an \(X\)-pattern with the sequence of formulas \(\overline{\varphi} = (\varphi_{\alpha}(x;y_{\alpha}): \alpha < \kappa)\) one can choose some finite \(w_{\alpha} \subset \kappa\) such that \(\varphi_{\alpha}(x;y_{\alpha})\) is in the language \(L_{w_{\alpha}}\). Apply the \(\Delta\)-system lemma, Fact \ref{delta-system lemma}, to the collection \((w_{\alpha} : \alpha < \kappa)\) to find some \(I \subseteq \kappa\) with $|I| = \kappa$ such that \(\overline{w} = (w_{\alpha} : \alpha \in I)\) forms a \(\Delta\)-system with root $r$. By the pigeonhole principle, using that $\kappa$ is uncountable, and the regularity of $\kappa$, we may assume \(|w_{\alpha}| = m\) for all \(\alpha< \kappa\), \(\max r < \min (w_{\alpha} \setminus r)\) for all $\alpha < \kappa$, and if \(\alpha < \alpha'\), \(\max (w_{\alpha} \setminus r) < \min (w_{\alpha'} \setminus r)\). By renaming, we may assume \(I = \kappa\). 
If \(X = \text{inp}\), we may take the parameters witnessing that \((\overline{\varphi},\overline{k},(a_{\alpha,i})_{\alpha <\kappa,i < \omega})\) is an inp-pattern to be a mutually indiscernible array by Lemma \ref{witness}(1). Moreover, mutual indiscernibility is clearly preserved after replacing each \(a_{\alpha,i}\) by a tuple enumerating the \(L_{w_{\alpha}}\)-substructure generated by $a_{\alpha,i}$ and, by the \(\aleph_{0}\)-categoricity of \(T^{*}_{w_{\alpha}}\), this structure is finite. Let \(b \models \{\varphi_{\alpha}(x;a_{\alpha,0}) : \alpha < \kappa\}\). Using again the \(\aleph_{0}\)-categoricity of \(T^{*}_{w_{\alpha}}\), replace \(\varphi_{\alpha}(x;y_{\alpha})\) by an \(L_{w_{\alpha}}\)-formula \(\varphi'_{\alpha}(x;y_{\alpha})\) such that \(\varphi_{\alpha}'(x;y_{\alpha})\), viewed as an unpartitioned formula in the variables $xy_{\alpha}$, isolates the type \(\text{tp}_{L_{w_{\alpha}}}(ba_{\alpha,0}/\emptyset)\). By mutual indiscernibility, if \(g: \kappa \to \omega\) is a function, there is \(\sigma \in \text{Aut}(\mathbb{M})\) such that \(\sigma(a_{\alpha,0}) = a_{\alpha, g(\alpha)}\) for all \(\alpha < \kappa\). Then \(\sigma(b) \models \{\varphi'_{\alpha}(x;a_{\alpha,g(\alpha)}) : \alpha < \kappa\}\) so paths are consistent. The row-wise inconsistency is clear, so if we set \(\overline{\varphi}' =(\varphi'_{\alpha}(x;y_{\alpha}) : \alpha < \kappa)\), we see $(\overline{\varphi}',\overline{k},(a_{\alpha,i})_{\alpha < \kappa, i < \omega},\overline{w})$ forms a rectified inp-pattern of height $\kappa$. If \(X = \text{sct}\), we argue similarly. We may take the witnessing parameters $(a_{\eta})_{\eta \in \omega^{<\kappa}}$ to be \(s\)-indiscernible, by Lemma \ref{witness}(2). Likewise, \(s\)-indiscernibility is preserved by replacing each \(a_{\eta}\) by its closure under the functions of \(L_{w_{l(\eta)}}\) and this closure is finite. Let \(b \models \{\varphi_{\alpha}(x;a_{0^{\alpha}}) : \alpha < \kappa\}\) and replace $\overline{\varphi}$ by $\overline{\varphi}'$ where \(\varphi'_{\alpha}(x;y_{\alpha})\) is an \(L_{w_{\alpha}}\)-formula which, viewed as an unpartitioned formula in the variables $xy_{\alpha}$, isolates \(\mathrm{tp}_{L_{w_{\alpha}}}(ba_{0^{\alpha}}/\emptyset)\). For all \(\eta \in \omega^{\kappa}\), there is a \(\sigma \in \text{Aut}(\mathbb{M})\) such that \(\sigma(a_{0^{\alpha}}) = a_{\eta | \alpha}\) for all \(\alpha < \kappa\). Then \(\sigma(b) \models \{\varphi'_{\alpha}(x;a_{\eta | \alpha}) : \alpha < \kappa\}\) so paths are consistent. Incomparable nodes remain inconsistent, so \((\overline{\varphi}',(a_{\eta})_{\eta \in \omega^{<\kappa}},\overline{w})\) forms a rectified sct-pattern. \end{proof} \begin{rem} \label{same number of variables} As the replacement of $(\varphi_{\alpha}(x;y_{\alpha}) : \alpha < \kappa)$ with a sequence of complete formulas $(\varphi'_{\alpha}(x;y_{\alpha}) : \alpha < \kappa)$ does not change the free variables $x$, if $T$ has an inp- or sct-pattern in $k$ free variables of height $\kappa$, Lemma \ref{tending} produces a rectified inp- or sct-pattern of height $\kappa$ in the same number of free variables. \end{rem}
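Since the $\Delta$-system lemma does the initial work in the proof above, it may be worth recalling its finite shadow. The brute-force search below is ours and purely illustrative; the infinite statement in Fact \ref{delta-system lemma} is what is actually used.
\begin{verbatim}
# Illustrative only: brute-force search for a Delta-system (sunflower) among
# finitely many finite sets.
from itertools import combinations

def find_delta_system(sets, size):
    """Return `size` many of the given sets whose pairwise intersections all
    coincide (the common value is the root), or None if there are none."""
    for family in combinations(sets, size):
        roots = {frozenset(a & b) for a, b in combinations(family, 2)}
        if len(roots) <= 1:
            return family
    return None

print(find_delta_system([{0, 1}, {0, 2}, {0, 3}, {1, 2}], 3))
# prints ({0, 1}, {0, 2}, {0, 3}), a Delta-system with root {0}
\end{verbatim}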
\subsection{Computing \(\kappa_{\mathrm{cdt}}\)} \begin{lem}\label{stable} The theory \(T_{\kappa,f}^{*}\) is stable. \end{lem} \begin{proof} Since stability is local, it suffices to show \(T^{*}_{w}\) is stable for all finite \(w \subset \kappa\). Let \(M \models T^{*}_{w}\) be a countable model. We will count 1-types in \(T^{*}_{w}\) over \(M\) explicitly using quantifier elimination. Pick some \(p(x) \in S^{1}_{L_{w}}(M)\). If \(x = m\) is a formula in \(p\) for some \(m \in M\), then this formula obviously isolates \(p\), so there are countably many such possibilities. So assume \(x \neq m\) is in \(p\) for all \(m \in M\). Now we break into cases based upon which predicate, if any, $p$ asserts $x$ to lie in. If \(x \not\in O \wedge \bigwedge_{\alpha \in w} x \not\in P_{\alpha}\) is a formula in \(p\), then \(p\) is completely determined, so there is a unique type in this case. If \(x \in O\) is a formula in \(p\), then, by quantifier-elimination and Lemma \ref{nice functions on O}, the type is determined after deciding the truth value of $p_{\alpha}(x) = x$ and \((f_{\beta \alpha} \circ p_{\alpha})(x) = m\) for all \(\beta \leq \alpha\) in \(w\) and \(m \in P_{\beta}(M)\). As \((f_{\beta \alpha} \circ p_{\alpha})(x)\) can be equal to at most 1 element of \(P_{\beta}(M)\) and $w$ is finite, there are countably many possibilities for this case. Finally, if \(x \in P_{\beta}\) is a formula in \(p\), then, by quantifier-elimination and Lemma \ref{nice functions on P}, the type is determined after deciding the truth value of \(f_{\gamma \beta}(x) = m\) for all \(\gamma < \beta\) from \(w\) and \(m \in P_{\gamma}(M)\). Here again there are only countably many possibilities, by the finiteness of $w$. Since this covers all possible types, we have shown that \(S^{1}_{L_{w}}(M)\) is countable, so \(T^{*}_{w}\) is stable (in fact, as $M$ is an arbitrary countable model, $\omega$-stable), which implies that $T^{*}_{\kappa,f}$ is stable. \end{proof} \begin{prop} \label{cdtcomputation} \(\kappa_{\text{cdt}}(T^{*}_{\kappa,f}) = \kappa^{+}\). \end{prop} \begin{proof} First, we will show $\kappa_{\text{cdt}}(T^{*}_{\kappa,f}) \geq \kappa^{+}$. We will construct a cdt-pattern of height \(\kappa\). By recursion on \(\alpha < \kappa\), we will construct a tree of elements \((a_{\eta})_{\eta \in \omega^{<\kappa}}\) so that \(l(\eta) = \beta\) implies \(a_{\eta} \in P_{\beta}\) and if \(\eta \unlhd \nu\) with \(l(\eta) = \beta\) and \(l(\nu) = \gamma\), then \(f_{\beta \gamma}(a_{\nu}) = a_{\eta}\). For \(\alpha = 0\), choose an arbitrary \(a \in P_{0}\) and let \(a_{\emptyset} = a\). Now suppose given \((a_{\eta})_{\eta \in \omega^{\leq \alpha}}\). For each \(\eta \in \omega^{\alpha}\), choose a set \(\{b_{i} : i < \omega\} \subseteq f^{-1}_{\alpha \alpha+1}(a_{\eta}) \cap P_{\alpha+1}\) with the $b_{i}$ pairwise distinct. Define \(a_{\eta \frown \langle i \rangle} = b_{i}\). This gives us \((a_{\eta})_{\eta \in \omega^{\leq \alpha+1}}\) with the desired properties. Now suppose \(\delta\) is a limit and we've defined \((a_{\eta})_{\eta \in \omega^{\leq \alpha}}\) for all \(\alpha < \delta\). Given any \(\eta \in \omega^{\delta}\), we may, by saturation, find an element \(b \in \bigcap_{\alpha < \delta} f^{-1}_{\alpha \delta}(a_{\eta | \alpha})\). Then we can set \(a_{\eta} = b\). This gives \((a_{\eta})_{\eta \in \omega^{\leq \delta}}\) and completes the construction. Given \(\alpha < \kappa\), let \(\varphi_{\alpha}(x;y)\) be the formula \(p_{\alpha}(x) = y\).
For any \(\eta \in \omega^{\kappa}\), \(\{\varphi_{\alpha}(x;a_{\eta | \alpha}) : \alpha < \kappa\}\) is consistent and, for all \(\nu \in \omega^{<\kappa}\), \(\{\varphi_{l(\nu)+1}(x;a_{\nu \frown \langle i \rangle}) : i < \omega \}\) is 2-inconsistent. We have thus exhibited a cdt-pattern of height \(\kappa\) so \(\kappa_{\text{cdt}}(T^{*}_{\kappa,f}) \geq \kappa^{+}\). By Lemma \ref{stable} and Fact \ref{easy inequalities}, we have $\kappa_{\mathrm{cdt}}(T^{*}_{\kappa,f}) \leq \kappa^{+}$, so we have the desired equality. \end{proof}
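The combinatorial skeleton of the canonical cdt-pattern just constructed is simple to spell out on a finite tree; the sketch below (ours, purely illustrative) records only the bookkeeping of levels and ancestor maps, not the consistency statements, which of course live in the monster model.
\begin{verbatim}
# Illustrative only: the finite bookkeeping behind the canonical cdt-pattern.
from itertools import product

def levels(branching, depth):
    """The nodes of the tree branching^{<= depth}, grouped by level."""
    return [list(product(range(branching), repeat=k)) for k in range(depth + 1)]

def ancestor(eta, alpha):
    """The analogue of f_{alpha, len(eta)}: the ancestor of eta on level alpha."""
    return eta[:alpha]

eta = (0, 1, 1)
assert eta in levels(2, 3)[3]                          # eta lies on level 3
# Coherence of ancestor maps, mirroring axiom (2):
assert ancestor(ancestor(eta, 2), 1) == ancestor(eta, 1)
# Siblings carry distinct parameters, so the instances p_{l+1}(x) = a_eta for
# the children of a fixed node are pairwise inconsistent, while along a branch
# the parameters cohere under `ancestor`:
assert ancestor((0, 1, 0), 2) == ancestor((0, 1, 1), 2) == (0, 1)
\end{verbatim}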
\subsection{Case 1: \(\kappa_{\text{inp}} = \kappa^{+}\)} In this subsection, we first show how to produce a homogeneous set of size $\kappa$ for $f$ from an $\mathrm{inp}$-pattern of a very particular form. Then, using rectification, we observe that every $\mathrm{inp}$-pattern of height $\kappa$ gives rise to one of this particular form. Together, these will allow us to calculate an upper bound on $\kappa_{\mathrm{inp}}(T^{*}_{\kappa,f})$ when the coloring $f$ is chosen to have no homogeneous set of size $\kappa$. \begin{lem}\label{case2} Suppose we are given a collection \((\beta_{\alpha, i})_{\alpha < \kappa, i < 2}\) of ordinals smaller than $\kappa$ such that if \(\alpha < \alpha' < \kappa\), then $\beta_{\alpha, 0} \leq \beta_{\alpha,1}$, $\beta_{\alpha',0} \leq \beta_{\alpha',1}$, $\beta_{\alpha,0} \leq \beta_{\alpha',0}$ and $\beta_{\alpha,1} < \beta_{\alpha',1}$. Suppose that there is a mutually indiscernible array \((c_{\alpha, k})_{\alpha < \kappa, k < \omega}\) such that, with \(\varphi_{\alpha}(x;y_{\alpha})\) defined by \((f_{\beta_{\alpha,0}\beta_{\alpha,1}} \circ p_{\beta_{\alpha,1}})(x) = y_{\alpha}\), \((\varphi_{\alpha}(x;y_{\alpha}) : \alpha < \kappa)\), \((c_{\alpha, k})_{\alpha < \kappa, k < \omega}\) forms an inp-pattern of height \(\kappa\). Then for all pairs \(\alpha < \alpha'\), \(f(\{\beta_{\alpha,1}, \beta_{\alpha',1}\}) = 1\). \end{lem} \begin{proof} If \(\alpha < \alpha'\) and \(f(\{\beta_{\alpha,1}, \beta_{\alpha',1}\}) = 0\), then $p_{\beta_{\alpha,1}}(x) = (f_{\beta_{\alpha,1} \beta_{\alpha',1}} \circ p_{\beta_{\alpha',1}})(x)$ for any \(x\) with \(p_{\beta_{\alpha,1}}(x) \neq x\) and \(p_{\beta_{\alpha',1}}(x) \neq x\), and hence \begin{eqnarray*} (f_{\beta_{\alpha,0}\beta_{\alpha,1}} \circ p_{\beta_{\alpha,1}})(x) &=& (f_{\beta_{\alpha,0}\beta_{\alpha,1}} \circ f_{\beta_{\alpha,1}\beta_{\alpha',1}} \circ p_{\beta_{\alpha',1}})(x) \\ &=& (f_{\beta_{\alpha,0}\beta_{\alpha',1}} \circ p_{\beta_{\alpha',1}})(x) \\ &=& (f_{\beta_{\alpha,0}, \beta_{\alpha',0}} \circ f_{\beta_{\alpha',0}\beta_{\alpha',1}} \circ p_{\beta_{\alpha',1}})(x). \end{eqnarray*} Consequently, \[ \{(f_{\beta_{\alpha,0}\beta_{\alpha,1}} \circ p_{\beta_{\alpha,1}})(x) = c_{\alpha, k'}, (f_{\beta_{\alpha',0}\beta_{\alpha',1}}\circ p_{\beta_{\alpha',1}})(x) = c_{\alpha',k}\} \] is consistent only if \(c_{\alpha, k'} = f_{\beta_{\alpha,0}\beta_{\alpha',0}}(c_{\alpha',k})\). Because for all $\xi < \kappa$, $(c_{\xi,i})_{i < \omega}$ is indiscernible and, by the definition of an $\mathrm{inp}$-pattern, $\{\varphi_{\xi}(x;c_{\xi,i}) : i < \omega\}$ is inconsistent, we know that $c_{\xi,l} \neq c_{\xi,l'}$ for $l \neq l'$. Fix any $k<\omega$. We have shown there is a unique $k'$ such that \[ \{(f_{\beta_{\alpha,0}\beta_{\alpha,1}} \circ p_{\beta_{\alpha,1}})(x) = c_{\alpha, k'}, (f_{\beta_{\alpha',0}\beta_{\alpha',1}}\circ p_{\beta_{\alpha',1}})(x) = c_{\alpha',k}\} \] is consistent. By the definition of an $\mathrm{inp}$-pattern, given any function $g: \kappa \to \omega$, $$ \{\varphi_{\alpha}(x;c_{\alpha,g(\alpha)}) : \alpha < \kappa\} $$ is consistent and so, in particular, the set \[ \{(f_{\beta_{\alpha,0}\beta_{\alpha,1}} \circ p_{\beta_{\alpha,1}})(x) = c_{\alpha, g(\alpha)}, (f_{\beta_{\alpha',0}\beta_{\alpha',1}}\circ p_{\beta_{\alpha',1}})(x) = c_{\alpha',g(\alpha')}\} \] is consistent. Choosing $g(\alpha') = k$ and $g(\alpha) \neq k'$, we obtain a contradiction. \end{proof} For the remainder of this subsection, we will assume there is an inp-pattern of height $\kappa$ modulo $T$. 
By Lemma \ref{tending}, it follows that there is a \emph{rectified} inp-pattern of height $\kappa$ and, by \cite[Corollary 2.9]{ChernikovNTP2} and Remark \ref{same number of variables}, we may assume that this is witnessed by a rectified inp-pattern in a single free variable. Hence, for the rest of this subsection, we will fix a rectified inp-pattern \((\overline{\varphi},\overline{k},(a_{\alpha,i})_{\alpha < \kappa, i < \omega},\overline{w})$ and we will assume that each $\varphi_{\alpha}(x;y_{\alpha})$ enumerated in $\overline{\varphi}$ has \(l(x) = 1\). Recall the associated \(\Delta\)-system is denoted \(\overline{w} = (w_{\alpha}: \alpha < \kappa)\) with root \(r = \{\zeta_{i} : i < n\}\) and \(w_{\alpha} \setminus r = v_{\alpha} = \{\beta_{\alpha, j} : j <m\}\), where the enumerations are increasing. \begin{lem} \label{ino} For all \(\alpha < \kappa\), \(\varphi_{\alpha}(x;y_{\alpha}) \vdash x \in O\). \end{lem} \begin{proof} First, note that there must be a predicate \(Q \in \{O, P_{\zeta_{i}} : i < n\}\) such that $\varphi_{\alpha}(x;y_{\alpha}) \vdash x \in Q$ for all $\alpha < \kappa$. If not, using that the $w_{\alpha}$'s form a $\Delta$-system, that every formula $\varphi_{\alpha}(x;y_{\alpha})$ is complete, and that $\varphi_{\alpha}(x;a_{\alpha,i})$ is consistent with $\varphi_{\beta}(x;a_{\beta,j})$ whenever $\alpha \neq \beta$, there would be some $\alpha < \kappa$ such that $\varphi_{\alpha}(x;y_{\alpha})$ implies that $x$ is not contained in any predicate of $L_{w_{\alpha}}$. By Lemma \ref{no equalities}(1), we know each $\varphi_{\alpha}(x;a_{\alpha,i})$ is non-algebraic, so, in this case, it is easy to check that $\{\varphi_{\alpha}(x;a_{\alpha, i}) : i < \omega\}$ is consistent, contradicting the definition of inp-pattern. So we must show that \(\varphi_{\alpha}(x;y_{\alpha}) \vdash x \in P_{\zeta_{i}}\) for some \(i < n\) is impossible. Suppose not and fix $i_{*} < n$ so that $\varphi_{\alpha}(x;y_{\alpha}) \vdash x \in P_{\zeta_{i_{*}}}$ for some $\alpha < \kappa$. Note that it follows that $\varphi_{\alpha}(x;y_{\alpha}) \vdash x \in P_{\zeta_{i_{*}}}$ for \emph{all} $\alpha < \kappa$ as each $\varphi_{\alpha}$ is a complete $L_{w_{\alpha}}$-formula, the predicate $P_{\zeta_{i_{*}}}$ is in every $L_{w_{\alpha}}$, and columns in the $\mathrm{inp}$-pattern are consistent. Write each tuple in the array as \(a_{\alpha,i} = (b_{\alpha, i}, c_{\alpha, i}, d_{\alpha, i}, e_{\alpha, i})\), where the elements of \(b_{\alpha, i}\) are in \(O\), the elements of \(c_{\alpha, i}\) are in \(\bigcup_{k < n} P_{\zeta_{k}}\) (the predicates indexed by the root), the elements of \(d_{\alpha,i}\) are in \(\bigcup_{j < m} P_{\beta_{\alpha,j}}\), and the elements of \(e_{\alpha,i}\) are not in any predicate of \(L_{w_{\alpha}}\). By completeness, quantifier-elimination, as well as Lemmas \ref{no equalities}(1) and \ref{nice functions on P}, each \(\varphi_{\alpha}(x;a_{\alpha,i})\) is equivalent to the conjunction of the following: \begin{enumerate} \item \(x \in P_{\zeta_{i_{*}}}\) \item \(x \neq (a_{\alpha,i})_{l}\) for all \(l < l(a_{\alpha,i})\) \item \((f_{\gamma \zeta_{i_{*}}}(x) = (c_{\alpha,i})_{l})^{t_{\gamma,l}}\) for all \(l < l(c_{\alpha,i})\) and \(\gamma \in w_{\alpha}\) less than \(\zeta_{i_{*}}\) and some \(t_{\gamma,l} \in \{0,1\}\).
\end{enumerate} For each \(k < i_{*}\), let \(\gamma_{k}\) be the least ordinal \(<\kappa\) such that \(\varphi_{\gamma_{k}}(x;a_{\gamma_{k},0}) \vdash f_{\zeta_{k}\zeta_{i_{*}}}(x) = c\) for some \(c \in c_{\gamma_{k},0}\), and let \(\gamma_{k} = 0\) if there is no such ordinal. Let \(\gamma = \max\{\gamma_{k} : k < i_{*}\}\). We claim that \(\{\varphi_{\gamma+1}(x;a_{\gamma+1,j}) : j < \omega\}\) is consistent. Note that any equality of the form $f_{\zeta_{k}\zeta_{i_{*}}}(x) = c$ implied by $\varphi_{\gamma+1}(x;a_{\gamma+1,j})$ is implied by $\varphi_{\gamma_{k}}(x;a_{\gamma_{k},0})$, by indiscernibility and the fact that, for all \(j < \omega\), \[ \{\varphi_{\gamma_{k}}(x;a_{\gamma_{k},0}), \varphi_{\gamma+1}(x;a_{\gamma+1,j})\} \] is consistent. Additionally, any inequality of the form \(f_{\zeta_{k}\zeta_{i_{*}}}(x) \neq c\) implied by \(\varphi_{\gamma+1}(x;a_{\gamma+1,j})\) is compatible with \(\{\varphi_{\alpha}(x;a_{\alpha,0}) : \alpha \leq \gamma\}\). Choosing a realization \(b \models \{\varphi_{\alpha}(x;a_{\alpha,0}) : \alpha \leq \gamma\}\) satisfying every inequality of the form \(f_{\zeta_{k}\zeta_{i_*}}(x) \neq c\) implied by the \(\varphi_{\gamma+1}(x;a_{\gamma+1,j})\) yields a realization of \(\{\varphi_{\gamma+1}(x;a_{\gamma+1,j}) : j < \omega\}\), by the description of $\varphi_{\gamma+1}(x;a_{\gamma+1,j})$ as a conjunction given above. This contradicts the definition of inp-pattern. \end{proof}
\begin{prop}\label{inpcomputation} If $\kappa_{\mathrm{inp}}(T^{*}_{\kappa,f}) = \kappa^{+}$, then there is a subset \(H \subseteq \kappa\) with \(|H| = \kappa\) such that \(f\) is constant on \([H]^{2}\). \end{prop} \begin{proof} Recall that the hypothesis $\kappa_{\mathrm{inp}}(T^{*}_{\kappa,f}) = \kappa^{+}$ allowed us to fix a rectified inp-pattern \((\overline{\varphi},\overline{k},(a_{\alpha,i})_{\alpha < \kappa, i < \omega},\overline{w})\) with the property that each $\varphi_{\alpha}(x;y_{\alpha})$ enumerated in $\overline{\varphi}$ has \(l(x) = 1\). By completeness and Lemma \ref{ino}, we know that, for each \(\alpha < \kappa\), \(\varphi_{\alpha}(x;y_{\alpha})\vdash x \in O\). Then by quantifier-elimination, completeness, and Lemmas \ref{no equalities}(2) and \ref{nice functions on O}, for each $\alpha < \kappa$, $\varphi_{\alpha}(x;a_{\alpha,0})$ is equivalent to the conjunction of the following: \begin{enumerate} \item \(x \in O\) \item \(x \neq (a_{\alpha,0})_{l}\) for all \(l < l(a_{\alpha,0})\) \item \((p_{\gamma}(x) = x)^{t^{0}_{\gamma}}\) for $\gamma \in w_{\alpha}$ and some $t^{0}_{\gamma} \in \{0,1\}$. \item The values of the \(p_{\gamma}\) and how they descend in the tree: \begin{enumerate} \item $((f_{\delta \gamma} \circ p_{\gamma})(x) = (a_{\alpha,0})_{l})^{t^{1}_{l,\delta,\gamma}}$ for $l < l(a_{\alpha,0})$, $\delta \leq \gamma$ in $w_{\alpha}$, and some $t^{1}_{l,\delta,\gamma} \in \{0,1\}$. \item \(((f_{\delta \gamma} \circ p_{\gamma})(x) = (f_{\delta \gamma'} \circ p_{\gamma'})(x))^{t^{2}_{\delta,\gamma,\gamma'}}\) for \(\delta, \gamma, \gamma' \in w_{\alpha}\) with \(\delta \leq \gamma < \gamma'\), for some $t^{2}_{\delta,\gamma,\gamma'} \in \{0,1\}$. \end{enumerate} \end{enumerate} \textbf{Claim:} Given $\alpha < \kappa$, there are $\epsilon_{\alpha} \leq \epsilon'_{\alpha} \in w_{\alpha}$ and pairwise distinct $c_{\alpha,k} \in a_{\alpha,k}$ such that, for all $k < \omega$, $\varphi_{\alpha}(x;a_{\alpha,k}) \vdash (f_{\epsilon_{\alpha} \epsilon'_{\alpha}} \circ p_{\epsilon'_{\alpha}})(x) = c_{\alpha,k}$. \emph{Proof of claim:} Suppose not. Then, by the description of $\varphi_{\alpha}(x;a_{\alpha,k})$ given above, the following set of formulas \[ \{\varphi_{\alpha}(x;a_{\alpha,k}) : k < \omega\} \] is equivalent to a finite number of equations common to each instance \(\varphi_{\alpha}(x;a_{\alpha,k})\) and an infinite collection of inequations. It is then easy to see that \(\{\varphi_{\alpha}(x;a_{\alpha,k}) : k < \omega\}\) is consistent, contradicting the definition of an inp-pattern. This proves the claim. Note that, by the pigeonhole principle, we may assume that either (i) $\epsilon_{\alpha}, \epsilon'_{\alpha} \in r$ for all $\alpha < \kappa$, (ii) $\epsilon_{\alpha} \in r$, $\epsilon'_{\alpha} \in v_{\alpha}$ for all $\alpha < \kappa$, or (iii) $\epsilon_{\alpha},\epsilon'_{\alpha} \in v_{\alpha}$ for all $\alpha < \kappa$. Case (i) is impossible: as the root \(r = \{\zeta_{i} : i < n\}\) is finite and the all 0's path is consistent, we can find an ordinal \(\gamma < \kappa\) such that for all \(\alpha < \kappa\), if there is a \(c \in a_{\alpha,0}\) such that \(\varphi_{\alpha}(x;a_{\alpha,0}) \vdash (f_{\zeta_{i}\zeta_{i'}} \circ p_{\zeta_{i'}})(x) = c\) for some \(i \leq i' < n\), then there is some \(\alpha' < \gamma\) such that \(\varphi_{\alpha'}(x;a_{\alpha',0}) \vdash (f_{\zeta_{i}\zeta_{i'}} \circ p_{\zeta_{i'}})(x) = c\). 
Hence, by indiscernibility, the equality $(f_{\epsilon_{\gamma} \epsilon'_{\gamma}} \circ p_{\epsilon'_{\gamma}})(x) = c_{\gamma,k}$ implied by $\varphi_{\gamma}(x;a_{\gamma,k})$ must also be implied by $\varphi_{\alpha}(x;a_{\alpha,0})$ for some $\alpha < \gamma$. Since $\{\varphi_{\alpha}(x;a_{\alpha,0}), \varphi_{\gamma}(x;a_{\gamma,k})\}$ is consistent for all $k < \omega$, this is impossible because the tuples in $(c_{\gamma,k})_{k < \omega}$ are pairwise distinct. Now we consider cases (ii) and (iii). Again by the pigeonhole principle, we may assume that if we are in case (ii), then $\epsilon_{\alpha}$ does not depend on $\alpha$. Then by rectification, we know that, in either case (ii) or (iii), when $\alpha < \alpha'$, $\epsilon_{\alpha} \leq \epsilon_{\alpha'}$ and $\epsilon'_{\alpha} < \epsilon'_{\alpha'}$. Because, for each $\alpha < \kappa$, the $c_{\alpha,k}$ ($k < \omega$) are pairwise distinct, the set of formulas $$ \{(f_{\epsilon_{\alpha}\epsilon'_{\alpha}} \circ p_{\epsilon'_{\alpha}})(x) = c_{\alpha,k} : k < \omega\} $$ is $2$-inconsistent. Moreover, if $g : \kappa \to \omega$ is a function, the partial type $$ \{ (f_{\epsilon_{\alpha}\epsilon'_{\alpha}} \circ p_{\epsilon'_{\alpha}})(x) = c_{\alpha,g(\alpha)} : \alpha < \kappa\} $$ is implied by $\{\varphi_{\alpha}(x;a_{\alpha,g(\alpha)}) : \alpha < \kappa\}$ and is therefore consistent. It follows that $((f_{\epsilon_{\alpha}\epsilon'_{\alpha}} \circ p_{\epsilon'_{\alpha}})(x) = y_{\alpha})_{\alpha < \kappa}, (c_{\alpha,k})_{\alpha < \kappa, k < \omega}$ is an inp-pattern with $k_{\alpha} = 2$ for all $\alpha < \kappa$. By Lemma \ref{case2}, $f(\{\epsilon'_{\alpha},\epsilon'_{\alpha'}\}) = 1$ for all $\alpha < \alpha'$. Therefore $H = \{\epsilon'_{\alpha} : \alpha < \kappa\}$ is a homogeneous set for $f$ of size $\kappa$, as the $\epsilon'_{\alpha}$ are strictly increasing. \end{proof}
\subsection{Case 2: \(\kappa_{\text{sct}} = \kappa^{+}\)} In this subsection, we show that if $\kappa_{sct}(T^{*}_{\kappa,f}) = \kappa^{+}$ then $f$ satisfies a homogeneity property inconsistent with $f$ being a strong coloring. In particular, we will show that if this homogeneity property fails, then for any putative sct-pattern of height $\kappa$, there are two incomparable elements in $\omega^{<\kappa}$ which index compatible formulas, contradicting the inconsistency condition in the definition of an sct-pattern. This step is accomplished by relating consistency of the relevant formulas to an amalgamation problem in finite structures. The following lemma describes the relevant amalgamation problem: \begin{lem}\label{consistency} Suppose we are given the following: \begin{itemize} \item Finite sets \(w, w' \subset \kappa\) with \(w \cap w' = v\) such that for all \(\alpha \in v\), \(\beta \in w \setminus v\), \(\gamma \in w' \setminus v\), we have \(\alpha < \beta < \gamma\) and \(f(\{\beta, \gamma\}) = 1\). \item Structures \(A \in \mathbb{K}_{w \cup w'}\), \(B = \langle d,A \rangle^{B}_{L_{w}} \in \mathbb{K}_{w}\), \(C = \langle e, A \rangle^{C}_{L_{w'}} \in \mathbb{K}_{w'}\) satisfying the following: \begin{enumerate} \item The tuples $d,e$ are contained in $O \cup \bigcup_{\alpha\in v} P_{\alpha}$. \item The map sending \(d \mapsto e\) induces an isomorphism of $L_{v}$-structures over \(A\) between $B = \langle d,A \rangle^{B}_{L_{v}}$ and $C = \langle e, A \rangle_{L_{v}}^{C}$. \end{enumerate} \end{itemize} Then there is \(D = \langle f,A \rangle^{D}_{L_{w \cup w'}} \in \mathbb{K}_{w \cup w'}\) extending \(A\) such that \(l(f)= l(d) = l(e)\) and \(\langle f, A \rangle^{D}_{L_{w}} \cong B\) over \(A\) and \(\langle f,A \rangle^{D}_{L_{w'}} \cong C\) over \(A\) via the isomorphisms over \(A\) sending \(f \mapsto d\) and \(f \mapsto e\), respectively. \end{lem} \begin{proof} Let \(f\) be a tuple of formal elements with \(l(f) = l(d)\)(\(=l(e)\)) with \(L_{w}\) and \(L_{w'}\) interpreted so that \(\langle f,A \rangle_{L_{w}}\) extends \(A\) and is $L_{w}$-isomorphic over \(A\) to \(B\), so that \(\langle f,A \rangle_{L_{w'}}\) extends \(A\) and is $L_{w'}$-isomorphic over \(A\) to \(C\), and so that $\langle f,A \rangle_{L_{w}}$ and $\langle f,A \rangle_{L_{w'}}$ are disjoint over $A \cup \{f\}$. Let $\gamma$ be the least element of $w' \setminus v$ and define \(D\) to have underlying set \[ \langle f,A \rangle_{L_{w}} \cup \langle f,A \rangle_{L_{w'}} \cup \{*_{\alpha,c} : \alpha \in w\setminus v, c \in P_{\gamma}^{\langle f,A \rangle_{L_{w'}}} \setminus P_{\gamma}^{A} \}. \] We must give \(D\) an \(L_{w \cup w'}\)-structure. The main task is to give elements at the levels of the tree indexed by $\alpha \in w' \setminus v$ ancestors at the levels of $w \setminus v$ and the new formal elements $*_{\alpha,c}$ will play this role. Interpret the predicates on $D$ by setting $O^{D} = O^{\langle f,A \rangle_{L_{w}} } = O^{\langle f,A \rangle_{L_{w'}}}$ and, additionally, $$ P^{D}_{\alpha} = \left\{ \begin{matrix} P_{\alpha}^{\langle f,A \rangle_{L_{w'}}} & \text{ if } \alpha \in w' \setminus v \\ P_{\alpha}^{\langle f,A \rangle_{L_{w}}} \cup \{*_{\alpha,c}: c \in P_{\gamma}^{\langle f,A \rangle_{L_{w'}}} \setminus P_{\gamma}^{A}\} & \text{ if } \alpha \in w \setminus v \\ P_{\alpha}^{\langle f, A \rangle_{L_{w}}} \cup P^{\langle f,A \rangle_{L_{w'}}}_{\alpha} & \text{ if } \alpha \in v. \end{matrix} \right. 
$$ For each of the function symbols $f^{D}_{\alpha \beta}$, we are forced to interpret $f^{D}_{\alpha \beta}$ to be the identity on the complement of $P_{\beta}^{D}$ in $D$, so it suffices to specify the interpretation on $P^{D}_{\beta}$. Given \(\alpha \in w\setminus v\) and $c \in P_{\gamma}^{\langle f,A \rangle_{L_{w'}}} \setminus P_{\gamma}^{A}$, interpret \(f^{D}_{\alpha \gamma}(c) = {*}_{\alpha,c}\) and, for any \(\beta \in w'\setminus v\), define \(f_{\alpha \beta}^{D} = f_{\alpha \gamma}^{D} \circ f_{\gamma \beta}^{D}\) on \(P_{\beta}^{D}\). If \(\alpha \in w\setminus v\) and \(\xi \in v\), interpret \(f^{D}_{\xi \alpha}\) so that $f^{D}_{\xi \alpha}|_{P^{\langle f,A\rangle_{L_{w}}}_{\alpha}} = f^{\langle f,A \rangle_{L_{w}}}_{\xi \alpha}|_{P^{\langle f,A\rangle_{L_{w}}}_{\alpha}}$ and \(f^{D}_{\xi \alpha}(*_{\alpha,c}) = f^{D}_{\xi \gamma}(c)\). If $\alpha < \beta$ are both from $w \setminus v$, we likewise define $f^{D}_{\alpha \beta}$ so that $f^{D}_{\alpha \beta}|_{P^{\langle f,A \rangle_{L_{w}}}_{\beta}} = f_{\alpha \beta}^{\langle f, A \rangle_{L_{w}}}$ and $f^{D}_{\alpha \beta}(*_{\beta,c}) = *_{\alpha,c}$. It remains to define the interpretation of $f_{\alpha \beta}^{D}$ when $\alpha <\beta$ are from $(w \cup w')$ and $\alpha,\beta \notin w \setminus v$. If $\beta \in w'$, then we can only set $f^{D}_{\alpha \beta}|_{P^{D}_{\beta}} = f^{\langle f,A \rangle_{L_{w'}}}_{\alpha \beta}|_{P^{D}_{\beta}}$, since $P^{D}_{\beta} = P_{\beta}^{\langle f,A \rangle_{L_{w'}}}$. If $\beta \in v$, then we set $f^{D}_{\alpha \beta}|_{P^{D}_{\beta}} = f^{\langle f, A \rangle_{L_{w}}}_{\alpha \beta}|_{P^{\langle f, A \rangle_{L_{w}}}_{\beta}} \cup f^{\langle f, A \rangle_{L_{w'}}}_{\alpha \beta}|_{P^{\langle f, A \rangle_{L_{w'}}}_{\beta}}$. Finally, interpret each function of the form \(p_{\beta}\) for \(\beta \in w\) to restrict to $p_{\beta}^{\langle f,A \rangle_{L_{w}}}$ and to be the identity on the complement of $\langle f,A \rangle_{L_{w}}$, and likewise for $\beta \in w'$ (note that these definitions agree for $\beta \in w \cap w' = v$). This completes the definition of the \(L_{w \cup w'}\)-structure on \(D\). It is clear from the construction that $D$ is an $L_{w \cup w'}$-extension of $A$, an $L_{w}$-extension of $\langle f,A \rangle_{L_{w}}$, and an $L_{w'}$-extension of $\langle f,A \rangle_{L_{w'}}$. Now we must check that \(D \in \mathbb{K}_{w \cup w'}\). It is easy to check that axioms \((1)-(3)\) are satisfied in \(D\). As \(f(\{\alpha, \beta\}) = 1\) for all \(\alpha \in w \setminus v, \beta \in w' \setminus v\), the only possible counterexample to axiom (4) can occur when \(\xi \in v\), \(\beta \in (w \cup w') \setminus v\) and \(f(\{\xi, \beta\})=0\). As the formal elements \(*_{\alpha, c}\) are not in the image of \(O\) under the \(p_{\alpha}\), it follows that a counterexample to axiom (4) must come from a counterexample either in \(B\) or \(C\), which is impossible. So \(D \in \mathbb{K}_{w \cup w'}\), which completes the proof. \end{proof} \begin{lem}\label{rootandobject} Suppose \(((\varphi_{\alpha}(x;y_{\alpha}))_{\alpha < \kappa},(a_{\eta})_{\eta \in \omega^{<\kappa}},\overline{w})\) is a rectified sct-pattern such that \(l(x)\) is minimal among sct-patterns of height \(\kappa\). 
Then for all \(\alpha < \kappa\), \(\varphi_{\alpha}(x;y_{\alpha}) \vdash (x)_{l} \in O \cup \bigcup_{i < n} P_{\zeta_{i}}\) for all \(l < l(x)\), that is, every formula in the pattern implies that every variable $(x)_{l}$ is in $O$ or a predicate indexed by the root of the associated $\Delta$-system. \end{lem} \begin{proof} Suppose not. First, suppose that for some \(l < l(x)\) and all \(\alpha < \kappa\), $\varphi_{\alpha}(x;y_{\alpha}) \vdash (x)_{l} \not\in O \cup \bigcup_{i < n} P_{\zeta_{i}} \cup \bigcup_{j < m} P_{\beta_{\alpha,j}}$. Then the only relations that \(\varphi_{\alpha}(x;y_{\alpha})\) can assert between \((x)_{l}\) and the elements of \(y_{\alpha}\) and the other elements of \(x\) are equalities and inequalities. By Lemma \ref{no equalities}(2), we know that $\varphi_{\alpha}(x;y_{\alpha})$ proves no equalities between elements of $x$ and the elements of $y_{\alpha}$, so it can only prove inequalities between $(x)_{l}$ and $y_{\alpha}$; but it is easy to see that this allows us to find an sct-pattern in fewer variables, contradicting minimality (or, if $l(x) = 1$, the definition of an sct-pattern). Secondly, consider the case that there is some \(\alpha < \kappa\) and \(j < m\) such that $\varphi_{\alpha}(x;y_{\alpha}) \vdash (x)_{l} \in P_{\beta_{\alpha, j}}$ and therefore, for all \(\alpha' \neq \alpha\), \(\varphi_{\alpha'}(x;y_{\alpha'})\) implies that \((x)_{l}\) is not in any of the unary predicates of \(L_{w_{\alpha'}}\), as \(\beta_{\alpha,j}\) is outside the root of the \(\Delta\)-system. So restricting the given pattern to the formulas \((\varphi_{\alpha'}(x;y_{\alpha'}) : \alpha' < \kappa, \alpha' \neq \alpha)\) yields a rectified sct-pattern of height $\kappa$ which falls into the first case considered, a contradiction. As these are the only cases, we conclude. \end{proof}
\begin{prop}\label{sctcomputation} If $\kappa_{\mathrm{sct}}(T^{*}_{\kappa,f}) = \kappa^{+}$, then there is \(\gamma\) such that for any \(\alpha,\alpha'\) with \(\gamma <\alpha < \alpha'<\kappa\) there are \(\xi \in v_{\alpha}\), \(\zeta \in v_{\alpha'}\) such that \(f(\{\xi, \zeta\}) = 0\). \end{prop} \begin{proof} Suppose not. Recall that by Lemma \ref{tending} and Remark \ref{same number of variables}, if there is an sct-pattern of height $\kappa$ in $k$ free variables, there is an sct-pattern in $k$ free variables which is also rectified. It follows that we may fix a rectified sct-pattern \(((\varphi_{\alpha}(x;y_{\alpha}))_{\alpha < \kappa},(a_{\eta})_{\eta \in \omega^{<\kappa}},\overline{w})\) such that \(l(x)\) is minimal among sct-patterns of height \(\kappa\). By Lemma \ref{rootandobject}, we know that, up to a relabeling of the variables, there is a \(k \leq l(x)\) such that, for all $l < k$, \(\varphi_{\alpha}(x;y_{\alpha}) \vdash (x)_{l} \in P_{\zeta_{i(l)}}\) for some $i(l)<n$ and \(\varphi_{\alpha}(x;y_{\alpha}) \vdash (x)_{l} \in O\) for \(l \geq k\). For each $\alpha < \kappa$, let $\varphi'_{\alpha}(x)$ be a complete $L_{w_{\alpha}}$-formula, without parameters, in the variables $x$ implied by $\varphi_{\alpha}(x;y_{\alpha})$ (which is unique up to logical equivalence, since $\varphi_{\alpha}(x;y_{\alpha})$ was assumed to be a complete $L_{w_{\alpha}}$-formula). 
Clearly we have, for all $l < k$, \(\varphi'_{\alpha}(x) \vdash (x)_{l} \in P_{\zeta_{i(l)}}\) and \(\varphi'_{\alpha}(x) \vdash (x)_{l} \in O\) for \(l \geq k\), since these are formulas without parameters in $L_{r} \subseteq L_{w_{\alpha}}$. Since all the symbols in the language are unary, it is easy to see from quantifier-elimination that for each $\alpha < \kappa$ and $\eta \in \omega^{\alpha}$, $\varphi_{\alpha}(x;a_{\eta})$ is equivalent to a conjunction of the following: \begin{enumerate} \item $\varphi'_{\alpha}(x)$. \item $(x)_{l} \neq (a_{\eta})_{i}$ for $l < l(x)$ and $i < l(a_{\eta})$ (using the minimality of $l(x)$). \item $(f_{\delta \zeta_{i(l)}}((x)_{l}) = (a_{\eta})_{i})^{t^{0}_{\delta,l,i}}$ for $l < k$, $\delta\in r$ with $\delta < \zeta_{i(l)}$, and $i < l(a_{\eta})$, and for some $t^{0}_{\delta,l,i} \in \{0,1\}$. \item $((f_{\delta \xi} \circ p_{\xi})((x)_{l}) = (a_{\eta})_{i})^{t^{1}_{\delta, \xi,l,i}}$ for $\delta \leq \xi$ from $r$, $k \leq l < l(x)$, and $i < l(a_{\eta})$, and for some $t^{1}_{\delta, \xi,l,i} \in \{0,1\}$. \item $((f_{\delta \xi} \circ p_{\xi})((x)_{l}) = (a_{\eta})_{i})^{t^{2}_{\delta, \xi,l,i}}$ for $\delta \leq \xi$ from $w_{\alpha}$, $\xi \in v_{\alpha}$, $k \leq l < l(x)$, and $i < l(a_{\eta})$, and for some $t^{2}_{\delta, \xi,l,i} \in \{0,1\}$. \end{enumerate} Choose \(\gamma < \kappa\) so that if \(\alpha < \kappa\) and \(\varphi_{\alpha}(x;a_{0^{\alpha}})\) implies a positive instance of one of the equalities in (3) and (4), then this is implied by \(\varphi_{\alpha'}(x;a_{0^{\alpha'}})\) for some \(\alpha' < \gamma\) (possible as the root is finite). By assumption, there are \(\alpha, \alpha'\) with \(\gamma < \alpha < \alpha' < \kappa\) such that \(f(\{\xi, \zeta\}) = 1\) for all \(\xi \in v_{\alpha}, \zeta \in v_{\alpha'}\). Choose \(\eta \in \omega^{\alpha}\), \(\nu \in \omega^{\alpha'}\) both extending $0^{\gamma}$ such that \(\eta \perp \nu\). Let \(A = \langle a_{\eta}, a_{\nu} \rangle_{L_{w_{\alpha} \cup w_{\alpha'}}}\) be the finite \(L_{w_{\alpha} \cup w_{\alpha'}}\)-structure generated by \(a_{\eta}\) and \(a_{\nu}\). Pick $d \models \{\varphi_{\delta}(x;a_{0^{\delta}}): \delta \leq \gamma\} \cup \{\varphi_{\alpha}(x;a_{\eta})\}$ and $e \models \{\varphi_{\delta}(x;a_{0^{\delta}}) : \delta \leq \gamma\} \cup \{\varphi_{\alpha'}(x;a_{\nu})\}$. By the choice of $\gamma$, the $s$-indiscernibility of $(a_{\eta})_{\eta \in \omega^{<\kappa}}$, quantifier-elimination, and the observation above, we have \(\text{tp}_{L_{r}}(d/A) = \text{tp}_{L_{r}}(e/A)\). Let \(B = \langle d,A \rangle_{L_{w_{\alpha}}}\) and \(C = \langle e,A \rangle_{L_{w_{\alpha'}}}\). By Lemma \ref{consistency}, there is \(D = \langle g, A \rangle^{D}_{L_{w_{\alpha} \cup w_{\alpha'}}} \in \mathbb{K}_{w_{\alpha} \cup w_{\alpha'}}\) with \(l(g) = l(d) = l(e)\) such that \(\langle g,A \rangle_{L_{w_{\alpha}}} \cong B\) over \(A\) and \(\langle g, A \rangle_{L_{w_{\alpha'}}} \cong C\) over \(A\). Using the extension property to embed $D$ in $\mathbb{M}$ over $A$, it follows that in \(\mathbb{M}\), \(g \models \{\varphi_{\alpha}(x;a_{\eta}), \varphi_{\alpha'}(x;a_{\nu})\}\), contradicting the definition of sct-pattern. This completes the proof. \end{proof}
\subsection{Conclusion} \begin{thm} \label{first main theorem} There is a stable theory \(T\) such that \(\kappa_{\text{cdt}}(T) \neq \kappa_{\text{sct}}(T) + \kappa_{\text{inp}}(T)\). Moreover, it is consistent with ZFC that for every regular uncountable \(\kappa\), there is a stable theory \(T\) with \(|T| = \kappa\) and \(\kappa_{\text{cdt}}(T) > \kappa_{\text{sct}}(T) + \kappa_{\text{inp}}(T)\). \end{thm} \begin{proof} If $\kappa$ is regular and uncountable satisfying $\text{Pr}_{1}(\kappa,\kappa,2,\aleph_{0})$, then choose \(f: [\kappa]^{2} \to 2\) witnessing \(\text{Pr}_{1}(\kappa,\kappa,2,\aleph_{0})\). There can be no homogeneous set of size \(\kappa\) for \(f\), since given any $\{x_{\alpha} : \alpha < \kappa\} \subseteq \kappa$, enumerated in increasing order, we obtain a pairwise disjoint family of finite sets $(v_{\alpha})_{\alpha < \kappa}$ defined by $v_{\alpha} = \{x_{\alpha}\}$, and $\mathrm{Pr}_{1}(\kappa,\kappa,2,\aleph_{0})$ implies that for each color $i \in \{0,1\}$, there are $\alpha < \alpha'$ such that $f(\{x_{\alpha},x_{\alpha'}\}) =i$. Moreover, $\mathrm{Pr}_{1}(\kappa,\kappa,2,\aleph_{0})$ implies directly that there can be no collection \((v_{\alpha} : \alpha < \kappa)\) of disjoint finite sets such that, given \(\alpha < \alpha' < \kappa\), there are \(\xi \in v_{\alpha}, \zeta \in v_{\alpha'}\) such that \(f(\{\xi, \zeta\}) = 0\). Let \(T = T^{*}_{\kappa, f}\). This theory is stable by Lemma \ref{stable}. Additionally, \(\kappa_{\mathrm{cdt}}(T) = \kappa^{+}\), by Proposition \ref{cdtcomputation}, but \(\kappa_{\text{sct}}(T) < \kappa^{+}\) and \(\kappa_{\text{inp}}(T) < \kappa^{+}\) by Proposition \ref{sctcomputation} and Proposition \ref{inpcomputation} respectively. By Fact \ref{ShelahPr} and Observation \ref{monotonicity}, \(\text{Pr}_{1}(\lambda^{++}, \lambda^{++}, 2, \aleph_{0})\) holds for any regular uncountable $\lambda$. Then \(T = T^{*}_{\kappa,f}\) gives the desired theory, for \(\kappa = \lambda^{++}\) and any \(f\) witnessing \(\text{Pr}_{1}(\lambda^{++}, \lambda^{++}, 2, \aleph_{0})\). For the ``moreover'' clause, note that ZFC is equiconsistent with ZFC + GCH + ``there are no inaccessible cardinals'' (if \(V \models \text{ZFC}\) contains a strongly inaccessible cardinal, replace \(V\) by \(V_{\kappa}\) for \(\kappa\) the least such, and then consider \(L\) in \(V\)), which entails that every regular uncountable cardinal is a successor. By Theorem \ref{GalvinPr} this implies that \(\text{Pr}_{1}(\kappa, \kappa,2, \aleph_{0})\) holds for all regular uncountable cardinals \(\kappa\), which completes the proof. \end{proof} \begin{rem} In \cite[Theorem 3.1]{ArtemNick}, it was proved that \(\kappa_{\text{cdt}}(T) = \kappa_{\text{inp}}(T) + \kappa_{\text{sct}}(T)\) for any countable theory \(T\). The above theorem shows that, in a certain sense, this result is best possible. \end{rem} \begin{rem} It would be interesting to know if for $\kappa$ strongly inaccessible, there is a theory $T$ with $\kappa_{\mathrm{cdt}}(T) = \kappa^{+} > \kappa_{\text{inp}}(T) + \kappa_{\text{sct}}(T)$. \end{rem}
\section{Compactness of ultrapowers}\label{compactness} In this section we study the decay of saturation in regular ultrapowers. We say an ultrafilter $\mathcal{D}$ on $I$ is \emph{regular} if there is a collection of sets $\{X_{\alpha} : \alpha < |I|\} \subset \mathcal{D}$ such that for all $t \in I$, the set $\{\alpha : t \in X_{\alpha}\}$ is finite, and $\mathcal{D}$ is \emph{uniform} if all sets in $\mathcal{D}$ have cardinality $|I|$. Recall that a model $M$ is called \emph{$\lambda$-compact} if every (partial) type over $M$ of cardinality less than $\lambda$ is realized in $M$. In the case that the language has size at most $\lambda$, the notions of $\lambda$-compactness and $\lambda$-saturation are equivalent, but they may differ if the cardinality of the language exceeds $\lambda$, since, in this case, types over sets of parameters of size less than $\lambda$ may still contain more than $\lambda$ many formulas. Given a theory $T$, we start with a regular uniform ultrafilter $\mathcal{D}$ on $\lambda$ and a $\lambda^{++}$-saturated model $M \models T$. We then consider whether the ultrapower $M^{\lambda}/\mathcal{D}$ is $\lambda^{++}$-compact. Shelah has shown \cite[Theorem VI.4.7]{shelah1990classification} that if $T$ is not simple, then in this situation $M^{\lambda}/\mathcal{D}$ will not be $\lambda^{++}$-compact, and asked whether an analogous result holds for theories $T$ with $\kappa_{\text{inp}}(T) > \lambda^{+}$. We will show by direct construction that $\kappa_{\text{inp}}(T) > \lambda^{+}$ does not suffice, but that $\kappa_{\text{sct}}(T) > \lambda^{+}$ is sufficient to obtain a decay in compactness, by modifying an argument due to Malliaris and Shelah \cite[Claim 7.5]{Malliaris:2012aa} and leveraging the finite square principles of Kennedy and Shelah \cite{kennedyshelah}.
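To keep the definition of regularity concrete, we recall the standard example (this is classical and included only for orientation): identify $I$ with $[\lambda]^{<\omega}$, the set of finite subsets of $\lambda$, and set $X_{\alpha} = \{s \in I : \alpha \in s\}$ for $\alpha < \lambda$. Any finitely many sets $X_{\alpha_{0}}, \ldots, X_{\alpha_{n-1}}$ contain the common element $\{\alpha_{0},\ldots,\alpha_{n-1}\}$, so the family $\{X_{\alpha} : \alpha < \lambda\}$ has the finite intersection property and extends to an ultrafilter $\mathcal{D}$ on $I$; since each $s \in I$ belongs to $X_{\alpha}$ only for the finitely many $\alpha \in s$, any such $\mathcal{D}$ is regular.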
\subsection{A counterexample} Fix $\kappa$ a regular uncountable cardinal. Let $L'_{\kappa} = \langle O, P_{\alpha},p_{\alpha} : \alpha < \kappa \rangle$ be a language where $O$ and the $P_{\alpha}$ are unary predicates and each $p_{\alpha}$ is a unary function. Define a theory $T'_{\kappa}$ to be the universal theory with the following as axioms: \begin{enumerate} \item $O$ and the $(P_{\alpha})_{\alpha < \kappa}$ are pairwise disjoint. \item For all $\alpha < \kappa$, $p_{\alpha}$ is a function such that $(\forall x \in O)[p_{\alpha}(x) \in P_{\alpha}]$ and $(\forall x \not\in O)[p_{\alpha}(x) = x]$. \end{enumerate} Given a finite set $w \subset \kappa$, define $L'_{w} = \langle O,P_{\alpha}, p_{\alpha} : \alpha \in w \rangle$. Let $\mathbb{K}'_{w}$ denote the class of finite models of $T'_{\kappa} \upharpoonright L'_{w}$. \begin{lem} Suppose $w \subset \kappa$ is finite. Then $\mathbb{K}'_{w}$ is a Fra\"iss\'e class. \end{lem} \begin{proof} The axioms of $T'_{\kappa}\upharpoonright L'_{w}$ are universal so HP is clear. As we allow the empty structure to be a model, JEP follows from AP. For AP, we reduce to the case where $A,B,C \in \mathbb{K}'_{w}$, $A$ is a substructure of both $B$ and $C$, and $B \cap C = A$. Because all the functions in the language are unary, we may define an $L'_{w}$-structure $D$ on $B \cup C$ by taking unions of the relations and functions as interpreted on $B$ and $C$. It is easy to see that $D \in \mathbb{K}'_{w}$, so we are done. \end{proof} By Fra\"iss\'e theory, for each finite $w \subset \kappa$, there is a unique countable ultrahomogeneous $L'_{w}$-structure with age $\mathbb{K}'_{w}$. Let $T^{\dag}_{w}$ denote its theory. We remark that the theory $T^{\dag}_{w}$ is almost a reduct of $T^{*}_{w}$ considered in the previous sections, with the difference that the functions $p_{\alpha}$ are partial in $T^{*}_{w}$ and total in $T^{\dag}_{w}$. One can easily check that $T^{\dag}_{w}$ is interpretable in $T^{*}_{w}$ for $w$ finite, interpreting $O$ by the formula $\bigwedge_{\alpha \in w} x \in \text{dom}(p_{\alpha})$. Since this interpretation is not uniform in $w$, we will still need to briefly repeat the corresponding steps of the analysis above to show that the $T^{\dag}_{w}$ are coherent. \begin{lem} Suppose $v$ and $w$ are finite sets with $w \subset v \subset \kappa$. Then $T^{\dag}_{w} \subset T^{\dag}_{v}$. \end{lem} \begin{proof} By induction, it suffices to consider the case when $v = w \cup \{\gamma\}$ for some $\gamma \in \kappa \setminus w$. By Fact \ref{KPTredux}, we must show (1) that $A \in \mathbb{K}'_{w}$ if and only if there is $D \in \mathbb{K}'_{v}$ such that $A$ is an $L'_{w}$-substructure of $D \upharpoonright L'_{w}$ and (2) that whenever $A,B \in \mathbb{K}'_{w}$, $\pi : A \to B$ is an $L'_{w}$-embedding, and $C \in \mathbb{K}'_{v}$ satisfies $C = \langle A \rangle^{C}_{L'_{v}}$, then there is $D \in \mathbb{K}'_{v}$ such that $B$ is an $L'_{w}$-substructure of $D \upharpoonright L'_{w}$ and $\pi$ extends to an $L'_{v}$-embedding $\tilde{\pi} : C \to D$. For (1), it is clear from definitions that if $D \in \mathbb{K}'_{v}$ then $D \upharpoonright L'_{w} \in \mathbb{K}'_{w}$. Given $A \in \mathbb{K}'_{w}$, we may construct a suitable $L'_{v}$-structure $D$ as follows. If $O^{A} = \emptyset$, we may simply expand $A$ to $D$ by setting $P_{\gamma}^{D} = \emptyset$ and this trivially satisfies the required axioms. So we will assume $O^{A}$ is non-empty and let the underlying set of $D$ be $A \cup \{*\}$. 
We interpret the predicates of $L'_{w}$ to have the same interpretation as on $A$, and we interpret the functions of $L'_{w}$ so that their restrictions to $A$ are their interpretations on $A$ and so that the functions are the identity on $*$. We additionally set $P^{D}_{\gamma} = \{*\}$ and $p^{D}_{\gamma}$ to be the identity on the complement of $O^{D}$ ($=O^{A}$) and the constant function with value $*$ on $O^{D}$. Clearly $D \in \mathbb{K}'_{v}$, $D = \langle A \rangle_{L'_{v}}$, and $A$ is an $L'_{w}$-substructure of $D \upharpoonright L'_{w}$. For (2), suppose $A,B \in \mathbb{K}'_{w}$, $\pi : A \to B$ is an embedding, and $C \in \mathbb{K}'_{v}$ satisfies $C = \langle A \rangle^{C}_{L'_{v}}$. The requirement that $C = \langle A \rangle^{C}_{L'_{v}}$ entails that any points of $C \setminus A$ lie in $P_{\gamma}^{C}$. In particular, $O^{A} = O^{C}$ and we may use this notation interchangeably. Let $E = O^{B} \setminus \pi(O^{A})$, so that we may write $O^{B} = \pi(O^{A}) \sqcup E$. Define an $L'_{v}$-structure $D$ whose underlying set is $B \cup P_{\gamma}(C) \cup \{*_{e} : e \in E\}$. Interpret the predicates of $L'_{w}$ on $D$ to have the same interpretation as on $B$ and interpret the functions of $L'_{w}$ so that they agree with their interpretations on $B$ and are the identity on the complement of $B$. Then define $P_{\gamma}(D) = P_{\gamma}(C) \cup \{*_{e} : e \in E\}$ and interpret $p^{D}_{\gamma}$ to be the identity on the complement of $O^{D}$ and, on $O^{D}$, $$ p_{\gamma}^{D}(x) = \left\{ \begin{matrix} p_{\gamma}^{C}(a) & \text{ if } x = \pi(a) \text{ for some } a \in O^{A} \\ *_{x} & \text{ if } x \in E. \end{matrix} \right. $$ Clearly $D \in \mathbb{K}'_{v}$. Extend $\pi$ to a map $\tilde{\pi}: C \to D$ by defining $\tilde{\pi}$ to be the identity on $P_{\gamma}(C)$. We claim $\tilde{\pi}$ is an $L'_{v}$-embedding: note that for all $x \in O^{C}$, $p_{\gamma}^{D}(\tilde{\pi}(x)) = p_{\gamma}^{C}(x) = \tilde{\pi}(p_{\gamma}^{C}(x))$ and $\tilde{\pi}$ obviously respects all other structure from $L'_{w}$ as $\pi$ is an $L'_{w}$-embedding. \end{proof} Define the theory $T^{\dag}_{\kappa}$ to be the union of $T^{\dag}_{w}$ for all finite $w \subset \kappa$. This is a complete $L'_{\kappa}$-theory with quantifier elimination, as these properties are inherited from the $T^{\dag}_{w}$. Fix a monster $\mathbb{M} \models T_{\kappa}^{\dag}$ and work there. \begin{prop}\label{bound} The theory $T^{\dag}_{\kappa}$ is stable and $\kappa_{\text{inp}}(T^{\dag}_{\kappa}) = \kappa^{+}$. \end{prop} \begin{proof} For each $\alpha < \kappa$, choose, for each $\beta < \omega$, an element $a_{\alpha,\beta} \in P_{\alpha}(\mathbb{M})$ such that $\beta \neq \beta'$ implies $a_{\alpha,\beta} \neq a_{\alpha,\beta'}$. It is easy to check that, for all functions $g: \kappa \to \omega$, $\{p_{\alpha}(x) = a_{\alpha,g(\alpha)} : \alpha < \kappa\}$ is consistent and, for all $\alpha < \kappa$, $\{p_{\alpha}(x) = a_{\alpha, \beta} : \beta < \omega\}$ is $2$-inconsistent by the injectivity of the sequence $(a_{\alpha,\beta})_{\beta < \omega}$. Setting $k_{\alpha} = 2$ for all $\alpha$, we see that $(p_{\alpha}(x) = y_{\alpha} : \alpha < \kappa)$, $(a_{\alpha, \beta})_{\alpha < \kappa, \beta < \omega}$, and $(k_{\alpha})_{\alpha < \kappa}$ forms an inp-pattern of height $\kappa$ so $\kappa_{\text{inp}}(T_{\kappa}^{\dag}) \geq \kappa^{+}$. The stability of $T^{\dag}_{\kappa}$ follows from an argument identical to Lemma \ref{stable} which, by Fact \ref{easy inequalities}, gives the upper bound $\kappa_{\text{inp}}(T_{\kappa}^{\dag}) \leq \kappa^{+}$. 
\end{proof} \begin{prop}\label{saturation} Suppose $\mathcal{D}$ is an ultrafilter on $\lambda$, $\kappa = \lambda^{+}$, and $M \models T^{\dag}_{\kappa}$ is $\lambda^{++}$-saturated. Then $M^{\lambda}/\mathcal{D}$ is $\lambda^{++}$-saturated. \end{prop} \begin{proof} Suppose $A \subseteq M^{\lambda}/\mathcal{D}$, $|A| = \kappa = \lambda^{+}$. To show that any $q(x) \in S^{1}(A)$ is realized, we have three cases to consider: \begin{enumerate} \item $q(x) \vdash x \in P_{\alpha}$ for some $\alpha < \kappa$ \item $q(x) \vdash x \not\in O$ and $q(x) \vdash x\not\in P_{\alpha}$ for all $\alpha < \kappa$ \item $q(x) \vdash x \in O$. \end{enumerate} It suffices to consider $q$ non-algebraic and $A = \text{dcl}(A)$. In case (1), $q(x)$ is implied by $\{P_{\alpha}(x)\} \cup \{x \neq a : a \in A\}$ and in case (2), $q(x)$ is implied by $\{\neg O(x) \wedge \neg P_{\alpha}(x) : \alpha < \kappa\} \cup \{x \neq a : a \in A\}$. To realize $q(x)$ in case (1), for each $t \in \lambda$, choose some $b_{t} \in P_{\alpha}(M)$ such that $b_{t} \neq a[t]$ for all $a \in A$, which is possible by the $\lambda^{++}$-saturation of $M$ and the fact that $|A| = \lambda^{+}$. Let $b = \langle b_{t} \rangle_{t \in \lambda}/\mathcal{D}$. By $\L$o\'{s}'s theorem, $b \models q$. Realizing $q$ in case (2) is entirely similar. So now we show how to handle case (3). Fix some complete type $q(x) \in S_{1}(A)$ such that $q(x) \vdash x \in O$. First, we note that by possibly growing $A$ by $\kappa$ many elements, we may assume that there is a sequence $(c_{\alpha})_{\alpha < \kappa}$ from $A$ so that $q$ is equivalent to the following: $$ \{x \in O\} \cup \{x \neq a : a \in O(A)\} \cup \{p_{\alpha}(x) = c_{\alpha} : \alpha < \kappa\}. $$ This follows from the fact that, for each $\alpha < \kappa$, either $q(x) \vdash p_{\alpha}(x) = c_{\alpha}$ for some $c_{\alpha}$, or it only proves inequations of this form. In the latter case, we can choose some element $c_{\alpha} \in P_{\alpha}(M^{\lambda}/\mathcal{D})$ not in $A$ (possible by case (1) above) and extend $q(x)$ by adding the formula $p_{\alpha}(x) = c_{\alpha}$, which will then imply all inequations of the form $p_{\alpha}(x) \neq a$ for any $a \in A$, and this clearly remains finitely satisfiable. So now given $q$ in the form described above, let $X_{t} = \{\alpha < \kappa : M \models P_{\alpha}(c_{\alpha}[t])\}$ for each $t \in \lambda$. Let $q_{t}(x)$ denote the following set of formulas over $M$: $$ q_{t}(x) = \{x \in O\} \cup \{x \neq a[t]: a \in O(A)\} \cup \{p_{\alpha}(x) = c_{\alpha}[t] : \alpha \in X_{t}\}. $$ By construction, if $\alpha \neq \alpha' \in X_{t}$ then $M \models P_{\alpha}(c_{\alpha}[t]) \wedge P_{\alpha'}(c_{\alpha'}[t])$ so this set of formulas is consistent and over a parameter set from $M$ of size at most $\kappa$, hence realized by some $b_{t} \in M$. Let $b = \langle b_{t} \rangle_{t \in \lambda}/\mathcal{D}$ and let $J_{\alpha}$ be defined by $J_{\alpha} = \{t \in \lambda : M \models P_{\alpha}(c_{\alpha}[t])\}$. Note that, for $t < \lambda$ and $\alpha < \kappa$, $t \in J_{\alpha}$ if and only if $\alpha \in X_{t}$. As $q(x)$ is a consistent set of formulas, $J_{\alpha} \in \mathcal{D}$ and, by construction, $J_{\alpha} \subseteq \{t \in \lambda : M \models p_{\alpha}(b_{t}) = c_{\alpha}[t]\}$ so $M^{\lambda}/\mathcal{D} \models p_{\alpha}(b) = c_{\alpha}$. It is obvious that $b$ satisfies all of the other formulas of $q$ so we are done. 
\end{proof} \begin{cor} \label{second main theorem} Suppose $T$ is a complete theory, $|I| = \lambda$, $\mathcal{D}$ on $I$ is an ultrafilter, and $M \models T$ is a $\lambda^{++}$-saturated model of $T$. The condition that $\kappa_{\text{inp}}(T) > |I|^{+}$ is, in general, not sufficient to guarantee that $M^{I}/\mathcal{D}$ is not $\lambda^{++}$-compact. In particular, by Fact \ref{easy inequalities}(2), the condition that $\kappa_{\text{cdt}}(T) > |I|^{+}$ is not sufficient to guarantee that $M^{I}/\mathcal{D}$ is not $\lambda^{++}$-compact. \end{cor} \begin{proof} Given $\lambda$, $I$ with $|I| = \lambda$, and an ultrafilter $\mathcal{D}$ on $I$, choose any $\lambda^{++}$-saturated model $M$ of $T^{\dag}_{\lambda^{+}}$. By Proposition \ref{bound}, $\kappa_{\text{cdt}}(T^{\dag}_{\lambda^{+}}) \geq \kappa_{\text{inp}}(T^{\dag}_{\lambda^{+}}) = \lambda^{++} > |I|^{+}$, but, by Proposition \ref{saturation}, $M^{I}/\mathcal{D}$ is $\lambda^{++}$-saturated and hence $\lambda^{++}$-compact. \end{proof}
\subsection{Loss of saturation from large sct-patterns} If $T$ is not simple, then it has either the tree property of the first kind or the second kind\textemdash Shelah argues in \cite[Theorem VI.4.7]{shelah1990classification} by demonstrating that either property results in a decay of saturation with an argument tailored to each property. The preceding section demonstrates that the analogy between TP$_{2}$ and $\kappa_{\text{inp}}(T) > |I|^{+}$ breaks down, but we show that the analogy between TP$_{1}$ and $\kappa_{\text{sct}}(T) > |I|^{+}$ survives, assuming some set theory. The argument below is a straightforward adaptation of the argument of \cite[Claim 8.5]{Malliaris:2012aa}. Recall that if $T$ is a theory with a distinguished predicate $P$ and $\kappa < \lambda$ are infinite cardinals, then the theory $T$ is said to \emph{admit} $(\lambda, \kappa)$ if there is a model $M \models T$ with $|M| = \lambda$ and $|P^{M}| = \kappa$. The notation $\langle \kappa, \lambda \rangle \to \langle \kappa', \lambda' \rangle$ stands for the assertion that any theory in a countable language that admits $(\lambda,\kappa)$ also admits $(\lambda',\kappa')$. Chang's two-cardinal theorem asserts that if $\lambda = \lambda^{<\lambda}$ then $\langle \aleph_{0},\aleph_{1} \rangle \to \langle \lambda, \lambda^{+}\rangle$ (see, e.g., \cite[Theorem 7.2.7]{chang1990model}\textemdash the statement given here follows from the proof). \begin{fact}\label{square} \cite[Lemma 4]{kennedyshelah} Suppose $\mathcal{D}$ is a regular uniform ultrafilter on $\lambda$ and $\langle \aleph_{0},\aleph_{1} \rangle \to \langle \lambda, \lambda^{+}\rangle$. There is an array of sets $\langle u_{t,\alpha} : t < \lambda, \alpha < \lambda^{+} \rangle$ satisfying the following properties: \begin{enumerate} \item $u_{t, \alpha} \subseteq \alpha$ \item $|u_{t, \alpha}| < \lambda$ \item $\alpha \in u_{t, \beta} \implies u_{t, \beta} \cap \alpha = u_{t, \alpha}$ \item if $u \subseteq \lambda^{+}$, $|u| < \aleph_{0}$ then $\{t < \lambda : (\exists \alpha)(u \subseteq u_{t, \alpha})\} \in \mathcal{D}$. \end{enumerate} \end{fact} \begin{thm} \label{second main theorem part 2} Suppose $|I| = \lambda$ and $\langle \aleph_{0},\aleph_{1} \rangle \to \langle \lambda, \lambda^{+}\rangle$. Suppose $\kappa_{\text{sct}}(T) > |I|^{+}$, $M$ is an $|I|^{++}$-saturated model of $T$ and $\mathcal{D}$ is a regular ultrafilter over $I$. Then $M^{I}/\mathcal{D}$ is not $|I|^{++}$-compact. \end{thm} \begin{proof} Let $(\varphi_{\alpha}(x;y_{\alpha}) : \alpha < \lambda^{+})$, $(a_{\eta})_{\eta \in \lambda^{<\lambda^{+}}}$ be an sct-pattern. We may assume $l(y_{\alpha}) = k$ for all $\alpha < \lambda^{+}$. Let $\langle u_{t,\alpha} : t < \lambda, \alpha < \lambda^{+} \rangle$ be given as by Fact \ref{square}. We may consider the tree $(\lambda^{+})^{<\lambda}$ as the set of sequences of elements of $\lambda^{+}$ of length $<\lambda$ ordered by extension and then, for each $t < \lambda$ and $\alpha < \lambda^{+}$, we can define $\eta_{t,\alpha} \in (\lambda^{+})^{<\lambda}$ to be the sequence that enumerates $u_{t,\alpha} \cup \{\alpha\}$ in increasing order. Note that if $\alpha < \beta$, then, because $\alpha \in u_{t,\beta}$ implies $u_{t,\beta} \cap \alpha = u_{t,\alpha}$, we have $\eta_{t,\alpha} \vartriangleleft \eta_{t,\beta} \iff \alpha \in u_{t,\beta}$. 
For each $\alpha < \lambda^{+}$ we thus have an element $c_{\alpha} \in M^{\lambda}/\mathcal{D}$ given by $c_{\alpha} = \langle c_{\alpha}[t]: t < \lambda \rangle /\mathcal{D}$ where $c_{\alpha}[t] = a_{\eta_{t, \alpha}} \in M$. \textbf{Claim:} $p(x):= \{\varphi_{\alpha}(x;c_{\alpha}) : \alpha < \lambda^{+} \}$ is consistent. \emph{Proof of claim:} Fix any finite $u \subseteq \lambda^{+}$. If for some $t < \lambda$ and $\alpha < \lambda^{+}$, we have $u \subseteq u_{t,\alpha}$ then $\{\eta_{t,\beta} : \beta \in u\} \subseteq \{\eta_{t,\beta} : \beta \in u_{t,\alpha}\}$ which is contained in a path, hence $\{\varphi_{\beta}(x;c_{\beta}[t]) : \beta \in u\} = \{\varphi_{\beta}(x;a_{\eta_{t,\beta}}) : \beta \in u\}$ is consistent by definition of an sct-pattern. We know $\{t < \lambda : (\exists \alpha)(u \subseteq u_{t,\alpha})\} \in \mathcal{D}$ so the claim follows by $\L$o\'{s}'s theorem and compactness.\qed Suppose $b = \langle b[t] \rangle_{t \in \lambda} / \mathcal{D}$ is a realization of $p$ in $M^{\lambda}/\mathcal{D}$. For each $\alpha < \lambda^{+}$ define $J_{\alpha} = \{t < \lambda : M \models \varphi_{\alpha}(b[t],c_{\alpha}[t])\} \in \mathcal{D}$. For each $\alpha$, pick $t_{\alpha} \in J_{\alpha}$. The map $\alpha \mapsto t_{\alpha}$ is regressive on the stationary set of $\alpha$ with $\lambda \leq \alpha < \lambda^{+}$. By Fodor's lemma, there's some $t_{*}$ such that the set $S = \{\alpha < \lambda^{+} : t_{\alpha} = t_{*}\}$ is stationary. Therefore $p_{*}(x) = \{\varphi_{\alpha}(x;a_{\eta_{t_{*},\alpha}}) : \alpha \in S\}$ is a consistent partial type in $M$ so $\{\eta_{t_{*},\alpha} : \alpha \in S\}$ is contained in a path, by definition of sct-pattern. Choose an $\alpha \in S$ so that $|S \cap \alpha| = \lambda$. Then, by choice of the $\eta_{t,\alpha}$, we have $\beta \in S \cap \alpha$ implies $\eta_{t_{*},\beta} \unlhd \eta_{t_{*},\alpha}$ and therefore $\beta \in u_{t_{*},\alpha}$. This shows $|u_{t_{*},\alpha}| \geq \lambda$, a contradiction. \end{proof} {} \end{document}
\begin{document} \title{Minimal Ramsey graphs for cyclicity} \author{ Damian Reding \and Anusch Taraz } \address{Technische Universit\"at Hamburg, Institut f\"ur Mathematik, Hamburg, Germany} \email{\{damian.reding|taraz\}@tuhh.de} \maketitle \begin{abstract} We study graphs with the property that every edge-colouring admits a monochromatic cycle (the length of which may depend freely on the colouring) and describe those graphs that are minimal with this property. We show that every member in this class reduces recursively to one of the base graphs $K_5-e$ or $K_4\vee K_4$ (two copies of $K_4$ identified at an edge), which implies that an arbitrary $n$-vertex graph $G$ with $e(G)\geq 2n-1$ must contain one of those as a minor. We also describe three explicit constructions governing the reverse process. As an application we are able to establish Ramsey infiniteness for each of the three possible chromatic subclasses $\chi=2, 3, 4$, the unboundedness of maximum degree within the class as well as Ramsey separability of the family of cycles of length $\leq l$ from any of its proper subfamilies. \end{abstract} \linespread{1.3} \section{Introduction and results} By an $r$-\textit{Ramsey graph for} $H$ we mean a graph $G$ with the property that every $r$-edge-colouring of $G$ admits a monochromatic copy of $H$. We focus on the Ramsey graphs that are \textit{minimal} with respect to the subgraph relation, i.e. no proper subgraph is a Ramsey graph for $H$. As a consequence of Ramsey's theorem~\cite{RAM} such graphs always exist. Minimal Ramsey graphs, their constructions, their number on a fixed vertex set, their connectivity, as well as the extent of their chromatic number and maximum degree, have been investigated by Burr, Erd\H{o}s and Lov\'asz~\cite{BEL}, Ne\v{s}et\v{r}il and Rödl~\cite{NRC}, Burr, Faudree and Schelp~\cite{BFS} as well as Burr, Ne\v{s}et\v{r}il, Rödl~\cite{BNR} and others. More recently, the question of the minimum degree of minimal Ramsey graphs initiated by Burr, Erd\H{o}s, Lov\'asz~\cite{BEL} was picked up again by Fox, Lin~\cite{FL} and Szab\'o, Zumstein and Z\"urcher~\cite{SZZ}. Subsequently Fox, Grinshpun, Liebenau, Person and Szab\'o~\cite{FGLPS} have employed the parameter in a proof of Ramsey non-equivalence (or separability)~\cite{FGLPS} and also obtained some generalizations to multiple colours~\cite{FGLPSM}. However, a persistent obstacle is that the structure of (minimal) Ramsey graphs for a specific graph $H$ is difficult to characterize, essentially because it requires a practical description of how graphs edge-decompose into $H$-free subgraphs. Indeed, few exact characterizations are known other than some simple ones for stars and collections of such~\cite{BEL}. The obstacle turns out to be a lesser one if $H$ is relaxed to be a graph property. We say that a graph $G$ is an $r$-\textit{Ramsey graph for a graph property $\mathcal{P}$} (which is closed under taking supergraphs) if every $r$-edge-colouring of $G$ admits a monochromatic copy of a member of $\mathcal{P}$. The choice of the member is thus allowed to depend freely on the choice of colouring. We denote that class by $\mathcal{R}_r(\mathcal{P})$ and the subclass of minimal ones by $\mathcal{M}_r(\mathcal{P})\subset\mathcal{R}_r(\mathcal{P})$. 
Indeed, this is not a far-fetched definition. Results on the corresponding notion of Ramsey numbers for graph properties appear across the literature both in and outside the context of Ramsey theory, e.g. connectivity~\cite{M}, minimum degree~\cite{KS}, planarity~\cite{B}, the contraction clique number~\cite{T} or, more recently, embeddability in the plane~\cite{FKS}. For a small number of such properties, the minimal order $R_r(\mathcal{P})$ of a Ramsey graph for $\mathcal{P}$ is known exactly, e.g. $R_r(\chi\geq k)=(k-1)^r+1$~\cite{LZ}. Most notable, however, is the characterization of the chromatic Ramsey number of $H$ as the Ramsey number for the graph property $Hom(H)$ by Burr, Erd\H{o}s and Lov\'asz~\cite{BEL}. The notion also connects naturally to classical graph parameters. Indeed, for every number $r\geq 2$ of colours we have that $G\in\mathcal{R}_r(\mathcal{C}_{\text{odd}})$, where $\mathcal{C}_{\text{odd}}$ denotes the property of containing an odd cycle, if and only if $\chi (G)\geq 2^r+1$ (for the \emph{if}-direction, note that if $G\notin\mathcal{R}_r(\mathcal{C}_{\text{odd}})$, then $G$ edge-decomposes into $\leq r$ bipartite graphs, whence a proper $2^r$-colouring of $V(G)$ is given by the $r$-tuples of $0$'s and $1$'s. The \emph{only if}-direction follows by a simple inductive argument on $r\geq 1$). Consequently we have that $G\in\mathcal{M}_r(\mathcal{C}_{\text{odd}})$ if and only if $G$ is minimal subject to $\chi(G)\geq 2^r+1$, so the study of $\mathcal{M}_r(\mathcal{C}_{\text{odd}})$ is precisely the study of the well-known notion of $(2^r+1)$-\emph{critical} graphs. The property we focus on in this paper is the property $\mathcal{C}$ of containing an arbitrary cycle. Indeed we have the following useful characterization of $\mathcal{R}_r(\mathcal{C})$ (and hence of $ \mathcal{M}_r(\mathcal{C})$) in terms of local edge-densities of subgraphs. \begin{prop}\label{prop:NW} For every integer $r\geq 2$, we have that $G\in\mathcal{R}_r(\mathcal{C})$ if and only if $\frac{e(H)-1}{v(H)-1}\geq r$ for some subgraph $H\subseteq G$, and consequently we have that $G\in\mathcal{M}_r(\mathcal{C})$ if and only if both $\frac{e(G)-1}{v(G)-1}=r$ and $\frac{e(H)-1}{v(H)-1}<r$ for every proper subgraph $H\subset G$. \end{prop} Since the graphs in $\mathcal{R}_r(\mathcal{C})$ are precisely those which do not edge-decompose into $r$ forests, one obtains Proposition \ref{prop:NW} as a direct translation of the following well-known theorem. \begin{thm}\label{thm:NW} (\text{Nash-Williams' Arboricity Theorem}~\cite{NW}) Every graph $G$ admits an edge-decomposition into $\left\lceil ar(G)\right\rceil$ many forests, where $ar(G):=\max_{J\subseteq G, v_J>1}\frac{e_J}{v_J-1}$. \end{thm} We remark that this is not the first time that Theorem \ref{thm:NW} finds use in graph Ramsey theory, see e.g.~\cite{PR} for an account of how the theorem can be used to establish the relation $ar(G)\geq r\cdot ar(F)$ for every $r$-Ramsey graph $G$ of an arbitrary graph $F$. For the rest of the paper we focus on the case $r=2$ and also write $\mathcal{R}(\mathcal{C}):=\mathcal{R}_2(\mathcal{C})$ and $\mathcal{M}(\mathcal{C}):=\mathcal{M}_2(\mathcal{C})$. 
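To illustrate the criterion in the case $r=2$ (a quick check, not needed in the sequel): every subgraph $H\subseteq K_4$ satisfies $\frac{e(H)-1}{v(H)-1}\leq\frac{6-1}{4-1}=\frac{5}{3}<2$, so $K_4\notin\mathcal{R}(\mathcal{C})$, i.e.\ $K_4$ edge-decomposes into two forests, whereas $K_5-e$ itself has $\frac{e(K_5-e)-1}{v(K_5-e)-1}=\frac{9-1}{5-1}=2$ and hence $K_5-e\in\mathcal{R}(\mathcal{C})$.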
Given the aforementioned relation between $\mathcal{M}(\mathcal{C})$ and $5$-critical graphs, the latter of which are completely described (in the language of \emph{constructibility}) by the well-known H\'ajos construction~\cite{HJS} originating in the single base graph $K_5$, one might suspect that a similar reduction to base graphs is possible for $\mathcal{M}(\mathcal{C})$. Indeed, our first result does just that. Our two base graphs will be $K_5-e\in\mathcal{M}(\mathcal{C})$ and $K_4\vee K_4\in\mathcal{M}(\mathcal{C})$, the graph obtained by identifying two copies of $K_4$ at an edge; a quick computation based on Proposition \ref{prop:NW} shows that these are in $\mathcal{M}(\mathcal{C})$. \begin{thm}\label{thm:first} For every $G\in \mathcal{M}(\mathcal{C})$ there exists $n\in\mathbb{N}_0$ and a sequence $G_k$ of minimal Ramsey graphs for $\mathcal{C}$ such that $$\{K_5-e, K_4\vee K_4\}\ni G_0\prec G_1\prec\ldots\prec G_{n}=G,$$ where $\prec$ denotes the minor relation. In fact, for every $k\in [n]$ one can take $G_{k-1}$ to be an arbitrary minimal Ramsey subgraph (for $\mathcal{C}$) of the Ramsey graph (for $\mathcal{C}$) obtained from $G_k$ by contracting an arbitrary edge that belongs to at most one triangle in $G_{k}$. \end{thm} As we shall show, the contraction of an edge, which is in at most one triangle, preserves the Ramsey property of a Ramsey-graph for $\mathcal{C}$, whence a minimal Ramsey-subgraph can be found. The theorem guarantees that continuing the reduction in this way necessarily results in $K_5-e$ or $K_4\vee K_4$. By combining \ref{prop:NW} with \ref{thm:first} we therefore obtain: \begin{cor} Every graph $G$ with $e(G)\geq 2v(G)-1$ contains one of $K_5-e$, $K_4\vee K_4$ as a minor. \end{cor} Upon reinterpretation of Theorem \ref{thm:first}, every $G\in\mathcal{M}(\mathcal{C})$ can be obtained by starting with one of the two base graphs and recursively splitting a vertex of a suitable supergraph. A concrete description of the process would result in an algorithm constructing all minimal Ramsey-graphs for $\mathcal{C}$. Traditionally, for graphs $H$ such extensions were done by means of \emph{signal senders}, i.e. non-Ramsey graphs $G$ with two special edges $e$ and $f$, which attain same (respectively distinct) colours in every $H$-free colouring, which were then used to establish infiniteness of $\mathcal{M}(H)$ and much more, see e.g.~\cite{BEL} and~\cite{BNR}. However, it follows from an extension of Theorem \ref{thm:NW} by Reiher and Sauermann~\cite{RS} that no (positive) signal senders for $\mathcal{C}$ can exist: indeed, given a graph $G$ that edge-decomposes into two forests, for any choice of $e$ and $f$ one finds an edge-decomposition with $e$ and $f$ belonging to different colour classes. Instead, one may prove infiniteness for $\mathcal{M}(\mathcal{C})$ by noting (by an argument similar to that in~\cite{ADOV}) that a $4$-regular graph of girth $g$ (which is known to exist by~\cite{ESA}) must contain a minimal Ramsey graph for cyclicity, where the monochromatic cycles are of length $\geq g$. Our second result provides a much simpler way to make progress towards this aim by describing three entirely constructive ways to enlarge a graph in $\mathcal{M}(\mathcal{C})$ that allow one to track its structure; note that the first increases the number of vertices by $1$, while the other two increase it by $2$. 
\begin{thm}\label{thm:second} If $G\in\mathcal{M}(\mathcal{C})$, then also $G^*\in\mathcal{M}(\mathcal{C})$, where $G^*$ is a larger graph obtained from $G$ by applying one of the following three constructions: \begin{enumerate} \item Given a $2$-path $uvw$ in $G$, do the following: Introduce a new vertex $x$. Join $x$ to each of $u, v$ and $w$. Then delete edge $vw$. \item Given an edge $vw$ in $G$, do the following: Introduce a new vertex $x$. Join $x$ to both $v$ and $w$. Then apply construction (1) to the $2$-path $xvw$. \item Given a $2$-path $uvw$ in $G$, do the following: apply construction (1) to $uvw$ and $wvu$ at the same time, that is: Introduce new vertices $x, y$. Join both $x, y$ to each of $u, v, w$. Then delete edges $uv$ and $vw$. \end{enumerate} \end{thm} Note that one has $\chi(G)\leq 4$ for every graph $G\in\mathcal{M}(\mathcal{C})$ or, more generally, $\chi(G)\leq 2r$ for every graph $G\in\mathcal{M}_r(\mathcal{C})$. Indeed, any $n$-vertex graph $G\in\mathcal{M}_r(\mathcal{C})$ contains a subgraph $H$ with $\delta(H)\geq\chi(G)-1$, which at the same time satisfies $\delta(H)\leq d(H)\leq\frac{2[r(n-1)+1]}{n}<2r$, where $d(H)$ denotes the average degree of $H$. Our Theorem \ref{thm:second} now implies: \begin{cor}\label{cor:inf} Each of the three partition classes of $\mathcal{M}(\mathcal{C})$ corresponding to chromatic number $\chi = 2, 3, 4$, respectively, consists of infinitely many pairwise non-isomorphic graphs. \end{cor} In fact, since our first two constructions can be seen to preserve planarity, for each of $\chi = 2, 3$ infinitely many of the above graphs can be chosen to be planar. On the other hand, the smallest bipartite graph $G\in\mathcal{M}(\mathcal{C})$ is already $K_{3, 5}$ (obtained as $K_5-e\longrightarrow (K_{2, 3})^+\longrightarrow (K_{2, 4})^+\longrightarrow K_{3, 5})$. Since $e(G)>2v(G)-4$, any such must be non-planar.\\
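As a quick sanity check (this does not replace the proof of Theorem \ref{thm:second}), each of the three constructions preserves the count $e(G)=2v(G)-1$ that Proposition \ref{prop:NW} forces on members of $\mathcal{M}(\mathcal{C})$: construction (1) adds one vertex and $3-1=2$ edges, construction (2) adds two vertices and $2+3-1=4$ edges, and construction (3) adds two vertices and $6-2=4$ edges, so in each case $e(G^*)-2v(G^*)=e(G)-2v(G)=-1$.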
Our Theorem \ref{thm:second} now implies: \begin{cor}\label{cor:inf} Each of the three partition classes of $\mathcal{M}(\mathcal{C})$ corresponding to chromatic number $\chi = 2, 3, 4$, respectively, consists of infinitely many pairwise non-isomorphic graphs. \end{cor} In fact, since our first two constructions can be seen to preserve planarity, infinitely many of the above graphs with $\chi = 3, 4$ can each be chosen planar. On the other hand, the smallest bipartite graph $G\in\mathcal{M}(\mathcal{C})$ is already $K_{3, 5}$ (obtained as $K_5-e\longrightarrow (K_{2, 3})^+\longrightarrow (K_{2, 4})^+\longrightarrow K_{3, 5}$). Since $e(G)>2v(G)-4$, any such graph must be non-planar.\\ Note that the fact that $\chi(G)\leq 4$ for $G\in\mathcal{M}(\mathcal{C})$ is much unlike the situation for graphs $G\in\mathcal{M}(H)$ for $H=K_3$ or $H$ $3$-connected, where $\chi(G)$ becomes arbitrarily large (see~\cite{BNR}) and hence so does $\Delta(G)$. Despite the boundedness of $\chi(G)$ we are still able to show: \begin{cor}\label{cor:delta} For every $\Delta\geq 1$ there exists $G\in\mathcal{M}(\mathcal{C})$ with $\Delta (G)\geq\Delta$. \end{cor} Indeed, Corollary \ref{cor:delta} is a special case of a much more general theorem, which, as an exhaustive application of Theorem \ref{thm:second}, asserts that the structure of $\mathcal{M}(\mathcal{C})$ is actually quite rich. By a \emph{forest of cycles} we refer to a graph $F$ obtained, disregarding isolated vertices, by starting with a cycle and then recursively adjoining a further cycle by identifying at most one of its vertices with a vertex on the already existing cycles. Clearly there are forests of cycles of arbitrarily large maximum degree. Note that, thanks to every edge of $F$ belonging to precisely one cycle, we can $2$-edge-colour a forest of cycles $F$ in such a way that every cycle in $F$ is monochromatic while choosing each cycle's colour independently of that of any other cycle. Call any such colouring \emph{cycle-monochromatic}. \begin{thm}\label{thm:third} For every forest of cycles $F$ and every integer $n\geq 5$ satisfying $n\geq \left|F\right|$ there exists $G\in\mathcal{M}(\mathcal{C})$ with the following properties: \begin{enumerate} \item $\left|G\right|=n$ \item $F$ is a subgraph of $G$ \item Every cycle-monochromatic $2$-edge-colouring of $F$ extends to a $2$-edge-colouring of $G$ in which there are no monochromatic cycles other than those already in $F$. \end{enumerate} \end{thm} Note that the condition $n\geq \left|F\right|$ could be replaced by $n=\left|F\right|$ if the definition of a forest of cycles were relaxed so as to allow isolated vertices, but this variant would somewhat undermine the strength of the statement. Since, as is quickly seen, a forest of cycles $F$ on $n$ (non-isolated) vertices contains between $n$ and $\frac{3}{2}(n-1)$ edges, Theorem \ref{thm:third} also guarantees that any such $F$ ($n\geq 5$) extends to some $G\in\mathcal{M}(\mathcal{C})$ with $F$ as a spanning subgraph by adding only $k$ edges, where $\frac{1}{2}(v(F)+1)\leq k\leq v(F)-1$. Finally, we remark on a second corollary of Theorem \ref{thm:third}.
\begin{cor}\label{cor:equiv} For all $l \geq 4$ the family $\{C_3,\ldots, C_l\}$ is not Ramsey-equivalent to any proper subfamily of itself, that is, for every proper subfamily $\mathcal{F}_0\subset\{C_3,\ldots, C_l\}$ there exists a (minimal) Ramsey-graph for $\{C_3,\ldots, C_l\}$ which is not a Ramsey-graph for $\mathcal{F}_0$. \end{cor} Corollary \ref{cor:equiv} asserts that for every $l\geq 4$ the cycle family $\mathcal{F}:=\{C_3,\ldots, C_l\}$ and any proper subfamily $\mathcal{F}_0$ of $\mathcal{F}$ are Ramsey-separable (or Ramsey non-equivalent). These concepts were introduced in~\cite{SZZ} and subsequently studied in e.g.~\cite{FGLPS},~\cite{ARU} and~\cite{BL}. A central open problem in the area is whether some two distinct graphs are Ramsey equivalent. The existence of Ramsey graphs for cycles $C_k$ with girth $k$ (which follows from the Random Ramsey Theorem, see also~\cite{HRRS}) sorts out this question in the case of single cycles and also of cycle families $\mathcal{F}_0$ containing the longest cycle $C_l$ of $\mathcal{F}$. In contrast, Corollary \ref{cor:equiv} constructively provides a supply of separating Ramsey graphs for all proper $\mathcal{F}_0$.\\ The organization of the paper is as follows. In each of the following three sections we provide the proofs of Theorem \ref{thm:first}, Theorem \ref{thm:second} and Theorem \ref{thm:third}, respectively, and subsequently discuss the possibility of some generalizations in the concluding remarks. \section{Proof of Theorem \ref{thm:first}} Our proof of Theorem \ref{thm:first} relies on three lemmas. We state the elementary one first; it holds for any number of colours. \begin{lem}\label{lem:elem} Every $G\in\mathcal{M}_r(\mathcal{C})$ satisfies $r+1\leq\delta (G)\leq 2r-1$ and is also $2$-connected. \end{lem} \begin{proof} An immediate consequence of Proposition \ref{prop:NW} is that every $G\in\mathcal{M}_r(\mathcal{C})$ has size $e(G)=rv(G)-(r-1)$ and every subgraph $H\subseteq G$ has average degree $d(H)<2r$, which implies the upper bound for $\delta (H)$ (including the case $H=G$). For the lower bound for $\delta (G)$ suppose that $G$ contains a vertex $v$ of degree at most $r$. Colour the edges at $v$ with distinct colours; since now no monochromatic cycle can pass through $v$, it follows that $G-v$ itself must be Ramsey for $\mathcal{C}$, thus contradicting the minimality of $G$. For connectivity suppose that $G$ can be disconnected by removing at most one vertex, so that $G$ consists of two proper subgraphs $G_1, G_2$ which may or may not have a vertex in common. Since removing an edge from $G_1$ destroys the Ramsey property of the whole graph, we can fix an $r$-edge-colouring of $G_2$ without a monochromatic cycle. It follows that $G_1$ itself must be Ramsey for $\mathcal{C}$, again contradicting the minimality of $G$. \end{proof}
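Being Ramsey for $\mathcal{C}$ means that in every $2$-edge-colouring some colour class contains a cycle, i.e.\ the graph does not edge-decompose into two forests. For small graphs this can be checked by brute force. The following sketch is an illustration of ours (not part of the proofs), assuming the Python package networkx; it confirms, for instance, that $K_5-e$ is Ramsey for $\mathcal{C}$ while every one-edge-deleted subgraph is not.
\begin{verbatim}
# Illustration only: brute-force test of the Ramsey property for cyclicity
# with two colours; feasible for small graphs (2^{e(G)} colourings).
from itertools import product
import networkx as nx

def is_ramsey_for_cyclicity(G):
    """True iff every 2-edge-colouring of G has a monochromatic cycle,
    i.e. G does not edge-decompose into two forests."""
    edges = list(G.edges())
    for colouring in product((0, 1), repeat=len(edges)):
        classes = [nx.Graph(), nx.Graph()]
        for e, c in zip(edges, colouring):
            classes[c].add_edge(*e)
        if all(H.number_of_edges() == 0 or nx.is_forest(H) for H in classes):
            return False      # a decomposition into two forests was found
    return True

G = nx.complete_graph(5); G.remove_edge(0, 1)        # K_5 - e
print(is_ramsey_for_cyclicity(G))                    # True
for e in list(G.edges()):                            # every edge is critical
    H = G.copy(); H.remove_edge(*e)
    assert not is_ramsey_for_cyclicity(H)
\end{verbatim}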
In the following we assume that $r=2$. The following lemma asserts that the contraction of certain edges preserves the Ramsey property for cyclicity. \begin{lem}\label{contract} If $G\in\mathcal{R}(\mathcal{C})$, then $G/e\in\mathcal{R}(\mathcal{C})$, where $G/e$ is the graph obtained from $G$ by contracting an arbitrary edge $e\in E(G)$ that lies in at most one triangle. \end{lem} \begin{proof} Let $e$ be as above and fix a $2$-edge-colouring of $G/e$.\\ \emph{Case 1.} If $e$ belongs to no triangle in $G$, then a $2$-edge-colouring of $G/e$ induces a $2$-edge-colouring of $G-e$, and any monochromatic cycle in $G-e$ induces a monochromatic cycle in $G/e$. If there is no monochromatic cycle in $G-e$, then, by the Ramseyness of $G$, rejoining $e$ produces a monochromatic cycle irrespective of its colour. So $G-e$ must contain both a blue and a red path joining the vertices of $e$. Note that since these are edge-disjoint, at least one of the paths must have length at least $3$, as otherwise $e$ would be a chord of a four-cycle. Hence there is a monochromatic cycle in $G/e$.\\ \emph{Case 2.} If $e$ belongs to one triangle in $G$, then a $2$-edge-colouring of $G/e$ induces a $2$-edge-colouring of $G-e$ with the other two triangle edges in the same colour. If $G-e$ has no monochromatic cycle, proceed as above. Suppose $G-e$ has a monochromatic cycle. If it does not use both of the other edges of the triangle containing $e$, then it induces a monochromatic cycle in $G/e$. If the cycle does use both, so that $e$ is a chord of the cycle, then the cycle must have length at least $5$, since $e$ is not a chord of a four-cycle. But then again there is a path of length at least $3$ joining the vertices of $e$. Hence there is a monochromatic cycle in $G/e$. This completes the proof. \end{proof} Consequently, for graphs with every edge in at most one triangle, e.g.\ those of girth $\geq 4$, the property of being Ramsey for cyclicity is stable under arbitrary edge-contractions. Note that we could have dealt with Case 2 computationally by invoking Proposition \ref{prop:NW} (thus even obtaining that for $e$ in one triangle the Ramsey-graph $G/e$ is minimal whenever $G$ is), but a constructive proof sheds more light on the subject matter. \begin{lem}\label{attached} Any $2$-connected graph $G$ with every edge contained in at least two triangles satisfies $e(G)\geq 2v(G)$, unless $v(G)\leq 6$. \end{lem} \begin{proof} We start with two simple observations:\\ (1) Since every edge of $G$ is a chord of a $4$-cycle, we must have $\delta (G)\geq 3$. Note that wlog.\ we can assume that equality holds, because if $\delta (G)\geq 4$, then $e(G)\geq 2v(G)$ follows by the Handshaking Lemma. Suppose therefore that there is $v\in G$ with $d(v)=3$.\\ (2) Observe further that every vertex $v\in G$ with $d(v)=3$ necessarily lies in a $K_4$ in $G$. This is because each of the three edges incident to $v$ must be a chord of a $C_4$, which, due to $d(v)=3$, is necessarily spanned by the other two neighbours of $v$.\\ Now fix both a $v\in G$ with $d(v)=3$ and a $K:=K_4\subset G$ with $v\in K$.\\ \emph{Remark.} At this stage it is clear that the two base graphs $K_5-e$ and $K_4\vee K_4$ are the only graphs $G$ with $v(G)<7$, $\delta (G)=3$ and every edge a chord of a $4$-cycle: this is clear when $v(G)=5$, and also when $v(G)=6$, since then $K_4\subset G$ with precisely $5$ more edges to build a further $K_4$ housing the remaining two vertices.
(Hence the two graphs also show that the bound of the lemma fails when $v(G)<7$.)\\ Suppose $K$ is \emph{strongly attached} in $G$, that is, that some vertex $z$, say, outside of $K$ in $G$ is adjacent to at least two vertices $u, w$ in $K$. We choose the reduction of $G$ so that $G'$ also satisfies the hypothesis of the lemma with $v(G')=v(G)-1$ and $e(G')\leq e(G)-2$: Obviously $v$ is not adjacent to $z$, so $v\neq u$ and $v\neq w$. Let $t$ denote the fourth vertex in $K$; it may or may not be adjacent to $z$. Obtain $G'$ from $G$ by deleting $v$ and its three incident edges, and also adding the edge between $t$ and $z$, if it does not exist already, so as to ensure that every edge of $G'$ is in at least two triangles. Note that $G'$ remains $2$-connected since clearly none of its vertices is a cutvertex.\\ Else, if $K$ is \emph{weakly attached} in $G$, that is, if every vertex of $G$ outside $K$ is adjacent to at most one vertex in $K$, consider the following. If $K$ does not contract to a cutvertex, then $G':=G/K$ clearly satisfies the hypothesis of the lemma with $v(G')=v(G)-3$ and $e(G')=e(G)-6$. If $K$ does contract to a cutvertex $v$ in $G/K$, let $V_1, \ldots, V_k$ denote the vertex classes of the $k\geq 2$ connected components of $G/K-v$. Note that since $K$ is weakly attached we have that $n_i:=\left|V_i\right|\geq 3$ and that each of the subgraphs $G_i:=G[V_i\cup V(K)]$ satisfies the hypothesis of the lemma with $n_i+4=\left|V_i\cup V(K)\right|<v(G)$, so by induction we obtain \begin{eqnarray*} e(G) & = & e(G_1)+\ldots + e(G_k)-(k-1)e(K)\geq 2(n_1+4)+\ldots + 2(n_k+4)-6k+6\\ & = & 2(n_1+\ldots +n_k)+8k-6k+6 = 2(v(G)-4)+2k+6\geq 2v(G) \end{eqnarray*} Note that the result now easily follows by induction on $v(G)$, provided it holds true in the cases $v(G)=7, 8, 9$:\\ For the cases $v(G)=8, 9$, consider as before a $K:=K_4\subset G$. If $K$ can be chosen strongly attached, we successfully reduce to the cases $v(G)=7, 8$. If not, then contracting a weakly attached $K$ necessarily results in either $K_5-e$ or $K_4\vee K_4$, with the contraction having occurred at one of its high degree vertices (else a strongly attached $K_4$ in the reduced graph must have already been strongly attached in $G$). Since each of the low degree vertices in the reduced graph is contained in a $K_4$ as well, the same $K_4$'s must have existed in $G$ prior to the contraction of $K$, or $K$ could not have been weakly attached. Consequently, $K$ intersects one of those $K_4$'s in a cutvertex, thus contradicting $2$-connectedness.\\ The case $v(G)=7$ is more involved, as we cannot reduce it to a smaller graph as in the previous cases: Suppose there exists a $2$-connected graph $G$ on $7$ vertices, with every edge occurring as the chord of a $4$-cycle, which satisfies $e(G)<2v(G)=14$. We now force a contradiction in several steps: Fix a $K:=K_4$ in $G$ and let $v, u_1, u_2$ denote the $3$ vertices of $G$ that are not vertices of $K$. Since $G$ is $2$-connected, at least $2$ vertices of $K$ are incident to edges not in $K$, hence have degree $\geq 4$ in $G$. If any of these vertices has degree $\geq 5$, then the degree sum of $G$ is $\geq 5\cdot 3 + 4+5=24$.
If, however, all of these have degree $4$, then there must be at least $3$ vertices of degree $4$ (since we cannot have an odd number of odd-degree vertices), in which case the degree sum of $G$ is $\geq 4\cdot 3+3\cdot 4=24$. In any case, $G$ has at least $12$ edges. Hence, as $e(G)\leq 13$, $G$ is obtained from $K\cup\{v, u_1, u_2\}$ by adding $6$ or $7$ edges. Note that since the degrees of $v, u_1, u_2$ are all $\geq 3$, but at most $7$ of the added edges can join $v, u_1, u_2$ to the vertices of $K$, the induced subgraph $H$ of $G$ on the vertices $v, u_1, u_2$ contains at least $2$ edges. Wlog.\ suppose the edges are $u_1 v$ and $vu_2$, and further let $w$ be a vertex of $K$ adjacent to $v$. Note that at this stage there are at most $4$ more edges to add. We claim that $u_1, u_2, v, w$ must form the vertices of a further $K_4$ in $G$. In that case, $G$ is obtained by adding at most one edge to the graph obtained by identifying $K$ with a further copy of $K_4$ at the vertex $w$. This is a contradiction, because if we do not add the edge, $G$ will not be $2$-connected, but if we do add the edge, it will not be a chord of a $4$-cycle, because its end vertices will only have $w$ as a common neighbour. If $d(v)=3$, we are done, because $v$ is then contained in a $K_4$ with the remaining vertices necessarily given by the neighbours $u_1, u_2, w$ of $v$. If $d(v)\geq 4$, note that we must have $d(u_1)=3$ and $d(u_2)=3$. This follows since $2$ of $u_1, u_2, v$ must have degree $3$, as otherwise $e(G-K)\geq(3+4+4)-e(H)\geq(3+4+4)-3>7$, a contradiction. Hence, both $u_1$ and $u_2$ must lie in a $K_4$ (containing $v$) in $G$. Note that they must lie in the same $K_4$, as otherwise the $K_4$ of $u_1$ and $v$ would take up $\geq 3$ of our remaining edges, thus leaving $\leq 1$ to be incident to $u_2$, in which case $d(u_2)\leq 2$, a contradiction. Hence $u_1, u_2, v$ lie in a $K_4$ in $G$; in particular $u_1$ and $u_2$ are adjacent. This leaves $\leq 3$ edges to build up $G$. Assume, towards the final contradiction, that $w$ is not the fourth vertex of that $K_4$. Then, as $d(u_1)=3$ and $d(u_2)=3$, $w$ cannot be adjacent to $u_1$ or $u_2$. Since, however, the edge $wv$ is a chord of a $4$-cycle, there must be two further vertices in $K$ that are adjacent to $v$. But then there remains at most one further edge to be incident to one of $u_1$ or $u_2$, in which case either $d(u_1)=2$ or $d(u_2)=2$, a contradiction. \end{proof} We are now ready to prove Theorem \ref{thm:first}. \begin{proof} Given $G\in\mathcal{M}(\mathcal{C})$, apply Lemma \ref{contract} to a suitable edge and take a minimal Ramsey-subgraph of the resulting Ramsey-graph. Repeat this process until we end up with a graph $G_0$ with the property that every edge of $G_0$ is in at least two triangles. Since $G_0\in\mathcal{M}(\mathcal{C})$, so that $e(G_0)=2v(G_0)-1$, we must have $v(G_0)\leq 6$ by Lemma \ref{attached}. The only such possibilities allowing no further contractions are $K_5-e$ and $K_4\vee K_4$ (the other such graphs on $6$ vertices all reduce to $K_5-e$ as remarked above).
\end{proof} \section{Proof of Theorem \ref{thm:second}} We partition Theorem \ref{thm:second} into three lemmas, each governing the effect of the respective construction on a graph in $\mathcal{M}(\mathcal{C})$, and then show how they jointly imply Corollary \ref{cor:inf}. \begin{lem}\label{lem:path} If $G\in\mathcal{M}(\mathcal{C})$, then $G^*\in\mathcal{M}(\mathcal{C})$, where $G^*$ is the graph obtained from $G$ by applying construction (1) to an arbitrary $2$-path in $G$. \end{lem} \begin{proof} The construction increases the number of vertices by $1$ and the number of edges by $2$, so $G^*$ retains the correct global density for membership in $\mathcal{M}(\mathcal{C})$. Now, let $H^*\subset G^*$ be a proper subgraph and suppose wlog.\ that it uses the new vertex, so that it uses at most two new edges. Then there exists a proper subgraph $H\subset G$ with $e(H^*)\leq e(H)+2$ and $v(H^*)=v(H)+1$, so $$\frac{e(H^*)-1}{v(H^*)-1}\leq\frac{(e(H)+2)-1}{(v(H)+1)-1}=\frac{(e(H)-1)+2}{v(H)}<\frac{2(v(H)-1)+2}{v(H)}=2.$$ \end{proof}
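As a concrete sanity check (ours, not part of the proof; it assumes the Python package networkx), applying construction (1) to $K_5-e$ yields a $6$-vertex graph with $11=2\cdot 6-1$ edges, and a brute-force search over all $2^{11}$ colourings confirms that it admits no $2$-edge-colouring into two forests, as the lemma predicts.
\begin{verbatim}
# Illustration only: construction (1) applied to K_5 - e, then a brute-force
# check that the result has a monochromatic cycle in every 2-edge-colouring.
from itertools import product
import networkx as nx

def two_forest_colouring_exists(G):
    edges = list(G.edges())
    for col in product((0, 1), repeat=len(edges)):
        cls = [nx.Graph(), nx.Graph()]
        for e, c in zip(edges, col):
            cls[c].add_edge(*e)
        if all(H.number_of_edges() == 0 or nx.is_forest(H) for H in cls):
            return True
    return False

G = nx.complete_graph(5); G.remove_edge(3, 4)        # K_5 - e
G.add_edges_from([(5, 3), (5, 0), (5, 4)])           # construction (1) on 3-0-4
G.remove_edge(0, 4)
print(G.number_of_nodes(), G.number_of_edges())      # 6 11
print(not two_forest_colouring_exists(G))            # True: Ramsey for C
\end{verbatim}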
Note that Lemma \ref{lem:path} alone provides a constructive proof of the existence of infinitely many non-isomorphic minimal Ramsey-graphs for cyclicity. Indeed, applying it to $K_5-e$ in one of two possible ways (up to isomorphism) results in two further minimal Ramsey-graphs on $6$ vertices, one of which is the edge-maximal planar graph with one edge removed. \begin{lem}\label{lem:diam} If $G\in\mathcal{M}(\mathcal{C})$, then $G^{*}\in\mathcal{M}(\mathcal{C})$, where $G^{*}$ is the graph obtained from $G$ by applying construction (2) to an arbitrary edge in $G$. \end{lem} \begin{proof} While Lemma \ref{lem:diam} could be proved similarly to Lemma \ref{lem:path} via Proposition \ref{prop:NW}, it is possible to provide an exhaustive graph-chasing proof, which may be of independent interest as it works in more generality. Note that the effect of construction (2) is the replacement of an edge by the diamond graph, with the non-adjacent vertices taking the place of the ends of the original edge. We prove the lemma with the diamond replaced by any graph $D$ which admits two non-adjacent \emph{contact vertices} $c, d$ with the property that in any $2$-edge-colouring of $D$ without a monochromatic cycle there is a monochromatic path joining $c$ and $d$ (note that a graph in $\mathcal{M}(\mathcal{C})$ with an edge $cd$ removed already has this property).
In particular, we prove the following claim.\\ \textsl{Claim.} \emph{If $G\in\mathcal{R}(\mathcal{C})$, then $G^{*}\in\mathcal{R}(\mathcal{C})$, where the graph $G^{*}$ is obtained from $G$ via parallel composition of $G-e$ with $D$ (that is, with the contact vertices of $D$ taking the place of the ends of $e$). What is more, if $G\in\mathcal{M}(\mathcal{C})$ and $D$ is edge-minimal with the above property (given fixed contact vertices), then $G^{*}\in\mathcal{M}(\mathcal{C})$ as well.}\\ \textsl{Proof of Claim.} Fix a blue-red colouring of the edges of $G^{*}$. This restricts to a colouring of $G-e$; if this admits a monochromatic cycle, then so does $G^{*}$. Otherwise, since $G\in\mathcal{R}(\mathcal{C})$, there is both a red and a blue path in $G-e$ joining the contact vertices. One of these forms a monochromatic cycle in $G^{*}$ together with the monochromatic path in $D$, which must exist by definition whenever there is not already a monochromatic cycle in $D$.\\ Now suppose that both $G$ and $D$ are chosen minimal, in which case both clearly have minimum degree at least $2$. Given any edge $f$ of $G^{*}$ (so $f\neq e$), we show that in some colouring of $G^{*}-f$ there is no monochromatic cycle. If $f$ is an edge of $D$, such a colouring is obtained by fixing both a cycle-free colouring of $G-e$ and a cycle-free colouring of $D-f$ without a monochromatic path joining the contact vertices, and then inserting the coloured $D-f$ into the coloured $G-e$. If $f$ is an edge of $G$, fix both a cycle-free colouring of $G-f$ and a cycle-free colouring of $D$ with precisely one monochromatic path joining the contact vertices. If the path does not have the colour of $e$ in $G-f$, switch the colours in $D$. Now remove $e$ from the coloured $G-f$ and insert the coloured $D$. In the colouring of $G^{*}-f$ thus obtained there cannot be a monochromatic cycle. Suppose otherwise; then any monochromatic cycle would need to contain the whole monochromatic path in $D$ (as $G-f-e$ is coloured cycle-free) and, since the contact vertices are non-adjacent, they would need to be joined by a path in $G-f-e$ of the colour of the path in $D$ and of length at least $2$. But together with $e$ any such path would form a monochromatic cycle in $G-f$. Contradiction. \end{proof} \begin{lem} If $G\in\mathcal{M}(\mathcal{C})$, then $G^{*}\in\mathcal{M}(\mathcal{C})$, where $G^{*}$ is the graph obtained from $G$ by applying construction (3) to an arbitrary $2$-path in $G$. \end{lem} \begin{proof} Let $G^*$ be the graph obtained from $G\in\mathcal{M}(\mathcal{C})$ by applying construction (3) to some path $uvw$. Since $e(G^*)=e(G)+4$ and $v(G^{*})=v(G)+2$, we have $G^*\in\mathcal{R}(\mathcal{C})$. To prove minimality, suppose that an edge $e$ is removed from $G^{*}$. Suppose first that $e\notin E(G)$. Whether $e$ is incident to $u$ or $w$, or incident to $v$, proceed analogously to the respective case in the proof of the previous lemma. Otherwise, if $e\in E(G)$, put a $2$-colouring on $E(G-e)$ and consider the colours of $uv$ and $vw$. Give the edges $ux, xv$ the colour of $uv$ and $uy$ the other colour. Also, give the edges $vy, yw$ the colour of $vw$ and $xw$ the other colour. If the $2$-colouring of $E(G-e)$ admits no monochromatic cycles, then neither does the $2$-colouring of $E(G^{*}-e)$ so obtained. \end{proof} Finally, we are able to prove Corollary \ref{cor:inf}.
\begin{proof} In order to obtain infinitely many graphs $G\in\mathcal{M}(\mathcal{C})$ with $\chi (G)=4$, fix a copy of $K_4$ in $K_5-e$ and let $e$ be an edge not belonging to that copy; now simply replace $e$ by a diamond, then replace an edge of that diamond by a diamond, and so on. In order to obtain infinitely many $G\in\mathcal{M}(\mathcal{C})$ with $\chi (G)=3$, note that replacing every edge of any graph in $\mathcal{M}(\mathcal{C})$ by a diamond results in precisely the graphs required. Finally, in order to obtain infinitely many graphs $G\in\mathcal{M}(\mathcal{C})$ with $\chi (G)=2$, start with $G_0:=K_{3, 5}\in\mathcal{M}(\mathcal{C})$ and repeatedly apply the following extension: apply construction (3) to some path $uvw$ in $G_i$ and let $x, y$ denote the two new vertices. Now apply construction (3) to the path $xvy$, thus producing two further vertices $x', y'$. Note that the resulting graph $G_{i+1}\in\mathcal{M}(\mathcal{C})$ is bipartite: given a proper $2$-colouring of $V(G_i)$, give $x, y$ the colour of $v$ and $x', y'$ the other colour. (Alternatively, note that any odd cycle which may arise in the intermediate graph must use one of the edges $xv, yv$ and is thus destroyed in the construction of $G_{i+1}$.) \end{proof}
\section{Proof of Theorem \ref{thm:third}} \begin{proof} The proof is by induction on $n\geq 5$ and makes heavy use of constructions (1) and (2) as in Theorem \ref{thm:second}. For $n=5$ the result needs to be verified manually, and indeed $G=K_5-e$ works for all forests of cycles $F$ with $3\leq v(F)\leq 5$. Let $x, y$ denote the non-adjacent vertices of $K_5-e$ and let $a, b, c$ denote the other three. \begin{enumerate} \item If w.l.o.g.\ $F$ is the red-coloured triangle $abc$, colour the edges $ay$ and $cx$ red and the remaining path $a-x-b-y-c$ blue. \item If w.l.o.g.\ $F$ is the red-coloured $4$-cycle $a-b-c-x$, colour the edge $cy$ red and the remaining path $x-b-y-a-c$ blue. \item If w.l.o.g.\ $F$ is a red-coloured $C_5$, colour the remaining $4$-path blue. \item If $F$ is a bowtie and the two triangles are of the same colour, colour the remaining $3$-path with the opposite colour.
\item If $F$ is a bowtie and the two triangles are of distinct colours, colour the remaining edges using each colour at least once, in such a way that no new monochromatic cycle arises (a quick check shows that this is possible). \end{enumerate} The aim in the induction step is to carefully build graphs in $\mathcal{M}(\mathcal{C})$ containing some prescribed forest of cycles from those containing a suitable smaller forest of cycles, as provided by the induction hypothesis, while maintaining the possibility of extending the edge-colouring without creating new monochromatic cycles.\\ \textsl{Step 1 (Creating new space).} To begin with, we reduce the proof from $n\geq \left|F\right|$ to $n=\left|F\right|$. Fix $F$ and suppose $G\in\mathcal{M}(\mathcal{C})$ with $v(G)=v(F)$ is as in the statement of the theorem. We want to enlarge $G$ by one vertex while maintaining the containment of $F$ and the colouring extension property: Pick a vertex $v\in G$ with $d(v)=3$. Since $v(G)=v(F)$, such a vertex lies on precisely one cycle $C$ in $F$. Hence it is incident to an edge $vw$ which is not part of $C$ (even though $w$ may be); if $v$ is not in $F$, pick $vw\notin E(F)$, too. Further pick $u\in C$ such that $uv$ is an edge of $C$. Apply (1) to the path $u-v-w$, thus deleting the edge $vw$ and creating a new vertex $x$ adjacent to all of $u, v, w$. Note that by removing the edge $vw$ we have not destroyed any cycle of $F$, since thanks to $d(v)=3$ the edge $vw$ is not an edge of $F$. Now given any $2$-edge-colouring of $G-F$ (or $G-F-vw$, respectively) as in the statement of the theorem, extend it by giving $xu$ and $xw$ arbitrary opposite colours and giving $xv$ the colour opposite to that of $C$. If we have thus created a new monochromatic cycle, it has to pass through $x$, and hence, by the choice of colouring, through $v$. This, however, is impossible since $v$ has maintained $d(v)=3$ throughout the construction. For the rest of the proof we can assume that $F$ is a spanning subgraph of the minimal Ramsey graph that contains it.\\ \textsl{Step 2 (Growing new trees).} We show how to extend the result for $F$ to that for $F$ together with a disjoint triangle. Let $G\in\mathcal{M}(\mathcal{C})$ with $F\subset G$ be as in the statement of the theorem, now without loss of generality with $v(F)=v(G)$. Create new space in $G$ as in Step 1, thus obtaining $G'$ with $v(G')=v(G)+1$ and the colouring property with respect to $F$, and fix the special edge-colouring of $G'-F$. Consider, as in Step 1, the edge $xv$: replace it by a diamond graph $D$ as in extension (2). Give the remaining, so far uncoloured, triangle in $D$, which is disjoint from $F$, a monochromatic colouring (this triangle is the new tree). If this is the colour of $xv$, give the two edges in $D$ now incident to $v$ distinct colours. If this is not the colour of $xv$, give the two edges in $D$ now incident to $v$ the colour of $xv$.\\ What we have achieved so far is that it suffices to prove the result for spanning trees of cycles. Note that any such can be obtained recursively by (i) starting with a triangle, (ii) enlarging it to the required size (while it is a `leaf' of the tree of cycles), (iii) creating the required number of branches (that is, pairwise disjoint triangles), and repeating the procedure for each of the new branch triangles in turn.
To complete the proof it therefore merely suffices to show how to enlarge cycles in $F$ irrespective of their distribution of attached branches, how to create a new triangle at a given vertex of degree $2$ in $F$ (\emph{extending an existing branch}), and finally, how to create a new triangle at a vertex which is already used by more than one triangle (\emph{creating a new branch}).\\ \textsl{Step 3 (Enlarging existing cycles).} Let $C$ be a cycle in $F$ to be enlarged and let $G\in\mathcal{M}(\mathcal{C})$ be as in the statement of the theorem for $F$. Let $u-v-w$ be any $2$-path in $C$. Apply extension (1) as in Theorem \ref{thm:second}, thus producing a new vertex $x$ adjacent to all of $u, v, w$. The cycle $C$ is now enlarged in the resulting graph $G^+$, since $vw$ has been replaced by the $2$-path $v-x-w$. Any cycle-monochromatic $2$-edge-colouring $c$ of the enlarged forest $F^+$ now induces a cycle-monochromatic $2$-edge-colouring of $F$; pick a respective $2$-edge-colouring of $G-F$ and extend it to a respective colouring of $G^+-F^+$ by giving the edge $xu$ the colour opposite to that of $xv$ in $c$.\\ \textsl{Step 4 (Extending existing branches).} Let $F\subset G$ be as before, and suppose that at $v\in F$ with $d(v)=2$ in $F$ a new triangle branch is to be created. Let $vw$ denote an edge not in $F$. Replace it by a diamond $D$, as before, and give the two edges in $D$ incident to $w$ distinct colours. Verifying the colouring property is now analogous to Step 2.\\ \textsl{Step 5 (Creating new branches).} Suppose that $u$ is a vertex of $F\subset G$ which lies in at least two triangles in $F$, and that a further triangle containing $u$ is to be created. Fix one of the triangles, which without loss of generality is a leaf of the tree of cycles, and label its remaining vertices $v$ and $w$. Apply (1) to $u-v-w$, thus destroying(!) one of the already existing triangles by removing the edge $vw$, but instead creating the two new triangles $uvx$ and $uwx$, sharing the edge $xu$. Now apply (1) again to the path $u-v-x$, thus destroying the triangle $uvx$ by removing the edge $vx$, but creating the new triangle $uvx'$, which is edge-disjoint from the triangle $uwx$, and the extra edge $xx'$. Any cycle-monochromatic $2$-edge-colouring $c$ of the enlarged forest $F^+$ now induces a cycle-monochromatic $2$-edge-colouring of $F$; pick a corresponding special $2$-edge-colouring of $G-F$ and extend it to a special colouring of $G^+-F^+$ by giving the edge $xx'$ the colour opposite to that of the triangles $uwx$ and $uvx'$ if these two have the same colour in $c$, and an arbitrary colour otherwise. This completes the proof. \end{proof}
\section{Concluding Remarks} In Theorem \ref{thm:first} we proved that every $G\in\mathcal{M}(\mathcal{C})$ can be obtained by starting with one of the two base graphs and recursively splitting a vertex of a suitable supergraph. Any concrete description of this kind would shed light on how to constructively increase the girth while maintaining Ramseyness. This may be regarded as a first step towards the construction of Ramsey graphs for fixed-length cycles $C_k$ with girth precisely $k$ (see e.g.~\cite{HRRS}; to the best of our knowledge no explicit construction is known). We therefore raise the weaker question: \begin{question} For any $g\geq 3$, does there exist $G\in\mathcal{M}(\mathcal{C})$ with girth $g$? \end{question} We also note how Lemma \ref{attached} implies that no minimal Ramsey-graph for $K_3$ is a minimal Ramsey-graph for $\mathcal{C}$ (since in the former every edge is in at least two triangles). It would therefore be interesting to work out what additional conditions on $G\in\mathcal{R}(\mathcal{C})$ ensure that $G\in\mathcal{R}(K_3)$. This might possibly be achieved by approximating the class $\mathcal{R}(K_3)$ by the classes $\mathcal{R}(\mathcal{C}_{\leq l})$ for fixed $l\geq 3$. Constructing graphs which are minimal with this property is probably hard, as removing an edge and taking a good colouring gives rise to highly chromatic graphs of high girth (for which a non-recursive hypergraph-free construction was given only recently~\cite{A}). Note that, similarly, our remark in the introduction allows for a simple construction of $G\in\mathcal{R}_r(\mathcal{C}_{\text{odd}\leq l})$: just take $\chi (G)\geq 2^r+1$ and $g(G)\geq l$.\\ Another line of study relates to the fact that a $2$-edge-colouring of a Ramsey-graph for $K_3$ admits multiple monochromatic copies of $K_3$. As a step in this direction it therefore seems plausible to consider graphs with the approximative property that every $2$-edge-colouring admits either two disjoint monochromatic copies of $K_3$ in the same colour or a monochromatic cycle of length $\geq 4$. It is easy to see by case distinction that $G^{+}$, the graph obtained from some $G\in\mathcal{R}(\mathcal{C})$ by joining a new vertex to every vertex of $G$, has this property.\\ With regard to the existence of multiple monochromatic cycles, we observe that, thanks to a known decomposition result into pseudoforests, see e.g.~\cite{PR}, one could in principle work out a theorem similar to ours for graphs for which every $2$-edge-colouring admits a monochromatic connected subgraph containing at least two cycles. More generally, for $k\geq 1$ set $\mathcal{C}_k:=\{G:\; G\;\text{is connected and contains at least}\; k\;\text{cycles}\}$ and $m_k(G):=\frac{e(G)-1}{v(G)+k-2}$, excluding the trivial graphs. It is then easy to see that if $G$ contains a subgraph $H$ with $m_{k}(H)\geq r$, then $G$ is $r$-Ramsey for $\mathcal{C}_k$, and that if $G$ is minimal $r$-Ramsey for $\mathcal{C}_k$, then $m_k(H)<r$ for every proper subgraph $H\subset G$. Crucial, however, to the characterization of graphs in $\mathcal{M}(\mathcal{C}_k)$ is the validity of the converse, which we do not know for $k\geq 3$.
Indeed, with three available cycles allowing for circular arrangements, which create new cycles, a more complicated configuration may be needed in order for the Ramsey property to be broken by the removal of any single edge. Instead, it seems more conceivable that the $+k$ in the density parameter is replaced by a larger quantity $f(k)$. To make this precise, for every $k\in\mathbb{N}$ let $f(k)$ denote the smallest natural number, if one exists, with the property that, for every integer $r\geq 1$, any graph $G$ satisfying $e(G)\leq r(v(G)+f(k)-2)$ edge-decomposes into at most $r$ subgraphs containing strictly fewer than $k$ (not necessarily edge-disjoint) cycles each. Note that $f$ is required to depend on $k$ only. If $f(k)$ exists, then it is given by (the integer ceiling of) the maximum of $\frac{e(G)}{r_k(G)}-v(G)+2$ taken over all graphs, where $r_k(G)$ denotes the size of a smallest edge-decomposition of $G$ into subgraphs with at most $k-1$ cycles. By the above, we know that $f(1)=1$ and $f(2)=2$. For $k\geq 3$ note that $f(k)\geq k$ holds by considering the chain of $k-1$ copies of the triangle with two consecutive ones each identified at a vertex. We observe that for every $k$ the following are then equivalent: \begin{enumerate} \item $f(k):=\max\left\{\frac{e(G)}{r_k(G)}-v(G)+2:\; v(G)\geq 1\right\}<\infty$ \item $\forall r\in\mathbb{N}\backslash\{1\}$: $\mathcal{R}_r(\mathcal{C}_k)=\{G:\;\exists H\subseteq G:\; m_{f(k)}(H)\geq r\}$ \item $\forall r\in\mathbb{N}\backslash\{1\}$: $\mathcal{M}_r(\mathcal{C}_k)=\{G:\; m_{f(k)}(G)=r,\,\forall H\subset G, H\neq G:\; m_{f(k)}(H)< r\}$ \end{enumerate} \begin{question} For any $k\geq 3$, does $f(k)$ exist, that is, is $f(k)<\infty$? If so, what is $f(k)$? \end{question} Finally, we remark that cyclicity and $2$-connectivity are Ramsey equivalent, and also that odd cyclicity and $3$-chromaticity are Ramsey equivalent. Undoubtedly, our results could therefore be generalized both to higher connectivity and chromaticity and to multiple colours. We thank Dennis Clemens and Matthias Schacht for helpful comments. \end{document}
\begin{document} \preprint{V.M.} \title{Multiple--Instance Learning: Radon--Nikodym Approach to Distribution Regression Problem.} \author{Vladislav Gennadievich \surname{Malyshkin}} \email{[email protected]} \affiliation{Ioffe Institute, Politekhnicheskaya 26, St Petersburg, 194021, Russia} \date{November, 27, 2015} \begin{abstract} \begin{verbatim} $Id: DistReg1Step.tex,v 1.41 2015/12/02 11:00:50 mal Exp $ \end{verbatim} For the distribution regression problem, where a bag of $x$--observations is mapped to a single $y$ value, a one--step solution is proposed. The problem of random distribution to random value is transformed to random vector to random value by taking the distribution moments of the $x$ observations in a bag as the random vector. Then Radon--Nikodym or least squares theory can be applied, which gives a $y(x)$ estimator. The probability distribution of $y$ is also obtained; this requires solving a generalized eigenvalue problem, whose spectrum (not depending on $x$) gives the possible $y$ outcomes, while the $x$--dependent probabilities of the outcomes are obtained by projecting the distribution with fixed $x$ value (a delta--function) onto the corresponding eigenvectors. A library providing a numerically stable polynomial basis for these calculations is available, which makes the proposed approach practical. \end{abstract} \keywords{Distribution Regression, Radon--Nikodym} \maketitle \hbox{\small Dedicated to Ira Kudryashova} \section{\label{intro}Introduction} Multiple instance learning\cite{dietterich1997solving} is an important Machine Learning (ML) concept with numerous applications\cite{yang2005review}. In multiple instance learning a class label is associated not with a single observation but with a ``bag'' of observations. A closely related problem is the distribution regression problem, where a sample distribution of $x$ is mapped to a single $y$ value. Numerous heuristic methods have been developed on both the ML and the distribution regression sides; see \cite{zhou2004multi,szabo2014learning} for a review. As in any ML problem, the most important part is not so much the learning algorithm as the way the learned knowledge is represented. Learned knowledge is often represented as a set of propositional rules, a regression function, neural network weights, etc. In this paper we consider the case where knowledge is represented as a function of distribution moments. Recent progress in the numerical stability of high order moment calculations\cite{2015arXiv151005510G} allows moments of very high order to be calculated, e.g.\ up to hundreds in Ref. \cite{2015arXiv151101887G}, thus making this approach practical. Most distribution regression algorithms deploy a two--step type of algorithm\cite{szabo2014learning} to solve the problem. In our previous work \cite{2015arXiv151107085G} a two--step solution with knowledge representation in the form of a Christoffel function was developed. However, there exists a one--step solution to the distribution regression problem (random distribution to random value): convert each bag's observations to its moments, then solve the problem of random vector (the moments of the random distribution) to random value. Once this transition is made, an answer of least squares or Radon--Nikodym type from Ref. \cite{2015arXiv151005510G} can be applied and a closed form result obtained.
The distribution of outcomes, if required, can be obtained by solving a generalized eigenvalue problem: the matrix spectrum gives the possible $y$ outcomes, and the square of the projection of a bag distribution localized at a given $x$ onto an eigenvector gives the probability of each outcome. This matrix spectrum ideology is similar to the one we used in \cite{2015arXiv151107085G}, but it is more generic and not reducible to a Gauss quadrature. The paper is organized as follows: in Section \ref{christoffel1Step} the general theory of distribution regression is discussed and closed form results of least squares and Radon--Nikodym type are presented. Then in Section \ref{christoffel1StepNum} an algorithm is described and a numerical example is presented. In Section \ref{christoffeldisc} possible further development is discussed. \section{\label{christoffel1Step}One--Step Solution} Consider the distribution regression problem where a bag of $N$ observations of $x$ is mapped to a single outcome observation $y$, for $l=[1..M]$: \begin{eqnarray} (x_1,x_2,\dots,x_j,\dots,x_N)^{(l)}&\to&y^{(l)} \label{regressionproblem} \end{eqnarray} A distribution regression problem can have as its goal the estimation of $y$, the average of $y$, the distribution of $y$, etc., given a specific value of $x$. For further development we need an $x$ basis $Q_k(x)$ and some $x$ and $y$ measures. For simplicity, without reducing the generality of the approach, we assume that the $x$ measure is a sum over the $j$ index, $\sum_{j=1}^{N}$, the $y$ measure is $\sum_{l=1}^{M}$, and the basis functions $Q_k(x)$ are polynomials, $k=0..d_x-1$, where $d_x$ is the number of elements in the $x$ basis; a typical value of $d_x$ is below 10--15. Let us convert the problem ``random distribution'' to ``random variable'' into the problem ``vector of random variables'' to ``random variable''. The simplest way to obtain a ``vector of random variables'' from the $x_j^{(l)}$ distributions is to take their moments. The $<Q_k>^{(l)}$ then form this random vector: \begin{eqnarray} &&<Q_k>^{(l)}=\sum_{j=1}^{N} Q_k(x_j^{(l)}) \label{xmu} \\ &&\left(<Q_0>^{(l)},\dots,<Q_{d_x-1}>^{(l)}\right) \to y^{(l)} \label{Qregressionproblem} \label{momy} \end{eqnarray} Then (\ref{Qregressionproblem}) becomes a vector to value problem. Introduce \begin{eqnarray} Y_q&=&\sum_{l=1}^{M} y^{(l)} <Q_q>^{(l)} \label{Yq} \\ \left(G\right)_{qr}&=&\sum_{l=1}^{M} <Q_q>^{(l)} <Q_r>^{(l)} \label{Gramm} \\ \left(yG\right)_{qr}&=&\sum_{l=1}^{M} y^{(l)} <Q_q>^{(l)} <Q_r>^{(l)} \label{GrammY} \end{eqnarray} The problem now is to estimate $y$ (or the distribution of $y$) given an $x$ distribution, now mapped to a vector of moments $<Q_k>$ calculated on this $x$ distribution. Let us denote these input moments by $M_k$ to avoid confusion with the measures on $x$ and $y$. In the case we study the $x$ value is given, and for a state with exact $x$ the $M_k$ values are \begin{eqnarray} M_k(x)&=& N Q_k(x) \label{Mdist} \end{eqnarray} which means that all $N$ observations in a bag give exactly the same $x$ value. The problem now becomes standard: random vector to random variable. We have solutions of two types for this problem (see \cite{2015arXiv151005510G}, Appendix D): least squares $A_{LS}$ and Radon--Nikodym $A_{RN}$.
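The quantities (\ref{xmu}), (\ref{Yq}), (\ref{Gramm}) and (\ref{GrammY}) are straightforward to compute from a sample. The following is a minimal illustrative sketch in Python/numpy (not the paper's implementation); it uses the monomials $x^k$ as the basis $Q_k$ purely for simplicity, whereas in practice a numerically stable polynomial basis, such as the one provided by the referenced library, should be used.
\begin{verbatim}
# Illustration only (numpy): Eqs. (xmu), (Yq), (Gramm), (GrammY).
# Q_k(x) = x^k is used for simplicity; a numerically stable basis is
# preferable for larger d_x.
import numpy as np

def bag_moments(xs, d_x):
    """xs: (M, N) array of bag observations -> (M, d_x) moments <Q_k>^(l)."""
    powers = np.arange(d_x)
    return np.sum(xs[:, :, None] ** powers[None, None, :], axis=1)

def build_matrices(xs, ys, d_x):
    Q = bag_moments(xs, d_x)          # (M, d_x)
    Y = Q.T @ ys                      # Y_q     = sum_l y^(l) <Q_q>^(l)
    G = Q.T @ Q                       # G_qr    = sum_l <Q_q>^(l) <Q_r>^(l)
    yG = (Q * ys[:, None]).T @ Q      # (yG)_qr = sum_l y^(l) <Q_q>^(l) <Q_r>^(l)
    return Y, G, yG
\end{verbatim}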
The answers are: \begin{eqnarray} A_{LS}(x)&=& \sum\limits_{q,r=0}^{d_x-1} M_q(x) \left(G\right)^{-1}_{qr} Y_r \label{ALS} \\ A_{RN}(x)&=& \frac{\sum\limits_{q,r,s,t=0}^{d_x-1} M_q(x) \left(G\right)^{-1}_{qr} \left(yG\right)_{rs} \left(G\right)^{-1}_{st} M_t(x)} {\sum\limits_{q,r=0}^{d_x-1}M_q(x) \left(G\right)^{-1}_{qr} M_r(x)} \label{ARN} \end{eqnarray} Eq. (\ref{ALS}) is the least squares answer for estimating $y$ given $x$, and Eq. (\ref{ARN}) is the Radon--Nikodym answer. These are the two $y$ estimators at given $x$ for the distribution regression problem (\ref{regressionproblem}). They can be considered an extension of least squares and Radon--Nikodym interpolation from the value to value problem to the random distribution to random value problem. In the case $N=1$, $A_{LS}$ and $A_{RN}$ reduce exactly to the value to value problem considered in Ref. \cite{2015arXiv151005510G}. Note that the $A_{LS}(x)$ answer does not necessarily preserve the sign of $y$, but $A_{RN}(x)$ always preserves it, the same as in the value to value problem. If the $y$ distribution at a given $x$ needs to be estimated, this problem can also be solved. With the one--step approach of this paper we do not need the $Q_m(y)$ basis used in the two--step approach of Ref. \cite{2015arXiv151107085G}; the outcomes of $y$ are estimated from the $x$ moments only. A generalized eigenvalue problem\cite{2015arXiv151005510G} gives the answer: \begin{eqnarray} \sum\limits_{r=0}^{d_x-1}\left(yG\right)_{qr} \psi^{(i)}_r &=& y^{(i)} \sum\limits_{r=0}^{d_x-1}\left(G\right)_{qr} \psi^{(i)}_r \label{gevproblem} \end{eqnarray} The result of (\ref{gevproblem}) is the eigenvalues $y^{(i)}$ (the possible outcomes) and the eigenvectors $\psi^{(i)}$ (used to compute the probabilities of the outcomes). The problem now becomes: given an $x$ value, estimate the possible $y$ outcomes and their probabilities. The moments of a state with given $x$ value are $NQ_q(x)$ from (\ref{Mdist}), so the distribution with the moments (\ref{Mdist}) should be projected onto the distributions corresponding to the $\psi^{(i)}_q$ states; the square of this projection gives the weight, and the normalized weights give the probabilities. This is very similar to the ideology we used in \cite{2015arXiv151107085G}, but the eigenvalues from (\ref{gevproblem}) no longer have the meaning of Gauss quadrature nodes. The eigenvectors $\psi^{(i)}_r$ correspond to distributions with moments $<Q_q>=\sum_{r=0}^{d_x-1} \left(G\right)_{qr} \psi^{(i)}_r$, and the distribution with such moments corresponds to the value $y^{(i)}$. These distributions can be considered a ``natural distribution basis''. This is an important generalization of the approach of Refs. \cite{2015arXiv151005510G,2015arXiv151101887G} to random distributions: there a natural basis for a random value, not a random distribution, was considered. The projection of two $x$ distributions with moments $M^{(1)}_k$ and $M^{(2)}_k$ onto each other is \begin{eqnarray} <M^{(1)}|M^{(2)}>_{\pi}&=&\sum_{q,r=0}^{d_x-1} M^{(1)}_q \left(G\right)^{-1}_{qr} M^{(2)}_r \label{proj} \end{eqnarray} and the required probabilities, calculated by projecting the distribution (\ref{Mdist}) onto the natural basis states, are: \begin{eqnarray} w^{(i)}(x)&=&\left(\sum_{r=0}^{d_x-1}M_r(x) \psi^{(i)}_r \right)^2 \label{wi} \\ P^{(i)}(x)&=&w^{(i)}(x)/\sum_{r=0}^{d_x-1} w^{(r)}(x) \label{Pi} \end{eqnarray} Equations (\ref{gevproblem}) and (\ref{Pi}) are the one--step answer to the distribution regression problem: find the outcomes $y^{(i)}$ and their probabilities $P^{(i)}(x)$.
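For illustration only, the estimators (\ref{ALS}) and (\ref{ARN}) and the spectrum (\ref{gevproblem}) with the probabilities (\ref{wi})--(\ref{Pi}) can be written in a few lines of Python/numpy/scipy, continuing the sketch above with the monomial basis $Q_k(x)=x^k$. The symmetric generalized eigensolver of scipy returns eigenvectors with $\psi^{(i)T}G\psi^{(j)}=\delta_{ij}$, consistent with the normalization implicitly assumed in (\ref{wi}); a regularization may be needed if $G$ is close to singular.
\begin{verbatim}
# Illustration only (numpy/scipy), continuing the previous sketch:
# Eqs. (ALS), (ARN), (gevproblem), (wi), (Pi) with Q_k(x) = x^k.
import numpy as np
from scipy.linalg import eigh, solve

def estimators(x, N, Y, G, yG, d_x):
    M_x = N * x ** np.arange(d_x)      # Eq. (Mdist)
    GinvM = solve(G, M_x)              # G^{-1} M(x); assumes G non-singular
    A_LS = GinvM @ Y                   # Eq. (ALS)
    A_RN = GinvM @ yG @ GinvM / (M_x @ GinvM)   # Eq. (ARN)
    return A_LS, A_RN

def outcome_distribution(x, N, G, yG, d_x):
    y_i, psi = eigh(yG, G)             # Eq. (gevproblem); columns are psi^(i)
    M_x = N * x ** np.arange(d_x)
    w = (M_x @ psi) ** 2               # Eq. (wi)
    return y_i, w / w.sum()            # outcomes y^(i) and probabilities (Pi)
\end{verbatim}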
Note that in this setup the possible outcomes $y^{(i)}$ do not depend on $x$; only the probabilities $P^{(i)}(x)$ of the outcomes depend on $x$. This is different from the two--step solution of \cite{2015arXiv151107085G}, where the outcomes and their probabilities both depend on $x$. Also note that $\sum_{r=0}^{d_x-1} w^{(r)}(x)=\sum_{q,r=0}^{d_x-1} M_q(x) \left(G\right)^{-1}_{qr} M_r(x)$. One of the major differences between the probabilities (\ref{Pi}) and the probabilities from the Christoffel function approach \cite{2015arXiv151107085G} is that (\ref{Pi}) has the meaning of a ``true'' probability, while in the two--step solution \cite{2015arXiv151107085G} the Christoffel function value is used as a proxy for probability in the first step. It is important to note how the knowledge is represented in these models. The model (\ref{ALS}) has its learned knowledge represented by the $d_x\times d_x$ matrix (\ref{Gramm}) and the vector (\ref{Yq}) of dimension $d_x$. The model (\ref{ARN}), as well as the distribution answer (\ref{Pi}), has its learned knowledge represented by the two $d_x\times d_x$ matrices (\ref{Gramm}) and (\ref{GrammY}).
\section{\label{christoffel1StepNum}Numerical Estimation of the One--Step Solution} Numerical instability similar to that of the two--stage Christoffel function approach \cite{2015arXiv151107085G} also arises for the approach under study, but the situation is now much less problematic, because we do not have a $y$--basis $Q_m(y)$, and all the dependence on $y$ enters the answer through the matrix (\ref{GrammY}). In this case only a stable $x$ basis $Q_k(x)$ is required. The algorithm for the $y$ estimators (\ref{ALS}) and (\ref{ARN}) is this: calculate the moments $<Q_k>^{(l)}$ from (\ref{xmu}), then calculate the matrices (\ref{Gramm}) and (\ref{GrammY}); if the least squares approximation is required, also calculate the moments (\ref{Yq}). In contrast with the Christoffel function approach, where the matrix $<Q_qQ_r>$; $q,r=[0..d_x-1]$, can be obtained from the moments $Q_k$; $k=[0..2d_x-1]$, by application of a polynomial multiplication operator, here (\ref{Gramm}) and (\ref{GrammY}) can hardly be obtained this way for $N>1$ and should be calculated directly from the sample. This is not a big issue, because $d_x$ is typically not large. Then invert the matrix $\left(G\right)_{qr}$ from (\ref{Gramm}); this matrix is somewhat similar to a Gram matrix, but it uses distribution moments, not basis functions. Finally, put all these into (\ref{ALS}) for the least squares $y(x)$ estimation or into (\ref{ARN}) for the Radon--Nikodym $y(x)$ estimation. If the $y$ distribution is required, then solve the generalized eigenvalue problem (\ref{gevproblem}), obtain the $y^{(i)}$ as the possible $y$ outcomes (they do not depend on $x$), and calculate the $x$--dependent probabilities (\ref{Pi}); these are the squared projection coefficients of a state with a specific $x$ value, the point distribution (\ref{Mdist}), or some other $x$ distribution of general form, onto the eigenvectors $\psi^{(i)}$. To show an application of this approach, let us take several simple distributions. Let $\epsilon$ be a uniformly distributed $[-1;1]$ random variable and take $N=1000$ and $M=10000$. Then consider sample distributions built as follows: 1) for $l=[1..M]$ take a random $x$ from the interval $[-1;1]$; 2) calculate $y=f(x)$ and take this $y$ as $y^{(l)}$; 3) build a bag of $x$ observations as $x_j=x+R\epsilon$; $j=[1..N]$, where $R$ is a parameter. The following three functions $f(x)$ are used for building the sample distributions: \begin{eqnarray} f(x)&=&x \label{flin} \\ f(x)&=&\frac{1}{1+25x^2} \label{frunge} \\ f(x)&=&\left\{\begin{array}{ll} 0 & x\le 0 \\ 1 & x>0\end{array}\right. \label{fstep} \end{eqnarray} \begin{figure} \caption{\label{fig:flin}} \end{figure} \begin{figure} \caption{\label{fig:frunge}} \end{figure} \begin{figure} \caption{\label{fig:fstep}} \end{figure} In Figs. \ref{fig:flin}, \ref{fig:frunge}, \ref{fig:fstep} the (\ref{ALS}) and (\ref{ARN}) answers are presented for $f(x)$ from (\ref{flin}), (\ref{frunge}) and (\ref{fstep}), respectively, for $R=\{0.1,0.3\}$ and $d_x=\{10,20\}$. The $x$ range is deliberately taken slightly wider than the interval $[-1; 1]$ in order to see possible divergences outside of the measure support. In most cases the Radon--Nikodym answer is superior, and in addition it preserves the sign of $y$. The least squares approximation is good for the special case $f(x)=x$ and typically diverges at $x$ outside of the measure support.
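For reference, the synthetic setup of this section can be reproduced along the following lines. This is an illustrative sketch of ours, not the actual Scala implementation of Ref. \cite{polynomialcode}; it reuses the functions of the previous sketches and uses smaller $M$, $N$ than in the figures to keep the run light.
\begin{verbatim}
# Illustration only: synthetic bags x_j = x + R*eps, y = f(x), uniform eps,
# evaluated with the functions of the previous sketches.
import numpy as np

rng = np.random.default_rng(0)
M, N, R, d_x = 2000, 200, 0.3, 10        # the paper uses M = 10000, N = 1000
f = lambda x: 1.0 / (1.0 + 25.0 * x**2)  # Eq. (frunge); (flin), (fstep) analogous

x_centres = rng.uniform(-1.0, 1.0, size=M)
ys = f(x_centres)
xs = x_centres[:, None] + R * rng.uniform(-1.0, 1.0, size=(M, N))

Y, G, yG = build_matrices(xs, ys, d_x)   # from the earlier sketch
for x0 in (-0.5, 0.0, 0.5):
    print(x0, estimators(x0, N, Y, G, yG, d_x))   # (A_LS, A_RN) at x0
\end{verbatim}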
\begin{figure} \caption{\label{fig:Px}} \end{figure} The numerical estimates of the probability function (the $y^{(i)}$ and $P^{(i)}(x)$) were also calculated; the eigenvalue index $i$ corresponding to the maximal $P^{(i)}(x)$ typically corresponds to the $y^{(i)}$ to which $f(x)$ is closest. For the simple case (\ref{flin}) see Fig. \ref{fig:Px}. See Ref. \cite{polynomialcode}, file com/polytechnik/algorithms/ExampleDistribution1Stage.scala, for the algorithm implementation. \section{\label{christoffeldisc}Discussion} In this work a one--stage approach is applied to the distribution regression problem. The bag's observations are first converted to moments; then least squares or Radon--Nikodym theory can be applied and a closed form answer obtained. Equations (\ref{ALS}) and (\ref{ARN}) estimate the $y$ value given $x$. This answer can be generalized to the question ``what is the $y$ estimate given a distribution of $x$''. For this problem, first obtain the moments $<Q_k>$ corresponding to the given distribution of $x$, then use them in (\ref{ALS}) or (\ref{ARN}) instead of $M_k(x)$, which corresponds to the state localized at $x$. Similarly, if the probabilities of the $y$ outcomes are required for a given distribution of $x$, the $<Q_k>$ should be used in the weight expression (\ref{wi}) instead of $M_k(x)$ (this is a special case of the projection (\ref{proj}) of two distributions onto each other). Computer code implementing the algorithms is available\cite{polynomialcode}. In conclusion, we want to discuss possible directions of future development. \begin{itemize} \item In this work a closed form solution to the random distribution to random value problem (\ref{regressionproblem}) is found. The question arises of increasing the problem order: replace ``random distribution'' by ``random distribution of random distribution'' (or, even further, ``random distribution of random distribution of random distribution'', etc.). In this case each $x_j$ in (\ref{regressionproblem}) should be treated as a sample distribution itself, and the index $j$ can then be treated as a 2D index, $x_{j_1,j_2}$. Working with 2D indexes is very similar to working with images; see Ref. \cite{2015arXiv151101887G}, where a 2D index was used for image reconstruction by applying Radon--Nikodym or least squares approximation. Similarly, the results of this paper can be generalized to higher order problems by considering all indexes as 2D. \item Obtaining the possible $y$ outcomes as a matrix spectrum (\ref{gevproblem}) and then calculating their probabilities by the projection (\ref{proj}) of a given distribution (the point distribution (\ref{Mdist}) is the simplest example) onto the eigenvectors, (\ref{Pi}), is a powerful approach to the estimation of the $y$ distribution under a given condition. We can expect this approach to show good performance for data drawn from a wide range of probability distributions, especially distributions that are not normal. The reason is that (\ref{gevproblem}) is expressed in terms of probability states, which makes the role of outliers much less important compared to methods based on the $L^2$ norm, particularly least squares. For example, this approach can be applied to distributions where only the first moment of $y$ is finite, while $L^2$ norm approaches require the second moment of $y$ to be finite, which makes them inapplicable to distributions with infinite standard deviation. We expect the (\ref{gevproblem}) approach to be a good foundation for the construction of robust statistics\cite{huber2011robust}. \end{itemize} \end{document}
\begin{document} \begin{frontmatter} \title{Gaining power in multiple testing of interval hypotheses via conditionalization} \runtitle{Interval hypotheses} \author{\fnms{Jules L.} \snm{Ellis$^1$} \mathrm{cor}ref{} } \and \author{\fnms{Jakub} \snm{Pecanka$^2$} } \and \author{\fnms{Jelle} \snm{Goeman$^2$} } \affiliation{Radboud University Nijmegen \thanksmark{m1} and Leiden University Medical Center \thanksmark{m2}} \runauthor{Ellis et al.} \address{1. Radboud University Nijmegen, 2. Leiden University Medical Center} \begin{abstract} In this paper we introduce a novel procedure for improving multiple testing procedures (MTPs) under scenarios when the null hypothesis $p$-values tend to be stochastically larger than standard uniform (referred to as \emph{inflated}). An important class of problems for which this occurs are tests of interval hypotheses. The new procedure starts with a set of $p$-values and discards those with values above a certain pre-selected threshold while the rest are corrected (scaled-up) by the value of the threshold. Subsequently, a chosen family-wise error rate (FWER) or false discovery rate (FDR) MTP is applied to the set of corrected $p$-values only. We prove the general validity of this procedure under independence of $p$-values, and for the special case of the Bonferroni method we formulate several sufficient conditions for the control of the FWER. It is demonstrated that this 'filtering' of $p$-values can yield considerable gains of power under scenarios with inflated null hypotheses $p$-values. \end{abstract} \begin{keyword}[class=MSC] \kwd[Primary ]{62J15} \kwd[; secondary ]{62G30, 62P15, 62P10} \end{keyword} \begin{keyword} \kwd{conditionalized test} \kwd{false discovery rate} \kwd{familywise error-rate} \kwd{multiple testing} \kwd{one-sided tests} \kwd{uniform conditional stochastic order} \end{keyword} \end{frontmatter} \section{Introduction} Multiple testing procedures (MTPs) generally assume that $p$-values of true null hypotheses follow the standard uniform distribution or are stochastically larger. The latter situation may occur when interval (rather than point) null hypotheses are tested. Under such scenarios the $p$-values are standard uniform typically only in borderline cases such as when the true value of a parameter is on the edge of the null hypothesis interval. When the true value of the parameter is in the interior of the interval the $p$-values tend to be stochastically larger than uniform, sometimes dramatically so with many $p$-values having distribution concentrated near 1. We call such $p$-values \emph{inflated}. There are many practical examples of multiple testing situations with interval null hypotheses with some or all of the true parameter values located (deep) in the interior of the null hypothesis interval. (1.) In test construction according to the nonparametric Item Response Theory (IRT) model of Mokken (1971), one can test whether all item-covariances are nonnegative \citep{Mokken1971, Rosenbaum1984, HR1986, JE1997}. Ordinarily, most item-covariances are substantially greater than zero, with only a few negative exceptions. (2.) In large-scale survey evaluations of public organisations, such as schools or health care organizations, it can be interesting to test whether organizations score lower than a benchmark \citep{Normand2007, Ellis2013}. If many organisations score well above the benchmark, a large number of the $p$-values of true null hypotheses become inflated. (3.) 
When a treatment or a drug is known to have a substantial positive treatment effect in a given population it can be of interest to look for adverse treatment effects in subpopulations. The $p$-values of most null hypotheses again become inflated. Intuitively, if the null $p$-values tend to be stochastically larger than uniform, true and false null hypotheses should be easier to distinguish, making the multiple testing problem easier. However, most MTPs focus the error control on the 'worst case' of standard uniformity and thus miss the opportunity to yield more power for inflated $p$-values. Consequently, in the presence of inflated $p$-values the actual error rate can be (much) smaller than the nominal level. Some MTPs actually lose power when null $p$-values become inflated, e.g.\ adaptive FDR methods \citep[e.g.][]{Storey2002} that incorporate an estimate of $\pi_0$, the proportion of null $p$-values \citep[note 9 on p.\ 258]{Fischer2012}. In this paper we propose a procedure which improves existing MTPs in the presence of inflated $p$-values by adding a simple conditionalization step at the onset of the analysis. For an a priori selected threshold $\lambda\in(0,1]$ (e.g.\ $\lambda=0.5$) we remove (i.e. not reject) all hypotheses with $p$-value above $\lambda$. The remaining $p$-values are scaled up by the threshold $\lambda$: $p_i'=p_i/\lambda $. The selected MTP is subsequently performed on the rescaled $p$-values only. We refer to the altered procedure as the \emph{conditionalized} version of the MTP, leading to procedures such as the \emph{conditionalized Bonferroni procedure} (CBP). In terms of power, there are both benefits and costs associated with conditionalization. The costs come from the scaling of the $p$-values by $1/\lambda$, thus effectively increasing their values. If a fixed significance threshold were used, the number of significant $p$-values would decrease. However, the conditionalization step also tends to increase the significance threshold for each $p$-value by reducing the multiple testing burden (i.e. the number of hypotheses corrected for). Crucially, in scenarios with a large portion of substantially inflated $p$-values the increased significance threshold means that the overall effect of conditionalization results in a more powerful procedure. In the remainder of the paper we formally investigate the effects and benefits of conditionalization. We prove that for scenarios with inflated $p$-values conditionalized procedures retain type I error control whenever $p$-values are independent. Our result applies to both the family-wise error rate (FWER), the false discovery rate (FDR), and other error rates. We also show that if $p$-values are not independent, such control is not automatically guaranteed. We formulate conditions which are sufficient for the control of the FWER by the CBP. We conjecture that the CBP is generally valid for positively correlated $p$-values. Finally, the power of conditionalized procedures is investigated using simulations. \section{Definition of conditionalized tests} We define a multiple testing procedure (MTP) $\mathcal{P}$ as a mapping that transforms any finite vector of $p$-values into an equally long vector of binary decisions. If $\mathcal{P}({p_1},\ldots,{p_m})=({d_1},\ldots,{d_m})$, then ${d_i}$ indicates whether the null hypothesis corresponding to $p_i$ is rejected ($d_i=1$) or not ($d_i=0$). We define a decision rate as the expected value of a function of $\mathcal{P}({p_1},\ldots,{p_m})$. 
We denote the FWER and FDR of the procedure $\mathcal{P}$ as $\text{$\mathrm{FWER}$}_{\mathcal{P}}$ and $\mathrm{FDR}_{\mathcal{P}}$, respectively. For $\lambda\in(0,1]$ and an MTP $\mathcal{P}$ we define the corresponding conditionalized MTP $\mathcal{P}^\lambda$ as the MTP that, on input of a vector of $p$-values $({p_1},\ldots,{p_m})$, applies $\mathcal{P}$ to the sub-vector consisting of only the rescaled $p$-values $p_i/\lambda$ with $p_i\leq\lambda$, and that does not reject the null hypotheses of the $p$-values with $p_i > \lambda$. Throughout the paper we always assume that both the level of significance $\alpha$ and the conditionalization factor $\lambda$ are fixed (independently of the data) prior to the analysis. In this paper we pay special attention to the conditionalized Bonferroni procedure (CBP) and its control of the FWER. For $\lambda\in(0,1]$ define $R_m(\lambda)=\sum_{i=1}^m\boldsymbol{1}\{p_i\le\lambda\}$. Let $\mathcal{T}\subseteq\{1,\ldots,m\}$ be the index set of true null hypotheses. The FWER of the CBP is defined as \begin{align*} \text{$\mathrm{FWER}$}_{\text{CB}}^{\lambda,\alpha}=P\Big(\bigcup\limits_{i\in\mathcal{T}}{\Big[\,p_i<\frac{\alpha \lambda }{R_m(\lambda) \vee 1}}\Big]\Big). \end{align*} If $\text{$\mathrm{FWER}$}_{\text{CB}}^{\lambda,\alpha}\le\alpha$ for given $\lambda$ and $\alpha$ we say that \emph{the CBP controls the FWER} for those $\lambda$ and $\alpha$. Note that for the sake of simplicity in the rest of the paper we sometimes suppress one or both arguments and simply use $R_m$, $R(\lambda)$, or even $R$ in the place of $R_m(\lambda)$. The proofs of all theorems and lemmas formulated below can be found in the Appendix. \section{FWER and FDR of independent tests} In this section we state our main result: a conditionalized procedure controls FWER (or FDR) if the non-conditionalized procedure controls FWER (or FDR) and if the test statistics are independent and the marginal distributions satisfy a condition that we call \emph{supra-uniformity}. \begin{Definition}[supra-uniformity] \label{def:supre-uniform} The distribution of ${p_i}$ is supra-uniform if for all $\lambda ,\gamma \in [0,1]$ with $ \gamma \leq\lambda$ it holds $P({p_i} < \gamma\,|\,{p_i}\leq\lambda )\leq\gamma /\lambda$. We say that ${p_i}$ is supra-uniform if its distribution is supra-uniform. \end{Definition} Supra-uniformity is also known as the uniform conditional stochastic order (UCSO) \citep[defined by][]{Whitt1980, Whitt1982, KS1982,Ruschendorf1991} relative to the standard uniform distribution $U(0,1)$\xspace. It is well-known that this condition is implied if ${p_i}$ dominates $U(0,1)$\xspace in likelihood ratio order \citep[e.g.,][]{Whitt1980, Denuit2005}. \citealt{Whitt1980} shows that when the sample space is a subset of the real line and the probability measures have densities, then UCSO is equivalent to the monotone likelihood ratio (MLR) property (i.e. for every $y>x$ it holds $f(y)/g(y)\geq{}f(x)/g(x)$). In the case of $U(0,1)$\xspace (i.e. when $g$ is a constant) it is immediately clear that MLR is equivalent to the $p$-values having densities that are increasing on $(0,1)$, which is further equivalent to having cumulative distribution functions that are convex on $(0,1)$. \begin{theorem} \label{thm:independent} Let $\mathcal{P}$ be an MTP and $D$ be a decision rate (e.g., FWER of FDR) such that $D_{\mathcal{P}}\leq\alpha$ for $\alpha\in(0,1)$ whenever the $p$-values of the true hypotheses are independent and supra-uniformly distributed. 
If the $p$-values of the true hypotheses are independent and supra-uniform, then for the conditionalized MTP $\mathcal{P}^\lambda$ it holds that $D_{\mathcal{P}^\lambda}\le\alpha$. \end{theorem} The proof of Theorem \ref{thm:independent} can be found in the Appendix. The basic idea behind the proof is to divide the space of $p$-values into orthants partitioned by the events ${[{p_i}\leq\lambda ]}$ versus ${[{p_i} > \lambda ]}$ for all $i$. Conditionally on each of these orthants, the FWER (or FDR) of $\mathcal{P}^\lambda$ is at most $\alpha$. Therefore, the total FWER (or FDR) of $\mathcal{P}^\lambda$ must also be at most $\alpha$. A similar argument is used by \citet{WD1986} in the context of order restricted inference. Many popular multiple testing procedures satisfy the conditions of Theorem \ref{thm:independent}, since they only require the weaker condition $\mathrm{P}(p_i{}\leq{}c)\leq{}c$ in order to preserve type I error control. Consequently, for independent $p$-values the validity of the conditionalized versions of the methods of \cite{Holm1979}, \cite{Hommel1988}, and \cite{Hochberg1988} for FWER control and of \cite{BH1995} for FDR control follows from Theorem \ref{thm:independent}.
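To make the construction above concrete, the conditionalization step can be sketched in code. The snippet below is a minimal sketch of ours (Python/NumPy; the function names are not from the paper): it wraps a base MTP, here the plain Bonferroni procedure, into its conditionalized version $\mathcal{P}^\lambda$ by discarding the $p$-values above $\lambda$ and applying the base procedure to the remaining $p$-values rescaled by $1/\lambda$.

\begin{verbatim}
import numpy as np

def bonferroni(pvals, alpha=0.05):
    """Plain Bonferroni MTP: reject H_i iff p_i <= alpha/m."""
    pvals = np.asarray(pvals, dtype=float)
    return pvals <= alpha / max(len(pvals), 1)

def conditionalize(mtp, lam=0.5):
    """Return the conditionalized version P^lambda of an MTP."""
    def conditionalized_mtp(pvals, alpha=0.05):
        pvals = np.asarray(pvals, dtype=float)
        keep = pvals <= lam                    # hypotheses kept after filtering
        decisions = np.zeros(len(pvals), dtype=bool)
        if keep.any():
            # base MTP applied to the rescaled p-values p_i/lambda only
            decisions[keep] = mtp(pvals[keep] / lam, alpha=alpha)
        return decisions
    return conditionalized_mtp

# Conditionalized Bonferroni procedure (CBP) with lambda = 0.5
cbp = conditionalize(bonferroni, lam=0.5)
p = np.array([0.012, 0.40, 0.70, 0.85, 0.95])
print(bonferroni(p))  # no rejections: threshold 0.05/5 = 0.01
print(cbp(p))         # rejects H_1: rescaled p = 0.024 <= 0.05/2 = 0.025
\end{verbatim}

In this toy example the three inflated $p$-values are discarded, so the remaining two hypotheses are corrected for a multiplicity of two instead of five, which is precisely the power gain studied in the remainder of the paper.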
\section{FWER control by the CBP under dependence: finitely many hypotheses} Generalizing Theorem \ref{thm:independent} to the setting with dependent $p$-values is not trivial. In the next three sections of this paper we focus on the specific case of the CBP, presenting several sufficient conditions for the control of the FWER by the CBP. The overarching theme of these conditions is a requirement for the $p$-values to be positively correlated.
In light of several special results obtained below we conjecture that, at least in the multivariate normal model, the CBP controls the FWER whenever the $p$-values are positively correlated. Further justification for our conjecture is the extreme case in which the null $p$-values are all identical, for which the proof of FWER control by the CBP is trivial. \subsection{Negative correlations: a counterexample} To see that the requirement of independence of the $p$-values in Theorem~\ref{thm:independent} cannot simply be dropped, consider a multiple testing problem with $m=2$ where $p_1$ and $p_2$ both have a $U(0,1)$\xspace distribution and $p_1=1-p_2$. Assume $\lambda > 1/2$, since otherwise the CBP is uniformly less powerful than the classical Bonferroni method. In this setting \[ \mathrm{FWER}_{\mathrm{CB}}=\mathrm{P}(p_1\leq\lambda\alpha)+\mathrm{P}(p_2\leq\lambda\alpha)=2\lambda\alpha>\alpha. \] In other words, under the considered setting the CBP either fails to control the FWER (with $\lambda>1/2$) or is strictly less powerful than the Bonferroni method (with $\lambda\leq1/2$). \subsection{The bivariate normal case} Proposition \ref{thm:bivnorm} below guarantees FWER control by the CBP for all $\alpha,\lambda\in(0,1)$ in the setting with two $p$-values corresponding to two zero-mean bivariate normally distributed test statistics with positive correlation. Denote by $\Phi$ the standard normal distribution function and by $I_2$ the $2\times 2$ identity matrix. \begin{Proposition} \label{thm:bivnorm} Let $m=2$ and let $(X_1,X_2)'\sim{}N(0,\Sigma_\rho)$, where $\Sigma_\rho=\rho+(1-\rho)I_2$. Set $p_1=1-\Phi(X_1)$ and $p_2=1-\Phi(X_2)$. If $\rho\geq0$, then $\mathrm{FWER}_{\mathrm{CB}}\leq\alpha$. \end{Proposition} \subsection{Distributions satisfying the expectation criterion} In the multivariate setting without independence of the $p$-values a number of conditions can be formulated which guarantee the control of the FWER by the CBP. One such sufficient condition is given in Lemma \ref{lem:expcrit}. \begin{lemma} \label{lem:expcrit} Let the $p$-values $p_1,\ldots,p_m$ have continuous distributions $F_1,\ldots,F_m$ that satisfy $(F_i(y) - F_i(x))\lambda \alpha \leq F_i(\lambda \alpha)(y-x)$ for any $x, y \in (0,\lambda \alpha)$ such that $x < y$, let $P(R_m \le k\,|\,p_i=x)$ be increasing in $x$ for every $k=0,1,\ldots,m$ and $i=1,\ldots,m$, and let \begin{align} \label{eq:expcrit} \sum_{i=1}^{m}{P(p_i\le\lambda\alpha)\,E(R_m^{-1}\,|\,p_i=\lambda\alpha)\le\alpha}. \end{align} Then $\mathrm{FWER}_{\mathrm{CB}}\le\alpha$. \end{lemma} We refer to the condition in (\ref{eq:expcrit}) as the \emph{expectation criterion}. Note that for positively associated $p$-values, small values of $p_i$ often occur together with small values of $R_m^{-1}$; hence for small $\lambda \alpha$, the summands in the expectation criterion tend to be small. The expectation criterion can be used to prove a general result on $p$-values arising from equicorrelated jointly normal test statistics, formulated as Lemma \ref{lem:equic_norm}. Note that if $p_1,\ldots,p_m$ are exchangeable and standard uniform, (\ref{eq:expcrit}) simplifies to $E(R_m^{-1}\mid p_1=\lambda\alpha)\le (\lambda m)^{-1}$. If, in addition, $p_1,\ldots,p_m$ are derived from jointly normally distributed test statistics, Lemma \ref{lem:equic_norm} gives a further simplification of the condition.
\begin{lemma} \label{lem:equic_norm} Assume that $(\Phi^{-1}(p_1),\ldots, \Phi^{-1}(p_m))'\sim{}N(0,\Sigma_\rho)$, where $\Sigma_\rho=\rho+\mathrm{diag}(1-\rho,\ldots,1-\rho)$ with $0\leq\rho<1$. Let \begin{equation} \label{eq:integralcrit} \int_{-\infty }^{\infty }{\frac{\varphi(x)}{\Phi (\mu-\sqrt{\rho}x)}dx}\leq\lambda^{-1}, \end{equation} where $\mu=\Phi^{-1}(\lambda)(1-\rho)^{-1/2}-\Phi^{-1}(\lambda\alpha)\rho(1-\rho)^{-1/2}$. Then $\mathrm{FWER}_{\mathrm{CB}}\le\alpha$. \end{lemma} A practical implication of Lemma \ref{lem:equic_norm} is that for a given setting $(\lambda, \alpha, \rho)$ the FWER control by the CBP can be verified numerically by evaluating the one-dimensional integral in (\ref{eq:integralcrit}). The results of our numerical analysis suggest that (\ref{eq:integralcrit}) holds for any $0\leq\rho<1$ and any $0<\lambda\leq1$ whenever $\alpha\leq0.368$. Moreover, we observed evidence that condition (\ref{eq:integralcrit}) is stricter than necessary: certain combinations of $\alpha$, $\lambda$ and $\rho$ (e.g.\ $\alpha=0.7$, $\lambda=0.9$, $\rho=0.2$) exist for which (\ref{eq:integralcrit}) is violated, but simulations indicate that the CBP controls the FWER for all $\alpha$, $\lambda$ and $\rho \geq 0$ in the case of $p$-values arising from equicorrelated normals with nonnegative means and unit variances. \subsection{Mixtures} Next we show that the control of the type I error rate by the CBP is preserved when distributions are mixed. The result below applies when the family of distributions being mixed is indexed by a one-dimensional parameter; however, it can easily be generalized to more complex families of distributions. \begin{Proposition} \label{lem:mixtures} Let $\mathcal{F}=\{F_w,w\in\mathbb{R}\}$ be a family of distribution functions. Assume that a decision rate $D$ and an MTP $\mathcal{P}$ satisfy $D_{\mathcal{P}} \leq\alpha$ whenever the joint distribution of the $p$-values is in $\mathcal{F}$. For any mixing density $g$, if the $p$-values $p_1,\ldots,p_m$ are distributed according to the mixture $F=\int_{-\infty}^\infty{}F_w g(w)dw$, then $D_{\mathcal{P}} \leq\alpha$. \end{Proposition} Proposition \ref{lem:mixtures} is a general result that applies to many conditionalized procedures. For instance, in the case of the CBP, which controls the FWER whenever the $p$-values are independent and supra-uniform, the proposition guarantees the control of the FWER by the CBP also for mixtures of such distributions. Note that such mixtures may be correlated. \section{FWER control by the CBP under dependence in large testing problems} Finally, we give a sufficient condition for FWER control by the CBP as the number of hypotheses $m$ approaches infinity. Suppose for a moment that the expectation of $R(\lambda)$ (i.e. the number of $p$-values below $\lambda$) is known. In such a case one could use the alternative to the CBP that rejects hypothesis $H_i$ whenever $p_i\leq\lambda\alpha/\mathrm{E}[R(\lambda)]$. If the $p$-values are supra-uniform, then under arbitrary dependence this procedure controls the FWER, since \begin{align*} \mathrm{FWER}_{\mathrm{CBP'}}&\leq\sum\nolimits_{i\in T} P(p_i<\alpha\lambda/E[R(\lambda)]) \\ &\leq\sum\nolimits_{i\in T} P(p_i\leq\lambda)\,P(p_i<\alpha\lambda/E[R(\lambda)]\mid p_i\leq\lambda) \\ &\leq\sum\nolimits_{i\in T} P(p_i\leq\lambda)\,\alpha/E[R(\lambda)] \leq\alpha. \end{align*} This suggests that the CBP should also control the FWER for $m\to\infty$ whenever $R(\lambda)$ is a consistent estimator of $\mathrm{E}[R(\lambda)]$. This heuristic argument is formalized in Proposition \ref{thm:exp_bonf}, where $\mathop{\mathrm{plim}}$ denotes convergence in probability. \begin{Proposition} \label{thm:exp_bonf} Let the $p$-values $p_1,\ldots,p_m$ have supra-uniform distributions and let $\mathop{\mathrm{plim}}_{m\to\infty}\,R/m=\eta$ and $\lim_{m\to\infty}\,E(R/m)=\eta$ for some $\eta\in\mathbb{R}$. Then $\limsup_{m\to\infty}\,\mathrm{FWER}_{\mathrm{CB}}\leq\alpha$. \end{Proposition} An application of Proposition~\ref{thm:exp_bonf} in a situation where the correlations between $p$-values vanish as $m\to\infty$ leads to Corollary \ref{corol:asymptotic}. \begin{Corollary} \label{corol:asymptotic} Denote $\rho_{ij}=\mathrm{cor}(\mbf{1}[p_i\leq\lambda],\mbf{1}[p_{j}\leq\lambda])$ and put $\rho_{ij}^+=\max\{0,\rho_{ij}\}$. Denote the average off-diagonal positive part of the correlations as \begin{align*} \bar\rho(m)=\frac{2}{m(m-1)}\sum\limits_{i=1}^{m-1}{\sum\limits_{j=i+1}^{m}{\rho_{ij}^{+}}}. \end{align*} If the $p$-values are supra-uniform, $\lim_{m\to\infty}\,E(R/m)=\eta$ for some $\eta\in\mathbb{R}$, and $\lim_{m\to\infty}\bar\rho(m)=0$, then $\limsup_{m\to\infty}\mathrm{FWER}_{\mathrm{CB}}\leq\alpha$. \end{Corollary} An example of the usage of Corollary~\ref{corol:asymptotic} for data analysis can be found in Section \ref{manifest}.
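Returning to Lemma \ref{lem:equic_norm}, the left-hand side of (\ref{eq:integralcrit}) is a one-dimensional integral and is easy to evaluate numerically for a given $(\lambda,\alpha,\rho)$. The sketch below is ours (it assumes SciPy for the quadrature and for the normal distribution; it is not the authors' code); the two example calls mirror the cases discussed after the lemma.

\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

def integral_criterion(lam, alpha, rho):
    """Left-hand side of (eq:integralcrit) for a given (lambda, alpha, rho)."""
    mu = (norm.ppf(lam) - rho * norm.ppf(lam * alpha)) / np.sqrt(1.0 - rho)
    integrand = lambda x: norm.pdf(x) / norm.cdf(mu - np.sqrt(rho) * x)
    value, _ = quad(integrand, -np.inf, np.inf)
    return value

def cbp_fwer_certified(lam, alpha, rho):
    """True if the equicorrelated-normal lemma certifies FWER control."""
    return integral_criterion(lam, alpha, rho) <= 1.0 / lam

print(cbp_fwer_certified(0.5, 0.05, 0.3))  # alpha <= 0.368: criterion holds
print(cbp_fwer_certified(0.9, 0.70, 0.2))  # violated, although simulations
                                           # still indicate FWER control
\end{verbatim}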
\section{FWER investigation - simulations} Our conjecture is that the CBP controls the FWER in the case of positively correlated multivariate normal test statistics. To substantiate this, we conducted the following simulations. We generated the $p$-values as $p_i = \mathcal{P}hi(Z_i)$, with the 'test statistics' $(Z_1, \ldots, Z_m)'\sim{}N(0,\Sigma) $ and $\Sigma = (\sigma_{ij})$ with each $\sigma_{ij} \ge 0$ and each $\sigma_{ii} =1$. The correlation matrices were generated in the following way. First, a covariance matrix was generated as $\Sigma = AA^T$, where the $a_{ij}$ were drawn randomly and independently from a standard normal distribution. If there was a negative covariance, then the smallest covariance was found, and the corresponding negative elements of $a_{ij}$ were set to 0. This was repeated until all covariances were nonnegative. Finally, the covariance matrix was scaled into a correlation matrix. For each $m\in\{1,2,3,4,5,6,7,8,9,10,15,20,25,50,75, 100\}$ we generated 100 correlation matrices $\Sigma$, and for each $\Sigma$ we conducted 10,000 simulations and computed the FWER for the CBP with $\alpha \in {0.05,0.10,\ldots,0.95}$ and $\lambda\in\{0.1,0.2,\ldots,0.9\}$. There were 6 combinations of $(\Sigma,\alpha, \lambda)$ with simulated FWER slightly above $\alpha$, but none of these differences were significant according to a binomial test with significance level 0.05. For $m \ge 6$ we found no cases with simulated FWER above $\alpha$. In our simulations we also explored several multivariate settings with negative correlations (results not included). The simulations showed what was already suggested by the lack of control of the FWER for negative correlations in the bivariate case, namely that FWER is not necessarily controlled with negative correlations (especially for small $m$). \section{Power investigation - simulations} In this section we investigate the power performance of conditionalized tests relative to their non-conditionalized versions through simulations. We consider the following procedures: Bonferroni; {\v S}id{\'a}k \citep[attributed to Tippet by][p.~2433]{Davidov2011}; Fisher combination method based on the transformation $F=-2\sum_{i=1}^m\log {p_i}$ \citep[see][p.~2433]{Davidov2011}; the likelihood ratio (LR) procedure based on the theory of order restricted statistical inference of \citet{RobertsonWD}, using the chi-bar distribution with binomial weights; the ${I_+}$ statistic, based on the empirical distribution function \citep[p.~2433]{Davidov2011}; the Bonferroni plug-in procedure as defined by \citet{FG2009} based on the work of \citet{Storey2002} referred to as the FGS procedure. It should be noted that neither Fisher's method nor the LR and ${I_+}$ procedures provide a decision for each individual hypothesis ${H_i}$. Instead they only allow a conclusion about the intersection hypothesis that all ${H_i}$ are true (i.e. the global null hypothesis). For this reason their usage is limited. Furthermore, both Fisher's and {\v S}id{\'a}k's method as well as the LR and ${I_+}$ procedures assume independence of the analyzed $p$-values. However, this assumption is often violated in practice, which limits the usage of these methods. \subsection{Power as the number of true hypotheses increases} For the power investigation the $p$-values were generated based on $m$ parallel \textit{z}-tests of null hypotheses of type $H_0:\mu_i\geq0$, each based on a sample of size $n$. 
The $p$-values were calculated as $p_i=\mathcal{P}hi(X_i)$, with $var(X_i) = 1$ and noncentrality parameter $E(X_i)=\mu_i \sqrt{n}$. To each set of $p$-values we applied the conditionalized and the ordinary versions of the considered testing procedures at the overall significance level of $\alpha=.05$. Conditionalizing was applied with $\lambda=0.5$. A number of combinations in terms of noncentrality parameter, hypothesis count and proportion of false hypotheses was considered. For each combination we performed 10,000 replications. Fig.~\ref{fig:figureM} shows the results of a simulation where the number of false hypotheses is fixed while the number of true hypotheses increases. For the false hypotheses the value of the noncentrality parameter was set at $-2$, while for the true null hypotheses it was set at $2$. This illustrates that the power decreases rapidly with the number of true hypotheses for most non-conditionalized procedures. The only exception to this is the LR procedure. In contrast, for all of the considered conditionalized procedures the power decreases much more slowly. This shows that, with the exception of the LR procedure, the conditionalization substantially improves the power performance of the considered procedures in this setting. Among the procedures that permit a per-hypothesis decision (i.e. Bonferroni, {\v S}id{\'a}k, and FGS) it is the conditionalized FGS procedure that shows the highest power. Fig.~\ref{fig:figureP2} illustrates the influence of conditionalization on the performance of the Bonferroni and FGS methods in a setting where the percentage of true hypotheses increases while the total number of hypotheses remains fixed. The figure shows that the conditionalized FGS procedure is the overall best performing procedure among the four. \subsection{Power in pairwise comparisons of ordered means} \label{ordered} Consider a series of independent sample means $y_i \sim N(\mu_i,\sigma^2/n)$ with the compound hypothesis $H:\mu_1 \leq\mu_2\leq\ldots\leq\mu_k$. An analysis method specifically designed for this setting is the isotonic regression \citep{RobertsonWD}, although this method does not allow to deduce specifically which pairs $(\mu_i,\mu_j)$ violate the ordering specified by the null hypothesis. Alternatively, the $k(k-1)/2$ individual hypotheses $H_{ij}:\mu_i\leq\mu_j$ with $i < j$ can be analyzed using one-sided \textit{t}-tests, and the conditionalized Bonferroni or the conditionalized FGS procedures can be applied. The average correlation between the $p$-values vanishes as $m \to \infty$, thus the asymptotic control of the FWER by the CBP follows by Corollary~\ref{corol:asymptotic}. The simulations below indicate that the FWER is in fact controlled even for the small hypothesis counts. The means in this simulation were modeled as $\mu_{i+1}=\mu_i+\delta$ for $i=1,2,\ldots,k-2$, and $\mu_k = \mu_1$. Thus, most means satisfy the ordering of the hypothesis, but the last mean violates it. We used $\sigma = 1$, $n = 10$ and set $\lambda = 0.5$. Fig.~\ref{fig:figurePairs} shows the results for $k = 20$ and $k = 5$ respectively. At $\delta = 0$, it is observed that all four procedures exhibit $\text{$\mathrm{FWER}$}$ below $\alpha$. For both $k = 5$ and $k = 20$, the two conditionalized procedures perform essentially as good or better than their non-conditionalized counterparts across the whole range of $\delta \in [0.1, 3]$. Note that the same results would be obtained with, for example, $n = 90$ and $\delta \in [1/30, 1]$. 
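The ordered--means simulation just described is straightforward to reproduce. The sketch below is ours (it assumes SciPy's two-sample $t$-test with a one-sided alternative and that routine's default equal-variance setting); the helper names and the particular FWER and power summaries are illustrative choices, not taken from the paper.

\begin{verbatim}
import itertools
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)

def bonf_reject(p, alpha=0.05):
    return p <= alpha / len(p)

def cbp_reject(p, alpha=0.05, lam=0.5):
    """CBP: reject H_i iff p_i <= lam*alpha / #{j: p_j <= lam}."""
    R = np.sum(p <= lam)
    return (p <= lam) & (p <= lam * alpha / max(R, 1))

def one_replication(k=5, delta=1.0, n=10, sigma=1.0):
    # mu_1,...,mu_{k-1} increase by delta; mu_k = mu_1 violates the ordering
    mu = np.array([delta * i for i in range(k - 1)] + [0.0])
    data = [rng.normal(m, sigma, n) for m in mu]
    pairs = list(itertools.combinations(range(k), 2))
    # one-sided t-test of H_ij: mu_i <= mu_j (small p when mean_i >> mean_j)
    p = np.array([ttest_ind(data[i], data[j], alternative='greater').pvalue
                  for i, j in pairs])
    false_set = {(i, k - 1) for i in range(1, k - 1)}   # pairs with mu_i > mu_k
    is_false = np.array([pair in false_set for pair in pairs])
    return p, is_false

def estimate(k=5, delta=1.0, reps=1000, alpha=0.05):
    fwer_b = fwer_c = pow_b = pow_c = 0.0
    for _ in range(reps):
        p, is_false = one_replication(k, delta)
        rb, rc = bonf_reject(p, alpha), cbp_reject(p, alpha)
        fwer_b += np.any(rb & ~is_false); fwer_c += np.any(rc & ~is_false)
        pow_b += np.any(rb & is_false);   pow_c += np.any(rc & is_false)
    return [x / reps for x in (fwer_b, fwer_c, pow_b, pow_c)]

# [FWER Bonferroni, FWER CBP, power Bonferroni, power CBP]
print(estimate(k=5, delta=1.0, reps=500))
\end{verbatim}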
\section {Examples with real data} \subsection{Example 1 (detecting adverse effects in meta-analysis)} Suppose the effect of a medical or psychological treatment is investigated in a meta-analysis of $m$ studies in distinct populations, and that a one-sided \textit{t}-test of $H_i^*: \mathrm{var}theta_i \le 0$ is conducted in each population, yielding $p$-values $p^*_i$, where $\mathrm{var}theta_i > 0$ indicates a positive, beneficial effect of the treatment in population $i$. In a meta-analysis, one would usually test a weighted average of the effects, say $\bar{\mathrm{var}theta}=\sum\nolimits_iw_i\mathrm{var}theta_i$. However, even if the average effect $\bar{\mathrm{var}theta}$ is positive, there can be populations in which the local effect $\mathrm{var}theta_i$ is negative. It would be wise to test for such adverse effects, as the treatment should not be recommended in such populations. This means that one also has to test the opposite hypothesis, $H_i: \mathrm{var}theta_i \ge 0$, in each population, yielding $p$-values $p_i = 1 - p^*_i$ . This is a problem of the form considered in this article. Under the classical \textit{t}-test applied in the context of interval hypothesis testing, each \textit{t}-statistic has a non-central \textit{t}-distribution with the non-centrality parameter determined by the true value of the expectation $\mathrm{var}theta_i$. Under the principle of least favorability the $p$-values are obtained using the central \textit{t}-distribution. It is well-known that the ratio of densities of \textit{t}-distributions with different non-centrality parameters (and equal degrees of freedom) is monotone \citep{Kruskall1954,Lehmann1955}, and from this it follows that they are ordered in likelihood ratio. This implies that the supra-uniformity of Definition \ref{def:supre-uniform} applies \citep{Whitt1980, Whitt1982, Denuit2005}. Consequently, assuming that the $p$-values are independent, by Theorem \ref{thm:independent} it follows that in this setting the CBP and the conditionalized FGS procedure control the FWER, while the conditionalized Benjamini-Hochberg procedure \citep{BH1995,BY2001} controls the FDR. \subsection{Example 2. Detecting substandard organizations in quality benchmarking} Several countries have developed programs in which the quality of public organizations such as schools or hospitals is assessed. As stated by \citet{Ellis2013}, "such research can consist of large-scale studies where dozens [3], hundreds [4], or thousands [5, 6] of organizations are compared on one or more measures of performance or quality of care, on the basis of a sample of clients or patients from each organization". A goal of such programs is to identify under-performing organizations. For example, in the Consumer Quality Index (CQI) program of the Netherlands, the questionnaire used in 2010 to evaluate the short-term ambulatory mental health and addiction care organizations contained a question whether it was a problem to contact the therapist by phone in the evening or during the weekend in case of emergency. Now suppose that a minimum standard of 90\% satisfaction rate is imposed. Under such standard in each organization 90\% or more of the patients should answer that contacting the therapist outside office hours was not a problem. Investigating whether hospitals satisfy this minimum standard can be done using the binomial test within each hospital with the null hypothesis of type $H_0:\pi\ge.90$ where $\pi$ denotes the success rate. 
The advisory statistics team debated the question of whether a correction for multiplicity for all hospitals is required in such analyses. The arguments against correcting for all hospitals were motivated by the expected loss of power associated with multiplicity correction for all hospitals in non-conditionalized MTPs. The advantage of using a conditionalized MTP in such setting is that the presence of organizations that score high above the minimum standard does not exacerbate the severity of the multiple testing problem and much of the power is preserved even with many high-performing hospitals included in the analysis. \subsection{Example 3. Testing for manifest monotonicity in IRT}\label{manifest} In Mokken scale analysis it is recommended to test manifest monotonicity \citep{Ark2007}. With $k$ items to be tested suppose that the variables $X_1,\ldots,X_k$ indicate correctness of response for the $k$ items (with $X_i=1/0$ indicating a correct/incorrect answer for the $i$-th item). Denote the \textit{rest score} of the $i$-th item as $X_{-i}=(\sum_{j=1}^{k}X_j)-X_i$. A question of interest is whether $\pi_{ij}:=P(X_i=1\,|\,X_{-i}=j)$ is a nondecreasing function of $j$ within each item $i$. This leads to testing the $k(k-1)/2$ pairwise hypotheses $\pi_{ij'}\leq\pi_{ij}$ for $j'<j$ \citep{Ark2007}. In the subtest E of the Raven Progressive Matrices test in the data set reported by \citet{VE2000} we obtained the following result. For item 11, there were 21 pairs of rest score groups that had to be compared - small adjacent groups were joined together by the program. There were 4 violations with a maximum \textit{z}-statistic of 2.33, yielding an unmodified $p$-value $p = 0.010$. If no multiplicity correction is performed the probability of false rejection for each item undesirably increases with the number of rest score groups. The classical Bonferroni correction yields the adjusted $p$-value of $p'=0.010\times 21=0.21$, while the CBP yields the adjusted $p$-value of $p''=0.010\times 4=0.04$. As the number of items increases, the number of pairwise comparisons increases, but the average correlation between the $z$-statistics vanishes. In this situation, Corollary~\ref{corol:asymptotic} implies that the FWER is asymptotically under control, while the simulations of Section \ref{manifest} indicate that FWER control is already achieved with small hypothesis count. Thus, both the classical Bonferroni correction and the CBP control the FWER, but the CBP yields the smallest $p$-value. \subsection{Example 4. Testing for nonnegative covariances in IRT} In Mokken scale analysis and, more generally in monotone latent variable models, it is required that the test items have nonnegative covariances with each other \citep{Mokken1971,HR1986}. Two approaches are possible in item selection with this requirement. One approach is to retain only items with significantly positive covariances, and the other approach is to delete items with significantly negative covariances. We consider the latter approach here. The distribution of the standardized sample covariances converges to a normal distribution with increasing sample size, which suggests that the CBP might control the FWER in this setting. We investigated this further, both analytically and with simulations. Both approaches suggest that the FWER is indeed under control, but we intend to report the details of this in a psychometric journal. Here, consider only briefly an example. 
We have deployed this procedure on an exam with 78 multiple choice questions. There were 3003 covariances between items, of which 280 were negative. The smallest unadjusted $p$-value was 0.000243. The Bonferroni corrected $p$-value is 0.73, while the CBP with $\lambda=0.5$ yields a $p$-value of 0.14.
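For completeness, the adjusted $p$-values quoted in this example can be reproduced from the CBP rejection rule $p_i\leq\lambda\alpha/R_m(\lambda)$, which corresponds to an adjusted $p$-value of $p_i\,R_m(\lambda)/\lambda$. The short check below is ours and assumes that, with $\lambda=0.5$, the $p$-values below the threshold are essentially the 280 negative covariances; that value of $R_m(0.5)$ is our reading of the example and is not stated explicitly in the text.

\begin{verbatim}
m = 3003            # number of item covariances (hypotheses)
p_min = 0.000243    # smallest unadjusted p-value
lam = 0.5           # conditionalization threshold
R = 280             # assumed #{p_i <= lam}: the negative covariances

bonferroni_adjusted = p_min * m        # ~ 0.73, as reported
cbp_adjusted = p_min * R / lam         # ~ 0.14, as reported

print(round(bonferroni_adjusted, 2), round(cbp_adjusted, 2))
\end{verbatim}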
\section{Discussion} We have proposed a very simple and general method, called conditionalization, to deal with the presence of inflated $p$-values in multiple testing problems. Such $p$-values often arise in practice for instance when interval hypotheses are tested. We suggest to discard all hypotheses with $p$-values above a pre-chosen constant $\lambda$ (typically 0.5 or higher), and to divide the remaining $p$-values by $\lambda$ before applying the multiple testing procedure of choice. For independent $p$-values, we have proven that the conditionalized procedure controls the same error rate as the original procedure, provided null $p$-values are supra-uniform (i.e.\ dominate the standard uniform distribution in likelihood ratio order). As a rule of thumb, conditionalized procedures can be expected to be more powerful than their ordinary, non-conditionalized counterparts if there are more true hypotheses with inflated $p$-values (i.e.\ with true parameter values deep inside the null hypothesis) than there are false null hypotheses. The power gain achieved by conditionalizing can be substantial, especially for adaptive procedures that incorporate an estimate of the proportion of true null hypotheses. For the case of the conditionalized Bonferroni procedure (CBP) we conjecture that the CBP is valid when the $p$-values are positively correlated. For this case we have given several sufficient conditions for FWER control by the CBP. We accompanied these results with an extensive simulation study and the results give supporting evidence for our conjecture. Nonetheless, a proof of our conjecture still eludes us and thus remains for future research. We have shown that it is not universally valid for negatively correlated variables, however. Other topics that in our opinion deserve further attention are the question of how to optimally choose the value for the cut-off parameter (i.e. $\lambda$) and whether the procedure is valid when the $p$-values are based on discretely distributed test statistics, since these typically do not fulfill the supra-uniformity condition of Theorem \ref{thm:independent}. We believe that this paper makes a strong case for the usage of the conditionalized multiple testing procedures since they mitigate the loss of power typically associated with multiple testing procedures on inflated $p$-values and thus make it more attractive for researchers to formulate their scientific questions in terms of interval hypotheses. In light of the fact that shifting the focus towards interval hypotheses has been advocated as one of the solutions to get out of the current "$p$-value controversy" \citep{Wellek2017} this likely makes conditionalization a very powerful method of analysis. \appendix
\section{Proofs} The following notation is used. For $\mbf{p}=({p_1},\ldots,{p_m})$ and a subset $K \subseteq \{ 1,\ldots,m\}$, denote by $\mbf{p}_{\!K}$ the subvector of components ${p_i}$ with $i\in{}K$. The index set of true hypotheses is denoted by $\mathcal{T}$, that is: $i \in \mathcal{T} \Leftrightarrow ({H_i}$ is true). For a scalar $\lambda$ we write $\mbf{p}_{\!K}\leq\lambda$ or $\mbf{p}_{\!K}>\lambda$ if these inequalities hold component-wise. ${\mbf{U}}=({U_1},\ldots,{U_m})$ denotes a random vector with independent components that follow $U(0,1)$\xspace.
\subsection{Independence case} \begin{proof}[Proof of Theorem 3.2] For any set $K \subseteq \mathcal{T}$, let $\bar K = \mathcal{T} - K$, and define the orthant ${G_K} = [{{\mbf{p}}_K}\leq\lambda,{{\mbf{p}}_{\bar K}} > \lambda ]$. Conditionally on each ${G_K}$, the modified $p$-values ${p'_i} = {p_i}/\lambda$, $i\in\{1,\ldots,m\}$ are independent, and the distribution of each ${p'_i}$, $i\in\mathcal{T}$ stochastically dominates $U(0,1)$\xspace. Because $\mathcal{P}$ controls the decision rate under these circumstances, we may conclude that $E(D_{\mathcal{P}^\lambda}|G_K) \leq \alpha$. Consequently, by the law of total expectation, $E(D_{\mathcal{P}^\lambda})\leq\alpha$. \end{proof} \subsection{Bivariate normal case} In this section we formulate three additional lemmaas which together immediately imply the validity of Proposition 1. \begin{lemma} \label{lem.bivariate_PQD} Let $m=2$ and let $p_1,p_2$ be marginally standard uniformly distributed under the null hypothesis. Put $X_1=\mathcal{P}hi^{-1}(1-p_1)$ and $X_2=\mathcal{P}hi^{-1}(1-p_2)$ and let $X_1,X_2$ be positive quadrant dependent. Let $\alpha,\lambda\in(0,1)$ be such that $1-(1-\tfrac{\lambda\alpha}{2})^2+2(1-\lambda)\lambda\alpha\;\leq\;\alpha$. Then $\text{$\mathrm{FWER}$}cb\leq\alpha$. \end{lemma} \begin{proof} For fixed $\alpha\in(0,1)$ and $\lambda\in(0,1)$ the CBP rejects $H_i$ whenever $p_i\leq\lambda\alpha/R_\lambda$, where $R_\lambda=I\{p_1\leq\lambda\}+I\{p_2\leq\lambda\}$. The corresponding FWER can be written as $\text{$\mathrm{FWER}$}cb=1-P(A^p)+2[P(B^p)-P(C^p)]$, where $A^p=\{p_i\in(\tfrac{\lambda\alpha}{2},1),i=1,2\}$, $B^p=\{p_1\in(\lambda,1),p_2\in(0,\lambda\alpha)\}$ and $C^p=\{p_1\in(\lambda,1),p_2\in(0,\tfrac{\lambda\alpha}{2})\}$. Since $P(C^p)\geq0$, it holds $\text{$\mathrm{FWER}$}cb\leq1-P(A^p)+2P(B^p)$. Since $A^p$ is an "on-diagonal" quadrant, with positive quadrant dependence the probability $P(A^p)$ is \emph{minimized} under independence, when its probability is $P^\bot(A^p)=(1-\tfrac{1}{2}\lambda\alpha)^2$. Analogously, $B^p$ is an "anti-diagonal" quadrant, which means that $P(B^p)$ is \emph{maximized} under independence, thus $P(B^p)\leq{}P^\bot(B^p)=(1-\lambda)\lambda\alpha$. Consequently, $\text{$\mathrm{FWER}$}cb\leq1-(1-\tfrac{\lambda\alpha}{2})^2+2(1-\lambda)\lambda\alpha$. \end{proof} Solving this inequality with respect to $\alpha$ and $\lambda$ yields a set of combinations of $\alpha$ and $\lambda$ for which the CBP controls FWER under positive dependence. The permissible ranges are depicted in Figure \ref{fig.lambda_ok_rough_QPD}. Note that if $\lambda\leq\tfrac{1}{2}$ it holds trivially $\text{$\mathrm{FWER}$}cb\leq\alpha$ since in such case $\text{$\mathrm{FWER}$}cb$ is dominated by the FWER of the classical Bonferroni method. The lemma requires that the test statistics are positive quadrant dependent, which under the bivariate normal model with correlation $\rho$ is equivalent to $\rho\geq0$. \begin{lemma} \label{lem.bivariate_normal} Let $n=2$ and let $p_1,p_2$ be marginally standard uniformly distributed under the null hypothesis. Put $X_1=\mathcal{P}hi^{-1}(1-p_1)$ and $X_2=\mathcal{P}hi^{-1}(1-p_2)$ and let $(X_1,X_2)'\sim{}N(0,\Sigma_\rho)$ under the null hypothesis, where $\Sigma_\rho=\rho+\mathrm{diag}(1-\rho,1-\rho)$. Let $\alpha,\lambda\in(0,1)$ be such that $\alpha\lambda\leq\frac{2}{3}$. If $\rho\geq0$, then $\text{$\mathrm{FWER}$}cb\leq\alpha$. 
\end{lemma} \begin{proof} Analogously to the proof of Lemma \ref{lem.bivariate_PQD}, we can write $\text{$\mathrm{FWER}$}cb$ as \begin{align} \label{eqn.Fs} \text{$\mathrm{FWER}$}cb&\;=\;1-P_{\!\!\rho}(A^p)+2[P_{\!\!\rho}(B^p)-P_{\!\!\rho}(C^p)], \end{align} where $A^p=\{p_i\in(\tfrac{\lambda\alpha}{2},1),i=1,2\}$, $B^p=\{p_1\in(\lambda,1),p_2\in(0,\lambda\alpha)\}$, $C^p=\{p_1\in(\lambda,1),p_2\in(0,\tfrac{\lambda\alpha}{2})\}$, where we added the lower index $\rho$ into the notation $P_{\!\!\rho}$ to signify that the probability is a function of $\rho$. We proceed to show that $\text{$\mathrm{FWER}$}cb$ is a decreasing function in $\rho\in[0,1)$ whenever $\alpha\lambda\leq\frac{2}{3}$, which we do by differentiating $P_{\!\!\rho}(A^p)$, $P_{\!\!\rho}(B^p)$ and $P_{\!\!\rho}(C^p)$ with respect to $\rho$. By \citet{Tong90} (page 191) the derivative of the bivariate normal distribution function $F_\rho(x)$ with respect to $\rho$ equals its density at $x$. Consequently, $\tfrac{\partial}{\partial\rho}\,P_{\!\!\rho}(A^p)=f_\rho(z_2,z_2)$, $\tfrac{\partial}{\partial\rho}\,P_{\!\!\rho}(B^p)=-f_\rho(z_0,z_1)$, $\tfrac{\partial}{\partial\rho}\,P_{\!\!\rho}(C^p)=-f_\rho(z_0,z_2)$, where $f_\rho$ is the density function of the unit-variance bivariate normal distribution \begin{align*} f_\rho(x,y)&\;=\;\tfrac{1}{2\pi}(1-\rho^2)^{-1/2}\,\exp(-\tfrac{(x-\mu_1)^2-2\rho{}(x-\mu_1)(y-\mu_2)+(y-\mu_2)^2}{2(1-\rho^2)}), \end{align*} and $z_0=\mathcal{P}hi^{-1}(1-\lambda)$, $z_1=\mathcal{P}hi^{-1}(1-\lambda\alpha)$, $z_2=\mathcal{P}hi^{-1}(1-\tfrac{\lambda\alpha}{2})$. Therefore, $\tfrac{\partial}{\partial\rho}\,\text{$\mathrm{FWER}$}cb=-f_\rho(z_2,z_2)-2[f_\rho(z_0,z_1)-f_\rho(z_0,z_2)]$, which in turn means that $\tfrac{\partial}{\partial\rho}\,\text{$\mathrm{FWER}$}cb\leq0$ whenever \begin{align} \label{eqn.Fs_via_devatives_condition} \frac{f_\rho(z_2,z_2)}{f_\rho(z_0,z_2)}+2\frac{f_\rho(z_0,z_1)}{f_\rho(z_0,z_2)}\;\geq\;2. \end{align} The conditional distribution of $X_2\,|\,X_1=z_0$ is $N(\mu_1+\rho(z_0-\mu_2), 1-\rho^2)$, which has density $g_\rho(x;z_0)=(2\pi(1-\rho^2))^{-1/2}\exp(-\tfrac{1}{2}\,(x-\rho{}z_0-(\mu_1-\rho\mu_2))^2(1-\rho^2)^{-1})$. Consequently, \begin{align*} \frac{f_\rho(z_0,z_1)}{f_\rho(z_0,z_2)}&\;=\;\frac{g_\rho(z_1;z_0)}{g_\rho(z_2;z_0)}\;=\;\exp(-(z_1-\rho{}z_0-\mu_1+\rho\mu_2)^2+(z_2-\rho{}z_0-\mu_1+\rho\mu_2)^2). \end{align*} Similarly, the conditional distribution of $X_1\,|\,X_2=z_2$ is $N(\mu_2+\rho(z_2-\mu_1), 1-\rho^2)$, and therefore \begin{align*} \frac{f_\rho(z_2,z_2)}{f_\rho(z_0,z_2)}&\;=\;\frac{g_\rho(z_2;z_2)}{g_\rho(z_0;z_2)}\;=\;\exp(-(z_2-\rho{}z_2-\mu_2+\rho\mu_1)^2+(z_0-\rho{}z_2-\mu_2+\rho\mu_1)^2). \end{align*} Under the null hypothesis we have $\mu_1=\mu_2=0$. Define \begin{align} \label{eqn.h_rho} h(\rho)&\;=\;\exp(-(z_2-\rho{}z_2)^2+(z_0-\rho{}z_2)^2)+2\exp(-(z_1-\rho{}z_0)^2+(z_2-\rho{}z_0)^2). \end{align} Then (\ref{eqn.Fs_via_devatives_condition}) is equivalent to $h(\rho)\geq2$. Differentiating $h(\rho)$ with respect to $\rho$ yields \begin{align*} h'(\rho) &\;=\;2z_2(z_2-z_0)\exp(-(z_2-\rho{}z_2)^2+(z_0-\rho{}z_2)^2)-4z_0(z_2-z_1)\exp(-(z_1-\rho{}z_0)^2+(z_2-\rho{}z_0)^2). \end{align*} Since $z_2\geq0$, $z_2\geq{}z_1$ and $z_0\leq0$, it holds $h'(\rho)\geq0$. In other words, $h(\rho)$ is minimized at $\rho=0$, where $h(0)=\exp(z_0^2-z_2^2)+2\exp(z_2^2-z_1^2)$. Clearly, for any $|z_1|\leq|z_2|$ it holds $h(\rho)\geq2$ and the inequality (\ref{eqn.Fs_via_devatives_condition}) is satisfied. 
Since $\mathcal{P}hi^{-1}$ is strictly monotone and symmetric about $\tfrac{1}{2}$, finding the largest $\alpha$ for a given $\lambda$ such that $|z_1|\leq|z_2|$ leads to the inequality $\tfrac{1}{2}-(1-\lambda\alpha)\leq1-\tfrac{\lambda\alpha}{2}-\tfrac{1}{2}$, which is equivalent to $\lambda\alpha\leq\tfrac{2}{3}$. \end{proof} As it turns out, Lemmas \ref{lem.bivariate_PQD} and \ref{lem.bivariate_normal} together cover almost all combinations of $\alpha,\lambda\in(0,1)$. In Figure \ref{lem.bivariate_normal} the right plot shows the area (white) which is not covered by the two lemmas. Next we close the "gap" left uncovered by the two lemmas. \begin{lemma} \label{lem.bivariate_normal_gap} Let $n=2$ and let $p_1,p_2$ be marginally standard uniformly distributed under the null hypothesis. Put $X_1=\mathcal{P}hi^{-1}(1-p_1)$ and $X_2=\mathcal{P}hi^{-1}(1-p_2)$ and let $(X_1,X_2)'\sim{}N(0,\Sigma_\rho)$ under the null hypothesis, where $\Sigma_\rho=\rho+\mathrm{diag}(1-\rho,1-\rho)$. Let $\alpha,\lambda\in(0,1)$ be such that $\alpha\lambda\geq\frac{2}{3}$ and $1-(1-\tfrac{\lambda\alpha}{2})^2+2(1-\lambda)\lambda\alpha\;\geq\;\alpha$. If $\rho\geq0$, then $\text{$\mathrm{FWER}$}cb\leq\alpha$. \end{lemma} \begin{proof} It can be easily verified that $(\lambda,\alpha)=(\tfrac{2}{3},1)$ and $(\lambda,\alpha)=(\tfrac{3}{4},\tfrac{8}{9})$ are the two points for which the two inequalities in the lemma simultaneously turn into equalities. Moreover, for any $\lambda,\alpha\in(0,1)$ such that $\lambda\alpha\geq0.69$ it holds $1-(1-\tfrac{\lambda\alpha}{2})^2+2(1-\lambda)\lambda\alpha\;\leq\;\alpha$. Consequently, we only need to show that $\text{$\mathrm{FWER}$}cb\leq\alpha$ for $\lambda,\alpha\in\Omega$, where $\Omega=\{(\alpha,\lambda)\in\mathbb{R}^2:\alpha\in[\tfrac{8}{9},1),\lambda\in[\tfrac{2}{3},\tfrac{3}{4}],\tfrac{2}{3}\leq\lambda\alpha\leq0.69\}$. As discussed in the proof of Lemma \ref{lem.bivariate_normal}, it is sufficient to show that $h(\rho)\geq2$ with $h(\rho)$ defined in (\ref{eqn.h_rho}). It can be easily shown that on $\Omega$ we have $0.398<z_2<0.431$ and $-0.496<z_1<-0.43$ and both $z_1$ and $z_2$ are decreasing in $\lambda\alpha$ and so is $z_2^2-z_1^2$ with the minimum above $-0.087$. Consequently, $-2\exp(z_2^2-z_1^2)\geq1.83$ and since it also holds also $\exp(z_0^2-z_2^2)\geq\exp(z_0^2-z_2^2)\geq\exp(z_0^2-z_2^2)>0.83$, we get $h(0)\geq2$. Since it was already shown in the proof of Lemma \ref{lem.bivariate_normal} that $h(\rho)$ is decreasing in $\rho$, this concludes the proof. \end{proof} \begin{proof}[Proof of Proposition 1] The proposition is an immediate corollary of Lemmas \ref{lem.bivariate_PQD}--\ref{lem.bivariate_normal_gap}. \end{proof}
\subsection{Expectation criterion} \begin{proof}[Proof of Lemma 4.1: expectation criterion] It is sufficient to consider the cases where all tested hypotheses are true, since adding false hypotheses to the test cannot increase the FWER. Divide the interval $(0, \lambda \alpha]$ into intervals $B_k = (b_{k+1}, b_k]$ with $b_k = \lambda \alpha / k$ for $k=1,\ldots,m$, and $b_{m+1}=0$. For each $H_i$, denote with $E_i$ the probability to reject $H_i$. It is given by \begin{align*} E_i&\;=\;P(p_i \le \lambda \alpha / (R \lor 1)) \;=\;P(R \le \tfrac{\lambda \alpha}{p_i}, p_i \le \lambda \alpha)) \;=\;\sum_{k=1}^{m}{P(R \le \tfrac{\lambda \alpha}{p_i}\,|\,p_i \in B_k)P(p_i \in B_k)}. \end{align*} Since $P(R\le{}k\,|\,p_i=x)$ is assumed to be increasing in $x$, we get \begin{align*} E_i&\;\leq\;\sum_{k=1}^{m}{P(R \le k\,|\,p_i =\lambda \alpha)P(p_i \in B_k)}\\ &\;\leq\;\sum_{k=1}^{m}{P(R \le k\,|\,p_i =\lambda \alpha)(b_k-b_{k+1})\frac{F_i(\lambda \alpha)}{\lambda \alpha}}\\ &\;=\;\sum_{k=1}^{m}{P(R = k\,|\,p_i =\lambda \alpha)\frac{1}{k} F_i(\lambda \alpha)}\\ &\;=\;E(R^{-1}\,|\,p_i =\lambda \alpha)P(p_i \le \lambda \alpha). \end{align*} Therefore, $\text{$\mathrm{FWER}$}cb\le\sum_{i=1}^{m}{E(R^{-1}\,|\,p_i=\lambda\alpha)P(p_i\le\lambda\alpha)}\leq\alpha$ by the assumptions of the lemma. \end{proof} \subsection{Equicorrelated normal case} For the proof of Lemma 4.2 we need Lemma \ref{lem:binomial}. \begin{lemma} \label{lem:binomial} If $X$ is a random variable with binomial $(n, p)$ distribution, then \begin{align*} E(X+1)^{-1}\;=\;\frac{(1-(1-p)^{n+1})}{(n+1)p} \le \frac{1}{(n+1)p}. \end{align*} \end{lemma} \begin{proof} Using $\frac{1}{k+1}\binom{n}{k} = \frac{1}{n+1}\binom{n+1}{k+1}$ we obtain \begin{align*} E(X+1)^{-1}&\;=\;\frac {1}{n+1} \sum_{k=0}^{n}\tbinom{n+1}{k+1}p^k (1-p)^{n-k} \;=\;\frac{1}{(n+1)p} \sum_{k=1}^{n+1}\tbinom{n+1}{k}p^k (1-p)^{n+1-k}, \end{align*} where the last sum corresponds to the binomial distribution with parameters $n+1$ and $p$, and is thus upper-bounded by 1. \end{proof} \begin{proof}[Proof of Lemma 4.2: equicorrelated normal case] Note that the $p$-values are standard uniform, thus their distribution functions $F_i$ satisfy the condition $(F_i(y) - F_i(x))\lambda \alpha \leq F_i(\lambda \alpha)(y-x)$ of Lemma 4.1. Moreover, $P(R_m \leq k\,|\,p_i)$ is increasing in $p_i$ by Theorem 4.1 of \citet{KR1980}, since the $p_i$ are multivariate totally positive of order 2 and the function $\phi(p_1, \ldots, p_m):=\boldsymbol{1}[{(\sum_{i=1}^m\boldsymbol{1}\{p_i\le\lambda\}) \le k]}$ is increasing. It remains to prove that the the expectation criterion is satisfied. We may write ${{Z}_{i}}=\sqrt{\rho }\,\Theta +\sqrt{1-\rho }\,{{\mathrm{var}epsilon }_{i}}$, where $\Theta,{{\mathrm{var}epsilon }_{1}},\ldots,{{\mathrm{var}epsilon }_{n}}$ are independent standard normal variables. Define $z=\mathcal{P}hi^{-1}(\lambda\alpha)$ and $\Theta_z=(\Theta - \sqrt{\rho} z)/\sqrt{1-\rho}$. Immediately, $\Theta_z$ has a standard normal distribution conditionally on the event $[Z_i=z]$, and hence on $[p_i=\lambda \alpha]$. Define $X_i = \boldsymbol{1}_{\{p_i \leq \lambda\}}$. Conditionally on $\Theta_z$, the $X_i$ are independent Bernoulli variables with success probability $P(X_i=1\,|\,\Theta_z)=\mathcal{P}hi(\mu-\sqrt{\rho}\Theta_z)$. 
Therefore the conditional distribution $R\,|\,[\Theta_z,p_i=\lambda \alpha]$ is equal to the conditional distribution $R\,|\,[\Theta_z,X_i=1]$, and Lemma \ref{lem:binomial} yields $E(R^{-1}\,|\,\Theta_z,p_i=\lambda\alpha)\le(m\Phi(\mu-\sqrt{\rho}\Theta_z))^{-1}$. Taking the expectation over $\Theta_z$ and using (2) yields $E(R^{-1}\,|\,p_i=\lambda\alpha)\le(m\lambda)^{-1}$, which in turn implies that the expectation criterion is satisfied. The conclusion then follows by Lemma 4.1. \end{proof} \subsection{Mixtures} \begin{proof}[Proof of Proposition 2: mixtures] By the law of total expectation it follows that $E(D_{\mathcal{P}}) = E(E(D_{\mathcal{P}}|w)) \leq \alpha$. \end{proof} \subsection{Asymptotic control} \begin{proof}[Proof of Proposition 3: asymptotic case] For a given $\varepsilon'>0$ we prove that, for $m$ sufficiently large, $\mathrm{FWER}\le\alpha+\varepsilon'$. First, note that there is an $\varepsilon>0$ such that $\frac{\eta +\varepsilon}{\eta -\varepsilon }\le 1+\tfrac{1}{2}\varepsilon'$. Moreover, as a consequence of the convergence in probability, for $m$ large enough it holds that $P(|R/m-\eta|\ge\varepsilon)\le\tfrac{1}{2}\varepsilon'$ and $E(R/m)\le\eta+\varepsilon$. Then \begin{align*} \mathrm{FWER}&\;=\;P\Big(\textstyle\bigcup\limits_{i=1}^{m}{p_i<\frac{\alpha \lambda }{R}}\Big) \\ &\;=\;P\Big(\textstyle\bigcup\limits_{i=1}^{m}{p_i<\frac{\alpha\lambda}{R}},\frac{R}{m}\ge\eta-\varepsilon\Big) +P\Big(\textstyle\bigcup\limits_{i=1}^{m}{p_i<\frac{\alpha\lambda}{R}},\frac{R}{m}<\eta-\varepsilon\Big)\\ &\;\le\;P\Big(\textstyle\bigcup\limits_{i=1}^{m}{p_i<\frac{\alpha\lambda}{(\eta-\varepsilon )m}}\Big)+P\big(\frac{R}{m}<\eta -\varepsilon\big) \\ &\;\le\; \sum\limits_{i=1}^{m}{P(p_i<\tfrac{\alpha \lambda }{(\eta -\varepsilon )m})}+\frac{\varepsilon'}{2} \\ &\;=\;\sum\limits_{i=1}^{m}{P(p_i<\tfrac{\alpha \lambda }{(\eta -\varepsilon )m}\,|\,p_i\le \lambda )}\,P(p_i\le \lambda )+\frac{\varepsilon'}{2} \\ &\;\le\; \sum\limits_{i=1}^{m}{\frac{\alpha }{(\eta -\varepsilon )m}}\,P(p_i\le \lambda )+\frac{\varepsilon'}{2} \\ &\;=\; \frac{\alpha }{\eta -\varepsilon }E(R/m)+\frac{\varepsilon'}{2} \\ &\;\le\; \alpha\frac{\eta +\varepsilon }{\eta -\varepsilon }+\frac{\varepsilon'}{2} \\ &\;\le\; \alpha+\varepsilon'. \end{align*} \end{proof} \begin{proof}[Proof of Corollary 5.1] From \begin{align*} \mathrm{var}(R)&\;=\;\sum\limits_{i=1}^{m}\mathrm{var}(\mathbf{1}[p_i\le\lambda ]) +\sum\limits_{i=1}^{m}\sum\limits_{j=1,j\ne i}^{m}\text{cov}(\mathbf{1}[p_i\le\lambda],\mathbf{1}[p_j\le \lambda ]) \end{align*} it follows that $\mathrm{var}(R/m)\le \frac{1}{4}(\frac{1}{m}+\bar\rho_m)$, where the right-hand side goes to 0 as $m\to\infty$. The rest follows by Proposition 3. \end{proof} \begin{figure}\label{fig:figureM} \end{figure} \begin{figure}\label{fig:figureP2} \end{figure} \begin{figure}\label{fig:figurePairs} \end{figure} \begin{figure} \caption{Permissible ranges for $\lambda$ given $\alpha$. On the left the grey area shows the permissible combinations of $\lambda$ and $\alpha$ based on Lemma \ref{lem.bivariate_PQD}.\label{fig.lambda_ok_rough_QPD}} \end{figure} \end{document}
2,644
19,769
en
train
0.111.0
\begin{document} \title{Structure-preserving $H^2$ optimal model reduction\\ based on Riemannian trust-region method} \author{Kazuhiro Sato and Hiroyuki Sato \thanks{K. Sato is with the School of Regional Innovation and Social Design Engineering, Kitami Institute of Technology, Hokkaido 090-8507, Japan, email: [email protected]} \thanks{H. Sato is with the Department of Information and Computer Technology, Tokyo University of Science, Tokyo, 125-8585 Japan, email: [email protected]} } \markboth{This paper was published in IEEE Transactions on Automatic Control (DOI: 10.1109/TAC.2017.2723259)} {Sato \MakeLowercase{\textit{and}} Sato: Structure-preserving $H^2$ optimal model reduction} \maketitle \begin{abstract} This paper studies stability and symmetry preserving $H^2$ optimal model reduction problems of linear systems which include linear gradient systems as a special case. The problem is formulated as a nonlinear optimization problem on the product manifold of the manifold of symmetric positive definite matrices and two Euclidean spaces. To solve the problem by using the trust-region method, the gradient and Hessian of the objective function are derived. Furthermore, it is shown that if we restrict our systems to gradient systems, the gradient and Hessian can be obtained more efficiently. More concretely, by symmetry, we can reduce the number of linear matrix equations to be solved. In addition, by a simple example, we show that the solutions to our problem and to a similar problem in the literature are not unique, and that the solution sets of the two problems do not contain each other in general. Also, it is revealed that the attained optimal values do not coincide. Numerical experiments show that the proposed method gives a reduced system with the same structure as the original system, whereas the balanced truncation method does not. \end{abstract} \begin{IEEEkeywords} $H^2$ optimal model reduction, Riemannian optimization, structure-preserving model reduction. \end{IEEEkeywords} \IEEEpeerreviewmaketitle \section{Introduction} \IEEEPARstart{M}{odel} reduction methods reduce the dimension of the state of a given system to facilitate controller design. The most famous method is the balanced truncation method, which gives a stable reduced order model with guaranteed $H^{\infty}$ error bounds \cite{antoulas2005approximation, moore1981principal}. Another well-known method is the moment matching method \cite{astolfi2010model, ionescu2014families}, which gives a reduced system matching some coefficients of the transfer function of a given linear system. In particular, \cite{ionescu2013moment, scherpen2011balanced} have discussed structure preserving model reduction methods which give a reduced model with the same structure as the original system. However, the previous methods do not guarantee any optimality. In \cite{sato2015riemannian, yan1999approximate}, $H^2$ optimal model reduction problems have been studied. Reference \cite{yan1999approximate} has formulated the problem as an optimization problem for minimizing the $H^2$ norm performance index subject to orthogonality constraints. The constrained optimization problem can be regarded as an unconstrained problem on the Stiefel manifold. To solve the problem, an iterative gradient flow method has been proposed in \cite{yan1999approximate}.
Reference \cite{sato2015riemannian} has reformulated the optimization problem on the Stiefel manifold into one on the Grassmann manifold because the objective function of the $H^2$ optimal model reduction problem is invariant under actions of the orthogonal group. To solve the problem, Riemannian trust-region methods have been proposed in \cite{sato2015riemannian}. The methods proposed in \cite{sato2015riemannian, yan1999approximate} preserve the stability and symmetry properties of the original system. However, they may not give good reduced models, as shown in Section \ref{sec4} of this paper. To obtain better reduced models, this paper further investigates the stability and symmetry preserving $H^2$ optimal model reduction problem of linear systems which include gradient systems as a special case. In particular, the present paper reformulates the problems in \cite{sato2015riemannian, yan1999approximate} as an optimization problem on the product manifold of the manifold of symmetric positive definite matrices and two Euclidean spaces. To the best of our knowledge, this approach is novel. A global optimal solution in this formulation gives a value of the objective function that is not larger than that in \cite{sato2015riemannian, yan1999approximate}. Furthermore, the $H^2$ optimal model reduction problem of gradient systems is formulated as another specific optimization problem. The contributions of this paper are as follows.\\ 1) We derive the gradient and Hessian of the objective function of the new optimization problem. By using them, we can apply the Riemannian trust-region method to solve the new problem. Furthermore, it is shown that if we restrict our systems to the gradient systems, the gradient and Hessian can be obtained more efficiently. More concretely, by symmetry, we can reduce the number of linear matrix equations to be solved. Some numerical experiments demonstrate that the proposed Riemannian trust-region method gives a reduced system which is sufficiently close to the original system, even when the balanced truncation method and the method in \cite{sato2015riemannian} do not. \\ 2) By a simple example, we show that the solutions to our problem and to the problem in \cite{sato2015riemannian, yan1999approximate} are not unique and that the solution sets of the two problems do not in general contain each other. Also, it is revealed that the attained optimal values do not coincide in general. This paper is organized as follows. In Section \ref{sec2}, we formulate the structure preserving $H^2$ optimal model reduction problem on the manifold. In Section \ref{sec3}, we first review the geometry of the manifold of symmetric positive definite matrices. Next, we derive the Euclidean gradient of the objective function, and then we give the Riemannian gradient and Riemannian Hessian to develop the trust-region method. In Section \ref{sec_Pro2}, the $H^2$ optimal model reduction problem of gradient systems is discussed. In Section \ref{secnew_5}, we study the difference between our problem and the problem in \cite{sato2015riemannian, yan1999approximate}. Section \ref{sec4} presents numerical experiments that investigate the performance of the proposed method. We demonstrate that the objective function in the case of the proposed method takes a smaller value than those of the balanced truncation method and the method in \cite{sato2015riemannian}. Furthermore, the experiments indicate that although the balanced truncation method does not preserve the original structure, the proposed method does.
The conclusion is presented in Section \ref{sec5}. {\it Notation:} The sets of real and complex numbers are denoted by ${\bf R}$ and ${\bf C}$, respectively. The identity matrix of size $n$ is denoted by $I_n$. The symbols ${\rm Sym}(n)$ and ${\rm Skew}(n)$ denote the sets of symmetric and skew-symmetric matrices in ${\bf R}^{n\times n}$, respectively. The set of symmetric positive definite matrices in ${\bf R}^{n\times n}$ is denoted by ${\rm Sym}_+(n)$. The symbols $GL(r)$ and $O(r)$ are the general linear group and the orthogonal group of degree $r$, respectively. Given a matrix $A\in {\bf R}^{n\times n}$, ${\rm tr} (A)$ denotes the sum of the diagonal elements of $A$ and ${\rm sym}(A)$ denotes the symmetric part of $A$; i.e., ${\rm sym} (A) = \frac{A+A^T}{2}$. Here, $A^T$ denotes the transposition of $A$. The tangent space at $x$ on a manifold $\mathcal{M}$ is denoted by $T_x \mathcal{M}$. Given a smooth function $f$ on a manifold $\mathcal{M} \subset {\bf R}^{n_1\times n_2}$, the symbol $\bar{f}$ is the extension of $f$ to the ambient Euclidean space ${\bf R}^{n_1\times n_2}$. The symbols $\nabla$ and ${\rm grad}$ denote the Euclidean and Riemannian gradients, respectively; i.e., given a smooth function $f$ on a manifold $\mathcal{M} \subset {\bf R}^{n_1\times n_2}$, $\nabla$ and ${\rm grad}$ act on ${\bar f}$ and $f$, respectively. The symbol ${\rm Hess}$ denotes the Riemannian Hessian. Given a transfer function $G$, $||G||_{H^2}$ denotes the $H^2$ norm of $G$.
2,246
18,726
en
train
0.111.1
\section{Problem setup} \label{sec2} We consider the $H^2$ optimal model reduction problem of a linear time invariant system \begin{align} \begin{cases} \dot{x} = -Ax +Bu, \\ y = Cx, \end{cases} \label{1} \end{align} where $x\in {\bf R}^n$, $u\in {\bf R}^m$, and $y\in {\bf R}^p$ are the state, input, and output, respectively, and where $A\in {\bf R}^{n\times n}$, $B\in {\bf R}^{n\times m}$, and $C\in {\bf R}^{p\times n}$ are constant matrices. Throughout this paper, we assume $A\in {\rm Sym}_{+}(n)$, and thus all the eigenvalues of $-A$ are negative. Thus, the original system has a stable and symmetric state transition matrix. Note that if $m=p$ (i.e., the number of the output variables is the same as that of the input variables) and $B^T=C$, then the system \eqref{1} is a linear gradient system \cite{ionescu2013moment, scherpen2011balanced}. Note also that the following discussion fully exploits the symmetry of $A$ and does not apply to systems with a non-symmetric matrix $A$. The structure preserving $H^2$ optimal model reduction problem in this paper is to find $A_r\in {\rm Sym}_+(r)$, $B_r\in {\bf R}^{r\times m}$, and $C_r\in {\bf R}^{p\times r}$ for a fixed integer $r$ $(<n)$ such that the associated reduced system \begin{align} \begin{cases} \dot{x}_r = -A_rx_r +B_ru, \\ y_r = C_rx_r \end{cases} \label{2} \end{align} best approximates the original system \eqref{1} in the sense that the $H^2$ norm of the transfer function of the error system between the original system \eqref{1} and the reduced system \eqref{2} is minimized. The stability and symmetry of the state transition matrix are preserved because the reduced matrices $A_r$, $B_r$, and $C_r$ have the same structures as the original matrices $A$, $B$, and $C$, respectively. Note that the symmetry preservation is significant because the symmetry implies that no oscillations occur when $u=0$. This is because all the eigenvalues of any symmetric matrix are real. If the state transition matrix of the reduced system \eqref{2} were not symmetric, oscillations could be observed under $u=0$, in contrast to the case of the original system \eqref{1}. The optimization problem to be solved is stated as follows. \begin{framed} Problem 1: \begin{align*} &{\rm minimize} \quad J(A_r,B_r,C_r), \\ &{\rm subject\, to} \quad (A_r,B_r,C_r)\in M. \end{align*} \end{framed} \noindent Here, \begin{align} \label{eq_J} J(A_r,B_r,C_r):=||G-G_r||_{H^2}^2, \end{align} where \begin{align*} G(s) := C(sI_n +A)^{-1}B, \quad s\in {\bf C} \end{align*} is the transfer function of the original system \eqref{1}, $G_r$ is the transfer function of the reduced system \eqref{2}, and \begin{align*} M:= {\rm Sym}_+(r) \times {\bf R}^{r\times m} \times {\bf R}^{p\times r}. \end{align*} Since all the eigenvalues of $-A$ and $-A_r$ are negative, the objective function $J(A_r,B_r,C_r)$ can be expressed as \begin{align*} J(A_r,B_r,C_r) &= {\rm tr}( C\Sigma_c C^T +C_rPC_r^T-2C_rX^TC^T) \\ &= {\rm tr} ( B^T\Sigma_o B +B_r^TQB_r + 2B^TYB_r), \end{align*} where the matrices $\Sigma_c$, $\Sigma_o$, $P$, $Q$, $X$, and $Y$ are the solutions to \begin{align} A\Sigma_c +\Sigma_cA -BB^T &= 0, \nonumber\\ A\Sigma_o + \Sigma_o A- C^TC &= 0, \nonumber\\ A_rP+PA_r-B_rB_r^T &=0, \label{3} \\ A_rQ+QA_r-C_r^TC_r &=0, \label{4} \\ AX+XA_r-BB_r^T &=0, \label{5} \\ AY+YA_r+C^TC_r &=0, \label{6} \end{align} respectively. A similar discussion can be found in \cite{sato2015riemannian}, which contains a more detailed explanation of the calculation.
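For concreteness, the following Python sketch (ours, not the authors' implementation) evaluates the objective $J(A_r,B_r,C_r)$ of Problem 1 through the Gramian-type equations \eqref{3}--\eqref{6}, using the Sylvester solver of SciPy; the function name \texttt{h2\_error\_sq} and the random test data are introduced here only for illustration.
\begin{verbatim}
import numpy as np
from scipy.linalg import solve_sylvester

def h2_error_sq(A, B, C, Ar, Br, Cr):
    """Return J(A_r,B_r,C_r) = ||G - G_r||_{H^2}^2 via equations (3) and (5)."""
    Sc = solve_sylvester(A, A, B @ B.T)     # A Sigma_c + Sigma_c A = B B^T
    P = solve_sylvester(Ar, Ar, Br @ Br.T)  # A_r P + P A_r = B_r B_r^T, eq. (3)
    X = solve_sylvester(A, Ar, B @ Br.T)    # A X + X A_r = B B_r^T,     eq. (5)
    return float(np.trace(C @ Sc @ C.T + Cr @ P @ Cr.T - 2 * Cr @ X.T @ C.T))

# usage sketch: a random stable symmetric system and the trivial reduced model
rng = np.random.default_rng(0)
M0 = rng.standard_normal((4, 4))
A = M0 @ M0.T + 4 * np.eye(4)               # A in Sym_+(4)
B = rng.standard_normal((4, 2))
C = rng.standard_normal((1, 4))
Ar, Br, Cr = np.eye(2), np.zeros((2, 2)), np.zeros((1, 2))
print(h2_error_sq(A, B, C, Ar, Br, Cr))     # equals ||G||_{H^2}^2 since G_r = 0
\end{verbatim}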
As mentioned earlier, if $m=p$ and $B^T=C$, then the system \eqref{1} is a stable gradient system \cite{ionescu2013moment, scherpen2011balanced}. If this is the case, Problem 1 can be replaced with the following problem. \begin{framed} Problem 2: \begin{align*} &{\rm minimize} \quad \tilde{J}(A_r,B_r), \\ &{\rm subject\, to} \quad (A_r,B_r)\in \tilde{M}. \end{align*} \end{framed} \noindent Here, \begin{align*} \tilde{J}(A_r,B_r) := || B^T(sI_n+A)^{-1}B - B_r^T(sI_r+A_r)^{-1}B_r||^2_{H^2} \end{align*} and \begin{align*} \tilde{M}:= {\rm Sym}_+(r)\times {\bf R}^{r\times m}. \end{align*} We develop optimization algorithms for solving Problems 1 and 2 in Sections \ref{sec3} and \ref{sec_Pro2}, respectively. \begin{remark} We can also consider the reduced system expressed by \begin{align*} \begin{cases} \dot{x}_r = -U^TAUx_r +U^TB u, \\ y_r = CU x_r \end{cases} \end{align*} for $U$ belonging to the Stiefel manifold ${\rm St}(r,n):= \left\{ U\in {\bf R}^{n\times r}\, |\, U^TU =I_r\right\}$. Then, Problem 1 is replaced with the following optimization problem on the Stiefel manifold. \begin{framed} Problem 3: \begin{align*} &{\rm minimize} \quad J(U^TAU,U^TB,CU), \\ &{\rm subject\, to} \quad U \in {\rm St}(r,n). \end{align*} \end{framed} \noindent Reference \cite{sato2015riemannian} has proposed a trust-region method for solving Problem 3. \end{remark} \begin{remark} It is beneficial to consider Problem 1 instead of Problem 3. In fact, if $U_*$ is a global optimal solution to Problem 3, then $(U^T_*AU_*,U_*^TB,CU_*)\in M$ is a feasible solution to Problem 1; i.e., the minimum value of Problem 3 is not smaller than that of Problem 1. In Section \ref{sec4}, we verify this fact numerically. Furthermore, we give an example in Section \ref{secnew_5} which shows that the critical points of Problems 1 and 3 do not necessarily coincide with each other. \end{remark} \begin{remark} A possible drawback of Problem 1 is that the problem may not have a solution because the Riemannian manifold $M$ is not compact. However, we always obtained solutions by using the trust-region method in this paper. We leave a general mathematical analysis of the existence of solutions to Problem 1 to future work. \end{remark}
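The second remark above can be illustrated numerically: any $U$ with orthonormal columns yields a point $(U^TAU,U^TB,CU)\in M$, i.e., a feasible point of Problem 1. The following sketch (ours) checks this for a randomly generated symmetric positive definite $A$; all variable names are illustrative.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n, m, p, r = 5, 2, 2, 3
M0 = rng.standard_normal((n, n))
A = M0 @ M0.T + n * np.eye(n)                     # A in Sym_+(n)
B = rng.standard_normal((n, m))
C = rng.standard_normal((p, n))

U, _ = np.linalg.qr(rng.standard_normal((n, r)))  # U with orthonormal columns
Ar, Br, Cr = U.T @ A @ U, U.T @ B, C @ U          # a feasible point of Problem 1

print(np.allclose(Ar, Ar.T))                      # A_r is symmetric
print(np.all(np.linalg.eigvalsh(Ar) > 0))         # A_r is positive definite
\end{verbatim}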
2,116
18,726
en
train
0.111.2
\section{Optimization algorithm for Problem 1} \label{sec3} \subsection{General Riemannian trust-region method} We first review Riemannian optimization methods, including the Riemannian trust-region method, following \cite{absil2009optimization}, for readability of the subsequent subsections. We also refer to \cite{absil2009optimization} for schematic figures of Riemannian optimization. In this subsection, we consider a general Riemannian optimization problem to minimize an objective function $h$ defined on a Riemannian manifold $\mathcal{M}$. In optimization on the Euclidean space $\mathcal{E}$, we can compute a point $x_+ \in \mathcal{E}$ from the current point $x \in \mathcal{E}$ and the search direction $d \in \mathcal{E}$ as $x_+=x+d$. However, this update formula cannot be used on $\mathcal{M}$ since $\mathcal{M}$ is not generally a Euclidean space. For $x \in \mathcal{M}$ and $\xi \in T_x \mathcal{M}$, $x+\xi$ is not defined in general. Even if $\mathcal{M}$ is a submanifold of the Euclidean space $\mathcal{E}$ and $x+\xi$ is defined as a point in $\mathcal{E}$, it is not generally on $\mathcal{M}$. Therefore, we seek a next point $x_+$ on a curve on $\mathcal{M}$ emanating from $x$ in the direction of $\xi$. Such a curve is defined by using a map called an exponential mapping ${\rm Exp}$, which is defined through a curve called a geodesic. More concretely, for any $x, y \in \mathcal{M}$ on a geodesic which are sufficiently close to each other, the piece of the geodesic between $x$ and $y$ is the shortest among all curves connecting the two points. For any $\xi\in T_x\mathcal{M}$, there exists an interval $I \subset {\bf R}$ around $0$ and a unique geodesic $\Gamma_{(x,\xi)}: I\rightarrow \mathcal{M}$ such that $\Gamma_{(x,\xi)}(0)=x$ and $\dot{\Gamma}_{(x,\xi)}(0)=\xi$. The exponential mapping ${\rm Exp}$ at $x\in \mathcal{M}$ is then defined through this curve as \begin{align} {\rm Exp}_x(\xi):=\Gamma_{(x,\xi)}(1). \label{exp_def} \end{align} This is well defined because the geodesic $\Gamma_{(x,\xi)}$ has the homogeneity property $\Gamma_{(x,a\xi)}(t)=\Gamma_{(x,\xi)}(at)$ for any $a\in {\bf R}$ satisfying $at \in I$. We can thus compute a point $x_+$ as \begin{equation} x_+= {\rm Exp}_x(\xi). \label{next_step} \end{equation} In the trust-region method, at the current point $x$, we compute a second-order approximation of the objective function $h$ based on the Taylor expansion. We minimize the second-order approximation in a ball whose radius is called the trust-region radius. See Section \ref{Sec:3.E} for a detailed discussion of the trust-region method for our problem. We thus need the first- and second-order derivatives of $h$, which are characterized by the Riemannian gradient and Hessian of $h$. Since $\mathcal{M}$ is a Riemannian manifold, $\mathcal{M}$ has a Riemannian metric $\langle \cdot, \cdot\rangle$, which endows the tangent space $T_x \mathcal{M}$ at each point $x \in \mathcal{M}$ with an inner product $\langle \cdot, \cdot \rangle_x$. The gradient ${\rm grad}\, h(x)$ of $h$ at $x \in \mathcal{M}$ is defined as the tangent vector at $x$ which satisfies \begin{equation} {\rm D}h(x)[\xi] = \langle {\rm grad}\, h(x), \xi\rangle_x \label{grad_def} \end{equation} for any $\xi \in T_x \mathcal{M}$. Here, the left-hand side of \eqref{grad_def} denotes the directional derivative of $h$ at $x$ in the direction $\xi$. The Hessian ${\rm Hess}\, h(x)$ of $h$ at $x$ is defined via the covariant derivative of the gradient ${\rm grad}\, h(x)$.
If $\mathcal{M}$ is a Riemannian submanifold of Euclidean space, we can compute the Hessian ${\rm Hess}\, h(x)$ by using the gradient ${\rm grad}\, h(x)$ and the orthogonal projection onto the tangent space $T_x \mathcal{M}$. Based on this fact, we derive the gradient and Hessian of our objective function $J$ in Section \ref{Sec:3.D}. \subsection{Difficulties when we apply the general Riemannian trust-region method} This subsection points out difficulties that arise when we apply the above general Riemannian trust-region method. The first difficulty is to obtain the geodesic $\Gamma_{(x,\xi)}$. In fact, to get $\Gamma_{(x,\xi)}$, we may need to solve a nonlinear differential equation in a local coordinate system around $x\in \mathcal{M}$. The equation may only be approximately solved by a numerical integration scheme. The numerical integration consumes a large amount of time in many cases. As a result, it is difficult to obtain the exponential map defined by \eqref{exp_def} in general. The second difficulty is how to choose a Riemannian metric $\langle \cdot, \cdot \rangle$. Since the gradient ${\rm grad}\,h(x)$ defined by \eqref{grad_def} depends on the Riemannian metric, we should adopt a metric in such a manner that we can obtain the gradient in a short time. However, such a choice may imply that the manifold $\mathcal{M}$ is not geodesically complete. Here, a Riemannian manifold is called geodesically complete if the exponential mapping is defined for every tangent vector at any point. If $\mathcal{M}$ is not geodesically complete, we have to carefully choose $\xi$ in \eqref{next_step} in such a manner that $x_+$ is contained in $\mathcal{M}$. This leads to computational inefficiency. For example, consider the manifold ${\rm Sym}_{+}(r)$. Since ${\rm Sym}_{+}(r)$ is a submanifold of the vector space ${\rm Sym}(r)$, we can consider the metric $\langle \cdot,\cdot \rangle$ induced from the natural inner product in the ambient space ${\rm Sym}(r)$ as \begin{align} \langle \xi_1, \xi_2 \rangle_{S}:= {\rm tr}(\xi_1 \xi_2) \label{normal_metric} \end{align} for $\xi_1, \xi_2\in T_S {\rm Sym}_+(r)$. Here, $T_S {\rm Sym}_+(r)\cong {\rm Sym}(r)$ as explained in \cite{lang1999fundamentals}. Then, the exponential map is simply given by ${\rm Exp}_S(\xi) = S+\xi.$ However, $S+\xi \not \in {\rm Sym}_+(r)$ for some $\xi\in T_S {\rm Sym}_+(r)$ because positive definiteness may be lost. This means that ${\rm Sym}_+(r)$ is not geodesically complete. As a result, we have to carefully choose $\xi \in T_S {\rm Sym}_+(r)$. Our following discussion overcomes these difficulties. \subsection{Geometry of the manifold ${\rm Sym}_{+}(r)$} \label{Sec:3.B} This subsection introduces another Riemannian metric on the manifold ${\rm Sym}_{+}(r)$ \cite{lang1999fundamentals, Gallier2016, helgason1979differential, helmke1996optimization, pennec2006riemannian}. This is useful for developing an optimization algorithm for solving Problem 1 for the following reasons:\\ 1) The geodesic is given by a closed-form expression. That is, we do not have to integrate a nonlinear differential equation.\\ 2) The manifold ${\rm Sym}_+(r)$ is then geodesically complete, in contrast to the case of the Riemannian metric \eqref{normal_metric}. That is, ${\rm Exp}_S(\xi)\in {\rm Sym}_+(r)$ is always defined for any $\xi\in T_S{\rm Sym}_+(r)$.
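The contrast described above can be observed numerically. The following sketch (ours, for illustration only) shows that the Euclidean update $S+\xi$ may leave ${\rm Sym}_+(2)$, whereas the exponential map \eqref{8} of the affine-invariant metric introduced in the next subsection returns a symmetric positive definite matrix.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

S = np.array([[2.0, 0.5], [0.5, 1.0]])       # a point of Sym_+(2)
xi = np.array([[-5.0, 0.0], [0.0, 0.0]])     # a symmetric tangent vector

print(np.linalg.eigvalsh(S + xi))            # one eigenvalue is negative:
                                             # the Euclidean update leaves Sym_+(2)

w, V = np.linalg.eigh(S)                     # S^{1/2}, S^{-1/2} via eigendecomposition
Sh = V @ np.diag(np.sqrt(w)) @ V.T
Shi = V @ np.diag(1.0 / np.sqrt(w)) @ V.T
ExpS = Sh @ expm(Shi @ xi @ Shi) @ Sh        # exponential map (8)
print(np.linalg.eigvalsh(ExpS))              # all eigenvalues are positive
\end{verbatim}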
For $\xi_1$, $\xi_2\in T_S {\rm Sym}_{+}(r)$, we define the Riemannian metric as \begin{align} \langle \xi_1, \xi_2\rangle_S := {\rm tr} ( S^{-1} \xi_1 S^{-1} \xi_2 ), \label{metric} \end{align} which is invariant under the group action $\phi_g: S \to gSg^T$ for $g \in GL(r)$; i.e., $\langle {\rm D} \phi_g(\xi_1), {\rm D} \phi_g(\xi_2) \rangle_{\phi_g(S)} = \langle \xi_1,\xi_2 \rangle_S$, where the map ${\rm D}\phi_g: T_S {\rm Sym}_+(r) \rightarrow T_S {\rm Sym}_+(r)$ is a derivative map given by ${\rm D}\phi_g(\xi) =g\xi g^T$. The proof that \eqref{metric} is a Riemannian metric can be found in Chapter XII in \cite{lang1999fundamentals}. Let $f: {\rm Sym}_+(r) \rightarrow {\bf R}$ be a smooth function and $\bar{f}$ the extension of $f$ to the Euclidean space ${\bf R}^{r\times r}$. The relation of the Euclidean gradient $\nabla \bar{f}(S)$ and the directional derivative ${\rm D}\bar{f}(S)[\xi]$ of $\bar{f}$ at $S$ in the direction $\xi$ is given by \begin{align} {\rm tr}\, (\xi^T\nabla \bar{f}(S)) ={\rm D} \bar{f}(S)[\xi]. \end{align} \noindent The Riemannian gradient ${\rm grad}\, f(S)$ is given by \begin{align} \langle {\rm grad} f (S), \xi \rangle_S &= {\rm D}f(S)[\xi] \\ &= {\rm D}\bar{f}(S)[\xi] \\ &= {\rm tr}\, (\xi^T {\rm sym}(\nabla \bar{f}(S)) ). \label{relation} \end{align} Here, we have used $\xi=\xi^T$. From \eqref{metric} and \eqref{relation}, we obtain \begin{align} {\rm grad}\, f(S) = S {\rm sym}(\nabla \bar{f}(S)) S. \label{gradient} \end{align} According to Section 4.1.4 in \cite{jeuris2012survey}, the Riemannian Hessian ${\rm Hess}\,f(S): T_S {\rm Sym}_+(r) \rightarrow T_S {\rm Sym}_+(r)$ of the function $f$ at $S\in {\rm Sym}_+(r)$ is given by \begin{align} {\rm Hess}\,f(S)[\xi] = {\rm D}{\rm grad}\, f(S)[\xi] - {\rm sym} ({\rm grad}\, f(S) S^{-1} \xi ). \label{Hess1} \end{align} Hence, \eqref{gradient} and \eqref{Hess1} yield \begin{align} {\rm Hess}\,f(S)[\xi] =S {\rm sym}( {\rm D} \nabla \bar{f}(S) [\xi] )S + {\rm sym} (\xi {\rm sym} (\nabla \bar{f}(S)) S ). \label{Hess} \end{align} The geodesic $\Gamma_{(S,\xi)}$ on the manifold ${\rm Sym}_+(r)$ going through a point $S\in {\rm Sym}_+(r)$ with a tangent vector $\xi\in T_S {\rm Sym}_+(r)$ is given by \begin{align} \Gamma_{(S,\xi)}(t) = \phi_{\exp (t \zeta)}(S) \label{geo1} \end{align} with $\xi = \zeta S + S \zeta$ for $\zeta\in T_{I_r} {\rm Sym}_+(r)$; i.e., the geodesic is the orbit of the one-parameter subgroup $\exp (t\zeta)$, where $\exp$ is the matrix exponential function. The relation \eqref{geo1} follows from the fact that ${\rm Sym}_+(r)$ is a reductive homogeneous space. For convenience, we prove it in Appendix \ref{ape1}. A detailed explanation of the expression \eqref{geo1} can be found in \cite{Gallier2016}. To simplify \eqref{geo1}, we consider the geodesic going through the origin $I_r =\phi_{S^{-1/2}}(S) \in {\rm Sym}_+(r)$ because the Riemannian metric given by \eqref{metric} is invariant under the group action. In this case, we get $\zeta=\frac{1}{2}\xi$ and $\Gamma_{(I_r,\xi)}(t) = \exp (t\xi)$. Hence, \begin{align*} \Gamma_{(S,\xi)}(t) &= \phi_{S^{1/2}} (\Gamma_{(I_r, {\rm D}\phi_{S^{-1/2}}(\xi) )} (t)) \\ &= S^{\frac{1}{2}} \exp (t S^{-\frac{1}{2}} \xi S^{-\frac{1}{2}} ) S^{\frac{1}{2}}. \end{align*} Therefore, the exponential map on ${\rm Sym}_+(r)$ is given by \begin{align} {\rm Exp}_S (\xi) := \Gamma_{(S,\xi)}(1) = S^{\frac{1}{2}} \exp (S^{-\frac{1}{2}} \xi S^{-\frac{1}{2}} ) S^{\frac{1}{2}}. 
\label{8} \end{align} Since ${\rm Exp} : T_S {\rm Sym}_+(r) \rightarrow {\rm Sym}_+(r)$ is a bijection \cite{Gallier2016}, ${\rm Sym}_+(r)$ endowed with the Riemannian metric \eqref{metric} is geodesically complete in contrast to the case of \eqref{normal_metric}.
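As a sanity check of the formulas \eqref{gradient} and \eqref{8}, the following sketch (ours) compares, for the toy function $f(S)=-\log\det S$ whose Euclidean gradient is $-S^{-1}$, the directional derivative of $f$ along a geodesic with the value $\langle {\rm grad}\,f(S),\xi\rangle_S$ predicted by the metric \eqref{metric}.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

def exp_map(S, xi):
    """Exponential map (8): S^{1/2} expm(S^{-1/2} xi S^{-1/2}) S^{1/2}."""
    w, V = np.linalg.eigh(S)
    Sh = V @ np.diag(np.sqrt(w)) @ V.T
    Shi = V @ np.diag(1.0 / np.sqrt(w)) @ V.T
    return Sh @ expm(Shi @ xi @ Shi) @ Sh

rng = np.random.default_rng(2)
M0 = rng.standard_normal((3, 3))
S = M0 @ M0.T + 3 * np.eye(3)                    # S in Sym_+(3)
xi = rng.standard_normal((3, 3))
xi = (xi + xi.T) / 2                             # a symmetric tangent vector

f = lambda X: -np.log(np.linalg.det(X))          # toy function on Sym_+(3)
Si = np.linalg.inv(S)
egrad = -Si                                      # Euclidean gradient of f at S
rgrad = S @ ((egrad + egrad.T) / 2) @ S          # formula (gradient); here rgrad = -S

t = 1e-5
dd_numeric = (f(exp_map(S, t * xi)) - f(exp_map(S, -t * xi))) / (2 * t)
dd_metric = np.trace(Si @ rgrad @ Si @ xi)       # <grad f(S), xi>_S from (metric)
print(dd_numeric, dd_metric)                     # both close to -tr(S^{-1} xi)
\end{verbatim}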
3,813
18,726
en
train
0.111.3
\subsection{Euclidean gradient of the objective function $J$} Let $\bar{J}$ denote the extension of the objective function $J$ to the Euclidean space ${\bf R}^{r\times r}\times {\bf R}^{r\times m} \times {\bf R}^{p\times r}$. Then, the Euclidean gradient of $\bar{J}$ is given by \begin{align} & \nabla \bar{J}(A_r,B_r,C_r) \nonumber \\ =& 2( -QP-Y^TX, QB_r+Y^TB, C_rP-CX). \label{16} \end{align} \noindent Although a similar expression can be found in Theorem 3.3 in \cite{van2008h2} and Section 3.2 in \cite{wilson1970optimum}, we provide another proof in Appendix \ref{apeB} because some equations in the proof are needed for deriving the Riemannian Hessian of $J$ as shown in the next subsection. \subsection{Geometry of Problem 1} \label{Sec:3.D} We define the Riemannian metric of the manifold $M$ as \begin{align} & \langle (\xi_1,\eta_1,\zeta_1),(\xi_2,\eta_2,\zeta_2) \rangle_{(A_r,B_r,C_r)} \nonumber \\ :=& {\rm tr}( A_r^{-1} \xi_1 A_r^{-1} \xi_2 ) + {\rm tr} (\eta_1^T\eta_2) + {\rm tr}(\zeta_1^T \zeta_2) \label{Riemannian_metric} \end{align} for $(\xi_1,\eta_1,\zeta_1),(\xi_2,\eta_2,\zeta_2) \in T_{(A_r,B_r,C_r)} M$. Then, it follows from \eqref{gradient} and \eqref{16} that \begin{align} {\rm grad}\, J(A_r,B_r,C_r) =& ( -2 A_r {\rm sym}(QP+Y^TX)A_r, \label{grad_J} \\ & 2(QB_r+Y^TB), 2(C_rP-CX) ). \nonumber \end{align} Furthermore, from \eqref{Hess} and \eqref{16}, the Riemannian Hessian of $J$ at $(A_r,B_r,C_r)$ is given by \begin{align} & {\rm Hess}\, J(A_r,B_r,C_r) [(A'_r,B'_r,C'_r)] \nonumber \\ =& ( -2A_r {\rm sym}( Q'P+QP'+Y'^TX+Y^TX') A_r \nonumber \\ &\, -2 {\rm sym} (A'_r {\rm sym} (QP+Y^TX) A_r), \label{Hess_J}\\ &\,\, 2(Q'B_r +QB'_r +Y'^TB), 2(C'_rP+C_rP'-CX') ), \nonumber \end{align} where $P'$ and $X'$ are the solutions to \eqref{10} and \eqref{11} in Appendix \ref{apeB}, respectively, and $Q'$ and $Y'$ are the solutions to \begin{align} & A_r Q' +Q' A_r+A'_r Q+Q A'_r- C'^T_r C_r-C^T_r C'_r =0, \label{31} \\ & AY'+Y'A_r+YA'_r+C^TC'_r =0. \label{32} \end{align} The equations \eqref{31} and \eqref{32} are obtained by differentiating \eqref{4} and \eqref{6}, respectively. From \eqref{8}, we can define the exponential map on the manifold $M$ as \begin{align} & {\rm Exp}_{(A_r,B_r,C_r)}(\xi,\eta,\zeta) \nonumber \\ :=& (A_r^{\frac{1}{2}} \exp (A_r^{-\frac{1}{2}} \xi A_r^{-\frac{1}{2}} ) A_r^{\frac{1}{2}}, B_r+\eta,C_r+\zeta) \label{33} \end{align} for any $(\xi,\eta,\zeta)\in T_{(A_r,B_r,C_r)} M$; i.e., the manifold $M$ is geodesically complete. \subsection{Trust-region method for Problem 1} \label{Sec:3.E} This section gives the Riemannian trust-region method for solving Problem 1. In \cite{absil2009optimization, absil2007trust}, the Riemannian trust-region method has been discussed in detail. At each iterate $(A_r,B_r,C_r)$ in the Riemannian trust-region method on the manifold $M$, we evaluate the quadratic model $\hat{m}_{(A_r,B_r,C_r)}$ of the objective function $J$ within a trust-region: \begin{align*} &\quad \hat{m}_{(A_r,B_r,C_r)}(\xi,\eta,\zeta) \\ =& J(A_r,B_r,C_r) + \langle {\rm grad}\,J(A_r,B_r,C_r), (\xi,\eta,\zeta) \rangle_{(A_r,B_r,C_r)} \\ &+\frac{1}{2} \langle {\rm Hess}\, J(A_r,B_r,C_r)[(\xi,\eta,\zeta)], (\xi,\eta,\zeta) \rangle_{(A_r,B_r,C_r)}. \end{align*} A trust-region with a radius $\Delta>0$ at $(A_r,B_r,C_r)\in M$ is defined as a ball with center $0$ in $T_{(A_r,B_r,C_r)} M$. 
Thus, the trust-region subproblem at $(A_r,B_r,C_r)\in M$ with a radius $\Delta$ is defined as the problem of minimizing $\hat{m}_{(A_r,B_r,C_r)}(\xi,\eta,\zeta)$ subject to $(\xi,\eta,\zeta)\in T_{(A_r,B_r,C_r)} M$, $||(\xi,\eta,\zeta)||_{(A_r,B_r,C_r)}:= \sqrt{ \langle (\xi,\eta,\zeta),(\xi,\eta,\zeta) \rangle_{(A_r,B_r,C_r)}} \leq \Delta$. This subproblem can be solved by the truncated conjugate gradient method \cite{absil2009optimization}. Then, we compute the ratio of the decreases in the objective function $J$ and the model $\hat{m}_{(A_r,B_r,C_r)}$ attained by the resulting $(\xi_*,\eta_*,\zeta_*)$ to decide whether $(\xi_*,\eta_*,\zeta_*)$ should be accepted and whether the trust-region radius $\Delta$ is appropriate. Algorithm \ref{algorithm} describes the process. The constants $\frac{1}{4}$ and $\frac{3}{4}$ in the condition expressions in Algorithm \ref{algorithm} are commonly used in the trust-region method for a general unconstrained optimization problem. These values ensure the convergence properties of the algorithm \cite{absil2009optimization, absil2007trust}. \begin{algorithm} \caption{Trust-region method for Problem 1.} \label{algorithm} \label{alg1} \begin{algorithmic}[1] \STATE Choose an initial point $((A_{r})_0,(B_{r})_0,(C_{r})_0) \in M$ and parameters $\bar{\Delta}>0$, $\Delta_0\in (0,\bar{\Delta})$, $\rho'\in [0,\frac{1}{4})$. \FOR{$k=0,1,2,\ldots$ } \STATE Solve the following trust-region subproblem for $(\xi,\eta,\zeta)$ to obtain $(\xi_k,\eta_k,\zeta_k)\in T_{((A_r)_k,(B_r)_k,(C_r)_k)} M$: \begin{align*} &{\rm minimize}\quad \hat{m}_{((A_r)_k,(B_r)_k,(C_r)_k)}(\xi,\eta,\zeta) \\ &{\rm subject\, to}\quad ||(\xi,\eta,\zeta)||_{((A_r)_k,(B_r)_k,(C_r)_k)} \leq \Delta_k, \\ &{\rm where}\quad \hat{m}_k(\xi,\eta,\zeta):=\hat{m}_{((A_r)_k,(B_r)_k,(C_r)_k)}(\xi,\eta,\zeta), \\ &\quad\quad\quad (\xi,\eta,\zeta)\in T_{((A_r)_k,(B_r)_k,(C_r)_k)}M. \end{align*} \STATE Evaluate \begin{align*} \rho_k := \frac{ J({\rm Exp}_{k}(0,0,0)) -J({\rm Exp}_{k}(\xi_k,\eta_k,\zeta_k))}{ \hat{m}_{k}(0,0,0)- \hat{m}_{k} (\xi_k,\eta_k,\zeta_k)} \end{align*} \STATE with ${\rm Exp}_k(\xi,\eta,\zeta):= {\rm Exp}_{((A_r)_k,(B_r)_k,(C_r)_k)}(\xi,\eta,\zeta)$. \IF {$\rho_k<\frac{1}{4}$} \STATE $\Delta_{k+1}=\frac{1}{4}\Delta_k$. \ELSIF {$\rho_k>\frac{3}{4}$ and $||(\xi_k,\eta_k,\zeta_k)||_{((A_r)_k,(B_r)_k,(C_r)_k)} = \Delta_k$} \STATE $\Delta_{k+1} = \min (2\Delta_k,\bar{\Delta})$. \ELSE \STATE $\Delta_{k+1} = \Delta_k$. \ENDIF \IF {$\rho_k>\rho'$} \STATE $((A_r)_{k+1},(B_r)_{k+1},(C_r)_{k+1}) = {\rm Exp}_k(\xi_k,\eta_k,\zeta_k)$. \ELSE \STATE {\small $((A_r)_{k+1},(B_r)_{k+1},(C_r)_{k+1}) = ((A_r)_{k},(B_r)_{k},(C_r)_{k})$.} \ENDIF \ENDFOR \end{algorithmic} \end{algorithm} \begin{remark} \label{remark3} The most demanding computational task in Algorithm 1 is to solve Eqs. \eqref{3}--\eqref{6} at every iteration. Although some algorithms to solve these equations have been studied in the literature \cite{antoulas2005approximation, benner2003state, damm2008direct}, we need to develop a more effective method for solving large-scale model reduction problems by Algorithm 1. \end{remark}
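The radius-update and acceptance logic of Algorithm \ref{algorithm} can be summarized by the following Python skeleton (ours). The callables \texttt{J}, \texttt{solve\_subproblem} and \texttt{exp\_map} stand for the objective \eqref{eq_J}, a truncated conjugate gradient solver of the subproblem and the exponential map \eqref{33}, respectively; they are placeholders and are not implemented here.
\begin{verbatim}
def trust_region_step(x, Delta, Delta_bar, rho_prime, J, solve_subproblem, exp_map):
    # solve_subproblem(x, Delta) is assumed to return an approximate minimizer
    # eta of the quadratic model within the ball of radius Delta, together with
    # its norm and the predicted decrease m(0) - m(eta).
    eta, eta_norm, pred_decrease = solve_subproblem(x, Delta)
    rho = (J(x) - J(exp_map(x, eta))) / pred_decrease   # ratio rho_k of Algorithm 1
    if rho < 0.25:
        Delta_next = 0.25 * Delta                       # shrink the trust region
    elif rho > 0.75 and abs(eta_norm - Delta) < 1e-12:  # boundary hit, with tolerance
        Delta_next = min(2.0 * Delta, Delta_bar)        # enlarge the trust region
    else:
        Delta_next = Delta
    x_next = exp_map(x, eta) if rho > rho_prime else x  # accept or reject the step
    return x_next, Delta_next
\end{verbatim}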
2,832
18,726
en
train
0.111.4
\section{Optimization algorithm for solving Problem 2} \label{sec_Pro2} This section develops an optimization algorithm for solving Problem 2. As with Problem 1, to derive the Riemannian gradient and Hessian of the objective function $\tilde{J}$, we calculate the Euclidean gradient $\nabla \bar{\tilde{J}}$, where $\bar{\tilde{J}}$ is the extension of $\tilde{J}$ to the ambient space ${\bf R}^{r\times r}\times {\bf R}^{r\times m}$. Since $C=B^T$ and $C_r=B_r^T$, it follows from \eqref{3}--\eqref{6} that $P=Q$ and $X=-Y$. Thus, in Appendix \ref{apeB}, by replacing $C$, $C_r$, and $C'_r$ with $B$, $B_r$, and $B'_r$, respectively, we obtain \begin{align*} \nabla \bar{\tilde{J}}(A_r,B_r) = ( -2(P^2-X^TX), 4PB_r-4X^TB). \end{align*} Hence, if we consider the counterpart of the Riemannian metric \eqref{Riemannian_metric} for the manifold $\tilde{M}$ as \begin{align*} \langle (\xi_1,\eta_1),(\xi_2,\eta_2) \rangle_{(A_r,B_r)} = {\rm tr}( A_r^{-1} \xi_1 A_r^{-1} \xi_2 ) + {\rm tr} (\eta_1^T\eta_2), \end{align*} the Riemannian gradient and Hessian of $\tilde{J}$ are given by \begin{align*} & {\rm grad}\, \tilde{J}(A_r,B_r) = (-2A_r{\rm sym}(P^2 -X^T X) A_r, \\ &\quad \quad\quad\quad\quad\quad\quad \quad 4PB_r-4X^TB ), \\ & {\rm Hess}\, \tilde{J}(A_r,B_r) [ (A'_r,B'_r)] \\ =& ( -2A_r {\rm sym}( P'P+PP'-X'^TX-X^TX') A_r \\ &\, -2 {\rm sym} (A'_r {\rm sym} (P^2-X^TX) A_r), \\ &\,\, 4(P'B_r+PB'_r) -4 X'^TB ), \end{align*} respectively. Here, $P$, $X$, $P'$, and $X'$ are the solutions to \eqref{3}, \eqref{5}, \eqref{10}, and \eqref{11}, respectively. The exponential map on the manifold $\tilde{M}$ is, of course, given by \begin{align*} {\rm Exp}_{(A_r,B_r)}(\xi,\eta) =(A_r^{\frac{1}{2}} \exp (A_r^{-\frac{1}{2}} \xi A_r^{-\frac{1}{2}} ) A_r^{\frac{1}{2}}, B_r+\eta). \end{align*} Similarly to Problem 1, we can solve Problem 2 by using a modified algorithm of Algorithm \ref{algorithm}. The reduced system constructed by the solution is also a stable gradient system. Note that in contrast to Problem 1, we do not calculate $Q$, $Y$, $Q'$, and $Y'$; i.e., we only need to calculate $P$, $X$, $P'$, and $X'$ for solving Problem 2 by the trust-region method. This improves computational efficiency. \begin{remark} If $m=p$ and $B^T=C$, we can also regard the system \eqref{1} as a port-Hamiltonian system \cite{ionescu2013moment, van2012l2}. Since a port-Hamiltonian system is passive, the reduced system constructed by the solution to Problem 2 is also passive \cite{van2012l2}. \end{remark} \section{Comparison between Problem 1 and Problem 3} \label{secnew_5} In this section, we compare the reduced systems obtained by solving Problems 1 and 3 and give a simple example which shows that they do not necessarily coincide with each other. For $J$ in \eqref{eq_J}, which is the $H^2$ norm of the error system, let $J_1 := J$ and $J_3(U) := J(U^TAU, U^TB, CU)$. Then, we have \begin{equation*} {\rm grad}\, J_1(A_r, B_r, C_r)=(A_r{\rm sym}(\nabla_{A_r} \bar{J})A_r, \nabla_{B_r} \bar{J}, \nabla_{C_r} \bar{J}) \end{equation*} and \begin{equation*} {\rm grad}\, J_3(U) = \nabla \bar{J_3}(U) - U {\rm sym}(U^T \nabla \bar{J_3}(U)), \end{equation*} where \begin{align*} \nabla \bar{J_3}(U) =& 2 A U {\rm sym}(\nabla_{A_r} \bar{J}(U^TAU, U^TB, CU)) \\ &+ B(\nabla_{B_r} \bar{J}(U^TAU, U^TB, CU))^T \\ &+ C^T\nabla_{C_r} \bar{J}(U^TAU, U^TB, CU). \end{align*} Note that we have used $A^T=A$ and that $\nabla_{A_r} \bar{J}$ denotes the $A_r$-component of $\nabla \bar{J}$. 
The expression of ${\rm grad}\, J_1(A_r,B_r,C_r)$ is from \eqref{16} and \eqref{grad_J}, and ${\rm grad}\, J_3(U)$ can be found in \cite{sato2015riemannian, yan1999approximate}. Even if ${\rm grad}\, J_1(A_r, B_r, C_r) = 0$ for some $(A_r, B_r, C_r)$, there does not in general exist $U$ such that \begin{equation} \label{EqABC} A_r = U^T A U,\ B_r = U^TB,\ C_r = CU, \end{equation} and ${\rm grad}\,J_3(U) =0$. Conversely, ${\rm grad}\, J_3(U) =0$ does not yield ${\rm grad}\, J_1(U^TAU, U^TB, CU)=0$ either. In order to see this clearly from a simple example, we consider in the remainder of this section the system \eqref{1} with $n=2$ and $m=p=1$ and assume that the dimension of the reduced model is $r=1$. Furthermore, we suppose $A=\begin{pmatrix}2 & 0 \\ 0 & 1\end{pmatrix}$, $B = \begin{pmatrix}-1 \\ 1\end{pmatrix}$, and $C = \begin{pmatrix} 1 & 1 \end{pmatrix}$. For Problem 1, we can obtain $P = B_r^2/(2A_r)$, $Q = C_r^2/(2A_r)$, $X = \begin{pmatrix}-B_r/(A_r+2) & B_r/(A_r+1)\end{pmatrix}^T$, and $Y = -\begin{pmatrix}C_r/(A_r+2) & C_r/(A_r+1)\end{pmatrix}^T$ by \eqref{3}--\eqref{6}. Then, a simple analysis implies that ${\rm grad}\, J_1(A_r, B_r, C_r) = 0$ is equivalent to \begin{equation*} B_r C_r = 0 \quad \text{or} \quad A_r=-\frac{1}{2}+\frac{\sqrt{33}}{6},\ B_rC_r=6-\sqrt{33}. \end{equation*} The objective function at these infinitely many critical points is evaluated as \begin{align*} J(A_r, B_r, C_r) = 1/12 = 0.0833 \end{align*} for any $(A_r, B_r, C_r)$ with $A_r>0$ and $B_rC_r=0$, and \begin{equation*} J(A_r, B_r, C_r) = (569-99\sqrt{33})/24 = 0.0120 \end{equation*} for $A_r=-1/2+\sqrt{33}/6$ and for any $(B_r, C_r)$ with $B_rC_r=6-\sqrt{33}$, which implies that the minimum value of $J$ attained by solving Problem 1 is $0.0120$. For Problem 3, let $U = \begin{pmatrix} u_1 & u_2 \end{pmatrix}^T \in {\rm St}\,(1,2)$. This means that $U$ is on the unit circle, that is, $u_1^2+u_2^2=1$. Then, we have $P = (u_1-u_2)^2/(2(1+u_1^2))$, $Q = (u_1+u_2)^2/(2(1+u_1^2))$,\\ $X =\begin{pmatrix} (u_1-u_2)/(u_1^2+3) & - (u_1-u_2)/(u_1^2+2)\end{pmatrix}^T$, and $Y = -\begin{pmatrix}(u_1+u_2)/(u_1^2+3) & (u_1+u_2)/(u_1^2+2)\end{pmatrix}^T$ in a similar manner to that in Problem 1. A straightforward but tedious calculation shows that ${\rm grad}\, J_3(U) = 0$ holds if and only if \begin{equation} \label{Ueq2} (u_1, u_2) = (\pm 1, 0),\ (0, \pm 1) \end{equation} or \begin{equation} \label{Ueq1} u_1 = \pm 0.5642 \quad \text{and} \quad u_2 = \pm \sqrt{1-u_1^2}= \pm 0.8256, \end{equation} where $u_1=\pm 0.5642$ are the real solutions to the equation $4u_1^{12}+48u_1^{10}+215u_1^8+478u_1^6+515u_1^4+132u_1^2-112=0$. Therefore, there are only eight isolated critical points of $J_3$, in contrast to Problem 1. The resultant reduced system matrices are then computed by \eqref{EqABC}. Eq.~\eqref{Ueq2} yields $(A_r, B_r, C_r) = (2, \mp 1, \pm 1), (1, \pm 1, \pm 1)$, where $B_r C_r = \pm 1$. In contrast, for \eqref{Ueq1} we have $A_r = 1.318$ and $(B_r, C_r) = (\pm 0.2614, \pm 1.390)$, $(\pm 1.390, \pm 0.2614)$, where $B_rC_r=0.3633$. Meanwhile, the result for Problem 1 yields $B_rC_r=0$ or $B_rC_r=6-\sqrt{33}=0.2554$. Therefore, we can conclude that the reduced systems obtained by the two problems do not coincide with each other in general. Furthermore, we have $J(A_r,B_r,C_r)=0.0389$ for all $(A_r, B_r, C_r)$ obtained by \eqref{Ueq1}, $J(A_r, B_r, C_r) = 1/2 = 0.5$ for $(u_1, u_2) = (\pm 1, 0)$, and $J(A_r, B_r, C_r) = 1/4 = 0.25$ for $(u_1, u_2) = (0, \pm 1)$, all of which are worse than the results in Problem 1.
From these observations, we can conclude that the solutions to Problems 1 and 3 are not necessarily unique, nor do the solution sets of the two problems necessarily contain each other. Moreover, the attained optimal values do not coincide with each other.
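The critical values reported in this section are easy to confirm numerically. The following sketch (ours) evaluates $J$ through the Gramian expression of Section \ref{sec2} for the $2\times 2$ example above; it is a sanity check only, not part of the analysis.
\begin{verbatim}
import numpy as np
from scipy.linalg import solve_sylvester

A = np.diag([2.0, 1.0])
B = np.array([[-1.0], [1.0]])
C = np.array([[1.0, 1.0]])

def J(Ar, Br, Cr):
    Sc = solve_sylvester(A, A, B @ B.T)
    P = solve_sylvester(Ar, Ar, Br @ Br.T)
    X = solve_sylvester(A, Ar, B @ Br.T)
    return float(np.trace(C @ Sc @ C.T + Cr @ P @ Cr.T - 2 * Cr @ X.T @ C.T))

one = np.ones((1, 1))
# Problem 1 critical points
print(J(2.0 * one, 0.0 * one, 1.0 * one))   # 1/12 = 0.0833 (any A_r > 0, B_r C_r = 0)
Ar = (-0.5 + np.sqrt(33) / 6) * one
t = 6 - np.sqrt(33)                         # B_r C_r at the other critical points
print(J(Ar, one, t * one))                  # 0.0120, the minimum value of Problem 1
# Problem 3 critical point from (Ueq1)
u1 = 0.5642
U = np.array([[u1], [np.sqrt(1 - u1**2)]])
print(J(U.T @ A @ U, U.T @ B, C @ U))       # approximately 0.0389
\end{verbatim}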
3,198
18,726
en
train
0.111.5
\section{Numerical experiments} \label{sec4} This section illustrates that the proposed reduction method preserves the structure of the system \eqref{1} although the balanced truncation method does not preserve it. Furthermore, it is shown that the value of the objective function in the case of the proposed reduction method becomes smaller than that in the case of the reduction method proposed in \cite{sato2015riemannian}, even if we choose an initial point in Algorithm 1 as a local optimal solution to Problem 3. This means that the stationary points of Problems 1 and 3 do not coincide. To perform the experiments, we have used Manopt \cite{boumal2014manopt}, which is a MATLAB toolbox for optimization on manifolds. We consider a reduction of the system \eqref{1} with $n=5$ and $m=p=2$ to the system \eqref{2} with $r=3$. Here, the system matrices $A$, $B$, and $C$ are given by \begin{align*} A &:= \begin{pmatrix} 3 & -1 & 1 & 1 & -1 \\ -1 & 2 & 0 & 0 & 2 \\ 1 & 0 & 2 & 1 & 1 \\ 1 & 0 & 1 & 3 & 0 \\ -1 & 2 & 1 & 0 & 4 \end{pmatrix}, B := \begin{pmatrix} 0 & 1 \\ 1 & 0 \\ -1 & 1 \\ 1 & 0 \\ 0 & 1 \end{pmatrix},\\ C &:= \begin{pmatrix} 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 1 \end{pmatrix}. \end{align*} That is, $(A,B,C)\in {\rm Sym}_+(5)\times {\bf R}^{5\times 2} \times {\bf R}^{2\times 5}$. The balanced truncation method, which is the most popular model reduction method \cite{antoulas2005approximation}, gave the reduced matrix $A_r^{\rm BT}$ as \begin{align*} A_r^{\rm BT} &= \begin{pmatrix} 2.8944 & -0.0422 & -1.4729 \\ -0.0318 & 1.0470 & -0.2615 \\ -1.1764 & -0.2355 & 4.1898 \end{pmatrix}. \end{align*} Thus, $A_r^{\rm BT} \not\in {\rm Sym}_+(3)$; i.e., the balanced truncation method did not preserve the original model structure. Furthermore, we obtained $||G-G_r||_{H^2}=0.0157$. The reduction method, which was briefly explained in Remark 1 in \cite{sato2015riemannian}, gave the orthogonal matrix \begin{align} U = \begin{pmatrix} 0.8906 & 0.1189 & -0.1025 \\ -0.1117 & 0.7216 & 0.0373 \\ -0.0650 & -0.1558 & 0.8994 \\ -0.2144 & 0.6138 & 0.0302 \\ 0.3798 & 0.2532 & 0.4223 \end{pmatrix}, \label{U} \end{align} and then \begin{align*} U^TAU &= \begin{pmatrix} 1.9613 & 0.0507 & 0.7510 \\ 0.0507 & 2.8566 & 1.6666 \\ 0.7510 & 1.6666 & 3.1486 \end{pmatrix}. \end{align*} Thus, $U^TAU \in {\rm Sym}_+(3)$; i.e., this method preserved the original model structure. Furthermore, we obtained $||G-G_r||_{H^2}=0.0217$. Note that, in this result, the norm of the gradient of the objective function was approximately equal to $7.493\times 10^{-7}$; i.e., we can expect that a local optimal solution to Problem 3 was obtained. The proposed algorithm gave the reduced matrices $A_r$, $B_r$, and $C_r$ as follows: \begin{align*} A_r &= \begin{pmatrix} 1.8965 & 0.0237 & 0.7778 \\ 0.0237 & 3.1554 & 1.8009 \\ 0.7778 & 1.8009 & 3.1784 \end{pmatrix}, \\ B_r & = \begin{pmatrix} -0.2677 & 1.1820 \\ 1.5124 & 0.2049 \\ -0.7759 & 1.2155 \end{pmatrix}, \\ C_r &= \begin{pmatrix} 0.8726 & 0.1503 & -0.0630 \\ 0.3321 & 0.0680 & 1.3121 \end{pmatrix}. \end{align*} Thus, $A_r\in {\rm Sym}_+(3)$; i.e., the reduced system had the same structure as the original system. Here, we chose an initial point $((A_r)_0, (B_r)_0, (C_r)_0)$ in Algorithm \ref{algorithm} as $(U^TAU, U^TB, CU)$, where $U$ is defined by \eqref{U}. Furthermore, we obtained $||G-G_r||_{H^2}=0.0156$. Hence, the value of the objective function attained by the proposed algorithm was smaller than those by the balanced truncation method and the method in \cite{sato2015riemannian}.
This means that the stationary points of Problems 1 and 3 do not coincide. \begin{comment} \begin{align*} A_r &= \begin{pmatrix} 3.0291 & 0.0796 & 1.9073 \\ 0.0796 & 1.9821 & -0.5467 \\ 1.9073 & -0.5467 & 3.2192 \end{pmatrix}, \\ B_r & = \begin{pmatrix} 0.9263 & 1.1414 \\ -1.0458 & 0.3119 \\ -0.3533 & 0.9406 \\ \end{pmatrix}, \\ C_r &= \begin{pmatrix} 1.0185 & 1.0416 & -0.5334 \\ 0.7696 & 1.3947 & 0.7315 \end{pmatrix}. \end{align*} \end{comment} To verify the effectiveness of the proposed algorithm for medium-scale systems, we also randomly created matrices $A$, $B$, and $C$ of larger size. Table \ref{table} shows the values of the relative $H^2$ error in the case of $A\in {\rm Sym}_+(300)$, $B\in {\bf R}^{300\times 3}$, and $C\in {\bf R}^{2\times 300}$. For all $r$, the relative $H^2$ errors of the proposed method were smaller than those of the balanced truncation method. Furthermore, the reduced models obtained by the balanced truncation method did not have the original symmetric structure, while those obtained by the proposed method did. Moreover, for all $r$, the proposed method was better than the method in \cite{sato2015riemannian}. Here, we note that for each $r$, the initial point $((A_r)_0, (B_r)_0, (C_r)_0)$ in Algorithm \ref{algorithm} to solve Problem 1 was chosen as $(U^TAU, U^TB, CU)$, where $U$ is a local optimal solution to Problem 3. Thus, Table \ref{table} also shows that the stationary points of Problems 1 and 3 do not coincide. \begin{table}[h] \caption{The comparison of the relative $H^2$ error $\frac{||G-G_r||_{H^2}}{||G||_{H^2}}$.} \label{table} \begin{center} \begin{tabular}{|c|c|c|c|c|} \hline $r$& $6 $ & $8$ & $10$ & $12$ \\ \hline Balanced truncation & 0.0141 & 0.0120 & 0.0103 & 0.0088 \\\hline The method in \cite{sato2015riemannian} & 0.0297 & 0.0299 & 0.0294 & 0.0317 \\\hline The proposed method & 0.0112 & 0.0089 & 0.0042 & 0.0020 \\\hline \end{tabular} \end{center} \end{table} \begin{remark} As mentioned in Remark \ref{remark3}, solving large-scale model reduction problems by Algorithm 1 requires a long computational time, whereas the computational time of the balanced truncation method is much shorter. Furthermore, the balanced truncation method gives upper bounds of the $H^2$ and $H^{\infty}$ error norms \cite{antoulas2005approximation}. From these facts, we suggest using the balanced truncation method to determine, by observing the $H^2$ and $H^{\infty}$ error norms, the largest acceptable reduced dimension $r$, and then running Algorithm 1 with an $r$ smaller than this largest dimension. \end{remark}
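The value $||G-G_r||_{H^2}=0.0217$ reported above for the method of \cite{sato2015riemannian} can be approximately recomputed from the displayed data. The following sketch (ours) evaluates the $H^2$ error of the reduced model $(U^TAU,U^TB,CU)$ built from the matrix $U$ in \eqref{U}; since the entries of $U$ are rounded to four digits, the result agrees with the reported value only up to this rounding.
\begin{verbatim}
import numpy as np
from scipy.linalg import solve_sylvester

A = np.array([[ 3., -1.,  1.,  1., -1.],
              [-1.,  2.,  0.,  0.,  2.],
              [ 1.,  0.,  2.,  1.,  1.],
              [ 1.,  0.,  1.,  3.,  0.],
              [-1.,  2.,  1.,  0.,  4.]])
B = np.array([[0., 1.], [1., 0.], [-1., 1.], [1., 0.], [0., 1.]])
C = np.array([[1., 0., 0., 0., 0.], [0., 0., 1., 0., 1.]])
U = np.array([[ 0.8906,  0.1189, -0.1025],
              [-0.1117,  0.7216,  0.0373],
              [-0.0650, -0.1558,  0.8994],
              [-0.2144,  0.6138,  0.0302],
              [ 0.3798,  0.2532,  0.4223]])   # matrix U of eq. (U), rounded entries

Ar, Br, Cr = U.T @ A @ U, U.T @ B, C @ U
Sc = solve_sylvester(A, A, B @ B.T)           # controllability Gramian of (1)
P = solve_sylvester(Ar, Ar, Br @ Br.T)        # eq. (3)
X = solve_sylvester(A, Ar, B @ Br.T)          # eq. (5)
err = np.sqrt(np.trace(C @ Sc @ C.T + Cr @ P @ Cr.T - 2 * Cr @ X.T @ C.T))
print(err)   # close to the reported 0.0217, up to rounding of U
\end{verbatim}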
2,593
18,726
en
train
0.111.6
\section{Conclusion} \label{sec5} We have studied the stability and symmetry preserving $H^2$ optimal model reduction problem on the product manifold of the manifold of the symmetric positive definite matrices and two Euclidean spaces. To solve the problem by using the trust-region method, we have derived the Riemannian gradient and Riemannian Hessian. Furthermore, it has been shown that if we restrict our systems to gradient systems, the gradient and Hessian can be obtained more efficiently. By a simple example, we have proved that the solutions to our problem and the problem in \cite{sato2015riemannian} are not unique and the solution sets of both problems do not contain each other in general. Also, it has been revealed that the attained optimal values do not coincide. Numerical experiments have illustrated that although the balanced truncation does not preserve the original symmetric structure of the system, the proposed method preserves the structure. Furthermore, it has been demonstrated that the proposed method is better than our method in \cite{sato2015riemannian}, and also usually better than the balanced truncation method, in the sense of the $H^2$ error norm between the transfer functions of the original and reduced systems. \appendix \subsection{Proof of the fact that ${\rm Sym}_+(r)$ is a reductive homogeneous space} \label{ape1} To prove that ${\rm Sym}_+(r)$ is a reductive homogeneous space, we first note that there is a natural bijection \begin{align} {\rm Sym}_+(r) \cong GL(r)/O(r). \label{doukei} \end{align} To see this, let $\phi_g$ be $GL(r)$ action on the manifold ${\rm Sym}_{+}(r)$; i.e., $\phi_g(S) =gSg^T,\quad g\in GL(r), S\in {\rm Sym}_{+}(r)$. The action $\phi_g$ is transitive; i.e., for any $S_1$, $S_2\in {\rm Sym}_+(r)$, there exists $g\in GL(r)$ such that $\phi_g(S_1)=S_2$. Thus, the manifold ${\rm Sym}_+(r)$ consists of a single orbit; i.e., ${\rm Sym}_+(r)$ is a homogeneous space of $GL(r)$. The action $\phi_g$ has the isotropy subgroup of the orthogonal group $O(r)$ at $I_r\in {\rm Sym}_+(r)$ because $O(r) = \{ g\in GL(r)\,|\, \phi_g (I_r) = I_r \}$. In general, if an action of a group on a set is transitive, the set is isomorphic to a quotient of the group by its isotropy subgroup \cite{Gallier2016}. Hence, \eqref{doukei} holds. From the identification \eqref{doukei}, we can show that the quotient $GL(r)/O(r)$ is reductive; i.e., $T_{I_r} GL(r) \cong T_{I_r} {\rm Sym}_+(r) \oplus T_{I_r} O(r)$ and $O\xi O^{-1} \in T_{I_r} {\rm Sym}_+(r)$ for $\xi\in T_{I_r} {\rm Sym}_+(r)$ and $O\in O(r)$. In fact, these follow from \begin{align*} T_{I_r} GL(r) &\cong {\bf R}^{r\times r} \cong {\rm Sym}(r) \oplus {\rm Skew}(r), \\ {\rm Sym}(r) & \cong T_{I_r} {\rm Sym}_+(r), \,\, {\rm Skew} (r) \cong T_{I_r} O(r), \end{align*} and $O^{-1}=O^T$. \subsection{Proof of \eqref{16}} \label{apeB} The directional derivative of $\bar{J}$ at $(A_r,B_r,C_r)$ in the direction $(A'_r,B'_r,C'_r)$ can be calculated as \begin{align} &{\rm D}\bar{J}(A_r,B_r,C_r)[(A'_r,B'_r,C'_r)] \nonumber \\ =& 2{\rm tr} (C'_r (PC_r^T -X^TC^T)) - 2{\rm tr} (C^TC_rX'^T) +{\rm tr} (C_rP'C_r^T), \label{9} \end{align} where $P'$ and $X'$ are also the directional derivative of $P$ and $X$ at $(A_r,B_r,C_r)$ in the direction $(A'_r,B'_r,C'_r)$, respectively. Differentiating \eqref{3} and \eqref{5}, we obtain \begin{align} & A_r P'+P' A_r +A_r' P+PA'_r-B_r'B_r^T-B_rB'^{T}_r =0, \label{10} \\ & AX'+X'A_r+XA_r'-BB_r'^T =0. 
\label{11} \end{align} Eqs.\,\eqref{4} and \eqref{10} yield that \begin{align} {\rm tr}(C_r^TC_rP') = -2 {\rm tr}(A_r'^TQP)+2{\rm tr}(B_r'^TQB_r), \label{12} \end{align} and \eqref{6} and \eqref{11} imply that \begin{align} {\rm tr}(-C^TC_rX'^T) = {\rm tr}( (-X A_r'^T+BB_r'^T)^TY). \label{13} \end{align} By substituting \eqref{12} and \eqref{13} into \eqref{9}, we have \begin{align} &{\rm D}\bar{J}(A_r,B_r,C_r)[(A'_r,B'_r,C'_r)] \nonumber \\ =& 2{\rm tr} ( A_r'^T (-QP-Y^TX) ) + 2{\rm tr} (B_r'^T (QB_r+Y^TB) ) \nonumber \\ &+2{\rm tr} (C_r'^T(C_rP-CX) ). \label{14} \end{align} Since the Euclidean gradient $\nabla \bar{J}(A_r,B_r,C_r)$ satisfies \begin{align*} & {\rm D} \bar{J}(A_r,B_r,C_r)[(A'_r,B'_r,C'_r)] \nonumber \\ = & {\rm tr} (A_r'^T \nabla_{A_r} \bar{J}(A_r,B_r,C_r)) + {\rm tr} (B_r'^T \nabla_{B_r} \bar{J}(A_r,B_r,C_r))\\ &+ {\rm tr} (C_r'^T \nabla_{C_r} \bar{J}(A_r,B_r,C_r)), \end{align*} \eqref{14} implies \eqref{16}. \section*{Acknowledgment} This study was supported in part by JSPS KAKENHI Grant Number JP16K17647. The authors would like to thank the anonymous reviewers for their valuable comments that helped improve the paper significantly. \ifCLASSOPTIONcaptionsoff \fi \end{document}
1,928
18,726
en
train
0.112.0
\begin{document} \title{Approximation rate in Wasserstein distance of probability measures on the real line by deterministic empirical measures} \author{O. Bencheikh and B. Jourdain\thanks{Cermics, \'Ecole des Ponts, INRIA, Marne-la-Vall\'ee, France. E-mails: [email protected], [email protected]. The authors would like to acknowledge financial support from Université Mohammed VI Polytechnique. }} \maketitle \begin{abstract} We are interested in the approximation in Wasserstein distance with index $\rho\ge 1$ of a probability measure $\mu$ on the real line with finite moment of order $\rho$ by the empirical measure of $N$ deterministic points. The minimal error converges to $0$ as $N\to+\infty$ and we try to characterize the order associated with this convergence. In \cite{xuberger}, Xu and Berger show that, except when $\mu$ is a Dirac mass and the error vanishes, the order is not larger than $1$ and give a sufficient condition for the order to be equal to this threshold $1$ in terms of the density of the part of $\mu$ which is absolutely continuous with respect to the Lebesgue measure. They also prove that the order is not smaller than $1/\rho$ when the support of $\mu$ is bounded and not larger than $1/\rho$ when the support is not an interval. We complement these results by checking that for the order to lie in the interval $\left(1/\rho,1\right)$, the support has to be bounded, and by stating a necessary and sufficient condition in terms of the tails of $\mu$ for the order to be equal to some given value in the interval $\left(0,1/\rho\right)$, thus making precise the sufficient condition in terms of moments given in \cite{xuberger}. In view of practical applications, we emphasize that in the proof of each result about the order of convergence of the minimal error, we exhibit a choice of points, explicit in terms of the quantile function of $\mu$, which achieves the same order of convergence. \noindent{\bf Keywords:} deterministic empirical measures, Wasserstein distance, rate of convergence. \noindent {{\bf AMS Subject Classification (2010):} \it 49Q22, 60-08} \end{abstract} \section*{Introduction} Let $\rho\ge 1$ and $\mu$ be a probability measure on the real line. We are interested in the rate of convergence in terms of $N\in{\mathbb N}^*$ of \begin{equation} e_N(\mu,\rho):=\inf\left\{\mathcal{W}_\rho\left(\frac{1}{N}\sum_{i=1}^N\delta_{x_i},\mu\right):-\infty<x_1\le x_2\le \cdots\le x_N<+\infty\right\},\label{pbor} \end{equation} where ${\cal W}_\rho$ denotes the Wasserstein distance with index $\rho$. The motivation is the approximation of the probability measure $\mu$ by finitely supported probability measures. An example of application is provided by the optimal initialization of systems of particles with mean-field interaction \cite{jrdcds,benchjour}, where, to preserve the mean-field feature, it is important to get $N$ points with equal weight $\frac{1}{N}$ (of course, nothing prevents several of these points from being equal).
The Hoeffding-Fr\'echet or comonotone coupling between two probability measures $\nu$ and $\eta$ on the real line is optimal for $\mathcal{W}_\rho$ so that: \begin{align}\label{Wasserstein} \displaystyle \mathcal{W}_\rho^\rho\left(\nu,\eta\right) = \int_0^1 \left|F^{-1}_{\nu}(u) - F^{-1}_{\eta}(u) \right|^\rho\,du, \end{align} where for $u\in(0,1)$, $F^{-1}_{\nu}(u)= \inf\left\{ x \in {\mathbb R}: \nu\left(\left(-\infty,x\right]\right)\geq u\right\}$ and $F^{-1}_{\eta}(u)= \inf\left\{ x \in {\mathbb R}: \eta\left(\left(-\infty,x\right]\right)\geq u\right\}$ are the respective quantile functions of $\nu$ and $\eta$. We set $F(x)=\mu\left(\left(-\infty,x\right]\right)$ for $x\in{\mathbb R}$ and denote $F^{-1}(u)=\inf\left\{ x \in {\mathbb R}: F(x)\geq u\right\}$ for $u\in(0,1)$. We have $u\le F(x)\Leftrightarrow F^{-1}(u)\le x$. The quantile function $F^{-1}$ is left-continuous and non-decreasing and we denote by $F^{-1}(u+)$ its right-hand limit at $u\in [0,1)$ (in particular $F^{-1}(0+)=\lim\limits_{u\to 0+}F^{-1}(u)\in [-\infty,+\infty)$) and set $F^{-1}(1)=\lim\limits_{u\to 1-}F^{-1}(u)\in(-\infty,+\infty]$. By \eqref{Wasserstein}, when $-\infty<x_1\le x_2\le \cdots\le x_N<+\infty$, \begin{equation} \mathcal{W}_\rho^\rho\left(\frac{1}{N}\sum_{i=1}^N\delta_{x_i},\mu\right) =\sum \limits_{i=1}^N \int_{\frac{i-1}{N}}^{\frac{i}{N}} \left|x_{i}-F^{-1}(u)\right|^\rho\,du,\label{wrhomunmu} \end{equation} where, by the inverse transform sampling, the right-hand side is finite if and only if $\int_{\mathbb R}|x|^\rho\mu(dx)<+\infty$. So, when considering $e_N(\mu,\rho)$, we will suppose that $\mu$ has a finite moment of order $\rho$.\\ In the first section of the paper, we recall that, under this moment condition, the infimum in \eqref{pbor} is attained: \begin{equation*} e_N\left(\mu,\rho\right)^\rho = {\cal W}^\rho_\rho\left(\frac{1}{N}\sum_{i=1}^N\delta_{x_i^N},\mu\right)=\sum \limits_{i=1}^N \int_{\frac{i-1}{N}}^{\frac{i}{N}} \left|x^N_{i}-F^{-1}(u)\right|^\rho\,du \end{equation*} for some points $x_i^N\in[F^{-1}(\frac{i-1}{N}+),F^{-1}(\frac{i}{N})]\cap{\mathbb R}$ which are unique as soon as $\rho>1$ and explicit in the quadratic case $\rho=2$. Of course the points $\left(x_i^N\right)_{1\le i\le N}$ depend on $\rho$, but we do not make this dependence explicit, to keep notation simple. For $\rho=1$, because of the lack of strict convexity of ${\mathbb R}\ni x\mapsto|x|$, there may be several optimal choices, among which $x_i^N=F^{-1}\left(\frac{2i-1}{2N}\right)$ for $i\in\{1,\hdots,N\}$. Note that when $\rho\ge\tilde \rho\ge 1 $ and $\displaystyle \int_{\mathbb R}|x|^\rho\mu(dx)<+\infty$, with $(x_i^N)_{1\le i\le N}$ denoting the optimal points for $\rho$, \begin{equation} e_N(\mu,\rho)= {\cal W}_\rho\left(\frac{1}{N}\sum_{i=1}^N\delta_{x^N_i},\mu\right)\ge {\cal W}_{\tilde \rho}\left(\frac{1}{N}\sum_{i=1}^N\delta_{x^N_i},\mu\right)\ge e_N(\mu,\tilde\rho)\label{minoerhoun}. \end{equation}Hence $\rho\mapsto e_N(\mu,\rho)$ is non-decreasing. We give an alternative expression of $e_N(\mu,\rho)$ in terms of the cumulative distribution function rather than the quantile function and recover that $e_N(\mu,\rho)$ tends to $0$ as $N\to+\infty$ when $\int_{\mathbb R}|x|^\rho\mu(dx)<+\infty$. The main purpose of the paper is to study the rate at which this convergence occurs. In particular, we would like to give sufficient conditions on $\mu$, which, when possible, are also necessary, to ensure convergence at a rate $N^{-\alpha}$ with $\alpha >0$ called the order of convergence.
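To illustrate formula \eqref{wrhomunmu}, the following Python sketch (ours) computes $N\,\mathcal{W}_2\left(\frac{1}{N}\sum_{i=1}^N\delta_{x_i^N},\mu\right)$ for $\mu$ the uniform law on $[0,1]$, with $x_i^N=N\int_{\frac{i-1}{N}}^{\frac iN}F^{-1}(u)\,du$ the minimizer of each summand when $\rho=2$; the product converges to $\frac{1}{2\sqrt 3}\approx 0.2887$.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

Finv = lambda u: u                      # quantile function of the uniform law on [0,1]

def wasserstein2_to_empirical(N):
    total = 0.0
    for i in range(1, N + 1):
        a, b = (i - 1) / N, i / N
        xi = N * quad(Finv, a, b)[0]    # optimal point for rho = 2 on the i-th cell
        total += quad(lambda u: (xi - Finv(u)) ** 2, a, b)[0]
    return np.sqrt(total)

for N in [10, 100, 1000]:
    print(N, N * wasserstein2_to_empirical(N))   # tends to 1/(2*sqrt(3))
\end{verbatim}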
This question has already been studied by Xu and Berger in \cite{xuberger}. According to Theorem 5.20 \cite{xuberger}, $\limsup_{N\to\infty}Ne_N(\mu,\rho)\ge\frac 12\int_0^{\frac 12}|F^{-1}(u+\frac 12)-F^{-1}(u)|du$ with the right-hand side positive except when $\mu$ is a Dirac mass, in which case $e_N(\mu,\rho)$ vanishes for all $N\ge 1$ and $\rho\ge 1$. This result may be complemented by the non-asymptotic bound obtained in Lemma 1.4 of the first version \cite{BJ} of the present paper: $Ne_N(\mu,\rho)+(N+1)e_{N+1}(\mu,\rho)\ge \frac 12\int_{{\mathbb R}}F(x)\wedge (1-F(x))\,dx$ where $\int_{{\mathbb R}}F(x)\wedge (1-F(x))\,dx=\int_0^{\frac 12}|F^{-1}(u+\frac 12)-F^{-1}(u)|du$, as is easily seen since these integrals correspond to the area of the set of points lying to the right of $(F(x))_{-\infty\le x\le F^{-1}(\frac{1}{2})}$, to the left of $(1-F(x))_{F^{-1}(\frac{1}{2})<x<+\infty}$, above $0$ and below $\frac{1}{2}$, respectively computed by integration with respect to the abscissa and to the ordinate. This previous version also contained a section devoted to the case when the support of $\mu$ is bounded, i.e. $F^{-1}(1)-F^{-1}(0+)<+\infty$. The results in this section had mostly been obtained before by Xu and Berger in \cite{xuberger}. In particular, according to Theorem 5.21 (ii) \cite{xuberger}, when the support of $\mu$ is bounded, then $\sup_{N\ge 1}N^{\frac{1}{\rho}}e_N(\mu,\rho)<+\infty$, while when $F^{-1}$ is discontinuous $\limsup_{N\to+\infty}N^{\frac{1}{\rho}}e_N(\mu,\rho)>0$, according to Remark 5.22 (ii) \cite{xuberger}. We recovered these results and went slightly further by noticing that when the support of $\mu$ is bounded and $F^{-1}$ is continuous, then $\lim_{N\to\infty}N^{\frac{1}{\rho}}e_N(\mu,\rho)=0$ (Proposition 2.1 \cite{BJ}), with a convergence that can be arbitrarily slow, as exemplified by the beta distribution with parameters $(\beta,1)$, $\beta>0$: $\mu_\beta(dx)=\beta {\mathbf 1}_{[0,1]}(x)x^{\beta-1}\,dx$. Indeed, with the notation $\asymp$ defined at the end of the introduction, according to Example 2.3 \cite{BJ}, when $\rho>1$, $e_N(\mu_\beta,\rho)\asymp N^{-\frac{1}{\rho}-\frac{1}{\beta}}\asymp{\cal W}_\rho\left(\frac{1}{N}\sum_{i=1}^N\delta_{F^{-1}\left(\frac{i-1}{N}\right)},\mu_\beta\right)$ for $\beta>\frac{\rho}{\rho-1}$, $e_N(\mu_\beta,\rho)\asymp N^{-1}(\ln N)^{\frac{1}{\rho}}\asymp{\cal W}_\rho\left(\frac{1}{N}\sum_{i=1}^N\delta_{F^{-1}\left(\frac{i-1}{N}\right)},\mu_\beta\right)$ for $\beta=\frac{\rho}{\rho-1}$ and $\lim_{N\to+\infty}Ne_N(\mu_\beta,\rho)=\frac{1}{2\beta(\rho+1)^{1/\rho}}\left(\int_0^1u^{\frac{\rho}{\beta}-\rho}du\right)^{1/\rho}$ when $\beta\in (0,\frac{\rho}{\rho-1})$. The latter limiting behaviour, which remains valid for $\rho=1$ whatever the value of $\beta>0$, is a consequence of one of the main results by Xu and Berger \cite{xuberger}, namely Theorem 5.15: when the density $f$ of the absolutely continuous (with respect to the Lebesgue measure) part of $\mu$ is $dx$ a.e.
positive on $\left\{x\in{\mathbb R}:0<F(x)<1\right\}$ (or equivalently $F^{-1}$ is absolutely continuous), then $$\lim_{N\to+\infty} Ne_N(\mu,\rho)=\lim_{N\to+\infty} N{\cal W}_\rho\left(\frac{1}{N}\sum_{i=1}^N\delta_{F^{-1}\left(\frac{2i-1}{2N}\right)},\mu\right) = \frac{1}{2(\rho+1)^{1/\rho}}\left(\int_{\mathbb R}\frac{{\mathbf 1}_{\{0<F(x)<1\}}}{f^{\rho-1}(x)}\,dx\right)^{1/\rho}.$$ In Theorem 2.4 \cite{BJ}, we recovered this result and also stated that, without the positivity assumption on the density, $$\liminf_{N\to+\infty} Ne_N(\mu,\rho) \ge \frac{1}{2(\rho+1)^{1/\rho}} \left(\int_{\mathbb R}\frac{{\mathbf 1}_{\{f(x)>0\}}}{f^{\rho-1}(x)}\,dx\right)^{1/\rho}.$$ In particular, for $(Ne_N(\mu,1))_{N\ge 1}$ to be bounded, the Lebesgue measure of $\{x\in{\mathbb R}:f(x)>0\}$ must be finite. Weakening this necessary condition is discussed in Section 4 of the first version \cite{BJ} of this paper. In \cite{chevallier}, Chevallier addresses the multidimensional setting and proves in Theorem III.3 that for a probability measure $\mu$ on ${\mathbb R}^d$ with support bounded by $r$, there exist points $x_1,\hdots,x_N\in{\mathbb R}^d$ such that $\frac{1}{4r}\mathcal{W}_\rho\left(\frac{1}{N}\sum_{i=1}^N\delta_{x_i},\mu\right)\le f_{\rho,d}(N)$ where $f_{\rho,d}(N)$ is respectively equal to $\left(\frac d{d-\rho}\right)^{\frac 1\rho}N^{-\frac 1 d}$, $\left(\frac{1+\ln N}{N}\right)^{\frac 1 d}$, and $\zeta(\rho/d)N^{-\frac 1 \rho}$ with $\zeta$ denoting the Riemann zeta function, when $\rho<d$, $\rho=d$ and $\rho>d$.
The case when the support of $\mu$ is not bounded is also considered by Xu and Berger \cite{xuberger} in the one-dimensional setting of the present paper and by \cite{chevallier} in the multidimensional setting. In Corollary III.5 \cite{chevallier}, Chevallier proves that $\lim_{N\to\infty}(f_{\rho,d}(N))^{-\alpha\rho}\mathcal{W}_\rho\left(\frac{1}{N}\sum_{i=1}^N\delta_{x_i},\mu\right)=0$ when $\int_{{\mathbb R}^d}|x|^{\frac{\rho}{1-\alpha\rho}}\mu(dx)<+\infty$ for some $\alpha\in (0,\frac 1 \rho)$. This generalizes the one-dimensional statement of Theorem 5.21 (i) \cite{xuberger} : under the same moment condition, $\lim_{N\to\infty}N^\alpha e_N(\mu,\rho)=0$. In Theorem \ref{alphaRater}, using our alternative formula for $e_N(\mu,\rho)$ in terms of the cumulative distribution function of $\mu$, we refine this result by stating the following necessary and sufficient condition $$\forall \alpha\in \left(0,\frac{1}{\rho}\right),\;\lim_{x\to +\infty}x^{\frac{\rho}{1-\alpha \rho}}\Big(F(-x)+1-F(x)\Big)=0\Leftrightarrow \lim_{N\to+\infty}N^{\alpha}e_N(\mu,\rho)=0.$$ We also check that $$\forall \alpha\in \left(0,\frac{1}{\rho}\right),\quad\displaystyle{\sup_{N \ge 1}} N^{\alpha} \, e_N(\mu,\rho)<+\infty \Leftrightarrow \sup_{x\ge 0}\;x^{\frac{\rho}{1-\alpha \rho}}\Big(F(-x)+1-F(x)\Big)<+\infty,$$ a condition under which, the order of convergence $\alpha$ of the minimal error $e_N(\mu,\rho)$ is preserved by choosing $x_1=F^{-1}\left(\frac{1}{N}\right)\wedge (-N^{\frac{1}{\rho}-\alpha})$, $x_N=F^{-1}\left(\frac{N-1}{N}\right)\vee N^{\frac{1}{\rho}-\alpha}$ and any $x_i\in[F^{-1}(\frac{i-1}{N}+),F^{-1}(\frac{i}{N})]$ for $2\le i\le N-1$. We also exhibit non-compactly supported probability measures $\mu$ such that, for $\rho>1$, $\lim_{N\to+\infty}N^{\frac{1}{\rho}} \, e_N(\mu,\rho)=0$. Nevertheless, we show that for $\left(N^\alpha e_N(\mu,\rho)\right)_{N\ge 1}$ to be bounded for $\alpha>\frac{1}{\rho}$, the support of $\mu$ has to be bounded. We last give a necessary condition for $\left(N^{\frac{1}{\rho}} e_N(\mu,\rho)\right)_{N\ge 1}$ to be bounded, which unfortunately is not sufficient but ensures the boundedness of $\left(\frac{N^{1/\rho}}{1+\ln N}e_N(\mu,\rho)\right)_{N\ge 1}$. We summarize our results together with the ones obtained by Xu and Berger \cite{xuberger} in Table \ref{res}. \begin{center} \begin{table}[!ht] \begin{tabular}{|c||c|c|}\hline $\alpha$ & Necessary condition & Sufficient condition\\\hline\hline $\alpha=1$ & $\displaystyle \int_{\mathbb R}\frac{{\mathbf 1}_{\left\{f(x)>0 \right\}}}{f^{\rho-1}(x)}\,dx<+\infty$ & $f(x)>0$ $dx$ a.e. on $\left\{x\in{\mathbb R}: 0<F(x)<1 \right\}$ \\ & (Thm. 2.4 \cite{BJ}) & and $\displaystyle \int_{\mathbb R}\frac{{\mathbf 1}_{\{f(x)>0\}}}{f^{\rho-1}(x)}\,dx<+\infty$ (Thm. 5.15 \cite{xuberger}) \\\hline $\alpha\in \left(\frac 1\rho,1\right)$& $F^{-1}$ continuous (Remark 5.22 (ii) \cite{xuberger}) & related to the modulus of continuity of $F^{-1}$\\ when $\rho>1$ & and $\mu$ with bounded support (Prop. \ref{propals1rcomp}) & \\\hline $\alpha=\frac 1\rho$ & $\exists \lambda>0$, $\forall x\ge 0$, $F(-x)+1-F(x)\le\frac{e^{-\lambda x}}{\lambda}$ & $\mu$ with bounded support (Thm. 5.21 (ii) \cite{xuberger}) \\ & (Prop. \ref{propal1rho}) & For $\rho>1$, Exple \ref{exempleexp} with unbounded supp.\\\hline $\alpha\in\left(0,\frac 1\rho\right)$ & ${\sup\limits_{x\ge 0}}\;x^{\frac{\rho}{1-\alpha \rho}}\Big(F(-x)+1-F(x)\Big)<+\infty$& ${\sup\limits_{x\ge 0}}\;x^{\frac{\rho}{1-\alpha \rho}}\Big(F(-x)+1-F(x)\Big)<+\infty$\\ &(Thm. \ref{alphaRater}) &(Thm. 
\ref{alphaRater})\\\hline \end{tabular}\caption{Conditions for the convergence of $e_N(\mu,\rho)$ with order $\alpha$: $\sup\limits_{N\ge 1}N^\alpha e_N(\mu,\rho)<+\infty$.}\label{res} \end{table} \end{center} {\bf Notation:} \begin{itemize} \item We denote by $\lfloor x\rfloor$ (resp. $\lceil x\rceil$) the integer $j$ such that $j\le x<j+1$ (resp. $j-1<x\le j$) and by $\{x\}=x-\lfloor x\rfloor$ the fractional part of $x\in{\mathbb R}$. \item For two sequences $(a_N)_{N\ge 1}$ and $(b_N)_{N\ge 1}$ of real numbers with $b_N>0$ for $N\ge 2$, we denote $a_N\asymp b_N$ when $\displaystyle 0<\inf_{N\ge 2}\left(\frac{a_N}{b_N}\right)$ and $\displaystyle \sup_{N\ge 2}\left(\frac{a_N}{b_N}\right)<+\infty$. \end{itemize}
\section{Preliminary results} When $\rho=1$ (resp. $\rho=2$), $\displaystyle {\mathbb R}\ni y\mapsto N\int_{\frac{i-1}{N}}^{\frac{i}{N}} \left|y-F^{-1}(u)\right|^\rho\,du$ is minimal for $y$ belonging to the set $\left[F^{-1}\left(\frac{2i-1}{2N}\right),F^{-1}\left(\frac{2i-1}{2N}+\right)\right]$ of medians (resp. equal to the mean $\displaystyle N\int_{\frac{i-1}{N}}^{\frac{i}{N}}F^{-1}(u)\,du$) of the image of the uniform law on $\left[\frac{i-1}{N},\frac{i}{N}\right]$ by $F^{-1}$. For general $\rho>1$, the function $\displaystyle {\mathbb R}\ni y\mapsto \int_{\frac{i-1}{N}}^{\frac{i}{N}}\left|y-F^{-1}(u)\right|^\rho\,du$ is strictly convex and continuously differentiable with derivative \begin{equation} \rho\int_{\frac{i-1}{N}}^{\frac{i}{N}}\left({\mathbf 1}_{\left\{y\ge F^{-1}(u)\right\}} \left(y-F^{-1}(u)\right)^{\rho-1}-{\mathbf 1}_{\left\{y<F^{-1}(u)\right\}}\left(F^{-1}(u)-y\right)^{\rho-1}\right)\,du\label{dery} \end{equation} non-positive for $y=F^{-1}\left(\frac{i-1}{N}+\right)$ when either $i=1$ and $F^{-1}(0+)>-\infty$ or $i\ge 2$, and non-negative for $y=F^{-1}\left(\frac{i}{N}\right)$ when either $i\le N-1$ or $i=N$ and $F^{-1}(1)<+\infty$. Since the derivative has a positive limit as $y\to+\infty$ and a negative limit as $y\to-\infty$, we deduce that $\displaystyle {\mathbb R}\ni y\mapsto \int_{\frac{i-1}{N}}^{\frac{i}{N}}\left|y-F^{-1}(u)\right|^\rho\,du$ admits a unique minimizer $x_i^N\in \left[F^{-1}\left(\frac{i-1}{N}+\right),F^{-1}\left(\frac{i}{N}\right)\right]\cap{\mathbb R}$ (to keep notation simple, we do not make the dependence of $x_i^N$ on $\rho$ explicit). Therefore \begin{equation} e^\rho_N(\mu,\rho)=\sum_{i=1}^N \int_{\frac{i-1}{N}}^{\frac{i}{N}} \left|x_{i}^N-F^{-1}(u)\right|^\rho\,du \mbox{ with } \left[F^{-1}\left(\frac{i-1}{N}+\right),F^{-1}\left(\frac{i}{N}\right)\right]\ni x_i^N=\begin{cases} \displaystyle F^{-1}\left(\frac{2i-1}{2N}\right)\mbox{ if }\rho=1,\\ \displaystyle N\int_{\frac{i-1}{N}}^{\frac{i}{N}}F^{-1}(u)\,du\mbox{ if }\rho=2,\\ \mbox{not explicit otherwise.} \end{cases}\label{enrho} \end{equation} To bound $e_N(\mu,\rho)$ from above, we may replace the optimal point $x_i^N$ by $F^{-1}\left(\frac{2i-1}{2N}\right)$: $$\forall i\in\{1,\hdots,N\},\quad \int_{\frac{i-1}{N}}^{\frac{i}{N}}\left|F^{-1}(u)-x_i^N \right|^\rho\,du\le \int_{\frac{i-1}{N}}^{\frac{i}{N}}\left|F^{-1}(u)-F^{-1}\left(\frac{2i-1}{2N}\right)\right|^\rho\,du,$$ a simple choice particularly appropriate when linearization is possible since $\displaystyle \left[\frac{i-1}{N},\frac{i}{N}\right]\ni v\mapsto\int_{\frac{i-1}{N}}^{\frac{i}{N}}\left|u-v\right|^\rho\,du$ is minimal for $v=\frac{2i-1}{2N}$. To bound $e_N(\mu,\rho)$ from below, we can use that, by Jensen's inequality and the minimality of $F^{-1}\left(\frac{2i-1}{2N}\right)$ for $\rho=1$, \begin{align} \displaystyle \int_{\frac{i-1}{N}}^{\frac i N} \left|F^{-1}(u)-x^N_{i}\right|^{\rho}\,du &\ge N^{\rho -1} \left(\int_{\frac{i-1}{N}}^{\frac i N} \left|F^{-1}(u)-x^N_{i}\right|\,du\right)^{\rho} \ge N^{\rho -1} \left(\int_{\frac{i-1}{N}}^{\frac i N} \left|F^{-1}(u)-F^{-1}\left(\frac{2i-1}{2N}\right)\right|\,du\right)^{\rho} \notag\\ &\ge N^{\rho -1} \left(\frac{1}{4N}\left(F^{-1}\left(\frac{2i-1}{2N}\right)-F^{-1}\left(\frac{4i-3}{4N}\right)+F^{-1}\left(\frac{4i-1}{4N}\right)-F^{-1}\left(\frac{2i-1}{2N}\right)\right)\right)^{\rho} \notag\\ &\ge \frac{1}{4^\rho N}\left(F^{-1}\left(\frac{4i-1}{4N}\right)-F^{-1}\left(\frac{4i-3}{4N}\right)\right)^{\rho}.\label{minotermbordlimB} \end{align} We also have an alternative
formulation of $e_N(\mu,\rho)$ in terms of the cumulative distribution function $F$ in place of the quantile function $F^{-1}$: \begin{prop}\label{propenf} \begin{equation} e_N^\rho(\mu,\rho)=\rho\sum_{i=1}^N\left(\int_{F^{-1}\left(\frac{i-1}{N}+\right)}^{x_i^N}\left(x_i^N-y\right)^{\rho-1}\left(F(y)-\frac{i-1}{N}\right)\,dy+\int^{F^{-1}\left(\frac{i}{N}\right)}_{x_i^N}\left(y-x_i^N\right)^{\rho-1}\left(\frac{i}{N}-F(y)\right)\,dy\right).\label{enf} \end{equation} \end{prop} Under the convention $F^{-1}(0)=-\infty$, when, for some $i\in\{1,\hdots,N\}$, $F^{-1}\left(\frac{i-1}{N}+\right)>F^{-1}\left(\frac{i-1}{N}\right)$, then $F(y)=\frac{i-1}{N}$ for $y\in\left[F^{-1}\left(\frac{i-1}{N}\right),F^{-1}\left(\frac{i-1}{N}+\right)\right)$ and $\displaystyle \int_{F^{-1}\left(\frac{i-1}{N}\right)}^{F^{-1}\left(\frac{i-1}{N}+\right)}(x_i^N-y)^{\rho-1}\left(F(y)-\frac{i-1}{N}\right)\,dy=0$ so that the lower integration limit in the first integral in the right-hand side of \eqref{enf} may be replaced by $F^{-1}\left(\frac{i-1}{N}\right)$. In a similar way, the upper integration limit in the second integral may be replaced by $F^{-1}\left(\frac{i}{N}+\right)$ under the convention $F^{-1}(1+)=+\infty$. When $\rho=1$, the equality \eqref{enf} follows from the interpretation of ${\cal W}_1(\nu,\eta)$ as the integral of the absolute difference between the cumulative distribution functions of $\nu$ and $\eta$ (equal, as seen by a rotation of angle $\frac{\pi}{2}$, to the integral of the absolute difference between their quantile functions) and the integral simplifies into: \begin{equation}\label{w1altern2b} e_N(\mu,1)=\sum_{i=1}^N\left(\int_{F^{-1}\left(\frac{i-1}{N}+\right)}^{F^{-1}\left(\frac{2i-1}{2N}\right)}\left(F(y)-\frac{i-1}{N}\right)\,dy+\int^{F^{-1}\left(\frac{i}{N}\right)}_{F^{-1}\left(\frac{2i-1}{2N}\right)}\left(\frac{i}{N}-F(y)\right)\,dy\right)=\frac{1}{N}\int_{\mathbb R}\min_{j\in{\mathbb N}}\left|NF(y)-j\right|\,dy. \end{equation} For $\rho>1$, it can be deduced from the general formula for ${\cal W}_\rho^\rho(\nu,\eta)$ in terms of the cumulative distribution functions of $\nu$ and $\eta$ (see for instance Lemma B.3 \cite{jourey2}). It is also a consequence of the following equality for each term of the decomposition over $i\in\{1,\hdots,N\}$, which we will need below. \begin{lem}\label{lemenf} Assume that $\displaystyle \int_{\mathbb R}|x|^\rho\mu(dx)<+\infty$ with $\rho\ge 1$. For $i\in\{1,\hdots,N\}$ and $x\in \left[F^{-1}\left(\frac{i-1}{N}\right),F^{-1}\left(\frac{i}{N}\right)\right]\cap{\mathbb R}$ (with convention $F^{-1}(0)=F^{-1}(0+)$), we have: $$\int_{\frac{i-1}{N}}^{\frac{i}{N}}\left|x-F^{-1}(u)\right|^\rho\,du=\rho\int_{F^{-1}\left(\frac{i-1}{N}\right)}^{x}(x-y)^{\rho-1}\left(F(y)-\frac{i-1}{N}\right)\,dy+\rho\int^{F^{-1}\left(\frac{i}{N}\right)}_{x}(y-x)^{\rho-1}\left(\frac{i}{N}-F(y)\right)\,dy,$$ and the right-hand side is minimal for $x=x_i^N$. \end{lem} \begin{proof} Let $i\in\{1,\hdots,N\}$ and $x\in\left[F^{-1}\left(\frac{i-1}{N}\right),F^{-1}\left(\frac{i}{N}\right)\right]\cap{\mathbb R}$. We have $\frac{i-1}{N}\le F(x)$ and $F(x-)\le\frac{i}{N}$.
Since $F^{-1}(u)\le x\Leftrightarrow u\le F(x)$ and $F^{-1}(u)=x$ for $u\in \left(F(x-),F(x)\right]$, we have: $$\int_{\frac{i-1}{N}}^{\frac{i}{N}} \left|x-F^{-1}(u)\right|^\rho\,du=\int_{\frac{i-1}{N}}^{F(x)} \left(x-F^{-1}(u)\right)^\rho\,du+\int^{\frac{i}{N}}_{F(x)} \left(F^{-1}(u)-x\right)^\rho\,du.$$ Using the well-known fact that the image of ${\mathbf 1}_{[0,1]}(v)\,dv\mu(dz)$ by $(v,z)\mapsto F(z-)+v\mu(\{z\})$ is the Lebesgue measure on $[0,1]$ and that ${\mathbf 1}_{[0,1]}(v)\,dv\mu(dz)$ a.e., $F^{-1}\left(F(z-)+v\mu(\{z\})\right)=z$ , we obtain that: \begin{align} \int_{\frac{i-1}{N}}^{F(x)} \left(x-F^{-1}(u)\right)^\rho\,du &= \int_{v=0}^1\int_{z\in {\mathbb R}}{\mathbf 1}_{\left\{\frac{i-1}{N}\le F(z-)+v\mu(\{z\})\le F(x)\right\}}(x-z)^\rho\mu(dz)\,dv\notag\\ &=\int_{v=0}^1\int_{z\in {\mathbb R}}{\mathbf 1}_{\left\{\frac{i-1}{N}\le F(z-)+v\mu(\{z\})\le F(x)\right\}}\int \rho(x-y)^{\rho-1}{\mathbf 1}_{\{z\le y\le x\}}\,dy\mu(dz)\,dv\notag\\ &=\rho\int_{y=-\infty}^{x}(x-y)^{\rho-1}\int_{v=0}^1\int_{z\in {\mathbb R}}{\mathbf 1}_{\left\{\frac{i-1}{N}\le F(z-)+v\mu(\{z\})\right\}}{\mathbf 1}_{\left\{z\le y \right\}}\mu(dz)\,dv\,dy.\label{triplint} \end{align} For $v>0$, $\{z\in{\mathbb R}:F(z-)+v\mu(\{z\})\le F(y)\}=(-\infty,y]\cup\{z\in{\mathbb R}:z>y\mbox{ and }F(z)=F(y)\}$ with $\mu\left(\{z\in{\mathbb R}:z>y\mbox{ and }F(z)=F(y)\}\right)=0$ and therefore $$\int_{z\in {\mathbb R}}{\mathbf 1}_{\left\{\frac{i-1}{N}\le F(z-)+v\mu(\{z\})\right\}}{\mathbf 1}_{\{z\le y\}}\mu(dz)=\int_{z\in {\mathbb R}}{\mathbf 1}_{\left\{\frac{i-1}{N}\le F(z-)+v\mu(\{z\})\le F(y)\right\}}\mu(dz).$$ Plugging this equality in \eqref{triplint}, using again the image of ${\mathbf 1}_{[0,1]}(v)\,dv\mu(dz)$ by $(v,z)\mapsto F(z-)+v\mu(\{z\})$ and the equivalence $\frac{i-1}{N}\le F(y)\Leftrightarrow F^{-1}\left(\frac{i-1}{N}\right)\le y$ , we deduce that: \begin{align*} \int_{\frac{i-1}{N}}^{F(x)}\left(x-F^{-1}(u)\right)^\rho\,du&=\rho\int_{y=-\infty}^{x}(x-y)^{\rho-1}\int_{u=0}^1{\mathbf 1}_{\left\{\frac{i-1}{N}\le u\le F(y)\right\}}\,du\,dy=\rho\int_{F^{-1}\left(\frac{i-1}{N}\right)}^{x}(x-y)^{\rho-1}\left(F(y)-\frac{i-1}{N}\right)\,dy. \end{align*} In a similar way, we check that: $$\int^{\frac{i}{N}}_{F(x)}\left(F^{-1}(u)-x\right)^\rho\,du=\rho\int^{F^{-1}\left(\frac{i}{N}\right)}_{x}(y-x)^{\rho-1}\left(\frac{i}{N}-F(y)\right)\,dy,$$ which concludes the proof.\end{proof}
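As a complement to \eqref{enrho} and Lemma \ref{lemenf}, the following Python sketch (illustrative only; \texttt{Finv} stands for a vectorized quantile function supplied by the user) computes the optimal points for $\rho=1$ and $\rho=2$, where they are explicit, and evaluates $e_N^\rho(\mu,\rho)$ by midpoint quadrature; for general $\rho>1$, a one-dimensional convex minimization over $[F^{-1}(\frac{i-1}{N}+),F^{-1}(\frac{i}{N})]$ would be needed instead.
\begin{verbatim}
import numpy as np

def optimal_points(Finv, N, rho, n_quad=200):
    # Optimal points x_i^N of (enrho): mid-quantiles for rho = 1,
    # interval means of the quantile function (midpoint quadrature)
    # for rho = 2; not explicit for other values of rho.
    i = np.arange(1, N + 1)
    if rho == 1:
        return Finv((2 * i - 1) / (2 * N))
    if rho == 2:
        u = ((i - 1)[:, None] + (np.arange(n_quad) + 0.5)[None, :] / n_quad) / N
        return Finv(u).mean(axis=1)
    raise NotImplementedError("explicit optimal points only for rho in {1, 2}")

def e_N_pow(Finv, N, rho, n_quad=200):
    # e_N(mu, rho)^rho obtained by plugging the points above into the
    # quantile-function formula for W_rho^rho.
    x = optimal_points(Finv, N, rho, n_quad)
    i = np.arange(1, N + 1)
    u = ((i - 1)[:, None] + (np.arange(n_quad) + 0.5)[None, :] / n_quad) / N
    return (np.abs(x[:, None] - Finv(u)) ** rho).mean(axis=1).sum() / N
\end{verbatim}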
\begin{prop} For each $\rho\ge 1$, we have $\displaystyle \int_{\mathbb R} |x|^\rho\mu(dx)<+\infty\Leftrightarrow \lim_{N\to+\infty} e_N(\mu,\rho)=0$. \end{prop} The direct implication can be deduced from the inequality $e_N(\mu,\rho)\le{\cal W}_\rho\left(\frac{1}{N}\sum\limits_{i=1}^N\delta_{X_i},\mu\right)$ and the almost sure convergence to $0$ of ${\cal W}_\rho\left(\frac{1}{N}\sum\limits_{i=1}^N\delta_{X_i},\mu\right)$ for $(X_i)_{i\ge 1}$ i.i.d. according to $\mu$ deduced from the strong law of large numbers and stated for instance in Theorem 2.13 \cite{bobkovledoux}. We give an alternative simple argument based on \eqref{enf}. \begin{proof} According to the introduction, the finiteness of $e_N(\mu,\rho)$ for some $N\ge 1$ implies that $\displaystyle \int_{\mathbb R}|x|^\rho\mu(dx)<+\infty$. So it is enough to check the zero limit property under the finite moment condition. When respectively $F^{-1}\left(\frac{i}{N}\right)\le 0$, $F^{-1}\left(\frac{i-1}{N}+\right)<0<F^{-1}\left(\frac{i}{N}\right)$ or $F^{-1}\left(\frac{i-1}{N}+\right)\ge 0$ , then, by Lemma \ref{lemenf}, the term with index $i$ in \eqref{enf} is respectively bounded from above by $$\int_{F^{-1}\left(\frac{i-1}{N}+\right)}^{F^{-1}\left(\frac{i}{N}\right)}\left(F^{-1}\left(\frac{i}{N}\right)-y\right)^{\rho-1}\left(F(y)-\frac{i-1}{N}\right)\,dy \le \int_{F^{-1}\left(\frac{i-1}{N}+\right)}^{F^{-1}\left(\frac{i}{N}\right)}(-y)^{\rho-1}\left(\frac{1}{N}\wedge F(y)\right)\,dy,$$ $$\int_{F^{-1}\left(\frac{i-1}{N}+\right)}^{0}(-y)^{\rho-1}\left(\frac{1}{N}\wedge F(y)\right)\,dy+\int^{F^{-1}\left(\frac{i}{N}\right)}_{0}y^{\rho-1}\left(\frac{1}{N}\wedge(1-F(y))\right)\,dy,$$ $$\int_{F^{-1}\left(\frac{i-1}{N}+\right)}^{F^{-1}\left(\frac{i}{N}\right)}\left(y-F^{-1}\left(\frac{i-1}{N}+\right)\right)^{\rho-1}\left(\frac{i}{N}-F(y)\right)\,dy \le \int_{F^{-1}\left(\frac{i-1}{N}+\right)}^{F^{-1}\left(\frac{i}{N}\right)}y^{\rho-1}\left(\frac{1}{N}\wedge(1-F(y))\right)\,dy.$$ After summation, we deduce that: $$e_N^\rho(\mu,\rho)\le \rho\int_{-\infty}^0(-y)^{\rho-1}\left(\frac{1}{N}\wedge F(y)\right)\,dy+\rho\int_0^{+\infty}y^{\rho-1}\left(\frac{1}{N}\wedge(1-F(y))\right)\,dy.$$ Since, by Fubini's theorem, $\displaystyle \rho\int_{-\infty}^0(-y)^{\rho-1}F(y)\,dy+\rho\int_0^{+\infty}y^{\rho-1}(1-F(y))\,dy=\int_{\mathbb R} |x|^\rho\mu(dx)<+\infty$, Lebesgue's theorem ensures that the right-hand side and therefore $e_N(\mu,\rho)$ go to $0$ as $N\to+\infty$. \end{proof}
\section{The non compactly supported case}\label{parnoncomp} According to Theorem 5.21 (ii) \cite{xuberger}, when the support of $\mu$ is bounded, $\displaystyle \sup_{N\ge 1}N^{1/\rho}e_N(\mu,\rho)<+\infty$ with $\displaystyle \lim_{N\to+\infty}N^{1/\rho}e_N(\mu,\rho)=0$ if and only if the quantile function $F^{-1}$ is continuous according to Remark 5.22 (ii) \cite{xuberger} and Proposition 2.1 \cite{BJ}. The case $\beta>1$ in the next example illustrates the possibility that, when $\rho>1$, $\displaystyle \lim_{N\to+\infty}N^{\frac{1}{\rho}}e_N(\mu,\rho)=0$ for some non compactly supported probability measures $\mu$. Of course, $F^{-1}$ is then continuous on $(0,1)$, since, by Remark 5.22 (ii) \cite{xuberger}, $\displaystyle \limsup_{N\to+\infty} N^{1/\rho}e_N(\mu,\rho)>0$ otherwise. \begin{exple}\label{exempleexp} For $\mu_\beta(dx)=f(x)\,dx$ with $f(x)=\mathbf{1}_{\{x>0\}}\beta x^{\beta -1}\exp\left(-x^{\beta}\right)$ with $\beta>0$ (the exponential distribution case $\beta=1$, was addressed in Example 5.17 and Remark 5.22 (i) \cite{xuberger}), we have that $F(x) = \mathbf{1}_{\{x>0\}}\left(1-\exp(-x^{\beta})\right)$, $F^{-1}(u) = \left(-\ln(1-u)\right)^{\frac{1}{\beta}}$ and $f\left(F^{-1}(u)\right)=\beta(1-u)(-\ln(1-u))^{1-\frac{1}{\beta}}$. The density $f$ is decreasing on $\left[x_\beta,+\infty\right)$ where $x_\beta=\left(\frac{(\beta-1)\vee 0}{\beta}\right)^{\frac{1}{\beta}}$. Using \eqref{minotermbordlimB}, the equality $F^{-1}(w)-F^{-1}(u)=\int_u^w\frac{dv}{f\left(F^{-1}(v)\right)}$ valid for $u,w\in(0,1)$ and the monotonicity of the density, we obtain that for $N$ large enough so that $\lceil NF(x_\beta)\rceil\le N-1$, \begin{align} e_N^\rho(\mu_\beta,\rho)&\ge \frac{1}{4^{\rho}N}\sum_{i=\lceil NF(x_\beta)\rceil +1}^N\left(\int_{\frac{4i-3}{4N}}^{\frac{4i-1}{4N}}\frac{du}{f(F^{-1}(u))}\right)^\rho\ge \frac{1}{8^{\rho}N^{\rho+1}}\sum_{i=\lceil NF(x_\beta)\rceil +1}^N\frac{1}{f^\rho\left(F^{-1}\left(\frac{4i-3}{4N}\right)\right)}\notag\\ &\ge \frac{1}{(8N)^{\rho}}\sum_{i=\lceil NF(x_\beta)\rceil +2}^N\int_{\frac{i-2}{N}}^{\frac{i-1}{N}}\frac{du}{f^\rho\left(F^{-1}(u)\right)}=\frac{1}{(8N)^{\rho}}\int_{\frac{\lceil NF(x_\beta)\rceil}{N}}^{\frac{N-1}{N}}\frac{du}{f^\rho\left(F^{-1}(u)\right)}.\label{minoenrd} \end{align} Using H\"older's inequality for the second inequality, then Fubini's theorem for the third, we obtain that \begin{align*} e_N^\rho(\mu_\beta,\rho)&-\int_{\frac{N-1}{N}}^{1}\left|x_N^N-F^{-1}(u)\right|^\rho\,du \le \sum_{i=1}^{N-1}\int_{\frac{i-1}{N}}^{\frac{i}{N}}\left|\int_{\frac{2i-1}{2N}}^u\frac{dv}{f(F^{-1}(v))}\right|^\rho\,du \\&\le \sum_{i=1}^{N-1}\int_{\frac{i-1}{N}}^{\frac{i}{N}}\left|u-\frac{2i-1}{2N}\right|^{\rho-1}\left|\int_{\frac{2i-1}{2N}}^u \frac{dv}{f^{\rho}\left(F^{-1}(v)\right)}\right|\,du\le\frac{1}{(2N)^\rho\rho}\int_0^{\frac{N-1}{N}}\frac{dv}{f^\rho\left(F^{-1}(v)\right)}. \end{align*} We have $F(x_\beta)<1$ and, when $\beta>1$, $F(x_\beta)>0$. 
By integration by parts, for $\rho>1$, \begin{align*} (\rho-1)&\int_{F(x_\beta)}^{\frac{N-1}{N}}\frac{\beta^\rho\,du}{f^\rho\left(F^{-1}(u)\right)} = \int_{F(x_\beta)}^{\frac{N-1}{N}}(\rho-1)(1-u)^{-\rho}(-\ln(1-u))^{\frac{\rho}{\beta}-\rho}\,du\\ &=\left[(1-u)^{1-\rho}(-\ln(1-u))^{\frac{\rho}{\beta}-\rho}\right]^{\frac{N-1}{N}}_{F(x_\beta)}+{\left(\rho-\frac{\rho}{\beta}\right)}\int_{F(x_\beta)}^{\frac{N-1}{N}}(1-u)^{-\rho}(-\ln(1-u))^{\frac{\rho}{\beta}-\rho-1}\,du\\ &=N^{\rho-1}(\ln N)^{\frac{\rho}{\beta}-\rho}+o\left(\int_{F(x_\beta)}^{\frac{N-1}{N}}(1-u)^{-\rho}(-\ln(1-u))^{\frac{\rho}{\beta}-\rho}\,du\right)\sim N^{\rho-1}(\ln N)^{\frac{\rho}{\beta}-\rho}, \end{align*} as $N\to+\infty$. We obtain the same equivalent when replacing the lower integration limit $F(x_\beta)$ in the left-hand side by $\frac{\lceil NF(x_\beta)\rceil}{N}$ or $0$ since $\displaystyle \lim_{N\to+\infty}\int^{\frac{\lceil NF(x_\beta)\rceil}{N}}_{F(x_\beta)}\frac{du}{f^\rho(F^{-1}(u))}=0$ and $\displaystyle \int_0^{F(x_\beta)}\frac{du}{f^\rho(F^{-1}(u))}<+\infty$. On the other hand, \begin{align*} \int_{\frac{N-1}{N}}^1\left|x_N^N-F^{-1}(u)\right|^\rho\,du\le \int_{\frac{N-1}{N}}^1\left(\left(-\ln(1-u)\right)^{\frac{1}{\beta}}-(\ln N)^{\frac{1}{\beta}}\right)^\rho\,du. \end{align*} When $\beta<1$, for $u\in\left[\frac{N-1}{N},1\right]$, $\left(-\ln(1-u)\right)^{\frac{1}{\beta}}-(\ln N)^{\frac{1}{\beta}} \le {\frac{1}{\beta}}\left(-\ln(1-u)\right)^{\frac{1}{\beta}-1}\left(-\ln(1-u)-\ln N\right)$ so that \begin{align*} \int_{\frac{N-1}{N}}^1\left(\left(-\ln(1-u)\right)^{\frac{1}{\beta}}-(\ln N)^{\frac{1}{\beta}}\right)^\rho\,du&\le \frac{1}{\beta^\rho}\int_{\frac{N-1}{N}}^1\left(-\ln(1-u)\right)^{\frac{\rho}{\beta}-\rho}\left(-\ln(N(1-u)) \right)^\rho\,du\\ &=\frac{1}{\beta^\rho N}\int_0^1\left(\ln N-\ln v\right)^{\frac{\rho}{\beta}-\rho}(-\ln(v))^\rho\,dv\\ &\le \frac{2^{(\frac{\rho}{\beta}-\rho-1)\vee 0}}{\beta^\rho N}\left(\left(\ln N\right)^{\frac{\rho}{\beta}-\rho}\int_0^1(-\ln(v))^\rho\,dv+\int_0^1(-\ln(v))^{\frac{\rho}{\beta}}\,dv\right). \end{align*} When $\beta\ge 1$, for $N\ge 2$ and $u\in\left[\frac{N-1}{N},1\right]$, $\left(-\ln(1-u)\right)^{\frac{1}{\beta}}-(\ln N)^{\frac{1}{\beta}}\le {\frac{1}{\beta}}\left(\ln N\right)^{\frac{1}{\beta}-1}\left(-\ln(1-u)-\ln N\right)$ so that \begin{align} \int_{\frac{N-1}{N}}^1\left(\left(-\ln(1-u)\right)^{\frac{1}{\beta}}-(\ln N)^{\frac{1}{\beta}}\right)^\rho\,du\le \frac{\left(\ln N\right)^{\frac{\rho}{\beta}-\rho}}{\beta^\rho N}\int_0^1(-\ln(v))^{\rho}\,dv.\label{amjotailexp} \end{align} We conclude that for $\rho>1$ and $\beta>0$, $e_N(\mu_\beta,\rho)\asymp N^{-\frac{1}{\rho}}(\ln N)^{\frac{1}{\beta}-1}\asymp {\cal W}_\rho(\frac{1}{N}(\sum_{i=1}^{N-1}\delta_{F^{-1}\left(\frac{2i-1}{2N}\right)}+\delta_{F^{-1}\left(\frac{N-1}{N}\right)}),\mu_\beta)$. In view of Theorem 5.20 \cite{xuberger}, this rate of convergence does not extend continuously to $e_N(\mu_\beta,1)$, at least for $\beta>1$. Indeed, by Remark 2.2 \cite{jrdcds}, $e_N(\mu_\beta,1)\asymp N^{-1}(\ln N)^{\frac{1}{\beta}}$, which, in view of \eqref{amjotailexp}, implies that $\sum\limits_{i=1}^{N-1}\int_{\frac{i-1}{N}}^{\frac{i}{N}}\left|x_i^N-F^{-1}(u)\right|\,du\asymp N^{-1}(\ln N)^{\frac{1}{\beta}}$. In the Gaussian tail case $\beta=2$, $e_N(\mu_2,\rho)\asymp N^{-\frac{1}{\rho}}(\ln N)^{-\frac{1}{2}+{\mathbf 1}_{\{\rho=1\}}}$ for $\rho\ge 1$, as for the true Gaussian distribution, according to Example 5.18 \cite{xuberger}. 
This matches the rate obtained when $\rho>2$ in Corollary 6.14 \cite{bobkovledoux} for ${\mathbb E}^{1/\rho}\left[{\cal W}_\rho^\rho\left(\frac{1}{N}\sum\limits_{i=1}^N\delta_{X_i},\mu\right)\right]$ where $(X_i)_{i\ge 1}$ are i.i.d. with respect to some Gaussian distribution $\mu$ with positive variance. When $\rho=2$, still according to this corollary the random rate is $N^{-1/2}(\ln\ln N)^{1/2}$ (of course worse than the standard Monte Carlo rate $N^{-1/2}$).\end{exple} According to the next result, the order of convergence of $e_N(\mu,\rho)$ cannot exceed $\frac{1}{\rho}$ when the support of $\mu$ is not bounded. \begin{prop} Let $\rho>1$. Then $\exists \, \alpha>\frac 1\rho,\;\sup_{N\ge 1}N^\alpha e_N(\mu,\rho)<+\infty\implies F^{-1}(1)-F^{-1}(0+)<+\infty$. \label{propals1rcomp} \end{prop} \begin{proof} Let $\rho>1$ and $\alpha>\frac 1\rho$ be such that $\sup_{N\ge 1}N^{\alpha} e_N(\mu,\rho)<+\infty$ so that, by \eqref{enrho}, $$\sup_{N\ge 1}N^{\alpha\rho}\left(\int_0^{\frac 1 N} \left|F^{-1}(u)-x^N_1\right|^{\rho}\,du+\int^1_{\frac{N-1}{N}} \left|F^{-1}(u)-x^N_N\right|^{\rho}\,du\right)<+\infty.$$ By \eqref{minotermbordlimB} for $i=1$ and $N\ge1$, we have: \begin{align*} \displaystyle \int_0^{\frac 1 N} \left|F^{-1}(u)-x^N_1\right|^{\rho}\,du \ge \frac{1}{4^\rho N}\left(F^{-1}\left(\frac{1}{2N}\right)-F^{-1}\left(\frac{1}{4N}\right)\right)^{\rho}. \end{align*} Therefore $C:=\sup\limits_{N\ge 1}(2N)^{\alpha-\frac{1}{\rho}}\left(F^{-1}\left(\frac{1}{2N}\right)-F^{-1}\left(\frac{1}{4N}\right)\right)<+\infty$. For $k\in {\mathbb N}^{*}$, we deduce that $F^{-1}\left(2^{-(k+1)}\right)-F^{-1}\left(2^{-k}\right) \ge - C2^{\frac{1-\alpha \rho}{\rho}k}$, and after summation that: \begin{equation} \forall k\in {\mathbb N}^{*},F^{-1}\left(2^{-k}\right) \ge F^{-1}\left(1/2\right) - \frac{C}{2^{\alpha -\frac{1}{\rho}} -1 }\left(1- 2^{\frac{1-\alpha \rho}{\rho}(k-1)}\right).\label{minoF-1u0} \end{equation} When $k\to+\infty$, the right-hand side goes to $\left(F^{-1}\left(\frac 1 2\right) - \frac{C}{2^{\alpha -\frac{1}{\rho}}-1}\right)>-\infty$ so that $F^{-1}(0+)>-\infty$. In a symmetric way, we check that $F^{-1}(1)<+\infty$ so that $\mu$ is compactly supported. \end{proof}
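Coming back to Example \ref{exempleexp}, the rate $e_N(\mu_\beta,\rho)\asymp N^{-\frac{1}{\rho}}(\ln N)^{\frac{1}{\beta}-1}$ for $\rho>1$ can be observed numerically. The following Python sketch is purely illustrative: the quadrature is crude, so the printed ratios should only be read as staying bounded away from $0$ and $+\infty$; the points are $F^{-1}\left(\frac{2i-1}{2N}\right)$ for $i\le N-1$ and $F^{-1}\left(\frac{N-1}{N}\right)$ for $i=N$, as in the example, and the parameters $\beta$ and $\rho$ are assumed values.
\begin{verbatim}
import numpy as np

beta, rho = 2.0, 2.0   # assumed parameters (Gaussian-like tail, rho > 1)
Finv = lambda u: (-np.log(1.0 - u)) ** (1.0 / beta)

def w_rho(points, N, n_quad=500):
    # W_rho between (1/N) sum_i delta_{points[i]} and mu_beta, by midpoint
    # quadrature of |x_i - Finv(u)|^rho on each interval [(i-1)/N, i/N].
    i = np.arange(N)[:, None]
    u = (i + (np.arange(n_quad) + 0.5)[None, :] / n_quad) / N
    return ((np.abs(points[:, None] - Finv(u)) ** rho).mean(axis=1).sum() / N) ** (1 / rho)

for N in [100, 1000, 10000]:
    pts = Finv((2 * np.arange(1, N + 1) - 1) / (2 * N))
    pts[-1] = Finv((N - 1.0) / N)    # choice of the last point made in the example
    rate = N ** (-1.0 / rho) * np.log(N) ** (1.0 / beta - 1.0)
    print(N, w_rho(pts, N) / rate)   # ratio should remain of order one
\end{verbatim}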
The next theorem gives a necessary and sufficient condition for $e_N(\mu,\rho)$ to go to $0$ with order $\alpha\in\left(0,\frac 1\rho\right)$. \begin{thm}\label{alphaRater} Let $\rho\ge 1$ and $\alpha\in \left(0,\frac{1}{\rho}\right)$. We have \begin{align*} {\sup_{x\ge 0}}\;x^{\frac{\rho}{1-\alpha \rho}}\Big(F(-x)+1-F(x)\Big)<+\infty \Leftrightarrow \displaystyle{\sup_{N \ge 1}} N^{\alpha} \, e_N(\mu,\rho)<+\infty\Leftrightarrow \displaystyle{\sup_{N \ge 2}}\sup_{x_{2:N-1}}N^{\alpha}{\cal W}_\rho(\mu_N(x_{2:N-1}),\mu)<+\infty\end{align*} where $\mu_N(x_{2:N-1})=\frac{1}{N}\left(\delta_{F^{-1}\left(\frac{1}{N}\right)\wedge (-N^{\frac{1}{\rho}-\alpha})}+\sum_{i=2}^{N-1}\delta_{x_i}+\delta_{F^{-1}\left(\frac{N-1}{N}\right)\vee N^{\frac{1}{\rho}-\alpha}}\right)$ and $\sup_{x_{2:N-1}}$ means the supremum over the choice of $x_i\in[F^{-1}(\frac{i-1}{N}+),F^{-1}(\frac{i}{N})]$ for $2\le i\le N-1$. Moreover, $$\lim_{x\to +\infty}x^{\frac{\rho}{1-\alpha \rho}}\Big(F(-x)+1-F(x)\Big)=0\Leftrightarrow \lim_{N\to+\infty}N^{\alpha}e_N(\mu,\rho)=0.$$ \end{thm} Let $\hat\beta=\sup\{\beta\ge 0:\int_{\mathbb R}|x|^\beta\mu(dx)<\infty\}$. For $\beta \in (0,\hat\beta)$, $\int_{\mathbb R}|x|^\beta\mu(dx)<\infty$, while, when $\hat\beta<\infty$, for $\beta>\hat\beta$, $\int_{\mathbb R}|x|^{\frac{\hat\beta+\beta}{2}}\mu(dx)=+\infty$, so that, by Lemma \ref{lemququcdf}, ${\sup_{x\ge 0}}\;x^{\beta}\Big(F(-x)+1-F(x)\Big)=+\infty$. Let $\rho\ge 1$. If $\hat\beta>\rho$, we deduce from Theorem 5.21 (i) \cite{xuberger} that for each $\alpha\in (0,\frac 1\rho-\frac 1 {\hat\beta})$, $\lim_{N\to\infty}N^\alpha e_N(\mu,\rho)=0$ and, when $\hat\beta<+\infty$, Theorem \ref{alphaRater} ensures that for each $\alpha>\frac 1\rho-\frac 1 {\hat\beta}$, $\sup_{N\ge 1} N^\alpha e_N(\mu,\rho)=+\infty$ since $\frac\rho{1-\alpha\rho}>\hat\beta$. In this sense, when $\rho<\hat\beta<+\infty$, the order of convergence of $e_N(\mu,\rho)$ to $0$ is $\frac 1\rho-\frac 1 {\hat\beta}$. Moreover, the boundedness and the vanishing limit at infinity for the sequence $(N^{\frac 1\rho-\frac 1 {\hat\beta}} e_N(\mu,\rho))_{N\ge 1}$ are respectively equivalent to the same property for the function ${\mathbb R}_+\ni x\mapsto x^{\hat\beta}\Big(F(-x)+1-F(x)\Big)$. Note that $\limsup_{x\to+\infty} x^{\hat\beta}\Big(F(-x)+1-F(x)\Big)$ can be $0$, positive and finite, or $+\infty$. \begin{remark}\begin{itemize} \item In the proof, we check that if $C={\sup\limits_{x\ge 0}}\;x^{\frac{\rho}{1-\alpha \rho}}\Big(F(-x)+1-F(x)\Big)$, then \begin{align*} \sup_{N\ge 1} N^{\alpha\rho}e^\rho_N(\mu,\rho)\le \sup_{N \ge 2}\sup_{x_{2:N-1}}N^{\alpha\rho}{\cal W}^\rho_\rho(\mu_N(x_{2:N-1}),\mu)\le 2^\rho C^{1-\alpha\rho}+\frac{1-\alpha\rho}{\alpha\rho}C+1+2^{\rho-1}+2^{\rho+\alpha\rho-2}\left|F^{-1}\left(1/2\right)\right|^\rho. \end{align*} \item According to Theorem 7.16 \cite{bobkovledoux}, for $(X_i)_{i\ge 1}$ i.i.d. according to $\mu$, $$\sup_{N\ge 1}N^{\frac{1}{2\rho}}{\mathbb E}^{1/\rho}\left[{\cal W}_\rho^\rho\left(\frac{1}{N}\sum_{i=1}^N\delta_{X_i},\mu\right)\right]\le \left(\rho 2^{\rho-1}\int_{\mathbb R} |x|^{\rho-1}\sqrt{F(x)(1-F(x))}\,dx\right)^{1/\rho}$$ with $\exists \;\varepsilon>0,\;\int_{\mathbb R}|x|^{2\rho+\varepsilon}\mu(dx)<+\infty\Rightarrow \int_{\mathbb R} |x|^{\rho-1}\sqrt{F(x)(1-F(x))}\,dx<+\infty\Rightarrow \int_{\mathbb R}|x|^{2\rho}\mu(dx)<+\infty$ by the discussion just after the theorem. 
The condition ${\sup_{x\ge 0}}\;x^{2\rho}\Big(F(-x)+1-F(x)\Big)<+\infty$ equivalent to $\sup_{N \ge 1} N^{\frac{1}{2\rho}} \, e_N(\mu,\rho)<+\infty$ is slightly weaker, according to Lemma \ref{lemququcdf} just below. Moreover, we address similarly any order of convergence $\alpha$ with $\alpha\in \left(0,\frac{1}{\rho}\right)$ for $e_N(\mu,\rho)$, while the order $\frac{1}{2\rho}$ seems to play a special role for ${\mathbb E}^{1/\rho}\left[{\cal W}_\rho^\rho\left(\frac{1}{N}\sum\limits_{i=1}^N\delta_{X_i},\mu\right)\right]$ in the random case. When $\rho=1$, the order of convergence $\alpha$ for $\alpha\in(0,1/2)$ is addressed in the random case in Theorem 2.2 \cite{barrGinMat} where the finiteness of ${\sup_{x\ge 0}}\,x^{\frac{1}{1-\alpha}}\Big(F(-x)+1-F(x)\Big)$ is stated to be equivalent to the stochastic boundedness of the sequence $\left(N^\alpha{\cal W}_1\left(\frac{1}{N}\sum\limits_{i=1}^N\delta_{X_i},\mu\right)\right)_{N\ge 1}$. When $\alpha=1/2$, the stochastic boundedness property is, according to Theorem 2.1 (b) \cite{barrGinMat}, equivalent to $\int_{\mathbb R}\sqrt{F(x)(1-F(x))}\,dx<+\infty$. \end{itemize} \end{remark} The proof of Theorem \ref{alphaRater} relies on the two next lemmas. \begin{lem}\label{lemququcdf} For $\beta>0$, we have \begin{align*} \int_{{\mathbb R}}|y|^{\beta}\mu(dy)<+\infty &\implies \lim_{x\to+\infty}x^{\beta}\Big(F(-x)+1-F(x)\Big)=0 \\&\implies {\sup_{x\ge 0}}\;x^{\beta}\Big(F(-x)+1-F(x)\Big)<+\infty \implies \forall \varepsilon\in \left(0,\beta\right],\;\int_{{\mathbb R}}|y|^{\beta-\varepsilon}\mu(dy)<+\infty \end{align*} and ${\sup_{x\ge 0}}\;x^{\beta}\Big(F(-x)+1-F(x)\Big)<+\infty \Leftrightarrow \sup_{u\in(0,1/2]}u^{\frac{1}{\beta}}\left(F^{-1}(1-u)-F^{-1}(u)\right)<+\infty$ with \begin{equation} \sup_{u\in(0,1/2]}u^{\frac{1}{\beta}}\left(F^{-1}(1-u)-F^{-1}(u)\right) \le \left(\sup_{x\ge 0}\,x^{\beta}F(-x)\right)^{\frac{1}{\beta}}+\left(\sup_{x\ge 0}\,x^{\beta}(1-F(x))\right)^{\frac{1}{\beta}}.\label{majodifquantb} \end{equation} Last, $\lim_{x\to +\infty}x^{\beta}\Big(F(-x)+1-F(x)\Big)=0\Leftrightarrow \lim_{u\to 0+}u^{\frac{1}{\beta}}\left(F^{-1}(1-u)-F^{-1}(u)\right)=0$. \end{lem} \begin{proof} Let $\beta>0$. For $x>0$, using the monotonicity of $F$ for the first inequality then that for $y\in[\frac x 2,x]$, $y^{\beta-1}\ge \left(\frac{x}{2}\right)^{\beta-1}\wedge x^{\beta-1}=\frac{x^{\beta-1}}{2^{(\beta-1)\vee 0}}$, we obtain that \begin{align*} F(-x)+1-F(x) \le \frac{2}{x}\int_{x/2}^{x}\Big(F(-y)+1-F(y)\Big)\,dy &\le \frac{2^{\beta\vee 1}}{x^\beta}\int_{x/2}^{+\infty}y^{\beta-1}\Big(F(-y)+1-F(y)\Big)\,dy.\end{align*} Since $\int_0^{+\infty}y^{\beta-1}\Big(F(-y)+1-F(y)\Big)\,dy=\frac 1\beta\int_{{\mathbb R}}|y|^{\beta}\mu(dy)$, the finiteness of $\int_{{\mathbb R}}|y|^{\beta}\mu(dy)$ implies by Lebesgue's theorem that $\lim_{x\to\infty} x^\beta\left(F(-x)+1-F(x)\right)=0$. Since $x\mapsto x^\beta\left(F(-x)+1-F(x)\right)$ is right-continuous with left-hand limits on $[0,+\infty)$, $${\sup_{x\ge 0}}\;x^{\beta}\Big(F(-x)+1-F(x)\Big)<+\infty \Leftrightarrow \limsup_{x\to\infty}x^{\beta}\Big(F(-x)+1-F(x)\Big)<+\infty,$$ with the latter property clearly implied by $\lim_{x\to\infty} x^\beta\left(F(-x)+1-F(x)\right)=0$. 
For $\varepsilon\in (0,\beta)$, using that for $y\ge 0$, $F(-y)+1-F(y)=\mu((-\infty,-y]\cup (y,+\infty))\le 1$, we obtain \begin{align*} \int_{\mathbb R}|x|^{\beta-\varepsilon}\mu(dx)&=(\beta-\varepsilon)\int_0^{+\infty}y^{\beta-\varepsilon-1}(F(-y)+1-F(y))\,dy\\ &\le (\beta-\varepsilon)\int_0^1y^{\beta-\varepsilon-1}dy+(\beta-\varepsilon)\sup\limits_{x \ge 0}x^{\beta}\Big(F(-x)+1-F(x)\Big)\int_1^{+\infty}y^{-\varepsilon-1}\,dy\\ &=1+\frac{\beta-\varepsilon}{\varepsilon}\sup\limits_{x \ge 0}x^{\beta}\Big(F(-x)+1-F(x)\Big). \end{align*} Therefore $\displaystyle \sup\limits_{x \ge 0}x^{\beta}\Big(F(-x)+1-F(x)\Big)<+\infty\implies \forall \varepsilon\in (0,\beta),\;\int_{\mathbb R}|x|^{\beta-\varepsilon}\mu(dx)<+\infty$.\\ Let us next check that \begin{equation} \sup_{x\ge 0}\,x^{\beta}\Big(F(-x)+(1-F(x))\Big)<+\infty\Leftrightarrow \sup_{u\in(0,1/2]}u^{\frac{1}{\beta}}(F^{-1}(1-u)-F^{-1}(u))<+\infty\label{equivtailquant}. \end{equation} For the necessary condition, we set $u\in(0,1/2]$. Either $F^{-1}(u)\ge 0$ or, since for all $v\in (0,1)$, $F(F^{-1}(v))\ge v$, we have $\left(-F^{-1}(u)\right)^\beta u\le \sup_{x\ge -F^{-1}(u)}\,x^{\beta}F(-x)$ and therefore $F^{-1}(u)\ge -\left(\sup_{x\ge 0}\,x^{\beta}F(-x)\right)^{\frac{1}{\beta}}u^{-\frac{1}{\beta}}$. Either $F^{-1}(1-u)\le 0$ or, since for all $v\in (0,1)$, $F(F^{-1}(v)-)\le v$, we have $(F^{-1}(1-u))^\beta u\le \sup_{x\ge F^{-1}(1-u)}\,x^{\beta}(1-F(x-))$ and therefore $F^{-1}(1-u)\le \left(\sup_{x\ge 0}\,x^{\beta}(1-F(x))\right)^{\frac{1}{\beta}}u^{-\frac{1}{\beta}}$. Hence \eqref{majodifquantb} holds. For the sufficient condition, we remark that the finiteness of $\sup_{u\in(0,1/2]}u^{\frac{1}{\beta}}(F^{-1}(1-u)-F^{-1}(u))$ and the inequality $F^{-1}(1-u)-F^{-1}(u)\ge \left(F^{-1}(1/2)-F^{-1}(u)\right)\vee\left(F^{-1}(1-u)-F^{-1}(1/2)\right) $ valid for $u\in(0,1/2]$ imply that $\inf_{u \in (0,1/2]}u^{\frac{1}{\beta}}F^{-1}(u) >-\infty$ and $\sup_{u \in (0,1/2]}u^{\frac{1}{\beta}}F^{-1}(1-u)<+\infty$. With the inequality $ x\ge F^{-1}(F(x))$ valid for $x\in{\mathbb R}$ such that $0<F(x)<1$, this implies that $\inf_{x\in{\mathbb R}:F(x)\le 1/2}\left(F(x)\right)^{\frac{1}{\beta}}x> -\infty$ and therefore that $\sup_{x\ge 0}x^{{\beta}}F(-x)<+\infty$. With the inequality $ x\le F^{-1}(F(x)+)$ valid for $x\in{\mathbb R}$ such that $0<F(x)<1$, we obtain, in a symmetric way $\sup_{x\ge 0}x^{{\beta}}(1-F(x))<+\infty$. Let us finally check that $\lim_{x\to +\infty}x^{\beta}\Big(F(-x)+1-F(x)\Big)=0\Leftrightarrow \lim_{u\to 0+}u^{\frac{1}{\beta}}\left(F^{-1}(1-u)-F^{-1}(u)\right)=0$. For the necessary condition, we remark that either $F^{-1}(1)<+\infty$ and $\lim_{u\to 0+}u^{\frac 1\beta}F^{-1}(1-u)=0$ or $F^{-1}(1-u)$ goes to $+\infty$ as $u\to 0+$. For $u$ small enough so that $F^{-1}(1-u)>0$, we have $(F^{-1}(1-u))^\beta u\le \sup_{x\ge F^{-1}(1-u)}\,x^{\beta}(1-F(x-))$, from which we deduce that $\lim_{u\to 0+}u(F^{-1}(1-u))^\beta =0$. The fact that $\lim_{u\to 0+}u^{\frac 1\beta}F^{-1}(u)=0$ is deduced by a symmetric reasoning. For the sufficient condition, we use that $x(1-F(x))^{\frac{1}{\beta}}\le \sup_{u\le 1-F(x)}u^{\frac{1}{\beta}}F^{-1}((1-u)+)$ and $x\left(F(x)\right)^{\frac{1}{\beta}}\ge \inf_{u\le F(x)}u^{\frac{1}{\beta}}F^{-1}(u)$ for $x\in{\mathbb R}$ such that $0<F(x)<1$. \end{proof}
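To illustrate Lemma \ref{lemququcdf} and the bound \eqref{majodifquantb}, one may consider the following assumed symmetric Pareto-type distribution, for which both sides of \eqref{majodifquantb} can be computed in closed form and coincide. The Python sketch below simply checks this on a grid; it is not used anywhere in the proofs, and the tail exponent is an arbitrary assumed value.
\begin{verbatim}
import numpy as np

beta = 3.0   # assumed tail exponent
# Symmetric Pareto-type tails: F(-x) = 1 - F(x) = 0.5 * x**(-beta) for x >= 1,
# no mass on (-1, 1); quantile: Finv(u) = -(2u)**(-1/beta) for u <= 1/2 and
# Finv(u) = (2(1-u))**(-1/beta) for u > 1/2.
tail = lambda x: 0.5 * x ** (-beta)
Finv = lambda u: np.where(u <= 0.5, -(2 * u) ** (-1 / beta),
                          (2 * (1 - u)) ** (-1 / beta))

x = np.linspace(1.0, 1e4, 100000)
u = np.linspace(1e-6, 0.5, 100000)
lhs = np.max(u ** (1 / beta) * (Finv(1 - u) - Finv(u)))
rhs = 2 * np.max(x ** beta * tail(x)) ** (1 / beta)   # right-hand side of (majodifquantb)
print(lhs, rhs)   # both equal 2**(1 - 1/beta) for this distribution
\end{verbatim}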
\begin{lem}\label{lemcontxn} Let $\rho\ge 1$ and $\alpha\in \left(0,\frac{1}{\rho}\right)$.There is a finite constant $C$ only depending on $\rho$ and $\alpha$ such that the two extremal points in the optimal sequence $(x_i^N)_{1\le i\le N}$ for $e_N(\mu,\rho)$ satisfy $$\forall N\ge 1,\;x_1^N\ge CN^{\frac{1}{\rho}-\alpha}\inf_{u\in (0,\frac 1 N)}u^{\frac{1}{\rho}-\alpha}F^{-1}(u)\mbox{ and }x_N^N\le CN^{\frac{1}{\rho}-\alpha}\sup_{u\in (0,\frac 1 N)}u^{\frac{1}{\rho}-\alpha}F^{-1}(1-u).$$ If $\sup_{u\in(0,1/2]}u^{\frac{1}{\rho}-\alpha}\left(F^{-1}(1-u)-F^{-1}(u)\right)<+\infty$, then $\sup_{N\ge 1}N^{\alpha-\frac{1}{\rho}}\left(x_N^N\vee \left(-x_1^N\right)\right)<+\infty$. \end{lem} \begin{proof} Since the finiteness of $\sup_{u\in(0,1/2]}u^{\frac{1}{\rho}-\alpha}\left(F^{-1}(1-u)-F^{-1}(u)\right)$ implies the finiteness of both $\sup_{u\in (0,1)}u^{\frac{1}{\rho}-\alpha}F^{-1}(1-u)$ and $\inf_{u\in (0,1)}u^{\frac{1}{\rho}-\alpha}F^{-1}(u)$, the second statement is a consequence of the first one, that we are now going to prove. When $\rho=1$ (resp. $\rho=2$), then the conclusion easily follows from the explicit form $x_1^N=F^{-1}\left(\frac{1}{2N}\right)$ and $x_N^N=F^{-1}\left(\frac{2N-1}{2N}\right)$ (resp. $x_1^N=N\int_0^{\frac{1}{N}}F^{-1}(u)\,du$ and $x_N^N=N\int^1_{\frac{N-1}{N}}F^{-1}(u)\,du$). In the general case $\rho>1$, we are going to take advantage of the expression $$f(y)=\rho\int_0^{\frac{1}{N}}\left({\mathbf 1}_{\{y\ge F^{-1}(1-u)\}}\left(y-F^{-1}(1-u)\right)^{\rho-1}-{\mathbf 1}_{\{y< F^{-1}(1-u)\}}\left(F^{-1}(1-u)-y\right)^{\rho-1}\right)\,du$$ of the derivative of the function $\displaystyle {\mathbb R}\ni y\mapsto \int_{\frac{N-1}{N}}^{1}\left|y-F^{-1}(u)\right|^\rho\,du$ minimized by $x_N^N$. Since this function is strictly convex $x_N^N=\inf\{y\in{\mathbb R}: f(y)\ge 0\}$. Let us first suppose that $S_N:=\sup_{u\in (0,\frac 1 N)}u^{\frac{1}{\rho}-\alpha}F^{-1}(1-u)\in (0,+\infty)$. Since for fixed $y\in{\mathbb R}$, ${\mathbb R}\ni x\mapsto \left({\mathbf 1}_{\{y\ge x\}}(y-x)^{\rho-1}-{\mathbf 1}_{\{y< x\}}(x-y)^{\rho-1}\right)$ is non-increasing, we deduce that $\forall y\in{\mathbb R}$, $f(y)\ge \rho S_N^{\rho-1}g(\frac{y}{S_N})$ where $$g(z)=\int_0^{\frac{1}{N}}\left({\mathbf 1}_{\left\{z\ge u^{\alpha-\frac{1}{\rho}}\right\}}\left(z-u^{\alpha-\frac{1}{\rho}}\right)^{\rho-1}-{\mathbf 1}_{\left\{z< u^{\alpha-\frac{1}{\rho}}\right\}}\left(u^{\alpha-\frac{1}{\rho}}-z\right)^{\rho-1}\right)\,du.$$ For $z\ge (4N)^{\frac{1}{\rho}-\alpha}$, we have $z^{\frac{\rho}{\alpha\rho-1}}\le\frac{1}{4N}$ and $z-(2N)^{\frac{1}{\rho}-\alpha}\ge \left(1-2^{\alpha-\frac 1\rho}\right)z$ so that \begin{align*} g(z)&=\int_{z^{\frac{\rho}{\alpha\rho-1}}}^{\frac{1}{N}}\left(z-u^{\alpha-\frac{1}{\rho}}\right)^{\rho-1}\,du-\int_0^{z^{\frac{\rho}{\alpha\rho-1}}}\left(u^{\alpha-\frac{1}{\rho}}-z\right)^{\rho-1}\,du \ge \int_{\frac{1}{2N}}^{\frac{1}{N}}\left(z-(2N)^{\frac{1}{\rho}-\alpha}\right)^{\rho-1}\,du-\int_0^{z^{\frac{\rho}{\alpha\rho-1}}}u^{(\rho-1)\frac{\alpha\rho-1}{\rho}}\,du\\ &\ge \left(1-2^{\alpha-\frac 1\rho}\right)^{\rho-1}z^{\rho-1}\int_{\frac{1}{2N}}^{\frac{1}{N}}\,du-\frac{\rho z^{\frac{\rho}{\alpha\rho-1}+\rho-1}}{1+(\rho-1)\alpha\rho} = z^{\rho-1}\left(\frac{\left(1-2^{\alpha-\frac 1\rho}\right)^{\rho-1}}{2N}-\frac{\rho z^{\frac{\rho}{\alpha\rho-1}}}{1+(\rho-1)\alpha\rho} \right). 
\end{align*} The right-hand side is positive for $z>(\kappa N)^{\frac{1}{\rho}-\alpha}$ with $\kappa:=\frac{2\rho}{\left(1-2^{\alpha-\frac 1\rho}\right)^{\rho-1}\left(1+(\rho-1)\alpha\rho\right)}$. Hence for $z>\left(\left(\kappa\vee 4\right)N\right)^{\frac{1}{\rho}-\alpha}$, $g(z)>0$ so that for $y>\left(\left(\kappa\vee 4\right)N\right)^{\frac{1}{\rho}-\alpha}S_N$, $f(y)>0$ and therefore $$x_N^N \le \left(\left(\kappa\vee 4\right)N\right)^{\frac{1}{\rho}-\alpha}S_N .$$ Clearly, this inequality remains valid when $S_N=+\infty$. It holds in full generality since $S_N\ge 0$ and $S_N=0\Leftrightarrow F^{-1}(1)\le 0$ a condition under which $x_N^N\le 0$ since $x_N^N\in\left[F^{-1}\left(\frac{N-1}{N}+\right),F^{-1}\left(1\right)\right]\cap{\mathbb R}$. By a symmetric reasoning, we check that $x_1^N\ge \left(\left(\kappa\vee 4\right)N\right)^{\frac{1}{\rho}-\alpha}\inf_{u\in (0,\frac 1 N)}u^{\frac{1}{\rho}-\alpha}F^{-1}(u)$. \end{proof}
\begin{proof}[Proof of Theorem \ref{alphaRater}] Since, by Lemma \ref{lemququcdf}, $$\sup_{u\in(0,1/2]}u^{\frac{1}{\rho}-\alpha}\left(F^{-1}(1-u)-F^{-1}(u)\right)<+\infty\implies {\sup_{x\ge 0}}\;x^{\frac{\rho}{1-\alpha \rho}}\Big(F(-x)+1-F(x)\Big)<+\infty$$ and, by \eqref{enrho}, $$e_N^\rho(\mu,\rho)\ge \int_0^{\frac 1 N} \left|F^{-1}(u)-x^N_1\right|^{\rho}\,du+\int^1_{\frac{N-1}{N}} \left|F^{-1}(u)-x^N_N\right|^{\rho}\,du$$ to prove the equivalence, it is enough to check that \begin{align*} &{\sup_{x\ge 0}}\;x^{\frac{\rho}{1-\alpha \rho}}\Big(F(-x)+1-F(x)\Big)<+\infty \implies \displaystyle{\sup_{N \ge 1}} N^{\alpha\rho} \, e_N^\rho(\mu,\rho)<+\infty\mbox{ and that }\\ &\sup_{N\ge 1}N^{\alpha\rho}\left(\int_0^{\frac 1 N} \left|F^{-1}(u)-x^N_1\right|^{\rho}\,du+\int^1_{\frac{N-1}{N}} \left|F^{-1}(u)-x^N_N\right|^{\rho}\,du\right)<+\infty\\&\phantom{\sup_{N\ge 1}N^{\alpha\rho}\int_0^{\frac 1 N} \left|F^{-1}(u)-x^N_1\right|^{\rho}\,du} \implies\sup_{u\in(0,1/2]}u^{\frac{1}{\rho}-\alpha}\left(F^{-1}(1-u)-F^{-1}(u)\right)<+\infty. \end{align*} We are now going to do so and thus prove that the four suprema in the last two implications are simultaneously finite or infinite. Let us first suppose that $C:={\sup_{x\ge 0}}\;x^{\frac{\rho}{1-\alpha \rho}}\Big(F(-x)+1-F(x)\Big)<+\infty$ and set $N\ge 2$. Let $x_i\in[F^{-1}(\frac{i-1}{N}+),F^{-1}(\frac{i}{N})]$ for $2\le i\le N-1$. We have $$e^\rho_N(\mu,\rho)\le {\cal W}^\rho_\rho\left(\mu_N(x_{2:N-1}),\mu\right)= L_N+M_N+U_N$$ with $L_N=\int_{0}^{\frac{1}{N}}\left|F^{-1}(u)-F^{-1}\left(\frac{1}{N}\right)\wedge (-N^{\frac{1}{\rho}-\alpha})\right|^\rho\,du$, $U_N=\int_{\frac{N-1}{N}}^{1}\left|F^{-1}(u)-F^{-1}\left(\frac{N-1}{N}\right)\vee N^{\frac{1}{\rho}-\alpha}\right|^\rho\,du$ and \begin{align} M_N&=\sum_{i=2}^{N-1}\int_{\frac{i-1}{N}}^{\frac{i}{N}}\left|F^{-1}(u)-x_i\right|^\rho\,du \le \sum_{i=2}^{N-1}\int_{\frac{i-1}{N}}^{\frac{i}{N}}\left(F^{-1}\left(\frac{i}{N}\right)-F^{-1}\left(\frac{i-1}{N}\right)\right)^\rho\,du\notag\\&\le \frac{1}{N}\sum_{i=2}^{N-1}\left(F^{-1}\left(\frac{N-1}{N}\right)-F^{-1}\left(\frac{1}{N}\right)\right)^{\rho-1}\left(F^{-1}\left(\frac{i}{N}\right)-F^{-1}\left(\frac{i-1}{N}\right)\right)\notag\\&=\frac{1}{N}\left(F^{-1}\left(\frac{N-1}{N}\right)-F^{-1}\left(\frac{1}{N}\right)\right)^{\rho}\label{majomn}\\&\le 2^\rho C^{1-\alpha\rho}N^{-\alpha\rho},\notag\end{align} where we used \eqref{majodifquantb} applied with $\beta=\frac{\rho}{1-\alpha \rho}$ for the last inequality. Let $x_+=0\vee x$ denote the positive part of any real number $x$. Applying Lemma \ref{lemenf} with $x=F^{-1}\left(\frac{1}{N}\right)\wedge \left(-N^{\frac{1}{\rho}-\alpha}\right)$, we obtain that \begin{align*} L_N &= \rho\int_{-\infty}^{F^{-1}\left(\frac{1}{N}\right)\wedge \left(-N^{\frac{1}{\rho}-\alpha}\right)}\left(F^{-1}\left(\frac{1}{N}\right)\wedge \left(-N^{\frac{1}{\rho}-\alpha}\right)-y\right)^{\rho-1}F(y)\,dy\\&+\rho\int^{F^{-1}\left(\frac{1}{N}\right)}_{F^{-1}\left(\frac{1}{N}\right)\wedge \left(-N^{\frac{1}{\rho}-\alpha}\right)}\left(y-F^{-1}\left(\frac{1}{N}\right)\wedge \left(-N^{\frac{1}{\rho}-\alpha}\right)\right)^{\rho-1}\left(\frac{1}{N}-F(y)\right)\,dy \\&\le \rho\int^{+\infty}_{N^{\frac{1}{\rho}-\alpha}}y^{\rho-1}F(-y)\,dy+\frac{1}{N}\left(N^{\frac{1}{\rho}-\alpha}+F^{-1}\left(\frac{1}{N}\right)\right)_+^{\rho}. 
\end{align*} In a symmetric way, we check that $\displaystyle U_N\le \rho\int^{+\infty}_{N^{\frac{1}{\rho}-\alpha}}y^{\rho-1}(1-F(y))\,dy+\frac{1}{N}\left(N^{\frac{1}{\rho}-\alpha}-F^{-1}\left(\frac{N-1}{N}\right)\right)_+^{\rho}$ so that \begin{align*} L_N+U_N &\le \rho C\int^{+\infty}_{N^{\frac{1}{\rho}-\alpha}}y^{-1-\frac{\alpha\rho^2}{1-\alpha\rho}}\,dy+\frac{1}{N}\left(\left(N^{\frac{1}{\rho}-\alpha}+F^{-1}\left(1/2\right)\right)_+^{\rho}+\left(N^{\frac{1}{\rho}-\alpha}-F^{-1}\left(1/2\right)\right)_+^{\rho}\right)\\ &\le \frac{1-\alpha\rho}{\alpha\rho} CN^{-\alpha\rho}+\left(1+2^{\rho-1}\right)N^{-\alpha\rho}+2^{\rho-1}\left|F^{-1}\left(1/2\right)\right|^\rho N^{-1}. \end{align*} Since $N^{-1}\le 2^{\alpha\rho-1}N^{-\alpha\rho}$, we conclude that with $\sup_{x_{2:N-1}}$ denoting the supremum over $x_i\in[F^{-1}(\frac{i-1}{N}+),F^{-1}(\frac{i}{N})]$ for $2\le i\le N-1$, \begin{align*} \sup_{N\ge 2}N^{\alpha\rho}e^\rho_N(\mu,\rho)\le \sup_{N\ge 2}\sup_{x_{2:N-1}}N^{\alpha\rho}{\cal W}_\rho^\rho(\mu_N(x_{2:N-1}),\mu)\le 2^\rho C^{1-\alpha\rho}+\frac{1-\alpha\rho}{\alpha\rho}C+1+2^{\rho-1}+2^{\rho+\alpha\rho-2}\left|F^{-1}\left(1/2\right)\right|^\rho. \end{align*} We may replace $\sup_{N\ge 2}N^{\alpha\rho}e^\rho_N(\mu,\rho)$ by $\sup_{N\ge 1}N^{\alpha\rho}e^\rho_N(\mu,\rho)$ in the left-hand side, since, applying Lemma \ref{lemenf} with $x=0$, then using that for $y\ge 0$, $F(-y)+1-F(y)=\mu((-\infty,-y]\cup (y,+\infty))\le 1$, we obtain that \begin{align*} e^\rho_1(\mu,\rho) &\le \rho\int_0^{+\infty}y^{\rho-1}\Big(F(-y)+1-F(y)\Big)\,dy \le \rho\int_0^1y^{\rho-1}\,dy+\rho C\int^{+\infty}_{1}y^{-1-\frac{\alpha\rho^2}{1-\alpha\rho}}\,dy=1+\frac{1-\alpha\rho}{\alpha\rho}C. \end{align*} Let us next suppose that $\displaystyle \sup_{N\ge 1}N^{\alpha\rho}\left(\int_0^{\frac 1 N} \left|F^{-1}(u)-x^N_1\right|^{\rho}\,du+\int^1_{\frac{N-1}{N}} \left|F^{-1}(u)-x^N_N\right|^{\rho}\,du\right)<+\infty$. As in the proof of Proposition \ref{propals1rcomp}, we deduce \eqref{minoF-1u0}. With the monotonicity of $F^{-1}$, this inequality implies that $$\exists C<+\infty,\;\forall u \in (0,1/2], \quad F^{-1}(u) \ge F^{-1}(1/2) - \frac{C}{1 - 2^{\alpha-\frac{1}{\rho}}}\left(u^{\alpha-\frac{1}{\rho}}-1\right), $$ and therefore that $\inf_{u \in (0,1/2]}\left(u^{\frac{1}{\rho}-\alpha}F^{-1}(u)\right) >-\infty$. With a symmetric reasoning, we conclude that $$\sup_{u\in(0,1/2]}u^{\frac{1}{\rho}-\alpha}\Big(F^{-1}(1-u)-F^{-1}(u)\Big)<+\infty.$$
Let us now assume that $\limsup_{x\to+\infty}x^{\frac{\rho}{1-\alpha\rho}}\Big(F(-x)+1-F(x)\Big)\in(0,+\infty)$, which, in particular implies that $\sup_{x\ge 0}x^{\frac{\rho}{1-\alpha\rho}}\Big(F(-x)+1-F(x)\Big)<+\infty$ and check that $\limsup_{N\to\infty}N^\alpha e_N(\mu,\rho)>0$. For $x>0$, we have, on the one hand \begin{align*} x^{\frac{\alpha \rho^2}{1-\alpha\rho}}\int_x^{+\infty}y^{\rho-1}\Big(F(-y)+1-F(y)\Big)\,dy &\ge x^{\frac{\alpha \rho^2}{1-\alpha\rho}}\int_x^{2x}x^{\rho-1}\Big(F(-2x)+1-F(2x)\Big)\,dy\\ &=x^{\frac{\rho}{1-\alpha\rho}}\Big(F(-2x)+1-F(2x)\Big). \end{align*} On the other hand, still for $x>0$, \begin{align} x^{\frac{\alpha \rho^2}{1-\alpha\rho}}\int_x^{+\infty}y^{\rho-1}\Big(F(-y)+1-F(y)\Big)\,dy &\le x^{\frac{\alpha \rho^2}{1-\alpha\rho}}\sup_{y\ge x}y^{\frac{\rho}{1-\alpha\rho}}\Big(F(-y)+1-F(y)\Big)\int_x^{+\infty}y^{-\frac{\alpha \rho^2}{1-\alpha\rho}-1}\,dy\notag\\ &=\frac{1-\alpha\rho}{\alpha \rho^2}\sup_{y\ge x}y^{\frac{\rho}{1-\alpha\rho}}\Big(F(-y)+1-F(y)\Big)\label{jamqu} \end{align} Therefore $\displaystyle \limsup_{x\to+\infty}x^{\frac{\alpha \rho^2}{1-\alpha\rho}}\int_x^{+\infty}y^{\rho-1}\Big(F(-y)+1-F(y)\Big)\,dy\in (0,+\infty)$ and, by monotonicity of the integral, \begin{equation} \limsup_{N\to+\infty} y_N^{\frac{\alpha \rho^2}{1-\alpha\rho}}\int_{y_N}^{+\infty}y^{\rho-1}\Big(F(-y)+1-F(y)\Big)\,dy\in (0,+\infty)\label{limsupsousuite} \end{equation} along any sequence $(y_N)_{N\in{\mathbb N}}$ of positive numbers increasing to $+\infty$ and such that $\limsup_{N\to+\infty}\frac{y_{N+1}}{y_N}<+\infty$. By Lemmas \ref{lemququcdf} and \ref{lemcontxn}, we have $\kappa:=\sup_{N\ge 1}N^{\alpha-\frac 1\rho}\left(x_N^N\vee\left(-x_1^N\right)\right)<+\infty$ (notice that since $x_1^N\le x_N^N$, $\kappa\ge 0$). With \eqref{enf}, we deduce that: \begin{align*} \frac{e_N^\rho(\mu,\rho)}{\rho} &\ge \int^{x_1^N}_{-\infty}\left(x_1^N-y\right)^{\rho-1}F(y)\,dy+\int_{x_N^N}^{+\infty}\left(y-x_N^N\right)^{\rho-1}(1-F(y))\,dy \\&\ge \int^{-\kappa N^{\frac 1\rho-\alpha}}_{-\infty}\left(-\kappa N^{\frac 1\rho-\alpha}-y\right)^{\rho-1}F(y)\,dy+\int_{\kappa N^{\frac 1\rho-\alpha}}^{+\infty}\left(y-\kappa N^{\frac 1\rho-\alpha}\right)^{\rho-1}(1-F(y))\,dy\\ &\ge 2^{1-\rho}\int_{2\kappa N^{\frac 1\rho-\alpha}}^{+\infty} y^{\rho-1}\Big(F(-y)+1-F(y)\Big)\,dy. \end{align*} Applying \eqref{limsupsousuite} with $y_N=2\kappa N^{\frac 1\rho-\alpha}$, we conclude that $\limsup\limits_{N\to+\infty}\,N^{\alpha\rho}e_N^\rho(\mu,\rho)>0$. If $x^{\frac{\rho}{1-\alpha \rho}}\Big(F(-x)+1-F(x)\Big)$ does not go to $0$ as $x\to+\infty$ then either ${\sup_{x\ge 0}}\,x^{\frac{\rho}{1-\alpha \rho}}\Big(F(-x)+1-F(x)\Big)=+\infty={\sup_{N \ge 1}} N^{\alpha} \,e_N(\mu,\rho)$ or $\limsup_{x\to+\infty}x^{\frac{\rho}{1-\alpha \rho}}\Big(F(-x)+1-F(x)\Big)\in(0,+\infty)$ and $\limsup_{N\to+\infty}N^{\alpha}e_N(\mu,\rho)\in(0,+\infty)$ so that, synthesizing the two cases, $N^{\alpha}e_N(\mu,\rho)$ does not go to $0$ as $N\to+\infty$. Therefore, to conclude the proof of the second statement, it is enough to suppose $\lim_{x\to+\infty}x^{\frac{\rho}{1-\alpha \rho}}\Big(F(-x)+1-F(x)\Big)=0$ and deduce $\lim_{N\to +\infty} N^\alpha e_N(\mu,\rho)=0$, which we now do. By Lemma \ref{lemququcdf}, $\lim_{N\to\infty} N^{\alpha-\frac{1}{\rho}}\left(F^{-1}\left(\frac{N-1}{N}\right)-F^{-1}\left(\frac{1}{N}\right)\right)=0$. 
Since, reasoning as in the above derivation of \eqref{majomn}, we have $\sum_{i=2}^{N-1}\int_{\frac{i-1}{N}}^{\frac{i}{N}}|F^{-1}(u)-x_i^N|^\rho du\le \frac{1}{N}\left(F^{-1}\left(\frac{N-1}{N}\right)-F^{-1}\left(\frac{1}{N}\right)\right)^{\rho}$ for $N\ge 3$, we deduce that $\lim_{N\to\infty} N^{\alpha\rho}\sum_{i=2}^{N-1}\int_{\frac{i-1}{N}}^{\frac{i}{N}}|F^{-1}(u)-x_i^N|^\rho du=0$. Let $$S_N=\sup_{x\ge N^{\frac{1}{2\rho}-\frac \alpha 2}}\left(x^{\frac{\rho}{1-\alpha \rho}}\Big(F(-x)+1-F(x)\Big)\right)^{\frac{1-\alpha\rho}{2\alpha\rho^2}}\mbox{ and }y_N=F^{-1}\left(\frac{N-1}{N}\right)\vee N^{\frac{1}{2\rho}-\frac \alpha 2}\vee (S_NN^{\frac{1}{\rho}-\alpha}).$$ Using Lemma \ref{lemenf} for the first inequality, \eqref{jamqu} for the second one, then the definition of $y_N$ for the third, we obtain that \begin{align*} N^{\alpha\rho}&\int_{\frac{N-1}{N}}^1|F^{-1}(u)-x_N^N|^\rho du\\&\le \rho N^{\alpha\rho}\int_{F^{-1}\left(\frac{N-1}{N}\right)}^{y_N}(y_N-y)^{\rho-1}\left(F(y)-\frac{N-1}N\right)dy+\rho N^{\alpha\rho} \int_{y_N}^{+\infty}(y-y_N)^{\rho-1}(1-F(y))dy\\ &\le N^{\alpha\rho-1}\int_{F^{-1}\left(\frac{N-1}{N}\right)}^{y_N}\rho(y_N-y)^{\rho-1}dy+\frac{1-\alpha\rho}{\alpha\rho^2}N^{\alpha\rho}y_N^{-\frac{\alpha\rho^2}{1-\alpha\rho}}\sup_{y\ge y_N}y^{\frac{\rho}{1-\alpha \rho}}\Big(F(-y)+1-F(y)\Big)\\ &\le N^{\alpha\rho-1}\left(N^{\frac{1}{2\rho}-\frac \alpha 2}\vee (S_NN^{\frac{1}{\rho}-\alpha})-F^{-1}\left(\frac{N-1}{N}\right)\right)_+^\rho\\&+\frac{1-\alpha\rho}{\alpha\rho^2}N^{\alpha\rho}\left(S_NN^{\frac{1}{\rho}-\alpha}\right)^{-\frac{\alpha\rho^2}{1-\alpha\rho}}\sup_{y\ge N^{\frac{1}{2\rho}-\frac \alpha 2}} y^{\frac{\rho}{1-\alpha \rho}}\Big(F(-y)+1-F(y)\Big)\\ &\le \left(N^{\frac \alpha 2-\frac{1}{2\rho}}\vee S_N-N^{\alpha-\frac 1\rho}F^{-1}\left(\frac{N-1}{N}\right)\right)^\rho_++\frac{1-\alpha\rho}{\alpha\rho^2}S_N^{\frac{\alpha\rho^2}{1-\alpha\rho}}.\end{align*} Since $\lim_{N\to\infty}N^{\alpha-\frac 1\rho}F^{-1}\left(\frac{N-1}{N}\right)= 0=\lim_{N\to\infty}N^{\frac \alpha 2-\frac{1}{2\rho}}= \lim_{N\to+\infty} S_N$, we deduce that $\lim_{N\to\infty}N^{\alpha\rho}\int_{\frac{N-1}{N}}^1|F^{-1}(u)-x_N^N|^\rho du=0$. Dealing in a symmetric way with $N^{\alpha\rho}\int^{\frac{1}{N}}_0|F^{-1}(u)-x_1^N|^\rho du$, we conclude that $\lim_{N\to\infty}N^{\alpha\rho}e_N^\rho(\mu,\rho)=0$. \end{proof}
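The choice of points entering $\mu_N(x_{2:N-1})$ in Theorem \ref{alphaRater} is explicit enough to be implemented directly. The following Python sketch is illustrative only: \texttt{Finv} is an assumed vectorized quantile function, and the inner points are taken as mid-quantiles, which is one admissible choice in $[F^{-1}(\frac{i-1}{N}+),F^{-1}(\frac{i}{N})]$.
\begin{verbatim}
import numpy as np

def support_mu_N(Finv, N, rho, alpha):
    # Support of mu_N(x_{2:N-1}) from Theorem alphaRater:
    #   x_1 = Finv(1/N) ^ (-N**(1/rho - alpha)),
    #   x_N = Finv((N-1)/N) v N**(1/rho - alpha),
    # and, for 2 <= i <= N-1, the mid-quantile Finv((2i-1)/(2N))
    # (any point of [Finv((i-1)/N +), Finv(i/N)] would do).
    assert N >= 2 and rho >= 1 and 0 < alpha < 1.0 / rho
    cut = N ** (1.0 / rho - alpha)
    x = np.asarray(Finv((2 * np.arange(1, N + 1) - 1) / (2 * N)), dtype=float)
    x[0] = min(Finv(1.0 / N), -cut)
    x[-1] = max(Finv((N - 1.0) / N), cut)
    return x   # each point receives weight 1/N
\end{verbatim}
By the theorem, plugging these points into \eqref{wrhomunmu} yields an error with the same order $\alpha$ as $e_N(\mu,\rho)$ as soon as $\sup_{x\ge 0}x^{\frac{\rho}{1-\alpha\rho}}\Big(F(-x)+1-F(x)\Big)<+\infty$.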
2,588
30,163
en
train
0.112.10
\begin{exple} Let $\mu_\beta(dx)=f(x)\,dx$ with $f(x)=\beta\frac{{\mathbf 1}_{\{x\ge 1\}}}{x^{\beta+1}}$ be the Pareto distribution with parameter $\beta>0$. Then $F(x)={\mathbf 1}_{\{x\ge 1\}}\left(1-x^{-\beta}\right)$ and $F^{-1}(u)=(1-u)^{-\frac{1}{\beta}}$. To ensure that $\int_{\mathbb R}|x|^\rho\mu_\beta(dx)<+\infty$, we suppose that $\beta>\rho$. Since $\frac{\rho}{1-\rho\left(\frac{1}{\rho}-\frac{1}{\beta}\right)}=\beta$, we have $\lim_{x\to+\infty}x^\frac{\rho}{1-\rho\left(\frac{1}{\rho}-\frac{1}{\beta}\right)}(F(-x)+1-F(x))=1$. Replacing $\limsup$ by $\liminf$ in the last step of the proof of Theorem \ref{alphaRater}, we check that $\liminf_{N\to+\infty}N^{\frac{1}{\rho}-\frac{1}{\beta}}e_N(\mu_\beta,\rho)>0$ and deduce with the statement of this theorem that $e_N\left(\mu_\beta,\rho\right)\asymp N^{-\frac{1}{\rho}+\frac{1}{\beta}}\asymp\sup_{x_{2:N-1}}{\cal W}_\rho(\mu_N(x_{2:N-1}),\mu_\beta)$. \end{exple} In the case $\alpha=\frac{1}{\rho}$, a limiting situation not covered by Theorem \ref{alphaRater}, we have the following result. \begin{prop}\label{propal1rho} For $\rho\ge 1$, \begin{align*} &\sup_{N\ge 1}N^{1/\rho}e_N(\mu,\rho)<+\infty \Rightarrow \sup_{N\ge 1}N\left(\int_0^{\frac 1 N} \left|F^{-1}(u)-x^N_1\right|^{\rho}\,du+\int^1_{\frac{N-1}{N}} \left|F^{-1}(u)-x^N_N\right|^{\rho}\,du\right)<+\infty\\ &\Leftrightarrow\sup_{u\in(0,1/2]}\left(F^{-1}(1-u/2)-F^{-1}(1-u)+F^{-1}(u)-F^{-1}(u/2)\right)<+\infty\\ &\Rightarrow\sup_{u\in(0,1/2]}\frac{F^{-1}(1-u)-F^{-1}(u)}{\ln(1/u)}<+\infty\Leftrightarrow\exists \lambda\in(0,+\infty),\;\forall x\ge 0,\;\Big(F(-x)+1-F(x)\Big)\le e^{-\lambda x}/\lambda\\&\Rightarrow \sup_{N\ge 2}\sup_{x_{2:N-1}}\frac{N^{1/\rho}}{1+\ln N}{\cal W}_\rho(\mu_N(x_{2:N-1}),\mu)<+\infty\Rightarrow\sup_{N\ge 1}\frac{N^{1/\rho}}{1+\ln N}e_N(\mu,\rho)<+\infty, \end{align*} where $\mu_N(x_{2:N-1})=\frac{1}{N}\left(\delta_{F^{-1}\left(\frac{1}{N}\right)\wedge (-\frac{\ln N}{\lambda})}+\sum_{i=2}^{N-1}\delta_{x_i}+\delta_{F^{-1}\left(\frac{N-1}{N}\right)\vee \frac{\ln N}{\lambda}}\right)$ and $\sup_{x_{2:N-1}}$ means the supremum over the choice of $x_i\in[F^{-1}(\frac{i-1}{N}+),F^{-1}(\frac{i}{N})]$ for $2\le i\le N-1$. \end{prop} \begin{remark} The first implication is not an equivalence for $\rho=1$. Indeed, in Example \ref{exempleexp}, for $\beta\ge 1$, $\lim\limits_{N\to+\infty} Ne_N(\mu,1)= +\infty$ while $\sup\limits_{N\ge 1}N\left(\int_0^{\frac 1 N} \left|F^{-1}(u)-x^N_1\right|\,du+\int^1_{\frac{N-1}{N}} \left|F^{-1}(u)-x^N_N\right|\,du\right)<+\infty$. \end{remark} \begin{proof} The first implication is an immediate consequence of \eqref{enrho}.\\To prove the equivalence, we first suppose that: \begin{equation} \sup_{N\ge 1}N^{\frac{1}{\rho}}\left(\int_0^{\frac 1 N} \left|F^{-1}(u)-x^N_1\right|^{\rho}\,du+\int^1_{\frac{N-1}{N}} \left|F^{-1}(u)-x^N_N\right|^{\rho}\,du\right)^{\frac{1}{\rho}}<+\infty,\label{majotermesqueues} \end{equation} and denote by $C$ the finite supremum in this equation. By \eqref{minotermbordlimB} for $i=1$, $\forall N\ge 1,\;F^{-1}\left(\frac{1}{2N}\right)-F^{-1}\left(\frac{1}{4N}\right)\le 4C$.
For $u\in(0,1/2]$, there exists $N\in{\mathbb N}^*$ such that $u\in\left[\frac{1}{2(N+1)},\frac{1}{2N}\right]$ and, by monotonicity of $F^{-1}$ and since $4N\ge 2(N+1)$, we get \begin{align*} F^{-1}(u)-F^{-1}(u/2) &\le F^{-1}\left(\frac{1}{2N}\right)-F^{-1}\left(\frac{1}{4(N+1)}\right)\\ &\le F^{-1}\left(\frac{1}{2N}\right)-F^{-1}\left(\frac{1}{4N}\right)+F^{-1}\left(\frac{1}{2(N+1)}\right)-F^{-1}\left(\frac{1}{4(N+1)}\right)\le 8 C. \end{align*} Dealing in a symmetric way with $ F^{-1}(1-u/2)-F^{-1}(1-u)$, we obtain that $$\sup_{u\in(0,1/2]}\Big(F^{-1}(1-u/2)-F^{-1}(1-u)+F^{-1}(u)-F^{-1}(u/2)\Big) \le 16C. $$ On the other hand, for $N\ge 2$, by Lemma \ref{lemenf} applied with $x=F^{-1}\left(\frac{1}{N}\right)$, \begin{align*} \displaystyle \frac{1}{\rho}\int_0^{\frac 1 N} \left|F^{-1}(u)-x^N_1\right|^{\rho}\,du &\le \sum_{k\in{\mathbb N}}\int_{F^{-1}\left(\frac{1}{2^{k+1}N}\right)}^{F^{-1}\left(\frac{1}{2^kN}\right)}\left(F^{-1}\left(\frac{1}{N}\right)-y\right)^{\rho-1}F(y)\,dy\\ &\le \sum_{k\in{\mathbb N}}\frac{F^{-1}\left(\frac{1}{2^kN}\right)-F^{-1}\left(\frac{1}{2^{k+1}N}\right)}{2^kN} \left(\sum_{j=0}^k\left(F^{-1}\left(\frac{1}{2^jN}\right)-F^{-1}\left(\frac{1}{2^{j+1}N}\right)\right)\right)^{\rho-1}\\ &\le \frac{1}{N}\left(\sup_{u\in(0,1/2]}\left(F^{-1}(u)-F^{-1}(u/2)\right)\right)^\rho\sum_{k\in{\mathbb N}}\frac{(k+1)^{\rho-1}}{2^k}, \end{align*} where the last sum is finite. Dealing in a symmetric way with $\int^1_{\frac{N-1}{N}} \left|F^{-1}(u)-x^N_N\right|^{\rho}\,du$, we conclude that \eqref{majotermesqueues} is equivalent to the finiteness of $\sup_{u\in(0,1/2]}\Big(F^{-1}(1-u/2)-F^{-1}(1-u)+F^{-1}(u)-F^{-1}(u/2)\Big)$. Under \eqref{majotermesqueues} with $C$ denoting the finite supremum, for $k\in{\mathbb N}^*$, $F^{-1}\left(2^{-(k+1)}\right)-F^{-1}\left(2^{-k}\right)\ge-4C$ and, after summation, $$F^{-1}(2^{-k})\ge F^{-1}(1/2)-4 C (k-1).$$ With the monotonicity of $F^{-1}$, we deduce that: $$\forall u\in(0,1/2],\;F^{-1}(u)\ge F^{-1}(1/2)+\frac{4C}{\ln 2}\ln u$$ and therefore that $\sup_{u\in(0,1/2]}\frac{-F^{-1}(u)}{\ln(1/u)}<+\infty$. With the inequality $F^{-1}(F(x))\le x$ valid for $x\in{\mathbb R}$, this implies that $\sup_{\left\{x\in{\mathbb R}:0<F(x)\le 1/2 \right\}}\frac{-x}{\ln(1/F(x))}<+\infty$ and therefore that $\exists \, \lambda\in(0,+\infty),\;\forall x\le 0,\;F(x)\le e^{\lambda x}/\lambda$. Under the latter condition, since $u\le F(F^{-1}(u))$ and $F^{-1}(u)\le 0$ for $u\in (0,F(0)]$, we have $\sup_{u\in(0,F(0)]}\frac{-F^{-1}(u)}{\ln(1/u)}<\infty$ and even $\sup_{u\in(0,1/2]}\frac{-F^{-1}(u)}{\ln(1/u)}<\infty$ since when $F(0)<\frac 1 2$, $\sup_{u\in(F(0),1/2]}\frac{-F^{-1}(u)}{\ln(1/u)}\le 0$. By a symmetric reasoning, we obtain the two equivalent tail properties $\sup_{u\in(0,1/2]}\frac{F^{-1}(1-u)-F^{-1}(u)}{\ln(1/u)}<+\infty$ and $\exists \,\lambda\in(0,+\infty),\;\forall x\ge 0,\;\Big(F(-x)+1-F(x)\Big)\le e^{-\lambda x}/\lambda$. Let us finally suppose these two tail properties and deduce that $\sup_{N\ge 2}\sup_{x_{2:N-1}}\frac{N^{1/\rho}}{1+\ln N}{\cal W}_\rho(\mu_N(x_{2:N-1}),\mu)<+\infty$. 
We use the decomposition ${\cal W}^\rho_\rho(\mu_N(x_{2:N-1}),\mu)= L_N+M_N+U_N$ introduced in the proof of Theorem \ref{alphaRater} but with $F^{-1}\left(\frac{1}{N}\right)\wedge \left(-\frac{\ln N}{\lambda}\right)$ and $F^{-1}\left(\frac{N-1}{N}\right)\vee \left(\frac{\ln N}{\lambda}\right)$ respectively replacing $F^{-1}\left(\frac{1}{N}\right)\wedge (-N^{\frac{1}{\rho}-\alpha})$ and $F^{-1}\left(\frac{N-1}{N}\right)\vee (N^{\frac{1}{\rho}-\alpha})$ in $L_N$ and $U_N$. By \eqref{majomn}, we get: \begin{align*} \forall N\ge 3,\;M_N\le \frac{1}{N}\left(F^{-1}\left(\frac{N-1}{N}\right)-F^{-1}\left(\frac{1}{N}\right)\right)^{\rho}\le \left(\sup_{u\in(0,1/2]}\frac{F^{-1}(1-u)-F^{-1}(u)}{\ln(1/u)}\right)^\rho\frac{(\ln N)^\rho}{N}. \end{align*} Applying Lemma \ref{lemenf} with $x=F^{-1}\left(\frac{1}{N}\right)\wedge \left(-\frac{\ln N}{\lambda}\right)$ then the estimation of the cumulative distribution function, we obtain that for $N\ge 2$, \begin{align*} L_N &\le \rho\int_{-\infty}^{F^{-1}\left(\frac{1}{N}\right)\wedge \left(-\frac{\ln N}{\lambda}\right)}\left(F^{-1}\left(\frac{1}{N}\right)\wedge \left(-\frac{\ln N}{\lambda}\right)-y\right)^{\rho-1}F(y)\,dy\\&+\rho\int^{F^{-1}\left(\frac{1}{N}\right)}_{F^{-1}\left(\frac{1}{N}\right)\wedge \left(-\frac{\ln N}{\lambda}\right)}\left(y-F^{-1}\left(\frac{1}{N}\right)\wedge \left(-\frac{\ln N}{\lambda}\right)\right)^{\rho-1}\left(\frac{1}{N}-F(y)\right)\,dy\\ &\le \frac{\rho}{\lambda}\int_{-\infty}^{-\frac{\ln N}{\lambda}}(-y)^{\rho-1}e^{\lambda y}\,dy+\frac{1}{N}\left(\frac{\ln N}{\lambda}+F^{-1}\left(\frac{1}{N}\right)\right)_+^{\rho}\\ &\le \frac{\rho}{\lambda}\sum_{k\ge 1}\int_{k\frac{\ln N}{\lambda}}^{(k+1)\frac{\ln N}{\lambda}}\left((k+1)\frac{\ln N}{\lambda}\right)^{\rho-1}e^{-\lambda y}\,dy+\frac{1}{N}\left(\frac{\ln N}{\lambda}+F^{-1}\left(\frac{1}{2}\right)\right)_+^{\rho}\\ &\le \frac{\rho(\ln N)^{\rho}}{\lambda^{\rho+1}N}\sum_{k\ge 1}\frac{(k+1)^{\rho-1}}{2^{k-1}}+\frac{1}{N}\left(\frac{1}{\lambda}\ln N+F^{-1}\left(\frac{1}{2}\right)\right)_+^{\rho}, \end{align*} where we used that $N^k\ge N2^{k-1}$ for the last inequality. Dealing in a symmetric way with $U_N$, we conclude that $\sup_{N\ge 2}\sup_{x_{2:N-1}}\frac{N{\cal W}^\rho_\rho(\mu_N(x_{2:N-1}),\mu)}{1+(\ln N)^\rho}<+\infty$. \end{proof} \pagenumbering{gobble} \end{document}
3,795
30,163
en
train
0.113.0
\begin{document} \title{Optimal three spheres inequality at the boundary for the Kirchhoff-Love plate's equation with Dirichlet conditions\thanks{The first and the second authors are supported by FRA 2016 ``Problemi Inversi, dalla stabilit\`a alla ricostruzione'', Universit\`a degli Studi di Trieste. The second and the third authors are supported by Progetto GNAMPA 2017 ``Analisi di problemi inversi: stabilit\`a e ricostruzione'', Istituto Nazionale di Alta Matematica (INdAM).}} \noindent \textbf{Abstract.} We prove a three spheres inequality with optimal exponent at the boundary for solutions to the Kirchhoff-Love plate's equation satisfying homogeneous Dirichlet conditions. This result implies the Strong Unique Continuation Property at the Boundary (SUCPB). Our approach is based on the method of Carleman estimates, and involves the construction of an ad hoc conformal mapping preserving the structure of the operator and the employment of a suitable reflection of the solution with respect to the flattened boundary which ensures the needed regularity of the extended solution. To the authors' knowledge, this is the first (nontrivial) SUCPB result for fourth-order equations with bi-Laplacian principal part. \noindent \textbf{Mathematical Subject Classifications (2010): 35B60, 35J30, 74K20, 35R25, 35R30, 35B45} \noindent \textbf{Key words:} elastic plates, three spheres inequalities, unique continuation, Carleman estimates. \section{Introduction} \label{sec: introduction} The main purpose of this paper is to prove a Strong Unique Continuation Property at the Boundary (SUCPB) for the Kirchhoff-Love plate's equation. In order to introduce the subject of the SUCPB we give some basic, although coarse, notions. Let $\mathcal{L}$ be an elliptic operator of order $2m$, $m\in \mathbb{N}$, and let $\Omega$ be an open domain in $\mathbb{R}^N$, $N\geq2$. We say that $\mathcal{L}$ enjoys a SUCPB with respect to the Dirichlet boundary conditions if the following property holds true: \begin{equation}\label{formulaz-sucpb} \begin{cases} \mathcal{L}u=0, \mbox{ in } \Omega, \\ \frac{\partial^ju}{\partial n^j}=0, \mbox{ on } \Gamma, \quad\mbox{ for } j=0, 1, \ldots, m-1, \\ \int_{\Omega\cap B_r(P)}u^2=\mathcal{O}(r^k), \mbox{ as } r\rightarrow 0, \forall k\in \mathbb{N}, \end{cases}\Longrightarrow \quad u\equiv 0 \mbox{ in } \Omega, \end{equation} where $\Gamma$ is an open portion (in the induced topology) of $\partial\Omega$, $n$ is the outer unit normal, $P\in\Gamma$ and $B_r(P)$ is the ball of center $P$ and radius $r$. Similarly, we say that $\mathcal{L}$ enjoys a SUCPB with respect to the set of normal boundary operators $\mathcal{B}_j$, $j\in J$, $\mathcal{B}_j$ of order $j$, $J\subset \{0, 1, \ldots, 2m-1\}$, $\sharp J = m$, \cite{l:Fo}, if the analogue of \eqref{formulaz-sucpb} holds when the Dirichlet boundary conditions are replaced by \begin{equation} \label{BC-generale} \mathcal{B}_ju=0, \quad \hbox{on } \Gamma, \quad \hbox{for } j\in J. \end{equation} The SUCPB has been studied for second order elliptic operators in the last two decades, in the case of homogeneous Dirichlet, Neumann and Robin boundary conditions, \cite{l:AdEsKe}, \cite{l:AdEs}, \cite{l:ARRV}, \cite{l:ApEsWaZh}, \cite{l:BaGa}, \cite{l:BoWo}, \cite{l:KeWa}, \cite{l:KuNy}, \cite{l:Si}. Although the conjecture that the SUCPB holds true when $\partial\Omega$ is of Lipschitz class has not yet been proved, the SUCPB and the related quantitative estimates are today well enough understood for second-order elliptic equations.
Starting from the paper \cite{l:AlBeRoVe}, the SUCPB turned out to be a crucial property to prove optimal stability estimates for inverse elliptic boundary value problems with unknown boundaries. Mostly for this reason, the investigation of the SUCPB has been successfully extended to second order parabolic equations \cite{l:CaRoVe}, \cite{l:DcRoVe}, \cite{l:EsFe}, \cite{l:EsFeVe}, \cite{l:Ve1} and to the wave equation with time-independent coefficients \cite{l:SiVe}, \cite{l:Ve2}. For completeness we recall (coarsely) the formulation of inverse boundary value problems with unknown boundaries in the elliptic context. Assume that $\Omega$ is a bounded domain, with connected boundary $\partial \Omega$ of $C^{1,\alpha}$ class, and that $\partial\Omega$ is the disjoint union of an accessible portion $\Gamma^{(a)}$ and of an inaccessible portion $\Gamma^{(i)}$. Given a symmetric, elliptic, Lipschitz matrix-valued function $A$ and $\psi \not\equiv 0$ such that $$\psi(x)=0, \mbox{ on } \Gamma^{(i)},$$ let $u$ be the solution to \begin{equation*} \left\{\begin{array}{ll} \mbox{div}\left(A\nabla u\right)=0, \quad \hbox{in } \Omega,\\ u=\psi, \quad \hbox{on } \partial\Omega. \end{array}\right. \end{equation*} Assuming that one knows \begin{equation*} \label{flux} A\nabla u\cdot\nu,\quad \mbox{on } \Sigma, \end{equation*} where $\Sigma$ is an open portion of $\Gamma^{(a)}$, the inverse problem under consideration consists in determining the unknown boundary $\Gamma^{(i)}$. The proof of the uniqueness of $\Gamma^{(i)}$ is quite simple and requires the weak unique continuation property of elliptic operators. On the contrary, the optimal continuous dependence of $\Gamma^{(i)}$ on the Cauchy data $u$, $A\nabla u\cdot\nu$ on $\Sigma$, which is of logarithmic rate (see \cite{l:DcRo}), requires quantitative estimates of strong unique continuation at the interior and at the boundary, like the three spheres inequality, \cite{l:Ku}, \cite{l:La}, and the doubling inequality, \cite{l:AdEs}, \cite{l:GaLi}. Inverse problems with unknown boundaries have been studied in linear elasticity theory for elliptic systems \cite{l:mr03}, \cite{l:mr04}, \cite{l:mr09}, and for fourth-order elliptic equations \cite{l:mrv07}, \cite{l:mrv09}, \cite{l:mrv13}. It is clear enough that the unavailability of the SUCPB precludes proving optimal stability estimates for these inverse problems with unknown boundaries. In spite of the fact that the strong unique continuation in the interior for fourth-order elliptic equations of the form \begin{equation}\label{bilaplacian} \Delta^2u+\sum_{|\alpha|\leq 3}c_{\alpha} D^{\alpha}u=0 \end{equation} where $c_{\alpha}\in L^{\infty}(\Omega)$, is nowadays well understood, \cite{l:CoGr}, \cite{l:CoKo}, \cite{l:Ge}, \cite{l:LBo}, \cite{l:LiNaWa}, \cite{l:mrv07}, \cite{l:Sh}, to the authors' knowledge, the SUCPB for equations like \eqref{bilaplacian} has not yet been proved, even for Dirichlet boundary conditions. In this regard it is worthwhile to emphasize that serious difficulties occur in performing the Carleman method (the main method to prove the unique continuation property) for the bi-Laplace operator \textit{near the boundaries}; we refer to \cite{l:LeRRob} for a thorough discussion and extensive references on the topic.
In the present paper we begin to establish results in this direction for the Kirchhoff-Love equation, describing thin isotropic elastic plates \begin{equation} \label{eq:equazione_piastra-int} L(v) := {\rm div}\left ({\rm div} \left ( B(1-\nu)\nabla^2 v + B\nu \Delta v I_2 \right ) \right )=0, \qquad\hbox{in } \Omega\subset\mathbb{R}^2, \end{equation} where $v$ represents the transversal displacement, $B$ is the \emph{bending stiffness} and $\nu$ the \emph{Poisson's coefficient} (see \eqref{eq:3.stiffness}--\eqref{eq:3.E_nu} for the precise definitions). Assuming $B,\nu\in C^4(\overline{\Omega})$ and $\Gamma$ of $C^{6, \alpha}$ class, we prove our main results: a three spheres inequality at the boundary with optimal exponent (see Theorem \ref{theo:40.teo} for the precise statement) and, as a byproduct, the following SUCPB result (see Corollary \ref{cor:SUCP}) \begin{equation}\label{formulaz-sucpb-piastra} \begin{cases} Lv=0, \mbox{ in } \Omega, \\ v =\frac{\partial v}{\partial n}=0, \mbox{ on } \Gamma, \\ \int_{\Omega\cap B_r(P)}v^2=\mathcal{O}(r^k), \mbox{ as } r\rightarrow 0, \forall k\in \mathbb{N}, \end{cases}\Longrightarrow \quad v\equiv 0 \mbox{ in } \Omega. \end{equation} In our proof, firstly we flatten the boundary $\Gamma$ by introducing a suitable conformal mapping (see Proposition \ref{prop:conf_map}), then we combine a reflection argument (briefly illustrated below) and the Carleman estimate \begin{equation} \label{eq:24.4-intr} \sum_{k=0}^3 \tau^{6-2k}\int\rho^{2k+\epsilon-2-2\tau}|D^kU|^2dxdy\leq C \int\rho^{6-\epsilon-2\tau}(\Delta^2 U)^2dxdy, \end{equation} for every $\tau\geq \overline{\tau}$ and for every $U\in C^\infty_0(B_{\widetilde{R}_0}\setminus\{0\})$, where $0<\epsilon<1$ is fixed and $\rho(x,y)\sim \sqrt{x^2+y^2}$ as $(x,y)\rightarrow (0,0)$; see \cite[Theorem 6.8]{l:mrv07} and Proposition \ref{prop:Carleman} below for the precise statement. To enter a little more into detail, let us outline the main steps of our proof. a) Since equation \eqref{eq:equazione_piastra-int} can be rewritten in the form \begin{equation} \label{eq:equazione_piastra_non_div-intr} \Delta^2 v= -2\frac{\nabla B}{B}\cdot \nabla\Delta v + q_2(v) \qquad\hbox{in } \Omega, \end{equation} where $q_2$ is a second order operator, the equation resulting after flattening $\Gamma$ by a conformal mapping preserves the same structure as \eqref{eq:equazione_piastra_non_div-intr} and, denoting by $u$ the solution in the new coordinates, we can write \begin{equation} \label{eq:15.1a-intro} \begin{cases} \Delta^2 u= a\cdot \nabla\Delta u + p_2(u), \qquad\hbox{in } B_1^+, \\ u(x,0)=u_y(x,0) =0, \quad \forall x\in (-1,1) \end{cases} \end{equation} where $p_2$ is a second order operator. b) We use the following reflection of $u$, \cite{l:Fa}, \cite{l:Jo}, \cite{l:Sa}, \begin{equation*} \overline{u}(x,y)=\left\{ \begin{array}{cc} u(x,y), & \hbox{ in } B_1^+\\ w(x,y)=-[u(x,-y)+2yu_y(x,-y)+y^2\Delta u(x,-y)], & \hbox{ in } B_1^- \end{array} \right. \end{equation*} which has the advantage of ensuring that $\overline{u}\in H^4(B_1)$ if $u\in H^4(B_1^+)$ (see Proposition \ref{prop:16.1}), and then we apply the Carleman estimate \eqref{eq:24.4-intr} to $\xi \overline{u}$, where $\xi$ is a cut-off function. Nevertheless, we still have a problem.
Namely c) Derivatives of $u$ up to the sixth order occur in the terms on the right-hand side of the Carleman estimate involving negative values of $y$; hence such terms cannot be absorbed in a standard way by the left-hand side. In order to overcome this obstruction, we use a Hardy inequality, \cite{l:HLP34}, \cite{l:T67}, stated in Proposition \ref{prop:Hardy}. The paper is organized as follows. In Section \ref{sec: notation} we introduce some notation and definitions and state our main results, Theorem \ref{theo:40.teo} and Corollary \ref{cor:SUCP}. In Section \ref{sec: flat_boundary} we state Proposition \ref{prop:conf_map}, which introduces a conformal map realizing a local flattening of the boundary that preserves the structure of the differential operator. Section \ref{sec: Preliminary} contains some auxiliary results which shall be used in the proof of the three spheres inequality in the case of flat boundaries, precisely Propositions \ref{prop:16.1} and \ref{prop:19.2} concerning the reflection w.r.t. flat boundaries and its properties, a Hardy inequality (Proposition \ref{prop:Hardy}), the Carleman estimate for the bi-Laplace operator (Proposition \ref{prop:Carleman}), and some interpolation estimates (Lemmas \ref{lem:Agmon} and \ref{lem:intermezzo}). In Section \ref{sec:3sfere} we establish the three spheres inequality with optimal exponents for the case of flat boundaries, Proposition \ref{theo:40.prop3}, and then we derive the proof of our main result, Theorem \ref{theo:40.teo}. Finally, in the Appendix, we give the proof of Proposition \ref{prop:conf_map} and of the interpolation estimates contained in Lemma \ref{lem:intermezzo}.
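As a side remark (ours, not part of the paper), the rewriting used in step a) can be checked symbolically: expanding $L(v)$ for generic smooth $B$ and $\nu$ and subtracting the principal part $B\Delta^2v$ and the third-order term $2\nabla B\cdot\nabla\Delta v$ must leave only a second-order operator in $v$, which is the content of \eqref{eq:equazione_piastra_non_div-intr}. A minimal \texttt{sympy} sketch:
\begin{verbatim}
import sympy as sp

x, y = sp.symbols('x y')
B, nu, v = (sp.Function(n)(x, y) for n in ('B', 'nu', 'v'))

lap = lambda f: sp.diff(f, x, 2) + sp.diff(f, y, 2)
hess = lambda f: sp.Matrix([[sp.diff(f, x, 2), sp.diff(f, x, y)],
                            [sp.diff(f, x, y), sp.diff(f, y, 2)]])

M = B*(1 - nu)*hess(v) + B*nu*lap(v)*sp.eye(2)        # plate moment tensor
var = (x, y)
L = sum(sp.diff(M[i, j], var[i], var[j]) for i in range(2) for j in range(2))

# subtract the principal part and the third-order term 2 grad(B).grad(lap v)
drift = 2*(sp.diff(B, x)*sp.diff(lap(v), x) + sp.diff(B, y)*sp.diff(lap(v), y))
resid = sp.expand(L - B*lap(lap(v)) - drift)

orders = {d.derivative_count for d in resid.atoms(sp.Derivative) if d.expr == v}
print(max(orders))            # 2: only a second-order operator q_2(v) remains
\end{verbatim}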
3,845
35,444
en
train
0.113.1
\section{Notation} \label{sec: notation} We shall generally denote points in $\mathbb{R}^2$ by $x=(x_1,x_2)$ or $y=(y_1,y_2)$, except for Sections \ref{sec: Preliminary} and \ref{sec:3sfere} where we rename $x,y$ the coordinates in $\mathbb{R}^2$. In places we will use equivalently the symbols $D$ and $\nabla$ to denote the gradient of a function. Also we use the multi-index notation. We shall denote by $B_r(P)$ the disc in $\mathbb{R}^2$ of radius $r$ and center $P$, by $B_r$ the disc of radius $r$ and center $O$, by $B_r^+$, $B_r^-$ the hemidiscs in $\mathbb{R}^2$ of radius $r$ and center $O$ contained in the halfplanes $\mathbb{R}^2_+= \{x_2>0\}$, $\mathbb{R}^2_-= \{x_2<0\}$ respectively, and by $R_{ a,b}$ the rectangle $(-a,a)\times(-b,b)$. Given a matrix $A =(a_{ij})$, we shall denote by $|A|$ its Frobenius norm $|A|=\sqrt{\sum_{i,j}a_{ij}^2}$. Along our proofs, we shall denote by $C$ a constant which may change {}from line to line. \begin{definition} \label{def:reg_bordo} (${C}^{k,\alpha}$ regularity) Let $\Omega$ be a bounded domain in ${\mathbb{R}}^{2}$. Given $k,\alpha$, with $k\in\mathbb{N}$, $0<\alpha\leq 1$, we say that a portion $S$ of $\partial \Omega$ is of \textit{class ${C}^{k,\alpha}$ with constants $r_{0}$, $M_{0}>0$}, if, for any $P \in S$, there exists a rigid transformation of coordinates under which we have $P=0$ and \begin{equation*} \Omega \cap R_{r_0,2M_0r_0}=\{x \in R_{r_0,2M_0r_0} \quad | \quad x_{2}>g(x_1) \}, \end{equation*} where $g$ is a ${C}^{k,\alpha}$ function on $[-r_0,r_0]$ satisfying \begin{equation*} g(0)=g'(0)=0, \end{equation*} \begin{equation*} \|g\|_{{C}^{k,\alpha}([-r_0,r_0])} \leq M_0r_0, \end{equation*} where \begin{equation*} \|g\|_{{C}^{k,\alpha}([-r_0,r_0])} = \sum_{i=0}^k r_0^i\sup_{[-r_0,r_0]}|g^{(i)}|+r_0^{k+\alpha}|g|_{k,\alpha}, \end{equation*} \begin{equation*} |g|_{k,\alpha}= \sup_ {\overset{\scriptstyle t,s\in [-r_0,r_0]}{\scriptstyle t\neq s}}\left\{\frac{|g^{(k)}(t) - g^{(k)}(s)|}{|t-s|^\alpha}\right\}. \end{equation*} \end{definition} We shall consider an isotropic thin elastic plate $\Omega\times \left[-\frac{h}{2},\frac{h}{2}\right]$, having middle plane $\Omega$ and width $h$. Under the Kirchhoff-Love theory, the transversal displacement $v$ satisfies the following fourth-order partial differential equation \begin{equation} \label{eq:equazione_piastra} L(v) := {\rm div}\left ({\rm div} \left ( B(1-\nu)\nabla^2 v + B\nu \Delta v I_2 \right ) \right )=0, \qquad\hbox{in } \Omega. \end{equation} Here the \emph{bending stiffness} $B$ is given by \begin{equation} \label{eq:3.stiffness} B(x)=\frac{h^3}{12}\left(\frac{E(x)}{1-\nu^2(x)}\right), \end{equation} and the \emph{Young's modulus} $E$ and the \emph{Poisson's coefficient} $\nu$ can be written in terms of the Lam\'{e} moduli as follows \begin{equation} \label{eq:3.E_nu} E(x)=\frac{\mu(x)(2\mu(x)+3\lambda(x))}{\mu(x)+\lambda(x)},\qquad\nu(x)=\frac{\lambda(x)}{2(\mu(x)+\lambda(x))}. \end{equation} We shall make the following strong convexity assumptions on the Lam\'{e} moduli \begin{equation} \label{eq:3.Lame_convex} \mu(x)\geq \alpha_0>0,\qquad 2\mu(x)+3\lambda(x)\geq\gamma_0>0, \qquad \hbox{ in } \Omega, \end{equation} where $\alpha_0$, $\gamma_0$ are positive constants.
It is easy to see that equation \eqref{eq:equazione_piastra} can be rewritten in the form \begin{equation} \label{eq:equazione_piastra_non_div} \Delta^2 v= \widetilde{a}\cdot \nabla\Delta v + \widetilde{q}_2(v) \qquad\hbox{in } \Omega, \end{equation} with \begin{equation} \label{eq:vettore_a_tilde} \widetilde{a}=-2\frac{\nabla B}{B}, \end{equation} \begin{equation} \label{eq:q_2} \widetilde{q}_2(v)=-\sum_{i,j=1}^2\frac{1}{B}\partial^2_{ij}(B(1-\nu)+\nu B\delta_{ij})\partial^2_{ij} v. \end{equation} Let \begin{equation} \label{eq:Omega_r_0} \Omega_{r_0} = \left\{ x\in R_{r_0,2M_0r_0}\ |\ x_2>g(x_1) \right\}, \end{equation} \begin{equation} \label{eq:Gamma_r_0} \Gamma_{r_0} = \left\{(x_1,g(x_1))\ |\ x_1\in (-r_0,r_0)\right\}, \end{equation} with \begin{equation*} g(0)=g'(0)=0, \end{equation*} \begin{equation} \label{eq:regol_g} \|g\|_{{C}^{6,\alpha}([-r_0,r_0])} \leq M_0r_0, \end{equation} for some $\alpha\in (0,1]$. Let $v\in H^2(\Omega_{r_0})$ satisfy \begin{equation} \label{eq:equat_u_tilde} L(v)= 0, \quad \hbox{ in } \Omega_{r_0}, \end{equation} \begin{equation} \label{eq:Diric_u_tilde} v = \frac{\partial v}{\partial n}= 0, \quad \hbox{ on } \Gamma_{r_0}, \end{equation} where $L$ is given by \eqref{eq:equazione_piastra} and $n$ denotes the outer unit normal. Let us assume that the Lam\'{e} moduli $\lambda,\mu$ satisfy the strong convexity condition \eqref{eq:3.Lame_convex} and the following regularity assumptions \begin{equation} \label{eq:C4Lame} \|\lambda\|_{C^4(\overline{\Omega}_{r_0})}, \|\mu\|_{C^4(\overline{\Omega}_{r_0})}\leq \Lambda_0. \end{equation} The regularity assumptions \eqref{eq:3.Lame_convex}, \eqref{eq:regol_g} and \eqref{eq:C4Lame} guarantee that $v\in H^6(\Omega_r)$, see for instance \cite{l:a65}. \begin{theo} [{\bf Optimal three spheres inequality at the boundary}] \label{theo:40.teo} Under the above hypotheses, there exist $c<1$ only depending on $M_0$ and $\alpha$, $C>1$ only depending on $\alpha_0$, $\gamma_0$, $\Lambda_0$, $M_0$, $\alpha$, such that, for every $r_1<r_2<c r_0<r_0$, \begin{equation} \label{eq:41.1} \int_{B_{r_2}\cap \Omega_{r_0}}v^2\leq C\left(\frac{r_0}{r_2}\right)^C\left(\int_{B_{r_1}\cap \Omega_{r_0}}v^2\right)^\theta\left(\int_{B_{r_0}\cap \Omega_{r_0}}v^2\right)^{1-\theta}, \end{equation} where \begin{equation} \label{eq:41.2} \theta = \frac{\log\left(\frac{cr_0}{r_2}\right)}{\log\left(\frac{r_0}{r_1}\right)}. \end{equation} \end{theo} \begin{cor} [{\bf Quantitative strong unique continuation at the boundary}] \label{cor:SUCP} Under the above hypotheses and assuming $\int_{B_{r_0 }\cap\Omega_{r_0}}v^2>0$, \begin{equation} \label{eq:suc1} \int_{B_{r_1 }\cap\Omega_{r_0}}v^2 \geq \left(\frac{r_1}{r_0}\right)^{\frac{\log A}{\log \frac{r_2}{cr_0}}} \int_{B_{r_0 }\cap\Omega_{r_0}}v^2, \end{equation} where \begin{equation} \label{eq:suc2} A= \frac{1}{C}\left(\frac{r_2}{r_0}\right)^C\frac{\int_{B_{r_2 }\cap\Omega_{r_0}}v^2}{\int_{B_{r_0 }\cap\Omega_{r_0}}v^2}<1, \end{equation} $c<1$ and $C>1$ being the constants appearing in Theorem \ref{theo:40.teo}. \end{cor} \begin{proof} Reassembling the terms in \eqref{eq:41.1}, it is straightforward to obtain \eqref{eq:suc1}-\eqref{eq:suc2}. The SUCPB follows immediately. \end{proof} \section{Reduction to a flat boundary} \label{sec: flat_boundary} The following Proposition introduces a conformal map which flattens the boundary $\Gamma_{r_0}$ and preserves the structure of equation \eqref{eq:equazione_piastra_non_div}.
\begin{prop} [{\bf Conformal mapping}] \label{prop:conf_map} Under the hypotheses of Theorem \ref{theo:40.teo}, there exists an injective, sense-preserving differentiable map \begin{equation*} \Phi=(\varphi,\psi):[-1,1]\times[0,1]\rightarrow \overline{\Omega}_{r_0} \end{equation*} which is conformal, and it satisfies \begin{equation} \label{eq:9.assente} \Phi((-1,1)\times(0,1))\supset B_{\frac{r_0}{K}}(0)\cap \Omega_{r_0}, \end{equation} \begin{equation} \label{eq:9.2b} \Phi([-1,1]\times\{0\})= \left\{ (x_1,g(x_1))\ |\ x_1\in [-r_1,r_1]\right\}, \end{equation} \begin{equation} \label{eq:9.2a} \Phi(0,0)= (0,0), \end{equation} \begin{equation} \label{eq:gradPhi} \frac{c_0r_0}{2C_0}\leq |D\Phi(y)|\leq \frac{r_0}{2}, \quad \forall y\in [-1,1]\times[0,1], \end{equation} \begin{equation} \label{eq:gradPhiInv} \frac{4}{r_0}\leq |D\Phi^{-1}(x)|\leq \frac{4C_0}{c_0r_0}, \quad\forall x\in \Phi([-1,1]\times[0,1]), \end{equation} \begin{equation} \label{eq:stimaPhi} |\Phi(y)|\leq \frac{r_0}{2}|y|, \quad \forall y\in [-1,1]\times[0,1], \end{equation} \begin{equation} \label{eq:stimaPhiInv} |\Phi^{-1}(x)| \leq \frac{K}{r_0}|x|, \quad \forall x\in \Phi([-1,1]\times[0,1]), \end{equation} with $K>8$, $0<c_0<C_0$ being constants only depending on $M_0$ and $\alpha$. Letting \begin{equation} \label{eq:def_sol_composta} u(y) = v(\Phi(y)), \quad y\in [-1,1]\times[0,1], \end{equation} then $u\in H^6((-1,1)\times(0,1))$ and it satisfies \begin{equation} \label{eq:equazione_sol_composta} \Delta^2 u= a\cdot \nabla\Delta u + q_2(u), \qquad\hbox{in } (-1,1)\times(0,1), \end{equation} \begin{equation} \label{eq:Dirichlet_sol_composta} u(y_1,0)= u_{y_2}(y_1,0) =0, \quad \forall y_1\in (-1,1), \end{equation} where \begin{equation*} a(y) = |\nabla \varphi(y)|^2\left([D\Phi(y)]^{-1}\widetilde{a}(\Phi(y))-2\nabla(|\nabla \varphi(y)|^{-2})\right), \end{equation*} $a\in C^3([-1,1]\times[0,1], \mathbb{R}^2)$, $q_2=\sum_{|\alpha|\leq 2}c_\alpha D^\alpha$ is a second order elliptic operator with coefficients $c_\alpha\in C^2([-1,1]\times[0,1])$, satisfying \begin{equation} \label{eq:15.2} \|a\|_{ C^3([-1,1]\times[0,1], \mathbb{R}^2)}\leq M_1,\quad \|c_\alpha\|_{ C^2([-1,1]\times[0,1])}\leq M_1, \end{equation} with $M_1>0$ only depending on $M_0, \alpha, \alpha_0, \gamma_0, \Lambda_0$. \end{prop} The explicit construction of the conformal map $\Phi$ and the proof of the above Proposition are postponed to the Appendix.
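The structure preservation stated in Proposition \ref{prop:conf_map} rests on the elementary identity $\Delta(v\circ\Phi)=|\nabla\varphi|^2\,(\Delta v)\circ\Phi$, valid for any conformal $\Phi=(\varphi,\psi)$; applying it twice produces exactly terms of the form $a\cdot\nabla\Delta u+q_2(u)$, with the summand $-2\nabla(|\nabla\varphi|^{-2})$ appearing in the expression of $a$ above. The following sketch (ours, not part of the paper) verifies the identity symbolically for the concrete model map $\Phi(z)=z^2$, used only as a stand-in for the map constructed in the Appendix, on monomial test functions:
\begin{verbatim}
import sympy as sp

x, y, a, b = sp.symbols('x y a b', real=True)
phi, psi = x**2 - y**2, 2*x*y                 # real and imaginary parts of z^2

lap_xy = lambda f: sp.diff(f, x, 2) + sp.diff(f, y, 2)
lap_ab = lambda f: sp.diff(f, a, 2) + sp.diff(f, b, 2)
jac = sp.diff(phi, x)**2 + sp.diff(phi, y)**2 # |grad phi|^2

ok = True
for i in range(5):
    for j in range(5 - i):
        v = a**i * b**j                       # monomial test function v(a,b)
        lhs = lap_xy(v.subs({a: phi, b: psi}))
        rhs = jac * lap_ab(v).subs({a: phi, b: psi})
        ok = ok and sp.expand(lhs - rhs) == 0
print(ok)                                     # True
\end{verbatim}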
3,996
35,444
en
train
0.113.2
\section{Preliminary results} \label{sec: Preliminary} In this paragraph, for simplicity of notation, we find it convenient to rename $x,y$ the coordinates in $\mathbb{R}^2$ instead of $y_1,y_2$. Let $u\in H^6(B_1^+)$ be a solution to \begin{equation} \label{eq:15.1a} \Delta^2 u= a\cdot \nabla\Delta u + q_2(u), \qquad\hbox{in } B_1^+, \end{equation} \begin{equation} \label{eq:15.1b} u(x,0)=u_y(x,0) =0, \quad \forall x\in (-1,1), \end{equation} with $q_2=\sum_{|\alpha|\leq 2}c_\alpha D^\alpha$, \begin{equation} \label{eq:15.2_bis} \|a\|_{ C^3(\overline{B}_1^+, \mathbb{R}^2)}\leq M_1,\quad \|c_\alpha\|_{ C^2(\overline{B}_1^+)}\leq M_1, \end{equation} for some positive constant $M_1$. Let us define the following extension of $u$ to $B_1$ (see \cite{l:Jo}) \begin{equation} \label{eq:16.1} \overline{u}(x,y)=\left\{ \begin{array}{cc} u(x,y), & \hbox{ in } B_1^+\\ w(x,y), & \hbox{ in } B_1^- \end{array} \right. \end{equation} where \begin{equation} \label{eq:16.2} w(x,y)= -[u(x,-y)+2yu_y(x,-y)+y^2\Delta u(x,-y)]. \end{equation} \begin{prop} \label{prop:16.1} Let \begin{equation} \label{eq:16.3} F:=a\cdot \nabla\Delta u + q_2(u). \end{equation} Then $F\in H^2(B_1^+)$, $\overline{u}\in H^4(B_1)$, \begin{equation} \label{eq:16.4} \Delta^2 \overline{u} = \overline{F},\quad \hbox{ in } B_1, \end{equation} where \begin{equation} \label{eq:16.5} \overline{F}(x,y)=\left\{ \begin{array}{cc} F(x,y), & \hbox{ in } B_1^+,\\ F_1(x,y), & \hbox{ in } B_1^-, \end{array} \right. \end{equation} and \begin{equation} \label{eq:16.6} F_1(x,y)= -[5F(x,-y)-6yF_y(x,-y)+y^2\Delta F(x,-y)]. \end{equation} \end{prop} \begin{proof} Throughout this proof, we understand $(x,y)\in B_1^-$. It is easy to verify that \begin{equation} \label{eq:17.1} \Delta^2 w(x,y)= -[5F(x,-y)-6yF_y(x,-y)+y^2\Delta F(x,-y)]=F_1(x,y). \end{equation} Moreover, by \eqref{eq:15.1b} and \eqref{eq:16.2}, \begin{equation} \label{eq:17.2} w(x,0)= -u(x,0) =0, \quad \forall x\in (-1,1). \end{equation} By differentiating \eqref{eq:16.2} w.r.t. $y$, we have \begin{equation} \label{eq:17.3bis} w_y(x,y)= -[u_y(x,-y)-2yu_{yy}(x,-y)+2y\Delta u(x,-y)-y^2(\Delta u_y)(x,-y)], \end{equation} so that, by \eqref{eq:15.1b}, \begin{equation} \label{eq:17.3} w_y(x,0)= -u_y(x,0) =0, \quad \forall x\in (-1,1). \end{equation} Moreover, \begin{equation} \label{eq:17.6} \Delta w(x,y)= -[3 \Delta u(x,-y)-4u_{yy}(x,-y)-2y(\Delta u_y)(x,-y)+y^2(\Delta^2 u)(x,-y)], \end{equation} so that, recalling \eqref{eq:15.1b}, we have that, for every $x\in (-1,1)$, \begin{multline} \label{eq:17.4} \Delta w(x,0)= -[3 \Delta u(x,0)-4u_{yy}(x,0)]= u_{yy}(x,0) = \Delta u (x,0). \end{multline} By differentiating \eqref{eq:17.6} w.r.t. $y$, we have \begin{multline} \label{eq:18.1} (\Delta w_y)(x,y)= -[-5 (\Delta u_y)(x,-y)+4u_{yyy}(x,-y)+\\ + 2y(\Delta u_{yy})(x,-y) +2y(\Delta^2 u)(x,-y)-y^2(\Delta^2 u_y)(x,-y)], \end{multline} so that, taking into account \eqref{eq:15.1b}, it follows that, for every $x\in (-1,1)$, \begin{multline} \label{eq:17.5} (\Delta w_y)(x,0)= -[-5 (\Delta u_y)(x,0)+4u_{yyy}(x,0)] =\\ =-[-5 u_{yxx}(x,0) - u_{yyy}(x,0)] = u_{yyy}(x,0) = (\Delta u_y)(x,0). \end{multline} By \eqref{eq:17.2} and \eqref{eq:17.3}, we have that $\overline{u}\in H^2(B_1)$. Let $\varphi\in C^\infty_0(B_1)$ be a test function.
Then, integrating by parts and using \eqref{eq:17.1}, \eqref{eq:17.4}, \eqref{eq:17.5}, we have \begin{multline} \label{eq:18.2} \int_{B_1}\Delta \overline{u} \Delta\varphi = \int_{B_1^+}\Delta u \Delta\varphi +\int_{B_1^-}\Delta w \Delta\varphi=\\ =-\int_{-1 }^1 \Delta u(x,0)\varphi_y(x,0)+\int_{-1 }^1 (\Delta u_y)(x,0)\varphi(x,0) +\int_{B_1^+}(\Delta^2 u) \varphi +\\ +\int_{-1 }^1 \Delta w(x,0)\varphi_y(x,0)-\int_{-1 }^1 (\Delta w_y)(x,0)\varphi(x,0) +\int_{B_1^-}(\Delta^2 w) \varphi=\\ +\int_{B_1^+}F \varphi+\int_{B_1^-}F_1 \varphi =\int_{B_1}\overline{F} \varphi. \end{multline} Therefore \begin{equation*} \int_{B_1}\Delta \overline{u} \Delta\varphi =\int_{B_1}\overline{F} \varphi, \quad \forall \varphi \in C^\infty_0(B_1), \end{equation*} so that \eqref{eq:16.4} holds and, by interior regularity estimates, $\overline{u}\in H^4(B_1)$. \end{proof}
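The identity \eqref{eq:17.1}, whose verification is left to the reader above, can also be confirmed symbolically. Writing everything at the reflected point (so that, with $y>0$ below playing the role of $-y$ in \eqref{eq:16.2}, no composition appears), the extension reads $g=-[u-2yu_y+y^2\Delta u]$ and \eqref{eq:17.1} becomes $\Delta^2g=-[5F+6yF_y+y^2\Delta F]$ with $F=\Delta^2u$. The following \texttt{sympy} sketch (ours, not part of the paper) checks this for a generic smooth $u$:
\begin{verbatim}
import sympy as sp

x, y = sp.symbols('x y')
u = sp.Function('u')(x, y)
lap = lambda f: sp.diff(f, x, 2) + sp.diff(f, y, 2)

# reflected extension written at the reflected point: g(x,y) = w(x,-y)
g = -(u - 2*y*sp.diff(u, y) + y**2*lap(u))

F = lap(lap(u))                                   # F = Delta^2 u
lhs = lap(lap(g))                                 # Delta^2 w at the reflected point
rhs = -(5*F + 6*y*sp.diff(F, y) + y**2*lap(F))    # right-hand side of (17.1)
print(sp.expand(lhs - rhs))                       # 0
\end{verbatim}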
2,286
35,444
en
train
0.113.3
{}From now on, we shall denote by $P_k$, for $k\in \mathbb{N}$, $0\leq k\leq 3$, any differential operator of the form \begin{equation*} \sum_{|\alpha|\leq k}c_\alpha(x)D^\alpha, \end{equation*} with $\|c_\alpha\|_{L^\infty}\leq cM_1$, where $c$ is an absolute constant. \begin{prop} \label{prop:19.2} For every $(x,y)\in B_1^-$, we have \begin{equation} \label{eq:19.1} F_1(x,y)= H(x,y)+(P_2(w))(x,y)+(P_3(u))(x,-y), \end{equation} where \begin{multline} \label{eq:19.2} H(x,y)= 6\frac{a_1}{y}(w_{yx}(x,y)+u_{yx}(x,-y))+\\ +6\frac{a_2}{y}(-w_{yy}(x,y)+u_{yy}(x,-y)) -\frac{12a_2}{y}u_{xx}(x,-y), \end{multline} where $a_1,a_2$ are the components of the vector $a$. Moreover, for every $x\in (-1,1)$, \begin{equation} \label{eq:23.1} w_{yx}(x,0)+u_{yx}(x,0)=0, \end{equation} \begin{equation} \label{eq:23.2} -w_{yy}(x,0)+u_{yy}(x,0)=0, \end{equation} \begin{equation} \label{eq:23.3} u_{xx}(x,0)=0. \end{equation} \end{prop} \begin{proof} As before, we understand $(x,y)\in B_1^-$. Recalling \eqref{eq:16.2} and \eqref{eq:16.3}, it is easy to verify that \begin{equation} \label{eq:19.3} F(x,-y)= (P_3(u))(x,-y), \end{equation} \begin{equation} \label{eq:20.1} -6yF_y(x,-y)= -6y(a\cdot \nabla \Delta u_y)(x,-y)+(P_3(u))(x,-y). \end{equation} Next, let us prove that \begin{equation} \label{eq:20.2} y^2\Delta F(x,-y)= (P_2(w))(x,y)+(P_3(u))(x,-y). \end{equation} By denoting for simplicity $\partial_1 =\frac{\partial}{\partial x}$, $\partial_2 =\frac{\partial}{\partial y}$, we have that \begin{multline} \label{eq:20.3} y^2\Delta F(x,-y)= y^2(a_j\partial_j\Delta^2 u + 2\nabla a_j\cdot \nabla \partial_j\Delta u + \Delta a_j\partial_j\Delta u)(x,-y)+y^2\Delta(q_2(u))(x,-y)=\\ =y^2(a_j\partial_j(a\cdot \nabla \Delta u+q_2 (u)))(x,-y)+ 2y^2(\nabla a_j\cdot\nabla\partial_j \Delta u)(x,-y)+\\ +y^2(\Delta q_2(u))(x,-y)+y^2(P_3(u))(x,-y)=\\ =y^2(a_j a\cdot \nabla \Delta \partial_j u)(x,-y)+\\ +2y^2(\nabla a_j\cdot\nabla\partial_j \Delta u)(x,-y) +y^2\Delta(q_2(u))(x,-y)+y^2(P_3(u))(x,-y). \end{multline} By \eqref{eq:16.2}, we have \begin{equation*} y^2\Delta u(x,-y)=-w(x,y)-u(x,-y)-2yu_y(x,-y), \end{equation*} obtaining \begin{multline} \label{eq:21.1} y^2(a_j a\cdot \nabla \partial_j\Delta u)(x,-y)= (a_j a\cdot \nabla \partial_j(y^2\Delta u))(x,-y)+ (P_3(u))(x,-y)=\\ =(P_2(w))(x,y)+(P_3(u))(x,-y). \end{multline} Similarly, we can compute \begin{equation} \label{eq:21.2} 2y^2(\nabla a_j \cdot \nabla \partial_j\Delta u)(x,-y)= (P_2(w))(x,y)+(P_3(u))(x,-y), \end{equation} \begin{equation} \label{eq:21.3} y^2(\Delta q_2(u))(x,-y)= (P_2(w))(x,y)+(P_3(u))(x,-y). \end{equation} Therefore, \eqref{eq:20.2} follows {}from \eqref{eq:20.3}--\eqref{eq:21.3}. {}From \eqref{eq:16.6}, \eqref{eq:19.3}--\eqref{eq:20.2}, we have \begin{equation} \label{eq:21.4} F_1(x,y)=6y(a\cdot \nabla\Delta u_y)(x,-y) +(P_2(w))(x,y)+(P_3(u))(x,-y). \end{equation} We have that \begin{equation} \label{eq:21.5} 6y(a\cdot \nabla\Delta u_y)(x,-y)= 6y(a_1\Delta u_{xy})(x,-y)+6y(a_2\Delta u_{yy})(x,-y). \end{equation} By \eqref{eq:16.2}, we have \begin{equation} \label{eq:22.1} w_{yx}(x,y)=-u_{yx}(x,-y)+2yu_{yyx}(x,-y)-2y(\Delta u_{x})(x,-y) +y^2(\Delta u_{yx})(x,-y), \end{equation} so that \begin{equation} \label{eq:22.2} y(\Delta u_{yx})(x,-y)=\frac{1}{y}(w_{yx}(x,y)+u_{yx}(x,-y))+(P_3(u))(x,-y).
\end{equation} Again by \eqref{eq:16.2}, we have \begin{multline} \label{eq:22.3} w_{yy}(x,y)=\\ =3u_{yy}(x,-y)-2(\Delta u)(x,-y)-2y((u_{yyy})(x,-y)+2\Delta u_y(x,-y)) -y^2(\Delta u_{yy})(x,-y)=\\ =u_{yy}(x,-y)-2u_{xx}(x,-y)-y^2(\Delta u_{yy})(x,-y)+y(P_3(u))(x,-y), \end{multline} so that \begin{equation} \label{eq:22.4} y(\Delta u_{yy})(x,-y)=\frac{1}{y}(-w_{yy}(x,y)+u_{yy}(x,-y)-2u_{xx}(x,-y))+(P_3(u))(x,-y). \end{equation} Therefore \eqref{eq:19.1}--\eqref{eq:19.2} follow by \eqref{eq:21.4}, \eqref{eq:21.5}, \eqref{eq:22.2} and \eqref{eq:22.4}. The identity \eqref{eq:23.1} is an immediate consequence of \eqref{eq:22.1} and \eqref{eq:15.1b}. By \eqref{eq:15.1b}, we have \eqref{eq:23.3} and by \eqref{eq:22.3} and \eqref{eq:23.3}, \begin{equation*} -w_{yy}(x,0)+ u_{yy}(x,0) =2 u_{xx}(x,0) =0. \end{equation*} \end{proof} For the proof of the three spheres inequality at the boundary we shall use the following Hardy's inequality (\cite[\S 7.3, p. 175]{l:HLP34}); for a proof see also \cite{l:T67}. \begin{prop} [{\bf Hardy's inequality}] \label{prop:Hardy} Let $f$ be an absolutely continuous function defined in $[0,+\infty)$, such that $f(0)=0$. Then \begin{equation} \label{eq:24.1} \int_0^{+\infty} \frac{f^2(t)}{t^2}dt\leq 4 \int_0^{+\infty} (f'(t))^2dt. \end{equation} \end{prop} Another basic result we need to derive the three spheres inequality at the boundary is the following Carleman estimate, which was obtained in \cite[Theorem 6.8]{l:mrv07}. \begin{prop} [{\bf Carleman estimate}] \label{prop:Carleman} Let $\epsilon\in(0,1)$. Let us define \begin{equation} \label{eq:24.2} \rho(x,y) = \varphi\left(\sqrt{x^2+y^2}\right), \end{equation} where \begin{equation} \label{eq:24.3} \varphi(s) = s\exp\left(-\int_0^s \frac{dt}{t^{1-\epsilon}(1+t^\epsilon)}\right). \end{equation} Then there exist $\overline{\tau}>1$, $C>1$, $\widetilde{R}_0\leq 1$, only depending on $\epsilon$, such that \begin{equation} \label{eq:24.4} \sum_{k=0}^3 \tau^{6-2k}\int\rho^{2k+\epsilon-2-2\tau}|D^kU|^2dxdy\leq C \int\rho^{6-\epsilon-2\tau}(\Delta^2 U)^2dxdy, \end{equation} for every $\tau\geq \overline{\tau}$ and for every $U\in C^\infty_0(B_{\widetilde{R}_0}\setminus\{0\})$. \end{prop} \begin{rem} \label{rem:stima_rho} Let us notice that \begin{equation*} e^{-\frac{1}{\epsilon}}s\leq \varphi(s)\leq s, \end{equation*} \begin{equation} \label{eq:stima_rho} e^{-\frac{1}{\epsilon}}\sqrt{x^2+y^2}\leq \rho(x,y)\leq \sqrt{x^2+y^2}. \end{equation} \end{rem} We shall also need the following interpolation estimates. \begin{lem} \label{lem:Agmon} Let $0<\epsilon\leq 1$, $m\in \mathbb{N}$, $m\geq 2$, and let $j$ be an integer with $1\leq j\leq m-1$. There exists an absolute constant $C_{m,j}$ such that for every $v\in H^m(B_r^+)$, \begin{equation} \label{eq:3a.2} r^j\|D^jv\|_{L^2(B_r^+)}\leq C_{m,j}\left(\epsilon r^m\|D^mv\|_{L^2(B_r^+)} +\epsilon^{-\frac{j}{m-j}}\|v\|_{L^2(B_r^+)}\right). \end{equation} \end{lem} See for instance \cite[Theorem 3.3]{l:a65}. \begin{lem} \label{lem:intermezzo} Let $u\in H^6(B_1^+)$ be a solution to \eqref{eq:15.1a}--\eqref{eq:15.1b}, with $a$ and $q_2$ satisfying \eqref{eq:15.2_bis}. For every $r$, $0<r<1$, we have \begin{equation} \label{eq:12a.2} \|D^hu\|_{L^2(B_{\frac{r}{2}}^+)}\leq \frac{C}{r^h}\|u\|_{L^2(B_r^+)}, \quad \forall h=1, ..., 6, \end{equation} where $C$ is a constant only depending on $\alpha_0$, $\gamma_0$ and $\Lambda_0$. \end{lem} The proof of the above result is postponed to the Appendix.
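For the reader's convenience, here is a quick symbolic illustration of Hardy's inequality \eqref{eq:24.1} (ours, not part of the paper) on the one-parameter family $f(t)=te^{-\lambda t}$, for which both sides are available in closed form; the ratio equals $2$, below the classical sharp constant $4$:
\begin{verbatim}
import sympy as sp

t, lam = sp.symbols('t lambda', positive=True)
f = t*sp.exp(-lam*t)                                  # arbitrary test function, f(0) = 0

lhs = sp.integrate((f/t)**2, (t, 0, sp.oo))           # 1/(2*lambda)
rhs = 4*sp.integrate(sp.diff(f, t)**2, (t, 0, sp.oo)) # 4*(1/(4*lambda)) = 1/lambda
print(sp.simplify(lhs), sp.simplify(rhs), sp.simplify(rhs/lhs))   # ratio 2 <= 4
\end{verbatim}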
3,585
35,444
en
train
0.113.4
\section{Three spheres inequality at the boundary and proof of the main theorem} \label{sec:3sfere} \begin{theo} [{\bf Optimal three spheres inequality at the boundary - flat boundary case}] \label{theo:40.prop3} Let $u\in H^6(B_1^+)$ be a solution to \eqref{eq:15.1a}--\eqref{eq:15.1b}, with $a$ and $q_2$ satisfying \eqref{eq:15.2_bis}. Then there exist $\gamma\in (0,1)$, only depending on $M_1$, and an absolute constant $C>0$ such that, for every $r<R<\frac{R_0}{2}<R_0<\gamma$, \begin{equation} \label{eq:40.1} R^{2\epsilon}\int_{B_R^+}u^2\leq C(M_1^2+1)\left(\frac{R_0/2}{R}\right)^C\left(\int_{B_r^+}u^2\right)^{\widetilde{\theta}}\left(\int_{B_{R_0}^+}u^2\right)^{1-\widetilde{\theta}}, \end{equation} where \begin{equation} \label{eq:39.1} \widetilde{\theta} = \frac{\log\left(\frac{R_0/2}{R}\right)}{\log\left(\frac{R_0/2}{r/4}\right)}. \end{equation} \end{theo} \begin{proof} Let $\epsilon \in (0,1)$ be fixed, for instance $\epsilon=\frac{1}{2}$. However, it is convenient to maintain the parameter $\epsilon$ in the calculations. Along this proof, $C$ shall denote a positive constant which may change {}from line to line. Let $R_0\in (0,\widetilde{R}_0)$, to be chosen later, where $\widetilde{R}_0$ has been introduced in Proposition \ref{prop:Carleman}, and let \begin{equation} \label{eq:25.1} 0<r<R<\frac{R_0}{2}. \end{equation} Let $\eta\in C^\infty_0((0,1))$ be such that \begin{equation} \label{eq:25.2} 0\leq \eta\leq 1, \end{equation} \begin{equation} \label{eq:25.3} \eta=0, \quad \hbox{ in }\left(0,\frac{r}{4}\right)\cup \left(\frac{2}{3}R_0,1\right), \end{equation} \begin{equation} \label{eq:25.4} \eta=1, \quad \hbox{ in }\left[\frac{r}{2}, \frac{R_0}{2}\right], \end{equation} \begin{equation} \label{eq:25.6} \left|\frac{d^k\eta}{dt^k}(t)\right|\leq C r^{-k}, \quad \hbox{ in }\left(\frac{r}{4}, \frac{r}{2}\right),\quad\hbox{ for } 0\leq k\leq 4, \end{equation} \begin{equation} \label{eq:25.7} \left|\frac{d^k\eta}{dt^k}(t)\right|\leq C R_0^{-k}, \quad \hbox{ in }\left(\frac{R_0}{2}, \frac{2}{3}R_0\right),\quad\hbox{ for } 0\leq k\leq 4. \end{equation} Let us define \begin{equation} \label{eq:25.5} \xi(x,y)=\eta(\sqrt{x^2+y^2}). \end{equation} By a density argument, we may apply the Carleman estimate \eqref{eq:24.4} to $U=\xi \overline{u}$, where $\overline{u}$ has been defined in \eqref{eq:16.1}, obtaining \begin{multline} \label{eq:26.1} \sum_{k=0}^3 \tau^{6-2k}\int_{B_{R_0}^+}\rho^{2k+\epsilon-2-2\tau}|D^k(\xi u)|^2 +\sum_{k=0}^3 \tau^{6-2k}\int_{B_{R_0}^-}\rho^{2k+\epsilon-2-2\tau}|D^k(\xi w)|^2\leq \\ \leq C \int_{B_{R_0}^+}\rho^{6-\epsilon-2\tau}|\Delta^2(\xi u)|^2+ C\int_{B_{R_0}^-}\rho^{6-\epsilon-2\tau}|\Delta^2(\xi w)|^2, \end{multline} for $\tau\geq \overline{\tau}$ and $C$ an absolute constant. By \eqref{eq:25.2}--\eqref{eq:25.5} we have \begin{multline} \label{eq:26.2} |\Delta^2(\xi u)|\leq \xi|\Delta^2 u|+C\chi_{B_{r/2}^+\setminus B_{r/4}^+} \sum_{k=0}^3 r^{k-4}|D^k u|+ C\chi_{B_{2R_0/3}^+\setminus B_{R_0/2}^+}\sum_{k=0}^3 R_0^{k-4}|D^k u|, \end{multline} \begin{multline} \label{eq:26.3} |\Delta^2(\xi w)|\leq \xi|\Delta^2 w|+C\chi_{B_{r/2}^-\setminus B_{r/4}^-} \sum_{k=0}^3 r^{k-4}|D^k w|+ C\chi_{B_{2R_0/3}^-\setminus B_{R_0/2}^-}\sum_{k=0}^3 R_0^{k-4}|D^k w|.
\end{multline} Let us set \begin{multline} \label{eq:27.1} J_0 =\int_{B_{r/2}^+\setminus B_{r/4}^+}\rho^{6-\epsilon-2\tau} \sum_{k=0}^3 (r^{k-4}|D^k u|)^2+ \int_{B_{r/2}^-\setminus B_{r/4}^-}\rho^{6-\epsilon-2\tau} \sum_{k=0}^3 (r^{k-4}|D^k w|)^2, \end{multline} \begin{multline} \label{eq:27.2} J_1 =\int_{B_{2R_0/3}^+\setminus B_{R_0/2}^+}\rho^{6-\epsilon-2\tau} \sum_{k=0}^3 (R_0^{k-4}|D^k u|)^2+ \int_{B_{2R_0/3}^-\setminus B_{R_0/2}^-}\rho^{6-\epsilon-2\tau} \sum_{k=0}^3 (R_0^{k-4}|D^k w|)^2. \end{multline} By inserting \eqref{eq:26.2}, \eqref{eq:26.3} in \eqref{eq:26.1} we have \begin{multline} \label{eq:27.3} \sum_{k=0}^3 \tau^{6-2k}\int_{B_{R_0}^+}\rho^{2k+\epsilon-2-2\tau}|D^k(\xi u)|^2 +\sum_{k=0}^3 \tau^{6-2k}\int_{B_{R_0}^-}\rho^{2k+\epsilon-2-2\tau}|D^k(\xi w)|^2\leq \\ \leq C \int_{B_{R_0}^+}\rho^{6-\epsilon-2\tau}\xi^2|\Delta^2 u|^2+ C\int_{B_{R_0}^-}\rho^{6-\epsilon-2\tau}\xi^2|\Delta^2 w|^2+CJ_0+CJ_1, \end{multline} for $\tau\geq \overline{\tau}$, with $C$ an absolute constant. By \eqref{eq:15.1a} and \eqref{eq:15.2_bis} we can estimate the first term in the right hand side of \eqref{eq:27.3} as follows \begin{equation} \label{eq:28.1} \int_{B_{R_0}^+}\rho^{6-\epsilon-2\tau}\xi^2|\Delta^2 u|^2\leq CM_1^2\int_{B_{R_0}^+}\rho^{6-\epsilon-2\tau}\xi^2\sum_{k=0}^3|D^k u|^2. \end{equation} By \eqref{eq:17.1}, \eqref{eq:19.1} and by making the change of variables $(x,y)\rightarrow(x,-y)$ in the integrals involving the function $u(x,-y)$, we can estimate the second term in the right hand side of \eqref{eq:27.3} as follows \begin{multline} \label{eq:28.2} \int_{B_{R_0}^-}\rho^{6-\epsilon-2\tau}\xi^2|\Delta^2 w|^2\leq C\int_{B_{R_0}^-}\rho^{6-\epsilon-2\tau}\xi^2|H(x,y)|^2+\\ +CM_1^2\int_{B_{R_0}^-}\rho^{6-\epsilon-2\tau}\xi^2\sum_{k=0}^2|D^k w|^2+ CM_1^2\int_{B_{R_0}^+}\rho^{6-\epsilon-2\tau}\xi^2\sum_{k=0}^3|D^k u|^2. \end{multline} Now, let us split the integral in the right hand side of \eqref{eq:28.1} and the second and third integrals in the right hand side of \eqref{eq:28.2} over the domains of integration $B_{r/2}^\pm\setminus B_{r/4}^\pm$, $B_{R_0/2}^\pm\setminus B_{r/2}^\pm$, $B_{2R_0/3}^\pm\setminus B_{R_0/2}^\pm$ and then let us insert \eqref{eq:28.1}--\eqref{eq:28.2} so rewritten in \eqref{eq:27.3}, obtaining \begin{multline} \label{eq:28.4} \sum_{k=0}^3 \tau^{6-2k}\int_{B_{R_0}^+}\rho^{2k+\epsilon-2-2\tau}|D^k(\xi u)|^2 +\sum_{k=0}^3 \tau^{6-2k}\int_{B_{R_0}^-}\rho^{2k+\epsilon-2-2\tau}|D^k(\xi w)|^2\leq \\ \leq C\int_{B_{R_0}^-}\rho^{6-\epsilon-2\tau}\xi^2|H(x,y)|^2 +CM_1^2\int_{B_{R_0/2}^- \setminus B_{r/2}^-}\rho^{6-\epsilon-2\tau}\sum_{k=0}^2|D^k w|^2+\\+ CM_1^2\int_{B_{R_0/2}^+ \setminus B_{r/2}^+}\rho^{6-\epsilon-2\tau}\sum_{k=0}^3|D^k u|^2 +C(M_1^2+1)(J_0+J_1), \end{multline} for $\tau\geq \overline{\tau}$, with $C$ an absolute constant.
3,045
35,444
en
train
0.113.5
Next, by estimating {}from below the integrals in the left hand side of this last inequality, reducing their domain of integration to $B_{R_0/2}^\pm\setminus B_{r/2}^\pm$, where $\xi=1$, we have \begin{multline} \label{eq:29.1} \sum_{k=0}^3 \int_{B_{R_0/2}^+ \setminus B_{r/2}^+}\tau^{6-2k} (1-CM_1^2\rho^{8-2\epsilon-2k})\rho^{2k+\epsilon-2-2\tau}|D^k u|^2+\\ +\int_{B_{R_0/2}^- \setminus B_{r/2}^-}\rho^{4+\epsilon-2\tau}|D^3 w|^2 +\sum_{k=0}^2\int_{B_{R_0/2}^- \setminus B_{r/2}^-}\tau^{6-2k} (1-CM_1^2\rho^{8-2\epsilon-2k})\rho^{2k+\epsilon-2-2\tau}|D^k w|^2 \leq \\ \leq C\int_{B_{R_0}^-}\rho^{6-\epsilon-2\tau}\xi^2|H(x,y)|^2 +C(M_1^2+1)(J_0+J_1), \end{multline} for $\tau\geq \overline{\tau}$, with $C$ an absolute constant. Recalling \eqref{eq:stima_rho}, we have that, for $k=0,1,2,3$ and for $R_0\leq R_1:=\min\{\widetilde{R}_0,2(2CM_1^2)^{-\frac{1}{2(1-\epsilon)}}\}$, \begin{equation} \label{eq:30.1} 1-CM_1^2\rho^{8-2\epsilon-2k}\geq \frac{1}{2}, \quad \hbox{ in }B_{R_0/2}^\pm, \end{equation} so that, inserting \eqref{eq:30.1} in \eqref{eq:29.1}, we have \begin{multline} \label{eq:30.3} \sum_{k=0}^3 \tau^{6-2k}\int_{B_{R_0/2}^+ \setminus B_{r/2}^+} \rho^{2k+\epsilon-2-2\tau}|D^k u|^2 +\sum_{k=0}^3 \tau^{6-2k}\int_{B_{R_0/2}^- \setminus B_{r/2}^-} \rho^{2k+\epsilon-2-2\tau}|D^k w|^2 \leq \\ \leq C\int_{B_{R_0}^-}\rho^{6-\epsilon-2\tau}\xi^2|H(x,y)|^2 +C(M_1^2+1)(J_0+J_1), \end{multline} for $\tau\geq \overline{\tau}$, with $C$ an absolute constant. By \eqref{eq:19.2} and \eqref{eq:15.2_bis}, we have that \begin{equation} \label{eq:30.4} \int_{B_{R_0}^-}\rho^{6-\epsilon-2\tau}\xi^2|H(x,y)|^2\leq CM_1^2(I_1+I_2+I_3), \end{equation} with \begin{equation} \label{eq:31.0.1} I_1=\int_{-R_0}^{R_0}\left(\int_{-\infty}^0\left|y^{-1}(w_{yy}(x,y)-u_{yy}(x,-y))\rho^\frac{6-\epsilon-2\tau}{2}\xi\right|^2dy\right)dx. \end{equation} \begin{equation} \label{eq:31.0.2} I_2=\int_{-R_0}^{R_0}\left(\int_{-\infty}^0\left|y^{-1}(w_{yx}(x,y)+u_{yx}(x,-y))\rho^\frac{6-\epsilon-2\tau}{2}\xi\right|^2dy\right)dx. \end{equation} \begin{equation} \label{eq:31.0.4} I_3=\int_{-R_0}^{R_0}\left(\int_{-\infty}^0\left|y^{-1} u_{xx}(x,-y)\rho^\frac{6-\epsilon-2\tau}{2}\xi\right|^2dy\right)dx. \end{equation} Now, let us see that, for $j=1,2,3$, \begin{multline} \label{eq:31.1} I_j\leq C\int_{B_{R_0}^-}\rho^{6-\epsilon-2\tau}\xi^2|D^3 w|^2 +C\tau^2\int_{B_{R_0}^-}\rho^{4-\epsilon-2\tau}\xi^2|D^2 w|^2+\\ +C\int_{B_{R_0}^+}\rho^{6-\epsilon-2\tau}\xi^2|D^3 u|^2 +C\tau^2\int_{B_{R_0}^+}\rho^{4-\epsilon-2\tau}\xi^2|D^2 u|^2 +C(J_0+J_1), \end{multline} for $\tau\geq \overline{\tau}$, with $C$ an absolute constant. Let us verify \eqref{eq:31.1} for $j=1$, the other cases following by using similar arguments. By \eqref{eq:23.2}, we can apply Hardy's inequality \eqref{eq:24.1}, obtaining \begin{multline} \label{eq:32.2} \int_{-\infty}^0\left|y^{-1}(w_{yy}(x,y)-u_{yy}(x,-y))\rho^{\frac{6-\epsilon-2\tau}{2}}\xi\right|^2dy\leq\\ \leq 4\int_{-\infty}^0\left|\partial_y\left[(w_{yy}(x,y)-u_{yy}(x,-y))\rho^{\frac{6-\epsilon-2\tau}{2}}\xi\right]\right|^2dy\leq\\ \leq 16 \int_{-\infty}^0\left(|w_{yyy}(x,y)|^2 +|u_{yyy}(x,-y)|^2\right)\rho^{6-\epsilon-2\tau}\xi^2dy+\\ 16 \int_{-\infty}^0\left(|w_{yy}(x,y)|^2 +|u_{yy}(x,-y)|^2\right)\left|\partial_y\left(\rho^{\frac{6-\epsilon-2\tau}{2}}\xi\right)\right|^2dy.
\end{multline} Noticing that \begin{equation} \label{eq:32.1} |\rho_y|\leq\left|\frac{y}{\sqrt{x^2+y^2}}\varphi'(\sqrt{x^2+y^2})\right|\leq 1, \end{equation} we can compute \begin{multline} \label{eq:32.3} \left|\partial_y\left(\rho^{\frac{6-\epsilon-2\tau}{2}}(x,y)\xi(x,y)\right)\right|^2\leq 2|\xi_y|^2\rho^{6-\epsilon-2\tau}+2\left|\left(\frac{6-\epsilon-2\tau}{2}\right)\xi \rho_y\rho^{\frac{4-\epsilon-2\tau}{2}}\right|^2\leq\\ \leq 2\xi_y^2\rho^{6-\epsilon-2\tau}+2\tau^2\rho^{4-\epsilon-2\tau}\xi^2, \end{multline} for $\tau\geq \widetilde{\tau}:= \max\{\overline{\tau},3\}$, with $C$ an absolute constant. By inserting \eqref{eq:32.3} in \eqref{eq:32.2}, by integrating over $(-R_0,R_0)$ and by making the change of variables $(x,y)\rightarrow(x,-y)$ in the integrals involving the function $u(x,-y)$, we derive \begin{multline} \label{eq:33.0} I_1\leq C\int_{B_{R_0}^-}\xi^2\rho^{6-\epsilon-2\tau}|w_{yyy}|^2+ C\int_{B_{R_0}^+}\xi^2\rho^{6-\epsilon-2\tau}|u_{yyy}|^2+\\ +C\int_{B_{R_0}^-}\xi_y^2\rho^{6-\epsilon-2\tau}|w_{yy}|^2 +C\int_{B_{R_0}^+}\xi_y^2\rho^{6-\epsilon-2\tau}|u_{yy}|^2+\\ +C\tau^2\int_{B_{R_0}^-}\xi^2\rho^{4-\epsilon-2\tau}|w_{yy}|^2 +C\tau^2\int_{B_{R_0}^+}\xi^2\rho^{4-\epsilon-2\tau}|u_{yy}|^2. \end{multline} Recalling \eqref{eq:25.2}--\eqref{eq:25.5}, we find \eqref{eq:31.1} for $j=1$. Next, by \eqref{eq:30.3}, \eqref{eq:30.4} and \eqref{eq:31.1}, we have \begin{multline} \label{eq:33.1} \sum_{k=0}^3 \tau^{6-2k}\int_{B_{R_0/2}^+ \setminus B_{r/2}^+} \rho^{2k+\epsilon-2-2\tau}|D^k u|^2 +\sum_{k=0}^3 \tau^{6-2k}\int_{B_{R_0/2}^- \setminus B_{r/2}^-} \rho^{2k+\epsilon-2-2\tau}|D^k w|^2 \leq \\ \leq CM_1^2\int_{B_{R_0}^+}\rho^{6-\epsilon-2\tau}\xi^2|D^3u|^2+ CM_1^2\int_{B_{R_0}^-}\rho^{6-\epsilon-2\tau}\xi^2|D^3w|^2+\\ +CM_1^2\tau^2\int_{B_{R_0}^+}\rho^{4-\epsilon-2\tau}\xi^2|D^2u|^2 +CM_1^2\tau^2\int_{B_{R_0}^-}\rho^{4-\epsilon-2\tau}\xi^2|D^2w|^2 +C(M_1^2+1)(J_0+J_1), \end{multline} for $\tau\geq \widetilde{\tau}$, with $C$ an absolute constant. Now, let us split the first four integrals in the right hand side of \eqref{eq:33.1} over the domains of integration $B_{r/2}^\pm\setminus B_{r/4}^\pm$, $B_{2R_0/3}^\pm\setminus B_{R_0/2}^\pm$ and $B_{R_0/2}^\pm\setminus B_{r/2}^\pm$ and move on the left hand side the integrals over $B_{R_0/2}^\pm\setminus B_{r/2}^\pm$. Recalling \eqref{eq:stima_rho}, we obtain \begin{multline} \label{eq:34.1} \sum_{k=2}^3 \int_{B_{R_0/2}^+ \setminus B_{r/2}^+} \tau^{6-2k}(1-CM_1^2\rho^{2-2\epsilon})\rho^{2k+\epsilon-2-2\tau}|D^k u|^2+\\ +\sum_{k=2}^3 \int_{B_{R_0/2}^- \setminus B_{r/2}^-} \tau^{6-2k}(1-CM_1^2\rho^{2-2\epsilon})\rho^{2k+\epsilon-2-2\tau}|D^k w|^2+\\ +\sum_{k=0}^1 \tau^{6-2k}\int_{B_{R_0/2}^+ \setminus B_{r/2}^+} \rho^{2k+\epsilon-2-2\tau}|D^k u|^2 +\sum_{k=0}^1 \tau^{6-2k}\int_{B_{R_0/2}^- \setminus B_{r/2}^-} \rho^{2k+\epsilon-2-2\tau}|D^k w|^2 \leq \\ \leq C(\tau^2M_1^2+1)(J_0+J_1), \end{multline} for $\tau\geq \widetilde{\tau}$, with $C$ an absolute constant. Therefore, for $R_0\leq R_2=\min\{R_1,2(2CM_1^2)^{-\frac{1}{2(1-\epsilon)}}\}$, it follows that \begin{multline} \label{eq:35.1} \sum_{k=0}^3 \tau^{6-2k}\int_{B_{R_0/2}^+ \setminus B_{r/2}^+} \rho^{2k+\epsilon-2-2\tau}|D^k u|^2+ \sum_{k=0}^3 \tau^{6-2k}\int_{B_{R_0/2}^- \setminus B_{r/2}^-} \rho^{2k+\epsilon-2-2\tau}|D^k w|^2\leq\\ \leq C(\tau^2M_1^2+1)(J_0+J_1), \end{multline} for $\tau\geq \widetilde{\tau}$, with $C$ an absolute constant.
3,566
35,444
en
train
0.113.6
Let us estimate $J_0$ and $J_1$. {}From \eqref{eq:27.1} and recalling \eqref{eq:stima_rho}, we have \begin{multline} \label{eq:36.1} J_0\leq\left(\frac{r}{4}\right)^{6-\epsilon-2\tau}\left\{ \int_{B^+_{r/2}}\sum_{k=0}^3(r^{k-4}|D^k u|)^2+ \int_{B^-_{r/2}}\sum_{k=0}^3(r^{k-4}|D^k w|)^2 \right\}. \end{multline} By \eqref{eq:16.2}, we have that, for $(x,y)\in B^-_{r/2}$ and $k=0,1,2,3$, \begin{equation} \label{eq:36.1bis} |D^k w|\leq C\sum_{h=k}^{2+k}r^{h-k}|(D^h u)(x,-y)|.
\end{equation} By \eqref{eq:36.1}--\eqref{eq:36.1bis}, by making the change of variables $(x,y)\rightarrow(x,-y)$ in the integrals involving the function $u(x,-y)$ and by using Lemma \ref{lem:intermezzo}, we get \begin{multline} \label{eq:36.2} J_0\leq C\left(\frac{r}{4}\right)^{6-\epsilon-2\tau} \sum_{k=0}^5 r^{2k-8}\int_{B^+_{r/2}}|D^k u|^2 \leq C\left(\frac{r}{4}\right)^{-2-\epsilon-2\tau}\int_{B_r^+}|u|^2, \end{multline} where $C$ is an absolute constant. Analogously, we obtain \begin{equation} \label{eq:37.1} J_1 \leq C\left(\frac{R_0}{2}\right)^{-2-\epsilon-2\tau}\int_{B_{R_0}^+}|u|^2. \end{equation} Let $R$ such that $r<R<\frac{R_0}{2}$. By \eqref{eq:35.1}, \eqref{eq:36.2}, \eqref{eq:37.1}, it follows that \begin{multline} \label{eq:37.1bis} \tau^{6}R^{\epsilon-2-2\tau}\int_{B_{R}^+ \setminus B_{r/2}^+} |u|^2 \leq\sum_{k=0}^3\tau^{6-2k}\int_{B_{R_0/2}^+ \setminus B_{r/2}^+} \rho^{2k+\epsilon-2-2\tau}|D^ku|^2\leq\\ \leq C\tau^2 (M_1^2+1)\left[\left(\frac{r}{4}\right)^{-2-\epsilon-2\tau}\int_{B_r^+}|u|^2+ \left(\frac{R_0}{2}\right)^{-2-\epsilon-2\tau}\int_{B_{R_0}^+}|u|^2 \right], \end{multline} for $\tau\geq \widetilde{\tau}$, with $C$ an absolute constant. Since $\tau>1$, we may rewrite the above inequality as follows \begin{multline} \label{eq:37.2} R^{2\epsilon}\int_{B_{R}^+ \setminus B_{r/2}^+} |u|^2\leq C(M_1^2+1)\left[\left(\frac{r/4}{R}\right)^{-2-\epsilon-2\tau}\int_{B_r^+}|u|^2+ \left(\frac{R_0/2}{R}\right)^{-2-\epsilon-2\tau}\int_{B_{R_0}^+}|u|^2 \right], \end{multline} for $\tau\geq \widetilde{\tau}$, with $C$ an absolute constant. By adding $R^{2\epsilon}\int_{B_{r/2}^+}|u|^2$ to both members of \eqref{eq:37.2}, and setting, for $s>0$, \begin{equation*} \sigma_s=\int_{B_{s}^+}|u|^2, \end{equation*} we obtain \begin{equation} \label{eq:38.1} R^{2\epsilon}\sigma_R\leq C(M_1^2+1) \left[\left(\frac{r/4}{R}\right)^{-2-\epsilon-2\tau}\sigma_r+ \left(\frac{R_0/2}{R}\right)^{-2-\epsilon-2\tau}\sigma_{R_0} \right], \end{equation} for $\tau\geq \widetilde{\tau}$, with $C$ an absolute constant. Let $\tau^*$ be such that \begin{equation} \label{eq:38.2} \left(\frac{r/4}{R}\right)^{-2-\epsilon-2\tau^*}\sigma_r= \left(\frac{R_0/2}{R}\right)^{-2-\epsilon-2\tau^*}\sigma_{R_0}, \end{equation} that is \begin{equation} \label{eq:38.3} 2+\epsilon+2\tau^*=\frac{\log (\sigma_{R_0}/\sigma_r)}{\log \left(\frac{R_0/2}{r/4}\right)}. \end{equation} Let us distinguish two cases: \begin{enumerate}[i)] \item $\tau^*\geq \widetilde{\tau}$, \item $\tau^*< \widetilde{\tau}$, \end{enumerate} and set \begin{equation} \label{eq:39.1bis} \widetilde{\theta}=\frac{\log \left(\frac{R_0/2}{R}\right)}{\log \left(\frac{R_0/2}{r/4}\right)}. \end{equation} In case i), it is possible to choose $\tau = \tau^*$ in \eqref{eq:38.1}, obtaining, by \eqref{eq:38.2}--\eqref{eq:39.1bis}, \begin{equation} \label{eq:39.2} R^{2\epsilon}\sigma_R\leq C(M_1^2+1)\sigma_r^{\widetilde{\theta}}\sigma_{R_0}^{1-\widetilde{\theta}}. 
\end{equation} In case ii), since $\tau^*< \widetilde{\tau}$, {}from \eqref{eq:38.3}, we have \begin{equation*} \frac{\log (\sigma_{R_0}/\sigma_r)}{\log \left(\frac{R_0/2}{r/4}\right)}<2+\epsilon+2\widetilde{\tau}, \end{equation*} so that, multiplying both members by $\log \left(\frac{R_0/2}{R}\right)$, it follows that \begin{equation*} \widetilde{\theta}\log\left(\frac{\sigma_{R_0}}{\sigma_r}\right)<\log\left(\frac{R_0/2}{R}\right) ^{2+\epsilon+2\widetilde{\tau}}, \end{equation*} and hence \begin{equation} \label{eq:39.3} \sigma_{R_0}^{\widetilde{\theta}}\leq \left(\frac{R_0/2}{R}\right)^{2+\epsilon+2\widetilde{\tau}}\sigma_r^{\widetilde{\theta}}. \end{equation} Then it follows trivially that \begin{equation} \label{eq:39.4} R^{2\epsilon}\sigma_R\leq R^{2\epsilon}\sigma_{R_0}\leq R^{2\epsilon}\left(\frac{R_0/2}{R}\right)^{2+\epsilon+2\widetilde{\tau}}\sigma_r^{\widetilde{\theta}}\sigma_{R_0}^{1-\widetilde{\theta}}. \end{equation} Finally, by \eqref{eq:39.2} and \eqref{eq:39.4}, we obtain \eqref{eq:40.1}. \end{proof} \begin{proof}[Proof of Theorem \ref{theo:40.teo}] Let $r_1<r_2<\frac{r_0R_0}{2K}<r_0$, where $R_0$ is chosen such that $R_0<\gamma<1$, where $\gamma$ has been introduced in Theorem \ref{theo:40.prop3} and $K>1$ is the constant introduced in Proposition \ref{prop:conf_map}. Let us define \begin{equation*} r=\frac{2r_1}{r_0}, \qquad R= \frac{Kr_2}{r_0}. \end{equation*} Recalling that $K>8$, it follows immediately that $r<R<\frac{R_0}{2}$. Therefore, we can apply \eqref{eq:40.1} with $\epsilon=\frac{1}{2}$ to $u=v\circ\Phi$, obtaining \begin{equation} \label{eq:3sfere_u} \int_{B_R^+}u^2\leq \frac{C}{R^C}\left(\int_{B_r^+}u^2\right)^{\widetilde{\theta}}\left(\int_{B_{R_0}^+}u^2\right)^{1-\widetilde{\theta}}, \end{equation} with \begin{equation*} \widetilde{\theta} = \frac{\log\left(\frac{R_0r_0}{2Kr_2}\right)}{\log\left(\frac{R_0r_0}{r_1}\right)}, \end{equation*} and $C>1$ only depending on $M_0$, $\alpha$, $\alpha_0$, $\gamma_0$ and $\Lambda_0$.
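Let us also record, for the reader's convenience, the short algebra behind case i) above; it is only a rewriting of the definitions \eqref{eq:38.2}, \eqref{eq:38.3} and \eqref{eq:39.1bis}. Since $1-\widetilde{\theta}=\log\left(\frac{R}{r/4}\right)/\log\left(\frac{R_0/2}{r/4}\right)$, {}from \eqref{eq:38.3} we get \begin{equation*} \left(\frac{r/4}{R}\right)^{-2-\epsilon-2\tau^*}=\exp\left[\log\left(\frac{R}{r/4}\right)\frac{\log(\sigma_{R_0}/\sigma_r)}{\log\left(\frac{R_0/2}{r/4}\right)}\right]=\left(\frac{\sigma_{R_0}}{\sigma_r}\right)^{1-\widetilde{\theta}}, \end{equation*} so that, choosing $\tau=\tau^*$ in \eqref{eq:38.1} and using \eqref{eq:38.2} to identify the two terms on the right hand side, \begin{equation*} R^{2\epsilon}\sigma_R\leq 2C(M_1^2+1)\left(\frac{\sigma_{R_0}}{\sigma_r}\right)^{1-\widetilde{\theta}}\sigma_r= 2C(M_1^2+1)\sigma_r^{\widetilde{\theta}}\sigma_{R_0}^{1-\widetilde{\theta}}, \end{equation*} which is \eqref{eq:39.2} up to renaming the absolute constant.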
4,001
35,444
en
train
0.113.7
Let $\tau^*$ be such that \begin{equation} \label{eq:38.2} \left(\frac{r/4}{R}\right)^{-2-\epsilon-2\tau^*}\sigma_r= \left(\frac{R_0/2}{R}\right)^{-2-\epsilon-2\tau^*}\sigma_{R_0}, \end{equation} that is \begin{equation} \label{eq:38.3} 2+\epsilon+2\tau^*=\frac{\log (\sigma_{R_0}/\sigma_r)}{\log \left(\frac{R_0/2}{r/4}\right)}. \end{equation} Let us distinguish two cases: \begin{enumerate}[i)] \item $\tau^*\geq \widetilde{\tau}$, \item $\tau^*< \widetilde{\tau}$, \end{enumerate} and set \begin{equation} \label{eq:39.1bis} \widetilde{\theta}=\frac{\log \left(\frac{R_0/2}{R}\right)}{\log \left(\frac{R_0/2}{r/4}\right)}. \end{equation} In case i), it is possible to choose $\tau = \tau^*$ in \eqref{eq:38.1}, obtaining, by \eqref{eq:38.2}--\eqref{eq:39.1bis}, \begin{equation} \label{eq:39.2} R^{2\epsilon}\sigma_R\leq C(M_1^2+1)\sigma_r^{\widetilde{\theta}}\sigma_{R_0}^{1-\widetilde{\theta}}. \end{equation} In case ii), since $\tau^*< \widetilde{\tau}$, {}from \eqref{eq:38.3}, we have \begin{equation*} \frac{\log (\sigma_{R_0}/\sigma_r)}{\log \left(\frac{R_0/2}{r/4}\right)}<2+\epsilon+2\widetilde{\tau}, \end{equation*} so that, multiplying both members by $\log \left(\frac{R_0/2}{R}\right)$, it follows that \begin{equation*} \widetilde{\theta}\log\left(\frac{\sigma_{R_0}}{\sigma_r}\right)<\log\left(\frac{R_0/2}{R}\right) ^{2+\epsilon+2\widetilde{\tau}}, \end{equation*} and hence \begin{equation} \label{eq:39.3} \sigma_{R_0}^{\widetilde{\theta}}\leq \left(\frac{R_0/2}{R}\right)^{2+\epsilon+2\widetilde{\tau}}\sigma_r^{\widetilde{\theta}}. \end{equation} Then it follows trivially that \begin{equation} \label{eq:39.4} R^{2\epsilon}\sigma_R\leq R^{2\epsilon}\sigma_{R_0}\leq R^{2\epsilon}\left(\frac{R_0/2}{R}\right)^{2+\epsilon+2\widetilde{\tau}}\sigma_r^{\widetilde{\theta}}\sigma_{R_0}^{1-\widetilde{\theta}}. \end{equation} Finally, by \eqref{eq:39.2} and \eqref{eq:39.4}, we obtain \eqref{eq:40.1}. \end{proof} \begin{proof}[Proof of Theorem \ref{theo:40.teo}] Let $r_1<r_2<\frac{r_0R_0}{2K}<r_0$, where $R_0$ is chosen such that $R_0<\gamma<1$, where $\gamma$ has been introduced in Theorem \ref{theo:40.prop3} and $K>1$ is the constant introduced in Proposition \ref{prop:conf_map}. Let us define \begin{equation*} r=\frac{2r_1}{r_0}, \qquad R= \frac{Kr_2}{r_0}. \end{equation*} Recalling that $K>8$, it follows immediately that $r<R<\frac{R_0}{2}$. Therefore, we can apply \eqref{eq:40.1} with $\epsilon=\frac{1}{2}$ to $u=v\circ\Phi$, obtaining \begin{equation} \label{eq:3sfere_u} \int_{B_R^+}u^2\leq \frac{C}{R^C}\left(\int_{B_r^+}u^2\right)^{\widetilde{\theta}}\left(\int_{B_{R_0}^+}u^2\right)^{1-\widetilde{\theta}}, \end{equation} with \begin{equation*} \widetilde{\theta} = \frac{\log\left(\frac{R_0r_0}{2Kr_2}\right)}{\log\left(\frac{R_0r_0}{r_1}\right)}, \end{equation*} and $C>1$ only depending on $M_0$, $\alpha$, $\alpha_0$, $\gamma_0$ and $\Lambda_0$. {}From \eqref{eq:gradPhiInv}, \eqref{eq:stimaPhi}, \eqref{eq:stimaPhiInv} and noticing that \begin{equation*} \widetilde{\theta} \geq \theta:=\frac{\log\left(\frac{R_0r_0}{2Kr_2}\right)}{\log\left(\frac{r_0}{r_1}\right)}, \end{equation*} we obtain \eqref{eq:41.1}--\eqref{eq:41.2}. \end{proof}
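Let us also spell out, for completeness, the two elementary facts used in the last step; they depend only on the definitions of $r$, $R$ and $\widetilde{\theta}$ given above. First, with $r=\frac{2r_1}{r_0}$ and $R=\frac{Kr_2}{r_0}$, \begin{equation*} \frac{R_0/2}{R}=\frac{R_0r_0}{2Kr_2}, \qquad \frac{R_0/2}{r/4}=\frac{2R_0}{r}=\frac{R_0r_0}{r_1}, \end{equation*} which is the expression of $\widetilde{\theta}$ appearing after \eqref{eq:3sfere_u}. Second, both logarithms are positive, since $r_1<r_2<\frac{r_0R_0}{2K}$, and, $R_0$ being smaller than $1$, \begin{equation*} \log\left(\frac{R_0r_0}{r_1}\right)\leq\log\left(\frac{r_0}{r_1}\right), \qquad \hbox{ so that } \qquad \widetilde{\theta}=\frac{\log\left(\frac{R_0r_0}{2Kr_2}\right)}{\log\left(\frac{R_0r_0}{r_1}\right)}\geq \frac{\log\left(\frac{R_0r_0}{2Kr_2}\right)}{\log\left(\frac{r_0}{r_1}\right)}=\theta. \end{equation*}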
1,362
35,444
en
train
0.113.8
\section{Appendix} \label{sec: Appendix} \begin{proof}[Proof of Proposition \ref{prop:conf_map}] Let us construct a suitable extension of $g$ to $[-2r_0,2r_0]$. Let $P_6^\pm$ be the Taylor polynomial of order $6$ with center $\pm r_0$ \begin{equation*} P_6^\pm(x_1)=\sum_{j=0}^6 \frac{g^{(j)}(\pm r_0)}{j!}(x_1-(\pm r_0))^j, \end{equation*} and let $\chi\in C^\infty_0(\mathbb{R})$ be a function satisfying \begin{equation*} 0\leq\chi\leq 1, \end{equation*} \begin{equation*} \chi=1, \hbox{ for } |x_1|\leq r_0, \end{equation*} \begin{equation*} \chi=0, \hbox{ for } \frac{3}{2}r_0\leq |x_1|\leq 2r_0, \end{equation*} \begin{equation*} |\chi^{(j)}(x_1)|\leq \frac{C}{r_0^j}, \hbox{ for } r_0\leq |x_1|\leq \frac{3}{2}r_0, \forall j\in \mathbb{N}. \end{equation*} Let us define \begin{equation*} \widetilde{g}=\left\{ \begin{array}{cc} g, & \hbox{ for } x_1\in [-r_0,r_0],\\ \chi P_6^+, & \hbox{ for } x_1\in [r_0, 2r_0],\\ \chi P_6^-, & \hbox{ for } x_1\in [-2r_0, -r_0]. \end{array} \right. \end{equation*} It is a straightforward computation to verify that \begin{equation} \label{eq:3.2} \widetilde g(x_1)=0, \hbox{ for } \frac{3}{2}r_0\leq |x_1|\leq 2r_0, \end{equation} \begin{equation} \label{eq:3.2bis} |\widetilde g(x_1)|\leq 2M_0r_0, \hbox{ for } |x_1|\leq 2r_0, \end{equation} so that the graph of $\widetilde g$ is contained in $R_{2r_0,2M_0r_0}$ and \begin{equation} \label{eq:3.3} \|\widetilde g \|_{C^{6,\alpha}([-2r_0,2r_0])}\leq CM_0r_0, \end{equation} where $C$ is an absolute constant. Let \begin{equation} \label{eq:Omega_r_0_tilde} \widetilde{\Omega}_{r_0} = \left\{ x\in R_{2r_0,2M_0r_0}\ |\ x_2>\widetilde{g}(x_1)\right\}, \end{equation} and let $k\in H^1(\widetilde{\Omega}_{r_0} )$ be the solution to \begin{equation} \label{eq:3.4} \left\{ \begin{array}{ll} \Delta k =0, & \hbox{in } \widetilde{\Omega}_{r_0},\\ & \\ k_{x_1}(2r_0,x_2) =k_{x_1}(-2r_0,x_2)=0, & \hbox{for } 0\leq x_2\leq 2M_0r_0,\\ & \\ k(x_1,2M_0r_0) =1, & \hbox{for } -2r_0\leq x_1\leq 2r_0,\\ & \\ k(x_1,\widetilde{g}(x_1)) =0, &\hbox{for } -2r_0\leq x_1\leq 2r_0.\\ \end{array}\right. \end{equation} Let us notice that $k\in C^{6,\alpha}\left(\overline{\widetilde{\Omega}}_{r_0} \right)$. Indeed, this regularity is standard away {}from any neighborhoods of the four points $(\pm2r_0,0)$, $(\pm 2r_0, 2M_0r_0)$ and, by making an even reflection of $k$ w.r.t. the lines $x_1 = \pm 2r_0$ in a neighborhood in $\widetilde{\Omega}_{r_0}$ of each of these points, we can apply Schauder estimates and again obtain the stated regularity. By the maximum principle, $\min_{\overline{\widetilde{\Omega}}_{r_0}}k = \min_{\partial \widetilde{\Omega}_{r_0}}k$. In view of the boundary conditions, this minimum value cannot be achieved in the closed segment $\{x_2=2M_0r_0, |x_1|\leq 2r_0\}$. It cannot be achieved in the segments $\{\pm 2r_0\}\times (0,2M_0r_0)$ since the boundary conditions over these segments contradict Hopf Lemma (see \cite{l:GT}). Therefore the minimum is attained on the boundary portion $\{(x_1, \widetilde{g}(x_1)) \ | \ x_1\in [-2r_0,2r_0]\}$, so that $\min_{\overline{\widetilde{\Omega}}_{r_0}}k = 0$. Similarly, $\max_{\overline{\widetilde{\Omega}}_{r_0}}k = 1$ and, moreover, by the strong maximum and minimum principles, $0<k(x_1,x_2)<1$, for every $(x_1,x_2)\in \widetilde{\Omega}_{r_0}$.
Denoting by $\mathcal R$ the reflection around the line $x_1=2r_0$, let \begin{equation*} \Omega^*_{r_0}=\widetilde{\Omega}_{r_0}\cup \mathcal R(\widetilde{\Omega}_{r_0})\cup(\{2r_0\}\times (0,2M_0r_0)), \end{equation*} and let $k^*$ be the extension of $k$ to $\overline{\Omega}^*_{r_0}$ obtained by making an even reflection of $k$ around the line $x_1=2r_0$. Next, let us extend $k^*$ by periodicity w.r.t. the $x_1$ variable to the unbounded strip \begin{equation*} S_{r_0} = \cup_{l\in \mathbb{Z}} (\Omega^*_{r_0} + 8r_0le_1). \end{equation*} By Schauder estimates and by the periodicity of $k^*$, it follows that \begin{equation} \label{eq:5.1} \|\nabla k^*\|_{L^\infty(S_{r_0})}\leq \frac{C_0}{r_0}, \end{equation} with $C_0$ only depending on $M_0$ and $\alpha$. Therefore there exists $\delta_0= \delta_0(M_0, \alpha)$, $0<\delta_0\leq \frac{1}{4}$, such that \begin{equation} \label{eq:5.2} k^*(x_1,x_2)\geq \frac{1}{2} \quad \forall (x_1,x_2)\in \mathbb{R}\times[(1-\delta_0)2M_0r_0,2M_0r_0]. \end{equation} Since $k^*>0$ in $S_{r_0}$, by applying Harnack inequality and Hopf Lemma (see \cite{l:GT}), we have \begin{equation*} \frac{\partial k^*}{\partial x_2}\geq \frac{c_0}{r_0}, \quad \hbox{ on } \partial S_{r_0}, \end{equation*} with $c_0$ only depending on $M_0$ and $\alpha$. Therefore, the function $k^*$ satisfies \begin{equation*} \left\{ \begin{array}{ll} \Delta \left(\frac{\partial k^*}{\partial x_2}\right) =0, & \hbox{in } S_{r_0},\\ & \\ \frac{\partial k^*}{\partial x_2}\geq \frac{c_0}{r_0}, & \hbox{on } \partial S_{r_0}.\\ \end{array}\right. \end{equation*} Moreover, $\frac{\partial k^*}{\partial x_2}$, being continuous and periodic w.r.t. the variable $x_1$, attains its minimum in $\overline{S}_{r_0}$. Since this minimum value cannot be attained in $S_{r_0}$, it follows that \begin{equation} \label{eq:6.1} \frac{\partial k^*}{\partial x_2}\geq \frac{c_0}{r_0}, \quad \hbox{ in } \overline{S}_{r_0}. \end{equation} Now, let $h$ be a harmonic conjugate of $-k$ in $\widetilde{\Omega}_{r_0}$, that is \begin{equation} \label{eq:6.2} \left\{ \begin{array}{ll} h_{x_1} = k_{x_2}, &\\ & \\ h_{x_2} = -k_{x_1}. &\\ \end{array}\right. \end{equation} The map $\Psi : = h+ik$ is a conformal map in $\widetilde{\Omega}_{r_0}$, \begin{equation} \label{eq:DPsi} D\Psi =\left( \begin{array}{ll} k_{x_2} &-k_{x_1}\\ & \\ k_{x_1} &k_{x_2}\\ \end{array}\right) \end{equation} so that $|D\Psi| = \sqrt 2|\nabla k|$ and, by \eqref{eq:5.1} and \eqref{eq:6.1}, \begin{equation} \label{eq:6.3} \sqrt 2\frac{c_0}{r_0}\leq |D\Psi|\leq \sqrt 2\frac{C_0}{r_0}, \quad \hbox{in } \widetilde{\Omega}_{r_0}. \end{equation} Let us analyze the behavior of $\Psi$ on the boundary of $\widetilde{\Omega}_{r_0}$ \begin{equation*} \partial{\widetilde{\Omega}_{r_0}} = \sigma_1\cup \sigma_2\cup \sigma_3\cup \sigma_4, \end{equation*} where \begin{equation*} \sigma_1 = \{(x_1, \widetilde{g}(x_1))\ | \ x_1\in [-2r_0,2r_0]\},\qquad \sigma_2 = \{(2r_0, x_2)\ | \ x_2\in [0,2M_0r_0]\}, \end{equation*} \begin{equation*} \sigma_3 = \{(x_1,2M_0r_0)\ | \ x_1\in [-2r_0,2r_0]\}, \qquad \sigma_4 = \{(-2r_0, x_2)\ | \ x_2\in [0,2M_0r_0]\}. \end{equation*} On $\sigma_1$, we have \begin{equation*} \Psi(x_1, \widetilde{g}(x_1))= h((x_1, \widetilde{g}(x_1))) +i0, \end{equation*} \begin{equation*} \frac{\partial}{\partial x_1}h(x_1, \widetilde{g}(x_1))= h_{x_1}(x_1, \widetilde{g}(x_1))+ h_{x_2}(x_1, \widetilde{g}(x_1))\widetilde{g}'(x_1) =-\sqrt{1+[\widetilde{g}'(x_1)]^2}(\nabla k\cdot n)>0, \end{equation*} where $n$ is the outer unit normal.
Therefore $\Psi$ is injective on $\sigma_1$ and $\Psi(\sigma_1)$ is an interval $[a,b]$ contained in the line $\{y_2=0\}$, with \begin{equation*} a=h(-2r_0, 0), \quad b=h(2r_0, 0). \end{equation*} On $\sigma_2$, we have \begin{equation*} \Psi(2r_0, x_2)= h(2r_0, x_2)+ik(2r_0, x_2), \end{equation*} \begin{equation*} h_{x_2}(2r_0, x_2)=-k_{x_1}(2r_0, x_2)=0, \end{equation*} and similarly in $\sigma_4$, so that $h(-2r_0, x_2)\equiv a$ and $h(2r_0, x_2)\equiv b$ for $x_2\in[0,2M_0r_0]$ whereas, by \eqref{eq:6.1}, $k$ is increasing w.r.t. $x_2$. Therefore $\Psi$ is injective on $\sigma_2\cup \sigma_4$, and maps $\sigma_2$ into the segment $\{b\}\times[0,1]$ and $\sigma_4$ into the segment $\{a\}\times[0,1]$. On $\sigma_3$, we have \begin{equation*} \Psi(x_1, 2M_0r_0)= h(x_1, 2M_0r_0) +i1, \end{equation*} \begin{equation*} h_{x_1}(x_1, 2M_0r_0) = k_{x_2}(x_1, 2M_0r_0)>0, \end{equation*} so that $h$ is increasing in $[-2r_0,2r_0]$, $\Psi$ is injective on $\sigma_3$ and $\Psi(\sigma_3)$ is the interval $[a,b]\times\{1\}$.
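Let us also note explicitly the computation behind \eqref{eq:6.3}; here $|D\Psi|$ denotes, as the identity $|D\Psi|=\sqrt 2|\nabla k|$ suggests, the Hilbert--Schmidt norm. By \eqref{eq:DPsi}, \begin{equation*} |D\Psi|^2=2\left(k_{x_1}^2+k_{x_2}^2\right)=2|\nabla k|^2, \qquad \det D\Psi = k_{x_1}^2+k_{x_2}^2=|\nabla k|^2\geq \frac{c_0^2}{r_0^2}>0, \end{equation*} the last bound following {}from \eqref{eq:6.1}; in particular $\Psi$ is a local diffeomorphism of $\widetilde{\Omega}_{r_0}$, and \eqref{eq:6.3} follows {}from \eqref{eq:5.1} and \eqref{eq:6.1}.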
3,572
35,444
en
train
0.113.9
On $\sigma_3$, we have \begin{equation*} \Psi(x_1, 2M_0r_0)= h(x_1, 2M_0r_0) +i1, \end{equation*} \begin{equation*} h_{x_1}(x_1, 2M_0r_0) = k_{x_2}(x_1, 2M_0r_0)>0, \end{equation*} so that $h$ is increasing in $[-2r_0,2r_0]$, $\Psi$ is injective on $\sigma_3$ and $\Psi(\sigma_3)$ is the interval $[a,b]\times\{1\}$. Therefore $\Psi$ maps in a bijective way the boundary of $\widetilde{\Omega}_{r_0}$ into the boundary of $[a,b]\times [0,1]$. Moreover, we have \begin{equation} \label{eq:b-a} b-a= \int_{-2r_0}^{2r_0}h_{x_1}(x_1,2M_0r_0)dx_1 = \int_{-2r_0}^{2r_0}k_{x_2}(x_1,2M_0r_0)dx_1. \end{equation} By \eqref{eq:5.1}, \eqref{eq:6.1} and \eqref{eq:b-a} the following estimate holds \begin{equation} \label{eq:b-a_bis} 4c_0\leq b-a\leq 4C_0. \end{equation} By \eqref{eq:6.3}, we can apply the global inversion theorem, ensuring that \begin{equation*} \Psi^{-1}: [a,b]\times [0,1]\rightarrow \overline{\widetilde{\Omega}}_{r_0} \end{equation*} is a conformal diffeomorphism. Moreover, \begin{equation} \label{eq:DPsi_inversa} D(\Psi^{-1}) =\frac{1}{|\nabla k|^2}\left( \begin{array}{ll} k_{x_2} &k_{x_1}\\ & \\ -k_{x_1} &k_{x_2}\\ \end{array}\right), \end{equation} \begin{equation} \label{eq:8.1} \frac{\sqrt 2}{C_0}r_0\leq |D\Psi^{-1}|= \frac{\sqrt 2}{|\nabla k|}\leq \frac{\sqrt 2}{c_0}r_0, \quad \hbox{in } [a,b]\times [0,1]. \end{equation} Now, let us see that the set $\Psi(\Omega_{r_0})$ contains a closed rectangle having one basis contained in the line $\{y_2=0\}$ and whose sides can be estimated in terms of $M_0$ and $\alpha$. To this aim we need to estimate the distance of $\Psi(0,0)=(\overline{\xi}_1,0)$ {}from the edges $(a,0)$ and $(b,0)$ of the rectangle $[a,b]\times[0,1]$. Recalling that $\widetilde{g}\equiv 0$ for $\frac{3}{2}r_0\leq |x_1|\leq 2r_0$, we have that $\sigma_1$ contains the segments $\left[-2r_0,-\frac{3}{2}r_0\right]\times \{0\}$, $\left[\frac{3}{2}r_0,2r_0\right]\times \{0\}$, so that \begin{equation} \label{eq:segmentino} h(2r_0,0)-h\left(\frac{3}{2}r_0,0\right)= \int_{\frac{3}{2}r_0}^{2r_0}h_{x_1}(x_1,0)dx_1 = \int_{\frac{3}{2}r_0}^{2r_0}k_{x_2}(x_1,0)dx_1. \end{equation} By \eqref{eq:5.1}, \eqref{eq:6.1} and \eqref{eq:segmentino} we derive \begin{equation} \label{eq:segmentino_bis} \frac{c_0}{2}\leq h(2r_0,0)-h\left(\frac{3}{2}r_0,0\right)\leq \frac{C_0}{2}. \end{equation} Similarly, \begin{equation} \label{eq:segmentino_ter} \frac{c_0}{2}\leq h\left(-\frac{3}{2}r_0,0\right)-h(-2r_0,0)\leq \frac{C_0}{2}. \end{equation} Since $h$ is injective and maps $\sigma_1$ into $[a,b]\times\{0\}$, it follows that \begin{equation*} |\Psi(0,0)-(a,0)| = h(0,0)-h(-2r_0,0) \geq\frac{c_0}{2}, \end{equation*} \begin{equation*} |\Psi(0,0)-(b,0)| = h(2r_0,0) - h(0,0) \geq\frac{c_0}{2}. \end{equation*} Possibly replacing $c_0$ with $\min\{c_0,2\}$, we obtain that $\overline{B}^+_{\frac{c_0}{2}}(\Psi(O))\subset [a,b]\times [0,1]$. By \eqref{eq:8.1}, \begin{equation*} |\Psi^{-1}(\xi)| = |\Psi^{-1}(\xi)-\Psi^{-1}(\Psi(O))|\leq\frac{\sqrt 2}{2}r_0<r_0, \qquad \forall \xi \in B^+_{\frac{c_0}{2}}(\Psi(O)), \end{equation*} so that $\Psi^{-1}\left(B^+_{\frac{c_0}{2}}(\Psi(O))\right)\subset \Omega_{r_0}$, \begin{equation*} \Psi(\Omega_{r_0})\supset B^+_{\frac{c_0}{2}}(\Psi(O))\supset R, \end{equation*} where $R$ is the rectangle \begin{equation*} R= \left(\overline{\xi}_1-\frac{c_0}{2\sqrt 2}, \overline{\xi}_1+\frac{c_0}{2\sqrt 2}\right)\times \left(0,\frac{c_0}{2\sqrt 2}\right). 
\end{equation*} Let us consider the homothety \begin{equation*} \Theta:[a,b]\times [0,1] \rightarrow\mathbb{R}^2, \end{equation*} \begin{equation*} \Theta(\xi_1,\xi_2) = \frac{2\sqrt 2}{c_0}(\xi_1-\overline{\xi}_1,\xi_2), \end{equation*} which satisfies \begin{equation*} \Theta(\Psi(O)) = O, \qquad D\Theta = \frac{2\sqrt 2}{c_0} I_2, \end{equation*} \begin{equation*} \Theta([a,b]\times [0,1]) =R^*, \qquad R^* =\left[\frac{2\sqrt 2}{c_0}(a-\overline{\xi}_1), \frac{2\sqrt 2}{c_0}(b-\overline{\xi}_1)\right]\times \left[0, \frac{2\sqrt 2}{c_0}\right], \end{equation*} \begin{equation*} \Theta(\overline{R}) = [-1,1]\times [0,1], \end{equation*} \begin{equation*} D(\Theta\circ \Psi)(x) = \frac{2\sqrt 2}{c_0}D\Psi(x). \end{equation*} Its inverse \begin{equation*} \Theta^{-1}:R^*\rightarrow [a,b]\times [0,1], \end{equation*} \begin{equation*} \Theta^{-1}(y_1,y_2) = \frac{c_0}{2\sqrt 2}(y_1+\overline{\xi}_1,y_2), \end{equation*} satisfies \begin{equation*} D\Theta^{-1}= \frac{c_0} {2\sqrt 2}I_2, \end{equation*} \begin{equation*} D((\Theta\circ \Psi)^{-1})(y) = \frac{c_0}{2\sqrt 2}D\Psi^{-1}(\Theta^{-1}(y)). \end{equation*} Let us define \begin{equation*} \Phi =(\Theta\circ \Psi)^{-1}. \end{equation*} We have that $\Phi$ is a conformal diffeomorphism {}from $R^*$ into $\widetilde{\Omega}_{r_0}$ such that \begin{equation*} \Omega_{r_0}\supset \Psi^{-1}(R)=\Phi((-1,1)\times(0,1)), \end{equation*} \begin{equation} \label{eq:gradPhibis} \frac{c_0r_0}{2C_0}\leq |D\Phi(y)|\leq \frac{r_0}{2}, \end{equation} \begin{equation} \label{eq:gradPhiInvbis} \frac{4}{r_0}\leq |D\Phi^{-1}(x)|\leq \frac{4C_0}{c_0r_0}. \end{equation} By \eqref{eq:gradPhi}, we have that, for every $y\in [-1,1]\times [0,1]$, \begin{equation} \label{eq:stimaPhibis} |\Phi(y)|= |\Phi(y)-\Phi(O)|\leq \frac{r_0}{2}|y|. \end{equation} Given any $x=(x_1,x_2)\in \overline{\Omega}_{r_0}$, let $x^* =(x_1,g(x_1))$. We have \begin{equation*} |x-x^*| = |x_2 - g(x_1)| \leq|x_2|+ |g(x_1)-g(0)|\leq (M_0+1)|x|, \end{equation*} and, since the segment joining $x$ and $x^*$ is contained in $\overline{\Omega}_{r_0}$, by \eqref{eq:gradPhiInvbis} we have \begin{equation} \label{eq:stimaPhiInv1} |\Phi^{-1}(x)-\Phi^{-1}(x^*)|\leq \frac{4C_0}{c_0r_0}(M_0+1)|x|. \end{equation} Let us consider the arc $\tau(t)= \Phi^{-1}(t,g(t))$, for $t\in [0,x_1]$. Again by \eqref{eq:gradPhiInvbis}, we have \begin{multline} \label{eq:stimaPhiInv2} |\Phi^{-1}(x^*)| = |\Phi^{-1}(x^*)-\Phi^{-1}(O)| =|\tau(x_1) -\tau(0)| \leq\\ \leq \left|\int_0^{x_1}\tau'(t)dt \right|\leq \frac{4C_0}{c_0r_0}\sqrt{M_0^2+1}\ |x|. \end{multline} By \eqref{eq:stimaPhiInv1}, \eqref{eq:stimaPhiInv2}, we have \begin{equation} \label{eq:stimaPhiInvbis} |\Phi^{-1}(x)| \leq \frac{K}{r_0}|x|, \end{equation} with $K=\frac{4C_0}{c_0}(M_0+1+\sqrt{M_0^2+1})>8$. {}From this last inequality, we have that \begin{equation*} \Phi^{-1}\left(\Omega_{r_0}\cap B_{\frac{r_0}{K}}\right)\subset B_1^+\subset (-1,1)\times(0,1), \qquad \Phi((-1,1)\times(0,1))\supset \Omega_{r_0}\cap B_{\frac{r_0}{K}}. \end{equation*} Let $\Phi = (\varphi, \psi)$. We have that \begin{equation} \label{eq:DPhi} D\Phi =\left( \begin{array}{ll} \varphi_{y_1} &\varphi_{y_2}\\ & \\ -\varphi_{y_2} &\varphi_{y_1}\\ \end{array}\right), \end{equation} \begin{equation} \label{eq:32.1bisluglio} det(D\Phi(y)) = |\nabla\varphi(y)|^2, \end{equation} \begin{equation} \label{eq:DPhi_inversa} (D\Phi)^{-1} =\frac{1}{|\nabla \varphi|^2}\left( \begin{array}{ll} \varphi_{y_1} &-\varphi_{y_2}\\ & \\ \varphi_{y_2} &\varphi_{y_1}\\ \end{array}\right).
\end{equation} Concerning the function $u(y) = v(\Phi(y))$, we can compute \begin{equation} \label{eq:32.3luglio} (\nabla v) (\Phi(y)) = [(D\Phi(y))^{-1}]^T\nabla u(y), \end{equation} \begin{equation} \label{eq:32.2luglio} (\Delta v) (\Phi(y)) = \frac{1}{|det(D\Phi(y))|}\textrm{div}\,(A(y)\nabla u(y)), \end{equation} where \begin{equation} \label{eq:33.0luglio} A(y) = |det(D\Phi(y))| (D\Phi(y))^{-1} [(D\Phi(y))^{-1}]^T. \end{equation} By \eqref{eq:DPhi}--\eqref{eq:DPhi_inversa}, we obtain that \begin{equation} \label{eq:33.0bisluglio} A(y) = I_2, \end{equation} so that \begin{equation} \label{eq:33.0terluglio} (\Delta v) (\Phi(y)) = \frac{1}{|\nabla \varphi(y)|^2}\Delta u(y), \end{equation} \begin{equation} \label{eq:33.1luglio} (\Delta^2 v) (\Phi(y)) = \frac{1}{|\nabla \varphi(y)|^2}\Delta \left(\frac{1}{|\nabla \varphi(y)|^2}\Delta u(y)\right). \end{equation} By using the above formulas, some computations allow us to derive \eqref{eq:equazione_sol_composta}--\eqref{eq:15.2} {}from \eqref{eq:equazione_piastra_non_div}. Finally, the boundary conditions \eqref{eq:Dirichlet_sol_composta} follow {}from \eqref{eq:32.3luglio}, \eqref{eq:9.2b} and \eqref{eq:Diric_u_tilde}. \end{proof}
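For completeness, the step {}from \eqref{eq:33.0luglio} to \eqref{eq:33.0bisluglio} is the standard conformality computation: by \eqref{eq:DPhi}--\eqref{eq:DPhi_inversa}, \begin{equation*} (D\Phi(y))^{-1} [(D\Phi(y))^{-1}]^T=\frac{1}{|\nabla\varphi(y)|^4} \left( \begin{array}{ll} \varphi_{y_1} &-\varphi_{y_2}\\ & \\ \varphi_{y_2} &\varphi_{y_1}\\ \end{array}\right) \left( \begin{array}{ll} \varphi_{y_1} &\varphi_{y_2}\\ & \\ -\varphi_{y_2} &\varphi_{y_1}\\ \end{array}\right) =\frac{1}{|\nabla\varphi(y)|^2}I_2, \end{equation*} so that, by \eqref{eq:32.1bisluglio}, $A(y)=|\nabla\varphi(y)|^2\cdot\frac{1}{|\nabla\varphi(y)|^2}I_2=I_2$.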
3,877
35,444
en
train
0.113.10
\begin{proof}[Proof of Lemma \ref{lem:intermezzo}] Here, we develop an argument which is contained in \cite[Chapter 9]{l:GT}. By noticing that $a\cdot\nabla\Delta u = \textrm{div}\,(\Delta u a)-(\textrm{div}\, a)\Delta u$, we can rewrite \eqref{eq:41.1} in the form \begin{equation*} \sum_{|\alpha|,|\beta|\leq 2}D^\alpha(a_{\alpha\beta}D^\beta u)=0. \end{equation*} Let $\sigma\in\left[\frac{1}{2},1\right)$, $\sigma'=\frac{1+\sigma}{2}$ and let us notice that \begin{equation} \label{eq:3a.1} \sigma'-\sigma = \frac{1-\sigma}{2}, \qquad 1-\sigma = 2(1-\sigma'). \end{equation} Let $\xi\in C^\infty_0(\mathbb{R}^2)$ be such that \begin{equation*} 0\leq\xi\leq 1, \end{equation*} \begin{equation*} \xi=1, \hbox{ for } |x|\leq \sigma, \end{equation*} \begin{equation*} \xi=0, \hbox{ for } |x|\geq \sigma', \end{equation*} \begin{equation*} |D^k(\xi)|\leq \frac{C}{(\sigma'-\sigma)^k}, \hbox{ for } \sigma\leq |x|\leq \sigma', \ k=0,1,2. \end{equation*} By straightforward computations we have that \begin{equation*} \sum_{|\alpha|,|\beta|\leq 2}D^\alpha(a_{\alpha\beta}D^\beta (u\xi))=f, \end{equation*} with \begin{equation*} f=\sum_{|\alpha|,|\beta|\leq 2}\sum_ {\overset{\scriptstyle \delta_2\leq\alpha}{\scriptstyle \delta_2\neq0}}{\alpha \choose \delta_2} D^{\alpha-\delta_2}(a_{\alpha\beta}D^\beta u)D^{\delta_2}\xi+ \sum_{|\alpha|,|\beta|\leq 2}D^\alpha\left[a_{\alpha\beta} \sum_ {\overset{\scriptstyle \delta_1\leq\beta}{\scriptstyle \delta_1\neq0}}{\beta \choose \delta_1} D^{\beta-\delta_1}uD^{\delta_1}\xi\right]. \end{equation*} By standard regularity estimates (see for instance \cite[Theorem 9.8]{l:a65}), \begin{equation} \label{eq:8a.1} \|u\xi\|_{H^{4+k}(B_1^+)}\leq C\left(\|u\xi\|_{L^{2}(B_1^+)}+ \|f\|_{H^{k}(B_1^+)}\right). \end{equation} On the other hand, it follows trivially that \begin{equation} \label{eq:8a.2} \|f\|_{H^{k}(B_1^+)}\leq CM_1 \sum_{h=0}^{3+k}\frac{1}{(1-\sigma')^{4+k-h}}\|D^h u\|_{L^{2}(B_{\sigma'}^+)}. \end{equation} By inserting \eqref{eq:8a.2} in \eqref{eq:8a.1}, by multiplying both members by $(1-\sigma')^{4+k}$ and by recalling \eqref{eq:3a.1}, we have \begin{equation} \label{eq:8a.3} (1-\sigma)^{4+k}\|D^{4+k}u\|_{L^{2}(B_{\sigma}^+)}\leq C \left(\|u\|_{L^{2}(B_1^+)}+\sum_{h=1}^{3+k}(1-\sigma')^h \|D^{h}u\|_{L^{2}(B_{\sigma'}^+)} \right). \end{equation} Setting \begin{equation*} \Phi_j=\sup_{\sigma\in\left[\frac{1}{2},1\right)}(1-\sigma)^j \|D^{j}u\|_{L^{2}(B_{\sigma}^+)}, \end{equation*} {}from \eqref{eq:8a.3} we obtain \begin{equation} \label{eq:9a.2} \Phi_{4+k}\leq C\left(A_{2+k}+ \Phi_{3+k}\right), \end{equation} where \begin{equation*} A_{2+k}=\|u\|_{L^{2}(B_1^+)}+ \sum_{h=1}^{2+k}\Phi_h. \end{equation*} By the interpolation estimate \eqref{eq:3a.2} we have that, for every $\epsilon$, $0<\epsilon<1$ and for every $h\in \mathbb{N}$, $1\leq h\leq 3+k$, \begin{equation} \label{eq:9a.3} \|D^{h}u\|_{L^{2}(B_{\sigma}^+)}\leq C\left( \epsilon\|D^{4+k}u\|_{L^{2}(B_{\sigma}^+)}+ \epsilon^{-\frac{h}{4+k-h}}\|u\|_{L^{2}(B_{\sigma}^+)}\right). \end{equation} Let $\gamma>0$ and let $\sigma_\gamma\in \left[\frac{1}{2},1\right)$ be such that \begin{equation} \label{eq:9a.4} \Phi_{3+k}\leq(1-\sigma_\gamma)^{3+k} \|D^{3+k}u\|_{L^{2}(B_{\sigma_\gamma}^+)}+\gamma.
\end{equation} By applying \eqref{eq:9a.3} with $h=3+k$, $\epsilon=(1-\sigma_\gamma)\widetilde{\epsilon}$, $\sigma = \sigma_\gamma$, we have \begin{equation*} (1-\sigma_\gamma)^{3+k}\|D^{3+k}u\|_{L^{2}(B_{\sigma_\gamma}^+)}\leq C\left( \widetilde{\epsilon}(1-\sigma_\gamma)^{4+k}\|D^{4+k}u\|_{L^{2}(B_{\sigma_\gamma}^+)}+ \widetilde{\epsilon}^{-(3+k)}\|u\|_{L^{2}(B_{\sigma_\gamma}^+)}\right), \end{equation*} so that, by \eqref{eq:9a.4} and by the arbitrariness of $\gamma$, we have \begin{equation*} \Phi_{3+k}\leq C \left( \widetilde{\epsilon}\Phi_{4+k}+ \widetilde{\epsilon}^{-(3+k)}\|u\|_{L^{2}(B_{1}^+)}\right). \end{equation*} By inserting this last inequality in \eqref{eq:9a.2}, we get \begin{equation*} \Phi_{4+k}\leq C \left(A_{2+k}+ \widetilde{\epsilon}^{-(3+k)}\|u\|_{L^{2}(B_{1}^+)}+ \widetilde{\epsilon}\Phi_{4+k}\right), \end{equation*} which gives, for $\widetilde{\epsilon} =\frac{1}{2C+1}$, \begin{equation*} \Phi_{4+k}\leq C \left(\|u\|_{L^{2}(B_{1}^+)}+ \sum_{h=1}^{2+k}\Phi_{h}\right). \end{equation*} By proceeding similarly, we get \begin{equation*} \Phi_{4+k}\leq C \|u\|_{L^{2}(B_{1}^+)}, \end{equation*} so that \begin{equation} \label{eq:12a.1} \|D^{4+k}u\|_{L^{2}(B_{\frac{1}{2}}^+)} \leq2^{4+k}C\|u\|_{L^{2}(B_{1}^+)}, \qquad k=0,1,2. \end{equation} By applying \eqref{eq:9a.3} for a fixed $\epsilon$, $\sigma=\frac{1}{2}$, we can estimate the derivatives of order $h$, $1\leq h\leq 3$, \begin{equation} \label{eq:12a.1bis} \|D^{h}u\|_{L^{2}(B_{\frac{1}{2}}^+)} \leq C\left(\|D^{4+k}u\|_{L^{2}(B_{\frac{1}{2}}^+)}+ \|u\|_{L^{2}(B_{\frac{1}{2}}^+)}\right). \end{equation} By \eqref{eq:12a.1}, \eqref{eq:12a.1bis}, we have \begin{equation*} \|D^{h}u\|_{L^{2}(B_{\frac{1}{2}}^+)} \leq C\|u\|_{L^{2}(B_{1}^+)}, \qquad \hbox{ for } h=1,...,6. \end{equation*} By employing a homothety, we obtain \eqref{eq:12a.2}. \end{proof} \noindent \emph{Acknowledgement:} The authors wish to thank Antonino Morassi for fruitful discussions on the subject of this work. \end{document}
2,309
35,444
en
train
0.114.0
\begin{document} \title[ ]{Extrema of Curvature Functionals on the Space of Metrics on 3-Manifolds, II.} \author[ ]{Michael T. Anderson} \thanks{Partially supported by NSF Grant DMS-9802722} \maketitle \setcounter{section}{-1} \section{Introduction} \setcounter{equation}{0} This paper is a continuation of the study of some rigidity or non-existence issues discussed in [An1, \S 6]. The results obtained here also play a significant role in the approach to geometrization of 3-manifolds discussed in [An4]. Let $N$ be an oriented 3-manifold and consider the functional \begin{equation} \label{e0.1} {\cal R}^{2}(g) = \int_{N}|r_{g}|^{2}dV_{g}, \end{equation} on the space of metrics ${\Bbb M} $ on $N$ where $r$ is the Ricci curvature and $dV$ is the volume form. The Euler-Lagrange equations for a critical point of ${\cal R}^{2}$ read \begin{equation} \label{e0.2} \nabla{\cal R}^{2} = D^{*}Dr + D^{2}s - 2 \stackrel{\circ}{R}\circ r -\tfrac{1}{2}(\Delta s - |r|^{2})\cdot g = 0, \end{equation} \begin{equation} \label{e0.3} \Delta s = -\tfrac{1}{3}|r|^{2}. \end{equation} Here $s$ is the scalar curvature, $D^{2}s$ the Hessian of $s$, $\Delta s = trD^{2}s$ the Laplacian, and $ \stackrel{\circ}{R} $ the action of the curvature tensor $R$ on symmetric bilinear forms, c.f. [B, Ch.4H] for further details. The equation (0.3) is just the trace of (0.2). It is obvious from the trace equation (0.3) that there are no non-flat ${\cal R}^{2}$ critical metrics, i.e. solutions of (0.2)-(0.3), on compact manifolds $N$; this follows immediately by integrating (0.3) over $N$. Equivalently, since the functional ${\cal R}^{2}$ is not scale invariant in dimension 3, there are no critical metrics $g$ with ${\cal R}^{2}(g) \neq $ 0. To obtain non-trivial critical metrics in this case, one needs to modify ${\cal R}^{2}$ so that it is scale-invariant, i.e. consider $v^{1/3}{\cal R}^{2},$ where $v$ is the volume of $(N, g)$. Nevertheless, it is of course apriori possible that there are non-trivial solutions of (0.2)-(0.3) on non-compact manifolds $N$. \begin{theorem} \label{t 0.1.} Let (N, g) be a complete ${\cal R}^{2}$ critical metric with non-negative scalar curvature. Then (N,g) is flat. \end{theorem} This result generalizes [An1, Thm.6.2], which required that $(N, g)$ have an isometric free $S^{1}$ action. It is not known if the condition $s \geq $ 0 is necessary in Theorem 0.1; a partial result without this assumption is given after Proposition 2.2. However, following the discussion in \S 1 and [An1, \S 6], the main situation of interest is when $s \geq $ 0. Of course, Theorem 0.1 is false in higher dimensions, since any Ricci-flat metric is a critical point, in fact minimizer of ${\cal R}^{2}$ in any dimension, while any Einstein metric is critical for ${\cal R}^{2}$ in dimension 4. Next, we consider a class of metrics which are critical points of the functional ${\cal R}^{2}$ subject to a scalar curvature constraint. More precisely, consider scalar-flat metrics on a (non-compact) 3-manifold $N$ satisfying the equations \begin{equation} \label{e0.4} \alpha\nabla{\cal R}^{2} + L^{*}(\omega ) = 0, \end{equation} \begin{equation} \label{e0.5} \Delta\omega = -\frac{\alpha}{4}|r|^{2}. \end{equation} where again (0.5) is the trace of (0.4) since $s =$ 0. Here $L^{*}$ is the adjoint of the linearization of the scalar curvature, given by $$L^{*}f = D^{2}f - \Delta f\cdot g - fr, $$ and $\omega $ is a locally bounded function on $N$, which we consider as a potential. 
The meaning and derivation of these equations will be discussed in more detail in \S 1. They basically arise from the Euler-Lagrange equations for a critical metric of ${\cal R}^{2}$ subject to the constraint $s =$ 0. The parameter $\alpha $ may assume any value in [0, $\infty ).$ When $\alpha =$ 0, the equations (0.4)-(0.5) are the static vacuum Einstein equations, c.f. [An2] and references there. In this case, we require that $\omega $ is not identically 0. It is proved in \S 3 that an $L^{2,2}$ Riemannian metric $g$ and $L^{2}$ potential function $\omega $ satisfying the equations (0.4)-(0.5) weakly in a 3-dimensional domain is a $C^{\infty},$ (in fact real-analytic), solution of the equations. A smooth metric $g$ and potential function $\omega $ satisfying (0.4)-(0.5) will be called an ${\cal R}_{s}^{2}$ critical metric or ${\cal R}_{s}^{2}$ solution. \begin{theorem} \label{t 0.2.} Let $(N, g)$ be a complete ${\cal R}_{s}^{2}$ critical metric, i.e. a complete scalar-flat metric satisfying (0.4)-(0.5), with \begin{equation} \label{e0.6} -\lambda \leq \omega \leq 0, \end{equation} for some $\lambda < \infty .$ Suppose further that (N, g) admits an isometric free $S^{1}$ action leaving $\omega $ invariant. Then (N, g) is flat. \end{theorem} In contrast to Theorem 0.1, the assumption that $(N, g)$ admit an isometric free $S^{1}$ action here is essential. There are complete non-flat ${\cal R}_{s}^{2}$ solutions satisfying (0.6) which admit an isometric, but not free, $S^{1}$ action. For example, the complete Schwarzschild metric is an ${\cal R}_{s}^{2}$ solution for a suitable choice of the potential $\omega $ satisfying (0.6), c.f. Proposition 5.1. The condition $\omega \leq $ 0 can be weakened to an assumption that $\omega \leq $ 0 outside some compact set $K \subset N$. However, it is unknown if this result holds when $\omega \geq $ 0 everywhere for instance. Similarly, the assumption that $\omega $ is bounded below can be removed in certain situations, but it is not clear if it can be removed in general, c.f. Remarks 4.4 and 4.5. The proofs of these results rely almost exclusively on the respective trace equations (0.3) and (0.5). The full equations (0.2) and (0.4) are used only to obtain regularity estimates of the metric in terms of the potential function $s$, respectively $\omega .$ Thus it is likely that these results can be generalized to variational problems for other curvature-type integrals, whose trace equations have a similar form; c.f. \S 5.2 for an example. As noted above, both Theorem 0.1 and 0.2 play an important role in the approach to geometrization of 3-manifolds studied in [An4]. For instance, Theorem 0.2 is important in understanding the collapse situation. Following discussion of the origin of the ${\cal R}^{2}$ and ${\cal R}_{s}^{2}$ equations in \S 1, Theorem 0.1 is proved in \S 2. In \S 3, we prove the regularity of ${\cal R}_{s}^{2}$ solutions and apriori estimates for families of such solutions. Theorem 0.2 is proved in \S 4, while \S 5 shows that the Schwarzschild metric is an ${\cal R}_{s}^{2}$ solution and concludes by showing that Theorems 0.1 and 0.2 also hold for ${\cal Z}^{2}$ and ${\cal Z}_{s}^{2}$ solutions, where $z$ is the trace-free Ricci curvature. While efforts have been made to make the paper self-contained, in certain instances we refer to [An1] for further details.
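To make the remark following (0.3) explicit: if $N$ is closed and $g$ satisfies (0.2)-(0.3), then integrating the trace equation over $N$ gives $$0 = \int_{N}\Delta s \ dV = -\tfrac{1}{3}\int_{N}|r|^{2}dV, $$ so that $r \equiv 0$ and hence, in dimension 3, $g$ is flat. This is the compact non-existence statement quoted above, and explains why only the complete non-compact case is of interest in Theorem 0.1.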
2,203
60,678
en
train
0.114.1
\section{Scalar Curvature Constrained Equations.} \setcounter{equation}{0} In this section, we discuss the nature and form of the ${\cal R}_{s}^{2}$ equations, as well as some motivation for considering these and the ${\cal R}^2$ equations. The discussion here is by and large only formal and we refer to [An1, \S 8] and [An4] for complete details and proofs of the assertions made. Suppose first that $M$ is a compact, oriented 3-manifold and consider the scale-invariant functional \begin{equation} \label{e1.1} I_{\varepsilon} = \varepsilon v^{1/3}{\cal R}^{2} + v^{1/6}{\cal S}^{2} = \varepsilon v^{1/3}\int|r|^{2} + v^{1/6}(\int s^{2})^{1/2}, \end{equation} on the space of metrics on $M$. Here $\varepsilon > $ 0 is a small parameter and we are interested in considering the behavior $\varepsilon \rightarrow $ 0. The existence, regularity and general geometric properties of minimizers $g_{\varepsilon}$ of (essentially) $I_{\varepsilon}$, for a fixed $\varepsilon > 0$, are proved in [An1, \S 8]; more precisely, such is done there for the closely related functional $\varepsilon v^{1/3}\int|R|^{2} + v^{1/3}\int s^{2}$, where $R$ is the full Riemann curvature tensor. All of these results follow from the same results proved in [An1, \S 3 - \S 5] for the $L^2$ norm of $R$, together with the fact that, for a fixed $\varepsilon > 0$, the functional $\varepsilon v^{1/3}\int|R|^{2} + v^{1/3}\int s^{2}$ has the same basic properties as $v^{1/3}\int|R|^{2}$ w.r.t. existence, regularity and completeness issues. Now in dimension 3, the full curvature $R$ is controlled by the Ricci curvature $r$. Thus, for example one has the relations $|R|^2 = 4|r|^2 - s^2$ and $s^2 \leq 3|r|^2$. A brief inspection of the work in [An1, \S 3 - \S 5, \S 8] then shows that these results, together with the same proofs, also hold for the functional $I_{\varepsilon}$ and its minimizers $g_{\varepsilon}$ in (1.1). The Euler-Lagrange equations for $I_{\varepsilon}$ at $g = g_{\varepsilon}$ are \begin{equation} \label{e1.2} \varepsilon\nabla{\cal R}^{2} + L^{*}(\tau ) + (\tfrac{1}{4}s\tau + c)\cdot g = 0, \end{equation} \begin{equation} \label{e1.3} 2\Delta (\tau + {\tfrac{3}{4}}\varepsilon s) + {\tfrac{1}{4}}s\tau = -{\tfrac{1}{2}}\varepsilon |r|^{2} + 3c, \end{equation} where $\tau = \tau_{\varepsilon} = s/\sigma , \sigma = (v^{1/6}{\cal S}^{2}(g))$ and the constant term $c$, corresponding to the volume terms in (1.1), is given by \begin{equation} \label{e1.4} c = \frac{\varepsilon}{6v}\int|r|^{2}dV + \frac{1}{12\sigma v}\int s^{2}dV. \end{equation} Again (1.3) is the trace of (1.2). These equations can be deduced either from [An1, \S 8] or [B, Ch.4H]; again all terms in (1.2)-(1.4) are w.r.t. $g = g_{\varepsilon}$. As $\varepsilon \rightarrow $ 0, the curvature $r_{\varepsilon}$ of the solutions $g_{\varepsilon}$ of (1.2)-(1.3) will usually blow-up, i.e. diverge to infinity in some region, say in a neighborhood of points $x_{\varepsilon}.$ Thus blow up or renormalize the metric $g_{\varepsilon}$ by considering $g_{\varepsilon}' = \rho_{\varepsilon}^{- 2}\cdot g_{\varepsilon},$ where $\rho_{\varepsilon} \rightarrow $ 0 is chosen so that the curvature in the geodesic ball $(B_{x_{\varepsilon}}' (1), g_{\varepsilon}' )$ w.r.t. $g_{\varepsilon}' $ is bounded. More precisely, $\rho $ is chosen to be $L^{2}$ curvature radius of $g_{\varepsilon}$ at $x_{\varepsilon},$ c.f. [An1,Def.3.2].
Then the renormalized Euler-Lagrange equations take the form \begin{equation} \label{e1.5} \frac{\varepsilon}{\rho^{2}}\nabla{\cal R}^{2} + L^{*}(\tau ) +({\tfrac{1}{4}}s\tau + \frac{c}{\rho^{2}})\cdot g = 0, \end{equation} \begin{equation} \label{e1.6} 2\Delta (\tau +{\tfrac{3}{4}}\frac{\varepsilon}{\rho^{2}}s) + {\tfrac{1}{4}}s\tau = - {\tfrac{1}{2}}\frac{\varepsilon}{\rho^{2}}|r|^{2} + \frac{3c}{\rho^{2}}. \end{equation} Here $\rho = \rho_{\varepsilon}$ and otherwise all metric quantities are taken w.r.t. $g = g_{\varepsilon}' $ except for the potential function $\tau ,$ which has been normalized to be scale invariant. It is necessary to divide by $\rho^{2}$ in (1.5)-(1.6), since otherwise all terms in the equations tend uniformly to 0 in $L^{2}(B_{\varepsilon}), B_{\varepsilon} = B_{x_{\varepsilon}}' (1).$ From the scaling properties of $c$ in (1.4), note that in the scale $g_{\varepsilon}' , c' = \rho^{4}\cdot c,$ where $c = c(g_{\varepsilon}).$ Thus, the constant term in (1.5)-(1.6) satisfies \begin{equation} \label{e1.7} \frac{c}{\rho^{2}} = \frac{c(g_{\varepsilon}' )}{\rho^{2}} = \frac{\rho^{4}c(g_{\varepsilon})}{\rho^{2}} = \rho^{2}c(g_{\varepsilon}) \rightarrow 0, \ \ {\rm as} \ \ \varepsilon \rightarrow 0. \end{equation} Similarly, since ${\cal S}^{2}(g_{\varepsilon})$ is bounded, and the scalar curvature $s = s_{\varepsilon}' $ of $g_{\varepsilon}' $ is given by $s_{\varepsilon}' = \rho^{2}s_{\varepsilon},$ one sees that $s$ in (1.5)-(1.6) goes to 0 in $L^{2}(B_{\varepsilon})$ as $\varepsilon \rightarrow $ 0. Now assume that the potential function $\tau = \tau_{\varepsilon}$ is uniformly bounded in the $g_{\varepsilon}' $ ball $B_{\varepsilon}$, as $\varepsilon \rightarrow 0$. One then has three possible behaviors for the equations (1.5)-(1.6) in a limit $(N, g' , x, \tau )$ as $\varepsilon \rightarrow $ 0, (in a subsequence). The discussion to follow here is formal in that we are not concerned with the existence of such limits; this issue is discussed in detail in [An4], as is the situation where $\tau = \tau_{\varepsilon}$ is not uniformly bounded in $B_\varepsilon$ as $\varepsilon \rightarrow 0$. (It turns out that the limits below have the same form even if $\{\tau_{\varepsilon}\}$ is unbounded). {\bf Case(i).} $\varepsilon / \rho^{2} \rightarrow $ 0. In this case, the equations (1.5)-(1.6) in the limit $\varepsilon \rightarrow $ 0 take the form \begin{equation} \label{e1.8} L^{*}(\tau ) = 0, \Delta\tau = 0. \end{equation} These are the static vacuum Einstein equations, c.f. [An2,3] or [EK]. {\bf Case(ii).} $\varepsilon / \rho^{2} \rightarrow \alpha > $ 0. In this case, the limit equations take the form \begin{equation} \label{e1.9} \alpha\nabla{\cal R}^{2} + L^{*}(\tau ) = 0, \end{equation} \begin{equation} \label{e1.10} \Delta\tau = -\frac{\alpha}{4}|r|^{2}. \end{equation} Formally, these are the equations for a critical metric $g' $ of ${\cal R}^{2}$ subject to the constraint that $s =$ 0. However there are no compact scalar-flat perturbations of $g' $ since the limit $(N, g' )$ is non-compact and thus one must impose certain boundary conditions on the comparison metrics. For example, if the limit $(N, g' )$ is complete and asymptotically flat, (1.9)-(1.10) are the equations for a critical point of ${\cal R}^{2}$ among all scalar-flat and asymptotically flat metrics with a given mass $m$.
{\bf Case(iii).} $\varepsilon / \rho^{2} \rightarrow \infty .$ In this case, renormalize the equations (1.5)-(1.6) by dividing by $\varepsilon /\rho^{2}.$ Since $\tau $ is bounded, $(\rho^{2}/\varepsilon )\tau \rightarrow $ 0, and one obtains in the limit \begin{equation} \label{e1.11} \nabla{\cal R}^{2} = 0, \end{equation} \begin{equation} \label{e1.12} |r|^{2} = 0, \end{equation} so the limit metric is flat. These three cases may be summarized by the equations \begin{equation} \label{e1.13} \alpha\nabla{\cal R}^{2} + L^{*}(\tau ) = 0, \end{equation} \begin{equation} \label{e1.14} \Delta\tau = -\frac{\alpha}{4}|r|^{2}, \end{equation} where $\alpha =$ 0 corresponds to Case (i), 0 $< \alpha < \infty $ corresponds to Case (ii) and $\alpha = \infty $ corresponds to (the here trivial) Case (iii). Essentially the same discussion is valid for the scale-invariant functional \begin{equation} \label{e1.15} J_{\varepsilon} = (\varepsilon v^{1/3}{\cal R}^{2} - v^{2/3}\cdot s)|_{{\cal C}}, \end{equation} where ${\cal C} $ is the space of Yamabe metrics on $M$. The existence and general properties of minimizers of $J_{\varepsilon}$ again are discussed in [An1,\S 8II]. By the same considerations, one obtains as above limit equations of the form (1.13)-(1.14), with $\tau $ replaced by the potential function $- (1+h)$ from [An1,\S 8II]. Next consider briefly the scale-invariant functional \begin{equation} \label{e1.16} I_{\varepsilon}^- = \varepsilon v^{1/3}\int|r|^{2} + \bigl( v^{1/3}\int (s^-)^{2}\bigr )^{1/2}, \end{equation} on the space of metrics on $M$ as above, where $s^-$ = min$(s, 0)$. This functional, (essentially), is the main focus of [An4], and we refer there for a complete discussion; c.f. also \S 5.2. The Euler-Lagrange equations of $I_{\varepsilon}^-$ are formally the same as (1.2)-(1.3) with $\tau^-$ = min$(\tau ,0)$ in place of $\tau $ and $\sigma $ replaced by $(v^{1/3}\int (s^-)^{2})^{1/2}.$ Let now $g_{\varepsilon}$ be a minimizer of $I_{\varepsilon}^-$ on $M$. Note that here $\tau^-$ is automatically bounded above, as is $\int (s^-)^{2}$ as $\varepsilon \rightarrow $ 0. However, there is no longer an apriori bound on the $L^{2}$ norm of $s$, i.e. it may well happen that $\int s^{2}(g_{\varepsilon}) \rightarrow \infty $ as $\varepsilon \rightarrow $ 0. Formally taking a blow-up limit $(N, g' , x, \tau^-)$ as $\varepsilon \rightarrow $ 0 as above leads to the analogue of the equations (1.13)-(1.14), i.e. to the limit equations \begin{equation} \label{e1.17} \alpha\nabla{\cal R}^{2} + L^{*}(\tau^-) = 0, \end{equation} \begin{equation} \label{e1.18} \Delta (\tau^- + {\tfrac{3}{4}}\alpha s) = -{\tfrac{1}{4}}\alpha |r|^{2}, \end{equation} where as before we assume that $\tau^-$ = lim $\tau_{\varepsilon}^-$ is bounded below. These equations correspond to (0.4)-(0.5) with $\omega = \tau^-,$ but with $s \geq $ 0 in place of $s =$ 0, corresponding to the fact that in blow-up limits, now only $\int (s^-)^{2} \rightarrow $ 0 while previously $\int s^{2} \rightarrow $ 0. While in the region $N^- = \{\tau^- < 0\}$, the equations (1.17)-(1.18) have the same form as the Cases (i)-(iii) above, in the region $N^{+} = \{s > $ 0\}, (so that $\tau^- =$ 0), these equations take the form \begin{equation} \label{e1.19} \nabla{\cal R}^{2} = 0, \end{equation} \begin{equation} \label{e1.20} \Delta s = -\tfrac{1}{3}|r|^{2}, \end{equation} i.e.
the ${\cal R}^{2}$ equations (0.2)-(0.3); here we have divided by $\alpha .$ The junction $\Sigma = \partial\{s =$ 0\} between the two regions $N^-$ and $N^{+}$ above is studied in [An4]. This junction may be compared with junction conditions common in general relativity, where vacuum regions of space-(time) are joined to regions containing a non-vanishing matter distribution, c.f. [W, Ch.6.2] or [MTW, Ch.21.13, 23]. This concludes the brief discussion on the origin of the equations in \S 0. The remainder of the paper is concerned with properties of their solutions.
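As a check on the scaling relation $c' = \rho^{4}\cdot c$ used in (1.7), we note the following computation; it is a sketch using only the definitions (1.1) and (1.4) and the behavior of curvature under the constant rescaling $g' = \rho^{-2}\cdot g$ in dimension 3, namely $dV' = \rho^{-3}dV, v' = \rho^{-3}v, |r'|^{2} = \rho^{4}|r|^{2}, s' = \rho^{2}s.$ First, $$\sigma' = (v')^{1/6}\bigl (\int (s')^{2}dV' \bigr )^{1/2} = \rho^{-1/2}v^{1/6}\cdot \rho^{1/2}\bigl (\int s^{2}dV \bigr )^{1/2} = \sigma , $$ consistent with the scale invariance of $v^{1/6}{\cal S}^{2}.$ Hence $$c' = \frac{\varepsilon}{6v'}\int|r'|^{2}dV' + \frac{1}{12\sigma' v'}\int (s')^{2}dV' = \frac{\varepsilon \rho^{3}}{6v}\cdot \rho\int|r|^{2}dV + \frac{\rho^{3}}{12\sigma v}\cdot \rho\int s^{2}dV = \rho^{4}\cdot c. $$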
3,706
60,678
en
train
0.114.2
\section{Non-Existence of ${\cal R}^{2}$ Solutions.} \setcounter{equation}{0} In this section, we prove Theorem 0.1, i.e. there are no non-trivial complete ${\cal R}^{2}$ solutions with non-negative scalar curvature. The proof will proceed in several steps following in broad outline the proofs of [An1, Thms. 6.1,6.2]. We begin with some preliminary material. Let $r_{h}(x)$ and $\rho (x)$ denote the $L^{2,2}$ harmonic radius and $L^{2}$ curvature radius of $(N, g)$ at $x$, c.f. [An1, Def.3.2] for the exact definition. Roughly speaking, $r_{h}(x)$ is the largest radius of the geodesic ball at $x$ on which there exists a harmonic chart for $g$ in which the metric differs from the flat metric by a fixed small amount, say $c_{o},$ in the $L^{2,2}$ norm. Similarly, $\rho (x)$ is the largest radius on which the $L^{2}$ average of the curvature is bounded by $c_{o}\cdot \rho (x)^{-2},$ for some fixed but small constant $c_{o} > $ 0. From the definition, \begin{equation} \label{e2.1} \rho (y) \geq dist(y, \partial B_{x}(\rho (x))), \end{equation} for all $y\in B_{x}(\rho (x)),$ and similarly for $r_{h}(x).$ The point $x$ is called (strongly) $(\rho ,d)$ buffered if $\rho (y) \geq d\cdot \rho (x),$ for all $y\in\partial B_{x}(\rho (x)),$ c.f. [An3, Def.3.7] and also [An1, \S 5]. This condition insures that there is a definite amount of curvature in $L^{2}$ away from the boundary in $B_{x}(\rho (x)).$ As shown in [An1,\S 4], the equations (0.2)-(0.3) form an elliptic system and hence satisfy elliptic regularity estimates. Thus within the $L^{2,2}$ harmonic radius $r_{h},$ one actually has $C^{\infty}$ bounds of the solution metric, and hence its curvature, away from the boundary; the bounds depend only on the size of $r_{h}^{-1}.$ Observe also that the ${\cal R}^{2}$ equations are scale-invariant. Given this regularity derived from (0.2), as mentioned in \S 0, the full equation (0.2) itself is not otherwise needed for the proof of Theorem 0.1; only the trace equation (0.3) is used from here on. For completeness, we recall some results from [An1, \S 2, \S 3] concerning convergence and collapse of Riemannian manifolds with uniform lower bounds on the $L^2$ curvature radius. Thus, suppose ${(B_{i}(R), g_i, x_i)}$ is a sequence of geodesic $R$-balls in complete non-compact Riemannian manifolds $(N_i, g_i)$, centered at base points $x_i \in N_i$. Suppose that $\rho_{i}(y_{i}) \geq \rho_{o}$, for some $\rho_{o} > 0$, for all $y_i \in B_{x_{i}}(R)$. The sequence is said to be {\it non-collapsing} if there is a constant $\nu_o > 0$ such that vol$B_{x_{i}}(1) \geq \nu_o$, for all $i$, or equivalently, the volume radius of $x_i$ is uniformly bounded below. In this case, it follows that a subsequence of ${(B_{i}(R), g_i, x_i)}$ converges in the weak $L^{2,2}$ topology to a limit ${(B_{\infty}(R), g_{\infty}, x_{\infty})}$. The convergence is uniform on compact subsets, and the limit is a manifold with $L^{2,2}$ Riemannian metric $g_{\infty}$, with base point $x_{\infty}$ = lim $x_i$. In case the sequence above is a sequence of ${\cal R}^2$ solutions, the convergence above is in the $C^{\infty}$ topology, by the regularity results above. The sequence as above is {\it collapsing} if vol$B_{x_{i}}(1) \rightarrow 0$, as $i \rightarrow \infty$. In this case, it follows that vol$B_{y_{i}}(1) \rightarrow 0$, for all $y_i \in B_{x_{i}}(R)$. 
Further, for any $\delta > 0$, and $i$ sufficiently large, there are domains $U_{i} = U_{i}(\delta)$, with $B_{x_{i}}(R - 2\delta) \subset U_i \subset B_{x_{i}}(R - \delta)$ such that, topologically, $U_i$ is a graph manifold. Thus, $U_i$ admits an F-structure, and the $g_i$-diameter of the fibers, (circles or tori), converges to 0 as $i \rightarrow \infty$. Now in case the sequence satisfies regularity estimates as stated above for ${\cal R}^2$ solutions, the curvature is uniformly bounded in $L^{\infty}$ on $U_i$. Hence, it follows from results of Cheeger-Gromov, Fukaya and Rong, (c.f. [An1, Thm.2.10]), that $U_i$ is topologically either a Seifert fibered space or a torus bundle over an interval. Further, the inclusion map of any fiber induces an injection into the fundamental group $\pi_{1}(U_i)$, and the fibers represent (homotopically) the collection of all very short essential loops in $U_i$. Hence, there is an infinite ${\Bbb Z}$ or ${\Bbb Z} \oplus {\Bbb Z}$ cover $(\widetilde U_i, g_i, \widetilde x_i)$ of $(U_i, g_i, x_i)$, ($\widetilde x_i$ a lift of $x_i$), obtained by unwrapping the fibers, which is a non-collapsing sequence. For if the lifted sequence of covers collapsed, by the same arguments again, $\widetilde U_i$, and hence $U_i$, must contain essential short loops; all of these however have already been unwrapped in $\widetilde U_i$. Alternately, one may pass to sufficiently large finite covers $\bar U_i$ of $U_i$ to unwrap the collapse, in place of the infinite covers. In this collapse situation, the limit metrics $(\widetilde U_{\infty}, g_{\infty}, \widetilde x_{\infty})$, (resp. $(\bar U_{\infty}, g_{\infty}, \bar x_{\infty})$) have free isometric ${\Bbb R}$, (resp. $S^1$), actions. These results on the behavior of non-collapsing and collapsing sequences will be used frequently below. We now begin with the proof of Theorem 0.1 itself. The following Lemma shows that one may assume without loss of generality that a complete ${\cal R}^{2}$ solution has uniformly bounded curvature. \begin{lemma} \label{l 2.1.} Let $(N, g)$ be a complete non-flat ${\cal R}^{2}$ solution. Then there exists another complete non-flat ${\cal R}^{2}$ solution $(N' , g' ),$ obtained as a geometric limit at infinity of $(N, g)$, which has uniformly bounded curvature, i.e. \begin{equation} \label{e2.2} |r|_{g'} \leq 1. \end{equation} \end{lemma} {\bf Proof:} We may assume that $(N, g)$ itself has unbounded curvature, for otherwise there is nothing to prove. It follows from the $C^{\infty}$ regularity of solutions mentioned above that the curvature $|r|$ is unbounded on a sequence $\{x_{i}\}$ in $(N, g)$ if and only if \begin{equation} \label{e2.3} \rho (x_{i}) \rightarrow 0. \end{equation} For such a sequence, let $B_{i} = B_{x_{i}}(1)$ and let $d_{i}(x) = dist(x_{i}, \partial B_{i}).$ Consider the scale-invariant ratio $\rho (x)/d_{i}(x),$ for $x\in B_{i},$ and choose points $y_{i}\in B_{i}$ realizing the minimum value of $\rho /d_{i}$ on $B_{i}.$ Since $\rho /d_{i}$ is infinite on $\partial B_{i}, y_{i}$ is in the interior of $B_{i}.$ By (2.3), we have $$\rho (y_{i})/d_{i}(y_{i}) \rightarrow 0, $$ and so in particular $\rho (y_{i}) \rightarrow $ 0. Now consider the sequence $(B_{i}, g_{i}, y_{i}),$ where $g_{i} = \rho (y_{i})^{-2}\cdot g.$ By construction, $\rho_{i}(y_{i}) =$ 1, where $\rho_{i}$ is the $L^{2}$ curvature radius w.r.t. 
$g_{i}$ and $\delta_{i}(y_{i}) = dist_{g_{i}}(y_{i}, \partial B_{i}) \rightarrow \infty .$ Further, by the minimality property of $y_{i},$ \begin{equation} \label{e2.4} \rho_{i}(x) \geq \rho_{i}(y_{i})\cdot \frac{\delta_{i}(x)}{\delta_{i}(y_{i})} = \frac{\delta_{i}(x)}{\delta_{i}(y_{i})} . \end{equation} It follows that $\rho_{i}(x) \geq \frac{1}{2},$ at all points $x$ of uniformly bounded $g_{i}$-distance to $y_{i},$ (for $i$ sufficiently large, depending on $dist_{g_{i}}(x, y_{i})).$ Consider then the pointed sequence $(B_{i}, g_{i}, y_{i}).$ If this sequence, (or a subsequence), is not collapsing at $y_{i},$ then the discussion above, applied to $\{(B_{y_{i}}(R_j), g_i, y_i)\}$, with $R_j \rightarrow \infty$, implies that a diagonal subsequence converges smoothly to a limit $(N' , g' , y)$, $y =$ lim $y_{i}.$ The limit is a complete ${\cal R}^{2}$ solution, (since $\delta_{i}(y_{i}) \rightarrow \infty )$ satisfying $\rho \geq \frac{1}{2}$ everywhere, and $\rho (y) = 1$, since $\rho$ is continuous under smooth convergence to limits, (c.f. [An1, Thm. 3.5]). Hence the limit is not flat. By the regularity estimates above, the curvature $|r|$ is pointwise bounded above. A further bounded rescaling then gives (2.2). On the other hand, suppose this sequence is collapsing at $y_{i}.$ Then from the discussion preceding Lemma 2.1, it is collapsing everywhere within $g_{i}$-bounded distance to $x_{i}$ along a sequence of injective F-structures. Hence one may pass to suitable covers $\widetilde U_{i}$ of $U_i$ with $B_{x_{i}}(R_i - 1) \subset U_i \subset B_{x_{i}}(R)$, for some sequence $R_{i} \rightarrow \infty $ as $i \rightarrow \infty .$ This sequence is not collapsing and thus one may apply the reasoning above to again obtain a limit complete non-flat ${\cal R}^{2}$ solution satisfying (2.2), (which in addition has a free isometric ${\Bbb R}$-action). {\qed
2,859
60,678
en
train
0.114.3
} Let $v(r)$ = vol$B_{x_{o}}(r),$ where $B_{x_{o}}(r)$ is the geodesic $r$-ball about a fixed point $x_{o}$ in $(N, g)$. Let $J^{2}$ be the Jacobian of the exponential map exp: $T_{x_{o}}N \rightarrow N$, so that $$v(r) = \int_{S_{o}}\int_{0}^{r}J^{2}(s,\theta )dsd\theta , $$ where $S_{o}$ is the unit sphere in $T_{x_{o}}N.$ Thus, $$v' (r) = \int_{S_{o}}J^{2}(r,\theta )d\theta , $$ is the area of the geodesic sphere $S_{x_{o}}(r).$ The next result proves Theorem 0.1 under reasonably weak conditions, and will also be needed for the proof in general. \begin{proposition} \label{p 2.2.} Let (N, g) be a complete ${\cal R}^{2}$ solution on a 3-manifold N, with bounded curvature and $s \geq $ 0. Suppose there are constants $\varepsilon > $ 0 and $c < \infty $ such that \begin{equation} \label{e2.5} v(r) \leq c\cdot r^{4-\varepsilon}, \end{equation} for all $r \geq $ 1. Then (N, g) is flat. \end{proposition} {\bf Proof:} Let $t(x)$= dist$(x, x_{o})$ be the distance function from $x_{o}\in N$ and let $\eta = \eta (t)$ be a non-negative cutoff function, of compact support to be determined below, but initially satisfying $\eta' (t) \leq $ 0. Multiply (0.3) by $\eta^{4}$ and apply the divergence theorem, (this is applicable since $\eta (t)$ is a Lipschitz function on $N$), to obtain $$\int\eta^{4}|r|^{2} = 3\int<\nabla s, \nabla\eta^{4}> . $$ Now one cannot immediately apply the divergence theorem again, since $t$ and hence $\eta $ is singular at the cut locus $C$ of $x_{o}.$ Let $U_{\delta}$ be the $\delta$-tubular neighborhood of $C$ in $N$. Then applying the divergence theorem on $N \setminus U_{\delta}$ gives $$\int_{N \setminus {U_{\delta}}}<\nabla s, \nabla\eta^{4}> = -\int_{N \setminus {U_{\delta}}}s\Delta\eta^{4} + \int_{\partial (N \setminus {U_{\delta}})}s<\nabla\eta^{4}, \nu> , $$ where $\nu $ is the unit outward normal. Since $<\nu , \nabla t> > $ 0 on $\partial (N \setminus {U_{\delta}})$ and $\eta' \leq $ 0, the hypothesis $s \geq $ 0 implies that the boundary term is non-positive. Hence, $$\int_{N \setminus U_{\delta}} \eta^{4}|r|^{2} \leq - 3\int_{N \setminus U_{\delta}} s\Delta\eta^{4}.$$ We have $\Delta\eta^{4} = 4\eta^{3}\Delta\eta + 12\eta^{2}|\nabla \eta|^{2} \geq 4\eta^{3}\Delta\eta ,$ so that again since $s \geq $ 0, $$\int_{N \setminus U_{\delta}} \eta^{4}|r|^{2} \leq - 12\int_{N \setminus U_{\delta}} s\eta^{3}\Delta\eta . $$ Further $\Delta\eta = \eta'\Delta t + \eta'' ,$ so that $$\int_{N \setminus U_{\delta}} \eta^{4}|r|^{2} \leq - 12\int_{N \setminus U_{\delta}} s\eta^{3}\eta' \Delta t + s\eta^{3}\eta''. $$ It is standard, c.f. [P,9.1.1], that off $C$, $$\Delta t = H = 2\frac{J'}{J}, $$ where $H$ is the mean curvature of $S_{x_{o}}(r)$ and $J = (J^{2})^{1/2}.$ Further, since the curvature of $(N, g)$ is bounded, standard comparison geometry, (c.f. [Ge] for example), implies that there is a constant $C < \infty$ such that \begin{equation} \label{e2.6} H(x) \leq C, \end{equation} for all $x$ outside $B_{x_{o}}(1) \subset N$. (Of course there is no such lower bound for $H$). Hence, since $s \geq 0$ and $\eta' \leq 0$, it follows that $$\int_{N \setminus U_{\delta}} \eta^{4}|r|^{2} \leq - 12\int_{N \setminus U_{\delta}} s\eta^{3}\eta' H^{+} + s\eta^{3}\eta'',$$ where $H^{+}$ = max$(H, 0)$. The integrand $-s\eta'H^{+}$ is positive and bounded. 
Hence, since the cut locus $C$ is of measure 0, we may let $\delta \rightarrow 0$ and obtain $$\int_{N} \eta^{4}|r|^{2} \leq - 12\int_{N} s\eta^{3}\eta' H^{+} + s\eta^{3}\eta''.$$ Now fix any $R < \infty $ and choose $\eta = \eta (t)$ so that $\eta \equiv $ 1 on $B_{x_{o}}(R), \eta \equiv $ 0 on $N \setminus B_{x_{o}}(2R), \eta' \leq $ 0, and $|\eta'| \leq c/R, |\eta''| \leq c/R^{2}.$ Using the H\"older and Cauchy inequalities, we obtain \begin{equation} \label{e2.7} \int\eta^{4}|r|^{2} \leq \mu\int\eta^{4}s^{2} + \mu^{-1}\int\eta^{2}(\eta' )^{2}(H^{+})^{2} + \mu^{-1}\int\eta^{2}(\eta'' )^{2}, \end{equation} for any $\mu > $ 0 small. Since $|r|^{2} \geq s^{2}/3,$ by choosing $\mu $ sufficiently small the first term on the right in (2.7) may be absorbed into the left. Thus we have on $B(R)$= $B_{x_{o}}(R),$ for suitable constants $c_{i}$ independent of $R$, \begin{equation} \label{e2.8} \int_{B(R)}|r|^{2} \leq c_{1}\int_{B(2R)}(R^{-2}(H^{+})^{2} + R^{-4}) \leq c_{2}R^{-2}\int_{B(2R)}(H^{+})^{2} + c_{3}R^{-\varepsilon}, \end{equation} where the last inequality uses (2.5). We now claim that there is a constant $K < \infty$, (depending on the geometry of $(N, g)$), such that \begin{equation} \label{e2.9} \Delta t(x) \cdot \rho(x) \leq K, \end{equation} for all $x \in N$ with $t(x) \geq 10$, with $x \notin C$. We will assume (2.9) for the moment and complete the proof of the result; following this, we prove (2.9). Thus, substituting (2.9) in (2.8), and using the definition of $\rho$, we obtain $$\int_{B(R)}|r|^{2} \leq c_{4}R^{-2}\int_{B(2R)}|r| + c_{3}R^{-\varepsilon}.$$ Applying the Cauchy inequality to the $|r|$ integral then gives $$\int_{B(R)}|r|^{2} \leq c_{4}R^{-2}\bigl (\int_{B(2R)}|r|^{2} \bigr )^{1/2}vol B(2R)^{1/2} + c_{3}R^{-\varepsilon}.$$ Now from the volume estimate (2.5) and the uniform bound on $|r|$, there exists a sequence $R_i \rightarrow \infty$ and a constant $C < \infty$ such that $$\int_{B(2R_i)}|r|^{2} \leq C\int_{B(R_i)}|r|^{2}.$$ Hence, setting $R = R_i$ and combining these estimates gives $$\int_{B(R_i)}|r|^{2} \leq c_{5}\frac{vol B(2R_i)}{R_{i}^{4}}.$$ Taking the limit as $i \rightarrow \infty$ and using (2.5), it follows that $(N, g)$ is flat, as required. Thus, it remains to establish (2.9). We prove (2.9) by contradiction. Thus, suppose there is a sequence $\{x_i\} \subset N \setminus C$ such that \begin{equation} \label{e2.10} \Delta t(x_i) \cdot \rho(x_i) \rightarrow \infty, \end{equation} as $i \rightarrow \infty$. Note that (2.10) is scale invariant and that necessarily $t(x_i) \rightarrow \infty$. In fact, by (2.6), (2.10) implies that $\rho(x_i) \rightarrow \infty$ also. Note that \begin{equation} \label{e2.11} \rho(y) \leq 2t(y), \end{equation} for any $y$ such that $t(y)$ is sufficiently large, since $(N, g)$ is assumed not flat. We rescale the manifold $(N, g)$ at $x_i$ by setting $g_i = \lambda_{i}^{2} \cdot g$, where $\lambda_{i} = \Delta t(x_i)$. Thus, w.r.t. $g_i$, we have $\Delta_{g_{i}}t_{i}(x_{i}) = 1$, where $t_{i}(y)$ = $dist_{g_{i}}(y, x_o) = \lambda_i t(y)$. By the scale invariance of (2.10), it follows that \begin{equation} \label{e2.12} \rho_i(x_i) \rightarrow \infty , \end{equation} where $\rho_i = \lambda_i \cdot \rho$ is the $L^2$ curvature radius w.r.t. $g_i$. By (2.11), this implies that $t_i(x_i) \rightarrow \infty$, so that the base point $x_o$ diverges to infinity in the $\{x_i\}$ based sequence $(N, g_i, x_i)$. Hence renormalize $t_i$ by setting $\beta_{i}(y) = t_{i}(y) - t_{i}(x_{i}),$ so that $\beta_{i}(x_{i}) = 0,$ as in the construction of Busemann functions.
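To spell out the scale invariance used in passing from (2.10) to (2.12): under the rescaling $g_{i} = \lambda_{i}^{2}\cdot g$ one has $t_{i} = \lambda_{i}t,$ $\Delta_{g_{i}} = \lambda_{i}^{-2}\Delta ,$ and $\rho_{i} = \lambda_{i}\cdot \rho ,$ so that $$\Delta_{g_{i}}t_{i}(x_{i})\cdot \rho_{i}(x_{i}) = \lambda_{i}^{-1}\Delta t(x_{i})\cdot \lambda_{i}\rho (x_{i}) = \Delta t(x_{i})\cdot \rho (x_{i}) \rightarrow \infty . $$ Since the choice $\lambda_{i} = \Delta t(x_{i})$ gives $\Delta_{g_{i}}t_{i}(x_{i}) = 1,$ this is exactly (2.12).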
Thus, we have a sequence of ${\cal R}^2$ solutions $(N, g_i, x_i)$ based at $\{x_i\}$. From the discussion preceding Lemma 2.1, it follows that a subsequence converges smoothly to an ${\cal R}^2$ limit metric $(N_{\infty}, g_{\infty}, x_{\infty})$, passing to suitable covers as described in the proof of Lemma 2.1 in the case of collapse. By (2.12), it follows that $$N_{\infty} = {\Bbb R}^3, $$ (or a quotient of ${\Bbb R}^3$), and $g_{\infty}$ is the complete flat metric. Now the smooth convergence also gives $$\Delta_{g_{\infty}}\beta(x_{\infty}) = 1, $$ where $\beta$, the limit of $\beta_i$, is a Busemann function on a complete flat manifold. Hence $\beta$ is a linear coordinate function. This of course implies $\Delta_{g_{\infty}}\beta(x_{\infty}) = 0$, giving a contradiction. This contradiction then establishes (2.9). {\qed } We remark that this result mainly requires the hypothesis $s \geq $ 0 because of possible difficulties at the cut locus. There are other hypotheses that allow one to overcome this problem. For instance if $(N, g)$ is complete as above and (2.5) holds, (but without any assumption on $s$), and if there is a smooth approximation $\Roof{t}{\widetilde}$ to the distance function $t$ such that $|\Delta\Roof{t}{\widetilde}| \leq c/\Roof{t}{\widetilde},$ (for example if $|r| \leq c/t^{2}$ for some $c < \infty ),$ then $(N, g)$ is flat. The proof is the same as above, (in fact even simpler in this situation). Next we need the following simple result, which allows one to control the full curvature in terms of the scalar curvature. This result is essentially equivalent to [An1, Lemma 5.1]. \begin{lemma} \label{l 2.3.} Let $g$ be an ${\cal R}^{2}$ solution, defined in a geodesic ball $B = B_{x}(1),$ with $r_{h}(x) =$ 1. Then for any small $\mu > $ 0, there is a constant $c_{1} = c_{1}(\mu )$ such that \begin{equation} \label{e2.13} |r|^{2}(y) \leq c_{1}\cdot ||s||_{L^{2}(B)} , \end{equation} for all $y\in B(1-\mu ) = B_{x}(1-\mu ).$ In particular, if $||s||_{L^{2}(B)}$ is sufficiently small, then $g$ is almost flat, i.e. has almost 0 curvature, in $B(1-\mu ).$ Further, if $s \geq $ 0 in $B(1)$, then there is a constant $c_{2} = c_{2}(\mu )$ such that \begin{equation} \label{e2.14} ||s||_{L^{2}(B(1-\mu ))} \leq c_{2}s(x). \end{equation} \end{lemma} {\bf Proof:} Let $\eta $ be a non-negative cutoff function satisfying $\eta \equiv $ 1 on $B(1-\frac{\mu}{2}), \eta \equiv $ 0 on $A(1-\frac{\mu}{4},1),$ and $|\nabla \eta| \leq c/\mu .$ Pair the trace equation (0.3) with $\eta^{2}$ to obtain $$\int_{B}\eta^{2}|r|^{2} = - 3\int_{B}s\Delta\eta \leq c\cdot (\int_{B}s^{2})^{1/2}(\int_{B}(\Delta\eta )^{2})^{1/2}. $$ Since $r_{h}(x) =$ 1, $\eta $ may be chosen so that the $L^{2}$ norm of $\Delta\eta $ is bounded in terms of $\mu $ only. It follows that $$\int_{B(1-\frac{\mu}{2})}|r|^{2} \leq c(\mu )||s||_{L^{2}(B)}. $$ One obtains then an $L^{\infty},$ (and in fact $C^{k,\alpha}),$ estimate for $|r|^{2}$ by elliptic regularity, as discussed preceding Lemma 2.1. For the second estimate (2.14), note that by (0.3), $s$ is a superharmonic function, assumed non-negative in $B(1)$. Since the metric $g$ is bounded in $L^{2,2}$ on $B(1)$, and hence bounded in $C^{1/2}$ by Sobolev embedding, the estimate (2.14) is an immediate consequence of the DeGiorgi-Nash-Moser estimates for non-negative supersolutions of divergence form elliptic equations, c.f. [GT,Thm.8.18]. {\qed
} The behavior of the scalar curvature, and thus of the full curvature, at infinity is the central focus of the remainder of the proof. For example, Lemma 2.3 leads easily to the following special case of Theorem 0.1. \begin{lemma} \label{l 2.4.} Suppose (N, g) is a complete ${\cal R}^{2}$ solution satisfying \begin{equation} \label{e2.15} limsup_{t\rightarrow\infty} \ t^{2}\cdot s = 0, \end{equation} where t(x) $=$ dist(x, $x_{o}).$ Then (N, g) is flat. \end{lemma} {\bf Proof:} We claim first that (2.15) implies that \begin{equation} \label{e2.16} liminf_{t\rightarrow\infty} \ \rho /t \geq c_{o}, \end{equation} for some constant $c_{o} > $ 0. For suppose (2.16) were not true. Then there is a sequence $\{x_{i}\}$ in $N$ with $t_{i} = t(x_{i}) \rightarrow \infty ,$ such that $\rho (x_{i})/t(x_{i}) \rightarrow $ 0.
We may choose $x_{i}$ so that it realizes approximately the minimal value of the ratio $\rho /t$ for $ \frac{1}{2}t_{i} \leq t \leq 2t_{i},$ as in the proof of Lemma 2.1. For example, choose $x_{i}$ so that it realizes the minimal value of the ratio $$\rho (x)/dist(x, \partial A({\tfrac{1}{2}}t_{i}, 2t_{i})) $$ for $x\in A(\frac{1}{2}t_{i}, 2t_{i}).$ Such a choice of $x_{i}$ implies that $x_{i}$ is strongly $(\rho ,\frac{1}{2})$ buffered, i.e. $\forall y_{i}\in\partial B_{x_{i}}(\rho (x_{i})),$ $$\rho (y_{i}) \geq \tfrac{1}{2}\rho (x_{i}), $$ c.f. the beginning of \S 2 and compare with (2.4). Now rescale the metric $g$ by the $L^{2}$ curvature radius $\rho $ at $x_{i},$ i.e. set $g_{i} = \rho (x_{i})^{-2}\cdot g.$ Thus $\rho_{i}(x_{i}) =$ 1, where $\rho_{i} = \rho (g_{i}).$ As in the proof of Lemma 2.1, if the ball $(B_{i}, g_{i}), B_{i} = (B_{x_{i}}(\frac{11}{8}), g_{i})$ is sufficiently collapsed, pass to sufficiently large covers of this ball to unwrap the collapse, as discussed preceding Lemma 2.1. We assume this is done, and do not change the notation for the collapse case. Since we are assuming that $\rho (x_{i}) << t(x_{i}),$ by (2.15) and scaling properties, it follows that $s_{i},$ the scalar curvature of $g_{i},$ satisfies \begin{equation} \label{e2.17} s_{i} \rightarrow 0, \end{equation} uniformly on $(B_{i}(\frac{5}{4}), g_{i})$. By Lemma 2.3, we obtain \begin{equation} \label{e2.18} |r_{i}| \rightarrow 0, \end{equation} uniformly on $(B_{x_{i}}(\frac{9}{8}), g_i)$. However, since $\rho_{i}(x_{i}) =$ 1, and vol$B_{x_{i}}(1) > \nu_o > 0$, the ball $(B_{x_{i}}(\frac{9}{8}), g_i)$ has a definite amount of curvature in $L^{2}.$ This contradiction gives (2.16). Now apply the same reasoning to any sequence $\{y_{i}\}$ in $N$, with $t(y_{i}) \rightarrow \infty ,$ but with respect to the blow-down metrics $g_{i} = t(y_{i})^{-2}\cdot g,$ so that by (2.16), $\rho_{i}(y_{i}) \geq c > $ 0. Since (2.17) remains valid, apply Lemma 2.3 again to obtain (2.18) on a neighborhood of fixed $g_{i}$-radius about $y_{i}.$ The estimate (2.18) applied to the original (unscaled) metric $g$ means that \begin{equation} \label{e2.19} limsup_{t\rightarrow\infty} \ t^{2}\cdot |r| = 0, \end{equation} improving the estimate (2.15). Now standard comparison estimates on the Riccati equation $H' + \frac{1}{2}H^2 \leq |r|$, (c.f. [P, Ch.9] and the proof of Prop. 2.2), show that (2.19) implies that the volume growth of $(N, g)$ satisfies $$v(t) \leq c\cdot t^{3+\varepsilon}, $$ for any given $\varepsilon > $ 0 with $c = c(\varepsilon ) < \infty .$ The maximum principle applied to the trace equation (0.3), together with (2.15), implies that $s > 0$ everywhere, unless $s \equiv 0$, in which case the trace equation gives $|r| \equiv 0$ and $(N, g)$ is already flat. Thus, the result follows from Proposition 2.2. {\qed
} The proof of Theorem 0.1 now splits into two cases, following the general situation in [An1, Thms. 6.1, 6.2] respectively. The first case below can basically be viewed as a local and quantitative version of Lemma 2.4. The result roughly states that if a complete ${\cal R}^{2}$ solution with $s \geq $ 0 is weakly asymptotically flat in some direction, then it is flat. \begin{theorem} \label{t 2.5.} Let (N, g) be a complete ${\cal R}^{2}$ solution with non-negative scalar curvature. Suppose there exists a sequence $x_{i}$ in (N, g) such that \begin{equation} \label{e2.20} \rho^{2}(x_{i})\cdot s(x_{i}) \rightarrow 0 \ \ {\rm as} \ i \rightarrow \infty . \end{equation} Then (N, g) is flat. \end{theorem} {\bf Proof:} The proof follows closely the ideas in the proof of [An1, Thms. 5.4 and 6.1]. Throughout the proof below, we let $\rho $ denote the $L^{4}$ curvature radius as opposed to the $L^{2}$ curvature radius. As noted in [An1,(5.6)], the $L^{2}$ and $L^{4}$ curvature radii are uniformly equivalent to each other on $(N, g)$, since as discussed preceding Lemma 2.1, the metric satisfies an elliptic system, and regularity estimates for such equations give $L^{4}$ bounds in terms of $L^{2}$ bounds. In particular, Lemma 2.3 holds with the $L^{4}$ curvature radius in place of the $L^{2}$ radius. Further, as discussed preceding Lemma 2.1, if a ball $B(\rho )\subset (N, g)$ is sufficiently collapsed, we will always assume below that the collapse is unwrapped by passing to the universal cover. Thus, the $L^{4}$ curvature radius and $L^{2,4}$ harmonic radius are uniformly equivalent to each other, c.f. [An1, (3.8)-(3.9)]. Let $\{x_{i}\}$ be a sequence satisfying (2.20). As in the proof of Lemma 2.4, (c.f. (2.17)ff), a subsequence of the rescaled metrics $g_{i} = \rho (x_{i})^{-2}\cdot g$ converges to a flat metric on uniformly compact subsets of $(B_{x_{i}}(1), g_{i}),$ unwrapping to the universal cover in case of collapse. Thus, the $(L^{4})$ curvature radius $\rho_{i}(y_{i})$ w.r.t. $g_{i}$ necessarily satisfies $\rho_{i}(y_{i}) \rightarrow $ 0, for some $y_{i}\in (\partial B_{x_{i}}(1), g_{i}),$ as $i \rightarrow \infty .$ Pick $i_{o}$ sufficiently large, so that $g_{i_{o}}$ is very close to the flat metric. We relabel by setting $g^{1} = g_{i_{o}}, q^{1} = x_{i_{o}}$ and $B = B^{1}= (B_{q^{1}}(1), g^{1}).$ For the moment, we work in the metric ball $(B^{1}, g^{1}).$ It follows that for any $\delta_{1}$ and $\delta_{2} > $ 0, we may choose $i_{o}$ such that \begin{equation} \label{e2.21} \rho (q) \leq \delta_{1}\rho (q_{1}), \end{equation} for some $q\in\partial B,$ and (from (2.20)), \begin{equation} \label{e2.22} s(q^{1}) \leq \delta_{2}, \end{equation} where $\rho $ and $s$ are taken w.r.t. $g_{1}.$ Here both $\delta_{1}$ and $\delta_{2}$ are assumed to be sufficiently small, (for reasons to follow), and further $\delta_{2}$ is assumed sufficiently small compared with $\delta_{1}$ but sufficiently large compared with $\delta_1{}^{2}.$ For simplicity, and to be concrete, we set \begin{equation} \label{e2.23} \delta_{2} = \delta_1{}^{3/2}, \end{equation} and assume that $\delta_{1}$ is (sufficiently) small. We use the trace equation (0.3), i.e. \begin{equation} \label{e2.24} \Delta s = -\tfrac{1}{3}|r|^{2}, \end{equation} on $(B, g^{1})$ to analyse the behavior of $s$ in this scale near $\partial B;$ recall that the trace equation is scale invariant. 
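To make the scale invariance explicit, note that under a constant rescaling $g_{\lambda} = \lambda^{2}\cdot g$ one has $s_{\lambda} = \lambda^{-2}s,$ $\Delta_{\lambda} = \lambda^{-2}\Delta $ and $|r_{\lambda}|^{2} = \lambda^{-4}|r|^{2},$ (where the subscript $\lambda $ denotes the corresponding quantity of $g_{\lambda}),$ so that $$\Delta_{\lambda}s_{\lambda} = \lambda^{-4}\Delta s = -\tfrac{1}{3}\lambda^{-4}|r|^{2} = -\tfrac{1}{3}|r_{\lambda}|^{2}, $$ i.e. (2.24) holds in every scale.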
From the Green representation formula, we have for $x\in B$ \begin{equation} \label{e2.25} s(x) = \int_{\partial B}P(x,Q)d\mu_{Q} - \int_{B}\Delta s(y)G(x,y)dV_{y}, \end{equation} where $P$ is the Poisson kernel and $G$ is the positive Green's function for the Laplacian on $(B, g^{1}).$ As shown in [An1, Lemma 5.2], the Green's function is uniformly bounded in $L^{2}(B).$ The same holds for $\Delta s$ by (2.24), since the $L^{4}$ curvature radius of $g^{1}$ at $q^{1}$ is 1. Thus, the second term in (2.25) is uniformly bounded, i.e. there is a fixed constant $C_{o}$ such that \begin{equation} \label{e2.26} s(x) \leq \int_{\partial B}P(x,Q)d\mu_{Q} + C_{o}. \end{equation} The Radon measure $d\mu $ is a positive measure on $\partial B,$ since $s > $ 0 everywhere. Further, the total mass of $d\mu $ is at most $\delta_{2},$ by (2.22). By [An1, Lemma 5.3], the Poisson kernel $P(x,Q)$ satisfies $$P(x,Q) \leq c_{1}\cdot t_{Q}(x)^{-2}, $$ where $t_{Q}(x)$ = dist$(x,Q)$ and $c_{1}$ is a fixed positive constant. Hence, for all $x\in B,$ \begin{equation} \label{e2.27} s(x) \leq c_{1}\cdot \delta_{2}\cdot t^{-2}(x) + C_{o}, \end{equation} where $t(x)$= dist$(x,\partial B).$ Now suppose the estimate (2.27) can be improved in the sense that there exist points $q^{2}\in B^{1}$ s.t. \begin{equation} \label{e2.28} \rho^{1}(q^{2}) \leq \delta_{1}, \end{equation} and \begin{equation} \label{e2.29} s(q^{2}) \leq \tfrac{1}{2}\delta_{2}\cdot (\rho^{1}(q^{2}))^{-2} + C_{o}, \end{equation} where $\rho^{1}(x) = \rho (x, g^{1}).$ Note that $\rho^{1}(x) \geq t(x)$ by (2.1), so that the difference between (2.27) and (2.29) is only in the factors $c_{1}$ and $\frac{1}{2}.$ We have \begin{equation} \label{e2.30} \delta_{2}\cdot (\rho^{1}(q^{2}))^{-2} \geq \delta_{2}\delta_{1}^{-2} >> 1, \end{equation} where the last estimate follows from the assumption (2.23) on the relative sizes of $\delta_{1}$ and $\delta_{2}.$ Thus, the term $C_{o}$ in (2.29) is small compared with its partner in (2.29), and so \begin{equation} \label{e2.31} s(q^{2}) \leq \delta_{2}\cdot (\rho^{1}(q^{2}))^{-2}. \end{equation} We may then repeat the analysis above on the new scale $g^{2} = (\rho^{1}(q^{2}))^{-2}\cdot g^{1}$ and the $g^{2}$ geodesic ball $B^{2} = B_{q^{2}}^{2}(1),$ so that $\rho^{2}(q^{2}) =$ 1. Observe that the product $\rho^{2}\cdot s$ is scale-invariant, so that in the $g^{2}$ scale, $s$ is much smaller than $s$ in the $g^{1}$ scale. In the $g^{2}$ scale, (2.31) becomes the statement \begin{equation} \label{e2.32} s(q^{2}) \leq \delta_{2}, \end{equation} as in (2.22). We will show below in Lemma 2.6 that one may continue in this way indefinitely, i.e. as long as there exist points $q^{k}\in B^{k-1},$ with $\rho (q^{k}) \leq \delta_{1}\rho (q^{k-1})$ as in (2.28), then there exist such points $q^{k}$ satisfying in addition \begin{equation} \label{e2.33} s(q^{k}) \leq \delta_{2}, \end{equation} where $s$ is the scalar curvature of $g^{k} = (\rho^{k-1}(q_{k}))^{-2}\cdot g^{k-1},$ as in (2.22) or (2.32). On the one hand, we claim this sequence $\{q^{k}\}$ must terminate at some value $k_{o}.$ Namely, return to the original metric $(N, g)$. By construction, we have $\rho (q^{k}) \leq \delta_1{}^{k}\rho (q^{1}).$ The value $\rho (q^{1})$ is some fixed number, (possibly very large), say $\rho (q^{1}) = C$, so that \begin{equation} \label{e2.34} \rho (q^{k}) \leq C\cdot \delta_1{}^{k} \rightarrow 0, \ \ {\rm as} \ \ k \rightarrow \infty . 
\end{equation} However, observe that $dist_{g}(q^{k}, q^{1})$ is uniformly bounded, independent of $k$. Since $(N, g)$ is complete and smooth, $\rho $ cannot become arbitrarily small in compact sets of $N$. Hence (2.34) prevents $k$ from becoming arbitrarily large. On the other hand, if this sequence terminates at $q^{k}, k = k_{o},$ then necessarily \begin{equation} \label{e2.35} \rho (q) \geq \delta_{1}\cdot \rho (q^{k}), \end{equation} for all $q\in\partial B^{k}.$ However the construction gives $s(q^{k}) \leq \delta_{2},$ where $s$ is the scalar curvature of $g_{k},$ with $\rho^{k}(q^{k}) =$ 1. This situation contradicts Lemma 2.3 if $\delta_{2}$ is chosen sufficiently small compared with $\delta_{1},$ i.e. in view of (2.23), $\delta_{1}$ is sufficiently small. It follows that the proof of Theorem 2.5 is completed by the following: \begin{lemma} \label{l 2.6.} Let (N, g) be an ${\cal R}^{2}$ solution, $x\in N,$ and let $g$ be scaled so that $\rho (x) =$ 1, where $\rho $ is the $L^{4}$ curvature radius. Suppose that $\delta_{1}$ is sufficiently small, $\delta_{2} = \delta_1{}^{3/2},$ \begin{equation} \label{e2.36} s(x) \leq \delta_{2}, \end{equation} and, for some $y_{o}\in B_{x}(1),$ \begin{equation} \label{e2.37} \rho (y_{o}) \leq \delta_{1}\cdot \rho (x) = \delta_{1}. \end{equation} Then there exists an absolute constant $K < \infty $ and a point $y_{1}\in B_{x}(1),$ with $\rho (y_{1}) \leq K\cdot \rho (y_{o}),$ such that \begin{equation} \label{e2.38} s(y_{1}) \leq \delta_{2}\cdot (\rho (y_{1}))^{-2}. \end{equation} \end{lemma} {\bf Proof:} By (2.27), we have \begin{equation} \label{e2.39} s(y) \leq c_{1}\cdot \delta_{2}\cdot t^{-2}(y) + C_{o}, \end{equation} for all $y\in B.$ If there is a $y$ in $B_{y_{o}}(2\rho (y_{o})) \cap B_x(1)$ such that (2.38) holds at $y$, then we are done, so suppose there is no such $y$. Consider the collection $\beta$ of points $z\in B$ for which an opposite inequality to (2.38) holds, i.e. \begin{equation} \label{e2.40} s(z) \geq \tfrac{1}{10}\delta_{2}\cdot t^{-2}(z), \end{equation} for $t(z)$ very small; (the factor $\frac{1}{10}$ may be replaced by any other small positive constant). From (2.26), this implies that $s$ resembles a multiple of the Poisson kernel near $z$. More precisely, suppose (2.40) holds for all $z$ within a small ball $B_{z_{o}}(\nu ),$ for some $z_{o}\in\partial B.$ Given a Borel set $E \subset \partial B,$ let $m(E)$ denote the mass of the measure $d\mu $ from (2.26), of total mass at most $\delta_{2}.$ Then the estimate (2.40) implies \begin{equation} \label{e2.41} m(B_{z_{o}}(\nu )) \geq \delta_{2}\cdot \varepsilon_{o}, \end{equation} where $\nu $ may be made arbitrarily small if (2.40) holds for $z\in\beta$ and $t(z)$ is sufficiently small. The constant $\varepsilon_{o}$ depends only on the choice of $\frac{1}{10}$ in (2.40). Thus, part of $d\mu $ is weakly close to a multiple of the Dirac measure at some point $z_{o}\in\partial B$ near $\beta$; (the Dirac measure at $z_{o}$ generates the Poisson kernel $P(x, z_{o})).$ The idea now is that there can be only a bounded number $n_{o}$ of points satisfying this property, with $n_{o}$ depending only on the ratio $c_{1}/\varepsilon_{o}.$
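In rough terms, if the balls $B_{z_{o}}(\nu )$ arising in this way can be taken to be pairwise disjoint, then summing (2.41) over them and using the fact that the total mass of $d\mu $ is at most $\delta_{2}$ gives $$n_{o}\cdot \delta_{2}\varepsilon_{o} \leq m(\partial B) \leq \delta_{2}, \ \ {\rm i.e.} \ \ n_{o} \leq \varepsilon_{o}^{-1}. $$ This heuristic count is made precise in the construction below.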
Thus we claim that there is a constant $K_{1} < \infty ,$ depending only on the choice of $\frac{1}{10}$ in (2.40), and points $p_{1}\in\partial B$ such that \begin{equation} \label{e2.42} dist(p_{1},y_{o}) \leq K_{1}\cdot \rho (y_{o}), \end{equation} and \begin{equation} \label{e2.43} m(B_{p_{1}}(\rho (p_{1}))) \leq \delta_{2}\cdot \varepsilon_{o}. \end{equation} To see this, choose first $p'\in\partial B(1),$ as close as possible to $y_{o}$ such that \begin{equation} \label{e2.44} B_{p'}(\tfrac{1}{2}\rho (p' ))\cap B_{y_{o}}(\rho (y_{o})) = \emptyset . \end{equation} Note that in general $$\rho (p' ) \leq dist(p' , y_{o}) + \rho (y_{o}), $$ so that (2.44) implies $$dist(p' , y_{o}) \leq 3\rho (y_{o}), $$ and thus \begin{equation} \label{e2.45} \rho (p' ) \leq 4\rho (y_{o}). \end{equation} If $p' $ satisfies (2.43), then set $p_{1} = p' .$ If not, so $m(B_{p'}(\rho (p' ))) > \delta_{2}\cdot \varepsilon_{o},$ then repeat this process with $p' $ in place of $y_{o}.$ Since the total mass is $\delta_{2},$ this can be continued only a bounded number $K_{1}$ of times. Clearly, $$C^{-1}\rho (y_{o}) \leq \rho (p_{1}) \leq C\cdot \rho (y_{o}), $$ where $C = C(K_{1}).$ Now choose $y_{1}\in B_{x}(1)\cap B_{p_{1}}(\rho (p_{1})),$ say with $t(y_{1}) = \frac{1}{2}\rho (p_{1}).$ For such a choice, we then have $$s(y_{1}) \leq \tfrac{1}{10}\delta_{2}\rho (y_{1})^{-2}, $$ and the result follows. {\qed
} Lemma 2.6 also completes the proof of Theorem 2.5. Finally consider the complementary case to Theorem 2.5. This situation is handled by the following result, which shows that the assumption (2.20) must hold. This result generalizes [An1, Thm.6.2]. \begin{theorem} \label{t 2.7.} Let $(N, g)$ be a complete ${\cal R}^{2}$ solution with non-negative scalar curvature and uniformly bounded curvature. Then \begin{equation} \label{e2.46} liminf_{t\rightarrow\infty} \ \rho^{2}s = 0. \end{equation} \end{theorem} The proof of Theorem 2.7 will proceed by contradiction in several steps. Thus, we assume throughout the following that there is some constant $d_{o} > $ 0 such that, for all $x\in (N, g)$, \begin{equation} \label{e2.47} s(x) \geq d_{o}\cdot \rho (x)^{-2}. \end{equation} Note first that for any complete non-flat manifold, $\rho (x) \leq 2t(x)$ for $t(x)$ sufficiently large, (as in (2.11)), so that (2.47) implies, for some $d > $ 0, \begin{equation} \label{e2.48} s(x) \geq d\cdot t(x)^{-2}. \end{equation} For reasons to follow later, we assume that $N$ is simply connected, by passing to the universal cover if it is not. Note that (2.47) also holds on any covering space, (with a possibly different constant, c.f. [An1, (3.8)]). Consider the conformally equivalent metric \begin{equation} \label{e2.49} \Roof{g}{\widetilde} = s\cdot g. \end{equation} The condition (2.48) guarantees that $(N, \Roof{g}{\widetilde})$ is complete. (Recall that by the maximum principle, $s > $ 0 everywhere, so that $\Roof{g}{\widetilde}$ is well-defined). A standard computation of the scalar curvature $\Roof{s}{\widetilde}$ of $\Roof{g}{\widetilde},$ c.f. [B, Ch.1J] or [An1, (5.18)], gives \begin{equation} \label{e2.50} \Roof{s}{\widetilde} = 1 +{\tfrac{2}{3}}\frac{|r|^{2}}{s^{2}} + {\tfrac{3}{2}}\frac{|\nabla s|^{2}}{s^{3}} \geq 1. \end{equation} Thus, $\Roof{g}{\widetilde}$ has uniformly positive scalar curvature. We claim that $(N, \Roof{g}{\widetilde})$ has uniformly bounded curvature. To see this, from formulas for the behavior of curvature under conformal changes, c.f. again [B, Ch.1J], one has \begin{equation} \label{e2.51} |\Roof{r}{\widetilde}|_{\Roof{g}{\widetilde}} \leq c_{1}\frac{|r|}{s} + c_{2}\frac{|D^{2}s|}{s^{2}} + c_{3}\frac{|\nabla s|^{2}}{s^{3}}, \end{equation} for some absolute constants $c_{i}.$ Here, the right side of (2.51) is w.r.t. the $g$ metric. The terms on the right on $(N, g)$ are all scale-invariant, so we may estimate them at a point $x\in N$ with $g$ scaled so that $\rho (x, g) = 1$. By assumption (2.47), it follows that $s$ is uniformly bounded below in $B(\frac{1}{2})$ = $B_{x}(\frac{1}{2})$. By Lemma 2.3, $|r|,$ and so also $s$, is uniformly bounded above in $B(\frac{1}{4}).$ Similarly, elliptic regularity for ${\cal R}^{2}$ solutions on $B(\frac{1}{2})$ implies that $|D^{2}s|$ and $|\nabla s|^{2}$ are bounded above on $B(\frac{1}{4}).$ Hence the claim follows. Now a result of Gromov-Lawson, [GL, Cor. 10.11] implies, since $N$ is simply connected, that the 1-diameter of $(N, \Roof{g}{\widetilde})$ is at most $12\pi .$ More precisely, let $\Roof{t}{\tilde}(x) = dist_{\Roof{g}{\tilde}}(x, x_{o}).$ Let $\Gamma = N/\sim ,$ where $x \sim x' $ if $x$ and $x' $ are in the same arc-component of a level set of $\Roof{t}{\tilde}.$ Then $\Gamma $ is a locally finite metric tree, for which the projection $\pi : N \rightarrow \Gamma $ is distance non-increasing, c.f. [G, App.1E]. 
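Returning for a moment to (2.50), for completeness we indicate the computation behind it. Write $\Roof{g}{\widetilde} = s\cdot g = e^{2f}g$ with $f = \frac{1}{2}\log s.$ With the sign convention used here, (so that the trace equation (0.3) reads $\Delta s = -\frac{1}{3}|r|^{2}),$ the conformal change formula for the scalar curvature in dimension 3, c.f. [B, Ch.1J], is $\Roof{s}{\widetilde} = e^{-2f}(s - 4\Delta f - 2|\nabla f|^{2}).$ Since $\Delta f = \frac{1}{2}(\frac{\Delta s}{s} - \frac{|\nabla s|^{2}}{s^{2}})$ and $|\nabla f|^{2} = \frac{1}{4}\frac{|\nabla s|^{2}}{s^{2}},$ this gives $$\Roof{s}{\widetilde} = 1 - 2\frac{\Delta s}{s^{2}} + \tfrac{3}{2}\frac{|\nabla s|^{2}}{s^{3}} = 1 + \tfrac{2}{3}\frac{|r|^{2}}{s^{2}} + \tfrac{3}{2}\frac{|\nabla s|^{2}}{s^{3}}, $$ which is (2.50).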
The Gromov-Lawson result states that the diameter of any fiber $F(x) =\pi^{-1}(x)$ in $(N, \Roof{g}{\widetilde})$ is at most $12\pi .$ Since the curvature of $\Roof{g}{\widetilde}$ is uniformly bounded, it follows that the area of these fibers is also uniformly bounded. In particular, for any $x$, \begin{equation} \label{e2.52} vol_{\Roof{g}{\tilde}}B_{x}(r) \leq C_{o}\cdot L_{x}(r), \end{equation} where $L(r)$ is the length of the $r$-ball about $x$ in $\Gamma .$ Further, the uniform curvature bound on $\Roof{g}{\widetilde}$ implies there is a uniform bound $Q$ on the number of edges $E \subset \Gamma $ emanating from any vertex $v\in\Gamma .$ Note that the distance function $\Roof{t}{\tilde}$ gives $\Gamma $ the structure of a directed tree. Then by construction, at any edge $E \subset\Gamma $ terminating at a vertex $v$, either $\Gamma $ terminates, or there are at least two new edges initiating at $v$. Observe that any point $e$ in an edge $E$, (say not a vertex), divides $\Gamma $ into two components, the inward and outward, with the former containing the base point $x_{o}$ and the latter its complement $\Gamma_{e}.$ The outgoing subtree or branch $\Gamma_{e}$ may have infinite length, in which case it gives an end of $N$, or may have finite length. The remainder of the proof needs to be separated into several parts according to the complexity of the graph $\Gamma .$ \begin{lemma} \label{l 2.8.} Suppose that (2.47) holds. Then the graph $\Gamma $ must have an infinite number of edges. In particular, $N$ has an infinite number of ends. \end{lemma} {\bf Proof:} Suppose that $\Gamma $ has only a finite number of edges. It follows that $\Gamma $ has at most linear growth, i.e. \begin{equation} \label{e2.53} L_{x}(r) \leq C_{1}\cdot r, \end{equation} for some fixed $C_{1} < \infty .$ Returning to the metric $g$ on $N$, we have $g \leq C_{2}\cdot t^{2}\Roof{g}{\widetilde}.$ This together with (2.52) and (2.53) imply that \begin{equation} \label{e2.54} vol_{g}(B_{x_{o}}(r)) \leq C_{3}\cdot r^{3}, \end{equation} and so Proposition 2.2 implies that $(N, g)$ is flat. This of course contradicts (2.47). Since the graph $\Gamma$ is a metric tree, and any vertex has at least two outgoing edges, it follows that $\Gamma$ has infinitely many ends. Hence, by construction, $N$ also has infinitely many ends. {\qed } The preceding argument will be generalized further in the following to handle the situation when $\Gamma $ has infinitely many edges. To do this however, we first need the following preliminary result. Let $A_{c} = A_{c}(r)$ be any component of the annulus $\Roof{t}{\tilde}^{-1}(r, r+1)$ in $(N, \Roof{g}{\widetilde}).$ \begin{lemma} \label{l 2.9.} Assume the hypotheses of Theorem 2.7 and (2.47). {\bf (i).} There exists a fixed constant $d < \infty $ such that \begin{equation} \label{e2.55} sup_{A_{c}}s \leq d\cdot inf_{A_{c}}s. \end{equation} {\bf (ii).} Let $e\in\Gamma ,$ be any edge for which the outgoing branch $\Gamma_{e}$ is infinite, giving an end $N_{e}$ of N. Then \begin{equation} \label{e2.56} inf_{N_{e}}s = 0. \end{equation} \end{lemma} {\bf Proof: (i).} We work on the manifold $(N, \Roof{g}{\widetilde})$ as above. From standard formulas for the behavior of $\Delta $ under conformal changes, c.f. [B, Ch.1J], we have $$\Roof{\Delta}{\widetilde}s = \frac{1}{s}\Delta s + \frac{1}{2s^{2}}|\nabla s|^{2} = \frac{1}{s}\Delta s + \frac{1}{2s}|\Roof{\nabla}{\widetilde}s|^{2}, $$ where the last term is the norm of the gradient, both in the $\Roof{g}{\widetilde}$ metric. 
Since $$\Roof{\Delta}{\widetilde}(s^{1/2}) = {\tfrac{1}{2}}s^{-1/2}\bigl(\Roof{\Delta}{\widetilde}s - \frac{1}{2s}|\Roof{\nabla}{\widetilde}s|^{2}\bigr) , $$ we obtain $$\Roof{\Delta}{\widetilde}(s^{1/2}) = \tfrac{1}{2}s^{-3/2}\Delta s = - \tfrac{1}{6}s^{-3/2}|r|^{2} < 0. $$ Since $(N, \Roof{g}{\widetilde})$ has uniformly bounded geometry, the DeGiorgi-Nash-Moser estimate for supersolutions, c.f. [GT, Thm. 8.18] implies that $$\bigl(\frac{c}{vol_{\Roof{g}{\tilde}}B}\int_{B}s^{p}d\Roof{V}{\tilde}_{g}\bigr)^{1/p} \leq inf_{B' }s, $$ for any concentric $\Roof{g}{\widetilde}$-geodesic balls $B' \subset B = B(1)$ and $p < 3/2,$ where $c$ depends only on $dist_{\Roof{g}{\tilde}}(\partial B,\partial B' )$ and $p$. In particular for any component $A_{c}=A_{c}(r)$ of the annulus $\Roof{t}{\tilde}^{-1}(r,r+1)$ in $(N,\Roof{g}{\widetilde}),$ one has $$\bigl(\frac{1}{vol_{\Roof{g}{\tilde}}A_{c}}\int_{A_{c}}s^{p}dV_{\Roof{g}{\tilde}}\bigr)^{1/p} \leq c\cdot inf_{A_{c}' }s, $$ where $A_{c}' $ is say of half the width of $A_{c}.$ Converting this back to $(N, g)$ gives, after a little calculation, $$\bigl(\frac{1}{vol_{g}A_{c}}\int_{A_{c}}s^{3-\varepsilon}dV_{g}\bigr)^{1/(3-\varepsilon )} \leq c\cdot inf_{A_{c}' }s, $$ for any $\varepsilon > $ 0, with $c = c(\varepsilon ).$ By Lemma 2.3, it follows that the $L^{\infty}$ norm of $|r|$ and hence $s$ is bounded on $A_{c}$ by the $L^{2}$ norm average of $s$, which thus gives (2.55). {\bf (ii).} We first note that, in $(N, g)$, \begin{equation} \label{e2.57} liminf_{t\rightarrow\infty} \ s = 0. \end{equation} This follows easily from the maximum principle at infinity. Thus, let $\{x_{i}\}$ be any minimizing sequence for $s$, so that $s(x_{i}) \rightarrow inf_{N }s.$ Since $(N, g)$ has bounded curvature, it is clear that $(\Delta s)(x_{i}) \geq -\varepsilon_{i},$ for some sequence $\varepsilon_{i} \rightarrow $ 0. The trace equation (0.3) then implies that $|r|^{2}(x_{i}) \rightarrow $ 0 which of course implies $s(x_{i}) \rightarrow $ 0. Essentially the same argument proves (2.56). Briefly, if (2.56) were not true, one may take any sequence $x_{i}$ going to infinity in $\Gamma_{e}$ and consider the pointed sequence $(N, g, x_{i}).$ Since the curvature is uniformly bounded, a subsequence converges to a complete limit $(N' , g' , x)$, (passing as usual to sufficiently large covers in case of collapse), so that (2.57) holds on the limit. This in turn implies that (2.56) must also hold on $(N, g)$ itself. {\qed
} We are now in position to understand in more detail the structure of $\Gamma .$ \begin{lemma} \label{l 2.10.} Under the assumptions of Theorem 2.7 and (2.47), there is a uniform upper bound on the distance between nearest vertices in $\Gamma .$ Thus for any vertex $v\in\Gamma ,$ there exists $v'\in\Gamma ,$ with $v' > v$ in terms of the direction on $\Gamma ,$ such that \begin{equation} \label{e2.58} dist_{\Gamma}(v, v' ) \leq D, \end{equation} for some fixed $D < \infty .$ \end{lemma} {\bf Proof:} This is proved by contradiction, so suppose that there is some sequence of edges $E_{i}$ in $\Gamma $ of arbitrarily long length, or an edge $E$ of infinite length. Let $x_{i}$ be the center point of $E_{i}$ in $N$, or a divergent sequence in $E$ in the latter case, and consider the pointed manifolds $(N, \Roof{g}{\widetilde}, x_{i}).$ This sequence has uniformly bounded curvature, and a uniform lower bound on $(volB_{x_{i}}(1), \Roof{g}{\widetilde}),$ since if the sequence volume collapsed somewhere, $(N, \Roof{g}{\widetilde})$ would have regions of arbitrarily large diameter which have the structure of a Seifert fibered space, (c.f. [An1,Thm.2.10]), contradicting [GL, Cor.10.13] in view of (2.50). It follows that a subsequence converges to a complete, non-compact manifold $(N' , \Roof{g}{\widetilde}', x)$ with uniformly bounded 1-diameter, uniformly positive scalar curvature, and within bounded Gromov-Hausdorff distance to a line. Consider the same procedure for the pointed sequence $(N, g_{i}, x_{i}),$ where $g_{i} = \rho (x_{i})^{-2}\cdot g.$ There are now two possibilities for the limiting geometry of $(N, g_i, x_i)$, according to whether liminf \ $\rho (x_{i})/t(x_{i}) = 0$ or $\rho (x_{i})/t(x_{i}) \geq \mu_o $, for some $\mu_o > 0$, as $i \rightarrow \infty$. For clarity, we separate the discussion into these two cases. {\bf (a).} Suppose that liminf \ $\rho (x_{i})/t(x_{i}) = 0$ and choose a subsequence, also called $\{x_i\},$ such that $\rho (x_{i})/t(x_{i}) \rightarrow 0$. This is again similar to the situation where (2.16) does not hold. Arguing in exactly the same way as in this part of the proof of Lemma 2.4, it follows that one obtains smooth convergence of a further subsequence to a limit $(\bar N, \bar g, \bar x)$. Here, as before, one must pass to suitable covers of larger and larger domains to unwrap a collapsing sequence. The limit $(\bar N, \bar g)$ is a complete, non-flat ${\cal R}^2$ solution. Note that the assumption (2.47) is scale-invariant, invariant under coverings, (c.f. the statement following (2.48)), and invariant under the passage to geometric limits. Hence the estimate (2.47) holds on $(\bar{N}, \bar{g}).$ Now by construction, the graph $\bar{\Gamma}$ associated to $\bar{N}$ is a single line or edge, with no vertices. The argument in Lemma 2.8 above then proves (2.54) holds on $(\bar{N}, \bar{g})$, so that Proposition 2.2 implies that $(\bar{N},\bar{g})$ is flat, contradicting (2.47). {\bf (b).} Suppose that $\rho(x_i)/t(x_i) \geq \mu_o > 0$, for all $i$.
In this case, the center points $x_{i}$ remain within uniformly bounded $g_{i}$-distance to the initial vertex $v_{i}$ of $E_{i}.$ The scalar curvature $s_i$ of $g_i$ goes to infinity in a small $g_i$-tubular neighborhood of $v_{i}.$ However, the curvature of $(N, g_i, x_i)$ is uniformly bounded outside the unit ball $(B_{v_{i}}(1), g_i)$ and hence in this region one obtains smooth convergence to an incomplete limit manifold $(\bar N', \bar g', \bar x')$, again passing to sufficiently large finite covers to unwrap any collapse. The limit $(\bar N', \bar g')$ is complete away from a compact boundary, (formed by a neighborhood of $\{v\}$). As in Case (a), the limit is a non-flat ${\cal R}^2$ solution satisfying (2.47), and the associated graph $\bar \Gamma'$ consists of a single ray. Hence (2.56) and (2.57) hold on $\bar N'$. Since $\bar \Gamma'$ is a single ray, the annuli $A$ in Lemma 2.9 are all connected and thus (2.56) and (2.57) imply that $\bar s' \rightarrow 0$ uniformly at infinity in $\bar N'$. Hence there is a compact regular level set $L = \{s = s_{o}\}, s_{o} > $ 0, of $s$ for which the gradient $\nabla s|_{L}$ points {\it out} of $U$, for $U = \{s \leq s_{o}\}.$ Observe that $U$ is cocompact in $\bar N'.$ Now return to the proof of Proposition 2.2, and the trace equation (0.3). Let $\eta $ be a cutoff function as before, but now with $\eta \equiv $ 1 on a neighborhood of $L$. Integrate as before, but over $U$ in place of $N$ to obtain $$\int_{U}\eta|r|^{2} = - 3\int_{U}\eta\Delta s = - 3\int_{U}s\Delta\eta - 3\int_{L}\eta<\nabla s, \nu> + \ 3\int_{L}s<\nabla\eta , \nu> , $$ where $\nu $ is the outward unit normal. Since $\eta \equiv $ 1 near $L$, $<\nabla\eta , \nu> =$ 0. Since $\nabla s|_{L} $ points out of $U$, $<\nabla s, \nu> > $ 0. Hence, $$\int_{U}\eta|r|^{2} \leq - 3\int_{U}s\Delta\eta . $$ From the bound (2.54) obtained as in the proof of Lemma 2.8, the proof of Proposition 2.2 now goes through without any differences and implies that $(\bar N', \bar g')$ is flat in this case also, giving a contradiction. {\qed
} Next, we observe that a similar, but much simpler, argument shows that there is a constant $D < \infty $ such that for any branch $\Gamma_{e},$ either \begin{equation} \label{e2.59} L(\Gamma_{e}) \leq D, \ \ {\rm or} \ \ L(\Gamma_{e}) = \infty . \end{equation} For suppose there were arbitrarily long but finite branches $\Gamma_{i},$ with length $L(\Gamma_{i}) \rightarrow \infty $ as $i \rightarrow \infty ,$ and starting at points $e_{i}.$ Then the same argument proving (2.56) implies that $inf_{N_{i}}s$ cannot occur at or near $e_{i},$ if $i$ is sufficiently large; here $N_{i}$ is the part of $N$ corresponding to $\Gamma_{i}.$ Thus $inf_{N_{i}}s$ occurs in the interior of $N_{i}.$ Since $\Gamma_{i}$ is finite, and thus $\Gamma_{i}$ and $N_{i}$ are compact, this contradicts the minimum principle for the trace equation (0.3). Given Lemmas 2.9 and 2.10, we now return to the situation following Lemma 2.8 and assume that $(N, g)$ satisfies the assumptions of Theorem 2.7 and (2.47). We claim that the preceding arguments imply the graph $\Gamma $ of $N$ must have exponential growth, in the strong sense that every branch $\Gamma_{e} \subset \Gamma $ also has exponential growth. Equivalently, we claim that given any point $e$ in an edge $E\subset\Gamma ,$ there is some vertex $v'\in\Gamma_{e},$ (so $v' > e$), within fixed distance $D$ to $e$, such that at least two outgoing edges $e_{1}, e_{2}$ from $v' $ have infinite subbranches $\Gamma_{e_{1}}, \Gamma_{e_{2}} \subset \Gamma_{e}.$ For if this were not the case, then there exist points $e_{i}\in\Gamma $ and branches $\Gamma_{e_{i}} \subset \Gamma $ and $D_{i} \rightarrow \infty ,$ such that all subbranches of $\Gamma_{e_{i}}$ starting within distance $D_{i}$ to $e_{i}$ are finite. By (2.59), all subbranches then have a uniform bound $D$ on their length. It follows that one generates as previously in the proof of Lemma 2.10 a geometric limit graph $\Gamma $ with at most linear growth which, as before, gives a contradiction. Hence all branches $\Gamma_{e}$ have a uniform rate of exponential growth, i.e. for all $r$ large, \begin{equation} \label{e2.60} L(B_{e}(r)) \geq e^{d\cdot r}, \end{equation} for some $d > $ 0, where $B_{e}(r)$ is the ball of radius $r$ in $\Gamma_{e}$ about some point in $e$. Since $(N, \Roof{g}{\widetilde})$ has a uniform lower bound on its injectivity radius, $(N_{e}, \Roof{g}{\widetilde})$ satisfies \begin{equation} \label{e2.61} vol_{\Roof{g}{\tilde}}(B_{e}(r)) \geq e^{d\cdot r}, \end{equation} for some possibly different constant $d$. Further, since $g \geq c\cdot \Roof{g}{\widetilde},$ (since $s$ is bounded above), (2.61) also holds for $g$, i.e. \begin{equation} \label{e2.62} vol_{g}(B_{e}(r)) \geq e^{d\cdot r}. \end{equation} again for some possibly different constant $d > 0$. However, by (2.57) and (2.55), (and the maximum principle for the trace equation), we may choose a branch $\Gamma_{e}$ such that $s \leq \delta $ on $N_{e},$ for any prescribed $\delta > $ 0. Lemma 2.3 then implies that \begin{equation} \label{e2.63} |r| \leq \delta_{1} = \delta_{1}(\delta ), \end{equation} everywhere on $N_{e}.$ From standard volume comparison theory, c.f. [P, Ch.9], (2.63) implies that the volume growth of $N_{e}$ satisfies \begin{equation} \label{e2.64} vol_{g}(B_{e}(r)) \leq e^{\delta_{2}\cdot r}, \end{equation} where $\delta_{2}$ is small if $\delta_{1}$ is small. Choosing $\delta $ sufficiently small, this contradicts (2.62). This final contradiction proves Theorem 2.7. 
{\qed } Theorems 2.5 and 2.7 together prove Theorem 0.1.
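We remark that the volume comparison behind (2.64) at the end of the proof is essentially the standard Bishop-Gromov estimate: on a region where $|r| \leq \delta_{1}$ one has ${\rm Ric} \geq -\delta_{1}\cdot g,$ and for metric $r$-balls contained in such a region, c.f. [P, Ch.9], $$vol_{g}B(r) \leq 4\pi\int_{0}^{r}\Bigl(\frac{\sinh (\sqrt{\delta_{1}/2}\ u)}{\sqrt{\delta_{1}/2}}\Bigr)^{2}du \leq c\cdot \delta_{1}^{-3/2}e^{\sqrt{2\delta_{1}}\ r}, $$ so that the exponential rate $\delta_{2}$ in (2.64) is of order $\sqrt{\delta_{1}},$ and in particular is small when $\delta_{1}$ is small.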
\section{Regularity and Apriori Estimates for ${\cal R}_{s}^{2}$ Solutions.} \setcounter{equation}{0} In this section we prove the interior regularity of weak ${\cal R}_{s}^{2}$ solutions as well as apriori estimates for families of ${\cal R}_{s}^{2}$ solutions. These results will be needed in \S 4, and also in [An4]. These are local questions, so we work in a neighborhood of an arbitrary point $x\in N.$ We will assume that $(N, g, x)$ is scaled so that $\rho (x) =$ 1, passing to the universal cover if necessary if the metric is sufficiently collapsed in $B_{x}(\rho (x)).$ In particular, the $L^{2,2}$ geometry of $g$ is uniformly controlled in $B = B_{x}(1).$ The proof of regularity is similar to the proof of the smooth regularity of ${\cal R}^{2}$ solutions in [An1,\S 4] or that of critical metrics of $I_{\varepsilon},$ for any given $\varepsilon > $ 0, (c.f. \S 1) in [An1, \S 8]. The proof of apriori estimates proceeds along roughly similar lines, the main difference being that $\alpha $ is not fixed, but can vary over any value in [0, $\infty ).$ Thus one needs uniform estimates, independent of $\alpha .$ To handle this, especially when $\alpha $ is small, one basically uses the interaction of the terms $\alpha\nabla{\cal R}^{2}$ and $L^{*}\omega .$ We assume that $g$ is an $L^{2,2}$ metric satisfying the ${\cal R}_{s}^{2}$ equations \begin{equation} \label{e3.1} \alpha\nabla{\cal R}^{2} + L^{*}(\omega ) = 0, \end{equation} \begin{equation} \label{e3.2} \Delta\omega = -\frac{\alpha}{4}|r|^{2}, \end{equation} weakly in $B$, with scalar curvature $s \equiv $ 0 (in $L^{2})$ and potential $\omega\in L^{2}.$ If $\alpha =$ 0, we have assumed, c.f \S 0, that $\omega $ is not identically 0 on $B$. \begin{theorem} \label{t 3.1.} Let (g, $\omega )$ be a weak solution of the ${\cal R}_{s}^{2}$ equations. Then $g$ and $\omega $ are $C^{\infty}$ smooth, in fact real-analytic, in B. \end{theorem} For clarity, the proof will proceed in a sequence of Lemmas. These Lemmas will hold on successively smaller concentric balls $B \supset B(r_{1}) \supset B(r_{2}),$ etc, whose ratio $r_{i+1}/r_{i}$ is a definite but arbitrary constant $< $ 1, but close to 1. (The estimates will then depend on this ratio). In other words, we are only considering the interior regularity problem. To simplify notation, we will ignore explicitly stating the size of the ball at each stage and let $\bar{B}$ denote a suitable ball in $B$, with $d = dist(\partial\bar{B}, \partial B).$ Further, $c$ will always denote a constant independent of $\alpha ,$ which is either absolute, or whose dependence is explicitly stated. The value of $c$ may change from line to line or even from one inequality to the next. We recall the Sobolev embedding theorem, (in dimension 3), c.f [Ad, Thm. 7.57], \begin{equation} \label{e3.3} L^{k,p} \subset L^{m,q}, \ \ {\rm provided} \ \ k - m < 3/p \ \ {\rm and} \ \ 1/p - (k-m)/3 < 1/q, \end{equation} $$L^{1} \subset H^{t-2}, \ \ {\rm any} \ \ t < \frac{1}{2}. $$ where $H^{t},$ (resp. $H_{o}^{t}), t > $ 0, is the Sobolev space of functions (of compact support) with '$t$' derivatives in $L^{2}$ and $H^{-t}$ is the dual of $H_{o}^{t},$ c.f. [Ad, Thm.3.10], [LM, Ch.1.12]. Let $\omega_{o}$ be defined by \begin{equation} \label{e3.4} \omega_{o} = ||\omega||_{L^{2}(B)}. \end{equation} \begin{lemma} \label{l 3.2.} There is a constant $c =$ c(d) such that \begin{equation} \label{e3.5} ||\alpha|r|^{2}||_{L^{1}(\bar{B})} \leq c\cdot \omega_{o}. 
\end{equation} \end{lemma} {\bf Proof:} This follows immediately from the trace equation (3.2), by pairing it with a suitable smooth cutoff function $\eta $ of compact support in $B$, with $\eta \equiv $ 1 on $\bar{B}$ and using the self-adjointness of $\Delta .$ Since the $L^{2}$ norm of $\Delta\eta $ is bounded, the left side of (3.2) then becomes bounded by the $L^{2}$ norm of $\omega .$ (This argument is essentially the same as the proof of (2.13)). {\qed } \begin{lemma} \label{l 3.3.} For any $t < \frac{1}{2}, \omega\in H^{t}(\bar{B})$, and there is a constant $c =$ c(t, d) such that \begin{equation} \label{e3.6} ||\omega||_{H^{t}(\bar B)} \leq c\cdot \omega_{o}. \end{equation} \end{lemma} {\bf Proof:} This also follows from the trace equation (3.2). Namely, Lemma 3.2 implies that the right side of (3.2) is bounded in $L^{1}$ by $\omega_{o}.$ By Sobolev embedding, $L^{1}\subset H^{t-2},$ for any $t < \frac{1}{2},$ and we may consider the Laplacian as an operator $\Delta : H^{t} \rightarrow H^{t-2},$ c.f. [LM, Ch. 2.7] and also [An1, \S 4]. Elliptic theory for $\Delta $ implies that $||\omega||_{H^{t}}$ is bounded by the $H^{t-2}$ norm, and thus $L^{1}$ norm, of the right side and the $L^{2}$ norm of $\omega ,$ each of which is bounded by $\omega_{o}.$ {\qed } \begin{lemma} \label{l 3.4.} The tensor $\alpha r$ is in $H^{t}(\bar{B})$ and there is a constant $c =$ c(t, d) such that \begin{equation} \label{e3.7} ||\alpha r||_{H^{t}(\bar B)} \leq c\cdot \omega_{o}. \end{equation} \end{lemma} {\bf Proof:} This follows exactly the arguments of [An1, \S 4], so we will be brief. Write equation (3.1) as $$D^{*}D\alpha r = Q, $$ where $Q$ consists of all the other terms in (3.1). Using (3.5) and (3.6), it follows that $D^{*}D\alpha r$ is bounded in $H^{t-2}$ by $\omega_{o},$ (c.f. [An1, \S 4] for the details), and the ellipticity of $D^{*}D$ as in Lemma 3.3 above gives the corresponding bound on the $H^{t}$ norm of $\alpha r.$ {\qed } From Sobolev embedding (3.3), it follows that \begin{equation} \label{e3.8} ||\alpha r||_{L^{3-\mu}} \leq c\omega_{o}, \end{equation} for any given $\mu = \mu (t) > $ 0. Here and in the following, the norms are on balls $\bar B$, possibly becoming successively smaller. Using the H\"older inequality, we have \begin{equation} \label{e3.9} \int (\alpha|r|^{2})^{p} \leq \bigl(\int (\alpha|r|)^{pr}\bigr)^{1/r}\bigl(\int|r|^{pq}\bigr)^{1/q}, \end{equation} so that choosing $pr = 3-\mu $ and $pq = 2$ implies that \begin{equation} \label{e3.10} ||\alpha|r|^{2}||_{L^{p}} \leq c\omega_{o}, \end{equation} for any $p = p(t) < 6/5.$ We may now repeat the arguments in Lemma 3.3 above, using the improved estimate (3.10) in place of the $L^{1}$ estimate. As before, the trace equation now gives, for $p < 6/5,$ $$||\omega||_{L^{1,2-\mu}} \leq ||\omega||_{L^{2,p}} \leq c\omega_{o}, $$ for any $\mu = \mu (p) > $ 0. Now repeat the argument in Lemma 3.4, considering $D^{*}D$ as an operator $L^{1,2-\mu} \rightarrow L^{-1,2-\mu}.$ This gives $$||\alpha r||_{L^{6-\varepsilon}} \leq c||\alpha r||_{L^{1,2-\mu}} \leq c\omega_{o}, $$ where the first inequality follows from the Sobolev inequality. From the H\"older inequality (3.9) again, with $pr = 6-\varepsilon $ and $pq =$ 2, we obtain $$||\alpha|r|^{2}||_{L^{p}} \leq c\omega_{o}, $$ for $p < 3/2.$ Repeating this process again gives \begin{equation} \label{e3.11} ||\alpha r||_{L^{p}} \leq c\omega_{o}, \end{equation} for any $p < \infty ,$ with $c = c(p)$.
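To make the choice of exponents here explicit: the H\"older inequality in (3.9) requires $\frac{1}{r} + \frac{1}{q} = 1,$ so the choices $pr = 3-\mu ,$ $pq = 2$ force $$\frac{p}{3-\mu} + \frac{p}{2} = 1, \ \ {\rm i.e.} \ \ p = \frac{2(3-\mu )}{5-\mu} < \frac{6}{5}, $$ while the choices $pr = 6-\varepsilon ,$ $pq = 2$ give $p = \frac{2(6-\varepsilon )}{8-\varepsilon} < \frac{3}{2},$ as used above.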
Finally, $\alpha r$ bounded in $L^{p}$ implies $\alpha|r|^{2}$ bounded in $L^{2-\varepsilon}, \varepsilon = \varepsilon (p),$ so repeating the process once more as above gives $\alpha r$ bounded in $L^{1,6-\varepsilon}$ and hence in $C^{\gamma},$ for any $\gamma < \frac{1}{2}.$ This in turn implies $\alpha|r|^{2}$ is bounded in $L^{2}.$ These arguments thus prove the following: \begin{corollary} \label{c 3.5.} On a given $\bar{B} \subset B$, the following estimates hold, with $c = c(d)$: \begin{equation} \label{e3.12} ||\alpha r||_{L^{1,6}} \leq c\cdot \omega_{o}, \ \ ||\alpha|r|^{2}||_{L^{2}} \leq c\cdot \omega_{o}, \ \ ||\omega||_{L^{2,2}} \leq c\cdot \omega_{o}. \end{equation} \end{corollary} {\qed } In particular, from Sobolev embedding it follows that $\omega $ is a $C^{1/2}$ function, with modulus of continuity depending on $\omega_{o}.$ Given these initial estimates, it is now straightforward to prove Theorem 3.1. \noindent {\bf Proof of Theorem 3.1.} Suppose first $\alpha > $ 0. Then the iteration above may be continued indefinitely. Thus, since $\alpha r$ is bounded in $L^{1,6} \subset C^{1/2}$ by Sobolev embedding, $\alpha|r|^{2}$ is also bounded in $L^{1,6}.$ Applying the iteration above, it follows that $\omega\in L^{3,6},$ so that $D^{*}D\alpha r\in L^{1,6},$ implying that $\alpha r\in L^{3,6},$ so that $\alpha|r|^{2}\in L^{3,6},$ and so on. Note that at each stage the regularity of the metric $g$ is improved, since if $\alpha r\in L^{k,p},$ then $g\in L^{k+2,p},$ c.f. [An1, \S 4]. Continuing in this way gives the required $C^{\infty}$ regularity. The equations (3.1)-(3.2) form an elliptic system for $(g, \omega)$ with coefficients depending real analytically on $g$. This implies that $g$ and $\omega$ are in fact real-analytic, c.f. [M, Ch.6.6,6.7]. If $\alpha =$ 0, then the equations (3.1)-(3.2) are the static vacuum Einstein equations. By assumption the potential $\omega $ is not identically 0 and is an $L^{2,2}$ function by (3.12). The smooth (or real-analytic) regularity of $(g, \omega )$ then follows from standard regularity results for Einstein metrics, c.f. [B, Ch.5]. {\qed
} We now turn to higher order estimates for families of ${\cal R}_{s}^{2}$ solutions on balls on which one has initial $L^{2,2}$ control of the metric. This corresponds to obtaining uniform estimates as above which are independent of $\alpha ,$ or equivalently to the smooth compactness of a family of such solutions. In contrast to the ${\cal R}^{2}$ equations, note that the ${\cal R}_{s}^{2}$ equations, for a fixed $\alpha ,$ are not scale-invariant. The full family of ${\cal R}_{s}^{2}$ equations, depending on $\alpha ,$ is scale-invariant. However, $\alpha $ itself is not scale-invariant. As seen in \S 1, $\alpha $ scales inversely to the curvature, i.e. as the square of the distance. On the other hand, the potential $\omega $ is scale-invariant. Thus, we now assume we have a sequence or family of smooth ${\cal R}_{s}^{2}$ solutions, with no apriori bound on $\alpha\in [0,\infty )$ or on the $L^{2}$ norm of $\omega $ on $B$. We assume throughout that the metrics are defined on a ball $B = B(1)$, with $r_{h}(B) \geq $ 1. Recall the definition of the constant $c_{o}$ preceding (2.1) in the definition of $r_{h}$ and $\rho ,$ which measures the deviation of $g$ from the flat metric. We define the $L^{k,2}$ curvature radius $\rho^{k}$ exactly in the same way as the $L^{2}$ curvature radius, with $|\nabla^{k} r|$ in place of $|r|$, c.f. [An1, Def.3.2]. \begin{theorem} \label{t 3.6} Let (N, g, $\omega )$ be an ${\cal R}_{s}^{2}$ solution, with $x\in N$ and $r_{h}(x) \geq $ 1. Suppose \begin{equation} \label{e3.13} \omega \leq 0, \end{equation} in $B_{x}(1).$ Then there is a constant $\rho_{o} > $ 0 such that the $L^{k,2}$ curvature radius $\rho^{k}$ satisfies, for all $k\geq 1,$ \begin{equation} \label{e3.14} \rho^{k}(x) \geq \rho_{o}. 
\end{equation} In particular, by Sobolev embedding, the Ricci curvature $r$ is bounded in $C^{k}.$ Further, the $L^{k+2,2}$ norm of the potential $\omega $ is uniformly bounded in $B_{x}(\rho_{o}),$ in that, for a given $\varepsilon > $ 0, which may be made (arbitrarily) small if $c_{o}$ is chosen sufficiently small, \begin{equation} \label{e3.15} ||\omega||_{L^{k+2,2}(B_{x}(\rho_{o}))} \leq max\bigl( c(\varepsilon^{-1},k)\cdot ||\omega||_{L^{1}(B_{x}(\varepsilon ))}, c_{o}\bigr) . \end{equation} \end{theorem} The proof of this result will again be carried out in a sequence of steps. This result is considerably more difficult to prove than Theorem 3.1, since one needs to obtain estimates independent of $\alpha $ and $\omega_{o}.$ To do this, we will strongly use the assumption (3.13) that $\omega \leq $ 0. It is not clear if Theorem 3.6 holds in general when $\omega \geq $ 0. In fact, this is the main reason why the hypothesis (3.13) is needed in Theorem 0.2. We note that by Theorem 3.1, the regularity of $(g, \omega )$ is not an issue, so we will assume that $g$ and $\omega $ are smooth, (or real-analytic) on $B$. Theorem 3.6 is easy to prove from the preceding estimates if one has suitable bounds on $\alpha $ and $\omega_{o}.$ It is worthwhile to list these explicitly. {\bf (i).} Suppose $\alpha $ is uniformly bounded away from 0 and $\infty ,$ and $\omega_{o}$ is bounded above, i.e. \begin{equation} \label{e3.16} \kappa \leq \alpha \leq \kappa^{-1}, \ \ \omega_{o} \leq \kappa^{-1}, \end{equation} for some fixed $\kappa > $ 0. One may then just carry out the arguments preceding Corollary 3.5 and in the proof of Theorem 3.1 to prove Theorem 3.6, with bounds then depending on $\kappa .$ This gives (3.14) and also (3.15), but with $\omega_{o}$ in place of the $L^{1}$ norm of $\omega $ in $B_{x}(\varepsilon ).$ In case $$||\omega||_{L^{1}(B_{x}(\varepsilon ))} \ll \omega_{o}, $$ we refer to Case (I) below, where this situation is handled. {\bf (ii).} Suppose that $\alpha $ is uniformly bounded away from 0, and $\omega_{o}$ is bounded above, i.e. \begin{equation} \label{e3.17} \kappa \leq \alpha , \ \ \omega_{o} \leq \kappa^{-1}, \end{equation} for some fixed $\kappa > $ 0. Thus, the difference with Case (i) is that $\alpha $ may be arbitrarily large. Then the equations (3.1)-(3.2) may be renormalized by dividing by $\alpha ,$ so that $\alpha $ becomes 1. This decreases the $L^{2}$ norm of the potential $\omega ,$ i.e. $\omega_{o},$ but otherwise leaves the preceding arguments unchanged, (the renormalization is written out explicitly below). Hence, Theorem 3.6 is again proved in this case, with bounds depending on $\kappa .$ Observe that neither of the arguments in (i), (ii) requires the bound (3.13). The main difficulty is when $\alpha $ is small, and especially when $\omega_{o}$ is small as well. In this situation, the equations (3.1)-(3.2) approach merely the statement that $0 \sim 0.$ Here one must understand the relative sizes of $\alpha $ and $\omega_{o}$ to proceed further. First note that by Theorem 3.1, we may assume that $\omega$ is not identically zero on any ball $\bar{B} \subset B$, since otherwise, (by real-analyticity), $\omega \equiv 0$ on $B$, and hence the solution is flat; (recall that we have assumed $\alpha > 0$ if $\omega \equiv 0$ on $B$). It follows that the set $\{\omega = 0\}$ is a closed real-analytic set in B. To begin, we must obtain apriori estimates on, for example, the $L^{2}$ norm of $\omega $ in terms of its value at or near the center point $x$, c.f. Corollary 3.9. 
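To keep the preceding discussion self-contained, we record two elementary computations implicit in it; both are simple consistency checks rather than new estimates. First, regarding the scaling behavior recalled from \S 1: if $\tilde{g} = \lambda^{2}g$ for a constant $\lambda > $ 0, then on functions $\Delta_{\tilde{g}} = \lambda^{-2}\Delta_{g}$, while the Ricci tensor is unchanged and its norm scales as $|r_{\tilde{g}}|_{\tilde{g}}^{2} = \lambda^{-4}|r_{g}|_{g}^{2}$. Hence, with the scale-invariant potential $\omega $ kept fixed, the trace equation (3.2) is preserved exactly when $$\tilde{\alpha} = \lambda^{2}\alpha , $$ i.e. $\alpha $ scales as the square of the distance, consistent with the discussion above. Second, the renormalization used in case (ii): since $L^{*}$ and $\Delta $ are linear and $\alpha > $ 0 is constant, dividing (3.1)-(3.2) by $\alpha $ shows that $(g, \bar{\omega}),$ with $\bar{\omega} = \omega /\alpha ,$ satisfies $$\nabla{\cal R}^{2} + L^{*}(\bar{\omega}) = 0, \ \ \Delta\bar{\omega} = -\tfrac{1}{4}|r|^{2}, $$ i.e. the ${\cal R}_{s}^{2}$ equations with $\alpha =$ 1, with $||\bar{\omega}||_{L^{2}(B)} = \omega_{o}/\alpha ,$ which remains bounded under (3.17).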
The first step in this is the following $L^{1}$ estimate, which may also be of independent interest. (Of course this is meant to apply to $u = -\omega $ and $f = \frac{1}{4}\alpha|r|^{2}$, as in (3.2)). \begin{proposition} \label{p 3.7.} Let (B, g), $B = B_{x}(1)$ be a geodesic ball of radius 1, with $r_{h}(x) \geq $ 1. For a given smooth function $f$ on B, let $u$ be a solution of \begin{equation} \label{e3.18} \Delta u = f, \end{equation} on B, with \begin{equation} \label{e3.19} u \geq 0. \end{equation} Let $B_{\varepsilon} = B_{x}(\varepsilon ),$ where $\varepsilon > $ 0 is (arbitrarily) small, depending only on the choice of $c_{o}.$ Then there is a constant $c > $ 0, depending only on $\varepsilon^{-1},$ such that \begin{equation} \label{e3.20} \int_{B}u \leq c(\varepsilon^{-1})\int_{B}|f| + {\tfrac{1}{2}} u_{av}(\varepsilon ) + \int_{B_{\varepsilon}}u, \end{equation} where $u_{av}$ is the average of $u$ on $S_{\varepsilon} = \partial B_{\varepsilon}.$ \end{proposition} {\bf Proof:} Let $\eta $ be a non-negative function on $B$ such that $\eta = |\nabla\eta| =$ 0 on $\partial B,$ determined more precisely below. Multiply (3.18) by $\eta $ and integrate by parts twice (Green's identity) on $B \setminus B_{\varepsilon}$ to obtain \begin{equation} \label{e3.21} \int_{B \setminus {B_{\varepsilon}}}\eta f = \int_{B \setminus {B_{\varepsilon}}}\eta\Delta u = \int_{B \setminus {B_{\varepsilon}}}u\Delta\eta + \int_{S_{\varepsilon}}\eta<\nabla u, \nu> - \int_{S_{\varepsilon}}u<\nabla\eta , \nu>, \end{equation} where $\nu $ is the outward unit normal of $B \setminus B_{\varepsilon}$; the boundary terms on $\partial B$ vanish, since $\eta = |\nabla\eta| =$ 0 there. Let $\{x_{i}\}$ be a harmonic coordinate chart on $B$, so that the metric $g$ is bounded in $L^{2,2}$ on $B$ in the coordinates $\{x_{i}\},$ since $r_{h}(x) \geq $ 1. We may assume w.l.o.g. that in these coordinates, $g_{ij}(x) = \delta_{ij}.$ Let $\sigma = (\sum x_{i}^{2})^{1/2}.$ Then the ratio $\sigma /t,$ for $t(y) = dist_{g}(y,x),$ satisfies \begin{equation} \label{e3.22} 1- c_{1} \leq \frac{\sigma}{t} \leq 1+c_{1}, \end{equation} in $B$, where the constant $c_{1}$ may be made small by choosing the constant $c_{o}$ in the definition of $r_{h}$ sufficiently small. We choose $\eta = \eta (\sigma )$ so that $\eta (1) =$ 0 and $\eta' (1) =$ 0 and so that $\Delta\eta $ is close to the constant function 2 in $C^{0}(B \setminus {B_{\varepsilon}}),$ i.e. \begin{equation} \label{e3.23} \Delta\eta \sim 2. \end{equation} (Actually, since $\{\sigma = 1\}$ may not be contained in $B$, one should replace the boundary condition at 1 by the same condition at $\sigma = 1-\delta ,$ $\delta = \delta (c_{o})$ small, but we will ignore this minor adjustment below). To determine $\eta$, we have $\Delta\eta = \eta'\Delta\sigma + \eta''|\nabla\sigma|^{2}.$ Since the metric $g = g_{ij}$ is close to the flat metric $\delta_{ij},$ we have $||\nabla\sigma|^{2}- 1| < \delta $ and $|\Delta\sigma-\frac{2}{\sigma}| < \frac{\delta}{\sigma}$, in $C^{0}$, where $\delta $ may be made (arbitrarily) small by choosing $c_{o}$ sufficiently small. Thus, let $\eta $ be a solution to \begin{equation} \label{e3.24} \frac{2}{\sigma}\eta' + \eta'' = 2. \end{equation} It is easily verified that the function \begin{equation} \label{e3.25} \eta (\sigma ) = \tfrac{2}{3}\sigma^{-1} - 1 + \tfrac{1}{3}\sigma^{2} , \end{equation} satisfies (3.24), with the correct boundary conditions at $\sigma =$ 1. Observe that $\eta (\varepsilon ) \sim \frac{2}{3}\varepsilon^{-1}, \ \eta' (\varepsilon ) \sim -<\nabla\eta , \nu> (\varepsilon ) \sim -\frac{2}{3}\varepsilon^{-2},$ for $\varepsilon $ small. 
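For completeness, the verification of (3.25) is the following elementary computation: with $\eta (\sigma ) = \tfrac{2}{3}\sigma^{-1} - 1 + \tfrac{1}{3}\sigma^{2},$ one has $$\eta' (\sigma ) = -\tfrac{2}{3}\sigma^{-2} + \tfrac{2}{3}\sigma , \ \ \eta'' (\sigma ) = \tfrac{4}{3}\sigma^{-3} + \tfrac{2}{3}, $$ so that $$\frac{2}{\sigma}\eta' + \eta'' = -\tfrac{4}{3}\sigma^{-3} + \tfrac{4}{3} + \tfrac{4}{3}\sigma^{-3} + \tfrac{2}{3} = 2, $$ while $\eta (1) = \tfrac{2}{3} - 1 + \tfrac{1}{3} =$ 0 and $\eta' (1) = -\tfrac{2}{3} + \tfrac{2}{3} =$ 0. The asymptotics stated above for $\eta (\varepsilon )$ and $\eta' (\varepsilon )$ follow directly from this formula.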
Hence, (3.23) is satisfied on $B \setminus {B_{\varepsilon}}$ for a given choice of $\varepsilon $ small, if $\delta ,$ i.e. $c_{o},$ is chosen sufficiently small. With this choice of $\eta $ and $\varepsilon ,$ (3.21) becomes $$\int_{B \setminus {B_{\varepsilon}}}\eta f \sim 2\int_{B \setminus {B_{\varepsilon}}}u - {\tfrac{2}{3}} \varepsilon^{-2}\int_{S_{\varepsilon}}u + {\tfrac{2}{3}} \varepsilon^{-1}\int_{S_{\varepsilon}}<\nabla u, \nu> . $$ Applying the divergence theorem on $B_{\varepsilon}$ to the last term gives $$ {\tfrac{2}{3}} \varepsilon^{-1}\int_{S_{\varepsilon}}<\nabla u, \nu> = -{\tfrac{2}{3}} \varepsilon^{-1}\int_{B_{\varepsilon}}\Delta u \sim - c\cdot \varepsilon^{-1}\int_{B_{\varepsilon}}f. $$ Thus, combining these estimates gives $$ 2\int_{B \setminus {B_{\varepsilon}}}u \leq c(\varepsilon^{-1})\int_{B}|f| + u_{av}(\varepsilon ), $$ which implies (3.20). {\qed
} The next Lemma is a straightforward consequence of this result. \begin{lemma} \label{l 3.8.} Assume the same hypotheses as Proposition 3.7. Then for any $\mu > $ 0, there is a constant $c = c(\mu ,d, \varepsilon^{-1})$ such that \begin{equation} \label{e3.26} ||u||_{L^{3-\mu}(\bar{B})} \leq c(\mu ,d,\varepsilon^{-1})\cdot \int_{B}|f| + {\tfrac{1}{2}} u_{av}(\varepsilon ). \end{equation} \end{lemma} {\bf Proof:} Let $\phi $ be a smooth positive cutoff function, supported in $B$, with $\phi \equiv $ 1 on $\bar{B}.$ As in Lemma 3.3, the Laplacian $\Delta $ is a bounded surjection $\Delta : H^{t} \rightarrow H^{t-2},$ for $t < \frac{1}{2},$ so that \begin{equation} \label{e3.27} ||\phi u||_{H^{t}} \leq c\cdot ||\Delta (\phi u)||_{H^{t-2}}. \end{equation} By Sobolev embedding $||\phi u||_{L^{3-\mu}} \leq c\cdot ||\phi u||_{H^{t}}, c = c(\mu ), \mu = \mu (t)$ so that it suffices to estimate the right side of (3.27). By definition, \begin{equation} \label{e3.28} ||\Delta (\phi u)||_{H^{t-2}} = sup|\int h\Delta (\phi u)|, \end{equation} where the supremum is over compactly supported functions $h$ with $||h||_{H_{o}^{2-t}} \leq $ 1. Since $\Delta u = f$, we have $$\Delta (\phi u) = \phi f + u\Delta\phi + 2<\nabla\phi ,\nabla u> . $$ Now \begin{equation} \label{e3.29} \int h\phi f \leq c\cdot ||f||_{L^{1}}\cdot ||h||_{L^{\infty}} \leq c\cdot ||f||_{L^{1}}, \end{equation} where the last inequality follows from Sobolev embedding. Next \begin{equation} \label{e3.30} \int hu\Delta\phi \leq c\cdot ||h||_{L^{\infty}}||u||_{L^{1}} \leq c\cdot ||u||_{L^{1}}, \end{equation} since we may choose $\phi $ with $|\Delta\phi| \leq c$ (e.g. $\phi = \phi (\sigma ),$ for $\sigma $ as in the proof of Prop. 3.7). For the last term, applying the divergence theorem gives \begin{equation} \label{e3.31} \int h<\nabla\phi , \nabla u> = -\int uh\Delta\phi -\int u<\nabla h, \nabla\phi> . \end{equation} The first term here is treated as in (3.30). For the second term, for any $\varepsilon_{1} > $ 0 we may choose $\phi $ so that $|\nabla\phi^{\varepsilon_{1}}|$ is bounded, (with bound depending only on $\varepsilon_{1}).$ By Sobolev embedding $|\nabla h|$ is bounded in $L^{3+\varepsilon_{2}},$ for $\varepsilon_{2} = \varepsilon_{2}(\mu )> $ 0. Thus, applying the H\"older inequality, $$|\int u<\nabla h, \nabla\phi>| \leq c\int\phi^{1-\varepsilon_{1}}u|\nabla h| \leq c\cdot ||\phi^{1-\varepsilon_{1}}u||_{L^{(3/2)-\varepsilon_{2}}}. $$ Write $(\phi^{1-\varepsilon_{1}}u)^{(3/2)-\varepsilon_{2}} = (\phi u)^{p}u^{1/2},$ where $p = 1-\varepsilon_{2}.$ For $\varepsilon_{2}$ small, this gives $\varepsilon_{1} \sim 1/3.$ It follows that $$\int (\phi^{1-\varepsilon_{1}}u)^{(3/2)-\varepsilon_{2}} = \int (\phi u)^{p}\cdot u^{1/2} \leq \bigl(\int (\phi u)^{2p}\bigl)^{1/2}\bigl(\int u\bigr)^{1/2}. $$ This estimate, together with standard use of the Young and interpolation inequalities, c.f. [GT, (7.5),(7.10)] implies that \begin{equation} \label{e3.32} ||\phi^{1-\varepsilon_{1}}u||_{L^{(3/2)-\varepsilon_{2}}} \leq \kappa||\phi u||_{L^{2}} + c\cdot \kappa^{-a}||u||_{L^{1}}, \end{equation} for any given $\kappa > $ 0, where $c$ and $a$ depend only on $\varepsilon_{2}.$ Since $||\phi u||_{L^{2}} \leq ||\phi u||_{H^{t}},$ by choosing $\kappa $ small in (3.32), we may absorb the first term on the right in (3.32) to the left in (3.27). Combining the estimates above gives $$||u||_{L^{3-\mu}(\bar{B})} \leq c(\mu )\cdot (||f||_{L^{1}(B)} + ||u||_{L^{1}(B)}). $$ The $L^{1}$ norm of $u$ is estimated in Proposition 3.7. 
In addition, we have $$\int_{B_{\varepsilon}}u \leq ||u||_{L^{3-\mu}(\bar{B})}\cdot ({\rm vol} B_{\varepsilon})^{q}, $$ where $q = \frac{2-\mu}{3-\mu}.$ Choosing $\varepsilon $ small, this term may also be absorbed into the left in (3.27), which gives the bound (3.26). {\qed } We point out that although the assumption (3.19) on $u$, (equivalent to (3.13) on $\omega$), is in fact not used in Proposition 3.7, it is required in Lemma 3.8 to control the $L^1$ norm of $u$. We now apply these results to the trace equation (3.2), so that $u = -\omega $ and $f = \frac{1}{4}\alpha|r|^{2}.$ Using the fact that $f \geq $ 0, we obtain: \begin{corollary} \label{c 3.9.} On (B, g) as above, \begin{equation} \label{e3.33} sup_{\bar{B}}|\omega| \leq c(d,\varepsilon^{-1})\cdot \int_{B}\alpha|r|^{2} + {\tfrac{1}{2}} |\omega|_{av}(\varepsilon ). \end{equation} \end{corollary} {\bf Proof:} By Lemma 3.8, it suffices to prove there is a bound $$sup_{\bar{B}}|\omega| \leq c||\omega||_{L^{2}(B)}. $$ Since $f$ is non-negative, i.e. $\Delta (-\omega ) \geq $ 0, this estimate is an application of the DeGiorgi-Nash-Moser sup estimate for subsolutions of the Laplacian, c.f. [GT, Thm 8.17]. {\qed