\begin{document} \keywords{Intersection cohomology; Cap product; Cup product; Topological invariance} \subjclass[2010]{55N33, 57N80, 55S05} \today \begin{abstract} In previous works, we have introduced the blown-up intersection cohomology and used it to extend Sullivan's minimal models theory to the framework of pseudomanifolds, and to give a positive answer to a conjecture of M. Goresky and W. Pardon on Steenrod squares in intersection homology. In this paper, we establish the main properties of this cohomology. One of its major features is the existence of cup and cap products, at the cochain level, for any filtered space and any commutative ring of coefficients. Moreover, we show that each stratified map induces a homomorphism between the blown-up intersection cohomologies, compatible with the cup and cap products. We also prove its topological invariance in the case of a pseudomanifold with no codimension one strata. Finally, we compare it with the intersection cohomology studied by G. Friedman and J.E. McClure. Most of our results involve general perversities, defined independently on each stratum, and a tame intersection homology adapted to large perversities. \end{abstract} \title{Blown-up intersection cohomology} \tableofcontents \section*{Introduction} Intersection homology was defined by M.~Goresky and R.~MacPherson in the case of PL-pseudomanifolds in \cite{MR572580}, and with sheaf theory in \cite{MR696691}, where the authors restore Poincar\'e duality for pseudomanifolds: if $X$ is a compact oriented pseudomanifold of dimension $n$ and $\overline p, \overline q$ are two complementary perversities, there is an isomorphism $$\lau H {\overline p} r {X;\mathbb{Q}} \cong \lau H {n-r}{\overline q} {X;\mathbb{Q}}, $$ with $\lau H {n-r} {\overline q} {X;\mathbb{Q}} := \hom(\lau H {\overline q} {n-r}{X;\mathbb{Q}};\mathbb{Q})$.
This restoration also presents some disadvantages: \begin{enumerate} \item[(1)] If we do not restrict ourselves to perversities lying between the zero perversity $\overline 0$ and the top perversity $\overline t$ and satisfying some growth condition, then Poincar\'e duality no longer holds and we need to use the more general {\em tame intersection cohomology} $\lau {\mathfrak{H}} *{\overline p}{X;R}$ of $X$ (see \defref{tameNormHom}), \item[(2)] The cohomology is defined via a linear dual process and not via a geometrical one, \item[(3)] This duality does not hold over an arbitrary commutative ring $R$: for a commutative ring of positive characteristic in particular, ``torsion freeness'' properties of the links of the singularities are needed. \end{enumerate} {\em Blown-up intersection cohomology} aims to overcome some of these difficulties. The goal of this article is to develop the main properties of the blown-up intersection cohomology $\lau \mathscr H * {\overline p} {X;R}$ for filtered spaces and, more precisely, for CS sets. This theory was successfully used in \cite{CST1} to extend the theory of Sullivan's minimal models to intersection cohomology and in \cite{CST2} to answer a conjecture on Steenrod squares. The first sections contain some reminders about the types of stratified spaces and maps used in the rest of the paper. After that come the definitions, by a local system, of the blown-up complex $\lau{\widetilde{N}}*{\overline p}{X;R}$, first for a regular simplex, then by a limit argument for a general weighted simplicial complex in \propref{prop:pasdeface} and finally for a perverse space $(X,\overline p)$ in \defref{def:thomwhitney}.
The interesting features are that \begin{enumerate} \item[(1)] the perversities $\overline p$ used here are completely general, lifting the restriction on the perversities used for classical intersection cohomology, \item[(2)] it is well defined for any commutative ring $R$ as coefficient ring, regardless of its characteristic. \end{enumerate} The existence and properties of a cup product for this cohomology are proved in \propref{42} via the existence of an associative multiplication $$ - \cup - \colon \lau {\widetilde{N}} k {\overline p} {X;R} \otimes \lau {\widetilde{N}} \ell {\overline q} {X;R} \to \lau {\widetilde{N}} {\ell+k} {\overline p +\overline q} {X;R}, $$ inducing a graded commutative multiplication with unity $$ - \cup - \colon \lau \mathscr H k {\overline p} {X;R} \otimes \lau \mathscr H \ell {\overline q} {X;R} \to \lau \mathscr H {\ell+k} {\overline p +\overline q} {X;R}. $$ The same goes for the cap product: we prove in \propref{prop:cap} that the cap product provides an interaction between the blown-up intersection cohomology $\lau \mathscr H * {\overline p} {X;R}$ and the tame intersection homology $\lau {\mathfrak{H}} {\overline p}* {X;R}$. The first part ends with two sections detailing the behavior of a stratified map with respect to blown-up intersection cohomology, first at the local level and then globally. The key result here is \thmref{MorCoho}, stating that any stratified map $f \colon X \to Y$, as defined in \defref{def:applistratifieeforte}, induces a chain map $$ f^*\colon \lau {\widetilde{N}} k {\overline p} {Y;R} \to \lau {\widetilde{N}} k {\overline p} {X;R} $$ compatible with the cup product.
This theorem also states that the induced maps $f_*$ and $f^*$ are compatible with the interaction between the blown-up intersection cohomology $\lau \mathscr H * {\overline p} {X;R}$ and the tame intersection homology $\lau {\mathfrak{H}} {\overline p}* {X;R}$ of \propref{prop:cap}. The second part of the paper is devoted to the various properties of the blown-up intersection cohomology and their proofs. The first property comes from \thmref{thm:Upetits}, stating the existence of the complex of $\cal U$-small cochains $\lau {\widetilde{N}} {*,\cal U}{\overline p} {X;R}$, where $\cal U$ is an open cover of the perverse space $(X,\overline p)$, and the existence of a quasi-isomorphism $\lau {\widetilde{N}} {*}{\overline p} {X;R} \to \lau {\widetilde{N}} {*,\cal U}{\overline p} {X;R}$. This then leads to \thmref{thm:MVcourte}, where the Mayer-Vietoris exact sequence is proved. The next two sections detail results about explicit computations with the blown-up intersection cohomology. The first one is \thmref{prop:isoproduitR}, determining the blown-up intersection cohomology of the product of a filtered space $X$ with the line $\mathbb{R}$, from the existence of an isomorphism $$ \lau \mathscr H * {\overline p} {X \times \mathbb{R};R} \cong \lau \mathscr H * {\overline p} {X;R}. $$ The second result is \thmref{prop:coneTW}, giving the computation of the blown-up intersection cohomology of an open cone ${\mathring{\tc}} X$ over a compact filtered space $X$. The paper finishes with the following two theorems.
\thmref{thm:TWGMcorps} compares the blown-up intersection cohomology $\lau \mathscr H * {\overline p} {X;R}$ to the already known tame intersection cohomology $\lau {\mathfrak{H}} * {\overline p} {X;R}$ for $(X, \overline p)$ a paracompact separable perverse CS set, and proves that if $R$ is a field, or if the space $X$ is a locally $(D\overline p,R)$-torsion free pseudomanifold, there is an isomorphism $$ \lau \mathscr H * {\overline p} {X;R} \cong \lau {\mathfrak{H}} * {D\overline p} {X;R}, $$ where $D\overline p$ is the complementary perversity. The last one, \thmref{inv}, states that the blown-up intersection cohomology is a topological invariant when working with GM-perversities and separable paracompact CS sets with no codimension one strata. We fix for the sequel a commutative ring $R$ with unity. All (co)homologies in this work are considered with coefficients in $R$. For a topological space $X$, we denote by ${\mathtt c} X= X \times [0,1]/ X \times \{ 0\}$ the \emph{cone} on $X$ and by ${\mathring{\tc}} X = X \times [0,1[/ X \times \{ 0\}$ the \emph{open cone} on $X$. We thank the anonymous referee for her/his comments and suggestions, which have in particular contributed to improving the writing and the organization of this introduction. \part{Blown-up intersection cohomology. Stratified maps}\label{part:TWcoho} We introduce the main notion of this work: the blown-up intersection cohomology and its associated cup and cap products. \section{Some reminders.}\label{sec:rappels} This section contains the basic definitions and properties of the main notions used in this work. \begin{definition}\label{def:espacefiltre} A \emph{filtered space} is a Hausdorff topological space endowed with a filtration by closed subspaces \begin{equation*} \emptyset = X_{-1} \subseteq X_0 \subseteq X_1 \subseteq \ldots \subseteq X_{n-1} \subsetneqq X_n=X. \end{equation*} The \emph{formal dimension} of $X$ is $\dim X=n$.
The non-empty connected components of $X_{i}\backslash X_{i-1}$ are the \emph{strata} of $X$. Those of $X_n\backslash X_{n-1}$ are \emph{regular strata}, while the others are \emph{singular strata.} The family of strata of $X$ is denoted by $\cal S_X$. The {\em singular set} is $X_{n-1}$, denoted by $\Sigma_X$ or simply $\Sigma$. The {\em formal dimension} of a stratum $S \subset X_i\backslash X_{i-1}$ is $\dim S=i$. The {\em formal codimension} of $S$ is ${\rm codim\,} S = \dim X -\dim S$. \end{definition} An open subset $U$ of $X$ can be provided with the \emph{induced filtration,} defined by $U_i = U \cap X_{i}$. If $M$ is a topological manifold, the \emph{product filtration} is defined by $\left(M \times X\right)_i = M \times X_{i}$ (see the remarks about shifted filtrations in subsections \ref{SF1} and \ref{SF}). The more restrictive concept of stratified space provides a better behavior of the intersection (co)homology with regard to continuous maps. \begin{definition} A \emph{stratified space} is a filtered space satisfying the following frontier condition: for any two strata $S, S'\in \cal S_X$ such that $S\cap \overline{S'}\neq \emptyset$, we have $S\subset \overline{S'}$. \end{definition} In their work (\cite{MR572580}, \cite{MR696691}), Goresky and MacPherson proved that intersection (co)ho\-mo\-logy has richer properties on a singular space $X$ with local conical structure: these are the classical stratified pseudomanifolds. The local structure is characterized by the fact that any stratum $S$ is a manifold having a conical transversal structure over the link $L$. This link must in turn be a compact stratified pseudomanifold. Friedman \cite{LibroGreg} observed that we can suppose the link $L$ to be only a compact filtered space and still preserve the (co)homological properties of $X$. These are the CS sets of L. Siebenmann \cite{MR0319207}.
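Before turning to CS sets, the following sketch (ours, not taken from the original text) illustrates \defref{def:espacefiltre} and the frontier condition on a familiar space.

```latex
% Illustrative sketch (not from the original source): the suspension of the torus.
\begin{example}
Let $X=\Sigma T^{2}$ be the suspension of the $2$-torus, filtered by
$$\emptyset=X_{-1}\subseteq X_{0}=\{\text{the two suspension points}\}
  =X_{1}=X_{2}\subsetneqq X_{3}=X.$$
The two suspension points are singular strata of formal dimension $0$,
hence of formal codimension $3$, and the connected open subset
$X\setminus X_{0}\cong T^{2}\times\,]0,1[$ is the unique regular stratum.
Each singular stratum lies in the closure of the regular stratum, so the
frontier condition holds and $X$ is a stratified space.
\end{example}
```

The singular set here is $\Sigma_X=X_{2}$, consisting of the two suspension points.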
\begin{definition} A \emph{CS set} of dimension $n$ is a filtered space, $$ \emptyset\subset X_0 \subseteq X_1 \subseteq \cdots \subseteq X_{n-2} \subseteq X_{n-1} \subsetneqq X_n =X, $$ such that, for each $i$, $X_i\backslash X_{i-1}$ is a topological manifold of dimension $i$ or the empty set. Moreover, for each point $x \in X_i \backslash X_{i-1}$, $i\neq n$, there exist \begin{enumerate}[(i)] \item an open neighborhood $V$ of $x$ in $X$, endowed with the induced filtration, \item an open neighborhood $U$ of $x$ in $X_i\backslash X_{i-1}$, \item a compact filtered space $L$, of formal dimension $n-i-1$, whose open cone, ${\mathring{\tc}} L$, is provided with the {\em conical filtration}, $({\mathring{\tc}} L)_{j}={\mathring{\tc}} L_{j-1}$, with ${\mathring{\tc}} \emptyset = {\mathtt v}$ the apex of ${\mathring{\tc}} L$, \item a homeomorphism, $\varphi \colon U \times {\mathring{\tc}} L\to V$, such that \begin{enumerate}[(a)] \item $\varphi(u,{\mathtt v})=u$, for each $u\in U$, \item $\varphi(U\times {\mathring{\tc}} L_{j})=V\cap X_{i+j+1}$, for each $j\in \{0,\ldots,n-i-1\}$. \end{enumerate} \end{enumerate} The pair $(V,\varphi)$ is a \emph{conical chart} of $x$ and the filtered space $L$ is a \emph{link} of $x$. The CS set is called \emph{normal} if the links are connected. \end{definition} In the above definition, the links are always non-empty sets. Therefore, the open subset $X_{n} \backslash X_{n-1}$ is dense. Links are not necessarily CS sets, but they are always filtered spaces. Note also that the links associated to points living in the same stratum may not be homeomorphic, but they always have the same intersection homology; see for example \cite[Chapter 5]{LibroGreg}.
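As a sketch (ours, added for illustration) of the conical charts appearing in the definition above, consider the simplest singular example.

```latex
% Illustrative sketch (not from the original source): the open cone on the circle.
\begin{example}
Let $X={\mathring{\tc}} S^{1}$, filtered by
$X_{0}=\{{\mathtt v}\}=X_{1}\subsetneqq X_{2}=X$, where the link $S^{1}$
carries the trivial filtration of formal dimension $1$ (so $L_{0}=\emptyset$
and $L_{1}=S^{1}$). For the apex $x={\mathtt v}\in X_{0}$, take
$U=\{{\mathtt v}\}$, $V=X$, $L=S^{1}$ and
$\varphi\colon U\times{\mathring{\tc}} S^{1}\to V$, $\varphi({\mathtt v},z)=z$.
Then $\varphi({\mathtt v},{\mathtt v})={\mathtt v}$, and condition (b) holds:
$\varphi(U\times{\mathring{\tc}} L_{0})=\{{\mathtt v}\}=V\cap X_{1}$ and
$\varphi(U\times{\mathring{\tc}} L_{1})=X=V\cap X_{2}$.
Thus $(V,\varphi)$ is a conical chart of ${\mathtt v}$ with link $S^{1}$;
the link being connected, this CS set is normal.
\end{example}
```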
Finally, note that any open subset of a CS set is a CS set, that any CS set is a locally compact stratified space \cite[Theorem G]{CST1}, and that a paracompact CS set is metrizable \cite[Proposition 1.11]{CST3}. Pseudomanifolds are special cases of CS sets. Their definition varies in the literature; we consider here the original definition of M. Goresky and R. MacPherson \cite{MR572580}. \begin{definition}\label{def:pseudomanifold} An \emph{$n$-dimensional pseudomanifold} (or simply pseudomanifold) is an $n$-dimensional CS set where the link $L$ of a point $x \in X_i\backslash X_{i-1}$ is an $(n-i-1)$-dimensional pseudomanifold. We refer to a pseudomanifold such that $X_{n-1} = X_{n-2}$ as a \emph{classical $n$-dimensional pseudomanifold}. \end{definition} There are several notions of stratified maps. \begin{definition}\label{def:applistratifieeforte} Let $f\colon X \to Y$ be a continuous map between two stratified spaces. The map $f$ is a \emph{stratified map} if it sends each stratum $S$ of $X$ into a stratum $\bi S f$ of $Y$, $f(S) \subset \bi S f$, satisfying ${\rm codim\,} S \geq {\rm codim\,} \bi Sf$. The stratified map $f$ is a \emph{stratified homeomorphism} if $f$ and $f^{-1}$ are stratified maps. The map $f$ is a \emph{stratum preserving stratified map} if $n=\dim X = \dim Y$ and $f^{-1}(Y_{n-\ell})=X_{n-\ell}$, for any $\ell \in \{0,\dots,n\}$. \end{definition} This notion of stratified map does not exactly match those found in \cite{LibroGreg} or \cite{MR696691}. There are two families of perversities: the filtration-depending ones (introduced by Goresky-MacPherson \cite{MR572580}) and the stratification-depending ones (introduced by MacPherson \cite{RobertSF}).
\begin{definition}\label{def:perversite} A \emph{Goresky-MacPherson perversity} (or \emph{$GM$-perversity}) (see \cite{MR572580}) is a map $\overline {p} \colon \mathbb{N} \to \mathbb{Z}$, satisfying $\overline p(0)=\overline{p}(1)=\overline{p}(2)=0$ and $\overline p (i) \leq \overline p (i+1) \leq \overline p (i) +1$ for each $i\geq 2$. \end{definition} We use in this work the notion of perversity introduced by MacPherson \cite{RobertSF}, also present in \cite{MR1245833,MR2210257,MR2721621,MR2461258,MR2796412,MR3046315}. Unlike classical perversities, these perversities are not maps depending only on the codimension of the strata but maps defined on the strata themselves. \begin{definition}\label{def:perversitegen} A \emph{perversity on a filtered space} $X$ is a map $\overline {p} \colon \cal S_X \to \mathbb{Z} \cup \{\pm\infty\}$ taking the value~$0$ on the regular strata. The pair $(X, \overline {p})$ is called a \emph{perverse space.} If $X$ is a CS set, we will say that $(X, \overline {p})$ is a \emph{perverse CS set.} The \emph{top perversity} is the perversity defined by $\overline {t} (S) = {\rm codim\,} (S) -2$ on singular strata. The {\em complementary perversity} of a perversity $\overline p$ is the perversity $D\overline p = \overline t - \overline p$. The \emph{zero perversity} is defined by $\overline {0} (S) = 0$. A GM-perversity induces a perversity on a filtered space $X$, still denoted by $\overline p$, defined by $\overline p (S) = \overline p ({\rm codim\,} S)$. \end{definition} \begin{definition} Let $f \colon X \to Y$ be a stratified map. The \emph{pull-back of a perversity} $\overline q$ of $Y$ is the perversity $f^* \overline q$ of $X$ defined by $f^*\overline q (S) = \overline q(\bi S f)$. \end{definition} \section{Blown-up complex of a weighted simplicial complex.}\label{sec:TWcomplexepoids} The blown-up complex is based on the blow up of a filtered simplex.
This technique goes back to \cite{MR1143404} (see also \cite{MR2210257}). In this section, we introduce the notions and first properties associated to the blow up of a regular euclidean simplex (see \defref{def:eclate}) and, more generally, of a weighted simplicial complex (see \defref{def:poidssommet}). The cone of a euclidean simplex $\Delta = [e_0, \dots, e_m]$ is the euclidean simplex ${\mathtt c} \Delta = [e_0, \dots, e_m, {\mathtt v}]$. The notation $\nabla \triangleleft \Delta$ means that $\nabla$ is a face of $\Delta$. \subsection{Differential complexes associated to a simplicial complex}\label{subsec:petitspoids} Let $\cal L$ be an oriented simplicial complex whose family of vertices is $\cal V (\cal L) = \{e_{0}, \dots, e_{m}\}$. We denote by $(\hiru N {*} {\cal L},\partial)$ the $R$-complex of linear simplices of $\cal L$. The differential of $(\hiru N {*} {\cal L}, \partial)$ is defined by $$\partial [e_{i_{0}},\dots,e_{i_{p}}]= \sum_{k=0}^{p} (-1)^k [e_{i_{0}},\dots,\hat{e}_{i_{k}},\dots,e_{i_{p}}].$$ For any vertex $e\in\cal V(\cal L)$, we define the homomorphism $-*e\colon N_{p}(\cal L)\to N_{p+1}(\cal L)$ by \begin{equation*}\label{equa:starsimplex} [e_{i_{0}},\dots, e_{i_{p}}]*e= \left\{ \begin{array}{cl} [e_{i_{0}},\dots ,e_{i_{p}},e]&\text{if }[e_{i_{0}},\dots ,e_{i_{p}},e]\subset \cal L,\\ 0&\text{if not.} \end{array}\right. \end{equation*} This operator is extended to the empty set by $\emptyset \ast e = [e]$. When $[e_{0}, \dots, e_{r}]$ is not an ordered simplex, by convention it means $(-1)^{\varepsilon(\sigma)} [e_{\sigma(0)}, \dots, e_{\sigma(r)}]$, where $\sigma$ is the permutation for which $[e_{\sigma(0)}, \dots, e_{\sigma(r)}]$ is an ordered simplex and $\varepsilon(\sigma)$ its signature.
We also put $[e_{0}, \dots, e_{r}] = 0$ if two vertices $e_{i}$ are equal. Let $(\Hiru N {*} {\cal L}, \delta)$ be the $R$-dual, $\hom (\hiru N {*} {\cal L}, R)$, equipped with the transpose differential of $\partial$, defined by $(\delta f)(v) = -(-1)^{|f|} f(\partial v)$. Among the elements of $\Hiru N {*} {\cal L}$, we consider the dual basis of the simplices of $\cal L$; i.e., if $F$ is a $p$-simplex of $\cal L$, we denote by ${\boldsymbol 1}_{F}$ the $p$-cochain taking the value $1$ on $F$ and $0$ on the other simplices of $\cal L$. If $e \in \cal V (\cal L)$, we also introduce a homomorphism, $- * e \colon \Hiru N {p} {\cal L}\to \Hiru N {p+1} {\cal L}$, defined by \begin{equation*} {\boldsymbol 1}_{[e_{i_{0}},\dots,e_{i_{p}}]}\ast e= (-1)^p{\boldsymbol 1}_{[e_{i_{0}},\dots,e_{i_{p}},e]}, \end{equation*} putting ${\boldsymbol 1}_{[e_{i_{0}},\dots,e_{i_{p}},e]} = 0$ if $[e_{i_{0}},\dots,e_{i_{p}},e] \not\subset \cal L$. On the elements of the dual basis, the differential $\delta \colon \Hiru N * {\cal L} \to \Hiru N {*+1} {\cal L}$ takes the value \\ $\delta{\boldsymbol 1}_{[e_{i_{0}},\dots,e_{i_{p}}]}=\sum_{{k}\notin \{i_{0},\dots,i_{p}\}} {\boldsymbol 1}_{[e_{i_{0}},\dots,e_{i_{p}},e_{{k}}]}.$ Using $[e_{i_0}, \dots, e_{i_k}] = 0$ when two of the vertices are equal, we can consider that the above sum is indexed by the set of vertices of $\cal L$; i.e., \begin{equation}\label{equa:ledelta1} \delta {\boldsymbol 1}_{[e_{i_{0}},\dots,e_{i_{p}}]} =\sum_{e\in\cal V(\cal L)} {\boldsymbol 1}_{[e_{i_{0}},\dots,e_{i_{p}},e]}= (-1)^p\sum_{e\in\cal V(\cal L)} {\boldsymbol 1}_{[e_{i_{0}},\dots,e_{i_{p}}]}\ast e. \end{equation} The cochain $\delta {\boldsymbol 1}_{F}$ depends on the simplicial complex ${\cal L}$ in which the face $F$ lives.
We denote by $\delta^{\cal L} {\boldsymbol 1}_{F}$ the differential of ${\boldsymbol 1}_{F}$ in~${\cal L}$, when necessary. \begin{definition}\label{def:poidssommet} A \emph{weighted simplicial complex} is an oriented simplicial complex where each vertex $e$ has a weight $w (e) \in \{0, \dots, n\}$. The integer $n$ is called the {\em formal dimension of $\cal L$.} \end{definition} Let $\cal L_{i}$ be the union of the simplices of $\cal L$ whose vertices have a weight equal to $i$. Any simplex $F$ of $\cal L$ is written (modulo orientation) as a join, $F = F_{0} \ast \dots \ast F_{n}$, with $F_i \subset \cal L_i$. These are the \emph{filtered euclidean simplices}. Let ${\mathtt c} \cal L_{i}$ be the cone on $\cal L_{i}$, whose apex ${\mathtt v}_{i}$ is called the \emph{virtual vertex},\label{virt} chosen as the last vertex of the cone; i.e., if $F_{i} = [e_{i_{0}}, \dots, e_{i_{p}}] \subset \cal L_{i}$, then ${\mathtt c} F_{i} = [e_{i_{0}}, \dots, e_{i_p}, {\mathtt v}_{i}]$. The face $[{\mathtt v}_{i}]$ can be considered as the cone on the empty set. For a comprehensive treatment of all these cases, we consider the empty set as a simplex of $\cal L_i$. The \emph{full blow up} of a weighted simplicial complex $\cal L$ is the prismatic set ${\widetilde{\mathcal L}}^{{\boldsymbol all}}={\mathtt c}\cal L_{0}\times\dots\times{\mathtt c}\cal L_{n}$. For each $i \in \{0, \dots, n\}$, we denote by $L_{i}$ either a simplex $F_{i}$ of $\cal L_{i}$, the cone ${\mathtt c} F_{i}$ on a simplex of $\cal L_i$, or the singleton reduced to the virtual vertex ${\mathtt v}_{i}$.
The products $L_{0} \times \dots \times L_{n}$ are called \emph{faces of ${\widetilde{\mathcal L}}^{{\boldsymbol all}}$} and are represented by \begin{equation*}\label{equa:lafaceseclate} (F,\varepsilon)=(F_{0},\varepsilon_{0})\times \dots \times (F_{n},\varepsilon_{n}), \end{equation*} satisfying the following conventions: \begin{itemize} \item the iterated join, $F=F_{0}\ast\dots\ast F_{n}$, is a simplex of $\cal L$; \item if $\varepsilon_{i}=0$ and $F_{i}\neq \emptyset$, then $(F_{i},0)=F_{i}$ is a simplex of $\cal L_{i}$; \item if $\varepsilon_{i}=1$ and $F_{i}\neq \emptyset$, then $(F_{i},1)={\mathtt c} F_{i}$ is the cone over the simplex $F_{i}$ of $\cal L_{i}$; \item if $F_{i}=\emptyset$, one must have $\varepsilon_{i}=1$. \end{itemize} Let $(F,\varepsilon)=(F_{0},\varepsilon_{0})\times \dots \times (F_{n},\varepsilon_{n})$ be a face of ${\widetilde{\mathcal L}}^{\boldsymbol all}$ and let $\gamma\in\{0,\dots,n\}$. We put \begin{itemize} \item $|(F,\varepsilon)|_{< \gamma}=\sum_{i< \gamma}(\dim F_{i}+\varepsilon_{i})$ and $|(F,\varepsilon)|_{> \gamma}=\sum_{i> \gamma}(\dim F_{i}+\varepsilon_{i})$, \item $|(F,\varepsilon)|_{\leq \gamma}=\sum_{i\leq \gamma}(\dim F_{i}+\varepsilon_{i})$ and $|(F,\varepsilon)|_{\geq \gamma}=\sum_{i\geq \gamma}(\dim F_{i}+\varepsilon_{i})$, \item $|(F,\varepsilon)|=|(F,\varepsilon)|_{\leq n}$. \end{itemize} \begin{definition}\label{def:eclate} The \emph{blow up} of a weighted simplicial complex $\cal L$ of dimension $n$ is the sub-prism ${\widetilde{\mathcal L}}$ of ${\widetilde{\mathcal L}}^{\boldsymbol all}$ consisting of the $(F, \varepsilon)$'s such that $\varepsilon_{n} = 0$. Notice that the corresponding simplices, $F = F_{0} \ast \dots \ast F_{n} \subset \cal L$, must satisfy $F_{n} \neq \emptyset$. Such filtered simplices are called \emph{regular}.
\end{definition} \begin{example}\label{exem:eclate} The generic case of a weighted simplicial complex is a regular simplex $\Delta = \Delta_{0} \ast \dots \ast \Delta_{n}$. Assume $\Delta$ is oriented by an order on its vertices, $(e_{i})_{0 \leq i \leq m}$, $e_{i} <e_{i+1}$, in a way compatible with the join decomposition. More precisely, expressing the maximal simplex by its vertices, and indicating by a vertical bar the filtration change, we have \begin{equation}\label{equa:ordresommets} \Delta=[\underbrace{e_{0},\dots,e_{k_{0}}}_{\Delta_{0}}\mid \underbrace{e_{k_{0}+1},\dots,e_{k_{1}}}_{\Delta_{1}}\mid\dots\mid \underbrace{\emptyset}_{\Delta_{i}}\mid\dots\mid \underbrace{e_{k_{n-1}+1},\dots,e_{k_{n}}}_{\Delta_{n}}], \end{equation} with $k_{{n}} = {m}$. The vertices $\{e_{0},\dots, e_{k_{\ell}}\}$ generate the simplex $\Delta_{0} \ast \dots \ast \Delta_{\ell}$. A face $\Delta_{i}$ can be empty, as shown in the decomposition above. As we have already noticed, giving the filtration $\Delta = \Delta_{0} \ast \dots \ast \Delta_n$ of the simplex $\Delta$ is equivalent to giving a weight on each vertex of $\Delta$, in the sense of \defref{def:poidssommet}. The corresponding blow up is \begin{equation}\label{bup} {\widetilde{\Delta}}={\mathtt c} \Delta_{0}\times\dots\times {\mathtt c}\Delta_{n-1}\times \Delta_{n}. \end{equation} \end{example} \begin{example}\label{exem:codageeclate} The coding of the faces of the blow up of a regular simplex under the form of a product $(F, \varepsilon) = (F_0,\varepsilon_{0}) \times \dots \times (F_{n},\varepsilon_n)$ is used throughout the text.
To familiarize the reader with this representation, we specify the blow up ${\widetilde{\Delta}}$ of $\Delta=\Delta_{0}\ast\Delta_{1}=[e_{0}]\ast [e_{1},e_{2}]$. \vskip -2cm $$ \begin{picture}(250,150)(100,0) \put(55,-10){\makebox(0,0){$(e_0,e_1)$}} \put(55,90){\makebox(0,0){$(e_0,e_2)$}} \put(100,40){\makebox(0,0){${\widetilde{\Delta}}$}} \put(150,-10){\makebox(0,0){$({\mathtt v},e_1)$}} \put(150,90){\makebox(0,0){$({\mathtt v},e_2)$}} \put(150,40){\makebox(0,0){\textcircled{\tiny 1}}} \put(50,40){\makebox(0,0){\textcircled{\tiny 2}}} \put(100,-10){\makebox(0,0){\textcircled{\tiny 3}}} \put(100,90){\makebox(0,0){\textcircled{\tiny 4}}} \linethickness{.7mm} \put(60,0){\line(0,1){80}} \thinlines \put(60,0){\line(1,0){80}} \put(140,80){\line(-1,0){80}} \put(140,80){\line(0,-1){80}} \put(280,0){\makebox(0,0){$\bullet$}} \put(250,-10){\makebox(0,0){$[e_0] =\Delta_0 $}} \put(370,-10){\makebox(0,0){$\Delta_1 =[e_1,e_2] $}} \put(295,5){\makebox(0,0){$e_0$}} \put(350,7){\makebox(0,0){$e_1$}} \put(352,60){\makebox(0,0){$e_2$}} \put(280,0){\line(1,1){80}} \put(280,0){\line(1,0){80}} \put(360,80){\line(0,-1){80}} \put(210,40){\makebox(0,0){\vector(1,0){70}}} \end{picture} $$ The four one-dimensional faces of the blow up ${\widetilde{\Delta}} = {\mathtt c} \Delta_0 \times \Delta_1$ are encoded as\\ \begin{center} \begin{tabular}{ll} \textcircled{\tiny 1}=$(\emptyset ,1) \times (\Delta_1,0) ,$& \textcircled{\tiny 2}=$([e_{0}] ,0) \times (\Delta_1,0),$\\[.2cm] \textcircled{\tiny 3}=$([e_{0}] ,1) \times ([e_1],0), $& \textcircled{\tiny 4}=$([e_{0}] ,1) \times ([e_2],0),$ \end{tabular} \end{center} \noindent corresponding to \begin{center} \begin{tabular}{lllllll} \textcircled{\tiny 1}=$[{\mathtt v}]\times \Delta_1,$&
& \textcircled{\tiny 2}=$\Delta_0 \times \Delta_1,$& & \textcircled{\tiny 3}=${\mathtt c}\Delta_0 \times [e_1],$& & \textcircled{\tiny 4}=${\mathtt c} \Delta_0 \times [e_2].$ \end{tabular} \end{center} \end{example} \subsection{Blown-up complex associated to a weighted simplicial complex}\label{subsec:twpoids} We recall the notations of the previous paragraph. The element ${\boldsymbol 1}_{(F_{i}, \varepsilon_{i})}$ is the cochain on ${\mathtt c} \cal L_{i}$ taking the value $1$ on the simplex $(F_{i}, \varepsilon_i)$ and $0$ on the other simplices of ${\mathtt c} \cal L_i$. If $F=F_{0}\ast\dots\ast F_{n}\subset\cal L$ and $(F,\varepsilon)=(F_{0},\varepsilon_{0})\times \dots \times (F_{n},\varepsilon_{n})$, we write $${\boldsymbol 1}_{(F,\varepsilon)}={\boldsymbol 1}_{(F_{0},\varepsilon_{0})}\otimes\dots\otimes {\boldsymbol 1}_{(F_{n},\varepsilon_{n})}.$$ The $R$-module generated by these elements, when $F$ runs over the simplices of $\cal L$, is denoted by $\Hiru{\widetilde{N}} {{\boldsymbol all}, *} {\cal L; R}$, or $\Hiru{\widetilde{N}} {{\boldsymbol all}, *} {\cal L}$ if there is no ambiguity.
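As a bridge between the face coding of Example \ref{exem:codageeclate} and the cochain notation just introduced, the following illustration (ours, not in the original) lists the dual cochains of the four one-dimensional faces pictured there, for $\Delta=\Delta_0\ast\Delta_1=[e_0]\ast[e_1,e_2]$, so $n=1$.

```latex
% Illustration (not from the original source): dual cochains of the faces
% 1-4 of Example \ref{exem:codageeclate}.
\begin{align*}
\textcircled{\tiny 1} &\longleftrightarrow
  {\boldsymbol 1}_{(\emptyset,1)}\otimes{\boldsymbol 1}_{\Delta_1}
  = {\boldsymbol 1}_{[{\mathtt v}]}\otimes{\boldsymbol 1}_{[e_1,e_2]},
&\textcircled{\tiny 2} &\longleftrightarrow
  {\boldsymbol 1}_{([e_0],0)}\otimes{\boldsymbol 1}_{\Delta_1},\\
\textcircled{\tiny 3} &\longleftrightarrow
  {\boldsymbol 1}_{([e_0],1)}\otimes{\boldsymbol 1}_{[e_1]}
  = {\boldsymbol 1}_{{\mathtt c}[e_0]}\otimes{\boldsymbol 1}_{[e_1]},
&\textcircled{\tiny 4} &\longleftrightarrow
  {\boldsymbol 1}_{([e_0],1)}\otimes{\boldsymbol 1}_{[e_2]}.
\end{align*}
```

All four generators belong to $\Hiru{\widetilde{N}}{{\boldsymbol all},*}{\Delta}$, and each satisfies $\varepsilon_{1}=0$ in its last factor.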
The sub-$R$-module generated by the elements such that $\varepsilon_{n} =0$, endowed with the differential \begin{eqnarray}\label{equa:ladiff} \delta {\boldsymbol 1}_{(F,\varepsilon)} &=& \sum_{i=0}^{n-1}(-1)^{|(F,\varepsilon)|_{<i}} {\boldsymbol 1}_{(F_{0},\varepsilon_{0})}\otimes\dots\otimes \delta^{{\mathtt c}\cal L_{i}}{\boldsymbol 1}_{(F_{i},\varepsilon_{i})} \otimes\dots\otimes{\boldsymbol 1}_{F_{n}}\\ && + (-1)^{|(F,\varepsilon)|_{<n}} {\boldsymbol 1}_{(F_{0},\varepsilon_{0})}\otimes\dots\otimes {\boldsymbol 1}_{(F_{n-1},\varepsilon_{n-1})} \otimes\delta^{\cal L_{n}}{\boldsymbol 1}_{F_{n}},\nonumber \end{eqnarray} is called the \emph{blown-up complex} of $\cal L$ and denoted by $\Hiru {\widetilde{N}} * {\cal L; R}$ or simply $\Hiru{\widetilde{N}} * {\cal L}$. (Recall that $\delta^{{\mathtt c} \cal L_{i}}$ is the differential of the cochain complex on the simplicial complex ${\mathtt c} \cal L_{i}$.) The elements of the blown-up complex are provided with an additional degree which looks like the degree of a differential form along the fiber in a bundle. \begin{definition}\label{def:degrepervers} Let $\ell \in \{1, \dots, n\}$. The \emph{$\ell$-perverse degree of the cochain ${\boldsymbol 1}_{(F, \varepsilon)}$} is equal to $$ \|{\boldsymbol 1}_{(F,\varepsilon)}\|_{\ell}=\left\{ \begin{array}{ccl} -\infty&\text{if} & \varepsilon_{n-\ell}=1,\\ |(F,\varepsilon)|_{> n-\ell} &\text{if}& \varepsilon_{n-\ell}=0. \end{array}\right.$$ If $\omega \in \Hiru {\widetilde{N}} * {\cal L; R}$ decomposes as $\omega = \sum_{\mu} \lambda_{\mu} \, {\boldsymbol 1}_{(F_{\mu}, \varepsilon_{\mu})}$, with $\lambda_{\mu} \neq 0$, its \emph{$\ell$-perverse degree} is equal to \begin{equation}\label{equa:degrepervers} \|\omega\|_{\ell}=\max_{\mu}\|{\boldsymbol 1}_{(F_{\mu},\varepsilon_{\mu})}\|_{\ell}.
\end{equation} By convention, we set $ \| 0 \|_{\end{lemma}l} = - \infty $. \end{definition} \begin{equation}x We compute the perverse degree of some cochains of the blow up $ {\widetilde{\Delta}} = c\mathbb{D}elta_0 \widetildemes c\mathbb{D}elta_1 \widetildemes \mathbb{D}elta_2$ of the regular simplex $\mathbb{D}elta =\mathbb{D}elta_0 * \mathbb{D}elta_1 * \mathbb{D}elta_2$. \begin{equation}gin{center} \begin{equation}gin{tikzpicture} \definecolor{zzttqq}{rgb}{0.6,0.2,0.9} \draw[color=black] (2.1,1.1) node {$\xleftarrow{\hspace*{.5cm} \hspace*{.5cm}}$}; \draw [color=black] (0,0)-- (1,0.5); \draw [color=black] (1,0.5)-- (0,2); \draw [color=black] (0,2)-- (0,0); \draw [color=black] (1,0.5)-- (-2,0.5); \draw [color=red,very thick,dashed] (0,0)-- (-2,0.5); \draw [color=black] (0,2)-- (-2,0.5); \fill [color=zzttqq] (-2,0.5) circle (3pt); \draw[color=black] (-2.5,0.5) node {$\mathbb{D}elta_0$}; \draw[color=black] (0,-0.2) node {$\mathbb{D}elta_1$}; \draw[color=black] (1,1.5) node {$\mathbb{D}elta_2$}; \draw [color=black] (6,0)-- (7,0.5); \draw [] (7,0.5)-- (6,2); \draw [color=black] (5,1.5)-- (6,2); \fill[color=black,fill=zzttqq,fill opacity=0.2,dashed] (4,0) -- (5,0.5) -- (4,2) -- (3,1.5) -- (4,0) -- cycle; \fill[color=black,fill=red,fill opacity=0.3] (4,0) -- (6,0)-- (5,1.5) -- (3,1.5) -- (4,0) -- cycle; \draw [color=zzttqq] (4,0)-- (5,0.5); \draw [color=black] (5,0.5)-- (7,0.5); \draw [color=black] (6,2)-- (4,2); \draw [color=black] (4,0)-- (3,1.5); \draw [color=red,dashed] (6,0)-- (4,0); \draw [color=red,dashed] (3,1.5)-- (5,1.5); \draw [color=zzttqq] (3,1.5)-- (4,2); \draw [color=red,dashed] (6,0 )-- (5,1.5); \draw [color=zzttqq] (5,0.5)-- (4,2); \draw[color=black] (7.2,1.5) node {$\mathbb{D}elta_2$}; \draw[color=black] (5,-0.3) node {${\mathtt c}\mathbb{D}elta_0$}; \draw[color=black] (7,0) node {${\mathtt c}\mathbb{D}elta_1$}; \draw[color=black] (8.5,0.5) node {${\mathtt v_1}$ \widetildeny{ apex of ${\mathtt c}\mathbb{D}elta_1$} }; \draw[color=black] (6,-.2) node 
{$\widetildeny{\mathtt v_0}$ }; \draw[color=black] (6,-.5) node {\widetildeny{ apex of ${\mathtt c}\mathbb{D}elta_0$} }; \end{tikzpicture} \end{center} $$ \begin{equation}gin{array}{ll} \|{\boldsymbol 1}_{ {\mathtt v_0} \widetildemes {\mathtt v_1} \widetildemes \mathbb{D}elta_2}\|_2 =-\infty & \|{\boldsymbol 1}_{ {\mathtt v_0} \widetildemes {\mathtt v_1} \widetildemes \mathbb{D}elta_2}\|_1 =-\infty \\ \|{\boldsymbol 1}_{{\mathtt c}\mathbb{D}elta_0 \widetildemes\mathbb{D}elta_1 \widetildemes \mathbb{D}elta_2}\|_2=-\infty & \|{\boldsymbol 1}_{{\mathtt c}\mathbb{D}elta_0 \widetildemes\mathbb{D}elta_1 \widetildemes \mathbb{D}elta_2}\|_1= \dim \mathbb{D}elta_2 \\ \|{\boldsymbol 1}_{ \mathbb{D}elta_0 \widetildemes {\mathtt c}\mathbb{D}elta_1 \widetildemes \mathbb{D}elta_2}\|_2 =\dim {\mathtt c}\mathbb{D}elta_1 + \dim \mathbb{D}elta_2 &\|{\boldsymbol 1}_{ \mathbb{D}elta_0 \widetildemes {\mathtt c}\mathbb{D}elta_1 \widetildemes \mathbb{D}elta_2}\|_1 =-\infty\\ \|{\boldsymbol 1}_{ \mathbb{D}elta_0 \widetildemes \mathbb{D}elta_1 \widetildemes \mathbb{D}elta_2}\|_2 =\dim \mathbb{D}elta_1 +\dim \mathbb{D}elta_2 & \|{\boldsymbol 1}_{ \mathbb{D}elta_0 \widetildemes \mathbb{D}elta_1 \widetildemes \mathbb{D}elta_2}\|_1 =\dim \mathbb{D}elta_2 \end{array} $$ (Here, ${\mathtt v}_i$ denotes the apex of the cone ${\mathtt c} \mathbb{D}elta_i$, $i=0,1$.) \end{equation}x \begin{definition} In the case of a \emph{regular simplex} $ \mathbb{D}elta = \mathbb{D}elta_{0} \ast \dots \ast \mathbb{D}elta_ {n} $, $\mathbb{D}elta_n\ne \emptyset$, the blown-up complex is the tensor product, $\mathbb{H}iru {\widetilde{N}} * {\mathbb{D}elta} = \mathbb{H}iru N * {{\mathtt c} \mathbb{D}elta_{0}} \otimes \dots \otimes \mathbb{H}iru N * {{\mathtt c} \mathbb{D}elta_ { n-1}} \otimes \mathbb{H}iru N * {\mathbb{D}elta_ {n}} $ which corresponds to \cite[Definition 1.31]{CST1}. 
The \emph{face operators} of a filtered simplex $ \mathbb{D}elta = \mathbb{D}elta_ {0} \ast \dots \ast \mathbb{D}elta_n$ are maps $ \mu \colon \mathbb{D}elta' = \mathbb{D}elta'_{0} \ast \dots \ast \mathbb{D}elta'_{n} \to \mathbb{D}elta = \mathbb{D}elta_{0} \ast \dots \ast \mathbb{D}elta_ {n} $, of the form $ \mu = \ast_ {i = 0}^n \mu_{i} $ where $ \mu_ {i} $ is an injective map preserving the order. When all the $\mu_{i}$ are the identity except one, which omits exactly one vertex, the face $\mathbb{D}elta'$ is a face of codimension~1. Each face operator of a factor of the join, $\delta_{\end{lemma}l}\colon \mathbb{D}elta'_{\end{lemma}l}\to \mathbb{D}elta_{\end{lemma}l}$, induces a map, still denoted by ${\delta}_{\end{lemma}l}\colon \hiru N* {{\mathtt c}\mathbb{D}elta'_{0}}\otimes \dots\otimes \hiru N *{{\mathtt c}\mathbb{D}elta'_{n-1}} \otimes \hiru N *{\mathbb{D}elta'_{n}} \to \hiru N *{{\mathtt c}\mathbb{D}elta_{0}} \otimes \dots\otimes \hiru N *{{\mathtt c}\mathbb{D}elta_{\end{lemma}l}} \otimes \dots\otimes \hiru N *{\mathbb{D}elta_{n}} $, obtained by taking the join with the identity map on the factors $ \mathbb{D}elta_{j} $, $ j \neq \end{lemma}l $. We denote by ${\delta}_{\end{lemma}l}^*\colon \mathbb{H}iru N *{{\mathtt c}\mathbb{D}elta_{0}}\otimes \dots\otimes \mathbb{H}iru N*{{\mathtt c}\mathbb{D}elta_{n-1}}\otimes \mathbb{H}iru N *{\mathbb{D}elta_{n}} \to \mathbb{H}iru N *{{\mathtt c}\mathbb{D}elta'_{0}}\otimes \dots\otimes \mathbb{H}iru N *{{\mathtt c}\mathbb{D}elta'_{\end{lemma}l}}\otimes \dots\otimes \mathbb{H}iru N *{\mathbb{D}elta'_{n}} $ the transpose of the linear map $\delta_\end{lemma}l$. \end{definition} The blown-up complex of a weighted simplicial complex can be expressed in terms of the blown-up complexes of its regular simplices. \begin{equation}gin{proposition}\label{prop:pasdeface} Let $\cal L$ be a weighted simplicial complex. 
Then $$ \mathbb{H}iru {\widetilde{N}} *{\cal L;R}\cong \varprojlim_{\mathbb{D}elta{\rm Sub}set \cal L, \mathbb{D}elta\,\text{regular}} \mathbb{H}iru {\widetilde{N}} *{\mathbb{D}elta;R}.$$ \end{proposition} \begin{equation}gin{proof} An element $\omega$ of $ \varprojlim \mathbb{H}iru {\widetilde{N}} * {\mathbb{D}elta} $ is a family $ \omega = \left( \omega_{\mathbb{D}elta} \right)_{\mathbb{D}elta} $, indexed by the regular simplices of $ \cal L$, with $ \omega_{\mathbb{D}elta} \in \mathbb{H}iru {\widetilde{N}} * {\mathbb{D}elta} $ and satisfying the following face compatibility conditions: if $ \delta_{\end{lemma}l} \colon \nabla \to \mathbb{D}elta $ is a regular face of codimension 1 of a simplex $ \mathbb{D}elta $ of $ \cal L$, then $ \delta_{\end{lemma}l}^* {\omega _ {\mathbb{D}elta}} = \omega_{\nabla} \in \mathbb{H}iru {\widetilde{N}} * {\nabla} $. To $ \omega \in \varprojlim \mathbb{H}iru {\widetilde{N}} * {\mathbb{D}elta} $, we associate the cochain $ \omega_{\cal L} \in \mathbb{H}iru {\widetilde{N}} k {\cal L} $ defined by $$\omega_{\cal L}(F,\varepsilon) = \omega_{F}(F,\varepsilon).$$ Conversely, let $\omega_{\cal L}= \sum_{F} \alpha_{(F,\varepsilon)}{\boldsymbol 1}_{(F,\varepsilon)}\in{\widetilde{N}}^k(\cal L)$, where $ F $ runs over the regular simplices of $ \cal L$, $ | (F, \varepsilon) | =k $ and $ \alpha_{(F, \varepsilon)} \in R $. For each regular simplex $ \mathbb{D}elta $ of $ \cal L$, we define $ \omega_{\mathbb{D}elta} \in \mathbb{H}iru{\widetilde{N}} k {\mathbb{D}elta} $ by $$\omega_{\mathbb{D}elta}= \sum_{F{\vartriangleleft} \mathbb{D}elta} \alpha_{(F,\varepsilon)}{\boldsymbol 1}_{(F,\varepsilon)}.$$ It remains to prove the compatibility with respect to the face operators. Let $ \delta_{\end{lemma}l} \colon \nabla \to \mathbb{D}elta $ be a regular face of $\mathbb{D}elta$ and $ F $ a simplex of $\mathbb{D}elta $. 
By definition of the dual basis one has $${\delta}^*_{\end{lemma}l}({\boldsymbol 1}_{(F,\varepsilon)})= \left\{\begin{equation}gin{array}{cl} {\boldsymbol 1}_{(F,\varepsilon)} &\text{if}\; F{\rm Sub}set\nabla,\\ 0&\text{if not.} \end{array}\right.$$ It follows that $$\delta_{\end{lemma}l}^*(\omega_{\mathbb{D}elta})= \sum_{F{\vartriangleleft} \nabla}\alpha_{(F,\varepsilon)}{\boldsymbol 1}_{(F,\varepsilon)} =\omega_{\nabla}.$$ \end{proof} {\rm Sub}section{Adjunction of a vertex to a cochain of the blown-up complex} \begin{definition}\label{def:etunpointun} Let $(F,\varepsilon)=(F_{0},\varepsilon_{0})\widetildemes \dots \widetildemes (F_{n},\varepsilon_{n})$ be a face of ${{\widetilde{\mathcal L}}^{\boldsymbol all}}$ and $\end{lemma}l\in\{0,\dots,n\}$. \emph{The adjunction of a vertex $e\in \cal L_{\end{lemma}l}$} to the cochain ${\boldsymbol 1}_{(F,\varepsilon)}$ is the cochain \begin{equation}gin{equation*}\label{equa:etunpointunaussi} {\boldsymbol 1}_{(F,\varepsilon)}\ast e= (-1)^{|(F,\varepsilon)|_{> \end{lemma}l}}\; {\boldsymbol 1}_{(F_{0},\varepsilon_{0})}\otimes\dots\otimes ({\boldsymbol 1}_{(F_{\end{lemma}l},\varepsilon_{\end{lemma}l})}\ast e) \otimes\dots\otimes {\boldsymbol 1}_{(F_{n},\varepsilon_{n})}. \end{equation*} Likewise, for the virtual vertex ${\mathtt v}_{\end{lemma}l}$, we set \begin{equation}gin{equation*}\label{equa:etunpointvirtuel} {\boldsymbol 1}_{(F,\varepsilon)}\ast {\mathtt v}_{\end{lemma}l} = (-1)^{|(F,\varepsilon)|_{> \end{lemma}l}}\; {\boldsymbol 1}_{(F_{0},\varepsilon_{0})}\otimes\dots\otimes ({\boldsymbol 1}_{(F_{\end{lemma}l},\varepsilon_{\end{lemma}l})}\ast {\mathtt v}_{\end{lemma}l})\otimes\dots\otimes {\boldsymbol 1}_{(F_{n},\varepsilon_{n})}. \end{equation*} We extend this adjunction by linearity to $\mathbb{H}iru {\widetilde{N}} {{\boldsymbol all},*}{\cal L;R}$. \end{definition} The next property follows directly from the definition. 
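The sign bookkeeping of \defref{def:etunpointun} can be checked on a toy model. The following Python sketch (the names and data layout are ours, purely illustrative) records a basis cochain only through its per-factor degrees $\dim F_{i}+\varepsilon_{i}$; this is enough to capture the sign $(-1)^{|(F,\varepsilon)|_{> \end{lemma}l}}$ and the anticommutativity of two adjunctions in distinct factors (within a single factor, anticommutativity also uses the orientation of the join, which this model does not encode).

```python
# Toy model of the vertex adjunction: a basis cochain 1_{(F,eps)} is
# represented only by its per-factor degrees d_i = dim F_i + eps_i.
# Illustrative sketch; not an implementation taken from the paper.

def adjoin(degrees, pos):
    """Adjoin a vertex to factor `pos`; return (sign, new degrees).

    The sign (-1)^{|(F,eps)|_{>pos}} counts the total degree of the
    factors sitting to the right of the modified one."""
    sign = (-1) ** sum(degrees[pos + 1:])
    new = list(degrees)
    new[pos] += 1          # joining a vertex raises dim F_pos by one
    return sign, new

def adjoin_twice(degrees, p1, p2):
    s1, d1 = adjoin(degrees, p1)
    s2, d2 = adjoin(d1, p2)
    return s1 * s2, d2

# Two adjunctions in distinct factors anticommute: same resulting face,
# opposite global signs.
sa, fa = adjoin_twice([2, 1, 3], 0, 2)
sb, fb = adjoin_twice([2, 1, 3], 2, 0)
assert fa == fb and sa == -sb
```

The assertion mirrors the first identity of the lemma below, restricted to vertices lying in distinct factors of the join.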
\begin{equation}gin{lemma}\label{lem:unecellulepuislautre} Let $\cal L$ be a weighted simplicial complex and $(F,\varepsilon)$ a face of ${{\widetilde{\mathcal L}}^{\boldsymbol all}}$. Consider two vertices of $\cal L$, $e_{\alpha}\in \cal L_{\end{lemma}l(\alpha)}$ and $e_{\begin{equation}ta}\in \cal L_{\end{lemma}l(\begin{equation}ta)}$, and two virtual vertices ${\mathtt v}_{\end{lemma}l}$ and ${\mathtt v}_{\end{lemma}l'}$. Then the following properties hold: \begin{equation}gin{eqnarray*} {\boldsymbol 1}_{(F,\varepsilon)}\ast e_{\alpha}\ast e_{\begin{equation}ta} &=& - {\boldsymbol 1}_{(F,\varepsilon)}\ast e_{\begin{equation}ta}\ast e_{\alpha}, \\ {\boldsymbol 1}_{(F,\varepsilon)}\ast e_{\alpha}\ast {\mathtt v}_{\end{lemma}l} &=& - {\boldsymbol 1}_{(F,\varepsilon)}\ast {\mathtt v}_{\end{lemma}l}\ast e_{\alpha},\\ {\boldsymbol 1}_{(F,\varepsilon)}\ast {\mathtt v}_{\end{lemma}l}\ast {\mathtt v}_{\end{lemma}l'} &=& - {\boldsymbol 1}_{(F,\varepsilon)}\ast {\mathtt v}_{\end{lemma}l'}\ast {\mathtt v}_{\end{lemma}l}. \end{eqnarray*} \end{lemma} \begin{equation}gin{proposition}\label{prop:dfaceajout} Let $\cal L$ be a weighted simplicial complex. The blown-up differential of an element ${\boldsymbol 1}_{(F,\varepsilon)}\in\mathbb{H}iru {\widetilde{N}} *{\cal L;R}$ can be written as \begin{equation}gin{equation}\label{equa:diffetpoint} \delta{\boldsymbol 1}_{(F,\varepsilon)}= (-1)^{|(F,\varepsilon)|} \left( \sum_{e\in \cal V(\cal L)} {\boldsymbol 1}_{(F,\varepsilon)}\ast e + \sum_{i=0}^{n-1} {\boldsymbol 1}_{(F,\varepsilon)}\ast {\mathtt v}_{i} \right). \end{equation} \end{proposition} \begin{equation}gin{proof} Let ${\boldsymbol 1}_{(F,\varepsilon)}= {\boldsymbol 1}_{(F_{0},\varepsilon_{0})}\otimes\dots\otimes {\boldsymbol 1}_{(F_{n-1},\varepsilon_{n-1})} \otimes{\boldsymbol 1}_{F_{n}}$. 
By replacing $\delta^{{\mathtt c}\cal L_{i}}$ and $\delta^{\cal L_{n}}$ by their values from (\ref{equa:ledelta1}) in the equality (\ref{equa:ladiff}) and denoting $|(F_i,\varepsilon_i)| = \dim F_i + \varepsilon_i$, we get \begin{equation}gin{eqnarray*} \delta {\boldsymbol 1}_{(F,\varepsilon)} &=&\\ \sum_{i=0}^{n}(-1)^{|(F,\varepsilon)|_{<i}} \sum_{e_{i}\in\cal V(\cal L_{i})} (-1)^{|(F_{i},\varepsilon_{i})|} {\boldsymbol 1}_{(F_{0},\varepsilon_{0})}\otimes\dots\otimes ({\boldsymbol 1}_{(F_{i},\varepsilon_{i})}\ast e_{i}) \otimes\dots\otimes{\boldsymbol 1}_{(F_{n},\varepsilon_{n})}&&\\ +\sum_{i=0}^{n-1}(-1)^{|(F,\varepsilon)|_{\leq i}} {\boldsymbol 1}_{(F_{0},\varepsilon_{0})}\otimes\dots\otimes ({\boldsymbol 1}_{(F_{i},\varepsilon_{i})}\ast {\mathtt v}_{i}) \otimes\dots\otimes {\boldsymbol 1}_{F_{n}}. \end{eqnarray*} The desired formula follows from the definition of the adjunction of a vertex. \end{proof} From {\rm pr}opref{prop:dfaceajout} and \lemref{lem:unecellulepuislautre}, we directly deduce the behavior of the adjunction of a vertex with respect to the differential of the blown-up complex. \begin{equation}gin{corollary}\label{cor:astdiff} Let $\cal L$ be a weighted simplicial complex and let ${\boldsymbol 1}_{(F,\varepsilon)}\in \mathbb{H}iru {\widetilde{N}} *{\cal L;R}$. For each vertex $e_{\alpha}\in \cal V(\cal L)$ and each virtual vertex ${\mathtt v}_{\end{lemma}l}$, one has $ \delta({\boldsymbol 1}_{(F,\varepsilon)}\ast e_{\alpha})= (\delta{\boldsymbol 1}_{(F,\varepsilon)})\ast e_{\alpha} \;\text{ and }\; \delta({\boldsymbol 1}_{(F,\varepsilon)}\ast {\mathtt v}_{\end{lemma}l})= (\delta{\boldsymbol 1}_{(F,\varepsilon)})\ast {\mathtt v}_{\end{lemma}l}. $ \end{corollary} \section{Blown-up intersection cohomology.}\label{sec:TWcohomologie} In this section, we define the blown-up intersection cohomology of a perverse space. \begin{equation}gin{definition}\label{def:filteredsimplex} Let $X$ be a filtered space. 
A \emph{filtered singular simplex} is a continuous map, ${\mathtt{Simp}}gma\colon\mathbb{D}elta\to X$, where the euclidean simplex $\mathbb{D}elta$ is endowed with a decomposition $\mathbb{D}elta=\mathbb{D}elta_{0}\ast\mathbb{D}elta_{1}\ast\dots\ast\mathbb{D}elta_{n}$, called \emph{${\mathtt{Simp}}gma$-decomposition of $\mathbb{D}elta$}, verifying $$ {\mathtt{Simp}}gma^{-1}X_{i} =\mathbb{D}elta_{0}\ast\mathbb{D}elta_{1}\ast\dots\ast\mathbb{D}elta_{i}, $$ for each~$i \in \{0, \dots, n\}$. The filtered singular simplex ${\mathtt{Simp}}gma$ is \emph{regular} if $\mathbb{D}elta_n\ne \emptyset$. To specify that the filtration of the euclidean simplex $ \mathbb{D}elta $ is induced from that of $ X $ by the map $ {\mathtt{Simp}}gma $, we sometimes write $ \mathbb{D}elta = \mathbb{D}elta_{{\mathtt{Simp}}gma} $. This notation is particularly useful when a simplex carries two filtrations associated with two distinct maps. \end{definition} The dimensions of the simplices $ \mathbb{D}elta_i$ of the $ {\mathtt{Simp}}gma $-decomposition measure the non-transver\-sality of the simplex $ {\mathtt{Simp}}gma $ with respect to the strata of $X$. These simplices $ \mathbb{D}elta_i$ may be empty, with the convention $ \emptyset * Y = Y $, for any space $ Y $. Note also that a singular simplex $ {\mathtt{Simp}}gma \colon \mathbb{D}elta \to X $ is filtered if each $ {\mathtt{Simp}}gma^{- 1} (X_i) $, $ i \in \{0, \dots, n \} $, is a face of $ \mathbb{D}elta $. 
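Concretely, the ${\mathtt{Simp}}gma$-decomposition is determined by the filtration level that ${\mathtt{Simp}}gma$ assigns to each vertex of $\mathbb{D}elta$. The following Python sketch (hypothetical helper names of ours, with the convention that an empty simplex has dimension $-1$) recovers the dimensions of the $\mathbb{D}elta_{i}$ and tests regularity:

```python
# Hypothetical helpers: recover the sigma-decomposition
# Delta = Delta_0 * ... * Delta_n of a filtered simplex from the
# filtration level that sigma assigns to each vertex (levels in {0,...,n}).

def decomposition(vertex_levels, n):
    """Return [dim Delta_0, ..., dim Delta_n], with dim(empty) = -1."""
    counts = [0] * (n + 1)
    for lev in vertex_levels:
        counts[lev] += 1
    return [c - 1 for c in counts]

def is_regular(vertex_levels, n):
    # sigma is regular exactly when Delta_n is non-empty
    return decomposition(vertex_levels, n)[-1] >= 0

# A 3-simplex mapped to a space filtered in three levels (n = 2): two
# vertices land in X_0, none in X_1 \ X_0, two in the regular part.
dims = decomposition([0, 0, 2, 2], 2)
assert dims == [1, -1, 1]           # Delta_1 is empty
assert is_regular([0, 0, 2, 2], 2)
assert not is_regular([0, 0, 1, 1], 2)
```

The last assertion exhibits a filtered simplex that is not regular: all its vertices are sent into the singular part $X_{1}$.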
To any regular simplex, ${\mathtt{Simp}}gma\colon \mathbb{D}elta=\mathbb{D}elta_0\ast\dots\ast \mathbb{D}elta_n\to X$, we associate the cochain complex defined by $$\tres {\widetilde{N}}*{\mathtt{Simp}}gma={\widetilde{N}}^*(\mathbb{D}elta)= \mathbb{H}iru N *{{\mathtt c}\mathbb{D}elta_0}\otimes\dots\otimes \mathbb{H}iru N *{{\mathtt c}\mathbb{D}elta_{n-1}} \otimes \mathbb{H}iru N *{\mathbb{D}elta_n}.$$ If $\delta_{\end{lemma}l}\colon \mathbb{D}elta'=\mathbb{D}elta'_{0}\ast\dots\ast\mathbb{D}elta'_\end{lemma}l\ast\dots\ast\mathbb{D}elta'_{n} \to \mathbb{D}elta=\mathbb{D}elta_{0}\ast\dots\ast\mathbb{D}elta_\end{lemma}l\ast\dots\ast\mathbb{D}elta_{n}$ is a face operator, we denote by $\partial_{\end{lemma}l}{\mathtt{Simp}}gma$ the filtered simplex defined by $\partial_{\end{lemma}l}{\mathtt{Simp}}gma={\mathtt{Simp}}gma\circ\delta_{\end{lemma}l}\colon \mathbb{D}elta'\to X$. The face operator $\delta_{\end{lemma}l}$ is regular if $\mathbb{D}elta'_n\ne \emptyset$. \begin{equation}gin{definition}\label{def:thomwhitney} Let $X$ be a filtered space. The \emph{blown-up complex of $X$ with coefficients in $R$,} $\mathbb{H}iru {\widetilde{N}}*{X;R}$, is the cochain complex formed by the elements $ \omega $ associating to any regular filtered simplex ${\mathtt{Simp}}gma\colon \mathbb{D}elta_{0}\ast\dots\ast\mathbb{D}elta_{n}\to X$ an element $\omega_{{\mathtt{Simp}}gma}\in {\widetilde{N}}^*_{{\mathtt{Simp}}gma}$, such that $\delta_{\end{lemma}l}^*(\omega_{{\mathtt{Simp}}gma})=\omega_{\partial_{\end{lemma}l}{\mathtt{Simp}}gma}$ for any regular face operator $\delta_{\end{lemma}l}$. The differential $\delta\omega$ of $\omega\in\mathbb{H}iru {\widetilde{N}} *{X;R}$ is defined by $(\delta\omega)_{{\mathtt{Simp}}gma}=\delta(\omega_{{\mathtt{Simp}}gma})$ for any regular filtered simplex ${\mathtt{Simp}}gma$. 
\end{definition} For any $\end{lemma}l \in \{1,\dots,n\}$, the element $\omega_{{\mathtt{Simp}}gma}\in {\widetilde{N}}^*_{{\mathtt{Simp}}gma}$ is endowed with the \emph{perverse degree} $\|\omega_{{\mathtt{Simp}}gma}\|_{\end{lemma}l}$, introduced in \defref{def:degrepervers}. We extend this degree to the elements of $ \mathbb{H}iru {\widetilde{N}} *{X;R}$ as follows. \begin{equation}gin{definition}\label{def:transversedegreeblowup} Let $X$ be a filtered space and $\omega\in\mathbb{H}iru{\widetilde{N}} *{X;R}$. The \emph{perverse degree of $\omega$ along a singular stratum,} $S$, is equal to \begin{equation}gin{equation*}\label{equa:perversstrate} \|\omega\|_{S}=\sup\left\{\|\omega_{{\mathtt{Simp}}gma}\|_{{\rm codim\,} S}\mid {\mathtt{Simp}}gma\colon\mathbb{D}elta\to X \text{ regular with }{\mathtt{Simp}}gma(\mathbb{D}elta)\cap S\neq \emptyset\right\}. \end{equation*} We denote by $\|\omega\|$ the map associating to any singular stratum $ S $ of $ X $ the element $\|\omega\|_{S}$ and 0 to a regular stratum. \end{definition} Notice that, by definition, face operators $\delta_{\end{lemma}l}^*$ decrease the perverse degree. \begin{equation}gin{definition}\label{def:admissible} Let $(X,\overline{p})$ be a perverse space. A cochain $\omega\in \mathbb{H}iru {\widetilde{N}} *{X;R}$ is \emph{$\overline{p}$-allowable} if \begin{equation}gin{equation}\label{equa:legraal} \|\omega\|\leq \overline{p}. \end{equation} A cochain $\omega$ is a \emph{$\overline{p}$-intersection cochain} if $\omega$ and its coboundary, $\delta \omega$, are $\overline{p}$-allowable. We denote by $\lau{\widetilde{N}}*{\overline{p}}{X;R}$ the complex of $\overline{p}$-intersection cochains and $\lau \mathscr H {\overline{p}} *{X;R}$ its homology, called \emph{blown-up intersection cohomology} of $X$ with coefficients in~$R$, for the perversity $\overline{p}$. 
\end{definition} {\rm Sub}section{Shifted filtrations}\label{SF1} Blown-up intersection cohomology does not depend on the dimension of the strata of $X$ but on the codimension of these strata (see \cite[Section 4.3.1]{LibroGreg}). Let us consider on $X$ the shifted filtration $Y$, where $m \in \mathbb{N}^*$: \begin{equation}gin{align}\label{shif} \emptyset = Y_0 = \cdots = Y_{m-1} {\rm Sub}set Y_m =X_0 {\rm Sub}set \dots {\rm Sub}set Y_{n+m-1} =\\ \nonumber X_{n-1} {\rm Sub}set Y_{n+m} =X_n =X. \end{align} So, the formal dimension of $Y$ is $n+m$. For example, on $\mathbb{R}^m \widetildemes X$ we have two natural filtrations, the second one shifted: $(\mathbb{R}^m \widetildemes X)_i = \mathbb{R}^m \widetildemes X_i$, with $i\in \{0,\ldots,n\}$, and $(\mathbb{R}^m \widetildemes X)_i = \mathbb{R}^m \widetildemes X_{i-m}$, with $i\in \{0,\ldots,n+m\}$. The family of regular simplices is the same for both filtrations. The perverse degree of a filtered simplex ${\mathtt{Simp}}gma\colon \mathbb{D}elta \to X=Y$ is the same for both filtrations. Let us verify this. If $S$ is a singular stratum of $X$ (and therefore of $Y$) with ${\rm Im\,} {\mathtt{Simp}}gma \cap S \ne \emptyset$, we have ${\rm codim\,}_XS ={\rm codim\,}_Y S $. Let $\end{lemma}l$ be this codimension. On the other hand, if $\mathbb{D}elta_{\mathtt{Simp}}gma^X = \mathbb{D}elta_0* \cdots * \mathbb{D}elta_n$ is the filtration induced by ${\mathtt{Simp}}gma \colon \mathbb{D}elta \to X$ then $\mathbb{D}elta_{\mathtt{Simp}}gma^Y = \underbrace{\emptyset *\dots *\emptyset}_{m \text{ times}}*\mathbb{D}elta'_{m}* \cdots * \mathbb{D}elta'_{n+m}$, with $\mathbb{D}elta'_{m+r} = \mathbb{D}elta_r$, is the filtration induced by ${\mathtt{Simp}}gma \colon \mathbb{D}elta \to Y$. 
So, if $(F=F_0*\cdots *F_n,\varepsilon=(\varepsilon_0,\dots,\varepsilon_n))$ is a face of the blow-up of $\mathbb{D}elta_{\mathtt{Simp}}gma^X$ then $(F'=\underbrace{\emptyset *\dots *\emptyset}_{m \text{ times}}*F_0*\cdots *F_{n},\varepsilon'=(\underbrace{1, \dots,1}_{m \text{ times}},\varepsilon_0,\dots,\varepsilon_{n}))$ is the corresponding face of the blow-up of $\mathbb{D}elta_{\mathtt{Simp}}gma^Y$. We have, for $\end{lemma}l \in \{1,\dots,n\}$, \begin{equation}gin{eqnarray*} \|{\boldsymbol 1}_{(F,\varepsilon)}\|_\end{lemma}l^X &= & \left\{ \begin{equation}gin{array}{ll} -\infty & \hbox{if } \varepsilon_{n-\end{lemma}l} =1\\ |(F,\varepsilon)|_{>n-\end{lemma}l} & \hbox{if } \varepsilon_{n-\end{lemma}l} =0 \end{array} \right. = \left\{ \begin{equation}gin{array}{ll} -\infty & \hbox{if } \varepsilon'_{m+n-\end{lemma}l} =1\\ |(F',\varepsilon')|_{>m+n-\end{lemma}l} & \hbox{if } \varepsilon'_{m+n-\end{lemma}l} =0 \end{array} \right.\\ &=& \|{\boldsymbol 1}_{(F',\varepsilon')}\|_{\end{lemma}l}^Y, \end{eqnarray*} and $ \|{\boldsymbol 1}_{(F',\varepsilon')}\|_\end{lemma}l^Y = -\infty \ \ \hbox{ if } \end{lemma}l \in \{n+1, \ldots, m+n\}. $ We get $\lau {\widetilde{N}} *{\overline p} {X;R} = \lau {\widetilde{N}} * {\overline p} {Y;R}$ and therefore $\lau \mathscr H* {\overline p} {X;R} = \lau \mathscr H * {\overline p} {Y;R}$. We prove in \thmref{MorCoho} that any stratified map induces a morphism in blown-up intersection cohomology. The following proposition, which is a weaker version of this result, is used in \partref{part:mayervietoris}. \begin{equation}gin{proposition}\label{prop:applistratifieeforte} Let $f \colon X \to Y$ be a stratum preserving stratified map. The \emph{induced map} $f^* \colon \mathbb{H}iru {\widetilde{N}} * {Y;R} \to \mathbb{H}iru {\widetilde{N}} * {X;R} $, defined by $(f^*(\omega))_{{\mathtt{Simp}}gma}=\omega_{f\circ{\mathtt{Simp}}gma}$, is a well defined chain map. 
Consider a perversity $\overline p$ on $X$ and a perversity $\overline q$ on $Y$ verifying $\overline p \geq f^*\overline q$. The induced operator $f ^*\colon \lau {\widetilde{N}} * {\overline q} {Y;R} \to \lau {\widetilde{N}} * {\overline p} {X;R}$ is a well defined chain map inducing the morphism $f ^*\colon \lau \mathscr H * {\overline q} {Y;R} \to \lau \mathscr H* {\overline p} {X;R}$. \end{proposition} \begin{equation}gin{proof} The definition makes sense since $(\delta_{\end{lemma}l}^*f^*(\omega))_{{\mathtt{Simp}}gma} = \delta_{\end{lemma}l}^*(f^*(\omega)_{{\mathtt{Simp}}gma}) = \delta_{\end{lemma}l}^*(\omega_{f\circ{\mathtt{Simp}}gma})$ $ = \omega_{f\circ{\mathtt{Simp}}gma\circ\delta_{\end{lemma}l}} = (f^*(\omega))_{{\mathtt{Simp}}gma\circ\delta_{\end{lemma}l}}$, for each $\omega \in \mathbb{H}iru {\widetilde{N}} * {Y;R}$, each regular simplex ${\mathtt{Simp}}gma \colon \mathbb{D}elta \to X$ and each regular face $\delta_\end{lemma}l \colon \nabla \to \mathbb{D}elta$. The induced morphism is a chain map since $(\delta f^*(\omega))_{{\mathtt{Simp}}gma} = \delta (f^*(\omega)_{{\mathtt{Simp}}gma}) = \delta(\omega_{f\circ{\mathtt{Simp}}gma}) = (\delta\omega)_{f\circ{\mathtt{Simp}}gma} = (f^*(\delta\omega))_{{\mathtt{Simp}}gma}$, for each $\omega \in \mathbb{H}iru {\widetilde{N}} * {Y;R}$ and each regular simplex ${\mathtt{Simp}}gma \colon \mathbb{D}elta_{\mathtt{Simp}}gma \to X$. We now turn to perversities, and first to the filtrations. The euclidean simplex $ \mathbb{D}elta $ has two filtrations, respectively induced by $ {\mathtt{Simp}}gma $ and $ f \circ {\mathtt{Simp}}gma $, denoted by $ \mathbb{D}elta_{{\mathtt{Simp}}gma} $ and $ \mathbb{D}elta_{f \circ {\mathtt{Simp}}gma} $, following the conventions of \defref{def:filteredsimplex}. 
For each $\end{lemma}l \in\{0,\dots,n\}$, the hypothesis $f^{-1}(Y_{n-\end{lemma}l})=X_{n-\end{lemma}l}$ implies the equalities ${\mathtt{Simp}}gma^{-1}(X_{n-\end{lemma}l })={\mathtt{Simp}}gma^{-1}f^{-1}(Y_{n-\end{lemma}l })=(f\circ{\mathtt{Simp}}gma)^{-1}(Y_{n-\end{lemma}l })$ and $\mathbb{D}elta_{{\mathtt{Simp}}gma}=\mathbb{D}elta_{f\circ{\mathtt{Simp}}gma}$. Let $ S $ be an $\end{lemma}l$-codimensional singular stratum of $ X $ such that $ S \cap {\rm Im\,} {\mathtt{Simp}}gma \neq \emptyset$. The stratum $S^f$ of $ Y $, characterized by $ f (S) {\rm Sub}set S^f $, also has codimension $ \end{lemma}l $ and verifies $ S^f \cap {\rm Im\,} (f \circ {\mathtt{Simp}}gma) \neq \emptyset$. Moreover, the definition of $ f^* $ and the previous observation on the two filtrations of $ \mathbb{D}elta $ give $ \| f^* (\omega)_{{\mathtt{Simp}}gma} \|_{\end{lemma}l} = \| \omega_ {f \circ {\mathtt{Simp}}gma} \|_{\end{lemma}l} $. From \defref{def:transversedegreeblowup}, we deduce $\|f^*(\omega)\|_{S}\leq \|\omega\|_{S^f}\leq \overline{q}(S^f)$. The assumption made on the perversities $ \overline {p} $ and $ \overline q $ allows us to conclude $\|f^*(\omega)\|_{S}\leq \overline{p}(S)$. The same argument applied to the $ \overline q$-allowable cochain $ \delta \omega $ gives $\|f^*(\delta\omega)\|_{S}\leq \overline{p}(S)$. The compatibility of $ f ^ * $ with the differential $ \delta $ implies $f^*(\omega)\in \lau {\widetilde{N}}*{\overline{p}}{X;R}$. \end{proof} \begin{equation}gin{remark}\label{rem:fortementstratifieetidentite} If $f$ is a stratum preserving stratified map, for any filtered simplex ${\mathtt{Simp}}gma\colon\mathbb{D}elta\to X$, the ${\mathtt{Simp}}gma$- and $f\circ{\mathtt{Simp}}gma$-decompositions of $\mathbb{D}elta$ are the same: $\mathbb{D}elta_{{\mathtt{Simp}}gma}=\mathbb{D}elta_{f\circ {\mathtt{Simp}}gma}$. This is not the case for a general stratified map (see \cite[A.25]{CST1}). 
\end{remark} \section{Cup product.}\label{subsec:cupTW} We have defined the notion of cup product in \cite{CST1} for ``filtered face sets''. In \cite{CST2}, we also introduced the notion of $\cup_i$-products and Steenrod squares on the blown-up intersection cohomology. We give below a definition of the cup product for any coefficient ring, in our context of stratum-dependent perversities. \begin{equation}gin{definition}\label{def:cupsurDelta} Two ordered simplices, $F=[a_{0},\dots,a_{k}]$ and $G=[b_{0},\dots,b_{\end{lemma}l}]$, of a euclidean simplex $\mathbb{D}elta$, are said to be \emph{compatible} if $a_{k}=b_{0}$. In this case, we set $F\cup G=[a_{0},\dots,a_{k},b_{1},\dots,b_{\end{lemma}l}]\in N_{*}(\mathbb{D}elta)$. The \emph{cup product} is defined on the dual basis of $\mathbb{H}iru N*{\mathbb{D}elta}$ by $${\boldsymbol 1}_{F}\cup {\boldsymbol 1}_{G}=(-1)^{k \cdot \end{lemma}l} \ {\boldsymbol 1}_{F\cup G}$$ if $F$ and $G$ are compatible and 0 if not. \end{definition} In the case of the cone ${\mathtt c}\mathbb{D}elta$, this law appears on $\mathbb{H}iru N* {{\mathtt c}\mathbb{D}elta}$ as: \begin{equation}gin{equation}\label{equa:cupbase} {\boldsymbol 1}_{(F,\varepsilon)}\cup {\boldsymbol 1}_{(G,\kappa)} =\left\{\begin{equation}gin{array}{cl} (-1)^{|F| \cdot |(G,\kappa)|} \ {\boldsymbol 1}_{(F\cup G,\kappa)} & \text{if}\; F, G\;\text{compatible}\;\text{and}\;\varepsilon=0,\\ {\boldsymbol 1}_{(F,\varepsilon)} & \text{if}\; (G,\kappa) = (\emptyset,1) \hbox{ and } \varepsilon=1,\\ 0 & \text{if not.} \end{array}\right. \end{equation} If $\mathbb{D}elta=\mathbb{D}elta_{0}\ast\dots\ast\mathbb{D}elta_{n}$ is a regular euclidean simplex, this definition extends to $\mathbb{H}iru {\widetilde{N}} *{\mathbb{D}elta}$ as a law defined on a tensor product. 
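Before passing to the tensor-product extension, the simplexwise rule of \defref{def:cupsurDelta} can be sketched in code. This is an illustrative model of ours (not part of the paper): an ordered simplex is a tuple of vertex labels, and the sign is $(-1)^{k \cdot \end{lemma}l}$ with $k=\dim F$ and $\end{lemma}l=\dim G$.

```python
# Sketch of the simplexwise cup product on the dual basis of N^*(Delta).
# An ordered simplex F = [a_0,...,a_k] is modelled as a tuple of labels.

def cup(F, G):
    """Return (sign, F cup G) when F and G are compatible, else None."""
    if F[-1] != G[0]:
        return None                      # compatibility means a_k = b_0
    k, l = len(F) - 1, len(G) - 1        # k = dim F, l = dim G
    return (-1) ** (k * l), F + G[1:]    # sign (-1)^{k l}

assert cup((0, 1), (1, 2)) == (-1, (0, 1, 2))   # two edges glue to a 2-face
assert cup((0,), (0, 1, 2)) == (1, (0, 1, 2))   # a vertex acts with sign +1
assert cup((0, 1), (2, 3)) is None              # not compatible
```

Note that the dimension of $F\cup G$ is $k+\end{lemma}l$, matching the cochain degree of the product.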
More precisely, if $\omega_{0}\otimes\dots\otimes\omega_{n}$ and $\end{theorem}a_{0}\otimes\dots\otimes\end{theorem}a_{n}$ belong to $\mathbb{H}iru N* {{\mathtt c}\mathbb{D}elta_{0}} \otimes\dots\otimes \mathbb{H}iru N*{\mathbb{D}elta_{n}}$, we set \begin{equation}gin{equation}\label{equa:cupsimplexfiltre} (\omega_{0}\otimes\dots\otimes\omega_{n})\cup (\end{theorem}a_{0}\otimes\dots\otimes\end{theorem}a_{n})= (-1)^{\sum_{i>j}|\omega_{i}|\,|\end{theorem}a_{j}|} (\omega_{0}\cup\end{theorem}a_{0})\otimes\dots\otimes (\omega_{n}\cup\end{theorem}a_{n}). \end{equation} Given two perversities $\overline p,\overline q$ defined on a filtered space $X$, we define the perversity $\overline p + \overline q$ by $(\overline p + \overline q )(S) = \overline p(S) + \overline q(S)$, with the convention $-\infty + \overline p(S) =\overline p(S) -\infty = -\infty$. \begin{equation}gin{proposition}\label{42} For each filtered space $X$ endowed with two perversities $\overline{p}$ and $\overline{q}$, there exists an associative multiplication, \begin{equation}gin{equation*}\label{equa:cupprduitespacefiltre} -\cup -\colon \lau {\widetilde{N}} k {\overline{p}}{X;R}\otimes \lau {\widetilde{N}} {\end{lemma}l}{\overline{q}}{X;R}\to \lau {\widetilde{N}} {k+\end{lemma}l} {\overline{p}+\overline{q}}{X;R}, \end{equation*} defined by $(\omega\cup\end{theorem}a)_{{\mathtt{Simp}}gma}=\omega_{{\mathtt{Simp}}gma}\cup \end{theorem}a_{{\mathtt{Simp}}gma}$, for each pair of cochains $(\omega,\end{theorem}a)\in \lau {\widetilde{N}} k {\overline{p}}{X;R}\widetildemes \lau {\widetilde{N}} \end{lemma}l{\overline{q}}{X;R}$ and each regular simplex ${\mathtt{Simp}}gma\colon\mathbb{D}elta\to X$. 
It induces a graded commutative multiplication with unity called \emph{intersection cup product}, \begin{equation}gin{equation*}\label{equa:cupprduitTWcohomologie} -\cup -\colon \lau \mathscr H k {\overline{p}}{X;R}\otimes \lau \mathscr H {\end{lemma}l} {\overline{q}}{X;R}\to \lau \mathscr H{k+\end{lemma}l}{\overline{p}+\overline{q}}{X;R}. \end{equation*} \end{proposition} \begin{equation}gin{proof} We begin by verifying that the product locally defined on each regular simplex extends to the blown-up complex. For this, consider two cochains $ \omega ,\end{theorem}a \in \mathbb{H}iru {\widetilde{N}}*{X;R}$ and $ \delta_{\end{lemma}l} \colon \nabla \to \mathbb{D}elta $ a regular face of a simplex $ {\mathtt{Simp}}gma \colon \mathbb{D}elta \to X $. The map induced by $ \delta_{\end{lemma}l}$ at the cochain level, $\delta_{\end{lemma}l}^*\colon \mathbb{H}iru {\widetilde{N}}*{\mathbb{D}elta;R}\to \mathbb{H}iru {\widetilde{N}}*{\nabla;R}$, is the identity on each factor of the tensor product, except for one where it is induced by a canonical inclusion. It is compatible with the cup product and we can write $ \delta_{\end{lemma}l}^*(\omega\cup \end{theorem}a)_{{\mathtt{Simp}}gma} = \delta_{\end{lemma}l}^*(\omega_{{\mathtt{Simp}}gma}\cup\end{theorem}a_{{\mathtt{Simp}}gma}) =_{(1)} \delta_{\end{lemma}l}^*(\omega_{{\mathtt{Simp}}gma})\cup \delta_{\end{lemma}l}^*(\end{theorem}a_{{\mathtt{Simp}}gma}) = \omega_{{\mathtt{Simp}}gma\circ\delta_{\end{lemma}l}}\cup \end{theorem}a_{{\mathtt{Simp}}gma\circ\delta_{\end{lemma}l}} = (\omega\cup\end{theorem}a)_{{\mathtt{Simp}}gma\circ\delta_{\end{lemma}l}}, $ where $=_{(1)}$ comes from the naturality of the usual cup product. Then the product $\omega\cup\end{theorem}a$ of the statement is well defined. Let us now study the behavior of the cup product with respect to the perverse degree. This is a local issue. Let $\omega \in \lau {\widetilde{N}} * {\overline p}{X;R}$ and $\end{theorem}a \in \lau {\widetilde{N}} * {\overline q} {X;R}$. 
We need to prove that \begin{equation}gin{equation*}\label{equa:cupperversite} \|\omega_{\mathtt{Simp}}gma \cup \end{theorem}a_{\mathtt{Simp}}gma \|_\end{lemma}l\leq (\overline p + \overline q )(S), \end{equation*} where $S$ is a singular stratum of $X$, $\end{lemma}l = {\rm codim\,} S$, and ${\mathtt{Simp}}gma \colon \mathbb{D}elta=\mathbb{D}elta_{0}\ast\dots\ast\mathbb{D}elta_{n} \to X$ is a regular simplex with ${\rm Im\,} {\mathtt{Simp}}gma \cap S \ne \emptyset$. Without loss of generality, we can suppose $\omega_{\mathtt{Simp}}gma={\boldsymbol 1}_{(F,\varepsilon)}$ and $\end{theorem}a_{\mathtt{Simp}}gma={\boldsymbol 1}_{(G,\kappa)}$. The result is clear if $\|\omega_{\mathtt{Simp}}gma\cup \end{theorem}a_{\mathtt{Simp}}gma\|_{\end{lemma}l} =-\infty$. So, we can suppose that $\|\omega_{\mathtt{Simp}}gma\cup \end{theorem}a_{\mathtt{Simp}}gma\|_{\end{lemma}l}\ne -\infty$. If $\overline p (S) =-\infty$, the condition $\|\omega_{\mathtt{Simp}}gma \|_\end{lemma}l \leq -\infty$ implies $\varepsilon_{n-\end{lemma}l}=1$ and therefore $\|\omega_{\mathtt{Simp}}gma\cup \end{theorem}a_{\mathtt{Simp}}gma\|_\end{lemma}l = -\infty$. So, we can suppose that $\overline p (S) >-\infty$ and similarly $\overline q (S) >-\infty$, which implies $(\overline p+ \overline q)(S) = \overline p (S) + \overline q(S)$. Using (\ref{equa:cupbase}) and \defref{def:degrepervers}, we see that the condition $\|\omega_{\mathtt{Simp}}gma\cup \end{theorem}a_{\mathtt{Simp}}gma\|_{\end{lemma}l}\neq-\infty$ gives $\varepsilon_{n-\end{lemma}l}=\kappa_{n-\end{lemma}l}=0$. 
Since the usual cup product adds degrees on each factor, we get $\|\omega_{\mathtt{Simp}}gma\cup \end{theorem}a_{\mathtt{Simp}}gma\|_{\end{lemma}l}\leq \|\omega_{\mathtt{Simp}}gma\|_\end{lemma}l + \|\end{theorem}a_{\mathtt{Simp}}gma\|_{\end{lemma}l} \leq \overline p(S) + \overline q (S) = (\overline p+ \overline q)(S).$ Thus, the cup product of a $\overline{p}$-allowable cochain and a $\overline{q}$-allowable cochain is $(\overline{p}+\overline{q})$-allowable. For the intersection cochains, the result follows from the formula $\delta(\omega \cup \end{theorem}a) = \delta(\omega)\cup \end{theorem}a+(-1)^{|\omega|} \omega\cup\delta(\end{theorem}a)$. Associativity is deduced from the associativity of the usual cup product. The unity element is the 0-cochain taking the value 1 on any face. Let us verify commutativity. We have constructed the product \\ $-\cup_1 -\colon \lau {\widetilde{N}} k {\overline{p}}{X;\mathbb{Z}_2}\otimes \lau {\widetilde{N}} {\end{lemma}l} {\overline{q}}{X;\mathbb{Z}_2}\to \lau {\widetilde{N}} {k+\end{lemma}l-1} {\overline{p}+\overline{q}}{X;\mathbb{Z}_2} $ verifying the Leibniz condition $\delta (x_1 \cup_1 x_2)=x_1 \cup x_2 +x_2 \cup x_1 +\delta x_1 \cup_1 x_2 +x_1 \cup_1 \delta x_2$ (see \cite{CST2}). For general coefficients, and following the same procedure, we construct the product $-\cup_1 -\colon \lau {\widetilde{N}} k{\overline{p}}{X;R} \otimes \lau {\widetilde{N}} {\end{lemma}l} {\overline{q}} {X;R}\to \lau {\widetilde{N}} {k+\end{lemma}l-1} {\overline{p}+\overline{q}}{X;R}, $ taking into account the signs (see for example \cite[p. 414]{MR1700700}). So, for each $\omega \in \lau {\widetilde{N}} p {\overline{p}}{X;R}$ and $\end{theorem}a \in \lau {\widetilde{N}} q {\overline{q}}{X;R}$ we have $$ \delta (\omega \cup_1\end{theorem}a ) = (-1)^{p+q-1}\omega \cup \end{theorem}a + (-1)^{p+q+p q}\end{theorem}a \cup \omega + \delta \omega \cup_1 \end{theorem}a +(-1)^p \omega \cup_1 \delta \end{theorem}a. 
$$ So, if $\delta \omega =\delta \end{theorem}a =0$, we get the commutativity $ [\omega] \cup [\end{theorem}a] = (-1)^{p q} [\end{theorem}a] \cup [\omega]. $ \end{proof} See \cite{MR2607414, MR3046315} for other cup products. \begin{equation}gin{remark}\label{43} Given perversities $\overline a \leq \overline p$ and $\overline b \leq \overline q$, the cup product $ - \cup - \colon \lau {\widetilde{N}} * {\overline{a}}{X;R}\otimes \lau {\widetilde{N}} * {\overline{b}} {X;R} \to \lau {\widetilde{N}} * {\overline{a}+\overline{b}} {X;R} $ is the restriction of $ - \cup - \colon \lau {\widetilde{N}} * {\overline{p}} {X;R}\otimes \lau {\widetilde{N}} * {\overline{q}} {X;R} \to \lau {\widetilde{N}} * {\overline{p}+\overline{q}} {X;R}. $ \end{remark} \section{Intersection and tame intersection (co)homology.} \label{IC} For a perversity $\overline p$ such that $\overline p \not \leq \overline t$, we may have $\overline p$-allowable simplices that are not regular. This failure has bad consequences; for instance, Poincaré duality may not be satisfied in this case (see \cite{CST4}). To overcome this problem, the tame intersection homology $\lau {\mathfrak{H}} {\overline p} * {X;R}$ has been introduced in \cite{MR2210257}. In this work, we use the simpler presentation of G. Friedman (see \cite{MR2209151,LibroGreg,CST3}). We begin by recalling the notions of intersection homology. As proved in \cite[Proposition A29]{CST1}, it can be computed using filtered chains. \begin{definition} Consider a perverse space $(X,\overline p)$ and a filtered simplex ${\mathtt{Simp}}gma\colon\mathbb{D}elta=\mathbb{D}elta_{0}\ast \cdots\ast\mathbb{D}elta_{n} \to X$.
\begin{equation}gin{enumerate}[{\rm (i)}] \item The \emph{perverse degree of } ${\mathtt{Simp}}gma$ is the $(n+1)$-tuple, $\|{\mathtt{Simp}}gma\|=(\|{\mathtt{Simp}}gma\|_0,\ldots,\|{\mathtt{Simp}}gma\|_n)$, where $\|{\mathtt{Simp}}gma\|_{i}=\dim {\mathtt{Simp}}gma^{-1}(X_{n-i})=\dim (\mathbb{D}elta_{0}\ast\cdots\ast\mathbb{D}elta_{n-i})$, with the convention $\dim \emptyset=-\infty$. \item Given a stratum $S$ of $X$, the \emph{perverse degree of ${\mathtt{Simp}}gma$ along $S$} is defined by \ $$\|{\mathtt{Simp}}gma\|_{S}=\left\{ \begin{equation}gin{array}{cl} -\infty,&\text{if } S\cap {\rm Im\,} {\mathtt{Simp}}gma=\emptyset,\\ \|{\mathtt{Simp}}gma\|_{{\rm codim\,} S}&\text{if } S\cap {\rm Im\,} {\mathtt{Simp}}gma\ne\emptyset.\\ \end{array}\right.$$ \item The filtered singular simplex ${\mathtt{Simp}}gma$ is \emph{$\overline{p}$-allowable} if $ \|{\mathtt{Simp}}gma\|_{S}\leq \dim \mathbb{D}elta-{\rm codim\,} S+\overline{p}(S), $ for any stratum $S$. \item A chain $c$ is \emph{$\overline{p}$-allowable} if it is a linear combination of $\overline{p}$-allowable simplices. The chain $c$ is a \emph{$\overline{p}$-intersection chain} if $c$ and $\partial c$ are $\overline{p}$-allowable chains. \end{enumerate} \end{definition} \begin{equation}gin{definition}\label{def:chaineintersection} Consider a perverse space $(X,\overline p)$. We denote by $\hiru C {*}{X;R}$ the complex of filtered chains of $X$, generated by filtered simplices. The dual complex is $\lau C {*} {}{X;R}=\hom(\lau C {} *{X;R},R)$. The complex of $\overline{p}$-intersection chains of $X$ with the differential $\partial$ is denoted by $\lau C{\overline{p}}* {X;R} $. Its homology $\lau H {\overline{p}} * {X;R}$ is the \emph{$\overline{p}$-intersection homology} of $X$ (see \cite[Théorème A]{CST3}). The dual complex $\lau C {*} {\overline{p}}{X;R}=\hom(\lau C {\overline{p}} *{X;R},R)$ computes the \emph{$\overline p$-intersection cohomology} $\lau H {*} {\overline{p}}{X;R}$ (see \cite{LibroGreg}). 
\end{definition} \begin{definition} Given a regular simplex $\mathbb{D}elta = \mathbb{D}elta_0 * \cdots *\mathbb{D}elta_n$, we denote by ${\mathfrak{d}} \mathbb{D}elta$ the regular part of the chain $\partial \mathbb{D}elta$. That is, ${\mathfrak{d}} \mathbb{D}elta =\partial (\mathbb{D}elta_0 * \cdots * \mathbb{D}elta_{n-1})* \mathbb{D}elta_n$, if $|\mathbb{D}elta_n| = 0 $, or ${\mathfrak{d}} \mathbb{D}elta = \partial \mathbb{D}elta$, if $|\mathbb{D}elta_n|\geq 1$. \end{definition}\begin{definition} \label{defgd} Let $(X,\overline p)$ be a perverse space. Given a regular simplex ${\mathtt{Simp}}gma \colon\mathbb{D}elta \to X$, we define the chain ${\mathfrak{d}} {\mathtt{Simp}}gma$ by ${\mathfrak{d}} {\mathtt{Simp}}gma = {\mathtt{Simp}}gma_{*}({\mathfrak{d}} \mathbb{D}elta)$. Notice that ${\mathfrak{d}}^2=0$. We denote by $\hiru {\mathfrak{C}} * {X;R}$ the chain complex generated by the regular simplices, endowed with the differential ${\mathfrak{d}}$. \end{definition} \begin{definition}\label{tameNormHom} Let $(X,\overline p)$ be a perverse space. A $\overline p$-allowable filtered simplex ${\mathtt{Simp}}gma \colon \mathbb{D}elta \to X$ is \emph{$\overline{p}$-tame} if ${\mathtt{Simp}}gma$ is also a regular simplex. A chain $\xi$ is \emph{$\overline{p}$-tame} if it is a linear combination of $\overline{p}$-tame simplices. The chain $\xi$ is a \emph{tame $\overline{p}$-intersection chain} if $\xi$ and ${\mathfrak{d}} \xi$ are $\overline{p}$-tame chains. We write $\lau {\mathfrak{C}} {\overline{p}} * {X;R} {\rm Sub}set \hiru {\mathfrak{C}} * {X;R}$ for the complex of tame $\overline{p}$-intersection chains endowed with the differential ${\mathfrak{d}} $. Its homology $\lau {\mathfrak{H}} {\overline{p}}{*} {X;R}$ is the \emph{tame $\overline{p}$-intersection homology} of $X$. \end{definition} Main properties of this homology have been studied in \cite{CST3,LibroGreg}.
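To fix ideas, here is an elementary computation with the operator ${\mathfrak{d}}$; the simplex and its filtration are chosen for illustration only.

\begin{equation}gin{example} % The simplex and filtration below are an illustrative choice.
Let $\mathbb{D}elta=\mathbb{D}elta_{0}\ast\mathbb{D}elta_{1}$ be the regular $2$-simplex with $\mathbb{D}elta_{0}=[a,b]$ and $\mathbb{D}elta_{1}=[c]$. Since $|\mathbb{D}elta_{1}|=0$, we get $$ {\mathfrak{d}} \mathbb{D}elta =(\partial \mathbb{D}elta_{0})\ast \mathbb{D}elta_{1}=[b,c]-[a,c]. $$ The remaining face of $\partial \mathbb{D}elta=[b,c]-[a,c]+[a,b]$ is $[a,b]=\mathbb{D}elta_{0}$, whose $\mathbb{D}elta_{1}$-part is empty: it is precisely the non regular face discarded by ${\mathfrak{d}}$. \end{example}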
We have proven in \cite{CST3} that the homology $\lau {\mathfrak{H}} {\overline{p}} * {X;R}$ coincides with those of \cite{MR2210257,MR2276609} (see also \cite[Chapter 6]{LibroGreg}). It is also proved that $ \lau {\mathfrak{H}} {\overline{p}} * {X;R} = \lau H {\overline{p}} * {X;R}$ if $\overline{p}\leq\overline{t}$. The \emph{tame $\overline{p}$-intersection cohomology} $\lau {\mathfrak{H}} *{\overline p} {X;R}$ is defined by using the dual complex $\lau {\mathfrak{C}} * {\overline p} {X;R} = \mathbb{H}om (\lau {\mathfrak{C}} {\overline p} * {X;R},R)$. It verifies $ \lau {\mathfrak{H}} * {\overline{p}} {X;R} = \lau H * {\overline{p}} {X;R}$ if $\overline{p}\leq\overline{t}$. {\rm Sub}section{Shifted filtrations}\label{SF} Intersection homology does not depend on the dimension of the strata of $X$ but on the codimension of these strata (see \cite[Section 4.3.1]{LibroGreg}). Let us consider on $X$ the shifted filtration $Y$ of \eqref{shif} in subsection \ref{SF1}. So, the formal dimension of $Y$ is $n+m$. Following subsection \ref{SF1} we have: $$ \|{\mathtt{Simp}}gma\|_\end{lemma}l^{^X} = \dim {\mathtt{Simp}}gma^{-1}(X_{n-\end{lemma}l}) = \dim {\mathtt{Simp}}gma^{-1}(Y_{n+m-\end{lemma}l})= \|{\mathtt{Simp}}gma\|_{\end{lemma}l}^{^Y}. $$ The allowability condition is the same. The perversity $\overline p$ is also a perversity on $Y$. If $S$ is a stratum of $X$ (and therefore of $Y$) with ${\rm Im\,} {\mathtt{Simp}}gma \cap S \ne \emptyset$ we have $\end{lemma}l = {\rm codim\,}_XS ={\rm codim\,}_Y S $. So, $$ \|{\mathtt{Simp}}gma\|_\end{lemma}l^{^X} \leq \dim \mathbb{D}elta -\end{lemma}l + \overline p(S) \mathbb{L}ongleftrightarrow \|{\mathtt{Simp}}gma\|_\end{lemma}l^{^Y} \leq \dim \mathbb{D}elta -\end{lemma}l + \overline p(S). $$ We get $\lau C {\overline p} *{X;R} = \lau C {\overline p} *{Y;R}$ and therefore $\lau H {\overline p} *{X;R} = \lau H {\overline p} *{Y;R}$. 
For the same reasons, we also have $\lau H *{\overline p} {X;R} = \lau H *{\overline p} {Y;R}$, $\lau {\mathfrak{H}} {\overline p} *{X;R} = \lau {\mathfrak{H}} {\overline p}*{Y;R}$ and $\lau {\mathfrak{H}} *{\overline p} {X;R} = \lau {\mathfrak{H}} *{\overline p} {Y;R}$. \section{Cap product.}\label{ProdCap} We introduce the notion of cap product for the blown-up intersection (co)homo\-logy, already treated in \cite{CST7} in a different context. First of all, we work at the simplex level. Let $\mathbb{D}elta$ be an $m$-dimensional euclidean simplex. We denote by $[\mathbb{D}elta]$ its face of maximal dimension. The \emph{classical cap product} $-\cap [\mathbb{D}elta]\colon \mathbb{H}iru N *\mathbb{D}elta\to \hiru N {m-*}\mathbb{D}elta$ is defined by \begin{equation}\label{equa:cap0} {\boldsymbol 1}_{F}\cap [\mathbb{D}elta]=\left\{ \begin{equation}gin{array}{cl} G & \text{if $F\cup G=\mathbb{D}elta$ (where $\cup$ is the map of \defref{def:cupsurDelta})},\\ 0 & \text{otherwise.} \end{array}\right. \end{equation} Consider now the cone ${\mathtt c}\mathbb{D}elta$ whose apex is denoted by ${\mathtt v}$, which is the last vertex of the cone (see \pagref{virt}). We have the formula: \begin{equation}gin{equation}\label{equa:cap1} {\boldsymbol 1}_{(F,j)}\cap [{\mathtt c}\mathbb{D}elta]=\left\{ \begin{equation}gin{array}{cl} (G,1)&\text{if } j=0 \hbox{ or } (F,j) = (\mathbb{D}elta,1),\\ 0&\text{if not,} \end{array}\right. \end{equation} where the simplex $G$ is the face of $\mathbb{D}elta$ with $F\cup G=\mathbb{D}elta$ (cf. \defref{def:cupsurDelta}) if $j=0$ and $G = \emptyset$ if $(F,j) = (\mathbb{D}elta,1)$. We extend it to regular simplices $\mathbb{D}elta=\mathbb{D}elta_{0}\ast\dots\ast\mathbb{D}elta_{n}$ as follows. Denote $\hiru {\widetilde{N}}{*} \mathbb{D}elta =\hiru N{*}{{\mathtt c}\mathbb{D}elta_{0}} \otimes\dots\otimes \hiru N {*}{{\mathtt c} \mathbb{D}elta_{n-1}}\otimes \hiru N{*}{\mathbb{D}elta_{n}}$.
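Before the general definition, let us illustrate formulas \eqref{equa:cap0} and \eqref{equa:cap1} on a low-dimensional simplex; the computations below are an illustration only.

\begin{equation}gin{example} % Illustrative computation of \eqref{equa:cap0} and \eqref{equa:cap1}.
Let $\mathbb{D}elta=[e_{0},e_{1}]$. Formula \eqref{equa:cap0} gives ${\boldsymbol 1}_{[e_{0}]}\cap [\mathbb{D}elta]=[e_{0},e_{1}]$, ${\boldsymbol 1}_{[e_{0},e_{1}]}\cap [\mathbb{D}elta]=[e_{1}]$ and ${\boldsymbol 1}_{[e_{1}]}\cap [\mathbb{D}elta]=0$. On the cone ${\mathtt c}\mathbb{D}elta$ with apex ${\mathtt v}$, formula \eqref{equa:cap1} gives $$ {\boldsymbol 1}_{([e_{0}],0)}\cap [{\mathtt c}\mathbb{D}elta]=([e_{0},e_{1}],1), \quad {\boldsymbol 1}_{([e_{0},e_{1}],0)}\cap [{\mathtt c}\mathbb{D}elta]=([e_{1}],1), \quad {\boldsymbol 1}_{(\mathbb{D}elta,1)}\cap [{\mathtt c}\mathbb{D}elta]=(\emptyset,1)=[{\mathtt v}]. $$ In each case, the dimension of the image is $\dim {\mathtt c}\mathbb{D}elta$ minus the degree of the cochain. \end{example}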
\begin{definition}\label{61} We define the \emph{cap product} $ -{\mathtt c}ap {\widetilde{\Delta}}\colon \mathbb{H}iru {\widetilde{N}} {*} \mathbb{D}elta \to \hiru {\widetilde{N}}{m-*} \mathbb{D}elta, $ linearly from \begin{equation}\label{equa:lecapsurdelta} {\boldsymbol 1}_{(F,\varepsilon)} {\mathtt c}ap {\widetilde{\Delta}} = (-1)^{\nu(F,\varepsilon,\mathbb{D}elta)} ({\boldsymbol 1}_{(F_{0},\varepsilon_{0})}\cap [{\mathtt c}\mathbb{D}elta_{0}])\otimes\dots\otimes ({\boldsymbol 1}_{F_{n}}\cap [\mathbb{D}elta_{n}]), \end{equation} where ${\boldsymbol 1}_{(F,\varepsilon)}={\boldsymbol 1}_{(F_{0},\varepsilon_{0})}\otimes\dots\otimes {\boldsymbol 1}_{(F_{n-1},\varepsilon_{n-1})}\otimes {\boldsymbol 1}_{F_{n}}$, ${\widetilde{\Delta}} = {\mathtt c} \mathbb{D}elta_0 \widetildemes \cdots \widetildemes {\mathtt c} \mathbb{D}elta_{n-1} \widetildemes \mathbb{D}elta_n$ (see \eqref{bup}) and $\nu(F,\varepsilon,\mathbb{D}elta)=\sum_{j=0}^{n-1}(\dim\mathbb{D}elta_{j}+1) \,(\sum_{i=j+1}^n|(F_{i},\varepsilon_{i})|) $, with the convention $\varepsilon_{n}=0$. At the filtered simplex level, the \emph{local intersection cap product} $-\cap {\widetilde{\Delta}}\colon \mathbb{H}iru {\widetilde{N}}* \mathbb{D}elta\to \hiru N{m-*}\mathbb{D}elta$ is defined by $$c\cap {\widetilde{\Delta}}=\mu_{*}(c{\mathtt c}ap{\widetilde{\Delta}}),$$ where the map $\mu_{*}\colon \hiru {\widetilde{N}} {*}\mathbb{D}elta \to \hiru N{*}\mathbb{D}elta$ is defined by \begin{equation}gin{equation}\label{AmuA} \mu_{*}\left(\otimes_{k=0}^{n-1} (F_k,\varepsilon_k) \otimes F_n \right)=\left\{ \begin{equation}gin{array}{cl} F_{0}\ast\dots\ast F_{\end{lemma}l}&\text{if } \dim (F,\varepsilon) =\dim (F_{0}\ast\dots\ast F_{\end{lemma}l}),\\ 0&\text{otherwise,} \end{array}\right. \end{equation} where $(F,\varepsilon)=(F_{0},\varepsilon_{0})\otimes\dots\otimes (F_{n-1},\varepsilon_{n-1})\otimes F_{n}$ and $\end{lemma}l$ is the smallest integer $j$ such that $\varepsilon_{j}=0$.
\end{definition} \begin{definition} Since ${\widetilde{\Delta}}={\mathtt c}\mathbb{D}elta_{0} \widetildemes\dots\widetildemes {\mathtt c} \mathbb{D}elta_{n-1}\widetildemes \mathbb{D}elta_{n}$ (see \eqref{bup}), we have the boundary chain \begin{equation}gin{eqnarray*} \partial{\widetilde{\Delta}} = \mathop{\sum_{i=0}^n}_{\mathbb{D}elta_i\ne \emptyset} (-1)^{|\mathbb{D}elta|_{\leq {i-1}}+1} [{\mathtt c} \mathbb{D}elta_{0}]\otimes\dots\otimes [{\mathtt c} \partial\mathbb{D}elta_{i}]\otimes\dots\otimes [\mathbb{D}elta_{n} ] && \\+\mathop{\sum_{i=0}^{n-1}}_{\mathbb{D}elta_i\ne \emptyset} (-1)^{|\mathbb{D}elta|_{\leq i} + 1} [{\mathtt c}\mathbb{D}elta_{0}]\otimes\dots\otimes [\mathbb{D}elta_{i}]\otimes\dots\otimes [\mathbb{D}elta_{n}]. \end{eqnarray*} For $i\in \{0, \dots, n-1\}$ such that $\mathbb{D}elta_i\ne \emptyset$, the face $\mathcal H_i = {\mathtt c} \mathbb{D}elta_{0} \widetildemes\dots \widetildemes {\mathtt c} \mathbb{D}elta_{i-1} \widetildemes \mathbb{D}elta_{i}\widetildemes {\mathtt c} \mathbb{D}elta_{i+1} \widetildemes \dots\widetildemes \mathbb{D}elta_{n}$ is called a \emph{hidden face} of ${\widetilde{\Delta}}$. This gives \begin{equation}gin{eqnarray}\label{Hid} \partial{\widetilde{\Delta}}= \widetilde{{\mathfrak{d}}\mathbb{D}elta} + \mathop{\sum_{i=0}^{n-1}}_{\mathbb{D}elta_i\ne \emptyset} (-1)^{|\mathbb{D}elta|_{\leq i} + 1} \mathcal H_i, \end{eqnarray} where $\widetilde {{\mathfrak{d}} \mathbb{D}elta}$ is the blow up of the weighted simplicial complex ${\mathfrak{d}} \mathbb{D}elta$ (cf. \defref{def:eclate}). \end{definition} \begin{equation}gin{remark} \label{GG} \color{white}.\normalcolor \begin{equation}gin{itemize} \item For each $\omega \in \mathbb{H}iru {\widetilde{N}} *\mathbb{D}elta$ the chain $\omega \cap {\widetilde{\Delta}}$ is regular or 0. Let us verify this. Let ${\boldsymbol 1}_{(F,\varepsilon)}= {\boldsymbol 1}_{(F_{0},\varepsilon_{0})}\otimes\cdots\otimes {\boldsymbol 1}_{F_{n}} \in {\widetilde{N}}^*(\mathbb{D}elta)$.
From the definition of the cap product we have, up to a sign, $${\boldsymbol 1}_{(F,\varepsilon)}\cap{\widetilde{\Delta}}=G_{0}\ast\cdots \ast G_{n},$$ if ${\boldsymbol 1}_{(F,\varepsilon)}\cap{\widetilde{\Delta}}\ne 0$. The faces $G_\bullet$ have been defined in \eqref{equa:cap0}, \eqref{equa:cap1} and \eqref{AmuA}. The simplex ${\boldsymbol 1}_{(F,\varepsilon)}\cap{\widetilde{\Delta}}$ is regular: $F_n\ne \emptyset \mathbb{R}ightarrow G_n\ne \emptyset$. \item Given a hidden face $\mathcal H_i = {\mathtt c} \mathbb{D}elta_{0} \widetildemes\dots \widetildemes \mathbb{D}elta_{i} \widetildemes \dots\widetildemes \mathbb{D}elta_{n}$, we have \begin{equation}gin{equation*} \mu_{*}\left(\cal H_i\right)=\left\{ \begin{equation}gin{array}{cl} \mathbb{D}elta_{0}\ast\dots\ast \mathbb{D}elta_{i}&\text{if } \dim(\mathbb{D}elta_{i+1}* \dots * \mathbb{D}elta_n)=0\\ 0&\text{otherwise} \end{array}\right., \end{equation*} which is a non regular face or 0. \end{itemize} \end{remark} We define the intersection cap product for any filtered space $X$. \begin{definition} Let $X$ be a filtered space. The \emph{intersection cap product} \begin{equation}e - \cap - \colon \mathbb{H}iru {\widetilde{N}} * {X;R} \otimes \lau {\mathfrak{C}} {}m {X;R} \to \lau {\mathfrak{C}}{} {m-*} {X;R} \end{equation}e is defined by linear extension of $$ \omega\cap {\mathtt{Simp}}gma= {\mathtt{Simp}}gma_{*}(\omega_{{\mathtt{Simp}}gma}\cap {\widetilde{\Delta}}), $$ where $\omega\in \mathbb{H}iru {\widetilde{N}} *{X;R}$ and ${\mathtt{Simp}}gma\colon \mathbb{D}elta\to X$ is a regular simplex. \end{definition} \begin{equation}gin{lemma}\label{bebe} Let $\mathbb{D}elta$ be a regular simplex. The map $\mu_{*}\colon \hiru {\widetilde{N}} {*}\mathbb{D}elta \to \hiru N{*}\mathbb{D}elta$ commutes with differentials. \end{lemma} \begin{equation}gin{proof} Set $\mathbb{D}elta= \mathbb{D}elta_0 * \cdots * \mathbb{D}elta_n$. We proceed by induction on $n$. The result is clear when $n=0$. 
We consider the regular simplex $\nabla=\mathbb{D}elta_{1}\ast\dots\ast\mathbb{D}elta_{n}$, and ${\widetilde{N}}_{*}(\nabla)=N_{*}({\mathtt c}\mathbb{D}elta_{1})\otimes\dots\otimes N_{*}({\mathtt c} \mathbb{D}elta_{n-1})\otimes N_{*}(\mathbb{D}elta_{n})$. The map $\mu_{\mathbb{D}elta,*} $ is the composition $$ \mu_{\mathbb{D}elta,*} \colon N_{*}({\mathtt c} \mathbb{D}elta_{0})\otimes {\widetilde{N}}_{*}(\nabla) \xrightarrow[]{{\rm id}\otimes \mu_{\nabla,*} } N_{*}({\mathtt c}\mathbb{D}elta_{0})\otimes N_{*}(\nabla) \xrightarrow[]{\nu} N_{*}(\mathbb{D}elta), $$ (see \eqref{AmuA}), where $$\nu((F_{0},\varepsilon_{0})\otimes G)=\left\{ \begin{equation}gin{array}{cl} F_{0}&\text{if } \varepsilon_{0}=0 \text{ and } \dim G=0,\\ 0&\text{if }\varepsilon_{0}=0 \text{ and } \dim G>0,\\ F_{0}\ast G&\text{if } \varepsilon_{0}=1. \end{array}\right.$$ By the induction hypothesis, the operator $\mu_{\nabla,*}$ is compatible with differentials. It remains to prove that $\nu$ is also compatible with differentials. We distinguish four cases. \begin{equation}gin{itemize} \item If $\varepsilon_{0}=0$ and $\dim G=0$, we have $\nu(\partial((F_{0},0)\otimes G))= \nu((\partial F_{0},0)\otimes G)=\partial F_{0}=\partial \nu((F_{0},0)\otimes G)$. \item If $\varepsilon_{0}=0$ and $\dim G=1$, we have $\nu(\partial((F_{0},0)\otimes G))= \nu((\partial F_{0},0)\otimes G)+(-1)^{|F_{0}|}\nu((F_{0},0)\otimes \partial G)= 0+ F_{0}-F_{0}=0=\partial \nu((F_{0},0)\otimes G)$. \item If $\varepsilon_{0}=0$ and $\dim G>1$, all the terms vanish.
\item If $\varepsilon_{0}=1$, the result comes from \begin{equation}gin{eqnarray*} \nu(\partial((F_{0},1)\otimes G))=&&\\ \nu((\partial F_{0},1)\otimes G)+ (-1)^{|F_{0}|+1} \left(\nu((F_{0},0)\otimes G) + \nu((F_{0},1)\otimes \partial G)\right)=&&\\ (\partial F_{0})\ast G + (-1)^{|F_{0}|+1}\left\{ \begin{equation}gin{array}{cl} F_{0}&\text{if } \dim G=0,\\ F_{0}\ast \partial G&\text{if } \dim G\geq 1, \end{array}\right.=&&\\ \partial (F_{0}\ast G)=\partial \nu((F_{0},1)\otimes G).&& \end{eqnarray*} \end{itemize} \end{proof} \begin{equation}gin{proposition}\label{prop:lecap} The intersection cap product verifies the following properties \begin{equation}gin{itemize} \item[(i)] ${\mathfrak{d}} (\omega\cap\xi)=(\delta \omega)\cap \xi+(-1)^{|\omega|}\omega\cap {\mathfrak{d}} \xi$, and \item[(ii)] $(\omega\cup\end{theorem}a)\cap \xi=(-1)^{|\omega| \cdot |\end{theorem}a|} \ \end{theorem}a\cap \omega\cap\xi $. \end{itemize} where $\omega , \end{theorem}a\in \lau {\widetilde{N}} * {} {X;R}$ and $\xi \in \lau {\mathfrak{C}} {} * {X;R}$. \end{proposition} \begin{equation}gin{proof} (i) It suffices to prove it for a regular simplex ${\mathtt{Simp}}gma \colon \mathbb{D}elta \to X$. We write $k = |\omega|$. 
Using formula \eqref{Hid}, the classic Leibniz formula for the cap product gives \begin{equation}gin{eqnarray*} \partial (\omega_{\mathtt{Simp}}gma \cap {\widetilde{\Delta}} )=\partial \mu_* (\omega_{\mathtt{Simp}}gma {\mathtt c}ap {\widetilde{\Delta}} ) \stackrel{\lemref{bebe}}{=} \mu_* \partial (\omega_{\mathtt{Simp}}gma {\mathtt c}ap {\widetilde{\Delta}} )=&& \\ \mu_*\left( (\delta\omega_{\mathtt{Simp}}gma ){\mathtt c}ap {\widetilde{\Delta}}+ (-1)^k \omega_{\mathtt{Simp}}gma {\mathtt c}ap (\partial{\widetilde{\Delta}}) \right) \stackrel{\eqref{Hid}}{=}&&\\ \mu_* \left( (\delta\omega_{\mathtt{Simp}}gma ){\mathtt c}ap {\widetilde{\Delta}}+(-1)^k \omega_{\mathtt{Simp}}gma {\mathtt c}ap \widetilde{{\mathfrak{d}} \mathbb{D}elta} + \mathop{\sum_{i=0}^{n-1}}_{\mathbb{D}elta_i\ne \emptyset}(-1)^{k+ |\mathbb{D}elta|_{\leq i}+1} \omega_{\mathtt{Simp}}gma {\mathtt c}ap \mathcal H_i \right) =&&\\ \underbrace{\mu_* \left( (\delta\omega_{\mathtt{Simp}}gma ){\mathtt c}ap {\widetilde{\Delta}}+(-1)^k \omega_{\mathtt{Simp}}gma {\mathtt c}ap \widetilde{{\mathfrak{d}} \mathbb{D}elta}\right)}_{\hbox{\widetildeny regular or 0}} + \mathop{\sum_{i=0}^{n-1}}_{\mathbb{D}elta_i\ne \emptyset}(-1)^{k+ |\mathbb{D}elta|_{\leq i}+1} \underbrace{\mu_* (\omega_{\mathtt{Simp}}gma {\mathtt c}ap \mathcal H_i )}_{\hbox{\widetildeny non regular or 0}} \end{eqnarray*} (cf. \remref{GG}). So, $ {\mathfrak{d}}(\omega_{\mathtt{Simp}}gma \cap {\widetilde{\Delta}} ) = \mu_* ((\delta\omega_{\mathtt{Simp}}gma ){\mathtt c}ap {\widetilde{\Delta}}) +(-1)^k \mu_*(\omega_{\mathtt{Simp}}gma {\mathtt c}ap \widetilde{{\mathfrak{d}} \mathbb{D}elta}) = (\delta\omega_{\mathtt{Simp}}gma )\cap {\widetilde{\Delta}} +(-1)^k \omega_{\mathtt{Simp}}gma \cap \widetilde{{\mathfrak{d}} \mathbb{D}elta}, $ and therefore $ {\mathtt{Simp}}gma_*{\mathfrak{d}}(\omega_{\mathtt{Simp}}gma \cap {\widetilde{\Delta}} )= (\delta\omega)\cap {\mathtt{Simp}}gma +(-1)^k \omega \cap {\mathfrak{d}} {\mathtt{Simp}}gma.
$ The regular simplex ${\mathtt{Simp}}gma \colon \mathbb{D}elta \to X$ is in fact a stratified map when one considers on $\mathbb{D}elta$ the filtration $\mathbb{D}elta_0 {\rm Sub}set \mathbb{D}elta_0*\mathbb{D}elta_1 {\rm Sub}set \cdots {\rm Sub}set \mathbb{D}elta_0 * \cdots * \mathbb{D}elta_n$. So, ${\mathtt{Simp}}gma_* \circ \partial = \partial \circ {\mathtt{Simp}}gma_*$ \cite[Theorem F]{CST1}. Since $\mathbb{D}elta_0* \cdots * \mathbb{D}elta_{n-1} = {\mathtt{Simp}}gma^{-1}(\Sigma_X)$, we also have ${\mathtt{Simp}}gma_* \circ {\mathfrak{d}} = {\mathfrak{d}} \circ{\mathtt{Simp}}gma_*$. We get property (i) since $ {\mathfrak{d}} (\omega \cap {\mathtt{Simp}}gma ) = {\mathfrak{d}} {\mathtt{Simp}}gma_* (\omega_{\mathtt{Simp}}gma \cap {\widetilde{\Delta}} ) = {\mathtt{Simp}}gma_* {\mathfrak{d}} (\omega_{\mathtt{Simp}}gma \cap {\widetilde{\Delta}} ). $ (ii) Without loss of generality, we can suppose $\omega_{\mathtt{Simp}}gma ={\boldsymbol 1}_{(F,\varepsilon)}={\boldsymbol 1}_{(F_{0},\varepsilon_{0})}\otimes\dots\otimes {\boldsymbol 1}_{(F_{n-1},\varepsilon_{n-1})}\otimes {\boldsymbol 1}_{F_{n}}$ and $\end{theorem}a_{\mathtt{Simp}}gma ={\boldsymbol 1}_{(H,\kappa)}={\boldsymbol 1}_{(H_{0},\kappa_{0})}\otimes\dots\otimes {\boldsymbol 1}_{(H_{n-1},\kappa_{n-1})}\otimes {\boldsymbol 1}_{H_{n}}$. It suffices to prove $$ ({\boldsymbol 1}_{(F,\varepsilon)} \cup {\boldsymbol 1}_{(H,\kappa)}) {\mathtt c}ap {\widetilde{\Delta}} = (-1)^{|(F,\varepsilon)| \cdot |(H,\kappa)|} \ {\boldsymbol 1}_{(H,\kappa)} {\mathtt c}ap ({\boldsymbol 1}_{(F,\varepsilon)} {\mathtt c}ap {\widetilde{\Delta}}). $$ For $n=0$ we find the classic property of cup/cap products.
In the general case, with the convention $\end{proposition}si_n=\kappa_n=0$, we have \begin{equation}gin{eqnarray*} ({\boldsymbol 1}_{(F,\end{proposition}si)} \cup {\boldsymbol 1}_{(H,\kappa)}) {\mathtt c}ap {\widetilde{\Delta}} = \left( \bigotimes_{a=0}^n {\boldsymbol 1}_{(F_a,\end{proposition}si_a)} \cup \bigotimes_{a=0}^n {\boldsymbol 1}_{(H_a,\kappa_a)} \right) {\mathtt c}ap {\widetilde{\Delta}} \stackrel{\eqref{equa:cupsimplexfiltre}}{=}&&\\ (-1)^{\ltimes_1} \cdot \left(\bigotimes_{a=0}^n \ \left( {\boldsymbol 1}_{(F_a,\end{proposition}si_a)} \cup {\boldsymbol 1}_{(H_a,\kappa_a)}\right) \right){\mathtt c}ap {\widetilde{\Delta}} \stackrel{\eqref{equa:lecapsurdelta}}{=}&&\\ (-1)^{\ltimes_2} \cdot \bigotimes_{a=0}^{n-1} \ \left( \left( {\boldsymbol 1}_{(F_a,\end{proposition}si_a)} \cup {\boldsymbol 1}_{(H_a,\kappa_a)} \right) \cap [{\mathtt c} \mathbb{D}elta_a ]\right) \otimes \left( \left( {\boldsymbol 1}_{F_n} \cup {\boldsymbol 1}_{H_n} \right) \cap [\mathbb{D}elta_n]\right) \stackrel{classic}{=}&&\\ (-1)^{\ltimes_3} \cdot \bigotimes_{a=0}^{n-1} \ \left({\boldsymbol 1}_{(H_a,\kappa_a)} \cap{\boldsymbol 1}_{(F_a,\end{proposition}si_a) }\cap [{\mathtt c} \mathbb{D}elta_a ]\right) \otimes ({\boldsymbol 1}_{H_n} \cap{\boldsymbol 1}_{F_n }\cap [\mathbb{D}elta_n]) \stackrel{\eqref{equa:lecapsurdelta}}{=}&& \\ (-1)^{\ltimes_4}\cdot {\boldsymbol 1}_{(H,\kappa)} {\mathtt c}ap \left( \bigotimes_{a=0}^{n-1} \ \left({\boldsymbol 1}_{(F_a,\end{proposition}si_a) }\cap [{\mathtt c} \mathbb{D}elta_a ]\right) \otimes ({\boldsymbol 1}_{F_n }\cap [\mathbb{D}elta_n]) \right) \stackrel{\eqref{equa:lecapsurdelta}}{=}&&\\ (-1)^{|(F,\end{proposition}si)| \cdot |(H,\kappa)|} \cdot {\boldsymbol 1}_{(H,\kappa)} {\mathtt c}ap {\boldsymbol 1}_{(F,\end{proposition}si) } {\mathtt c}ap {\widetilde{\Delta}}. 
\end{eqnarray*} where $ \ltimes_1 =\sum_{i>j}|(F_i,\end{proposition}si_i)| \cdot |(H_j,\kappa_j)|, \ltimes_2 =\ltimes_1 + \sum_{j=0}^{n-1}(\dim\mathbb{D}elta_{j}+1) \,(\sum_{i>j}|(F_{i},\varepsilon_{i})|+ |(H_i,\kappa_i)|), \ltimes_3=\ltimes_2 +\sum_{i}|(F_i,\end{proposition}si_i)| \cdot |(H_i,\kappa_i)|$ and \begin{equation}gin{eqnarray*} \ltimes_4 &=&\ltimes_3 + \sum_{j=0}^{n-1}(\dim\mathbb{D}elta_{j}+1 - |(F_j,\end{proposition}si_j)| ) \,(\sum_{i>j} |(H_i,\kappa_i)|) =\sum_{i\geq j}|(F_i,\end{proposition}si_i)| \cdot |(H_j,\kappa_j)| \\ && + \sum_{j=0}^{n-1}(\dim\mathbb{D}elta_{j}+1) \,(\sum_{i>j}|(F_{i},\varepsilon_{i})|) - \sum_{j=0}^{n-1}|(F_j,\end{proposition}si_j)| \,(\sum_{i>j} |(H_i,\kappa_i)|)\\ &=& \sum_{j=0}^{n-1}(\dim\mathbb{D}elta_{j}+1) \,(\sum_{i>j}|(F_{i},\varepsilon_{i})|) + |(F,\end{proposition}si)|\cdot |(H,\kappa)|. \end{eqnarray*} \end{proof} The cap product between a cochain $\omega \in \lau {\widetilde{N}} * {} {X;R}$ and a chain $\xi \in \hiru C {*}{X;R}$ may not exist. This happens when a simplex ${\mathtt{Simp}}gma$ of $\xi$ lies in the singular part of $X$. In this case, ${\mathtt{Simp}}gma$ is not regular and we cannot construct the blow up of $\mathbb{D}elta_{\mathtt{Simp}}gma$. This is the reason for using the tame intersection homology in the definition of the cap product: all the involved simplices are regular.
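The following minimal example, chosen only for illustration, shows this phenomenon.

\begin{equation}gin{example} % Illustrative example: a simplex in the singular part has no blow up.
Let $X={\mathtt c} S^{0}=[-1,1]$ be filtered by $X_{0}=\{0\} {\rm Sub}set X_{1}=X$, and let ${\mathtt{Simp}}gma \colon \mathbb{D}elta^{0}\to X$ be the $0$-simplex with image the apex $\{0\}$. The induced filtration is $\mathbb{D}elta=\mathbb{D}elta_{0}\ast \mathbb{D}elta_{1}$ with $\mathbb{D}elta_{0}=\mathbb{D}elta^{0}$ and $\mathbb{D}elta_{1}=\emptyset$. The simplex ${\mathtt{Simp}}gma$ is not regular, the blow up ${\widetilde{\Delta}}$ is not defined and $\omega \cap {\mathtt{Simp}}gma$ has no meaning; such simplices are excluded from the tame complex $\lau {\mathfrak{C}} {\overline{p}} * {X;R}$. \end{example}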
\begin{equation}gin{proposition}\label{prop:cap} For each filtered space, $X$, endowed with two perversities, $\overline{p}$ and $\overline{q}$, the intersection cap product \begin{equation}gin{equation*}\label{equa:capprduitespacefiltre} -\cap -\colon \lau {\widetilde{N}} * {\overline{p}}{X;R}\otimes \lau {\mathfrak{C}} {\overline{q}} m {X;R}\to \lau {\mathfrak{C}}{\overline{p}+\overline{q}} {m-*} {X;R}, \end{equation*} is well defined and induces a morphism \begin{equation}gin{equation*}\label{equa:capprduitBUcohomologie} -\cap -\colon \lau \mathscr H * {\overline{p}}{X;R}\otimes \lau {\mathfrak{H}} {\overline{q}} m {X;R}\to \lau {\mathfrak{H}} {\overline{p}+\overline{q}} {m-*}{X;R}. \end{equation*} \end{proposition} \begin{equation}gin{proof} Following {\rm pr}opref{prop:lecap} it remains to study the behavior of the cap product with respect to the perverse degree on a singular stratum $S$. Let us consider $\omega \in {\widetilde{N}}^*(X;R)$ with $\|\omega\|_S \leq \overline{p}(S)$, ${\mathtt{Simp}}gma \colon \mathbb{D}elta \to X$ a regular simplex with ${\rm Im\,} {\mathtt{Simp}}gma \cap S \ne \emptyset$ and $\|{\mathtt{Simp}}gma\|_S \leq |\mathbb{D}elta|-\end{lemma}l + \overline{q}(S)$, where $\end{lemma}l={\rm codim\,} S$. We need to prove \begin{equation}\label{kap} \|\omega \cap {\mathtt{Simp}}gma\|_S \leq |\omega \cap {\mathtt{Simp}}gma| - \end{lemma}l + (\overline p + \overline q)(S). \end{equation} Without loss of generality, we can suppose $\omega \cap {\mathtt{Simp}}gma \ne 0$. When $\overline p (S) =-\infty$, the condition $\|\omega \|_S \leq -\infty$ implies $\varepsilon_{n-\end{lemma}l}=1$ and therefore the $(n-\end{lemma}l)$-factor of $\omega_{\mathtt{Simp}}gma {\mathtt c}ap {\widetilde{\Delta}}$ is $[{\mathtt v}_{n-\end{lemma}l}]$ which gives $\omega \cap {\mathtt{Simp}}gma \cap S =\emptyset$ and therefore \eqref{kap}.
On the other hand, if $\overline q (S) =-\infty$, the condition $\|{\mathtt{Simp}}gma\|_S =-\infty$ gives $\mathbb{D}elta_{n-\end{lemma}l} =\emptyset$, hence $\omega \cap {\mathtt{Simp}}gma \cap S =\emptyset$ and therefore \eqref{kap}. We can suppose $(\overline p + \overline q )(S) = \overline p(S) +\overline q(S)$. Then we have \eqref{kap} from $$ \|\omega\cap{\mathtt{Simp}}gma\|_{S} \stackrel{claim}{\leq} \|{\mathtt{Simp}}gma\|_S+\|\omega_{{\mathtt{Simp}}gma}\|_{\end{lemma}l}-|\omega_{{\mathtt{Simp}}gma}| \leq |\mathbb{D}elta|-\end{lemma}l+\overline{p}(S)+\overline{q}(S)-|\omega_{{\mathtt{Simp}}gma}| = |\omega\cap{\mathtt{Simp}}gma|-\end{lemma}l +\overline{p}(S)+\overline{q}(S). $$ We verify the claim. It suffices to consider $\omega_{\mathtt{Simp}}gma = {\boldsymbol 1}_{(F,\varepsilon)}$ with ${\boldsymbol 1}_{(F,\varepsilon)}\cap {\widetilde{\Delta}} \ne 0$. We know, from \remref{GG}, that ${\boldsymbol 1}_{(F,\varepsilon)}\cap {\widetilde{\Delta}} = \pm G_0*\dots*G_n$ with $|\mathbb{D}elta_i| = |G_i| + |(F_i,\varepsilon_i)|$, for each $i \in \{0, \dots ,n\}$. Then, $\| \omega \cap {\mathtt{Simp}}gma\|_S $ is equal to \begin{equation}gin{eqnarray*} |G_0*\dots * G_{n-\end{lemma}l}| &=& |G_0| + \dots + |G_{n-\end{lemma}l}| + n-\end{lemma}l \\ &=& |\mathbb{D}elta_0| + \dots + | \mathbb{D}elta_{n-\end{lemma}l}| + n-\end{lemma}l - |(F_0,\varepsilon_0)| - \dots - |(F_{n-\end{lemma}l},\varepsilon_{n-\end{lemma}l})| \\ &=& \|{\mathtt{Simp}}gma\|_S - |(F,\varepsilon)| + |(F,\varepsilon)|_{>n-\end{lemma}l} = \|{\mathtt{Simp}}gma\|_S - |\omega_{\mathtt{Simp}}gma| +\|\omega_{\mathtt{Simp}}gma\|_\end{lemma}l. \end{eqnarray*} \end{proof} \begin{equation}gin{remark}\label{67} The intersection cap product is natural with respect to perversities.
Given perversities $\overline a \leq \overline p$ and $\overline b \leq \overline q$, the cap product $ - \cap - \colon \lau {\widetilde{N}} * {\overline{a}} X\otimes \lau {\mathfrak{C}} {\overline{b}} {m} X \to \lau {\mathfrak{C}} {\overline{a}+\overline{b}} {m-*}X $ is the restriction of $ - \cap - \colon \lau {\widetilde{N}} * {\overline{p}} X\otimes \lau {\mathfrak{C}} {\overline{q}} {m} X \to \lau {\mathfrak{C}} {\overline{p}+\overline{q}} {m-*}X. $ \end{remark} \section{Stratified maps: the local level.}\label{EAmal} In this section and the next one, we prove that any stratified map induces a homomorphism between the blown-up intersection cohomologies if a certain compatibility of the involved perversities is satisfied. In particular, if $\overline p$ is a GM-perversity, any stratified map $f \colon X \to Y$ induces a homomorphism $f \colon \lau \mathscr H * {\overline p} {X;R} \to \lau \mathscr H * {\overline p} {Y;R}$. Let $f \colon X \to Y$ be a stratified map as in \defref{def:applistratifieeforte}. If ${\mathtt{Simp}}gma\colon \mathbb{D}elta \to X$ is a filtered simplex then the composite $f \circ {\mathtt{Simp}}gma \colon \mathbb{D}elta \to Y$ is also a filtered simplex but, in general, the two filtrations induced on $\mathbb{D}elta$ by ${\mathtt{Simp}}gma$ and $f \circ {\mathtt{Simp}}gma$ differ. We denote them by $\mathbb{D}elta_{\mathtt{Simp}}gma$ and $\mathbb{D}elta_{f\circ {\mathtt{Simp}}gma}$. In \cite[Corollary A.25 and Lemma A.24]{CST1}, we prove that the two filtrations $\mathbb{D}elta_{\mathtt{Simp}}gma$ and $\mathbb{D}elta_{f\circ {\mathtt{Simp}}gma}$ (on the same euclidean simplex) match through a finite number of elementary amalgamations that we describe and study in this section. \begin{definition} \label{elemkam} Let $k \in \{0, \ldots,n-1\}$. 
An {\em elementary $k$-amalgamation} of a regular simplex $\mathbb{D}elta = \mathbb{D}elta_0 * \cdots * \mathbb{D}elta_n$ is the regular simplex $ \mathbb{D}elta' = \mathbb{D}elta'_0 * \cdots * \mathbb{D}elta'_{n-1} $ with $$ \mathbb{D}elta'_i = \left\{ \begin{equation}gin{array}{ll} \mathbb{D}elta_i & \hbox{if } i \leq n-k-2,\\ \mathbb{D}elta_{n-k-1} * \mathbb{D}elta_{n-k} & \hbox{if } i = n-k-1,\\ \mathbb{D}elta_{i-1} & \hbox{if } i \geq n-k. \end{array} \right. $$ If $\mathbb{D}elta_{n-k-1}=\emptyset$, then the elementary amalgamation is \emph{$k$-simple}. \end{definition} We study the effect of these maps at the blow up level. First of all, we introduce a technical tool. {\rm Sub}section{Stratification} The simplices $\mathbb{D}elta = \mathbb{D}elta_0 * \cdots * \mathbb{D}elta_n$ and $\mathbb{D}elta' = \mathbb{D}elta'_0 * \cdots * \mathbb{D}elta'_{n-1}$ are filtered spaces (see \defref{def:espacefiltre}) of respective dimension $n$ and $n-1$. We set $\mathbb{N}_\mathbb{D}elta = \{ i \in \{0,\ldots,n\} \ / \ \mathbb{D}elta_0 * \cdots * \mathbb{D}elta_{n-i - 1} \ne \mathbb{D}elta_0 * \cdots * \mathbb{D}elta_{n-i}\}$. We observe that the family of strata of $\mathbb{D}elta$ is $$\cal S_\mathbb{D}elta =\{ S_{i} = \mathbb{D}elta_0 * \cdots * \mathbb{D}elta_{n-i } \backslash \mathbb{D}elta_0 * \cdots * \mathbb{D}elta_{n-i - 1} \ / \ i \in \mathbb{N}_\mathbb{D}elta\}. $$ Notice that ${\rm codim\,} S_i =i$ for each $i \in \mathbb{N}_\mathbb{D}elta$. If $k \in \{0, \ldots,n-1\}$, we denote by $\end{proposition}si_k \colon \{0,\ldots,n\} \to \{0,\ldots,n-1\}$ the map defined by $\end{proposition}si_k(i) = i $ if $i \leq k$ and $\end{proposition}si_k(i) = i-1$ if $i\geq k+1$. We check easily that this map restricts to a map $a_k \colon \mathbb{N}_\mathbb{D}elta \to \mathbb{N}_{\mathbb{D}elta'}$, called the \emph{index map}. If $S_i \in \cal S_\mathbb{D}elta$, we have $S_{i} {\rm Sub}set S'_{a_k(i)}$.
Thus, by definition, the identity map is a stratified map from $\mathbb{D}elta$ to $\mathbb{D}elta'$; we denote it $$ \cal A [k] \colon \mathbb{D}elta = \mathbb{D}elta_0 * \cdots * \mathbb{D}elta_n\to \mathbb{D}elta' = \mathbb{D}elta'_0 * \cdots *\mathbb{D}elta'_{n-1}. $$ If $ {S} \in \cal S_\mathbb{D}elta$, with the notations of \defref{def:applistratifieeforte}, we have ${S}^{\cal A [k] } = S'_{a_k(i)}$, with $i={\rm codim\,} {S}$, and therefore \begin{equation}\label{indexa} {\rm codim\,} {S}^{\cal A [k] } = a_k({\rm codim\,} {S}). \end{equation} We also say that $ \cal A [k] $ is an elementary amalgamation. If the elementary $k$-amalgam\-ation is simple, then $a_k$ is a bijection and $\cal A[k] $ is a stratified homeomorphism (see \defref{def:applistratifieeforte}). The elementary $k$-amalgamation has an effect on two components only; we first focus on them. For the sake of simplicity, we write $E_0 = \mathbb{D}elta_{n-k-1}$ and $E_1 = \mathbb{D}elta_{n-k}$; they can be empty. \begin{definition}\label{LemaTec} We define a map $$\theta \colon \hiru N *{{\mathtt c} E_{0}} \otimes \hiru N * {{\mathtt c} E_1} \to \hiru N *{{\mathtt c} (E_0* E_1)}$$ by \begin{eqnarray*}\label{deftetapeq*} \theta ((F_0,\varepsilon_0) \otimes (F_{1},\varepsilon_1) ) = \left\{ \begin{array}{ll} (F_0*F_1,\varepsilon_1) & \hbox{if } \varepsilon_0=1,\\ (F_0,0) & \hbox{if } \varepsilon_0=0 \hbox{ and } |(F_1,\varepsilon_1) |=0,\\ 0 & \hbox{if not.} \end{array} \right.
\end{eqnarray*} Notice that the restriction of $\theta$ to $ \hiru N *{{\mathtt c} E_{0}} \otimes \hiru N * { E_1}$ gives a map, still denoted $\theta$, $$\theta \colon \hiru N *{{\mathtt c} E_{0}} \otimes \hiru N * { E_1} \to \hiru N *{E_0* E_1}.$$ By duality, we get a map $$\Xi \colon \mathbb{H}iru N *{{\mathtt c} (E_0* E_1)} \to \mathbb{H}iru N *{{\mathtt c} E_{0}} \otimes \mathbb{H}iru N * {{\mathtt c} E_1}$$ which verifies \begin{equation*} \left\{ \begin{array}{lcll} \Xi({\boldsymbol 1}_{( F_0*F_1,\varepsilon) })&=& (-1)^{|(F_0,1)| \cdot |(F_1,\varepsilon)|} {\boldsymbol 1}_{(F_0,1)}\otimes {\boldsymbol 1}_{(F_1,\varepsilon)} &\hbox{if }(F_1,\varepsilon) \ne (\emptyset,0), \\[.2cm] \Xi({\boldsymbol 1}_{(F_0,0)}) &=& {\boldsymbol 1}_{(F_0,0)}\otimes \lambda_{{\mathtt c} E_1}, &\hbox{with } \lambda_{{\mathtt c} E_1} = {\boldsymbol 1}_{(\emptyset,1)} \\ &&& + \displaystyle \sum_{e \in\cal V( E_1)} {\boldsymbol 1}_{(e,0)}. \end{array} \right. \end{equation*} \end{definition} We also denote by $\Xi \colon \mathbb{H}iru N *{E_0* E_1} \to \mathbb{H}iru N *{{\mathtt c} E_{0}} \otimes \mathbb{H}iru N * { E_1}$ the restriction of $\Xi$. \begin{lemma}\label{CompBlow} The morphisms $\theta $ and $\Xi$ are chain maps.
Moreover, \begin{enumerate}[(1)] \item if $E_0=\emptyset$ then $\theta$ and $\Xi$ are isomorphisms, \item $\Xi$ is compatible with the cup products, \item $\theta$ and $\Xi$ are compatible with the cap product, i.e., \begin{eqnarray*} \theta (\Xi \left({\boldsymbol 1}_{(F_0 *F_1,\varepsilon)} \right) \cap ([{\mathtt c} E_0] \otimes [{\mathtt c} E_1] )) &=& {\boldsymbol 1}_{(F_0 *F_1,\varepsilon)} \cap [{\mathtt c}(E_0*E_1)], \hbox{ and} \\ \theta (\Xi \left({\boldsymbol 1}_{F_0 *F_1} \right) \cap ([{\mathtt c} E_0] \otimes[ E_1] )) &=& {\boldsymbol 1}_{F_0 *F_1} \cap [E_0*E_1] \end{eqnarray*} where $[\nabla]$ denotes the maximal simplex of a Euclidean simplex $\nabla$, $(F_0*F_1,\varepsilon)$ is a face of ${\mathtt c}(E_0 * E_1)$ and $F_0*F_1$ is a face of $E_0 * E_1$. \end{enumerate} \end{lemma} We postpone the proof of this Lemma and deduce first some properties on amalgamations from it. \begin{definition}\label{85} Let us consider the elementary $k$-amalgamation $\cal A[k] \colon \mathbb{D}elta \to \mathbb{D}elta'$ with $k\in\{0, \ldots,n-1\}$ (see \defref{elemkam}). We define two homomorphisms $ \cal A[k]_* \colon \hiru {\widetilde{N}} * {\mathbb{D}elta}\to \hiru {\widetilde{N}} * {\mathbb{D}elta'} $ and $ \cal A[k]^* \colon \mathbb{H}iru {\widetilde{N}} * {\mathbb{D}elta'} \to \mathbb{H}iru {\widetilde{N}} * {\mathbb{D}elta} $ by \begin{equation}\label{Abajo} \cal A[k]_* = \underbrace{{\rm id} \otimes \cdots \otimes {\rm id}}_{n-k-1 \ times} \otimes \ \theta \ \otimes \underbrace{{\rm id} \otimes \cdots \otimes {\rm id} }_{k \ times} \end{equation} and \begin{equation}\label{Aalto} \cal A[k]^* = \underbrace{{\rm id} \otimes \cdots \otimes {\rm id}}_{n-k-1 \ times} \otimes \ \Xi \ \otimes \underbrace{{\rm id} \otimes \cdots \otimes {\rm id} }_{k \ times} \end{equation} \end{definition} If there is no ambiguity, we use the notations $\cal A_*$ and $\cal A^*$ for these two maps and $a$ for the index map.
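To make \defref{LemaTec} concrete, here is a small worked example of ours (it does not appear in the text), written with the notations of \eqref{Abajo}.

```latex
% A small worked example (ours): n = 2, k = 1 and \Delta = [a] * [b] * [c],
% so that E_0 = \Delta_{n-k-1} = [a] and E_1 = \Delta_{n-k} = [b].
% By \eqref{Abajo}, \cal A[1]_* = \theta \otimes {\rm id} acts on
% N_*(c[a]) \otimes N_*(c[b]) \otimes N_*([c]).  On some basis elements:
\begin{eqnarray*}
([a],1)\otimes([b],0)\otimes[c] &\longmapsto& ([a,b],0)\otimes[c],\\
([a],0)\otimes(\emptyset,1)\otimes[c] &\longmapsto& ([a],0)\otimes[c],\\
([a],0)\otimes([b],1)\otimes[c] &\longmapsto& 0,
\end{eqnarray*}
% the first line by the case \varepsilon_0 = 1 of the definition of \theta,
% the second because |(\emptyset,1)| = 0, and
% the third because |([b],1)| = 1 \ne 0.
```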
In order to get coherent notation, we write $\|-\|_0=0$. \begin{proposition}\label{CMAmal} Let $\cal A \colon \mathbb{D}elta \to \mathbb{D}elta'$ be an elementary $k$-amalgamation. Then, the maps $\cal A_*$ and $\cal A^*$ defined above satisfy the following properties. \begin{enumerate}[(1)] \item They are chain maps and $\cal A^*$ is compatible with the cup product. \item Let $\omega \in \mathbb{H}iru {\widetilde{N}} * {\mathbb{D}elta'}$, then we have $$ \cal A_*( \cal A^*(\omega) \cap {\widetilde{\Delta}}) = \omega \cap \widetilde{\mathbb{D}elta'}. $$ \item In the case of simple amalgamations, they are isomorphisms. \item For each regular face operator $\delta_\ell \colon \nabla \to \mathbb{D}elta$ we denote by $\cal A_{\mathbb{D}elta}^*$ the previous map and by $\cal A_\nabla^*$ the map corresponding to the amalgamation induced on $\nabla$. Then we have $$ \delta_\ell^* \circ \cal A ^*_\nabla= \cal A ^*_\mathbb{D}elta \circ \delta_\ell^*. $$ \item The map $\cal A^*$ decreases the perverse degree, i.e., for $\omega \in \mathbb{H}iru {\widetilde{N}} * {\mathbb{D}elta'}$ and $\ell \in \mathbb{N}_\mathbb{D}elta$, we have $$ \|\cal A^*(\omega)\|_\ell \leq \|\omega\|_{a(\ell )}.$$ Moreover, if the amalgamation is simple, this inequality becomes an equality, $$ \|\cal A^*(\omega)\|_\ell = \|\omega\|_{a(\ell )}.$$ \end{enumerate} \end{proposition} \begin{proof} Properties (1) and (3) are consequences of \lemref{CompBlow}\,(2) and (1), respectively. (2) From the definition of $\cal A_*$ and $\cal A^*$, it suffices to apply \lemref{CompBlow}\,(3). (4) This is direct from the definitions.
(5) Let $\omega={\boldsymbol 1}_{(F_0,\varepsilon_0)} \otimes \cdots \otimes {\boldsymbol 1}_{(F_{n -k-2},\varepsilon_{n-k-2} )} \otimes {\boldsymbol 1}_{(F_{n-k-1}*F_{n-k},\varepsilon)} \otimes {\boldsymbol 1}_{(F_{n-k+1},\varepsilon_{n-k+1} )} \otimes \cdots \otimes {\boldsymbol 1}_{F_{n}} \in \mathbb{H}iru {\widetilde{N}} * {\mathbb{D}elta'}$. We distinguish two cases (see Definitions \ref{LemaTec}, \ref{85}): \begin{itemize} \item Suppose $(F_{n-k} ,\varepsilon) \ne (\emptyset,0)$. We have, up to a sign, $ \cal A^* (\omega) = \pm {\boldsymbol 1}_{(F_0,\varepsilon_0)} \otimes \cdots \otimes {\boldsymbol 1}_{(F_{n-k-2},\varepsilon_{n-k-2})} \otimes {\boldsymbol 1}_{(F_{n-k-1},1)} \otimes {\boldsymbol 1}_{(F_{n-k},\varepsilon)} \otimes {\boldsymbol 1}_{(F_{n-k+1},\varepsilon_{n-k+1} )} \otimes \cdots \otimes{\boldsymbol 1}_{F_{n}}.$ The result comes from $$ \| \cal A^* (\omega) \|_\ell = \left\{ \begin{array}{cl} \|\omega\|_{\ell -1} & \hbox{if } \ell \geq k +2\\ -\infty & \hbox{if } \ell = k+1\\ \|\omega\|_{\ell } & \hbox{if } \ell \leq k \end{array} \right. \leq \left\{ \begin{array}{cl} \|\omega\|_{\ell -1} & \hbox{if } \ell \geq k +1\\ \|\omega\|_{\ell } & \hbox{if } \ell \leq k \end{array} \right. =\|\omega\|_{a(\ell)}, $$ for $\ell \in \mathbb{N}_\mathbb{D}elta$. \item Suppose $(F_{n-k},\varepsilon) =(\emptyset,0)$.
We have the equality $ \cal A^* (\omega) = {\boldsymbol 1}_{(F_0,\varepsilon_0)} \otimes \cdots \otimes {\boldsymbol 1}_{(F_{n-k-2},\varepsilon_{n-k-2})} \otimes {\boldsymbol 1}_{(F_{n-k-1},0)} \otimes {\boldsymbol 1}_{(\emptyset,1)} \otimes {\boldsymbol 1}_{(F_{n-k+1},\varepsilon_{n-k+1} )} \otimes \cdots \otimes{\boldsymbol 1}_{F_{n}} + \sum_{e \in \cal V(\mathbb{D}elta_{n-k})} {\boldsymbol 1}_{(F_0,\varepsilon_0)} \otimes \cdots \otimes {\boldsymbol 1}_{(F_{n-k-2},\varepsilon_{n-k-2})} \otimes {\boldsymbol 1}_{(F_{n-k-1},0)} \otimes {\boldsymbol 1}_{(e,0)} \otimes $ \\ $ {\boldsymbol 1}_{(F_{n-k+1},\varepsilon_{n-k+1} )} \otimes \cdots \otimes{\boldsymbol 1}_{F_{n}}. $ Then we have $$ \|\cal A^* (\omega) \|_\ell = \left\{ \begin{array}{cl} \|\omega\|_{\ell -1} & \hbox{if } \ell \geq k +2\\ \|\omega\|_{\ell } & \hbox{if } \ell \leq k \end{array} \right. = \|\omega\|_{a(\ell)}, $$ for $\ell \in \mathbb{N}_\mathbb{D}elta$. Since the amalgamation is simple, the equality comes from the fact that $k+ 1\not\in \mathbb{N}_\mathbb{D}elta$. \end{itemize} \end{proof} \begin{proof}[Proof of \lemref{CompBlow}] We begin with the compatibility with the differentials. Let $H= (F_0,\varepsilon_0) \otimes (F_1,\varepsilon_1) $ and suppose $\varepsilon_0=1$ and $|(F_1,\varepsilon_1)|\geq 2$.
We have \begin{eqnarray*} \theta \partial H & =& \theta( (\partial F_0,1) \otimes (F_1,\varepsilon_1) ) +(-1)^{|(F_0,1)|}\theta( (F_0,1) \otimes (\partial F_1,\varepsilon_1))\\ &&+ (-1)^{|(F_0,1)| + |F_1|+1} \left\{ \begin{array}{ll} 0 &\hbox{if } \varepsilon_1=0\\ \theta( (F_0,1) \otimes (F_1,0)) & \hbox{if } \varepsilon_1=1 \end{array} \right.\\ &=& (\partial F_0 * F_1,\varepsilon_1) +(-1)^{|(F_0,1)|}(F_0*\partial F_1,\varepsilon_1) \\ &&+ \left\{ \begin{array}{ll} 0 &\hbox{if } \varepsilon_1=0\\ (-1)^{|(F_0,1)| + |F_1|+1}(F_0*F_1,0) & \hbox{if } \varepsilon_1=1 \end{array} \right.\\ &=& \partial (F_0 * F_1,\varepsilon_1). \end{eqnarray*} On the other hand, we get $\partial \theta H = \partial (F_0*F_1,\varepsilon_1)$ and we have established the equality $\theta\partial =\partial \theta$ in the case $\varepsilon_0=1$ and $|(F_1,\varepsilon_1)|\geq 2$. The proof is similar in the other cases. The compatibility of $\Xi$ with the differentials follows by duality. (1) If $E_0 = \emptyset$, then $\varepsilon_0 =1$ and, by definition of the map $\theta$, we have $\theta((\emptyset,1) \otimes (F_1,\varepsilon_1)) = (F_1,\varepsilon_1)$. Therefore, $\theta$ is an isomorphism and so is $\Xi$ by duality. (2) Let $ {\boldsymbol 1}_{(F_0* F_1,\varepsilon)} ,{\boldsymbol 1}_{(F_0'*F_1',\varepsilon')} \in \mathbb{H}iru {\widetilde{N}} * {{\mathtt c} (E_0*E_1)}$. We suppose that $(F_1,\varepsilon)$ and $(F'_1,\varepsilon')$ are different from $(\emptyset,0)$. Let us consider $A =\Xi \left({\boldsymbol 1}_{(F_0* F_1,\varepsilon)} \right) \cup \Xi \left({\boldsymbol 1}_{(F_0'*F_1',\varepsilon')} \right)$ and $B = \Xi \left({\boldsymbol 1}_{(F_0*F_1,\varepsilon)} \cup {\boldsymbol 1}_{(F'_0* F'_1,\varepsilon')} \right)$. A direct computation gives the following equalities from the definitions.
\begin{eqnarray*} A =(-1)^{ |(F_0,1)| \cdot |(F_1,\varepsilon)| + |(F'_0,1)| \cdot |(F'_1,\varepsilon')| } \left({\boldsymbol 1}_{(F_0,1)} \otimes {\boldsymbol 1}_{(F_1,\varepsilon)} \right)\cup \left({\boldsymbol 1}_{(F'_0,1)} \otimes {\boldsymbol 1}_{(F'_1,\varepsilon')}\right) = &&\\ (-1)^{|(F_1,\varepsilon)| \cdot |(F'_0,1)| + |(F_0,1)| \cdot |(F_1,\varepsilon)| + |(F'_0,1)| \cdot |(F'_1,\varepsilon')| } \left({\boldsymbol 1}_{(F_0,1)} \cup{\boldsymbol 1}_{(F'_0,1)} \right)\otimes \left({\boldsymbol 1}_{(F_1,\varepsilon)} \cup{\boldsymbol 1}_{(F'_1,\varepsilon')}\right)&& \end{eqnarray*} If $(F'_0,\varepsilon) =(\emptyset, 0)$ and $F_1,F'_1$ are compatible (see \defref{def:cupsurDelta}), we have \begin{eqnarray*} A &=& (-1)^{|(F_1,0)| \left( |(F_0,1)| + |(F'_1,\varepsilon')|\right)} {\boldsymbol 1}_{(F_0,1)} \otimes{\boldsymbol 1}_{(F_1\cup F'_1,\varepsilon')}\\ &=& (-1)^{|(F_0*F_1,0)| \cdot |(F'_1,\varepsilon')| } \Xi ({\boldsymbol 1}_{(F_0*F_1 \cup F'_1,\varepsilon' ) }) = B. \end{eqnarray*} If $(F'_0,\varepsilon) =(\emptyset, 1)$ and $(F'_1,\varepsilon') = (\emptyset,1)$, we have $ A = (-1)^{|(F_0,1)| \cdot |(F_1,1)| } {\boldsymbol 1}_{(F_0,1)} \otimes{\boldsymbol 1}_{(F_1,1)} = \Xi( {\boldsymbol 1}_{(F_0*F_1,1)} ) =B. $ In the other cases we have $A=B=0$. We have established the equality $A=B$ in the case where $(F_1,\varepsilon)$ and $(F'_1,\varepsilon')$ are different from $(\emptyset,0)$. The verification is similar in the other cases. (3) Suppose $(F_1,\varepsilon)\ne (\emptyset,0)$.
We have \begin{eqnarray*} C&=&\theta (\Xi ({\boldsymbol 1}_{(F_0* F_1,\varepsilon)} ) \cap ([{\mathtt c} E_0 ]\otimes [{\mathtt c} E_1] ))\\ &=& (-1)^{|(F_0,1)| \cdot |(F_1,\varepsilon)|} \theta(({\boldsymbol 1}_{(F_0,1)}\otimes {\boldsymbol 1}_{(F_1,\varepsilon)} )\cap ( [{\mathtt c} E_0 ]\otimes [{\mathtt c} E_1] ))\\&=& (-1)^{|(F_1,\varepsilon)| \cdot |{\mathtt c} E_0| + |(F_0,1)| \cdot |(F_1,\varepsilon)|} \theta ( ({\boldsymbol 1}_{(F_0,1)} \cap [{\mathtt c} E_0]) \otimes ({\boldsymbol 1}_{(F_1,\varepsilon)} \cap [{\mathtt c} E_1 ])). \end{eqnarray*} We apply the definition of the cap product \eqref{equa:cap1} and we consider three different cases. \begin{itemize} \item If $F_0=E_0, F_1=E_1$ and $\varepsilon=1$, we have $$ C = \theta( [{\mathtt v} ]\otimes [{\mathtt v}] ) =[{\mathtt v}] = {\boldsymbol 1}_{(F_0*F_1,\varepsilon) }\cap [{\mathtt c}(E_0*E_1)]. $$ \item If $F_0=E_0, F_1 \cup G_1 =E_1$ and $\varepsilon=0$, we have $$ C= \theta( [{\mathtt v} ]\otimes (G_1,1) ) =(G_1,1) = {\boldsymbol 1}_{(F_0*F_1,\varepsilon) }\cap [{\mathtt c}(E_0*E_1)]. $$ \item The other cases correspond to $C = {\boldsymbol 1}_{(F_0*F_1,\varepsilon) }\cap [{\mathtt c}(E_0*E_1) ] =0$. \end{itemize} We have established the property (3) in the case $(F_1,\varepsilon)\ne (\emptyset,0)$. The verification is similar in the other cases. \end{proof} \section{Stratified maps: the global level. \thmref{MorCoho}.}\label{AmalSec} Let $f \colon X \to Y$ be a stratified map, ${\mathtt{Simp}}gma \colon \mathbb{D}elta_{\mathtt{Simp}}gma \to X$ and $f \circ {\mathtt{Simp}}gma \colon \mathbb{D}elta_{f\circ {\mathtt{Simp}}gma} \to Y$ filtered simplices of $X$ and $Y$ respectively.
In \cite[Corollary A.25]{CST1} we have proved that the filtrations $\mathbb{D}elta_{\mathtt{Simp}}gma$ and $\mathbb{D}elta_{f\circ {\mathtt{Simp}}gma}$ (of the same Euclidean simplex $\mathbb{D}elta$) are connected by an amalgamation, more exactly, by a finite sequence $\cal A_1$ of elementary amalgamations and a finite sequence $\cal A_2^{-1}$ of inverses of simple amalgamations, as follows. \begin{center} \begin{tikzpicture} \node (a) at (-6,0) {$\mathbb{D}elta_{\mathtt{Simp}}gma $} ; \node (b) at (-6,-4) {$\mathbb{D}elta_{f\circ {\mathtt{Simp}}gma}$} ; \draw [->] (a) --(b) node[midway,left]{\tiny$\cal A_{\mathtt{Simp}}gma$} ; \draw(-5.5,0) node{$=$} ; \node (c) at (-4,-.15) {$\underbrace{\mathbb{D}elta_0 {*} \cdots {*}\mathbb{D}elta_{j_0} }$} ; \node (d) at (-4,-2) {$\mathbb{D}elta'_{i_0}$} ; \node (e) at (-4,-3.85) {$\overbrace{\mathbb{D}elta'_0 {*} \cdots{*} \mathbb{D}elta'_{i_0}}$} ; \draw [->] (c) --(d) node[midway]{} ; \draw [->] (d) --(e) node[midway]{} ; \draw (-2.8,0) node{$*$} ; \draw (-2.8,-4) node{$*$} ; \node (f) at (-1.3,-.15) {$\underbrace{\mathbb{D}elta_{j_0+1} * \cdots *\mathbb{D}elta_{j_1} }$}; \node (g) at (-1.3,-2) {$\mathbb{D}elta'_{i_1}$} ; \node (h) at (-1.3,-3.85) {$\overbrace{\mathbb{D}elta'_{i_0+1} * \cdots * \mathbb{D}elta'_{i_1}}$} ; \draw [->] (f) --(g) node[midway]{} ; \draw [->] (g) --(h) node[midway]{} ; \draw (.8,0) node{$* \cdots *$} ; \draw (.8,-4) node{$* \cdots *$} ; \node (i) at (3,-.15) {$\underbrace{ \mathbb{D}elta_{j_{a-1}+1}{*}\cdots {*} \mathbb{D}elta_{j_a}}$}; \node (j) at (3,-2) {$\mathbb{D}elta'_{i_a}$} ; \node (k) at (3,-3.85){$\overbrace{ \mathbb{D}elta'_{i_{a-1}+1}{*}\cdots *\mathbb{D}elta'_{i_a} }$} ; \draw [->] (i) --(j) node[midway]{} ; \draw [->] (j) --(k) node[midway]{} ; \node (l) at (4.5,0){} ; \node (m) at (3.2,-1.9) {} ; \node (mm) at (3.2,-2.1) {} ; \node (n) at (4.5,-4) {} ; \draw [->] (l) to[out=0,in=0] node[midway,right]{\tiny${\cal A}_1$}(m) ;
\draw [->] (mm) to[out=0,in=0] node[midway,right]{\tiny${\cal A}^{-1}_2$}(n) ; \end{tikzpicture} \end{center} We denote the amalgamation $\cal A_{\mathtt{Simp}}gma = \cal A_2^{-1} \circ \cal A_1$, with $\cal A_1 = \cal A_{1,1} \circ \cdots \circ \cal A_{1,u}$ and $\cal A_2 = \cal A_{2,1} \circ \cdots \circ \cal A_{2,v}$. The elementary amalgamations $\cal A_{1,i}$ (resp. simple amalgamations $\cal A_{2,j}$) are written in a canonical way, going from the left to the right. \subsection{$\mu$-amalgamation} \label{defmu*} Let us also mention the amalgamation of $\mathbb{D}elta_{\mathtt{Simp}}gma$ collecting the whole filtration in one factor. We call it the \emph{$\mu$-amalgamation} $\cal A_{\mathtt{Simp}}gma''\colon \mathbb{D}elta_{\mathtt{Simp}}gma \to \mathbb{D}elta$. The induced map $\cal A''_{{\mathtt{Simp}}gma,*} \colon \hiru {\widetilde{N}} * {\mathbb{D}elta_{\mathtt{Simp}}gma} \to \hiru N * {\mathbb{D}elta}$ is defined by \begin{eqnarray*} \bigotimes_{i=0}^n (F_i,\varepsilon_i) &\rightsquigarrow& (F_{0} * \cdots *F_\ell,0) \otimes\bigotimes_{i> \ell}(F_i,\varepsilon_i)\rightsquigarrow \left\{ \begin{array}{cl} F_{0}\ast\dots\ast F_{\ell}&\text{if } |(F,\varepsilon)|_{>\ell} = 0\\ 0&\text{otherwise,} \end{array}\right. \end{eqnarray*} with $\varepsilon_n=0$, where $\ell$ is the smallest integer $j$ such that $\varepsilon_{j}=0$ (see \defref{61}). We recover the map $\mu_*$ of \eqref{AmuA}. We also define $\mu^* = \cal A_{\mathtt{Simp}}gma''^*$.
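As a sketch of ours (it is not in the original text), the previous formula can be made explicit on the filtered simplex $\mathbb{D}elta_{\mathtt{Simp}}gma=[a]\ast[b]$, whose blown-up chains are $\hiru N *{{\mathtt c} [a]}\otimes \hiru N *{[b]}$:

```latex
% Sketch (ours) of the \mu-amalgamation for [a] * [b]; the target is N_*([a,b]).
\begin{eqnarray*}
([a],1)\otimes [b] &\longmapsto& [a,b], \qquad (\ell = 1)\\
(\emptyset,1)\otimes [b] &\longmapsto& [b], \qquad\ \ (\ell = 1)\\
([a],0)\otimes [b] &\longmapsto& [a], \qquad\ \ (\ell = 0,\ |[b]| = 0).
\end{eqnarray*}
% In the first two lines the cone coordinate is switched on, so \ell = 1 and
% the whole join survives; in the last line \ell = 0 and the condition
% |(F,\varepsilon)|_{>0} = 0 holds because [b] is a vertex.
```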
With the previous notations, we notice the commutativity of the following diagram \begin{equation}\label{PropmuA} \xymatrix{ \mathbb{D}elta_{\mathtt{Simp}}gma \ar[r]^-{{\cal A}_{\mathtt{Simp}}gma } \ar[d]_{\cal A''_{\mathtt{Simp}}gma} & \mathbb{D}elta_{f\circ {\mathtt{Simp}}gma} \ar[ld]^-{\cal A''_{f \circ {\mathtt{Simp}}gma}} \\ \mathbb{D}elta & } \end{equation} If $\overline p$ is a perversity on $X$ and $\overline q$ is a perversity on $Y$ with $\overline p \leq f^* \overline q$, we have proved in \cite[Proposition 3.6]{CST3} that the map $f$ induces a homomorphism $f_* \colon \lau H {\overline p} *{X;R} \to \lau H{\overline q} *{Y;R} $. We define now the induced map in blown-up intersection cohomology and study its properties. \begin{definition}\label{local1} Let $f \colon X \to Y$ be a stratified map. The \emph{induced map} $f^* \colon \mathbb{H}iru {\widetilde{N}} * {Y;R} \to \mathbb{H}iru {\widetilde{N}} * {X;R} $ is the map defined by $ (f^* \omega)_{\mathtt{Simp}}gma = \cal A_{\mathtt{Simp}}gma^*\omega_{f \circ {\mathtt{Simp}}gma }, $ for each $\omega \in \mathbb{H}iru {\widetilde{N}} * {Y;R}$ and each regular simplex ${\mathtt{Simp}}gma \colon \mathbb{D}elta_{\mathtt{Simp}}gma \to X$. \end{definition} By {\rm pr}opref{CMAmal}(4), this definition makes sense. It generalizes the notion of induced map of {\rm pr}opref{prop:applistratifieeforte} (see \remref{rem:fortementstratifieetidentite}). \begin{theorem} \label{MorCoho} Let $f \colon X \to Y$ be a stratified map. \begin{enumerate}[(1)] \item The induced morphism $f ^*\colon \mathbb{H}iru {\widetilde{N}} * {Y;R} \to \mathbb{H}iru {\widetilde{N}} * {X;R}$ is a chain map. \item The map $f^*$ is compatible with the cup product.
\item The maps $f_*\colon \lau {\mathfrak{C}} {} * {X;R} \to \lau {\mathfrak{C}} {} * {Y;R}$ and $f ^*\colon \mathbb{H}iru {\widetilde{N}} * {Y;R} \to \mathbb{H}iru {\widetilde{N}} * {X;R}$ satisfy $$f_*(f^* \omega\cap \xi ) = \omega \cap f_* \xi$$ for each $\omega\in \lau {\widetilde{N}} * {} {Y;R}$ and each $\xi \in \lau {\mathfrak{C}} {} * {X;R}$. \item Let $\overline p$ be a perversity on $X$ and $\overline q$ a perversity on $Y$ such that $\overline p \geq f^*\overline q$. Then $f$ induces a chain map $f ^*\colon \lau {\widetilde{N}} * {\overline q} {Y;R} \to \lau {\widetilde{N}} * {\overline p} {X;R}$, thus a homomorphism $f ^*\colon \lau \mathscr H * {\overline q} {Y;R} \to \lau \mathscr H* {\overline p} {X;R}$. \end{enumerate} \end{theorem} Since a GM-perversity satisfies $\overline p\geq f^* \overline p $, the previous theorem adapts to this context. \begin{corollary} \label{GMp} Let $f \colon X \to Y$ be a stratified map and $\overline p$ a GM-perversity. Then $f$ induces a homomorphism $f ^*\colon \lau \mathscr H * {\overline p} {Y;R} \to \lau \mathscr H* {\overline p} {X;R}$ compatible with cup products. \end{corollary} The maps induced in cohomology and homology for a GM-perversity also satisfy condition (3) of \thmref{MorCoho}. \begin{proof}[Proof of \thmref{MorCoho}] Properties (1) and (2) are direct consequences of {\rm pr}opref{CMAmal}. (3) Notice first that $f_*\colon \lau {\mathfrak{C}} {} * {X;R} \to \lau {\mathfrak{C}} {} * {Y;R}$ is well defined since $f_*$ preserves filtered simplices (see \cite[Theorem F]{CST1}) and, $f$ being stratified, $f(X\backslash \Sigma_X) \subset Y \backslash \Sigma_Y$. We can suppose that $\xi$ is a regular simplex ${\mathtt{Simp}}gma \colon \mathbb{D}elta_{\mathtt{Simp}}gma \to X$.
Applying {\rm pr}opref{CMAmal}(2) repeatedly, we have $ \cal A_{{\mathtt{Simp}}gma,*}(\cal A ^*_{\mathtt{Simp}}gma \omega_{f\circ{\mathtt{Simp}}gma} \cap {\widetilde{\Delta}}_{\mathtt{Simp}}gma ) = \omega_{f\circ {\mathtt{Simp}}gma} \cap {\widetilde{\Delta}}_{f \circ {\mathtt{Simp}}gma}. $ The result follows from \begin{eqnarray*} f_*(f^*\omega \cap {\mathtt{Simp}}gma ) = f_* {\mathtt{Simp}}gma_* \mu_{\mathbb{D}elta_{\mathtt{Simp}}gma, *} ((f^*\omega )_{\mathtt{Simp}}gma \cap {\widetilde{\Delta}}_{\mathtt{Simp}}gma) = f_* {\mathtt{Simp}}gma_* \mu_{\mathbb{D}elta_{\mathtt{Simp}}gma, *} ( \cal A_{\mathtt{Simp}}gma^* \omega_{f\circ {\mathtt{Simp}}gma} \cap {\widetilde{\Delta}}_{\mathtt{Simp}}gma) &&\\ \stackrel{\eqref{PropmuA}}{=} f_* {\mathtt{Simp}}gma_* \mu_{\mathbb{D}elta_{f\circ{\mathtt{Simp}}gma}, *}\cal A_{{\mathtt{Simp}}gma ,*} ( \cal A_{\mathtt{Simp}}gma^* \omega_{f\circ {\mathtt{Simp}}gma} \cap {\widetilde{\Delta}}_{\mathtt{Simp}}gma) = (f \circ {\mathtt{Simp}}gma)_* \mu_{\mathbb{D}elta_{f\circ {\mathtt{Simp}}gma},*}( \omega_{f\circ {\mathtt{Simp}}gma} \cap {\widetilde{\Delta}}_{f \circ {\mathtt{Simp}}gma}) &&\\ =\omega \cap f_* ({\mathtt{Simp}}gma).&& \end{eqnarray*} (4) Let $\omega \in \lau {\widetilde{N}} * {\overline q} {Y;R} $. Let ${\mathtt{Simp}}gma \colon \mathbb{D}elta \to X$ be a regular simplex and $S$ a singular stratum of $X$ with ${\rm Im\,} {\mathtt{Simp}}gma \cap S\ne \emptyset$.
We have to prove that $$ \|\omega\|_{S^f} \leq \overline q(S^f) \Rightarrow \|f^*\omega\|_{S} \leq \overline p(S). $$ Since $\overline p \geq f^* \overline q$, it suffices to prove $ \|\cal A_{\mathtt{Simp}}gma^*\omega_{f \circ {\mathtt{Simp}}gma}\|_{{\rm codim\,} S} \leq \|\omega_{f \circ {\mathtt{Simp}}gma}\|_{{\rm codim\,} S^f}.$ Recall the decomposition, $ \cal A_{\mathtt{Simp}}gma=\cal A^{-1}_{2,v} \circ \dots \circ \cal A^{-1}_{2,1} \circ \cal A_{1,1} \circ \dots \circ \cal A_{1,u}, $ at the beginning of the section and denote by $a_{1,i}, a_{2,j}$ the associated index maps. We have $$ {\rm codim\,} S^f = (a^{-1}_{2,v} \circ \dots \circ a^{-1}_{2,1} \circ a_{1,1} \circ \dots \circ a_{1,u})({\rm codim\,} S) $$ (see \eqref{indexa}). Now, the result follows from {\rm pr}opref{CMAmal} (5). \end{proof} \part{Properties of the Blown-up intersection cohomology}\label{part:mayervietoris} We present the properties of the blown-up intersection cohomology used in this work. We end this part by comparing this cohomology with the intersection cohomology obtained from the dual cochains. \section{$\cal U$-small chains. \thmref{thm:Upetits}.}\label{sec:subdivision} We establish a theorem on $\cal U $-small filtered simplices, where $ \cal U $ is an open cover of $ X $ (\thmref{thm:Upetits}). Let $\mathbb{D}elta$ be a Euclidean simplex whose vertices are $e_{0},\dots,e_{m}$. Let $e_{\{i_{0}\dots i_{p}\}}$ be the barycenter of the simplex $[e_{i_{0}},\dots,e_{i_{p}}]$ and ${\rm Sub}\,\mathbb{D}elta$ the simplicial complex given by the barycentric subdivision of $\mathbb{D}elta$. The subdivision linear map, ${\rm Sub}_{*}\colon N_{*}(\mathbb{D}elta)\to N_{*}({\rm Sub}\,\mathbb{D}elta)$, is defined by ${\rm Sub}_{*} [e_{i_{0}},\dots,e_{i_{p}}] = (-1)^p ({\rm Sub}_{*}\partial [e_{i_{0}},\dots,e_{i_{p}}])\ast e_{\{i_{0}\dots i_{p}\}} $. Any face of $ {\rm Sub} \, \mathbb{D}elta $ appears at most once in the expression of $ {\rm Sub}_{*} $, and some faces do not appear at all.
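As an illustration of ours, the recursion determines ${\rm Sub}_{*}$ on a $1$-simplex as follows.

```latex
% Example (ours): the recursion applied to the 1-simplex [e_0, e_1].
% Since {\rm Sub}_* [e_i] = [e_{\{i\}}], one gets
\[
{\rm Sub}_{*}[e_{0},e_{1}]
 = -\bigl({\rm Sub}_{*}\partial [e_{0},e_{1}]\bigr)\ast e_{\{01\}}
 = [e_{\{0\}},e_{\{01\}}] - [e_{\{1\}},e_{\{01\}}],
\]
% the two half-edges of the barycentric subdivision; one checks directly
\[
\partial\,{\rm Sub}_{*}[e_{0},e_{1}]
 = [e_{\{1\}}] - [e_{\{0\}}]
 = {\rm Sub}_{*}\partial [e_{0},e_{1}].
\]
```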
\begin{definition}\label{def:simplexcomplet} An $\ell$-simplex of ${\rm Sub}\,\mathbb{D}elta$ is a \emph{subdividing simplex} if it appears in the subdivision of an $\ell$-simplex of $\mathbb{D}elta$. \end{definition} From the definition of $ {\rm Sub}_{*} $ we know that the subdividing simplices of $ {\rm Sub} \, \mathbb{D}elta $ are the simplices of the form $$\nabla=[e_{i_{0}},e_{\{i_{0}i_{1}\}},e_{\{i_{0}i_{1}i_{2}\}},\dots,e_{\{i_{0}\dots i_{p}\}}].$$ The vertex $e_{i_{0}}$ is the \emph{first vertex} and the vertex $e_{i_{p}}$ is the \emph{last vertex of $\nabla$}. We write $e_{i_{p}}={\mathtt{der}\,}\nabla$. The transpose map of ${\rm Sub}_{*}$ is denoted by ${\rm Sub}^*\colon \mathbb{H}iru N *{{\rm Sub}\,\mathbb{D}elta} \to \mathbb{H}iru N *{\mathbb{D}elta}$. It is defined on the subdividing simplices by $${{\rm Sub}^*} ({\boldsymbol 1}_{[e_{i_{0}},e_{\{i_{0}i_{1}\}},e_{\{i_{0}i_{1}i_{2}\}},\dots,e_{\{i_{0}\dots i_{p}\}}]})= {\boldsymbol 1}_{[e_{i_{0}},\dots,e_{i_{p}}]},$$ and by 0 on the other simplices. Since the linear map ${{\rm Sub}_{*}}$ is compatible with the differentials, ${{\rm Sub}^*}$ is also a cochain map. To construct a homotopy between the barycentric subdivision operator and the identity, we consider the simplicial complex $ K (\mathbb{D}elta) $, whose simplices are the joins, $F \ast G$, of a simplex $ F $ of $ \mathbb{D}elta $ and of a simplex $ G $ of $ {\rm Sub} \, \mathbb{D}elta $, such that: if $F=[e_{i_{0}},\dots,e_{i_{p}}]$ and $G=[e_{J_{0}},\dots,e_{J_{k}}]$, then $\{i_{0},\dots,i_{p}\}\subset J_{\ell}$, for each $\ell\in\{0,\dots,k\}$. In addition, the simplices $ F $ and $ G $ can be empty, but not simultaneously, making $ \mathbb{D}elta $ and $ {\rm Sub} \, \mathbb{D}elta $ two sub-complexes of $ K ( \mathbb{D}elta) $.
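For instance (an example of ours, with $\mathbb{D}elta=[e_{0},e_{1}]$), the membership condition for $K(\mathbb{D}elta)$ reads as follows.

```latex
% Example (ours) for \Delta = [e_0, e_1]:
\[
[e_{0}]\ast [e_{\{0\}},e_{\{01\}}]\ {\vartriangleleft}\ K(\mathbb{D}elta),
\qquad
[e_{0}]\ast [e_{\{01\}}]\ {\vartriangleleft}\ K(\mathbb{D}elta),
\qquad
[e_{1}]\ast [e_{\{0\}}]\ \not\!\!{\vartriangleleft}\ K(\mathbb{D}elta),
\]
% the first two joins because \{0\} \subset \{0\} and \{0\} \subset \{0,1\},
% the third because \{1\} \not\subset \{0\}.
```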
The homotopy operator, $T\colon \hiru N {*}{\mathbb{D}elta}\to \hiru N {*+1}{K(\mathbb{D}elta)}$, defined by \\ $T[e_{i_{0}},\dots,e_{i_{p}}]= (-1)^{p+1}\left( [e_{i_{0}},\dots,e_{i_{p}}] -T\partial [e_{i_{0}},\dots,e_{i_{p}}]\right) \ast e_{\{i_{0}\dots i_{p}\}} $, satisfies $$(\partial T+T\partial)(F)=F- {\rm Sub}_{*}(F).$$ In particular, $T$ takes the values $T(e_{0})=-e_{0}\ast e_{\{0\}}$ and $T[e_{0},e_{1}] = [e_{0}e_{1}]\ast e_{\{01\}} -e_{0}\ast [e_{\{0\}},e_{\{0,1\}}]+e_{1}\ast [e_{\{1\}},e_{\{0,1\}}]$. \begin{definition}\label{def:facecompleteK} A \emph{full simplex} of $K(\mathbb{D}elta)$ is a simplex of the form $$F\ast G = [e_{i_{0}},\dots,e_{i_{p}}]\ast [e_{\{i_{0}\dots i_{p}\}}, e_{\{i_{0}\dots i_{p}i_{p+1}\}},\dots, e_{\{i_{0}\dots i_{p}\dots i_{p+r}\}}].$$ The vertex $e_{i_{p+r}}$ is the \emph{last vertex} of $F\ast G$ and denoted by ${\mathtt{der}\,}(F\ast G)$. \end{definition} Note that if $ F \ast G$ is full and $ p = 0$, then $ G $ is subdividing in the sense of \defref{def:simplexcomplet}. Endow the simplex $\mathbb{D}elta$ with a filtration, $\mathbb{D}elta=\mathbb{D}elta_{0}\ast\dots\ast\mathbb{D}elta_{n}$, compatible with the order considered on the vertices, cf. (\ref{equa:ordresommets}). The simplicial complex \emph{${\rm Sub}\,\mathbb{D}elta$ is a weighted simplicial complex,} associating to the vertex $e_{\{i_{0}\dots i_{p}\}}$ the weight $\ell$ if we have $[e_{i_{0}},\dots,e_{i_{p}}]{\vartriangleleft} \mathbb{D}elta_{0}\ast \dots\ast \mathbb{D}elta_{{\ell}}$ and $[e_{i_{0}},\dots,e_{i_{p}}]\not\!\!{\vartriangleleft} \mathbb{D}elta_{0}\ast \dots\ast \mathbb{D}elta_{{\ell}-1}$. Since the family of the vertices of $K(\mathbb{D}elta)$ is the union of those of $\mathbb{D}elta$ and those of ${\rm Sub}\,\mathbb{D}elta$, the complex $K(\mathbb{D}elta)$ is also a weighted simplicial complex.
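As a small example of ours (the weight function $w$ below is our ad hoc notation, not used elsewhere in the text), the weights can be computed for a filtered triangle.

```latex
% Example (ours): weights of the vertices of Sub \Delta for the filtered
% simplex \Delta = [e_0] * [e_1, e_2].
\[
w(e_{\{0\}}) = 0, \qquad
w(e_{\{1\}}) = w(e_{\{2\}}) = w(e_{\{01\}}) = w(e_{\{02\}})
 = w(e_{\{12\}}) = w(e_{\{012\}}) = 1,
\]
% since [e_0] \vartriangleleft \Delta_0, while every other face involved
% requires the full join \Delta_0 * \Delta_1.
```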
The chosen canonical basis of ${\widetilde{N}}^*(K(\mathbb{D}elta))$ is composed of elements of the form $${\boldsymbol 1}_{(F\ast G,\varepsilon)}={\boldsymbol 1}_{(F_{0},\varepsilon_{0})} \otimes\dots\otimes {\boldsymbol 1}_{(F_{q}\ast G_{q},\varepsilon_{q})} \otimes\dots\otimes {\boldsymbol 1}_{(G_{n-1},\varepsilon_{n-1})} \otimes {\boldsymbol 1}_{G_{n}},$$ with $F=F_{0}\ast\dots\ast F_{q}{\vartriangleleft} \mathbb{D}elta$ and $G=G_{q}\ast\dots\ast G_{n}{\vartriangleleft} {\rm Sub}\,\mathbb{D}elta$. \begin{remark}\label{rem:subplouf} The linear operator $ {\rm Sub}^*$ does not necessarily respect the filtered degree of the simplices. Consider $ \mathbb{D}elta = [e_{0}] \ast [e_{1}, e_{2}] $. The previous computations give ${\rm Sub}^*({\boldsymbol 1}_{[e_{1},e_{\{0,1\}}]})={\boldsymbol 1}_{[e_{1},e_{0}]}$. Taking into account the filtration defined on $ {\rm Sub} \, \mathbb{D}elta $, one has $[e_{1},e_{\{0,1\}}]=\emptyset\ast [e_{1},e_{\{0,1\}}]$ and $[e_{1}, e_{0}]=-[e_{0}]\ast [e_{1}]$, from which one gets ${\rm Sub}^*({\boldsymbol 1}_{\emptyset\ast [e_{1},e_{\{0,1\}}]})=-{\boldsymbol 1}_{[e_{0}]\ast [e_{1}]}$. More generally, let $\mathbb{D}elta=\mathbb{D}elta_{0}\ast\dots\ast\mathbb{D}elta_{n}$ and let $\nabla=\nabla_{0}\ast\dots\ast\nabla_{n}{\vartriangleleft} {\rm Sub}\,\mathbb{D}elta$. The previous example shows that the use of ${\rm Sub}^*$ for the construction of a cochain morphism, $${\widetilde{{\rm Sub}\,}}\colon N^*({\mathtt c}\nabla_{0})\otimes \dots\otimes N^*({\mathtt c}\nabla_{n-1})\otimes N^*(\nabla_{n}) \to N^*({\mathtt c}\mathbb{D}elta_{0})\otimes \dots\otimes N^*({\mathtt c}\mathbb{D}elta_{n-1})\otimes N^*(\mathbb{D}elta_{n}),$$ requires a reordering of the vertices according to the filtrations of $\mathbb{D}elta$ and ${\rm Sub}\,\mathbb{D}elta$.
To avoid this difficulty, we define below the morphism ${\widetilde{{\rm Sub}\,}}_{\mathbb{D}elta}\colon \mathbb{H}iru {\widetilde{N}}*{{\rm Sub}\,\mathbb{D}elta}\to \mathbb{H}iru {\widetilde{N}}*\mathbb{D}elta$ and the homotopy ${\widetilde{\mathrm{T}}}_{\mathbb{D}elta}\colon \mathbb{H}iru {\widetilde{N}}* {K(\mathbb{D}elta)}\to \mathbb{H}iru {\widetilde{N}}{*-1}\mathbb{D}elta$ by induction. \end{remark} The two canonical injections of $\mathbb{D}elta$ and ${\rm Sub}\,\mathbb{D}elta$ into $K(\mathbb{D}elta)$ induce the chain complex epimorphisms, \begin{itemize} \item $\iota^*_{\mathbb{D}elta}\colon \mathbb{H}iru {\widetilde{N}}*{K(\mathbb{D}elta)}\to {\widetilde{N}}^*(\mathbb{D}elta)$, defined by the identity on the elements ${\boldsymbol 1}_{(F,\varepsilon)}$ with $F{\vartriangleleft} \mathbb{D}elta$ and by 0 elsewhere, \item $\iota^*_{{\rm Sub}\,\mathbb{D}elta}\colon {\widetilde{N}}^*(K(\mathbb{D}elta))\to \mathbb{H}iru {\widetilde{N}}*{{\rm Sub}\,\mathbb{D}elta}$, defined by the identity on the elements ${\boldsymbol 1}_{(G,\varepsilon)}$ with $G{\vartriangleleft} {\rm Sub}\,\mathbb{D}elta$ and by 0 elsewhere.
\end{itemize} \begin{lemma}\label{lem:subethomotopie} There exist a linear map, ${\widetilde{\mathrm{T}}}_{\Delta}\colon {\widetilde{N}}^*(K(\Delta))\to {\widetilde{N}}^{*-1}(\Delta)$, and a morphism of cochain complexes, ${\widetilde{{\rm Sub}\,}}_{\Delta}\colon {\widetilde{N}}^*({\rm Sub}\,\Delta)\to {\widetilde{N}}^*(\Delta)$, verifying \begin{equation}\label{equa:cequilfaut} {\widetilde{\mathrm{T}}}_{\Delta}\circ {\widetilde{\delta}}^{K(\Delta)}+{\widetilde{\delta}}^{\Delta}\circ {\widetilde{\mathrm{T}}}_{\Delta}= \iota^*_{\Delta}-{\widetilde{{\rm Sub}\,}}_{\Delta}\circ\iota^*_{{\rm Sub}\,\Delta}, \end{equation} where ${\widetilde{\delta}}^{K(\Delta)}$ and ${\widetilde{\delta}}^{\Delta}$ are the differentials on ${\widetilde{N}}^*(K(\Delta))$ and ${\widetilde{N}}^*(\Delta)$. \end{lemma} \subsection{Construction of ${\widetilde{{\rm Sub}\,}}_{\Delta}$ and ${\widetilde{\mathrm{T}}}_{\Delta}$} The homotopy ${\widetilde{\mathrm{T}}}_{\Delta}$ is constructed by induction at the level of the full blow-ups, ${\widetilde{N}}^{{\boldsymbol all},*}(K(\Delta))$ and ${\widetilde{N}}^{{\boldsymbol all},*-1}(\Delta)$, specifying its value on the elements ${\boldsymbol 1}_{(F\ast G,\varepsilon)}$, with $F{\vartriangleleft} \Delta$ and $G{\vartriangleleft} {\rm Sub}\,\Delta$. We set ${\widetilde{\mathrm{T}}}_{\Delta}({\boldsymbol 1}_{(F\ast G,\varepsilon)})=0$ if $F\ast G$ is not full, in the sense of \defref{def:facecompleteK}.
Now, let $${\boldsymbol 1}_{(F\ast G,\varepsilon)}={\boldsymbol 1}_{(F_{0},\varepsilon_{0})} \otimes\dots\otimes {\boldsymbol 1}_{(F_{q}\ast G_{q},\varepsilon_{q})} \otimes\dots\otimes {\boldsymbol 1}_{(G_{n-1},\varepsilon_{n-1})} \otimes {\boldsymbol 1}_{(G_{n},\varepsilon_{n})},$$ with $F\ast G$ full. We use a double induction, on the length $n$ of the filtration of $\Delta$ and on the dimension of the component $(G_{n},\varepsilon_{n})$. We set $\Delta=\nabla\ast \Delta_{n}$. \begin{enumerate}[a)] \item {If $\dim (G_{n},\varepsilon_{n})=0$,} we distinguish three cases. \begin{enumerate}[i)] \item If $G_{n}=\emptyset$ and $\varepsilon_{n}=1$, then $F\ast G{\vartriangleleft} \nabla$ and we set: \begin{equation}\label{equa:homotopiedemarragevide} {\widetilde{\mathrm{T}}}_{\Delta}({\boldsymbol 1}_{(F\ast G,\varepsilon)}\otimes {\boldsymbol 1}_{(\emptyset,1)})= {\widetilde{\mathrm{T}}}_{\nabla}({\boldsymbol 1}_{(F\ast G,\varepsilon)})\otimes {\boldsymbol 1}_{(\emptyset,1)}. \end{equation} \item If $\dim G_{n}=0$, $\varepsilon_{n}=0$ and $q<n$, then $G_{n}=[e_{J}]$, where $e_{J}$ is the barycenter of a face of $\Delta$. Let $e_{\alpha}={\mathtt{der}\,} (G_{q}\ast\dots\ast G_{n})$ be the last vertex of the face $G$ and $G'=G_{q}\ast\dots\ast G_{n-1}$. (Notice $e_{\alpha}\in\Delta_{n}$, otherwise the weight of the vertex $e_{J}$ of ${\rm Sub}\,\Delta$ would be $n-1$.) We set: \begin{equation}\label{equa:homotopiedim0} {\widetilde{\mathrm{T}}}_{\Delta}({\boldsymbol 1}_{(F\ast G',\varepsilon)}\otimes {\boldsymbol 1}_{([e_{J}],0)})= {\widetilde{\mathrm{T}}}_{\nabla}({\boldsymbol 1}_{(F\ast G',\varepsilon)}) \otimes {\boldsymbol 1}_{([e_{\alpha}],0)}. \end{equation} \item If $\dim G_{n}=0$, $\varepsilon_{n}=0$ and $q=n$, then $G=G_{n}=[e_{\{F\}}]$ is reduced to the barycenter of the face $F$.
(Notice $F_{n}\neq \emptyset$, otherwise the weight of the barycenter of $F$ could not be $n$ in ${\rm Sub}\,\Delta$.) We set: \begin{equation}\label{equa:homotopiedemarrage} {\widetilde{\mathrm{T}}}_{\Delta}({\boldsymbol 1}_{(F,\varepsilon)}\ast e_{\{F\}})= (-1)^{|(F,\varepsilon)|} {\boldsymbol 1}_{(F,\varepsilon)}. \end{equation} \end{enumerate} \item If $\dim (G_{n},\varepsilon_{n})\geq 1$, then $G_{n}\neq\emptyset$ and we let $e_{\alpha}={\mathtt{der}\,} (G_{q}\ast\dots\ast G_{n})$ be the last vertex of the face $G$. We write $G_{n}=G'_{n}\ast e_{J}$, so that $G'=G_{q}\ast \dots\ast G_{n-1}\ast G'_{n}$ is subdividing and $e_{\alpha}$ is a vertex of the face $J$ whose barycenter is $e_{J}$. We set: \begin{equation}\label{equa:homotopiedimqcq} {\widetilde{\mathrm{T}}}_{\Delta}({\boldsymbol 1}_{(F\ast G',\varepsilon)}\ast e_{J})= {\widetilde{\mathrm{T}}}_{\Delta}({\boldsymbol 1}_{(F\ast G',\varepsilon)})\ast e_{\alpha}. \end{equation} \end{enumerate} The construction of ${\widetilde{\mathrm{T}}}_{\Delta}$ does not change the value of $\varepsilon_{n}$; moreover, the image of an element ${\boldsymbol 1}_{(F\ast G,\varepsilon)}$ with $\varepsilon_{n}=0$ and $G_{n}\neq\emptyset$ also has a non-empty component in $\Delta_{n}$. We have thus built a linear map $$ {\widetilde{\mathrm{T}}}_{\Delta}\colon {\widetilde{N}}^*(K(\Delta))\to {\widetilde{N}}^{*-1}(\Delta).
$$ We construct the map ${\widetilde{{\rm Sub}\,}}_{\Delta}\colon {\widetilde{N}}^*({\rm Sub}\,\Delta)\to {\widetilde{N}}^*(\Delta)$ from ${\widetilde{\mathrm{T}}}_{\Delta}$, by $${\widetilde{{\rm Sub}\,}}_{\Delta} = -{\widetilde{\mathrm{T}}}_{\Delta}\circ {\widetilde{\delta}}^{K(\Delta)}.$$ \begin{remark}\label{rem:pleinetsommets} To any simplex, $F\ast G$, of $K(\Delta)$, we associate a family of vertices, $\cal V_{\Delta}(F\ast G)$, consisting of the vertices of $F$ and the vertices of the faces of $\Delta$ having a vertex of $G$ as barycenter. Adding the virtual vertices, we set $\cal V_{\Delta}({\boldsymbol 1}_{(F\ast G,\varepsilon)})= \cal V_{\Delta}(F\ast G,\varepsilon)$. By construction of ${\widetilde{\mathrm{T}}}_{\Delta}$, we have $$\cal V_{\Delta}({\boldsymbol 1}_{(F\ast G,\varepsilon)})=\cal V_{\Delta}({\widetilde{\mathrm{T}}}_{\Delta}({\boldsymbol 1}_{(F\ast G,\varepsilon)})).$$ \end{remark} The proof of \lemref{lem:subethomotopie} is postponed to \pagref{subsec:constructionhomotopie}. First, we take advantage of this result in terms of the blown-up intersection cohomology. Let $(X,\overline{p})$ be a perverse space. Recall that a cochain $\omega\in \lau {\widetilde{N}}* {\overline{p}}{X;R}$ associates to any regular filtered simplex $\sigma\colon \Delta\to X$ a cochain $\omega_{\sigma}\in \tres {\widetilde{N}}*{\sigma}$ and verifies $\|\omega\|\leq \overline{p}$, $\|\delta\omega\|\leq\overline{p}$ and $\delta^*_{\ell}(\omega_{\sigma})=\omega_{\partial_{\ell}\sigma}$, for any regular face operator, $\delta_{\ell}$. \begin{definition}\label{def:Upetit} Let $\cal U$ be an open cover of $X$.
A \emph{$\cal U$-small simplex} is a regular simplex, $\sigma\colon \Delta=\Delta_{0}\ast\dots\ast\Delta_{n}\to X$, such that there exists $U\in\cal U$ with ${\rm Im\,}\sigma\subset U$. The family of $\cal U$-small simplices is denoted by ${\mathtt{Simp}}_{\cal U}$. The \emph{blown-up complex of $\cal U$-small cochains of $X$ with coefficients in $R$,} written ${\widetilde{N}}^{*,\cal U}(X;R)$, is the cochain complex made up of elements $\omega$, associating to any $\cal U$-small simplex, $\sigma\colon\Delta= \Delta_{0}\ast\dots\ast\Delta_{n}\to X$, an element $\omega_{\sigma}\in {\widetilde{N}}^*(\Delta)$, so that $\delta_{\ell}^*(\omega_{\sigma})=\omega_{\partial_{\ell}\sigma}$, for any face operator, $\delta_{\ell}\colon \Delta'_{0}\ast\dots\ast\Delta'_{n}\to \Delta_{0}\ast\dots\ast\Delta_{n}$, with $\Delta'_{n}\neq\emptyset$. If $\overline{p}$ is a perversity on $X$, we denote by $\lau {\widetilde{N}} {*,\cal U}{\overline{p}}{X;R}$ the complex of $\cal U$-small cochains verifying $\|\omega\|\leq \overline{p}$ and $\|\delta \omega\|\leq\overline{p}$. \end{definition} The following theorem compares the complexes $\lau {\widetilde{N}} * {\overline{p}}{X;R}$ and $\lau {\widetilde{N}} {*,\cal U} {\overline{p}}{X;R}$. \begin{theorem}\label{thm:Upetits} Let $(X,\overline{p})$ be a perverse space endowed with an open cover, $\cal U$. Let $\rho_{\cal U}\colon \lau {\widetilde{N}} {*}{\overline{p}}{X;R}\to \lau {\widetilde{N}} {*,\cal U}{\overline{p}}{X;R}$ be the restriction map. The following properties are verified.
\begin{enumerate}[\rm (i)] \item There exist a cochain map, $\varphi_{\cal U}\colon \lau {\widetilde{N}} {*,\cal U} {\overline{p}}{X;R}\to \lau {\widetilde{N}} *{\overline{p}}{X;R}$, and a homotopy, \\ $\Theta\colon \lau {\widetilde{N}}*{\overline{p}}{X;R}\to\lau {\widetilde{N}}{*-1}{\overline{p}}{X;R}$, such that $$\rho_{\cal U}\circ \varphi_{\cal U}={\rm id} \text{ and } \delta\circ\Theta+\Theta\circ\delta={\rm id} -\varphi_{\cal U}\circ\rho_{\cal U}. $$ \item Furthermore, if the cochain $\omega \in {\widetilde{N}}^{*,\cal U}_{\overline{p}}(X;R)$ is such that there exists a subset $K\subset X$ for which $\omega_{\sigma}=0$ if $({\rm Im\,}\sigma)\cap K=\emptyset$, then $\varphi_{\cal U}(\omega)$ also verifies $(\varphi_{\cal U}(\omega))_{\sigma}=0$ if $({\rm Im\,}\sigma)\cap K=\emptyset$. \end{enumerate} \end{theorem} As an immediate consequence of property (i), we get the following corollary. \begin{corollary}\label{cor:Upetits} The restriction map, $\rho_{\cal U}\colon \lau {\widetilde{N}}{*}{\overline{p}}{X;R}\to \lau{\widetilde{N}}{*,\cal U}{\overline{p}}{X;R}$, is a quasi-isomor\-phism. \end{corollary} We first establish the existence of a subdivision at the level of the blown-up complex. \begin{proposition}\label{prop:wsubX} Let $X$ be a filtered space. The maps ${\widetilde{\mathrm{T}}}_{\Delta}$ and ${\widetilde{{\rm Sub}\,}}_{\Delta}$ extend as ${\widetilde{\mathrm{T}}}\colon {\widetilde{N}}^*(X;R)\to {\widetilde{N}}^{*-1}(X;R)$ and ${\widetilde{{\rm Sub}\,}}\colon {\widetilde{N}}^*(X;R)\to {\widetilde{N}}^*(X;R)$, verifying \begin{equation}\label{equa:homotopiesub} {\widetilde{\mathrm{T}}}\circ \delta+\delta\circ {\widetilde{\mathrm{T}}}={\rm id} -{\widetilde{{\rm Sub}\,}}.
\end{equation} \end{proposition} \begin{proof} We first construct a homotopy, ${\widetilde{\mathrm{T}}}\colon {\widetilde{N}}^*(X;R)\to {\widetilde{N}}^{*-1}(X;R)$, from the maps ${\widetilde{\mathrm{T}}}_{\Delta}$ previously defined. Let $\sigma\colon\Delta\to X$ be a filtered simplex and $\omega\in {\widetilde{N}}^{k}(X;R)$. A simplex $F\ast G$ of $K(\Delta)$ can be described by its vertices, $F\ast G=[e_{i_{0}},\dots,e_{i_{r}},e_{\{F_{0}\}},\dots,e_{\{F_{s}\}}]$, where $e_{j}$ is a vertex of $\Delta$ and $e_{\{F_{j}\}}\in\cal V({\rm Sub}\,\Delta)$ is the barycenter of the face $F_{j}{\vartriangleleft} \Delta$. It follows that $e_{\{F_{j}\}}\in \Delta$, and we define a filtered simplex of $X$, $\sigma_{_{F\ast G}}\colon F\ast G\to X$, using the barycentric coordinates, by \begin{equation}\label{equa:sigmaFG} \sigma_{_{F\ast G}}(\sum_{j}t_{j}e_{j}+\sum_{\ell}u_{\ell}e_{\{F_{\ell}\}}) = \sigma(\sum_{j}t_{j}e_{j}+\sum_{\ell}u_{\ell}e_{\{F_{\ell}\}}). \end{equation} Thanks to \propref{prop:pasdeface}, we define $\omega_{K(\Delta)}\in {\widetilde{N}}^{k}(K(\Delta))$ by $$\omega_{K(\Delta)}= \sum_{\substack{F\ast G{\vartriangleleft} K(\Delta) \\ |(F\ast G,\varepsilon)|=k\phantom{-}}} \omega_{\sigma_{_{F\ast G}}}(F\ast G,\varepsilon) \,{\boldsymbol 1}_{(F\ast G,\varepsilon)}. $$ Now write $$({\widetilde{\mathrm{T}}}(\omega))_{\sigma}={\widetilde{\mathrm{T}}}_{\Delta}(\omega_{K(\Delta)})$$ and let us verify that we obtain an element of ${\widetilde{N}}^*(X;R)$.
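To fix ideas, here is a purely illustrative unwinding in the smallest situation; the simplex chosen below does not come from the text and the specialization is only sketched. If $\sigma$ has trivially filtered domain $\Delta=\Delta_{0}=[e_{0},e_{1}]$ (so $n=0$ and the blow-up carries no virtual vertices), then ${\rm Sub}\,\Delta$ has vertices $e_{0}$, $e_{\{0,1\}}$, $e_{1}$ and edges $[e_{0},e_{\{0,1\}}]$, $[e_{\{0,1\}},e_{1}]$, and for $\omega\in{\widetilde{N}}^{1}(X;R)$ the formula above reduces to $$\omega_{K(\Delta)}= \sum_{\substack{F\ast G{\vartriangleleft} K(\Delta)\\ \dim F\ast G=1}} \omega_{\sigma_{_{F\ast G}}}(F\ast G)\,{\boldsymbol 1}_{F\ast G},$$ the coefficient of each edge of $K(\Delta)$ being the evaluation of the cochain associated with the corresponding restriction of $\sigma$.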
Let $\sigma\colon\Delta=\Delta_{0}\ast\dots\ast\Delta_{n}\to X$ be a regular simplex and $\tau=\sigma\circ\delta_{\ell}\colon\nabla\to X$, where $\delta_{\ell}\colon \nabla\to\Delta$ is a regular face operator. Since $\omega_{K(\Delta)}\in {\widetilde{N}}^{k}(K(\Delta))$, we have $\delta_{\ell}^*\,\omega_{K(\Delta)}=\omega_{K(\nabla)}$. The maps ${\delta}^*_{\ell}\colon {\widetilde{N}}^*(\Delta)\to{\widetilde{N}}^*(\nabla)$ and ${\delta}^*_{\ell}\colon {\widetilde{N}}^*(K(\Delta))\to {\widetilde{N}}^*(K(\nabla))$ are defined by $$ {\delta}^*_{\ell}({\boldsymbol 1}_{(F,\varepsilon)})= \left\{\begin{array}{cl} {\boldsymbol 1}_{(F,\varepsilon)} &\text{if}\; F{\vartriangleleft}\nabla,\\ 0&\text{otherwise,} \end{array}\right. \text{and } {\delta}^*_{\ell}({\boldsymbol 1}_{(F*G,\varepsilon)})= \left\{\begin{array}{cl} {\boldsymbol 1}_{(F*G,\varepsilon)} &\text{if}\; F*G{\vartriangleleft} K(\nabla),\\ 0&\text{otherwise.} \end{array}\right.
$$ From these calculations and from \remref{rem:pleinetsommets}, the commutativity of the following diagram follows directly, \begin{equation*}\label{equa:biendefini} \xymatrix{ {\widetilde{N}}^{*}(K(\Delta))\ar[rr]^-{{\widetilde{\mathrm{T}}}_{\Delta}} \ar[d]_{{\delta}^*_{\ell}} && {\widetilde{N}}^{*-1}(\Delta) \ar[d]^{{\delta}^*_{\ell}} \\ {\widetilde{N}}^{*}(K(\nabla)) \ar[rr]^-{{\widetilde{\mathrm{T}}}_{\nabla}} && {\widetilde{N}}^{*-1}(\nabla), } \end{equation*} from which we deduce $ \delta^*_{\ell}{\widetilde{\mathrm{T}}}(\omega)_{\sigma} =\delta^*_{\ell}{\widetilde{\mathrm{T}}}_{\Delta}(\omega_{K(\Delta)})= {\widetilde{\mathrm{T}}}_{\nabla}(\delta^*_{\ell}\omega_{K(\Delta)})= {\widetilde{\mathrm{T}}}_{\nabla}(\omega_{K(\nabla)}) = {\widetilde{\mathrm{T}}}(\omega)_{\tau}. $ Thus the map ${\widetilde{\mathrm{T}}}\colon {\widetilde{N}}^{*}(X;R)\to {\widetilde{N}}^{*-1}(X;R)$ is well defined. We now proceed to the construction of ${\widetilde{{\rm Sub}\,}}\colon {\widetilde{N}}^{*}(X;R)\to {\widetilde{N}}^{*}(X;R)$ from the operators ${\widetilde{{\rm Sub}\,}}_{\Delta}$ of \lemref{lem:subethomotopie}. Let $\sigma\colon\Delta\to X$ be a regular simplex and let $\omega\in{\widetilde{N}}^{k}(X;R)$. If $G$ is a face of ${\rm Sub}\,\Delta$, we denote by $\iota_{G}\colon G\to \Delta$ the canonical injection and we define (cf.
\propref{prop:pasdeface}) \begin{equation*}\label{equa:leSub} \omega_{{\rm Sub}\,\Delta}= \sum_{\substack{G{\vartriangleleft} {\rm Sub}\,\Delta \\ |(G,\varepsilon)|=k\phantom{-}}} \omega_{\sigma\circ\iota_{G}}(G,\varepsilon){\boldsymbol 1}_{(G,\varepsilon)} \quad\text{and}\quad ({\widetilde{{\rm Sub}\,}} \omega)_{\sigma}= {\widetilde{{\rm Sub}\,}}_{\Delta}(\omega_{{\rm Sub}\,\Delta}). \end{equation*} Notice \begin{eqnarray*} \iota^*_{{\rm Sub}\,\Delta}\left(\omega_{K(\Delta)}\right) &=& \sum_{\substack{F\ast G{\vartriangleleft} K(\Delta) \\ | (F\ast G,\varepsilon)|=k\phantom{-}}} \omega_{\sigma_{_{F\ast G}}}(F\ast G,\varepsilon)\, \iota^*_{{\rm Sub}\,\Delta}\left({\boldsymbol 1}_{(F\ast G,\varepsilon)}\right) \\ &=& \sum_{\substack{G{\vartriangleleft} {\rm Sub}\,\Delta \\ |(G,\varepsilon)|=k\phantom{-}}} \omega_{\sigma\circ\iota_{G}}(G,\varepsilon){\boldsymbol 1}_{(G,\varepsilon)} = \omega_{{\rm Sub}\,\Delta}. \end{eqnarray*} We also have $\iota^*_{\Delta}\left(\omega_{K(\Delta)}\right) = \omega_{\sigma}$.
It follows, by using \lemref{lem:subethomotopie}, \begin{eqnarray*} \omega_{\sigma}- ({\widetilde{\mathrm{T}}}\delta\omega)_{\sigma}-(\delta{\widetilde{\mathrm{T}}}\omega)_{\sigma} &=& \iota^*_{\Delta}\left(\omega_{K(\Delta)}\right) -{\widetilde{\mathrm{T}}}_{\Delta}\left({\widetilde{\delta}}^{K(\Delta)}\omega_{K(\Delta)}\right) -{\widetilde{\delta}}^{\Delta}{\widetilde{\mathrm{T}}}_{\Delta}\left(\omega_{K(\Delta)}\right)\\ &=& {\widetilde{{\rm Sub}\,}}_{\Delta}\left(\iota^*_{{\rm Sub}\,\Delta}(\omega_{K(\Delta)})\right) = {\widetilde{{\rm Sub}\,}}_{\Delta}\left(\omega_{{\rm Sub}\,\Delta}\right) = \left({\widetilde{{\rm Sub}\,}}(\omega)\right)_{\sigma}. \end{eqnarray*} We have shown that ${\widetilde{{\rm Sub}\,}}(\omega)\in {\widetilde{N}}^{*}(X;R)$ and the equality ${\widetilde{\mathrm{T}}}\circ \delta+\delta\circ{\widetilde{\mathrm{T}}}={\rm id}-{\widetilde{{\rm Sub}\,}}$. \end{proof} \begin{proof}[Proof of \thmref{thm:Upetits}] (i) We prove $\|{\widetilde{\mathrm{T}}}(\omega)\|\leq \|\omega\|$ and $\|{\widetilde{{\rm Sub}\,}}(\omega)\|\leq \|\omega\|$ by working on the chosen basis and the various possible cases. Making reference to \defref{def:degrepervers}, since the maps ${\widetilde{\mathrm{T}}}$ and ${\widetilde{{\rm Sub}\,}}$ do not modify the value of the parameter $\varepsilon$, we can limit ourselves to the case $\varepsilon_{n-\ell}=0$, with $\ell\in \{1,\dots,n\}$. We proceed by induction on the complex ${\widetilde{N}}^{{\boldsymbol all}}(\Delta)$, the case $n=0$ being obvious. With the notations used in the construction of ${\widetilde{\mathrm{T}}}$, the following properties are verified.
\begin{itemize} \item (\ref{equa:homotopiedemarragevide}): if $G_{n}=\emptyset$ and $\varepsilon_{n}=1$, the desired inequality comes from \begin{eqnarray*} \|{\widetilde{\mathrm{T}}}_{\Delta}({\boldsymbol 1}_{(F\ast G,\varepsilon)})\otimes {\boldsymbol 1}_{(\emptyset,1)}\|_{\ell} &=& \left\{ \begin{array}{cl} 0&\text{if }\ell=1,\\ \|{\widetilde{\mathrm{T}}}_{\Delta}({\boldsymbol 1}_{(F\ast G,\varepsilon)})\|_{\ell-1} &\text{if } \ell>1, \end{array}\right.\\ &\leq& \left\{ \begin{array}{cl} 0&\text{if }\ell=1,\\ \|{\boldsymbol 1}_{(F\ast G,\varepsilon)}\|_{\ell-1} &\text{if } \ell>1, \end{array}\right.\leq \|{\boldsymbol 1}_{(F\ast G,\varepsilon)}\otimes{\boldsymbol 1}_{(\emptyset,1)}\|_{\ell}. \end{eqnarray*} \item The justifications of (\ref{equa:homotopiedim0}) and (\ref{equa:homotopiedemarrage}) are similar. \item (\ref{equa:homotopiedimqcq}): if $\dim (G_{n},\varepsilon_{n})\geq 1$, we denote by $\ell(\alpha)$ the weight of the generator $e_{\alpha}$ of $\Delta$. By construction of ${\widetilde{\mathrm{T}}}_{\Delta}$, we have $$ \|{\widetilde{\mathrm{T}}}_{\Delta}({\boldsymbol 1}_{(F\ast G',\varepsilon)})\ast e_{\alpha}\|_{\ell} = \left\{\begin{array}{cl} \|{\widetilde{\mathrm{T}}}_{\Delta}({\boldsymbol 1}_{(F\ast G',\varepsilon)})\|_{\ell} &\text{if } \ell(\alpha)\leq n-\ell,\\[.2cm] \|{\widetilde{\mathrm{T}}}_{\Delta}({\boldsymbol 1}_{(F\ast G',\varepsilon)})\|_{\ell}+1 &\text{if } \ell(\alpha)>n-\ell, \end{array}\right. \leq \|{\boldsymbol 1}_{(F\ast G,\varepsilon)}\|_{\ell}. $$ \end{itemize} To study the perverse degree of ${\widetilde{{\rm Sub}\,}}_{\Delta}(-)$, consider a subdividing face $G{\vartriangleleft} {\rm Sub}\,\Delta$ with first term $e_{0}$.
By definition of ${\widetilde{{\rm Sub}\,}}_{\Delta}$, one has, for each $\ell\in\{1,\dots,n\}$, $$ \|{\widetilde{{\rm Sub}\,}}_{\Delta}{\boldsymbol 1}_{(G,\varepsilon)}\|_{\ell} = \|{\widetilde{\mathrm{T}}}_{\Delta}{\widetilde{\delta}}^{K(\Delta)}{\boldsymbol 1}_{(G,\varepsilon)}\|_{\ell} =\|{\widetilde{\mathrm{T}}}_{\Delta}({\boldsymbol 1}_{(G,\varepsilon)}\ast e_{0})\|_{\ell} \leq \|{\boldsymbol 1}_{(G,\varepsilon)}\ast e_{0}\|_{\ell} =\|{\boldsymbol 1}_{(G,\varepsilon)}\|_{\ell}. $$ (The last equality uses the fact that $e_{0}$, being the first vertex of $G$, lies in the smallest filtration of $G$.) Together with \propref{prop:wsubX}, this proves that ${\widetilde{\mathrm{T}}}$ and ${\widetilde{{\rm Sub}\,}}$ define two maps from the complex $\lau{\widetilde{N}}*{\overline{p}}{X;R}$ to itself. We construct a cochain map, $\varphi_{\cal U}\colon \lau {\widetilde{N}}{*,\cal U}{\overline{p}}{X;R}\to \lau {\widetilde{N}}*{\overline{p}}{X;R}$, and a homotopy, $\Theta\colon \lau {\widetilde{N}}*{\overline{p}}{X;R}\to\lau{\widetilde{N}}{*-1}{\overline{p}}{X;R}$, such that $\rho_{\cal U}\circ\varphi_{\cal U}={\rm id}$ and $\delta\circ\Theta+\Theta\circ\delta={\rm id}-\varphi_{\cal U}\circ\rho_{\cal U}$. Write $\Psi_{1}={\widetilde{\mathrm{T}}}$ and, for $m\geq 2$, ${\widetilde{{\rm Sub}\,}}^m={\widetilde{{\rm Sub}\,}}^{m-1}\circ{\widetilde{{\rm Sub}\,}}$ and $\Psi_{m}=\sum_{0\leq i<m}{\widetilde{\mathrm{T}}}\circ {\widetilde{{\rm Sub}\,}}^i$. Since ${\widetilde{{\rm Sub}\,}}$ is a cochain map, each summand below equals $({\rm id}-{\widetilde{{\rm Sub}\,}})\circ{\widetilde{{\rm Sub}\,}}^i={\widetilde{{\rm Sub}\,}}^i-{\widetilde{{\rm Sub}\,}}^{i+1}$ by (\ref{equa:homotopiesub}), and the sum telescopes: \begin{equation}\label{equa:psim} \delta\circ \Psi_{m}+\Psi_{m}\circ\delta=\sum_{0\leq i <m}\delta\circ{\widetilde{\mathrm{T}}}\circ{\widetilde{{\rm Sub}\,}}^i+{\widetilde{\mathrm{T}}}\circ{\widetilde{{\rm Sub}\,}}^i\circ \delta ={\rm id}-{\widetilde{{\rm Sub}\,}}^m.
\end{equation} If $\sigma\colon \Delta\to X$ is a regular simplex, we denote by $m(\sigma)$ the smallest integer such that the simplices of ${\rm Sub}^{m(\sigma)}\,\Delta$ have their image under $\sigma$ included in an open subset of the cover~$\cal U$. If $F$ is a simplex of $\Delta$, we denote by $\sigma_{F}\colon F\to X$ the restriction of $\sigma$ and by $(F,\varepsilon){\vartriangleleft} {\widetilde{\Delta}}$ the fact that $(F,\varepsilon)$ is a face of the blow-up ${\widetilde{\Delta}}$. Let $\omega\in {\widetilde{N}}^{k}(X;R)$. The homotopy $\Theta$ is defined by $$\left(\Theta(\omega)\right)_{\sigma}= \sum_{\substack{(F,\varepsilon){\vartriangleleft} {\widetilde{\Delta}} \\ |(F,\varepsilon)|=k-1\phantom{-}}} \left(\Psi_{m(\sigma_{F})}(\omega)\right)_{\sigma_{F}}\!\!\!\!(F,\varepsilon)\,{\boldsymbol 1}_{(F,\varepsilon)}. $$ Since the coefficient of ${\boldsymbol 1}_{(F,\varepsilon)}$ depends only on $\sigma_{F}$ and $\varepsilon$, we get the compatibility with the faces. In order to construct $\varphi_{\cal U}$, we calculate $\delta\circ\Theta$ and $\Theta\circ\delta$. Following (\ref{equa:ledelta1}), we have \begin{equation}\label{equa:lecobord} \delta {{\boldsymbol 1}_{(F,\varepsilon)}} =(-1)^k \sum_{(F,\varepsilon){\vartriangleleft} \partial (\nabla,\kappa)}n_{(F,\varepsilon,\nabla,\kappa)}{\boldsymbol 1}_{(\nabla,\kappa)}, \end{equation} where $(F,\varepsilon){\vartriangleleft}\partial(\nabla,\kappa)$ means that $(F,\varepsilon)$ runs over the proper faces of $(\nabla,\kappa)$ and $$\partial(\nabla,\kappa)=\sum_{(F,\varepsilon){\vartriangleleft}\partial(\nabla,\kappa)} n_{(F,\varepsilon,\nabla,\kappa)}(F,\varepsilon).$$ We deduce, for all $\omega\in {\widetilde{N}}^{k}(X;R)$, $$ \left( \delta\Theta(\omega)\right)_{\sigma} = (-1)^k \! \! \! \! \! \!
\sum_{\substack{(F,\varepsilon){\vartriangleleft} {\widetilde{\Delta}} \\ |(F,\varepsilon)|=k-1\phantom{-}}} \sum_{\substack{(F,\varepsilon){\vartriangleleft} \partial (\nabla,\kappa) \\ (\nabla,\kappa){\vartriangleleft} {\widetilde{\Delta}}\phantom{-}}} n_{(F,\varepsilon,\nabla,\kappa)} \left(\Psi_{m(\sigma_{F})}(\omega)\right)_{\sigma_{F}}(F,\varepsilon)\,{\boldsymbol 1}_{(\nabla,\kappa)}. $$ On the other hand, the definition of $\Theta$ implies \begin{eqnarray*} \left(\Theta(\delta\omega)\right)_{\sigma} &=& \sum_{\substack{(\nabla,\kappa){\vartriangleleft} {\widetilde{\Delta}} \\ |(\nabla,\kappa)|=k\phantom{-}}} \left(\Psi_{m(\sigma_{\nabla})}(\delta\omega)\right)_{\sigma_{\nabla}} (\nabla,\kappa)\, {\boldsymbol 1}_{(\nabla,\kappa)} \\ &=_{(\ref{equa:psim})}& \omega_{\sigma} -\sum_{\substack{(\nabla,\kappa){\vartriangleleft} {\widetilde{\Delta}} \\ |(\nabla,\kappa)|=k\phantom{-}}} \left( {\widetilde{{\rm Sub}\,}}^{m(\sigma_{\nabla})}(\omega)\right)_{\sigma_{\nabla}}(\nabla,\kappa)\,{\boldsymbol 1}_{(\nabla,\kappa)} \\ && -\sum_{\substack{(\nabla,\kappa){\vartriangleleft} {\widetilde{\Delta}} \\ |(\nabla,\kappa)|=k\phantom{-}}} \left(\delta\Psi_{m(\sigma_{\nabla})}(\omega)\right)_{\sigma_{\nabla}}(\nabla,\kappa) \,{\boldsymbol 1}_{(\nabla,\kappa)}.
\end{eqnarray*} It follows, again by using (\ref{equa:lecobord}), \begin{eqnarray*}\label{equa:levarphi} \left((\delta\Theta+\Theta\delta-{\rm id})(\omega)\right)_{\sigma} &=& -(\varphi_{\cal U}(\omega))_{\sigma} \end{eqnarray*} with \begin{eqnarray*} -(\varphi_{\cal U}(\omega))_{\sigma} = -\sum_{\substack{(\nabla,\kappa){\vartriangleleft} {\widetilde{\Delta}} \\ |(\nabla,\kappa)|=k\phantom{-}}} \left( {\widetilde{{\rm Sub}\,}}^{m(\sigma_{\nabla})}(\omega)\right)_{\sigma_{\nabla}}(\nabla,\kappa)\,{\boldsymbol 1}_{(\nabla,\kappa)} + && \label{equa:varphi}\\ (-1)^k \sum_{\substack{(F,\varepsilon){\vartriangleleft} {\widetilde{\Delta}} \\ |(F,\varepsilon)|=k-1\phantom{-}}} \sum_{\substack{(F,\varepsilon){\vartriangleleft} \partial (\nabla,\kappa) \\ (\nabla,\kappa){\vartriangleleft} {\widetilde{\Delta}}\phantom{-}}} n_{(F,\varepsilon,\nabla,\kappa)} \left((\Psi_{m(\sigma_{F})}-\Psi_{m(\sigma_{\nabla})})(\omega)\right)_{\sigma_{F}}(F,\varepsilon)\,{\boldsymbol 1}_{(\nabla,\kappa)}. &&\nonumber \end{eqnarray*} Observe that $(\varphi_{\cal U}(\omega))_{\sigma}$ is well defined for all $\omega\in {\widetilde{N}}^{k}(X;R)$, but we have to show that it is also well defined for every $\omega\in {\widetilde{N}}^{*,\cal U}(X;R)$ and any regular simplex $\sigma\colon\Delta\to X$. The first term of the sum above is defined since the cochains $({\widetilde{{\rm Sub}\,}}^m(\omega))_{\sigma_{\nabla}}$ are well defined for any $m\geq m(\sigma_{\nabla})$. The second term is also well defined since $m(\sigma_{F})\leq m(\sigma_{\nabla})$ and the cochain $({\widetilde{{\rm Sub}\,}}^m(\omega))_{\sigma_{F}}$ is defined for any $m\geq m(\sigma_{F})$. So we have $\varphi_{\cal U}(\omega)\in {\widetilde{N}}^{*}(X;R)$.
The equality $\delta\circ\Theta+\Theta\circ\delta={\rm id}-\varphi_{\cal U}\circ\rho_{\cal U}$ is thus verified by construction of $\varphi_{\cal U}$ and $\Theta$. If $\omega\in {\widetilde{N}}^{*,\cal U}(X;R)$ and if $\sigma$ is $\cal U$-small, then $m(\sigma)=0$ and the family of indices of the terms defining $\Psi_{m(\sigma)}$ is empty. Then $$\left(\rho_{\cal U}(\varphi_{\cal U}(\omega))\right)_{\sigma}= \varphi_{\cal U}(\omega)_{\sigma} =\omega_{\sigma}-0=\omega_{\sigma}.$$ We have therefore established $\rho_{\cal U}\circ \varphi_{\cal U}={\rm id}$. From ${\rm id}-\varphi_{\cal U}=\delta\circ\Theta+\Theta\circ\delta$, we deduce $\delta-\delta\circ\varphi_{\cal U} =\delta\circ\Theta\circ\delta= \delta-\varphi_{\cal U}\circ\delta$, hence the compatibility of $\varphi_{\cal U}$ with the differentials. It remains to study the behavior of $\Theta$ and $\varphi_{\cal U}$ with respect to the perverse degree. Since $\|\Psi_{m}(\omega)\|\leq \|\omega\|$ and $\|{\widetilde{{\rm Sub}\,}}(\omega)\|\leq \|\omega\|$, we have, for any singular stratum, $S$, $\|\Theta(\omega)\|_{S}\leq \|\omega\|_{S} \quad\text{and}\quad \|\varphi_{\cal U}(\omega)\|_{S}\leq \|\omega\|_{S}$. This completes the proof of assertion (i). (ii) Let $\omega \in \lau {\widetilde{N}}{*,\cal U}{\overline{p}}{X;R}$ satisfying the hypothesis of (ii) for a subset $K\subset X$. Let $\sigma\colon \Delta\to X$ be such that $({\rm Im\,}\sigma)\cap K=\emptyset$. The element $(\varphi_{\cal U}(\omega))_{\sigma}$ is defined from cochains $\omega_{\sigma_{F\ast G}}$, where $\sigma_{F\ast G}$ is the restriction of $\sigma$, as defined in (\ref{equa:sigmaFG}).
From this formula, we find that ${\rm Im\,} \sigma_{F\ast G}\subset {\rm Im\,}\sigma$, hence $\omega_{\sigma_{F\ast G}}=0$ and $(\varphi_{\cal U}(\omega))_{\sigma}=0$. \end{proof} \begin{proof}[Proof of \lemref{lem:subethomotopie}]\label{subsec:constructionhomotopie} We have to show that ${\widetilde{\mathrm{T}}}_{\Delta}\colon {\widetilde{N}}^{*}(K(\Delta))\to {\widetilde{N}}^{*-1}(\Delta)$ verifies (\ref{equa:cequilfaut}). To do so, we use \propref{prop:dfaceajout} to express the value of the differential $\delta$ in terms of the addition of vertices. (i) Start with ${\boldsymbol 1}_{(F,\varepsilon)}\in {\widetilde{N}}^{*}(\Delta)$. By definition, the face $F$ is not a full face (see \defref{def:facecompleteK}) in $K(\Delta)$, hence ${\widetilde{\mathrm{T}}}_{\Delta}({\boldsymbol 1}_{(F,\varepsilon)})=0$. On the other hand, the only term of ${\widetilde{\delta}}^{K(\Delta)} {\boldsymbol 1}_{(F,\varepsilon)}$ having a non-zero image under ${\widetilde{\mathrm{T}}}_{\Delta}$ is ${\boldsymbol 1}_{(F,\varepsilon)} \ast e_{\{F\}}$, where $e_{\{F\}}$ is the barycenter of the face $F$. Therefore, using (\ref{equa:homotopiedemarrage}), one has $$ {\widetilde{\mathrm{T}}}_{\Delta}({\widetilde{\delta}}^{K(\Delta)} {\boldsymbol 1}_{(F,\varepsilon)}) = {\widetilde{\mathrm{T}}}_{\Delta}\left( (-1)^{|(F,\varepsilon)|} {\boldsymbol 1}_{(F,\varepsilon)}\ast e_{\{F\}}\right) = (-1)^{|(F,\varepsilon)|}(-1)^{|(F,\varepsilon)|}{\boldsymbol 1}_{(F,\varepsilon)} = {\boldsymbol 1}_{(F,\varepsilon)}.
$$ It follows: $$({\widetilde{\mathrm{T}}}_{\Delta}\circ {\widetilde{\delta}}^{K(\Delta)}+{\widetilde{\delta}}^{\Delta} \circ {\widetilde{\mathrm{T}}}_{\Delta})({\boldsymbol 1}_{(F,\varepsilon)})= (\iota^*_{\Delta}-{\widetilde{{\rm Sub}\,}}_{\Delta}\circ\iota^*_{{\rm Sub}\,\Delta})({\boldsymbol 1}_{(F,\varepsilon)}).$$ (ii) Continue with a simplex, $F\ast G{\vartriangleleft} K(\Delta)$, such that $F\neq \emptyset$ and $G\neq \emptyset$. In this case, we have to show \begin{equation}\label{equa:pointiii} ({\widetilde{\mathrm{T}}}_{\Delta}\circ {\widetilde{\delta}}^{K(\Delta)}+{\widetilde{\delta}}^{\Delta}\circ {\widetilde{\mathrm{T}}}_{\Delta})({\boldsymbol 1}_{(F\ast G,\varepsilon)})=0. \end{equation} --- We first consider the case of a \emph{full simplex}, distinguishing the various possibilities that appear in the construction of ${\widetilde{\mathrm{T}}}_{\Delta}$, located after the statement of \lemref{lem:subethomotopie}. We use a first induction, assuming that equality (\ref{equa:pointiii}) holds for any filtered Euclidean simplex of formal dimension strictly less than $n$.
$\bullet$ If $G_{n}=\emptyset$ and $\varepsilon_{n}=1$, then $F\ast G{\vartriangleleft} K(\nabla)$ and: \begin{eqnarray*} && {\widetilde{\mathrm{T}}}_{\Delta}\left({\widetilde{\delta}}^{K(\Delta)}({\boldsymbol 1}_{(F\ast G,\varepsilon)}\otimes {\boldsymbol 1}_{(\emptyset,1)})\right) =_{(1)}\\ && {\widetilde{\mathrm{T}}}_{\nabla}\left({\widetilde{\delta}}^{K(\nabla)}{\boldsymbol 1}_{(F\ast G,\varepsilon)}\right)\otimes {\boldsymbol 1}_{(\emptyset,1)}\\&& + (-1)^{|(F\ast G,\varepsilon)|} \sum_{e\in\cal V(K(\Delta)_{n})} {\widetilde{\mathrm{T}}}_{\Delta}\left({\boldsymbol 1}_{(F\ast G,\varepsilon)}\otimes {\boldsymbol 1}_{(\emptyset,1)}\ast e\right)=_{(2)}\\ && {\widetilde{\mathrm{T}}}_{\nabla}\left({\widetilde{\delta}}^{K(\nabla)}{\boldsymbol 1}_{(F\ast G,\varepsilon)}\right)\otimes {\boldsymbol 1}_{(\emptyset,1)}\\&& + (-1)^{|(F\ast G,\varepsilon)|} \sum_{e\in\cal V(\Delta_{n})} {\widetilde{\mathrm{T}}}_{\nabla}\left({\boldsymbol 1}_{(F\ast G,\varepsilon)}\right)\otimes {\boldsymbol 1}_{(\emptyset,1)}\ast e=_{(3)}\\ && -{\widetilde{\delta}}^{\nabla}{\widetilde{\mathrm{T}}}_{\nabla}\left({\boldsymbol 1}_{(F\ast G,\varepsilon)}\right)\otimes {\boldsymbol 1}_{(\emptyset,1)} + (-1)^{|(F\ast G,\varepsilon)|} {\widetilde{\mathrm{T}}}_{\nabla}\left({\boldsymbol 1}_{(F\ast G,\varepsilon)}\right)\otimes {\widetilde{\delta}}^{c\Delta_{n}}{\boldsymbol 1}_{(\emptyset,1)} =_{(1)}\\ && -{\widetilde{\delta}}^{\Delta} {\widetilde{\mathrm{T}}}_{\Delta}\left({\boldsymbol 1}_{(F\ast G,\varepsilon)}\otimes {\boldsymbol 1}_{(\emptyset,1)}\right), \end{eqnarray*} where $=_{(1)}$ uses (\ref{equa:homotopiedemarragevide}), $=_{(2)}$ uses (\ref{equa:homotopiedimqcq}) and (\ref{equa:homotopiedemarragevide}), and $=_{(3)}$ is the induction hypothesis on $\nabla$. $\bullet$ The argument is similar when $\dim G_{n}=0$ and $\varepsilon_{n}=0$.
A second induction, on the dimension of $G_{n}$, completes the proof in the case of a full simplex $F\ast G$. --- If $F\ast G$ is not full and if the differential ${\widetilde{\delta}}^{K(\Delta)}{\boldsymbol 1}_{(F\ast G,\varepsilon)}$ only involves non full simplices, then the left-hand side of (\ref{equa:pointiii}) is zero and the result is true. So it remains to consider the case of a non full simplex whose differential involves full simplices. Specifically, consider a full $k$-simplex $F'\ast G'$, with $F'=[e_{i_{0}},\dots,e_{i_{a}}]$ and $G'=[e_{\{i_{0}\dots i_{a}\}},\dots,e_{\{i_{0}\dots i_{a} \dots i_{b}\}}]$, admitting a non full $(k-1)$-face $F\ast G$ with $F\neq\emptyset$ and $G\neq\emptyset$. We need to establish \begin{equation}\label{equa:pasplein} {\widetilde{\mathrm{T}}}_{\Delta}{\widetilde{\delta}}^{K(\Delta)}\left({\boldsymbol 1}_{(F\ast G,\varepsilon)}\right)=0. \end{equation} Consider the various possible cases. \begin{enumerate}[(a)] \item Suppose $a=b$; then $F=[e_{i_{0}},\dots,\hat{e}_{i_{x}},\dots,e_{i_{a}}]$ and $G=G'=[e_{\{F'\}}]$.
The equality (\ref{equa:pasplein}) reduces to ${\widetilde{\mathrm{T}}}_{\Delta}{\widetilde{\delta}}^{K(\Delta)}\left({\boldsymbol 1}_{(F,\varepsilon)}\ast e_{\{F'\}}\right)=0.$ Keeping in the expression of the differential only the terms corresponding to full simplices, we have: $$ {\widetilde{\mathrm{T}}}_{\Delta}{\widetilde{\delta}}^{K(\Delta)}\left({\boldsymbol 1}_{(F,\varepsilon)}\ast e_{\{F'\}}\right)= (-1)^{|(F,\varepsilon)|+1} {\widetilde{\mathrm{T}}}_{\Delta}\left( {\boldsymbol 1}_{(F,\varepsilon)}\ast e_{\{F'\}}\ast e_{i_{x}}+ {\boldsymbol 1}_{(F,\varepsilon)}\ast e_{\{F'\}}\ast e_{\{F\}} \right). $$ The definition of ${\widetilde{\mathrm{T}}}_{\Delta}$ gives \begin{eqnarray*} {\widetilde{\mathrm{T}}}_{\Delta}\left( {\boldsymbol 1}_{(F,\varepsilon)}\ast e_{\{F'\}}\ast e_{i_{x}}\right) = -{\widetilde{\mathrm{T}}}_{\Delta}\left( {\boldsymbol 1}_{(F,\varepsilon)}\ast e_{i_{x}} \ast e_{\{F'\}}\right) = -(-1)^{|(F',\varepsilon)|}\,{\boldsymbol 1}_{(F,\varepsilon)}\ast e_{i_{x}}, && \\ {\widetilde{\mathrm{T}}}_{\Delta}\left({\boldsymbol 1}_{(F,\varepsilon)}\ast e_{\{F'\}}\ast e_{\{F\}} \right) = -{\widetilde{\mathrm{T}}}_{\Delta}\left({\boldsymbol 1}_{(F,\varepsilon)}\ast e_{\{F\}} \ast e_{\{F'\}}\right) &&\\ = -{\widetilde{\mathrm{T}}}_{\Delta}\left({\boldsymbol 1}_{(F,\varepsilon)}\ast e_{\{F\}}\right)\ast e_{i_{x}} = -(-1)^{|(F,\varepsilon)|}{\boldsymbol 1}_{(F,\varepsilon)}\ast e_{i_{x}}. && \end{eqnarray*} The conclusion comes from $|(F,\varepsilon)|+1=|(F',\varepsilon)|$. \noindent \emph{For the last two cases, we use an induction on the dimension of the component $G_{n}$.} \item Suppose $a<b$ and $F=[e_{i_{0}},\dots,\hat{e}_{i_{x}},\dots,e_{i_{a}}]$; then $G=G'=L\ast e_{\{i_{0}\dots i_{a} \dots i_{b}\}}$.
The equality (\ref{equa:pasplein}) reduces to ${\widetilde{\mathrm{T}}}_{\Delta}{\widetilde{\delta}}^{K(\Delta)}\left({\boldsymbol 1}_{(F\ast L,\varepsilon)}\ast e_{\{i_{0}\dots i_{a} \dots i_{b}\}}\right)=0.$ By definition of ${\widetilde{\mathrm{T}}}_{\Delta}$ and following \corref{cor:astdiff}, one has: $${\widetilde{\mathrm{T}}}_{\Delta}{\widetilde{\delta}}^{K(\Delta)}\left({\boldsymbol 1}_{(F\ast L,\varepsilon)}\ast e_{\{i_{0}\dots i_{a} \dots i_{b}\}}\right) = {\widetilde{\mathrm{T}}}_{\Delta}\left({\widetilde{\delta}}^{K(\Delta)}{\boldsymbol 1}_{(F\ast L,\varepsilon)}\right)\ast e_{i_{b}}. $$ The simplex $F\ast L$ is not full and $L\neq\emptyset$ since $a<b$. We can therefore apply the induction hypothesis, so ${\widetilde{\mathrm{T}}}_{\Delta}{\widetilde{\delta}}^{K(\Delta)}\left({\boldsymbol 1}_{(F\ast L,\varepsilon)}\right)=0$. \item The argument is similar when $G=[e_{\{i_{0}\dots i_{a}\}},\dots,\hat{e}_{\{i_{0}\dots i_{a}\dots i_{x}\}},\dots,e_{\{i_{0}\dots i_{a} \dots i_{b}\}}]$ and $a<b$, and thus $F=F'$. \end{enumerate} (iii) It remains to verify the equality (\ref{equa:cequilfaut}) for elements ${\boldsymbol 1}_{(G,\varepsilon)}\in{\widetilde{N}}^*({\rm Sub}\,\Delta)$. In this case, it is actually the \emph{definition} of ${\widetilde{{\rm Sub}\,}}$ by $${\widetilde{{\rm Sub}\,}} {\boldsymbol 1}_{(G,\varepsilon)}=-\left({\widetilde{\mathrm{T}}}_{\Delta}\circ {\widetilde{\delta}}^{K(\Delta)}\right)({\boldsymbol 1}_{(G,\varepsilon)}),$$ the other terms being zero.
To complete the proof of \lemref{lem:subethomotopie}, it remains to verify the compatibility of ${\widetilde{{\rm Sub}\,}}$ with the differentials, \begin{equation}\label{equa:subdiff} {\widetilde{\delta}}^{\Delta}\circ {\widetilde{\mathrm{T}}}_{\Delta}\circ {\widetilde{\delta}}^{K(\Delta)}({\boldsymbol 1}_{(G,\varepsilon)}) = {\widetilde{\mathrm{T}}}_{\Delta}\circ {\widetilde{\delta}}^{K(\Delta)}\circ {\widetilde{\delta}}^{{\rm Sub}\,\Delta}({\boldsymbol 1}_{(G,\varepsilon)}). \end{equation} The two sides of this equation are zero except in the two following situations: \begin{enumerate}[(i)] \item $G=[e_{i_{0}},\dots,e_{\{i_{0}\dots i_{a}\}}]$, \item $G=[e_{i_{0}},\dots,\hat{e}_{\{i_{0}\dots i_{x}\}},\dots,e_{\{i_{0}\dots i_{x}\dots i_{a}\}}]$ with $x<a$. \end{enumerate} Let us detail each case. (i) The term ${\widetilde{\delta}}^{K(\Delta)}({\boldsymbol 1}_{(G,\varepsilon)})$ has only one full term, $(-1)^{|(G,\varepsilon)|}{\boldsymbol 1}_{(G,\varepsilon)}\ast e_{i_{0}}$. The term ${\widetilde{\delta}}^{K(\Delta)}{\widetilde{\delta}}^{{\rm Sub}\,\Delta}({\boldsymbol 1}_{(G,\varepsilon)})$ has the following full terms: $-\sum_{e_{i_{j}}\in\cal V(\Delta)}{\boldsymbol 1}_{(G,\varepsilon)}\ast e_{\{i_{0}\dots i_{a}i_{j}\}}\ast e_{i_{0}}$.
We deduce \begin{eqnarray*} {\widetilde{\mathrm{T}}}_{\Delta}{\widetilde{\delta}}^{K(\Delta)}{\widetilde{\delta}}^{{\rm Sub}\,\Delta}({\boldsymbol 1}_{(G,\varepsilon)}) &=& -\sum_{e_{i_{j}}\in\cal V(\Delta)}{\widetilde{\mathrm{T}}}_{\Delta}({\boldsymbol 1}_{(G,\varepsilon)}\ast e_{\{i_{0}\dots i_{a}i_{j}\}}\ast e_{i_{0}})\\ &=& \sum_{e_{i_{j}}\in\cal V(\Delta)}{\widetilde{\mathrm{T}}}_{\Delta}({\boldsymbol 1}_{(G,\varepsilon)}\ast e_{i_{0}})\ast e_{i_{j}} \\ &=& (-1)^{|(G,\varepsilon)|}{\widetilde{\delta}}^{\Delta}{\widetilde{\mathrm{T}}}_{\Delta}({\boldsymbol 1}_{(G,\varepsilon)}\ast e_{i_{0}})= {\widetilde{\delta}}^{\Delta}{\widetilde{\mathrm{T}}}_{\Delta}{\widetilde{\delta}}^{K(\Delta)}({\boldsymbol 1}_{(G,\varepsilon)}). \end{eqnarray*} (ii) The term ${\widetilde{\delta}}^{K(\Delta)}{\boldsymbol 1}_{(G,\varepsilon)}$ does not have any full term. On the other hand, the term ${\widetilde{\delta}}^{K(\Delta)}{\widetilde{\delta}}^{{\rm Sub}\,\Delta}{\boldsymbol 1}_{(G,\varepsilon)}$ has the following full terms: $-{\boldsymbol 1}_{(G,\varepsilon)}\ast e_{\{i_{0}\dots i_{x}\}}\ast e_{i_{0}} -{\boldsymbol 1}_{(G,\varepsilon)}\ast e_{\{i_{0}\dots i_{x-1}i_{x+1}\}}\ast e_{i_{0}}$. As in the previous calculation, equation (\ref{equa:subdiff}) follows from the definition of ${\widetilde{\mathrm{T}}}_{\Delta}$: $${\widetilde{\mathrm{T}}}_{\Delta}({\boldsymbol 1}_{(G,\varepsilon)}\ast e_{\{i_{0}\dots i_{x}\}}\ast e_{i_{0}})=-{\widetilde{\mathrm{T}}}_{\Delta}({\boldsymbol 1}_{(G,\varepsilon)}\ast e_{\{i_{0}\dots i_{x-1}i_{x+1}\}}\ast e_{i_{0}}).$$ (The sign comes from the permutation of $e_{i_{x}}$ and $e_{i_{x+1}}$.) \end{proof} \section{Mayer-Vietoris exact sequence. \thmref{thm:MVcourte}.
}\label{sec:MV} If $U\subset V$ are two open subsets of a perverse space $(X,\overline{p})$, the canonical inclusions $U\subset V\subset X$ induce cochain maps, $\lau {\widetilde{N}}* {\overline{p}}{X;R}\to \lau {\widetilde{N}}* {\overline{p}}{V;R}\to \lau {\widetilde{N}} *{\overline{p}}{U;R}$. If there is no ambiguity, we keep the same notation for a cochain and its images by these maps. \begin{theorem}[Mayer-Vietoris exact sequence]\label{thm:MVcourte} Let $(X,\overline{p})$ be a paracompact perverse space, endowed with an open cover $(U_{1},U_{2})$ and a subordinate partition of unity, $(f_{1},f_{2})$. For $i=1,\,2$, we denote by $\cal U_{i}$ the cover of $U_{i}$ consisting of the open subsets $(U_{1}\cap U_{2}, f_{i}^{-1}(]1/2,1]))$ and by $\cal U$ the cover of $X$, union of the covers $\cal U_{i}$. Then, the canonical inclusions, $U_{i}\subset X$ and $U_{1}\cap U_{2}\subset U_{i}$, induce a short exact sequence, $$ 0\to \lau {\widetilde{N}} {*,\cal U} {\overline{p}}{X;R} \stackrel{\iota}{\longrightarrow} \lau {\widetilde{N}} {*,\cal U_{1}} {\overline{p}}{U_{1};R} \oplus \lau {\widetilde{N}} {*,\cal U_{2}} {\overline{p}}{U_{2};R} \stackrel{\varphi}{\longrightarrow} \lau {\widetilde{N}} * {\overline{p}}{U_{1}\cap U_{2};R} \to 0, $$ where $\varphi(\omega_{1},\omega_{2})=\omega_{1}-\omega_{2}$. \end{theorem} The following result is a direct consequence of Theorems \ref{thm:Upetits} and \ref{thm:MVcourte}. \begin{corollary}\label{cor:MVlongue} Let $(X,\overline{p})$ be a paracompact perverse space endowed with an open cover $(U_{1},U_{2})$.
Then there is a long exact sequence for the blown-up intersection cohomology, $$ \cdots\to \lau \mathscr H i{\overline{p}}{X;R} \to \lau \mathscr H i{\overline{p}}{U_{1};R}\oplus \lau \mathscr H i{\overline{p}}{U_{2};R} \to \lau \mathscr H i{\overline{p}}{U_{1}\cap U_{2};R} \to \lau \mathscr H{i+1}{\overline{p}}{X;R}\to\cdots $$ \end{corollary} \begin{proof}[Proof of \thmref{thm:MVcourte}] Since the open cover $\cal U$ is the union of the covers $\cal U_{1}$ and $\cal U_{2}$, the morphism $\iota$ is injective. We prove the surjectivity of the map $\varphi$. For $i=1,\,2$, we define the function $g_{i}\colon X\to \{0,1\}$ by $$\begin{array}{ccc} g_{1}(x)=\left\{ \begin{array}{cl} 1&\text{if } f_{1}(x)>1/2,\\ 0&\text{if not,} \end{array}\right. & \text{ and } & g_{2}(x)=\left\{ \begin{array}{cl} 1&\text{if } f_{2}(x)\geq 1/2,\\ 0&\text{if not.} \end{array}\right. \end{array} $$ The inequality $f_{1}(x)>1/2$ implies $f_{2}(x)<1/2$ and $g_{2}(x)=0$. The support of $g_{2}$ is therefore included in $U_{2}\backslash f_{1}^{-1}(]1/2,1])$. Similarly, the support of $g_{1}$ is included in $U_{1}\backslash f_{2}^{-1}(]1/2,1])$. On the other hand, by construction, one has $g_{1}(x)+g_{2}(x)=1$. We denote by ${\tilde{g}}_{1}$ and ${\tilde{g}}_{2}$ the two 0-cochains of perverse degree 0, respectively associated to $g_{1}$ and $g_{2}$, as defined in \lemref{lem:0cochaine}. If $\omega\in \lau {\widetilde{N}} * {\overline{p}}{U_{1}\cap U_{2};R}$, we denote by ${\tilde{g}}_{i}\cup \omega$ the cup product (cf. \secref{subsec:cupTW}) of ${\tilde{g}}_{i}$ with $\omega$, for $i=1,\,2$. Since the cochain ${\tilde{g}}_{1}$ has its support included in $U_{1}\backslash f_{2}^{-1}(]1/2,1])$, the cup product satisfies ${\tilde{g}}_{1}\cup \omega\in \lau {\widetilde{N}}{*,\cal U_{1}}{\overline{p}}{U_{1};R}$. Likewise, one has ${\tilde{g}}_{2}\cup \omega\in \lau {\widetilde{N}} {*,\cal U_{2}} {\overline{p}}{U_{2};R}$.
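Note in passing a small verification, which we make explicit here as a sketch of the sign bookkeeping: since $g_{1}(x)+g_{2}(x)=1$ for all $x\in X$ and the association $g\mapsto{\tilde{g}}$ of \lemref{lem:0cochaine} is $R$-linear, the 0-cochain ${\tilde{g}}_{1}+{\tilde{g}}_{2}$ is associated to the constant map $1$, whose coboundary vanishes by the computation in the proof of \lemref{lem:0cochaine}. Hence
$$\delta{\tilde{g}}_{1}=-\delta{\tilde{g}}_{2},$$
so the two indices play symmetric roles, up to sign, in the cochain $(\delta{\tilde{g}}_{1})\cup\omega$ appearing in the connecting morphism.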
We verify that $\varphi({\tilde{g}}_{1}\cup \omega, -{\tilde{g}}_{2}\cup \omega)=\omega$, which gives the surjectivity of $\varphi$. The composition $\varphi\circ\iota$ is the zero map. It remains to consider an element $(\omega_{1},\omega_{2})\in \lau {\widetilde{N}} {*,\cal U_{1}} {\overline{p}}{U_{1};R} \oplus \lau {\widetilde{N}} {*,\cal U_{2}} {\overline{p}}{U_{2};R}$ such that $\varphi(\omega_{1},\omega_{2})=0$ and to construct $\omega\in \lau {\widetilde{N}}{*,\cal U} {\overline{p}}{X;R}$ such that $\iota(\omega)=(\omega_{1},\omega_{2})$. If $\sigma\colon \Delta\to X$ is a regular simplex, we set: \begin{itemize} \item $\omega_{\sigma}=(\omega_{1})_{\sigma}=(\omega_{2})_{\sigma}$, if $\sigma(\Delta)\subset U_{1}\cap U_{2}$, \item $\omega_{\sigma}=(\omega_{1})_{\sigma}$, if $\sigma(\Delta)\subset f_{1}^{-1}(]1/2,1])$, \item $\omega_{\sigma}=(\omega_{2})_{\sigma}$, if $\sigma(\Delta)\subset f_{2}^{-1}(]1/2,1])$. \end{itemize} This definition makes sense because, on the one hand, $f_{1}^{-1}(]1/2,1])\cap f_{2}^{-1}(]1/2,1])=\emptyset$ and, on the other hand, $U_{1}\cap U_{2}\cap f_{1}^{-1}(]1/2,1])\subset U_{1}\cap U_{2}$, where the two cochains $(\omega_{1})_{\sigma}$ and $(\omega_{2})_{\sigma}$ coincide. \end{proof} With the notation of the previous proof, the connecting morphism of the Mayer-Vietoris long exact sequence is defined by $$ [\omega]\mapsto [(\delta {\tilde{g}}_{1})\cup \omega].$$ \begin{lemma}\label{lem:0cochaine} Let $(X,\overline{p})$ be a perverse space. Any map, $g\colon X\to R$, defines a 0-cochain ${\tilde{g}}\in \lau {\widetilde{N}} 0 {\overline{0}}{X;R}$. Moreover, the association $g\mapsto {\tilde{g}}$ is $R$-linear.
\end{lemma} \begin{proof} Let $\sigma\colon \Delta=\Delta_{0}\ast\dots\ast\Delta_{n}\to X$ be a regular simplex. We want to define ${\tilde{g}}_{\sigma}\in N^0(c\Delta_{0})\otimes\dots\otimes N^0(c\Delta_{n-1})\otimes N^0(\Delta_{n})$. If $b=(b_{0},\dots, b_{n})\in c\Delta_{0}\times\dots\times c\Delta_{n-1} \times \Delta_{n}$, we denote by $i_{0}$ the smallest index for which $b_{i}$ is not an apex of a cone (i.e., $i_{0}=\min\{i \mid b_{i}\in \Delta_{i}\}$). Observe that the integer $i_{0}$ exists since $b_{n}\in \Delta_{n}$, and set ${\tilde{g}}_{\sigma}(b)=g(\sigma(b_{i_{0}}))$. This map is clearly compatible with the face operators and defines ${\tilde{g}}\in \Hiru {\widetilde{N}} 0{X;R}$. It remains to determine the perversity of the cochain ${\tilde{g}}_{\sigma}$. Since it is a 0-cochain, it is obviously $\overline{0}$-allowable and we only need to study its coboundary. To compute the $\ell$-perverse degree of $\delta {\tilde{g}}_{\sigma}$, we consider $F=F_{0}\otimes\dots\otimes F_{n}=\{b_{0}\}\otimes\dots\otimes F_{j}\otimes\dots\otimes \{b_{n}\}$ with $\dim F_{j}=1$ and $\partial F_{j}=b_{j}^1-b_{j}^0$. We have $$\delta{\tilde{g}}_{\sigma}(F)= {\tilde{g}}_{\sigma}(b_{0},\dots, b^1_{j},\dots, b_{n}) -{\tilde{g}}_{\sigma}(b_{0},\dots, b^0_{j},\dots, b_{n}). $$ If $n-\ell<j$, then $i_{0}\leq n-\ell<j$ and $\delta{\tilde{g}}_{\sigma}(F)=g(\sigma(b_{i_{0}}))-g(\sigma(b_{i_{0}}))=0$, and then $\|\delta {\tilde{g}}_{\sigma}\|_{\ell}=-\infty$.\\ Finally, ${\tilde{g}}\in \lau {\widetilde{N}} 0 {\overline{0}}{X;R}$ and the association $g\mapsto {\tilde{g}}$ is $R$-linear by construction. \end{proof} \section{Product with the real line.
\thmref{prop:isoproduitR}.}\label{sec:produitR} Consider the product $X\times\mathbb{R}$ equipped with the product filtration $(X\times \mathbb{R})_i = X_i \times \mathbb{R}$ and a perversity~$\overline{p}$. We also denote by $\overline{p}$ the perversity induced on $X$, that is, $\overline p (S) = \overline p (S \times \mathbb{R})$ for each stratum $S$ of $X$. Let $I_{0}, I_{1} \colon X \to X \times \mathbb{R}$ be the canonical injections, defined by $I_{0}(x) = (x, 0)$ and $I_{1}(x) = (x, 1)$. The canonical projections are denoted by ${\rm pr} \colon X \times \mathbb{R} \to X$ and ${\rm pr}_{2} \colon X \times \mathbb{R} \to \mathbb{R}$. \propref{prop:applistratifieeforte} ensures that the maps $I_{0}$, $I_{1}$ and ${\rm pr}$ induce cochain maps between the blown-up complexes. \begin{theorem}\label{prop:isoproduitR} Let $X$ be a filtered space and let $X \times \mathbb{R}$ be equipped with the product filtration and a perversity~$\overline{p}$. Denoting also by $\overline{p}$ the perversity induced on $X$, the maps induced in the blown-up intersection cohomology by the projection ${\rm pr} \colon X \times \mathbb{R} \to X$ and the canonical injections, $I_{0}, I_{1} \colon X \to X \times \mathbb{R}$, verify $(I_{0}\circ {\rm pr})^*=(I_{1}\circ{\rm pr})^*={\rm id}$. \end{theorem} The proof proceeds according to the scheme of \secref{sec:subdivision}: we construct a homotopy $\Theta_{\Delta}$ at the level of a simplex (cf. \propref{prop:homotopieR}), then glue these maps to get a homotopy at the level of blown-up complexes (see \propref{prop:homotopieRglobal}).
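As an illustration of how \thmref{prop:isoproduitR} is used (a routine consequence we state here for orientation), note that ${\rm pr}^*$ and $I_{0}^*$ are then inverse isomorphisms; iterating along $X\times\mathbb{R}^{k}=(X\times\mathbb{R}^{k-1})\times\mathbb{R}$, each factor being given the product filtration, an induction on $k$ yields an isomorphism induced by the projection:
$$\lau \mathscr H *{\overline{p}}{X\times\mathbb{R}^{k};R}\cong \lau \mathscr H *{\overline{p}}{X;R}.$$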
For any filtered simplex, $\Delta=\Delta_{0}\ast\dots\ast\Delta_{n}$, one defines a weighted simplicial complex $\cal L_{\Delta}=\Delta\otimes [0,1]$, whose simplices are the joins $(F,\pmb{0})\ast (G,\pmb{1})$, with $F{\vartriangleleft} \Delta$, $G{\vartriangleleft} \Delta$, $(F,\pmb{0})\subset \Delta\times \{0\}$ and $(G,\pmb{1})\subset \Delta\times\{1\}$. Henceforth, we denote such a simplex by $F \ast G$, meaning that the first term, $F$, is identified with a simplex of $\Delta \times \{0\}$ and the second, $G$, with a simplex of $\Delta \times \{1\}$. If $F$ and $G$ are compatible, in the sense of \defref{def:cupsurDelta}, then, for the filtration induced by $\Delta$, we have $F\ast G=F_{0}\ast\dots\ast(F_{p}\ast G_{p})\ast\dots\ast G_{n}$, with $p\in\{0,\dots,n\}$. A face of the prismatic set ${\widetilde{\mathcal L}}_{\Delta}$, corresponding to compatible simplices $F$ and $G$ of $\Delta$, is denoted by $$(F\ast G,\varepsilon)=(F_{0},\varepsilon_{0})\times\dots\times (F_{p}\ast G_{p},\varepsilon_{p}) \times\dots\times (G_{n-1},\varepsilon_{n-1})\times G_{n}.$$ In this writing, if $j<n$, $j\neq p$, we allow $F_{j}$ and $G_{j}$ to be empty, but then the associated variable $\varepsilon_{j}$ must equal 1.
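Before the statement, let us recall, purely as a guide, the classical situation that the homotopy below dualizes and adapts to the blown-up complex (the signs here are those of the standard simplicial prism operator, not of the filtered setting): for the cylinder over a simplex $[a_{0},\dots,a_{m}]$, with bottom vertices $a_{i}$ and top vertices $b_{i}$, the prism operator
$$P([a_{0},\dots,a_{m}])=\sum_{i=0}^{m}(-1)^{i}\,[a_{0},\dots,a_{i},b_{i},\dots,b_{m}]$$
satisfies $\partial\circ P+P\circ\partial=(\text{top inclusion})-(\text{bottom inclusion})$; equality (\ref{equa:homotopieR}) is the cochain-level counterpart of this identity.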
\begin{proposition}\label{prop:homotopieR} There exists a linear map, $\Theta_{\Delta}\colon \Hiru {\widetilde{N}} *{\Delta\otimes [0,1]}\to \Hiru {\widetilde{N}} {*-1}{\Delta}$, such that \begin{equation}\label{equa:homotopieR} (\Theta_{\Delta}\circ \delta+\delta\circ \Theta_{\Delta})({\boldsymbol 1}_{(F\ast G,\varepsilon)}) =\left\{\begin{array}{cl} 0& \text{if } F\neq\emptyset \text{ and } G\neq\emptyset,\\ - {\boldsymbol 1}_{(G,\varepsilon)}& \text{if } F=\emptyset,\\ {\boldsymbol 1}_{(F,\varepsilon)}& \text{if } G=\emptyset, \end{array}\right. \end{equation} where $\delta$ denotes the differentials of $\Hiru {\widetilde{N}}* {\Delta\otimes [0,1]}$ and $\Hiru {\widetilde{N}} {*-1}{\Delta}$. \end{proposition} \begin{proof} Define the map $\Theta_{\Delta}$ on the elements of the dual basis. Let $${\boldsymbol 1}_{(F\ast G,\varepsilon)}= {\boldsymbol 1}_{(F_{0},\varepsilon_{0})}\otimes\dots\otimes {\boldsymbol 1}_{(F_{p}\ast G_{p},\varepsilon_{p})}\otimes\dots\otimes {\boldsymbol 1}_{(G_{n-1},\varepsilon_{n-1})}\otimes {\boldsymbol 1}_{G_{n}}.$$ We set $$\Theta_{\Delta}({\boldsymbol 1}_{(F \ast G,\varepsilon)})= \left\{\begin{array}{cl} (-1)^{|(F,\varepsilon)|_{<p}+|(G_{p},\varepsilon_{p})|}\, {\boldsymbol 1}_{(F\cup G,\varepsilon)}& \text{if}\; F\; \text{and}\; G \; \text{are compatible,}\\ 0& \text{if not,} \end{array}\right.$$ where ${\boldsymbol 1}_{(F\cup G,\varepsilon)}= {\boldsymbol 1}_{(F_{0},\varepsilon_{0})}\otimes\dots\otimes {\boldsymbol 1}_{(F_{p}\cup G_{p},\varepsilon_{p})}\otimes\dots\otimes {\boldsymbol 1}_{(G_{n-1},\varepsilon_{n-1})}\otimes {\boldsymbol 1}_{G_{n}}$ and $F_{p}\cup G_{p}$ was introduced in \defref{def:cupsurDelta}.
In ${\boldsymbol 1}_{(F\ast G,\varepsilon)}$, we consider the faces $F$ of $\Delta\times \{0\}$ and $G$ of $\Delta\times\{1\}$. In the expression of $\Theta_{\Delta}({\boldsymbol 1}_{(F \ast G,\varepsilon)})$, we make an abuse of notation by keeping the letters $F$ and $G$ for faces of $\Delta$. It remains to verify (\ref{equa:homotopieR}) and, for that, we consider the following cases. We denote by $\cal V({\widetilde{\mathcal L}}_{\Delta})$ the union of $\cal V(\cal L_{\Delta})$ with the family of virtual vertices. \begin{itemize} \item Suppose $F\neq \emptyset$, $G\neq\emptyset$, with $F$ and $G$ compatible. Using equality (\ref{equa:diffetpoint}) and \lemref{lem:thetaaste}, we obtain the equality (\ref{equa:homotopieR}) in this case: \begin{eqnarray*} \delta\Theta_{\Delta}({\boldsymbol 1}_{(F\ast G,\varepsilon)}) &=& (-1)^{|(F\ast G,\varepsilon)|+1} \sum_{e\in\cal V({\widetilde{\Delta}})}\Theta_{\Delta}({\boldsymbol 1}_{(F\ast G,\varepsilon)}) \ast e \\ &=& (-1)^{|(F\ast G,\varepsilon)|+1} \sum_{e\in\cal V({\widetilde{\mathcal L}}_{\Delta})}\Theta_{\Delta}({\boldsymbol 1}_{(F\ast G,\varepsilon)} \ast e) = -\Theta_{\Delta}(\delta{\boldsymbol 1}_{(F\ast G,\varepsilon)}).\nonumber \end{eqnarray*} \item Suppose $F\neq\emptyset$, $G\neq\emptyset$, with $F$ and $G$ not compatible but such that $\delta {\boldsymbol 1}_{(F\ast G,\varepsilon)}$ contains compatible elements. This amounts to considering a $k$-simplex, $F'\ast G'$, of $\Delta\otimes [0,1]$, with $F'$ and $G'$ compatible, having a $(k-1)$-face, $F\ast G$, with $F$ and $G$ not compatible and not empty. Put $F'=[a_{i_{0}},\dots,a_{i_{r}}]$ and $G'=[b_{j_{0}},\dots,b_{j_{s}}]$ with $a_{i_{r}}=b_{j_{0}}\in\Delta_{p}$. (Recall that in the latter equality we identified $\Delta\cong\Delta\times\{{0}\}\cong \Delta\times \{{1}\}$.)
We denote by $a_{j_{t}}\in\Delta\times\{{0}\}$ the vertex identified with $b_{j_{t}}\in\Delta\times\{{1}\}$, and by $b_{i_{t}}\in\Delta\times\{{1}\}$ the vertex identified with $a_{i_{t}}\in\Delta\times\{{0}\}$. For the simplices $F$ and $G$, there are only two possibilities corresponding to compatible simplices $F'$, $G'$. Equality (\ref{equa:homotopieR}) is deduced from the following calculations. $+$ If $F=[a_{i_{0}},\dots,a_{i_{r-1}}]$ and $G=G'$, the expression $\Theta_{\Delta}(\delta{\boldsymbol 1}_{(F\ast G,\varepsilon)})$ takes the values $$ (-1)^{|(G,\varepsilon)|+1+|(F,\varepsilon)|_{<p}+\varepsilon_{p}} \Theta_{\Delta}({\boldsymbol 1}_{(F\ast a_{i_{r}}\ast G,\varepsilon)}+{\boldsymbol 1}_{(F\ast b_{i_{r-1}}\ast G,\varepsilon)}), $$ if $a_{i_{r-1}}\in F_{p}$, and $$ (-1)^{|(F,\varepsilon)|+|(G_{p},\varepsilon_{p})|+1} \Theta_{\Delta}({\boldsymbol 1}_{(F\ast a_{i_{r}}\ast G,\varepsilon)}) + (-1)^{|(F,\varepsilon)|_{<a}+\varepsilon_{a}} \Theta_{\Delta}({\boldsymbol 1}_{(F\ast b_{i_{r-1}}\ast G,\varepsilon)}), $$ if $a_{i_{r-1}}\in F_{a}$ with $a<p$, which implies $$ \Theta_{\Delta}(\delta{\boldsymbol 1}_{(F\ast G,\varepsilon)})= -{\boldsymbol 1}_{((F\ast a_{i_{r}})\cup G,\varepsilon)}+ {\boldsymbol 1}_{(F\cup (b_{i_{r-1}}\ast G),\varepsilon)}=0. $$ $+$ The argument is the same for $F=F'$ and $G=[b_{j_{1}},\dots,b_{j_{s}}]$. \item Suppose $F=\emptyset$. Let $b_{i_{0}}\in\Delta_{p}\times \{{1}\}$ be the first vertex of $G$ and $a_{i_{0}}$ the vertex of $\Delta_{p}\times \{{0}\}$ whose projection on $\Delta_{p}$ is equal to the projection of $b_{i_{0}}$. We have $\Theta_{\Delta}({\boldsymbol 1}_{(G,\varepsilon)})=0$.
To determine the second term, first note that \begin{eqnarray*} {\boldsymbol 1}_{(G,\varepsilon)}\ast a_{i_{0}} &=& (-1)^{|(G,\varepsilon)|_{>p}} ({\boldsymbol 1}_{(G_{p},\varepsilon_{p})}\ast a_{i_{0}})\otimes\dots\\ &=& (-1)^{|(G,\varepsilon)|_{>p}+1} ({\boldsymbol 1}_{(a_{i_{0}}\ast G_{p},\varepsilon_{p})})\otimes\dots \end{eqnarray*} It follows that \begin{eqnarray*} \Theta_{\Delta}\delta{\boldsymbol 1}_{(G,\varepsilon)} &=& (-1)^{|(G,\varepsilon)|} \Theta_{\Delta}({\boldsymbol 1}_{(G,\varepsilon)}\ast a_{i_{0}})\\ &=& (-1)^{|(G_{p},\varepsilon_{p})|+1} \Theta_{\Delta}({\boldsymbol 1}_{(a_{i_{0}}\ast G,\varepsilon)}) = - \,{\boldsymbol 1}_{(G,\varepsilon)}. \end{eqnarray*} \item The proof for the case $G=\emptyset$ is similar. \end{itemize} \end{proof} \begin{lemma}\label{lem:thetaaste} Let ${\boldsymbol 1}_{(F\ast G,\varepsilon)} \in \Hiru {\widetilde{N}} *{\Delta\otimes [0,1]}$, with $F\neq\emptyset$ and $G\neq\emptyset$. \begin{enumerate}[1)] \item Let $\ell\in\{1,\dots,n\}$. For any vertex $e\in\Delta_{\ell}$, we denote by $e_{0}\in \Delta_{\ell}\times\{{0}\}$ and $e_{1}\in \Delta_{\ell}\times\{{1}\}$ the vertices of $\Delta\otimes [0,1]$ corresponding to $e$. Then we have $$ \Theta_{\Delta}({\boldsymbol 1}_{(F \ast G,\varepsilon)})\ast e = \Theta_{\Delta}({\boldsymbol 1}_{(F \ast G,\varepsilon)}\ast e_{0}) + \Theta_{\Delta}({\boldsymbol 1}_{(F \ast G,\varepsilon)}\ast e_{1}). $$ \item For any virtual vertex ${\mathtt v}_{\ell}$, $\ell<n$, we have \begin{equation}\label{equa:thetavirtuel} \Theta_{\Delta}({\boldsymbol 1}_{(F \ast G,\varepsilon)}\ast {\mathtt v}_{\ell})= (\Theta_{\Delta}({\boldsymbol 1}_{(F \ast G,\varepsilon)}))\ast {\mathtt v}_{\ell}.
\end{equation} \end{enumerate} \end{lemma} \begin{proof} Let ${\boldsymbol 1}_{(F\ast G,\varepsilon)}= {\boldsymbol 1}_{(F_{0},\varepsilon_{0})}\otimes\dots\otimes {\boldsymbol 1}_{(F_{p}\ast G_{p},\varepsilon_{p})}\otimes\dots\otimes {\boldsymbol 1}_{(G_{n-1},\varepsilon_{n-1})}\otimes {\boldsymbol 1}_{G_{n}}$. 1) $\bullet$ Let $\ell<p$. Using \defref{def:etunpointun}, we get $\Theta_{\Delta}({\boldsymbol 1}_{(F \ast G,\varepsilon)}\ast e_{1})=0$ and the following equalities: \begin{eqnarray*} \Theta_{\Delta}({\boldsymbol 1}_{(F \ast G,\varepsilon)}\ast e_{0})=&&\\ (-1)^{|(F\ast G,\varepsilon)|_{>\ell}} \Theta_{\Delta} ( {\boldsymbol 1}_{(F_{0},\varepsilon_{0})} \otimes\dots\otimes ({\boldsymbol 1}_{(F_{\ell},\varepsilon_{\ell})}\ast e_{0}) \otimes \dots\otimes {\boldsymbol 1}_{(F_{p}\ast G_{p},\varepsilon_{p})} \otimes\dots\otimes {\boldsymbol 1}_{G_{n}} ) =&&\\ (-1)^{|(F\ast G,\varepsilon)|_{>\ell}+ |(F,\varepsilon)|_{<p}+1+|G_{p}|+\varepsilon_{p}} \dots\otimes ({\boldsymbol 1}_{(F_{\ell},\varepsilon_{\ell})}\ast e) \otimes \dots\otimes {\boldsymbol 1}_{(F_{p}\cup G_{p},\varepsilon_{p})} \otimes\dots =&& \\ (-1)^{|(F,\varepsilon)|_{<p}+|G_{p}|+\varepsilon_{p}} ( {\boldsymbol 1}_{(F_{0},\varepsilon_{0})} \otimes \dots\otimes {\boldsymbol 1}_{(F_{p}\cup G_{p},\varepsilon_{p})} \otimes\dots\otimes {\boldsymbol 1}_{G_{n}} )\ast e = (\Theta_{\Delta}({\boldsymbol 1}_{(F \ast G,\varepsilon)}))\ast e.&& \end{eqnarray*} \noindent $\bullet$ The argument is the same if $\ell>p$.
\noindent $\bullet$ If $\ell=p$, these formulas become: \begin{eqnarray*} \Theta_{\Delta}({\boldsymbol 1}_{(F \ast G,\varepsilon)}\ast e_{0}) = (-1)^{|(F\ast G,\varepsilon)|_{>p}} \Theta_{\Delta}( {\boldsymbol 1}_{(F_{0},\varepsilon_{0})} \otimes\dots\otimes {\boldsymbol 1}_{(F_{p}\ast G_{p},\varepsilon_{p})}\ast e_{0} \otimes\dots\otimes {\boldsymbol 1}_{G_{n}} )&&\\ = (-1)^{|(F\ast G,\varepsilon)|_{\geq p}+\varepsilon_{p}} \Theta_{\Delta}( {\boldsymbol 1}_{(F_{0},\varepsilon_{0})} \otimes\dots\otimes {\boldsymbol 1}_{(F_{p}\ast G_{p}\ast e_{0},\varepsilon_{p})} \otimes\dots\otimes {\boldsymbol 1}_{G_{n}} ).&& \end{eqnarray*} Write $F_{p}=[a_{i_{0}},\dots,a_{i_{r}}]$ and $G_{p}=[b_{j_{0}},\dots,b_{j_{s}}]$. If the simplices are not compatible, both sides of the previous equality are zero. The only cases giving compatible simplices are the following: \begin{equation}\label{equa:ellegalp} (*)\ a_{i_{t}}<e<a_{i_{t+1}} \text{ with } t\in\{0,\dots,r-1\}, \ \ (*) \ e<a_{i_{0}} \text{ and }e\in\Delta_{p}. \end{equation} We treat the first case, the second being identical. Notice that \begin{eqnarray*} F_{p}\ast G_{p}\ast e &=& [a_{i_{0}},\dots,a_{i_{r}}]\ast [b_{j_{0}},\dots,b_{j_{s}}]\ast e\\ &=& (-1)^{s+1+r-t} [a_{i_{0}},\dots,a_{i_{t}},e,a_{i_{t+1}},\dots, a_{i_{r}}]\ast [b_{j_{0}},\dots,b_{j_{s}}] \end{eqnarray*} and, setting $F_{p}(e)=[a_{i_{0}},\dots,a_{i_{t}},e,a_{i_{t+1}},\dots, a_{i_{r}}]$, $$ F_{p}(e)\cup G_{p} = [a_{i_{0}},\dots,a_{i_{t}},e,a_{i_{t+1}},\dots, a_{i_{r}}=b_{j_{0}},\dots,b_{j_{s}}] = (-1)^{s+r-t} (F_{p}\cup G_{p}) \ast e.
$$ It follows that \begin{eqnarray*} \Theta_{\Delta}({\boldsymbol 1}_{(F \ast G,\varepsilon)}\ast e_{0}) =&&\\ (-1)^{|(F\ast G,\varepsilon)|_{\geq p}+|(F,\varepsilon)|_{<p}+|G_{p}|+1} {\boldsymbol 1}_{(F_{0},\varepsilon_{0})} \otimes\dots\otimes {\boldsymbol 1}_{((F_{p}\cup G_{p})\ast e,\varepsilon_{p})} \otimes\dots\otimes {\boldsymbol 1}_{G_{n}}=&&\\ (-1)^{|(F,\varepsilon)|_{<p}+|G_{p}|+\varepsilon_{p}} ({\boldsymbol 1}_{(F_{0},\varepsilon_{0})} \otimes\dots\otimes ({\boldsymbol 1}_{(F_{p}\cup G_{p},\varepsilon_{p})}) \otimes\dots\otimes {\boldsymbol 1}_{G_{n}})\ast e =&&\\ (\Theta_{\Delta}({\boldsymbol 1}_{(F_{0},\varepsilon_{0})} \otimes\dots\otimes ({\boldsymbol 1}_{(F_{p}\ast G_{p},\varepsilon_{p})}) \otimes\dots\otimes {\boldsymbol 1}_{G_{n}}))\ast e.&& \end{eqnarray*} In the case (\ref{equa:ellegalp}), we have $\Theta_{\Delta}({\boldsymbol 1}_{(F \ast G,\varepsilon)}\ast e_{1})=0$, which gives the desired equality. \noindent A similar argument gives the result in the symmetric case of the previous one: ``$b_{j_{t}}<e<b_{j_{t+1}}$ with $t\in\{1,\dots,s-1\}$, or $b_{j_{s}}<e$ with $e\in\Delta_{p}$.'' 2) We now consider a virtual vertex ${\mathtt v}_{\ell}$. If $\varepsilon_{\ell}=1$, equality (\ref{equa:thetavirtuel}) is verified since both of its sides are zero. We therefore assume $\varepsilon_{\ell}=0$.
If $\ell<p$, we have: \begin{eqnarray*} \Theta_{\Delta}({\boldsymbol 1}_{(F \ast G,\varepsilon)}\ast {\mathtt v}_{\ell}) =&&\\ (-1)^{|(F\ast G,\varepsilon)|_{>\ell}+ |(F,\varepsilon)|_{<p}+1+|(G_{p},\varepsilon_{p})|} \dots\otimes ({\boldsymbol 1}_{(F_{\ell},0)}\ast {\mathtt v}_{\ell}) \otimes \dots\otimes {\boldsymbol 1}_{(F_{p}\cup G_{p},\varepsilon_{p})} \otimes\dots =&& \\ (-1)^{|(F,\varepsilon)|_{<p}+|(G_{p},\varepsilon_{p})|} ( {\boldsymbol 1}_{(F_{0},\varepsilon_{0})} \otimes \dots\otimes {\boldsymbol 1}_{(F_{p}\cup G_{p},\varepsilon_{p})} \otimes\dots \otimes {\boldsymbol 1}_{G_{n}} )\ast {\mathtt v}_{\ell}=&&\\ (\Theta_{\Delta}({\boldsymbol 1}_{(F \ast G,\varepsilon)}))\ast {\mathtt v}_{\ell}.&& \end{eqnarray*} The argument is similar for $\ell\geq p$. \end{proof} \begin{proposition}\label{prop:homotopieRglobal} Let $X$ be a filtered space and endow $X\times \mathbb{R}$ with the product filtration and a perversity~$\overline{p}$. Denoting also by $\overline{p}$ the induced perversity on $X$, there exists a linear map $\Theta \colon \lau {\widetilde{N}}*{\overline{p}}{X\times \mathbb{R};R}\to \lau {\widetilde{N}} {*-1} {\overline{p}}{X\times \mathbb{R};R}$, such that \begin{equation}\label{equa:homotopieXR} \Theta \circ \delta+\delta\circ \Theta= (I_{0}\circ{\rm pr})^*-{\rm id}. \end{equation} \end{proposition} \begin{proof} Let $\sigma\colon\Delta\to X\times \mathbb{R}$ be a regular simplex and $\omega\in \Hiru {\widetilde{N}} k{X\times \mathbb{R};R}$. 
We define $\hat{\sigma}\colon \cal L_\Delta = \Delta\otimes [0,1]\to X\times \mathbb{R}$ by $\hat{\sigma}(x,t)=({\rm pr}(\sigma(x)),t\,{\rm pr}_{2}(\sigma(x)))$ and we set $$\omega_{\Delta\otimes [0,1]}= \sum_{(F\ast G,\varepsilon){\vartriangleleft} \widetilde{\Delta \otimes [0,1]}} \omega_{\hat{\sigma}\circ \iota_{F\ast G}}(F\ast G,\varepsilon)\,{\boldsymbol 1}_{(F\ast G,\varepsilon)}, $$ where $\iota_{F\ast G}\colon F\ast G \hookrightarrow \Delta \otimes [0,1]$ is the canonical injection. By construction, $\omega_{\Delta \otimes [0,1]}\in\Hiru {\widetilde{N}} k {\Delta \otimes [0,1]}$, and we can define \begin{equation*}\label{equa:homotopiee} \Theta(\omega)_{\sigma}= \Theta_{\Delta }(\omega_{\Delta \otimes [0,1]}) \in \Hiru {\widetilde{N}} {k-1}{\Delta }. \end{equation*} We need to verify $\Theta(\omega)\in \Hiru {\widetilde{N}}*{X\times \mathbb{R};R}$. For this purpose, we consider a regular face operator $\delta_{\ell}\colon \nabla \to\Delta $ and $\tau=\sigma\circ\delta_{\ell}$. The property $\omega_{\Delta \otimes [0,1]}\in \Hiru{\widetilde{N}} *{\Delta \otimes [0,1]}$ becomes $\delta_{\ell}^*\,\omega_{\Delta \otimes [0,1]}=\omega_{\nabla \otimes [0,1]}$. The map ${\delta}^*_{\ell}\colon \Hiru {\widetilde{N}} * {\Delta \otimes [0,1]}\to \Hiru {\widetilde{N}}*{\nabla \otimes [0,1]}$ verifies $$ {\delta}^*_{\ell}({\boldsymbol 1}_{(F*G,\varepsilon)})= \left\{\begin{array}{cl} {\boldsymbol 1}_{(F*G,\varepsilon)} &\text{if}\; F*G{\vartriangleleft} \nabla \otimes [0,1],\\ 0&\text{if not,} \end{array}\right. 
$$ which implies the commutativity of the following diagram, \begin{equation*}\label{equa:biendefini2} \xymatrix{ \Hiru {\widetilde{N}} *{\Delta \otimes [0,1]} \ar[rr]^-{\Theta_{\Delta }} \ar[d]_{{\delta}^*_{\ell}} && \Hiru {\widetilde{N}}{*-1}{\Delta \otimes [0,1]} \ar[d]^{{\delta}^*_{\ell}} \\ \Hiru {\widetilde{N}} *{\nabla \otimes [0,1]} \ar[rr]^-{\Theta_\nabla } && \Hiru {\widetilde{N}} {*-1}{\nabla \otimes [0,1]}. } \end{equation*} We deduce \begin{eqnarray*} \delta^*_{\ell}(\Theta(\omega)_{\sigma}) &=& \delta^*_{\ell}\Theta_\Delta (\omega_{\Delta \otimes [0,1]})= \Theta_\nabla (\delta^*_{\ell}\omega_{\Delta \otimes [0,1]})= \Theta_\nabla (\omega_{\nabla \otimes [0,1]}) = \Theta(\omega)_\tau . \end{eqnarray*} The map $\Theta\colon \Hiru {\widetilde{N}} *{X\times \mathbb{R};R}\to \Hiru {\widetilde{N}} {*-1}{X\times \mathbb{R};R}$ is therefore well defined. From \propref{prop:homotopieR}, we deduce \begin{equation}\label{equa:thetadelta} (\Theta\delta+\delta\Theta)(\omega )_{\sigma} = \sum_{(F,\varepsilon){\vartriangleleft} \widetilde{\Delta \times\{0\}}} \omega_{\hat{\sigma}\circ \iota_{F}}(F,\varepsilon)\,{\boldsymbol 1}_{(F,\varepsilon)} - \sum_{(G,\varepsilon){\vartriangleleft} \widetilde{\Delta \times\{1\}}} \omega_{\hat{\sigma}\circ \iota_{G}}(G,\varepsilon)\,{\boldsymbol 1}_{(G,\varepsilon)}. \end{equation} Recall the canonical inclusions $\iota_{0},\,\iota_{1}\colon \Delta \to \Delta \otimes [0,1]$. 
If $F$ is a face of $\Delta \times \{0\}$ identified with $\nabla{\vartriangleleft}\Delta $, one has $\iota_{F}=\iota_{0}\circ\iota_{\nabla}$, whence $$\hat{\sigma}\circ\iota_{F}=\hat{\sigma}\circ\iota_{0}\circ \iota_{\nabla}=I_{0}\circ{\rm pr}\circ\sigma\circ\iota_{\nabla}$$ and $$ \sum_{(F,\varepsilon){\vartriangleleft} \widetilde{\Delta \times\{0\}}} \omega_{\hat{\sigma}\circ \iota_{F}}(F,\varepsilon)\,{\boldsymbol 1}_{(F,\varepsilon)} = (I_{0}\circ{\rm pr})^* \left( \sum_{(\nabla,\varepsilon){\vartriangleleft} \widetilde{\Delta }} \omega_{\sigma\circ \iota_{\nabla}}(\nabla,\varepsilon)\,{\boldsymbol 1}_{(\nabla,\varepsilon)} \right) = (I_{0}\circ{\rm pr})^*(\omega )_{\sigma}. $$ Using $\iota_{G}=\iota_{1}\circ \iota_{\nabla}$ and $\hat{\sigma}\circ\iota_{G}=\hat{\sigma}\circ\iota_{1}\circ \iota_{\nabla}=\sigma\circ\iota_{\nabla}$, we show, as above, that the second sum in (\ref{equa:thetadelta}) is equal to $\omega_{\sigma}$. In summary, we have shown $$(\Theta\delta+\delta\Theta)(\omega )_{\sigma}= (I_{0}\circ{\rm pr})^*(\omega )_{\sigma}-\omega_{\sigma},$$ from which the equality (\ref{equa:homotopieXR}) follows. As for the compatibility with the perverse degrees, we prove $\|\Theta(\omega)\|\leq \|\omega\|$, for each $\omega\in \Hiru {\widetilde{N}}*{X\times \mathbb{R};R}$, by arguing on the chosen basis. This inequality follows directly from the definition of $\Theta$, $$\|\Theta_{\Delta}({\boldsymbol 1}_{(F\ast G,\varepsilon)})\|_{\ell} =\|{\boldsymbol 1}_{(F\cup G,\varepsilon)}\|_{\ell} \leq \|{\boldsymbol 1}_{(F\ast G,\varepsilon)}\|_{\ell}, $$ for each $\ell\in\{1,\dots,n\}$. 
With (\ref{equa:homotopieXR}), we have constructed the homotopy\\ $\Theta \colon \lau {\widetilde{N}}*{\overline{p}}{X\times \mathbb{R};R}\to \lau {\widetilde{N}} {*-1} {\overline{p}}{X\times \mathbb{R};R}$. \end{proof} \begin{proof}[Proof of \thmref{prop:isoproduitR}] This is a direct consequence of the equalities ${\rm pr}\circ I_{0}={\rm pr}\circ I_{1}={\rm id}$ and $\lau \mathscr H*{\overline{p}}{I_{0}\circ {\rm pr}}={\rm id} $, proved in (\ref{equa:homotopieXR}). \end{proof} The following result is used in \secref{15}. \begin{proposition}\label{cor:SfoisX} Let $(X,\overline p)$ be a paracompact perverse space. Let $S^{\ell}$ be the unit sphere of $\mathbb{R}^{\ell+1}$. The canonical projection $p_{X}\colon {S}^\ell \times X\to X$, $(z,x)\mapsto x$, induces an isomorphism $\lau \mathscr H {k} {\overline{p}}{{S}^\ell\times X;R}\cong \lau \mathscr H {k} {\overline{p}}{X;R}\oplus \lau \mathscr H {k-\ell} {\overline{p}}{X;R}$. \end{proposition} \begin{proof} It suffices to use an induction on $\ell$ with the decomposition ${S}^{\ell} = ({S}^{\ell} \backslash \{\text{North pole}\}) \cup ({S}^\ell \backslash \{\text{South pole}\})$ (see \thmref{thm:MVcourte} and \thmref{prop:isoproduitR}). \end{proof} \section{Blown-up intersection cohomology of a cone. \thmref{prop:coneTW}.}\label{sec:cohomologiecone} In this section, $X$ is an $n$-dimensional \emph{compact} filtered space and we represent the open cone as the quotient ${\mathring{\tc}} X = X \times [0,\infty[ /X \times \{0\}$, whose apex is denoted by ${\mathtt w}$. The formal dimension of ${\mathring{\tc}} X$ is $n+1$, relative to the conical filtration, $({\mathring{\tc}} X)_{i}={\mathring{\tc}} X_{i-1}$ if $i\geq 1$ and $({\mathring{\tc}} X)_{0}=\{{\mathtt w}\}$. The purpose of this section is to prove the following theorem, cf. 
also \cite[Corollary 1.47]{CST1} and \cite[Proposition 3.1.1]{MR2210257}. \begin{theorem}\label{prop:coneTW} Let $X$ be a compact filtered space. Consider the open cone, ${\mathring{\tc}} X = X \times [0,\infty[ /X \times \{0\}$, equipped with the conical filtration and a perversity $\overline {p}$. We also denote by $\overline{p}$ the perversity induced on $X$. The following properties are verified for any commutative ring $R$. \begin{enumerate}[\rm (a)] \item The inclusion $\iota\colon X\to {\mathring{\tc}} X$, $x\mapsto [x,1]$, induces an isomorphism, $\lau \mathscr H {k}{\overline{p}}{{\mathring{\tc}} X;R}\xrightarrow[]{\cong} \lau \mathscr H{k}{\overline{p}}{X;R}$, for each $k\leq \overline{p}( {\mathtt w} )$. \item For each $k> \overline{p}( {\mathtt w} )$, we have $\lau \mathscr H {k}{\overline{p}}{{\mathring{\tc}} X;R}=0$. \end{enumerate} \end{theorem} \subsection{Simplices on a filtered space and its cone} First we link the complexes of $X$ and ${\mathring{\tc}} X$. The formal dimension of the cone being different from that of the original space, we introduce some operations which increase or decrease the length of the filtrations. \begin{itemize} \item If $\Delta=\Delta_{0}\ast\dots\ast \Delta_{n+1}$ is a regular simplex, of formal dimension $n+1$, we define a regular simplex, of formal dimension $n$, by $\widehat{\Delta}=\Delta_{1}\ast\dots\ast\Delta_{n+1}$. Its filtration is characterized by $\widehat{\Delta}_{i}=\Delta_{i+1}$, for each $i\in\{0,\dots,n\}$. \item Let $\sigma\colon \Delta_{\sigma}=\Delta_{0}\ast\dots\ast\Delta_{n+1}\to {\mathring{\tc}} X$ be a regular simplex of ${\mathring{\tc}} X$. 
Since $\sigma(\widehat{\Delta}_{\sigma})\subset X\times ]0,\infty[$, we define the restriction $$\hat{\sigma}\colon\Delta_{\hat{\sigma}}=\widehat{\Delta}_{\sigma}\xrightarrow[]{\sigma}X\times ]0,\infty[. $$ \item For each regular simplex of ${\mathring{\tc}} X$, $\sigma\colon \Delta_{\sigma}=\Delta_{0}\ast\dots\ast\Delta_{n+1}\to {\mathring{\tc}} X$, the image of a point $x\in \Delta_{\sigma}$ can be written as, $$\sigma(x)=[\sigma_{1}(x),\sigma_{2}(x)]\in {{\mathring{\tc}} X}=X\times [0,\infty[/X\times \{0\}.$$ Associated to the simplex $\sigma$, there is the following regular simplex of ${\mathring{\tc}} X$, $${\mathtt c}\sigma\colon \Delta_{{\mathtt c}\sigma}=\left(\{{\mathtt p}\}\ast\Delta_{0}\right)\ast\dots\ast \Delta_{n+1}\to {\mathring{\tc}} X,$$ defined by ${\mathtt c}\sigma((1-t){\mathtt p}+tx)=[\sigma_{1}(x),t\sigma_{2}(x)]$. Moreover, if one considers $\hat\sigma\colon \widehat{\Delta}_{\sigma}\to X\times ]0,\infty[\hookrightarrow {\mathring{\tc}} X$ as a filtered simplex of the cone, then ${\mathtt c}\hat{\sigma}$ is a face of ${\mathtt c} \sigma$. \end{itemize} The \emph{truncation} of a cochain complex is defined for all positive integers $s$ by \begin{equation}\label{equa:troncation} (\tau_{\leq s} C)^r=\left\{ \begin{array}{ccl} C^r &\text{if}& r<s,\\ \cal Z C^s &\text{if}& r=s,\\ 0 &\text{if}& r>s, \end{array}\right. \end{equation} where $\cal Z C^s$ denotes the $R$-module of cocycles of degree $s$. 
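The following standard identity, a direct consequence of this definition, records the effect of the truncation on cohomology and explains why the maps between the complexes of $X\times ]0,\infty[$ and ${\mathring{\tc}} X$ below are considered on the truncated complexes:

```latex
% Effect of the truncation (\ref{equa:troncation}) on cohomology:
% since $(\tau_{\leq s} C)^{s}=\cal Z C^{s}$ contains every cocycle of
% degree $s$ and $(\tau_{\leq s} C)^{s+1}=0$, the inclusion
% $\tau_{\leq s} C\hookrightarrow C$ induces
H^{r}(\tau_{\leq s} C)\cong
\left\{
\begin{array}{ccl}
H^{r}(C) &\text{if}& r\leq s,\\
0 &\text{if}& r>s.
\end{array}\right.
```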
\parr{Construction of $f\colon \Hiru {\widetilde{N}}*{X\times ]0,\infty[;R}\to \Hiru {\widetilde{N}}*{{\mathring{\tc}} X;R}$} Let $\sigma\colon \Delta_{\sigma}=\Delta_{0}\ast\dots\ast\Delta_{n+1}\to {\mathring{\tc}} X$ and $\omega\in \Hiru{\widetilde{N}} *{X\times ]0,\infty[;R}$. We denote by $\lambda_{{\mathtt c}\Delta_{0}}$ the cocycle ${\boldsymbol 1}_{(\emptyset,1)}+\sum_{e\in \cal V(\Delta_{0})}{\boldsymbol 1}_{([e],0)}\in N^0({\mathtt c}\Delta_{0})$. We set \begin{equation}\label{equa:lef} f(\omega)_{\sigma}=\lambda_{{\mathtt c}\Delta_{0}}\otimes \omega_{\hat{\sigma}}. \end{equation} \begin{proposition}\label{prop:lef} Let $({\mathring{\tc}} X,\overline{p})$ be a perverse space over the cone of the compact space $X$ and let $(X\times ]0,\infty[,\overline{p})$ be the induced perverse space. The correspondence defined above induces a cochain map, $$f\colon \tau_{\leq \overline{p}({\mathtt w})} \lau {\widetilde{N}}* {\overline{p}}{X\times ]0,\infty[;R}\to \tau_{\leq \overline{p}({\mathtt w})}\lau {\widetilde{N}} *{\overline{p}}{{\mathring{\tc}} X;R}.$$ \end{proposition} \begin{proof} First, we check that the map $f$, defined locally at the level of simplices, extends globally to $\Hiru {\widetilde{N}}*{X\times ]0,\infty[;R}$. For this, we must establish $\delta^*_{k} f(\omega)_{\sigma}=f(\omega)_{\sigma\circ\delta_{k}},$ for each $\omega\in \Hiru {\widetilde{N}}*{X\times ]0,\infty[;R}$, each regular simplex, $\sigma\colon \Delta_{\sigma}=\Delta_{0}\ast\dots\ast\Delta_{n+1}\to {\mathring{\tc}} X$, and each regular face operator, $\delta_{k}\colon \nabla\to \Delta_{\sigma}$, with $k\in\{0,\dots,\dim\Delta_{\sigma}\}$. 
Let $j_{0}$ denote the dimension of $\Delta_{0}$. To determine the effect of $\delta_{k}$ on the operation $\sigma\mapsto \hat{\sigma}$, we must distinguish $k> j_{0}$ from $k \leq j_{0}$. For the sake of convenience, we set $\delta_{s}={\rm id}$ if $s<0$. From the construction of $\hat{\sigma}$, we have $$\widehat{\sigma\circ\delta_{k}}=\left\{ \begin{array}{ccl} \hat{\sigma}\circ \delta_{k-j_{0}-1} &\text{if}& k>j_{0},\\ \hat{\sigma} &\text{if}& k\leq j_{0}, \end{array}\right.$$ which implies $\widehat{\sigma\circ\delta_{k}}=\hat{\sigma}\circ \delta_{k-j_{0}-1}$, with the previous convention. We conclude $$\delta^*_{k}f(\omega)_{\sigma}=\delta^*_{k}\left(\lambda_{{\mathtt c}\Delta_{0}}\otimes \omega_{\hat{\sigma}}\right)=\left\{ \begin{array}{ccl} \lambda_{{\mathtt c}\Delta_{0}}\otimes \delta^*_{k-j_{0}-1}\omega_{\hat{\sigma}} &\text{if}& k>j_{0},\\ \lambda_{{\mathtt c} \nabla_{0}}\otimes \omega_{\hat{\sigma}} &\text{if}& k\leq j_{0}. \end{array}\right.$$ It follows that $\delta^*_{k}f(\omega)_{\sigma}=\lambda_{{\mathtt c} \nabla_{0}}\otimes \omega_{\widehat{\sigma\circ\delta_{k}}}= f(\omega)_{\sigma\circ\delta_{k}}$. 
Since the 0-cochain $\lambda_{{\mathtt c}\Delta_{0}}$ is a cocycle, the compatibility with the differentials is immediate from the equalities $$\delta\left(f(\omega)_{\sigma}\right)=\delta\left(\lambda_{{\mathtt c}\Delta_{0}}\otimes \omega_{\hat{\sigma}}\right)= \lambda_{{\mathtt c}\Delta_{0}}\otimes \delta \omega_{\hat{\sigma}}=f(\delta\,\omega)_{\sigma}.$$ The map $f$ being compatible with the differentials, it remains to show that the image under $f$ of a $\overline{p}$-allowable cochain, $\omega\in \Hiru {\widetilde{N}}*{X\times ]0,\infty[;R}$, is a $\overline{p}$-allowable cochain in $\Hiru {\widetilde{N}}*{{\mathring{\tc}} X;R}$. We choose $\omega$ of degree less than or equal to $\overline{p}({\mathtt w})$ and refer to \defref{def:admissible} for the property of $\overline{p}$-allowability. For the stratum reduced to ${\mathtt w}$, the allowability comes directly from $\|f(\omega)_{\sigma}\|_{n+1}\leq |\omega_{\hat{\sigma}}|\leq \overline{p}({\mathtt w}).$ Now consider a singular stratum $S$ of $X$ and a regular simplex $\sigma\colon \Delta_{\sigma}=\Delta_{0}\ast\dots\ast\Delta_{n+1}\to {\mathring{\tc}} X$, such that $\sigma(\Delta_{\sigma})\cap (S\times ]0,\infty[)\neq \emptyset$. Let $\ell={\rm codim\,}_{\!X\times ]0,\infty[}(S\times ]0,\infty[)$ and notice the equivalence of the conditions $\sigma(\Delta_{\sigma})\cap (S\times ]0,\infty[)\neq \emptyset$ and $\hat{\sigma}(\Delta_{\hat{\sigma}})\cap (S\times ]0,\infty[)\neq \emptyset$. 
For such a stratum, we have $\ell\in \{1,\dots,n\}$ and $\|f(\omega)_{\sigma}\|_{\ell}=\|\lambda_{{\mathtt c}\Delta_{0}}\otimes \omega_{\hat{\sigma}}\|_{\ell}= \|\omega_{\hat{\sigma}}\|_{\ell}$. The result is a consequence of the inequality $\|\omega_{\hat{\sigma}}\|_{\ell}\leq\|\omega\|_{S\times ]0,\infty[}\leq \overline{p}(S\times ]0,\infty[)$, arising from the $\overline{p}$-allowability of $\omega$. \end{proof} \parr{Construction of $g\colon \Hiru {\widetilde{N}}*{{\mathring{\tc}} X;R}\to \Hiru {\widetilde{N}}*{ X\times ]0,\infty[;R}$} Let $\omega\in\Hiru {\widetilde{N}} *{{\mathring{\tc}} X;R}$ and let $\tau\colon \Delta_{\tau}\to X\times ]0,\infty[$ be a regular simplex. We denote by ${\mathtt c}\tau\colon \Delta_{{\mathtt c}\tau}=\{{\mathtt p}\}\ast \Delta_{\tau}\to {\mathring{\tc}} X$ the cone over $\tau$ defined above. Notice $\widetilde{\Delta_{{\mathtt c}\tau}}={\mathtt c}\{{\mathtt p}\}\times \widetilde{\Delta_{\tau}}$. \emph{Let ${\mathtt v}_{0}$ be the apex of the cone over the component of filtration degree 0 of a filtered simplex.} The cone ${\mathtt c}\{{\mathtt p}\}$ having two vertices, ${\mathtt p}$ and ${\mathtt v}_{0}$, the cochain $\omega_{{\mathtt c}\tau}$ decomposes as \begin{equation}\label{equa:leg} \omega_{{\mathtt c}\tau}={\boldsymbol 1}_{{\mathtt p}}\otimes \gamma_{{\mathtt p}}+{\boldsymbol 1}_{{\mathtt v}_{0}}\otimes \gamma_{{\mathtt v}_{0}}+ {\boldsymbol 1}_{{\mathtt p}\ast{\mathtt v}_{0}}\otimes \gamma'_{{\mathtt v}_{0}}, \end{equation} with $\gamma_{{\mathtt p}},\gamma_{{\mathtt v}_{0}},\gamma'_{{\mathtt v}_{0}}\in \Hiru {\widetilde{N}} *{\Delta_{\tau}}$. 
We set $$g(\omega)_{\tau}=\gamma_{{\mathtt v}_{0}}.$$ \begin{proposition}\label{prop:leg} Let $({\mathring{\tc}} X,\overline{p})$ be a perverse space with $X$ compact and let $(X\times ]0,\infty[,\overline{p})$ be the induced perverse space. The correspondence defined above induces a cochain map, $$g\colon \tau_{\leq \overline{p}({\mathtt w})}\lau {\widetilde{N}} *{\overline{p}}{{\mathring{\tc}} X;R}\to \tau_{\leq \overline{p}({\mathtt w})} \lau {\widetilde{N}} * {\overline{p}}{X\times ]0,\infty[;R}.$$ \end{proposition} The proof follows the pattern of that of \propref{prop:lef}; we leave it to the reader. \noindent \emph{Let us now make the compositions $f\circ g$ and $g\circ f$ explicit.} \begin{enumerate}[\rm (a)] \item Let $\omega\in\Hiru {\widetilde{N}} *{X\times ]0,\infty[;R}$. Consider a regular simplex, $\tau\colon \Delta_{\tau}\to X\times]0,\infty[$, and its associated map ${\mathtt c}\tau\colon \{{\mathtt p}\}\ast\Delta_{\tau}\to {\mathring{\tc}} X$. Following (\ref{equa:lef}), one has $$f(\omega)_{{\mathtt c}\tau}=\lambda_{{\mathtt c}\{{\mathtt p}\}}\otimes \omega_{\tau} = {\boldsymbol 1}_{{\mathtt p}}\otimes \omega_{\tau}+ {\boldsymbol 1}_{{\mathtt v}_{0}}\otimes \omega_{\tau}.$$ It follows, according to (\ref{equa:leg}), that $g(f(\omega))_{\tau}=\omega_{\tau}$ and $g\circ f={\rm id}$. \item Let $\omega\in\Hiru {\widetilde{N}} *{{\mathring{\tc}} X;R}$. Consider a regular simplex $\sigma\colon\Delta_{\sigma}=\Delta_{0}\ast\dots\ast \Delta_{n+1}\to {\mathring{\tc}} X$, and its associated map ${\mathtt c}\sigma\colon (\{{\mathtt p}\}\ast\Delta_{0})\ast\dots\ast \Delta_{n+1}\to {\mathring{\tc}} X$. 
The cochain $\omega_{{\mathtt c}\sigma}$ decomposes as \begin{equation}\label{equa:omegacsigma} \omega_{{\mathtt c}\sigma}=\sum_{F{\vartriangleleft} {\mathtt c}\Delta_{0}}{\boldsymbol 1}_{F}\otimes \gamma_{F}+ \sum_{F{\vartriangleleft} {\mathtt c}\Delta_{0}}{\boldsymbol 1}_{{\mathtt p}\ast F}\otimes \gamma'_{F}+{\boldsymbol 1}_{{\mathtt p}}\otimes \gamma_{\emptyset}. \end{equation} Since the cochain $\omega$ is globally defined and the simplex ${\mathtt c}\hat{\sigma}$ is a face of ${\mathtt c}\sigma$, we deduce $\omega_{{\mathtt c}\hat{\sigma}}= {\boldsymbol 1}_{{\mathtt v}_{0}}\otimes \gamma_{{\mathtt v}_{0}}+ {\boldsymbol 1}_{{\mathtt p}\ast{\mathtt v}_{0}}\otimes \gamma'_{{\mathtt v}_{0}}+ {\boldsymbol 1}_{{\mathtt p}}\otimes \gamma_{\emptyset}$. It follows: \begin{equation}\label{equa:fg} f(g(\omega))_{\sigma}=\lambda_{{\mathtt c}\Delta_{0}}\otimes g(\omega)_{\hat{\sigma}}=\lambda_{{\mathtt c}\Delta_{0}}\otimes \gamma_{{\mathtt v}_{0}}. 
\end{equation} \end{enumerate} \parr{Construction of a homotopy $H\colon\Hiru {\widetilde{N}} *{{\mathring{\tc}} X;R}\to \Hiru {\widetilde{N}} {*-1}{{\mathring{\tc}} X;R}$} If $\sigma\colon\Delta_{\sigma}=\Delta_{0}\ast\dots\ast \Delta_{n+1}\to {\mathring{\tc}} X$ is a regular simplex, we define a map $H\colon \Hiru {\widetilde{N}}{*}{\Delta_{{\mathtt c}\sigma}}\to \Hiru {\widetilde{N}} {*-1}{\Delta_{\sigma}}$, i.e., $$H\colon \Hiru N{*}{{\mathtt c}({\mathtt p}\ast\Delta_{0})}\otimes \Hiru N*{{\mathtt c}\Delta_{1}}\otimes\dots\otimes \Hiru N*{\Delta_{n+1}} \to \Hiru N{*-1}{{\mathtt c}\Delta_{0}}\otimes \dots\otimes \Hiru N*{\Delta_{n+1}}.$$ We decompose $\omega_{{\mathtt c}\sigma}\in \Hiru{\widetilde{N}} {*}{\Delta_{{\mathtt c}\sigma}}$ as in formula (\ref{equa:omegacsigma}) and set: \begin{equation}\label{equa:leH} (H(\omega))_{\sigma}= \sum_{{\mathtt v}_{0}\neq F{\vartriangleleft} {\mathtt c}\Delta_{0}}(-1)^{|F|+1} {\boldsymbol 1}_{F}\otimes \gamma'_{F}+\lambda_{\Delta_{0}}\otimes \gamma'_{{\mathtt v}_{0}}, \end{equation} where ${\mathtt v}_{0}$ is the apex of the cone over the component of filtration degree 0 and $\lambda_{\Delta_{0}}$ is the sum of the elementary 0-cochains on $\Delta_{0}$. \begin{proposition}\label{prop:leH} Let $({\mathring{\tc}} X,\overline{p})$ be a perverse space with $X$ compact. 
\begin{enumerate}[\rm (a)] \item The equality (\ref{equa:leH}) induces a linear map, $H\colon\Hiru {\widetilde{N}} *{{\mathring{\tc}} X;R}\to \Hiru {\widetilde{N}} {*-1} {{\mathring{\tc}} X;R}$, verifying $$\delta\circ H+H\circ \delta={\rm id} -f\circ g.$$ \item Using the notation introduced in (\ref{equa:troncation}), the map $H$ induces a map, $$H\colon \tau_{\leq \overline{p}({\mathtt w})}\lau {\widetilde{N}} *{\overline{p}}{{\mathring{\tc}} X;R}\to \tau_{\leq \overline{p}({\mathtt w})} \lau {\widetilde{N}}{*-1}{\overline{p}}{{\mathring{\tc}} X;R}.$$ \end{enumerate} \end{proposition} \begin{proof} (a) We must establish the equality $\delta_{k}^*H(\omega)_{\sigma}=H(\omega)_{\sigma\circ\delta_{k}},$ for each cochain $\omega\in \Hiru {\widetilde{N}} *{{\mathring{\tc}} X}$, each regular simplex $\sigma\colon\Delta_{\sigma}=\Delta_{0}\ast\dots\ast\Delta_{n+1}\to {\mathring{\tc}} X$ and each regular face operator $\delta_{k}\colon D=D_{0}\ast\dots\ast D_{n+1}\to \Delta_{\sigma}$, with $k\in\{0,\dots,\dim\Delta_{\sigma}\}$. Denoting by $\delta^{{\mathtt p}}_{*}$ the regular face operators of $\{{\mathtt p}\}\ast \Delta_{\sigma}$, we can write ${\mathtt c}(\sigma\circ\delta_{k})={\mathtt c}\sigma \circ \delta^{{\mathtt p}}_{k+1}$. If $k>\dim\Delta_{0}$, we set $k^{\circ}=k-\dim\Delta_{0}-1$ and denote by $\delta^{\circ}_{k^{\circ}}\colon D_{1}\ast\dots\ast D_{n+1}\to \Delta_{1}\ast\dots\ast \Delta_{n+1}$ the induced face. 
Following (\ref{equa:leH}), we have: $$\delta_{k}^*H(\omega)_{\sigma} =\left\{\begin{array}{cl} \sum_{{\mathtt v}_{0}\neq F{\vartriangleleft} {\mathtt c} D_{0}}(-1)^{|F|+1} {\boldsymbol 1}_{F}\otimes \gamma'_{F}+\lambda_{D_{0}}\otimes \gamma'_{{\mathtt v}_{0}} & \text{if } k\leq\dim\Delta_{0},\\[.2cm] \sum_{{\mathtt v}_{0}\neq F{\vartriangleleft} {\mathtt c}\Delta_{0}}(-1)^{|F|+1} {\boldsymbol 1}_{F}\otimes \delta_{k^{\circ}}^{\circ,*}\gamma'_{F}+ \lambda_{\Delta_{0}}\otimes \delta_{k^{\circ}}^{\circ,*}\gamma'_{{\mathtt v}_{0}} & \text{if } k>\dim\Delta_{0}. \end{array}\right. $$ Using the equality $\omega_{{\mathtt c}(\sigma\circ\delta_{k})}=\omega_{{\mathtt c}\sigma\circ\delta^{{\mathtt p}}_{k+1}}= \delta_{k+1}^{{\mathtt p},*}\omega_{{\mathtt c}\sigma}$ and (\ref{equa:omegacsigma}), we get: $$ \omega_{{\mathtt c}(\sigma\circ\delta_{k})} = \left\{\begin{array}{cl} (\sum_{F{\vartriangleleft} {\mathtt c} D_{0}}{\boldsymbol 1}_{F}\otimes \gamma_{F}+ {\boldsymbol 1}_{{\mathtt p}\ast F}\otimes \gamma'_{F}) +{\boldsymbol 1}_{{\mathtt p}}\otimes \gamma_{\emptyset} & \text{if } k\leq \dim \Delta_{0},\\[.2cm] (\sum_{F{\vartriangleleft} {\mathtt c} D_{0}}{\boldsymbol 1}_{F}\otimes \delta_{k^{\circ}}^{\circ,*}\gamma_{F}+ {\boldsymbol 1}_{{\mathtt p}\ast F}\otimes \delta_{k^{\circ}}^{\circ,*}\gamma'_{F}) +{\boldsymbol 1}_{{\mathtt p}}\otimes \delta_{k^{\circ}}^{\circ,*}\gamma_{\emptyset} & \text{if } k>\dim\Delta_{0}, \end{array}\right. 
$$ which gives \begin{eqnarray*} H(\omega)_{\sigma\circ\delta_{k}} &=& \left\{\begin{array}{cl} (\sum_{{\mathtt v}_{0}\neq F{\vartriangleleft} {\mathtt c} D_{0}}(-1)^{|F|+1} {\boldsymbol 1}_{F}\otimes \gamma'_{F})+ \lambda_{D_{0}}\otimes \gamma'_{{\mathtt v}_{0}} & \text{if } k\leq \dim \Delta_{0},\\[.2cm] (\sum_{{\mathtt v}_{0}\neq F{\vartriangleleft} {\mathtt c} D_{0}}(-1)^{|F|+1} {\boldsymbol 1}_{F}\otimes \delta_{k^{\circ}}^{\circ,*}\gamma'_{F})+ \lambda_{D_{0}}\otimes \delta_{k^{\circ}}^{\circ,*}\gamma'_{{\mathtt v}_{0}} & \text{if } k>\dim\Delta_{0}, \end{array}\right.\\[.2cm] &=& \delta_{k}^* H(\omega)_{\sigma}. \end{eqnarray*} Let us study the behavior of $H$ with respect to the differentials. Let $\sigma\colon\Delta_{\sigma}=\Delta_{0}\ast\dots\ast\Delta_{n+1}\to {\mathring{\tc}} X$ be a regular simplex and $\omega\in{\widetilde{N}}^*({\mathring{\tc}} X;R)$. Starting from the equality (\ref{equa:omegacsigma}), we compute the differential, discarding the terms whose image under $H$ is zero. \begin{eqnarray}\label{previa} (H(\delta \omega))_{\sigma} &=& H\left( \sum_{e\in\cal V({\mathtt c}\Delta_{0})} {\boldsymbol 1}_{{\mathtt p}\ast e}\otimes \gamma_{\emptyset} + \sum_{F{\vartriangleleft}{\mathtt c}\Delta_{0}}{\boldsymbol 1}_{F\ast {\mathtt p}}\otimes \gamma_{F}+ \right.\\\nonumber && \left. \sum_{\substack{F{\vartriangleleft}{\mathtt c}\Delta_{0} \\ e\in\cal V({\mathtt c}\Delta_{0})\phantom{-}}} {\boldsymbol 1}_{{\mathtt p}\ast F\ast e}\otimes \gamma'_{F}+ \sum_{F{\vartriangleleft}{\mathtt c}\Delta_{0}}(-1)^{|F|+1}{\boldsymbol 1}_{{\mathtt p}\ast F}\otimes \delta \gamma'_{F}\right). 
\label{eqna:homotopie} \end{eqnarray} Notice \begin{eqnarray*} \sum_{e\in\cal V({\mathtt c}\Delta_{0})} {\boldsymbol 1}_{{\mathtt p}\ast e}\otimes \gamma_{\emptyset} + \sum_{F{\vartriangleleft}{\mathtt c}\Delta_{0}}{\boldsymbol 1}_{F\ast {\mathtt p}}\otimes \gamma_{F} =&&\\ \sum_{e\in\cal V(\Delta_{0})} {\boldsymbol 1}_{{\mathtt p}\ast e}\otimes \gamma_{\emptyset} + \sum_{{\mathtt v}_{0}\neq F{\vartriangleleft}{\mathtt c}\Delta_{0}}{\boldsymbol 1}_{F\ast {\mathtt p}}\otimes \gamma_{F} +{\boldsymbol 1}_{{\mathtt p}\ast{\mathtt v}_{0}}\otimes (\gamma_{\emptyset}-\gamma_{{\mathtt v}_{0}}).&& \end{eqnarray*} Replacing in \eqref{previa} and expanding the definition of $H$, we get: \begin{eqnarray*} (H(\delta \omega))_{\sigma} &=& -\sum_{e\in\cal V(\Delta_{0})}{\boldsymbol 1}_{e}\otimes \gamma_{\emptyset} +\sum_{{\mathtt v}_{0}\neq F{\vartriangleleft}{\mathtt c}\Delta_{0}}{\boldsymbol 1}_{F}\otimes \gamma_{F} +\lambda_{\Delta_{0}}\otimes (\gamma_{\emptyset}-\gamma_{{\mathtt v}_{0}}-\delta\gamma'_{{\mathtt v}_{0}})\\ && +\sum_{\substack{F{\vartriangleleft}{\mathtt c}\Delta_{0} \\ e\in\cal V({\mathtt c}\Delta_{0})\phantom{-}}} (-1)^{|F|}{\boldsymbol 1}_{F\ast e}\otimes \gamma'_{F} + \sum_{{\mathtt v}_{0}\neq F{\vartriangleleft}{\mathtt c}\Delta_{0}}{\boldsymbol 1}_{F}\otimes \delta \gamma'_{F}. 
\end{eqnarray*} On the other hand, the quantity $\delta\circ H$ can be written \begin{eqnarray*} \delta(H(\omega))_{\sigma} &=& \sum_{\substack{{\mathtt v}_{0}\neq F{\vartriangleleft}{\mathtt c}\Delta_{0} \\ e\in\cal V({\mathtt c}\Delta_{0})\phantom{-}}} (-1)^{|F|+1}{\boldsymbol 1}_{F\ast e}\otimes \gamma'_{F} -\sum_{{\mathtt v}_{0}\neq F{\vartriangleleft}{\mathtt c}\Delta_{0}} {\boldsymbol 1}_{F}\otimes \delta \gamma'_{F}\\ && +\sum_{e\in\cal V(\Delta_{0})}{\boldsymbol 1}_{e\ast{\mathtt v}_{0}}\otimes \gamma'_{{\mathtt v}_{0}} +\lambda_{\Delta_{0}}\otimes \delta\gamma'_{{\mathtt v}_{0}}. \end{eqnarray*} Using (\ref{equa:fg}), the sum of the two expressions reduces to: \begin{equation}\label{equa:deltaH} H(\delta\omega)_{\sigma}+\delta H(\omega)_{\sigma} = \sum_{F{\vartriangleleft} {\mathtt c}\Delta_{0}}{\boldsymbol 1}_{F}\otimes \gamma_{F} -\lambda_{{\mathtt c}\Delta_{0}}\otimes \gamma_{{\mathtt v}_{0}}= \omega_{\sigma}-(f\circ g)(\omega)_{\sigma}. \end{equation} (b) As in the proof of \propref{prop:lef}, we are reduced to considering a singular stratum $T$ of ${\mathring{\tc}} X$, a $\overline{p}$-allowable cochain $\omega\in \Hiru {\widetilde{N}} *{{\mathring{\tc}} X}$, of degree less than or equal to $\overline{p}({\mathtt w})$, and a regular simplex, $\sigma\colon\Delta_{\sigma}\to {\mathring{\tc}} X$, with $\sigma(\Delta_{\sigma})\cap T\neq\emptyset$. Let $\ell={\rm codim\,}_{\!{\mathring{\tc}} X} T\in \{1,\dots,n+1\}$. Notice that ${\mathtt c}\sigma(\Delta_{{\mathtt c}\sigma})\cap T\neq\emptyset$. Thus, according to the definition of the perverse degree (cf. 
\defref{def:degrepervers}), we have \begin{eqnarray*} \overline{p}(T) &\geq& \|\omega\|_{T}\geq \|\omega_{{\mathtt c}{\sigma}}\|_{\ell} = \max\{\|{\boldsymbol 1}_{F}\otimes\gamma_{F}\|_{\ell},\, \|{\boldsymbol 1}_{{\mathtt p}\ast F}\otimes \gamma'_{F}\|_{\ell},\, \|{\boldsymbol 1}_{{\mathtt p}}\otimes \gamma_{\emptyset}\|_{\ell}\mid F{\vartriangleleft}{\mathtt c}\Delta_{0}\} \\ &\geq& \max\{\|{\boldsymbol 1}_{{\mathtt p}\ast F}\otimes \gamma'_{F}\|_{\ell}\mid F{\vartriangleleft}{\mathtt c}\Delta_{0}\}, \end{eqnarray*} where the equality uses (\ref{equa:degrepervers}) and (\ref{equa:omegacsigma}). We develop this expression by distinguishing two cases. \begin{itemize} \item Let $\ell\neq n+1$. By definition, for any face $F{\vartriangleleft} {\mathtt c}\Delta_{0}$, we have the equality $\|{\boldsymbol 1}_{{\mathtt p}\ast F}\otimes \gamma'_{F}\|_{\ell}= \|{\boldsymbol 1}_{F}\otimes \gamma'_{F}\|_{\ell}$ if $F\neq {\mathtt v}_{0}$, and $\|{\boldsymbol 1}_{{\mathtt p}\ast {\mathtt v}_{0}}\otimes \gamma'_{{\mathtt v}_{0}}\|_{\ell}=\|\lambda_{\Delta_{0}}\otimes \gamma'_{{\mathtt v}_{0}}\|_{\ell}$. It follows that $$\overline{p}(T)\geq \max\{\|{\boldsymbol 1}_{F}\otimes\gamma'_{F}\|_{\ell},\, \|\lambda_{\Delta_{0}}\otimes \gamma'_{{\mathtt v}_{0}}\|_{\ell}\mid {\mathtt v}_{0}\neq F{\vartriangleleft} {\mathtt c}\Delta_{0}\} =\|H(\omega)_{\sigma}\|_{\ell},$$ where the last equality uses (\ref{equa:degrepervers}) and (\ref{equa:leH}). We deduce $\overline{p}(T)\geq \|H(\omega)\|_{T}$. \item Let $\ell=n+1$.
In this case we have \begin{eqnarray*} \|H(\omega)_{\sigma}\|_{n+1} &=& \max\{\|{\boldsymbol 1}_{{\mathtt p}\ast F}\otimes \gamma'_{F}\|_{n+1},\, \|\lambda_{\Delta_{0}}\otimes \gamma'_{{\mathtt v}_{0}}\|_{n+1}\mid {\mathtt v}_{0}\neq F{\vartriangleleft}{\mathtt c}\Delta_{0}\}\\ &=& \max\{|\gamma'_{F}|,\,|\gamma'_{{\mathtt v}_{0}}|\mid F{\vartriangleleft}\Delta_{0}\} \leq |\omega|-1\leq \overline{p}({\mathtt w}), \end{eqnarray*} where the first equality uses (\ref{equa:degrepervers}) and (\ref{equa:leH}). It follows that $\overline{p}({\mathtt w})\geq \|H(\omega)\|_{{\mathtt w}}$. \end{itemize} We have shown $\|H(\omega)\|\leq \overline{p}$. The property $\|\delta H(\omega)\|\leq \overline{p}$ follows from (\ref{equa:deltaH}). \end{proof} The computation of the cohomology of a cone follows from the properties of $f$, $g$ and $H$. \begin{proof}[Proof of \thmref{prop:coneTW}] (a) From Propositions \ref{prop:lef}, \ref{prop:leg} and \ref{prop:leH}, we deduce that the map $g\colon \tau_{\leq \overline{p}({\mathtt w})}\lau {\widetilde{N}} * {\overline{p}}{{\mathring{\tc}} X;R}\to \tau_{\leq \overline{p}({\mathtt w})}\lau {\widetilde{N}}*{\overline{p}}{X\times ]0,\infty[;R}$ is a quasi-isomorphism. We know from \thmref{prop:isoproduitR} that the inclusion $I_{1}\colon X\to X\times ]0,\infty[$ induces a quasi-isomorphism. It remains to prove $I_{1}^*\circ g =\iota^*$. The stratifications involved on $X \times ]0,\infty[$ differ in these two quasi-isomorphisms, but the cohomologies are the same (see subsection \ref{SF1}). To this end, consider $\omega\in \tau_{\leq\overline{p}({\mathtt w})}\lau {\widetilde{N}} * {\overline{p}}{{\mathring{\tc}} X;R}$ and a regular simplex $\tau\colon \Delta_{\tau}\to X$. By definition, we have $(\iota^*\omega)_{\tau}=\omega_{\iota\circ\tau}$.
Notice that $\iota\circ \tau ={\mathtt c}(I_{1}\circ\tau)\circ\delta_{0}$, where $\delta_{0}(x)=0 \cdot {\mathtt p}+1 \cdot x$. It then follows that $$(\iota^*\omega)_{\tau}= \omega_{{\mathtt c}(I_{1}\circ\tau)\circ\delta_{0}}=\delta^*_{0}\,\omega_{{\mathtt c}(I_{1}\circ\tau)} =_{(\ref{equa:leg})}\gamma_{{\mathtt v}_{0}} =g(\omega)_{I_{1}\circ\tau} =I_{1}^*(g(\omega))_{\tau}.$$ To prove part (b), we consider a cocycle $\omega\in \lau {\widetilde{N}} k {\overline{p}}{{\mathring{\tc}} X;R}$ with $k>\overline{p}({\mathtt w})$. Following \propref{prop:leH}, we have $\delta H(\omega)=\omega-f(g(\omega))$, so it suffices to establish the equality $f(g(\omega))=0$. We prove it by contradiction, assuming that there is a regular simplex ${\sigma}\colon \Delta_{\sigma}\to {\mathring{\tc}} X$ such that $f(g(\omega))_{\sigma}\neq 0$. Following (\ref{equa:fg}), this implies $\gamma_{{\mathtt v}_{0}}\neq 0$. We get a contradiction: \begin{eqnarray*} k> \overline{p}({\mathtt w}) &\geq& \|f(g(\omega))\|_{{\mathtt w}} \geq \|f(g(\omega))_{{\mathtt c}{\sigma}}\|_{n+1} = \|\lambda_{{\mathtt c}({\mathtt p}\ast\Delta_{0})}\otimes \gamma_{{\mathtt v}_{0}}\|_{n+1}\\ &=& \|\lambda_{{\mathtt p}\ast\Delta_{0}}\otimes \gamma_{{\mathtt v}_{0}}\|_{n+1} =|\gamma_{{\mathtt v}_{0}}|=k. \end{eqnarray*} \end{proof} The proof of the topological invariance theorem of \secref{15} is based on the following excision theorem. \subsection{Relative cohomology}\label{RelCoho} Let $X$ be a filtered space and $Y \subset X$ a subspace endowed with the induced stratification. Consider a perversity $\overline{p}$ on $X$. We also denote by $\overline p$ the induced perversity on $Y$.
The \emph{complex of relative $\overline{p}$-intersection cochains} is the direct sum $\lau {\widetilde{N}} * {\overline p}{X,Y;R} = \lau {\widetilde{N}} * {\overline p}{X;R} \oplus \lau {\widetilde{N}} {*-1} {\overline p}{Y;R}$, endowed with the differential $D(\alpha,\beta) = (d\alpha, \alpha - d\beta)$. Its homology is the \emph{relative blown-up $\overline{p}$-intersection cohomology of the perverse pair $(X,Y,\overline{p})$,} denoted by $\lau \mathscr H{*}{\overline{p}}{X,Y;R}$. By definition, we have a long exact sequence associated to the perverse pair $(X,Y,\overline{p})$: \begin{equation}\label{equa:suiterelative2} \ldots\to \lau \mathscr H {i}{\overline{p}}{X;R}\stackrel{\iota^*}{\to} \lau \mathscr H {i}{\overline{p}}{Y;R}\to \lau \mathscr H{i+1}{\overline{p}}{X,Y;R}\stackrel{{\rm pr}^*}{\to} \lau \mathscr H{i+1}{\overline{p}}{X;R}\to \ldots, \end{equation} where ${\rm pr} \colon \lau {\widetilde{N}} * {\overline p}{X,Y;R} \to \lau {\widetilde{N}} * {\overline p}{X;R}$ is defined by ${\rm pr}(\alpha,\beta) = \alpha$ and $\iota^*$ comes from the restriction map $\iota \colon \lau {\widetilde{N}} * {\overline p}{X;R} \rightarrow \lau {\widetilde{N}} * {\overline p}{Y;R}$. \begin{proposition}\label{prop:Excisionhomologie} Let $(X,\overline p)$ be a paracompact perverse space. If $F$ is a closed subset of $X$ and $U$ an open subset of $X$ with $F\subset U$, then the natural inclusion $(X\backslash F,U\backslash F)\hookrightarrow (X,U)$ induces the isomorphism $$ \lau \mathscr H * {\overline{p}} {X,U ;R} \cong \lau \mathscr H * {\overline{p}}{ X\backslash F,U\backslash F ;R}.
$$ \end{proposition} \begin{proof} For the sake of simplicity, we write $\bi {\cal A} *= \lau {\widetilde{N}} {*,\mathcal U} {\overline{p}} {X;R}$, $\bi {\cal B} * = \lau {\widetilde{N}} {*,\mathcal U_1} {\overline{p}} {X\backslash F;R}$, $\bi {\cal C} * = \lau {\widetilde{N}} {*,\mathcal U_2} {\overline{p}} {U;R}$ and $\bi {\cal E} *=\lau {\widetilde{N}} {*} {\overline{p}} {U\backslash F;R}$ for the terms of the Mayer-Vietoris exact sequence of \thmref{thm:MVcourte}, applied to the cover $\cal U = \{ X\backslash F,U\}$. Associated to this sequence we have the exact sequence $$ 0 \to (\bi {\cal A} * \oplus \bi {\cal C} {*-1},D) \xrightarrow[]{f} (\bi {\cal B} *\oplus \bi {\cal C} * \oplus \bi {\cal C} {*-1} \oplus \bi {\cal E} {*-1}, D') \xrightarrow[]{g} (\bi {\cal E} {*} \oplus \bi {\cal E} {*-1},D) \to 0, $$ where $D(a,c) = (da ,a -dc)$, $f(a,c) = (a,a,c,0)$, $D'(b,c_0,c_1,e)=(db,dc_0,c_0-dc_1,b-c_0-de)$, $g(b,c_0,c_1,e) =(b-c_0,e)$ and $D(e,e') = (de,e-de')$. The right-hand complex is acyclic since $D(e,e')=0 \Rightarrow (e,e') = D(e',0)$. The chain map $h \colon (\bi {\cal B} * \oplus \bi {\cal E} {*-1},D) \to (\bi {\cal B} *\oplus \bi {\cal C} * \oplus \bi {\cal C} {*-1} \oplus \bi {\cal E} {*-1}, D')$, defined by $h (b,e) = (b,0,0,e)$, where $D(b,e) = (db , b -de)$, is a quasi-isomorphism, since \begin{eqnarray*} 0= D'(b,c_0,c_1,e) &\Rightarrow& (b,c_0,c_1,e) = h(b,e+c_1) + D'(0,c_1,0,0) \\ h(b',e') = D'(b,c_0,c_1,e) &\Rightarrow& (b', e') = D(b,e+c_1). \end{eqnarray*} Let $k\colon (\bi {\cal A} * \oplus \bi {\cal C} {*-1},D) \to (\bi {\cal B} * \oplus \bi {\cal E} {*-1},D)$ be defined by $k(a,c) =(a,c)$. If $D(a,c)=0$, then $h(k(a,c)) -f(a,c) = (0,-a,-c,c) = D'(0,-c,0,0)$. Therefore, $k$ is a quasi-isomorphism.
Applying \thmref{thm:Upetits} and the definition of relative blown-up $\overline{p}$-intersection cohomology, we get the result.\end{proof} The following result comes directly from the Mayer-Vietoris sequence and the long exact sequence associated to a pair. \begin{proposition}\label{cor:homologieconerel} Let $X$ be a compact filtered space. Let us consider the cone, ${\mathring{\tc}} X$, with apex ${\mathtt w}$, endowed with the cone filtration. Consider a perversity $\overline p$ on ${\mathring{\tc}} X$ and denote also by $\overline p$ the induced perversity on $X$. Then $$ \lau \mathscr H j{\overline{p}}{{\mathring{\tc}} X, {\mathring{\tc}} X \backslash \{ {\mathtt w}\} ;R}=\left\{ \begin{array}{cl} \lau \mathscr H {j-1}{\overline{p}} {X;R}&\text{ if } j\geq \overline{p}( {\mathtt w}) +2, \\[.2cm] 0&\text{ if } j\leq \overline{p}( {\mathtt w}) +1.\\[.2cm] \end{array}\right.$$ \end{proposition} \section{Comparison between intersection cohomologies. \thmref{thm:TWGMcorps}.}\label{subsec:lesdeuxcohomologies} If $R$ is a field, the existence of a quasi-isomorphism between the cochain complexes $\lau C {*} {D\overline{p}}{X;R}$ and $\lau {\widetilde{N}} *{\overline{p}} {X;R}$ has been established in {\cite[Theorem B]{CST1}} for GM-perversities in the framework of filtered face sets. We generalize this result to general perversities, without the condition $D\overline p \leq \overline t$. In order to eliminate this last constraint, we use tame intersection cohomology and homology instead of intersection cohomology and homology. Recall that a perverse CS set $(X,\overline{p})$ is \emph{locally $(D\overline{p},R)$-torsion free} if, for each singular stratum $S$ and each $x\in S$ with link $L$, we have $\lau {\mathfrak{T}} {D\overline{p}} { \overline{p}(S)}{L;R}=0$, i.e., the torsion $R$-submodule of $\lau {\mathfrak{H}} {D\overline{p}} { \overline{p}(S)}{L;R}$ vanishes (see \cite{MR699009}).
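Recall also, with the conventions of \cite{LibroGreg}, that the dual perversity $D\overline{p}$ is given stratumwise by
$$D\overline{p}(S)=\overline{t}(S)-\overline{p}(S)={\rm codim\,} S-2-\overline{p}(S),$$
for each singular stratum $S$; in particular, $D\overline{t}=\overline{0}$.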
\begin{theorem}\label{thm:TWGMcorps} Let $(X, \overline p)$ be a paracompact separable perverse CS set and $R$ a Dedekind ring. The blown-up and the tame intersection complexes, $\lau {\widetilde{N}}*{\overline{p}}{X;R}$ and $\lau {\mathfrak{C}}{*} {D\overline{p}}{X;R}$, are related by a quasi-isomorphism if one of the following hypotheses is satisfied. \begin{enumerate}[1)] \item The ring $R$ is a field. \item The space $X$ is a locally $(D\overline{p},R)$-torsion free pseudomanifold. \end{enumerate} Under either of these assumptions, there exists an isomorphism $$\lau \mathscr H *{\overline{p}}{X;R}\cong \lau {\mathfrak{H}}*{D\overline{p}}{X;R}.$$ \end{theorem} This isomorphism exists for the top perversity $\overline{t}$, for any CS set and any Dedekind ring (see \cite[Remark 1.51]{CST1}). This theorem takes a particular form in the classical setting of stratified pseudomanifolds and GM-perversities. \begin{corollary}\label{cor:TW corps} Let $\overline p$ be a GM-perversity, with \emph{complementary perversity} $\overline q$, i.e., $\overline q(i) = i-2-\overline p (i)$ for $i\geq 2$. We consider a Dedekind ring $R$. Let $X$ be a paracompact separable classical pseudomanifold. The blown-up and the intersection complexes, $\lau {\widetilde{N}}*{\overline{p}}{X;R}$ and $\lau C{*} {\overline{q}}{X;R}$, are related by a quasi-isomorphism if $X$ is locally $(\overline{q},R)$-torsion free. Thus, we have an isomorphism $\lau \mathscr H *{\overline{p}}{X;R}\cong \lau H*{\overline{q}}{X;R}$. \end{corollary} The proof of \thmref{thm:TWGMcorps}, similar to that used in \cite[Théorème A]{CST3}, follows the scheme used in \cite[Theorem 10]{MR800845}, \cite[Lemma 1.4.1]{MR2210257} and \cite[Section 5.1]{LibroGreg}.
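As a standard illustration of complementarity (not specific to this paper), the lower-middle and upper-middle perversities, $\overline m(i)=\lfloor (i-2)/2\rfloor$ and $\overline n(i)=\lceil (i-2)/2\rceil$, are complementary, since
$$\overline m(i)+\overline n(i)=i-2 \quad\text{for } i\geq 2.$$
For a paracompact separable classical pseudomanifold $X$ which is locally $(\overline n,R)$-torsion free, the previous corollary thus yields $\lau \mathscr H *{\overline m}{X;R}\cong \lau H*{\overline n}{X;R}$.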
\begin{proposition}\label{prop:supersuperbredon} Let $\cal F_{X}$ be the category whose objects are (stratified homeomorphic to) open subsets of a given paracompact and separable CS set $X$ and whose morphisms are stratified homeomorphisms and inclusions. Let $\cal Ab_{*}$ be the category of graded abelian groups. Let $F^{*},\,G^{*}\colon \cal F_{X}\to \cal Ab_{*}$ be two functors and $\Phi\colon F^{*}\to G^{*}$ a natural transformation satisfying the conditions listed below. \begin{enumerate}[(i)] \item $F^{*}$ and $G^{*}$ admit exact Mayer-Vietoris sequences and the natural transformation $\Phi$ induces a commutative diagram between these sequences. \item If $\{U_{\alpha}\}$ is a disjoint collection of open subsets of $X$ and $\Phi\colon F^{*}(U_{\alpha})\to G^{*}(U_{\alpha})$ is an isomorphism for each $\alpha$, then $\Phi\colon F^{*}(\bigsqcup_{\alpha}U_{\alpha})\to G^{*}(\bigsqcup_{\alpha}U_{\alpha})$ is an isomorphism. \item If $L$ is a compact filtered space such that $X$ has an open subset stratified homeomorphic to $\mathbb{R}^i\times {\mathring{\tc}} L$ and if $\Phi\colon F^{*}(\mathbb{R}^i\times ({\mathring{\tc}} L\backslash \{{\mathtt v}\}))\to G^{*}(\mathbb{R}^i\times ({\mathring{\tc}} L\backslash \{{\mathtt v}\}))$ is an isomorphism, then so is $\Phi\colon F^{*}(\mathbb{R}^i\times {\mathring{\tc}} L)\to G^{*}(\mathbb{R}^i\times {\mathring{\tc}} L)$. Here, ${\mathtt v}$ is the apex of the cone ${\mathring{\tc}} L$. \item If $U$ is an open subset of $X$ contained in a single stratum and homeomorphic to a Euclidean space, then $\Phi\colon F^{*}(U)\to G^{*}(U)$ is an isomorphism. \end{enumerate} Then $\Phi\colon F^{*}(X)\to G^{*}(X)$ is an isomorphism. \end{proposition} \begin{proof} Let $U$ be an open subset of $X$.
We apply \lemref{lem:bredon}, taking for $P(U)$ the property $$``\Phi\colon F^*(U)\to G^*(U) \text{ is an isomorphism.''}$$ Notice that the CS set $X$ is a separable, locally compact and metrizable space \cite[Proposition 1.11]{CST3}. We proceed by induction on the dimension of $X$; the result is immediate when $\dim X=0$, that is, when $X$ is a discrete space. The inductive step occurs in two stages. $\bullet$ \emph{We first show $P(Y)$ for any open subset $Y$ of a fixed conical chart.} We can suppose $Y=\mathbb{R}^m\times {\mathring{\tc}} L$. If $L=\emptyset$, we apply (iv). Let us suppose $L\ne \emptyset$. We consider the basis $\cal V$ of open sets of $Y$ composed of \begin{itemize} \item open subsets $V$ of $Y$, with ${\mathtt v} \not\in V$, which are CS sets of strictly lower dimension than $X$, \item open subsets $V=B\times {\mathring{\tc}}_{\varepsilon}L$, where $B \subset \mathbb{R}^m$ is an open $m$-cube, $\varepsilon >0$ and ${\mathring{\tc}}_{\varepsilon}L=(L\times [0,\varepsilon[)/(L\times \{0\})$. \end{itemize} This family is closed under finite intersections and verifies the assumptions of \lemref{lem:bredon}. For a), it suffices to apply the induction hypothesis together with (iii) and (iv). Property b) derives from (i) and property c) from (ii). $\bullet$ \emph{We prove $P(X)$} by considering the open basis of $X$ consisting of open subsets of conical charts. This family is stable under finite intersections and verifies the assumptions of \lemref{lem:bredon}. Condition a) is a consequence of the first step. The two other conditions derive from (i) and (ii), as previously. \end{proof} \begin{lemma}\label{lem:bredon} Let $X$ be a locally compact, metrizable and separable topological space. We are given an open basis $\cal U = \{U_{\alpha}\}$ of $X$, closed under finite intersections, and a statement $P(U)$ on open subsets of $X$ satisfying the following three properties.
\begin{enumerate}[a)] \item The property $P(U_{\alpha})$ is true for all $\alpha$. \item If $U$, $V$ are open subsets of $X$ for which the properties $P(U)$, $P(V)$ and $P(U \cap V)$ are true, then $P(U \cup V)$ is true. \item If $(U_{i})_{i \in I}$ is a family of pairwise disjoint open subsets of $X$ verifying the property $P(U_{i})$ for all $i \in I$, then $P(\bigsqcup_i U_{i})$ is true. \end{enumerate} Then the property $P(X)$ is true. \end{lemma} \begin{proof} Since the family $\cal U$ is closed under finite intersections, properties a) and b) imply that the property $P$ is true for any finite union of elements of $\cal U$. Since the space $X$ is separable, metrizable and locally compact, its Alexandroff compactification $\hat{X}=X\sqcup\{\infty\}$ is metrizable (cf. \cite[Exercise 23C]{MR0264581}) and there exists a proper map $f\colon X\to [0,\infty[$, defined by $f(x)=1/d(\infty,x)$. We associate to $f$ a countable family of compact subsets, $A_{n}=f^{-1}([n,n+1])$. Each $A_{n}$ possesses a finite cover consisting of open subsets $U_{\alpha}\in\cal U$ contained in $f^{-1}(]n-1/2,n+3/2[)$; we denote by $U_{n}$ the union of the elements of this cover. Since the open subset $U_{n}$ is a finite union of elements of $\cal U$, the property $P(U_{n})$ is true. Let $U_{\mathrm{even}}=\bigsqcup_{n}U_{2n}$ and $U_{\mathrm{odd}}=\bigsqcup_{n}U_{2n+1}$. Hypothesis c) implies that $P(U_{\mathrm{even}})$ and $P(U_{\mathrm{odd}})$ are true. Furthermore, since the intersection $U_{2n}\cap U_{2n+1}$ is a finite union of elements $U_{\alpha}$ of $\cal U$, the property $P(U_{2n}\cap U_{2n+1})$ is true. From $U_{\mathrm{even}}\cap U_{\mathrm{odd}}=\bigsqcup_{n} U_{2n}\cap U_{2n+1}$ and hypothesis c), we deduce that $P(U_{\mathrm{even}}\cap U_{\mathrm{odd}})$ is true; hypothesis b) then gives $P(U_{\mathrm{even}}\cup U_{\mathrm{odd}})$.
The conclusion then follows from $U_{\mathrm{even}}\cup U_{\mathrm{odd}}=\bigcup_{n}U_{n}\supset \bigcup_{n}A_{n}=X$. \end{proof} Let $X$ be a filtered space. We construct the map $$\chi\colon\Hiru {\widetilde{N}} *{X;R}\to \Hiru {\mathfrak{C}}* {X;R}$$ as follows. If $\omega\in\Hiru {\widetilde{N}} *{X;R}$ and if ${\sigma}\colon \Delta=\Delta_{0}\ast\dots\ast\Delta_{n}\to X$ is a regular simplex, we set $$\chi(\omega)({\sigma})= \omega_{{\sigma}}({\widetilde{\Delta}}). $$ \begin{proposition}\label{prop:TWGM} Let $(X,\overline{p})$ be a perverse space. The operator $\chi$ induces a chain map $\chi\colon \lau {\widetilde{N}} * {\overline{p}}{X;R}\to \lau {\mathfrak{C}}*{D\overline{p}}{X;R}$. \end{proposition} \begin{proof} Recall that $\lau {\mathfrak{C}}*{D\overline{p}}{X;R} = \Hom (\lau {\mathfrak{C}}{D\overline{p}} * {X;R},R)$. Since the simplices coming from $\lau {\mathfrak{C}}{D\overline{p}}*{X;R}$ are regular, the definition of $\chi$ makes sense. This is the reason for considering tame intersection chains instead of intersection chains. We need to prove $\chi \circ \delta = {\mathfrak{d}}^* \circ \chi$, where ${\mathfrak{d}}^*$ is the linear dual of ${\mathfrak{d}}$ (see \defref{defgd}). Consider a cochain $\omega$ of $\lau {\widetilde{N}} * {\overline{p}}{X;R}$ and a chain $\xi \in \lau {\mathfrak{C}}{D\overline{p}} * {X;R}$. We prove $\chi(\delta\omega)(\xi) =({\mathfrak{d}}^*\chi)(\omega)(\xi) =-(-1)^{|\omega|}\chi(\omega)({\mathfrak{d}}\xi)$. In fact, it suffices to prove $\chi(\delta\omega)({\sigma}) =-(-1)^{|\omega|}\chi(\omega)({\mathfrak{d}}{\sigma})$ for a $D\overline p$-allowable regular simplex ${\sigma} \colon \Delta \to X$. Let us suppose that \begin{equation}\label{cleim} \omega_{\sigma} (\cal H_i)=0 \hbox{ for each hidden face $\cal H_i$} \end{equation} (see \eqref{Hid}).
Then \begin{eqnarray*} \chi(\delta\omega)({{\sigma}}) =(\delta\omega)_{{\sigma}}({\widetilde{\Delta}}) =-(-1)^{|\omega|} \; \omega_{{\sigma}}(\partial {\widetilde{\Delta}}) = -(-1)^{|\omega|} \; \omega_{{\sigma}}(\widetilde{{\mathfrak{d}}\Delta}) = -(-1)^{|\omega|} \; \chi(\omega)({{\mathfrak{d}}{\sigma}}).\label{equa:chidelta} \end{eqnarray*} Let us prove the claim \eqref{cleim}. We suppose $\omega_{\sigma} (\cal H_i)\ne 0$ for some hidden face $\cal H_i$. By definition of a hidden face, notice that $\Delta_{n-i} \ne \emptyset$; thus there exists a stratum $S$ of $X$ of codimension $i$ with $S \cap {\rm Im\,} {\sigma} \ne \emptyset$. Then, since ${\sigma}$ is $D\overline p$-allowable and $\omega$ is $\overline p$-allowable, \begin{itemize} \item $0 \leq |\Delta|_{\leq n-i} = \|{\sigma}\|_S \leq \dim {\sigma} - {\rm codim\,} S + D\overline p (S) =\dim {\sigma} - \overline p (S) -2$, and \item $0 \leq |\Delta|_{>n-i} = |\cal H_i|_{>n-i} = \|{\boldsymbol 1}_{\cal H_i}\|_{{\rm codim\,} S} \leq_{(1)} \|\omega_{\sigma}\|_{{\rm codim\,} S}\leq \overline p(S)$, \end{itemize} where $\leq_{(1)}$ comes from $\omega_{\sigma} (\cal H_i)\ne 0$. We conclude that $\overline p (S)$ is finite. Adding up these two inequalities, we get $\dim {\sigma} -1= \dim\Delta-1=|\Delta|_{\leq n-i} + |\Delta|_{>n-i} \leq \|{\sigma}\|_S + \|\omega_{\sigma}\|_{{\rm codim\,} S} \leq \dim{\sigma}-2$. This contradiction gives the claim \eqref{cleim} and ends the proof. \end{proof} \begin{proof}[Proof of \thmref{thm:TWGMcorps}] By hypothesis, $X$ is a separable paracompact CS set. Properties (i), (ii) and (iv) of \propref{prop:supersuperbredon} are verified.
For item (iii), we consider a compact filtered space $L$ for which the chain map of \propref{prop:TWGM} induces an isomorphism $\chi^*\colon \lau \mathscr H * {\overline{p}}{\mathbb{R}^i\times L\times ]0,\infty[;R}\xrightarrow[]{\cong} \lau {\mathfrak{H}} * {D\overline{p}}{\mathbb{R}^i\times L\times ]0,\infty[;R}$. Since the space $\mathbb{R}^i\times L\times ]0,\infty[$ is an open subset of $\mathbb{R}^i\times{\mathring{\tc}} L$, the following diagram commutes. \begin{equation}\label{equa:dualitefin} \xymatrix{ \lau \mathscr H *{\overline{p}}{\mathbb{R}^i\times L\times ]0,\infty[;R} \ar[r]^{\chi^*}& \lau {\mathfrak{H}}*{D \overline{p}} {\mathbb{R}^i\times L\times ]0,\infty[;R} \\ \lau \mathscr H *{\overline{p}}{\mathbb{R}^i\times{\mathring{\tc}} L;R} \ar[u]\ar[r]^-{\chi^*_{\mathbb{R}^i\times {\mathring{\tc}} L}}& \lau {\mathfrak{H}} *{D\overline{p}}{\mathbb{R}^i\times {\mathring{\tc}} L;R}\ar[u] } \end{equation} Recall the blown-up intersection cohomology computation (see \thmref{prop:isoproduitR} and \thmref{prop:coneTW}): \begin{eqnarray*} \lau \mathscr H k{\overline{p}}{\mathbb{R}^i\times{\mathring{\tc}} L;R} &=& \lau \mathscr H k {\overline{p}}{{\mathring{\tc}} L;R} \\ &=& \left\{ \begin{array}{ccl} \lau \mathscr H k {\overline{p}}{L;R} =\lau \mathscr H k{\overline{p}}{\mathbb{R}^i\times L\times ]0,\infty[ ;R} &\text{if}& k\leq \overline{p}({\mathtt w}),\\ 0 &\text{if}& k>\overline{p}({\mathtt w}). \end{array}\right.
\end{eqnarray*} The tame intersection cohomology is determined in \cite[Chapter 7]{LibroGreg}: \begin{eqnarray*} \lau {\mathfrak{H}} k{D \overline{p}} {\mathbb{R}^i\times {\mathring{\tc}} L;R} &= &\lau {\mathfrak{H}} k{D\overline{p}}{{\mathring{\tc}} L;R} \\ &=& \left\{ \begin{array}{ccl} \lau {\mathfrak{H}} k {D\overline{p}}{L;R}= \lau {\mathfrak{H}} k{D \overline{p}} {\mathbb{R}^i\times L \times ]0,\infty[;R} &\text{if}& k\leq \overline{p}({\mathtt w}),\\[.2cm] \mathrm{Ext}(\lau {\mathfrak{H}}{D\overline{p}} {k-1} {L;R},R) &\text{if}& k= \overline{p}({\mathtt w}) +1,\\[.2cm] 0 &\text{if}& k> \overline{p}({\mathtt w}) +1. \end{array}\right. \end{eqnarray*} The involved stratifications on $L$ are different in these computations but the cohomologies are the same (see subsections \ref{SF1}, \ref{SF}). In case 1) we have $\mathrm{Ext}(\lau {\mathfrak{H}}{D\overline{p}} {\overline{p}({\mathtt w})} {L;R},R)=0$ since $R$ is a field. In case 2) the link $L$ is a compact pseudomanifold. According to \cite[Section 5.7]{LibroGreg}, the tame intersection homology of $L$ is finitely generated. Since the torsion module of $\lau {\mathfrak{H}} {D\overline{p}} {\overline{p}({\mathtt w})}{L;R}$ is zero by hypothesis, we deduce $\mathrm{Ext}(\lau {\mathfrak{H}} {D\overline{p}} {\overline{p}({\mathtt w})}{L;R},R)=0$. So, in both cases, the vertical maps of \eqref{equa:dualitefin} are isomorphisms. Since $\chi^*$ is an isomorphism, $\chi^*_{\mathbb{R}^i\times {\mathring{\tc}} L}$ is also an isomorphism. \end{proof} For some particular perversities, the blown-up intersection cohomology can be described in terms of ordinary cohomology, denoted by $\Hiru H * {-;R}$. \begin{proposition}\label{PartPer} Let $X$ be a paracompact separable CS set. Then \begin{enumerate}[(a)] \item $\lau \mathscr H * {\overline 0} {X;R} \cong \Hiru H * {X;R}$ if $X$ is normal.
\item $\lau \mathscr H * {\overline p} {X;R} \cong \Hiru H * {X, \Sigma;R}$ if $\overline p <0$. \item $\lau \mathscr H * {\overline p} {X;R} \cong \Hiru H * {X\backslash\Sigma;R}$ if $\overline p >\overline t$. \end{enumerate} These isomorphisms preserve the cup product. \end{proposition} \begin{proof} The complex of singular chains (resp. relative chains) with coefficients in $R$ is denoted by $\hiru S * {X;R}$ (resp. $\hiru S * {X,\Sigma;R}$). It computes the ordinary homology $\hiru H * {X;R}$ (resp. $\hiru H * {X,\Sigma;R}$). (a) Let $F\colon \Hiru S*{X;R} \to \lau {\widetilde{N}} * {\overline 0} {X;R}$ be the operator defined by $F (\omega)_{\sigma} = \mu^*({\sigma}^*(\omega))$, for any regular filtered simplex ${\sigma}\colon \Delta\to X$. The operator $\mu^*$ is associated to the $\mu$-amalgamation of $\Delta$ (see \ref{defmu*}.1). Applying \propref{CMAmal} repeatedly, we conclude that $F$ is a chain map commuting with the cup product. By definition of $\mu^*$, we have $\|F (\omega)_{\sigma}\|_{\ell} \leq \|{\sigma}^*(\omega)\|_0 =0$. The operator $F$ is therefore well defined. We consider the statement ``the operator $F$ is a quasi-isomorphism'' and prove it by using \propref{prop:supersuperbredon}. We verify the four properties (i)-(iv) of this proposition. \begin{itemize} \item[(i)] It is well known that the functor $\Hiru S * {-;R}$ verifies the Mayer-Vietoris property. It has been proven in \thmref{thm:MVcourte} that the functor $\lau {\widetilde{N}}*{\overline{0}}{-;R}$ also verifies the Mayer-Vietoris property. One easily checks that $F$ induces a commutative diagram between these sequences. \item[(ii),(iv)] Straightforward. \item[(iii)] It is well known that $H^*(\mathbb{R}^i\times {\mathring{\tc}} L;R)=H^0(\mathbb{R}^i\times {\mathring{\tc}} L;R)=R$ (constant cochains).
From Theorems \ref{prop:isoproduitR} and \ref{prop:coneTW}, we have $\lau \mathscr H* {\overline{0}}{\mathbb{R}^i\times {\mathring{\tc}} L;R} =\lau \mathscr H 0 {\overline{0}}{L;R}$. Let us consider a base point $x_0 \in L \backslash \Sigma$; this subset is connected by \cite[Lemma 2.6.3]{LibroGreg}. Also, for any point $x_1 \in L$ there exists a regular simplex ${\sigma} \colon [0,1] \to L$ going from $x_1$ to $x_0$. If $\omega\in \lau {\widetilde{N}} 0 {\overline{0}} {L;R}$ is a cocycle, the cochain $\omega_{{\sigma}}$ takes the same value on $x_1$ and on $x_0$. This implies $\lau \mathscr H 0 {\overline{0}}{L;R}=R$. \end{itemize} (b) Consider the previous operator $F\colon \Hiru S*{X;R} \to \Hiru {\widetilde{N}} * {X;R}$ defined by $F (\omega)_{\sigma} = \mu^*({\sigma}^*(\omega))$, for any regular filtered simplex ${\sigma}\colon \Delta\to X$. We know that it is a chain map commuting with the cup product. If $\omega$ vanishes on $\Sigma$, then $\|F (\omega)_{\sigma}\|_{\ell} =-\infty$ for any $\ell \in \{1, \ldots,n\}$. So $F\colon \Hiru S*{X,\Sigma;R} \to \lau {\widetilde{N}} * {\overline p} {X;R}$ is well defined. We consider the statement ``the operator $F$ is a quasi-isomorphism'' and prove it as in the previous case. In fact, the only item to prove is (iii). It comes from $\Hiru H* {\mathbb{R}^i \times {\mathring{\tc}} L, \mathbb{R}^i \times {\mathring{\tc}} \Sigma_L;R} = 0=\lau \mathscr H * {\overline p}{\mathbb{R}^i \times {\mathring{\tc}} L;R}$, given by Theorems \ref{prop:isoproduitR} and \ref{prop:coneTW}. (c) Consider the natural restriction $\gamma \colon \lau {\widetilde{N}} * {\overline p} {X;R} \to \lau {\widetilde{N}} * {\overline p} {X\backslash \Sigma;R} =\Hiru S * {X\backslash \Sigma;R}$. It is a chain map commuting with the cup product.
We consider the statement ``the operator $\gamma$ is a quasi-isomorphism'' and proceed as in the previous case. In fact, the only property to prove is item (iii) of \propref{prop:supersuperbredon}. We know that $\Hiru H * {\mathbb{R}^i \times {\mathring{\tc}} L \backslash \Sigma_{\mathbb{R}^i \times {\mathring{\tc}} L };R} = \Hiru H * {\mathbb{R}^i \times (L\backslash\Sigma_L) \times ]0,\infty[;R} = \Hiru H * {L\backslash \Sigma_L;R} = \Hiru H * {\mathbb{R}^i \times ({\mathring{\tc}} L \backslash \{ {\mathtt v} \}) \backslash \Sigma_{\mathbb{R}^i \times ({\mathring{\tc}} L \backslash \{ {\mathtt v} \}) };R}$. On the other hand, we have $\lau \mathscr H * {\overline p} {\mathbb{R}^i \times {\mathring{\tc}} L;R} = \lau \mathscr H * {\overline p} { L;R} = \lau \mathscr H * {\overline p} { \mathbb{R}^i \times ({\mathring{\tc}} L \backslash \{ {\mathtt v} \});R}$ (cf. Theorems \ref{prop:isoproduitR}, \ref{prop:coneTW}). It suffices to apply the hypothesis of (iii). \end{proof} \section{Topological invariance. \thmref{inv}.}\label{15} We prove in this section that the blown-up intersection cohomology is a topological invariant, working with CS sets and GM-perversities. We follow the procedure of King \cite{MR800845} (see also \cite{LibroGreg}). \subsection{Intrinsic filtration} A key ingredient in King's proof of topological invariance is the intrinsic filtration of a CS set $X$. It was introduced in \cite{MR800845}, where it is credited to Sullivan; see also \cite{MR0478169}. We refer the reader to \cite{LibroGreg} for an exhaustive study of this notion. Let $X$ be an $n$-dimensional filtered space. Two points $x,y\in X$ are \emph{equivalent} if there exists a homeomorphism $h \colon (U,x) \to (V,y)$, where $U,V$ are neighborhoods of $x,y$ respectively.
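For instance (a standard example), take $X=\mathbb{R}$ filtered by $\{0\}\subset \mathbb{R}$: translations provide homeomorphisms $(\mathbb{R},0)\to (\mathbb{R},y)$ for any $y$, so all points of $X$ are equivalent. There is a single equivalence class and the intrinsic filtration of this space is the trivial one, with empty singular set.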
The local structure of a CS set implies that two points of the same stratum are necessarily equivalent. Hence the equivalence classes are unions of components of strata of $X$. Let $Y_i$ be the union of the equivalence classes which only contain components of strata of dimension $\leq i$. The filtration $Y_0 \subset Y_1 \subset \cdots \subset Y_n=X$ is the \emph{intrinsic filtration} of $X$, denoted by $X^*$. It is proved in \cite{MR800845} that $X^*$ is an $n$-dimensional CS set. Since the filtration $X^*$ coarsens the original filtration of $X$, the identity map $\nu \colon X \to X^*$ is a stratified map, called the \emph{intrinsic aggregation} (see \defref{def:applistratifieeforte}). Recall that, given a perversity $\overline p$, the map $\nu$ induces a morphism $\nu^* \colon \lau \mathscr H * {\overline p}{X^*;R} \to \lau \mathscr H * {\overline p}{X;R}$ (see \thmref{MorCoho}). \begin{proposition}\label{prop:local} Let $\overline p$ be a GM-perversity. Let $X$ be a paracompact CS set with no codimension one strata. Let $S$ be a stratum of $X$ and $(U,\varphi)$ a conical chart of a point $x\in S$. If the intrinsic aggregation $\nu\colon X\to X^*$ induces the isomorphism $\nu_{*}\colon \lau \mathscr H{*} {\overline{p}}{(U\backslash S)^*;R} \xrightarrow[]{\cong} \lau \mathscr H {*} {\overline{p}}{U\backslash S;R}$, then it also induces the isomorphism $$\nu_{*}\colon \lau \mathscr H{*} {\overline{p}}{U^*;R} \xrightarrow[]{\cong} \lau \mathscr H {*} {\overline{p}}{U;R}.$$ \end{proposition} \begin{proof} We analyze the local structure of $X$ and $X^*$. Without loss of generality, we can suppose $U = \mathbb{R}^k \times {\mathring{\tc}} W$, where $W$ is a compact filtered space (possibly empty) and $S \cap U= \mathbb{R}^k \times \{ {\mathtt w}\}$, ${\mathtt w}$ being the apex of the cone ${\mathring{\tc}} W$.
Following \cite[Lemma 2 and Proposition 1]{MR800845}, there exists a homeomorphism \begin{equation}\label{equa:homeo} h\colon ( \mathbb{R}^k \times {\mathring{\tc}} W)^*\xrightarrow[]{\cong} \mathbb{R}^m\times {\mathring{\tc}} L, \end{equation} which is also a stratified map, where $L$ is a compact filtered space (possibly empty) and $m\geq k$. Moreover, $h$ verifies \begin{equation}\label{equa:hetcone} h(\mathbb{R}^k\times \{{\mathtt w}\} ) \subset \mathbb{R}^m\times \{{\mathtt v}\} \text{ and } h^{-1}(\mathbb{R}^m\times \{{\mathtt v}\})=\mathbb{R}^k\times {\mathring{\tc}} A, \end{equation} where $A$ is an $(m-k-1)$-sphere and ${\mathtt v}$ is the apex of the cone ${\mathring{\tc}} L$. Now, the hypothesis and the conclusion of the Proposition become \begin{equation}\label{equa:hyp} h \colon \lau \mathscr H * {\overline{p}}{\mathbb{R}^m \times {\mathring{\tc}} L \backslash h(\mathbb{R}^k \times \{{\mathtt w}\});R} \xrightarrow[]{\cong} \lau \mathscr H *{\overline{p}}{\mathbb{R}^k \times {\mathring{\tc}} W \backslash (\mathbb{R}^k \times \{{\mathtt w}\});R} \end{equation} and \begin{equation}\label{equa:conc} h\colon \lau \mathscr H * {\overline{p}}{\mathbb{R}^m \times {\mathring{\tc}} L ;R} \xrightarrow[]{\cong} \lau \mathscr H * {\overline{p}}{\mathbb{R}^k \times {\mathring{\tc}} W;R }. \end{equation} Set $s = \dim W$ and $t=\dim L$. The existence of the homeomorphism (\ref{equa:homeo}) implies $k+s=m+t$ and, since $m\geq k$, we get $s\geq t$. The result is clear when $s=-1$. So, we can suppose $s\geq 0$, which implies that the stratum $\mathbb{R}^k\times \{{\mathtt w}\}$ is singular. Since $X$ has no codimension one strata, we indeed have $s \geq 1$.
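For the reader's convenience, let us spell out the dimension count behind the last assertion (a routine verification): since $\dim (\mathbb{R}^k\times{\mathring{\tc}} W)=k+s+1$ and $\dim (\mathbb{R}^k\times\{{\mathtt w}\})=k$, the singular stratum $\mathbb{R}^k\times\{{\mathtt w}\}$ has codimension
$$(k+s+1)-k \;=\; s+1,$$
so $s=0$ would produce a codimension one stratum, which is excluded by hypothesis.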
When $t=-1$, then $L=\emptyset$, $s =\dim A$ and therefore \begin{eqnarray*} \lau \mathscr H * {\overline{p}}{\mathbb{R}^k \times {\mathring{\tc}} W;R } &\stackrel{(\ref{equa:hetcone})}{\cong}& \lau \mathscr H * {\overline{p}}{ \mathbb{R}^k \times {\mathring{\tc}} A ;R} \cong_{(2)} \lau \mathscr H * {\overline{p} }{ {\mathring{\tc}} A;R } \cong_{(3)} \left\{ \begin{array}{ll} R & \hbox{if } *=0\\ 0 & \hbox{otherwise} \end{array} \right. \\ &\cong& \lau \mathscr H * {\overline{p}}{\mathbb{R}^m \times {\mathring{\tc}} L;R } . \end{eqnarray*} Here, $\cong_{(2)}$ is \thmref{prop:isoproduitR} and $\cong_{(3)}$ comes from \begin{itemize} \item \thmref{prop:coneTW}, and \item $0 \leq \overline p(s+1) \leq \overline t (s+1) = s-1 < \dim A$, since $\overline p$ is a GM-perversity and $s \geq 1$. \end{itemize} So, we can suppose $t\geq 0$, which implies that the stratum $\mathbb{R}^m\times \{{\mathtt v}\}$ is singular. We establish (\ref{equa:conc}) in two different cases. $\bullet$ \emph{First case}: $i > \overline p(s+1)$. \thmref{prop:coneTW} gives $\lau \mathscr H i {\overline{p}}{\mathbb{R}^k \times {\mathring{\tc}} W;R}=0$, so it remains to prove that \newline $\lau \mathscr H i {\overline{p}} {\mathbb{R}^m \times {\mathring{\tc}} L;R}=0$. Since the stratum $\mathbb{R}^m \times \{ {\mathtt v}\}$ is singular and $\overline p$ is a GM-perversity, we have $ i> \overline{p}(s+1) \geq \overline{p}(t+1).$ Now, \thmref{prop:coneTW} applied to $\lau \mathscr H i{\overline{p}} {\mathbb{R}^m \times {\mathring{\tc}} L;R}$ gives the result. $\bullet$ \emph{Second case}: $i \leq \overline{p}(s+1)$.
We have the isomorphisms \begin{eqnarray}\label{equa:cas3debut} \lau \mathscr H i {\overline{p}}{\mathbb{R}^m\times {\mathring{\tc}} L\backslash h(\mathbb{R}^k\times \{{\mathtt w}\});R} \cong_{(1)} \lau \mathscr H {i}{\overline{p}}{\mathbb{R}^k\times{\mathring{\tc}} W\backslash (\mathbb{R}^k\times \{{\mathtt w}\});R} \cong \\ \lau \mathscr H{i}{\overline{p}}{\mathbb{R}^k\times ({\mathring{\tc}} W\backslash \{{\mathtt w}\});R} \cong \lau \mathscr H{i}{\overline{p}}{\mathbb{R}^k\times ]0,1[\times W;R} \cong_{(2)} \lau \mathscr H {i}{\overline{p}}{W;R},\nonumber&& \end{eqnarray} where $\cong_{(1)}$ is the hypothesis (\ref{equa:hyp}) and $\cong_{(2)}$ is \thmref{prop:isoproduitR}. Write $h(\mathbb{R}^k\times\{{\mathtt w}\})=B\times \{{\mathtt v}\}\subset \mathbb{R}^m\times \{{\mathtt v}\}$, with $B$ closed. We obtain a new sequence of isomorphisms \begin{eqnarray}\label{equa:tropdeL} \lau \mathscr H {i}{\overline{p}}{\mathbb{R}^m\times {\mathring{\tc}} L\backslash h(\mathbb{R}^k\times \{{\mathtt w}\}),\mathbb{R}^m\times {\mathring{\tc}} L\backslash \mathbb{R}^m\times\{{\mathtt v}\};R} \cong &&\\ \lau \mathscr H {i}{\overline{p}}{(\mathbb{R}^m\times {\mathring{\tc}} L )\backslash (B\times\{{\mathtt v}\}) ,\mathbb{R}^m\times ({\mathring{\tc}} L\backslash \{{\mathtt v}\});R} \cong_{(1)}\nonumber &&\\ \lau \mathscr H {i} {\overline{p}}{(\mathbb{R}^m\backslash B)\times {\mathring{\tc}} L, (\mathbb{R}^m\backslash B)\times ({\mathring{\tc}} L\backslash \{{\mathtt v}\});R} \cong \lau \mathscr H {i} {\overline{p}}{(\mathbb{R}^m\backslash B)\times ({\mathring{\tc}} L,{\mathring{\tc}} L\backslash \{{\mathtt v}\});R} \cong_{(2)}\nonumber &&\\ \lau \mathscr H {i}{\overline{p}}{\mathbb{R}^{k+1}\times A\times ({\mathring{\tc}} L,{\mathring{\tc}} L\backslash \{{\mathtt v}\});R}\cong_{(3)}\nonumber &&\\
\lau \mathscr H {i}{\overline{p}}{{\mathring{\tc}} L,{\mathring{\tc}} L\backslash \{{\mathtt v}\};R}\oplus \lau \mathscr H {i-m+1+k} {\overline{p}}{{\mathring{\tc}} L,{\mathring{\tc}} L\backslash \{{\mathtt v}\};R}.\nonumber&& \end{eqnarray} The isomorphism $\cong_{(1)}$ is the excision of $B\times ({\mathring{\tc}} L\backslash \{{\mathtt v}\})$ (see \propref{prop:Excisionhomologie}), $\cong_{(2)}$ comes from (\ref{equa:hetcone}) and $\cong_{(3)}$ from \thmref{prop:isoproduitR} and \propref{cor:SfoisX}. Notice that $\mathbb{R}^m \times {\mathring{\tc}} L$, ${\mathring{\tc}} L$ and ${\mathring{\tc}} L \backslash \{ {\mathtt v} \}$ are $F_\sigma$-subsets, \cite[Exercise 3H]{MR0264581}, of $X^*$ and therefore paracompact spaces \cite[Theorem 20.12 a),b)]{MR0264581}. The hypothesis on $i$ implies $i-m+1+k\leq \overline p(s+1) -s+1+t \leq \overline{p}(t+1)+1$, since $\overline p$ is a GM-perversity. Then $\lau \mathscr H {i-m+1+k} {\overline{p}}{{\mathring{\tc}} L,{\mathring{\tc}} L\backslash \{{\mathtt v}\};R}$ vanishes (see \propref{cor:homologieconerel}). We have proved that $\lau \mathscr H {i} {\overline{p}}{\mathbb{R}^m\times {\mathring{\tc}} L\backslash h(\mathbb{R}^k\times \{{\mathtt w}\}),\mathbb{R}^m\times {\mathring{\tc}} L\backslash \mathbb{R}^m\times\{{\mathtt v}\};R} \cong \lau \mathscr H {i}{\overline{p}}{{\mathring{\tc}} L,{\mathring{\tc}} L\backslash \{{\mathtt v}\};R}.$ Finally, from this isomorphism, from the long exact sequence of a pair (\ref{equa:suiterelative2}) and from (\ref{equa:cas3debut}), we get that $\lau \mathscr H {i} {\overline{p}}{W;R} \cong \lau \mathscr H {i} {\overline{p}}{{\mathring{\tc}} L;R}.
$ Applying \thmref{prop:coneTW} and \thmref{prop:isoproduitR}, we have $ \lau \mathscr H {i}{\overline{p}}{\mathbb{R}^m\times {\mathring{\tc}} L;R} \cong \lau \mathscr H {i} {\overline{p}}{{\mathring{\tc}} L;R} \cong \lau \mathscr H{i}{\overline{p}}{W;R}\cong \lau \mathscr H {i}{\overline{p}}{{\mathring{\tc}} W;R} \cong \lau \mathscr H {i}{\overline{p}}{\mathbb{R}^k\times {\mathring{\tc}} W;R}.$ \end{proof} \begin{theorem}\label{inv} Suppose $X$ is a separable paracompact CS set with no codimension one strata. Let $\overline p$ be a GM-perversity. Then, the intrinsic aggregation $\nu \colon X \to X^*$ induces an isomorphism $\lau \mathscr H * {\overline{p}}{X;R}\cong \lau \mathscr H * {\overline{p}}{X^*;R}.$ It follows that $\lau \mathscr H * {\overline{p}}{X;R}$ is independent (up to isomorphism) of the choice of a stratification of $X$ as a CS set with no codimension one strata. In particular, if $X'$ is another CS set with no codimension one strata which is homeomorphic to $X$ (not necessarily stratified homeomorphic), then $\lau \mathscr H * {\overline{p}}{X;R} \cong \lau \mathscr H * {\overline{p}}{X';R}.$ These $\overline p$-dependent isomorphisms can be chosen to preserve the cup product. \end{theorem} \begin{proof} We consider the statement: ``The intrinsic aggregation $\nu \colon X \to X^*$ induces a quasi-isomorphism'' and we prove it by using \propref{prop:supersuperbredon}. We verify the four conditions of this Proposition. \begin{itemize} \item[(i)] The functor $\lau {\widetilde{N}} * {\overline p} {-;R}$ satisfies the Mayer-Vietoris property (see \thmref{thm:MVcourte}). Notice that we know from \cite[Theorem 20.12 b)]{MR0264581} that $X^*$ is paracompact. \item[(ii),(iv)] Straightforward. \item[(iii)] See \propref{prop:local}. \end{itemize} The last statement comes from the fact that $\nu$ is a stratified map and \thmref{MorCoho}.
\end{proof} \section{Decomposition of the ordinary cap product.} \label{TWIC} In this last section, we show how the ordinary cap product factorizes through the cap product we have defined for the blown-up intersection cohomology. This result extends the factorization developed in \cite[Theorem A]{CST7}. \begin{proposition}\label{Comparacion} Let $X$ be a normal separable paracompact CS set endowed with two perversities $\overline p, \overline q$ verifying $\overline 0 \leq \overline p$ and $\overline p + \overline q \leq \overline t$. Then, there exists a commutative diagram $$ \xymatrix{ \lau H * {} {X;R} \otimes \lau H {} m {X;R} \ar[r]^-{\cap} \ar@<-4ex>[d]^\Phi & \lau H {} {m-*} {X;R} \\ \lau \mathscr H * {\overline{p}} {X;R} \otimes \lau H {\overline{q}} m {X;R} \ar[r]^-{\cap} \ar@<-4ex>[u]^\phi & \lau H {\overline{p} + \overline q} {m-*} {X;R} \ar[u]^\phi, } $$ that is, $ \phi (\Phi(\alpha) \cap \xi) = \alpha \cap \phi (\xi), $ for each $\alpha \in \lau H * {} {X;R}$ and $\xi \in \lau H {\overline{q}} m {X;R}$. The map $\Phi$ preserves the $\cup$-product. Moreover, if $(\overline p,\overline q) =(\overline 0,\overline t)$, then $\Phi$ and $\phi$ can be chosen to be isomorphisms. \end{proposition} \begin{proof} Consider the natural inclusions $\lau {\widetilde{N}} * {\overline 0} {X;R} \hookrightarrow \lau {\widetilde{N}} * {\overline p} {X;R}$, $\lau C {\overline q} * {X;R} \hookrightarrow \lau C {\overline t} * {X;R}$ and $\lau C {\overline p + \overline q} * {X;R} \hookrightarrow \lau C {\overline t} * {X;R}$. So, it suffices to prove the statement for $(\overline p,\overline q) =(\overline 0,\overline t)$ (see \remref{67}). The complex of singular chains (resp. cochains), with coefficients in $R$, is denoted by $\hiru S * {X;R}$ (resp. $\Hiru S * {X;R}$); it computes the ordinary homology $\hiru H * {X;R}$ (resp. ordinary cohomology $\Hiru H * {X;R}$). We proceed in three steps.
\begin{itemize} \item {\em Construction of $\phi$}. We consider the inclusion $ \iota\colon \lau C {\overline t} * {X;R}\hookrightarrow \hiru S * {X;R}. $ We prove the statement: ``The map $\iota$ is a quasi-isomorphism'' by using \cite[Theorem 5.1.4]{LibroGreg}. Let us verify the four conditions (1)-(4) of this Theorem. \begin{enumerate}[(1)] \item It is well known that the functor $\hiru S * {-;R}$ verifies the Mayer-Vietoris property. It has been proven in \cite[Proposition 4.1]{CST3} that the functor $\lau C {\overline t} * {-;R}$ also verifies the Mayer-Vietoris property. \item Because the supports of chains are compact. \item Since \begin{eqnarray*} \hiru H * {\mathbb{R}^i \times {\mathring{\tc}} L;R} &= &\hiru H 0 {\mathbb{R}^i \times {\mathring{\tc}} L;R}= R \stackrel{Normal}{=} \hiru H 0{L;R} \\ &=& \hiru H 0 {\mathbb{R}^i \times ( {\mathring{\tc}} L \backslash \{ {\mathtt v}\}) ;R} \\ &\stackrel{Hyp. \ (3)}{=}& \lau H {\overline t} 0 {\mathbb{R}^i \times ( {\mathring{\tc}} L \backslash \{ {\mathtt v}\}) ;R} \stackrel{\hbox{\tiny \cite[Prop. 5.4]{CST3}}}{=} \lau H {\overline t} * {\mathbb{R}^i \times {\mathring{\tc}} L;R} \end{eqnarray*} \item Straightforward. \end{enumerate} The operator $\phi$ is the isomorphism $\iota_{*}$. \item[] \item {\em Construction of $\Phi$}. Consider the operator $F$ constructed in the proof of \propref{PartPer} (a) and set $\Phi = F_*$. \item {\em Diagram commutativity}. This is a local question. We can consider a cochain $\omega \in \lau {\widetilde{N}} * {} {X}$ and a regular simplex $\sigma \colon \Delta \to X$. We have: $ \phi(\Phi(\omega ) \cap \sigma) = \sigma_*\mu_* ((\Phi(\omega))_\sigma \cap {\widetilde{\Delta}}) =\sigma_* \mu_{*}(\mu^*(\sigma^* \omega)\cap {\widetilde{\Delta}}) \stackrel{Prop.
\ref{CMAmal} (2)}{=} \sigma_*(\sigma^*\omega \cap \Delta)= \omega \cap \sigma =\omega \cap \phi(\sigma). $ \end{itemize} \end{proof} The case $\overline q = \overline 0$ corresponds to the decomposition of \cite[Theorem A]{CST7}. If we work with negative perversities, we have a decomposition of the cap product as follows. \begin{proposition}\label{Comparacion2} Let $X$ be a separable paracompact CS set endowed with two perversities $\overline p < \overline 0$ and $\overline q < \overline 0$. Then, there exists a commutative diagram $$ \xymatrix{ \lau H * {} {X,\Sigma;R} \otimes \lau H {} m {X\backslash \Sigma;R} \ar[r]^-{\cap} \ar@<-4ex>[d]^\Phi & \lau H {} {m-*} {X\backslash \Sigma;R} \\ \lau \mathscr H * {\overline{p}} {X;R} \otimes \lau H {\overline{q}} m {X;R} \ar[r]^-{\cap} \ar@<-4ex>[u]^\phi & \lau H {\overline{p} + \overline q} {m-*} {X;R}, \ar[u]^\phi } $$ that is, $ \phi (\Phi(\alpha) \cap \xi) = \alpha \cap \phi (\xi), $ for each $\alpha \in \lau H * {} {X,\Sigma;R}$ and $\xi \in \lau H {\overline{q}} m {X;R}$, where $\Phi$, $\phi$ are isomorphisms and the first one preserves the $\cup$-product. \end{proposition} \begin{proof} We proceed in three steps. \begin{itemize} \item {\em Construction of $\phi$}. We prove that the inclusion $ \lau C {\overline p} * {X;R} \stackrel{\iota}{\hookleftarrow} \hiru S * {X \backslash \Sigma;R} $ is a quasi-isomorphism. We proceed as in the previous Proposition. In fact, the only item to prove is property (3) of \cite[Theorem 5.1.4]{LibroGreg}. The regular part of $\mathbb{R}^i \times {\mathring{\tc}} L$ is $\mathbb{R}^i \times (L\backslash\Sigma_L) \times ]0,\infty[$. We know that $\hiru H * {\mathbb{R}^i \times (L\backslash\Sigma_L) \times ]0,\infty[;R} = \hiru H * { L\backslash \Sigma_L;R}$.
On the other hand, we have $\lau H {\overline p} * {\mathbb{R}^i \times {\mathring{\tc}} L;R} = \hiru H * { L\backslash \Sigma_L;R} $ (cf. \cite[Prop. 5.4]{CST3}). Since both isomorphisms are induced by the canonical projection, we get (3). The map $\phi$ is the isomorphism $\iota_*$. \item {\em Construction of $\Phi$}. Consider the operator $F$ constructed in the proof of \propref{PartPer} (b) and set $\Phi = F_*$. \item {\em Diagram commutativity}. As in the previous Proposition. \end{itemize} \end{proof} \begin{thebibliography}{A} \bibitem{MR2607414} M.~Banagl -- \textit{ Rational generalized intersection homology theories }, {Homology, Homotopy Appl.} \textbf{12} (2010), no.~1, p.~157--185. \bibitem{MR1143404} J.-P. Brasselet, G.~Hector and M.~Saralegi -- \textit{ Th\'eor\`eme de de {R}ham pour les vari\'et\'es stratifi\'ees }, {Ann. Global Anal. Geom.} \textbf{9} (1991), no.~3, p.~211--243. \bibitem{MR1700700} G.~Bredon -- {Topology and geometry}, Graduate Texts in Mathematics, vol. 139, Springer-Verlag, New York, 1997, Corrected third printing of the 1993 original. \bibitem{CST1} D.~Chataur, M.~Saralegi-Aranguren and D.~Tanré -- \textit{ Intersection Cohomology. Simplicial blow-up and rational homotopy }, {ArXiv Mathematics e-prints. no. 1205.7057} (2012), to appear in Mem. Amer. Math. Soc. \bibitem{CST3} \bysame, \textit{ {Homologie d'intersection. Perversit\'es g\'en\'erales et invariance topologique} }, {ArXiv Mathematics e-prints. no. 1602.03009} (2016). \bibitem{CST4} \bysame, \textit{ {Poincaré duality with cap products in intersection homology} }, {ArXiv Mathematics e-prints. no. 1603.08773} (2016). \bibitem{CST7} \bysame, \textit{ Singular decompositions of a cap product }, {Proc. Amer. Math. Soc.} \textbf{145} (2017), no.~8, p.~3645--3656. \bibitem{CST2} \bysame, \textit{ Steenrod squares on intersection cohomology and a conjecture of {M}. {G}oresky and {W}. {P}ardon }, {Algebr. Geom.
Topol.} \textbf{16} (2016), no.~4, p.~1851--1904. \bibitem{MR2209151} G.~Friedman -- \textit{ Superperverse intersection cohomology: stratification (in)dependence }, {Math. Z.} \textbf{252} (2006), no.~1, p.~49--70. \bibitem{MR2276609} \bysame, \textit{ Singular chain intersection homology for traditional and super-perversities }, {Trans. Amer. Math. Soc.} \textbf{359} (2007), no.~5, p.~1977--2019 (electronic). \bibitem{MR2461258} \bysame, \textit{ Intersection homology {K}\"unneth theorems }, {Math. Ann.} \textbf{343} (2009), no.~2, p.~371--395. \bibitem{MR2721621} \bysame, \textit{ Intersection homology with general perversities }, {Geom. Dedicata} \textbf{148} (2010), p.~103--135. \bibitem{MR2796412} \bysame, \textit{ An introduction to intersection homology with general perversity functions }, in {Topology of stratified spaces}, Math. Sci. Res. Inst. Publ., vol.~58, Cambridge Univ. Press, Cambridge, 2011, p.~177--222. \bibitem{LibroGreg} \bysame, \textit{ Singular intersection homology }, {Available at http://faculty.tcu.edu/gfriedman/IHbook.pdf} (2017). \bibitem{MR3046315} G.~Friedman and J.~E. McClure -- \textit{ Cup and cap products in intersection (co)homology }, {Adv. Math.} \textbf{240} (2013), p.~383--426. \bibitem{MR572580} M.~Goresky and R.~MacPherson -- \textit{ Intersection homology theory }, {Topology} \textbf{19} (1980), no.~2, p.~135--162. \bibitem{MR696691} \bysame, \textit{ Intersection homology. {II} }, {Invent. Math.} \textbf{72} (1983), no.~1, p.~77--129. \bibitem{MR699009} M.~Goresky and P.~Siegel -- \textit{ Linking pairings on singular spaces }, {Comment. Math. Helv.} \textbf{58} (1983), no.~1, p.~96--110. \bibitem{MR736299} S.~Halperin -- \textit{ Lectures on minimal models }, {M\'em. Soc. Math. France (N.S.)} (1983), no.~9-10. \bibitem{MR0478169} M.~Handel -- \textit{ A resolution of stratification conjectures concerning {CS} sets }, {Topology} \textbf{17} (1978), no.~2, p.~167--175. \bibitem{MR800845} H.~C.
King -- \textit{ Topological invariance of intersection homology without sheaves }, {Topology Appl.} \textbf{20} (1985), no.~2, p.~149--160. \bibitem{RobertSF} R.~MacPherson -- \textit{ Intersection homology and perverse sheaves }, {Unpublished AMS Colloquium Lectures, San Francisco} (1991). \bibitem{MR1245833} M.~Saralegi -- \textit{ Homological properties of stratified spaces }, {Illinois J. Math.} \textbf{38} (1994), no.~1, p.~47--70. \bibitem{MR2210257} M.~Saralegi-Aranguren -- \textit{ de {R}ham intersection cohomology for general perversities }, {Illinois J. Math.} \textbf{49} (2005), no.~3, p.~737--758 (electronic). \bibitem{MR0319207} L.~C. Siebenmann -- \textit{ Deformation of homeomorphisms on stratified sets. {I}, {II} }, {Comment. Math. Helv.} \textbf{47} (1972), p.~123--136; ibid. 47 (1972), 137--163. \bibitem{MR0264581} S.~Willard -- {General topology}, Addison-Wesley Publishing Co., Reading, Mass.-London-Don Mills, Ont., 1970. \end{thebibliography} \end{document}
\begin{document} \headers{Morley element for the CH equation and the HS flow}{S. Wu and Y. Li} \title{Analysis of the Morley element for the Cahn-Hilliard equation and the Hele-Shaw flow \thanks{The work of Shuonan Wu is partially supported by the startup grant from Peking University.} } \author{ Shuonan Wu\thanks{School of Mathematical Sciences, Peking University, China, 100871 ({\tt [email protected]}) } \and Yukun Li\thanks{Department of Mathematics, The Ohio State University, Columbus, U.S.A. ({\tt [email protected]})} } \maketitle \begin{abstract} This paper analyzes the Morley element method for the Cahn-Hilliard equation. The objective is to derive optimal error estimates and to prove that the zero-level sets of the Cahn-Hilliard equation approximate the Hele-Shaw flow. If the piecewise $L^{\infty}(H^2)$ error bound is derived by choosing the test function directly, we can obtain neither the optimal error order nor an error bound that depends on $\frac{1}{\epsilon}$ only polynomially. To overcome this difficulty, this paper proceeds in the following steps, where the result of each step cannot be established without the result of the previous one. First, it proves some a priori estimates of the exact solution $u$; these regularity results are the minimal ones needed to obtain the main results. Second, it establishes ${L^{\infty}(L^2)}$ and piecewise ${L^2(H^2)}$ error bounds which depend on $\frac{1}{\epsilon}$ polynomially, based on the piecewise ${L^{\infty}(H^{-1})}$ and ${L^2(H^1)}$ error bounds. Third, it establishes the optimal piecewise ${L^{\infty}(H^2)}$ error bound which depends on $\frac{1}{\epsilon}$ polynomially, based on the piecewise ${L^{\infty}(L^2)}$ and ${L^2(H^2)}$ error bounds. Finally, it proves the ${L^\infty(L^\infty)}$ error bound and the approximation to the Hele-Shaw flow based on the piecewise ${L^{\infty}(H^2)}$ error bound.
Nonstandard techniques are used in these steps, such as the generalized coercivity result, integration by parts in space, summation by parts in time, and special properties of the Morley elements. If one of these techniques were lacking, either we could only obtain the sub-optimal piecewise ${L^{\infty}(H^2)}$ error order, or we could merely obtain error bounds which depend exponentially on $\frac{1}{\epsilon}$. The approach used in this paper provides a way to bound the errors in a higher norm by the errors in lower norms step by step, which is meaningful from a methodological point of view. Numerical results are presented to validate the optimal $L^\infty(H^2)$ error order and the asymptotic behavior of the solutions of the Cahn-Hilliard equation. \end{abstract} \begin{keywords} Morley element, Cahn-Hilliard equation, generalized coercivity result, $\frac{1}{\epsilon}$ polynomial dependence, Hele-Shaw flow \end{keywords} \begin{AMS} 65N12, 65N15, 65N30 \end{AMS} \section{Introduction} Consider the following Cahn-Hilliard equation with Neumann boundary conditions: \begin{alignat}{2} u_t +\Delta(\epsilon\Delta u -\frac{1}{\epsilon}f(u)) &=0 &&\quad \mbox{in } \Omega_T:=\Omega\times(0,T],\label{eq20170504_1}\\ \frac{\partial u}{\partial n} =\frac{\partial}{\partial n}(\epsilon\Delta u-\frac{1}{\epsilon}f(u)) &=0 &&\quad \mbox{on } \partial\Omega_T:=\partial\Omega\times(0,T], \label{eq20170504_2}\\ u &=u_0 &&\quad \mbox{in } \Omega\times\{t=0\},\label{eq20170504_3} \end{alignat} where $\Omega\subseteq \mathbf{R}^2$ is a bounded domain and $f(u) = u^3 - u$ is the derivative of the double well potential $F(u)$ defined by \begin{equation}\label{eq20170504_5} F(u)=\frac{1}{4}(u^2-1)^2.
\end{equation} The Allen-Cahn equation \cite{allen1979microscopic, bartels2011robust, chen1994spectrum, feng2003numerical, feng2014finite, feng2014analysis, feng2017finite, ilmanen1993convergence} and the Cahn-Hilliard equation \cite{alikakos1994convergence,chen1994spectrum, kovacs2011finite, wu2017multiphase} are two basic phase field models describing the phase transition process. They are also proved to be related to geometric flows. For example, the zero-level sets of the Allen-Cahn equation approximate the mean curvature flow \cite{evans1992phase, ilmanen1993convergence} and the zero-level sets of the Cahn-Hilliard equation approximate the Hele-Shaw flow \cite{stoth1996convergence, alikakos1994convergence}. The Cahn-Hilliard equation was introduced by J. Cahn and J. Hilliard in \cite{cahn1958free} to describe the process of phase separation, by which the two components of a binary fluid separate and form domains pure in each component. It can be interpreted as the $H^{-1}$ gradient flow \cite{alikakos1994convergence} of the Cahn-Hilliard energy functional \begin{align}\label{eq2.1} J_\epsilon(v):= \int_\Omega \Bigl( \frac{\epsilon}{2} |\nabla v|^2+ \frac{1}{\epsilon} F(v) \Bigr)\, {\rm d} x. \end{align} There are a few papers \cite{aristotelous2013mixed, xu2016stability,du1991numerical,elliott1989nonconforming} discussing error bounds for numerical methods for the Cahn-Hilliard equation which depend exponentially on $\frac{1}{\epsilon}$. Such an estimate is clearly not useful for small $\epsilon$, in particular in addressing the issue of whether the computed numerical interfaces converge to the original sharp interface of the Hele-Shaw problem. Instead, the polynomial dependence on $\frac{1}{\epsilon}$ is proved in \cite{feng2004error, feng2005numerical} using the standard finite element method, and in \cite{feng2016analysis,li2015numerical} using the discontinuous Galerkin method.
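The $H^{-1}$ gradient flow interpretation above can be made explicit by a routine computation, which we record for convenience. Writing $w$ for the variational derivative of $J_\epsilon$,
$$ w \;=\; \frac{\delta J_\epsilon}{\delta u} \;=\; -\epsilon\Delta u+\frac{1}{\epsilon}f(u), $$
the Cahn-Hilliard equation \eqref{eq20170504_1} takes the mixed form $u_t=\Delta w$ with $\frac{\partial u}{\partial n}=\frac{\partial w}{\partial n}=0$; testing with $w$ then yields the energy dissipation law
$$ \frac{{\rm d}}{{\rm d} t}J_\epsilon(u(t)) \;=\; (w,u_t) \;=\; (w,\Delta w) \;=\; -\|\nabla w\|_{L^2}^2 \;\le\; 0. $$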
Due to the high efficiency of the Morley elements compared with mixed finite element methods or $C^1$-conforming finite element methods, the Morley finite element method is used in this paper to derive error bounds which depend on $\frac{1}{\epsilon}$ polynomially. The highlights of this paper are fourfold. First, it establishes the piecewise ${L^{\infty}(L^2)}$ and ${L^2(H^2)}$ error bounds which depend on $\frac{1}{\epsilon}$ polynomially. If the standard technique is used, we can only prove error bounds that depend on $\frac{1}{\epsilon}$ exponentially, which cannot be used to prove our main theorem. To prove these bounds, special properties of the Morley elements are explored, i.e., Lemma 2.3 in \cite{elliott1989nonconforming}, and the piecewise ${L^{\infty}(H^{-1})}$ and ${L^2(H^1)}$ error bounds \cite{li2017error} are required. Second, by making use of the piecewise ${L^{\infty}(L^2)}$ and ${L^2(H^2)}$ error bounds above, it establishes the piecewise ${L^{\infty}(H^2)}$ error bound which depends on $\frac{1}{\epsilon}$ polynomially. If the standard technique is used, we can only get the error bound in Remark \ref{rmk20180823_1}, which does not have an optimal order. The crux here is to employ the summation by parts in time and integration by parts in space techniques simultaneously to handle the nonlinear term, together with the special properties of the Morley elements. Third, the minimal regularity of $u$ is used, i.e., $\|u_{tt}\|_{L^2(L^2)}$ regularity instead of $\|u_{tt}\|_{L^\infty(L^2)}$ regularity, and the corresponding a priori estimate is derived in Theorem \ref{thm20180609_1}. Fourth, the ${L^\infty(L^\infty)}$ error bound is established using the optimal piecewise ${L^{\infty}(H^2)}$ error bound, by which the main result that the zero-level sets of the Cahn-Hilliard equation approximate the Hele-Shaw flow is proved in Section \ref{sec5}. The organization of this paper is as follows.
In Section \ref{sec2}, the standard Sobolev space notation is introduced, some useful lemmas are stated, and a new a priori estimate of the exact solution $u$ is derived. In Section \ref{sec3}, the fully discrete approximation based on the Morley finite element space is presented. In Section \ref{sec4}, first the polynomially dependent piecewise ${L^{\infty}(L^2)}$ and ${L^2(H^2)}$ error bounds are established based on the piecewise ${L^{\infty}(H^{-1})}$ and ${L^2(H^1)}$ error bounds, then the polynomially dependent piecewise ${L^{\infty}(H^2)}$ error bound is established based on the piecewise ${L^{\infty}(L^2)}$ and ${L^2(H^2)}$ error bounds, by which the ${L^\infty(L^\infty)}$ error bound is proved. In Section \ref{sec5}, the approximation of the Hele-Shaw flow by the zero-level sets of the Cahn-Hilliard equation is proved. In Section \ref{sec6}, numerical tests are presented to validate our theoretical results, including the optimal error orders and the approximation of the Hele-Shaw flow. \section{Preliminaries}\label{sec2} In this section, we present some results which will be used in the following sections. Throughout this paper, $C$ denotes a generic positive constant which is independent of the interfacial length $\epsilon$, the spatial mesh size $h$, and the time step size $k$, and it may have different values in different formulas. The standard Sobolev space notation below is used in this paper. \begin{alignat*}{2} \|v\|_{0,p,A}&=\bigg(\int_{A}|v|^p\,{\rm d} x\bigg)^{1\slash p}\qquad &&1\le p<\infty,\\ \|v\|_{0,\infty,A}&=\underset{A}{\mbox{\rm ess sup }} |v|,\\ |v|_{m,p,A}&=\bigg(\sum_{|\alpha|=m}\|D^{\alpha}v\|_{0,p,A}^p\bigg)^{1\slash p}\qquad &&1\le p<\infty,\\ \|v\|_{m,p,A}&=\bigg(\sum_{j=0}^m|v|_{j,p,A}^p\bigg)^{1\slash p}. \end{alignat*} Here $A$ denotes some domain, e.g., a single mesh element $K$ or the whole domain $\Omega$.
When $A=\Omegaga$, $\|\cdot\|_{H^k}, \|\cdot\|_{L^k}$ are used to denote $\|\cdot\|_{H^k(\Omegaga)}, \|\cdot\|_{L^k(\Omegaga)}$ respectively, and $\|\cdot\|_{0,2}$ is also used to denote $\|\cdot\|_{L^2(\Omegaga)}$. Let $\mathcal{T}_h$ be a family of quasi-uniform triangulations of domain $\Omegaga$, and $\mathcal{E}_h$ be a collection of edges, then the global mesh dependent semi-norm, norm and inner product are defined below \begin{align*} |v|_{j,p,h}&=\bigg(\sum_{K\in\mathcal{T}_h}|v|_{j,p,K}^p\bigg)^{1\slash p},\\ \|v\|_{j,p,h}&=\bigg(\sum_{K\in\mathcal{T}_h}\|v\|_{j,p,K}^p\bigg)^{1\slash p},\\ (w,v)_h&=\sum_{K\in\mathcal{T}_h}\int_Kw(x)v(x)\,{\rm d} x. \end{align*} Define $L^2_0(\Omega)$ as the mean zero functions in $L^2(\Omega)$. For $\Phi\in L_0^2(\Omega)$, let $u := -\Omegaelta^{-1}\Phi \in H^1(\Omega)\cap L^2_0(\Omega)$ such that \begin{alignat}{2} -\Omegaelta u &= \Phi&&\qquad \mathrm{in}\ \Omegaga,\notag\\ \frac{\partialartialrtial u}{\partialartialrtial n}&= 0&&\qquad \mathrm{on}\ \partialartialrtial\Omegaga.\notag \end{alignat} Then we have \begin{align}\label{eq6_add} -(\nablala\Omegaelta^{-1}\Phi,\nablala v) = (\Phi,v)\quad \mathrm{in}\ \Omegaga\qquad\forall v\in H^1(\Omegaga)\cap L^2_0(\Omega). \end{align} For $v\in L^2_0(\Omega)$ and $\Phi\in L^2_0(\Omega)$, define the continuous $H^{-1}$ inner product by \begin{align}\label{eq7_add} (\Phi, v)_{H^{-1}} := (\nablala\Omegaelta^{-1}\Phi,\nablala\Omegaelta^{-1}v) = (\Phi,-\Omegaelta^{-1}v) = (v,-\Omegaelta^{-1}\Phi). \end{align} As in \cite{chen1994spectrum, feng2016analysis, feng2004error, feng2005numerical, li2015numerical, li2017error}, we made the following assumptions on the initial condition. These assumptions were used to derive the a priori estimates for the solution of problem \eqref{eq20170504_1}--\eqref{eq20170504_5}. {\bf General Assumption} (GA) \begin{itemize} \item[(1)] Assume that $m_0\in (-1,1)$ where \begin{align*} m_0:=\frac{1}{|\Omegaga|}\int_{\Omegaga}u_0(x)\,{\rm d} x. 
\end{align*} \item[(2)] There exists a nonnegative constant $\sigma_1$ such that \begin{align*} J_{\epsilon}(u_0)\leq C\epsilon^{-2\sigma_1}. \end{align*} \item[(3)] There exist nonnegative constants $\sigma_2$, $\sigma_3$ and $\sigma_4$ such that \begin{align*} \big\|-\epsilon\Delta u_0 +\epsilon^{-1} f(u_0)\big\|_{H^{\ell}} \leq C\epsilon^{-\sigma_{2+\ell}}\qquad \ell=0,1,2. \end{align*} \end{itemize} Under the above assumptions, the following a priori estimates of the solution were proved in \cite{feng2016analysis,feng2004error, feng2005numerical, li2015numerical}. \begin{theorem}\label{prop2.1} The solution $u$ of problem \eqref{eq20170504_1}--\eqref{eq20170504_5} satisfies the following energy estimate: \begin{align} &\underset{t\in [0,T]}{\mbox{\rm ess sup }} \Bigl( \frac{\epsilon}{2}\|\nabla u\|_{L^2}^2 +\frac{1}{\epsilon}\|F(u)\|_{L^1} \Bigr) +\int_{0}^{T}\|u_t(s)\|_{H^{-1}}^2\, {\rm d} s \leq J_{\epsilon}(u_0)\label{eq2.5}. \end{align} Moreover, suppose that GA (1)--(3) hold, $u_0\in H^4(\Omega)$ and $\partial\Omega\in C^{2,1}$; then $u$ satisfies the additional estimates: \begin{align} &\frac{1}{|\Omega|}\int_{\Omega}u(x,t)\, {\rm d} x=m_0 \quad\forall t\geq 0, \label{eq2.8}\\ &\underset{t\in [0,T]}{\mbox{\rm ess sup }}\|\Delta u\|_{L^2}\leq C\epsilon^{-\max\{\sigma_1+\frac{5}{2},\sigma_3+1\}},\label{eq2.12}\\ &\underset{t\in [0,T]}{\mbox{\rm ess sup }}\|\nabla\Delta u\|_{L^2}\leq C\epsilon^{-\max\{\sigma_1+\frac{5}{2},\sigma_3+1\}},\label{eq2.13_add}\\ &\epsilon\int_0^{T}\|\Delta u_t\|_{L^2}^2\,{\rm d} s+\underset{t\in [0,T]}{\mbox{\rm ess sup }}\|u_t\|_{L^2}^2 \leq C\epsilon^{-\max\{2\sigma_1+\frac{13}{2},2\sigma_3+\frac{7}{2},2\sigma_2+4,2\sigma_4\}}.\label{eq2.15_add} \end{align} Furthermore, if there exists $\sigma_5>0$ such that \begin{equation}\label{eq2.17} \lim_{s\rightarrow0^{+}}\|\nabla u_t(s)\|_{L^2}\leq C\epsilon^{-\sigma_5}, \end{equation}
then there hold
\begin{align}
&\underset{t\in [0,T]}{\mbox{\rm ess sup }}\|\nabla u_t\|_{L^2}^2 + \epsilon\int_0^{T}\|\nabla\Delta u_t\|_{L^2}^2\,{\rm d} s \leq C\rho_0(\epsilon),\label{eq2.18}\\
&\int_0^{T}\|u_{tt}\|_{H^{-1}}^2\,{\rm d} s \leq C\rho_1(\epsilon),\label{eq2.19}
\end{align}
where
\begin{align*}
\rho_0(\epsilon) &:=\epsilon^{-\frac{1}{2}\max\{2\sigma_1+5,2\sigma_3+2\} -\max\{2\sigma_1+\frac{13}{2},2\sigma_3+\frac{7}{2},2\sigma_2+4\}} +\epsilon^{-2\sigma_5}\\
&\qquad +\epsilon^{-\max\{2\sigma_1+7,2\sigma_3+4\}},\\
\rho_1(\epsilon) &:=\epsilon \rho_0(\epsilon).
\end{align*}
\end{theorem}
In addition, an extra a priori estimate of the solution $u$ is needed in this paper.
\begin{theorem}\label{thm20180609_1}
Under the assumptions of Theorem \ref{prop2.1}, if there exists $\sigma_6>0$ such that
\begin{align}\label{eq20180606_5}
\|\Delta u_t(0)\|_{L^2}\le C\epsilon^{-\sigma_6},
\end{align}
then there hold
\begin{align}\label{eq20180606_1}
\underset{t\in [0,T]}{\mbox{\rm ess sup }}\|\Delta u_t\|_{L^2}^2 + \epsilon\int_0^{T}\|\Delta^2 u_t\|_{L^2}^2\,{\rm d} s &\leq C \rho_2(\epsilon),\\
\underset{t\in [0,T]}{\mbox{\rm ess sup }}\epsilon\|\Delta u_t\|_{L^2}^2+\int_0^{T}\|u_{tt}\|_{L^2}^2\,{\rm d} s &\leq C \rho_3(\epsilon),\label{eq20180606_2}
\end{align}
where
\begin{align*}
\rho_2(\epsilon)&:=\epsilon^{-\max\{2\sigma_1+\frac{13}{2},2\sigma_3+\frac{7}{2},2\sigma_2+4,2\sigma_4\} - \max\{2\sigma_1+5, 2\sigma_3+2\} - 3} \\
& \qquad + \epsilon^{-\max\{\sigma_1+\frac52,\sigma_3+1\}-3}\rho_0(\epsilon) + \epsilon^{-2\sigma_6},\\
\rho_3(\epsilon)&:=\epsilon\rho_2(\epsilon).
\end{align*}
\end{theorem}
\begin{proof}
Using the Gagliardo-Nirenberg inequalities \cite{adams2003sobolev} in two-dimensional space, we have
\begin{align}\label{eq20180801_3}
\|\nabla u\|_{L^{\infty}}\leq C\bigg(\|\nabla\Delta u\|_{L^2}^{\frac12} \|u\|_{L^{\infty}}^{\frac12}+\|u\|_{L^{\infty}}\bigg)\le C \epsilon^{-\frac12 \max\{\sigma_1+\frac52, \sigma_3 + 1\}}.
\end{align}
Since $f'(u) = 3u^2 - 1$, using the Sobolev embedding theorem \cite{adams2003sobolev}, \eqref{eq2.5}, \eqref{eq2.12}, \eqref{eq2.13_add}, \eqref{eq2.15_add} and \eqref{eq2.18}, we have
\begin{align} \label{eq20180731_1}
&~\quad \int_0^T \|\Delta(f'(u)u_t)\|_{L^2}^2 \,{\rm d} s \\
&= \int_0^T \|6uu_t \Delta u + 12 u\nabla u \cdot \nabla u_t + 6u_t\nabla u \cdot \nabla u + (3u^2 -1) \Delta u_t\|_{L^2}^2 \,{\rm d} s \notag \\
&\le C\int_0^T\|\Delta u\|_{L^2}^2 \|u_t\|_{L^\infty}^2\, {\rm d} s + C\int_0^T \|\nabla u\|_{L^\infty}^2 \|\nabla u_t\|_{L^2}^2 \,{\rm d} s \notag \\
& ~\quad + C\int_0^T \|\nabla u\|_{L^\infty}^{4} \|u_t\|_{L^2}^2 \,{\rm d} s + C\int_0^T \|\Delta u_t\|_{L^2}^2\, {\rm d} s \notag \\
&\le C\|\Delta u\|_{L^\infty(L^2)}^2 \int_0^T\|u_t\|_{H^2}^2\, {\rm d} s + C\|\nabla u_t\|_{L^\infty(L^2)}^2 \|\nabla u\|_{L^\infty(L^\infty)}^2 \notag \\
& ~\quad + C\|\nabla u\|_{L^\infty(L^\infty)}^{4} \|u_t\|_{L^\infty(L^2)}^2 + C\int_0^T \|\Delta u_t\|_{L^2}^2\,{\rm d} s \notag \\
&\le C \epsilon^{-\max\{2\sigma_1+\frac{13}{2},2\sigma_3+\frac{7}{2}, 2\sigma_2+4,2\sigma_4\} - \max\{2\sigma_1+5, 2\sigma_3+2\}-1} \notag \\
&~\quad + C\epsilon^{-\max\{\sigma_1 + \frac52,\sigma_3+1\}}\rho_0(\epsilon) \notag \\
& ~\quad + C \epsilon^{-\max\{2\sigma_1+\frac{13}{2},2\sigma_3+\frac{7}{2}, 2\sigma_2+4,2\sigma_4\} - \max\{2\sigma_1+5, 2\sigma_3 + 2\}} \notag \\
& ~\quad + C \epsilon^{-\max\{2\sigma_1+\frac{13}{2},2\sigma_3+\frac{7}{2}, 2\sigma_2+4,2\sigma_4\} - 1} \notag \\
&\le C
\epsilon^{-\max\{2\sigma_1+\frac{13}{2},2\sigma_3+\frac{7}{2}, 2\sigma_2+4,2\sigma_4\} - \max\{2\sigma_1+5, 2\sigma_3+2\}} \notag \\
&~\quad + C\epsilon^{-\max\{\sigma_1 + \frac52, \sigma_3+1\}}\rho_0(\epsilon) \notag.
\end{align}
Taking the derivative with respect to $t$ on both sides of \eqref{eq20170504_1}, we get
\begin{align}\label{eq20180606_3}
u_{tt}+\epsilon\Delta^2u_t-\frac{1}{\epsilon}\Delta(f'(u)u_t)=0.
\end{align}
Testing \eqref{eq20180606_3} with $\Delta^2u_t$ and integrating over $(0,T)$, we obtain
\begin{align}\label{eq20180606_4}
&~\quad\frac{1}{2}\|\Delta u_t(T)\|_{L^2}^2+\epsilon\int_0^{T}\|\Delta^2u_t\|_{L^2}^2\,{\rm d} s \\
& =\frac{1}{\epsilon}\int_0^{T}(\Delta(f'(u)u_t),\Delta^2 u_t)\, {\rm d} s + \frac{1}{2}\|\Delta u_t(0)\|_{L^2}^2 \notag \\
&\le\frac{C}{\epsilon^3}\int_0^{T} \|\Delta(f'(u) u_t)\|_{L^2}^2\, {\rm d} s +\frac{\epsilon}{2}\int_0^{T}\|\Delta^2u_t\|_{L^2}^2\,{\rm d} s +C\epsilon^{-2\sigma_6}\notag.
\end{align}
Then \eqref{eq20180606_1} is obtained by \eqref{eq20180731_1}. Next we bound \eqref{eq20180606_2}. Testing \eqref{eq20180606_3} with $u_{tt}$, integrating over $(0,T)$, and using \eqref{eq20180606_4}, we obtain
\begin{align}\label{eq20180606_8}
&~\quad \int_0^{T}\|u_{tt}\|_{L^2}^2\,{\rm d} s + \frac{\epsilon}{2}\|\Delta u_t(T)\|_{L^2}^2 \\
&\le \frac{\epsilon}{2}\|\Delta u_t(0)\|_{L^2}^2 + \frac{C}{\epsilon^2}\int_0^{T} \|\Delta(f'(u)u_t)\|_{L^2}^2\,{\rm d} s + \frac{1}{2}\int_0^{T}\|u_{tt}\|_{L^2}^2\,{\rm d} s \notag.
\end{align}
Then \eqref{eq20180606_2} is obtained by \eqref{eq20180731_1}.
\end{proof}
The next lemma gives an $\epsilon$-independent lower bound for the principal eigenvalue of the linearized Cahn-Hilliard operator $\mathcal{L}_{CH}$ defined below. The proof of this lemma can be found in \cite{chen1994spectrum}.
\begin{lemma}\label{lem3.4}
Suppose that GA (1)--(3) hold.
Given a smooth initial curve/surface $\Gamma_0$, let $u_0$ be a smooth function satisfying $\Gamma_0 = \{x\in\Omega;\, u_0(x)=0\}$ and the profile described in \cite{chen1994spectrum}, and let $u$ be the solution to problem \eqref{eq20170504_1}--\eqref{eq20170504_5}. Define $\mathcal{L}_{CH}$ as
\begin{equation*}
\mathcal{L}_{CH} := \Delta\left(\epsilon\Delta-\frac{1}{\epsilon}f'(u)I\right).
\end{equation*}
Then there exist $0<\epsilon_0\ll 1$ and a positive constant $C_0$ such that the principal eigenvalue of the linearized Cahn-Hilliard operator $\mathcal{L}_{CH}$ satisfies
\begin{equation*}
\lambda_{CH}:=\inf_{\substack{0\neq\psi\in H^1(\Omega)\\ \Delta w=\psi}} \frac{\epsilon\|\nabla\psi\|_{L^2}^2+\frac{1}{\epsilon}(f'(u)\psi,\psi)}{\|\nabla w\|_{L^2}^2}\geq -C_0
\end{equation*}
for $t\in [0,T]$ and $\epsilon\in (0,\epsilon_0)$.
\end{lemma}

\section{Fully Discrete Approximation}\label{sec3}
In this section, the backward Euler method is used for time stepping, and the Morley finite element discretization is used for space discretization.

\subsection{Morley finite element space}
Define the Morley finite element space $S^h$ below \cite{brenner1999convergence,brenner2013morley,elliott1989nonconforming}:
\begin{align*}
S^h := \{& v_h\in L^{\infty}(\Omega): v_h|_K\in P_2(K)\ \forall K\in\mathcal{T}_h,\ v_h ~\text{is continuous at the vertices of all triangles,} \\
&\frac{\partial v_h}{\partial n} \text{ is continuous at the midpoints of interelement edges of triangles} \}.
\end{align*}
We use the following notation:
\begin{equation*}
H^j_E(\Omega):=\{v\in H^j(\Omega): \frac{\partial v}{\partial n}=0~\text{on}~\partial\Omega\}\qquad j=1, 2, 3.
\end{equation*}
Corresponding to $H^j_E(\Omega)$, define $S^h_E$ as a subspace of $S^h$ below:
\begin{equation*}
S^h_E := \{v_h\in S^h: \frac{\partial v_h}{\partial n}=0 \text{ at the midpoints of the edges on } \partial\Omega\}.
\end{equation*}
We also define $\mathring{H}_E^j(\Omega) = H_E^j(\Omega) \cap L_0^2(\Omega), j=1,2,3$, and $\mathring{S}^h_E = S^h_E \cap L_0^2(\Omega)$, where $L_0^2(\Omega)$ denotes the set of mean-zero functions.

The enriching operator $\widetilde{E}_h$ of \cite{brenner1996two,brenner1999convergence,brenner2013morley} is recalled next. Let $\widetilde{S}_E^h$ be the Hsieh-Clough-Tocher macro element space, which is an enriched space of the Morley finite element space $S_E^h$. Let $p$ and $m$ denote the internal vertices and the edge midpoints of the triangulation $\mathcal{T}_h$, respectively. Define $\widetilde{E}_h: S_E^h\rightarrow \widetilde{S}_E^h$ by
\begin{align*}
(\widetilde{E}_h v)(p) &= v(p),\\
\frac{\partial (\widetilde{E}_h v)}{\partial n}(m) &= \frac{\partial v}{\partial n}(m),\\
(\partial^{\beta}(\widetilde{E}_h v))(p) &= \text{average of } (\partial^{\beta}v_i)(p)\qquad |\beta|=1,
\end{align*}
where $v_i=v|_{T_i}$ and the triangles $T_i$ contain $p$ as a vertex. Define the interpolation operator $I_h: H^2_E(\Omega)\rightarrow S_E^h$ such that
\begin{align*}
(I_h v)(p)&=v(p),\\
\frac{\partial (I_h v)}{\partial n}(m)&=\frac{1}{|e|}\int_e\frac{\partial v}{\partial n}\,{\rm d} S,
\end{align*}
where $p$ ranges over the internal vertices of all the triangles $T$, and $m$ ranges over the midpoints of all the edges $e$.
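The edge degree of freedom of $I_h$ is the average of the normal derivative over each edge. Since Morley functions are piecewise quadratic, $\partial v/\partial n$ varies linearly along a straight edge, so this average coincides with the midpoint value appearing in the definition of $S^h$. A minimal Python sketch (the quadratic $v$ and the edge below are hypothetical choices, not data from the paper) illustrates this:

```python
import numpy as np

# Coefficients of a hypothetical quadratic v(x, y) = a + b*x + c*y + d*x^2 + e*x*y + f*y^2
a, b, c, d, e, f = 1.0, -2.0, 0.5, 3.0, -1.0, 2.0

def dv_dn(p, n):
    """Normal derivative of v at point p = (x, y) in unit direction n."""
    x, y = p
    grad = np.array([b + 2*d*x + e*y, c + e*x + 2*f*y])
    return grad @ n

# A straight edge from p0 to p1 and its unit normal
p0, p1 = np.array([0.0, 0.0]), np.array([1.0, 0.5])
t = p1 - p0
n = np.array([t[1], -t[0]]) / np.linalg.norm(t)

# Edge average of dv/dn via Simpson's rule (exact here: the integrand is
# linear in arclength), compared with the midpoint value
vals = [dv_dn(p, n) for p in (p0, 0.5*(p0 + p1), p1)]
average = (vals[0] + 4*vals[1] + vals[2]) / 6.0
midpoint = vals[1]
assert abs(average - midpoint) < 1e-12
```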
It can be proved that \cite{brenner1996two,brenner1999convergence,brenner2013morley,elliott1989nonconforming}
\begin{alignat}{2}\label{eq20170812_6}
|v-I_hv|_{j,p,K}&\le Ch^{3-j}|v|_{3,p,K}\qquad&&\forall K\in\mathcal{T}_h,\quad\forall v\in H^3(K),\quad j=0,1,2,\\
\|\widetilde{E}_h v-v\|_{j,2,h}&\le Ch^{2-j}|v|_{2,2,h}\quad&&\forall v\in S_E^h,\quad j=0,1,2.\label{eq20171006_1}
\end{alignat}
Notice that $\widetilde{E}_h$ and $I_h$ do not preserve mean-zero functions. Let $\mathring{\widetilde S_E^h}:= \widetilde{S}_E^h \cap L_0^2(\Omega)$. Define $\mathring{\widetilde{E}_h}:\mathring{S}_E^h \to \mathring{\widetilde S_E^h}$ by
\begin{align} \label{eq20180524_1}
\mathring{\widetilde{E}_h}v = \widetilde{E}_h v - \frac{1}{|\Omega|}\int_{\Omega} \widetilde{E}_h v \,{\rm d} x.
\end{align}
Using \eqref{eq20171006_1}, we have
$$
\int_\Omega \widetilde{E}_h v \,{\rm d} x = (\widetilde{E}_h v - v, 1) \leq |\Omega|^{1/2}\|\widetilde{E}_h v - v\|_{0,2} \leq Ch^2|v|_{2,2,h} \qquad \forall v \in \mathring{S}_E^h.
$$
Then
\begin{align} \label{eq20180524_2}
\|\mathring{\widetilde{E}_h} v-v\|_{j,2,h}&\le Ch^{2-j}|v|_{2,2,h}\qquad\forall v\in \mathring{S}_E^h,\quad j=0,1,2.
\end{align}
Finally, the following spaces are needed:
\begin{alignat*}{2}
&H^{3,h}(\Omega)=S^h\oplus H^3(\Omega), &&\qquad H_E^{3,h}(\Omega)=S_E^h\oplus H_E^3(\Omega),\\
&H^{2,h}(\Omega)=S^h\oplus H^2(\Omega), &&\qquad H_E^{2,h}(\Omega)=S_E^h\oplus H_E^2(\Omega),\\
&H^{1,h}(\Omega)=S^h\oplus H^1(\Omega), &&\qquad H_E^{1,h}(\Omega)=S_E^h\oplus H_E^1(\Omega),
\end{alignat*}
where, for instance,
\begin{align*}
S_E^h\oplus H_E^3(\Omega):=\{u+v: u\in S_E^h\ \ \text{and}\ \ v\in H_E^3(\Omega)\}.
\end{align*}

\subsection{Formulation}
The weak form of \eqref{eq20170504_1}--\eqref{eq20170504_5} is to seek $u(\cdot,t)\in H^2_E(\Omega)$ such that
\begin{align}\label{eq20180211_1}
(u_t,v)+\epsilon a(u,v) +\frac{1}{\epsilon}(\nabla f(u), \nabla v)&= 0\quad\forall v\in H_E^2(\Omega),\\
u(\cdot,0)&=u_0\in H_E^2(\Omega),\label{eq20180211_2}
\end{align}
where the bilinear form $a(\cdot,\cdot)$ is defined as
\begin{align}\label{eq20170504_8}
a(u,v):=\int_{\Omega}\Delta u\Delta v+\bigl(\frac{\partial^2u}{\partial x\partial y}\frac{\partial^2v}{\partial x\partial y}-\frac12\frac{\partial^2u}{\partial x^2}\frac{\partial^2v}{\partial y^2}-\frac12\frac{\partial^2u}{\partial y^2}\frac{\partial^2v}{\partial x^2}\bigr)\,{\rm d} x\,{\rm d} y
\end{align}
with Poisson's ratio $\frac12$. Next define the discrete bilinear form
\begin{align}\label{eq20170504_9}
a_h(u,v)&:=\sum_{K\in\mathcal{T}_h}\int_K\Delta u\Delta v+\bigl(\frac{\partial^2u}{\partial x\partial y}\frac{\partial^2v}{\partial x\partial y}-\frac12\frac{\partial^2u}{\partial x^2}\frac{\partial^2v}{\partial y^2}-\frac12\frac{\partial^2u}{\partial y^2}\frac{\partial^2v}{\partial x^2}\bigr) \,{\rm d} x\,{\rm d} y.
\end{align}
Based on the bilinear form \eqref{eq20170504_9}, the fully discrete Galerkin method is to seek $u_h^n\in S^h_E$ such that
\begin{align}\label{eq20170504_11}
(d_tu_h^{n},v_h)+\epsilon a_h(u_h^{n},v_h)+\frac{1}{\epsilon}(\nabla f(u_h^{n}),\nabla v_h)_h&=0\quad\forall v_h\in S^h_E,\\
u_h^0&=u_0^h\in S^h_E,\label{eq20170504_12}
\end{align}
where the difference operator $d_tu_h^{n} := \frac{u_h^{n}-u_h^{n-1}}{k}$ and $u_0^h := P_hu(t_0)$, with the operator $P_h$ defined below.

\subsection{Elliptic operator $P_h$}
We define
\begin{align*}
R:=\bigl\{v\in H_E^2(\Omega): \Delta v\in H_E^2(\Omega)\bigr\}.
\end{align*}
Then, for any $v\in R$, define the elliptic operator $P_h$ (cf. \cite{elliott1989nonconforming}) by seeking $P_hv\in S_E^h$ such that
\begin{align}\label{eq20170504_14}
\tilde b_h(P_hv,w)=(\epsilon\Delta^2v-\frac{1}{\epsilon}\nabla \cdot (f'(u)\nabla v)+\alpha v,w)\qquad\forall w\in S_E^h,
\end{align}
where
\begin{align}\label{eq20170504_15}
\tilde b_h(v,w):=\epsilon a_h(v,w)+\frac{1}{\epsilon}(f'(u)\nabla v,\nabla w)_h+\alpha(v,w),
\end{align}
and $\alpha$ should be chosen as $\alpha = \alpha_0 \epsilon^{-3}$ to guarantee the coercivity of $\tilde{b}_h(\cdot,\cdot)$. More precisely, we first cite some lemmas from \cite{elliott1989nonconforming} that will be used in this paper.
\begin{lemma}[Lemma 2.3 in \cite{elliott1989nonconforming}] \label{lem20180817_1}
Let $w,z \in H_E^{2,h}(\Omega)$; then
$$
\left| \sum_{K \in \mathcal{T}_h} \int_{\partial K} \frac{\partial w}{\partial n}z \,{\rm d} S \right| \leq Ch(h\|w\|_{2,2,h}\|z\|_{2,2,h} + \|w\|_{1,2,h}\|z\|_{2,2,h} + \|w\|_{2,2,h}\|z\|_{1,2,h}).
$$
\end{lemma}
\begin{lemma}[Lemma 2.5 in \cite{elliott1989nonconforming}] \label{lem20180817_2}
Let $z\in H^{2,h}(\Omega)$ and $w\in H_E^2(\Omega) \cap H^3(\Omega)$, and define $B_h(w,z)$ by
$$
B_h(w,z) = \sum_{K\in \mathcal{T}_h} \int_{\partial K} \left( \Delta w \frac{\partial z}{\partial n} + \frac{1}{2} \frac{\partial^2 w}{\partial n \partial s}\frac{\partial z}{\partial s} - \frac{1}{2}\frac{\partial^2 w}{\partial s^2}\frac{\partial z}{\partial n} \right)\,{\rm d} S;
$$
then we have
\begin{equation} \label{eq20180817_2}
|B_h(w,z)| \leq Ch |w|_{3,2,h}|z|_{2,2,h}.
\end{equation}
\end{lemma}
For any $w\in S_E^h$, using Lemma \ref{lem20180817_1} and the inverse inequality, we have
$$
|w|_{1,2,h}^2 \leq |w|_{2,2,h}\|w\|_{0,2} + \left| \sum_{K \in \mathcal{T}_h} \int_{\partial K} \frac{\partial w}{\partial n}w \,{\rm d} S \right| \leq C ( |w|_{2,2,h}\|w\|_{0,2} + |w|_{1,2,h}\|w\|_{0,2} + \|w\|_{0,2}^2 ).
$$
The kick-back argument gives
$$
|w|_{1,2,h}^2 \leq C ( |w|_{2,2,h}\|w\|_{0,2} + \|w\|_{0,2}^2 ).
$$
Hence,
\begin{align} \label{eq20180817_3}
\tilde{b}_h(w,w) &= \epsilon a_h(w,w) + \frac{1}{\epsilon} (f'(u) \nabla w, \nabla w)_h + \frac{\alpha_0}{\epsilon^3}(w,w) \\
& \geq \frac{1}{\epsilon^3} \left( \frac{\epsilon^4}{2}|w|_{2,2,h}^2 - C\epsilon^2|w|_{1,2,h}^2 + \alpha_0\|w\|_{0,2}^2 \right) \notag \\
& \geq \frac{1}{\epsilon^3} \left( \frac{\epsilon^4}{4}|w|_{2,2,h}^2 + (\alpha_0 - C)\|w\|_{0,2}^2 \right) \notag,
\end{align}
which implies the coercivity of $\tilde{b}_h(\cdot,\cdot)$ when $\alpha_0$ is large enough but independent of $\epsilon$. Next we give the properties of $P_h$.
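The absorption step used in the coercivity argument is elementary Young-inequality algebra: with $a=|w|_{2,2,h}$ and $b=\|w\|_{0,2}$, the kick-back bound controls $\epsilon^2|w|_{1,2,h}^2$ by $C\epsilon^2(ab+b^2)$, and $ab \le \frac{\epsilon^2}{4C}a^2 + \frac{C}{\epsilon^2}b^2$ hides the middle term in $\frac{\epsilon^4}{4}a^2$ plus a constant multiple of $b^2$. A small numerical sketch (Python, with a hypothetical constant $C=2$ and random samples) checks this inequality:

```python
import random

random.seed(0)

# With a = |w|_{2,2,h}, b = ||w||_{0,2}, 0 < eps <= 1 and a generic constant C
# (C = 2 is a hypothetical choice), Young's inequality
#   a*b <= (eps^2/(4C))*a^2 + (C/eps^2)*b^2
# scaled by C*eps^2 yields the absorption used above:
#   C*eps^2*(a*b + b^2) <= (eps^4/4)*a^2 + (C^2 + C)*b^2.
C = 2.0
for _ in range(10_000):
    eps = random.uniform(1e-3, 1.0)
    a, b = random.uniform(0.0, 100.0), random.uniform(0.0, 100.0)
    lhs = C*eps**2*(a*b + b**2)
    rhs = eps**4/4*a**2 + (C**2 + C)*b**2
    assert lhs <= rhs + 1e-9
```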
Define $b_h(\cdot,\cdot) := \epsilon^3 \tilde{b}_h(\cdot,\cdot)$ and the norm
$$
\triplenorm{v}_{2,2,h}^2 := \epsilon^4 |v|_{2,2,h}^2 + \epsilon^2|v|_{1,2,h}^2 + \|v\|_{0,2}^2.
$$
\begin{lemma} \label{lem20180817_3}
Consider the following problems:
\begin{align}
b_h(v, \eta) &= F_h(\eta) \quad \forall \eta \in H_E^2(\Omega), \label{eq20180817_4} \\
b_h(v_h, \chi) &= \widetilde{F}_h(\chi) \quad \forall \chi \in S_E^h. \label{eq20180817_5}
\end{align}
Then we have
\begin{align}
& \quad ~\triplenorm{v - v_h}_{2,2,h} \label{eq20180817_6} \\
& \leq Ch\left\{ (\epsilon+h)^2|v|_{3,2} + |v|_{1,2} + \sup_{\chi \in S_E^h} \frac{F_h(\widetilde{E}_h\chi) - \widetilde{F}_h(\chi) + \alpha_0(v, \chi - \widetilde{E}_h\chi)}{\triplenorm{\chi}_{2,2,h}} \right\}. \notag
\end{align}
\end{lemma}
\begin{proof}
Using \eqref{eq20180817_3} and the Strang Lemma, we have
$$
\begin{aligned}
&\quad~ \triplenorm{v - v_h}_{2,2,h} \\
& \leq C \left( \inf_{\psi \in S_E^h}\triplenorm{v - \psi}_{2,2,h} + \sup_{\chi \in S_E^h} \frac{b_h(v, \chi) - \widetilde{F}_h(\chi)}{\triplenorm{\chi}_{2,2,h}} \right) \\
& \leq C \left( \inf_{\psi \in S_E^h}\triplenorm{v - \psi}_{2,2,h} + \sup_{\chi \in S_E^h} \frac{b_h(v, \chi - \widetilde{E}_h \chi) + b_h(v,\widetilde{E}_h\chi)- \widetilde{F}_h(\chi)}{\triplenorm{\chi}_{2,2,h}} \right) \\
& \leq C \left( \inf_{\psi \in S_E^h}\triplenorm{v - \psi}_{2,2,h} + \sup_{\chi \in S_E^h} \frac{b_h(v, \chi - \widetilde{E}_h \chi) + F_h(\widetilde{E}_h\chi)- \widetilde{F}_h(\chi)}{\triplenorm{\chi}_{2,2,h}} \right).
\end{aligned}
$$
Using Lemma \ref{lem20180817_2} and \eqref{eq20171006_1}, we have
$$
\begin{aligned}
b_h(v, \chi - \widetilde{E}_h\chi) &= \epsilon^4 a_h(v, \chi - \widetilde{E}_h\chi) + \epsilon^2 (f'(u)\nabla v, \nabla(\chi - \widetilde{E}_h\chi))_h + (\alpha_0v, \chi - \widetilde{E}_h\chi) \\
& \leq Ch\left( \epsilon^4 |v|_{3,2}|\chi|_{2,2,h} + \epsilon^2|v|_{1,2}|\chi|_{2,2,h} \right) + (\alpha_0 v, \chi - \widetilde{E}_h\chi) \\
& \leq Ch\left( \epsilon^2 |v|_{3,2} + |v|_{1,2} \right) \triplenorm{\chi}_{2,2,h} + (\alpha_0 v, \chi - \widetilde{E}_h\chi).
\end{aligned}
$$
Then we obtain the desired bound \eqref{eq20180817_6} by the approximation properties of the Morley interpolation operator \eqref{eq20170812_6}.
\end{proof}
\begin{theorem} \label{thm20180817_1}
Suppose that $u$ solves the Cahn-Hilliard equation \eqref{eq20170504_1}--\eqref{eq20170504_3}; then we have
\begin{align}
& \quad ~ \epsilon^2|u - P_hu|_{2,2,h} + \epsilon|u - P_h u|_{1,2,h} + \|u - P_hu\|_{0,2} \label{eq20180817_7}\\
& \leq Ch \big( (\epsilon+h)^2|u|_{3,2} + |u|_{1,2} + \epsilon h\|u_t\|_{0,2} \big), \notag \\
& \quad ~ \epsilon^2|u_t - (P_hu)_t|_{2,2,h} + \epsilon|u_t - (P_h u)_t|_{1,2,h} + \|u_t - (P_hu)_t\|_{0,2} \label{eq20180817_8} \\
& \leq Ch \Big\{ (\epsilon+h)^2|u_t|_{3,2} + |u_t|_{1,2} + \epsilon h \|u_{tt}\|_{0,2} + \|u_t\nabla u\|_{0,2} \notag \\
&\quad~+ \epsilon^{-1}|\ln h|^{1/2}\|u_t\|_{0,2} ((\epsilon+h)^2|u|_{3,2} + |u|_{1,2} + \epsilon h\|u_t\|_{0,2}) \Big\}. \notag
\end{align}
\end{theorem}
\begin{proof}
Taking $v = u$ and $v_h = P_hu$ in Lemma \ref{lem20180817_3}, and noticing that
$$
F_h(\psi) = \widetilde{F}_h(\psi) = (\epsilon^4 \Delta^2u - \epsilon^2\Delta f(u) + \alpha_0 u, \psi) = (\epsilon^3 u_t + \alpha_0 u, \psi),
$$
we obtain the bound \eqref{eq20180817_7} from \eqref{eq20171006_1} and \eqref{eq20180817_6}.
Taking $v = u_t$ and $v_h = (P_hu)_t$, we have
$$
\begin{aligned}
F_h(\psi) &= (\epsilon^4 \Delta^2 u_t - \epsilon^2 \Delta f(u)_t + \alpha_0 u_t, \psi) - (\epsilon^2 f''(u)u_t \nabla u, \nabla \psi)_h, \\
\widetilde{F}_h(\psi) &= (\epsilon^4 \Delta^2 u_t - \epsilon^2 \Delta f(u)_t + \alpha_0 u_t, \psi) - (\epsilon^2 f''(u)u_t \nabla P_hu, \nabla \psi)_h.
\end{aligned}
$$
Then we get
$$
\begin{aligned}
& \quad ~F_h(\widetilde{E}_h \chi) - \widetilde{F}_h(\chi) + \alpha_0(u_t, \chi - \widetilde{E}_h \chi) \\
&= (\epsilon^4 \Delta^2 u_t - \epsilon^2 \Delta f(u)_t, \widetilde{E}_h \chi - \chi) \\
& \quad ~- (\epsilon^2 f''(u)u_t\nabla u, \nabla \widetilde{E}_h\chi - \nabla\chi) - (\epsilon^2 f''(u)u_t\nabla(u - P_hu), \nabla \chi)\\
&\leq C\epsilon^3h^2\|u_{tt}\|_{0,2}|\chi|_{2,2,h} + C\epsilon^2 h \|u_t\nabla u\|_{0,2} |\chi|_{2,2,h} + C\epsilon^2\|u_t\|_{0,2}\|\nabla \chi\|_{0,\infty} |u - P_hu|_{1,2,h} \\
&\leq Ch\Big\{ \epsilon h\|u_{tt}\|_{0,2} + \|u_t\nabla u\|_{0,2} \\
&\quad~+ \epsilon^{-1}|\ln h|^{1/2}\|u_t\|_{0,2} ((\epsilon+h)^2|u|_{3,2} + |u|_{1,2} + \epsilon h\|u_t\|_{0,2}) \Big\}\triplenorm{\chi}_{2,2,h},
\end{aligned}
$$
where we use the discrete Sobolev inequality and the fact that $\nabla \chi$ belongs to the Crouzeix-Raviart finite element space \cite{brenner2015forty}. This implies the bound \eqref{eq20180817_8}.
\end{proof}
Combining these with the a priori estimates given in Section \ref{sec2}, we have the following theorem.
\begin{theorem} \label{thm20180818_1}
Assume $h \leq C\epsilon$; then there hold
\begin{align}
& \epsilon^4|u - P_hu|_{2,2,h}^2 + \epsilon^2|u - P_h u|_{1,2,h}^2 + \|u - P_hu\|_{0,2}^2 \leq Ch^2\rho_4(\epsilon), \label{eq20180818_1} \\
& \quad ~ \int_0^T \epsilon^4|u_t - (P_hu)_t|_{2,2,h}^2 + \epsilon^2|u_t - (P_hu)_t|_{1,2,h}^2 + \|u_t - (P_hu)_t\|_{0,2}^2 \,{\rm d} s \label{eq20180818_2} \\
&\leq Ch^2\epsilon^4\rho_3(\epsilon) + Ch^2|\ln h| \rho_5(\epsilon), \notag
\end{align}
where
$$
\begin{aligned}
\rho_4(\epsilon) &:= \epsilon^{-\max\{2\sigma_1+\frac{13}{2},2\sigma_3+\frac{7}{2},2\sigma_2+4,2\sigma_4\} +4}, \\
\rho_5(\epsilon) &:= \epsilon^{-2\max\{2\sigma_1+\frac{13}{2},2\sigma_3+\frac{7}{2},2\sigma_2+4,2\sigma_4\} +2}.
\end{aligned}
$$
\end{theorem}
\begin{proof}
Using \eqref{eq2.5}, \eqref{eq2.13_add} and \eqref{eq2.15_add}, we have
\begin{align}
& \quad ~(\epsilon+h)^4 |u|_{3,2}^2 + |u|_{1,2}^2 + \epsilon^2 h^2 \|u_t\|_{0,2}^2 \label{eq20180818_3} \\
& \leq C\epsilon^{-\max\{2\sigma_1+5, 2\sigma_3+2\}+4} + C\epsilon^{-2\sigma_1 - 1} + C\epsilon^{-\max\{ 2\sigma_1+\frac{13}{2},2\sigma_3+\frac{7}{2},2\sigma_2+4,2\sigma_4 \}+4} \notag \\
& \leq C\rho_4(\epsilon), \notag
\end{align}
which implies the bound \eqref{eq20180818_1} by \eqref{eq20180817_7}.
Using \eqref{eq2.15_add}, \eqref{eq20180606_2}, \eqref{eq2.18} and \eqref{eq20180801_3}, we obtain
$$
\begin{aligned}
&\quad~ \int_0^T (\epsilon+h)^4|u_t|_{3,2}^2 + |u_t|_{1,2}^2 + \epsilon^2 h^2 \|u_{tt}\|_{0,2}^2 + \|u_t\nabla u\|_{0,2}^2\,{\rm d} s \\
& \leq C\int_0^T \epsilon^4|u_t|_{3,2}^2 + |u_t|_{1,2}^2 + \epsilon^4\|u_{tt}\|_{0,2}^2 + \|u_t\|_{0,2}^2 \|\nabla u\|_{0,\infty}^2 \,{\rm d} s \\
& \leq C\epsilon^3\rho_0(\epsilon) + C\rho_0(\epsilon) + C\epsilon^4\rho_3(\epsilon) \\
&~+ C\epsilon^{-\max\{\sigma_1+\frac52, \sigma_3+1\} - \max\{2\sigma_1+\frac{13}{2}, 2\sigma_3+\frac72, 2\sigma_2+4, 2\sigma_4 \}} \\
& \leq C\epsilon^4\rho_3(\epsilon).
\end{aligned}
$$
Further, using \eqref{eq2.15_add} and \eqref{eq20180818_3}, we obtain
$$
\begin{aligned}
\quad ~ \int_0^T \epsilon^{-2} \|u_t\|_{0,2}^2 ((\epsilon+h)^2|u|_{3,2} + |u|_{1,2} + \epsilon h\|u_t\|_{0,2})^2\,{\rm d} s \leq C \rho_5(\epsilon).
\end{aligned}
$$
This implies the bound \eqref{eq20180818_2}.
\end{proof}
\begin{corollary}\label{cor20180818_1}
Under the condition that
\begin{equation} \label{mesh_cond}
h \leq C\epsilon^2 \rho_4^{-\frac12}(\epsilon), \quad h \leq C \rho_3^{-\frac12}(\epsilon), \quad h|\ln h|^{\frac12} \leq C \epsilon^2 \rho_5^{-\frac12}(\epsilon),
\end{equation}
there hold
\begin{align} \label{eq20180819_1}
|P_hu|_{j,2,h}^2 &\leq C(1+|u|_{j,2,h}^2) \quad j=0,1,2, \\
\int_0^T |P_hu|_{j,2,h}^2 \,{\rm d} s&\leq C\Big(1+\int_0^T |u|_{j,2,h}^2\,{\rm d} s\Big) \quad j=0,1,2, \notag \\
\|P_h u\|_{0,\infty} &\leq C. \notag
\end{align}
\end{corollary}
\begin{proof}
By the Sobolev embedding and \eqref{eq20180818_1}, we have
$$
\|P_h u\|_{0,\infty} \leq \|u\|_{0,\infty} + \|u - P_hu\|_{2,2,h} \leq C + Ch\epsilon^{-2}\rho_4^{1/2}(\epsilon) \leq C.
$$
The first two bounds are direct consequences of Theorem \ref{thm20180818_1}.
\end{proof}

\section{Error Estimates} \label{sec4}
In this section, we first derive the piecewise ${L^{\infty}(L^2)}$ and ${L^2(H^2)}$ error bounds, which depend on $\frac{1}{\epsilon}$ polynomially, based on the generalized coercivity result in Theorem \ref{thm3.7_add} and the piecewise ${L^{\infty}(H^{-1})}$ and ${L^2(H^1)}$ error bounds. Then we prove the piecewise ${L^{\infty}(H^2)}$ error bound based on the piecewise ${L^{\infty}(L^2)}$ and ${L^2(H^2)}$ error bounds. Finally, the ${L^\infty(L^\infty)}$ error bound is established. Decompose the error as
\begin{align}
u-u_h^n=(u-P_hu)+(P_hu-u_h^n):=\rho^n+\theta^n.
\end{align}
The following two lemmas will be used in this section.
\begin{lemma}[Summation by parts] \label{lem20180515_1}
Suppose $\{a_n\}_{n=0}^\ell$ and $\{b_n\}_{n=0}^\ell$ are two sequences; then
\begin{equation*}
\sum_{n=1}^\ell(a^n-a^{n-1},b^n) = (a^\ell,b^\ell)-(a^0,b^0)-\sum_{n=1}^\ell(a^{n-1},b^n-b^{n-1}).
\end{equation*}
\end{lemma}
\begin{lemma}\label{lem20180409_1}
Let $u(t_n)$ be the solution of \eqref{eq20170504_1}--\eqref{eq20170504_5} and $u_h^n$ be the solution of \eqref{eq20170504_11}--\eqref{eq20170504_12}; then
\begin{align*}
\rho^n \in \mathring{S}^h_E,\quad \theta^n \in \mathring{S}^h_E.
\end{align*}
\end{lemma}
\begin{proof}
Testing \eqref{eq20170504_1} with the constant $1$ and integrating over $(0,t)$, we obtain for any $t\ge0$,
\begin{align*}
\int_{\Omega}u(t)\,{\rm d} x=\int_{\Omega}u(0)\,{\rm d} x.
\end{align*}
Then choosing $v=u(t), w=1$ in \eqref{eq20170504_14}, we have for any $t\ge0$,
\begin{align*}
\int_{\Omega}P_hu(t)\,{\rm d} x=\int_{\Omega}u(t)\,{\rm d} x.
\end{align*}
Choosing $v_h=1$ in \eqref{eq20170504_11}, we get
\begin{align*}
\int_{\Omega}u_h^n\,{\rm d} x=\int_{\Omega}u_h^{n-1}\,{\rm d} x = \cdots=\int_{\Omega}u_h^{0}\,{\rm d} x.
\end{align*}
Therefore, choosing $u_h^{0}=P_hu(0)$, we get
\begin{align*}
\int_{\Omega}u_h^n\,{\rm d} x &= \int_{\Omega}u_h^{0}\,{\rm d} x =\int_{\Omega}P_hu(0)\,{\rm d} x\\
&=\int_{\Omega}u(0)\,{\rm d} x =\int_{\Omega}u(t_n)\,{\rm d} x =\int_{\Omega}P_hu(t_n)\,{\rm d} x.
\end{align*}
Hence, $P_hu(t_n)-u_h^n \in \mathring{S}^h_E$.
\end{proof}

\subsection{Generalized coercivity result, piecewise $L^\infty(H^{-1})$ and $L^2(H^1)$ error estimates}
We first cite the generalized coercivity result and the piecewise $L^{\infty}(H^{-1})$ and $L^2(H^1)$ error estimates established in \cite{li2017error}.
\begin{theorem}[Generalized coercivity] \label{thm3.7_add}
Suppose there exists a positive number $\gamma_3>0$ such that the solution $u$ of problem \eqref{eq20170504_1}--\eqref{eq20170504_5} and the elliptic operator $P_h$ satisfy
\begin{equation} \label{eq20180819_4}
\|u-P_h u\|_{L^{\infty}((0,T);L^{\infty})} \leq C_1 h \epsilon^{-\gamma_3}.
\end{equation}
Then there exists an $\epsilon$-independent and $h$-independent constant $C>0$ such that for $\epsilon\in(0,\epsilon_0)$, a.e. $t\in [0,T]$, and for any $\psi\in \mathring{S}_E^h$,
\begin{equation*}
(\epsilon-\epsilon^4)(\nabla\psi,\nabla\psi)_h+ \frac{1}{\epsilon}(f'(P_hu(t))\psi,\psi)_h\geq -C\|\nabla\Delta^{-1}\psi\|_{L^2}^2-C\epsilon^{-2\gamma_2-4}h^4,
\end{equation*}
provided that $h$ satisfies the constraint
\begin{align}\label{eq3.24b}
h &\leq (C_1C_2)^{-1}\epsilon^{\gamma_3+3},
\end{align}
where $\gamma_2 = 2\gamma_1 + \sigma_1 + 6$ and $C_2$ is determined by
$$
C_2:=\max_{|\xi|\le \|u\|_{L^\infty((0,T); L^\infty)}}|f{''}(\xi)|.
$$
\end{theorem}
\begin{remark}
Thanks to the Sobolev embedding theorem and \eqref{eq20180818_1}, we have
\begin{equation} \label{eq20180819_5}
\|u - P_hu\|_{0,\infty} \leq \|u - P_hu\|_{2,2,h} \leq Ch \epsilon^{-2}\rho_4^{\frac12}(\epsilon),
\end{equation}
which gives an explicit formula for $\gamma_3$ in \eqref{eq20180819_4}.
\end{remark}
\begin{theorem}[Piecewise $L^\infty(H^{-1})$ and $L^2(H^1)$ error estimates]\label{thm20171007_1}
Assume $u$ is the solution of \eqref{eq20170504_1}--\eqref{eq20170504_5} and $u_h^n$ is the numerical solution of scheme \eqref{eq20170504_11}--\eqref{eq20170504_12}. Under the mesh constraints in Theorem 3.15 of \cite{li2017error}, we have the following error estimate:
\begin{align*}
&\frac{1}{4}\|\nabla \widetilde\Delta_h^{-1}\theta^{\ell}\|_{0,2,h}^2 + \frac{k^2}{4}\sum_{n=1}^\ell\|\nabla \widetilde\Delta_h^{-1}d_t\theta^n\|_{0,2,h}^2 + \frac{\epsilon^4k}{16}\sum_{n=1}^\ell(\nabla\theta^n,\nabla\theta^n)_h\\
& \qquad + \frac{k}{\epsilon}\sum_{n=1}^\ell\|\theta^n\|_{0,4,h}^4\leq C(\tilde{\rho}_0(\epsilon) |\ln h| h^2 + \tilde{\rho}_1(\epsilon) k^2),
\end{align*}
where $\tilde{\rho}_0(\epsilon)$ and $\tilde{\rho}_1(\epsilon)$ are functions depending on $\frac1\epsilon$ polynomially, and $\widetilde{\Delta}_h^{-1}$ is a discrete inverse Laplace operator defined in \cite{li2017error}.
\end{theorem}

\subsection{$L^\infty(L^2)$ and piecewise $L^2(H^2)$ error estimates}
Based on Theorem \ref{thm20171007_1}, the $L^\infty(L^2)$ and piecewise $L^2(H^2)$ error estimates, which depend on $\frac{1}{\epsilon}$ polynomially instead of exponentially, are derived below. Notice that Theorem \ref{thm20171007_1} is used to circumvent the interpolation of $\|\cdot\|_{1,2,h}$ between $\|\cdot\|_{0,2,h}$ and $\|\cdot\|_{2,2,h}$, which would yield only an exponential dependence.
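Before stating the theorem, it may help to see the time discretization in isolation. The following Python sketch is a 1D periodic finite-difference analogue of scheme \eqref{eq20170504_11}; it is not the Morley method, and the nonlinear term is lagged so that each step is a single linear solve, but it exhibits the same discrete mass conservation exploited in Lemma \ref{lem20180409_1}. The grid size, $\epsilon$, time step and initial datum are all hypothetical choices:

```python
import numpy as np

N, L = 64, 2*np.pi
h = L / N
eps, k = 0.3, 1e-3

# Periodic discrete Laplacian: symmetric with zero row sums, so 1^T Lap = 0
I = np.eye(N)
Lap = (np.roll(I, 1, axis=0) + np.roll(I, -1, axis=0) - 2*I) / h**2

f = lambda u: u**3 - u          # f = F', F(u) = (u^2 - 1)^2 / 4

x = np.linspace(0.0, L, N, endpoint=False)
u = 0.1*np.cos(x)               # hypothetical initial datum
mass0 = u.sum()*h

# Backward Euler in the biharmonic part, lagged nonlinearity:
#   (I + k*eps*Lap^2) u^n = u^{n-1} + (k/eps) * Lap f(u^{n-1})
A = I + k*eps*(Lap @ Lap)
for n in range(50):
    u = np.linalg.solve(A, u + (k/eps)*(Lap @ f(u)))

# Since 1^T Lap = 0, the scheme conserves the discrete mass exactly,
# mirroring the mass-conservation identity of the continuous problem
assert abs(u.sum()*h - mass0) < 1e-8
```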
\begin{theorem}\label{thm20180611_add_1}
Assume $u$ is the solution of \eqref{eq20170504_1}--\eqref{eq20170504_5} and $u_h^n$ is the numerical solution of scheme \eqref{eq20170504_11}--\eqref{eq20170504_12}. Under the mesh constraints in Theorem 3.15 of \cite{li2017error} and \eqref{mesh_cond}, the following $L^\infty(L^2)$ and piecewise $L^2(H^2)$ error estimates hold:
\begin{align} \label{eq20180820_1}
&~\quad \|\theta^{\ell}\|_{0,2,\Omega}^2+k\sum_{n=1}^\ell\|d_t\theta^{n}\|_{0,2,\Omega}^2 + \epsilon k\sum_{n=1}^{\ell}a_h(\theta^n,\theta^n)\\
& \leq C\tilde{\rho}_2(\epsilon) |\ln h|^2 h^2 + C\tilde{\rho}_3(\epsilon) |\ln h| k^2, \notag
\end{align}
where
$$
\begin{aligned}
\tilde{\rho}_2(\epsilon) &:= \epsilon^{4}\rho_3(\epsilon) + \epsilon^{-2\sigma_1-6}\rho_4(\epsilon) + \rho_5(\epsilon) + \epsilon^{-5}\tilde{\rho}_0(\epsilon) + \epsilon^{-2\gamma_1 - 2\gamma_2 - 2}\tilde{\rho}_0(\epsilon), \\
\tilde{\rho}_3(\epsilon) &:= \rho_3(\epsilon) + \epsilon^{-5}\tilde{\rho}_1(\epsilon) + \epsilon^{-2\gamma_1 - 2\gamma_2 - 2}\tilde{\rho}_1(\epsilon).
\end{aligned}
$$
\end{theorem}
\begin{proof}
It follows from \eqref{eq20170504_11}, \eqref{eq20170504_14}, and \eqref{eq20170504_15} that for any $v_h\in S^h_E$,
\begin{align}\label{eq20180209_2}
&(d_t\theta^n,v_h)+\epsilon a_h(\theta^n,v_h)\\
=&~ [(d_tP_hu,v_h)+\epsilon a_h(P_hu,v_h)]-[(d_tu_h^n,v_h)+\epsilon a_h(u_h^n,v_h)]\notag\\
=& -(d_t\rho^n,v_h)+(u_t+\epsilon\Delta^2u-\frac{1}{\epsilon}\Delta f(u)+\alpha u,v_h)+(R^n(u_{tt}),v_h)\notag\\
&-\frac{1}{\epsilon}(f'(u)\nabla P_hu,\nabla v_h)_h-\alpha(P_hu,v_h)+\frac{1}{\epsilon}(\nabla f(u_h^{n}),\nabla v_h)_h\notag\\
=&~(-d_t\rho^n+\alpha\rho^n,v_h)-\frac{1}{\epsilon}(f'(u)\nabla P_hu-\nabla f(u_h^n),\nabla v_h)_h\notag\\
&+(R^n(u_{tt}),v_h), \notag
\end{align}
where the remainder
\begin{equation} \label{eq20180801_2}
R^n(u_{tt}):= \frac{u(t_n) - u(t_{n-1})}{k} - u_t(t_n) = -\frac{1}{k}\int^{t_n}_{t_{n-1}}(s-t_{n-1})u_{tt}(s)\,{\rm d} s.
\end{equation}
Choosing $v_h=\theta^n$ in \eqref{eq20180209_2}, multiplying both sides by $k$, and summing over $n$ from $1$ to $\ell$, we have
\begin{align} \label{eq20180801_1}
& ~\quad \frac{1}{2}\|\theta^\ell\|_{0,2}^2 + \frac{k}{2}\sum_{n=1}^\ell \|d_t \theta^n\|_{0,2}^2 + \epsilon k \sum_{n=1}^\ell a_h(\theta^n, \theta^n) \\
&= k\sum_{n=1}^\ell (-d_t\rho^n+\alpha\rho^n,\theta^n) - \frac{k}{\epsilon}\sum_{n=1}^\ell (f'(u)\nabla P_hu-\nabla f(u_h^n),\nabla \theta^n)_h\notag\\
&~\quad+k\sum_{n=1}^\ell(R^n(u_{tt}),\theta^n):= I_1 + I_2 + I_3.
\notag \end{align}

\noindent\underline{Estimate of $I_1$:} The first term on the right-hand side of \eqref{eq20180801_1} can be bounded by
\begin{align}\label{eq20180211_5} I_1 &= k\sum_{n=1}^\ell (-d_t\rho^n+\alpha\rho^n,\theta^n)\\ &\le Ck \sum_{n=1}^\ell \|d_t\rho^n\|_{0,2}^2 + Ck\sum_{n=1}^\ell \alpha^2 \|\rho^n\|_{0,2}^2 + Ck \sum_{n=1}^\ell \|\theta^n\|_{0,2}^2 \notag\\ &\le C (\epsilon^4\rho_3(\epsilon) + \epsilon^{-6}\rho_4(\epsilon)) h^2 + C\rho_5(\epsilon)|\ln h|h^2 +Ck\sum_{n=1}^\ell \|\theta^n\|_{0,2}^2,\notag \end{align}
where by \eqref{eq20180818_1} and \eqref{eq20180818_2}
\begin{align} k \sum_{n=1}^\ell \|d_t \rho^n\|_{0,2}^2 & = \frac{1}{k} \sum_{n=1}^\ell \Big\|\int_{t_{n-1}}^{t_n} \rho_t \,{\rm d} s\Big\|_{0,2}^2 \le \sum_{n=1}^\ell \int_{t_{n-1}}^{t_n} \|\rho_t \|_{0,2}^2 \,{\rm d} s \label{eq20180818_add1}\\ & \leq \int_0^T \|\rho_t\|_{0,2}^2 \,{\rm d} s \leq C\epsilon^4\rho_3(\epsilon)h^2 + C \rho_5(\epsilon)|\ln h|h^2, \notag \\ k \sum_{n=1}^\ell \alpha^2 \|\rho^n\|_{0,2}^2 & \le C \epsilon^{-6} \underset{1\leq n \leq \ell}{\mbox{\rm sup }} \|\rho^n\|_{0,2}^2 \leq C\epsilon^{-6}\rho_4(\epsilon)h^2. \label{eq20180818_add2} \end{align}

\noindent\underline{Estimate of $I_2$:} The second term on the right-hand side of \eqref{eq20180801_1} can be written as
\begin{align}\label{eq20180209_3} &-\frac{k}{\epsilon} \sum_{n=1}^\ell (f'(u)\nabla P_hu-\nabla f(u_h^n),\nabla\theta^n)_h\\ =&-\frac{k}{\epsilon}\sum_{n=1}^\ell (f'(u)\nabla P_hu-f'(P_hu)\nabla P_hu,\nabla\theta^n)_h \notag \\ &- \frac{k}{\epsilon}\sum_{n=1}^\ell(\nabla f(P_hu)- f'(P_hu)\nabla u_h^n,\nabla\theta^n)_h \notag\\ &-\frac{k}{\epsilon}\sum_{n=1}^\ell (f'(P_hu)\nabla u_h^n-\nabla f(u_h^n),\nabla\theta^n)_h := J_1 + J_2 + J_3.
\notag \end{align}
By \eqref{eq2.5}, \eqref{eq20180818_1} and mesh condition \eqref{mesh_cond}, we have
$$ \|\nabla P_hu\|_{0,2}^2 \leq \|\nabla u\|_{0,2}^2 + C \leq C\epsilon^{-2\sigma_1 - 1}. $$
Then, using \eqref{eq20180819_5} and the piecewise $L^2(H^1)$ error estimate given in Theorem \ref{thm20171007_1}, the first term on the right-hand side of \eqref{eq20180209_3} can be bounded as follows:
\begin{align} J_1 &= -\frac{3k}{\epsilon}\sum_{n=1}^\ell (\rho^n(u+P_hu)\nabla P_hu, \nabla\theta^n)_h \label{eq20180211_5_add}\\ & \leq \frac{Ck}{\epsilon}\sum_{n=1}^\ell \|u+P_hu\|_{0,\infty}^2 \|\rho^n\|_{0,\infty}^2 \|\nabla P_h u\|_{0,2}^2 + \frac{Ck}{\epsilon}\sum_{n=1}^\ell (\nabla \theta^n, \nabla \theta^n)_h \notag \\ & \leq C \epsilon^{-2\sigma_1-6}\rho_4(\epsilon) h^2 + C\epsilon^{-5}\tilde{\rho}_0(\epsilon){|\ln h|}h^2 + C\epsilon^{-5}\tilde{\rho}_1(\epsilon)k^2. \notag \end{align}
Again, thanks to the piecewise $L^2(H^1)$ error estimate given in Theorem \ref{thm20171007_1}, the second term on the right-hand side of \eqref{eq20180209_3} can be written as
\begin{align}\label{eq20180209_4} J_2 & = -\frac{k}{\epsilon}\sum_{n=1}^\ell (f'(P_hu)\nabla \theta^n,\nabla\theta^n)_h \leq \frac{Ck}{\epsilon}\sum_{n=1}^\ell (\nabla \theta^n, \nabla \theta^n)_h \\ &\leq C\epsilon^{-5}\tilde{\rho}_0(\epsilon){|\ln h|}h^2 + C\epsilon^{-5}\tilde{\rho}_1(\epsilon)k^2. \notag \end{align}
By the discrete Sobolev inequality and Theorem 3.14 in \cite{li2017error}, we have for any $n$,
\begin{align}\label{eq20180212_1} \|u_h^n\|_{1,\infty,h} \le C|\ln h|^{\frac12}\|u_h^n\|_{2,2,h} \le C\epsilon^{-\gamma_2}|\ln h|^{\frac12}.
\end{align}
Then, the third term on the right-hand side of \eqref{eq20180209_3} can be bounded by
\begin{align}\label{eq20180213_1} J_3 & = -\frac{3k}{\epsilon} \sum_{n=1}^\ell (\theta^n(P_hu + u_h^n) \nabla u_h^n,\nabla\theta^n)_h \\ & \leq Ck \sum_{n=1}^\ell \|\theta^n\|_{0,2}^2 + \frac{Ck}{\epsilon^2} \sum_{n=1}^\ell \|P_hu + u_h^n\|_{0,\infty}^2 \|u_h^n\|_{1,\infty,h}^2 \|\nabla \theta^n\|_{0,2}^2 \notag \\ & \leq Ck \sum_{n=1}^\ell \|\theta^n\|_{0,2}^2 + C\epsilon^{-2\gamma_1 - 2\gamma_2 - 2} |\ln h| k\sum_{n=1}^\ell \|\nabla \theta^n\|_{0,2}^2 \notag \\ & \leq Ck \sum_{n=1}^\ell \|\theta^n\|_{0,2}^2 + C\epsilon^{-2\gamma_1 - 2\gamma_2 - 2} (\tilde{\rho}_0(\epsilon){ |\ln h|^2}h^2 + \tilde{\rho}_1(\epsilon)|\ln h|k^2 ).\notag \end{align}

\noindent\underline{Estimate of $I_3$:} The third term on the right-hand side of \eqref{eq20180801_1} can be bounded by
\begin{align}\label{eq20180517_12} I_3 = k\sum_{n=1}^\ell (R^n(u_{tt}),\theta^n) & \le Ck \sum_{n=1}^\ell \|R^n(u_{tt})\|_{0,2}^2 + Ck\sum_{n=1}^\ell \|\theta^n\|_{0,2}^2 \\ & \le C\rho_3(\epsilon)k^2 + Ck\sum_{n=1}^\ell \|\theta^n\|_{0,2}^2, \notag \end{align}
where by \eqref{eq20180606_2} and \eqref{eq20180801_2},
\begin{align} k\sum_{n=1}^{\ell}\|R^n(u_{tt})\|_{0,2}^2 &\leq \frac{1}{k}\sum_{n=1}^{\ell} \Bigl(\int^{t_n}_{t_{n-1}}(s-t_{n-1})^2\,{\rm d} s\Bigr) \Bigl(\int^{t_n}_{t_{n-1}}\|u_{tt}(s)\|_{0,2}^2\,{\rm d} s\Bigr)\label{eq20180606_11}\\ &\leq C\rho_3(\epsilon)k^2.\notag \end{align}

\noindent{\underline{$L^\infty(L^2)$ and piecewise $L^2(H^2)$ error estimates:}} Substituting \eqref{eq20180211_5}, \eqref{eq20180211_5_add}, \eqref{eq20180209_4}, \eqref{eq20180213_1} and \eqref{eq20180517_12} into \eqref{eq20180801_1}, we have
\begin{align}\label{eq20180209_4_add} & \quad ~ \frac{1}{2} \|\theta^{\ell}\|_{0,2}^2 + \frac{k}{2} \sum_{n=1}^\ell\|d_t\theta^{n}\|_{0,2}^2 + \epsilon k\sum_{n=1}^{\ell}a_h(\theta^n,\theta^n) \\ &\le Ck\sum_{n=1}^{\ell} \|\theta^n\|_{0,2}^2 \notag \\
&~~~ + C(\epsilon^{4}\rho_3(\epsilon) + \epsilon^{-2\sigma_1-6}\rho_4(\epsilon) ) h^2 \notag\\ &~~~ + C(\rho_5(\epsilon) + \epsilon^{-5}\tilde{\rho}_0(\epsilon))|\ln h| h^2 + C\epsilon^{-2\gamma_1 - 2\gamma_2 - 2}\tilde{\rho}_0(\epsilon)|\ln h|^2h^2 \notag \\ &~~~ + C(\rho_3(\epsilon) + \epsilon^{-5}\tilde{\rho}_1(\epsilon))k^2 + C\epsilon^{-2\gamma_1 - 2\gamma_2 - 2}\tilde{\rho}_1(\epsilon)|\ln h|k^2. \notag \end{align}
The desired estimate \eqref{eq20180820_1} then follows from Gronwall's inequality. \end{proof}

\subsection{Piecewise $L^\infty(H^2)$ and $L^\infty(L^\infty)$ error estimates} In this subsection, we derive the $\|\theta^\ell\|_{2,2,h}^2$ estimate by using summation by parts in time and integration by parts in space, together with the special properties of the Morley element. The $\|\theta^\ell\|_{2,2,h}^2$ estimate below is ``almost'' optimal with respect to both time and space.

\begin{theorem}\label{thm20180214_4} Assume $u$ is the solution of \eqref{eq20170504_1}--\eqref{eq20170504_5} and $u_h^n$ is the numerical solution of scheme \eqref{eq20170504_11}--\eqref{eq20170504_12}.
Under the mesh constraints in Theorem 3.15 in \cite{li2017error} and \eqref{mesh_cond}, the following piecewise $L^\infty(H^2)$ error estimate holds:
\begin{align} &\quad ~ k\sum_{n=1}^{\ell}\|d_t\theta^n\|_{0,2}^2 +\epsilon k^2 \sum_{n=1}^{\ell}a_h(d_t\theta^n, d_t\theta^n) +\epsilon\|\theta^\ell\|_{2,2,h}^2 \label{eq20180821_1}\\ &\le C\tilde{\rho}_4(\epsilon) |\ln h|^2 h^2 + C\tilde{\rho}_5(\epsilon)|\ln h| k^2, \notag \end{align}
where
\begin{align*} \tilde{\rho}_4(\epsilon)& = \epsilon^{-2\sigma_1 - 1}\rho_3(\epsilon) + \epsilon^{-4}\rho_0(\epsilon)\rho_4(\epsilon) + \epsilon^{-2\sigma_1 - 5}\rho_5(\epsilon) \\ &~~~+ \Big(\epsilon^{-4\gamma_1-3} + \epsilon^{-4\gamma_2-2} + \epsilon^{-\max\{2\sigma_1+5,2\sigma_3+2\}-2} \notag \\ &\qquad~ + \epsilon^{2\gamma_1-\max\{ 2\sigma_1 + \frac{13}{2}, 2\sigma_{3} + \frac72, 2\sigma_2+4, 2\sigma_4 \} - 1} \Big) \tilde{\rho}_2(\epsilon), \\ \tilde{\rho}_5(\epsilon)& = \Big(\epsilon^{-4\gamma_1-3} + \epsilon^{-4\gamma_2-2} + \epsilon^{-\max\{2\sigma_1+5,2\sigma_3+2\}-2} \notag \\ &\qquad~ + \epsilon^{2\gamma_1-\max\{ 2\sigma_1 + \frac{13}{2}, 2\sigma_{3} + \frac72, 2\sigma_2+4, 2\sigma_4 \} - 1} \Big) \tilde{\rho}_3(\epsilon). \end{align*} \end{theorem}

\begin{proof} Choosing $v_h = \theta^n - \theta^{n-1} = kd_t \theta^n$ in \eqref{eq20180209_2} and summing over $n$ from $1$ to $\ell$, we get
\begin{align}\label{eq20180214_2} &\quad ~ k \sum_{n=1}^\ell \|d_t\theta^n\|_{0,2}^2 + \frac{\epsilon}{2} a_h(\theta^\ell, \theta^\ell) + \frac{\epsilon k^2}{2} \sum_{n=1}^\ell a_h(d_t\theta^n,d_t\theta^n)\\ &= k\sum_{n=1}^\ell (-d_t\rho^n+\alpha\rho^n,d_t\theta^n) -\frac{k}{\epsilon}\sum_{n=1}^\ell (f'(u)\nabla P_hu-\nabla f(u_h^n),\nabla(d_t\theta^n))_h\notag\\ &\quad + k \sum_{n=1}^\ell (R^n(u_{tt}),d_t\theta^n) := I_1 + I_2 + I_3.
\notag \end{align}
Here we use the fact that
\begin{align*} \epsilon a_h(\theta^n,\theta^n-\theta^{n-1}) = \frac{\epsilon k^2}{2}a_h(d_t \theta^n, d_t \theta^n) +\frac{\epsilon}{2}a_h(\theta^n,\theta^n)-\frac{\epsilon}{2}a_h(\theta^{n-1},\theta^{n-1}). \end{align*}

\noindent\underline{Estimates of $I_1$ and $I_3$}: Similarly to \eqref{eq20180211_5}, using \eqref{eq20180818_add1} and \eqref{eq20180818_add2}, we have
\begin{align} I_1 &\le Ck \sum_{n=1}^\ell \|d_t\rho^n\|_{0,2}^2 + Ck\sum_{n=1}^\ell \alpha^2 \|\rho^n\|_{0,2}^2 + \frac{k}{8} \sum_{n=1}^\ell \|d_t \theta^n\|_{0,2}^2 \label{eq20180820_2}\\ &\le C (\epsilon^4\rho_3(\epsilon) + \epsilon^{-6}\rho_4(\epsilon)) h^2 + C\rho_5(\epsilon)|\ln h|h^2 + \frac{k}{8}\sum_{n=1}^\ell \|d_t \theta^n\|_{0,2}^2.\notag \end{align}
From \eqref{eq20180517_12} and \eqref{eq20180606_11}, we also obtain the following estimate of $I_3$:
\begin{align}\label{eq20180820_3} I_3 = k\sum_{n=1}^\ell (R^n(u_{tt}),d_t\theta^n) & \le Ck \sum_{n=1}^\ell \|R^n(u_{tt})\|_{0,2}^2 + \frac{k}{8}\sum_{n=1}^\ell \|d_t\theta^n\|_{0,2}^2 \\ & \le C\rho_3(\epsilon)k^2 + \frac{k}{8} \sum_{n=1}^\ell \|d_t \theta^n\|_{0,2}^2. \notag \end{align}

\noindent\underline{Estimate of $I_2$}: Next we bound the more complicated term $I_2$.
Using integration by parts, we have
\begin{align} \label{eq20180822_1} I_2 &= -\frac{k}{\epsilon}\sum_{n=1}^\ell (f'(u)\nabla P_hu - \nabla f(P_hu), d_t\nabla \theta^n)_h - \frac{k}{\epsilon}\sum_{n=1}^\ell (\nabla( f(P_hu) - f(u_h^n)), d_t\nabla \theta^n)_h \\ &= -\frac{k}{\epsilon}\sum_{n=1}^\ell (f'(u)\nabla P_hu - \nabla f(P_hu), d_t\nabla \theta^n)_h + \frac{k}{\epsilon}\sum_{n=1}^\ell (f(P_hu) - f(u_h^n), d_t \Delta \theta^n)_h \notag \\ &~~~ - \frac{k}{\epsilon}\sum_{n=1}^\ell \sum_{E \in \mathcal{E}_h} (\{f(P_hu) - f(u_h^n)\}, d_t \llbracket\nabla \theta^n\rrbracket )_E \notag \\ &~~~ - \frac{k}{\epsilon}\sum_{n=1}^\ell \sum_{E\in \mathcal{E}_h} (\llbracket f(P_hu) - f(u_h^n)\rrbracket, \{\nabla d_t\theta^n\})_E := J_1 + J_2 + J_3 + J_4. \notag \end{align}
Here we adopt the standard DG notation and use the DG identity; see \cite[Eq.~(3.3)]{arnold2002unified}. Next we bound $J_1$ to $J_4$ respectively.

\paragraph{$\bullet$ Estimate of $J_1$} Using summation by parts in Lemma \ref{lem20180515_1}, we have
\begin{align} \label{eq20180822_5} J_1 &= \frac{k}{\epsilon}\sum_{n=1}^\ell(d_t (\rho(u+P_hu)\nabla P_hu), \nabla\theta^{n-1})_h - \frac{1}{\epsilon}(\rho^\ell(u^\ell + P_hu^\ell)\nabla P_hu^\ell, \nabla \theta^\ell )_h.
\end{align}
Thanks to \eqref{eq2.5}, \eqref{eq2.15_add}, \eqref{eq2.18}, \eqref{eq20180818_1}, \eqref{eq20180818_2}, \eqref{eq20180819_1}, and the piecewise $L^2(H^1)$ estimate in Theorem \ref{thm20171007_1}, the first term on the right-hand side of \eqref{eq20180822_5} can be bounded by
\begin{align} \label{eq20180822_2} & \quad ~ \frac{k}{\epsilon}\sum_{n=1}^\ell(d_t (\rho(u+P_hu)\nabla P_hu), \nabla\theta^{n-1})_h \\ & \leq \frac{1}{k} \sum_{n=1}^\ell \Big\|\int_{t_{n-1}}^{t_n} (\rho(u+P_hu)\nabla P_hu)_t \,{\rm d} s\Big\|_{0,2}^2 + C\epsilon^{-2} k \sum_{n=1}^\ell |\theta^{n-1}|_{1,2,h}^2 \notag \\ & \leq \underset{t\in [0,T]}{\mbox{\rm ess sup }}\|\nabla P_h u\|_{0,2}^2 \int_0^T\|\rho_t\|_{0,\infty}^2 \,{\rm d} s + \underset{t\in [0,T]}{\mbox{\rm ess sup }}\|\rho\|_{0,\infty}^2 \int_0^T \|\nabla(P_hu)_t\|_{0,2}^2 \,{\rm d} s \notag \\ &~~~ + \underset{t\in [0,T]}{\mbox{\rm ess sup }} \|\rho\|_{0,\infty}^2 \|\nabla P_hu \|_{0,2}^2 \int_0^T \|u_t + (P_hu)_t\|_{0,\infty}^2 \,{\rm d} s + C\epsilon^{-2} k \sum_{n=1}^\ell |\theta^{n-1}|_{1,2,h}^2 \notag \\ & \leq C\epsilon^{-2\sigma_1 - 1}(\rho_3(\epsilon) + \epsilon^{-4}\rho_5(\epsilon)|\ln h|)h^2 + C \epsilon^{-4}\rho_0(\epsilon)\rho_4(\epsilon)h^2 \notag \\ &~~~ + C\epsilon^{-2\sigma_1 - 6 -\max\{2\sigma_1 + \frac{13}{2}, 2\sigma_3+ \frac72, 2\sigma_2 + 4, 2\sigma_4\}}\rho_4(\epsilon) h^2 \notag \\ &~~~ + C\epsilon^{-6}\tilde{\rho}_0(\epsilon)|\ln h|h^2 + C\epsilon^{-6}\tilde{\rho}_1(\epsilon)k^2.
\notag \end{align}
Thanks to \eqref{eq2.5}, \eqref{eq20180818_1} and the $L^\infty(L^2)$ estimate in Theorem \ref{thm20180611_add_1}, the second term on the right-hand side of \eqref{eq20180822_5} can be bounded by
\begin{align} \label{eq20180822_3} &\quad ~ - \frac{1}{\epsilon}(\rho^\ell(u^\ell + P_hu^\ell)\nabla P_hu^\ell, \nabla \theta^\ell )_h \\ & \leq C \epsilon^{-2}\|\rho^\ell\|_{0,\infty}^2 |P_hu^\ell|_{1,2,h}^2 + C\epsilon^{-1}\|\theta^\ell\|_{0,2}^2 + \frac{\epsilon}{8}a_h(\theta^\ell, \theta^\ell) \notag \\ & \leq \notag C\epsilon^{-2\sigma_1 - 7}\rho_4(\epsilon) h^2 + C\epsilon^{-1}\tilde{\rho}_2(\epsilon)|\ln h|^2 h^2 + C\epsilon^{-1}\tilde{\rho}_3(\epsilon)|\ln h|k^2 + \frac{\epsilon}{8}a_h(\theta^\ell, \theta^\ell). \end{align}
Combining \eqref{eq20180822_2} and \eqref{eq20180822_3} and simplifying the coefficients according to the definitions of $\rho_i(\epsilon)$ and $\tilde{\rho}_i(\epsilon)$, we obtain the bound for $J_1$:
\begin{align} \label{eq20180822_4} J_1 &\leq C(\epsilon^{-2\sigma_1 - 1}\rho_3(\epsilon) + \epsilon^{-4}\rho_0(\epsilon)\rho_4(\epsilon) + \epsilon^{-2\sigma_1-5}\rho_5(\epsilon) + \epsilon^{-1}\tilde{\rho}_2(\epsilon) ) |\ln h|^2 h^2 \\ &~~~ + C\epsilon^{-1}\tilde{\rho}_3(\epsilon)|\ln h|k^2 + \frac{\epsilon}{8}a_h(\theta^\ell, \theta^\ell). \notag \end{align}

\paragraph{$\bullet$ Estimate of $J_2$} Define $f(P_hu) - f(u_h^n) := M^n\theta^n$, where $M^n$ is given by
$$ M^n := (P_hu(t_n))^2 + P_hu(t_n)u_h^n + (u_h^n)^2 - 1. $$
Using summation by parts in Lemma \ref{lem20180515_1}, we have
\begin{align} \label{eq20180822_6} J_2 &= -\frac{k}{\epsilon}\sum_{n=1}^\ell (d_t (M^n\theta^n), \Delta \theta^{n-1})_h + \frac{1}{\epsilon}( M^\ell \theta^\ell, \Delta \theta^\ell)_h \\ & \leq \frac{Ck}{\epsilon} \sum_{n=1}^\ell \|d_t(M^n\theta^n)\|_{0,2} |\theta^{n-1}|_{2,2,h} + \frac{C}{\epsilon}\|M^\ell\theta^\ell\|_{0,2} |\theta^\ell|_{2,2,h}.
\notag \end{align}
Since $d_t u_h^n = d_t(P_hu^n) - d_t\theta^n$, a direct calculation shows that
\begin{align*} d_t(M^n \theta^n) &= \theta^n d_t M^n + M^{n-1}d_t \theta^n \\ & = M^{n-1}d_t \theta^n + \theta^n(P_h u^{n} + P_h u^{n-1}) d_t(P_hu^n) \\ &~~~ + \theta^n u_h^n d_t(P_hu^n) + \theta^n P_h u^{n-1}d_t(P_hu^n) - \theta^n P_hu^{n-1} d_t \theta^n \\ &~~~ + \theta^n(u_h^n+u_h^{n-1})d_t(P_hu^n) - \theta^n(u_h^n + u_h^{n-1})d_t\theta^n \\ & = (M^{n-1} -\theta^nP_h u^{n-1} - \theta^n(u_h^n + u_h^{n-1}))d_t\theta^n \\ &~~~ + (P_hu^n + 2P_hu^{n-1} + 2u_h^n + u_h^{n-1})\theta^n d_t(P_hu^n). \end{align*}
Using the error estimate \eqref{eq20180820_1} and the $L^\infty$ bound of $u_h^n$, we get
\begin{align} \label{eq20180822_7} & \quad ~\frac{Ck}{\epsilon}\sum_{n=1}^\ell \|d_t(M^n\theta^n)\|_{0,2}|\theta^{n-1}|_{2,2,h} \\ & \leq C\epsilon^{-2\gamma_1 - 1} k\sum_{n=1}^\ell \|d_t\theta^n\|_{0,2}|\theta^{n-1}|_{2,2,h} + C\epsilon^{-\gamma_1 - 1} k\sum_{n=1}^\ell \|\theta^n d_t(P_h u^n)\|_{0,2}|\theta^{n-1}|_{2,2,h} \notag \\ & \leq \frac{k}{8}\sum_{n=1}^\ell \|d_t \theta^n\|_{0,2}^2 + C\epsilon^{-4\gamma_1 - 2} k\sum_{n=1}^\ell |\theta^{n-1}|_{2,2,h}^2 + C\epsilon^{2\gamma_1}k\sum_{n=1}^\ell \|\theta^n d_t(P_h u^n)\|_{0,2}^2 \notag \\ & \leq \frac{k}{8}\sum_{n=1}^\ell \|d_t \theta^n\|_{0,2}^2 + C\epsilon^{-4\gamma_1 - 3}(\tilde{\rho}_2(\epsilon)|\ln h|^2h^2 + \tilde{\rho}_3(\epsilon)|\ln h|k^2) \notag \\ &~~~ + C \epsilon^{2\gamma_1-\max\{ 2\sigma_1 + \frac{13}{2}, 2\sigma_3+\frac72, 2\sigma_2+4, 2\sigma_4 \}-1}(\tilde{\rho}_2(\epsilon)|\ln h|^2 h^2 + \tilde{\rho}_3(\epsilon)|\ln h| k^2), \notag \end{align}
where by \eqref{eq2.15_add} and the $L^\infty(L^2)$ error estimate \eqref{eq20180820_1},
$$ \begin{aligned} &\quad ~k\sum_{n=1}^\ell \|\theta^n d_t(P_h u^n)\|_{0,2}^2 \\ &\leq \underset{1\leq n \leq \ell}{\mbox{\rm sup }}\|\theta^n\|_{0,2}^2\, \sum_{n=1}^\ell \frac{1}{k} \Big\|\int_{t_{n-1}}^{t_n} (P_hu)_t \,{\rm d} s\Big\|_{0,\infty}^2 \\ & \leq 
\underset{1\leq n \leq \ell}{\mbox{\rm sup }}\|\theta^n\|_{0,2}^2 \int_{0}^{T} \|(P_hu)_t\|_{0,\infty}^2 \,{\rm d} s \\ & \leq C \epsilon^{-\max\{ 2\sigma_1 + \frac{13}{2}, 2\sigma_3+\frac72, 2\sigma_2+4, 2\sigma_4 \}-1}(\tilde{\rho}_2(\epsilon)|\ln h|^2 h^2 + \tilde{\rho}_3(\epsilon)|\ln h| k^2). \end{aligned} $$
The second term on the right-hand side of \eqref{eq20180822_6} can be bounded by
\begin{align} \label{eq20180822_8} \frac{C}{\epsilon} \|M^\ell\theta^\ell\|_{0,2}|\theta^\ell|_{2,2,h} &\leq C\epsilon^{-4\gamma_1-3}\|\theta^\ell\|_{0,2}^2 + \frac{\epsilon}{8}a_h(\theta^\ell, \theta^\ell) \\ &\leq C\epsilon^{-4\gamma_1 - 3}(\tilde{\rho}_2(\epsilon)|\ln h|^2 h^2 + \tilde{\rho}_3(\epsilon)|\ln h|k^2) + \frac{\epsilon}{8}a_h(\theta^\ell, \theta^\ell). \notag \end{align}
Combining \eqref{eq20180822_7} and \eqref{eq20180822_8}, we obtain the bound for $J_2$:
\begin{align} \label{eq20180822_9} J_2 &\leq \frac{k}{8}\sum_{n=1}^\ell \|d_t \theta^n\|_{0,2}^2 + \frac{\epsilon}{8}a_h(\theta^\ell, \theta^\ell) + C\epsilon^{-4\gamma_1 - 3}(\tilde{\rho}_2(\epsilon)|\ln h|^2 h^2 + \tilde{\rho}_3(\epsilon)|\ln h|k^2) \\ &~~~ + C \epsilon^{2\gamma_1-\max\{ 2\sigma_1 + \frac{13}{2}, 2\sigma_3+\frac72, 2\sigma_2+4, 2\sigma_4 \}-1}(\tilde{\rho}_2(\epsilon)|\ln h|^2 h^2 + \tilde{\rho}_3(\epsilon)|\ln h| k^2). \notag \end{align}

\paragraph{$\bullet$ Estimate of $J_3$} Notice that $\theta^n \in S_E^h$ and
$$ \int_{E} \llbracket\nabla \theta^n\rrbracket\,{\rm d} S = 0 \qquad \forall E \in \mathcal{E}_h.
$$
Using summation by parts in Lemma \ref{lem20180515_1}, Lemma 2.2 in \cite{elliott1989nonconforming}, and the inverse inequality, we have
\begin{align*} J_3 &= \frac{k}{\epsilon}\sum_{n=1}^\ell\sum_{E\in \mathcal{E}_h} (d_t \{M^n\theta^n\}, \llbracket\nabla \theta^{n-1}\rrbracket)_E - \frac{1}{\epsilon}\sum_{E\in \mathcal E_h}(\{M^\ell \theta^{\ell}\}, \llbracket\nabla \theta^{\ell}\rrbracket)_E \\ & \leq \frac{Ck}{\epsilon} \sum_{n=1}^\ell \|d_t(M^n\theta^n)\|_{0,2} |\theta^{n-1}|_{2,2,h} + \frac{C}{\epsilon}\|M^\ell\theta^\ell\|_{0,2} |\theta^\ell|_{2,2,h}. \end{align*}
Hence, $J_3$ has the same bound as $J_2$.

\paragraph{$\bullet$ Estimate of $J_4$} Since $P_hu$ and $u_h$ are continuous at the vertices of $\mathcal{T}_h$, thanks to Lemma 2.6 in \cite{elliott1989nonconforming}, we have
\begin{align} \label{eq20180823_2} J_4 & \leq \frac{Ck}{\epsilon} \sum_{n=1}^\ell h|M^n\theta^n|_{2,2,h} |d_t\theta^n|_{1,2,h}\\ &\leq \frac{Ck}{\epsilon} \sum_{n=1}^\ell |M^n\theta^n|_{2,2,h} \|d_t\theta^n\|_{0,2} \notag\\ & \leq \frac{Ck}{\epsilon^2} \sum_{n=1}^\ell |M^n\theta^n|_{2,2,h}^2 + \frac{k}{8}\sum_{n=1}^\ell \|d_t \theta^n\|_{0,2}^2. \notag
\end{align}
Using the piecewise $L^2(H^2)$ estimate \eqref{eq20180820_1} in Theorem \ref{thm20180611_add_1}, we have
\begin{align} \label{eq20180823_1} & \quad ~\frac{Ck}{\epsilon^2} \sum_{n=1}^\ell |M^n\theta^n|_{2,2,h}^2 \\ & \leq \frac{Ck}{\epsilon^2} \sum_{n=1}^\ell \left( \|M^n\|_{0,\infty}^2 |\theta^n|_{2,2,h}^2 + |M^n|_{1,4,h}^2|\theta^n|_{1,4,h}^2 + |M^n|_{2,2,h}^2 \|\theta^n\|_{0,\infty}^2 \right) \notag \\ & \leq \frac{C}{\epsilon^2}\underset{1\leq n \leq \ell}{\mbox{\rm sup }} \|M^n\|_{2,2,h}^2\, k\sum_{n=1}^\ell \|\theta^n\|_{2,2,h}^2 \notag \\ & \leq C(\epsilon^{-4\gamma_2-2} + \epsilon^{-\max\{2\sigma_1+5, 2\sigma_3+2\}-2})(\tilde{\rho}_2(\epsilon)|\ln h|^2 h^2 + \tilde{\rho}_3(\epsilon)|\ln h|k^2),\notag \end{align}
where by \eqref{eq2.13_add} and the fact that $\|u_h^n\|_{2,2,h} \leq C\epsilon^{-\gamma_2}$ (cf. \cite[Theorem 3.14]{li2017error}),
$$ \begin{aligned} \|M^n\|_{2,2,h} &\leq C (\|(P_hu^n)^2\|_{2,2,h} + \|u_h^n P_hu^n\|_{2,2,h} + \|(u_h^n)^2\|_{2,2,h}) \\ & \leq C(\|P_hu^n\|_{2,2,h} + \|P_hu^n\|_{1,4,h}^2 + \|u_h^n\|_{0,\infty}\|u_h^n\|_{2,2,h} + \|u_h^n\|_{1,4,h}^2\\ &~~~ + \|u_h^n\|_{2,2,h} + \|u_h^n\|_{0,\infty}\|P_hu^n\|_{2,2,h} + \|u_h^n\|_{1,4,h}\|P_hu^n\|_{1,4,h}) \\ & \leq C(\epsilon^{-2\gamma_2} + \epsilon^{-\max\{2\sigma_1+5, 2\sigma_3 + 2\}}).
\end{aligned} $$

\underline{Piecewise $L^\infty(H^2)$ error estimate}: Substituting \eqref{eq20180820_2}, \eqref{eq20180820_3}, \eqref{eq20180822_4}, \eqref{eq20180822_9} and \eqref{eq20180823_2} into \eqref{eq20180214_2}, we obtain
\begin{align} &\quad ~ \frac{k}{8} \sum_{n=1}^\ell \|d_t\theta^n\|_{0,2}^2 + \frac{\epsilon}{8} a_h(\theta^\ell, \theta^\ell) + \frac{\epsilon k^2}{2} \sum_{n=1}^\ell a_h(d_t\theta^n,d_t\theta^n) \\ & \leq C (\epsilon^4\rho_3(\epsilon) + \epsilon^{-6}\rho_4(\epsilon)) h^2 + C\rho_5(\epsilon)|\ln h|^2 h^2 + C\rho_3(\epsilon)k^2 \notag \\ &~~~ + C(\epsilon^{-2\sigma_1 - 1}\rho_3(\epsilon) + \epsilon^{-4}\rho_0(\epsilon)\rho_4(\epsilon) + \epsilon^{-2\sigma_1-5}\rho_5(\epsilon) + \epsilon^{-1}\tilde{\rho}_2(\epsilon) ) |\ln h|^2 h^2 \notag \\ &~~~ + C\epsilon^{-1}\tilde{\rho}_3(\epsilon)|\ln h|k^2 + C\epsilon^{-4\gamma_1 - 3}(\tilde{\rho}_2(\epsilon)|\ln h|^2 h^2 + \tilde{\rho}_3(\epsilon)|\ln h|k^2) \notag \\ &~~~ + C \epsilon^{-\max\{ 2\sigma_1 + \frac{13}{2}, 2\sigma_3+\frac72, 2\sigma_2+4, 2\sigma_4\}-1}(\tilde{\rho}_2(\epsilon)|\ln h|^2 h^2 + \tilde{\rho}_3(\epsilon)|\ln h| k^2) \notag \\ &~~~ + C(\epsilon^{-4\gamma_2-2} + \epsilon^{-\max\{2\sigma_1+5, 2\sigma_3+2\}-2})(\tilde{\rho}_2(\epsilon)|\ln h|^2 h^2 + \tilde{\rho}_3(\epsilon)|\ln h|k^2). \notag \end{align}
The theorem then follows by simplifying the coefficients according to the definitions of $\rho_i(\epsilon)$ and $\tilde{\rho}_i(\epsilon)$.
\end{proof}

\begin{remark} \label{rmk20180823_1} If the summation-by-parts (in time) and integration-by-parts (in space) techniques are not employed simultaneously, one can only obtain the coarser estimate
\begin{align*} &\quad ~\|\theta^\ell\|_{2,2,h}^2 + k\sum_{n=1}^{\ell}\|d_t\theta^n\|_{0,2}^2 +\epsilon k^2 \sum_{n=1}^{\ell}a_h(d_t\theta^n, d_t \theta^n) \\ &\le Ck^{-\frac12}(\epsilon^{-\gamma_4}|\ln h|^2h^2+\epsilon^{-\gamma_5}|\ln h|k), \end{align*}
where $\gamma_4, \gamma_5$ denote some positive constants. \end{remark}

Finally, using \eqref{eq20180819_5}, Theorem \ref{thm20180214_4} and the Sobolev embedding theorem, we can prove the desired $L^\infty(L^\infty)$ error estimate.

\begin{theorem}\label{thm20180611_add2} Assume $u$ is the solution of \eqref{eq20170504_1}--\eqref{eq20170504_5} and $u_h^n$ is the numerical solution of scheme \eqref{eq20170504_11}--\eqref{eq20170504_12}. Under the mesh constraints in Theorem 3.15 in \cite{li2017error} and \eqref{mesh_cond}, we have the $L^\infty(L^\infty)$ error estimate
\begin{align} \label{eq20180823_3} \|u(t_n) - u_h^n\|_{L^{\infty}}\le C|\ln h|^{\frac12}\big((\tilde{\rho}_4(\epsilon))^{\frac12}|\ln h|^{\frac12}h + (\tilde{\rho}_5(\epsilon))^{\frac12}k\big) \quad \forall 1 \leq n \leq \ell. \end{align} \end{theorem}

\begin{remark} The mesh constraints in Theorem 3.15 in \cite{li2017error} and \eqref{mesh_cond} can be satisfied by choosing $h = C\epsilon^{p_1}$ and $k = C\epsilon^{p_2}$ for certain positive $p_1, p_2$. Hence, the factor $|\ln h|\, k^2$ decreases asymptotically like $k^2$ as $\epsilon$ goes to zero. \end{remark}

\section{Convergence of the Numerical Interface}\label{sec5} In this section, we prove that the numerical interface, defined as the zero level set of the Morley element interpolation of the numerical solution $u_h^n$, converges to the moving interface of the Hele-Shaw problem under the assumption that the Hele-Shaw problem has a unique global (in time) classical solution.
We first cite the following convergence result established in \cite{alikakos1994convergence}.

\begin{theorem}\label{thm4.1} Let $\Omega$ be a given smooth domain and $\Gamma_{00}$ be a smooth closed hypersurface in $\Omega$. Suppose that the Hele-Shaw problem starting from $\Gamma_{00}$ has a unique smooth solution $\bigl(w,\Gamma:=\bigcup_{0\leq t\leq T}(\Gamma_t\times\{t\}) \bigr)$ in the time interval $[0,T]$ such that $\Gamma_t\subseteq\Omega$\ \,for all $t\in[0,T]$. Then there exists a family of smooth functions $\{u_{0}^{\epsilon}\}_{0<\epsilon\leq 1}$ which are uniformly bounded in $\epsilon\in(0,1]$ and $(x,t)\in \overline{\Omega}_T$, such that if $u^{\epsilon}$ solves the Cahn-Hilliard problem \eqref{eq20170504_1}--\eqref{eq20170504_3}, then
\begin{itemize} \item[\rm (i)] $\displaystyle{\lim_{\epsilon\rightarrow 0}} u^{\epsilon}(x,t)= \begin{cases} 1 &\qquad \mbox{if}\, (x,t)\in \mathcal{O}\\ -1 &\qquad \mbox{if}\, (x,t)\in \mathcal{I} \end{cases} \,\mbox{ uniformly on compact subsets}$, where $\mathcal{I}$ and $\mathcal{O}$ stand for the ``inside'' and ``outside'' of $\Gamma$;
\item[\rm (ii)] $\displaystyle{\lim_{\epsilon\rightarrow 0}} \bigl( \epsilon^{-1} f(u^{\epsilon})-\epsilon\Delta u^{\epsilon} \bigr)(x,t)=-w(x,t)$ uniformly on $\overline{\Omega}_T$. \end{itemize} \end{theorem}

We are now ready to state the first main theorem of this section.

\begin{theorem}\label{thm4.2} Let $\{\Gamma_t\}_{t\geq0}$ denote the zero level set of the Hele-Shaw problem, and let $U_{\epsilon,h,k}(x,t)$ denote the piecewise linear interpolation in time of the numerical solution $u_h^n$, namely,
\begin{align} U_{\epsilon,h,k}(x,t):=\frac{t-t_{n-1}}{k}u_h^{n}(x)+\frac{t_{n}-t}{k}u_h^{n-1}(x), \label{eq4.1} \end{align}
for $t_{n-1}\leq t\leq t_{n}$ and $1\leq n\leq M$.
Then, under the mesh and starting value constraints of Theorem \ref{thm20180214_4} and $k=O(h^q)$ with $0<q<1$, we have
\begin{itemize} \item[\rm (i)] $U_{\epsilon,h,k}(x,t) \stackrel{\epsilon\searrow 0}{\longrightarrow} 1$ uniformly on compact subsets of $\mathcal{O}$,
\item[\rm (ii)] $U_{\epsilon,h,k}(x,t) \stackrel{\epsilon\searrow 0}{\longrightarrow} -1$ uniformly on compact subsets of $\mathcal{I}$. \end{itemize} \end{theorem}

\begin{proof} For any compact set $A\subset\mathcal{O}$ and for any $(x,t)\in A$, we have
\begin{align} \label{eq4.4} |U_{\epsilon,h,k}-1|&\leq |U_{\epsilon,h,k}-u^{\epsilon}(x,t)|+|u^{\epsilon}(x,t)-1| \\ &\leq \|U_{\epsilon,h,k}-u^{\epsilon}\|_{L^{\infty}(\Omega_T)}+|u^{\epsilon}(x,t)-1|.\nonumber \end{align}
Theorem \ref{thm20180611_add2} implies that
\begin{equation}\label{eq4.5} \|U_{\epsilon,h,k}-u^{\epsilon}\|_{L^{\infty}(\Omega_T)}\leq C(\tilde{\rho}_6(\epsilon))^{\frac12}h^q|\ln h|, \end{equation}
where $\tilde{\rho}_6(\epsilon)=\max\{\tilde{\rho}_4(\epsilon),\tilde{\rho}_5(\epsilon)\}.$ The first term on the right-hand side of \eqref{eq4.4} tends to $0$ as $\epsilon\searrow 0$ (note that $h,k\searrow 0$, too). The second term converges uniformly to $0$ on the compact set $A$, which is ensured by (i) of Theorem \ref{thm4.1}. Hence, assertion (i) holds. To show (ii), we only need to replace $\mathcal{O}$ by $\mathcal{I}$ and $1$ by $-1$ in the above proof. \end{proof}

The second main theorem addresses the convergence of the numerical interfaces.

\begin{theorem}\label{thm4.3} Let $\Gamma_t^{\epsilon,h,k}:=\{x\in\Omega;\, U_{\epsilon,h,k}(x,t)=0\}$ be the zero level set of\ \,$U_{\epsilon,h,k}(x,t)$. Then, under the assumptions of Theorem \ref{thm4.2}, we have
\[ \sup_{x\in\Gamma_t^{\epsilon,h,k}} \mbox{\rm dist}(x,\Gamma_t) \stackrel{\epsilon\searrow 0}{\longrightarrow} 0 \quad\mbox{uniformly on $[0,T]$}.
\] \end{theorem}

\begin{proof} For any $\eta\in(0,1)$, define the tubular neighborhood $\mathcal{N}_{\eta}$ of width $2\eta$ of $\Gamma_t$:
\begin{equation}\label{eq4.8} \mathcal{N}_{\eta}:=\{(x,t)\in\Omega_T;\, \mbox{\rm dist}(x,\Gamma_t)<\eta\}. \end{equation}
Let $A$ and $B$ denote the complements of the neighborhood $\mathcal{N}_{\eta}$ in $\mathcal{O}$ and $\mathcal{I}$, respectively,
\begin{equation*} A=\mathcal{O}\setminus\mathcal{N}_{\eta} \qquad\mbox{and}\qquad B=\mathcal{I}\setminus\mathcal{N}_{\eta}. \end{equation*}
Note that $A$ is a compact subset outside $\Gamma_t$ and $B$ is a compact subset inside $\Gamma_t$. By Theorem \ref{thm4.2}, there exists ${\epsilon_1}>0$, which only depends on $\eta$, such that for any $\epsilon\in (0,{\epsilon_1})$
\begin{align} &|U_{\epsilon,h,k}(x,t)-1|\leq\eta\quad\forall(x,t)\in A,\label{eq4.9}\\ &|U_{\epsilon,h,k}(x,t)+1|\leq\eta\quad\forall(x,t)\in B.\label{eq4.10} \end{align}
Now for any $t\in[0,T]$ and $x\in \Gamma_t^{\epsilon,h,k}$, from $U_{\epsilon,h,k}(x,t)=0$ we have
\begin{align} &|U_{\epsilon,h,k}(x,t)-1|=1,\label{eq4.11}\\ &|U_{\epsilon,h,k}(x,t)+1|=1.\label{eq4.12} \end{align}
\eqref{eq4.9} and \eqref{eq4.11} imply that $(x,t)$ is not in $A$, and \eqref{eq4.10} and \eqref{eq4.12} imply that $(x,t)$ is not in $B$; hence $(x,t)$ must lie in the tubular neighborhood $\mathcal{N}_{\eta}$. Therefore, for any $\epsilon\in(0,\epsilon_1)$,
\begin{equation}\label{eq4.13} \sup_{x\in\Gamma_t^{\epsilon,h,k}} \mbox{\rm dist}(x,\Gamma_t) \leq\eta \qquad\mbox{uniformly on $[0,T]$}. \end{equation}
The proof is complete. \end{proof}

\section{Numerical experiments}\label{sec6} In this section, we present two two-dimensional numerical tests to gauge the performance of the proposed fully discrete Morley finite element method for the Cahn-Hilliard equation.
The square domain $\Omega = [-1,1]^2$ is used in both tests.

\paragraph{Test 1} Consider the Cahn-Hilliard problem with an elliptic initial interface given by $\Gamma_0: \frac{x^2}{0.36} + \frac{y^2}{0.04} = 1$. The initial condition is chosen to have the form $u_0(x,y) = \tanh\bigl(\frac{d_0(x, y)}{\sqrt{2\epsilon}}\bigr)$, where $d_0(x, y)$ denotes the signed distance from $(x,y)$ to the initial ellipse interface $\Gamma_0$ and $\tanh(t) = (e^t - e^{-t})/(e^t + e^{-t})$. Figure \ref{fig:ellipse} displays snapshots of the numerical interface at four fixed time points with four different $\epsilon$'s. Here the time step size $k = 1\times 10^{-4}$ and mesh size $h = 0.01$ are used. They clearly indicate that at each time point the numerical interface converges to the sharp interface $\Gamma_t$ of the Hele-Shaw flow as $\epsilon$ tends to zero. Note that this initial condition may not satisfy the General Assumption (GA) due to the singularity of the signed distance function. We will adopt a smooth initial condition in the second test.

\begin{figure} \caption{Test 1: Snapshots of the zero-level sets of the numerical solution at four fixed time points for four different $\epsilon$'s.} \label{fig:ellipse} \end{figure}

\paragraph{Test 2} Consider the following initial condition, which is also adopted in \cite{feng2008posteriori}:
$$ u_0(x,y) = \tanh\Big( ((x-0.3)^2 + y^2 - 0.25^2)/\epsilon \Big) \tanh\Big( ((x+0.3)^2 + y^2 - 0.3^2)/\epsilon \Big). $$
Tables \ref{tab:error1} and \ref{tab:error2} show the errors in the spatial $L^2$ norm and $H^1$, $H^2$ semi-norms, together with the rates of convergence, at $T = 0.0002$ and $T = 0.001$. $\epsilon = 0.08$ is used to generate the tables, and $k = 1\times 10^{-5}$ is chosen so that the error in time is small relative to the error in space. The $L^\infty(H^2)$ norm error is in agreement with the convergence theorem, but the $L^\infty(L^2)$ and $L^\infty(H^1)$ norm errors are one order higher than our theoretical results.
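Since the initial profile of Test 2 is given in closed form, it can be evaluated directly. The following NumPy sketch (the grid resolution and sample points are illustrative choices, not the ones used in our computations) evaluates $u_0$ on $\Omega = [-1,1]^2$ and checks its qualitative behavior: it takes values in $[-1,1]$, is close to $+1$ away from the two circles, is negative between them, and vanishes on the circles themselves.

```python
import numpy as np

def u0(x, y, eps=0.08):
    """Test 2 initial condition: a product of two tanh profiles whose
    zero-level sets are the circles of radius 0.25 centered at (0.3, 0)
    and radius 0.3 centered at (-0.3, 0)."""
    return (np.tanh(((x - 0.3)**2 + y**2 - 0.25**2) / eps)
            * np.tanh(((x + 0.3)**2 + y**2 - 0.3**2) / eps))

# Evaluate on a uniform grid over Omega = [-1, 1]^2 (illustrative resolution).
xs = np.linspace(-1.0, 1.0, 201)
X, Y = np.meshgrid(xs, xs)
U = u0(X, Y)

print(np.abs(U).max() <= 1.0)     # profile stays in [-1, 1]
print(u0(1.0, 1.0) > 0.99)        # far field: bulk phase value +1
print(u0(0.3, 0.0) < 0.0)         # negative at the center of the first circle
print(abs(u0(0.0, 0.0)) < 1e-12)  # (0, 0) lies on the second circle
```

The diffuse-interface width scales with $\epsilon$, which is why the zero-level sets of $u_0$ serve as the initial numerical interface.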
We note that second order convergence in both the $L^\infty(L^2)$ and $L^\infty(H^1)$ norms is proved in \cite{elliott1989nonconforming}, although only with a constant depending exponentially on $\frac{1}{\epsilon}$. \begin{table}[!htbp] \centering \footnotesize \begin{tabular}{|l||c|c||c|c||c|c|} \hline & $L^{\infty}(L^2)$ error & order & $L^{\infty}(H^1)$ error & order & $L^{\infty}(H^2)$ error & order \\ \hline $h=0.2\sqrt{2}$ & 0.079659 & --- & 1.761563 & --- & 34.097686 & --- \\ \hline $h=0.1\sqrt{2}$ & 0.023142 & 1.7833 & 0.642870 & 1.4543 & 21.604986 & 0.6583\\ \hline $h=0.05\sqrt{2}$ & 0.007598 & 1.6067 & 0.183600 & 1.8080& 11.783724 & 0.8746\\ \hline $h=0.025\sqrt{2}$ & 0.002151 & 1.8201 & 0.048042 & 1.9342& 6.045416 & 0.9629\\ \hline $h=0.0125\sqrt{2}$ & 0.000557 & 1.9501 & 0.012167 & 1.9813 & 3.042138 & 0.9908\\ \hline \end{tabular} \caption{Spatial errors and convergence rates of Test 2: $\epsilon = 0.08$, $k = 1\times 10^{-5}$, $T = 0.0002$.} \label{tab:error1} \end{table} \begin{table}[!htbp] \centering \footnotesize \begin{tabular}{|l||c|c||c|c||c|c|} \hline & $L^{\infty}(L^2)$ error & order & $L^{\infty}(H^1)$ error & order & $L^{\infty}(H^2)$ error & order \\ \hline $h=0.2\sqrt{2}$ & 0.137170 & --- & 2.469582 & --- & 43.008910 & --- \\ \hline $h=0.1\sqrt{2}$ & 0.032310 & 2.0859 & 0.710340 & 1.7977 & 23.320078 & 0.8831\\ \hline $h=0.05\sqrt{2}$ & 0.008830 & 1.8715 & 0.183932 & 1.9493 & 11.774451 & 0.9859\\ \hline $h=0.025\sqrt{2}$ & 0.002349 & 1.9103 & 0.046810 & 1.9743 & 5.927408 & 0.9902\\ \hline $h=0.0125\sqrt{2}$ & 0.000597 & 1.9746 & 0.011764 & 1.9924 & 2.970322 & 0.9968\\ \hline \end{tabular} \caption{Spatial errors and convergence rates of Test 2: $\epsilon = 0.08$, $k = 1\times 10^{-5}$, $T = 0.001$.} \label{tab:error2} \end{table} Figure \ref{fig:2circle} displays six snapshots at
six fixed time points of the numerical interface for four different $\epsilon$'s. Again, they clearly indicate that at each time point the numerical interface converges to the sharp interface $\Gamma_t$ of the Hele-Shaw flow as $\epsilon$ tends to zero. \begin{figure} \caption{Test 2: Snapshots of the zero-level sets of $u^{\epsilon, k}$.} \label{fig:2circle} \end{figure} \end{document}
\begin{document} \title*{A Generalized It$\hat {\rm o}$'s Formula in Two-Dimensions and Stochastic Lebesgue-Stieltjes Integrals} \titlerunning{Two-dimensional generalized It$\hat {\rm o}$ Formula } \author{Chunrong Feng\inst{1,2}, Huaizhong Zhao\inst{1}} \authorrunning{C. Feng and H. Zhao} \institute{ Department of Mathematical Sciences, Loughborough University, LE11 3TU, UK. \texttt{[email protected]}, \texttt{[email protected]} \and School of Mathematics and System Sciences, Shandong University, Jinan, Shandong Province, 250100, China} \maketitle \newcounter{bean} \begin{abstract} In this paper, a generalized It${\hat {\rm o}}$ formula for time-dependent functions of two-dimensional continuous semi-martingales is proved. The formula uses the local time of each coordinate process of the semi-martingale, the left first derivatives in space and time, and only the second derivative $\nabla _1^- \nabla _2^-f$, which are assumed to be of locally bounded variation in certain variables, together with stochastic Lebesgue-Stieltjes integrals of two parameters. The two-parameter integral is defined as a natural generalization of the It${\hat {\rm o}}$ integral and the Lebesgue-Stieltjes integral through a type of It${\hat {\rm o }}$ isometry formula. \vskip5pt Keywords: local time, continuous semi-martingale, generalized It$\hat {\rm o}$'s formula, stochastic Lebesgue-Stieltjes integral. \vskip5pt AMS 2000 subject classifications: 60H05, 60J55 \end{abstract} \renewcommand{\theequation}{\arabic{section}.\arabic{equation}} \section{Introduction} The classical It$\hat {\rm o}$'s formula for twice differentiable functions has been extended to less smooth functions by many mathematicians. Progress has been made mainly in one dimension, beginning with Tanaka's pioneering work \cite{tan} for $|X_t|$, to which the local time was beautifully linked.
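Tanaka's formula $|B_t|=\int_0^t {\rm sgn}(B_s)\,dB_s + L_t(0)$ can be illustrated numerically: on a discretized Brownian path, the difference between $|B_T|$ and the forward Euler sum for the stochastic integral is a pathwise nonnegative approximation of the local time at zero. The sketch below is our illustration only, not part of the paper.

```python
import numpy as np

rng = np.random.default_rng(42)
n, T = 100_000, 1.0
dB = rng.normal(0.0, np.sqrt(T / n), n)
B = np.concatenate([[0.0], np.cumsum(dB)])   # Brownian path on a uniform grid

# forward (Ito-type) Euler sum for int_0^T sgn(B_s) dB_s
ito_sum = np.sum(np.sign(B[:-1]) * dB)

# Tanaka residual: approximates the local time L_T(0),
# hence is nonnegative up to floating-point rounding
residual = abs(B[-1]) - ito_sum
print(float(residual))
```

Each step contributes $|B_{t_{i+1}}|-|B_{t_i}|-{\rm sgn}(B_{t_i})(B_{t_{i+1}}-B_{t_i})\geq 0$, which is why the residual is nonnegative path by path.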
Further extensions were made to time-independent convex functions in \cite{meyer} and \cite{wang}; to absolutely continuous functions with locally bounded first derivative in \cite{bou}; and to $W_{loc}^{1,2}$ functions of a Brownian motion in \cite{Protter} for one dimension and \cite{Protter2} for multi-dimensions. It was proved in \cite{Protter} that $f(B_t)=f(B_0)+\int_0^t f'(B_s)dB_s+{1\over 2}[f(B),B]_t$, where $[f(B),B]_t$ is the covariation of the processes $f(B)$ and $B$ and is equal to $\int_0^t f(B_s)d^*B_s-\int_0^t f(B_s)dB_s$ as a difference of backward and forward integrals. See \cite{rv} for the case of continuous semi-martingales. The multi-dimensional case was considered in \cite{Protter2}, \cite{rv} and \cite{mn}. An integral $\int_{-\infty}^\infty f^{\prime}(x){\rm d}_x L_t(x)$ was introduced in \cite{bou} through the existence of the expression $f(X(t))-f(X(0))-\int_0^t {\partial ^-\over \partial x} f(X(s))dX(s)$, where $L _t(x)$ is the local time of the semi-martingale $X_t$. This work was extended further to define $\int_0^t\int_{-\infty}^\infty {\partial \over \partial x} f(s,x)d_{s,x}L_s(x)$ for a time-dependent function $f(s,x)$, using forward and backward integrals for Brownian motion in \cite{eisenbaum1} and for semi-martingales other than Brownian motion in \cite{eisenbaum2}. This integral was also defined in \cite{rog} as a stochastic integral with excursion fields, and in \cite{Peskir1} through It$\hat {\rm o}$'s formula without assuming the reversibility of the semi-martingale, which was required in \cite{eisenbaum1}. Other generalizations include \cite{frw}, where it was also proved, using backward and forward integrals (\cite{lyons}), that if $X$ is a semi-martingale, then $f(X(t))$ is a semi-martingale if and only if $f\in W_{loc}^{1,2}$ and its weak derivative is of bounded variation. The above-mentioned extensions are useful in many problems.
However, to use probabilistic methods to study problems arising in partial differential equations with singularities and in the mathematics of finance, we often need a generalized It$\hat {\rm o}$'s formula for time-dependent $f(t,x)$. In the special case where there exist a Radon measure $\nu$ and a locally bounded Borel function $H$ such that $d_x(\nabla f(t,x))=H(t,x)\nu(dx)$, a generalized It$\hat {\rm o}$'s formula was obtained in \cite{yor1}. In a recent work \cite{Zhao1}, a new generalized It$\hat {\rm o}$'s formula for one-dimensional continuous semi-martingales was proved. It is given in terms of a Lebesgue-Stieltjes integral of the local time $L_t(x)$ with respect to the two-dimensional variation of $\nabla^-f(t,x)$ as follows \begin{eqnarray}\label{zhao1} &&f(t,X(t))-f(0,X(0))\nonumber\\ &=&\int _0^t{\partial ^-\over \partial s} f(s,X(s)){\rm d}s+\int _0^t\nabla ^-f(s,X(s))dX_s\nonumber\\ &&+{1\over 2} \int_0^t \Delta f_h(s,X(s))d<\hskip-4pt X\hskip-4pt>_s + \int _{-\infty}^{\infty }L _t(x){\rm d}_x\nabla ^-f_v(t,x)\nonumber\\ &&-\int _{-\infty}^{+\infty}\int _0^{t}L _s(x) {\bf \rm d}_{s,x}\nabla ^-f_v(s,x).\ \ a.s. \end{eqnarray} Here $f(t,x)=f_h(t,x)+f_v(t,x)$ is left continuous, with $f_h(t,x)$ being $C^1$ in $x$, $\nabla f_h(t,x)$ being absolutely continuous with left derivative $\Delta ^-f_h(t,x)$ left continuous and locally bounded, and $\nabla ^-f_v(t,x)$ being of locally bounded variation in $(t, x)$ and of locally bounded variation in $x$ at $t=0$. Note that the last two integrals are pathwise well defined due to the well-known fact that the local time $L _t(x)$ is jointly continuous in $t$ and c\`adl\`ag in $x$ and has a compact support in space $x$ for each $t$ (\cite{yor}, \cite{ks}).
In a special case, when there exists a curve $x=\gamma (t)$ of locally bounded variation such that the function $f$ is continuous, the first order derivative $\nabla f$ has jumps across the curve, and the second order derivative $\Delta f$ has a left limit as $x\to \gamma(t) -$, i.e. $\Delta ^-f$ exists, is locally bounded and is left continuous off the curve $x=\gamma (t)$, while $\nabla f$ may jump along $x=\gamma (t)$ (where $\Delta f$ is still undefined), define $\Delta^-f$ on the curve $x=\gamma(t)$ as the left limit of $\Delta f$. Then the following formula was derived from (\ref{zhao1}) using the integration by parts formula (\cite{Zhao1}): \begin{eqnarray}\label{zhao2} f(t,X(t)) &=&f(0,X(0))+\int _0^t{\partial ^-\over \partial s} f(s,X(s)){\rm d}s+\int _0^t\nabla ^-f(s,X(s))dX_s\nonumber\\ && + {1\over 2} \int _0^t\Delta ^-f(s,X(s))d<\hskip-4pt X\hskip-4pt>_s\nonumber\\ && +\int _0^{t}(\nabla f(s,\gamma (s)+)-\nabla f(s,\gamma (s)-))dL _s(\gamma(s)).\ \ a.s. \end{eqnarray} Here $dL_s(a)$ refers to the Lebesgue-Stieltjes integral with respect to $s\mapsto L _s(a)$. Formula (\ref{zhao2}) was also observed independently in \cite{peskir2}. These two new formulae have proved useful in analysing the asymptotics of solutions of partial differential equations in the presence of caustics (\cite{Zhao2}) and in studying the smooth fitting problem for the American put option (\cite{peskir4}). Formula (\ref{zhao1}) is in a very general form. It includes the classical It${\hat{\rm o}}$ formula, Tanaka's formula, Meyer's formula for convex functions, the formula given by Az\'ema, Jeulin, Knight and Yor \cite{yor}, and formula (1.2). The purpose of this paper is to extend formula (\ref{zhao1}) to two dimensions. This is a nontrivial extension, as local time does not exist in two dimensions.
But we observe that, for a smooth function $f$, formally by the occupation times formula \begin{eqnarray}\label{zhao3} && {1\over 2}\int _0^{t}\Delta _1f(s,X_1(s),X_2(s))d<\hskip-4pt X _1\hskip-4pt>_s \nonumber\\ &=& \int _{-\infty}^{+\infty}\int _0^{t}\Delta _1f(s,a,X_2(s)){\rm d}_sL _1(s,a){\rm d}a\nonumber\\ &=& \int _{-\infty}^{+\infty}\Delta _1f({t},a,X_2({t})){L_1(t,a)}{\rm d}a\nonumber\\ &&-\int _{-\infty}^{+\infty}\int _0^{t}{L_1(s,a)}{\bf\rm d}_{s,a}\nabla_1 f(s,a,X_2(s)), \end{eqnarray} if the integral $\int_{-\infty}^{+\infty}\int _0^{t}{L_1(s,a)}{\bf\rm d}_{s,a}\nabla _1f(s,a,X_2(s))$ is properly defined. Here $\nabla_1 f(s,a,X_2(s))$ is a semi-martingale for any fixed $a$, by the one-dimensional generalized It${\hat {\rm o}}$'s formula (\ref{zhao1}). To this end, we study integrals of the form $\int_{-\infty}^{+\infty}\int _0^{t}{g(s,a)}{\bf\rm d}_{s,a} h(s,a)$ in section 2. Here $h(s,x)$ is a continuous martingale with cross variation $<h(\cdot,a),h(\cdot,b)>_s$ of locally bounded variation in $(a,b)$, and $E\left[\int_0^t\int_{R^2}|g(s,a)g(s,b)||{\bf \rm d}_{a,b,s}<h(\cdot,a),h(\cdot,b)>_s|\right] < {\infty}$. The integral is different from the Lebesgue-Stieltjes integral and from It${\hat{\rm o}}$'s stochastic integral. But it is a natural extension to the two-parameter stochastic case and is therefore called a stochastic Lebesgue-Stieltjes integral. To the best of our knowledge, this integral is new. It is different from integration with respect to the Brownian sheet defined by Walsh (\cite{walsh}) and from integration with respect to Poisson random measures (see \cite{ikeda}). A generalized It$\hat {\rm o}$'s formula in two dimensions is proved in section 3. Applications, e.g. in the study of the asymptotics of solutions of heat equations with caustics in two dimensions, are not included in this paper; these results will be published in future work.
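As a one-dimensional sanity check of the occupation times formula used above (our illustration, taking Brownian motion so that $d\langle M\rangle_s={\rm d}s$), one can approximate $L(t,a)$ by its defining density ${1\over 2\epsilon}\int_0^t 1_{[a,a+\epsilon)}(X(s))\,{\rm d}s$ and verify $\int_0^t g(X(s))\,{\rm d}s \approx 2\int_{-\infty}^{\infty} g(a)L(t,a)\,{\rm d}a$ numerically:

```python
import numpy as np

rng = np.random.default_rng(0)
n, t = 20_000, 1.0
dt = t / n
X = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n))])

g = lambda x: np.exp(-x ** 2)

# left side: time integral of g along the path
lhs = np.sum(g(X[:-1])) * dt

# right side: 2 * int g(a) L(t,a) da, with L(t,a) ~ (1/(2 eps)) * (time in [a, a+eps))
eps = 0.05
grid = np.arange(X.min() - 1.0, X.max() + 1.0, eps)
L = np.array([np.sum((X[:-1] >= a) & (X[:-1] < a + eps)) * dt for a in grid]) / (2 * eps)
rhs = 2.0 * np.sum(g(grid) * L) * eps

print(abs(lhs - rhs) < 0.05)  # the two sides agree up to O(eps) discretization error
```

The discrepancy is bounded by $\sup|g'|\cdot\epsilon\cdot t$ for any path, since each sample of the path is binned with an endpoint of its bin.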
Furthermore, it was observed by us in \cite{Zhao3} that the local time $L_t(x)$ can be considered naturally as a rough path in $x$ of finite 2-variation, and $\int_0^t\int_{-\infty}^\infty \nabla^-f(s,x)d_{s,x}L_s(x)$ is defined pathwise by using and extending Lyons' idea of rough path integration (\cite{terry}). \vskip5pt \section{The definition of stochastic Lebesgue-Stieltjes integrals and the integration by parts formula} \setcounter{equation}{0} For a filtered probability space $(\Omega,{\cal F},\{{\cal F}_t\}_{t\geq 0},P)$, denote by ${\cal M}_2$ the Hilbert space of all processes $X=(X_t)_{0\leq t\leq T}$ such that $(X_t)_{0\leq t\leq T}$ is an $({\cal F}_t)_{0\leq t\leq T}$ right continuous square integrable martingale, with inner product $(X,Y)=E(X_TY_T)$. A three-variable function $f(s,x,y)$ is called left continuous if it is left continuous in all three variables together, i.e. for any sequence $(s_1,x_1,y_1)\leq (s_2,x_2,y_2) \leq \cdots \leq (s_k,x_k,y_k)\to (s,x,y)$, we have $f(s_k,x_k,y_k)\to f(s,x,y)$ as $k\to \infty$. Here $(s_1,x_1,y_1)\leq (s_2,x_2,y_2)$ means $s_1\leq s_2$, $x_1\leq x_2$ and $y_1\leq y_2$. Define \begin{eqnarray*} {\cal V}_1:= \Big\{h: && [0,t] \times ({-\infty},{\infty}) \times {\Omega} \to R \ s.t.\ (s,x,\omega)\mapsto h(s,x,\omega)\\ && is \ {\cal B} ([0,t] \times R) \times {\cal F}{\rm-}measurable,\ and\ h(s,x)\ is\ \\ &&{\cal F}_s{\rm-}adapted \ for\ any\ x\in R\Big \},\\ {\cal V}_2:=\Big\{h: && h\in{\cal V}_1\ is\ a \ continuous \ (in\ s)\ {\cal M}_2{\rm-}martingale \ for \ each \ x,\\ && and \ the \ cross\ variation\ <h(\cdot,x),h(\cdot,y)>_s\ is\ left\ continuous\\ && and \ of \ locally\ bounded\ variation \ in\ (s,x,y)\Big\}. \end{eqnarray*} In the following, we will always denote $<h(\cdot,x),h(\cdot,y)>_s$ by $<h(x),h(y)>_s$. We now recall some classical results for the sake of completeness of the paper (see \cite{ash} and \cite{mc}).
A three-variable function $f(s,x,y)$ is called monotonically increasing if whenever $(s_2,x_2,y_2)\geq (s_1,x_1,y_1)$, then \begin{eqnarray*} &&f(s_2,x_2,y_2)-f(s_2,x_1,y_2)-f(s_2,x_2,y_1)+f(s_2,x_1,y_1)\\ &&- f(s_1,x_2,y_2)+f(s_1,x_1,y_2)+f(s_1,x_2,y_1)-f(s_1,x_1,y_1)\geq 0. \end{eqnarray*} For a left-continuous and monotonically increasing function $f(s,x,y)$, one can define a Lebesgue-Stieltjes measure by setting \begin{eqnarray*} && \nu ([s_1,s_2)\times [x_1,x_2)\times [y_1,y_2))\\ &=&f(s_2,x_2,y_2)-f(s_2,x_1,y_2)-f(s_2,x_2,y_1)+f(s_2,x_1,y_1)\\ &&- f(s_1,x_2,y_2)+f(s_1,x_1,y_2)+f(s_1,x_2,y_1)-f(s_1,x_1,y_1). \end{eqnarray*} For $h\in{\cal V}_2$, define \begin{eqnarray*} <h(x),h(y)>_{t_1}^{t_2}:=<h(x),h(y)>_{t_2} - <h(x),h(y)>_{t_1},\ {t_2}\geq{t_1}. \end{eqnarray*} Since $<h(x),h(y)>_s$ is left continuous and of locally bounded variation in ($s,x,y$), it can be decomposed into the difference of two increasing and left continuous functions $f_1(s,x,y)$ and $f_2(s,x,y)$ (see McShane \cite{mc} or Proposition 2.2 in Elworthy, Truman and Zhao \cite{Zhao1}, which also holds for multi-parameter functions). As each of $f_1$ and $f_2$ generates a measure, for any measurable function $g(s,x,y)$ we can define \begin{eqnarray*} && \int _{t_1}^{t_2}\int _{a_1}^{a_2}\int _{b_1}^{b_2}g(s,x,y)d_{x,y,s}<h(x),h(y)>_s\\ &=& \int _{t_1}^{t_2}\int _{a_1}^{a_2}\int _{b_1}^{b_2}g(s,x,y)d_{x,y,s}f_1(s,x,y)\\ &&- \int _{t_1}^{t_2}\int _{a_1}^{a_2}\int _{b_1}^{b_2}g(s,x,y)d_{x,y,s}f_2(s,x,y).
\end{eqnarray*} In particular, a signed product measure in the space $[0,T]\times R^2$ can be defined as follows: for any ${[t_1,t_2)}\times{[x_1,x_2)}\times{[y_1,y_2)} \subset [0,T]\times R^2$ \begin{eqnarray} && \int _{t_1}^{t_2}\int _{x_1}^{x_2} \int _{y_1}^{y_2} {\bf \rm d}_{x,y,s}<h(x),h(y)>_s\nonumber\\ &=&\int _{t_1}^{t_2}\int _{x_1}^{x_2} \int _{y_1}^{y_2}{\bf \rm d}_{x,y,s}f_1(s,x,y)-\int _{t_1}^{t_2}\int _{x_1}^{x_2} \int _{y_1}^{y_2}{\bf \rm d}_{x,y,s}f_2(s,x,y)\nonumber\\ &=&<h(x_2),h(y_2)>_{t_1}^{t_2} - <h(x_2),h(y_1)>_{t_1}^{t_2}\nonumber\\ &&-<h(x_1),h(y_2)>_{t_1}^{t_2} + <h(x_1),h(y_1)>_{t_1}^{t_2}\nonumber\\ &=&<h(x_2)-h(x_1),h(y_2)-h(y_1)>_{t_1}^{t_2}. \end{eqnarray} Define \begin{eqnarray} |{\bf \rm d}_{x,y,s}<h(x),h(y)>_s|={\bf \rm d}_{x,y,s}f_1(s,x,y)+{\bf \rm d}_{x,y,s}f_2(s,x,y). \end{eqnarray} Moreover, for $h\in{\cal V}_2$, define: \begin{eqnarray*} {\cal V}_3(h):=\Big\{g&:&g\in{\cal V}_1\ has\ a\ compact\ support\ in\ x,\ and\ \\&E&\left[\int_0^t\int_{R^2}|g(s,x)g(s,y)||{\bf \rm d}_{x,y,s}<h(x),h(y)>_s|\right] < {\infty} \Big\}. \end{eqnarray*} Consider now a simple function \begin{eqnarray}\label{fr1} g(s,x,\omega)&=&\sum_{i=1}^{n}\sum_{j=1}^{m}e(t_j,x_i)1_{(t_j,t_{j+1}]}(s)1_{(x_i,x_{i+1}]}(x) \end{eqnarray} where $t_1<t_2<\cdots <t_{n+1}$, $x_1<x_2<\cdots <x_{m+1}$, $ e(t_j,x_i)$ are ${\cal F}_{t_j}$-measurable. For $h\in{\cal V}_2$, define an integral as: \begin{eqnarray} I_t(g)&:=&\int _{0}^{t}\int _{-\infty}^{\infty}g(s,x){\bf \rm d}_{s,x}h(s,x)\nonumber\\ &=&\sum _{i=1}^{n}\sum _{j=1}^{m} e(t_j\wedge t,x_i)\Big[h(t_{j+1}\wedge t,x_{i+1})-h(t_j\wedge t,x_{i+1})\nonumber\\ &&\hskip 3cm -h(t_{j+1}\wedge t,x_i)+h(t_j\wedge t,x_i)\Big]. \end{eqnarray} This integral is called the stochastic Lebesgue-Stieltjes integral of the simple function $g$. 
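The double sum defining the stochastic Lebesgue-Stieltjes integral of a simple function transcribes directly into code. In the sketch below (ours; the deterministic $h$ is only a stand-in for a martingale field, used to obtain a checkable identity), taking $e\equiv 1$ makes the sum telescope to the mixed increment of $h$ over the whole grid.

```python
def I_t(e, t_grid, x_grid, h, t):
    """Stochastic Lebesgue-Stieltjes integral of the simple function
    g = sum_{j,i} e[j][i] 1_{(t_j, t_{j+1}]} 1_{(x_i, x_{i+1}]} against h(s, x):
    the double sum of e(t_j ^ t, x_i) times the mixed second difference of h."""
    total = 0.0
    for j in range(len(t_grid) - 1):
        tj, tj1 = min(t_grid[j], t), min(t_grid[j + 1], t)
        for i in range(len(x_grid) - 1):
            xi, xi1 = x_grid[i], x_grid[i + 1]
            total += e[j][i] * (h(tj1, xi1) - h(tj, xi1) - h(tj1, xi) + h(tj, xi))
    return total

# with e == 1 the sum telescopes: for h(s, x) = s * x it equals T * (x_max - x_min)
t_grid = [0.0, 0.3, 0.7, 1.0]
x_grid = [-1.0, -0.2, 0.4, 1.0]
e = [[1.0] * 3 for _ in range(3)]
print(round(I_t(e, t_grid, x_grid, lambda s, x: s * x, t=1.0), 10))  # 1.0 * 2.0 = 2.0
```

When $h(\cdot,x)$ is a martingale for each $x$, the same double sum is the one whose $L^2$ limit defines the integral in the text.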
It is easy to see that for simple functions $g_1, g_2\in {\cal V}_3(h)$, \begin{eqnarray}\label{add} I_t(\alpha g_1+\beta g_2)=\alpha I_t(g_1)+\beta I_t(g_2), \end{eqnarray} for any $\alpha, \beta\in R$. The following lemma plays a key role in extending the integral from simple functions to functions in ${\cal V}_3(h)$. It is the analogue, for this integral, of It$\hat {\rm o}$'s isometry formula for stochastic integrals. \vskip5pt \begin{lem}\label{lemma1} If $h\in{\cal V}_2$ and $g\in{\cal V}_3(h)$ is simple, then $I_t(g)$ is a continuous martingale with respect to $({\cal F}_t)_{0\leq t\leq T}$ and \begin{eqnarray}\label{fcr1} &&E \left(\int _{0}^{t}\int _{-\infty}^{\infty}g(s,x){\bf \rm d}_{s,x}h(s,x)\right)^2\nonumber\\ &=&E\int _{0}^{t}\int _{R^2} g(s,x)g(s,y){\bf \rm d}_{x,y,s}<h(x),h(y)>_s. \end{eqnarray} \end{lem} {\em Proof}: From the definition of $\int _{0}^{t}\int _{-\infty}^{\infty}g(s,x){\bf \rm d}_{s,x}h(s,x)$, it is easy to see that $I_t(g)$ is a continuous martingale with respect to $({\cal F}_t)_{0\leq t\leq T}$.
As $h(s,x,\omega)$ is a continuous martingale in ${\cal M}_2$, using a standard conditional expectation argument to remove the cross-product terms, we get: \begin{eqnarray*} && E\left[\left(\int _{0}^{t}\int _{-\infty}^{\infty}g(s,x){\bf \rm d}_{s,x}h(s,x)\right)^2\right]\\ &=&E\sum _{j=1}^{m}\Bigg(\sum _{i=1}^{n} e(t_j\wedge t,x_i)\Big[h(t_{j+1}\wedge t,x_{i+1})-h(t_{j}\wedge t,x_{i+1})\nonumber\\ &&\hskip 3cm -h(t_{j+1}\wedge t,x_i)+h(t_{j}\wedge t,x_{i})\Big]\Bigg)^2 \\ &=&E\sum _{j=1}^{m}\Bigg (\sum _{i=1}^{n} \sum _{k=1}^{n} e(t_j\wedge t,x_i)e(t_j\wedge t,x_k)\cdot\\ &&\hskip 1cm \Big[h(t_{j+1}\wedge t,x_{i+1}) - h(t_{j}\wedge t,x_{i+1}) - h(t_{j+1}\wedge t,x_i) + h(t_{j}\wedge t,x_{i})\Big]\cdot \\ &&\hskip 1cm\Big[h(t_{j+1}\wedge t,x_{k+1}) - h(t_j\wedge t,x_{k+1}) - h(t_{j+1}\wedge t,x_k) + h(t_j\wedge t,x_k)\Big]\Bigg)\\ &=&E\sum _{j=1}^{m}\Bigg\{\sum _{i=1}^{n} \sum _{k=1}^{n} e(t_j\wedge t,x_i)e(t_j\wedge t,x_k)\cdot\\ &&\hskip 1cm\Big[\big(h(t_{j+1}\wedge t,x_{i+1})-h(t_{j}\wedge t,x_{i+1})\big)\big(h(t_{j+1}\wedge t,x_{k+1})-h(t_{j}\wedge t,x_{k+1})\big)\\ && \hskip 1cm- \big(h(t_{j+1}\wedge t,x_{i+1})-h(t_{j}\wedge t,x_{i+1})\big)\big(h(t_{j+1}\wedge t,x_k) - h(t_{j}\wedge t,x_k)\big) \\ && \hskip 1cm - \big(h(t_{j+1}\wedge t,x_i) - h(t_{j}\wedge t,x_{i})\big)\big(h(t_{j+1}\wedge t,x_{k+1})-h(t_{j}\wedge t,x_{k+1})\big) \\ &&\hskip 1cm + \big(h(t_{j+1}\wedge t,x_i) - h(t_{j}\wedge t,x_{i})\big)\big(h(t_{j+1}\wedge t,x_k) - h(t_{j}\wedge t,x_k)\big)\Big]\Bigg\}\\ &=&E \int _{0}^{t} \sum _{i=1}^{n} \sum _{k=1}^{n}e(s,x_i)e(s,x_k) \Big[{\bf \rm d}_s <h(x_{i+1}),h(x_{k+1})>_s \\ &&- {\bf \rm d}_s <h(x_{i+1}),h(x_{k})>_s - {\bf \rm d}_s <h(x_{i}),h(x_{k+1})>_s \\ && \hskip3.8cm + {\bf \rm d}_s <h(x_{i}),h(x_{k})>_s\Big]\\ &=&E\int _{0}^{t} \sum _{i=1}^{n} \sum _{k=1}^{n}e(s,x_i)e(s,x_k) \Big[{\bf \rm d}_s <h(x_{i+1})-h(x_{i}),h(x_{k+1})-h(x_{k})>_s\Big]\\ &=&E\left[\int _{0}^{t}\int _{R^2} g(s,x)g(s,y){\bf \rm d}_{x,y,s}<h(x),h(y)>_s\right].
\end{eqnarray*} This proves the desired result. $ \diamond$\\ The idea is to use (\ref{fcr1}) to extend the definition of the integral from simple functions to functions in ${\cal V}_3(h)$, for any $h \in {\cal V}_2$. We achieve this goal in several steps: \begin{lem}\label{lem1} Let $h \in {\cal V}_2$, and let $f \in {\cal V}_3(h)$ be bounded uniformly in $\omega$, with $f(\cdot,\cdot,\omega)$ continuous for each ${\omega}$ on its compact support. Then there exists a sequence of bounded simple functions ${\varphi}_{m,n} \in {\cal V}_3(h) $ such that \begin{eqnarray*} &&\hskip -0.3cm E\int _{0}^{t}\int _{R^2} \mid (f - \varphi _{m,n})(s,x)(f - \varphi_{m',n'})(s,y) |\mid{\bf \rm d}_{x,y,s}<h(x),h(y)>_s|\\ &&\hskip -0.3cm \to0, \end{eqnarray*} as $m, n , m',n'\to \infty$. \end{lem} {\em Proof}: Let $[0,T]\times[a,b]$ be a rectangle covering the compact support of $f$, and let $0=t_1<t_2<\cdots<t_{m+1}=T$ and $a=x_1<x_2<\cdots<x_{n+1}=b$ be a partition of $[0,T]\times[a,b]$. Assume that, as $n,m\to\infty$, $\max \limits _{1\leq j\leq m}(t_{j+1}-t_j)\to 0$ and $\max \limits _{1\leq i\leq n}(x_{i+1}-x_i)\to 0$. Define \begin{eqnarray} \varphi _{m,n}(t,x):=\sum_{j=1}^{m}\sum_{i=1}^{n}f(t_j,x_i)1_{(t_j,t_{j+1}]}(t)1_{(x_i,x_{i+1}]}(x). \end{eqnarray} Then $\varphi _{m,n}(t,x)$ are simple and $\varphi _{m,n}(t,x)\to f(t,x) \ a.s. \ as \ m,n\to {\infty}$. The result follows from Lebesgue's dominated convergence theorem. $ \diamond$ \begin{lem} \label{lem2} Let $h \in {\cal V}_2$ and let $k \in {\cal V}_3(h)$ be bounded uniformly in $\omega$. Then there exist functions $f_n \in {\cal V}_3(h) $ such that $f_n(\cdot,\cdot,\omega) $ are continuous for all $\omega$ and $n$ on its support, and \begin{eqnarray*} &&E \int _{0}^{t}\int _{R^2} \mid (k - f_n)(s,x)(k - f_{n'})(s,y) |\mid{\bf \rm d}_{x,y,s}<h(x), h(y)>_s|\\ &&\to0, \end{eqnarray*} as $n,n' \to {\infty}$. \end{lem} {\em Proof}: Define \begin{eqnarray*} f_{n}(s,x)=n^2\int_{x-{1\over n}}^x\int_{s-{1\over n}}^s k(\tau,y)d\tau dy.
\end{eqnarray*} Then $f_{n}(s,x)$ is continuous in $s, x$, and as $n \to {\infty}$, $f_{n}(s,x)\to k(s,x)$ a.s. The desired convergence follows from Lebesgue's dominated convergence theorem. $ \diamond$ \begin{lem} \label{lem3} Let $h \in {\cal V}_2$ and $g\in{\cal V}_3(h)$. Then there exist functions $k_n\in {\cal V}_3(h)$, bounded uniformly in $\omega$ for each $n$, such that \begin{eqnarray*} &&E\int _{0}^{t}\int _{R^2} \mid (g - k_n)(s,x)(g - k_{n'})(s,y) |\mid{\bf \rm d}_{x,y,s}<h(x), h(y)>_s|\\ &&\to0, \end{eqnarray*} as $ n,n' \to {\infty}$. \end{lem} {\em Proof}: Define \begin{eqnarray} k_n(t,x,\omega):=\cases{-n & if $g(t,x,\omega)<-n$ \cr g(t,x,\omega) & if $-n \leq g(t,x,\omega) \leq n$ \cr n & if $g(t,x,\omega)>n$.} \end{eqnarray} Then, as $n\to \infty$, $k_n(t,x,\omega)\to g(t,x,\omega)$ for each ($t,x,\omega$). Note that $|k_n(t,x,\omega)|\leq |g(t,x,\omega)|$ and $g\in{\cal V}_3(h)$. So applying Lebesgue's dominated convergence theorem, we obtain the desired result. $ \diamond$ \vskip5pt From Lemmas \ref{lem3}, \ref{lem2} and \ref{lem1}, for each $h \in {\cal V}_2$ and $g \in {\cal V}_3(h)$, we can construct a sequence of simple functions $\{{\varphi} _{m,n}\} $ in ${\cal V}_3(h)$ such that \begin{eqnarray*} &&E\int _{0}^{t}\int _{R^2} \mid {(g - \varphi _{m,n})(s,x)(g - \varphi _{m',n'})(s,y)} |\mid{\bf \rm d}_{x,y,s}<h(x),h(y)>_s|\\ &&\to0, \end{eqnarray*} as $ m,n,m',n' \to {\infty}$. For $\varphi _{m,n}$ and $\varphi _{m',n'}$, we can define the stochastic Lebesgue-Stieltjes integrals $I_t(\varphi _{m,n})$ and $I_t(\varphi _{m',n'})$.
From Lemma \ref{lemma1} and (\ref{add}), it is easy to see that \begin{eqnarray*} &&E\left[I_T(\varphi _{m,n})-I_T(\varphi _{m',n'})\right]^2\\ &=&E\left[I_T(\varphi _{m,n}-\varphi _{m',n'})\right]^2\\ &=&E\int_0^T\int_{R^2}(\varphi _{m,n}-\varphi_{m',n'})(s,x)(\varphi _{m,n}-\varphi_{m',n'})(s,y)d_{x,y,s}<h(x),h(y)>_s\\ &=&E\int_0^T\int_{R^2}[(\varphi _{m,n}-g)-(\varphi_{m',n'}-g)](s,x)\cdot\\ &&\hskip 1.5cm[(\varphi _{m,n}-g)-(\varphi_{m',n'}-g)](s,y)d_{x,y,s}<h(x),h(y)>_s\\ &=&E\int_0^T\int_{R^2}(\varphi _{m,n}-g)(s,x)(\varphi _{m,n}-g)(s,y)d_{x,y,s}<h(x),h(y)>_s \\ &&-E\int_0^T\int_{R^2}(\varphi _{m,n}-g)(s,x)(\varphi _{m',n'}-g)(s,y)d_{x,y,s}<h(x),h(y)>_s \\ &&-E\int_0^T\int_{R^2}(\varphi _{m',n'}-g)(s,x)(\varphi _{m,n}-g)(s,y)d_{x,y,s}<h(x),h(y)>_s \\ &&+E\int_0^T\int_{R^2}(\varphi _{m',n'}-g)(s,x)(\varphi _{m',n'}-g)(s,y)d_{x,y,s}<h(x),h(y)>_s \\ &\leq&E\int_0^T\int_{R^2}\mid(\varphi _{m,n}-g)(s,x)(\varphi _{m,n}-g)(s,y)\mid\mid d_{x,y,s}<h(x),h(y)>_s\mid \\ &&+E\int_0^T\int_{R^2}\mid(\varphi _{m,n}-g)(s,x)(\varphi _{m',n'}-g)(s,y)\mid\mid d_{x,y,s}<h(x),h(y)>_s\mid \\ &&+E\int_0^T\int_{R^2}\mid(\varphi _{m',n'}-g)(s,x)(\varphi _{m,n}-g)(s,y)\mid\mid d_{x,y,s}<h(x),h(y)>_s\mid \\ &&+E\int_0^T\int_{R^2}\mid(\varphi _{m',n'}-g)(s,x)(\varphi _{m',n'}-g)(s,y)\mid\mid d_{x,y,s}<h(x),h(y)>_s \mid\\ &\to& 0, \end{eqnarray*} as $m,n,m',n'\to \infty$. Therefore $\{I_\cdot(\varphi_{m,n})\}_{m,n=1}^\infty$ is a Cauchy sequence in ${\cal M}_2$, whose norm is denoted by $\parallel\cdot\parallel$. So there exists a process $I(g)=\{I_t(g), 0\leq t\leq T\}$ in ${\cal M}_2$, defined modulo indistinguishability, such that \begin{eqnarray*} \parallel I(\varphi_{m,n})-I(g)\parallel\to 0, \ as \ m,n\to\infty. \end{eqnarray*} By the same argument as for the stochastic integral, one can easily prove that $I(g)$ is well defined (independent of the choice of the simple functions), and that (\ref{fcr1}) holds for $I(g)$. We can now give the following definition.
\begin{defi}\label{definition1} Let $h \in {\cal V}_2$ and $g \in {\cal V}_3(h)$. Then the integral of $g$ with respect to $h$, defined as \begin{eqnarray*} &&\int _{0}^{t}\int _{-\infty}^{\infty}g(s,x){\bf \rm d}_{s,x}h(s,x)\\ &=&\lim_{m,n \to {\infty}} \int _{0}^{t}\int _{-\infty}^{\infty}\varphi_{m,n}(s,x){\bf \rm d}_{s,x}h(s,x), \ \ \ \ (limit \ in \ {\cal M}_2) \end{eqnarray*} is a continuous martingale with respect to $({\cal F}_t)_{0\leq t\leq T}$, and for each $t\leq T$, (\ref{fcr1}) is satisfied. Here $\{{\varphi} _{m,n}\} $ is a sequence of simple functions in ${\cal V}_3(h)$ such that \begin{eqnarray*} &&E\int _{0}^{t}\int _{R^2} \mid (g - \varphi _{m,n})(s,x)(g - \varphi _{m',n'})(s,y) |\mid{\bf \rm d}_{x,y,s}<h(x),h(y)>_s|\\ &&\to0, \end{eqnarray*} as $ m,n, m^{\prime}, n^{\prime} \to {\infty}$. Such $\varphi _{m,n}$ may be constructed by combining the three approximation procedures in Lemmas \ref{lem3}, \ref{lem2} and \ref{lem1}. \end{defi} The following integration by parts formula will be useful in the proof of our main theorem. Although the conditions are strong and may be unnecessary, the proposition is sufficient for our purpose; we do not strive to weaken the conditions here. \vskip5pt \begin{prop}\label{proposition1} If $h \in {\cal V}_2$, $g \in {\cal V}_3(h)$, $g(t,x)$ is $C^2$ in $x$, and $\Delta g(t,x)$ is bounded uniformly in $t$, then a.s. \begin{eqnarray}\label{fc1} -\int _{-\infty}^{+\infty}\int _0^t \nabla g(s,x) {\rm d}_{s} h(s,x) dx=\int _0^t \int _{-\infty}^{+\infty}g(s,x){\rm \bf d}_{s,x} h(s,x). \end{eqnarray} \end{prop} {\em Proof}: If $g$ is a simple function as given in (\ref{fr1}), one can always add points to the partition to make $e(t_j\wedge t,x_1)=0$ and $e(t_j\wedge t,x_{n+1})=0$ for all $j=1,2,\cdots,m$, since $g$ has compact support in $x$.
So for $h\in{\cal V}_2$, \begin{eqnarray*} && \int _{0}^{t}\int _{-\infty}^{\infty}g(s,x){\bf \rm d}_{s,x}h(s,x)\\ &=&\sum _{i=1}^{n}\sum _{j=1}^{m} e(t_j\wedge t,x_i)\Big[h(t_{j+1}\wedge t,x_{i+1})-h(t_j\wedge t,x_{i+1})\\ &&\hskip 3cm -h(t_{j+1}\wedge t,x_i)+h(t_j\wedge t,x_i)\Big]\\ &=&-\sum _{i=0}^{n-1}\sum _{j=1}^{m} e(t_j\wedge t,x_{i+1})\Big[h(t_{j+1}\wedge t,x_{i+1})-h(t_j\wedge t,x_{i+1})\Big]\\ &&+\sum _{i=1}^{n}\sum _{j=1}^{m} e(t_j\wedge t,x_{i})\Big[h(t_{j+1}\wedge t,x_{i+1})-h(t_j\wedge t,x_{i+1})\Big]\\ &=&-\sum _{i=1}^{n}\sum _{j=1}^{m} \Big[e(t_j\wedge t,x_{i+1 })-e(t_j\wedge t,x_i)\Big]\Big[h(t_{j+1}\wedge t,x_{i+1})-h(t_j\wedge t,x_{i+1})\Big]. \end{eqnarray*} If $g(t,x)$ is $C^2$ in $x$ and $\Delta g(t,x)$ is bounded uniformly in $t$, let \begin{eqnarray*} \varphi _{m,n}(t,x):=\sum_{j=1}^{m}\sum_{i=1}^{n}g(t_j,x_i)1_{[t_j,t_{j+1})}(t)1_{[x_i,x_{i+1})}(x), \end{eqnarray*} so \begin{eqnarray*} \varphi _{m,n}(t,x)\to {g(t,x)}\ a.s. \ as \ m,n\to {\infty}. \end{eqnarray*} Then by the intermediate value theorem, there exist $\xi_i\in [x_i,x_{i+1}]$ $(i=1,2,\cdots,n)$ such that \begin{eqnarray*} &&\int _{-\infty}^{+\infty}\int _0^t g(s,x) {\rm d}_{s,x}{h(s,x)}\\ &=&-\lim_{\delta_t,\delta_x \to 0}\sum _{i=1}^{n}\sum _{j=1}^{m} \Big[g(t_j\wedge t,x_{i+1})-g(t_j\wedge t,x_i)\Big]\\ &&\hskip 2.5cm\Big[h(t_{j+1}\wedge t,x_{i+1})-h(t_j\wedge t,x_{i+1})\Big] \ \ \ \ \ \ \ (limit \ in \ {\cal M}_2)\\ &=&-\lim_{\delta_t,\delta_x \to 0}\sum _{i=1}^{n}\sum _{j=1}^{m} \nabla g(t_j\wedge t,\xi_i)\Big[h(t_{j+1}\wedge t,x_{i+1})-h(t_j\wedge t,x_{i+1})\Big]\cdot\\ &&\hskip 2.5cm (x_{i+1}-x_i)\\ &=&-\lim_{\delta_x \to 0}\sum_{i=1}^{n}\int_0^t \nabla g(s,\xi_i){\rm d}_{s}h(s,x_{i+1})(x_{i+1}-x_{i})\ \ \ \ \ \ \ \ \ \ \ \ \ (limit \ in \ {\cal M}_2)\\ &=&-\lim_{\delta_x \to 0}\sum_{i=1}^{n}\int_0^t \nabla g(s,x_{i+1}){\rm d}_{s}h(s,x_{i+1})(x_{i+1}-x_{i})\\ &&-\lim_{\delta_x \to 0}\sum_{i=1}^{n}\int_0^t (\nabla g(s,\xi_i)-\nabla g(s,x_{i+1})){\rm d}_{s}h(s,x_{i+1})(x_{i+1}-x_{i})\\ &=&-\int _{-\infty}^{+\infty}\int _0^t \nabla g(s,x) {\rm d}_{s} h(s,x) dx.\hskip 3.5cm (limit \ in \ {\cal M}_2) \end{eqnarray*} Here $ {\delta _t}=\max\limits_{1\leq j\leq m}{|t_{j+1}-t_j|} $, $ {\delta _x}=\max\limits_{1\leq i\leq n}{|x_{i+1}-x_i|} $. To prove the last equality, first notice that \begin{eqnarray*} &&\lim_{\delta_x \to 0}\sum_{i=1}^{n}\int_0^t \nabla g(s,x_{i+1}){\rm d}_{s}h(s,x_{i+1})(x_{i+1}-x_{i})\\ &=&\int _{-\infty}^{+\infty}\int _0^t \nabla g(s,x) {\rm d}_{s} h(s,x) dx. \end{eqnarray*} Second, by the intermediate value theorem again, the second term can be estimated as: \begin{eqnarray*} &&E\left[\sum_{i=1}^{n}\int_0^t (\nabla g(s,\xi_i)-\nabla g(s,x_{i+1})){\rm d}_{s}h(s,x_{i+1})(x_{i+1}-x_{i})\right]^2\\ &=&E \sum_{i=1}^{n}\sum_{k=1}^{n}\bigg[\int_0^t (\nabla g(s,\xi_i)-\nabla g(s,x_{i+1})){\rm d}_{s}h(s,x_{i+1})(x_{i+1}-x_{i})\cdot\\ &&\hskip 2cm \int_0^t (\nabla g(s,\xi_k)-\nabla g(s,x_{k+1})){\rm d}_{s}h(s,x_{k+1})(x_{k+1}-x_{k})\bigg]\\ &=&\sum_{i=1}^{n}\sum_{k=1}^{n}E\int_0^t (\nabla g(s,\xi_i)-\nabla g(s,x_{i+1}))(\nabla g(s,\xi_k)-\nabla g(s,x_{k+1}))\\ &&\hskip 2cm{\rm d}_s<h(x_{i+1}),h(x_{k+1})>_s(x_{i+1}-x_i)(x_{k+1}-x_{k})\\ &&\\ &\leq&\sum_{i=1}^{n}\sum_{k=1}^{n}E\sup\limits_{\xi_i\in[x_i,x_{i+1}]}|\nabla g(s,\xi_i)-\nabla g(s,x_{i+1})|\cdot\\ &&\hskip 1.5cm\sup\limits_{\xi_k\in[x_k,x_{k+1}]}|\nabla g(s,\xi_k)-\nabla g(s,x_{k+1})|\cdot\\ &&\hskip 1.5cm|<h(x_{i+1})>_t<h(x_{k+1})>_t|^{1\over2}(x_{i+1}-x_i)(x_{k+1}-x_k)\\ &\leq&E\Big [\sup\limits_{s}\sup\limits_{i}\sup\limits_{\xi_i\in[x_i,x_{i+1}]}|\Delta g(s,\eta_i)(\xi _i-x_{i+1})|\cdot\\ &&\hskip 0.3cm\sup\limits_{s}\sup\limits_{k}\sup\limits_{\xi_k\in[x_k,x_{k+1}]}|\Delta g(s,\eta_k)(\xi _k-x_{k+1})| |<h(x_{i+1})>_t<h(x_{k+1})>_t|^{1\over2}\Big]\\ &&\hskip 0.3cm\cdot\left(\sum_{i=1}^{n}\sum_{k=1}^{n}(x_{i+1}-x_i)(x_{k+1}-x_k)\right)\\ &\to& 0,\ as\ \delta _x\to 0, \end{eqnarray*} where $\eta _i\in [\xi _i, x_{i+1}]$, $\eta _k\in [\xi _k,
x_{k+1}]$. The desired result is proved. $\diamond$ \vskip5pt \section{The generalized It${\hat{\rm o}}$'s formula in two-dimensional space} \setcounter{equation}{0} Let $X(s)=(X_1(s),X_2(s))$ be a two-dimensional continuous semi-martingale with $X_i(s)=X_i(0)+M_i(s)+V_i(s)$ $(i=1,2)$ on a probability space $(\Omega,{\cal F},P)$. Here $M_i(s)$ is a continuous local martingale and $V_i(s)$ is an adapted continuous process of locally bounded variation (in $s$). Let $L_i(t,a)$ be the local time of $X_i(t)$ $(i=1,2)$, \begin{eqnarray} L _i(t,a)=\lim_{\epsilon\downarrow 0} {1\over 2\epsilon}\int _0^t1_{[a,a+\epsilon)}(X_i(s))d<\hskip-4pt M_i\hskip-4pt>_s, \ \ a.s. \ \ i=1,2 \end{eqnarray} for each $t$ and $a\in R$. Then it is well known that for each fixed $a\in R$, $L_i(t,a,\omega)$ is continuous and nondecreasing in $t$, and right continuous with left limits (c\`adl\`ag) with respect to $a$ (\cite{ks}, \cite{yor}). Therefore we can define a Lebesgue-Stieltjes integral $\int _0^{\infty}\phi(s)dL _i(s,a,\omega)$ for each $a$ for any Borel-measurable function $\phi$. In particular \begin{eqnarray} \int _0^{\infty}1_{R\setminus\{a\}}(X_i(s))dL_i(s,a,\omega)=0, \ \ a.s.\ \ i=1,2. \end{eqnarray} Furthermore, if $\phi$ is differentiable, then we have the following integration by parts formula \begin{eqnarray} &&\int _0^t\phi(s)dL_i(s,a,\omega)\nonumber\\ &=&\phi(t)L_i(t,a,\omega)-\int _0^t\phi^{\prime}(s)L_i(s,a,\omega){\rm d}s,\ \ a.s.\ \ i=1, 2. \end{eqnarray} Moreover, if $g(s,x_i,\omega)$ is measurable and bounded, by the occupation times formula (e.g.
see \cite{ks}, \cite{yor}), \begin{eqnarray*} \int _0^tg(s,X_i(s))d<\hskip-4pt M_i\hskip-4pt> _s=2\int _{-\infty}^{\infty}\int _0^tg(s,a)dL_i(s,a,\omega){\rm d}a.\ \ a.s.\ \ i=1,2 \end{eqnarray*} If $g(s,x_i)$ is differentiable in $s$, then using the integration by parts formula, we have \begin{eqnarray} &&\int _0^tg(s,X_i(s))d<\hskip-4pt M_i\hskip-4pt> _s\nonumber\\ &=&2\int _{-\infty}^{\infty}\int _0^tg(s,a)dL_i(s,a,\omega){\rm d}a\nonumber\\ &=&2\int _{-\infty}^{\infty}g(t,a)L_i(t,a,\omega){\rm d}a\nonumber\\ &&-2\int _{-\infty}^{\infty}\int _0^t{\partial \over \partial s}g(s,a)L_i(s,a,\omega){\rm d}s{\rm d}a, \ \ a.s., \end{eqnarray} for $i=1,2$. On the other hand, by the Tanaka formula \begin{eqnarray*} L_1(t,a)=(X_1(t)-a)^+-(X_1(0)-a)^+-\hat M_1(t,a)-\hat V_1(t,a), \end{eqnarray*} where $ \hat Z_1(t,a)=\int_0^t1_{\{X_1(s)>a\}}dZ_1(s),\ Z_1=M_1,V_1,X_1$. By a standard localizing argument, we may assume without loss of generality that there is a constant $N$ for which \begin{eqnarray*} \sup\limits_{0\leq s\leq t} |X_1(s)|\leq N, \ <\hskip-4pt M_1\hskip-4pt>_t\leq N, \ Var_tV_1\leq N, \end{eqnarray*} where $ Var_tV_1$ is the total variation of $V_1$ on $[0,t]$. From the property of local time (see Chapter $3$ in \cite{ks}), for any $\gamma\geq 1$, \begin{eqnarray*} &&E|\hat M_1(t,a)-\hat M_1(t,b)|^{2\gamma}\\ &\leq&C_{\gamma}E\Big|\int_0^t1_{\{a<X_1(s)\leq b\}}d<\hskip-4pt M_1\hskip-4pt>_s\Big|^\gamma\\ &\leq&C(b-a)^\gamma, \ a<b, \end{eqnarray*} where the constant $C$ depends on $\gamma$ and on the bound $N$. From Kolmogorov's tightness criterion (see \cite{kun}), we know that the sequence $Y_n(a):= {1\over n}\hat M_1(t,a)$, $n=1,2,\cdots$, is tight.
Moreover for any $a_1, a_2,\cdots, a_k$, \begin{eqnarray*} &&P(\sup\limits_{a_i}|{1\over n}\hat M_1(t,a_i)|\leq 1)\\ &=&P(|{1\over n}\hat M_1(t,a_1)|\leq 1, |{1\over n}\hat M_1(t,a_2)|\leq 1,\cdots, |{1\over n}\hat M_1(t,a_k)|\leq 1)\\ &\geq&1-\sum\limits_{i=1}^k P(|{1 \over n}\hat M_1(t,a_i)|>1)\\ &\geq&1-{1\over n^2}\sum\limits_{i=1}^k E[\hat M_1^2(t,a_i)]\\ &\geq&1-{k\over{n^2}}\max\limits_{i}C(N-a_i), \end{eqnarray*} so by the weak convergence theorem of random fields (see Theorem 1.4.5 in \cite{kun}), we have \begin{eqnarray*} \lim\limits_{n\to \infty}P(\sup\limits_{a}|\hat M_1(t,a)|\leq n)=1. \end{eqnarray*} Furthermore it is easy to see that \begin{eqnarray*} {1\over n} \hat V_1(t,a)\leq {1\over n} Var_tV_1\to 0, \ when\ n\to \infty, \end{eqnarray*} so it follows that \begin{eqnarray*} \lim\limits_{n\to \infty}P(\sup\limits_{a}|L_1(t,a)|\leq n)=1. \end{eqnarray*} Therefore in our localization argument, we can also assume $L_1(t,a)$ and $L_2(t,a)$ are bounded uniformly in $a$. \vskip5pt In the following we assume some conditions on $f:{R^+}\times R\times R \to R$: \vskip5pt {\it Condition (i)} $f(\cdot,\cdot,\cdot): {R^+}\times R\times R \to R$ is left continuous and locally bounded, and jointly continuous from the right in $t$ and from the left in $(x_1,x_2)$ at each point $(0,x_1,x_2)$; \vskip3pt {\it Condition (ii)} the left derivative ${\partial^-\over \partial t}f(t,x_1,x_2)$ exists at all points of $(0,\infty)\times R^2$, and ${\nabla}_1^-f(t,x_1,x_2)$, ${\nabla}_2^-f(t,x_1,x_2)$ exist at all points of $[0,\infty)\times R^2$ and are jointly left continuous and locally bounded; \vskip3pt {\it Condition (iii)} ${\nabla}_i^-f(t,x_1,x_2)$ is of locally bounded variation in $x_i$, $i=1,2$; \vskip3pt {\it Condition (iv)} ${\partial ^-\over \partial t } \nabla_i^- f(t,x_1,x_2)$ $(i=1,2)$ and $\nabla_1^-\nabla_2^- f(t,x_1,x_2)$ exist at all points of $(0,\infty)\times R^2$ and $[0,\infty)\times R^2$ respectively, and are left continuous and locally bounded; \vskip3pt { \it Condition
(v)} $\nabla_1^-\nabla_2^- f(t,x_1,x_2)$ is of locally bounded variation in $(t,x_1)$ and $(t,x_2)$ and $\nabla_1^-\nabla_2^- f(0,x_1,x_2)$ is of locally bounded variation in $x_1$ and $x_2$ respectively. \vskip 5pt From the assumption of $\nabla _1^-f$, we can use the one-dimensional generalized It$\hat {\rm o}$ formula (Theorem 1.1 in \cite{Zhao1}) \begin{eqnarray} &&\nabla _1^-f(t,a,X_2(t))-\nabla _1^-f(0,a,X_2(0))\nonumber\\ &=&\int _0^t{\partial ^-\over \partial s} \nabla _1^-f(s,a,X_2(s)){\rm d}s+ \int _0^t\nabla _1^-\nabla _2 ^-f(s,a,X_2(s))dX_2(s)\nonumber\\ &&+ \int _{-\infty}^{\infty }{L}_2(t,x_2){\rm d}_{x_2}\nabla _1^-\nabla _2 ^-f(t,a,x_2)\nonumber\\ &&-\int _{-\infty}^{+\infty}\int _0^{t}{L}_2(s,x_2){\bf \rm d}_{s,x_2}\nabla _1^-\nabla _2^-f(s,a,x_2).\ \ a.s. \end{eqnarray} Therefore $\nabla _1^-f(t,a,X_2(t))$ is a continuous semi-martingale, and can be decomposed as $\nabla _1^-f(t,a,X_2(t))=\nabla _1^-f(0,a,X_2(0))+ h(t,a) + v(t,a)$, where $h$ is a continuous local martingale and $v$ is a continuous process of locally bounded variation (in $t$). In fact $h(t,a)=\int _0^t\nabla _1^-\nabla _2 ^-f(s,a,X_2(s))dM_2(s)$. Define \begin{eqnarray*} F_s(a,b)&:=&<h(a),h(b)>_s\ =\ <\nabla_1^-f(a),\nabla_1^-f(b)>_s\nonumber\\&=&\int _0^s\nabla _1^-\nabla _2 ^-f(r,a,X_2(r))\nabla _1^-\nabla _2 ^-f(r,b,X_2(r)){\bf \rm d}<\hskip-4pt M_2\hskip-4pt>_r.\\ F_{s_{k}}^{s_{k+1}}(a,b)&:=&<h(a),h(b)>_{s_{k}}^{s_{k+1}}\ =\ <\nabla_1^-f(a),\nabla_1^-f(b)>_{s_{k}}^{s_{k+1}}\nonumber\\&=&\int _{s_{k}}^{s_{k+1}}\nabla _1^-\nabla _2 ^-f(r,a,X_2(r))\nabla _1^-\nabla _2 ^-f(r,b,X_2(r)){\bf \rm d}<\hskip-4pt M_2\hskip-4pt>_r. \end{eqnarray*} We need to prove that $h(s,a) \in {\cal V}_2$. To see this, note that since $\nabla _1^-\nabla _2 ^-f(t,x_1,x_2)$ is of locally bounded variation in $x_1$, for any compact set $G$ it is of bounded variation in $x_1$ for $x_1 \in G$.
Also on this set, let $\cal P$ be the partition on $R^2\times [0,t]$, ${\cal P}_i$ be a partition on $R$ $(i=1,2)$, ${\cal P}_3$ be a partition on $[0,t]$ such that ${\cal P} = {{\cal P}_1}\times {\cal P}_2\times {\cal P}_3$. Then we have: \begin{eqnarray*} &&{\rm Var} _{s,a,b} (F_{s}(a,b))\\&=&\sup_{\cal P}\sum_k\sum_i\sum_j\Big|F_{s_{k}}^{s_{k+1}}(a_{i+1},b_{j+1}) - F_{s_{k}}^{s_{k+1}}(a_{i+1},b_{j}) - F_{s_{k}}^{s_{k+1}}(a_{i},b_{j+1})\\ &&\hskip 2.5cm+F_{s_{k}}^{s_{k+1}}(a_{i},b_{j})\Big|\\ &=&\sup _{\cal P}\sum_k\sum_i\sum_j\Big|\int_{s_{k}}^{s_{k+1}}\nabla _1^-\nabla _2^-f(r,a_{i+1},X_2(r))\nabla _1^-\nabla _2^-f(r,b_{j+1},X_2(r)){\bf \rm d}<\hskip-4pt M_2\hskip-4pt>_r \\ &&- \int_{s_{k}}^{s_{k+1}}\nabla _1^-\nabla _2^-f(r,a_{i+1},X_2(r))\nabla _1^-\nabla _2^-f(r,b_{j},X_2(r)){\bf \rm d}<\hskip-4pt M_2\hskip-4pt>_r \\ &&- \int_{s_{k}}^{s_{k+1}}\nabla _1^-\nabla _2^-f(r,a_{i},X_2(r))\nabla _1^-\nabla _2^-f(r,b_{j+1},X_2(r)){\bf \rm d}<\hskip-4pt M_2\hskip-4pt>_r\\ &&+ \int_{s_{k}}^{s_{k+1}}\nabla _1^-\nabla _2^-f(r,a_{i},X_2(r))\nabla _1^-\nabla _2^-f(r,b_{j},X_2(r)){\bf \rm d}<\hskip-4pt M_2\hskip-4pt>_r \bigg| \\ &=&\sup _{\cal P}\sum_k\sum_i\sum_j\bigg|\int_{s_{k}}^{s_{k+1}}\bigg(\nabla _1^-\nabla _2^-f(r,a_{i+1},X_2(r)) - \nabla _1^-\nabla _2^-f(r,a_{i},X_2(r))\bigg)\\ &&\bigg( \nabla _1^-\nabla _2^-f(r,b_{j+1},X_2(r)) - \nabla _1^-\nabla _2^-f(r,b_{j},X_2(r))\bigg){\bf \rm d}<\hskip-4pt M_2\hskip-4pt>_r\bigg |\\ &\leq& \int_0^{s} \sup _{{\cal P}_1}\sum_i\Big|\nabla _1^-\nabla _2^-f(r,a_{i+1},X_2(r)) - \nabla _1^-\nabla _2^-f(r,a_{i},X_2(r))\Big|\\ &&\sup_{{\cal P}_2}\sum_j\Big| \nabla _1^-\nabla _2^-f(r,b_{j+1},X_2(r))- \nabla _1^-\nabla _2^-f(r,b_{j},X_2(r))\Big|{\bf \rm d}<\hskip-4pt M_2\hskip-4pt>_r \\ &=&\int_0^{s}\bigg({\rm Var}_{a}(\nabla _1^-\nabla _2^-f(r,a,X_2(r)))\bigg)^2{\bf \rm d}<\hskip-4pt M_2\hskip-4pt>_r <{\infty}. 
\end{eqnarray*} Therefore under the localizing assumption, $\int_{-\infty}^\infty\int_0^t L_1(s,a)d_{s,a} \nabla_1^-f(s,a,X_2(s))$ and $\int_{-\infty}^\infty\int_0^t L_2(s,a)d_{s,a} \nabla_2^-f(s,X_1(s),a)$ can be defined by Definition \ref{definition1}. A localizing argument implies they are semi-martingales. We will prove the following generalized It$\hat{\rm o}$'s formula in two-dimensional space. \begin{thm} \label{tom100} Under conditions (i)-(v), for any continuous two-dimensional semi-martingale $X(t)=(X_1(t), X_2(t))$, we have \begin{eqnarray} && f(t,X_1(t),X_2(t))\nonumber\\ &=&f(0,X_1(0),X_2(0))+\int _0^t{\partial ^-\over \partial s} f(s,X_1(s),X_2(s)){\rm d}s\nonumber\\ &&+\sum_{i=1}^2\int _0^t\nabla _i ^-f(s,X_1(s),X_2(s))dX_i(s)\nonumber\\ &&+\int _{-\infty}^{\infty }L _1(t,a){\rm d}_a\nabla _1 ^-f(t,a,X_2(t)) -\int _{-\infty}^{+\infty}\int _0^{t}L _1(s,a) {\bf \rm d}_{s,a}\nabla _1^-f(s,a,X_2(s))\nonumber\\ &&+\int _{-\infty}^{\infty }L _2(t,a){\rm d}_a\nabla _2 ^-f(t,X_1(t),a) -\int _{-\infty}^{+\infty}\int _0^{t}L _2(s,a) {\bf \rm d}_{s,a}\nabla _2^-f(s,X_1(s),a)\nonumber\\ &&+\int_0^t{\nabla _1^-}{\nabla _2^-}f(s,X_1(s),X_2(s))d<\hskip-4pt M_1,M_2\hskip-4pt>_s.\ \ a.s. \end{eqnarray} \end{thm} {\em Proof}: By a standard localization argument, we can assume $X_1(t)$, $X_2(t)$ and their quadratic variations $<\hskip-4pt X_1\hskip-4pt>_t$,$<\hskip-4pt X_2\hskip-4pt>_t$ and $<\hskip-4pt X_1,X_2\hskip-4pt>_t$ and the local times $L_1$, $L_2$ are bounded processes so that $f$, ${\partial ^-\over \partial t} f $, $\nabla_1^-f$, $\nabla_2^-f$, $\nabla_1^-\nabla_2^-f$, $Var_{x_1}\nabla_1^-f$, $Var_{x_2}\nabla_2^-f$, $Var_{x_1}\nabla_1^-\nabla_2^-f$, $Var_{x_2}\nabla_1^-\nabla_2^-f$ are bounded. 
Note that the left derivatives of $f$ agree with the generalized derivatives, so conditions (ii) and (iv) imply that $f$ is absolutely continuous in each variable, $\nabla_1^-f$ is absolutely continuous with respect to $t$ and $x_2$ respectively, and $\nabla_2^-f$ is absolutely continuous with respect to $t$ and $x_1$ respectively. We divide the proof into several steps: {\bf (A)} Define \begin{eqnarray}\label{smooth} \rho(x)=\cases {c{\rm e}^{{1\over (x-1)^2-1}}, {\rm \ if } \ x\in (0,2),\cr 0, \ \ \ \ \ \ \ \ \ \ \ \ \ {\rm otherwise.}} \end{eqnarray} Here $c$ is chosen such that $\int _0^2\rho(x)dx=1$. Take $\rho_n(x)=n\rho(nx)$ as mollifiers. Define \begin{eqnarray*} &&f_n(s,x_1,x_2)\\ &=&\int _{-\infty}^{+\infty}\int _{-\infty}^{+\infty}\int _{-\infty}^{+\infty}\rho_n(s-\tau)\rho_n(x_1-y)\rho_n(x_2-z)f(\tau,y,z)d\tau dydz, \ n\geq 1, \end{eqnarray*} where we set $f(\tau,y,z)=f(-\tau,y,z)$ if $\tau<0$. Then $f_n(s,x_1,x_2)$ are smooth and \begin{eqnarray}\label{truman41} &&f_n(s,x_1,x_2)\nonumber\\ &=&\int _0^2\int _0^2\int _0^2\rho(t)\rho(y)\rho(z)f(s-{t\over n},x_1-{y\over n},x_2-{z\over n})dtdydz, \ n\geq 1. \end{eqnarray} Because of the absolute continuity mentioned above, we can differentiate under the integral (\ref{truman41}) to see that ${\partial \over \partial t}f_n$, $\nabla_1f_n$, $\nabla_2f_n$, $\nabla_1\nabla_2f_n$, $Var_{x_1}\nabla_1f_n$, $Var_{x_2}\nabla_2f_n$, $Var_{x_1}\nabla_1\nabla_2f_n$ and $Var_{x_2}\nabla_1\nabla_2f_n$ are bounded.
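For intuition, the mollifier $\rho$ and the one-variable analogue of the smoothing (\ref{truman41}) can be checked numerically. The following Python sketch is an illustration of ours, not part of the proof; the step function, grid sizes and tolerances are arbitrary choices. It computes the normalizing constant ($c\approx 2.25$) and verifies that, because (\ref{truman41}) samples $f$ only at arguments $s-t/n\le s$, the smoothed values respect left limits, consistent with (\ref{frc1})--(\ref{frc5}).

```python
import numpy as np

def trapezoid(y, x):
    # plain trapezoidal rule, to avoid version-specific numpy names
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def bump(x):
    # the un-normalized kernel rho(x)/c, supported on (0, 2)
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)
    inside = (x > 0.0) & (x < 2.0)
    xi = x[inside]
    out[inside] = np.exp(1.0 / ((xi - 1.0) ** 2 - 1.0))
    return out

# normalizing constant c such that int_0^2 rho(x) dx = 1
xs = np.linspace(0.0, 2.0, 200001)
c = 1.0 / trapezoid(bump(xs), xs)

def smooth(f, x, n):
    # one-variable analogue of (truman41): int_0^2 rho(z) f(x - z/n) dz;
    # the argument x - z/n lies to the *left* of x, hence left limits
    z = np.linspace(0.0, 2.0, 20001)
    return trapezoid(c * bump(z) * f(x - z / n), z)

# a left-continuous step function with a jump at 0
f = lambda y: (np.asarray(y) > 0.0).astype(float)

# at the jump x = 0 the smoothed values equal the left limit f(0-) = 0;
# at the continuity point x = 1 they converge to f(1) = 1
left_vals = [smooth(f, 0.0, n) for n in (10, 100, 1000)]
cont_vals = [smooth(f, 1.0, n) for n in (10, 100, 1000)]
```

The one-sided sampling is the design point: a symmetric mollifier would average the two one-sided limits at a jump, whereas the shifted kernel recovers the left-continuous version of $f$, which is what the left derivatives in conditions (i)--(v) require.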
Furthermore using Lebesgue's dominated convergence theorem, one can prove that as $n\to \infty$, \begin{eqnarray} f_n(s,x_1,x_2)&\to & f(s,x_1,x_2),\ s\geq 0{\label{frc1}}\\ {\partial \over \partial s}f_n(s,x_1,x_2)&\to & {\partial ^- \over \partial s} f(s,x_1,x_2),\ s> 0{\label{frc2}}\\ \nabla_1f_n(s,x_1,x_2)&\to&\nabla _1^-f(s,x_1,x_2),\ s\geq 0{\label{frc3}}\\ \nabla_2f_n(s,x_1,x_2)&\to & \nabla _2^-f(s,x_1,x_2),\ s\geq 0{\label{frc4}}\\ \nabla_1\nabla _2f_n(s,x_1,x_2)&\to&\nabla_1^-\nabla _2^-f(s,x_1,x_2),\ s\geq 0{\label{frc5}} \end{eqnarray} for each $(x_1,x_2) \in R^2$. \vskip5pt \noindent {\bf (B)} It turns out that for any $g(t,x_1)$ which is continuous in $t$, $C^1$ in $x_1$ and has compact support, using the integration by parts formula and Lebesgue's dominated convergence theorem we see that \begin{eqnarray} &&\lim_{n\to +\infty}\int _{-\infty}^{+\infty} g(t,x_1)\Delta_1 f_n(t,x_1,X_2(t))dx_1\nonumber\\ &=& -\lim_{n\to +\infty}\int _{-\infty}^{\infty} \nabla g(t,x_1)\nabla _1f_n(t,x_1,X_2(t))dx_1 \nonumber\\ &=& -\int _{-\infty}^{\infty}\nabla g(t,x_1)\nabla _1^-f(t,x_1,X_2(t))dx_1. \end{eqnarray} Note that $\nabla _1^-f(t,x_1,x_2)$ is of locally bounded variation in $x_1$ and $g(t,x_1)$ has compact support in $x_1$, so \begin{eqnarray} &&-\int _{-\infty}^{+\infty} \nabla g(t,x_1)\nabla _1^- f(t,x_1,X_2(t))dx_1\nonumber\\ &=&\int _{-\infty}^{ +\infty} g(t,x_1){\rm d}_{x_1}\nabla _1^-f(t,x_1,X_2(t)). \end{eqnarray} Thus \begin{eqnarray}\label{fcr2} &&\lim_{n\to +\infty}\int _{-\infty}^{+\infty} g(t,x_1)\Delta _1f_n(t,x_1,X_2(t))dx_1\nonumber\\ &=&\int _{-\infty}^{\infty } g(t,x_1){\rm d}_{x_1}\nabla _1^-f(t,x_1,X_2(t)).
\end{eqnarray} \vskip 5pt \noindent {\bf (C)} If $g(s,x_1)$ is $C^2$ in $x_1$, $\Delta g(s,x_1)$ is bounded uniformly in $s$, ${\partial \over \partial s}\nabla g(s,x_1)$ is continuous in $s$ and has a compact support in $x_1$, and \\ $E\left[\int _0^t\int_{R^2}|g(s,x)g(s,y)||{\bf \rm d}_{x,y,s}<h(x),h(y)>_s|\right] < \infty$, where $h\in {\cal V}_2$, then applying It$\hat{\rm o}$'s formula, Lebesgue's dominated convergence theorem and the integration by parts formula, \begin{eqnarray*} &&\lim_{n\to +\infty}\Big(\int _{0}^{t}\int _{-\infty}^{+\infty}g(s,x_1){\partial \over \partial s}\Delta _1f_n(s,x_1,X_2(s))dx_1{\rm d}s \\ &&+ \int _{0}^{t}\int _{-\infty}^{+\infty}g(s,x_1)\nabla _2\Delta _1f_n(s,x_1,X_2(s))dx_1dX_2(s) \\ &&+{1\over2} \int _{0}^{t}\int _{- \infty}^{+\infty}g(s,x_1)\Delta_2\Delta _1f_n(s,x_1,X_2(s))dx_1d<\hskip-4pt M_2\hskip-4pt>_s\Big) \\ &=& -\lim_{n\to +\infty}\Big(\int _{0}^{t}\int _{-\infty}^{+\infty} \nabla g(s,x_1){\partial \over \partial s}\nabla _1f_n(s,x_1,X_2(s))dx_1{\rm d}s \\ &&+ \int _{0}^{t}\int _{-\infty}^{+\infty}\nabla g(s,x_1)\nabla _1\nabla_2f_n(s,x_1,X_2(s))dx_1dX_2(s) \\ &&+{1\over2}\int _{0}^{t}\int _{- \infty}^{+\infty}\nabla g(s,x_1)\Delta _2\nabla_1f_n(s,x_1,X_2(s))dx_1d<\hskip-4pt M_2\hskip-4pt>_s \Big)\\ &=&-\lim_{n\to +\infty}\int _{-\infty}^{\infty}\int _0^t \nabla g(s,x_1){\rm d}_s\nabla _1f_n(s,x_1,X_2(s))dx_1\\ &=&-\lim_{n\to +\infty}\Big(\int _{-\infty}^{\infty} \nabla g(s,x_1)\nabla _1f_n(s,x_1,X_2(s))\Big |_0^t dx_1\\ &&-\int _{0}^{t}\int _{-\infty}^{+\infty} {\partial \over \partial s}\nabla g(s,x_1)\nabla _1f_n(s,x_1,X_2(s))dx_1{\rm d}s\Big)\\ &=&-\int _{-\infty}^{\infty} \nabla g(s,x_1)\nabla _1^-f(s,x_1,X_2(s))\Big |_0^t dx_1\\ &&+\int _{0}^{t}\int _{-\infty}^{+\infty} {\partial \over \partial s}\nabla g(s,x_1)\nabla _1^-f(s,x_1,X_2(s))dx_1{\rm d}s\\ &=&-\int _{-\infty}^{+\infty}\int _0^t \nabla g(s,x_1) {\rm d}_{s} \nabla _1^-f(s,x_1,X_2(s))dx_1. 
\end{eqnarray*} It turns out by applying Proposition \ref{proposition1} that \begin{eqnarray}\label {fcr3} &&\lim_{n\to +\infty}\Big(\int _{0}^{t}\int _{-\infty}^{+\infty}g(s,x_1){\partial \over \partial s}\Delta _1f_n(s,x_1,X_2(s))dx_1{\rm d}s \nonumber\\ &&+ \int _{0}^{t}\int _{-\infty}^{+\infty}g(s,x_1)\nabla _2\Delta _1f_n(s,x_1,X_2(s))dx_1dX_2(s) \nonumber\\ &&+{1\over2} \int _{0}^{t}\int _{- \infty}^{+\infty}g(s,x_1)\Delta_2\Delta _1f_n(s,x_1,X_2(s))dx_1d<\hskip-4pt M_2\hskip-4pt>_s\Big) \nonumber\\ &=&\int _0^t \int _{-\infty}^{+\infty}g(s,x_1){\rm \bf d}_{s,x_1} \nabla _1^-f(s,x_1,X_2(s)). \end{eqnarray} \vskip5pt \noindent {\bf (D)} But any c$\grave{a}$dl$\grave{a}$g function with a compact support can be approximated by smooth functions with a compact support uniformly by the following standard smoothing procedure \begin{eqnarray*} g_m(t,x_1)=\int _{-\infty}^{\infty}\rho_m(y-x_1)g(t,y)dy=\int _0^2\rho (z)g(t,x_1+{z\over m})dz. \end{eqnarray*} Then we can prove that (\ref{fcr2}) also holds for any c$\grave{a}$dl$\grave{a}$g function $g(t,x_1)$ with a compact support in $x_1$. Moreover, if $g \in {\cal V}_3$, (\ref{fcr3}) also holds. To see (\ref{fcr2}), note that there is a compact set $G\subset R^1$ such that \begin{eqnarray*} &\max \limits _{x_1\in G}|g_m(t,x_1)-g(t,x_1)|\to 0 & {\rm as } \ \ m\to +\infty,\\ &g_m(t,x_1)=g(t,x_1) =0 & {\rm for } \ \ x_1\notin G. \end{eqnarray*} Note \begin{eqnarray}\label{elworthy1} &&\int _{-\infty}^{+\infty}g(t,x_1)\Delta_1 f_n(t,x_1,X_2(t))dx_1\nonumber\\ &=& \int _{-\infty}^{+\infty}g_m(t,x_1)\Delta_1 f_n(t,x_1,X_2(t))dx_1\nonumber\\ &&+\int _{-\infty}^{+\infty}(g(t,x_1)-g_m(t,x_1))\Delta_1 f_n(t,x_1,X_2(t))dx_1. 
\end{eqnarray} It is easy to see from (\ref{fcr2}) and Lebesgue's dominated convergence theorem that \begin{eqnarray}\label{elworthy3} && \lim\limits_{m\to \infty}\lim\limits _{n\to \infty} \int _{-\infty}^{+\infty}g_m(t,x_1)\Delta_1 f_n(t,x_1,X_2(t))dx_1\nonumber\\ &=&\lim\limits _{m\to \infty}\int _{-\infty}^{\infty } g_m(t,x_1){\bf \rm d}_{x_1}\nabla_1^-f(t,x_1,X_2(t))\nonumber\\ &=&\int _{-\infty}^{\infty } g(t,x_1){\bf \rm d}_{x_1}\nabla_1^-f(t,x_1,X_2(t)). \end{eqnarray} Moreover, \begin{eqnarray}\label{elworthy6} &&|\int _{-\infty}^{+\infty}\Big(g(t,x_1)-g_m(t,x_1)\Big)\Delta _1f_n(t,x_1,X_2(t))dx_1|\nonumber\\ &=& |\int _{-\infty}^{+\infty}\Big(g(t,x_1)-g_m(t,x_1)\Big)d_{x_1}\nabla_1f_n(t,x_1,X_2(t))|\nonumber\\ &\leq& \Big(\max\limits _{x_1\in {G}} |g(t,x_1)-g_m(t,x_1)|\Big)Var_{x_1\in G}\nabla _1f_n(t,x_1,X_2(t)). \end{eqnarray} But, \begin{eqnarray*} \lim\limits_{m\to \infty}\limsup\limits_{n\to \infty} \Big(\max\limits _{x_1\in {G}} |g(t,x_1)-g_m(t,x_1)|\Big)Var_{x_1\in G}\nabla _1f_n(t,x_1,X_2(t))=0. \end{eqnarray*} So inequality (\ref{elworthy6}) leads to \begin{eqnarray}\label{elworthy7} &&\lim\limits_{m\to \infty}\limsup\limits_{n\to \infty} |\int _{-\infty}^{+\infty}\Big(g(t,x_1)-g_m(t,x_1)\Big)\Delta _1f_n(t,x_1,X_2(t))dx_1|\nonumber\\ &=&0. \end{eqnarray} Now we use (\ref{elworthy1}), (\ref{elworthy3}) and (\ref{elworthy7}) to obtain \begin{eqnarray*} && \limsup\limits_{n\to \infty}\int _{-\infty}^{+\infty}g(t,x_1)\Delta_1 f_n(t,x_1,X_2(t))dx_1\nonumber\\ &=& \lim\limits_{m\to \infty}\limsup\limits_{n\to \infty}\int _{-\infty}^{+\infty}g_m(t,x_1)\Delta _1f_n(t,x_1,X_2(t))dx_1\nonumber\\ && +\lim\limits_{m\to \infty}\limsup\limits_{n\to \infty}\int _{-\infty}^{+\infty}\Big(g(t,x_1)-g_m(t,x_1)\Big)\Delta _1f_n(t,x_1,X_2(t))dx_1\nonumber \\ &=& \int _{-\infty}^{\infty } g(t,x_1){\rm d}_{x_1}\nabla _1^-f(t,x_1,X_2(t)).
\end{eqnarray*} Similarly we also have \begin{eqnarray}\label{elworthy7*} &&\liminf\limits_{n\to \infty}\int _{-\infty}^{+\infty}g(t,x_1)\Delta _1f_n(t,x_1,X_2(t))dx_1\nonumber\\ &=& \int _{-\infty}^{\infty } g(t,x_1){\rm d}_{x_1}\nabla _1^-f(t,x_1,X_2(t)). \end{eqnarray} So (\ref{fcr2}) holds for a c$\grave{a}$dl$\grave{a}$g function $g$ with a compact support in $x_1$. Now we prove that (\ref{fcr3}) also holds for a c$\grave{a}$dl$\grave{a}$g function $g\in {\cal V}_3$. Obviously, \begin{eqnarray*} &&\int _{0}^{t}\int _{-\infty}^{+\infty}g(s,x_1){\partial \over \partial s}\Delta _1f_n(s,x_1,X_2(s))dx_1{\rm d}s \\ &&+ \int _{0}^{t}\int _{-\infty}^{+\infty}g(s,x_1)\nabla _2\Delta _1f_n(s,x_1,X_2(s))dx_1dX_2(s) \\ &&+{1\over2} \int _{0}^{t}\int _{- \infty}^{+\infty}g(s,x_1)\Delta_2\Delta _1f_n(s,x_1,X_2(s))dx_1d<\hskip-4pt M_2\hskip-4pt>_s \\ &=&\int _0^t \int _{-\infty}^{+\infty}g(s,x_1){\rm \bf d}_{s,x_1} \nabla _1f_n(s,x_1,X_2(s)). \end{eqnarray*} Define \begin{eqnarray*} g_m(s,x_1)=\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\rho_m(y-x_1)\rho_m(\tau-s)g(\tau,y)d\tau dy. \end{eqnarray*} Then there is a compact $G\subset R^1$ such that \begin{eqnarray*} &\max \limits _{0\leq s \leq t, x_1\in G}|g_m(s,x_1)-g(s,x_1)|\to 0 & {\rm as } \ \ m\to +\infty,\\ &g_m(s,x_1)=g(s,x_1) =0 & {\rm for } \ \ x_1\notin G. \end{eqnarray*} Then it is trivial to see \begin{eqnarray*} &&\int _0^t \int _{-\infty}^{+\infty}g(s,x_1){\rm \bf d}_{s,x_1} \nabla _1f_n(s,x_1,X_2(s))\\ &=&\int _0^t \int _{-\infty}^{+\infty}g_m(s,x_1){\rm \bf d}_{s,x_1} \nabla _1f_n(s,x_1,X_2(s))\\ &&+\int _0^t \int _{-\infty}^{+\infty}(g(s,x_1)-g_m(s,x_1)){\rm \bf d}_{s,x_1} \nabla _1f_n(s,x_1,X_2(s)). 
\end{eqnarray*} But from (\ref{fcr3}), we can see that \begin{eqnarray} &&\lim\limits_{m\to\infty}\lim\limits_{n\to\infty}\int _0^t \int _{-\infty}^{+\infty}g_m(s,x_1){\rm \bf d}_{s,x_1} \nabla _1f_n(s,x_1,X_2(s))\nonumber\\ &=&\lim\limits_{m\to\infty}\int _0^t \int _{-\infty}^{+\infty}g_m(s,x_1){\rm \bf d}_{s,x_1} \nabla _1^-f(s,x_1,X_2(s))\nonumber\\ &=&\int _0^t \int _{-\infty}^{+\infty}g(s,x_1){\rm \bf d}_{s,x_1} \nabla _1^-f(s,x_1,X_2(s)). \ \ \ \ \ (limit\ in\ {\cal M}_2) \end{eqnarray} The last limit holds because of the following: \begin{eqnarray*} &&E\Big[\int _0^t\int _{-\infty}^{+\infty}(g_m(s,x_1)-g(s,x_1)){\rm \bf d}_{s,x_1} \nabla _1^-f(s,x_1,X_2(s))\Big]^2\\ &=&E\Big[\int_0^t\int_{-\infty}^{+\infty}(g_m-g)(s,a)(g_m-g)(s,b) {\rm\bf d}_{a,b} {\rm d}_{s} <\nabla _1^-f(a),\nabla _1^-f(b)>_s\Big]\\ &=&E\Big[\int_0^t\int_{-\infty}^{+\infty}(g_m-g)(s,a)(g_m-g)(s,b)\\ &&\ \ \ \ \ {\rm \bf d}_{a,b} \nabla_1^-\nabla _2^-f(s,a,X_2(s))\nabla_1^-\nabla _2^-f(s,b,X_2(s))\Big] {\rm d}<\hskip-4pt M_2\hskip-4pt>_s\\ &=&\int_0^tE\Big[\int_{-\infty}^{+\infty}(g_m-g)(s,a) {\rm d}_{a} \nabla_1^-\nabla _2^-f(s,a,X_2(s))\Big]^2 {\rm d}<\hskip-4pt M_2\hskip-4pt>_s \\ &\to& 0,\ as \ m\to {\infty}.\\ \end{eqnarray*} On the other hand, \begin{eqnarray}\label{elworthy7**} &&\lim\limits_{m\to\infty}\lim\limits_{n\to\infty}\int _0^t \int _{-\infty}^{+\infty}(g(s,x_1)-g_m(s,x_1)){\rm \bf d}_{s,x_1} \nabla _1f_n(s,x_1,X_2(s)) \nonumber\\ &=&0. \ \ \ \ \ \ \ \ \ \ (limit\ in\ {\cal M}_2) \end{eqnarray} In fact, \begin{eqnarray*} &&E\Big[\int _0^t\int _{-\infty}^{+\infty}(g(s,x_1)-g_m(s,x_1)){\rm \bf d}_{s,x_1} \nabla _1f_n(s,x_1,X_2(s))\Big]^2\\ &=&\int_0^tE\Big[\int_{-\infty}^{+\infty}(g-g_m)(s,a) {\rm d}_{a} \nabla_1\nabla _2f_n(s,a,X_2(s))\Big]^2 d<\hskip-4pt M_2\hskip-4pt>_s. \end{eqnarray*} Since $\nabla_1\nabla _2f_n(s,a,X_2(s))$ is of bounded variation in $a$, we can use an argument similar to the one in the proof of (\ref{elworthy7}) and (\ref{elworthy7*}) to prove (\ref{elworthy7**}).
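The mechanism behind (\ref{elworthy6}) and (\ref{elworthy7**}) is that a Stieltjes integral against a bounded-variation integrator is dominated by $\max|g-g_m|$ times the total variation, so uniform approximation of $g$ forces the error term to vanish. The following Python sketch is our own numerical illustration; the test function $g$, the bounded-variation stand-in $F$ for $x_1\mapsto\nabla_1^-f$, and the grid are arbitrary choices, not objects from the paper.

```python
import numpy as np

# grid and a continuous test function g with compact support in (0, 2)
x = np.linspace(-1.0, 3.0, 4001)
dx = x[1] - x[0]
g = np.where((x > 0.0) & (x < 2.0), np.sin(np.pi * x) ** 2, 0.0)

# F plays the role of the bounded-variation integrator: a smooth part
# plus one jump, discretized on the grid
F = np.tanh(x) + np.where(x >= 1.0, 0.5, 0.0)
dF = np.diff(F)
total_var = float(np.sum(np.abs(dF)))      # Var of F on the grid

def stieltjes(h):
    # forward Riemann-Stieltjes sum  sum_i h(x_i)(F(x_{i+1}) - F(x_i))
    return float(np.sum(h[:-1] * dF))

def smooth(h, m):
    # right-sided moving average over a window of width ~ 1/m,
    # mimicking g_m(x1) = int_0^2 rho(z) g(x1 + z/m) dz
    w = max(1, int(round(1.0 / (m * dx))))
    padded = np.concatenate([h, np.zeros(w)])
    return np.array([padded[i:i + w].mean() for i in range(len(h))])

# |int (g - g_m) dF| <= max|g - g_m| * Var(F); both sides shrink as m grows
lhs, bounds = [], []
for m in (4, 16, 64):
    gm = smooth(g, m)
    lhs.append(abs(stieltjes(g - gm)))
    bounds.append(float(np.max(np.abs(g - gm))) * total_var)
```

Since $g$ here is uniformly continuous with compact support, $\max|g-g_m|\to 0$, and the bound decreases with $m$ exactly as in the limit preceding (\ref{elworthy7}).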
\vskip5pt \noindent {\bf (E)} Now we apply the multi-dimensional It${\hat {\rm o}}$ formula to the function \\ $f_n(s,X_1(s),X_2(s))$; then a.s. \begin{eqnarray}\label{zhao11} &&f_n(t,X_1(t),X_2(t))-f_n(0,X_1(0),X_2(0))\nonumber\\&=&\int _0^t{\partial \over \partial s} f_n(s,X_1(s),X_2(s)){\rm d}s +\sum_{i=1}^{2}\int _0^t\nabla_i f_n(s,X_1(s),X_2(s))dX_i(s)\nonumber\\ &&+{1\over 2}\int _0^t\Delta _1f_n(s,X_1(s),X_2(s))d<\hskip-4pt M_1\hskip-4pt>_s\nonumber\\ &&+ {1\over 2}\int _0^t\Delta _2f_n(s,X_1(s),X_2(s))d<\hskip-4pt M_2\hskip-4pt>_s\nonumber\\ &&+\int _0^t\nabla _1\nabla _2f_n(s,X_1(s),X_2(s))d<\hskip-4pt M_1,M_2\hskip-4pt>_s. \end{eqnarray} As $n\to \infty$, it is easy to see from Lebesgue's dominated convergence theorem and (\ref{frc1}), (\ref{frc2}), (\ref{frc3}), (\ref{frc4}), (\ref{frc5}) that, for $i=1,2$, \begin{eqnarray*} f_n(t,X_1(t),X_2(t))-f_n(0,X_1(0),X_2(0))\to f(t,X_1(t),X_2(t))-f(0,X_1(0),X_2(0)),\ \ a.s. \end{eqnarray*} \begin{eqnarray*} \int _0^t{\partial \over \partial s} f_n(s,X_1(s),X_2(s)){\rm d}s\to \int _0^t{\partial ^- \over \partial s} f(s,X_1(s),X_2(s)){\rm d}s,\ \ a.s. \end{eqnarray*} \begin{eqnarray*} \int _0^t \nabla _if_n(s,X_1(s),X_2(s))dV_i(s)\to \int _0^t\nabla _i^-f(s,X_1(s),X_2(s))dV_i(s),\ \ a.s. \end{eqnarray*} \begin{eqnarray*} && \int _0^t \nabla _1\nabla _2f_n(s,X_1(s),X_2(s))d<\hskip-4pt M_1,M_2\hskip-4pt>_s\\ &\to& \int _0^t\nabla _1^-\nabla _2^- f(s,X_1(s),X_2(s))d<\hskip-4pt M_1,M_2\hskip-4pt>_s.\ \ a.s. \end{eqnarray*} and \begin{eqnarray*} &&E\int _0^t(\nabla _if_n(s,X_1(s),X_2(s)))^2d<\hskip-4pt M_i\hskip-4pt>_s\\ &\to& E \int _0^t(\nabla _i ^-f(s,X_1(s),X_2(s)))^2d<\hskip-4pt M_i\hskip-4pt>_s. \end{eqnarray*} Therefore in ${\cal M}_2$, \begin{eqnarray*} \int _0^t \nabla _if_n(s,X_1(s),X_2(s))dM_i(s)\to \int _0^t\nabla _i^-f(s,X_1(s),X_2(s))dM_i(s), (i=1,2).
\end{eqnarray*} To see the convergence of ${1\over 2}\int _0^{t}\Delta _1f_n(s,X_1(s),X_2(s))d<\hskip-4pt M_1\hskip-4pt>_s$, we recall the well-known result that the local time $L _1(s,a)$ is continuous in $s$, c$\grave{a}$dl$\grave{a}$g with respect to $a$, and has compact support in $a$ for each $s$ (\cite{yor}, \cite{ks}). Since $L _1(s,a)$ is an increasing function of $s$ for each $a$, if $G\subset R^1$ is the support of $L _1(s,a)$, then $L _1(s,a)=0$ for all $a\notin G$ and $s\le t$. Now we use the occupation times formula, the integration by parts formula and (\ref{fcr2}), (\ref{fcr3}) for the case when $g$ is c$\grave{a}$dl$\grave{a}$g with compact support, \begin{eqnarray*} && {1\over 2}\int _0^{t}\Delta _1f_n(s,X_1(s),X_2(s))d<\hskip-4pt M_1\hskip-4pt>_s\\ &=&\int _{-\infty}^{+\infty}\int _0^{t}\Delta _1f_n(s,a,X_2(s)){\rm d}_sL _1(s,a){\rm d}a\nonumber\\ &=& \int _{-\infty}^{+\infty}\Delta _1f_n({t},a,X_2({t})){L_1(t,a)}{\rm d}a\\ &&-\int _{-\infty}^{+\infty}\bigg[\int _0^{t}{{\rm d}\over {\rm d}s} \Delta _1f_n(s,a,X_2(s))L _1(s,a){\rm d}s\\ &&+\int _0^{t}\nabla _2\Delta _1f_n(s,a,X_2(s))L _1(s,a)dX_2(s)\\ &&+ {1\over 2}\int _0^{t}\Delta _2\Delta _1f_n(s,a,X_2(s))L _1(s,a)d<\hskip-4pt M_2\hskip-4pt>_s\bigg]{\rm d}a\nonumber\\ &\to & \int _{-\infty}^{\infty }L _1({t},a){\rm d}_a\nabla _1^-f({t},a,X_2({t}))\\ &&-\int _{-\infty}^{+\infty}\int _0^{t}L _1(s,a) {\bf \rm d}_{s,a}\nabla _1^-f(s,a,X_2(s)) \end{eqnarray*} as $n\to \infty$. For the term ${1\over 2}\int _0^{t}\Delta _2f_n(s,X_1(s),X_2(s))d<\hskip-4pt M_2\hskip-4pt>_s$, the same method gives a similar result. This proves the desired formula.
$ \diamond$\\ \vskip 5pt The above smoothing procedure can be used to prove that if $f:R^+\times R^2\to R$ is left continuous and locally bounded, $C^1$ in $x_1$ and $x_2$, and the left derivatives ${\partial^-\over \partial t}f(t,x_1,x_2)$, ${{\partial^{2-}}\over {\partial{x_i}\partial{x_j}}} f(t,x_1,x_2)$, $(i,j=1,2)$ exist at all points of $(0,\infty)\times R^2$ and $[0,\infty)\times R^2$ respectively and are locally bounded and left continuous, then \begin{eqnarray}\label{cfr1} && f(t,X(t))-f(0,X(0))\nonumber\\ &=&\int_0^t{\partial^-\over \partial s}f(s,X_1(s),X_2(s))ds +\sum_{i=1}^2\int_0^t\nabla_i f(s,X_1(s),X_2(s))dX_i(s)\nonumber\\ &&+{1\over 2}\sum_{i,j=1}^2 \int_0^t {{\partial^{2-}}\over {\partial{x_i}\partial{x_j}}} f(s,X_1(s),X_2(s))d<\hskip-4pt X_i,X_j\hskip-4pt>_s. \end{eqnarray} This can be seen from the convergence in the proof of Theorem \ref{tom100} and the fact that ${{\partial^{2}}\over {\partial{x_i}\partial{x_j}}} f_n(s,x_1,x_2) \to {{\partial^{2-}}\over {\partial{x_i}\partial{x_j}}} f(s,x_1,x_2)$ under the stronger condition on ${{\partial^{2-}}\over {\partial{x_i}\partial{x_j}}}f$. The next theorem is an easy consequence of the methods of the proofs of Theorem \ref{tom100} and (\ref{cfr1}). \begin{thm}\label{cfr11} Let $f:R^+ \times R^2\to R$ satisfy conditions (i) and (ii), and $f(t,x_1,x_2)=f_h(t,x_1,x_2)+f_v(t,x_1,x_2)$. Assume $f_h$ is $C^1$ in $x_1$ and $x_2$ and the left derivatives ${{\partial^{2-}}\over {\partial{x_i}\partial{x_j}}}f_h(s,x_1,x_2)$ $(i,j=1,2)$ exist and are left continuous and locally bounded; $f_v$ satisfies conditions (iii)-(v).
Then \begin{eqnarray}\label{f6} &&f(t,X_1(t),X_2(t))-f(0,X_1(0),X_2(0))\nonumber\\ &=&\int _0^t{\partial ^-\over \partial s} f(s,X_1(s),X_2(s)){\rm d}s +\sum_{i=1}^2\int _0^t\nabla _i ^-f(s,X_1(s),X_2(s))dX_i(s)\nonumber\\ &&+{1\over 2}\sum_{i=1}^2\int_0^t \Delta_i ^-f_h(s,X_1(s),X_2(s))d<\hskip-4pt X_i \hskip-4pt>_s\nonumber\\ &&+\int _{-\infty}^{\infty }L _1(t,a){\rm d}_a\nabla _1 ^-f_v(t,a,X_2(t)) -\int _{-\infty}^{+\infty}\int _0^{t}L _1(s,a) {\bf \rm d}_{s,a}\nabla _1^-f_v(s,a,X_2(s))\nonumber\\ &&+\int _{-\infty}^{\infty }L _2(t,a){\rm d}_a\nabla _2 ^-f_v(t,X_1(t),a) -\int _{-\infty}^{+\infty}\int _0^{t}L _2(s,a) {\bf \rm d}_{s,a}\nabla _2^-f_v(s,X_1(s),a)\nonumber\\ &&+\int_0^t{\nabla _1^-}{\nabla _2^-}f(s,X_1(s),X_2(s))d<\hskip-4pt M_1,M_2\hskip-4pt>_s.\ \ a.s. \end{eqnarray} \end{thm} Now assume there exists a curve $x_2=b(x_1)$ such that the left derivative ${{d^-}\over{dx_1}}b(x_1)$ is locally bounded and $b(X_1(t))$ is a semi-martingale. Let $x_2^*=x_2-b(x_1)$ and $g(t,x_1,x_2^*)=f(t,x_1,x_2^*+b(x_1))$. We can obtain a generalized It${\hat {\rm o}}$ formula in terms of $X_1(s)$ and $X_2^*(s)$ similar to (\ref{f6}). Let $L_2^*(t,a)$ be the local time of $X_2^*(t)$. In particular, the following result, obtained in \cite{peskir3}, can be derived from Theorem \ref{cfr11} as a special case. \begin{cor} Assume $f:R^2\to R$ is left continuous and locally bounded and there exists a continuous curve $x_2=b(x_1)$ such that (i) the left derivative ${{d^-}\over{dx_1}}b(x_1)$ exists and is locally bounded and $b(X_1(t))$ is a semi-martingale; (ii) $f(x_1,x_2)$ is twice differentiable with continuous second order derivatives ${{\partial^{2}}\over {\partial{x_i}\partial{x_j}}}f$ $(i,j=1,2)$ in regions $x_2\leq b(x_1)$ and $x_2\geq b(x_1)$ respectively.
Then for any two-dimensional continuous semi-martingale $(X_1(t),X_2(t))$, \begin{eqnarray}\label{f7} && f(X_1(t),X_2(t))-f(X_1(0),X_2(0))\nonumber\\ &=&\sum_{i=1}^2\int _0^t\nabla _i ^-f(X_1(s),X_2(s))dX_i(s)+{1\over 2}\sum_{i=1}^2\int_0^t \Delta_i^-f(X_1(s),X_2(s))d<\hskip-4pt X_i\hskip-4pt>_s\nonumber\\ &&+\int_0^t \Big[\nabla_2^- f(X_1(s),b(X_1(s))+)-\nabla_2^- f(X_1(s),b(X_1(s))-) \Big]d L _2^*(s,0)\nonumber\\ &&+\int_0^t{\nabla _1^-}{\nabla _2^-}f(X_1(s),X_2(s))d<\hskip-4pt M_1,M_2\hskip-4pt>_s.\ \ a.s. \end{eqnarray} \end{cor} {\bf Proof}: Formula (\ref{f7}) can be read from (\ref{f6}) by considering \begin{eqnarray*} f_h(x_1,x_2)&=&f(x_1,x_2)+\int_0^{x_1}(\nabla_2 f(y,b(y)-)-\nabla_2 f(y,b(y)+))(x_2-b(y))^+dy,\\ f_v(x_1,x_2)&=&\int_0^{x_1}(\nabla_2 f(y,b(y)+)-\nabla_2 f(y,b(y)-))(x_2-b(y))^+dy, \end{eqnarray*} and the integration by parts formula. To verify the conditions of Theorem \ref{cfr11} on $f_v$, first note that \begin{eqnarray*} \nabla_1^-f_v(x_1,x_2)&=&(\nabla_2 f(x_1,b(x_1)+)-\nabla_2 f(x_1,b(x_1)-))(x_2-b(x_1))^+,\\ \nabla_2^-f_v(x_1,x_2)&=&\int_0^{x_1}(\nabla_2 f(y,b(y)+)-\nabla_2 f(y,b(y)-))1_{\{x_2> b(y)\}}dy,\\ \nabla_1^-\nabla_2^-f_v(x_1,x_2)&=&(\nabla_2 f(x_1,b(x_1)+)-\nabla_2 f(x_1,b(x_1)-))1_{\{x_2 > b(x_1)\}}. \end{eqnarray*} It is straightforward to prove that $\nabla_1^-\nabla_2^-f_v(x_1,x_2+b(x_1))$ is of locally bounded variation in $x_2$. To see that $\nabla_2^-f_v(x_1,x_2+b(x_1))$ is of locally bounded variation in $x_2$, note that for any partition $-N=x_2^0<x_2^1<\cdots<x_2^n=N$, \begin{eqnarray*} &&\sum\limits_{i=0}^{n-1}|\nabla_2^-f_v(x_1,x_2^{i+1}+b(x_1))-\nabla_2^-f_v(x_1,x_2^{i}+b(x_1))|\\ &\leq&\sum\limits_{i=0}^{n-1}\int_0^{x_1}|\nabla_2f(y,b(y)+)-\nabla_2f(y,b(y)-)|1_{\{x_2^i+b(x_1)\leq b(y)\leq x_2^{i+1}+b(x_1)\}} dy\\ &\leq&\int_0^{x_1}|\nabla_2f(y,b(y)+)-\nabla_2f(y,b(y)-)|1_{\{-N+b(x_1)\leq b(y)\leq N+b(x_1)\}} dy\\ &<&\infty.
\end{eqnarray*} In order to prove that $\nabla_1^-\nabla_2^-f_v(x_1,x_2+b(x_1))=(\nabla_2 f(x_1,b(x_1)+)-\nabla_2 f(x_1,b(x_1)-))1_{\{x_2 > 0\}}$ and $\nabla_1^-f_v(x_1,x_2+b(x_1))=(\nabla_2 f(x_1,b(x_1)+)-\nabla_2 f(x_1,b(x_1)-))x_2^+$ are of locally bounded variation in $x_1$, we only need to prove that $\nabla_2 f(x_1,b(x_1)+)-\nabla_2 f(x_1,b(x_1)-)$ is of locally bounded variation in $x_1$. This is true, because for $x_2^*>0$ \begin{eqnarray*} &&{D^-\over {D{x_1}}} \nabla_2 f(x_1,x_2^*+b(x_1))\\ &=&\nabla_1\nabla_2 f(x_1,x_2^*+b(x_1))+\nabla_2\nabla_2 f(x_1,x_2^*+b(x_1)){d^-\over {dx_1}}b(x_1). \end{eqnarray*} So as $x_2^*\to 0+$, \begin{eqnarray*} &&{D^-\over {D{x_1}}} \nabla_2 f(x_1,x_2^*+b(x_1))\\ &\to& \nabla_1\nabla_2 f(x_1, b(x_1)+)+\nabla_2^2f(x_1, b(x_1)+){d^-\over {dx_1}}b(x_1)\\ &=&{d ^-\over {d{x_1}}} \nabla_2 f(x_1,b(x_1)+). \end{eqnarray*} It follows that $\nabla_2f(x_1,b(x_1)+)$ is of locally bounded variation. Similarly, one can prove that $\nabla_2f(x_1,b(x_1)-)$ is also of locally bounded variation. $ \diamond$\\ {\bf Acknowledgement} \vskip5pt We would like to acknowledge partial financial support for this project from the EPSRC research grants GR/R69518 and GR/R93582. CF would like to thank the Loughborough University development fund for its financial support. It is our great pleasure to thank N. Eisenbaum, D. Elworthy, Y. Liu, Z. Ma, S. Peng, G. Peskir, A. Truman, J. A. Yan, M. Yor and W. Zheng for helpful discussions. We would like to thank G. Peskir and N. Eisenbaum for the invitation to the mini-workshop on local time-space calculus with applications in Oberwolfach in May 2004, where the results of this paper were announced; S. Peng for the invitation to speak at the 9th Chinese mathematics summer school (Weihai) in 2004; F. Gong for the invitation to the workshop on stochastic analysis at the Chinese Academy of Sciences in 2004; and M. Chen for the invitation to the workshop on stochastic processes and related topics in Beijing in 2004.
We would like to thank the referee for carefully reading the manuscript, for pointing out an error in an early version of the paper, and for other useful suggestions. \end{document}
\begin{document} \title{Detecting genuine multipartite entanglement in steering scenarios} \author{C. Jebaratnam} \email{[email protected];[email protected]} \affiliation{Indian Institute of Science Education and Research Mohali, Sector-81, S.A.S. Nagar, Manauli 140306, India.} \affiliation{Department of Physics, Indian Institute of Technology Madras, Chennai 600036, India} \date{\today} \begin{abstract} Einstein-Podolsky-Rosen (EPR) steering is a form of quantum nonlocality which is intermediate between entanglement and Bell nonlocality. EPR steering is a resource for quantum key distribution that is device independent on one side only, in that it certifies bipartite entanglement when one party's device is uncharacterized while the other party's device is fully characterized. In this work, we introduce two types of genuine tripartite EPR-steering, and derive two steering inequalities to detect them. In a semi-device-independent scenario where only the dimensions of two parties are assumed, the correlations which violate one of these inequalities also certify genuine tripartite entanglement. It is known that Alice can demonstrate bipartite EPR-steering to Bob if and only if her measurement settings are incompatible. We demonstrate that quantum correlations can also detect tripartite EPR-steering from Alice to Bob and Charlie, even if Charlie's measurement settings are compatible. \end{abstract} \pacs{03.65.Ud, 03.67.Mn, 03.65.Ta} \maketitle \section{Introduction} Entanglement, Einstein-Podolsky-Rosen (EPR) steering, and Bell nonlocality are three inequivalent forms of nonlocality in quantum physics \cite{WJD,enteprnl}. The observation of Bell nonlocality \cite{bell64, BNL} implies the presence of entanglement without the need for a detailed characterization of the measured systems or of the measurement operators.
For this reason, Bell nonlocality has been used as a resource for device-independent (DI) quantum information processing \cite{DQKD,Pironioetal}. EPR-steering is a weaker form of quantum nonlocality \cite{Schrodinger,WJD} and is witnessed by violation of a steering inequality \cite{CJWR,EPRsi,CFFW}. EPR-steering is a resource for one-side-device-independent ($1$SDI) quantum key distribution \cite{SDIQKD}. This follows from the fact that the observation of EPR-steering in bipartite systems certifies entanglement with measurements on one side characterized and the other side uncharacterized. The observation of Bell nonlocality or EPR-steering also implies the presence of another nonclassical feature, which is incompatibility of measurements \cite{BNL,IncomN}. The notion of commutativity does not properly capture the incompatibility of generalized measurements, i.e., positive-operator-valued measurements (POVMs). The notion of joint measurability \cite{nJM}, which is inequivalent to commutativity, is a natural choice to capture the incompatibility of POVMs. Recently, it has been shown that a set of POVMs can be used to demonstrate bipartite EPR-steering if and only if (iff) it is nonjointly measurable \cite{IncomN, JMguhne,Uola}. In other words, Alice's measurement settings are incompatible if she can demonstrate EPR-steering to Bob. On the other hand, Alice can ``always find a quantum state'' to demonstrate EPR-steering to Bob if her measurement settings are incompatible. In the multipartite scenario, various approaches to DI verification of genuine entanglement are known \cite{SI,multiSI,multiSI1,Banceletal1, DImulti,DImulti1,DImulti2,selftest}. Violation of the Bell-type inequalities which detect genuine nonlocality \cite{SI,multiSI,multiSI1,Banceletal1} belongs to one of these approaches. 
Verification of multipartite entanglement in partially DI scenarios, where some of the parties' measurements are trusted, has also been characterized \cite{SDMulti,Gsteer,SDMulti1,SDMulti2}. In Ref. \cite{Gsteer}, genuine multipartite forms of EPR-steering have been developed and criteria to detect them have been derived. The violation of these criteria ensures that steering is shared among all subsystems of the multipartite system. The observation of genuine multipartite steering implies the presence of genuine multipartite entanglement in a $1$SDI way. In Ref. \cite{SDMulti2}, tripartite steering inequalities have been derived to detect genuine tripartite entanglement in $1$SDI scenarios where either one or two parties' measurements are uncharacterized. In this paper, we consider two types of tripartite steering scenarios where two parties' measurements are fully characterized (see Fig. \ref{plotine}). We derive two inequalities to detect genuine tripartite steering in these two $1$SDI scenarios. We argue that the correlations which violate one of these steering inequalities detect genuine entanglement in a semi-DI way \cite{SDI} as well. That is, when the Hilbert-space dimensions of two subsystems are constrained, the correlations which exhibit genuine tripartite steering also detect genuine entanglement. We demonstrate that quantum correlations can also detect tripartite EPR-steering from Alice to Bob and Charlie, even if Charlie's measurement settings are compatible. The paper is organized as follows. In Sec. \ref{biNL}, we discuss in detail the three notions of bipartite nonlocality. In Sec. \ref{Gsteer}, we introduce two notions of genuine tripartite EPR-steering and we derive the steering inequalities which detect them. In Sec. \ref{IncVsEn}, we illustrate that incompatibility of POVMs is not necessary for detecting tripartite steering. Conclusions are presented in Sec. \ref{Cnc}. 
\begin{figure} \centering \includegraphics[width=0.40\textwidth]{1SDI} \caption{\textit{Depiction of the one-side-device-independent scenario that we consider for deriving the two steering inequalities:} Alice and Bob's subsystems and measurements are fully characterized. Charlie's measurements are treated as a black box, i.e., Charlie has no knowledge of the measurement operators or of the measured system. }\label{plotine} \end{figure} \section{Bipartite nonlocality}\label{biNL} Consider a bipartite scenario in which two spatially separated parties, Alice and Bob, perform local measurements on a composite quantum system described by a density operator $\rho\in \mathcal{B}(\mathcal{H}_A\otimes \mathcal{H}_B)$. The correlation between the outcomes is described by the conditional probability distributions of getting the outcomes $a$ and $b$ given the measurements $A_x$ and $B_y$ on Alice and Bob's sides: $P(ab|A_xB_y)$, where $x$ and $y$ label the measurement choices. Quantum theory predicts the correlation as follows: \begin{equation} P(ab|A_xB_y)=\operatorname{Tr} (\rho M_{a|x} \otimes M_{b|y}), \end{equation} where $M_{a|x}$ and $M_{b|y}$ are measurement operators. \textit{Bell nonlocality.} A correlation is Bell nonlocal if it cannot be explained by a local hidden variable (LHV) model \cite{BNL}, \begin{equation} P(ab|A_xB_y)=\sum_\lambda p_\lambda P_\lambda(a|A_x)P_\lambda(b|B_y), \end{equation} for some hidden variable $\lambda$ with probability distribution $p_\lambda$. In the case of two dichotomic measurements per side, i.e., $x,y\in\{0,1\}$ and $a,b\in\{-1,+1\}$, the correlation has a LHV model iff it satisfies the Bell--Clauser-Horne-Shimony-Holt (CHSH) inequality \cite{chsh}, \begin{equation} \braket{A_0B_0+A_0B_1+A_1B_0-A_1B_1}_{LHV}\le2, \label{BCHSH} \end{equation} and its equivalents \cite{Fine}. Here, $\braket{A_xB_y}=\sum_{ab}abP(ab|A_xB_y)$.
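As a numerical sanity check of the bound in Eq. (\ref{BCHSH}), the following sketch (in Python with numpy; the singlet state and measurement directions are the standard choices for maximal CHSH violation, not settings specified in the text) computes the CHSH value attained by the singlet:

```python
import numpy as np

# Pauli operators and the two-qubit singlet |Psi^-> = (|01> - |10>)/sqrt(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)
rho = np.outer(psi, psi.conj())

def corr(A, B):
    # Two-point correlator <A x B> = Tr[rho (A x B)]
    return float(np.real(np.trace(rho @ np.kron(A, B))))

# Standard settings attaining the maximal quantum value for the singlet
A0, A1 = X, Z
B0 = -(X + Z) / np.sqrt(2)
B1 = (-X + Z) / np.sqrt(2)

S = corr(A0, B0) + corr(A0, B1) + corr(A1, B0) - corr(A1, B1)
print(round(S, 3))  # 2.828, i.e. 2*sqrt(2), above the LHV bound 2
```

The value $2\sqrt{2}\approx 2.83$ exceeds the LHV bound $2$, in line with the Tsirelson bound mentioned in the text.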
Quantum correlations violate the Bell-CHSH inequality up to the Tsirelson bound $2\sqrt{2}$ \cite{tsi1}. \textit{EPR-steering.} Consider a $1$SDI scenario in which Alice has knowledge about her subsystem and which measurements she can perform, while Bob performs black-box measurements (i.e., uncharacterized measurements). In this scenario, a quantum correlation exhibits EPR-steering from Bob to Alice if it cannot be explained by the hybrid local hidden state (LHS)-LHV model \cite{CFFW}, \begin{equation} \operatorname{Tr} (\rho M_{a|x} \otimes M_{b|y})=\sum_\lambda p_\lambda P(a|A_x,\rho_\lambda)P_\lambda(b|B_y), \end{equation} where $P(a|A_x,\rho_\lambda)$ is the distribution arising from a quantum state $\rho_\lambda$. Suppose Alice performs two orthogonal qubit projective measurements, for instance $\sigma_x$ and $\sigma_y$, and Bob performs two dichotomic black-box measurements. Then the inequality \begin{equation} \braket{A_0B_0-A_1B_1}_{2\times ?}^{LHS}\le\sqrt{2}, \label{EPRB} \end{equation} where $A_0$ and $A_1$ are anticommuting qubit projective measurements, $\{A_0,A_1\}=0$, serves as the EPR-steering criterion \cite{CJWR,EPRsi}. Here, $2 \times ?$ indicates that Alice's subsystem is assumed to be a qubit while Bob's subsystem is uncharacterized. Just like the Bell-CHSH inequalities, there are eight equivalent steering inequalities. For instance, the steering inequality $\braket{A_0B_1+A_1B_0}_{2\times?}^{LHS}\le\sqrt{2}$ can be obtained from the one in Eq. (\ref{EPRB}) by the transformation $a \rightarrow a\oplus x$, with $a\in\{0,1\}$, and $y\rightarrow y\oplus1$; here, $\oplus$ denotes addition modulo $2$. Suppose Bob performs unknown measurements with POVM elements $M_{b|y}$ on his share of a bipartite quantum state $\rho^{AB}\in\mathcal{B}(\mathcal{H}_2\otimes \mathcal{H}_d)$. The unnormalized conditional states on Alice's side are given by $\sigma^A_{b|y}=\operatorname{Tr}_B\big((\openone \otimes M_{b|y})\rho^{AB}\big)$.
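The steering bound in Eq. (\ref{EPRB}) can be checked numerically in the same spirit; a sketch assuming numpy, with Alice measuring $\sigma_x$ and $\sigma_y$ on a singlet and one convenient (assumed) choice of Bob's black-box observables:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)  # singlet
rho = np.outer(psi, psi.conj())

def corr(A, B):
    return float(np.real(np.trace(rho @ np.kron(A, B))))

A0, A1 = X, Y   # Alice: two orthogonal qubit measurements, as in the text
B0, B1 = -X, Y  # Bob: one convenient choice of dichotomic observables

val = corr(A0, B0) - corr(A1, B1)
print(round(val, 6))  # 2.0, above the LHS bound sqrt(2)
```

The singlet reaches the algebraic quantum maximum $2$ of this correlator, well above the LHS bound $\sqrt{2}$.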
Alice can do state tomography to determine these conditional states. The set of unnormalized conditional states is called an assemblage \cite{EPRLHS}. The above scenario exhibits steering if the state assemblage does not have a LHS model, \begin{equation} \sigma^A_{b|y}=\sum_\lambda p_\lambda P_\lambda (b|y) \rho_\lambda, \label{SABA} \end{equation} where $P_\lambda (b|y)$ are some conditional probability distributions and $\rho_\lambda$ are positive operators which satisfy $\sum_\lambda p_\lambda \operatorname{Tr} \rho_\lambda=1$. The violation of a steering inequality as in Eq. (\ref{EPRB}) implies that the assemblage does not have a decomposition as in Eq. (\ref{SABA}) \cite{EPRLHS}. Note that the state assemblage arising from any separable state has a LHS model. This implies that when the $1$SDI scenario does not demonstrate steering, there always exists a separable state which reproduces the given state assemblage \cite{Piani}. \textit{Nonseparability.} Nonseparability of quantum correlations arises as a failure of the quantum separable model, in which a LHS description is assumed for both parties. When a quantum correlation exhibits nonseparability, it violates the LHS-LHS model, \begin{equation} \operatorname{Tr} (\rho M_{a|x} \otimes M_{b|y})=\sum_\lambda p_\lambda P(a|A_x,\rho^A_\lambda)P(b|B_y,\rho^B_\lambda), \end{equation} where $P(a|A_x,\rho^A_\lambda)$ and $P(b|B_y,\rho^B_\lambda)$ are the distributions arising from quantum states $\rho^A_\lambda$ and $\rho^B_\lambda$, respectively. Any condition derived under the assumption of the above model is known as a separability criterion or entanglement criterion. \subsection{Entanglement certification from CHSH and BB84 families} Moroder and Gittsovich (MG) \cite{DbBi} explored the task of entanglement detection in various partially DI scenarios. For instance, MG introduced entanglement certification in the semi-DI scenario where only the Hilbert-space dimensions of the subsystems are assumed.
In Ref. \cite{Koon}, Goh {\it et al.} defined a quantity which gives certifiable lower bounds on the amount of entanglement in the semi-DI scenario. By using this quantity, Goh {\it et al.} studied the amount of two-qubit entanglement certifiable from the CHSH family defined as \begin{equation} P_{CHSH}=\frac{2+ab(-1)^{xy}\sqrt{2}V}{8}, \label{chshfam} \end{equation} and the BB84 family defined as \begin{equation} P_{BB84}=\frac{1+ab\delta_{x,y}V}{4}. \label{bb84fam} \end{equation} The CHSH family with $V=1$ violates the Bell-CHSH inequality to its quantum bound of $2\sqrt{2}$, whereas the BB84 family with $V=1$ corresponds to the BB84 correlations \cite{DQKD}. The CHSH and BB84 families can be obtained from the two-qubit Werner state, $\rho_W=V\ketbra{\Psi^-}{\Psi^-}+(1-V)\openone/4$, where $\ket{\Psi^-}=(\ket{01}-\ket{10})/\sqrt{2}$, for suitable projective measurements. The CHSH family is achievable with the measurement settings that give rise to the maximal violation of the CHSH inequality in Eq. (\ref{BCHSH}), whereas the BB84 family is achievable with the settings that give rise to the maximal violation of the steering inequality in Eq. (\ref{EPRB}). As the CHSH family violates the Bell-CHSH inequality for $V>1/\sqrt{2}$, it certifies entanglement in a DI way in this range. Since the BB84 family is local, it can also be produced by a separable state in a higher-dimensional space \cite{DQKD}. However, entanglement is certifiable from the BB84 family for $V>1/\sqrt{2}$ in a $1$SDI way, as it violates the steering inequality in this range. Note that the two-qubit Werner state is entangled iff $V>1/3$ \cite{Werner}. If one assumes qubit dimensions and knowledge of which measurements are performed on both sides, the CHSH and BB84 families detect entanglement for $V>1/2$. This follows from the fact that these correlations violate a quantum separable model in this range. This can be checked by the criteria to detect the nonexistence of a LHS-LHS model derived in Ref.
\cite{UFNL}. The CHSH family with $V>1/2$ violates the following LHS-LHS condition: \begin{equation} \braket{A_0B_0+A_0B_1+A_1B_0-A_1B_1}_{2\times2}^{LHS}\le\sqrt{2}, \end{equation} with the assumption that Alice and Bob have access to measurements that give rise to the optimal violation of the Bell-CHSH inequality in Eq. (\ref{BCHSH}). The BB84 family with $V>1/2$ violates the following LHS-LHS condition: \begin{equation} \braket{A_0B_0-A_1B_1}_{2\times2}^{LHS}\le1, \label{Nonsep} \end{equation} with the assumption that Alice and Bob have access to measurements that give rise to the optimal violation of the steering inequality in Eq. (\ref{EPRB}). Goh {\it et al.} found that for $V>1/2$, entanglement is certifiable from these two families if one assumes only qubit systems for Alice and Bob \cite{Koon}. \subsection{Incompatibility vs nonseparability} We now note that incompatibility of measurements is not necessary to demonstrate nonseparability. For this, we consider a measurement scenario in which Alice has access to the set of two dichotomic qubit POVMs, $\mathcal{M}^\eta_A=\{M^\eta_{\pm|\hat{a}_x}|x=0,1\}$, with elements \begin{equation} M^\eta_{\pm|\hat{a}_x}=\eta \Pi_{\pm|\hat{a}_x}+(1-\eta)\frac{\openone}{2}; \quad 0 \le \eta \le1, \label{nPVM} \end{equation} and Bob has access to the set of two projective qubit measurements $\mathcal{M}_B=\{\Pi_{\pm|\hat{b}_y}|y=0,1\}$. Here $\Pi_{\pm|\hat{a}_x}=1/2(\openone \pm \hat{a}_x \cdot \vec{\sigma})$ and $\Pi_{\pm|\hat{b}_y}=1/2(\openone \pm \hat{b}_y \cdot \vec{\sigma})$ are projectors along the directions $\hat{a}_x$ and $\hat{b}_y$; $\vec{\sigma}$ is the vector of Pauli matrices. Note that in the above measurement scenario, Alice performs noisy projective measurements.
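For the noisy measurements in Eq. (\ref{nPVM}) with orthogonal directions, a parent POVM can be written down explicitly at the joint-measurability threshold $\eta=1/\sqrt{2}$. The construction $G_{ab}\propto \openone+(a\hat{x}+b\hat{y})\cdot\vec{\sigma}/\sqrt{2}$ used below is a standard one for noisy orthogonal qubit pairs (an assumption of this sketch, not spelled out in the text); it is verified numerically with numpy:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)

eta = 1 / np.sqrt(2)  # joint-measurability threshold for orthogonal settings

# Noisy projective measurements as in Eq. (nPVM), with a_0 = x, a_1 = y
M_x = {a: eta * (I2 + a * X) / 2 + (1 - eta) * I2 / 2 for a in (+1, -1)}
M_y = {b: eta * (I2 + b * Y) / 2 + (1 - eta) * I2 / 2 for b in (+1, -1)}

# Candidate parent POVM G_ab = (I + (a x + b y)/sqrt(2) . sigma)/4
G = {(a, b): (I2 + (a * X + b * Y) / np.sqrt(2)) / 4
     for a in (+1, -1) for b in (+1, -1)}

# G is a valid POVM (positive elements summing to the identity) ...
psd = all(np.linalg.eigvalsh(g).min() > -1e-12 for g in G.values())
complete = np.allclose(sum(G.values()), I2)
# ... whose margins reproduce both noisy measurements
marg_x = all(np.allclose(G[(a, +1)] + G[(a, -1)], M_x[a]) for a in (+1, -1))
marg_y = all(np.allclose(G[(+1, b)] + G[(-1, b)], M_y[b]) for b in (+1, -1))
print(psd, complete, marg_x, marg_y)  # True True True True
```

For larger $\eta$ the elements $G_{ab}$ cease to be positive, consistent with the joint-measurability threshold quoted in the text.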
This implies that the correlation arising from the given two-qubit state $\rho$ satisfies the following relation: \begin{equation} \operatorname{Tr} (\rho M^\eta_{\pm|\hat{a}_x} \otimes \Pi_{\pm|\hat{b}_y})=\operatorname{Tr}(\rho_\eta \Pi_{\pm|\hat{a}_x} \otimes \Pi_{\pm|\hat{b}_y}). \label{povm-proj} \end{equation} Here, $\rho_\eta=\eta \rho+ (1-\eta)\openone/2 \otimes \rho_B$, with $\rho_B=\operatorname{Tr}_A\rho$. In other words, the correlation arising from the given state for noisy projective measurements on Alice's side and projective measurements on Bob's side is equivalent to the correlation arising from the noisy state for projective measurements on both sides. In Refs. \cite{JMguhne,IncompLHV}, this connection between noisy measurements and noisy states has been used to obtain new results for the former from known results of the latter. The set of two POVMs $\mathcal{M}^\eta_A$ with $\hat{a}_0\cdot \hat{a}_1=0$ is jointly measurable iff $\eta\le1/\sqrt{2}$ \cite{JMPauli1,JMPauli2} and noncommuting for any $\eta>0$. We now illustrate that this noncommuting set of POVMs can be used to demonstrate nonseparability for $\eta>1/2$. \begin{example}\label{ex01} Suppose Alice and Bob share the singlet state, $\ket{\Psi^-}$, with Alice performing two noisy projective measurements along the directions $\hat{a}_0=\hat{x}$ and $\hat{a}_1=\hat{y}$ and Bob performing two projective measurements along the directions $\hat{b}_0=-(\hat{x}+\hat{y})/\sqrt{2}$ and $\hat{b}_1=(-\hat{x}+\hat{y})/\sqrt{2}$. From the relation given in Eq. (\ref{povm-proj}), it follows that the statistics arising from this setting are analogous to those arising from the Werner state with visibility $\eta$ for projective measurements on both sides along the directions given above. Therefore, the statistics arising from the above scenario are equivalent to the CHSH family in Eq. (\ref{chshfam}) with $V$ replaced by $\eta$.
\end{example} Since the above statistics exhibit nonseparability for $\eta>1/2$, the set of two POVMs which is jointly measurable in this range is also useful for this nonclassical task. This, in turn, implies that this compatible set of POVMs can also be used to detect two-qubit entanglement. For the measurements given in example \ref{ex01}, the statistics arising from the Werner state are equivalent to the CHSH family in Eq. (\ref{chshfam}) with $V$ replaced by $\eta V$. Thus, these statistics detect entanglement for any $\eta V >1/2$. For $\eta=1/\sqrt{2}$, the statistics do not exhibit Bell nonlocality; however, entanglement is detected for $V>1/\sqrt{2}$. \section{Tripartite nonlocality}\label{Gsteer} We now turn to the tripartite case which is the focus of this work. We restrict ourselves to the simplest scenario in which three spatially separated parties, i.e., Alice, Bob, and Charlie, perform two dichotomic measurements on their subsystems. The correlation is described by the conditional probability distributions: $P(abc|A_xB_yC_z)$, where $x,y,z\in\{0,1\}$ and $a,b,c\in\{-1,+1\}$. The correlation exhibits standard nonlocality (i.e., Bell nonlocality) if it cannot be explained by the LHV model, \begin{equation} P(abc|A_xB_yC_z)=\sum_\lambda p_\lambda P_\lambda(a|A_x)P_\lambda(b|B_y)P_\lambda(c|C_z). \label{FLHV} \end{equation} If a correlation cannot be reproduced by this fully LHV model, it does not necessarily imply that it exhibits genuine nonlocality \cite{SI,Banceletal1}. In Ref. \cite{SI}, Svetlichny introduced the strongest form of genuine tripartite nonlocality (see Ref. \cite{Banceletal1} for the other two forms of genuine nonlocality). 
A correlation exhibits Svetlichny nonlocality if it cannot be explained by a hybrid nonlocal-LHV (NLHV) model, \begin{align} &P(abc|A_xB_yC_z)\!=\!\sum_\lambda p_\lambda P_\lambda(a|A_x)P_\lambda(bc|B_yC_z)+\nonumber \\ &\!\sum_\lambda q_\lambda P_\lambda(ac|A_xC_z)P_\lambda(b|B_y)\!+\!\sum_\lambda r_\lambda P_\lambda(ab|A_xB_y)P_\lambda(c|C_z), \label{HNLHV} \end{align} with $\sum_\lambda p_\lambda+\sum_\lambda q_\lambda+\sum_\lambda r_\lambda=1$. The bipartite probability distributions in this decomposition can have arbitrary nonlocality. Svetlichny derived Bell-type inequalities to detect the strongest form of genuine nonlocality \cite{SI}. For instance, one of the Svetlichny inequalities reads, \begin{eqnarray} &&\braket{A_0B_0C_1+A_0B_1C_0+A_1B_0C_0-A_1B_1C_1}\nonumber \\ &&+\braket{A_0B_1C_1+A_1B_0C_1+A_1B_1C_0-A_0B_0C_0}\le4. \label{SI1} \end{eqnarray} Here $\braket{A_xB_yC_z}=\sum_{abc}abcP(abc|A_xB_yC_z)$. Quantum correlations violate the Svetlichny inequality (SI) up to $4\sqrt{2}$. A Greenberger-Horne-Zeilinger (GHZ) state \cite{GHZ} gives rise to the maximal violation of the SI for a different choice of measurements which do not demonstrate the GHZ paradox \cite{UNLH}. Bancal {\it et al.} \cite{Bancaletal} presented an intuitive approach to the SI in Eq. (\ref{SI1}). For this, the SI was rewritten as follows: \begin{equation} \braket{CHSH_{AB}C_1+CHSH'_{AB}C_0}_{NLHV}\le4. \label{SI} \end{equation} Here, $CHSH_{AB}$ is the canonical CHSH operator given in Eq. (\ref{BCHSH}) and $CHSH'_{AB}=-A_0B_0+A_0B_1+A_1B_0+A_1B_1$ is one of its equivalents. Bancal {\it et al.} observed that the input setting of Charlie defines which version of the CHSH game Alice and Bob are playing. When $C$ gets the input $z=0$, $AB$ play the canonical $CHSH$ game; when $C$ gets the input $z=1$, $AB$ play $CHSH'$. In Argument $1$ of Ref. \cite{Bancaletal}, Bancal {\it{et. al.
}} found that $AB$ play the average game $\pm CHSH \pm CHSH'$, where the signs depend on the outputs of $C$. It can be checked that the algebraic maximum of any of these average games is $4$ for the NLHV model in Eq. (\ref{HNLHV}) with $\sum_\lambda r_\lambda=1$. \subsection{Svetlichny steering} We will derive a criterion for tripartite EPR-steering by exploiting the structure of the SI given in Eq. (\ref{SI}). For this, we consider the following $1$SDI scenario. Alice and Bob have access to incompatible qubit measurements that give rise to violation of a Bell-CHSH inequality, while Charlie has access to two black-box measurements. Suppose that $P(abc|A_xB_yC_z)$ cannot be explained by the following nonlocal LHS-LHV (NLHS) model: \begin{align} P(abc|A_xB_yC_z)&=\sum_\lambda p_\lambda P(ab|A_xB_y,\rho_{AB}^\lambda)P_\lambda(c|C_z)+\nonumber \\ &\sum_\lambda q_\lambda P(a|A_x,\rho_A^\lambda)P(b|B_y,\rho_B^\lambda)P_\lambda(c|C_z), \label{NLHS} \end{align} where $P(ab|A_xB_y,\rho_{AB}^\lambda)$ denotes the nonlocal probability distribution arising from the two-qubit state $\rho_{AB}^\lambda$, and $P(a|A_x,\rho_A^\lambda)$ and $P(b|B_y,\rho_B^\lambda)$ are the distributions arising from the qubit states $\rho_A^\lambda$ and $\rho_B^\lambda$. Then the quantum correlation exhibits genuine steering from Charlie to Alice and Bob. We obtain the following criterion for genuine steering under the constraint of the above $1$SDI scenario. \begin{thm} If a given quantum correlation violates the steering inequality \begin{equation} \braket{CHSH_{AB}C_1+CHSH'_{AB}C_0}_{2\times 2 \times ?}^{NLHS}\le2\sqrt{2}, \label{SIEPR} \end{equation} then it exhibits genuine tripartite steering from Charlie to Alice and Bob. Here, $2\times 2 \times ?$ indicates that Alice and Bob have access to known qubit measurements that demonstrate Bell nonlocality, while Charlie's measurements are uncharacterized.
\end{thm} \begin{proof} Note that in the $1$SDI scenario that we are interested in, $AB$ play the average game $\pm CHSH \pm CHSH'$. The maximum of any of these games cannot exceed $2\sqrt{2}$ if the correlation admits the NLHS model given in Eq. (\ref{NLHS}). There are two cases which have to be checked: (i) If Alice and Bob have a LHS-LHS model, then the expectation values of the CHSH operators are bounded by $-\sqrt{2}\le\braket{CHSH}^{LHS}_{2\times2}\le \sqrt{2}$ and $-\sqrt{2}\le\braket{CHSH'}^{LHS}_{2\times2}\le \sqrt{2}$ \cite{UFNL}. This implies that $\braket{\pm CHSH \pm CHSH'}^{LHS}_{2\times2}\le 2\sqrt{2}$. (ii) In case Bell nonlocality is shared by Alice and Bob, it can be checked that $\braket{\pm CHSH \pm CHSH'}_{2\times 2}\le 2\sqrt{2}$. For instance, the quantum correlation which exhibits maximal Bell nonlocality has $\braket{CHSH}=2\sqrt{2}$ and $\braket{CHSH'}=0$. Thus, the violation of the inequality in Eq. (\ref{SIEPR}) implies the violation of the model in Eq. (\ref{NLHS}). \end{proof} For a given quantum state, genuine steering as witnessed by the steering inequality in Eq. (\ref{SIEPR}) originates from measurement settings that give rise to Svetlichny nonlocality. For this reason, we call this type of genuine steering Svetlichny steering. \begin{definition} A quantum correlation exhibits \emph{Svetlichny steering} if it cannot be explained by a model in which arbitrary two-qubit Bell nonlocality is allowed between two parties with the third party locally correlated. \end{definition} Note that the SI is invariant under the permutation of the parties. This implies that the criterion to detect Svetlichny steering from $A$ to $BC$ or $B$ to $AC$ can be obtained from the steering inequality in Eq. (\ref{SIEPR}) by permuting the parties.
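As a numerical check of the bound in Eq. (\ref{SIEPR}), the sketch below (assuming numpy) evaluates $\braket{CHSH_{AB}C_1+CHSH'_{AB}C_0}$ on a noisy GHZ state for one choice of $x$--$y$ plane settings that attains the maximal quantum value; the particular assignment of Charlie's two observables is an assumption of this sketch:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)

# |GHZ> = (|000> + |111>)/sqrt(2)
ghz = np.zeros(8, dtype=complex)
ghz[0] = ghz[7] = 1 / np.sqrt(2)
ghz_proj = np.outer(ghz, ghz.conj())

s2 = np.sqrt(2)
A0, A1 = X, Y
B0, B1 = (X - Y) / s2, (X + Y) / s2
C0, C1 = -Y, X  # assumed assignment of Charlie's observables

CHSH = np.kron(A0, B0) + np.kron(A0, B1) + np.kron(A1, B0) - np.kron(A1, B1)
CHSHp = -np.kron(A0, B0) + np.kron(A0, B1) + np.kron(A1, B0) + np.kron(A1, B1)
S_op = np.kron(CHSH, C1) + np.kron(CHSHp, C0)

def sv_value(V):
    # Expectation on the noisy GHZ state V|GHZ><GHZ| + (1-V) I/8
    rho = V * ghz_proj + (1 - V) * np.eye(8) / 8
    return float(np.real(np.trace(rho @ S_op)))

print(round(sv_value(1.0), 3))  # 5.657, i.e. 4*sqrt(2)
```

The value scales as $4\sqrt{2}V$, so it crosses the NLHS bound $2\sqrt{2}$ exactly at $V=1/2$, matching the range in which genuine steering is claimed for the Svetlichny family.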
We consider the Svetlichny family defined as \begin{equation} P_{Sv}=\frac{2+abc(-1)^{xy\oplus xz \oplus yz \oplus x \oplus y \oplus z \oplus 1}\sqrt{2}V}{16}, \label{SvF} \end{equation} which is the tripartite version of the CHSH family in Eq. (\ref{chshfam}). \begin{example}\label{ex1} The Svetlichny family can be obtained from a noisy three-qubit GHZ state, $\rho=V\ketbra{\Phi_{GHZ}}{\Phi_{GHZ}}+(1-V)\openone/8$, where $\ket{\Phi_{GHZ}}=\frac{1}{\sqrt{2}}(\ket{000}+\ket{111})$, for the measurements that give rise to the maximal violation of the SI; for instance, $A_0=\sigma_x$, $A_1=\sigma_y$, $B_0=(\sigma_x-\sigma_y)/\sqrt{2}$, $B_1=(\sigma_x+\sigma_y)/\sqrt{2}$, $C_0=\sigma_x$, and $C_1=-\sigma_y$. \end{example} Note that the noisy GHZ state given above is genuinely entangled iff $V>0.429$ \cite{Guhne}. The Svetlichny family certifies genuine entanglement in a DI way for $V>1/\sqrt{2}$, as it violates the SI in this range. The Svetlichny family can be written as a convex mixture of the local deterministic strategies when $V\le 1/\sqrt{2}$ \cite{Banceletal1}. This implies that in this range, it can also arise from a separable state in a higher-dimensional space \cite{DQKD}. However, the measurement statistics in example \ref{ex1} certify genuine entanglement for $V>1/2$ in a $1$SDI way, as these statistics violate the steering inequality in Eq. (\ref{SIEPR}). By using the concept of steering, Bancal {\it et al.} \cite{Bancaletal} observed that the structure of the SI in Eq. (\ref{SI}) allows one to understand its violation by genuinely entangled states: The SI should be violated iff Charlie's measurements prepare entangled states for Alice and Bob such that the average game $\pm CHSH \pm CHSH'$ exceeds $4$. Similarly, we understand the violation of the steering inequality by the Svetlichny family as follows. When the parties share the noisy GHZ state and observe the optimal violation of the steering inequality in Eq.
(\ref{SIEPR}), we have the following two situations. First, Charlie's black-box measurements prepare the noisy Bell states, which are a mixture of a Bell state (i.e., a maximally entangled state) and white noise, with the visibility $V$ for Alice and Bob. Second, Alice and Bob's measurements generate the CHSH family from the states steered by Charlie. These imply that the Svetlichny family certifies genuine tripartite entanglement for $V>1/2$ if one assumes only qubit dimension for two parties. \subsection{Mermin steering} We now derive a criterion for another form of tripartite EPR-steering from a Mermin inequality (MI) \cite{mermin}, \begin{equation} \braket{A_0B_0C_1+A_0B_1C_0+A_1B_0C_0-A_1B_1C_1}_{LHV}\le2. \label{MI0} \end{equation} In the seminal paper \cite{mermin}, the MI was derived to demonstrate standard nonlocality of three-qubit correlations arising from the genuinely entangled states. For this purpose, noncommuting measurements that do not demonstrate Svetlichny nonlocality were used. Note that when the GHZ state maximally violates the MI, the measurements that give rise to it exhibit the GHZ paradox \cite{UNLH}. We rewrite the MI in Eq. (\ref{MI0}) as follows: \begin{equation} \braket{Mermin_{AB}C_1+Mermin'_{AB}C_0}_{LHV}\le2, \label{MI} \end{equation} where $Mermin_{AB}=A_0B_0-A_1B_1$ and $Mermin'_{AB}=A_0B_1+A_1B_0$ \footnote{The multipartite generalization of these operators generates the Mermin inequalities, hence the name \cite{mermin,allmult}.}. Note that these bipartite Mermin operators can be used to witness EPR-steering without Bell nonlocality as in Eq. (\ref{EPRB}). Inspired by the structure of the MI in Eq. (\ref{MI}), we consider the following $1$SDI scenario. Alice and Bob have access to incompatible qubit measurements that give rise to EPR-steering without Bell nonlocality, while Charlie has access to two dichotomic black-box measurements. 
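The quantum value of the Mermin expression in Eq. (\ref{MI}) can be checked in the same way; a sketch assuming numpy, with GHZ-paradox-type settings (the assignment of Charlie's observables is again an assumption of this sketch):

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)

# |GHZ> = (|000> + |111>)/sqrt(2)
ghz = np.zeros(8, dtype=complex)
ghz[0] = ghz[7] = 1 / np.sqrt(2)
ghz_proj = np.outer(ghz, ghz.conj())

A0, A1 = X, Y
B0, B1 = X, Y
C0, C1 = -Y, X  # assumed assignment of Charlie's observables

mermin = np.kron(A0, B0) - np.kron(A1, B1)    # Mermin_AB
merminp = np.kron(A0, B1) + np.kron(A1, B0)   # Mermin'_AB
M_op = np.kron(mermin, C1) + np.kron(merminp, C0)

def mi_value(V):
    # Expectation on the noisy GHZ state V|GHZ><GHZ| + (1-V) I/8
    rho = V * ghz_proj + (1 - V) * np.eye(8) / 8
    return float(np.real(np.trace(rho @ M_op)))

print(round(mi_value(1.0), 3))  # 4.0, the maximal quantum value of the MI
```

The value scales as $4V$, so it exceeds $2$ precisely when $V>1/2$, matching the visibility range discussed for the GHZ family.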
Suppose that $P(abc|A_xB_yC_z)$ cannot be explained by the following steering LHS-LHV (SLHS) model: \begin{align} P(abc|A_xB_yC_z)&=\sum_\lambda p_\lambda P^{EPR}_L(ab|A_xB_y,\rho_{AB}^\lambda)P_\lambda(c|C_z)+\nonumber \\ &\sum_\lambda q_\lambda P(a|A_x,\rho_A^\lambda)P(b|B_y,\rho_B^\lambda)P_\lambda(c|C_z),\label{wSVEPR} \end{align} where $P^{EPR}_L(ab|A_xB_y,\rho_{AB}^\lambda)$ denotes the local probability distribution which exhibits EPR-steering. Then the quantum correlation exhibits genuine steering from Charlie to Alice and Bob. We obtain the following criterion which witnesses genuine steering in the above $1$SDI scenario. \begin{thm} If a given quantum correlation violates the steering inequality \begin{equation} \braket{Mermin_{AB}C_1+Mermin'_{AB}C_0}_{2\times2\times?}^{SLHS}\le2, \label{MIEPR} \end{equation} then it exhibits genuine tripartite steering from Charlie to Alice and Bob. Here Alice and Bob's measurements demonstrate EPR-steering without Bell nonlocality, while Charlie's measurements are uncharacterized. \end{thm} \begin{proof} Notice that the correlations that admit the SLHS model in Eq. (\ref{wSVEPR}) also admit the standard LHV model in Eq. (\ref{FLHV}). This implies that if a given correlation violates the inequality in Eq. (\ref{MIEPR}), the correlation also violates the SLHS model. Similar to the proof of the inequality in Eq. (\ref{SIEPR}), we will also show that the inequality in Eq. (\ref{MIEPR}) is satisfied if the correlation admits the SLHS model. Notice that in the $1$SDI scenario that we are now interested in, $AB$ play the average game $\pm Mermin \pm Mermin'$. The algebraic maximum of any of these games is $2$ if the correlation admits the SLHS model. There are two cases that have to be checked now.
(i) If Alice and Bob have a LHS-LHS model, then the expectation values of the Mermin operators satisfy $-1 \le \braket{Mermin}^{LHS}_{2\times2}\le 1$ and $-1 \le \braket{Mermin'}^{LHS}_{2\times2}\le 1$ \cite{UFNL}. This implies that $\braket{\pm Mermin \pm Mermin'}_{2\times2}^{LHS}\le2$. (ii) In case EPR-steering is shared by Alice and Bob, it can be checked that $\braket{\pm Mermin \pm Mermin'}\le 2$. For instance, the quantum correlation which maximally violates the steering inequality in Eq. (\ref{EPRB}) has $\braket{Mermin}=2$ and $\braket{Mermin'}=0$. \end{proof} For a given state, genuine steering as witnessed by the steering inequality in Eq. (\ref{MIEPR}) originates from measurement settings that lead to standard nonlocality, which is witnessed only by the violation of the MI. For this reason, we call this type of tripartite steering Mermin steering. \begin{definition} A quantum correlation exhibits \emph{Mermin steering} if it cannot be explained by a model in which arbitrary two-qubit EPR-steering without Bell nonlocality is allowed between two parties with the third party locally correlated. \end{definition} Note that the MI is invariant under the permutation of the parties. This implies that the criterion to detect Mermin steering from $A$ to $BC$ or $B$ to $AC$ can be obtained from the steering inequality in Eq. (\ref{MIEPR}) by permuting the parties. We consider the GHZ family defined as \begin{equation} P_{GHZ}=\frac{1+abc\delta_{x\oplus y,z\oplus1}V}{8}, \end{equation} which is the tripartite version of the BB84 family in Eq. (\ref{bb84fam}). \begin{example}\label{ex2} The GHZ family can be obtained from the noisy three-qubit GHZ state for the measurements that give rise to the GHZ paradox; for instance, $A_0=\sigma_x$, $A_1=\sigma_y$, $B_0=\sigma_x$, $B_1=\sigma_y$, $C_0=\sigma_x$, and $C_1=-\sigma_y$.
\end{example} Since the GHZ family does not exhibit genuine nonlocality, it does not certify genuine entanglement in a DI way. However, genuine entanglement is detected from the measurement statistics in example \ref{ex2} for $V>1/2$ in a $1$SDI way, as these statistics violate the steering inequality in Eq. (\ref{MIEPR}). Notice that when the GHZ family violates this steering inequality, Alice and Bob's measurement settings generate the BB84 family from the noisy Bell states with visibility $V$ which are steered by Charlie. This implies that genuine entanglement is certifiable from the GHZ family for $V>1/2$ by assuming only qubit systems for two parties. \section{Incompatibility versus tripartite steering}\label{IncVsEn} Following the approach of Ref. \cite{SDMulti2}, we now discuss tripartite steerability in the $1$SDI scenarios considered above (see Fig. \ref{plotine}). Let $\rho^{ABC}\in \mathcal{B}(\mathcal{H}_2\otimes \mathcal{H}_2 \otimes \mathcal{H}_d)$ denote the shared tripartite quantum state and $M_{c|z}$ denote the measurement operators on Charlie's side. The assemblage, i.e., the set of (unnormalized) conditional states on Alice and Bob's side, is given by \begin{equation} \sigma^{AB}_{c|z}=\operatorname{Tr}_C \left[(\openone \otimes \openone \otimes M_{c|z}) \rho^{ABC}\right]. \end{equation} Note that in examples \ref{ex1} and \ref{ex2}, there are genuinely tripartite entangled states which do not demonstrate tripartite steering. Suppose that a $1$SDI scenario involving genuine tripartite entanglement does not demonstrate tripartite steering. Then the given assemblage can be reproduced by a biseparable state, which can be written as \begin{eqnarray} \rho_{bisep}^{ABC}&=&\sum_\lambda p^{A:BC}_\lambda \rho^{A}_\lambda \otimes \rho^{BC}_\lambda +\sum_\mu p^{B:AC}_\mu \rho^{B}_\mu \otimes \rho^{AC}_\mu \nonumber \\ &+&\sum_\nu p^{AB:C}_\nu \rho^{AB}_\nu \otimes \rho^{C}_\nu.
\end{eqnarray} Thus, the unsteerable assemblage admits the following LHS model: \begin{eqnarray} \sigma^{AB}_{c|z}&=&\sum_\lambda p^{A:BC}_\lambda \rho^{A}_\lambda \otimes \sigma^{B}_{c|z,\lambda} +\sum_\mu p^{B:AC}_\mu \sigma^{A}_{c|z,\mu} \otimes \rho^{B}_\mu \nonumber \\ &+&\sum_\nu p^{AB:C}_\nu P_\nu(c|z) \rho^{AB}_\nu, \label{UStri} \end{eqnarray} where $\sigma^{B}_{c|z,\lambda}=\operatorname{Tr}_C[(\openone \otimes M'_{c|z})\rho^{BC}_\lambda]$, $\sigma^{A}_{c|z,\mu}=\operatorname{Tr}_C[(\openone \otimes M'_{c|z})\rho^{AC}_\mu]$ and $P_\nu(c|z)=\operatorname{Tr} (M'_{c|z} \rho^{C}_\nu)$. Suppose Charlie has access to POVMs which are compatible. Then there exists a parent POVM with measurement operators $G_\nu$ such that \begin{equation} M_{c|z}=\sum_{\nu}D_{\nu}(c|z)G_\nu, \end{equation} where $D_{\nu}(c|z)$ are positive numbers with $\sum_{c}D_{\nu}(c|z)=1$ \cite{Ali2009}. For these measurements, the state assemblage on Alice and Bob's side admits the following decomposition: \begin{equation} \sigma^{AB}_{c|z}=\sum_{\nu}D_{\nu}(c|z) \operatorname{Tr}_C \left[(\openone \otimes \openone \otimes G_\nu) \rho^{ABC}\right]. \label{compLHS} \end{equation} Note that the above decomposition resembles that of the unsteerable assemblage given in Eq. (\ref{UStri}). Therefore, if Charlie's measurement settings are compatible, then there is no steering between Charlie and Alice-Bob. Inspired by example \ref{ex01}, we consider the following measurement scenario. \begin{example}\label{ex3} Alice and Bob perform projective measurements along the directions $\hat{a}_0=\hat{x}$, $\hat{a}_1=\hat{y}$, $\hat{b}_0=(\hat{x}-\hat{y})/\sqrt{2}$ and $\hat{b}_1=(\hat{x}+\hat{y})/\sqrt{2}$. Charlie performs noisy projective measurements with visibility $\eta$ as in Eq. (\ref{nPVM}) along the directions $\hat{c}_0=\hat{x}$ and $\hat{c}_1=-\hat{y}$. For the above measurements, the statistics arising from the GHZ state, $\ket{\Phi_{GHZ}}$, are equivalent to the Svetlichny family in Eq.
(\ref{SvF}) with $V$ replaced by $\eta$. \end{example} Note that in the above example, Charlie's measurement settings are compatible for $\eta \le 1/\sqrt{2}$. This implies that in this range, the state assemblage on Alice and Bob's side admits the decomposition in Eq. (\ref{compLHS}). Therefore, Charlie does not demonstrate tripartite steerability to Alice and Bob for $\eta\le1/\sqrt{2}$. However, the correlations in example \ref{ex3} violate the steering inequality in Eq. (\ref{SIEPR}) for any $\eta>1/2$. Note that in this example, Bob and Charlie's measurement settings can be used to demonstrate Bell nonlocality for $\eta>1/\sqrt{2}$ and nonseparability for $\eta>1/2$ (as in example \ref{ex01}). Therefore, the correlations in example \ref{ex3} exhibit Svetlichny steering from $A$ to $BC$ for $\eta>1/2$. \begin{remark} Quantum correlations can also detect tripartite EPR-steering from Alice to Bob and Charlie even if Charlie's measurement settings are compatible. \end{remark} \section{Conclusion}\label{Cnc} We have introduced Svetlichny steering and Mermin steering to distinguish two types of genuine tripartite steering. We have derived two inequalities which detect them by using the structure of the Svetlichny inequality and the Mermin inequality, which detect genuine nonlocality and standard nonlocality, respectively. We find that genuine tripartite entanglement is also certifiable, in a semi-device-independent way, from the correlations that violate one of these steering inequalities. That is, when qubit dimension is assumed for two parties, the correlations which exhibit Svetlichny steering or Mermin steering also detect genuine tripartite entanglement (where measurements on all subsystems are uncharacterized). We demonstrate that quantum correlations can also detect tripartite EPR-steering from Alice to Bob and Charlie even if Charlie's measurement settings are compatible.
The two tripartite steering inequalities which detect Svetlichny steering and Mermin steering can be generalized to an arbitrary number of systems by using the structure of the $n$-partite Svetlichny inequality \cite{multiSI,multiSI1} and the Mermin-Ardehali-Belinskii-Klyshko (MABK) inequality \cite{mermin,multimermin,multimermin1,multimermin2,multimermin3}, respectively. \end{document}
\begin{document} \author[R.~Garbit]{Rodolphe Garbit} \address{Universit\'e d'Angers\\D\'epartement de Math\'ematiques\\ LAREMA\\ UMR CNRS 6093\\ 2 Boulevard Lavoisier\\49045 Angers Cedex 1\\ France} \email{[email protected]} \author[K.~Raschel]{Kilian Raschel} \address{CNRS\\ F\'ed\'eration Denis Poisson\\Universit\'e de Tours\\LMPT\\UMR CNRS 7350\\Parc de Grandmont\\ 37200 Tours\\ France} \email{[email protected]} \title[On the exit time from a cone for Brownian motion with drift]{On the exit time from a cone for Brownian motion with drift} \subjclass[2000]{60F17, 60G50, 60J05, 60J65} \keywords{Brownian motion with drift; Exit time; Cone; Heat kernel} \thanks{} \date{\today} \begin{abstract} We investigate the tail distribution of the first exit time of Brownian motion with drift from a cone and find its exact asymptotics for a large class of cones. Our results show in particular that its exponential decreasing rate is a function of the distance between the drift and the cone, whereas the polynomial part in the asymptotics depends on the position of the drift with respect to the cone and its polar cone, and reflects the local geometry of the cone at the points that minimize the distance to the drift. \end{abstract} \maketitle \section{Introduction} \label{sec:Introduction} Let $B_t$ be a $d$-dimensional Brownian motion with drift $a\in \RR^d$. For any cone $C\subset\RR^d$, define the first exit time \begin{equation*} \label{eq:def_exit_time} \tau_C =\inf\{t>0 : B_t \notin C\}. \end{equation*} In this article we study the probability for the Brownian motion started at $x$ not to exit $C$ before time $t$, namely, \begin{equation} \label{eq:def_exit_probability} \PP_x[\tau_C>t], \end{equation} and its asymptotics \begin{equation} \label{eq:exit_asymptotic} \kappa h(x) t^{-\alpha} e^{-\gamma t}(1+o(1)),\quad t\to\infty. \end{equation} In the literature, these problems were first considered for Brownian motion with no drift ($a=0$).
In \cite{Sp58}, Spitzer considered the case $d=2$ and obtained an explicit expression for the probability \eqref{eq:def_exit_probability} for any two-dimensional cone. He also introduced the winding number process $\theta_t=\arg B_t$ (in dimension $d=2$, the Brownian motion does not exit a given cone before time $t$ if and only if $\theta_t$ stays in some interval). He proved a weak limit theorem for $\theta_t$ as $t\to\infty$. Later on, this result has been extended by many authors in several directions (e.g., strong limit theorems, winding numbers not only around points but also around certain curves, winding numbers for other processes), see for instance \cite{Me91}. In \cite{Dy62}, motivated by studying the eigenvalues of matrices from the Gaussian Unitary Ensemble, Dyson analyzed the Brownian motion in the cone formed by the Weyl chamber of type $A$, namely, \begin{equation*} \label{eq:def_Weyl_chamber} \{x=(x_1,\ldots ,x_d)\in\RR^d : x_1<\cdots <x_d\}. \end{equation*} He also defined the Brownian motion conditioned never to exit the chamber. These results have been extended by Biane \cite{Bi95} and Grabiner \cite{Grab99}. In \cite{Bi94}, Biane studied some further properties of the Brownian motion conditioned to stay in cones, and in particular generalized the famous Pitman's theorem to that context. In \cite{KoSc11} K\"onig and Schmid analyzed the non-exit probability \eqref{eq:def_exit_probability} of Brownian motion from a growing truncated Weyl chamber. In \cite{Bu77}, Burkholder considered open right circular cones in any dimension and computed the values of $p>0$ such that \begin{equation*} \EE_x[\tau_C^p]<\infty. 
\end{equation*} In \cite{DB87,DB88}, for a fairly general class of cones, DeBlassie obtained an explicit expression for the probability \eqref{eq:def_exit_probability} in terms of the eigenfunctions of the Dirichlet problem for the~Laplace-Beltrami operator on \begin{equation*} \label{eq:def_Theta} \Theta=\mathbb S^{d-1}\cap C, \end{equation*} see \cite[Theorem 1.2]{DB87}. DeBlassie also derived the asymptotics \eqref{eq:exit_asymptotic}, see \cite[Corollary 1.3]{DB87}: he found $\gamma =0$ (indeed, the drift is zero), while $\alpha$ is related to the first eigenvalue and $h(x)$ to the first eigenfunction. The basic strategy in \cite{DB87,DB88} was to show that the probability \eqref{eq:def_exit_probability} is a solution to the heat equation, and to solve the latter. In \cite{BaSm97}, Ba\~nuelos and Smits refined the results of DeBlassie \cite{DB87,DB88}: they considered more general cones, and obtained a quite tractable expression for the heat kernel (the transition densities for the Brownian motion in $C$ killed on the boundary), and thus for \eqref{eq:def_exit_probability}. We conclude this part by mentioning the work \cite{DoOC05}, in which Doumerc and O'Connell found a formula for the distribution of the first exit time of Brownian motion from a fundamental region associated with a finite reflection group. For Brownian motion with non-zero drift, much less is known. Only the case of Weyl chambers (of type $A$) has been investigated. In \cite{BiBoOC05}, Biane, Bougerol and O'Connell obtained an expression for the probability $\PP_x[\tau_C=\infty]=\lim_{t\to\infty}\PP_x[\tau_C>t]$ in the case where the drift is inside the Weyl chamber (and hence the latter probability is positive). In \cite{PuRo08}, Pucha{\l}a and Rolski gave, for any drift $a$, the exact asymptotics \eqref{eq:exit_asymptotic} of the tail distribution of the exit time, in the context of Weyl chambers too.
The different quantities in \eqref{eq:exit_asymptotic} were determined explicitly in terms of the drift $a$ and of a vector obtained by a procedure involving the construction of a stable partition of the drift vector. In this article, we compute the asymptotics \eqref{eq:exit_asymptotic} for a very general class of cones $C$, and we identify $\kappa$, $h(x)$, $\alpha$ and $\gamma$ in terms of the cone $C$ and the drift $a$. We find that there are six different regimes depending on the position of the drift with respect to (w.r.t.)\ the cone. To be more specific, we will consider general cones as defined by Ba\~nuelos and Smits in~\cite{BaSm97}. Namely, given a {\em proper}, {\em open} and {\em connected} subset $\Theta$ of the unit sphere $\SS^{d-1}\subset \mathbb R^d$, we consider the cone $C$ generated by $\Theta$, that is, the set of all rays emanating from the origin and passing through $\Theta$: \begin{equation*} C=\{\lambda\theta :\lambda>0, \theta\in\Theta\}. \end{equation*} We associate with the cone the polar cone (which is a closed set) \begin{equation*} C^{\sharp}=\{x\in\RR^d : \sclr{x}{y}\leqslant 0,\forall y\in C\}. \end{equation*} See Figure \ref{fig:cones} for an example. Below and throughout, we shall denote by $\open{D}$ (resp.\ $\close{D}$) the interior (resp.\ the closure) of a set $D\subset \mathbb R^d$. The six cases leading to different regimes are then: \begin{enumerate}[label={\rm\Alph{*}.},ref={\rm\Alph{*}}] \item\label{case:A}polar interior drift: $a\in\open{(C^{\sharp})}$; \item\label{case:B}zero drift: $a=0$; \item\label{case:C}interior drift: $a\in C$; \item\label{case:D}boundary drift: $a\in \partial C\setminus\{0\}$; \item\label{case:E}non-polar exterior drift: $a\in \RR^d\setminus (\close{C}\cup C^{\sharp})$; \item\label{case:F}polar boundary drift: $a\in\partial C^{\sharp}\setminus\{0\}$. 
\end{enumerate} These cases will be analyzed in Theorems \ref{thm:case:A}, \ref{thm:case:B}, \ref{thm:case:C}, \ref{thm:case:D}, \ref{thm:case:E} and \ref{thm:case:F}, respectively. Our results show in particular that the exponential decreasing rate $e^{-\gamma}$ in \eqref{eq:exit_asymptotic} is related to the distance between the drift and the cone by the formula \begin{equation} \label{eq:exp_rate} \gamma=\frac{1}{2}d(a,C)^2=\frac{1}{2}\min_{y\in \close{C}}\vert a-y\vert^2. \end{equation} As for the polynomial part $t^{-\alpha}$ in \eqref{eq:exit_asymptotic}, it depends on the case under consideration and reflects the local geometry of the cone at the point(s)\ that minimize the distance to the drift, plus the local geometry at the {\em contact points} between $\partial\Theta$ and the hyperplane orthogonal to the drift in case \ref{case:F}. We would like to point out that the formula for $\gamma$ obtained in \cite{PuRo08} in the case of the Weyl chamber of type $A$ is the same as ours. Indeed, though it is not mentioned there, the vector $f$ obtained in \cite{PuRo08} via the construction of a {\em stable partition} of the drift is the projection of the drift on the Weyl chamber, and their formula (4.10) reads $\gamma=\vert a-f\vert^2/2$, as the reader can check. \section{Assumptions on the cone and statements of results} Though our results are stated precisely in Theorems \ref{thm:case:A}, \ref{thm:case:B}, \ref{thm:case:C}, \ref{thm:case:D}, \ref{thm:case:E} and \ref{thm:case:F}, we would like to give now a brief overview as well as precise statements. \subsection{Assumptions on the cone} Our main assumption on the cones studied here is the following: \begin{enumerate}[label={\rm(C\arabic{*})},ref={\rm(C\arabic{*})}] \item\label{hypothesis1}The set $\Theta=\mathbb S^{d-1}\cap C$ is {\em normal}, that is, piecewise infinitely differentiable. 
\end{enumerate} With this assumption (see \cite[page 169]{Ch84}), there exists a complete set of eigenfunctions $(m_j)_{j\geqslant 1}$ orthonormal w.r.t.\ the surface measure on $\Theta$ with corresponding eigenvalues $0<\lambda_1<\lambda_2\leqslant \lambda_3\leqslant\cdots$, satisfying for any $j\geqslant 1$ \begin{equation} \label{eq:eigenfunctions} \begin{cases} L_{\SS^{d-1}} m_j=-\lambda_j m_j & \mbox{on}\quad \Theta,\\ m_j=0 &\mbox{on}\quad \partial \Theta, \end{cases} \end{equation} where $L_{\SS^{d-1}}$ denotes the Laplace-Beltrami operator on $\SS^{d-1}$. We shall say that the cone is normal if $\Theta$ is normal. For any $j\geqslant 1$, we set \begin{equation} \label{eq:alpha_j} \alpha_j=\sqrt{\lambda_j+({d}/{2}-1)^2} \end{equation} and \begin{equation} \label{eq:p_j} p_j=\alpha_j-({d}/{2}-1). \end{equation} \begin{example} \label{ex:dim2} In dimension $2$, any (connected and proper) open cone is a rotation of \begin{equation*} \label{eq:any_cone} \{\rho e^{i\theta}: \rho> 0, 0<\theta < \beta\} \end{equation*} for some $\beta\in(0,2\pi]$, see Figure \ref{fig:cones}. A direct computation starting from Equation \eqref{eq:eigenfunctions} yields $\lambda_j = (j\pi/\beta)^2$, and thus \begin{equation*} p_j = \alpha_j = j\pi/\beta, \end{equation*} for any $j\geqslant 1$. Further, the eigenfunctions \eqref{eq:eigenfunctions} are given in polar coordinates by \begin{equation} \label{eq:expression_eigenfunctions_2} m_j(\theta)=\sqrt{\frac{2}{\beta}}\sin\left(\frac{j\pi \theta}{\beta}\right),\quad \forall j\geqslant 1, \end{equation} where the factor $\sqrt{2/\beta}$ comes from the normalization $\int_0^\beta m_j(\theta)^2\text{d}\theta =1$. \end{example} \unitlength=0.6cm \begin{figure} \caption{Cones $C$ with opening angle $\beta$ and polar cones $C^\sharp$ in dimension $2$.
The set $\Theta$ (the arc of a circle) and its boundary are particularly important in our analysis.} \label{fig:cones} \end{figure} The functions $m_j$ and constants $\alpha_j$ are particularly important in this study because they allow us to write a series expansion for the heat kernel of the cone (Lemma~\ref{lemma:heat_kernel}) to which the non-exit probability is explicitly related (Lemma~\ref{lemma:expression_heat_kernel}). Cases \ref{case:A}, \ref{case:B} and \ref{case:C} are treated with full generality under the sole assumption \ref{hypothesis1}. Thus we extend the corresponding results of Pucha{\l}a and Rolski in \cite{PuRo08} about Weyl chambers of type $A$ in these cases. (Note that case \ref{case:A} is new since the polar cone of a Weyl chamber of type $A$ has an empty interior, whereas case \ref{case:B} has already been settled in \cite{BaSm97}, but is presented here for the sake of completeness.) Cases \ref{case:D}, \ref{case:E} and \ref{case:F} will be considered under an additional smoothness assumption on the cone that excludes Weyl chambers from our analysis. The reason is that we will need estimates for the heat kernel of the cone at boundary points, and those are only available (to our knowledge)\ in the case of smooth cones or, on the other hand, in the case of Weyl chambers. More precisely, we shall assume in these cases that: \begin{enumerate}[label={\rm(C\arabic{*})},ref={\rm(C\arabic{*})}] \setcounter{enumi}{1} \item\label{hypothesis2}The set $\Theta=\mathbb S^{d-1}\cap C$ is real-analytic.\footnote{A domain $\Omega\subset \mathbb R^d$ is real-analytic if at each point $x\in \partial\Omega$ there is a ball $B(x,r)$ with $r>0$ and a one-to-one mapping $\psi$ of $B(x,r)$ onto a certain domain $D\subset \mathbb R^d$ such that (i) $\psi(B(x,r)\cap \Omega)\subset [0,\infty)^d$, (ii) $\psi(B(x,r)\cap \partial\Omega)\subset \partial([0,\infty)^d)$, (iii) $\psi$ and $\psi^{-1}$ are real-analytic functions on $B(x,r)$ and $D$, respectively.
This is equivalent to the fact that each point of $\partial\Omega$ has a neighborhood in which $\partial\Omega$ is the graph of a real-analytic function of $d-1$ coordinates. We refer to \cite[section 6.2]{GiTr83} for more details.\label{footnote:real-analytic}} \end{enumerate} Notice that under this assumption $\Theta$ is normal (in other words, \ref{hypothesis2} implies \ref{hypothesis1}). We have already mentioned the formula for the exponential decreasing rate: \begin{equation*} \gamma=\frac{1}{2}d(a,C)^2, \end{equation*} and the reader can already imagine the importance of the set \begin{equation*} \Pi(a)=\{y\in\close{C}: \vert a-y\vert=d(a,C)\}. \end{equation*} Indeed, the formula for the non-exit probability involves an integral of Laplace's type, and only neighborhoods of the points of $\Pi(a)$ will contribute to the asymptotics. It follows by elementary topological arguments that $\Pi(a)$ is a non-empty compact set. In cases \ref{case:A}, \ref{case:B}, \ref{case:C}, \ref{case:D} and \ref{case:F}, this set is a singleton ($\{0\}$ or $\{a\}$ according to the case), but in case \ref{case:E} it may have infinitely many points. Since we are not able to handle the case where $\Pi(a)$ has an accumulation point, we shall assume (in case \ref{case:E} only) that \begin{enumerate}[label={\rm(C\arabic{*})},ref={\rm(C\arabic{*})}] \setcounter{enumi}{2} \item\label{hypothesis3}The set $\Pi(a)$ is finite. \end{enumerate} This holds, for example, if the cone is convex. Our final comment concerns the case \ref{case:F}. Surprisingly, it is the most difficult: it is a mixture between cases \ref{case:A} and \ref{case:B}, and its analysis reveals an unexpected (at first sight) contribution of the contact points (see section \ref{subsec:case:F} for a precise definition) between $\partial\Theta$ and the hyperplane orthogonal to the drift.
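For a convex cone such as the quarter plane, the set $\Pi(a)$ reduces to the single point $p_C(a)$, and the rate $\gamma=\frac{1}{2}d(a,C)^2$ is immediate to compute. A minimal sketch (the quarter plane is chosen purely for illustration):

```python
def proj_quarter_plane(a):
    """Projection p_C(a) of a point a onto the closed quarter plane [0, inf)^2."""
    return (max(a[0], 0.0), max(a[1], 0.0))

def gamma(a):
    """Exponential decay rate gamma = d(a, C)^2 / 2, C being the quarter plane."""
    p = proj_quarter_plane(a)
    return ((a[0] - p[0]) ** 2 + (a[1] - p[1]) ** 2) / 2.0

# Polar interior drift (case A): Pi(a) = {0}, gamma = |a|^2 / 2
print(proj_quarter_plane((-1.0, -1.0)), gamma((-1.0, -1.0)))
# Non-polar exterior drift (case E): projection lies on an edge of the cone
print(proj_quarter_plane((1.0, -1.0)), gamma((1.0, -1.0)))
# Interior drift (case C): a lies in C, so gamma = 0
print(proj_quarter_plane((1.0, 2.0)), gamma((1.0, 2.0)))
```

For the quarter plane the projection is simply a componentwise clamp, which makes the six-case classification easy to experiment with.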
Here again, we shall add a technical assumption, namely: \begin{enumerate}[label={\rm(C\arabic{*})},ref={\rm(C\arabic{*})}] \setcounter{enumi}{3} \item\label{hypothesis4}The set of contact points $\Theta_c$ is finite. \end{enumerate} Moreover, we will consider case \ref{case:F} only in dimensions $2$ (where \ref{hypothesis4} always holds) and $3$. The reason is that we are technically not able to handle more general cases. \subsection{Main results} The following theorem summarizes our results. Some important comments may be found below. \begin{theorem-sum} \label{thm:generic_theorem}Let $C$ be a normal cone in $\RR^d$ {\rm(}hypothesis \ref{hypothesis1}{\rm)}. For Brownian motion with drift $a$, in each of the six cases \ref{case:A}, \ref{case:B}, \ref{case:C}, \ref{case:D}, \ref{case:E} and \ref{case:F}, the asymptotic behavior of the non-exit probability is given by \begin{equation*} \PP_x[\tau_C>t]=\kappa h(x) t^{-\alpha} e^{-\gamma t}(1+o(1)),\quad t\to\infty, \end{equation*} where \begin{equation*} \gamma=\frac{1}{2}d(a,C)^2, \end{equation*} and \begin{equation*} \alpha= \begin{cases} \alpha_1+1 & \mbox{if $a$ is a polar interior drift {\rm(}case \ref{case:A}{\rm)},}\\ p_1/2 & \mbox{if $a=0$ {\rm(}case \ref{case:B}{\rm)},}\\ 0 & \mbox{if $a$ is an interior drift {\rm(}case \ref{case:C}{\rm)},}\\ 1/2 & \mbox{if $a$ is a boundary drift {\rm(}case \ref{case:D}{\rm)} and $\Theta$ is real-analytic \ref{hypothesis2},}\\ 3/2 & \mbox{if $a$ is a non-polar exterior drift {\rm(}case \ref{case:E}{\rm)}, \ref{hypothesis2} and \ref{hypothesis3},}\\ p_1/2+1 & \mbox{if $a$ is a polar boundary drift {\rm(}case \ref{case:F}{\rm)} and $C$ is two-dimensional.} \end{cases} \end{equation*} \end{theorem-sum} The constants $\kappa$ and the functions $h(x)$ are also explicit, but their expressions are rather complicated in some cases. For this reason they are given in the corresponding sections.
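The two-dimensional values $\lambda_1=(\pi/\beta)^2$ and $p_1=\alpha_1=\pi/\beta$ from Example \ref{ex:dim2}, which enter the exponents above, are easy to confirm numerically. A small sketch diagonalizing a finite-difference Dirichlet Laplacian on $(0,\beta)$ (the grid size is an arbitrary choice):

```python
import math
import numpy as np

def first_dirichlet_eigenvalue(beta, n=400):
    """Smallest eigenvalue of -d^2/dtheta^2 on (0, beta) with Dirichlet
    boundary conditions, via a standard second-order finite-difference grid."""
    h = beta / (n + 1)
    # Tridiagonal matrix of -u'' with u(0) = u(beta) = 0
    main = 2.0 * np.ones(n) / h**2
    off = -np.ones(n - 1) / h**2
    A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.eigvalsh(A)[0]

beta = 3 * math.pi / 4           # opening angle of the cone
lam1 = first_dirichlet_eigenvalue(beta)
alpha1 = math.sqrt(lam1)         # in dimension 2, alpha_1 = sqrt(lambda_1)
print(alpha1, math.pi / beta)    # both close to pi / beta
```

The discretization error of this scheme is $O(h^2)$, so a few hundred grid points already reproduce $\pi/\beta$ to several digits.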
As a matter of example, let us give them in case \ref{case:A}: \begin{equation*} \kappa_A=\frac{1}{2^{\alpha_1}\Gamma(\alpha_1+1)}\int_C e^{\sclr{a}{y}}\vert y\vert^{p_1}m_1(\vec{y}) \text{d}y,\quad h_A(x)=e^{\sclr{-a}{x}}\vert x\vert^{p_1}m_1(\vec{x}), \end{equation*} where $m_1$ is defined in \eqref{eq:eigenfunctions} and $\alpha_1$ in \eqref{eq:alpha_j}, and where for any $y\not=0$, we denote by $\vec{y}=y/\vert y\vert$ its projection on the unit sphere $\SS^{d-1}$. Above, case \ref{case:F} is presented in dimension $2$ only, because the value of $\alpha$ in dimension $3$ is quite complicated (we refer to Theorem \ref{thm:case:F3} for the full statement). \section{The example of two-dimensional Brownian motion in cones} For the one-dimensional Brownian motion and the cone $C=(0,\infty)$, there are three regimes for the asymptotics of the non-exit probability, according to the sign of the drift $a\in\mathbb R$. Precisely, for any $x>0$, as $t\to\infty$ one has, with obvious notations (see \cite[section 2.8]{KaSh91}), \begin{equation} \label{eq:asymptotic_1D} \PP_x[\tau_{(0,\infty)}>t] = (1+o(1))\left\{\begin{array}{ccc} \displaystyle\frac{2xe^{-a x} e^{-ta^2/2}}{\sqrt{2\pi} a^2 t^{3/2}} & \text{if} & a<0, \\ \displaystyle\frac{\sqrt{2}x}{\sqrt{\pi t}} & \text{if} & a=0, \\ \displaystyle 1-e^{-2ax}& \text{if} & a>0. \end{array}\right. \end{equation} For some specific two-dimensional cones, the asymptotics of the non-exit probability is easy to determine. This is for example the case of the upper half-plane, which is essentially a one-dimensional situation. It is also an easy task to deal with the quarter plane $Q$. Indeed, by independence of the coordinates $(B_t^{(1)},B_t^{(2)})$ of the Brownian motion $B_t$, the non-exit probability can be written as the product \begin{equation*} \mathbb P_{x}[\tau_Q>t] = \mathbb P_{x_1}[\tau_{(0,\infty)}(B^{(1)})>t]\cdot\mathbb P_{x_2}[\tau_{(0,\infty)}(B^{(2)})>t], \end{equation*} where $x=(x_1,x_2)$.
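The three one-dimensional regimes can be cross-checked against the classical closed-form expression $\PP_x[\tau_{(0,\infty)}>t]=\Phi\big((x+at)/\sqrt{t}\big)-e^{-2ax}\Phi\big((at-x)/\sqrt{t}\big)$, with $\Phi$ the standard normal distribution function (see \cite[section 2.8]{KaSh91}). The sketch below compares it, for a negative drift, with the expansion $2xe^{-ax}e^{-ta^2/2}/(\sqrt{2\pi}\,a^2t^{3/2})$ obtained from a Mills-ratio computation; the parameter values are arbitrary.

```python
import math

def Phi(z):
    """Standard normal CDF, via erfc for accuracy in the far tail."""
    return 0.5 * math.erfc(-z / math.sqrt(2.0))

def exit_prob_exact(x, a, t):
    """P_x[tau_(0,inf) > t] for Brownian motion with drift a started at x > 0."""
    s = math.sqrt(t)
    return Phi((x + a * t) / s) - math.exp(-2.0 * a * x) * Phi((a * t - x) / s)

def exit_prob_asymptotic(x, a, t):
    """Leading term for a < 0: 2x e^{-ax} e^{-t a^2/2} / (sqrt(2 pi) a^2 t^{3/2})."""
    return 2.0 * x * math.exp(-a * x - t * a * a / 2.0) \
        / (math.sqrt(2.0 * math.pi) * a * a * t ** 1.5)

x, a = 1.0, -0.5
for t in (100.0, 500.0):
    ratio = exit_prob_exact(x, a, t) / exit_prob_asymptotic(x, a, t)
    print(t, ratio)   # the ratio approaches 1 as t grows
```

The convergence is slow (the first correction is of order $1/(a^2t)$), which is why moderate values of $t$ still show a visible deviation from $1$.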
Denoting by $a=(a_1,a_2)$ the coordinates of the drift and making use of \eqref{eq:asymptotic_1D}, one readily deduces the asymptotics $\mathbb P_x[\tau_Q>t]=\kappa h(x)t^{-\alpha}e^{-\gamma t}(1+o(1))$, as summarized in Figure \ref{fig:quarter_plane}, where the value of $\alpha$ is given according to the position of the drift $(a_1,a_2)$ in the quarter plane. We focus on $\alpha$ and not on $\gamma$, since the value of $\gamma$ is always obtained in the same way. \unitlength=0.6cm \begin{figure} \caption{Value of $\alpha$ in terms of the position of the drift $(a_1,a_2)$ in the plane (case of the quarter plane)} \label{fig:quarter_plane} \end{figure} More generally, our results show that the value of $\alpha$ for any two-dimensional cone is given as in Figure \ref{fig:cone}. \unitlength=0.6cm \begin{figure} \caption{Value of $\alpha$ in terms of the position of the drift $(a_1,a_2)$ in the plane (case of a general cone of opening angle $\beta$, for which $\alpha_1=\pi/\beta$, see Figure \ref{fig:cones})} \label{fig:cone} \end{figure} This can be understood as follows: when the drift is negative (i.e., when it belongs to the polar cone $C^\sharp$), one sees the influence of the vertex of the cone ($\alpha$ is expressed with the opening angle $\beta$) since the trajectories that do not leave the cone will typically stay close to the origin. In all other cases, the Brownian motion will move away from the vertex, and will see the cone as a half-space (boundary drift and non-polar exterior drift) or as a whole-space (interior drift). \section{Preliminary results} \label{sec:heat_kernel_cone} In this section we introduce all necessary tools for our study. We first give the expression of the non-exit probability \eqref{eq:def_exit_probability} in terms of the heat kernel of the cone $C$ (see Lemmas \ref{lemma:expression_heat_kernel} and \ref{lem:non_exit_expression}).
Then we guess the value of the exponential decreasing rate of this probability, by simple considerations on its integral expression. Finally we present our general strategy to compute the asymptotics of the non-exit probability. \subsection{Expression of the non-exit probability} In what follows we consider $(B_t)_{t\geqslant0}$, a $d$-dimensional Brownian motion with drift $a$ and identity covariance matrix. Under $\PP_x$, the Brownian motion starts at $x\in\mathbb R^d$. The lemma hereafter gives an expression of the non-exit probability for Brownian motion with drift $a$ in terms of an integral involving the transition probabilities of the Brownian motion with zero drift killed at the boundary of the cone. This is a fairly standard result (see \cite[Proposition 2.2]{PuRo08} for example) and an easy consequence of Girsanov's theorem. Notice that this result is not at all specific to cones and is valid for any domain in $\RR^d$. \begin{lemma} \label{lemma:expression_heat_kernel} Let $p^C(t,x,y)$ denote the transition probabilities of the Brownian motion with zero drift killed at the boundary of the cone $C$. We have \begin{equation} \label{exittime} \PP_x[\tau_C>t]=e^{\sclr{-a}{x}-t\vert a\vert^2/2}\int_C e^{\sclr{a}{y}}p^C(t,x,y) \textnormal{d}y,\quad \forall t\geqslant 0. \end{equation} \end{lemma} We shall now write a series expansion for the transition probabilities of the Brownian motion killed at the boundary of $C$ (or equivalently, see \cite[section 4]{Hu56}, for the heat kernel $p^C(t,x,y)$ of the cone $C$), as given in~\cite{BaSm97}. We denote by $I_{\nu}$ the modified Bessel function of order $\nu$: \begin{equation} \label{bessel} I_{\nu}(x)=\frac{2({x}/{2})^{\nu}}{\sqrt{\pi}\Gamma(\nu+1/2)}\int_{0}^{\frac{\pi}{2}}(\sin t)^{2\nu}\cosh(x\cos t)\text{d}t=\sum_{m=0}^{\infty}\frac{({x}/{2})^{\nu+2m}}{m!\Gamma(\nu+m+1)}.
\end{equation} It satisfies the second order differential equation \begin{equation*} I_\nu ''(x)+\frac{1}{x}I_\nu'(x)=\left(1+\frac{\nu^2}{x^2}\right) I_\nu(x). \end{equation*} Its leading asymptotic behavior near $0$ is given by: \begin{equation} \label{eq:Bessel_equivalent_0} I_{\nu}(x)=\frac{x^{\nu}}{2^{\nu}\Gamma(\nu+1)}(1+o(1)), \quad x\to 0. \end{equation} We refer to \cite{Wa44} for proofs of the facts above and for any further result. \begin{lemma}[\cite{BaSm97}] \label{lemma:heat_kernel} Under \ref{hypothesis1}, the heat kernel of the cone $C$ has the series expansion \begin{equation} \label{heatkernel} p^C(t,x,y)=\frac{e^{-\frac{\vert x\vert^2 +\vert y\vert^2}{2t}}}{t(\vert x\vert\vert y\vert)^{{d}/{2}-1}}\sum_{j=1}^{\infty}I_{\alpha_j}\left(\frac{\vert x\vert \vert y\vert}{t}\right)m_j(\vec{x})m_j(\vec{y}), \end{equation} where the convergence is uniform for $(t,x,y)\in [T,\infty)\times\{x\in C:\vert x\vert\leqslant R\}\times C$, for any positive constants $T$ and $R$. \end{lemma} Making the change of variables $y\mapsto ty$ in \eqref{exittime} and using \eqref{heatkernel}, we easily obtain the following lemma, where the expression of the non-exit probability now involves an integral of Laplace's type. \begin{lemma} \label{lem:non_exit_expression} Let $C$ be a normal cone. For Brownian motion with drift $a$, the non-exit probability is given by \begin{equation} \label{exittime_after_change} \PP_x[\tau_C>t]=e^{\sclr{-a}{x}-\vert x\vert^2/(2t)+\vert x\vert^2/2}t^{d/2}\int_C e^{\vert y\vert^2/2}p^C(1,x,y) e^{-t\vert a-y\vert^2/2}\textnormal{d}y,\quad \forall t\geqslant 0. \end{equation} \end{lemma} \subsection{General strategy} The aim now is to understand the asymptotic behavior as $t\to\infty$ of the integral in the right-hand side of \eqref{exittime_after_change}. 
First, we notice that it suffices to analyze the asymptotic behavior of \begin{equation} \label{eq:definition_of_I} I(t)=t^{d/2}\int_C e^{\vert y\vert^2/2}p^C(1,x,y) e^{-t\vert a-y\vert^2/2}\text{d}y. \end{equation} To do this, we shall use Laplace's method \cite[Chapter 5]{Co65}. The basic question when applying this method is to locate the points $y\in \close{C}$ where the function \begin{equation*} \label{eq:function_to_max} \vert a-y\vert^2/2 \end{equation*} in the exponential reaches its minimum value, for it is expected that only a neighborhood of these points will contribute to the asymptotics. And indeed, we shall prove that the exponential decreasing rate $e^{-\gamma}$ of the non-exit probability in \eqref{eq:exit_asymptotic} is given, for the six cases \ref{case:A}--\ref{case:F}, by \eqref{eq:exp_rate}, namely \begin{equation*} \gamma=\frac{1}{2}\min_{y\in \close{C}}\vert a-y\vert^2=\frac{1}{2}d(a,C)^2. \end{equation*} Specifically, let $\Pi(a)$ be the set of minimum points, that is, \begin{equation*} \Pi(a)=\{y\in\close{C}: \vert a-y\vert=d(a,C)\}. \end{equation*} It follows by elementary topological arguments that $\Pi(a)$ is a non-empty compact set. The lemma below shows that if the domain of integration is restricted to the complement of any neighborhood of $\Pi(a)$, then the integral in \eqref{eq:definition_of_I} becomes negligible w.r.t.\ the expected exponential rate $e^{-t\gamma}$. To be precise, consider the open $\delta$-neighborhood of $\Pi(a)$: \begin{equation*} \Pi_{\delta}(a)=\{y\in\mathbb R^d: d(y,\Pi(a))<\delta\}. \end{equation*} \begin{lemma} \label{lem:bound_for_non_contributive_integral} For any $\delta>0$, there exists $\eta>0$ such that \begin{equation*} \int_{C\setminus\Pi_{\delta}(a)}e^{\vert y\vert^2/2}p^C(1,x,y)e^{-t\vert a-y\vert^2/2} \textnormal{d}y=O(e^{-t(\gamma+\eta)}),\quad t\to\infty, \end{equation*} where $\gamma$ is the quantity defined in \eqref{eq:exp_rate}.
\end{lemma} \begin{proof} Let $\delta>0$ and define \begin{equation*} J_\delta(t)=\int_{C\setminus\Pi_{\delta}(a)}e^{\vert y\vert^2/2}p^C(1,x,y)e^{-t\vert a-y\vert^2/2} \text{d}y. \end{equation*} From the inequality $ \vert y\vert^2\leqslant (\vert y-a\vert+\vert a\vert)^2\leqslant 2\vert y-a\vert^2+2\vert a\vert^2 $, we obtain the upper bound $ e^{\vert y\vert^2/2}\leqslant ce^{\vert y-a\vert^2} $, from which we deduce that \begin{equation*} 0\leqslant J_\delta(t)\leqslant c \int_{C\setminus\Pi_{\delta}(a)}p^C(1,x,y)e^{-s\vert a-y\vert^2/2} \text{d}y, \end{equation*} where $s=t-2$. Since $y\mapsto \vert a-y\vert^2/2$ is coercive and continuous, its infimum on the closed set $\close{C}\setminus\Pi_{\delta}(a)$ is a minimum. Thus, by definition of $\Pi(a)$, we have \begin{equation*} \inf_{y\in\close{C}\setminus\Pi_{\delta}(a)}\vert a-y\vert^2/2>\gamma. \end{equation*} In other words, there exists $\eta>0$ such that $\vert a-y\vert^2/2\geqslant \gamma+\eta$ on $\close{C}\setminus\Pi_{\delta}(a)$. Hence, for all $s\geqslant 0$, we have \begin{equation*} 0\leqslant J_\delta(t)\leqslant c e^{-s(\gamma+\eta)} \int_{C\setminus\Pi_{\delta}(a)}p^C(1,x,y)\text{d}y\leqslant c e^{-s(\gamma+\eta)}. \end{equation*} This concludes the proof of the lemma. \end{proof} It is now clear that the strategy to analyze the non-exit probability is to determine the asymptotic behavior of the integral $I_{\delta}(t)$, which is defined by \begin{equation} \label{eq:defintion_of_I_delta} I_{\delta}(t)=t^{d/2}\int_{C\cap\Pi_{\delta}(a)}e^{\vert y\vert^2/2}p^C(1,x,y)e^{-t\vert a-y\vert^2/2} \text{d}y, \end{equation} and to check that it has the right exponential decreasing rate $e^{-\gamma}$, as expected. Indeed, in this case, the asymptotic behavior of $I(t)$, and consequently that of the non-exit probability, can be derived from the asymptotics of $I_{\delta}(t)$, as explained in the next lemma, which will constitute our general proof strategy.
\begin{lemma} \label{lem:general_proof_strategy} Suppose that $g(t)$ is a function satisfying conditions \ref{fit} and \ref{sit} below: \begin{enumerate}[label={\rm(\roman{*})},ref={\rm(\roman{*})}] \item\label{fit}$g(t)=\kappa t^{-\alpha}e^{-t\gamma}$ for some $\kappa>0$ and $\alpha\in\RR$; \item\label{sit}For all $\epsilon>0$, there exists $\delta>0$ such that \begin{equation*} 1-\epsilon\leqslant\liminf_{t\to\infty}\frac{I_{\delta}(t)}{g(t)}\leqslant\limsup_{t\to\infty}\frac{I_{\delta}(t)}{g(t)}\leqslant 1+\epsilon. \end{equation*} \end{enumerate} Then $I(t)= g(t)(1+o(1))$ as $t\to\infty$. \end{lemma} \begin{proof} It follows from Lemma \ref{lem:bound_for_non_contributive_integral} as an easy exercise. \end{proof} In our study of $I_\delta(t)$, it will be important that the elements of $\Pi(a)$ be isolated from each other. By compactness, this condition is equivalent to the fact that $\Pi(a)$ be finite. In that case, for $\delta>0$ small enough, $I_{\delta}(t)$ decomposes into the finite sum $$I_{\delta}(t)=t^{d/2}\sum_{p\in \Pi(a)}\int_{C\cap B(p,\delta)}e^{\vert y\vert^2/2}p^C(1,x,y)e^{-t\vert a-y\vert^2/2} \text{d}y,$$ where $B(p,\delta)$ does not contain any other minimum point than $p$. The contribution of each minimum point $p$ can then be analyzed separately. The reason for this restriction is simply that we do not know how to handle the general case. In most cases, it is not much of a restriction: indeed, for a convex cone (or any convex set), the set $\Pi(a)$ reduces to a single point, namely the projection $p_C(a)$ of $a$ on $\close{C}$.
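For a concrete illustration, consider a two-dimensional convex wedge of opening angle $\beta\leqslant\pi$. The projection $p_C(a)$, and hence the rate $\gamma=\frac{1}{2}d(a,C)^2$ of \eqref{eq:exp_rate}, can be computed by comparing $a$ with its projections onto the two boundary rays and the apex. The sketch below is illustrative only (the helper names `proj_wedge` and `gamma_rate` are ours, not from the text); it also checks the characterizing inequality $\sclr{a-p}{y-p}\leqslant 0$ on sample points of $\close{C}$.

```python
import numpy as np

def proj_wedge(a, beta):
    """Projection p_C(a) of a onto the closed convex wedge
    C = {(r cos t, r sin t) : r >= 0, 0 <= t <= beta}, with beta <= pi."""
    a = np.asarray(a, dtype=float)
    th = np.arctan2(a[1], a[0])
    if 0.0 <= th <= beta:
        return a.copy()                      # a already lies in the closed wedge
    cands = [np.zeros(2)]                    # the apex
    for u in (np.array([1.0, 0.0]), np.array([np.cos(beta), np.sin(beta)])):
        cands.append(max(a @ u, 0.0) * u)    # projection onto each edge ray
    return min(cands, key=lambda p: np.linalg.norm(a - p))

def gamma_rate(a, beta):
    """gamma = (1/2) d(a, C)^2, the exponential decay rate of eq. (eq:exp_rate)."""
    return 0.5 * np.linalg.norm(np.asarray(a) - proj_wedge(a, beta)) ** 2

beta = np.pi / 2                                               # quarter plane
assert gamma_rate((2.0, 1.0), beta) == 0.0                     # interior drift: p = a
assert np.allclose(proj_wedge((-1.0, 2.0), beta), (0.0, 2.0))  # boundary projection
assert np.isclose(gamma_rate((-1.0, 2.0), beta), 0.5)
assert np.allclose(proj_wedge((-1.0, -1.0), beta), (0.0, 0.0)) # polar drift: p = 0
assert np.isclose(gamma_rate((-1.0, -1.0), beta), 1.0)

# obtuse-angle inequality <a - p, y - p> <= 0 on sample points of the wedge
a, p = np.array([-1.0, 2.0]), proj_wedge((-1.0, 2.0), beta)
for t in np.linspace(0, beta, 25):
    for r in (0.0, 0.5, 1.0, 3.0):
        y = r * np.array([np.cos(t), np.sin(t)])
        assert (a - p) @ (y - p) <= 1e-12
```

For a non-convex wedge ($\beta>\pi$) the same candidate search still enumerates $\Pi(a)$, but the minimizer need no longer be unique, which is the situation discussed next.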
Though the projection may not be unique in general (that is, when the cone is not convex), it is still true in cases \ref{case:A}, \ref{case:B}, \ref{case:C}, \ref{case:D} and \ref{case:F} that $\Pi(a)$ has only one element, namely $p=0$ (cases \ref{case:A}, \ref{case:B}, \ref{case:F}) or $p=a$ (cases \ref{case:C} and \ref{case:D}), and that this point satisfies the usual property $\sclr{a-p}{y-p}\leqslant 0$ for all $y\in\close{C}$. Therefore, we call this point {\em the} projection and write it $p_C(a)$. The condition that $\Pi(a)$ be finite is a restriction only in case \ref{case:E}: depending on the cone, the minimum could be reached at infinitely many different points, but we leave this general setting as an open problem. \section{Precise statements and proofs of the theorems \ref{case:A}--\ref{case:F}} \subsection{Case \ref{case:A} (polar interior drift)} In this section, we study the case where the drift $a$ belongs to the interior of the polar cone $C^\sharp$. It might be thought of as the natural generalization of the one-dimensional negative drift case. Define (with $p_1$ as in \eqref{eq:p_j}) \begin{equation} \label{eq:harmonic_Brownian} u(x)=\vert x\vert^{p_1}m_1(\vec{x}). \end{equation} The function $u$ is the unique (up to a multiplicative constant) positive harmonic function of Brownian motion killed at the boundary of $C$. We also define (with $\alpha_1$ as in \eqref{eq:alpha_j}) \begin{equation*} \kappa_A=\frac{1}{2^{\alpha_1}\Gamma(\alpha_1+1)}\int_C e^{\sclr{a}{y}}u(y) \text{d}y, \end{equation*} as well as \begin{equation*} h_A(x)=e^{\sclr{-a}{x}}u(x). \end{equation*} Notice that $\kappa_A$ is finite because $a\in \open{(C^\sharp)}$ (see Lemma \ref{polarconeinterior}). Our main result in this section is the following: \begin{theorem} \label{thm:case:A} Let $C$ be a normal cone.
If the drift $a$ belongs to the interior of the polar cone $C^{\sharp}$, then \begin{equation*} \PP_x[\tau_C>t]= \kappa_A h_A(x)t^{-(\alpha_1+1)}e^{-t\vert a\vert^2/2}(1+o(1)),\quad t\to\infty. \end{equation*} \end{theorem} \begin{proof} Since $a\in \open{(C^\sharp)}$, the projection $p_C(a)$ is $0$ and $\gamma=\vert a\vert^2/2$. According to our general strategy, we focus our attention on \begin{equation*} \label{eq:starting_formula_case_A} I_\delta(t)=t^{d/2}\int_{\{y\in C:\vert y \vert\leqslant\delta\}} e^{\vert y\vert^2/2}p^C(1,x,y) e^{-t\vert a-y\vert^2/2}\text{d}y. \end{equation*} Let $\epsilon>0$ be given. It follows from Lemma~\ref{equibrotue} below that there exists $\delta>0$ such that $p^C(1,x,y)$ is bounded from above and below on $\{y\in C: \vert y\vert\leqslant \delta\}$ by \begin{equation*} (1\pm\epsilon) bu(x)u(y)e^{-(\vert x\vert^2+\vert y\vert^2)/2}, \end{equation*} where $b=(2^{\alpha_1}\Gamma(\alpha_1+1))^{-1}$. Therefore, $I_\delta(t)$ is bounded from above and below by \begin{equation} \label{eq:bound_for_I_delta_case_A} (1\pm\epsilon) bu(x)e^{-\vert x\vert^2/2}t^{d/2}\int_{\{y\in C: \vert y\vert\leqslant \delta\}} u(y) e^{-t\vert a-y\vert^2/2}\text{d}y. \end{equation} By making the change of variables $v=ty$ and using the homogeneity of $u$, this expression becomes \begin{equation*}(1\pm\epsilon) bu(x)e^{-\vert x\vert^2/2}t^{-(\alpha_1+1)}e^{-t\vert a\vert^2/2}\int_{\{v\in C: \vert v\vert\leqslant t\delta\}} u(v) e^{\sclr{a}{v}-\vert v\vert^2/(2t)}\text{d}v. \end{equation*} Now, since $a\in \open{(C^\sharp)}$ implies that $\sclr{a}{v}\leqslant -c\vert v\vert$ for all $v\in C$, for some $c>0$ (see Lemma~\ref{polarconeinterior} below), the function $u(v)e^{\sclr{a}{v}}$ is integrable on $C$. Therefore, we can apply the dominated convergence theorem to obtain \begin{equation*} \int_{\{v\in C: \vert v\vert\leqslant t\delta\}} u(v) e^{\sclr{a}{v}-\vert v\vert^2/(2t)}\text{d}v=(1+o_\delta(1))\int_{C} u(v) e^{\sclr{a}{v}}\text{d}v, \quad t\to\infty. 
\end{equation*} Hence, the bound for $I_\delta(t)$ can finally be written as \begin{equation*} (1\pm\epsilon) \kappa_A u(x)e^{-\vert x\vert^2/2}t^{-(\alpha_1+1)}e^{-t\vert a\vert^2/2}(1+o_\delta(1)), \quad t\to\infty, \end{equation*} and a direct application of Lemma~\ref{lem:general_proof_strategy} gives \begin{equation*} I(t)= \kappa_A u(x)e^{-\vert x\vert^2/2}t^{-(\alpha_1+1)}e^{-t\vert a\vert^2/2}(1+o(1)), \quad t\to\infty. \end{equation*} The theorem then follows thanks to the expression \eqref{exittime_after_change} of the non-exit probability. \end{proof} We now state and prove a lemma that was used in the proof of Theorem \ref{thm:case:A}. Similar estimates can be found in \cite[section 5]{Ga09}. \begin{lemma} \label{equibrotue} We have \begin{equation*} \lim_{\vert y\vert\to 0}\frac{p^C(1,x,y)e^{(\vert x\vert^2+\vert y\vert^2)/2}}{u(x)u(y)}=(2^{\alpha_1}\Gamma(\alpha_1+1))^{-1} \end{equation*} uniformly on $\{x\in C:\vert x\vert\leqslant R\}$, for any positive constant $R$. \end{lemma} \begin{proof} For brevity, let us write $x=\rho\theta$ and $y=r\eta$, with $\rho, r>0$ and $\theta, \eta\in \Theta$, and set $M=\rho r$. It follows from the expression of the heat kernel \eqref{heatkernel} that \begin{equation*} \frac{p^C(1,\rho \theta,r\eta)e^{(\rho^2+r^2)/2}}{u(\rho\theta)u(r\eta)} = \sum_{j=1}^{\infty}\frac{I_{\alpha_j}(M)}{M^{\alpha_1}} \frac{m_j(\theta)}{m_1(\theta)}\frac{m_j(\eta)}{m_1(\eta)}. \end{equation*} Using equation \eqref{compfoncprop} from Lemma \ref{lemma_BaSm97} below, we find the upper bound (below and throughout, $c$ will denote a positive constant, possibly depending on the dimension $d$, which can take different values from line to line) \begin{equation} \label{eq:upper_bound} \left\vert \frac{I_{\alpha_j}(M)}{M^{\alpha_1}}\frac{m_j(\theta)}{m_1(\theta)}\frac{m_j(\eta)}{m_1(\eta)}\right\vert \leqslant\frac{c}{M^{\alpha_1}}\frac{I_{\alpha_j}(M)}{I_{\alpha_j}(1)}.
\end{equation} Now, using the integral expression \eqref{bessel} for $I_{\alpha_j}$, we obtain \begin{align*} \label{eq:two_estimations} I_{\alpha_j}(M)\leqslant & \frac{2\left(\frac{M}{2}\right)^{\alpha_j}}{\sqrt{\pi}\Gamma(\alpha_j+{1}/{2})}\cosh(M) \int_{0}^{\frac{\pi}{2}}(\sin t)^{2\alpha_j}\text{d}t,\\ I_{\alpha_j}(1)\geqslant & \frac{2\left(\frac{1}{2}\right)^{\alpha_j}}{\sqrt{\pi}\Gamma(\alpha_j+1/2)} \int_{0}^{\frac{\pi}{2}}(\sin t)^{2\alpha_j}\text{d}t.\nonumber \end{align*} We conclude that \begin{equation*} \frac{I_{\alpha_j}(M)}{I_{\alpha_j}(1)}\leqslant M^{\alpha_j}\cosh(M). \end{equation*} Using the latter estimate in \eqref{eq:upper_bound}, we deduce that \begin{equation*} \label{eq:upper_bound_d} \left\vert \frac{I_{\alpha_j}(M)}{M^{\alpha_1}}\frac{m_j(\theta)}{m_1(\theta)}\frac{m_j(\eta)}{m_1(\eta)}\right\vert \leqslant c M^{\alpha_j-\alpha_1}\cosh(M). \end{equation*} It is easily seen from equation \eqref{croissalp} in Lemma \ref{lemma_BaSm97} below that $\sum_{j=1}^{\infty}M^{\alpha_j-\alpha_1}\cosh(M)$ is a uniformly convergent series for $M\in[0,1-\epsilon]$, for any $\epsilon\in (0,1]$. This implies that the series \begin{equation*} \sum_{j=1}^\infty \frac{I_{\alpha_j}(M)}{M^{\alpha_1}}\frac{m_j(\theta)}{m_1(\theta)}\frac{m_j(\eta)}{m_1(\eta)} \end{equation*} is uniformly convergent for $(M,\theta,\eta)\in [0,1-\epsilon]\times \Theta\times \Theta$, for any $\epsilon\in (0,1]$. Therefore, we can take the limit term by term. Since \begin{equation*} \lim_{M\to 0}\frac{I_{\alpha_j}(M)}{M^{\alpha_1}}\frac{m_j(\theta)}{m_1(\theta)}\frac{m_j(\eta)}{m_1(\eta)}= \left\{\begin{array}{lcl} \displaystyle\frac{1}{2^{\alpha_1}\Gamma(\alpha_1+1)} &\text{if}&j=1,\\ 0 &\text{if}& j\geqslant 2, \end{array}\right.
\end{equation*} uniformly in $(\theta, \eta)\in \Theta\times\Theta$ (see \eqref{eq:Bessel_equivalent_0} and Lemma \ref{lemma_BaSm97} below), we reach the conclusion that \begin{equation*} \lim_{M\to 0}\sum_{j=1}^{\infty} \frac{I_{\alpha_j}(M)}{M^{\alpha_1}}\frac{m_j(\theta)}{m_1(\theta)}\frac{m_j(\eta)}{m_1(\eta)}=\frac{1}{2^{\alpha_1}\Gamma(\alpha_1+1)}, \end{equation*} where the convergence is uniform for $(\theta,\eta)\in \Theta\times \Theta$. The proof of Lemma \ref{equibrotue} is complete. \end{proof} The facts collected in the lemma below, concerning the eigenfunctions \eqref{eq:eigenfunctions}, are proved in~\cite{BaSm97}. \begin{lemma}[\cite{BaSm97}] \label{lemma_BaSm97} If $C$ is normal, then there exist two constants $0<c_1<c_2$ such that \begin{equation} \label{croissalp} c_1j^{{1}/{(d-1)}}\leqslant \alpha_j \leqslant c_2 j^{{1}/{(d-1)}},\quad\forall j\geqslant 1. \end{equation} In addition, there exists a constant $c$ such that \begin{equation} \label{compfoncprop} m_j^2(\eta)\leqslant \frac{cm_1^2(\eta)}{I_{\alpha_j}(1)},\quad \forall j\geqslant 1,\quad \forall \eta\in \Theta. \end{equation} \end{lemma} We conclude this section with a useful characterization of the interior of the polar cone, which was used in the proof of Theorem \ref{thm:case:A}: \begin{lemma} \label{polarconeinterior} The drift vector $a$ belongs to $\open{(C^{\sharp})}$ if and only if there exists $\delta>0$ such that $\sclr{a}{y}\leqslant -\delta\vert y\vert$ for all $y\in \close{C}$. \end{lemma} \begin{proof} Assume first that $a$ satisfies the above condition. For all $x$ such that $\vert a-x\vert<\delta$ and all $y\in C$, we have, by the Cauchy-Schwarz inequality, \begin{equation*} \sclr{x}{y}=\sclr{a}{y}+\sclr{x-a}{y}< -\delta\vert y\vert + \delta \vert y\vert=0, \end{equation*} hence $C^{\sharp}$ contains the open ball $B(a,\delta)$, and $a$ is an interior point. Conversely, suppose that there exists $r>0$ such that the closed ball $\close{B(a,r)}$ is included in $C^{\sharp}$.
It is easily seen that \begin{equation*} C^{\sharp}=\{x\in\mathbb R^d : \sclr{x}{u}\leqslant 0, \forall u\in\close{C}\cap \SS^{d-1}\}. \end{equation*} Since $\close{C}\cap\SS^{d-1}$ is a compact set, there exists a vector $u_0$ in this set such that \begin{equation*} m=\sclr{a}{u_0}=\max_{u\in\close{C}\cap\SS^{d-1}}\sclr{a}{u}. \end{equation*} Hence it remains to prove that $m<0$. To that aim, we select a family $\{x_1,\ldots,x_d\}$ of vectors of $\partial B(a,r)$ which forms a basis of $\RR^d$. One of them, say $x_1$, must satisfy $\sclr{x_1}{u_0}<0$, since otherwise we would have $\sclr{x_i}{u_0}=0$ for all $i\in\{1,\ldots,d\}$, and therefore $u_0=0$, contradicting the fact that $u_0$ is a unit vector. Let $\close{x}_1=2a-x_1$ be the point diametrically opposite to $x_1$ on $\partial B(a,r)$. Since $\sclr{x_1}{u_0}<0$ and $\sclr{\close{x}_1}{u_0}\leqslant 0$, it follows that $m=\sclr{a}{u_0}=(\sclr{x_1}{u_0}+\sclr{\close{x}_1}{u_0})/2<0$. \end{proof} \subsection{Case \ref{case:B} (zero drift)} The case of a driftless Brownian motion, which we consider in the present section, has already been investigated by many authors, see \cite{Sp58,DB87,DB88,BaSm97}. Define (with $\alpha_1$ as in \eqref{eq:alpha_j} and $u(y)$ as in \eqref{eq:harmonic_Brownian}) \begin{equation*} \kappa_B=\frac{1}{2^{\alpha_1}\Gamma(\alpha_1+1)}\int_C u(y)e^{-\vert y\vert^2/2} \text{d}y. \end{equation*} \begin{theorem} \label{thm:case:B} Let $C$ be a normal cone. If the drift $a$ is zero, then \begin{equation*} \PP_x[\tau_C>t]= \kappa_B u(x) t^{-p_1/2}(1+o(1)),\quad t\to\infty. \end{equation*} \end{theorem} Although a proof of Theorem \ref{thm:case:B} can be found in \cite{DB87,BaSm97}, for the sake of completeness we wish to write down some of the details below. As we shall see, the arguments are very similar to those used for proving Theorem \ref{thm:case:A}. \begin{proof}[Proof of Theorem \ref{thm:case:B}] We have $a=0$ and $\gamma=0$.
Thus, the lower and upper bounds \eqref{eq:bound_for_I_delta_case_A} for $I_\delta(t)$ read \begin{equation*} (1\pm\epsilon) bu(x)e^{-\vert x\vert^2/2}t^{d/2}\int_{\{y\in C: \vert y\vert\leqslant \delta\}} u(y) e^{-t\vert y\vert^2/2}\text{d}y. \end{equation*} This time, we make the change of variables $v=\sqrt{t}y$ and use the homogeneity of $u$ in order to transform this expression into \begin{equation*} (1\pm\epsilon) bu(x)e^{-\vert x\vert^2/2}t^{-p_1/2}\int_{\{v\in C: \vert v\vert\leqslant \sqrt{t}\delta\}} u(v) e^{-\vert v\vert^2/2}\text{d}v. \end{equation*} Since the function $u(v)e^{-\vert v\vert^2/2}$ is integrable on $C$, it follows from the dominated convergence theorem that \begin{equation*} \int_{\{v\in C: \vert v\vert\leqslant \sqrt{t}\delta\}} u(v) e^{-\vert v\vert^2/2}\text{d}v=(1+o_\delta(1))\int_{C} u(v) e^{-\vert v\vert^2/2}\text{d}v, \quad t\to\infty. \end{equation*} Hence, the bounds for $I_\delta(t)$ can finally be written as \begin{equation*} (1\pm\epsilon) \kappa_B u(x)e^{-\vert x\vert^2/2}t^{-p_1/2}(1+o_\delta(1)), \quad t\to\infty. \end{equation*} The theorem then follows by an application of Lemma~\ref{lem:general_proof_strategy} and formula~\eqref{exittime_after_change}. \end{proof} \subsection{Case \ref{case:C} (interior drift)} Now we turn to the case where the drift $a$ is inside the cone $C$. \begin{theorem} \label{thm:case:C} Let $C$ be a normal cone. If $a$ belongs to $C$, then \begin{equation*} \PP_x[\tau_C= \infty]=\lim_{t\to\infty}\PP_x[\tau_C > t]=(2\pi)^{d/2}e^{\vert x-a\vert^2/2}p^{C}(1,x,a). \end{equation*} \end{theorem} \begin{proof} Since $a\in C$, one has $p_C(a)=a$ and $\gamma=0$. As in the previous cases, we focus our attention on \begin{equation*} \label{eq:starting_formula_case_C} I_{\delta}(t)=t^{d/2}\int_{\{y\in C:\vert y-a\vert\leqslant\delta \}} e^{\vert y\vert^2/2}p^C(1,x,y) e^{-t\vert a-y\vert^2/2}\text{d}y.
\end{equation*} Given $\epsilon>0$, we choose $\delta>0$ so small that $\close{B(a,\delta)}\subset C$ and the function \begin{equation*} f(y)=e^{\vert y\vert^2/2}p^C(1,x,y) \end{equation*} is bounded from above and below by $f(a)\pm\epsilon$ for all $y\in \close{B(a,\delta)}$. With this choice, $I_{\delta}(t)$ is then bounded from above and below by \begin{equation*} (f(a)\pm \epsilon) t^{d/2}\int_{\{y\in \RR^d : \vert y-a\vert\leqslant \delta\}}e^{-t\vert y-a\vert^2/2}\text{d}y. \end{equation*} By the change of variables $v=\sqrt{t}(y-a)$, this expression becomes \begin{equation*} (f(a)\pm \epsilon)\int_{\{v\in\RR^d : \vert v\vert\leqslant \sqrt{t}\delta\}} e^{-\vert v\vert^2/2}\text{d}v =(f(a)\pm \epsilon)(2\pi)^{d/2}(1+o_{\delta}(1)),\quad t\to\infty. \end{equation*} Hence, the theorem follows from Lemma \ref{lem:general_proof_strategy} and formula \eqref{exittime_after_change}. \end{proof} \begin{example} In the case where $C$ is the Weyl chamber of type $A$, the heat kernel is given by the Karlin-McGregor formula (see \cite[Theorem 1]{KaMcGr59}): \begin{equation*} p^C(t,x,y)=\det(p(t,x_i,y_j))_{1\leqslant i,j\leqslant d}, \end{equation*} with $p(t,x_i,y_j)=(2\pi t)^{-1/2}e^{-(x_i-y_j)^2/2t}$. An easy computation then shows that \begin{equation*} p^C(1,x,a)=(2\pi)^{-d/2}e^{-(\vert x\vert^2+\vert a\vert^2)/2}\det(e^{x_ia_j})_{1\leqslant i,j\leqslant d}. \end{equation*} Hence \begin{equation*} \PP_x[\tau_C= \infty]=\lim_{t\to\infty}\PP_x[\tau_C>t]=e^{\sclr{-a}{x}}\det(e^{x_ia_j})_{1\leqslant i,j\leqslant d}. \end{equation*} \end{example} This result was derived earlier by Biane, Bougerol and O'Connell in \cite[section 5]{BiBoOC05}. Indeed, in \cite{BiBoOC05} the authors first find the probability $\PP_x[\tau_C= \infty]$ in the case of a drift $a\in C$ via the reflection principle and a change of measure.
As an application of this, they show that the Doob $h$-transform of the Brownian motion with the harmonic function given by the non-exit probability $\PP_x[\tau_C= \infty]$ has the same law as a certain path transformation of the Brownian motion (defined via the Pitman operator, which is one of the main topics studied in \cite{BiBoOC05}). \subsection{Case \ref{case:D} (boundary drift)} \label{sec:case:D} In this section and the following ones, we make the additional hypothesis that the cone is real-analytic, that is, hypothesis \ref{hypothesis2}. Notice that under this assumption $\Theta$ is normal. This assumption ensures that the heat kernel can be locally and analytically continued across the boundary, and thus admits a Taylor expansion at any boundary point different from the origin. To our knowledge, for more general cones like those which are intersections of smooth deformations of half-spaces, the boundary behavior of the heat kernel at a corner point (i.e., a point located at the intersection of two or more half-spaces) is not known, except in the particular case of Weyl chambers \cite{KaMcGr59,BiBoOC05}. This behavior will determine the polynomial part $t^{-\alpha}$ in the {asymptotics} of the non-exit probability. The case of Weyl chambers is treated in \cite{PuRo08}. Here, we deal with the opposite (i.e., smooth) setting. Define the function \begin{equation*} h_D(x) = e^{\vert x-a\vert^2/2}\partial_n p^C(1,x,a), \end{equation*} where $n$ stands for the inner-pointing unit normal to $\partial C$ at $a$, and $\partial_n p^C(1,x,a)$ denotes the normal derivative of the function $y\mapsto p^C(1,x,y)$ at $y=a$. The function $h_D(x)$ is non-zero by Lemma \ref{lemma:normal_derivative_at_a} below. Define also the constant \begin{equation*} \kappa_D = ({2\pi})^{(d-1)/2}. \end{equation*} \begin{theorem} \label{thm:case:D} Let $C$ be a real-analytic cone.
If $a\not=0$ belongs to $\partial C$, then \begin{equation*} \PP_x[\tau_C>t]=\kappa_D h_D(x)t^{-1/2}(1+o(1)),\quad t\to\infty. \end{equation*} \end{theorem} \begin{proof} As in case \ref{case:C}, we have $p_C(a)=a$ and $\gamma=0$, and the formula \eqref{eq:defintion_of_I_delta} for $I_\delta(t)$ reads \begin{equation*} \label{eq:starting_formula_case_D} I_{\delta}(t)=t^{d/2}\int_{\{y\in C:\vert y-a\vert\leqslant\delta \}} f(y) e^{-t\vert a-y\vert^2/2}\text{d}y, \end{equation*} where $f(y)=e^{\vert y\vert^2/2}p^C(1,x,y)$. In the present case, $f(y)$ vanishes at $y=a$, in contrast with case \ref{case:C}. Since the function $y\mapsto p^C(1,x,y)$ is infinitely differentiable in a neighborhood of $a$ (see Lemma \ref{lemma:heat_kernel_continuation}), it follows from Taylor's formula that, for any (sufficiently small) $\delta>0$, there exists $M>0$ such that \begin{equation*} \label{eq:application_Taylor} \vert f(y) -\sclr{y-a}{\nabla f(a)}\vert \leqslant M\vert y-a\vert ^2,\quad \forall\vert y-a\vert\leqslant \delta. \end{equation*} Therefore, for any fixed $\delta>0$, one has \begin{equation*} I_\delta(t)=t^{d/2}\int_{\{y\in C : \vert y-a\vert\leqslant \delta\}} (\sclr{y-a}{\nabla f(a)}+O(\vert y-a\vert^2))e^{-t\vert y-a\vert^2/2}\text{d}y. \end{equation*} The change of variables $v=\sqrt{t}(y-a)$ turns the above expression into \begin{equation*} {t^{-1/2}} \int_{(C-\sqrt{t}a)\cap \{v\in \RR^d : \vert v\vert \leqslant \sqrt{t}\delta\}}\sclr{v}{\nabla f(a)}e^{-\vert v\vert^2/2}\text{d}v+O(t^{-1}). \end{equation*} Now, due to the regularity of $\partial C$ at $a$, the set \begin{equation*} (C-\sqrt{t}a)\cap \{v\in \RR^d : \vert v\vert \leqslant \sqrt{t}\delta\} \end{equation*} converges to $\{v\in \RR^d : \sclr{v}{n}>0\}$ as $t\to\infty$. Furthermore, an easy computation shows that \begin{equation*} \int_{\{v\in \RR^d : \sclr{v}{n}>0\}} v e^{-\vert v\vert^2/2}\text{d}v=({2\pi})^{(d-1)/2}n.
\end{equation*} Hence, we deduce that \begin{equation*} I_{\delta}(t)=t^{-1/2} ({2\pi})^{(d-1)/2}\partial_n f(a)+o_{\delta}(t^{-1/2}),\quad t\to\infty. \end{equation*} Since $\partial_n f(a)=e^{\vert a\vert^2/2}\partial_n p^C(1,x,a)\not=0$ by Lemma \ref{lemma:normal_derivative_at_a}, Theorem \ref{thm:case:D} follows from Lemma \ref{lem:general_proof_strategy} and formula \eqref{exittime_after_change}. \end{proof} The following two lemmas were used in the proof of Theorem \ref{thm:case:D}. The first one follows from \cite[Theorem 1]{KiNi78}, which proves analyticity of solutions to general parabolic problems, both in the interior and on the boundary. As for the second one, it is a consequence of \cite[Theorem 2]{Fr58}. \begin{lemma} \label{lemma:heat_kernel_continuation} Under \ref{hypothesis2}, the function $y\mapsto p^C(1,x,y)$ can be analytically continued in some open neighborhood of any point $y=a\in\close{C}\setminus\{0\}$. \end{lemma} \begin{lemma} \label{lemma:normal_derivative_at_a} Under \ref{hypothesis2}, the normal derivative of the function $y\mapsto p^C(1,x,y)$ at $y=a$ is non-zero {\rm(}for any $a\in \partial C\setminus \{0\}${\rm)}. \end{lemma} \setcounter{example}{0} \begin{example}[continued] In the particular case of dimension $2$, with a cone of opening angle $\beta$ (see Figure \ref{fig:cones}), one has the following expression for the normal derivative at $a$: \begin{equation*} \partial_n f(a) = \frac{2\pi}{\vert a\vert \beta^2}e^{-\vert x\vert^2/2}\sum_{j=1}^{\infty} I_{\alpha_j}(\vert x\vert \vert a\vert)m_j(\vec{x})j, \end{equation*} which gives a simplified expression for the function $h_D(x)$. The above identity is elementary: it follows from the expression \eqref{eq:expression_eigenfunctions_2} of the eigenfunctions together with the definition of the function $f$ and some uniform estimates (to be able to exchange the summation and the differentiation in the series defining the heat kernel).
\end{example} \subsection{Case \ref{case:E} (non-polar exterior drift)} \label{sec:case:E} In addition to the real-analyticity \ref{hypothesis2} of the cone, we shall assume in this section that \ref{hypothesis3} holds, i.e., that the set $\Pi(a)=\{y\in \close{C}: \vert y-a\vert=d(a,C)\}$ of minimum points is finite. Define the function \begin{equation*} h_E(x) = e^{\vert x-a\vert^2/2}\sum_{p\in\Pi(a)}\kappa_E(p)\partial_n p^C(1,x,p), \end{equation*} where $\kappa_E(p)$ denotes the positive constant defined in equation~\eqref{eq:def_kappa_E}. We shall prove the following: \begin{theorem} \label{thm:case:E} Let $C$ be a real-analytic cone. If $a$ belongs to $\RR^d\setminus (\overline{C}\cup C^\sharp)$ and $\Pi(a)$ is finite, then \begin{equation*} \PP_x[\tau_C>t]=h_E(x)t^{-3/2}e^{-td(a,C)^2/2}(1+o(1)),\quad t\to\infty. \end{equation*} \end{theorem} \begin{proof} Since $a$ belongs to $\RR^d\setminus (\overline{C}\cup C^\sharp)$, every $p\in\Pi(a)$ belongs to $\partial C$ and is different from $0$ and $a$. Here $\gamma=\vert p-a\vert^2/2$. Because $\Pi(a)$ is finite, we can choose $\delta>0$ so that the balls $B(p,\delta)$ for $p\in\Pi(a)$ are pairwise disjoint. Then $I_{\delta}(t)$ can be written as \begin{equation*} I_{\delta}(t)=\sum_{p\in\Pi(a)} I_{\delta, p}(t), \end{equation*} where \begin{equation*} I_{\delta,p}(t)=t^{d/2}\int_{C\cap B(p,\delta)}e^{\vert y\vert^2/2}p^C(1,x,y)e^{-t\vert a-y\vert^2/2} \text{d}y, \end{equation*} and $B(p,\delta)$ does not contain any other element of $\Pi(a)$ than $p$. The beginning of the analysis of $I_{\delta,p}(t)$ is similar to the proof of Theorem \ref{thm:case:D}, except that we have to make a Taylor expansion with three (and not two) terms, for reasons that will be clear later. 
For the same reasons as in case \ref{case:D}, for any $\delta>0$ small enough, we have \begin{multline} \label{eq:bcov1} I_{\delta,p}(t)=t^{d/2}\int \left(\sclr{y-p}{\nabla f (p)}+\frac{1}{2}(y-p)^{\top}\nabla ^2 f(p)(y-p)+O(\vert y-p\vert^3)\right)\\ \times e^{-t\vert y-a\vert^2/2}\text{d}y, \end{multline} where $f(y)=e^{\vert y\vert^2/2}p^C(1,x,y)$, $(y-p)^{\top}$ is the transpose of the vector $y-p$, $\nabla ^2 f(p )$ denotes the Hessian matrix of $f$ at $p$, and the domain of integration is $\{y\in C : \vert y-p\vert<\delta\}$. To compute the {asymptotics} of the integral $I_{\delta,p}(t)$ as $t\to\infty$, we shall make a series of two changes of variables. First, the change of variables $u=y-p$ and the use of the identity \begin{equation*} e^{-t\vert y-a\vert^2/2} = e^{-t\gamma}e^{-t\vert y-p\vert^2/2-t\sclr{y-p}{p-a}} \end{equation*} give the following alternative expression \begin{equation} \label{eq:bcov2} I_{\delta,p}(t)=t^{d/2}e^{-t\gamma}\int_D \left(\sclr{u}{\nabla f (p)}+\frac{1}{2}u^{\top}\nabla ^2 f(p )u+O(\vert u\vert^3)\right) e^{-t\vert u\vert^2/2}e^{-t\sclr{u}{p-a}}\text{d}u, \end{equation} where the domain of integration $D$ equals $(C-p)\cap \{u\in \mathbb R^d : \vert u\vert<\delta\}$. In what follows, we will assume (without loss of generality) that the inner-pointing unit normal to $\partial C$ at $p$ is equal to $e_1$, the first vector of the standard basis. With this convention $p-a=\vert p-a\vert e_1$, and the only non-zero component of $\nabla f (p)$ is in the $e_1$-direction. Indeed, since $f(y)=0$ for $y\in\partial C$, the boundary of the cone is a level set for the function $f$, and it is well known that the gradient is orthogonal to the level sets. Therefore, the quantity $\sclr{u}{\nabla f (p)}$ is equal to $u_1\partial_1 f(p)$. Our last change of variables is $v=\phi_t(u)$; it sends $(u_1,u_2,\ldots ,u_d)$ onto $(tu_1,\sqrt{t} u_2,\ldots ,\sqrt{t}u_d)$.
Note that the scalings in the normal and tangential directions are not the same; this entails that in \eqref{eq:bcov1} the second term in the integrand is not negligible w.r.t.\ the first one, and this is the reason why we have to make a Taylor expansion with three terms and not two. Note also that the Jacobian of this transformation is $t^{(d+1)/2}$. From this and \eqref{eq:bcov2} we deduce that, as $t\to\infty$, \begin{multline} \label{eq:bcov3} t^{3/2}e^{t\gamma}I_{\delta,p}(t)=\int_{\phi_t(D)} \left(v_1 \partial_1 f(p) +\frac{1}{2} (0,v_2,\ldots ,v_d)^\top \nabla ^2 f(p )(0,v_2,\ldots ,v_d)\right)\\ \times e^{- v_1 \vert p-a\vert}e^{-(v_2^2+\cdots +v_d^2)/2}e^{-v_1^2/(2t)}\text{d}v +O(t^{-1/2}). \end{multline} The aim is now to understand the behavior of the domain $\phi_t(D)$ as $t\to\infty$. Since the cone $C$ is tangent to the hyperplane $\{u\in\RR^d : u_1=0\}$ at $p$ and its boundary is real-analytic, there exists a real-analytic function $g$ with $g(0)=0$ and $\nabla g(0)=0$, such that, for $\delta$ small enough, the domain $D$ coincides with \begin{equation*} \{u\in\mathbb R^d : u_1 > g(u_2,\ldots ,u_d), \vert u\vert < \delta\}. \end{equation*} An application of Taylor's formula then gives that (up to a set of Lebesgue measure zero) \begin{equation*} \lim_{t\to\infty} \phi_t(D) = \phi_\infty(D)=\{v\in\mathbb R^d : v_1 > \frac{1}{2}(v_2,\ldots ,v_d)^\top\nabla^2g(0)(v_2,\ldots ,v_d)\}. \end{equation*} Let us compare the limit domain $\phi_\infty(D)$ and the integrand in equation \eqref{eq:bcov3}. Since $f$ vanishes on the boundary of the cone, we have \begin{equation*} \label{eq:id_zero} f(p_1+g(u_2,\ldots ,u_d),p_2+u_2,\ldots ,p_d+u_d)=0, \end{equation*} for any $u$ in some neighborhood of $0$. Differentiating this identity twice, we obtain \begin{equation*} (0,v_2,\ldots ,v_d)^\top\nabla^2f(p )(0,v_2,\ldots ,v_d)=-\partial_1f(p) (v_2,\ldots ,v_d)^\top\nabla^2g(0)(v_2,\ldots ,v_d).
\end{equation*} Therefore, equation \eqref{eq:bcov3} can be rewritten as \begin{multline*} t^{3/2}e^{t\gamma}I_{\delta,p}(t)=\partial_1f(p)\int_{\phi_t(D)} \left(v_1 -\frac{1}{2} (v_2,\ldots ,v_d)^\top \nabla ^2 g(0)(v_2,\ldots ,v_d)\right)\\ \times e^{- v_1 \vert p-a\vert}e^{-(v_2^2+\cdots +v_d^2)/2}e^{-v_1^2/(2t)}\text{d}v +O(t^{-1/2}) \end{multline*} as $t\to\infty$. Notice that the limit domain $\phi_\infty(D)$ is exactly the subset of $\RR^d$ where the integrand is positive. Thus, the constant \begin{multline} \label{eq:def_kappa_E} \kappa_E(p)=e^{(\vert p\vert^2-\vert a\vert^2)/2} \\\times\int_{\phi_\infty(D)} \left(v_1-\frac{1}{2}(v_2,\ldots ,v_d)^\top\nabla^2g(0)(v_2,\ldots ,v_d)\right)e^{- v_1 \vert p-a\vert}e^{-(v_2^2+\cdots +v_d^2)/2}\text{d}v \end{multline} is positive. Since $\partial_1f(p)=e^{\vert p\vert^2/2}\partial_1 p^C(1,x,p)\not=0$ by Lemma \ref{lemma:normal_derivative_at_a}, we obtain that \begin{equation*} I_{\delta,p}(t)=\kappa_E(p) e^{\vert a\vert^2/2}\partial_1p^C(1,x,p)t^{-3/2}e^{-t\gamma}(1+o(1)),\quad t\to\infty. \end{equation*} To conclude the proof of Theorem \ref{thm:case:E}, it suffices to sum the estimates for $I_{\delta, p}(t)$ over $p\in\Pi(a)$, and then to apply Lemma \ref{lem:general_proof_strategy} and to use equation \eqref{exittime_after_change}. \end{proof} \setcounter{example}{0} \begin{example}[continued] In the particular case of two-dimensional cones, $\nabla^2 g(0)=0$ and the limit domain of integration $\phi_\infty(D)$ is the half-space $\{v\in\mathbb R^2 : v_1\geqslant 0\}$. The constant $\kappa_E(p)$ can then be computed: \begin{equation*} \kappa_E(p) = \frac{e^{(\vert p\vert^2-\vert a\vert^2)/2}}{\vert p-a\vert^2}\sqrt{2\pi}. \end{equation*} \end{example} \subsection{Case \ref{case:F} (polar boundary drift)} \label{subsec:case:F} We finally consider the case where the drift $a\not=0$ belongs to $\partial C^\sharp$.
Let us first notice that the existence of such a vector $a$ implies that the cone $C$ is included in some half-space. More precisely, by definition of the polar cone, the inner product of $a$ with any $y\in C$ is non-positive, so that $C$ is included in the half-space $\{y\in\mathbb R^d: \sclr{a}{y}\leqslant 0\}$. Moreover, there must exist some $\theta_c\in\partial\Theta=\partial (C\cap\SS^{d-1})$ such that $\sclr{a}{\theta_c}=0$, for otherwise $a$ would belong to the interior of $C^{\sharp}$, as seen in Lemma \ref{polarconeinterior}. We call $\Theta_c$ the set of all these {\em contact points} $\theta_c$ between $\partial\Theta$ and the hyperplane $a^\perp=\{y\in\mathbb R^d:\sclr{a}{y}=0\}$. As we shall see, the asymptotics of $\PP_x[\tau_C>t]$ is determined by the local geometry of the cone $C$ near these points. We first present some general aspects of our approach, and then we will treat the case $d=2$ for cones with opening angle $\beta\in (0,\pi)$, and the case $d=3$ for cones with a real-analytic boundary and a finite number of contact points. Other cases are left as open problems. In the sequel, we will assume (without loss of generality)\ that $a=-\vert a\vert e_d$, where $e_d$ stands for the last vector of the standard basis. As in case \ref{case:A}, we have $p_C(a)=0$ and $\gamma=\vert a\vert^2/2$, so that the formula \eqref{eq:defintion_of_I_delta} for $I_\delta(t)$ can be written as \begin{equation*} \label{eq:starting_formula_case_F2} I_{\delta}(t)=t^{d/2}\int_{\{y\in C:\vert y\vert\leqslant\delta \}} e^{\vert y\vert^2/2}p^C(1,x,y) e^{-t\vert a-y\vert^2/2}\text{d}y. \end{equation*} Let $\epsilon>0$ be given.
Arguing as in case \ref{case:A}, we can pick $\delta>0$ small enough so that $I_\delta(t)$ is bounded from above and below by \begin{equation} \label{eq:bound_for_I_delta_case_F2} (1\pm\epsilon) bu(x)e^{-\vert x\vert^2/2}t^{d/2}\int_{\{y\in C: \vert y\vert\leqslant \delta\}} u(y) e^{-t\vert a-y\vert^2/2}\text{d}y, \end{equation} where $b=(2^{\alpha_1}\Gamma(\alpha_1+1))^{-1}$. Thus, we are led to study the asymptotic behavior of \begin{align*} \label{eq:definition_of_J_delta_case_F2} J_{\delta}(t)&=t^{d/2}\int_{\{y\in C: \vert y\vert\leqslant \delta\}} u(y) e^{-t\vert a-y\vert^2/2}\text{d}y\\ &=e^{-t\gamma}t^{d/2}\int_{\{y\in C: \vert y\vert\leqslant \delta\}} u(y) e^{-t\vert y\vert^2/2}e^{-t\vert a\vert y_d}\text{d}y. \end{align*} Making the change of variables $z=\sqrt{t}y$ and using the homogeneity property of $u$ (see \eqref{eq:harmonic_Brownian}), we obtain \begin{equation} \label{eq:J_delta_after_first_change_case_F2} J_{\delta}(t)=e^{-t\gamma}t^{-p_1/2}\int_{\{z\in C: \vert z\vert\leqslant\sqrt{t} \delta\}} u(z) e^{-\vert z\vert^2/2}e^{-\sqrt{t}\vert a\vert z_d}\text{d}z. \end{equation} Now, Laplace's method suggests that only some neighborhood of the hyperplane $\{z\in\mathbb R^d :z_d=0\}$ will contribute to the asymptotics. More precisely, we have the following result: \begin{lemma} \label{lem:spherical_cap_contribution} For any $\eta>0$, we have \begin{equation*} \int_{\{z\in C: z_d>\eta\vert z\vert\}} u(z) e^{-\vert z\vert^2/2}e^{-\sqrt{t}\vert a\vert z_d}\textnormal{d}z=o(t^{-d/2}),\quad t\to\infty. \end{equation*} \end{lemma} \begin{proof} Since $\vert u(z)\vert\leqslant M \vert z\vert^{p_1}$, the integral above is bounded from above by \begin{equation*} M\int_{\RR^d}\vert z\vert^{p_1}e^{-\eta\sqrt{t}\vert a\vert\vert z\vert} \text{d}z=Mt^{-(p_1+d)/2}\int_{\RR^d}\vert w\vert^{p_1}e^{-\eta\vert a\vert\vert w\vert} \text{d}w, \end{equation*} which is $O(t^{-(p_1+d)/2})$. Lemma \ref{lem:spherical_cap_contribution} follows since $p_1>0$.
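For completeness, let us note that the last integral is indeed finite: in polar coordinates, writing $\omega_{d-1}$ for the surface area of $\SS^{d-1}$,
\begin{equation*}
\int_{\RR^d}\vert w\vert^{p_1}e^{-\eta\vert a\vert\vert w\vert} \text{d}w=\omega_{d-1}\int_0^\infty r^{p_1+d-1}e^{-\eta\vert a\vert r}\text{d}r=\frac{\omega_{d-1}\,\Gamma(p_1+d)}{(\eta\vert a\vert)^{p_1+d}}.
\end{equation*}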
\end{proof} From now on, we shall assume that \ref{hypothesis4} holds, i.e., that the set of contact points $\Theta_c$ is finite. Let $\eta>0$ be so small that the $d$-dimensional balls $B(\theta_c, \eta)$ for $\theta_c\in\Theta_c$ are disjoint. Since the set of all $\theta\in\close{\Theta}$ that do not belong to any of these open balls is compact and does not contain any contact point, there exists some $\eta'>0$ such that $\theta_d>\eta'$ for all such $\theta$. For $\theta_c\in\Theta_c$, we define the cone \begin{equation} \label{eq:thin_cones_definition} C(\theta_c,\eta)=\{z\in C: z/\vert z\vert\in B(\theta_c,\eta)\}. \end{equation} Then $C$ can be written as the disjoint union of these (thin) cones and of a (big) remaining cone whose points $z$ all satisfy the inequality $z_d/\vert z\vert>\eta'$. Thus, according to formula \eqref{eq:J_delta_after_first_change_case_F2} and Lemma \ref{lem:spherical_cap_contribution}, we have \begin{equation} \label{eq:decomposion_of_J_delta} J_{\delta}(t)=e^{-t\gamma}t^{-p_1/2}\left(\sum_{\theta_c\in\Theta_c}K_{\delta, \eta,\theta_c}(t)+o(t^{-d/2})\right), \end{equation} where \begin{equation} \label{eq:contact_contribution_definition} K_{\delta, \eta,\theta_c}(t)=\int_{\{z\in C(\theta_c,\eta): \vert z\vert\leqslant\sqrt{t} \delta\}} u(z) e^{-\vert z\vert^2/2}e^{-\sqrt{t}\vert a\vert z_d}\text{d}z \end{equation} represents the contribution of the contact point $\theta_c$. \subsection*{Two-dimensional cones} Here the cone is $C=\{\rho e^{i\theta}: \rho>0, \theta\in(0,\beta)\}$ with $\beta\in(0,\pi)$. Define \begin{equation*} h_F(x) = e^{\sclr{-a}{x}}u(x) \end{equation*} and the constant \begin{equation*} \kappa_F =\frac{\pi 2^{p_1/2}\Gamma(p_1/2)}{2^{\alpha_1}\Gamma(\alpha_1+1)\beta^2\vert a\vert^2}. \end{equation*} \begin{theorem}[Case of the dimension $2$] \label{thm:case:F} Let $C$ be any two-dimensional cone with $\beta\in(0,\pi)$.
If $a\not=0$ belongs to $\partial C^\sharp$, then \begin{equation*} \PP_x[\tau_C>t]=\kappa_F h_F(x)e^{-t\vert a\vert^2/2}t^{-(p_1/2+1)}(1+o(1)),\quad t\to\infty. \end{equation*} \end{theorem} \begin{proof} Since $\beta<\pi$, there is only one contact point, namely $\theta_c=(1,0)$. Let us analyze its contribution. According to \eqref{eq:contact_contribution_definition}, we have \begin{equation*} K_{\delta, \eta,\theta_c}(t)=\int_{\{z\in\mathbb R^2: 0<z_2<\eta z_1,\vert z\vert\leqslant\sqrt{t} \delta\}} u(z) e^{-\vert z\vert^2/2}e^{-\sqrt{t}\vert a\vert z_2}\text{d}z, \end{equation*} as soon as $\eta$ is small enough. (In fact, the condition is $\arcsin \eta <\beta$, and $\eta$ in the integral should be $\tan(\arcsin\eta)$.) We now proceed to the change of variables $v=\phi_t (z)=(z_1,\sqrt{t}z_2)$, which leads to \begin{equation*} K_{\delta, \eta,\theta_c}(t)=t^{-1/2}\int_{D_t} u\left(v_1,\frac{v_2}{\sqrt{t}}\right) e^{-v_1^2/2}e^{-v_2^2/(2t)}e^{-\vert a\vert v_2}\text{d}v, \end{equation*} where $D_t=\phi_t(\{z\in\mathbb R^2:0<z_2<\eta z_1,\vert z\vert\leqslant\sqrt{t} \delta\})$. Notice that $(v_1,v_2)\in D_t$ implies that $\vert v_2/(v_1\sqrt{t})\vert<\eta$. It follows from the Taylor-Lagrange inequality that (if $\eta$ is small enough) there exists $M$ such that \begin{equation*} u(1,h)=\partial_2 u(1,0) h+h^2 R(h), \end{equation*} with $\vert R(h)\vert\leqslant M$ for all $\vert h\vert\leqslant \eta$. Therefore, using the homogeneity of $u$, we obtain \begin{equation*} \sqrt{t}u\left(v_1,\frac{v_2}{\sqrt{t}}\right)=\sqrt{t}v_1^{p_1}u\left(1,\frac{v_2}{v_1\sqrt{t}}\right)=v_1^{p_1-1}v_2(\partial_2 u(1,0)+h R(h)), \end{equation*} with $h=v_2/(v_1\sqrt{t})$ and $\vert hR(h)\vert\leqslant\eta M$ for all $(v_1,v_2)\in D_t$.
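For later reference, let us also record the two elementary integrals behind the forthcoming limit (the first one by the substitution $s=v_1^2/2$):
\begin{equation*}
\int_0^\infty v_1^{p_1-1}e^{-v_1^2/2}\text{d}v_1=2^{p_1/2-1}\Gamma(p_1/2)\qquad\text{and}\qquad\int_0^\infty v_2e^{-\vert a\vert v_2}\text{d}v_2=\frac{1}{\vert a\vert^2}.
\end{equation*}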
As $t\to\infty$, the domain $D_t$ converges to the quarter plane $\mathbb R_+^2$, and it follows from the dominated convergence theorem that, as $t\to\infty$, \begin{align} \label{eq:final_expression_for_contribution_of_10} K_{\delta, \eta,\theta_c}(t)&=t^{-1}\partial_2u(1,0)\int_{\RR_+^2} v_1^{p_1-1}v_2 e^{-v_1^2/2}e^{-\vert a\vert v_2}\text{d}v+o(t^{-1})\\ &=t^{-1}\frac{\pi 2^{p_1/2}\Gamma(p_1/2)}{\beta^2\vert a\vert^2}(1+o(1)),\nonumber \end{align} where we have used the fact that $\partial_2 u(1,0)=2\pi/\beta^2$ (see \eqref{eq:expression_eigenfunctions_2} for $j=1$). For $\beta<\pi$, there is no other contribution and, therefore, combining equations \eqref{eq:final_expression_for_contribution_of_10}, \eqref{eq:decomposion_of_J_delta} and \eqref{eq:bound_for_I_delta_case_F2} shows that upper and lower bounds for $I_{\delta}(t)$ are given by \begin{equation*} (1\pm \epsilon)\kappa_Fu(x)e^{-\vert x\vert^2/2}e^{-t\gamma}t^{-({p_1}/{2}+1)}(1+o(1)),\quad t\to\infty. \end{equation*} Hence, as in the other cases, the result follows from Lemma \ref{lem:general_proof_strategy} and formula \eqref{exittime_after_change}. \end{proof} \begin{remark} When $\beta=\pi$, the point $(-1,0)$ is a second contact point. By symmetry, its contribution is exactly the same as that of $(1,0)$. Hence the result of Theorem~\ref{thm:case:F} is still valid if $\kappa_F$ is replaced by $2\kappa_F$. \end{remark} \subsection*{Three-dimensional cones with real-analytic boundary} Recall that (by convention) $a=-\vert a\vert e_3$ and the cone $C$ is contained in the half space $\{z_3>0\}$, see Figure \ref{fig:3Dcone}. 
Thanks to \eqref{eq:decomposion_of_J_delta}, the asymptotic behavior of $\PP_x[\tau_C>t]$ will follow from the study of the contributions \begin{equation*} K_{\delta, \eta,\theta_c}(t)=\int_{\{z\in C(\theta_c,\eta): \vert z\vert\leqslant\sqrt{t} \delta\}} u(z) e^{-\vert z\vert^2/2}e^{-\sqrt{t}\vert a\vert z_3}\text{d}z \end{equation*} of the contact points $\theta_c\in \Theta_c$ between $\partial\Theta$ and the hyperplane $a^\perp=\{z\in\mathbb R^3: z_3=0\}$. As we shall see, the behavior of the integral above will depend on the geometry of $\Theta$ at the point $\theta_c$. \begin{figure} \caption{Three-dimensional cones in the proof of Theorem \ref{thm:case:F}} \label{fig:3Dcone} \end{figure} \subsubsection*{Contribution of one fixed contact point} Without loss of generality, let us assume that $\theta_c=e_1$. Since the cone is tangent to the plane $\{z\in\mathbb R^3: z_3=0\}$ at the point $\theta_c$ and since its boundary is assumed to be real-analytic, there exists a real-analytic function $g(z_2)$ with $g(0)=0$ and $g'(0)=0$, such that the intersection of $C$ with $\{z\in\mathbb R^3: z_1=1\}$ coincides (in a neighborhood of $\theta_c$) with the set \begin{equation*} \label{eq:parametric_definition_of_the_boundary_via_g} g^+=\{z\in\mathbb R^3: z_1=1, z_3> g(z_2)\}. \end{equation*} Define \begin{equation*} q=q(\theta_c)=\inf\{n\geqslant 2: g^{(n)}(0)\not=0\}, \end{equation*} and \begin{equation*} c=c(\theta_c)=\frac{g^{(q)}(0)}{q!}. \end{equation*} Since $\theta_c$ is isolated from the other contact points (recall that $\Theta_c$ is assumed to be finite), the function $g(z_2)$ must be positive for all $z_2\not=0$ in a neighborhood of $0$. Thus, by real-analyticity, $q$ must be finite, even, and such that $g^{(q)}(0)>0$. Set \begin{equation*} \kappa(q)=\frac{2^{(p_1+1-1/q)/{2}}(1-\frac{1}{q+1})}{\vert a\vert^{2+{1}/{q}}}\Gamma\left(\frac{p_1+1-1/q}{2}\right)\Gamma\left(2+\frac{1}{q}\right).
\end{equation*} Then we have: \begin{lemma} \label{lem:individual_contact_point_contribution} For any $\delta>0$ and $\eta>0$ small enough, the contribution of each contact point $\theta_c$ to the asymptotics of the non-exit probability is given by \begin{equation*} K_{\delta, \eta,\theta_c}(t)= \frac{\kappa(q) \partial_n u(\theta_c)}{c(\theta_c)^{{1}/{q}}} t^{-(1+{1}/{(2q)})} (1+o(1)),\quad t\to\infty, \end{equation*} where $\partial_n u(\theta_c)$ stands for the (inner-pointing) normal derivative of the function $u$ at $\theta_c$. \end{lemma} We postpone the proof of Lemma \ref{lem:individual_contact_point_contribution} until after the statement and proof of Theorem \ref{thm:case:F3}. \subsubsection*{Statement of Theorem \ref{thm:case:F3}} Let $q_1$ be the maximum value of $q(\theta_c)$ for $\theta_c\in \Theta_c$. We define \begin{equation*} h_F(x)=u(x) e^{-\sclr{a}{x}} \end{equation*} as well as \begin{equation*} \kappa_F=b\kappa(q_1)\sum_{q(\theta_c)=q_1}\frac{\partial_n u(\theta_c)}{c(\theta_c)^{{1}/{q_1}}}, \end{equation*} where $b=(2^{\alpha_1}\Gamma(\alpha_1+1))^{-1}$. Then we have: \setcounter{theorem}{5} \begin{theorem}[Case of the dimension $3$] \label{thm:case:F3} Let $C$ be a real-analytic three-dimensional cone. If $a\not=0$ belongs to $\partial C^\sharp$ and the set of contact points $\Theta_c$ between $\partial\Theta$ and the hyperplane $a^\perp$ is finite, then \begin{equation*} \PP_x[\tau_C>t]=\kappa_F h_F(x) t^{-({p_1}/{2}+1+{1}/({2q_1}))} e^{-t\vert a\vert^2/2}(1+o(1)),\quad t\to\infty. \end{equation*} \end{theorem} \begin{proof} Since $K_{\delta, \eta,\theta_c}(t)$ is of order $t^{-(1+{1}/{(2q)})}$ by Lemma \ref{lem:individual_contact_point_contribution}, only those $\theta_c$ with $q(\theta_c)=q_1$ will contribute in \eqref{eq:decomposion_of_J_delta} to the asymptotics of $J_{\delta}(t)$.
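Indeed, if $q(\theta_c)=q<q_1$, then
\begin{equation*}
\frac{t^{-(1+1/(2q))}}{t^{-(1+1/(2q_1))}}=t^{1/(2q_1)-1/(2q)}\to 0,\quad t\to\infty,
\end{equation*}
and the $o(t^{-d/2})$ term in \eqref{eq:decomposion_of_J_delta} is also negligible, since $1+1/(2q_1)<3/2=d/2$.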
Thus, we obtain that \begin{equation*} J_{\delta}(t)=e^{-t\gamma}t^{-({p_1}/{2}+1+{1}/{(2q_1)})}\kappa(q_1)\sum_{q(\theta_c)=q_1}\frac{ \partial_n u(\theta_c)}{c(\theta_c)^{{1}/{q_1}}}(1+o(1)),\quad t\to\infty. \end{equation*} Now, equation \eqref{eq:bound_for_I_delta_case_F2} shows that bounds for $I_{\delta}(t)$ are given by \begin{equation*} (1\pm \epsilon)\kappa_Fu(x)e^{-\vert x\vert^2/2}e^{-t\gamma}t^{-({p_1}/{2}+1+{1}/{(2q_1)})}(1+o(1)),\quad t\to\infty. \end{equation*} Hence, the result follows from Lemma \ref{lem:general_proof_strategy} and formula \eqref{exittime_after_change}. \end{proof} \begin{proof}[Proof of Lemma \ref{lem:individual_contact_point_contribution}] With the conventions made just above, we analyze the contribution of $\theta_c=(1,0,0)$, namely, \begin{equation*} K_{\delta, \eta,\theta_c}(t)=\int_{\{z\in C(\theta_c,\eta): \vert z\vert\leqslant\sqrt{t} \delta\}} u(z) e^{-\vert z\vert^2/2}e^{-\sqrt{t}\vert a\vert z_3}\text{d}z. \end{equation*} By making the linear change of variables $v=\phi_t(z)$, with \begin{equation*} \phi_t(z_1,z_2,z_3)=(z_1,t^{1/(2q)}z_2,\sqrt{t}z_3), \end{equation*} we obtain \begin{equation} \label{eq:last_equation_for_J_delta} K_{\delta, \eta,\theta_c}(t)=t^{-{1}/{2}-{1}/({2q})}\int_{D_t} u\left(v_1,\frac{v_2}{t^{1/(2q)}},\frac{v_3}{\sqrt{t}}\right) e^{-v_1^2/2}e^{-\vert a\vert v_3}(1+o(1))\text{d}v, \end{equation} where $D_t=\phi_t(\{z\in C(\theta_c,\eta): \vert z\vert\leqslant\sqrt{t} \delta\})$, and the factor $1+o(1)$ stands for $e^{-v_2^2/(2t^{1/q})-v_3^2/(2t)}$, which increases to $1$ as $t\to\infty$. In order to understand the behavior of $D_t$ as $t\to\infty$, we first notice that \begin{equation*} \lim_{t\to\infty} D_t=\lim_{t\to\infty}\phi_t(C(\theta_c,\eta)). \end{equation*} Then, since the first coordinate is left invariant by $\phi_t$, we shall look at what happens in the plane $\{z_1=1\}$. It follows from the definition of $q$ that \begin{equation*} g^+=\{z\in\mathbb R^3: z_1=1, z_3 > cz_2^q+o(z_2^q)\}, \end{equation*} with $c=g^{(q)}(0)/q!>0$.
From this and the definition of $\phi_t$, it is easily seen that \begin{equation*} \lim_{t\to\infty}\phi_t(C(\theta_c,\eta)\cap\{z\in\mathbb R^3: z_1=1\})=\{v\in\mathbb R^3: v_1=1, v_3> cv_2^q\}. \end{equation*} Further, the homogeneity of the cone and the linearity of $\phi_t$ immediately imply that \begin{equation*} \lim_{t\to\infty}\phi_t(C(\theta_c,\eta)\cap\{z\in\mathbb R^3: z_1=\lambda\})=\{v\in\mathbb R^3: v_1=\lambda, \lambda^{q-1}v_3> cv_2^q\}, \end{equation*} for all $\lambda>0$. Now, if $\eta>0$ is small enough, the cone $C(\theta_c,\eta)$ does not contain any $z$ with $z_1\leqslant 0$. Therefore, \begin{equation} \label{eq:limit_domain_in_case_F} \lim_{t\to\infty}\phi_t(C(\theta_c,\eta))=\{v\in\mathbb R^3: v_1>0, v_3>0, v_1^{q-1}v_3>cv_2^q\}. \end{equation} We call $D$ the limit domain in \eqref{eq:limit_domain_in_case_F}. It remains to analyze the behavior of the integrand in \eqref{eq:last_equation_for_J_delta}, i.e., to find the asymptotics of \begin{equation*} u\left(v_1,\frac{v_2}{t^{1/(2q)}}, \frac{v_3}{\sqrt{t}}\right)=v_ 1^{p_1}u\left(1,\frac{v_2}{v_1t^{1/(2q)}}, \frac{v_3}{v_1\sqrt{t}}\right) \end{equation*} for $v_1>0$, as $t\to\infty$. To this end, we shall use a Taylor expansion of $u(1,x,y)$ in a neighborhood of $(0,0)$. This can be done since it is known that the real-analyticity of $\Theta$ ensures that $u$ can be extended to a strictly bigger cone, inside of which $u$ is (still) harmonic, see \cite[Theorem A]{MoNi57}. Since $u$ is equal to zero on the boundary of $C$, the relation \begin{equation*} u(1,z_2,g(z_2))=0 \end{equation*} holds for all $z_2$ in a neighborhood of $0$, and a direct application of Lemma \ref{lem:implicit_vs_parametric_derivatives} below for $n=1$ and $k\in\{0,\ldots, q-1\}$ shows that \begin{equation} \label{eq:link_between_partial_derivatives} \partial_{2,2,\ldots,2}^{(j)}u(1,0,0)= \begin{cases} 0 & \mbox{ if } 1\leqslant j\leqslant q-1,\\ -\partial_3 u(1,0,0) g^{(q)}(0) &\mbox{ if } j=q. 
\end{cases} \end{equation} Hence, the Taylor expansion of $u(1,z_2,z_3)$ leads to \begin{equation*} \lim_{t\to\infty}\sqrt{t} u\left(1,\frac{v_2}{v_1t^{1/(2q)}}, \frac{v_3}{v_1\sqrt{t}}\right)=\partial_3 u(1,0,0) \left(\frac{v_3}{v_1}-\frac{g^{(q)}(0)}{q!}\frac{v_2^q}{v_1^q}\right). \end{equation*} The proof that this convergence is dominated is deferred to Lemma \ref{lem:domination_case_F3} below, where the crucial role of $C(\theta_c,\eta)$ will appear clearly. Therefore, as $t\to\infty$, \begin{multline} \label{eq:J_delta_asymptotic} K_{\delta, \eta,\theta_c}(t)= t^{-1-{1}/{(2q)}}\partial_3 u(1,0,0) \\ \times\int_{D}v_1^{p_1-q}(v_1^{q-1}v_3-cv_2^q)e^{-v_1^2/2}e^{-\vert a\vert v_3}\text{d}v+o(t^{-1-{1}/{(2q)}}). \end{multline} Notice that the last integral is positive since $D$ has positive (infinite) Lebesgue measure and is exactly the domain where the integrand is positive. We now compute its value. Since $q$ is even, for any fixed $v_1>0$ and $v_3>0$, we have \begin{equation*} \int_{\{v_2\in\mathbb R: v_1^{q-1}v_3>cv_2^q\}}(v_1^{q-1}v_3-cv_2^q)\text{d}v_2=2\left(1-\frac{1}{q+1}\right)c^{-{1}/{q}}(v_1^{q-1}v_3)^{1+{1}/{q}}. \end{equation*} Thus, by an application of Fubini's theorem, the integral in \eqref{eq:J_delta_asymptotic} becomes \begin{equation*} 2\left(1-\frac{1}{q+1}\right)c^{-{1}/{q}}\int_{0}^\infty v_1^{p_1-{1}/{q}}e^{-v_1^2/2}\text{d}v_1 \int_{0}^\infty v_3^{1+{1}/{q}}e^{-\vert a\vert v_3}\text{d}v_3, \end{equation*} and can be expressed in terms of the Gamma function as \begin{equation*} \frac{2^{(p_1+1-1/q)/{2}}(1-\frac{1}{q+1})}{\vert a\vert^{2+{1}/{q}} c^{{1}/{q}}}\Gamma\left(\frac{p_1+1-1/q}{2}\right)\Gamma\left(2+\frac{1}{q}\right)= \kappa(q)c^{-{1}/{q}}. \end{equation*} This concludes the proof of Lemma \ref{lem:individual_contact_point_contribution}. \end{proof} \begin{lemma} \label{lem:domination_case_F3} Let $a_{i,j}$ denote the coefficient of $z_2^iz_3^j$ in the Taylor expansion of $u(1,z_2,z_3)$ at $(0,0)$.
If $\eta>0$ in the definition \eqref{eq:thin_cones_definition} of $C(\theta_c,\eta)$ is small enough, then \begin{equation*} \int_{D_t}v_1^{p_1}\left\vert\sqrt{t} u\left(1,\frac{v_2}{v_1t^{1/(2q)}}, \frac{v_3}{v_1\sqrt{t}}\right)-\left(a_{0,1}\frac{v_3}{v_1}+a_{q,0}\frac{v_2^q}{v_1^q}\right)\right\vert e^{-v_1^2/2}e^{-\vert a\vert v_3}\textnormal{d}v= o(1),\quad t\to\infty. \end{equation*} \end{lemma} \begin{proof} Since the function $u(1,z_2,z_3)$ can be extended to an infinitely differentiable function in a neighborhood of $(0,0)$, see \cite[Theorem A]{MoNi57}, there exists $M>0$ such that, for $\eta_0>0$ small enough, \begin{equation*} u(1,z_2,z_3)=\sum_{i+j\leqslant q}a_{i,j}z_2^iz_3^j+\vert (z_2,z_3)\vert^{q+1}R(z_2,z_3), \end{equation*} where $\vert R(z_2,z_3)\vert\leqslant M$ for all $(z_2,z_3)\in B(0,\eta_0)$. We already know (see \eqref{eq:link_between_partial_derivatives} in the proof of Lemma \ref{lem:individual_contact_point_contribution}) that $a_{i,0}=0$ for all $i\in\{0,\ldots ,q-1\}$, hence \begin{equation*} \vert u(1,z_2,z_3)-(a_{0,1}z_3+a_{q,0}z_2^q)\vert\leqslant \sum_{2\leqslant j\leqslant q}\vert a_{0,j} z_3^j\vert+\sum_{\substack{i,j\geqslant 1\\ i+j\leqslant q}}\vert a_{i,j}z_2^i z_3^j\vert+\vert (z_2,z_3)\vert^{q+1}M. \end{equation*} Let $\epsilon\in(0,1)$ be fixed. For $(z_2,z_3)\in B(0,\eta_0)$, we use the upper bound \begin{equation*} \vert a_{0,j}\vert\vert z_3\vert^{1+\epsilon}\eta_0^{j-(1+\epsilon)},\quad \forall j\geqslant 2, \end{equation*} for the terms inside of the first sum, and the upper bound \begin{equation*} \vert a_{i,j}\vert \vert z_2\vert\vert z_3\vert \eta_0^{i+j-2},\quad \forall i+j\geqslant 2, \end{equation*} for the terms inside of the second sum.
For the last term, we write \begin{equation*} \vert (z_2,z_3)\vert^{q+1}\leqslant C (\vert z_2\vert^{q+1}+\vert z_3\vert^{q+1})\leqslant C(\vert z_2\vert^{q+1}+\vert z_3\vert^{1+\epsilon}\eta_0^{q-\epsilon}), \end{equation*} and we finally obtain the upper bound \begin{equation} \label{eq:nice_candidate_for_domination} \vert u(1,z_2,z_3)-(a_{0,1}z_ 3+a_{q,0}z_2^q)\vert\leqslant C_1\vert z_3\vert^{1+\epsilon}+C_2\vert z_2\vert \vert z_3\vert+ C_3\vert z_2\vert^{q+1}, \end{equation} where $C_1,C_2,C_3>0$ are positive constants (depending on $\eta_0$ and $\epsilon$ only). On the other hand, the definition of $C(\theta_c,\eta)$ ensures that \begin{equation*} \left\vert \left(\frac{v_2}{v_1t^{1/(2q)}},\frac{v_3}{v_1\sqrt{t}}\right)\right\vert\leqslant \eta+o(\eta),\quad \eta\to0, \end{equation*} for all $(v_1,v_2,v_3)\in D_t$. Therefore, if $\eta>0$ is small enough so that $\eta+o(\eta)\leqslant \eta_0$, then according to \eqref{eq:nice_candidate_for_domination} we have \begin{multline*} \left\vert \sqrt{t}u\left(1,\frac{v_2}{v_1t^{1/(2q)}},\frac{v_3}{v_1\sqrt{t}}\right)-\left(a_{0,1}\frac{v_3}{v_1}+a_{q,0}\frac{v_2^q}{v_1^q}\right)\right\vert \\\leqslant o(1)\left( C_1\left\vert \frac{v_3}{v_1}\right\vert^{1+\epsilon}+C_2\left\vert \frac{v_2}{v_1}\right\vert \left\vert \frac{v_3}{v_1}\right\vert+ C_3\left\vert \frac{v_2}{v_1}\right\vert^{q+1} \right), \end{multline*} (where $o(1)$ is a function of $t$ alone) for all $(v_1,v_2,v_3)\in D_t$, and the result follows from Lemma~\ref{lem:integrability_of_the_limit_on_D} below, provided that $\epsilon$ has been chosen so small that $1+\epsilon+1/q\leqslant 2$. \end{proof} \begin{lemma} \label{lem:integrability_of_the_limit_on_D} The integral \begin{equation*} \int_D v_1^{p_1}\left\vert\frac{v_2}{v_1}\right\vert^{\alpha}\left\vert\frac{v_3}{v_1}\right\vert^{\beta} e^{-v_1^2/2}e^{-\vert a\vert v_3}\textnormal{d}v \end{equation*} is finite for all $\alpha, \beta\geqslant 0$ such that $\beta+(\alpha+1)/q\leqslant 2$. 
\end{lemma} \begin{proof} Using Fubini's theorem, this integral can be shown to be equal to \begin{equation*} \int_{0}^\infty v_1^{p_1+1-\beta-(\alpha+1)/q}e^{-v_1^2/2}\text{d}v_1 \int_{0}^\infty v_3^{\beta+(\alpha+1)/q}e^{-\vert a\vert v_3}\text{d}v_3, \end{equation*} up to some positive multiplicative constant. The result follows since $p_1>0$. \end{proof} \begin{lemma} \label{lem:implicit_vs_parametric_derivatives} Let $n\geqslant 1$ and $k\geqslant 0$, and assume that $f: \RR^{n+1}\to\RR$ and $g:\RR\to\RR$, with $g(0)=0$, are two infinitely differentiable functions such that for some constant $c$, \begin{equation} \label{eq:implicit_relation} f(x,g(x), g'(x),\ldots,g^{(n-1)}(x))=c, \end{equation} for all $x$ in some neighborhood of $x=0$, and \begin{equation} \label{eq:zero_derivatives} g'(0)=g^{(2)}(0)=\cdots=g^{(n-1+k)}(0)=0. \end{equation} Then \begin{equation*} \partial_{1,1,\ldots,1}^{(k+1)}f(0)=-\partial_{n+1}f(0)g^{(n+k)}(0). \end{equation*} \end{lemma} \begin{proof} Let $H(n,k)$ denote the statement that the conclusion of the lemma is true for the pair $(n,k)$. We shall prove that \begin{itemize} \item $H(n,0)$ holds for all $n\geqslant 1$; \item For all $n\geqslant 1$ and $k\geqslant 1$, $H(n+1,k-1)$ implies $H(n,k)$. \end{itemize} The lemma will clearly follow by induction. Let $f$ and $g$ be two functions satisfying the hypotheses of Lemma~\ref{lem:implicit_vs_parametric_derivatives} for some $n\geqslant 1$ and $k\geqslant 0$, and set $\gamma(x)=(x,g(x),g'(x),\ldots,g^{(n-1)}(x))$. First, differentiating relation~\eqref{eq:implicit_relation} w.r.t.\ the variable $x$ shows that \begin{equation} \label{eq:implicit_relation_on_derivatives} \partial_1f(\gamma(x))+\sum_{j=2}^n\partial_jf(\gamma(x))g^{(j-1)}(x)+\partial_{n+1}f(\gamma(x))g^{(n)}(x)=0, \end{equation} for all $x$ in some neighborhood of $0$.
Hence, according to \eqref{eq:zero_derivatives}, we get that \begin{equation*} \partial_1f(0)+\partial_{n+1}f(0)g^{(n)}(0)=0, \end{equation*} thereby proving $H(n,0)$. Furthermore, equation~\eqref{eq:implicit_relation_on_derivatives} can be rewritten as \begin{equation*} h(x,g(x),g'(x),\ldots, g^{(n)}(x))=0, \end{equation*} in some neighborhood of $x=0$, where $h:\RR^{n+2}\to\RR$ is defined by \begin{equation} \label{eq:h_function_definition} h(x_1,x_2,x_3,\ldots,x_{n+2})=\partial_1f(x_1,\ldots,x_{n+1})+\sum_{j=2}^n\partial_jf(x_1,\ldots,x_{n+1})x_{j+1}+\partial_{n+1}f(x_1,\ldots,x_{n+1})x_{n+2}. \end{equation} Since equation \eqref{eq:zero_derivatives} is left invariant when replacing $n$ by $n+1$ and $k$ by $k-1$, the functions $h$ and $g$ fulfill the hypotheses of the lemma for the pair $(n+1,k-1)$. Therefore, if $H(n+1,k-1)$ holds, then \begin{equation*} \partial_{1,1,\ldots,1}^{(k)}h(0)=-\partial_{n+2}h(0)g^{(n+k)}(0). \end{equation*} But it is clear from the definition~\eqref{eq:h_function_definition} of $h$ that \begin{equation*} \partial_{1,1,\ldots,1}^{(k)}h(0)=\partial_{1,1,\ldots,1}^{(k+1)}f(0), \end{equation*} and \begin{equation*} \partial_{n+2}h(0)=\partial_{n+1}f(0). \end{equation*} Hence $H(n,k)$ holds, and the proof is complete. \end{proof} \section*{Acknowledgments} This work was partially supported by Agence Nationale de la Recherche Grant ANR-09-BLAN-0084-01. We thank Alano Ancona, Guy Barles, Emmanuel Lesigne and Marc Peign\'e for interesting discussions. We thank a referee and an associate editor for their useful comments and suggestions. \end{document}
\begin{document} \title{$C^{1,\theta}$-estimates on the distance of inertial manifolds} {\footnotesize \par\noindent {\bf Abstract:} In this paper we obtain $C^{1,\theta}$-estimates on the distance of inertial manifolds for dynamical systems generated by evolutionary parabolic type equations. We consider the situation where the systems are defined in different phase spaces and we estimate the distance in terms of the distance of the resolvent operators of the corresponding elliptic operators and the distance of the nonlinearities of the equations. \vskip 0.5\baselineskip \noindent {\bf Keywords:} inertial manifolds, evolution equations, perturbations. \noindent {\bf 2000 Mathematics Subject Classification:} 35B42, 35K90 } \numberwithin{equation}{section} \newtheorem{teo}{Theorem}[section] \newtheorem{lem}[teo]{Lemma} \newtheorem{cor}[teo]{Corollary} \newtheorem{prop}[teo]{Proposition} \newtheorem{defi}[teo]{Definition} \newtheorem{re}[teo]{Remark} \section{Introduction} We continue in this work the analysis started in \cite{Arrieta-Santamaria-DCDS} on the estimates on the distance of inertial manifolds. Actually, in \cite{Arrieta-Santamaria-DCDS} we considered a family of abstract evolution equations of parabolic type, which may be posed in different phase spaces (see equation (\ref{problemaperturbado}) below), and we imposed very general conditions (see (H1) and (H2) below) guaranteeing that each problem has an inertial manifold and, more importantly, we were able to obtain estimates in the norm of the supremum on the convergence of the inertial manifolds. These estimates are expressed in terms of the distance of the resolvent operators and in terms of the distance of the nonlinear terms.
These results are the starting point of the present paper and are briefly described in Section \ref{setting} (see Theorem \ref{distaciavariedadesinerciales}). One of the main applications of invariant manifolds is that they allow us to describe the dynamics (locally or globally) of an infinite dimensional system with only a finite number of parameters (the dimension of the manifold). This drastic reduction of dimensionality makes it possible, in many instances, to analyze in detail the dynamics of the equation and to study perturbation problems. But for these questions, some extra differentiability of the manifold and some estimates on the convergence in stronger norms like $C^1$ or $C^{1,\theta}$ are desirable, see \cite{Hale&Raugel3,Arrieta-Santamaria-2}. Actually, the estimates from this paper and from \cite{Arrieta-Santamaria-DCDS} are key estimates to obtain good rates on the convergence of attractors of reaction diffusion equations in thin domains, a problem which is addressed in \cite{Arrieta-Santamaria-2}. This is, in fact, the main motivation of this work. Under the very general setting from \cite{Arrieta-Santamaria-DCDS}, but imposing some extra differentiability and convergence properties on the nonlinear terms (see hypothesis (H2') below), we show that the inertial manifolds are uniformly $C^{1,\theta}$ smooth and we obtain estimates on the convergence of the manifolds in this $C^{1,\theta}$ norm. Let us mention that the theory of invariant and inertial manifolds is by now well established. We refer to \cite{Bates-Lu-Zeng1998, Sell&You} for general references on the theory of inertial manifolds. See also \cite{JamesRobinson} for an accessible introduction to the theory. These inertial manifolds are smooth, see \cite{ChowLuSell}. We also refer to \cite{Henry1, Hale, B&V2,Sell&You,LibroAlexandre,Cholewa} for general references on dynamics of evolutionary equations. We describe now the contents of the paper.
In Section \ref{setting} we introduce the notation, review the main hypotheses (especially (H1) and (H2)) and results from \cite{Arrieta-Santamaria-DCDS}. We describe in detail the new hypothesis (H2') and state the main results of the paper, Proposition \ref{FixedPoint-E^1Theta} and Theorem \ref{convergence-C^1-theo}. In Section \ref{smoothness} we analyze the $C^{1,\theta}$ smoothness of the inertial manifold, proving Proposition \ref{FixedPoint-E^1Theta}. The analysis is based on previous results from \cite{ChowLuSell}. In Section \ref{convergence} we obtain the estimates on the distance of the inertial manifolds in the $C^{1,\theta}$ norm, proving Theorem \ref{convergence-C^1-theo}. \section{Setting of the problem and main results} \label{setting} In this section we consider the setting of the problem, following \cite{Arrieta-Santamaria-DCDS}. We refer to this paper for more details about the setting. Hence, consider the family of problems, \begin{equation}\label{problemalimite} (P_0)\left\{ \begin{array}{r l } u^0_t+A_0u^0&=F_0^\varepsilon(u^0),\\ u^0(0)\in X^\alpha_0, \end{array} \right. \end{equation} and \begin{equation}\label{problemaperturbado} (P_\varepsilon)\left\{ \begin{array}{r l } u^\varepsilon_t+A_\varepsilon u^\varepsilon&=F_\varepsilon(u^\varepsilon),\qquad 0<\varepsilon\leq \varepsilon_0,\\ u^\varepsilon(0)\in X^\alpha_\varepsilon, \end{array} \right. \end{equation} where we assume that $A_\varepsilon$ is a self-adjoint positive linear operator on a separable real Hilbert space $X_\varepsilon$, that is $A_\varepsilon: D(A_\varepsilon)=X^1_\varepsilon\subset X_\varepsilon\rightarrow X_\varepsilon,$ and $F_\varepsilon:X_\varepsilon^\alpha\to X_\varepsilon$, $F_0^\varepsilon:X_0^\alpha\to X_0$ are nonlinearities guaranteeing global existence of solutions of \eqref{problemalimite} and \eqref{problemaperturbado}, for each $0\leq \varepsilon\leq \varepsilon_0$ and for some $0\leq \alpha<1$.
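A standard model case to keep in mind (given here only for illustration; this particular choice is not needed in what follows) is the reaction diffusion setting
\begin{equation*}
A_0=-\Delta \ \hbox{ with Dirichlet boundary conditions in a smooth bounded domain }\Omega\subset\mathbb{R}^N,\qquad X_0=L^2(\Omega),\quad X_0^{1/2}=H^1_0(\Omega),
\end{equation*}
with $F_0^\varepsilon$ a Nemytskii operator, $F_0^\varepsilon(u)(x)=f(u(x))$, associated to a smooth function $f$ with an appropriate cut-off so that the boundedness, Lipschitz and support conditions required below are satisfied.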
Observe that for problem \eqref{problemalimite} we even allow the nonlinearity to depend on $\varepsilon$. As in \cite{Arrieta-Santamaria-DCDS}, we assume the existence of linear continuous operators, $E$ and $M$, such that, $E: X_0\rightarrow X_\varepsilon$, $M: X_\varepsilon\rightarrow X_0$ and $E_{\mid_{X^\alpha_0}}: X_0^\alpha\rightarrow X_\varepsilon^\alpha$ and $M_{\mid_{X_\varepsilon^\alpha}}: X_\varepsilon^\alpha\rightarrow X_0^\alpha$, satisfying, \begin{equation}\label{cotaextensionproyeccion} \|E\|_{\mathcal{L}(X_0, X_\varepsilon)}, \|M\|_{\mathcal{L}(X_\varepsilon, X_0)}\leq \kappa ,\qquad \|E\|_{\mathcal{L}(X^\alpha_0, X^\alpha_\varepsilon)}, \|M\|_{\mathcal{L}(X^\alpha_\varepsilon, X^\alpha_0)}\leq \kappa, \end{equation} for some constant $\kappa\geq 1$. We also assume these operators satisfy the following properties, \begin{equation}\label{propiedadesextensionproyeccion} M\circ E= I,\qquad \|Eu_0\|_{X_\varepsilon}\rightarrow \|u_0\|_{X_0}\quad\textrm{as}\quad \varepsilon\to 0,\quad\textrm{for}\quad u_0\in X_0. \end{equation} The operators $A_\varepsilon$, $0\leq \varepsilon\leq \varepsilon_0$, have compact resolvent. This, together with the fact that they are selfadjoint, implies that their spectrum is discrete, real and consists only of eigenvalues, each one with finite multiplicity. Moreover, the fact that $A_\varepsilon$, $0\leq \varepsilon\leq \varepsilon_0$, is positive implies that its spectrum is positive. So, we denote by $\sigma(A_\varepsilon)$, the spectrum of the operator $A_\varepsilon$, with, $$\sigma(A_\varepsilon)=\{\lambda_n^\varepsilon\}_{n=1}^\infty,\qquad\textrm{ and}\quad 0<c\leq\lambda_1^\varepsilon\leq\lambda_2^\varepsilon\leq...\leq\lambda_n^\varepsilon\leq...$$ and we also denote by $\{\varphi_i^\varepsilon\}_{i=1}^\infty$ an associated orthonormal family of eigenfunctions.
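Although it will not be needed explicitly, let us recall that in this setting the fractional power spaces admit the usual spectral description: writing $u=\sum_{i=1}^\infty a_i\varphi_i^\varepsilon$,
\begin{equation*}
A_\varepsilon^\alpha u=\sum_{i=1}^\infty (\lambda_i^\varepsilon)^\alpha a_i\varphi_i^\varepsilon,\qquad X_\varepsilon^\alpha=D(A_\varepsilon^\alpha)=\Big\{u\in X_\varepsilon:\ \sum_{i=1}^\infty (\lambda_i^\varepsilon)^{2\alpha}a_i^2<\infty\Big\},
\end{equation*}
endowed with the norm $\|u\|_{X_\varepsilon^\alpha}=\|A_\varepsilon^\alpha u\|_{X_\varepsilon}$.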
Observe that the requirement of the operators $A_\varepsilon$ being positive can be relaxed to requiring that they are all bounded from below uniformly in the parameter $\varepsilon$. We can always consider the modified operators $A_\varepsilon+cI$ with $c$ a large enough constant to make the modified operators positive. The nonlinear equations \eqref{problemaperturbado} would have to be rewritten accordingly. \par With respect to the relation between both operators, $A_0$ and $A_\varepsilon$, and following \cite{Arrieta-Santamaria-DCDS}, we will assume the following hypothesis {\sl \paragraph{\textbf{(H1).}} With $\alpha$ the exponent from problems (\ref{problemaperturbado}), we have \begin{equation}\label{H1equation} \|A_\varepsilon^{-1}- EA_0^{-1}M\|_{\mathcal{L}(X_\varepsilon, X_\varepsilon^\alpha)}\to 0\quad \hbox{ as } \varepsilon\to 0. \end{equation} } \par Let us define $\tau(\varepsilon)$ as an increasing function of $\varepsilon$ such that \begin{equation}\label{definition-tau} \|A_\varepsilon^{-1}E- EA_0^{-1}\|_{\mathcal{L}(X_0, X_\varepsilon^\alpha)}\leq \tau(\varepsilon).
\end{equation} \par We also recall hypothesis {\bf(H2)} from \cite{Arrieta-Santamaria-DCDS}, regarding the nonlinearities $F_0$ and $F_\varepsilon$, \par {\sl \paragraph{\textbf{(H2).}} We assume that the nonlinear terms $F_\varepsilon: X^\alpha_\varepsilon\rightarrow X_\varepsilon$ and $F_0^\varepsilon: X^\alpha_0\rightarrow X_0$ for $0< \varepsilon\leq \varepsilon_0$, satisfy: \begin{enumerate} \item[(a)] They are uniformly bounded, that is, there exists a constant $C_F>0$ independent of $\varepsilon$ such that, $$\|F_\varepsilon\|_{L^\infty(X_\varepsilon^\alpha, X_\varepsilon)}\leq C_F, \quad \|F_0^\varepsilon\|_{L^\infty(X_0^\alpha, X_0)}\leq C_F$$ \item[(b)] They are globally Lipschitz on $X^\alpha_\varepsilon$ with a uniform Lipschitz constant $L_F$, that is, \begin{equation}\label{LipschitzFepsilon} \|F_\varepsilon(u)- F_\varepsilon(v)\|_{X_\varepsilon}\leq L_F\|u-v\|_{X_\varepsilon^\alpha} \end{equation} \begin{equation}\label{LipschitzF0} \|F_0^\varepsilon(u)- F_0^\varepsilon(v)\|_{X_0}\leq L_F\|u-v\|_{X_0^\alpha}. \end{equation} \item[(c)] They have a uniformly bounded support for $0<\varepsilon\leq \varepsilon_0$: there exists $R>0$ such that $$Supp F_\varepsilon\subset D_{R}=\{u_\varepsilon\in X_\varepsilon^\alpha: \|u_\varepsilon\|_{X_\varepsilon^\alpha}\leq R\}$$ $$Supp F_0^\varepsilon\subset D_{R}=\{u_0\in X_0^\alpha: \|u_0\|_{X_0^\alpha}\leq R\}.$$ \item[(d)] $F_\varepsilon$ is near $F_0^\varepsilon$ in the following sense, \begin{equation}\label{estimacionefes} \sup_{u_0\in X^\alpha_0}\|F_\varepsilon (Eu_0)-EF_0^\varepsilon (u_0)\|_{X_\varepsilon}=\rho(\varepsilon), \end{equation} and $\rho(\varepsilon)\rightarrow 0$ as $\varepsilon\rightarrow 0$. \end{enumerate} } With {\bf (H1)} and {\bf (H2)} we were able to show in \cite{Arrieta-Santamaria-DCDS} the existence of inertial manifolds, their convergence, and a rate for this convergence in the supremum norm.
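Condition (c) of {\bf (H2)} is typically arranged in applications by multiplying a given nonlinearity by a Lipschitz cutoff, which also keeps the map bounded and globally Lipschitz as in (a) and (b). The sketch below, with an illustrative nonlinearity, radius $R$, and Lipschitz bound chosen only for the example, checks these properties numerically:

```python
import numpy as np

# A hedged sketch of how one produces a nonlinearity satisfying (H2):
# multiply a smooth map by a Lipschitz cutoff supported in the ball of
# radius R. The resulting F is bounded, globally Lipschitz, and has
# bounded support. The choices of R, F, and L_F here are illustrative.
R = 2.0

def cutoff(r):
    # equals 1 on [0, R/2], 0 on [R, infinity), linear in between
    return np.clip((R - r) / (R / 2), 0.0, 1.0)

def F(u):
    return cutoff(np.linalg.norm(u)) * np.sin(u)  # bounded, Lipschitz

u = np.array([10.0, 0.0])              # a point outside the ball of radius R
assert np.allclose(F(u), 0.0)          # supp F contained in {|u| <= R}

v = np.array([0.1, 0.2])
w = np.array([0.15, 0.1])
L_F = 3.0                              # a crude Lipschitz bound for this toy F
assert np.linalg.norm(F(v) - F(w)) <= L_F * np.linalg.norm(v - w)
```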
In order to explain the result and to understand the rest of this paper, we need to introduce some notation and results from \cite{Arrieta-Santamaria-DCDS}. We refer to this paper for more explanations. Let us consider $m\in\mathbb{N}$ such that $\lambda_m^0<\lambda_{m+1}^0$ and denote by $\mathbf{P}_{\mathbf{m}}^{\bm\varepsilon}$ the canonical orthogonal projection onto the space spanned by the eigenfunctions, $\{\varphi^\varepsilon_i\}_{i=1}^m$, corresponding to the first $m$ eigenvalues of the operator $A_\varepsilon $, $0\leq\varepsilon\leq\varepsilon_0$, and by $\mathbf{Q}^{\bm\varepsilon}_{\mathbf{m}}$ the projection onto its orthogonal complement, see \cite{Arrieta-Santamaria-DCDS}. For technical reasons, we express any element belonging to the linear subspace $\mathbf{P}_{\mathbf{m}}^{\bm\varepsilon}(X_\varepsilon)$ as a linear combination of the elements of the following basis $$\{\mathbf{P}_{\mathbf{m}}^{\bm\varepsilon}(E\varphi^0_1), \mathbf{P}_{\mathbf{m}}^{\bm\varepsilon}(E\varphi^0_2), ...,\mathbf{P}_{\mathbf{m}}^{\bm\varepsilon}(E\varphi^0_m)\},\qquad\textrm{for}\quad 0\leq\varepsilon\leq\varepsilon_0,$$ with $\{\varphi^0_i\}_{i=1}^m$ the eigenfunctions related to the first $m$ eigenvalues of $A_0$, which constitute a basis in $\mathbf{P}_{\mathbf{m}}^{\bm\varepsilon}(X_\varepsilon)$ and in $\mathbf{P}_{\mathbf{m}}^{\bm\varepsilon}(X_\varepsilon^\alpha)$, see \cite{Arrieta-Santamaria-DCDS}. We will denote $\psi_i^\varepsilon=\mathbf{ P_m^\varepsilon}(E\varphi_i^0)$. Let us denote by $j_\varepsilon$ the isomorphism from $ \mathbf{P}_{\mathbf{m}}^{\bm\varepsilon}(X_\varepsilon)=[\psi_1^\varepsilon, ..., \psi_m^\varepsilon]$ onto $\mathbb{R}^m$ that gives us the coordinates of each vector.
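The isomorphism $j_\varepsilon$ simply extracts the coefficient vector of an element of the span of $\psi_1^\varepsilon,\ldots,\psi_m^\varepsilon$. A minimal numerical sketch, with a synthetic and generically non-orthogonal basis standing in for the $\psi_i^\varepsilon$, recovers the coordinates by solving a least-squares problem:

```python
import numpy as np

# Sketch of the coordinate isomorphism j_eps: an element of the span of
# psi_1,...,psi_m is sent to its coefficient vector p. With a basis matrix
# Psi (columns psi_i), coordinates are recovered by solving Psi p = w.
# All data here are synthetic stand-ins for the perturbed eigenfunctions.
rng = np.random.default_rng(0)
m, n = 3, 10
Psi = rng.standard_normal((n, m))      # columns: the basis vectors psi_i

def j_eps(w):
    # least squares recovers p exactly whenever w lies in span(Psi)
    return np.linalg.lstsq(Psi, w, rcond=None)[0]

p = np.array([1.0, -2.0, 0.5])
w = Psi @ p                            # w = sum_i p_i psi_i
assert np.allclose(j_eps(w), p)
```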
That is, \begin{equation}\label{definition-jeps} \begin{array}{rl} j_\varepsilon:\mathbf{P}_{\mathbf{m}}^{\bm\varepsilon}(X_\varepsilon)&\longrightarrow \mathbb{R}^m, \\ w_\varepsilon&\longmapsto\bar{p}, \end{array} \end{equation} where $w_\varepsilon=\sum^m_{i=1} p_i\psi^\varepsilon_i$ and $\bar{p}=(p_1, ..., p_m)$. We denote by $|\cdot|$ the usual Euclidean norm in $\mathbb{R}^m$, that is $|\bar{p}|=\left(\sum_{i=1}^mp_i^2\right)^{\frac{1}{2}}$, and by $|\cdot|_{\varepsilon,\alpha}$ the following weighted one, \begin{equation}\label{normaalpha} |\bar{p}|_{\varepsilon,\alpha}=\left(\sum_{i=1}^mp_i^2(\lambda_i^\varepsilon)^{2\alpha}\right)^{\frac{1}{2}}. \end{equation} We consider the spaces $(\mathbb{R}^m, |\cdot|)$ and $(\mathbb{R}^m, |\cdot|_{\varepsilon,\alpha})$, that is, $\mathbb{R}^m$ with the norm $|\cdot|$ and $|\cdot|_{\varepsilon,\alpha}$, respectively, and notice that for $w_0=\sum^m_{i=1} p_i\psi^0_i$ and $0\leq\alpha<1$ we have that \begin{equation}\label{normajepsilon} \|w_0\|_{X^\alpha_0}=|j_0(w_0)|_{0,\alpha}. \end{equation} It is also not difficult to see, from the convergence of the eigenvalues (which is obtained from {\bf (H1)}, see \cite{Arrieta-Santamaria-DCDS}), that for a fixed $m$ and for all $\delta>0$ small enough there exists $\varepsilon(\delta)>0$ such that \begin{equation}\label{des-normas} (1-\delta)|\bar p|_{0,\alpha}\leq |\bar p|_{\varepsilon,\alpha}\leq (1+\delta)|\bar p|_{0,\alpha}, \quad 0\leq \varepsilon\leq \varepsilon(\delta), \quad \forall \bar p\in\R^m.
\end{equation} With respect to the behavior of the linear semigroup in the subspace $\mathbf{Q}^{\bm\varepsilon}_{\mathbf{m}}X_\varepsilon^\alpha$, notice that we have the expression $$e^{-A_\varepsilon t}\mathbf{Q}^{\bm\varepsilon}_{\mathbf{m}} u=e^{-A_\varepsilon \mathbf{Q}^{\bm\varepsilon}_{\mathbf{m}} t}u=\sum_{i=m+1}^\infty e^{-\lambda_i^\varepsilon t}(u, \varphi_i^\varepsilon)\varphi_i^\varepsilon.$$ Hence, using the expression of $e^{-A_\varepsilon \mathbf{Q}^{\bm\varepsilon}_{\mathbf{m}} t}$ from above and following a proof similar to that of Lemma 3.1 from \cite{Arrieta-Santamaria-DCDS}, we get $$\|e^{-A_\varepsilon \mathbf{Q}^{\bm\varepsilon}_{\mathbf{m}} t}\|_{\mathcal{L}(X_\varepsilon, X_\varepsilon)}\leq e^{-\lambda_{m+1}^\varepsilon t},$$ and, \begin{equation}\label{semigrupoproyectado} \|e^{-A_\varepsilon \mathbf{Q}^{\bm\varepsilon}_{\mathbf{m}} t}\|_{\mathcal{L}(X_\varepsilon, X_\varepsilon^\alpha)}\leq e^{-\lambda_{m+1}^\varepsilon t}\left(\max\{\lambda_{m+1}^\varepsilon, \frac{\alpha}{t}\}\right)^\alpha, \end{equation} for $t\geq 0.$ In a similar way, we have $$e^{-A_\varepsilon t}\mathbf{P}^{\bm\varepsilon}_{\mathbf{m}} u=\sum_{i=1}^m e^{-\lambda_i^\varepsilon t}(u, \varphi_i^\varepsilon)\varphi_i^\varepsilon,$$ and, following similar steps as above, for $t\leq 0$ we have \begin{equation}\label{semigrupoproyectadoP} \|e^{-A_\varepsilon \mathbf{P}^{\bm\varepsilon}_{\mathbf{m}} t}\|_{\mathcal{L}(X_\varepsilon, X_\varepsilon)}\leq e^{-\lambda_{m}^\varepsilon t},\,\,\,\,\,\,\,\,\,\,\,\, \|e^{-A_\varepsilon \mathbf{P}^{\bm\varepsilon}_{\mathbf{m}} t}\|_{\mathcal{L}(X^\alpha_\varepsilon, X^\alpha_\varepsilon)}\leq e^{-\lambda_{m}^\varepsilon t}, \end{equation} \begin{equation}\label{semigrupoproyectadoP-alpha} \|e^{-A_\varepsilon \mathbf{P}^{\bm\varepsilon}_{\mathbf{m}} t}\|_{\mathcal{L}(X_\varepsilon, X_\varepsilon^\alpha)}\leq e^{-\lambda_m^\varepsilon t}(\lambda_m^\varepsilon)^\alpha.
\end{equation} \par We are looking for inertial manifolds for systems \eqref{problemaperturbado} and \eqref{problemalimite}, which will be obtained as graphs of appropriate functions. This motivates the introduction of the sets $\mathcal{F}_\varepsilon(L,\rho)$ defined as $$\mathcal{F}_\varepsilon(L,\rho)=\{ \Phi :\mathbb{R}^m\rightarrow\mathbf{Q}_{\mathbf{m}}^{\bm\varepsilon}(X^\alpha_\varepsilon),\quad\textrm{such that}\quad \textrm{supp } \Phi\subset B_R\quad \textrm{and}\quad$$ $$\quad \|\Phi(\bar{p}^1)-\Phi(\bar{p}^2)\|_{X^\alpha_\varepsilon}\leq L|\bar{p}^1-\bar{p}^2|_{\varepsilon,\alpha} \quad\bar{p}^1,\bar{p}^2\in\mathbb{R}^m \}.$$ Then we can show the following result. \begin{prop} (\cite{Arrieta-Santamaria-DCDS})\label{existenciavariedadinercial} Let hypotheses {\bf (H1)} and {\bf (H2)} be satisfied. Assume also that $m\geq 1$ is such that, \begin{equation}\label{CondicionAutovaloresFuerte0} \lambda_{m+1}^0-\lambda_m^0\geq 3(\kappa+2)L_F\left[(\lambda_m^0)^\alpha+(\lambda_{m+1}^0)^\alpha\right], \end{equation} and \begin{equation}\label{autovalorgrande0} (\lambda_m^0)^{1-\alpha}\geq 6(\kappa +2)L_F(1-\alpha)^{-1}. \end{equation} Then, there exist $L<1$ and $\varepsilon_0>0$ such that for all $0<\varepsilon\leq\varepsilon_0$ there exist inertial manifolds $\mathcal{M}_\varepsilon$ and $\mathcal{M}_0^\varepsilon$ for (\ref{problemaperturbado}) and (\ref{problemalimite}) respectively, given by the ``graph'' of a function $\Phi_\varepsilon\in\mathcal{F}_\varepsilon(L,\rho)$ and $\Phi_0^\varepsilon\in\mathcal{F}_0(L,\rho)$. \end{prop} \begin{re} We have written quotation marks around the word ``graph'' since the manifolds $\mathcal{M}_\varepsilon$, $\mathcal{M}_0^\varepsilon$ are not properly speaking the graph of the functions $\Phi_\varepsilon$, $\Phi_0^\varepsilon$ but rather the graph of the appropriate function obtained via the isomorphism $j_\varepsilon$ which identifies $\mathbf{P}_{\mathbf{m}}^{\bm\varepsilon}(X_\varepsilon^\alpha)$ with $\R^m$.
That is, $\mathcal{M}_\varepsilon=\{ j_\varepsilon^{-1}(\bar p)+\Phi_\varepsilon(\bar p); \quad \bar p\in \R^m\}$ and $\mathcal{M}_0^\varepsilon=\{ j_0^{-1}(\bar p)+\Phi_0^\varepsilon(\bar p); \quad \bar p\in \R^m\}$. \end{re} The main result from \cite{Arrieta-Santamaria-DCDS} was the following: \begin{teo} (\cite{Arrieta-Santamaria-DCDS}) \label{distaciavariedadesinerciales} Let hypotheses {\bf (H1)} and {\bf (H2)} be satisfied and let $\tau(\varepsilon)$ be defined by \eqref{definition-tau}. Then, under the hypotheses of Proposition \ref{existenciavariedadinercial}, if $\Phi_\varepsilon$ are the maps that give us the inertial manifolds for $0<\varepsilon\leq \varepsilon_0$ then we have, \begin{equation}\label{distance-inertialmanifolds} \|\Phi_\varepsilon-E\Phi_0^\varepsilon\|_{L^\infty(\mathbb{R}^m, X^\alpha_\varepsilon)}\leq C[\tau(\varepsilon)|\log(\tau(\varepsilon))|+\rho(\varepsilon)], \end{equation} with $C$ a constant independent of $\varepsilon$. \end{teo} \par \begin{re} Properly speaking, in \cite{Arrieta-Santamaria-DCDS} the above theorem is proved only in the case where the nonlinearity $F_0^\varepsilon$ from \eqref{problemalimite} satisfies $F_0^\varepsilon\equiv F_0$ for all $0<\varepsilon<\varepsilon_0$. But reviewing the proof of \cite{Arrieta-Santamaria-DCDS} we can see that exactly the same argument is valid in the more general case where the nonlinearity depends on $\varepsilon$. \end{re} To obtain stronger convergence results on the inertial manifolds, we will need to require stronger conditions on the nonlinearities. These conditions are stated in the following hypothesis, \par {\sl \paragraph{\textbf{(H2').}} We assume that the nonlinear terms $F_\varepsilon$ and $F_0^\varepsilon$ satisfy hypothesis {\bf(H2)} and are uniformly $C^{1,\theta_F}$ functions from $X_\varepsilon^\alpha$ to $X_\varepsilon$ and from $X_0^\alpha$ to $X_0$, respectively, for some $0<\theta_F\leq 1$.
That is, $F_\varepsilon\in C^1(X_\varepsilon^\alpha, X_\varepsilon)$, $F_0^\varepsilon\in C^1(X_0^\alpha, X_0)$ and there exists $L>0$, independent of $\varepsilon$, such that $$\|DF_\varepsilon(u)-DF_\varepsilon(u')\|_{\mathcal{L}(X_\varepsilon^\alpha, X_\varepsilon)}\leq L\|u-u'\|^{\theta_F}_{X_\varepsilon^\alpha},\qquad \forall u, u'\in X_\varepsilon^\alpha.$$ $$\|DF_0^\varepsilon(u)-DF_0^\varepsilon(u')\|_{\mathcal{L}(X_0^\alpha, X_0)}\leq L\|u-u'\|^{\theta_F}_{X_0^\alpha},\qquad \forall u, u'\in X_0^\alpha.$$ } \par We can now state the main results of this section. \begin{prop}\label{FixedPoint-E^1Theta} Assume hypotheses {\bf(H1)} and {\bf(H2')} are satisfied and that the gap conditions \eqref{CondicionAutovaloresFuerte0}, \eqref{autovalorgrande0} hold. Then, for any $\theta>0$ such that $\theta\leq\theta_F$ and $\theta<\theta_0$, where \begin{equation}\label{theta} \theta_0= \frac{\lambda_{m+1}^0-\lambda_m^0-4L_F(\lambda_m^0)^\alpha-2L_F(\lambda_{m+1}^0)^\alpha}{2L_F(\lambda_m^0)^\alpha+\lambda_m^0} \end{equation} the functions $\Phi_\varepsilon$ and $\Phi_0^\varepsilon$ for $0< \varepsilon\leq\varepsilon_0$, obtained above, which give the inertial manifolds, belong to $C^{1, \theta}(\mathbb{R}^m, X_\varepsilon^\alpha)$ and $C^{1, \theta}(\mathbb{R}^m, X_0^\alpha)$, respectively. Moreover, their $C^{1,\theta}$ norms are bounded uniformly in $\varepsilon$, for $\varepsilon>0$ small. \end{prop} The main result we want to show in this article is the following: \par \begin{teo}\label{convergence-C^1-theo} Let hypotheses {\bf (H1)}, {\bf (H2')} and gap conditions \eqref{CondicionAutovaloresFuerte0}, \eqref{autovalorgrande0} be satisfied, so that Proposition \ref{FixedPoint-E^1Theta} holds and we have inertial manifolds $\mathcal{M}^\varepsilon$, $\mathcal{M}_0^\varepsilon$ given as the graphs of the functions $\Phi_\varepsilon$, $\Phi_0^\varepsilon$ for $0<\varepsilon\leq\varepsilon_0$.
If we denote by \begin{equation}\label{convergenceDF} \beta(\varepsilon)=\sup_{u\in \mathcal{M}^\varepsilon_0}\|DF_\varepsilon \big(Eu \big)E-EDF_0^\varepsilon\big(u\big)\|_{\mathcal{L}(X_0^\alpha, X_\varepsilon)}, \end{equation} then there exists $\theta^*$ with $0<\theta^*<\theta_F$ such that for all $0<\theta<\theta^*$, we obtain the following estimate \begin{equation}\label{distance-C^1-inertialmanifolds} \|\Phi_\varepsilon-E\Phi_0^\varepsilon\|_{C^{1, \theta}(\mathbb{R}^m, X_\varepsilon^\alpha)}\leq \mathbf{C} \left(\left[\beta(\varepsilon)+\Big(\tau(\varepsilon)|\log(\tau(\varepsilon))|+\rho(\varepsilon)\Big)^{\theta^*}\right]\right)^{1-\frac{\theta}{\theta^*}}, \end{equation} where $\tau(\varepsilon)$, $\rho(\varepsilon)$ are given by (\ref{definition-tau}), (\ref{estimacionefes}), respectively, and $\mathbf{C}$ is a constant independent of $\varepsilon$. \end{teo} \begin{re} As a matter of fact, $\theta^*$ can be chosen as any $\theta^*<\min\{\theta_F, \theta_0, \theta_1\}$ where $\theta_F$ is from {\bf(H2')}, $\theta_0$ is defined in \eqref{theta} and $\theta_1$ is given by $$\theta_1= \frac{\lambda_{m+1}^0-\lambda_m^0-4L_F(\lambda_m^0)^\alpha}{(\kappa+2)L_F(\lambda_m^0)^\alpha+\lambda_m^0+3}, $$ see \eqref{theta1}. \end{re} As usual, we denote by $C^{1, \theta}(\mathbb{R}^m, X_\varepsilon^\alpha)$ the space of $C^1(\mathbb{R}^m, X_\varepsilon^\alpha)$ maps whose differentials are uniformly H\"{o}lder continuous with H\"{o}lder exponent $\theta$. That is, there is a constant $C$ independent of $\varepsilon$ such that, $$\|D\Phi_\varepsilon(z)-D\Phi_\varepsilon(z')\|_{\mathcal{L}(\mathbb{R}^m, X_\varepsilon^\alpha)}\leq C|z-z'|_{\varepsilon,\alpha}^{\theta},$$ where the norm $|\cdot |_{\varepsilon,\alpha}$ is given by (\ref{normaalpha}). Notice that the norm $|\cdot|_{\varepsilon, \alpha}$ is equivalent to $|\cdot|$ uniformly in $\varepsilon$ and $\alpha$.
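The weighted norm $|\cdot|_{\varepsilon,\alpha}$ of \eqref{normaalpha} and its equivalence with the Euclidean norm, for fixed $m$, can be checked directly; in the sketch below the eigenvalues are synthetic placeholders for $\lambda_1^\varepsilon,\ldots,\lambda_m^\varepsilon$:

```python
import numpy as np

# Numerical sketch of the weighted norm |p|_{eps,alpha} and its equivalence
# with the Euclidean norm |p| for fixed m. The eigenvalues lam are synthetic
# stand-ins for lambda_1^eps, ..., lambda_m^eps.
alpha = 0.5
lam = np.array([1.0, 4.0, 9.0])

def weighted_norm(p):
    return np.sqrt(np.sum(p**2 * lam**(2 * alpha)))

p = np.array([1.0, -1.0, 2.0])
c = lam[0]**alpha                      # lower equivalence constant
C = lam[-1]**alpha                     # upper equivalence constant
assert c * np.linalg.norm(p) <= weighted_norm(p) <= C * np.linalg.norm(p)
```

The equivalence constants degenerate as $m\to\infty$, which is why the uniformity in $\varepsilon$ for fixed $m$ is the relevant point here.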
The space $C^{1, \theta}(\mathbb{R}^m, X_\varepsilon^\alpha)$ is endowed with the norm $\|\cdot\|_{C^{1, \theta}(\mathbb{R}^m, X_\varepsilon^\alpha)}$ given by $$\|\Phi_\varepsilon\|_{C^{1, \theta}(\mathbb{R}^m, X_\varepsilon^\alpha)}=\|\Phi_\varepsilon\|_{C^1(\mathbb{R}^m, X_\varepsilon^\alpha)}+\sup_{z, z'\in\mathbb{R}^m}\frac{\|D\Phi_\varepsilon(z)-D\Phi_\varepsilon(z')\|_{\mathcal{L}(\mathbb{R}^m, X_\varepsilon^\alpha)}}{|z-z'|_{\varepsilon,\alpha}^{\theta}}.$$ To simplify notation below and unless some clarification is needed, we will denote the norms $\|\cdot\|_{C^{1}(\mathbb{R}^m, X_\varepsilon^\alpha)}$ and $\|\cdot\|_{C^{1, \theta}(\mathbb{R}^m, X_\varepsilon^\alpha)}$ by $\|\cdot\|_{C^{1}}$ and $\|\cdot\|_{C^{1, \theta}}$. Also, very often we will need to consider the following space of bounded linear operators $\mathcal{L}(\mathbf{P}_{\mathbf{m}}^{\bm\varepsilon}X^\alpha_\varepsilon, \mathbf{Q}_{\mathbf{m}}^{\bm\varepsilon}X_\varepsilon^\alpha)$ and its norm will be abbreviated by $\|\cdot \|_{\mathcal{L}}$. \section{Smoothness of inertial manifolds}\label{smoothness-subsection} In this section we show the $C^{1,\theta}$ smoothness of the inertial manifolds $\Phi_\varepsilon$ and $\Phi_0^\varepsilon$ for a fixed value of the parameter $\varepsilon$. Moreover, we will obtain estimates of their $C^{1,\theta}$ norms which are independent of the parameter $\varepsilon$. \par Recall that the $C^1$ smoothness of the manifold is shown in \cite{Sell&You}, where the following result is proved: \begin{teo}\label{smoothness} Let the hypotheses of Proposition \ref{existenciavariedadinercial} be satisfied. Assume that for each $\varepsilon>0$ the nonlinear functions $F_\varepsilon$, $F_0^\varepsilon$ are Lipschitz $C^1$ functions from $X_\varepsilon^\alpha$ to $X_\varepsilon$ and from $X_0^\alpha$ to $X_0$, respectively.
Then, the inertial manifolds $\mathcal{M}_\varepsilon$, $\mathcal{M}_0^\varepsilon$ for $\varepsilon>0$ are $C^1$-manifolds and the functions $\Psi_\varepsilon$, $\Psi_0^\varepsilon$ are Lipschitz $C^1$ functions from $\mathbf{P}_{\mathbf{m}}^{\bm\varepsilon}X_\varepsilon^\alpha$ to $\mathbf{Q}_{\mathbf{m}}^{\bm\varepsilon}X_\varepsilon^\alpha$ and from $\mathbf{P}_{\mathbf{m}}^{\bm 0}X_0^\alpha$ to $\mathbf{Q}_{\mathbf{m}}^{\bm 0}X_0^\alpha$. \end{teo} \begin{re}\label{relation-Phi-Psi} i) Let us mention that the relation between the maps $\Psi_\varepsilon: \mathbf{P}_{\mathbf{m}}^{\bm\varepsilon}X_\varepsilon^\alpha\to \mathbf{Q}_{\mathbf{m}}^{\bm\varepsilon}X_\varepsilon^\alpha$ (resp. $\Psi_0^\varepsilon: \mathbf{P}_{\mathbf{m}}^{\bm 0}X_0^\alpha\to \mathbf{Q}_{\mathbf{m}}^{\bm 0}X_0^\alpha$) and $\Phi_\varepsilon: \R^m\to \mathbf{Q}_{\mathbf{m}}^{\bm\varepsilon}X_\varepsilon^\alpha$ (resp. $\Phi_0^\varepsilon: \R^m\to \mathbf{Q}_{\mathbf{m}}^{\bm 0}X_0^\alpha$) is $\Phi_\varepsilon=\Psi_\varepsilon\circ j_\varepsilon^{-1}$ (resp. $\Phi_0^\varepsilon=\Psi_0^\varepsilon\circ j_0^{-1}$), where $j_\varepsilon$ is defined by \eqref{definition-jeps}. \par\noindent ii) For the rest of the exposition, whenever we write $\Psi_\varepsilon$, $\Psi_0^\varepsilon$, $\Phi_\varepsilon$ and $\Phi_0^\varepsilon$ we will refer to these maps that define the inertial manifolds. \end{re} The proof of this theorem is based on the following extension of the Contraction Mapping Theorem, see \cite{ChowLuSell}. \begin{lem}\label{contracion} Let $X$ and $Y$ be complete metric spaces with metrics $d_x$ and $d_y$. Let $H: X\times Y\rightarrow X\times Y$ be a continuous function satisfying the following: \begin{itemize} \item[(1)] $H(x, y)=(F(x),G(x, y))$, that is, $F$ does not depend on $y$.
\item[(2)] There is a constant $\theta$ with $0\leq\theta <1$ such that one has $$d_x(F(x_1), F(x_2))\leq\theta d_x(x_1, x_2),\qquad x_1, x_2\in X,$$ $$d_y(G(x, y_1), G(x, y_2))\leq \theta d_y(y_1, y_2),\qquad x\in X, y_1, y_2\in Y.$$ \end{itemize} Then there is a unique fixed point $(x^*, y^*)$ of $H$. Moreover, if $(x_n, y_n)$ is any sequence of iterates, $$(x_{n+1}, y_{n+1})=H(x_n, y_n)\qquad\textrm{for}\quad n\geq 1,$$ then $$\lim_{n\rightarrow\infty}(x_n, y_n)=(x^*, y^*).$$ \end{lem} In \cite{ChowLuSell} and \cite{Sell&You} the authors use this lemma to show the existence of an appropriate fixed point which will give the desired differentiability. In our case, we consider the maps $\mathbf{\Pi}_{\mathbf{0}}^\varepsilon:\tilde{\mathcal{F}}_0(L,R)\times \mathcal{E}_0\rightarrow \mathcal{\tilde F}_0(L,R)\times \mathcal{E}_0$ and $\mathbf{\Pi}_{\bm\varepsilon}:\mathcal{\tilde F}_\varepsilon(L,R)\times \mathcal{E}_\varepsilon\rightarrow \mathcal{\tilde F}_\varepsilon(L,R)\times \mathcal{E}_\varepsilon$ given by $$\mathbf{\Pi}_{\mathbf{0}}^\varepsilon: (\upchi^\varepsilon_0, \Upsilon^\varepsilon_0)\rightarrow (\mathbf{T}^\varepsilon_{\mathbf{0}} \upchi^\varepsilon_0,\mathbf{D}_0^\varepsilon(\upchi^\varepsilon_0, \Upsilon^\varepsilon_0)),$$ and $$\mathbf{\Pi}_{\bm\varepsilon}: (\upchi_\varepsilon, \Upsilon_\varepsilon)\rightarrow (\mathbf{T}_{\bm\varepsilon} \upchi_\varepsilon, \mathbf{D}_\varepsilon(\upchi_\varepsilon, \Upsilon_\varepsilon)),$$ where $$\mathcal{\tilde F}_\varepsilon(L,R)=\Big\{\upchi_\varepsilon:\mathbf{P}_{\mathbf{m}}^{\bm\varepsilon}X^\alpha_\varepsilon\rightarrow\mathbf{Q}_{\mathbf{m}}^{\bm\varepsilon}X^\alpha_\varepsilon \,\,/\,\, \|\upchi_\varepsilon(p)-\upchi_\varepsilon(p')\|_{X^\alpha_\varepsilon}\leq L\|p-p'\|_{X_\varepsilon^\alpha}, \,\,\, p, p'\in \mathbf{P}_{\mathbf{m}}^{\bm\varepsilon}X_\varepsilon^\alpha,$$ $$\hbox{supp}(\upchi_\varepsilon)\subset \{ \phi\in \mathbf{P}_{\mathbf{m}}^{\bm\varepsilon} X_\varepsilon^\alpha,
\|\phi\|_{X_\varepsilon^\alpha}\leq R\}\Big\}, \qquad 0\leq \varepsilon\leq\varepsilon_0$$ and $$\mathcal{E}_\varepsilon=\{\Upsilon_\varepsilon:\mathbf{P}_{\mathbf{m}}^{\bm\varepsilon}X^\alpha_\varepsilon\rightarrow\mathcal{L}(\mathbf{P}_{\mathbf{m}}^{\bm\varepsilon}X^\alpha_\varepsilon, \mathbf{Q}_{\mathbf{m}}^{\bm\varepsilon}X_\varepsilon^\alpha)\hbox{ continuous}:\,\,\,\, \qquad\qquad$$ $$\qquad\qquad\qquad\|\Upsilon_\varepsilon(p)p'\|_{X_\varepsilon^\alpha}\leq \|p'\|_{X_\varepsilon^\alpha},\quad p, p'\in\mathbf{P}_{\mathbf{m}}^{\bm\varepsilon}X^\alpha_\varepsilon \}\qquad 0\leq \varepsilon\leq\varepsilon_0.$$ Notice that the last condition in the definition of $\mathcal{E}_\varepsilon$ can be written equivalently as $\|\Upsilon_\varepsilon(p)\|_{\mathcal{L}}\leq 1$ for all $p\in \mathbf{P}_{\mathbf{m}}^{\bm\varepsilon}X^\alpha_\varepsilon$. The functionals $\mathbf{T}^{\bm\varepsilon}_{\mathbf{0}}$, $\mathbf{T}_{\bm\varepsilon}$ are the ones used in the Lyapunov-Perron method to prove the existence of the inertial manifolds, see \cite{Sell&You}, which are defined as \begin{equation}\label{definition-T0Psi2} (\mathbf{T}^{\bm\varepsilon}_{\mathbf{0}}\upchi^\varepsilon_0)(\xi)=\int_{-\infty}^0e^{A_0\mathbf{Q}^{\mathbf{0}}_{\mathbf{m}} s}\mathbf{Q}^{\mathbf{0}}_{\mathbf{m}} F^\varepsilon_0(u^\varepsilon_0(s))ds, \end{equation} \begin{equation}\label{definition-TepsPsi2} (\mathbf{T}_{\bm \varepsilon}\upchi_\varepsilon)(\eta)=\int_{-\infty}^0e^{A_\varepsilon\mathbf{Q}^{\bm\varepsilon}_{\mathbf{m}} s}\mathbf{Q}^{\bm\varepsilon}_{\mathbf{m}} F_\varepsilon(u_\varepsilon(s))ds, \end{equation} with $u^\varepsilon_0(t)=p^\varepsilon_0(t)+\upchi^\varepsilon_0(p^\varepsilon_0(t))$, $u_\varepsilon(t)=p_\varepsilon(t)+\upchi_\varepsilon(p_\varepsilon(t))$, where $p_0^\varepsilon(\cdot)\in [\varphi_1^0,\ldots,\varphi_m^0]$ is the globally defined solution of \begin{equation}\label{equationp*} \left\{ \begin{array}{l} p_t=-A_0 p+\mathbf{P}_{\mathbf{m}}^{\mathbf{0}}
F^\varepsilon_0(p+\upchi^\varepsilon_0(p(t)))\\ p(0)=\xi\in [\varphi_1^0,\ldots,\varphi_m^0] \end{array} \right. \end{equation} and $p_\varepsilon(\cdot)\in [\varphi_1^\varepsilon,\ldots,\varphi_m^\varepsilon]$ is the globally defined solution of \begin{equation}\label{equationp} \left\{ \begin{array}{l} p_t=-A_\varepsilon p+\mathbf{P}_{\mathbf{m}}^{\bm\varepsilon} F_\varepsilon(p+\upchi_\varepsilon(p(t)))\\ p(0)=\eta\in [\varphi_1^\varepsilon,\ldots,\varphi_m^\varepsilon]. \end{array} \right. \end{equation} The functionals, ${\mathbf{D}^{\bm\varepsilon}_{\mathbf{0}}}(\upchi^\varepsilon_0, \Upsilon^\varepsilon_0)$, $\mathbf{D}_{\bm\varepsilon}(\upchi_\varepsilon, \Upsilon_\varepsilon)$ are given as follows: for any $\xi\in \mathbf{P}_{\mathbf{m}}^{\mathbf{0}}X^\alpha_0$, $\eta\in \mathbf{P}_{\mathbf{m}}^{\bm\varepsilon}X^\alpha_\varepsilon$, \begin{equation}\label{differential-Psi*} {\mathbf{D}^{\bm\varepsilon}_{\mathbf{0}}}(\upchi^\varepsilon_0, \Upsilon^\varepsilon_0)(\xi)=\int_{-\infty}^0 e^{A_0 \mathbf{Q}_{\mathbf{m}}^{\mathbf{0}} s} \mathbf{Q}_{\mathbf{m}}^{\mathbf{0}} DF^\varepsilon_0(u^\varepsilon_0(s))(I+\Upsilon^\varepsilon_0(p^\varepsilon_0(s)))\Theta^\varepsilon_0(\xi,s)ds, \end{equation} and \begin{equation}\label{differential-Psi} \mathbf{D}_{\bm\varepsilon}(\upchi_\varepsilon, \Upsilon_\varepsilon)(\eta)=\int_{-\infty}^0 e^{A_\varepsilon \mathbf{Q}_{\mathbf{m}}^{\bm\varepsilon} s} \mathbf{Q}_{\mathbf{m}}^{\bm\varepsilon} DF_\varepsilon(u_\varepsilon(s))(I+\Upsilon_\varepsilon(p_\varepsilon(s)))\Theta_\varepsilon(\eta,s)ds, \end{equation} with $u^\varepsilon_0$, $p^\varepsilon_0$, $u_\varepsilon$, $p_\varepsilon$ as above and moreover, $\Theta^\varepsilon_0(\xi, t)=\Theta^\varepsilon_0(\upchi^\varepsilon_0, \Upsilon^\varepsilon_0, \xi, t)$, $\Theta_\varepsilon(\eta, t)=\Theta_\varepsilon(\upchi_\varepsilon, \Upsilon_\varepsilon, \eta, t)$ are the linear maps from $\mathbf{P}_{\mathbf{m}}^{\mathbf{0}}X_0^\alpha$ to 
$\mathbf{P}_{\mathbf{m}}^{\mathbf{0}}X_0^\alpha$ and from $\mathbf{P}_{\mathbf{m}}^{\bm\varepsilon}X_\varepsilon^\alpha$ to $\mathbf{P}_{\mathbf{m}}^{\bm\varepsilon}X_\varepsilon^\alpha$ satisfying \begin{equation}\label{equationTheta*-section6} \left\{ \begin{array}{l} {\Theta}_t= -A_0 \Theta+\mathbf{P}_{\mathbf{m}}^{\mathbf{0}} DF^\varepsilon_0(u^\varepsilon_0(t))(I+\Upsilon^\varepsilon_0(p^\varepsilon_0(t)))\Theta\\ \Theta(\xi,0)=I, \end{array} \right. \end{equation} and \begin{equation}\label{equationTheta-section6} \left\{ \begin{array}{l} \Theta_t= -A_\varepsilon \Theta+\mathbf{P}_{\mathbf{m}}^{\bm\varepsilon} DF_\varepsilon(u_\varepsilon(t))(I+\Upsilon_\varepsilon(p_\varepsilon(t)))\Theta\\ \Theta(\eta,0)=I, \end{array} \right. \end{equation} respectively. In fact, in these works it is shown that the fixed points of the maps $\mathbf{\Pi}_{\mathbf{0}}^\varepsilon$ and $\mathbf{\Pi}_\varepsilon$ are given by $({\upchi^\varepsilon_0}^*, {\Upsilon^\varepsilon_0}^*)=(\Psi^{\varepsilon}_0, D\Psi^{\varepsilon}_0)$ and $(\upchi_\varepsilon^*, \Upsilon_\varepsilon^*)=(\Psi_\varepsilon, D\Psi_\varepsilon)$, where $\Psi^\varepsilon_0$ and $\Psi_\varepsilon$ are the maps whose graphs give us the inertial manifolds (see Remark \ref{relation-Phi-Psi} ii)), obtained as the fixed points of the functionals $\mathbf{T}^{\bm\varepsilon}_{\mathbf{0}}$ and $\mathbf{T}_{\bm\varepsilon}$, and $D\Psi_0^\varepsilon$, $D\Psi_\varepsilon$ are the Fr\'echet derivatives of these maps.
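The structure of this fixed-point argument can already be seen in a scalar toy model of Lemma \ref{contracion}: a triangular map $H(x,y)=(F(x),G(x,y))$ that contracts in $x$ and, uniformly in $x$, in $y$. The components below are toy stand-ins, not the functionals $\mathbf{T}_{\bm\varepsilon}$, $\mathbf{D}_{\bm\varepsilon}$ of the paper:

```python
# Minimal numerical sketch of the fiber contraction scheme behind Pi_eps:
# a triangular map H(x, y) = (F(x), G(x, y)) whose components contract
# with factor theta = 1/2 < 1 in x and, uniformly in x, in y. Iterating
# from any starting point converges to the unique fixed point, as in the
# lemma of Chow-Lu-Sell. The concrete F, G are toy stand-ins.
def H(x, y):
    return 0.5 * x + 1.0, 0.5 * y + x

x, y = 10.0, -7.0
for _ in range(200):
    x, y = H(x, y)

# fixed point: x* = 2 solves x = x/2 + 1; then y* = y*/2 + x* gives y* = 4
assert abs(x - 2.0) < 1e-12 and abs(y - 4.0) < 1e-12
```

In the paper the role of $x$ is played by the map $\upchi_\varepsilon$ and the role of $y$ by the candidate derivative $\Upsilon_\varepsilon$, so the fixed point carries the derivative of the manifold along with the manifold itself.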
\par In order to prove the $C^{1,\theta}$ smoothness of the inertial manifolds $\Phi_0^\varepsilon$, $\Phi_\varepsilon$, we will show that if we denote the set $$\mathcal{E}_{\varepsilon}^{ \theta,M}=\{\Upsilon_\varepsilon\in \mathcal{E}_{\varepsilon} : \|\Upsilon_\varepsilon(p)-\Upsilon_\varepsilon(p')\|_{\mathcal{L}} \leq M\|p-p'\|_{X_\varepsilon^\alpha}^\theta,\quad \forall p, p'\in\mathbf{P}_{\mathbf{m}}^{\bm\varepsilon}X_\varepsilon^\alpha\}$$ which is a closed set in $\mathcal{E}_{\varepsilon}$, then there exist appropriate $\theta$ and $M$ such that the maps ${\mathbf{D}^{\bm\varepsilon}_{\mathbf{0}}}(\Psi^\varepsilon_0, \cdot)$ and $\mathbf{D}_{\bm\varepsilon}(\Psi_\varepsilon, \cdot)$ from \eqref{differential-Psi*} and \eqref{differential-Psi}, with $\Psi^\varepsilon_0$, $\Psi_\varepsilon$ the maps giving the inertial manifolds obtained above, transform $\mathcal{E}_{\varepsilon}^{ \theta,M}$ into itself, see Lemma \ref{PsiUniform} below. This will imply that the fixed points of the maps $\mathbf{\Pi}_{\mathbf{0}}^\varepsilon$ and $\mathbf{\Pi}_{\bm\varepsilon}$ lie in $\mathcal{\tilde F}_0(L,R)\times \mathcal{E}_0^{ \theta,M}$ and $\mathcal{\tilde F}_\varepsilon(L,R)\times \mathcal{E}_\varepsilon^{ \theta,M}$, respectively, giving the desired regularity. Throughout this subsection, we provide a proof of Proposition \ref{FixedPoint-E^1Theta} for the inertial manifold $\Phi_\varepsilon$ for each $\varepsilon\geq 0$. Note that the proof of this result for the inertial manifold $\Phi^\varepsilon_0$ consists in repeating, step by step, the same arguments. We therefore focus on the inertial manifold $\Phi_\varepsilon$ with $\varepsilon>0$ fixed. We start with some estimates. \begin{lem}\label{distp-section6} Let $p^1_\varepsilon(t)$ and $p^2_\varepsilon(t)$ be solutions of (\ref{equationp}) with $p^1_\varepsilon(0)$ and $p^2_\varepsilon(0)$ their initial data, respectively.
Then, for $t\leq 0$, $$\|p^1_\varepsilon(t)-p^2_\varepsilon(t)\|_{X_\varepsilon^\alpha}\leq \|p_\varepsilon^1(0)-p_\varepsilon^2(0)\|_{X_\varepsilon^\alpha} e^{-[2L_F(\lambda_m^\varepsilon)^\alpha+\lambda_m^\varepsilon]t}.$$ \end{lem} \begin{proof} By the variation of constants formula, $$p_\varepsilon^1(t)-p_\varepsilon^2(t)=e^{-A_\varepsilon t}[p_\varepsilon^1(0)-p_\varepsilon^2(0)]+$$ $$+\int_0^te^{-A_\varepsilon(t-s)}\mathbf{P}_{\mathbf{m}}^{\bm\varepsilon}[F_\varepsilon(p_\varepsilon^1(s)+\Psi_\varepsilon(p_\varepsilon^1(s)))-F_\varepsilon(p_\varepsilon^2(s)+\Psi_\varepsilon(p_\varepsilon^2(s)))]ds.$$ Hence, applying \eqref{semigrupoproyectadoP} and \eqref{semigrupoproyectadoP-alpha} and taking into account that $\Psi_\varepsilon, F_\varepsilon$ are uniformly Lipschitz with Lipschitz constants $L<1$ and $L_F$, respectively, we get $$\|p_\varepsilon^1(t)-p_\varepsilon^2(t)\|_{X_\varepsilon^\alpha}\leq e^{-\lambda_m^\varepsilon t}\|p_\varepsilon^1(0)-p_\varepsilon^2(0)\|_{X_\varepsilon^\alpha}+2L_F(\lambda_m^\varepsilon)^\alpha\int_t^0e^{-\lambda_m^\varepsilon(t-s)}\|p_\varepsilon^1(s)-p_\varepsilon^2(s)\|_{X_\varepsilon^\alpha}ds. $$ By Gronwall's inequality, $$\|p_\varepsilon^1(t)-p_\varepsilon^2(t)\|_{X_\varepsilon^\alpha}\leq \|p_\varepsilon^1(0)-p_\varepsilon^2(0)\|_{X_\varepsilon^\alpha}e^{-[2L_F(\lambda_m^\varepsilon)^\alpha+\lambda_m^\varepsilon]t},$$ as we wanted to prove. \end{proof} \begin{lem}\label{Jnorm} Let $\Psi_\varepsilon\in\mathcal{\tilde F}_\varepsilon(L, R)$ with $L<1$ and $\Upsilon_\varepsilon\in \mathcal{E}_\varepsilon$, $0<\varepsilon\leq\varepsilon_0$.
Then, for $t\leq 0$, $$\|\Theta_\varepsilon(p_\varepsilon^0, t)\|_{\mathcal{L}} \leq e^{-[2L_F(\lambda_m^\varepsilon)^\alpha+\lambda_m^\varepsilon]t}.$$ \end{lem} \begin{proof} If $z_\varepsilon\in\mathbf{P}_{\mathbf{m}}^{\bm\varepsilon}X_\varepsilon^\alpha$, with the aid of the variation of constants formula applied to (\ref{equationTheta-section6}), we have for $t\leq 0$, $$\|\Theta_\varepsilon(p_\varepsilon^0, t)z_\varepsilon\|_{X_\varepsilon^\alpha}\leq \|e^{-A_\varepsilon\mathbf{P}_{\mathbf{m}}^{\bm\varepsilon}t} z_\varepsilon\|_{X_\varepsilon^\alpha}+$$ $$+\int_t^0\left\|e^{-A_\varepsilon\mathbf{P}_{\mathbf{m}}^{\bm\varepsilon}(t-s)}\mathbf{P}_{\mathbf{m}}^{\bm\varepsilon}DF_\varepsilon(u_\varepsilon(s))(I+\Upsilon_\varepsilon(p_\varepsilon(s)))\Theta_\varepsilon(p_\varepsilon^0, s)z_\varepsilon\right\|_{X_\varepsilon^\alpha}ds.$$ Hence, as before, $$\|\Theta_\varepsilon(p_\varepsilon^0, t)z_\varepsilon\|_{X_\varepsilon^\alpha}\leq e^{-\lambda_m^\varepsilon t}\|z_\varepsilon\|_{X_\varepsilon^\alpha}+2L_F (\lambda_m^\varepsilon)^\alpha\int_t^0e^{-\lambda_m^\varepsilon (t-s)}\|\Theta_\varepsilon(p_\varepsilon^0, s)z_\varepsilon\|_{X_\varepsilon^\alpha}ds.$$ Using Gronwall's inequality, we get $$\|\Theta_\varepsilon(p_\varepsilon^0, t)z_\varepsilon\|_{X_\varepsilon^\alpha}\leq e^{-[2L_F(\lambda_m^\varepsilon)^\alpha+\lambda_m^\varepsilon] t}\|z_\varepsilon\|_{X_\varepsilon^\alpha},$$ from which the result follows. \end{proof} \begin{lem}\label{distThetaEpsilon} Let $0<\theta\leq \theta_F$ and $M>0$ fixed. Let $p_\varepsilon^1, p_\varepsilon^2\in\mathbf{P}_{\mathbf{m}}^{\bm\varepsilon} X_\varepsilon^\alpha$ and consider $\Theta_\varepsilon^1(t)=\Theta_\varepsilon(p_\varepsilon^1, t)$, $\Theta_\varepsilon^2(t)=\Theta_\varepsilon(p_\varepsilon^2, t)$ the solutions of (\ref{equationTheta-section6}) for some $\Upsilon_\varepsilon\in \mathcal{E}_\varepsilon^{\theta,M}$.
Then, for $t\leq 0$, $$\|\Theta_\varepsilon^1(t)-\Theta_\varepsilon^2(t)\|_{\mathcal{L}} \leq \left(\frac{2L}{(\theta+1)L_F}+\frac{M}{2(\theta+1)}\right)\|p_\varepsilon^1-p_\varepsilon^2\|_{X_\varepsilon^\alpha}^\theta\,\, e^{-(2(\theta+2)L_F(\lambda_m^\varepsilon)^\alpha +(\theta+1)\lambda_m^\varepsilon) t}.$$ \end{lem} \begin{proof} Applying the variation of constants formula to (\ref{equationTheta-section6}), for $t\leq 0$, $$\|\Theta_\varepsilon^1(t)-\Theta_\varepsilon^2(t)\|_{\mathcal{L}}\leq \int_t^0\Big\|e^{-A_\varepsilon\mathbf{P}_{\mathbf{m}}^{\bm\varepsilon}(t-s)}\mathbf{P}_{\mathbf{m}}^{\bm\varepsilon}[DF_\varepsilon(u_\varepsilon^1(s))(I+\Upsilon_\varepsilon(p_\varepsilon^1(s)))\Theta^1_\varepsilon(s)$$ $$-DF_\varepsilon(u_\varepsilon^2(s))(I+\Upsilon_\varepsilon(p_\varepsilon^2(s)))\Theta^2_\varepsilon(s)]\Big\|_{\mathcal{L}}ds$$ with $u_\varepsilon^i(s)=p_\varepsilon^i(s)+\Psi_\varepsilon(p_\varepsilon^i(s))$, $i= 1, 2$. We can decompose the above integral in the following way, $$\|\Theta_\varepsilon^1(t)-\Theta_\varepsilon^2(t)\|_{\mathcal{L}}\leq$$ $$\leq\int_t^0\Big\|e^{-A_\varepsilon\mathbf{P}_{\mathbf{m}}^{\bm\varepsilon}(t-s)}\mathbf{P}_{\mathbf{m}}^{\bm\varepsilon} [DF_\varepsilon(u_\varepsilon^1(s))-DF_\varepsilon(u_\varepsilon^2(s))](I+\Upsilon_\varepsilon(p_\varepsilon^1(s)))\Theta_\varepsilon^1(s)\Big\|_{\mathcal{L}}ds+$$ $$+\int_t^0\Big\|e^{-A_\varepsilon\mathbf{P}_{\mathbf{m}}^{\bm\varepsilon}(t-s)}\mathbf{P}_{\mathbf{m}}^{\bm\varepsilon} DF_\varepsilon(u_\varepsilon^2(s))(\Upsilon_\varepsilon(p_\varepsilon^1(s))-\Upsilon_\varepsilon(p_\varepsilon^2(s)))\Theta_\varepsilon^1(s)\Big\|_{\mathcal{L}}ds+$$ $$+\int_t^0\Big\|e^{-A_\varepsilon\mathbf{P}_{\mathbf{m}}^{\bm\varepsilon}(t-s)}\mathbf{P}_{\mathbf{m}}^{\bm\varepsilon} DF_\varepsilon(u_\varepsilon^2(s))(I+\Upsilon_\varepsilon(p_\varepsilon^2(s)))(\Theta_\varepsilon^1(s)-\Theta_\varepsilon^2(s))\Big\|_{\mathcal{L}}ds=$$ $$=I_1+I_2+I_3.$$ We analyze each term separately.
By hypothesis {\bf(H2')}, \eqref{semigrupoproyectadoP-alpha} and Lemma \ref{Jnorm}, $$I_1\leq 2 L(\lambda_m^\varepsilon)^\alpha e^{-\lambda_m^\varepsilon t}\int_t^0 \|u^1_\varepsilon(s)-u^2_\varepsilon(s)\|_{X_\varepsilon^\alpha}^\theta e^{-2L_F(\lambda_m^\varepsilon)^\alpha s}ds\leq $$ $$\leq 4L(\lambda_m^\varepsilon)^\alpha e^{-\lambda_m^\varepsilon t}\int_t^0 \|p_\varepsilon^1(s)-p_\varepsilon^2(s)\|_{X_\varepsilon^\alpha}^\theta e^{-2L_F(\lambda_m^\varepsilon)^\alpha s}ds.$$ Applying Lemma \ref{distp-section6}, $$I_1\leq \frac{2L}{(\theta+1)L_F}\|p_\varepsilon^1-p_\varepsilon^2\|_{X_\varepsilon^\alpha}^\theta e^{-[2(\theta+1)L_F(\lambda_m^\varepsilon)^\alpha+(\theta+1)\lambda_m^\varepsilon]t}.$$ Since $\Upsilon_\varepsilon\in \mathcal{E}_\varepsilon^{\theta,M}$, $0<\theta\leq\theta_F$, and by Lemma \ref{Jnorm}, we have $$I_2\leq L_F(\lambda_m^\varepsilon)^\alpha Me^{-\lambda_m^\varepsilon t}\int_t^0 \|p^1_\varepsilon(s)-p^2_\varepsilon(s)\|_{X_\varepsilon^\alpha}^\theta e^{-2L_F(\lambda_m^\varepsilon)^\alpha s} ds.$$ Applying Lemma \ref{distp-section6}, $$I_2\leq\frac{M}{2(\theta+1)}\|p_\varepsilon^1-p_\varepsilon^2\|_{X_\varepsilon^\alpha}^\theta e^{-[2(\theta+1)L_F(\lambda_m^\varepsilon)^\alpha + (\theta+1)\lambda_m^\varepsilon]t}.$$ The last term is estimated as follows: $$I_3\leq 2L_F(\lambda_m^\varepsilon)^\alpha\int_t^0e^{-\lambda_m^\varepsilon(t- s)}\|\Theta_\varepsilon^1(s)-\Theta_\varepsilon^2(s)\|_{\mathcal{L}}ds.$$ So, $$\|\Theta_\varepsilon^1(t)-\Theta_\varepsilon^2(t)\|_{\mathcal{L}}\leq $$ $$ \left(\frac{2L}{(\theta+1)L_F}+\frac{M}{2(\theta+1)}\right)\|p_\varepsilon^1-p_\varepsilon^2\|_{X_\varepsilon^\alpha}^\theta e^{-[2(\theta+1)L_F(\lambda_m^\varepsilon)^\alpha+(\theta+1)\lambda_m^\varepsilon]t}$$ $$+2L_F(\lambda_m^\varepsilon)^\alpha\int_t^0e^{-\lambda_m^\varepsilon (t-s)}\|\Theta_\varepsilon^1(s)-\Theta_\varepsilon^2(s)\|_{\mathcal{L}}ds.$$ Applying Gronwall's inequality, $$\|\Theta_\varepsilon^1(t)-\Theta_\varepsilon^2(t)\|_{\mathcal{L}}\leq
$$ $$\leq \left(\frac{2L}{(\theta+1)L_F}+\frac{M}{2(\theta+1)}\right)\|p_\varepsilon^1-p_\varepsilon^2\|_{X_\varepsilon^\alpha}^\theta e^{-[2(\theta+2)L_F(\lambda_m^\varepsilon)^\alpha+(\theta+1)\lambda_m^\varepsilon]t},$$ which shows the result. \end{proof} Several lengthy exponents appear repeatedly in what follows. For brevity, we abbreviate them as: \begin{equation}\label{def-exponents} \begin{array}{l} \Lambda_0=2L_F(\lambda_m^\varepsilon)^\alpha +\lambda_m^\varepsilon \\ \Lambda_1=\lambda_{m+1}^\varepsilon -(\theta+1)\lambda_m^\varepsilon-2(\theta+1) L_F(\lambda_m^\varepsilon)^\alpha \\ \Lambda_2=\lambda_{m+1}^\varepsilon -(\theta+1)\lambda_m^\varepsilon-2(\theta+2) L_F(\lambda_m^\varepsilon)^\alpha \end{array} \end{equation} We can now prove the following lemma. \begin{lem}\label{PsiUniform} If we choose $\theta$ such that $0<\theta\leq\theta_F$ and $\theta< \theta_0$, with $\theta_0$ given by \eqref{theta}, then there exists $M_0=M_0(\theta)>0$ such that, for each $M\geq M_0$ and for $\varepsilon$ small enough, the map $\mathbf{D}_{\bm\varepsilon}(\Psi_\varepsilon, \cdot)$ maps $\mathcal{E}_\varepsilon^{\theta,M}$ into $\mathcal{E}_\varepsilon^{\theta,M}$. \end{lem} \begin{proof} Let $\Upsilon_\varepsilon\in \mathcal{E}_\varepsilon^{\theta,M}$ and $p_\varepsilon^1, p_\varepsilon^2\in\mathbf{P}_{\mathbf{m}}^{\bm\varepsilon}X_\varepsilon^\alpha$. In \cite{Sell&You} the authors prove that $\mathbf{D}_{\bm\varepsilon}(\Psi_\varepsilon, \cdot)$ maps $\mathcal{E}_\varepsilon$ into $\mathcal{E}_\varepsilon$. So it remains to prove that $$\|\mathbf{D}_{\bm\varepsilon}(\Psi_\varepsilon, \Upsilon_\varepsilon)(p_\varepsilon^1)-\mathbf{D}_{\bm\varepsilon}(\Psi_\varepsilon, \Upsilon_\varepsilon)(p_\varepsilon^2)\|_{\mathcal{L}}\leq M\|p_\varepsilon^1-p_\varepsilon^2\|_{X_\varepsilon^\alpha}^\theta,$$ with $M$ and $\theta$ as in the statement.
From expression (\ref{differential-Psi}), we have $$\|\mathbf{D}_{\bm\varepsilon}(\Psi_\varepsilon, \Upsilon_\varepsilon)(p_\varepsilon^1)-\mathbf{D}_{\bm\varepsilon}(\Psi_\varepsilon, \Upsilon_\varepsilon)(p_\varepsilon^2)\|_{\mathcal{L}}\leq$$ $$\int_{-\infty}^0\Big\|e^{A_\varepsilon\mathbf{Q}_{\mathbf{m}}^{\bm\varepsilon}s}\mathbf{Q}_{\mathbf{m}}^{\bm\varepsilon}[DF_\varepsilon(u_\varepsilon^1(s))(I+\Upsilon_\varepsilon(p_\varepsilon^1(s)))\Theta^1_\varepsilon(s)-DF_\varepsilon(u_\varepsilon^2(s))(I+\Upsilon_\varepsilon(p_\varepsilon^2(s)))\Theta^2_\varepsilon(s)]\Big\|_{\mathcal{L}}ds,$$ with $p_\varepsilon^i(s)$ the solution of (\ref{equationp}) with $p_\varepsilon^i(0)=p_\varepsilon^i$ and $u_\varepsilon^i(s)=p_\varepsilon^i(s)+\Psi_\varepsilon(p_\varepsilon^i(s))$, for $i=1, 2$. As in the proof of Lemma \ref{distThetaEpsilon}, we decompose this as follows: $$\|\mathbf{D}_{\bm\varepsilon}(\Psi_\varepsilon, \Upsilon_\varepsilon)(p_\varepsilon^1)-\mathbf{D}_{\bm\varepsilon}(\Psi_\varepsilon, \Upsilon_\varepsilon)(p_\varepsilon^2)\|_{\mathcal{L}}\leq$$ $$\leq\int_{-\infty}^0\Big\|e^{A_\varepsilon\mathbf{Q}_{\mathbf{m}}^{\bm\varepsilon}s}\mathbf{Q}_{\mathbf{m}}^{\bm\varepsilon} [DF_\varepsilon(u_\varepsilon^1(s))-DF_\varepsilon(u_\varepsilon^2(s))](I+\Upsilon_\varepsilon(p_\varepsilon^1(s)))\Theta_\varepsilon^1(s)\Big\|_{\mathcal{L}}ds+$$ $$+\int_{-\infty}^0\Big\|e^{A_\varepsilon\mathbf{Q}_{\mathbf{m}}^{\bm\varepsilon}s}\mathbf{Q}_{\mathbf{m}}^{\bm\varepsilon} DF_\varepsilon(u_\varepsilon^2(s))[\Upsilon_\varepsilon(p_\varepsilon^1(s))-\Upsilon_\varepsilon(p_\varepsilon^2(s))]\Theta_\varepsilon^1(s)\Big\|_{\mathcal{L}}ds+$$ $$+\int_{-\infty}^0\Big\|e^{A_\varepsilon\mathbf{Q}_{\mathbf{m}}^{\bm\varepsilon}s}\mathbf{Q}_{\mathbf{m}}^{\bm\varepsilon} DF_\varepsilon(u_\varepsilon^2(s))(I+\Upsilon_\varepsilon(p_\varepsilon^2(s)))[\Theta_\varepsilon^1(s)-\Theta_\varepsilon^2(s)]\Big\|_{\mathcal{L}}ds=$$ $$=I_1+I_2+I_3.$$ Following the same arguments used in
that proof, and since $\Upsilon_\varepsilon\in \mathcal{E}_\varepsilon^{\theta,M}$, we get $$I_1\leq 4L(\lambda_{m+1}^\varepsilon)^\alpha\|p_\varepsilon^1-p_\varepsilon^2\|_{X_\varepsilon^\alpha}^\theta\int_{-\infty}^0 e^{\Lambda_1 s}ds\leq\frac{4L(\lambda_{m+1}^\varepsilon)^\alpha}{\Lambda_1}\|p_\varepsilon^1-p_\varepsilon^2\|_{X_\varepsilon^\alpha}^\theta.$$ Similarly, for $I_2$, $$I_2\leq L_F(\lambda_{m+1}^\varepsilon)^\alpha M\|p_\varepsilon^1-p_\varepsilon^2\|_{X_\varepsilon^\alpha}^\theta\int_{-\infty}^0 e^{\Lambda_1 s}ds\leq \frac{L_F(\lambda_{m+1}^\varepsilon)^\alpha M}{\Lambda_1}\|p_\varepsilon^1-p_\varepsilon^2\|_{X_\varepsilon^\alpha}^\theta.$$ And finally, applying Lemma \ref{distThetaEpsilon}, $$I_3\leq 2L_F(\lambda_{m+1}^\varepsilon)^\alpha \left(\frac{2L}{(\theta+1)L_F}+\frac{M}{2(\theta+1)}\right)\|p_\varepsilon^1-p_\varepsilon^2\|_{X_\varepsilon^\alpha}^\theta\int_{-\infty}^0 e^{\Lambda_2 s}ds,$$ which implies $$I_3\leq\frac{2L_F(\lambda_{m+1}^\varepsilon)^\alpha}{\Lambda_2} \left(\frac{2L}{(\theta+1)L_F}+\frac{M}{2(\theta+1)}\right)\|p_\varepsilon^1-p_\varepsilon^2\|_{X_\varepsilon^\alpha}^\theta.$$ Putting everything together we obtain $$\|\mathbf{D}_{\bm\varepsilon}(\Psi_\varepsilon, \Upsilon_\varepsilon)(p_\varepsilon^1)-\mathbf{D}_{\bm\varepsilon}(\Psi_\varepsilon, \Upsilon_\varepsilon)(p_\varepsilon^2)\|_{\mathcal{L}}\leq$$ $$(4L+ML_F)(\lambda_{m+1}^\varepsilon)^\alpha \Big(\frac{1}{\Lambda_1}+\frac{1}{(\theta+1)\Lambda_2}\Big)\|p_\varepsilon^1-p_\varepsilon^2\|_{X_\varepsilon^\alpha}^\theta.$$ Since $\Lambda_2\leq \Lambda_1$, see \eqref{def-exponents}, and $\theta>0$, we have $$\|\mathbf{D}_{\bm\varepsilon}(\Psi_\varepsilon, \Upsilon_\varepsilon)(p_\varepsilon^1)-\mathbf{D}_{\bm\varepsilon}(\Psi_\varepsilon, \Upsilon_\varepsilon)(p_\varepsilon^2)\|_{\mathcal{L}}\leq (4L+ML_F)(\lambda_{m+1}^\varepsilon)^\alpha\frac{2}{\Lambda_2}\|p_\varepsilon^1-p_\varepsilon^2\|_{X_\varepsilon^\alpha}^\theta$$ $$=\Big( \frac{8L
(\lambda_{m+1}^\varepsilon)^{\alpha}}{\Lambda_2}+M\frac{2L_F(\lambda_{m+1}^\varepsilon)^\alpha}{\Lambda_2}\Big)\|p_\varepsilon^1-p_\varepsilon^2\|_{X_\varepsilon^\alpha}^\theta.$$ If we consider $$\theta_0=\frac{\lambda_{m+1}^0-\lambda_m^0-4L_F(\lambda_m^0)^\alpha-2L_F(\lambda_{m+1}^0)^\alpha}{2L_F(\lambda_m^0)^\alpha+\lambda_m^0},$$ then direct computations show that if $\theta<\theta_0$ and $\varepsilon$ is small, then $\frac{2L_F(\lambda_{m+1}^\varepsilon)^\alpha}{\Lambda_2}\leq \eta$ for some $\eta<1$. This implies that if we choose $M$ large enough then $$\Big( \frac{8L (\lambda_{m+1}^\varepsilon)^{\alpha}}{\Lambda_2}+M\frac{2L_F(\lambda_{m+1}^\varepsilon)^\alpha}{\Lambda_2}\Big)\leq M,$$ which shows the result. \end{proof} We can now prove the main result of this subsection. \begin{proof} {\sl (of Proposition \ref{FixedPoint-E^1Theta})} Again, we only carry out the proof for $\Phi_\varepsilon$, the proof for $\Phi_0^\varepsilon$ being completely similar. Since $\Phi_\varepsilon=\Psi_\varepsilon\circ j_\varepsilon^{-1}$ and $j_\varepsilon$ is an isomorphism, see Remark \ref{relation-Phi-Psi} and (\ref{definition-jeps}), proving $\Phi_\varepsilon\in C^{1,\theta}(\mathbb{R}^m, X_\varepsilon^\alpha)$ for some $\theta$ is equivalent to proving $\Psi_\varepsilon\in C^{1,\theta}(\mathbf{P}_{\mathbf{m}}^{\bm\varepsilon} X_\varepsilon^\alpha, X_\varepsilon^\alpha)$. In \cite{Sell&You}, the authors prove the existence of the unique fixed point $(\Psi_\varepsilon^*, \Upsilon_\varepsilon^*)=(\Psi_\varepsilon, D\Psi_\varepsilon)\in\mathcal{\tilde F}_\varepsilon(L, R)\times \mathcal{E}_\varepsilon$ of the map $$\mathbf{\Pi}_{\bm\varepsilon}: (\Psi_\varepsilon, \Upsilon_\varepsilon)\rightarrow (\mathbf{T}_{\bm\varepsilon}\Psi_\varepsilon, \mathbf{D}_{\bm\varepsilon}(\Psi_\varepsilon, \Upsilon_\varepsilon)).$$ We want to prove that, in fact, this fixed point belongs to $\mathcal{\tilde F}_\varepsilon(L, R)\times \mathcal{E}_\varepsilon^{\theta,M}$. We proceed as follows.
Let $\{z_n\}_{n\geq 0}$ be the sequence given by $$z_0= (\Psi_\varepsilon, 0),\qquad z_1=\mathbf{\Pi}_{\bm\varepsilon} z_0=(\mathbf{T}_{\bm\varepsilon}\Psi_\varepsilon, \mathbf{D}_{\bm\varepsilon}(\Psi_\varepsilon, 0)),\quad ... \quad z_n=\mathbf{\Pi}_{\bm\varepsilon}^nz_0.$$ Note that the first coordinate of $z_n$ is $\mathbf{T}_{\bm\varepsilon}^n\Psi_\varepsilon$, which coincides with $\Psi_\varepsilon$ for all $n=1,2,\ldots$ since $\Psi_\varepsilon$ is a fixed point of $\mathbf{T}_{\bm\varepsilon}$. Hence, by Lemma \ref{PsiUniform}, $z_n\in \mathcal{\tilde F}_\varepsilon(L, R)\times \mathcal{E}_\varepsilon^{\theta,M}$ for all $n\geq 0$, with $\theta$ and $M$ as described in that lemma. By Lemma \ref{contracion}, $$\lim_{n\rightarrow\infty} z_n= (\Psi_\varepsilon, D\Psi_\varepsilon).$$ Hence, since $\mathcal{E}_\varepsilon^{\theta,M}$ is a closed subspace of $\mathcal{E}_\varepsilon$ and $z_n\in \mathcal{E}_\varepsilon^{\theta,M}$ for all $n=1,2,\ldots$, we conclude that $$ (\Psi_\varepsilon, D\Psi_\varepsilon)\in \mathcal{\tilde F}_\varepsilon(L, R)\times \mathcal{E}_\varepsilon^{\theta,M}.$$ That is, $\Psi_\varepsilon\in C^{1,\theta}(\mathbf{P}_{\mathbf{m}}^{\bm\varepsilon} X_\varepsilon^\alpha, X_\varepsilon^\alpha)$, for $0<\varepsilon\leq\varepsilon_0$, with $0<\theta\leq\theta_F$ and $\theta<\theta_0$, see (\ref{theta}). Then, $\Phi_\varepsilon\in C^{1,\theta}(\mathbb{R}^m, X_\varepsilon^\alpha)$, as we wanted to prove. \end{proof} \par \section{$C^{1, \theta}$-estimates on the inertial manifolds} \label{convergence} In this section we study the $C^{1, \theta}$-convergence, with $0<\theta\leq 1$ small enough, of the inertial manifolds $\Phi^\varepsilon_0$, $\Phi_\varepsilon$, $0<\varepsilon\leq\varepsilon_0$.
To this end, we first obtain the $C^1$-convergence of these manifolds; then, via an interpolation argument and the results obtained in the previous subsection, we get the $C^{1, \theta}$-convergence together with a rate for this convergence. Before proving the main result of this subsection, Theorem \ref{convergence-C^1-theo}, we need the following estimate. \begin{lem}\label{Jdistance} Let $\Theta^\varepsilon_0(j_0^{-1}(z),t)=\Theta^\varepsilon_0(\Psi_0^\varepsilon, D\Psi_0^\varepsilon, j_0^{-1}(z),t)$ and $\Theta_\varepsilon(j_\varepsilon^{-1}(z),t)=\Theta_\varepsilon(\Psi_\varepsilon, D\Psi_\varepsilon, j_\varepsilon^{-1}(z),t)$ be solutions of (\ref{equationTheta*-section6}) and (\ref{equationTheta-section6}), for $z\in\mathbb{R}^m$ and $t\leq 0$. Then we have $$\|\mathbf{P}_{\mathbf{m}}^{\bm\varepsilon}E\Theta^\varepsilon_0(j_0^{-1}(z), t)-\Theta_\varepsilon(j_\varepsilon^{-1}(z),t)\mathbf{P}_{\mathbf{m}}^{\bm\varepsilon}E\|_{\mathcal{L}(\mathbf{P}_{\mathbf{m}}^{\mathbf{0}}X_0^\alpha, \mathbf{P}_{\mathbf{m}}^{\bm\varepsilon}X_\varepsilon^\alpha)}\leq $$ $$C[\beta(\varepsilon)+[\tau(\varepsilon)|\log(\tau(\varepsilon))|+\rho(\varepsilon)]^\theta]e^{-[(4+(\kappa+2)\theta )L_F(\lambda_m^\varepsilon)^\alpha+(\theta+1)\lambda_m^\varepsilon+3\theta] t}\,\,+$$ $$+\frac{\|ED\Psi_0^\varepsilon-D\Psi_\varepsilon\mathbf{P}_{\mathbf{m}}^{\bm\varepsilon}E\|_\infty}{2}e^{-[4L_F(\lambda_m^\varepsilon)^\alpha+\lambda_m^\varepsilon] t},$$ where $C$ is a constant independent of $\varepsilon$, $0<\theta\leq\theta_F$ and $\theta<\theta_0$, and $\kappa$ is given by \eqref{cotaextensionproyeccion}.
\end{lem} \begin{re} We denote by $\|ED\Psi_0^\varepsilon-D\Psi_\varepsilon\mathbf{P}_{\mathbf{m}}^{\bm\varepsilon} E\|_\infty$ the sup norm, that is, \begin{equation}\label{supnorm} \|ED\Psi_0^\varepsilon-D\Psi_\varepsilon \mathbf{P}_{\mathbf{m}}^{\bm\varepsilon}E\|_\infty=\sup_{p\in\mathbf{P}_{\mathbf{m}}^{\mathbf{0}}X_0^\alpha}\|ED\Psi_0^\varepsilon(p)-D\Psi_\varepsilon(\mathbf{P}_{\mathbf{m}}^{\bm\varepsilon}Ep)\mathbf{P}_{\mathbf{m}}^{\bm\varepsilon}E\|_{\mathcal{L}(\mathbf{P}_{\mathbf{m}}^{\mathbf{0}}X_0^\alpha, X_\varepsilon^\alpha)}. \end{equation} \end{re} \begin{proof} With the variation of constants formula applied to \eqref{equationTheta*-section6} and \eqref{equationTheta-section6}, and denoting $\Theta^\varepsilon_0(t)=\Theta^\varepsilon_0(j_0^{-1}(z),t)$ and $\Theta_\varepsilon(t)=\Theta_\varepsilon(j_\varepsilon^{-1}(z),t)$, we get $$E\Theta^\varepsilon_0(t)-\Theta_\varepsilon (t) \mathbf{P}_{\mathbf{m}}^{\bm\varepsilon}E =Ee^{-A_0\mathbf{P}_{\mathbf{m}}^{\mathbf{0}} t}-e^{-A_\varepsilon\mathbf{P}_{\mathbf{m}}^{\bm\varepsilon} t} \mathbf{P}_{\mathbf{m}}^{\bm\varepsilon}E+$$ $$+\int_t^0\left(Ee^{-A_0\mathbf{P}_{\mathbf{m}}^{\mathbf{0}} (t-s)}\mathbf{P}_{\mathbf{m}}^{\mathbf{0}}DF^\varepsilon_0(u^\varepsilon_0(s))(I+D\Psi_0^\varepsilon(p^\varepsilon_0(s)))\Theta^\varepsilon_0(s)-\right. \qquad\qquad\qquad\qquad$$ $$\qquad\qquad\qquad \left. e^{-A_\varepsilon\mathbf{P}_{\mathbf{m}}^{\bm\varepsilon}(t-s)}\mathbf{P}_{\mathbf{m}}^{\bm\varepsilon}DF_\varepsilon(u_\varepsilon(s))(I+D\Psi_\varepsilon(p_\varepsilon(s)))\Theta_\varepsilon(s)\mathbf{P}_{\mathbf{m}}^{\bm\varepsilon} E \right) ds=: I'+\int_t^0 I\,ds.$$ We now estimate $I'$ and $I$.
Notice first that $\|I'\|_{\mathcal{L}(\mathbf{P}_{\mathbf{m}}^{\mathbf{0}}X_0^\alpha, \mathbf{P}_{\mathbf{m}}^{\bm\varepsilon}X_\varepsilon^\alpha )}$ can be estimated with Lemma 5.1 from \cite{Arrieta-Santamaria-DCDS}, obtaining $$\|I'\|_{\mathcal{L}(\mathbf{P}_{\mathbf{m}}^{\mathbf{0}}X_0^\alpha, \mathbf{P}_{\mathbf{m}}^{\bm\varepsilon}X_\varepsilon^\alpha )}\leq C_4e^{-(\lambda_m^0+1)t}\tau(\varepsilon).$$ \par Moreover, for $I$ we have the following decomposition: $$I=Ee^{-A_0\mathbf{P}_{\mathbf{m}}^{\mathbf{0}} (t-s)}\mathbf{P}_{\mathbf{m}}^{\mathbf{0}}DF^\varepsilon_0(u^\varepsilon_0(s))(I+D\Psi_0^\varepsilon(p^\varepsilon_0(s)))\Theta^\varepsilon_0(s)-$$ $$e^{-A_\varepsilon\mathbf{P}_{\mathbf{m}}^{\bm\varepsilon}(t-s)}\mathbf{P}_{\mathbf{m}}^{\bm\varepsilon}DF_\varepsilon(u_\varepsilon(s))(I+D\Psi_\varepsilon(p_\varepsilon(s)))\Theta_\varepsilon(s)\mathbf{P}_{\mathbf{m}}^{\bm\varepsilon} E=$$ $$=\left( Ee^{-A_0\mathbf{P}_{\mathbf{m}}^{\mathbf{0}} (t-s)}\mathbf{P}_{\mathbf{m}}^{\mathbf{0}}- e^{-A_\varepsilon\mathbf{P}_{\mathbf{m}}^{\bm\varepsilon}(t-s)}\mathbf{P}_{\mathbf{m}}^{\bm\varepsilon}E\right) DF^\varepsilon_0(u^\varepsilon_0(s))(I+D\Psi_0^\varepsilon(p^\varepsilon_0(s)))\Theta^\varepsilon_0(s)$$ $$+e^{-A_\varepsilon\mathbf{P}_{\mathbf{m}}^{\bm\varepsilon}(t-s)}\mathbf{P}_{\mathbf{m}}^{\bm\varepsilon}\Big( EDF^\varepsilon_0(u^\varepsilon_0(s))- DF_\varepsilon(Eu^\varepsilon_0(s))E\Big)(I+D\Psi_0^\varepsilon(p^\varepsilon_0(s)))\Theta^\varepsilon_0(s)$$ $$+e^{-A_\varepsilon\mathbf{P}_{\mathbf{m}}^{\bm\varepsilon}(t-s)}\mathbf{P}_{\mathbf{m}}^{\bm\varepsilon}\Big( DF_\varepsilon(Eu^\varepsilon_0(s))- DF_\varepsilon(u_\varepsilon (s))\Big)E(I+D\Psi_0^\varepsilon(p^\varepsilon_0(s)))\Theta^\varepsilon_0(s)$$ $$+e^{-A_\varepsilon\mathbf{P}_{\mathbf{m}}^{\bm\varepsilon}(t-s)}\mathbf{P}_{\mathbf{m}}^{\bm\varepsilon} DF_\varepsilon(u_\varepsilon(s)) \Big( E(I+ D\Psi_0^\varepsilon(p^\varepsilon_0(s)))-(I+ D\Psi_\varepsilon( \mathbf{P}_{\mathbf{m}}^{\bm\varepsilon}
E p^\varepsilon_0(s)))E\Big)\Theta^\varepsilon_0(s)$$ $$+e^{-A_\varepsilon\mathbf{P}_{\mathbf{m}}^{\bm\varepsilon}(t-s)}\mathbf{P}_{\mathbf{m}}^{\bm\varepsilon} DF_\varepsilon(u_\varepsilon(s))\Big( (I+D\Psi_\varepsilon( \mathbf{P}_{\mathbf{m}}^{\bm\varepsilon} E p^\varepsilon_0(s)))- (I+D\Psi_\varepsilon(p_\varepsilon(s)))\Big)E\Theta^\varepsilon_0(s)$$ $$+e^{-A_\varepsilon\mathbf{P}_{\mathbf{m}}^{\bm\varepsilon}(t-s)}\mathbf{P}_{\mathbf{m}}^{\bm\varepsilon}DF_\varepsilon(u_\varepsilon(s))(I+D\Psi_\varepsilon(p_\varepsilon(s))) \Big( E\Theta^\varepsilon_0(s)- \Theta_\varepsilon(s)\mathbf{P}_{\mathbf{m}}^{\bm\varepsilon} E\Big)$$ $$=I_1+I_2+I_3+I_4+I_5+I_6.$$ We can now study the norm $\|I\|_{\mathcal{L}(\mathbf{P}_{\mathbf{m}}^{\mathbf{0}}X_0^\alpha, \mathbf{P}_{\mathbf{m}}^{\bm\varepsilon}X_\varepsilon^\alpha)}$ by analyzing each term separately. \par By Lemma \ref{Jnorm}, Lemma 5.1 from \cite{Arrieta-Santamaria-DCDS} and \eqref{semigrupoproyectadoP-alpha}, we have $$\|I_1\|_{\mathcal{L}(\mathbf{P}_{\mathbf{m}}^{\mathbf{0}}X_0^\alpha, \mathbf{P}_{\mathbf{m}}^{\bm\varepsilon}X_\varepsilon^\alpha)}\leq 2L_FC_4\tau(\varepsilon)e^{-(\lambda_m^0+1)t}e^{(-2L_F(\lambda_m^0)^\alpha+1) s}.$$ With the definition of $\beta(\varepsilon)$ from \eqref{convergenceDF}, and again Lemma \ref{Jnorm} and \eqref{semigrupoproyectadoP-alpha}, $$\|I_2\|_{\mathcal{L}(\mathbf{P}_{\mathbf{m}}^{\mathbf{0}}X_0^\alpha, \mathbf{P}_{\mathbf{m}}^{\bm\varepsilon}X_\varepsilon^\alpha)}\leq 2(\lambda_m^\varepsilon)^\alpha\beta(\varepsilon)e^{-\lambda_m^\varepsilon t}e^{-2L_F(\lambda_m^\varepsilon)^\alpha s}.$$ To study the term $I_3$, again from \eqref{convergenceDF}, \eqref{semigrupoproyectadoP-alpha}, Lemma \ref{Jnorm} and the properties of the norm of the extension operator, see \eqref{cotaextensionproyeccion}, for $0<\theta\leq \theta_F$, $$\|I_3\|_{\mathcal{L}(\mathbf{P}_{\mathbf{m}}^{\mathbf{0}}X_0^\alpha, \mathbf{P}_{\mathbf{m}}^{\bm\varepsilon}X_\varepsilon^\alpha)}\leq
2\kappa(\lambda_m^\varepsilon)^\alpha L\|Eu^\varepsilon_0(s)-u_\varepsilon(s)\|_{X_\varepsilon^\alpha}^\theta e^{-\lambda_m^\varepsilon t}e^{-2L_F(\lambda_m^\varepsilon)^\alpha s}.$$ Recall that $$u^\varepsilon_0(s)=p^\varepsilon_0(s)+\Psi^\varepsilon_0(p^\varepsilon_0(s))=p^\varepsilon_0(s)+\Phi^\varepsilon_0(j_0(p^\varepsilon_0(s))),$$ and, for $0<\varepsilon\leq\varepsilon_0$, $$u_\varepsilon(s)=p_\varepsilon(s)+\Psi_\varepsilon(p_\varepsilon(s))=p_\varepsilon(s)+\Phi_\varepsilon(j_\varepsilon(p_\varepsilon(s))).$$ Then, $$\|Eu^\varepsilon_0(s)-u_\varepsilon(s)\|_{X_\varepsilon^\alpha}\leq$$ $$\|p_\varepsilon(s)-Ep^\varepsilon_0(s)\|_{X_\varepsilon^\alpha}+ \|\Phi_\varepsilon(j_\varepsilon(p_\varepsilon(s)))-\Phi_\varepsilon(j_0(p^\varepsilon_0(s)))\|_{X_\varepsilon^\alpha}+\|\Phi_\varepsilon(j_0(p^\varepsilon_0(s)))-\Phi_0^\varepsilon(j_0(p^\varepsilon_0(s)))\|_{X_\varepsilon^\alpha}\leq$$ $$\|p_\varepsilon(s)-Ep^\varepsilon_0(s)\|_{X_\varepsilon^\alpha}+ |j_\varepsilon(p_\varepsilon(s))-j_0(p^\varepsilon_0(s))|_{0,\alpha}+\|\Phi_\varepsilon-E\Phi_0^\varepsilon\|_{L^\infty(\mathbb{R}^m, X_\varepsilon^\alpha)}.$$ Applying now Lemma 5.4 from \cite{Arrieta-Santamaria-DCDS}, we get $$|j_\varepsilon(p_\varepsilon(s))-j_0(p^\varepsilon_0(s))|_{0,\alpha}\leq (\kappa +1)\|p_\varepsilon(s)-Ep^\varepsilon_0(s)\|_{X_\varepsilon^\alpha}+ (\kappa +1)C_P\tau(\varepsilon)\|p_0^\varepsilon\|_{X_0}.$$ Applying also Theorem \ref{distaciavariedadesinerciales}, we get $$\|\Phi_\varepsilon-E\Phi_0^\varepsilon\|_{L^\infty(\mathbb{R}^m, X_\varepsilon^\alpha)}\leq C[\tau(\varepsilon)|\log(\tau(\varepsilon))|+\rho(\varepsilon)].$$ Hence, $$\|Eu^\varepsilon_0(s)-u_\varepsilon(s)\|_{X_\varepsilon^\alpha}\leq$$ $$ (\kappa+2)\|p_\varepsilon(s)-Ep^\varepsilon_0(s)\|_{X_\varepsilon^\alpha}+ (\kappa +1)C_P\tau(\varepsilon)\|p_0^\varepsilon\|_{X_0}+C[\tau(\varepsilon)|\log(\tau(\varepsilon))|+\rho(\varepsilon)].$$ To estimate now $\|p_\varepsilon(s)-Ep^\varepsilon_0(s)\|_{X_\varepsilon^\alpha}$
we follow Lemma 5.6 from \cite{Arrieta-Santamaria-DCDS}, and to estimate $\|p_0^\varepsilon\|_{X_0}$ we also use Lemma 5.5 from \cite{Arrieta-Santamaria-DCDS}. Putting all these estimates together, we get $$\|Eu^\varepsilon_0(s)-u_\varepsilon(s)\|_{X_\varepsilon^\alpha}\leq$$ $$\leq (\kappa+2)\left( \frac{L_F}{(\lambda_m^\varepsilon)^{1-\alpha}}\tau(\varepsilon)|\log(\tau(\varepsilon))|+\rho(\varepsilon)+K_2e^{-2s}\tau(\varepsilon)\right) e^{-[(\kappa+2)L_F(\lambda_m^\varepsilon)^\alpha+\lambda_m^\varepsilon]s}+$$ $$+ (\kappa+1)C_P\tau(\varepsilon)(R+C_F)e^{-\lambda_m^\varepsilon s\theta}+C[\tau(\varepsilon)|\log(\tau(\varepsilon))|+\rho(\varepsilon)]\leq $$ $$\leq C [\tau(\varepsilon)|\log(\tau(\varepsilon))|+\rho(\varepsilon)] e^{-[(\kappa+2)L_F(\lambda_m^\varepsilon)^\alpha+\lambda_m^\varepsilon+3]s},$$ with $C>0$ independent of $\varepsilon$. Observe that, since $s\leq 0$, we have $e^{-[(\kappa+2)L_F(\lambda_m^\varepsilon)^\alpha+\lambda_m^\varepsilon+3]s}\geq 1$. Hence, $$\resizebox{.99\hsize}{!}{$\|I_3\|_{\mathcal{L}(\mathbf{P}_{\mathbf{m}}^{\mathbf{0}}X_0^\alpha, \mathbf{P}_{\mathbf{m}}^{\bm\varepsilon}X_\varepsilon^\alpha)}\leq 2\kappa(\lambda_m^\varepsilon)^\alpha LC [\tau(\varepsilon)|\log(\tau(\varepsilon))|\mathord+\rho(\varepsilon)]^\theta e^{-\lambda_m^\varepsilon t}e^{-[(2\mathord+(\kappa\mathord+2)\theta)L_F(\lambda_m^\varepsilon)^\alpha\mathord+\theta\lambda_m^\varepsilon+3\theta] s}$}.$$ By Lemma \ref{Jnorm}, we have $$\|I_4\|_{\mathcal{L}(\mathbf{P}_{\mathbf{m}}^{\mathbf{0}}X_0^\alpha, \mathbf{P}_{\mathbf{m}}^{\bm\varepsilon}X_\varepsilon^\alpha)}\leq (\lambda_m^\varepsilon)^\alpha L_F\|ED\Psi_0^\varepsilon-D\Psi_\varepsilon \mathbf{P}_{\mathbf{m}}^{\bm\varepsilon}E\|_\infty e^{-\lambda_m^\varepsilon t}e^{-2L_F(\lambda_m^\varepsilon)^\alpha s}.$$ By Section \ref{smoothness-subsection}, $D\Psi_\varepsilon\in \mathcal{E}_\varepsilon^{\theta,M}$ for $0<\theta\leq\theta_F$ and $\theta<\theta_0$.
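As a consistency check of the exponent appearing in the bound for $\|I_3\|$ above, raising the estimate for $\|Eu^\varepsilon_0(s)-u_\varepsilon(s)\|_{X_\varepsilon^\alpha}$ to the power $\theta$ and combining it with the factor $e^{-2L_F(\lambda_m^\varepsilon)^\alpha s}$ carried by $\Theta^\varepsilon_0(s)$ and the semigroup gives:

```latex
% Bookkeeping of the exponent in the bound for $\|I_3\|$:
$$\left(e^{-[(\kappa+2)L_F(\lambda_m^\varepsilon)^\alpha+\lambda_m^\varepsilon+3]s}\right)^{\theta}
e^{-2L_F(\lambda_m^\varepsilon)^\alpha s}
=e^{-[(2+(\kappa+2)\theta)L_F(\lambda_m^\varepsilon)^\alpha+\theta\lambda_m^\varepsilon+3\theta]s},$$
```

which is exactly the exponential factor in $s$ appearing in the estimate of $\|I_3\|$.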
Applying estimate (\ref{cotaextensionproyeccion}), Lemma \ref{Jnorm} and Lemma 5.6 from \cite{Arrieta-Santamaria-DCDS}, we have $$\resizebox{14.3cm}{!}{$\|I_5\|_{\mathcal{L}(\mathbf{P}_{\mathbf{m}}^{\mathbf{0}}X_0^\alpha, \mathbf{P}_{\mathbf{m}}^{\bm\varepsilon}X_\varepsilon^\alpha)}\leq \kappa L_F(\lambda_m^\varepsilon)^\alpha M(\tau(\varepsilon)|\log(\tau(\varepsilon))|\mathord+\rho(\varepsilon))^\theta e^{-\lambda_m^\varepsilon t}e^{-[(2\mathord+(\kappa\mathord+2)\theta)L_F(\lambda_m^\varepsilon)^\alpha\mathord+\theta\lambda_m^\varepsilon\mathord+3\theta] s} $}.$$ Finally, the norm of the term $I_6$ is estimated by $$\|I_6\|_{\mathcal{L}(\mathbf{P}_{\mathbf{m}}^{\mathbf{0}}X_0^\alpha, \mathbf{P}_{\mathbf{m}}^{\bm\varepsilon}X_\varepsilon^\alpha)}\leq 2(\lambda_m^\varepsilon)^\alpha L_F e^{-\lambda_m^\varepsilon (t-s)}\|E\Theta^\varepsilon_0(s)-\Theta_\varepsilon(s)\mathbf{P}_{\mathbf{m}}^{\bm\varepsilon}E \|_{\mathcal{L}(\mathbf{P}_{\mathbf{m}}^{\mathbf{0}}X_0^\alpha, \mathbf{P}_{\mathbf{m}}^{\bm\varepsilon}X_\varepsilon^\alpha)}.$$ Putting everything together, $$\|I\|_{\mathcal{L}(\mathbf{P}_{\mathbf{m}}^{\mathbf{0}}X_0^\alpha, \mathbf{P}_{\mathbf{m}}^{\bm\varepsilon}X_\varepsilon^\alpha)}\leq$$ $$ CL_FL(\lambda_m^\varepsilon)^\alpha\left[\beta(\varepsilon)\mathord+(\tau(\varepsilon)|\log(\tau(\varepsilon))|\mathord+\rho(\varepsilon))^\theta\right]e^{-\lambda_m^\varepsilon t} e^{-[(2\mathord+(\kappa\mathord+2)\theta)L_F(\lambda_m^\varepsilon)^\alpha\mathord+\theta\lambda_m^\varepsilon\mathord+3\theta]s}$$ $$+(\lambda_m^\varepsilon)^\alpha L_F\|ED\Psi_0^\varepsilon-D\Psi_\varepsilon\mathbf{P}_{\mathbf{m}}^{\bm\varepsilon} E\|_\infty e^{-\lambda_m^\varepsilon t}e^{-2L_F(\lambda_m^\varepsilon)^\alpha s}+$$ $$+ 2(\lambda_m^\varepsilon)^\alpha L_F e^{-\lambda_m^\varepsilon(t-s)}\|E\Theta^\varepsilon_0(s)-\Theta_\varepsilon(s)\mathbf{P}_{\mathbf{m}}^{\bm\varepsilon}E \|_{\mathcal{L}(\mathbf{P}_{\mathbf{m}}^{\mathbf{0}}X_0^\alpha,
\mathbf{P}_{\mathbf{m}}^{\bm\varepsilon}X_\varepsilon^\alpha)}.$$ Then, $$\|E\Theta^\varepsilon_0(t)-\Theta_\varepsilon(t)\mathbf{P}_{\mathbf{m}}^{\bm\varepsilon}E \|_{\mathcal{L}(\mathbf{P}_{\mathbf{m}}^{\mathbf{0}}X_0^\alpha, \mathbf{P}_{\mathbf{m}}^{\bm\varepsilon}X_\varepsilon^\alpha)}\leq \|I'\|_{\mathcal{L}(\mathbf{P}_{\mathbf{m}}^{\mathbf{0}}X_0^\alpha, \mathbf{P}_{\mathbf{m}}^{\bm\varepsilon}X_\varepsilon^\alpha)}+\int_t^0\|I\|_{\mathcal{L}(\mathbf{P}_{\mathbf{m}}^{\mathbf{0}}X_0^\alpha, \mathbf{P}_{\mathbf{m}}^{\bm\varepsilon}X_\varepsilon^\alpha)}\,ds\leq $$ $$\leq C_4 e^{-(\lambda_m^0+1)t}\tau(\varepsilon)+$$ $$CL_FL(\lambda_m^\varepsilon)^\alpha\left[\beta(\varepsilon)\mathord+(\tau(\varepsilon)|\log(\tau(\varepsilon))|\mathord+\rho(\varepsilon))^\theta\right]e^{-\lambda_m^\varepsilon t}\int_t^0 e^{-[(2\mathord+(\kappa\mathord+2)\theta)L_F(\lambda_m^\varepsilon)^\alpha\mathord+\theta\lambda_m^\varepsilon\mathord+3\theta] s}ds$$ $$+(\lambda_m^\varepsilon)^\alpha L_F\|ED\Psi_0^\varepsilon-D\Psi_\varepsilon\mathbf{P}_{\mathbf{m}}^{\bm\varepsilon} E\|_\infty e^{-\lambda_m^\varepsilon t}\int_t^0 e^{-2L_F(\lambda_m^\varepsilon)^\alpha s}ds+$$ $$+ 2(\lambda_m^\varepsilon)^\alpha L_F e^{-\lambda_m^\varepsilon t}\int_t^0 e^{\lambda_m^\varepsilon s}\|E\Theta^\varepsilon_0(s)-\Theta_\varepsilon(s)\mathbf{P}_{\mathbf{m}}^{\bm\varepsilon}E \|_{\mathcal{L}(\mathbf{P}_{\mathbf{m}}^{\mathbf{0}}X_0^\alpha, \mathbf{P}_{\mathbf{m}}^{\bm\varepsilon}X_\varepsilon^\alpha)}ds.$$ So, we have $$\|E\Theta^\varepsilon_0(t)-\Theta_\varepsilon(t)\mathbf{P}_{\mathbf{m}}^{\bm\varepsilon}E \|_{\mathcal{L}(\mathbf{P}_{\mathbf{m}}^{\mathbf{0}}X_0^\alpha, \mathbf{P}_{\mathbf{m}}^{\bm\varepsilon}X_\varepsilon^\alpha)}\leq $$ $$\leq C \left[\beta(\varepsilon)+(\tau(\varepsilon)|\log(\tau(\varepsilon))|+\rho(\varepsilon))^\theta\right]e^{-[(2+(\kappa+2)\theta)L_F(\lambda_m^\varepsilon)^\alpha +(\theta+1)\lambda_m^\varepsilon+3\theta] t}+$$
$$+\frac{\|ED\Psi_0^\varepsilon-D\Psi_\varepsilon\mathbf{P}_{\mathbf{m}}^{\bm\varepsilon} E\|_\infty}{2}e^{-[2L_F(\lambda_m^\varepsilon)^\alpha +\lambda_m^\varepsilon] t}+ $$ $$+ 2(\lambda_m^\varepsilon)^\alpha L_F e^{-\lambda_m^\varepsilon t}\int_t^0 e^{\lambda_m^\varepsilon s}\|E\Theta^\varepsilon_0(s)-\Theta_\varepsilon(s)\mathbf{P}_{\mathbf{m}}^{\bm\varepsilon}E \|_{\mathcal{L}(\mathbf{P}_{\mathbf{m}}^{\mathbf{0}}X_0^\alpha, \mathbf{P}_{\mathbf{m}}^{\bm\varepsilon}X_\varepsilon^\alpha)}ds.$$ Applying Gronwall's inequality, $$\|E\Theta^\varepsilon_0(t)-\Theta_\varepsilon(t)\mathbf{P}_{\mathbf{m}}^{\bm\varepsilon}E \|_{\mathcal{L}(\mathbf{P}_{\mathbf{m}}^{\mathbf{0}}X_0^\alpha, \mathbf{P}_{\mathbf{m}}^{\bm\varepsilon}X_\varepsilon^\alpha)}\leq$$ $$\leq C \left[\beta(\varepsilon)+(\tau(\varepsilon)|\log(\tau(\varepsilon))|+\rho(\varepsilon))^\theta\right]e^{-[(4+(\kappa+2)\theta)L_F(\lambda_m^\varepsilon)^\alpha +(\theta+1)\lambda_m^\varepsilon+3\theta] t}+$$ $$+\frac{\|ED\Psi_0^\varepsilon-D\Psi_\varepsilon \mathbf{P}_{\mathbf{m}}^{\bm\varepsilon}E\|_\infty}{2}e^{-[4L_F(\lambda_m^\varepsilon)^\alpha+\lambda_m^\varepsilon] t},$$ with $C>0$ a constant independent of $\varepsilon$ and $0<\theta\leq\theta_F$ with $\theta<\theta_0$. \end{proof} We now show the convergence of the differentials of the inertial manifolds and establish a rate for this convergence. For this, we define $\theta_1$ and $\tilde{\theta}$ as follows: \begin{equation}\label{theta1} \theta_1= \frac{\lambda_{m+1}^0-\lambda_m^0-4L_F(\lambda_m^0)^\alpha}{(\kappa+2)L_F(\lambda_m^0)^\alpha+\lambda_m^0+3}, \end{equation} and \begin{equation}\label{theta*} \tilde{\theta}=\min\left\{\theta_F,\, \theta_0,\, \theta_1\right\}.
\end{equation} \begin{prop}\label{differential-convergence} Let $\Phi_0^\varepsilon$ and $\Phi_\varepsilon$ be the inertial manifolds. If $\theta<\tilde{\theta}$, we have the following estimate: \begin{equation}\label{C1-convergence} \|E\Phi_0^\varepsilon\mathord-\Phi_\varepsilon\|_{C^1(\mathbb{R}^m, X_\varepsilon^\alpha)}\leq C\left[\beta(\varepsilon)\mathord+\Big(\tau(\varepsilon)|\log(\tau(\varepsilon))|\mathord+\rho(\varepsilon)\Big)^\theta\right] \end{equation} where $C$ is a constant independent of $\varepsilon$. \end{prop} \begin{proof} Taking into account the estimate obtained in Theorem \ref{distaciavariedadesinerciales}, it remains to estimate $\|ED\Phi_0^\varepsilon-D\Phi_\varepsilon\|_{L^\infty(\mathbb{R}^m,\, \mathcal{L}(\mathbb{R}^m, X_\varepsilon^\alpha))}$, that is, $$\sup_{z\in\mathbb{R}^m}\|ED\Phi_0^\varepsilon(z)\mathord-D\Phi_\varepsilon(z)\|_{\mathcal{L}(\mathbb{R}^m, X_\varepsilon^\alpha)}.$$ We know that $$\sup_{z\in\mathbb{R}^m}\|ED\Phi_0^\varepsilon(z)-D\Phi_\varepsilon(z)\|_{\mathcal{L}(\mathbb{R}^m, X_\varepsilon^\alpha)}=$$ $$=\sup_{z\in\mathbb{R}^m}\|ED\Psi_0^\varepsilon(j_0^{-1}(z))j_0^{-1}-D\Psi_\varepsilon(j_\varepsilon^{-1}(z))\mathbf{P}_{\mathbf{m}}^{\bm\varepsilon}Ej_0^{-1}\|_{\mathcal{L}(\mathbb{R}^m, X_\varepsilon^\alpha)}=$$ $$=\sup_{z\in\mathbb{R}^m}\|ED\Psi_0^\varepsilon(j_0^{-1}(z))-D\Psi_\varepsilon(\mathbf{P}_{\mathbf{m}}^{\bm\varepsilon}Ej_0^{-1}(z))\mathbf{P}_{\mathbf{m}}^{\bm\varepsilon}E\|_{\mathcal{L}(\mathbf{P}_{\mathbf{m}}^{\mathbf{0}}X_0^\alpha, X_\varepsilon^\alpha)}=$$ $$=\sup_{p^\varepsilon_0\in\mathbf{P}_{\mathbf{m}}^{\mathbf{0}}X_0^\alpha}\|ED\Psi_0^\varepsilon(p^\varepsilon_0)-D\Psi_\varepsilon(\mathbf{P}_{\mathbf{m}}^{\bm\varepsilon}Ep^\varepsilon_0)\mathbf{P}_{\mathbf{m}}^{\bm\varepsilon}E\|_{\mathcal{L}(\mathbf{P}_{\mathbf{m}}^{\mathbf{0}}X_0^\alpha, X_\varepsilon^\alpha)}=\|ED\Psi_0^\varepsilon-D\Psi_\varepsilon \mathbf{P}_{\mathbf{m}}^{\bm\varepsilon}E\|_\infty.$$ We have applied
$|j_0(p^\varepsilon_0)|_{0,\alpha}=\|p^\varepsilon_0\|_{X_0^\alpha}$ for any $p^\varepsilon_0\in \mathbf{P}_{\mathbf{m}}^{\mathbf{0}}X_0$, see (\ref{normajepsilon}). Then, for $z'\in\mathbb{R}^m$, with the definition (\ref{differential-Psi}), and denoting again $\Theta^\varepsilon_0(t)=\Theta^\varepsilon_0(j_0^{-1}(z),t)$ and $\Theta_\varepsilon(t)=\Theta_\varepsilon(j_\varepsilon^{-1}(z),t)$, we have $$ED\Psi_0^\varepsilon(j_0^{-1}(z))j_0^{-1}(z')-D\Psi_\varepsilon(\mathbf{P}_{\mathbf{m}}^{\bm\varepsilon}E\circ j_0^{-1}(z))\mathbf{P}_{\mathbf{m}}^{\bm\varepsilon}E\circ j_0^{-1}(z')=$$ $$=\int_{-\infty}^0\left( Ee^{A_0\mathbf{Q}_{\mathbf{m}}^{\mathbf{0}}s}\mathbf{Q}_{\mathbf{m}}^{\mathbf{0}}DF^\varepsilon_0(u^\varepsilon_0(s))(I+D\Psi_0^\varepsilon(p^\varepsilon_0(s)))\Theta^\varepsilon_0(s)j_0^{-1}(z')\right.$$ $$\left. -e^{A_\varepsilon\mathbf{Q}_{\mathbf{m}}^{\bm\varepsilon}s}\mathbf{Q}_{\mathbf{m}}^{\bm\varepsilon}DF_\varepsilon(u_\varepsilon(s))(I+D\Psi_\varepsilon(p_\varepsilon(s)))\Theta_\varepsilon(s)\mathbf{P}_{\mathbf{m}}^{\bm\varepsilon}Ej_0^{-1}(z')\right)ds=\int_{-\infty}^0 I\,ds.$$ The integrand $I$ can be decomposed, in a similar way as in the proof of Lemma \ref{Jdistance}, as $$I=\left(Ee^{A_0\mathbf{Q}_{\mathbf{m}}^{\mathbf{0}}s}\mathbf{Q}_{\mathbf{m}}^{\mathbf{0}}-e^{A_\varepsilon\mathbf{Q}_{\mathbf{m}}^{\bm\varepsilon} s}\mathbf{Q}_{\mathbf{m}}^{\bm\varepsilon} E\right)DF^\varepsilon_0(u^\varepsilon_0(s))(I+D\Psi_0^\varepsilon(p^\varepsilon_0(s)))\Theta^\varepsilon_0(s)j_0^{-1}(z')+$$ $$+e^{A_\varepsilon\mathbf{Q}_{\mathbf{m}}^{\bm\varepsilon}s}\mathbf{Q}_{\mathbf{m}}^{\bm\varepsilon}\Big(EDF^\varepsilon_0(u^\varepsilon_0(s))-DF_\varepsilon(Eu^\varepsilon_0(s))E\Big)(I+D\Psi_0^\varepsilon(p^\varepsilon_0(s)))\Theta^\varepsilon_0(s)j_0^{-1}(z')$$
$$+e^{A_\varepsilon\mathbf{Q}_{\mathbf{m}}^{\bm\varepsilon}s}\mathbf{Q}_{\mathbf{m}}^{\bm\varepsilon}\Big(DF_\varepsilon(Eu^\varepsilon_0(s))-DF_\varepsilon(u_\varepsilon(s))\Big)E(I+D\Psi_0^\varepsilon(p^\varepsilon_0(s)))\Theta^\varepsilon_0(s)j_0^{-1}(z')$$ $$+e^{A_\varepsilon\mathbf{Q}_{\mathbf{m}}^{\bm\varepsilon}s}\mathbf{Q}_{\mathbf{m}}^{\bm\varepsilon}DF_\varepsilon(u_\varepsilon(s))\Big( E(I\mathord+D\Psi_0^\varepsilon(p^\varepsilon_0(s)))\mathord-(I\mathord+D\Psi_\varepsilon(\mathbf{P}_{\mathbf{m}}^{\bm\varepsilon}Ep^\varepsilon_0(s)))E\Big)\Theta^\varepsilon_0(s)j_0^{-1}(z') $$ $$+e^{A_\varepsilon\mathbf{Q}_{\mathbf{m}}^{\bm\varepsilon}s}\mathbf{Q}_{\mathbf{m}}^{\bm\varepsilon}DF_\varepsilon(u_\varepsilon(s))\Big((I\mathord+D\Psi_\varepsilon(\mathbf{P}_{\mathbf{m}}^{\bm\varepsilon}Ep^\varepsilon_0(s)))\mathord-(I\mathord+D\Psi_\varepsilon(p_\varepsilon(s)))\Big)E\Theta^\varepsilon_0(s)j_0^{-1}(z') $$ $$+e^{A_\varepsilon\mathbf{Q}_{\mathbf{m}}^{\bm\varepsilon}s}\mathbf{Q}_{\mathbf{m}}^{\bm\varepsilon}DF_\varepsilon(u_\varepsilon(s))(I+D\Psi_\varepsilon(p_\varepsilon(s)))\Big( E\Theta^\varepsilon_0(s)-\Theta_\varepsilon(s)\mathbf{P}_{\mathbf{m}}^{\bm\varepsilon}E\Big)j_0^{-1}(z')$$ $$=I_1+I_2+I_3+I_4+I_5+I_6.$$ Applying Lemma 5.3 from \cite{Arrieta-Santamaria-DCDS} and Lemma \ref{Jnorm}, $$\|I_1\|_{X_\varepsilon^\alpha}\leq 2 C_5 L_F l_\varepsilon^\alpha(-s)e^{[-2L_F(\lambda_m^\varepsilon)^\alpha+\lambda_{m+1}^\varepsilon-\lambda_m^\varepsilon-1]s}|z'|_{0,\alpha}.$$ Following the same steps as in the proof of Lemma \ref{Jdistance}, we obtain $$\|I_2\|_{X_\varepsilon^\alpha}\leq 2(\lambda_{m+1}^\varepsilon)^\alpha\beta(\varepsilon)e^{[-2L_F(\lambda_m^\varepsilon)^\alpha+\lambda_{m+1}^\varepsilon-\lambda_m^\varepsilon]s}|z'|_{0,\alpha}, $$ $$\|I_4\|_{X_\varepsilon^\alpha}\leq (\lambda_{m+1}^\varepsilon)^\alpha L_F\|ED\Psi_0^\varepsilon-D\Psi_\varepsilon E\|_\infty
e^{[-2L_F(\lambda_m^\varepsilon)^\alpha+\lambda_{m+1}^\varepsilon-\lambda_m^\varepsilon]s}|z'|_{0,\alpha}. $$ For the sake of clarity we will denote by \begin{equation}\label{def-exponents-2} \begin{array}{l} \Lambda_3=-(2+(\kappa+2)\theta)L_F(\lambda_m^\varepsilon)^\alpha+\lambda_{m+1}^\varepsilon-(\theta+1)\lambda_m^\varepsilon-3\theta \\ \Lambda_4=-(4+(\kappa+2)\theta)L_F(\lambda_m^\varepsilon)^\alpha+\lambda_{m+1}^\varepsilon-(\theta+1)\lambda_m^\varepsilon-3\theta. \end{array} \end{equation} Then, we have $$\|I_3\|_{X_\varepsilon^\alpha}\leq 2\kappa(\lambda_{m+1}^\varepsilon)^\alpha LC[\tau(\varepsilon)|\log(\tau(\varepsilon))|+\rho(\varepsilon)]^\theta e^{\Lambda_3s}|z'|_{0,\alpha}, $$ $$\|I_5\|_{X_\varepsilon^\alpha}\leq \kappa L_F(\lambda_{m+1}^\varepsilon)^\alpha M C\left(\tau(\varepsilon)|\log(\tau(\varepsilon))|+\rho(\varepsilon)\right)^\theta e^{\Lambda_3 s}|z'|_{0,\alpha}, $$ and for the norm of $I_6$ we apply Lemma \ref{Jdistance}, $$\|I_6\|_{X_\varepsilon^\alpha}\leq\left( 2(\lambda_{m+1}^\varepsilon)^\alpha L_F C \left[\beta(\varepsilon)+(\tau(\varepsilon)|\log(\tau(\varepsilon))|+\rho(\varepsilon))^\theta\right]e^{\Lambda_4 s}+\right.$$ $$\left.(\lambda_{m+1}^\varepsilon)^\alpha L_F\|ED\Psi_0^\varepsilon-D\Psi_\varepsilon E\|_\infty e^{[-4L_F(\lambda_m^\varepsilon)^\alpha+\lambda_{m+1}^\varepsilon-\lambda_m^\varepsilon] s}\right)|z'|_{0,\alpha}.$$ Putting everything together, $\|I\|_{X_\varepsilon^\alpha}\leq \|I_1\|_{X_\varepsilon^\alpha}+\|I_2\|_{X_\varepsilon^\alpha}+\|I_3\|_{X_\varepsilon^\alpha}+\|I_4\|_{X_\varepsilon^\alpha}+\|I_5\|_{X_\varepsilon^\alpha}+\|I_6\|_{X_\varepsilon^\alpha}$, so $$\int_{-\infty}^0\|I\|_{X_\varepsilon^\alpha}ds\leq 2 C_5 L_F|z'|_{0,\alpha} \int_{-\infty}^0l_\varepsilon^\alpha(-s)e^{[-2L_F(\lambda_m^\varepsilon)^\alpha+\lambda_{m+1}^\varepsilon-\lambda_m^\varepsilon-1]s}ds+$$ $$+ 2(\lambda_{m+1}^\varepsilon)^\alpha\beta(\varepsilon)|z'|_{0,\alpha}\int_{-\infty}^0
e^{[-2L_F(\lambda_m^\varepsilon)^\alpha+\lambda_{m+1}^\varepsilon-\lambda_m^\varepsilon]s}ds+$$ $$+2\kappa(\lambda_{m+1}^\varepsilon)^\alpha LC[\tau(\varepsilon)|\log(\tau(\varepsilon))|+\rho(\varepsilon)]^\theta|z'|_{0,\alpha}\int_{-\infty}^0 e^{\Lambda_3s}ds+$$ $$+(\lambda_{m+1}^\varepsilon)^\alpha L_F\|ED\Psi_0^\varepsilon-D\Psi_\varepsilon E\|_\infty|z'|_{0,\alpha}\int_{-\infty}^0 e^{[-2L_F(\lambda_m^\varepsilon)^\alpha+\lambda_{m+1}^\varepsilon-\lambda_m^\varepsilon]s}ds+$$ $$+\kappa L_F(\lambda_{m+1}^\varepsilon)^\alpha MC\left(\tau(\varepsilon)|\log(\tau(\varepsilon))|+\rho(\varepsilon)\right)^\theta|z'|_{0,\alpha}\int_{-\infty}^0 e^{\Lambda_3 s}ds+$$ $$+2(\lambda_{m+1}^\varepsilon)^\alpha L_F C \left[\beta(\varepsilon)+(\tau(\varepsilon)|\log(\tau(\varepsilon))|+\rho(\varepsilon))^\theta\right]|z'|_{0,\alpha}\int_{-\infty}^0e^{\Lambda_4 s}ds+$$ $$+(\lambda_{m+1}^\varepsilon)^\alpha L_F\|ED\Psi_0^\varepsilon-D\Psi_\varepsilon E\|_\infty|z'|_{0,\alpha}\int_{-\infty}^0e^{[-4L_F(\lambda_m^\varepsilon)^\alpha+\lambda_{m+1}^\varepsilon-\lambda_m^\varepsilon]s}ds.$$ By Lemma 3.10 from \cite{Arrieta-Santamaria-DCDS}, the gap conditions described in Proposition \ref{existenciavariedadinercial} and the fact that $0<\theta<\tilde{\theta}$, see (\ref{theta*}), for $\varepsilon$ small enough we get $$\int_{-\infty}^0\|I\|_{X_\varepsilon^\alpha}ds\leq \left(C[\beta(\varepsilon)+(\tau(\varepsilon)|\log(\tau(\varepsilon))|+\rho(\varepsilon))^\theta]+\frac{1}{2}\|ED\Psi_0^\varepsilon-D\Psi_\varepsilon E\|_\infty\right)|z'|_{0,\alpha}.$$ Hence, $$\|[ED\Psi_0^\varepsilon(j_0^{-1}(z))- D\Psi_\varepsilon(\mathbf{P}_{\mathbf{m}}^{\bm\varepsilon}Ej_0^{-1}(z))\mathbf{P}_{\mathbf{m}}^{\bm\varepsilon}E]j_0^{-1}(z')\|_{X_\varepsilon^\alpha}\leq$$ $$\leq \left(C\left[\beta(\varepsilon)+(\tau(\varepsilon)|\log(\tau(\varepsilon))|+\rho(\varepsilon))^\theta\right]+\frac{1}{2}\|ED\Psi_0^\varepsilon-D\Psi_\varepsilon E\|_\infty\right)|z'|_{0,\alpha}.$$ Since $\Psi_\varepsilon$ and $\Psi_0^\varepsilon$ have bounded support, we consider the sup norm described in (\ref{supnorm})
for $u_0\in\mathbf{P}_{\mathbf{m}}^{\mathbf{0}}X_0^\alpha$ with $\|u_0\|_{X_0^\alpha}\leq 2\mathcal{R}$, with $\mathcal{R}>0$ an upper bound of the support of all $\Psi_\varepsilon$, $0<\varepsilon\leq\varepsilon_0$, and of $\Psi_0^\varepsilon$. So, $$\|ED\Psi_0^\varepsilon-D\Psi_\varepsilon E\|_\infty=\qquad\qquad\qquad $$ $$=\sup_{p\in\mathbf{P}_{\mathbf{m}}^{\mathbf{0}}X_0^\alpha, \|p\|_{X_0^\alpha}\leq2\mathcal{R}}\|ED\Psi_0^\varepsilon(p)-D\Psi_\varepsilon(\mathbf{P}_{\mathbf{m}}^{\bm\varepsilon}Ep)\mathbf{P}_{\mathbf{m}}^{\bm\varepsilon}E\|_{\mathcal{L}(\mathbf{P}_{\mathbf{m}}^{\mathbf{0}}X_0^\alpha, X_\varepsilon^\alpha)}$$ $$=\sup_{z\in\mathbb{R}^m, |z|_{0,\alpha}\leq 2\mathcal{R}}\|ED\Psi_0^\varepsilon(j_0^{-1}(z))- D\Psi_\varepsilon(\mathbf{P}_{\mathbf{m}}^{\bm\varepsilon}Ej_0^{-1}(z))\mathbf{P}_{\mathbf{m}}^{\bm\varepsilon}E\|_{\mathcal{L}(\mathbf{P}_{\mathbf{m}}^{\mathbf{0}}X_0^\alpha, X_\varepsilon^\alpha)}\leq $$ $$\leq C\left[\beta(\varepsilon)+(\tau(\varepsilon)|\log(\tau(\varepsilon))|+\rho(\varepsilon))^\theta\right]+\frac{1}{2}\|ED\Psi_0^\varepsilon-D\Psi_\varepsilon E\|_\infty,$$ which implies $$\|ED\Psi_0^\varepsilon-D\Psi_\varepsilon E\|_\infty \leq 2C\left[\beta(\varepsilon)+(\tau(\varepsilon)|\log(\tau(\varepsilon))|+\rho(\varepsilon))^\theta\right],$$ for $\theta<\tilde{\theta}$. Hence, $$\sup_{z\in\mathbb{R}^m}\|ED\Phi_0^\varepsilon(z)-D\Phi_\varepsilon(z)\|_{\mathcal{L}(\mathbb{R}^m, X_\varepsilon^\alpha)}\leq 2 C\left[\beta(\varepsilon)+(\tau(\varepsilon)|\log(\tau(\varepsilon))|+\rho(\varepsilon))^\theta\right].$$ Applying Theorem \ref{distaciavariedadesinerciales}, we conclude $$\|ED\Phi_0^\varepsilon-D\Phi_\varepsilon\|_{C^1(\mathbb{R}^m, X_\varepsilon^\alpha)}\leq C\left[\beta(\varepsilon)+(\tau(\varepsilon)|\log(\tau(\varepsilon))|+\rho(\varepsilon))^\theta\right].$$ This concludes the proof of the proposition.
\end{proof} With this estimate we can analyze in detail the $C^{1, \theta}$-convergence of the inertial manifolds for $\theta<\tilde{\theta}$ small enough. We now give the proof of the main result of this subsection. \begin{proof} {\sl (of Theorem \ref{convergence-C^1-theo})} We want to show the existence of $\theta^*$ such that the inertial manifolds $\Phi_\varepsilon$ converge to $\Phi^\varepsilon_0$, as $\varepsilon$ tends to zero, in the $C^{1, \theta}$ topology for $\theta<\theta^*$, and to obtain a rate of this convergence, that is, an estimate of $\|\Phi_\varepsilon- E\Phi_0^\varepsilon\|_{C^{1, \theta}(\mathbb{R}^m, X_\varepsilon^\alpha)}$. Let us choose $\theta^*<\tilde{\theta}$ as close as we want to $\tilde{\theta}$, where $\tilde{\theta}$ is given by \eqref{theta*}, so that Proposition \ref{differential-convergence} holds. As we have mentioned, $$\|\Phi_\varepsilon- E\Phi_0^\varepsilon\|_{C^{1, \theta}(\mathbb{R}^m, X_\varepsilon^\alpha)}=\|\Phi_\varepsilon- E\Phi_0^\varepsilon\|_{C^1(\mathbb{R}^m, X_\varepsilon^\alpha)}+$$ $$+ \sup_{z, z'\in\mathbb{R}^m}\frac{\|(D\Phi_\varepsilon- ED\Phi_0^\varepsilon)(z)-(D\Phi_\varepsilon- ED\Phi_0^\varepsilon)(z')\|_{\mathcal{L}(\mathbb{R}^m, X_\varepsilon^\alpha)}}{|z-z'|_{\varepsilon,\alpha}^{\theta}}=$$ $$=I_1+I_2.$$ For $\theta<\theta^*$, $I_2$ can be written as $I_2=I_{21}\cdot I_{22}$, where $$I_{21}=\left(\frac{\|(D\Phi_\varepsilon- ED\Phi_0^\varepsilon)(z)-(D\Phi_\varepsilon- ED\Phi_0^\varepsilon)(z')\|_{\mathcal{L}(\mathbb{R}^m, X^\alpha_\varepsilon)}}{|z-z'|_{\varepsilon,\alpha}^{\theta^*}}\right)^{\frac{\theta}{\theta^*}},$$ $$I_{22}=\|(D\Phi_\varepsilon- ED\Phi_0^\varepsilon)(z)-(D\Phi_\varepsilon- ED\Phi_0^\varepsilon)(z')\|^{1-\frac{\theta}{\theta^*}}_{\mathcal{L}(\mathbb{R}^m, X_\varepsilon^\alpha)}.$$ Note that, since for each $\varepsilon> 0$, $\Phi_\varepsilon=\Psi_\varepsilon\circ j_\varepsilon^{-1}$, and $\Phi^\varepsilon_0=\Psi^\varepsilon_0\circ j_0^{-1}$ then by the
chain rule, for all $z, z'\in\mathbb{R}^m$, $$D\Phi_\varepsilon(z)z'=D\Psi_\varepsilon(j_\varepsilon^{-1}(z))( j_\varepsilon^{-1}(z')),$$ $$D\Phi^\varepsilon_0(z)z'=D\Psi^\varepsilon_0(j_0^{-1}(z))( j_0^{-1}(z')).$$ Also, notice that from the definition of $j_\varepsilon$, $j_0$, we have $j_\varepsilon\circ \mathbf{P}_{\mathbf{m}}^{\bm\varepsilon}E=j_0$ or, equivalently, $j_\varepsilon^{-1}= \mathbf{P}_{\mathbf{m}}^{\bm\varepsilon}E\circ j_0^{-1}$. Then, applying \eqref{des-normas} to the denominator, $$I_{21}\leq \resizebox{14.5cm}{!}{$\left(\frac{\|(D\Psi_\varepsilon(j_\varepsilon^{-1}(z))\mathord-D\Psi_\varepsilon(j_\varepsilon^{-1}(z')))j_\varepsilon^{-1}\mathord+(ED\Psi_0^\varepsilon(j_0^{-1}(z'))\mathord-ED\Psi_0^\varepsilon(j_0^{-1}(z)))j_0^{-1}\|_{\mathcal{L}(\mathbb{R}^m, X_\varepsilon^\alpha)}}{(1-\delta)^{\theta^*}\|j_0^{-1}(z)-j_0^{-1}(z')\|_{X_0^\alpha}^{\theta^*}}\right)^{\frac{\theta}{\theta^*}}$}$$ In the previous subsection we proved that $D\Psi_\varepsilon\in \mathcal{E}_\varepsilon^{\theta,M}$ for $\theta<\theta_0$; in particular, $D\Psi_\varepsilon\in \mathcal{E}_\varepsilon^{\theta,M}$ for $\theta<\tilde{\theta}$, so without loss of generality we may consider $D\Psi_\varepsilon\in \mathcal{E}_\varepsilon^{\theta^*,M}$. Moreover, $\|j_\varepsilon^{-1}\|_{\mathcal{L}(\mathbb{R}^m, \mathbf{P}_{\mathbf{m}}^{\bm\varepsilon} X_\varepsilon^\alpha)}=\|\mathbf{P}_{\mathbf{m}}^{\bm\varepsilon}E\circ j_0^{-1}\|_{\mathcal{L}(\mathbb{R}^m, \mathbf{P}_{\mathbf{m}}^{\bm\varepsilon} X_\varepsilon^\alpha)}\leq\kappa$, see (\ref{normajepsilon}) and (\ref{cotaextensionproyeccion}).
Then, we obtain $$I_{21}\leq \frac{(M\kappa(\kappa+1))^{\frac{\theta}{\theta^*}}\|j_0^{-1}(z)-j_0^{-1}(z')\|_{X_0^\alpha}^{\theta}}{(1-\delta)^\theta\|j_0^{-1}(z)-j_0^{-1}(z')\|_{X_0^\alpha}^{\theta}}=\frac{\left(M\kappa(\kappa+1)\right)^{\frac{\theta}{\theta^*}}}{(1-\delta)^\theta}.$$ Note that, $$I_{22}\leq \left(2\|D\Phi_\varepsilon-ED\Phi_0^\varepsilon\|_{L^\infty(\mathbb{R}^m,\, \mathcal{L}(\mathbb{R}^m, X_\varepsilon^\alpha))}\right)^{1-\frac{\theta}{\theta^*}}.$$ Hence, for $\theta<\theta^*$, $$\|\Phi_\varepsilon- E\Phi_0^\varepsilon\|_{C^{1, \theta}(\mathbb{R}^m, X_\varepsilon^\alpha)}\leq $$ $$\leq \|\Phi_\varepsilon- E\Phi_0^\varepsilon\|_{L^\infty(\mathbb{R}^m, X_\varepsilon^\alpha)}+\|D\Phi_\varepsilon-ED\Phi_0^\varepsilon\|_{L^\infty(\mathbb{R}^m,\, \mathcal{L}(\mathbb{R}^m, X_\varepsilon^\alpha))}+$$ $$+\frac{\left(M\kappa(\kappa+1)\right)^{\frac{\theta}{\theta^*}}}{(1-\delta)^\theta} \left(2\|D\Phi_\varepsilon-ED\Phi_0^\varepsilon\|_{L^\infty(\mathbb{R}^m,\, \mathcal{L}(\mathbb{R}^m, X_\varepsilon^\alpha))}\right)^{1-\frac{\theta}{\theta^*}}.$$ By Theorem \ref{distaciavariedadesinerciales} and Proposition \ref{differential-convergence}, we have $$\|\Phi_\varepsilon- E\Phi_0^\varepsilon\|_{C^{1, \theta}(\mathbb{R}^m, X_\varepsilon^\alpha)}\leq $$ $$\leq C[\tau(\varepsilon)|\log(\tau(\varepsilon))|+\rho(\varepsilon)]+2 C\left[\beta(\varepsilon)+(\tau(\varepsilon)|\log(\tau(\varepsilon))|+\rho(\varepsilon))^{\theta^*}\right]+$$ $$+\frac{\left(M\kappa(\kappa+1)\right)^{\frac{\theta}{\theta^*}}}{(1-\delta)^\theta} \left(4 C\left[\beta(\varepsilon)+(\tau(\varepsilon)|\log(\tau(\varepsilon))|+\rho(\varepsilon))^{\theta^*}\right]\right)^{1-\frac{\theta}{\theta^*}}\leq$$ $$\leq \mathbf{C} \left(\left[\beta(\varepsilon)+(\tau(\varepsilon)|\log(\tau(\varepsilon))|+\rho(\varepsilon))^{\theta^*}\right]\right)^{1-\frac{\theta}{\theta^*}},$$ which shows the result. \end{proof} \par \begin{thebibliography}{99} \bibitem{Arrieta-Santamaria-DCDS} J.M. Arrieta, E. 
Santamar\'ia, {\sl Estimates on the distance of inertial manifolds}, Discrete and Continuous Dynamical Systems A 34 (2014), no. 10, 3921-3944. \bibitem{Arrieta-Santamaria-2} J.M. Arrieta, E. Santamar\'ia, {\sl Distance of attractors for thin domains}, (in preparation). \bibitem{B&V2}A. V. Babin and M. I. Vishik, {\sl Attractors of Evolution Equations}, Studies in Mathematics and its Applications, 25. North-Holland Publishing Co., Amsterdam, (1992). \bibitem{Bates-Lu-Zeng1998} P.W. Bates, K. Lu and C. Zeng, {\sl Existence and Persistence of Invariant Manifolds for Semiflows in Banach Space}, Mem. Amer. Math. Soc. {\bf 135} (1998), no. 645. \bibitem{LibroAlexandre}A. N. Carvalho, J. Langa, J. C. Robinson, {\sl Attractors for Infinite-Dimensional Non-Autonomous Dynamical-Systems}, Applied Mathematical Sciences, Vol. 182, Springer, (2012). \bibitem{ChaoBiaoCo}Shui-Nee Chow, Xiao-Biao Lin and Kening Lu, {\sl Smooth Invariant Foliations in Infinite Dimensional Spaces}, Journal of Differential Equations 94 (1991), no. 2, 266-291. \bibitem{ChowLuSell}S. Chow, K. Lu and G. R. Sell, {\sl Smoothness of Inertial Manifolds}, Journal of Mathematical Analysis and Applications 169 (1992), no. 1, 283-312. \bibitem{Cholewa}J. W. Cholewa and T. Dlotko, {\sl Global Attractors in Abstract Parabolic Problems}, London Mathematical Society Lecture Note Series, 278. Cambridge University Press, Cambridge, (2000). \bibitem{Hale}Jack K. Hale, {\sl Asymptotic Behavior of Dissipative Systems}, American Mathematical Society (1988). \bibitem{Hale&Raugel3}Jack K. Hale and Genevieve Raugel, {\sl Reaction-Diffusion Equation on Thin Domains}, J. Math. Pures et Appl. (9) 71 (1992), no. 1, 33-95. \bibitem{Henry1}Daniel B. Henry, {\sl Geometric Theory of Semilinear Parabolic Equations}, Lecture Notes in Mathematics, 840. Springer-Verlag, Berlin-New York, (1981). \bibitem{Jones}Don A. Jones, Andrew M. Stuart and Edriss S.
Titi, {\sl Persistence of Invariant Sets for Dissipative Evolution Equations}, Journal of Mathematical Analysis and Applications 219 (1998), 479-502. \bibitem{Ng2013} P. S. Ngiamsunthorn, {\sl Invariant manifolds for parabolic equations under perturbation of the domain}, Nonlinear Analysis TMA 80 (2013), 23-48. \bibitem{Raugel}Genevieve Raugel, {\sl Dynamics of partial differential equations on thin domains}. Dynamical systems (Montecatini Terme, 1994), 208-315, Lecture Notes in Math., 1609, Springer, Berlin, (1995). \bibitem{JamesRobinson}James C. Robinson, {\sl Infinite-dimensional dynamical systems. An introduction to dissipative parabolic PDEs and the theory of global attractors}, Cambridge Texts in Applied Mathematics. Cambridge University Press, Cambridge, (2001). \bibitem{Sell&You}George R. Sell and Yuncheng You, {\sl Dynamics of Evolutionary Equations}, Applied Mathematical Sciences, 143, Springer (2002). \bibitem{Varchon2012} N. Varchon, {\sl Domain perturbation and invariant manifolds}, J. Evol. Equ. 12 (2012), 547-569. \end{thebibliography} \end{document} \section{Introduction} \selectlanguage{english} In this paper we provide a comprehensive presentation of the periodic unfolding method for thin domains with a periodic oscillatory boundary. We adapt the unfolding method introduced in \cite{CiorDamGri02} by D. Cioranescu, D. Damlamian and G. Griso (see also \cite{CiorDamGri08}) to thin domains with an oscillatory boundary and show that it provides a general approach to analyze, in a systematic and unified way, certain elliptic problems posed in this kind of domain.
Throughout this paper, we consider thin domains with order of thickness $\varepsilon>0$ which are defined as follows \begin{equation}\label{thin1a} R^\varepsilon = \Big\{ (x,y) \in \R^2 \; | \; x \in (0,1), \; 0 < y < \varepsilon \, g(x/\varepsilon^\alpha) \Big\}, \end{equation} where $\alpha>0$ and $g:\mathbb{R} \longrightarrow\mathbb{R}$ is an $L-$periodic function satisfying $0<g_0 \leq g(\cdot)\leq g_1$ for some fixed positive constants $g_0$ and $g_1$. Moreover, throughout the paper we will assume that the function $g$ is not constant, in order to guarantee the oscillating behavior of the upper boundary. Observe that different values of $\alpha>0$ will give us different types of oscillatory behavior or rugosity at the boundary. We will distinguish three different cases taking into account the relation between the order of the thickness of the domain and the order of the period of the oscillations. More precisely: \begin{itemize} \item $0<\alpha<1$. We will refer to this case as the ``weak oscillatory'' case. The order of the period of the oscillations is $\varepsilon^\alpha$, which is much larger than the order of the amplitude of the oscillations, $\varepsilon$, or the order of the thickness of the domain, also $\varepsilon$. Notice that in this case, if the function $g$ is smooth enough, then the function $x\to \varepsilon g(x/\varepsilon^\alpha)$ is uniformly $C^{1,\theta}$ for any $\theta<1- \alpha$ and it goes to zero in $C^{1,\theta}(\R)$. \item $\alpha=1$. We will refer to this case as the ``resonant'' or ``critical'' case. Notice that the order of the period coincides with the order of the amplitude of the oscillations and it also coincides with the order of the thickness of the domain. Moreover, if again the function $g$ is smooth enough, then the function $x\to \varepsilon g(x/\varepsilon)$ is uniformly $C^{1}$ but it does not go to 0 in this topology. \item $\alpha>1$.
We will refer to this case as the ``fast'' or ``extremely high oscillatory'' case. The order of the period of the oscillations is much smaller than the order of the amplitude of the oscillations or the order of the thickness of the domain. The function $x\to \varepsilon g(x/\varepsilon^\alpha)$ is uniformly bounded in some H\"older norm but not in $C^{1}$. \end{itemize} In the thin domain $R^\varepsilon$ we study the behavior of the solutions of the Neumann problem for the Laplace operator, \begin{equation} \label{OPI0} \left\{ \begin{gathered} - \Delta u^\varepsilon + u^\varepsilon = f^\varepsilon \quad \textrm{ in } R^\varepsilon \\ \frac{\partial u^\varepsilon}{\partial \nu^\varepsilon} = 0 \quad \textrm{ on } \partial R^\varepsilon \end{gathered} \right. \end{equation} where $f^\varepsilon \in L^2(R^\varepsilon)$ and $\nu^\varepsilon$ is the unit outward normal to $\partial R^\varepsilon$. The variational formulation is: find $u^\varepsilon \in H^1(R^\varepsilon)$ such that \begin{equation} \label{VFP1} \int_{R^\varepsilon} \Big\{ \frac{\partial u^\varepsilon}{\partial x} \frac{\partial \varphi}{\partial x} + \frac{\partial u^\varepsilon}{\partial y} \frac{\partial \varphi}{\partial y} + u^\varepsilon \varphi \Big\} \,dx dy = \int_{R^\varepsilon} f^\varepsilon \varphi \,dx dy, \quad \forall \varphi \in H^1(R^\varepsilon). \end{equation} Observe that the existence and uniqueness of solutions for problem \eqref{VFP1} are guaranteed by the Lax-Milgram Theorem for every fixed $\varepsilon>0$. Notice also that the behavior of the solutions will depend essentially on the value of the parameter $\alpha$. Moreover, since the domain $R^\varepsilon$ has order of thickness $\varepsilon$ it is expected that the family of solutions $u^\varepsilon$ will converge to a function of just one variable as the parameter $\varepsilon$ tends to zero.
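The three regimes above are driven by the chain-rule factor in the slope of the oscillating boundary: $\frac{d}{dx}\big[\varepsilon\, g(x/\varepsilon^\alpha)\big]=\varepsilon^{1-\alpha}g'(x/\varepsilon^\alpha)$, which vanishes for $\alpha<1$, stays of order one for $\alpha=1$ and blows up for $\alpha>1$. A minimal numerical sketch of this scaling (not taken from the paper; the profile $g(y)=2+\sin(2\pi y)$ with period $L=1$ and the value $\varepsilon=10^{-3}$ are illustrative assumptions):

```python
import math
import random

L = 1.0                                             # assumed period of g
g  = lambda y: 2.0 + math.sin(2 * math.pi * y / L)  # sample smooth, non-constant profile
gp = lambda y: (2 * math.pi / L) * math.cos(2 * math.pi * y / L)  # its derivative g'

def max_slope(eps, alpha, samples=20000):
    """Approximate sup_x |d/dx [eps * g(x / eps**alpha)]| over x in (0, 1).

    By the chain rule the derivative equals eps**(1 - alpha) * g'(x / eps**alpha),
    so the supremum scales like eps**(1 - alpha).
    """
    rng = random.Random(0)  # fixed seed for reproducibility
    return max(eps ** (1 - alpha) * abs(gp(rng.random() / eps ** alpha))
               for _ in range(samples))

eps = 1e-3
weak     = max_slope(eps, 0.5)   # alpha < 1: slope tends to 0   (~ 2*pi*sqrt(eps))
resonant = max_slope(eps, 1.0)   # alpha = 1: slope is O(1)      (~ 2*pi)
fast     = max_slope(eps, 2.0)   # alpha > 1: slope blows up     (~ 2*pi/eps)
print(weak, resonant, fast)
```

The printed values illustrate why, for $\alpha<1$, the boundary $\varepsilon g(\cdot/\varepsilon^\alpha)$ flattens out in the $C^1$ topology, while for $\alpha>1$ no uniform $C^1$ bound is available.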
{\color{blue} Hence, the purpose of this paper is to introduce an unfolding operator which allows us to obtain in an easy way the homogenized limit problem and a corrector result for problem \eqref{OPI0} in the three different situations described above: weak, critical and fast oscillations. Moreover, the interest of this method comes from the fact that for the homogeneous Neumann problem \eqref{OPI0} we may admit non-smooth periodic oscillatory boundaries. } {\color{blue} Let us point out that there are several papers addressing the problem of studying the effect of rough boundaries on the behavior of the solution of partial differential equations posed in thin domains. We will mention some of them here and we also refer to their corresponding bibliographies. In \cite{MelPo09, MelPo10, MelPo12} the authors study the asymptotic behavior of solutions to certain elliptic and parabolic problems in a thin perforated domain with rapidly varying thickness. The results obtained in these papers are related to the construction of a suitable asymptotic expansion of the solutions which was proposed by T. Melnyk in \cite{Mel} for the investigation of elliptic and spectral problems in thin perforated domains with rapidly varying thickness. The analysis of the asymptotic behavior of solutions to various boundary value problems in thin regions with rapidly changing thickness was also the subject of \cite{KornVo,ChechPi}. We also refer the reader to \cite{AnBra, BaZa, BraFon} where the asymptotic description of nonlinearly elastic thin films with a fast-oscillating profile was obtained. In addition, in \cite{ArrCarPerSil,ArrPer2013, ArrVil2014a,ArrVil2016,DamPe,Per} many classes of thin domains with oscillating boundaries have been considered to discuss the limit problems and convergence properties of the solutions of the Neumann problem \eqref{OPI0}. 
More specifically, we would like to point out that the problems studied in this paper were originally discussed in previous works considering smooth boundaries and using different techniques depending on the frequency of the oscillations. Thus, the case where the thin domain presents weak roughness ($0<\alpha <1$) was treated in \cite{Arr} using changes of variables and rescaling the thin domain as in classical works in thin domains with no oscillations, see for instance \cite{HaRau, Rau}. The resonant case ($\alpha=1$) was studied in \cite{ArrCarPerSil, MelPo10} using standard techniques in homogenization, see \cite{BenLioPap, CioPau, SP, CiorDo} for a general introduction to the homogenization theory. In particular, in \cite{ArrCarPerSil} the authors combine methods from linear homogenization theory and from the theory of nonlinear dynamics of dissipative systems to analyze the convergence properties of the elliptic and the corresponding semilinear parabolic problem. An asymptotic expansion of the solutions was constructed in \cite{MelPo10} to derive the homogenized limit problem and corrector results for more general elliptic differential equations with rapidly oscillating coefficients posed in a thin perforated region with rapidly changing thickness in $\R^n$, $n\geq 2$. Recently, the homogenized limit problem for the case of thin domains with a fast oscillatory boundary ($\alpha>1$) was obtained in \cite{ArrPer2013} by decomposing the domain in two parts separating the oscillatory boundary. The authors also consider more general and complicated geometries for thin domains which are not given as the graph of certain smooth functions. Unlike the works mentioned above, the method introduced in this paper allows us to tackle the three cases for the homogeneous Neumann problem \eqref{OPI0} in a unified way and with milder assumptions on the regularity of the domain $R^\varepsilon$, which is related to the regularity of the function $g$.
As a matter of fact, we will be able to deal with a larger class of oscillating boundaries. For instance, we may admit thin domains where the function $g$ is continuous, comb-like thin domains or domains where extension operators do not apply, see Figure \ref{fthin2}. Our general requirements for the function $g$ are expressed in hypothesis $\bf (H_g)$ in Section \ref{21}. \begin{figure} \caption{Examples of thin domains.} \label{fthin2} \end{figure} Furthermore, it is worth observing that the unfolding method also allows us to obtain some new strong convergence results, see for instance Proposition \ref{3correct result} and Proposition \ref{strong5}. } Notice that thin structures with oscillating boundaries appear in many fields of science, such as fluid dynamics (lubrication), solid mechanics (thin rods, plates or shells) or even physiology (blood circulation). Therefore, obtaining the limit equation of the model considered on thin domains with a not necessarily smooth oscillatory boundary, comparing the limit solution and the solutions of the original equation, analyzing the coefficients of the limit equation and understanding how the geometry of the thin domains affects the limit equation are very relevant issues in applied science. See \cite{BouCiu,PaSu} for some concrete applied problems. Finally, we would like to point out that the unfolding method has been successfully applied to other structures to study the effect of rough boundaries on the behavior of the solution of partial differential equations. Among other papers, we can cite \cite{DamPe} for an application of the method to a 2-dimensional domain with oscillating boundaries (actually, \cite{DamPe} was the first work in which the unfolding operator method was applied to a domain with an oscillatory boundary). More precisely, using the periodic unfolding method, the homogenized limit problem and a result of strong convergence are obtained for a variational problem on a sequence of 2-dimensional comb-like domains.
In the framework of linearized elasticity we would like to mention \cite{BGG07B,BGG07A,BG08} where the authors describe the homogenization process for the junction of rods and a plate. In \cite{CasLuSu} the authors study the asymptotic behavior of the solutions of the Navier-Stokes system in a thin domain satisfying the Navier boundary condition on a periodic rough set. Recently we have considered in \cite{ArrVil2014b,ArrVil2016} two-dimensional thin domains where the oscillations at the boundary are such that both the amplitude and period of the oscillations may vary in space. {\color{blue} This paper is organized in four sections. In Section \ref{21} we fix some notation that will be used throughout the paper, we introduce the unfolding operator and we prove its main properties. In addition, in Subsection \ref{The averaging} we introduce an averaging operator which is the adjoint of the unfolding operator. Section \ref{22} is dedicated to the resonant case. We apply the unfolding method to identify the homogenized limit problem and to give a corrector result, replacing the conditions imposed on the smoothness of the boundary in \cite{ArrCarPerSil, MelPo10} by the weaker condition of existence of a Poincar\'e$-$Wirtinger inequality in the reference cell. In Section \ref{23} we study the case of weak oscillations in a similar way to the resonant case. We point out that, in the framework of the unfolding method, we easily get a new result of strong convergence, see Proposition \ref{3correct result}. Section \ref{extrem} deals with the case of thin domains with very highly oscillatory boundaries. In such situations the period of the oscillations is so small that we have to proceed in a different way from the previous two cases to derive the homogenized limit problem. Furthermore, a new result of strong convergence is obtained, see Proposition \ref{strong5}.
} \section{Definition of the unfolding operator and main properties}\label{21} \subsection{Some notation and description of thin domains with an oscillatory boundary}\label{notation} We will consider two-dimensional thin open sets with an oscillatory behavior in their top boundary which are defined as follows \begin{equation}\label{thingebar} R^\varepsilon = \Big\{ (x, y) \in \R^2 \; | \; x \in (0,1), \; \varepsilon b < y < \varepsilon \, g(x/\varepsilon^\alpha) \Big\}, \end{equation} where $b$ is {\color{blue} a non-negative constant}, the parameters $\varepsilon$ and $\alpha$ are greater than zero and the function $g: \R \to \R$ satisfies the following hypothesis \par \begin{itemize} \item[\textbf{(H{\scriptsize g})}] $g: \R \to \R$ is defined for all $x \in \R$, is $L-$periodic (that is, $g(x+L)=g(x) \, \forall x\in \R$), it belongs to $L^\infty(\R)$ and there exist {\color{blue} two non-negative constants} $g_0$ and $g_1$ such that $0\leq b\leq g_0 \leq g(x) \leq g_1$ for all $x \in \R$, where $g_0=\min_{x \in \R}\{g(x)\}\geqslant b.$ Moreover, assume that $g(\cdot)$ is lower semicontinuous, that is, ${\displaystyle g(x_0)\leq \liminf_{x\to x_0}g(x), \; \forall x_0 \in \R}$. \end{itemize} \par Since $g(\cdot)$ is $L-$periodic, the representative cell which describes the thin structure is given by $$Y^*=\{(y_1, y_2) \in \R^2 \; | \; y_1 \in (0, L), \; b < y_2 < g(y_1)\}.$$ \begin{figure} \caption{Example of oscillating boundary} \label{thin1raro} \end{figure} \begin{remark} Observe that from hypothesis $\textbf{(H{\scriptsize g})}$ the representative cell $Y^*$ is an open set.
As a matter of fact, let $(x_0, y_0)$ be a point in $Y^*$; then, from the lower semicontinuity of the function $g(\cdot)$ we have $$y_0< g(x_0)\leq \liminf_{x\to x_0}g(x).$$ Thus, if we consider $\delta\equiv g(x_0)-y_0>0$ then there exists $\varepsilon>0$ such that $$ g(x_0)-\delta/2\leq g(y_1) \quad \hbox{ for all } |y_1-x_0|<\varepsilon.$$ Therefore, for $\varepsilon$ small enough we can guarantee that the neighborhood of the point $(x_0, y_0)$ given by $$U=\big\{(y_1, y_2)\in \R^2 : |y_1-x_0|<\varepsilon, |y_2-y_0|<\min\{\delta/2, (y_0-b)/2\}\big\}$$ is contained in $Y^*$. Observe also that since $Y^*$ is an open set we immediately have that $R^\varepsilon$ is an open set too. \end{remark} \begin{remark} Throughout the paper we will use the subindex $\#$ to denote periodicity with respect to the first variable of the reference cell. Thus, let $\psi$ be a function in $Y^*$ and denote by $\psi^\#$ its periodic extension to $Y^*_{\hbox{per}}=\{(y_1, y_2) \in \R^2 \; | \; y_1 \in \R, \; b < y_2 < g(y_1)\}$, defined by $$\psi^\#(y_1+kL, y_2)= \psi(y_1, y_2) \quad \hbox{a.e. in } Y^*, \, \forall k \in \Z.$$ Then, the space $W^{1,p}_{\#}\big(Y^*\big)$ is given by $$W^{1,p}_{\#}\big(Y^*\big)= \{ \psi \in W^{1,p}\big(Y^*\big): \psi^\# \in W^{1,p}\big(\mathcal{O}\big) \hbox{ for any bounded set } \mathcal{O} \subset Y^*_{\hbox{per}}\}.$$ Moreover, $\mathcal{M}_{Y^*}(\cdot)$ denotes the mean value over the reference cell $Y^*$. \end{remark} \begin{remark} Notice that the sets considered in this section may be disconnected since the minimum of the function $g$ can be equal to $b$. Actually, we are interested in including disconnected thin sets to obtain the limit problem for the case with fast oscillations in Section \ref{extrem}. \end{remark} It is important to note that in our setting two functions $g(\cdot)$ and $\hat{g}(\cdot)$ satisfying that $g(x)=\hat{g}(x)$ for a.e.
$x \in \R$ do not define the same basic cell $Y^*$ and, as a consequence, the corresponding open sets $R^\varepsilon$ are different too. For instance, consider the constant function $\hat{g}\equiv b+2$ and the following $L$-periodic function $$ g(y_1)= \left\{ \begin{aligned} b+2 \quad &\hbox{if } y_1 \in [0,L)\setminus\{L/2\},\\ b+1 \quad &\hbox{if } y_1=L/2. \end{aligned} \right. $$ It is obvious that $g(x)=\hat{g}(x)$ for a.e. $x \in \R$ but, notice that, $\hat{g}$ defines a thin domain $\hat{R}^\varepsilon$ which does not present oscillations while $g$ defines a fissured thin domain $R^\varepsilon = \hat{R}^\varepsilon\setminus \bigcup I^\varepsilon_k$ where $I^\varepsilon_k$ is given by $$I^\varepsilon_k=\Big\{\Big(\varepsilon k L +\frac{\varepsilon L}{2}, y\Big): \varepsilon(b+1)<y<\varepsilon(b+2)\Big\},$$ where $k$ is any integer satisfying $0<\varepsilon k L +\frac{\varepsilon L}{2}<1,$ see Figure \ref{fissured}. \begin{figure} \caption{Fissured thin domain $R^\varepsilon$ } \label{fissured} \end{figure} By analogy with the definition of the integer and fractional part of a real number, for $x\in \R$, $[x]_{L}$ denotes the unique integer such that $x \in \big[ [x]_{L} L, ([x]_{L} + 1) L\big)$ and $\{x\}_L \in [0, L)$ is such that $x = [x]_{L}L + \{x\}_L$. Then, for each $\varepsilon>0$ and for every $x \in \R$ there exists a unique integer, $\Big[\frac{x}{\varepsilon^\alpha}\Big]_L$, such that \begin{equation}\label{decomposition} x = \varepsilon^\alpha\Big[\frac{x}{\varepsilon^\alpha}\Big]_LL + \varepsilon^\alpha\Big\{\frac{x}{\varepsilon^\alpha}\Big\}_L, \quad \Big\{\frac{x}{\varepsilon^\alpha}\Big\}_L \in [0, L). \end{equation} In addition, we will use the following notations \begin{itemize} \item[-] $I^\varepsilon= \hbox{Int}\Big\{\displaystyle \bigcup_{k=0}^{N_\varepsilon} [\varepsilon^\alpha kL, \varepsilon^\alpha L(k+1)] \Big\}$ where $N_\varepsilon$ is the largest integer such that $\varepsilon^\alpha L(N_\varepsilon+1)\leqslant 1$.
\item[-] $\Lambda^\varepsilon= I \setminus I^\varepsilon$, recall that $I=(0,1)$. Equivalently, $\Lambda^\varepsilon = [\varepsilon^\alpha L(N_\varepsilon +1), 1)$. \end{itemize} Observe that, by construction, $\Lambda^\varepsilon$ converges to the empty set as $\varepsilon$ goes to zero. Moreover, the set $I^\varepsilon$ allows us to define $R_0^\varepsilon$, the set which contains all the cells totally included in $R^\varepsilon$, $$R_0^\varepsilon = \Big\{ (x, y) \in \R^2 \; | \; x \in I^\varepsilon, \; \varepsilon b < y < \varepsilon \, g(x/\varepsilon^\alpha) \Big\},$$ while $\Lambda^\varepsilon$ gives us the subset of $R^\varepsilon$ containing the corresponding part of the unique cell which is not totally included in $R^\varepsilon$, that is, \begin{equation}\label{cellno} R_1^\varepsilon = \Big\{ (x, y) \in \R^2 \; | \; x \in \Lambda^\varepsilon, \; \varepsilon b < y < \varepsilon \, g(x/\varepsilon^\alpha) \Big\}. \end{equation} \begin{figure} \caption{Sets $R_0^\varepsilon$ and $R_1^\varepsilon$ } \end{figure} \subsection{The unfolding operator}\label{definition unfol op} In this subsection we first define the unfolding operator adapted to thin domains with an oscillatory boundary and then establish some of its main properties. \begin{definition}\label{unfold def} Let $\varphi$ be a Lebesgue-measurable function in $R^\varepsilon$. The unfolding operator $\mathcal{T}_\varepsilon$, acting on $\varphi$, is defined as the following function in $(0,1) \times Y^*$: \begin{eqnarray*} \mathcal{T}_\varepsilon(\varphi) (x, y_1, y_2) = \left\{ \begin{array}{ll} \varphi \Big( \varepsilon^\alpha \Big[\frac{x}{\varepsilon^\alpha}\Big]_{L}L + \varepsilon^\alpha y_1, \varepsilon y_2\Big) \quad \hbox{for} \quad (x, y_1, y_2) \in I^\varepsilon \times Y^*, \\ 0 \hspace{4.3cm} \hbox{for} \quad (x, y_1, y_2) \in \Lambda^\varepsilon \times Y^*. \end{array} \right.
\end{eqnarray*} \end{definition} Note that the operator $\mathcal{T}_\varepsilon$ transforms Lebesgue-measurable functions defined on $R^\varepsilon$ into Lebesgue-measurable functions defined on the fixed set $(0,1) \times Y^*$ which are piecewise constant with respect to $x$. Therefore, if $\varphi$ is very regular, $\mathcal{T}_\varepsilon(\varphi)$ inherits its regularity as a function of $(y_1, y_2)$, but not with respect to the variable $x$. \begin{figure} \caption{$(0,1)\times Y^*$ associated to the cell depicted in Figure \ref{thin1raro}} \end{figure} As in classical periodic homogenization, the unfolding operator reflects two scales: the ``macroscopic'' scale $x$ gives the position in the interval $(0,1)$, and the ``microscopic'' scale $(y_1, y_2)$ gives the position in the cell $Y^*$. Notice that the oscillations of the boundary are captured in the variables $(y_1, y_2)$, which belong to the basic cell $Y^*$. The following result collects several basic and somehow immediate properties of the unfolding operator; hence, some of the proofs are omitted. \begin{proposition}\label{properties} The unfolding operator $\mathcal{T}_\varepsilon$ has the following properties: \par\noindent $i)$ $ \mathcal{T}_\varepsilon$ is a linear operator. \par\noindent $ii)$ $\mathcal{T}_\varepsilon(\varphi \psi)=\mathcal{T}_\varepsilon(\varphi) \mathcal{T}_\varepsilon(\psi)$ $\, \forall \, \varphi, \psi$ Lebesgue-measurable functions in $R^\varepsilon$. \par\noindent $iii)$ Every function $\varphi \in L^p(R^\varepsilon)$, with $1\leq p \leq \infty$, satisfies $$\mathcal{T}_\varepsilon(\varphi) \Big(x, \Big\{\frac{x}{\varepsilon^\alpha}\Big\}_L, \frac{y}{\varepsilon}\Big)= \varphi(x,y), \quad \forall (x,y) \in R_0^\varepsilon.$$ \par\noindent $iv)$ Let $\varphi$ be a measurable function on $Y^*$ extended by $L$-periodicity in the first variable.
Then, $\varphi^\varepsilon(x,y) = \varphi (\frac{x}{\varepsilon^\alpha},\frac{y}{\varepsilon})$ is a measurable function on $R^\varepsilon$ such that \begin{equation}\label{eqosci} \mathcal{T}_\varepsilon(\varphi^\varepsilon)(x,y_1, y_2) = \varphi(y_1, y_2), \quad \forall (x, y_1, y_2) \in I^\varepsilon \times Y^*. \end{equation} Furthermore, if $\varphi \in L^p(Y^*)$, with $1\leq p \leq \infty$ then $\varphi^\varepsilon \in L^p(R^\varepsilon)$. \par\noindent $v)$ Let $\varphi \in L^1(R^\varepsilon).$ The following integral equality holds \begin{align*} & \frac{1}{L}\int_{ (0,1) \times Y^*} \mathcal{T_\varepsilon(\varphi)} (x, y_1, y_2) dx dy_1 dy_2 = \frac{1}{\varepsilon}\int_{R_0^\varepsilon} \varphi (x, y) dx dy\\ &= \frac{1}{\varepsilon}\int_{R^\varepsilon} \varphi (x, y) dx dy - \frac{1}{\varepsilon}\int_{R_1^\varepsilon} \varphi (x, y) dx dy. \end{align*} \par\noindent $vi)$ For every $\varphi \in L^p(R^\varepsilon)$ we have $\mathcal{T_\varepsilon(\varphi)} \in L^p\big( (0,1)\times Y^*\big)$, with $1\leq p \leq \infty$. 
In addition, the following relationship exists between their norms: $$\|\mathcal{T_\varepsilon(\varphi)}\|_{L^p\big( (0,1)\times Y^*\big)} = \Big(\frac{L}{\varepsilon}\Big)^{\frac{1}{p}} \, \|\varphi\|_{L^p(R_0^\varepsilon)} \leq \Big(\frac{L}{\varepsilon}\Big)^{\frac{1}{p}} \, \|\varphi\|_{L^p(R^\varepsilon)}.$$ In the special case $p=\infty$, $\|\mathcal{T_\varepsilon(\varphi)}\|_{L^\infty \big( (0,1)\times Y^*\big)}= \|\varphi\|_{L^\infty(R^\varepsilon_0)} \leq \|\varphi\|_{L^\infty(R^\varepsilon)}.$ \par\noindent $vii)$ For every $\varphi \in W^{1,p}(R^\varepsilon)$, $1\leq p \leq \infty$, one has $$ \frac{\partial}{\partial y_1}\mathcal{T_\varepsilon(\varphi)} = \varepsilon^\alpha \mathcal{T_\varepsilon}\Big(\frac{\partial \varphi}{\partial x}\Big) \quad \hbox{ and }\quad \frac{\partial}{\partial y_2}\mathcal{T_\varepsilon(\varphi)} = \varepsilon \mathcal{T_\varepsilon}\Big(\frac{\partial \varphi}{\partial y}\Big).$$ \par\noindent $viii)$ If $\varphi \in W^{1,p}(R^\varepsilon)$, then $\mathcal{T_\varepsilon(\varphi)}$ belongs to $L^p\big((0,1); W^{1,p}(Y^*)\big)$, $1\leq p \leq \infty$. Moreover, the following relationship exists between their norms, in case $1\leq p < \infty$ \begin{align*} \Big\| \frac{\partial}{\partial y_1}\mathcal{T_\varepsilon(\varphi)}\Big\|_{L^p\big( (0,1)\times Y^*\big)} &=\varepsilon^\alpha \Big(\frac{L}{\varepsilon}\Big)^{\frac{1}{p}} \, \Big\| \frac{\partial \varphi}{\partial x}\Big\|_{L^p(R^\varepsilon_0)} \leq \varepsilon^\alpha \Big(\frac{L}{\varepsilon}\Big)^{\frac{1}{p}} \, \Big\| \frac{\partial \varphi}{\partial x}\Big\|_{L^p(R^\varepsilon)},\\ \Big\| \frac{\partial}{\partial y_2}\mathcal{T_\varepsilon(\varphi)}\Big\|_{L^p\big( (0,1)\times Y^*\big)} &= \varepsilon \Big(\frac{L}{\varepsilon}\Big)^{\frac{1}{p}} \, \Big\| \frac{\partial \varphi}{\partial y}\Big\|_{L^p(R^\varepsilon_0)} \leq \varepsilon \Big(\frac{L}{\varepsilon}\Big)^{\frac{1}{p}} \, \Big\| \frac{\partial \varphi}{\partial y}\Big\|_{L^p(R^\varepsilon)}. 
\end{align*} For $p=\infty$ one has \begin{align*} &\Big\| \frac{\partial}{\partial y_1}\mathcal{T_\varepsilon(\varphi)}\Big\|_{L^\infty \big( (0,1)\times Y^*\big)}= \varepsilon^\alpha \Big\| \frac{\partial \varphi}{\partial x}\Big\|_{L^\infty(R^\varepsilon_0)} \leq \varepsilon^\alpha \Big\| \frac{\partial \varphi}{\partial x}\Big\|_{L^\infty(R^\varepsilon)} ,\\ &\Big\| \frac{\partial}{\partial y_2}\mathcal{T_\varepsilon(\varphi)}\Big\|_{L^\infty\big( (0,1)\times Y^*\big)} = \varepsilon \Big\| \frac{\partial \varphi}{\partial y}\Big\|_{L^\infty(R^\varepsilon_0)} \leq \varepsilon \Big\| \frac{\partial \varphi}{\partial y}\Big\|_{L^\infty(R^\varepsilon)} . \end{align*} \end{proposition} \begin{proof} $i)$, $ii)$ and $iii)$ are a simple consequence of the definition of the unfolding operator. \par\noindent $iv)$ It is easy to see that $\varphi^\varepsilon$ is well defined, see property v) of Proposition 3.4 in \cite{ArrVil2016}. In addition, using the periodic structure of $R^\varepsilon$ and an obvious change of variables, if $\varphi \in L^p(Y^*)$ we get $\varphi^\varepsilon \in L^p(R^\varepsilon)$ for $1\leq p<\infty$: \begin{align*} &\int_{R^\varepsilon} |\varphi^\varepsilon|^p \, dxdy \leq \sum_{k=0}^{N_\varepsilon+1}\int_{\varepsilon^\alpha k L}^{\varepsilon^\alpha L( k+1)}\int_{\varepsilon b}^{\varepsilon g(x/\varepsilon^\alpha)} |\varphi^\varepsilon|^p \, dydx = \sum_{k=0}^{N_\varepsilon+1} \varepsilon^{\alpha+1}\int_{Y^*} |\varphi(y_1, y_2)|^p dy_1dy_2\\ &= C \varepsilon \int_{Y^*} |\varphi(y_1, y_2)|^p dy_1dy_2, \end{align*} where the constant $C$ does not depend on $\varepsilon$. The result is obvious for $p=\infty$. Finally, in view of Definition \ref{unfold def} we get \eqref{eqosci}. \par\noindent $v)$ Suppose that $\varphi \in L^1(R^\varepsilon)$.
Then, using that $\mathcal{T}_\varepsilon(\varphi)$ is piecewise constant in $x$, we have \begin{align*} & \frac{1}{L}\int_{ (0,1) \times Y^*} \mathcal{T_\varepsilon(\varphi)} (x, y_1, y_2) \, dx dy_1 dy_2 = \frac{1}{L}\int_{ I^\varepsilon \times Y^*} \varphi \Big( \varepsilon^\alpha \Big[\frac{x}{\varepsilon^\alpha}\Big]_{L}L + \varepsilon^\alpha y_1, \varepsilon y_2\Big)\, dx dy_1 dy_2\\ &= \frac{1}{L}\sum_{k=0}^{N_\varepsilon}\int_{k \varepsilon^\alpha L}^{(k+1)\varepsilon^\alpha L}\int_{Y^*} \varphi \Big( \varepsilon^\alpha kL + \varepsilon^\alpha y_1, \varepsilon y_2\Big)\, dy_1 dy_2 dx\\ &=\varepsilon^\alpha \sum_{k=0}^{N_\varepsilon} \int_{Y^*} \varphi \Big( \varepsilon^\alpha kL + \varepsilon^\alpha y_1, \varepsilon y_2\Big)\, dy_1 dy_2= \frac{1}{\varepsilon}\sum_{k=0}^{N_\varepsilon} \int_{k \varepsilon^\alpha L}^{(k+1)\varepsilon^\alpha L}\int_{\varepsilon b}^{\varepsilon g(x/\varepsilon^\alpha)} \varphi(x,y)\, dydx\\ &= \frac{1}{\varepsilon}\int_{R_0^\varepsilon} \varphi (x, y) dx dy. \end{align*} Then, the desired equality is straightforward. \par\noindent $vi)$ For $p \in [1, \infty)$ the result is a consequence of properties ii) and v). For $p=\infty$ it is straightforward. \par\noindent $vii)$ It is obvious from the definition of the unfolding operator. \par\noindent $viii)$ This result is an immediate consequence of properties vi) and vii). \end{proof} \begin{remark}\label{rescaled norm} Notice that, due to the order of the height of the thin set, the factor $\frac{1}{\varepsilon}$ appears in properties v) and vi). Then, it makes sense to consider the following rescaled Lebesgue measure in the thin domains $$\rho_{\varepsilon}(\mathcal{O}) = \frac{1}{\varepsilon}|\mathcal{O}|, \; \forall\, \mathcal{O} \subset R^\varepsilon,$$ which is widely considered in works involving thin domains, see e.g. \cite{HaRau,Rau,PriRi,PerSil13}.
As a matter of fact, from now on we use the following rescaled norms in the thin open sets \begin{align*} |||\varphi|||_{L^p(R^\varepsilon)} &= \varepsilon^{-1/p}||\varphi||_{L^p(R^\varepsilon)}, \quad \forall \varphi \in L^p(R^\varepsilon), \quad 1\leq p < \infty,\\ |||\varphi|||_{W^{1,p}( R^\varepsilon)} &= \varepsilon^{-1/p}||\varphi||_{W^{1,p}( R^\varepsilon)}, \quad \forall \varphi \in W^{1,p}(R^\varepsilon), \quad 1\leq p < \infty. \end{align*} For completeness we denote $|||\varphi|||_{L^\infty(R^\varepsilon)} =||\varphi||_{L^\infty(R^\varepsilon)}.$ For instance, from property vi) above we have $$\|\mathcal{T_\varepsilon(\varphi)}\|_{L^p\big( (0,1)\times Y^*\big)}\leq {L}^{1/p} \, |||\varphi|||_{L^p(R^\varepsilon)}, \; 1\leq p < \infty. $$ \end{remark} Property v) in Proposition \ref{properties} will be essential to pass to the limit when dealing with solutions of differential equations, since it allows us to transform any integral over the thin set, which depends on the parameter $\varepsilon$, into an integral over the fixed set $(0,1) \times Y^*.$ Notice that, in view of this property, we may say that the unfolding operator ``almost preserves'' the integral of functions, since the ``integration defect'' arises only from the unique cell which is not completely included in $R^\varepsilon$ and is controlled by the integral over $R^\varepsilon_1$. This motivates the following property, called the unfolding criterion for integrals (u.c.i.). \begin{definition}\label{u.c.i} A sequence $\{\varphi^\varepsilon\}$ in $L^1(R^\varepsilon)$ satisfies the unfolding criterion for integrals (u.c.i.)
if $$\frac{1}{\varepsilon}\int_{R^\varepsilon_1} |\varphi^\varepsilon| dxdy \buildrel \epsilon\to 0\over\longrightarrow 0.$$ \end{definition} As an immediate consequence, if a sequence $\{\varphi^\varepsilon\}$ satisfies the u.c.i., we get from property v) of Proposition \ref{properties} $$\frac{1}{L}\int_{ (0,1) \times Y^*} \mathcal{T_\varepsilon(\varphi^\varepsilon)} (x, y_1, y_2) dx dy_1 dy_2 - \frac{1}{\varepsilon}\int_{R^\varepsilon} \varphi^\varepsilon (x, y) dx dy\buildrel \epsilon\to 0\over\longrightarrow 0.$$ In the next three propositions we obtain some criteria which guarantee that certain sequences satisfy the u.c.i. \begin{proposition}\label{uci bounded} Let $\{\varphi^\varepsilon\}$ be a sequence in $L^p(R^\varepsilon)$ for $p\in (1, \infty]$ with $|||\varphi^\varepsilon|||_{L^p(R^\varepsilon)}$ uniformly bounded. Then, $\{\varphi^\varepsilon\}$ satisfies the unfolding criterion for integrals. Furthermore, let $\psi^\varepsilon \in L^q(R^\varepsilon)$ with $|||\psi^\varepsilon|||_{L^q(R^\varepsilon)}$ uniformly bounded, where $\displaystyle{\frac{1}{p} +\frac{1}{q} =\frac{1}{r}}$ for some $r>1$. Then, the product sequence $\{\varphi^\varepsilon\psi^\varepsilon\}$ satisfies the unfolding criterion for integrals. \end{proposition} \begin{proof} First assume $1<p<\infty$. Taking into account that $\rho_\varepsilon (R_1^\varepsilon) \buildrel \epsilon\to 0\over\longrightarrow 0$, where $\rho_\varepsilon$ denotes the rescaled Lebesgue measure ($\rho_\varepsilon(\cdot) = \frac{1}{\varepsilon}|\cdot|$, see Remark \ref{rescaled norm}), and since there exists a constant $C$ independent of $\varepsilon$ such that $|||\varphi^\varepsilon|||_{L^p(R^\varepsilon)}<C$, we can ensure by H\"older's inequality that $\varphi^\varepsilon$ satisfies the u.c.i.:
\begin{align*} \frac{1}{\varepsilon}\int_{R^\varepsilon_1}& |\varphi^\varepsilon| dxdy\leq \frac{1}{\varepsilon} ||\varphi^\varepsilon||_{L^p(R^\varepsilon)} |R_1^\varepsilon|^{\frac{1}{q}}= \varepsilon^{-1/p} ||\varphi^\varepsilon||_{L^p(R^\varepsilon)} \varepsilon^{-1/q}|R_1^\varepsilon|^{\frac{1}{q}}=|||\varphi^\varepsilon|||_{L^p(R^\varepsilon)} \rho_\varepsilon (R_1^\varepsilon)^{\frac{1}{q}} \buildrel \epsilon\to 0\over\longrightarrow 0, \end{align*} where $q$ is such that $\displaystyle{\frac{1}{p} +\frac{1}{q}=1 }.$ For $p=\infty$ we have $ \frac{1}{\varepsilon}\int_{R^\varepsilon_1} |\varphi^\varepsilon| dxdy\leq ||\varphi^\varepsilon||_{L^\infty(R^\varepsilon)} \rho_\varepsilon (R_1^\varepsilon)\buildrel \epsilon\to 0\over\longrightarrow 0. $ For the second statement, since $\varphi^\varepsilon \psi^\varepsilon \in L^r(R^\varepsilon)$ with $|||\varphi^\varepsilon \psi^\varepsilon|||_{L^r(R^\varepsilon)}$ uniformly bounded and $r>1$, the first part of the proof yields $\displaystyle{\frac{1}{\varepsilon}\int_{R^\varepsilon_1} |\varphi^\varepsilon \psi^\varepsilon| dxdy \buildrel \epsilon\to 0\over\longrightarrow 0.}$ \end{proof} \begin{proposition}\label{uci 1vartest} Let $\varphi^\varepsilon \in L^p(R^\varepsilon)$ with $|||\varphi^\varepsilon|||_{L^p(R^\varepsilon)}$ uniformly bounded, $p\in (1, \infty]$, and $\phi \in L^q(0,1)$, $\displaystyle{\frac{1}{p} +\frac{1}{q}=1 }$. Then, \begin{align*} \frac{1}{\varepsilon}\int_{R^\varepsilon_1} |\varphi^\varepsilon \phi| dxdy \buildrel \epsilon\to 0\over\longrightarrow 0. \end{align*} \end{proposition} \begin{proof} Observe that $\chi_{\Lambda^\varepsilon}(x) \buildrel \epsilon\to 0\over\longrightarrow 0$ for all $x \in \R$.
Consequently, taking into account that $\phi$ depends only on the variable $x$, the definition of $R^\varepsilon_1$ and Lebesgue's dominated convergence theorem, we have \begin{align*} \frac{1}{\varepsilon}\int_{R^\varepsilon_1} &|\phi|^q dxdy = \frac{1}{\varepsilon}\int_{\Lambda^\varepsilon} \int_{\varepsilon b}^{\varepsilon g(x/\varepsilon^\alpha)} |\phi|^q dydx \leq (g_1 - b) \int_{\R} |\phi|^q \chi_{\Lambda^\varepsilon} dx \buildrel \epsilon\to 0\over\longrightarrow 0. \end{align*} Hence, by H\"older's inequality we get the result. \end{proof} \begin{proposition}\label{uci 3test} Let $\{\varphi^\varepsilon\}$ be a uniformly bounded sequence in $L^p(R^\varepsilon)$ for $1<p<\infty$, $|||\varphi^\varepsilon|||_{L^p(R^\varepsilon)} \leq C$, and let $\{\psi^\varepsilon\}$ be the sequence in $L^q(R^\varepsilon)$, $\displaystyle{\frac{1}{p} +\frac{1}{q}=1 }$, defined by $$\psi^\varepsilon(x,y) = \psi^{\#} \Big(\frac{x}{\varepsilon^\alpha},\frac{y}{\varepsilon}\Big),$$ where $\psi \in L^q(Y^*)$. Then, the product sequence $\{\varphi^\varepsilon\psi^\varepsilon\}$ satisfies the unfolding criterion for integrals. \end{proposition} \begin{proof} From the definition of $R^\varepsilon_1$ and performing the same computations as in iv) of Proposition \ref{properties} we obtain \begin{align*} \frac{1}{\varepsilon}\int_{R^\varepsilon_1} |\psi^\varepsilon|^q \, dxdy &\leq \frac{1}{\varepsilon}\int_{\varepsilon^\alpha L (N_\varepsilon+1) }^{\varepsilon^\alpha L( N_\varepsilon+2)}\int_{\varepsilon b}^{\varepsilon g(x/\varepsilon^\alpha)} \Big|\psi^{\#}\big(\frac{x}{\varepsilon^\alpha},\frac{y}{\varepsilon}\big)\Big|^q \, dydx= \varepsilon^{\alpha}\int_{Y^*} |\psi(y_1, y_2)|^q dy_1dy_2. \end{align*} Then, the sequence $\{\varphi^\varepsilon \psi^\varepsilon\}$ satisfies the u.c.i. by H\"older's inequality. \end{proof} Now, we are going to analyze some convergence properties of the unfolding operator as $\varepsilon$ goes to zero.
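Before turning to these convergences, the integer and fractional part decomposition \eqref{decomposition} and the pointwise action of the unfolding operator (property iii) of Proposition \ref{properties}) can be illustrated numerically. The following sketch is only illustrative: the test function and the values of $\varepsilon$, $\alpha$ and $L$ are arbitrary choices.

```python
import math

L, eps, alpha = 1.0, 0.1, 1.0  # arbitrary illustrative values
ea = eps ** alpha

def decompose(x, L):
    # x = [x]_L * L + {x}_L with [x]_L an integer and {x}_L in [0, L)
    k = math.floor(x / L)
    return k, x - k * L

def unfold(phi, x, y1, y2):
    # pointwise value of (T_eps phi)(x, y1, y2), assuming x lies in I^eps
    k, _ = decompose(x / ea, L)
    return phi(ea * k * L + ea * y1, eps * y2)

# the decomposition x = eps^a [x/eps^a]_L L + eps^a {x/eps^a}_L
x = 0.37
k, f = decompose(x / ea, L)
assert abs(ea * k * L + ea * f - x) < 1e-12

# property iii): unfolding evaluated at ({x/eps^a}_L, y/eps) recovers phi(x, y)
phi = lambda x, y: x ** 2 + 3.0 * y
y = 0.004
assert abs(unfold(phi, x, f, y / eps) - phi(x, y)) < 1e-12
```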
\begin{proposition}\label{testconvergence} Let $ \varphi \in L^p(0, 1)$, $1\leq p < \infty$. Then, considering $\varphi$ as a function defined in $R^\varepsilon$, we have $$ \mathcal{T_\varepsilon(\varphi)} \buildrel \epsilon\to 0\over\longrightarrow \varphi \quad \hbox{s}-L^p\big( (0,1)\times Y^*\big).$$ \end{proposition} \begin{proof} First of all, note that for any $x \in \R$ and $y_1 \in [0, L]$ we have \begin{align*} 0\leq x- \varepsilon^\alpha \Big[\frac{x}{\varepsilon^\alpha}\Big]_{L}L \leq \varepsilon^\alpha L, \quad \hbox{ and } \quad \varepsilon^\alpha \Big[\frac{x}{\varepsilon^\alpha}\Big]_{L}L + \varepsilon^\alpha y_1 \buildrel \epsilon\to 0\over\longrightarrow x. \end{align*} Then, the result is obvious for any $\varphi \in \mathcal{D}(0,1)$. By a density argument, see Proposition 3.11 in \cite{ArrVil2016}, the result holds for any $ \varphi \in L^p(0, 1)$, $1\leq p < \infty$. \end{proof} To conclude this subsection, we show two important results assuming that the thin set $R^\varepsilon$ is connected, that is, $b$ is strictly less than $g_0$, see hypothesis $\textbf{(H{\scriptsize g})}$. For simplicity, we assume that $b=0$. In fact, the results that we prove below, Lemma \ref{poinc} and Proposition \ref{convergence prop}, hold true for any $b$ such that $0<b<g_0$ with minor modifications. Then, we consider \begin{equation}\label{thinbaru} R^\varepsilon = \Big\{ (x, y) \in \R^2 \; | \; x \in (0,1), \; 0 < y < \varepsilon \, g(x/\varepsilon^\alpha) \Big\}, \end{equation} where $g_0=\min_{x \in \R}\{g(x)\}>0$. Then, the open sets $R^\varepsilon$ and $Y^*$ are connected. First we prove in Lemma \ref{poinc} that if the reference cell $Y^*$ is connected, that is $0<g_0$, then the Poincar\'e-Wirtinger inequality holds in $Y^*$.
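In the one-dimensional case the inequality is classical: for $p=2$ on an interval of length $\ell$ the best constant is $\ell/\pi$, attained by $\cos(\pi y/\ell)$. As a quick numerical illustration (the interval length and the quadrature resolution below are arbitrary choices for this sketch):

```python
import math

g0 = 2.0          # interval length (arbitrary choice for this sketch)
n = 20000         # midpoint-quadrature resolution
h = g0 / n
ys = [(i + 0.5) * h for i in range(n)]

# zero-mean extremizer of the p = 2 Poincare-Wirtinger inequality on (0, g0)
phi  = [math.cos(math.pi * y / g0) for y in ys]
dphi = [-math.pi / g0 * math.sin(math.pi * y / g0) for y in ys]

def l2(v):
    # midpoint-rule approximation of the L^2(0, g0) norm
    return math.sqrt(sum(vi * vi for vi in v) * h)

mean = sum(phi) * h / g0
lhs = l2([p - mean for p in phi])     # || phi - mean(phi) ||_{L^2}
rhs = (g0 / math.pi) * l2(dphi)       # optimal constant times || phi' ||_{L^2}
assert abs(lhs - rhs) < 1e-6 * rhs    # equality for the extremizer
```

For any other zero-mean test function one would observe a strict inequality with the same constant $g_0/\pi$.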
Recall the definition of this inequality. \begin{definition}\label{poincare} A bounded open set $U$ satisfies the Poincar\'e-Wirtinger inequality for the exponent $1\leq p< \infty$ if there exists a constant $C_p$ such that $$ \quad \Big\| \varphi - \frac{1}{|U|}\int_U\varphi\,dx dy\Big\|_{L^p(U)} \leq C_p \| \nabla\varphi \|_{L^p(U)}, \quad \forall \varphi \in W^{1,p}(U).$$ \end{definition} \begin{lemma}\label{poinc} Assume $Y^*$ is connected, that is, $0<g_0$. Then, the Poincar\'e-Wirtinger inequality for the exponents $1< p< \infty$ holds in $Y^*$. \end{lemma} \begin{proof} Assume, by contradiction, that the statement is not true. If the Poincar\'e-Wirtinger inequality does not hold in $Y^*$, there exists a sequence $\{u_n\} \subset W^{1,p}(Y^*)$ with $||u_n - \mathcal{M}_{Y^*}(u_n)||_{L^p(Y^*)}\ne 0$ and such that \begin{equation}\label{ucon} \frac{\displaystyle \int_{Y^*}|\nabla u_n|^p\, dy_2 dy_1}{\displaystyle \int_{Y^*} |u_n - \mathcal{M}_{Y^*}(u_n)|^p\,dy_2 dy_1}\to 0, \quad \hbox{ as } n\to\infty. \end{equation} Then, we define $$w_n = \frac{u_n - \mathcal{M}_{Y^*}(u_n)}{||u_n - \mathcal{M}_{Y^*}(u_n)||_{L^p(Y^*)}}.$$ Notice that $||w_n||_{L^p(Y^*)}=1$, $\mathcal{M}_{Y^*}(w_n)=0$ and, taking into account \eqref{ucon}, we have \begin{equation}\label{ucon1} \int_{Y^*}|\nabla w_n|^p\, dy_1 dy_2 \to 0, \quad \hbox{ as } n\to\infty.
\end{equation} Now, we define the domain $Y^*_0$ given by $$Y^*_0=\Big\{(y_1, y_2)\in \R^2: 0<y_1<L, \; 0<y_2<\frac{g_0}{2} \Big\} \subset Y^*.$$ Then, taking into account the properties of $w_n$, we have $$||w_n||_{L^p(Y^*_0)}\leq 1\quad \hbox{ and }\; \int_{Y^*_0}|\nabla w_n|^p\, dy_1 dy_2 \to 0, \quad \hbox{ as } n\to\infty.$$ Thus, by weak compactness there exists $w_0 \in W^{1,p}(Y^*_0)$ such that, up to subsequences, $$w_n \rightharpoonup w_0 \quad \hbox{w}-W^{1,p}(Y^*_0), \quad w_n \to w_0 \quad \hbox{s}-L^p(Y^*_0).$$ Moreover, since $ ||\nabla w_n||_{L^p(Y^*_0)}\to 0$, we deduce that $w_0$ is constant and that $$w_n \to w_0 \quad \hbox{s}-W^{1,p}(Y^*_0).$$ Consider now the sequence $v_n=w_n-w_0\in W^{1,p}(Y^*)$. Note that since $\nabla v_n = \nabla w_n$ we have \begin{equation}\label{tl} ||\nabla v_n||_{L^p(Y^*)}\to 0. \end{equation} Now we prove that $$|| v_n||_{L^p(Y^*)}\to 0.$$ Taking into account the fact that $$v_n(y_1, y_2) - v_n(y_1, 0) = \int_0^{y_2} \frac{\partial v_n}{\partial y_2}(y_1, s)\; ds,$$ we have $$|| v_n||_{L^p(Y^*)}\leq C || v_n(\cdot, 0)||_{L^p(0, L)} + C\Big|\Big|\frac{\partial v_n}{\partial y_2}\Big|\Big|_{L^p(Y^*)}.$$ Therefore, using that the trace $v_n(\cdot, 0)$ converges strongly to $0$ in $L^p(0,L)$, since $v_n$ converges to $0$ strongly in $W^{1,p}(Y^*_0)$, together with the convergence \eqref{tl}, we get $$|| w_n-w_0||_{L^p(Y^*)}=|| v_n||_{L^p(Y^*)}\to 0.$$ This convergence and \eqref{ucon1} lead to $$w_n \to w_0 \quad \hbox{s}-W^{1,p}(Y^*).$$ Then, since by definition ${\displaystyle \int_{Y^*}w_n\, dy_1 dy_2 = 0}$, we get $w_0=0$ and therefore $\|w_n\|_{L^p(Y^*)}\to 0$.
This is in contradiction with the definition of $w_n$, for which $\|w_n\|_{L^p(Y^*)}=1.$ \end{proof} \begin{remark}\label{1poinc} Observe that, taking $y_1$ as a parameter, from the one-dimensional Poincar\'e-Wirtinger inequality we get $$ \Big\| \varphi - \frac{1}{g_0}\int_0^{g_0} \varphi\, dy_2\Big\|^p_{L^p(0, g(y_1))}\leq C_p \int_{0}^{g(y_1)}\Big|\frac{\partial\varphi}{\partial y_2}\Big|^p\, dy_2,\; \forall \varphi \in W^{1,p}(Y^*).$$ Since $0<g_0\leq g(y_1) \leq g_1 $ for all $y_1\in \R $, we can ensure that $C_p$ is uniformly bounded with respect to $y_1$. \end{remark} Finally, we obtain fundamental convergence results which do not depend on the value of the parameter $\alpha$. To do that, we introduce a suitable decomposition of the functions $\varphi \in W^{1,p}( R^\varepsilon)$, in which the geometry of the thin domain plays a crucial role. Actually, we write $\varphi(x, y) = V(x) + \varphi_r(x,y)$, where $V$ is defined by \begin{equation}\label{Vfunc} V(x) \equiv \frac{1}{\varepsilon g_0} \int_{0}^{\varepsilon g_0} \varphi(x,s)\; ds, \quad \hbox{ for a.e. } x \in (0,1), \end{equation} and $\varphi_r(x, y) \equiv \varphi(x,y) - V(x)$. \begin{proposition}\label{convergence prop} Let $\varphi^\varepsilon$ be in $W^{1,p}(R^\varepsilon)$ with $|||\varphi^\varepsilon|||_{W^{1,p}( R^\varepsilon)}$ uniformly bounded for some $1<p<\infty$. Assume that $R^\varepsilon$ is given by \eqref{thinbaru} and $V^\varepsilon$ is defined as in \eqref{Vfunc} for every $\varphi^\varepsilon$. Then, there exists a function $\varphi$ in $W^{1,p}(0,1)$ such that, up to subsequences \begin{align*} &V^\varepsilon \weto \varphi \; \quad \hbox{w}- W^{1,p}(0,1), \hbox{ s}-L^p(0,1),\\ &|||\varphi^\varepsilon -\varphi|||_{L^p(R^\varepsilon)}\buildrel \epsilon\to 0\over\longrightarrow 0,\\ &\mathcal{T_\varepsilon(\varphi^\varepsilon)}\buildrel \epsilon\to 0\over\longrightarrow \varphi \; \quad \hbox{s}-L^p\big( (0,1); W^{1,p}(Y^*)\big).
\end{align*} Furthermore, there exists $\bar{\varphi} \in L^p((0,1) \times Y^*)$ with $\frac{\partial \bar{\varphi}}{\partial y_2} \in L^p((0,1)\times Y^*)$ such that, up to subsequences \begin{equation}\label{deryto} \begin{split} & \frac{1}{\varepsilon}\mathcal{T_\varepsilon}(\varphi^\varepsilon_r) \weto \bar \varphi \quad \hbox{w}-L^p((0,1)\times Y^*), \quad \mathcal{T_\varepsilon}\big(\frac{\partial \varphi^\varepsilon}{\partial y}\big) \weto \frac{\partial \bar \varphi}{\partial y_2} \quad \hbox{w}-L^p((0,1)\times Y^*), \end{split} \end{equation} where $\varphi^\varepsilon_r \equiv \varphi^\varepsilon - V^\varepsilon$. \end{proposition} \begin{proof} First, we obtain estimates for the function $V^\varepsilon$. Note that since $\varphi^\varepsilon \in W^{1,p}(R^\varepsilon)$ we have $V^\varepsilon \in W^{1,p}(0,1)$. Using H\"older's inequality we get \begin{align*} & \| V^\varepsilon \|_{L^p(0,1)} \leq \Big( \int_0^1 \frac{1}{\varepsilon g_0 } \int_{0}^{\varepsilon g_0} |\varphi^\varepsilon(x,s)|^p\; ds \, dx \Big)^\frac{1}{p} \leq C \varepsilon^{-\frac{1}{p}} \|\varphi^\varepsilon\|_{L^p(R^\varepsilon)} = C|||\varphi^\varepsilon|||_{L^p(R^\varepsilon)},\\ & \Big\| \frac{\partial V^\varepsilon}{\partial x} \Big\|_{L^p(0,1)} \leq \Big( \int_0^1 \frac{1}{\varepsilon g_0} \int_{0}^{\varepsilon g_0} \Big|\frac{\partial \varphi^\varepsilon}{\partial x}(x,s)\Big|^p\; ds \, dx \Big)^\frac{1}{p} \leq C \varepsilon^{-\frac{1}{p}}\Big\|\frac{\partial \varphi^\varepsilon}{\partial x}\Big\|_{L^p(R^\varepsilon)} = C\Big|\Big|\Big|\frac{\partial \varphi^\varepsilon}{\partial x}\Big|\Big|\Big|_{L^p(R^\varepsilon)}.
\end{align*} From these inequalities, and taking into account that the norm $|||\varphi^\varepsilon|||_{W^{1,p}( R^\varepsilon)}$ is uniformly bounded, we can ensure that there exists a function $\varphi \in W^{1,p}(0,1)$ such that, up to subsequences \begin{equation}\label{vconv} V^\varepsilon \weto \varphi \; \quad \hbox{w}-W^{1,p}(0,1) \quad \hbox{and } \hbox{s}-L^{p}(0,1). \end{equation} Recall that $\varphi^\varepsilon_r \equiv \varphi^\varepsilon - V^\varepsilon$. Thus, from the one-dimensional Poincar\'e-Wirtinger inequality, see Remark \ref{1poinc}, and using a simple change of variables, we easily get \begin{align*} & \int_{0}^{\varepsilon g(x/\varepsilon^\alpha)} | \varphi^\varepsilon_r |^p dy = \int_{0}^{\varepsilon g(x/\varepsilon^\alpha)} \Big| \varphi^\varepsilon(x,y) - \frac{1}{\varepsilon g_0 } \int_{0}^{ \varepsilon g_0} \varphi^\varepsilon(x,s)\; ds\Big|^p dy \leq C\varepsilon^p \int_{0}^{\varepsilon g(x/\varepsilon^\alpha)} \Big| \frac{\partial \varphi^\varepsilon}{\partial y}(x,y)\Big|^p dy. \end{align*} Then, integrating over the interval $(0,1)$ we have \begin{equation}\label{barucon} |||\varphi^\varepsilon_r |||_{L^p(R^\varepsilon)}= |||\varphi^\varepsilon - V^\varepsilon|||_{L^p(R^\varepsilon)}\leq C\varepsilon \Big|\Big|\Big|\frac{\partial \varphi^\varepsilon}{\partial y} \Big|\Big|\Big|_{L^p(R^\varepsilon)}.
\end{equation} Therefore, since $|||\varphi^\varepsilon|||_{W^{1,p}( R^\varepsilon)}$ is uniformly bounded, we obtain $$ |||\varphi^\varepsilon - V^\varepsilon|||_{L^p(R^\varepsilon)}\buildrel \epsilon\to 0\over\longrightarrow 0.$$ Moreover, using \eqref{vconv} we immediately get $$|||\varphi^\varepsilon -\varphi|||_{L^p(R^\varepsilon)}\buildrel \epsilon\to 0\over\longrightarrow 0.$$ Hence, from property vi) in Proposition \ref{properties} we have $$||\mathcal{T_\varepsilon}(\varphi^\varepsilon) -\mathcal{T_\varepsilon}(\varphi)||_{L^p((0,1)\times Y^*)}\buildrel \epsilon\to 0\over\longrightarrow 0.$$ Consequently, since $ \mathcal{T_\varepsilon}(\varphi) \buildrel \epsilon\to 0\over\longrightarrow \varphi \quad \hbox{s}-L^p\big( (0,1)\times Y^*\big)$, see Proposition \ref{testconvergence}, we obtain $$ \mathcal{T_\varepsilon(\varphi^\varepsilon)}\buildrel \epsilon\to 0\over\longrightarrow \varphi \; \hbox{ s}-L^p\big( (0,1)\times Y^*\big).$$ Furthermore, using properties vi) and viii) of Proposition \ref{properties} together with the uniform bound $|||\varphi^\varepsilon|||_{W^{1,p}( R^\varepsilon)}\leq C$, we have \begin{align*} & \Big\| \frac{\partial}{\partial y_1}\mathcal{T_\varepsilon(\varphi^\varepsilon)}\Big\|_{L^p\big( (0,1)\times Y^*\big)} \leq \varepsilon^\alpha C, \quad \Big\| \frac{\partial}{\partial y_2}\mathcal{T_\varepsilon(\varphi^\varepsilon)}\Big\|_{L^p\big( (0,1)\times Y^*\big)} \leq \varepsilon C, \end{align*} and hence we obtain $$\mathcal{T_\varepsilon(\varphi^\varepsilon)}\buildrel \epsilon\to 0\over\longrightarrow \varphi \; \quad \hbox{s}-L^p\big( (0,1); W^{1,p}(Y^*)\big).$$ Finally, we obtain convergences \eqref{deryto}. Observe that ${\displaystyle \frac{\partial \varphi^\varepsilon_r}{\partial y} = \frac{\partial \varphi^\varepsilon}{\partial y}}$.
Then, taking into account property vii) in Proposition \ref{properties}, we have $$\frac{1}{\varepsilon}\frac{\partial \mathcal{T_\varepsilon}( \varphi^\varepsilon_r)}{\partial y_2}= \mathcal{T_\varepsilon}\Big(\frac{\partial \varphi^\varepsilon_r}{\partial y}\Big)= \mathcal{T_\varepsilon}\Big(\frac{\partial \varphi^\varepsilon}{\partial y}\Big).$$ Hence, by using property vi) of Proposition \ref{properties} we get \begin{equation*} \Big|\Big|\frac{1}{\varepsilon}\frac{\partial \mathcal{T_\varepsilon}( \varphi^\varepsilon_r)}{\partial y_2}\Big|\Big|_{L^p((0,1)\times Y^*)}\leq L^{1/p}\Big|\Big|\Big|\frac{\partial \varphi^\varepsilon}{\partial y}\Big|\Big|\Big|_{L^p(R^\varepsilon)}. \end{equation*} Moreover, estimate \eqref{barucon} implies \begin{equation*} \frac{1}{\varepsilon}|||\varphi^\varepsilon_r |||_{L^p(R^\varepsilon)}\leq C\Big|\Big|\Big|\frac{\partial \varphi^\varepsilon}{\partial y}\Big|\Big|\Big|_{L^p(R^\varepsilon)}. \end{equation*} Therefore, since $|||\varphi^\varepsilon|||_{W^{1,p}( R^\varepsilon)}$ is uniformly bounded and $\frac{1}{\varepsilon}\frac{\partial \mathcal{T_\varepsilon}( \varphi^\varepsilon_r)}{\partial y_2} = \mathcal{T_\varepsilon}\big(\frac{\partial \varphi^\varepsilon}{\partial y}\big)$, we can ensure that there exists a function $\bar{\varphi} \in L^p((0,1) \times Y^*)$ with ${\displaystyle \frac{\partial \bar \varphi}{\partial y_2} \in L^p((0,1)\times Y^*)}$ such that, up to subsequences \begin{equation*} \begin{split} & \frac{1}{\varepsilon}\mathcal{T_\varepsilon}(\varphi^\varepsilon_r) \weto \bar \varphi \quad \hbox{w-}L^p((0,1)\times Y^*),\quad \mathcal{T_\varepsilon}\big(\frac{\partial \varphi^\varepsilon}{\partial y}\big)\weto \frac{\partial \bar \varphi}{\partial y_2} \quad \hbox{w-}L^p((0,1)\times Y^*), \end{split} \end{equation*} which ends the proof. \end{proof} \subsection{The averaging operator}\label{The averaging} In this subsection we introduce the averaging operator $\mathcal{U}_\varepsilon$, which is
the formal adjoint of the unfolding operator. We will use it to obtain some strong convergences and corrector results. Note that, to define and to obtain the main properties of the averaging operator $\mathcal{U}_\varepsilon$ we may consider a general thin open set as in \eqref{thingebar}. That is, we do not need to assume that $R^\varepsilon$ is connected. \begin{definition} Let $\varphi$ be a function in $L^p((0,1) \times Y^*)$, $ p \in [1, \infty]$, then we set \begin{eqnarray*} \mathcal{U_\varepsilon}(\varphi) (x, y) = \left\{ \begin{array}{ll} {\displaystyle \frac{1}{L}\int_{0}^ {L}\varphi\Big(\varepsilon^\alpha\Big[\frac{x}{\varepsilon^\alpha}\Big]_{L}L + \varepsilon^\alpha y_1, \Big\{\frac{x}{\varepsilon^\alpha}\Big\}_L, \frac{y}{\varepsilon}\Big)\; dy_1, \quad \forall (x, y) \in R_0^\varepsilon,}\\ 0 \hspace{7.5cm} \forall (x, y) \in R_1^\varepsilon. \end{array} \right. \end{eqnarray*} \end{definition} The following proposition provides the main properties of $\mathcal{U}_\varepsilon$. \begin{proposition}\label{averaging} The averaging operator satisfies the following properties. \begin{enumerate} \item[i)] $\mathcal{U}_\varepsilon$ is the formal adjoint of the unfolding operator $\mathcal{T_\varepsilon}$, in the sense that $$\frac{1}{L} \int_{(0, 1)\times Y^*} \mathcal{T_\varepsilon(\varphi)} \psi\; dx dy_1 dy_2 = \frac{1}{\varepsilon} \int_{R^\varepsilon} \varphi\, \mathcal{U_\varepsilon}(\psi) \; dxdy,$$ for $\varphi \in L^q(R^\varepsilon)$ and $\psi \in L^p((0,1) \times Y^*)$ with $1\leq p, q \leq \infty$ and $\frac{1}{p} + \frac{1}{q} = 1$. 
\item[ii)] The averaging operator $\mathcal{U}_\varepsilon$ is linear and continuous from $L^p((0,1) \times Y^*)$ to $L^p(R^\varepsilon)$, $1\leq p \leq \infty$, and for every $\varphi \in L^p((0,1) \times Y^*)$ with $p \in [1, \infty)$ one has $$ |||\mathcal{U_\varepsilon}(\varphi)|||_{L^p(R^\varepsilon)} \leqslant \Big(\frac{1}{L}\Big)^{1/p} \|\varphi\|_{L^p\big( (0,1)\times Y^*\big)}.$$ \item[iii)] $\mathcal{U_\varepsilon}$ is ``almost'' the left inverse of $\mathcal{T_\varepsilon}$ in the sense that for every $\varphi \in L^p(R^\varepsilon), 1\leq p \leq \infty,$ we have \begin{eqnarray*} \mathcal{U_\varepsilon}\big(\mathcal{T_\varepsilon}(\varphi)\big) (x, y) = \left\{ \begin{array}{ll} \varphi(x, y) \quad \hbox{for} \quad (x, y) \in R^\varepsilon_0, \\ 0 \hspace{1.3cm} \hbox{for} \quad (x, y) \in R^\varepsilon_1. \end{array} \right. \end{eqnarray*} \item[iv)] Let $ \phi \in L^p(0, 1)$, $1\leq p < \infty$. Then, $|||\mathcal{U_\varepsilon}(\phi) - \phi|||_{L^p(R^\varepsilon)} \buildrel \epsilon\to 0\over\longrightarrow 0.$ \item[v)] Let $\{\varphi^\varepsilon\}$ be a sequence in $L^p(R^\varepsilon)$, $p \in [1, \infty)$, satisfying $$\mathcal{T_\varepsilon(\varphi^\varepsilon)} \buildrel \epsilon\to 0\over\longrightarrow \varphi \quad \hbox{s}-L^p\big( (0,1)\times Y^*\big)\; \hbox{ and } \quad \frac{1}{\varepsilon}\int_{R^\varepsilon_1} |\varphi^\varepsilon|^p dxdy \buildrel \epsilon\to 0\over\longrightarrow 0.$$ Then, $|||\mathcal{U_\varepsilon}(\varphi) - \varphi^\varepsilon|||_{L^p(R^\varepsilon)} \buildrel \epsilon\to 0\over\longrightarrow 0.$ \end{enumerate} \end{proposition} \begin{proof} {\color{blue} The proof follows the same arguments as in \cite[Proposition 5.2]{ArrVil2016}. } \end{proof} \section{The resonant case, $\alpha =1$ }\label{22} In this section we apply the periodic unfolding operator introduced in the previous section in order to obtain the limit problem of \eqref{OPI0} when the parameter $\alpha$ is equal to $1$.
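\begin{remark}
Before setting up the precise framework, it is instructive to recall the formal two-scale ansatz behind the analysis of this section. This is only a heuristic sketch: the profile $u_1$, assumed $L$-periodic in its first cell variable, is posited rather than derived. Writing
$$u^\varepsilon(x,y) \approx u(x) + \varepsilon\, u_1\Big(x, \frac{x}{\varepsilon}, \frac{y}{\varepsilon}\Big),$$
the chain rule formally gives
$$\frac{\partial u^\varepsilon}{\partial x} \approx \frac{\partial u}{\partial x}(x) + \frac{\partial u_1}{\partial y_1}\Big(x, \frac{x}{\varepsilon}, \frac{y}{\varepsilon}\Big) + \varepsilon\, \frac{\partial u_1}{\partial x}\Big(x, \frac{x}{\varepsilon}, \frac{y}{\varepsilon}\Big), \qquad \frac{\partial u^\varepsilon}{\partial y} \approx \frac{\partial u_1}{\partial y_2}\Big(x, \frac{x}{\varepsilon}, \frac{y}{\varepsilon}\Big),$$
which anticipates the limits obtained rigorously below by the unfolding method: the oscillating arguments $(x/\varepsilon, y/\varepsilon)$ unfold into the cell variables $(y_1, y_2)$.
\end{remark}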
Therefore, throughout this section we consider a two-dimensional thin domain given by \begin{equation}\label{thin1} R^\varepsilon = \Big\{ (x,y) \in \R^2 \; | \; x \in (0,1), \; 0 < y < \varepsilon \, g(x/\varepsilon) \Big\}, \end{equation} where $g(\cdot)$ satisfies the hypothesis \textbf{(H{\scriptsize g})} stated at the beginning of Section \ref{21}. In addition, we require that $g_0>0$, which in particular implies that the Poincar\'e-Wirtinger inequality (see Definition \ref{poincare} and Lemma \ref{poinc}) holds for the exponent $p \in (1, \infty)$ in the representative cell $$Y^*=\{(y_1, y_2) \in \R^2 \; | \; y_1 \in (0, L), \; 0 < y_2 < g(y_1)\}.$$ First we state a compactness result which allows us to identify the limit of the image under the unfolding operator of the gradients of a uniformly bounded sequence. \begin{theorem}\label{convergence theor1} Let $\varphi^\varepsilon \in W^{1,p}(R^\varepsilon)$ for every $\varepsilon>0$, with $|||\varphi^\varepsilon|||_{W^{1,p}( R^\varepsilon)}$ uniformly bounded, for some $1<p<\infty$. Then, there exist functions $\varphi \in W^{1,p}(0,1)$ and $\varphi_1 \in L^{p}\big( (0,1); W_{\#}^{1,p}(Y^*)\big)$ such that, up to subsequences \begin{itemize} \item[i)] $\mathcal{T_\varepsilon(\varphi^\varepsilon)}\buildrel \epsilon\to 0\over\longrightarrow \varphi \quad \hbox{s}-L^p\big( (0,1); W^{1,p}(Y^*)\big),$ \item[ii)] ${\displaystyle \mathcal{T_\varepsilon}\Big(\frac{\partial\varphi^\varepsilon}{\partial x}\Big)\weto \frac{\partial\varphi}{\partial x} + \frac{\partial \varphi_1}{\partial y_1}\,\quad \hbox{w}-L^p\big( (0,1)\times Y^*\big),}$ ${\displaystyle \quad \mathcal{T_\varepsilon}\Big(\frac{\partial\varphi^\varepsilon}{\partial y}\Big)\weto \frac{\partial \varphi_1}{\partial y_2}\, \quad \hbox{w}-L^p\big( (0,1)\times Y^*\big).}$ \end{itemize} \end{theorem} \begin{proof} \par\noindent $i)$ This convergence was obtained in Proposition \ref{convergence prop} for any $\alpha$ greater than 0.
\par\noindent $ii)$ Following similar arguments as in \cite{CiorDamGri08} we introduce the function $Z_\varepsilon$ defined in $(0,1) \times Y^*$ as follows $$Z_\varepsilon(x, y_1, y_2) \equiv \frac{1}{\varepsilon}\Big(\mathcal{T_\varepsilon(\varphi^\varepsilon)}(x, y_1, y_2) - \frac{1}{|Y^*|}\int_{Y^*}\mathcal{T_\varepsilon}(\varphi^\varepsilon)(x, y_1, y_2)\; dy_1 dy_2\Big).$$ Observe that $Z_\varepsilon(x, \cdot, \cdot)$ has mean value zero in $Y^*$. Moreover, it satisfies $$\frac{\partial Z_\varepsilon}{\partial y_1} =\frac{1}{\varepsilon}\frac{\partial}{\partial y_1}\mathcal{T_\varepsilon(\varphi^\varepsilon)} = \mathcal{T_\varepsilon}\Big(\frac{\partial\varphi^\varepsilon}{\partial x}\Big), \quad \frac{\partial Z_\varepsilon}{\partial y_2} =\frac{1}{\varepsilon} \frac{\partial}{\partial y_2}\mathcal{T_\varepsilon(\varphi^\varepsilon)} =\mathcal{T_\varepsilon}\Big(\frac{\partial\varphi^\varepsilon}{\partial y}\Big),$$ where we have used property vii) of Proposition \ref{properties}. Hence, in order to get ii) we will prove that \begin{equation}\label{Zconv0} Z_\varepsilon \weto \varphi_1 + y_1^c\frac{\partial\varphi}{\partial x} \quad \hbox{w}-L^p\big( (0,1);W^{1,p}(Y^*)\big), \end{equation} where ${\displaystyle s^c \equiv s - \frac{1}{|Y^*|}\int_{Y^*} y_1\; dy_1 dy_2}$, for any $s \in \R$. Observe that by Proposition \ref{properties}, viii), and using that $|||\varphi^\varepsilon|||_{W^{1,p}( R^\varepsilon)}\leq C$ for some constant $C$ independent of $\varepsilon$, the sequences $\Big\{\frac{\partial Z_\varepsilon}{\partial y_1} \Big\}$ and $\Big\{\frac{\partial Z_\varepsilon}{\partial y_2}\Big\}$ are bounded in $L^p\big( (0,1)\times Y^*\big)$.
Then, applying the Poincar\'e-Wirtinger inequality to the function $(y_1, y_2) \to Z_\varepsilon(x, y_1, y_2) - y_1^c\frac{\partial\varphi}{\partial x}(x)$ (which is also of mean value zero in $Y^*$) we obtain $$\Big|\Big|Z_\varepsilon - y_1^c\frac{\partial\varphi}{\partial x}\Big|\Big|_{L^p( (0,1)\times Y^*)} \leqslant C.$$ Hence, we can conclude that there is a function $\varphi_1 \in L^p\big( (0,1); W^{1,p}(Y^*)\big)$ such that, up to subsequences \begin{equation*} Z_\varepsilon - y_1^c\frac{\partial\varphi}{\partial x} \weto \varphi_1 \quad \hbox{w}-L^p\big( (0,1);W^{1,p}(Y^*)\big), \end{equation*} {\color{blue} which in particular gives us ii).} {\color{blue} It remains to show the $L$-periodicity of the function $\varphi_1$. However, the proof of this fact follows along the same lines as the proof of Theorem 3.5 in \cite{CiorDamGri08} with obvious modifications.} \end{proof} We can now state the homogenization theorem. Its proof is straightforward thanks to the previous compactness result. \begin{theorem}\label{hom1} Let $u^\varepsilon$ be the solution of problem \eqref{VFP1} with $f^\varepsilon \in L^2(R^\varepsilon)$ satisfying $||| f^\varepsilon |||_{L^2(R^\varepsilon)} \leq C$ for some positive constant $C$ independent of the parameter $\varepsilon>0$.
Assume that there exists $\hat{f} \in L^2((0,1)\times Y^*)$ such that $$\displaystyle{\mathcal{T_\varepsilon}(f^\varepsilon)\weto \hat{f} \hbox{ weakly in }\; L^2\big( (0,1)\times Y^*\big)}.$$ Then, there exist $u \in H^1(0, 1)$ and $u_1\in L^2\Big((0,1); H^1_{\#}\big(Y^*\big)\Big)$ such that \begin{align} & \mathcal{T_\varepsilon}(u^\varepsilon)\buildrel \epsilon\to 0\over\longrightarrow u \quad \hbox{strongly in }\; L^2\big( (0,1); H^1(Y^*)\big),\label{11}\\ & \mathcal{T_\varepsilon}\Big(\frac{\partial u^\varepsilon}{\partial x}\Big)\weto \frac{\partial u}{\partial x}(x) +\frac{\partial u_1}{\partial y_1}(x, y_1, y_2) \quad \hbox{weakly in }\; L^2\big((0,1) \times Y^*\big),\label{dev11} \\ & \mathcal{T_\varepsilon}\Big(\frac{\partial u^\varepsilon}{\partial y}\Big)\weto \frac{\partial u_1}{\partial y_2}(x, y_1, y_2)\quad \hbox{weakly in }\; L^2\big((0,1) \times Y^*\big),\label{dev21} \end{align} and the pair $(u, u_1)$ is the unique solution in $H^1(0, 1) \times L^2\Big((0,1); H^1_{\#}\big(Y^*\big)/\R\Big) $ of the problem \begin{equation} \label{system homogenized1a} \left\{ \begin{gathered} \forall \phi \in H^1(0, 1), \forall \varphi \in L^2\Big((0,1); H^1_{\#}\big(Y^*\big)\Big) \\ \int_{(0,1)\times Y^*} \left\{ \Big( \frac{\partial u}{\partial x}(x) + \frac{\partial u_1}{\partial y_1}(x, y_1, y_2)\Big) \Big( \frac{\partial \phi}{\partial x}(x) + \frac{\partial \varphi}{\partial y_1}(x, y_1, y_2)\Big)\right\} dxdy_1dy_2 \\ + \int_{(0,1)\times Y^*} \left\{ \, \frac{\partial u_1}{\partial y_2}(x, y_1, y_2)\frac{\partial \varphi}{\partial y_2}(x, y_1, y_2) + u(x) \phi(x) \right\} dxdy_1dy_2\\ = \int_{(0,1)\times Y^*} \hat{f}(x,y_1,y_2) \phi(x) dxdy_1dy_2. \end{gathered} \right. \end{equation} Moreover, $u_1 (x, y_1, y_2) = - X(y_1, y_2) \frac{\partial u}{\partial x}(x)$ a.e.
for $(x, y_1, y_2) \in (0,1) \times Y^*$ and $u \in H^1(0,1)$ is the unique weak solution of the following Neumann problem \begin{equation} \label{GLPri} \left\{ \begin{gathered} -q_0u_{xx} + u = f_0(x), \quad x \in (0,1),\\ u'(0)=u'(1)=0, \end{gathered} \right. \end{equation} where ${\displaystyle f_0= \frac{1}{|Y^*|}\int_{Y^*}\hat{f} dy_1dy_2}$, the homogenized coefficient is defined by $$ \begin{gathered} q_0=\frac{1}{|Y^*|} \int_{Y^*} \Big\{ 1 - \frac{\partial X}{\partial y_1}(y_1,y_2) \Big\} dy_1 dy_2, \qquad \end{gathered} $$ and $X \in H^1_\#(Y^*)$ satisfying ${\displaystyle \int_{Y^*} X \; dy_1 dy_2 = 0}$ is the unique solution of the following problem \begin{equation} \label{AUXG11a} \int_{Y^*} \nabla X \nabla \psi \, dy_1 dy_2= \int_{Y^*} \frac{\partial \psi}{\partial y_1} \, dy_1 dy_2, \quad \forall \psi \in H^1_{\#}(Y^*). \end{equation} \end{theorem} \begin{remark} Note that $H^1_{\#}\big(Y^*\big)/\R$ is identified with the closed subspace of $H^1_{\#}\big(Y^*\big)$ consisting of all its functions with mean value $0.$ \end{remark} \begin{proof} First of all, we obtain the a priori estimate satisfied by $u^\varepsilon$, solution of the variational problem \eqref{VFP1}. Considering $u^\varepsilon$ as a test function in \eqref{VFP1} we obtain $$\Big\| \frac{\partial u^\varepsilon}{\partial x}\Big\|^2_{L^2(R^\varepsilon)} + \Big\| \frac{\partial u^\varepsilon}{\partial y}\Big\|^2_{L^2(R^\varepsilon)} +\|u^\varepsilon\|^2_{L^2(R^\varepsilon)} \leq \|f^\varepsilon\|_{L^2(R^\varepsilon)} \|u^\varepsilon\|_{L^2(R^\varepsilon)}.$$ Then, using that $|||f^\varepsilon|||_{L^2(R^\varepsilon)}\leq C$, with $C$ independent of $\varepsilon$, we deduce that \begin{equation}\label{unifbound} |||u^\varepsilon|||_{H^1(R^\varepsilon)} \leq C. 
\end{equation} Hence, in view of Theorem \ref{convergence theor1} it follows that there exist $u \in H^1(0, 1)$ and $u_1\in L^2\big((0,1); H^1_{\#}\big(Y^*\big)\big)$ such that one has, at least for a subsequence, the convergences \eqref{11}-\eqref{dev21}. Now we prove that $u$ and $u_1$ satisfy \eqref{system homogenized1a}. Applying the unfolding operator to the variational formulation \eqref{VFP1} and using property v) in Proposition \ref{properties}, we have \begin{equation} \begin{split} \int_{(0, 1) \times Y^*} &\left\{\mathcal{T_\varepsilon}\Big(\frac{\partial u^\varepsilon}{\partial x}\Big) \mathcal{T_\varepsilon}\Big(\frac{\partial \phi}{\partial x}\Big)+ \mathcal{T_\varepsilon}(u^\varepsilon)\mathcal{T_\varepsilon}(\phi) \right\} dx dy_1 dy_2 + \frac{L}{\varepsilon}\int_{R_1^\varepsilon} \left\{\frac{\partial u^\varepsilon}{\partial x} \frac{\partial \phi}{\partial x} + u^\varepsilon\phi \right\} dx dy \\ &= \int_{(0, 1) \times Y^*} \mathcal{T_\varepsilon}(f^\varepsilon) \mathcal{T_\varepsilon(\phi)} dx dy_1 dy_2 + \frac{L}{\varepsilon}\int_{R_1^\varepsilon} f^\varepsilon\phi \, dx dy, \;\; \forall \phi \in H^1(0, 1).\label{1eq} \end{split} \end{equation} Note that, since $R^\varepsilon$ degenerates into a line and the limit problem will be one dimensional, we have taken $ \phi \in H^1(0, 1)$ as a test function in \eqref{VFP1}, so that no derivative with respect to $y$ appears. Taking into account convergences \eqref{11}-\eqref{dev21}, Proposition \ref{uci 1vartest}, Proposition \ref{testconvergence} and the hypothesis on the function $f^\varepsilon$ we may pass to the limit in \eqref{1eq} to obtain \begin{eqnarray}\label{system 1} \int_{(0,1)\times Y^*} \left\{ \Big( \frac{\partial u}{\partial x}(x) + \frac{\partial u_1}{\partial y_1}(x, y_1, y_2)\Big) \frac{\partial \phi}{\partial x}(x) + u(x) \phi(x) \right\} \,dxdy_1dy_2\nonumber\\ = \int_{(0,1)\times Y^*} \hat{f}(x,y_1, y_2) \phi(x)\, dxdy_1dy_2, \quad \forall \phi \in H^1(0, 1).
\end{eqnarray} Observe that the terms in \eqref{1eq} with ${\displaystyle \frac{1}{\varepsilon}}$ vanish in the limit by Proposition \ref{uci 1vartest}. We choose now as a test function in \eqref{VFP1} the function $v^\varepsilon$ defined by $$v^\varepsilon(x, y) = \varepsilon \phi(x) \psi\Big(\frac{x}{\varepsilon}, \frac{y}{\varepsilon}\Big), \quad (x, y) \in R^\varepsilon,$$ where $\phi \in \mathcal{D}(0,1)$ and $\psi \in H^1_{\#}(Y^*)$. Observe that, in view of property iv) in Proposition \ref{properties}, $v^\varepsilon$ is well defined, and it is clear from its definition that it satisfies \begin{align*} &v^\varepsilon \in H^1(R^\varepsilon), \quad \mathcal{T}_\varepsilon(v^\varepsilon) = \varepsilon \mathcal{T}_\varepsilon(\phi) \psi,\\ &\frac{\partial v^\varepsilon}{\partial x} = \varepsilon \frac{\partial\phi}{\partial x} \psi\Big(\frac{x}{\varepsilon}, \frac{y}{\varepsilon}\Big)+ \phi \frac{\partial\psi}{\partial y_1}\Big(\frac{x}{\varepsilon}, \frac{y}{\varepsilon}\Big), \quad \frac{\partial v^\varepsilon}{\partial y}= \phi \frac{\partial\psi}{\partial y_2}\Big(\frac{x}{\varepsilon}, \frac{y}{\varepsilon}\Big). \end{align*} Hence, using the properties of the unfolding operator we get \begin{equation*} \begin{split} &\mathcal{T_\varepsilon}(v^\varepsilon) \buildrel \epsilon\to 0\over\longrightarrow 0 \quad \hbox{s}-L^2((0,1)\times Y^*),\\ &\mathcal{T_\varepsilon}\Big(\frac{\partial v^\varepsilon}{\partial x}\Big) \buildrel \epsilon\to 0\over\longrightarrow \phi \frac{\partial \psi}{\partial y_1} \quad \hbox{s}-L^2((0,1)\times Y^*),\\ &\mathcal{T_\varepsilon}\Big(\frac{\partial v^\varepsilon}{\partial y}\Big) \buildrel \epsilon\to 0\over\longrightarrow \phi \frac{\partial \psi}{\partial y_2} \quad \hbox{s}-L^2((0,1)\times Y^*).
\end{split} \end{equation*} Then, taking $v^\varepsilon$ as test function in the weak formulation \eqref{VFP1} and applying the unfolding operator we get \begin{align*} &\int_{(0, 1) \times Y^*} \left\{\mathcal{T_\varepsilon}\Big(\frac{\partial u^\varepsilon}{\partial x}\Big) \mathcal{T_\varepsilon}\Big(\frac{\partial v^\varepsilon}{\partial x}\Big) + \mathcal{T_\varepsilon}\Big(\frac{\partial u^\varepsilon}{\partial y}\Big) \mathcal{T_\varepsilon}\Big(\frac{\partial v^\varepsilon}{\partial y}\Big) + \mathcal{T_\varepsilon}(u^\varepsilon)\mathcal{T_\varepsilon}(v^\varepsilon) \right\} dx dy_1 dy_2 \\ &= \int_{(0, 1) \times Y^*} \mathcal{T_\varepsilon}(f^\varepsilon) \mathcal{T_\varepsilon}(v ^\varepsilon) dx dy_1 dy_2 + \frac{L}{\varepsilon}\int_{R_1^\varepsilon} \left\{f^\varepsilon v^\varepsilon - \frac{\partial u^\varepsilon}{\partial x} \frac{\partial v^\varepsilon}{\partial x} - \frac{\partial u^\varepsilon}{\partial y} \frac{\partial v^\varepsilon}{\partial y}- u^\varepsilon v^\varepsilon\right\} dx dy, \end{align*} which by Proposition \ref{uci 3test} and the convergences above gives in the limit $$\int_{(0, 1) \times Y^*} \Big\{ \Big( \frac{\partial u}{\partial x} + \frac{\partial u_1}{\partial y_1}\Big)\phi\frac{\partial \psi}{\partial y_1} + \frac{\partial u_1}{\partial y_2}\phi\frac{\partial \psi}{\partial y_2}\Big\}\;dxdy_1dy_2 =0, \quad \forall \phi \in \mathcal{D}(0,1), \, \psi \in H^1_{\#}(Y^*).$$ By the density of the tensor product $\mathcal{D}(0,1) \otimes H^1_{\#}(Y^*)$ in $L^2\big( (0,1); H^1_{\#}( Y^*)\big)$, the following equality holds \begin{align}\label{system 2} \int_{(0, 1) \times Y^*} \Big\{ \Big( \frac{\partial u}{\partial x} + \frac{\partial u_1}{\partial y_1}\Big)\frac{\partial \varphi}{\partial y_1} + \frac{\partial u_1}{\partial y_2}\frac{\partial \varphi}{\partial y_2}\Big\}\;dxdy_1dy_2 =0, \, \forall \varphi \in L^2\big( (0,1); H^1_{\#}( Y^*)\big).
\end{align} Therefore, adding \eqref{system 1} and \eqref{system 2} we obtain the homogenized system \eqref{system homogenized1a}. We prove now the existence and uniqueness of the solution of problem \eqref{system homogenized1a}. Note that, endowing the Hilbert space $H^1(0, 1) \times L^2\Big((0,1); H^1_{\#}\big(Y^*\big)/\R\Big)$ with the following norm \begin{equation*} \rho(\phi, \varphi)= \Big(\Big\|\frac{\partial \phi}{\partial x} + \frac{\partial \varphi}{\partial y_1} \Big\|^2_{L^2\big( (0,1) \times Y^*\big)} + \Big\|\frac{\partial \varphi}{\partial y_2} \Big\|^2_{L^2\big( (0,1) \times Y^*\big)} + \|\phi\|^2_{L^2\big( (0,1) \times Y^*\big)}\Big)^{1/2}, \end{equation*} $\forall \phi \in H^1(0,1), \, \varphi \in L^2\Big((0,1); H^1_{\#}\big(Y^*\big)/\R\Big)$, it is obvious that \eqref{system homogenized1a} satisfies the conditions of the Lax-Milgram Theorem. Therefore, the main point is to show that the product space $H^1(0, 1) \times L^2\Big((0,1); H^1_{\#}\big(Y^*\big)/\R\Big)$ endowed with $\rho$ is a Hilbert space. However, it is easy to check that the norm $\rho$ is equivalent to the usual norm of the product space, taking into account that for any $(\phi, \varphi) \in H^1(0, 1) \times L^2\Big((0,1); H^1_{\#}\big(Y^*\big)/\R\Big)$ the following inequality holds \begin{align*} &\Big\|\frac{\partial \phi}{\partial x} + \frac{\partial \varphi}{\partial y_1} \Big\|^2_{L^2\big( (0,1) \times Y^*\big)} \ge \Big\|\frac{\partial \phi}{\partial x} + \frac{\partial \varphi}{\partial y_1} \Big\|^2_{L^2\big( (0,1) \times Y_0^*\big)} = \Big\|\frac{\partial \phi}{\partial x}\Big\|^2_{L^2\big( (0,1) \times Y_0^*\big)} + \Big\|\frac{\partial \varphi}{\partial y_1} \Big\|^2_{L^2\big( (0,1) \times Y_0^*\big)}\\ &\ge |Y^*_0| \, \Big\|\frac{\partial \phi}{\partial x}\Big\|^2_{L^2(0,1)}, \end{align*} where we recall that $Y^*_0=\Big\{(y_1, y_2)\in \R^2: 0<y_1<L, 0<y_2<\frac{g_0}{2} \Big\} \subset Y^*$.
Notice that the equality in the display above holds since, by integration by parts and the $L$-periodicity of $\varphi$, one has $$\displaystyle{\int_{(0, 1) \times Y_0^*} \frac{\partial \phi}{\partial x} \frac{\partial \varphi}{\partial y_1} \, dxdy_1dy_2= 0.}$$ Consequently, there exists a unique solution of the homogenized system \eqref{system homogenized1a} in $H^1(0, 1) \times L^2\Big((0,1); H^1_{\#}\big(Y^*\big)/\R\Big)$. {\color{blue} To conclude the proof we obtain the homogenized limit equation in terms of $u$. Using that $X$ is the unique $L$-periodic solution of the problem \eqref{AUXG11a} and since $u$ does not depend on $y_1$ and $y_2$ we have that $- X(y_1, y_2) \frac{\partial u}{\partial x}(x)$ satisfies \begin{align}\label{auxsi} &\int_{(0, 1) \times Y^*} \Big\{-\frac{\partial u}{\partial x} \frac{\partial X}{\partial y_1}\frac{\partial \varphi}{\partial y_1} -\frac{\partial u}{\partial x} \frac{\partial X}{\partial y_2}\frac{\partial \varphi}{\partial y_2}\Big\}\;dxdy_1dy_2\nonumber \\ &\quad = \int_{(0, 1) \times Y^*} -\frac{\partial u}{\partial x}\frac{\partial \varphi}{\partial y_1} \, dx dy_1 dy_2, \;\forall \varphi \in L^2\big( (0,1); H^1_{\#}( Y^*)\big). \end{align} Then, since $u_1$ also satisfies \eqref{system 2}, by uniqueness we have} $$ u_1 (x, y_1, y_2) = - X(y_1, y_2) \frac{\partial u}{\partial x}(x).$$ Using this expression of $u_1$ in \eqref{system 1} we find the weak formulation of \eqref{GLPri}. It remains to prove the existence and uniqueness of the solution of the homogenized limit problem \eqref{GLPri}. However, this is an immediate consequence of the Lax-Milgram Theorem once we see that the problem is well posed in the sense that $q_0>0.$ For this we argue in a similar way as in \cite{ArrCarPerSil}. Let $a(\cdot, \cdot)$ be the bilinear form associated with the variational formulation of \eqref{AUXG11a}. Then, $X$ satisfies $a(X,\Phi) = \int_{Y^*} \frac{\partial \Phi}{\partial y_1} \, dy_1 dy_2, \, \hbox{ for any } \Phi \in H^1_{\#}(Y^*).$ Consequently, \begin{equation}\label{bilinearpri} a(y_1-X,\Phi) = \int_{Y^*} \frac{\partial \Phi}{\partial y_1} \, dy_1 dy_2- \int_{Y^*} \frac{\partial \Phi}{\partial y_1} \, dy_1 dy_2=0, \hbox{ for any } \Phi \in H^1_{\#}(Y^*). \end{equation} In particular, $a(y_1-X, X)=0$. Turning back to $q_0,$ we have \begin{equation}\label{bilinearci} q_0=\frac{1}{|Y^*|} \int_{Y^*} \Big\{ 1 - \frac{\partial X}{\partial y_1}\Big\} dy_1 dy_2=\frac{1}{|Y^*|} \int_{Y^*} \frac{\partial}{\partial y_1}\big( y_1 - X\big)\frac{\partial y_1}{\partial y_1} dy_1 dy_2=\frac{1}{|Y^*|} a(y_1-X, y_1). \end{equation} Hence, using \eqref{bilinearpri} we get $$|Y^*| q_0 = a(y_1-X, y_1) + a(y_1-X, -X) = a(y_1-X, y_1-X) = \|\nabla(y_1 -X)\|^2_{[L^2(Y^*)]^2}.$$ Therefore, since $|Y^*|>0$ we can conclude that $q_0>0$. Indeed, if this were not true, we would have $$\frac{\partial (y_1 -X)}{\partial y_1} = 0, \quad \frac{\partial (y_1 -X)}{\partial y_2} = 0,$$ which implies that there exists a constant $C$ such that $ y_1 - X = C$. This is impossible because $X$ is $L$-periodic in the first variable. Since the Lax-Milgram Theorem guarantees the uniqueness of the weak solution of \eqref{GLPri}, we know that every weakly convergent subsequence of the sequence $\{u^\varepsilon\}$ converges to the same limit. Thus the whole sequence $\{u^\varepsilon\}$ converges weakly to the limit $u$. \end{proof} \begin{remark}\label{regu} Notice that since $q_0$ is constant it follows from standard elliptic regularity theory that $u \in H^2(0,1)$.
\end{remark} \begin{remark}\label{auxprob} Observe that assuming extra regularity conditions on the function $g(\cdot)$, for instance $g \in C^1(\R)$, an easy integration by parts shows that \eqref{AUXG11a} is the variational formulation associated with the usual auxiliary problem defined in the basic cell \begin{equation}\label{auxprob1} \left\{ \begin{gathered} - \Delta X = 0 \textrm{ in } Y^*, \\ \frac{\partial X}{\partial N} = 0 \textrm{ on } B_2 , \\ \frac{\partial X}{\partial N} = N_1 \textrm{ on } B_1, \\ \int_{Y^*} X \; dy_1 dy_2 = 0, \end{gathered} \right. \end{equation} where $N=(N_1, N_2)$ is the unit outward normal to $\partial Y^*$, and $B_1$ and $B_2$ are the upper and lower boundaries of $Y^*$, respectively. Moreover, standard elliptic regularity theory shows that $X \in H^2(Y^*)\cap C^0(Y^*).$ \end{remark} \begin{remark} Notice that the coefficient $q_0$ reflects how the geometry of the thin domain, in particular the rough boundary, affects the limit equation. In fact, we may prove that $q_0 < 1.$ For this, we use \eqref{bilinearpri}, \eqref{bilinearci} and the basic properties of the symmetric bilinear form $a(\cdot, \cdot)$: \begin{align*} 0<|Y^*| q_0 &=a(y_1-X, y_1) =a(y_1-X, y_1) + a(y_1-X, X) \\ &= a(y_1, y_1) - a(X, X) = |Y^*| - \|\nabla X\|^2_{[L^2(Y^*)]^2} < |Y^*|. \end{align*} Thus, we get $0<q_0 < 1.$ \end{remark} We complete this section by analyzing the strong convergence of the solutions without any additional regularity condition on the boundary of the thin domain. \begin{proposition}\label{1strong conv} Assume that the hypotheses of Theorem \ref{hom1} are satisfied.
Then, one has \par\noindent $i)$ $|||u^\varepsilon - u|||_{L^2(R^\varepsilon)} \buildrel \epsilon\to 0\over\longrightarrow 0.$ \par\noindent $ii)$ The following strong convergences hold \begin{equation*} \mathcal{T_\varepsilon}\Big(\frac{\partial u^\varepsilon}{\partial x}\Big)\buildrel \epsilon\to 0\over\longrightarrow \frac{\partial u}{\partial x} +\frac{\partial u_1}{\partial y_1} \quad \hbox{s}- L^2\big((0,1) \times Y^*\big), \; \mathcal{T_\varepsilon}\Big(\frac{\partial u^\varepsilon}{\partial y}\Big)\buildrel \epsilon\to 0\over\longrightarrow \frac{\partial u_1}{\partial y_2}\; \hbox{ s}-L^2\big((0,1) \times Y^*\big). \end{equation*} Moreover, \begin{align*} \frac{1}{\varepsilon}\int_{R_1^\varepsilon} |\nabla u^\varepsilon|^2 dxdy \buildrel \epsilon\to 0\over\longrightarrow 0, \end{align*} where $R_1^\varepsilon$ is the subset of $R^\varepsilon$ consisting of the part of the unique cell which is not totally included in $R^\varepsilon$, see \eqref{cellno}. \par\noindent $iii)$ Let $X$ be the unique solution of \eqref{AUXG11a}. Then, the following convergence holds \begin{equation}\label{corrector1} \lim_{\varepsilon \to 0}||| \nabla u^\varepsilon - \nabla u + \mathcal{U_\varepsilon}\Big(\frac{\partial u}{\partial x}\Big) (X^\varepsilon_1, X^\varepsilon_2)|||_{[L^2(R^\varepsilon)]^2}=0, \end{equation} where $$ X_{1}^{\varepsilon}(x,y) \equiv \frac{\partial X}{\partial y_1} \big(\frac{x}{\varepsilon}, \frac{y}{ \varepsilon}\big) \hbox{ and } X_{2}^{\varepsilon}(x,y) \equiv \frac{\partial X}{\partial y_2} \big(\frac{x}{\varepsilon}, \frac{y}{ \varepsilon}\big), \; \forall (x,y) \in R^\varepsilon.$$ \end{proposition} \begin{proof} \par\noindent $i)$ It is enough to apply Proposition \ref{convergence prop} to the sequence of solutions $\{u^\varepsilon\}$. \par\noindent $ii)$ The proof of these convergences is based on the convergence of the energy.
{\color{blue} Taking $\varphi = u^\varepsilon$ as a test function in the variational formulation \eqref{VFP1}, we obtain the following convergence, due to the unfolding criterion for integrals and convergence $i)$ of this proposition, \begin{equation}\label{conver1a} \begin{split} \int_{(0, 1) \times Y^*} &\left\{{\mathcal{T_\varepsilon}\Big(\frac{\partial u^\varepsilon}{\partial x}\Big)}^2 + {\mathcal{T_\varepsilon}\Big(\frac{\partial u^\varepsilon}{\partial y}\Big)}^2\right\} dx dy_1 dy_2 + \frac{1}{\varepsilon}\int_{R_1^\varepsilon} |\nabla u^\varepsilon|^2 dxdy\\ & \buildrel \epsilon\to 0\over\longrightarrow \int_{(0, 1) \times Y^*} \left\{ \hat{f}u - u^2 \right\}dx dy_1 dy_2. \end{split} \end{equation}} Now, considering $\phi = u$ and $\varphi = u_1$ as test functions in (\ref{system homogenized1a}) we get \begin{equation}\label{conver2a} \int_{(0, 1) \times Y^*} \left\{ \Big({\frac{\partial u}{\partial x}} + \frac{\partial u_1}{\partial y_1} \Big)^2 + {\frac{\partial u_1}{\partial y_2}}^2 \right\} dxdy_1dy_2 = \int_{(0, 1) \times Y^*} \left\{ \hat{f}u - u^2 \right\} dx dy_1 dy_2. \end{equation} Finally, combining \eqref{conver1a} and \eqref{conver2a} we have \begin{align*} \int_{(0, 1) \times Y^*} &\left\{{\mathcal{T_\varepsilon}\Big(\frac{\partial u^\varepsilon}{\partial x}\Big)}^2 + {\mathcal{T_\varepsilon}\Big(\frac{\partial u^\varepsilon}{\partial y}\Big)}^2\right\} dx dy_1 dy_2 + \frac{1}{\varepsilon}\int_{R_1^\varepsilon} |\nabla u^\varepsilon|^2 dxdy \\&\buildrel \epsilon\to 0\over\longrightarrow \int_{(0, 1) \times Y^*}\left\{ \Big({\frac{\partial u}{\partial x}} + \frac{\partial u_1}{\partial y_1} \Big)^2 + {\frac{\partial u_1}{\partial y_2}}^2\right\} dxdy_1dy_2.
\end{align*} Therefore, using the limit above, weak convergences \eqref{dev11} and \eqref{dev21} and by standard weak lower-semicontinuity we obtain the following inequalities \begin{align*} &\int_{(0, 1) \times Y^*}\left\{ \Big({\frac{\partial u}{\partial x}} + \frac{\partial u_1}{\partial y_1} \Big)^2 + {\frac{\partial u_1}{\partial y_2}}^2\right\} dxdy_1dy_2\nonumber\\ &\leq \liminf_{\varepsilon \to 0}\int_{(0, 1) \times Y^*} \left\{{\mathcal{T_\varepsilon}\Big(\frac{\partial u^\varepsilon}{\partial x}\Big)}^2 + {\mathcal{T_\varepsilon}\Big(\frac{\partial u^\varepsilon}{\partial y}\Big)}^2\right\}dxdy_1dy_2\nonumber\\ &\leq \limsup_{\varepsilon \to 0}\int_{(0, 1) \times Y^*} \left\{{\mathcal{T_\varepsilon}\Big(\frac{\partial u^\varepsilon}{\partial x}\Big)}^2 + {\mathcal{T_\varepsilon}\Big(\frac{\partial u^\varepsilon}{\partial y}\Big)}^2\right\}dxdy_1dy_2\nonumber\\ &\leq \lim_{\varepsilon \to 0}\left\{\int_{(0, 1) \times Y^*} \left\{{\mathcal{T_\varepsilon}\Big(\frac{\partial u^\varepsilon}{\partial x}\Big)}^2 + {\mathcal{T_\varepsilon}\Big(\frac{\partial u^\varepsilon}{\partial y}\Big)}^2\right\} dxdy_1dy_2 + \frac{1}{\varepsilon}\int_{R_1^\varepsilon} |\nabla u^\varepsilon|^2 dxdy \right\}\nonumber\\ &=\int_{(0, 1) \times Y^*}\left\{ \Big({\frac{\partial u}{\partial x}} + \frac{\partial u_1}{\partial y_1} \Big)^2 + {\frac{\partial u_1}{\partial y_2}}^2\right\} dxdy_1dy_2. 
\end{align*} Consequently, we deduce that \begin{align}\label{conver31a} \int_{(0, 1) \times Y^*}& \left\{{\mathcal{T_\varepsilon}\Big(\frac{\partial u^\varepsilon}{\partial x}\Big)}^2 + {\mathcal{T_\varepsilon}\Big(\frac{\partial u^\varepsilon}{\partial y}\Big)}^2\right\} dx dy_1 dy_2\nonumber\\ &\buildrel \epsilon\to 0\over\longrightarrow \int_{(0, 1) \times Y^*}\left\{ \Big({\frac{\partial u}{\partial x}} + \frac{\partial u_1}{\partial y_1} \Big)^2 + {\frac{\partial u_1}{\partial y_2}}^2\right\} dxdy_1dy_2,\\ \frac{1}{\varepsilon}\int_{R_1^\varepsilon} &|\nabla u^\varepsilon|^2 dxdy \buildrel \epsilon\to 0\over\longrightarrow 0.\nonumber \end{align} Hence, due to the weak convergences \eqref{dev11}, \eqref{dev21} and the convergence \eqref{conver31a}, we obtain by the Radon-Riesz property the strong convergences of ii). \par\noindent $iii)$ From property v) of Proposition \ref{averaging} and the convergences obtained in ii) we immediately get $$\lim_{\varepsilon \to 0}||| \nabla u^\varepsilon - \mathcal{U_\varepsilon}(\nabla u) - \mathcal{U_\varepsilon}(\nabla_{y_1y_2}u_1)|||_{[L^2(R^\varepsilon)]^2}=0.$$ Moreover, from iv) of Proposition \ref{averaging} we have $$|||\mathcal{U_\varepsilon}(\nabla u) - \nabla u|||_{L^2(R^\varepsilon)} \buildrel \epsilon\to 0\over\longrightarrow 0.$$ Consequently, we obtain $$\lim_{\varepsilon \to 0}||| \nabla u^\varepsilon -\nabla u - \mathcal{U_\varepsilon}(\nabla_{y_1y_2}u_1)|||_{[L^2(R^\varepsilon)]^2}=0.$$ Finally, taking into account that ${\displaystyle u_1 (x, y_1, y_2) = - X(y_1, y_2) \frac{\partial u}{\partial x}(x)}$ and the definition of the averaging operator we get the desired convergence. \end{proof} {\color{blue} Assuming extra regularity conditions on the reference cell, namely $g \in C^1(\R)$, we obtain, as a simple consequence of the corrector result above, the first two terms of the usual asymptotic expansion of the solution $u^\varepsilon$, which gives a kind of strong convergence in the $H^1$-norm.
We refer the reader to \cite{PerSil13} for a classical proof of this result, where error estimates were also obtained, and to \cite{MelPo10}, where corrector results for more general elliptic problems posed in perforated thin domains were established. } \begin{corollary}\label{corr1} If $g(\cdot) \in C^1(\R)$ then the following corrector result holds $$\lim_{\varepsilon \to 0}\Big|\Big|\Big| u^\varepsilon - u + \varepsilon \frac{\partial u}{\partial x} X^\varepsilon\Big|\Big|\Big|_{H^1(R^\varepsilon)}=0,$$ where $X^\varepsilon(x, y)\equiv X( x/\varepsilon, y / \varepsilon), \; (x, y) \in R^\varepsilon.$ \end{corollary} \begin{proof} First of all, notice that the corrector is well defined, that is, $\varepsilon \frac{\partial u}{\partial x} X^\varepsilon \in H^1(R^\varepsilon)$, since $u \in H^2(0,1)$ and $X \in H^2(Y^*)$ as we have already mentioned in Remark \ref{regu} and Remark \ref{auxprob}. Moreover, from the properties of the averaging operator, see Proposition \ref{averaging}, we easily get $$\lim_{\varepsilon \to 0}||| \mathcal{U_\varepsilon}\Big(\frac{\partial u}{\partial x}\Big) (X^\varepsilon_1, X^\varepsilon_2) - \frac{\partial u}{\partial x} (X^\varepsilon_1, X^\varepsilon_2)|||_{[L^2(R^\varepsilon)]^2}=0.$$ Then, by a simple computation, the desired strong convergence follows from convergence i) in Proposition \ref{1strong conv} and \eqref{corrector1}. \end{proof} \section{Weak oscillations, $\alpha < 1$}\label{23} {\color{blue} In this section we deal with an oscillatory thin domain $R^\varepsilon$ defined as} $$R^\varepsilon = \Big\{ (x,y) \in \R^2 \; | \; x \in (0,1), \; 0 < y < \varepsilon \, g(x/\varepsilon^\alpha) \Big\},$$ where $0<\alpha < 1$ and $g(\cdot)$ satisfies hypothesis \textbf{(H{\scriptsize g})} stated at the beginning of Section \ref{21}. Moreover, we assume that $g_0>0$ in order to guarantee that the Poincar\'e-Wirtinger inequality holds in the representative cell $Y^*$.
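\begin{remark}
As in the resonant case, a formal two-scale ansatz (again only a heuristic, with the profile $u_1$ posited rather than derived) already indicates why the cell corrector cannot depend on the vertical cell variable when $\alpha<1$. Writing
$$u^\varepsilon(x,y) \approx u(x) + \varepsilon^\alpha\, u_1\Big(x, \frac{x}{\varepsilon^\alpha}, \frac{y}{\varepsilon}\Big),$$
one formally gets
$$\frac{\partial u^\varepsilon}{\partial x} \approx \frac{\partial u}{\partial x} + \frac{\partial u_1}{\partial y_1} + \varepsilon^\alpha\, \frac{\partial u_1}{\partial x}, \qquad \frac{\partial u^\varepsilon}{\partial y} \approx \varepsilon^{\alpha-1}\, \frac{\partial u_1}{\partial y_2}.$$
Since $\alpha - 1 < 0$, the boundedness of $\partial u^\varepsilon/\partial y$ forces $\partial u_1/\partial y_2 = 0$ at leading order, in agreement with the compactness result below.
\end{remark}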
To obtain the homogenized limit problem we will follow a similar approach as in the previous section. Then, we begin this section with the corresponding compactness result. \begin{theorem}\label{convergence theor2} Let $\varphi^\varepsilon \in W^{1,p}(R^\varepsilon)$ for some $1<p<\infty$, with $|||\varphi^\varepsilon|||_{W^{1,p}( R^\varepsilon)}$ uniformly bounded. Then, there exist functions $\varphi \in W^{1,p}(0,1)$, $\varphi_1 \in L^{p}\big( (0,1); W_{\#}^{1,p}(Y^*)\big)$ with ${\displaystyle \frac{\partial \varphi_1}{\partial y_2}=0}$ such that, up to subsequences, \begin{itemize} \item[i)] $\mathcal{T_\varepsilon(\varphi^\varepsilon)}\buildrel \epsilon\to 0\over\longrightarrow \varphi \; \quad \hbox{ s}-L^p\big( (0,1); W^{1,p}(Y^*)\big),$ \item[ii)] $\mathcal{T_\varepsilon}\Big(\frac{\partial\varphi^\varepsilon}{\partial x}\Big)\weto \frac{\partial\varphi}{\partial x} + \frac{\partial \varphi_1}{\partial y_1}\, \quad \hbox{ w}-L^p\big( (0,1)\times Y^*\big).$ \end{itemize} \end{theorem} \begin{proof} \par\noindent $i)$ This convergence was obtained in Proposition \ref{convergence prop} for any $\alpha$ greater than 0. \par\noindent $ii)$ This assertion can be proved following the same arguments as in the corresponding proof of Theorem \ref{convergence theor1}. Therefore, we stress here just the main differences with respect to the case $\alpha=1$.
We consider the operator \begin{equation}\label{operator2} Z_\varepsilon := \frac{1}{\varepsilon^\alpha}\Big(\mathcal{T_\varepsilon(\varphi^\varepsilon)} - \frac{1}{|Y^*|}\int_{Y^*}\mathcal{T_\varepsilon}(\varphi^\varepsilon)\; dy_2 dy_1\Big), \end{equation} which has mean value zero in $Y^*$ and from Proposition \ref{properties} vii) satisfies \begin{align}\label{partial y12} &\frac{\partial Z_\varepsilon}{\partial y_1} = \mathcal{T_\varepsilon}\Big(\frac{\partial\varphi^\varepsilon}{\partial x}\Big), \quad \frac{\partial Z_\varepsilon}{\partial y_2} = \varepsilon^{1 - \alpha} \mathcal{T_\varepsilon}\Big(\frac{\partial\varphi^\varepsilon}{\partial y}\Big). \end{align} In the same way as in the resonant case, it is not difficult to prove that there exists a function $\varphi_1 \in L^{p}\big( (0,1); W^{1,p}(Y^*)\big)$ such that, up to subsequences, \begin{equation}\label{Z2} Z_\varepsilon - y_1^c\frac{\partial\varphi}{\partial x} \weto \varphi_1 \quad \hbox{w}-L^p\big( (0,1);W^{1,p}(Y^*)\big), \end{equation} where ${\displaystyle y_1^c= y_1 - \frac{1}{|Y^*|}\int_{Y^*} y_1\; dy_2 dy_1}$. Hence, taking into account \eqref{partial y12}, \eqref{Z2} and since $1-\alpha > 0$ we get \begin{align*} \mathcal{T_\varepsilon}\Big(\frac{\partial\varphi^\varepsilon}{\partial x}\Big) \weto \frac{\partial\varphi}{\partial x} (x) + \frac{\partial \varphi_1}{\partial y_1} (x, y_1, y_2)\quad \hbox{w}-L^p\big( (0,1)\times Y^*\big)\quad \hbox{and} \quad \frac{\partial \varphi_1}{\partial y_2} = 0. \end{align*} Finally, by the same computation as in Theorem \ref{convergence theor1} we get the periodicity of $\varphi_1$. \end{proof} Now we are in a position to obtain the homogenized limit for problem \eqref{OPI0} when $0<\alpha<1$. \begin{theorem}\label{hom2} Let $u^\varepsilon$ be the solution of problem \eqref{VFP1} with $f^\varepsilon \in L^2(R^\varepsilon)$ satisfying $||| f^\varepsilon |||_{L^2(R^\varepsilon)} \leq C$ for some positive constant $C$ independent of the parameter $\varepsilon>0$.
Assume that there exists $\hat{f} \in L^2((0,1)\times Y^*)$ such that $$\displaystyle{\mathcal{T_\varepsilon}(f^\varepsilon)\weto \hat{f} \hbox{ weakly in }\; L^2\big( (0,1)\times Y^*\big)}.$$ Then, there exist $u \in H^1(0, 1)$ and $u_1\in L^2\Big((0,1); H^1_{\#}\big(Y^*\big)\Big)$ with ${\displaystyle \frac{\partial u_1}{\partial y_2}=0}$ such that \begin{align} & \mathcal{T_\varepsilon}(u^\varepsilon)\buildrel \epsilon\to 0\over\longrightarrow u \quad \hbox{strongly in }\; L^2\big( (0,1); H^1(Y^*)\big),\label{fun1a2}\\ &\mathcal{T_\varepsilon}\Big(\frac{\partial u^\varepsilon}{\partial x}\Big)\weto \frac{\partial u}{\partial x}(x) +\frac{\partial u_1}{\partial y_1}(x, y_1, y_2) \quad \hbox{weakly in }\; L^2\big((0,1) \times Y^*\big),\label{dev1a2} \end{align} where $u_1$ is the unique function, up to constants, such that $$\frac{\partial u_1}{\partial y_1} = \Big(-1 + \frac{1}{g\mathcal{M}(\frac{1}{g})}\Big) \frac{\partial u}{\partial x},$$ and $u\in H^1(0,1)$ is the unique solution of the Neumann problem \begin{equation} \label{GLP21a} \left\{ \begin{gathered} - \frac{1}{\mathcal{M}(g)\mathcal{M}\big(\frac{1}{g}\big)} u_{xx} + u = f_0(x), \quad x \in (0,1),\\ u'(0)=u'(1)=0, \end{gathered} \right. \end{equation} where ${\displaystyle f_0(x)= \frac{1}{|Y^*|}\int_{Y^*}\hat{f}(x, y_1, y_2) dy_1dy_2}$, ${\displaystyle\mathcal{M}(g)=\frac{1}{L}\int_0^L g(y_1)\, dy_1}$ and ${\displaystyle\mathcal{M}\big(\frac{1}{g}\big)=\frac{1}{L}\int_0^L \frac{1}{g(y_1)}\, dy_1}$. \end{theorem} \begin{remark} Observe that the limit problem is well defined since the assumptions on the function $g$ imply that ${\displaystyle \frac{1}{g} \in L^1(0,L).}$ \end{remark} \begin{proof} As we have already seen in the previous section, see \eqref{unifbound}, $|||u^\varepsilon|||_{H^1(R^\varepsilon)} $ is uniformly bounded. 
Then, from Theorem \ref{convergence theor2} there are two functions $u \in H^1(0, 1)$ and $u_1\in L^2\big((0,1); H^1_{\#}(Y^*)\big)$ with $\frac{\partial u_1}{\partial y_2}=0$ such that, up to subsequences, {\color{blue}convergences \eqref{fun1a2} and \eqref{dev1a2} are satisfied.} We are now in a position to derive the equations satisfied by $u$ and $u_1$. Transforming \eqref{VFP1} by the unfolding operator $\mathcal{T_\varepsilon}$ and {\color{blue} passing to the limit, taking into account convergences \eqref{fun1a2} and \eqref{dev1a2} and Proposition \ref{uci 1vartest}, we obtain} \begin{eqnarray}\label{system 21} \int_{(0,1)\times Y^*} \left\{ \Big( \frac{\partial u}{\partial x}(x) + \frac{\partial u_1}{\partial y_1}(x, y_1)\Big) \frac{\partial \phi}{\partial x}(x) + u(x) \phi(x) \right\} dxdy_1dy_2\nonumber\\ = \int_{(0,1)\times Y^*} \hat{f}(x, y_1, y_2) \phi(x) dxdy_1dy_2 \quad \forall \phi \in H^1(0, 1). \end{eqnarray} To identify $u_1$, we introduce the function $v^\varepsilon \in H^1(R^\varepsilon) $ defined by $$v^\varepsilon(x, y) = \varepsilon^\alpha \phi(x) \psi(x/\varepsilon^\alpha)$$ where $\phi \in \mathcal{D}(0,1)$ and $\psi \in H^1_{\#}(Y^*)$ such that $\frac{\partial \psi}{\partial y_2}=0,$ that is, $\psi(y_1, y_2)=\psi(y_1)$ and $\psi(0)=\psi(L)$.
It is easy to get the partial derivatives $$\frac{\partial v^\varepsilon}{\partial x} = \varepsilon^\alpha \frac{\partial\phi}{\partial x}(x) \psi\Big(\frac{x}{\varepsilon^\alpha}\Big)+ \phi(x) \frac{\partial\psi}{\partial y_1}\Big(\frac{x}{\varepsilon^\alpha}\Big), \qquad \frac{\partial v^\varepsilon}{\partial y}=0.$$ Thus, using the basic properties of the unfolding operator we easily get \begin{equation}\label{testv21} \begin{split} &\mathcal{T_\varepsilon}(v^\varepsilon) \buildrel \epsilon\to 0\over\longrightarrow 0 \quad \hbox{s-}L^2((0,1)\times Y^*),\\ &\mathcal{T_\varepsilon}\Big(\frac{\partial v^\varepsilon}{\partial x}\Big) \buildrel \epsilon\to 0\over\longrightarrow \phi \frac{\partial \psi}{\partial y_1} \quad \hbox{s-}L^2((0,1)\times Y^*). \end{split} \end{equation} Taking now $v^\varepsilon \in H^1(R^\varepsilon)$ as a test function in \eqref{VFP1} and applying the unfolding operator, we can pass to the limit using convergences \eqref{fun1a2}, \eqref{dev1a2}, \eqref{testv21} and Proposition \ref{uci 3test}. Thus, in the limit we obtain $$\int_{(0, 1) \times Y^*} \Big( \frac{\partial u}{\partial x}(x) + \frac{\partial u_1}{\partial y_1}(x, y_1)\Big)\phi (x)\frac{\partial \psi}{\partial y_1}(y_1)\;dxdy_1dy_2 =0,$$ for any $\phi \in \mathcal{D}(0,1)$ and $\psi \in H^1_{\#}(Y^*)$ such that $\frac{\partial \psi}{\partial y_2}=0.$ By density, this equality holds true for all $\psi \in L^2\big( (0,1); H^1_{\#}( Y^*)\big)$ with $\frac{\partial\psi}{\partial y_2}=0$: \begin{equation}\label{system 32} \int_{(0, 1) \times Y^*} \Big( \frac{\partial u}{\partial x}(x) + \frac{\partial u_1}{\partial y_1}(x, y_1)\Big)\frac{\partial \psi}{\partial y_1}(x, y_1)\;dxdy_1dy_2 =0.
\end{equation} Since all functions in \eqref{system 32} do not depend on $y_2$ we have $$\int_{(0, 1) \times (0, L)} \Big( \frac{\partial u}{\partial x}(x) + \frac{\partial u_1}{\partial y_1}(x, y_1)\Big)g(y_1)\frac{\partial \psi}{\partial y_1}(x, y_1)\;dxdy_1 =0.$$ Hence, treating $x$ as a parameter in the above equation we have that there exists a function $T$ depending only on $x$ such that $$\Big( \frac{\partial u}{\partial x}(x) + \frac{\partial u_1}{\partial y_1}(x, y_1)\Big)g(y_1)= T(x) \quad \hbox{ a.e. in } (0, L).$$ Consequently, since $u_1$ is $L$-periodic we have $$0 = \frac{1}{L}\int_{(0, L)} \frac{\partial u_1}{\partial y_1} \;dy_1 = - \frac{\partial u}{\partial x} + \frac{T}{L}\int_{(0, L)} \frac{1}{g}\; dy_1= - \frac{\partial u}{\partial x} + T \mathcal{M}\Big(\frac{1}{g }\Big).$$ Then, we get $$\frac{\partial u_1}{\partial y_1} = \Big(-1 + \frac{1}{g \mathcal{M}\big(\frac{1}{g}\big)}\Big) \frac{\partial u}{\partial x}.$$ Replacing $u_1$ by its value in the equation \eqref{system 21} we obtain the weak formulation of \eqref{GLP21a} \begin{equation*} \int_{0}^{1} \Big\{ \frac{1}{\mathcal{M}(g)\mathcal{M}\big(\frac{1}{g}\big)} \, u_x(x) \, \phi_x(x) + \, u(x) \, \phi(x) \Big\} dx = \int_{0}^{1} \, f_0(x) \, \phi(x) \, dx, \quad \forall \phi \in H^1(0,1). \end{equation*} Thanks to the Lax--Milgram Theorem we can ensure the existence and uniqueness of the solution of \eqref{GLP21a}, which ends the proof. \end{proof} Now we are going to obtain new strong convergences without additional assumptions. \begin{proposition} \label{3correct result} Assume that the hypotheses of Theorem \ref{hom2} are satisfied.
Then, the following strong convergences hold \begin{itemize} \item[i)] $|||u^\varepsilon -u|||_{L^2(R^\varepsilon)} \buildrel \epsilon\to 0\over\longrightarrow 0.$ \item[ii)] $\displaystyle{\mathcal{T_\varepsilon}\Big(\frac{\partial u^\varepsilon}{\partial x}\Big)\buildrel \epsilon\to 0\over\longrightarrow \frac{\partial u}{\partial x} +\frac{\partial u_1}{\partial y_1} \hbox{ s}-L^2\big((0,1) \times Y^*\big),\; \mathcal{T_\varepsilon}\Big(\frac{\partial u^\varepsilon}{\partial y}\Big)\buildrel \epsilon\to 0\over\longrightarrow 0 \hbox{ s}-L^2\big((0,1) \times Y^*\big).}$ Moreover, $$\frac{1}{\varepsilon}\int_{R_1^\varepsilon} |\nabla u^\varepsilon|^2 dxdy \buildrel \epsilon\to 0\over\longrightarrow 0.$$ \item[iii)]${\displaystyle \lim_{\varepsilon \to 0}||| \nabla u^\varepsilon -\nabla u - \mathcal{U_\varepsilon}(\nabla_{y_1y_2}u_1)|||_{[L^2(R^\varepsilon)]^2}=0.}$ \item[iv)] Let $X$ be a function in $H_{\#}^1(Y^*)$ satisfying $$\frac{\partial X}{\partial y_1} = 1 - \frac{1}{g\mathcal{M}(\frac{1}{g})} \quad \hbox{ and } \quad \frac{\partial X}{\partial y_2} =0.$$ Then, \begin{align*} &\displaystyle{\lim_{\varepsilon \to 0}\Big|\Big|\Big| \frac{\partial u^\varepsilon}{\partial x}- \frac{\partial u}{\partial x} + \mathcal{U_\varepsilon}\Big(\frac{\partial u}{\partial x}\Big)X^\varepsilon_1\Big|\Big|\Big|_{L^2(R^\varepsilon)}=0,} \quad \displaystyle{\lim_{\varepsilon \to 0}\Big|\Big|\Big| \frac{\partial u^\varepsilon}{\partial y}\Big|\Big|\Big|_{L^2(R^\varepsilon)}=0,} \end{align*} where ${\displaystyle X_1^\varepsilon(x)\equiv \frac{\partial X}{\partial y_1}\Big(\frac{x}{\varepsilon^\alpha}\Big)},$ $x \in (0,1)$. \end{itemize} \end{proposition} \begin{proof} \par\noindent $i)$ It follows directly from Proposition \ref{convergence prop}. \par\noindent $ii)$ {\color{blue} These convergences are based on the convergence of the energy. 
In fact, the proof is so similar to the proof of ii) of Proposition \ref{1strong conv} that it will be omitted.} \par\noindent $iii)$ It is a direct consequence of the convergences obtained in ii) and the properties iv) and v) of Proposition \ref{averaging}. \par\noindent $iv)$ Using the usual notation for corrector results, we introduce an $L-$periodic function $X \in H_{\#}^1(Y^*)$ such that $\displaystyle{u_1 = -X\frac{\partial u}{\partial x}}.$ Then, $X$ satisfies $$\frac{\partial X}{\partial y_1} = 1 - \frac{1}{g\mathcal{M}(\frac{1}{g})}\quad \hbox{ and } \quad \frac{\partial X}{\partial y_2} =0.$$ Moreover, $$\mathcal{U_\varepsilon}\Big(\frac{\partial u_1}{\partial y_1}\Big) = - \mathcal{U_\varepsilon}\Big(\frac{\partial u}{\partial x}\Big)\frac{\partial X}{\partial y_1}\Big(\frac{x}{\varepsilon^\alpha}\Big).$$ Hence, from iii) we get the desired convergences. \end{proof} \begin{remark} Notice that from standard elliptic regularity theory we can ensure that $u \in H^2(0,1)$. Note that, if we assume extra regularity, for instance $g \in C^0(\R, \R)$, we can define the function $X$ as follows $$X(y_1) = \int_{0}^{y_1}\Big(1 - \frac{1}{g(s)\mathcal{M}(\frac{1}{g})}\Big)\, ds,$$ which belongs to $C^1(0,L)$ and is $L$-periodic. Then, using similar arguments as in Corollary \ref{corr1} it follows that \begin{equation*} \begin{split} \lim_{\varepsilon \to 0}\Big|\Big|\Big| u^\varepsilon - u + \varepsilon^\alpha\frac{\partial u}{\partial x} X^\varepsilon\Big|\Big|\Big|_{H^1(R^\varepsilon)}=0,\quad \hbox{ where }X^\varepsilon(x)\equiv X\Big(\frac{x}{\varepsilon^\alpha}\Big), \; \forall x \in (0,1). \end{split} \end{equation*} \end{remark} \section{Extremely high oscillatory behavior, $\alpha >1$}\label{extrem} In this section we analyze the behavior of the solutions of the Neumann problem \eqref{OPI0} when the upper boundary of the thin domain presents very high oscillations.
Then, the thin domain is defined as follows \begin{equation}\label{thin3} R^\varepsilon = \Big\{ (x,y) \in \R^2 \; | \; x \in (0,1), \; 0 < y < \varepsilon \, g(x/\varepsilon^\alpha) \Big\}, \quad \alpha>1. \end{equation} We require that the function $g(\cdot)$ satisfies the hypothesis \textbf{(H{\scriptsize g})} from Section \ref{21}. Moreover, we assume that $0<g_0$, which implies that $R^\varepsilon$ is connected. We would like to point out that, even though we use the unfolding operator to get the homogenized limit problem, the approach is different from the two previous cases. The roughness is so strong that we cannot obtain a compactness theorem using the same arguments as in Theorem \ref{convergence theor1} or Theorem \ref{convergence theor2}. In particular, defining an operator $Z_\varepsilon$ analogous to the one of the previous cases, $$Z_\varepsilon := \frac{1}{\varepsilon^\alpha}\Big(\mathcal{T_\varepsilon(\varphi^\varepsilon)} - \frac{1}{|Y^*|}\int_{Y^*}\mathcal{T_\varepsilon}(\varphi^\varepsilon)\; dy_2 dy_1\Big),$$ does not help much to get a convergence result. Observe that in the case $\alpha>1$ the partial derivative with respect to $y_2$, $\frac{\partial Z_\varepsilon}{\partial y_2} = \varepsilon^{1 - \alpha} \mathcal{T_\varepsilon}\Big(\frac{\partial\varphi^\varepsilon}{\partial y}\Big),$ is not bounded. To overcome this difficulty we will divide the thin domain into two thin parts: one of them, $R^\varepsilon_{+}$, presents high oscillations and the other one, $R^\varepsilon_{-}$, is a non-oscillating thin domain. Then, we consider these two open sets \begin{align*} R^\varepsilon_+&= \Big\{ (x,y) \in \R^2 \; | \; x \in (0,1), \; \varepsilon g_0 < y < \varepsilon \, g(x/\varepsilon^\alpha) \Big\},\\ R^\varepsilon_-&= \Big\{ (x,y) \in \R^2 \; | \; x \in (0,1), \;0 < y < \varepsilon g_0 \Big\}.
\end{align*} Moreover, we set \begin{align*} Y^*&=\{ (y_1,y_2) \in \mathbb{R}^2 \; : \; 0< y_1 < L, \; 0< y_2 < g(y_1) \}, \\ Y^*_+&=\{ (y_1,y_2) \in \mathbb{R}^2 \; : \; 0< y_1 < L, \; g_0 < y_2 < g(y_1) \},\\ R_- &= \big\{ (x,y) \in \R^2 \; | \; x \in (0,1), \;0 < y < g_0 \big\}, \\ R_+ &= \big\{ (x,y) \in \R^2 \; | \; x \in (0,1), \;g_0 < y < g_1 \big\}. \end{align*} \begin{remark} Notice that the reference cell for the unfolding operator restricted to the oscillating part, $Y^*_+$, may be disconnected; recall that $g_0=\min_{x \in \R}\{g(x)\}$. \end{remark} We first introduce an operator which allows us to rescale $R^\varepsilon_-$ in order to work over a fixed domain. This operator is called the rescaling operator, and for any $\varphi \in L^2(R^\varepsilon_-)$ it is defined as follows \begin{equation}\label{rescaling} \Pi_\varepsilon (\varphi) (x, y) = \varphi(x, \varepsilon y), \quad \forall (x, y) \in R_- . \end{equation} \begin{proposition} \label{properties1} The rescaling operator $\Pi_\varepsilon$ has the following properties: \begin{enumerate} \item[i)] Let $\varphi \in L^1(R^\varepsilon_-)$. Then, \begin{equation}\label{integral} \int_{ R_-} \Pi_\varepsilon (\varphi)(x, y) \,dx dy = \frac{1}{\varepsilon}\int_{R ^\varepsilon_-} \varphi (x, y) \,dx dy. \end{equation} \item[ii)] $\Pi_\varepsilon$ is linear and continuous from $L^p(R^\varepsilon_-)$ to $L^p(R_-)$, $1\leq p \leq \infty$. In addition, the following relationship exists between their norms \begin{align*} &\|\Pi_\varepsilon( \varphi) \|_{ L^p(R_-)} = |||\varphi|||_{L^p(R^\varepsilon_-)} \quad \hbox{ for } 1\leq p \leq \infty.
\end{align*} \item[iii)] For $\varphi \in W^{1,p}(R^\varepsilon_-)$, $1\leq p \leq \infty$ we have $$\frac{\partial \Pi_\varepsilon (\varphi)}{\partial x}= \Pi_\varepsilon \Big(\frac{\partial \varphi}{\partial x}\Big), \quad \frac{\partial \Pi_\varepsilon (\varphi)}{\partial y}=\varepsilon \Pi_\varepsilon \Big(\frac{\partial \varphi}{\partial y}\Big).$$ \item[iv)] Let $ \phi \in L^p(0, 1)$, $1\leq p \leq \infty$. Then, considering $\phi$ as a function in $R^\varepsilon_-$ one has $ \Pi_\varepsilon(\phi) = \phi.$ \end{enumerate} \end{proposition} \begin{proof} These assertions are straightforward from the definition of the rescaling operator and their proofs are omitted. \end{proof} We obtain now the homogenization result. \begin{theorem}\label{hom3} Let $u^\varepsilon$ be the solution of problem \eqref{VFP1} with $f^\varepsilon \in L^2(R^\varepsilon)$ satisfying $||| f^\varepsilon |||_{L^2(R^\varepsilon)} \leq C$ for some positive constant $C$ independent of $\varepsilon>0$. Assume that there exists a function $\hat{f} \in L^2(0,1)$ such that the function $\hat{f}^\varepsilon(x) = \frac{1}{\varepsilon}\int_0^{\varepsilon g(x/\varepsilon^\alpha)} f^\varepsilon(x, y) \, dy$ satisfies $${\displaystyle \hat{f}^\varepsilon\weto \hat{f} \; \hbox{ weakly in } L^2(0,1).}$$ Then, there exists a unique element $u \in H^1(0,1)$ such that, as $\varepsilon$ goes to zero, \begin{align*} &\mathcal{T_\varepsilon}(u^\varepsilon) \buildrel \epsilon\to 0\over\longrightarrow u \quad \hbox{ s}-L^2\big( (0,1); H^1(Y^*)\big), \quad \lim_{\varepsilon \to 0}|||u^\varepsilon - u|||_{L^2(R^\varepsilon)}=0. \end{align*} Moreover, $u$ is the unique weak solution of the following Neumann problem \begin{equation} \label{GLP3pri} \left\{ \begin{gathered} -g_0 u_{xx} + \frac{|Y^*|}{L}u = \hat{f}(x), \quad x \in (0,1),\\ u'(0)=u'(1)=0. \end{gathered} \right.
\end{equation} \end{theorem} \begin{proof} Throughout this proof we denote by $\mathcal{T_\varepsilon}$ the unfolding operator associated to the cell $Y^*$, $\mathcal{T_\varepsilon}: L^2(R^\varepsilon)\to L^2((0,1)\times Y^*)$, and by $\mathcal{T^+_\varepsilon}$ the unfolding operator associated to the cell $Y^*_+$, $\mathcal{T^+_\varepsilon}: L^2(R^\varepsilon_+)\to L^2((0,1)\times Y^*_+)$. As we have already seen in the two previous sections, taking $u^\varepsilon$ as a test function in \eqref{VFP1} we easily get that there exists a constant $C$ independent of $\varepsilon$ such that \begin{equation}\label{a priori3} ||| u^\varepsilon |||_{H^1(R^\varepsilon)} \le C \quad \forall \varepsilon > 0. \end{equation} From \eqref{a priori3} and using Proposition \ref{convergence prop} we have that there exists $u \in H^1(0,1)$ such that \begin{equation}\label{limit+} \mathcal{T_\varepsilon}(u^\varepsilon) \buildrel \epsilon\to 0\over\longrightarrow u \quad \hbox{ s}-L^2\big( (0,1); H^1(Y^*)\big), \end{equation} \begin{equation}\label{limitpi} \lim_{\varepsilon \to 0}|||u^\varepsilon - u|||_{L^2(R^\varepsilon)}=0. \end{equation} In order to simplify the notation we denote the restriction of the solution to $R^\varepsilon_{+}$ and $R^{\varepsilon}_-$ as follows $u^\varepsilon_+:= u^\varepsilon|_{R^\varepsilon_+} \; \hbox{ and }\; u^\varepsilon_-:= u^\varepsilon|_{R^\varepsilon_-}.$ From the a priori estimate \eqref{a priori3} and property vi) in Proposition \ref{properties} we have that ${\displaystyle\mathcal{T^+_\varepsilon}\Big(\frac{\partial u^\varepsilon_+}{\partial x}\Big)}$ is bounded and then, by weak compactness, there exists a function $u_1 \in L^2\big( (0,1)\times Y^*_+\big)$ such that, up to subsequences, \begin{equation}\label{limit partial+} \mathcal{T^+_\varepsilon}\Big(\frac{\partial u^\varepsilon_+}{\partial x}\Big)\weto u_1 \quad \hbox{ w}-L^2\big( (0,1) \times Y^*_+\big).
\end{equation} Now, we will prove that $u_1(x, y_1, y_2) = 0$ for a.e. $(x, y_1, y_2) \in (0,1)\times Y^*_+.$ To do this, we will define suitable test functions. Observe that since $g_0=\min_{x \in \R}\{g(x)\}$ and $g(\cdot)$ is $L-$periodic there is, at least, a point $y_0 \in [0,L)$ where the minimum, $g_0$, is attained, that is, $g(y_0)=g_0$. Therefore, we have by definition that \begin{equation}\label{intersect} \{(y_0,y_2)\in \R^2 : y_2 \in (g_0, g_1)\} \cap Y^*_+= \emptyset. \end{equation} Assume first that $y_0>0$; later on we deal with $y_0=0$. Then, for any $\phi \in \mathcal{D}(0,y_0)$ we define the following function \begin{equation}\label{funj} \psi(y_1)= \left\{ \begin{aligned} \int_0^{y_1} \phi(z)\, dz \quad \hbox{if } 0\leq y_1 < y_0,\\ 0 \quad \hbox{if } y_0<y_1<L. \end{aligned} \right. \end{equation} Notice that $\psi$ can be extended by $L-$periodicity and $\psi \in C^\infty[0,y_0) \cup C^\infty(y_0,L)$. Then, we consider the following test function $$\varphi^\varepsilon(x, y)= \varepsilon^\alpha \tilde{\varphi}\Big(x, \frac{y}{\varepsilon}\Big) \psi\Big(\Big\{\frac{x}{\varepsilon^\alpha}\Big\}_L\Big),\quad (x, y)\in R^\varepsilon,$$ where $\varphi \in \mathcal{D}(R_+)$, $\psi$ is defined in \eqref{funj} and $\; \widetilde{}\; $ denotes the standard extension by zero.
Note that, in view of \eqref{intersect} and the definition of $\psi$, see \eqref{funj}, the functions $\varphi^\varepsilon$ are continuous in $R^\varepsilon.$ Then, applying the unfolding operator introduced in Definition \ref{unfold def} to the restriction of $\varphi^\varepsilon$ to the thin domain $R^\varepsilon_+$ we get \begin{eqnarray*} \mathcal{T}_\varepsilon^+(\varphi^\varepsilon) (x, y_1, y_2) = \left\{ \begin{array}{ll} \varepsilon^\alpha \varphi \Big( \varepsilon^\alpha \Big[\frac{x}{\varepsilon^\alpha}\Big]_{L}L + \varepsilon^\alpha y_1, y_2\Big)\psi(y_1) \quad \hbox{for} \quad (x, y_1, y_2) \in I^\varepsilon \times Y^*_+, \\ 0 \hspace{5.5cm} \hbox{for} \quad (x, y_1, y_2) \in \Lambda^\varepsilon \times Y^*_+. \end{array} \right. \end{eqnarray*} Moreover, by property vii) in Proposition \ref{properties} we have \begin{align*} &\mathcal{T}^+_\varepsilon\Big(\frac{\partial \varphi^\varepsilon}{\partial x}\Big) = \frac{1}{\varepsilon^\alpha} \frac{\partial}{\partial y_1}\mathcal{T}^+_\varepsilon(\varphi^\varepsilon)= \varepsilon^\alpha\mathcal{T}^+_\varepsilon\Big(\frac{\partial \varphi}{\partial x}\Big)\psi(y_1) + \psi'(y_1)\mathcal{T}^+_\varepsilon(\varphi),\\ & \mathcal{T}^+_\varepsilon\Big(\frac{\partial \varphi^\varepsilon}{\partial y}\Big) = \frac{1}{\varepsilon} \frac{\partial}{\partial y_2}\mathcal{T}^+_\varepsilon(\varphi^\varepsilon)= \varepsilon^{\alpha-1}\mathcal{T}^+_\varepsilon\Big(\frac{\partial \varphi}{\partial y}\Big)\psi(y_1).
\end{align*} Hence, since $\alpha > 1$ we get the following convergences \begin{equation} \begin{split}\label{conv31} &\mathcal{T}^+_\varepsilon(\varphi^\varepsilon) \buildrel \epsilon\to 0\over\longrightarrow 0 \quad \hbox{s}-L^2\big( (0,1)\times Y^*_+\big),\\ &\mathcal{T}^+_\varepsilon\Big(\frac{\partial \varphi^\varepsilon}{\partial y}\Big) \buildrel \epsilon\to 0\over\longrightarrow 0 \quad \hbox{s}-L^2\big( (0,1)\times Y^*_+\big),\\ &\mathcal{T}^+_\varepsilon\Big(\frac{\partial \varphi^\varepsilon}{\partial x}\Big) \buildrel \epsilon\to 0\over\longrightarrow \psi'(y_1)\varphi (x, y_2) \quad \hbox{s}-L^2\big( (0,1)\times Y^*_+\big). \end{split} \end{equation} Now, taking into account that $\varphi^\varepsilon$ vanishes in $R^\varepsilon_-$, we obtain the following integral equality from the weak formulation \eqref{VFP1} $$\int_{R^\varepsilon_+} \Big\{ \frac{\partial u^\varepsilon_+}{\partial x} \frac{\partial \varphi^\varepsilon}{\partial x} + \frac{\partial u^\varepsilon_+}{\partial y} \frac{\partial \varphi^\varepsilon}{\partial y} + u^\varepsilon_+ \varphi^\varepsilon \Big\} dx dy = \int_{R^\varepsilon_+} f ^\varepsilon\varphi^\varepsilon dx dy.$$ Applying the unfolding operator $\mathcal{T}^+_\varepsilon$ and taking into account that $||| f^\varepsilon |||_{L^2(R^\varepsilon)} \leq C$, Proposition \ref{uci bounded} and convergences \eqref{limit+}, \eqref{limit partial+} and \eqref{conv31}, we get in the limit $$\int_{(0,1) \times Y^*_+} u_1(x, y_1, y_2) \psi'(y_1) \varphi(x, y_2)\, dxdy_1dy_2=0.$$ This implies that for any $\varphi \in \mathcal{D}(R_+)$ and $\psi$ defined as \eqref{funj} we have $$\int_{(0,1) \times (g_0, g_1)} \varphi(x, y_2) \Big[ \int_{(0,L)} \tilde{u}_1(x, y_1, y_2) \psi'(y_1)\,dy_1\Big] \;dxdy_2=0.$$ Consequently, we get $$ \int_{(0,L)} \tilde{u}_1(x, y_1, y) \psi'(y_1)\,dy_1=0, \quad \textrm{ a.e.
for } (x, y) \in R_+.$$ Then, from definition \eqref{funj} we obtain that $$ \int_{(0,y_0)} \tilde{u}_1(x, y_1, y) \phi(y_1)\,dy_1=0, \quad \forall \phi \in \mathcal{D}(0,y_0) \textrm{ and a.e. for } (x, y) \in R_+.$$ Hence, we have that \begin{equation}\label{b0} \tilde{u}_1(x, y_1, y_2) = 0 \hbox{ for a.e. } (x, y_1, y_2) \in (0,1)\times(0,y_0)\times(g_0, g_1). \end{equation} Now, we repeat the same arguments defining $\psi$ as follows \begin{equation}\label{funj1} \psi(y_1)= \left\{ \begin{aligned} 0 \quad \hbox{if } 0\leq y_1 < y_0,\\ \int_{y_0}^{y_1} \phi(z)\, dz - \int_{y_0}^{L} \phi(z)\, dz \quad \hbox{if } y_0<y_1<L, \end{aligned} \right. \end{equation} where $\phi \in \mathcal{D}(y_0,L)$. Notice that $\psi$ is $L-$periodic and $\psi \in C^\infty[0,y_0) \cup C^\infty(y_0,L)$. Thus, using the same reasoning as above we get \begin{equation}\label{0b} \tilde{u}_1(x, y_1, y_2) = 0 \hbox{ for a.e. } (x, y_1, y_2) \in (0,1)\times(y_0,L)\times(g_0, g_1). \end{equation} Hence, from \eqref{b0} and \eqref{0b} we can conclude that $u_1(x, y_1, y_2) = 0 \hbox{ for a.e. } (x, y_1, y_2) \in (0,1)\times Y^*_+.$ Finally, note that in case $y_0=0$ we may define \begin{equation*} \psi(y_1)= \int_0^{y_1} \phi(z)\, dz \quad \hbox{if } 0<y_1<L, \end{equation*} for any $\phi \in \mathcal{D}(0,L)$. Thus, taking into account that in this case we have $$ \{(0,y_2)\in \R^2 : y_2 \in (g_0, g_1)\} \cap Y^*_+= \emptyset, \hbox{ and } \{(L,y_2)\in \R^2 : y_2 \in (g_0, g_1)\} \cap Y^*_+= \emptyset,$$ we can ensure that the following test function is well defined $$\varphi^\varepsilon(x, y)= \varepsilon^\alpha \tilde{\varphi}\Big(x, \frac{y}{\varepsilon}\Big) \psi\Big(\Big\{\frac{x}{\varepsilon^\alpha}\Big\}_L\Big),\quad (x, y)\in R^\varepsilon.$$ Then, using exactly the same arguments as in the previous case we obtain $u_1=0$.
Therefore, we get \begin{equation}\label{limit partial+1} \mathcal{T^+_\varepsilon}\Big(\frac{\partial u^\varepsilon_+}{\partial x}\Big)\weto 0 \quad \hbox{w}-L^2\big( (0,1) \times Y^*_+\big). \end{equation} As far as $u^\varepsilon_-$ is concerned, from the a priori estimate \eqref{a priori3} and taking into account properties ii) and iii) in Proposition \ref{properties1}, we know that there exists $u_- \in H^1(0,1)$ such that, up to subsequences, \begin{align}\label{conv resc} \Pi_\varepsilon(u^\varepsilon_-) \weto u_ - \, \, \hbox{w}-H^1\big( R_-\big), \quad \Pi_\varepsilon\Big(\frac{\partial u^\varepsilon_-}{\partial x}\Big) \weto \frac{\partial u_ -}{\partial x} \, \, \hbox{w}-L^2\big( R_-\big). \end{align} Moreover, from properties ii) and iv) of Proposition \ref{properties1} we have $$\|\Pi_\varepsilon( u^\varepsilon_-) - u \|_{ L^p(R_-)} = ||| u^\varepsilon_- - u|||_{L^p(R^\varepsilon_-)} \leq ||| u^\varepsilon - u|||_{L^p(R^\varepsilon)}.$$ Then, taking into account \eqref{limitpi} we obtain $$\Pi_\varepsilon(u^\varepsilon_-) \buildrel \epsilon\to 0\over\longrightarrow u \quad \hbox{s}-L^2\big( R_-\big),$$ which leads to $u(x) = u_-(x)$ for a.e. $x \in (0,1)$. To conclude the proof we obtain the limit weak formulation satisfied by $u$. Let us apply the unfolding and the rescaling operator to the original variational formulation \eqref{VFP1}.
Thus, we have \begin{align*} &\frac{1}{L}\int_{(0, 1) \times Y^*_+} \mathcal{T^+_\varepsilon}\Big(\frac{\partial u^\varepsilon}{\partial x}\Big) \mathcal{T^+_\varepsilon}\Big(\frac{\partial \phi}{\partial x}\Big)\,dx dy_1 dy_2 + \frac{1}{L}\int_{(0, 1) \times Y^*} \mathcal{T_\varepsilon}(u^\varepsilon)\mathcal{T_\varepsilon}(\phi) dx dy_1 dy_2+ \frac{1}{\varepsilon}\int_{R_{+1}^\varepsilon} \frac{\partial u^\varepsilon}{\partial x} \frac{\partial \phi}{\partial x}dx dy \\ &\quad + \frac{1}{\varepsilon}\int_{R_{1}^\varepsilon}u^\varepsilon\phi dx dy+ \int_{R_-} \Pi_\varepsilon\Big(\frac{\partial u^\varepsilon}{\partial x}\Big) \Pi_\varepsilon\Big(\frac{\partial \phi}{\partial x}\Big)dx dy = \frac{1}{\varepsilon}\int_{R^\varepsilon} f ^\varepsilon \phi \, dx dy, \;\; \forall \phi \in H^1(0, 1). \end{align*} Recall that $R_{+1}^\varepsilon$ and $R_{1}^\varepsilon$ are the subsets of $R^\varepsilon_{+}$ and $R^\varepsilon$, respectively, which contain the part of the unique cell that is not totally included in the thin set $R^\varepsilon_+$ or $R^\varepsilon$. Hence, taking into account the properties of the unfolding and the rescaling operator, the convergences obtained above and the assumption on the function $f^\varepsilon$, we can pass to the limit in the last equality, which leads to $$\frac{1}{L}\int_{(0, 1) \times Y^*} u\phi \, dx dy_1 dy_2 + \int_{R_-} \frac{\partial u}{\partial x} \frac{\partial \phi}{\partial x} \,dx dy = \int_{(0, 1)} \hat{f}\phi \, dx,\quad \forall \phi \in H^1(0,1).$$ Consequently, we get that $u \in H^1(0,1)$ satisfies $$\int_0^1 \Big\{g_0 \frac{\partial u}{\partial x} \frac{\partial \phi}{\partial x} + \frac{|Y^*|}{L} u\phi \Big\}\, dx= \int_{(0, 1)} \hat{f}\phi \, dx,\quad \forall \phi \in H^1(0,1),$$ which is the variational formulation of \eqref{GLP3pri}. \end{proof} \begin{remark}\label{extremq} Observe that in case $f^\varepsilon(x, y)=f(x)$ we get from the average convergence for periodic functions (see, e.g., \cite[p.
xvi]{CioPau}) $$\hat{f}^\varepsilon= g\Big(\frac{x}{\varepsilon^\alpha}\Big)f(x) \weto\hat{f} = \mathcal{M}(g)\, f=\Big(\frac{1}{L}\int_0^L g(y_1)\, dy_1\Big) f = \frac{|Y^*|}{L}\,f.$$ Hence the homogenized limit problem is given by $$ \left\{ \begin{gathered} - \frac{|Y^*_-|}{|Y^*| }u_{xx} + u = f, \quad x \in (0,1), \\ u'(0) = u' (1) = 0, \end{gathered} \right. $$ where $Y^*_-=\{(y_1, y_2)\in \R^2: y_1 \in (0,L), 0<y_2<g_0\}. $ Note that we recover the homogenized limit problem obtained in \cite{ArrPer2013} for Lipschitz domains. \end{remark} To end this section we obtain some strong convergences for the sequence of solutions which had not been obtained in previous papers. We show how the extremely oscillatory behavior affects the solutions. Indeed, we prove that the roughness is so strong that the gradient of the solutions tends to zero in the upper part. \begin{proposition}\label{strong5} Under the same hypotheses as in Theorem \ref{hom3}, the solution of problem \eqref{VFP1} satisfies the following convergences \begin{itemize} \item[i)] $\quad \Pi_\varepsilon(u^\varepsilon_-) \buildrel \epsilon\to 0\over\longrightarrow u \quad \hbox{ s}-H^1\big( R_-\big),\quad \Pi_\varepsilon\Big(\frac{\partial u^\varepsilon_-}{\partial y}\Big) \buildrel \epsilon\to 0\over\longrightarrow 0\quad \hbox{ s}-L^2\big( R_-\big),$ \begin{align} &\mathcal{T_\varepsilon^+}\Big(\frac{\partial u^\varepsilon_+}{\partial x}\Big)\buildrel \epsilon\to 0\over\longrightarrow 0 \quad \hbox{ s}-L^2\big( (0,1) \times Y^*_+\big),\quad \mathcal{T_\varepsilon^+}\Big(\frac{\partial u^\varepsilon_+}{\partial y}\Big)\buildrel \epsilon\to 0\over\longrightarrow 0 \quad \hbox{ s}-L^2\big( (0,1) \times Y^*_+\big),\label{3partial2}\\ &\lim_{\varepsilon \to 0}\frac{1}{\varepsilon}\int_{R_{+1}^\varepsilon} |\nabla u^\varepsilon|^2 dxdy=0\label{3uci strong}.
\end{align} \item[ii)] ${\displaystyle \lim_{\varepsilon \to 0}\Big|\Big|\Big|\frac{\partial u^\varepsilon}{\partial x}\Big|\Big|\Big|_{L^2(R^\varepsilon_+)}=0}$ \hbox{ and } ${\,\displaystyle\lim_{\varepsilon \to 0}\Big|\Big|\Big|\frac{\partial u^\varepsilon}{\partial y}\Big|\Big|\Big|_{L^2(R^\varepsilon_+)}=0.}$ \end{itemize} \end{proposition} \begin{proof} $i)$ To obtain the convergences, we take $u^\varepsilon -u$ as a test function in \eqref{VFP1}: \begin{equation}\label{3deriv} \int_{R^\varepsilon} \Big\{ \Big(\frac{\partial u^\varepsilon}{\partial x}\Big)^2 - \frac{\partial u^\varepsilon}{\partial x}\frac{\partial u}{\partial x} +\Big(\frac{\partial u^\varepsilon}{\partial y}\Big)^2 + (u^\varepsilon)^2 - u^\varepsilon u \Big\} dx dy = \int_{R^\varepsilon} f^\varepsilon(u^\varepsilon -u) dx dy. \end{equation} Applying the unfolding and the rescaling operators, we can pass to the limit in \eqref{3deriv}. Then, due to the convergences \eqref{limit partial+1}, \eqref{conv resc} and the strong convergence \eqref{limitpi}, we get \begin{align*} \int_{(0,1) \times Y^*_+} \Big\{ \mathcal{T^+_\varepsilon}\Big(\frac{\partial u^\varepsilon_+}{\partial x}\Big)^2 + \mathcal{T^+_\varepsilon}\Big(\frac{\partial u^\varepsilon_+}{\partial y}\Big)^2\Big\} dx dy_1 dy_2 + \frac{1}{\varepsilon}\int_{R_{+1}^\varepsilon} |\nabla u^\varepsilon|^2 dxdy\\ +\int_{R_-} \Big\{ \Pi_\varepsilon\Big(\frac{\partial u^\varepsilon_-}{\partial x}\Big)^2 - \Big(\frac{\partial u}{\partial x}\Big)^2+\Pi_\varepsilon\Big(\frac{\partial u^\varepsilon_-}{\partial y}\Big)^2 \Big\} dx dy \buildrel \varepsilon\to 0\over\longrightarrow 0.
\end{align*} Therefore, by weak lower semicontinuity, we have \begin{equation*} \begin{split} 0&\leq\liminf_{\varepsilon \to 0} \int_{R_-} \Big\{ \Pi_\varepsilon\Big(\frac{\partial u^\varepsilon_-}{\partial x}\Big)^2 - \Big(\frac{\partial u}{\partial x}\Big)^2 \Big\} dx dy\\ &\leq\limsup_{\varepsilon \to 0} \int_{R_-} \Big\{ \Pi_\varepsilon\Big(\frac{\partial u^\varepsilon_-}{\partial x}\Big)^2 - \Big(\frac{\partial u}{\partial x}\Big)^2 \Big\} dx dy\\ &\leq\lim_{\varepsilon \to 0}\left\{ \int_{R_-} \Big\{ \Pi_\varepsilon\Big(\frac{\partial u^\varepsilon_-}{\partial x}\Big)^2 - \Big(\frac{\partial u}{\partial x}\Big)^2 \Big\} dx dy+ \int_{R_-}\Pi_\varepsilon\Big(\frac{\partial u^\varepsilon_-}{\partial y}\Big)^2\, dx dy\right.\\ & \left.\quad+\int_{(0,1) \times Y^*_+} \Big\{ \mathcal{T^+_\varepsilon}\Big(\frac{\partial u^\varepsilon_+}{\partial x}\Big)^2 + \mathcal{T^+_\varepsilon}\Big(\frac{\partial u^\varepsilon_+}{\partial y}\Big)^2\Big\} dx dy_1 dy_2 + \frac{1}{\varepsilon}\int_{R_{+1}^\varepsilon} |\nabla u^\varepsilon|^2 dxdy\right\} = 0. \end{split} \end{equation*} Consequently, the desired convergences follow. \par\noindent $ii)$ From the convergences \eqref{3partial2}, \eqref{3uci strong} and property v) of Proposition \ref{averaging}, the convergences follow immediately. \end{proof} \end{document}
\begin{document} \title{ Understanding Domain Randomization for Sim-to-real Transfer } \begin{abstract} Reinforcement learning encounters many challenges when applied directly in the real world. Sim-to-real transfer is widely used to transfer the knowledge learned from simulation to the real world. Domain randomization---one of the most popular algorithms for sim-to-real transfer---has been demonstrated to be effective in various tasks in robotics and autonomous driving. Despite its empirical successes, theoretical understanding of why this simple algorithm works is limited. In this paper, we propose a theoretical framework for sim-to-real transfer, in which the simulator is modeled as a set of MDPs with tunable parameters (corresponding to unknown physical parameters such as friction). We provide sharp bounds on the sim-to-real gap---the difference between the value of the policy returned by domain randomization and the value of an optimal policy for the real world. We prove that sim-to-real transfer can succeed under mild conditions without any real-world training samples. Our theory also highlights the importance of using memory (i.e., history-dependent policies) in domain randomization. Our proof is based on novel techniques that reduce the problem of bounding the sim-to-real gap to the problem of designing efficient learning algorithms for infinite-horizon MDPs, which we believe are of independent interest. \end{abstract} \section{Introduction} \label{sec: introduction} Reinforcement Learning (RL) is concerned with sequential decision making, in which the agent interacts with the environment to maximize its cumulative rewards. This framework has achieved tremendous empirical successes in various fields such as Atari games, Go and StarCraft~\citep{mnih2013playing,silver2017mastering,vinyals2019grandmaster}. However, state-of-the-art algorithms often require a large number of training samples to achieve such performance.
While feasible in applications that have a good simulator, such as the examples above, these methods are limited in applications where interactions with the real environment are costly and risky, such as healthcare and robotics. One solution to this challenge is \emph{sim-to-real transfer}~\citep{floreano2008evolutionary,kober2013reinforcement}. The basic idea is to train an RL agent in a simulator that approximates the real world and then transfer the trained agent to the real environment. This paradigm has been widely applied, especially in robotics~\citep{rusu2017sim,peng2018sim,chebotar2019closing} and autonomous driving \citep{pouyanfar2019roads,niu2021dr2l}. Sim-to-real transfer is appealing as it provides an essentially unlimited amount of data to the agent and reduces the costs and risks of training. However, sim-to-real transfer faces the fundamental challenge that the policy trained in the simulated environment may have degraded performance in the real world due to the \emph{sim-to-real gap}---the mismatch between simulated and real environments. In addition to building higher-fidelity simulators to alleviate this gap, \emph{domain randomization} is another popular method~\citep{sadeghi2016cad2rl,tobin2017domain,peng2018sim,andrychowicz2020learning}. Instead of training the agent in a single simulated environment, domain randomization randomizes the dynamics of the environment, thus exposing the agent to a diverse set of environments in the training phase. Policies learned entirely in the simulated environment with domain randomization can be directly transferred to the physical world with good performance~\citep{sadeghi2016cad2rl,matas2018sim,andrychowicz2020learning}. In this paper, we focus on understanding sim-to-real transfer and domain randomization from a theoretical perspective.
The empirical successes raise the question: can we provide guarantees for the sub-optimality gap of a policy that is trained in a simulator with domain randomization and directly transferred to the physical world? To do so, we formulate the simulator as a set of MDPs with tunable latent variables, which correspond to unknown parameters such as the friction coefficient or wind velocity in the real physical world. We model the training process with domain randomization as finding an optimal history-dependent policy for a \emph{latent MDP}, in which an MDP is randomly drawn from a set of MDPs in the simulator at the beginning of each episode. Our contributions can be summarized as follows: \begin{itemize} \item We propose a novel formulation of sim-to-real transfer and establish the connection between domain randomization and the latent MDP model~\citep{kwon2021rl}. The latent MDP model illustrates the uniform sampling nature of domain randomization, and helps to analyze the sim-to-real gap for the policy obtained from domain randomization. \item We study the optimality of domain randomization in three different settings. Our results indicate that the sim-to-real gap of the policy trained in simulation can be $o(H)$ when the randomized simulator class is finite or satisfies a certain smoothness condition, where $H$ is the horizon of the real-world interaction. We also provide a lower bound showing that such benign conditions are necessary for efficient learning. Our theory highlights the importance of using memory (i.e., history-dependent policies) in domain randomization. \item To analyze the optimality of domain randomization, we propose a novel proof framework that reduces the problem of bounding the sim-to-real gap of domain randomization to the problem of designing efficient learning algorithms for infinite-horizon MDPs, which we believe is of independent interest.
\item As a byproduct of our proof, we provide the first provably efficient model-based algorithm for learning infinite-horizon average-reward MDPs with general function approximation (Algorithm~\ref{alg: general_opt_alg} in Appendix~\ref{appendix: infinite simulator class}). Our algorithm achieves a regret bound of $\tilde{O}(D\sqrt{d_e T})$, where $T$ is the total number of timesteps and $d_e$ is a complexity measure of a certain function class $\caF$ that depends on the eluder dimension~\citep{russo2013eluder,osband2014model}. \end{itemize} \section{Related Work} \label{sec: related_work} \paragraph{Sim-to-Real and Domain Randomization} The basic idea of sim-to-real is to first train an RL agent in simulation and then transfer it to the real environment. This idea has been widely applied to problems such as robotics~\cite[e.g.,][]{ng2006autonomous,bousmalis2018using,tan2018sim, andrychowicz2020learning} and autonomous driving~\cite[e.g.,][]{pouyanfar2019roads,niu2021dr2l}. To alleviate the influence of the reality gap, previous works have proposed different methods to help with sim-to-real transfer, including progressive networks~\citep{rusu2017sim}, inverse dynamics models~\citep{christiano2016transfer} and Bayesian methods~\citep{cutler2015efficient,pautrat2018bayesian}. Domain randomization is an alternative approach that makes the learned policy more adaptive to different environments~\citep{sadeghi2016cad2rl,tobin2017domain,peng2018sim,andrychowicz2020learning}, thus greatly reducing the number of real-world interactions. There are also theoretical works related to sim-to-real transfer. \cite{jiang2018pac} use the number of different state-action pairs as a measure of the gap between the simulator and the real environment. Under the assumption that the number of different pairs is constant, they prove the hardness of sim-to-real transfer and propose efficient adaptation algorithms under further conditions.
\cite{feng2019does} prove that an approximate simulator model can effectively reduce the sample complexity in the real environment by eliminating sub-optimal actions from the policy search space. \cite{zhong2019pac} formulate a theoretical sim-to-real framework using rich observation Markov decision processes (ROMDPs), and show that the transfer can result in a smaller real-world sample complexity. None of these results study the benefits of domain randomization in sim-to-real transfer. Furthermore, all the above works require real-world samples to fine-tune their policy during training, while our work and the domain randomization algorithm do not. \paragraph{POMDPs and Latent MDPs} Partially observable Markov decision processes (POMDPs) are a general framework for sequential decision-making problems in which the state is not fully observable~\citep{smallwood1973optimal,kaelbling98planning,vlassis2012computational,jin2020sample,xiong2021sublinear}. Latent MDPs~\citep{kwon2021rl}, or LMDPs, are a special type of POMDP, in which the real environment is randomly sampled from a set of MDPs at the beginning of each episode. This model has been widely investigated under different names, such as hidden-model MDPs and multi-model MDPs. There are also results studying the planning problem in LMDPs when the true parameters of the model are given~\citep{chades2012momdps,buchholz2019computation,steimle2021multi}. \cite{kwon2021rl} consider the regret minimization problem for LMDPs, and provide efficient learning algorithms under different conditions. We remark that all the works mentioned above focus on the problem of finding optimal policies for POMDPs or latent MDPs, which is orthogonal to the central problem of this paper---bounding the performance gap when transferring the optimal policies of latent MDPs from simulation to the real environment.
\paragraph{Infinite-horizon Average-Reward MDPs} Recent theoretical progress has produced many provably sample-efficient algorithms for RL in the infinite-horizon average-reward setting. Nearly matching upper and lower bounds are known for the tabular setting~\citep{jaksch2010near,fruit2018efficient,zhang2019regret,wei2020model}. Beyond the tabular case, \cite{wei2021learning} propose efficient algorithms for infinite-horizon MDPs with linear function approximation. To the best of our knowledge, our result (Algorithm~\ref{alg: general_opt_alg}) is the first efficient algorithm with near-optimal regret for infinite-horizon average-reward MDPs with general function approximation. \section{Preliminaries} \label{sec: preliminaries} \subsection{Episodic MDPs} We consider episodic RL problems where each MDP is specified by $\caM = (\caS,\caA, P, R, H, s_1)$. $\caS$ and $\caA$ are the state and action spaces with cardinalities $S$ and $A$, respectively. We assume that $S$ and $A$ are finite but can be extremely large. $P: \caS \times \caA \rightarrow \Delta(\caS)$ is the transition kernel, so that $P(\cdot|s,a)$ gives the distribution over next states when action $a$ is taken in state $s$; $R: \caS \times \caA \rightarrow [0,1]$ is the reward function. $H$ is the number of steps in one episode. For simplicity, we assume the agent always starts from the same state in each episode, and use $s_1$ to denote the initial state at step $h=1$. It is straightforward to extend our results to the case with random initialization. At step $h \in [H]$, the agent observes the current state $s_h \in \caS$, takes action $a_h \in \caA$, receives reward $R(s_h,a_h)$, and transitions to state $s_{h+1}$ with probability $P(s_{h+1}|s_h,a_h)$. The episode ends when $s_{H+1}$ is reached. We consider the history-dependent policy class $\Pi$, where $\pi \in \Pi$ is a collection of mappings from history observations to distributions over actions.
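As an informal illustration of this interaction protocol (a sketch of our own, not from the paper; the tabular arrays \texttt{P}, \texttt{R} and the function names are our assumptions), one episode with a history-dependent policy can be simulated as follows:

```python
import numpy as np

def rollout(P, R, H, policy, s1=0, seed=None):
    """Simulate one episode of a tabular episodic MDP.

    P: (S, A, S) array, P[s, a, s'] = transition probability.
    R: (S, A) array of rewards in [0, 1].
    policy(history, h): maps the observed trajectory (s_1, a_1, ..., s_h)
    and the step index h to a distribution over actions."""
    rng = np.random.default_rng(seed)
    s, history, total = s1, (s1,), 0.0
    for h in range(H):
        a = rng.choice(P.shape[1], p=policy(history, h))
        total += R[s, a]
        s = int(rng.choice(P.shape[2], p=P[s, a]))
        history = history + (a, s)
    return total

# A history-independent (here: uniformly random) policy is a special case.
def uniform_policy(history, h):
    return np.ones(2) / 2
```

The function returns the cumulative reward over the $H$ steps; a genuinely history-dependent policy may condition on the whole tuple \texttt{history}, which is what lets it implicitly infer the environment later in the paper.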
Specifically, we use ${traj}_h = \{(s_1,a_1,s_2,a_2, \cdots, s_h) ~|~ s_i \in \caS, a_i \in \caA, i \in [h] \}$ to denote the set of all possible history trajectories up to step $h$. We define a policy $\pi \in \Pi$ to be a collection of $H$ policy functions $\{\pi_h: {traj}_h \rightarrow \Delta(\caA)\}_{h\in [H]}$. We define $V^{\pi}_{\caM,h}: \caS \rightarrow \mathbb{R}$ to be the value function at step $h$ under policy $\pi$ on MDP $\caM$, i.e., $V_{\caM,h}^{\pi}(s)=\mathbb{E}_{\caM, \pi}[\sum_{t=h}^{H} R(s_{t}, a_{t}) \mid s_{h}=s].$ Accordingly, we define $Q^{\pi}_{\caM,h}: \caS \times \caA \rightarrow \mathbb{R}$ to be the Q-value function at step $h$: $Q_{\caM,h}^{\pi}(s,a)=\mathbb{E}_{\caM, \pi}[R(s_h,a_h) + \sum_{t=h+1}^{H} R(s_{t}, a_{t}) \mid s_{h}=s, a_h = a].$ We use $\pi^*_{\caM}$ to denote the optimal policy for a single MDP $\caM$. It can be shown that there exists $\pi^*_{\caM}$ such that the policy at step $h$ depends only on the state at step $h$ and not on any prior history. That is, $\pi^*_{\caM}$ can be expressed as a collection of $H$ policy functions mapping from $\caS$ to $\Delta(\caA)$. We use $V^*_{\caM, h}$ and $Q^*_{\caM,h}$ to denote the optimal value and Q-functions under the optimal policy $\pi^*_{\caM}$ at step $h$. \subsection{Practical Implementation of Domain Randomization} In this subsection, we briefly introduce how domain randomization works in practical applications. Domain randomization is a popular technique for improving domain transfer~\citep{tobin2017domain,peng2018sim,matas2018sim}, which is often used for zero-shot transfer when the target domain is unknown or cannot be easily used for training. For example, by highly randomizing the rendering settings for their simulated training set, \cite{sadeghi2016cad2rl} trained vision-based controllers for a quadrotor using only synthetically rendered scenes. \cite{andrychowicz2020learning} studied the problem of dexterous in-hand manipulation.
The training is performed entirely in a simulated environment in which they randomize physical parameters of the system, such as friction coefficients, and vision properties, such as the object's appearance. To apply domain randomization, the first step is usually to build a simulator that is close to the real environment. The simulated model is further improved to match the physical system more closely through calibration. Though the simulation is still a rough approximation of the physical setup after these engineering efforts, these steps ensure that the randomized simulators generated by domain randomization can cover the real-world variability. During the training phase, many aspects of the simulated environment are randomized in each episode in order to help the agent learn a policy that generalizes to reality. The policy trained with domain randomization can be represented using a recurrent neural network with memory, such as an LSTM~\citep{yu2018policy,andrychowicz2020learning,doersch2019sim2real}. Such a memory-augmented structure allows the policy to potentially identify the properties of the current environment and adapt its behavior accordingly. With sufficient data sampled from the simulator, the agent can find a near-optimal policy w.r.t.\ the average value function over a variety of simulation environments. This policy has shown great adaptivity in many previous results, and can be directly applied to the physical world without any real-world fine-tuning~\citep{sadeghi2016cad2rl,matas2018sim,andrychowicz2020learning}. \section{Formulation} In this section, we propose our theoretical formulation of sim-to-real transfer and domain randomization. The corresponding models will be used to analyze the optimality of domain randomization in the next section, and can also serve as a starting point for future research on sim-to-real.
\subsection{Sim-to-real Transfer} In this paper, we model the simulator as a set of MDPs with tunable latent parameters. We consider an MDP set $\caU$ representing the simulator model with joint state space $\caS$ and joint action space $\caA$. Each MDP $\caM = (\caS,\caA, P_{\caM}, R, H, s_1)$ in $\caU$ has its own transition dynamics $P_{\caM}$, which corresponds to an MDP with a certain choice of latent parameters. Our results can be easily extended to the case where the rewards are also influenced by the latent parameters. We assume that there exists an MDP $\caM^* \in \caU$ that represents the dynamics of the real environment. We can now explain our general framework of sim-to-real. For simplicity, we assume that during the simulation phase (or training phase), we are given the entire set $\caU$, which represents the MDPs under different tunable latent parameters. Equivalently, the learning agent is allowed to interact with any MDP $\caM \in \caU$ in an arbitrary fashion and sample an arbitrary number of trajectories. However, we do not know which MDP $\caM \in \caU$ represents the real environment. The objective of sim-to-real transfer is to find a policy $\pi$ purely based on $\caU$ that performs well in the real environment. In particular, we measure the performance in terms of the \emph{sim-to-real gap}, defined as the difference between the value of the learned policy $\pi$ and the value of an optimal policy for the real world: \begin{equation} \text{Gap}(\pi) = V^{*}_{\caM^*,1}(s_1) - V^{\pi}_{\caM^*,1}(s_1). \end{equation} We remark that in our framework, the policy $\pi$ is learned exclusively in simulation without the use of any real-world samples. We study this framework because (1) our primary interest, the domain randomization algorithm, does not use any real-world samples for training; and (2) we would like to focus on the problem of knowledge transfer from simulation to the real world.
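For a small tabular instance, the gap defined above can be computed exactly by backward induction. The sketch below is our own (it evaluates Markovian policies only, for brevity) and is not part of the paper:

```python
import numpy as np

def values(P, R, H, pi=None):
    """Backward induction on an episodic tabular MDP.

    Returns the step-1 value vector. With pi=None, the optimal values
    V*_1 are computed; otherwise pi is an (H, S, A) array of action
    probabilities and V^pi_1 is returned."""
    V = np.zeros(P.shape[0])
    for h in reversed(range(H)):
        Q = R + P @ V  # Q_h(s,a) = R(s,a) + E_{s' ~ P(.|s,a)}[V_{h+1}(s')]
        V = Q.max(axis=1) if pi is None else (pi[h] * Q).sum(axis=1)
    return V

def sim_to_real_gap(P_real, R, H, pi, s1=0):
    """Gap(pi) = V*_{M*,1}(s_1) - V^pi_{M*,1}(s_1) on the real MDP M*."""
    return values(P_real, R, H)[s1] - values(P_real, R, H, pi)[s1]
```

For instance, with rewards $R(s,a) = a \in \{0,1\}$ the optimal value is $H$, while a uniformly random policy earns $H/2$, so its gap is $H/2$; the gap of a history-dependent policy would be estimated by rolling it out instead.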
The more general learning paradigm that allows fine-tuning of the policy learned in simulation using real-world samples can be viewed as a combination of sim-to-real transfer and standard on-policy reinforcement learning, which we leave as an interesting topic for future research. \subsection{Domain Randomization and LMDPs} We first introduce latent Markov decision processes (LMDPs) and then explain domain randomization from the viewpoint of LMDPs. An LMDP can be represented as $(\caU,\nu)$, where $\caU$ is a set of MDPs with joint state space $\caS$ and joint action space $\caA$, and $\nu$ is a distribution over $\caU$. Each MDP $\caM = (\caS,\caA, P_{\caM}, R, H, s_1)$ in $\caU$ has its own transition dynamics $P_{\caM}$, which may differ from those of the other MDPs. At the start of an episode, an MDP $\caM \in \caU$ is randomly chosen according to the distribution $\nu$. The agent does not know explicitly which MDP is sampled, but she is allowed to interact with this MDP $\caM$ for one entire episode. The domain randomization algorithm first specifies a distribution over the tunable parameters, which equivalently gives a distribution $\nu$ over the MDPs in the simulator $\caU$. This induces an LMDP with distribution $\nu$. The algorithm then samples trajectories from this LMDP and runs RL algorithms in order to find a near-optimal policy of this LMDP. We consider the ideal scenario in which the domain randomization algorithm eventually finds the globally optimal policy of this LMDP, which we formulate as a domain randomization oracle as follows: \begin{definition} (Domain Randomization Oracle) Let $\caU$ be the set of MDPs generated by domain randomization and $\nu$ be the uniform distribution over $\caU$. The domain randomization oracle returns an optimal history-dependent policy $\pi^*_{\text{DR}}$ of the LMDP $(\caU, \nu)$: \begin{align} \label{eqn: definition of pi*} \pi^*_{\text{DR}} = \argmax_{\pi \in \Pi} \mathbb{E}_{\caM \sim \nu} V^{\pi}_{\caM,1} (s_1).
\end{align} \end{definition} Since an LMDP is a special case of a POMDP, its optimal policy $\pi^*_{\text{DR}}$ will in general depend on history. This is in sharp contrast with the optimal policy of an MDP, which is history-independent. We emphasize that both the memory-augmented policy and the randomization of the simulated environment are critical to the optimality guarantee of domain randomization. We also note that we do not restrict the learning algorithm used to find the policy $\pi^*_{\text{DR}}$, which can be either model-based or model-free. Moreover, we do not explicitly define the behavior of $\pi^*_{\text{DR}}$; the only thing we know about $\pi^*_{\text{DR}}$ is that it satisfies the optimality condition defined in Equation~\ref{eqn: definition of pi*}. In this paper, we aim to bound the sim-to-real gap of $\pi^*_{\text{DR}}$, i.e., $\text{Gap}(\pi^*_{\text{DR}})$, under different regimes. \section{Main Results} \label{sec: main_results} We are ready to present the sim-to-real gap of $\pi^*_{\text{DR}}$ in this section. We study the gap in three different settings under our sim-to-real framework: a finite simulator class (the cardinality $|\caU|$ is finite) with the separation condition (the MDPs in $\caU$ are distinct), a finite simulator class without the separation condition, and an infinite simulator class. During our analysis, we mainly study the long-horizon setting where $H$ is relatively large compared with the other parameters. This is a challenging setting that has been widely studied in recent years~\citep{gupta2019relay,mandlekar2020learning,pirk2020modeling}. We show that the sim-to-real gap of $\pi^*_{\text{DR}}$ is only $O(\log^3(H))$ for the finite simulator class with the separation condition, and only $\tilde{O}(\sqrt{H})$ in the last two settings, matching the best possible lower bound in terms of $H$. In our analysis, we assume that the MDPs in $\caU$ are communicating MDPs with a bounded diameter.
\begin{assumption}[Communicating MDPs~\citep{jaksch2010near}] \label{assumption: communicating MDP} The diameter of any MDP $\caM \in \caU$ is bounded by $D$. That is, consider the stochastic process defined by a stationary policy $\pi: \mathcal{S} \rightarrow \mathcal{A}$ on an MDP with initial state $s$. Let $T(s'|\caM, \pi, s)$ denote the random variable for the first time step in which state $s'$ is reached in this process; then $\max _{s \neq s^{\prime} \in \mathcal{S}} \min _{\pi: \mathcal{S} \rightarrow \mathcal{A}} \mathbb{E}\left[T\left(s^{\prime} \mid \caM, \pi, s\right)\right] \leq D.$ \end{assumption} This is a natural assumption widely used in the literature~\citep{jaksch2010near,agrawal2017posterior,fruit2020improved}. The communicating MDP model also covers many real-world tasks in robotics. For example, changing the position or angle of a mechanical arm only costs constant time. Moreover, the diameter assumption is necessary under our framework. \begin{proposition} \label{prop: lower bound without diameter assumption} Without Assumption~\ref{assumption: communicating MDP}, there exists a hard instance $\caU$ such that $\operatorname{Gap}(\pi^*_{\text{DR}}) = \Omega(H)$. \end{proposition} We prove Proposition~\ref{prop: lower bound without diameter assumption} in Appendix~\ref{appendix: proof of prop 1}. Note that the worst possible gap of any policy is $H$, so $\pi^*_{\text{DR}}$ becomes ineffective without Assumption~\ref{assumption: communicating MDP}. \subsection{Finite Simulator Class With Separation Condition} As a starting point, we show the sim-to-real gap when the MDP set $\caU$ is a finite set with cardinality $M$. Intuitively, a desired property of $\pi^*_{\text{DR}}$ is the ability to identify the environment the agent is exploring within a few steps.
This is because $\pi^*_{\text{DR}}$ is trained under uniformly random environments, so we hope it can learn to tell the differences between environments. As long as $\pi^*_{\text{DR}}$ has this property, the agent is able to identify the environment dynamics quickly and behave optimally afterwards (note that the MDP set $\caU$ is known to the agent). Before presenting the general results, we first examine a simpler case where all MDPs in $\caU$ are distinct. Concretely, we assume that any two MDPs in $\caU$ are well-separated on at least one state-action pair. Note that this assumption is much weaker than the separation condition in \citet{kwon2021rl}, which assumes a strong separation condition for every state-action pair. \begin{assumption}[$\delta$-separated MDP set] \label{assumption: separated MDPs} For any $\caM_1, \caM_2 \in \caU$, there exists a state-action pair $(s,a) \in \caS \times \caA$ such that the $L_1$ distance between the next-state distributions of the two MDPs is at least $\delta$, i.e., $\left\|\left(P_{\caM_{1}}-P_{\caM_{2}}\right)(\cdot \mid s, a)\right\|_{1} \geq \delta.$ \end{assumption} The following theorem shows the sim-to-real gap of $\pi^*_{\text{DR}}$ for $\delta$-separated MDP sets. \begin{theorem} \label{theorem: well-separated gap} Under Assumption~\ref{assumption: communicating MDP} and Assumption~\ref{assumption: separated MDPs}, for any $\caM \in \caU$, the sim-to-real gap of $\pi^*_{\text{DR}}$ is at most \begin{align} \operatorname{Gap}({\pi^*_{\text{DR}}}) = O\left(\frac{DM^3\log(MH)\log^2(SMH/\delta)}{\delta^4}\right). \end{align} \end{theorem} The proof of Theorem~\ref{theorem: well-separated gap} is deferred to Appendix~\ref{appendix: omitted proof in setting 1}. Though the dependence on $M$ and $\delta$ may not be tight, our bound has only poly-logarithmic dependence on the horizon $H$.
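For tabular MDPs the separation condition can be checked directly, since the relevant quantity is simply the largest $L_1$ distance between next-state distributions over all state-action pairs. A sketch (our own code and naming, not from the paper):

```python
import numpy as np

def separation(P1, P2):
    """max over (s, a) of || (P_1 - P_2)(. | s, a) ||_1 for two
    (S, A, S) transition arrays."""
    return np.abs(P1 - P2).sum(axis=-1).max()

def is_delta_separated(mdps, delta):
    """A finite class is delta-separated iff every pair of transition
    arrays attains L_1 distance at least delta at some (s, a)."""
    return all(separation(mdps[i], mdps[j]) >= delta
               for i in range(len(mdps)) for j in range(i + 1, len(mdps)))
```

Since each $P(\cdot \mid s,a)$ is a probability vector, the largest possible separation is $2$ (disjoint supports), attained by, e.g., two deterministic MDPs that move to different states.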
The main difficulty in proving Theorem~\ref{theorem: well-separated gap} is that we do not know exactly what $\pi^*_{\text{DR}}$ does, even though we know a simple and clean strategy for the real-world interaction with a minimal sim-to-real gap: first visit the state-action pairs that help the agent identify the environment quickly, and then follow the optimal policy of the real MDP $\caM^*$ after identifying $\caM^*$. Therefore, we use a novel constructive argument in the proof. We construct a base policy that implements the idea mentioned above, and show that $\pi^*_{\text{DR}}$ cannot be much worse than the base policy. The proof overview can be found in Section~\ref{sec: proof overview}. \subsection{Finite Simulator Class Without Separation Condition} We now generalize the setting and study the sim-to-real gap of $\pi^*_{\text{DR}}$ when $\caU$ is finite but not necessarily a $\delta$-separated MDP set. Surprisingly, we show that $\pi^*_{\text{DR}}$ can still achieve an $\tilde{O}(\sqrt{H})$ sim-to-real gap when $|\caU| = M$. \begin{theorem} \label{theorem: finite class gap} Under Assumption~\ref{assumption: communicating MDP}, when the MDP set $\caU$ induced by domain randomization is a finite set with cardinality $M$, the sim-to-real gap of $\pi^*_{\text{DR}}$ is upper bounded by \begin{align} \operatorname{Gap}({\pi^*_{\text{DR}}}) = O\left(D\sqrt{M^3H \log(MH)}\right). \end{align} \end{theorem} Theorem~\ref{theorem: finite class gap} is proved in Appendix~\ref{appendix: omitted proof in setting 2}. This theorem highlights the importance of randomization and memory in domain randomization algorithms \citep{sadeghi2016cad2rl,tobin2017domain,peng2018sim,andrychowicz2020learning}. With both of them, we successfully reduce the worst possible gap of $\pi^*_{\text{DR}}$ from the order of $H$ to the order of $\sqrt{H}$, so the per-step loss is only $\tilde{O}(H^{-1/2})$.
Without randomization, it is not possible to reduce the worst possible gap (i.e., the sim-to-real gap), because the policy is not even trained on all environments. Without memory, the policy is not able to implicitly ``identify'' the environments, so it cannot achieve sublinear loss in the worst case. We also use a constructive argument to prove Theorem~\ref{theorem: finite class gap}. However, it is more difficult to construct the base policy because, without the separation condition (Assumption~\ref{assumption: separated MDPs}), there is no obvious strategy to minimize the gap. Fortunately, we observe that the base policy is also a memory-based policy, which can essentially be viewed as an algorithm that seeks to minimize the sim-to-real gap in an unknown underlying MDP in $\caU$. Therefore, we connect the sim-to-real gap of the base policy with the regret bounds of algorithms for \textit{infinite-horizon average-reward MDPs} \citep{bartlett2012regal, fruit2018efficient, zhang2019regret}. The proof overview is deferred to Section~\ref{sec: proof overview}. To illustrate the hardness of minimizing the worst-case gap, we prove the following lower bound on $\text{Gap}(\pi)$, showing that any policy must suffer a gap of at least $\Omega(\sqrt{H})$. \begin{theorem} \label{thm: lower bound} Under Assumption~\ref{assumption: communicating MDP}, suppose $A \geq 10, SA \geq M \geq 100, D \geq 20 \log_A{M}, H \geq DM$; then for any history-dependent policy $\pi = \{\pi_h: traj_h \rightarrow \caA\}_{h=1}^H$, there exists a set of $M$ MDPs $\caU = \{\mathcal{M}_m\}_{m=1}^{M}$ and a choice of $\mathcal{M}^* \in \caU$ such that $\operatorname{Gap}(\pi)$ is at least $\Omega(\sqrt{DMH}).$ \end{theorem} The proof of Theorem~\ref{thm: lower bound} follows the idea of the lower bound proof for tabular MDPs~\citep{jaksch2010near}, and is deferred to Appendix~\ref{appendix: lower bound proof}.
This lower bound implies that an $\Omega(\sqrt{H})$ sim-to-real gap is unavoidable for the policy $\pi^*_{\text{DR}}$ when it is directly transferred to the real environment. \subsection{Infinite Simulator Class} In real-world scenarios, the MDP class is very likely to be extremely large. For instance, many physical parameters such as surface friction coefficients and robot joint damping coefficients are sampled uniformly from a continuous interval in the Dexterous Hand Manipulation algorithms \citep{andrychowicz2020learning}. In these cases, the induced MDP set $\caU$ is large and may even be infinite. A natural question is whether we can extend our analysis to the infinite simulator class and provide a corresponding sim-to-real gap. Intuitively, since the domain randomization approach returns the policy that is optimal on average, $\pi^*_{\text{DR}}$ can perform badly in the real world $\caM^*$ if most MDPs in the randomized set differ substantially from $\caM^*$. In other words, $\caU$ must be ``smooth'' near $\caM^*$ for domain randomization to return a nontrivial policy. By ``smoothness'', we mean that there is a positive probability that the uniform distribution $\nu$ returns an MDP that is close to $\caM^*$. This is necessary because the probability that $\nu$ samples exactly $\caM^*$ from an infinite simulator class is $0$, so domain randomization cannot work at all if such smoothness does not hold. Formally, we assume there is a distance measure $d(\caM_1, \caM_2)$ on $\caU$ between two MDPs $\caM_1$ and $\caM_2$. Define the $\epsilon$-neighborhood $\caC_{\caM^*, \epsilon}$ of $\caM^*$ as $\caC_{\caM^*, \epsilon} \defeq \{\caM \in \caU: d(\caM, \caM^*) \leq \epsilon\}$. 
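As a concrete (hypothetical) instantiation of these definitions, the sketch below represents transition dynamics as nested lists, uses the maximum over state-action pairs of the $L_1$ distance between next-state distributions as one possible choice of $d$ (the paper only assumes \emph{some} distance measure on $\caU$), and filters $\caU$ down to the $\epsilon$-neighborhood $\caC_{\caM^*,\epsilon}$.

```python
def l1_mdp_distance(P1, P2):
    # One natural choice of d: the max over state-action pairs of the
    # L1 distance between the next-state distributions.
    return max(
        sum(abs(p - q) for p, q in zip(P1[s][a], P2[s][a]))
        for s in range(len(P1)) for a in range(len(P1[s]))
    )

def neighborhood(U, M_star, eps, d=l1_mdp_distance):
    # The eps-neighborhood C_{M*,eps} = {M in U : d(M, M*) <= eps}.
    return [M for M in U if d(M, M_star) <= eps]

# Toy simulator class: two states, one action, P[s][a] = next-state
# distribution.  These numbers are illustrative only.
M_star = [[[0.5, 0.5]], [[0.5, 0.5]]]
M_near = [[[0.6, 0.4]], [[0.5, 0.5]]]   # d(M_near, M_star) = 0.2
M_far  = [[[1.0, 0.0]], [[0.0, 1.0]]]   # d(M_far,  M_star) = 1.0
U = [M_star, M_near, M_far]
```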
The smoothness condition is formally stated as follows: \begin{assumption}[Smoothness near $\caM^*$] \label{assumption: Lipchitz} There exist a positive real number $\epsilon_0$ and a Lipschitz constant $L$ such that, for the policy $\pi^*_{\text{DR}}$, the value function of any two MDPs in $\caC_{\caM^*, \epsilon_0}$ is $L$-Lipschitz w.r.t.\ the distance function $d$, i.e., \begin{align} \left|V^{\pi^*_{\text{DR}}}_{\caM_1, 1}(s_1) - V^{\pi^*_{\text{DR}}}_{\caM_2,1}(s_1)\right| \leq L \cdot d(\caM_1,\caM_2), \quad \forall \caM_1, \caM_2 \in \caC_{\caM^*, \epsilon_0}. \end{align} \end{assumption} For example, we can set $d(\caM_1, \caM_2) = \dbI[\caM_1 \neq \caM_2]$ for a finite simulator class. For more complicated simulator classes, we need to choose a distance $d(\cdot, \cdot)$ for which $L$ is not large. With Assumption \ref{assumption: Lipchitz}, it is possible to bound the sim-to-real gap of $\pi^*_{\text{DR}}$. For the finite simulator class, we have shown that the gap depends polynomially on $M$, which can be viewed as the complexity of $\caU$. The question is: how do we measure the complexity of $\caU$ when it is infinitely large? Motivated by \citet{ayoub2020model}, we consider the function class \begin{align} \caF = \left\{f_{\caM}(s,a,\lambda): \caS \times \caA \times \Lambda \rightarrow \mathbb{R} \text { such that } f_{\caM}(s, a, \lambda)=P_{\caM} \lambda(s, a) \text { for } \caM \in \mathcal{U}, \lambda \in \Lambda\right\}, \end{align} where $\Lambda = \{\lambda^*_\caM: \caM \in \caU\}$ is the set of optimal bias functions of the MDPs $\caM \in \caU$ in the infinite-horizon average-reward setting \citep{bartlett2012regal, fruit2018efficient, zhang2019regret}. We note that this function class is only used for analysis purposes, to express our complexity measure; it does not affect the domain randomization algorithm. 
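For intuition, the smoothness assumption is a pairwise check over the neighborhood. The sketch below (our illustration, not from the paper) takes as given the values $V^{\pi^*_{\text{DR}}}_{\caM_i,1}(s_1)$ and the pairwise distances $d(\caM_i,\caM_j)$, and verifies whether a candidate $L$ satisfies the Lipschitz inequality for every pair.

```python
def is_lipschitz_on_neighborhood(values, dist, L):
    # Check |V_{M_i}(s_1) - V_{M_j}(s_1)| <= L * d(M_i, M_j) for all
    # pairs.  values[i] is the value of the fixed policy on MDP M_i;
    # dist[i][j] = d(M_i, M_j).  Both are assumed given.
    n = len(values)
    return all(
        abs(values[i] - values[j]) <= L * dist[i][j] + 1e-12
        for i in range(n) for j in range(n)
    )

# Three hypothetical MDPs in the neighborhood of M*.
values = [3.0, 3.2, 2.9]
dist = [[0.0, 0.1, 0.2],
        [0.1, 0.0, 0.3],
        [0.2, 0.3, 0.0]]
```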
We use the $\epsilon$-log-covering number and the $\epsilon$-eluder dimension of $\caF$ to characterize the complexity of the simulator class $\caU$. In the setting of linear mixture models~\citep{ayoub2020model}, the $\epsilon$-log-covering number and the $\epsilon$-eluder dimension are both $O\left(d \log(1/\epsilon)\right)$, where $d$ is the dimension of the linear representation. For readers not familiar with the eluder dimension or infinite-horizon average-reward MDPs, Appendix~\ref{appendix: additional preliminaries} provides preliminary explanations. We now state our sim-to-real gap bound for the infinite simulator class, which is proved in Appendix~\ref{appendix: omitted proof in setting 3}. \begin{theorem} \label{theorem: sub-optimality gap, large simulator class} Under Assumptions~\ref{assumption: communicating MDP} and \ref{assumption: Lipchitz}, for any $0 \leq \epsilon < \epsilon_0$, the sim-to-real gap of the domain randomization policy $\pi^*_{\text{DR}}$ is at most \begin{align} \operatorname{Gap}(\pi^*_{\text{DR}}) = O\left(\frac{D\sqrt{d_eH\log \left(H \cdot \mathcal{N}(\mathcal{F}, 1 / H)\right)}}{ \nu\left(\caC_{\caM^*,\epsilon}\right)} + L\epsilon\right). \end{align} Here $\nu(\caC_{\caM^*,\epsilon})$ is the probability that $\nu$ samples an MDP in $\caC_{\caM^*, \epsilon}$, $d_e = \text{dim}_E(\caF, 1/H)$ is the $1/H$-eluder dimension of $\caF$, and $\mathcal{N}(\mathcal{F}, 1 / H)$ is the $1/H$-covering number of $\caF$ w.r.t.\ the $L_\infty$ norm. 
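For intuition about the covering number appearing in the bound, the following sketch upper bounds the $\epsilon$-covering number of a small finite function class under the $L_\infty$ norm by greedily building a cover; the functions, represented here as tuples of their values on a finite domain, are purely illustrative.

```python
def linf_cover_size(F, eps):
    # Greedily build an eps-cover of a finite function class F (each
    # function given by its tuple of values on a finite domain) under
    # the L_inf norm.  The greedy set is a valid eps-cover, so its
    # size upper-bounds the covering number N(F, eps).
    centers = []
    for f in F:
        if all(max(abs(a - b) for a, b in zip(f, c)) > eps for c in centers):
            centers.append(f)
    return len(centers)

# Three toy "functions" on a two-point domain; the first two are
# within 0.05 of each other in L_inf and collapse into one center.
F = [(0.0, 0.0), (0.05, 0.0), (1.0, 1.0)]
```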
\end{theorem} Theorem \ref{theorem: sub-optimality gap, large simulator class} is a generalization of Theorem \ref{theorem: finite class gap}: we can recover Theorem \ref{theorem: finite class gap} from Theorem \ref{theorem: sub-optimality gap, large simulator class} by setting $d(\caM_1, \caM_2) = \dbI[\caM_1 \neq \caM_2]$ and $\epsilon = 0$, in which case $\nu(\caC_{\caM^*,\epsilon}) = 1/M$ and $d_e \leq M$. The proof overview can be found in Section \ref{sec: proof overview}. The main technique is again a reduction to the regret minimization problem in the infinite-horizon average-reward setting. We construct a base policy and show that its regret is only $\tilde{O}(\sqrt{H})$. A key point is that our construction of the base policy also solves the open problem of designing efficient algorithms that achieve $\tilde{O}(\sqrt{T})$ regret in the infinite-horizon average-reward setting with general function approximation; this base policy is of independent interest. To complement our positive results, we also provide a negative result: even if the MDPs in $\caU$ have nice low-rank properties (e.g., the linear low-rank property \citep{jin2020provably,zhou2020nearly}), the policy $\pi^*_{\text{DR}}$ returned by the domain randomization oracle can still have an $\Omega(H)$ sim-to-real gap when the simulator class is large and the smoothness condition (Assumption \ref{assumption: Lipchitz}) does not hold. This explains the necessity of our preconditions. Please refer to Proposition~\ref{theorem: lower bound, M > H} in Appendix \ref{appendix: lower bound large class} for details. \section{Proof Overview} \label{sec: proof overview} In this section, we give a brief overview of the novel proof techniques behind the results in Section \ref{sec: main_results}. The main technique is to reduce the problem of bounding the sim-to-real gap to the problem of constructing base policies. In the settings without separation conditions, we further connect the construction of the base policies to the design of efficient learning algorithms for the infinite-horizon average-reward setting. \subsection{Reducing to Constructing Base Policies} \label{sec: reduce to learning algorithms} Intuitively, if there exists a base policy $\hat{\pi} \in \Pi$ with bounded sim-to-real gap, then the gap of $\pi^*_{\text{DR}}$ cannot be too large, since $\pi^*_{\text{DR}}$ defined in Eqn~\ref{eqn: definition of pi*} is the policy with the maximum average value. 
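One way to see the finite-class bound below (the full proof is in the appendix) is a short averaging argument: if $\hat{\pi}$ satisfies $V_{\caM,1}^{*}(s_1) - V_{\caM,1}^{\hat{\pi}}(s_1) \leq C$ for every $\caM \in \caU$ and $\nu$ is uniform over the $M$ MDPs, then the defining property of $\pi^*_{\text{DR}}$ gives
\begin{align*}
\frac{1}{M}\sum_{\caM \in \caU} V^{\pi^*_{\text{DR}}}_{\caM,1}(s_1) \;\geq\; \frac{1}{M}\sum_{\caM \in \caU} V^{\hat{\pi}}_{\caM,1}(s_1) \;\geq\; \frac{1}{M}\sum_{\caM \in \caU} \left( V^{*}_{\caM,1}(s_1) - C \right).
\end{align*}
Rearranging yields $\sum_{\caM \in \caU} \big( V^{*}_{\caM,1}(s_1) - V^{\pi^*_{\text{DR}}}_{\caM,1}(s_1) \big) \leq MC$, and since every summand is nonnegative, the single term with $\caM = \caM^*$ is at most $MC$.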
\begin{lemma} \label{lemma: construction argument} Suppose there exists a policy $\hat{\pi} \in \Pi$ such that the sim-to-real gap of $\hat{\pi}$ for any MDP $\caM \in \caU$ satisfies $V_{\caM,1}^{*}(s_1) - V_{\caM,1}^{\hat{\pi}} (s_1) \leq C$. Then $\operatorname{Gap}(\pi^*_{\text{DR}}) \leq MC$ when $\caU$ is a finite set with $|\caU| = M$. Furthermore, when $\caU$ is an infinite set satisfying the smoothness condition (Assumption \ref{assumption: Lipchitz}), for any $0 < \epsilon < \epsilon_0$ we have $\operatorname{Gap}(\pi^*_{\text{DR}}) \leq C / \nu\left(\caC_{\caM^*, \epsilon}\right) + L\epsilon.$ \end{lemma} We defer the proof to Appendix~\ref{appendix: proof of reduction lemma}. With this reduction lemma, the remaining problem is as follows: suppose the real MDP $\caM^*$ belongs to the MDP set $\mathcal{U}$, and we know the full information (transition matrix) of every MDP in $\mathcal{U}$; how can we design a history-dependent policy $\hat{\pi} \in \Pi$ with minimum sim-to-real gap $\max_{\caM \in \caU} \left(V^*_{\caM,1}(s_1) - V^{\hat{\pi}}_{\caM,1}(s_1)\right)$? \subsection{The Construction of the Base Policies} \label{sec: construction of base policies} \paragraph{With separation conditions} With the help of Lemma~\ref{lemma: construction argument}, we can bound the sim-to-real gap in the setting of a finite simulator class with the separation condition by constructing a history-dependent policy $\hat{\pi}$. The formal definition of the policy $\hat{\pi}$ can be found in Appendix~\ref{appendix: finite simulator class with separation condition}. The construction is based on elimination: the policy $\hat{\pi}$ explicitly collects samples on ``informative'' state-action pairs and eliminates the MDP that is less likely to be the real MDP from the candidate set. 
Once the agent identifies the real MDP representing the dynamics of the physical environment, it follows the optimal policy of that MDP until the end of the interaction. \paragraph{Without separation conditions} The main challenge in this setting is that we can no longer construct a policy $\hat{\pi}$ that ``identifies'' the real MDP using the approach from the separated setting. In fact, it may be impossible to ``identify'' the real MDP at all, since $\caU$ can contain MDPs that are arbitrarily close to the real MDP. Here, we use a different approach, which reduces minimizing the sim-to-real gap of $\hat{\pi}$ to the regret minimization problem in infinite-horizon average-reward MDPs. The infinite-horizon average-reward setting has been well studied~\cite[e.g.,][]{jaksch2010near,agrawal2017posterior,fruit2018efficient,wei2020model}. The main difference from the episodic setting is that the agent interacts with the environment for infinitely many steps, and the gain of a policy is defined in an average sense: the gain of a policy $\pi$ is $\rho^{\pi}(s)=\mathbb{E}[\lim _{T \rightarrow \infty} \sum_{t=1}^{T} R(s_{t}, \pi\left(s_{t}\right)) / T \mid s_{1}=s]$. The optimal gain $\rho^*(s) \defeq \max_\pi \rho^\pi(s)$ is shown to be state-independent in \cite{agrawal2017posterior}, so we write $\rho^*$ for short. The regret in the infinite-horizon setting is defined as $\text{Reg}(T) = \mathbb{E}\left[T \rho^* - \sum_{t=1}^{T} R(s_t,a_t)\right]$, where the expectation is over the randomness of the trajectories. A more detailed account of infinite-horizon average-reward MDPs can be found in Appendix~\ref{appendix: infinite-horizon average-reward MDP}. An MDP $\caM \in \caU$ can be viewed either as a finite-horizon MDP with horizon $H$ or as an infinite-horizon MDP. 
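For a fixed stationary policy, the gain $\rho^\pi$ can be computed from the stationary distribution of the Markov chain the policy induces. The following self-contained sketch (a toy two-state chain of our own, not from the paper) illustrates the definition:

```python
def stationary_distribution(P, iters=1000):
    # Power iteration for the stationary distribution of a
    # row-stochastic matrix P (assumed ergodic, so the limit exists).
    n = len(P)
    mu = [1.0 / n] * n
    for _ in range(iters):
        mu = [sum(mu[s] * P[s][t] for s in range(n)) for t in range(n)]
    return mu

def average_gain(P, R):
    # Gain rho^pi = sum_s mu(s) * R(s, pi(s)) for a fixed stationary
    # policy: P is the induced chain, R the induced per-state reward.
    mu = stationary_distribution(P)
    return sum(m * r for m, r in zip(mu, R))

# Symmetric two-state chain: stationary distribution (1/2, 1/2);
# reward 1 in state 0 and 0 in state 1 gives gain 1/2.
P = [[0.9, 0.1], [0.1, 0.9]]
R = [1.0, 0.0]
```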
This is because Assumption \ref{assumption: communicating MDP} ensures that the agent can travel to any state from any state $s_H$ encountered at the $H$-th step (this may not hold in standard finite-horizon MDPs, where the states at the $H$-th level are often assumed to be terminating states). The following lemma shows the connection between these two views. \begin{lemma} \label{lemma: connection with infinite-horizon setting} For an MDP $\caM$, let $\rho^*_{\caM}$ and $V^*_{\caM,1}(s_1)$ be the optimal expected gain in the infinite-horizon view and the optimal value function in the episodic view, respectively. Then $H \rho^*_{\caM} -D \leq V^*_{\caM,1}(s_1) \leq H \rho^*_{\caM} + D.$ \end{lemma} This lemma indicates that if we can design an algorithm (i.e., the base policy) $\hat{\pi}$ in the infinite-horizon setting with regret $\text{Reg}(H)$, then the sim-to-real gap of this algorithm in the episodic setting satisfies $\text{Gap}(\hat{\pi}) = V^*_{\caM,1}(s_1) - V^{\hat{\pi}}_{\caM,1}(s_1) \leq \text{Reg}(H) +D$. In other words, it connects the sim-to-real gap of $\hat{\pi}$ in the finite-horizon setting to its regret in the infinite-horizon setting. With the help of Lemmas~\ref{lemma: construction argument} and~\ref{lemma: connection with infinite-horizon setting}, the remaining problem is to design an efficient exploration algorithm for infinite-horizon average-reward MDPs, given that the real MDP $\caM^*$ belongs to a known MDP set $\caU$. Therefore, we propose two optimistic-exploration algorithms (Algorithm~\ref{alg: optimistic exploration} and Algorithm~\ref{alg: general_opt_alg}) for the finite and infinite simulator class settings, respectively. 
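The two-sided inequality in the lemma can be checked numerically on a toy example. The sketch below uses a deterministic two-state MDP of our own construction (action 0 stays, action 1 switches; reward 1 in state 0), for which the diameter is $D = 1$ and the optimal gain is $\rho^* = 1$ (stay in state 0 forever), and compares $H\rho^* \pm D$ against the episodic optimal value computed by backward induction.

```python
# Toy deterministic two-state MDP (illustrative, not from the paper):
# NEXT[(s, a)] is the deterministic next state, REW[(s, a)] the reward.
NEXT = {(s, a): (s if a == 0 else 1 - s) for s in (0, 1) for a in (0, 1)}
REW = {(s, a): (1.0 if s == 0 else 0.0) for s in (0, 1) for a in (0, 1)}

def finite_horizon_optimal_value(H, s1):
    # Backward induction for the episodic optimal value V*_{M,1}(s1).
    V = [0.0, 0.0]
    for _ in range(H):
        V = [max(REW[(s, a)] + V[NEXT[(s, a)]] for a in (0, 1))
             for s in (0, 1)]
    return V[s1]

rho_star, D = 1.0, 1
for H in (5, 50, 500):
    V = finite_horizon_optimal_value(H, s1=1)
    # The lemma's sandwich: H*rho* - D <= V* <= H*rho* + D.
    assert H * rho_star - D <= V <= H * rho_star + D
```

Starting from state 1, the optimal episodic policy switches once (losing one unit of reward) and then stays, so $V^* = H - 1$, matching the lower end of the sandwich exactly.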
The formal definitions of the algorithms are deferred to Appendix~\ref{appendix: finite simulator class without separation condition} and Appendix~\ref{appendix: infinite simulator class}. Note that our Algorithm~\ref{alg: general_opt_alg} is the first efficient algorithm with $\tilde{O}(\sqrt{T})$ regret for infinite-horizon average-reward MDPs with general function approximation, which is of independent interest for efficient online exploration in reinforcement learning. \iffalse \subsection{Sub-optimality Gap with Separation Condition} In this subsection, we explicitly define the policy $\hat{\pi}$ with a worst-case sub-optimality gap guarantee under the separation condition. Note that a history-dependent policy for LMDPs can also be regarded as an algorithm for episodic MDPs with one episode. By deriving an upper bound on the sub-optimality gap of $\hat{\pi}$, we can upper bound $\operatorname{Gap}(\pi^*_{\text{DR}},\caU)$ with Lemma~\ref{lemma: reduction to algorithm design}. The policy $\hat{\pi}$ is formally defined in Algorithm~\ref{alg: gap-dpendent algorithm}. There are two stages in Algorithm~\ref{alg: gap-dpendent algorithm}. In the first stage, the agent's goal is to quickly explore the environment and find the real MDP $\caM^*$ from the MDP set $\caU$. This stage contains at most $M-1$ parts. In each part, the agent randomly selects two MDPs $\caM_1$ and $\caM_2$ from the remaining MDP set $\caD$. Since the agent knows the transition dynamics of $\caM_1$ and $\caM_2$, it can find the most informative state-action pair $(s_0,a_0)$, i.e., the one with maximum total-variation distance between $P_{\caM_1}(\cdot|s_0,a_0)$ and $P_{\caM_2}(\cdot|s_0,a_0)$. The algorithm calls Subroutine~\ref{subroutine: collecting data, M >2} to collect enough samples from the pair $(s_0,a_0)$, and then eliminates the MDP with the smaller likelihood.
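The pairwise likelihood-ratio elimination just described can be sketched as follows. The two candidate next-state distributions, the sample size, and the symmetric zero-probability check for the first candidate are hypothetical additions for illustration; the true environment is taken to follow the first candidate:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical candidate next-state distributions at the informative
# pair (s_0, a_0); the real environment follows p1.
p1 = np.array([0.9, 0.1])
p2 = np.array([0.1, 0.9])
n0 = 50
samples = rng.choice(2, size=n0, p=p1)  # observed next states from (s_0, a_0)

def eliminate(p1, p2, samples):
    """Return 1 or 2: the index of the candidate to eliminate, mirroring
    the likelihood-ratio test of Stage 1 (with an extra symmetric
    zero-probability check added here for robustness)."""
    if any(p2[s] == 0 for s in samples):
        return 2
    if any(p1[s] == 0 for s in samples):
        return 1
    # Product of ratios >= 1  <=>  sum of log-ratios >= 0.
    log_ratio = sum(np.log(p1[s]) - np.log(p2[s]) for s in samples)
    return 2 if log_ratio >= 0 else 1

out = eliminate(p1, p2, samples)
```

Since the samples come from `p1`, the log-likelihood ratio concentrates around $n_0 \cdot \mathrm{KL}(p_1 \| p_2) > 0$, so the wrong candidate is eliminated.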
At the end of the first stage, the MDP set $\caD$ is guaranteed to contain only the MDP $\caM^*$ with high probability. Therefore, the agent can directly execute the optimal policy of the real MDP until step $H+1$ in the second stage. \begin{algorithm} \caption{Optimistic Exploration Under Separation Condition} \label{alg: gap-dpendent algorithm} \begin{algorithmic}[1] \State Initialize: the MDP set $\mathcal{D} = \mathcal{U}$, $n_0 = \frac{c_0 \log^2(SMH)\log(MH)}{\delta^4}$ for a constant $c_0$ \LineComment{\textit{Stage 1: Explore and find the real MDP $\caM^*$}} \While{$|\mathcal{D}| > 1$} \State Randomly select two MDPs $\caM_1$ and $\caM_2$ from the MDP set $\caD$ \State Choose $(s_0,a_0) = \argmax_{(s,a) \in \caS \times \caA}\left\|\left(P_{\caM_{1}}-P_{\caM_{2}}\right)(\cdot \mid s, a)\right\|_{1}$ \State Call Subroutine~\ref{subroutine: collecting data, M >2} with parameters $(s_0,a_0)$ and $n_0$ to collect history samples $\caH_{\caM_1,\caM_2}$ \If{ $\exists s' \in \caH_{\caM_1,\caM_2}, P_{\caM_{2}}(s'|s_0,a_0) = 0$ or $\prod_{s' \in \caH_{\caM_1,\caM_2}} \frac{P_{\caM_{1}}(s'|s_0,a_0)}{P_{\caM_{2}}(s'|s_0,a_0)} \geq 1$} \State Eliminate $\caM_2$ from the MDP set $\mathcal{D}$ \Else \State Eliminate $\caM_1$ from the MDP set $\mathcal{D}$ \EndIf \EndWhile \LineComment{\textit{Stage 2: Run the optimal policy of $\caM^*$}} \State Denote $\hat{\caM}$ as the remaining MDP in the MDP set $\caD$ \State Run the optimal policy of $\hat{\caM}$ for the remaining steps \end{algorithmic} \end{algorithm} Theorem~\ref{theorem: gap-dependent regret bound} states an upper bound on the sub-optimality gap of Algorithm~\ref{alg: gap-dpendent algorithm}; the proof is deferred to Appendix~\ref{appendix: section gap-dependent bound}.
\begin{theorem} \label{theorem: gap-dependent regret bound} Suppose we use $\hat{\pi}$ to denote the history-dependent policy represented by Algorithm~\ref{alg: gap-dpendent algorithm}. Under Assumption~\ref{assumption: communicating MDP} and Assumption~\ref{assumption: separated MDPs}, for any $\caM \in \caU$, the sub-optimality gap of Algorithm~\ref{alg: gap-dpendent algorithm} is at most \begin{align} V_{\caM,1}^{*}(s_1) - V_{\caM,1}^{\hat{\pi}} (s_1) \leq O\left(\frac{DM^2 \log(MH) \log^2(SMH)}{\delta^4}\right). \end{align} \end{theorem} Combining this with Lemma~\ref{lemma: reduction to algorithm design}, we bound the sub-optimality gap of $\pi^*_{\text{DR}}$ in Corollary~\ref{corollary: gap-dependent bound}. From this corollary, we know that the policy $\pi^*_{\text{DR}}$ returned by the domain randomization oracle suffers only polylogarithmic (in $H$) loss under the separation condition. \begin{corollary} \label{corollary: gap-dependent bound} Under Assumption~\ref{assumption: communicating MDP} and Assumption~\ref{assumption: separated MDPs}, the sub-optimality gap of policy $\pi^*_{\text{DR}}$ is at most \begin{align} \operatorname{Gap}({\pi^*_{\text{DR}}},\caU) \leq O\left(\frac{DM^3\log(MH)\log^2(SMH)}{\delta^4}\right). \end{align} \end{corollary} \section{Finite Simulator Class without Separation Condition} \label{sec: gap-independent bound for finite MDP set} In this section, we study the performance of the policy $\pi^*_{\text{DR}}$ returned by the domain randomization oracle when the simulator class $\caU$ is finite with cardinality $M$. We remove the separation assumption (Assumption~\ref{assumption: separated MDPs}) made in the last section.
To derive the upper bound on the sub-optimality gap, we connect the problem to the setting of infinite-horizon average-reward MDPs, and propose an efficient algorithm in this setting. \subsection{Preliminary: Connection with infinite-horizon average-reward setting} The infinite-horizon average-reward setting has been extensively studied in recent years~(e.g.,~\cite{jaksch2010near,agrawal2017posterior,fruit2018efficient,wei2020model}). The main difference compared with the episodic setting is that the agent interacts with the environment for infinitely many steps instead of restarting every $H$ steps. The gain of a policy is defined in an average manner. \begin{definition} (Definition 4 in \cite{agrawal2017posterior}) The gain $\rho^{\pi}(s)$ of a stationary policy $\pi$ from starting state $s_1 = s$ is defined as: \begin{align} \rho^{\pi}(s)=\mathbb{E}\left[\lim _{T \rightarrow \infty} \frac{1}{T} \sum_{t=1}^{T} R(s_{t}, \pi\left(s_{t}\right)) \mid s_{1}=s\right] \end{align} \end{definition} In this setting, a common assumption is that the MDP is communicating (Assumption~\ref{assumption: communicating MDP}). Under this assumption, we have the following lemma.
\begin{lemma} \label{lemma: properties of communicating MDP} \citep[Lemma~2.1]{agrawal2017posterior} For a communicating MDP $\mathcal{M}$ with diameter $D$: (a) The optimal gain $\rho^*$ is state-independent and is achieved by a deterministic stationary policy; that is, there exists a deterministic policy $\pi^*$ such that \begin{align} \rho^{*}:=\max _{s^{\prime} \in \mathcal{S}} \max _{\pi} \rho^{\pi}\left(s^{\prime}\right)=\rho^{\pi^{*}}(s), \forall s \in \mathcal{S} \end{align} (b) The optimal gain $\rho^*$ satisfies the following equation: \begin{align} \label{eqn: Bellman equation, infinite setting} \rho^{*}=\min _{\lambda \in \mathbb{R}^{S}} \max _{s, a} \left[R(s, a)+P \lambda(s,a)-\lambda(s)\right]=\max _{a}\left[ R(s,a)+P \lambda^{*}(s,a)-\lambda^{*}(s)\right], \forall s \end{align} where $P \lambda(s,a) = \sum_{s'} P(s'|s,a) \lambda(s')$, and $\lambda^*$ is the bias vector of the optimal policy $\pi^*$ satisfying \begin{align} 0 \leq \lambda^{*}(s) \leq D. \end{align} \end{lemma} The regret minimization problem has been widely studied in this setting, with the regret defined as $\text{Reg}(T) = \mathbb{E}\left[T \rho^* - \sum_{t=1}^{T} R(s_t,a_t)\right]$, where the expectation is over the randomness of the trajectories. For example, \cite{jaksch2010near} proposed an efficient algorithm called UCRL2, which achieves a regret upper bound of $\tilde{O}(DS\sqrt{AT})$. Note that the optimal policy in the infinite-horizon setting may not obtain the optimal value in the episodic setting with the same transition dynamics. However, with a slight abuse of notation, the following lemma shows that the optimal policy in the infinite-horizon setting is still near-optimal in the episodic setting.
We prove this lemma in Appendix~\ref{appendix: proof of the lemma, connection with infinite-horizon setting}. \begin{lemma} \label{lemma: connection with infinite-horizon setting} For the same MDP $\caM$, let $\rho^*_{\caM}$ and $V^*_{\caM,1}(s_1)$ be the optimal expected gain in the infinite-horizon setting and the optimal value function in the episodic setting, respectively. We have the following inequality: \begin{align} H \rho^*_{\caM} -D \leq V^*_{\caM,1}(s_1) \leq H \rho^*_{\caM} + D. \end{align} \end{lemma} The above lemma indicates that, if we can design an algorithm (a non-stationary policy) $\hat{\pi}$ in the infinite-horizon setting with regret $\text{Reg}(H)$, then the sub-optimality gap of this algorithm in the episodic setting satisfies $V^*_{\caM,1}(s_1) - V^{\hat{\pi}}_{\caM,1}(s_1) \leq \text{Reg}(H) +D$. This connection inspires us to design an efficient algorithm solving the model identification problem in the infinite-horizon average-reward setting. \subsection{Efficient Algorithm without Separation Condition} In this subsection, we propose an efficient algorithm for the model identification problem in the infinite-horizon average-reward setting. Our algorithm is described in Algorithm~\ref{alg: optimistic exploration}. In episode $k$, the agent executes the optimal policy of the optimistic MDP $\caM^k$ with the maximum expected gain $\rho^*_{\caM^k}$. Once the agent collects enough data and realizes that the current MDP $\caM^k$ is not the real MDP $\caM^*$ that represents the dynamics of the real environment, the agent eliminates $\caM^k$ from the MDP set.
\begin{algorithm} \caption{Optimistic Exploration} \label{alg: optimistic exploration} \begin{algorithmic}[1] \State Initialize: the MDP set $\mathcal{U}_1 = \mathcal{U}$, the episode counter $k=1$, $h_0 = 1$ \State Calculate $\caM^{1} = \argmax_{\caM \in \caU_1} \rho^*_{\caM} $ \For{step $h = 1,\cdots, H$} \State Take action $a_h = \pi^*_{\caM^k}(s_h)$, obtain the reward $R(s_h,a_h)$, and observe the next state $s_{h+1}$ \If{$\left|\sum_{t=h_0}^{h} \left(P_{\caM^k} \lambda_{\caM^k}^{*}\left(s_{t}, a_{t}\right) - \lambda_{\caM^k}^{*}\left(s_{t+1}\right)\right) \right| > D\sqrt{2(h-h_0) \log(2HM)}$} \State Eliminate $\caM^{k}$ from the MDP set $\mathcal{U}_k$, and denote the remaining set by $\caU_{k+1}$ \State Calculate $\caM^{k+1} = \argmax_{\caM \in \caU_{k+1}} \rho^*_{\caM}$ \State Set $h_0 = h+1$ and $k = k +1$ \EndIf \EndFor \end{algorithmic} \end{algorithm} To illustrate the idea behind the elimination condition in Line 5 of Algorithm~\ref{alg: optimistic exploration}, we briefly explain our regret analysis of Algorithm~\ref{alg: optimistic exploration}.
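The elimination condition in Line 5 is an Azuma--Hoeffding-style deviation test, sketched below. The sample sizes, bias magnitude $D = 1$, and noise distributions are hypothetical; the point is that a consistent model stays below the threshold while a systematically biased one crosses it:

```python
import math
import numpy as np

def should_eliminate(deviations, D, H, M):
    """Deviation test mirroring Line 5 of the optimistic-exploration
    algorithm: if M^k were the real MDP, each term
    P_{M^k} lambda*(s_t, a_t) - lambda*(s_{t+1}) would be a martingale
    difference bounded by D, so by Azuma-Hoeffding the partial sum stays
    below D * sqrt(2 n log(2HM)) with high probability."""
    n = len(deviations)
    threshold = D * math.sqrt(2 * n * math.log(2 * H * M))
    return abs(sum(deviations)) > threshold

rng = np.random.default_rng(0)
# Consistent model: zero-mean bounded deviations -> sum stays small.
consistent = rng.uniform(-0.5, 0.5, size=1000)
# Misspecified model: the same noise plus a systematic bias of 0.3
# per step -> the partial sum grows linearly and crosses the threshold.
biased = consistent + 0.3
```

With $n = 1000$, $H = 1000$, $M = 10$ and $D = 1$, the threshold is roughly $140$; the zero-mean sum concentrates near $0$, while the biased sum is near $300$.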
Suppose the MDP $\caM^k$ selected in episode $k$ satisfies the optimistic condition $\rho^*_{\caM^k} \geq \rho^*_{\caM^*}$; then the regret in $H$ steps can be bounded as: \begin{align} &H \rho^*_{\caM^*} - \sum_{h=1}^{H} R(s_h,a_h) \\ \leq & \sum_{k=1}^{K} \sum_{h = \tau(k)}^{\tau(k+1)-1} \left(\rho^*_{\caM^k} - R(s_h,a_h)\right) \\ = & \sum_{k=1}^{K} \sum_{h = \tau(k)}^{\tau(k+1)-1} \left(P_{\caM^k} \lambda^*_{\caM^k}(s_h,a_h) - \lambda^*_{\caM^k}(s_h)\right) \\ = & \sum_{k=1}^{K} \sum_{h = \tau(k)}^{\tau(k+1)-1} \left(P_{\caM^k} \lambda^*_{\caM^k}(s_h,a_h) - \lambda^*_{\caM^k}(s_{h+1})\right) + \sum_{k=1}^{K} \left(\lambda^*_{\caM^k}(s_{\tau(k+1)}) - \lambda^*_{\caM^k}(s_{\tau(k)})\right) \\ \label{inq: regret decomposition in infinite-horizon setting} \leq & \sum_{k=1}^{K} \sum_{h = \tau(k)}^{\tau(k+1)-1} \left(P_{\caM^k} \lambda^*_{\caM^k}(s_h,a_h) - \lambda^*_{\caM^k}(s_{h+1})\right) + KD. \end{align} Here we use $K$ to denote the total number of episodes, and $\tau(k)$ to denote the first step of episode $k$. The first inequality is due to the optimistic condition $\rho^*_{\caM^k} \geq \rho^*_{\caM^*}$. The first equality is due to the Bellman equation in the infinite-horizon setting (Eqn.~\ref{eqn: Bellman equation, infinite setting}). The last inequality is due to $0 \leq \lambda^*_{\caM}(s) \leq D$. From the above inequality, we see that the regret in episode $k$ depends on the summation $\sum_{h = \tau(k)}^{\tau(k+1)-1} \left(P_{\caM^k} \lambda^*_{\caM^k}(s_h,a_h) - \lambda^*_{\caM^k}(s_{h+1})\right)$. If this term is relatively small, we can continue following the policy $\pi^*_{\caM^k}$ with little loss.
Since $\lambda^*_{\caM^k}(s_{h+1})$ is an empirical sample of $P_{\caM^*} \lambda^*_{\caM^k}(s_h,a_h)$, we can guarantee that $\caM^k$ is not $\caM^*$ with high probability if this term is relatively large. Based on the above discussion, we can upper bound the regret of Algorithm~\ref{alg: optimistic exploration}. We defer the proof of Theorem~\ref{theorem: square-root H regret bound} to Appendix~\ref{appendix: omitted proof in section gap-independent bound, finite set}. \begin{theorem} \label{theorem: square-root H regret bound} Under Assumption~\ref{assumption: communicating MDP}, the regret of Algorithm~\ref{alg: optimistic exploration} is upper bounded by \begin{align} \text{Reg}(H) \leq O\left(D\sqrt{MH \log(MH)}\right). \end{align} \end{theorem} Combining Lemmas~\ref{lemma: reduction to algorithm design} and~\ref{lemma: connection with infinite-horizon setting}, we deduce the upper bound on the sub-optimality gap of $\pi^*_{\text{DR}}$. \begin{corollary} \label{corollary: square-root H bound} Under Assumption~\ref{assumption: communicating MDP}, the sub-optimality gap of policy $\pi^*_{\text{DR}}$ is at most \begin{align} \operatorname{Gap}({\pi^*_{\text{DR}}},\caU) \leq O\left(D\sqrt{M^3H \log(MH)}\right). \end{align} \end{corollary} \subsection{Lower Bound} To illustrate the hardness of the domain transfer problem, we prove the following lower bound for $\operatorname{Gap}(\pi,\caU)$.
\begin{theorem} \label{thm: lower bound} Under Assumption~\ref{assumption: communicating MDP}, suppose $A \geq 10, SA \geq M \geq 100, d \geq 20 \log_A{M}, H \geq DM$; then for any history-dependent policy $\pi = \{\pi_h: \text{traj}_h \rightarrow a_h\}_{h=1}^H$, there exists a set of $M$ MDPs $\caU = \{\mathcal{M}_m\}_{m=1}^{M}$ and a choice of $\mathcal{M}^* \in \caU$ such that $\operatorname{Gap}(\pi,\caU)$ is at least $\Omega(\sqrt{DMH}).$ \end{theorem} The proof of Theorem~\ref{thm: lower bound} follows the idea of the lower bound proof for tabular MDPs~\citep{jaksch2010near}, and is deferred to Appendix~\ref{appendix: lower bound proof}. This lower bound implies that an $\Omega(\sqrt{H})$ sub-optimality gap is unavoidable for the policy $\pi^*_{\text{DR}}$ when directly transferred to the real environment. Compared with this result, the upper bound for $\pi^*_{\text{DR}}$ in Corollary~\ref{corollary: square-root H bound} is near-optimal w.r.t.\ the horizon $H$. In the previous sections, we mainly discussed the performance of $\pi^*_{\text{DR}}$ when the cardinality of the simulator class $\caU$ is finite. However, the simulator class can be extremely large if we have little knowledge of the real-world dynamics and randomize over various parameters of the environment. In this section, we analyze the efficiency of the domain randomization approach when the simulator class is extremely large. \subsection{Negative Results} Following the idea of previous results on efficient exploration for RL with linear function approximation~\citep{jin2020provably,zhou2020nearly}, one may wonder whether it is possible to derive upper bounds when the MDPs in the randomized set $\caU$ share a common low-dimensional representation.
However, even if the MDPs in $\caU$ maintain such benign properties, the following lower bound indicates that the policy $\pi^*_{\text{DR}}$ returned by the domain randomization oracle can still be extremely bad when the simulator class is relatively large. \begin{theorem} \label{theorem: lower bound, M > H} Suppose all MDPs in the MDP set $\mathcal{U}$ are linear mixture models sharing a common low-dimensional representation with dimension $d = O(\log(M))$. Then there exists a hard instance such that the sub-optimality gap of the policy $\pi^*_{\text{DR}}$ returned by the domain randomization oracle is still $\Omega(H)$ when $M \geq H$. \end{theorem} We defer the proof of Theorem~\ref{theorem: lower bound, M > H} to Appendix~\ref{appendix: proof of the lower bound, M > H}. Intuitively, since the domain randomization approach returns the policy that is optimal on average, the policy $\pi^*_{\text{DR}}$ can perform poorly in the real world $\caM^*$ if most MDPs in the randomized set differ significantly from $\caM^*$, or if the simulator class is rather non-smooth. In Section~\ref{subsection: eluder dimension algorithm}, we propose an efficient algorithm for infinite-horizon average-reward MDPs with general function approximation. With the help of the regret bound in Section~\ref{subsection: eluder dimension algorithm}, we upper bound $\operatorname{Gap}(\pi^*_{\text{DR}},\caU)$ under the assumption that the simulator class is relatively smooth in Section~\ref{subsection: discussion}, and then propose alternative methods for when such an assumption does not hold. \subsection{Efficient Algorithm with General Function Approximation} \label{subsection: eluder dimension algorithm} In this subsection, we propose a provably efficient model-based algorithm for solving infinite-horizon average-reward MDPs with general function approximation. Considering the model class $\caU$ which covers the true MDP $\caM^*$, i.e.,
$\caM^* \in \caU$, we define the function space $\Lambda = \{\lambda^*_{\caM}, \caM \in \caU\}$, and the space $\caX = \caS \times \caA \times \Lambda$. We define the function class \begin{align} \caF = \left\{f_{\caM}(s,a,\lambda): \caX \rightarrow \mathbb{R} \text { such that } f_{\caM}(s, a, \lambda)=P_{\caM} \lambda(s, a) \text { for } \caM \in \mathcal{U}, \lambda \in \Lambda\right\}. \end{align} \begin{assumption} \label{assumption: eluder dimension} The $\epsilon$-eluder dimension of the function class $\caF$ with $\epsilon = \frac{1}{H}$ is bounded by $d_e$. \end{assumption} The function class $\caF$ characterizes the complexity of the model class $\caU$. In the tabular setting, the $\epsilon$-log-covering number and the $\epsilon$-eluder dimension of $\caF$ can be bounded by $O\left(SA\log(1/\epsilon)\right)$. In the setting of linear mixture models~\citep{zhou2020nearly}, the $\epsilon$-log-covering number and the $\epsilon$-eluder dimension are $O\left(d \log(1/\epsilon)\right)$, where $d$ is the dimension of the linear representation in linear mixture models. Our algorithm, which is described in Algorithm~\ref{alg: general_opt_alg}, also follows the well-known principle of optimism in the face of uncertainty. In each episode $k$, we calculate the optimistic MDP $\caM^k$ with maximum expected gain $\rho_{\caM^k}^{*}$. We execute the optimal policy of $\caM^k$ to interact with the environment and collect more samples. Once we have collected enough samples in episode $k$, we update the model class $\caU_k$ and compute the optimistic MDP for episode $k+1$.
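The optimism-plus-elimination loop just described can be sketched on a hypothetical finite model class. Each candidate here is reduced to a single next-state distribution with a claimed optimal gain, and the elimination rule is a simplified empirical-frequency test standing in for the confidence-set update of the algorithm; all numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical model class: candidate next-state distributions for one
# (s, a) pair, each with a claimed optimal gain rho*.
models = [
    {"name": "M1", "p": np.array([0.8, 0.2]), "gain": 0.90},  # true model
    {"name": "M2", "p": np.array([0.2, 0.8]), "gain": 0.95},
]
true_p = models[0]["p"]

candidates = list(models)
picked = []
for episode in range(3):
    # Optimism: pick the surviving model with the largest claimed gain.
    m = max(candidates, key=lambda c: c["gain"])
    picked.append(m["name"])
    # Interact: draw a batch of next states from the real environment.
    batch = rng.choice(2, size=100, p=true_p)
    freq = np.bincount(batch, minlength=2) / len(batch)
    # Simplified elimination test: compare empirical frequencies with
    # the model's prediction (hypothetical L1 threshold 0.3).
    if np.abs(freq - m["p"]).sum() > 0.3:
        candidates = [c for c in candidates if c is not m]
```

The over-optimistic wrong model is picked first, contradicted by the data, and eliminated; afterwards the loop settles on the true model.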
Compared with the setting of episodic MDPs with general function approximation~\citep{ayoub2020model,wang2020reinforcement,jin2021bellman}, the additional difficulty in the infinite-horizon setting is that the regret depends linearly on the total number of episodes, i.e., the number of times we update the optimistic model and the policy. This corresponds to the last term ($KD$) in Inequality~\ref{inq: regret decomposition in infinite-horizon setting}. Therefore, to design an efficient algorithm with near-optimal regret in the infinite-horizon setting, the algorithm should maintain a low-switching property~\citep{bai2019provably,kong2021online}. Taking inspiration from recent work that studies efficient exploration with low switching cost in the episodic setting~\citep{kong2021online}, we define the importance score, $\sup _{f_{1}, f_{2} \in \mathcal{F}} \frac{\left\|f_{1}-f_{2}\right\|_{\mathcal{Z}_{new}}^{2}}{\left\|f_{1}-f_{2}\right\|_{\mathcal{Z}}^{2}+\alpha}$, as a measure of the importance of the new samples collected in the current episode, and only update the optimistic model and the policy when the importance score is at least $1$. Here $\left\|f_1 - f_2\right\|^2_{\caZ}$ is a shorthand for $\sum_{(s,a,s',\lambda) \in \caZ} \left(f_1(s,a,\lambda)-f_2(s,a,\lambda)\right)^2$. \begin{algorithm} \caption{General Optimistic Algorithm} \label{alg: general_opt_alg} \begin{algorithmic}[1] \State Initialize: the MDP set $\mathcal{U}_1 = \mathcal{U}$, episode counter $k=1$ \State Initialize: the history data sets $\caZ = \emptyset$, $\caZ_{new} = \emptyset$ \State $\alpha = 4D^2+1$, $\beta = c D^{2} \log \left(H \cdot \mathcal{N}(\mathcal{F}, 1/H)\right)$ for a constant $c$
\State Compute $\caM^1 = \argmax_{\caM \in \caU_1} \rho^*_{\caM}$ \For{step $h = 1,\cdots, H$} \State Take action $a_h=\pi^*_{\caM^k}(s_h)$ in the current state $s_h$, and transit to state $s_{h+1}$ \State Add $(s_h,a_h,s_{h+1},\lambda^*_{\caM^k})$ to the set $\caZ_{new}$ \If{ importance score $\sup _{f_{1}, f_{2} \in \mathcal{F}} \frac{\left\|f_{1}-f_{2}\right\|_{\mathcal{Z}_{new}}^{2}}{\left\|f_{1}-f_{2}\right\|_{\mathcal{Z}}^{2}+\alpha} \geq 1$} \State Add the history data $\caZ_{new}$ to the set $\caZ$ \State Calculate $\hat{\caM}_{k+1} = \argmin_{\caM \in \caU} \sum_{(s_h,a_h,s_{h+1},\lambda_h) \in \caZ} \left(P_{\caM}\lambda_h(s_h,a_h) - \lambda_h(s_{h+1})\right)^2$ \State Update $\caU_{k+1} = \left\{\caM \in \caU: \sum_{(s_h,a_h,s_{h+1},\lambda_h) \in \caZ}\left(\left(P_{\caM} - P_{\hat{\caM}_{k+1}}\right) \lambda_h(s_h,a_h)\right)^2 \leq \beta\right\}$ \State Compute $\caM^{k+1} = \argmax_{\caM \in \caU_{k+1}} \rho^*_{\caM}$ \State Update the episode counter $k = k+1$ \EndIf \EndFor \end{algorithmic} \end{algorithm} We state the regret upper bound of Algorithm~\ref{alg: general_opt_alg} in Theorem~\ref{theorem: eluder dimension regret bound}, and defer the proof of the theorem to Appendix~\ref{appendix: proof of eluder dimension regret bound}. \begin{theorem} \label{theorem: eluder dimension regret bound} Under Assumption~\ref{assumption: communicating MDP} and Assumption~\ref{assumption: eluder dimension}, the regret of Algorithm~\ref{alg: general_opt_alg} is upper bounded by \begin{align} \text{Reg}(H) \leq O\left(D\sqrt{d_e H \log \left(H \cdot \mathcal{N}(\mathcal{F}, 1/H)\right)}\right), \end{align} where $\mathcal{N}(\mathcal{F}, 1/H)$ is the $\frac{1}{H}$-covering number of $\caF$ w.r.t.\ the $L_{\infty}$ norm.
\end{theorem} \subsection{Sub-optimality Gap for Large Simulator Class} \label{subsection: discussion} From Theorem~\ref{theorem: lower bound, M > H}, we know that the domain randomization approach can be information-theoretically useless when the simulator class is rather non-smooth and extremely large. To make the problem tractable, we assume the value function is relatively smooth w.r.t.\ a certain distance function $d(\caM_1,\caM_2)$. \begin{assumption} \label{assumption: Lipchitz} There exists a Lipschitz constant $L$ such that, for any policy $\pi$, the value function is $L$-Lipschitz w.r.t.\ the distance function $d: \caU \times \caU \rightarrow \mathbb{R}$, i.e., \begin{align} \left|V^{\pi}_{\caM_1, 1}(s_1) - V^{\pi}_{\caM_2,1}(s_1)\right| \leq L \cdot d(\caM_1,\caM_2), \forall \caM_1, \caM_2 \in \caU. \end{align} \end{assumption} Assumption~\ref{assumption: Lipchitz} states that the value functions of the MDPs in the simulator class $\caU$ are Lipschitz w.r.t.\ a certain distance measure. If the MDPs are parameterized by a parameter $\theta$, then the distance $d(\caM_1,\caM_2)$ can be the $L_2$ distance between $\theta_1$ and $\theta_2$. \begin{theorem} \label{theorem: sub-optimality gap, large simulator class} Under Assumptions~\ref{assumption: communicating MDP}, \ref{assumption: eluder dimension} and \ref{assumption: Lipchitz}, the sub-optimality gap of the domain randomization policy $\pi^*_{\text{DR}}$ is at most \begin{align} \operatorname{Gap}(\pi^*_{\text{DR}},\caU) \leq O\left(\frac{D\sqrt{d_eH\log \left(H \cdot \mathcal{N}(\mathcal{F}, 1 / H)\right)}}{ \nu\left(\caC_{\caM^*,\epsilon}\right)} + L\epsilon\right).
\end{align} Here $\nu(\caC_{\caM^*,\epsilon})$ is the probability measure of the $\epsilon$-neighborhood of $\caM^*$ under the distance measure $d(\caM_1,\caM_2)$. \end{theorem} We defer the proof of Theorem~\ref{theorem: sub-optimality gap, large simulator class} to Appendix~\ref{appendix: proof of sub-optimality gap for large simulator class}. The above bound certifies the good performance of $\pi^*_{\text{DR}}$ when the simulator class is smooth and the neighborhood measure $\nu(\caC_{\caM^*,\epsilon})$ is large, but can be vacuous when such benign properties do not hold. From the above discussion, we know that the domain randomization approach may fail in certain situations. Is it possible to directly transfer the policy to the real environment when the simulator class is extremely large and non-smooth? An alternative is to use adversarial training, which yields a robust policy and helps transfer to physical environments. That is, the simulation phase returns the policy that minimizes the worst-case value gap over all $\caM \in \caU$: \begin{align} \label{eqn: adversrial training} \pi^*_{AT} = \argmin_{\pi} \max_{\caM \in \caU} \left[V_{\mathcal{M}, 1}^{*}\left(s_{1}\right)-V_{\mathcal{M}, 1}^{\pi}\left(s_{1}\right)\right]. \end{align} \begin{theorem} \label{theorem: sub-optimality gap, adversarial training} Under Assumption~\ref{assumption: communicating MDP} and Assumption~\ref{assumption: eluder dimension}, the sub-optimality gap of the policy $\pi^*_{AT}$ returned by the adversarial training oracle is at most \begin{align} \operatorname{Gap}(\pi^*_{AT},\caU) \leq O\left(D\sqrt{d_e H \log \left(H \cdot \mathcal{N}(\mathcal{F}, 1/H)\right)}\right).
\end{align} \end{theorem} We prove Theorem~\ref{theorem: sub-optimality gap, adversarial training} in Appendix~\ref{appendix: proof of the sub-optimality gap for adversarial training}. This theorem shows that the policy $\pi^*_{AT}$ returned by the adversarial training approach suffers only $O(\sqrt{H})$ loss over the $H$-step interactions, so $\pi^*_{AT}$ can perform well when directly transferred to the real environment. The efficiency of adversarial training has also been observed in many empirical results~\citep{pinto2017robust,teh2017distral,tessler2019action}. \fi \section{Conclusion} \label{sec: conclusion} In this paper, we study the optimality of policies learned from domain randomization in sim-to-real transfer without real-world samples. We propose a novel formulation of sim-to-real transfer and view domain randomization as an oracle that returns the optimal policy of an LMDP with uniform initialization distribution. Following this idea, we show that the policy $\pi^*_{\text{DR}}$ suffers only $o(H)$ loss compared with the optimal value function of the real environment when the simulator class is finite or satisfies certain smoothness conditions; thus this policy can perform well in long-horizon cases. We hope our formulation and analysis can provide insights for designing more efficient algorithms for sim-to-real transfer in the future. \section{Acknowledgments} Liwei Wang was supported by the National Key R\&D Program of China (2018YFB1402600), the Exploratory Research Project of Zhejiang Lab (No.\ 2022RC0AN02), BJNSF (L172037), and Project 2020BD006 supported by the PKU-Baidu Fund.
\appendix \section{Additional Preliminaries} \label{appendix: additional preliminaries} \subsection{Infinite-horizon Average-reward MDPs} \label{appendix: infinite-horizon average-reward MDP} The infinite-horizon average-reward setting has been extensively studied in recent years~(e.g.,~\cite{jaksch2010near,agrawal2017posterior,fruit2018efficient,wei2020model}). The main difference compared with the episodic setting is that the agent interacts with the environment for infinitely many steps instead of restarting every $H$ steps. The gain of a policy is defined in an average manner. \begin{definition} (Definition 4 in \cite{agrawal2017posterior}) The gain $\rho^{\pi}(s)$ of a stationary policy $\pi$ from starting state $s_1 = s$ is defined as: \begin{align} \rho^{\pi}(s)=\mathbb{E}\left[\lim _{T \rightarrow \infty} \frac{1}{T} \sum_{t=1}^{T} R(s_{t}, \pi\left(s_{t}\right)) \mid s_{1}=s\right] \end{align} \end{definition} In this setting, a common assumption is that the MDP is communicating (Assumption~\ref{assumption: communicating MDP}). Under this assumption, we have the following lemma.
\begin{lemma} \label{lemma: properties of communicating MDP} \citep[Lemma~2.1]{agrawal2017posterior} For a communicating MDP $\mathcal{M}$ with diameter $D$: (a) The optimal gain $\rho^*$ is state-independent and is achieved by a deterministic stationary policy; that is, there exists a deterministic policy $\pi^*$ such that \begin{align} \rho^{*}:=\max _{s^{\prime} \in \mathcal{S}} \max _{\pi} \rho^{\pi}\left(s^{\prime}\right)=\rho^{\pi^{*}}(s), \forall s \in \mathcal{S} \end{align} (b) The optimal gain $\rho^*$ satisfies the following equation: \begin{align} \label{eqn: Bellman equation, infinite setting} \rho^{*}=\min _{\lambda \in \mathbb{R}^{S}} \max _{s, a} \left[R(s, a)+P \lambda(s,a)-\lambda(s)\right]=\max _{a}\left[ R(s,a)+P \lambda^{*}(s,a)-\lambda^{*}(s)\right], \forall s \end{align} where $P \lambda(s,a) = \sum_{s'} P(s'|s,a) \lambda(s')$, and $\lambda^*$ is the bias vector of the optimal policy $\pi^*$ satisfying \begin{align} 0 \leq \lambda^{*}(s) \leq D. \end{align} \end{lemma} The regret minimization problem has been widely studied in this setting, with the regret defined as $\text{Reg}(T) = \mathbb{E}\left[T \rho^* - \sum_{t=1}^{T} R(s_t,a_t)\right]$, where the expectation is over the randomness of the trajectories. For example, \cite{jaksch2010near} proposed an efficient algorithm called UCRL2, which achieves a regret upper bound of $\tilde{O}(DS\sqrt{AT})$. For notational convenience, we use $PV(s,a)$ or $P\lambda(s, a)$ as a shorthand for $\sum_{s' \in \caS} P(s'|s,a)V(s')$ or $\sum_{s' \in \caS} P(s'|s,a)\lambda(s')$. \subsection{Eluder Dimension} Proposed by \cite{russo2013eluder}, the eluder dimension has become a widely-used concept to characterize the complexity of function classes in bandits and RL~\citep{wang2020reinforcement, ayoub2020model, jin2021bellman,kong2021online}.
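For a tiny finite function class, the eluder dimension (formally defined next) can be computed by brute force. The class below is hypothetical, the inputs are abstract stand-ins for $(s,a,\lambda)$ triples, and for simplicity the sketch tests independence at $\epsilon$ itself rather than taking the supremum over $\epsilon' \geq \epsilon$:

```python
import itertools
import numpy as np

# Tiny hypothetical class of three functions on two inputs {0, 1}.
F = [np.array([0.0, 0.0]),
     np.array([1.0, 0.0]),
     np.array([0.0, 1.0])]
X = [0, 1]

def norm_Z(f, g, Z):
    # ||f - g||_Z as in the definition: root sum of squared gaps on Z.
    return np.sqrt(sum((f[x] - g[x]) ** 2 for x in Z))

def independent(x, Z, F, eps):
    """x is eps-independent of Z if some pair f, f' is eps-close on Z
    yet differs by more than eps at x."""
    return any(norm_Z(f, g, Z) <= eps and abs(f[x] - g[x]) > eps
               for f in F for g in F)

def eluder_dimension(F, X, eps):
    """Brute-force eps-eluder dimension: the longest sequence (repeats
    allowed) in which every element is eps-independent of its
    predecessors. Exponential search; only viable for tiny classes."""
    best = 0
    for n in range(1, len(X) * len(F) + 1):
        found = False
        for seq in itertools.product(X, repeat=n):
            if all(independent(seq[i], list(seq[:i]), F, eps)
                   for i in range(n)):
                best, found = n, True
                break
        if not found:
            break
    return best

d = eluder_dimension(F, X, eps=0.5)
```

For this class each input can "surprise" once but not twice, so the dimension is $2$.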
In this work, we define the eluder dimension to characterize the complexity of the function class $\caF$: \begin{align} \caF = \left\{f_{\caM}(s,a,\lambda): \caS \times \caA \times \Lambda \rightarrow \mathbb{R} \text { such that } f_{\caM}(s, a, \lambda)=P_{\caM} \lambda(s, a) \text { for } \caM \in \mathcal{U}, \lambda \in \Lambda\right\}, \end{align} where $\Lambda = \{\lambda^*_\caM, \caM \in \caU\}$ is the set of optimal bias functions of the MDPs $\caM \in \caU$ in the infinite-horizon average-reward setting (\citet{bartlett2012regal, fruit2018efficient, zhang2019regret}). \begin{definition} (Eluder dimension). Let $\epsilon \geq 0$ and $\caZ = \{(s_i,a_i,\lambda_i)\}_{i=1}^{n} \subset \caS \times \caA \times \Lambda$ be a sequence of history samples. \begin{itemize} \item A history sample $(s,a,\lambda) \in \caS \times \caA \times \Lambda$ is $\epsilon$-dependent on $\caZ$ with respect to $\caF$ if any $f,f' \in \caF$ satisfying $\|f-f'\|_{\caZ} \leq \epsilon$ also satisfies $|f(s,a,\lambda)-f'(s,a,\lambda)| \leq \epsilon$. Here $\|f-f'\|_{\caZ}$ is a shorthand for $\sqrt{\sum_{(s,a,\lambda) \in \caZ}(f-f')^2(s,a,\lambda)}$. \item A sample $(s,a,\lambda)$ is $\epsilon$-independent of $\caZ$ with respect to $\caF$ if $(s,a,\lambda)$ is not $\epsilon$-dependent on $\caZ$. \item The $\epsilon$-eluder dimension of a function class $\caF$ is the length of the longest sequence of elements in $\caS \times \caA \times \Lambda$ such that, for some $\epsilon' \geq \epsilon$, every element is $\epsilon'$-independent of its predecessors. \end{itemize} \end{definition} \section{Omitted Proof in Section~\ref{sec: proof overview}} \label{appendix: reduction techniques} \subsection{Proof of Lemma~\ref{lemma: construction argument}} \label{appendix: proof of reduction lemma} \begin{proof} We first study the case where $\caU$ is a finite set with $|\caU| = M$.
For $\hat{\pi}$, we have \begin{align} \frac{1}{M} \sum_{\caM \in \caU} \left(V_{\caM,1}^{*}(s_1) - V_{\caM,1}^{\hat{\pi}} (s_1) \right) \leq C. \end{align} By the optimality of $\pi^*_{\text{DR}}$, we know that \begin{align} \frac{1}{M} \sum_{\caM \in \caU} V_{\caM, 1}^{\pi^*_{\text{DR}}}\left(s_{1}\right) \geq \frac{1}{M} \sum_{\caM \in \caU} V_{\caM, 1}^{\hat{\pi}}\left(s_{1}\right). \end{align} Therefore, \begin{align} \frac{1}{M} \sum_{\caM \in \caU} \left(V_{\caM,1}^{*}(s_1) - V_{\caM,1}^{\pi^*_{\text{DR}}} (s_1) \right) \leq C. \end{align} Since the gap $V^{*}_{\caM,1}(s_1) - V_{\caM,1}^{\pi^*_{\text{DR}}} (s_1) \geq 0$ for any $\caM \in \caU$, we have $\frac{1}{M} \left(V_{\caM^*,1}^{*}(s_1) - V_{\caM^*,1}^{\pi^*_{\text{DR}}} (s_1) \right) \leq C$. That is, \begin{align} V_{\caM^*,1}^{*}(s_1) - V_{\caM^*,1}^{\pi^*_{\text{DR}}} (s_1) \leq MC. \end{align} For the case where $\caU$ is an infinite set satisfying Assumption~\ref{assumption: Lipchitz}, by the optimality of $\pi^*_{\text{DR}}$, we have \begin{align} \mathbb{E}_{\caM \sim \nu} \left[V^{\pi^*_{\text{DR}}}_{\caM,1} (s_1) \right] \geq \mathbb{E}_{\caM \sim \nu} \left[V^{\hat{\pi}}_{\caM,1} (s_1) \right]. \end{align} Therefore, \begin{align} \label{inq: reduction to algorithm design, Lipchitz case, 1} \mathbb{E}_{\caM \sim \nu\left(\caC_{\caM^*,\epsilon}\right)} \left[V^{*}_{\caM^*,1} (s_1) - V^{\pi^*_{\text{DR}}}_{\caM,1} (s_1) \right]\leq \mathbb{E}_{\caM \sim \nu} \left[V^{*}_{\caM^*,1} (s_1) - V^{\pi^*_{\text{DR}}}_{\caM,1} (s_1) \right] \leq \mathbb{E}_{\caM \sim \nu} \left[V^{*}_{\caM^*,1} (s_1)- V^{\hat{\pi}}_{\caM,1} (s_1) \right].
\end{align} By Assumption~\ref{assumption: Lipchitz}, for any $\caM \in \caC_{\caM^*,\epsilon}$, we have \begin{align} \left|V^{\pi^*_{\text{DR}}}_{\caM^*,1} (s_1)- V^{\pi^*_{\text{DR}}}_{\caM,1} (s_1) \right| \leq L\epsilon. \end{align} Therefore, we have \begin{align} \label{inq: reduction to algorithm design, Lipchitz case, 2} \nu\left(\caC_{\caM^*,\epsilon}\right)\left(V^{*}_{\caM^*,1} (s_1) - V^{\pi^*_{\text{DR}}}_{\caM^*,1} (s_1) -L\epsilon\right) \leq \mathbb{E}_{\caM \sim \nu\left(\caC_{\caM^*,\epsilon}\right)} \left[V^{*}_{\caM^*,1} (s_1) - V^{\pi^*_{\text{DR}}}_{\caM,1} (s_1) \right]. \end{align} Combining Inq~\ref{inq: reduction to algorithm design, Lipchitz case, 1} and Inq~\ref{inq: reduction to algorithm design, Lipchitz case, 2}, we have \begin{align} \nu\left(\caC_{\caM^*,\epsilon}\right) \left(V^{*}_{\caM^*,1} (s_1) - V^{\pi^*_{\text{DR}}}_{\caM^*,1} (s_1) -L\epsilon\right) \leq C. \end{align} The lemma follows by rearranging the above inequality. \end{proof} \subsection{Proof of Lemma~\ref{lemma: connection with infinite-horizon setting}} \label{appendix: proof of the lemma, connection with infinite-horizon setting} \begin{proof} For an MDP $\caM$, denote by $\pi^*_{in}$ the optimal policy in the infinite-horizon setting and by $\{\pi^*_{ep,h}\}_{h=1}^{H}$ the optimal policy in the episodic setting. By the optimality of $\pi^*_{ep}$, we have $V^{*}_{\caM,1}(s_1) = V^{\pi^*_{ep}}_{\caM,1}(s_1)\geq V^{\pi^*_{in}}_{\caM,1}(s_1)$.
By the Bellman equation in the infinite-horizon setting, we know that \begin{align} \lambda^*_{\caM}(s) + \rho^*_{\caM} = R(s,\pi^*_{in}(s)) + P_{\caM} \lambda^*_{\caM}(s,\pi^*_{in}(s)), \forall s \in \mathcal{S}. \end{align} For notational simplicity, we use $d_h(s_1,\pi)$ to denote the state distribution at step $h$ after starting from state $s_1$ at step $1$ and following policy $\pi$. From the above equation, we have \begin{align} \lambda^*_{\caM}(s_1) + H\rho^*_{\caM} = \sum_{h=1}^{H} \mathbb{E}_{s_h \sim d_h(s_1,\pi^*_{in})} R(s_h, \pi^*_{in}(s_h)) + \mathbb{E}_{s_{H+1} \sim d_{H+1}(s_1,\pi^*_{in})} \lambda^*_{\caM}(s_{H+1}). \end{align} That is, \begin{align} \Big|\sum_{h=1}^{H} \mathbb{E}_{s_h \sim d_h(s_1,\pi^*_{in})} R(s_h, \pi^*_{in}(s_h)) - H\rho^*_{\caM}\Big| = \left|\lambda^*_{\caM}(s_1) - \mathbb{E}_{s_{H+1} \sim d_{H+1}(s_1,\pi^*_{in})} \lambda^*_{\caM}(s_{H+1}) \right| \leq D, \end{align} where $\sum_{h=1}^{H} \mathbb{E}_{s_h \sim d_h(s_1,\pi^*_{in})} R(s_h, \pi^*_{in}(s_h)) = V_{\caM,1}^{\pi^*_{in}} (s_1) $. Therefore, we have $H \rho^*_{\caM} -D \leq V_{\caM,1}^{\pi^*_{in}} (s_1) \leq V_{\caM,1}^*(s_1)$. For the second inequality, by the Bellman equation in the infinite-horizon setting, we have \begin{align} \lambda_{\caM}^*(s_1) + H\rho_{\caM}^* \geq \sum_{h=1}^{H} \mathbb{E}_{s_h \sim d_h(s_1,\pi^*_{ep})} R(s_h, \pi^*_{ep,h}(s_h)) + \mathbb{E}_{s_{H+1} \sim d_{H+1}(s_1,\pi^*_{ep})} \lambda_{\caM}^*(s_{H+1}). \end{align} That is, \begin{align} \sum_{h=1}^{H} \mathbb{E}_{s_h \sim d_h(s_1,\pi^*_{ep})} R(s_h, \pi^*_{ep,h}(s_h)) - H\rho^*_{\caM} \leq \lambda_{\caM}^*(s_1) - \mathbb{E}_{s_{H+1} \sim d_{H+1}(s_1,\pi^*_{ep})} \lambda_{\caM}^*(s_{H+1}) \leq D, \end{align} where $\sum_{h=1}^{H} \mathbb{E}_{s_h \sim d_h(s_1,\pi^*_{ep})} R(s_h, \pi^*_{ep,h}(s_h)) = V_{\caM,1}^{*} (s_1) $. Therefore, $V_{\caM,1}^{*}(s_1) \leq H\rho^*_{\caM} + D$, which completes the proof.
\end{proof} \section{The Construction of Learning Algorithms} \label{appendix: construction of learning algorithms} \subsection{Finite Simulator Class with Separation Condition} \label{appendix: finite simulator class with separation condition} In this subsection, we explicitly define the base policy $\hat{\pi}$ with a sim-to-real gap guarantee under the separation condition. Note that a history-dependent policy for LMDPs can also be regarded as an algorithm for finite-horizon MDPs. By deriving an upper bound on the sim-to-real gap of $\hat{\pi}$, we can upper bound $\operatorname{Gap}(\pi^*_{\text{DR}},\caU)$ with Lemma~\ref{lemma: construction argument}. \begin{algorithm} \caption{Optimistic Exploration Under Separation Condition} \label{alg: gap-dpendent algorithm} \begin{algorithmic}[1] \State Initialize: the MDP set $\mathcal{D} = \mathcal{U}$, $n_0 = \frac{c_0 \log^2(SMH/\delta)\log(MH)}{\delta^4}$ for a constant $c_0$ \LineComment{\textit{Stage 1: Explore and find the real MDP $\caM^*$}} \While{$|\mathcal{D}| > 1$} \State Randomly select two MDPs $\caM_1$ and $\caM_2$ from the MDP set $\caD$ \State Choose $(s_0,a_0) = \argmax_{(s,a) \in \caS \times \caA}\left\|\left(P_{\caM_{1}}-P_{\caM_{2}}\right)(\cdot \mid s, a)\right\|_{1}$ \State Call Subroutine~\ref{subroutine: collecting data, M >2} with parameters $(s_0,a_0)$ and $n_0$ to collect history samples $\caH_{\caM_1,\caM_2}$ \If{ $\exists s' \in \caH_{\caM_1,\caM_2}, P_{\caM_{2}}(s'|s_0,a_0) = 0$ or $\prod_{s' \in \caH_{\caM_1,\caM_2}} \frac{P_{\caM_{1}}(s'|s_0,a_0)}{P_{\caM_{2}}(s'|s_0,a_0)} \geq 1$} \State Eliminate $\caM_2$ from the MDP set $\mathcal{D}$ \Else \State Eliminate $\caM_1$ from the MDP set $\mathcal{D}$ \EndIf \EndWhile \LineComment{\textit{Stage 2: Run the optimal policy of $\caM^*$}} \State Denote $\hat{\caM}$ as the remaining MDP in the MDP set $\caD$ \State Run the optimal policy of $\hat{\caM}$ for the remaining steps
\end{algorithmic} \end{algorithm} \begin{algorithm} \caption{Subroutine: collecting data for $(s_0,a_0)$} \label{subroutine: collecting data, M >2} \begin{algorithmic}[1] \State Input: informative state-action pair $(s_0,a_0)$, required visitation count $n_0$ \State Initialize: counter $N(s_0,a_0) = 0$, history data $\mathcal{H} = \emptyset$ \State Denote $\pi_{\caM}(s,s^{\prime})$ as the policy with the minimum expected travelling time $\mathbb{E}\left[T\left(s^{\prime} \mid \caM, \pi, s\right)\right]$ for MDP $\caM$ (defined in Assumption~\ref{assumption: communicating MDP}) \While{$N(s_0,a_0) \leq n_0$} \For{$i = 1, \cdots, M$} \State Denote the current state as $s_{init}$ \State Run the policy $\pi_{\caM_{i}}(s_{init},s_0)$ for $2D$ steps, breaking the loop immediately once the agent enters state $s_0$ \EndFor \If{the agent enters state $s_0$} \State Execute $a_0$ and enter the next state $s'$ \State Update the counter $N(s_0,a_0) = N(s_0,a_0) + 1$ and the data $\mathcal{H} = \mathcal{H} \bigcup \{s'\}$ \EndIf \EndWhile \State Output: history data $\mathcal{H}$ \end{algorithmic} \end{algorithm} The policy $\hat{\pi}$ is formally defined in Algorithm~\ref{alg: gap-dpendent algorithm}, which consists of two stages. In the first stage, the agent's goal is to quickly explore the environment and identify the real MDP $\caM^*$ within the MDP set $\caU$. This stage contains at most $M-1$ parts. In each part, the agent randomly selects two MDPs $\caM_1$ and $\caM_2$ from the remaining MDP set $\caD$. Since the agent knows the transition dynamics of $\caM_1$ and $\caM_2$, it can find the most informative state-action pair $(s_0,a_0)$, i.e., the one with the maximum total-variation difference between $P_{\caM_1}(\cdot|s_0,a_0)$ and $P_{\caM_2}(\cdot|s_0,a_0)$.
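To make the two ingredients of each Stage~1 part concrete, namely picking the most informative state-action pair and the likelihood-ratio elimination, here is a minimal plain-Python sketch; the two candidate models and all numbers are hypothetical toys, not part of the algorithm's formal description:

```python
import math
import random

def most_informative_pair(P1, P2):
    """Pick the (s, a) maximizing the L1 distance between the two candidate
    next-state distributions, as in Stage 1 of the algorithm."""
    return max(P1, key=lambda sa: sum(abs(p - q) for p, q in zip(P1[sa], P2[sa])))

def eliminate(p1, p2, samples):
    """Likelihood-ratio rule: return 2 to eliminate M2 (M2 assigns zero mass to
    an observed sample, or the product of ratios is >= 1), otherwise return 1."""
    if any(p2[s] == 0.0 for s in samples):
        return 2
    # Work in log space: prod p1/p2 >= 1  <=>  sum of log-ratios >= 0.
    log_ratio = sum(math.log(p1[s]) - math.log(p2[s]) for s in samples)
    return 2 if log_ratio >= 0 else 1

# Hypothetical toy candidates over two next states {0, 1}; the real dynamics
# coincide with model 1, and only ("s0", "a0") distinguishes the two models.
P1 = {("s0", "a0"): [0.8, 0.2], ("s0", "a1"): [0.5, 0.5]}
P2 = {("s0", "a0"): [0.2, 0.8], ("s0", "a1"): [0.5, 0.5]}
sa = most_informative_pair(P1, P2)   # ("s0", "a0"): L1 gap 1.2 versus 0
random.seed(0)
samples = random.choices([0, 1], weights=P1[sa], k=200)  # drawn from the truth
print(eliminate(P1[sa], P2[sa], samples))  # 2: the wrong model is eliminated
```

In the actual algorithm the samples come from Subroutine~\ref{subroutine: collecting data, M >2} rather than a seeded generator; the log-space comparison above is numerically equivalent to the product test but avoids underflow.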
The algorithm calls Subroutine~\ref{subroutine: collecting data, M >2} to collect enough samples from the pair $(s_0,a_0)$, and then eliminates the MDP with the smaller likelihood. At the end of the first stage, the MDP set $\caD$ is guaranteed to contain only the single MDP $\caM^*$ with high probability. Therefore, in the second stage, the agent can directly execute the optimal policy of the real MDP for the remaining steps. Subroutine~\ref{subroutine: collecting data, M >2} is designed to collect enough samples from the given state-action pair $(s_0,a_0)$. The basic idea in Subroutine~\ref{subroutine: collecting data, M >2} is to quickly reach the state $s_0$ and take action $a_0$, until the visitation counter reaches $N(s_0,a_0) = n_0$. Denote $\pi_{\caM}(s,s^{\prime})$ as the policy with the minimum expected travelling time $\mathbb{E}\left[T\left(s^{\prime} \mid \caM, \pi, s\right)\right]$ for MDP $\caM$. Suppose the agent is currently at state $s_{init}$ and runs the policy $\pi_{\caM^*}(s_{init},s_0)$ in the following steps. By Assumption~\ref{assumption: communicating MDP} and Markov's inequality, the agent will enter state $s_0$ within $2D$ steps with probability at least $1/2$. Therefore, in Subroutine~\ref{subroutine: collecting data, M >2}, we run the policy $\pi_{\caM_{i}}(s_{init},s_0)$ for $2D$ steps for each $i \in [M]$ alternately. This ensures that the agent enters state $s_0$ within $2MD$ steps with probability at least $1/2$. Theorem~\ref{theorem: gap-dependent regret bound} states an upper bound on the sim-to-real gap of Algorithm~\ref{alg: gap-dpendent algorithm}, which is proved in Appendix~\ref{appendix: proof of the regret in setting 1}. \begin{theorem} \label{theorem: gap-dependent regret bound} Let $\hat{\pi}$ denote the history-dependent policy represented by Algorithm~\ref{alg: gap-dpendent algorithm}.
Under Assumption~\ref{assumption: communicating MDP} and Assumption~\ref{assumption: separated MDPs}, for any $\caM \in \caU$, the sim-to-real gap of Algorithm~\ref{alg: gap-dpendent algorithm} satisfies \begin{align} V_{\caM,1}^{*}(s_1) - V_{\caM,1}^{\hat{\pi}} (s_1) \leq O\left(\frac{DM^2 \log(MH) \log^2(SMH/\delta)}{\delta^4}\right). \end{align} \end{theorem} \subsection{Finite Simulator Class without Separation Condition} \label{appendix: finite simulator class without separation condition} In this subsection, we propose an efficient algorithm for a finite simulator class in the infinite-horizon average-reward setting. Our algorithm is described in Algorithm~\ref{alg: optimistic exploration}. In episode $k$, the agent executes the optimal policy of the optimistic MDP $\caM^k$, i.e., the candidate with the maximum expected gain $\rho^*_{\caM^k}$. Once the agent has collected enough data to realize that the current MDP $\caM^k$ is not the MDP $\caM^*$ representing the dynamics of the real environment, the agent eliminates $\caM^k$ from the MDP set.
\begin{algorithm} \caption{Optimistic Exploration} \label{alg: optimistic exploration} \begin{algorithmic}[1] \State Initialize: the MDP set $\mathcal{U}_1 = \mathcal{U}$, the episode counter $k=1$, $h_0 = 1$ \State Calculate $\caM^{1} = \argmax_{\caM \in \caU_1} \rho^*_{\caM} $ \For{step $h = 1,\cdots, H$} \State Take action $a_h = \pi^*_{\caM^k}(s_h)$, obtain the reward $R(s_h,a_h)$, and observe the next state $s_{h+1}$ \If{$\left|\sum_{t=h_0}^{h} \left(P_{\caM^k} \lambda_{\caM^k}^{*}\left(s_{t}, a_{t}\right) - \lambda_{\caM^k}^{*}\left(s_{t+1}\right)\right) \right| > D\sqrt{2(h-h_0) \log(2HM)}$} \label{alg: stopping condition finite} \State Eliminate $\caM^{k}$ from the MDP set $\mathcal{U}_k$, and denote the remaining set by $\caU_{k+1}$ \State Calculate $\caM^{k+1} = \argmax_{\caM \in \caU_{k+1}} \rho^*_{\caM}$ \State Set $h_0 = h+1$ and $k = k +1$ \EndIf \EndFor \end{algorithmic} \end{algorithm} To illustrate the idea behind the elimination condition in Line~\ref{alg: stopping condition finite} of Algorithm~\ref{alg: optimistic exploration}, we briefly sketch our regret analysis of Algorithm~\ref{alg: optimistic exploration}.
Suppose the MDP $\caM^k$ selected in episode $k$ satisfies the optimistic condition $\rho^*_{\caM^k} \geq \rho^*_{\caM^*}$; then the regret in $H$ steps can be bounded as: \begin{align} &H \rho^*_{\caM^*} - \sum_{h=1}^{H} R(s_h,a_h) \\ \leq & \sum_{k=1}^{K} \sum_{h = \tau(k)}^{\tau(k+1)-1} \left(\rho^*_{\caM^k} - R(s_h,a_h)\right) \\ = & \sum_{k=1}^{K} \sum_{h = \tau(k)}^{\tau(k+1)-1} \left(P_{\caM^k} \lambda^*_{\caM^k}(s_h,a_h) - \lambda^*_{\caM^k}(s_h)\right) \\ = & \sum_{k=1}^{K} \sum_{h = \tau(k)}^{\tau(k+1)-1} \left(P_{\caM^k} \lambda^*_{\caM^k}(s_h,a_h) - \lambda^*_{\caM^k}(s_{h+1})\right) + \sum_{k=1}^{K} \left(\lambda^*_{\caM^k}(s_{\tau(k+1)}) - \lambda^*_{\caM^k}(s_{\tau(k)})\right) \\ \label{inq: regret decomposition in infinite-horizon setting} \leq & \sum_{k=1}^{K} \sum_{h = \tau(k)}^{\tau(k+1)-1} \left(P_{\caM^k} \lambda^*_{\caM^k}(s_h,a_h) - \lambda^*_{\caM^k}(s_{h+1})\right) + KD. \end{align} Here we use $K$ to denote the total number of episodes and $\tau(k)$ to denote the first step of episode $k$. The first inequality is due to the optimistic condition $\rho^*_{\caM^k} \geq \rho^*_{\caM^*}$. The first equality is due to the Bellman equation in the infinite-horizon setting (Eqn~\ref{eqn: Bellman equation, infinite setting}). The last inequality is due to $0 \leq \lambda^*_{\caM}(s) \leq D$. From the above inequality, we know that the regret in episode $k$ depends on the summation $\sum_{h = \tau(k)}^{\tau(k+1)-1} \left(P_{\caM^k} \lambda^*_{\caM^k}(s_h,a_h) - \lambda^*_{\caM^k}(s_{h+1})\right)$. If this term is relatively small, we can continue following the policy $\pi^*_{\caM^k}$ with little loss.
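In code, the elimination test of Line~\ref{alg: stopping condition finite} of Algorithm~\ref{alg: optimistic exploration} amounts to a running martingale-deviation check; a minimal sketch with hypothetical traces, where `predicted` plays the role of $P_{\caM^k}\lambda^*_{\caM^k}(s_h,a_h)$ and `observed` the samples $\lambda^*_{\caM^k}(s_{h+1})$:

```python
import math
import random

def should_eliminate(predicted, observed, D, H, M):
    """Flag the candidate model when the running sum of
    (P_{M^k} lambda*(s_h, a_h) - lambda*(s_{h+1})) deviates beyond the
    Azuma-Hoeffding bound for increments bounded by D in absolute value."""
    n = len(predicted)
    deviation = abs(sum(p - o for p, o in zip(predicted, observed)))
    return deviation > D * math.sqrt(2 * n * math.log(2 * H * M))

random.seed(1)
D, H, M = 1.0, 1000, 5
# Hypothetical traces of length 200 with bias values in [0, D].
truth = [random.random() for _ in range(200)]              # predictions of M*
observed = [t + random.uniform(-0.5, 0.5) for t in truth]  # noisy samples
print(should_eliminate(truth, observed, D, H, M))   # False: model consistent
biased = [t + 0.9 for t in truth]   # a wrong model, off by 0.9 per step
print(should_eliminate(biased, observed, D, H, M))  # True: model eliminated
```

The threshold $D\sqrt{2n\log(2HM)}$ matches the algorithm; the per-step bias of $0.9$ in the second trace is an arbitrary illustrative choice that pushes the deviation far past the bound.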
Since $\lambda^*_{\caM^k}(s_{h+1})$ is an empirical sample of $P_{\caM^*} \lambda^*_{\caM^k}(s_h,a_h)$, if this term is relatively large, we can guarantee that $\caM^k$ is not $\caM^*$ with high probability. Based on the above discussion, we can upper bound the regret of Algorithm~\ref{alg: optimistic exploration}. We defer the proof of Theorem~\ref{theorem: square-root H regret bound} to Appendix~\ref{appendix: proof of regret in setting 2}. \begin{theorem} \label{theorem: square-root H regret bound} Under Assumption~\ref{assumption: communicating MDP}, the regret of Algorithm~\ref{alg: optimistic exploration} is upper bounded by \begin{align} \text{Reg}(H) \leq O\left(D\sqrt{MH \log(MH)}\right). \end{align} \end{theorem} \subsection{Infinite Simulator Class} \label{appendix: infinite simulator class} In this subsection, we propose a provably efficient model-based algorithm for infinite-horizon average-reward MDPs with general function approximation. To the best of our knowledge, this is the first efficient algorithm with near-optimal regret for infinite-horizon average-reward MDPs with general function approximation. Consider a model class $\caU$ that covers the true MDP $\caM^*$, i.e., $\caM^* \in \caU$. We define the function space $\Lambda = \{\lambda^*_{\caM}, \caM \in \caU\}$ and the space $\caX = \caS \times \caA \times \Lambda$, as well as the function class \begin{align} \caF = \left\{f_{\caM}(s,a,\lambda): \caX \rightarrow \mathbb{R} \text { such that } f_{\caM}(s, a, \lambda)=P_{\caM} \lambda(s, a) \text { for } \caM \in \mathcal{U}, \lambda \in \Lambda\right\}. \end{align} Our algorithm, described in Algorithm~\ref{alg: general_opt_alg}, also follows the well-known principle of optimism in the face of uncertainty.
In each episode $k$, we calculate the optimistic MDP $\caM^k$ with the maximum expected gain $\rho_{\caM^k}^{*}$, and execute the optimal policy of $\caM^k$ to interact with the environment and collect more samples. Once we have collected enough samples in episode $k$, we update the model class $\caU_k$ and compute the optimistic MDP for episode $k+1$. Compared with the setting of episodic MDPs with general function approximation~\citep{ayoub2020model,wang2020reinforcement,jin2021bellman}, the additional difficulty in the infinite-horizon setting is that the regret depends linearly on the total number of episodes, i.e., the number of times we update the optimistic model and the policy. This corresponds to the last term ($KD$) in Inq~\ref{inq: regret decomposition in infinite-horizon setting}. Therefore, to design an efficient algorithm with near-optimal regret in the infinite-horizon setting, the algorithm should maintain a low-switching property~\citep{bai2019provably,kong2021online}. Taking inspiration from recent work on efficient exploration with low switching cost in the episodic setting~\citep{kong2021online}, we define the importance score, $\sup _{f_{1}, f_{2} \in \mathcal{F}} \frac{\left\|f_{1}-f_{2}\right\|_{\mathcal{Z}_{new}}^{2}}{\left\|f_{1}-f_{2}\right\|_{\mathcal{Z}}^{2}+\alpha}$, as a measure of the importance of the new samples collected in the current episode, and only update the optimistic model and the policy when the importance score is at least $1$. Here $\left\|f_1 - f_2\right\|^2_{\caZ}$ is a shorthand for $\sum_{(s,a,s',\lambda) \in \caZ} \left(f_1(s,a,\lambda)-f_2(s,a,\lambda)\right)^2$.
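As a brute-force illustration over a tiny, hypothetical two-function class, the importance score and the update trigger can be computed directly; each point $x$ below abbreviates the $(s,a,\lambda)$ argument at which the functions are evaluated:

```python
import itertools

def sq_norm(f, g, Z):
    """||f - g||_Z^2: sum over data points of the squared prediction gap."""
    return sum((f[x] - g[x]) ** 2 for x in Z)

def importance_score(F, Z, Z_new, alpha):
    """sup over pairs in F of ||f1 - f2||^2_{Z_new} / (||f1 - f2||^2_Z + alpha)."""
    return max(sq_norm(f, g, Z_new) / (sq_norm(f, g, Z) + alpha)
               for f, g in itertools.combinations(F, 2))

# Hypothetical two-function class represented as dicts from points to values.
x0, x1 = "x0", "x1"
F = [{x0: 0.0, x1: 0.0}, {x0: 0.1, x1: 2.0}]
Z = [x0]       # old data barely separates the pair: ||f1 - f2||_Z^2 = 0.01
Z_new = [x1]   # the new point separates them strongly: ||f1 - f2||_{Z_new}^2 = 4
score = importance_score(F, Z, Z_new, alpha=0.5)
print(score >= 1)  # True: the new samples are informative, so update the model
```

Here the score is $4/(0.01+0.5) \approx 7.8 \geq 1$: the new point distinguishes a pair of models that the old data could not, so a model and policy update is triggered; when every pair that is far apart on $\caZ_{new}$ is already far apart on $\caZ$, the score stays below $1$ and no switch occurs.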
\begin{algorithm} \caption{General Optimistic Algorithm} \label{alg: general_opt_alg} \begin{algorithmic}[1] \State Initialize: the MDP set $\mathcal{U}_1 = \mathcal{U}$, the episode counter $k=1$ \State Initialize: the history data sets $\caZ = \emptyset$, $\caZ_{new} = \emptyset$ \State $\alpha = 4D^2+1$, $\beta = c D^{2} \log \left(H \cdot \mathcal{N}(\mathcal{F}, 1/H)\right)$ for a constant $c$ \State Compute $\caM^1 = \argmax_{\caM \in \caU_1} \rho^*_{\caM}$ \For{step $h = 1,\cdots, H$} \State Take action $a_h=\pi^*_{\caM^k}(s_h)$ in the current state $s_h$, and transit to state $s_{h+1}$ \State Add $(s_h,a_h,s_{h+1},\lambda^*_{\caM^k})$ to the set $\caZ_{new}$ \If{ the importance score $\sup _{f_{1}, f_{2} \in \mathcal{F}} \frac{\left\|f_{1}-f_{2}\right\|_{\mathcal{Z}_{new}}^{2}}{\left\|f_{1}-f_{2}\right\|_{\mathcal{Z}}^{2}+\alpha} \geq 1$} \State Add the history data $\caZ_{new}$ to the set $\caZ$, and reset $\caZ_{new} = \emptyset$ \State Calculate $\hat{\caM}_{k+1} = \argmin_{\caM \in \caU} \sum_{(s_h,a_h,s_{h+1},\lambda_h) \in \caZ} \left(P_{\caM}\lambda_h(s_h,a_h) - \lambda_h(s_{h+1})\right)^2$ \State Update $\caU_{k+1} = \left\{\caM \in \caU: \sum_{(s_h,a_h,s_{h+1},\lambda_h) \in \caZ}\left(\left(P_{\caM} - P_{\hat{\caM}_{k+1}}\right) \lambda_h(s_h,a_h)\right)^2 \leq \beta\right\}$ \State Compute $\caM^{k+1} = \argmax_{\caM \in \caU_{k+1}} \rho^*_{\caM}$ \State Update the episode counter $k = k+1$ \EndIf \EndFor \end{algorithmic} \end{algorithm} We state the regret upper bound of Algorithm~\ref{alg: general_opt_alg} in Theorem~\ref{theorem: eluder dimension regret bound}, and defer the proof of the theorem to Appendix~\ref{appendix: proof of eluder dimension regret bound}.
\begin{theorem} \label{theorem: eluder dimension regret bound} Under Assumption~\ref{assumption: communicating MDP}, the regret of Algorithm~\ref{alg: general_opt_alg} is upper bounded by \begin{align} Reg(H) \leq O\left(D\sqrt{d_e H \log \left(H \cdot \mathcal{N}(\mathcal{F}, 1/H)\right)}\right), \end{align} where $d_e$ is the $\epsilon$-eluder dimension of the function class $\caF$ with $\epsilon = \frac{1}{H}$, and $\mathcal{N}(\mathcal{F}, 1/H)$ is the $\frac{1}{H}$-covering number of $\caF$ w.r.t.\ the $L_{\infty}$ norm. \end{theorem} For $\alpha > 0$, we say the covering number $\caN(\caF, \alpha)$ of $\caF$ w.r.t.\ the $L_\infty$ norm equals $m$ if there are $m$ functions in $\caF$ such that any function in $\caF$ is at most $\alpha$ away from one of these $m$ functions in the norm $\|\cdot\|_\infty$. The $\|\cdot\|_\infty$ norm of a function $f$ is defined as $\|f\|_\infty \defeq \max_{x \in \caX} |f(x)|$. \section{Omitted Proof for Finite Simulator Class with Separation Condition} \label{appendix: omitted proof in setting 1} \subsection{Proof of Theorem~\ref{theorem: gap-dependent regret bound}} \label{appendix: proof of the regret in setting 1} The formal definition of $\hat{\pi}$ is given in Algorithm~\ref{alg: gap-dpendent algorithm}. To upper bound the sim-to-real gap of $\hat{\pi}$, we establish two useful properties of Algorithm~\ref{alg: gap-dpendent algorithm} in Lemma~\ref{lemma: never eliminating M*, gap-dependent bound} and Lemma~\ref{lemma: expected stopping time for gap-dependent bound}. Lemma~\ref{lemma: never eliminating M*, gap-dependent bound} states that the true MDP $\caM^*$ will never be eliminated from the MDP set $\caD$. Therefore, in stage 2, the agent will execute the optimal policy in the remaining steps with high probability.
Lemma~\ref{lemma: expected stopping time for gap-dependent bound} states that the total number of steps in stage 1 is upper bounded by $\tilde{O}(\frac{DM^2}{\delta^4})$ in expectation. This is where the final bound in Theorem~\ref{theorem: gap-dependent regret bound} comes from. \begin{lemma} \label{lemma: never eliminating M*, gap-dependent bound} With probability at least $1-\frac{1}{H}$, the true MDP $\caM^*$ is never eliminated from the MDP set $\caD$ in stage 1. \end{lemma} The while-loop in stage 1 runs at most $M-1$ times. To prove Lemma~\ref{lemma: never eliminating M*, gap-dependent bound}, we need to show that, if the true MDP $\caM^*$ is selected in a certain loop against another MDP $\caM$, then $\prod_{s' \in \caH_{\caM_1,\caM_2}} \frac{P_{\caM^*}(s'|s_0,a_0)}{P_{\caM}(s'|s_0,a_0)} > 1$ holds with high probability. This is established in the following lemma. \begin{lemma} \label{lemma: not eliminating M* in a fixed episode, gap-dependent bound} Suppose $\caH = \{s'_i\}_{i=1}^{n_0}$ is a set of $n_0 = \frac{c_0 \log^2(SMH/\delta)\log(MH)}{\delta^4}$ independent samples from a given state-action pair $(s_0,a_0)$ under the MDP $\caM^*$, for a large constant $c_0$. Let ${\caM}_1$ denote another MDP satisfying $\left\|(P_{\caM^*} - P_{{\caM}_1})(\cdot|s_0,a_0)\right\|_1 \geq \delta$. Then the following inequality holds with probability at least $1-\frac{1}{MH}$: \begin{align} \label{inq: not eliminating M* in a fixed episode} \prod_{s' \in \caH} \frac{P_{\caM^*}(s'|s_0,a_0)}{P_{{\caM}_1}(s'|s_0,a_0)} > 1. \end{align} \end{lemma} \begin{proof} The proof of Lemma~\ref{lemma: not eliminating M* in a fixed episode, gap-dependent bound} is inspired by the analysis in~\cite{kwon2021rl}.
To prove Inq~\ref{inq: not eliminating M* in a fixed episode}, it suffices to show that \begin{align} \label{inq: ln probability ration, gap-dependent bound} \ln{\left(\prod_{s' \in \caH} \frac{P_{\caM^*}(s'|s_0,a_0)}{P_{{\caM}_1}(s'|s_0,a_0)}\right)} = \sum_{s' \in \caH} \ln{\left(\frac{P_{\caM^*}(s'|s_0,a_0)}{P_{{\caM}_1}(s'|s_0,a_0)}\right)} > 0 \end{align} holds with probability at least $1-\frac{1}{MH}$. Note that $\frac{P_{\caM^*}(s'|s_0,a_0)}{P_{{\caM}_1}(s'|s_0,a_0)}$ can be unbounded, since $P_{{\caM}_1}(s'|s_0,a_0)$ can be zero for certain $s'$. To tackle this issue, we define the smoothed kernel $\tilde{P}_{\caM_1}$ for a sufficiently small $\alpha \leq \frac{\delta}{4S}$: \begin{align} \tilde{P}_{\caM_1} (s'|s,a) = \alpha + (1-\alpha S) P_{\caM_1}(s'|s,a). \end{align} We have $\left\|\left(\tilde{P}_{\caM_1}-P_{\caM_1}\right)(\cdot|s,a)\right\|_1 \leq 2S\alpha \leq \frac{\delta}{2}$, and thus $\left\|\left(\tilde{P}_{\caM_1}-P_{\caM^*}\right)(\cdot|s,a)\right\|_1 \geq \frac{\delta}{2}$. Moreover, $\ln\left(\frac{1}{\tilde{P}_{\caM_1}(s'|s_0,a_0)}\right) \leq \ln(1/\alpha)$ for any $s' \in \caS$. With the above definition, we can decompose the left-hand side of Inq~\ref{inq: ln probability ration, gap-dependent bound} into two terms: \begin{align} \label{eqn: prob ratio decomposition, gap-dependent bound} \sum_{s' \in \caH} \ln{\left(\frac{P_{\caM^*}(s'|s_0,a_0)}{P_{{\caM}_1}(s'|s_0,a_0)}\right)} = \sum_{s' \in \caH} \ln{\left(\frac{P_{\caM^*}(s'|s_0,a_0)}{\tilde{P}_{{\caM}_1}(s'|s_0,a_0)}\right)} + \sum_{s' \in \caH} \ln{\left(\frac{\tilde{P}_{\caM_1}(s'|s_0,a_0)}{P_{\caM_1}(s'|s_0,a_0)}\right)}.
\end{align} Taking the expectation over $s' \sim P_{\caM^*}(\cdot|s_0,a_0)$ for the first term, we have \begin{align} \mathbb{E}\left[\sum_{i=1}^{n_0} \ln\left(\frac{P_{\caM^*}(s'_i|s_0,a_0)}{\tilde{P}_{\caM_1}(s'_i|s_0,a_0)}\right)\right] =& \sum_{i=1}^{n_0} \sum_{s'} P_{\caM^*}(s'|s_0,a_0) \ln{\left(\frac{P_{\caM^*}(s'|s_0,a_0)}{\tilde{P}_{{\caM}_1}(s'|s_0,a_0)}\right)} \\ =& n_0 D_{KL}\left(P_{\caM^*}(\cdot|s_0,a_0)\,\|\,\tilde{P}_{{\caM}_1}(\cdot|s_0,a_0)\right) \\ \geq & \frac{n_0 \delta^2}{8}, \end{align} where the last inequality is due to Pinsker's inequality together with $\left\|\left(\tilde{P}_{\caM_1}-P_{\caM^*}\right)(\cdot|s_0,a_0)\right\|_1 \geq \frac{\delta}{2}$. \begin{lemma} (Lemma C.2 in~\cite{kwon2021rl}) Suppose $X$ is an arbitrary discrete random variable on a finite support $\caX$. Then $\ln(1/P(X))$ is a sub-exponential random variable~\citep{vershynin2010introduction} with Orlicz norm $\left\|\ln(1/P(X))\right\|_{\phi_1} = 1/e$. \end{lemma} From the above lemma, we know that both $\ln(1/\tilde{P}_{\caM_1}(s'|s_0,a_0))$ and $\ln(1/{P}_{\caM^*}(s'|s_0,a_0))$ are sub-exponential random variables. Since $\ln(1/\tilde{P}_{\caM_1}(s'|s_0,a_0))$ is moreover bounded by $\ln(1/\alpha)$, by the Azuma-Hoeffding inequality, with probability at least $1-\delta_0/2$, \begin{align} \sum_{s' \in \caH} \ln\left(\frac{1}{\tilde{P}_{\caM_1}(s'|s_0,a_0)}\right) \geq \mathbb{E}\left[\sum_{i=1}^{n_0} \ln\left(\frac{1}{\tilde{P}_{\caM_1}(s'_i|s_0,a_0)}\right)\right] - \log(1/\alpha)\sqrt{2n_0 \log(2/\delta_0)}. \end{align} By Proposition 5.16 in~\cite{vershynin2010introduction}, with probability at least $1-\delta_0/2$, \begin{align} \sum_{s' \in \caH} \ln\left({{P}_{\caM^*}(s'|s_0,a_0)}\right) \geq \mathbb{E}\left[\sum_{i=1}^{n_0} \ln\left({{P}_{\caM^*}(s'_i|s_0,a_0)}\right)\right] - \sqrt{n_0 \log(2/\delta_0)/c}, \end{align} for a certain constant $c>0$. Therefore, we can lower bound the first term in Eqn~\ref{eqn: prob ratio decomposition, gap-dependent bound}: \begin{align} \label{inq: prob ratio decomposition part 1, gap-dependent bound} \sum_{s' \in \caH} \ln{\left(\frac{P_{\caM^*}(s'|s_0,a_0)}{\tilde{P}_{{\caM}_1}(s'|s_0,a_0)}\right)} \geq \frac{n_0 \delta^2}{8} - \log(1/\alpha)\sqrt{2n_0 \log(2/\delta_0)} - \sqrt{n_0 \log(2/\delta_0)/c}, \end{align} with probability at least $1-\delta_0$. For the second term in Eqn~\ref{eqn: prob ratio decomposition, gap-dependent bound}, by the definition of $\tilde{P}_{\caM_1}$, \begin{align} \label{inq: prob ratio decomposition part 2, gap-dependent bound} \sum_{s' \in \caH} \ln{\left(\frac{\tilde{P}_{\caM_1}(s'|s_0,a_0)}{P_{\caM_1}(s'|s_0,a_0)}\right)} \geq -2\alpha Sn_0. \end{align} Combining Inq~\ref{inq: prob ratio decomposition part 1, gap-dependent bound} and Inq~\ref{inq: prob ratio decomposition part 2, gap-dependent bound}, we have \begin{align} \sum_{s' \in \caH} \ln{\left(\frac{P_{\caM^*}(s'|s_0,a_0)}{P_{{\caM}_1}(s'|s_0,a_0)}\right)} \geq \frac{n_0 \delta^2}{8} - \log(1/\alpha)\sqrt{2n_0 \log(2/\delta_0)} - \sqrt{n_0 \log(2/\delta_0)/c} - 2\alpha Sn_0. \end{align} Setting $\alpha = \frac{\delta^2}{32S}$ (so that $2\alpha S n_0 = \frac{n_0\delta^2}{16}$), $\delta_0 = \frac{1}{MH}$, and $n_0 = \frac{c_0\log^2(SMH/\delta)\log(MH)}{\delta^4}$, we have \begin{align} \sum_{s' \in \caH} \ln{\left(\frac{P_{\caM^*}(s'|s_0,a_0)}{P_{{\caM}_1}(s'|s_0,a_0)}\right)} > 0 \end{align} with probability at least $1-\frac{1}{MH}$. \end{proof} \begin{lemma} \label{lemma: expected stopping time for gap-dependent bound} Suppose Stage 1 in Algorithm~\ref{alg: gap-dpendent algorithm} ends in $h_0$ steps. We have $\mathbb{E}[h_0] \leq O(\frac{DM^2 \log^2(SMH/\delta)\log(MH)}{\delta^4})$, where the expectation is over all randomness in the algorithm and the environment.
\end{lemma} \begin{proof} Recall that $\pi_{\caM}(s,s^{\prime})$ is the policy minimizing the expected travelling time $\mathbb{E}\left[T\left(s^{\prime} \mid \caM, \pi, s\right)\right]$ for MDP $\caM$. By Assumption~\ref{assumption: communicating MDP}, we have \begin{align} \mathbb{E}\left[T\left(s^{\prime} \mid \caM^*, \pi_{\caM^*}(s,s^{\prime}), s\right)\right] \leq D. \end{align} Given states $s$ and $s'$, by Markov's inequality, we know that with probability at least $\frac{1}{2}$, \begin{align} \label{inq: maximum travelling time} T\left(s^{\prime} \mid \caM^*, \pi_{\caM^*}(s,s^{\prime}), s\right) \leq 2D. \end{align} Consider the following stochastic process: in each episode $k$, the agent starts from an arbitrarily selected state $s_k$ and runs the policy $\pi_{\caM^*}(s_k,s_0)$ for $2D$ steps on the MDP $\caM^*$. The process terminates once the agent enters the target state $s_0$. By Inq~\ref{inq: maximum travelling time}, the probability that the process terminates within $k$ episodes is at least $1- \frac{1}{2^k}$. By a basic calculation, the expected stopping episode is bounded by the constant $4$. Now we return to the proof of Lemma~\ref{lemma: expected stopping time for gap-dependent bound}. In Subroutine~\ref{subroutine: collecting data, M >2}, we run the policy $\pi_{\caM_i}$ for each MDP $\caM_i \in \caD$ alternately. By Lemma~\ref{lemma: never eliminating M*, gap-dependent bound}, the true MDP $\caM^*$ is always contained in the MDP set $\caD$. Therefore, the expected travelling time needed to enter state $s_0$ for $n_0$ times is bounded by $n_0 \cdot M \cdot 8D$. In Stage 1, we call Subroutine~\ref{subroutine: collecting data, M >2} $M-1$ times, which means that the expected number of steps in Stage 1 satisfies $\mathbb{E}[h_{0}] \leq 8M^2n_0D = O(\frac{DM^2 \log^2(SMH/\delta)\log(MH)}{\delta^4})$.
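The bound on the expected stopping episode can be checked directly. A minimal sketch (ours, not the paper's code), using the fact that the stopping episode is stochastically dominated by a $\mathrm{Geometric}(1/2)$ random variable:

```python
from math import isclose

# The stopping episode K satisfies P(K > k) <= (1/2)^k, so K is dominated
# by a Geometric(1/2) variable with mean sum_{k>=1} k (1/2)^k = 2, which is
# within the constant bound of 4 used in the proof above.
expected_episodes = sum(k * 0.5 ** k for k in range(1, 200))
assert isclose(expected_episodes, 2.0, abs_tol=1e-6)
assert expected_episodes <= 4
```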
\end{proof} \begin{proof} (Proof of Theorem~\ref{theorem: gap-dependent regret bound}) Recall that we use $h_0$ to denote the total number of steps in Stage 1. First, we prove that $\operatorname{Gap}(\hat{\pi}, \caU)$ is upper bounded by $O\left(\mathbb{E}[h_0] + D\right)$. \begin{align} \label{inq: value gap, gap-dependent regret bound} & V^{*}_{\caM^*,1}(s_1) - V^{\hat{\pi}}_{\caM^*,1}(s_1) \\ = & \mathbb{E}_{h_0}\left[ \mathbb{E}_{\caM^*,\pi^*_{\caM^*}}\left[\sum_{h=1}^{h_0} R(s_h,a_h)\mid h_0\right] + \mathbb{E}_{\caM^*,\pi^*_{\caM^*}} \left[V^{*}_{\caM^*,h_0+1}(s_{h_0+1})\mid h_0\right] \right] \\ & - \mathbb{E}_{h_0}\left[ \mathbb{E}_{\caM^*,\hat{\pi}}\left[\sum_{h=1}^{h_0} R(s_h,a_h)\mid h_0\right] + \mathbb{E}_{\caM^*,\hat{\pi}} \left[V^{\hat{\pi}}_{\caM^*,h_0+1}(s_{h_0+1})\mid h_0\right] \right] \\ \leq &\mathbb{E}_{h_0}\left[ \mathbb{E}_{\caM^*,\pi^*_{\caM^*}}\left[\sum_{h=1}^{h_0} R(s_h,a_h)\mid h_0\right] \right] + \mathbb{E}_{h_0}\left[ \mathbb{E}_{\caM^*,\pi^*_{\caM^*}} \left[V^{*}_{\caM^*,h_0+1}(s_{h_0+1})\mid h_0\right] - \mathbb{E}_{\caM^*,\hat{\pi}} \left[V^{\hat{\pi}}_{\caM^*,h_0+1}(s_{h_0+1})\mid h_0\right]\right] \\ \leq & \mathbb{E}[h_0] + \mathbb{E}_{h_0}\left[ \mathbb{E}_{\caM^*,\pi^*_{\caM^*}} \left[V^{*}_{\caM^*,h_0+1}(s_{h_0+1}) \mid h_0\right] - \mathbb{E}_{\caM^*,\hat{\pi}} \left[V^{\hat{\pi}}_{\caM^*,h_0+1}(s_{h_0+1})\mid h_0\right] \right]. \end{align} The outer expectation is over all possible $h_0$, while the inner expectation is over the possible trajectories given fixed $h_0$. By Lemma~\ref{lemma: never eliminating M*, gap-dependent bound}, we know that $\hat{\pi} = \pi^*_{\text{DR}}$ after $h_0$ steps with probability at least $1-\frac{1}{H}$.
If this high-probability event happens, the second part of the above inequality equals $$\mathbb{E}\left[ \mathbb{E}_{\caM^*,\pi^*_{\caM^*}} \left[V^{*}_{\caM^*,h_0+1}(s_{h_0+1}) \mid h_0\right] - \mathbb{E}_{\caM^*,\hat{\pi}} \left[V^{*}_{\caM^*,h_0+1}(s_{h_0+1})\mid h_0\right] \right].$$ We now prove that this term is upper bounded by $2D$. Given fixed $h_0$, $\mathbb{E}_{\caM^*,\pi^*_{\caM^*}} \left[V^{*}_{\caM^*,h_0+1}(s_{h_0+1}) \mid h_0\right] - \mathbb{E}_{\caM^*,\hat{\pi}} \left[V^{*}_{\caM^*,h_0+1}(s_{h_0+1})\mid h_0\right]$ is the difference of the value function at step $h_0+1$ under two different distributions of $s_{h_0+1}$. We use $d_{h}(s,\pi)$ to denote the state distribution at step $h$ after starting from state $s$ at step $h_0+1$ and following policy $\pi$. \begin{align} V^{*}_{\caM^*,h_0+1}(s_{h_0+1}) =& \sum_{h=h_{0}+1}^{H}\mathbb{E}_{s_h \sim d_{h}(s_{h_0+1},\pi^*_{\caM^*})} R(s_h, \pi^*_{\caM^*}(s_h)) \\ =& \sum_{h=h_0+1}^{H} \left(\rho^*_{\caM^*} + \mathbb{E}_{s_h \sim d_{h}(s_{h_0+1},\pi^*_{\caM^*})} \lambda^*_{\caM^*}(s_h) - \mathbb{E}_{s_{h+1} \sim d_{h+1}(s_{h_0+1},\pi^*_{\caM^*})} \lambda^*_{\caM^*}(s_{h+1})\right) \\ = & (H-h_0)\rho^*_{\caM^*} + \lambda^*_{\caM^*}(s_{h_0+1}) - \mathbb{E}_{s_{H+1} \sim d_{H+1}(s_{h_0+1},\pi^*_{\caM^*})} \lambda^*_{\caM^*}(s_{H+1}), \end{align} where the second equality is due to the Bellman equation in the infinite-horizon setting (Eqn~\ref{eqn: Bellman equation, infinite setting}). Note that by the communicating property, we have $0 \leq \lambda^*_{\caM^*}(s) \leq D$ for any $s \in \caS$. Therefore, we have \begin{align} \mathbb{E}_{\caM^*,\pi^*_{\caM^*}} \left[V^{*}_{\caM^*,h_0+1}(s_{h_0+1}) \mid h_0\right] - \mathbb{E}_{\caM^*,\hat{\pi}} \left[V^{*}_{\caM^*,h_0+1}(s_{h_0+1})\mid h_0\right] \leq 2D.
\end{align} If the high-probability event defined in Lemma~\ref{lemma: never eliminating M*, gap-dependent bound} does not hold (this happens with probability at most $\frac{1}{H}$), we still have $ \mathbb{E}_{\caM^*,\pi^*_{\caM^*}} \left[V^{*}_{\caM^*,h_0+1}(s_{h_0+1}) \mid h_0\right] - \mathbb{E}_{\caM^*,\hat{\pi}} \left[V^{\hat{\pi}}_{\caM^*,h_0+1}(s_{h_0+1})\mid h_0\right] \leq H$, so this event does not affect the final bound. Taking expectation over all possible $h_0$ and plugging the result back into Inq~\ref{inq: value gap, gap-dependent regret bound}, we have \begin{align} V^{*}_{\caM^*,1}(s_1) - V^{\hat{\pi}}_{\caM^*,1}(s_1) \leq O\left(\mathbb{E}[h_0] + D\right) = O \left(\frac{DM^2 \log^2(SMH/\delta)\log(MH)}{\delta^4}\right). \end{align} \end{proof} \subsection{Proof of Theorem~\ref{theorem: well-separated gap}} \label{proof of main theorem in setting 1} \begin{proof} The theorem can be proved by combining Theorem~\ref{theorem: gap-dependent regret bound} and Lemma~\ref{lemma: construction argument}. By Theorem~\ref{theorem: gap-dependent regret bound}, the constructed policy $\hat{\pi}$ satisfies \begin{align} V^{*}_{\caM^*,1}(s_1) - V^{\hat{\pi}}_{\caM^*,1}(s_1) \leq O\left(\frac{DM^2 \log^2(SMH)\log(MH)}{\delta^4}\right). \end{align} By Lemma~\ref{lemma: construction argument}, the sim-to-real gap of the policy $\pi^*_{\text{DR}}$ is bounded by \begin{align} \operatorname{Gap}(\pi^*_{\text{DR}},\caU) \leq O\left(\frac{DM^3 \log^2(SMH)\log(MH)}{\delta^4}\right).
\end{align} \end{proof} \section{Omitted proof for finite simulator class without separation condition} \label{appendix: omitted proof in setting 2} \subsection{Proof of Theorem~\ref{theorem: square-root H regret bound}} \label{appendix: proof of regret in setting 2} \begin{lemma} \label{lemma: optimism, square-root H regret bound} (Optimism) With probability at least $1-\frac{1}{MH}$, we have $\rho^*_{\caM^k} \geq \rho^*_{\caM^*}$ for any $k \in [K]$. \end{lemma} \begin{proof} For any fixed $\caM \in \caU$ and fixed step $h \in [H]$, by Azuma's inequality, we have with probability at least $1-\frac{1}{M^2H^2}$, \begin{align} \left|\sum_{t=h_0}^{h} \left(\lambda_{\caM}^{*}\left(s_{t+1}\right)-P_{\caM^*} \lambda_{\caM}^{*}\left(s_{t}, a_{t}\right)\right) \right| \leq D\sqrt{2(h-h_0)\log(2HM)}. \end{align} Taking a union bound over all possible $\caM$ and $h$, we know that the above event holds for all possible $\caM$ and $h$ with probability at least $1-\frac{1}{MH}$. Under this event, the true MDP $\caM^*$ is never eliminated from the MDP set $\caU_k$. Therefore, we have $\rho^*_{\caM^k} \geq \rho^*_{\caM^*}$. \end{proof} \begin{proof} (Proof of Theorem~\ref{theorem: square-root H regret bound}) By Lemma~\ref{lemma: optimism, square-root H regret bound}, we know that $\rho^*_{\caM^k} \geq \rho^*_{\caM^*}$ for any $k \in [K]$. We use $\tau(h)$ to denote the episode that step $h$ belongs to.
We can upper bound the regret in $H$ steps as follows: \begin{align} & H \rho^*_{\caM^*} - \sum_{h=1}^{H} R(s_h,a_h) \\ \leq & \sum_{h=1}^{H} \left( \rho^*_{\caM^{\tau(h)}} - R(s_h,a_h) \right) \\ = & \sum_{h=1}^{H} \left( P_{\caM^{\tau(h)}} \lambda^*_{\caM^{\tau(h)}}(s_h,a_h) - \lambda^*_{\caM^{\tau(h)}}(s_h) \right) \\ = & \sum_{h=1}^{H} \left( P_{\caM^{\tau(h)}} - P_{\caM^*}\right)\lambda^*_{\caM^{\tau(h)}}(s_h,a_h) + \sum_{h=1}^{H} \left( P_{\caM^*} \lambda^*_{\caM^{\tau(h)}}(s_h,a_h) - \lambda^*_{\caM^{\tau(h)}}(s_h) \right) \\ = & \sum_{h=1}^{H} \left( P_{\caM^{\tau(h)}} - P_{\caM^*}\right)\lambda^*_{\caM^{\tau(h)}}(s_h,a_h) + P_{\caM^*} \lambda^*_{\caM^{\tau(H)}}(s_{H},a_{H}) - \lambda^*_{\caM^{\tau(1)}}(s_1) \\ &+ \sum_{h=1}^{H-1} \left( P_{\caM^*} \lambda^*_{\caM^{\tau(h)}}(s_h,a_h) - \lambda^*_{\caM^{\tau(h)}}(s_{h+1}) \right). \end{align} By Azuma's inequality, we know that \begin{align} \sum_{h=1}^{H-1} \left( P_{\caM^*} \lambda^*_{\caM^{\tau(h)}}(s_h,a_h) - \lambda^*_{\caM^{\tau(h)}}(s_{h+1}) \right) \leq D\sqrt{H \log(2MH)} \end{align} holds with probability at least $1-\frac{1}{MH}$. Since $0 \leq \lambda^*_{\caM} \leq D$, we have $P_{\caM^*} \lambda^*_{\caM^{\tau(H)}}(s_{H},a_{H}) - \lambda^*_{\caM^{\tau(1)}}(s_1) \leq D$. Therefore, we have \begin{align} \label{appendix: regret decomposition formula} H \rho^*_{\caM^*} - \sum_{h=1}^{H} R(s_h,a_h) \leq & \sum_{h=1}^{H} \left( P_{\caM^{\tau(h)}} - P_{\caM^*}\right)\lambda^*_{\caM^{\tau(h)}}(s_h,a_h) + D\sqrt{H\log(2HM)} + D \\ \leq &D\sqrt{2MH \log(2MH)} + M + D\sqrt{H\log(2HM)} + D. \end{align} The first term in (\ref{appendix: regret decomposition formula}) is bounded by the stopping condition (line \ref{alg: stopping condition finite} of Algorithm \ref{alg: optimistic exploration}).
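The Azuma step above can be verified exactly on a small instance. The following sketch (our own illustration; the values of $D$, $H$, $M$ are arbitrary) computes the exact tail probability of a $\pm D$ random walk, a martingale with differences bounded by $D$, and checks that the deviation bound $D\sqrt{2H\log(2MH)}$ fails with probability at most $\frac{1}{MH}$:

```python
import math

# Exact check of the Azuma-Hoeffding tail used above: for a sum S of H
# independent +/-D steps, P(|S| > D * sqrt(2 H log(2 M H))) <= 1 / (M H).
# D, H, M below are arbitrary illustrative values, not from the paper.
D, H, M = 1.0, 200, 2
bound = D * math.sqrt(2 * H * math.log(2 * M * H))
# S = 2k - H when k of the H steps are +1; enumerate the binomial tail.
tail = sum(math.comb(H, k) for k in range(H + 1) if abs(2 * k - H) > bound) / 2 ** H
assert tail <= 1 / (M * H)
```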
By Lemma~\ref{lemma: connection with infinite-horizon setting}, we have $V^*_{\caM^*,1}(s_1) \leq H \rho^*_{\caM^*} + D$. Therefore, we have $V^*_{\caM^*,1}(s_1)- \sum_{h=1}^{H} R(s_h,a_h) \leq O\left(D\sqrt{MH \log(MH)}\right)$ with probability at least $1-\frac{2}{MH}$. Therefore, we have \begin{align} V^*_{\caM^*,1}(s_1)- V^{\pi^*_{\text{DR}}}_{\caM^*,1}(s_1) \leq O\left(D\sqrt{MH \log(MH)} + H \cdot \frac{2}{MH}\right) = O\left(D\sqrt{MH \log(MH)}\right). \end{align} \end{proof} \subsection{Proof of Theorem~\ref{theorem: finite class gap}} \label{Proof of the main theorem in setting 2} Theorem~\ref{theorem: finite class gap} can be proved by combining Theorem~\ref{theorem: square-root H regret bound}, Lemma~\ref{lemma: construction argument}, and Lemma~\ref{lemma: connection with infinite-horizon setting}. By Theorem~\ref{theorem: square-root H regret bound}, for any $\caM \in \caU$, the policy $\hat{\pi}$ represented by Algorithm~\ref{alg: optimistic exploration} has the regret bound $H\rho^*_{\caM} - \sum_{h=1}^{H}R(s_h,a_h) \leq O\left(D \sqrt{MH \log (MH)}\right)$. Taking expectation over $\{s_h,a_h\}_{h \in [H]}$ and combining the inequality with Lemma~\ref{lemma: connection with infinite-horizon setting}, we have, for any $\caM \in \caU$, \begin{align} V^{*}_{\caM,1}(s_1) - V^{\hat{\pi}}_{\caM,1}(s_1) \leq O\left(D \sqrt{MH \log (MH)}\right). \end{align} Then the theorem can be proved by Lemma~\ref{lemma: construction argument}.
\section{Omitted Proof for Infinite Simulator Class} \label{appendix: omitted proof in setting 3} \subsection{Proof of Theorem~\ref{theorem: eluder dimension regret bound}} \label{appendix: proof of eluder dimension regret bound} \begin{lemma} \label{lemma: low-switching, eluder dimension} (Low Switching) The total number of episodes $K$ is bounded by \begin{align} K \leq O(\text{dim}_{E}(\caF, 1/H) \log(D^2H) \log(H)). \end{align} \end{lemma} \begin{proof} By Lemma~5 of~\cite{kong2021online}, we know that \begin{align} \sum_{t=1}^{H} \min\left\{\sup_{f_1,f_2 \in \mathcal{F}} \frac{\left(f_1(x_t) - f_2(x_t)\right)^2}{\|f_1-f_2\|^2_{\mathcal{Z}_t}+1},1\right\} \leq C \text{dim}_{E}(\caF, 1/H) \log(D^2H) \log(H) \end{align} for some constant $C > 0$. Our idea is to use this result to upper bound the total number of switching steps. Let $\tau(k)$ be the first step of episode $k$. By the definition of the function class $\mathcal{F}$, we have $(f_1 - f_2)^2(x_{t}) \leq 4D^2$ for any $f_1,f_2 \in \mathcal{F}$ and any $x_t$. By the switching rule, we know that, once the agent starts a new episode after step $\tau(k+1)-1$, we have \begin{align} \sum_{t = \tau(k)}^{\tau(k+1)-1} (f_1 - f_2)^2(x_{t}) \leq \sum_{t=1}^{\tau(k)-1}(f_1 - f_2)^2(x_{t}) + \alpha + 4D^2, \quad \forall f_1,f_2. \end{align} Therefore, we have \begin{align} \sum_{t = 1}^{\tau(k+1)-1} (f_1 - f_2)^2(x_{t}) \leq 2\sum_{t=1}^{\tau(k)-1}(f_1 - f_2)^2(x_{t}) + \alpha + 4D^2, \quad \forall f_1,f_2. \end{align} Now we upper bound the importance score at the switching step $\tau(k+1)-1$.
\begin{align} \min\left\{\sup_{f_1,f_2} \frac{\sum_{t = \tau(k)}^{\tau({k+1})-1} (f_1(x_t) - f_2(x_t))^2}{\|f_1-f_2\|_{\mathcal{Z}_{\tau(k)}}^2 + \alpha}, 1\right\} \leq & \min\left\{\sum_{t = \tau(k)}^{\tau(k+1)-1} \sup_{f_1,f_2} \frac{ (f_1(x_t) - f_2(x_t))^2}{\|f_1-f_2\|_{\mathcal{Z}_{\tau(k)}}^2 + \alpha}, 1\right\} \\ \leq & \min\left\{\sum_{t = \tau(k)}^{\tau(k+1)-1} \sup_{f_1,f_2} \frac{2 (f_1(x_t) - f_2(x_t))^2}{\|f_1-f_2\|_{\mathcal{Z}_{\tau(k+1)}}^2 -4D^2+ \alpha}, 1\right\} \\ \leq & \min\left\{\sum_{t = \tau(k)}^{\tau(k+1)-1} \sup_{f_1,f_2} \frac{2 (f_1(x_t) - f_2(x_t))^2}{\|f_1-f_2\|_{\mathcal{Z}_{t}}^2 -4D^2+ \alpha} , 1\right\} \\ \leq & \sum_{t = \tau(k)}^{\tau(k+1)-1} \min\left\{\sup_{f_1,f_2} \frac{2(f_1(x_t) - f_2(x_t))^2}{\|f_1-f_2\|_{\mathcal{Z}_{t}}^2 -4D^2+ \alpha} , 1\right\}. \end{align} Suppose there are $K$ episodes in total. If we set $\alpha = 4D^2 + 1$, we have \begin{align} \sum_{k=1}^{K} \min\left\{\sup_{f_1,f_2} \frac{\sum_{t = \tau(k)}^{\tau({k+1})-1} (f_1(x_t) - f_2(x_t))^2}{\|f_1-f_2\|_{\mathcal{Z}_{\tau(k)}}^2 + \alpha}, 1\right\} \leq & \sum_{t = 1}^{H} \min\left\{\sup_{f_1,f_2} \frac{2 (f_1(x_t) - f_2(x_t))^2}{\|f_1-f_2\|_{\mathcal{Z}_{t}}^2 -4D^2+ 2\alpha} , 1\right\} \\ \leq & C\text{dim}_{E}(\caF, 1/H) \log(D^2H) \log(H). \end{align} By the switching rule, we have $\sup_{f_1,f_2} \frac{\sum_{t = \tau(k)}^{\tau({k+1})-1} (f_1(x_t) - f_2(x_t))^2}{\|f_1-f_2\|_{\mathcal{Z}_{\tau(k)}}^2 + \alpha} \geq 1$. Therefore, the LHS of the above inequality is exactly $K$. Thus we have \begin{align} K \leq C \text{dim}_{E}(\caF, 1/H) \log(D^2H) \log(H). \end{align} \end{proof} \begin{lemma} \label{lemma: optimism, eluder dimension} (Optimism) With probability at least $1-\frac{1}{H}$, $\caM^* \in \caU_k$ holds for every episode $k \in [K]$. \end{lemma} \begin{proof} This lemma follows directly from Theorem 6 of \cite{ayoub2020model}.
Define the filtration $\mathbb{F} = (\mathbb{F}_h)_{h > 0}$ so that $\mathbb{F}_{h-1}$ is generated by $(s_1,a_1,\lambda_1, \cdots, s_h,a_h,\lambda_h)$. Then we have $\mathbb{E}[\lambda_{h}(s_{h+1})\mid \mathbb{F}_{h-1}] = P_{\caM^*} \lambda_h(s_h,a_h) = f_{\caM^*}(s_h,a_h,\lambda_h)$. Meanwhile, $\lambda_{h}(s_{h+1}) - f_{\caM^*}(s_h,a_h,\lambda_h)$ is conditionally $\frac{D}{2}$-sub-Gaussian given $\mathbb{F}_{h-1}$. By Theorem 6 of \cite{ayoub2020model}, we directly obtain that $f_{\caM^*} \in \caU_k$ for any $k \in [K]$ with probability at least $1-\frac{1}{H}$. \end{proof} \begin{proof} (Proof of Theorem~\ref{theorem: eluder dimension regret bound}) Let $\tau(k)$ be the first step of episode $k$. Under the high-probability event defined in Lemma~\ref{lemma: optimism, eluder dimension}, we can decompose the regret using the same argument as in the previous sections. \begin{align} &\sum_{k = 1}^K \sum_{h = \tau(k)}^{\tau(k+1)-1} \left(\rho^\star_{\caM^*} - R(s_h, a_h)\right) \\ \leq & \sum_{k = 1}^K \sum_{h = \tau(k)}^{\tau(k+1)-1} \left(\rho_{\caM^k}^\star - R(s_h, a_h)\right) \\ = &\sum_{k = 1}^K \sum_{h = \tau(k)}^{\tau(k+1)-1} \left(P_{\caM^k} \lambda_{h} (s_h, a_h) - \lambda_{h}(s_h)\right) \\ = &\sum_{k = 1}^K \sum_{h = \tau(k)}^{\tau(k+1)-1} (P_{\caM^k} - P_{\caM^*}) \lambda_h(s_h, a_h) + \sum_{k = 1}^K \sum_{h = \tau(k)}^{\tau(k+1)-1} \left(P_{\caM^*} \lambda_{h} (s_h, a_h) - \lambda_{h}(s_h)\right) \\ \label{eqn: regret decomposition, eluder dimension} \leq & \sum_{k = 1}^K \sum_{h = \tau(k)}^{\tau(k+1)-1} (P_{\caM^k} - P_{\caM^*}) \lambda_h(s_h, a_h) + \sum_{k = 1}^K \sum_{h = \tau(k)}^{\tau(k+1)-2} \left(P_{\caM^*} \lambda_h(s_h, a_h) - \lambda_{h}(s_{h+1})\right) + DK, \end{align} where the first inequality is due to the optimism condition in Lemma~\ref{lemma: optimism, eluder dimension}.
The first equality is due to the Bellman equation (Eqn~\ref{eqn: Bellman equation, infinite setting}) and $\lambda_h = \lambda^*_{\caM^k}$ for $\tau(k) \leq h \leq \tau(k+1)-1$. The last inequality is due to $0 \leq \lambda_h(s) \leq D$ for any $s \in \caS$. Now we bound the first two terms in Eqn~\ref{eqn: regret decomposition, eluder dimension}. The second term is a martingale difference sequence. By Azuma's inequality, with probability at least $1-\frac{1}{H}$, \begin{align} \sum_{k = 1}^K \sum_{h = \tau(k)}^{\tau(k+1)-2} \left(P_{\caM^*} \lambda_h(s_h, a_h) - \lambda_{h}(s_{h+1})\right) \leq D\sqrt{2H \log(H)}. \end{align} Now we focus on the upper bound of the first term in Eqn~\ref{eqn: regret decomposition, eluder dimension}. Under the high-probability event defined in Lemma~\ref{lemma: optimism, eluder dimension}, the true model $P_{\caM^*}$ is always in the model class $\caU_k$. For episode $k$, from the construction of $\caU_{k}$ we know that any $f_1, f_2 \in \caU_{k}$ satisfy $\|f_1 - f_2\|_{\caZ_{\tau(k)}}^2 \leq 2 \beta$. Since $\caM^k, \caM^* \in \caU_k$, we have \begin{align} \sum_{t = 1}^{\tau(k)-1} \left(\left(P_{\caM^k} - P_{\caM^*}\right) \lambda_t (s_t, a_t) \right)^2 \leq 2 \beta. \end{align} Moreover, by the if condition in Line 8 of Alg.~\ref{alg: general_opt_alg}, we have, for any $ \tau(k) \leq h \leq \tau(k+1)-1$, \begin{align} \sum_{t = \tau(k)}^{h} \left(\left(P_{\caM^k} - P_{\caM^*}\right) \lambda_t (s_t, a_t) \right)^2 \leq 2\beta + \alpha + D^2. \end{align} Summing up the above two inequalities, we have \begin{align} \sum_{t = 1}^{h} \left(\left(P_{\caM^k} - P_{\caM^*}\right) \lambda_t (s_t, a_t) \right)^2 \leq 4\beta + \alpha + D^2.
\end{align} We invoke Lemma 26 of \cite{jin2021bellman} by setting $\caG = \caF - \caF$, $\Pi = \{\delta_x(\cdot) \mid x \in \caX\}$ where $\delta_x(\cdot)$ is the Dirac measure centered at $x$, $g_t = f_{\caM^k} - f_{\caM^*}$, $\omega = 1/H$, the confidence radius set to $4\beta + \alpha + D^2$, and $\mu_t = \mathbf{1}\left\{\cdot=\left(s_t, a_t, \lambda_t\right)\right\}$; then we have \begin{align} \sum_{k=1}^K \sum_{t=\tau(k)}^{\tau(k+1)-1} \left|\left(P_{\caM^{k}} - P_{\caM^*}\right) \lambda_t (s_t, a_t) \right| \leq & O\left(\sqrt{\text{dim}_{E}(\caF, 1/H) \beta H} + \min\left(H, \text{dim}_{E}(\caF, 1/H)\right)D + H \cdot \frac{1}{H} \right) \\ =& O\left(\sqrt{\text{dim}_{E}(\caF, 1/H) \beta H}\right). \end{align} Plugging the results back into Eqn~\ref{eqn: regret decomposition, eluder dimension}, we have \begin{align} \sum_{k = 1}^K \sum_{h = \tau(k)}^{\tau(k+1)-1} \left(\rho^\star_{\caM^*} - R(s_h, a_h)\right) \leq O\left(D\sqrt{H \text{dim}_{E}(\caF, 1/H) \log \left( H \cdot \mathcal{N}(\mathcal{F}, 1/H)\right)} \right). \end{align} By Lemma~\ref{lemma: connection with infinite-horizon setting}, we have $V^*_{\caM^*,1}(s_1) \leq H \rho^*_{\caM^*} + D$. Therefore, we have \begin{align} V^*_{\caM^*,1}(s_1)- \sum_{h=1}^{H} R(s_h,a_h) \leq O\left(D\sqrt{H \text{dim}_{E}(\caF, 1/H) \log \left( H \cdot \mathcal{N}(\mathcal{F}, 1/H)\right)} \right), \end{align} with probability at least $1-\frac{2}{H}$. If the high-probability event does not hold (this happens with probability at most $\frac{2}{H}$), the gap $V^*_{\caM^*,1}(s_1)- V^{\pi^*_{\text{DR}}}_{\caM^*,1}(s_1)$ can still be bounded by $H$.
Taking expectation over the trajectory $\{s_h\}_h$, we have \begin{align} V^*_{\caM^*,1}(s_1)- V^{\pi^*_{\text{DR}}}_{\caM^*,1}(s_1) \leq & O\left(D\sqrt{H \text{dim}_{E}(\caF, 1/H) \log \left( H \cdot \mathcal{N}(\mathcal{F}, 1/H)\right)} + H \cdot \frac{2}{H}\right)\\ & = O\left(D\sqrt{H \text{dim}_{E}(\caF, 1/H) \log \left( H \cdot \mathcal{N}(\mathcal{F}, 1/H)\right)} \right). \end{align} \end{proof} \subsection{Proof of Theorem~\ref{theorem: sub-optimality gap, large simulator class}} \label{appendix: proof of sub-optimality gap for large simulator class} \begin{proof} Theorem~\ref{theorem: sub-optimality gap, large simulator class} can be proved by combining Theorem~\ref{theorem: eluder dimension regret bound}, Lemma~\ref{lemma: construction argument}, and Lemma~\ref{lemma: connection with infinite-horizon setting}. By Theorem~\ref{theorem: eluder dimension regret bound}, for any $\caM \in \caU$, the policy $\hat{\pi}$ represented by Algorithm~\ref{alg: general_opt_alg} achieves the regret bound $H\rho^*_{\caM} - \sum_{h=1}^{H}R(s_h,a_h) \leq O\left(D \sqrt{d_{e} H \log (H \cdot \mathcal{N}(\mathcal{F}, 1 / H))}\right)$. Taking expectation over $\{s_h,a_h\}_{h \in [H]}$ and combining the inequality with Lemma~\ref{lemma: connection with infinite-horizon setting}, we have, for any $\caM \in \caU$, \begin{align} V^{*}_{\caM,1}(s_1) - V^{\hat{\pi}}_{\caM,1}(s_1) \leq O\left(D \sqrt{d_{e} H \log (H \cdot \mathcal{N}(\mathcal{F}, 1 / H))}\right). \end{align} Then the theorem can be proved by Lemma~\ref{lemma: construction argument}. \end{proof} \section{Lower Bounds} \subsection{Proof of Proposition~\ref{prop: lower bound without diameter assumption}} \label{appendix: proof of prop 1} \begin{proof} Consider the following construction of $\caU$.
There are $3M+1$ states. There are $M$ actions in the initial state $s_0$, denoted by $\{a_i\}_{i=1}^{M}$. After taking action $a_i$ in state $s_0$, the agent transitions to state $s_{i,1}$ with probability $1$. In state $s_{i,1}$ for $i \in [M]$, the agent can only take action $a_0$, and then transitions to state $s_{i,2}$ with probability $p_i$ and to state $s_{i,3}$ with probability $1-p_i$. The states $\{s_{i,2}\}_{i=1}^{M}$ and $\{s_{i,3}\}_{i=1}^{M}$ are all absorbing: the agent can only take the single action $a_0$ in these states, and transitions back to the current state with probability $1$. The agent obtains reward $1$ only in the states $s_{i,2}$ for $i \in [M]$, and all the other states have zero reward. Therefore, if the agent knew the transition dynamics of the MDP, it should take action $a_i$ with $i = \argmax_{i} p_i$ in state $s_0$. Now we define the transition dynamics of each MDP $\caM_i$. For each MDP $\caM_i \in \caU$, we have $p_i = 1$ and $p_{j} = 0$ for all $j \in [M], j \neq i$. Therefore, the agent cannot identify $\caM^*$ in state $s_0$. The best policy in state $s_0$ for the latent MDP $\caU$ is to take an action $a_i$ uniformly at random. In this case, the sim-to-real gap is at least $\Omega(H)$ since the agent takes the wrong action in state $s_0$ with probability $1-\frac{1}{M}$. \end{proof} \subsection{Proof of Theorem~\ref{thm: lower bound}} \label{appendix: lower bound proof} \begin{proof} We first show that the $\Omega(\sqrt{MH})$ lower bound holds via the hard instance for multi-armed bandits~\citep{lattimore2020bandit}. Consider a class of $K$-armed bandit instances with $K = M$. For the bandit instance $\caM_i$, the expected reward of arm $i$ is $\frac{1}{2} + \epsilon$, while the expected rewards of the other arms are $\frac{1}{2}$. Note that this is exactly the hard instance for $K$-armed bandits.
Following the proof idea of the lower bound for multi-armed bandits, we know that the regret (sim-to-real gap) is at least $\Omega(\sqrt{MH})$. We now restate the hard instance construction from~\cite{jaksch2010near}. This hard instance construction has also been applied to prove the lower bound in the episodic setting~\citep{jin2018q}. We first introduce the two-state MDP construction. In their construction, the reward depends not on actions but on states. State 1 always has reward 1 and state 0 always has reward 0. From state 1, any action takes the agent to state 0 with probability $\delta$, and to state 1 with probability $1-\delta$. In state 0, one action $a^*$ takes the agent to state 1 with probability $\delta+\varepsilon$, while the other action $a$ takes the agent to state 1 with probability $\delta$. A standard Markov chain analysis shows that under the optimal policy (that is, the one that takes action $a^*$ in state 0) the stationary probability of being in state 1 is \begin{align} \frac{\frac{1}{\delta}}{\frac{1}{\delta}+\frac{1}{\delta+\varepsilon}}=\frac{\delta+\varepsilon}{2 \delta+\varepsilon} \geq \frac{1}{2}+\frac{\varepsilon}{6 \delta} \text { for } \varepsilon \leq \delta. \end{align} In contrast, acting sub-optimally (that is, taking action $a$ in state 0) leads to a uniform distribution over the two states. The regret per time step of pulling a sub-optimal action is of order $\varepsilon/\delta$. Consider $O(S)$ copies of this two-state MDP where only one of the copies has such a good action $a^*$. These copies are connected into a single MDP with an $A$-ary tree structure. In this construction, the agent needs to identify the optimal state-action pair among $SA$ different choices in total. Setting $\delta = \frac{1}{D}$ and $\varepsilon = c\sqrt{\frac{SA}{TD}}$, \cite{jaksch2010near} prove that the regret in the infinite-horizon setting is $\Omega(\sqrt{DSAT})$.
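The stationary-distribution formula above is easy to verify numerically. A sketch (ours, not part of the proof; the values of $\delta$ and $\varepsilon$ are arbitrary with $\varepsilon \le \delta$):

```python
import numpy as np

# Verify the stationary probability of state 1 under the optimal policy
# (take a* in state 0): it equals (delta + eps) / (2 delta + eps) and is
# at least 1/2 + eps / (6 delta). delta, eps below are illustrative values.
delta, eps = 0.1, 0.05
P = np.array([[1 - (delta + eps), delta + eps],   # transitions from state 0
              [delta,             1 - delta]])    # transitions from state 1
evals, evecs = np.linalg.eig(P.T)                 # stationary dist = left eigenvector
pi = np.real(evecs[:, np.argmax(np.real(evals))])
pi = pi / pi.sum()
assert np.isclose(pi[1], (delta + eps) / (2 * delta + eps))
assert pi[1] >= 0.5 + eps / (6 * delta)
```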
Our analysis follows the same idea as~\cite{jaksch2010near}. For the MDP instance $\caM_i$, the optimal state-action pair is $(s_i,a_i)$ (with $(s_i,a_i) \neq (s_j,a_j)$ for all $i \neq j$). With the knowledge of the transition dynamics of each $\caM_i$, the agent needs to identify the optimal state-action pair among $M$ different pairs in our setting. Therefore, we can similarly prove that the lower bound is $\Omega(\sqrt{DMH})$ following their analysis. \end{proof} \subsection{Lower Bound for the Large Simulator Class} \label{appendix: lower bound large class} \begin{proposition} \label{theorem: lower bound, M > H} Suppose all MDPs in the MDP set $\mathcal{U}$ are linear mixture models \citep{zhou2020nearly} sharing a common low-dimensional representation with dimension $d = O(\log(M))$. Then there exists a hard instance such that the sim-to-real gap of the policy $\pi^*_{\text{DR}}$ returned by the domain randomization oracle can still be $\Omega(H)$ when $M \geq H$. \end{proposition} \begin{proof} We can consider the following linear bandit instance as a special case. Suppose there are two actions with features $\phi(a_1) = (1,0)$ and $\phi(a_2) = (1,1)$. In the MDP set $\mathcal{U}$, there are $M-1$ MDPs with parameters $\theta_{i} = (\frac{1}{2}, -p_i)$ with $\frac{1}{4}<p_i < \frac{1}{2}, i \in [M-1]$, and one MDP with parameter $\theta_M = (\frac{1}{2}, \frac{1}{2})$. Suppose $M = 4H+5$; then the optimal policy of such an LMDP with uniform initialization never pulls action $a_2$, and hence suffers an $\Omega(H)$ sim-to-real gap in the MDP with parameter $\theta_M$.
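The claim about the linear bandit instance can be checked by direct computation. A sketch (ours; $p_i = 0.3$ is a representative value in $(\frac{1}{4},\frac{1}{2})$):

```python
import numpy as np

# Mean reward of action a is phi(a) . theta. For every theta_i, action a_1
# is strictly better, so the domain-randomized policy never pulls a_2; yet
# under theta_M action a_2 is better by 1/2 per step, giving an Omega(H) gap.
phi_a1, phi_a2 = np.array([1.0, 0.0]), np.array([1.0, 1.0])
theta_i = np.array([0.5, -0.3])   # representative p_i = 0.3 in (1/4, 1/2)
theta_M = np.array([0.5, 0.5])

assert phi_a1 @ theta_i > phi_a2 @ theta_i    # a_1 optimal under theta_i
assert phi_a2 @ theta_M > phi_a1 @ theta_M    # a_2 optimal under theta_M
assert np.isclose(phi_a2 @ theta_M - phi_a1 @ theta_M, 0.5)
```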
\end{proof} \end{document}
\begin{document} \begin{abstract} A logarithm representation of operators is introduced, as well as a concept of pre-infinitesimal generator. Generators of invertible evolution families are represented by the logarithm representation, and a set of operators represented by the logarithm is shown to be associated with analytic semigroups. Consequently, generally unbounded infinitesimal generators of invertible evolution families are characterized by a convergent power series representation. \end{abstract} \maketitle \section{Introduction} \label{sec1} The logarithm of an injective sectorial operator was introduced by Nollau~\cite{69nollau} in 1969. After a long interval, the logarithm of sectorial operators was studied again from the 1990s \cite{boyadzhiev,00okazawa-01, 00okazawa-02}, and its utility was established with respect to the definition of the logarithms of operators~\cite{03hasse,martinez} (for a review of sectorial operators, see Hasse \cite{06hasse}). While the sectorial operator has been a generic framework for defining the logarithm of operators, the sectorial property is not generally satisfied by an evolution family of operators, where the evolution family corresponds to the exponential of operators in an abstract Banach space framework (for the definition of evolution family in this paper, see Sec.~\ref{secevolv}). In this paper we characterize infinitesimal generators of invertible evolution families, a problem that has not been settled so far. First, by introducing a kind of similarity transformation, a logarithm representation is obtained for such generators. The logarithm representation is then utilized to obtain a convergent power series representation of invertible evolution families generated by certain unbounded operators, although the validity of such a representation is not established for arbitrary evolution families generated by unbounded operators. In this context, the concept of pre-infinitesimal generator is introduced.
\section{Mathematical settings} \label{tp-group} \subsection{Evolution family on Banach spaces} \label{secevolv} Let $X$ be a Banach space with a norm $\| \cdot \|$, and let $B(X)$ be the space of bounded linear operators on $X$. The same notation is used for the norm on $B(X)$, if there is no ambiguity. For a positive and finite $T$, let the elements of an evolution family $\{U(t,s) \}_{-T \le t, s \le T}$ be mappings $(t,s) \to U(t,s)$ satisfying the strong continuity for $-T \le t, s \le T$ (for reviews or textbooks, see \cite{02arendt,66Kato,72krein,83pazy,79tanabe}). The semigroup properties \begin{equation} \label{sg1} U(t,r)~ U(r,s) = U(t,s), \end{equation} and \begin{equation} \label{sg2} U(s,s) = I, \end{equation} are assumed to be satisfied, where $I$ denotes the identity operator of $X$. Both $U(t,s)$ and $U(s,t)$ are assumed to be well-defined and to satisfy \begin{equation} \label{sg3} U(s,t) ~ U(t,s) = U(s,s) = I, \end{equation} where $U(s,t)$ corresponds to the inverse operator of $U(t,s)$. Since $U(t,s) ~ U(s,t) = U(t,t) = I$ is also true, the commutation between $U(t,s)$ and $U(s,t)$ follows. The operator $U(t,s)$ is a generalization of the exponential function; indeed the properties shown in Eqs.~\eqref{sg1}-\eqref{sg3} are satisfied by taking $U(t,s)$ as $e^{t-s}$. An evolution family is thus an abstract version of the exponential function, valid for both finite and infinite dimensional Banach spaces. Let $Y$ be a dense Banach subspace of $X$, and let the topology of $Y$ be stronger than that of $X$. The space $Y$ is assumed to be $U(t,s)$-invariant. Following the definition of $C_0$-(semi)group (cf. the assumption $H_2$ in Sec.~5.3 of Pazy~\cite{83pazy} or the corresponding discussion in Kato~\cite{70kato,73kato}), $U(t,s)$ is assumed to satisfy the boundedness \cite{66Kato,83pazy}; there exist real numbers $M$ and $\beta$ such that \begin{equation} \label{qb} \| U(t,s) \|_{B(X)} \le M e^{\beta t}, \quad \| U(t,s) \|_{B(Y)} \le M e^{\beta t}.
\end{equation} Inequalities \eqref{qb} are practically reduced to \[ \| U(t,s) \|_{B(X)} \le M e^{\beta T}, \quad \| U(t,s) \|_{B(Y)} \le M e^{\beta T}, \] when the $t$-interval is restricted to the finite interval $[-T,T]$. \subsection{Pre-infinitesimal generator} The counterpart of the logarithm in the abstract framework is now introduced. There are two concepts associated with the logarithm of operators; one is the pre-infinitesimal generator and the other is the $t$-differential of $U(t,s)$. For $-T \le t, s \le T$, the weak limit \begin{equation} \label{weakdef} \begin{array}{ll} \mathop{\rm wlim}\limits_{h \to 0} h^{-1} (U(t+h,s) - U(t,s)) ~u = \mathop{\rm wlim}\limits_{h \to 0} h^{-1}(U(t+h,t) - I) ~ U(t,s) ~u, \end{array} \end{equation} is assumed to exist for $u$, which is an element of a dense subspace $Y$ of $X$. A linear operator $A(t): Y ~\to~ X$ is defined by \begin{equation} \label{pe-group} A(t) u := \mathop{\rm wlim}\limits_{h \to 0} h^{-1} (U(t+h,t) - I) u \end{equation} for $u \in Y$ and $-T \le t, s \le T$, and then the $t$-differential of $U(t,s)$ in a weak sense is \begin{equation} \label{de-group} \begin{array}{ll} \partial_t U(t,s)~u = A(t) U(t,s) ~u. \end{array} \end{equation} Equation~\eqref{de-group} is regarded as a differential equation satisfied by $U(t,s) u$ that implies a relation between $A(t)$ and the logarithm: \[ \begin{array}{ll} A(t) = \partial_t U(t,s) ~ U(s,t). \end{array} \] Let us call $A(t)$ defined by Eq.~\eqref{pe-group} for a whole family $\{U(t,s)\}_{-T \le t,s \le T}$ the pre-infinitesimal generator. Note that pre-infinitesimal generators are not necessarily infinitesimal generators; e.g., in $t$-independent cases, $A$ defined by Eq.~\eqref{pe-group} is not necessarily a densely-defined and closed linear operator, while an infinitesimal generator $A$ must be a densely-defined and closed linear operator with its resolvent set included in $\{\lambda \in {\mathbb C}: {\rm Re} \lambda > \beta \}$.
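In a finite-dimensional sketch, the difference quotient defining the pre-infinitesimal generator can be checked directly. The matrix $A$, the family $U(t,s) = e^{(t-s)A}$, and the step size below are illustrative assumptions, not taken from the text:

```python
import numpy as np
from scipy.linalg import expm

# Illustrative finite-dimensional example (not from the paper):
# X = R^2, U(t,s) = e^{(t-s)A} for a fixed bounded matrix A, so that
# the difference quotient h^{-1}(U(t+h,t) - I) defining the
# pre-infinitesimal generator can be evaluated numerically.
A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])   # generates a rotation group, hence U is invertible

def U(t, s):
    return expm((t - s) * A)

t, h = 0.3, 1e-6
quotient = (U(t + h, t) - np.eye(2)) / h   # approximates A(t) = A
print(np.max(np.abs(quotient - A)))        # O(h), i.e. tiny
```

In this $t$-independent case the pre-infinitesimal generator coincides with the usual infinitesimal generator, which is the situation the concluding remark calls trivial.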
On the other hand, infinitesimal generators are necessarily pre-infinitesimal generators. In the following, the set of pre-infinitesimal generators is denoted by $G(X)$. \subsection{A principal branch of logarithm} The logarithm is defined by the Dunford integral in this paper. Two difficulties in dealing with the logarithm are its singularity at the origin and its multi-valued property. By introducing a constant $\kappa \in {\mathbb C}$, the singularity can be handled. Let $\arg$ be the function of a complex number that gives the angle measured from the positive real axis to the ray from the origin through that number. For the multi-valued property, a principal branch (denoted by ``${\rm Log}$") of the logarithm (denoted by ``$\log$") is chosen; for any complex number $z \in {\mathbb C}$, the branch of the logarithm is defined by \[ \begin{array}{ll} {\rm Log} z = \log |z| + i \arg Z, \end{array} \] where $Z$ is a complex number chosen to satisfy $|Z| = |z|$, $-\pi < \arg Z \le \pi$, and $\arg Z = \arg z + 2 n \pi$ for a certain integer $n$. \begin{lemma} \label{lem3} Let $t$ and $s$ satisfy $0 \le t,s \le T$. For a given $U(t,s)$ defined in Sec.~\ref{tp-group}, its logarithm is well defined; there exists a certain complex number $\kappa$ satisfying \begin{equation} \label{logex3} \begin{array}{ll} {\rm Log} (U(t,s)+\kappa I) = \frac{1}{2 \pi i} \int_{\Gamma} {\rm Log} \lambda ~ ( \lambda - U(t,s) - \kappa )^{-1} d \lambda, \end{array} \end{equation} where the integral path $\Gamma$, which excludes the origin, is a circle in the resolvent set of $U(t,s) +\kappa I$. Here $\Gamma$ is independent of $t$ and $s$. ${\rm Log} (U(t,s)+ \kappa I)$ is bounded on $X$. \end{lemma} \begin{proof} The logarithm ${\rm Log}$ has a singularity at the origin, so that it is necessary to show the possibility of taking a simple closed curve (integral path) excluding the origin in order to define the logarithm by means of the Dunford-Riesz integral. It is not generally possible to take such a path in the case $\kappa =0$.
First, $U(t,s)$ is assumed to be bounded for $0 \le t,s \le T$ (Eq.~\eqref{qb}), and the spectral set of $U(t,s)$ is a bounded set in ${\mathbb C}$. Second, for $\kappa$ satisfying \begin{equation} \label{cr-cond} |\kappa| > M e^{\beta T}, \end{equation} the spectral set of $U(t,s) + \kappa I$ is separated from the origin. Consequently it is possible to take an integral path $\Gamma$ including the spectral set of $U(t,s)+\kappa I$ and excluding the origin. Equation~\eqref{logex3} follows from the Dunford-Riesz integral~\cite{43dunford}. Furthermore, by adjusting the amplitude of $\kappa$, an appropriate integral path always exists independently of $t$ and $s$. ${\rm Log} (U(t,s)+\kappa I)$ is bounded on $X$, since $\Gamma$ is included in the resolvent set of $U(t,s)+\kappa I$. \end{proof} According to this lemma, by introducing a nonzero $\kappa$, the logarithm of $U(t,s)+\kappa I$ is well-defined without assuming the sectorial property of $U(t,s)$. On the other hand, Eq.~\eqref{logex3} is valid with $\kappa=0$ only in limited cases. \section{Main results} \subsection{Logarithm representation of pre-infinitesimal generator} \begin{theorem} \label{thm1} Let $t$ and $s$ satisfy $-T \le t,s \le T$, and $Y$ be a dense subspace of $X$. For $U(t,s)$ defined in Sec.~\ref{tp-group}, let $A(t) \in G(X)$ and $\partial_t U(t,s)$ be determined by Eqs.~\eqref{pe-group} and \eqref{de-group} respectively. If $A(t)$ and $U(t,s)$ commute, the family $\{ A(t) \}_{-T \le t \le T}$ is represented by means of the logarithm function; there exists a certain complex number $\kappa \ne 0$ such that \begin{equation} \label{logex} \begin{array}{ll} A(t) ~ u = (I+ \kappa U(s,t))~ \partial_{t} {\rm Log} ~ (U(t,s) + \kappa I) ~ u, \end{array} \end{equation} where $u$ is an element in $Y$. Note that $U(t,s)$ defined in Sec.~\ref{tp-group} is assumed to be invertible.
\end{theorem} \begin{proof} For $U(t,s)$ defined in Sec.~\ref{tp-group}, the operators $ {\rm Log} ~ (U(t,s) + \kappa I)$ and $ {\rm Log} ~ (U(t+h,s) + \kappa I)$ are well defined for a certain $\kappa$ (Lemma~\ref{lem3}). The $t$-differential in a weak sense is formally written as \begin{equation} \label{difference0} \begin{array} {ll} \mathop{\rm wlim}\limits_{h \to 0} \frac{1}{h} \{ {\rm Log} ~(U(t+h,s)+\kappa I) - {\rm Log} ~(U(t,s)+ \kappa I) \} \\ =\mathop{\rm wlim}\limits_{h \to 0} \frac{1}{h} \frac{1}{2 \pi i} \int_{\Gamma} {\rm Log} \lambda \\ ~ \{ ( \lambda - U(t+h,s) - \kappa )^{-1} - ( \lambda - U(t,s) - \kappa )^{-1} \} d \lambda \\ = \mathop{\rm wlim}\limits_{h \to 0} \frac{1}{2 \pi i} \int_{\Gamma} {\rm Log} \lambda \\ ~ \{ (\lambda - U(t+h,s)-\kappa )^{-1} \frac{U(t+h,s)-U(t,s)}{h} (\lambda - U(t,s)- \kappa )^{-1} \} d \lambda \end{array} \end{equation} where $\Gamma$, which can be taken independently of $t$, $s$ and $h$ for a certain sufficiently large $\kappa$, denotes a circle in the resolvent set of both $U(t,s)+ \kappa I$ and $U(t+h,s)+\kappa I$. A part of the integrand of Eq.~\eqref{difference0} is estimated as \begin{equation} \label{intee} \begin{array} {ll} \quad \| \{ (\lambda - U(t+h,s)-\kappa )^{-1} \frac{U(t+h,s)-U(t,s)}{h} (\lambda - U(t,s)- \kappa )^{-1} \} v \|_X \\ \le \| (\lambda - U(t+h,s)- \kappa )^{-1} \|_{B(X)} \\ \| \frac{U(t+h,s)-U(t,s)}{h} (\lambda - U(t,s)- \kappa )^{-1} v \|_X, \end{array} \end{equation} for $v \in X$. There are two steps to prove the validity of Eq.~\eqref{difference0}. In the first step, the former part of the right hand side of Eq.~\eqref{intee} satisfies \[ \begin{array} {ll} \| (\lambda - U(t+h,s)- \kappa )^{-1} \|_{B(X)} < \infty, \end{array} \] since $\lambda$ is taken from the resolvent set of $U(t+h,s)+ \kappa I$. In the same way the operator $(\lambda - U(t,s) - \kappa )^{-1}$ is bounded on $X$ and $Y$.
Then the continuity of the mapping $t \to (\lambda - U(t,s)- \kappa )^{-1}$ with respect to the strong topology follows: \[ \begin{array} {ll} \| (\lambda - U(t+h,s)- \kappa )^{-1} - (\lambda - U(t,s)- \kappa )^{-1} \|_{B(X)} \\ \le \| (\lambda - U(t+h,s)- \kappa )^{-1} \|_{B(X)} \|( U(t+h,s)-U(t,s)) (\lambda - U(t,s)-\kappa )^{-1} \|_{B(X)}. \end{array} \] In the second step, the latter part of the right hand side of Eq.~\eqref{intee} is estimated as \begin{equation} \label{unibound} \begin{array}{ll} \left\| \frac{U(t+h,s)-U(t,s)}{h} (\lambda - U(t,s)-\kappa )^{-1} u \right\|_X \\ = \left\| \frac{1}{h} \int_t^{t+h} A(\tau) U(\tau,s) (\lambda - U(t,s)- \kappa )^{-1} u ~ d\tau \right\|_X \\ \le \frac{1}{|h|} \int_t^{t+h} \| A(\tau) U(\tau,s)\|_{B(Y,X)} \| (\lambda - U(t,s)- \kappa )^{-1}\|_{B(Y)} \|u \|_{Y} ~ d\tau \end{array} \end{equation} for $u \in Y$. Because $\| A(\tau) U(\tau,s) \|_{B(Y,X)} < \infty $ holds by assumption, the right hand side of Eq.~\eqref{unibound} is finite. Equation~\eqref{unibound} shows the uniform boundedness with respect to $h$, and then the uniform convergence ($h \to 0$) of Eq.~\eqref{difference0} follows. Consequently the weak limit process $h \to 0$ for the integrand of Eq.~\eqref{difference0} is justified, as well as the commutation between the limit and the integral. According to Eq.~\eqref{difference0}, interchange of the limit with the integral leads to \[ \begin{array} {ll} \partial_t {\rm Log} (U(t,s) + \kappa I) ~ u = \frac{1}{2 \pi i} \int_{\Gamma} d\lambda \\ ~ \left[ ( {\rm Log} \lambda ) (\lambda - U(t,s)- \kappa )^{-1} ~\mathop{\rm wlim}\limits_{h \to 0} \left ( \frac{U(t+h,s)-U(t,s)}{h} \right) ~ (\lambda - U(t,s)- \kappa )^{-1} \right] ~u \\ \end{array} \] for $u \in Y$.
Because we are also allowed to interchange $A(t)$ with $U(t,s)$, \begin{equation} \label{intermed} \begin{array}{ll} \partial_t {\rm Log} (U(t,s) + \kappa I) ~ u \\ = \frac{1}{2 \pi i} \int_{\Gamma} ({\rm Log} \lambda) (\lambda-U(t,s)-\kappa )^{-1} A(t) ~ U(t,s) ~ (\lambda -U(t,s)- \kappa )^{-1} d \lambda ~ u \\ = \frac{1}{2 \pi i} \int_{\Gamma} ({\rm Log} \lambda) ~(\lambda-U(t,s) - \kappa )^{-2} ~ U(t,s) ~ d \lambda ~A(t) ~ u \end{array} \end{equation} for $u \in Y$. A part of the right hand side is calculated as \[ \begin{array}{ll} \quad \frac{1}{2 \pi i} \int_{\Gamma}~ ({\rm Log} \lambda) ~(\lambda-U(t,s) - \kappa )^{-2} U(t,s) ~ d \lambda \\ = \frac{1}{2 \pi i} \int_{\Gamma} \frac{1}{\lambda} ~(\lambda-U(t,s) - \kappa )^{-1} ~ U(t,s) ~ d \lambda \\ = \frac{1}{2 \pi i} \int_{\Gamma} \frac{1}{\lambda} ~(\lambda-U(t,s)- \kappa )^{-1} ~ \{ \lambda- \kappa -(\lambda - U(t,s) - \kappa ) \} ~ d \lambda \\ = \frac{1}{2 \pi i} \int_{\Gamma} (\lambda-U(t,s)-\kappa )^{-1} ~ d \lambda - \frac{1}{2 \pi i} \int_{\Gamma} \frac{\kappa}{\lambda} (\lambda-U(t,s)-\kappa )^{-1} ~ d \lambda - \frac{1}{2 \pi i} \int_{\Gamma} \frac{1}{\lambda} ~ d \lambda \\ = \frac{1}{2 \pi i} \int_{\Gamma} (\lambda-U(t,s)-\kappa )^{-1} ~ d \lambda - \frac{1}{2 \pi i} \int_{\Gamma} \frac{\kappa}{\lambda} (\lambda-U(t,s)-\kappa )^{-1} ~ d \lambda \\ = \frac{1}{2 \pi i} \int_{\Gamma} (\lambda-U(t,s)- \kappa )^{-1} ~ d \lambda \\ ~ - \kappa (U(t,s)+ \kappa I )^{-1} \left\{ \frac{1}{2 \pi i} \int_{\Gamma} \frac{1}{\lambda} (U(t,s)+ \kappa I) (\lambda - U(t,s)-\kappa)^{-1} d \lambda \right\} \\ = \frac{1}{2 \pi i} \int_{\Gamma} (\lambda-U(t,s)-\kappa )^{-1} ~ d \lambda \\ ~ - \kappa (U(t,s)+\kappa I)^{-1} \left\{ \frac{1}{2 \pi i} \int_{\Gamma} (\lambda - U(t,s)-\kappa )^{-1} d \lambda -\frac{1}{2 \pi i} \int_{\Gamma} \frac{1}{\lambda} d \lambda \right\} \\ = (I - \kappa (U(t,s)+\kappa I)^{-1} ) ~ \frac{1}{2 \pi i} \int_{\Gamma} (\lambda - U(t,s)-\kappa)^{-1} d \lambda \\ = (I - \kappa (U(t,s)+ \kappa 
I)^{-1}) ~ \frac{1}{2 \pi i} \int_{|\nu| = r} \sum_{n=1}^{\infty} \frac{U(t,s)^{n}}{\nu^{n+1}} ~ d \nu \\ = I- \kappa (U(t,s)+ \kappa I)^{-1}, \end{array} \] due to the integration by parts, where $|\lambda - \kappa| = |\nu| =r$ is a properly chosen circle large enough to include $\Gamma$. $ (2 \pi i)^{-1} \int_{\Gamma} \lambda^{-1} d \lambda = 0$ is seen by applying $d {\rm Log} \lambda/d \lambda = 1/\lambda$. $(2 \pi i)^{-1} \int_{|\nu| = r} \sum_{n=1}^{\infty} U(t,s)^{n} \nu^{-n-1} ~ d \nu = I$ follows from the singularity of $\nu^{-n-1}$. Consequently we have \[ \begin{array}{ll} A(t) ~ u = \{I- \kappa (U(t,s)+\kappa I)^{-1}\}^{-1} ~ \partial_{t} {\rm Log} ~ (U(t,s) + \kappa I) ~ u \\ \quad = (U(t,s)+\kappa I) U(t,s)^{-1} ~ \partial_{t} {\rm Log} ~ (U(t,s) + \kappa I) ~ u \\ \quad = (I+\kappa U(s,t))~ \partial_{t} {\rm Log} ~ (U(t,s) + \kappa I) ~ u \end{array} \] for $u \in Y$. \end{proof} What is introduced by Eq.~\eqref{logex} is a kind of resolvent approximation of $A(t)$ \[ \begin{array}{ll} \partial_{t} {\rm Log} ~ (U(t,s) + \kappa I) = (I+\kappa U(s,t))^{-1} A(t), \end{array} \] in which $A(t)$ is approximated by the resolvent of $U(s,t)$. As seen in the following it is notable that there is no need to take $\kappa \to 0$. This point is different from the usual treatment of resolvent approximations. On the other hand, it is also seen by Eq.~\eqref{logex} that \[ \partial_{t} {\rm Log} ~ (U(t,s) + \kappa I) = (U(t,s)+\kappa I )|_{\kappa = 0} A(t) (U(t,s)+\kappa I)^{-1} \] shows a structure of similarity transform, where $(U(t,s)+\kappa)|_{\kappa=0}$ means $U(t,s)+\kappa$ satisfying a condition $\kappa=0$. Under the validity of Theorem~\ref{thm1}, for $-T \le t,s \le T$, let $a(t,s)$ be defined by \[ \begin{array}{ll} a(t,s) := {\rm Log} (U(t,s)+\kappa I), \end{array} \] then Eq.~\eqref{logex} is written as $A(t) = (I+\kappa U(s,t))~ \partial_{t} a(t,s)$. 
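The logarithm representation in Eq.~\eqref{logex} can be checked numerically in a finite-dimensional sketch. The matrix $A$, the choice $U(t,s)=e^{(t-s)A}$ (so that $A$ and $U(t,s)$ commute), and the shift $\kappa = 5$ below are illustrative assumptions; the principal matrix logarithm plays the role of ${\rm Log}$:

```python
import numpy as np
from scipy.linalg import expm, logm

# Illustrative check of A = (I + kappa*U(s,t)) d/dt Log(U(t,s) + kappa*I)
# for U(t,s) = e^{(t-s)A} with a fixed matrix A (so A and U commute).
A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])
kappa = 5.0   # large enough to keep the spectrum of U + kappa*I away from the origin

def U(t, s):
    return expm((t - s) * A)

def a(t, s):
    # a(t,s) = Log(U(t,s) + kappa*I), principal branch
    return logm(U(t, s) + kappa * np.eye(2))

t, s, h = 0.7, 0.2, 1e-4
da_dt = (a(t + h, s) - a(t - h, s)) / (2 * h)        # central difference in t
recovered = (np.eye(2) + kappa * U(s, t)) @ da_dt    # right-hand side of Eq. (logex)
print(np.max(np.abs(recovered - A)))                 # close to zero
```

There is no need to take $\kappa \to 0$ here: any sufficiently large shift separating the spectrum of $U(t,s)+\kappa I$ from the origin recovers the same $A$.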
Since $\kappa$ is chosen to separate the spectral set of $U(t,s)+\kappa I$ from the origin, the inverse operator of \[ \begin{array}{ll} (I+ \kappa U(s,t)) = U(s,t) (U(t,s)+\kappa I) \end{array} \] is well defined as $(I+\kappa U(s,t))^{-1} = (U(t,s)+\kappa I)^{-1} U(t,s)$. It also ensures that $\partial_{t} a(t,s)$ is well defined. \begin{corollary} \label{transform} Let $t$ and $s$ satisfy $0 \le t,s \le T$. For $U(t,s)$ and $A(t)$ satisfying the assumption of Theorem~\ref{thm1}, the exponential of $a(t,s)$ is represented by a convergent power series: \begin{equation} \label{convp} \begin{array}{ll} e^{a(t,s)} = \sum_{n=0}^{\infty} \frac{ a(t,s)^n}{n!}, \end{array} \end{equation} with the relation $e^{a(t,s)} = \exp({\rm Log} (U(t,s)+\kappa I)) = U(t,s)+\kappa I$. If $a(t,s)$ with different $t$ and $s$ are further assumed to commute, \begin{equation} \label{replce} \begin{array}{ll} \partial_t e^{a(t,s)} u_s = \partial_t a(t,s) ~ e^{a(t,s)} u_s \\ \end{array} \end{equation} is satisfied for $u_s \in Y$, where $\partial_t$ denotes a $t$-differential in a weak sense. \end{corollary} \begin{proof} Since $a(t,s)$ is a bounded operator on $X$ (Lemma~\ref{lem3}), the exponential of $a(t,s)$ is represented by the convergent power series \cite{82kato} shown in Eq.~\eqref{convp}, where \[ \begin{array}{ll} e^{a(t,s)} = \exp({\rm Log} (U(t,s)+\kappa I)) \\ = \frac{1}{2 \pi i} \int_{\Gamma} e^{{\rm Log} \lambda} ~ ( \lambda - U(t,s) - \kappa)^{-1} d \lambda \\ = U(t,s)+\kappa I \end{array} \] is satisfied. Since $a(t,s)$ with different $t$ and $s$ commute, \[ \begin{array}{ll} \partial_t \{ a(t,s) \}^n = n \{ a(t,s) \}^{n-1} ~ \partial_t a(t,s) \end{array} \] leads to \begin{equation} \label{traeq} \begin{array}{ll} \partial_t e^{a(t,s)} u_s = e^{a(t,s)} ~ (\partial_t a(t,s)) u_s \end{array} \end{equation} for $u_s \in Y$.
This is a linear evolution equation satisfied by $e^{a(t,s)} u_s$. \end{proof} Further calculation on Eq.~\eqref{traeq} leads to \[ \begin{array}{ll} \partial_t e^{a(t,s)} u_s = \partial_t (U(t,s)+\kappa I) u_s = \partial_t U(t,s) u_s, \end{array} \] and \[ \begin{array}{ll} e^{a(t,s)} ~ (\partial_t a(t,s) ) u_s = (U(t,s)+\kappa I) \partial_t( {\rm Log} (U(t,s)+\kappa I)) u_s \\ = (U(t,s)+\kappa I) (U(t,s)+\kappa I)^{-1} U(t,s) A(t) u_s \\ = U(t,s) A(t) u_s = A(t) U(t,s) u_s, \end{array} \] where Theorem~\ref{thm1} is applied. As a result \[ \partial_t U(t,s) u_s = A(t) U(t,s) u_s \] is obtained. Note that $e^{a(t,s)}$ does not satisfy the semigroup property, while $U(t,s)$ satisfies it. \section{Abstract Cauchy problem} \subsection{Autonomous case} \label{homosection} The logarithmic representation is utilized to solve the autonomous Cauchy problem \begin{equation} \label{homoporo} \left\{ \begin{array}{ll} \partial_t u(t) = A(t) u(t) \\ u(s) = u_s, \end{array} \right. \end{equation} in $X$, where $A(t) \in G(X):Y \to X$ is assumed to be an infinitesimal generator of $U(t,s)$, $-T \le t,s \le T$ is satisfied, $Y$ is a dense subspace of $X$ permitting the representation shown in Eq.~\eqref{logex}, and $u_s$ is an element of $X$. As seen in Eq.~\eqref{replce}, under the assumption of commutation, a related Cauchy problem is obtained as \begin{equation} \left\{ \begin{array}{ll} \label{reweq} \partial_t v(t,s) = (\partial_t a(t,s)) ~ v(t,s) \\ v(s,s) = e^{a(s,s)} u_s, \end{array} \right. \end{equation} in $X$, where $\partial_t a(t,s) = \partial_t {\rm Log} (U(t,s)+\kappa I)$ is well-defined. It is possible to solve the rewritten Cauchy problem, and the solution is represented by \[ \begin{array}{ll} v(t,s) = e^{a(t,s)} u_s = \sum_{n=0}^{\infty} \frac{ a(t,s)^n}{n!} u_s \end{array} \] for $u_s \in X$ (cf.~Eq.~\eqref{convp}). \begin{theorem} \label{hols} The operator $e^{a(t,s)}$ is holomorphic.
\end{theorem} \begin{proof} According to the boundedness of $a(t,s)$ on $X$ (Lemma~\ref{lem3}), $\partial_t^n e^{a(t,s)}$~\cite{51taylor} can be represented as \begin{equation} \label{anreap} \begin{array}{ll} \partial_t^n e^{a(t,s)} = \frac{1}{2 \pi i} \int_{\Gamma} \lambda^n e^{\lambda} (\lambda-a(t,s))^{-1} ~ d \lambda, \end{array} \end{equation} for a certain $\kappa$, where $ \lambda^n e^{\lambda}$ has no singularity for any finite $\lambda$. Following the standard theory of evolution equations, \[ \begin{array}{ll} \| \partial_t^n e^{a(t,s)} \| \le \frac{C_{\theta,n}}{ \pi (t \sin \theta)^n} \end{array} \] is true for a certain constant $C_{\theta,n}$ ($n = 0,1,2,\cdots$), where $\theta \in (0, \pi/2)$ and $|\arg t| < \pi/2$ are satisfied (for the details, e.g., see \cite{79tanabe}). It follows that \begin{equation} \label{leiq} \begin{array}{ll} { \displaystyle \limsup_{t \to +0} } ~ t^n \| \partial_t^n e^{a(t,s)} \| \le {\displaystyle \limsup_{t \to +0}} ~ t^n \frac{C_{\theta,n}}{\pi (t \sin \theta)^n} < \infty. \end{array} \end{equation} Consequently, for $|z-t|<t \sin \theta$, the power series expansion \[ \begin{array}{ll} \sum_{n=0}^{\infty} \frac{(z-t)^n}{n !} \partial_t^n e^{a(t,s)} \end{array} \] is uniformly convergent on compact subsets. Therefore $e^{a(t,s)}$ is holomorphic. \end{proof} \begin{theorem} \label{reprr} For $u_s \in X$ there exists a unique solution $u(t) \in C([-T,T];X)$ of \eqref{homoporo} with a convergent power series representation: \begin{equation} \label{dairep} \begin{array}{ll} u(t) = U(t,s) u_s = ( e^{a(t,s)} - \kappa I ) u_s = \left( \sum_{n=0}^{\infty} \frac{ a(t,s)^n}{n!} - \kappa I \right) u_s, \end{array} \end{equation} where $\kappa$ is a certain complex number. \end{theorem} \begin{proof} The unique existence follows from the assumption on $A(t)$. $e^{a(t,s)}$ is a holomorphic function (Theorem~\ref{hols}) with the convergent power series representation (Eq.~\eqref{convp}).
The solution of the original Cauchy problem is obtained as \[ \begin{array}{ll} u(t) = (I+ \kappa U(s,t))^{-1}v(t,s) = (I+ \kappa U(s,t))^{-1} \sum_{n=0}^{\infty} \frac{ a(t,s)^n}{n!} u_s \end{array} \] for the initial value $u_s \in X$. Note that $A(t)$ is not assumed to be a generator of an analytic evolution family, but only a generator of an invertible evolution family. \end{proof} For $I_{\lambda}$ denoting the resolvent operator of $A(t)$, the evolution operator defined by the Hille-Yosida approximation is written as \[ \begin{array}{ll} u(t) = {\displaystyle \lim_{\lambda \to 0} } \exp ( \int_s^t I_{\lambda} A(\tau) ~d \tau ) u_s, \end{array} \] so that a more informative representation is provided by Theorem~\ref{reprr} compared to the standard theory based on the Hille-Yosida theorem. \subsection{Non-autonomous case} The series representation obtained in the autonomous case enhances the solvability. Let $Y$ be a dense subspace of $X$ permitting the representation shown in Eq.~\eqref{logex}, and let $u_s$ be an element of $X$. Let us consider the non-autonomous Cauchy problem \begin{equation} \label{origih} \left\{ \begin{array}{ll} \partial_t u(t) = A(t) u(t) + f(t) \\ u(s) = u_s \end{array} \right. \end{equation} in $X$, where $A(t) \in G(X):Y \to X$ is assumed to be the infinitesimal generator of $U(t,s)$, and $f \in L^1(-T,T;X)$ is locally H\"older continuous on $[-T,T]$: \[ \begin{array}{ll} \| f(t) - f(s) \| \le C_{H} |t-s|^{\gamma} \end{array} \] for a certain positive constant $C_{H}$, $\gamma \le 1$ and $-T \le t,s \le T$. The solution of the non-autonomous problem does not necessarily exist in such a setting (in general, $f \in C([-T,T];X)$ is necessary). \begin{theorem} \label{thm-inh} Let $f \in L^1(-T,T;X)$ be locally H\"older continuous on $[-T,T]$.
For $u_s \in X$ there exists a unique solution $u(t) \in C([-T,T];X)$ of \eqref{origih} such that \[ \begin{array}{ll} u(t) = [ \sum_{n=0}^{\infty} \frac{ a(t,s)^n}{n!} - \kappa I ] u_s + \int_s^t [ \sum_{n=0}^{\infty} \frac{ a(t,\tau)^n}{n!} - \kappa I ] f(\tau) d\tau \end{array} \] using a certain complex number $\kappa$. \end{theorem} \begin{proof} Let us start with the case $f \in C([-T,T];X)$. The unique existence follows from the standard theory of evolution equations. The representation follows from that of $U(t,s)$ and Duhamel's principle \begin{equation} \begin{array}{ll} u(t,s) = U(t,s) u_s + \int_s^tU(t,\tau) f(\tau) d\tau \\ = ( e^{a(t,s)} - \kappa I ) u_s + \int_s^t [ e^{a(t,\tau)} - \kappa I ] f(\tau) d\tau, \end{array} \end{equation} where the convergent power series representation of $e^{a(t,s)}$ is valid (cf.~Eq.~\eqref{convp}). Next let us consider the case of locally H\"older continuous $f(t)$. According to the linearity of Eq.~\eqref{origih}, it is sufficient to consider the inhomogeneous term. For $\epsilon$ satisfying $0 < \epsilon \ll T$, \[ \begin{array}{ll} \int_s^{t+\epsilon} [e^{a(t,\tau)} -\kappa I] f(\tau) d\tau ~\to~ \int_s^{t} [e^{a(t,\tau)} -\kappa I] f(\tau) d\tau \end{array} \] holds by taking $\epsilon \to 0$.
On the other hand, \begin{equation} \label{conveq} \begin{array}{ll} A(t) \int_s^{t+\epsilon} [e^{a(t,\tau)} -\kappa I] f(\tau) d\tau = \int_s^{t+\epsilon} A(t) U(t,\tau) f(\tau) d\tau \\ = \int_s^{t+\epsilon} A(t) U(t,\tau)( f(\tau) - f(t)) d\tau + \int_s^{t+\epsilon} A(t) U(t,\tau) f(t) d\tau \\ = \int_s^{t+\epsilon} A(t) U(t,\tau) ( f(\tau) - f(t)) d\tau - \int_s^{t+\epsilon} \partial_{\tau} U(t,\tau) f(t) d\tau \\ = \int_s^{t+\epsilon} A(t) U(t,\tau) ( f(\tau) - f(t)) d\tau - U(t,t+\epsilon) f(t) + U(t,s) f(t) \\ = \int_s^{t+\epsilon} (I+\kappa U(s,t))\partial_{t} a(t,s) [e^{a(t,\tau)} -\kappa I] ( f(\tau) - f(t)) d\tau \\ \quad - U(t,t+\epsilon) f(t) + U(t,s)f(t), \end{array} \end{equation} where $\partial_{\tau} U(t,\tau) = -A(\tau) U(t,\tau)$ is utilized. The H\"older continuity and Eq.~\eqref{leiq} lead to the strong convergence of the right hand side of Eq.~\eqref{conveq}: \[ \begin{array}{ll} A(t) \int_s^{t+\epsilon} [e^{a(t,\tau)} - \kappa I] f(\tau) d\tau \\ \to \quad \int_s^t (I+\kappa U(s,t)) (\partial_{t} a(t,s)) [e^{a(t,\tau)} -\kappa I] (f(\tau) - f(t)) d\tau + ( U(t,s) - I) f(t) \end{array} \] (due to $\epsilon \to 0$) for $f \in L^1(-T,T;X)$. $A(t)$ is assumed to be an infinitesimal generator, so that $A(t)$ is a closed operator from $Y$ to $X$. It follows that \[ \begin{array}{ll} \int_s^{t} [e^{a(t,\tau)} -\kappa I] f(\tau) d\tau \in Y \end{array} \] and \[ \begin{array}{ll} A(t) \int_s^{t} [e^{a(t,\tau)} -\kappa I] f(\tau) d\tau \\ = \int_s^t (I+ \kappa U(s,t)) (\partial_{t} a(t,s)) [e^{a(t,\tau)} -\kappa I] (f(\tau) - f(t)) d\tau + ( U(t,s) - I) f(t) \in X. \end{array} \] The right hand side of this equation is strongly continuous on $[-T,T]$.
As a result, \[ \begin{array}{ll} \partial_{t} \int_s^{t+\epsilon} [e^{a(t,\tau)} -\kappa I] f(\tau) d\tau \\ = [ e^{a(t,t+\epsilon)} -\kappa I ] f(t+\epsilon) + \int_s^{t+\epsilon} (\partial_{t} a(t,\tau)) e^{a(t,\tau)} f(\tau) d\tau \\ \to \quad f(t) + \int_s^{t} (I+\kappa U(\tau,t))^{-1} A(t) (U(t,\tau)+\kappa I) f(\tau) d\tau \\ \qquad = f(t) + \int_s^{t} A(t) U(t,\tau) f(\tau) d\tau \\ \qquad = f(t) + A(t) \int_s^{t} [e^{a(t,\tau)} -\kappa I] f(\tau) d\tau. \end{array} \] We see that $ \int_s^{t} [e^{a(t,\tau)} -\kappa I] f(\tau) d\tau$ satisfies Eq.~\eqref{origih}, and that it is sufficient to assume that $f \in L^1(-T,T;X)$ is locally H\"older continuous. \end{proof} This result should be compared to the standard theory of evolution equations, in which the inhomogeneous term $f$ is assumed to be continuous on $[-T,T]$. If the inhomogeneous term is further assumed to satisfy $f \in L^p([0,T];X)$ for $1< p < \infty$, with $Y = D(A(t)) =D(A(0))$ and $A(\cdot) \in C([0,T],{\mathcal L}(Y,X))$, then Eq.~\eqref{reweq} with such an inhomogeneous term corresponds to the equation exhibiting the maximal regularity of type $L^p$~\cite{01pruess}. \section{Concluding remark} As for the applicability of the theory, the conditions needed to obtain the logarithmic representation (the conditions shown in Sec.~\ref{tp-group}) are not so restrictive; indeed, they can be satisfied by $C_0$-groups generated by $t$-independent infinitesimal generators. The most restrictive condition in obtaining Eq.~\eqref{logex} is the commutation between $A(t)$ and $U(t,s)$. Such a commutation is trivially satisfied by $t$-independent $A(t) = A$, and is also satisfied when the variable $t$ is separable (i.e., $A(t) = g(t) A$ for an integrable function $g(t)$). In this sense the operators specified in Theorem~\ref{thm1} correspond to a moderate generalization of $t$-independent infinitesimal generators. \end{document}
\begin{document} \sloppy{} \let\WriteBookmarks\relax \shorttitle{Efficient and Effective Local Search for the SUKP and BMCP} \shortauthors{Zhu et~al.} \author[1]{Wenli Zhu} \author[2]{Liangqing Luo} \address[1]{School of Statistics, Jiangxi University of Finance and Economics, Nanchang Jiangxi 330013, China; Email: [email protected]} \address[2]{School of Statistics, Jiangxi University of Finance and Economics, Nanchang Jiangxi 330013, China; Corresponding author, Email: [email protected]} \title [mode = title]{Efficient and Effective Local Search for the Set-Union Knapsack Problem and Budgeted Maximum Coverage Problem} \begin{abstract} The Set-Union Knapsack Problem (SUKP) and Budgeted Maximum Coverage Problem (BMCP) are two closely related variants of the popular knapsack problem. Given a set of weighted elements and a set of items with nonnegative values, where each item covers several distinct elements, these two problems both aim to find a subset of items that maximizes an objective function while satisfying a knapsack capacity (budget) constraint. We propose an efficient and effective local search algorithm called E2LS for these two problems. To our knowledge, this is the first time that an algorithm has been proposed for both of them. E2LS trades off the search region against search efficiency by applying a proposed novel operator ADD$^*$ to traverse the refined search region. Such a trade-off mechanism allows E2LS to explore the solution space widely and quickly. The tabu search method is also applied in E2LS to help the algorithm escape from local optima. Extensive experiments on a total of 168 public instances with various scales demonstrate the excellent performance of the proposed algorithm for both the SUKP and BMCP. \end{abstract} \iffalse \begin{highlights} \item Propose an efficient and effective local search called E2LS for the SUKP and BMCP.
\item Investigate for the first time an algorithm for solving both of these two problems. \item Propose to improve the search efficiency by refining the search region. \item Propose an effective operator ADD$^*$ to traverse the refined search region. \item Experimental results demonstrate the superiority of the proposed algorithm. \end{highlights} \fi \begin{keywords} Set-union knapsack problem \sep Budgeted maximum coverage problem \sep Local search \sep Tabu search \end{keywords} \maketitle \section{Introduction} \label{Sec_Intro} The Set-Union Knapsack Problem (SUKP)~\citep{Goldschmidt1994} and Budgeted Maximum Coverage Problem (BMCP)~\citep{Khuller1999} are two closely related NP-hard combinatorial optimization problems. Let $I = \{i_1,...,i_m\}$ be a set of $m$ items where each item $i_j, j \in \{1,...,m\}$ has a nonnegative value $v_j$, let $E = \{e_1,...,e_n\}$ be a set of $n$ elements where each element $e_k, k \in \{1,...,n\}$ has a nonnegative weight $w_k$, and let $C$ be the capacity of a given knapsack in the SUKP (or the budget in the BMCP). The items and elements are associated by a relation matrix $R \in \{0,1\}^{m \times n}$, where $R_{jk} = 1$ indicates that $e_k$ is covered by $i_j$, otherwise $R_{jk} = 0$. The SUKP aims to find a subset $S$ of $I$ that maximizes the total value of the items in $S$, while the total weight of the elements covered by the items in $S$ does not exceed the capacity $C$. The SUKP can be stated formally as follows. \begin{equation} \label{eq_f} \text{Maximize}~~f(S) = \sum\nolimits_{j \in \{j | i_j\in S\}}v_j, \end{equation} \begin{equation} \label{eq_W} \text{Subject to}~~W(S) = \sum\nolimits_{k \in \{k | R_{jk} = 1, i_j \in S\}}w_k \leq C. \end{equation} For the BMCP, the goal is to find a subset $S$ of $I$ that maximizes the total weight of the elements covered by the items in $S$, while the total value of the items in $S$ does not exceed the budget $C$. The BMCP can be stated formally as follows. \begin{equation} \text{Maximize}~~W(S), \end{equation} \begin{equation} \text{Subject to}~~f(S) \leq C. \end{equation} Obviously, the SUKP and BMCP can be transformed into each other by swapping the optimization objective and the constraint.
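To make the notation concrete, the following sketch evaluates $f(S)$ and $W(S)$ on a toy instance (the values, weights, and relation matrix are made up for illustration); note that $W(S)$ weighs each covered element once, no matter how many selected items cover it:

```python
import numpy as np

# Toy SUKP/BMCP instance (illustrative, not from the paper).
v = np.array([10.0, 7.0, 4.0])        # item values v_j
w = np.array([3.0, 2.0, 5.0, 1.0])    # element weights w_k
R = np.array([[1, 1, 0, 0],           # i_1 covers e_1, e_2
              [0, 1, 1, 0],           # i_2 covers e_2, e_3
              [0, 0, 0, 1]])          # i_3 covers e_4
C = 8.0                               # knapsack capacity / budget

def f(S):
    """Total value of the selected items (the SUKP objective)."""
    return float(sum(v[j] for j in S))

def W(S):
    """Total weight of the union of elements covered by S."""
    covered = np.zeros(R.shape[1], dtype=bool)
    for j in S:
        covered |= R[j].astype(bool)
    return float(w[covered].sum())

S = {0, 2}                       # the items i_1 and i_3 (0-based indices)
print(f(S), W(S), W(S) <= C)     # 14.0 6.0 True
```

For the BMCP the same two functions simply swap roles: $W(S)$ becomes the objective and $f(S) \leq C$ the constraint.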
Both the SUKP and BMCP are computationally challenging and have many real-world applications, such as flexible manufacturing~\citep{Goldschmidt1994}, financial decision making~\citep{Khuller1999,Kellerer2004}, data compression~\citep{Yang2016}, software defined network optimization~\citep{Kar2016}, and project investment~\citep{Wei2019}. Exact and approximation algorithms are two classes of methods for the SUKP and BMCP. Examples include exact algorithms for the SUKP based on dynamic programming~\citep{Arulselvan2014} and linear integer programming~\citep{Wei2019}, as well as greedy approximation algorithms for the SUKP~\citep{Taylor2016} and the BMCP~\citep{Khuller1999}. These algorithms can all theoretically guarantee the quality of their solutions, but the exact algorithms are hard to scale to large instances, and the approximation algorithms struggle to yield high-quality results. Heuristic algorithms such as population-based algorithms and local search algorithms are more practical than exact and approximation algorithms. Population-based methods usually apply bio-inspired metaheuristic operations to a population so as to find excellent individuals. He et al.~\citep{He2018} first proposed a binary artificial bee colony algorithm for the SUKP. Later on, a swarm intelligence-based algorithm~\citep{Ozsoydan2019} was proposed to solve the SUKP. There are also some hybrid algorithms for the SUKP that combine population-based methods with local search to improve the performance. For example, Lin et al.~\citep{Lin2019} combined binary particle swarm optimization with tabu search, Dahmani et al.~\citep{Dahmani2020} combined swarm optimization with local search operators, and Wu and He~\citep{Wu2020} proposed a hybrid Jaya algorithm for the SUKP. Population-based methods have attracted lots of attention.
However, these methods are more complex to design than local search algorithms, and the performance of existing population-based methods is not as good as that of the state-of-the-art local search methods for the SUKP. Among the local search methods, the I2PLS algorithm~\citep{Wei2019}, which is based on tabu search, was the first local search method for the SUKP. Later on, some better tabu search methods were proposed~\citep{Wei2021KBTS,Wei2021MSBTS}. Wei and Hao~\citep{Wei2021MSBTS} tested these tabu search algorithms on SUKP instances with no more than 1,000 items and elements. Recently, Zhou et al.~\citep{Zhou2021} proposed an efficient local search algorithm called ATS-DLA for the SUKP, and tested the algorithm on SUKP instances with up to 5,000 items and elements. For the BMCP, Li et al.~\citep{Li2021} proposed the first local search method. Zhou et al.~\citep{Zhou2022} proposed a local search algorithm based on a partial depth-first search tree. Local search algorithms have obtained excellent results for solving the SUKP and BMCP. However, the existing local search algorithms for these two problems still have some disadvantages. The I2PLS~\citep{Wei2019}, KBTS~\citep{Wei2021KBTS}, MSBTS~\citep{Wei2021MSBTS}, and PLTS~\citep{Li2021} algorithms all tend to find the best neighbor solution of the current solution in each iteration. However, their search neighborhoods contain lots of low-quality moves, which may reduce the algorithms' efficiency. The search region of the ATS-DLA algorithm~\citep{Zhou2021} is small, which may make it hard for the algorithm to escape from some local optima. The VDLS algorithm~\citep{Zhou2022} has a wide and deep search region. However, VDLS does not allow the current solution to be worse than the previous one, which may restrict the algorithm's search ability. In summary, these local search methods cannot trade off the search efficiency and the search region well.
To handle this issue, we propose an efficient and effective local search algorithm, called E2LS, for both the SUKP and BMCP. To the best of our knowledge, this is the first time that an algorithm has been proposed to solve the SUKP and BMCP simultaneously. E2LS uses a random greedy algorithm to generate the initial solution, and an effective local search method to explore the solution space. E2LS restricts the items that can be removed from or added into the current solution to improve the search efficiency. In this way, E2LS can refine the search region by abandoning low-quality candidates. The local search operator in E2LS then traverses the refined search region to find high-quality moves. Thus E2LS can explore the solution space widely and quickly, so as to find high-quality solutions. Moreover, the tabu search method in~\citep{Wei2021MSBTS} is used in E2LS to prevent the algorithm from getting stuck in local optima. Indeed, as we show in this work, our proposed E2LS algorithm significantly outperforms the state-of-the-art heuristic algorithms for both the SUKP and BMCP. The main contributions of this work are as follows. \begin{itemize} \item We propose an efficient and effective local search algorithm called E2LS for the SUKP and BMCP. To our knowledge, it is the first algorithm proposed for solving both of these two problems. \item E2LS trades off the search region and search efficiency well. E2LS restricts the items that can be removed from or added into the current solution, so as to abandon low-quality moves and refine the search region. The proposed operator ADD$^*$ can traverse the refined search region, so as to explore the solution space widely and efficiently. \item The mechanism that trades off the search region and efficiency, as well as the method of traversing the refined search region, could be applied to other combinatorial optimization problems.
\item Extensive experiments demonstrate that E2LS significantly outperforms the state-of-the-art algorithms for both the SUKP and BMCP. In particular, E2LS provides four new best-known solutions for the SUKP, and 27 new best-known solutions for the BMCP. \end{itemize} \section{The Proposed E2LS Algorithm} \label{Sec_Method} This section introduces the proposed E2LS algorithm. We first present the components of E2LS, including the random greedy initialization method and the searching methods, then present the main process of E2LS. Finally, we discuss the advantages of E2LS over other state-of-the-art local search algorithms for the SUKP and BMCP. Note that the proposed E2LS algorithm can be used to solve both the SUKP and BMCP. This section mainly introduces its SUKP version. The BMCP version can be obtained simply by swapping the optimization objective and the constraint of the SUKP version. Before introducing our method, we first present several essential definitions used in the E2LS algorithm. \textbf{Definition 1. (Additional Weight)} Recall that $W(S)$ (see Eq. \ref{eq_W}) is the total weight of the elements covered by the items in $S$. Let $AW(S,i_j)$ be the additional weight of an item $i_j$ to a solution $S$. If $i_j \notin S$, the additional weight $AW(S,i_j) = W(S \cup \{i_j\}) - W(S)$ represents the increase of the total weight of the covered elements caused by adding $i_j$ into $S$. Otherwise, $AW(S,i_j) = W(S) - W(S \backslash \{i_j\})$ represents the decrease of the total weight of the covered elements caused by removing $i_j$ from $S$. \textbf{Definition 2. (Value-weight Ratio)} The value-weight ratio of an item $i_j$ to a solution $S$ is defined as $R_{vw}(S,i_j) = v_j / AW(S,i_j)$, which is the ratio of the value of $i_j$ to the additional weight of $i_j$ to $S$. Obviously, an item with a larger value-weight ratio to a solution $S$ is a better candidate item for $S$~\citep{Khuller1999,Zhou2021,Zhou2022}.
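Definitions 1 and 2 can be illustrated with a short sketch (the function names and 0-based indexing are ours, not from the definitions):

```python
def additional_weight(S, j, R, w):
    """AW(S, i_j) from Definition 1, for item index j (0-based)."""
    if j not in S:
        covered = {k for i in S for k, r in enumerate(R[i]) if r == 1}
    else:
        # coverage of S without item j; the difference is what j alone contributes
        covered = {k for i in S if i != j for k, r in enumerate(R[i]) if r == 1}
    return sum(w[k] for k, r in enumerate(R[j]) if r == 1 and k not in covered)

def value_weight_ratio(S, j, R, w, v):
    """R_vw(S, i_j) = v_j / AW(S, i_j); treated as +inf when AW(S, i_j) = 0."""
    aw = additional_weight(S, j, R, w)
    return float('inf') if aw == 0 else v[j] / aw
```

Treating a zero additional weight as an infinite ratio mirrors the fact that such items raise $f(S)$ without raising $W(S)$ and are therefore always worth adding.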
\subsection{Random Greedy Initialization Method} We propose a simple construction method to generate the initial solution for E2LS. The procedure of the initialization method is shown in Algorithm \ref{alg_init}. \begin{algorithm}[t] \caption{Random\_Greedy($t$)} \label{alg_init} \LinesNumbered \KwIn{Sampling times $t$} \KwOut{Solution $S$} Initialize $S \leftarrow \emptyset$\; \While{TRUE}{ Initialize feasible candidates $FC \leftarrow \emptyset$\; \For{$j \leftarrow 1 : m$}{ \lIf{$i_j \in S$}{\textbf{continue}} \lIf{$AW(S,i_j) = 0$}{$S \leftarrow S \cup \{i_j\}$} \lElseIf{$AW(S,i_j) + W(S) \leq C$}{$FC \leftarrow FC \cup \{i_j\}$} } \lIf{$FC = \emptyset$}{\textbf{break}} \Else{ $M \leftarrow 0$\; \For{$j \leftarrow 1 : t$}{ $i_r \leftarrow$ a random item in $FC$\; \If{$R_{vw}(S,i_r) > M$}{$M \leftarrow R_{vw}(S,i_r), i_b \leftarrow i_r$} } $S \leftarrow S \cup \{i_b\}$\; } } \textbf{return} $S$\; \end{algorithm} The algorithm starts with an empty solution $S$, and repeatedly adds items into $S$ until no more items can be added (line 8). In each loop, each item $i_j \notin S$ with $AW(S,i_j) = 0$ is added into $S$ (line 6). Such an operation increases $f(S)$ (see Eq. \ref{eq_f}) without increasing $W(S)$. When there is at least one item whose addition to $S$ results in a feasible solution, i.e., the set of feasible candidates $FC \neq \emptyset$, the algorithm applies the probabilistic sampling strategy~\citep{Cai2015,Zheng2021} to select the item to be added. Specifically, the algorithm first randomly samples $t$ items with replacement from $FC$ (lines 11-12), then adds the item with the maximum value-weight ratio (lines 13-15). We set the parameter $t$ to $\sqrt{\max\{m,n\}}$, as the algorithm MSBTS~\citep{Wei2021MSBTS} does. This setting helps the E2LS algorithm yield high-quality and diverse initial solutions. In particular, we analyze the influence of $t$ on the performance of E2LS in the experiments.
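The construction above can be rendered as the following simplified Python sketch of Algorithm \ref{alg_init} (our own rendering; helper names are ours):

```python
import random

def random_greedy(R, w, v, C, t, rng=random):
    """Sketch of Algorithm 1 (Random_Greedy): probabilistic greedy construction.
    R is the item-element relation matrix, w the element weights, v the item
    values, C the capacity, and t the number of samples per greedy step."""
    m = len(R)
    S = set()
    while True:
        covered = {k for i in S for k, r in enumerate(R[i]) if r == 1}
        weight = sum(w[k] for k in covered)

        def aw(j):  # additional weight of item j to S (Definition 1)
            return sum(w[k] for k, r in enumerate(R[j]) if r == 1 and k not in covered)

        FC = []
        for j in range(m):
            if j in S:
                continue
            a = aw(j)
            if a == 0:
                S.add(j)              # free item: raises f(S), leaves W(S) unchanged
            elif weight + a <= C:
                FC.append(j)
        if not FC:
            break
        # sample t candidates with replacement; add the best value-weight ratio
        samples = [rng.choice(FC) for _ in range(t)]
        S.add(max(samples, key=lambda j: v[j] / aw(j)))
    return S
```

Sampling from $FC$ instead of scanning it entirely keeps each greedy step cheap while still favoring items with large value-weight ratios.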
The results show that E2LS is very robust and not sensitive to the parameter $t$. Even with a random initialization method (i.e., $t = 1$), E2LS still has excellent performance. \subsection{Searching Methods in E2LS} The local search method in E2LS is the main improvement of E2LS over other local search algorithms for the SUKP and BMCP. In E2LS, we propose an efficient and effective local search operator to explore the solution space. We also apply the solution-based tabu search method in~\citep{Wei2021MSBTS} to avoid getting stuck in local optima. This subsection first describes how the tabu list is represented, then introduces the local search process. \subsubsection{Tabu List Representation} The tabu search method in~\citep{Wei2021MSBTS} uses three hash vectors $H_1,H_2,H_3$ to represent the tabu list. The length of each vector is set to $L$ ($L = 10^8$ by default). The three hash vectors are initialized to 0, indicating that no solution is prohibited by the tabu list. Each solution $S$ corresponds to three hash values $h_1(S),h_2(S),h_3(S)$. A solution $S$ is prohibited (i.e., in the tabu list) if $H_1[h_1(S)] \wedge H_2[h_2(S)] \wedge H_3[h_3(S)] = 1$. The hash values of a solution are calculated as follows. For an instance with $m$ items, a weight matrix $\mathcal{W} \in \mathbf{N}^{3 \times m}$ is first constructed: let $\mathcal{W}_{lj} = \lfloor j^{\gamma_l} \rfloor$, $l \in \{1,2,3\}$, $j \in \{1,...,m\}$, where $\gamma_1,\gamma_2,\gamma_3$ are set to 1.2, 1.6, 2.0, respectively. Then, each of the three rows of $\mathcal{W}$ is randomly shuffled. A solution $S$ can be represented by $(y_1,...,y_m)$, where $y_j = 1$ if $i_j \in S$, and $y_j = 0$ otherwise. The hash values $h_l(S)$, $l \in \{1,2,3\}$, are calculated by $h_l(S) = \left(\sum_{j=1}^{m} \lfloor \mathcal{W}_{lj} \times y_j \rfloor\right) \bmod L$. \subsubsection{Local Search Process} The procedure of the local search method in E2LS is shown in Algorithm \ref{alg_LS}.
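Before turning to the local search process, the hash computation just described can be sketched as follows (function names are ours; the flags themselves would be stored in bit vectors $H_1,H_2,H_3$ of length $L$ indexed by these values):

```python
import random

def build_weight_matrix(m, seed=0):
    """W[l][j] = floor((j+1)^gamma_l) for 0-based j, each row randomly shuffled."""
    rng = random.Random(seed)
    W = []
    for gamma in (1.2, 1.6, 2.0):
        row = [int((j + 1) ** gamma) for j in range(m)]
        rng.shuffle(row)
        W.append(row)
    return W

def hash_values(S, W, L=10**8):
    """h_l(S) = (sum of W[l][j] over items j in S) mod L, for l = 1, 2, 3."""
    return tuple(sum(row[j] for j in S) % L for row in W)
```

Because each hash is a sum over the members of $S$, the values are independent of insertion order, and a solution is declared tabu only when all three of its hash positions are set.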
The search operator first removes an item from the input solution $S$ (line 6), and then uses the proposed operator ADD$^*$ (Algorithm \ref{alg_add}) to add items into the resulting solution $S'$ (line 8). The best solution found during the search process that is not in the tabu list is represented by $S_b$ (line 9). \begin{algorithm}[t] \caption{Local\_Search($S,r_{num},a_{num}$)} \label{alg_LS} \LinesNumbered \KwIn{Input solution $S$, maximum size of the candidate set of the items to be removed $r_{num}$, maximum size of the candidate set of the items to be added $a_{num}$} \KwOut{Output solution $S_b$} $S_b \leftarrow \emptyset$\; $U \leftarrow$ the set of items in $S$\; Sort the items in $U$ in ascending order of their value-weight ratios to $S$\; \For{$j \leftarrow 1 : \text{min}\{r_{num},|U|\}$}{ $i \leftarrow$ the $j$-th item in $U$\; $S' \leftarrow S \backslash \{i\}$\; \If{$S'$ is not in the tabu list}{ $S' \leftarrow$ ADD$^*$($S',\emptyset,a_{num}$)\; \lIf{$f(S') > f(S_b)$}{$S_b \leftarrow S'$} } } \textbf{return} $S_b$\; \end{algorithm} \begin{algorithm}[t] \caption{ADD$^*$($S,S_b,a_{num}$)} \label{alg_add} \LinesNumbered \KwIn{Input solution $S$, best solution found in this step $S_b$, maximum size of the candidate set of the items to be added $a_{num}$} \KwOut{Output solution $S_b$} \While{new items were added into $S$ in the previous pass (initially TRUE)}{ Initialize feasible candidates $FC \leftarrow \emptyset$\; \For{$j \leftarrow 1 : m$}{ \lIf{$i_j \in S$}{\textbf{continue}} \If{$AW(S,i_j) = 0$}{ \If{$S \cup \{i_j\}$ is not in the tabu list}{ $S \leftarrow S \cup \{i_j\}$\; \lIf{$f(S) > f(S_b)$}{$S_b \leftarrow S$} } } \ElseIf{$AW(S,i_j) + W(S) \leq C$}{$FC \leftarrow FC \cup \{i_j\}$} } \lIf{$FC = \emptyset$}{\textbf{return} $S_b$} } Sort the items in $FC$ in descending order of their value-weight ratios to $S$\; \For{$j \leftarrow 1 : \text{min}\{a_{num},|FC|\}$}{ $i \leftarrow$ the $j$-th item in $FC$\; \lIf{$S \cup \{i\}$ is in the tabu list}{\textbf{continue}} $S' \leftarrow S \cup \{i\}$\; \lIf{$f(S') > 
f(S_b)$}{$S_b \leftarrow S'$} $S_b \leftarrow$ ADD$^*$($S',S_b,a_{num}$)\; } \textbf{return} $S_b$\; \end{algorithm} As shown in Algorithm \ref{alg_add}, the function ADD$^*$ tries to add items into the input solution $S$ recursively (line 18) until no more items can be added into $S$ (line 11). The best solution found during the process that is not in the tabu list is represented by $S_b$ (lines 8 and 17). The algorithm first calculates the set of feasible candidate items $FC$ of $S$ (lines 1-11). During this process, each item $i_j \notin S$ with $AW(S,i_j) = 0$ is added into $S$ if the resulting solution is not in the tabu list (lines 5-7). After that, the algorithm traverses part of the set $FC$ to add an item $i$ into $S$ (lines 13-14), and then calls the function ADD$^*$ with the input solution $S \cup \{i\}$ to continue the search (line 18). Throughout the process, the solutions in the tabu list are prohibited (line 15). The proposed function ADD$^*$ is powerful and actually has a wide search region, for the following reasons. Suppose we set $a_{num} = m$; then ADD$^*$ can traverse all the combinations of the candidate items through the recursive process, i.e., ADD$^*$ can find the optimal solution corresponding to its input partial solution when the items in its input solution are fixed. In particular, the function ADD$^*$ can be regarded as an exact solver for an instance with $m$ items if we call ADD$^*$($\emptyset,\emptyset,m$). However, such an exhaustive search is inefficient. In order to improve the efficiency of the local search process, low-quality moves (candidate items) should not be considered during the search. To yield high-quality results, the items with large value-weight ratios to the current solution should be added into (or be kept in) it, and the items with small value-weight ratios should be removed from (or not be added into) it.
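The recursive traversal just described can be rendered as the following simplified Python sketch of ADD$^*$ (our own rendering, with the tabu-list checks omitted for brevity):

```python
def add_star(S, best, a_num, R, w, v, C):
    """Simplified sketch of ADD* (Algorithm 3), tabu-list checks omitted:
    recursively extend S with the top-a_num feasible candidate items
    ranked by value-weight ratio, tracking the best solution found."""
    covered = {k for j in S for k, r in enumerate(R[j]) if r == 1}
    weight = sum(w[k] for k in covered)

    def aw(j):  # additional weight of item j to S (Definition 1)
        return sum(w[k] for k, r in enumerate(R[j]) if r == 1 and k not in covered)

    def value(T):  # f(T): total value of the items in T
        return sum(v[j] for j in T)

    FC = []
    for j in range(len(R)):
        if j in S:
            continue
        a = aw(j)
        if a == 0:
            S = S | {j}                      # free item: add directly
        elif weight + a <= C:
            FC.append(j)
    if value(S) > value(best):
        best = S
    # traverse only the top-a_num candidates of the refined search region
    FC.sort(key=lambda j: v[j] / aw(j), reverse=True)
    for j in FC[:a_num]:
        best = add_star(S | {j}, best, a_num, R, w, v, C)
    return best
```

As observed above, calling it with $a_{num} = m$ enumerates every feasible extension of the input partial solution; restricting the loop to the top $a_{num}$ candidates is exactly what refines the search region.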
Therefore, in Algorithm \ref{alg_LS}, the items that can be removed from $S$ are restricted by a parameter $r_{num}$. That is, only the top $\min\{r_{num},|S|\}$ items with the minimum value-weight ratios to $S$ can be removed from $S$ (lines 3-4). In Algorithm \ref{alg_add}, only the top $\min\{a_{num},|FC|\}$ items with the maximum value-weight ratios to $S$ can be added into $S$ (lines 12-13). By applying this strategy, the search region can be refined significantly, i.e., the number of items considered by ADD$^*$ is reduced from $|FC|$ to $\min\{a_{num},|FC|\}$, and the efficiency can be improved greatly. Also, the operator ADD$^*$ does not traverse all the moves, but only the refined search region. Moreover, our proposed ADD$^*$ operator can add multiple items into the current solution $S$, which is not the same as a combination of multiple consecutive $ADD$ operators in~\citep{Wei2019,Wei2021KBTS} or $flip$ operators (on items not in $S$) in~\citep{Wei2021MSBTS,Li2021}. For example, the ADD$^*$ operator may add items $i_1,i_2$, and $i_3$ into $S$, while the best neighbor solution of $S$ found by the commonly used $ADD$ operator might add item $i_4$. In summary, ADD$^*$ is effective because it can traverse various combinations of the items in the refined search region that can be added into the current solution. \subsection{Main Framework of E2LS} The main process of E2LS is shown in Algorithm \ref{alg_E2LS}. E2LS first initializes the three hash vectors $H_1,H_2,H_3$ to 0 (line 2), and calls the Random\_Greedy function (Algorithm \ref{alg_init}) to generate the initial solution $S$ (line 3). Then, E2LS calls the Local\_Search function (Algorithm \ref{alg_LS}) to explore the solution space until the cut-off time is reached (lines 5-12).
During the iterations, the Random\_Greedy function is called again when the Local\_Search function with the input solution $S$ cannot find a solution that is not in the tabu list (lines 7-8), i.e., when the output solution of the Local\_Search function is $\emptyset$. The solution $S$ obtained in each iteration is added into the tabu list by setting $H_1[h_1(S)]$, $H_2[h_2(S)]$, and $H_3[h_3(S)]$ to 1 (line 11). The best solution found so far is represented by $S_b$ (line 12). \begin{algorithm}[t] \caption{E2LS($I,T_{max},t,r_{num},a_{num}$)} \label{alg_E2LS} \LinesNumbered \KwIn{Instance $I$, cut-off time $T_{max}$, sampling times $t$, maximum size of the candidate set of the items to be removed $r_{num}$, maximum size of the candidate set of the items to be added $a_{num}$} \KwOut{Output solution $S_b$} Read instance $I$\; Initialize $H_1,H_2,H_3$ to 0\; $S \leftarrow$ Random\_Greedy($t$)\; Initialize $S_b \leftarrow \emptyset$\; \While{the cut-off time $T_{max}$ is not reached}{ $S' \leftarrow$ Local\_Search($S,r_{num},a_{num}$)\; \If{$S' = \emptyset$}{ $S \leftarrow$ Random\_Greedy($t$)\; \textbf{continue}\; } $S \leftarrow S'$\; \lFor{$j \leftarrow 1 : 3$}{$H_j[h_j(S)] \leftarrow 1$} \lIf{$f(S) > f(S_b)$}{$S_b \leftarrow S$} } \textbf{return} $S_b$\; \end{algorithm} \subsection{Advantages of E2LS} This subsection discusses the main advantages of our proposed E2LS over the state-of-the-art local search algorithms for the SUKP and BMCP. The first category of local search methods includes the I2PLS~\citep{Wei2019}, KBTS~\citep{Wei2021KBTS}, and MSBTS~\citep{Wei2021MSBTS} algorithms for the SUKP, and the PLTS~\citep{Li2021} algorithm for the BMCP. These algorithms are all based on tabu search. Their common shortcoming is that their search neighborhoods contain lots of low-quality moves, whereas E2LS refines the search neighborhood according to the value-weight ratios of the items.
Thus, E2LS shows much better performance and higher efficiency than these algorithms. The second category is the ATS-DLA algorithm~\citep{Zhou2021}, which also uses the tabu search method and is efficient for large scale SUKP instances. However, ATS-DLA can only remove or add one item in each iteration, which leads to a relatively small and overrefined search region, and may make it hard to escape from local optima in some cases. In contrast, E2LS can add multiple items in each iteration by traversing the refined search region and considering various combinations of the items. Therefore, E2LS can explore the solution space more widely and deeply than ATS-DLA, so as to find higher-quality solutions. The third category is the VDLS algorithm~\citep{Zhou2022}, which is based not on tabu search but on a partial depth-first search method. VDLS has a large search region, and can yield significantly better solutions than the PLTS algorithm~\citep{Li2021}. However, VDLS does not allow the solution to get worse during the search process, and the initial solution generated by VDLS is fixed. Such mechanisms limit the search ability of the algorithm. E2LS does not require the current solution to be better than the previous one, and applies the tabu search method to avoid getting stuck in local optima. Moreover, the random greedy initialization method in E2LS can generate high-quality and diverse initial solutions for E2LS. Thus E2LS also shows significantly better performance than VDLS. \section{Computational Results} \label{Sec_Result} This section presents the experimental results and analyses, before which we first introduce the benchmark instances used in the experiments and the experimental setup. \subsection{Benchmark Instances} We tested E2LS on a total of 78 public SUKP instances with at most 5,000 items or elements, and 90 public BMCP instances with no more than 5,200 items or elements. The 78 tested SUKP instances can be divided into three sets as follows.
\begin{itemize} \item \textit{Set I:} This set contains 30 instances with 85 to 500 items or elements. This set was proposed in~\citep{He2018} and widely used in~\citep{He2018,Ozsoydan2019,Lin2019,Wei2019,Dahmani2020,Wu2020,Wei2021KBTS,Wei2021MSBTS}. \item \textit{Set II:} This set contains 30 instances with 585 to 1,000 items or elements. This set was proposed in~\citep{Wei2021KBTS} and used in~\citep{Wei2021KBTS,Wei2021MSBTS}. \item \textit{Set III:} This set contains 18 instances with 850 to 5,000 items or elements. This set was proposed in~\citep{Zhou2021}. \end{itemize} Each instance in \textit{Sets I}, \textit{II}, and \textit{III} is characterized by four parameters: the number of items $m$, the number of elements $n$, the density of the relation matrix $\alpha = (\sum_{j=1}^m{\sum_{k=1}^n{R_{jk}}})/(mn)$, and the ratio of the knapsack capacity $C$ to the total weight of the elements $\beta = C/\sum_{k=1}^n{w_k}$. The name of a SUKP instance consists of these four parameters. For example, \textit{sukp\_85\_100\_0.10\_0.75} represents a SUKP instance with 85 items, 100 elements, $\alpha = 0.10$, and $\beta = 0.75$. The 90 tested BMCP instances can be divided into three sets as follows. \begin{itemize} \item \textit{Set A:} This set contains 30 instances with 585 to 1,000 items or elements. This set was proposed in~\citep{Li2021} and used in~\citep{Li2021,Zhou2022}. \item \textit{Set B:} This set contains 30 instances with the number of items or elements ranging from 1,000 to 1,600. This set was proposed in~\citep{Zhou2022}. \item \textit{Set C:} This set contains 30 instances with the number of items or elements ranging from 4,000 to 5,200. This set was proposed in~\citep{Zhou2022}. \end{itemize} Each instance in \textit{Set A} is characterized by four parameters: the number of items $m$, the number of elements $n$, the knapsack capacity (budget) $C$, and the density of the relation matrix $\alpha$.
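As an illustration (the function names are ours), the instance parameters $\alpha$ and $\beta$ can be computed directly from the relation matrix and the element weights:

```python
def density_alpha(R):
    """alpha = (sum over j, k of R_{jk}) / (m * n): density of the relation matrix."""
    m, n = len(R), len(R[0])
    return sum(R[j][k] for j in range(m) for k in range(n)) / (m * n)

def capacity_ratio_beta(C, w):
    """beta = C / (total weight of all elements)."""
    return C / sum(w)
```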
The instances in \textit{Sets B} and \textit{C} have more complex structures than those in \textit{Set A}. Zhou et al.~\citep{Zhou2022} created these instances by first randomly grouping the items and elements, then deciding their connections according to a parameter $\rho$ that represents the density of the relation matrix of each group. Each instance in \textit{Sets B} and \textit{C} is characterized by the parameters $m,n,C$, and $\rho$. \subsection{Experimental Setup} We first introduce the selected baseline algorithms. For the SUKP, we select some of the state-of-the-art heuristic algorithms, including ATS-DLA~\citep{Zhou2021}, MSBTS~\citep{Wei2021MSBTS}, KBTS~\citep{Wei2021KBTS}, and I2PLS~\citep{Wei2019}, as the baseline algorithms. For the BMCP, we select the heuristic algorithms PLTS~\citep{Li2021} and VDLS~\citep{Zhou2022} as the baseline algorithms. To our knowledge, they are also the only two heuristic algorithms for the BMCP. The E2LS algorithm and the baseline algorithms were implemented in C++ and compiled by g++. All the experiments were performed on a server with an Intel Xeon E5-1603 v3 2.80GHz CPU, running the Ubuntu 16.04 Linux operating system. The parameters in E2LS include the sampling times $t$, and the maximum sizes of the candidate sets of the items to be removed/added, $r_{num}/a_{num}$. We tuned these parameters according to our experience. The default value of $t$ is set to $\sqrt{\max\{m,n\}}$. For the parameters $r_{num}$ and $a_{num}$, we set their default values to be different when solving the SUKP and BMCP, since the properties of these two problems are different. Specifically, for the SUKP, we set the default values as $r_{num} = 2$ and $a_{num} = 2$. For the BMCP, we set $r_{num} = 5$ and $a_{num} = 5$. The detailed reasons why the default values differ between the SUKP and BMCP, as well as the influence of these parameters on the performance of E2LS, are presented in Section \ref{Sec_para}.
We set the cut-off time for each algorithm to 500 seconds for the instances in \textit{Set I} as~\citep{Wei2021MSBTS} did, 1,000 seconds for the instances in \textit{Sets II} and \textit{III} as~\citep{Wei2021MSBTS,Zhou2021} did, 600 seconds for the instances in \textit{Set A} as~\citep{Li2021} did, and 1,800 seconds for the instances in \textit{Sets B} and \textit{C} as~\citep{Zhou2022} did. Each algorithm was run 10 independent times on each instance. \input{table-SUKP-split} \input{table-BMCP-split} \input{table-SUKP2} \input{table-BMCP2} \subsection{Comparison with the baselines} The comparison results of E2LS and the baseline algorithms on the three sets of SUKP instances are presented in Tables \ref{table-SUKP1}, \ref{table-SUKP2}, and \ref{table-SUKP3}\footnote{Note that there are two instances with the same name but different structures in \textit{Set II} and \textit{Set III}.}, respectively. The comparison results of E2LS and the baseline algorithms on the three sets of BMCP instances are presented in Tables \ref{table-BMCP1}, \ref{table-BMCP2}, and \ref{table-BMCP3}, respectively. In these tables, unique best results appear in bold, while equal best results appear in italics. Tables \ref{table-SUKP1}, \ref{table-SUKP2}, and \ref{table-SUKP3} compare the best solution (objective value), average solution, standard deviation over 10 runs (S.D.), and average run time in seconds (to obtain the best solution in each run) of each involved algorithm. Tables \ref{table-BMCP1}, \ref{table-BMCP2}, and \ref{table-BMCP3} present the best solution and average solution of each involved algorithm, coupled with the standard deviations and average run times of E2LS. The results of PLTS and VDLS in Tables \ref{table-BMCP1}, \ref{table-BMCP2}, and \ref{table-BMCP3} are from~\citep{Zhou2022} (they used a machine similar to ours, with an Intel Xeon E5-2650 v3 2.30GHz CPU).
Moreover, in order to show the advantage of E2LS over the baselines more clearly, we summarize the comparative results between E2LS and each baseline algorithm in Tables \ref{table-SUKP-compare} and \ref{table-BMCP-compare}. Columns \#Wins, \#Ties, and \#Losses indicate the number of instances for which E2LS obtains a better, equal, and worse result than the compared algorithm according to the best solution and average solution indicators. As shown by the results in Tables \ref{table-SUKP1}, \ref{table-SUKP2}, \ref{table-SUKP3}, and \ref{table-SUKP-compare}, E2LS does not lose to any baseline algorithm according to the best solution indicator. Actually, E2LS obtains the best-known solution for all the 78 tested SUKP instances. E2LS also has good stability and robustness, since its average solutions are also excellent and its standard deviations are very small. In particular, E2LS obtains three new best-known solutions for the instances \textit{sukp\_1000\_850\_0.15\_0.85}, \textit{sukp\_3000\_2850\_0.10\_0.75}, and \textit{sukp\_3000\_2850\_0.15\_0.85}\footnote{The result 9565 of instance \textit{sukp\_1000\_850\_0.15\_0.85} can also be obtained by MSBTS, but has not been reported in the literature. The result 9207 of instance \textit{sukp\_5000\_4850\_0.10\_0.75} has been reported in~\citep{Zhou2021}.}. Moreover, no baseline algorithm can solve SUKP instances of various scales well. For example, ATS-DLA shows good performance on the instances in \textit{Sets II} and \textit{III}, since its search operator, which considers only one item per step, makes it efficient for (relatively) large instances. However, ATS-DLA is not good at solving the instances in \textit{Set I}, because its search operator has a small search region, and for small instances a large search region is more important than high search efficiency.
Algorithms KBTS and MSBTS show good performance on the instances in \textit{Set I}, because they traverse all the possible moves per step and thus have large search regions. However, KBTS shows worse performance than MSBTS on \textit{Set II}, since solution-based tabu search is better than attribute-based tabu search for the SUKP~\citep{Wei2021MSBTS}. Also, both KBTS and MSBTS show worse performance than ATS-DLA on \textit{Set III}, since their operators that traverse all the possible moves are inefficient for large instances. In contrast, the proposed E2LS shows excellent performance on all three sets of SUKP instances, because E2LS combines the advantages of the baseline algorithms while remedying their disadvantages. On the one hand, E2LS refines the search region according to the value-weight ratios of the items, so as to abandon the low-quality moves and improve the search efficiency. Thus E2LS works well on \textit{Sets II} and \textit{III}. On the other hand, the proposed operator ADD$^*$ can traverse the refined search region and add the best feasible combination of multiple candidate items into the current solution per iteration, which leads to a large search region. Thus E2LS also works well on \textit{Set I}. In summary, E2LS is efficient and effective because it can trade off the search region and search efficiency well. As for the comparison results on the BMCP instances shown in Tables \ref{table-BMCP1}, \ref{table-BMCP2}, \ref{table-BMCP3}, and \ref{table-BMCP-compare}, the advantage of E2LS over the BMCP baseline algorithms PLTS and VDLS is more obvious than that of E2LS over the SUKP baselines. This might be because the SUKP is better studied than the BMCP in terms of heuristic methods. Specifically, E2LS does not lose to the BMCP baselines according to either the best or the average solution indicator, and obtains 7 and 20 new best-known solutions for the instances in \textit{Sets B} and \textit{C}, respectively.
Moreover, the average solutions of E2LS are equal to its best solutions on all the tested instances except \textit{bmcp\_5000\_4800\_0.5\_7000}. In addition, E2LS can obtain such excellent results within very short run times for most of the tested instances. These results again indicate the excellent stability, robustness, and efficiency of the proposed E2LS algorithm. \begin{figure*} \caption{Analyses of the influence of the parameters $r_{num}$ and $a_{num}$ on the performance of E2LS on each instance set.} \label{fig_Para-I} \label{fig_Para-A} \label{fig_Para-II} \label{fig_Para-B} \label{fig_Para-III} \label{fig_Para-C} \label{fig_Para} \end{figure*} \input{table-SUKP-para} \input{table-BMCP-para} \subsection{Analyses of the parameters} \label{Sec_para} We then analyze the influence of the parameters $r_{num}$, $a_{num}$, and $t$ on the performance of E2LS by comparing E2LS with its variants on each instance set. We first compare the E2LS algorithm with different values of $r_{num}$ and $a_{num}$. We tested five settings of this pair of parameters, namely $r_{num} = a_{num} \in \{1, 2, 5, 10, m\}$. The results are shown in Figure \ref{fig_Para}, which compares the average values of the best and average solutions of each algorithm on all the instances in each instance set. The results in Figure \ref{fig_Para} show that as the parameter values increase from 1 to $m$, the performance of E2LS first increases and then decreases. This result indicates that balancing the search efficiency and the search region is reasonable and necessary. When we set small values for the parameters (e.g., 1 for the SUKP, 1 or 2 for the BMCP), E2LS can explore the search space very quickly, but it might easily get stuck in some local optima. When we set large values for the parameters (e.g., 10 or $m$), E2LS can explore the search space widely and deeply, but the low efficiency might also make it hard to escape from local optima. We can also observe that the properties of the SUKP and BMCP are different.
For the SUKP, E2LS with small parameter values is not good at solving the small instances in \textit{Set I}, and E2LS with large parameter values is not good at solving the large instances in \textit{Sets II} and \textit{III}. This is consistent with the results of the SUKP baseline algorithms, and can explain their performance. That is, algorithms KBTS and MSBTS, with large and unrefined search regions, are good at solving the small instances in \textit{Set I}, but do not work well on \textit{Set III}. In contrast, ATS-DLA, with a small and over-refined search region, works well on \textit{Sets II} and \textit{III}, but not on \textit{Set I}. For the BMCP, the instances in \textit{Sets A} and \textit{B} prefer small parameter values to large ones, while the instances in \textit{Set C} prefer large parameter values to small ones. This might be because for the relatively small instances in \textit{Sets A} and \textit{B} (with 585 to 1,600 items or elements), low search efficiency is more likely to cause stagnation in local optima than a small search region is, whereas the situation for the large instances in \textit{Set C} (with 4,000 to 5,200 items or elements) is the opposite. Moreover, the best parameter settings for the SUKP and the BMCP are different: the best setting of $r_{num}, a_{num}$ is 2 for the SUKP and 5 for the BMCP. This is also due to the different properties of the two problems. For example, when solving the SUKP, items with zero additional weight will not be added into the feasible candidate set $FC$ (which is refined by the parameter $a_{num}$), but are added into the current solution directly by the ADD$^*$ operator if such a move is not prohibited by the tabu list (lines 5-8 in Algorithm \ref{alg_add}).
When solving the BMCP, in contrast, there is no item with zero additional weight (value), since the value of each item is positive (in the BMCP, an item with zero values should always be contained in the solution). Therefore, if we set the parameter $a_{num}$ to the same value when solving the SUKP and the BMCP, the ADD$^*$ operator for the SUKP can add more items than that for the BMCP. Thus it is reasonable to set larger parameter values for the BMCP than for the SUKP. We further analyze the influence of the parameter $t$ on the performance of E2LS. We denote by E2LS($t = 1$) and E2LS($t = 10$) the two variants of E2LS with the parameter $t$ set to 1 and 10, respectively. Note that E2LS($t = 1$) actually generates the initial solution randomly. We compare these two variants with E2LS and the baseline algorithms. Tables \ref{table-SUKP-para} and \ref{table-BMCP-para} show the results of the algorithms for the SUKP and BMCP, respectively. The results are the average values of the best and average solutions, the standard deviations, and the average run times of each algorithm over all the instances in each instance set. As the results show, both E2LS($t = 1$) and E2LS($t = 10$) obtain results competitive with those of E2LS, and their performance is significantly better than that of the baseline algorithms for both the SUKP and the BMCP. This indicates that the local search method in E2LS has excellent stability and robustness, as it can yield excellent results with various initial solutions, even if they are generated randomly. Moreover, E2LS slightly outperforms E2LS($t = 1$) and E2LS($t = 10$), indicating that higher-quality initial solutions lead to better performance and that the random greedy initialization method in E2LS is effective. \section{Conclusion} \label{Sec_Conclusion} This paper proposes an efficient and effective local search algorithm called E2LS for the SUKP and the BMCP.
To our knowledge, this is the first algorithm proposed to solve both of these two closely related problems. The E2LS algorithm can explore the solution space efficiently by refining the search region, i.e., abandoning the low-quality moves. The proposed ADD$^*$ operator in E2LS can traverse the refined search region and provide high-quality moves quickly. As a result, E2LS trades off the search region and the search efficiency well, which leads to excellent performance. Such a trade-off mechanism and the approach of traversing the refined search region could be applied to various combinatorial optimization problems. Extensive experiments on 78 public SUKP instances and 90 public BMCP instances with various scales demonstrate the superiority of the proposed algorithm. In particular, E2LS provides four new best-known solutions for the SUKP, and 27 new best-known solutions for the BMCP. \section*{Declarations of interest} None. \end{document}
\begin{document} \author[1]{Michael Neilan\thanks{[email protected]}} \author[2]{Abner J. Salgado\thanks{[email protected]}} \author[3]{Wujun Zhang\thanks{[email protected]}} \affil[1]{Department of Mathematics, University of Pittsburgh} \affil[2]{Department of Mathematics, The University of Tennessee} \affil[3]{Department of Mathematics, Rutgers University} \date{\today} \title{The \MA equation} \begin{abstract}{\normalsize We review recent advances in the numerical analysis of the Monge-Amp\`ere equation. Various computational techniques are discussed, including wide-stencil finite difference schemes, two-scale methods, finite element methods, and methods based on geometric considerations. Particular focus is on the development of appropriate stability and consistency estimates, which lead to rates of convergence of the discrete approximations. Finally, we present numerical experiments which highlight each method for a variety of test problems with different levels of regularity.}\end{abstract} \input intro.tex \input FD.tex \input OP.tex \input Galerkin.tex \input Numerics.tex \input Conclusion.tex \input main.bbl \end{document}
\begin{document} \title{On the Gaussian surface area of spectrahedra} \author{Srinivasan Arunachalam\thanks{IBM T.J. Watson Research Center \textsf{[email protected]}} \and Oded Regev\thanks{Courant Institute of Mathematical Sciences, New York University, \textsf{[email protected]}} \and Penghui Yao\thanks{State Key Laboratory for Novel Software Technology, Nanjing University, \textsf{[email protected]}}} \maketitle \begin{abstract} We show that for sufficiently large $n\geq 1$ and $d=C n^{3/4}$ for some universal constant $C>0$, a random spectrahedron with matrices drawn from the Gaussian orthogonal ensemble has Gaussian surface area $\Theta(n^{1/8})$ with high~probability. \end{abstract} \section{Introduction} A \emph{spectrahedron} $S\subseteq{\mathbb R}^n$ is a set of the form $$ S=\set{x\in{\mathbb R}^n:\sum_ix_iA^{(i)}\preceq B} \; , $$ for some $d\times d$ symmetric matrices $A^{(1)},\ldots, A^{(n)}, B\in \mathrm{Sym}_d$. Here we will be concerned with the \emph{Gaussian surface area} of $S$, defined as \begin{align}\label{eq:gsaoriginaldefinitionwithouter} \mathcal{G}SA\br{S}=\liminf_{\delta\rightarrow 0}\frac{\mathcal{G}^n\br{S_{\delta}^{\mathrm{out}}}}{\delta} \; , \end{align} where $S_{\delta}^{\mathrm{out}}=\set{x \notin S:\mathrm{dist}(x,S)\leq\delta}$ denotes the outer $\delta$-neighborhood of $S$ under the Euclidean distance and $\mathcal{G}^n(\cdot)$ denotes the standard Gaussian measure on $\mathbb{R}^n$, whose density is $(2\pi)^{-n/2} \exp(-\|x\|^2/2)$. Ball showed that the $\mathcal{G}SA$ of any convex body in $\mathbb{R}^n$ is $O(n^{1/4})$~\cite{ball1993reverse}, which was later shown to be tight by Nazarov~\cite{nazarov2003maximal}.
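For intuition, consider the simplest example, a halfspace (an illustration we add; it is not part of the original text): for $H_t=\set{x\in{\mathbb R}^n : x_1\le t}$, the outer $\delta$-neighborhood is the slab $\set{x : t<x_1\le t+\delta}$, so
\begin{align*}
\mathcal{G}SA\br{H_t}=\lim_{\delta\rightarrow 0}\frac{1}{\delta}\,\mathcal{G}^n\br{\set{x : t<x_1\le t+\delta}}=\frac{e^{-t^2/2}}{\sqrt{2\pi}}\leq\frac{1}{\sqrt{2\pi}} \; ,
\end{align*}
so a single halfspace has Gaussian surface area bounded by a constant, independently of $n$.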
Moreover, Nazarov~\cite{klivans2008learning} showed that the $\mathcal{G}SA$ of a $d$-facet polytope\footnote{A $d$-facet polytope is the special case of a spectrahedron in which the matrices $A^{(1)},\ldots,A^{(n)},B$ are \emph{diagonal}.} in ${\mathbb R}^n$ is $O(\sqrt{\log d})$, and this fact has found applications in learning theory and in constructing pseudorandom generators for polytopes~\cite{klivans2008learning,harsha2013invariance,servedio2017fooling,chattopadhyay2019simple}. We refer the interested reader to~\cite{klivans2008learning,harsha2013invariance} for more details. Motivated by recent work~\cite{arunachalam2021positive}, it is natural to ask whether the $\mathcal{G}SA$ of spectrahedra is also small. In this note we answer this question in the negative. Recall that a matrix $\boldsymbol{A}$ drawn from the Gaussian orthogonal ensemble is a symmetric matrix whose entries $\set{\boldsymbol{A}_{i,j}}_{i\leq j}$ are all independent normal random variables of mean $0$, having variance $1$ if $i<j$ and variance $2$ if $i=j$. \begin{theorem} \label{thm:gsa} For a universal constant $C>0$ and any integers $n,d \ge 1$ satisfying $d\leq n/C$ the following hold. If $\boldsymbol{A}^{(1)},\ldots,\boldsymbol{A}^{(n)}$ are i.i.d.~drawn from the $d\times d$ Gaussian orthogonal ensemble, then the spectrahedron \begin{equation}\label{eq:main theorem spectrahedron} \mathcal{T}=\Big\{x\in {\mathbb{R}}^n:\sum_i x_i \boldsymbol{A}^{(i)}\preceq 2 \sqrt{nd}\cdot{\mathbb{I}} \Big\} \end{equation} satisfies $\mathcal{G}SA(\mathcal{T})\geq c\cdot\sqrt{n/d}$ for some absolute constant $c>0$ with probability at least $1-C \exp(-d n^{-3/4} / C)$. Moreover, for any integer $d$ satisfying $d\leq n/C$, $\mathcal{G}SA(\mathcal{T})\leq2\sqrt{n}/(\sqrt{\pi d})$ holds with probability at least $1 - \exp(-n/50)$. \end{theorem} The theorem shows the existence of spectrahedra with $\mathcal{G}SA$ of $\Omega(n^{1/8})$.
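The random spectrahedron $\mathcal{T}$ of Eq.~\eqref{eq:main theorem spectrahedron} is easy to sample and probe numerically. The following sketch is our addition (illustrative only, not part of the paper); it uses the convention above that off-diagonal GOE entries have variance $1$ and diagonal entries variance $2$:

```python
import numpy as np

def sample_goe(d, rng):
    """Sample a d x d GOE matrix: off-diagonal entries ~ N(0,1),
    diagonal entries ~ N(0,2)."""
    g = rng.standard_normal((d, d))
    return (g + g.T) / np.sqrt(2)

def in_spectrahedron(x, mats):
    """Test membership of x in T = {x : sum_i x_i A^(i) <= 2*sqrt(nd)*I}."""
    n, d = len(mats), mats[0].shape[0]
    m = sum(xi * a for xi, a in zip(x, mats))
    return np.linalg.eigvalsh(m)[-1] <= 2 * np.sqrt(n * d)  # largest eigenvalue

rng = np.random.default_rng(0)
n, d = 50, 10
mats = [sample_goe(d, rng) for _ in range(n)]
```

Since $0 \preceq 2\sqrt{nd}\cdot\mathbb{I}$, the origin always lies in $\mathcal{T}$, so such a random spectrahedron is never empty.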
(In fact, a random spectrahedron as above satisfies this with constant probability). This lower bound can be contrasted with the $\mathcal{G}SA$ upper bound of Ball~\cite{ball1993reverse} of $O(n^{1/4})$ for \emph{arbitrary} convex bodies. Moreover, the lower bound shows that in contrast to the case of polytopes, the $\mathcal{G}SA$ of spectrahedra can depend polynomially on $d$. A natural open question is how large the $\mathcal{G}SA$ of arbitrary spectrahedra can be; can spectrahedra with small $d$ (say, polynomial in $n$) achieve a $\mathcal{G}SA$ of $\Theta(n^{1/4})$? \section{Preliminaries}\label{sec:pre} For a matrix $A$, $\lambda_{\max}(A)$ is the maximum eigenvalue of $A$. We use $\boldsymbol{g},\boldsymbol{x},\boldsymbol{A}$ to denote random variables. We let $\mathcal{G}(0,\sigma^2)$ be the normal distribution with mean $0$ and variance $\sigma^2$. We denote by $\mathcal{H}_{d}$ the $d\times d$ Gaussian orthogonal ensemble (GOE). Namely, $\boldsymbol{A}\sim\mathcal{H}_{d}$ if it is a symmetric matrix with entries $\set{\boldsymbol{A}_{i,j}}_{i\leq j}$ independently distributed satisfying $\boldsymbol{A}_{i,j}\sim \mathcal{G}(0,1)$ for $i<j$ and $\boldsymbol{A}_{i,i}\sim \mathcal{G}(0,2)$. To keep notations short, for $b\geq 0$ we use $[a\pm b]$ to represent the interval $[a-b, a+b]$. For every $c\geq 0$, we use $c\cdot[a\pm b]$ to represent the interval $[ac\pm bc]$. We denote the set of $n$-dimensional unit vectors by $S^{n-1}$. Finally, we let $\chi_n$ be the $\chi$ distribution with $n$ degrees of freedom, which is the square root of the sum of the squares of $n$ independent standard normal variables. The following are some simple facts about the $\chi$ distribution. \begin{fact} \label{fact:chidistribution} Let $n\in \mathbb{Z}_{>0}$ and $h(\cdot)$ be the pdf of $\chi_n$. Then the following hold. \begin{enumerate} \item $h(x)\geq c$ for $x\in[\sqrt{n}\pm c]$, where $c>0$ is an absolute constant. 
\item $h(x)\leq \sqrt{n}/(\sqrt{\pi}\cdot |x|)$ for $x\in{\mathbb R}$. \end{enumerate} \end{fact} \begin{proof} Recall that by definition $$ h(x)=\frac{1}{2^{\frac{n}{2}-1}\Gamma(\frac{n}{2})}x^{n-1}e^{-x^2/2} $$ for $x\geq 0$, where $\Gamma(\cdot)$ denotes the gamma function, and $h(x)=0$ otherwise. By elementary calculus, $x^{n-1}e^{-x^2/2}$ monotonically increases for $0\leq x< \sqrt{n-1}$ and monotonically decreases for $x>\sqrt{n-1}$. We therefore have \begin{equation}\label{eqn:2} x^{n-1}e^{-x^2/2} \geq \min\set{(\sqrt{n}+c)^{n-1}e^{-(\sqrt{n}+c)^2/2},(\sqrt{n}-c)^{n-1}e^{-(\sqrt{n}-c)^2/2}} \end{equation} for $0<c\leq 1$ and $x\in[\sqrt{n}\pm c]$. Item 1 now follows from Eq.~\eqref{eqn:2} and the fact that $\Gamma(z)\leq\sqrt{2\pi}z^{z-1/2}e^{-z+1/(12z)}$ for all $z>0$~\cite{stirling,jameson_2015}. Item 2 is trivial for $x\leq 0$. For $x>0$, it follows from the inequalities $\Gamma(z)\geq \sqrt{2\pi}z^{z-1/2}e^{-z}$ for all $z>0$~\cite{stirling,jameson_2015} and $x^n e^{-x^2/2}\leq n^{n/2}e^{-n/2}$, which follows from the same argument as~above. \end{proof} \begin{lemma}[{\cite[comment below Lemma 1]{10.1214/aos/1015957395}}]\label{lem:chi2} For $n\geq 1$, let $\boldsymbol{r}$ be a random variable distributed according to $\chi_n$. Then for every $x>0$, we have \[ \Pr\Br{n-2\sqrt{nx}\leq \boldsymbol{r}^2\leq n+2\sqrt{nx}+2x} \geq 1-2e^{-x} \; . \] \end{lemma} For our purposes, it will be convenient to use an alternative definition of Gaussian surface area in terms of the \emph{inner} surface area. Namely, for $S_{\delta}^{\mathrm{in}}=\set{x \in S:\mathrm{dist}(x,S^c)\leq\delta}$, where $S^c$ is the complement of the body $S$, we define \begin{align} \label{eq:innerGSA} \mathcal{G}SA\br{S}=\lim_{\delta\rightarrow 0}\frac{\mathcal{G}^n\br{S^{\text{in}}_{\delta}}}{\delta} \; .
\end{align} It follows from Huang et al.~\cite[Theorem 3.3]{huang2021minkowski} that this definition is equivalent to the one in Eq.~\eqref{eq:gsaoriginaldefinitionwithouter} when $S$ is a convex body that contains the origin, which is sufficient for our purposes. To prove our main theorem, we use the following facts, starting with a well known bound on the size of an $\varepsilon$-net of the $n$-dimensional sphere. \begin{fact}[{\label{fac:epsnet}\cite[Lemma 2.3.4]{tao2012topics}}] For every $d \ge 1$ and any $0<\varepsilon<1/2$ there exists an $\varepsilon$-net of the sphere $S^{d-1}$ of cardinality at most $(3/\varepsilon)^d$. \end{fact} The following claim gives a formula for the pdf of the product of two real-valued random variables. \begin{claim}[{\cite[Page 134, Theorem 3]{rohatgi_saleh_2015}}]\label{clm:productpdf} Let $\boldsymbol{x},\boldsymbol{y}$ be two real-valued random variables and $f$ be the pdf of $(\boldsymbol{x},\boldsymbol{y})$. Then the pdf of $\boldsymbol{z}=\boldsymbol{x}\cdot\boldsymbol{y}$ is given by \[ g\br{z}=\int_{-\infty}^{\infty}f\br{x,\frac{z}{x}}\cdot \frac{1}{|x|}dx. \] \end{claim} \begin{theorem}[{\cite[Theorem 1]{10.1214/EJP.v15-798}}] \label{thm:tracy} Let $\boldsymbol{A}\sim\mathcal{H}_d$. For every $0<\eta<1$, it holds that \[ \Pr \Big[{\lambda_{\max}\br{\boldsymbol{A}}\in 2\sqrt{d} \Br{1\pm\eta}}\Big]\geq 1- C\cdot e^{-d\eta^{3/2}/C}, \] for some absolute constant $C>0$. \end{theorem} \section{Proof of main theorem} The core of the argument is in the following lemma, bounding $q(2 \sqrt{nd})$ where $q$ is the pdf of the largest eigenvalue of the matrix showing up in Eq.~\eqref{eq:main theorem spectrahedron}. We will later show that this value is essentially the same as $\mathcal{G}SA(\mathcal{T})$, where $\mathcal{T}$ is the spectrahedron in the statement of the theorem. 
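Item 2 of Fact~\ref{fact:chidistribution}, which drives the upper bound in the lemma below, can also be sanity-checked numerically. The following check is our addition (illustrative only, standard library only), evaluating the $\chi_n$ density directly from its definition:

```python
import math

# Numerical sanity check (ours, not from the paper) of Item 2 of the fact
# about the chi distribution: h(x) <= sqrt(n) / (sqrt(pi) * x) for x > 0.

def chi_pdf(x, n):
    """Density of the chi distribution with n degrees of freedom."""
    if x <= 0:
        return 0.0
    return x ** (n - 1) * math.exp(-x * x / 2) / (2 ** (n / 2 - 1) * math.gamma(n / 2))

def bound_holds(n, xs):
    """Check h(x) <= sqrt(n) / (sqrt(pi) * x) on a grid of positive x."""
    return all(chi_pdf(x, n) <= math.sqrt(n) / (math.sqrt(math.pi) * x) + 1e-12
               for x in xs)

grid = [0.1 * k for k in range(1, 101)]  # x in (0, 10]
ok = all(bound_holds(n, grid) for n in (1, 2, 5, 9, 20))
```

As the proof suggests, the bound is nearly tight around $x=\sqrt{n}$: for $n=9$, $h(3)\approx 0.554$ against the bound $\approx 0.564$.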
\begin{lemma} \label{lem:main} For $n,d\geq 1$ and $A^{(1)},\ldots, A^{(n)} \in \mathrm{Sym}_d$, let $q(\cdot)$ be the probability density function of \[ \lambda_{\max}\br{ \sum_i\boldsymbol{x}_i A^{(i)}} \; , \] where $\boldsymbol{x}=(\boldsymbol{x}_1,\ldots,\boldsymbol{x}_n)$ is a random vector and each entry is i.i.d.\ drawn from $\mathcal{G}(0,1)$. If $\boldsymbol{A}^{(1)},\ldots,\boldsymbol{A}^{(n)}$ are i.i.d.~drawn from the $d\times d$ Gaussian orthogonal ensemble, then $q(2 \sqrt{nd})\geq~c\cdot\sqrt{1/d}$ with probability at least $1-C \exp(-d n^{-3/4} / C)$ (over the choice of $\boldsymbol{A}^{(1)},\ldots,\boldsymbol{A}^{(n)}$) where $c, C>0$ are universal constants. Moreover, for any integer $d$ and any $d\times d$ matrices $A^{(1)},\ldots,A^{(n)}$, $q(2 \sqrt{nd}) \leq 1/(2\sqrt{\pi d})$. \end{lemma} \begin{proof} Let $\boldsymbol{y}\sim S^{n-1}$ be chosen uniformly from the unit sphere and for matrices $A^{(1)},\ldots, A^{(n)}$, denote by $p$ the pdf of $\lambda_{\max}\br{\sum_i\boldsymbol{y}_i A^{(i)}}$. Let $\boldsymbol{r}\sim \chi_n$ and notice that $\boldsymbol{r} \boldsymbol{y}$ is distributed like $\boldsymbol{x}$ (since both are spherically symmetric and by definition, have equally distributed norms). Denote by $h$ the pdf of $\boldsymbol{r}$. By Claim~\ref{clm:productpdf}, we have \begin{eqnarray} q\br{2 \sqrt{ nd} } = \int_{-\infty}^{\infty}h\br{2\sqrt{nd}/z}p\br{z}\frac{1}{|z|}dz \; . \label{eq:equivq} \end{eqnarray} Using Item 2 of Fact~\ref{fact:chidistribution}, $h(2\sqrt{nd}/z)/|z| \leq 1/(2\sqrt{\pi d})$ for all $z$. Hence Eq.~\eqref{eq:equivq} can be bounded as $(1/(2\sqrt{\pi d}))\cdot \int_{-\infty}^{\infty}p\br{z}dz = 1/(2\sqrt{\pi d})$, establishing the claimed upper bound on $q$. To prove the lower bound on $q$, let $\boldsymbol{A}^{(1)},\ldots,\boldsymbol{A}^{(n)}\sim \mathcal{H}_d$ be $n$ matrices chosen i.i.d.\ from the Gaussian orthogonal ensemble. 
Observe that by Theorem~\ref{thm:tracy}, we have \begin{align} \label{eq:rangeoflambdamax} \Pr\Big[\lambda_{\max}\br{\sum_{i=1}^n \boldsymbol{y}_i \boldsymbol{A}^{(i)}} \in I\Big]\geq 1- C \exp(-d n^{-3/4} / C) \; , \end{align} where \[ I=2\sqrt{d}\cdot[1\pm c/\sqrt{n}] \; , \] for some universal constants $C,c>0$. Define the set of~matrices \begin{equation*}\label{eqn:G} G=\set{\br{A^{(1)},\ldots, A^{(n)}}:\Pr\Big[\lambda_{\max}\br{\sum_{i=1}^n \boldsymbol{y}_i A^{(i)}}\in I\Big]\geq \frac{1}{2}}. \end{equation*} Then, using the definition of $G$ and Eq.~\eqref{eq:rangeoflambdamax}, we have \[ \Pr\Big[ \br{\boldsymbol{A}^{(1)},\ldots,\boldsymbol{A}^{(n)}}\in G \Big] \geq 1-2 C \exp(-d n^{-3/4} / C) \; . \] Now fix any $(A^{(1)},\ldots, A^{(n)})\in G$. By definition of $G$, $\int_I p\br{z} dz \geq 1/2$, and therefore the right-hand side of Eq.~\eqref{eq:equivq} is at least \begin{eqnarray} \int_{I}h\br{2\sqrt{nd}/z}p\br{z}\frac{1}{z}dz \geq c \cdot\int_{I}p\br{z}\frac{1}{z}dz \geq \frac{c}{2\sqrt{d}(1+c/\sqrt{n})}\cdot\int_Ip(z)dz\geq \frac{c}{5\sqrt{d}} \; , \label{eqn:1} \end{eqnarray} for some absolute constant $c>0$, where we used Item 1 of Fact~\ref{fact:chidistribution} to conclude that $h(2\sqrt{nd}/z)\geq c$ for all $z \in I$. \end{proof} We next relate $q(2 \sqrt{nd})$ to $\mathcal{G}SA(\mathcal{T})$. For a vector $v\in S^{d-1}$, and $d \times d$ symmetric matrices $A^{(1)},\ldots,A^{(n)}$, define the vector \begin{equation}\label{eqn:wv} W_v=\big(v^T A^{(1)} v,v^T A^{(2)} v,\ldots,v^T A^{(n)} v\big) \in {\mathbb{R}}^n \; . \end{equation} Notice that $\mathcal{T}$ can be written as \[ \mathcal{T}= \Big\{x\in {\mathbb{R}}^n:\sum_i x_i A^{(i)}\preceq 2 \sqrt{nd}\cdot{\mathbb{I}} \Big\} = \Big\{x\in {\mathbb{R}}^n: \forall v \in S^{d-1},~ \langle x, W_v \rangle \leq 2 \sqrt{nd} \Big\} \; . \] We say that $A^{(1)},\ldots,A^{(n)}$ are \emph{good} if \[ \forall v\in S^{d-1},~\frac{1}{2}\sqrt{n} \leq \norm{W_v} \leq 2\sqrt{n}\; . 
\] \begin{lemma} \label{lem:technicallemma} There exists a constant $C\geq 1$ such that for all integers $n$ and $d\leq n/C$, random matrices $\boldsymbol{A}^{(1)},\ldots,\boldsymbol{A}^{(n)}$ drawn i.i.d.\ from $\mathcal{H}_d$ are good with probability at least $1-\exp\br{-n/50}$. \end{lemma} \begin{proof} For a fixed $v\in S^{d-1}$, we claim that \begin{align} \label{eq:part1bound} \Pr [{n\leq \norm{\boldsymbol{W}_v}^2\leq3n}]\geq 1-2\exp\br{-n/40} \; . \end{align} To see this, observe that by definition of the Gaussian orthogonal ensemble, for $\boldsymbol{A}\sim\mathcal{H}_d$ and unit vector $v\in{\mathbb R}^d$, $v^T\boldsymbol{A} v = \sum_{i,j} v_i v_j \boldsymbol{A}_{i,j}$ is distributed according to $$ \br{4\sum_{i<j}v_i^2v_j^2+2\sum_iv_i^4}^{1/2}\cdot \mathcal{G}(0,1)=\sqrt{2}\cdot \mathcal{G}(0,1). $$ Therefore, each entry in $\boldsymbol{W}_v$ is distributed according to $\mathcal{G}(0,2)$, and Lemma~\ref{lem:chi2} implies Eq.~\eqref{eq:part1bound}. We next prove that with high probability (over the $\boldsymbol{A}^{(i)}$s), for \emph{every} unit vector $z$, $\|\boldsymbol{W}_z\|$ is large. First, by Fact~\ref{fac:epsnet}, there exists a set ${\mathcal{V}}=\{v_1,\ldots,v_{10^{5d}}\}\subseteq {\mathbb{R}}^d$ of unit vectors that form a $10^{-4}$-net of the unit Euclidean sphere. Applying a union bound over ${\mathcal{V}}$, we have \begin{equation}\label{eqn:wvwv} \Pr [{\forall v\in{\mathcal{V}}:n\leq \norm{\boldsymbol{W}_v}^2\leq 3n}]\geq 1-2\exp\br{-n/40}\cdot 10^{5d}\geq 1-\exp\br{-n/50} \; , \end{equation} where we used that $d\leq n/C$ for a sufficiently large $C$. To conclude the proof, it suffices to show that if $\boldsymbol{A}^{(1)},\ldots,\boldsymbol{A}^{(n)}$ are such that \[ \forall v\in{\mathcal{V}}, ~n \leq \norm{\boldsymbol{W}_v}^2 \leq 3n \; , \] then also \[ \forall z\in S^{d-1},~\norm{ \boldsymbol{W}_z } \geq\frac{1}{2}\sqrt{n} \; .
\] Let $\boldsymbol{b}_{\max}=\max_{z\in S^{d-1}}\norm{\boldsymbol{W}_z}$ and $\boldsymbol{b}_{\min}=\min_{z\in S^{d-1}}\norm{\boldsymbol{W}_z}$. Let $\boldsymbol{z}_{\max}$ and $\boldsymbol{z}_{\min}$ be the vectors achieving the maximum and the minimum respectively. Let $\boldsymbol{v}_{\max}$ and $\boldsymbol{v}_{\min}$ be the vectors in ${\mathcal{V}}$ that are closest to $\boldsymbol{z}_{\max}$ and $\boldsymbol{z}_{\min}$, respectively. For any vectors $z,v\in S^{d-1}$ with $\norm{z-v}\leq 10^{-4}$, applying the spectral decomposition of $zz^T-vv^T$, there exist unit vectors $u_1,u_2$ and $0\leq\lambda\leq\frac{1}{100}$ such that \begin{equation}\label{eqn:vz} zz^T-vv^T=\lambda\cdot \br{u_1u^T_1-u_2u_2^T} \; . \end{equation} Hence \begin{align*} \norm{\boldsymbol{W}_{z}-\boldsymbol{W}_{v}}^2 = \sum_{i=1}^n\br{z^T\boldsymbol{A}^{(i)}z-v^T \boldsymbol{A}^{(i)}v}^2 &=\sum_{i=1}^n\br{\mbox{\rm Tr} \br{\boldsymbol{A}^{(i)}\br{zz^T-vv^T}}}^2 \\ &\leq\frac{1}{10^4}\sum_{i=1}^n\br{u_1^T \boldsymbol{A}^{(i)}u_1-u_2^T\boldsymbol{A}^{(i)}u_2}^2\\ &\leq \frac{1}{5000}\sum_{i=1}^n\br{\br{u_1^T \boldsymbol{A}^{(i)}u_1}^2+\br{u_2^T\boldsymbol{A}^{(i)}u_2}^2}\\ &\leq\frac{\boldsymbol{b}_{\max}^2}{2500} \; . \end{align*} Choosing $z=\boldsymbol{z}_{\max}$ and $v=\boldsymbol{v}_{\max}$, we have \[ \norm{\boldsymbol{W}_{\boldsymbol{z}_{\max}}}\leq\norm{\boldsymbol{W}_{\boldsymbol{v}_{\max}}}+\frac{\boldsymbol{b}_{\max}}{50} \; . \] Now, since $\norm{\boldsymbol{W}_{\boldsymbol{z}_{\max}}}=\boldsymbol{b}_{\max}$, we have \[ \boldsymbol{b}_{\max} \leq \frac{50}{49}\norm{\boldsymbol{W}_{\boldsymbol{v}_{\max}}} \leq \frac{50}{49} \sqrt{3n} \le 2 \sqrt{n} \; . \] Similarly, we set $z=\boldsymbol{z}_{\min}$ and $v=\boldsymbol{v}_{\min}$ and obtain \[ \boldsymbol{b}_{\min}\geq\norm{\boldsymbol{W}_{\boldsymbol{v}_{\min}}}-\frac{\boldsymbol{b}_{\max}}{50} \geq \sqrt{n} - \frac{1}{25} \sqrt{n} > \frac{1}{2} \sqrt{n} \; . \] This concludes the result. 
\end{proof} For the following claim, we define the inner and outer shells of $\mathcal{T}$ as \begin{align*} \mathcal{D}^{\mathrm{in}}_{\delta} &= \Big\{x:\lambda_{\max}\Big(\sum_ix_iA^{(i)}\Big)\in\sqrt{n}\cdot \Br{2\sqrt{d}-\delta,2\sqrt{d}} \Big\} \; , \\ \mathcal{D}^{\mathrm{out}}_{\delta} &= \Big\{x:\lambda_{\max}\Big(\sum_ix_iA^{(i)}\Big)\in\sqrt{n}\cdot \Br{2\sqrt{d},2\sqrt{d}+\delta} \Big\} \; . \end{align*} Also recall the inner and outer neighborhoods of $\mathcal{T}$, defined as \begin{align*} \mathcal{T}_{\delta}^{\mathrm{in}} &=\set{x\in\mathcal{T}:\exists y\notin\mathcal{T}:\norm{x-y} \leq \delta} \; , \\ \mathcal{T}_{\delta}^{\mathrm{out}} &=\set{x\notin\mathcal{T}:\exists y\in\mathcal{T}:\norm{x-y}\leq \delta} \; . \end{align*} \begin{claim} \label{claim:DdeltaTdelta} For sufficiently small $\delta>0$ and any good $A^{(1)},\ldots,A^{(n)}$, we have $\mathcal{D}^{\mathrm{in}}_{\delta}\subseteq\mathcal{T}_{4\delta}^{\mathrm{in}}$ and $\mathcal{T}_{\delta}^{\mathrm{out}} \subseteq \mathcal{D}^{\mathrm{out}}_{2\delta}$. \end{claim} \begin{proof} For every $x\in \mathcal{D}^{\mathrm{in}}_{\delta}$, let $v$ be a unit eigenvector of $\sum_i x_iA^{(i)}$ with the eigenvalue $\lambda_{\max}(\sum_i x_iA^{\br{i}})$. Therefore, \[ \langle x, W_v \rangle = v^T \Big(\sum x_i A^{(i)}\Big) v \geq(2\sqrt{d}-\delta)\sqrt{n} \; . \] Setting $y=2\delta\sqrt{n}W_v / \norm{W_v}^2$, we have \begin{align*} \langle x+y, W_v \rangle &= \langle x,W_v \rangle + 2\delta\sqrt{n} \geq \br{2\sqrt{d}-\delta}\sqrt{n}+2\delta\sqrt{n} =\br{2\sqrt{d}+\delta}\sqrt{n} \; , \end{align*} and so $x+y\notin\mathcal{T}$. Moreover, since $A^{(1)},\ldots,A^{(n)}$ are good, $ \|y\| = 2\delta\sqrt{n}/\norm{W_v} \leq 4\delta $ and therefore $x \in \mathcal{T}_{4\delta}^{\mathrm{in}}$, as desired. For the other containment, let $x\in \mathcal{T}_{\delta}^{\mathrm{out}}$. 
Then for any unit vector $v$, by Cauchy-Schwarz and using $\|W_v\| \le 2 \sqrt{n}$, \[ \langle x, W_v \rangle \le 2 \sqrt{nd} + 2 \delta \sqrt{n} \; , \] implying that $x \in \mathcal{D}^{\mathrm{out}}_{2\delta}$, as desired. \end{proof} We now prove our main theorem. \begin{proof}[Proof of Theorem~\ref{thm:gsa}] By Lemmas~\ref{lem:main} and~\ref{lem:technicallemma}, if $\boldsymbol{A}^{(1)},\ldots,\boldsymbol{A}^{(n)}$ are i.i.d.~drawn from the $d\times d$ Gaussian orthogonal ensemble, then with probability at least $1-C \exp(-d n^{-3/4} / C)$, we have that $q(2 \sqrt{nd})\geq~c\cdot\sqrt{1/d}$ (where $q(\cdot)$ is as defined in Lemma~\ref{lem:main}) and that $\boldsymbol{A}^{(1)},\ldots,\boldsymbol{A}^{(n)}$ are good, where $c,C>0$ are some constants. Since $q(\cdot)$ is continuous, the former implies that $\mathcal{G}^n({\mathcal{D}}^{\mathrm{in}}_{\delta})\geq c\delta \sqrt{n/(2d)}$ for sufficiently small $\delta>0$. Thus, $\mathcal{G}^n(\mathcal{T}_{4\delta}^{\mathrm{in}}) \ge c\delta \sqrt{n/(2d)}$ by Claim~\ref{claim:DdeltaTdelta}. By definition of $\mathcal{G}SA(S)=\lim_{\delta\rightarrow 0}\mathcal{G}^n(S^{\mathrm{in}}_{\delta})/\delta$, we obtain the desired lower bound on $\mathcal{G}SA(\mathcal{T})$. Similarly, by Lemmas~\ref{lem:main} and~\ref{lem:technicallemma}, if $\boldsymbol{A}^{(1)},\ldots,\boldsymbol{A}^{(n)}$ are i.i.d.~drawn from the $d\times d$ Gaussian orthogonal ensemble, then with probability at least $1-\exp\br{-n/50}$, $\mathcal{G}^n({\mathcal{D}}^{\mathrm{out}}_{\delta}) \le \delta \sqrt{n}/(\sqrt{\pi d})$ for sufficiently small $\delta>0$. Thus, $\mathcal{G}^n(\mathcal{T}^\text{out}_{\delta/2}) \le \delta \sqrt{n}/(\sqrt{\pi d})$ by Claim~\ref{claim:DdeltaTdelta}. We complete the proof using $\mathcal{G}SA(S)=\lim_{\delta\rightarrow 0}{\mathcal{G}^n(S^{\text{out}}_{\delta}})/\delta$. \end{proof} \paragraph{Acknowledgements.} We thank Daniel Kane, Assaf Naor, Fedor Nazarov, and Yiming Zhao for useful correspondence. O.R. 
is supported by the Simons Collaboration on Algorithms and Geometry, a Simons Investigator Award, and by the National Science Foundation (NSF) under Grant No.~CCF-1814524. P.Y. is supported by the National Key R\&D Program of China 2018YFB1003202, National Natural Science Foundation of China (Grant No. 61972191), the Program for Innovative Talents and Entrepreneur in Jiangsu and Anhui Initiative in Quantum Information Technologies Grant No.~AHY150100. \end{document}
\begin{document} \begin{abstract} We prove the existence of equilibrium states for partially hyperbolic endomorphisms with one-dimensional center bundle. We also prove, regarding a class of potentials, the uniqueness of such measures for endomorphisms defined on the $2$-torus that: have a linear model as a factor; and with the condition that this measure gives zero weight to the set where the conjugacy with the linear model fails to be invertible. In particular, we obtain the uniqueness of the measure of maximal entropy. For the $n$-torus, the uniqueness in the case with one-dimensional center holds for absolutely partially hyperbolic maps with additional hypotheses on the invariant leaves, namely, dynamical coherence and quasi-isometry. \end{abstract} \maketitle \section{Introduction} Ergodic theory is the branch of mathematics that studies dynamical systems equipped with an invariant probability measure. This theory was motivated by statistical mechanics, aiming, for instance, to understand and solve problems connected with the kinetic theory of gases, such as whether a Hamiltonian system is ergodic or not. A system is called \textit{ergodic} when it is dynamically indivisible from a measure theoretical point of view, meaning that each invariant set has either zero or total measure. There are two kinds of entropy used to describe the complexity of a dynamical system: metric entropy and topological entropy. The \textit{metric entropy} provides the maximum amount of average information one can obtain from a system with respect to an invariant measure, and the \textit{topological entropy} describes the exponential growth rate of the number of orbits. The variational principle establishes that the topological entropy coincides with the supremum of the metric entropy over all invariant measures. 
A \textit{measure of maximal entropy} is an invariant measure with its metric entropy coinciding with the topological entropy of the system, in other words, it is a measure in which the supremum is achieved. Such measures ``perceive'' the whole complexity of the system. A generalization of topological entropy is the \textit{pressure} of the map $f: X \to X$ with respect to a continuous potential $\phi: X \to \mathbb{R}$, which is a ``weighted entropy'' according to $\phi$. This quantity also obeys a variational principle, stating that the pressure coincides with the supremum of the metric entropy plus $\int \phi d\mu$ over all invariant measures $\mu$. An invariant measure for $f$ is called an \textit{equilibrium state} if it achieves this supremum. This measure maximizes the free energy of the system. In particular, an equilibrium state with respect to the potential $\phi \equiv 0$ is called \textit{measure of maximal entropy.} Describing the existence and uniqueness/finiteness of equilibrium states is an active research topic in dynamical systems. A particular case of interest is when a system has a unique measure of maximal entropy, and we say that it is \textit{intrinsically ergodic}. Uniformly hyperbolic diffeomorphisms are intrinsically ergodic \cite{bowen,margulis}, while partially hyperbolic diffeomorphisms can be intrinsically ergodic \cite{buzzifishersambarinovasquez2012} or not, as illustrated by a simple example. Indeed, consider $f = A \times Id: \mathbb{T}^n \times \mathbb{S}^1 \to \mathbb{T}^n \times \mathbb{S}^1$, where $A$ is an Anosov toral automorphism. Then $A$ has only one measure of maximal entropy, the volume $m$ on $\mathbb{T}^n$, and $m \times \mu$ has the same metric entropy as $m$ for any probability measure $\mu$ on $\mathbb{S}^1$. This product measure is $f$-invariant and the isometric factor on $\mathbb{S}^1$ does not contribute to the topological entropy, so any such product is a measure of maximal entropy for $f$. 
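The entropy bookkeeping behind this example can be made explicit (a standard computation we spell out for clarity): for any probability measure $\mu$ on $\mathbb{S}^1$,
\begin{align*}
h_{m\times\mu}(A\times Id) = h_m(A)+h_\mu(Id) = h_m(A)+0 = h_{\mathrm{top}}(A) = h_{\mathrm{top}}(A\times Id),
\end{align*}
where the first equality is the product formula for the metric entropy of a product map with respect to a product measure, and the last one holds because the isometric factor $Id$ contributes zero topological entropy.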
Although the above example works for $A$ an Anosov toral endomorphism, for non-invertible systems with some kind of hyperbolicity this problem has been much less explored in its generality. A \textit{uniformly hyperbolic endomorphism} (also called \textit{Anosov endomorphism}) is a generalization of the concepts of uniformly hyperbolic diffeomorphism and expanding map, being hyperbolic but not necessarily invertible. Anosov endomorphisms in general are not structurally stable, since two nearby maps may have different numbers of unstable manifolds through corresponding points, which obstructs the existence of a conjugacy \cite{mane1975stability, przytycki1976anosov}. Nonetheless, their dynamics can be studied in spaces of orbits where a kind of stability holds, and it can be shown in this way that they are intrinsically ergodic; see Section \ref{sec:preli} for more details. Partially hyperbolic endomorphisms are the non-invertible generalization of partial hyperbolicity, as we define in Section \ref{sec:preli}. There are examples of partially hyperbolic endomorphisms that are not intrinsically ergodic on the product $\mathbb{T}^n \times I$, having an expanding map as a factor on $\mathbb{T}^n$ \cite{kan1994, nunezramirezvasquez}. The example $f \times Id: \mathbb{T}^n \times \mathbb{S}^1 \to \mathbb{T}^n \times \mathbb{S}^1$, with $f$ an Anosov endomorphism, has a non-hyperbolic linearization and infinitely many measures of maximal entropy. But the intrinsic ergodicity of partially hyperbolic endomorphisms whose hyperbolic linearization presents contracting directions has not yet been explored, to the best of our knowledge. Our first effort is to guarantee a general existence result for equilibrium states in the case that the central direction is one-dimensional, analogous to the result by W. Cowieson and L.-S. Young in \cite{cowiesonyoung} for the invertible case.
\begin{manualtheorem}{A} \label{teoA} For $M$ a closed Riemannian manifold, if $f: M \to M$ is a $C^1$ partially hyperbolic endomorphism with one-dimensional center bundle, then there is an equilibrium state for $(f, \phi)$, for any continuous potential $\phi: M \to \mathbb{R}$. \end{manualtheorem} Observe that in the above result we do not require $f$ to be absolutely partially hyperbolic (see Definition \ref{def:abs-ph}), since we do not use any finer estimate of the derivatives. Nor do we require $f$ to be dynamically coherent (see Definition \ref{def:dyn-cohe}), because the central direction is one dimensional, which implies that the central bundle (we can define a bundle over connected components of the natural extension space, see Subsection \ref{sebsec:inverse-limit}) is integrable, even if it is not necessarily uniquely integrable. We then explore a case in the torus for which uniqueness of the measure of maximal entropy can be established, with an approach inspired by \cite{buzzifishersambarinovasquez2012, ures2012intrinsic}. \begin{manualtheorem}{B} \label{teoB} Let $f: \mathbb{T}^n \to \mathbb{T}^n$ be an absolutely partially hyperbolic endomorphism that is dynamically coherent, with $\dim (E^c) = 1$, and with $W^\sigma_F$ (for $\sigma \in \{u,c,s\}$) quasi-isometric, where $F: \mathbb{R}^n \to \mathbb{R}^n$ is a lift of $f$. If $A = f_*$ is hyperbolic and $A$ is a factor of $f$, then $f$ has a unique measure of maximal entropy $\mu$. Moreover, $(f, \mu)$ and $(A, m)$ are ergodically equivalent. \end{manualtheorem} In the invertible setting, the work of M. Brin \cite{brin2003dynamical} gives us that the quasi-isometry of strong foliations guarantees dynamical coherence for absolutely partially hyperbolic diffeomorphisms. Even though this remains true in our non-invertible context, we still need absolute partial hyperbolicity in the proof of Lemma \ref{lem:lemma1}. L. Hall and A.
Hammerlindl have proved that partially hyperbolic surface endomorphisms with hyperbolic linearization are dynamically coherent and their foliations by unstable and central leaves in the universal cover are quasi-isometric \cite{hall2021partially}. Then, Theorem \ref{teoB} has a simpler formulation on $\mathbb{T}^2$. Observe that in this case the hypothesis on absolute partial hyperbolicity can be dropped, since Lemma \ref{lem:lemma1} is trivially valid for $A: \mathbb{T}^2 \to \mathbb{T}^2$. \begin{manualtheorem}{C} \label{teoC} Let $f: \mathbb{T}^2 \to \mathbb{T}^2$ be a partially hyperbolic endomorphism with hyperbolic linearization $A$. If $A$ is a factor of $f$, then $f$ has a unique measure of maximal entropy $\mu$, and $(f, \mu)$ is ergodically equivalent to $(A, m)$. \end{manualtheorem} Let $f: \mathbb{T}^n \to \mathbb{T}^n$ be a partially hyperbolic endomorphism with hyperbolic linearization $A$. Then a lift $F: \mathbb{R}^n \to \mathbb{R}^n$ of $f$ to the universal cover has $A: \mathbb{R}^n \to \mathbb{R}^n$ as a factor --- where $A$ is the unique linear map that projects to $A: \mathbb{T}^n \to \mathbb{T}^n$, and we use the same notation for both maps. In other words, there is a continuous and surjective map $H: \mathbb{R}^n \to \mathbb{R}^n$ such that $H \circ F = A \circ H$. It remains to explore conditions for this semiconjugacy to descend to $\mathbb{T}^n$. In the uniformly hyperbolic case, $H$ is a conjugacy, and it projects to $\mathbb{T}^n$ if and only if $f$ has a unique unstable manifold for each point \cite{sumi1994linearization}. But there is no similar result for the partially hyperbolic case. A more general result can be obtained in Theorem \ref{teoD} for a specific class of potentials, namely those of the form $\phi \circ h$, with $\phi: \mathbb{T}^n \to \mathbb{R}$ continuous and $h: \mathbb{T}^n \to \mathbb{T}^n$ being the semiconjugacy that makes $A$ a factor of $f$.
For this, we require that the corresponding equilibrium state $\mu$ for $(A, \phi)$ gives measure $0$ to $\{ z \in \mathbb{T}^n \; : \; \# h^{-1}(z) > 1 \}$, as in \cite[Theorem A]{crisostomo-tahzibi2019}. \begin{manualtheorem}{D} \label{teoD} Let $f: \mathbb{T}^n \to \mathbb{T}^n$ be an absolutely partially hyperbolic endomorphism that is dynamically coherent, with $\dim (E^c) = 1$, and with $W^\sigma_F$ (for $\sigma \in \{u,c,s\}$) quasi-isometric, where $F: \mathbb{R}^n \to \mathbb{R}^n$ is a lift of $f$. Consider a continuous potential $\phi: \mathbb{T}^n \to \mathbb{R}$ and $\mu$ the unique equilibrium state for $(A, \phi)$. If $A = f_*$ is hyperbolic, $A$ is a factor of $f$ with semiconjugacy $h$, and $$\mu\{ z \in \mathbb{T}^n \; : \; \# h^{-1}(z) > 1 \} = 0,$$ then $f$ has a unique equilibrium state $\hat{\mu}$ for $(f, \phi \circ h)$. Moreover, $(f, \hat{\mu})$ and $(A, \mu)$ are ergodically equivalent. \end{manualtheorem} As before, the above result holds for partially hyperbolic endomorphisms with hyperbolic linearization in $\mathbb{T}^2$, without requiring the additional hypotheses that we use in the higher-dimensional setting. The proof of Theorem \ref{teoD} runs exactly as the one for Theorem \ref{teoB} after Lemma \ref{lem:lemma3}, which is replaced by the hypothesis on $\{ z \in \mathbb{T}^n \; : \; \# h^{-1}(z) > 1 \}$. Further results can be explored by investigating conditions under which we can obtain dynamical coherence and quasi-isometry of foliations for general partially hyperbolic endomorphisms. Another possibility would be to consider the case in which $\dim (E^c) > 1$. \subsection*{Organization of the paper} Section \ref{sec:preli} is dedicated to summarizing concepts that give context and tools for our results. In Section \ref{sec:teoA} we give the proof of the existence of equilibrium states by proving that, under the hypotheses of Theorem \ref{teoA}, the natural extension of $f$ is $h$-expansive.
We address the proof of uniqueness in Section \ref{sec:teoB}. \section{Preliminary concepts} \label{sec:preli} In this section we introduce some necessary concepts and results for our proofs. Here $M$ is a closed smooth manifold (compact, connected and without boundary), and $(X,d)$ is a compact metric space. \subsection{Equilibrium states} We now review a few facts on entropy and equilibrium states. We start this section with an alternative definition of metric entropy. Let $f:X\to X$ be a continuous map and $\mu$ an ergodic $f$-invariant probability measure. We define the $d_{n}$ metric over $X$ as $$d_{n}(x,y):=\displaystyle\max_{0\leq i\leq n-1}d(f^{i}(x), f^{i}(y)).$$ For $\delta\in (0,1)$, $n\in \mathbb{N}$ and $\epsilon>0$, a finite set $E\subset X$ is called $(n,\epsilon,\delta)$-\textit{spanning} if the union of the $\epsilon$-balls $B^{n}_{\epsilon}(x)=\{y\in X: d_{n}(x,y)<\epsilon\}$, centered at points $x\in E$, has $\mu$-measure greater than $1-\delta$. The \textit{metric entropy} is defined by $$h_{\mu}(f)=\displaystyle\lim_{\epsilon \rightarrow 0}\displaystyle\limsup_{n\rightarrow \infty}\frac{1}{n}\log \left( \min\{\# E: E\subseteq X \ {\rm is \ }(n,\epsilon,\delta)-\rm spanning \} \right).$$ Let $K\subset X$ be a non-empty compact set. A set $E\subseteq K$ is said to be $(n,\epsilon)$-\textit{spanning} if $K$ is covered by the union of the balls $B^{n}_{\epsilon}(x)$ centered at points $x$ of $E$. The \textit{topological entropy} of $f$ on $K$ is defined by $$h(f,K)=\displaystyle\lim_{\epsilon \rightarrow 0}\displaystyle\limsup_{n\rightarrow \infty}\frac{1}{n}\log \left( \min\{\# E: E\subseteq K \ \mbox{\rm{is}} \ (n,\epsilon)-\mbox{\rm{spanning}} \}\right).$$ We denote $h_{top}(f):=h(f,X).$ The classical definition of metric entropy can be seen, for instance, in \cite[Chapter 9]{viana_oliveira_2016}, and the above characterization is valid for ergodic measures on compact metric spaces.
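As a toy illustration of the spanning-set definition above (a sketch of our own, under the simplifying assumption $\epsilon < 1/2$, and not used elsewhere), consider the doubling map $f(x) = 2x \bmod 1$ on the circle. A $d_n$-ball of radius $\epsilon$ around $x$ is then the interval $|y - x| < \epsilon/2^{n-1}$, so the minimal cardinality of an $(n,\epsilon)$-spanning set has a closed form, and its exponential growth rate recovers $h_{top}(f) = \log 2$:

```python
import math

# Doubling map f(x) = 2x mod 1 on the circle (assumption: eps < 1/2, so the
# d_n-ball of radius eps around x is the interval |y - x| < eps / 2^(n-1)).

def min_spanning(n, eps):
    """Minimal size of an (n, eps)-spanning set for the doubling map."""
    radius = eps / 2 ** (n - 1)           # radius of a Bowen ball
    return math.ceil(1.0 / (2 * radius))  # arcs of length 2*radius cover S^1

def entropy_estimate(n, eps):
    return math.log(min_spanning(n, eps)) / n

for n in (10, 20, 40):
    print(n, entropy_estimate(n, eps=0.125))  # tends to log 2 ~ 0.6931 as n grows
```

The estimate converges slowly in $n$ for a fixed $\epsilon$ (the $\log(1/\epsilon)$ term is divided by $n$), matching the order of limits in the definition.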
Dinaburg's variational principle establishes a relation between metric entropy and topological entropy. It states that $$\sup \{h_{\mu}(f):\mu\in \mathcal{M}(X,f)\}=\sup \{h_{\mu}(f):\mu\in \mathcal{M}_{e}(X,f)\}=h_{top}(f),$$ where $\mathcal{M}(X,f)$ denotes the set of $f$-invariant Borel probability measures on $X$ and $\mathcal{M}_{e}(X,f)$ denotes the set of ergodic measures on $X$. \begin{definition} A \textit{measure of maximal entropy} is a probability measure $\mu\in \mathcal{M}(X,f)$ such that $h_{\mu}(f)=h_{top}(f).$ \end{definition} A generalization of the concept of entropy is given by \textit{(topological) pressure}. We define the \textit{pressure} of $f$ with respect to the continuous \textit{potential} $\phi: X \to \mathbb{R}$ by $$P(f,\phi)=\displaystyle\lim_{\epsilon \rightarrow 0}\displaystyle\limsup_{n\rightarrow \infty}\frac{1}{n}\log \left( \inf \left\{\sum_{x \in E}e^{\phi_n(x)}: E\subseteq X \ \mbox{\rm{is}} \ (n,\epsilon)-\mbox{\rm{spanning}} \right\}\right),$$ where $\phi_n(x) = \sum_{i = 0}^{n-1} \phi \circ f^i(x)$. Walters' variational principle then gives us that $$P(f,\phi) = \sup_{\mu \in \mathcal{M}(X,f)} \left\{h_{\mu}(f) + \int_X \phi d\mu \right\}.$$ \begin{definition} An \textit{equilibrium state} for the pair $(f, \phi)$ is a probability measure $\mu\in \mathcal{M}(X,f)$ such that $P(f,\phi) = h_{\mu}(f) + \int_X \phi d\mu.$ \end{definition} In general, establishing the existence of equilibrium states is a non-trivial task, but in certain contexts there are results that make it easier. For instance, when the pressure function $P_\phi:\mathcal{M}(X,f)\to \mathbb{R}$ defined by $P_\phi(\mu):= h_{\mu}(f) + \int_X \phi d\mu$ is upper semi-continuous, $f$ admits an equilibrium state with respect to $\phi$. Let $f:X\to X$ be a homeomorphism.
The \textit{bi-infinite Bowen ball} around $x\in X$ of size $\epsilon>0$ is the set $$\Gamma_{\epsilon}(x):=\{y\in X: d(f^{n}(x),f^{n}(y))<\epsilon \ \mbox{\rm{for all}} \ n\in \mathbb{Z}\}.$$ We say that $f$ is \textit{expansive} if there is a constant $\epsilon>0$ such that $\Gamma_{\epsilon}(x)=\{x\}$ for all $x\in X.$ To generalize the concept of expansiveness, consider $$\phi_\epsilon(x) := \bigcap_{n =1}^\infty B^n_\epsilon(x).$$ \begin{definition} A continuous map $f:X\to X$ is called: \begin{enumerate} \item \textit{$h$-expansive} if $h^{\ast}_{f}(\epsilon):=\sup_{x\in X}h(f,\phi_{\epsilon}(x))=0$ for some $\epsilon>0$; \item \textit{asymptotically $h$-expansive} if $\displaystyle\lim_{\epsilon\to 0}h^{\ast}_{f}(\epsilon)=0.$ \end{enumerate} \end{definition} Of course $h$-expansiveness implies asymptotic $h$-expansiveness. It is also easy to check that, if $f$ is a homeomorphism, then $\Gamma_\epsilon(x) \subseteq \phi_\epsilon(x)$ and by \cite[Corollary 2.3]{bowen-h-expansive1972}, $h^{\ast}_{f}(\epsilon) = h^{\ast}_{f, homeo}(\epsilon) :=\sup_{x\in X}h(f,\Gamma_{\epsilon}(x))$. In particular, if $f$ is expansive, then it is $h$-expansive. \begin{theorem}[\cite{misiurewicz1973diffeomorphism}]\label{emme} If $f:X\to X$ is asymptotically $h$-expansive, then the entropy function is upper semi-continuous. In particular, $f$ admits an equilibrium state for any continuous potential. \end{theorem} Uniqueness of equilibrium states is a much more delicate problem, but it also brings interesting properties. For instance, when such measures are unique, they are ergodic. Some classes of maps are known to have a unique equilibrium state. For expansive homeomorphisms with the specification property, for instance, this problem was solved for Hölder continuous potentials by R. Bowen in \cite{bowen}. Following B. Weiss \cite{weiss1970}, we call a system \textit{intrinsically ergodic} if there is a unique measure of maximal entropy.
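To make the expansiveness definitions above concrete (a sketch; the doubling map and the constant $1/4$ are our own illustrative choices), for the non-invertible doubling map $x \mapsto 2x \bmod 1$ one has $\phi_\epsilon(x) = \{x\}$ for $\epsilon \le 1/4$: a small difference between two points doubles under iteration until it first lands in $[1/4, 1/2]$, so some forward iterate separates any two distinct points, and the map is trivially $h$-expansive:

```python
# Forward expansiveness of the doubling map f(x) = 2x mod 1 (illustrative):
# any two distinct points get eps = 1/4 apart under some forward iterate,
# so phi_eps(x) = {x} and the map is trivially h-expansive.

def circle_dist(x, y):
    d = abs(x - y) % 1.0
    return min(d, 1.0 - d)

def separation_time(x, y, eps=0.25, max_iter=200):
    """First n >= 0 with circle_dist(f^n x, f^n y) >= eps, or None."""
    for n in range(max_iter):
        if circle_dist(x, y) >= eps:
            return n
        x, y = (2 * x) % 1.0, (2 * y) % 1.0
    return None

print(separation_time(0.2, 0.2 + 1e-6))  # separates after ~18 doublings
```

For an expansive homeomorphism the same separation happens under forward or backward iterates, which is the bi-infinite Bowen ball condition.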
Our main tool to prove intrinsic ergodicity is the following theorem, known as Ledrappier--Walters' formula. \begin{theorem}{\cite[Theorem 2.1]{ledrappier1977relativised}}\label{principiolw} Let $X$ and $Y$ be compact metric spaces and consider $T: X \to X$, $S: Y \to Y$ and $h: X \to Y$ continuous maps with $h$ surjective and such that $S \circ h = h \circ T$. If $\phi: X \to \mathbb{R}$ is continuous and $\nu$ is an $S$-invariant probability measure, then $$\sup\{h_\mu(T | S) + \int \phi d \mu \; : \; \mu \in \mathcal{M}(X, T) \mbox{ and } \mu \circ h^{-1} = \nu \} = \int_Y P(T,\phi,h^{-1}(y)) d\nu(y).$$ In particular, if $\phi=0$, we have that $$\sup\{h_\mu(T) \; : \; \mu \in \mathcal{M}(X, T) \mbox{ and } \mu \circ h^{-1} = \nu \} = h_\nu(S) + \int_Y h_{top}(T,h^{-1}(y)) d\nu(y).$$ \end{theorem} \subsection{Natural extensions} \label{sebsec:inverse-limit} For certain aspects, we need to analyze the past orbit of a point. If the map is not invertible, a point may have more than one preimage, and there are several ``choices of past''. We can make each one of these choices a point on a new space, defined as follows. \begin{definition} Given $f: X \to X$ continuous, the \emph{natural extension} (or \emph{inverse limit space}) associated to the triple $X$, $d$ and $f$ is \begin{itemize} \item $X_f = \{\Tilde{x} = (x_k) \in X^\mathbb{Z}: x_{k+1} = f(x_k) \mbox{, } \forall k \in \mathbb{Z}\}$, \item $(\Tilde{f}(\Tilde{x}))_k = x_{k+1}$ $\forall k \in \mathbb{Z}$ and $\forall \Tilde{x} \in X_f$, \item $\Tilde{d}(\Tilde{x}, \Tilde{y}) = \sum\limits_k \dfrac{d(x_k, y_k)}{2^{|k|}}$. \end{itemize} \end{definition} We also denote $(X_f, \tilde{f})$ as $\varprojlim(X,f)$. We have that $(X_f,\Tilde{d})$ is a compact metric space and the shift map $\Tilde{f}$ is continuous and invertible. Let $\pi: X_f \to X$ be the projection on the 0th coordinate, $\pi(\tilde{x}) = x_0$; then $\pi$ is a continuous surjection and $f \circ \pi = \pi \circ \tilde{f}$.
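A finite-precision sketch of this construction (our own toy model, not part of the paper's arguments): for the doubling map, a point of $X_f$ can be stored as a truncated backward orbit, each preimage choice being one bit, and the relations $f \circ \pi = \pi \circ \tilde{f}$ and $x_{k} = f(x_{k-1})$ can be checked directly:

```python
# Toy model of the natural extension for f(x) = 2x mod 1 (illustrative).
# A point x~ of X_f is stored as its truncated past [x_0, x_{-1}, x_{-2}, ...];
# each backward step chooses one of the two preimages (x + b) / 2, b in {0, 1}.

def f(x):
    return (2 * x) % 1.0

def past_orbit(x0, bits):
    """Backward orbit [x_0, x_{-1}, ...] determined by preimage choices."""
    orbit = [x0]
    for b in bits:
        orbit.append((orbit[-1] + b) / 2)  # f((x + b) / 2) = x
    return orbit

def shift(orbit):
    """The shift map f~ on truncated pasts: prepend f(x_0)."""
    return [f(orbit[0])] + orbit

def pi(orbit):
    """Projection onto the 0th coordinate."""
    return orbit[0]

x_tilde = past_orbit(0.3, bits=[1, 0, 1, 1])
# Semiconjugacy f o pi = pi o f~ on the truncated representation:
print(f(pi(x_tilde)) == pi(shift(x_tilde)))  # True
```

In the genuine inverse limit the past is an infinite sequence (for the doubling map this gives the dyadic solenoid), and $\tilde{f}$ is invertible: its inverse simply drops $x_0$.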
Therefore, every non-invertible topological dynamical system on a compact metric space is a topological factor of an invertible topological dynamical system on a compact metric space. With the metric $\tilde{d}$ over $X_f$, we can define precisely the continuity of objects that depend on the orbit of a point, such as the invariant manifolds given by the invariant splittings in the definitions of the next subsection. Even when $X = M$ is a manifold, the natural extension space does not have a manifold structure, but we can still pull back the tangent structure of the original manifold to the natural extension. By doing so, we have a manifold structure on each connected component of $M_f$, which is diffeomorphic to the universal covering $\Tilde{M}$ of $M$. On each one of these connected components, we have unstable and stable foliations. Additionally, if $\dim E^c = 1$ there are central curves tangent to the restriction of $E^c$ to this space. By making use of the inverse limit space, it is also possible to better understand invariant measures for non-invertible systems. Unless stated otherwise, all measures in this work are over the Borel $\sigma$-algebra on the given space. For any $\tilde{f}$-invariant probability measure $\tilde{\mu}$ on $X_f$, we obtain $\pi_* \tilde{\mu}$ as a probability measure on $X$ that is $f$-invariant. What makes the natural extension the natural construction to study ergodic theory of non-invertible systems is the fact that $\pi_*$ is actually a bijection between the invariant probabilities for $(X_f, \tilde{f})$ and $(X,f)$, as stated in \cite[Proposition I.3.1]{qian2009smooth}. \begin{proposition}[\cite{qian2009smooth}] \label{prop:corresp-measures} Let $(X,d)$ be a compact metric space and $f: X \to X$ continuous. For any $f$-invariant probability measure $\mu$ on $X$, there is a unique $\tilde{f}$-invariant probability measure $\tilde{\mu}$ on $X_f$ such that $\pi_* \tilde{\mu} = \mu$.
\end{proposition} Moreover, we have that the metric entropies for corresponding measures are the same \cite[Proposition I.3.4]{qian2009smooth}, that is, $h_{\mu}(f) = h_{\tilde{\mu}}(\tilde{f})$. Thus, by the variational principle, the topological entropies of $(X_f, \tilde{f})$ and $(X,f)$ coincide. An Anosov endomorphism is topologically conjugate to its linearization at the natural extension level \cite[Theorem 1.20]{przytycki1976anosov}, so their natural extensions have the same topological entropy and corresponding invariant Borel probability measures. Thus, by the invariance of entropy between a system and its natural extension, Anosov endomorphisms are intrinsically ergodic. We investigate this property for partially hyperbolic endomorphisms. \subsection{Partially hyperbolic endomorphisms} Partial hyperbolicity means that the dynamics presents hyperbolic directions dominating a central direction. In our context, we have $f: M \to M$ a local diffeomorphism, and we do not have a global invariant splitting in general. Indeed, even if $E^c_x$ is trivial ($f$ is an Anosov endomorphism), the case in which there is an invariant splitting --- which we call \textit{special} --- is not robust \cite{przytycki1976anosov}. So the definition cannot be made by using a global invariant splitting. We can define partial hyperbolicity for splittings along given $f$-orbits or using invariant cones.
\begin{definition} \label{def:abs-ph} We say that a $C^1$ local diffeomorphism $f: M \to M$ is \textit{absolutely partially hyperbolic} if, for any $f$-orbit $\tilde{x} \in M_f$, we have a splitting $T_{x_i}M = E^u_{x_i} \oplus E^c_{x_i} \oplus E^s_{x_i}$, $i \in \mathbb{Z}$, such that \begin{enumerate} \item it is $Df$-invariant: $Df_{x_i}E^\sigma_{x_i} = E^\sigma_{x_{i+1}}$, $\sigma \in \{u, c, s\}$, for all $i \in \mathbb{Z}$; \item if $\tilde{x}, \tilde{y}, \tilde{z} \in M_f$, and $v^s \in E^s_{x_i}$, $v^c \in E^c_{y_j}$ and $v^u \in E^u_{z_k}$ are unit vectors, $i, j, k \in \mathbb{Z}$, then $$\Vert Df_{x_i} v^s \Vert < \Vert Df_{y_j} v^c \Vert < \Vert Df_{z_k} v^u \Vert.$$ Moreover, $\Vert \restr{Df}{E^s_{x_i}} \Vert < 1$, $\Vert \restr{Df}{E^u_{x_i}} \Vert > 1$ for any $\tilde{x} \in M_f$ and $i \in \mathbb{Z}$. \end{enumerate} \end{definition} We say that $f$ is \textit{(weakly) partially hyperbolic} if the inequality $\Vert Df_{x} v^s \Vert < \Vert Df_{x} v^c \Vert< \Vert Df_{x} v^u \Vert$ in the above definition holds for unit $v^\sigma \in E^\sigma_x$, $\sigma \in \{u, c, s\}$, that is, we can make this comparison just for vectors on the same tangent space. For endomorphisms, we may have that: $E^c$ is trivial, obtaining an Anosov endomorphism with contracting direction; $E^s$ is trivial; or $E^s$ and $E^u$ are both trivial, obtaining an expanding map. Absolutely partially hyperbolic endomorphisms then generalize Anosov endomorphisms (which in turn generalize expanding maps and Anosov diffeomorphisms) and absolutely partially hyperbolic diffeomorphisms (the invertible case). To see that the class of partially hyperbolic endomorphisms is open in $C^1(M, M)$, for instance, it is convenient to work with an alternative definition. For this, we consider a cone family $\mathcal{C} = \{\mathcal{C}(x)\}_{x \in M} \subseteq TM$ with each $\mathcal{C}(x) \subseteq T_xM$ being a cone.
We say that $\mathcal{C}$ is \textit{$Df$-invariant} if $Df_x(\mathcal{C}(x)) \subseteq \textrm{Int } \mathcal{C}(f(x))$ and \textit{$Df^{-1}$-invariant} if $Df^{-1}_{f(x)}(\mathcal{C}(f(x))) \subseteq \textrm{Int } \mathcal{C}(x)$, where $Df^{-1}_{f(x)}: T_{f(x)}M \to T_xM$. \begin{definition} We say that a $C^1$ local diffeomorphism $f: M \to M$ is \textit{absolutely partially hyperbolic} if there are $Df$-invariant cone families $\mathcal{C}^u$ and $\mathcal{C}^{uc}$, $Df^{-1}$-invariant cone families $\mathcal{C}^{cs}$ and $\mathcal{C}^{s}$, and constants $C>1$, $0 < \lambda < \gamma_1 < \gamma_2 < \mu$, with $\lambda < 1 < \mu$, such that \begin{align*} \Vert Df_xv \Vert &> \mu \Vert v \Vert \mbox{ for all } v \in\mathcal{C}^u(x);\\ \Vert Df_xv \Vert &> \gamma_2 \Vert v \Vert \mbox{ for all } v \in\mathcal{C}^{uc}(x);\\ \Vert Df^{-1}_xv \Vert &> \gamma_1^{-1} \Vert v \Vert \mbox{ for all } v \in\mathcal{C}^{cs}(x);\\ \Vert Df^{-1}_xv \Vert &> \lambda^{-1} \Vert v \Vert \mbox{ for all } v \in\mathcal{C}^s(x). \end{align*} \end{definition} If $M$ is an orientable surface, then the existence of a partially hyperbolic endomorphism $f: M \to M$ implies that $M = \mathbb{T}^2$, since there is a $Df$-invariant line field $E^c \subseteq TM$ transversal to the cone field \cite{hall2021partially}. This direction is generically denoted by $E^c$, but it may also be contracting or expanding, with weaker expansion than inside the cones. A classification of partially hyperbolic endomorphisms on surfaces is given in \cite{hall2022classification}. We recall that there are local and global unstable/stable manifolds for $f$ in $M$ \cite[Theorem 2.1]{przytycki1976anosov}, with the unstable manifolds depending on the whole orbit $\tilde{x}$. In general, a partially hyperbolic endomorphism $f: M \to M$ does not have a global invariant unstable or central bundle, and these manifolds do not form a foliation.
However, if each point has only one unstable/central direction, we say that $f$ is \textit{u/c-special}, and we say that $f$ is \textit{special} if it is both u- and c-special, as it is the case for linear toral endomorphisms. As the natural extension is an important tool to understand the ergodic properties of endomorphisms, we have that the lift of the endomorphism to the universal cover is the natural way to explore its geometric and differential properties, due to the following result. \begin{proposition}[\cite{costa-micena2021}] Let $\overline{M}$ be the universal cover of $M$ and $F: \overline{M} \to \overline{M}$ a lift for $f$. Then $f$ is a partially hyperbolic endomorphism if and only if $F: \overline{M} \to \overline{M}$ is a partially hyperbolic diffeomorphism. \end{proposition} Thus, at the universal cover level, we do have global unstable and central bundles. This implies that there is an unstable foliation at the universal cover. If $F: \overline{M} \to \overline{M}$ is \textit{dynamically coherent}, then it also has a center foliation. \begin{definition} A partially hyperbolic diffeomorphism $f: M \to M$ is said to be \textit{dynamically coherent} if there are invariant foliations $W^{uc}$ and $W^{cs}$ tangent to $E^{uc} = E^u \oplus E^c$ and $E^{cs} = E^c \oplus E^s$ respectively. \end{definition} Since partially hyperbolic endomorphisms do not have unstable or central foliations in general, unless they are special, dynamical coherence is defined as follows. \begin{definition} \label{def:dyn-cohe} A partially hyperbolic endomorphism $f: M \to M$ is said to be \textit{dynamically coherent} if there are unique invariant leaves $W^{uc}(\tilde{x})$ and $W^{cs}(\tilde{x})$ tangent to $E^{uc}(\tilde{x}) = E^u(\tilde{x}) \oplus E^c(\tilde{x})$ and $E^{cs}(x) = E^c(\tilde{x}) \oplus E^s(x)$ respectively.
\end{definition} Since $E^u(\tilde{x})$ has the greatest expansion rate, the complementary direction $E^{cs}(x)$ is uniquely defined, not depending on the past orbit of $x$ (see \cite[Lemma 2.5]{costa-micena2021}). This bundle may still fail to be uniquely integrable. If $f$ is a dynamically coherent partially hyperbolic endomorphism, its lift $F: \overline{M} \to \overline{M}$ is dynamically coherent. In the case that $M = \mathbb{T}^n$, another property that we verify for the lift $F$ is a general geometric property for foliations on $\mathbb{R}^n$, called \textit{quasi-isometry}. It means that the metric given by the induced distance between two points along a leaf of the foliation is equivalent to the Euclidean distance between them. \begin{definition} Given a foliation $\mathcal{F}$ of $\mathbb{R}^n$, with $d_\mathcal{F}$ the distance along the leaves, we say that $\mathcal{F}$ is \emph{quasi-isometric} if there are constants $a, b > 0$ such that, for every $y \in \mathcal{F}(x)$, $$d_\mathcal{F}(x,y) \leq a \Vert x-y \Vert +b.$$ \end{definition} If the foliation $\mathcal{F}$ is uniformly continuous, we can take $b=0$ in the above definition. For special Anosov endomorphisms, quasi-isometry is guaranteed by the existence and the properties of a conjugacy between the map $f$ and a linear one, and we discuss this conjugacy in the next subsection. For systems with partial hyperbolicity, quasi-isometry of the stable and unstable leaves implies dynamical coherence \cite{brin2003dynamical}. This geometric property is also important for our uniqueness result. \subsection{Hyperbolic linearization and semiconjugacy} \label{subsec:semicon} Let $f: \mathbb{T}^n \to \mathbb{T}^n$ be an endomorphism and $A$ its linearization, that is, $A$ is the unique linear map that induces the same homomorphism of $\mathbb{Z}^n \cong \pi_1(\mathbb{T}^n)$ as $f$.
Consider $F: \mathbb{R}^n \to \mathbb{R}^n$ a lift of $f$ and $A: \mathbb{R}^n \to \mathbb{R}^n$ the linear lift of $A$ to $\mathbb{R}^n$. If $A$ is hyperbolic, then by \cite[Theorem 8.2.1]{aoki1994topological} and its proof, there is a unique continuous surjection $H: \mathbb{R}^n \to \mathbb{R}^n$ on the universal cover with \begin{itemize} \item $A \circ H = H \circ F$; \item $d(H, Id) < K$ and $K$ goes to $0$ as $d_{C^1}(f,A)$ tends to $0$; \item $H$ is uniformly continuous. \end{itemize} This semiconjugacy is not necessarily preserved under deck transformations, that is, it does not necessarily project to a semiconjugacy in $\mathbb{T}^n$. If $f$ is an Anosov endomorphism ($E^c$ is trivial), $H$ projects to $\mathbb{T}^n$ if and only if $f$ is special \cite[Proposition 7]{cantarino2021anosov}. But we can use $H$ to induce a semiconjugacy $\tilde{h}: \mathbb{T}^n_f \to \mathbb{T}^n_A$ between the natural extensions of $f$ and $A$. This is a consequence of \cite[Propositions 7.2.4 and 8.3.1]{aoki1994topological}, which we describe briefly. Firstly, we need to understand the structure of the natural extension of a toral covering map $f: \mathbb{T}^n \to \mathbb{T}^n$ as a topological group. For a specific finite covering $\tilde{\mathbb{T}^n}$, the natural extension $(S, \tilde{F}) = \varprojlim(\tilde{\mathbb{T}^n}, F')$ of the lift $F'$ is constructed and proved to be a \emph{solenoidal group}, that is, a compact connected abelian group with finite topological dimension \cite[\S 7.2]{aoki1994topological}, which obeys the following commutative diagram.
\[ \begin{tikzcd}[column sep=5em, row sep=2.5em] \mathbb{R}^n \arrow[rr,"F" near start] \arrow[dr,<->,swap,"\psi"] \arrow[dd,swap,"p''" near start] && \mathbb{R}^n \arrow[dd,swap,"p''" near start] \arrow[dr,<->,"\psi"'] \\ & \stackrel[\subseteq \; \mathbb{R}^p \oplus S_q]{}{\psi(\mathbb{R}^n)} \arrow[rr,crossing over,"\overline{F}" near start] \arrow[rr,crossing over,"\overline{F}"' near start, shift right=1.5ex] && \stackrel[\subseteq \; \mathbb{R}^p \oplus S_q]{}{\psi(\mathbb{R}^n)} \arrow[dd,"p_1" near start] \\ \tilde{\mathbb{T}^n} \arrow[rr,"F'" near start] \arrow[dr,swap,"\varprojlim"] \arrow[dd,"p'"' near start] && \tilde{\mathbb{T}^n} \arrow[dr,swap,"\varprojlim"] \arrow[dd,"p'"' near start]\\ & S := \tilde{\mathbb{T}^p} \oplus S_q \arrow[rr, crossing over, "\tilde{F}" near start] \arrow[uu,<-,crossing over,"p_1" near end] && \tilde{\mathbb{T}^p} \oplus S_q \arrow[dd,<->,"\beta" near end]\\ \mathbb{T}^n \arrow[rr,"f" near start] \arrow[dr,swap,"\varprojlim"] && \mathbb{T}^n \arrow[dr,swap,"\varprojlim"]\\ & \mathbb{T}^n_f \arrow[rr,"\tilde{f}" near start] \arrow[uu,<->,crossing over,"\beta" near start] && \mathbb{T}^n_f\\ \end{tikzcd} \] The lift of $f$ and $F'$ to the universal cover, $F: \mathbb{R}^n \to \mathbb{R}^n$ is homeomorphic to its image under an injective function $\psi$. The image $\psi(\mathbb{R}^n)$ is dense on $\mathbb{R}^p \oplus S_q$, where $p+q = n$ and $S_q$ is a solenoidal group. Thus $\overline{F} = \psi^{-1} \circ F \circ \psi$ can be extended to $\mathbb{R}^p \oplus S_q$. The map $p_1: \mathbb{R}^p \oplus S_q \to \tilde{\mathbb{T}^p} \oplus S_q$ is a projection and $\overline{F}$ can be projected to $\tilde{F}$. Then \cite[Theorem 7.2.4]{aoki1994topological} gives us that $(S, \tilde{F}) = \varprojlim(\tilde{\mathbb{T}^n}, F')$. Finally, we have an isomorphism $\beta$ between $(\mathbb{T}^n_f, \tilde{f})$ and $(S, \tilde{F})$ \cite[Lemma 7.2.5]{aoki1994topological}. 
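It may help to see the semiconjugacy $H$ on the universal cover in the simplest situation before passing to the solenoid (a sketch under our own simplifying assumptions: a purely expanding perturbation of the doubling map on the circle, so there are no stable or central directions). In this case $H = \lim_n A^{-n} \circ F^n$ converges uniformly on $\mathbb{R}$, and $A \circ H = H \circ F$ follows by passing the identity $A^{-n} F^n \circ F = A \circ (A^{-(n+1)} F^{n+1})$ to the limit:

```python
import math

# Illustrative only: F(x) = 2x + delta * sin(2 pi x) lifts an expanding
# perturbation f of the doubling map; its linearization is A(x) = 2x.
# For expanding maps, H = lim A^{-n} o F^n satisfies H o F = A o H.

DELTA = 0.1  # small enough that F'(x) = 2 + 2 pi delta cos(2 pi x) > 1

def F(x):
    return 2 * x + DELTA * math.sin(2 * math.pi * x)

def H(x, n=40):
    """Approximate semiconjugacy A^{-n} o F^n (truncation error ~ 2^{-n})."""
    for _ in range(n):
        x = F(x)
    return x / 2 ** n

x = 0.37
print(abs(H(F(x)) - 2 * H(x)))  # ~0: H o F = A o H up to truncation error
```

In our partially hyperbolic setting this limit only controls the expanding part; the full construction of $H$ uses the hyperbolic splitting of $A$, as in \cite{aoki1994topological}, and the sketch is not a substitute for it.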
The same constructions can be made for the linearization $A: \mathbb{T}^n \to \mathbb{T}^n$. If $A$ is hyperbolic, the semiconjugacy $H: \mathbb{R}^n \to \mathbb{R}^n$ on the universal cover is carried to $\mathbb{R}^p \oplus S_q$ as $\overline{H} = \psi^{-1} \circ H \circ \psi$, and $\overline{H}$ is shown in \cite[Theorem 8.3.1]{aoki1994topological} to project to $S$. Therefore, there is a semiconjugacy $\tilde{h}: \mathbb{T}^n_f \to \mathbb{T}^n_A$ between $(\mathbb{T}^n_f, \tilde{f})$ and $(\mathbb{T}^n_A, \tilde{A})$. The existence of this semiconjugacy has consequences for the metric and topological entropies of $f$. Indeed, if $f:X\to X$ and $g:Y\to Y$ are continuous maps, $h:X\to Y$ is a continuous surjection such that $h \circ f=g\circ h,$ and the measures $\mu$ and $\nu = h_*\mu$ are $f$- and $g$-invariant, respectively, then \begin{enumerate} \item $h_{\nu}(g)\leq h_{\mu}(f)$ and \item $h_{top}(g)\leq h_{top}(f).$ \end{enumerate} \section{Proof of Theorem \ref{teoA}} \label{sec:teoA} The main step to prove Theorem \ref{teoA} is to demonstrate that the inverse limit $\tilde{f}: M_f \to M_f$ of $f$ is $h$-expansive. The proof is similar to the one for the invertible case, as proved in \cite{cowiesonyoung}, and we refer to \cite{diazFisher} for a presentation closer to ours. \begin{theorem} If $f: M \to M$ is a $C^1$ partially hyperbolic endomorphism with one-dimensional center bundle, then $\tilde{f}: M_f \to M_f$ is $h$-expansive. \end{theorem} \begin{proof} In order to prove that the topological entropy of $\tilde{f}$ restricted to $\Gamma_\varepsilon(\tilde{x})$ is equal to $0$ for each $\tilde{x} \in M_f$, we show that $\Gamma_\varepsilon(\tilde{x})$ is contained in a local center-stable disk and that the exponential growth of its spanning sets is governed by a local center curve. This curve is one-dimensional, and its iterates have bounded length, so the entropy along it is $0$.
Since $M$ is a closed manifold, we have that the natural extension is a fiber bundle $(M_f, M, \pi, C)$, where the fiber $C$ is a Cantor set \cite[Theorem 6.5.1]{aoki1994topological}. Let $\beta > 0$ be sufficiently small such that \begin{itemize} \item for each $x \in M$, $\pi^{-1}(B(x, \beta)) \simeq B(x, \beta) \times \pi^{-1}(\{x\})$; \item there is $\delta \in (0, \beta)$ such that, for each $\tilde{x} \in M_f$ and $\tilde{y} \in \tilde{B}(\tilde{x}, \beta) \simeq B(x, \beta) \times \{\tilde{x}\}$, if $d(\tilde{x}, \tilde{y}) < \delta$ then \begin{itemize} \item $W^s_\beta(x) \pitchfork \gamma^c_\beta(\tilde{y})$ is a singleton, where $\gamma^c_\beta(\tilde{y})$ is a curve with center $y = \pi(\tilde{y})$ and radius less than $\beta$ that is tangent to the central bundle; \item $W^u_\beta(\tilde{x}) \pitchfork D^{cs}_\beta(\tilde{y})$ is a singleton, where $$D^{cs}_\beta(\tilde{y}) = \bigcup_{z \in \gamma^c_\beta(\tilde{y})} W^s_\beta(z),$$ is a disk tangent to $E^c \oplus E^s$. \end{itemize} \end{itemize} \begin{figure} \caption{$\pi^{-1}(B(x, \beta))$.} \label{fig:mme1} \end{figure} In other words, we are taking $\beta$ small enough to have inside the ball $B(x, \beta)$ a local product structure that depends only on $\tilde{x}$, so that it can be seen either in $B(x, \beta) \subseteq M$ or in $\tilde{B}(\tilde{x}, \beta)$, the connected component of $\tilde{x}$ in $\pi^{-1}(B(x, \beta)) \subseteq M_f$. See Figure \ref{fig:mme1}. Note that we are not assuming dynamical coherence, since we only need the existence of center curves, which is given by the fact that $\dim E^c = 1$. Let $\alpha > 0$ be such that $\lambda \alpha < \delta$, where $\lambda = \max_{x \in M} \Vert \restr{Df}{E^u} \Vert$. Then we can construct foliated boxes $V(\tilde{x}) \subseteq B(x, \beta)$ with uniform size $\alpha$ (not depending on $x \in M$) such that they have local product structure and their images $f(V(\tilde{x})) \subseteq B(f(x), \beta)$ also have local product structure.
More precisely, $$V(\tilde{x}) := \bigcup_{y \in D^{cs}_\alpha(\tilde{x})} W^u_\alpha(\tilde{y}),$$ where $\tilde{y} = \pi^{-1}(\{y\}) \cap \tilde{B}(\tilde{x}, \beta).$ By construction, these boxes are small enough so that we can lift them to the natural extension inside $\tilde{B}(\tilde{x}, \beta)$, and we denote this lift by $\tilde{V}(\tilde{x}) := \pi^{-1}(V(\tilde{x})) \cap \tilde{B}(\tilde{x}, \beta)$, see Figure \ref{fig:mme2}. We use $\tilde{A}$ to denote the lift of any subset $A$ of $V(\tilde{x})$ to $\tilde{V}(\tilde{x})$. Then we can use the inverse $\tilde{f}^{-1}$ on each box, and the proof is concluded exactly as in \cite[Theorem 1.2]{diazFisher}, which we include here for completeness. \begin{figure} \caption{The box $\tilde{V}(\tilde{x})$.} \label{fig:mme2} \end{figure} Since the sizes of the boxes $\tilde{V}(\tilde{x})$ are uniform, there is $\varepsilon > 0$ such that $B(\tilde{x}, \varepsilon) \cap \tilde{B}(\tilde{x}, \beta) \subseteq \tilde{V}(\tilde{x})$ for any $\tilde{x} \in M_f$. Fix $\tilde{x} \in M_f$ and consider $\tilde{x}^n : = \tilde{f}^n(\tilde{x})$ and $\gamma_n := \gamma^c_\alpha(\tilde{x}^n)$ chosen in such a way that $f(\gamma_{n-1}) \cap \gamma_n$ contains an open interval around $x_n = \pi(\tilde{x}^n)$. This is possible since the central direction is invariant under $f$. So we are fixing each central curve coherently along the forward orbit of $\tilde{x}$, and each foliated box $\tilde{V}$ henceforth is with respect to this choice. \begin{proof}[Claim 1:] $\Gamma_\varepsilon(\tilde{x}) \subseteq D^{cs}_\alpha(\tilde{x})$. Indeed, for each $\tilde{z} \in \Gamma_\varepsilon(\tilde{x})$, we have that $\tilde{z}^k := \tilde{f}^k(\tilde{z}) \in B(\tilde{f}^k(\tilde{x}), \varepsilon)$ for each $k \in \mathbb{Z}$ by the definition of $\Gamma_\varepsilon(\tilde{x})$, thus $\tilde{z}^k \in \tilde{V}(\tilde{x}^k)$.
Consider the projections $$\begin{array}{cccc} p^{cs}: & \tilde{V}(\tilde{x}) & \to & \tilde{D}^{cs}_\alpha(\tilde{x}) \\ & \tilde{z} & \mapsto & \tilde{W}^u_\alpha(\tilde{z}) \cap \tilde{D}^{cs}_\alpha(\tilde{x}) \end{array}$$ and $$\begin{array}{cccc} p^{c}: & \tilde{D}^{cs}_\alpha(\tilde{x}) & \to & \tilde{\gamma}(\tilde{x}) \\ & \tilde{y} & \mapsto & \tilde{W}^s_\alpha(\tilde{y}) \cap \tilde{\gamma}(\tilde{x}), \end{array}$$ which are well defined by the local product structure. Define $\tilde{y}^n := p^{cs}(\tilde{z}^n)$ and $\tilde{w}^n := p^{c}(\tilde{y}^n)$ for each $n \in \mathbb{N}$. Suppose that $\tilde{z}^n \notin \tilde{D}^{cs}_\alpha(\tilde{x}^n)$, that is, $\tilde{z}^n \neq \tilde{y}^n$. Then, since $\tilde{y}^n \in \tilde{W}^u(\tilde{z}^n)$, there is $m \in \mathbb{N}$ such that $d(\tilde{z}^{n+m}, \tilde{y}^{n+m}) > \alpha$, which implies that $\tilde{z}^{n+m} \notin \tilde{V}(\tilde{x}^{n+m})$, a contradiction. \end{proof} Define $$\Gamma^c := \bigcap_{n \geq 0} \tilde{f}^{-n}(\tilde{\gamma}_n),$$ the set of points of $\tilde{\gamma}(\tilde{x})$ whose images under $\tilde{f}^n$ lie in $\tilde{\gamma}_n$. The length of $\tilde{f}^n(\Gamma^c)$ is less than $2 \alpha$ for all $n \in \mathbb{N}$. In particular, we have that $h(\tilde{f}, \Gamma^c) = 0$; see \cite[Lemma 4.2]{buzzifishersambarinovasquez2012}, for instance. \begin{proof}[Claim 2:] $h(\tilde{f}, \tilde{D}^{cs}_\alpha(\tilde{x})) = h(\tilde{f}, \Gamma^c)$. Indeed, if $S$ is an $(n, \epsilon/2)$-spanning set for $\Gamma^c$, then it is an $(n, \epsilon)$-spanning set for $$B(\Gamma^c, \epsilon/2) = \{\tilde{x} \in M_f \; : \; \exists \; \tilde{y} \in \Gamma^c \mbox{ with } d(\tilde{x}, \tilde{y}) < \epsilon/2 \}.$$ If $n \in \mathbb{N}$ is sufficiently large, then $\tilde{f}^n(\tilde{D}^{cs}_\alpha(\tilde{x})) \subseteq B(\tilde{f}^n(\Gamma^c), \epsilon/2)$, and $S$ is an $(n, \epsilon)$-spanning set for $\tilde{D}^{cs}_\alpha(\tilde{x})$. 
Thus \begin{align*} h(\tilde{f}, \Gamma^c) &= \displaystyle\lim_{\epsilon \rightarrow 0} \displaystyle\limsup_{n\rightarrow \infty}\frac{1}{n}\log \min\{\# S: S \subseteq \Gamma^c \ {\rm is \ an \ }(n,\epsilon){\rm -spanning \ set} \}\\ &= \displaystyle\lim_{\epsilon \rightarrow 0} \displaystyle\limsup_{n\rightarrow \infty}\frac{1}{n}\log \min\{\# S: S \subseteq \tilde{D}^{cs}_\alpha(\tilde{x}) \ {\rm is \ an \ }(n,\epsilon/2){\rm -spanning \ set} \}\\ &= h(\tilde{f}, \tilde{D}^{cs}_\alpha(\tilde{x})). \end{align*} \end{proof} Since $h(\tilde{f}, \Gamma^c) = 0$ and $\Gamma_\varepsilon(\tilde{x}) \subseteq \tilde{D}^{cs}_\alpha(\tilde{x})$, Claim 2 implies that $h(\tilde{f}, \Gamma_\varepsilon(\tilde{x})) = 0$. \end{proof} The previous theorem implies that $\tilde{f}$ is $h$-expansive, and from Theorem \ref{emme} we have that $\tilde{\mu} \mapsto h_{\tilde{\mu}}(\tilde{f})$ is upper semicontinuous, where $\tilde{\mu}$ is an $\tilde{f}$-invariant measure. Recall that by Proposition \ref{prop:corresp-measures} there is an injective correspondence between $\tilde{f}$-invariant and $f$-invariant measures given by $\mu = \pi_*{\tilde{\mu}}$. Now consider a continuous potential $\phi: M \to \mathbb{R}$ on $M$ and the function $\mu \mapsto P_\mu(f, \phi)$, where $$P_\mu(f, \phi) := h_\mu(f) + \int_M \phi \, d\mu.$$ Since $h_\mu(f) = h_{\tilde{\mu}}(\tilde{f})$ and $$\int_M \phi \, d\mu = \int_M \phi \, d\pi_*{\tilde{\mu}} = \int_{M_f} \phi \circ \pi \, d\tilde{\mu},$$ we obtain $$P_\mu(f, \phi) = P_{\tilde{\mu}}(\tilde{f}, \phi \circ \pi) := h_{\tilde{\mu}}(\tilde{f}) + \int_{M_f} \phi \circ \pi \, d\tilde{\mu}.$$ So, for any potential of the form $\phi \circ \pi$ (a lift to $M_f$ of a continuous potential on $M$), the function $\tilde{\mu} \mapsto P_{\tilde{\mu}}(\tilde{f}, \phi \circ \pi)$ is upper semicontinuous and, thus, admits a maximum, an equilibrium state. Additionally, this maximizing measure $\tilde{\mu}$ projects to $\mu$, an equilibrium state for $\phi$. 
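The mechanism behind this last step can be seen in a much simpler toy setting, independent of the partially hyperbolic situation above: for Bernoulli measures on the full $2$-shift and a potential $\phi$ depending only on the first symbol, the pressure $P_\mu = h_\mu + \int \phi \, d\mu$ is maximized exactly at the Gibbs weights $p_i \propto e^{\phi_i}$, with maximum $\log(e^{\phi_0}+e^{\phi_1})$. A minimal numerical sketch (all names are ours, not from the text):

```python
import math

# Toy check of the variational principle on the full 2-shift:
# among Bernoulli(p, 1-p) measures, for a potential phi depending only on
# the first symbol, the pressure
#     P_mu = h_mu + \int phi dmu = -p log p - q log q + p*phi0 + q*phi1
# attains its maximum log(e^{phi0} + e^{phi1}) at the Gibbs weight
#     p = e^{phi0} / (e^{phi0} + e^{phi1}).

def pressure(p, phi0, phi1):
    q = 1.0 - p
    entropy = -sum(t * math.log(t) for t in (p, q) if t > 0)
    return entropy + p * phi0 + q * phi1

phi0, phi1 = 0.3, -0.7
top = math.log(math.exp(phi0) + math.exp(phi1))          # predicted maximum
gibbs_p = math.exp(phi0) / (math.exp(phi0) + math.exp(phi1))

# brute-force maximization over a grid of Bernoulli parameters
best = max(pressure(k / 1000, phi0, phi1) for k in range(1, 1000))

assert abs(pressure(gibbs_p, phi0, phi1) - top) < 1e-12  # Gibbs weight is optimal
assert best <= top + 1e-12 and top - best < 1e-5         # grid maximum agrees
```

Of course, here upper semicontinuity is trivial (the pressure is continuous in $p$); the point of the argument above is precisely to recover the existence of a maximizer in a setting where continuity of the entropy map is not automatic.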
\section{Proof of Theorem \ref{teoB}} \label{sec:teoB} Even without an unstable foliation for $f$ in $\mathbb{T}^n$, an unstable foliation does exist for $F$ in $\mathbb{R}^n$, and all computations in the universal cover work as in the invertible case. Thus, as in \cite[Lemma 3.4]{ures2012intrinsic}, we prove the following, where $H: \mathbb{R}^n \to \mathbb{R}^n$ is a lift of the semiconjugacy $h: \mathbb{T}^n \to \mathbb{T}^n$. \begin{lemma} \label{lem:lemma1} If $\dim(E_F^c) = 1$, $f$ is dynamically coherent and $A = f_*$ is hyperbolic, then $A$ admits a partially hyperbolic splitting $\mathbb{R}^n = E^u_A \oplus E^c_A \oplus E^s_A$ with $\dim(E_A^c) = 1$. Additionally, $H(W^c_F(x)) = W^c_A(H(x))$ for all $x \in \mathbb{R}^n$. \end{lemma} If $A$ is hyperbolic, we also have that there is $\alpha > 0$ such that, for all $x, y \in \mathbb{R}^n$, $H(x) = H(y)$ if and only if $d(F^k(x), F^k(y)) < \alpha$ for all $k \in \mathbb{Z}$. Indeed, if $H(x) = H(y)$, then $$A^k(H(x)) = A^k(H(y)) \implies H(F^k(x)) = H(F^k(y)) \; \mbox{ for all } k \in \mathbb{Z}.$$ Since $d(H(z), z) < K$ for all $z \in \mathbb{R}^n$, we get $$d(F^k(x), F^k(y)) \leq d(F^k(x), H \circ F^k(x)) + d(H \circ F^k(x), H\circ F^k(y)) + d(H \circ F^k(y), F^k(y)),$$ which is less than $2K$ for all $k \in \mathbb{Z}$. For the converse implication, we use the expansiveness of $A$ and the fact that it is linear to guarantee that it is expansive with any expansiveness constant (see, for instance, \cite[Lemma 8.2.3]{aoki1994topological}). Since \begin{align*} &d(A^k \circ H(x), A^k \circ H(y)) = d(H \circ F^k(x), H \circ F^k(y)) \leq d(H \circ F^k(x), F^k(x))\\ &+ d(F^k(x), F^k(y)) + d(F^k(y), H \circ F^k(y)) < 2K + \alpha \end{align*} for all $k \in \mathbb{Z}$, we conclude that $H(x) = H(y)$. \begin{lemma} \label{lem:lemma2} If $A$ is hyperbolic, $F$ is dynamically coherent and $W^{u/s}_F$ is quasi-isometric, then $H(x) = H(y)$ implies that $y \in W^c_F(x)$. 
\end{lemma} \begin{proof} Suppose that $y \notin W^c_F(x) = W^{uc}_F(x) \cap W^{cs}_F(x)$. Then $y \notin W^{cs}_F(x)$ or $y \notin W^{uc}_F(x)$. Let us show that the first case leads to a contradiction; the second case is analogous. If $y \notin W^{cs}_F(x)$, then there is $z \neq y$ such that $z = W^{cs}_F(x) \cap W^u_F(y)$. Indeed, the global product structure follows from the quasi-isometry, as proved in \cite[Theorem 1.1]{hammerlindl2012dynamics} for partially hyperbolic diffeomorphisms on (not necessarily compact) manifolds; it then applies to $F: \mathbb{R}^n \to \mathbb{R}^n$. Consider $D_{cs} = d_{cs}(x,z)$ and $D_u = d_u(y,z)$. There are $1 < \lambda_c < \lambda_u$ such that \begin{align*} &d(F^k(x), F^k(z)) \leq \lambda_c^k D_{cs} \mbox{ and}\\ &d_u(F^k(y), F^k(z)) \geq \lambda_u^k D_{u} \end{align*} for all $k \in \mathbb{N}$. Since $W^u_F$ is quasi-isometric, we have $d_u(F^k(y), F^k(z)) \leq a d(F^k(y), F^k(z)) + b$ and thus $$d(F^k(y), F^k(z)) \geq \dfrac{\lambda_u^k D_u -b}{a}.$$ Therefore \begin{align*} d(F^k(x), F^k(y)) &\geq d(F^k(y), F^k(z)) - d(F^k(x), F^k(z))\\ &> \dfrac{\lambda_u^k D_u -b}{a} - \lambda_c^k D_{cs} \xrightarrow[]{k \to \infty} \infty, \end{align*} which implies that $H(x) \neq H(y)$, a contradiction. \end{proof} \begin{remark} The above lemma requires that $n = \dim(\mathbb{T}^n) \geq 3$, for we need $\dim(E^\sigma_F) \neq 0$, $\sigma \in \{u, c, s\}$. If $n = 2$, we have the same result just by requiring that $A$ is hyperbolic. Indeed, in this case, $\dim(E^c) = \dim(E^u) = 1$ and we have dynamical coherence, quasi-isometry and global product structure by \cite{hall2021partially}. Supposing that $y \notin W^c_F(x)$, there is a unique $z \in W^u_F(y) \cap W^c_F(x)$, $z \neq y$, and the same argument by contradiction applies. \end{remark} \begin{remark} By Lemmas \ref{lem:lemma1} and \ref{lem:lemma2}, for all $z \in \mathbb{R}^n$ we have that $H^{-1}(W^c_A(z)) = W^c_F(x)$ for any $x \in H^{-1}(z)$. 
\end{remark} \begin{lemma} \label{prop:prop1} If $A$ is hyperbolic, $\dim(E^c_F) = 1$, $F$ is dynamically coherent and $W^\sigma_F$ is quasi-isometric for $\sigma \in \{u, c, s\}$, then for all $z \in \mathbb{R}^n$ we have that $H^{-1}(z)$ is a compact and connected subset of $W^c_F(x)$ for $x \in H^{-1}(z)$. \end{lemma} \begin{proof} We have that $H^{-1}(z)$ is closed, and it is bounded since $d(H, Id) < K$. For $x \in H^{-1}(z)$, $H^{-1}(z)$ is a subset of $W^c_F(x)$ by Lemma \ref{lem:lemma2}. It remains to prove connectedness. Fixing $x, y \in H^{-1}(z)$ and given $w \in [x, y]_c$, we have by the quasi-isometry of $W^c_F$ that \begin{align*} d(F^k(x), F^k(w)) &\leq d_c(F^k(x), F^k(w)) \leq d_c(F^k(x), F^k(y)) \\ &\leq a \, d(F^k(x), F^k(y)) + b \leq 2 a K + b, \end{align*} which implies that $z = H(x) = H(w)$, as previously shown using the expansiveness of $A$. Thus, $[x, y]_c \subseteq H^{-1}(z)$ for all $x, y \in H^{-1}(z)$, and $H^{-1}(z)$ is connected. \end{proof} Let $\overline{\Gamma} = \{z \in \mathbb{R}^n \; : \; \#H^{-1}(z) > 1 \}$ be the set of points for which $H$ fails to be injective. Consider $p(\overline{\Gamma}) = \Gamma$, with $p: \mathbb{R}^n \to \mathbb{T}^n$ the canonical projection. \begin{lemma} \label{lem:lemma3} Under the hypotheses of Lemma \ref{prop:prop1}, $m(\overline{\Gamma}) = 0$. \end{lemma} \begin{proof} For all $z \in \mathbb{R}^n$ there is $x \in \mathbb{R}^n$ such that $H^{-1}(W^c_A(z)) = W^c_F(x)$. Consider $\overline{\Gamma}^c_z = W^c_A(z) \cap \overline{\Gamma}$. Since $\{H^{-1}(y) \; : \; y \in \overline{\Gamma}^c_z \}$ is a family of pairwise disjoint nontrivial intervals in $W^c_F(x)$, we obtain that $\overline{\Gamma}^c_z$ is countable for all $z \in \mathbb{R}^n$. Therefore, since $W^c_A$ is a linear foliation, Fubini's theorem provides that $m(\overline{\Gamma})=0$. 
\end{proof} Thus, $\Gamma = p(\overline{\Gamma})$ also has zero volume on $\mathbb{T}^n$, and it satisfies $\Gamma = \{z \in \mathbb{T}^n \; : \; \#h^{-1}(z) > 1 \}$ since $H$ is a lift of $h$. Moreover, $h^{-1}(\Gamma)=\Gamma.$ Lemma \ref{prop:prop1} implies that $H^{-1}(\overline{z})$ is a compact and connected one-dimensional central disk, with its length bounded by a constant that does not depend on $\overline{z}$. But if $\overline{x}, \overline{y} \in \mathbb{R}^n$ are such that $H(\overline{x}) = H(\overline{y})$, then we have that $d(F^k(\overline{x}), F^k(\overline{y})) < 2K$ for any $k \in \mathbb{Z}$. Thus $\overline{x} \in H^{-1}(\overline{z})$ if and only if $F^n(\overline{x}) \in H^{-1}(A^n(\overline{z}))$ for any $n \in \mathbb{N}$. Therefore, the length of $F^n(H^{-1}(\overline{z}))$ is also bounded, by the same constant as $H^{-1}(\overline{z})$. These bounds are the same for the projections of the stable manifolds, and this implies $h(f, h^{-1}(z)) = 0$ for all $z \in \mathbb{T}^n$. By Theorem \ref{teoA} there exists a measure of maximal entropy $\mu$. Then, using Ledrappier--Walters' formula, we have that $h_{\ast}\mu=m.$ Now, we want to prove that $\mu$ is the unique measure of maximal entropy that projects to $m$. Suppose, by contradiction, that there exists a measure of maximal entropy $\eta\neq \mu$; then $h_{\ast}\eta=m=h_{\ast}\mu$. From Lemma \ref{lem:lemma3}, it follows that $\mu(\Gamma)=\eta(\Gamma)=0$, and for every continuous function $\psi:\mathbb{T}^{n}\to \mathbb{R}$ we have that \begin{align*} \int \psi d\mu &= \int_{\mathbb{T}^n\setminus \Gamma}\psi d\mu = \int_{\mathbb{T}^n\setminus \Gamma}\psi\circ h^{-1}\circ h d\mu\\ &=\int_{\mathbb{T}^n\setminus\Gamma}\psi\circ h^{-1} dh_{\ast}\mu =\int_{\mathbb{T}^{n}\setminus\Gamma}\psi\circ h^{-1}dh_{\ast}\eta =\int \psi d\eta. \end{align*} Hence, $\mu=\eta$, a contradiction. Therefore, there exists a unique measure of maximal entropy. \end{document}
\begin{document} \title{Posetted trees and Baker-Campbell-Hausdorff product} \author{Donatella Iacono} \address{\newline Universit\`a degli Studi di Bari, \newline Dipartimento di Matematica, \newline Via E. Orabona 4, I-70125 Bari, Italy.} \email{[email protected]} \urladdr{www.dm.uniba.it/~iacono/} \author{Marco Manetti} \address{\newline Universit\`a degli studi di Roma ``La Sapienza'', \newline Dipartimento di Matematica \lq\lq Guido Castelnuovo\rq\rq, \newline P.le Aldo Moro 5, I-00185 Roma, Italy.} \email{[email protected]} \urladdr{www.mat.uniroma1.it/people/manetti/} \begin{abstract} We introduce the combinatorial notion of posetted trees and we use it to write an explicit expression of the Baker-Campbell-Hausdorff formula. \end{abstract} \subjclass[2010]{05C05,17B01} \keywords{Rooted trees, posets, Lie algebras} \maketitle \section{Introduction} If $a,b$ are continuous operators on a Hilbert space, we may write \[ e^ae^b=e^{a\bullet b},\qquad a\bullet b=a+b+\sum_{n=2}^{\infty}w_n(a,b),\] where $w_n$ is a universal, non commutative, homogeneous polynomial of degree $n$ with rational coefficients. The product $\bullet$ is called, after \cite{Baker,Campbell,Hausdorff}, the Baker-Campbell-Hausdorff (BCH) product: it is associative, and the BCH theorem asserts that every polynomial $w_n$ is a Lie element, i.e., a linear combination of nested commutators. However, the proof of the BCH theorem does not directly give an explicit description of $w_n$ as a Lie element; moreover, such a description is not unique in view of the Jacobi identity. The most famous explicit expression of $a \bullet b$, in terms of nested commutators, is probably the one due to E. 
Dynkin (see \cite[Equation 1]{Dynkin} or \cite[Equation 1.7.3]{DK}): \begin{equation}\label{equ.dynkinformula} a\bullet b=\sum_{n>0} \frac{(-1)^{n-1}}{n} \sum \frac{1}{p_{1}!q_{1}!\cdots p_{n}!q_{n}!\left(\sum_{i=1}^{n}(p_{i}+q_{i})\right)}\,ad(a)^{p_{1}}ad(b)^{q_{1}}\cdots ad(a)^{p_{n}}ad(b)^{q_{n}-1}b, \end{equation} where $ad(x)=[x,-]$ is the adjoint operator and the second sum is over all possible combinations of $p_1, q_1, \ldots , p_n, q_n \in \mathbb{N}$ such that $p_i +q_i > 0$ for $i = 1, \ldots ,n$.\par The literature about the Baker-Campbell-Hausdorff formula is huge. For instance: in 1998, V. Kathotia \cite{kathotia} derived a tree-summation expression for the BCH product over the real numbers using M. Kontsevich's universal formula for deformation quantization of Poisson manifolds; the coefficients of this formula are certain integrals on configuration spaces and it is still unknown whether they are rational numbers. In the papers \cite{FioMan} and \cite{FiMaMa}, the authors recognize the equation $a\bullet b\bullet c=0$ as the Maurer-Cartan equation of the canonical $L_{\infty}$ structure on the conormalized complex of singular cochains on the standard two dimensional simplex with values in a Lie algebra. This suggests the possibility of an explicit description of $a\bullet b$, again as a tree-summation formula, by using the standard tools of homological perturbation and homotopy transfer theory \cite{LodayVallette}. The reader may also consult \cite{WS} for a list of explicit and recursive formulas. The aim of this paper is to give a simple and elementary combinatorial description of the polynomial $w_n$ that uses some notions about planar rooted trees. The necessary combinatorial background is summarized in Sections \ref{sec.subroot} and \ref{sec.posetted}. 
In particular, for every finite planar rooted tree $\Gamma$, the set of its leaves admits a total ordering (from left to right) and also a partial ordering $\preceq$, which takes care of the position of leaves with respect to the subroots. Then, we define a posetted tree as a finite planar rooted tree whose leaves are labelled by elements of a partially ordered set (a poset), monotonically with respect to $\preceq$. Our main result (Theorem~\ref{thm.bchtrees}) gives an explicit description of every $w_n$ as a linear combination with rational coefficients of nested commutators, indexed by a certain set of posetted trees with $n$ leaves. The formula for the coefficients involves the Bernoulli numbers and is completely described in terms of the combinatorial data of posetted trees. \section{Subroots of planar rooted trees} \label{sec.subroot} This section introduces the notion, already known in the parallel logic programming community \cite{informatica}, of subroots of a planar rooted tree. Recall that a tree is called a \emph{rooted tree} if one vertex has been designated the \emph{root}. Every rooted tree has a natural structure of directed tree such that, for every vertex $u$, there exists a unique directed path from $u$ to the root. We shall write $u\to v$ if the vertex $v$ belongs to the directed path from $u$ to the root. A \emph{leaf} is a vertex without incoming edges: equivalently, a vertex $u$ is a leaf if the relation $v\to u$ implies $u=v$. A vertex is called \emph{internal} if it is not a leaf; notice that, if a rooted tree has at least two vertices, then the root is an internal vertex. \begin{center} \begin{figure} \caption{A rooted tree, with $v \to u$. 
} \end{figure} \end{center} From now on, we consider only planar rooted trees; following \cite{operads}, we denote by $\mathcal{T}$ the set of finite planar rooted trees with the root at the top and the leaves at the bottom (i.e., every directed path moves upward), and such that every internal vertex has at least two incoming edges. We also write \[ \mathcal{T}=\bigcup_{n>0} \mathcal{T}_n,\;\] where $\mathcal{T}_n$ is the set of planar rooted trees with $n$ leaves and, for every $\Gamma\in \mathcal{T}$, we denote by $L(\Gamma)$ the set of leaves of $\Gamma$. The planarity of the tree gives, for every internal vertex $v$, a total ordering of the edges ending in $v$, from the leftmost to the rightmost (see Figure~\ref{fig.orientazione}). \begin{center} \begin{figure} \caption{An element of $\mathcal{T}$.} \label{fig.orientazione} \end{figure} \end{center} \begin{definition} A \emph{rightmost branch} of a planar rooted tree $\Gamma\in \mathcal{T}$ is a maximal connected subgraph $\Omega\subset \Gamma$, with the property that every edge of $\Omega$ is a rightmost edge of $\Gamma$. A rightmost branch is called non trivial if it has at least two vertices. \end{definition} \begin{center} \begin{figure} \caption{An element of $\mathcal{T}$; the non trivial rightmost branches are dashed.} \label{fig.ramidestritratteggiati} \end{figure} \end{center} \begin{definition}\label{def.rigthmost m(v) d(v)} A \emph{local rightmost leaf} is a leaf lying on a non trivial rightmost branch. Given an internal vertex $v$, we call $m(v)$ the leaf lying on the rightmost branch containing $v$. We also denote by $d(v)$ the distance between $v$ and $m(v)$, as defined in \cite{ore}. \end{definition} \begin{definition} A \emph{subroot} is the vertex of a non trivial rightmost branch which is nearest to the root. The set of subroots of a finite planar rooted tree $\Gamma$ will be denoted by $R(\Gamma)$. 
\end{definition} Therefore, we have the natural bijections \[ \{\text{ subroots }\}\cong\{\text{ non trivial rightmost branches }\} \cong\{\text{ local rightmost leaves }\}.\] \begin{example} In the tree of Figure~\ref{fig.esempiomassimilocali}, the subroots are the vertices $r,a,c$ and $e$; the local rightmost leaves are the leaves $2,3,5$ and $7$. Moreover, $m(a)=3, m(c)=2, m(e)=5$ and $ m(r)=m(b)=m(f)=7$; and $d(r)= 3$, $d(b)=2$ and $d(a)= d(c)= d(e)= d(f)= 1$. \end{example} \begin{center} \begin{figure} \caption{The subroots are denoted by $\bullet$, while the local rightmost leaves by $\scriptscriptstyle{\otimes}$.} \label{fig.esempiomassimilocali} \end{figure} \end{center} A planar rooted tree $\Gamma\in\mathcal{T}$ is a \emph{binary tree} if every internal vertex has exactly two incoming edges. We use the notation \[ \mathcal{B}=\bigcup_{n>0} \mathcal{B}_n \subset \mathcal{T},\; \] where $\mathcal{B}_n$ is the set of planar binary rooted trees with $n$ leaves. Using the notions introduced above, it is very easy to see that a tree $\Gamma \in \mathcal{T}_n$ is a binary tree if and only if it satisfies the equality \[\sum_{v \in R(\Gamma)}d(v)=n-1.\] Let $R$ be a (non associative) algebra over a field $\mathbb{K}$ and $\Gamma\in \mathcal{B}$ a planar binary rooted tree. Labelling the leaves of $\Gamma$ with elements of $R$, we can associate to it the product element of $R$ obtained by the usual operadic rules \cite{LodayVallette,operads}, i.e., we perform the product of $R$ at every internal vertex in the order arising from the planar structure of the directed tree. 
For instance, the following labelled tree \begin{center} \begin{picture}(200,65) \unitlength=0.20mm \letvertex A=(200,125)\letvertex B=(110,70) \letvertex C=(95,40)\letvertex D=(80,10) \letvertex E=(110,10) \letvertex F=(140,10) \letvertex H=(245,10)\letvertex L=(275,10)\letvertex M=(300,10) \letvertex N=(330,10) \letvertex O=(315,35) \letvertex P=(262,35) \letvertex Q=(290,70) \put(73,-5){$\scriptstyle{r_1}$} \put(103,-5){$\scriptstyle{r_2}$} \put(133,-5){$\scriptstyle{r_3}$} \put(240,-5){$\scriptstyle{r_4}$} \put(270,-5){$\scriptstyle{r_5}$} \put(295,-5){$\scriptstyle{r_6}$} \put(325,-5){$\scriptstyle{r_7}$} \drawundirectededge(B,A){} \drawundirectededge(E,C){} \drawundirectededge(F,B){} \drawundirectededge(Q,A){} \drawundirectededge(C,B){} \drawundirectededge(D,C){} \drawundirectededge(E,C){} \drawundirectededge(F,B){} \drawundirectededge(P,Q){} \drawundirectededge(H,P){}\drawundirectededge(L,P){} \drawundirectededge(M,O){}\drawundirectededge(O,Q){} \drawundirectededge(N,O){} \drawvertex(A){$\bullet$}\drawvertex(B){$\bullet$} \drawvertex(C){$\bullet$} \drawvertex(P){$\bullet$} \drawvertex(D){$\circ$} \drawvertex(E){$\scriptscriptstyle{\otimes}$}\drawvertex(F){$\scriptscriptstyle{\otimes}$} \drawvertex(N){$\scriptscriptstyle{\otimes}$} \drawvertex(L){$\scriptscriptstyle{\otimes}$} \drawvertex(M){$\circ$} \drawvertex(H){$\circ$}\drawvertex(O){$\circ$} \drawvertex(P){$\circ$} \drawvertex(Q){$\circ$} \end{picture} \end{center} gives the product $((r_1 r_2)r_3) ((r_4 r_5 )(r_6 r_7)) \in R$. Given any map $f: L(\Gamma) \to R$ (the labelling), we denote by $Z_{\Gamma}(f) \in R$ the corresponding product element. If $S\subset R$, then the elements $Z_{\Gamma}(f)$, with $\Gamma\in\mathcal{B}$ and $f\colon L(\Gamma)\to S$, are a set of generators of the subalgebra generated by $S$. If $R$ is either symmetric or skewsymmetric (e.g., a Lie algebra), then we may reduce the set of generators by a suitable choice of the labelling. 
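The operadic evaluation $Z_\Gamma(f)$ is straightforward to implement for binary trees; the sketch below (the encoding and names are ours, not from the paper) represents a planar binary tree as a nested pair of its two subtrees and applies the product of $R$ at every internal vertex, recovering the bracketing of the example above:

```python
# A minimal sketch (not from the paper) of the operadic evaluation Z_Gamma(f):
# a planar binary rooted tree is encoded as a nested pair (left, right),
# leaves as labels, and the binary product of the algebra is applied at
# every internal vertex following the planar structure.

def evaluate(tree, labelling, product):
    """Return Z_Gamma(f) for a binary tree, given leaf labels and a product."""
    if not isinstance(tree, tuple):          # a leaf: look up its label f(leaf)
        return labelling[tree]
    left, right = tree
    return product(evaluate(left, labelling, product),
                   evaluate(right, labelling, product))

# the tree of the example above: ((r1 r2) r3) ((r4 r5)(r6 r7))
tree = ((("r1", "r2"), "r3"), (("r4", "r5"), ("r6", "r7")))
labels = {f"r{i}": f"r{i}" for i in range(1, 8)}

# evaluating in the free magma (formal parenthesized words) shows the bracketing
word = evaluate(tree, labels, lambda x, y: f"({x} {y})")
assert word == "(((r1 r2) r3) ((r4 r5) (r6 r7)))"
```

Passing the commutator of a Lie algebra as `product` yields the nested commutators used in the rest of the paper.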
Keeping in mind our main application (the BCH product), a possible way of doing that is by introducing the combinatorial notion of posetted trees. \section{Posetted trees} \label{sec.posetted} Using the notion of subroot, we can define a partial order $ \preceq $ on the set of leaves $L(\Gamma)$. \begin{definition} Given two leaves $l_1$ and $l_2$ in a tree $\Gamma\in\mathcal{T}$, we say that $l_1 \preceq l_2$ if $l_1=l_2$ or there exists a subroot $v \in R(\Gamma)$ such that $l_2=m(v)$ and $l_1 \to v$. \end{definition} \begin{center} \begin{figure} \caption{Here, we have $l_1\preceq l_6$, $l_2 \preceq l_3 \preceq l_5 \preceq l_6$ and $l_4 \preceq l_5 $.} \label{fig.ordineFoglie} \end{figure} \end{center} \begin{definition} For every poset $(A ,\leq)$, we denote \[ \mathcal{T}(A)=\{ (\Gamma, f) \, | \, \Gamma \in \mathcal{T}, \, f:(L(\Gamma), \preceq) \to (A ,\leq), f \mbox{ monotone}\}. \] In a similar way, we define $\mathcal{B}(A)$ and, for every $n>0$, $\mathcal{T}_n(A)$ and $\mathcal{B}_n(A)$. \end{definition} We call \emph{posetted trees} the elements of $\mathcal{T}(A)$. \begin{example}\label{example. B(a < b)} The sets $\mathcal{B}_1(b\le a)$, $\mathcal{B}_2(b\le a)$ and $\mathcal{B}_3(b\le a)$ contain 2, 3 and 8 posetted trees, respectively (see Figures \ref{fig.b2ab} and \ref{fig.b3ab}). \end{example} \begin{center} \begin{figure} \caption{The 5 posetted trees of $\mathcal{B}_1(b\le a)$ and $\mathcal{B}_2(b\le a)$.} \label{fig.b2ab} \end{figure} \end{center} \begin{center} \begin{figure} \caption{The 8 posetted trees of $\mathcal{B}_3(b\le a)$.} \label{fig.b3ab} \end{figure} \end{center} \begin{remark} If $A=\{1,\ldots,m\}$ with the usual order, then there exists a natural inclusion of $\mathcal{T}_n(A)$ into the set of admissible graphs with $n$ vertices of the first kind and $m$ vertices of the second kind considered in \cite{kathotia,Konts}. \end{remark} Assume that $A$ is a subset of a (skew)commutative algebra $R$ and choose a total ordering on $A$. 
Then, it is easy to see that the elements $Z_{\Gamma}(f)$, with $(\Gamma,f)\in \mathcal{B}(A)$, generate, as a $\mathbb{K}$-vector space, the subalgebra generated by $A$. \section{An expression of the Baker-Campbell-Hausdorff product in terms of posetted trees} Let $L$ be a Lie algebra over a field $\mathbb{K}$ of characteristic $0$, which is complete with respect to its lower central series $L^1=L$, $L^{n+1}=[L^n,L]$. Denote by $\bullet\colon L\times L\to L$ the Baker-Campbell-Hausdorff (BCH) product, obtained formally by the formula $a\bullet b=\log(e^ae^b)$. It is well known that \[ a \bullet b =a+b + \displaystyle\frac{1}{2}[a,b]+\frac{1}{12} [a,[a,b]]+\frac{1}{12} [b,[b,a]]+ \cdots \] is an element of the Lie subalgebra generated by $a$ and $b$ and, then, it can be expressed as an infinite sum \[ a\bullet b=\sum_{(\Gamma,f)\in\mathcal{B}(b\le a)} s_{(\Gamma,f)}Z_{\Gamma}(f),\] for a sequence $s_{(\Gamma,f)}\in\mathbb{K}$. Clearly, in view of the alternating properties of the product and the Jacobi identity, such a sequence is not unique. The Dynkin Formula \eqref{equ.dynkinformula} provides a sequence as above where $s_{(\Gamma,f)}=0$ whenever $\Gamma$ has at least 2 subroots; on the other hand, the explicit expression of the nonvanishing $s_{(\Gamma,f)}$ is rather complicated. Here, we describe another sequence $b_{(\Gamma,f)}$ of rational numbers with the above properties. First of all, define the sequence of rational numbers $\{b_n\}$, for every $n\ge 0$, by their ordinary generating function \[ \sum_{n\ge 0}b_n x^n=\frac{x}{e^x-1}\;. \] Notice that $b_n=B_n/n!$, where the $B_n$ are the Bernoulli numbers. 
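The coefficients $b_n$ are easy to generate in exact arithmetic: multiplying $\sum_{n\ge 0} b_n x^n$ by $(e^x-1)/x=\sum_{k\ge 0} x^k/(k+1)!$ and comparing coefficients gives $b_0=1$ and $b_n=-\sum_{j=0}^{n-1} b_j/(n-j+1)!$ for $n\ge 1$. A minimal sketch (the function name is ours):

```python
from fractions import Fraction

# Compute b_0, ..., b_{n_max} from the generating function x/(e^x - 1):
# since (e^x - 1)/x = sum_k x^k/(k+1)!, the coefficients satisfy
#     b_0 = 1   and   b_n = - sum_{j=0}^{n-1} b_j / (n - j + 1)!   for n >= 1,
# i.e. b_n = B_n / n! with B_n the Bernoulli numbers (B_1 = -1/2 convention).

def bch_coefficients(n_max):
    fact = [Fraction(1)]                       # factorials 0!, 1!, ...
    for k in range(1, n_max + 2):
        fact.append(fact[-1] * k)
    b = [Fraction(1)]                          # b_0 = 1
    for n in range(1, n_max + 1):
        b.append(-sum(b[j] / fact[n - j + 1] for j in range(n)))
    return b

b = bch_coefficients(6)
assert b[0] == 1 and b[1] == Fraction(-1, 2)
assert b[2] == Fraction(1, 12) and b[4] == Fraction(-1, 720)
assert b[3] == 0 and b[5] == 0                 # b_1 is the only nonzero odd term
```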
In particular, the only non trivial odd term of the sequence is $b_1=-\frac{1}{2}$ and we have: \[ b_0=1, \quad b_2=\frac{1}{12},\quad b_4=-\frac{1}{720},\quad\ldots \, .\] \begin{definition} Given a poset $A$ and a posetted tree $(\Gamma,f)\in \mathcal{T}(A)$, let us define \[ b_{(\Gamma,f)} := \prod_{v \in R(\Gamma)} \frac{b_{d(v)}}{t(v)}, \] where the $b_n$'s are the rational numbers above and, for every subroot $v\in R(\Gamma)$, we have \[ t(v)=\text{ number of leaves $u\in L(\Gamma)$ such that $u\to v$ and $f(u)=f(m(v))$}.\] We remind that $m(v)$ is the leaf lying on the rightmost branch containing $v$ (Definition \ref{def.rigthmost m(v) d(v)}). \end{definition} \begin{example} Let $A=\{ b \leq a\}$ and consider the posetted tree \begin{center} \begin{picture}(300,60) \unitlength=0.30mm \put(110,30){$(\Gamma,f):$} \letvertex AA=(200,60)\letvertex BB=(185,35) \letvertex CC=(170,10)\letvertex DD=(200,10) \letvertex EE=(230,10)\letvertex FF=(215,35) \drawundirectededge(AA,CC){}\drawundirectededge(AA,EE){} \drawundirectededge(BB,DD){} \drawvertex(AA){$\bullet$}\drawvertex(BB){$\bullet$} \drawvertex(CC){$\circ$}\drawvertex(DD){$\scriptscriptstyle{\otimes}$} \drawvertex(EE){$\scriptscriptstyle{\otimes}$} \put(175,35){$\scriptstyle{u}$}\put(192,59){$\scriptstyle{v}$} \put(168,0){$\scriptstyle{b}$}\put(198,0){$\scriptstyle{a}$} \put(228,0){$\scriptstyle{a}$} \end{picture} \end{center} Here, we have $d(u)=d(v)=1; \ t(u)=1; \ t(v)=2$; therefore, $ b_{(\Gamma,f)}=\dfrac{b_1}{ 1} \cdot \dfrac{b_1}{ \,2}=\dfrac{1}{8}. 
$ \end{example} \begin{theorem}\label{thm.bchtrees} Let $L$ be a Lie algebra as above; then, for every positive integer $k$ and every $a_1, \ldots , a_k \in L$, we have \begin{equation}\label{equ.bchperk} a_k\bullet a_{k-1} \bullet \cdots \bullet a_1 =\sum_{(\Gamma,f) \, \in \, \mathcal{B}\,(a_1 \leq a_2 \leq \cdots \leq a_k)} b_{(\Gamma,f)} Z_\Gamma(f), \end{equation} \begin{equation}\label{equ.bchperkbis} a_1\bullet a_{2} \bullet \cdots \bullet a_k =\sum_{n=1}^{+\infty}(-1)^{n-1}\sum_{(\Gamma,f) \, \in \, \mathcal{B}_n\,(a_1 \leq a_2 \leq \cdots \leq a_k)} b_{(\Gamma,f)} Z_\Gamma(f). \end{equation} In particular, for $a,b\in L$, we have \begin{equation}\label{equ.bchperdue} a\bullet b=\sum_{(\Gamma,f) \in \mathcal{B}(b \leq a)} b_{(\Gamma,f)} Z_\Gamma(f). \end{equation} \end{theorem} \begin{proof} Let us first prove Formula \eqref{equ.bchperdue}. Let $\mathcal{C}'\,(b\leq a)\subset \mathcal{B}\,(b\leq a)$ be the subset of posetted trees having every local rightmost leaf labelled with $a$ and set $\mathcal{C}\,(b\leq a)= \mathcal{C}'\,(b\leq a)\cup \mathcal{B}_1(b)$. Since the bracket is skewsymmetric, we have that $Z_\Gamma(f)=0$ for every $(\Gamma,f)\notin\mathcal{C}\,(b\leq a)$; therefore, \[ \sum_{(\Gamma,f) \in \mathcal{B}(b \leq a)} b_{(\Gamma,f)} Z_\Gamma(f)= \sum_{(\Gamma,f) \in \mathcal{C}(b\leq a)} b_{(\Gamma,f)} Z_\Gamma(f).\] In \cite[Theorem 
1.6.1]{DK} and \cite{hall}, the following recursive formula for the Baker-Campbell-Hausdorff product is proved: \[a\bullet b=\sum_{r\ge 0}Z_r,\] where \[Z_0=b,\qquad Z_{r+1}=\frac{1}{r+1}\sum_{m\ge 0}b_m \sum_{i_1+\cdots+i_m=r}({\operatorname{ad}\,} Z_{i_1}) ({\operatorname{ad}\,} Z_{i_2})\cdots({\operatorname{ad}\,} Z_{i_m})a,\quad \text{for }r\ge 0.\] For every $r>0$, let $\mathcal{C}_r\subset \mathcal{C}(b\leq a)$ be the subset of posetted trees with exactly $r$ leaves labelled with $a$; we prove that, for every $r\ge 0$, we have \begin{equation}\label{equ.formulazetaerre} Z_r=\sum_{(\Gamma,f) \in \mathcal{C}_r} b_{(\Gamma,f)} Z_\Gamma(f). \end{equation} This is clear for $r=0$; for $r=1$, we have \[ Z_{1}=\sum_{m\ge 0}b_m({\operatorname{ad}\,} Z_{0})^m a=\sum_{m\ge 0}b_m({\operatorname{ad}\,} b)^m a,\] whereas $\mathcal{C}_1=\{\Omega_m\}$, $m\ge 0$, is the set of posetted trees of Bernoulli type \cite{torossian}, i.e., \begin{center} \begin{picture}(290,30) \unitlength=0.30mm \letvertex A=(105,45) \letvertex B=(135,40) \letvertex C=(165,35) \letvertex D=(195,30) \letvertex E=(225,25) \letvertex F=(255,20) \letvertex a=(105,5) \letvertex b=(135,5) \letvertex c=(165,5) \letvertex d=(195,5) \letvertex e=(225,5) \letvertex f=(255,5) \letvertex g=(285,5) \put(60,15){$\Omega_m:$} \drawundirectededge(A,B){} \drawundirectededge(B,C){} \drawundirectededge(C,D){} \drawundirectededge(D,E){} \drawundirectededge(F,g){} \drawundirectededge(A,a){} \drawundirectededge(F,f){} \drawundirectededge(B,b){} \drawundirectededge(E,e){} \drawundirectededge(C,c){}\drawundirectededge(D,d){} \dashline[0]{2}(228,25)(252,20) \drawvertex(A){$\bullet$}\drawvertex(B){$\circ$} \drawvertex(C){$\circ$}\drawvertex(D){$\circ$} \drawvertex(E){$\circ$}\drawvertex(F){$\circ$} \drawvertex(a){$\circ$}\drawvertex(c){$\circ$} \drawvertex(b){$\circ$}\drawvertex(d){$\circ$} \drawvertex(e){$\circ$}\drawvertex(f){$\circ$} \drawvertex(g){$\scriptscriptstyle\otimes$} \put(103,-5){$\scriptstyle{b}$} 
\put(133,-5){$\scriptstyle{b}$} \put(163,-5){$\scriptstyle{b}$} \put(193,-5){$\scriptstyle{b}$} \put(223,-5){$\scriptstyle{b}$} \put(253,-5){$\scriptstyle{b}$} \put(283,-5){$\scriptstyle{a}$} \end{picture} \end{center} where $m$ is the number of leaves labelled with $b$. Therefore, the coefficient $b_{(\Omega_m)}$ is exactly $b_m$ and so \[ Z_1=\sum_{(\Gamma,f) \in \mathcal{C}_1} b_{(\Gamma,f)} Z_\Gamma(f).\] \noindent Moreover, every element of $\mathcal{C}_{r+1}$ is obtained in a unique way starting from a tree $\Omega_m$ and grafting, at each of the $m$ leaves labelled with $b$, the roots of elements of $\mathcal{C}_{i_1},\ldots,\mathcal{C}_{i_m}$, with $i_1+\cdots+i_m=r$ (for the definition of the grafting see \cite[Definition 1.37]{operads}). Therefore, the proof of \eqref{equ.formulazetaerre} follows easily by induction on $r$. Next, since $\bullet$ is associative, we have \[ a_1\bullet a_{2} \bullet \cdots \bullet a_k=-((-a_k)\bullet\cdots\bullet(-a_1)),\] and Formula \eqref{equ.bchperkbis} follows immediately from \eqref{equ.bchperk}. Finally, setting $b=a_{k-1} \bullet \cdots \bullet a_1$, we have that every posetted tree of $\mathcal{B}\,(a_1 \leq a_2 \leq \cdots \leq a_k)$ can be described in a unique way as a posetted tree in $\mathcal{C}\,(b\leq a_k)$, where at every leaf labelled with $b$ is grafted the root of a posetted tree of $\mathcal{B}\,(a_1 \leq a_2 \leq \cdots \leq a_{k-1})$. In view of the associativity relation \[a_k\bullet a_{k-1} \bullet \cdots \bullet a_1= a_k\bullet b, \] we obtain that \eqref{equ.bchperk} is a consequence of $\displaystyle a_k\bullet b=\sum_{(\Gamma,f) \in \mathcal{C}(b\le a_k)} b_{(\Gamma,f)} Z_\Gamma(f)$. \end{proof} \begin{remark} Choose $a_1=b$ and $a_2=a$ in Equation \eqref{equ.bchperk}, and $a_1=a$ and $a_2=b$ in Equation \eqref{equ.bchperkbis}. 
Comparing the coefficient of the product $ad(b)^n(a)$ in both equations, we obtain the following relation \begin{equation}\label{equazione bernoulli number} (1+n(-1)^n)b_n=-\sum_{i=1}^{n-1}(-1)^ib_ib_{n-i},\qquad n>0. \end{equation} Indeed, the coefficient of $ad(b)^n(a)$ in Equation \eqref{equ.bchperkbis} comes from the Bernoulli tree $\Omega_n$ and so it is exactly $b_n$. On the other hand, we need to consider the subset $S(n)$ of trees ${(\Gamma,f) \in \mathcal{B}(a \leq b)} $ with only one subroot, $n$ leaves labelled $b$ and one leaf labelled $a$. For any $ (\Gamma,f) \in S(n)$, we have $Z_\Gamma(f)=\pm ad(b)^n(a)$, and we can define $C_n$ as \[ \sum_{(\Gamma,f)\in S(n)} b_{(\Gamma,f)} Z_\Gamma(f)=C_n ad(b)^n(a).\] Comparing the coefficients, we have \[C_n=(-1)^n b_n.\] Next, let us compute $C_n$ recursively. There are two different types of contributions to $C_n$ due to the following graphs. The first contribution is due to the graph with only one subroot; in this case, the contribution is $\dfrac{b_n}{n} ad(b)^{n-1}([a,b])=-\dfrac{b_n}{n} ad(b)^{n}(a).$ The other contribution is due to the graphs obtained from a graph in $S(i)$, for every $i=1,\ldots,n-1$, by grafting, at the leaf labelled with $a$, a graph of $S(n-i)$. Therefore, for every fixed $i$, the contribution is \[ \frac{b_i}{n}C_{n-i} ad(b)^{i-1}([ad(b)^{n-i}(a),b])=-\frac{b_i}{n}C_{n-i} ad(b)^{n}(a).\] Summing up, we have \[ C_n=-\dfrac{b_n}{n}-\sum_{i=1}^{n-1}\frac{b_i}{n}C_{n-i};\] and, since $C_n=(-1)^n b_n$, we get the relation \[b_n(1+n(-1)^n)=-\sum_{i=1}^{n-1}(-1)^ib_i b_{n-i}.\] Note that, in the previous computation, we have only used the fact that the product is associative; the same relation therefore holds for every associative product defined as in Equation \eqref{equ.bchperk}.
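Relation \eqref{equazione bernoulli number} is easy to check numerically. The following sketch (an illustration, not part of the paper) assumes the standard choice $b_n = B_n/n!$, where the $B_n$ are the Bernoulli numbers with $B_1 = -1/2$, i.e.\ the Taylor coefficients of $x/(e^x-1)$:

```python
from fractions import Fraction
from math import comb, factorial

def bernoulli(N):
    """Bernoulli numbers B_0..B_N (convention B_1 = -1/2), computed
    from the recurrence sum_{k=0}^{n} C(n+1, k) B_k = 0."""
    B = [Fraction(1)]
    for n in range(1, N + 1):
        B.append(Fraction(-1, n + 1) * sum(comb(n + 1, k) * B[k] for k in range(n)))
    return B

N = 12
B = bernoulli(N)
b = [B[n] / factorial(n) for n in range(N + 1)]  # b_n = B_n / n!

for n in range(1, N + 1):
    lhs = (1 + n * (-1) ** n) * b[n]
    rhs = -sum((-1) ** i * b[i] * b[n - i] for i in range(1, n))
    assert lhs == rhs, (n, lhs, rhs)
print("relation verified for n = 1, ...,", N)
```

The assertions pass for every $n$ up to $12$; for odd $n \geq 3$ both sides vanish, since $b_n = 0$ and the two terms involving $b_1 b_{n-1}$ cancel.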
More precisely, let $a_n$ be any sequence in $\mathbb{K}$, and for any $(\Gamma,f) \in \mathcal{B}(b\leq a)$, define \[ a_{(\Gamma,f)} := \prod_{v \in R(\Gamma)} \frac{a_{d(v)}}{t(v)}, \] and the product \begin{equation} a\ast b=\sum_{(\Gamma,f) \in \mathcal{B}(b \leq a)} a_{(\Gamma,f)} Z_\Gamma(f). \end{equation} \end{remark} \begin{proposition} In the notation above, the product $\ast$ is associative if and only if there exists an $h\in\mathbb{K}$ such that $a_n =h^n b_n$, for every $n>0$. \end{proposition} \begin{proof} One implication is clear: if $a_n =h^n b_n$, then \[ a\ast b=\sum_{(\Gamma,f) \in \mathcal{B}(b \leq a)} a_{(\Gamma,f)} Z_\Gamma(f)= \sum_{(\Gamma,f) } \prod_{v \in R(\Gamma)}h^{d(v)} b_{(\Gamma,f)} Z_\Gamma(f)= h^{-1}((ha)\bullet (hb)); \] this implies that the product $\ast$ is associative (in the last equality we use that $\sum_{v \in R(\Gamma)}d(v)=n-1$). As regards the other implication, assume that the product $\ast$ is associative; then, Equation \eqref{equ.bchperkbis} holds for the product $\ast$ instead of $\bullet$. Arguing as in the above remark, we conclude that the numbers $a_n$ must satisfy Equation~\eqref{equazione bernoulli number}, and this easily implies that $a_n= (-2a_1)^n b_n$, for every $n>0$. \end{proof} \end{document}
\begin{document} \renewcommand{\theenumi}{\roman{enumi}} \renewcommand{\labelenumi}{\theenumi)} \swapnumbers \newtheorem{0-THM1}{Theorem}[section] \newtheorem{0-PROP1}[0-THM1]{Proposition} \newtheorem{0-DEF1}[0-THM1]{Definition} \newtheorem{0-PROP2}[0-THM1]{Proposition} \newtheorem{0-EXPLE1}[0-THM1]{Example} \newtheorem{theorem}[0-THM1]{Theorem} \newtheorem{2-LEM1}{Lemma}[section] \newtheorem{2-LEM2}[2-LEM1]{Lemma} \newtheorem{2-THM1}[2-LEM1]{Theorem} \newtheorem{2-THM2}[2-LEM1]{Theorem} \newtheorem{3-DEF1}{Definition}[section] \newtheorem{3-THM1}[3-DEF1]{Theorem} \newtheorem{3-REM0}[3-DEF1]{Remark} \newtheorem{3-DEF2}[3-DEF1]{Definition} \newtheorem{3-LEM1}[3-DEF1]{Lemma} \newtheorem{3-REM1}[3-DEF1]{Remark} \newtheorem{3-COR1}[3-DEF1]{Corollary} \newtheorem{3-REM2}[3-DEF1]{Remark} \newtheorem{4-PROP1}{Proposition}[section] \newtheorem{4-PROP2}[4-PROP1]{Proposition} \newtheorem{4-REM1}[4-PROP1]{Remark} \newtheorem{4-PROP3}[4-PROP1]{Proposition} \newtheorem{4-PROP4}[4-PROP1]{Proposition} \newtheorem{4-COR1}[4-PROP1]{Corollary} \newtheorem{A-PROP1}{Proposition}[section] \newtheorem{A-REM1}[A-PROP1]{Remark} \newtheorem{A-PROP2}[A-PROP1]{Proposition} \title{Stability of propagation features under time-asymptotic approximations for a class of dispersive equations} \author{Florent Dewez\footnote{Inria, Lille-Nord Europe research center, France. E-mail: [email protected] or [email protected]}} \date{} \maketitle \begin{abstract} We consider solutions in frequency bands of dispersive equations on the line defined by Fourier multipliers, these solutions being considered as wave packets.
In this paper, a refinement of an existing method for expanding the solution formulas time-asymptotically is proposed, leading to a first term inheriting the mean position of the true solution together with a constant variance error. In particular, this first term is supported in a space-time cone whose origin position depends explicitly on the initial state, implying especially a shifted time-decay rate. This method, which takes into account both spatial and frequency information of the initial state, then stabilizes certain propagation features and permits a better description of the motion and the dispersion of the solutions of interest. The results are achieved firstly by making the cone origin apparent in the solution formula, secondly by applying precisely an adapted version of the stationary phase method with a new error bound, and finally by minimizing the error bound with respect to the cone origin. \end{abstract} \noindent \textbf{Mathematics Subject Classification (2010).} Primary 35B40; Secondary 35S10, 35B30, 35Q41, 35Q40.\\ \noindent \textbf{Keywords.} Wave packet, dispersive equation, oscillatory integral, stationary phase method, frequency band. \section{Introduction} In this paper, we are interested in the time-asymptotic behaviour of wave packets of the form \begin{equation} \label{eq:sol_formula0} u_f(t,x) = \frac{1}{2 \pi} \int_\mathbb{R} \mathcal{F} u_0 (p) \, e^{-itf(p) + ixp} \, dp \; , \end{equation} where $t \in \mathbb{R}$, $x \in \mathbb{R}$ and $f: \mathbb{R} \longrightarrow \mathbb{R}$ is a strictly convex symbol. We suppose that the Fourier transform $\mathcal{F} u_0$ of $u_0 \in \mathcal{S}(\mathbb{R})$ is supported in a bounded interval $[p_1, p_2]$, where $p_1 < p_2$ are two finite real numbers.
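To make the frequency-band picture concrete, here is a small numerical sketch (an illustration, not taken from the paper; the symbol $f(p)=p^2/2$ and the bump amplitude are assumed choices). It evaluates \eqref{eq:sol_formula0} by a Riemann sum and checks that, at a moderately large time $t$, most of the $L^2$ mass of the wave packet lies in the region $p_1 t \lesssim x \lesssim p_2 t$ predicted by the group velocities, while the total mass matches the Plancherel value:

```python
import numpy as np

# Assumed illustrative data: f(p) = p^2/2 and a smooth bump F u_0
# supported in the frequency band [p1, p2].
p1, p2 = 1.0, 2.0
p = np.linspace(p1, p2, 801)[1:-1]        # open interval: avoids 1/0 below
dp = p[1] - p[0]
A = np.exp(-1.0 / ((p - p1) * (p2 - p)))  # F u_0(p)

def u(t, x):
    """Riemann-sum evaluation of (1/2pi) int A(p) e^{-itp^2/2 + ixp} dp."""
    phase = np.exp(-1j * t * p[None, :] ** 2 / 2 + 1j * np.outer(x, p))
    return (phase @ A) * dp / (2 * np.pi)

t, dx = 10.0, 0.05
x = np.arange(-40.0, 70.0, dx)
dens = np.abs(u(t, x)) ** 2

total = dens.sum() * dx                         # L^2 mass of the solution
plancherel = (A ** 2).sum() * dp / (2 * np.pi)  # exact L^2 mass of u_0
cone = (x >= p1 * t - 10) & (x <= p2 * t + 10)
frac = dens[cone].sum() * dx / total

print(abs(total / plancherel - 1))  # mass conservation defect (small)
print(frac)                         # mass fraction near the cone (close to 1)
```

One should observe a mass defect at the level of the quadrature error and a mass fraction close to $1$ in the slightly enlarged cone, in line with the group-velocity heuristic.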
In terms of evolution equations, wave packets of the form \eqref{eq:sol_formula0} are solutions of the following type of dispersive equations: \begin{equation} \label{eq:evoleq0} \left\{ \begin{array}{l} \left[ i \, \partial_t - f \big(D\big) \right] u_f(t) = 0 \\ [2mm] u_f(0) = u_0 \end{array} \right. \; , \end{equation} for $t \in \mathbb{R}$, where $f(D)$ is the Fourier multiplier associated with $f$ and $u_0 \in \mathcal{S}(\mathbb{R})$ is the initial datum, supposed to lie in the frequency band $[p_1, p_2]$. For instance, the solutions of the free Schrödinger equation, of the Klein-Gordon equation or of certain higher-order evolution equations can be described by wave packets of the form \eqref{eq:sol_formula0}; we refer to \cite[Section 6]{D17-1} for further details. In the present setting, the frequency band hypothesis prevents the wave packet $u_f$ from being too localized in space, according to the uncertainty principle, and hence makes the description of its spatial propagation challenging.\\ Some approaches solving this problem time-asymptotically have been developed. In \cite{AMHDR2012}, the authors propose to approximate the solution of the Klein-Gordon equation on a star-shaped network by a spatially localized function, the latter tending to the true solution as the time tends to infinity. This has been achieved by applying precisely the version of the stationary phase method given in \cite[Theorem 7.7.5]{H83} to an integral solution formula of the equation, the desired approximation being given by the first term of the asymptotic expansion from the stationary phase method. The principle of the stationary phase method, which consists in evaluating the integrand of the oscillatory integral of interest at the stationary point of the phase function, combined with the bounded frequency band hypothesis leads to an approximation supported in a space-time cone: this cone describes both the motion and the dispersion of the solution for large times.
In particular, the results exhibit in this setting the influence of the tunnel effect on the time-decay rate of the solution. In \cite{AMD17}, this approach has been adapted to the setting of the free Schrödinger equation on the line with initial states having integrable singular frequencies in order to study the effect of such singularities on the time-asymptotic behaviour. The version of the stationary phase method proposed in \cite[Section 2.9]{E56} has been used since it covers the case of singular amplitudes; we mention that the authors in \cite{AMD17} propose modern formulations and detailed proofs of the results from \cite{E56}. The results show that a free particle with a singular frequency tends to travel at the speed associated with this frequency. This is highlighted by the existence of space-time cones, containing the direction given by the singular frequency, in which the time-decay rates are below the rate of the classical decay inherited from the classical dispersion. However, the expansion provided in \cite{AMD17} is proved to blow up when approaching the space-time direction associated with the singular frequency, preventing the method from approximating uniformly the true solution in regions containing this direction. This is due to the fact that the first term of the expansion inherits the singularity of the initial state; see \cite[Sec. 3]{AMD17} for more details. To tackle this issue, another approach has been proposed in \cite{D17-1}: the precision of asymptotic expansions to one term is removed in favour of less precise but more flexible explicit and uniform estimates. In particular, they cover the above critical regions. Furthermore, this flexibility has made it possible to consider not only the free Schrödinger equation but also equations of type \eqref{eq:evoleq0} with initial data having singular frequencies.
The uniform estimates for the solutions have been achieved by applying a generalization \cite[Theorem 4.8]{D17-1} of the classical van der Corput Lemma \cite[Prop. 2, Chap. VIII]{S93} to the case of singular and integrable amplitudes. We mention now that the approach developed in \cite{AMHDR2012} and the subsequent adaptations appearing in \cite{AMD17, D17-1} describe the time-asymptotic motion and the dispersion of the solutions by exploiting only the frequency information of the initial state, and not the spatial information. For instance, this is highlighted by the fact that the origin of the space-time cones resulting from the above methods is always at the space-time point $(0, 0)$, whatever the localization of the initial state is. Consequently the associated first terms provide poor approximations of the solutions for a long time for initial states spatially far from the origin. \\ In view of this, we aim firstly at extending the approach used in \cite{AMHDR2012} to the general setting of dispersive equations of the form \eqref{eq:evoleq0} and secondly at refining it in order to exploit the spatial information of the initial datum.\\ Regarding the first point, we establish uniform and explicit remainder estimates for a time-asymptotic expansion to one term of the wave packet \eqref{eq:sol_formula0} solution of \eqref{eq:evoleq0}. As in \cite{AMHDR2012, AMD17}, we apply our new version of the stationary phase method for oscillatory integrals of the form \begin{equation*} \forall \, \omega > 0 \qquad \int_\mathbb{R} U(p) \, e^{i \omega \psi(p)} \, dp \; , \end{equation*} to the Fourier solution formula of equation \eqref{eq:evoleq0}, for a sufficiently regular and compactly supported $\mathcal{F} u_0$.
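For a concave phase with interior stationary point, the first term produced by the stationary phase method is the classical one, $U(p_0)\,e^{i\omega\psi(p_0)}\sqrt{2\pi/(\omega|\psi''(p_0)|)}\,e^{-i\pi/4}$. As a sanity check (with assumed illustrative choices of $U$ and $\psi$, not taken from the paper), one can compare this term against a direct quadrature of the oscillatory integral:

```python
import numpy as np

# Assumed illustrative data: concave phase psi(p) = cos(p), stationary
# point p0 = 0 with psi''(0) = -1, and a smooth bump amplitude supported
# in [-1, 1] (inside the concavity region (-pi/2, pi/2)).
p = np.linspace(-1.0, 1.0, 20001)[1:-1]
dp = p[1] - p[0]
U = np.exp(-1.0 / (1.0 - p ** 2))

def I(omega):
    """Direct quadrature of int U(p) exp(i omega psi(p)) dp."""
    return np.sum(U * np.exp(1j * omega * np.cos(p))) * dp

omega = 200.0
# First term of the stationary phase expansion for a concave phase:
# U(0) e^{i omega psi(0)} sqrt(2 pi / (omega |psi''(0)|)) e^{-i pi/4}.
leading = np.exp(-1.0) * np.exp(1j * omega) \
    * np.sqrt(2 * np.pi / omega) * np.exp(-1j * np.pi / 4)

rel_err = abs(I(omega) - leading) / abs(leading)
print(rel_err)  # relative error of order 1/omega
```

The relative error is of order $\omega^{-1}$, consistent with the next term of the expansion being $O(\omega^{-3/2})$ against a leading term of size $\omega^{-1/2}$.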
In this paper, we adapt the computations of the proof for the stationary phase method in \cite{AMD17} to the case of regular and compactly supported amplitude functions $U: \mathbb{R} \longrightarrow \mathbb {C}$ and concave phase functions $\psi: \mathbb{R} \longrightarrow \mathbb{R}$. Asymptotic expansions together with uniform and explicit remainder estimates are given in Theorem \ref{2-THM1} for stationary points of the phase inside the support of the amplitude, and in Theorem \ref{2-THM2} for stationary points outside the support.\\ The refinement we propose is based on the two following key points: \begin{enumerate} \item We establish a remainder estimate for the stationary phase method involving the $L^2$-norm of the first derivative of the amplitude, and not the $L^\infty$-norm as in the original proofs \cite{E56, AMD17}; this is done in the above-mentioned Theorems \ref{2-THM1} and \ref{2-THM2}, the proof being substantially based on the application of the Cauchy-Schwarz inequality to the integral representation of the remainder term.\\ The interest of the $L^2$-norm lies in the applications to the solution formula \eqref{eq:sol_formula0}: the amplitude function $U$ being equal to the Fourier transform of the initial datum (up to a factor) in this setting, the Plancherel theorem is applicable and leads to an estimate depending explicitly on the spatial part of the initial datum. \item We introduce an arbitrary space-time shift parametrized by a two-dimensional parameter $(t_0, x_0)$ in the integral formula \eqref{eq:sol_formula0}. Roughly speaking, this shift modifies the initial datum which is then given by the solution at time $t_0$ spatially translated by $x_0$.
By then applying the above-mentioned stationary phase method with the new remainder estimate, we obtain a family of time-asymptotic expansions of the solution parametrized by $(t_0, x_0)$ with explicit dependence on this parameter; in particular, the parameter $(t_0, x_0)$ is the origin of the cone in which the associated first term is supported.\\ The parametrized family of time-asymptotic expansions for the solution of equation \eqref{eq:evoleq0} is given in Theorem \ref{3-THM1}. \end{enumerate} The combination of the two preceding points makes feasible the computation of the space-time parameter $(t^*, x^*)$ minimizing the remainder bound for the time-asymptotic expansion of the wave packet \eqref{eq:sol_formula0}; see Corollary \ref{3-COR1}. It is then proved in Proposition \ref{4-PROP2} that the first term associated with this optimal parameter has the same mean position as the solution; Proposition \ref{4-PROP4} shows that the difference between the variance of this approximation and the variance of the solution is an explicit constant independent of time. This refined approach thus permits placing the cone in space-time in such a way that the associated first term provides a more accurate time-asymptotic approximation of solutions in frequency bands of equations of type \eqref{eq:evoleq0}. Let us illustrate our main result in the case of the free Schrödinger equation on the line with initial datum $u_0 \in \mathcal{S}(\mathbb{R})$, namely \begin{equation} \label{eq:free_schrodinger} \left\{ \begin{array}{l} \displaystyle i \, \partial_t u_S(t) = -\frac{1}{2} \, \partial_{xx} u_S(t) \\ [2mm] u_S(0) = u_0 \end{array} \right.
\; , \end{equation} for $t \in \mathbb{R}$, whose solution is the wave function associated with the free quantum particle being in the state $u_0$ at the initial time; we note that equation \eqref{eq:free_schrodinger} is actually of the form \eqref{eq:evoleq0} with symbol $f(p) = \frac{1}{2} \, p^2$ and its solution is given by \begin{equation} \label{eq:sol_schro} \forall \, (t,x) \in \mathbb{R} \times \mathbb{R} \qquad u_S(t,x) = \frac{1}{2 \pi} \int_\mathbb{R} \mathcal{F} u_0 (p) \, e^{-\frac{1}{2} itp^2 + ixp} \, dp \; . \end{equation} In quantum mechanics, the frequency band hypothesis means that the particle has a momentum localized in the interval $[p_1,p_2]$. According to the physical principle of group velocity, the wave packet given by the solution will travel in space at different speeds between $p_1$ and $p_2$ over time. Hence a free wave packet in the frequency band $[p_1,p_2]$ is expected to be mainly spatially localized in an interval of the form $\big[ p_1 \, (t-t_0) + x_0 , p_2 \, (t-t_0) + x_0 \big]$, where $t_0$ and $x_0$ have to be fixed, hence describing the motion and the dispersion of the associated particle. The following result, which is a direct consequence of our main result Corollary \ref{3-COR1}, is a mathematical formulation of this principle: \begin{0-THM1} \label{0-THM1} Consider the free Schrödinger equation on the line \eqref{eq:free_schrodinger} with $u_0 \in \mathcal{S}(\mathbb{R})$. Let $p_1$, $p_2$, $\tilde{p}_1$ and $\tilde{p}_2$ be four finite real numbers such that $[p_1, p_2] \subset (\tilde{p}_1, \tilde{p}_2)$. Suppose $\| u_0 \|_{L^2(\mathbb{R})} = 1$ and $supp \, \mathcal{F} u_0 \subseteq [p_1,p_2]$, and define \begin{align*} & \bullet \quad t^* = \argmin_{\tau \in \mathbb{R}} \left( \int_\mathbb{R} x^2 \, \big| u_S(\tau, x) \big|^2 \, dx - \Big( \int_\mathbb{R} x \, \big| u_S(\tau, x) \big|^2 \, dx \Big)^2 \right) ; \\ & \bullet \quad x^* = \int_\mathbb{R} x \, \big| u_S(t^*, x) \big|^2 \, dx \; .
\end{align*} Then for all $\displaystyle (t,x) \in \left\{ (t,x) \in \big( \mathbb{R} \backslash \{t^* \} \big) \times \mathbb{R} \, \bigg| \, p_1 \leqslant \frac{x - x^*}{t - t^*} \leqslant p_2 \right\}$, we have \begin{align} & \left| u_S(t, x) - \frac{1}{\sqrt{2 \pi}} \, e^{- sgn(t-t^*) i \frac{\pi}{4}} \, e^{-it \big(\frac{x-x^*}{t - t^*}\big)^2 + ix \frac{x-x^*}{t - t^*}} \, \mathcal{F} u_0 \left( \frac{x-x^*}{t - t^*} \right) |t-t^*|^{-\frac{1}{2}} \right| \nonumber \\ & \hspace{2cm} \leqslant C_1(\delta, \tilde{p}_1, \tilde{p}_2) \, \sqrt{\int_\mathbb{R} x^2 \, \big| u_S(t^*, x) \big|^2 \, dx - \Big(\int_\mathbb{R} x \, \big| u_S(t^*, x) \big|^2 \, dx \Big)^2} \, |t-t^*|^{-\delta} \; , \label{eq:schro_ae} \end{align} where the real number $\delta$ is arbitrarily chosen in $\big( \frac{1}{2}, \frac{3}{4} \big)$, and for all \linebreak $\displaystyle (t,x) \in \left\{ (t,x) \in \big( \mathbb{R} \backslash \{t^* \} \big) \times \mathbb{R} \, \bigg| \, \frac{x - x^*}{t - t^*} < p_1 \text{ or } p_2 < \frac{x - x^*}{t - t^*} \right\}$, we have \begin{align*} \big| u_S(t,x) \big| & \leqslant \Bigg( C_2(p_1, p_2, \tilde{p}_1, \tilde{p}_2) \, \sqrt{\int_\mathbb{R} x^2 \, \big| u_S(t^*, x) \big|^2 \, dx - \Big( \int_\mathbb{R} x \, \big| u_S(t^*, x) \big|^2 \, dx \Big)^2} \\ & \hspace{1.5cm} + C_3(p_1, p_2, \tilde{p}_1, \tilde{p}_2) \, \big\| u_0 \big\|_{L^1(\mathbb{R})} \Bigg) \, |t-t^*|^{-1} \; . \end{align*} All the above constants are defined in Theorem \ref{3-THM1}. \end{0-THM1} \noindent See Corollary \ref{3-COR1} for the general result. Let us now make some comments on this result: \begin{itemize} \item The origin of the space-time cone, in which lies the support of the first term of the expansion in \eqref{eq:schro_ae}, is actually put at the mean spatial position of the solution at the time when the variance of the solution is minimal.
Hence, contrary to the preceding versions in \cite{AMHDR2012, AMD17, D17-1}, the position of the cone indeed takes into account spatial information of the solution, the cone then better illustrating the propagation and the motion of the associated particle. In particular, the mean positions of the solution and of the approximation are equal and the difference between the two variances is constant. \item On the one hand, we observe that the first term is spatially well-localized for a solution in a narrow frequency band; on the other hand, the error is bounded by the minimal value of the standard deviation of the solution. Combined with the uncertainty principle, this exhibits a compromise: a frequency well-localized solution \eqref{eq:sol_schro} can be approximated by a function supported in a narrow space-time cone but a time sufficiently far from $t^*$ is required to achieve a good precision; on the other hand, the approximation of the solution \eqref{eq:sol_schro} with a small minimal standard deviation lies in a larger cone but the bound of the error is smaller than in the preceding case. \item We remark that the time-decay rate is shifted by $t^*$, which is the time when the variance of the solution of equation \eqref{eq:free_schrodinger} is minimal; this corresponds to the fact that the origin of the cone belongs to the space-time line $\big\{ (t,x) \in \mathbb{R} \times \mathbb{R} \, \big| \, t = t^* \big\}$. Hence if we require an error smaller than a certain threshold $\varepsilon > 0$, then this precision is achieved for all $t \in \mathbb{R}$ satisfying \begin{equation*} |t - t^*| > C_1(\delta, \tilde{p}_1, \tilde{p}_2)^{\frac{1}{\delta}} \left( \int_\mathbb{R} x^2 \, \big| u_S(t^*, x) \big|^2 \, dx - \Big( \int_\mathbb{R} x \, \big| u_S(t^*, x) \big|^2 \, dx \Big)^2 \right)^{\frac{1}{2 \delta}} \varepsilon^{-\frac{1}{\delta}} =: \eta(\varepsilon) \; .
\end{equation*} In particular if we are interested in the evolution of the solution for positive times and if $t^* < -\eta(\varepsilon)$, then the error of the approximation is smaller than $\varepsilon$ for all $t \geqslant 0$. This has to be compared with the results from the classical approach (as in \cite{AMHDR2012, AMD17}) which always imply the existence of a small time-interval with left-endpoint given by $0$ in which the error is larger than a given threshold: this is due to the lack of flexibility of the classical approach which enforces $t^* = 0$ (the decay rate is then $t^{-\frac{1}{2}}$) and automatically puts the origin of the cone at the origin of space-time. \end{itemize} Let us now comment on some possible improvements or applications of the present results. First of all, an interesting issue would be to apply the approach developed in this paper to more complicated settings. One may consider dispersive equations on certain networks where integral solution formulas are available, as for example the Schrödinger equation on a star-shaped network with infinite branches \cite{AMAN15} or on a tadpole graph \cite{AMAN17}. In both papers, propagation features are exhibited by exploiting wave packets in frequency bands and one may hope for a better description of physical phenomena by using our refined method. We could also consider the Schrödinger equation with a potential. In \cite{D17-2}, the time-asymptotic behaviour of the two first terms of the Dyson-Phillips series \cite[Chapter III, Theorem 1.10]{EN2000} representing the perturbed solution is studied by means of asymptotic expansions. The results concerning the second term of the series are interpreted as follows: if the initial state travels from left to right in space, then the positive frequencies of the potential tend to accelerate the motion of the second term while the negative frequencies tend to slow down or even reverse it, exhibiting advanced and retarded transmissions as well as reflections.
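Coming back to the optimal time $t^*$ discussed above: for the free symbol $f(p) = p^2/2$, the position observable evolves in the Heisenberg picture as $x + tp$, so the spatial variance of the exact solution is a quadratic polynomial in $t$ and the minimiser is unique. The following numerical sketch (with assumed illustrative data, not taken from the paper) checks both facts on a chirped frequency-band packet built so that its variance is minimal at $t = 4$:

```python
import numpy as np

# Assumed illustrative data (symbol f(p) = p^2/2): a frequency-band bump,
# "pre-chirped" by exp(i*tau*p^2/2) so that it is the time-tau snapshot of
# a packet whose spatial variance is minimal at t = tau.
tau = 4.0
p = np.linspace(0.0, 4.0, 1601)[1:-1]
dp = p[1] - p[0]
A = np.exp(-1.0 / (p * (4.0 - p))) * np.exp(1j * tau * p ** 2 / 2)  # F u_0

x = np.arange(-25.0, 25.0, 0.05)

def variance(t):
    """Spatial variance of |u_S(t, .)|^2, evaluated by Riemann sums."""
    u = (np.exp(-1j * t * p[None, :] ** 2 / 2 + 1j * np.outer(x, p)) @ A) * dp
    dens = np.abs(u) ** 2
    m0 = dens.sum()
    m1 = (x * dens).sum() / m0
    return (x ** 2 * dens).sum() / m0 - m1 ** 2

ts = [2.0, 3.0, 4.0, 5.0, 6.0]
vs = [variance(t) for t in ts]
print(ts[min(range(len(ts)), key=lambda i: vs[i])])  # variance minimal at t = tau
d2 = [vs[i] - 2 * vs[i + 1] + vs[i + 2] for i in range(3)]
print(d2)  # nearly equal values: the variance is quadratic in t
```

The second differences of the computed variances agree up to quadrature error, reflecting the quadratic law $\mathrm{Var}_x(t) = \mathrm{Var}_x(t^*) + \mathrm{Var}_p \, (t - t^*)^2$ for the free evolution.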
The application of the present results could bring more information on these phenomena, in particular precise spatial information on the transmitted and reflected wave packets. As explained in this paper, the notion of frequency band is physically meaningful and makes it possible to describe precisely the time-asymptotic propagation of solutions of certain dispersive equations. However it is a restrictive hypothesis: for example, a function in a finite frequency band is necessarily a $\mathcal{C}^\infty$-function. Hence it would be relevant to extend this notion to functions whose Fourier transform is not necessarily compactly supported but still localized in a weaker sense. In this setting, the first term of the expansion is no longer supported in a space-time cone and so one has to quantify the localization by means of different tools. For instance, we can consider approaches based on weighted norms; such norms have been used in \cite{G07}, \cite{EHT15} or in \cite{EKMT16} to show that the continuous part of the perturbed Schrödinger evolution transports away from the origin with non-zero velocity. Our approach naturally brings out the shifted time-decay rate $|t-t^*|^{-\frac{1}{2}}$, where $t^*$ minimizes the variance of the solution. It would also be interesting to introduce this time-shift in other existing results to obtain greater precision. For instance, one may consider the important $L^p-L^{p'}$ estimates, for which a simple argument makes the shifted decay apparent; this is proved in the following result: \begin{0-PROP1} Consider the free Schrödinger equation on the line \eqref{eq:free_schrodinger} with $u_0 \in \mathcal{S}(\mathbb{R})$ and define $t^* \in \mathbb{R}$ as follows: \begin{equation*} t^* := \argmin_{\tau \in \mathbb{R}} \left( \int_\mathbb{R} x^2 \, \big| u_S(\tau, x) \big|^2 \, dx - \Big( \int_\mathbb{R} x \, \big| u_S(\tau, x) \big|^2 \, dx \Big)^2 \right) .
\end{equation*} Then for all $p \in [2, \infty]$, we have \begin{equation*} \forall \, t \in \mathbb{R} \backslash \{t^* \} \qquad \big\| u_S(t, .) \big\|_{L^p(\mathbb{R})} \leqslant \left( \frac{1}{4 \pi} \right)^{-\frac{1}{2} + \frac{1}{p}} \, \big\| u_S(t^*, .) \big\|_{L^{p'}(\mathbb{R})} \, |t-t^*|^{-\frac{1}{2} + \frac{1}{p}} \; , \end{equation*} where $p'$ is the conjugate of $p$. \end{0-PROP1} \begin{proof} For the sake of clarity, we use the one-parameter group $\big( e^{-i t \partial_{xx}} \big)_{t \in \mathbb{R}}$, which describes the Schrödinger evolution as follows: \begin{equation*} \forall \, t \in \mathbb{R} \qquad u_S(t) = e^{-i t \partial_{xx}} u_0 \; . \end{equation*} Using the group property, we have for any $t \in \mathbb{R}$, \begin{equation*} e^{-i t \partial_{xx}} u_0 = e^{-i (t-t^*) \partial_{xx}} \, e^{-i t^* \partial_{xx}} u_0 \; , \end{equation*} and by applying the classical $L^p-L^{p'}$ estimate \cite[Proposition 2.2.3]{C03} to the above right-hand side, we obtain for all $t \neq t^*$, \begin{align*} \big\| u_S(t, .) \big\|_{L^p(\mathbb{R})} & = \Big\| e^{-i t \partial_{xx}} u_0 \Big\|_{L^p(\mathbb{R})} \\ & \leqslant \left( \frac{1}{4 \pi} \right)^{-\frac{1}{2} + \frac{1}{p}} \, \Big\| e^{-i t^* \partial_{xx}} u_0 \Big\|_{L^{p'}(\mathbb{R})} \, |t-t^*|^{-\frac{1}{2} + \frac{1}{p}} \\ & = \left( \frac{1}{4 \pi} \right)^{-\frac{1}{2} + \frac{1}{p}} \, \big\| u_S(t^*, .) \big\|_{L^{p'}(\mathbb{R})} \, |t-t^*|^{-\frac{1}{2} + \frac{1}{p}} \; . \end{align*} Note that we are allowed to apply the classical $L^p-L^{p'}$ estimate since $e^{-i t^* \partial_{xx}} u_0 \in \mathcal{S}(\mathbb{R}) \subset L^{p'}(\mathbb{R})$ thanks to the hypothesis $u_0 \in \mathcal{S}(\mathbb{R})$.
\end{proof} \noindent Since the classical $L^p-L^{p'}$ estimates are exploited to establish Strichartz estimates which are themselves used to study non-linear dispersive phenomena, it is necessary to extend the above shifted $L^p-L^{p'}$ estimates to spaces larger than the Schwartz space in view of precise applications. In particular, one may examine whether $t^*$ defined above still satisfies some optimal conditions; this could be linked with the results established in \cite{CXZ2010}. Regarding long-term perspectives of our work, one could consider the full soliton resolution for non-linear dispersive equations \cite{DKM11, DKM12-1, DKM12-2, DKM13, DKM15}, which aims at classifying the asymptotic behaviour of the non-linear solutions. A key argument for the results contained in this series of papers is the channel energy method \cite{KLLS15}, which consists in estimating the associated free solution outside a space-time cone or channel; this estimate is then used to prove that a dispersive term appearing in the decomposition of the non-linear solution goes to $0$ in the energy-space. In particular, we mention that the authors in \cite{CKS14} have to shift in time the cones and channels to derive the desired estimates. Hence one might hope that the ideas proposed in the present paper could help to understand the requirement for this shift and more generally to refine the channel energy method. Finally we could also think about minimal escape velocities \cite{HSS00,SS87}, which aim at exhibiting propagation features for evolution operators of type $e^{-it H}$, where $H$ is a general Hamiltonian; for instance, one may consider $H = -\partial_{xx} + V$ where $V$ is a real-valued potential. As explained in \cite{HS17}, the method to establish these estimates generalizes the integration by parts which is actually crucial to describe the time-asymptotic behaviour of wave packets, as illustrated in the present paper.
Our approach could bring more precision to the abstract setting and hence lead to estimates containing more information on the propagation of general wave packets.\\ The paper is organized as follows: in the following section, we begin with the new remainder estimate for an adapted version of the stationary phase method developed in \cite{E56}. We then establish time-asymptotic expansions with explicit and uniform remainder estimates depending on the shift parameter $(t_0, x_0)$ for the solution of the dispersive equation \eqref{eq:evoleq0} in Section \ref{sec:applications_de}; this section also provides the value of the optimal parameter $(t^*, x^*)$ together with the bound of the associated remainder estimate. Finally Section \ref{sec:preservation} contains results for the mean position and the variance of the first term of the time-asymptotic expansions given in Section \ref{sec:applications_de}.\\ \section{Explicit error estimates for a stationary phase me\-thod via Cauchy-Schwarz inequality} \label{sec:asympt_exp} In this section, we establish asymptotic expansions for oscillatory integrals of the form \begin{equation} \label{eq:oscill_int} \forall \, \omega > 0 \qquad \int_{\mathbb{R}} U(p) \, e^{i \omega \psi(p)} \, dp \; , \end{equation} where the amplitude $U : \mathbb{R} \longrightarrow \mathbb {C}$ is a continuously differentiable function supported on a bounded interval and the phase $\psi : \mathbb{R} \longrightarrow \mathbb{R}$ is a strictly concave $\mathcal{C}^3$-function having a unique stationary point $p_0$. The remainder estimates we provide are explicit, uniform with respect to $p_0$ and involve the $L^2$-norm of the first derivative of the amplitude. The last point actually plays a key role in the refined method developed in Section \ref{sec:applications_de}.
The asymptotic expansions together with the uniform and explicit error estimates are established in Theorems \ref{2-THM1} and \ref{2-THM2}.\\ We start by stating two technical lemmas which will be used extensively in the proof of Theorem \ref{2-THM1}.\\ The first step to expand $\omega$-asymptotically integrals of type \eqref{eq:oscill_int} consists in simplifying the phase function in order then to integrate by parts. To do so, we use the diffeomorphisms $\varphi_j$ ($j = 1,2)$ defined and studied in the following lemma. The values of these diffeomorphisms at the stationary point $p_0$ are provided in order to compute explicitly the first term of the expansions, and two inequalities for $\varphi_j$ are established to estimate the errors of these expansions.\\ The proof of the following result relies mainly on an integral representation of $\varphi_j$. \begin{2-LEM2} \label{2-LEM2} Let $p_0$, $\tilde{p}_1$ and $\tilde{p}_2$ be three finite real numbers such that $p_0 \in (\tilde{p}_1, \tilde{p}_2)$. Suppose that $\psi \in \mathcal{C}^3(\mathbb{R}, \mathbb{R})$ is a strictly concave function which has a unique stationary point at $p_0$.
Then, for $j = 1,2$, the function \begin{equation*} \begin{array}{ccccc} \varphi_j & : & I_j & \longrightarrow & [0, s_j] \\ & & p & \longmapsto & \big( \psi(p_0) - \psi(p)\big)^{\frac{1}{2}} \end{array} \end{equation*} where $I_1 := [\tilde{p}_1, p_0]$, $I_2 := [p_0, \tilde{p}_2]$ and $s_j := \varphi_j(\tilde{p}_j)$, satisfies the following properties: \begin{enumerate} \item the function $\varphi_j$ is a $\mathcal{C}^2$-diffeomorphism between $I_j$ and $[0, s_j]$ ; \item we have \begin{equation*} \varphi_j'(p_0) = (-1)^j \sqrt{- \frac{\psi''(p_0)}{2}} \; ; \end{equation*} \item for all $p \in I_j$, the absolute value of $\varphi_j'(p)$ is lower bounded as follows: \begin{equation*} \Big| \varphi_j'(p) \Big| \geqslant \frac{1}{\sqrt{2}} \, \min_{[\tilde{p}_1, \tilde{p}_2]} \big\{-\psi''\big\} \, \big\| \psi'' \big\|_{L^{\infty}(\tilde{p}_1, \tilde{p}_2)}^{-\frac{1}{2}} \; ; \end{equation*} \item we have the following $L^{\infty}$-norm estimate for $\big( \varphi_j^{\, -1}\big)''$: \begin{align*} \Big\| \big( \varphi_j^{\, -1} \big) '' \Big\|_{L^{\infty}(0, s_j)} & \leqslant \big\| \psi'' \big\|_{L^{\infty}(\tilde{p}_1, \tilde{p}_2)}^{\frac{3}{2}} \, \big\| \psi^{(3)} \big\|_{L^{\infty}(\tilde{p}_1, \tilde{p}_2)} \, \min_{[\tilde{p}_1, \tilde{p}_2]} \big\{-\psi''\big\}^{-\frac{7}{2}} \\ & \hspace{1cm} + \frac{1}{3} \, \big\| \psi'' \big\|_{L^{\infty}(\tilde{p}_1, \tilde{p}_2)}^{\frac{5}{2}} \, \big\| \psi^{(3)} \big\|_{L^{\infty}(\tilde{p}_1, \tilde{p}_2)} \, \min_{[\tilde{p}_1, \tilde{p}_2]} \big\{-\psi''\big\}^{-\frac{9}{2}} \; . \end{align*} \end{enumerate} \end{2-LEM2} \begin{proof} Let $j \in \{1,2\}$ and fix $p_0 \in (\tilde{p}_1, \tilde{p}_2)$.
The proof of the present lemma is mainly based on the following integral representation of the function $\varphi_j$: \begin{equation} \label{eq:varphi} \varphi_j(p) = (-1)^j \, (p-p_0) \left( \int_0^1 \int_0^1 -\psi''\big( (1-\tau)(1-\nu)p + (\nu - \nu \tau + \tau) p_0 \big) \, (1 - \tau) \, d\nu d\tau \right)^{\frac{1}{2}} \; , \end{equation} for all $p \in I_j$. This representation can be derived by noting first that, since $\psi'(p_0) = 0$, \begin{equation*} \psi(p_0) - \psi(p) = \int_{p}^{p_0} \psi'(t) \, dt = -\int_{p}^{p_0} \big( \psi'(p_0) - \psi'(t) \big) \, dt = \int_{p}^{p_0} \int_t^{p_0} -\psi''(v) \, dv \, dt \; ; \end{equation*} then we make the change of variables $(\nu, \tau) = \big( \frac{v - t}{p_0-t}, \frac{t - p}{p_0 - p} \big)$, leading to \begin{equation*} \psi(p_0) - \psi(p) = (p - p_0)^2 \int_0^1 \int_0^1 -\psi''\big( (1-\tau)(1-\nu)p + (\nu - \nu \tau + \tau) p_0 \big) \, (1 - \tau) \, d\nu d\tau \; , \end{equation*} and we finally take the square root of the preceding equality to obtain the desired representation \eqref{eq:varphi}. \begin{enumerate} \item Since $\psi$ is a strictly concave function on $\mathbb{R}$, the function $\varphi_j$ is actually the square root of the non-negative $\mathcal{C}^3$-function $p \longmapsto \psi(p_0) - \psi(p)$, which is positive on $I_j \backslash \{ p_0 \}$; this shows that $\varphi_j$ is twice continuously differentiable on $I_j \backslash \{ p_0 \}$ ($\varphi_j$ is actually a $\mathcal{C}^3$-function on this domain). Let us prove that it is also twice differentiable on the whole $I_j$.
To do so, note that we have for $p \in I_j \backslash \{ p_0 \}$, \begin{align} \varphi_j'(p) & = - \frac{1}{2} \, \psi'(p) \, \big( \psi(p_0) - \psi(p) \big)^{-\frac{1}{2}} \nonumber \\ & = -\frac{1}{2} \left( \int_p^{p_0} -\psi''(q) \, dq \right) \varphi_j(p)^{-1} \nonumber \\ & = \frac{1}{2} \left( (p-p_0) \int_0^1 -\psi'' \big( (1-t)p + t p_0 \big) \, dt \right) \varphi_j(p)^{-1} \nonumber \\ & = \frac{(-1)^{j}}{2} \left( \int_0^1 -\psi'' \big( (1-t)p + t p_0 \big) \, dt \right) \nonumber \\ & \hspace{1cm} \times \left( \int_0^1 \int_0^1 -\psi''\big( (1-\tau)(1-\nu)p + (\nu - \nu \tau + \tau) p_0 \big) \, (1 - \tau) \, d\nu d\tau \right)^{-\frac{1}{2}} \; . \label{eq:deriv_varphi_2} \end{align} The preceding equality combined with the positivity of the $\mathcal{C}^1$-function $-\psi''$ shows that $\varphi_j'$ is continuously differentiable on $I_j$, with derivative given by \begin{align*} \varphi_j''(p) & = \frac{(-1)^{j}}{2} \left( \int_0^1 -\psi^{(3)} \big( (1-t)p + t p_0 \big) \, (1-t) \, dt \right) \\ & \hspace{1cm} \times \left( \int_0^1 \int_0^1 -\psi''\big( (1-\tau)(1-\nu)p + (\nu - \nu \tau + \tau) p_0 \big) \, (1 - \tau) \, d\nu d\tau \right)^{-\frac{1}{2}} \\ & \hspace{-0.8cm} + \frac{(-1)^{j}}{2} \left( \int_0^1 -\psi'' \big( (1-t)p + t p_0 \big) \, dt \right) \\ & \hspace{-0.3cm} \times \left( -\frac{1}{2} \right) \frac{\int_0^1 \int_0^1 -\psi^{(3)}\big( (1-\tau)(1-\nu)p + (\nu - \nu \tau + \tau) p_0 \big) \, (1 - \tau)^2 \, (1-\nu) \, d\nu d\tau}{\left( \int_0^1 \int_0^1 -\psi''\big( (1-\tau)(1-\nu)p + (\nu - \nu \tau + \tau) p_0 \big) \, (1 - \tau) \, d\nu d\tau \right)^{\frac{3}{2}}} \; , \end{align*} for all $p \in I_j$.\\ Now, according to equality \eqref{eq:deriv_varphi_2}, we observe that $\varphi_j'$ is negative for $j = 1$ and positive for $j = 2$ since $-\psi'' > 0$. By the inverse function theorem, we deduce that $\varphi_j$ is a $\mathcal{C}^2$-diffeomorphism from $I_j$ onto $[0, s_j]$.
\item Thanks to the integral representation \eqref{eq:varphi}, we have \begin{align*} \varphi_j'(p_0) & = \lim_{p \rightarrow p_0} \frac{\varphi_j(p) - \varphi_j(p_0)}{p - p_0} \\ & = (-1)^j \, \lim_{p \rightarrow p_0} \left( \int_0^1 \int_0^1 -\psi''\big( (1-\tau)(1-\nu)p + (\nu - \nu \tau + \tau) p_0 \big) \, (1 - \tau) \, d\nu d\tau \right)^{\frac{1}{2}} \\ & = (-1)^j \sqrt{- \frac{\psi''(p_0)}{2}} \; . \end{align*} \item From equality \eqref{eq:deriv_varphi_2} (which actually holds for all $p \in I_j$), we deduce the following lower estimate for $\varphi_j'$: \begin{equation} \label{eq:le_varphi_deriv} \forall \, p \in I_j \qquad \Big| \varphi_j'(p) \Big| \geqslant \frac{1}{\sqrt{2}} \, \min_{[\tilde{p}_1, \tilde{p}_2]} \big\{-\psi''\big\} \, \big\| \psi'' \big\|_{L^{\infty}(\tilde{p}_1, \tilde{p}_2)}^{-\frac{1}{2}} \; . \end{equation} \item From the expression of $\varphi_j''$ computed in i), we obtain the following upper estimate: \begin{align*} \forall \, p \in I_j \qquad \Big| \varphi_j''(p) \Big| & \leqslant \frac{1}{2 \sqrt{2}} \, \big\| \psi^{(3)} \big\|_{L^{\infty}(\tilde{p}_1, \tilde{p}_2)} \, \min_{[\tilde{p}_1, \tilde{p}_2]} \big\{-\psi''\big\}^{-\frac{1}{2}} \\ & \hspace{1cm} + \frac{1}{6 \sqrt{2}} \, \big\| \psi'' \big\|_{L^{\infty}(\tilde{p}_1, \tilde{p}_2)} \, \big\| \psi^{(3)} \big\|_{L^{\infty}(\tilde{p}_1, \tilde{p}_2)} \, \min_{[\tilde{p}_1, \tilde{p}_2]} \big\{-\psi''\big\}^{-\frac{3}{2}} \; .
\end{align*} By combining the preceding inequality with estimate \eqref{eq:le_varphi_deriv} and the following relation, \begin{equation*} \forall \, s \in [0, s_j] \qquad \big(\varphi_j^{\, -1} \big)''(s) = - \frac{\varphi_j'' \big( \varphi_j^{\, -1}(s) \big)}{\varphi_j'\big( \varphi_j^{\, -1}(s) \big)^3} \; , \end{equation*} we finally obtain for all $s \in [0, s_j]$, \begin{align*} \Big| \big(\varphi_j^{\, -1} \big)''(s) \Big| & \leqslant \big\| \psi'' \big\|_{L^{\infty}(\tilde{p}_1, \tilde{p}_2)}^{\frac{3}{2}} \, \big\| \psi^{(3)} \big\|_{L^{\infty}(\tilde{p}_1, \tilde{p}_2)} \, \min_{[\tilde{p}_1, \tilde{p}_2]} \big\{-\psi''\big\}^{-\frac{7}{2}} \\ & \hspace{1cm} + \frac{1}{3} \, \big\| \psi'' \big\|_{L^{\infty}(\tilde{p}_1, \tilde{p}_2)}^{\frac{5}{2}} \, \big\| \psi^{(3)} \big\|_{L^{\infty}(\tilde{p}_1, \tilde{p}_2)} \, \min_{[\tilde{p}_1, \tilde{p}_2]} \big\{-\psi''\big\}^{-\frac{9}{2}} \; . \end{align*} \end{enumerate} \end{proof} After having applied the above diffeomorphisms to the integral \eqref{eq:oscill_int} (previously split at $p_0$), the phase becomes the simple quadratic function $s \longmapsto -s^2$. In order to integrate by parts, thereby creating the first and remainder terms of the expansion, one needs an expression for a primitive of the function $s \in [0,s_0] \longmapsto e^{-i \omega s^2} \in \mathbb{C}$, for fixed $s_0$, $\omega > 0$. In the following lemma, a useful integral representation of such a primitive is given. As in the preceding result, its value at the origin and an inequality are also provided to compute respectively the first term of the expansion and an upper bound for the remainder term.\\ To prove Lemma \ref{2-LEM1}, we refer to the paper \cite{AMD17}, which actually gives the successive primitives of more general functions, essentially by using complex analysis; see \cite[Theorems 6.4, 6.5 and Corollary 6.6]{AMD17}.
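Before turning to Lemma \ref{2-LEM1}, let us illustrate Lemma \ref{2-LEM2} on a model case (an added sanity check, not needed in the sequel): take $\psi(p) = -\frac{p^2}{2}$, so that $p_0 = 0$ and $-\psi'' \equiv 1$. Then
\begin{equation*}
\varphi_1(p) = -\frac{p}{\sqrt{2}} \ \text{on } I_1 \, , \qquad \varphi_2(p) = \frac{p}{\sqrt{2}} \ \text{on } I_2 \, , \qquad \varphi_j'(p) = \frac{(-1)^j}{\sqrt{2}} = (-1)^j \sqrt{- \frac{\psi''(p_0)}{2}} \; ,
\end{equation*}
in agreement with point ii); point iii) holds with equality, and $\big( \varphi_j^{\, -1} \big)'' \equiv 0$, which is consistent with point iv) since $\psi^{(3)} \equiv 0$.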
\begin{2-LEM1} \label{2-LEM1} Let $\omega, s_0 > 0$ be two real numbers and define the function $\phi(.,\omega): [0,s_0] \longrightarrow \mathbb{C}$ by \begin{equation*} \phi(s,\omega) := - \int_{\Lambda(s)} e^{-i \omega z^2} \, dz \; , \end{equation*} where $\Lambda(s)$ is the half-line in the complex plane given by \begin{equation*} \Lambda(s) := \left\{ s + t \, e^{-i \frac{\pi}{4}} \, \Big| \, t \geqslant 0 \right\} \subset \mathbb{C} \; . \end{equation*} Then \begin{enumerate} \item the function $\phi(.,\omega)$ is a primitive of the function $\displaystyle s \in [0,s_0] \longmapsto e^{- i \omega s^2} \in \mathbb{C}$ ; \item we have \begin{equation*} \phi(0,\omega) = - \, \frac{1}{2} \, \sqrt{\pi} \, e^{-i \frac{\pi}{4}} \, \omega^{- \frac{1}{2}} \; ; \end{equation*} \item the function $\phi(.,\omega)$ satisfies \begin{equation*} \forall \, s \in (0, s_0] \qquad \big| \phi(s,\omega) \big| \leqslant L(\delta) \, s^{1 - 2 \delta} \, \omega^{-\delta} \; , \end{equation*} where the real number $\delta$ is arbitrarily chosen in $\big( \frac{1}{2}, 1 \big)$ and the constant $L(\delta) > 0$ is defined by \begin{equation*} L(\delta) := \frac{\sqrt{\pi}}{2} \left( \frac{1}{2 \sqrt{\pi}} \: + \: \sqrt{\frac{1}{4 \pi} + \frac{1}{2}} \right)^{2 \delta -1} \; . \end{equation*} \end{enumerate} \end{2-LEM1} \begin{proof} The function $\phi(.,\omega)$ of the present paper actually corresponds to the function $\phi_1^{(2)}(.,\omega,2,1)$ defined in \cite[Theorem 2.3]{AMD17}. Hence we apply the results established in \cite{AMD17} to the present situation: \begin{enumerate} \item One proves this first point by applying \cite[Corollary 6.6]{AMD17}, which is a consequence of Theorems 6.4 and 6.5 of \cite{AMD17}, in the case $n = 1$, $j = 2$, $\rho_j = 2$ and $\mu_j = 1$. \item The proof of this point relies only on basic computations which are carried out in the fourth step of the proof of \cite[Theorem 2.3]{AMD17}.
\item The combination of Lemmas 2.4 and 2.6 of \cite{AMD17} ensures this last point. \end{enumerate} \end{proof} Thanks to the two preceding lemmas, we are now in a position to establish the desired asymptotic expansions with respect to the parameter $\omega$ of oscillatory integrals of type \eqref{eq:oscill_int}. In the following theorem, we are interested in the case where the stationary point $p_0$ of the phase belongs to a neighbourhood of the support of the amplitude. We emphasize that the remainder estimate we provide is different from those appearing in the original paper \cite{E56} and in \cite{AMD17}.\\ Technically speaking, we split the integral at the stationary point $p_0$ and we study separately the two resulting integrals. In each situation, the method consists firstly in using the diffeomorphism introduced in Lemma \ref{2-LEM2} to make the phase function simpler, secondly in integrating by parts to create the expansion by using Lemma \ref{2-LEM2} ii), Lemma \ref{2-LEM1} i) and ii), and finally in bounding the remainder term by combining Lemma \ref{2-LEM2} iii), iv) and Lemma \ref{2-LEM1} iii) with the Cauchy-Schwarz inequality. \begin{2-THM1} \label{2-THM1} Let $p_1$, $p_2$, $\tilde{p}_1$ and $\tilde{p}_2$ be four finite real numbers such that $[p_1, p_2] \subset (\tilde{p}_1, \tilde{p}_2)$. Suppose that $\psi \in \mathcal{C}^3(\mathbb{R}, \mathbb{R})$ is a strictly concave function which has a unique stationary point at $p_0 \in (\tilde{p}_1, \tilde{p}_2)$. Assume moreover that $U \in \mathcal{C}^1(\mathbb{R}, \mathbb{C})$ is a function satisfying \begin{equation*} \mathrm{supp} \, U \subseteq [p_1,p_2] \; .
\end{equation*} Then we have for all $\omega > 0$, \begin{align*} & \left| \int_{\mathbb{R}} U(p) \, e^{i \omega \psi(p)} \, dp - \sqrt{2 \pi} \, e^{-i \frac{\pi}{4}} \, e^{i \omega \psi(p_0)} \frac{U(p_0)}{\sqrt{-\psi''(p_0)}} \, \omega^{-\frac{1}{2}} \right| \\ & \hspace{2cm} \leqslant \Big( C_1(\psi, \delta, \tilde{p}_1, \tilde{p}_2) \, \big\| U' \big\|_{L^2(\mathbb{R})} + C_2(\psi, \delta, \tilde{p}_1, \tilde{p}_2) \, \big\| U \big\|_{L^{\infty}(\mathbb{R})} \Big) \, \omega^{- \delta} \; , \end{align*} where the real number $\delta$ is arbitrarily chosen in $\big( \frac{1}{2}, \frac{3}{4} \big)$ and \begin{align*} & \bullet \quad C_1(\psi, \delta, \tilde{p}_1, \tilde{p}_2) := \frac{2^{\delta+1} \, L(\delta)}{\sqrt{3-4\delta}} \, \big( \tilde{p}_2 - \tilde{p}_1 \big)^{\frac{3 - 4 \delta}{2}} \, c_1(\psi, \delta, \tilde{p}_1, \tilde{p}_2) \; ; \\ & \bullet \quad C_2(\psi, \delta, \tilde{p}_1, \tilde{p}_2) := \frac{2^{\delta - 1} L(\delta)}{1 - \delta} \, \big( \tilde{p}_2 - \tilde{p}_1 \big)^{2-2\delta} \, c_2(\psi, \delta, \tilde{p}_1, \tilde{p}_2) \; ; \\ & \bullet \quad c_1(\psi, \delta, \tilde{p}_1, \tilde{p}_2) := \big\| \psi'' \big\|_{L^{\infty}(\tilde{p}_1, \tilde{p}_2)}^{\frac{3}{2}-\delta} \, \min_{[\tilde{p}_1, \tilde{p}_2]}\big\{-\psi''\big\}^{-\frac{3}{2}} \; ; \\ & \bullet \quad c_2(\psi, \delta, \tilde{p}_1, \tilde{p}_2) := \big\| \psi'' \big\|_{L^{\infty}(\tilde{p}_1, \tilde{p}_2)}^{\frac{5}{2}-\delta} \, \big\| \psi^{(3)} \big\|_{L^{\infty}(\tilde{p}_1, \tilde{p}_2)} \, \min_{[\tilde{p}_1, \tilde{p}_2]} \big\{-\psi''\big\}^{-\frac{7}{2}} \\ & \hspace{4.5cm} + \frac{1}{3} \, \big\| \psi'' \big\|_{L^{\infty}(\tilde{p}_1, \tilde{p}_2)}^{\frac{7}{2}-\delta} \, \big\| \psi^{(3)} \big\|_{L^{\infty}(\tilde{p}_1, \tilde{p}_2)} \, \min_{[\tilde{p}_1, \tilde{p}_2]} \big\{-\psi''\big\}^{-\frac{9}{2}} \; . \end{align*} The constant $L(\delta) > 0$ is defined in Lemma \ref{2-LEM1} iii).
\end{2-THM1} \begin{proof} Let $\omega > 0$ and let $p_0 \in (\tilde{p}_1, \tilde{p}_2)$ be the unique stationary point of $\psi$. First of all, since the support of the amplitude is included in $[p_1, p_2] \subset (\tilde{p}_1, \tilde{p}_2)$, we clearly have \begin{equation*} \int_\mathbb{R} U(p) \, e^{i \omega \psi(p)} \, dp \, = \int_{\tilde{p}_1}^{\tilde{p}_2} U(p) \, e^{i \omega \psi(p)} \, dp \, =: I(\omega) \; . \end{equation*} Splitting the above integral at the point $p_0$ and using the two $\mathcal{C}^2$-diffeomorphisms defined in Lemma \ref{2-LEM2}, we obtain \begin{align*} I(\omega) & = - \int_0^{s_1} \big(U \circ \varphi_1^{\, -1} \big)(s) \, \big(\varphi_1^{\, -1}\big)'(s) \, e^{-i \omega s^2} \, ds \, e^{i \omega \psi(p_0)} \\ & \hspace{1cm} + \int_0^{s_2} \big(U \circ \varphi_2^{\, -1} \big)(s) \, \big(\varphi_2^{\, -1}\big)'(s) \, e^{-i \omega s^2} \, ds \, e^{i \omega \psi(p_0)} \; ; \end{align*} note that we have used the fact that $\varphi_1$ and $\varphi_2$ are respectively decreasing and increasing.
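In detail (a brief added elaboration of this change of variables): on $I_1 = [\tilde{p}_1, p_0]$, the substitution $p = \varphi_1^{\, -1}(s)$ gives, since $\psi = \psi(p_0) - \varphi_1^{\, 2}$ on $I_1$, $\varphi_1(\tilde{p}_1) = s_1$ and $\varphi_1(p_0) = 0$,
\begin{equation*}
\int_{\tilde{p}_1}^{p_0} U(p) \, e^{i \omega \psi(p)} \, dp = \int_{s_1}^{0} \big(U \circ \varphi_1^{\, -1} \big)(s) \, \big(\varphi_1^{\, -1}\big)'(s) \, e^{i \omega ( \psi(p_0) - s^2 )} \, ds = - \int_0^{s_1} \big(U \circ \varphi_1^{\, -1} \big)(s) \, \big(\varphi_1^{\, -1}\big)'(s) \, e^{-i \omega s^2} \, ds \, e^{i \omega \psi(p_0)} \; ,
\end{equation*}
and the integral over $I_2$ is treated in the same way, without the sign change since $\varphi_2$ is increasing.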
We now integrate by parts by using the primitive $s \longmapsto \phi(s,\omega)$ given in Lemma \ref{2-LEM1} and the regularity of $\varphi_j$: \begin{align*} & (-1)^j \int_0^{s_j} \big(U \circ \varphi_j^{\, -1} \big)(s) \, \big(\varphi_j^{\, -1}\big)'(s) \, e^{-i \omega s^2} \, ds \\ & \hspace{1.5cm} = (-1)^j \Big[ \big(U \circ \varphi_j^{\, -1} \big)(s) \, \big(\varphi_j^{\, -1}\big)'(s) \, \phi(s,\omega) \Big]_0^{s_j} \\ & \hspace{3cm} + (-1)^{j+1} \int_0^{s_j} \Big( \big(U \circ \varphi_j^{\, -1} \big) \, \big(\varphi_j^{\, -1}\big)'\Big)'(s) \, \phi(s, \omega) \, ds \\ & \hspace{1.5cm} = (-1)^{j+1} \big(U \circ \varphi_j^{\, -1} \big)(0) \, \big(\varphi_j^{\, -1}\big)'(0) \, \phi(0,\omega) \\ & \hspace{3cm} + (-1)^{j+1} \int_0^{s_j} \Big( \big(U \circ \varphi_j^{\, -1} \big) \, \big(\varphi_j^{\, -1}\big)'\Big)'(s) \, \phi(s, \omega) \, ds \\ & \hspace{1.5cm} = \frac{1}{2} \sqrt{2 \pi} \, e^{-i \frac{\pi}{4}} \frac{U(p_0)}{\sqrt{- \psi''(p_0)}} \, \omega^{-\frac{1}{2}} \\ & \hspace{3cm} + (-1)^{j+1} \int_0^{s_j} \Big( \big(U \circ \varphi_j^{\, -1} \big) \, \big(\varphi_j^{\, -1}\big)'\Big)'(s) \, \phi(s, \omega) \, ds \; ; \end{align*} the second equality has been obtained by using the fact that $U(\tilde{p}_j) = 0$ and the last one by applying Lemma \ref{2-LEM2} ii) and Lemma \ref{2-LEM1} ii). Hence it follows that \begin{align*} I(\omega) & = \sqrt{2 \pi} \, e^{-i \frac{\pi}{4}} \, e^{i \omega \psi(p_0)} \, \frac{U(p_0)}{\sqrt{- \psi''(p_0)}} \, \omega^{-\frac{1}{2}} \\ & \hspace{1.5cm} + \sum_{j=1}^2 (-1)^{j+1} \int_0^{s_j} \Big( \big(U \circ \varphi_j^{\, -1} \big) \, \big(\varphi_j^{\, -1}\big)'\Big)'(s) \, \phi(s, \omega) \, ds \, e^{i \omega \psi(p_0)} \; .
\end{align*} To estimate each term of the remainder, we proceed as follows: \begin{align*} & \left| (-1)^{j+1} \int_0^{s_j} \Big( \big(U \circ \varphi_j^{\, -1} \big) \, \big(\varphi_j^{\, -1}\big)'\Big)'(s) \, \phi(s, \omega) \, ds \right| \\ & \hspace{1.5cm} \leqslant \left| \int_0^{s_j} \big(U' \circ \varphi_j^{\, -1} \big)(s) \, \big(\varphi_j^{\, -1}\big)'(s)^2 \, \phi(s, \omega) \, ds \right| \\ & \hspace{3cm} + \left| \int_0^{s_j} \big(U \circ \varphi_j^{\, -1} \big)(s) \, \big(\varphi_j^{\, -1}\big)''(s) \, \phi(s, \omega) \, ds \right| \\ & \hspace{1.5cm} \leqslant \left( \int_0^{s_j} \Big| \big(U' \circ \varphi_j^{\, -1} \big)(s) \, \big(\varphi_j^{\, -1}\big)'(s)^2 \Big|^2 \, ds \right)^{\frac{1}{2}} \left( \int_0^{s_j} \big| \phi(s, \omega) \big|^2 \, ds \right)^{\frac{1}{2}} \\ & \hspace{3cm} + \int_0^{s_j} \big| \phi(s, \omega) \big|\, ds \, \big\| U \big\|_{L^{\infty}(\mathbb{R})} \, \Big\| \big(\varphi_j^{\, -1}\big)'' \Big\|_{L^{\infty}(0,s_j)} \; ; \end{align*} let us remark that we have applied the Cauchy-Schwarz inequality to the first integral. We continue the proof by estimating each resulting term; first of all, by making the change of variable $p = \varphi_j^{\, -1}(s)$ and by using Lemma \ref{2-LEM2} iii), we obtain \begin{equation*} \left( \int_0^{s_j} \Big| \big(U' \circ \varphi_j^{\, -1} \big)(s) \, \big(\varphi_j^{\, -1}\big)'(s)^2 \Big|^2 \, ds \right)^{\frac{1}{2}} \leqslant 2^{\frac{3}{4}} \, \big\| \psi'' \big\|_{L^{\infty}(\tilde{p}_1, \tilde{p}_2)}^{\frac{3}{4}} \, \min_{[\tilde{p}_1, \tilde{p}_2]}\big\{-\psi''\big\}^{-\frac{3}{2}} \, \big\| U' \big\|_{L^2(\mathbb{R})} \; .
\end{equation*} Then we use the point iii) of Lemma \ref{2-LEM1} to derive the two following inequalities: \begin{align*} & \bullet \quad \int_0^{s_j} \big| \phi(s, \omega) \big|\, ds \leqslant L(\delta) \int_0^{s_j} s^{1-2\delta} \, ds \, \omega^{-\delta} \leqslant \frac{L(\delta)}{2 - 2 \delta} \, \varphi_j(\tilde{p}_j)^{2-2\delta} \, \omega^{-\delta} \; ;\\ & \bullet \quad \left( \int_0^{s_j} \big| \phi(s, \omega) \big|^2 \, ds \right)^{\frac{1}{2}} \leqslant \frac{L(\delta)}{\sqrt{3-4\delta}} \, \varphi_j(\tilde{p}_j)^{\frac{3-4\delta}{2}} \, \omega^{-\delta} \; . \end{align*} By using the integral representation \eqref{eq:varphi} of $\varphi_j$, we obtain \begin{equation*} \varphi_j(\tilde{p}_j) \leqslant \frac{1}{\sqrt{2}} \, \big\| \psi'' \big\|_{L^{\infty}(\tilde{p}_1, \tilde{p}_2)}^{\frac{1}{2}} \, (\tilde{p}_2 - \tilde{p}_1) \; , \end{equation*} which allows us to deduce \begin{align*} & \bullet \quad \int_0^{s_j} \big| \phi(s, \omega) \big|\, ds \leqslant \frac{1}{2^{1-\delta}} \, \frac{L(\delta)}{2 - 2 \delta} \big( \tilde{p}_2 - \tilde{p}_1 \big)^{2-2\delta} \, \big\| \psi'' \big\|_{L^{\infty}(\tilde{p}_1, \tilde{p}_2)}^{1 - \delta} \, \omega^{-\delta} \; ;\\ & \bullet \quad \left( \int_0^{s_j} \big| \phi(s, \omega) \big|^2 \, ds \right)^{\frac{1}{2}} \leqslant \frac{1}{2^{\frac{3}{4} - \delta}} \, \frac{L(\delta)}{\sqrt{3-4\delta}} \, \big( \tilde{p}_2 - \tilde{p}_1 \big)^{\frac{3-4\delta}{2}} \, \big\| \psi'' \big\|_{L^{\infty}(\tilde{p}_1, \tilde{p}_2)}^{\frac{3}{4}-\delta} \, \omega^{-\delta} \; .
\end{align*} And, from Lemma \ref{2-LEM2} iv), we recall that \begin{align*} \Big\| \big( \varphi_j^{\, -1} \big)'' \Big\|_{L^{\infty}(0, s_j)} & \leqslant \big\| \psi'' \big\|_{L^{\infty}(\tilde{p}_1, \tilde{p}_2)}^{\frac{3}{2}} \, \big\| \psi^{(3)} \big\|_{L^{\infty}(\tilde{p}_1, \tilde{p}_2)} \, \min_{[\tilde{p}_1, \tilde{p}_2]} \big\{-\psi''\big\}^{-\frac{7}{2}} \\ & \hspace{1cm} + \frac{1}{3} \, \big\| \psi'' \big\|_{L^{\infty}(\tilde{p}_1, \tilde{p}_2)}^{\frac{5}{2}} \, \big\| \psi^{(3)} \big\|_{L^{\infty}(\tilde{p}_1, \tilde{p}_2)} \, \min_{[\tilde{p}_1, \tilde{p}_2]} \big\{-\psi''\big\}^{-\frac{9}{2}} \; . \end{align*} Putting everything together provides the desired estimate, namely, \begin{align*} & \left| I(\omega) - \sqrt{2 \pi} \, e^{-i \frac{\pi}{4}} \, e^{i \omega \psi(p_0)} \, \frac{U(p_0)}{\sqrt{- \psi''(p_0)}} \, \omega^{-\frac{1}{2}} \right| \\ & \hspace{1cm} \leqslant \sum_{j=1}^2 \left| (-1)^{j+1} \int_0^{s_j} \Big( \big(U \circ \varphi_j^{\, -1} \big) \, \big(\varphi_j^{\, -1}\big)'\Big)'(s) \, \phi(s, \omega) \, ds \, e^{i \omega \psi(p_0)} \right| \\ & \hspace{1cm} \leqslant \frac{2^{\delta+1} \, L(\delta)}{\sqrt{3-4\delta}} \, \big( \tilde{p}_2 - \tilde{p}_1 \big)^{\frac{3 - 4 \delta}{2}} \, \big\| \psi'' \big\|_{L^{\infty}(\tilde{p}_1, \tilde{p}_2)}^{\frac{3}{2}-\delta} \, \min_{[\tilde{p}_1, \tilde{p}_2]} \big\{-\psi''\big\}^{-\frac{3}{2}} \, \big\| U' \big\|_{L^2(\mathbb{R})} \, \omega^{-\delta} \\ & \hspace{2cm} + \frac{2^{\delta - 1} L(\delta)}{1 - \delta} \, \big( \tilde{p}_2 - \tilde{p}_1 \big)^{2-2\delta} \bigg( \big\| \psi'' \big\|_{L^{\infty}(\tilde{p}_1, \tilde{p}_2)}^{\frac{5}{2}-\delta} \, \big\| \psi^{(3)} \big\|_{L^{\infty}(\tilde{p}_1, \tilde{p}_2)} \, \min_{[\tilde{p}_1, \tilde{p}_2]} \big\{-\psi''\big\}^{-\frac{7}{2}} \\ & \hspace{3cm} + \frac{1}{3} \, \big\| \psi'' \big\|_{L^{\infty}(\tilde{p}_1, \tilde{p}_2)}^{\frac{7}{2}-\delta} \, \big\| \psi^{(3)} \big\|_{L^{\infty}(\tilde{p}_1, \tilde{p}_2)} \,
\min_{[\tilde{p}_1, \tilde{p}_2]} \big\{-\psi''\big\}^{-\frac{9}{2}} \bigg) \, \| U \|_{L^{\infty}(\mathbb{R})} \, \omega^{-\delta} \; . \end{align*} \end{proof} We end this section by providing an explicit and uniform bound for oscillatory integrals of type \eqref{eq:oscill_int} in the case where there is no stationary point inside the support of the amplitude, making the decay with respect to $\omega$ faster. As above, the estimate involves the $L^2$-norm of the first derivative of the amplitude in view of applications to dispersive equations in the following section.\\ The proof of the following result relies on classical arguments (such as those in \cite[Chap.~VIII, Sec.~1, Prop.~2]{S93}) combined with the Cauchy-Schwarz inequality. \begin{2-THM2} \label{2-THM2} Let $p_1$, $p_2$, $\tilde{p}_1$ and $\tilde{p}_2$ be four finite real numbers such that $[p_1, p_2] \subset (\tilde{p}_1, \tilde{p}_2)$. Suppose that $\psi \in \mathcal{C}^2(\mathbb{R}, \mathbb{R})$ is a concave function such that $| \psi' | > 0$ on $[p_1, p_2]$. Assume moreover that $U \in \mathcal{C}^1(\mathbb{R}, \mathbb{C})$ is a function satisfying \begin{equation*} \mathrm{supp} \, U \subseteq [p_1,p_2] \; . \end{equation*} Then we have for all $\omega > 0$, \begin{align*} \left| \int_{\mathbb{R}} U(p) \, e^{i \omega \psi(p)} \, dp \right| \leqslant \Big( C_3(\psi, p_1, p_2) \, \big\| U' \big\|_{L^2(\mathbb{R})} + C_4(\psi, p_1, p_2) \, \big\| U \big\|_{L^{\infty}(\mathbb{R})} \Big) \, \omega^{-1} \; , \end{align*} where \begin{align*} & \bullet \quad C_3(\psi, p_1, p_2) := (p_2 - p_1)^{\frac{1}{2}} \, \min \Big\{ \big| \psi'(p_1) \big|, \big| \psi'(p_2) \big| \Big\}^{-1} \; ; \\ & \bullet \quad C_4(\psi, p_1, p_2) := \min \Big\{ \big| \psi'(p_1) \big|, \big| \psi'(p_2) \big| \Big\}^{-1} \; . \end{align*} \end{2-THM2} \begin{proof} Let $\omega > 0$.
Since $\psi$ is concave and $| \psi' | > 0$ on $[p_1, p_2]$, the derivative $\psi'$ is monotonic with constant sign on $[p_1, p_2]$, so that \begin{equation*} \forall \, p \in [p_1, p_2] \qquad \big| \psi'(p) \big| \geqslant \min \Big\{ \big| \psi'(p_1) \big|, \big| \psi'(p_2) \big| \Big\} =: m_{p_1, p_2}(\psi') > 0 \; . \end{equation*} Hence we are allowed to integrate by parts as follows: \begin{equation*} \int_{\mathbb{R}} U(p) \, e^{i \omega \psi(p)} \, dp = \int_{p_1}^{p_2} U(p) \, e^{i \omega \psi(p)} \, dp = - i \int_{p_1}^{p_2} \left( \frac{U}{\psi'} \right)'\hspace{-1mm}(p) \, e^{i \omega \psi(p)} \, dp \, \omega^{-1} \; . \end{equation*} Moreover we have \begin{align*} & \left| - i \int_{p_1}^{p_2} \left( \frac{U}{\psi'} \right)'\hspace{-1mm}(p) \, e^{i \omega \psi(p)} \, dp \right| \\ & \hspace{1.5cm} \leqslant \left| \int_{p_1}^{p_2} U'(p) \, \psi'(p)^{-1} \, e^{i \omega \psi(p)} \, dp \right| + \int_{p_1}^{p_2} \Big| U(p) \, \psi''(p) \, \psi'(p)^{-2} \Big| \, dp \\ & \hspace{1.5cm} \leqslant \big\| U' \big\|_{L^2(\mathbb{R})} \, \big\| (\psi')^{-1} \big\|_{L^2(p_1, p_2)} + \big\| U \big\|_{L^{\infty}(\mathbb{R})} \int_{p_1}^{p_2} \Big| \psi''(p) \, \psi'(p)^{-2} \Big| \, dp \; ; \end{align*} as in the preceding proof, we have applied the Cauchy-Schwarz inequality to the first integral. Now the hypothesis $\psi'' \leqslant 0$, together with the monotonicity and constant sign of $\psi'$, allows us to carry out the following computations: \begin{equation*} \int_{p_1}^{p_2} \Big| \psi''(p) \, \psi'(p)^{-2} \Big| \, dp = \left| - \int_{p_1}^{p_2} \psi''(p) \, \psi'(p)^{-2} \, dp \right| = \Big| \psi'(p_2)^{-1} - \psi'(p_1)^{-1} \Big| \leqslant m_{p_1, p_2}(\psi')^{-1} \; . \end{equation*} Furthermore, we have \begin{equation*} \big\| (\psi')^{-1} \big\|_{L^2(p_1, p_2)} \leqslant m_{p_1, p_2}(\psi')^{-1} \, (p_2 - p_1)^{\frac{1}{2}} \; .
\end{equation*} Consequently we obtain \begin{align*} \left| \int_{\mathbb{R}} U(p) \, e^{i \omega \psi(p)} \, dp \right| & \leqslant \Big( m_{p_1, p_2}(\psi')^{-1} \, (p_2 - p_1)^{\frac{1}{2}} \, \big\| U' \big\|_{L^2(\mathbb{R})} + m_{p_1, p_2}(\psi')^{-1} \, \big\| U \big\|_{L^{\infty}(\mathbb{R})} \Big) \, \omega^{-1} \; . \end{align*} \end{proof} \section{Minimization of error estimates and origin of the propagation cone for a family of dispersive equations} \label{sec:applications_de} We start this section by introducing the Fourier transform $\mathcal{F} u : \mathbb{R} \longrightarrow \mathbb{C}$ of a function $u : \mathbb{R} \longrightarrow \mathbb{C}$ belonging to the Schwartz space $\mathcal{S}(\mathbb{R})$: \begin{equation*} \forall \, p \in \mathbb{R} \qquad \mathcal{F} u(p) := \int_{\mathbb{R}} u(x) \, e^{-ixp} \, dx \; . \end{equation*} The Fourier transform defines an invertible operator from $\mathcal{S}(\mathbb{R})$ onto itself, and can be extended to the space of square-integrable functions $L^2(\mathbb{R})$ and to the tempered distributions $\mathcal{S}'(\mathbb{R})$. Moreover, Plancherel's theorem provides the following equality: \begin{equation*} \forall \, u \in L^2(\mathbb{R}) \qquad \| u \|_{L^2(\mathbb{R})} = \frac{1}{\sqrt{2\pi}}\big\| \mathcal{F} u \big \|_{L^2(\mathbb{R})} \; ; \end{equation*} see \cite[Theorem 7.1.6]{H83}. Consider now a $\mathcal{C}^{\infty}$-function $f : \mathbb{R} \longrightarrow \mathbb{R}$ whose derivatives all grow at most polynomially at infinity, and consider the associated operator $f(D) : \mathcal{S}(\mathbb{R}) \longrightarrow \mathcal{S}(\mathbb{R})$ defined by \begin{equation*} \forall \, x \in \mathbb{R} \qquad f(D) u(x) := \frac{1}{2 \pi} \int_{\mathbb{R}} f(p) \, \mathcal{F} u(p) \, e^{i x p} \, dp = \mathcal{F}^{-1} \Big( f \, \mathcal{F} u \Big)(x) \; , \end{equation*} which can be extended to the tempered distributions $\mathcal{S}'(\mathbb{R})$.
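As a simple illustration (an added example, using only the convention above): for the symbol $f(p) = p^2$, since $\mathcal{F} (u'')(p) = (ip)^2 \, \mathcal{F} u(p) = - p^2 \, \mathcal{F} u(p)$ for every $u \in \mathcal{S}(\mathbb{R})$, we get
\begin{equation*}
f(D) u = \mathcal{F}^{-1} \big( p^2 \, \mathcal{F} u \big) = - u'' \; ,
\end{equation*}
so the Fourier multiplier associated with $p^2$ is $-\partial_x^2$; similarly, the symbol $f_S(p) = \frac{1}{2} \, p^2$ corresponds to the operator $-\frac{1}{2} \, \partial_x^2$ of the free Schr\"odinger equation.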
The operator $f(D) : \mathcal{S}'(\mathbb{R}) \longrightarrow \mathcal{S}'(\mathbb{R})$ is called a \textit{Fourier multiplier} associated to the \emph{symbol} $f$. Given such an operator, we introduce the following evolution equation on the line, \begin{equation} \label{eq:evoleq} \left\{ \begin{array}{l} \left[ i \, \partial_t - f \big(D\big) \right] u_f(t) = 0 \\ [2mm] u_f(0) = u_0 \end{array} \right. \; , \end{equation} for $t \in \mathbb{R}$. If we suppose $u_0 \in \mathcal{S}'(\mathbb{R})$, then equation \eqref{eq:evoleq} has a unique solution in $\displaystyle \mathcal{C}^1\big( \mathbb{R} , \mathcal{S}'(\mathbb{R}) \big)$ given by the following solution formula, \begin{equation*} u_f(t) = \mathcal{F}^{-1} \Big( e^{-i t f} \mathcal{F} u_0 \Big) \; . \end{equation*} We refer to \cite{BAT1994} for a detailed study of this family of equations. In this paper, we suppose that the symbol $f$ is strictly convex; an important example of such an equation is given by the free Schr\"odinger equation, whose symbol is $f_S(p) = \frac{1}{2} \, p^2$.\\ For the sake of a clear presentation of the results, we consider only initial data $u_0$ belonging to the Schwartz space $\mathcal{S}(\mathbb{R})$, in order to focus on the approach we propose. We mention that it is possible to extend our results to the case of initial data in $L^2(\mathbb{R})$ under additional assumptions on regularity and decay; but this falls outside the scope of the paper.\\ Further, the initial data are assumed to be in bounded frequency bands, meaning that their Fourier transforms are supported on bounded intervals $[p_1,p_2]$, where $p_1 < p_2$ are two finite real numbers. Under such hypotheses, the solution formula for equation \eqref{eq:evoleq} defines a function $u_f : \mathbb{R} \times \mathbb{R} \longrightarrow \mathbb{C}$ given by \begin{equation} \label{eq:formula} u_f(t,x) = \frac{1}{2 \pi} \int_{p_1}^{p_2} \mathcal{F} u_0(p) \, e^{-itf(p) + ixp} \, dp \; .
\end{equation} We now define the space-time cone related to the symbol $f$ and to the frequency band $[\tilde{p}_1,\tilde{p}_2]$, with origin $(t_0,x_0) \in \mathbb{R}^2$: \begin{3-DEF2} \label{3-DEF2} Let $t_0, x_0, \tilde{p}_1, \tilde{p}_2$ be four finite real numbers such that $\tilde{p}_1 < \tilde{p}_2$ and let $f: \mathbb{R} \longrightarrow \mathbb{R}$ be a symbol. \begin{enumerate} \item We define the space-time cone $\mathfrak{C}_f\big( [\tilde{p}_1, \tilde{p}_2], (t_0, x_0) \big)$ as follows: \begin{equation} \label{eq:cone} \mathfrak{C}_f\big( [\tilde{p}_1, \tilde{p}_2], (t_0, x_0) \big) := \left\{ (t,x) \in \big( \mathbb{R} \backslash \{t_0 \} \big) \times \mathbb{R} \, \bigg| \, f'(\tilde{p}_1) \leqslant \frac{x - x_0}{t - t_0} \leqslant f'(\tilde{p}_2) \right\} \; . \end{equation} \item Let $\mathfrak{C}_f\big( [\tilde{p}_1, \tilde{p}_2], (t_0, x_0) \big)^c$ be the complement of the space-time cone $\mathfrak{C}_f\big( [\tilde{p}_1, \tilde{p}_2], (t_0, x_0) \big)$ in $\big( \mathbb{R} \backslash \{t_0 \} \big) \times \mathbb{R}$. \end{enumerate} \end{3-DEF2} In this section, we aim at computing one-term time-asymptotic expansions of the solution formula \eqref{eq:formula} for initial data in frequency bands. We show that the resulting first term of these expansions is supported in a space-time cone of type \eqref{eq:cone}, providing asymptotic propagation features for the solutions. In a first step, the origin of the cone is arbitrarily chosen and the remainder estimates are explicit with respect to this origin. In a second step, we determine the origin of the cone minimizing this remainder estimate.\\ In the following theorem, we provide a one-term time-asymptotic expansion with explicit error estimate of the solution \eqref{eq:formula} in the space-time cone $\mathfrak{C}_f\big( [\tilde{p}_1, \tilde{p}_2], (t_0, x_0) \big)$, where $[p_1, p_2] \subset (\tilde{p}_1, \tilde{p}_2)$ and $(t_0,x_0) \in \mathbb{R}^2$ is arbitrarily chosen.
A uniform estimate of \eqref{eq:formula} outside the cone is also established.\\ The proof of Theorem \ref{3-THM1} follows the lines of that of \cite[Theorem 5.2]{AMD17}: it consists mainly in rewriting the solution formula \eqref{eq:formula} as an oscillatory integral with respect to time, and then in applying Theorems \ref{2-THM1} and \ref{2-THM2}. Here the expansion in a cone with arbitrary origin is obtained thanks to a space-time shift in the integral defining \eqref{eq:formula}. The explicitness of the remainder with respect to the origin is possible thanks to the new remainder estimate given in Theorem \ref{2-THM1}, which allows the application of Plancherel's theorem. \begin{3-THM1} \label{3-THM1} Let $p_1$, $p_2$, $\tilde{p}_1$ and $\tilde{p}_2$ be four finite real numbers such that $[p_1, p_2] \subset (\tilde{p}_1, \tilde{p}_2)$. Suppose that $u_0 \in \mathcal{S}(\mathbb{R})$ is a function whose Fourier transform satisfies \begin{equation*} \mathrm{supp} \, \mathcal{F} u_0 \subseteq [p_1,p_2] \; . \end{equation*} Fix $(t_0,x_0) \in \mathbb{R}^2$. Then \begin{enumerate} \item for all $(t,x) \in \mathfrak{C}_f\big( [\tilde{p}_1, \tilde{p}_2], (t_0, x_0) \big)$, we have \begin{align} & \left| u_f(t, x) - \frac{1}{\sqrt{2 \pi}} \, e^{- \mathrm{sgn}(t-t_0) \, i \frac{\pi}{4}} \, e^{-itf(p_0(t,x)) + ix p_0(t,x)} \, \frac{\mathcal{F} u_0 \big( p_0(t,x) \big)}{\sqrt{f''\big( p_0(t,x) \big)}} \, |t-t_0|^{-\frac{1}{2}} \right| \nonumber \\ & \hspace{2cm} \leqslant \Big( C_5(f, \delta, \tilde{p}_1, \tilde{p}_2) \, \big\| (.-x_0) \, u_f(t_0,.)
\big\|_{L^2(\mathbb{R})} \nonumber \\ & \hspace{4cm} + C_6(f, \delta, \tilde{p}_1, \tilde{p}_2) \, \big\| u_0 \big\|_{L^1(\mathbb{R})} \Big) \, |t-t_0|^{- \delta} \; , \label{eq:remainder_est1} \end{align} where the real number $\delta$ is arbitrarily chosen in $\big( \frac{1}{2}, \frac{3}{4} \big)$ and \begin{align*} & \bullet \quad p_0(t,x) := (f')^{-1} \hspace{-1mm} \left( \frac{x-x_0}{t-t_0} \right) \; ; \\ & \bullet \quad C_5(f, \delta, \tilde{p}_1, \tilde{p}_2) := \frac{2^{\delta+\frac{1}{2}} \, L(\delta)}{\sqrt{\pi} \sqrt{3-4\delta}} \, \big( \tilde{p}_2 - \tilde{p}_1 \big)^{\frac{3 - 4 \delta}{2}} \, c_5(f, \delta, \tilde{p}_1, \tilde{p}_2) \; ; \\ & \bullet \quad C_6(f, \delta, \tilde{p}_1, \tilde{p}_2) := \frac{2^{\delta - 2} L(\delta)}{\pi(1 - \delta)} \, \big( \tilde{p}_2 - \tilde{p}_1 \big)^{2-2\delta} \, c_6(f, \delta, \tilde{p}_1, \tilde{p}_2) \; ; \\ & \bullet \quad c_5(f, \delta, \tilde{p}_1, \tilde{p}_2) := \big\| f'' \big\|_{L^{\infty}(\tilde{p}_1, \tilde{p}_2)}^{\frac{3}{2}-\delta} \, \min_{[\tilde{p}_1, \tilde{p}_2]}\big\{f''\big\}^{-\frac{3}{2}} \; ; \\ & \bullet \quad c_6(f, \delta, \tilde{p}_1, \tilde{p}_2) := \big\| f'' \big\|_{L^{\infty}(\tilde{p}_1, \tilde{p}_2)}^{\frac{5}{2}-\delta} \, \big\| f^{(3)} \big\|_{L^{\infty}(\tilde{p}_1, \tilde{p}_2)} \, \min_{[\tilde{p}_1, \tilde{p}_2]} \big\{f''\big\}^{-\frac{7}{2}} \\ & \hspace{4.5cm} + \frac{1}{3} \, \big\| f'' \big\|_{L^{\infty}(\tilde{p}_1, \tilde{p}_2)}^{\frac{7}{2}-\delta} \, \big\| f^{(3)} \big\|_{L^{\infty}(\tilde{p}_1, \tilde{p}_2)} \, \min_{[\tilde{p}_1, \tilde{p}_2]} \big\{f''\big\}^{-\frac{9}{2}} \; . \end{align*} The constant $L(\delta) > 0$ is defined in Lemma \ref{2-LEM1} iii); \item for all $(t,x) \in \mathfrak{C}_f\big( [\tilde{p}_1, \tilde{p}_2], (t_0, x_0) \big)^c$, we have \begin{align} \big| u_f(t,x) \big| & \leqslant \Big( C_7(f, p_1, p_2, \tilde{p}_1, \tilde{p}_2) \, \big\| (.-x_0) \, u_f(t_0,.)
\big\|_{L^2(\mathbb{R})} \nonumber \\ & \hspace{1.5cm} + C_8(f, p_1, p_2, \tilde{p}_1, \tilde{p}_2) \, \big\| u_0 \big\|_{L^1(\mathbb{R})} \Big) \, |t-t_0|^{-1} \; , \label{eq:remainder_est2} \end{align} where \begin{align*} & \bullet \quad C_7(f, p_1, p_2, \tilde{p}_1, \tilde{p}_2) := \frac{1}{\sqrt{2\pi}} \, (p_2 - p_1)^{\frac{1}{2}}\, \min \hspace{-1mm} \big\{ f'(p_1) - f'(\tilde{p}_1), f'(\tilde{p}_2) - f'(p_2) \big\}^{-1} \; ; \\ & \bullet \quad C_8(f, p_1, p_2, \tilde{p}_1, \tilde{p}_2) := \frac{1}{2\pi} \, \min \hspace{-1mm} \big\{ f'(p_1) - f'(\tilde{p}_1), f'(\tilde{p}_2) - f'(p_2) \big\}^{-1} \; . \end{align*} \end{enumerate} \end{3-THM1} \begin{proof} The cases where $t > t_0$ and $t < t_0$ are distinguished for the sake of readability.\\ \underline{Case 1:} $t > t_0$\\ We rewrite the solution formula as an oscillatory integral by proceeding as follows\footnote{In \cite{AMD17, D17-1}, the parameters $t_0$ and $x_0$ are implicitly equal to $0$. Allowing these parameters to be arbitrary produces a space-time shift in the solution formula and makes it possible to consider cones with arbitrary origin.} \begin{align*} u_f(t,x) & = \frac{1}{2 \pi} \int_{\mathbb{R}} \mathcal{F} u_0(p) \, e^{-i t f(p) + i x p} \, dp \\ & = \int_{\mathbb{R}} \frac{1}{2 \pi} \, \mathcal{F} u_0(p) \, e^{-i t_0 f(p) + i x_0 p} \, e^{i (t-t_0) \big( \frac{x-x_0}{t-t_0} p \, - \, f(p) \big)} \, dp \\ & = \int_{\mathbb{R}} \mathbf{U}_f(p,t_0,x_0) \, e^{i (t-t_0) \Psi_f(p,t,x,t_0,x_0)} \, dp \\ & =: I_f(t, x, u_0, t_0, x_0) \; . \end{align*} We note that the amplitude \begin{equation*} \mathbf{U}_f(p,t_0,x_0) := \frac{1}{2 \pi} \, \mathcal{F} u_0(p) \, e^{-i t_0 f(p) + i x_0 p} \; , \end{equation*} which is actually the Fourier transform of $\frac{1}{2 \pi} u_f(t_0, \, . + x_0)$, is a $\mathcal{C}^\infty$-function (with respect to the variable $p$) whose support is included in $[p_1,p_2]$.
The phase function \begin{equation*} \Psi_f(p,t,x,t_0,x_0) := \frac{x-x_0}{t-t_0} p \, - \, f(p) \end{equation*} is a $\mathcal{C}^{\infty}$-function on $\mathbb{R}$ which is strictly concave since we have supposed $f'' > 0$ in this section.\\ Now we remark that the existence of a stationary point for the phase inside the interval $\tilde{I} := (\tilde{p}_1, \tilde{p}_2)$ depends on the value of $\frac{x-x_0}{t-t_0}$: it exists and is unique if and only if $\frac{x-x_0}{t-t_0} \in f'\big( \tilde{I}\big)$. In this case, the stationary point $p_0(t,x)$ is given by \begin{equation*} p_0(t,x) = (f')^{-1} \hspace{-1mm} \left( \frac{x-x_0}{t-t_0} \right) \; . \end{equation*} Let us now distinguish two sub-cases in order to apply Theorems \ref{2-THM1} and \ref{2-THM2}. \begin{itemize} \item \emph{Case} $\frac{x-x_0}{t-t_0} \in f'\big( \tilde{I}\big)$. In this case, the stationary point belongs to $\tilde{I}$. Hence we are allowed to apply Theorem \ref{2-THM1} to the oscillatory integral $I_f(t, x, u_0, t_0, x_0)$ with $\omega = t - t_0$: \begin{align*} & \left| I_f(t, x, u_0, t_0, x_0) - \frac{1}{\sqrt{2 \pi}} \, e^{-i \frac{\pi}{4}} \, e^{-itf(p_0(t,x)) + ix p_0(t,x)} \, \frac{\mathcal{F} u_0 \big( p_0(t,x) \big)}{\sqrt{f''\big( p_0(t,x) \big)}} \, (t-t_0)^{-\frac{1}{2}} \right| \\ & \hspace{2cm} \leqslant \frac{1}{2 \pi} \, \bigg( C_1(\Psi_f, \delta, \tilde{p}_1, \tilde{p}_2) \, \Big\| \partial_p \left[ \mathcal{F} u_0(\cdot) \, e^{- i t_0 f(\cdot) + i x_0 \cdot} \right] \Big\|_{L^2(\mathbb{R})} \\ & \hspace{4cm} + C_2(\Psi_f, \delta, \tilde{p}_1, \tilde{p}_2) \, \big\| \mathcal{F} u_0 \big\|_{L^{\infty}(\mathbb{R})} \bigg) \, (t-t_0)^{- \delta} \; , \end{align*} where $\delta \in \big( \frac{1}{2}, \frac{3}{4} \big)$ and the constants $C_1(\Psi_f, \delta, \tilde{p}_1, \tilde{p}_2)$, $C_2(\Psi_f, \delta, \tilde{p}_1, \tilde{p}_2) > 0$ are defined in Theorem \ref{2-THM1}.
Since we have \begin{equation*} \partial_p^{(2)} \Psi_f(p,t,x,t_0,x_0) = -f''(p) \quad , \quad \partial_p^{(3)} \Psi_f(p,t,x,t_0,x_0) = -f^{(3)}(p) \; , \end{equation*} and since the constants $C_1(\Psi_f, \delta, \tilde{p}_1, \tilde{p}_2)$ and $C_2(\Psi_f, \delta, \tilde{p}_1, \tilde{p}_2)$ depend only on the se\-cond and third derivatives (with respect to $p$) of the phase, these constants actually depend on $f$ rather than on the full phase $\Psi_f$; below, they are accordingly written with argument $-f$. Furthermore, the Plancherel theorem and standard properties of the Fourier transform provide \begin{align*} \Big\| \partial_p \left[ \mathcal{F} u_0(\cdot) \, e^{-it_0 f(\cdot) + i x_0 \cdot} \right] \Big\|_{L^2(\mathbb{R})} & = \sqrt{2\pi} \, \Big\| x \longmapsto x \, \mathcal{F}^{-1}\big[\mathcal{F} u_0(\cdot) \, e^{-it_0 f(\cdot) + i x_0 \cdot} \big](x) \Big\|_{L^2(\mathbb{R})} \\ & = \sqrt{2\pi} \, \Big\| x \longmapsto x \, \mathcal{F}^{-1}\big[\mathcal{F} u_0(\cdot) \, e^{-it_0 f(\cdot)} \big](x + x_0) \Big\|_{L^2(\mathbb{R})} \\ & = \sqrt{2\pi} \, \Big\| x \longmapsto (x-x_0) \, \mathcal{F}^{-1}\big[\mathcal{F} u_0(\cdot) \, e^{-it_0 f(\cdot)} \big](x) \Big\|_{L^2(\mathbb{R})} \\ & = \sqrt{2\pi} \, \Big\| x \longmapsto (x-x_0) \, u_f(t_0,x) \Big\|_{L^2(\mathbb{R})} \; . \end{align*} Hence we finally obtain \begin{align*} & \left| u_f(t, x) - \frac{1}{\sqrt{2 \pi}} \, e^{-i \frac{\pi}{4}} \, e^{-itf(p_0(t,x)) + ix p_0(t,x)} \, \frac{\mathcal{F} u_0 \big( p_0(t,x) \big)}{\sqrt{f''\big( p_0(t,x) \big)}} \, (t-t_0)^{-\frac{1}{2}} \right| \\ & \hspace{2cm} \leqslant \Bigg( \frac{1}{\sqrt{2 \pi}} \, C_1(-f, \delta, \tilde{p}_1, \tilde{p}_2) \, \big\| (.-x_0) \, u_f(t_0,.) \big\|_{L^2(\mathbb{R})} \\ & \hspace{4cm} + \frac{1}{2 \pi} \, C_2(-f, \delta, \tilde{p}_1, \tilde{p}_2) \, \big\| u_0 \big\|_{L^1(\mathbb{R})} \Bigg) \, (t-t_0)^{- \delta} \; , \end{align*} where we have used the classical estimate $\big\| \mathcal{F} u_0 \big\|_{L^\infty(\mathbb{R})} \leqslant \| u_0 \|_{L^1(\mathbb{R})}$.
\item \emph{Case} $\frac{x-x_0}{t-t_0} \notin f'\big(\tilde{I}\big)$. As previously, we rewrite the solution formula as the oscillatory integral $I_f(t,x,u_0,t_0,x_0)$. Here the phase $\Psi_f(.,t,x,t_0,x_0)$ has no stationary point inside the interval $\tilde{I} = (\tilde{p}_1, \tilde{p}_2)$ and one has \begin{equation*} \forall \, p \in [p_1, p_2] \qquad \Big| \partial_p \Psi_f(p,t,x,t_0,x_0) \Big| = \left| \frac{x-x_0}{t-t_0} - f'(p) \right| \geqslant m_{\tilde{I}}(f) > 0 \end{equation*} where $m_{\tilde{I}}(f) := \min \big\{ f'(p_1) - f'(\tilde{p}_1), f'(\tilde{p}_2) - f'(p_2) \big\}$. Consequently we can apply Theorem \ref{2-THM2} which provides \begin{align*} \big| u_f(t,x) \big| & \leqslant \Bigg( \frac{1}{\sqrt{2 \pi}} \, C_3(-f, \tilde{p}_1, \tilde{p}_2) \, \big\| (.-x_0) \, u_f(t_0,.) \big\|_{L^2(\mathbb{R})} \\ & \hspace{1.5cm} + \frac{1}{2 \pi} \, C_4(-f, \tilde{p}_1, \tilde{p}_2) \, \big\| u_0 \big\|_{L^1(\mathbb{R})} \Bigg) (t-t_0)^{-1} \; , \end{align*} where the constants $C_3(-f, \tilde{p}_1, \tilde{p}_2)$, $C_4(-f, \tilde{p}_1, \tilde{p}_2) > 0$ are defined in Theorem \ref{2-THM2}. \end{itemize} \underline{Case 2:} $t < t_0$\\ Here we have \begin{align*} u_f(t,x) & = \int_{\mathbb{R}} \frac{1}{2 \pi} \, \mathcal{F} u_0(p) \, e^{-i t_0 f(p) + i x_0 p} \, e^{i (t-t_0) \big( \frac{x-x_0}{t-t_0} p \, - \, f(p) \big)} \, dp \\ & = \overline{\int_{\mathbb{R}} \frac{1}{2 \pi} \, \overline{\mathcal{F} u_0(p) \, e^{-i t_0 f(p) + i x_0 p}} \, e^{i (t_0-t) \big( \frac{x-x_0}{t-t_0} p \, - \, f(p) \big)} \, dp} \\ & = \overline{\int_{\mathbb{R}} \overline{\mathbf{U}_f(p,t_0,x_0)} \, e^{i (t_0-t) \Psi_f(p,t,x,t_0,x_0)} \, dp} \; .
\end{align*} Following the arguments and computations of the preceding case $t > t_0$, we obtain \begin{align*} & \left| u_f(t, x) - \frac{1}{\sqrt{2 \pi}} \, e^{+i \frac{\pi}{4}} \, e^{-itf(p_0(t,x)) + ix p_0(t,x)} \, \frac{\mathcal{F} u_0 \big( p_0(t,x) \big)}{\sqrt{f''\big( p_0(t,x) \big)}} \, (t_0-t)^{-\frac{1}{2}} \right| \\ & \hspace{2cm} \leqslant \bigg( \frac{1}{\sqrt{2 \pi}} \, C_1(-f, \delta, \tilde{p}_1, \tilde{p}_2) \big\| (.-x_0) \, u_f(t_0,.) \big\|_{L^2(\mathbb{R})} \\ & \hspace{4cm} + \frac{1}{2\pi} \, C_2(-f, \delta, \tilde{p}_1, \tilde{p}_2) \big\| u_0 \big\|_{L^1(\mathbb{R})} \bigg) \, (t_0-t)^{- \delta} \; , \end{align*} for all $(t,x) \in (-\infty, t_0) \times \mathbb{R}$ such that $\frac{x-x_0}{t-t_0} \in f' \big( \tilde{I} \big)$, and \begin{align*} \big| u_f(t,x) \big| & \leqslant \Bigg( \frac{1}{\sqrt{2 \pi}} \, C_3(-f, \tilde{p}_1, \tilde{p}_2) \, \big\| (.-x_0) \, u_f(t_0,.) \big\|_{L^2(\mathbb{R})} \\ & \hspace{1.5cm} + \frac{1}{2 \pi} \, C_4(-f, \tilde{p}_1, \tilde{p}_2) \, \big\| u_0 \big\|_{L^1(\mathbb{R})} \Bigg) (t_0-t)^{-1} \; , \end{align*} for all $(t,x) \in (-\infty, t_0) \times \mathbb{R}$ such that $\frac{x-x_0}{t-t_0} \notin f' \big( \tilde{I} \big)$, leading to the desired estimates. \end{proof} We now define the following moment-type and variance-type quantities for normalized $u \in L^2(\mathbb{R})$. \begin{3-DEF1} \label{3-DEF1} Choose $u \in L^2(\mathbb{R})$ such that $\| u \|_{L^2(\mathbb{R})} = 1$ and let $f : \mathbb{R} \longrightarrow \mathbb{R}$ be a symbol. If they exist, we define the real numbers $\mathcal{M}_f(u)$ and $\mathcal{V}_f(u)$ as follows, \begin{align*} \mathcal{M}_f(u) := \int_{\mathbb{R}} f(x) \, \big| u(x) \big|^2 \, dx \quad , \quad \mathcal{V}_f(u) := \mathcal{M}_{f^2}(u) - \mathcal{M}_f(u)^2 \; .
\end{align*} If $f(x) = x$, then we write for simplicity \begin{equation*} \mathcal{M}_1(u) := \mathcal{M}_f(u) \quad , \quad \mathcal{M}_2(u) := \mathcal{M}_{f^2}(u) \quad , \quad \mathcal{V}(u) := \mathcal{V}_f(u) \; . \end{equation*} \end{3-DEF1} \begin{3-REM0} \em The above quantities $\mathcal{M}_1(u)$, $\mathcal{M}_2(u)$ and $\mathcal{V}(u)$ are respectively the mean, the second moment and the variance of $|u|^2$. \end{3-REM0} The following lemma will allow us to determine the space-time cone in which the remainder bound given in Theorem \ref{3-THM1} is minimal with respect to $(t_0,x_0)$; see Corollary \ref{3-COR1}. It is based on the minimization of the function $(t_0, x_0) \longmapsto \big\| (.-x_0) u_f(t_0,.) \big\|_{L^2(\mathbb{R})}^2$, which is the moment of order 2 of $x \longmapsto \big| u_f(t_0,x+x_0) \big|^2$ for fixed $(t_0,x_0) \in \mathbb{R}^2$. \begin{3-LEM1} \label{3-LEM1} Suppose that the hypotheses of Theorem \ref{3-THM1} are satisfied and suppose in addition that $\| u_0 \|_{L^2(\mathbb{R})} = 1$. Then the function $g : \mathbb{R}^2 \longrightarrow \mathbb{R}_+$ defined by \begin{equation*} g(t_0,x_0) = \big\| (.-x_0) \, u_f(t_0,.) \big\|_{L^2(\mathbb{R})}^2 \end{equation*} has a global minimum at $(t^*, x^*) \in \mathbb{R}^2$ with \begin{align*} & \bullet \quad t^* = \argmin_{\tau \in \mathbb{R}} \mathcal{V}\big(u_f(\tau,.)\big) \; ; \\ & \bullet \quad x^* = \mathcal{M}_1\big( u_f(t^*,.) \big) \; . \end{align*} \end{3-LEM1} \begin{proof} For fixed $t_0 \in \mathbb{R}$, differentiating the function $g(t_0, .)$ twice with respect to its second argument shows that \begin{equation*} \partial_{x_0}^{(2)} g(t_0, x_0) = 2 > 0 \; . \end{equation*} Hence $g(t_0,.)$ is a polynomial function of degree 2 which attains its unique global minimum at \begin{equation*} \tilde{x}(t_0) = \int_\mathbb{R} x \big| u_f(t_0,x) \big|^2 \, dx = \mathcal{M}_1\big(u_f(t_0,.) \big) \; .
\end{equation*} It follows that \begin{equation*} g\big(t_0, \tilde{x}(t_0) \big) = \int_\mathbb{R} \Big(x-\mathcal{M}_1\big(u_f(t_0,.) \big) \Big)^2 \, \big| u_f(t_0,x) \big|^2 \, dx = \mathcal{V}\big(u_f(t_0,.) \big) \; . \end{equation*} Proposition \ref{A-PROP2} ensures that $t_0 \in \mathbb{R} \longmapsto \mathcal{V}\big(u_f(t_0,.) \big) \in \mathbb{R}_+$ is a polynomial of degree 2 whose leading coefficient is $\mathcal{V}_{f'}\big( \frac{1}{\sqrt{2 \pi}} \mathcal{F} u_0 \big)$. Since, by a simple calculation, \begin{align*} \mathcal{V}_{f'} \hspace{-1mm} \left( \frac{1}{\sqrt{2 \pi}} \mathcal{F} u_0 \right) & = \mathcal{M}_{f'^{\, 2}} \hspace{-1mm} \left( \frac{1}{\sqrt{2 \pi}} \, \mathcal{F} u_0 \right) - \mathcal{M}_{f'} \hspace{-1mm} \left( \frac{1}{\sqrt{2 \pi}} \, \mathcal{F} u_0 \right)^2 \\ & = \frac{1}{2 \pi} \int_\mathbb{R} \left( f'(p) - \mathcal{M}_{f'} \hspace{-1mm} \left( \frac{1}{\sqrt{2 \pi}} \, \mathcal{F} u_0 \right) \right)^2 \big| \mathcal{F} u_0(p) \big|^2 \, dp \; , \end{align*} and since $f$ is supposed to be strictly convex in this paper, the leading coefficient $\mathcal{V}_{f'}\big( \frac{1}{\sqrt{2 \pi}} \mathcal{F} u_0 \big)$ is necessarily positive. Thus the function $t_0 \in \mathbb{R} \longmapsto g\big(t_0, \tilde{x}(t_0) \big) \in \mathbb{R}_+$ has a global minimum at a certain $t^* \in \mathbb{R}$, \emph{i.e.}, \begin{equation*} t^* = \argmin_{\tau \in \mathbb{R}} \mathcal{V}\big(u_f(\tau,.)\big) \; . \end{equation*} Finally we define \begin{equation*} x^* := \tilde{x}(t^*) = \mathcal{M}_1\big(u_f(t^*,.) \big) \; .
\end{equation*} \end{proof} \begin{3-REM1} \label{3-REM1} \em The polynomial nature of the function $t_0 \in \mathbb{R} \longmapsto g\big(t_0, \tilde{x}(t_0) \big) \in \mathbb{R}_+$ makes it possible to derive the following formula for $t^*$: \begin{align} t^* & = \frac{1}{\mathcal{V}_{f'}\big( \frac{1}{\sqrt{2 \pi}} \, \mathcal{F} u_0 \big)} \bigg( - \frac{1}{2 \pi} \Im \hspace{-1mm} \left( \int_{\mathbb{R}} f'(p) \, \mathcal{F} u_0(p) \, \overline{\big( \mathcal{F} u_0 \big)'(p)} \, dp \right) \nonumber \\ & \hspace{1.5cm} + \mathcal{M}_{f'} \hspace{-1mm} \left( \frac{1}{\sqrt{2 \pi} } \, \mathcal{F} u_0 \right) \mathcal{M}_1(u_0) \bigg) \; . \label{eq:formula_t} \end{align} Furthermore, from Proposition \ref{A-PROP1}, we have \begin{equation*} \mathcal{M}_1\big(u_f(t^*,.) \big) = \mathcal{M}_{f'} \hspace{-1mm} \left( \frac{1}{\sqrt{2 \pi}} \, \mathcal{F} u_0 \right) t^* + \mathcal{M}_1(u_0) \; ; \end{equation*} inserting formula \eqref{eq:formula_t} into the preceding equality provides \begin{align*} x^* & = \frac{1}{\mathcal{V}_{f'}\big( \frac{1}{\sqrt{2 \pi}} \, \mathcal{F} u_0 \big)} \bigg( - \frac{1}{2 \pi} \Im \hspace{-1mm} \left( \int_{\mathbb{R}} f'(p) \, \mathcal{F} u_0(p) \, \overline{\big( \mathcal{F} u_0 \big)'(p)} \, dp \right) \mathcal{M}_{f'} \hspace{-1mm} \left( \frac{1}{\sqrt{2 \pi} } \, \mathcal{F} u_0 \right) \\ & \hspace{1.5cm} + \mathcal{M}_{f'^2} \hspace{-1mm} \left( \frac{1}{\sqrt{2 \pi} } \, \mathcal{F} u_0 \right) \mathcal{M}_1(u_0) \bigg) \; . \end{align*} \end{3-REM1} The final result of this section is a direct consequence of Theorem \ref{3-THM1} and of Lemma \ref{3-LEM1}: it shows that the bounds of the error estimates appearing in Theorem \ref{3-THM1} are minimized by placing the origin of the cone at the point $\big( t^*, x^* \big)$ defined above. \begin{3-COR1} \label{3-COR1} Suppose that the hypotheses of Theorem \ref{3-THM1} are satisfied and suppose in addition that $\| u_0 \|_{L^2(\mathbb{R})} = 1$.
Then the $(t_0, x_0)$-dependent right-hand sides of estimates \eqref{eq:remainder_est1} and \eqref{eq:remainder_est2} in Theorem \ref{3-THM1} have a global minimum at the point $\big( t^*, x^* \big) \in \mathbb{R}^2$ with \begin{align*} & \bullet \quad t^* = \argmin_{\tau \in \mathbb{R}} \mathcal{V}\big(u_f(\tau,.)\big) \; ; \\ & \bullet \quad x^* = \mathcal{M}_1\big( u_f(t^*,.) \big) \; . \end{align*} In this case, for all $(t,x) \in \mathfrak{C}_f\big( [\tilde{p}_1, \tilde{p}_2], (t^*, x^*) \big)$, we have \begin{align*} & \left| u_f(t, x) - \frac{1}{\sqrt{2 \pi}} \, e^{- \operatorname{sgn}(t-t^*) i \frac{\pi}{4}} \, e^{-itf(p_0(t,x)) + ix p_0(t,x)} \, \frac{\mathcal{F} u_0 \big( p_0(t,x) \big)}{\sqrt{f''\big( p_0(t,x) \big)}} \, |t-t^*|^{-\frac{1}{2}} \right| \nonumber \\ & \hspace{2cm} \leqslant \Big( C_5(f, \delta, \tilde{p}_1, \tilde{p}_2) \, \min_{\tau \in \mathbb{R}} \left(\sqrt{\mathcal{V}\big(u_f(\tau,.) \big)}\right) \nonumber \\ & \hspace{4cm} + C_6(f, \delta, \tilde{p}_1, \tilde{p}_2) \, \big\| u_0 \big\|_{L^1(\mathbb{R})} \Big) \, |t-t^*|^{- \delta} \; , \end{align*} where the real number $\delta$ is arbitrarily chosen in $\big( \frac{1}{2}, \frac{3}{4} \big)$ and $p_0(t,x) := (f')^{-1}\big( \frac{x-x^*}{t - t^*} \big)$, and for all $(t,x) \in \mathfrak{C}_f\big( [\tilde{p}_1, \tilde{p}_2], (t^*, x^*) \big)^c$, we have \begin{align*} \big| u_f(t,x) \big| & \leqslant \Big( C_7(f, p_1, p_2, \tilde{p}_1, \tilde{p}_2) \, \min_{\tau \in \mathbb{R}} \left(\sqrt{\mathcal{V}\big(u_f(\tau,.) \big)}\right) \nonumber \\ & \hspace{1.5cm} + C_8(f, p_1, p_2, \tilde{p}_1, \tilde{p}_2) \, \big\| u_0 \big\|_{L^1(\mathbb{R})} \Big) \, |t-t^*|^{-1} \; ; \end{align*} all the constants are defined in Theorem \ref{3-THM1}.
\end{3-COR1} \begin{proof} For fixed initial datum and $\delta \in \big( \frac{1}{2}, \frac{3}{4} \big)$, it is clear that minimizing the remainder bounds \eqref{eq:remainder_est1} and \eqref{eq:remainder_est2} with respect to $(t_0, x_0)$ is equivalent to minimizing the function $g : \mathbb{R}^2 \longrightarrow \mathbb{R}_+$ defined in Lemma \ref{3-LEM1}. This lemma asserts that $g$ has a global minimum at $(t^*, x^*)$ defined above. Furthermore we have \begin{align*} \big\| (.-x^*) \, u_f(t^*,.) \big\|_{L^2(\mathbb{R})}^2 & = \int_\mathbb{R} \Big(x-\mathcal{M}_1\big(u_f(t^*,.) \big) \Big)^2 \, \big| u_f(t^*,x) \big|^2 \, dx \\ & = \mathcal{V}\big(u_f(t^*,.) \big) \; , \end{align*} which is equal to the minimum of $\tau \in \mathbb{R} \longmapsto \mathcal{V}\big(u_f(\tau,.) \big)$ by the definition of $t^*$. This ends the proof. \end{proof} \section{Mean position and variance stable under time-asymp\-to\-tic approximations} \label{sec:preservation} Here we are interested in the mean position and the variance of the first term of the expansions given in Theorem \ref{3-THM1}. We exploit the flexibility inherited from the preceding section to choose the origin of the space-time cone, in which we expand the solution of equation \eqref{eq:evoleq}, in such a way that the associated first term and the solution share the same mean position and the difference between the variances is an explicit constant.
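As a purely numerical aside (an illustration added here, not part of the original argument; the symbol, the bump profile and all grid parameters are ad hoc choices), the mean-position and variance formulas of Appendix A can be observed directly for the model symbol $f(p) = p^2/2$ with a real frequency profile supported in $[p_1,p_2] = [1,2]$; for a real profile the imaginary cross term vanishes and $\mathcal{M}_1(u_0) = 0$, so the predictions reduce to $\mathcal{M}_1(u_f(t,.)) = \mathcal{M}_{f'} t$ and $\mathcal{V}(u_f(t,.)) = \mathcal{V}_{f'} t^2 + \mathcal{V}(u_0)$:

```python
import numpy as np

# Illustration only (not from the paper): model symbol f(p) = p^2/2 and a
# real C^infty bump profile F u_0 supported in [p1, p2] = [1, 2] (ad hoc
# choices).  We evaluate u_f(t,.) by direct quadrature of the Fourier
# integral and compare the mean position and variance of |u_f(t,.)|^2
# with the closed formulas M_{f'} t + M_1(u_0) and V_{f'} t^2 + V(u_0);
# the cross term vanishes because F u_0 is real (hence t* = 0 here).

trapz = getattr(np, "trapezoid", None) or np.trapz  # NumPy 1.x / 2.x

p1, p2 = 1.0, 2.0
p = np.linspace(p1, p2, 801)[1:-1]              # open interval: avoid 1/0
Fu0 = np.exp(-1.0 / ((p - p1) * (p2 - p)))      # smooth bump, flat at ends
Fu0 /= np.sqrt(trapz(Fu0**2, p) / (2 * np.pi))  # enforce ||u_0||_{L^2} = 1

Mfp = trapz(p * Fu0**2, p) / (2 * np.pi)        # M_{f'},  f'(p) = p
Vfp = trapz(p**2 * Fu0**2, p) / (2 * np.pi) - Mfp**2
V0 = trapz(np.gradient(Fu0, p)**2, p) / (2 * np.pi)  # V(u_0) via Plancherel

t = 4.0
x = np.linspace(-60.0, 80.0, 4000)
phase = np.exp(-1j * t * p[None, :]**2 / 2 + 1j * x[:, None] * p[None, :])
rho = np.abs(trapz(Fu0[None, :] * phase, p, axis=1) / (2 * np.pi))**2
rho /= trapz(rho, x)                            # guard against truncation
M1 = trapz(x * rho, x)                          # mean position at time t
V = trapz((x - M1)**2 * rho, x)                 # variance at time t

print(M1, Mfp * t)           # M_1(u_0) = 0 for a real profile
print(V, Vfp * t**2 + V0)
```

The trapezoidal rule suffices here because the integrand is smooth and compactly supported in $p$; the $x$-window is taken wide enough that the Schwartz tails of the packet are negligible.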
It turns out that such a cone corresponds to the one in which the $(t_0,x_0)$-dependent remainder estimate from Theorem \ref{3-THM1} is minimized.\\ This section illustrates that the refined method we have developed in the present paper offers approximations of the solution of equation \eqref{eq:evoleq} describing precisely its time-asymptotic propagation features.\\ Let $u_0 \in \mathcal{S}(\mathbb{R})$ be such that \begin{equation*} \operatorname{supp} \mathcal{F} u_0 \subseteq [p_1, p_2] \; , \end{equation*} where $p_1 < p_2$ are two finite real numbers, let $f : \mathbb{R} \longrightarrow \mathbb{R}$ be a strictly convex symbol and choose $(t_0, x_0) \in \mathbb{R}^2$. We define $H_f(.,.,u_0,t_0,x_0) : \big( \mathbb{R} \backslash \{t_0 \}\big) \times \mathbb{R} \longrightarrow \mathbb{C}$ as \begin{equation*} \label{eq:H_f} H_f(t,x,u_0,t_0,x_0) := \frac{1}{\sqrt{2 \pi}} \, e^{- \operatorname{sgn}(t - t_0) i \frac{\pi}{4}} \, e^{-it f( p_0(t,x)) + ix p_0(t,x)} \, \frac{\mathcal{F} u_0 \big(p_0(t,x) \big)}{\sqrt{f''\big(p_0(t,x) \big)}} \, |t - t_0|^{-\frac{1}{2}} \; , \end{equation*} with $p_0(t,x) := (f')^{-1}\big( \frac{x-x_0}{t-t_0} \big)$. The function $H_f(.,.,u_0,t_0,x_0)$ is actually the first term of the expansion given in Theorem \ref{3-THM1}; we recall that it is supported in the cone $\mathfrak{C}_f \big( [p_1,p_2], (t_0,x_0) \big)$.\\ In the following proposition, we compute the mean position of $H_f(t,.,u_0,t_0,x_0)$ for all $t \neq t_0$. \begin{4-PROP1} \label{4-PROP1} Let $(t_0,x_0) \in \mathbb{R}^2$. Suppose that the hypotheses of Theorem \ref{3-THM1} are satisfied and suppose in addition that $\| u_0 \|_{L^2(\mathbb{R})} = 1$. Then for all $t \in \mathbb{R} \backslash \{ t_0 \}$, we have \begin{equation*} \mathcal{M}_1\Big( H_f(t,.,u_0,t_0,x_0) \Big) = x_0 + \mathcal{M}_{f'} \hspace{-1mm} \left( \frac{1}{\sqrt{2 \pi}} \, \mathcal{F} u_0 \right) (t-t_0) \; . \end{equation*} \end{4-PROP1} \begin{proof} Let $t \in \mathbb{R} \backslash \{t_0\}$.
First of all, we note that \begin{equation*} x_0 + f'(p_1)(t-t_0) < x_0 + f'(p_2)(t-t_0) \quad \Longleftrightarrow \quad t > t_0 \; . \end{equation*} Hence, using the definition of $H_f(t,x,u_0,t_0,x_0)$ given just above, we have \begin{align*} \int_{\mathbb{R}} x \, \Big|H_f(t,x,u_0,t_0,x_0) \Big|^2 \, dx & = \frac{\operatorname{sgn}(t-t_0)}{2 \pi} \int_{x_0 + f'(p_1) (t-t_0)}^{x_0 + f'(p_2) (t-t_0)} x \, \frac{ \big| \mathcal{F} u_0 \big( p_0(t,x) \big) \big|^2}{f''\big(p_0(t,x) \big)} \, dx \, |t - t_0 |^{-1} \\ & = \frac{1}{2 \pi} \int_{x_0 + f'(p_1) (t-t_0)}^{x_0 + f'(p_2) (t-t_0)} x \, \frac{ \big| \mathcal{F} u_0 \big( p_0(t,x) \big) \big|^2}{f''\big(p_0(t,x) \big)} \, dx \, (t - t_0 )^{-1} \; . \end{align*} We now make the change of variables $x = x_0 + f'(p)(t-t_0)$ to obtain the desired result: \begin{align*} \int_{\mathbb{R}} x \, \Big| H_f \big(t,x, u_0, t_0, x_0 \big) \Big|^2 \, dx & = \frac{1}{2 \pi} \int_{p_1}^{p_2} \big(x_0 + f'(p) (t-t_0 ) \big) \big| \mathcal{F} u_0 (p) \big|^2 \, dp \\ & = x_0 + \mathcal{M}_{f'} \hspace{-1mm} \left( \frac{1}{\sqrt{2 \pi}} \mathcal{F} u_0 \right) \, t - \mathcal{M}_{f'} \hspace{-1mm} \left( \frac{1}{\sqrt{2 \pi}} \mathcal{F} u_0 \right) \, t_0 \; ; \end{align*} note that we have used the fact that $\frac{1}{2 \pi} \big\| \mathcal{F} u_0 \big\|_{L^2(p_1,p_2)}^2 = 1$, which is a direct consequence of the assumption $\| u_0 \|_{L^2(\mathbb{R})} = 1$. \end{proof} In the following result, we prove that the mean positions of the solution $u_f(t,.)$ of equation \eqref{eq:evoleq} and of the first term $H_f(t,.,u_0,t_0,x_0)$ are equal if and only if $(t_0,x_0)$ belongs to the space-time line given by the mean position of the solution of \eqref{eq:evoleq}. \begin{4-PROP2} \label{4-PROP2} Let $(t_0,x_0) \in \mathbb{R}^2$. Suppose that the hypotheses of Theorem \ref{3-THM1} are satisfied and suppose in addition that $\| u_0 \|_{L^2(\mathbb{R})} = 1$.
Then for all $t \in \mathbb{R} \backslash \{ t_0 \}$, we have \begin{equation*} \mathcal{M}_1 \big( u_f (t,.) \big) = \mathcal{M}_1\big( H_f(t,.,u_0,t_0,x_0) \big) \quad \Longleftrightarrow \quad x_0 = \mathcal{M}_1 \big( u_f (t_0,.) \big) \; . \end{equation*} \end{4-PROP2} \begin{proof} Let $t \in \mathbb{R} \backslash \{t_0\}$. According to Propositions \ref{4-PROP1} and \ref{A-PROP1}, we have \begin{align*} & \bullet \quad \mathcal{M}_1\big( u_f(t,.) \big) = \mathcal{M}_1(u_0) + \mathcal{M}_{f'} \hspace{-1mm} \left( \frac{1}{\sqrt{2 \pi}} \mathcal{F} u_0 \right) t \; ; \\ & \bullet \quad \mathcal{M}_1\big( H_f(t,.,u_0,t_0,x_0) \big) = x_0 + \mathcal{M}_{f'} \hspace{-1mm} \left( \frac{1}{\sqrt{2 \pi}} \, \mathcal{F} u_0 \right) (t-t_0) \; . \end{align*} Hence these two mean positions are equal if and only if \begin{equation*} \mathcal{M}_1(u_0) = x_0 - \mathcal{M}_{f'} \hspace{-1mm} \left( \frac{1}{\sqrt{2 \pi}} \, \mathcal{F} u_0 \right) t_0 \; , \end{equation*} which is equivalent to $x_0 = \mathcal{M}_1 \big( u_f (t_0,.) \big)$. \end{proof} \begin{4-REM1} \label{4-REM1} \em The definition of $x^*$ from Lemma \ref{3-LEM1} (or Corollary \ref{3-COR1}) and the preceding proposition ensure that the mean positions of $u_f(t,.)$ and $H_f(t,.,u_0, t^*,x^*)$ are equal for all $t \neq t^*$. \end{4-REM1} In the two following results, we focus on the variances of the solution $u_f(t,.)$ and of the first term $H_f(t,.,u_0,t_0,x_0)$ for all $t \neq t_0$. We first give a formula for the difference between the two variances for arbitrary $(t_0,x_0)$, and then determine the value of $(t_0,x_0)$ for which this difference is constant for all $t \neq t_0$. \begin{4-PROP3} \label{4-PROP3} Let $(t_0,x_0) \in \mathbb{R}^2$. Suppose that the hypotheses of Theorem \ref{3-THM1} are satisfied and suppose in addition that $\| u_0 \|_{L^2(\mathbb{R})} = 1$. Then for all $t \in \mathbb{R} \backslash \{ t_0 \}$, we have \begin{align} & \mathcal{V}\big( u_f(t,.)
\big) - \mathcal{V}\big(H_f(t,.,u_0, t_0, x_0) \big) \nonumber \\ & \hspace{1.5cm} = 2 \bigg( \frac{1}{2\pi} \, \Im \hspace{-1mm} \left( \int_{\mathbb{R}} f'(p) \, \mathcal{F} u_0 (p) \, \overline{\big( \mathcal{F} u_0 \big)'(p)} \, dp \right) - \mathcal{M}_{f'} \hspace{-1mm} \left( \frac{1}{\sqrt{2 \pi}} \mathcal{F} u_0 \right) \mathcal{M}_1(u_0) \nonumber \\ & \hspace{3cm} + \mathcal{V}_{f'}\hspace{-1mm} \left( \frac{1}{\sqrt{2 \pi}} \, \mathcal{F} u_0 \right) t_0 \bigg) \, t + \mathcal{V}(u_0) - \mathcal{V}_{f'}\hspace{-1mm} \left( \frac{1}{\sqrt{2 \pi}} \, \mathcal{F} u_0 \right) t_0^{\, 2} \; . \label{eq:var_diff} \end{align} \end{4-PROP3} \begin{proof} Let $t \in \mathbb{R} \backslash \{t_0\}$. Similarly to the arguments employed in the proof of Proposition \ref{4-PROP1}, we have \begin{align*} \mathcal{M}_2\big( H_f(t,.,u_0,t_0,x_0) \big) & = \frac{1}{2 \pi} \int_{x_0 + f'(p_1) (t-t_0)}^{x_0 + f'(p_2) (t-t_0)} x^2 \, \frac{ \big| \mathcal{F} u_0 \big( p_0(t,x) \big) \big|^2}{f''\big(p_0(t,x) \big)} \, dx \, (t - t_0)^{-1} \\ & = \frac{1}{2 \pi} \int_{p_1}^{p_2} \big(x_0 + f'(p) (t-t_0 ) \big)^2 \big| \mathcal{F} u_0 (p) \big|^2 \, dp \\ & = x_0^{\, 2} + \mathcal{M}_{f'^2} \hspace{-1mm} \left( \frac{1}{\sqrt{2 \pi}} \mathcal{F} u_0 \right) (t-t_0)^2 + 2 \, x_0 \, \mathcal{M}_{f'} \hspace{-1mm} \left( \frac{1}{\sqrt{2 \pi}} \mathcal{F} u_0 \right) (t-t_0) \; . \end{align*} Moreover, applying Proposition \ref{4-PROP1} gives \begin{equation*} \mathcal{M}_1\big( H_f(t,.,u_0,t_0,x_0) \big)^2 = x_0^{\, 2} + \mathcal{M}_{f'} \hspace{-1mm} \left( \frac{1}{\sqrt{2 \pi}} \, \mathcal{F} u_0 \right)^2 (t-t_0)^2 + 2 \, x_0 \, \mathcal{M}_{f'} \hspace{-1mm} \left( \frac{1}{\sqrt{2 \pi}} \, \mathcal{F} u_0 \right) (t-t_0) \; .
\end{equation*} It follows that \begin{align*} \mathcal{V}\big(H_f(t,.,u_0,t_0,x_0) \big) & = \left( \mathcal{M}_{f'^2} \hspace{-1mm} \left( \frac{1}{\sqrt{2 \pi}} \, \mathcal{F} u_0 \right) - \mathcal{M}_{f'} \hspace{-1mm} \left( \frac{1}{\sqrt{2 \pi}} \, \mathcal{F} u_0 \right)^2 \right) (t-t_0)^2 \\ & = \mathcal{V}_{f'}\hspace{-1mm} \left( \frac{1}{\sqrt{2 \pi}} \, \mathcal{F} u_0 \right) (t-t_0)^2 \; . \end{align*} Using the formula for $\mathcal{V}\big( u_f(t,.) \big)$ from Proposition \ref{A-PROP2}, we finally obtain \begin{align*} & \mathcal{V}\big( u_f(t,.) \big) - \mathcal{V}\Big(H_f(t,.,u_0,t_0,x_0) \Big) \\ & \hspace{1.5cm} = \mathcal{V}_{f'}\hspace{-1mm} \left( \frac{1}{\sqrt{2 \pi}} \, \mathcal{F} u_0 \right) t^2 + 2 \bigg( \frac{1}{2\pi} \, \Im \hspace{-1mm} \left( \int_{\mathbb{R}} f'(p) \, \mathcal{F} u_0 (p) \, \overline{\big( \mathcal{F} u_0 \big)'(p)} \, dp \right) \\ & \hspace{3cm} - \mathcal{M}_{f'} \hspace{-1mm} \left( \frac{1}{\sqrt{2 \pi}} \, \mathcal{F} u_0 \right) \mathcal{M}_1(u_0) \bigg) t + \mathcal{V}(u_0) \\ &\hspace{4.5cm} - \mathcal{V}_{f'}\hspace{-1mm} \left( \frac{1}{\sqrt{2 \pi}} \, \mathcal{F} u_0 \right) (t-t_0)^2 \\ & \hspace{1.5cm} = 2 \bigg( \frac{1}{2\pi} \, \Im \hspace{-1mm} \left( \int_{\mathbb{R}} f'(p) \, \mathcal{F} u_0 (p) \, \overline{\big( \mathcal{F} u_0 \big)'(p)} \, dp \right) - \mathcal{M}_{f'} \hspace{-1mm} \left( \frac{1}{\sqrt{2 \pi}} \mathcal{F} u_0 \right) \mathcal{M}_1(u_0) \\ & \hspace{3cm} + \mathcal{V}_{f'}\hspace{-1mm} \left( \frac{1}{\sqrt{2 \pi}} \, \mathcal{F} u_0 \right) t_0 \bigg) \, t + \mathcal{V}(u_0) - \mathcal{V}_{f'}\hspace{-1mm} \left( \frac{1}{\sqrt{2 \pi}} \, \mathcal{F} u_0 \right) t_0^{\, 2} \; . \end{align*} \end{proof} In view of the preceding result, the difference between the variances of $u_f(t,.)$ and $H_f(t,.,u_0,t_0,x_0)$ is an affine function with respect to $t$.
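As a numerical aside (again an illustrative sketch with ad hoc choices, not part of the proof), this behaviour can be observed in the model case $f(p) = p^2/2$ with a real profile, for which $t^* = 0$: since $\mathcal{V}\big(H_f(t,.,u_0,t_0,x_0)\big) = \mathcal{V}_{f'}\big(\frac{1}{\sqrt{2\pi}}\mathcal{F}u_0\big)(t-t_0)^2$ exactly, with $t_0 = t^*$ the variance difference should equal $\mathcal{V}(u_0) = \min_{\tau} \mathcal{V}(u_f(\tau,.))$ at every time:

```python
import numpy as np

# Illustration only (ad hoc model, not from the paper): f(p) = p^2/2 and a
# real bump profile, so that t* = 0 and x* = 0.  Since
# V(H_f(t,.)) = V_{f'}(Fu0/sqrt(2 pi)) * (t - t0)^2 exactly, with t0 = t*
# the difference V(u_f(t,.)) - V(H_f(t,.)) should equal the constant
# V(u_0) = min_tau V(u_f(tau,.)) at every time.  We check two times.

trapz = getattr(np, "trapezoid", None) or np.trapz  # NumPy 1.x / 2.x

p1, p2 = 1.0, 2.0
p = np.linspace(p1, p2, 801)[1:-1]
Fu0 = np.exp(-1.0 / ((p - p1) * (p2 - p)))      # smooth real bump
Fu0 /= np.sqrt(trapz(Fu0**2, p) / (2 * np.pi))  # ||u_0||_{L^2} = 1

Mfp = trapz(p * Fu0**2, p) / (2 * np.pi)
Vfp = trapz(p**2 * Fu0**2, p) / (2 * np.pi) - Mfp**2
V0 = trapz(np.gradient(Fu0, p)**2, p) / (2 * np.pi)  # V(u_0), Plancherel

def variance_uf(t):
    """Variance of |u_f(t,.)|^2, with u_f computed by direct quadrature."""
    x = np.linspace(-60.0, 80.0, 4000)
    ph = np.exp(-1j * t * p[None, :]**2 / 2 + 1j * x[:, None] * p[None, :])
    rho = np.abs(trapz(Fu0[None, :] * ph, p, axis=1) / (2 * np.pi))**2
    rho /= trapz(rho, x)
    m = trapz(x * rho, x)
    return trapz((x - m)**2 * rho, x)

# V(H_f(t,.)) = Vfp * t^2 exactly (change of variable x = f'(p) t)
diffs = [variance_uf(t) - Vfp * t**2 for t in (3.0, 5.0)]
print(diffs, V0)  # both differences should be close to V(u_0)
```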
Consequently the unique way to make this difference constant is to choose $t_0$ in such a way that the leading coefficient is equal to $0$. It turns out that the unique $t_0$ satisfying this property is the one minimizing the variance of $u_f$, namely $t^*$ introduced in Lemma \ref{3-LEM1}. \begin{4-PROP4} \label{4-PROP4} Let $(t_0,x_0) \in \mathbb{R}^2$. Suppose that the hypotheses of Theorem \ref{3-THM1} are sa\-tisfied and suppose in addition that $\| u_0 \|_{L^2(\mathbb{R})} = 1$. Then we have the following equivalence: \begin{align*} & \exists \, C \in \mathbb{R} \quad \forall \, t \in \mathbb{R} \backslash \{t_0\} \qquad \mathcal{V} \big( u_f (t,.) \big) - \mathcal{V}\big( H_f(t,.,u_0,t_0,x_0) \big) = C \\ & \hspace{1.5cm} \Longleftrightarrow \qquad t_0 = t^* := \argmin_{\tau \in \mathbb{R}} \mathcal{V}\big( u_f(\tau,.) \big) \; . \end{align*} In particular, we have \begin{equation*} C = \min_{\tau \in \mathbb{R}} \mathcal{V}\big(u_f(\tau,.) \big) \; . \end{equation*} \end{4-PROP4} \begin{proof} According to Proposition \ref{4-PROP3}, the difference between the variances of $u_f(t,.)$ and $H_f(t,.,u_0,t_0,x_0)$ is constant if and only if \begin{align*} t_0 & = \frac{1}{\mathcal{V}_{f'}\big( \frac{1}{\sqrt{2 \pi}} \, \mathcal{F} u_0 \big)} \bigg( - \frac{1}{2 \pi} \Im \hspace{-1mm} \left( \int_{\mathbb{R}} f'(p) \, \mathcal{F} u_0(p) \, \overline{\big( \mathcal{F} u_0 \big)'(p)} \, dp \right) \\ & \hspace{1.5cm} + \mathcal{M}_{f'} \hspace{-1mm} \left( \frac{1}{\sqrt{2 \pi} } \, \mathcal{F} u_0 \right) \mathcal{M}_1(u_0) \bigg) \; , \end{align*} which is equal to $t^*$ according to Remark \ref{3-REM1}. By evaluating equality \eqref{eq:var_diff} at $t_0 = t^*$, we obtain for all $t \in \mathbb{R} \backslash \{t^* \}$, \begin{equation} \label{eq:a} \mathcal{V}\big( u_f(t,.) \big) - \mathcal{V}\big(H_f(t,.,u_0,t^*, x_0) \big) = \mathcal{V}(u_0) - \mathcal{V}_{f'}\hspace{-1mm} \left( \frac{1}{\sqrt{2 \pi}} \, \mathcal{F} u_0 \right) (t^*)^2 \; .
\end{equation} We note now that \begin{align} \mathcal{V}\big(u_f(t^*,.) \big) & = \mathcal{V}_{f'}\hspace{-1mm} \left( \frac{1}{\sqrt{2 \pi}} \, \mathcal{F} u_0 \right) (t^*)^2 + 2 \bigg( \frac{1}{2\pi} \, \Im \hspace{-1mm} \left( \int_{\mathbb{R}} f'(p) \, \mathcal{F} u_0 (p) \, \overline{\big( \mathcal{F} u_0 \big)'(p)} \, dp \right) \nonumber \\ & \hspace{3cm} - \mathcal{M}_{f'} \hspace{-1mm} \left( \frac{1}{\sqrt{2 \pi}} \, \mathcal{F} u_0 \right) \mathcal{M}_1(u_0) \bigg) t^* + \mathcal{V}(u_0) \nonumber \\ & = \mathcal{V}_{f'}\hspace{-1mm} \left( \frac{1}{\sqrt{2 \pi}} \, \mathcal{F} u_0 \right) (t^*)^2 - 2 \mathcal{V}_{f'}\hspace{-1mm} \left( \frac{1}{\sqrt{2 \pi}} \, \mathcal{F} u_0 \right) (t^*)^2 + \mathcal{V}(u_0) \nonumber \\ & = - \mathcal{V}_{f'}\hspace{-1mm} \left( \frac{1}{\sqrt{2 \pi}} \, \mathcal{F} u_0 \right) (t^*)^2 + \mathcal{V}(u_0) \; . \label{eq:b} \end{align} From equalities \eqref{eq:a} and \eqref{eq:b}, it finally follows that \begin{equation*} \mathcal{V}\big( u_f(t,.) \big) - \mathcal{V}\big(H_f(t,.,u_0,t^*, x_0) \big) = \mathcal{V}\big(u_f(t^*,.) \big) = \min_{\tau \in \mathbb{R}} \mathcal{V}\big(u_f(\tau,.) \big) \; , \end{equation*} the last equality being obtained by the definition $\displaystyle t^* = \argmin_{\tau \in \mathbb{R}} \mathcal{V}\big(u_f(\tau,.) \big)$. \end{proof} According to Propositions \ref{4-PROP2} and \ref{4-PROP4}, choosing $(t^*, x^*)$, introduced in Lemma \ref{3-LEM1}, as the origin of the cone in which we expand the solution $u_f(t,.)$ of equation \eqref{eq:evoleq} provides a time-asymptotic approximation $H_f(t,.,u_0,t^*,x^*)$ having the same mean position as the solution and a variance differing from that of the solution by a constant. This is summarized in the following corollary. \begin{4-COR1} Suppose that the hypotheses of Theorem \ref{3-THM1} are satisfied and suppose in addition that $\| u_0 \|_{L^2(\mathbb{R})} = 1$.
Then for all $t \in \mathbb{R} \backslash \{ t^* \}$, we have \begin{equation*} \left\{ \begin{array}{l} \displaystyle \mathcal{M}_1 \big( u_f (t,.) \big) = \mathcal{M}_1\big( H_f(t,.,u_0,t^*, x^*) \big) \\[2mm] \displaystyle \mathcal{V} \big( u_f (t,.) \big) - \mathcal{V}\big( H_f(t,.,u_0,t^*,x^*) \big) = \min_{\tau \in \mathbb{R}} \mathcal{V}\big(u_f(\tau,.) \big) \end{array} \right. \; , \end{equation*} where $t^*$ and $x^*$ are defined in Lemma \ref{3-LEM1}. \end{4-COR1} \begin{proof} This is a simple application of Propositions \ref{4-PROP2} and \ref{4-PROP4} combined with the definitions of $t^*$ and $x^*$. \end{proof} \section{Appendix A: Mean position and variance of the free wave packet} In this appendix, we give the formulas for the mean position and the variance of the wave packet $u_f$ defined in \eqref{eq:sol_formula0}. The proofs we propose here are essentially based on the fact that the wave packet is defined via the Fourier transform, which permits the application of some pro\-per\-ties of the Fourier transform.\\ We begin with the formula for the mean position. \begin{4-PROP1} \label{A-PROP1} Suppose that $u_0 \in \mathcal{S}(\mathbb{R})$. Then for all $t \in \mathbb{R}$, we have \begin{equation*} \mathcal{M}_1 \big( u_f(t,.) \big) = \mathcal{M}_{f'} \hspace{-1mm} \left( \frac{1}{\sqrt{2 \pi}} \, \mathcal{F} u_0 \right) t + \mathcal{M}_1(u_0) \; .
\end{equation*} \end{4-PROP1} \begin{proof} For $t \in \mathbb{R}$, we have \begin{align} \int_{\mathbb{R}} x \, \big| u_f(t,x) \big|^2 \, dx & = \int_{\mathbb{R}} x \, u_f(t,x) \, \overline{u_f(t,x)} \, dx \nonumber \\ & = \frac{1}{2 \pi} \int_{\mathbb{R}} \mathcal{F}\big[x \mapsto x \, u_f(t,x)\big](p) \, \overline{\mathcal{F} \big[x \mapsto u_f(t,x) \big](p)} \, dp \nonumber \\ & = \frac{i}{2 \pi} \int_{\mathbb{R}} \partial_p \mathcal{F}\big[x \mapsto u_f(t,x)\big](p) \, \overline{\mathcal{F} \big[x \mapsto u_f(t,x) \big](p)} \, dp \; ; \label{eq:ehren} \end{align} the second and third equalities have been obtained by applying the Plancherel theorem and basic properties of the Fourier transform. Now using the expression \eqref{eq:sol_formula0} of the wave packet $u_f(t,x)$, we obtain for all $p \in \mathbb{R}$, \begin{align*} & \bullet \quad \partial_p \mathcal{F}\big[x \mapsto u_f(t,x)\big](p) = e^{-itf(p)} \Big( -i t f'(p) \, \mathcal{F} u_0(p) + (\mathcal{F} u_0)'(p) \Big) \; ; \\[2mm] & \bullet \quad \overline{\mathcal{F} \big[x \mapsto u_f(t,x) \big](p)} = e^{itf(p)} \, \overline{\mathcal{F} u_0(p)} \; .
\end{align*} Combining the last two equalities with \eqref{eq:ehren} and using again basic properties of the Fourier transform, it follows that \begin{align*} \int_{\mathbb{R}} x \, \big| u_f(t,x) \big|^2 \, dx & = \frac{i}{2 \pi} \int_{\mathbb{R}} \Big( -i t f'(p) \, \mathcal{F} u_0(p) \: + \: (\mathcal{F} u_0)'(p) \Big) \, \overline{\mathcal{F} u_0(p)} \, dp \\ & = \frac{1}{2 \pi} \int_{\mathbb{R}} f'(p) \, \big| \mathcal{F} u_0 (p) \big|^2 \, dp \, t \: + \: \frac{i}{2 \pi} \int_{\mathbb{R}} (\mathcal{F} u_0)'(p) \, \overline{\mathcal{F} u_0(p)} \, dp \\ & = \frac{1}{2 \pi} \int_{\mathbb{R}} f'(p) \, \big| \mathcal{F} u_0 (p) \big|^2 \, dp \, t \: + \: \frac{1}{2 \pi} \int_{\mathbb{R}} \mathcal{F} \big[x \mapsto x \, u_0(x)\big](p) \, \overline{\mathcal{F} u_0(p)} \, dp \\ & = \frac{1}{2 \pi} \int_{\mathbb{R}} f'(p) \, \big| \mathcal{F} u_0 (p) \big|^2 \, dp \, t \: + \: \int_{\mathbb{R}} x \, \big| u_0(x) \big|^2 \, dx \; , \end{align*} leading finally to the desired equality. \end{proof} \begin{A-REM1} \em The preceding formula is actually an extension of the well-known Ehrenfest theorem to the family of dispersive equations of type \eqref{eq:evoleq0}. We recall that the Ehrenfest theorem in the setting of the free Schrödinger equation \eqref{eq:free_schrodinger} gives the following formula for the mean position of the free particle: \begin{equation*} \mathcal{M}_1\big( u_{S}(t,.) \big) = \mathcal{M}_1 \hspace{-1mm} \left( \frac{1}{\sqrt{2 \pi}} \, \mathcal{F} u_0 \right) t + \mathcal{M}_1(u_0) \; . \end{equation*} \end{A-REM1} The formula for the variance is provided in the following result. \begin{A-PROP2} \label{A-PROP2} Suppose that $u_0 \in \mathcal{S}(\mathbb{R})$. Then for all $t \in \mathbb{R}$, we have \begin{align*} \mathcal{V}\big( u_f(t,.)
\big) & = \mathcal{V}_{f'}\hspace{-1mm} \left( \frac{1}{\sqrt{2 \pi}} \, \mathcal{F} u_0 \right) t^2 + 2 \bigg( \frac{1}{2\pi} \, \Im \hspace{-1mm} \left( \int_{\mathbb{R}} f'(p) \, \mathcal{F} u_0 (p) \, \overline{\big( \mathcal{F} u_0 \big)'(p)} \, dp \right) \\ & \hspace{1.5cm} - \mathcal{M}_{f'} \hspace{-1mm} \left( \frac{1}{\sqrt{2 \pi}} \, \mathcal{F} u_0 \right) \mathcal{M}_1(u_0) \bigg) t + \mathcal{V}(u_0) \; . \end{align*} \end{A-PROP2} \begin{proof} Following the computational arguments of the proof of Proposition \ref{A-PROP1}, we have for all $t \in \mathbb{R}$, \begin{align} \int_{\mathbb{R}} x^2 \, \big| u_f(t,x) \big|^2 \, dx & = \int_{\mathbb{R}} \big| x \, u_f(t,x) \big|^2 \, dx \nonumber \\ & = \frac{1}{2 \pi} \int_{\mathbb{R}} \Big| \mathcal{F}\big[x \mapsto x \, u_f(t,x)\big](p) \Big|^2 \, dp \nonumber \\ & = \frac{1}{2 \pi} \int_{\mathbb{R}} \Big| \partial_p \mathcal{F}\big[x \mapsto u_f(t,x)\big](p) \Big|^2 \, dp \nonumber \\ & = \frac{1}{2 \pi} \int_{\mathbb{R}} \Big| - i t f'(p) \, \mathcal{F} u_0(p) + (\mathcal{F} u_0)'(p) \Big|^2 \, dp \; , \label{eq:var} \end{align} and we recall that \begin{equation*} \forall \, p \in \mathbb{R} \qquad (\mathcal{F} u_0)'(p) = -i \, \mathcal{F} \big[ x \mapsto x \, u_0(x) \big](p) \; .
\end{equation*} Inserting the preceding relation into \eqref{eq:var} and then expanding the square of the absolute value provides \begin{align*} \int_{\mathbb{R}} x^2 \, \big| u_f(t,x) \big|^2 \, dx & = \frac{1}{2\pi} \int_\mathbb{R} f'(p)^2 \, \big| \mathcal{F} u_0 (p) \big|^2 \, dp \, t^2 \: + \: \int_\mathbb{R} x^2 \, \big| u_0 (x) \big|^2 \, dx \nonumber \\ & \hspace{1cm} - \: \frac{1}{\pi} \int_{\mathbb{R}} \Re \hspace{-1mm} \left( i \, f'(p) \, \mathcal{F} u_0 (p) \, \overline{\big( \mathcal{F} u_0 \big)'(p)} \right) dp \, t \\ & = \mathcal{M}_{f'^2} \hspace{-1mm} \left( \frac{1}{\sqrt{2 \pi}} \, \mathcal{F} u_0 \right) t^2 \: + \: \mathcal{M}_2(u_0) \nonumber \\ & \hspace{1cm} + \: \frac{1}{\pi} \, \Im \hspace{-1mm} \left( \int_{\mathbb{R}} f'(p) \, \mathcal{F} u_0 (p) \, \overline{\big( \mathcal{F} u_0 \big)'(p)} \, dp \right) t \; . \end{align*} Now, by using Proposition \ref{A-PROP1}, we have \begin{equation*} \mathcal{M}_1 \big( u_f(t,.) \big)^2 = \mathcal{M}_{f'} \hspace{-1mm} \left( \frac{1}{\sqrt{2 \pi}} \, \mathcal{F} u_0 \right)^{\hspace{-1mm} 2} t^2 + \mathcal{M}_1(u_0)^2 + 2 \, \mathcal{M}_{f'} \hspace{-1mm} \left( \frac{1}{\sqrt{2 \pi}} \, \mathcal{F} u_0 \right) \mathcal{M}_1(u_0) \, t \; , \end{equation*} which finally leads to \begin{align*} \mathcal{V}\big( u_f(t,.) \big) &= \mathcal{M}_2\big( u_f(t,.) \big) - \mathcal{M}_1\big( u_f(t,.)
\big)^2 \\ & = \mathcal{M}_{f'^2} \hspace{-1mm} \left( \frac{1}{\sqrt{2 \pi}} \, \mathcal{F} u_0 \right) t^2 \: + \: \mathcal{M}_2(u_0) \: + \: \frac{1}{\pi} \, \Im \hspace{-1mm} \left( \int_{\mathbb{R}} f'(p) \, \mathcal{F} u_0 (p) \, \overline{\big( \mathcal{F} u_0 \big)'(p)} \, dp \right) t \\ & \hspace{1cm} - \mathcal{M}_{f'} \hspace{-1mm} \left( \frac{1}{\sqrt{2 \pi}} \, \mathcal{F} u_0 \right)^{\hspace{-1mm} 2} t^2 - \mathcal{M}_1(u_0)^2 \: - \: 2 \, \mathcal{M}_{f'} \hspace{-1mm} \left( \frac{1}{\sqrt{2 \pi}} \, \mathcal{F} u_0 \right) \mathcal{M}_1(u_0) \, t \\ & = \mathcal{V}_{f'}\hspace{-1mm} \left( \frac{1}{\sqrt{2 \pi}} \, \mathcal{F} u_0 \right) t^2 + \mathcal{V}(u_0) + 2 \bigg( \frac{1}{2\pi} \, \Im \hspace{-1mm} \left( \int_{\mathbb{R}} f'(p) \, \mathcal{F} u_0 (p) \, \overline{\big( \mathcal{F} u_0 \big)'(p)} \, dp \right) \\ & \hspace{1cm} - \mathcal{M}_{f'} \hspace{-1mm} \left( \frac{1}{\sqrt{2 \pi}} \, \mathcal{F} u_0 \right) \mathcal{M}_1(u_0) \bigg) t \; . \end{align*} \end{proof} \noindent \textbf{Acknowledgements:}\\ The author gratefully thanks Prof.~Felix Ali Mehmeti, whose numerous and insightful comments helped to improve the present paper. \end{document}
\begin{document} \title{Multi-Armed Bandits with Metric Movement Costs} \begin{abstract} We consider the non-stochastic Multi-Armed Bandit problem in a setting where there is a fixed and known metric on the action space that determines a cost for switching between any pair of actions. The loss of the online learner has two components: the first is the usual loss of the selected actions, and the second is an additional loss due to switching between actions. Our main contribution gives a tight characterization of the expected minimax regret in this setting, in terms of a complexity measure~$\cC$ of the underlying metric which depends on its covering numbers. In finite metric spaces with $k$ actions, we give an efficient algorithm that achieves regret of the form $\smash{\wt{O}(\max\set{\cC^{1/3}T^{2/3},\sqrt{kT}})}$, and show that this is the best possible. Our regret bound generalizes previously known regret bounds for some special cases: (i)~the unit-switching cost regret $\smash{\wt{\Theta}(\max\set{k^{1/3}T^{2/3},\sqrt{kT}})}$ where $\cC=\Theta(k)$, and (ii) the interval metric with regret $\smash{\wt{\Theta}(\max\set{T^{2/3},\sqrt{kT}})}$ where $\cC=\Theta(1)$. For infinite metric spaces with Lipschitz loss functions, we derive a tight regret bound of $\smash{\wt{\Theta}(T^{\frac{d+1}{d+2}})}$ where $d\ge 1$ is the Minkowski dimension of the space, which is known to be tight even when there are no switching costs. \end{abstract} \section{Introduction} The Multi-Armed Bandit (MAB) problem is perhaps one of the most well-studied models of learning with limited feedback. In its simplest form, MAB can be thought of as a game between a learner and an adversary: first, the adversary chooses an arbitrary (possibly adversarial) sequence of losses $\ell_1,\ldots,\ell_T$. Then, at each round the learner chooses an action $i_t$ from a finite set of actions~$K$.
At the end of each round, the learner gets to observe her loss $\ell_t(i_t)$, and \emph{only} the loss of her chosen action. The objective of the learner is to minimize her (external) regret, defined as the expected difference between her loss, $\smash{\sum_{t=1}^T \ell_t(i_t)}$, and the loss of the best action in hindsight, i.e., $\smash{\min_{i \in K} \sum_{t=1}^T \ell_t(i)}$. One simplification of the MAB is that it assumes that the learner can switch between actions without any cost; this is in contrast to online algorithms that maintain a state and have a cost of switching between states. One simple intermediate solution is to add further costs to the learner that penalize \emph{movements between actions}. (Since we compare the learner to the single best action, the benchmark has no movement and hence no movement cost.) This approach has been studied in the MAB with unit switching costs \citep{arora2012online,dekel2014bandits}, where the learner is not only penalized for her loss but also pays a unit cost each time she switches between actions. This simple penalty implicitly advocates the construction of algorithms that avoid frequent fluctuations in their decisions. Regulating switching has been successfully applied to many interesting instances such as buffering problems \citep{geulen2010regret}, limited-delay lossy coding \citep{gyorgy2014near} and dynamic pricing with patient buyers \citep{feldman2016online}. The unit switching cost model assumes that every pair of actions has the same switching cost, which in many scenarios is far from true. For example, consider an ice-cream vendor on a beach, whose actions are to select a location and a price. Clearly, changing location comes at a cost, while changing prices might come with no cost. In this case we can define an interval metric (the coastline) and the movement cost is the distance. A more involved case is a hot-dog vendor in Manhattan, who needs to select a location and a price.
Again, it makes sense to charge a switching cost between locations according to their distance, and in this case the Manhattan distance seems the most appropriate. Such settings are at the core of our model for MAB with movement cost. The authors of \cite{koren2017bandits} considered a MAB problem equipped with an interval metric, i.e., the actions are $[0,1]$ and the movement cost is the distance between the actions. They proposed a new online algorithm, called the Slowly Moving Bandit~(SMB) algorithm, that achieves an optimal regret bound for this setting, and applied it to a dynamic pricing problem with patient buyers to achieve a new tight regret bound. The objective of this paper is to handle general metric spaces, both finite and infinite. We show how to generalize the SMB algorithm and its analysis to design optimal moving-cost algorithms for \emph{any} metric space over a finite decision space. Our main result identifies an intrinsic complexity measure of the metric space, which we call the \emph{covering/packing complexity}, and gives a tight characterization of the expected movement regret in terms of the complexity of the underlying metric. In particular, in finite metric spaces of complexity $\cC$ with $k$ actions, we give a regret bound of the form $\smash{\wt{O}(\max\set{\cC^{1/3}T^{2/3},\sqrt{kT}})}$ and present an efficient algorithm that achieves it. We also give a matching $\smash{\wt{\Omega}(\max\set{\cC^{1/3}T^{2/3},\sqrt{kT}})}$ lower bound that applies to \emph{any} metric with complexity $\cC$. We extend our results to general continuous metric spaces. For such settings we clearly have to make some assumption about the losses, and we make the rather standard assumption that the losses are Lipschitz with respect to the underlying metric.
In this setting our results depend on quite different complexity measures: the upper and lower Minkowski dimensions of the space, thus exhibiting a phase transition between the finite case (that corresponds to Minkowski dimension zero) and the infinite case. Specifically, we give an upper bound on the regret of $\smash{\wt{O}(T^{\frac{d+1}{d+2}})}$ where $d \ge 1$ is the \emph{upper} Minkowski dimension. When the upper and lower Minkowski dimensions coincide---which is the case in many natural spaces, such as normed vector spaces---the latter bound matches a lower bound of \cite{bubeck2011x} that holds even when there are no switching costs. Thus, a surprising implication of our result is that in infinite action spaces (of bounded Minkowski dimension), adding movement costs does not add to the complexity of the MAB problem! Our approach extends the techniques of \cite{koren2017bandits} for the SMB algorithm, which was designed to optimize over an interval metric, equivalent to a complete binary Hierarchically well-Separated Tree (HST) metric space. By carefully balancing and regulating its sampling distributions, the SMB algorithm avoids switching between far-apart nodes in the tree and possibly incurring large movement costs with respect to the associated metric. We show that the SMB regret guarantees are much more general than just binary balanced trees, and give an analysis of the SMB algorithm when applied to general HSTs. As a second step, we show that a rich class of trees, on which the SMB algorithm can be applied, can be used to upper-bound any general metric. Finally, we reduce the case of an infinite metric space to the finite case via simple discretization, and show that this reduction gives rise to the Minkowski dimension as a natural complexity measure. All of these constructions turn out to be optimal (up to logarithmic factors), as demonstrated by our matching lower bounds.
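The interaction model and the benchmark just described can be made concrete in a few lines of code. The following sketch is ours and purely illustrative (the helper name \texttt{movement\_regret} is not the paper's notation); it evaluates the movement regret of a played action sequence against the best fixed action in hindsight:

```python
# Illustrative sketch of MAB with movement costs (hypothetical helper,
# not the paper's algorithm): total loss plus switching costs, minus the
# loss of the best fixed action, which incurs no movement cost.

def movement_regret(losses, metric, actions):
    """losses:  list of T lists, losses[t][i] = loss of action i at round t
    metric:  function (i, j) -> switching cost Delta(i, j) in [0, 1]
    actions: the learner's choices i_1, ..., i_T"""
    T, k = len(losses), len(losses[0])
    play_loss = sum(losses[t][actions[t]] for t in range(T))
    move_cost = sum(metric(actions[t - 1], actions[t]) for t in range(1, T))
    best_fixed = min(sum(losses[t][i] for t in range(T)) for i in range(k))
    return play_loss + move_cost - best_fixed

# Unit switching cost: the uniform metric of arora2012online/dekel2014bandits.
uniform = lambda i, j: 0.0 if i == j else 1.0
losses = [[0.0, 1.0], [1.0, 0.0], [0.0, 1.0]]

# Chasing the best arm each round pays zero loss but two switching costs.
print(movement_regret(losses, uniform, [0, 1, 0]))  # 1.0
# Staying on the best fixed arm incurs no movement cost at all.
print(movement_regret(losses, uniform, [0, 0, 0]))  # 0.0
```

The toy run illustrates the tension the movement cost introduces: the loss-greedy sequence pays more overall than simply standing still on the best fixed arm.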
\subsection{Related Work} Perhaps the most well-known classical algorithm for non-stochastic bandits is the \ensuremath{\textsc{Exp3}}\xspace algorithm \citep{auer2002nonstochastic}, which guarantees a regret of $\smash{\wt{O}(\sqrt{kT})}$ without movement costs. However, for general MAB algorithms there are no guarantees for slow movement between actions. In fact, it is known that in the worst case $\smash{\wt{\Omega}(T)}$ switches between actions are needed (see \cite{dekel2014bandits}). A simple case of MAB with movement cost is the uniform metric, i.e., when the distance between any two actions is the same. This setting has seen intensive study, both in terms of analyzing optimal regret rates \citep{arora2012online,dekel2014bandits}, as well as applications \citep{geulen2010regret, gyorgy2014near, feldman2016online}. Our main technical tool for the lower bounds is the lower bound of \citet{dekel2014bandits}, which achieves such a bound for this special case. The general problem of bandits with movement costs was first introduced in \cite{koren2017bandits}, where the authors gave an efficient algorithm for a $2$-HST binary balanced tree metric, as well as for evenly spaced points on the interval. The main contribution of this paper is a generalization of these results to general metric spaces. There is a vast and vigorous study of MAB in continuous spaces \citep{Kleinberg04,cope2009regret,auer2007improved,bubeck2011x,yu2011unimodal}. These works relate the change in the payoff to the change in the action. Specifically, there has been vast research on Lipschitz MAB with stochastic payoffs \citep{kleinberg2008multi,slivkins2011multi,slivkins2013ranked,kleinberg2010sharp,magureanu2014lipschitz}, where, roughly, the expected reward is Lipschitz. To apply our results in continuous spaces we too need to assume Lipschitz losses; however, our metric also defines the movement cost between actions, and does not only relate the losses of similar actions.
Our general finding is that in Euclidean spaces, one can achieve the same regret bounds when movement costs are applied. Thus, the SMB algorithm can achieve the optimal regret rate. One can model our problem as a deterministic Markov Decision Process (MDP), where the states are the MAB actions and in every state there is an action to move the MDP to a given state (which corresponds to switching actions). The payoff would be the payoff of the MAB action associated with the state plus the movement cost to the next state. The work of \citet{Ortner10} studies deterministic MDPs where the payoffs are stochastic, and also allows for a fixed uniform switching cost. The work of \citet{Even-DarKM09} and its extensions \citep{NeuGSA14,YuMS2009} study an MDP where the payoffs are adversarial but there is full information about the payoffs. Later, this work was extended to the bandit model by \citet{NeuGSA14}. This line of works imposes various assumptions regarding the MDP and the benchmark policies, specifically, that the MDP is ``mixing'' and that the policies considered have full-support stationary distributions, assumptions that clearly fail in our very specific setting. Bayesian MAB approaches, such as the Gittins index (see \cite{Gittins}), assume that the payoffs come from some stochastic process. It is known that when there are switching costs, the existence of an optimal index policy is not guaranteed \citep{BanksS94}. There have been some works on special cases with a fixed uniform switching cost \citep{AgrawalHT1988,AsawaT1996}. The most relevant work is that of \citet{guha2009multi}, which, for a general metric over the actions, gives a constant-approximation offline algorithm. For a survey of switching costs in this context see \cite{Jun2004}. The MAB problem with movement costs is related to the literature on online algorithms and the competitive analysis framework \citep{BorodinEl98}.
A prototypical online problem is the Metrical Task System (MTS) presented by \citet{borodin1992optimal}. In a metrical task system there is a collection of states and a metric over the states. Similar to MAB, the online algorithm at each time step moves to a state, incurs a movement cost according to the metric, and suffers a loss that corresponds to that state. However, unlike MAB, in an MTS the online algorithm is given the loss prior to selecting the new state. Furthermore, competitive analysis has a much more stringent benchmark: the best sequence of actions in retrospect. Like most of the regret minimization literature, we use the best single action in hindsight as a benchmark, aiming for a vanishing average regret. One of our main technical tools is an approximation from above of a metric via a Metric Tree (i.e., $2$-HST). $k$-HST metrics have been vastly studied in the online algorithms literature, starting with \cite{Bartal96}. The main goal is to derive a simpler metric representation (using randomized trees) that will both upper and lower bound the given metric. The main result is a bound of $O(\log n)$ on the expected stretch of any edge, and this is also the best possible \citep{FakcharoenpholRT04}. It is noteworthy that for bandit learning, and in contrast with these works, an upper bound over the metric suffices to achieve the optimal regret rate. This is because in online learning we compete against the best \emph{static} action in hindsight, which does not move at all and hence has zero movement cost. In contrast, in an MTS, where one competes against the best \emph{dynamic} sequence of actions, one needs both an upper and a lower bound on the metric. \section{Problem Setup and Background} In this section we recall the setting of Multi-armed Bandit with Movement Costs introduced in \cite{koren2017bandits}, and review the necessary background required to state our main results.
\subsection{Multi-armed Bandits with Movement Costs} In the Multi-armed Bandits (MAB) with Movement Costs problem, we consider a game between an online learner and an adversary continuing for $T$ rounds. There is a set $K$, possibly infinite, of actions (or ``arms'') that the learner can choose from. The set of actions is equipped with a fixed and known metric $\Delta$ that determines a cost $\Delta(i,j) \in [0,1]$ for moving between any pair of actions $i,j \in K$. Before the game begins, an adversary fixes a sequence $\ell_1,\ldots,\ell_T : K \mapsto [0,1]$ of loss functions assigning loss values in $[0,1]$ to actions in $K$ (in particular, we assume an oblivious adversary). Then, on each round $t=1,\ldots,T$, the learner picks an action $i_t \in K$, possibly at random. At the end of each round $t$, the learner gets to observe her loss (namely, $\ell_t(i_t)$) and nothing else. In contrast with the standard MAB setting, in addition to the loss $\ell_t(i_t)$ the learner suffers an additional cost due to her movement between actions, which is determined by the metric and is equal to $\Delta(i_t,i_{t-1})$. Thus, the total cost at round $t$ is given by $\ell_t(i_t)+\Delta(i_{t-1},i_t)$. The goal of the learner, over the course of $T$ rounds of the game, is to minimize her expected movement regret, which is defined as the difference between her (expected) total costs and the total costs of the best fixed action in hindsight (which incurs no movement costs); namely, the \emph{movement regret} with respect to a sequence $\ell_{1:T}$ of loss vectors and a metric $\Delta$ equals \begin{align*} \textrm{Regret}_\mathsf{MC}(\ell_{1:T},\Delta) = \mathbb{E}\left[ \sum_{t=1}^T \ell_t(i_t) + \sum_{t=2}^T \Delta(i_t,i_{t-1}) \right] - \min_{i \in K} \sum_{t=1}^T \ell_t(i) ~.
\end{align*} Here, the expectation is taken with respect to the learner's randomization in choosing the actions $i_1,\ldots,i_T$; notice that, as we assume an oblivious adversary, the loss functions $\ell_t$ are deterministic and cannot depend on the learner's randomization. \subsection{Basic Definitions in Metric Spaces} We recall basic notions in metric spaces that govern the regret in the MAB with movement costs setting. Throughout we assume a bounded metric space $(K,\Delta)$, where for normalization we assume $\Delta(i,j) \in [0,1]$ for all $i,j \in K$. Given a point $i \in K$ we will denote by $B_\epsilon(i)=\set{j\in K : \Delta(i,j)\le \epsilon}$ the ball of radius $\epsilon$ around $i$. The following definitions are standard. \begin{definition}[Packing numbers] A subset $P\subset K$ in a metric space $(K,\Delta)$ is an \emph{$\epsilon$-packing} if the sets $\{B_\epsilon(i)\}_{i\in P}$ are pairwise disjoint. The \emph{$\epsilon$-packing number} of $\Delta$, denoted $N^\mathrm{p}_\epsilon(\Delta)$, is the maximum cardinality of any $\epsilon$-packing of $K$. \end{definition} \begin{definition}[Covering numbers] A subset $C\subset K$ in a metric space $(K,\Delta)$ is an \emph{$\epsilon$-covering} if $K \subseteq \cup_{i\in C} B_\epsilon(i)$. The \emph{$\epsilon$-covering number} of $K$, denoted $N^\mathrm{c}_\epsilon(\Delta)$, is the minimum cardinality of any $\epsilon$-covering of $K$. \end{definition} \paragraph{Tree metrics and HSTs.} We recall the notion of a tree metric, and in particular, a metric induced by a Hierarchically well-Separated Tree (HST); see \cite{Bartal96} for more details. Any weighted tree defines a metric over the vertices, by considering the shortest path between any two nodes. An HST tree ($2$-HST tree, to be precise) is a rooted weighted tree such that: 1) the edge weight from any node to each of its children is the same and 2) the edge weights along any path from the root to a leaf decrease by a factor of $2$ per edge.
We will also assume that all leaves are of the same depth in the tree (this does not imply that the tree is complete). Given a tree $\mathcal{T}$ we let $\mathrm{depth}(\mathcal{T})$ denote its height, which is the maximal length of a path from any leaf to the root. Let $\mathrm{level}(v)$ be the level of a node $v \in \mathcal{T}$, where the level of the leaves is $0$ and the level of the root is $\mathrm{depth}(\mathcal{T})$. Given nodes $u,v \in \mathcal{T}$, let $\mathrm{LCA}(u,v)$ be their least common ancestor node in~$\mathcal{T}$. The metric which we next define is equivalent (up to a constant factor) to the standard tree metric induced over the leaves by an HST. By a slight abuse of terminology, we will call it the HST metric: \begin{definition}[HST metric] Let $K$ be a finite set and let $\mathcal{T}$ be a tree whose leaves are at the same depth and are indexed by elements of $K$. Then the HST metric $\DeltaT$ over $K$ induced by the tree $\mathcal{T}$ is defined as follows: \begin{align*} \DeltaT(i,j) = \frac{2^{\mathrm{level}(\mathrm{LCA}(i,j))}}{2^{\mathrm{depth}(\mathcal{T})}} \qquad \forall ~ i,j \in K . \end{align*} \end{definition} For a HST metric $\DeltaT$, observe that the packing number and covering number are simple to characterize: for all $0 \le h < \mathrm{depth}(\mathcal{T})$ we have that for $\epsilon = 2^{h-\mathrm{depth}(\mathcal{T})}$, $$ N^\mathrm{c}_\epsilon(\DeltaT) = N^\mathrm{p}_\epsilon(\DeltaT) = \Lrabs{\set{v \in \mathcal{T} : \mathrm{level}(v) = h}} . $$ \paragraph{Complexity measures for finite metric spaces.} We next define the two notions of complexity that, as we will later see, govern the complexity of MAB with metric movement costs. \begin{definition}[covering~complexity\xspace] The covering~complexity\xspace of a metric space $(K,\Delta)$ denoted $\cC_{\smash{\textrm{c}}}(\Delta)$ is given by \[ \cC_{\smash{\textrm{c}}}(\Delta)=\sup_{0<\epsilon< 1} \, \epsilon \!\cdot\! N^\mathrm{c}_\epsilon(\Delta).
\] \end{definition} \begin{definition}[packing~complexity\xspace] The packing~complexity\xspace of a metric space $(K,\Delta)$ denoted $\cC_{\smash{\textrm{p}}}(\Delta)$ is given by \[ \cC_{\smash{\textrm{p}}} (\Delta)=\sup_{0<\epsilon < 1} \, \epsilon \!\cdot\! N^\mathrm{p}_\epsilon(\Delta). \] \end{definition} For a HST metric, the two complexity measures coincide as its packing and covering numbers are the same. Therefore, for a HST metric $\DeltaT$ we will simply denote the complexity of $(K,\DeltaT)$ as $\cC(\mathcal{T})$. In fact, it is known that in any metric space $N^\mathrm{p}_\epsilon(\Delta)\le N^\mathrm{c}_\epsilon(\Delta)\le N^\mathrm{p}_{\smash{\epsilon/2}}(\Delta)$ for all $\epsilon>0$. Thus, for a general metric space we obtain that \begin{align} \label{eq:equiv} \cC_{\smash{\textrm{p}}}(\Delta) \le \cC_{\smash{\textrm{c}}}(\Delta) \le 2\cC_{\smash{\textrm{p}}}(\Delta). \end{align} \paragraph{Complexity measures for infinite metric spaces.} For infinite metric spaces, we require the following definition. \begin{definition}[Minkowski dimensions] Let $(K,\Delta)$ be a bounded metric space. The upper Minkowski dimension of $(K,\Delta)$, denoted $\smash{\overline{\cD}}(\Delta)$, is defined as \begin{align*} \smash{\overline{\cD}}(\Delta) = \limsup_{\epsilon \to 0} \frac{\log{N^\mathrm{p}_\epsilon(\Delta)}}{\log(1/\epsilon)} = \limsup_{\epsilon \to 0} \frac{\log{N^\mathrm{c}_\epsilon(\Delta)}}{\log(1/\epsilon)} . \end{align*} Similarly, the lower Minkowski dimension is denoted by $\smash{\underline{\cD}}(\Delta)$ and is defined as \begin{align*} \smash{\underline{\cD}}(\Delta) = \liminf_{\epsilon \to 0} \frac{\log{N^\mathrm{p}_\epsilon(\Delta)}}{\log(1/\epsilon)} = \liminf_{\epsilon \to 0} \frac{\log{N^\mathrm{c}_\epsilon(\Delta)}}{\log(1/\epsilon)} .
\end{align*} \end{definition} \ignore{ Note that if $d = \smash{\overline{\cD}}(\Delta)$ then there exists a constant $C>0$ such that $N^\mathrm{c}_\epsilon(\Delta) \ge C \epsilon^{-d}$ for any $\epsilon>0$; similarly, if $d' = \smash{\underline{\cD}}(\Delta)$ then for some constant $C'>0$ it holds that $N^\mathrm{c}_\epsilon(\Delta) \le C' \epsilon^{-d'}$ for any $\epsilon>0$. } We refer to \cite{tao2009minkowski} for more background on the Minkowski dimensions and related notions in metric space theory. \section{Main Results} We now state the main results of the paper, which give a complete characterization of the expected regret in the MAB with movement costs problem. \subsection{Finite Metric Spaces} \label{sec:results-finite} The following are the main results of the paper. Detailed proofs are provided in \cref{ap:proofs}. \begin{theorem}[Upper Bound] \label{thm:upper} Let $(K,\Delta)$ be a finite metric space over $\abs{K} = k$ elements with diameter $\le 1$ and covering~complexity\xspace $\cC_{\smash{\textrm{c}}} = \cC_{\smash{\textrm{c}}}(\Delta)$. There exists an algorithm that, for any sequence of loss functions $\ell_1,\ldots,\ell_T$, guarantees that \begin{align*} \textrm{Regret}_\mathsf{MC}(\ell_{1:T},\Delta) = \wt{O}\Lr{ \max\Lrset{ \cC_{\smash{\textrm{c}}}^{1/3} T^{2/3}, \sqrt{kT} } } . \end{align*} \end{theorem} \begin{theorem}[Lower Bound] \label{thm:lower} Let $(K,\Delta)$ be a finite metric space over $\abs{K} = k$ elements with diameter $\ge 1$ and packing~complexity\xspace $\cC_{\smash{\textrm{p}}} = \cC_{\smash{\textrm{p}}}(\Delta)$. For any algorithm there exists a sequence $\ell_1,\ldots, \ell_{T}$ of loss functions such that \begin{align*} \textrm{Regret}_\mathsf{MC}(\ell_{1:T},\Delta) = \wt{\Omega}\Lr{ \max\Lrset{ \cC_{\smash{\textrm{p}}}^{1/3} T^{2/3}, \sqrt{kT} } } . \end{align*} \end{theorem} Recalling \cref{eq:equiv}, we see that the regret bounds obtained in \cref{thm:upper,thm:lower} match up to logarithmic factors.
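For intuition about the quantities appearing in these theorems, the covering and packing numbers of a small finite metric can be computed directly, and the two complexities evaluated on a grid of $\epsilon$ values. The sketch below is ours and purely illustrative: the enumeration is exponential in $k$, and the supremum over $\epsilon$ is only approximated by the finite grid.

```python
# Illustrative brute force (exponential in k, for intuition only): the
# epsilon-covering/packing numbers and the complexities C_c, C_p of a
# small finite metric, with sup over epsilon approximated on a grid.
from itertools import combinations

def covering_number(dist, k, eps):
    # smallest C subset of K with every point within eps of some center
    for s in range(1, k + 1):
        for C in combinations(range(k), s):
            if all(any(dist[i][c] <= eps for c in C) for i in range(k)):
                return s
    return k

def packing_number(dist, k, eps):
    # largest P subset of K whose eps-balls (within K) are pairwise disjoint
    for s in range(k, 0, -1):
        for P in combinations(range(k), s):
            disjoint = all(not any(dist[i][m] <= eps and dist[j][m] <= eps
                                   for m in range(k))
                           for i, j in combinations(P, 2))
            if disjoint:
                return s
    return 1

def complexity(number_fn, dist, k, grid):
    # grid approximation of sup_{0 < eps < 1} eps * N_eps
    return max(eps * number_fn(dist, k, eps) for eps in grid)

# Uniform metric on k = 4 points: all pairwise distances equal 1.
k = 4
dist = [[0.0 if i == j else 1.0 for j in range(k)] for i in range(k)]
grid = [0.25, 0.5, 0.75, 0.99]
Cc = complexity(covering_number, dist, k, grid)
Cp = complexity(packing_number, dist, k, grid)
print(Cc, Cp)  # -> 3.96 3.96
```

On the uniform metric both complexities approach $k$, consistent with the $\smash{\wt{\Theta}(\max\set{k^{1/3}T^{2/3},\sqrt{kT}})}$ regime discussed below; the run also illustrates the sandwich $\cC_{\smash{\textrm{p}}} \le \cC_{\smash{\textrm{c}}} \le 2\cC_{\smash{\textrm{p}}}$ of \cref{eq:equiv}.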
Notice that the tightness is achieved \emph{per instance}; namely, for any given metric we are able to fully characterize the regret's rate of growth as a function of the intrinsic properties of the metric. (In particular, this is substantially stronger than demonstrating a specific metric for which the upper bound cannot be improved.) Note that for the lower bound statement in \cref{thm:lower} we require that the diameter of $K$ is bounded away from zero, where for simplicity we assume a constant bound of $1$. Such an assumption is necessary to avoid degenerate metrics. Indeed, when the diameter is very small, the problem reduces to the standard MAB setting without any additional costs and we obtain a regret rate of $\Omega(\sqrt{kT})$. Notice how the above results extend known instances of the problem from previous work: for uniform movement costs (i.e., unit switching costs) over $K=\set{1,\ldots,k}$ we have $\cC_{\smash{\textrm{c}}}=\Theta(k)$, so that the obtained bound is $\smash{\wt{\Theta}(\max\set{k^{1/3} T^{2/3}, \sqrt{kT}})},$ which recovers the results in \cite{arora2012online,dekel2014bandits}; and for a $2$-HST binary balanced tree with $k$ leaves, we have $\cC_{\smash{\textrm{c}}}=\Theta(1)$ and the resulting bound is $\smash{\wt{\Theta}(\max\set{T^{2/3}, \sqrt{kT}})},$ which is identical to the bound proved in \cite{koren2017bandits}. The $2$-HST regret bound in \cite{koren2017bandits} was primarily used to obtain regret bounds for the action space $K=[0,1]$. In the next section we show how this technique is extended to infinite metric spaces to obtain regret bounds that depend on the dimensionality of the action space. \subsection{Infinite Metric Spaces} \label{sec:results-infinite} When $(K,\Delta)$ is an infinite metric space, without additional constraints on the loss functions, the problem becomes ill-posed with a linear regret rate, even without movement costs.
Therefore, one has to make additional assumptions on the loss functions in order to achieve sublinear regret. One natural assumption, which is common in previous work, is to assume that the loss functions $\ell_1,\ldots,\ell_T$ are all $1$-Lipschitz with respect to the metric $\Delta$. Under this assumption, we have the following result. \begin{theorem} \label{thm:coverdim} Let $(K,\Delta)$ be a metric space with diameter $\le 1$ and upper Minkowski dimension $d = \smash{\overline{\cD}}(\Delta)$, such that $d\ge 1$. There exists a strategy that, for any sequence of loss functions $\ell_1,\ldots,\ell_T$ which are all $1$-Lipschitz with respect to $\Delta$, guarantees that \begin{align*} \textrm{Regret}_\mathsf{MC}(\ell_{1:T},\Delta) = \wt{O}\Lr{ T^\frac{d+1}{d+2} } . \end{align*} \end{theorem} Again, we observe that the above result extends the case of $K=[0,1]$, where $d=1$. Indeed, for Lipschitz functions over the interval a tight regret bound of $\smash{\wt\Theta(T^{2/3})}$ was achieved in \cite{koren2017bandits}, which is exactly the bound we obtain above. We mention that a lower bound of $\smash{\wt{\Omega}(T^{\smash{\frac{d+1}{d+2}}})}$ is known for MAB in metric spaces with Lipschitz cost functions---even \emph{without movement costs}---where $d = \smash{\underline{\cD}}(\Delta)$ is the lower Minkowski dimension. \begin{theorem}[\citet{bubeck2011x}] \label{thm:coverdimlow} Let $(K,\Delta)$ be a metric space with diameter $\le 1$ and lower Minkowski dimension $d = \smash{\underline{\cD}}(\Delta)$, such that $d\ge 1$. Then for any learning algorithm, there exists a sequence of loss functions $\ell_1,\ldots,\ell_T$, which are all $1$-Lipschitz with respect to $\Delta$, such that the regret (without movement costs) is $ \smash{\wt{\Omega}\Lr{ T^\frac{d+1}{d+2} }} . $ \end{theorem} In many natural metric spaces in which the upper and lower Minkowski dimensions coincide (e.g., normed spaces), the bound of \cref{thm:coverdim} is tight up to logarithmic factors in $T$.
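The slope characterization of the Minkowski dimension is easy to see numerically. The following sketch (ours, for illustration only) covers the unit square in the sup-norm, where the grid with spacing $2\epsilon$ is an explicit $\epsilon$-cover, and recovers the dimension $d=2$ as the slope of $\log N^\mathrm{c}_\epsilon$ against $\log(1/\epsilon)$:

```python
# Illustrative sketch: recovering the Minkowski dimension of [0,1]^2 from
# covering counts. In the sup-norm, the grid with spacing 2*eps is an
# eps-cover, so N_eps = ceil(1/(2*eps))**2; the dimension is the slope of
# log N_eps versus log(1/eps) as eps -> 0.
import math

def covering_count(eps, d=2):
    # number of sup-norm eps-balls in the explicit grid cover of [0,1]^d
    return math.ceil(1.0 / (2.0 * eps)) ** d

eps_values = [2.0 ** -j for j in range(3, 9)]
xs = [math.log(1.0 / e) for e in eps_values]
ys = [math.log(covering_count(e)) for e in eps_values]

# least-squares slope of ys against xs
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
print(round(slope, 6))  # -> 2.0
```

For $1$-Lipschitz losses on the square, \cref{thm:coverdim} then gives regret $\smash{\wt{O}(T^{3/4})}$, matching the lower bound of \cref{thm:coverdimlow} since the upper and lower dimensions coincide here.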
In particular, and quite surprisingly, we see that the movement costs do not add to the regret of the problem! It is important to note that \cref{thm:coverdim} holds only for metric spaces whose (upper) Minkowski dimension is at least $1$. Indeed, finite metric spaces are of Minkowski dimension zero, and as we demonstrated in \cref{sec:results-finite} above, an $\smash{O(\sqrt{T})}$ regret bound is not achievable there. Finite metric spaces are instead associated with a complexity measure which is very different from the Minkowski dimension (namely, the covering/packing complexity). In other words, we exhibit a phase transition between dimension $d=0$ and $d\ge1$ in the rate of growth of the regret induced by the metric. \section{Algorithms} In this section we turn to prove \cref{thm:upper}. Our strategy is much inspired by the approach in \cite{koren2017bandits}, and we employ a two-step approach: first, we consider the case that the metric is an HST metric; we then turn to deal with general metrics, and show how to upper-bound any metric with an HST metric. \subsection{Tree Metrics: The Slowly-Moving Bandit Algorithm} In this section we analyze the simplest case of the problem, in which the metric $\Delta=\DeltaT$ is induced by an HST tree $\mathcal{T}$ (whose leaves are associated with the actions in $K$). In this case, our main tool is the Slowly-Moving Bandit (SMB) algorithm \cite{koren2017bandits}: we demonstrate how it can be applied to general tree metrics, and analyze its performance in terms of intrinsic properties of the metric. We begin by reviewing the SMB algorithm. In order to present the algorithm we require a few additional notations. The algorithm receives as input a tree structure over the set of actions $K$, and its operation depends on this tree structure. We fix an HST tree $\mathcal{T}$ and let $H = \mathrm{depth}(\mathcal{T})$.
For any level $0 \le h \le H$ and action $i \in K$, let $A_h(i)$ be the set of leaves of $\mathcal{T}$ that share a common ancestor with $i$ at level $h$ (recall that level $h=0$ is the bottom-most level, corresponding to the singletons). In terms of the tree metric we have $A_{h}(i)=\{j: \DeltaT(i,j) \le 2^{-H+h}\}$. The \ensuremath{\textsc{SMB}}\xspace algorithm is presented in \cref{alg:smb}. The algorithm is based on the multiplicative update method, in the spirit of the \textsc{Exp3} algorithm \cite{auer2002nonstochastic}. Similarly to \textsc{Exp3}, at each round $t$ the algorithm computes an estimator $\wt{\ell}_t$ of the loss vector $\ell_t$ using the single loss value $\ell_t(i_t)$ observed. In addition to being an (almost) unbiased estimate of the true loss vector, the estimator $\wt{\ell}_t$ used by \ensuremath{\textsc{SMB}}\xspace has the additional property of inducing slowly-changing sampling distributions $p_t$: this is achieved by choosing at random a level $h_t$ of the tree to be rebalanced (in terms of the weights maintained by the algorithm), so that the marginal probabilities $p_{t+1}(A_{h_t}(i))$ remain unchanged at round $t$. In turn, and in contrast with \textsc{Exp3}, the algorithm's choice of action at round $t+1$ is not sampled directly from $p_{t+1}$, but rather conditioned on the last choice of level $h_t$. This is informally justified by the fact that $p_t$ and $p_{t+1}$ agree on the marginal distribution of $A_{h_t}(i_t)$, hence we can think of the action at round $t+1$ as if it were drawn from $p_{t+1}$ subject to $p_{t+1}(A_{h_t})=p_{t}(A_{h_t})$.
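To make the sets $A_h(i)$ concrete, consider a balanced binary $2$-HST with $k=2^H$ leaves indexed by $0,\ldots,k-1$, where the level of the lowest common ancestor of two leaves can be read off their binary representations. The following standalone sketch (the indexing scheme is ours, not from the paper) verifies the identity $A_{h}(i)=\{j: \DeltaT(i,j) \le 2^{-H+h}\}$ exhaustively:

```python
def hst_distance(i, j, H):
    """2-HST metric on k = 2^H leaves: distance 2^(-H+h), where h is the
    level of the lowest common ancestor of leaves i and j."""
    return 0.0 if i == j else 2.0 ** (-H + (i ^ j).bit_length())

def A(h, i):
    """Leaves sharing an ancestor with leaf i at level h: a dyadic block."""
    return set(range((i >> h) << h, ((i >> h) + 1) << h))

H = 4
k = 2 ** H
identity_holds = all(
    A(h, i) == {j for j in range(k) if hst_distance(i, j, H) <= 2.0 ** (-H + h)}
    for i in range(k) for h in range(H + 1)
)
```

Note that $A_0(i)=\{i\}$ and $A_H(i)=K$, matching the description above.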
\begin{myalgorithm}[h] \wrapalgo[0.70\textwidth]{ Input: A tree $\mathcal{T}$ with a finite set of leaves $K$, $\eta>0$.\\ Initialize: $H=\mathrm{depth}(\mathcal{T})$, $A_{h} (i) = B_{2^{-H+h}}(i), ~ \forall i\in K, 0\le h\le H$ \\ Initialize $p_1 = \text{unif}(K)$, $h_0 = H$ and $i_0 \sim p_1$\\ For $t=1,\ldots,T$: \begin{enumerate}[nosep,label=(\arabic*)] \item Choose action $i_t \sim p_t(\,\cdot \mid A_{h_{t-1}}(i_{t-1}))$, observe loss $\ell_t(i_t)$ \item Choose $\sigma_{t,0},\ldots,\sigma_{t,H-1} \in \set{\pm 1}$ uniformly at random;\\ let $h_t = \min\set{0 \le h \le H : \sigma_{t,h} < 0}$ where $\sigma_{t,H} = -1$ \item Compute vectors $\bm\bar{\ell}_{t,0},\ldots,\bm\bar{\ell}_{t,H-1}$ recursively via \begin{flalign*} \bm\bar{\ell}_{t,0}(i) = \frac{\ind{i_t = i}}{p_t(i)} \ell_t(i_t) , && \end{flalign*} and for all $h \ge 1$: \begin{flalign*} \bm\bar{\ell}_{t,h}(i) = -\frac{1}{\eta} \ln\lr{ \sum_{j \in A_{h}(i)} \frac{p_t(j)}{p_t(A_{h}(i))} e^{ -\eta (1+\sigma_{t,h-1}) \bm\bar{\ell}_{t,h-1}(j) } } && \end{flalign*} \item Define $E_t = \set{i : \text{$p_t(A_h(i)) < 2^h\eta$ for some $0 \le h < H$}}$ and set: \begin{align*} \wt{\ell}_t = \mycases {0}{if $i_t \in E_t$;} {\bm\bar{\ell}_{t,0} + \sum_{h=0}^{H-1} \sigma_{t,h} \bm\bar{\ell}_{t,h}}{otherwise} \end{align*} \item Update: \begin{align*} p_{t+1}(i) = \frac{ p_t(i) \, e^{-\eta\wt{\ell}_t(i)} }{ \sum_{j=1}^k p_t(j) \, e^{-\eta \wt{\ell}_t(j)} } \qquad \forall ~ i \in K \end{align*} \end{enumerate} } \caption{The \ensuremath{\textsc{SMB}}\xspace algorithm.} \label{alg:smb} \end{myalgorithm} A key observation is that by directly applying SMB to the metric $\DeltaT$, we can achieve the following regret bound: \begin{theorem} \label{lem:smb-tree} Let $(K,\DeltaT)$ be a metric space defined by a $2$-HST $\mathcal{T}$ with $\mathrm{depth}(\mathcal{T}) = H$ and covering~complexity\xspace $\cC(\mathcal{T})=\cC$.
Using the SMB algorithm we can achieve the following regret bound: \begin{align} \label{eq:smb} \textrm{Regret}_\mathsf{MC}(\ell_{1:T},\DeltaT) = O\LR{ H \sqrt{2^H T \cC \smash{\log\cC }} + H 2^{-H} T } ~. \end{align} \end{theorem} To show \cref{lem:smb-tree}, we adapt the analysis of \cite{koren2017bandits} (which applies only to complete binary HSTs) to handle more general HSTs. We defer this part of our analysis to the appendix, since it follows from a technical modification of the original proof; for the proof of \cref{lem:smb-tree}, see \cref{sec:smb-analysis}. For a tree that is either too deep or too shallow, \cref{eq:smb} does not necessarily lead to a sublinear regret bound, let alone an optimal one. The main idea behind achieving an optimal regret bound for a general tree is to modify it until one of two things happens: either we have optimized the depth so that the two terms on the right-hand side of \cref{eq:smb} are of the same order, in which case we show that one can achieve a regret rate of order $O(\cC(\mathcal{T})^{1/3}T^{2/3})$; or, failing that, we show that the first term on the right-hand side is the dominant one, and it is of order $O(\sqrt{kT})$. For trees that are in some sense ``well behaved" we have the following corollary of \cref{lem:smb-tree} (for a proof see \cref{prf:smb-goodtree}). \begin{corollary} \label{cor:smb-goodtree} Let $(K,\DeltaT)$ be a metric space defined by a tree $\mathcal{T}$ over $\abs{K}=k$ leaves with $\mathrm{depth}(\mathcal{T})=H$ and covering~complexity\xspace $\cC(\mathcal{T})=\cC$. Assume that $\mathcal{T}$ satisfies the following: \begin{enumerate}[nosep,label=(\arabic*)] \item\label{it:01} $ 2^{-H} H T\le \sqrt{2^H H \cC T}$; \item\label{it:01.5} One of the following is true: \begin{enumerate}[nosep] \item\label{it:02} $2^{H}\cC \le k$; \item\label{it:03} $ 2^{-(H-1)} (H-1) T\ge \sqrt{2^{H-1} (H-1) \cC T}$.
\end{enumerate} \end{enumerate} Then, the SMB algorithm can be used to attain $ \textrm{Regret}_\mathsf{MC}(\ell_{1:T},\DeltaT) = \wt{O}\Lr{ \max\Lrset{ \cC^{1/3} T^{2/3}, \sqrt{kT} } } . $ \end{corollary} The following establishes \thmref{thm:upper} for the special case of tree metrics (see \cref{prf:tree} for the proof). \begin{lemma}\label{lem:tree} For any tree $\mathcal{T}$ and time horizon $T$, there exists a tree $\mathcal{T}'$ (over the same set $K$ of $k$ leaves) that satisfies the conditions of \cref{cor:smb-goodtree}, such that $\Delta_{\mathcal{T}'} \ge \Delta_{\mathcal{T}}$ and $\cC(\mathcal{T}') = \cC(\mathcal{T})$. Furthermore, $\mathcal{T}'$ can be constructed efficiently from $\mathcal{T}$ (i.e., in time polynomial in $\abs{K}$ and $T$). Hence, applying SMB to the metric space $(K,\Delta_{\mathcal{T}'})$ leads to $ \textrm{Regret}_\mathsf{MC}(\ell_{1:T},\DeltaT) = \wt{O}\Lr{ \max\Lrset{\cC(\mathcal{T})^{1/3} T^{2/3}, \sqrt{kT}} } . $ \end{lemma} \subsection{General Finite Metrics} Finally, we obtain the general finite case as a corollary of the following. \begin{lemma}\label{lem:main2} Let $(K,\Delta)$ be a finite metric space with $\abs{K}=k$. There exists a tree metric $\DeltaT$ over $K$ such that $4\DeltaT$ dominates $\Delta$ (i.e., $4\DeltaT(i,j) \ge \Delta(i,j)$ for all $i,j \in K$) and for which $\cC(\mathcal{T}) = O(\cC_{\smash{\textrm{c}}}(\Delta)\log{k})$. Furthermore, $\mathcal{T}$ can be constructed efficiently. \end{lemma} \begin{proof} Let $H$ be such that the minimal distance in $\Delta$ is larger than $2^{-H}$. For each $r=2^{-1},2^{-2},\ldots, 2^{-H}$, let $\{B_r(i_{\{1,r\}}),\ldots, B_r(i_{\{m_r,r\}})\} = \mathcal{B}_{r}$ be a covering of $K$ of size at most $N^\mathrm{c}_{r}(\Delta)\log{k}$ using balls of radius~$r$. Note that finding a minimal set of balls of radius~$r$ that covers $K$ is exactly the set cover problem.
Hence, we can efficiently approximate it (to within an $O(\log{k})$ factor) and construct the sets $\mathcal{B}_{r}$. We now construct a tree graph whose nodes are associated with the cover balls: the leaves correspond to singleton balls, and hence to the action space. For each leaf $i$ we find an action $a_1(i) \in K$ such that $ i \in B_{2^{-H+1}}(a_1(i)) \in \mathcal{B}_{2^{-H+1}}. $ If there is more than one, we choose one arbitrarily, and we connect an edge between $i$ and $B_{2^{-H+1}}(a_1(i))$. We continue in this manner inductively to define $a_{r}(i)$ for every $i$ and every level $r>1$: given $a_{r-1}(i)$, we find an action $a_r(i)$ such that $ a_{r-1}(i)\in B_{2^{-H+r}}(a_r(i))\in \mathcal{B}_{2^{-H+r}}, $ and we connect an edge between $B_{2^{-H+r-1}}(a_{r-1}(i))$ and $B_{2^{-H+r}}(a_{r}(i))$. We now claim that the metric induced by the tree graph dominates the original metric up to a factor of $4$. Let $i,j\in K$ be such that $\DeltaT(i,j) = 2^{-H+r}$. Then by construction there are $i,a_1(i),a_2(i),\ldots, a_r(i)$ and $j,a_1(j),a_2(j),\ldots, a_r(j)$ such that $a_r(i)=a_r(j)$, and for which $\Delta(a_s(i),a_{s-1}(i))\le 2^{-H+s}$ and similarly $\Delta(a_s(j),a_{s-1}(j))\le 2^{-H+s}$ for every $s\le r$. Denoting $a_0(i)=i$ and $a_0(j)=j$, we have that \begin{align*} \Delta(i,j) &\le \sum_{s=1}^r \Delta(a_{s-1}(i),a_s(i))+\sum_{s=1}^r \Delta(a_{s-1}(j),a_s(j)) \\ &\le 2\sum_{s=1}^{r} 2^{-H+s} \le 2 \!\cdot\! 2^{-H} \!\cdot\! 2^{r+1} \le 4\DeltaT(i,j) .&&\qedhere \end{align*} \end{proof} \subsection{Infinite Metric Spaces} Finally, we address infinite spaces by discretizing the space $K$ and reducing to the finite case. Recall that in this case we also assume that the loss functions are Lipschitz.
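Before the proof, here is a quick symbolic sanity check (a standalone sketch, not part of the paper) that the discretization scale $\epsilon = T^{-1/(d+2)}$ chosen below equalizes the exponents of $T$ in all three terms of the resulting bound, each at the claimed value $\frac{d+1}{d+2}$:

```python
from fractions import Fraction

def regret_exponents(d):
    """Exponents of T in the three terms eps^{-(d-1)/3} T^{2/3},
    eps^{-d/2} T^{1/2} and eps*T, evaluated at eps = T^(-1/(d+2))."""
    e = Fraction(-1, d + 2)                       # eps = T^e
    return (
        Fraction(2, 3) - Fraction(d - 1, 3) * e,  # eps^{-(d-1)/3} * T^{2/3}
        Fraction(1, 2) - Fraction(d, 2) * e,      # eps^{-d/2} * T^{1/2}
        1 + e,                                    # eps * T
    )

# All three exponents coincide at (d+1)/(d+2) for every d >= 1.
balanced = all(
    set(regret_exponents(d)) == {Fraction(d + 1, d + 2)} for d in range(1, 8)
)
```

Exact rational arithmetic (via `Fraction`) avoids any floating-point ambiguity in the comparison.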
\begin{proof}[Proof of \cref{thm:coverdim}] By the definition of the upper Minkowski dimension $d = \smash{\overline{\cD}}(\Delta) \ge 1$, for some constant $C>0$ (that might depend on the metric $\Delta$) it holds that $N^\mathrm{c}_r(\Delta) \le C r^{-d}$ for all $r>0$. Fix some $\epsilon>0$, and take a minimal $2\epsilon$-covering $K'$ of $K$ of size $\abs{K'} \le C (2\epsilon)^{-d} \le C \epsilon^{-d}$. Observe that by restricting the algorithm to pick actions from $K'$, we lose at most $O(\epsilon T)$ in the regret. Also, since $K'$ is minimal, the distance between any two elements of $K'$ is at least $\epsilon$, so the covering~complexity\xspace of the discretized space satisfies \begin{align*} \cC_{\smash{\textrm{c}}}(\Delta) = \sup_{r \ge \epsilon} \, r \!\cdot\! N^\mathrm{c}_r(\Delta) \le C \sup_{r \ge \epsilon} \, r^{-d+1} \le C \epsilon^{-d+1} , \end{align*} as we assume that $d \ge 1$. Hence, by \cref{thm:upper} and the Lipschitz assumption, there exists an algorithm for which \begin{align*} \textrm{Regret}_\mathsf{MC}(\ell_{1:T},\Delta) = \wt{O}\lr{ \max\Lrset{ \epsilon^{-\frac{d-1}{3}} T^{\frac{2}{3}}, \epsilon^{-\frac{d}{2}} T^{\frac{1}{2}}, \epsilon T } } . \end{align*} A simple computation reveals that $\epsilon=\Theta(T^{-\frac{1}{d+2}})$ optimizes the above bound, and leads to $\wt{O}(T^\frac{d+1}{d+2})$ movement regret. \end{proof} \appendix \section{Proofs}\label{ap:proofs} \subsection{Proof of \cref{cor:smb-goodtree}}\label{prf:smb-goodtree} \begin{corollary*} Let $(K,\DeltaT)$ be a metric space defined by a tree $\mathcal{T}$ over $\abs{K}=k$ leaves with $\mathrm{depth}(\mathcal{T})=H$ and covering~complexity\xspace $\cC(\mathcal{T})=\cC$.
Assume that $\mathcal{T}$ satisfies the following: \begin{enumerate}[nosep,label=(\arabic*)] \item\label{it:1} $ 2^{-H} T\le \sqrt{2^H \cC T}$; \item\label{it:1.5} One of the following is true: \begin{enumerate}[nosep] \item\label{it:2} $2^{H-1}\cC \le k$; \item\label{it:3} $ 2^{-(H-1)} T\ge \sqrt{2^{H-1} \cC T}$. \end{enumerate} \end{enumerate} Then, the SMB algorithm can be used to attain regret bounded as \begin{align*} \textrm{Regret}_\mathsf{MC}(\ell_{1:T},\DeltaT) = \wt{O}\LR{ \max\Lrset{ \cC^{1/3} T^{2/3}, \sqrt{kT} } } ~. \end{align*} \end{corollary*} \begin{proof} \crefname{enumi}{condition}{conditions} Notice that by \cref{it:1} and \cref{lem:smb-tree} we have $ \textrm{Regret}_\mathsf{MC} \le c \sqrt{T 2^H H^2 \cC \log \cC } $ for some constant factor~$c$. Assume first that \cref{it:2} holds; note that in particular $\max(2^{H-1},\cC)\le k$. We thus obtain: \begin{align} \textrm{Regret}_\mathsf{MC} & \le c\sqrt{T2^H H^2 \cC \log \cC } \nonumber \\ & \le c\sqrt{T 2^H \cC\log^2 k \log k } & \because~\max(2^{H-1}, \cC)\le k \nonumber \\ \label{eq:1} &\le 4c \sqrt{T k \log \cC}\log^{3/2}k \le 4c \sqrt{k T}\log^{3/2}{k} . \end{align} The other case is that \cref{it:3} holds. By rearranging, we can write \cref{it:3} as \begin{align} \label{eq:2} 2^{H} &\le 8 T^{1/3} \cC^{-1/3} , \end{align} which also implies $H=O(\log T)$; hence \begin{align}\label{eq:3} \sqrt{2^H H^2 T \cC\log \cC} &= \wt{O}\Lr{\cC^{1/3}T^{2/3} } . \end{align} Overall, we see that in both cases the regret is bounded by the maximum of the two terms in \cref{eq:1,eq:3}. \end{proof} \subsection{Proof of \cref{lem:tree}} \label{prf:tree} \begin{lemma*} For any tree $\mathcal{T}$ and time horizon $T$, there exists a tree $\mathcal{T}'$ (over the same set $K$ of $k$ leaves) that satisfies the conditions of \cref{cor:smb-goodtree}, such that $\Delta_{\mathcal{T}'} \ge \Delta_{\mathcal{T}}$ and $\cC(\mathcal{T}') = \cC(\mathcal{T})$.
Furthermore, $\mathcal{T}'$ can be constructed efficiently from $\mathcal{T}$ (i.e., in time polynomial in $\abs{K}$ and $T$). Hence, applying SMB to the metric space $(K,\Delta_{\mathcal{T}'})$ leads to \[ \textrm{Regret}_\mathsf{MC}(\ell_{1:T},\DeltaT) = \wt{O}\lr{ \max\Lrset{\cC(\mathcal{T})^{1/3} T^{2/3}, \sqrt{kT}} } . \] \end{lemma*} \begin{proof} \crefname{enumi}{condition}{conditions} Let us call $\mathcal{T}$ a \emph{$T$-well-behaved\xspace} tree if it satisfies the conditions of \cref{cor:smb-goodtree}. First we construct a tree $\mathcal{T}_1$ that satisfies \cref{it:1}. To do so, we simply add to each leaf of $\mathcal{T}$ a single child, which is a new leaf; we naturally identify each leaf in $\mathcal{T}_1$ with an action from $K$, by considering the parent of the leaf. One can see that, with the definition of the HST metric, we have not changed the distances, i.e., $\Delta_{\mathcal{T}_1}=\Delta_{\mathcal{T}}$. In particular, we did not change the covering numbers or the complexity. (Note, however, that this change does affect the algorithm, as it depends on the tree representation and not directly on the metric.) The change did increase the depth of the tree by one, and we can repeat this step iteratively until \cref{it:1} is satisfied. To avoid the notation $\mathcal{T}_1$, we will simply assume that $\mathcal{T}$ satisfies \cref{it:1}. Next, we prove the following statement by induction over the depth $H$ of $\mathcal{T}$: we assume that the statement holds for every tree of depth $H-1$ that satisfies \cref{it:1}, and prove it for depth $H$. For the base case $H=1$, since $\cC \le k$ we have that \cref{it:2} holds. For the induction step, let $\mathcal{T}_1$ be the tree obtained from $\mathcal{T}$ by connecting all the leaves to their grandparents (and removing their parents from the graph). The first observation is that we have not decreased the distances between the leaves, so $\DeltaT \le \Delta_{\mathcal{T}_1}$.
We may also assume that $\mathcal{T}$ is not $T$-well-behaved\xspace, since otherwise the statement trivially holds for $\mathcal{T}$ with $\mathcal{T}'=\mathcal{T}$. In particular \cref{it:2} fails, so that $\cC(\mathcal{T})> 2^{-H+1} k$; we next show that this implies $\cC(\mathcal{T}_1)= \cC(\mathcal{T})$. Note that by construction, for every $r=2^{-1},\ldots,2^{-H+2}$ we have that $N^\mathrm{c}_{r}(\mathcal{T})=N^\mathrm{c}_r(\mathcal{T}_1)$. We also have by assumption $\cC(\mathcal{T})> 2^{-H}k$, and since any covering is of size at most $k$, also $\cC(\mathcal{T})> 2^{-H+1}N^\mathrm{c}_{2^{-H+1}}(\mathcal{T})$. Overall, by definition of $\cC(\mathcal{T})$ we have that $\cC(\mathcal{T})=\sup_{2^{-H+2} \le \epsilon\le 1} \epsilon N^\mathrm{c}_\epsilon(\mathcal{T})$.
Hence, \begin{align*} \cC_{\smash{\textrm{c}}}(\mathcal{T}_1)&= \sup_{0<\epsilon\le 1} \epsilon N^\mathrm{c}_{\epsilon}(\mathcal{T}_1) \\ &= \max\lrset{ \max_{2^{-H+2}<\epsilon\le 1}\epsilon N^\mathrm{c}_{\epsilon}(\mathcal{T}_1) ~,~ 2^{-H+1}k } = \max\lrset{ \max_{2^{-H+2}<\epsilon\le 1}\epsilon N^\mathrm{c}_{\epsilon}(\mathcal{T}) ~,~ 2^{-H+1}k } \\ &= \max\set{ \cC(\mathcal{T}) ~,~ 2^{-H+1}k} \\ &= \cC_{\smash{\textrm{c}}}(\mathcal{T}) . \end{align*} Next, assume that $\mathcal{T}_1$ does not satisfy \cref{it:1}. We then have $ 2^{-(H-1)} T > \sqrt{2^{H-1} \cC(\mathcal{T}_1) T} = \sqrt{2^{H-1} \cC(\mathcal{T}) T} , $ which implies that $\mathcal{T}$ satisfies \cref{it:3}. Thus, either $\mathcal{T}$ is $T$-well-behaved\xspace or we can construct a tree $\mathcal{T}_1$ of depth $H-1$ such that $\DeltaT \le \Delta_{\mathcal{T}_1}$, $\cC_{\smash{\textrm{c}}}(\mathcal{T})=\cC_{\smash{\textrm{c}}}(\mathcal{T}_1)$ and $\mathcal{T}_1$ satisfies \cref{it:1}. The result now follows by the induction step. \end{proof} \subsection{Proof of \cref{thm:upper}} \begin{theorem*} Let $(K,\Delta)$ be a finite metric space over $\abs{K} = k$ elements with diameter $\le 1$ and covering~complexity\xspace $\cC_{\smash{\textrm{c}}} = \cC_{\smash{\textrm{c}}}(\Delta)$. There exists an algorithm that, for any sequence of loss functions $\ell_1,\ldots,\ell_T$, guarantees $ \textrm{Regret}_\mathsf{MC}(\ell_{1:T},\Delta) = \wt{O}\Lr{ \max\Lrset{ \cC_{\smash{\textrm{c}}}^{1/3} T^{2/3}, \sqrt{kT} } } . $ \end{theorem*} \begin{proof} Given a finite metric space $(K,\Delta)$, by \cref{lem:main2} there is a tree $\mathcal{T}$ with complexity $\cC=O(\cC_{\smash{\textrm{c}}}(\Delta)\log k)$ such that $\Delta(i,j)\le 4\DeltaT(i,j)$.
We can apply SMB as depicted in \cref{lem:tree} to the sequence of losses $\frac{1}{4}\ell_1,\ldots, \frac{1}{4}\ell_T$ to obtain \begin{align*} \frac{1}{4}\textrm{Regret}_\mathsf{MC}(\ell_{1:T},\Delta) &= \mathbb{E}\lrbra{\sum_{t=1}^T \tfrac{1}{4}\ell_t(i_t) + \tfrac{1}{4} \Delta(i_t,i_{t-1})} - \min_{i^\star\in K} \sum_{t=1}^T \tfrac{1}{4} \ell_t(i^\star) \\ &\le \mathbb{E}\lrbra{\sum_{t=1}^T \tfrac{1}{4}\ell_t(i_t) + \DeltaT(i_t,i_{t-1})} - \min_{i^\star\in K} \sum_{t=1}^T \tfrac{1}{4} \ell_t(i^\star) \\ & = \wt{O}\left(\max\left(\cC^{1/3}T^{2/3},\sqrt{kT}\right)\right) .&&\qedhere \end{align*} \end{proof} \subsection{Proof of \cref{thm:lower}}\label{sec:lower} We next set out to prove the lower bound of \cref{thm:lower}. We begin by recalling the known lower bound for MAB with unit switching costs. \begin{theorem}[\citet{dekel2014bandits}]\label{thm:koren} Let $(K,\Delta)$ be a metric space over $\abs{K}=k\ge 2$ actions with $\Delta(i,j)=c$ for every $i \ne j \in K$. Then for any algorithm, there exists a sequence $\ell_1,\ldots,\ell_T$ such that \[ \textrm{Regret}_\mathsf{MC}(\ell_{1:T},\Delta) = \wt{\Omega}\Lr{(ck)^{1/3}T^{2/3}}. \] \end{theorem} Note that for a discrete metric, the minimal covering of $k$ points with balls of radius $c<1$ consists of $k$ balls, hence $N^\mathrm{c}_{c}(\Delta)=k$. Thus we see that \cref{thm:koren} already gives \cref{thm:lower} for the special case of a unit-cost metric (up to logarithmic factors). The general case can be derived by embedding the lower bound construction into an action set that constitutes a $c$-packing of size $N^\mathrm{p}_c(\Delta)$. \begin{proof}[Proof of \cref{thm:lower} (sketch)] First, it is easy to see that the adversary can always force a regret of $\Omega(\sqrt{kT})$; indeed, this lower bound applies to the MAB problem even when there is no movement cost between actions \citep{auer2002nonstochastic}. We next show a regret lower bound of $\Omega(\cC_{\smash{\textrm{p}}}^{1/3} T^{2/3})$.
By definition, there exists $\epsilon>0$ such that $\cC_{\smash{\textrm{p}}} = \epsilon N^\mathrm{p}_{\epsilon}(\Delta)$. Let $B_\epsilon(i_1),\ldots,B_\epsilon(i_n)$ be a set of balls that form a maximal packing with $n=N^\mathrm{p}_{\epsilon}(\Delta)$, and observe that $\Delta(i,i') \ge \epsilon$ for all $i,i' \in \set{i_1,\ldots,i_n}$, $i \ne i'$. Since we assume the diameter of the metric space is $1$, we have that $N^\mathrm{p}_{\epsilon}(\Delta)\ge 2$ for all $\epsilon<1$; therefore we may assume $n\ge 2$. We can now use \cref{thm:koren} to show that for any algorithm, one can construct a sequence $\ell_1,\ldots,\ell_T$ of loss functions supported on $i_1,\ldots,i_n$ (and extend them to the entire domain $K$ by assigning a maximal loss of $1$ to any $i \notin \set{i_1,\ldots,i_n}$) such that \begin{align*} \textrm{Regret}_\mathsf{MC}(\ell_{1:T},\Delta) = \Omega\Lr{ (\epsilon n)^{1/3} T^{2/3} } = \Omega\Lr{ \cC_{\smash{\textrm{p}}}^{1/3} T^{2/3} } .&\qedhere \end{align*} \end{proof} \section{Analysis of SMB for General HSTs} \label{sec:smb-analysis} In this section, we extend the analysis given in \cite{koren2017bandits} for the SMB algorithm (\cref{alg:smb}) to general HST metrics over finite action sets, and prove the following theorem. \begin{theorem} \label{thm:main} Assume that the metric $\Delta = \DeltaT$ is specified by a tree $\mathcal{T}$ which is an HST with $\mathrm{depth}(\mathcal{T}) = H$ and covering~complexity\xspace $\cC(\mathcal{T})=\cC$. Then, for any sequence of loss functions $\ell_1,\ldots, \ell_T$, \cref{alg:smb} guarantees that \begin{align*} \textrm{Regret}_\mathsf{MC}(\ell_{1:T},\DeltaT) = O\LR{ \frac{H\log{\cC}}{\eta} + \eta \cC H 2^H T + H 2^{-H} T } . \end{align*} In particular, by setting $\eta=\Theta\Lr{\sqrt{2^{-H}\log(\cC)/\cC T}}$, the bound on the expected movement regret of the algorithm becomes \begin{align*} \textrm{Regret}_\mathsf{MC}(\ell_{1:T},\DeltaT) = O\lr{ H \sqrt{T 2^H \cC\log{\cC}} + H 2^{-H} T } .
\end{align*} \end{theorem} The main new ingredients in the generalized proof are bounds on the bias and the variance of the loss estimates $\wt{\ell}_t$ used by \cref{alg:smb}, which we give in the following two lemmas. In the proofs of both, we require the following inequality: \begin{align} \label{eq:biasvar} \frac{1}{H} \sum_{h=0}^{H-1} \sum_{i \in K} \frac{2^h}{\abs{A_h(i)}} \le 2^H \cC . \end{align} This follows from the fact that $\sum_{i \in K} \abs{A_h(i)}^{-1}$ equals $N^\mathrm{c}_{2^{h-H}}(\DeltaT)$ (both quantities are equal to the number of nodes in the $h$'th level of $\mathcal{T}$), and since $2^{h-H} N^\mathrm{c}_{2^{h-H}}(\DeltaT) \le \cC_{\smash{\textrm{c}}}(\DeltaT) = \cC$ by definition of the covering~complexity\xspace of $\mathcal{T}$. We begin by bounding the bias of the estimator $\wt{\ell}_t$ with respect to the true loss vector $\ell_t$. \begin{lemma} \label{lem:unbiased} For all $t$, we have $\mathbb{E}[\wt{\ell}_t(i)] \le \ell_t(i)$ and $\mathbb{E}[\ell_t(i_t)] \le \mathbb{E}[p_t \cdot \wt{\ell}_t] + \eta H 2^H \cC$. \end{lemma} \begin{proof} The proof of the first inequality is identical to the one found in \cite{koren2017bandits} and is thus omitted. To bound $\mathbb{E}[\ell_t(i_t)]$, observe that $\mathbb{E}[p_t \cdot \wt{\ell}_t \mid i_t \in E_t] = 0$ and \begin{align*} \mathbb{E}[p_t \cdot \wt{\ell}_t \mid i_t \notin E_t] = \mathbb{E}[p_t \cdot \bm\bar{\ell}_{t,0} \mid i_t \notin E_t] + \sum_{h=0}^{H-1} \mathbb{E}[\sigma_{t,h}] \, \mathbb{E}[p_t \cdot \bm\bar{\ell}_{t,h} \mid i_t \notin E_t] = \mathbb{E}[\ell_t(i_t) \mid i_t \notin E_t] .
\end{align*} Then, denoting $\beta_t= \Pr\left[i_t\in E_t\right]$, we have \begin{align*} \mathbb{E}[\ell_t(i_t)] &= \beta_t \mathbb{E}[\ell_t(i_t) \mid i_t \in E_t] + (1-\beta_t) \mathbb{E}[\ell_t(i_t) \mid i_t \notin E_t] \\ &\le \beta_t + (1-\beta_t) \mathbb{E}[p_t \cdot \wt{\ell}_t \mid i_t \notin E_t] \\ &= \beta_t + \mathbb{E}[p_t \cdot \wt{\ell}_t] , \end{align*} where the inequality uses the fact that $\ell_t(i_t) \le 1$. To complete the proof, we have to show that $\beta_t \le \eta H 2^H \cC$. To this end, write \begin{align*} \beta_t = \Pr[i_t \in E_t] \le \sum_{h=0}^{H-1} \Pr[p_t(A_h(i_t)) < 2^h\eta] . \end{align*} Using \cref{eq:property} to write \begin{align*} \mathbb{E}\lrbra{ \frac{1}{p_t(A_h(i_t))} } = \sum_{i \in K} \frac{1}{\abs{A_h(i)}} \mathbb{E}\lrbra{ \frac{\ind{i_t \in A_h(i)}}{p_t(A_h(i))} } = \sum_{i \in K} \frac{1}{\abs{A_h(i)}} , \end{align*} together with Markov's inequality, we obtain \begin{align*} \Pr\!\big[ p_t(A_h(i_t)) < 2^h \eta \big] = \Pr\lrbra{ \frac{1}{p_t(A_h(i_t))} > \frac{1}{2^h \eta} } \le \eta \sum_{i \in K} \frac{2^h}{\abs{A_h(i)}} . \end{align*} Using \cref{eq:biasvar}, we conclude that \begin{align*} \beta_t \le \eta \sum_{h=0}^{H-1} \sum_{i \in K} \frac{2^h}{\abs{A_h(i)}} \le \eta H 2^H \cC .&\qedhere \end{align*} \end{proof} We proceed to control the variance of the estimator $\wt{\ell}_t$. \begin{lemma} \label{lem:variance} For all $t$, we have $\mathbb{E}[p_t \cdot \wt{\ell}_t^2] \le 2H 2^H \cC$. \end{lemma} \begin{proof} We begin by bounding \begin{align*} \wt{\ell}_t^2(i) \le \Lr{ \bm\bar{\ell}_{t,0}(i) + \sum_{h=0}^{H-1} \sigma_{t,h} \bm\bar{\ell}_{t,h}(i) }^2 . \end{align*} Since $\mathbb{E}[\sigma_{t,h}] = 0$ and $\mathbb{E}[\sigma_{t,h} \sigma_{t,h'}] = 0$ for all $h \ne h'$, we have for all $i$ that \begin{align} \label{eq:var1} \mathbb{E}[\wt{\ell}_t^2(i)] = \mathbb{E}[\bm\bar{\ell}_{t,0}^2(i)] + \sum_{h=0}^{H-1} \mathbb{E}[ \bm\bar{\ell}_{t,h}^2(i) ] \le 2\sum_{h=0}^{H-1} \mathbb{E}[ \bm\bar{\ell}_{t,h}^2(i) ] .
\end{align} Following \cite{koren2017bandits}, we have for all $h$ by \cref{lem:bell} that \begin{align*} p_t \cdot \bm\bar{\ell}_{t,h}^2 &\le \frac{\sum_{i \in K} p_t(i) \ind{i_t \in A_h(i)}}{p_t(A_h(i_t))^2} \prod_{j=0}^{h-1} (1+\sigma_{t,j})^2 \\ &= \frac{1}{p_t(A_h(i_t))} \prod_{j=0}^{h-1} (1+\sigma_{t,j})^2 \\ &= \sum_{i \in K} \frac{1}{\abs{A_h(i)}} \frac{\ind{i_t \in A_h(i)}}{p_t(A_h(i))} \prod_{j=0}^{h-1} (1+\sigma_{t,j})^2 . \end{align*} Now, since $i_t$ is independent of the $\sigma_{t,j}$, and recalling \cref{eq:property}, we get \begin{align*} \mathbb{E}_t[p_t \cdot \bm\bar{\ell}_{t,h}^2] \le \sum_{i \in K} \frac{1}{\abs{A_h(i)}} \mathbb{E}\lrbra{ \frac{\ind{i_t \in A_h(i)}}{p_t(A_h(i))} } \prod_{j=0}^{h-1} \mathbb{E}[(1+\sigma_{t,j})^2] = \sum_{i \in K} \frac{2^h}{\abs{A_h(i)}} . \end{align*} This, combined with \cref{eq:var1,eq:biasvar}, gives the result: \begin{align*} \mathbb{E}[p_t \cdot \wt{\ell}_t^2] \le 2\sum_{h=0}^{H-1} \mathbb{E}[p_t \cdot \bm\bar{\ell}_{t,h}^2] \le 2 \sum_{h=0}^{H-1} \sum_{i \in K} \frac{2^h}{\abs{A_h(i)}} \le 2H 2^H \cC &.\qedhere \end{align*} \end{proof} \subsection{Additional Lemmas from \cite{koren2017bandits}} We state several lemmas proved in \cite{koren2017bandits} that are required for our generalized analysis; we refer to the original paper for the proofs. \begin{lemma} \label{lem:bell} For all $t$ and $0 \le h < H$ the following holds almost surely: \begin{align} \label{eq:bell1} 0 \le \bm\bar{\ell}_{t,h}(i) \le \frac{\ind{i_t \in A_h(i)}}{p_t(A_h(i))} \prod_{j=0}^{h-1} (1+\sigma_{t,j}) \qquad \forall ~ i \in K \,. \end{align} In particular, if $\sigma_{t,j} = -1$ then $\bm\bar{\ell}_{t,h} = 0$ for all $h > j$. As a result, \begin{align} \label{eq:tell-equiv} \wt{\ell}_t = \bm\bar{\ell}_{t,0} - \bm\bar{\ell}_{t,h_t} + \sum_{j=0}^{h_t-1} \bm\bar{\ell}_{t,j} .
\end{align} \end{lemma} \begin{lemma} \label{lem:sampling} For all $t$ and $0 \le h < H$ the following hold: \begin{enumerate}[label=(\roman*)] \item for all $A \in \set{ A_h(i) : i \in K}$ we have \begin{align} \label{eq:property} \mathbb{E}\lrbra{\frac{\ind{i_t\in A}}{p_t(A)}}=1 ~; \end{align} \item with probability at least $1-2^{-(h+1)}$, we have that $A_h(i_t)=A_h(i_{t-1})$. \end{enumerate} \end{lemma} \begin{lemma}[Second-order regret bound for MW] \label{lem:mw2} Let $\eta > 0$ and let $c_1,\ldots,c_T \in \mathbb{R}^k$ be real vectors such that $c_t(i) \ge -1/\eta$ for all $t$ and $i$. Consider a sequence of probability vectors $q_1,\ldots,q_T$ defined by $q_1 = (\tfrac{1}{k},\ldots,\tfrac{1}{k})$, and for all $t > 1$: \begin{align*} q_{t+1}(i) = \frac{ q_t(i) \, e^{-\eta c_t(i)} }{ \sum_{j=1}^k q_t(j) \, e^{-\eta c_t(j)} } \qquad \forall ~ i \in [k] . \end{align*} Then, for all $i^\star \in [k]$ we have that \begin{align*} \sum_{t=1}^T q_t \cdot c_t - \sum_{t=1}^T c_t(i^\star) \le \frac{\ln{k}}{\eta} + \eta \sum_{t=1}^T q_t \cdot c_t^2 . \end{align*} \end{lemma} \subsection{Regret Analysis} We now have all we need in order to prove our main result. \begin{proof}[Proof of \cref{thm:main}] First, we bound the expected movement cost. \cref{lem:sampling} says that with probability at least $1-2^{-(h+1)}$, the actions $i_t$ and $i_{t-1}$ belong to the same subtree at level $h$ of the tree, which means that $\DeltaT(i_t,i_{t-1}) \le 2^{h-H}$ with the same probability. Hence, \begin{align} \label{eq:Emove} \mathbb{E}[\DeltaT(i_t,i_{t-1})] \le \sum_{h=0}^{H-1} 2^{h-H} \Pr\!\big[ \DeltaT(i_t,i_{t-1}) > 2^{h-H} \big] \le \sum_{h=0}^{H-1} 2^{-(H+1)} = \frac{H}{2^{H+1}} , \end{align} and the cumulative movement cost is then $O(H 2^{-H} T)$. We turn to analyze the cumulative loss of the algorithm. We begin by observing that $\wt{\ell}_t(i) \ge -1/\eta$ for all $t$ and $i$.
To see this, notice that $\wt{\ell}_t = 0$ unless $i_t \notin E_t$, in which case we have, by \cref{lem:bell} and the definition of $E_t$, \begin{align*} 0 \le \bm\bar{\ell}_{t,h}(i) \le \frac{2^h}{p_t(A_h(i_t))} \le \frac{1}{\eta} \qquad\quad \forall ~ 0 \le h < H , \end{align*} and since $\wt{\ell}_t$ has the form $\wt{\ell}_t = \bm\bar{\ell}_{t,0} + \sum_{j=0}^{h_t-1} \bm\bar{\ell}_{t,j} - \bm\bar{\ell}_{t,h_t}$ (recall \cref{eq:tell-equiv}), we see that $\wt{\ell}_t(i) \ge -1/\eta$. Hence, we can use the second-order bound of \cref{lem:mw2} on the vectors $\wt{\ell}_t$ to obtain \begin{align*} \sum_{t=1}^T p_t \cdot \wt{\ell}_t - \sum_{t=1}^T \wt{\ell}_t(i^\star) \le \frac{\ln{k}}{\eta} + \eta \sum_{t=1}^T p_t \cdot \wt{\ell}_t^2 \end{align*} for any fixed $i^\star \in K$. Taking expectations, using \cref{lem:unbiased,lem:variance} and the rough bound $k \le 2^H \cC$, we have \begin{align} \label{eq:Eregret} \mathbb{E}\lrbra{ \sum_{t=1}^T \ell_t(i_t) } - \sum_{t=1}^T \ell_t(i^\star) \le \frac{H\ln(\cC)}{\eta} + 3\eta H 2^H T \cC . \end{align} The theorem now follows from \cref{eq:Emove,eq:Eregret}. \end{proof} \end{document}
\begin{document} \title{Vector spaces of non-extendable holomorphic functions} \author{Luis Bernal-Gonz\'alez} \maketitle {\footnotesize {\sl \centerline{Departamento de An\'alisis Matem\'atico. Facultad de Matem\'aticas.} \centerline{Apdo.~1160. Avda.~Reina Mercedes, 41080 Sevilla, Spain.} \centerline{E-mail: {\tt [email protected]}} }} \vskip 10pt \centerline{\sl To Professor Jos\'e Bonet Solves on his 60th birthday} \begin{abstract} \noindent In this paper, the linear structure of the family $H_e(G)$ of holomorphic functions in a domain $G$ of the complex plane that are not analytically continuable beyond the boundary of $G$ is analyzed. We prove that $H_e(G)$ contains, except for zero, a dense algebra; and, under appropriate conditions, the subfamily of $H_e(G)$ consisting of boundary-regular functions contains dense vector spaces with maximal dimension, as well as infinite dimensional closed vector spaces and large algebras. The case in which $G$ is a domain of existence in a complex Banach space is also considered. The results obtained complete or extend a number of previous ones by several authors. \vskip .15cm \noindent {\sl 2010 Mathematics Subject Classification:} Primary 30B40. Secondary 15A03, 30H50, 32D05, 46G20. \vskip .15cm \noindent {\sl Key words and phrases:} Dense-lineability, spaceability, algebrability, non-continuable holomorphic functions, domain of existence. \end{abstract} \section{Introduction} \quad This paper intends to be a contribution to the study of the linear structure of the family of non-extendable holomorphic functions. The search for linear (or, in general, algebraic) structures within nonlinear sets has become a trend in the last two decades, see e.g.~the survey \cite{BPS}. Here we restrict ourselves to the setting of complex analytic functions, with focus on those that cannot be continued beyond the boundary of the domain.
\vskip .15cm Although our main concern is the complex plane $\mathbb C$, it is convenient to state definitions in a more general framework. Assume that $E$ is a complex Banach space. A {\it domain} $G$ in $E$ is a nonempty connected open subset of $E$. Throughout this paper, we will assume that $G \ne E$. Denote by $H(G)$ the space of all holomorphic functions $f:G \to \mathbb C$ (see e.g.~\cite{Chae} for definitions and pro\-per\-ties), and by $\partial G$ the boundary of $G$. We say that a function $f \in H(G)$ is {\it holomorphically non-extendable across any boundary point} (synonymous expressions are: $f$ is {\it analytically non-continuable beyond} $\partial G$, $f$ is {\it holomorphic exactly} on $G$, $G$ is a {\it domain of existence} of $f$) whenever there do not exist two domains $G_1$ and $G_2$ in $E$ and $\widetilde{f} \in H(G_1)$ such that $G_2 \subset G \cap G_1$, $G_1 \not\subset G$ and $\widetilde{f} = f$ on $G_2$. We denote by $H_e(G)$ the family of all $f \in H(G)$ that are holomorphic exactly on $G$. A domain $G \subset E$ is said to be a {\it domain of existence} if it is a domain of existence of some function $f \in H(G)$ (that is, if $H_e(G) \ne \emptyset$). And $G$ is called a {\it domain of holomorphy} \,provided that there do not exist two domains $G_1$ and $G_2$ in $E$ with $G_2 \subset G \cap G_1$, $G_1 \not\subset G$ such that, for every $f \in H(G)$, there exists $\widetilde{f} \in H(G_1)$ with $\widetilde{f} = f$ on $G_2$. \vskip .15cm Plainly, every domain of existence is a domain of holomorphy. In the case where $E = \mathbb C^N$ $(N \in \mathbb N := \{1,2,...\})$, the Cartan--Thullen theorem \cite{Kaup} asserts that $G$ is a domain of existence if and only if $G$ is a domain of holomorphy, and if and only if $G$ is holomorphically convex, that is, for every compact subset $K$ of $G$, the set $\widehat{K} := \{x \in G: \, |f(x)| \le \sup_K |f|$ for all $f \in H(G)\}$ satisfies dist$(\widehat{K},E \setminus G) > 0$.
\vskip .15cm Turning to the case $E = \mathbb C$, in 1884 Mittag-Leffler proved that every domain $G \subset \mathbb C$ is a domain of existence \cite[Chapter 10]{Hille} (this is no longer true for higher dimensions, see e.g.~\cite{Krantz}). Moreover, $f \in H_e(G)$ if and only if $\rho (f,a) = {\rm dist}(a,\partial G)$ for all $a \in G$, where $\rho (f,a)$ denotes the radius of convergence of the Taylor series of $f$ with center at $a$. Of course, we have $H_e(G) \subset H_{we}(G)$, where \,$H_{we} (G)$ \,stands for the class of functions which are holomorphic weakly exactly on $G$, that is, $f \in H_{we}(G)$ \,if and only if $f$ has no holomorphic extension to any domain containing $G$ strictly. But the reverse inclusion is not true: take e.g.~$G = \mathbb C \setminus (-\infty ,0]$ and $f :=$ the principal branch of $\log z$. Observe that \,$H_e(G) = H_{we}(G)$ \,provided that \,$G$ \,is a {\it Jordan domain}, i.e.~a domain in $\mathbb C$ such that $\partial G$ is a homeomorphic image in $\mathbb C_\infty$ of a circle. Here $\mathbb C_\infty := \mathbb C \,\cup \, \{\infty\}$ denotes the one-point compactification of $\mathbb C$. Note that we allow unbounded domains: for instance, an open half-plane is Jordan. \vskip .15cm Recall that a domain \,$G \subset \mathbb C$ \,is said to be {\it regular} if \,$G = \overline{G}^0$ ($A^0$ denotes the interior of $A$, while $\overline{A}$ stands for the closure of $A$). It is plain that every Jordan domain is regular, but there are regular (even simply connected) domains that are not Jordan, for instance, $G = \{z: |z-1| < 1$ and $|z-(1/2)| > 1/2\}$. \vskip .15cm For every domain $G$ of a complex Banach space $E$, the space $H(G)$ will be endowed with the topology of uniform convergence on compacta. If $E = \mathbb C^N$ $(N \in \mathbb N )$, $H(G)$ becomes a Fr\'echet space (i.e.~a complete metrizable locally convex space), but it is no longer metrizable if $E$ is infinite dimensional, see \cite{Alex,AnsP,Chae}. 
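The radius-of-convergence criterion above lends itself to a quick numerical illustration (ours; the helper name is an assumption, not from the paper). By the Cauchy--Hadamard formula, $\rho(f,a) = 1/\limsup_j |c_j|^{1/j}$. For the lacunary series $\sum_n z^{2^n}$, the unit circle is a natural boundary by Hadamard's gap theorem, so $\rho(f,0) = 1 = {\rm dist}(0,\partial \mathbb D)$, consistently with $f \in H_e(\mathbb D)$; whereas $\sum_j (z/2)^j = 1/(1-z/2)$ gives $\rho(f,0) = 2 > 1$, so the criterion fails and this function is not in $H_e(\mathbb D)$.

```python
def cauchy_hadamard_radius(coeffs):
    """Estimate 1 / limsup_j |c_j|^{1/j} from finitely many Taylor
    coefficients, using the sup over the tail of the nonzero terms as a
    finite stand-in for the limsup."""
    vals = [abs(c) ** (1.0 / j) for j, c in enumerate(coeffs) if j > 0 and c != 0]
    tail = vals[len(vals) // 2:]
    return 1.0 / max(tail)

# lacunary series sum_n z^(2^n): c_j = 1 iff j is a power of two
lacunary = [1.0 if j > 0 and (j & (j - 1)) == 0 else 0.0 for j in range(4097)]

# geometric-type series sum_j (z/2)^j = 1/(1 - z/2): c_j = 2^(-j)
geometric = [2.0 ** (-j) for j in range(501)]
```

Running the helper on the two coefficient lists recovers radii $1$ and $2$ respectively, matching the discussion above.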
In 1933 Kierst and Szpilrajn \cite{KiS} showed in the case of the open unit disc $G = \mathbb D = \{z \in \mathbb C : \, |z| < 1\}$ that the property discovered by Mittag-Leffler is topologically generic; specifically, $H_e(\mathbb D)$ is residual (so dense) in $H(\mathbb D )$, that is, its complement in $H(\mathbb D )$ is of first category. For extensions and improvements of the Mittag-Leffler and Kierst--Szpilrajn theorems, see \cite{Bieber,Jarni,Kaha68,Lelong,Ryll}. Kahane and the author (see \cite[Theorem 3.1 and following remarks]{Kah} and \cite[Theorem 3.1]{BerS}) observed that the residuality of $H_e(G)$ holds for many subspaces $X$ of $H(G)$: \begin{theorem} \label{Th Kahane} Let $G \subset \mathbb C$ be a domain and $X$ be a Baire topological vector space with $X \subset H(G)$ such that all evaluation functionals $$ f \in X \mapsto f^{(k)} (a) \in \mathbb C \,\,\,\, (a \in G, \, k \ge 0) $$ are con\-ti\-nuous and, for every $a \in G$ and every $r > {\rm dist}\,(a,\partial G)$, there exists $f \in X$ such that $\rho (f,a) < r$. Then $H_e(G) \cap X$ is residual in $X$. \end{theorem} Of course the case $X = H(G)$ is included. But Theorem \ref{Th Kahane} also includes some interesting strict subspaces, such as Hardy ($H^p$, $p>0$) and Bergman ($B^p$, $p>0$) spaces (in the case $G = \mathbb D$, see \cite{BerS}; for definitions and properties, see e.g.~\cite{Zhu}) and the space $A^\infty (G)$ (see \cite{BerCL}) if $G$ is a {\it regular} domain. By $A^\infty (G)$ we have denoted the class of {\it boundary-regular} holomorphic functions in $G$, that is, $A^\infty (G) = \{ f \in H(G): \ f^{(j)} $ has a continuous extension to $\overline{G}$ for all $j \in \mathbb N_0 \} $, where $\mathbb N_0 := \{0,1,2, \dots \}$. It becomes a Fr\'echet space when it is endowed with the topology of uniform convergence of functions and all their derivatives on each compact set $K \subset \overline{G}$.
Chmielowski \cite{Chm} had established in 1980 that $H_e(G) \cap A^\infty (G) \ne \emptyset$ if $G$ is regular; see also \cite{Siciak}. \vskip .15cm In Section 2 we will recall some lineability notions and review the main known results about the algebraic structure of $H_e(G)$ in $H(G)$ and in subspaces of it, including the infinite dimensional case. Section 3 contains our new statements on the linear structure --in its diverse degrees-- of $H_e(G)$, with special emphasis on the class of boundary-regular holomorphic functions. \section{Lineability notions and known results} \quad When a set $A$ lives in a bigger set $X$ endowed with some structure (vector space, topological vector space, algebra), an alternative way to measure the size of $A$ involves finding large sub-structures within $A$. A number of concepts have been coined in order to describe the algebraic size of a set, see \cite{AGS,Bay1,Ber2,GuQ} (see also \cite{BPS} for an account of lineability properties of specific subsets of vector spaces). Namely, if $X$ is a vector space, $\alpha$ is a cardinal number and $A \subset X$, then $A$ is said to be: \begin{enumerate} \item[$\bullet$] {\it lineable} if there is an infinite dimensional vector space $M$ such that $M \setminus \{0\} \subset A$, \item[$\bullet$] {\it $\alpha$-lineable} if there exists a vector space $M$ with dim$(M) = \alpha$ and $M \setminus \{0\} \subset A$ (hence lineability means $\aleph_0$-lineability, where $\aleph_0 = {\rm card}\,(\mathbb N )$, the cardinality of the set of positive integers), and \item[$\bullet$] {\it maximal lineable} in $X$ if $A$ is ${\rm dim}\,(X)$-lineable.
\end{enumerate} If, in addition, $X$ is a topological vector space, then $A$ is said to be: \begin{enumerate} \item[$\bullet$] {\it dense-lineable} or {\it algebraically generic} in $X$ whenever there is a dense vector subspace $M$ of $X$ satisfying $M \setminus \{0\} \subset A$ (hence dense-lineability implies lineability as soon as dim$(X) = \infty$), \item[$\bullet$] {\it maximal dense-lineable} in $X$ whenever there is a dense vector subspace $M$ of $X$ satisfying $M \setminus \{0\} \subset A$ and dim$\,(M) =$ dim$\,(X)$, and \item[$\bullet$] {\it spaceable} in $X$ if there is a closed infinite dimensional vector subspace $M$ such that $M \setminus \{0\} \subset A$ (hence spaceability implies lineability). \end{enumerate} And, according to \cite{APS,BarG}, when $X$ is a topological vector space contained in some (linear) algebra then $A$ is called: \begin{enumerate} \item[$\bullet$] {\it algebrable} if there is an algebra \,$M$ so that $M \setminus \{0\} \subset A$ and $M$ is infinitely generated, that is, the cardinality of any system of generators of \,$M$ is infinite. \item[$\bullet$] {\it densely algebrable} in $X$ if, in addition, $M$ can be taken dense in $X$. \item[$\bullet$] {\it $\alpha$-algebrable} if there is an $\alpha$-generated algebra \,$M$ with \,$M \setminus \{0\} \subset A$. \item[$\bullet$] {\it strongly $\alpha$-algebrable} if there exists an $\alpha$-generated {\it free} algebra \,$M$ with \,$M \setminus \{0\} \subset A$ (for $\alpha = \aleph_0$, we simply say {\it strongly algebrable}). \item[$\bullet$] {\it densely strongly $\alpha$-algebrable} if, in addition, the free algebra \,$M$ can be taken dense in $X$.
\end{enumerate} Note that if $X$ is contained in a commutative algebra then a set $B \subset X$ is a generating set of some free algebra contained in $A$ if and only if for any $N \in \mathbb N$, any nonzero polynomial $P$ in $N$ variables without constant term and any distinct $f_1,...,f_N \in B$, we have $P(f_1,...,f_N) \ne 0$ and $P(f_1,...,f_N) \in A$. Observe that strong $\alpha$-algebrability $\Rightarrow$ $\alpha$-algebrability $\Rightarrow$ $\alpha$-lineability, and none of these implications can be reversed, see \cite[p.~74]{BPS}. \vskip .15cm The next dense-lineability criterion, which can be found in \cite[Theorem 2.3]{BerOrd} and is an extension of statements from \cite{AGPS,Ber,Ber2}, will be used later. \begin{theorem}\label{maximal dense-lineable} Assume that \,$X$ is a topological vector space. Let $A \subset X$ be an $\alpha$-lineable subset. Suppose that there exists a dense-lineable subset $B \subset X$ such that $A + B \subset A$, and that $X$ has an open basis \,$\cal B$ for its to\-po\-lo\-gy such that ${\rm card} ({\cal B}) \le \alpha$. Then $A$ is dense-lineable and if, in addition, $A \cap B = \emptyset$, then $A \cup \{0\}$ contains a dense vector space $D$ with \,${\rm dim} (D) = \alpha$. \end{theorem} As for spaceability, we provide the following statement that is ascribed by Kitson and Timoney \cite{KitT} to Kalton. It is, in turn, a Fr\'echet version of an earlier result by Wilansky \cite{Wil} given in the Banach setting. Theorem \ref{Wilansky-Kalton} will be needed in the proof of Theorem \ref{H_e(G)Ainfty spaceable} below. \begin{theorem} \label{Wilansky-Kalton} If $X$ is a Fr\'echet space and $Y \subset X$ is a closed linear subspace, then the complement $X \setminus Y$ is spaceable if and only if \,$Y$ has infinite codimension.
\end{theorem} Concerning algebrability, the following criterion is given in \cite[Proposition 7]{BalBF} (see also \cite[Theorem 1.5]{BBFG}) for a family of functions ${\cal F} \subset \mathbb R^{[0,1]}$. By mimicking its proof, in Proposition \ref{exponentials} below we provide an extension to the case ${\cal F} \subset \mathbb C^\Omega$, which will be needed in Section 3. By $\cal E$ we denote the family of exponential-like functions $\mathbb C \to \mathbb C$, that is, the functions of the form \,$\varphi (z) = \sum_{j=1}^m a_j e^{b_j z}$ \,for some $m \in \mathbb N$, some $a_1,...,a_m \in \mathbb C \setminus \{0\}$ and some distinct $b_1,...,b_m \in \mathbb C \setminus \{0\}$. As usual, $\mathfrak{c}$ will stand for the cardinality of the continuum. \begin{proposition} \label{exponentials} Let \,$\Omega$ be a nonempty set and ${\cal F} \subset \mathbb C^\Omega$. Assume that there exists a function \,$f \in {\cal F}$ such that \,$f(\Omega )$ is uncountable and \,$\varphi \circ f \in {\cal F}$ \,for every $\varphi \in {\cal E}$. Then \,${\cal F}$ is strongly $\mathfrak{c}$-algebrable. More precisely, if $H \subset (0,+\infty )$ is a set with \,{\rm card}$(H) = \mathfrak{c}$ \,and linearly independent over the field $\mathbb Q$ of rational numbers, then $$ \{\exp \circ (rf): \, r \in H\} $$ is a free system of generators of an algebra contained in \,${\cal F} \cup \{0\}$. \end{proposition} \begin{proof} Firstly, each function \,$\varphi (z) = \sum_{j=1}^m a_j e^{b_j z}$ in ${\cal E}$ (with $a_1, \dots ,b_m \in \mathbb C \setminus \{0\}$ and $b_1,...,b_m$ distinct) has at most countably many zeros. Indeed, we can assume $|b_1| = \cdots = |b_p| > |b_j|$ $(j=p+1,...,m)$. Then $b_j = |b_1| e^{i \theta_j}$ $(j=1,...,p)$ with $|\theta_j - \theta_1| \in (0,\pi ]$ for $j=2,...,p$ (so $c_j := \cos (\theta_j - \theta_1) < 1$ for $j=2,...,p$).
Hence we have for all $r > 0$ that $$ |\varphi (re^{-i \theta_1})| \ge |a_1| e^{|b_1|r} - \sum_{j=2}^p |a_j| e^{|b_1|c_j r} - \sum_{j=p+1}^m |a_j| e^{|b_j|r} \longrightarrow +\infty \,\,\, \hbox{as} \,\, r \to +\infty . $$ Therefore $\varphi$ is a nonconstant entire function, so $\varphi^{-1}(\{0\})$ is countable. Now, consider a nonzero polynomial $P$ in $N$ complex variables without constant term, as well as numbers $r_1, \dots ,r_N \in H$. The function $\Phi : \Omega \to \mathbb C$ given by $\Phi = P(\exp \circ (r_1f), \dots ,\exp \circ (r_Nf))$ is of the form $$ \sum_{j=1}^m a_j (e^{r_1f(x)})^{k(j,1)} \cdots (e^{r_Nf(x)})^{k(j,N)} = \sum_{j=1}^m a_j \exp \left(f(x) \sum_{l=1}^N r_l k(j,l)\right), $$ where $a_1, \dots ,a_m \in \mathbb C \setminus \{0\}$ and the matrix $[k(j,l)]_{j=1,...,m \atop l=1,...,N}$ of nonnegative integers has distinct nonzero rows. Thus, the numbers $b_j := \sum_{l=1}^N r_l k(j,l)$ $(j=1,...,m)$ are distinct and nonzero; hence the function \,$\varphi (z) := \sum_{j=1}^m a_j e^{b_j z}$ \,belongs to \,$\cal E$. But $\Phi = \varphi \circ f$, so if $\Phi \equiv 0$ then we would have $\varphi |_{f(\Omega )} \equiv 0$, which contradicts the fact that $f(\Omega )$ is uncountable. Consequently, $\Phi \ne 0$ and, by hypothesis, $\Phi \in {\cal F}$. This proves the proposition. \end{proof} Turning to our setting of non-extendable holomorphic functions, and using the previous terminology, Aron, Garc\'{\i}a and Maestre \cite{AGM} proved in 2001 the following. \begin{theorem} \label{AronGarciaMaestre} Assume that $N \in \mathbb N$ and $G \subset \mathbb C^N$ is a domain of existence. Then $H_e(G)$ is dense-lineable, spaceable and algebrable. In fact, there is a closed infinitely generated algebra contained in $H_e(G)$.
\end{theorem} Recall that a subset $A$ of a locally convex space $E$ is said to be {\it sum-absorbing} whenever there is $\lambda > 0$ such that $\lambda (A + A) \subset A$, and $E$ is called {\it nearly-Baire} if, given a sequence $(A_j)$ of sum-absorbing balanced closed subsets with $E = \bigcup_{j=1}^\infty A_j$, there is $j_0$ such that $A_{j_0}$ is a neighborhood of $0$. In 2008, Valdivia \cite{Val} showed that the dense subspace contained in $H_e(G) \cup \{0\}$ can be chosen to be nearly-Baire for any domain of existence $G \subset \mathbb C^N$. In the case $N=1$, the author had demonstrated in 2006 \cite{BerB} that $H_e(G)$ is {\it maximal dense-lineable} in $H(G)$ for any domain $G \subset \mathbb C$. \vskip .15cm In the special case $G = \mathbb D$, Aron {\it et al.}~\cite{AGM} considered the nonseparable Banach space $H^\infty := \{f \in H(\mathbb D ): \, f$ is bounded on $\mathbb D\}$, endowed with the supremum norm, and showed that $H_e(\mathbb D ) \cap H^\infty$ contains, except for zero, an infinitely generated algebra that is nonseparable and closed in $H^\infty$. The author \cite{BerS} obtained in 2005 that, under appropriate conditions on a function space $X \subset H(G)$, the set $H_e(\mathbb D) \cap X$ is dense-lineable or spaceable in $X$. In particular, the families $H_e(\mathbb D ) \cap H^p , \, H_e(\mathbb D ) \cap B^p$ $(p>0)$ and $H_e(\mathbb D ) \cap A^\infty (\mathbb D )$ turn out to be dense-lineable as well as spaceable in $H^p, \, B^p$ and $A^\infty (\mathbb D )$, respectively. \vskip .15cm We say that a domain $G \subset \mathbb C$ is {\it finite-length} provided that there is $M \in (0,+\infty )$ such that for any pair $a,b \in G$ there exists a curve $\gamma \subset G$ joining \,$a$ \,to \,$b$ \,for which ${\rm length} (\gamma ) \le M$. 
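Going back to Proposition \ref{exponentials}: its freeness argument rests on the exponents $b_j = \sum_l r_l \, k(j,l)$ being pairwise distinct whenever the $r_l$ are linearly independent over $\mathbb Q$. Here is a small numerical sanity check (ours, not from the paper), taking $r_l = \sqrt{p_l}$ for distinct primes $p_l$, a classical $\mathbb Q$-linearly independent family.

```python
import itertools
import math

def exponents(rs, max_power):
    """All sums b = sum_l r_l * k_l over nonzero integer vectors k with
    0 <= k_l <= max_power; Q-linear independence of the r_l guarantees
    these sums are pairwise distinct, which is what the freeness proof needs."""
    bs = []
    for k in itertools.product(range(max_power + 1), repeat=len(rs)):
        if any(k):
            bs.append(sum(r * e for r, e in zip(rs, k)))
    return bs

# square roots of distinct primes are linearly independent over Q
rs = [math.sqrt(p) for p in (2, 3, 5)]
bs = sorted(exponents(rs, 3))
```

All $4^3 - 1 = 63$ exponent sums come out pairwise distinct, with a comfortable minimum gap.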
In 2008, Bernal {\it et al.}~\cite{BerCL} established the following assertion, showing that regularity plus appropriate metrical and topological conditions assure dense-lineability for $H_e(G)$ in $A^\infty (G)$. \begin{theorem} \label{Bernal-MCCM-Luh} Let \,$G \subset \mathbb C$ be a finite-length regular domain such that \,$\mathbb C \setminus \overline{G}$ \,is connected. Then \,$H_e(G) \cap A^\infty (G)$ \,is dense-lineable in \,$A^\infty (G)$. \end{theorem} And Valdivia \cite{Val2} obtained in 2009 the following more precise result for the bigger class \,$H_{we}(G)$ \,under less restrictive assumptions. \begin{theorem} \label{Valdivia} Let $G \subset \mathbb C$ be a regular domain. Then there exists a nearly-Baire dense subspace \,$M$ \,of \,$A^\infty (G)$ \,such that \,$M \setminus \{0\} \subset H_{we}(G)$. \end{theorem} The same result still holds if we replace \,$A^\infty (G)$ \,by the smaller space $$ A^\infty_b(G) := \{f \in A^\infty (G): \hbox{ each } f^{(j)} \,\, (j=0,1, \dots ) \,\hbox{ is bounded on } G\}, $$ see \cite{Val2}. Note that, as a consequence of Theorem \ref{Valdivia}, we obtain the following. \begin{corollary} If \,$G \subset \mathbb C$ \,is a Jordan domain then the family \,$H_e(G) \cap A^\infty (G)$ \,is dense-lineable. \end{corollary} We have, in addition, the next theorem, whose parts (a) and (b) are shown in \cite{BerCL}, while part (c) is proved in \cite{BerOrd} by using Theorem \ref{maximal dense-lineable} above. \begin{theorem} \label{Jordan-X-finitelength} Assume that $G \subset \mathbb C$ is a domain. We have: \begin{enumerate} \item[\rm (a)] If \,$G$ is a Jordan domain with analytic boundary then $H_e(G) \cap A^\infty (G)$ is spaceable in $A^\infty (G)$.
\item[\rm (b)] If \,$\partial G$ does not contain isolated points, $X \subset H(G)$ is a vector space with \,$H_e(G) \cap X \ne \emptyset$ \,and there is a nonconstant function \,$\varphi$ \,holomorphic on some domain \,$\Omega \supset \overline{G}$ \,such that \,$\varphi X \subset X$, then \,$H_e(G) \cap X$ \,is li\-ne\-a\-ble. \item[\rm (c)] If \,$G$ \,is a regular finite-length domain such that \,$\mathbb C \setminus \overline{G}$ \,is connected then \,$H_e(G) \cap A^\infty (G)$ \,is maximal dense-lineable in \,$A^\infty (G)$. \end{enumerate} \end{theorem} Finally, in the infinite dimensional setting, Alves \cite{Alv} has recently proved the following assertion. \begin{theorem} \label{Alves} Suppose that \,$G$ is a domain of existence of a separable complex Banach space $E$. Then \,$H_e(G)$ is $\mathfrak{c}$-lineable and algebrable. \end{theorem} In particular, $H_e(G)$ is maximal lineable in $H(G)$. The proof in \cite{Alv} yields in fact the strong algebrability of \,$H_e(G)$. \section{Main results} \quad We start with a theorem that, in the case $E = \mathbb C$, complements Theorems \ref{AronGarciaMaestre} and \ref{Alves}. The following algebraic concept is in order. Let $p \in \mathbb N$ and consider the {\it lexicographical order} ``$\le$'' on $\mathbb N_0^p$ defined by: $M := (m_1,...,m_p) \le (j_1,...,j_p) =: J$ if and only if $M=J$ or $M < J$; and $M < J$ if and only if there is $s \in \{1,...,p\}$ such that $m_k = j_k$ for $k \le s-1$ and $m_s < j_s$. Since it is a total order on $\mathbb N_0^p$, every nonempty finite subset $S \subset \mathbb N_0^p$ reaches a maximum $R = (r_1,...,r_p)$. Denote by ${\cal P}_{p,0}$ the family of all nonzero polynomials of $p$ complex variables without constant term. Given $P \in {\cal P}_{p,0}$, there is a nonempty finite set $S \subset \mathbb N_0^p \setminus \{(0,...,0)\}$ such that $P(z_1, \dots ,z_p) = \sum_{J \in S} c_J z_1^{j_1} \cdots z_p^{j_p}$ and $c_J \in \mathbb C \setminus \{0\}$ for all $J \in S$.
If $R = \max S$ then we say that $c_R z_1^{r_1} \cdots z_p^{r_p}$ is the {\it dominant monomial} of $P$. \vskip .15cm As usual, the Euclidean open ball with center $a \in \mathbb C$ and radius $r > 0$ will be denoted by $B(a,r)$. Moreover, $C(A)$ will stand for the set of all continuous functions $A \to \mathbb C$, where $A \subset \mathbb C$. \begin{theorem} \label{H_e(G)densely strongly c-algebrable} For any domain $G \subset \mathbb C$, the set $H_e(G)$ is densely strongly $\mathfrak{c}$-algebrable in $H(G)$. \end{theorem} \begin{proof} We need some topological preparation before constructing an adequate free algebra. We denote by $G_*$ the one-point compactification of $G$. Recall that in $G_*$ the whole boundary $\partial G$ collapses to a unique point, say $\omega$. Let us fix an increasing sequence $\{K_N:\, N \in \mathbb N\}$ of compact subsets of $G$ such that each compact subset of $G$ is contained in some $K_N$ and each connected component of the complement of every $K_N$ contains some connected component of the complement of $G$, see \cite[Chapter 7]{Conway}. Choose a countable dense subset $\{g_N: \, N \in \mathbb N\}$ of the (separable) space $H(G)$. \vskip .15cm Select also a sequence $\{a_n: \, n \in \mathbb N\}$ of distinct points of $G$ such that it has no accumulation point in $G$ and each {\it prime end} (see \cite[Chapter 9]{ColLow}) of $\partial G$ is an accumulation point of the sequence. More precisely, the sequence $\{a_n\}_{n \ge 1}$ should have the following property: for every $a \in G$ and every $r >$ dist$(a,\partial G)$, the intersection of $\{a_n\}_{n \ge 1}$ with the connected component of $B(a,r) \cap G$ containing $a$ is infinite. An example of the required sequence may be defined as follows. Let $A = \{\alpha_k\}_{k \ge 1}$ be a dense countable subset of $G$. For each $k \in \mathbb N$ choose $b_k \in \partial G$ such that $|b_k - \alpha_k| =$ dist$(\alpha_k,\partial G)$.
For every $k \in \mathbb N$ let $\{a_{k,l}: \, l \in \mathbb N\}$ be a sequence of points of the line interval joining $\alpha_k$ with the corresponding point $b_k$ such that $|a_{k,l} - b_k| < 1/(k + l)$ $(k,l \in \mathbb N )$. Any sequence $\{a_n\}$ without repetitions consisting of all the distinct points of the set $\{a_{k,l}: \, k,l \in \mathbb N\}$ has the required property. \vskip .15cm Fix $N \in \mathbb N$. For the set $A_N := K_N \cup \{a_n: \, n \in \mathbb N\}$ we have: \begin{enumerate} \item[$\bullet$] The set $A_N$ is closed in $G$ because the set $\{a_n: \, n \in \mathbb N\}$ does not cluster in $G$. \item[$\bullet$] The set $G_* \setminus A_N$ is connected due to the shape of $K_N$ (recall that in $G_*$ the whole boundary $\partial G$ collapses to $\omega$) and to the denumerability of $\{a_n: \, n \in \mathbb N\}$. \item[$\bullet$] The set $G_* \setminus A_N$ is locally connected at $\omega$, again by the denu\-me\-ra\-bility of $\{a_n: \, n \in \mathbb N\}$ and by the fact that one can suppose that neighborhoods of \,$\omega$ \,do not intersect $K_N$. \end{enumerate} In other words, each $A_N$ is an Arakelian subset of $G$, see \cite{Gaier1}. Now, we define a family of functions $B = \{f_\alpha : \, \alpha \ge 1\} \subset H(G)$ as follows. If $\alpha \not\in \mathbb N$, by the Weierstrass interpolation theorem one can select $f_\alpha \in H(G)$ such that $$f_\alpha (a_n) = e^{n^\alpha} \,\,\, (n=1,2, \dots ),$$ because $\{a_n\}_{n \ge 1}$ lacks accumulation points in $G$ (see e.g.~\cite[Chapter 13]{Rudin}). Let $\alpha = N \in \mathbb N$. Consider the function $h_N:A_N \to \mathbb C$ given by $$ h_N(z) = \left\{ \begin{array}{ll} g_N(z) & \mbox{if } z \in K_N \\ e^{n^N} & \mbox{if } z = a_n \mbox{ and } a_n \not\in K_N. \end{array} \right. $$ Note that $h_N \in C(A_N) \cap H(A_N^0)$, $A_N$ is Arakelian and $a_n \in A_N \setminus \overline{A_N^0}$ whenever \,$a_n \not\in K_N$.
Under these conditions, a remarkable approximation-interpolation result due to Gauthier and Hengartner \cite[Theorem and remark 2 in page 702]{GauH} asserts the existence of a function $f_N \in H(G)$ such that $$|f_N(z) - h_N(z)| < {1 \over N} \, \hbox{ for all } z \in A_N, \hbox{ and}$$ $$f_N(a_n) = h_N(a_n) \, \hbox{ for all } n \in \mathbb N \, \hbox{ with } a_n \not\in K_N.$$ In particular, $$|f_N(z) - g_N(z)| < 1/N \quad (z \in K_N) \eqno (1)$$ and $f_N(a_n) = e^{n^N}$ provided that $a_n \notin K_N$. Denote by $\cal A$ the algebra ge\-ne\-ra\-ted by $B$. Since each compact set $K \subset G$ is contained in all $K_N$'s (except for a finite number of them) and, from (1), we have $\sup_{z \in K} |f_N(z)-g_N(z)| \to 0$ as $N \to \infty$, the density of $\{g_N\}_{N \ge 1}$ forces $\{f_N\}_{N \ge 1}$ to be dense, so $\cal A$ is dense. \vskip .15cm Finally, we prove that \,$\cal A$ \,is freely $\mathfrak{c}$-generated and \,${\cal A} \setminus \{0\} \subset H_e(G)$. For this, observe that, of course, card$\,[1,+\infty ) = \mathfrak{c}$, and fix $p \in \mathbb N$ as well as a polynomial $$ P(z_1, \dots ,z_p) = \sum_{J \in S} c_J z_1^{j_1} \cdots z_p^{j_p} \in {\cal P}_{p,0}, $$ with its shape described at the beginning of this section. Let $c_R z_1^{r_1} \cdots z_p^{r_p}$ be its dominant monomial. Also, let $\alpha_1, \dots ,\alpha_p$ be different numbers of $[1,+\infty )$. We can assume $\alpha_1 > \alpha_2 > \cdots > \alpha_p$. Suppose that $P(f_{\alpha_1}, \dots ,f_{\alpha_p}) \equiv 0$. If $N = [\alpha_1]$ (the integer part of $\alpha_1$) then, since $K_N$ is compact in $G$ and $\{a_n\}_{n \ge 1}$ lacks accumulation points in $G$, there is $n_0 \in \mathbb N$ for which $a_n \notin K_N$ whenever $n \ge n_0$. Hence $f_{\alpha_j}(a_n) = e^{n^{\alpha_j}}$ for all $j \in \{1,...,p\}$ and all $n \ge n_0$.
Now, observe that, for $n \ge n_0$, $P(f_{\alpha_1}(a_n), \dots ,f_{\alpha_p}(a_n))$ is a sum of one term of the form \,$D_n = c_R e^{r_1 n^{\alpha_1} + \cdots + r_p n^{\alpha_p}}$ and finitely many terms of the form \,$E_n = c_J e^{j_1 n^{\alpha_1} + \cdots + j_p n^{\alpha_p}}$. The definition of dominant monomial and the assumption $\alpha_1 > \alpha_2 > \cdots > \alpha_p$ \,yield \,$D_n \to +\infty$ \,and \,$E_n/D_n \to 0$ as $n \to \infty$, from which one derives $$ |P(f_{\alpha_1}(a_n), \dots ,f_{\alpha_p}(a_n))| \to +\infty \,\,\,\, (n \to \infty ), $$ which contradicts $P(f_{\alpha_1}, \dots ,f_{\alpha_p}) \equiv 0$. Hence $F := P(f_{\alpha_1}, \dots ,f_{\alpha_p}) \not \equiv 0$, which shows that \,$\cal A$ \,is freely generated. \vskip .15cm Our remaining task is to demonstrate that $F \in H_e(G)$. Recall that $|F(a_n)| \to +\infty$ as $n \to \infty$. Assume, by way of contradiction, that $F \notin H_e(G)$. Then there would exist some point $a \in G$ such that $\rho (F,a) > {\rm dist} (a,\partial G)$. Choose $r$ with dist$(a,\partial G) < r < \rho (F,a)$. By the construction of $\{a_n: \, n \in \mathbb N\}$, we can select a sequence $\{n_1 < n_2 < \cdots \} \subset \mathbb N$ for which $a_{n_k} \in G \cap B(a,r)$ $(k \in \mathbb N )$. On the other hand, the sum $S(z)$ of the Taylor series of $F$ with center $a$ is bounded on $B(a,r)$. But $S = F$ on $G \cap B(a,r)$, so $S(a_{n_k}) = F(a_{n_k})$ $(k=1,2,...)$, which is absurd because $|F(a_{n_k})| \to +\infty$ as $k \to \infty$. The proof is finished. \end{proof} Next, we extend parts (a) and (c) of Theorem \ref{Jordan-X-finitelength} by showing that the regularity of the domain is enough to reach the same conclusions for the bigger class $H_{we}(G)$ (or for the same class $H_e(G)$ if a little more is assumed). This will be carried out in Theorems \ref{H_e(G)Ainfty maximal dense-lineable} and \ref{H_e(G)Ainfty spaceable}. We assume the Continuum Hypothesis (CH) in the following result.
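The dominance step $E_n/D_n \to 0$ in the preceding proof can be observed numerically, provided one works in log space (the values $e^{n^{\alpha}}$ overflow any floating-point format almost immediately). The exponent vectors below are sample choices of ours, not taken from the paper.

```python
import math

def log_term(powers, alphas, n):
    """log of |prod_k (e^{n^{alpha_k}})^{j_k}| = sum_k j_k * n^{alpha_k};
    we stay in log space since e^{n^alpha} itself overflows for small n."""
    return sum(j * n ** a for j, a in zip(powers, alphas))

alphas = (2.0, 1.5)                 # alpha_1 > alpha_2, as in the proof
R = (2, 1)                          # exponent vector of the dominant monomial
others = [(2, 0), (1, 3), (0, 5)]   # lexicographically smaller monomials

def gaps(n):
    # log(E_n / D_n) for each non-dominant term; the coefficients c_J, c_R
    # are omitted, as they only contribute an n-independent constant
    return [log_term(J, alphas, n) - log_term(R, alphas, n) for J in others]
```

Note that $(1,3)$ has larger total degree than $R = (2,1)$, yet it is lexicographically smaller and its term is still swamped: the gap $\log(E_n/D_n)$ is negative and decreasing in $n$.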
\begin{theorem} \label{H_e(G)Ainfty maximal dense-lineable} \begin{enumerate} \item[\rm (a)] For any regular domain $G \subset \mathbb C$, the set \,$H_{we}(G) \cap A^\infty (G)$ \,is maximal dense-lineable in $A^\infty (G)$. \item[\rm (b)] For any Jordan domain $G \subset \mathbb C$, the set \,$H_e(G) \cap A^\infty (G)$ \,is maximal dense-lineable in $A^\infty (G)$. \end{enumerate} \end{theorem} \begin{proof} Since (b) is a particular case of (a) (recall that every Jordan domain is regular and that $H_e(G) = H_{we}(G)$ for Jordan domains), it suffices to prove (a). To this end, and according to Theorem \ref{Valdivia}, we select a nearly-Baire dense subspace \,$M \subset A^\infty (G)$ \,such that $M \setminus \{0\} \subset H_{we}(G)$. Since dim$(A^\infty (G))$ $= \mathfrak{c}$, it is enough to prove that every infinite dimensional nearly-Baire topological vector space $M$ must satisfy dim$(M) > \aleph_0$. To do this, assume, by way of contradiction, that dim$(M) = \aleph_0$. Then there would exist a sequence of vector subspaces $$ X_1 \subset X_2 \subset \cdots \subset X_n \subset \cdots $$ such that dim$(X_n) = n$ $(n=1,2,...)$ and $M = \bigcup_{n=1}^\infty X_n$. As dim$(X_n) < \infty$, we have that each $X_n$ is closed and, trivially, balanced and sum-absorbing. Consequently, some $X_{m}$ is a neighborhood of $0$, hence $M = X_{m}$, which is absurd because dim$(M) = \infty$. \end{proof} In order to face spaceability, the following two assertions (an elementary topological lemma and a deep interpolation result by Valdivia \cite{Val0,Val2}, resp.) will be needed. Recall that a topological space is called {\it perfect} whenever it lacks isolated points. \begin{lemma} \label{LemmaT1perfect} Assume that \,$X$ is a $T_1$, perfect, second countable topological space. Then from each dense sequence \,$\{x_n\}_{n \ge 1}$ in \,$X$ one can extract infinitely many sequences \,$\{x_{n(k,j)}\}_{j \ge 1}$ $(k=1,2, \dots )$ \,such that each of them consists of different points and is dense in \,$X$, and they are pairwise disjoint.
\end{lemma} \begin{proof} Every nonempty open subset $U$ of $X$ is infinite. Indeed, if $U$ were finite, say $U = \{y_1, \dots ,y_p\}$ (with the $y_i$'s different), then $V := U \setminus \{y_1\} = \{y_2, \dots , y_p\}$ would be open (because any singleton $\{y\}$ is closed, since $X$ is $T_1$). Therefore $V \setminus \{y_2\} = \{y_3, \dots ,y_p\}$ is open, and continuing this process we get after a finite number of steps that $\{y_p\}$ is open, which is absurd because $X$ is perfect. Since $X$ is second countable, there exists a countable open basis $\{U_n\}_{n \ge 1}$. Hence each member $U_n$ is infinite. \vskip .15cm Consider the following strict well-order in $\mathbb N \times \mathbb N$: we say that $(l,s) < (k,j)$ if and only if either $l+s < k+j$ or $l+s=k+j$ but $l<k$. Then $(1,1)$ is the least element of $\mathbb N \times \mathbb N$ and we have $(1,1) < (1,2) < (2,1) < (1,3) < (2,2) < (3,1) < (1,4) < \cdots$. Since $\{x_n\}_{n \ge 1}$ is dense, one can find $n(1,1) \in \mathbb N$ with $x_{n(1,1)} \in U_1$. Now, since $U_n \setminus F$ is open and nonempty for every $n$ and every finite set $F \subset X$, we may select, for each $(k,j) > (1,1)$, an element $x_{n(k,j)} \in U_j \setminus \{x_{n(l,s)}: \, (l,s) < (k,j)\}$. It is then plain that the sequences $\{x_{n(k,j)}\}_{j \ge 1}$ $(k=1,2, \dots )$ are dense in $X$ and pairwise disjoint, and each of them consists of different points. \end{proof} \begin{theorem} \label{Valdivia-2} Let $G \subset \mathbb C$ be a regular domain. Then there is a dense subset $\{z_j: \, j \in \mathbb N\}$ in $\partial G$ consisting of different points such that, for any of its arbitrary subsets $\{u_j: \, j \in \mathbb N\}$ and any infinite dimensional triangular matrix $$ [a_{n+1,j}]_{j \ge n; \, n \in \mathbb N_0} $$ of complex numbers, there is a function $f \in A_b^\infty (G)$ such that $$ f^{(j)}(u_{n+1}) = a_{n+1,j} \quad (j \ge n; \, n \in \mathbb N_0).
$$ \end{theorem} Let \,$\cal P$ \,denote the set of all polynomials in $z$. Of course, ${\cal P} \subset A^\infty (G)$. \begin{theorem} \label{H_e(G)Ainfty spaceable} \begin{itemize} \item[\rm (a)] For any regular domain $G \subset \mathbb C$, the set \,$H_{we}(G) \cap A^\infty (G)$ \,is spaceable in $A^\infty (G)$. \item[\rm (b)] For any Jordan domain $G \subset \mathbb C$, the set \,$H_e(G) \cap A^\infty (G)$ \,is spaceable in $A^\infty (G)$. \end{itemize} \end{theorem} \begin{proof} Again, it is enough to demonstrate (a). Consider the sequence $\{z_j: \, j \in \mathbb N\} \subset \partial G$ whose existence is guaranteed by Theorem \ref{Valdivia-2}. By Lemma \ref{LemmaT1perfect}, we can extract pairwise disjoint sequences $\{z_{n(k,j)}\}_{j \ge 1}$ $(k = 1,2, \dots )$ such that each of them is still dense in $\partial G$. Let $0! \cdot 0^0 := 1$. According to Theorem \ref{Valdivia-2}, there exist functions $f_k \in A^\infty (G)$ $(k = 1,2, \dots )$ such that $$ f_k^{(j)}(z_{n(k,l)}) = j!j^j \hbox{ \ for all \ } j \ge n(k,l)-1 \,\,\, (l \in \mathbb N ) \, \hbox{ and } \eqno (2) $$ $$ f_k^{(j)}(z_{n(s,l)}) = 0 \hbox{ \ for all \ } s \ne k \hbox{ \ and all \ } j \ge n(s,l)-1 \,\,\,(l \in \mathbb N ). \eqno (3) $$ We have that the functions $f_k$ $(k \ge 1)$ are linearly independent. Indeed, if they were linearly dependent, there would be $p \in \mathbb N$ as well as scalars $c_1, \dots ,c_p$ such that $c_p \ne 0$ and $F := \sum_{k=1}^p c_k f_k = 0$ on $\overline{G}$. Therefore $\sum_{k=1}^p c_k f_k^{(j)} = 0$ on $\partial G$ for every $j \ge 0$. In particular, if $N := n(p,1)$, we have by (2) and (3) that $$ 0 = \sum_{k=1}^p c_k f_k^{(N)} (z_N) = 0 + c_p f_p^{(N)} (z_N) = c_p \, N!N^N, $$ which is a contradiction. \vskip .15cm Let us define $$ M := \hbox{span}\, \{f_k : \, k=1,2, \dots \}. $$ Plainly, $M$ is an infinite dimensional vector subspace of $A^\infty (G)$. Fix $F \in M \setminus \{0\}$.
Then $F$ can be written as in the preceding paragraph, $F = \sum_{k=1}^p c_k f_k$, with $c_p \ne 0$. In particular, by (2) and (3), we get \, $$ F^{(j)}(z_{n(p,l)}) = c_p f_p^{(j)} (z_{n(p,l)}) = c_p \, j!j^j \eqno (4) $$ for every $l \in \mathbb N$ and every $j \ge n(p,l)$. Then for the radius of convergence of the associated Taylor series we have $$ \rho (F,z_{n(p,l)}) = \left[ \limsup_{j \to \infty} \left| {F^{(j)}(z_{n(p,l)}) \over j!} \right|^{1/j} \right]^{-1} = 0 \hbox{ \ for all \ } l \in \mathbb N . \eqno (5) $$ If $F \notin H_{we}(G)$, there would be an open ball $B$ with $B \cap \partial G \ne \emptyset$ such that $F$ extends holomorphically to $B \cup G$. Due to the density of $\{z_{n(p,l)}\}_{l \ge 1}$, one can select $l \in \mathbb N$ with $z_{n(p,l)} \in B$, which is impossible by (5). Thus $M \setminus \{0\} \subset H_{we}(G) \cap A^\infty (G)$. \vskip .15cm Now, consider the space $$ X := \overline{M} = \overline{\hbox{span}} \, \{f_k : \, k=1,2, \dots \}, $$ where the closure is taken in $A^\infty (G)$. Since $A^\infty (G)$ is a Fr\'echet space, we get that $X$ is a Fr\'echet space under the inherited topology. Suppose that $F \in X$. Then there exists a sequence $\{F_\nu = \sum_{k=1}^\infty \lambda_{\nu ,k} f_k\}_{\nu \ge 1} \subset M$ such that, for every $j \in \mathbb N_0$, $F_{\nu}^{(j)} \to F^{(j)}$ $(\nu \to \infty )$ uniformly on every compact subset of \,$\overline{G}$; here the coefficients $\lambda_{\nu,k}$ are complex numbers such that, for every $\nu$, $\lambda_{\nu ,k} = 0$ provided that $k$ is large enough. Therefore \,$\sum_{k=1}^\infty \lambda_{\nu ,k} f_k^{(j)} (z_{n(m,l)}) \to F^{(j)}(z_{n(m,l)})$ as $\nu \to \infty$, for all $j \in \mathbb N_0$ and all $m,l \in \mathbb N$. From (2) and (3), we get $$ \lambda_{\nu ,m} j!j^j \to F^{(j)}(z_{n(m,l)}) \,\,\, (\nu \to \infty ) \,\,\, \hbox{for all } m,l \in \mathbb N \hbox{ and all } j \ge n(m,l). $$ Then $\lambda_{\nu ,m} \to F^{(j)}(z_{n(m,l)})/(j!\,j^j)$ as $\nu \to \infty$.
Now the uniqueness of the limit leads us to the existence, for each $m \in \mathbb N$, of a constant $K_m \in \mathbb C$ such that $$ F^{(j)}(z_{n(m,l)})= K_m \, j!j^j \, \hbox{ provided that } j \ge n(m,l). \eqno (6) $$ Assume now that $F \in X \setminus H_{we}(G)$. Then we have again at our disposal an open ball $B$ with $B \cap \partial G \ne \emptyset$ such that $F$ extends holomorphically to $B \cup G$. Fix any $m \in \mathbb N$. If $K_m \ne 0$ then (6) entails that $\rho (F,z_{n(m,l)}) = 0$ for each $l$, which is impossible because, by density, there is $l$ with $z_{n(m,l)} \in B$. Consequently, $K_m = 0$ and, again by the density of $\{z_{n(m,l)}\}_{l \ge 1}$, (6) implies that $F^{(j)}(w) = 0$ (if $j$ is large enough) for at least one point $w \in B$. The Identity Principle tells us that $F$ is a polynomial. Hence $X \setminus H_{we}(G) \subset {\cal P}$. Let $Y := \overline{{\cal P} \cap X}$, which is a closed linear subspace of $X$. Observe that $$ H_{we}(G) \cap A^\infty (G) \supset X \setminus {\cal P} = X \setminus ({\cal P} \cap X) \supset X \setminus Y. \eqno (7) $$ Suppose that $F \in {\cal P} \cap X$. Then (6) implies that the corresponding constants $K_m$ are all $0$, so $$ F^{(j)}(z_{n(m,l)})= 0 \,\, \hbox{ for all } j \ge n(m,l) \,\,\,\, (m,l=1,2, \dots ). \eqno (8) $$ If now $F \in Y$ then $F$ is a limit in $A^\infty (G)$ of a sequence of functions each of them satisfying (8). Hence $F$ also satisfies (8), which is inconsistent with (4) if, in addition, $F \in M$, unless $F = 0$. In other words, $Y \cap M = \{0\}$. But $M$ is an infinite dimensional vector space contained in $X$. Thus $Y$ has infinite codimension in $X$. From Wilansky--Kalton's Theorem \ref{Wilansky-Kalton} one derives that $X \setminus Y$ is spaceable in $X$. Since $X$ is closed in $A^\infty (G)$, a subset of $X$ is closed in $X$ if and only if it is closed in $A^\infty (G)$. This fact and (7) entail the desired spaceability of $H_{we}(G) \cap A^\infty (G)$ in $A^\infty (G)$.
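For the reader's convenience, we make explicit the growth estimate used in (5) and again after (6): if $|F^{(j)}(z)| = |K| \, j! \, j^j$ for some constant $K \ne 0$ and all sufficiently large $j$, then
$$
\left| \frac{F^{(j)}(z)}{j!} \right|^{1/j} = |K|^{1/j} \, j \longrightarrow \infty \quad (j \to \infty),
$$
so the Cauchy--Hadamard formula yields $\rho (F,z) = 0$.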
\end{proof} \begin{remark} {\rm Observe that, by using Theorem \ref{Valdivia-2}, the same proof above works to show the spaceability of \,$H_{we}(G) \cap A_b^\infty (G)$ \,in \,$A_b^\infty (G)$, whenever \,$G \subset \mathbb C$ \,is regular.} \end{remark} Algebrability of $H_{we}(G)$ inside $A^\infty (G)$ can also be asserted. This complements the final part of Theorem \ref{AronGarciaMaestre} and will be shown in Theorem \ref{H_e(G)Ainfty algebrable} below, but prior to it we need the next auxiliary statement, which is probably well known. For the sake of completeness, we provide an elementary proof of this statement. We denote by \,$A^1(G)$ \,the space of functions \,$f \in H(G)$ \,such that both $f$ and $f'$ extend continuously to \,$\overline{G}$. \begin{lemma} \label{Lemma-composition} Suppose that $G \subset \mathbb C$ is a domain such that \,$\partial G$ does not contain isolated points. Let \,$f \in H_{we}(G) \cap A^1(G)$ \,and \,$\varphi$ be a nonconstant entire function. Then \,$\varphi \circ f \in H_{we}(G)$. \end{lemma} \begin{proof} Assume, by way of contradiction, that \,$F := \varphi \circ f \not\in H_{we}(G)$. Then there is an open ball $B$ centered at some point $z_0 \in \partial G$ as well as a function $\widetilde{F} \in H(G \cup B)$ such that $\widetilde{F} = F$ on $G$. Let $B_1 \subset B$ be a closed ball centered at $z_0$. If $\widetilde{F}'(z) = 0$ for all $z \in B \cap \partial G$ then, since $\partial G$ is perfect, we would have \,$\widetilde{F}'= 0$ \,on a subset of $B$ having some accumulation point in $B$ (namely, on $B_1 \cap \partial G$). By the Analytic Continuation Principle, $\widetilde{F}'= 0$ on $B$, hence $\widetilde{F}$ is constant on $B$. By the same Principle, and since $\widetilde{F}=F$ on $G$, we get $F$ = constant in $G$. Since $f \in H_{we}(G)$, $f$ is not constant, so $f(G)$ is open by the Open Mapping Theorem for analytic functions.
Therefore $\varphi$ is constant on the open set $f(G)$, and a third application of the Analytic Continuation Principle yields $\varphi$ = constant, which is absurd. Then there must be $z_1 \in B \cap \partial G$ with $\widetilde{F}'(z_1) \ne 0$. From the Local Representation Theorem (see e.g.~\cite{Alf}) we derive the existence of an open ball $B_2 \subset B$ centered at $z_1$ and of a domain $W$ with $W \ni \widetilde{F}(z_1) = \varphi (f(z_1))$ (recall that $f$ extends continuously to $\partial G$) such that $\widetilde{F} : B_2 \to W$ is bijective. In particular, $0 \ne \widetilde{F}'(z_1) = \varphi ' (f(z_1)) f'(z_1)$, where in the last equality the hypothesis $f \in A^1(G)$ has been used. Thus $\varphi ' (f(z_1)) \ne 0$ and, again by the Local Representation Theorem, there are an open ball $B_0$ centered at $f(z_1)$ and a domain $V$ with $\varphi (f(z_1)) \in V \subset W$ such that $\varphi : B_0 \to V$ is bijective. Then $(\widetilde{F}|_{B_2})^{-1}(V)$ is a domain satisfying $z_1 \in (\widetilde{F}|_{B_2})^{-1}(V) \subset B$. Consequently, $G \cup (\widetilde{F}|_{B_2})^{-1}(V)$ is a domain containing \,$G$ \,strictly and the function $\widetilde{f}: G \cup (\widetilde{F}|_{B_2})^{-1}(V) \to \mathbb C$ given by $$ \widetilde{f} (z) = \left\{ \begin{array}{ll} f(z) & \mbox{if } z \in G \\ \varphi^{-1}(\widetilde{F}(z)) & \mbox{if } z \in (\widetilde{F}|_{B_2})^{-1}(V) \end{array} \right. $$ is well defined, holomorphic and extends $f$. This contradicts the hypothesis that $f \in H_{we}(G)$ and the proof is finished. \end{proof} \begin{remark} {\rm The conclusion of the last lemma fails if no condition is imposed on $f$.
For instance, if \,$G = \mathbb C \setminus (-\infty ,0]$, $f$ is the principal branch of \,$\log z$ \,and \,$\varphi = \exp$, then \,$f \in H_{we}(G)$ \,but \,$\varphi \circ f =$ Identity $\not\in H_{we}(G)$.} \end{remark} \begin{theorem} \label{H_e(G)Ainfty algebrable} \begin{itemize} \item[\rm (a)] Let \,$G \subset \mathbb C$ \,be a domain such that \,$\partial G$ \,lacks isolated points, and let \,$X$ \,be an algebra of functions with \,$X \subset A^1(G)$ \,that is stable under composition with entire functions, that is, $$ f \in X \hbox{ \ and \ } \varphi \in H(\mathbb C ) \,\, \Longrightarrow \,\, \varphi \circ f \in X. $$ Then the family \,$H_{we}(G) \cap \,X$ \,is either empty or strongly $\mathfrak{c}$-algebrable. \item[\rm (b)] For any regular domain $G \subset \mathbb C$, the set $H_{we}(G) \cap A^\infty (G)$ is strongly $\mathfrak{c}$-algebrable. For any Jordan domain $G \subset \mathbb C$, the set $H_{e}(G) \cap A^\infty (G)$ is strongly $\mathfrak{c}$-algebrable. \end{itemize} \end{theorem} \begin{proof} Evidently, (b) is a special case of (a), because \,$A^\infty (G)$ \,is an algebra contained in \,$A^1(G)$ \,that is stable under composition with members of \,$H(\mathbb C )$ and, $G$ being regular, we have \,$H_{we}(G) \cap A^\infty (G) \ne \emptyset$ \,and \,$\partial G$ \,lacks isolated points. Therefore, our goal is to prove (a). \vskip .15cm To this end, suppose that \,$f \in H_{we}(G) \cap \,X$. Let $\Omega := G$, ${\cal F} := H_{we}(G) \cap \,X$. Since $f$ is nonconstant, the set $f(\Omega )$ is open, so uncountable. According to our assumptions and Lemma \ref{Lemma-composition}, we have that \,$\varphi \circ f \in {\cal F}$ \,for every $\varphi \in {\cal E}$, the family of exponential-like functions. Finally, thanks to Proposition \ref{exponentials}, the set \,${\cal F}$ \,is strongly $\mathfrak{c}$-algebrable, as desired.
\end{proof} \begin{remark} {\rm We have used several times the fact that \,$H_e(G) = H_{we}(G)$ \,if $G$ is a Jordan domain. More generally, it is easy to see that \,$H_e(G) = H_{we}(G)$ \,if the domain \,$G$ \,satisfies the following property: {\it \begin{enumerate} \item[\rm (P)] For every open ball \,$B$ \,with \,$B \not\subset G$ \,and \,$B \cap G \ne \emptyset$, and every connected component \,$A$ of \,$B \cap G$, there exists an open ball \,$B_0$ \,satisfying \,$B \supset B_0 \not \subset G$ \,and \,$A \supset B_0 \cap G \ne \emptyset$. \end{enumerate}} \noindent Then we can replace ``For any Jordan domain'' by ``For any regular domain satisfying (P)'' in part (b) of Theorems \ref{H_e(G)Ainfty maximal dense-lineable}, \ref{H_e(G)Ainfty spaceable} and \ref{H_e(G)Ainfty algebrable}. Note that, for instance, \,$\Omega_1 := \{z: \, |z-1| < 1$ and $|z-(1/2)| > 1/2\}$ \,and \,$\Omega_2 := \{z=x+iy: \, x < 0 \hbox{ or } [x = 0 \hbox{ and } y < 0] \hbox{ or } [x > 0 \hbox{ and } y < 1 + \sin {1 \over x}]\}$ are regular non-Jordan domains such that (P) holds for \,$\Omega_1$ \,but not for \,$\Omega_2$.} \end{remark} Our next result establishes, for an arbitrary complex Banach space, that a certain ``good shape'' of the domain guarantees the {\it dense} lineability. This complements Theorem \ref{Alves}. Recall that in $H(G)$ we are considering the topology of uniform convergence on compacta (see \cite{Chae} or \cite{Din} for a description of this topology and other ones if $E$ is infinite dimensional). Recall also that a subset $A$ of a vector space is called {\it balanced} provided that $\lambda x \in A$ whenever $x \in A$ and $|\lambda | \le 1$. \begin{theorem} \label{H_e(G)in infinite dimension} Suppose that \,$G$ is a domain of existence in a separable complex Banach space $E$ and that there is $x_0 \in G$ such that $G - x_0$ is balanced. Then \,$H_e(G)$ is maximal dense-lineable in \,$H(G)$.
\end{theorem} \begin{proof} Since \,$f \in H_e(G) =: A$ \,if and only if \,$f(\cdot + x_0) \in H_e(G - x_0)$, we can suppose that $x_0 = 0 \in G$ and $G$ is balanced. Under this assumption, the Taylor series centered at $0$ of each $f \in H(G)$ converges to $f$ uniformly on compacta (see \cite[Proposition 3.36]{Din}). Consequently, the set \,$B$ \,of the restrictions to \,$G$ \,of the (continuous) polynomials in \,$E$ \,is dense in \,$H(G)$. Hence \,$B$ \,is dense-lineable, because it is a vector space. Notice that, since $G$ is separable (because $E$ is), the space $C(G)$ of complex continuous functions on $G$ has cardinality $\mathfrak{c}$ which, together with the fact $C(G) \supset H(G) =: X$, implies ${\rm dim} (X) = \mathfrak{c} = {\rm card}\,(X)$. \vskip .15cm On the one hand, we have by Theorem \ref{Alves} that \,$A$ \,is $\mathfrak{c}$-lineable. On the other hand, it is evident that if \,$f \in H_e(G)$ \,and \,$g \in H(E)$ \,then \,$f + g \in H_e(G)$. In particular, $A + B \subset A$. Trivially, $A \cap B = \emptyset$. Now, observe that the collection \,${\cal B}$ \,of all sets of the form $$ V(f,K,\varepsilon ) = \{g \in X: \, \sup_{z \in K} |g(z) - f(z)| < \varepsilon \} \qquad (f \in X, \ \varepsilon > 0, \ K \hbox{ compact } \subset G) $$ \,is an open basis for the topology of \,$H(G)$. Note also that, as $G$ is separable, every compact subset $K$ is closed and separable, so $K$ is the closure of some countable subset of $G$. Since ${\rm card}\,(G) = \mathfrak{c}$, the collection of the countable subsets of $G$ has cardinality $\mathfrak{c}$, and therefore the same holds for the collection \,$\cal K$ \,of all compact subsets of $G$. Hence \,${\rm card}\,(X) = \mathfrak{c} = {\rm card}\,(0,+\infty ) = {\rm card}\,({\cal K})$, so \,${\rm card}\,({\cal B}) = \mathfrak{c}$. An application of Theorem \ref{maximal dense-lineable} (with $\alpha = \mathfrak{c}$) concludes the proof. \end{proof} \noindent {\bf Question.} Let $E$ be a separable complex Banach space.
Under appropriate conditions, is \,$H_e(G)$ {\it densely} \,algebrable in \,$H(G)$? Is it {\it $\mathfrak{c}$-algebrable}\,? \vskip .15cm We finish this paper by establishing that, for any {\it finite dimensional} \,domain of existence, the conclusion of Theorem \ref{H_e(G)in infinite dimension} always holds and the second part of the last question always has a positive answer. This complements Theorem \ref{AronGarciaMaestre}. In the second part, the Continuum Hypothesis (CH) will be assumed. \begin{theorem} Let $N \in \mathbb N$ and $G \subset \mathbb C^N$ be a domain of existence. Then \,$H_e(G)$ \,is maximal dense-lineable in \,$H(G)$ \,and $\mathfrak{c}$-algebrable. \end{theorem} \begin{proof} The maximal dense-lineability of $H_e(G)$ can be shown exactly as in the proof of Theorem \ref{H_e(G)Ainfty maximal dense-lineable}, by using the existence of a dense nearly-Baire subspace (see the paragraph following Theorem \ref{AronGarciaMaestre}). As for $\mathfrak{c}$-algebrability, suppose, by way of contradiction, that every algebra contained in $H_e(G) \cup \{0\}$ is countably generated. In particular, the closed algebra $\cal A$ given in the conclusion of Theorem \ref{AronGarciaMaestre} contains a countable set $\{f_n: \, n=1,2,...\}$ such that every $f \in {\cal A}$ is a linear combination of products of the form $f_{i_1}^{m_1} \cdots f_{i_p}^{m_p}$ with $m_1, \dots ,m_p \in \mathbb N$ ($p \in \mathbb N$). Observe that there are countably many such products. But ${\cal A}$ is infinite dimensional when considered as a vector space. Since ${\cal A}$ is closed in the F-space $H(G)$, the space ${\cal A}$ is a separable infinite dimensional F-space. A standard application of Baire's category theorem yields \,${\rm dim}\,({\cal A}) = \mathfrak{c}$, which contradicts the fact that $\cal A$ is a countably generated vector space. The proof is finished.
\end{proof} \noindent {\bf Acknowledgements.} The authors have been partially supported by the Plan Andaluz de Investigaci\'on de la Junta de Andaluc\'{\i}a FQM-127 Grant P08-FQM-03543 and by MEC Grant MTM2012-34847-C02-01. \end{document}
\begin{document} \title{Casimir effect from macroscopic quantum electrodynamics} \author{T G Philbin} \address{School of Physics and Astronomy, University of St Andrews, North Haugh, St Andrews, Fife KY16 9SS, Scotland, UK.} \ead{[email protected]} \begin{abstract} The canonical quantization of macroscopic electromagnetism was recently presented in {\it New J.\ Phys.}\ {\bf 12} (2010) 123008. This theory is here used to derive the Casimir effect, by considering the special case of thermal and zero-point fields. The stress-energy-momentum tensor of the canonical theory follows from Noether's theorem, and its electromagnetic part in thermal equilibrium gives the Casimir energy density and stress tensor. The results hold for arbitrary inhomogeneous magnetodielectrics and are obtained from a rigorous quantization of electromagnetism in dispersive, dissipative media. Continuing doubts about the status of the standard Lifshitz theory as a proper quantum treatment of Casimir forces do not apply to the derivation given here. Moreover, the correct expressions for the Casimir energy density and stress tensor inside media follow automatically from the simple restriction to thermal equilibrium, without the need for complicated thermodynamical or mechanical arguments. \end{abstract} \pacs{42.50.Lc, 42.50.Nn, 12.20.-m} \section{Introduction} The phenomenon of forces on macroscopic objects due to electromagnetic zero-point and thermal fields is usually called the Casimir effect, after the first, highly idealized, theoretical result in the subject~\cite{cas48}. A theory of the effect for realistic materials was given by Lifshitz and co-workers~\cite{lif55, dzy61, LL} and this remains the most general formalism for describing the local electromagnetic quantities that produce the forces. 
Some authors prefer other terminology, such as Casimir-Lifshitz effect, or van der Waals forces, to designate this same phenomenon; in this paper the term Casimir effect is used for both the zero-point and thermal contributions. Despite its good agreement with experiments that measure the zero-point Casimir force, there have been persistent doubts about the status of Lifshitz theory as a quantum theory. These doubts are voiced, for example, in two recent publications~\cite{bar10,ros10}, and a detailed analysis in~\cite{ros10} concludes that ``Lifshitz theory is actually a classical stochastic electrodynamical theory''. The source of these doubts is not difficult to find if one examines the details of Lifshitz theory. As the Casimir effect is a phenomenon of quantum electromagnetism in the presence of macroscopic media, one would expect the general theory of the effect to be based on the principles of quantum electrodynamics. Lifshitz theory, however, is based rather on the principles of thermodynamics. In fact there is no Hamiltonian in Lifshitz theory and there are no quantized fields, yet the formalism seeks to calculate the forces caused by quantum zero-point fields (as well as by thermal fields). The requirements of a quantum theory of light in media that can serve as a basis for the Casimir effect (among many other phenomena) are easy to identify. The theory must hold for arbitrary magnetodielectrics in order to be applicable to the realistic materials used in experiments, materials whose optical properties (dielectric functions) are known only through measurement. As the Casimir effect is a broadband phenomenon in which all frequencies must be included, the theory must also take full account of material dispersion and absorption.
The classical theory of light in these circumstances is of course the macroscopic Maxwell equations, where the electromagnetic properties of the media are encompassed in arbitrary electric permittivities and magnetic permeabilities obeying the Kramers-Kronig relations. The theoretical basis of the Casimir effect should therefore be a quantum theory of Maxwell's macroscopic electromagnetism. But whereas the quantum theory of the free-space Maxwell equations, quantum electrodynamics (QED), was the foundation of quantum field theory, and led to a general formalism for quantizing classical field theories, this quantization procedure was not applied to macroscopic electromagnetism. What has become known in the literature as macroscopic QED is not based on the rules for quantizing field theories, but is instead a phenomenological theory wherein no rigorous quantization is attempted (see~\cite{kno01,sch08} for detailed presentations). This phenomenological procedure is subject to much of the criticism directed at Lifshitz theory, and it will be seen that the results derived in this paper require, among other things, an action principle, something that is lacking in the phenomenological approach. The quantization rules of quantum field theory were not obviously applicable in the case of macroscopic electromagnetism due to the complications of dispersion and dissipation, and this is what led to the phenomenological approach. In fact it had been ``widely agreed''~\cite{hut92} that a proper quantization of macroscopic electromagnetism could not be performed, and that only in cases where a simple microscopic model of the dielectric functions of the medium is explicitly introduced could the standard quantization rules be applied. In~\cite{phi10}, however, the canonical quantization of macroscopic electromagnetism was achieved, providing a rigorous macroscopic QED and removing the need for a phenomenological approach.
Since the macroscopic QED derived in~\cite{phi10} applies to arbitrary magnetodielectrics and takes full account of dispersion and absorption, it meets the criteria outlined above for a rigorous quantum foundation for the Casimir effect. This paper derives the Casimir effect from macroscopic QED by considering the special case of thermal equilibrium.\footnote{Hereafter macroscopic QED refers to the canonically quantized theory presented in~\cite{phi10}, not the previous phenomenological formalism~\cite{kno01,sch08}.} A brief summary of macroscopic QED is given in section~\ref{sec:mQED}. The stress-energy-momentum tensor of macroscopic electromagnetism is derived in full generality in section~\ref{sec:em}, by application of Noether's theorem to the action principle given in~\cite{phi10}. Specialization to thermal equilibrium (including the zero-point fields) is made in section~\ref{sec:thermal}. The expectation value of the electromagnetic part of the stress-energy-momentum tensor in thermal equilibrium gives the Casimir effect. Correlation functions of the quantum field operators in thermal equilibrium are calculated in~\ref{sec:cor} and used in sections~\ref{sec:en} and~\ref{sec:st} to obtain the Casimir energy density and stress tensor. \section{Macroscopic QED} \label{sec:mQED} This section summarizes the results of macroscopic QED~\cite{phi10} that we require to derive the Casimir effect. 
The action of macroscopic electromagnetism is~\cite{phi10} \begin{equation} \label{S} \fl S[\phi,\mathbf{A},\mathbf{X}_\omega,\mathbf{Y}_\omega]=S_{\mathrm{em}}[\phi,\mathbf{A}]+S_ \mathrm {X}[\mathbf{X}_\omega]+S_ \mathrm{Y}[\mathbf{Y}_\omega]+S_{\mathrm{int}}[\phi,\mathbf{A},\mathbf{X}_\omega,\mathbf{Y}_\omega], \end{equation} where $S_{\mathrm{em}}$ is the free electromagnetic action \begin{equation} \label{Sem} S_{\mathrm{em}}[\phi,\mathbf{A}]=\frac{\kappa_0}{2}\int\rmd^4 x\left(\frac{1}{c^2}\mathbf{E}\cdot\mathbf{E}-\mathbf{B}\cdot\mathbf{B}\right), \quad \kappa_0=1/\mu_0, \end{equation} $S_ \mathrm{X}$ and $S_ \mathrm{Y}$ are the actions for free reservoir oscillators: \begin{eqnarray} S_ \mathrm{X}[\mathbf{X}_\omega]=\frac{1}{2}\int\rmd^4 x\int_0^\infty\rmd\omega\left(\partial_t\mathbf{X}_\omega\cdot\partial_t\mathbf{X}_\omega-\omega^2\mathbf{X}_\omega\cdot\mathbf{X}_\omega\right), \\[3pt] S_ \mathrm{Y}[\mathbf{Y}_\omega]=\frac{1}{2}\int\rmd^4 x\int_0^\infty\rmd\omega\left(\partial_t\mathbf{Y}_\omega\cdot\partial_t\mathbf{Y}_\omega-\omega^2\mathbf{Y}_\omega\cdot\mathbf{Y}_\omega\right), \end{eqnarray} and $S_{\mathrm{int}}$ is the interaction part of the action, coupling the electromagnetic fields to the reservoir: \begin{eqnarray} S_{\mathrm{int}}[\phi,\mathbf{A},\mathbf{X}_\omega,\mathbf{Y}_\omega]=\int\rmd^4 x\int_0^\infty\rmd\omega\left[\alpha(\mathbf{r},\omega)\mathbf{X}_\omega\cdot\mathbf{E}+\beta(\mathbf{r},\omega)\mathbf{Y}_\omega\cdot\mathbf{B}\right], \label{Sint} \\[3pt] \alpha(\mathbf{r},\omega)=\left[\frac{2\varepsilon_0}{\pi}\omega\varepsilon_\mathrm{I}(\mathbf{r},\omega)\right]^{1/2}, \qquad \beta(\mathbf{r},\omega)=\left[-\frac{2\kappa_0}{\pi}\omega\kappa_\mathrm{I}(\mathbf{r},\omega)\right]^{1/2}. 
\label{ab} \end{eqnarray} The imaginary parts of the dielectric functions of the medium appear in the coupling functions (\ref{ab}); their real parts are given by the Kramers-Kronig relation~\cite{LLcm,jac}: \begin{equation} \label{KK} \varepsilon_\mathrm{R}(\mathbf{r},\omega')-1=\frac{2}{\pi}\mathrm{P}\int_0^\infty\rmd\omega\frac{\omega\varepsilon_\mathrm{I}(\mathbf{r},\omega)}{\omega^2-\omega'^2}, \qquad \mbox{and similarly for $\kappa(\mathbf{r},\omega)$.} \end{equation} As in~\cite{phi10}, we assume the medium is isotropic, with scalar dielectric functions $\varepsilon(\mathbf{r},\omega)$ and $\kappa(\mathbf{r},\omega)=1/\mu (\mathbf{r},\omega)$; anisotropy can be included by obvious modifications. The field equations of the action (\ref{S})--(\ref{ab}) are \begin{eqnarray} \varepsilon_0\nabla\cdot\mathbf{E}+\int_0^\infty\rmd\omega\,\nabla\cdot\left[\alpha(\mathbf{r},\omega)\mathbf{X}_\omega\right]=0, \label{Eeq} \\[3pt] -\kappa_0\nabla\times\mathbf{B}+\varepsilon_0\partial_t\mathbf{E}+\int_0^\infty\rmd\omega\left\{\alpha(\mathbf{r},\omega) \partial_t\mathbf{X}_\omega+\nabla\times\left[\beta(\mathbf{r},\omega)\mathbf{Y}_\omega\right]\right\}=0, \label{Beq} \\[3pt] -\partial_t^2\mathbf{X}_\omega-\omega^2\mathbf{X}_\omega+\alpha(\mathbf{r},\omega)\mathbf{E}=0, \label{Xeq} \\ -\partial_t^2\mathbf{Y}_\omega-\omega^2\mathbf{Y}_\omega+\beta(\mathbf{r},\omega)\mathbf{B}=0. \label{Yeq} \end{eqnarray} By solving the equations for the reservoir fields $\mathbf{X}_\omega$ and $\mathbf{Y}_\omega$, the independent equations of the electromagnetic fields are found and are precisely the macroscopic Maxwell equations~\cite{phi10} \begin{eqnarray} \nabla\cdot\mathbf{D}=\sigma, \label{gauss} \\ \nabla\times\mathbf{H}-\partial_t\mathbf{D}=\mathbf{j}, \label{amp} \end{eqnarray} where the charge density $\sigma$ and current density $\mathbf{j}$ are given in terms of arbitrary free-field solutions for the reservoir fields $\mathbf{X}_\omega$ and $\mathbf{Y}_\omega$. 
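For orientation, we sketch how (\ref{gauss}) and (\ref{amp}) arise (the detailed computation is given in~\cite{phi10}). Defining
\[
\mathbf{D}'=\varepsilon_0\mathbf{E}+\int_0^\infty\rmd\omega\,\alpha(\mathbf{r},\omega)\mathbf{X}_\omega,
\qquad
\mathbf{H}'=\kappa_0\mathbf{B}-\int_0^\infty\rmd\omega\,\beta(\mathbf{r},\omega)\mathbf{Y}_\omega,
\]
the field equations (\ref{Eeq}) and (\ref{Beq}) take the source-free form $\nabla\cdot\mathbf{D}'=0$ and $\nabla\times\mathbf{H}'-\partial_t\mathbf{D}'=0$. Splitting $\mathbf{X}_\omega$ and $\mathbf{Y}_\omega$ into free solutions of (\ref{Xeq}) and (\ref{Yeq}) and the parts driven by $\mathbf{E}$ and $\mathbf{B}$, the driven parts combine with the Kramers-Kronig relation (\ref{KK}) to give the frequency-domain constitutive relations $\mathbf{D}(\mathbf{r},\omega)=\varepsilon_0\varepsilon(\mathbf{r},\omega)\mathbf{E}(\mathbf{r},\omega)$ and $\mathbf{H}(\mathbf{r},\omega)=\kappa_0\kappa(\mathbf{r},\omega)\mathbf{B}(\mathbf{r},\omega)$, while the free solutions give rise to the source terms $\sigma$ and $\mathbf{j}$.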
The other two Maxwell equations are identities in terms of the potentials $\mathbf{A}$ and $\phi$. As the action (\ref{S})--(\ref{ab}) features the dynamical fields and their first derivatives, canonical quantization can proceed without difficulty~\cite{phi10}. The resulting Hamiltonian can be diagonalized to the form \begin{equation} \label{Hdiag} \hat{H}=\sum_{\lambda=\mathrm{e},\mathrm{m}}\int\rmd^3 \mathbf{r}\int_0^\infty\rmd\omega\,\hbar\omega\mathbf{\hat{C}}^\dagger_\lambda(\mathbf{r},\omega)\cdot\mathbf{\hat{C}}_\lambda (\mathbf{r},\omega), \end{equation} where the diagonalizing eigenmode creation and annihilation operators obey \begin{equation} \label{C,C} \fl \left[\hat{C}_{\lambda i}(\mathbf{r},\omega),\hat{C}^\dagger_{\lambda' j}(\mathbf{r'},\omega')\right]=\delta_{ij}\delta_{\lambda\lambda'} \delta(\omega-\omega') \delta(\mathbf{r}-\mathbf{r'}), \qquad \left[\hat{C}_{\lambda i}(\mathbf{r},\omega),\hat{C}_{\lambda' j}(\mathbf{r'},\omega')\right]=0. \end{equation} The infinite zero-point energy has been omitted from the Hamiltonian (\ref{Hdiag}). Zero-point energy is of course crucial for the Casimir effect, but the electromagnetic energy density and stress tensor will be calculated below taking full account of the zero-point fields. 
The quantum macroscopic Maxwell equations \begin{eqnarray} \nabla\cdot\mathbf{\hat{D}}= \hat{\sigma}, \label{qgauss} \\ \nabla\times\mathbf{\hat{H}}-\partial_t\mathbf{\hat{D}}=\mathbf{\hat{j}}, \label{qamp} \end{eqnarray} hold with the charge and current density operators given, in the frequency domain, by \begin{eqnarray} \fl \hat{\sigma} (\mathbf{r}, \omega)=\frac{1}{\rmi\omega}\nabla \cdot\mathbf{\hat{j}}(\mathbf{r}, \omega) =-2\pi\nabla\cdot\left\{\left[\frac{\hbar\varepsilon_0}{\pi}\varepsilon_\mathrm{I}(\mathbf{r},\omega)\right]^{1/2}\mathbf{\hat{C}}_\mathrm{e}(\mathbf{r},\omega)\right\}, \label{qsigma} \\[3pt] \fl \mathbf{\hat{j}} (\mathbf{r}, \omega)=-2\pi\rmi \omega\left[\frac{\hbar\varepsilon_0}{\pi}\varepsilon_\mathrm{I}(\mathbf{r},\omega)\right]^{1/2}\mathbf{\hat{C}}_\mathrm{e}(\mathbf{r},\omega) +2\pi\nabla\times\left\{\left[-\frac{\hbar\kappa_0}{\pi}\kappa_\mathrm{I}(\mathbf{r},\omega)\right]^{1/2}\mathbf{\hat{C}}_\mathrm{m}(\mathbf{r},\omega)\right\}. \label{jopdef} \end{eqnarray} The relationship between fields in the time and frequency domains is, for the example of the electric field, \begin{equation} \label{Efreq} \mathbf{\hat{E}}(\mathbf{r},t)=\frac{1}{2\pi}\int_0^\infty \rmd\omega\left[\mathbf{\hat{E}} (\mathbf{r},\omega)\exp(-\rmi\omega t)+\mbox{c.c.}\right]. \end{equation} The electromagnetic field operators and the reservoir field operators are also expressed in terms of the diagonalizing operators, as follows. 
For the electric field operator the relation follows from (\ref{jopdef}) and \begin{equation} \fl \mathbf{\hat{E}}(\mathbf{r},t)=\frac{\mu_0}{2\pi}\int_0^\infty\rmd \omega\int\rmd^3\mathbf{r'}\left[\rmi \omega \mathbf{G}(\mathbf{r},\mathbf{r'},\omega)\cdot\mathbf{\hat{j}} (\mathbf{r'}, \omega)\exp(-\rmi \omega t)+\mbox{h.c.}\right], \label{EopG} \end{equation} where the Green bi-tensor $\mathbf{G}(\mathbf{r},\mathbf{r'},\omega)$ is the solution of \begin{equation} \label{green} \nabla\times\left[\kappa(\mathbf{r},\omega)\nabla\times\mathbf{G}(\mathbf{r},\mathbf{r'}, \omega)\right]-\frac{\omega^2}{c^2}\varepsilon(\mathbf{r},\omega)\mathbf{G}(\mathbf{r},\mathbf{r'}, \omega)=\mathds{1}\delta(\mathbf{r}-\mathbf{r'}). \end{equation} The magnetic field operator is \begin{equation} \label{BE} \mathbf{\hat{B}}(\mathbf{r},\omega)=-\rmi\nabla\times\mathbf{\hat{E}}(\mathbf{r},\omega)/\omega. \end{equation} Finally, the reservoir field operators $\mathbf{\hat{X}}_\omega$ and $\mathbf{\hat{Y}}_\omega$ can be written in the frequency domain as \begin{eqnarray} \mathbf{\hat{X}}_\omega(\mathbf{r},\omega')=&2\pi\sqrt{\frac{\hbar}{2\omega}}\delta(\omega-\omega') \mathbf{\hat{C}}_\mathrm{e}(\mathbf{r},\omega') \nonumber \\[3pt] & +\frac{\alpha(\mathbf{r},\omega)}{2\omega}\left[\frac{1}{\omega-\omega'-\rmi0^+}+\frac{1}{\omega+\omega'}\right]\mathbf{\hat{E}}(\mathbf{r},\omega'), \label{XCE} \\[3pt] \mathbf{\hat{Y}}_\omega(\mathbf{r},\omega')=& 2\pi\sqrt{\frac{\hbar}{2\omega}}\delta(\omega-\omega') \mathbf{\hat{C}}_\mathrm{m}(\mathbf{r},\omega') \nonumber \\[3pt] &-\frac{\rmi\beta(\mathbf{r},\omega)}{2\omega'\omega}\left[\frac{1}{\omega-\omega'-\rmi0^+}+\frac{1}{\omega+\omega'}\right]\nabla\times\mathbf{\hat{E}}(\mathbf{r},\omega'). 
\label{YCB} \end{eqnarray} The last two equations were not explicitly written in~\cite{phi10}; they are obtained from the expansion (58) in~\cite{phi10} of the operator $\mathbf{\hat{X}}_\omega$ in terms of the diagonalizing operators, and the analogous expansion of $\mathbf{\hat{Y}}_\omega$, by inserting the results (70), (71), (76) and (88) of~\cite{phi10}. Use is also made of the fact that all frequency arguments are taken only at positive values (see (\ref{Efreq})), so that the sums of frequencies in denominators do not give rise to poles. \section{Stress-energy-momentum tensor of macroscopic electromagnetism} \label{sec:em} The canonical formulation of macroscopic electromagnetism given in~\cite{phi10} leads directly to the stress-energy-momentum tensor of the system, in both the classical and quantum cases. As dissipation of electromagnetic energy by the medium is fully taken into account through the presence of the reservoir fields, a conserved total energy of the system exists (if the dielectric functions of the medium are time independent). Similarly, total momentum is conserved if the medium is homogeneous. The complete stress-energy-momentum tensor of the system follows from application of Noether's theorem to the action (\ref{S})--(\ref{ab}). We state the results for classical fields, but the quantum stress-energy-momentum tensor is of the same form because the quantum field operators obey the same dynamical equations as the classical fields, as shown in~\cite{phi10}. \subsection{Energy density and energy flux} The invariance of the action (\ref{S})--(\ref{ab}) under active time translations of the dynamical fields implies a conservation law that can be extracted as follows~\cite{wei}. 
We make an active infinitesimal time translation of all the dynamical fields---in the case of the vector potential, $\mathbf{A}(\mathbf{r},t)\rightarrow\mathbf{A}(\mathbf{r},t+\zeta(\mathbf{r},t))$---but take the translation $\zeta(\mathbf{r},t)$ to vary in space and time. The resulting change in the action can be reduced to the form \begin{equation} \label{actvart} \delta S=\int \rmd^4x\left(\rho\,\partial_t\zeta+\mathbf{s}\cdot\nabla\zeta\right), \end{equation} where $\rho$ is the energy density and $\mathbf{s}$ is the energy flux, obeying the conservation law \begin{equation} \label{cone} \partial_t \rho+ \nabla \cdot\mathbf{s}=0. \end{equation} The calculation is straightforward and yields \begin{eqnarray} \fl \rho=&\frac{\kappa_0}{2}\left[\frac{1}{c^2}\mathbf{E}\cdot(-\partial_t\mathbf{A}+\nabla\phi)+\mathbf{B}^2\right] \nonumber \\[3pt] \fl &+\int_0^\infty\rmd\omega\left[\frac{1}{2}(\partial_t\mathbf{X}_\omega)^2+\frac{1}{2}(\partial_t\mathbf{Y}_\omega)^2+\frac{1}{2}\omega^2(\mathbf{X}_\omega^2+\mathbf{Y}_\omega^2)+\alpha\mathbf{X}_\omega\cdot\nabla\phi-\beta\mathbf{Y}_\omega\cdot\mathbf{B}\right], \label{rho1} \\[3pt] \fl s^i=&-\kappa_0\left[\frac{1}{c^2}E^i\partial_t\phi+2\nabla^{[i}A^{j]}\partial_tA_j\right]-\int_0^\infty\rmd\omega\left[\alpha X^i_\omega \partial_t\phi-\beta Y^j_\omega\epsilon_j^{\ ik}\partial_tA_k\right], \label{s1} \end{eqnarray} where anti-symmetrization of tensor indices is denoted by square brackets and $\epsilon^{ijk}$ is the Levi-Civita tensor~\cite{MTW}. As in the case of free-space electromagnetism~\cite{jac,wei}, the energy density (\ref{rho1}) and flux (\ref{s1}) that directly emerge from Noether's theorem are not gauge invariant. They are, however, equivalent to gauge-invariant quantities, since they differ from such quantities only by terms that \emph{identically} satisfy the conservation law (\ref{cone}).
Specifically, the quantities \begin{eqnarray} f^{i\ t}_{\ \,t}:= -\varepsilon_0\phi E^i-\int_0^\infty\rmd\omega\,\alpha\phi X^i_\omega =:-f^{t\ i}_{\ \,t}, \\[3pt] f^{j\ i}_{\ \,t}:=-2\kappa_0\phi \nabla^{[i}A^{j]}+\int_0^\infty\rmd\omega\,\epsilon^{ijk}\beta\phi Y_{\omega k} \end{eqnarray} identically satisfy \begin{equation} \label{fid} \partial_t\nabla_if^{i\ t}_{\ \,t}+\nabla_i(\partial_tf^{t\ i}_{\ \,t}+\nabla_jf^{j\ i}_{\ \,t})=0. \end{equation} Comparing (\ref{fid}) with (\ref{cone}), we see that if $\nabla_if^{i\ t}_{\ \,t}$ is added to $\rho$, and $\partial_tf^{t\ i}_{\ \,t}+\nabla_jf^{j\ i}_{\ \,t}$ is added to $s^i$, then the conservation law (\ref{cone}) will still hold. With use of the field equations (\ref{Eeq})--(\ref{Yeq}), the energy density and flux that result from these additions are gauge invariant and are given by \begin{eqnarray} \fl \rho=&\frac{\kappa_0}{2}\left[\frac{1}{c^2}\mathbf{E}^2+\mathbf{B}^2\right] \nonumber \\[3pt] \fl &+\int_0^\infty\rmd\omega\left[\frac{1}{2}(\partial_t\mathbf{X}_\omega)^2+\frac{1}{2}(\partial_t\mathbf{Y}_\omega)^2+\frac{1}{2}\omega^2(\mathbf{X}_\omega^2+\mathbf{Y}_\omega^2)-\beta\mathbf{Y}_\omega\cdot\mathbf{B}\right], \label{rho} \\[3pt] \fl \mathbf{s}=&\kappa_0 \mathbf{E}\times \mathbf{B}-\int_0^\infty\rmd\omega\,\beta \mathbf{E}\times \mathbf{Y}_\omega. \label{s} \end{eqnarray} It is straightforward to verify that the conservation law (\ref{cone}) holds for (\ref{rho}) and (\ref{s}) when the fields obey the dynamical equations (\ref{Eeq})--(\ref{Yeq}). \subsection{Momentum density and stress tensor} The total momentum of the electromagnetic field plus matter is conserved in flat space-time. In the description of macroscopic electromagnetism, however, the microscopic degrees of freedom of the magnetodielectric medium are not included in the action; as a result, the translation symmetry that gives rise to momentum conservation is not in general present in the dynamical system considered. 
A conservation law for momentum will exist within the macroscopic framework only if the coupling functions (\ref{ab}) are independent of position (a homogeneous medium) so that the action is invariant under active spatial translations of the dynamical fields. It is nevertheless instructive to retain the general case of an inhomogeneous medium; the conservation law for momentum will then be seen to fail due to the appearance of spatial derivatives of the coupling functions (\ref{ab}). We make an active infinitesimal spatial translation of all the dynamical fields; in the case of the vector potential this is $\mathbf{A}(\mathbf{r},t)\rightarrow\mathbf{A} (\mathbf{r}+\mathbf{w}(\mathbf{r},t),t)$. The resulting change in the action can be written \begin{equation} \label{actvarr} \delta S=-\int \rmd^4x\left(p_i\,\partial_t w^i+\sigma_i^{\ j}\,\nabla_j w^i\right), \end{equation} where the momentum density $\mathbf{p}$ and stress tensor $\sigma_i^{\ j}$ obey, in homogeneous media, the conservation law \begin{equation} \label{conm} \partial_t p_i+ \nabla_j\sigma_i^{\ j}=0. \end{equation} The form (\ref{actvarr}) is achieved with \begin{eqnarray} \fl p_i=&\varepsilon_0E^j\nabla_iA_j -\int_0^\infty\rmd\omega\left(\partial_t X^j_\omega\nabla_i X_{\omega j}+\partial_t Y^j_\omega\nabla_i Y_{\omega j}-\alpha X^j_\omega\nabla_iA_j\right), \label{p1} \\[3pt] \fl \sigma_i^{\ j}=&\mathcal{L}\,\delta_i^{\ j}+\kappa_0\left(\frac{1}{c^2}E^j\nabla_i\phi+2\nabla^{[j}A^{k]}\nabla_iA_k\right)+\int_0^\infty\rmd\omega\left(\alpha X^j_\omega\nabla_i\phi-\beta Y^k_\omega\epsilon_{k\ l}^{\ \,j}\nabla_iA^l\right), \label{stress1} \end{eqnarray} where $\mathcal{L}$ is the Lagrangian density, i.e.\ the integrand in the action (\ref{S}). Again, the initial results (\ref{p1}) and (\ref{stress1}) are not gauge invariant.
The quantities \begin{eqnarray} f^{j\ t}_{\ \,i}:= -\varepsilon_0A_iE^j-\int_0^\infty\rmd\omega\,\alpha A_iX^j_\omega =:-f^{t\ j}_{\ \,i}, \\ f^{k\ j}_{\ \,i}:=-2\kappa_0A_i\nabla^{[j}A^{k]}+\int_0^\infty\rmd\omega\,\beta\epsilon^{ljk}A_iY_{\omega l}, \end{eqnarray} identically satisfy \begin{equation} \label{fid2} \partial_t\nabla_jf^{j\ t}_{\ \,i}+\nabla_j(\partial_tf^{t\ j}_{\ \,i}+\nabla_kf^{k\ j}_{\ \,i})=0. \end{equation} Thus, addition of $\nabla_jf^{j\ t}_{\ \,i}$ to $p_i$, and of $\partial_tf^{t\ j}_{\ \,i}+\nabla_kf^{k\ j}_{\ \,i}$ to $\sigma_i^{\ j}$, does not affect the momentum conservation law (\ref{conm}). After these additions, and use of the field equations (\ref{Eeq})--(\ref{Yeq}), we obtain a gauge-invariant momentum density and stress tensor: \begin{eqnarray} \fl p_i=&\varepsilon_0(\mathbf{E}\times\mathbf{B})_i -\int_0^\infty\rmd\omega\left[\partial_t X^j_\omega\nabla_i X_{\omega j}+\partial_t Y^j_\omega\nabla_i Y_{\omega j}+\alpha (\mathbf{B}\times\mathbf{X}_\omega)_i\right], \label{p} \\[3pt] \fl \sigma_i^{\ j}=&\frac{1}{2}\delta_i^{\ j}(\varepsilon_0\mathbf{E}^2+\kappa_0\mathbf{B}^2)-\varepsilon_0E_iE^j-\kappa_0B_iB^j \nonumber \\[3pt] \fl &+\int_0^\infty\rmd\omega\left\{ \delta_i^{\ j}\left[\frac{1}{2}(\partial_t\mathbf{X}_\omega)^2+\frac{1}{2}(\partial_t\mathbf{Y}_\omega)^2-\frac{1}{2}\omega^2(\mathbf{X}_\omega^2+\mathbf{Y}_\omega^2)+\alpha \mathbf{X}_\omega\cdot \mathbf{E}\right]\right. \nonumber \\[3pt] \fl & \qquad \qquad \quad -\alpha E_iX^j_\omega+\beta Y_{\omega i}B^j {\Bigg\}}. \label{stress} \end{eqnarray} When the field equations (\ref{Eeq})--(\ref{Yeq}) hold, the momentum density (\ref{p}) and stress tensor (\ref{stress}) satisfy \begin{equation} \label{conminhom} \partial_t p_i+ \nabla_j\sigma_i^{\ j}=\int_0^\infty\rmd\omega\left(\mathbf{E}\cdot\mathbf{X}_\omega\nabla_i\alpha+\mathbf{B}\cdot\mathbf{Y}_\omega\nabla_i\beta\right), \end{equation} so that the conservation law (\ref{conm}) indeed holds for homogeneous media. 
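The identity (\ref{fid2}) (and likewise (\ref{fid})) holds for any smooth fields: the time-derivative terms cancel because $f^{j\ t}_{\ \,i}=-f^{t\ j}_{\ \,i}$, while the double divergence $\nabla_j\nabla_kf^{k\ j}_{\ \,i}$ vanishes because $f^{k\ j}_{\ \,i}$ is antisymmetric in $j$ and $k$. A minimal symbolic sketch of the latter point (not part of the original text; generic smooth functions stand in for the tensor entries):

```python
# Double divergence of a tensor antisymmetric in its two contracted
# indices vanishes identically, because mixed partial derivatives commute.
import sympy as sp

x = sp.symbols('x0:3')
g = [[sp.Function(f'g{j}{k}')(*x) for k in range(3)] for j in range(3)]
# antisymmetric part in (j, k), mimicking f^{k j}_i at fixed lower index i
F = [[g[j][k] - g[k][j] for k in range(3)] for j in range(3)]

double_div = sum(sp.diff(F[j][k], x[j], x[k])
                 for j in range(3) for k in range(3))
assert sp.simplify(double_div) == 0
```

The same cancellation underlies the freedom to add such "superpotential" terms to any Noether current without spoiling its conservation law.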
\section{Thermal equilibrium} \label{sec:thermal} Casimir forces can be calculated from either the electromagnetic energy density or stress tensor in the presence of macroscopic media. General electromagnetic fields will exert forces on the media, but the Casimir effect is the special case when the fields are in their ground state (zero-point fields) or in thermal equilibrium with the media (the latter case of course includes the contribution of the former). To derive the Casimir effect we therefore need only assume that the eigenmodes of macroscopic QED are in a thermal mixed quantum state. The expressions for the electromagnetic energy density and stress tensor that determine the forces then follow from the general results (\ref{rho}) and (\ref{stress}) (which, as noted above, also hold for quantum field operators). This is in sharp contrast to Lifshitz theory~\cite{lif55, dzy61, LL}, where extraordinarily complicated thermodynamical and mechanical arguments are required to obtain the electromagnetic stress tensor in media and vacuum, in a manner that has nothing obvious to do with the principles of quantum mechanics~\cite{ros10}. Although the calculations below are certainly tedious, the only ingredients are the quantum theory of macroscopic electromagnetism~\cite{phi10} and a restriction to the case of thermal equilibrium. 
To impose thermal equilibrium on the bosonic eigenmodes of macroscopic QED, we assume that the expectation value of the number operator of the eigenmodes is given by \begin{eqnarray} \fl \left\langle\mathbf{\hat{C}}^\dagger_\lambda(\mathbf{r},\omega)\otimes\mathbf{\hat{C}}_{\lambda'} (\mathbf{r'},\omega')\right\rangle=\mathcal{N}(\omega)\mathds{1}\delta_{\lambda \lambda'}\delta(\omega-\omega') \delta(\mathbf{r}-\mathbf{r'}) \label{CdC} \\ \qquad \qquad \ \, =\left\langle\mathbf{\hat{C}}_{\lambda'}(\mathbf{r'},\omega')\otimes\mathbf{\hat{C}}^\dagger_{\lambda} (\mathbf{r},\omega)\right\rangle-\mathds{1}\delta_{\lambda \lambda'}\delta(\omega-\omega') \delta(\mathbf{r}-\mathbf{r'}), \label{CCd} \\ \quad \ \ \, \mathcal{N}(\omega):=\left[\exp\left(\frac{\hbar\omega}{k_B T}\right)-1\right]^{-1}, \label{N} \\ \fl \left\langle\mathbf{\hat{C}}_\lambda(\mathbf{r},\omega)\otimes\mathbf{\hat{C}}_{\lambda'} (\mathbf{r'},\omega')\right\rangle=0. \label{CC} \end{eqnarray} Each eigenmode at each frequency $\omega$ and each position $\mathbf{r}$ is a quantum harmonic oscillator and so the expectation value for the number of quanta (excitation level) in each of these oscillators should be given by the Planck distribution (\ref{N}). The complication in implementing this simple prescription is that the eigenmodes are a continuum in frequency and position. The obvious way of handling this last fact is to use delta functions as in (\ref{CdC}), a procedure also followed in~\cite{ros10}, where a simple quantized microscopic model of a medium was analysed. But it must be admitted that there is no clear mathematical basis for the density operator which is supposed to underlie the expectation value (\ref{CdC}). 
The problem is that the density operator should be defined in terms of number states of the quanta, the excitation levels of the oscillators, but because a continuum of oscillators is excited, there is no clear way of writing and normalizing these number states from which one would then construct the density operator of a thermal mixed state. This may well be an inessential technical issue, but one should bear in mind that the apparently clear intuition employed in writing (\ref{CdC})--(\ref{CC}) conceals formidable mathematical difficulties. Moreover, careful study of the derivations that follow will show that the delta function in frequency in the correlation functions (\ref{CdC}) and (\ref{CCd}) is to be regarded as a limit to be taken only in the final stages of the calculations; this allows any ambiguities arising from products of delta functions to be negotiated. \section{Thermal field correlation functions} \label{sec:cor} As all the dynamical field operators of macroscopic QED are expressible in terms of the eigenmode creation and annihilation operators, (\ref{CdC})--(\ref{CC}) immediately give the expectation values of products of field operators (correlation functions) in thermal equilibrium. 
For the current density operator (\ref{jopdef}) in the frequency domain we obtain \begin{eqnarray} \fl \left\langle\mathbf{\hat{j}}^\dagger (\mathbf{r}, \omega)\otimes\mathbf{\hat{j}}(\mathbf{r'}, \omega')\right \rangle &=&4\pi\hbar\,\mathcal{N}(\omega)\delta(\omega-\omega')\Bigg\{\omega^2 \varepsilon_0\varepsilon_\mathrm{I}(\mathbf{r},\omega)\mathds{1}\delta(\mathbf{r}-\mathbf{r'}) \nonumber \\ && + \left.\kappa_0\nabla\times\left[\sqrt{-\kappa_\mathrm{I}(\mathbf{r},\omega)}\,\mathds{1}\delta(\mathbf{r}-\mathbf{r'})\sqrt{-\kappa_\mathrm{I}(\mathbf{r'},\omega')}\right]\times\stackrel{\leftarrow}{\nabla'}\right\} \label{jdj} \\ \fl &=&\frac{\mathcal{N}(\omega)}{\mathcal{N}(\omega)+1}\left\langle\mathbf{\hat{j}} (\mathbf{r'}, \omega')\otimes\mathbf{\hat{j}}^\dagger(\mathbf{r}, \omega)\right \rangle \label{jjd} \\ \fl \left \langle\mathbf{\hat{j}} (\mathbf{r}, \omega)\otimes\mathbf{\hat{j}}(\mathbf{r'}, \omega')\right \rangle &=&0, \label{jj} \end{eqnarray} where the notation $\times\stackrel{\leftarrow}{\nabla'}$ denotes a curl with respect to the right-hand index, so that $\mathbf{V}(\mathbf{r})\times\stackrel{\leftarrow}{\nabla}= \nabla \times\mathbf{V}(\mathbf{r})$ for a vector $\mathbf{V}(\mathbf{r})$ (note that no minus sign is included in this definition of $\nabla$ acting from the right). Equations (\ref{jdj})--(\ref{jj}) can be viewed as an example of the fluctuation-dissipation theorem, but here they are a simple consequence of macroscopic QED in thermal equilibrium.
From (\ref{Efreq}), the equal-time correlation function for the electric field operator is expressed in terms of the frequency-domain correlation function by \begin{eqnarray} \fl \left\langle\mathbf{\hat{E}} (\mathbf{r}, t)\otimes\mathbf{\hat{E}}(\mathbf{r'}, t)\right \rangle=&\frac{1}{4\pi^2}\int_0^\infty\rmd\omega\int_0^\infty\rmd\omega' \left\{ \exp[-\rmi(\omega-\omega')t]\left\langle\mathbf{\hat{E}} (\mathbf{r}, \omega)\otimes\mathbf{\hat{E}}^\dagger(\mathbf{r'}, \omega')\right \rangle\right. \nonumber \\ \fl &\left.+\exp[\rmi(\omega-\omega')t]\left\langle\mathbf{\hat{E}}^ \dagger (\mathbf{r}, \omega)\otimes\mathbf{\hat{E}}(\mathbf{r'}, \omega')\right \rangle\right\}. \label{EEEEf} \end{eqnarray} (Terms of the form $\langle\mathbf{\hat{E}}(\omega)\mathbf{\hat{E}}(\omega') \rangle$ and $\langle\mathbf{\hat{E}}^\dagger(\omega)\mathbf{\hat{E}}^\dagger(\omega') \rangle$ vanish.) The frequency-domain correlation functions are written in terms of the Green bi-tensor using (\ref{Efreq}), (\ref{EopG}) and (\ref{jdj})--(\ref{jj}); after spatial integrations by parts we obtain \begin{eqnarray} \fl \left\langle\hat{E}^\dagger_i (\mathbf{r}, \omega)\hat{E}_j(\mathbf{r'}, \omega')\right \rangle \nonumber \\ = 4\pi\hbar\mu_0 \int\rmd^3\mathbf{r''} \,\mathcal{N}(\omega)\omega^2\delta(\omega-\omega')\left\{\frac{\omega^2}{c^2} \varepsilon_\mathrm{I}(\mathbf{r''},\omega)G_{ik}(\mathbf{r},\mathbf{r''},\omega)G^{*\ k}_{\ j}(\mathbf{r'},\mathbf{r''},\omega) \right. \nonumber \\ \quad \left. -\kappa_\mathrm{I}(\mathbf{r''},\omega)[\mathbf{G}(\mathbf{r},\mathbf{r''},\omega)\times\stackrel{\leftarrow}{\nabla''}]_{ik}\,[\mathbf{G}^*(\mathbf{r'},\mathbf{r''},\omega)\times\stackrel{\leftarrow}{\nabla''}]_{j}^{\ k}\right\}. \label{EdE1} \end{eqnarray} The correlation function $\langle\hat{E}_i (\mathbf{r}, \omega)\hat{E}^\dagger_j(\mathbf{r'}, \omega')\rangle$ is given by the complex conjugate of (\ref{EdE1}) with $\mathcal{N}(\omega)$ replaced by $\mathcal{N}(\omega)+1$. 
The right-hand side of (\ref{EdE1}) can be simplified as follows. We first note the symmetry property of the Green bi-tensor that holds for media invariant under time reversal (non-magnetic,\footnote{Note the distinction between the somewhat confusing terms {\it magnetic media} and {\it magnetodielectric media}. The former refers to media with permanent magnetizations in the absence of applied fields, the latter to media that have a non-trivial magnetic (and electric) response to electromagnetic fields.} non-moving media)~\cite{LL}: \begin{equation} \label{recip} G_{ij}(\mathbf{r},\mathbf{r'},\omega)=G_{ji}(\mathbf{r'},\mathbf{r},\omega). \end{equation} Take the matrix product of $\mathbf{G}^*(\mathbf{r''},\mathbf{r},\omega)$ with (\ref{green}) and integrate over $\mathbf{r}$; after integration by parts and use of (\ref{recip}), the imaginary part of the resulting relation simplifies to \begin{eqnarray} \fl \int\! \rmd^3\mathbf{r}\left\{-\kappa_\mathrm{I}(\mathbf{r},\omega)[\mathbf{G}^*(\mathbf{r''},\mathbf{r},\omega)\times\stackrel{\leftarrow}{\nabla}]_{ik}\,[\nabla \times\mathbf{G}(\mathbf{r},\mathbf{r'},\omega)]_{\ j}^{k} \right. \nonumber \\ \left. +\frac{\omega^2}{c^2} \varepsilon_\mathrm{I}(\mathbf{r},\omega)G^*_{\ ik}(\mathbf{r''},\mathbf{r},\omega)G^{k}_{\ j}(\mathbf{r},\mathbf{r'},\omega)\right\}=G_{\mathrm{I} ij}(\mathbf{r''},\mathbf{r'},\omega). \label{Gid} \end{eqnarray} Use of (\ref{Gid}) and its complex conjugate, together with (\ref{recip}), allows (\ref{EdE1}) to be simplified to \begin{eqnarray} \left\langle\mathbf{\hat{E}}^ \dagger (\mathbf{r}, \omega)\otimes\mathbf{\hat{E}}(\mathbf{r'}, \omega')\right \rangle \!&=& 4\pi\hbar\mu_0 \delta(\omega-\omega')\mathcal{N}(\omega)\omega^2\mathbf{G}_{\mathrm{I}}(\mathbf{r},\mathbf{r'},\omega) \label{EdE} \\ &=&\frac{\mathcal{N}(\omega)}{\mathcal{N}(\omega)+1}\left\langle\mathbf{\hat{E}} (\mathbf{r}, \omega)\otimes\mathbf{\hat{E}}^\dagger(\mathbf{r'}, \omega')\right \rangle. 
\label{EEd} \end{eqnarray} The equal-time correlation function (\ref{EEEEf}) is, from (\ref{EdE}) and (\ref{EEd}), \begin{equation} \label{EE} \left\langle\mathbf{\hat{E}} (\mathbf{r}, t)\otimes\mathbf{\hat{E}}(\mathbf{r'}, t)\right \rangle=\frac{\hbar\mu_0}{\pi}\int_0^\infty\rmd\omega\,\omega^2\coth\left(\frac{\hbar\omega}{2k_B T}\right)\mathbf{G}_{\mathrm{I}}(\mathbf{r},\mathbf{r'},\omega), \end{equation} where a factor of $2\mathcal{N}(\omega)+1$ has been rewritten as a hyperbolic cotangent (recall (\ref{N})). Correlation functions for the magnetic field operator are easily found from the electric-field expressions through use of (\ref{BE}), with the results \begin{eqnarray} \fl \left\langle\mathbf{\hat{B}}^ \dagger (\mathbf{r}, \omega)\otimes\mathbf{\hat{B}}(\mathbf{r'}, \omega')\right \rangle \!&=& 4\pi\hbar\mu_0 \delta(\omega-\omega')\mathcal{N}(\omega)\nabla\times\mathbf{G}_{\mathrm{I}}(\mathbf{r},\mathbf{r'},\omega)\times\stackrel{\leftarrow}{\nabla'} \label{BdB} \\ &=&\frac{\mathcal{N}(\omega)}{\mathcal{N}(\omega)+1}\left\langle\mathbf{\hat{B}} (\mathbf{r}, \omega)\otimes\mathbf{\hat{B}}^\dagger(\mathbf{r'}, \omega')\right \rangle, \label{BBd} \\ \fl \left\langle\mathbf{\hat{B}} (\mathbf{r}, t)\otimes\mathbf{\hat{B}}(\mathbf{r'}, t)\right \rangle&=&\frac{\hbar\mu_0}{\pi}\int_0^\infty\rmd\omega \,\coth\left(\frac{\hbar\omega}{2k_B T}\right)\nabla\times\mathbf{G}_{\mathrm{I}}(\mathbf{r},\mathbf{r'},\omega)\times\stackrel{\leftarrow}{\nabla'}. \label{BB} \end{eqnarray} Correlation functions for the reservoir field operators $\mathbf{\hat{X}}_\omega$ and $\mathbf{\hat{Y}}_\omega$ are found using (\ref{XCE}) and (\ref{YCB}). Our goal is to calculate the electromagnetic part of the energy density and stress tensor, by isolating the electromagnetic part of the expressions (\ref{rho}) and (\ref{stress}) when they are evaluated in the case of thermal equilibrium. 
This requires us to eliminate the reservoir fields by expressing them in terms of the electromagnetic fields and free-field terms that are independent of the coupling to the electromagnetic fields. Equations (\ref{XCE}) and (\ref{YCB}) perform this separation of the reservoir fields into terms that would be present in the absence of any coupling to the electromagnetic fields (the first terms on the right-hand sides) and electromagnetic terms (the second terms on the right-hand sides). Only terms in correlation functions that depend on the electromagnetic fields are required to obtain the electromagnetic part of the energy density and stress tensor. We therefore drop terms in correlation functions that are independent of the electromagnetic fields. (Further remarks on the non-electromagnetic parts of the energy density and stress tensor will be made in the next section.) The correlation function in the frequency domain of the reservoir field $\mathbf{\hat{X}}_\omega$ with itself is obtained using (\ref{XCE}). 
We use a subscript $\scriptstyle E$ on the correlation function to denote the fact that we include only the terms that depend on the electric field; this yields the following correlation function: \begin{eqnarray} \fl \left\langle\mathbf{\hat{X}}^\dagger_\omega (\mathbf{r}, \omega')\otimes\mathbf{\hat{X}}_\omega(\mathbf{r'}, \omega'')\right \rangle_E \nonumber \\[3pt] \fl \quad =\sqrt{\frac{\hbar}{2\omega}}\delta(\omega-\omega')\frac{\pi\alpha(\mathbf{r'},\omega)}{\omega}\left[\frac{1}{\omega-\omega''-\rmi0^+}+\frac{1}{\omega+\omega''}\right]\left\langle\mathbf{\hat{C}}^\dagger_\mathrm{e} (\mathbf{r}, \omega')\otimes\mathbf{\hat{E}}(\mathbf{r'}, \omega'')\right\rangle \nonumber \\[3pt] \fl \qquad +\sqrt{\frac{\hbar}{2\omega}}\delta(\omega-\omega'')\frac{\pi\alpha(\mathbf{r'},\omega)}{\omega}\left[\frac{1}{\omega-\omega'+\rmi0^+}+\frac{1}{\omega+\omega'}\right]\left\langle\mathbf{\hat{E}}^\dagger(\mathbf{r}, \omega')\otimes\mathbf{\hat{C}}_\mathrm{e}(\mathbf{r'}, \omega'')\right\rangle \nonumber \\[3pt] \fl \qquad +\frac{\alpha(\mathbf{r},\omega)\alpha(\mathbf{r'},\omega)}{4\omega^2}\left[\frac{1}{\omega-\omega'+\rmi0^+}+\frac{1}{\omega+\omega'}\right]\left[\frac{1}{\omega-\omega''-\rmi0^+}+\frac{1}{\omega+\omega''}\right] \nonumber \\[3pt] \fl \qquad\quad \times\left\langle\mathbf{\hat{E}}^\dagger(\mathbf{r}, \omega')\otimes\mathbf{\hat{E}}(\mathbf{r'}, \omega'')\right\rangle. \label{XdX0} \end{eqnarray} The third term on the right-hand side of (\ref{XdX0}) will produce a product of principal values if the quantities containing the infinitesimal number $0^+$ are expanded in terms of principal values and delta functions. This product of principal values would need to be treated with care depending on the integration variable to which they are referred (as well as integrations over $\omega'$ and $\omega''$, there will be an integration over $\omega$ in the energy density and stress). 
It is safer to rewrite the term in question in a form that does not produce a product of principal values; this can be done using the identity \begin{eqnarray} \fl \frac{1}{(\omega-\omega'+\rmi0^+)(\omega-\omega''-\rmi0^+)}=\frac{1}{\omega'-\omega''-2\rmi0^+}\left(\frac{1}{\omega-\omega'+\rmi0^+}-\frac{1}{\omega-\omega''-\rmi0^+}\right) \nonumber \\[3pt] =\frac{1}{\omega'-\omega''-2\rmi0^+}\left[\mathrm{P}\frac{\omega'-\omega''}{(\omega-\omega')(\omega-\omega'')}-\rmi\pi\delta(\omega-\omega')-\rmi\pi\delta(\omega-\omega'')\right] \nonumber \\[3pt] =\mathrm{P}\frac{1}{(\omega-\omega')(\omega-\omega'')}-\rmi\pi\frac{\delta(\omega-\omega')+\delta(\omega-\omega'')}{\omega'-\omega''-2\rmi0^+}. \label{PVid} \end{eqnarray} From (\ref{PVid}), the product of the quantities in square brackets in the third term on the right-hand side of (\ref{XdX0}) simplifies to \begin{eqnarray} \fl \left[\frac{1}{\omega-\omega'+\rmi0^+}+\frac{1}{\omega+\omega'}\right]\left[\frac{1}{\omega-\omega''-\rmi0^+}+\frac{1}{\omega+\omega''}\right] \nonumber \\[3pt] =\mathrm{P}\frac{4\omega^2}{(\omega^2-\omega'^2)(\omega^2-\omega''^2)}-\rmi\pi\frac{\delta(\omega-\omega')+\delta(\omega-\omega'')}{\omega'-\omega''-2\rmi0^+}, \label{PVid2} \end{eqnarray} where all terms apart from the right-hand side of (\ref{PVid}) were expanded as principal values and delta functions (we do not perform a similar expansion in the last term in (\ref{PVid}) and (\ref{PVid2}) at this stage simply to save space). 
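With the $\rmi 0^+$ regulators set aside, the algebraic content of (\ref{PVid}) and of the principal-value part of (\ref{PVid2}) is an exact partial-fraction rearrangement. This can be checked symbolically (a sketch, not part of the original text):

```python
# Partial-fraction identities underlying (PVid) and (PVid2),
# with the i*0^+ regulators set to zero (principal-value parts only).
import sympy as sp

w, w1, w2 = sp.symbols('omega omega1 omega2')

# first line of (PVid) without the regulators:
lhs = 1 / ((w - w1) * (w - w2))
rhs = (1 / (w1 - w2)) * (1/(w - w1) - 1/(w - w2))
assert sp.simplify(lhs - rhs) == 0

# principal-value part of (PVid2):
lhs2 = (1/(w - w1) + 1/(w + w1)) * (1/(w - w2) + 1/(w + w2))
rhs2 = 4*w**2 / ((w**2 - w1**2) * (w**2 - w2**2))
assert sp.simplify(lhs2 - rhs2) == 0
```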
At the risk of long-windedness, we rewrite (\ref{XdX0}) having inserted (\ref{PVid2}): \begin{eqnarray} \fl \left\langle\mathbf{\hat{X}}^\dagger_\omega (\mathbf{r}, \omega')\otimes\mathbf{\hat{X}}_\omega(\mathbf{r'}, \omega'')\right \rangle_E \nonumber \\[3pt] \fl \quad =\sqrt{\frac{\hbar}{2\omega}}\delta(\omega-\omega')\frac{\pi\alpha(\mathbf{r'},\omega)}{\omega}\left[\frac{1}{\omega-\omega''-\rmi0^+}+\frac{1}{\omega+\omega''}\right]\left\langle\mathbf{\hat{C}}^\dagger_\mathrm{e} (\mathbf{r}, \omega')\otimes\mathbf{\hat{E}}(\mathbf{r'}, \omega'')\right\rangle \nonumber \\[3pt] \fl \qquad +\sqrt{\frac{\hbar}{2\omega}}\delta(\omega-\omega'')\frac{\pi\alpha(\mathbf{r'},\omega)}{\omega}\left[\frac{1}{\omega-\omega'+\rmi0^+}+\frac{1}{\omega+\omega'}\right]\left\langle\mathbf{\hat{E}}^\dagger(\mathbf{r}, \omega')\otimes\mathbf{\hat{C}}_\mathrm{e}(\mathbf{r'}, \omega'')\right\rangle \nonumber \\[3pt] \fl \qquad +\frac{\alpha(\mathbf{r},\omega)\alpha(\mathbf{r'},\omega)}{4\omega^2}\left[\mathrm{P}\frac{4\omega^2}{(\omega^2-\omega'^2)(\omega^2-\omega''^2)}-\rmi\pi\frac{\delta(\omega-\omega')+\delta(\omega-\omega'')}{\omega'-\omega''-2\rmi0^+}\right] \nonumber \\[3pt] \fl \qquad\quad \times\left\langle\mathbf{\hat{E}}^\dagger(\mathbf{r}, \omega')\otimes\mathbf{\hat{E}}(\mathbf{r'}, \omega'')\right\rangle. 
\label{XdX0.1} \end{eqnarray} The correlation function in the third term on the right-hand side of (\ref{XdX0.1}) is given by (\ref{EdE}); the correlation functions in the first two terms are shown by (\ref{CdC})--(\ref{CC}), (\ref{EopG}) and (\ref{jopdef}) to be \begin{eqnarray} \fl \left\langle\hat{C}^\dagger_{\mathrm{e}i} (\mathbf{r}, \omega')\hat{E}_j(\mathbf{r'}, \omega'')\right\rangle=2\pi\mu_0\omega''^2\mathcal{N}(\omega')\left[\frac{\hbar\varepsilon_0}{\pi}\varepsilon_\mathrm{I}(\mathbf{r},\omega'')\right]^{1/2}G_{ji}(\mathbf{r'},\mathbf{r},\omega'')\delta(\omega'-\omega''), \label{CdE} \\[3pt] \fl \left\langle\hat{E}^\dagger_{i} (\mathbf{r}, \omega')\hat{C}_{\mathrm{e}j}(\mathbf{r'}, \omega'')\right\rangle=2\pi\mu_0\omega'^2\mathcal{N}(\omega')\left[\frac{\hbar\varepsilon_0}{\pi}\varepsilon_\mathrm{I}(\mathbf{r'},\omega')\right]^{1/2}G^*_{ij}(\mathbf{r},\mathbf{r'},\omega')\delta(\omega'-\omega''). \label{EdC} \end{eqnarray} Now we must focus on the expectation values containing only $\mathbf{\hat{X}}_\omega$ that are required for the expectation values of the energy density (\ref{rho}) and stress tensor (\ref{stress}). In writing the expectation values we employ a shortened notation for two limits involving the Green bi-tensor: \begin{eqnarray} \Delta^{\!E}_{\ \,ij}(\mathbf{r},\omega):=\omega^2\lim_{\mathbf{r'}\to\mathbf{r}}G_{ij}(\mathbf{r},\mathbf{r'},\omega), \label{DeltaE} \\ \Delta^{\!B}_{\ \,ij}(\mathbf{r},\omega):=\lim_{\mathbf{r'}\to\mathbf{r}}[\nabla\times\mathbf{G}(\mathbf{r},\mathbf{r'},\omega)\times\stackrel{\leftarrow}{\nabla'}]_{ij}. \label{DeltaB} \end{eqnarray} The limit $\mathbf{r'}\to\mathbf{r}$ appears because the expectation values of the energy density (\ref{rho}) and stress tensor (\ref{stress}) contain correlation functions evaluated at $\mathbf{r'}=\mathbf{r}$. 
But the Green bi-tensor itself diverges when $\mathbf{r'}=\mathbf{r}$, so the limit in (\ref{DeltaE}) and (\ref{DeltaB}) must be understood to be taken only in the final expressions for physical quantities. The zero-point part of the Casimir effect requires a regularization to remove the divergent zero-point energy that is always present in a homogeneous medium (including vacuum) and that does not contribute to the Casimir force. This regularization is implemented at the level of the Green bi-tensor in a manner familiar from Lifshitz theory~\cite{lif55, dzy61, LL}. We also understand this regularization to be included in (\ref{DeltaE}) and (\ref{DeltaB}) when they appear below in the Casimir energy density and stress tensor. For the energy density (\ref{rho}) we need $\int_0^\infty\rmd\omega\langle(\partial_t\mathbf{\hat{X}}_\omega)^2/2+\omega^2\mathbf{\hat{X}}_\omega^2/2\rangle$; from the general relation (\ref{EEEEf}) it therefore follows that we must insert a factor of $(\omega'\omega''+\omega^2)/2$ into (\ref{XdX0.1}).
Inserting this factor and substituting (\ref{CdE}), (\ref{EdC}), (\ref{EdE}) and (\ref{ab}) we obtain, after minor simplifications, \begin{eqnarray} \fl \left\langle\frac{1}{2}(\omega'\omega''+\omega^2)\hat{X}^\dagger_{\omega i} (\mathbf{r}, \omega'){\hat{X}}_{\omega j}(\mathbf{r}, \omega'')\right \rangle_E \nonumber \\[3pt] \fl \quad =\frac{\hbar\pi}{c^2}\frac{(\omega'\omega''+\omega^2)}{\omega}\sqrt{ \varepsilon_\mathrm{I}(\mathbf{r},\omega) \varepsilon_\mathrm{I}(\mathbf{r},\omega'')}\left[\mathrm{P}\frac{2\omega}{\omega^2-\omega''^2}+\rmi\pi\delta(\omega-\omega'')\right]\mathcal{N}(\omega') \nonumber \\[3pt] \times\delta(\omega-\omega')\delta(\omega'-\omega'')\Delta^{\!E}_{\ \,ji}(\mathbf{r},\omega') \nonumber \\[3pt] \fl \qquad +\frac{\hbar\pi}{c^2}\frac{(\omega'\omega''+\omega^2)}{\omega}\sqrt{ \varepsilon_\mathrm{I}(\mathbf{r},\omega) \varepsilon_\mathrm{I}(\mathbf{r},\omega')}\left[\mathrm{P}\frac{2\omega}{\omega^2-\omega'^2}-\rmi\pi\delta(\omega-\omega')\right]\mathcal{N}(\omega') \nonumber \\[3pt] \times\delta(\omega-\omega'')\delta(\omega'-\omega'')\Delta^{\!E*}_{\ \,ij}(\mathbf{r},\omega') \nonumber \\[3pt] \fl \qquad +\frac{\hbar}{c^2}\frac{(\omega'\omega''+\omega^2)}{\omega}\varepsilon_\mathrm{I}(\mathbf{r},\omega)\left[\mathrm{P}\frac{4\omega^2}{(\omega^2-\omega'^2)(\omega^2-\omega''^2)}-\rmi\pi\frac{\delta(\omega-\omega')+\delta(\omega-\omega'')}{\omega'-\omega''-2\rmi0^+}\right] \nonumber \\[3pt] \times\mathcal{N}(\omega')\delta(\omega'-\omega'')\mathrm{Im}\Delta^{\!E}_{\ \,ij}(\mathbf{r},\omega') . \label{XdX0.2} \end{eqnarray} The right-hand side of (\ref{XdX0.2}) consists of a sum of three terms. 
The first two terms each contain a product of three delta functions; these can be combined and written as \begin{eqnarray} \fl \frac{2\hbar\pi^2\omega}{c^2}\varepsilon_\mathrm{I}(\mathbf{r},\omega)\mathcal{N}(\omega)\left[\rmi\Delta^{\!E}_{\ \,ji}(\mathbf{r},\omega')-\rmi \Delta^{\!E*}_{\ \,ij}(\mathbf{r},\omega')\right] \delta(\omega-\omega')\delta(\omega-\omega'')\delta(\omega'-\omega''). \label{3delta} \end{eqnarray} The third term on the right-hand side of (\ref{XdX0.2}) has a part containing the sum of two delta functions in a numerator; this part gives the contribution \begin{eqnarray} \fl -\rmi\frac{\hbar\pi}{c^2}\frac{(\omega'\omega''+\omega^2)}{\omega}\varepsilon_\mathrm{I}(\mathbf{r},\omega)[\delta(\omega-\omega')+\delta(\omega-\omega'')]\left[\mathrm{P}\frac{1}{(\omega'-\omega'')}+\rmi\pi\delta(\omega'-\omega'')\right] \nonumber \\[3pt] \times\mathcal{N}(\omega')\delta(\omega'-\omega'')\mathrm{Im}\Delta^{\!E}_{\ \,ij}(\mathbf{r},\omega') \nonumber \\[3pt] \fl \qquad =-\rmi\frac{\hbar\pi}{c^2}\frac{(\omega'\omega''+\omega^2)}{\omega}\varepsilon_\mathrm{I}(\mathbf{r},\omega)\left[\mathrm{P}\frac{\delta(\omega-\omega')}{(\omega-\omega'')}+\mathrm{P}\frac{\delta(\omega-\omega'')}{(\omega'-\omega)}+2\rmi\pi\delta(\omega-\omega')\delta(\omega-\omega'')\right] \nonumber \\[3pt] \qquad \times\mathcal{N}(\omega')\delta(\omega'-\omega'')\mathrm{Im}\Delta^{\!E}_{\ \,ij}(\mathbf{r},\omega') \nonumber \\[3pt] \fl \qquad =\frac{4\hbar\pi^2\omega}{c^2}\varepsilon_\mathrm{I}(\mathbf{r},\omega)\mathcal{N}(\omega)\mathrm{Im}\Delta^{\!E}_{\ \,ij}(\mathbf{r},\omega') \delta(\omega-\omega')\delta(\omega-\omega'')\delta(\omega'-\omega''), \end{eqnarray} which cancels with (\ref{3delta}).
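The cancellation rests on the symmetry $\Delta^{\!E}_{\ \,ij}=\Delta^{\!E}_{\ \,ji}$: the bracket in (\ref{3delta}) then reduces to $\rmi(\Delta-\Delta^*)=-2\,\mathrm{Im}\,\Delta$, which is equal and opposite to the $\mathrm{Im}\,\Delta$ term above once the prefactors are matched. A minimal check of the complex-number step (a sketch, not part of the original text; $a+\rmi b$ stands in for a generic symmetric-index entry of $\Delta^{\!E}$):

```python
# The bracket i*Delta - i*Delta^* equals -2*Im(Delta) for a generic
# complex entry Delta = a + i*b with real a, b.
import sympy as sp

a, b = sp.symbols('a b', real=True)
Delta = a + sp.I*b
bracket = sp.I*Delta - sp.I*sp.conjugate(Delta)
assert sp.simplify(bracket + 2*sp.im(Delta)) == 0
```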
The first two terms in the sum on the right-hand side of (\ref{XdX0.2}) have now been reduced to the parts containing a principal value; consider the first of these, namely \begin{equation} \fl \frac{2\hbar\pi}{c^2}(\omega'\omega''+\omega^2)\sqrt{ \varepsilon_\mathrm{I}(\mathbf{r},\omega) \varepsilon_\mathrm{I}(\mathbf{r},\omega'')}\,\mathrm{P}\frac{1}{\omega^2-\omega''^2}\mathcal{N}(\omega')\delta(\omega-\omega')\delta(\omega'-\omega'')\Delta^{\!E}_{\ \,ji}(\mathbf{r},\omega'). \label{P2del} \end{equation} Because of the delta functions, (\ref{P2del}) is restricted to contribute only when the denominator in the principal value vanishes; by the definition of a principal value there is no such contribution, unless the factor $\omega-\omega''$ in the denominator $\omega^2-\omega''^2$ is canceled by an equal factor $\omega-\omega''$ in the numerator. But there is such a factor in the numerator, obtained by Taylor expanding the remaining function of $\omega$ (apart from the factor $\omega-\omega''$ in the denominator) around the point $\omega=\omega''$; the contribution of (\ref{P2del}) is therefore \begin{eqnarray} \fl \frac{2\hbar\pi}{c^2}\sqrt{ \varepsilon_\mathrm{I}(\mathbf{r},\omega'') }\mathcal{N}(\omega')\delta(\omega-\omega')\delta(\omega'-\omega'')\Delta^{\!E}_{\ \,ji}(\mathbf{r},\omega')\frac{\rmd}{\rmd\omega}\left.\left[\sqrt{ \varepsilon_\mathrm{I}(\mathbf{r},\omega)}\frac{(\omega'\omega''+\omega^2)}{\omega+\omega''}\right]\right|_{\omega=\omega''} \nonumber \\[3pt] = \frac{\hbar\pi}{c^2}\frac{\rmd\left[\omega''\varepsilon_\mathrm{I}(\mathbf{r},\omega'')\right]}{\rmd\omega''} \mathcal{N}(\omega')\delta(\omega-\omega')\delta(\omega'-\omega'')\Delta^{\!E}_{\ \,ji}(\mathbf{r},\omega'). \label{P2del3} \end{eqnarray} The principal value in the second term in the sum on the right-hand side of (\ref{XdX0.2}) is dealt with in the manner employed for (\ref{P2del}).
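For completeness, the derivative in (\ref{P2del3}) is evaluated as follows: with $\omega'=\omega''$ enforced by the delta functions,
\begin{equation}
\fl 2\sqrt{\varepsilon_\mathrm{I}(\mathbf{r},\omega'')}\,\frac{\rmd}{\rmd\omega}\left.\left[\sqrt{ \varepsilon_\mathrm{I}(\mathbf{r},\omega)}\frac{(\omega'\omega''+\omega^2)}{\omega+\omega''}\right]\right|_{\omega=\omega''}=\omega''\frac{\rmd \varepsilon_\mathrm{I}(\mathbf{r},\omega'')}{\rmd\omega''}+\varepsilon_\mathrm{I}(\mathbf{r},\omega'')=\frac{\rmd\left[\omega''\varepsilon_\mathrm{I}(\mathbf{r},\omega'')\right]}{\rmd\omega''},
\end{equation}
the first term coming from the derivative of the square root and the second from the derivative of $(\omega'\omega''+\omega^2)/(\omega+\omega'')$, which equals $1/2$ at $\omega=\omega'=\omega''$.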
Implementing all these simplifications of (\ref{XdX0.2}), and using the symmetry $\Delta^{\!E}_{\ \,ij}(\mathbf{r},\omega)=\Delta^{\!E}_{\ \,ji}(\mathbf{r},\omega)$ that follows from (\ref{recip}) and (\ref{DeltaE}), we obtain \begin{eqnarray} \fl \left\langle\frac{1}{2}(\omega'\omega''+\omega^2)\hat{X}^\dagger_{\omega i} (\mathbf{r}, \omega'){\hat{X}}_{\omega j}(\mathbf{r}, \omega'')\right \rangle_E \nonumber \\[3pt] =\frac{2\hbar\pi}{c^2}\frac{\rmd\left[\omega'\varepsilon_\mathrm{I}(\mathbf{r},\omega')\right]}{\rmd\omega'} \mathcal{N}(\omega')\delta(\omega-\omega')\delta(\omega'-\omega'')\mathrm{Re}\Delta^{\!E}_{\ \,ij}(\mathbf{r},\omega') \nonumber \\[3pt] \quad +\frac{4\hbar}{c^2}\mathrm{P}\frac{\omega(\omega'^2+\omega^2)}{(\omega^2-\omega'^2)^2} \varepsilon_\mathrm{I}(\mathbf{r},\omega)\mathcal{N}(\omega')\delta(\omega'-\omega'')\mathrm{Im}\Delta^{\!E}_{\ \,ij}(\mathbf{r},\omega') \label{XdXen} \\[3pt] =\frac{\mathcal{N}(\omega')}{\mathcal{N}(\omega')+1}\left\langle\frac{1}{2}(\omega'\omega''+\omega^2)\hat{X}_{\omega i} (\mathbf{r}, \omega'){\hat{X}}^\dagger_{\omega j}(\mathbf{r}, \omega'')\right \rangle_E. \label{XXden} \end{eqnarray} The equality (\ref{XXden}) is easily verified by tracing back minor changes in the derivation of (\ref{XdXen}).
We can now write the time-domain expectation value $\int_0^\infty\rmd\omega\langle(\partial\mathbf{\hat{X}}_\omega)^2/2+\omega^2\mathbf{\hat{X}}_\omega^2/2\rangle$ using (\ref{XXden}), (\ref{XdXen}) and the general relation (\ref{EEEEf}): \begin{eqnarray} \fl \left\langle\int_0^\infty\rmd\omega\left[\frac{1}{2}(\partial_t\mathbf{\hat{X}}_\omega)^2+\frac{1}{2}\omega^2\mathbf{\hat{X}}_\omega^2\right]\right\rangle_E \nonumber \\[3pt] \fl \qquad \quad =\frac{\hbar}{2\pi c^2}\int_0^\infty\rmd\omega\,\frac{\rmd\left[\omega\varepsilon_\mathrm{I}(\mathbf{r},\omega)\right]}{\rmd\omega} \coth\left(\frac{\hbar\omega}{2k_B T}\right)\mathrm{Re}\Delta^{\!E\ i}_{\ \,i}(\mathbf{r},\omega) \nonumber \\[3pt] \fl \qquad \qquad +\frac{\hbar}{\pi^2c^2}\int_0^\infty\rmd\omega\int_0^\infty\rmd\omega' \,\mathrm{P}\frac{\omega(\omega'^2+\omega^2)}{(\omega^2-\omega'^2)^2} \varepsilon_\mathrm{I}(\mathbf{r},\omega) \coth\left(\frac{\hbar\omega'}{2k_B T}\right)\mathrm{Im}\Delta^{\!E\ i}_{\ \,i}(\mathbf{r},\omega') \label{rhoX1} \end{eqnarray} Use of the identity \begin{equation} \frac{\omega(\omega'^2+\omega^2)}{(\omega^2-\omega'^2)^2}=\frac{\rmd}{\rmd\omega'}\left(\frac{\omega\omega'}{\omega^2-\omega'^2}\right) \end{equation} in the last term of (\ref{rhoX1}) allows the integration over $\omega$ to be performed by means of the Kramers-Kronig relation (\ref{KK}), yielding finally \begin{eqnarray} \fl \left\langle\int_0^\infty\rmd\omega\left[\frac{1}{2}(\partial_t\mathbf{\hat{X}}_\omega)^2+\frac{1}{2}\omega^2\mathbf{\hat{X}}_\omega^2\right]\right\rangle _E \nonumber \\[3pt] \fl \qquad \qquad =\frac{\hbar}{2\pi c^2}\mathrm{Im}\int_0^\infty\rmd\omega' \, \coth\left(\frac{\hbar\omega'}{2k_B T}\right)\left(\frac{\rmd}{\rmd\omega'}\left\{\omega'\left[\varepsilon(\mathbf{r},\omega')-1\right]\right\}\right) \Delta^{\!E\ i}_{\ \,i}(\mathbf{r},\omega'). 
\label{rhoX} \end{eqnarray} Turning now to the expectation value containing only $\mathbf{\hat{X}}_\omega$ that appears in the expectation value of the stress tensor (\ref{stress}), we see that this is $\int_0^\infty\rmd\omega\langle(\partial\mathbf{\hat{X}}_\omega)^2/2-\omega^2\mathbf{\hat{X}}_\omega^2/2\rangle$. Recalling the general relation (\ref{EEEEf}), we must therefore insert a factor $(\omega'\omega''-\omega^2)/2$ in (\ref{XdX0.1}). This factor, in the first term on the right-hand side of (\ref{XdX0.1}), can be written $\omega'(\omega''-\omega)/2$ because of the presence of $\delta(\omega-\omega')$, and so the pole at $\omega=\omega''$ inside the square brackets is cancelled. Similarly, the pole at $\omega=\omega'$ in the second term on the right-hand side of (\ref{XdX0.1}) is also cancelled by the factor $(\omega'\omega''-\omega^2)/2$. In the final term on the right-hand side of (\ref{XdX0.1}), multiplication by the factor $(\omega'\omega''-\omega^2)/2$ also produces a simplification that follows from \begin{equation} \fl (\omega'\omega''-\omega^2)\frac{\delta(\omega-\omega')+\delta(\omega-\omega'')}{\omega'-\omega''-2\rmi0^+}=-\omega'\delta(\omega-\omega')+\omega''\delta(\omega-\omega''), \end{equation} which vanishes when combined with a factor $\delta(\omega'-\omega'')$ from (\ref{EdE}).
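To verify the last identity, note that under $\delta(\omega-\omega')$ the factor $\omega'\omega''-\omega^2$ becomes $\omega'(\omega''-\omega')$, which cancels the denominator:
\begin{equation}
\fl (\omega'\omega''-\omega^2)\frac{\delta(\omega-\omega')}{\omega'-\omega''-2\rmi0^+}=\frac{-\omega'(\omega'-\omega'')}{\omega'-\omega''-2\rmi0^+}\,\delta(\omega-\omega')=-\omega'\delta(\omega-\omega'),
\end{equation}
the infinitesimal being irrelevant once the numerator vanishes with the denominator; the $\delta(\omega-\omega'')$ term is treated in the same way and yields $+\omega''\delta(\omega-\omega'')$. Multiplication by $\delta(\omega'-\omega'')$ then sets $\omega'=\omega''$, whereupon the two resulting terms cancel.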
We substitute (\ref{CdE}), (\ref{EdC}), (\ref{EdE}) and (\ref{ab}) and find \begin{eqnarray} \fl \left\langle\frac{1}{2}(\omega'\omega''-\omega^2)\hat{X}^\dagger_{\omega i} (\mathbf{r}, \omega'){\hat{X}}_{\omega j}(\mathbf{r}, \omega'')\right \rangle_E \nonumber \\[3pt] =-\frac{2\hbar\pi}{c^2}\varepsilon_\mathrm{I}(\mathbf{r},\omega)\mathcal{N}(\omega') \delta(\omega-\omega')\delta(\omega'-\omega'')\mathrm{Re}\Delta^{\!E}_{\ \,ij}(\mathbf{r},\omega') \nonumber \\[3pt] \quad -\frac{4\hbar}{c^2}\varepsilon_\mathrm{I}(\mathbf{r},\omega)\mathrm{P}\frac{\omega}{(\omega^2-\omega'^2)}\mathcal{N}(\omega')\delta(\omega'-\omega'')\mathrm{Im}\Delta^{\!E}_{\ \,ij}(\mathbf{r},\omega') \label{XdXstress} \\[3pt] =\frac{\mathcal{N}(\omega')}{\mathcal{N}(\omega')+1}\left\langle\frac{1}{2}(\omega'\omega''-\omega^2)\hat{X}_{\omega i} (\mathbf{r}, \omega'){\hat{X}}^\dagger_{\omega j}(\mathbf{r}, \omega'')\right \rangle_E. \label{XXdstress} \end{eqnarray} The time-domain expectation value $\int_0^\infty\rmd\omega\langle(\partial\mathbf{\hat{X}}_\omega)^2/2-\omega^2\mathbf{\hat{X}}_\omega^2/2\rangle$ follows from (\ref{XdXstress}), (\ref{XXdstress}) and the general relation (\ref{EEEEf}); the Kramers-Kronig relation (\ref{KK}) can be immediately applied, with the result \begin{eqnarray} \fl \left\langle\int_0^\infty\rmd\omega\left[\frac{1}{2}(\partial_t\mathbf{\hat{X}}_\omega)^2-\frac{1}{2}\omega^2\mathbf{\hat{X}}_\omega^2\right]\right\rangle_E \nonumber \\[3pt] \fl \qquad \qquad =-\frac{\hbar}{2\pi c^2}\mathrm{Im}\int_0^\infty\rmd\omega' \, \coth\left(\frac{\hbar\omega'}{2k_B T}\right)\left[\varepsilon(\mathbf{r},\omega')-1\right]\Delta^{\!E\ i}_{\ \,i}(\mathbf{r},\omega'). \label{stressX} \end{eqnarray} Expectation values for the $\mathbf{\hat{Y}}_\omega$ field analogous to (\ref{rhoX}) and (\ref{stressX}) are also required. 
In keeping with the discussion earlier in this section, only terms that depend on the magnetic-field part of $\mathbf{\hat{Y}}_\omega$ in (\ref{YCB}) are included in the calculation and we denote this fact by a subscript $\scriptstyle B$ on expectation values. The derivations are very similar to those described in detail above for the case of the $\mathbf{\hat{X}}_\omega$ field; we therefore simply state the results: \begin{eqnarray} \fl \left\langle\int_0^\infty\rmd\omega\left[\frac{1}{2}(\partial_t\mathbf{\hat{Y}}_\omega)^2+\frac{1}{2}\omega^2\mathbf{\hat{Y}}_\omega^2\right]\right\rangle_B \nonumber \\[3pt] \fl \qquad \qquad =\frac{\hbar}{2\pi}\mathrm{Im}\int_0^\infty\rmd\omega' \, \coth\left(\frac{\hbar\omega'}{2k_B T}\right)\left(\frac{\rmd}{\rmd\omega'}\left\{-\omega'\left[\kappa(\mathbf{r},\omega')-1\right]\right\}\right)\Delta^{\!B\ i}_{\ \,i}(\mathbf{r},\omega'). \label{rhoY} \end{eqnarray} \begin{eqnarray} \fl \left\langle\int_0^\infty\rmd\omega\left[\frac{1}{2}(\partial_t\mathbf{\hat{Y}}_\omega)^2-\frac{1}{2}\omega^2\mathbf{\hat{Y}}_\omega^2\right]\right\rangle_B \nonumber \\[3pt] \fl \qquad \qquad =\frac{\hbar}{2\pi}\mathrm{Im}\int_0^\infty\rmd\omega' \, \coth\left(\frac{\hbar\omega'}{2k_B T}\right)\left[\kappa(\mathbf{r},\omega')-1\right]\Delta^{\!B\ i}_{\ \,i}(\mathbf{r},\omega'), \label{stressY} \end{eqnarray} where the definition (\ref{DeltaB}) has been employed. To calculate the expectation values of the energy density (\ref{rho}) and stress tensor (\ref{stress}) we also require the expectation values of $\int_0^\infty\rmd\omega\,\beta (\mathbf{r},\omega)\hat{Y}_{\omega i}\hat{B}_j$ and $\int_0^\infty\rmd\omega\,\alpha (\mathbf{r},\omega)\hat{X}_{\omega i}\hat{E}_j$. To find the first of these expectation values, we calculate the frequency-domain correlation function of $\mathbf{\hat{Y}}_\omega$ and $\mathbf{\hat{B}}$.
Again we include only terms that depend on the magnetic-field part of $\mathbf{\hat{Y}}_\omega$ in (\ref{YCB}) and denote this fact by a subscript $\scriptstyle B$ on the correlation function. From (\ref{YCB}) and (\ref{BE}) we find \begin{eqnarray} \fl \left\langle\mathbf{\hat{Y}}^\dagger_\omega (\mathbf{r}, \omega')\otimes\mathbf{\hat{B}}(\mathbf{r'}, \omega'')\right \rangle_B \nonumber \\[3pt] \fl \qquad = -\rmi\frac{\pi}{\omega''}\sqrt{\frac{2\hbar}{\omega}}\delta(\omega-\omega')\left\langle\mathbf{\hat{C}}^\dagger_{\mathrm{m}}(\mathbf{r}, \omega')\otimes\nabla'\times\mathbf{\hat{E}}(\mathbf{r'}, \omega'')\right \rangle \nonumber \\[3pt] \fl \qquad \quad + \frac{\beta (\mathbf{r},\omega)}{2\omega\omega'\omega''} \left[\mathrm{P}\frac{2\omega}{\omega^2-\omega'^2}-\rmi\pi\delta(\omega-\omega')\right]\left\langle\nabla\times\mathbf{\hat{E}}^\dagger(\mathbf{r}, \omega')\otimes\nabla'\times\mathbf{\hat{E}}(\mathbf{r'}, \omega'')\right\rangle. \label{YBd0} \end{eqnarray} The second correlation function on the right-hand side of (\ref{YBd0}) is found from (\ref{EdE}); the first correlation function is shown by (\ref{CdC})--(\ref{CC}), (\ref{EopG}) and (\ref{jopdef}) to be \begin{eqnarray} \fl \left\langle\hat{C}^\dagger_{\mathrm{m}i} (\mathbf{r}, \omega')(\nabla'\times\mathbf{\hat{E}})_j(\mathbf{r'}, \omega'')\right\rangle \nonumber \\[3pt] \fl \qquad =2\pi\rmi\mu_0\omega''\left[-\frac{\hbar\kappa_0}{\pi}\kappa_\mathrm{I}(\mathbf{r},\omega'')\right]^{1/2}\mathcal{N}(\omega')(\nabla'\times\mathbf{G}(\mathbf{r'},\mathbf{r},\omega')\times\stackrel{\leftarrow}{\nabla})_{ji}\delta(\omega'-\omega''). 
\label{CdcurlE} \end{eqnarray} Insertion of (\ref{EdE}) and (\ref{CdcurlE}) in (\ref{YBd0}), with use of (\ref{ab}), gives \begin{eqnarray} \fl \left\langle\hat{Y}^\dagger_{\omega i}(\mathbf{r}, \omega')\hat{B}_j(\mathbf{r'}, \omega'')\right \rangle_B \nonumber \\[3pt] \fl \quad =2\pi^2\mu_0\sqrt{\frac{2\hbar}{\omega}}\left[-\frac{\hbar\kappa_0}{\pi}\kappa_\mathrm{I}(\mathbf{r},\omega'')\right]^{1/2}\mathcal{N}(\omega')(\nabla'\times\mathbf{G}(\mathbf{r'},\mathbf{r},\omega')\times\stackrel{\leftarrow}{\nabla})_{ji}\delta(\omega-\omega')\delta(\omega'-\omega'') \nonumber \\[3pt] \fl \qquad +\frac{2\pi\hbar\mu_0\omega'}{\omega\omega''}\left[-\frac{2\kappa_0}{\pi}\omega\kappa_\mathrm{I}(\mathbf{r},\omega'')\right]^{1/2} \delta(\omega'-\omega'')\mathcal{N}(\omega') \nonumber \\[3pt] \times\left[\mathrm{P}\frac{2\omega}{\omega^2-\omega'^2}-\rmi\pi\delta(\omega-\omega')\right](\nabla\times\mathbf{G}_{\mathrm{I}}(\mathbf{r},\mathbf{r'},\omega')\times\stackrel{\leftarrow}{\nabla'})_{ij}. \label{YdB} \end{eqnarray} It is straightforward to show that the correlation function $\left\langle\hat{Y}_{\omega i}(\mathbf{r}, \omega')\hat{B}^\dagger_j(\mathbf{r'}, \omega'')\right \rangle_B$ differs from (\ref{YdB}) by a complex conjugation and the replacement of $\mathcal{N}(\omega')$ by $\mathcal{N}(\omega')+1$; we can then compute the equal-time correlation function of $\mathbf{\hat{Y}}_\omega$ and $\mathbf{\hat{B}}$ using these frequency-domain results and the general relation (\ref{EEEEf}). 
The expectation value of interest is $\langle\int_0^\infty\rmd\omega\,\beta (\mathbf{r},\omega)\hat{Y}_{\omega i}\hat{B}_j\rangle_B$; with use of (\ref{ab}) and the Kramers-Kronig relation (\ref{KK}), this expectation value is found to be \begin{eqnarray} \fl \left\langle\int_0^\infty\rmd\omega\,\beta (\mathbf{r},\omega)\hat{Y}_{\omega i} (\mathbf{r}, t)\hat{B}_j(\mathbf{r}, t)\right \rangle_B \nonumber \\[3pt] = -\frac{\hbar}{\pi}\mathrm{Im}\int_0^\infty\rmd\omega'\, \left[\kappa (\mathbf{r},\omega')-1\right] \coth\left(\frac{\hbar\omega'}{2k_B T}\right)\Delta^{\!B}_{\ \,ij}(\mathbf{r},\omega'). \label{betaYB} \end{eqnarray} The final expectation value required for our purposes is that of $\int_0^\infty\rmd\omega\,\alpha (\mathbf{r},\omega)\hat{X}_{\omega i}\hat{E}_j$. Only terms that depend on the electric-field part of $\mathbf{\hat{X}}_\omega$ in (\ref{XCE}) are included; this fact is denoted by a subscript $\scriptstyle E$ on the correlation function. The calculation exactly parallels that leading to (\ref{betaYB}) and the result is \begin{eqnarray} \fl \left\langle\int_0^\infty\rmd\omega\,\alpha (\mathbf{r},\omega)\hat{X}_{\omega i} (\mathbf{r}, t)\hat{E}_j(\mathbf{r}, t)\right \rangle_E \nonumber \\[3pt] = \frac{\hbar}{\pi c^2}\mathrm{Im}\int_0^\infty\rmd\omega'\,\left[\varepsilon (\mathbf{r},\omega')-1\right] \coth\left(\frac{\hbar\omega'}{2k_B T}\right)\Delta^{\!E}_{\ \,ij}(\mathbf{r},\omega'). \label{alphaXE} \end{eqnarray} This completes the set of expectation values needed to compute the electromagnetic part of the energy density and stress tensor in thermal equilibrium. \section{Casimir energy density} \label{sec:en} In the previous section we ignored the free-field parts of the operators $\mathbf{\hat{X}}_\omega$ and $\mathbf{\hat{Y}}_\omega$.
The rationale for this omission in deriving the Casimir effect is that the stress-energy associated with the free-field part of the reservoir represents the absorbed energy due to the dissipation of the medium, together with the zero-point energy of the reservoir. The dissipated energy is included in the canonical theory of macroscopic electromagnetism, with the result that the system is closed and a proper quantization can be performed~\cite{phi10}. The description as a closed system is also essential to the existence of a stress-energy-momentum tensor, derived in section~\ref{sec:em}. But Casimir forces are caused by the stress-energy of the electromagnetic fields, not by the stress-energy absorbed and dissipated in the medium. This is why the correlation functions of the previous section, which include only the electromagnetic contribution, determine the Casimir energy density and stress. The classical expression (\ref{rho}) for the energy density also holds in the quantum theory, except that the final term must be written in a hermitian form to give a hermitian energy-density operator. We thus have the operator \begin{eqnarray} \fl \hat{\rho}=&\frac{\kappa_0}{2}\left[\frac{1}{c^2}\mathbf{\hat{E}}^2+\mathbf{\hat{B}}^2\right] \nonumber \\[3pt] \fl &+\int_0^\infty\rmd\omega\left[\frac{1}{2}(\partial_t\mathbf{\hat{X}}_\omega)^2+\frac{1}{2}(\partial_t\mathbf{\hat{Y}}_\omega)^2+\frac{1}{2}\omega^2(\mathbf{\hat{X}}_\omega^2+\mathbf{\hat{Y}}_\omega^2)-\frac{1}{2}\beta\left(\mathbf{\hat{Y}}_\omega\cdot\mathbf{\hat{B}}+\mathbf{\hat{B}} \cdot\mathbf{\hat{Y}}_\omega\right)\right]. \label{rhoop} \end{eqnarray} The $\beta$-dependent term in (\ref{rhoop}) has an expectation value that follows immediately from (\ref{betaYB}); the hermitian combination in (\ref{rhoop}) just picks out the real part of (\ref{betaYB}), which is the entire right-hand side. 
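Explicitly, since $\hat{Y}_{\omega i}$ and $\hat{B}_j$ are hermitian operators, $(\hat{Y}_{\omega i}\hat{B}_j)^\dagger=\hat{B}_j\hat{Y}_{\omega i}$, and therefore
\begin{equation}
\frac{1}{2}\left\langle\hat{Y}_{\omega i}\hat{B}_j+\hat{B}_j\hat{Y}_{\omega i}\right\rangle=\mathrm{Re}\left\langle\hat{Y}_{\omega i}\hat{B}_j\right\rangle,
\end{equation}
and the right-hand side of (\ref{betaYB}) is real.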
The expectation values of the quadratic terms in the electric and magnetic fields in (\ref{rhoop}) are obtained from (\ref{EE}) and (\ref{BB}). When all terms are combined, the final expression for the Casimir energy density $\langle\hat{\rho}\rangle$ is \begin{eqnarray} \fl \left\langle\hat{\rho}\right\rangle =\frac{\hbar}{2\pi}\mathrm{Im}\int_0^\infty\rmd\omega \, \coth\left(\frac{\hbar\omega}{2k_B T}\right)&\left\{ \frac{1}{c^2}\frac{\rmd[\omega\varepsilon(\mathbf{r},\omega)]}{\rmd\omega} \Delta^{\!E\ i}_{\ \,i}(\mathbf{r},\omega)\right. \nonumber \\[3pt] &\left. \ \, +\left[\kappa(\mathbf{r},\omega)-\omega\frac{\rmd\kappa(\mathbf{r},\omega)}{\rmd\omega}\right] \Delta^{\!B\ i}_{\ \,i}(\mathbf{r},\omega)\right\}. \label{rhocas} \end{eqnarray} Note that the $\kappa$-dependent factor in (\ref{rhocas}) takes the form $[\rmd(\omega\mu)/\rmd\omega]/\mu^2$ when written in terms of $\mu=1/\kappa$. Casimir forces at zero temperature can be found by using (\ref{rhocas}) to calculate the total Casimir energy of a configuration of objects and taking derivatives with respect to the parameters specifying their separations and relative orientations. The factors in (\ref{rhocas}) that depend on the dielectric functions have a familiar form: the Brillouin expression~\cite{jac,LLcm} for the monochromatic electromagnetic energy density of a {\it lossless} medium contains the same quantities, where $\varepsilon$ and $\kappa$ are real in that case. The result (\ref{rhocas}) is in many ways remarkably simple, considering that it holds for arbitrary dispersion (consistent with the Kramers-Kronig relations). Dispersion has a highly complicating effect on the electromagnetic energy of general fields in media, even when the difficulty of losses can be ignored~\cite{phi11}.
Here the losses are compensated because of the imposition of thermal equilibrium, and the restriction to thermal (and zero-point) fields also has the special effect that dispersion contributes only through the simple first-order frequency derivatives in (\ref{rhocas}). For computational purposes it is more convenient to re-express the frequency integral in (\ref{rhocas}) as a sum over imaginary frequencies. Because of the property~\cite{LL} \begin{equation} \label{GG*} \mathbf{G}(\mathbf{r},\mathbf{r'},-\omega)=\mathbf{G}^*(\mathbf{r},\mathbf{r'},\omega) \end{equation} of the Green bi-tensor, its real part is even in $\omega$ while its imaginary part is odd, and the same property holds for $ \Delta^{\!E}_{\ \,ij}(\mathbf{r},\omega)$ and $ \Delta^{\!B}_{\ \,ij}(\mathbf{r},\omega)$ (recall (\ref{DeltaE}) and (\ref{DeltaB})). Moreover, the dielectric functions also have the property (\ref{GG*})~\cite{LL}. The hyperbolic cotangent in (\ref{rhocas}), on the other hand, is odd in $\omega$. All this means that the imaginary part of the integral in (\ref{rhocas}) is automatically extracted if we modify the integration over $\omega$ so that it runs from $-\infty$ to $\infty$ and multiply by $-\rmi/2$; thus (\ref{rhocas}) can be replaced by \begin{eqnarray} \fl \left\langle\hat{\rho}\right\rangle =-\rmi\frac{\hbar}{4\pi}\int_{-\infty}^\infty\rmd\omega \, \coth\left(\frac{\hbar\omega}{2k_B T}\right)&\left\{ \frac{1}{c^2}\frac{\rmd[\omega\varepsilon(\mathbf{r},\omega)]}{\rmd\omega} \Delta^{\!E\ i}_{\ \,i}(\mathbf{r},\omega)\right. \nonumber \\[3pt] &\left. \ \, +\left[\kappa(\mathbf{r},\omega)-\omega\frac{\rmd\kappa(\mathbf{r},\omega)}{\rmd\omega}\right] \Delta^{\!B\ i}_{\ \,i}(\mathbf{r},\omega)\right\}. \label{rhocas2} \end{eqnarray} We can now close the frequency integral in the upper-half complex frequency plane, where the dielectric functions and the Green bi-tensor are analytic~\cite{LLcm,LL}.
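In closing the contour, the only singularities picked up are the poles of the hyperbolic cotangent; near each such pole,
\begin{equation}
\coth\left(\frac{\hbar\omega}{2k_BT}\right)\approx\frac{2k_BT}{\hbar}\,\frac{1}{\omega-2\pi\rmi k_BTn/\hbar}, \qquad n=0,1,2,\dots,
\end{equation}
so every pole contributes $2\pi\rmi$ times its residue, and the prefactor $-\rmi\hbar/4\pi$ in (\ref{rhocas2}) combines with $2\pi\rmi\times 2k_BT/\hbar$ to give an overall factor $k_BT$ per pole.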
This contour integral is given by the sum of the residue contributions from the poles in the hyperbolic cotangent term at positive imaginary frequencies $\omega=2\pi\rmi k_BTn/\hbar$, $n=0,1,2,\dots$. With the notation \begin{equation} \label{xi} \rmi \xi_n:=\rmi \frac{2\pi k_BTn}{\hbar}, \qquad n=0,1,2,\dots, \end{equation} for these imaginary frequencies, the Casimir energy density (\ref{rhocas2}) then has the form \begin{eqnarray} \left\langle\hat{\rho}\right\rangle =k_BT{\sum_{n=0}}'&\left\{ \frac{1}{c^2}\left.\frac{\rmd[\omega\varepsilon(\mathbf{r},\omega)]}{\rmd\omega}\right|_{\omega=\rmi\xi_n} \Delta^{\!E\ i}_{\ \,i}(\mathbf{r},\rmi\xi_n)\right. \nonumber \\[3pt] &\left. \ \, +\left[\frac{1}{\mu^2(\mathbf{r},\omega)}\frac{\rmd[\omega\mu(\mathbf{r},\omega)]}{\rmd\omega}\right]_{\omega=\rmi\xi_n} \Delta^{\!B\ i}_{\ \,i}(\mathbf{r},\rmi\xi_n)\right\}, \label{rhocasim} \end{eqnarray} where the prime on the summation sign means that the first term in the sum is taken with a factor of $1/2$ (because the contour passes through the pole at $\omega=0)$, and where $\kappa$ has been replaced by $\mu(=1/\kappa)$. At zero temperature, where only the zero-point contribution remains, the sum in (\ref{rhocasim}) becomes an integral over positive imaginary frequencies: \begin{equation} \fl \left\langle\hat{\rho}\right\rangle_{T=0} =\frac{\hbar}{2\pi}\int_0^\infty\rmd\xi\left\{ \frac{1}{c^2}\frac{\rmd[\xi\varepsilon(\mathbf{r},\rmi\xi)]}{\rmd\xi} \Delta^{\!E\ i}_{\ \,i}(\mathbf{r},\rmi\xi) +\frac{1}{\mu^2(\mathbf{r},\rmi\xi)}\frac{\rmd[\xi\mu(\mathbf{r},\rmi\xi)]}{\rmd\xi} \Delta^{\!B\ i}_{\ \,i}(\mathbf{r},\rmi\xi)\right\}. \label{rhocasim0} \end{equation} The correct expression for the Casimir energy density in media has been a subject of conflicting assertions; as pointed out in~\cite{phi10b}, however, only the form (\ref{rhocasim0}) gives a Casimir force between parallel plates that agrees with the force obtained from the vacuum stress tensor between the plates. 
The correct form of the energy density emerges automatically from macroscopic QED, which is moreover a fully quantum treatment of the problem. \section{Casimir stress tensor} \label{sec:st} The quantum stress tensor operator has the same form as the classical expression (\ref{stress}), when the latter is written so as to give a hermitian operator: \begin{eqnarray} \fl \hat{\sigma}_{ij}&=\frac{1}{2}\delta_{ij}(\varepsilon_0\mathbf{\hat{E}}^2+\kappa_0\mathbf{\hat{B}}^2)-\varepsilon_0\hat{E}_i\hat{E}_j-\kappa_0\hat{B}_i\hat{B}_j \nonumber \\[3pt] \fl &+\int_0^\infty\rmd\omega\left\{ \delta_{ij}\left[\frac{1}{2}(\partial_t\mathbf{\hat{X}}_\omega)^2+\frac{1}{2}(\partial_t\mathbf{\hat{Y}}_\omega)^2-\frac{1}{2}\omega^2(\mathbf{{\hat{X}}}_\omega^2+\mathbf{{\hat{Y}}}_\omega^2)+\frac{1}{2}\alpha \left(\mathbf{{\hat{X}}}_\omega\cdot \mathbf{{\hat{E}}}+\mathbf{{\hat{E}}} \cdot\mathbf{{\hat{X}}}_\omega\right)\right]\right. \nonumber \\[3pt] \fl & \qquad \qquad \quad -\frac{1}{2}\alpha\left( \hat{E}_i\hat{X}_{\omega j}+\hat{X}_{\omega j}\hat{E}_i\right)+\frac{1}{2}\beta \left(\hat{Y}_{\omega i}\hat{B}_j+\hat{B}_j \hat{Y}_{\omega i}\right){\Bigg\}}. \label{stressop} \end{eqnarray} The Casimir stress tensor is the expectation value of the electromagnetic part of (\ref{stressop}) in thermal equilibrium. We proceed as in the case of the energy density in the last section. The $\alpha$- and $\beta$-dependent terms in (\ref{stressop}) have expectation values that follow directly from (\ref{alphaXE}) and (\ref{betaYB}), and the expectation values of the terms quadratic in the electric and magnetic fields are obtained from (\ref{EE}) and (\ref{BB}).
The Casimir stress tensor $\left\langle\hat{\sigma}_{ij}\right\rangle$ is thereby found to be \begin{eqnarray} \fl \left\langle\hat{\sigma}_{ij}\right\rangle =\frac{\hbar}{\pi}\mathrm{Im}\int_0^\infty\rmd\omega \, \coth\left(\frac{\hbar\omega}{2k_B T}\right)&\left\{ \frac{1}{c^2}\varepsilon(\mathbf{r},\omega)\left[\frac{1}{2}\delta_{ij}\Delta^{\!E\ k}_{\ \,k}(\mathbf{r},\omega)-\Delta^{\!E}_{\ \,ij} (\mathbf{r},\omega)\right]\right. \nonumber \\[3pt] &\left. \ \, +\kappa(\mathbf{r},\omega)\left[\frac{1}{2}\delta_{ij}\Delta^{\!B\ k}_{\ \,k}(\mathbf{r},\omega)-\Delta^{\!B}_{\ \,ij} (\mathbf{r},\omega)\right] \right\}. \label{stresscas} \end{eqnarray} As in the case of the Casimir energy density, a computationally more convenient formula for the stress tensor involves a sum over imaginary frequencies. The derivation is as described leading up to (\ref{rhocasim}) and the expression is \begin{eqnarray} \left\langle\hat{\sigma}_{ij}\right\rangle =2k_BT{\sum_{n=0}}'&\left\{ \frac{1}{c^2}\varepsilon(\mathbf{r},\rmi\xi_n)\left[\frac{1}{2}\delta_{ij}\Delta^{\!E\ k}_{\ \,k}(\mathbf{r},\rmi\xi_n)-\Delta^{\!E}_{\ \,ij} (\mathbf{r},\rmi\xi_n)\right]\right. \nonumber \\[3pt] &\left. \ \, +\kappa(\mathbf{r},\rmi\xi_n)\left[\frac{1}{2}\delta_{ij}\Delta^{\!B\ k}_{\ \,k}(\mathbf{r},\rmi\xi_n)-\Delta^{\!B}_{\ \,ij} (\mathbf{r},\rmi\xi_n)\right] \right\}. \label{stresscasim} \end{eqnarray} At zero temperature we obtain from (\ref{stresscasim}) the zero-point Casimir stress as an integral over positive imaginary frequencies: \begin{eqnarray} \left\langle\hat{\sigma}_{ij}\right\rangle =\frac{\hbar}{\pi}\int_0^\infty\rmd\xi&\left\{ \frac{1}{c^2}\varepsilon(\mathbf{r},\rmi\xi)\left[\frac{1}{2}\delta_{ij}\Delta^{\!E\ k}_{\ \,k}(\mathbf{r},\rmi\xi)-\Delta^{\!E}_{\ \,ij} (\mathbf{r},\rmi\xi)\right]\right. \nonumber \\[3pt] &\left.
\ \, +\kappa(\mathbf{r},\rmi\xi)\left[\frac{1}{2}\delta_{ij}\Delta^{\!B\ k}_{\ \,k}(\mathbf{r},\rmi\xi)-\Delta^{\!B}_{\ \,ij} (\mathbf{r},\rmi\xi)\right] \right\}. \label{stresscasim0} \end{eqnarray} The formula (\ref{stresscasim}) (with $\kappa=1$) for the Casimir stress tensor in media was obtained by Herculean efforts in~\cite{dzy61}, and is the most general result of Lifshitz theory. Part of the reason why such an enormously complicated formalism was required in~\cite{dzy61} is the lack of a Hamiltonian and Lagrangian basis for the theory, which also undermines any claims that the result applies to quantum electromagnetic fields (see Introduction). As in the case of the energy density in the last section, the Casimir stress tensor in media emerges in a self-contained manner from macroscopic QED in thermal equilibrium, without the need for additional input. It is interesting also to compute the expectation value in thermal equilibrium of the quantum version of (\ref{conminhom}). The first term on the left-hand side of (\ref{conminhom}) has of course zero expectation value in the stationary situation of thermal equilibrium. The right-hand side must be written in hermitian form in the quantum theory, and using $\alpha\nabla_i\alpha=\nabla_i(\alpha^2)/2$ and $\beta\nabla_i\beta=\nabla_i(\beta^2)/2$ its expectation value is found in a manner similar to that used to obtain (\ref{alphaXE}) and (\ref{betaYB}). This leads to the following result for the divergence of the Casimir stress tensor: \begin{eqnarray} \fl \left\langle\nabla_j\hat{\sigma}_{i}^{\ j}\right\rangle &=\frac{\hbar}{2\pi}\mathrm{Im}\int_0^\infty\rmd\omega \, \coth\left(\frac{\hbar\omega}{2k_B T}\right)\left[ \frac{1}{c^2}\Delta^{\!E\ j}_{\ \,j}(\mathbf{r},\omega)\nabla_i \varepsilon(\mathbf{r},\omega)\right. 
\nonumber \\[3pt] \fl & \qquad \qquad \qquad \qquad\qquad \qquad\ -\Delta^{\!B\ j}_{\ \,j}(\mathbf{r},\omega)\nabla_i\kappa(\mathbf{r},\omega)\Bigg] \label{divstresscas} \\ \fl &= k_BT{\sum_{n=0}}'\left[ \frac{1}{c^2}\Delta^{\!E\ j}_{\ \,j}(\mathbf{r},\rmi\xi_n)\nabla_i \varepsilon(\mathbf{r},\rmi\xi_n)-\Delta^{\!B\ j}_{\ \,j}(\mathbf{r},\rmi\xi_n)\nabla_i\kappa(\mathbf{r},\rmi\xi_n)\right]. \end{eqnarray} As a final remark on the Casimir stress-energy-momentum tensor, we note that the energy flux and momentum density in this tensor must be zero, because the electromagnetic fields are in thermal equilibrium. It is straightforward to verify, by calculations similar to those used to obtain (\ref{rhocas}) and (\ref{stresscas}), that the electromagnetic parts (defined as in section~\ref{sec:cor}) of the energy flux (\ref{s}) and momentum density (\ref{p}) do indeed vanish in thermal equilibrium. \section{Conclusions} The Casimir effect has been derived from macroscopic QED~\cite{phi10} by a simple restriction to thermal equilibrium. Expressions for the Casimir energy density and stress tensor were obtained for arbitrary inhomogeneous magnetodielectrics. As the results are derived from a rigorous quantization of electromagnetic fields in dispersive, dissipative media, they are not subject to the criticisms that have been directed at the standard Lifshitz theory of the Casimir effect. Moreover, the canonical basis of macroscopic QED~\cite{phi10} means that the correct forms of the Casimir energy density and stress tensor in media emerge directly from the theory. In Lifshitz theory, by contrast, there is no Hamiltonian or Lagrangian, so that detailed mechanical and thermodynamical arguments are required to obtain the form of the electromagnetic stress tensor in media. \ack This research is supported by the Scottish Government and the Royal Society of Edinburgh. \section*{References} \end{document}
\begin{document} \begin{abstract} We establish Ohno-type identities for multiple harmonic ($q$-)sums which generalize Hoffman's identity and Bradley's identity. Our result leads to a new proof of the Ohno-type relation for $\mathcal{A}$-finite multiple zeta values recently proved by Hirose, Imatomi, Murahara and Saito. As a further application, we give certain sum formulas for $\mathcal{A}_2$- or $\mathcal{A}_3$-finite multiple zeta values. \end{abstract} \title{Ohno-type identities for multiple harmonic sums} \section{Introduction} Let $N$ be a positive integer. Euler \cite{E} proved the following identity for the $N$-th harmonic number: \begin{equation}\label{eq:Euler} \sum_{m=1}^N\frac{(-1)^{m-1}}{m}\binom{N}{m}=\sum_{n=1}^N\frac{1}{n}. \end{equation} It is known today that there are various generalizations of Euler's identity. We call a tuple of positive integers an index. For an index $\boldsymbol{k}=(k_1, \dots, k_r)$, we write it in the form \[ \boldsymbol{k}=(\{1\}^{a_1-1}, b_1+1, \dots, \{1\}^{a_{s-1}-1}, b_{s-1}+1, \{1\}^{a_s-1}, b_s), \] where $a_1, \dots, a_s, b_1, \dots, b_s$ are positive integers and $\{1\}^a$ means $1, \dots, 1$ repeated $a$ times, and then we define its Hoffman dual $\boldsymbol{k}^{\vee}$ by \[ \boldsymbol{k}^{\vee}:=(a_1, \{1\}^{b_1-1}, a_2+1, \{1\}^{b_2-1}, \dots, a_s+1, \{1\}^{b_s-1}). \] Let $\boldsymbol{k}=(k_1, \dots, k_r)$ and $\boldsymbol{k}^{\vee}=(l_1, \dots, l_s)$. After Roman \cite{Rom} (the case $r=1$) and Hernandez \cite{BDWLH} (the case $s=1$), Hoffman \cite{H} proved \begin{equation}\label{eq:Hoffman} \sum_{1\leq m_1\leq\cdots\leq m_r\leq N} \frac{(-1)^{m_r-1}}{m_1^{k_1}\cdots m_r^{k_r}}\binom{N}{m_r} =\sum_{1\leq n_1\leq\cdots\leq n_s\leq N} \frac{1}{n_1^{l_1}\cdots n_s^{l_s}}. \end{equation} There are also $q$-analogs of these identities. Let $q$ be a real number satisfying $0<q<1$. For an integer $m$, we define the $q$-integer $[m]_q:=\frac{1-q^m}{1-q}$. 
When $0\leq m\leq N$, we define the $q$-factorial $[m]_q!:=\prod_{a=1}^m[a]_q$ ($[0]_q:=1$) and the $q$-binomial coefficient $\binom{N}{m}_q:=\frac{[N]_q!}{[m]_q![N-m]_q!}$. Van Hamme \cite{V} proved a $q$-analog of Euler's identity \eqref{eq:Euler} \[ \sum_{m=1}^N\frac{(-1)^{m-1}q^{\frac{m(m+1)}{2}}}{[m]_q}\binom{N}{m}_q =\sum_{n=1}^N\frac{q^n}{[n]_q}. \] After Dilcher \cite{D} (the case $r=1$) and Prodinger \cite{P} (the case $s=1$), Bradley \cite{B2} proved a $q$-analog of Hoffman's identity (\ref{eq:Hoffman}) \begin{equation}\label{eq:Bradley} \begin{split} \sum_{1\leq m_1\leq\cdots\leq m_r\leq N} \frac{q^{(k_1-1)m_1+\cdots+(k_r-1)m_r}}{[m_1]_q^{k_1}\cdots [m_r]_q^{k_r}} &\cdot(-1)^{m_r-1}q^{\frac{m_r(m_r+1)}{2}}\binom{N}{m_r}_q\\ &=\sum_{1\leq n_1\leq\cdots\leq n_s\leq N} \frac{q^{n_1+\cdots+n_s}}{[n_1]_q^{l_1}\cdots[n_s]_q^{l_s}}. \end{split} \end{equation} The equality \eqref{eq:Hoffman} or \eqref{eq:Bradley} is a kind of duality for multiple harmonic ($q$-)sums. Since the duality relations for ($q$-)multiple zeta values are generalized to Ohno's relations (\cite{O,B1}), it is natural to ask whether (and how) we can generalize \eqref{eq:Hoffman} and \eqref{eq:Bradley} to Ohno-type identities. This question was considered by Oyama \cite{Oyama} and more recently by Hirose, Imatomi, Murahara and Saito \cite{HIMS}. More precisely, they treated identities of the $\mathcal{A}$-finite multiple zeta values, that is, congruences modulo prime numbers. In this article, we prove Ohno-type identities which generalize \eqref{eq:Bradley} (Theorem \ref{MT1}) and \eqref{eq:Hoffman} (Corollary \ref{cor:Ohno}). We stress that our formulas are true identities, not congruences. This allows us to give, besides a new proof of Hirose-Imatomi-Murahara-Saito's relation for $\mathcal{A}$-finite multiple zeta values, sum formulas for $\mathcal{A}_2$- or $\mathcal{A}_3$-finite multiple zeta values, which are congruences modulo square or cube of primes. 
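Both the Hoffman dual and the identity \eqref{eq:Hoffman} are readily checked by machine. The following Python sketch (purely illustrative; the function names \texttt{hoffman\_dual}, \texttt{lhs} and \texttt{rhs} are ours) computes $\boldsymbol{k}^{\vee}$ via the equivalent description of the Hoffman dual as the exchange of the separators ``$+$'' and ``$,$'' when $\boldsymbol{k}$ is written as a word in $1$'s, and evaluates both sides of \eqref{eq:Hoffman} in exact rational arithmetic.

```python
from fractions import Fraction
from itertools import combinations_with_replacement
from math import comb

def hoffman_dual(k):
    """Hoffman dual: exchange '+' and ',' in k written as a word in 1's.
    E.g. (2, 1) = 1+1,1  <-->  1,1+1 = (1, 2)."""
    seps = []
    for i, ki in enumerate(k):
        seps += ['+'] * (ki - 1)        # '+' joins the 1's inside one entry
        if i < len(k) - 1:
            seps.append(',')            # ',' separates consecutive entries
    dual, entry = [], 1
    for s in seps:
        if s == ',':                    # ',' becomes '+': grow the current entry
            entry += 1
        else:                           # '+' becomes ',': close the current entry
            dual.append(entry)
            entry = 1
    dual.append(entry)
    return tuple(dual)

def lhs(k, N):
    """Left-hand side of (eq:Hoffman): alternating sum weighted by C(N, m_r)."""
    total = Fraction(0)
    for m in combinations_with_replacement(range(1, N + 1), len(k)):
        term = Fraction((-1) ** (m[-1] - 1) * comb(N, m[-1]))
        for mi, ki in zip(m, k):
            term /= mi ** ki
        total += term
    return total

def rhs(l, N):
    """Right-hand side of (eq:Hoffman): the multiple harmonic star sum of l."""
    total = Fraction(0)
    for n in combinations_with_replacement(range(1, N + 1), len(l)):
        term = Fraction(1)
        for ni, li in zip(n, l):
            term /= ni ** li
        total += term
    return total
```

For example, $\boldsymbol{k}=(2,1)=1{+}1,1$ has dual $1,1{+}1=(1,2)$, and \texttt{lhs((2, 1), N)} agrees with \texttt{rhs((1, 2), N)} for every $N$ tried; the case $r=s=1$ recovers Euler's identity \eqref{eq:Euler}.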
\section{Main Results} \subsection{Ohno-type identity} For a tuple of non-negative integers $\boldsymbol{e}=(e_1, \dots, e_r)$, we define its weight $\mathrm{wt}(\boldsymbol{e})$ and depth $\mathrm{dep}(\boldsymbol{e})$ to be $e_1+\cdots+e_r$ and $r$, respectively. Let $J_{e,r}$ be the set of all tuples of non-negative integers $\boldsymbol{e}$ such that $\mathrm{wt}(\boldsymbol{e})=e$, $\mathrm{dep}(\boldsymbol{e})=r$, and set $J_{\ast,r}:=\bigcup_{e=0}^{\infty}J_{e,r}$. For $\boldsymbol{e}_1,\boldsymbol{e}_2\in J_{\ast,r}$, $\boldsymbol{e}_1+\boldsymbol{e}_2$ denotes the entrywise sum. Similarly, let $I_{k,r}$ be the set of all indices $\boldsymbol{k}$ such that $\mathrm{wt}(\boldsymbol{k})=k$, $\mathrm{dep}(\boldsymbol{k})=r$, and set $I_{\ast,r}:=\bigcup_{k=0}^{\infty}I_{k,r}$. By convention, $I_{\ast,0}=\{\varnothing\}$ is the set consisting only of the empty index. For $\boldsymbol{k}=(k_1,\dots,k_r)\in I_{\ast,r}$ and $\boldsymbol{e}=(e_1,\dots,e_r) \in J_{\ast,r}$, put \[ b(\boldsymbol{k};\boldsymbol{e}):=\prod_{i=1}^r\binom{k_i+e_i+\delta_{i1}+\delta_{ir}-2}{e_i}, \] where $\delta_{ij}$ is Kronecker's delta. Here, we use the convention that \[ \binom{e-1}{e}=\begin{cases}1 & (e=0), \\ 0 &(e>0). \end{cases} \] For a positive integer $N$, $\boldsymbol{k}=(k_1,\dots,k_r) \in I_{\ast, r}$ and $\boldsymbol{e}=(e_1,\dots,e_r) \in J_{\ast,r}$, we define the multiple harmonic $q$-sums $H_N^{\star}(\boldsymbol{k};q)$ and $z_N^{\star}(\boldsymbol{k};\boldsymbol{e};q)$ by \begin{align*} H_N^{\star}(\boldsymbol{k};q)&:=\sum_{1 \leq m_1\leq\cdots\leq m_r\leq N} \frac{q^{(k_1-1)m_1+\cdots+(k_r-1)m_r}}{[m_1]_q^{k_1}\cdots[m_r]_q^{k_r}} \cdot(-1)^{m_r-1}q^{\frac{m_r(m_r+1)}{2}}\binom{N}{m_r}_q,\\ z_N^{\star}(\boldsymbol{k};\boldsymbol{e};q)&:=\sum_{1\leq m_1\leq\cdots\leq m_r\leq N} \frac{q^{(e_1+1)m_1+\cdots+(e_r+1)m_r}}{[m_1]_q^{k_1+e_1}\cdots[m_r]_q^{k_r+e_r}}. 
\end{align*} We set $z_N^{\star}(\boldsymbol{k};q):=z_N^{\star}(\boldsymbol{k};\{0\}^r;q)$ and $z_N^{\star}(\varnothing;q):=1$. The first main result is the following: \begin{theorem}\label{MT1} Let $N$ be a positive integer, $e$ a non-negative integer and $\boldsymbol{k}\in I_{\ast,r}$ an index. Set $s:=\mathrm{dep}(\boldsymbol{k}^{\vee})$. Then we have \begin{equation}\label{q-Ohno} \sum_{\boldsymbol{e} \in J_{e,r}}b(\boldsymbol{k};\boldsymbol{e})H_N^{\star}(\boldsymbol{k}+\boldsymbol{e};q) =\sum_{j=0}^ez_N^{\star}(\{1\}^{e-j};q) \sum_{\boldsymbol{e'} \in J_{j,s}}z_N^{\star}(\boldsymbol{k}^{\vee};\boldsymbol{e'};q). \end{equation} \end{theorem} The case $e=0$ gives Bradley's identity $H_N^{\star}(\boldsymbol{k};q)=z_N^{\star}(\boldsymbol{k}^{\vee};q)$. We will prove \eqref{q-Ohno} by using a certain \emph{connected sum} in \S\ref{sec:connected sum}, based on the same idea used in another paper of the authors \cite{SY}. This proof is new even if one specializes it to Hoffman's identity. Let \begin{align} H_N^{\star}(\boldsymbol{k})&:=\lim_{q\to 1}H_N^{\star}(\boldsymbol{k};q) =\sum_{1 \leq m_1\leq\cdots\leq m_r\leq N} \frac{(-1)^{m_r-1}}{m_1^{k_1}\cdots m_r^{k_r}}\binom{N}{m_r}, \notag\\ \zeta_N^\star(\boldsymbol{k})&:=\lim_{q\to 1}z_N^{\star}(\boldsymbol{k};q) =\sum_{1\leq m_1\leq\cdots\leq m_r\leq N} \frac{1}{m_1^{k_1}\cdots m_r^{k_r}}. \label{eq:zeta^star_N} \end{align} By taking the limit $q\to 1$ in \eqref{q-Ohno}, we obtain the following: \begin{corollary}\label{cor:Ohno} Let $N$ be a positive integer, $e$ a non-negative integer and $\boldsymbol{k}\in I_{\ast,r}$ an index. Set $s:=\mathrm{dep}(\boldsymbol{k}^{\vee})$. Then we have \begin{equation}\label{Ohno} \sum_{\boldsymbol{e} \in J_{e,r}}b(\boldsymbol{k};\boldsymbol{e})H_N^{\star}(\boldsymbol{k}+\boldsymbol{e}) =\sum_{j=0}^e\zeta_N^{\star}(\{1\}^{e-j}) \sum_{\boldsymbol{e'} \in J_{j,s}}\zeta_N^{\star}(\boldsymbol{k}^{\vee}+\boldsymbol{e'}). 
\end{equation} \end{corollary} The case $e=0$ gives Hoffman's identity $H_N^{\star}(\boldsymbol{k})=\zeta_N^{\star}(\boldsymbol{k}^{\vee})$. For an application of \eqref{Ohno}, we recall $\mathcal{A}$-finite multiple zeta values. First we define a $\mathbb{Q}$-algebra $\mathcal{A}$ by \[ \mathcal{A}:=\Biggl(\prod_{p\colon\text{prime}}\mathbb{Z}/p\mathbb{Z} \Biggr) \Biggm/ \Biggl(\bigoplus_{p\colon\text{prime}}\mathbb{Z}/p\mathbb{Z}\Biggr). \] For a positive integer $N$ and an index $\boldsymbol{k}=(k_1,\dots,k_r)\in I_{\ast,r}$, we define a multiple harmonic sum $\zeta_N(\boldsymbol{k})$ by \[ \zeta_N(\boldsymbol{k}):=\sum_{1\leq m_1<\cdots<m_r\leq N} \frac{1}{m_1^{k_1}\cdots m_r^{k_r}} \] (compare with $\zeta_N^\star(\boldsymbol{k})$ given in \eqref{eq:zeta^star_N}). We set $\zeta_N(\varnothing)=\zeta_N^{\star}(\varnothing)=1$ by convention. Then the $\mathcal{A}$-finite multiple zeta values $\zeta_{\mathcal{A}}(\boldsymbol{k})$ and $\zeta_{\mathcal{A}}^{\star}(\boldsymbol{k})$ are defined by \[ \zeta_{\mathcal{A}}(\boldsymbol{k}):=\bigl(\zeta_{p-1}(\boldsymbol{k})\bmod p\bigr)_p, \quad \zeta_{\mathcal{A}}^{\star}(\boldsymbol{k}):=\bigl(\zeta_{p-1}^{\star}(\boldsymbol{k})\bmod p\bigr)_p \in \mathcal{A}. \] Since $(-1)^{m-1}\binom{p-1}{m} \equiv -1 \pmod{p}$ holds for any prime $p$ greater than $m$, we have \[\bigl(H_{p-1}^\star(\boldsymbol{k})\bmod p\bigr)_p=-\zeta_\mathcal{A}^\star(\boldsymbol{k}). \] Moreover, it is known that $\zeta_\mathcal{A}^\star(\{1\}^e)=0$ for $e>0$, while $\zeta_\mathcal{A}^\star(\varnothing)=1$. Hence we obtain the following relation among $\mathcal{A}$-finite multiple zeta values as a corollary of \eqref{Ohno}. \begin{corollary}[Hirose-Imatomi-Murahara-Saito \cite{HIMS}] Let $e$ be a non-negative integer and $\boldsymbol{k} \in I_{\ast,r}$ an index. Set $s:=\mathrm{dep}(\boldsymbol{k}^{\vee})$. 
Then we have \[ \sum_{\boldsymbol{e} \in J_{e,r}}b(\boldsymbol{k};\boldsymbol{e})\zeta_{\mathcal{A}}^{\star}(\boldsymbol{k}+\boldsymbol{e}) =-\sum_{\boldsymbol{e'} \in J_{e,s}}\zeta_{\mathcal{A}}^{\star}(\boldsymbol{k}^{\vee}+\boldsymbol{e'}). \] \end{corollary} \subsection{Sum formulas for finite multiple zeta values} Before stating our second main result, let us recall the sum formulas for $\mathcal{A}$-finite multiple zeta values. First, it is easily seen that \begin{equation}\label{eq:k,r sum A} \sum_{\boldsymbol{k}\in I_{k,r}}\zeta_\mathcal{A}(\boldsymbol{k}) =\sum_{\boldsymbol{k}\in I_{k,r}}\zeta_\mathcal{A}^\star(\boldsymbol{k})=0, \end{equation} but this is not an analog of the sum formula for the multiple zeta values \cite{G}, since the admissibility condition $k_r\geq 2$ is ignored in \eqref{eq:k,r sum A}. A more precise analog (and its generalization) is due to Saito-Wakabayashi \cite{SW}. For integers $k, r$ and $i$ satisfying $1\leq i\leq r<k$, we put $I_{k,r,i}:=\{(k_1, \dots, k_r) \in I_{k,r}\mid k_i \geq 2\}$ and $B_{\boldsymbol{p}-k}:=(B_{p-k}\bmod{p})_p \in \mathcal{A}$, where $B_n$ denotes the $n$-th Seki-Bernoulli number. Note that $B_{\boldsymbol{p}-k}=0$ if $k$ is even. \begin{theorem}[Saito-Wakabayashi \cite{SW}]\label{SW-thm} Let $k, r$ and $i$ be integers satisfying $1\leq i\leq r<k$. Then, in the ring $\mathcal{A}$, we have equalities \begin{align*} \sum_{\boldsymbol{k} \in I_{k, r, i}}\zeta_{\mathcal{A}}(\boldsymbol{k}) &=(-1)^{i}\biggl\{\binom{k-1}{i-1}+(-1)^r\binom{k-1}{r-i}\biggr\}\frac{B_{\boldsymbol{p}-k}}{k},\\ \sum_{\boldsymbol{k} \in I_{k, r, i}}\zeta_{\mathcal{A}}^{\star}(\boldsymbol{k}) &=(-1)^{i}\biggl\{\binom{k-1}{r-i}+(-1)^r\binom{k-1}{i-1}\biggr\}\frac{B_{\boldsymbol{p}-k}}{k}. \end{align*} \end{theorem} In particular, if $k$ is even, we see that \begin{equation}\label{eq:SW even} \sum_{\boldsymbol{k} \in I_{k,r,i}}\zeta_{\mathcal{A}}(\boldsymbol{k}) =\sum_{\boldsymbol{k} \in I_{k,r,i}}\zeta_{\mathcal{A}}^{\star}(\boldsymbol{k})=0. 
\end{equation} Our aim is to lift the identities \eqref{eq:k,r sum A} and \eqref{eq:SW even} in $\mathcal{A}$, which represent systems of congruences modulo (almost all) primes $p$, to congruences modulo $p^2$ or $p^3$, by using the identity \eqref{Ohno}. Let $n$ be a positive integer. In accordance with \cite{Ros,S,Z2}, we define a $\mathbb{Q}$-algebra $\mathcal{A}_n$ by \[ \mathcal{A}_n:=\Biggl(\prod_{p\colon \text{prime}}\mathbb{Z}/p^n\mathbb{Z}\Biggr) \Biggm/ \Biggl(\bigoplus_{p\colon \text{prime}}\mathbb{Z}/p^n\mathbb{Z}\Biggr) \] and the $\mathcal{A}_n$-finite multiple zeta values $\zeta_{\mathcal{A}_n}(\boldsymbol{k})$ and $\zeta_{\mathcal{A}_n}^{\star}(\boldsymbol{k})$ by \[ \zeta_{\mathcal{A}_n}(\boldsymbol{k}):=(\zeta_{p-1}(\boldsymbol{k})\bmod{p^n})_p, \quad \zeta_{\mathcal{A}_n}^{\star}(\boldsymbol{k}):=(\zeta_{p-1}^{\star}(\boldsymbol{k})\bmod{p^n})_p \in \mathcal{A}_n. \] We use the symbol $B_{\boldsymbol{p}-k}$ again to denote the element $(B_{p-k} \bmod{p^n})_p$ of $\mathcal{A}_n$, and put $\boldsymbol{p}:=(p\bmod{p^n})_p \in \mathcal{A}_n$. Then our second main result is the following: \begin{theorem}[= Proposition \ref{S_A_2,k,r} + Theorem \ref{thm:A_3-sum} + Theorem \ref{thm:k,r,i sum A_2}] \label{MT2} Let $k, r$ be positive integers satisfying $r\leq k$. Then, in the ring $\mathcal{A}_2$, we have \[ \sum_{\boldsymbol{k} \in I_{k, r}}\zeta_{\mathcal{A}_2}(\boldsymbol{k}) =(-1)^{r-1}\binom{k}{r}\frac{B_{\boldsymbol{p}-k-1}}{k+1}\boldsymbol{p},\quad \sum_{\boldsymbol{k} \in I_{k, r}}\zeta_{\mathcal{A}_2}^{\star}(\boldsymbol{k}) =\binom{k}{r}\frac{B_{\boldsymbol{p}-k-1}}{k+1}\boldsymbol{p}. 
\] If $k$ is odd, in the ring $\mathcal{A}_3$, we have \[ \sum_{\boldsymbol{k} \in I_{k, r}}\zeta_{\mathcal{A}_3}(\boldsymbol{k}) =(-1)^r\frac{k+1}{2}\binom{k}{r}\frac{B_{\boldsymbol{p}-k-2}}{k+2}\boldsymbol{p}^2,\quad \sum_{\boldsymbol{k} \in I_{k, r}}\zeta_{\mathcal{A}_3}^{\star}(\boldsymbol{k}) =-\frac{k+1}{2}\binom{k}{r}\frac{B_{\boldsymbol{p}-k-2}}{k+2}\boldsymbol{p}^2. \] Furthermore, let $i$ be an integer satisfying $1\leq i\leq r$, and assume that $k$ is even and greater than $r$. Then the equalities \[ \sum_{\boldsymbol{k} \in I_{k, r,i}}\zeta_{\mathcal{A}_2}(\boldsymbol{k}) =(-1)^{r-1}\frac{a_{k,r,i}}{2}\cdot\frac{B_{\boldsymbol{p}-k-1}}{k+1}\boldsymbol{p},\quad \sum_{\boldsymbol{k} \in I_{k, r, i}}\zeta_{\mathcal{A}_2}^{\star}(\boldsymbol{k}) =\frac{b_{k,r,i}}{2}\cdot\frac{B_{\boldsymbol{p}-k-1}}{k+1}\boldsymbol{p} \] hold in $\mathcal{A}_2$. Here the coefficients $a_{k,r,i}$ and $b_{k,r,i}$ are given by \begin{align*} a_{k,r,i}&:=\binom{k-1}{r} +(-1)^{r-i}\biggl\{(k-r)\binom{k}{i-1}+\binom{k-1}{i-1}+(-1)^{r-1}\binom{k-1}{r-i}\biggr\},\\ b_{k,r,i}&:=\binom{k-1}{r}+ (-1)^{i-1}\biggl\{(k-r)\binom{k}{r-i}+\binom{k-1}{r-i}+(-1)^{r-1}\binom{k-1}{i-1}\biggr\}. \end{align*} \end{theorem} We will prove this theorem in \S\ref{sec:sum formula} and \S\ref{sec:A_3-sum formula}. \section{The proof of Theorem \ref{MT1}}\label{sec:connected sum} \begin{definition}[Connected sum] Let $N$ be a positive integer, $q$ a real number satisfying $0<q<1$ and $x$ an indeterminate. Let $r>0$ and $s\geq0$ be integers. 
For $\boldsymbol{k}=(k_1,\dots,k_r) \in J_{\ast, r}$ satisfying $k_1, \dots, k_{r-1} \geq 1$ and $\boldsymbol{l}=(l_1, \dots, l_s) \in I_{\ast, s}$, we define a formal power series $Z_N^{\star}(\boldsymbol{k};\boldsymbol{l};q;x)$ in $x$ by \[ Z_N^{\star}(\boldsymbol{k};\boldsymbol{l};q;x) :=\sum_{1\leq m_1\leq\cdots\leq m_r\leq n_1\leq\cdots\leq n_s\leq n_{s+1}=N} F_1(\boldsymbol{k};\boldsymbol{m};q;x)C(m_r,n_1,q,x)F_2(\boldsymbol{l};\boldsymbol{n};q;x), \] where \begin{align*} F_1(\boldsymbol{k};\boldsymbol{m};q;x) &:=\frac{[m_1]_q}{[m_1]_q-q^{m_1}x} \prod_{i=1}^r\frac{q^{(k_i-1)m_i}}{[m_i]_q([m_i]_q-q^{m_i}x)^{k_i-1}} \cdot\frac{[m_r]_q}{[m_r]_q-q^{m_r}x},\\ C(m_r,n_1,q,x) &:=(-1)^{m_r-1}q^{\frac{m_r(m_r+1)}{2}} \frac{\prod_{h=1}^{n_1}([h]_q-q^hx)}{[m_r]_q![n_1-m_r]_q!},\\ F_2(\boldsymbol{l};\boldsymbol{n};q;x) &:=\prod_{j=1}^s\frac{q^{n_j}}{([n_j]_q-q^{n_j}x)[n_j]_q^{l_j-1}} \end{align*} for $\boldsymbol{m}=(m_1,\dots, m_r)$ and $\boldsymbol{n}=(n_1,\dots, n_s)$. \end{definition} \begin{remark} The sum $Z_N^{\star}(\boldsymbol{k};\boldsymbol{l};q;x)$ consists of two parts \[\sum_{1\leq m_1\leq\cdots\leq m_r\leq N} F_1(\boldsymbol{k};\boldsymbol{m};q;x) \ \text{ and }\ \sum_{1\leq n_1\leq\cdots\leq n_s\leq N} F_2(\boldsymbol{l};\boldsymbol{n};q;x), \] connected by the factor $C(m_r,n_1,q,x)$ (and the relation $m_r\leq n_1$). We call it a connected sum with connector $C(m_r,n_1,q,x)$. In \cite{SY}, another type of connected sums is used to give a new proof of Ohno's relation for the multiple zeta values and Bradley's $q$-analog of it. \end{remark} \begin{theorem}\label{step-thm} For $(k_1, \dots, k_r) \in J_{\ast, r}$ with $k_1, \dots, k_{r-1} \geq 1$ and $(l_1, \dots, l_s) \in I_{\ast, s}$, we have \begin{equation}\label{+1;1,} Z_N^{\star}(k_1, \dots, k_r+1;l_1, \dots, l_s; q;x) =Z_N^{\star}(k_1, \dots, k_r;1,l_1, \dots, l_s; q;x). 
\end{equation} Moreover, if $s>0$, we also have \begin{equation}\label{+1,0;1+} Z_N^{\star}(k_1, \dots, k_r+1,0;l_1, \dots, l_s; q;x) =Z_N^{\star}(k_1, \dots, k_r;1+l_1, \dots, l_s; q;x). \end{equation} \end{theorem} \begin{proof} The equality \eqref{+1;1,} follows from the telescoping sum \begin{align*} &\frac{q^m}{[m]_q-q^mx}\cdot C(m,n,q,x)\\ &=\sum_{a=m}^n\biggl(\frac{q^m}{[m]_q-q^mx}\cdot C(m,a,q,x) -\frac{q^m}{[m]_q-q^mx}\cdot C(m,a-1,q,x)\biggr)\\ &=\sum_{a=m}^nC(m,a,q,x)\cdot\frac{q^a}{[a]_q-q^ax} \end{align*} applied to $m=m_r$, $n=n_2$ and $a=n_1$ in the definition of $Z_N^{\star}(k_1, \dots, k_r;1,l_1, \dots, l_s;q;x)$. Similarly, the equality \eqref{+1,0;1+} follows from the telescoping sum \begin{align*} &\frac{q^m}{[m]_q}\sum_{a=m}^nq^{-a}C(a,n,q,x)\\ &=\frac{q^m}{[m]_q}\sum_{a=m}^n\biggl(\frac{[a]_q}{q^a}\cdot C(a,n,q,x)\cdot\frac{1}{[n]_q} -\frac{[a+1]_q}{q^{a+1}}\cdot C(a+1,n,q,x)\cdot\frac{1}{[n]_q}\biggr)\\ &=C(m,n,q,x)\cdot \frac{1}{[n]_q} \end{align*} applied to $m=m_r$, $n=n_1$, $a=m_{r+1}$ in the definition of $Z_N^{\star}(k_1, \dots, k_r+1,0;l_1, \dots, l_s;q;x)$. \end{proof} \begin{corollary} Let $N$ be a positive integer and $\boldsymbol{k}=(k_1, \dots, k_r)$ an index. We define $P_N(\boldsymbol{k};q;x)$, $Q_N(\boldsymbol{k};q;x)$ and $R_N(q;x)$ by \begin{align*} P_N(\boldsymbol{k};q;x)&:=\sum_{1\leq m_1\leq\cdots\leq m_r\leq N} \frac{[m_1]_q}{[m_1]_q-q^{m_1}x} \prod_{i=1}^r\frac{q^{(k_i-1)m_i}}{[m_i]_q([m_i]_q-q^{m_i}x)^{k_i-1}} \cdot\frac{[m_r]_q}{[m_r]_q-q^{m_r}x}\\ &\hspace{7em}\cdot(-1)^{m_r-1}q^{\frac{m_r(m_r+1)}{2}}\binom{N}{m_r}_q,\\ Q_N(\boldsymbol{k};q;x)&:=\sum_{1\leq m_1\leq\cdots\leq m_r\leq N} \prod_{i=1}^r\frac{q^{m_i}}{([m_i]_q-q^{m_i}x)[m_i]_q^{k_i-1}},\\ R_N(q;x)&:=\prod_{h=1}^N\biggl(1-\frac{q^hx}{[h]_q}\biggr)^{-1}. \end{align*} Then we have \begin{equation}\label{P=QR} P_N(\boldsymbol{k};q;x)=Q_N(\boldsymbol{k}^{\vee};q;x)R_N(q;x). 
\end{equation} \end{corollary} \begin{proof} By applying equalities in Theorem \ref{step-thm} $\mathrm{wt}(\boldsymbol{k})$ times, we see that \[ Z_N^{\star}(\boldsymbol{k};\varnothing;q;x)=\cdots =Z_N^{\star}(0;\boldsymbol{k}^{\vee};q;x) \] holds by the definition of the Hoffman dual. For example, \[ Z_N^{\star}(1,1,2;\varnothing) \stackrel{\eqref{+1;1,}}{=}Z_N^{\star}(1,1,1;1)\stackrel{\eqref{+1;1,}}{=}Z_N^{\star}(1,1,0;1,1) \stackrel{\eqref{+1,0;1+}}{=}Z_N^{\star}(1,0;2,1) \stackrel{\eqref{+1,0;1+}}{=}Z_N^{\star}(0;3,1) \] (here we abbreviated $Z_N^{\star}(\boldsymbol{k};\boldsymbol{l};q;x)$ as $Z_N^{\star}(\boldsymbol{k};\boldsymbol{l})$). By definition, we have \begin{align*} Z_N^{\star}(\boldsymbol{k};\varnothing;q;x) &=\sum_{1\leq m_1\leq\cdots\leq m_r\leq N}F_1(\boldsymbol{k};\boldsymbol{m};q;x)C(m_r,N,q,x)\\ &=P_N(\boldsymbol{k};q;x)R_N(q;x)^{-1} \end{align*} and \begin{align*} Z_N^{\star}(0;\boldsymbol{k}^{\vee};q;x) &=\sum_{1\leq m\leq n_1\leq\cdots\leq n_s\leq N} \frac{q^{-m}[m]_q}{[m]_q-q^mx}C(m,n_1,q,x)F_2(\boldsymbol{k}^\vee;\boldsymbol{n};q;x)\\ &=Q_N(\boldsymbol{k}^{\vee};q;x). \end{align*} In the last equality, we have used the partial fraction decomposition \[ \sum_{m=1}^{n_1}\frac{[m]_q}{[m]_q-q^mx} \cdot\frac{(-1)^{m-1}q^{\frac{m(m-1)}{2}}}{[m]_q![n_1-m]_q!} =\frac{1}{\prod_{h=1}^{n_1}([h]_q-q^hx)}. \] The proof is complete. \end{proof} \begin{proof}[Proof of Theorem $\ref{MT1}$] By using the expansion formula \[ \frac{1}{([m]_q-q^mx)^k} =\sum_{e=0}^{\infty}\binom{k+e-1}{e}\frac{q^{em}x^e}{[m]_q^{k+e}} \] for a positive integer $m$ and a non-negative integer $k$, we see that \[ P_N(\boldsymbol{k};q;x)=\sum_{e=0}^{\infty}\sum_{\boldsymbol{e} \in J_{e,r}} b(\boldsymbol{k};\boldsymbol{e})H_N^{\star}(\boldsymbol{k}+\boldsymbol{e};q)x^e \] and \[ Q_N(\boldsymbol{k}^{\vee};q;x)=\sum_{e=0}^{\infty}\sum_{\boldsymbol{e} \in J_{e,s}} z_N^{\star}(\boldsymbol{k}^{\vee};\boldsymbol{e};q)x^e. 
\] Since $R_N(q;x)=\sum_{e=0}^{\infty}z_N^{\star}(\{1\}^{e};q)x^e$, we obtain the identity \eqref{q-Ohno} by comparing the coefficients of $x^e$ in \eqref{P=QR}. \end{proof} \section{Sum formulas for $\mathcal{A}_2$-finite multiple zeta values}\label{sec:sum formula} \subsection{Auxiliary facts} We prepare some known facts for finite multiple zeta values. \begin{proposition}[{\cite[Theorem 6.1, 6.2]{H}, \cite[Theorem 3.1, 3.5]{Z1}}]\label{Aux-1} Let $k_1, k_2$ and $k_3$ be positive integers, and assume that $l:=k_1+k_2+k_3$ is odd. Then \begin{align} \label{A-2} \zeta_{\mathcal{A}}^{\star}(k_1,k_2) &=(-1)^{k_2}\binom{k_1+k_2}{k_1}\frac{B_{\boldsymbol{p}-k_1-k_2}}{k_1+k_2}, \\ \label{A-3} \zeta_{\mathcal{A}}^{\star}(k_1,k_2,k_3) &=\frac{1}{2}\biggl\{(-1)^{k_3}\binom{l}{k_3}-(-1)^{k_1}\binom{l}{k_1}\biggr\} \frac{B_{\boldsymbol{p}-l}}{l}. \end{align} \end{proposition} \begin{proposition}[{\cite{ZC}, \cite[Theorem 3.2]{Z1}}]\label{Aux-2} Let $k, r, k_1$ and $k_2$ be positive integers, and assume that $l:=k_1+k_2$ is even. Then \begin{align} \label{A_2-{k}^r} \zeta_{\mathcal{A}_2}^{\star}(\{k\}^r ) &=k\frac{B_{\boldsymbol{p}-rk-1}}{rk+1}\boldsymbol{p},\\ \label{A_2-2} \zeta_{\mathcal{A}_2}^{\star}(k_1,k_2) &=\frac{1}{2}\biggl\{(-1)^{k_1}k_2\binom{l+1}{k_1}-(-1)^{k_2}k_1\binom{l+1}{k_2}+l\biggr\} \frac{B_{\boldsymbol{p}-l-1}}{l+1}\boldsymbol{p}. \end{align} \end{proposition} \begin{proposition}[{\cite[Corollary 3.16 (42)]{SS}}] Let $n$ be a positive integer and $\boldsymbol{k}=(k_1, \dots, k_r)$ an index. Then \begin{equation}\label{non-star to star} \sum_{j=0}^r (-1)^j\zeta_{\mathcal{A}_n}(k_j, \dots, k_1)\, \zeta_{\mathcal{A}_n}^{\star}(k_{j+1}, \dots, k_r)=0. \end{equation} \end{proposition} \subsection{Computations of sums for $\mathcal{A}_2$-finite multiple zeta values} \begin{definition} Let $k, r$ and $i$ be positive integers satisfying $i\leq r\leq k$. 
We define four sums $S_{k,r}$, $S_{k,r}^{\star}$, $S_{k,r,i}$ and $S_{k,r,i}^{\star}$ in $\mathcal{A}_2$ by \begin{alignat*}{2} S_{k,r}&:=\sum_{\boldsymbol{k} \in I_{k,r}}\zeta_{\mathcal{A}_2}(\boldsymbol{k}), &\qquad S_{k,r}^{\star}&:=\sum_{\boldsymbol{k} \in I_{k,r}}\zeta_{\mathcal{A}_2}^{\star}(\boldsymbol{k}),\\ S_{k,r,i}&:=\sum_{\boldsymbol{k} \in I_{k,r,i}}\zeta_{\mathcal{A}_2}(\boldsymbol{k}), &\qquad S_{k,r,i}^{\star}&:=\sum_{\boldsymbol{k} \in I_{k,r,i}}\zeta_{\mathcal{A}_2}^{\star}(\boldsymbol{k}). \end{alignat*} \end{definition} For an index $\boldsymbol{k}=(k_1, \dots, k_r)$, we set $\boldsymbol{k}^+:=(k_1, \dots, k_{r-1}, k_r+1)$. We can calculate $S_{k,r}^{\star}$ and $S_{k,r,i}^{\star}$ by using the following identity. \begin{corollary} Let $e$ be a non-negative integer, $\boldsymbol{k} \in I_{\ast,r}$ an index and $s:=\mathrm{dep}(\boldsymbol{k}^{\vee})$. Then we have \begin{equation}\label{A_2-Ohno} \begin{split} &\sum_{j=0}^e\zeta_{\mathcal{A}_2}^{\star}(\{1\}^{e-j}) \sum_{\boldsymbol{e}\in J_{j,r}}\zeta_{\mathcal{A}_2}^{\star}(\boldsymbol{k}+\boldsymbol{e})\\ &=\sum_{\boldsymbol{e}'\in J_{e,s}}b(\boldsymbol{k}^{\vee};\boldsymbol{e}') \Bigl\{-\zeta_{\mathcal{A}_2}^{\star}(\boldsymbol{k}^{\vee}+\boldsymbol{e}') -\zeta_{\mathcal{A}_2}^{\star}(\boldsymbol{k}^{\vee}+\boldsymbol{e}', 1)\boldsymbol{p} +\zeta_{\mathcal{A}_2}^{\star}((\boldsymbol{k}^{\vee}+\boldsymbol{e}')^+)\boldsymbol{p}\Bigr\}. \end{split} \end{equation} \end{corollary} \begin{proof} Since a congruence \[ (-1)^{m-1}\binom{p-1}{m} \equiv -1-\sum_{m\leq n\leq p-1}\frac{p}{n}+\frac{p}{m} \pmod{p^2} \] holds for any odd prime $p$ and any positive integer $m$ with $m<p$ (cf. \cite[Lemma 4.1]{S}), this corollary is a direct consequence of \eqref{Ohno}. \end{proof} \begin{proposition}\label{S_A_2,k,r} For positive integers $k$ and $r$ such that $r\leq k$, we have \[ (-1)^{r-1}S_{k,r}=S_{k,r}^{\star}=\binom{k}{r}\frac{B_{\boldsymbol{p}-k-1}}{k+1}\boldsymbol{p}. 
\] \end{proposition} \begin{proof} Let $\boldsymbol{k}=(\{1\}^r)$ and $e=k-r$ in \eqref{A_2-Ohno}. Then $\boldsymbol{k}^{\vee}=(r)$ and we have \begin{equation}\label{A_2-app1} \sum_{j=0}^{k-r}\zeta_{\mathcal{A}_2}^{\star}(\{1\}^{k-r-j})S_{j+r,r}^{\star} =\binom{k}{r}\Bigl\{-\zeta_{\mathcal{A}_2}(k)-\zeta_{\mathcal{A}_2}^{\star}(k,1)\boldsymbol{p} +\zeta_{\mathcal{A}_2}(k+1)\boldsymbol{p}\Bigr\}. \end{equation} For $0\leq j<k-r$, $\zeta_{\mathcal{A}_2}^{\star}(\{1\}^{k-r-j})S_{j+r,r}^{\star}=0$ since both $\zeta_{\mathcal{A}_2}^{\star}(\{1\}^{k-r-j})$ and $S_{j+r,r}^{\star}$ are divisible by $\boldsymbol{p}$ by \eqref{A_2-{k}^r} and \eqref{eq:k,r sum A}. Therefore, the left hand side of \eqref{A_2-app1} is equal to $S_{k,r}^{\star}$. On the other hand, the right hand side of \eqref{A_2-app1} is equal to \[ \binom{k}{r}\biggl\{-k\frac{B_{\boldsymbol{p}-k-1}}{k+1}\boldsymbol{p} +\binom{k+1}{k}\frac{B_{\boldsymbol{p}-k-1}}{k+1}\boldsymbol{p}\biggr\} =\binom{k}{r}\frac{B_{\boldsymbol{p}-k-1}}{k+1}\boldsymbol{p} \] by \eqref{A_2-{k}^r} and \eqref{A-2}. Hence we have $S_{k,r}^{\star}=\binom{k}{r}\frac{B_{\boldsymbol{p}-k-1}}{k+1}\boldsymbol{p}$. By taking $\sum_{\boldsymbol{k} \in I_{k,r}}$ of \eqref{non-star to star}, we obtain \[ S_{k,r}^{\star}+\sum_{j=1}^{r-1}(-1)^j\sum_{l=j}^{k-r+j}S_{l,j}S_{k-l,r-j}^{\star}+(-1)^rS_{k,r}=0. \] We see that $S_{l,j}S_{k-l,r-j}^{\star}=0$ for $1\leq j\leq r-1$ and $j \leq l \leq k-r+j$, since both $S_{l,j}$ and $S_{k-l,r-j}^{\star}$ are divisible by $\boldsymbol{p}$ by \eqref{eq:k,r sum A}. This gives $(-1)^{r-1}S_{k,r}=S_{k,r}^{\star}$. \end{proof} Next we compute $S_{k,r,i}^{\star}$ and $S_{k,r,i}$. \begin{theorem}\label{thm:k,r,i sum A_2} Let $k, r$ and $i$ be positive integers satisfying $i\leq r<k$, and assume that $k$ is even. 
Then we have \[ S_{k,r,i}=(-1)^{r-1}\frac{a_{k,r,i}}{2}\cdot\frac{B_{\boldsymbol{p}-k-1}}{k+1}\boldsymbol{p}, \quad S_{k,r,i}^{\star}=\frac{b_{k,r,i}}{2}\cdot\frac{B_{\boldsymbol{p}-k-1}}{k+1}\boldsymbol{p}, \] where \begin{align*} a_{k,r,i}&=\binom{k-1}{r} +(-1)^{r-i}\biggl\{(k-r)\binom{k}{i-1}+\binom{k-1}{i-1}+(-1)^{r-1}\binom{k-1}{r-i}\biggr\},\\ b_{k,r,i}&=\binom{k-1}{r} +(-1)^{i-1}\biggl\{(k-r)\binom{k}{r-i}+\binom{k-1}{r-i}+(-1)^{r-1}\binom{k-1}{i-1}\biggr\}. \end{align*} \end{theorem} \begin{proof} Let $\boldsymbol{k}=(\{1\}^{i-1},2,\{1\}^{r-i})$ and $e=k-r-1$ in \eqref{A_2-Ohno}. Then $\boldsymbol{k}^{\vee}=(i,r-i+1)$ and we have \begin{equation}\label{A_2-app2} \begin{split} &\sum_{j=0}^{k-r-1}\zeta_{\mathcal{A}_2}^{\star}(\{1\}^{k-r-1-j})S_{j+r+1,r,i}^{\star}\\ &=\sum_{e=0}^{k-r-1} \binom{i+e-1}{e}\binom{k-i-e-1}{k-r-1-e} \Bigl\{-\zeta_{\mathcal{A}_2}^{\star}(i+e,k-i-e)\\ &\hspace{5em}-\zeta_{\mathcal{A}_2}^{\star}(i+e,k-i-e,1)\boldsymbol{p} +\zeta_{\mathcal{A}_2}^{\star}(i+e,k-i-e+1)\boldsymbol{p}\Bigr\}. \end{split} \end{equation} For $0\leq j<k-r-1$, we see that $\zeta_{\mathcal{A}_2}^{\star}(\{1\}^{k-r-1-j})S_{j+r+1,r,i}^{\star}$ is a rational multiple of $B_{\boldsymbol{p}-k+r+j}B_{\boldsymbol{p}-j-r-1}\boldsymbol{p}$ by \eqref{A_2-{k}^r} and Theorem \ref{SW-thm}. Since $k$ is even, one of $B_{\boldsymbol{p}-k+r+j}$ or $B_{\boldsymbol{p}-j-r-1}$ is zero. Therefore, the left hand side of \eqref{A_2-app2} is equal to $S_{k,r,i}^{\star}$. On the other hand, we can calculate the right hand side of \eqref{A_2-app2} as follows. 
By \eqref{A_2-2}, \eqref{A-2} and \eqref{A-3}, we have \begin{align*} &-\zeta_{\mathcal{A}_2}^{\star}(i+e,k-i-e) -\zeta_{\mathcal{A}_2}^{\star}(i+e,k-i-e,1)\boldsymbol{p} +\zeta_{\mathcal{A}_2}^{\star}(i+e,k-i-e+1)\boldsymbol{p}\\ &=\Biggl[-\frac{1}{2}\biggl\{(-1)^{i+e}(k-i-e)\binom{k+1}{i+e} -(-1)^{k-i-e}(i+e)\binom{k+1}{k-i-e}+k\biggr\}\\ &\qquad -\frac{1}{2}\biggl\{-(k+1)-(-1)^{i+e}\binom{k+1}{i+e}\biggr\} +(-1)^{k-i-e+1}\binom{k+1}{i+e}\Biggr]\frac{B_{\boldsymbol{p}-k-1}}{k+1}\boldsymbol{p}\\ &=\frac{1}{2}\Biggl[1-(-1)^{i+e}(k-i-e+1)\binom{k+1}{i+e} +(-1)^{i+e}(i+e)\binom{k+1}{k-i-e}\Biggr]\frac{B_{\boldsymbol{p}-k-1}}{k+1}\boldsymbol{p}\\ &=\frac{1}{2}\Biggl[1+(-1)^{i-1+e}\binom{k+1}{i+e+1}\Biggr]\frac{B_{\boldsymbol{p}-k-1}}{k+1}\boldsymbol{p}. \end{align*} Therefore, the right hand side of \eqref{A_2-app2} is equal to \begin{equation}\label{A_2-int} \frac{1}{2}\sum_{e=0}^{k-r-1} \binom{i+e-1}{e}\binom{k-i-e-1}{k-r-1-e} \Biggl[1+(-1)^{i-1+e}\binom{k+1}{i+e+1}\Biggr]\frac{B_{\boldsymbol{p}-k-1}}{k+1}\boldsymbol{p}. \end{equation} By comparing the coefficient of $x^{k-r-1}$ in $(1-x)^{-i}(1-x)^{-(r-i+1)}=(1-x)^{-(r+1)}$, we see that \[ \sum_{e=0}^{k-r-1}\binom{i+e-1}{e}\binom{k-i-e-1}{k-r-1-e}=\binom{k-1}{r}, \] and by using the partial fraction decomposition \[ F(x):=\sum_{e=0}^{k-r-1}\frac{(-1)^e}{e!(k-r-1-e)!}\cdot\frac{1}{x+e}=\frac{1}{x(x+1)\cdots (x+k-r-1)}, \] we see that \begin{align*} &\sum_{e=0}^{k-r-1}\binom{i+e-1}{e}\binom{k-i-e-1}{k-r-1-e}\cdot(-1)^{i-1+e}\binom{k+1}{i+e+1} \\ &=(-1)^{i-1}\frac{(k+1)!}{(i-1)!(r-i)!}\sum_{e=0}^{k-r-1}\frac{(-1)^e}{e!(k-r-1-e)!(i+e)(i+e+1)(k-i-e)}\\ &=(-1)^{i-1}\frac{(k+1)!}{(i-1)!(r-i)!}\left\{\frac{1}{k}F(i)-\frac{1}{k+1}F(i+1)+\frac{(-1)^{r-1}}{k(k+1)}F(r-i+1)\right\}\\ &=(-1)^{i-1}\biggl\{(k-r)\binom{k}{r-i}+\binom{k-1}{r-i}+(-1)^{r-1}\binom{k-1}{i-1}\biggr\}. \end{align*} Thus we have proved $S_{k,r,i}^{\star}=\frac{b_{k,r,i}}{2}\cdot\frac{B_{\boldsymbol{p}-k-1}}{k+1}\boldsymbol{p}$. 
Let us take the sum $\sum_{\boldsymbol{k} \in I_{k,r,r+1-i}}$ of \eqref{non-star to star}. Then we obtain \begin{multline}\label{A_2-SS*} S_{k,r,r+1-i}^{\star} +\sum_{j=1}^{r-i}(-1)^j\sum_{l=j}^{k-r+j-1}S_{l,j}S_{k-l,r-j,r+1-i-j}^{\star}\\ +\sum_{j=r-i+1}^{r-1}(-1)^j\sum_{l=j+1}^{k-r+j}S_{l,j,j+i-r}S_{k-l,r-j}^{\star} +(-1)^rS_{k,r,i}=0. \end{multline} We know that $S_{l,j}S_{k-l,r-j,r+1-i-j}^\star$ is a rational multiple of $B_{\boldsymbol{p}-l-1}B_{\boldsymbol{p}-k+l}\boldsymbol{p}$ for $1\leq j\leq r-i$ and we also know that $S_{l,j,j+i-r}S_{k-l,r-j}^{\star}$ is a rational multiple of $B_{\boldsymbol{p}-l}B_{\boldsymbol{p}-k+l-1}\boldsymbol{p}$ for $r-i+1\leq j\leq r-1$ by Theorem \ref{SW-thm} and Proposition \ref{S_A_2,k,r}. Since $k$ is even, these are zero for every $l$. Therefore, we have \[ S_{k,r,i}=(-1)^{r-1}\frac{b_{k,r,r+1-i}}{2}\cdot\frac{B_{\boldsymbol{p}-k-1}}{k+1}\boldsymbol{p} =(-1)^{r-1}\frac{a_{k,r,i}}{2}\cdot\frac{B_{\boldsymbol{p}-k-1}}{k+1}\boldsymbol{p}. \qedhere \] \end{proof} \section{Sum formulas for $\mathcal{A}_3$-finite multiple zeta values} \label{sec:A_3-sum formula} For positive integers $k$ and $r$ such that $r\leq k$, we set \[ T_{k,r}:=\sum_{\boldsymbol{k}\in I_{k,r}}\zeta_{\mathcal{A}_3}(\boldsymbol{k}),\quad T_{k,r}^{\star}:=\sum_{\boldsymbol{k}\in I_{k,r}}\zeta_{\mathcal{A}_3}^{\star}(\boldsymbol{k}). \] From now on, we assume that $k$ is odd. We recall a formula \begin{equation}\label{A_3-single} \zeta_{\mathcal{A}_3}(k)=-\frac{k(k+1)}{2}\cdot\frac{B_{\boldsymbol{p}-k-2}}{k+2}\boldsymbol{p}^2 \end{equation} proved by Sun \cite[Theorem 5.1]{ZHS}. \begin{theorem}\label{thm:A_3-sum} Let $k$ and $r$ be positive integers satisfying $r\leq k$, and assume that $k$ is odd. Then we have \[ (-1)^{r-1}T_{k,r}=T_{k,r}^{\star}=-\frac{k+1}{2}\binom{k}{r}\frac{B_{\boldsymbol{p}-k-2}}{k+2}\boldsymbol{p}^2. 
\] \end{theorem} \begin{proof} Since a congruence \begin{align*} &(-1)^{m-1}\binom{p-1}{m}\\ &\equiv -1-\left(\sum_{m\leq n\leq p-1}\frac{1}{n}-\frac{1}{m}\right)p-\left(\sum_{m\leq n_1\leq n_2\leq p-1}\frac{1}{n_1n_2}-\frac{1}{m}\sum_{m\leq n\leq p-1}\frac{1}{n}\right)p^2 \pmod{p^3} \end{align*} holds for any odd prime $p$ and any positive integer $m$ with $m<p$ by \cite[Lemma 4.1]{S}, one can deduce \begin{equation}\label{A_3-app} \begin{split} &\sum_{j=0}^{k-r}\zeta_{\mathcal{A}_3}^{\star}(\{1\}^{k-r-j})T_{j+r,r}^{\star}\\ &=\binom{k}{r}\left\{-\zeta_{\mathcal{A}_3}(k)-\zeta_{\mathcal{A}_3}^{\star}(k,1)\boldsymbol{p} +\zeta_{\mathcal{A}_3}^{\star}(k+1)\boldsymbol{p}-\zeta_{\mathcal{A}_3}^{\star}(k,1,1)\boldsymbol{p}^2 +\zeta_{\mathcal{A}_3}^{\star}(k+1,1)\boldsymbol{p}^2\right\} \end{split} \end{equation} from the identity \eqref{Ohno} in the same way as \eqref{A_2-app1}. Let us fix $0 \leq j < k-r$. By \eqref{A_2-{k}^r} and Proposition \ref{S_A_2,k,r}, if $j+r$ is odd, then $\zeta_{\mathcal{A}_3}^{\star}(\{1\}^{k-r-j})$ is divisible by $\boldsymbol{p}$ and $T_{j+r,r}^{\star}$ is divisible by $\boldsymbol{p}^2$, and if $j+r$ is even, then $\zeta_{\mathcal{A}_3}^{\star}(\{1\}^{k-r-j})$ is divisible by $\boldsymbol{p}^2$ and $T_{j+r,r}^{\star}$ is divisible by $\boldsymbol{p}$. Therefore, $\zeta_{\mathcal{A}_3}^{\star}(\{1\}^{k-r-j})T_{j+r,r}^{\star}=0$ and we see that the left hand side of \eqref{A_3-app} is equal to $T_{k,r}^{\star}$. On the other hand, by using Proposition \ref{Aux-1}, Proposition \ref{Aux-2} and \eqref{A_3-single}, we see that the right hand side of \eqref{A_3-app} is equal to \begin{align*} &\binom{k}{r}\Biggl[\frac{k(k+1)}{2}-\frac{1}{2}\left\{-\binom{k+2}{k}+k^2+3k+1\right\}+(k+1)\\ &\qquad-\frac{1}{2}\left\{-(k+2)+\binom{k+2}{k}\right\}-(k+2)\Biggr]\frac{B_{\boldsymbol{p}-k-2}}{k+2}\boldsymbol{p}^2\\ &=-\frac{k+1}{2}\binom{k}{r}\frac{B_{\boldsymbol{p}-k-2}}{k+2}\boldsymbol{p}^2. 
\end{align*} Hence we have $T_{k,r}^{\star}=-\frac{k+1}{2}\binom{k}{r}\frac{B_{\boldsymbol{p}-k-2}}{k+2}\boldsymbol{p}^2$. By taking $\sum_{\boldsymbol{k} \in I_{k,r}}$ of \eqref{non-star to star}, we obtain \[ T_{k,r}^{\star}+\sum_{j=1}^{r-1}(-1)^j\sum_{l=j}^{k-r+j}T_{l,j}T_{k-l,r-j}^{\star}+(-1)^rT_{k,r}=0. \] Let us fix $1\leq j\leq r-1$ and $j \leq l \leq k-r+j$. By Proposition \ref{S_A_2,k,r}, if $l$ is odd, then $T_{l,j}$ is divisible by $\boldsymbol{p}^2$ and $T_{k-l,r-j}^{\star}$ is divisible by $\boldsymbol{p}$, and if $l$ is even, then $T_{l,j}$ is divisible by $\boldsymbol{p}$ and $T_{k-l,r-j}^{\star}$ is divisible by $\boldsymbol{p}^2$. Therefore, we see that $T_{l,j}T_{k-l,r-j}^{\star}=0$, and this gives $(-1)^{r-1}T_{k,r}=T_{k,r}^{\star}$. \end{proof} \end{document}
\begin{document} \title{Intrinsic degree of coherence of classical and quantum states} \author{Abu Saleh Musa Patoary$^{*}$, Girish Kulkarni$^{*}$, and Anand K. Jha} \email{[email protected]} \affiliation{Department of Physics, Indian Institute of Technology Kanpur, Kanpur, UP 208016, India} \affiliation{$^{*}$These authors contributed equally} \date{\today} \begin{abstract} In the context of the 2-dimensional (2D) polarization states of light, the degree of polarization $P_{2}$ is equal to the maximum value of the degree of coherence over all possible bases. Therefore, $P_2$ can be referred to as the intrinsic degree of coherence of a 2D state. In addition to (i) the maximum degree of coherence interpretation, $P_2$ also has the following interpretations: (ii) it is the Frobenius distance between the state and the maximally incoherent identity state, (iii) it is the norm of the Bloch-vector representing the state, (iv) it is the distance to the center-of-mass in a configuration of point masses with magnitudes equal to the eigenvalues of the state, (v) it is the visibility in a polarization interference experiment, and (vi) it is the weightage of the pure part of the state. Among these six interpretations of $P_{2}$, the Bloch vector norm, Frobenius distance, and center of mass interpretations have previously been generalized to derive an analogous basis-independent measure $P_{N}$ for $N$-dimensional (ND) states. In this article, by extending the concepts of visibility, degree of coherence, and weightage of pure part to ND spaces, we show that these three remaining interpretations of $P_{2}$ also generalize to the same quantity $P_{N}$, establishing $P_{N}$ as the intrinsic degree of coherence of ND states. We then extend $P_{N}$ to the $N\to\infty$ limit to quantify the intrinsic degree of coherence $P_{\infty}$ of infinite-dimensional states in the orbital angular momentum (OAM), photon number, and position-momentum degrees of freedom. 
\end{abstract} \maketitle \section{Introduction} Coherence is the physical property responsible for interference phenomena observed in nature and is the subject matter of the classical and quantum theories of coherence \cite{mandel1995cup,born1959pp,zernike1938physica,wolf1959nuovo, glauber1963pr,glauber1963pr2,sudarshan1963prl}. Both these highly-successful theories quantify coherence in terms of the visibility or contrast of the interference. The key difference is that whereas the classical theory formulates the visibility in terms of correlation functions involving products of field amplitudes \cite{born1959pp,wolf1959nuovo, zernike1938physica}, the quantum theory of optical coherence employs correlation functions involving products of field operators that in general may not commute \cite{glauber1963pr,glauber1963pr2,sudarshan1963prl}. In comparison to the classical theory which fails to explain the higher-order correlations of certain quantum light fields \cite{hong1987prl,pittman1996prl}, the quantum theory can be used to quantify the correlations of a general light field to arbitrary orders. However, as far as effects arising from second-order correlations of light fields are concerned, the classical and quantum theories have identical predictions implying that both can be interchangeably used. For quantifying the second-order correlations, a quantity of central interest is the degree of coherence, which is just the suitably-normalized second-order correlation function involving electromagnetic fields at two distinct spacetime points or polarization directions \cite{born1959pp,zernike1938physica}. In the context of a partially polarized field represented by a $2\times2$ polarization matrix $\rho$, the degree of coherence is the magnitude of the suitably-normalized off-diagonal entry which quantifies the correlations between the field components along a specific pair of orthogonal polarizations. 
Thus, the degree of coherence is a manifestly basis-dependent measure of coherence. In contrast, the maximum degree of coherence over all possible orthonormal polarization bases is a basis-independent measure of coherence known as the degree of polarization \cite{wolf1959nuovo}. Owing to this maximum degree of coherence interpretation, we also refer to the degree of polarization $P_{2}$ as the ``intrinsic degree of coherence'' of the field. For a normalized polarization matrix $\rho$, $P_{2}$ is given by \begin{align}\label{P2-first} P_2=\sqrt{2 \ {\rm Tr} \ (\rho^2) -1 }. \end{align} In addition to (i) the maximum degree of coherence interpretation, $P_{2}$ also has the following interpretations \cite{born1959pp}: (ii) it is the norm of the Bloch-vector representing the state, (iii) it is the Frobenius distance between the state and the completely incoherent state \cite{luis2005optcomm}, (iv) it is the distance to the center of mass in a configuration of point masses of magnitudes equal to the eigenvalues of the state \cite{alonso2016pra}, (v) it is the visibility obtained in a polarization interference experiment, and (vi) it is the weightage of the completely polarized part of the state. These six interpretations together provide a mathematically appealing and physically intuitive quantification of the intrinsic polarization correlations of a field in a basis-independent manner. While the need for a basis-independent quantification of coherence has long been recognized in both the classical and the quantum theories of optical coherence, such a quantification has been fully achieved only for the two-dimensional polarization states of light. In this context, it is known that the $2\times2$ polarization matrix describing the polarization state of a classical light field is formally identical to the $2\times2$ density matrix describing a quantum two-level system. 
Moreover, there is a one-to-one correspondence between the Poincar\'e sphere representation of partially polarized fields in terms of Stokes parameters \cite{stokes1851trans} and the Bloch sphere representation of qubits in terms of the Bloch vector components \cite{bloch1946pr}. By this correspondence, the measure $P_{2}$ encodes essentially the same information as the quantum purity, and can therefore be used to quantify the intrinsic coherence of both classical and quantum two-dimensional (2D) states \cite{gamel2012pra}. However, a generalized coherence measure analogous to $P_{2}$ that retains all its interpretations has not been obtained for higher-dimensional states so far. For quantifying the coherence of higher-dimensional systems, a number of studies in recent years have taken a resource theoretic approach \cite{baumgratz2014prl,girolami2014prl,streltsov2015prl,winter2016prl,streltsov2018njp,ma2019epl}. However, the present paper does not follow this resource theoretic approach. Instead, it follows an approach from optical coherence theory which seeks to generalize the basis-independent measure of coherence $P_{2}$ and all its known interpretations to quantify the intrinsic degree of coherence of higher-dimensional classical and quantum states. The first efforts in generalizing $P_{2}$ to higher dimensions were carried out by Barakat \cite{barakat1977optcomm, barakat1983opticaacta} and Samson {\it et al.} \cite{samson1981geophysics, samson1981siam}. In these efforts, they derived a basis-independent measure $P_{N}$ for an $N\times N$ polarization matrix $\rho$ by generalizing the Bloch-vector norm interpretation of $P_2$ to an ND space. In particular, they showed that for a normalized $\rho$, \begin{align} P_N=\sqrt{\frac{N\,{\rm Tr}(\rho^2)-1}{N-1}} \label{P_N}. 
\end{align} Recently, following up on previous generalizations for three- \cite{luis2005pra} and four- \cite{luis2007josaa} dimensional spaces, the Frobenius distance interpretation of $P_2$ \cite{luis2005optcomm} was generalized to ND spaces to also yield $P_{N}$ \cite{yao2016scirep}. In addition, the center of mass interpretation, when applied to ND states, yields $P_{N}$ as the generalized measure. Thus, it has so far been possible to show that $P_{N}$ has three of the six interpretations of $P_2$. However, the generalizations of the remaining three interpretations have either not been attempted or have had limited success \cite{ellis2005optcomm, gamel2012pra, setala2009optlett}. In this article, we take up the other three interpretations of $P_2$, namely, the visibility, degree of coherence, and weightage of pure part interpretations and extend them to ND spaces. We show that even these three interpretations of $P_{2}$ generalize to the same measure $P_{N}$. In essence, by demonstrating that $P_{N}$ has all the six interpretations of $P_{2}$, we theoretically establish $P_{N}$ as quantifying the intrinsic degree of coherence of ND states. We then extend $P_{N}$ to the $N\to\infty$ limit to quantify the intrinsic degree of coherence $P_{\infty}$ of infinite-dimensional states. The paper is organized as follows. In Sec.~II, we present a conceptual description of the degree of polarization. In Sec.~III, we describe the existing work on how the expression for $P_N$ is obtained by generalizing the Bloch-vector norm, Frobenius distance, and center of mass interpretations of $P_2$ to $N$-dimensional states. In Sec.~IV, we generalize the concepts of visibility, degree of coherence, and weightage of pure part to ND spaces, demonstrate that each of these interpretations of $P_{2}$ uniquely generalizes to $P_{N}$, and thereby establish $P_N$ as the intrinsic degree of coherence of finite $N$-dimensional classical and quantum states. 
In Sec.~V, we consider infinite-dimensional states in the orbital angular momentum (OAM), photon number, position and momentum bases, and show that the intrinsic degree of coherence $P_{\infty}$ of a normalizable state $\rho$ is given by $P_{\infty}=\sqrt{\mathrm{Tr}(\rho^2)}$. In the rest of the paper, we will use the symbol $\rho$ to denote the density matrix of dimensionality $2$, $N$, or $\infty$ depending on the context. Also, we will denote the $N\times N$ identity matrix by $\mathds{1}_{N}$. \section{Degree of Polarization} The polarization state of an electromagnetic field can be represented by a positive-semidefinite $2\times2$ Hermitian matrix. It is referred to as the polarization matrix or the coherence matrix and is defined as \cite{mandel1995cup}, \begin{align} \rho=\begin{bmatrix} \langle E_{1}E^*_{1}\rangle & \langle E_{1}E^*_{2}\rangle \\ \langle E^*_{1}E_{2}\rangle & \langle E_{2}E^*_{2}\rangle \end{bmatrix}=\begin{bmatrix} \rho_{11} & \rho_{12} \\ \rho_{21} & \rho_{22} \end{bmatrix}. \label{polarization matrix} \end{align} Here $\langle\cdots\rangle$ denotes the ensemble average over many realizations of the field, $E_{1}$ and $E_{2}$ denote the electric field components along two mutually orthonormal polarization directions represented by the basis vectors $\{|1\rangle,|2\rangle\}$, and $\rho_{ij}$ with $i,j=1,2$ denote the matrix elements of $\rho$ in the $\{|1\rangle,|2\rangle\}$ basis. The basis-dependent quantity $\mu_2=|\rho_{12}|/\sqrt{\rho_{11}\rho_{22}}$ is called the degree of coherence between the polarization basis vectors $|1\rangle$ and $|2\rangle$. It was shown by Wolf in a classic paper that the maximum value of $\mu_2$ over all possible choices of the bases in the 2D Hilbert space is equal to the degree of polarization $P_2$ \cite{wolf1959nuovo}, which for a normalized $\rho$ can be shown to be \cite{born1959pp}: \begin{align}\label{P2-wolf} P_2=\sqrt{1-4\,{\rm det}\,\rho}=\sqrt{2 \ {\rm Tr} \ (\rho^2) -1 }. 
\end{align} As the trace and the determinant are invariant under unitary operations, $P_2$ is a basis-independent quantity. Furthermore, $0\leq P_{2}\leq1$ with $P_{2}=1$ only when $\rho$ is a perfectly polarized field (pure state) and $P_{2}=0$ only when $\rho$ is the completely unpolarized field (completely mixed state) represented by the identity matrix. In the next two sections, we consider the six known interpretations of $P_{2}$ that justify its suitability as an intrinsic degree of coherence for 2D states. Following a brief description of each interpretation, we present the generalization to ND space and obtain $P_{N}$ as the ND analog of $P_{2}$. \section{Existing works on generalizing interpretations of $P_{2}$ to ND states} \subsection{Bloch vector norm interpretation} \subsubsection*{\bf 2D states} It is known that an arbitrary 2D state $\rho$ has the following unique decomposition in terms of the Stokes parameters \cite{nielsenchuang}: \begin{align}\label{2D-stokes-decomp} \rho=\frac{1}{2}\Big(\mathds{1}_{2}+\sum_{i=1}^{3} r_{i}\sigma_{i}\Big). \end{align} Here $\sigma_{1},\sigma_{2}$ and $\sigma_{3}$ are the Pauli matrices, and the real scalar quantities $r_{i}$'s are called the Stokes parameters of the state. Such a parametrization is possible due to the fact that $\sigma_{i}$'s, which are the generators of the Lie group $\rm SU(2)$, form an orthonormal basis in the real vector space of traceless $2\times2$ Hermitian matrices with respect to the Hilbert-Schmidt inner-product, $(A,B)\equiv \mathrm{Tr}\left(A^{\dagger}B\right)$. Consequently, the parameters $r_{i}$ can be regarded as the components of a $3$-dimensional vector $\bm{r}\equiv(r_{1},r_{2},r_{3})$, which is referred to as the Bloch vector representing the state in this vector space. 
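As a concrete check of the decomposition in Eq.~(\ref{2D-stokes-decomp}), the sketch below (the example state and the use of NumPy are our own illustrative choices, not part of the derivation) extracts the Stokes parameters via $r_i=\mathrm{Tr}(\rho\sigma_i)$, rebuilds $\rho$ from them, and confirms that the Bloch-vector norm reproduces $P_2$:

```python
import numpy as np

# Hedged sketch: verify the 2D Stokes decomposition rho = (1/2)(I + sum_i r_i sigma_i)
# with r_i = Tr(rho sigma_i), for an arbitrarily chosen partially polarized state.
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),   # sigma_1
         np.array([[0, -1j], [1j, 0]]),               # sigma_2
         np.array([[1, 0], [0, -1]], dtype=complex)]  # sigma_3

def stokes_params(rho):
    """Stokes parameters r_i = Tr(rho sigma_i) of a 2x2 density matrix."""
    return [np.trace(rho @ s).real for s in sigma]

# Example state (our own choice, Hermitian with unit trace).
rho = np.array([[0.7, 0.2 + 0.1j], [0.2 - 0.1j, 0.3]])
r = stokes_params(rho)
rho_rebuilt = 0.5 * (np.eye(2) + sum(ri * si for ri, si in zip(r, sigma)))
assert np.allclose(rho, rho_rebuilt)

# The Bloch-vector norm reproduces P_2 = sqrt(2 Tr(rho^2) - 1).
P2 = np.sqrt(2 * np.trace(rho @ rho).real - 1)
assert np.isclose(np.linalg.norm(r), P2)
```

The reconstruction identity holds for any Hermitian unit-trace matrix, since the Pauli matrices together with the identity form an orthogonal basis of $2\times2$ Hermitian matrices.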
For a 2D density matrix $\rho$, the condition $\mathrm{Tr}\,\rho^2\leq1$ is both necessary and sufficient to ensure positive-semidefiniteness, which in turn implies that the space of physical states is characterized by $0\leq|\bm{r}|\leq 1$. This space can be imagined to be a closed sphere in 3 dimensions, termed as the Bloch sphere. The pure states reside on the surface of this sphere with $|\bm{r}|=1$, whereas the maximally incoherent state $\mathds{1}_{2}/2$ with $|\bm{r}|=0$ resides at the center. From Eq.~(\ref{P2-wolf}), it can be shown that the norm of the Bloch vector is equal to $P_{2}$, that is, $|\bm{r}|=\sqrt{\sum_{i=1}^{3}|r_{i}|^2}=P_{2}$ \cite{nielsenchuang}. This way, $P_{2}$ is interpreted as the norm of the Bloch vector representing the state. \subsubsection*{\bf ND states} In direct correspondence with Eq.~(\ref{2D-stokes-decomp}), it has been shown that any ND state $\rho$ can be decomposed as \cite{hioe1981prl,kimura2003pla,byrd2003pra,bertlmann2008jpa}, \begin{align}\label{Ndim-Bloch-form} \rho=\frac{1}{N}\Big(\mathds{1}_{N}+\sqrt{\frac{N(N-1)}{2}}\sum_{i=1}^{(N^2-1)} r_{i}\Lambda_{i}\Big), \end{align} where $\Lambda_{i}$'s are the generalized $N\times N$ Gellmann matrices, and the scalar quantities $r_{i}$'s are the ND analogs of Stokes parameters. In exact analogy with the 2D case, this parametrization is made possible by the fact that $\Lambda_{i}$'s, which are the $(N^2-1)$ generators of the Lie group $\rm SU(N)$, form an orthonormal basis in the real vector space of traceless $N\times N$ Hermitian matrices with respect to the Hilbert-Schmidt inner-product. The parameters $r_{i}$ form the components of the $(N^2-1)$-dimensional Bloch vector $\bm{r}$ representing the state $\rho$. We note that in contrast with the 2D case, the condition $\mathrm{Tr}\,\rho^2\leq1$ is not sufficient to ensure positive-semidefiniteness of ND density matrices. 
Consequently, only a subset of the region $0\leq|\bm{r}|\leq1$ in the $(N^2-1)$-dimensional Bloch space corresponds to physical states \cite{kimura2003pla,byrd2003pra}. Barakat \cite{barakat1977optcomm, barakat1983opticaacta} and Samson {\it et al.} \cite{samson1981geophysics, samson1981siam} were the first to show that the norm of the ND Bloch-vector is the degree of polarization $P_N$ of the state. The derivations of $P_N$ by both Barakat \cite{barakat1977optcomm, barakat1983opticaacta} and Samson {\it et al.} \cite{samson1981geophysics, samson1981siam} were presented in terms of the eigenvalues of $\rho$ and not in terms of the Gell-Mann matrices. For 3D states, an explicit derivation of $P_3$ in terms of 3D Gell-Mann matrices was carried out by Set\"{a}l\"{a} \textit{et al.} \cite{setala2002prl, setala2002pre}, who also demonstrated the usefulness of $P_3$ for studying optical near fields and evanescent fields. We now present the derivation for ND states explicitly in terms of ND Gell-Mann matrices and obtain the expression of $P_{N}$ as in Eq.~(\ref{P_N}). We note that the set of $(N^2-1)$ generalized Gell-Mann matrices $\Lambda_{i}$'s of Eq.~(\ref{Ndim-Bloch-form}) comprises three subsets: the set $\{U\}$ of $N(N-1)/2$ symmetric matrices, the set $\{V\}$ of $N(N-1)/2$ anti-symmetric matrices, and the set $\{W\}$ of $(N-1)$ diagonal matrices. The explicit forms of these matrices in the orthonormal basis $\{|i\rangle\}^{N}_{i=1}$, where $|i\rangle$ is an ND column vector with the $i^{\rm th}$ entry being 1 and others being 0, are given by \cite{kimura2003pla}, \begin{align} &U_{jk}=|j\rangle \langle k|+|k\rangle \langle j|, \qquad V_{jk}=-i|j\rangle \langle k|+i|k\rangle \langle j|, \notag \\ & {\rm and} \ \ W_{l}=\sqrt{\frac{2}{l(l+1)}}\Big(\sum_{m=1}^l|m\rangle \langle m|-l|l+1\rangle \langle l+1|\Big),\label{Generalized Gellmann} \end{align} where $1\leq j<k\leq N$ and $1\leq l\leq (N-1)$. 
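The counting and normalization of these generators can be verified numerically. A minimal sketch (the helper `ketbra` and the dimension $N=4$ are our own illustrative choices) builds the $N^2-1$ generalized Gell-Mann matrices of Eq.~(\ref{Generalized Gellmann}) and checks Hilbert-Schmidt orthogonality, $\mathrm{Tr}(\Lambda_a\Lambda_b)=2\delta_{ab}$:

```python
import numpy as np

# Hedged sketch: construct the generalized Gell-Mann matrices and verify that
# they are mutually orthogonal with Tr(L_a L_b) = 2 delta_ab.
def gell_mann(N):
    """Return the N^2 - 1 generalized Gell-Mann matrices U_jk, V_jk, W_l."""
    def ketbra(j, k):
        # Matrix |j><k| in the computational basis (0-indexed).
        M = np.zeros((N, N), dtype=complex)
        M[j, k] = 1.0
        return M
    mats = []
    for j in range(N):
        for k in range(j + 1, N):
            mats.append(ketbra(j, k) + ketbra(k, j))             # symmetric U_jk
            mats.append(-1j * ketbra(j, k) + 1j * ketbra(k, j))  # antisymmetric V_jk
    for l in range(1, N):
        W = sum(ketbra(m, m) for m in range(l)) - l * ketbra(l, l)
        mats.append(np.sqrt(2.0 / (l * (l + 1))) * W)            # diagonal W_l
    return mats

N = 4
L = gell_mann(N)
assert len(L) == N * N - 1
for a in range(len(L)):
    for b in range(len(L)):
        hs = np.trace(L[a].conj().T @ L[b]).real
        assert np.isclose(hs, 2.0 if a == b else 0.0)
```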
In terms of these definitions, we write Eq.~(\ref{Ndim-Bloch-form}) as, \begin{multline}\label{Ndim-Bloch-form-2} \rho=\frac{1}{N}\Bigg[\mathds{1}_{N}+\sqrt{\frac{N(N-1)}{2}}\Big(\sum_{j=1}^N\sum_{k=j+1}^N \big\{u_{jk}U_{jk}\big.\\\big.+v_{jk}V_{jk}\big\}+\sum_{l=1}^{N-1}w_{l}W_{l}\Big)\Bigg], \end{multline} where $u_{jk},v_{jk}$ and $w_{l}$ are the Bloch-vector components along the Gellmann matrices $U_{jk},V_{jk}$ and $W_{l}$ respectively. Here, we have relabeled the set of components $\{r_{i}\}$ and the set of matrices $\{\Lambda_{i}\}$ of Eq.~(\ref{Ndim-Bloch-form}) by the set of parameters $\{\{u_{jk}\},\{v_{jk}\},\{w_{l}\}\}$ and the set of matrices $\{\{U_{jk}\},\{V_{jk}\},\{W_{l}\}\}$, respectively. We calculate the components $u_{jk},v_{jk}$ and $w_{l}$ in terms of the density matrix elements and find them to be \begin{align} &u_{jk}=\sqrt{\frac{N}{2(N-1)}}(\rho_{jk}+\rho_{kj}), \notag \hspace{4mm} v_{jk}=i\sqrt{\frac{N}{2(N-1)}}(\rho_{jk}-\rho_{kj}), \notag \\ &w_{l}=\sqrt{\frac{N}{l(l+1)(N-1)}}\Big(\sum_{m=1}^l\rho_{mm}-l\rho_{l+1,l+1}\Big). \label{coeff s} \end{align} The norm of the Bloch vector $\bm{r}$ defined as $|\bm{r}|=\sqrt{\sum_{i=1}^{(N^2-1)}r^2_{i}}$ is therefore given by, \begin{align}\label{bloch-norm} |\bm{r}|=\sqrt{\sum_{j=1}^N\sum_{k=j+1}^N\Big[u^2_{jk}+v^2_{jk}\Big]+\sum_{l=1}^{N-1}w^2_l}. \end{align} In order to evaluate $|\bm{r}|$, we first find that \begin{align}\label{coeff uv} \sum_{j=1}^N\sum_{k=j+1}^N\Big[u^2_{jk}+v^2_{jk}\Big] = \frac{2N}{N-1}\sum_{j=1}^N\sum_{k=j+1}^N |\rho_{jk}|^2. 
\end{align} We then evaluate the other summation in Eq.~(\ref{bloch-norm}) to be \begin{align} \label{coeff w} \nonumber &\sum_{l=1}^{N-1}w^2_l=\sum_{l=1}^{N-1}\frac{N}{l(l+1)(N-1)}\Big(\sum_{m=1}^l\rho_{mm}-l\rho_{l+1,l+1}\Big)^2 \\ \nonumber & \!\begin{multlined}= \frac{N}{N-1}\Big[\sum_{i=1}^N\rho_{ii}^2\big\{\sum_{j=i}^{N-1}\frac{1}{j(j+1)}+\frac{i-1}{i}\big\}-\frac{2}{N}\sum_{i=1}^N\sum_{j=i+1}^N\rho_{ii}\rho_{jj}\Big] \end{multlined}\\ & =\sum_{i=1}^N\rho_{ii}^2-\frac{2}{N-1}\sum_{i=1}^N\sum_{j=i+1}^N\rho_{ii}\rho_{jj}. \end{align} By substituting Eqs.~(\ref{coeff uv}) and (\ref{coeff w}) into Eq.~(\ref{bloch-norm}), we obtain \begin{align} |\bm{r}|=\sqrt{\frac{N\,{\rm Tr}(\rho^2)-1}{N-1}}=P_{N}. \end{align} Thus $P_N$, like its two-dimensional analog, can be interpreted as the norm of the Bloch vector corresponding to the ND state. \subsection{Frobenius distance interpretation} \subsubsection*{\bf 2D states} For a 2D state $\rho$, it is known that the degree of polarization $P_2$ can be viewed as the Frobenius distance between the state $\rho$ and the completely-incoherent state $\mathds{1}_{2}/2$ \cite{luis2005optcomm}, that is, \begin{align}\label{P2-as-frob-norm} P_2=\sqrt{2}\Big{|}\Big{|}\rho-\frac{\mathds{1}_{2}}{2}\Big{|}\Big{|}_F = \sqrt{2 \ {\rm Tr} \ (\rho^2) -1 }. \end{align} Here, the Frobenius distance is quantified using the Frobenius norm, defined as $||A||_F\equiv \sqrt{{\rm Tr}(A^{\dagger}A)}$, with the normalization factor ensuring that $0\leq P_{2}\leq1$. We note that the expressions of $P_2$ in Eqs.~(\ref{P2-wolf}) and (\ref{P2-as-frob-norm}) are the same. \subsubsection*{\bf ND states} The Frobenius-distance interpretation was first generalized to three- \cite{luis2005pra} and four- \cite{luis2007josaa} dimensional states by A.~Luis. 
More recently, Yao \textit{et al.} \cite{yao2016scirep} have generalized the Frobenius-distance interpretation to ND states to define $P_N$ as: \begin{align}\label{PN-as-frob-norm} P_N\equiv\sqrt{\frac{N}{N-1}}\Big{|}\Big{|}\rho-\frac{\mathds{1}_{N}}{N}\Big{|}\Big{|}_F = \sqrt{\frac{N\,{\rm Tr}(\rho^2)-1}{N-1}}. \end{align} In other words, $P_{N}$ is the Frobenius distance between the state $\rho$ and the completely-incoherent state $\mathds{1}_{N}/N$ in the space of $N\times N$ density matrices. The normalization factor in Eq.~(\ref{PN-as-frob-norm}) is again chosen such that $0\leqslant P_N \leqslant 1$. We note that the expressions of $P_N$ in Eqs.~(\ref{P_N}) and (\ref{PN-as-frob-norm}) are the same. Furthermore, it can be verified that when $\rho$ is pure, $\mathrm{Tr}(\rho^2)=1$ implying $P_{N}=1$, whereas when $\rho=\mathds{1}_{N}/{N}$, $\mathrm{Tr}(\rho^2)=1/N$ implying $P_{N}=0$. \subsection{Center of mass interpretation} In a recent study, M.~A.~Alonso \textit{et al.} \cite{alonso2016pra} have discussed a geometric interpretation of the measure $P_{N}$ of Eq.~(\ref{P_N}) as the distance to the center of mass in a configuration of point masses. \subsubsection*{\bf 2D states} Consider a configuration of 2 point masses of magnitudes equal to the eigenvalues $\lambda_{1}$ and $\lambda_{2}$ of the state, each placed at a unit distance from the origin in opposite directions in a 1-dimensional Euclidean space. The distance $Q$ to the center of mass of this configuration from the origin is given by \begin{equation} Q=\Big|\frac{\lambda_{1}-\lambda_{2}}{\lambda_{1}+\lambda_{2}}\Big|=P_{2}. \end{equation} Thus, $P_{2}$ has the interpretation as the distance of the center of mass from the origin in this configuration. 
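The Frobenius-distance expression of Eq.~(\ref{PN-as-frob-norm}) can be checked numerically. A minimal sketch (the random-state generator, the seed, and $N=5$ are our own illustrative choices):

```python
import numpy as np

# Hedged sketch: check that sqrt(N/(N-1)) * ||rho - 1_N/N||_F reproduces
# P_N = sqrt((N Tr(rho^2) - 1)/(N - 1)) for a randomly drawn density matrix.
rng = np.random.default_rng(0)

def random_density_matrix(N):
    A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
    rho = A @ A.conj().T              # positive semidefinite by construction
    return rho / np.trace(rho).real   # unit trace

N = 5
rho = random_density_matrix(N)
P_N = np.sqrt((N * np.trace(rho @ rho).real - 1) / (N - 1))

frob = np.sqrt(N / (N - 1)) * np.linalg.norm(rho - np.eye(N) / N, 'fro')
assert np.isclose(frob, P_N)
assert 0 <= P_N <= 1
```

The agreement follows from $||\rho-\mathds{1}_N/N||_F^2=\mathrm{Tr}(\rho^2)-1/N$ for any unit-trace $\rho$.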
\subsubsection*{\bf ND states} Consider a configuration of $N$ point masses of magnitudes equal to the eigenvalues $\lambda_{1},\lambda_{2},...,\lambda_{N}$ of $\rho$, each placed at a unit distance from the origin and equally spaced from one another such that they constitute a regular $(N-1)$-simplex in an $(N-1)$-dimensional Euclidean space. The distance $Q$ to the center of mass of this configuration is given by \begin{equation} Q=\sqrt{\frac{\sum_{i=1}^{N-1}\sum_{j=i+1}^N(\lambda_i-\lambda_j)^2}{(N-1)(\sum_{i=1}^N\lambda_i)^2}}=P_{N}. \end{equation} Therefore, $P_{N}$ is equal to the distance of the center of mass of this configuration from the origin. \section{Generalizing other interpretations of $P_2$ to ND states} \subsection{Maximum degree of coherence interpretation} \subsubsection*{\bf 2D states} As pointed out in Sec.~II in the context of 2D polarization states, the basis-dependent quantity $\mu_2$ defined below Eq.~(\ref{polarization matrix}) quantifies the degree of coherence between the mutually orthogonal polarization states represented by $|1\rangle$ and $|2\rangle$. Using Eqs.~(\ref{P2-wolf}) and (\ref{mu2}), it can be shown that $0\leq\mu_{2}\leq P_{2}$ and also that $\mu_{2}$ attains the maximum value $P_2$ when the basis $\{|1\rangle, |2\rangle\}$ is such that $\rho_{11}=\rho_{22}$ \cite{born1959pp,wolf1959nuovo}, that is, \begin{align}\label{mu2-max} \max_{\{|1\rangle, |2\rangle\}\in \mathds{S}} \mu_{2}=P_{2}. \end{align} In this way, $P_{2}$ is interpreted as the maximum of $\mu_{2}$ over the set $\mathds{S}$ of all orthonormal bases in the $2$D Hilbert space. In order to generalize the definition of the degree of coherence for ND states, we rewrite $\mu_2$ as \begin{align} \label{mu2} \mu_{2}=\sqrt{\frac{|\rho_{12}|^2}{\rho_{11}\rho_{22}}}. 
\end{align} We find that while the numerator $|\rho_{12}|^2$ quantifies the correlation between the basis vectors $|1\rangle$ and $|2\rangle $, the denominator provides the normalization such that $0\leq\mu_2\leq 1$. Our aim is to define an ND degree of coherence $\mu_N$ such that it reduces to $\mu_2$ for $N=2$ and lies between 0 and 1. \subsubsection*{\bf ND states} We use the definition in Eq.~(\ref{mu2}) to generalize the concept of the degree of coherence to ND states. We expect the generalized quantity $\mu_N$ to be basis-dependent, the maximum of which must be equal to the ND intrinsic degree of coherence $P_N$. Therefore, in analogy with the definition of $\mu_2$ in Eq.~(\ref{mu2}), we define the ND degree of coherence $\mu_N$ as \begin{align} \label{muN} \mu_N = \sqrt{\frac{\sum_{i=1}^{N-1} \sum_{j=i+1}^N|\rho_{ij}|^2}{\sum_{i=1}^{N-1}\sum_{j=i+1}^N \rho_{ii}\rho_{jj}}}. \end{align} Here, $\rho_{ij}$ are the matrix elements of the state $\rho$ in an orthonormal basis $\{|1\rangle,|2\rangle,\cdots,|N\rangle\}$. The numerator is the sum of the squared magnitudes of all the off-diagonal terms and the denominator is the sum of the products of the pairs of diagonal terms. As expected, $\mu_N$ as defined above reduces to $\mu_2$ for $N=2$, and the normalization term in the denominator makes sure that $\mu_N$ lies between 0 and 1. We further note that $\mu_N$ is a basis-dependent quantity. Now, in order for $\mu_N$ to be considered as the ND analog of $\mu_{2}$, we need to show that the maximum value of $\mu_N$ over the set of all possible ND bases is equal to $P_N$. 
From Eq.~(\ref{PN-as-frob-norm}) and Eq.~(\ref{muN}), we have \begin{align} \label{muN2} \mu_N^2 &= \frac{\sum_{i=1}^{N-1} \sum_{j=i+1}^N|\rho_{ij}|^2}{\sum_{i=1}^{N-1}\sum_{j=i+1}^N \rho_{ii}\rho_{jj}}=\frac{\frac{1}{2}\left(\sum_{i=1}^{N} \sum_{j=1}^N|\rho_{ij}|^2 -\sum_{i=1}^N \rho_{ii}^2\right)}{\frac{1}{2}\left(\sum_{i=1}^{N}\sum_{j=1}^N \rho_{ii}\rho_{jj} - \sum_{i=1}^N \rho_{ii}^2 \right)} \notag \\ &= \frac{{\rm Tr}(\rho^2)-\sum_{i=1}^N \rho_{ii}^2}{1 - \sum_{i=1}^N \rho_{ii}^2}=1-\frac{1-{\rm Tr}(\rho^2)}{1-\sum_{i=1}^N \rho_{ii}^2}. \end{align} From the above equation, it is clear that $\mu_N^2$ attains its minimum value when the sum $\sum_{i=1}^N \rho_{ii}^2$ is maximum. The sum is maximum when $\rho_{ii}$ is equal to 1 only for a particular $i$ and is zero for the rest, in which case the sum $\sum_{i=1}^N \rho_{ii}^2={\rm Tr}(\rho^2)$ implying ${\rm min} \ \mu_N=0$. Furthermore, $\mu_N^2$ attains its maximum value when the sum $\sum_{i=1}^N \rho_{ii}^2$ is minimum. It is straightforward to show that the sum $\sum_{i=1}^N \rho_{ii}^2$ is minimum when $\rho_{11}=\rho_{22}=\cdots=\rho_{NN}=1/N$, in which case $\sum_{i=1}^N \rho_{ii}^2=\sum_{i=1}^N (1/N)^2=1/N$. Therefore, from Eq.~(\ref{muN2}), we have \begin{align}\label{muN-max} \max_{\{|1\rangle, |2\rangle, \cdots |N\rangle \}\in \mathds{S}} \ \mu_N &=\sqrt{\frac{N \ {\rm Tr}(\rho^2)-1}{N-1}}=P_N, \end{align} which is in direct correspondence with Eq.~(\ref{mu2-max}). Thus, as in the 2D case, we find that the maximum of $\mu_{N}$ over the set $\mathds{S}$ of all orthonormal bases in the ND Hilbert space is equal to the intrinsic degree of coherence $P_{N}$. Moreover, the maximum is achieved in the basis where all the diagonal entries are equal, again as is true in the 2D case. 
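The maximization argument above can be exercised numerically: in a basis where all diagonal entries of $\rho$ are equal, $\mu_N$ attains $P_N$. The sketch below uses our own construction of such a basis (composing the eigenbasis of $\rho$ with a discrete-Fourier unitary equalizes the diagonal), and also samples random bases to check $\mu_N\leq P_N$; the example state, seed, and $N=4$ are illustrative assumptions:

```python
import numpy as np

# Hedged sketch: check Eq. (muN-max). mu_N is basis dependent; in a basis where
# every diagonal entry of rho equals 1/N it reaches P_N, and in any other basis
# it stays below that maximum.
rng = np.random.default_rng(1)

def mu_N(rho):
    N = rho.shape[0]
    num = sum(abs(rho[i, j]) ** 2 for i in range(N) for j in range(i + 1, N))
    den = sum((rho[i, i] * rho[j, j]).real for i in range(N) for j in range(i + 1, N))
    return np.sqrt(num / den)

N = 4
A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
rho = A @ A.conj().T
rho /= np.trace(rho).real
P_N = np.sqrt((N * np.trace(rho @ rho).real - 1) / (N - 1))

# Diagonalize, then apply the DFT matrix: every diagonal entry becomes 1/N.
_, V = np.linalg.eigh(rho)
F = np.exp(-2j * np.pi * np.outer(np.arange(N), np.arange(N)) / N) / np.sqrt(N)
rho_eq = F @ (V.conj().T @ rho @ V) @ F.conj().T
assert np.allclose(np.diag(rho_eq).real, 1.0 / N)
assert np.isclose(mu_N(rho_eq), P_N)

# Randomly drawn bases never exceed the maximum (up to numerics).
for _ in range(50):
    Q, _ = np.linalg.qr(rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N)))
    assert mu_N(Q @ rho @ Q.conj().T) <= P_N + 1e-12
```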
While our analysis does not present a clear physical reasoning for defining $\mu_{N}$ as Eq.~(\ref{muN}), the fact that $\mu_{N}$ satisfies all the mathematical properties of $\mu_{2}$ strongly suggests that $\mu_{N}$ is the ND analog of $\mu_{2}$, and can therefore be referred to as the ND degree of coherence. We now note that our above analysis is physically distinct from a recent study \cite{streltsov2018njp} which relates the maximal resource-theoretic coherence of a state over unitary transformations to the state purity. The distinction arises because whereas optical coherence theory quantifies the system's ability to interfere, the resource theory of coherence quantifies the amount of superposition in a specific basis that can be exploited for certain quantum protocols. In order to illustrate this difference in the context of a 2D state $\rho$, we consider the $l_{1}$-norm measure $|\rho_{12}|$ from resource theory, and the degree of coherence $\mu_{2}=|\rho_{12}|/\sqrt{\rho_{11}\rho_{22}}$ of Eq.~(\ref{mu2}) from optical coherence theory. For a pure state $\rho=|\psi\rangle\langle\psi|$, where $|\psi\rangle=\epsilon |1\rangle+\sqrt{1-\epsilon^2}|2\rangle$ with $\epsilon\to0$, we have $|\rho_{12}|\to0$ which implies that the state is incoherent in a resource-theoretic sense, whereas $\mu_{2}=1$ which implies that the state is fully coherent in the optical coherence-theoretic sense. Therefore, while it is interesting that similar relations between maximal coherence and purity hold in both theories, these relations are physically distinct. \subsection{Visibility Interpretation} \subsubsection*{\bf 2D states} The visibility interpretation of $P_2$ for a 2D state was given by Emil Wolf \cite{wolf1959nuovo} using a polarization interference scheme (see Section 6.2 of Ref.~\cite{mandel1995cup}). As depicted in Fig.~\ref{fig1}, we discuss this scheme with slight modifications in order to make it more amenable to generalization to higher dimensions. 
A field in the polarization state $\rho$, as given by Eq.~(\ref{polarization matrix}), first passes through a wave-plate (WP) that introduces a phase $\delta$ between the two mutually orthogonal directions represented by vectors $|1\rangle$ and $|2\rangle$. The field then passes through a rotation plate (RP) that rotates the polarization state by an angle $\theta$. Finally, the field is detected using a polarizing beam splitter (PBS) in the two orthogonal polarization directions $|1\rangle$ and $|2\rangle$. The corresponding detection probabilities $I_1$ and $I_2$ at the two output ports are given by \begin{align} &I_1=\rho_{11}\cos^2\theta+\rho_{22}\sin^2\theta+2|\rho_{12}|\sin\theta\cos\theta\cos(\beta+\delta),\notag \\ &I_2=\rho_{11}\sin^2\theta+\rho_{22}\cos^2\theta-2|\rho_{12}|\sin\theta\cos\theta\cos(\beta+\delta), \notag \end{align} where $\rho_{12}=|\rho_{12}| e^{i\beta}$. The visibility $V$ of the interference pattern is defined as (see Section 6.2 of Ref.~\cite{mandel1995cup}) \begin{align}\label{visibility-wolf} V=\frac{\langle I_1 \rangle_{{\rm max}(\delta, \theta)}-\langle I_1 \rangle_{{\rm min}(\delta, \theta)}}{\langle I_1 \rangle_{{\rm max}(\delta, \theta)}+\langle I_1 \rangle_{{\rm min}(\delta, \theta)}}, \end{align} where $\langle I_1 \rangle_{{\rm max}(\delta, \theta)}$ and $\langle I_1 \rangle_{{\rm min}(\delta, \theta)}$ are the maximum and minimum values of $I_{1}$, respectively, over all possible $\delta$ and $\theta$. Similarly, we can equivalently define the visibility as \begin{equation}\label{visibility} V=\max_{U\in U(2)}\Big|\frac{I_1-I_2}{I_1+I_2}\Big|=\max_{U\in U(2)}f(I_1,I_2), \end{equation} where $U(2)$ is the group of 2D unitary matrices and where we have denoted $|(I_1-I_2)/(I_1+I_2)|$ as $f(I_1, I_2)$, since this notation is more convenient when generalizing to ND spaces. 
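The interference scheme can be simulated directly. In the sketch below we model the WP as $\mathrm{diag}(1,e^{i\delta})$ and the RP as a real rotation by $\theta$ (our own modeling assumptions), scan both settings on a grid for an example state, and compare the visibility of Eq.~(\ref{visibility-wolf}) with $P_2$:

```python
import numpy as np

# Hedged sketch: simulate the polarization interference scheme and check that
# the visibility (I_max - I_min)/(I_max + I_min) reproduces P_2 for an example
# state. The state, grid sizes, and tolerance are our own illustrative choices.
rho = np.array([[0.7, 0.2 + 0.1j], [0.2 - 0.1j, 0.3]])
P2 = np.sqrt(2 * np.trace(rho @ rho).real - 1)

def I1(theta, delta):
    W = np.diag([1.0, np.exp(1j * delta)])                     # wave plate
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]], dtype=complex)             # rotation plate
    return (R @ W @ rho @ W.conj().T @ R.conj().T)[0, 0].real  # port-1 probability

vals = [I1(t, d) for t in np.linspace(0, np.pi, 200)
                 for d in np.linspace(0, 2 * np.pi, 200)]
V = (max(vals) - min(vals)) / (max(vals) + min(vals))
assert abs(V - P2) < 1e-3   # agreement up to the scan resolution
```

Since $R(\theta)W(\delta)$ can rotate any pure polarization state onto $|1\rangle$ up to a phase, scanning $(\theta,\delta)$ reaches $I_{\max}=\lambda_1$ and $I_{\min}=\lambda_2$, so the scanned visibility tends to $(\lambda_1-\lambda_2)/(\lambda_1+\lambda_2)=P_2$.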
The function $f(I_1, I_2)$ has the following properties: (i) it is $1$ if and only if one among $I_1$ and $I_2$ is 1 and the other one is 0, (ii) it is 0 if and only if $I_1=I_2$, and (iii) it is a {\it Schur-convex} function, that is, for two given sets of probabilities $\{I_{1},I_2\}$ and $\{I'_{1},I'_2\}$, if $\{I'_1,I'_2\}$ majorizes $\{I_1,I_2\}$ then $f(I_1,I_2)\leq f(I'_1,I'_2)$ \cite{bhatia2013springer}. The maximization involved in Eq.~(\ref{visibility}) can be carried out using Schur's theorem, which states that the measured probability distribution of a state in any basis is majorized by the eigenvalue distribution of the state \cite{nielsen2002notes}, that is, $(I_{1},I_{2})\prec(\lambda_{1},\lambda_{2})$. Since there always exists a unitary transformation such that $I_{1}=\lambda_{1}$ and $I_{2}=\lambda_{2}$, $f(I_1, I_2)$ becomes maximum when $I_{1}=\lambda_{1}$ and $I_{2}=\lambda_{2}$, and in that case we get \begin{equation}\label{eigen-visibility} V=\max_{U\in U(2)}f(I_1,I_2)=f(\lambda_1,\lambda_2)=\Big|\frac{\lambda_{1}-\lambda_{2}}{\lambda_{1}+\lambda_{2}}\Big|=P_{2}, \end{equation} that is, $P_2$ equals the 2D visibility in a polarization interference experiment. The importance of the visibility interpretation is that it not only provides a physically intuitive way of understanding the degree of coherence but also provides an experimental scheme for measuring it. \begin{figure} \caption{(a) Schematic setup for describing the degree of polarization $P_2$ as the visibility in a polarization interference experiment. (b) Schematic setup for describing the $N$-dimensional degree of polarization, or $N$-dimensional intrinsic degree of coherence $P_{N}$, as the visibility.} \label{fig1} \end{figure} \subsubsection*{\bf ND states} In direct analogy with the scheme depicted in Fig.~\ref{fig1}(a), Fig.~\ref{fig1}(b) depicts the general interference situation for an ND density matrix $\rho$ represented in an orthonormal basis $\{|1\rangle,|2\rangle,\cdots,|N\rangle\}$. 
The density matrix $\rho$ is acted upon by a general $N \times N$ unitary operator $U$, which can be realized by a combination of optical elements. The $N$-port splitter (NPS) splits the field along the $N$ orthonormal states $\{|1\rangle,|2\rangle,\cdots,|N\rangle\}$, and the detection probabilities along the basis vectors are represented by $\{I_{1},I_{2},\cdots,I_{N}\}$. In analogy with the definition of $f(I_{1},I_{2})$ for the 2D case, we define \begin{align} \label{ND Visibility} f(I_1,I_2,\cdots,I_N)=\sqrt{\frac{\sum_{i=1}^{N-1}\sum_{j=i+1}^N(I_i-I_j)^2}{(N-1)(\sum_{i=1}^N I_i)^2}}, \end{align} which satisfies the following properties: (i) it is $1$ if and only if $I_{k}=1$ for some $k\leq N$ and $I_{i}=0$ for all $i\neq k$, (ii) it is $0$ if and only if all the probabilities are equal, that is, $I_{i}=1/N$ for $i=1,2,\cdots,N$, and (iii) it is a {\it Schur-convex} function, as may be proved using theorem II.3.14 of Ref.~\cite{bhatia2013springer}. We know by virtue of Schur's theorem \cite{nielsen2002notes} that $\{I_{1},I_{2},\cdots,I_{N}\}\prec\{\lambda_{1},\lambda_{2},\cdots,\lambda_{N}\}$, where the $\lambda_i$'s are the eigenvalues of the density matrix. We also know that there always exists a unitary transformation $U\in U(N)$ such that $\{I_{1},I_{2},\cdots,I_{N}\}=\{\lambda_{1},\lambda_{2},\cdots,\lambda_{N}\}$. Using these facts, we define the ND visibility $V$ as $f(I_1,I_2,\cdots,I_N)$ maximized over $U(N)$, i.e., \begin{align} V &=\max_{U\in U(N)} f(I_1,I_2,\cdots,I_N)=\sqrt{\frac{\sum_{i=1}^{N-1}\sum_{j=i+1}^N(\lambda_i-\lambda_j)^2}{(N-1)(\sum_{i=1}^N\lambda_i)^2}} \notag \\ &=\sqrt{\frac{N \ {\rm Tr}(\rho^2)-1}{N-1}}=P_{N}. \label{PN-visibility} \end{align} Thus, we find that just as in the 2D case, $P_N$ has the interpretation as the ND visibility of an interference experiment. 
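Eq.~(\ref{PN-visibility}) can be checked numerically: $f$ evaluated on the eigenvalue distribution gives $P_N$, while, consistent with Schur's theorem, the detection probabilities in randomly drawn bases never do better. A sketch (random state, seed, and $N=4$ are our own illustrative choices):

```python
import numpy as np

# Hedged sketch: f of Eq. (ND Visibility) on the eigenvalues equals P_N, and on
# the diagonal of rho in any other basis it is never larger (Schur-convexity).
rng = np.random.default_rng(2)

def f(I):
    I = np.asarray(I, dtype=float)
    N = len(I)
    pairs = sum((I[i] - I[j]) ** 2 for i in range(N) for j in range(i + 1, N))
    return np.sqrt(pairs / ((N - 1) * I.sum() ** 2))

N = 4
A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
rho = A @ A.conj().T
rho /= np.trace(rho).real
P_N = np.sqrt((N * np.trace(rho @ rho).real - 1) / (N - 1))

lam = np.linalg.eigvalsh(rho)
assert np.isclose(f(lam), P_N)      # maximum reached at the eigenbasis

for _ in range(100):                # any other basis does no better
    Q, _ = np.linalg.qr(rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N)))
    probs = np.diag(Q @ rho @ Q.conj().T).real
    assert f(probs) <= P_N + 1e-12
```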
\subsection{Weightage of pure part interpretation} \subsubsection*{\bf 2D states} In the context of partially polarized fields, it has been shown that any 2D polarization state $\rho$ can be uniquely decomposed into a weighted mixture of two fields, one of which is completely polarized or pure, and the other completely unpolarized or fully mixed \cite{mandel1995cup, born1959pp}. Mathematically, this implies that \begin{align}\label{wolf-decomp} \rho=s_{1} |\psi_{1}\rangle\langle\psi_{1}| + (1-s_{1})\frac{\mathds{1}_{2}}{2}, \end{align} where $|\psi_{1}\rangle$ represents the completely polarized pure state, $s_1=\lambda_1-\lambda_2$, with $\lambda_{1}$ and $\lambda_{2}$ being the eigenvalues of $\rho$, denotes the weightage of the pure part, and $\mathds{1}_{2}$ is the $2\times 2$ identity matrix, so that $\mathds{1}_{2}/2$ is the completely unpolarized state. From Eq.~(\ref{eigen-visibility}), we know that for a normalized $\rho$, $\lambda_1 -\lambda_2=P_2$, from which we get \begin{align} s_{1}=\lambda_{1}-\lambda_{2}=P_{2}. \end{align} In other words, $P_2$ is equal to the weightage of the pure portion of the state. This interpretation is physically intuitive, as it implies that in order to prepare the state by mixing together a pure state and the completely mixed state, the needed weightage of the pure part is $P_{2}$. \subsubsection*{\bf ND states} We now generalize this interpretation of $P_{2}$ to higher dimensions. The quantification of $P_2$ in terms of the weightage of its pure part is possible only because of the existence of the unique decomposition in Eq.~(\ref{wolf-decomp}). However, it is now known that such a unique decomposition in terms of just two matrices is not possible for ND states \cite{brosseau1998fundamentals, gill2014pra, gill2017pra}.
For a 3D polarization state, it has been shown that a unique decomposition is possible in terms of three matrices, one of which is a rank-1 matrix representing a pure state, the second a rank-2 matrix, and the third the identity matrix \cite{ellis2005optcomm}. It has been argued that the weightage of the pure part of this decomposition, which is equal to $\lambda_1-\lambda_2$, where $\lambda_1$ and $\lambda_2$ are the two largest eigenvalues of $\rho$, could be taken as the degree of polarization of the 3D state. However, a few issues have been pointed out regarding this decomposition, because of which the weightage of the rank-1 matrix of this decomposition cannot in general be taken as the 3D degree of polarization \cite{gamel2012pra, setala2009optlett}. In contrast, we now show that it is possible to have a unique decomposition of an ND state as a weighted mixture of $N$ matrices, one of which is completely mixed and the remaining $N-1$ of which are completely pure: \begin{equation}\label{PN-decomp} \rho=\sum_{i=1}^{N-1}s_{i}|\psi_{i}\rangle\langle\psi_{i}|+\Big(1-\sum^{N-1}_{i=1}s_{i}\Big)\frac{\mathds{1}_{N}}{N}. \end{equation} Here, the states $|\psi_{i}\rangle$ are pure and orthonormal, and the corresponding weightages $s_{i}$ are real and non-negative. In order to ensure a unique decomposition for every physical density matrix, it must be verified that the number of independent parameters is identical on the two sides of Eq.~(\ref{PN-decomp}). On the left side, the density matrix $\rho$ has $(N^2-1)$ free parameters. On the right side: (i) there are $N-1$ weightages $s_{i}$, (ii) each of the $N-1$ states $|\psi_{i}\rangle$ has $2(N-1)$ free parameters, and (iii) the mutual orthogonality between the $|\psi_{i}\rangle$'s introduces $(N-1)(N-2)$ constraints. These conditions imply $(N^2-1)$ free parameters on the right-hand side as well.
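Explicitly, the parameter bookkeeping on the right-hand side reads

```latex
\begin{align*}
\underbrace{(N-1)}_{\text{weightages } s_{i}}
+\underbrace{(N-1)\cdot 2(N-1)}_{\text{states } |\psi_{i}\rangle}
-\underbrace{(N-1)(N-2)}_{\text{orthogonality}}
&=(N-1)\bigl[1+2(N-1)-(N-2)\bigr]\\
&=(N-1)(N+1)=N^{2}-1.
\end{align*}
```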
We introduce an additional vector $|\psi_{N}\rangle$ to the set of $N-1$ states $|\psi_{i}\rangle$ such that the $|\psi_i\rangle$ with $i=1,\cdots,N$ form an orthonormal and complete basis, that is, $\sum_{i=1}^{N}|\psi_i\rangle\langle\psi_i|=\mathds{1}_{N}$. Now, if Eq.~(\ref{PN-decomp}) is written in this $|\psi_i\rangle$ basis, then the right-hand side is completely diagonal. This implies that the representation of $\rho$ on the left-hand side must also be diagonal in this basis, that is, the $|\psi_{i}\rangle$'s must necessarily be the eigenvectors of $\rho$ with $\rho=\sum_{i=1}^{N} \lambda_i|\psi_i\rangle\langle\psi_i|$. Here, we have denoted the corresponding eigenvalues as $\lambda_{i}$ and have assumed $\lambda_{1}\geq\lambda_{2}\geq\cdots\geq\lambda_{N}$. Eq.~(\ref{PN-decomp}) therefore takes the form \begin{equation} \rho = \sum_{i=1}^{N-1}(\lambda_i-\lambda_N)|\psi_i\rangle \langle \psi_i|+(N\lambda_N) \frac{\mathds{1}_{N}}{N}. \end{equation} As the weightages $s_{i}=(\lambda_i-\lambda_N)$ are non-negative, the above decomposition is necessarily unique. We note that Eq.~(\ref{PN-visibility}) expresses $P_N$ in terms of the eigenvalues of $\rho$. Using this, and after a straightforward calculation, we obtain an expression for $P_N$ solely in terms of the weightages of the pure parts: \begin{align} P_N &=\sqrt{\frac{\sum_{i=1}^{N-1}\sum_{j=i+1}^N(\lambda_i-\lambda_j)^2}{(N-1)(\sum_{i=1}^N\lambda_i)^2}}=\sqrt{\frac{N\sum_{i=1}^{N-1}s_i^2 -( \sum_{i=1}^{N-1}s_i)^2}{N-1}}\notag\\ &=\sqrt{(\sum_{i=1}^{N-1}s_i)^2-\frac{2N}{N-1}\sum_{i=1}^{N-1}\sum_{j=i+1}^{N-1}s_is_j} \leqslant \sum_{i=1}^{N-1}s_i. \label{PN-purity} \end{align} The above equation expresses the weightage of pure part interpretation of $P_{N}$. Just as in the 2D case, we find that $\rho$ can be generated by mixing together a completely mixed state and $N-1$ pure states in a particular proportion.
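Both the reconstruction of $\rho$ from the decomposition and the bound $P_{N}\leq\sum_i s_i$ admit a direct numerical check. The sketch below is illustrative only (variable names are ours, not from the text):

```python
import numpy as np

rng = np.random.default_rng(3)

N = 6
A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
rho = A @ A.conj().T
rho /= np.trace(rho).real

# Eigendecomposition with eigenvalues sorted in decreasing order
lam, V = np.linalg.eigh(rho)
lam, V = lam[::-1], V[:, ::-1]
s = lam[:-1] - lam[-1]            # weightages s_i = lambda_i - lambda_N >= 0

# Reconstruct rho from the unique decomposition of Eq. (PN-decomp)
rho_rec = sum(s[i] * np.outer(V[:, i], V[:, i].conj()) for i in range(N - 1))
rho_rec = rho_rec + (1 - s.sum()) * np.eye(N) / N
assert np.allclose(rho, rho_rec)

# P_N written in terms of the weightages, and the bound P_N <= sum_i s_i
P_N = np.sqrt((N * np.sum(s**2) - s.sum()**2) / (N - 1))
assert abs(P_N - np.sqrt((N * np.trace(rho @ rho).real - 1) / (N - 1))) < 1e-12
assert P_N <= s.sum() + 1e-12

# A state whose N-1 smallest eigenvalues are equal saturates the bound
lam_deg = np.array([0.6] + [0.4 / (N - 1)] * (N - 1))
s_deg = lam_deg[:-1] - lam_deg[-1]
P_deg = np.sqrt((N * np.sum(s_deg**2) - s_deg.sum()**2) / (N - 1))
assert abs(P_deg - s_deg.sum()) < 1e-12
```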
However, the difference is that whereas $P_{2}=s_{1}$ in the 2D case, for the ND case we find $P_{N}\leq \sum_{i=1}^{N-1}s_{i}$. In other words, the total weightage of the pure parts puts an upper bound on the intrinsic degree of coherence. Moreover, the bound is tight: from Eq.~(\ref{PN-purity}), it is saturated precisely when at most one of the weightages $s_{i}$ is non-zero, that is, for states with $\lambda_{2}=\cdots=\lambda_{N}$, which exist in any ND space. For such states, $P_{N}= \sum_{i=1}^{N-1}s_{i}=\lambda_{1}-\lambda_{N}$. \section{Quantifying the intrinsic degree of coherence $P_{\infty}$ of infinite-dimensional states} In this section, we extend $P_{N}$ to the $N\to\infty$ limit to quantify the intrinsic degree of coherence $P_{\infty}$ of infinite-dimensional states. The procedure is not quite as straightforward as computing the $N\to\infty$ limit of Eq.~(\ref{P_N}), for the following reasons. First, from the expression for $P_{N}$, we note that in general $\lim_{N\to\infty} P_{N}$ may not exist. This is because certain infinite-dimensional states can be non-normalizable, in which case $\mathrm{Tr}(\rho^2)$ can diverge \cite{merzbacher1998wiley}. Second, owing to the fact that $N$ can take only integer values, even if $\lim_{N\to\infty} P_{N}$ exists, the generalization implicitly assumes the existence of a discrete or countably-infinite basis in the infinite-dimensional vector space. While this assumption is manifestly valid for the infinite-dimensional spaces spanned by the discrete OAM and photon number bases, its validity is not evident for the infinite-dimensional space spanned by the uncountably-infinite or continuous-variable position and momentum bases. Here, we present a rigorous derivation of $P_\infty$ for infinite-dimensional states. We show that for any normalized infinite-dimensional state $\rho$ in the orbital angular momentum (OAM), photon number, position and momentum bases, the expression for $P_\infty$ is given by $P_{\infty}=\sqrt{\mathrm{Tr}(\rho^2)}$.
\subsection{Orbital Angular Momentum and Angle Representations} We denote the OAM eigenstates as $|l\rangle$, where $l=-\infty,...,-1,0,1,...,\infty$, and the angle eigenstates as $|\theta\rangle$, where $\theta\in[0,2\pi)$. Owing to the Fourier relationship between the OAM and angle observables \cite{peggbarnett1990pra}, the eigenstates are related as \begin{subequations}\label{oam-ang-basisvecs} \begin{align}\label{oam-eigvec} |l\rangle&=\frac{1}{\sqrt{2\pi}}\int_{0}^{2\pi} e^{+il\theta}|\theta\rangle\,\mathrm{d}\theta,\\\label{ang-eigvec} |\theta\rangle&=\frac{1}{\sqrt{2\pi}}\sum_{l=-\infty}^{+\infty} e^{-il\theta}|l\rangle. \end{align} \end{subequations} We note that in contrast with finite-dimensional vectors, infinite-dimensional vectors may be non-normalizable. For instance, it is evident from Eq.~(\ref{ang-eigvec}) that the angle eigenstate $|\theta\rangle$ is non-normalizable. We now consider a state $\rho$ written in the OAM basis as \begin{equation}\label{oam-improper-state} \rho=\sum_{l=-\infty}^{+\infty}\sum_{l'=-\infty}^{+\infty} c_{ll'}|l\rangle\langle l'|. \end{equation} We rewrite the state $\rho$ of Eq.~(\ref{oam-improper-state}) in the limiting form \begin{equation}\label{oam-proper-state} \rho=\lim_{D\to\infty} \sum_{l=-D}^{+D}\sum_{l'=-D}^{+D} c_{ll'}|l\rangle\langle l'|. \end{equation} In essence, the above relation views the infinite-dimensional state $\rho$ as the $D\to\infty$ limit of a $(2D+1)$-dimensional state residing in the finite state space spanned by the OAM eigenstates $|l\rangle$ for $l=-D,...,-1,0,1,...,D$, where $D$ is an arbitrarily-large but finite integer. We now use Eq.~(\ref{P_N}) to compute $P_{2D+1}$ and evaluate $P_{\infty}=\lim_{D\to\infty}P_{2D+1}$ which yields \begin{equation}\label{Pinf-oam-proper} P_{\infty}=\lim_{D\to\infty} \sqrt{\frac{(2D+1)\,\sum_{l=-D}^{+D}\sum_{l'=-D}^{+D}|c_{ll'}|^2-1}{2D}}. 
\end{equation} Now let us assume that $\rho$ is normalized, that is $\mathrm{Tr}(\rho)=\sum_{l=-\infty}^{+\infty} c_{ll}=1$. This implies that $\sum_{l=-\infty}^{+\infty}\sum_{l'=-\infty}^{+\infty}|c_{ll'}|^2=\mathrm{Tr}(\rho^2)\leq 1$. Under this condition, Eq.~(\ref{Pinf-oam-proper}) evaluates to \begin{equation}\label{Pinf-oam-physical} P_{\infty}=\sqrt{\sum_{l=-\infty}^{+\infty}\sum_{l'=-\infty}^{+\infty}|c_{ll'}|^2}=\sqrt{\mathrm{Tr}(\rho^2)}. \end{equation} The above equation can be used to evaluate $P_{\infty}$ of a normalized state $\rho$. However, when $\rho$ is non-normalizable, such as the angle eigenstate $\rho=|\theta\rangle\langle \theta|$ of Eq.~(\ref{ang-eigvec}), the quantity $\mathrm{Tr}(\rho^2)$ diverges. In such cases, Eq.~(\ref{Pinf-oam-physical}) cannot be used to compute $P_{\infty}$. We now use the basis invariance of $P_{\infty}$ to derive its expression in terms of the angle representation of $\rho$. Using Eq.~(\ref{oam-eigvec}) to substitute for $|l\rangle$ and $\langle l'|$ into Eq.~(\ref{oam-improper-state}), it follows that $\rho$ has the angle representation \begin{equation}\label{angle-improper-state} \rho=\int_{0}^{2\pi}\int_{0}^{2\pi} W(\theta,\theta')\,|\theta\rangle\langle \theta'|\,\mathrm{d}\theta\,\mathrm{d}\theta', \end{equation} where the continuous matrix elements $W(\theta,\theta')$ are related to the coefficients $c_{ll'}$ as \begin{equation}\label{oam-angle-improper-wktheorem} W(\theta,\theta')=\frac{1}{2\pi}\sum_{l=-\infty}^{+\infty} \sum_{l'=-\infty}^{+\infty} c_{ll'}\,e^{+i(l\theta-l'\theta')}. \end{equation} In the context of light fields, $W(\theta,\theta')$ is the angular coherence function, which quantifies the correlation between the field amplitudes at angular positions $\theta$ and $\theta'$ \cite{jha2011pra,kulkarni2017natcomm}. Assuming that $\rho$ is normalized, we have $\mathrm{Tr}(\rho)=\int_{0}^{2\pi}W(\theta,\theta)\,\mathrm{d}\theta=1$. 
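The convergence of the truncated expression in Eq.~(\ref{Pinf-oam-proper}) to $\sqrt{\mathrm{Tr}(\rho^2)}$ of Eq.~(\ref{Pinf-oam-physical}) can be illustrated with a simple diagonal OAM state; the geometric weight profile used below is our own toy example, not drawn from the text:

```python
import numpy as np

# Diagonal OAM-space state with geometric weights c_{ll} proportional to r^|l|,
# normalized so that sum_l c_{ll} = 1 (illustrative choice)
r = 0.8

def c_ll(l):
    return (1 - r) / (1 + r) * r**abs(l)

def P_truncated(D):
    # Eq. (Pinf-oam-proper) evaluated on the (2D+1)-dimensional truncation
    S = sum(c_ll(l)**2 for l in range(-D, D + 1))
    return np.sqrt(((2 * D + 1) * S - 1) / (2 * D))

# Closed-form limit sqrt(Tr rho^2) for this normalized diagonal state
trace_rho2 = (1 - r)**2 * (1 + r**2) / ((1 + r)**2 * (1 - r**2))
P_inf = np.sqrt(trace_rho2)

assert abs(P_truncated(5000) - P_inf) < 1e-3
assert abs(P_truncated(50000) - P_inf) < 1e-4
```

The residual error decays like $(\mathrm{Tr}(\rho^2)-1)/(2D)$ inside the square root, consistent with the $D\to\infty$ limit taken in the text.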
Substituting Eq.~(\ref{angle-improper-state}) in Eq.~(\ref{Pinf-oam-physical}), we obtain \begin{equation}\label{Pinf-physical-angle} P_{\infty}=\sqrt{\mathrm{Tr}(\rho^2)}=\sqrt{\int_{0}^{2\pi}\int_{0}^{2\pi}|W(\theta,\theta')|^2\,\mathrm{d}\theta\,\mathrm{d}\theta'}. \end{equation} Equations (\ref{Pinf-oam-physical}) and (\ref{Pinf-physical-angle}) can be used to compute $P_{\infty}$ of any normalized infinite-dimensional state in the OAM and angle representations. \subsection{Photon number representation} The photon number eigenstates $|n\rangle$, where $n=0,...,\infty$, form an orthonormal and complete basis of the infinite-dimensional Fock space. It is known that, like OAM and angle, the photon number and optical phase are conjugate observables. However -- owing to the fact that, unlike the OAM eigenvalues, the photon number eigenvalues can take only non-negative integer values -- the optical phase eigenstates in the infinite state space are not orthonormal, and therefore do not constitute a well-defined basis \cite{susskind1964ppf}. For our purposes, it is sufficient to restrict our attention to the photon number basis and compute $P_{\infty}$ in the same manner as we did previously for states in the OAM basis. We first consider a general state expressed in the photon number basis as \begin{equation} \rho=\sum_{n=0}^{\infty}\sum_{n'=0}^{\infty} a_{nn'}|n\rangle\langle n'|. \end{equation} We rewrite the above state in the limiting form \begin{equation} \rho=\lim_{D\to\infty}\sum_{n=0}^{D}\sum_{n'=0}^{D} a_{nn'}|n\rangle\langle n'|, \end{equation} where $D$ is an arbitrarily-large but finite positive integer. We then compute $P_{\infty}$ of $\rho$ by using Eq.~(\ref{P_N}) to compute $P_{D+1}$ of a $(D+1)$-dimensional state in the limit $D\to\infty$ as \begin{equation}\label{Pinf-proper-photnumber} P_{\infty}=\lim_{D\to\infty} \sqrt{\frac{(D+1)\,\sum_{n=0}^{D}\sum_{n'=0}^{D}|a_{nn'}|^2-1}{D}}.
\end{equation} We assume that $\mathrm{Tr}(\rho)=\sum_{n=0}^{\infty}a_{nn}=1$, which implies $\sum_{n=0}^{\infty}\sum_{n'=0}^{\infty}|a_{nn'}|^2=\mathrm{Tr}(\rho^2)\leq 1$. Under this condition, Eq.~(\ref{Pinf-proper-photnumber}) reduces to the form \begin{equation}\label{Pinf-photnum-physical} P_{\infty}=\sqrt{\sum_{n=0}^{\infty}\sum_{n'=0}^{\infty}|a_{nn'}|^2}=\sqrt{\mathrm{Tr}(\rho^2)}. \end{equation} \subsection{Position and Momentum Representations} We now consider infinite-dimensional states in the continuous-variable position and momentum representations. For conceptual clarity, we present our analysis for a one-dimensional configuration space which is labeled by the co-ordinate $x$. The corresponding canonical momentum space is labeled by the co-ordinate $p$. A general state $\rho$ in the position basis is written as \begin{equation}\label{position-state-improper} \rho=\int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty} G(x,x')\,|x\rangle\langle x'|\,\mathrm{d}x\,\,\mathrm{d}x'. \end{equation} Similarly, in the momentum basis $\rho$ is given by \begin{equation}\label{momentum-state-improper} \hspace{-8mm}\rho=\int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty} \Gamma(p,p')\,|p\rangle\langle p'|\,\,\mathrm{d}p\,\,\mathrm{d}p'. \end{equation} The continuous matrix elements $G(x,x')$ and $\Gamma(p,p')$ represent the cross-correlation functions in the position and momentum representations, respectively. We recall that the expressions (\ref{Pinf-oam-physical}) and (\ref{Pinf-photnum-physical}) for $P_{\infty}$ of states in the OAM and photon number bases were derived by viewing the infinite-dimensional state as the infinite integer limit of a finite-dimensional state. As the dimensionality was constrained to take only integer values, the derivations implicitly depended on the fact that the OAM and photon number bases are discrete, and hence countably-infinite. 
However, in the present case, both the position and the momentum bases are continuous, that is, uncountably-infinite. Nevertheless, we now show that this issue can be circumvented by constructing a physically indistinguishable finite-dimensional state space for the position and momentum variables. Our construction extensively draws on techniques developed previously by Pegg and Barnett for constructing finite-dimensional state spaces for the OAM-angle \cite{peggbarnett1990pra} and photon number-optical phase \cite{peggbarnett1988epl,peggbarnett1989pra} pairs of observables. \subsubsection{Construction of a finite-dimensional space} We consider an arbitrarily-large but finite region $[-p_{\rm max},p_{\rm max}]$ in momentum space, as depicted in Fig.~\ref{pos-mom-fig}. We sample $(2D+1)$ equally-spaced momentum values $p_{j}$ in this region, where $j=-D,...,0,...,D$, with $D$ also being arbitrarily large but finite. The spacing between consecutive values is $\Delta p=p_{\rm max}/D$, which is made arbitrarily close to zero. Using the $(2D+1)$ orthonormal eigenstates $|p_j\rangle$ corresponding to the momentum eigenvalues $p_{j}=j\Delta p$, we develop a consistent $(2D+1)$-dimensional state space for position and momentum. We will compute $P_{\infty}$ for $\rho$ by first computing $P_{2D+1}$ of a $(2D+1)$-dimensional state and then taking the limit of $D\to\infty$ and $p_{\rm max}\to \infty$, subject to the condition that $1/\Delta p=D/p_{\rm max}\to \infty$. To this end, we note that a momentum operator $\hat{p}$ must be a generator of translations in position space. Therefore, a position state $|x\rangle$ must satisfy \cite{merzbacher1998wiley} \begin{equation}\label{pos-shift} \exp\left(-i\hat{p}\eta/\hbar\right)|x\rangle=|x+\eta\rangle. \end{equation} If we define $|x_{0}\rangle$ as the state corresponding to the origin, then \begin{equation}\label{position-shift} |x\rangle=\exp\left(-i\hat{p}x/\hbar\right)|x_{0}\rangle.
\end{equation} Similarly, a position operator $\hat{x}$ must be a generator of translations in momentum space. This implies that \begin{equation}\label{momentum-shift} \exp\left(+ip_{k}\hat{x}/\hbar\right)|p_{j}\rangle=|p_{j+k}\rangle, \end{equation} where the translations are cyclic, such that $\exp\left(ip_{1}\hat{x}/\hbar\right)|p_{D}\rangle=|p_{-D}\rangle$. We now use the orthonormal states $|p_{j}\rangle$ and equations (\ref{pos-shift}) and (\ref{momentum-shift}) to derive the form of the corresponding position eigenstates in the $(2D+1)$-dimensional state space. Let us suppose that $|x_{0}\rangle$ takes the general form \begin{equation}\label{x0-vector} |x_{0}\rangle=\sum_{j=-D}^{+D} c_{j}|p_{j}\rangle. \end{equation} Since $|x_{0}\rangle$ is an eigenstate of $\hat{x}$ with eigenvalue zero, we have $\exp(+ip_{k}\hat{x}/\hbar)|x_{0}\rangle=|x_{0}\rangle$; evaluating the left-hand side by using Eq.~(\ref{momentum-shift}), we get \begin{equation} |x_{0}\rangle=\sum_{j=-D}^{+D} c_{j}|p_{j+k}\rangle. \end{equation} Since the above equation is true for all $k$, the coefficients $c_{j}$ are necessarily independent of $j$, and upon normalization they become $c_{j}=1/\sqrt{2D+1}$. Using Eq.~(\ref{position-shift}), we then obtain \begin{equation}\label{xvector} |x\rangle=\sum_{j=-D}^{+D}\frac{e^{-ip_{j}x/\hbar}}{\sqrt{2D+1}}|p_{j}\rangle. \end{equation} The inner product $ \langle x|x'\rangle$ can therefore be written as \begin{align}\notag \langle x|x'\rangle&=\sum_{j=-D}^{+D}\sum_{k=-D}^{+D}\frac{e^{+i(p_{j}x-p_{k}x')/\hbar}}{(2D+1)}\langle p_{j}|p_{k}\rangle \\ &=\frac{1}{(2D+1)}\frac{\sin\left[(2D+1)(x-x')\Delta p/2\hbar\right]}{\sin\left[(x-x')\Delta p/2\hbar\right]}. \end{align} This implies that $\langle x|x'\rangle=0$ only when $(x-x')=2\pi\hbar n/\{(2D+1)\Delta p\}$, where $n$ is a non-zero integer.
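The finite-dimensional construction above is easy to exercise numerically. The sketch below (with $\hbar=1$ and arbitrarily chosen $D$ and $\Delta p$; all names are ours) builds the vectors of Eq.~(\ref{xvector}) on the grid of spacing $2\pi\hbar/\{(2D+1)\Delta p\}$ and confirms both the orthonormality of the resulting position basis and the Dirichlet-kernel form of the overlap at a generic separation:

```python
import numpy as np

hbar = 1.0
D = 20
dp = 0.3                                  # momentum spacing Delta p (arbitrary)
dim = 2 * D + 1
p = dp * np.arange(-D, D + 1)             # p_j = j * Delta p

def x_state(x):
    # Eq. (xvector): |x> = sum_j e^{-i p_j x / hbar} |p_j> / sqrt(2D+1)
    return np.exp(-1j * p * x / hbar) / np.sqrt(dim)

dx = 2 * np.pi * hbar / (dim * dp)        # position spacing of Eq. (xmvalues)
X = np.array([x_state(m * dx) for m in range(-D, D + 1)]).T

# The |x_m> at the sampled positions are orthonormal: X is unitary
assert np.allclose(X.conj().T @ X, np.eye(dim))

# At a generic separation the overlap follows the Dirichlet kernel
sep = 0.37 * dx
overlap = np.vdot(x_state(0.0), x_state(sep))
expected = (np.sin(dim * sep * dp / (2 * hbar)) /
            (dim * np.sin(sep * dp / (2 * hbar))))
assert abs(overlap - expected) < 1e-12
```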
This orthogonality condition allows us to select an orthonormal basis comprising the basis vectors $|x_{m}\rangle$ corresponding to the positions \begin{equation}\label{xmvalues} x_{m}=\frac{2\pi m\hbar}{(2D+1)\Delta p}.\hspace{4mm} (m=-D,...,0,...,D) \end{equation} \begin{figure} \caption{In the finite state space, the position eigenvectors $|x_{m}\rangle$ and the momentum eigenvectors $|p_{j}\rangle$ form discrete, equally-spaced grids.} \label{pos-mom-fig} \end{figure} These $(2D+1)$ positions are equally-spaced from $x_{-D}$ to $x_{D}$ with a spacing of $\Delta x=2\pi\hbar/\{(2D+1)\Delta p\}$. We write the orthonormality and completeness relations for the basis vectors $|x_{m}\rangle$ and $|p_{j}\rangle$ as \begin{subequations}\label{xp-orthocomp-proper} \begin{align} &\langle x_{m}|x_{n}\rangle=\delta_{mn}, \hspace{14mm} \langle p_{j}|p_{k}\rangle=\delta_{jk},\\ &\sum_{m=-D}^{+D} |x_{m}\rangle\langle x_{m}|=1,\hspace{10mm} \sum_{j=-D}^{+D} |p_{j}\rangle\langle p_{j}|=1. \end{align} \end{subequations} Using equations (\ref{xvector}) and (\ref{xmvalues}), we find that the basis vectors are related as \begin{subequations}\label{xp-relations-proper} \begin{align} |x_{m}\rangle&=\frac{1}{\sqrt{2D+1}}\sum_{j=-D}^{+D} e^{-i2\pi mj/(2D+1)}\,|p_{j}\rangle,\\ |p_{j}\rangle&=\frac{1}{\sqrt{2D+1}}\sum_{m=-D}^{+D} e^{+i2\pi mj/(2D+1)}\,|x_{m}\rangle. \end{align} \end{subequations} Thus, we have derived a finite-dimensional state space for position and momentum, which is depicted schematically in Fig.~\ref{pos-mom-fig}. In order to prove that the finite state space is physically consistent, we must show that the commutator $[\hat{x},\hat{p}]$ in this space is physically indistinguishable from the improper commutation relation $[\hat{x},\hat{p}]=i\hbar$. To this end, we note that $\hat{x}=\sum_{m=-D}^{+D}x_{m}|x_{m}\rangle\langle x_{m}|$ and $\hat{p}=\sum_{j=-D}^{+D}p_{j}|p_{j}\rangle\langle p_{j}|$.
Using these expressions, we find that the commutator $[\hat{x},\hat{p}]$ has the following matrix elements: \begin{subequations}\label{xp-commutator-matrixelements} \begin{align}\label{xp-comm-xbasis} \langle x_{m}|[\hat{x},\hat{p}]|x_{n}\rangle&=\frac{2\pi\hbar(m-n)}{(2D+1)^2}\sum_{j=-D}^{+D}j\, e^{i2\pi(m-n)j/(2D+1)},\\\label{xp-comm-pbasis} \langle p_{j}|[\hat{x},\hat{p}]|p_{k}\rangle&=\frac{2\pi\hbar(k-j)}{(2D+1)^2}\sum_{m=-D}^{+D}m\, e^{-i2\pi(j-k)m/(2D+1)}. \end{align} \end{subequations} We notice that the diagonal elements $\langle x_{m}|[\hat{x},\hat{p}]|x_{m}\rangle$ and $\langle p_{j}|[\hat{x},\hat{p}]|p_{j}\rangle$ are all zero. As a result, the trace of $[\hat{x},\hat{p}]$ is zero, as expected for any commutator of finite-dimensional operators. We evaluate the above equations (\ref{xp-commutator-matrixelements}) in the limit $D\to\infty$ using Mathematica \cite{mathematica}, and simplify to obtain \begin{subequations} \begin{align} [\hat{x},\hat{p}]&=\lim_{D\to\infty} i\hbar\Big[1-(2D+1)|x_{(D+\frac{1}{2})}\rangle\langle x_{(D+\frac{1}{2})}|\Big],\\ [\hat{x},\hat{p}]&=\lim_{D\to\infty} i\hbar\Big[1-(2D+1)|p_{(D+\frac{1}{2})}\rangle\langle p_{(D+\frac{1}{2})}|\Big]. \end{align} \end{subequations} We find that when the expectation value of $[\hat{x},\hat{p}]$ is evaluated for any physical state, the contributions from the second term in the above expressions asymptotically vanish. In this limit, we recover the usual commutator $[\hat{x},\hat{p}]=i\hbar$ for infinite-dimensional operators. Thus, we have constructed a consistent finite-dimensional state space for position and momentum. \subsubsection{Derivation of the expression for $P_{\infty}$} We write the state $\rho$ from Eq.~(\ref{position-state-improper}) in the position basis of the finite-dimensional state space as \begin{equation}\label{position-state-proper} \rho=\lim_{D\Delta x\to\infty}\lim_{\Delta x\to 0}\sum_{m=-D}^{+D}\sum_{n=-D}^{+D} \bar{G}_{x_{m}x_{n}}|x_{m}\rangle\langle x_{n}|. 
\end{equation} Similarly, $\rho$ can be written in the momentum basis as \begin{equation}\label{momentum-state-proper} \rho=\lim_{D\Delta p\to\infty}\lim_{\Delta p\to 0}\sum_{j=-D}^{+D}\sum_{k=-D}^{+D} \bar{\Gamma}_{p_{j}p_{k}}|p_{j}\rangle\langle p_{k}|. \end{equation} As $\rho$ is normalized, we have $\sum_{m=-D}^{+D}\bar{G}_{x_{m}x_{m}}=\sum_{j=-D}^{+D}\bar{\Gamma}_{p_{j},p_{j}}=1$. We can compute $P_{\infty}$ for $\rho$ by first computing $P_{2D+1}$ in terms of $\bar{G}_{x_{m}x_{n}}$ and $\bar{\Gamma}_{p_{j}p_{k}}$, and then evaluating its limiting value as $D\to\infty$ and $p_{\rm max}\to\infty$, subject to the constraint $D/p_{\rm max}\to \infty$. These limits together ensure that $\Delta x\to 0$ and $\Delta p \to 0$, such that $D\Delta x\to\infty$ and $D\Delta p\to \infty$. Thus, we can compute $P_{\infty}$ in terms of $\bar{G}_{x_{m}x_{n}}$ as \begin{equation}\label{P-pos-proper} P_{\infty}=\lim_{D\Delta x\to\infty}\lim_{\Delta x\to 0}\sqrt{\frac{2D+1}{2D}\Big[\sum_{m,n}|\bar{G}_{x_{m}x_{n}}|^2-\frac{1}{2D+1}\Big]}. \end{equation} Similarly in terms of $\bar{\Gamma}_{p_{j}p_{k}}$, we have \begin{equation}\label{P-mom-proper} P_{\infty}=\lim_{D\Delta p\to\infty}\lim_{\Delta p\to 0}\sqrt{\frac{2D+1}{2D}\Big[\sum_{j,k}|\bar{\Gamma}_{p_{j}p_{k}}|^2-\frac{1}{2D+1}\Big]}. \end{equation} In order to derive the form of $P_{\infty}$ in terms of $G(x,x')$ and $\Gamma(p,p')$, we must obtain the relation of these continuous functions to their discrete counterparts $\bar{G}_{x_{m}x_{n}}$ and $\bar{\Gamma}_{p_{j}p_{k}}$, respectively. Now if $\rho$ is a physical state, then $G(x,x')$ and $\Gamma(p,p')$ must be continuous integrable functions normalizable to unity. 
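Eq.~(\ref{P-pos-proper}) can be exercised for a concrete example. Taking $\bar{G}_{x_{m}x_{n}}=\psi(x_{m})\psi(x_{n})\,\Delta x$ for a pure Gaussian wavefunction (a discretization choice made here purely for illustration), the grid estimate should return $P_{\infty}=1$, since a pure state has $\mathrm{Tr}(\rho^2)=1$:

```python
import numpy as np

# Pure Gaussian wavefunction: G(x, x') = psi(x) * psi(x'), Tr(rho^2) = 1
def psi(x):
    return np.pi**(-0.25) * np.exp(-x**2 / 2)

D = 2000
dx = 0.01
x = dx * np.arange(-D, D + 1)             # grid covering +/- 20 widths

# Discrete elements G_bar_{x_m x_n} = psi(x_m) psi(x_n) * dx (product state)
amp = psi(x) * np.sqrt(dx)
S = np.sum(amp**2)**2                     # sum_{m,n} |G_bar_{x_m x_n}|^2

# Eq. (P-pos-proper) evaluated on the (2D+1)-dimensional grid
P = np.sqrt((2 * D + 1) / (2 * D) * (S - 1 / (2 * D + 1)))
assert abs(np.sum(amp**2) - 1) < 1e-9     # normalization sum_m G_bar_{mm} = 1
assert abs(P - 1) < 1e-6                  # pure state: P_infty = sqrt(Tr rho^2) = 1
```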
Thus, the relation of $G(x,x')$ to $\bar{G}_{x_{m}x_{n}}$, and that of $\Gamma(p,p')$ to $\bar{\Gamma}_{p_{j}p_{k}}$, must be such that $\sum_{m=-D}^{+D}\bar{G}_{x_{m}x_{m}}=\sum_{j=-D}^{+D}\bar{\Gamma}_{p_{j}p_{j}}=1$ should imply $\int_{-\infty}^{+\infty} G(x,x)\,\mathrm{d}x=\int_{-\infty}^{+\infty} \Gamma(p,p)\,\mathrm{d}p=1$. We now consider the relations \begin{subequations}\label{prop-improp} \begin{align}\label{prop-improp-x} G(x_{m},x_{n})&=\lim_{D\Delta x\to\infty}\lim_{\Delta x\to 0} \bar{G}_{x_{m}x_{n}}/\Delta x,\\\label{prop-improp-p} \Gamma(p_{j},p_{k})&=\lim_{D\Delta p\to\infty}\lim_{\Delta p\to 0}\bar{\Gamma}_{p_{j}p_{k}}/\Delta p. \end{align} \end{subequations} Substituting the above relations in $\sum_{m=-D}^{+D}\bar{G}_{x_{m}x_{m}}=\sum_{j=-D}^{+D}\bar{\Gamma}_{p_{j}p_{j}}=1$ yields $\lim_{D\Delta x\to\infty}\lim_{\Delta x\to 0} \sum_{m=-D}^{+D} G(x_{m},x_{m})\,\Delta x=1$ and $\lim_{D\Delta p\to\infty}\lim_{\Delta p\to 0}\sum_{j=-D}^{+D}\Gamma(p_{j},p_{j})\,\Delta p=1$. These summations are equivalent to the integral relations $\int_{-\infty}^{+\infty} G(x,x)\,\mathrm{d}x=\int_{-\infty}^{+\infty} \Gamma(p,p)\,\mathrm{d}p=1$, which confirms that equations (\ref{prop-improp}) are consistent. Upon substituting Eq.~(\ref{prop-improp-x}) in Eq.~(\ref{P-pos-proper}), and Eq.~(\ref{prop-improp-p}) in Eq.~(\ref{P-mom-proper}), and simplifying, we obtain \begin{align}\notag P_{\infty}&=\lim_{D\Delta x\to\infty}\lim_{\Delta x\to 0} \sqrt{\sum_{m,n=-D}^{+D} |G(m\Delta x,n\Delta x)|^2\,\Delta x\,\Delta x},\\\notag P_{\infty}&=\lim_{D\Delta p\to\infty}\lim_{\Delta p\to 0} \sqrt{\sum_{j,k=-D}^{+D} |\Gamma(j\Delta p,k\Delta p)|^2\,\Delta p\,\Delta p}.
\end{align} The above equations can be expressed in integral form as \cite{riemann-integral} \begin{subequations}\label{P-improper} \begin{align}\label{P-pos-improper} P_{\infty}&=\sqrt{\iint_{-\infty}^{+\infty}|G(x,x')|^2\,\mathrm{d}x\,\mathrm{d}x'}=\sqrt{\mathrm{Tr}(\rho^2)},\\\label{P-mom-improper} P_{\infty}&=\sqrt{\iint_{-\infty}^{+\infty} |\Gamma(p,p')|^2\,\mathrm{d}p\,\mathrm{d}p'}=\sqrt{\mathrm{Tr}(\rho^2)}. \end{align} \end{subequations} Moreover, in terms of the Wigner function representation $W(x,p)=(1/(\pi\hbar))\int_{-\infty}^{+\infty}\langle x+y|\hat{\rho}|x-y\rangle e^{-2ipy/\hbar}\,\mathrm{d}y$ of $\rho$ \cite{wigner1932pr}, the measure $P_{\infty}$ can be expressed as \begin{equation}\label{P-wigfunc-improper} P_{\infty}=\sqrt{\mathrm{Tr}(\rho^2)}=\sqrt{2\pi\hbar\iint_{-\infty}^{+\infty}W^2(x,p)\,\mathrm{d}x\,\mathrm{d}p}. \end{equation} We note that the form of $P_{\infty}$ in Eq.~(\ref{P-pos-improper}) is identical to a measure known as the ``overall degree of coherence'' that was introduced and employed by Bastiaans for characterizing the spatial coherence of partially coherent fields in a complete manner \cite{bastiaans1983josa,bastiaans1984josaa}. Here, we have derived the measure for general classical and quantum states in the position and momentum representations from an entirely distinct perspective. \section{Conclusion and Discussion} In the context of two-dimensional partially polarized electromagnetic fields, the basis-independent degree of polarization $P_2$ can be used to quantify the intrinsic degree of coherence of two-dimensional states.
The measure $P_2$ has six known interpretations: (i) it is the Frobenius distance between the state and the identity matrix, (ii) it is the norm of the Bloch vector representing the state, (iii) it is the distance to the center of mass in a configuration of point masses, (iv) it is the maximum of the degree of coherence, (v) it is the visibility in a polarization interference experiment, and (vi) it is equal to the weightage of the pure part of the state. By generalizing the first three interpretations, past studies derived analogous expressions for the intrinsic degree of coherence $P_N$ of $N$-dimensional (ND) states. Here, we extended the concepts of visibility, degree of coherence, and weightage of pure part to ND states, and showed that $P_{2}$ generalizes to $P_{N}$ with respect to these interpretations as well. While other yet-to-be-discovered interpretations may still exist, we showed that $P_{N}$ has all the known interpretations of $P_{2}$, and can therefore be regarded as the intrinsic degree of coherence of $N$-dimensional states. Finally, we extended the formulation of $P_{N}$ to the $N\to\infty$ limit and quantified the intrinsic degree of coherence $P_{\infty}$ of infinite-dimensional states in the OAM, photon number, position and momentum representations. \section*{Acknowledgment} We thank Shaurya Aarav and Ishan Mata for discussions. We further acknowledge financial support through grant no. EMR/2015/001931 from the Science and Engineering Research Board, Department of Science \& Technology, Government of India and through grant no. DST/ICPS/QuST/Theme -1/2019 from the Department of Science \& Technology, Government of India. \end{document}
\begin{document} \title{Entanglement Across Separate Silicon Dies in a Modular Superconducting Qubit Device} \author{Alysson Gold} \thanks{These two authors contributed equally. Corresponding electronic mail: [email protected]} \affiliation{Rigetti Computing, 775 Heinz Ave, Berkeley CA 94701} \author{JP Paquette} \thanks{These two authors contributed equally. Corresponding electronic mail: [email protected]} \affiliation{Rigetti Computing, 775 Heinz Ave, Berkeley CA 94701} \author{Anna Stockklauser} \affiliation{Rigetti Computing, 775 Heinz Ave, Berkeley CA 94701} \author{Matthew J. Reagor} \affiliation{Rigetti Computing, 775 Heinz Ave, Berkeley CA 94701} \author{M. Sohaib Alam} \affiliation{Rigetti Computing, 775 Heinz Ave, Berkeley CA 94701} \author{Andrew Bestwick} \affiliation{Rigetti Computing, 775 Heinz Ave, Berkeley CA 94701} \author{Nicolas Didier} \affiliation{Rigetti Computing, 775 Heinz Ave, Berkeley CA 94701} \author{Ani Nersisyan} \affiliation{Rigetti Computing, 775 Heinz Ave, Berkeley CA 94701} \author{Feyza Oruc} \affiliation{Rigetti Computing, 775 Heinz Ave, Berkeley CA 94701} \author{Armin Razavi} \affiliation{Rigetti Computing, 775 Heinz Ave, Berkeley CA 94701} \author{Ben Scharmann} \affiliation{Rigetti Computing, 775 Heinz Ave, Berkeley CA 94701} \author{Eyob A. 
Sete} \affiliation{Rigetti Computing, 775 Heinz Ave, Berkeley CA 94701} \author{Biswajit Sur} \affiliation{Rigetti Computing, 775 Heinz Ave, Berkeley CA 94701} \author{Davide Venturelli} \affiliation{Quantum Artificial Intelligence Laboratory (QuAIL), NASA Ames Research Center, Moffett Field, CA 94035, USA} \affiliation{USRA Research Institute for Advanced Computer Science (RIACS), Mountain View, CA 94043, USA} \author{Cody James Winkleblack} \affiliation{Rigetti Computing, 775 Heinz Ave, Berkeley CA 94701} \author{Filip Wudarski} \affiliation{Quantum Artificial Intelligence Laboratory (QuAIL), NASA Ames Research Center, Moffett Field, CA 94035, USA} \affiliation{USRA Research Institute for Advanced Computer Science (RIACS), Mountain View, CA 94043, USA} \author{Mike Harburn} \affiliation{Rigetti Computing, 775 Heinz Ave, Berkeley CA 94701} \author{Chad Rigetti} \affiliation{Rigetti Computing, 775 Heinz Ave, Berkeley CA 94701} \date{\today} \begin{abstract} Assembling future large-scale quantum computers out of smaller, specialized modules promises to simplify a number of formidable science and engineering challenges. One of the primary challenges in developing a modular architecture is in engineering high fidelity, low-latency quantum interconnects between modules. Here we demonstrate a modular solid state architecture with deterministic inter-module coupling between four physically separate, interchangeable superconducting qubit integrated circuits, achieving two-qubit gate fidelities as high as 99.1$\pm0.5$\% and 98.3$\pm$0.3\% for iSWAP and CZ entangling gates, respectively. The quality of the inter-module entanglement is further confirmed by a demonstration of Bell-inequality violation for disjoint pairs of entangled qubits across the four separate silicon dies. 
Having proven out the fundamental building blocks, this work provides the technological foundations for a modular quantum processor: technology which will accelerate near-term experimental efforts and open up new paths to the fault-tolerant era for solid state qubit architectures. \end{abstract} \maketitle \section{\label{sec:introduction} Introduction} Progress in quantum operations over multi-node networks could enable modular architectures spanning distances from the nanometer to the kilometer scale \cite{PRXQuantum.2.017002, steffen2013deterministic, chou2018deterministic, wan2019quantum, hensen2015loophole}. Heralded entanglement protocols, whereby entanglement is generated probabilistically, have now reached entanglement rates up to 200 Hz \cite{olmschenk2009quantum, moehring2007entanglement, humphreys2018deterministic, monroe2014large, ritter2012elementary, Zhong_2020,krastanov2021}. Superconducting systems have established direct exchange of quantum information over cryogenic microwave channels \cite{PhysRevLett.120.200501, PhysRevLett.125.260502, axline2018demand, leung2019deterministic, zhong2020deterministic, kurpiers2018deterministic}, which is particularly useful towards interconnects of intermediate range such as between dilution refrigerators. Yet, in the context of superconducting qubit based processors, none of these methods are likely to outperform local gates between qubits, which can achieve coupling rates in the tens of MHz and fidelities reaching 99.9\% \cite{Chen_2020, Foxen_2020, negirneac2020highfidelity, sung2020realization, stehlik2021tunable}. Importantly, modules consisting of closely spaced and directly coupled separate physical dies retain many of the benefits of distributed modular architectures without the challenge of remote entanglement. 
Increased isolation between modules reduces cross-talk and correlated errors, for example due to high energy background radiation \cite{wilen2020correlated, vepsalainen2020impact, cardani2020reducing}, and by fabricating smaller units and selecting the highest yielding units for device assembly, higher device yield is achievable \cite{8614500, dickel2018chip}. Mastering 3D integration and modular solid state architectures has thus been a long-standing objective \cite{rosenberg20173d, foxen2017qubit, brecht2016multilayer}. We demonstrate herein a modular superconducting qubit device with direct coupling between physical modules. The device, which consists of four eight-qubit integrated circuits fabricated on individual dies and flip-chip bonded to a larger carrier chip, achieves coupling rates and entanglement quality similar to the state-of-the-art in intra-chip coupling. \section{\label{sec:Results} Results} \subsection{\label{sec:des} Design of a Modular Superconducting Qubit Device} The multi-die device assembly is constructed through flip-chip bonding of four nominally identical dies to a larger carrier die as shown in Fig.~\ref{fig:multiDieSchem}a. The carrier chip assumes a similar role to the chip multiprocessor in a classical multi-core processor while also providing microwave shielding, circuitry to interface between the individual QuIC chips and signal routing for the device I/O. The smaller individual dies comprise the qubit integrated circuits (QuIC), each consisting of four flux tunable and four fixed transmon qubits \cite{reagor2018demonstration}, and corresponding readout resonators and flux control lines as shown in Fig.~\ref{fig:multiDieSchem}b. The readout is multiplexed with four qubits and resonators per readout line and qubits are driven through the readout on this test platform. Qubits are labelled with a letter for the die position from left to right and a number for the qubit position within the die, e.g. B6. 
Entangled pairs are labeled according to the adjacent qubits, e.g. B6-C1. The QuIC dies are designed to be identical in order to maximize fabrication yield and enable modular assembly. The benefits to fabrication yield are evident in considering the number of distinct permutations that exist for a single device assembly: for a wafer with 220 QuIC dies, there are over $2.2\times10^{9}$ possible unique device assemblies. \begin{figure*} \caption{\label{fig:multiDieSchem} a) Schematic of the multi-die device assembly: four QuIC dies flip-chip bonded to a larger carrier chip, with inter-chip couplers bridging the die boundaries and indium bumps setting the chip separation $h$. b) Layout of an individual eight-qubit QuIC die with readout resonators and flux control lines.} \end{figure*} The device Hamiltonian is designed to enable two-qubit parametric gates \cite{bertet2006parametric, niskanen2007quantum, didier2018parametric, reagor2018demonstration, hong2020demonstration} between pairs of qubits on separate dies (see Methods). Coupling between qubits on separate chips is mediated through capacitive couplers on the QuIC and the carrier side, resulting in a cross-chip, charge-charge interaction. The carrier chip contains couplers with paddles at each end which are positioned below similar paddle-shaped couplers extending from the qubits on the QuIC as shown in Fig.~\ref{fig:multiDieSchem}a. There is no coupling between qubits on the same die in this test platform to isolate the basic inter-chip coupling mechanism and avoid complexities arising from larger circuits such as frequency collisions and leakage. However, the qubit and coupler design can be adapted to a larger lattice with intra-chip connectivity. \subsection{\label{sec:fab} Device Fabrication, Assembly, and Validation} The QuIC chips are fabricated using standard lithographic techniques on a Si wafer which is then diced to create individual dies. The Josephson junctions which form the SQUID loops of the transmon qubits are fabricated through double-angle evaporation of Al.
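The assembly-count arithmetic above can be checked directly: filling four interchangeable die positions from 220 distinct QuIC dies gives $220!/(220-4)!$ ordered selections. A minimal sketch (Python, standard library only):

```python
from math import perm

# Ordered choices of 4 interchangeable QuIC dies out of 220 candidates,
# one per die position on the carrier chip.
n_dies, n_positions = 220, 4
assemblies = perm(n_dies, n_positions)  # 220 * 219 * 218 * 217
print(assemblies)  # 2279203080, i.e. over 2.2e9 unique device assemblies
```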
Superconducting circuit components, including the Al pads for the Josephson junctions and Nb ground planes, signal routing and coplanar waveguides and resonators, are fabricated by patterning, deposition and liftoff steps \cite{nersisyan2019manufacturing}. Flip-chip bonding of the carrier and QuIC modules is accomplished through the deposition and patterning of indium bumps of height $6.5\ \mu \mathrm{m}$ and $40\ \mu\mathrm{m}$ diameter onto the carrier chip. The QuIC chips are flipped and aligned to the carrier before thermo-compression bonding, creating a superconducting bond between the carrier and QuIC chips. The fabrication process is described in detail in the Methods section. The indium bump heights post-bonding determine the vertical separation between the QuIC chips and carrier, as shown in Fig.~\ref{fig:multiDieSchem}a. Importantly, the capacitance between the carrier and QuIC paddles is inversely proportional to the height of the indium bump bonds, $h$, as expected for a parallel plate capacitor. The bare coupling rate between qubits, $g$, is directly proportional to this capacitance and thus follows the same dependence on $h$. Due to bonding process variation, the indium bump height spans a range of 1.5 $\mu$m to 4 $\mu$m. As shown in Fig.~\ref{fig:couplingSummary}, this corresponds to a coupling rate range for $g/2\pi$ of 8.8 MHz to 26.1 MHz. \begin{figure} \caption{Impact of post-bonding indium bump height on a) coupling rate, $g$, in linear units and b) both coherent and incoherent simulated gate errors. The fit parameters in a) are $a=27.9\pm6.4$ MHz $\mu$m and $b=0.7\pm3.5$ MHz. The discrepancy between measurement and simulation and details on bump height measurements are discussed in the Methods section.
There are only two distinct entangled qubit pairs in the designed Hamiltonian, the II-III and I-IV pairs as shown in Fig.~\ref{fig:multiDieSchem}. \label{fig:couplingSummary}} \end{figure} Despite the range of anticipated couplings, the simulated fidelity for parametric gates in this design is relatively unaffected. Figure~\ref{fig:couplingSummary} shows simulation results of both the parametric CZ unitary gate error in the absence of loss and dephasing channels (coherent error) and the coherence-limited gate error (incoherent error) as a function of the bump height. The incoherent error is obtained assuming an ideal coherent exchange between the qubits while the coherent error takes into account the unwanted interactions arising from the capacitive coupling and flux modulation. For the coherence-limited fidelity calculation, we use a relaxation time, $\mathrm{T}_1$, of 73/18 $\mu$s and a dephasing time, $\mathrm{T}_2$, of 43/15 $\mu$s for fixed/tunable qubits. Over the full range of indium bump height expected from the bonding process, the predicted maximum achievable fidelity (taken as the minimum of the coherence-limited and unitary fidelity) varies from just under 99.0\% to 99.5\%. For an initial proof of concept, this range is acceptable; however, to push towards fidelities exceeding 99\% or to employ this as part of a tunable coupler scheme \cite{PhysRevLett.125.240503, PhysRevApplied.10.054062}, efforts will be needed to reduce the spread. Additional calibration of the force applied during the bonding process and design revisions to reduce the sensitivity of the coupling to bump height by changing the paddle geometry could reduce this variation further for gate schemes requiring a tighter tolerance. The device assembly, designed and fabricated as described above, is measured in a dilution refrigerator at 10 mK.
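The inverse dependence of coupling on bump height can be put into a small numerical sketch. The model $g(h) = a/h + b$ and the fit parameters are taken from the caption of Fig.~\ref{fig:couplingSummary}; treating the fit parameters as exact (their quoted uncertainties are substantial) is an assumption of this sketch.

```python
# Parallel-plate model for the inter-chip coupler: C ~ 1/h, and g ~ C,
# so g(h) = a/h + b with a, b taken from the fit in the figure
# (assumption: uncertainties on a and b ignored).
A_MHZ_UM = 27.9  # MHz * um, fitted slope
B_MHZ = 0.7      # MHz, fitted offset

def coupling_rate_mhz(h_um: float) -> float:
    """Estimated g/2pi in MHz for an indium bump height h in micrometers."""
    return A_MHZ_UM / h_um + B_MHZ

for h in (1.5, 2.0, 3.0, 4.0):
    print(f"h = {h:.1f} um -> g/2pi ~ {coupling_rate_mhz(h):.1f} MHz")
```

The sketch makes the design trade-off concrete: halving the bump height roughly doubles the coupling rate, which is why the bonding-force spread translates directly into a spread in $g$.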
To assess the accuracy of the simulations and modeling conducted during device design, we characterize the device Hamiltonian (qubit and resonator frequencies, and coupling rates between elements) and compare with predictions from simulation. Qubit frequencies are within 2.1\% of predicted values for the $f_{01}$ transition and 11\% for the qubit anharmonicity at zero applied flux bias (see Methods), demonstrating good agreement and indicating the inter-chip coupling technology does not impact the steady-state device physics in an unexpected manner. \subsection{\label{app:g_meas} Experimental Determination of the Bare Coupling Rates} The capacitive coupling formed by the inter-chip couplers results in a charge-charge interaction between the coupled qubits, $q_1$ and $q_2$. In the dispersive regime, where the detuning between qubits is large compared to the bare coupling rate, $|\omega_{01, 1}-\omega_{01, 2}| \gg g$, we can calculate the bare coupling rate from the measured dispersive shift, $\chi_{qq}$. The relationship between $\chi_{qq}$ and $g$ is given by Eq.~\eqref{eq:qqchi} for the general case of two flux-tunable transmons, which differs from the treatment of transmon-resonator dispersive shifts. Note that here we are working in the transmon limit, and the equation below is the result of a perturbative expansion of the Hamiltonian eigenstates and eigenenergies as a function of applied magnetic flux, following the treatment in Ref.~\onlinecite{didier2018parametric}. $\mathrm{E}_{\mathrm{J, eff}}(\Phi)$ is the effective Josephson energy of the DC SQUID, a function of the applied magnetic flux through the SQUID, $\Phi$, and is defined, along with $\lambda(\Phi)$ and $\Lambda(\Phi)$, in the same reference.
\begin{align} \label{eq:qqchi} \chi_{qq}(\Phi_1, \Phi_2) &= 2g^2 \bigg[\frac{\mu_{01,1}^2(\Phi_1) \mu_{12,2}^2(\Phi_2) }{\omega_{01,1}(\Phi_1) - \omega_{12,2}(\Phi_2)}\nonumber\\ &\qquad-\frac{\mu_{12,1}^2(\Phi_1) \mu_{01,2}^2(\Phi_2)}{\omega_{12,1}(\Phi_1) - \omega_{01,2}(\Phi_2)}\bigg],\\ \mu_{01}(\Phi) &= \left[\frac{\mathrm{E}_{\mathrm{J, eff}}(\Phi)}{\mathrm{E}_{\mathrm{J, eff}}(0)}\right]^{1/4} \frac{\lambda(\Phi)}{\lambda(0)},\\ \mu_{12}(\Phi) &= \left[\frac{\mathrm{E}_{\mathrm{J, eff}}(\Phi)}{\mathrm{E}_{\mathrm{J, eff}}(0)}\right]^{1/4} \frac{\Lambda(\Phi)}{\lambda(0)}. \end{align} We measure the dispersive shift through time Ramsey measurements. In a time Ramsey measurement, an X/2 pulse is applied to a qubit, $q_1$, causing the qubit to precess about the equator. After a time delay, $\Delta t$, a Z pulse rotates the qubit through a phase $\phi = 2\pi\Delta t\delta f$, where $\delta f$ is the detuning of the pulse frequency relative to the qubit frequency, $f_{01}$. Finally, another X/2 pulse is applied and the qubit state is measured. The resulting excited state visibility oscillates as a function of the time delay, reaching full visibility when the Z rotation perfectly offsets the phase accumulated from the precession during the delay time. From the period of the oscillations, the difference between the qubit frequency and the applied pulse frequency (already detuned from the expected qubit frequency by $\delta f$) can be determined with high precision; since the applied pulse frequency is well defined by the control electronics, this yields the qubit frequency itself. To measure $\chi_{qq}$, the time Ramsey measurement is performed on a qubit $q_1$ with adjacent qubit $q_2$ in the ground state and again in the excited state. The difference between the $f_{01}$ measured for $q_1$ in the two cases gives a precise measurement of $\chi_{qq}$.
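The $\chi_{qq}$ extraction described above amounts to comparing two fitted Ramsey fringe frequencies. A minimal synthetic sketch (the fringe frequencies of 1.0 MHz and 1.3 MHz are invented for illustration; a real analysis would fit a decaying sinusoid to the measured visibilities rather than take an FFT peak):

```python
import numpy as np

def ramsey_freq_mhz(times_us, signal):
    """Recover the dominant fringe frequency (MHz) from a Ramsey trace
    via the peak of the FFT (a crude stand-in for a sinusoidal fit)."""
    spec = np.abs(np.fft.rfft(signal - signal.mean()))
    freqs = np.fft.rfftfreq(len(times_us), d=times_us[1] - times_us[0])
    return freqs[np.argmax(spec)]

t = np.linspace(0.0, 20.0, 2001)   # Ramsey delay times, us
f_q2_ground = 1.00                 # fringe freq with q2 in |0>, MHz (made up)
f_q2_excited = 1.30                # fringe freq with q2 in |1>, MHz (made up)
trace0 = 0.5 + 0.5 * np.cos(2 * np.pi * f_q2_ground * t)
trace1 = 0.5 + 0.5 * np.cos(2 * np.pi * f_q2_excited * t)

# chi_qq is the shift of q1's fitted frequency when q2 is excited.
chi_qq = ramsey_freq_mhz(t, trace1) - ramsey_freq_mhz(t, trace0)
print(f"chi_qq ~ {chi_qq:.2f} MHz")
```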
The measurement is then performed in the opposite direction, with the state of $q_1$ varied while the frequency of $q_2$ is measured. The bare coupling rate calculated from the dispersive shift, as given by Eq.~\eqref{eq:qqchi}, should be equivalent in both directions for a pure ZZ interaction, to within the error of the measurement. While the design target for all couplers was 12 MHz for a 3 $\mu$m indium bump height post-bonding, the observed coupling rate varied across the chip from 13.26$\pm$0.59 MHz to 18.94$\pm$0.39 MHz (see Fig.~\ref{fig:couplingSummary}). This was within the anticipated range due to indium bump height variation (see Methods for further details, in particular Table~\ref{tab:CouplingStrengths}). \subsection{\label{sec:DevChar} Cross-chip Entangling Gate Performance} We calibrated and benchmarked gates on ten out of twelve inter-chip pairs. The remaining two pairs could achieve population transfer, but due to frequency targeting error in the fabrication process, their gate modulation frequencies fell outside the frequency band of the control electronics, and the degraded AC flux control resulted in low fidelity. The primary benchmarking methods employed were two-qubit randomized benchmarking (RB) and interleaved randomized benchmarking (iRB) \cite{Magesan_2012, knill2008randomized}. We quote the estimate from iRB when the RB protocol estimates an average gate fidelity of $\mathcal{F}_{\mathrm{RB}}\geq 92\%$, which bounds the iRB estimate to at most 20\% of the reported gate error due to imperfect gate randomization \cite{Magesan_2012}. Below this empirical threshold, the assumptions of the error model can lead to large uncertainty and an overestimate of the gate fidelity for iRB.
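The RB and iRB numbers quoted here are related through the standard depolarizing-model estimators of Magesan et al.: the interleaved gate error is inferred from the ratio of the interleaved and reference sequence decay parameters. A sketch with illustrative (not measured) decay constants:

```python
# Standard (i)RB estimators for a two-qubit gate (Hilbert-space
# dimension d = 4), following the depolarizing model of Magesan et al.
# The decay parameters p_ref and p_int below are illustrative values.

def rb_avg_fidelity(p_ref: float, d: int = 4) -> float:
    """Average fidelity per Clifford from the reference RB decay p_ref."""
    return p_ref + (1.0 - p_ref) / d

def irb_gate_fidelity(p_ref: float, p_int: float, d: int = 4) -> float:
    """Average gate fidelity of the interleaved gate from the two decays."""
    r = (d - 1) * (1.0 - p_int / p_ref) / d  # estimated gate error
    return 1.0 - r

p_ref, p_int = 0.95, 0.94  # illustrative decay constants
print(f"reference Clifford fidelity: {rb_avg_fidelity(p_ref):.4f}")
print(f"interleaved gate fidelity:   {irb_gate_fidelity(p_ref, p_int):.4f}")
```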
Table~\ref{tab:Coherence Limited Fidelity} provides a summary of the CZ gate fidelities measured for each of the ten pairs and compares them with the coherence limited fidelity (the maximum achievable fidelity predicted from the measured relaxation and dephasing times of the qubit pair). The coherence limited fidelity is computed from the $\mathrm{T}_1$ and $\mathrm{T}_2$ under modulation, $\widetilde{\mathrm{T}}_1$ and $\widetilde{\mathrm{T}}_2$, i.e. the coherence times as measured while an AC flux bias is applied to the tunable qubit at the gate modulation frequency, emulating conditions during gate operation. $\widetilde{\mathrm{T}}_1$ and $\widetilde{\mathrm{T}}_2$ for the tunable qubit in each pair, the limiting qubit in regards to coherence, are also recorded in the table. Comparing the measured fidelities with the coherence limited fidelities, the fidelity is almost always limited by the qubit coherence, suggesting the inter-chip coupling mechanism does not limit gate error directly. Furthermore, we have compared qubit coherence times for inter-chip-coupled qubits to a baseline of similar qubits that are not coupled by inter-chip couplers, and coherence times do not appear to be limited by the inter-chip coupling technology itself (see Methods). This suggests that gate fidelities are limited by the same sources of error arising in monolithic devices and are not directly, or indirectly through impacts to qubit coherence, negatively impacted by the inter-chip coupling technology. Apart from four gates affected by two-level systems, which could explain the increased dephasing under modulation, CZ gate fidelities were above 90\%, with 5 out of 12 gates demonstrating greater than 95\% measured fidelities.
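A common back-of-the-envelope check on a coherence limit is the average idling fidelity of a single qubit over one gate time, $F \approx \tfrac{1}{6}\left(3 + e^{-t_g/\mathrm{T}_1} + 2e^{-t_g/\mathrm{T}_2}\right)$. The paper's coherence-limited numbers account for both qubits of each pair, so this single-qubit sketch only bounds the tunable qubit's contribution; the pair values below are taken from Table~\ref{tab:Coherence Limited Fidelity}.

```python
import math

def idle_avg_fidelity(t_g_us: float, t1_us: float, t2_us: float) -> float:
    """Average single-qubit fidelity after idling for t_g under T1/T2 decay
    (common approximation; ignores the partner qubit and control errors)."""
    return (3.0 + math.exp(-t_g_us / t1_us)
            + 2.0 * math.exp(-t_g_us / t2_us)) / 6.0

# Tunable-qubit values for pair A0-B7: T1~ = 24.57 us, T2~ = 8.98 us
# under modulation, 152 ns CZ gate time.
f = idle_avg_fidelity(0.152, 24.57, 8.98)
print(f"tunable-qubit idling fidelity over one gate: {f:.4f}")
```

The result sits above the pair's quoted 98.58% coherence limit, as expected, since the fixed qubit's decoherence is omitted here.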
While Table~\ref{tab:Coherence Limited Fidelity} lists only CZ fidelities as CZ gates were within the AC flux control band for all pairs and they allowed a straightforward comparison to the coherence limited fidelity, the maximum gate fidelity measured was a 99.1\% $\pm$ 0.5\% iRB for an iSWAP gate on the C1-D6 pair, which we expand upon in Fig.~\ref{fig:TwoQGateSummary}. \begin{figure} \caption{\label{fig:TwoQGateSummary} Interleaved randomized benchmarking of the iSWAP gate on the inter-chip pair C1-D6, the highest fidelity gate measured on the device.} \end{figure} \begin{table*} \caption{\label{tab:Coherence Limited Fidelity} Measured fidelity for CZ gates compared with coherence limited fidelity. The tunable qubit coherence times for the fixed-tunable pair with the tunable qubit under modulation, $\widetilde{\mathrm{T}}_1$ and $\widetilde{\mathrm{T}}_2$, are also listed.} \begin{ruledtabular} \begin{tabular}{lccccc} \textrm{Pair}& \textrm{$\widetilde{\mathrm{T}}_1$ ($\mu$s)}& \textrm{$\widetilde{\mathrm{T}}_2$ ($\mu$s)}& \textrm{Coherence Limited Fidelity ($\%$)}& \textrm{Measured Fidelity ($\%$)}& \textrm{Gate Time (ns)}\\ \colrule A0-B7 & 24.57$\pm$1.60 & 8.98$\pm$0.92 & 98.58$\pm$0.11 & 98.34$\pm$0.31 & 152\\ A1-B6 & 9.35$\pm$0.79 & 1.59$\pm$0.12 & 92.07$\pm$0.55 & 90.09$\pm$0.51 & 148\\ A3-B4 & 11.56$\pm$1.76 & 1.07$\pm$0.11 & 88.84$\pm$1.07 & 82.70$\pm$0.78 & 164\\ B0-C7 & 26.67$\pm$4.61 & 2.74$\pm$0.31 & 96.74$\pm$0.33 & 96.04$\pm$0.72 & 128\\ B2-C5 & 4.59$\pm$0.78 & 1.66$\pm$0.21 & 88.17$\pm$1.30 & 84.63$\pm$0.92 & 176\\ B3-C4 & 7.16$\pm$0.92 & 1.36$\pm$0.11 & 97.40$\pm$0.15 & 97.47$\pm$0.94 & 116\\ C0-D7 & 1.52$\pm$0.42 & 2.75$\pm$0.37 & 90.81$\pm$0.98 & 87.08$\pm$0.59 & 284\\ C1-D6 & 14.51$\pm$0.67 & 2.52$\pm$0.24 & 96.92$\pm$0.24 & 97.26$\pm$0.29 & 108\\ C2-D5 & 7.49$\pm$0.67 & 1.93$\pm$0.14 & 77.25$\pm$1.42 & 80.68$\pm$0.98 & 468\\ C3-D4 & 30.72$\pm$2.50 & 5.09$\pm$0.65 & 98.20$\pm$0.19 & 96.78$\pm$1.73 & 116\\ \end{tabular} \end{ruledtabular} \end{table*} \subsection{\label{sec:bell} Multi-die Bell Inequality Violation} We now turn our attention to
assessing the viability of a future modular quantum processor based on these techniques. Importantly for this analysis, the inter-chip connections on our test device are established by unique qubits, and, in addition, qubits fabricated on the same chip are not coupled. We thus investigate the simultaneous quality of two-qubit connections, for the three disjoint pairs. This step is important for assessing functional challenges towards leveraging non-local quantum states in larger scale algorithms. Following a tradition established for multi-node or modular experimental efforts in superconducting qubits \cite{Narla2016,dickel2018chip,Campagne2018,zhong2019}, we design a test for the deterministic violation of a Bell inequality with this inter-chip platform. We describe a figure of merit $\langle \mathcal{W}_{\Sigma} \rangle = \sum_k \langle W_k \rangle$, where $W_k$ is a witness to entanglement of connection $k$, applying the standard Bell observable for two qubits, \begin{equation} \mathcal{W} = Q S + RS + RT - QT, \end{equation} with $Q=Z_n$, $R=X_n$, $S=\frac{X_m-Z_m}{\sqrt{2}}$, and $T=\frac{X_m+Z_m}{\sqrt{2}}$, taking qubits $\{n,m\}$ across an inter-chip connection. For $N$ disjoint Bell pairs, the total figure of merit is bounded by $\langle\mathcal{W}_{\Sigma}\rangle\leq 2N\sqrt{2}$, and a signal above $\langle\mathcal{W}_{\Sigma}\rangle>2N$ certifies that the network supports genuine entanglement over at least one connection simultaneously. Moreover, investigating the individual Bell signals $\langle W_{k}\rangle$ can test entanglement over each connection independently. Our experimental procedure is shown in Fig.~\ref{fig:belltest}. We choose three connections that bridge all four chips in a disjoint pattern (A0-B7, B0-C7, C1-D6). We prepare each of the three pairs in the Bell state $|\Psi\rangle_k=(|00\rangle_k+|11\rangle_k)/\sqrt{2}$. Then, we measure the qubits in the $\langle ZZ \rangle$ basis or $\langle XX \rangle$ basis.
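With these choices of $Q$, $R$, $S$, $T$, the cross terms cancel and the Bell observable reduces algebraically to $\mathcal{W} = \sqrt{2}\,(X_nX_m - Z_nZ_m)$, which is why measuring only $\langle ZZ \rangle$ and $\langle XX \rangle$ suffices; its extreme eigenvalues are $\pm 2\sqrt{2}$, the Tsirelson bound. A short numerical check (numpy):

```python
import numpy as np

X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])
kron = np.kron

# Q = Z_n, R = X_n, S = (X_m - Z_m)/sqrt(2), T = (X_m + Z_m)/sqrt(2)
S = (X - Z) / np.sqrt(2)
T = (X + Z) / np.sqrt(2)
W = kron(Z, S) + kron(X, S) + kron(X, T) - kron(Z, T)

# The four cross terms cancel, leaving W = sqrt(2) * (XX - ZZ) ...
assert np.allclose(W, np.sqrt(2) * (kron(X, X) - kron(Z, Z)))

# ... whose extremal eigenvalue magnitude is the Tsirelson bound 2*sqrt(2).
print(np.linalg.eigvalsh(W).max())
```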
A total of 100 experiments were run, each with $10^{4}$ shots per basis, collecting measurement data simultaneously for all pairs. A summary of results is in Table~\ref{tab:bells}, where all three connections violate the Bell test by at least three standard deviations. With high confidence, therefore, the test platform supports simultaneous disjoint, pair-wise entanglement involving all four chips. Additionally, our total figure of merit, $\langle \mathcal{W}_{\Sigma}\rangle=6.651\pm0.067$, exceeds the classical bound by nearly ten standard deviations. \begin{figure} \caption{Simultaneous Bell Inequality Violations. (a) Disjoint, pairwise entanglement is generated across chip boundaries via CNOT operations on inter-chip couplers compiled to CZ gates, with the optional basis change for evaluating either $\langle ZZ \rangle$ (no final Hadamard) or $\langle XX \rangle$ (with final Hadamard). No error mitigation or readout correction schemes were applied. (b) Histograms of the average values $\langle W_k \rangle$ across 100 individual experimental runs using $10^{4}$ shots per basis. \label{fig:belltest}} \end{figure} \begin{table} \caption{\label{tab:bells}Results for joint Bell test marginals and total observable.} \begin{ruledtabular} \begin{tabular}{ccc} Observable & Qubits & Result \\ $\langle \mathcal{W}_{A,B} \rangle$ & A0-B7 & 2.184 $\pm$ 0.060 \\ $\langle \mathcal{W}_{B,C} \rangle$ & B0-C7 & 2.183 $\pm$ 0.034 \\ $\langle \mathcal{W}_{C,D} \rangle$ & C1-D6 & 2.284 $\pm$ 0.029 \\ \hline $\langle \mathcal{W}_{\Sigma} \rangle$ & combined & 6.651 $\pm$ 0.067 \end{tabular} \end{ruledtabular} \end{table} \section{\label{sec:Discussion}Discussion} In conclusion, we demonstrate that the flip-chip bonded, multi-die fabrication process with inter-chip coupler technology is capable of achieving high fidelity entanglement, including gate fidelities regularly exceeding 95\% and up to 99\% in the best case, and simultaneous entanglement between silicon dies violating the Bell test by over three
standard deviations. Future work should explore the potential benefits of this modular approach beyond the intrinsic advantages in regards to flexible device construction and yield. This includes increased isolation between qubits on separate physical die, an important factor particularly in developing robust hardware suitable for near-term error correction schemes. Recently, the impacts of cosmic and background radiation on solid state devices have been of significant interest due to the correlated errors that result and the challenge these pose to fault tolerant quantum computing \cite{wilen2020correlated, vepsalainen2020impact, cardani2020reducing}. In this case and more generally, the physics of quasi-particle trapping and phonon propagation through superconducting qubit chips would be interesting to explore on multi-die devices. Phonons should collect on the boundaries of the individual die and not propagate to qubits on other dies, reducing correlated errors. Finally, we note that the true impact of this technology will be in its integration with state-of-the-art processing architectures. With additional intra-chip circuitry and changes to the qubit topology on the individual QuICs, this technology can be extended to form a seamless modular quantum processor that is flexible in regards to the number and type of modules integrated and, with sensitivity to fabrication yield and intra-die cross-talk limited only by the module size, highly scalable. By enabling the fabrication of devices consisting of hundreds to thousands of qubits which are sufficiently isolated to mitigate correlated errors, this technology provides a clear path forward towards fault tolerant computing. 
\section{\label{sec:Methods}Methods} \subsection{\label{app:Fab} Fabrication and Bonding Process} The carrier chip is composed of cavities etched in Si, coated with patterned superconducting metal, and indium bumps which after bonding form the superconducting connection between carrier and QuICs. Carrier chips are fabricated from high resistance Si wafers. The fabrication flow starts with a photolithography process to create cavity patterns on wafers followed by a Bosch etch (DRIE) step to fabricate 24 $\mu$m deep pockets with vertical sidewalls. The surface is then conformally coated with a 560 nm-thick Nb/MoRe stack, deposited by sputtering (PVD), to form a continuous superconducting shield. Vertical cavity sidewalls are confirmed to have a continuous metal film connecting the top surface to the cavity bottom surface. A thin layer of MoRe alloy is deposited on top of Nb film to seal the Nb surface, enabling an oxide-free metal-to-metal interface for reliable electrical connection between the Nb device layer and the In bumps in the bonding areas. The metal film stack is then patterned by a two-plane photolithography process followed by a reactive ion etching (RIE) step with a certain etching selectivity to the Si substrate. During the first exposure, focus and dose settings are selected to target the top wafer surface patterning, while in the second exposure settings are changed to target the cavity bottom surface only, which is 24 $\mu$m deeper. Once the Nb/MoRe stack etching is completed, a negative-tone photoresist lithography is used to transfer the In bump patterns onto the top metal surface by lift-off processing \cite{o2017superconducting}. Electron-beam evaporation is used to deposit a 6.5 $\mu$m thick indium layer, producing a high quality, easy to lift-off film. An automatic lift-off tool that uses a combination of chemical cleaning and physical energy produced by high-pressure jets removes the In film from non-patterned areas, completing the process. 
No Josephson junction fabrication steps are required as no active components are located on the carrier chip. Future designs could include transmons in the carrier chip as part of, for example, a quantum bus to provide longer-range bus coupling between qubits instead of the direct coupling employed in this initial design. To establish a superconducting connection between the carrier chip and the four separate QuIC chips, a flip chip bonder is used. Each QuIC device is precisely aligned and thermo-compression bonded to the carrier chip. Prior to bonding, both carrier and QuIC chip surfaces are solvent cleaned followed by an atmospheric downstream plasma cleaning to chemically clear surfaces of native oxides. This process also temporarily passivates the In bumps from oxidation and helps to generate a strong chemical bond between In bumps and the corresponding pads. The multi-chip bonding process consists of sequential bonding of four separate QuICs to the designated locations on the carrier chip. For each bonding, the carrier and QuIC chips are aligned to each other with a horizontal accuracy of better than $\pm$2.5 $\mu$m. After the horizontal alignment is completed, a vertical parallelism adjustment is done using auto-collimation and laser-levelling methods with an accuracy of $\pm$0.5 $\mu$m. The ensuing thermo-compression process consists of three different phases. In the first phase, force and temperature values are gradually increased and stabilized. In the second phase, the actual thermo-compression bonding takes place for two minutes. To prevent thermal aging of QuICs that are already bonded, the carrier chip temperature is maintained at 30$^\circ$C, while the QuICs are heated to 70$^\circ$C only during bonding. In the final phase, the stack is cooled to 30$^\circ$C with a nitrogen flow. The same process is repeated sequentially for all the QuICs. 
\subsection{\label{app:DevHam}Device Hamiltonian and Parametric Gates} To design the device Hamiltonian, the circuit parameters were extracted using quasi-static electromagnetic simulations, and a positive second-order representation \cite{scheer2018computational} was used to solve the linearized circuit. The non-linear effects of the Josephson junctions are subsequently accounted for through a perturbative treatment. The designed Hamiltonian parameters for this device are provided in Table~\ref{tab:devHam}, including the maximum and minimum $f_{01}$ transition frequencies over the flux bias tuning range, the anharmonicity at the maximum of the tuning range, $\eta=f_{12,\mathrm{max}}-f_{01,\mathrm{max}}$, the frequency of the readout resonator coupled to the qubit, $f_{\mathrm{RO}}$, and the qubit-readout resonator dispersive shift, $\chi_{\mathrm{q, RO}}$. The coupling rate between qubit pairs was designed to be $12$ MHz at a 3 $\mu$m indium bump height for all couplers. \begin{table*} \caption{\label{tab:devHam}Hamiltonian properties as designed. The qubit numbering corresponds to that shown in Fig.~\ref{fig:multiDieSchem}b, where there are only 4 unique design targets which are repeated in inverted order on the opposite side of the chip.} \begin{ruledtabular} \begin{tabular}{ccccccc} Qubit & Flux Tunable? & $f_{01, \mathrm{max}}$ (MHz)&$f_{01, \mathrm{min}}$ (MHz)& $\eta$ (MHz)&$f_{\mathrm{RO}}$ (MHz)&$\chi_{\mathrm{q, RO}}/2\pi$ (MHz)\\ \hline I & Fixed & 3654 & 3654 & -190 & 7232 & 0.80\\ II & Tunable & 5066 & 4266 & -200 & 7476 & 0.81\\ III & Fixed & 3714 & 3714 & -190 & 7273 & 0.85\\ IV & Tunable & 4946 & 4146 & -200 & 7425 & 0.81 \end{tabular} \end{ruledtabular} \end{table*} Due to fabrication process variation, the Josephson junction width, and hence the Josephson energy, of each fabricated junction will differ slightly from the design target.
Using the relationship between the room temperature conductance of a junction and its Josephson energy at cryogenic temperatures \cite{PhysRevLett.10.486}, a more accurate prediction for the device Hamiltonian can be obtained for fabricated devices using the same modelling process as during the initial device design, but replacing the target $E_\mathrm{J}$ values with the $E_\mathrm{J}$ values predicted from room temperature conductance measurements. Changes to the Josephson energy of the single junction in a fixed transmon or the two junctions in a DC-SQUID tunable transmon primarily impact the qubit frequencies, with little impact on qubit anharmonicities, readout resonator properties, or coupling rates. We plot in Fig.~\ref{fig:HamSumm} the design target, predicted and measured $f_{01, \mathrm{max}}$, demonstrating agreement between the predicted frequencies and those measured cold, to within $\pm$ 108 MHz or 2.2\% in the worst case. The discrepancies are within the prediction error we expect due to uncertainty in the empirically determined linear coefficient relating room temperature conductance to inductance at cryogenic temperatures, and uncertainty in the conductance measurement itself. The qubit anharmonicities are compared with the design target and are accurate to within 11\%, demonstrating a systematic offset that will be corrected in future designs. \begin{figure} \caption{\label{fig:HamSumm} Comparison of the design-target, predicted, and measured transition frequencies, $f_{01, \mathrm{max}}$.} \end{figure} The device Hamiltonian is designed to enable parametric gates between a tunable ($T$) qubit and a fixed ($F$) qubit. In this scheme, an AC flux bias at RF frequency $f_p$ is applied to the tunable qubit around its parking flux bias. Under flux modulation, the transmon frequency oscillates at harmonics of the modulation frequency around its time-averaged frequency $\bar{f}_{T,01}$.
Transmon frequency modulation gives rise to sidebands at frequencies $\bar{f}_{T,01}+kf_p$, separated by the modulation frequency around the average frequency. When the modulation frequency is tuned so as to align one sideband with the transition frequency of the fixed qubit, a coherent exchange takes place between the two qubits at a rate equal to the bare coupling strength renormalized by the sideband weight. When the tunable qubit is parked at the maximum of the tuning band, only even sidebands have a non-zero weight and the sideband $k=\pm2$ is used. Entangling gates are then enacted by modulating the tunable qubit at half the average detuning between the qubits' transition frequencies. To obtain the iSWAP gate, the interaction between states $|01\rangle$ and $|10\rangle$ is activated at the modulation frequency $f_p=|\bar{f}_{T,01}-f_{F,01}|/2\equiv\Delta/2$ (with the convention $|FT\rangle$). For the CZ gate, the interaction between $|11\rangle$ and $|02\rangle$ is activated at $f_p=(\Delta+\eta_T)/2$ (CZ$_{02}$) or between $|11\rangle$ and $|20\rangle$ at $f_p=(\Delta-\eta_F)/2$ (CZ$_{20}$). The gate time is adjusted to provide a $\pi$ rotation for iSWAP and $2\pi$ rotation for CZ between the corresponding two-qubit states. \subsection{\label{app:g_analysis} Analysis of Bump Heights and Coupling Rates} \begin{figure} \caption{Magnified image of a region of the carrier chip showing the indium bumps post-bonding and post-shearing, a destructive process whereby a cut is made through the device separating the QuIC chips from the carrier chip. The diameter and height of the indium bumps are known pre-bonding, such that by measuring the bump diameter post-bonding, a rough estimate for the bump height can be obtained assuming the bump is cylindrical in both cases: $h_{\mathrm{post}} = h_{\mathrm{pre}}\left(d_{\mathrm{pre}}/d_{\mathrm{post}}\right)^{2}$. \label{fig:shearTestPics}} \end{figure} After cryogenic tests were complete, the indium bump heights were measured at various locations across the device.
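The cylinder approximation used for the shear-test estimate is simply volume conservation: with pre-bond height 6.5 $\mu$m and diameter 40 $\mu$m, the post-bond height follows from the measured post-bond diameter. A minimal sketch (the post-bond diameter below is an illustrative value, not a measurement):

```python
# Volume-conserving cylinder model for the indium bumps:
# pi * (d_pre/2)^2 * h_pre = pi * (d_post/2)^2 * h_post
H_PRE_UM = 6.5   # deposited bump height, um
D_PRE_UM = 40.0  # deposited bump diameter, um

def bump_height_post_um(d_post_um: float) -> float:
    """Estimated post-bonding bump height from the sheared-bump diameter."""
    return H_PRE_UM * (D_PRE_UM / d_post_um) ** 2

# Illustrative post-bond diameter of 80 um (an assumption, not data):
print(bump_height_post_um(80.0))  # 1.625 um, inside the observed 1.5-4 um range
```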
This was done in a destructive manner by shearing the chip to separate the QuIC and carrier chips, then measuring the indium bump diameters, as shown in Fig.~\ref{fig:shearTestPics}. Working under the assumption that the indium bumps are approximately cylindrical both before and after bonding, and with the diameter and height known before bonding, the diameter measured after shearing provides an estimate of the bump height post-bonding. The measured coupling rates could then be compared with post-hoc simulated coupling rates computed for each coupler based on the measured bump heights. We obtain qualitative agreement between the simulated and measured coupling rates; however, the measured coupling rates are approximately 20\% lower than expected from design. Future studies will look to address this discrepancy. Possible contributions include overestimation of the bump heights and inaccuracies in the material properties assumed in the simulation. In addition, the measured coupling values are effectively reduced by a term representing next-nearest-neighbor couplings to the resonators, which lie in a higher band than the transmons. \begin{table} \caption{\label{tab:CouplingStrengths} Bare coupling rate, $g$, as measured from the qubit-qubit dispersive shift, $\chi_{qq}$, compared to simulated values given the measured bump heights, $h$. } \begin{ruledtabular} \begin{tabular}{lccc} \textrm{Pair}& \textrm{$h$ ($\mu$m)}& \textrm{Meas. $g/2\pi$ (MHz)}& \textrm{Sim.
$g/2\pi$ (MHz)}\\ \colrule A0-B7 & 2.18$\pm$0.13 & 13.26$\pm$0.59 & 16.86$\pm$1.05\\ A1-B6 & 2.03$\pm$0.08 & 14.89$\pm$1.00 & 18.44$\pm$0.79\\ A2-B5 & 1.96$\pm$0.18 & 15.72$\pm$0.96 & 19.32$\pm$1.96\\ A3-B4 & 1.94$\pm$0.29 & 16.46$\pm$1.07 & 19.60$\pm$3.15\\ B0-C7 & 1.95$\pm$0.27 & 14.34$\pm$0.44 & 19.42$\pm$2.89\\ B1-C6 & 1.89$\pm$0.18 & 14.91$\pm$0.62 & 20.11$\pm$2.12\\ B2-C5 & 1.69$\pm$0.14 & 15.40$\pm$0.54 & 22.62$\pm$2.07\\ B3-C4 & 1.76$\pm$0.08 & 16.63$\pm$1.81 & 21.29$\pm$1.04\\ C0-D7 & 1.77$\pm$0.04 & 14.54$\pm$0.87 & 21.11$\pm$0.51\\ C1-D6 & 1.69$\pm$0.01 & 18.94$\pm$0.39 & 22.29$\pm$0.07\\ C2-D5 & 1.55$\pm$0.01 & 18.88$\pm$1.70 & 24.52$\pm$0.16\\ C3-D4 & 1.60$\pm$0.11 & 18.92$\pm$0.43 & 23.70$\pm$1.66\\ \end{tabular} \end{ruledtabular} \end{table} \subsection{\label{app:fid_summ} Impact of Inter-chip Couplers on Qubit Coherence and 2Q Gate Stability} An important question to address is whether inter-chip coupling exposes qubits to additional loss or dephasing channels relative to standard intra-chip lateral couplers. As the electric field between the paddles of the coupler passes through vacuum rather than a silicon substrate, no additional dielectric losses are expected. Furthermore, the galvanic connection across the carrier chip is small enough relative to the 3--8 GHz band of interest that it can be treated as a lumped element and is not expected to produce any additional resonant coupling between qubits and the electromagnetic environment (chip modes, package modes, etc.) up to frequencies in excess of 15 GHz. \begin{figure} \caption{Comparison of $\mathrm{T}_1$ and $\mathrm{T}_2$ times for a device with inter-chip couplers and for devices from similar wafers without couplers.} \label{fig:T1T2comp} \end{figure} It is thus not expected that these couplers should impact the qubit relaxation ($\mathrm{T}_1$) or dephasing ($\mathrm{T}_2$) times.
This was reflected in experimental results: in Fig.~\ref{fig:T1T2comp} we compare $\mathrm{T}_1$ and $\mathrm{T}_2$ for a device with inter-chip coupling against devices from similar wafers with no coupling at all. No statistically significant difference in the $\mathrm{T}_1$ and $\mathrm{T}_2$ times was observed relative to the baseline. \begin{figure} \caption{Time series of the iRB fidelity and $\mathrm{T}_1$ for the C1-D6 pair over a 24-hour period.} \label{fig:iRBtimeseries} \end{figure} In addition to comparing the coherence of inter-chip and intra-chip qubits, we also assess the entangling-gate stability and the relaxation time $\mathrm{T}_1$ over a period of 24 hours, as plotted for the C1-D6 pair in Fig.~\ref{fig:iRBtimeseries}. For each data point plotted, the qubits are re-tuned, the readout is calibrated, and the two-qubit gate is benchmarked using interleaved randomized benchmarking (iRB). Qubit re-tuning is done by parking the tunable qubit at its maximum frequency and calibrating the gate pulses for the fixed and tunable qubits. Readout calibration involves preparing ground and excited states to update the classifier used to discriminate single-shot measurements. $\mathrm{T}_1$ decay is monitored through repeated coherence measurements taken after the re-tuning and readout calibrations and before benchmarking. The time series shows a drop in the gate fidelity when $\mathrm{T}_1$ falls below 10~$\mu$s, but the fidelity otherwise remains stable to within four percentage points, with a distribution centered around 98\%. Fluctuations in $\mathrm{T}_1$ are an active research topic in the field of superconducting qubits \cite{vepsalainen2020impact, cardani2020reducing}. \begin{acknowledgments} The authors would like to thank A. Grassellino, A. Romanenko, L. Cardani, and R. McDermott for insightful discussions on the effects of cosmic and background radiation on qubit coherence.
We thank the Rigetti fabrication team for manufacturing the device, the Rigetti technical operations team for fridge build-out and maintenance, the Rigetti cryogenic hardware team for providing the chip packaging, the Rigetti quantum engineering team for guidance during measurement and data analysis, and the Rigetti control systems and embedded software teams for creating the Rigetti AWG control system. This material is based upon work supported by Rigetti Computing and the Defense Advanced Research Projects Agency (DARPA) under agreement No.~HR00112090058 and under IAA 8839, Annex 114. \end{acknowledgments} \end{document}
\begin{document} { \renewcommand*{\thefootnote}{\fnsymbol{footnote}} \title{\textbf{\sffamily Nonparametric C- and D-vine based quantile regression}} \date{\small \today} \newcounter{savecntr1} \newcounter{restorecntr1} \newcounter{savecntr2} \newcounter{restorecntr2} \author{Marija Tepegjozova\setcounter{savecntr1}{\value{footnote}}\thanks{Department of Mathematics, Technische Universit{\"a}t M{\"u}nchen, Boltzmannstra{\ss}e 3, 85748 Garching, Germany (email: \href{mailto:[email protected]}{[email protected]} (corresponding author), \href{mailto:[email protected]}{[email protected]})}, Jing Zhou\setcounter{restorecntr1}{\value{footnote}}\thanks{ORStat and Leuven Statistics Research Center, KU Leuven, Naamsestraat 69-box 3555 Leuven, Belgium (email: \href{mailto:[email protected]}{[email protected]}, \href{mailto:[email protected]}{[email protected]})}, Gerda Claeskens$^{\dagger}$ and Claudia Czado$^{*}$ } \maketitle } \begin{abstract} { Quantile regression is a field with steadily growing importance in statistical modeling. It is a complementary method to linear regression, since computing a range of conditional quantile functions provides more accurate modeling of the stochastic relationship among variables, especially in the tails. We introduce a non-restrictive and highly flexible nonparametric quantile regression approach based on C- and D-vine copulas. Vine copulas allow for separate modeling of marginal distributions and the dependence structure in the data, and can be expressed through a graphical structure consisting of a sequence of linked trees. In this way, we obtain a quantile regression model that overcomes typical issues of quantile regression, such as quantile crossings, collinearity, and the need for transformations and interactions of variables. Our approach incorporates a two-step ahead ordering of variables, maximizing the conditional log-likelihood of the tree sequence while taking into account the next two tree levels.
We show that the nonparametric conditional quantile estimator is consistent. The performance of the proposed methods is evaluated in both low- and high-dimensional settings using simulated and real-world data. The results support the superior prediction ability of the proposed models.} \end{abstract} \keywords{ vine copulas, conditional quantile function, nonparametric pair-copulas} \maketitle \section{Introduction}\label{introduction1} As a robust alternative to ordinary least squares regression, which estimates the conditional mean, quantile regression \citep{koenker1978regression} focuses on the conditional quantiles. This method has been studied extensively in statistics, economics, and finance. The pioneering monograph by \citet{Koenker2005} investigated linear quantile regression systematically. It presented properties of the estimators, including asymptotic normality and consistency, under various assumptions such as independence of the observations, independent and identically distributed (i.i.d.) errors with a continuous distribution, and predictors having a bounded second moment. Subsequent extensions of linear quantile regression have been intensively studied; see, for example, quantile regression in the Bayesian framework \citep{yu2001bayesian}, for longitudinal data \citep{koenker2004quantile}, for time-series models \citep{xiao2009conditional}, for high-dimensional models with an $l_1$-regularizer \citep{belloni2011etal}, and nonparametric estimation by kernel-weighted local linear fitting \citep{yu1998local} and by additive models \citep{koenker2011additive, fenske2011identifying}. The theoretical analysis of the above-mentioned extensions is based on imposing additional assumptions, such as samples that are i.i.d. (see for example \citet{yu1998local,belloni2011etal}), or that are generated by a known additive function (see for example \citet{koenker2011additive, koenker2004quantile}).
Such assumptions, which guarantee the performance of the proposed methods for certain data structures, cause concerns in applications due to the uncertainty about real-world data structures. \citet{bernard2015conditional} addressed other potential concerns, such as quantile crossings and model misspecification, when the dependence structure of the response and the predictors does not follow a Gaussian copula. Flexible models that assume neither homoscedasticity nor a linear relationship between the response and the predictors are therefore of interest. Recent research dealing with this issue includes quantile forests \citep{meinshausen2006quantile, li2017forest,athey2019generalized}, inspired by the earlier work on random forests \citep{breiman2001random}, and modeling conditional quantiles using copulas (see also \citet{noh2013copula, noh2015semiparametric, chen2009copula}). \noindent Vine copulas in the context of conditional quantile prediction have been investigated by \citet{kraus2017d} using drawable vine copulas (D-vines), by \citet{chang2019prediction}, and most recently by \citet{zhu2021simplified} using restricted regular vines (R-vines). The approach of \citet{chang2019prediction} is based on first finding the locally optimal regular vine structure among all predictors and then adding the response to each selected tree in the vine structure as a leaf, as also done by \citet{bauer2016pair} in the context of non-Gaussian conditional independence testing. The procedure in \citet{chang2019prediction} allows for a recursive determination of the response quantiles, which is restricted through the prespecified dependence structure among the predictors. The latter might not be the one maximizing the conditional response likelihood, which is the main focus in a regression setup. The approach of \citet{kraus2017d} is based on optimizing the conditional log-likelihood, selecting predictors sequentially until no further improvement of the conditional log-likelihood is achieved.
This approach, based on the conditional response likelihood, is more appropriate for determining the associated response quantiles. Further, the intensive simulation study in \citet{kraus2017d} showed the superior performance of D-vine copula based quantile regression compared to various quantile regression methods, namely linear quantile regression \citep{koenker1978regression}, boosting additive quantile regression \citep{Koenker2005, koenker2011additive, fenske2011identifying}, nonparametric quantile regression \citep{li2013optimal}, and semiparametric quantile regression \citep{noh2015semiparametric}. In parallel to our work, \citet{zhu2021simplified} proposed an extension of this D-vine based forward regression to a restricted R-vine forward regression with comparable performance to the D-vine regression. Thus, the D-vine quantile regression will be our benchmark model.\\ We extend the method of \citet{kraus2017d} in two ways: (1) our approach is applicable to both C-vine and D-vine copulas; (2) a two-step ahead construction is introduced instead of the one-step ahead construction. Since the two-step ahead construction is the main difference between our method and \citet{kraus2017d}, we explain the second point in more detail. Our proposed method proceeds by adding predictors to the model sequentially. However, in contrast to \citet{kraus2017d}, which looks only one variable ahead, our new approach looks two variables ahead when selecting the variable to be added in each step. The general idea of this two-step ahead algorithm is as follows: in each step, we study combinations of two variables to find the variable that, in combination with another, improves the conditional log-likelihood the most.
Thus, in combination with a forward selection method, this two-step ahead algorithm allows us to construct nonparametric quantile estimators that improve the conditional log-likelihood in each step and, most importantly, take possible future improvements into account. Our method is applicable to both low- and high-dimensional data. By construction, quantile crossings are avoided. All marginal densities and copulas are estimated nonparametrically, allowing more flexibility than parametric specifications. \citet{kraus2017d} addressed the necessity and possible benefit of the nonparametric estimation of bivariate copulas in the quantile regression framework. This construction permits a large variety of dependence structures, resulting in a well-performing conditional quantile estimator. Moreover, extending to the C-vine copula class, in addition to the D-vine copulas, provides greater flexibility. \noindent The paper is organized as follows. Section~\ref{section:background} introduces the general setup, the concept of C-vine and D-vine copulas, and the nonparametric approach for estimating copula densities. Section~\ref{section:vinebasedintro} describes the vine based approach for quantile regression. The new two-step ahead forward selection algorithms are described in Section~\ref{section:main}. We investigate in Proposition~\ref{thm:consistency} the consistency of the conditional quantile estimator for given variable orders. The finite-sample performance of the vine based conditional quantile estimator is evaluated in Section~\ref{section:simulation} by several quantile-related measures in various simulation settings. We apply the newly introduced algorithms to low- and high-dimensional real data in Section~\ref{section: real data}. In Section~\ref{section:disscussion} we conclude and discuss possible directions of future research.
\section{Theoretical background}\label{section:background} Consider the random vector $\bm{X} = (X_1, \ldots, X_d)^T$ with observed values $\bm{x}=(x_1, \ldots, x_d)^T$, joint distribution and density functions $F$ and $f$, and marginal distribution and density functions $F_{X_j}$ and $f_{X_j}$ for $X_j, j=1,\ldots, d$. Sklar's theorem \citep{sklar1959fonctions} allows us to represent any multivariate distribution in terms of its marginals $F_{X_j}$ and a copula $C$ encoding the dependence structure. In the continuous case, $C$ is unique and satisfies $F(\bm{x}) = C(F_{X_1}(x_1), \ldots, F_{X_d}(x_d))$ and $f(\bm{x}) = c(F_{X_1}(x_1), \ldots, F_{X_d}(x_d))[\prod_{j=1}^d f_{X_j}(x_j)]$, where $c$ is the density function of the copula $C$. To characterize the dependence structure of $\bm{X}$, we transform each $X_j$ to a uniform variable $U_j$ by applying the probability integral transform, i.e. $U_j \coloneqq F_{X_j}(X_j), \; j = 1, \ldots, d$. Then the random vector $\bm U = (U_1, \ldots, U_d)^T$ with observed values $(u_1, \ldots, u_d)^T$ has a copula as its joint distribution, denoted $C_{U_1,\ldots, U_d}$, with associated copula density function $c_{U_1, \ldots, U_d}$. While the catalogue of bivariate parametric copula families is large, this is not true for $d > 2$. Therefore, conditioning is applied to construct multivariate copulas using only bivariate copulas as building blocks. \citet{joe1996families} gave the first pair copula construction for $d$ dimensions in terms of distribution functions, while \citet{bedford2002vines} independently developed constructions in terms of densities together with a graphical building plan, called a regular vine tree structure. It consists of a set of linked trees $T_1, \ldots, T_{d-1}$ (edges in tree $T_j$ become nodes in tree $T_{j+1}$) satisfying a proximity condition, which allows one to identify all possible constructions.
Each edge of the trees is associated with a pair copula $C_{U_i, U_{j}; \bms{U}_D}$, where $D$ is a subset of indices not containing $i,j$. In this case the set $\{i,j\}$ is called the conditioned set, while $D$ is the conditioning set. A joint density using the class of vine copulas is then the product of all pair copulas identified by the tree structure, evaluated at appropriate conditional distribution functions $F_{X_{j}|\bms{X}_D}$, and the product of the marginal densities $f_{X_j},j=1,\ldots,d$. A detailed treatment of vine copulas together with estimation methods and model choice approaches is given, for example, in \citet{joe2014dependence} and \citet{czado2019analyzing}. \noindent Since we are interested in simple copula based estimation methods for conditional quantiles, we restrict ourselves to two subclasses of the regular vine tree structure, namely the D- and C-vine structures. We show later that these structures allow us to express conditional distributions and quantiles in closed form. In the D-vine tree structure all trees are paths, i.e. all nodes have degree $\leq 2$. Nodes with degree 1 are called leaf nodes. A C-vine structure occurs when all trees are stars with a root node in the center. The right and left panels of Figure~\ref{fig:examplecdvine} illustrate a D-vine and a C-vine tree sequence in four dimensions, respectively. \noindent For these subclasses we can easily give the corresponding vine density \citep[Chapter~4]{czado2019analyzing}. For a D-vine density we have a permutation $s_1,\ldots ,s_d$ of $1,\ldots ,d$ such that \begin{equation} \label{eq:d-vine} \begin{aligned} f( x_1,\hdots,x_d)= & \prod_{j=1} ^{d-1}\prod_{i=1}^{d-j} c_{U_{s_i},U_{s_{i+j}};U_{s_{i+1}},\hdots,U_{s_{i+j-1}}} \left( F_{X_{s_i}|X_{s_{i+1}},\hdots,X_{s_{i+j-1}}} (x_{s_i}|x_{s_{i+1}}, \hdots, x_{s_{i+j-1}}) , \right. \\ & \left.
F_{X_{s_{i+j}}|X_{s_{i+1}},\hdots,X_{s_{i+j-1}}} (x_{s_{i+j}}|x_{s_{i+1}}, \hdots, x_{s_{i+j-1}}) \right) \cdot \prod_{k=1}^{d}f_{X_{s_k}} (x_{s_k}), \end{aligned} \end{equation} while for a C-vine density the following representation holds \begin{equation} \label{eq:c-vine} \begin{aligned} f( x_1,\hdots,x_d)= & \prod_{j=1} ^{d-1}\prod_{i=1}^{d-j} c_{U_{s_j},U_{s_{j+i}};U_{s_1},\hdots,U_{s_{j-1}}} \left( F_{X_{s_j}|X_{s_1},\hdots,X_{s_{j-1}}} (x_{s_j}|x_{s_1}, \hdots, x_{s_{j-1}}), \right. \\ & \left. F_{X_{s_{j+i}}|X_{s_1},\hdots,X_{s_{j-1}}} (x_{s_{j+i}}|x_{s_1}, \hdots, x_{s_{j-1}}) \right) \cdot \prod_{k=1}^{d}f_{X_{s_k}} (x_{s_k}). \end{aligned} \end{equation} To determine the needed conditional distribution $F_{X_{j}|\bms{X}_D}$ in \eqref{eq:d-vine} and \eqref{eq:c-vine} for appropriate choices of $j$ and $D$, the recursion discussed in \citet{joe1996families} is available. Using $u_j=F_{X_{j}|\bms{X}_D}(x_j|\bms{x}_D)$ for $j=1,\ldots,d$ we can express them as compositions of h-functions. These are defined in general as $h_{U_i|U_j;\bms{U}_D}(u_i|u_j; \bm{u}_D) = \frac{\partial}{\partial u_j}C_{U_i,U_j;\bms{U}_D}(u_i,u_j;\bm{u}_D)$. Additionally we made in \eqref{eq:d-vine} and \eqref{eq:c-vine} the simplifying assumption \citep[Section 5.4]{czado2019analyzing}, that is, the copula function $C_{U_i,U_j;\bms{U}_D}$ does not depend on the specific conditioning value of $\bm{u}_D$, i.e. $C_{U_i,U_j;\bms{U}_D}(u_i,u_j;\bm{u}_D)=C_{U_i,U_j;\bms{U}_D}(u_i,u_j)$. The dependence on $\bm{u}_D$ in \eqref{eq:d-vine} and \eqref{eq:c-vine} is completely captured by the arguments of the pair copulas. This assumption is often made for tractability reasons in higher dimensions (\citet{haff2010simplified} and \citet{stoeber2013simplified}). It implies further, that the h-function satisfies $h_{U_i|U_j;\bms{U}_D}(u_i|u_j; \bm{u}_D) = \frac{\partial}{\partial u_j}C_{U_i,U_j;\bms{U}_D}(u_i,u_j)=C_{U_i|U_j;\bms{U}_D}(u_i|u_j)$ and is independent of $\bm{u}_D$. 
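As a concrete illustration of how conditional distributions are built from compositions of h-functions under the simplifying assumption, consider a three-dimensional D-vine with path $1$-$2$-$3$. The sketch below uses Gaussian pair copulas, whose h-function is available in closed form; the copula family and all parameter values are illustrative assumptions, not part of the nonparametric method developed in this paper.

```python
from statistics import NormalDist

std_norm = NormalDist()  # standard normal: .cdf and .inv_cdf

def h_gauss(u, v, rho):
    """h-function of a bivariate Gaussian copula with parameter rho:
    h(u|v) = Phi((Phi^-1(u) - rho*Phi^-1(v)) / sqrt(1 - rho^2)),
    i.e. the conditional distribution C_{U|V}(u|v)."""
    x, y = std_norm.inv_cdf(u), std_norm.inv_cdf(v)
    return std_norm.cdf((x - rho * y) / (1.0 - rho**2) ** 0.5)

# D-vine on (U1, U2, U3) with path 1-2-3: pair copulas C_{12}, C_{23},
# and C_{13;2}.  Under the simplifying assumption, the conditional
# distribution C_{1|23} is the composition of h-functions below.
# All rho values are illustrative.
rho12, rho23, rho13_2 = 0.6, 0.5, 0.3

def C_1_given_23(u1, u2, u3):
    a = h_gauss(u1, u2, rho12)   # C_{1|2}(u1|u2)
    b = h_gauss(u3, u2, rho23)   # C_{3|2}(u3|u2)
    return h_gauss(a, b, rho13_2)

print(C_1_given_23(0.5, 0.5, 0.5))
```

Note that the argument of the outer h-function depends on $u_2$ only through the inner h-functions, exactly as stated for the simplified vine density.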
\begin{figure} \caption{ A C-vine tree sequence (left panel) and a D-vine tree sequence (right panel) in four dimensions.} \label{fig:examplecdvine} \end{figure} \subsection{Nonparametric estimators of the copula densities and h-functions}\label{sectionnonpar} There are many methods to estimate a bivariate copula density $c_{U_i, U_j}$ nonparametrically. Examples are the transformation estimator \citep{charpentier2007estimation}, the transformation local likelihood estimator \citep{geenens2017probit}, the tapered transformation estimator \citep{wen2015improved}, the beta kernel estimator \citep{charpentier2007estimation}, and the mirror-reflection estimator \citep{gijbels1990estimating}. Among the above-mentioned kernel estimators, the transformation local likelihood estimator \citep{geenens2017probit} was found by \citet{nagler2017nonparametric} to have the best overall performance. The estimator is implemented in the R packages \texttt{kdecopula} \citep{kdecopula} and \texttt{rvinecopulib} \citep{rvinecopulib} using Gaussian kernels. We review its construction in Appendix~\ref{section:appendA}. To satisfy the copula definition, it is scaled to have uniform margins. \noindent As mentioned above, the simplifying assumption implies that $h_{U_i|U_j;\bms{U}_D}(u_i|u_j; \bm{u}_D)$ is independent of specific values of $\bm{u}_D$. Thus, it is sufficient to show how the h-function $h_{U_i|U_j}=C_{U_i|U_j}(u_i | u_j)$ can be estimated nonparametrically.
For this we use as estimator \begin{equation*} \hat{C}_{U_i|U_j}(u_i | u_j) = \int^{u_i}_0 \hat{c}_{U_i, U_j}(\tilde{u}_i, u_j) \mathrm{d} \tilde{u}_i \end{equation*} where $\hat{c}_{U_i, U_j}$ is one of the above mentioned nonparametric estimators of the bivariate copula density of $(U_i, U_j)$ for which it holds that $\hat{c}_{U_i, U_j}$ integrates to 1 and has uniform margins.\\ \section{Vine based quantile regression}\label{section:vinebasedintro} In the general regression framework the predictive ability of a set of variables $\bm{X} = (X_1, \ldots, X_p)^T$ for the response $Y\in\mathbbm{R}$ is studied. The main interest of vine based quantile regression is to predict the $\alpha \in (0, 1)$ quantile $ q_{\alpha}(x_1, \ldots, x_p) = F^{-1}_{Y|X_1, \ldots, X_p} (\alpha | x_1, \ldots, x_p) $ of the response variable $Y$ given $\bm{X}$ by using a copula based model of $(Y, \bm{X})^T$. As shown in \citet{kraus2017d} this can be expressed as \begin{equation}\label{eq:conditional quantile} F^{-1}_{Y|X_1, \ldots, X_p} (\alpha | x_1, \ldots, x_p) = F^{-1}_Y\big(C^{-1}_{V|U_1, \ldots,U_p}(\alpha | F_{X_1}(x_1), \ldots, F_{X_p}(x_p))\big), \end{equation} where $C_{V\vert U_1,\hdots , U_p}$ is the conditional distribution function of $V=F_Y(Y)$ given $U_j= F_{X_j}(X_j)= u_j$ for $j = 1,\hdots ,p$ with corresponding density $c_{V\vert U_1,\hdots , U_p}$, and $C_{V, U_1,\hdots , U_p}$ denotes the $(p+1)$-dimensional copula associated with the joint distribution of $(Y,\bm{X})^T$. In view of Section~\ref{introduction1}, we have $d= p+1$. An estimate of $q_{\alpha}(x_1, \ldots, x_p)$ can be obtained using estimated marginal quantile functions $\hat{F}^{-1}_Y$, $\hat{F}^{-1}_{X_j}, j = 1, \ldots, p$ and the estimated conditional distribution function $\hat{C}^{-1}_{V |U_ 1, \ldots,U_ p}$ giving $\hat{q}_{\alpha}(x_1, \hdots, x_p) =\hat{F}^{-1}_Y \big( \hat{C}^{-1}_{V\vert U_1,\hdots , U_p}(\alpha\vert \hat{F}_{X_1}(x_1), \hdots \hat{F}_{X_p}(x_p))\big)$. 
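The inversion $\hat{q}_{\alpha}(x_1,\hdots,x_p)=\hat{F}^{-1}_Y\big(\hat{C}^{-1}_{V\vert U_1,\hdots,U_p}(\alpha\vert\cdot)\big)$ can be sketched in a toy setting with a single predictor and a Gaussian pair copula, for which the inverse h-function is available in closed form. The copula family and standard normal margins below are assumptions made only for this illustration; since the inverse h-function is increasing in $\alpha$, the resulting quantile curves cannot cross.

```python
from statistics import NormalDist

std_norm = NormalDist()

def h_inv_gauss(alpha, v, rho):
    """Inverse h-function of a Gaussian pair copula:
    C^{-1}_{V|U}(alpha|v) = Phi(rho*Phi^-1(v) + sqrt(1-rho^2)*Phi^-1(alpha))."""
    z = rho * std_norm.inv_cdf(v) + (1.0 - rho**2) ** 0.5 * std_norm.inv_cdf(alpha)
    return std_norm.cdf(z)

def cond_quantile(alpha, x, rho, F_X=std_norm.cdf, F_Y_inv=std_norm.inv_cdf):
    """q_alpha(x) = F_Y^{-1}( C^{-1}_{V|U}(alpha | F_X(x)) ) with a
    single predictor; margins and copula are illustrative choices."""
    return F_Y_inv(h_inv_gauss(alpha, F_X(x), rho))

# With standard normal margins this reproduces the closed-form answer
# rho*x + sqrt(1 - rho^2) * Phi^-1(alpha) for a bivariate normal pair.
print(cond_quantile(0.95, 1.0, 0.7))
```

In the paper, the Gaussian building blocks above are replaced by nonparametrically estimated pair copulas and kernel-estimated margins, but the structure of the computation is the same.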
\noindent In general, $C_{V,U_1,\hdots ,U_p}$ can be any $(p+1)$-dimensional multivariate copula; however, for certain vine structures the corresponding conditional distribution function $C_{V\vert U_1,\hdots , U_p}$ can be obtained in closed form, not requiring numerical integration. For D-vine structures this is possible and has already been utilized in \citet{kraus2017d}. \citet{masterMarija} showed that this is also the case for certain C-vine structures. More precisely, a copula $C_{V, U_1,\hdots , U_p}$ with D-vine structure allows one to express $C_{V|U_1,\hdots , U_p}$ in closed form if and only if the response $V$ is a leaf node in the first tree of the tree sequence. For a C-vine structure we need that the node containing the response variable $V$ in the conditioned set is not a root node in any tree. Additional flexibility in using such D- and C-vine structures is achieved by allowing for nonparametric pair-copulas as building blocks. \noindent The order of the predictors within the tree sequence is itself a free parameter with direct impact on the target function $C_{V\vert U_1,\hdots , U_p}$ and thus on the corresponding prediction performance of $q_{\alpha}(x_1, \ldots, x_p)$. For this we recall the concept of a node order for C- and D-vine copulas introduced in \citet{masterMarija}. A D-vine copula denoted by $\mathcal{C}_D$ has order $ \mathcal{O}_{D}(\mathcal{C}_D)= (V,U_{i_1},\hdots ,U_{i_p}),$ if the response $V$ is the first node of the first tree $T_1$ and $U_{i_k}$ is the $(k+1)$-th node of $T_1$, for $k=1,\hdots ,p$. A C-vine copula $\mathcal{C}_C$ has order $\mathcal{O}_{C}(\mathcal{C}_C) = (V,U_{i_1},\hdots ,U_{i_p}),$ if $U_{i_1}$ is the root node in the first tree $T_1$, $U_{i_2}U_{i_1}$ is the root node in the second tree $T_2$, and $U_{i_k}U_{i_{k-1}}; U_{i_1},\hdots ,U_{i_{k-2}}$ is the root node in the $k$-th tree $T_k$ for $k=3, \hdots, p-1$.
\noindent Our goal now is to find an optimal order for the D- or C-vine copula model with respect to a fit measure that quantifies the explanatory power of a model. One such measure is the estimated conditional copula log-likelihood. For $N$ i.i.d. observations $\bm{v} \coloneqq (v^{(1)},\hdots ,v^{(N)})^T\;\textrm{and}\; \bm{u}_j\coloneqq( u_j^{(1)},\hdots ,u_j^{(N)})^T,\; \textrm{for}\; j=1,\hdots ,p$ of the random vector $(V,U_1,\hdots ,U_p)^T$ we fit a C- or D-vine copula $\hat{\mathcal{C}}$ with order $\mathcal{O}(\hat{\mathcal{C}})=(V,U_1,\hdots ,U_p)$ using nonparametric pair copulas. The fitted conditional log-likelihood can then be determined as \begin{align*} cll & (\hat{\mathcal{C}},\bm{v}, (\bm{u}_1,\hdots ,\bm{u}_p)) = \sum_{n=1}^N \ln \hat{c}_{V\vert U_1,\hdots , U_p}(v^{(n)}\vert u_1^{(n)},\hdots ,u_p^{(n)}) = \sum_{n=1}^N\Big[\ln \hat{c}_{V,U_1}(v^{(n)},u_1^{(n)}) + \\ & \sum_{j=2}^{p}\ln \hat{c}_{V,U_j\vert U_1,\hdots ,U_{j-1}}(\hat{C}_{V\vert U_1,\hdots ,U_{j-1}}( v^{(n)}\vert u_1^{(n)},\hdots ,u_{j-1}^{(n)}), \hat{C}_{U_j\vert U_1,\hdots ,U_{j-1}}( u_j^{(n)}\vert u_1^{(n)},\hdots ,u_{j-1}^{(n)})) \Big]. \end{align*} Penalizations for model complexity can be added when parametric pair copulas are used, as shown in \citet{masterMarija}. Defining an appropriate penalty in the case of nonparametric pair copulas is an open research question (see also Section \ref{section:disscussion}). \section{Forward selection algorithms}\label{section:main} Given a set of $p$ predictors, there are $p!$ different orders that uniquely determine $p!$ C-vines and $p!$ D-vines. Fitting and comparing all of them is computationally inefficient. Thus, the idea is to have an algorithm that sequentially chooses the elements of the order, so that at every step the resulting model for predicting the conditional quantiles has the highest conditional log-likelihood.
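The telescoping structure of the conditional log-likelihood above can be sketched for two predictors. Gaussian pair copulas with fixed parameters stand in for the nonparametrically estimated ones, and the tiny data set is illustrative; only the structure of the sum matters.

```python
import math
from statistics import NormalDist

std_norm = NormalDist()

def c_gauss(u, v, rho):
    """Gaussian pair-copula density c(u, v; rho)."""
    x, y = std_norm.inv_cdf(u), std_norm.inv_cdf(v)
    r2 = 1.0 - rho**2
    return math.exp(-(rho**2 * (x * x + y * y) - 2 * rho * x * y) / (2 * r2)) / math.sqrt(r2)

def h_gauss(u, v, rho):
    """Gaussian h-function C_{U|V}(u|v)."""
    return std_norm.cdf((std_norm.inv_cdf(u) - rho * std_norm.inv_cdf(v)) / (1.0 - rho**2) ** 0.5)

def cll_two_predictors(v, u1, u2, rho_v1, rho_v2_1, rho_12):
    """Conditional log-likelihood for the order (V, U1, U2):
    sum_n [ ln c_{V,U1} + ln c_{V,U2;U1}(C_{V|U1}, C_{U2|U1}) ].
    The pair-copula parameters are fixed here for illustration; in the
    algorithm they would be estimated nonparametrically."""
    total = 0.0
    for vn, u1n, u2n in zip(v, u1, u2):
        total += math.log(c_gauss(vn, u1n, rho_v1))
        a = h_gauss(vn, u1n, rho_v1)    # C_{V|U1}
        b = h_gauss(u2n, u1n, rho_12)   # C_{U2|U1}
        total += math.log(c_gauss(a, b, rho_v2_1))
    return total

# Tiny illustrative pseudo-copula sample
v  = [0.2, 0.5, 0.8, 0.6]
u1 = [0.3, 0.4, 0.7, 0.5]
u2 = [0.1, 0.6, 0.9, 0.4]
print(cll_two_predictors(v, u1, u2, 0.5, 0.3, 0.4))
```

With all parameters set to zero every pair copula is the independence copula, so the conditional log-likelihood is exactly zero; this is a useful sanity check on the implementation.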
Building upon the idea of \citet{kraus2017d} for the one-step ahead D-vine regression, we propose an algorithm which allows for more flexibility and which is less greedy, given the intention to obtain a globally optimal C- or D-vine fit. The algorithm builds the C- or D-vine step by step, starting with an order consisting of only the response variable $V$. Each step adds one of the predictors to the order based on the improvement of the conditional log-likelihood, while taking into account the possibility of future improvement, i.e. extending our view two steps ahead in the order. As discussed in Section~\ref{sectionnonpar}, the pair copulas at each step are estimated nonparametrically, in contrast to the parametric approach of \citet{kraus2017d}. We present the implementation of both C-vine and D-vine based quantile regression in a single algorithm, in which the user decides whether to fit a C-vine or D-vine model based on background knowledge of the dependence structures in the data. Implementation for a large data set is computationally challenging; therefore, randomization is introduced to guarantee computational efficiency in high dimensions. \subsection{Two-step ahead forward selection algorithm for C- and D-vine based quantile regression}\label{twostep} \textbf{Input and data preprocessing:} Consider $N$ i.i.d. observations $ \bm{y} \coloneqq (y^{(1)},\hdots ,y^{(N)})$ and $\bm{x}_j\coloneqq( x_j^{(1)},\hdots ,x_j^{(N)})\;\; \textrm{for}\; j=1,\hdots ,p , $ from the random vector $(Y,X_1,\hdots ,X_p)^T$. The input data are on the x-scale, but in order to fit bivariate copulas we need to transform them to the u-scale using the probability integral transform. Since the marginal distributions $F_{Y}$ and $F_{X_j}$, for $j=1,\hdots ,p,$ are unknown, they are estimated using a univariate nonparametric kernel density estimator from the R package \texttt{kde1d} \citep{kde1d}.
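The probability integral transform step can also be sketched with a simple rank-based estimate of the margins, $u_i = \mathrm{rank}(x_i)/(N+1)$; this is a common alternative shown only for illustration and is not the kernel-smoothed \texttt{kde1d} estimator used in the algorithm.

```python
def pseudo_observations(x):
    """Rank-based probability integral transform: maps a sample to
    (0, 1) via u_i = rank(x_i)/(N + 1).  Assumes no ties; a simple
    stand-in for kernel-smoothed marginal distribution estimates."""
    n = len(x)
    order = sorted(range(n), key=lambda i: x[i])
    u = [0.0] * n
    for rank, i in enumerate(order, start=1):
        u[i] = rank / (n + 1)
    return u

# Illustrative data on the x-scale
x = [2.3, -0.7, 1.1, 5.2, 0.0]
u = pseudo_observations(x)
print(u)
```

Dividing by $N+1$ rather than $N$ keeps all pseudo-observations strictly inside the unit interval, which is required when evaluating copula densities at the boundary.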
This results in the pseudo copula data $\hat{v}^{(n)} \coloneqq \hat{F}_Y(y^{(n)})\;$ and $\hat{u}_j^{(n)}\coloneqq\hat{F}_{X_j}(x_j^{(n)}),$ for $n=1,\hdots ,N,\;\; j=1,\hdots ,p. $ The normalized marginals (z-scale) are defined as $Z_j\coloneqq\Phi^{-1}(U_j)$ for $j=1,\hdots ,p,$ and $Z_V\coloneqq\Phi^{-1}(V)$, where $\Phi$ denotes the standard normal distribution function. \\ \textbf{Step 1:} To reduce computational complexity, we perform a pre-selection of the predictors based on Kendall's $\tau$. This is motivated by the fact that Kendall's $\tau$ is rank-based and therefore invariant under monotone transformations of the marginals, and that it can be expressed in terms of pair copulas. Using the pseudo copula data $(\hat{\bm v}, \hat{\bm u}_j) = \lbrace \hat{v}^{(n)}, \hat{u}^{(n)}_j \vert n = 1,\hdots , N \rbrace, $ estimates $\hat{\tau}_{VU_j}$ of the Kendall's $\tau$ values between the response $V$ and all possible predictors $U_j$ for $j=1,\hdots ,p$ are obtained. For a given $k\leq p$, the $k$ largest estimates of $\vert\hat{\tau}_{VU_j}\vert$ are selected and the corresponding indices $q_1,\hdots ,q_k$ are identified such that $\vert\hat{\tau}_{VU_{q_1}}\vert \geq \vert\hat{\tau}_{VU_{q_2}}\vert \geq\hdots\geq \vert\hat{\tau}_{VU_{q_k}}\vert \geq \vert\hat{\tau}_{VU_{q_{k+1}}}\vert \geq\hdots\geq \vert\hat{\tau}_{VU_{q_p}}\vert.$ The parameter $k$ is a hyperparameter and therefore subject to tuning. To obtain a parsimonious model, we suggest a $k$ corresponding to $5\%$--$20\%$ of the total number of predictors. The $k$ candidate predictors and the corresponding candidate index set of step 1 are defined as $U_{q_1},\hdots , U_{q_k}$ and $K_1 = \left\lbrace q_1,\hdots ,q_k\right\rbrace$, respectively.
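The step-1 pre-selection can be sketched as follows. The $O(N^2)$ Kendall's $\tau$ implementation and the toy data are for illustration only; in practice a fast $O(N\log N)$ implementation would be used.

```python
from itertools import combinations

def kendall_tau(a, b):
    """Sample Kendall's tau: (concordant - discordant) / (n choose 2).
    O(n^2) version for illustration; assumes no ties."""
    n = len(a)
    s = 0
    for i, j in combinations(range(n), 2):
        s += 1 if (a[i] - a[j]) * (b[i] - b[j]) > 0 else -1
    return s / (n * (n - 1) / 2)

def preselect(v, us, k):
    """Step-1 pre-selection: indices of the k predictors with the
    largest |tau(V, U_j)|, i.e. the candidate index set K_1."""
    taus = [(abs(kendall_tau(v, u)), j) for j, u in enumerate(us)]
    taus.sort(reverse=True)
    return [j for _, j in taus[:k]]

# Illustrative pseudo-copula data
v  = [0.1, 0.4, 0.35, 0.8, 0.9]
us = [[0.2, 0.3, 0.5, 0.7, 0.95],   # strongly concordant with v
      [0.9, 0.1, 0.6, 0.3, 0.2],    # weakly related
      [0.8, 0.7, 0.6, 0.2, 0.1]]    # strongly discordant (large |tau|)
print(preselect(v, us, 2))
```

Note that the discordant predictor is retained: the screening uses the absolute value $\vert\hat{\tau}_{VU_j}\vert$, since strong negative dependence is just as informative for quantile prediction as strong positive dependence.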
For all $ c \in K_1$ and $j\in \left\lbrace 1,\hdots ,p\right\rbrace \setminus \left\lbrace c\right\rbrace$, the candidate two-step ahead C- or D-vine copulas are defined as the 3-dimensional copulas $\mathcal{C}^1_{c,j}$ with order $\mathcal{O}(\mathcal{C}^1_{c,j}) = (V,U_c,U_j)$. The first predictor is added to the order based on the conditional log-likelihood of the candidate two-step ahead C- or D-vine copulas, $\mathcal{C}^1_{c,j}$, given as \begin{small} \begin{equation*} cll\left(\mathcal{C}^1_{c,j},\bm{\hat{v}},(\bm{\hat{u}}_c,\bm{\hat{u}}_j)\right) = \sum_{n=1}^N \Big[\log \hat{c}_{V,U_c}(\hat{v}^{(n)},\hat{u}_c^{(n)}) + \log \hat{c}_{V,U_j\vert U_c}\big( \hat{h}_{V|U_c} ( \hat{v}^{(n)}\vert\hat{u}_c^{(n)}),\hat{h}_{U_j|U_c}( \hat{u}_j^{(n)}\vert\hat{u}_c^{(n)})\big)\Big]. \end{equation*} \end{small} \noindent For each candidate predictor $U_c$, the maximal two-step ahead conditional log-likelihood at step 1, $cll_c^1$, is defined as $ cll_c^1 \coloneqq \max_{j\in \lbrace 1,\hdots ,p\rbrace \setminus \lbrace c\rbrace} cll\left(\mathcal{C}^1_{c,j}, \bm{\hat{v}},(\bm{\hat{u}}_c,\bm{\hat{u}}_j)\right),\;\forall c\in K_1. $ Based on the maximal two-step ahead conditional log-likelihood at step 1, the index $t_1$ is chosen as $ t_1 \coloneqq \argmax_{c\in K_1}\; cll_c^1,$ and the corresponding candidate predictor $U_{t_1}$ is selected as the first predictor added to the order. An illustration of the vine tree structure of the candidate two-step ahead copulas $\mathcal{C}^1_{c,j}$, in the case of fitting a D-vine model, with order $\mathcal{O}_{D}(\mathcal{C}^1_{c,j}) = (V,U_c,U_j)$, is given in Figure~\ref{1stepDvine}. Finally, the current optimal fit after the first step is the C-vine or D-vine copula $\mathcal{C}_1$ with order $\mathcal{O}(\mathcal{C}_1) = (V,U_{t_1})$.
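The two-step ahead selection itself is a small combinatorial search. In the following Python sketch the nonparametric pair-copula fitting is abstracted behind a user-supplied scoring function \texttt{cll(c, j)}, assumed to return the conditional log-likelihood of the candidate copula with order $(V,U_c,U_j)$; the toy score table is hypothetical:

```python
def two_step_ahead_select(candidates, remaining, cll):
    """For each candidate c in the pre-selected set, score its best
    two-step extension (c, j); keep the c with the largest such score."""
    best_c, best_score = None, float("-inf")
    for c in candidates:
        score_c = max(cll(c, j) for j in remaining if j != c)
        if score_c > best_score:
            best_c, best_score = c, score_c
    return best_c, best_score

# Hypothetical conditional log-likelihood scores cll(C^1_{c,j}).
scores = {("a", "b"): 3.0, ("a", "c"): 1.0,
          ("b", "a"): 2.5, ("b", "c"): 2.0}
t1, s = two_step_ahead_select(candidates=["a", "b"],
                              remaining=["a", "b", "c"],
                              cll=lambda c, j: scores[(c, j)])
# "a" wins: its best two-step extension scores 3.0, versus 2.5 for "b".
```

A purely greedy one-step criterion would score each candidate by its own pair copula with $V$ alone; looking one extension ahead allows a candidate to be chosen for the improvement it enables at the next step.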
\begin{figure} \caption{ $V$ is fixed as the first node of $T_1$ and the first candidate predictor to be included in the model, $U_c$ (gray), is chosen based on the conditional log-likelihood of the two-step ahead copula including the predictor $U_j$ (gray filled).} \label{1stepDvine} \end{figure} \noindent\textbf{Step $r$:} After $r-1$ steps, the current optimal fit is the C- or D-vine copula $\mathcal{C}_{r-1}$ with order $ \mathcal{O}(\mathcal{C}_{r-1}) = (V,U_{t_1},\hdots ,U_{t_{r-1}})$. At each previous step $i$, the order of the current optimal fit was sequentially updated with the predictor $U_{t_i}$ for $i = 1,\hdots ,r-1$. At the $r$-th step the next candidate predictor is to be included. To do so, the set of potential candidates is narrowed based on a partial correlation measure. Defining a partial Kendall's $\tau$ is not straightforward and requires the notion of a partial copula, which is the average of the conditional copula over the values of the conditioning variables (see, for example, \cite{gijbels2021study} and the references given there). In addition, the computation in the case of multivariate conditioning is very demanding and still an open research problem. Therefore, we take a pragmatic approach and base our candidate selection on the partial correlation. Due to the assumption of Gaussian margins underlying Pearson's partial correlation, the estimates are computed on the z-scale. Estimates of the empirical Pearson's partial correlation, $\hat{\rho}_{Z_V,Z_j;Z_{t_1},\hdots ,Z_{t_{r-1}}}$, between the normalized response variable $Z_V$ and the normalized available predictors $Z_j$ for $j\in\lbrace 1,2,\hdots ,p\rbrace\setminus\lbrace t_1,\hdots ,t_{r-1}\rbrace$ are obtained. Similarly to the first step, a set of candidate predictors of size $k$ is selected based on the largest values of $\vert\hat{\rho}_{Z_V,Z_j;Z_{t_1},\hdots ,Z_{t_{r-1}}}\vert$ and the corresponding indices $q_1,\hdots ,q_k$.
The $k$ candidate predictors and the corresponding candidate index set of step $r$ are defined as $U_{q_1},\hdots , U_{q_k}$ and the set $K_r = \left\lbrace q_1,\hdots ,q_k\right\rbrace$, respectively. For all $ c \in K_r$ and $j\in\left\lbrace 1,2,\hdots ,p\right\rbrace\setminus\left\lbrace t_1,\hdots ,t_{r-1},c\right\rbrace$, the candidate two-step ahead C- or D-vine copulas are defined as the copulas $\mathcal{C}^r_{c,j}$ with order $\mathcal{O}(\mathcal{C}^r_{c,j}) = ( V,U_{t_1},\hdots ,U_{t_{r-1}}, U_c, U_j)$. There are $k(p-r)$ different candidate two-step ahead C- or D-vine copulas $\mathcal{C}^r_{c,j}$ (since we have $k$ candidates for the one-step ahead extension $U_c$ and, for each, $p-(r-1) -1 = p-r$ two-step ahead extensions $U_j$). Their corresponding conditional log-likelihood functions are given as \begin{small} \begin{equation*} \begin{split} c&ll\left(\mathcal{C}^r_{c,j},\right. \left. \bm{\hat{v}}, (\bm{\hat{u}}_{t_1}\hdots \bm{\hat{u}}_{t_{r-1}}, \bm{\hat{u}}_c,\bm{\hat{u}}_j)\right) = \; cll\left(\mathcal{C}_{r-1}, \bm{\hat{v}},(\bm{\hat{u}}_{t_1}\hdots \bm{\hat{u}}_{t_{r-1}} )\right)+ \\ & \sum_{n=1}^N \log \hat{c}_{VU_{c};U_{t_1},\hdots ,U_{t_{r-1}}} \left( \hat{C}_{V\vert U_{t_1},\hdots ,U_{t_{r-1}}}\big( \hat{v}^{\left(n\right)}\vert \hat{u}_{t_1}^{\left(n\right)},\hdots ,\hat{u}_{t_{r-1}}^{(n)}\big), \hat{C}_{U_{c}\vert U_{t_1},\hdots ,U_{t_{r-1}}}\big(\hat{u}_{c}^{(n)}\vert \hat{u}_{t_1}^{(n)},\hdots ,\hat{u}_{t_{r-1}}^{(n)}\big) \right) \\ & + \sum_{n=1}^N \log \hat{c}_{VU_j;U_{t_1},\hdots ,U_{t_{r-1}},U_{c}} \left( \hat{C}_{V\vert U_{t_1},\hdots ,U_{t_{r-1}},U_{c}}\big( \hat{v}^{(n)}\vert \hat{u}_{t_1}^{(n)},\hdots ,\hat{u}_{t_{r-1}}^{(n)},\hat{u}_{c}^{(n)}\big), \right. \\ & \left. \hskip 4.9cm \hat{C}_{U_j\vert U_{t_1},\hdots ,U_{t_{r-1}},U_{c}}\big(\hat{u}_{j}^{(n)}\vert \hat{u}_{t_1}^{(n)},\hdots ,\hat{u}_{t_{r-1}}^{(n)},\hat{u}_{c}^{(n)}\big) \right).
\end{split} \end{equation*} \end{small} The $r$-th predictor is then added to the order based on the maximal two-step ahead conditional log-likelihood at Step~$r$, $cll_c^r$, defined as \begin{equation}\label{cll_max} cll_c^r \coloneqq \max_{j\in\left\lbrace 1,2,\hdots ,p\right\rbrace\setminus\left\lbrace t_1,\hdots ,t_{r-1},c\right\rbrace} cll\left(\mathcal{C}^r_{c,j},\bm{\hat{v}},(\bm{\hat{u}}_{t_1}\hdots \bm{\hat{u}}_{t_{r-1}}, \bm{\hat{u}}_c, \bm{\hat{u}}_j )\right),\;\forall c\in K_r. \end{equation} The index $t_r$ is chosen as $ t_r \coloneqq \argmax_{c\in K_r} \; cll_c^r,$ and the predictor $U_{t_r}$ is selected as the $r$-th predictor of the order. An illustration of the vine tree structure of the candidate two-step ahead copulas $\mathcal{C}^r_{c,j}$, for a D-vine model with order $\mathcal{O}_{D}(\mathcal{C}^r_{c,j}) = ( V,U_{t_1},\hdots ,U_{t_{r-1}}, U_c, U_j)$, is given in Figure~\ref{rstepDvine}. At this step, the current optimal fit is the C-vine or D-vine copula $\mathcal{C}_r$, with order $\mathcal{O}(\mathcal{C}_r) = ( V,U_{t_1},\hdots ,U_{t_r}).$ The iterative procedure is repeated until all predictors are included in the order of the C- or D-vine copula model. \begin{figure} \caption{ In step $r$, the current optimal fit $\mathcal{C}_{r-1}$ is extended with the candidate predictor $U_c$ (gray), chosen based on the conditional log-likelihood of the two-step ahead copula including the predictor $U_j$ (gray filled).} \label{rstepDvine} \end{figure} \subsubsection{Additional variable reduction in higher dimensions}\label{twostepred} The above search procedure requires calculating $p-r$ conditional log-likelihoods for each candidate predictor at a given step $r$. This leads to calculating a total of $(p-r)k$ conditional log-likelihoods, where $k$ is the number of candidates. For large $p$, this procedure would cause a heavy computational burden. Hence, the idea is to reduce the number of conditional log-likelihoods calculated for each candidate predictor. This is achieved by reducing the size of the set over which the maximal two-step ahead conditional log-likelihood $cll_c^r$ in \eqref{cll_max} is computed.
Instead of over the set $\left\lbrace 1,2,\hdots ,p\right\rbrace\setminus\left\lbrace t_1,\hdots ,t_{r-1},c\right\rbrace$, the maximum can be taken over an appropriate subset. This subset can then be chosen based on the largest Pearson's partial correlations in absolute value, denoted as $|\hat{\rho}_{Z_V,Z_j;Z_{t_1},\hdots ,Z_{t_{r-1}},Z_c}|$, by random selection, or by a combination of the two. The selection method and the size of the reduction are user-decided. \subsection{Consistency of the conditional quantile estimator} The conditional quantile function on the original scale in \eqref{eq:conditional quantile} requires the inverse of the marginal distribution function of $Y$. Following \citet{kraus2017d, noh2013copula}, the marginal cumulative distribution functions $F_Y$ and $F_{X_j}$, $j=1,\ldots, p$, are estimated nonparametrically to reduce the bias caused by model misspecification. Examples of nonparametric estimators for the marginal distributions $F_Y$ and $F_{X_j}$ are the continuous kernel smoothing estimator \citep{parzen1962estimation} and the transformed local likelihood estimator in the univariate case \citep{geenens2014probit}. Using a Gaussian kernel, these two estimators of the marginal distributions are uniformly strongly consistent. When all inverses of the h-functions are also estimated nonparametrically, we establish the consistency of the conditional quantile estimator $\hat{F}^{-1}_{Y|X_1, \ldots, X_p}$ in Proposition~\ref{thm:consistency} for fixed variable orders. By showing uniform consistency, Proposition~\ref{thm:consistency} gives an indication of the performance of the conditional quantile estimator $\hat{F}^{-1}_{Y|X_1, \ldots, X_p}$ for fixed variable orders, combining the consistent estimators of $F_Y$, the $F_{X_j}$'s, and the bivariate copula densities.
Under this consistency guarantee, the numerical performance of $\hat{F}^{-1}_{Y|X_1, \ldots, X_p}$, investigated by extensive simulation studies, is presented in Section~\ref{section:simulation}. \begin{proposition}\label{thm:consistency} Let the inverses of the marginal distribution functions $F_Y$ and $F_{X_j}$, $j=1,\ldots,p$, be uniformly continuous and estimated nonparametrically, and let the inverses of the h-functions composing the conditional quantile estimator $C^{-1}_{V|U_1, \ldots, U_p}$ be uniformly continuous and estimated nonparametrically in the interior of the support of the bivariate copulas, i.e., $[\delta, 1-\delta]^2, \delta \to 0_+$. \begin{enumerate} \item[1.] If the estimators of the inverses of the marginal distribution functions $\hat{F}^{-1}_Y$, $\hat{F}^{-1}_{X_j}$, $j=1,\ldots,p$, are uniformly strongly consistent on the support $[\delta,1 -\delta], \delta \to 0_+$, and the estimators of the inverses of the h-functions composing the conditional quantile estimator $C^{-1}_{V|U_1, \ldots, U_p}$ are uniformly strongly consistent, then the estimator $\hat{F}^{-1}_{Y|X_1, \ldots, X_p} (\alpha | x_1, \ldots, x_p) $ is also uniformly strongly consistent. \item[2.] If the estimators of the inverses of the marginal distribution functions $\hat{F}^{-1}_Y$, $\hat{F}^{-1}_{X_j}$, $j=1,\ldots,p$, are at least weakly consistent, and the estimators of the inverses of the h-functions are also at least weakly consistent, then the estimator $\hat{F}^{-1}_{Y|X_1, \ldots, X_p} (\alpha | x_1, \ldots, x_p) $ is weakly consistent. \end{enumerate} \end{proposition} \noindent For more details about uniformly continuous functions, see \citet[Section~5.4]{bartle2000introduction} and \citet[p.~109, Def.~1]{kolmogorov1970introductory}. For a definition of strong uniform consistency, or convergence with probability one, see \citet{ryzin1969,silverman1978weak} and \citet[p.~16]{durrett2010probability}; for a definition of weak consistency, or convergence in probability, see \citet[p.~53]{durrett2010probability}.
The strong uniform consistency result in Proposition~\ref{thm:consistency} additionally requires that all estimators $\hat{F}^{-1}_Y$, $\hat{F}^{-1}_{X_j}$, $j=1,\ldots, p$, are strongly uniformly consistent on a truncated compact interval $[\delta, 1 - \delta], \delta \to 0_+$. Although not directly used in the proof of Proposition~\ref{thm:consistency} in Appendix~\ref{section:appendB}, the truncation is an essential condition for guaranteeing the strong uniform consistency of all estimators of the inverses of the marginal distributions (i.e.\ estimators of quantile functions), see \citet{cheng1995uniform, van1998bootstrapping, cheng1984almost}. \section{Simulation study}\label{section:simulation} The proposed two-step ahead forward selection algorithms for C- and D-vine based quantile regression from Section~\ref{twostep} are implemented in the statistical language R \citep{Rlanguage}. The D-vine one-step ahead algorithm is implemented in the R package \texttt{vinereg} \citep{vinereg}. The simulation study in \cite{kraus2017d} shows that the D-vine one-step ahead forward selection algorithm performs better than or similarly to other state-of-the-art quantile regression methods: boosting additive quantile regression \citep{Koenker2005quantile, fenske2011identifying}, nonparametric quantile regression \citep{li2013optimal}, semiparametric quantile regression \citep{noh2015semiparametric}, and linear quantile regression \citep{koenker1978regression}. Thus, we use the one-step ahead algorithm as the benchmark method in the simulation study. We consider the simulation settings given below. Each setting is replicated $R = 100$ times. In each simulation replication, we randomly generate $N_{\rm train}$ samples used for fitting the appropriate nonparametric vine based quantile regression models.
Additionally, another $N_{\rm eval} = \frac{1}{2}N_{\rm train}$ samples for Settings (a) -- (f) and $N_{\rm eval} = N_{\rm train}$ for Settings (g), (h) are generated for predicting conditional quantiles from the models. Settings (a) -- (f) are designed to test quantile prediction accuracy of nonparametric C- or D-vine quantile regression in cases where $p \leq N$; hence, we set $N_{\rm train} = 1000 \mbox{ or }300$. Settings (g) and (h) test quantile prediction accuracy in cases where $p > N$; hence, we set $N_{\rm train} = 100$. \begin{enumerate} \item[(a)] Simulation Setting M5 from \citet{kraus2017d}:\\ $$Y = \sqrt{|2X_1 - X_2 + 0.5 |} + (-0.5X_3 + 1)(0.1 X_4^3) + \sigma\varepsilon,$$ with $\varepsilon \sim N(0, 1), \sigma \in \{0.1, 1\}$, $(X_1, X_2, X_3, X_4)^T \sim N_4(0, \Sigma)$, and the $(i,j)$th component of the covariance matrix given as $(\Sigma)_{i,j} = 0.5^{|i - j|}$. \item[(b)] $(Y, X_1, \ldots, X_5)^T$ follows a mixture of two 6-dimensional t copulas with degrees of freedom equal to 3 and mixture probabilities 0.3 and 0.7. Association matrices $R_1$, $R_2$ and marginal distributions are recorded in Table~\ref{table:marginal b}. 
\begin{table}[!htpb] \centering \begin{tabular}{c c} $R_1= \begin{pmatrix} 1 & 0.6 & 0.5 & 0.6 & 0.7 & 0.1 \\ 0.6 & 1 & 0.5 & 0.5 & 0.5 & 0.5 \\ 0.5 & 0.5 & 1 & 0.5 & 0.5 & 0.5 \\ 0.6 & 0.5 & 0.5 & 1 & 0.5 & 0.5 \\ 0.7 & 0.5 & 0.5 & 0.5 & 1 & 0.5 \\ 0.1 & 0.5 & 0.5 & 0.5 & 0.5 & 1 \end{pmatrix} $ & $ R_2 = \begin{pmatrix} 1 & -0.3 & -0.5 & -0.4 & -0.5 & -0.1 \\ -0.3 & 1 & 0.5 & 0.5 & 0.5 & 0.5 \\ -0.5 & 0.5 & 1 & 0.5 & 0.5 & 0.5 \\ -0.4 & 0.5 & 0.5 & 1 & 0.5 & 0.5 \\ -0.5 & 0.5 & 0.5 & 0.5 & 1 & 0.5 \\ -0.1 & 0.5 & 0.5 & 0.5 & 0.5 & 1 \end{pmatrix} $ \\ \end{tabular} \newline \newline \begin{tabular}{c c c c c c } $Y$ & $X_1$ & $X_2$ & $X_3$ & $X_4$ & $X_5$ \\ \hline $N(0, 1)$ & $t_4$ & $N(1, 4)$ & $t_4$ & $N(1, 4)$ & $t_4$ \\ \end{tabular} \caption{Association matrices of the multivariate t-copula and marginal distributions for Setting (b).} \label{table:marginal b} \end{table} \item[(c)] Linear and heteroscedastic \citep{chang2019prediction}:\\ $Y =5(X_1 +X_2 +X_3 +X_4)+10(U_1 +U_2 +U_3 +U_4)\varepsilon,$ where $(X_1, X_2, X_3, X_4)^T \sim N_4(0, \Sigma)$, $\Sigma_{i,j} = 0.5^{I\{i \neq j\}}$, $\varepsilon \sim N(0, 0.5)$, and $U_j,$ $j = 1,\ldots, 4,$ are obtained from the $X_j$'s by the probability integral transform. \item[(d)] Nonlinear and heteroscedastic \citep{chang2019prediction}: \\ $Y = U_1U_2e^{1.8U_3U_4} + 0.5(U_1 + U_2 + U_3 + U_4)\varepsilon ,$ where $U_j$, $j = 1,\ldots, 4,$ are obtained by the probability integral transform from $N_4(0, \Sigma)$, $\Sigma_{i,j} = 0.5^{I\{i \neq j\}}$, and $\varepsilon \sim N(0,0.5)$. \item[(e)] R-vine copula \citep{czado2019analyzing}: $(V, U_1, \ldots, U_4)^T$ follows an R-vine distribution with pair copulas given in Table~\ref{table:Rvinesample}.
\begin{table}[ht] \centering \begin{tabular}{|cc|rcl|ccc|} \hline Tree & Edge & Conditioned & ; & Conditioning & Family & Parameter & Kendall's $\tau$ \\ \hline 1 & 1 & $U_1,U_3$ & ; & & Gumbel & 3.9 & 0.74\\ 1 & 2 & $U_2,U_3$ & ; & & Gauss & 0.9 &0.71 \\ 1 & 3 & $V_{\;}\;,U_3$ & ; & & Gauss & 0.5& 0.33 \\ 1 & 4 & $V_{\;}\;,U_4$ & ; & & Clayton & 4.8 & 0.71\\ \hline 2 & 1 & $V_{\;}\;,U_1$ & ; & $U_3$ & Gumbel(90) & 6.5 & -0.85 \\ 2 & 2 & $V_{\;}\;,U_2$ & ; & $U_3$ & Gumbel(90) & 2.6 & -0.62 \\ 2 & 3 & $U_3,U_4$ & ; & $V$ & Gumbel & 1.9 & 0.48 \\ \hline 3 & 1 & $U_1,U_2$ & ; & $V_{\;}\;,U_3$ & Clayton & 0.9 &0.31 \\ 3 & 2 & $U_2,U_4$ & ; & $V_{\;}\;,U_3$ & Clayton(90) &5.1 &-0.72 \\ \hline 4 & 1 & $U_1,U_4$ & ; & $V_{\;}\;,U_2,U_3$ & Gauss & 0.2 &0.13 \\ \hline \end{tabular} \caption{Pair copulas of the R-vine $C_{V,U_1,U_2,U_3,U_4}$, with their family parameter and Kendall's $\tau$ for Setting (e).} \label{table:Rvinesample} \end{table} \item[(f)] D-vine copula \citep{masterMarija}: $(V,U_1,\ldots, U_5)^T$ follows a D-vine distribution with pair copulas given in Table~\ref{table:Dvinesample}. \begin{table}[ht] \centering \begin{tabular}{|cc|rcl|ccc|} \hline Tree & Edge & Conditioned & ; & Conditioning & Family & Parameter & Kendall's $\tau$ \\ \hline 1 & 1 & $V_{\;}\;,U_1$ & ; & & Clayton & 3.00 & 0.60\\ 1 & 2 & $U_1,U_2$ & ; & & Joe & 8.77 & 0.80\\ 1 & 3 & $U_2,U_3$ & ; & & Gumbel & 2.00 & 0.50\\ 1 & 4 & $U_3,U_4$ & ; & & Gauss & 0.20 & 0.13\\ 1 & 5 & $U_4,U_5$ & ; & & Indep. 
& 0.00 & 0.00\\ \hline 2 & 1 & $V_{\;}\;,U_2$ & ; & $U_1$ & Gumbel & 5.00 & 0.80\\ 2 & 2 & $U_1,U_3$ & ; & $U_2$ & Frank & 9.44 &0.65 \\ 2 & 3 & $U_2,U_4$ & ; & $U_3$ & Joe & 2.78 & 0.49\\ 2 & 4 & $U_3,U_5$ & ; & $U_4$ & Gauss & 0.20 & 0.13 \\ \hline 3 & 1 & $V_{\;}\;,U_3$ & ; & $U_1,U_2$ & Joe & 3.83 & 0.60 \\ 3 & 2 & $U_1,U_4$ & ; & $U_2,U_3$ & Frank & 6.73 & 0.55\\ 3 & 3 & $U_2,U_5$ & ; & $U_3,U_4$ & Gauss & 0.29 & 0.19\\ \hline 4 & 1 & $V_{\;}\;,U_4$ & ; & $U_1,U_2,U_3$ & Clayton & 2.00 &0.50\\ 4 & 2 & $U_1,U_5$ & ; & $U_2,U_3,U_4$ & Gauss & 0.09 &0.06 \\ \hline 5 & 1 & $V_{\;}\;,U_5$ & ; & $U_1,U_2,U_3,U_4$ & Indep. & 0.00 &0.00\\ \hline \end{tabular} \caption{Pair copulas of the D-vine $C_{V,U_1,U_2,U_3,U_4,U_5}$, with their family parameter and Kendall's $\tau$ for Setting (f).} \label{table:Dvinesample} \end{table} \item[(g)] Similar to Setting (a),\\ $$Y = \sqrt{|2X_1 - X_2 + 0.5 |} + (-0.5X_3 + 1)(0.1 X_4^3) + (X_5, \ldots, X_{110})(0, \ldots, 0)^T + \sigma\varepsilon,$$ where $(X_1, \ldots, X_{110})^T \sim N_{110}(0, \Sigma)$ with the $(i, j)$th component of the covariance matrix $(\Sigma)_{i, j} = 0.5^{|i - j|}$, $\varepsilon \sim N(0, 1)$, and $\sigma \in \{0.1, 1\}$. \item[(h)] Similar to (g),\\ $Y = (X_1^{3}, \ldots, X_{110}^{3}) \bm{\beta} + \varepsilon ,$ where $(X_1, \ldots, X_{10})^T \sim N_{10}(0, \Sigma_A)$ with the $(i, j)$th component of the covariance matrix $(\Sigma_A)_{i, j} = 0.8^{|i - j|}$, $(X_{11}, \ldots, X_{110})^T \sim N_{100}(0, \Sigma_B)$ with $(\Sigma_B)_{i, j} = 0.4^{|i - j|}$. The first 10 entries of $\bm\beta$ form a descending sequence from $2$ to $1.1$ in steps of $0.1$, and the rest are equal to 0. We assume $\varepsilon \sim N(0, \sigma)$ and $\sigma \in \{0.1, 1\}$.
\end{enumerate} \noindent Since the true regression quantiles are difficult to obtain in most settings, we consider the averaged check loss \citep{kraus2017d, komunjer2013quantile} and the interval score \citep{chang2019prediction, gneiting2007strictly}, instead of the out-of-sample mean squared error in \citet{kraus2017d}, to evaluate the performance of the estimation methods. For a chosen $\alpha \in (0,1)$, the averaged check loss is defined as \begin{equation}\label{eq:check loss} \widehat{\mbox{CL}}_\alpha = \frac{1}{R} \sum_{r = 1}^R \bigg\{ \frac{1}{N_{\rm eval}} \sum_{n = 1}^{N_{\rm eval}}\Big\{ \gamma_\alpha \left(Y_{r,n}^{\rm eval} - \hat{q}_{\alpha}(X_{r, n}^{\rm eval}) \right) \Big\} \bigg\}, \end{equation} where $\gamma_\alpha(u) = u\left(\alpha - I\{u < 0\}\right)$ is the check loss function. \noindent The interval score, for the $(1 - \alpha) \times 100\%$ prediction interval, is defined as \begin{eqnarray}\label{eq:interval score} \lefteqn{\widehat{\mbox{IS}}_\alpha = \frac{1}{R} \sum_{r = 1}^R \bigg\{ \frac{1}{N_{\rm eval}} \sum_{n = 1}^{N_{\rm eval}}\Big\{ \big(\hat{q}_{1 - \alpha/2}(X_{r, n}^{\rm eval}) - \hat{q}_{\alpha/2}(X_{r, n}^{\rm eval})\big) } \\ \nonumber &&+ \frac{2}{\alpha} \big( \hat{q}_{\alpha/2}(X_{r, n}^{\rm eval}) - Y_{r, n}^{\rm eval}\big) I\{Y_{r, n}^{\rm eval} \leq \hat{q}_{\alpha/2}(X_{r, n}^{\rm eval})\}\\ &&+ \frac{2}{\alpha}\big( Y_{r, n}^{\rm eval} - \hat{q}_{1 - \alpha/2}(X_{r,n}^{\rm eval})\big) I\{Y_{r, n}^{\rm eval} > \hat{q}_{1 - \alpha/2}(X_{r, n}^{\rm eval})\} \Big\} \bigg\}, \nonumber \end{eqnarray} where smaller interval scores indicate better performance.
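For a single replication, both evaluation measures are straightforward to compute. Below is a minimal Python sketch, assuming the standard check (pinball) loss $\gamma_\alpha(u) = u(\alpha - I\{u<0\})$ and the interval score of \citet{gneiting2007strictly}; the outer average over the $R$ replications is omitted:

```python
import numpy as np

def check_loss(y, q_hat, alpha):
    """Averaged check loss for one replication:
    gamma_alpha(u) = u * (alpha - 1{u < 0})."""
    u = np.asarray(y, float) - np.asarray(q_hat, float)
    return np.mean(u * (alpha - (u < 0)))

def interval_score(y, q_lo, q_hi, alpha):
    """Interval score for the (1 - alpha) prediction interval [q_lo, q_hi]:
    interval width plus 2/alpha times the distance of observations
    falling outside the interval; smaller is better."""
    y, q_lo, q_hi = (np.asarray(a, float) for a in (y, q_lo, q_hi))
    width = q_hi - q_lo
    below = (2 / alpha) * (q_lo - y) * (y < q_lo)
    above = (2 / alpha) * (y - q_hi) * (y > q_hi)
    return np.mean(width + below + above)

# With alpha = 0.05, an observation inside [-1, 1] contributes only the
# width 2; one at 2 adds a penalty of (2 / 0.05) * (2 - 1) = 40.
```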
\begin{table}[!htp] \centering \def\arraystretch{1.1} \begin{tabular}{|c|c|c|c|c|c||c|c|c|c|} \hline Setting &Model & $\widehat{\mbox{IS}}_{0.05}$ & $\widehat{\mbox{CL}}_{0.05}$ & $\widehat{\mbox{CL}}_{0.5}$ & $\widehat{\mbox{CL}}_{0.95}$ & $\widehat{\mbox{IS}}_{0.05}$ & $\widehat{\mbox{CL}}_{0.05}$ & $\widehat{\mbox{CL}}_{0.5}$ & $\widehat{\mbox{CL}}_{0.95}$ \\ \hline && \multicolumn{4}{c||}{$N_{{\rm train}}=300$} & \multicolumn{4}{c|}{$N_{{\rm train}}=1000$}\\ \cline{1-10} (a) & D-vine One-step & 55.54 & 0.66 & 0.16 & 0.51 & 55.89 & 0.67 & 0.15 & 0.50 \\ $\sigma = 0.1$ & D-vine Two-step & 43.33 & 0.47 & \cellcolor{mygray}0.10 & 0.41& 40.74 & 0.45 & \cellcolor{mygray}0.09 & \cellcolor{mygray}0.37 \\ **& C-vine One-step & 53.51 & 0.64 & 0.16 & 0.49 & 54.52 & 0.66 & 0.15 & 0.49 \\ & C-vine Two-step & \cellcolor{mygray}42.01 & \cellcolor{mygray}0.45 & \cellcolor{mygray}0.10 & \cellcolor{mygray}0.40 & \cellcolor{mygray}40.04 & \cellcolor{mygray}0.44 & \cellcolor{mygray}0.09 & \cellcolor{mygray}0.37 \\ \cline{1-10} (a) &D-vine One-step & 154.35 & 1.63 & \cellcolor{mygray}0.45 & 1.62 & 162.12 & 1.70 & 0.43 & 1.66 \\ $\sigma = 1$ &D-vine Two-step & 148.53 & 1.57 & \cellcolor{mygray}0.45 & \cellcolor{mygray}1.56 & \cellcolor{mygray}156.77 & \cellcolor{mygray}1.63 & \cellcolor{mygray}0.42& \cellcolor{mygray}1.62 \\ **& C-vine One-step & 151.60 & 1.61 &\cellcolor{mygray} 0.45 & 1.60 & 160.78 & 1.68 & 0.43 & 1.65 \\ & C-vine Two-step & \cellcolor{mygray}148.41 & \cellcolor{mygray}1.56 & \cellcolor{mygray}0.45 & \cellcolor{mygray} 1.56 & 156.79 & \cellcolor{mygray}1.63 & \cellcolor{mygray}0.42 & \cellcolor{mygray}1.62 \\ \cline{1-10} (b) & D-vine One-step & \cellcolor{mygray}118.75 & \cellcolor{mygray}1.29 & 0.42 & \cellcolor{mygray}1.30 & 125.33 & 1.37 & \cellcolor{mygray}0.40 & \cellcolor{mygray}1.36 \\ *&D-vine Two-step & 119.10 & 1.30 & 0.42 & \cellcolor{mygray} 1.30 & 125.24 & \cellcolor{mygray}1.36 & \cellcolor{mygray} 0.40 & \cellcolor{mygray}1.36 \\ & C-vine One-step & 119.08 &
1.30 & \cellcolor{mygray}0.41 & \cellcolor{mygray} 1.30& \cellcolor{mygray}125.12 & \cellcolor{mygray}1.36 & \cellcolor{mygray}0.40 & \cellcolor{mygray}1.36 \\ & C-vine Two-step & 118.90 & 1.30 & 0.42 & \cellcolor{mygray}1.30 & 125.30 & \cellcolor{mygray}1.36 & \cellcolor{mygray}0.40 & \cellcolor{mygray}1.36 \\ \cline{1-10} (c) &D-vine One-step & 2908.90 & 30.54 & \cellcolor{mygray}8.55 & 30.42 & 3064.78 & 31.69 & \cellcolor{mygray}8.15 & 31.47 \\ **&D-vine Two-step & 2853.52 & 30.21 & 8.70 & 29.95 & \cellcolor{mygray}3041.95 & \cellcolor{mygray}31.61 & 8.20 & 31.26\\ & C-vine One-step & 2859.23 & 30.24 & 8.59 & 29.95 & 3046.52 & 31.64 & 8.18 & 31.25 \\ & C-vine Two-step & \cellcolor{mygray}2850.10 & \cellcolor{mygray}30.19 & 8.64 & \cellcolor{mygray}29.84 & 3042.46 & 31.62 & 8.20 & \cellcolor{mygray}31.23 \\ \cline{1-10} (d) &D-vine One-step & 86.40 & 0.92 & \cellcolor{mygray}0.24 & 0.91 & 91.11 & \cellcolor{mygray}0.96 &\cellcolor{mygray} 0.22 & 0.95\\ **&D-vine Two-step & 83.54 & \cellcolor{mygray}0.90 & \cellcolor{mygray}0.24 & 0.88 & 89.56 & \cellcolor{mygray}0.96 & \cellcolor{mygray}0.22 & \cellcolor{mygray}0.92 \\ & C-vine One-step & 84.99 & 0.91 & \cellcolor{mygray}0.24 & 0.90 & 90.40 &\cellcolor{mygray} 0.96 & \cellcolor{mygray}0.22 & 0.94 \\ & C-vine Two-step & \cellcolor{mygray}83.33 & \cellcolor{mygray}0.90 &\cellcolor{mygray} 0.24 & \cellcolor{mygray}0.87 & \cellcolor{mygray} 89.47 & \cellcolor{mygray}0.96 & \cellcolor{mygray}0.22 & \cellcolor{mygray}0.92 \\ \cline{1-10} (e) &D-vine One-step & 10.59 & 0.11 & \cellcolor{mygray}0.03 & 0.11 & 10.49 & 0.11 & 0.03 & 0.11 \\ *&D-vine Two-step & 10.32 & \cellcolor{mygray}0.10 & \cellcolor{mygray} 0.03 & 0.11 & 10.26 & \cellcolor{mygray}0.09 & \cellcolor{mygray}0.02 & 0.11 \\ & C-vine One-step & \cellcolor{mygray}10.23 & 0.11 & \cellcolor{mygray}0.03 & \cellcolor{mygray}0.10 & \cellcolor{mygray}10.02 & 0.10 &\cellcolor{mygray} 0.02 & \cellcolor{mygray}0.10 \\ & C-vine Two-step & 10.35 & \cellcolor{mygray} 0.10 
& \cellcolor{mygray}0.03 & 0.11 & 10.33 & 0.10 & \cellcolor{mygray}0.02 & 0.11 \\ \cline{1-10} (f) &D-vine One-step & 13.79 & 0.16 & 0.04 & 0.14 &13.70 & 0.16 & 0.04 & 0.14\\ **&D-vine Two-step & \cellcolor{mygray}8.44 & \cellcolor{mygray}0.09 & \cellcolor{mygray}0.02 & \cellcolor{mygray}0.08 & \cellcolor{mygray}8.28 & \cellcolor{mygray}0.09 & \cellcolor{mygray}0.02 & \cellcolor{mygray}0.08 \\ & C-vine One-step & 12.62 & 0.14 & 0.04 & 0.13 & 12.23 & 0.13 & 0.04 & 0.13 \\ & C-vine Two-step & 9.09 & 0.10 &\cellcolor{mygray} 0.02 & 0.09 &8.93 & \cellcolor{mygray}0.09 &\cellcolor{mygray} 0.02 & \cellcolor{mygray}0.08 \\ \hline \end{tabular} \caption{Out-of-sample predictions $\widehat{\mbox{IS}}_{0.05}$, $\widehat{\mbox{CL}}_{0.05}$, $\widehat{\mbox{CL}}_{0.5}$, $\widehat{\mbox{CL}}_{0.95}$ for Settings (a) -- (f) with $N_{{\rm train}}=300$ and $N_{{\rm train}}=1000$. Lower values, indicating better performance, are highlighted in gray. With ** we denote the scenarios in which there is an improvement through the second step and with * we denote scenarios in which the models perform similarly.} \label{setting300} \end{table} \begin{table}[!h] \centering \def\arraystretch{1.1} \begin{tabular}{|c|c|c|c|c||c|c|c|c|} \hline Model & $\widehat{\mbox{IS}}_{0.05}$ & $\widehat{\mbox{CL}}_{0.05}$ & $\widehat{\mbox{CL}}_{0.5}$ & $\widehat{\mbox{CL}}_{0.95}$ & $\widehat{\mbox{IS}}_{0.05}$ & $\widehat{\mbox{CL}}_{0.05}$ & $\widehat{\mbox{CL}}_{0.5}$ & $\widehat{\mbox{CL}}_{0.95}$ \\ \hline & \multicolumn{4}{c||}{(g), $\sigma = 0.1$ *} & \multicolumn{4}{c|}{(g), $\sigma = 1$ **}\\\cline{2-9} D-vine One-step &\cellcolor{mygray}19.63 & 0.26 & \cellcolor{mygray}0.25 & \cellcolor{mygray}0.23 & 53.38 & 0.69 & 0.67 & 0.65 \\ D-vine Two-step & 20.48 & 0.26 & 0.26 & 0.25 & \cellcolor{mygray}52.17 & 0.68 & \cellcolor{mygray}0.65 &\cellcolor{mygray}0.63 \\ C-vine One-step & 19.73 & \cellcolor{mygray}0.25 & \cellcolor{mygray}0.25 & 0.24 & 53.62 & 0.69 & 0.67 & 0.65 \\ C-vine Two-step & 19.79 &
\cellcolor{mygray}0.25 & \cellcolor{mygray}0.25 & 0.25 & 52.35& \cellcolor{mygray}0.67 & \cellcolor{mygray}0.65 & 0.64\\ \hline & \multicolumn{4}{c||}{(h), $\sigma = 0.1$ **} & \multicolumn{4}{c|}{(h), $\sigma = 1$ **}\\ \cline{2-9} D-vine One-step &558.36 & 6.92 & 6.98& 7.04 & 554.18 & 6.87 & 6.93 & 6.99 \\ D-vine Two-step & 529.51& 6.46 & 6.62& 6.78 & 531.30 & 6.64 & 6.64 & 6.64 \\ C-vine One-step & 514.08& 6.05 & 6.43 & 6.81 & 512.96 & 6.39 & 6.41 & 6.44\\ C-vine Two-step & \cellcolor{mygray}479.66 & \cellcolor{mygray}5.87& \cellcolor{mygray}6.00 & \cellcolor{mygray}6.12 & \cellcolor{mygray}483.92 & \cellcolor{mygray}6.05 &\cellcolor{mygray}6.05 & \cellcolor{mygray}6.05 \\ \hline \end{tabular} \caption{Out-of-sample predictions $\widehat{\mbox{IS}}_{0.05}$, $\widehat{\mbox{CL}}_{0.05}$, $\widehat{\mbox{CL}}_{0.5}$, $\widehat{\mbox{CL}}_{0.95}$ for Settings (g) -- (h) with $N_{\rm train}=100$. Lower values, indicating better performance, are highlighted in gray. With ** we denote the scenarios in which there is an improvement through the second step and with * we denote scenarios in which the models perform similarly.} \label{settingbig} \end{table} For Settings (a) -- (f), the estimation procedure for the two-step ahead C- or D-vine quantile regression follows exactly Section~\ref{twostep}, where the candidate sets at each step include all possible remaining predictors. The additional variable reduction described in Section~\ref{twostepred} is not applied; thus, we calculate all possible conditional log-likelihoods in each step. In contrast, due to the computational burden in Settings (g) and (h), we set the number of candidates to $k=5$ and apply the additional variable reduction from Section~\ref{twostepred}. The chosen subset contains 20\% of all possible choices, where 10\% are the predictors having the highest Pearson's partial correlation with the response and the remaining 10\% are chosen randomly from the remaining predictors.
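The randomized reduction used for Settings (g) and (h) can be sketched as follows. This is an illustrative Python sketch (the actual implementation is in R), where \texttt{partial\_corrs} holds the estimated partial correlations of the remaining predictors and the default fractions mirror the 10\% + 10\% choice above:

```python
import numpy as np

def reduce_candidates(partial_corrs, rng, top_frac=0.10, rand_frac=0.10):
    """Keep the top_frac indices with the largest |partial correlation|
    and add rand_frac indices drawn at random from the rest."""
    partial_corrs = np.asarray(partial_corrs, float)
    p = len(partial_corrs)
    n_top = max(1, int(top_frac * p))
    n_rand = max(1, int(rand_frac * p))
    order = np.argsort(-np.abs(partial_corrs))
    top, rest = order[:n_top], order[n_top:]
    rand = rng.choice(rest, size=min(n_rand, len(rest)), replace=False)
    return np.concatenate([top, rand])

# 20 remaining predictors; indices 7 and 3 have the strongest partial
# correlations (in absolute value) and are therefore always kept.
pcs = np.zeros(20)
pcs[7], pcs[3] = 0.9, -0.8
subset = reduce_candidates(pcs, np.random.default_rng(0))
```

With 20 remaining predictors this keeps 2 top-ranked and 2 randomly drawn indices, so only 4 instead of 20 conditional log-likelihoods are evaluated for each candidate.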
Performance of the C- and D-vine two-step ahead quantile regression is compared with that of the C- and D-vine one-step ahead quantile regression. The performance of the competing methods, evaluated by the averaged check loss at the 5\%, 50\% and 95\% quantile levels and by the interval score for the 95\% prediction interval, is recorded in Tables~\ref{setting300} and \ref{settingbig}. All densities are estimated nonparametrically for a fair comparison. Table~\ref{setting300} shows that the C- and D-vine two-step ahead regression models outperform the C- and D-vine one-step ahead regression models in five out of seven settings, the exceptions being Settings (b) and (e), in which all models perform quite similarly to each other. When comparing regression models within the same vine copula class, the C-vine two-step ahead regression models outperform the C-vine one-step ahead models in five out of seven settings. Similarly, the D-vine two-step ahead models outperform the D-vine one-step ahead models in six out of seven scenarios, with Setting (b) as the only exception. In scenarios where there is no significant improvement through the second step, the one-step and two-step ahead approaches perform very similarly. This implies that the two-step ahead vine based quantile regression considerably improves upon the one-step ahead quantile regression. Table~\ref{settingbig} indicates that in the high-dimensional settings, where the two-step ahead quantile regression is used in combination with the additional variable reduction from Section~\ref{twostepred}, the two-step ahead models outperform the one-step ahead models in three out of four simulation settings. In Setting (g), all models show similar performance: with standard deviation $\sigma =0.1$, the D-vine one-step ahead model outperforms the other models, while with $\sigma=1$, the D-vine two-step ahead model shows a better performance.
In Setting (h), we see a significant improvement of the two-step ahead models compared to the one-step ahead models. For both $\sigma =0.1$ and $\sigma =1$, the best performing model is the C-vine two-step ahead model. These results indicate that the newly proposed method improves the accuracy of the one-step ahead quantile regression in high dimensions, even when the computational complexity of the two-step ahead model is eased by using a small number of candidates relative to the number of predictors. \noindent The proposed two-step algorithms are computationally more intensive than the one-step algorithms. We present the averaged computation time over $R =100$ replications on 100 parallel cores (Xeon Gold 6140 CPU @ 2.30 GHz) in Settings~(g) and (h), where $p > N_{\rm train}$, for the one-step ahead and the two-step ahead approach. The high-dimensional settings have similar computational times, since the computational intensity depends on the number of pair copula estimations and the number of candidates, which are the same for Settings~(g) and (h). Hence, we only report the averaged computational times for Settings~(g) and (h). The average computation time in minutes for the one-step ahead (C- and D-vine) approach is 83.01, in contrast to 200.28 for the two-step ahead (C- and D-vine) approach. With the variable reduction from Section~\ref{twostepred}, the two-step algorithms roughly double the time consumption of the one-step algorithms in exchange for prediction accuracy. \section{Real data examples}\label{section: real data} We test the proposed methods on two real data sets: the Concrete data set from \citet{yeh1998modeling}, corresponding to $p \leq N$, and the Riboflavin data set from \citet{buhlmann2011statistics}, corresponding to $p > N$.
For both, performance of the four competing algorithms is evaluated by the averaged check loss defined in \eqref{eq:check loss} at the 5\%, 50\% and 95\% quantile levels and by the 95\% prediction interval score defined in \eqref{eq:interval score}, obtained by randomly splitting the data set into training and evaluation sets 100 times. \subsection{Concrete data set} \noindent The Concrete data set was initially used in \citet{yeh1998modeling} and is available at the UCI Machine Learning Repository \citep{UCIrep}. The data set has in total 1030 samples. Our objective is quantile prediction of the concrete compressive strength, which is a highly nonlinear function of age and ingredients. The predictors are age (\verb|AgeDay|, counted in days) and seven physical measurements of the concrete ingredients (given in kg per $m^3$ of mixture): cement (\verb|CementComp|), blast furnace slag (\verb|BlastFur|), fly ash (\verb|FlyAsh|), water (\verb|WaterComp|), superplasticizer (\verb|Superplastizer|), coarse aggregate (\verb|CoarseAggre|) and fine aggregate (\verb|FineAggre|). We randomly split the data set into a training set with 830 samples and an evaluation set with 200 samples; the random splitting is repeated 100 times. Performance of the proposed C- and D-vine two-step ahead quantile regression, compared to the C- and D-vine one-step ahead quantile regression, is evaluated by the measures reported in Table~\ref{concrete} after 100 repetitions of fitting the models. Given the small number of predictors, it is not unexpected that the results of the four algorithms are less distinct than in most simulation settings. Still, there is an improvement in the performance of the two-step ahead approach compared to the one-step ahead approach for both C- and D-vine based models. Also, the C-vine model seems more appropriate for modeling the dependence structure in this data set. 
Finally, out of all models, the C-vine two-step ahead algorithm performs best in terms of the out-of-sample predictions $\widehat{\mbox{IS}}_{0.05}$, $\widehat{\mbox{CL}}_{0.05}$, $\widehat{\mbox{CL}}_{0.5}$ and $\widehat{\mbox{CL}}_{0.95}$ on the Concrete data set, as seen in Table~\ref{concrete}. \begin{table}[!h] \centering \def\arraystretch{1.2} \begin{tabular}{|l|c||c|c|c|} \hline Model & $\widehat{\mbox{IS}}_{0.05}$ & $\widehat{\mbox{CL}}_{0.05}$ & $\widehat{\mbox{CL}}_{0.5}$ & $\widehat{\mbox{CL}}_{0.95}$ \\ \hline D-vine One-step & 1032.32 & 10.75 & 2.76 & 10.52 \\ D-vine Two-step & 987.10 & 10.54 & 2.78 & 9.82 \\ C-vine One-step & 976.75 & 10.65 & 2.70 & \cellcolor{mygray} 9.45 \\ \rowcolor{mygray} C-vine Two-step & 967.00 & 10.52 & 2.64 & 9.45 \\ \hline \end{tabular} \caption{Concrete data set: Out-of-sample predictions $\widehat{\mbox{IS}}_{0.05}$, $\widehat{\mbox{CL}}_{0.05}$, $\widehat{\mbox{CL}}_{0.5}$, $\widehat{\mbox{CL}}_{0.95}$. The best performing model is highlighted in gray.} \label{concrete} \end{table} \noindent Figure~\ref{figure:quantconcrete} shows the marginal effect plots, based on the fitted quantiles from the C-vine two-step model, for the three most influential predictors. The marginal effect of a predictor is its expected impact on the quantile estimator, where the expectation is taken over all other predictors. It is estimated using all fitted conditional quantiles and smoothed over the predictors considered. \begin{figure} \caption{Marginal effect plots for the 3 most influential predictors on the concrete compressive strength for $\alpha$ values of $0.05$ (red colour), $0.5$ (green colour) and $0.95$ (blue colour).} \label{figure:quantconcrete} \end{figure} \subsection{Riboflavin data set} The Riboflavin data set, available in the R package \texttt{hdi}, is used for quantile prediction of the log-transformed production rate of Bacillus subtilis based on the log-transformed expression levels of 4088 genes. 
To reduce the computational burden, we perform a pre-selection of the top 100 genes with the highest variance \citep{buhlmann2011statistics}, resulting in a subset with $p = 100$ log-transformed gene expressions and $N = 71$ samples. The random splitting of this subset into a training set with 61 samples and an evaluation set with 10 samples is repeated 100 times. For the C- and D-vine two-step ahead quantile regression, the number of candidates is set to $k = 10$. Additionally, to further reduce the computational burden, the additional variable selection from Section~\ref{twostepred} is applied with the chosen subset containing 25\% of all possible choices, where 15\% are the predictors having the highest partial correlation with the log-transformed Bacillus subtilis production rate and the remaining 10\% are chosen randomly from the remaining predictors. Performance of the competing quantile regression models is reported in Table~\ref{riboflavin}. The proposed C-vine two-step ahead quantile regression is the best performing model and outperforms both the D-vine one-step ahead quantile regression from \citet{kraus2017d} and the C-vine one-step ahead quantile regression by a large margin. The second best performing method is the D-vine two-step ahead model which, while performing slightly worse than the C-vine two-step ahead model, also clearly outperforms both the C-vine and D-vine one-step ahead models. 
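The variable reduction used for this data set can be sketched as follows; this is a hypothetical helper (the function name and the fixed seed are ours), with the 15\%/10\% split and the partial-correlation ranking taken from the description above.

```python
import random

def select_candidates(predictors, partial_corr, frac_top=0.15, frac_rand=0.10, seed=0):
    """Keep the predictors with the highest absolute partial correlation with
    the response, plus a randomly chosen portion of the remaining ones."""
    n = len(predictors)
    n_top = round(frac_top * n)
    n_rand = round(frac_rand * n)
    # rank all predictors by absolute partial correlation, largest first
    ranked = sorted(predictors, key=lambda p: abs(partial_corr[p]), reverse=True)
    top, rest = ranked[:n_top], ranked[n_top:]
    rng = random.Random(seed)  # fixed seed for reproducibility
    return top + rng.sample(rest, min(n_rand, len(rest)))
```

The randomly added predictors give variables with a low partial correlation a chance to enter the model nonetheless.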
\begin{table}[!htp] \centering \def\arraystretch{1.2} \begin{tabular}{|c|c|c|c|c|} \hline Model & $\widehat{\mbox{IS}}_{0.05}$ & $\widehat{\mbox{CL}}_{0.05}$ & $\widehat{\mbox{CL}}_{0.5}$ & $\widehat{\mbox{CL}}_{0.95}$ \\ \hline D-vine One-step & 33.83 & 0.44 & 0.42 & 0.41 \\ D-vine Two-step & 30.57 & 0.44 & 0.38 & 0.33 \\ C-vine One-step & 34.52 & 0.49 & 0.43 & 0.38 \\ C-vine Two-step & \cellcolor{mygray}28.59 & \cellcolor{mygray}0.41 & \cellcolor{mygray}0.36 & \cellcolor{mygray}0.30 \\ \hline \end{tabular} \caption{Riboflavin data set: Out-of-sample predictions $\widehat{\mbox{IS}}_{0.05}$, $\widehat{\mbox{CL}}_{0.05}$, $\widehat{\mbox{CL}}_{0.5}$, $\widehat{\mbox{CL}}_{0.95}$. The best performing model is highlighted in gray.} \label{riboflavin} \end{table} Since the predictors enter the C- and D-vine models in descending order of their contribution to maximizing the conditional log-likelihood, this order indicates the influence of the predictors on the response variable. It is often of practical interest to know which gene expressions are most important for prediction. Since the random splitting of the subset is repeated $R = 100$ times, the gene expressions are ranked sequentially: at each position of the ranking we choose the gene expression that appears most frequently at that position in the fitted orders, excluding the gene expressions chosen at the previous positions. For instance, the most important gene expression is the one most frequently ranked first; the second most important gene is the one most frequently appearing second in the order, excluding the most important gene selected in the previous step. The top ten most influential gene expressions according to the C- and D-vine one- and two-step ahead models are recorded in Table~\ref{table: top 10 gene serial numer}. 
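The sequential ranking just described can be sketched as follows; this is a hypothetical re-implementation, assuming each replication returns its fitted predictor order as a list.

```python
from collections import Counter

def rank_by_position(orders, top=10):
    """Position k of the ranking is filled by the predictor appearing most
    frequently at position k across the replications, excluding predictors
    already ranked at earlier positions."""
    ranking = []
    n_pos = max(len(o) for o in orders)
    for k in range(min(top, n_pos)):
        counts = Counter(o[k] for o in orders
                         if len(o) > k and o[k] not in ranking)
        if not counts:
            break
        ranking.append(counts.most_common(1)[0][0])
    return ranking
```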
\begin{table}[!h] \centering \def\arraystretch{1.2} \begin{adjustbox}{max width=\textwidth, keepaspectratio} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|} \hline Model/Position & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 \\ \hline D-vine One-step & GGT & YCIC & MTA & RPSE & YVAK & THIK & ANSB & SPOVB & YVZB & YQJB \\ D-vine Two-step & MTA & RPSE & THIK & YMFE & YCIC & sigM & PGM & YACC & YVQF & YKPB \\ C-vine One-step & GGT & YCIC & MTA & RPSE & HIT & BFMBAB & PHRC & YBAE & PGM & YHEF \\ C-vine Two-step & MTA & RPSE & THIK & YCIC & YURU & PGM & sigM & YACC & YKRM & ASNB\\ \hline \end{tabular} \end{adjustbox} \caption{The 10 most influential gene expressions on the conditional quantile function, ranked based on their position in the order.} \label{table: top 10 gene serial numer} \end{table} Figure~\ref{figure:ribo1} shows the marginal effect plots, based on the fitted quantiles from the C-vine two-step model, for the 10 most influential predictors on the log-transformed Bacillus subtilis production rate. \begin{figure} \caption{Marginal effect plots for the 10 most influential predictors on the log-transformed Bacillus subtilis production rate for $\alpha = 0.5$.} \label{figure:ribo1} \end{figure} \section{Summary and discussion}\label{section:disscussion} In this paper, we introduce a two-step ahead forward selection algorithm for nonparametric C- and D-vine copula based quantile regression. The inclusion of future information, obtained by considering the next tree in the two-step ahead algorithm, yields a significantly less greedy sequential selection procedure than the existing one-step ahead algorithm for D-vine based quantile regression of \citet{kraus2017d}. We extend the vine-based quantile regression framework to include C-vine copulas, providing an additional choice for the dependence structure. Further, for the first time, nonparametric bivariate copulas are used to construct vine copula-based quantile regression models. 
The nonparametric estimation overcomes the problem of possible family misspecification in the parametric estimation of bivariate copulas and allows for even more flexibility in dependence estimation. Additionally, under mild regularity conditions, the nonparametric conditional quantile estimator is shown to be consistent.\\ \noindent The extensive simulation study, including several different settings and data sets with different dimensions, strengths of dependence and tail dependencies, shows that the two-step ahead algorithm outperforms the one-step ahead algorithm in most scenarios. The results for the Concrete and Riboflavin data sets are especially interesting, as the C-vine two-step ahead algorithm shows a significant improvement over the other algorithms. These findings provide strong evidence for the need to model the dependence structure by a C-vine copula. In addition, the two-step ahead algorithm allows controlling the computational intensity independently of the data dimensions, through the number of candidate predictors and the additional variable selection discussed in Section 5. Thus, fitting vine-based quantile regression models in high dimensions becomes feasible. As seen in several simulation settings, there is a significant gain from introducing dependence structures other than the D-vine based quantile regression. A further research direction is the development of similar forward selection algorithms for R-vine tree structures while optimizing the conditional log-likelihood.\\ \noindent At each step of the vine building stage, we compare equal-sized models with the same number of variables, and the conditional log-likelihood is suited for such a comparison. For other questions, such as choosing between a C-vine, D-vine or R-vine structure, information criteria might come in handy. 
When maximum likelihood estimation is employed at all stages, the Akaike information criterion (AIC) \citep{Akaike73}, the Bayesian information criterion (BIC) \citep{Schwarz78} and the focused information criterion (FIC) \citep{ClaeskensHjort2003} can be used immediately. \citet{KoHjortHobaekHaff2019} studied FIC and AIC specifically for the selection of parametric copulas. The copula information criterion in the spirit of the Akaike information criterion by \citet{GronnebergHjort2014} can be used for selection among copula models with empirically estimated margins, while \citet{KoHjort2019} studied such a criterion for parametric copula models. We plan a deeper investigation of the use of information criteria for nonparametrically estimated copulas, and for vines in particular. Such a study is beyond the scope of this paper, but could also address stopping criteria for building vines. \noindent Nonparametrically estimated vines offer considerable flexibility; their parametric counterparts, on the other hand, enjoy simplicity. An interesting route for further research is to combine parametric and nonparametric components in the construction of the vines in an efficient way. The benefit of such a combination should be made tangible through some criterion, so that guidance can be provided about which components should be modeled nonparametrically and which are best modeled parametrically. For some types of models, such a choice between a parametric and a nonparametric model has been investigated by \citet{JullumHjort2017} via the focused information criterion. This and alternative methods taking the effective degrees of freedom into account are worth investigating further for vine copula models. \section*{Acknowledgments} We would like to thank the editor and the two referees for their comments, which helped to improve the manuscript. 
This work was supported by the Deutsche Forschungsgemeinschaft [DFG CZ 86/6-1], the Research Foundation Flanders and KU Leuven internal fund C16/20/002. The resources and services used in this work were provided by the VSC (Flemish Supercomputer Center), funded by the Research Foundation-Flanders (FWO) and the Flemish Government. \appendix \section{Construction of the transformation local likelihood estimator of the copula density}\label{section:appendA} Let the $N\times 2$ matrix of transformed samples be \begin{equation}\label{eq:transformed sample} D = (S, T), \end{equation} with rows $D_n = \big(S_n = \Phi^{-1}(U_i^{(n)}), T_n = \Phi^{-1}(U_j^{(n)})\big)$, $n = 1, \ldots, N$, where $\Phi$ denotes the cumulative distribution function of the standard Gaussian distribution. The logarithm of the density $f_{S, T}$ of the transformed samples $(S_n, T_n)$, $n = 1, \ldots, N$, is approximated locally by a bivariate polynomial expansion $P_{\bms{a}_m}$ of order $m$ with intercept $\tilde a_{m,0}$, yielding the approximation $$ \tilde f_{S, T}(\Phi^{-1}(u_i^{(n)}), \Phi^{-1}(u_j^{(n)})) = \exp\big\{\tilde a_{m, 0}(\Phi^{-1}(u_i^{(n)}), \Phi^{-1}(u_j^{(n)})) \big\}. $$ The transformation local likelihood estimator is then defined as \begin{equation}\label{eq:kernel estimator} \tilde c (u_i^{(n)}, u_j^{(n)}) = \frac{\tilde f_{S, T}(\Phi^{-1}(u_i^{(n)}), \Phi^{-1}(u_j^{(n)}))}{\phi(\Phi^{-1}(u_i^{(n)})) \phi(\Phi^{-1}(u_j^{(n)}))}. \end{equation} To obtain the local polynomial approximation, we need a kernel function $\bm K$ with a $2\times 2$ bandwidth matrix $\bm B_N$. 
For some pair $(\check s, \check t)$ close to $(s, t)$, $\log f_{S T}(\check s, \check t)$ is assumed to be well approximated, locally, by a polynomial, for instance with $m = 1$ (log-linear) \begin{eqnarray*} P_{\bms a_1}(\check s - s, \check t - t) = a_{1, 0}(s, t) + a_{1, 1}(s, t) (\check s - s) + a_{1, 2}(s, t) (\check t - t), \end{eqnarray*} or $m = 2$ (log-quadratic) \begin{eqnarray*} \lefteqn{P_{\bms a_2}(\check s - s, \check t - t) = a_{2, 0}(s, t) + a_{2, 1}(s, t) (\check s - s) + a_{2, 2}(s, t) (\check t - t) } & \\ &+ a_{2, 3}(s, t) (\check s - s)^2 + a_{2, 4}(s, t) (\check t - t)^2 + a_{2, 5}(s, t) (\check s - s)(\check t - t). \end{eqnarray*} The coefficient vector of the polynomial expansion $P_{\bms{a}_m}$ is denoted by $\bms{a}_m(s,t)$, where $\bms{a}_1(s,t) = (a_{1,0}(s,t),\allowbreak a_{1,1}(s,t), a_{1,2}(s,t))$ for the log-linear approximation and $\bms{a}_2(s,t) = (a_{2,0}(s,t),\ldots, a_{2,5}(s,t))$ for the log-quadratic one. The estimated coefficient vector $\tilde{\bm a}_m(s,t)$ is obtained by solving the maximization problem \begin{eqnarray}\label{eq:kernel polynomial term} \lefteqn{\tilde{\bm a}_m(s,t) = \arg\max_{a_m} \bigg\{ \sum_{n = 1}^N \bm{K} \bigg(\bm{B}_N^{-1/2} \begin{pmatrix} s - S_n \\ t - T_n \end{pmatrix} \bigg) P_{\bms{a}_m}(S_n - s, T_n - t)}&&\nonumber\\ && -N \Big\{\int\!\!\int_{\mathbbm R^2} \bm{K} \bigg(\bm{B}_N^{-1/2} \begin{pmatrix} s - \check s \\ t - \check t \end{pmatrix} \bigg) \exp\Big(P_{\bms{a}_m}(\check s - s, \check t - t)\Big) \mathrm{d}\check s \, \mathrm{d} \check t \Big\}\bigg\}. \end{eqnarray} \noindent While it is well known that kernel estimators suffer from the curse of dimensionality, in the vine construction only two-dimensional functions need to be estimated, which avoids problems with high dimensionality. \\ We next explain, as in \citet{geenens2017probit}, how the bandwidth selection is carried out. 
Consider the principal component decomposition of the $N \times 2$ sample matrix $D = (S, T)$ in \eqref{eq:transformed sample}, such that the $N\times 2$ matrix $(Q, R)$ satisfies \begin{equation} (Q, R)^T = {W} D^T, \end{equation} where each row of $W$ is an eigenvector of $D^T D$. We obtain an estimator of $f_{S T}$ through the density estimator of $f_{QR}$, which can be estimated based on a diagonal bandwidth matrix $\text{diag}(h_Q^2, h_R^2)$. The bandwidth $h_Q$ is selected from the samples $Q_n$, $n = 1, \ldots, N$, as \begin{eqnarray}\label{eq:bandwidth matrix} h_Q = \arg\min_{h > 0} \bigg\{\int_{-\infty}^{\infty} \Big\{\tilde f_Q^{(p)} \Big\}^2 dq - \frac{2}{N}\sum_{n = 1}^N\tilde f_{Q(-n)}^{(p)}(\hat Q_n)\bigg\}, \end{eqnarray} where $\tilde f_Q^{(p)}$, $p = 1, 2$, are the local polynomial estimators of $f_Q$, and $\tilde f_{Q(-n)}^{(p)}$ is the ``leave-one-out" version of $\tilde f_{Q}^{(p)}$ computed by leaving out $Q_n$. The procedure for selecting $h_R$ is similar. The bandwidth matrix for the bivariate copula density is then given by $\bm{B}_N = K_N^{(p)} {W}^{-1} \textrm{diag}(h_Q^2, h_R^2) {W}^{-1}$, where $K_N^{(p)}$ is taken as $N^{1/45}$ to ensure an asymptotically optimal bandwidth order for the local log-quadratic case ($p = 2$); see \citet[Section~4]{geenens2017probit} for details. Selection of the $k$-nearest-neighbour type bandwidth is similar. The $k$-nearest-neighbour bandwidths, denoted $h'_Q$ and $h'_R$, are obtained by restricting the minimization in \eqref{eq:bandwidth matrix} to the interval $(0,1)$, i.e., $$ h'_Q = \arg\min_{h'_Q \in (0, 1)} \bigg\{\int_{-\infty}^{\infty} \Big\{\tilde f_Q^{(p)} \Big\}^2 dq - \frac{2}{N}\sum_{n = 1}^N\tilde f_{Q(-n)}^{(p)}(\hat Q_n)\bigg\}. $$ The estimate of $f_{QR}$ at any $(q, r)$ is then obtained using its $k = K_N^{(p)} \cdot h'_Q \cdot N$ nearest neighbours, where $K_N^{(p)}$ is taken as $N^{-4/45}$ for $p = 2$. 
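The transformation idea itself can be illustrated with its simplest variant, a plain Gaussian product kernel on the probit scale (the analogue of a local log-constant fit); the fixed bandwidth $h$ is our simplification, and the local log-quadratic correction and the PCA-based bandwidth selection described above are deliberately omitted.

```python
import math

def phi(x):
    """Standard normal density."""
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def phi_inv(u, lo=-8.0, hi=8.0):
    """Crude numerical inverse of the standard normal CDF via bisection."""
    cdf = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if cdf(mid) < u:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def copula_density_probit(u, v, samples, h=0.3):
    """Transformation kernel estimate of a copula density at (u, v): probit-
    transform the pseudo-observations, estimate the joint density f_{S,T}
    with a product Gaussian kernel, and divide by phi(s) * phi(t)."""
    s, t = phi_inv(u), phi_inv(v)
    f_st = 0.0
    for ui, vi in samples:
        si, ti = phi_inv(ui), phi_inv(vi)
        f_st += phi((s - si) / h) * phi((t - ti) / h) / (h * h)
    f_st /= len(samples)
    return f_st / (phi(s) * phi(t))
```

Working on the probit scale avoids the boundary bias a kernel estimator would suffer directly on $[0,1]^2$; the local polynomial fit and the data-driven bandwidths above refine this basic construction.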
The R package \texttt{rvinecopulib} implements the bandwidth selection in \eqref{eq:bandwidth matrix} only for the log-quadratic case $p = 2$. \section{Proof of Proposition~\ref{thm:consistency}} \label{section:appendB} \begin{proof} We first show statement 1. By \eqref{eq:conditional quantile}, the estimator is $\hat{F}^{-1}_{Y|X_1, \ldots, X_p} (\alpha | x_1, \ldots, x_p) =$\\ $ \hat{F}^{-1}_Y\big(\hat{C}^{-1}_{V|U_1, \ldots, U_p}(\alpha | \hat{u}_1, \ldots, \hat{u}_p)\big),$ where $\hat u_j = \hat F_j(x_j)$, $j = 1, \ldots, p$, denote variables on the u-scale. To avoid heavy notation, the subscript $N$ referring to the sample size is omitted here. Following \cite{wied2012consistency, silverman1978weak}, to show the uniform strong consistency of $\hat{F}^{-1}_Y\big(\hat{C}^{-1}_{V|U_1, \ldots, U_p}(\alpha | \hat{u}_1, \ldots, \hat{u}_p)\big)$, we show $$\sup_\alpha \big|\hat{F}^{-1}_Y\big(\hat{C}^{-1}_{V|U_1, \ldots, U_p}(\alpha | \hat{u}_1, \ldots, \hat{u}_p)\big) - F^{-1}_Y\big(C^{-1}_{V|U_1, \ldots, U_p}(\alpha | u_1, \ldots, u_p)\big) \big| \to 0 \ a.s. $$ To improve the readability of the proof, we first introduce some shorthand notation. 
Define $$D_{C,1} = \hat{F}^{-1}_Y\big(\hat{C}^{-1}_{V|U_1, \ldots, U_p}(\alpha | \hat{u}_1, \ldots, \hat{u}_p)\big), \quad D_{C,2} = {F}^{-1}_Y\big(\hat{C}^{-1}_{V|U_1, \ldots, U_p}(\alpha | \hat{u}_1, \ldots, \hat{u}_p)\big),$$ $$D_{C,3} = \hat{F}^{-1}_Y\big(C^{-1}_{V|U_1, \ldots, U_p}(\alpha | u_1, \ldots, u_p)\big), \quad D_{C,4} = {F}^{-1}_Y\big(C^{-1}_{V|U_1, \ldots, U_p}(\alpha | u_1, \ldots, u_p)\big),$$ and the two differences $D_C = D_{C,1} - D_{C,3}$ and $D_F = D_{C, 3} - D_{C,4}.$ \\ For all $\epsilon > 0$, \begin{eqnarray}\label{eq:converge uniform} 1 &\geq& P\big(\sup_\alpha \big|D_{C,1} - D_{C,4} \big| \leq \epsilon \big) \nonumber =P\big(\sup_\alpha \big| D_{C,1} - D_{C,3} + D_{C,3} - D_{C,4} \big| \leq \epsilon \big) \\ & = & P\big(\sup_\alpha \big| D_C + D_F \big| \leq \epsilon \big)\nonumber\\ &\geq& P\big(\sup_\alpha \big\{\big|D_C \big| + \big|D_F \big| \big\} \leq \epsilon \big) \geq P\big(\sup_\alpha \big|D_C\big| + \sup_\alpha\big|D_F \big| \leq \epsilon \big)\nonumber \\ &\geq& P\big( \big(\sup_\alpha \big|D_C \big| \leq \frac{3}{4}\epsilon \big) \cap \big(\sup_\alpha\big|D_F\big| \leq \frac{1}{4}\epsilon \big) \big) \nonumber\\ &=& P\Big( \big(\sup_\alpha \big|D_C\big| \leq \frac{3}{4}\epsilon\big) \;\big| \; \big(\sup_\alpha\big|D_F \big| \leq \frac{1}{4}\epsilon \big) \Big) \cdot P\big(\sup_\alpha\big| D_F \big| \leq \frac{1}{4}\epsilon \big). \end{eqnarray} Denote by $A$ the event $\big\{\sup_\alpha\big|D_F \big| \leq \frac{1}{4}\epsilon\big\}$; then $P(A) = 1$ holds by the uniform strong consistency of the estimator of $F_Y^{-1}$. Next, we show that the conditional probability in \eqref{eq:converge uniform} is equal to 1. 
\begin{eqnarray*} \lefteqn{P\big( (\sup_\alpha |D_C| \leq \frac{3}{4}\epsilon) \;\big| A\big) = P\big( \sup_\alpha |D_{C,1} - D_{C,2} + D_{C,2} - D_{C,4} + D_{C,4} - D_{C,3}| \leq \frac{3}{4}\epsilon \big| A \big) }\\ &\geq & P\big( \sup_\alpha |D_{C,1} - D_{C,2}| + \sup_\alpha |D_{C,4} - D_{C,3}| + \sup_\alpha |D_{C,2} - D_{C,4}| \leq \frac{3}{4}\epsilon \big| A \big). \end{eqnarray*} This conditional probability is equal to 1: the first and second suprema are less than or equal to $\frac{1}{4}\epsilon$ by conditioning on $A$ and by the uniform consistency of $\hat{F}^{-1}_Y$. The last supremum is less than or equal to $\frac{1}{4}\epsilon$ by \citet[][Thm.~2]{bartle1961preservation} on almost uniform convergence, applied to the continuous inverse distribution function $F^{-1}_Y$ and taking the measurable space to be the probability space. Indeed, $P\big( \sup_\alpha \big|\hat{C}^{-1}_{V|U_1, \ldots, U_p}(\alpha | \hat{u}_1, \ldots, \hat{u}_p) - C^{-1}_{V|U_1, \ldots, U_p}(\alpha | u_1, \ldots, u_p) \big| \leq \frac{1}{4}\epsilon \big) = 1$, which can be argued as in \eqref{eq:converge uniform}, using the uniform consistency and continuity of the inverses of the h-functions. Thus, \eqref{eq:converge uniform} yields $ P( \sup_\alpha |D_{C,1} - D_{C,4}| \leq \epsilon ) = 1,$ and we conclude that $\hat{F}^{-1}_Y\big(\hat{C}^{-1}_{V|U_1, \ldots, U_p}(\alpha | \hat{u}_1, \ldots, \hat{u}_p)\big)$ is uniformly strongly consistent. \\ To prove the weak consistency in statement 2, by \cite{wied2012consistency, silverman1978weak}, we only need to show $ P(|D_{C,1} - D_{C,4}| \leq \epsilon ) \to 1.$ Using the same technique as in \eqref{eq:converge uniform}, together with \citet[][Thm.~2]{bartle1961preservation} on convergence in measure, the weak consistency follows. \end{proof} \end{document}
\begin{document} \title[Sharp estimates for semi-stable radial solutions] {Sharp estimates for semi-stable radial solutions of semilinear elliptic equations} \author{Salvador Villegas} \thanks{The author has been supported by the MEC Spanish grants MTM2005-01331 and MTM2006-09282} \address{Departamento de An\'{a}lisis Matem\'{a}tico, Universidad de Granada, 18071 Granada, Spain.} \email{[email protected]} \begin{abstract} This paper is devoted to the study of semi-stable radial solutions $u\in H^1(B_1)$ of $-\Delta u=g(u) \mbox{ in } B_1\setminus \{ 0\}$, where $g\in C^1(\mathbb R)$ is a general nonlinearity and $B_1$ is the unit ball of $\mathbb R^N$. We establish sharp pointwise estimates for such solutions. As an application of these results, we obtain optimal pointwise estimates for the extremal solution and its derivatives (up to order three) of the semilinear elliptic equation $-\Delta u=\lambda f(u)$, posed in $B_1$, with Dirichlet data $u|_{\partial B_1}=0$, and a continuous, positive, nondecreasing and convex function $f$ on $[0,\infty)$ such that $f(s)/s\rightarrow\infty$ as $s\rightarrow\infty$. In addition, we provide, for $N\geq 10$, a large family of semi-stable radially decreasing unbounded $H^1(B_1)$ solutions. \end{abstract} \maketitle \section{Introduction and main results} This paper deals with the semi-stability of radial solutions $u\in H^1(B_1)$ of \begin{equation}\label{mainequation} -\Delta u=g(u) \ \ \mbox{ in } B_1\setminus \{ 0\}\, , \end{equation} \noindent where $B_1$ is the unit ball of $\mathbb R^N$, and $g\in C^1(\mathbb R)$ is a general nonlinearity. A radial solution $u\in H^1(B_1)$ of (\ref{mainequation}) is called semi-stable if $$\int_{B_1} \left( \vert \nabla v\vert^2-g'(u)v^2\right) \, dx\geq 0$$ \noindent for every $v\in C^\infty (B_1)$ with compact support in $B_1\setminus \{ 0\}$. 
As an application of some general results obtained in this paper for this class of solutions (for arbitrary $g\in C^1(\mathbb R)$), we will establish sharp pointwise estimates related to the following semilinear elliptic equation, which has been extensively studied. $$ \left\{ \begin{array}{ll} -\Delta u=\lambda f(u)\ \ \ \ \ \ \ & \mbox{ in } \Omega \, ,\\ u\geq 0 & \mbox{ in } \Omega \, ,\\ u=0 & \mbox{ on } \partial\Omega \, ,\\ \end{array} \right. \eqno{(P_\lambda)} $$ \ \noindent where $\Omega\subset\mathbb R^N$ is a smooth bounded domain, $N\geq 2$, $\lambda\geq 0$ is a real parameter, and the nonlinearity $f:[0,\infty)\rightarrow \mathbb R$ satisfies \begin{equation}\label{convexa} f \mbox{ is } C^1, \mbox{ nondecreasing and convex, }f(0)>0,\mbox{ and }\lim_{u\to +\infty}\frac{f(u)}{u}=+\infty. \end{equation} \ It is well known that there exists a finite positive extremal parameter $\lambda^\ast$ such that ($P_\lambda$) has a minimal classical solution $u_\lambda\in C^2(\overline{\Omega})$ if $0\leq \lambda <\lambda^\ast$, while no solution exists, even in the weak sense, for $\lambda>\lambda^\ast$. The set $\{u_\lambda:\, 0\leq \lambda < \lambda^\ast\}$ forms a branch of classical solutions increasing in $\lambda$. Its increasing pointwise limit $u^\ast(x):=\lim_{\lambda\uparrow\lambda^\ast}u_\lambda(x)$ is a weak solution of ($P_\lambda$) for $\lambda=\lambda^\ast$, which is called the extremal solution of ($P_\lambda$) (see \cite{Bre,BV}). The regularity and properties of the extremal solutions depend strongly on the dimension $N$, domain $\Omega$ and nonlinearity $f$. When $f(u)=e^u$, it is known that $u^\ast\in L^\infty (\Omega)$ if $N<10$ (for every $\Omega$) (see \cite{CrR,MP}), while $u^\ast (x)=-2\log \vert x\vert$ and $\lambda^\ast=2(N-2)$ if $N\geq 10$ and $\Omega=B_1$ (see \cite{JL}). There is an analogous result for $f(u)=(1+u)^p$ with $p>1$ (see \cite{BV}). 
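For the reader's convenience, the exponential example just quoted can be checked directly: writing $r=\vert x\vert$ and $u^\ast(r)=-2\log r$, the radial form of the Laplacian gives $$-\Delta u^\ast=-\left( u^\ast_{rr}+\frac{N-1}{r}\, u^\ast_{r}\right) =-\left( \frac{2}{r^2}-\frac{2(N-1)}{r^2}\right) =\frac{2(N-2)}{r^2}=2(N-2)\, e^{u^\ast},$$ \noindent in agreement with $\lambda^\ast=2(N-2)$. 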
Brezis and V\'azquez \cite{BV} raised the question of determining the boundedness of $u^\ast$, depending on the dimension $N$, for general nonlinearities $f$ satisfying (\ref{convexa}). The best result is due to Nedev \cite{Ne}, who proved that $u^\ast \in L^\infty (\Omega)$ if $N\leq 3$, and Cabr\'e \cite{cabre4}, who has proved recently that $u^\ast \in L^\infty (\Omega)$ if $N=4$ and $\Omega$ is convex. Cabr\'e and Capella \cite{cc} have proved that $u^\ast \in L^\infty (\Omega)$ if $N\leq 9$ and $\Omega=B_1$ (similar results for the $p$-Laplacian operator are contained in \cite{plaplaciano}). Another interesting question is whether the extremal solution lies in the energy class. Nedev \cite{Ne,Ne2} proved that $u^\ast \in H_0^1(\Omega)$ if $N\leq 5$ (for every $\Omega$) or if $\Omega$ is strictly convex (for every $N\geq 2$). Brezis and V\'azquez \cite{BV} proved that a sufficient condition to have $u^\ast \in H_0^1(\Omega)$ is that $\liminf_{u\to \infty} u\, f'(u)/f(u)>1$ (for every $\Omega$ and $N\geq 2$). On the other hand, it is an open problem (see \cite[Problem 5]{BV}) to know the behavior of $f'(u^\ast)$ near the singularities of $u^\ast$. Is it always like $C/\vert x\vert^2\, $? If $\Omega=B_1$, it is easily seen by the Gidas-Ni-Nirenberg symmetry result that $u_\lambda$ is radially decreasing for $0<\lambda<\lambda^\ast$. Hence, its limit $u^\ast$ is also radially decreasing. In this situation, Cabr\'e and Capella \cite{cc} have proved the following result: \begin{theorem}(\cite{cc}).\label{cabrecapella} Assume that $\Omega=B_1$, $N\geq 2$, and that $f$ satisfies (\ref{convexa}). Let $u^\ast$ be the extremal solution of ($P_\lambda$). 
We have that \begin{enumerate} \item[i)] If $N<10$, then $u^\ast \in L^\infty (B_1)$, \ \item[ii)] If $N=10$, then $u^\ast(x)\leq C\, \left\vert \log \vert x\vert \right\vert $ \ in $B_1$ for some constant $C$, \ \item[iii)] If $N>10$, then $\displaystyle{u^\ast(x)\leq C\, \vert x\vert^{-N/2+\sqrt{N-1}+2}\sqrt{\left\vert \log \vert x\vert \right\vert}} \ $ in $B_1$ for some constant $C$, \ \item[iv)] If $N\geq 10$ and $k\in \{1,2,3\}$, then $\displaystyle{\vert \partial^{k}u^\ast(x)\vert \leq C\, \,\vert x\vert^{-N/2+\sqrt{N-1}+2-k}\sqrt{\left\vert \log \vert x\vert \right\vert}}\ $ in $B_1$ for some constant $C$. \end{enumerate} \end{theorem} \ Among other results, in this paper we establish sharp pointwise estimates for $u^\ast$ and its derivatives (up to order three) in the radial case. We improve the above theorem, answering in the affirmative an open question raised in \cite{cc} about the removal of the factor $\sqrt{\left\vert \log \vert x\vert \right\vert}$. By abuse of notation, we write $u(r)$ instead of $u(x)$, where $r=\vert x\vert$ and $x\in \mathbb R^N$. We denote by $u_r$ the radial derivative of a radial function $u$. \begin{theorem}\label{extremal} Assume that $\Omega=B_1$, $N\geq 2$, and that $f$ satisfies (\ref{convexa}). Let $u^\ast$ be the extremal solution of ($P_\lambda$). 
We have that \begin{enumerate} \item[i)] If $N<10$, then $u^\ast(r)\leq C\, (1-r)\, , \ \ \forall r\in [0,1]$, \ \item[ii)] If $N=10$, then $u^\ast(r)\leq C\, \vert \log r\vert \, , \ \ \forall r\in (0,1]$, \ \item[iii)] If $N>10$, then $\displaystyle{u^\ast(r)\leq C\, \left( r^{-N/2+\sqrt{N-1}+2}-1\right) \, , \ \ \forall r\in (0,1]}$, \ \item[iv)] If $N\geq 10$, then $\displaystyle{\vert \partial_r^{(k)}u^\ast(r)\vert \leq C\, \,r^{-N/2+\sqrt{N-1}+2-k} \, , \ \ \forall r\in (0,1]},$ \noindent $\forall k\in \{1,2,3\}$, \end{enumerate} \noindent where $\displaystyle{C=C_N \min_{t\in [1/2,1]}\vert u^\ast_r(t)\vert}$, and $C_N$ is a constant depending only on $N$. \end{theorem} \, \begin{remark}\label{C} It is immediate that if we replace the function $f$ by $\tilde{f}:=f(\cdot/M)$, with $M>0$, then the extremal solution $\tilde{u}^\ast$ associated to $\tilde{f}$ is $\tilde{u}^\ast=M u^\ast$. Hence the constant $C$ in Theorem \ref{extremal} must depend homogeneously on $u^\ast$. In fact, this linear coefficient is easily estimated since, for instance, we have $$\min_{t\in [1/2,1]}\vert u^\ast_r(t)\vert\leq 4(u^\ast(1/2)-u^\ast(3/4))\leq 4u^\ast(1/2)\leq \frac{4}{\mbox{measure}\, ( B_{1/2})} \Vert u^\ast \Vert_{L^1(B_{1/2})}.$$ \end{remark} \begin{remark} In \cite{BV} it is proved that if $$N>10 \ \ \ \ \ \ \mbox{ and }\ \ \ \ \ \ p\geq p_N:=\frac{N-2\sqrt{N-1}}{N-2\sqrt{N-1}-4}\, ,$$ \noindent then the extremal solution for $f(u)=(1+u)^p$ and $\Omega=B_1$ is given by $u^\ast(r)=r^{-2/(p-1)}-1$. In particular, if $N>10$ and $p=p_N$ (called the Joseph-Lundgren exponent), then $u^\ast(r)=r^{-N/2+\sqrt{N-1}+2}-1$. Hence the pointwise estimates of Theorem \ref{extremal} for $u^\ast$ and its derivatives (up to order three) are optimal if $N>10$. The optimality of the theorem for $N=10$ follows immediately by considering $f(u)=e^u$. As mentioned before, in this case one obtains $u^\ast(r)=2\vert \log r\vert$. 
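That the power-type function $u^\ast(r)=r^{-a}-1$, with $a=2/(p-1)$, solves the equation can also be checked directly: since $ap=a+2$, the radial Laplacian gives $$-\Delta u^\ast=-\left( u^\ast_{rr}+\frac{N-1}{r}\, u^\ast_{r}\right) =a\, (N-2-a)\, r^{-a-2}=a\, (N-2-a)\, (1+u^\ast)^p,$$ \noindent so that $\lambda^\ast=a(N-2-a)$; for $p=p_N$ one has $a=N/2-\sqrt{N-1}-2$, which is the exponent appearing in Theorem \ref{extremal}. 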
\end{remark} \begin{remark} In fact, the convexity of $f$ is not necessary to obtain our main results. Specifically, if we assume that $f$ is $C^1$, nondecreasing, $f(0)>0$ and $\lim_{u\to +\infty}f(u)/u=+\infty$, then it can be proved (see \cite[Proposition 5.1]{cc}) that there exists a finite positive extremal parameter $\lambda^\ast$ such that ($P_\lambda$) has a minimal classical solution $u_\lambda\in C^2(\overline{\Omega})$ if $0\leq \lambda <\lambda^\ast$, while no solution exists, even in the weak sense, for $\lambda>\lambda^\ast$. The set $\{u_\lambda:\, 0\leq \lambda < \lambda^\ast\}$ of classical solutions is increasing in $\lambda$ and its pointwise limit $u^\ast(x):=\lim_{\lambda\uparrow\lambda^\ast}u_\lambda(x)$ is a semi-stable weak solution of ($P_\lambda$) for $\lambda=\lambda^\ast$. Note that, in contrast to the case of convex $f$, the family of minimal solutions $\{ u_\lambda \}$ may not be continuous as a function of $\lambda$. Under these hypotheses on $f$ it is possible to obtain the results of Theorems \ref{cabrecapella} and \ref{extremal} (with the only exception of the case $N\geq 10$ and $k=3$ of item iv)). \end{remark} As we have mentioned, the proof of Theorem \ref{extremal} is based on general properties of semi-stable radial solutions. Note that the minimality of $u_\lambda$ implies its semi-stability. Clearly, we can pass to the limit and obtain that $u^\ast$ is also radial and semi-stable. In addition, by a result of Nedev \cite{Ne2} (see also \cite{cc}), we have that $u^\ast\in H_0^1(B_1)$. Recalling the definition of semi-stability at the beginning of the paper, we observe that a radial solution $u\in H^1(B_1)$ of (\ref{mainequation}) is bounded on compact subsets of $B_1\setminus \{ 0\}$. Hence, using standard regularity results, we obtain $u\in C^2(B_1\setminus \{ 0\})$, and the definition of semi-stability makes sense. 
If $u$ is a bounded radial solution of (\ref{mainequation}), then $u\in C^2(\overline{B_1})$ and the semi-stability of $u$ means that the first eigenvalue of the linearized operator $-\Delta-g'(u)$ in $B_1$ is nonnegative. Note that the expression which defines the semi-stability is nothing but the second variation of the energy functional associated to (\ref{mainequation}) in a domain $\Omega\subset\mathbb R^N$ (with $\overline{\Omega}\subset B_1\setminus\{ 0\}$): $E_\Omega (u)=\int_\Omega \left( \vert \nabla u\vert^2 /2-G(u)\right) \, dx$, where $G'=g$. Thus, if $u\in C^2(B_1\setminus \{ 0\})$ is a local minimizer of $E_\Omega$ for every smooth domain $\Omega\subset\mathbb R^N$ (with $\overline{\Omega}\subset B_1\setminus\{ 0\}$) (i.e., a minimizer under every small enough $C^1(\Omega)$ perturbation vanishing on $\partial \Omega$), then $u$ is a semi-stable solution of (\ref{mainequation}). Other general classes of semi-stable solutions include minimal solutions, extremal solutions, and absolute minimizers between a subsolution and a supersolution (see \cite[Rem. 1.11]{cc} for more details). Our main results about semi-stable radial solutions are the following. \begin{theorem}\label{principal} Let $N\geq 2$, $g\in C^1(\mathbb R)$, and $u\in H^1(B_1)$ be a semi-stable radial solution of (\ref{mainequation}). Then there exists a constant $M_N$ depending only on $N$ such that: \begin{enumerate} \item[i)] If $N<10$, then $\Vert u\Vert_{L^\infty(B_1)}\leq M_N \Vert u\Vert_{H^1(B_1\setminus \overline{B_{1/2}})}$. \ \item[ii)] If $N=10$, then $\vert u(r)\vert \leq M_{10} \Vert u\Vert_{H^1(B_1\setminus \overline{B_{1/2}})} \, (\vert \log r\vert +1)\, , \ \ \forall r\in (0,1]$. \ \item[iii)] If $N>10$, then $\displaystyle{\vert u(r)\vert \leq M_N \Vert u\Vert_{H^1(B_1\setminus \overline{B_{1/2}})} \, r^{-N/2+\sqrt{N-1}+2}\, , \ \ \forall r\in (0,1]}$.
\end{enumerate} \end{theorem} \ \begin{theorem}\label{estimas} Let $N\geq 2$, $g\in C^1(\mathbb R)$, and $u\in H^1(B_1)$ be a semi-stable radially decreasing solution of (\ref{mainequation}). Then there exists a constant $M'_N$ depending only on $N$ such that: \begin{enumerate} \item[i)] If $g\geq 0$, then $$\vert u_r(r)\vert \leq M'_N \Vert \nabla u\Vert_{L^2(B_1\setminus B_{1/2})} r^{-N/2+\sqrt{N-1}+1}\, ,\ \ \forall r\in (0,1/2].$$ \ \item[ii)] If $g\geq 0$ is nondecreasing, then $$\vert u_{rr}(r)\vert \leq M'_N \Vert \nabla u\Vert_{L^2(B_1\setminus B_{1/2})} r^{-N/2+\sqrt{N-1}}\, ,\ \ \forall r\in (0,1/2].$$ \ \item[iii)] If $g\geq 0$ is nondecreasing and convex, then $$\vert u_{rrr}(r)\vert \leq M'_N \Vert \nabla u\Vert_{L^2(B_1\setminus B_{1/2})} r^{-N/2+\sqrt{N-1}-1}\, ,\ \ \forall r\in (0,1/2].$$ \end{enumerate} \end{theorem} \ \begin{remark} \label{anillo} We emphasize that the estimates obtained in Theorems \ref{principal} and \ref{estimas} are in terms of the $H^1$ norm on the annulus $B_1\setminus \overline{B_{1/2}}$, while $u$ is required to belong to $H^1(B_1)$. In fact, this requirement is essential to obtain our results, since one can always find radial weak solutions of (\ref{mainequation}) (not in the Sobolev space of the unit ball) for which the statements of Theorems \ref{principal} and \ref{estimas} fail (see \cite{BV,cc}). \end{remark} \begin{remark} In \cite[Rem. 1.9]{cc} the authors raised the question of whether the estimates of Theorem \ref{estimas} hold for general nonlinearities $g$, without the assumptions on the nonnegativity of $g$, $g'$ and/or $g''$. In this paper we answer this question in the negative. In fact, without assumptions on the sign of $g$, $g'$ or $g''$ it is not possible to obtain any pointwise estimate for $\vert u_r\vert$, $\vert u_{rr}\vert$ or $\vert u_{rrr}\vert$ (see Corollaries \ref{nohay}, \ref{nohayy} and \ref{nohayyy}).
\end{remark} To prove the main results of the paper we will use Lemma \ref{essential}, which, roughly speaking, says that there are restrictions on the growth of the derivative of a radial semi-stable solution of (\ref{mainequation}) around the origin. In the proof of this lemma, we will make use of \cite[Lem. 2.1]{cc}, which was inspired by the proof of Simons' theorem on the nonexistence of singular minimal cones in $\mathbb R^N$ for $N\leq 7$ (see \cite[Th. 10.10]{simons} and \cite[Rem. 2.2]{cc} for more details). Similar methods are used in \cite{cabre,yo} to study the stability or instability of radial solutions in the whole space $\mathbb R^N$. \ The paper is organized as follows. In Section \ref{dos} we prove Theorems \ref{extremal}, \ref{principal} and \ref{estimas}. Section \ref{tres} provides, for $N\geq 10$, a large family of semi-stable radially decreasing unbounded $H^1(B_1)$ solutions of problems of the type (\ref{mainequation}). Taking solutions from this family, we will show the impossibility of obtaining pointwise estimates for $\vert u_r\vert$, $\vert u_{rr}\vert$ or $\vert u_{rrr}\vert$ if no further assumptions on the sign of $g$, $g'$ or $g''$ are imposed. \section{Proof of the main results}\label{dos} \begin{lemma}\label{essential} Let $N\geq 2$, $g\in C^1(\mathbb R)$, and $u\in H^1(B_1)$ be a semi-stable radial solution of (\ref{mainequation}). Then there exists a constant $K_N$ depending only on $N$ such that: \begin{equation}\label{inequality} \int_0^r t^{N-1}u_r(t)^2 \, dt\leq K_N \Vert \nabla u\Vert_{L^2(B_1\setminus B_{1/2})}^2\, r^{2\sqrt{N-1}+2}\, \ \ \ \forall r\in [0,1]. \end{equation} \end{lemma} \noindent {\bf Proof.} Let us use \cite[Lem. 2.1]{cc} (see also the proof of \cite[Lem.
2.3]{cc}) to ensure that $$(N-1)\int_{B_1}u_r^2\, \eta^2\, dx \leq \int_{B_1}u_r^2\vert \nabla \left(r\, \eta\right) \vert^2 \, dx\, , $$ \noindent for every $\eta \in (H^1\cap L^\infty )(B_1)$ with compact support in $B_1$ and such that $\vert \nabla \left(r\, \eta\right) \vert\in L^\infty (B_1)$. Applying this inequality to a radial function $\eta (\vert x\vert)$ we obtain \begin{equation}\label{stablepropert} (N-1)\int_0^1 u_r(t)^2\eta(t)^2 t^{N-1}\, dt\leq \int_0^1 u_r(t)^2 \left(t \, \eta (t)\right)'^{\, 2} t^{N-1}\, dt\, . \end{equation} We now fix $r\in (0,1/2)$ and consider the function $$\eta (t)=\left\{ \begin{array}{ll} r^{-\sqrt{N-1}-1} & \mbox{ if } 0\leq t \leq r\, , \\ \\ t^{-\sqrt{N-1}-1} & \mbox{ if } r<t\leq 1/2\, , \\ \\ 2^{\sqrt{N-1}+2}(1-t) & \mbox{ if } 1/2<t\leq 1\, . \end{array} \right. $$ Since $(N-1)\eta(t)^2=\left(t \, \eta (t)\right)'^{\, 2}$ for $r<t<1/2$, inequality (\ref{stablepropert}) shows that $$\begin{array}{l}\displaystyle{\ \ \ (N-2)r^{-2\sqrt{N-1}-2}\int_0^r u_r(t)^2 t^{N-1}\, dt}\\ \displaystyle{=\int_0^r \left( (N-1) \eta (t)^2-\left(t \, \eta (t)\right)'^{\, 2}\right) u_r(t)^2 t^{N-1}\, dt }\\ \displaystyle{\leq -\int_{1/2}^1 \left( (N-1) \eta (t)^2-\left(t \, \eta (t)\right)'^{\, 2}\right) u_r(t)^2 t^{N-1}\, dt }\leq \displaystyle{ \alpha_N \int_{1/2}^1 u_r(t)^2 t^{N-1}\, dt,} \end{array}$$ \noindent where the constant $\displaystyle{\alpha_N=\max_{1/2\leq t\leq 1}-\left( (N-1) \eta (t)^2-\left(t \, \eta (t)\right)'^{\, 2}\right)}$ depends only on $N$. This establishes (\ref{inequality}) for $r\in [0,1/2]$, if $N>2$.
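The middle branch $t^{-\sqrt{N-1}-1}$ of $\eta$ is chosen precisely so that $(N-1)\eta(t)^2=\left(t\,\eta(t)\right)'^{\,2}$ there; this identity is elementary to verify by hand, and the following small finite-difference check (a sketch, independent of the proof) confirms it numerically for a sample dimension:

```python
import math

N = 7                       # sample dimension; the identity holds for any N >= 2
s = math.sqrt(N - 1)

def eta(t):
    # middle branch of the test function: t^{-sqrt(N-1)-1}
    return t ** (-s - 1)

def d_t_eta(t, h=1e-6):
    # central finite difference of t -> t * eta(t)
    return ((t + h) * eta(t + h) - (t - h) * eta(t - h)) / (2 * h)

# (N-1) * eta(t)^2 agrees with ((t * eta(t))')^2 on the middle branch
for t in [0.1, 0.2, 0.3, 0.4]:
    lhs = (N - 1) * eta(t) ** 2
    rhs = d_t_eta(t) ** 2
    assert abs(lhs - rhs) < 1e-6 * lhs
```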
If $r\in (1/2,1]$ and $N>2$ then, applying the above inequality for $r=1/2$, we obtain $$\begin{array}{l}\displaystyle{\ \ \ \int_0^r t^{N-1}u_r(t)^2\, dt\leq \int_0^{1/2} t^{N-1}u_r(t)^2\, dt+\int_{1/2}^1 t^{N-1}u_r(t)^2\, dt} \\ \displaystyle{\leq\left( \frac{\alpha_N}{N-2}\left(\frac{1}{2}\right)^{2\sqrt{N-1}+2}+1\right)\int_{1/2}^1 t^{N-1}u_r(t)^2\, dt} \\ \displaystyle{\leq (2r)^{2\sqrt{N-1}+2}\left( \frac{\alpha_N}{N-2}\left(\frac{1}{2}\right)^{2\sqrt{N-1}+2}+1\right)\int_{1/2}^1 t^{N-1}u_r(t)^2\, dt} \end{array}$$ \noindent which is the desired conclusion with $\displaystyle{K_N=\frac{1}{\omega_N}\left(\frac{\alpha_N}{N-2}+2^{2\sqrt{N-1}+2}\right)}$ (note that the constant obtained for $r\in (1/2,1]$ is greater than the one for $r\in [0,1/2]$). Finally, if $N=2$, changing the definition of $\eta (t)$ in $[0,r]$ to $\eta (t)=1/(r\, t)$, if $r_0<t\leq r$; $\eta(t)=1/(r\, r_0)$, if $0\leq t\leq r_0$ (for arbitrary $r_0\in (0,r)$), we obtain $$\frac{1}{r^2}\int_{r_0}^r\frac{u_r(t)^2}{t}dt\leq \alpha_2 \int_{1/2}^1u_r(t)^2t\, dt\, .$$ Letting $r_0\rightarrow 0$ and taking into account that $t/r^2\leq 1/t$ for $0<t\leq r$ yields (\ref{inequality}) for $N=2$ and $r\in[0,1/2]$. If $r\in (1/2,1]$, arguments similar to the case $N>2$ complete the proof. \qed \begin{proposition}\label{prorand2r} Let $N\geq 2$, $g\in C^1(\mathbb R)$, and $u\in H^1(B_1)$ be a semi-stable radial solution of (\ref{mainequation}). Then there exists a constant $K'_N$ depending only on $N$ such that: \begin{equation}\label{inequalityrand2r} \left\vert u(r)-u\left(\frac{r}{2}\right) \right\vert \leq K'_N \Vert \nabla u\Vert_{L^2(B_1\setminus B_{1/2})}\, r^{-N/2+\sqrt{N-1}+2}\, \ \ \ \forall r\in (0,1]. \end{equation} \end{proposition} \noindent {\bf Proof.} Fix $r\in (0,1]$.
Applying Cauchy-Schwarz and Lemma \ref{essential} we deduce $$\begin{array}{l} \displaystyle{\ \ \ \left\vert u(r)-u\left(\frac{r}{2}\right) \right\vert \leq \int_{r/2}^r\vert u_r(t)\vert t^\frac{N-1}{2}\frac{1}{t^\frac{N-1}{2}}\, dt} \\ \displaystyle{\leq\left( \int_{r/2}^r u_r(t)^2 t^{N-1}\, dt\right)^{1/2} \left( \int_{r/2}^r \frac{1}{t^{N-1}}\, dt\right)^{1/2}}\\ \displaystyle{\leq K_N^{1/2} \Vert \nabla u\Vert_{L^2(B_1\setminus B_{1/2})}\, r^{\sqrt{N-1}+1}\left( r^{2-N} \int_{1/2}^1 \frac{1}{t^{N-1}}\, dt\right)^{1/2}}, \end{array}$$ \noindent and (\ref{inequalityrand2r}) is proved. \qed \ \noindent {\bf Proof of Theorem \ref{principal}.} Let $0<r\leq 1$. Then, there exist $m\in \mathbb N$ and $1/2<r_1\leq 1$ such that $r=r_1/2^{m-1}$. Since $u$ is radial we have $u(r_1)\leq \Vert u\Vert_{L^\infty (B_1\setminus B_{1/2})}\leq \gamma_N \Vert u\Vert_{H^1(B_1\setminus \overline{B_{1/2}})}$, where $\gamma_N$ depends only on $N$. From this and Proposition \ref{prorand2r}, it follows that \begin{equation}\label{key} \begin{array}{l} \displaystyle{\vert u(r)\vert \leq \vert u(r_1)-u(r)\vert +\vert u(r_1)\vert\leq \sum_{i=1}^{m-1}\left\vert u\left(\frac{r_1}{2^{i-1}}\right)-u\left(\frac{r_1}{2^i}\right)\right\vert+\vert u(r_1)\vert} \\ \displaystyle{\leq K'_N \Vert \nabla u\Vert_{L^2(B_1\setminus B_{1/2})} \sum_{i=1}^{m-1}\left(\frac{r_1}{2^{i-1}}\right)^{-N/2+\sqrt{N-1}+2}+ \gamma_N \Vert u\Vert_{H^1(B_1\setminus \overline{B_{1/2}})} }\\ \displaystyle{\leq \left( K'_N\sum_{i=1}^{m-1}\left(\frac{r_1}{2^{i-1}}\right)^{-N/2+\sqrt{N-1}+2}+ \gamma_N \right) \Vert u\Vert_{H^1(B_1\setminus \overline{B_{1/2}})}.} \end{array} \end{equation} \ $\bullet$ If $2\leq N<10$, we have $-N/2+\sqrt{N-1}+2>0$. Then $$\sum_{i=1}^{m-1}\left(\frac{r_1}{2^{i-1}}\right)^{-N/2+\sqrt{N-1}+2}\leq \sum_{i=1}^{\infty}\left(\frac{1}{2^{i-1}}\right)^{-N/2+\sqrt{N-1}+2},$$ \noindent which is a convergent series.
Applying (\ref{key}), statement i) of the theorem is proved. \ $\bullet$ If $N=10$, we have $-N/2+\sqrt{N-1}+2=0$. From (\ref{key}) we obtain $$\begin{array}{l} \displaystyle{\vert u(r)\vert \leq \left( K'_N (m-1)+\gamma_N \right) \Vert u\Vert_{H^1(B_1\setminus \overline{B_{1/2}})}} \\=\displaystyle{\left( K'_N \left( \frac{\log r_1-\log r}{\log 2}\right) + \gamma_N \right) \Vert u\Vert_{H^1(B_1\setminus \overline{B_{1/2}})}} \\ \displaystyle{\leq \left( \frac{K'_N}{\log 2}+ \gamma_N \right) \left( \vert \log r\vert +1\right) \Vert u\Vert_{H^1(B_1\setminus \overline{B_{1/2}})},} \end{array}$$ \noindent which gives statement ii). \ $\bullet$ If $N>10$, we have $-N/2+\sqrt{N-1}+2<0$. Then $$\sum_{i=1}^{m-1}\left(\frac{r_1}{2^{i-1}}\right)^{-N/2+\sqrt{N-1}+2}= \frac{r^{-N/2+\sqrt{N-1}+2}-r_1^ {-N/2+\sqrt{N-1}+2}}{(1/2)^{-N/2+\sqrt{N-1}+2}-1}.$$ From this and (\ref{key}), we conclude $$\vert u(r)\vert \leq \left( \frac{K'_N}{(1/2)^{-N/2+\sqrt{N-1}+2}-1}+\gamma_N \right) r^{-N/2+\sqrt{N-1}+2}\Vert u\Vert_{H^1(B_1\setminus \overline{B_{1/2}})}\, ,$$ \noindent which completes the proof. \qed \ \noindent {\bf Proof of Theorem \ref{estimas}.} \begin{enumerate} \item[i)] We first observe that $(-r^{N-1}u_r)'=r^{N-1}g(u)\geq 0$. Hence $-r^{N-1}u_r$ is a nonnegative nondecreasing function and so is $r^{2N-2}u_r^2$. Thus, for $0<r\leq 1/2$, we have $$\int_0^{2r} t^{N-1}u_r(t)^2\, dt\geq \int_r^{2r} t^{N-1}u_r(t)^2 \, dt=\int_r^{2r} t^{2N-2}u_r(t)^2 \frac{1}{t^{N-1}}\, dt$$ $$\geq r^{2N-2}u_r(r)^2 \int_r^{2r}\frac{1}{t^{N-1}}\, dt=r^{2N-2}u_r(r)^2 \, r^{2-N}\int_1^{2}\frac{1}{t^{N-1}}\, dt\, .$$ From this and Lemma \ref{essential} we obtain i). \ \item[ii)] Consider the function $\Psi(r)=-N\, r^{1-1/N}u_r(r^{1/N})\, , r\in (0,1]$. It is easy to check that $\Psi'(r)=g(u(r^{1/N}))\, , r\in (0,1]$. As $g$ is nonnegative and nondecreasing we have that $\Psi$ is a nonnegative nondecreasing concave function.
It follows immediately that $0\leq \Psi'(r)\leq \Psi(r)/r\, ,r\in (0,1]$, which can be rewritten as $$0\leq -(N-1)r^{-1/N}u_r(r^{1/N})-u_{rr}(r^{1/N})\leq -N\, r^{-1/N}u_r(r^{1/N})\, ,r\in (0,1].$$ Hence $$r^{-1/N}u_r(r^{1/N})\leq u_{rr}(r^{1/N})\leq -(N-1)r^{-1/N}u_r(r^{1/N})\, ,r\in (0,1].$$ Therefore $\vert u_{rr}(r)\vert\leq (N-1)\vert u_r(r)\vert /r \, ,r\in (0,1]$; and ii) follows from i). \ \item[iii)] An easy computation shows that $$u_{rrr}=-u_r g'(u)-\frac{N-1}{r}u_{rr}+\frac{N-1}{r^2}u_r \, , \ \ r\in (0,1].$$ On the other hand, it is proved in \cite[Th. 1.8 (c)]{cc} that $g'(u(r))\leq h_N/r^2\, , r\in (0,1]$, for some constant $h_N$. Since we have shown $\vert u_{rr}(r)\vert\leq (N-1)\vert u_r(r)\vert /r \, ,r\in (0,1]$ in the proof of statement ii), it follows from the above formula that $\vert u_{rrr}(r)\vert\leq s_N\vert u_r(r)\vert /r^2 \, ,r\in (0,1]$, for some constant $s_N$ depending only on $N$. Recalling i), the proof is now complete. \qed \end{enumerate} \ To deduce Theorem \ref{extremal} from Theorems \ref{principal} and \ref{estimas} we need the following lemma. \begin{lemma}\label{monotonias} Let $N\geq 2$, let $g\in C^1(\mathbb R)$ be a nonnegative nondecreasing function, and let $u$ be a radially decreasing solution of (\ref{mainequation}) (it is required neither that $u\in H^1(B_1)$ nor that $u$ be semi-stable). Then \begin{enumerate} \item[i)] $r^{N-1}\vert u_r\vert$ is nondecreasing for $r\in (0,1]$. \item[ii)] $r^{-1}\vert u_r\vert$ is nonincreasing for $r\in (0,1]$. \item[iii)] $\max_{t\in [1/2,1]}\vert u_r(t)\vert\leq 2^{N-1}\min_{t\in [1/2,1]}\vert u_r(t)\vert$. \item[iv)] $\Vert \nabla u\Vert_{L^2(B_1\setminus B_{1/2})}\leq q_N \min_{t\in [1/2,1]}\vert u_r(t)\vert$, for a certain constant $q_N$ depending only on $N$. \end{enumerate} \end{lemma} \noindent {\bf Proof.} \begin{enumerate} \item[i)] Since $u_r<0$ we have $\left(r^{N-1}\vert u_r\vert\right)'=r^{N-1}g(u)\geq 0$.
\item[ii)] As in the proof of statement ii) of Theorem \ref{estimas} we have that the function $\Psi(r)=-N\, r^{1-1/N}u_r(r^{1/N})$ is nonnegative, nondecreasing and concave for $r\in (0,1]$. Therefore $\Psi(r)/r=-N\, r^{-1/N}u_r(r^{1/N})$ is nonincreasing, and ii) follows immediately. \item[iii)] Take $r_1,r_2 \in [1/2,1]$ such that $\vert u_r(r_1)\vert=\min_{t\in [1/2,1]}\vert u_r(t)\vert$ and $\vert u_r(r_2)\vert=\max_{t\in [1/2,1]}\vert u_r(t)\vert$. If $r_2\leq r_1$, we deduce from i) that $\vert u_r(r_2)\vert \leq (r_1/r_2)^{N-1}\vert u_r(r_1)\vert\leq 2^{N-1}\vert u_r(r_1)\vert$. If $r_2> r_1$, we deduce from ii) that $\vert u_r(r_2)\vert \leq (r_2/r_1)\vert u_r(r_1)\vert\leq 2\vert u_r(r_1)\vert\leq 2^{N-1}\vert u_r(r_1)\vert$. \item[iv)] We see at once that $$\Vert \nabla u\Vert_{L^2(B_1\setminus B_{1/2})}\leq (\mbox{measure}\, (B_1\setminus B_{1/2}))^{1/2}\max_{t\in [1/2,1]}\vert u_r(t)\vert\, ,$$ \noindent and iv) follows from iii). \qed \end{enumerate} \ \noindent {\bf Proof of Theorem \ref{extremal}.} As we have mentioned, it is well known that $u^\ast$ is a semi-stable radially decreasing $H_0^1(B_1)$ solution of (\ref{mainequation}) for $g(s)=\lambda^\ast f(s)$. Hence, we can apply to $u^\ast$ the results obtained in Theorems \ref{principal} and \ref{estimas} and Lemma \ref{monotonias}. Let us first prove i), ii) and iii) for $r\in (0,1/2)$. Since $u^\ast (1)=0$, and on account of statement iv) of Lemma \ref{monotonias}, we have $\Vert u^\ast \Vert_{H^1(B_1\setminus \overline{B_{1/2}})}\leq h_N \Vert \nabla u^\ast \Vert_{L^2(B_1\setminus B_{1/2})}\leq h'_N \min_{t\in [1/2,1]}\vert u^\ast_r(t)\vert$, for certain constants $h_N,h'_N$ depending only on $N$. From this and Theorem \ref{principal}: i) follows from the inequality $1\leq 2(1-r)$, for $r\in (0,1/2)$. ii) follows from the inequality $\displaystyle{\vert \log r\vert +1\leq \frac{\log 2+1}{\log 2} \vert \log r\vert}$, for $r\in (0,1/2)$.
iii) follows from the inequality $$r^{-N/2+\sqrt{N-1}+2}\leq \frac{(1/2)^{-N/2+\sqrt{N-1}+2}}{(1/2)^{-N/2+\sqrt{N-1}+2}-1}(r^{-N/2+\sqrt{N-1}+2}-1), \mbox{ for }r\in (0,1/2).$$ \ We next show i), ii) and iii) for $r\in [1/2,1]$. From statement iii) of Lemma \ref{monotonias} it follows that $$u^\ast (r)=\int_{r}^1 \vert u^\ast_r(t)\vert\, dt \leq (1-r)\, 2^{N-1}\min_{t\in [1/2,1]}\vert u^\ast_r(t)\vert \, , \ \ \ \ \forall r\in[1/2,1],$$ \noindent which is the desired conclusion if $N<10$. If $N=10$, our claim follows from the inequality $1-r\leq \vert\log r\vert$, for $r\in[1/2,1]$. Finally, if $N>10$, the desired conclusion follows immediately from the inequality $1-r\leq z_N( r^{-N/2+\sqrt{N-1}+2}-1)$, for $r\in [1/2,1]$, for a certain constant $z_N$. \ We now prove statement iv). In the case $k=1$ and $r\in (0,1/2)$, it follows immediately from statement i) of Theorem \ref{estimas} and statement iv) of Lemma \ref{monotonias}. The case $k=1$ and $r\in [1/2,1]$ is also obvious on account of statement iii) of Lemma \ref{monotonias} and the inequality $1\leq r^{-N/2+\sqrt{N-1}+1}$, for $r\in [1/2,1]$, for $N\geq 10$. Finally, as in the proof of statements ii) and iii) of Theorem \ref{estimas}, we have $\vert u^\ast_{rr}(r)\vert\leq (N-1)\vert u^\ast_r(r)\vert /r$ and $\vert u^\ast_{rrr}(r)\vert\leq s_N \vert u^\ast_r(r)\vert /r^2$, for $r\in (0,1]$, which gives statement iv) for $k=2,3$ from the case $k=1$. \qed \section{A family of semi-stable solutions}\label{tres} \begin{theorem}\label{familia} Let $h\in (C^2\cap L^1)(0,1]$ be a nonnegative function and consider $$\Phi (r)=r^{2\sqrt{N-1}}\left( 1+\int_0^r h(s)\, ds\right)\ \ \ \ \forall r\in (0,1].$$ Define $u_r<0$ by $$\Phi'(r)=(N-1)\, r^{N-3}u_r(r)^2\ \ \ \ \forall r\in (0,1].$$ Then, for $N\geq 10$, $u$ is a semi-stable radially decreasing unbounded $H^1(B_1)$ solution of a problem of the type (\ref{mainequation}), where $u$ is any function with radial derivative $u_r$.
\end{theorem} To prove Theorem \ref{familia} we need the following lemma, which is a generalization of the classical Hardy inequality: \begin{lemma}\label{masquehardy} Let $\Phi \in C^1(0,L)$, $0<L\leq \infty$, satisfy $\Phi'>0$. Then $$\int_0^L \frac{4 \Phi^2}{\Phi'} \xi'^2 \geq \int_0^L \Phi' \xi^2\, ,$$ \noindent for every $\xi \in C^\infty (0,L)$ with compact support. \end{lemma} \noindent {\bf Proof.} Integrating by parts and applying Cauchy-Schwarz we obtain $$\int_0^L \Phi' \xi^2=-2 \int_0^L \Phi \xi \xi' \leq 2\int_0^L \frac{\vert \Phi\vert}{\sqrt{\Phi'}} \vert \xi' \vert \sqrt{\Phi'} \vert \xi \vert \leq 2\left(\int_0^L \frac{ \Phi^2}{\Phi'}\xi'^2\right)^{1/2}\left(\int_0^L \Phi' \xi^2\right)^{1/2},$$ \noindent which establishes the desired inequality. \qed \ In the case $\Phi(r)=((N-2)/4)r^{N-2}$, $r>0$, the above lemma is the Hardy inequality for radial functions in $\mathbb R^N$, $N>2$. \ \noindent {\bf Proof of Theorem \ref{familia}.} First of all, since $\Phi \in C^1(0,1]\cap C[0,1]$ is an increasing function, we obtain $\Phi'\in L^1(0,1)$ and hence $r^{N-1}u_r^2=r^2 \Phi' /(N-1) \in L^1(0,1)$, which gives $u\in H^1(B_1)$. On the other hand, since $\Phi'(r)\geq 2\sqrt{N-1}\, r^{2\sqrt{N-1}-1},\, r\in(0,1]$, we deduce $\vert u_r(r)\vert \geq \sqrt{2}(N-1)^{-1/4} \ r^{-N/2+\sqrt{N-1}+1},\, r\in (0,1]$. As $N\geq 10$, we have $-N/2+\sqrt{N-1}+1\leq -1$. It follows that $u_r\notin L^1(0,1)$ and, since $u$ is radially decreasing, we obtain $\lim_{r\to 0}u(r)=+\infty$. Since $h\in C^2(0,1]$, it follows that $u_r \in C^2(0,1]$. Therefore, $\Delta u\in C^1\left(\overline{B_1}\setminus\{ 0\}\right)$. Hence, taking $g\in C^1(\mathbb R)$ such that $g(s)=-\Delta u(u^{-1}(s))$, for $s\in[u(1),+\infty)$, we conclude that $u$ is a solution of a problem of the type (\ref{mainequation}). It remains to prove that $u$ is semi-stable. Taking into account that $u_r\neq 0$ in $(0,1]$ and applying \cite[Lem.
2.1]{cc}, the semi-stability of $u$ is equivalent to \begin{equation}\label{stableproperty} \int_0^1 r^{N-1}u_r^2\ \xi'^2\, dr \geq (N-1)\int_0^1 r^{N-3} u_r^2\ \xi^2\, dr, \end{equation} \noindent for every $\xi \in C^\infty (0,1)$ with compact support. For this purpose, we will apply the lemma above. From the definition of $\Phi$ it is easily seen that $\Phi'\geq 2\sqrt{N-1}\, \Phi /r,$ $r\in (0,1]$. It follows that $$\frac{\Phi'r^2}{N-1}\geq \frac{4 \Phi^2}{\Phi'}\mbox{ in }(0,1].$$ Finally, since $\Phi'r^2/(N-1)=r^{N-1}u_r^2$ and $\Phi'=(N-1)r^{N-3} u_r^2$ in $(0,1]$, we deduce (\ref{stableproperty}) by applying Lemma \ref{masquehardy}. \qed \ As an application of Theorem \ref{familia} we have the following results, which show the impossibility of obtaining any pointwise estimate for $\vert u_r\vert$, $\vert u_{rr}\vert$ or $\vert u_{rrr}\vert$, for semi-stable radially decreasing $H^1(B_1)$ solutions of a problem of the type (\ref{mainequation}) with $N\geq 10$, when $g$, $g'$ or $g''$ is not assumed to be nonnegative. \begin{proposition}\label{sucesiones} Let $\{r_n\}\subset(0,1]$ and $\{M_n\}\subset \mathbb R^+$ be two sequences with $r_n\downarrow 0$. Then, for $N\geq 10$, there exists $u\in H^1(B_1)$, which is a semi-stable radially decreasing unbounded solution of a problem of the type (\ref{mainequation}), satisfying $$\vert u_r(r_n)\vert \geq M_n \ \ \ \ \forall n\in \mathbb{N}.$$ \end{proposition} \noindent {\bf Proof.} It is easily seen that for any sequences $\{r_n\}\subset(0,1]$, $\{y_n\}\subset \mathbb R^+$, with $r_n\downarrow 0$, there exists a nonnegative function $h\in (C^2\cap L^1)(0,1]$ satisfying $h(r_n)=y_n$. Take $y_n=(N-1)\, M_n^2\, r_n^{N-2\sqrt{N-1}-3}$ and apply Theorem \ref{familia} with this function $h$. It is clear, from the definition of $\Phi$, that $\Phi'(r)\geq h(r) r^{2\sqrt{N-1}}, \, r\in (0,1]$.
Hence $$(N-1)\, r_n^{N-3}u_r(r_n)^2=\Phi'(r_n)\geq h(r_n) r_n^{2\sqrt{N-1}}=y_n r_n^{2\sqrt{N-1}}=(N-1)r_n^{N-3}M_n^2,$$ \noindent and the proposition follows. \qed \begin{corollary}\label{nohay} Let $N\geq 10$. There does not exist a function $\psi:(0,1] \rightarrow \mathbb R^+$ with the following property: for every semi-stable radially decreasing solution $u\in H^1(B_1)$ of a problem of the type (\ref{mainequation}), there exist $C>0$ and $\varepsilon \in (0,1]$ such that $\vert u_r(r)\vert \leq C \psi (r)$ for $r\in (0,\varepsilon]$. \end{corollary} \noindent {\bf Proof.} Suppose that such a function $\psi$ exists and consider the sequences $r_n=1/n$, $M_n =n \, \psi (1/n)$. By the proposition above, there exists $u\in H^1(B_1)$, which is a semi-stable radially decreasing unbounded solution of a problem of the type (\ref{mainequation}), satisfying $\vert u_r(1/n)\vert \geq n \, \psi (1/n)$, a contradiction. \qed \begin{proposition}\label{sucesiones2} Let $\{r_n\}\subset(0,1]$ and $\{M_n\}\subset \mathbb R^+$ be two sequences with $r_n\downarrow 0$. Then, for $N\geq 10$, there exists $u\in H^1(B_1)$, which is a semi-stable radially decreasing unbounded solution of a problem of the type (\ref{mainequation}) with $g\geq 0$, satisfying $$\vert u_{rr}(r_n)\vert \geq M_n \ \ \ \ \forall n\in \mathbb{N}.$$ \end{proposition} \noindent {\bf Proof.} Let $h\in C^2(0,1]$ be increasing and satisfy $0\leq h\leq 1$. Define $\Phi$ and $u_r$ as in Theorem \ref{familia}. We claim that \begin{enumerate} \item[i)] $u$ is a semi-stable radially decreasing unbounded $H^1(B_1)$ solution of a problem of the type (\ref{mainequation}) with $g\geq 0$. \item[ii)] $\vert u_r \vert\leq D_N r^{-N/2+\sqrt{N-1}+1} ,\ \ \forall r\in (0,1]$, where $D_N$ only depends on $N$. \item[iii)] $-u_{rr}\geq E_N h'(r)r^{-N/2+\sqrt{N-1}+2}-F_N r^{-N/2+\sqrt{N-1}} ,\ \ \forall r\in (0,1]$, where $E_N>0$ and $F_N$ only depend on $N$. \end{enumerate} Since $h$ is nonnegative and increasing, we have $\Phi''>0$.
Hence $(N-1)r^{N-3}u_r^2$ is increasing and so is $r^{2N-2}u_r^2$. This implies that $-r^{N-1}u_r$ is increasing, which is equivalent to the nonnegativity of $g$. On the other hand note that, since $0\leq h\leq 1$, we obtain $\Phi'(r)\leq G_N r^{2\sqrt{N-1}-1}$ in $(0,1]$, for a constant $G_N$. Hence, from the definition of $u_r$ we obtain ii). To prove iii) observe that, from the nonnegativity of $h$, we obtain $\Phi''(r)\geq r^{2\sqrt{N-1}}h'(r)$ in $(0,1]$. On the other hand, from the definition of $u_r$ we have $\Phi''(r)=(N-1)\left( (N-3)r^{N-4}u_r^2+2u_r u_{rr} r^{N-3}\right)$. Therefore, by ii) and the previous inequality we obtain iii). Finally, it is easily seen that for any sequences $\{r_n\}\subset(0,1]$, $\{y_n\}\subset \mathbb R^+$, with $r_n\downarrow 0$, there exists $h\in C^2(0,1]$, increasing, satisfying $0\leq h\leq 1$ and $h'(r_n)=y_n$. Take $y_n$ such that $E_N y_n r_n^{-N/2+\sqrt{N-1}+2}-F_N r_n^{-N/2+\sqrt{N-1}}=M_n$. Applying iii) we deduce $-u_{rr}(r_n)\geq M_n$ and the proof is complete. \qed \begin{corollary}\label{nohayy} Let $N\geq 10$. There does not exist a function $\psi:(0,1] \rightarrow \mathbb R^+$ with the following property: for every semi-stable radially decreasing solution $u\in H^1(B_1)$ of a problem of the type (\ref{mainequation}) with $g\geq 0$, there exist $C>0$ and $\varepsilon \in (0,1]$ such that $\vert u_{rr}(r)\vert \leq C \psi (r)$ for $r\in (0,\varepsilon]$. \end{corollary} \noindent {\bf Proof.} Arguing as in Corollary \ref{nohay} and using Proposition \ref{sucesiones2}, we conclude the proof of the corollary. \qed \begin{proposition}\label{sucesiones3} Let $\{r_n\}\subset(0,1]$ and $\{M_n\}\subset \mathbb R^+$ be two sequences with $r_n\downarrow 0$.
Then, for $N\geq 10$, there exists $u\in H^1(B_1)$, which is a semi-stable radially decreasing unbounded solution of a problem of the type (\ref{mainequation}) with $g,g'\geq 0$, satisfying $$\vert u_{rrr}(r_n)\vert \geq M_n \ \ \ \ \forall n\in \mathbb{N}.$$ \end{proposition} \begin{lemma}\label{hpequenna} For any dimension $N\geq 10$, there exists $\varepsilon_N>0$ with the following property: for every $h\in C^2(0,1]\cap C^1[0,1]$ satisfying $h(0)=0$, $0\leq h'\leq \varepsilon_N$ and $h''\leq 0$, $u$ is a semi-stable radially decreasing unbounded $H^1(B_1)$ solution of a problem of the type (\ref{mainequation}) with $g,g'\geq 0$, where $u_r$ is defined in terms of $h$ as in Theorem \ref{familia}. \end{lemma} \noindent {\bf Proof.} As in the proof of Proposition \ref{sucesiones2} (item i)), $h'\geq 0$ implies that $u$ is a semi-stable radially decreasing unbounded $H^1(B_1)$ solution of a problem of the type (\ref{mainequation}) with $g\geq 0$. On the other hand, from the definition of $\Phi$ and $u_r$ it follows easily that $$\begin{array}{ll} u_r &\displaystyle{=-\sqrt{(N-1)^{-1}\, r^{3-N}\Phi'}} \\ &\displaystyle{= -r^{-N/2+\sqrt{N-1}+1}\sqrt{2(N-1)^{-1/2}\left(1+\int_0^r h\right)+(N-1)^{-1}r\, h}}\\ \end{array}$$ Write this last expression in the form $u_r=-r^{-N/2+\sqrt{N-1}+1}\varphi (r)$, where $\varphi(r)$ (and of course $u_r$) depends on $h$. Now consider the set $X=\{ h\in C^2(0,1]\cap C^1[0,1]:h(0)=0\, , 0\leq h'\, ,h''\leq 0 \}$ and the norm $\Vert h\Vert_X=\Vert h'\Vert_{L^\infty (0,1)}$. Taking $\Vert h\Vert_X \to 0$, we have \begin{equation}\label{tyu} \lim_{\Vert h\Vert_X \to 0} \varphi =\sqrt{2}(N-1)^{-1/4}, \ \ \lim_{\Vert h\Vert_X \to 0} \varphi'=0, \ \ \lim_{\Vert h\Vert_X \to 0} \left( \varphi''-\frac{(N-1)^{-1}r\, h''}{2\varphi}\right)=0, \end{equation} \noindent where all the limits are taken uniformly in $r\in (0,1]$.
On the other hand, it is easy to check that $$\begin{array}{ll} r^2g'(u)&\displaystyle{=\frac{-r^2 u_{rrr}}{u_r}-\frac{(N-1)r\, u_{rr}}{u_r}+(N-1)}\\ &\displaystyle{=\frac{-r^2 \varphi''}{\varphi}-\frac{(2\sqrt{N-1}+1)\, r \varphi'}{\varphi}+\frac{(N-2)^2}{4}}\\ \end{array}$$ Hence, from (\ref{tyu}), we can assert that, for $h\in X$ with small $\Vert h\Vert_X$, $r^2 g'(u) >0$ in $(0,1]$, and the lemma follows. \qed \ \noindent {\bf Proof of Proposition \ref{sucesiones3}.} We follow the notation used in the previous lemma. From (\ref{tyu}), we deduce that $$\lim_{\Vert h\Vert_X \to 0} \left( r^{N/2-\sqrt{N-1}+1}u_{rrr}+\frac{(N-1)^{-1}\, r^3 h''}{2\varphi}\right)=\sigma ,$$ \noindent uniformly in $r\in (0,1]$, where $\sigma=-( -N/2+\sqrt{N-1}+1)(-N/2+\sqrt{N-1})\sqrt{2}(N-1)^{-1/4}<0$. Then, taking $\varepsilon'_N>0$ sufficiently small (possibly less than $\varepsilon_N$), we have that $$r^{N/2-\sqrt{N-1}+1}u_{rrr}\geq -\left(\frac{(N-1)^{-1}\, r^3 h''}{2\sqrt{2}(N-1)^{-1/4}+1}\right)+\sigma -1\, , \ \forall r\in (0,1],$$ \noindent for $\Vert h\Vert_X \leq \varepsilon'_N$. Finally, it is easily seen that for any sequences $\{r_n\}\subset(0,1]$, $\{y_n\}\subset \mathbb R^+$, with $r_n\downarrow 0$, there exists $h\in X$, with $\Vert h\Vert_X \leq \varepsilon'_N$, satisfying $h''(r_n)=-y_n$. (Take, for instance, $h(r)=\int_0^r z(t)\, dt$, where $z\in C^1(0,1]\cap C[0,1]$ is decreasing, $0\leq z(t)\leq \varepsilon'_N$ and satisfies $z'(r_n)=-y_n$.) Take $y_n$ such that $r_n^{N/2-\sqrt{N-1}+1}M_n=\left(\frac{(N-1)^{-1}\, r_n^3 y_n}{2\sqrt{2}(N-1)^{-1/4}+1}\right)+\sigma -1$. Applying the above inequality, we obtain $u_{rrr}(r_n)\geq M_n$ and the proof is complete. \qed \begin{corollary}\label{nohayyy} Let $N\geq 10$.
There does not exist a function $\psi:(0,1] \rightarrow \mathbb R^+$ with the following property: for every semi-stable radially decreasing solution $u\in H^1(B_1)$ of a problem of the type (\ref{mainequation}) with $g,g'\geq 0$, there exist $C>0$ and $\varepsilon \in (0,1]$ such that $\vert u_{rrr}(r)\vert \leq C \psi (r)$ for $r\in (0,\varepsilon]$. \end{corollary} \noindent {\bf Proof.} Applying Proposition \ref{sucesiones3}, this follows by the same method as in Corollaries \ref{nohay} and \ref{nohayy}. \qed \ {\bf Acknowledgments.} The author would like to thank Xavier Cabr\'e for very stimulating discussions. \end{document}
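Theorem \ref{familia} can be sanity-checked in its simplest instance $h\equiv 0$, $N=10$: then $\Phi(r)=r^{2\sqrt{N-1}}$ and the defining relation yields $u_r(r)=-\sqrt{2}\,(N-1)^{-1/4}\,r^{-N/2+\sqrt{N-1}+1}$, whose exponent equals $-1$, so the resulting $u$ is an unbounded $H^1(B_1)$ function, as the theorem asserts. A numerical sketch of these two facts (the quadrature routine and cut-off radii are assumptions of the check, not of the paper):

```python
import math

N = 10                                   # borderline dimension of Theorem `familia`
c = math.sqrt(2) * (N - 1) ** (-0.25)
p = -N / 2 + math.sqrt(N - 1) + 1        # exponent of u_r; equals -1 for N = 10
assert p == -1

def u_r(r):
    # radial derivative produced by the construction with h = 0
    return -c * r ** p

def trapz(f, lo, hi, n=100000):
    # composite trapezoidal rule on [lo, hi]
    h = (hi - lo) / n
    return h * (sum(f(lo + i * h) for i in range(1, n)) + (f(lo) + f(hi)) / 2)

# the H^1 energy int r^{N-1} u_r^2 dr stays bounded as the inner radius shrinks...
e1 = trapz(lambda r: r ** (N - 1) * u_r(r) ** 2, 1e-3, 1)
e2 = trapz(lambda r: r ** (N - 1) * u_r(r) ** 2, 1e-6, 1)
assert abs(e2 - e1) < 1e-2

# ...while u(r) = int_r^1 |u_r| dt grows without bound as r -> 0
m1 = trapz(lambda r: -u_r(r), 1e-3, 1)
m2 = trapz(lambda r: -u_r(r), 1e-6, 1)
assert m2 > m1 + 1.0
```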
\begin{document} \title{ Hyper-atoms and the critical pair Theory} \author{ Yahya O. Hamidoune\thanks{UPMC Univ Paris 06, E. Combinatoire, Case 189, 4 Place Jussieu, 75005 Paris, France, {\tt [email protected]} } } \maketitle \begin{abstract} We introduce the notion of a hyper-atom. One of the main results of this paper is the $\frac{2|G|}3$--Theorem: Let $S$ be a finite generating subset of an abelian group $G$ of order $\ge 2$. Let $T$ be a finite subset of $G$ such that $2\le |S|\le |T|$, $S+T$ is aperiodic, $0\in S\cap T$ and $$ \frac{2|G|+2}3\ge |S+T|= |S|+|T|-1.$$ Let $H$ be a hyper-atom of $S$. Then $S$ and $T$ are $H$--quasi-periodic. Moreover $\phi(S)$ and $\phi(T)$ are arithmetic progressions with the same difference, where $\phi :G\mapsto G/H$ denotes the canonical morphism. This result easily implies the traditional critical pair Theory and its cornerstone: Kemperman's Structure Theorem. \end{abstract} \section{Introduction} For a subset $A$ of an abelian group $G$, the {\em period} of $A$ is $\Pi (A)=\{x\in G :A+x=A\}$. The set $A$ is said to be {\em periodic} if $\Pi(A)\neq \{0\}$. A basic tool in Additive Number Theory is the following generalization of the Cauchy-Davenport Theorem due to Kneser: \begin{theirtheorem}[Kneser \cite{tv}]\label{kneser} Let $G$ be an abelian group and let $A, B\subset G$ be finite subsets of $G$ such that $A+B$ is aperiodic. Then $|A+B|\ge |A|+|B|-1$. \end{theirtheorem} The description of the subsets $A$ and $B$ with $|A+B|= |A|+|B|-1$, obtained by Kemperman in \cite{kempacta}, is a deep result in the classical critical pair Theory. Another step in this direction was proposed by Grynkiewicz in \cite{davkem}. The combined proofs of these two results run to about 80 pages. One of our aims in the present work is to present a methodology leading to new, easier results and to shorter proofs of existing ones. This work is essentially self-contained.
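Kneser's Theorem can be verified exhaustively in a small cyclic group. The following brute-force Python sketch (an illustration, not part of the paper) checks the bound for every pair of nonempty subsets of $\mathbb Z/6\mathbb Z$ whose sumset is aperiodic:

```python
from itertools import combinations

n = 6  # work in Z/6Z, which has nontrivial proper subgroups

def subsets(s):
    return [frozenset(c) for k in range(1, len(s) + 1)
            for c in combinations(s, k)]

def sumset(A, B):
    return frozenset((a + b) % n for a in A for b in B)

def period(C):
    # Pi(C) = {x in G : C + x = C}
    return {x for x in range(n) if frozenset((c + x) % n for c in C) == C}

checked = 0
for A in subsets(range(n)):
    for B in subsets(range(n)):
        C = sumset(A, B)
        if period(C) == {0}:              # C aperiodic
            assert len(C) >= len(A) + len(B) - 1
            checked += 1
assert checked > 0
```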
We assume only Kneser's Theorem, Proposition \ref{Cay}, Theorem \ref{2atomejc} and Proposition \ref{strongip}. The last three results are proved in around 3 pages in \cite{hiso2007}. The isoperimetric method is a global approach introduced by the author, which derives additive inequalities from global properties of the fragments and atoms. The reader may refer to the recent paper \cite{hiso2007} for an introduction to the applications of this method. For a subset $X$, we put $\partial _S(X)=(X+S)\setminus X$ and $X^S=G\setminus (X+S)$. Suppose that $|G|\ge 2k-1$ and let $0\in S$ be a generating subset. The {\em $k$-th connectivity} of $S$ is defined as $$ \kappa _k (S )=\min \{|X+S|-|X|\ : \ \ \infty >|X|\geq k \ {\rm and}\ |X+S|\le |G|-k\}, $$ where $\min \emptyset =|G|-2k+1$. We shall say that a subset $X$ induces a {\em $k$--separation} if $ |X|\geq k$ and $|X^S|\geq k$. We shall say that $S$ is $k$--separable if some $X$ induces a $k$--separation. A finite subset $X$ of $G$ such that $|X|\ge k$, $|X^S|\ge k$ and $|\partial (X)|=\kappa _k(S)$ is called a {\em $k$--fragment} of $S$. A $k$--fragment with minimum cardinality is called a {\em $k$--atom}. Let $0\in S$ be a generating subset of an abelian group $G.$ We shall say that $S$ is a {\em Vosper subset} if for all $X\subset G$ with $|X|\ge 2$, we have $|X+S|\ge \min (|G|-1,|X|+|S|)$. A subgroup with maximal cardinality which is a $1$--fragment will be called a {\em hyper-atom}. In Section 3, we prove the existence of hyper-atoms and obtain the following result: Let $S$ be a finite generating subset of an abelian group $G$ such that $0 \in S,$ $| S | \leq (|G|+1)/2$ and $\kappa _2 (S)\le |S|-1.$ Let $H$ be a hyper-atom of $S$. Then $\phi (S)$ is either an arithmetic progression or a Vosper subset, where $\phi$ is the canonical morphism from $G$ onto $G/H$. A set $A$ is said to be {\em $H$--quasi-periodic} if there is an $x$ such that $(A\setminus (x+H))+H=A\setminus (x+H)$.
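On small cyclic groups the connectivity $\kappa_k(S)$ can be computed by brute force directly from the definition above (including the convention $\min\emptyset=|G|-2k+1$). The helper below is ours, purely illustrative:

```python
from itertools import combinations

def kappa_k(S, n, k):
    """k-th connectivity of S inside Z_n, by exhaustive search over X
       with |X| >= k and |X+S| <= n-k (min over the empty set is n-2k+1)."""
    vals = [len(XS) - len(X)
            for r in range(k, n + 1)
            for X in map(set, combinations(range(n), r))
            for XS in [{(x + s) % n for x in X for s in S}]
            if len(XS) <= n - k]
    return min(vals) if vals else n - 2 * k + 1

# For S = {0,1} in Z_8 the minimum is attained by intervals.
assert kappa_k({0, 1}, 8, 1) == 1
assert kappa_k({0, 1}, 8, 2) == 1
```

For $S=\{0,1\}$ the bound (\ref{bound}) is attained: $\kappa_1(S)=|S|-1$, and every interval is a $1$--fragment.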
In Section 5, we apply the global isoperimetric methodology introduced in \cite{hiso2007} to prove the following Vosper's type result: Let $S$ be a finite generating subset of an abelian group $G$ of order $\ge 2$. Let $T$ be a finite subset of $G$ such that $|S|\le |T|$, $S+T$ is aperiodic, $0\in S\cap T$ and $$ \frac{2|G|+2}3\ge |S+T|= |S|+|T|-1.$$ Let $H$ be a hyper-atom of $S$. Then $S$ and $T$ are $H$--quasi-periodic. Moreover $\phi(S)$ and $\phi(T)$ are arithmetic progressions with the same difference, where $\phi :G\mapsto G/H$ denotes the canonical morphism. This $\frac{2|G|}3$--Theorem easily implies several critical pair results. As an illustration we deduce from it new proofs of Kemperman's Structure Theorem and Lev's Theorem. Quite likely, the methods introduced in the present work lead to descriptions of the subsets $A,B$ with $|A+B|=|A|+|B|+m$, for some other small values of $m\ge 0$. However we shall limit ourselves to the case $m=-1$ in order to illustrate the method in a relatively simple context. \section{Terminology and preliminaries} Let $ A$ and $B$ be subsets of $ G $. The subgroup generated by $A$ will be denoted by $\subgp{A}$. The {\em Minkowski sum} is defined as $$A+B=\{x+y \ : \ x\in A\ \mbox{and}\ y\in B\}.$$ Recall the following two results: \begin{theirlemma}(folklore) Let $G$ be a finite group and let $A$ and $B$ be subsets such that $|A|+|B|\ge |G|+1$. Then $A+B=G$. \label{prehistorical} \end{theirlemma} \begin{theirtheorem}\label{scherk}(Scherk)\cite{scherck} Let $X$ and $Y$ be nonempty finite subsets of an abelian group $G$. If there is an element $c$ of $G$ such that $|X\cap(c-Y)| = 1$, then $|X + Y| \geq |X| + |Y| - 1.$ \end{theirtheorem} Scherk's Theorem follows easily from Kneser's Theorem. By a {\em proper} subgroup of $G$ we shall mean a subgroup of $G$ distinct from $G$.
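Both preliminary results can be checked mechanically on small instances; the snippet below (illustrative only, in $\mathbb{Z}_9$) verifies the folklore lemma and exhibits a uniquely representable element as in Scherk's Theorem:

```python
# Small checks in Z_9 of Lemma (folklore) and of Scherk's Theorem.
n = 9
G = set(range(n))
add = lambda A, B: {(a + b) % n for a in A for b in B}

# Folklore lemma: |A|+|B| >= |G|+1 forces A+B = G.
A, B = {0, 1, 2, 3, 4}, {0, 2, 4, 6, 8}
assert len(A) + len(B) == n + 1 and add(A, B) == G

# Scherk: a uniquely representable element c forces |X+Y| >= |X|+|Y|-1.
X, Y = {0, 1, 3}, {0, 1}
c = 0   # 0 = 0+0 is the only representation, i.e. |X cap (c-Y)| = 1
assert len({x for x in X if (c - x) % n in Y}) == 1
assert len(add(X, Y)) >= len(X) + len(Y) - 1
```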
The next lemma is related to a notion introduced by Lee \cite{lee}: \begin{theirlemma}\cite{balart}{Let $X$ be a subset of $G$. Then $(X^S)^{-S}+S=X+S$. \label{lee}} \end{theirlemma} Clearly $X\subset (X^S)^{-S}$, and hence $X+S\subset (X^S)^{-S}+S$. Conversely, take $x\notin X+S$. Then $x\in X^S$, and hence $x-S\subset X^S-S$, so that $(x-S)\cap (X^S)^{-S}=\emptyset$. It follows that $x\notin (X^S)^{-S}+S.$ \mbox{$\Box$} Throughout all this section, $S$ denotes a finite generating subset of an abelian group $G$ with $0\in S$. Note that the best one can get, using the isoperimetric method, for a general subset $S$ is obtained by decomposing modulo the subgroup generated by a translated copy of $S$ containing $0$. The reader may find all basic facts from the isoperimetric method in the recent paper \cite{hiso2007}. Notice that $\kappa _k (S)$ is the maximal integer $j$ such that for every finite subset $X\subset G$ with $|X|\geq k$, \begin{equation} |X+S|\geq \min \Big(|G|-k+1,|X|+j\Big). \label{eqisoper0} \end{equation} Formula (\ref{eqisoper0}) is an immediate consequence of the definitions. We shall call (\ref{eqisoper0}) the {\em isoperimetric inequality}. The reader may use this inequality as a definition of $\kappa _k (S)$. Since $|\partial (\{0\})|\ge \kappa _1(S)$, we have \begin{equation}\label{bound} \kappa _1(S)\le |S|-1. \end{equation} The basic intersection theorem is the following: \begin{theorem}\cite{halgebra,hiso2007} Let $S$ be a generating subset of an abelian group $G$ with $0\in S$. Let $A$ be a $k$--atom and let $F$ be a $k$-fragment such that $|A\cap F|\ge k$. Then $A\subset F.$ In particular, distinct $k$-atoms intersect in at most $k-1$ elements. \label{inter2frag} \end{theorem} The structure of $1$--atoms is the following: \begin{proposition} \label{Cay}\cite{hejc2,hjct} Let $ S$ be a generating subset of an abelian group $G$ with $0\in S$. Let $H$ be a $1$--atom of $S$ with $0\in H$. Then $H$ is a subgroup.
Moreover \begin{equation}\label{olson} \kappa _1(S)\geq \frac{|S|}{2}. \end{equation} \end{proposition} \begin{proof} Take $x\in H$. Since $x\in (H+x)\cap H$ and since $H+x$ is a $1$--atom, we have $H+x=H$ by Theorem \ref{inter2frag}. Therefore $H$ is a subgroup. Since $S$ generates $G$, we have $|H+S|\ge 2|H|$, and hence $\kappa _1(S)=|H+S|-|H|\ge \frac{|S+H|}{2}\ge \frac{|S|}{2}.$ \end{proof} Recently, Balandraud introduced some isoperimetric objects and proved a strong form of Kneser's Theorem using {Proposition} \ref{Cay}. The next result is proved in \cite{Hejcvosp1}. The finite case is reported with almost the same proof in \cite{hactaa}. \begin{theorem} { \cite{Hejcvosp1,hactaa} Let $S$ be a finite generating $2$--separable subset of an abelian group $G$ with $0\in S$ and $\kappa _2 (S) \leq |S|-1$. Let $H$ be a $2$--atom with $0\in H$. Then $H$ is a subgroup or $|H|=2.$ \label{2atomejc} } \end{theorem} A short proof of this result is given in \cite{hiso2007}. \begin{corollary} { [\cite{Hejcvosp1}, Theorem 4.6]\label{ejcf} Let $S$ be a $2$--separable finite subset of an abelian group $G$ such that $0\in S$, $|S|\leq (|G|+1)/2$ and $\kappa _2 (S) \leq |S|-1$. If $S$ is not an arithmetic progression then there is a subgroup $H$ which is a $2$--fragment of $S$. \label{vosper} } \end{corollary} \begin{proof} Suppose that $S$ is not an arithmetic progression. Let $H$ be a $2$--atom such that $0\in H$. If $\kappa _2(S)\leq |S|-2$, then clearly $\kappa _2(S)=\kappa _1(S)$ and $H$ is also a $1$--atom. By Proposition \ref{Cay}, $H$ is a subgroup. So we may assume $$\kappa _2(S)=|S|-1.$$ By Theorem \ref{2atomejc}, it would be enough to consider the case $|H|=2$, say $H=\{0,x\}$.
Put $N=\subgp{x}.$ Decompose $S=S_0\cup \cdots \cup S_j$ modulo $N$, where $|S_0+H|\le |S_1+H| \le \cdots \le |S_j+H|.$ We have $|S|+1=|S+H|=\sum \limits_{0\le i \le j}|S_i+\{0,x\}|.$ Then $|S_i|=|N|$, for all $i\ge 1$. We have $j\ge 1$, since otherwise $S$ would be an arithmetic progression. In particular $N$ is finite. We have $|N+S|<|G|$, since otherwise $|S|\ge |G|-|N|+1\ge \frac{|G|+2}{2},$ a contradiction. Now \begin{eqnarray*} |N|+|S|-1&=&|N|+\kappa _2(S)\\&\le& |S+N|= (j+1)|N|\\&\le& |S|+|N|-1, \end{eqnarray*} and hence $N$ is a $2$-fragment. \end{proof} Corollary \ref{vosper} was used to solve Lewin's Conjecture on the Frobenius number \cite{hactaa}. Corollary \ref{vosper} coincides with [\cite{Hejcvosp1}, Theorem 4.6]. A special case of this result is Theorem 6.6 of \cite{hactaa}. As mentioned in \cite{hplagne}, there was a misprint in this last statement. Indeed $|H| + |B| - 1$ should be replaced by $|H| + |B|$ in case (iii) of [\cite{hactaa}, Theorem 6.6]. Alternative proofs of Corollary \ref{vosper} (with $|S|\leq |G|/2$ replacing $|S|\leq (|G|+1)/2$), using Kemperman's Structure Theorem, were obtained by Grynkiewicz in \cite{davdecomp} and Lev in \cite{levkemp}. In the present paper, Corollary \ref{vosper} will be one of the pieces leading to a new proof of Kemperman's Theorem. Let $H$ be a subgroup. A partition $A=\bigcup \limits_{i\in I} A_i$ will be called an $H$--{\em decomposition} of $A$ if, for every $i$, $A_i$ is the nonempty intersection of some $H$--coset with $A$. An $H$--decomposition $A=\bigcup \limits_{i\in I} A_i$ will be called an $H$--{\em modular-progression} if it is an arithmetic progression modulo $H$. We need the following consequence of Menger's Theorem: \begin{proposition} \cite{hiso2007}{ Let $G $ be an abelian group and let $S$ be a finite subset of $G$ with $0\in S$.
Let $H$ be a subgroup of $G$ and let $S=S_0\cup \cdots \cup S_u$ be an $H$-decomposition with $0\in S_0$. Let $X=X_0\cup \cdots \cup X_t$ be an $H$-decomposition with $\frac{|G|}{|H|}\ge t+u+1.$ Assume that $\kappa _1(\phi (S))\ge u.$ Then there are pairwise distinct elements $n_1, n_2, \cdots, n_{r} \in [0,t]$ and elements $y_1, y_2, \cdots, y_{r} \in S\setminus H$ such that $$|\phi(X\cup (X_{{n_1}}+y_1)\cup \cdots \cup (X_{{n_r}}+y_{r}))|=t+u+1.$$ \label{strongip}} \end{proposition} We call the property given in Proposition \ref{strongip} the {\em strong isoperimetric property}. \section{Hyper-atoms } In this section, we investigate the new notion of a hyper-atom. Recall that $S$ is a Vosper subset if and only if $S$ is not $2$--separable or if $\kappa _2(S)\ge |S|$. \begin{lemma} { Let $S$ be a finite generating Vosper subset of an abelian group $G$ such that $0 \in S$. Let $X\subset G$ be such that $|X+S|=|X|+|S|-1$. Also assume that $|X|\ge |S|$ if $|X|+|S|=|G|$. Then for every $y\in S$, we have $|X+(S\setminus \{y\})|\ge |X|+|S|-2$. \label{vominus}} \end{lemma} \begin{proof} By the definition of a Vosper subset, we have $|X+S|\ge |G|-1$. There are two possibilities: {\bf Case} 1. $|X+S|= |G|-1$. Suppose that $|X+ (S\setminus \{y\})|\le |X|+|S|-3$ and take an element $z$ of $(X+S)\setminus (X+(S\setminus \{y\}))$. We have $z-y\in X$. Also $(X\setminus \{z-y\})+S\subset ((X+S)\setminus \{z\})$. By the definition of a Vosper subset, we have $|(X\setminus \{z-y\})+S|\ge \min (|G|-1,|X|-1+|S|)=|X|+|S|-1$. Clearly $X+S\supset ((X\setminus \{z-y\})+S)\cup \{z\}$. Hence $|X+S|\ge |X|+|S|$, a contradiction. {\bf Case} 2. $|X+S|= |G|$. Suppose that $|X+ (S\setminus \{y\})|\le |X|+|S|-3$ and take a $2$--subset $R$ of $(X+S)\setminus ( X+(S\setminus \{y\}))$. We have $R-y\subset X$. Also $(X\setminus (R-y))+S\subset (X+S)\setminus R$.
By the definition of a Vosper subset, $|(X\setminus (R-y))+S|\ge \min (|G|-1,|X|-2+|S|)$. We have $|X\setminus (R-y)|\le 1$, since otherwise, using $X+S\supset ((X\setminus (R-y))+S)\cup R$, we would get $|X+S|\ge |X|+|S|$, a contradiction. Then $|X|=|S|=3$, and hence $|G|=5$. Now by the Cauchy-Davenport Theorem, $|X+(S\setminus \{y\})|\ge |X|+|S|-2$, a contradiction. \end{proof} Let us prove a lemma about the {fragments in quotient groups}. \begin{lemma} { Let $G $ be an abelian group and let $S$ be a finite $2$-separable generating subset containing $0$. Let $H$ be a subgroup which is a $2$--fragment and let $\phi : G\mapsto G/H$ be the canonical morphism. Then \begin{equation}\label{cosetgraph} \kappa _1(\phi (S))= |\phi (S)|-1. \end{equation} Let $K$ be a subgroup which is a $1$--fragment of $\phi (S)$. Then $\phi ^{-1}(K)$ is a $2$--fragment of $S$. }\label{quotient} \end{lemma} \begin{proof} Put $|\phi (S)|=u+1$. Since $|G|>|H+S|,$ we have $\phi (S)\ne G/H$, and hence $\phi (S)$ is $1$--separable. Let $X\subset G/H$ be such that $X+\phi (S)\neq G/H$. Clearly $\phi^{-1} (X)+S\neq G$. Then $|\phi^{-1} (X)+S|\ge |\phi^{-1} (X)|+\kappa _2(S)= |\phi^{-1} (X)|+u|H|.$ It follows that $|X+\phi (S)||H|\ge |X||H|+u|H|.$ Hence $\kappa _1(\phi (S))\ge u=|\phi (S)|-1$. The reverse inequality follows by (\ref{bound}). This proves (\ref{cosetgraph}). Let $K$ be a subgroup which is a $1$--fragment of $\phi (S)$. Then $|K+\phi (S)|=|K|+u$. Thus $|\phi ^{-1} (K)+S|=|K||H|+u|H|.$ In particular, $\phi ^{-1} (K)$ is a $2$--fragment.\end{proof} Let $S$ be a finite generating subset of an abelian group $G$ such that $0 \in S.$ Proposition \ref{Cay} states that there is a $1$--atom of $S$ which is a subgroup. A subgroup with maximal cardinality which is a $1$--fragment will be called a {\em hyper-atom} of $S$. This definition may be adapted to non-abelian groups and even abstract graphs.
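In a small cyclic group the hyper-atom can be found by exhaustive search over the subgroups. In the sketch below (helper names ours, $G=\mathbb{Z}_8$, $S=\{0,1,4\}$) one gets $\kappa_1(S)=2$; the trivial subgroup is a $1$--atom, while the hyper-atom is the strictly larger subgroup $\{0,4\}$:

```python
from itertools import combinations

n, S = 8, {0, 1, 4}
add = lambda X: {(x + s) % n for x in X for s in S}

# kappa_1(S): minimise |X+S|-|X| over nonempty X with |X+S| <= n-1.
kappa1 = min(len(add(X)) - len(X)
             for r in range(1, n + 1) for X in combinations(range(n), r)
             if len(add(X)) <= n - 1)

# Subgroups of Z_8 are generated by the divisors of 8.
subgroups = [set(range(0, n, d)) for d in (1, 2, 4, 8)]
fragments = [H for H in subgroups
             if len(add(H)) <= n - 1 and len(add(H)) - len(H) == kappa1]
hyper_atom = max(fragments, key=len)

assert kappa1 == 2
assert hyper_atom == {0, 4}
```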
As we shall see, the hyper-atom is more closely related to the critical pair theory than the $2$--atom. \begin{theorem}\label{hyperatom} Let $S$ be a finite generating subset of an abelian group $G$ such that $0 \in S,$ $| S | \leq (|G|+1)/2$ and $\kappa _2 (S)\le |S|-1.$ Let $H$ be a hyper-atom of $S$. Then $\phi (S)$ is either an arithmetic progression or a Vosper subset, where $\phi$ is the canonical morphism from $G$ onto $G/H$. \end{theorem} \begin{proof} Let us show that \begin{equation}\label{referee} 2|\phi (S)|-1\le \frac{|G|}{|H|}.\end{equation} Clearly we may assume that $G$ is finite. Observe that $2|S+H|-2|H|\le 2|S|-2< |G|.$ It follows, since $|S+H|$ is a multiple of $|H|$, that $2|S+H|\le |G|+|H|,$ and hence (\ref{referee}) holds. Suppose now that $\phi (S)$ is not a Vosper subset. By the definition of a Vosper subset, $\phi (S)$ is $2$--separable and $\kappa _2(\phi(S))\le |\phi(S)|-1.$ Observe that $\phi(S)$ cannot have a $1$--fragment $M$ which is a non-zero subgroup. Otherwise by Lemma \ref{quotient}, $\phi ^{-1}(M)$ is a $2$--fragment of $S$ containing strictly $H$, contradicting the maximality of $H$. By (\ref{referee}) and Corollary \ref{vosper}, $\phi(S)$ is an arithmetic progression.\end{proof} Theorem \ref{hyperatom} implies a result proved by Plagne and the author \cite{hplagne} and some extensions of it, proved using Kemperman's Theory, obtained by Grynkiewicz in \cite{davdecomp} and Lev in \cite{levkemp}. All these results follow from Theorem \ref{hyperatom}. Two main new facts in Theorem \ref{hyperatom} are: \begin{itemize} \item The subgroup $H$ in Theorem \ref{hyperatom} is well described as a hyper-atom; \item The equality $|H+S|-|H|=\kappa _1(S)$ is more precise than the inequality $|H+S|\le |H|+|S|-1$ in the previous results. This equality will be needed later.
\end{itemize} \section{Transfer Lemmas} \begin{lemma}\label{AP1} Let $G$ be an abelian group and let $Y\subset G$ be an arithmetic progression containing $0$. Let $X\subset \subgp{Y}$ be such that $|X+Y|=|X|+|Y|-1$. Suppose that $|\subgp{Y}|\neq |X+Y|$ or $|Y|=2$. Then $X$ is an arithmetic progression with the same difference as $Y$. \end{lemma} \begin{proof} The Lemma is obvious, once it is observed that a subset with cardinality $|\subgp{Y}|-1$ of the cyclic group $\subgp{Y}$ (generated by an arithmetic progression containing $0$) is an arithmetic progression with arbitrary difference.\end{proof} \begin{lemma}\label{nongenerating} Let $S$ and $T$ be finite subsets of an abelian group $G$ such that $0\in S\cap T$ and $S+T$ is aperiodic. Also assume that $2\le |S|$ and that $|S+T|= |S|+|T|-1$. If $T\not\subset \subgp{S}$ then $T$ is $M$--quasi-periodic, where $M=\subgp{S}$. \end{lemma} \begin{proof} Decompose $T=T_0\cup \cdots \cup T_t$ modulo $M$. Put $W=\{i \in [0,t] : |T_i+S|<|M|\}.$ Using (\ref{olson}), we have \begin{eqnarray*}|T+S|&=& \sum _{i\in W}|T_i+S|+\sum _{i\notin W}|M|\\&\ge& \sum _{i\in W}(|T_i|+\kappa _1(S))+ \sum _{i\notin W}|T_i+S| \\ &\ge& |T|+|W|\frac{|S|}{2}. \end{eqnarray*} It follows that $W=\{j\}$, for some $j$. Since $T+S$ is aperiodic, we have by Kneser's Theorem, $|T_j+S|\ge |T_j|+|S|-1$. Since $$|T|+|S|-1=|T+S|\ge \sum _{i\neq j}|T_i+S|+|T_j|+|S|-1,$$ the result follows. \end{proof} \begin{lemma}\label{transfer} Let $S$ and $T$ be finite subsets of an abelian group $G$ such that $0\in S\cap T$ and $S+T$ is aperiodic. Let $H$ be a finite subgroup and let $\phi:G\mapsto G/H$ denote the canonical morphism. Let $S=\bigcup \limits_{0\le i\le u}S_i$ be an $H$--decomposition such that $\phi(S_0), \cdots ,\phi(S_u) $ is a progression with difference $d$ and that $S\setminus S_u$ is $H$--periodic.
Then there is an $H$--decomposition $T=\bigcup \limits_{0\le i\le t}T_i$, such that $\phi(T_0), \cdots ,\phi(T_t) $ is a progression with difference $d$ and such that $T\setminus T_t$ is $H$--periodic. \end{lemma} The proof is an easy exercise. \begin{lemma}\label{tpowers} Let $S$ and $T$ be subsets of a finite abelian group $G$ generated by $S$, such that $S+T$ is aperiodic, $0\in S\cap T$ and $|S+T|=|S|+|T|-1$. Then $T^S-S$ is aperiodic and $|T^S-S|=|T^S|+|S|-1$. \end{lemma} \begin{proof} The set $T^S-S$ is aperiodic by Lemma \ref{lee}. Clearly $T^S-S\subset G\setminus T.$ Thus $|T^S-S|\le |G|-|T|=|T^S|+|S|-1$. By Kneser's Theorem we have $|T^S-S|\ge |T^S|+|S|-1.$\end{proof} \section{The $\frac{2|G|}3$--Theorem } The following result encodes efficiently the critical pair Theory. \begin{theorem}\label{twothird} Let $S$ be a finite generating subset of an abelian group $G$ of order $\ge 2$. Let $T$ be a finite subset of $G$ such that $|S|\le |T|$, $S+T$ is aperiodic, $0\in S\cap T$ and $$ \frac{2|G|+2}3\ge |S+T|= |S|+|T|-1.$$ Let $H$ be a hyper-atom of $S$ and let $\phi :G\mapsto G/H$ denote the canonical morphism. Then \begin{itemize} \item $S$ and $T$ are $H$--quasi-periodic, \item $\phi(S)$ and $\phi(T)$ are arithmetic progressions with the same difference. \end{itemize} \end{theorem} \begin{proof} Set $|G|=n$, $h=|H|$, $|\phi(S)|=u+1$, $|\phi(T)|=t+1$ and $q=\frac{n}{h}$. We have $|S|\le \frac{|S|+|T|}2\le \frac{n+1}3<\frac{n+1}2$. Assume first that $|H|=1$. By Corollary \ref{vosper}, $S$ is an arithmetic progression. The result holds clearly, observing that $S$ generates $G$. Assume now that $|H|\ge 2.$ Take $H$--decompositions $S=\bigcup \limits_{0\le i\le u}S_i$, $T=\bigcup \limits_{0\le i\le t}T_i$ and $S+T=\bigcup _{0\le i \le k}E_i$. Without loss of generality, we shall assume that \begin{itemize} \item $0\in S_0$ and $|S_0|\ge \cdots \ge |S_u|$. \item $|E_{t+1}|\ge \cdots \ge |E_k|$ and $T_i+S_0\subset E_i$, for all $0\le i \le t$.
\end{itemize} For $0\le i \le t$, we put $\alpha _i=|E_i|-|T_i|$. By the definition we have $u|H|=|H+S|-|H|=\kappa _2(S)\le |S|-1.$ It follows that for all $u\ge j\geq 0$ \begin{equation}\label{plein} |S_{u-j}|+\cdots +|S_u|\ge j|H|+1. \end{equation} Put $P=\{i \in [0,t] : |E_i|=h\}$ and $W=\{i \in [0,t] : |E_i|<h\}.$ By (\ref{plein}), $|S_0|>\frac{h}2$. Thus $\subgp{S_0}=H$, by Lemma \ref{prehistorical}. We have by (\ref{olson}), $|T_i+S_0|\ge |T_i|+\kappa _1(S_0) \ge |T_i|+\frac{|S_0|}{2},$ for all $i\in W$. Therefore \begin{eqnarray}|T+S| &\ge& \sum _{i\in W}|T_i+S_0|+\sum _{i\in P}|E_i|+\sum _{t+1\le i \le k}|E_i| \nonumber \\&\ge& |T|+|W|\frac{|S_0|}{2}+\sum _{i\in P}\alpha _i+\sum _{t+1\le i \le k}|E_i|.\label{X11} \end{eqnarray} {\bf Claim} 0. $ t+1+u\le q.$ Suppose the contrary. By Lemma \ref{prehistorical}, $\phi(S\setminus S_u)+\phi(T)=G/H$. In particular $k+1=q$ and $|E_i|\ge |S_{u-1}|$, for all $i$. Therefore we have by (\ref{plein}), \begin{eqnarray*} |S+T| &\ge &(t+1) |S_0|+(q-t-1)|S_{u-1}|\\ &=& (2t+2-q)|S_0|+(q-t-1)(|S_0|+|S_{u-1}|)\\ &\ge& (2t+2-q)\frac{uh+1}{u+1}+2\frac{uh+1}{u+1}(q-t-1)\\ &\ge&\frac{un+3}{u+1}, \end{eqnarray*} noticing that $q< t+1+u\le 2t+1.$ Therefore $u=1$ and $q=t+1$. Since $\frac{n}3>|S|-1\ge \kappa _2(S)\ge h=\frac{n}q$, we must have $q\ge 4.$ We must have $|W|\ge 3, $ since otherwise by (\ref{X11}), $$|T+S|\ge (q-2)h+2|S_0|\ge(q-1)h+1=n-h+1> \frac{2n}3+1, $$ a contradiction. We must have $|W|=3,$ since otherwise by (\ref{X11}), $|T+S|\ge |T|+2|S_0|\ge |T|+|S|, $ a contradiction. Then $|P|=q-|W|\ge 4-3=1$. Since $\subgp{S}=G$ and $u=1$, we have $\subgp{\phi(S)}=\subgp{\phi(S_1)}=G/H$, and hence there is a $\gamma \in P$ such that $T_\gamma+S_1\subset E_j,$ for some $j\in W$. By Lemma \ref{prehistorical}, $ |T_\gamma| +|S_1|\le h$. In particular, $\alpha _\gamma \ge |S_1|$. By (\ref{X11}), $|T+S|\ge |T|+|S_1|+\frac{3|S_0|}2>|T|+|S|,$ a contradiction. Hence $ t+1+u\le q.$ {\bf Claim} 1.
$|\phi (S+T)|=|\phi (S)|+|\phi (T)|-1.$ By Claim 0, (\ref{cosetgraph}) and (\ref{eqisoper0}), we have $$k+1=|\phi(S+T)|\ge \min(q,t+1+u)=t+1+u.$$ Also, we have $(t+1)h\ge |T|\ge |S|> \kappa _2(S)=uh$, and hence $t\ge u.$ By (\ref{cosetgraph}), $\kappa _1(\phi (S))= |\phi (S)|-1$. By Proposition \ref{strongip}, there is a subset $J\subset [0,t]$ with $|J|=u$ and a family $\{ m_i : i\in J\}$ of integers in $[1,u]$ such that $T+S $ contains the $H$--decomposition $(\bigcup \limits_{0\le i\le t}T_i+S_0)\cup (\bigcup \limits_{ i\in J}T_i+S_{m_i})$. We have $k= t+u,$ since otherwise \begin{eqnarray*} |S+T| &\ge& \sum _{i\in J}|T_i+S_0|+\sum _{i\notin J}|T_i+S_0|+\sum _{i\in J}|T_i+S_{m_i}|+\sum _{i\ge t+u+1}|E_i|\\ &\ge& u|S_0|+|T|+|S_{u}|\ge |T|+|S|, \end{eqnarray*} a contradiction. Thus $$|\phi (S+T)|=|\phi (S)|+|\phi (T)|-1.$$ {\bf Claim} 2. $|E_{k-1}|\ge |S_{u-1}|$. By Theorem \ref{hyperatom}, $\phi(S)$ is an arithmetic progression or a Vosper subset. Let us show that \begin{equation} |\phi (T)+\phi (S\setminus S_0)|\ge t+u. \label{omit} \end{equation} Notice that (\ref{omit}) is obvious if $\phi(S)$ is an arithmetic progression and follows by Lemma \ref{vominus} if $\phi(S)$ is a Vosper subset. Claim 2 follows now. {\bf Claim} 3. If $u\ge 2$ then $q-1\ge t+u+2$. Assume $u\ge 2$. We must have \begin{equation} |P|\ge 2.\label{EQP>1}\end{equation} Suppose the contrary. By Lemma \ref{prehistorical}, $|T_i|<\frac{h}{3}<\frac{|S_0|}2$ for every $i\in W$. We have using Claim 2 and (\ref{plein}), \begin{eqnarray*} 2|T|>|S+T|&\ge& \sum _{i\in W}|T_i+S_0|+ \sum _{i\in P}|E_i|+ \sum _{t+1\le i\le k}|E_i|\\ &\ge& \sum _{i\in W}2|T_i|+ |P||H|+|S_{u-1}|+ |S_u| \\ &\ge& \sum _{i\in W}2|T_i|+ 2h+1>2|T|, \end{eqnarray*} a contradiction.
Take $U\subset P$ with $|U|=2.$ We have using (\ref{plein}), $$ 2|S_0|\ge |S_0|+|S_{u-1}|\ge \frac{2}{3}(|S_u|+|S_{u-1}|+|S_{u-2}|)\ge \frac{4h+2}3.$$ By (\ref{omit}), we have $|E_i|\ge |S_{u-1}|$ for all $i\le k-1$. Clearly $|E_i|\ge |S_0|$, for all $0\le i \le t$. The claim must hold since otherwise we have using Claim 2 and (\ref{plein}), \begin{eqnarray*} |S+T|&\ge& \sum _{i \in U} |E_i|+(t-1)|S_0|+(u-1)|S_{u-1}|+|S_u|\\ &=& 2h+ (t-2)|S_0|+(u-2)|S_{u-1}|+(|S_0|+|S_{u-1}|+|S_u|)\\ &=& 2h+(t-u)|S_0|+(u-2)(|S_0|+|S_{u-1}|)+(|S_0|+|S_{u-1}|+|S_u|)\\ &\ge& 2h+ (t-u)\frac{2h+1}3+\frac{(4h+2)(u-2)}3+2h+1\\ &>&(t+u+2)\frac{2h}3+1\ge q\frac{2h}3+1=\frac{2n}{3}+1, \end{eqnarray*} a contradiction. By Claim 1 and Claim 3, $\phi(S)$ cannot be a Vosper subset. By Theorem \ref{hyperatom}, $\phi(S)$ is an arithmetic progression. By Lemma \ref{AP1} and since $t+1+u<q$ for $u\ge 2$, $\phi(T)$ is an arithmetic progression with the same difference as $\phi (S)$. Now we shall reorder the $S_i$'s and $T_i$'s using the modular progression structure. Take $H$--decompositions $S=\bigcup \limits_{0\le i\le u}B_i$, $ T=\bigcup \limits_{0\le i\le t}A_i$ and $S+T=\bigcup _{0\le i \le k}R_i$. Since $-d$ is a difference of $\phi(S)$, we may assume $0\in B_0$ and that \begin{enumerate} \item $\phi(B_0), \cdots , \phi(B_u)$ is an arithmetic progression with difference $d$ and $|B_0|\ge |B_u|$. \item $\phi(A_0), \cdots , \phi(A_t)$ is an arithmetic progression with difference $d$. \item $A_i+B_0\subset R_i$, for all $0\le i \le t$; \item $A_{t}+B_{i}\subset R_{t+i}$, for all $1\le i \le u$. \end{enumerate} We shall put $Y=\{i \in [0,t] : |R_i|<h\}.$ Since $|B_0|\ge |B_u|$, we have using (\ref{plein}), $|B_0|>\frac{h}2$. Thus $\subgp{B_0}=H$, by Lemma \ref{prehistorical}. We have by (\ref{olson}), $|A_i+B_0|\ge |A_i|+\kappa _1(B_0) \ge |A_i|+\frac{|B_0|}{2},$ for all $i\in Y$.
Thus \begin{eqnarray}|T+S|&\ge & \sum _{0\le i \le t}|A_i+B_0|+\sum _{1\le i \le u}|A_t+B_i| \nonumber\\ &\ge & |T|+|Y|\frac{|B_0|}{2}+\sum _{1\le i \le u}|A_t+B_i|.\label{Y11} \end{eqnarray} By (\ref{Y11}), we have $|T+S|\ge|T|+|Y|\frac{|B_0|}{2}+|S\setminus B_0|,$ and hence $|Y|\le 1.$ {\bf Claim} 4. $Y=\emptyset$. Suppose the contrary. Then $Y=\{r\}$, for some $0\le r \le t$. Assume first $r<t$. By Lemma \ref{prehistorical}, $h\ge |A_r|+|B_0|$. Thus \begin{eqnarray*} |S+T|&\ge&|R_r|+ th+ |A_t+(S\setminus B_{0})|\\ &\ge & |B_{0}|+|A_r|+|B_{0}|+(t-1)h+|A_t|+ \sum _{1\le i\le u-1}|B_{i}|\\ &\ge& |T|+|S|-|B_u|+|B_0|\ge |S|+|T|, \end{eqnarray*} a contradiction. Then $r=t$. By Lemma \ref{prehistorical}, $h\ge |A_t|+|B_0|$. Also $|R_t|\ge |A_{t-1}|$. Hence \begin{eqnarray*} |S+T|&\ge& th+|R_t|+ |A_t+(S\setminus B_{0})|\\ &\ge & |A_t|+|B_{0}|+(t-1)h+|A_{t-1}|+ \sum _{1\le i\le u}|B_{i}|\\ &\ge& |T|+|S|, \end{eqnarray*} a contradiction. Let us show that $|R_i|=h,$ for all $i\le k-1$. Suppose that there is an $r\le k-1$ with $|R_r|<h$. By Claim 4, $t+1\le r$. Since $(A_{t}+B_{r-t})\cup (A_{t-1}+B_{r-t+1})\subset R_r$, we have using (\ref{plein}), $2h\ge |A_t|+|B_{r-t}|+|A_{t-1}|+|B_{r-t+1}|\ge |A_t|+|A_{t-1}|+h+1$, by Lemma \ref{prehistorical}. Thus $|T+H|-|T|\ge 2h-(|A_t|+|A_{t-1}|)\ge h+1.$ Now $|S+T|\ge |T+H|+|S\setminus B_0|\ge |T|+h+1+|S|-|B_0|>|T|+|S|,$ a contradiction. Since $S+T$ is aperiodic, the set $B_u+A_t$ is aperiodic. By Kneser's Theorem $|B_u+A_t|\ge |B_u|+|A_t|-1.$ Now we have \begin{eqnarray*} |S|+|T|-1=|S+T|&\ge& (t+u)h+|A_t+B_{u}|\\ &\ge & (t+u)h+|A_t|+|B_{u}|-1\\ &\ge& |T|+|S|-1. \end{eqnarray*} Thus $|S\setminus B_u|=uh$ and $|T\setminus A_t|=th$, so that $S$ and $T$ are $H$--quasi-periodic.\end{proof} Notice that the subgroup in Theorem \ref{twothird} depends on only one of the sets (namely $S$), while the subgroup in Kemperman Structure Theorem depends on $S$ and $T$.
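A toy instance of the $\frac{2|G|}3$--Theorem can be checked by hand in $\mathbb{Z}_{12}$. In the sketch below the subgroup $H=\{0,6\}$ is the hyper-atom of $S$ (this can be confirmed by an exhaustive search as in Section 3), and the quotient map $G\to G/H$ is realized as reduction modulo $6$:

```python
# Toy instance in Z_12 of the 2|G|/3-Theorem; H = {0,6} is the hyper-atom of S.
n = 12
S, T = {0, 1, 6}, {0, 1, 2, 6, 7}
add = lambda A, B: {(a + b) % n for a in A for b in B}

ST = add(S, T)
assert len(ST) == len(S) + len(T) - 1                  # |S+T| = |S|+|T|-1
assert 3 * len(ST) <= 2 * n + 2                        # |S+T| <= (2|G|+2)/3
assert all(add(ST, {x}) != ST for x in range(1, n))    # S+T is aperiodic

H = {0, 6}
# Quasi-periodicity: removing one coset trace leaves an H-periodic set.
assert add(S - {1}, H) == S - {1}
assert add(T - {2}, H) == T - {2}
# The images mod H are arithmetic progressions with the same difference 1.
assert {s % 6 for s in S} == {0, 1}
assert {t % 6 for t in T} == {0, 1, 2}
```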
\section{The Structure Theory} A pair $\{A,B\}$ of subsets of an abelian group $G$ will be called a {\em weak pair} if one of the following conditions holds: \begin{itemize} \item [(WP1)] $\min(|A|,|B|)=1$; \item [(WP2)] There is $d\in G$, with order $\ge |A|+|B|-1$, such that $A$ and $B$ are arithmetic progressions with difference $d$; \item [(WP3)] $A$ is aperiodic and there is a finite subgroup $H$ and $g\in G$ such that $A,B$ are contained in some $H$--cosets and $g-B=H\setminus A$; \item [(WP4)] There is a subgroup $H$ such that $A,B$ are contained in some $H$-cosets and $|A|+|B|=|H|+1 $, and moreover there is $c\in G$ such that $|(c-A)\cap B|=1.$ \end{itemize} An {\em elementary pair} is a pair $\{A,B\}$ satisfying one of the conditions (WP1), (WP2), (SP3) and (SP4), where \begin{itemize} \item (SP3) = ``(WP3) and, for every $c\in G$, $|(c-A)\cap B|\neq 1$''; \item (SP4) = ``(WP4) and, for every $d\in G\setminus \{c\}$, $|(d-A)\cap B|\neq 1$''. \end{itemize} The notion of a weak pair was suggested by Kemperman \cite{kempacta} (end of Section 5, page 82) in order to formulate an easier description. The reader who wants to work with elementary pairs may use the following lemma, implicit in Kemperman's work \cite{kempacta}. \begin{theirlemma} (Kemperman) Let $\{A,B\}$ be a weak pair. There is a nonzero subgroup $H$ and $H$-quasi-periodic decompositions $A=A_0\cup A_1$ and $B=B_0\cup B_1$ such that $(A_1,B_1)$ is an elementary pair and $|(\phi(a_1+b_1)-\phi(A))\cap \phi (B)|=1$, where $\phi : G\mapsto G/H$ is the canonical morphism and $a_1\in A_1$ and $b_1\in B_1$. \label{strongpair} \end{theirlemma} The proof of the above lemma can be done within one page or less using Kneser's Theorem. The reader may also refer to the Appendix of Lev's paper \cite{levkemp}.
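Condition (WP3) can be seen concretely in $\mathbb{Z}_{10}$ with $H=G$ and $g=0$: for a complement pair $g-B=G\setminus A$, the sumset misses exactly one element, so equality holds in Kneser's bound. A small check (illustrative only):

```python
# A WP3-type pair in Z_10 (H = G, g = 0): g - B = G \ A makes A+B miss
# exactly one element, so |A+B| = |A|+|B|-1.
n = 10
G = set(range(n))
A = {0, 1, 2}                          # aperiodic
B = {(0 - x) % n for x in G - A}       # so that 0 - B = G \ A
AB = {(a + b) % n for a in A for b in B}
assert len(AB) == len(A) + len(B) - 1
assert AB == G - {0}
```

Since $|A|+|B|=|G|$ for such a pair, the computed sumset has cardinality $|G|-1$, here missing exactly the element $0$.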
\begin{theorem}\label{finalcor} Let $S$ and $T$ be finite subsets of an abelian group $G$ such that $\subgp{S\cup T}=G$. Assume that $|S|\le |T|$, $0\in S\cap T$ and $$ |S+T|= |S|+|T|-1.$$ Also assume that $S+T$ is aperiodic and that $\{S,T\}$ is not a weak pair. \begin{itemize} \item[(i)] If $G\neq \subgp{S}$ then $T$ is $\subgp{S}$--quasi-periodic; \item[(ii)] If $G= \subgp{S}$ and $G\neq \subgp{U}$ then $S$ and $T$ are $\subgp{U}$--quasi-periodic, where $U=T^S-a$, for some $a\in T^S$; \item[(iii)] If $G= \subgp{S}= \subgp{U}$ then there is a proper subgroup $H$ of $G$ such that $S$ and $T$ are $H$--quasi-periodic modular progressions. Moreover $H$ is either a hyper-atom of $S$ or a hyper-atom of $U$. \end{itemize} \end{theorem} \begin{proof} Since $\{S,T\}$ is not a weak pair, we have $|S|\ge 2$. Put $L=\subgp{S}$. If $L\neq G$, then $T$ is $L$--quasi-periodic by Lemma \ref{nongenerating} and (i) holds. So we may assume without loss of generality that $S$ generates $G$. We have $|T^S|\ge 2$: otherwise, writing $G=(T+S)\cup \{v\}$, we have clearly $v-S\subset G\setminus T$, and in fact equality holds by the relation $|T+S|=|T|+|S|-1$; then $\{T,S\}$ is a weak pair, a contradiction. Notice that $|S|+|T|+|T^S|=|S|+|T|+(|G|-|T+S|)=|G|+1$. \begin{itemize} \item If $|T|\le |T^S|$, then $|S|+|T|\le \frac{2(|S|+|T|+|T^S|)}3=\frac{2|G|+2}3$, and (iii) holds by Theorem \ref{twothird}. \item If $|T|> |T^S|$, then $|S|+|T^S|\le \frac{2(|S|+|T|+|T^S|)}3=\frac{2|G|+2}3$. Assume first $|U|\ge |S|$. By Theorem \ref{twothird}, $S$ is a quasi-periodic modular $H$--progression, where $H$ is the hyper-atom of $S$. By Lemma \ref{transfer}, $T$ is a quasi-periodic modular $H$--progression. Thus (iii) holds. Assume now $|U|<|S|$ and that $U$ generates $G$. By Theorem \ref{twothird}, $S$ is a quasi-periodic modular $H$--progression, where $H$ is the hyper-atom of $U$. By Lemma \ref{transfer}, $T$ is a quasi-periodic modular $H$--progression. Thus (iii) holds.
\item If $\subgp{U}\neq G$, then by Lemma \ref{nongenerating}, $S$ and $T$ are $\subgp{U}$--quasi-periodic. Thus (ii) holds. \end{itemize} The proof is complete. \end{proof} Following a suggestion of Kemperman \cite{kempacta}, we formulate the main classical critical pair result in the following way: \begin{corollary}\label{KST} (Kemperman Structure Theorem \cite{kempacta}) Let $A,B$ be finite subsets of an abelian group $G$ with $|G|\ge 2.$ Then the following conditions are equivalent: \begin{itemize} \item [(I)] $|A+B|= |A|+|B|-1,$ and moreover $|(c-A)\cap B|=1$ for some $c$ if $A+B$ is periodic. \item [(II)] There is a nonzero subgroup $H$ and $H$-quasi-periodic decompositions $A=A_0\cup \cdots \cup A_u$ and $B=B_0\cup \cdots \cup B_t$ such that $(A_0,B_0)$ is an elementary pair and $|(\phi(A_0+B_0)-\phi(A))\cap \phi (B)|=1$, where $\phi : G\mapsto G/H$ is the canonical morphism. \end{itemize} \end{corollary} \begin{proof} The implication (II) $\Rightarrow $ (I) is quite easy. Suppose that (I) holds. Without loss of generality we may take $|A|\ge |B|$, $0\in A\cap B$ and $G=\subgp{A\cup B}$. Let $Q$ denote the period of $A+B$ and let $\psi : G\mapsto G/Q$ denote the canonical morphism. {\bf Case} 1. $|Q|=1.$ Clearly (II) holds with $H=\{0\}$ if $\{A,B\}$ is a weak pair. Suppose that $\{A,B\}$ is not a weak pair. By Theorem \ref{finalcor}, there is a proper subgroup $H$ such that $A$ and $B$ are $H$--quasi-periodic. Take a minimal such subgroup. By $\rho _H(X)$ we shall mean the trace on $X$ of the unique non-full $H$--coset, if such a coset exists. The pair $\{\rho_H(A), \rho_H(B)\}$ must be a weak pair, by Theorem \ref{finalcor}. Since $A+B$ is aperiodic, we have $|(\phi(\rho_H(A)+\rho_H(B))-\phi(A))\cap \phi (B)|=1$, where $\phi : G\mapsto G/H$ is the canonical morphism. {\bf Case} 2.
$|Q|\ge 2.$ Take two $Q$--decompositions $A=A_0\cup \cdots \cup A_u$ and $B=B_0\cup \cdots \cup B_t$ such that $c\in A_0+B_0$. Observe that $\psi (c)=\psi (A_0)+\psi (B_0)$ has a unique expression. Hence by Scherk's Theorem \ref{scherk}, $|\psi(A)+\psi (B)|\ge |\psi(A)|+|\psi (B)|-1=t+u+1.$ We must have $|\psi(A)+\psi (B)|= t+u+1,$ since otherwise $|A+B|=|\psi(A+B)||Q|\ge |A|+|B|.$ By Lemma \ref{prehistorical}, we have $$|A_0|+|B_0|\le |Q|+1.$$ Now we have \begin{eqnarray*} \sum _{0\le i \le u}|A_i|+\sum _{0\le i \le t}|B_i|-1&=& |A+B|\\&=&|Q|(u+t+1)\ge |Q|(u+t)+|A_0|+|B_0|-1\ge |A|+|B|-1, \end{eqnarray*} observing that $|A|+|B|-1 = |A+B| = |\psi(A+B)||Q|=|A+Q|+|B+Q|-|Q|.$ It follows that \begin{itemize} \item $|A_1|=\cdots =|A_u|=|B_1|=\cdots =|B_t|=|Q|,$ \item $|A_0|+|B_0|\le |Q|+1.$ \end{itemize} Therefore $\{A_0,B_0\}$ is a weak pair. \end{proof} We shall now prove Lev's Structure Theorem. \begin{corollary}\label{LEVK} (Lev's Structure Theorem \cite{levkemp}) Let $A$ and $B$ be finite nonempty subsets of a non-zero abelian group $G$, with $|A|\le |B|$, and $|A+B|=|A|+|B|-1$. Suppose that either $A+B$ is aperiodic or there is a $c$ with $|(c-A)\cap B| = 1$. Then there exists a finite proper subgroup $H\subset G$ such that $A$ and $B$ are $H$--quasi-periodic and $\{ \phi(A), \phi (B)\}$ is a weak pair, where $\phi : G\mapsto G/H$ is the canonical morphism. \end{corollary} \begin{proof} Without loss of generality we may assume that $0\in A\cap B$ and $\subgp{A\cup B}=G$. Put $L=\subgp{B}$. Let $P$ be the period of $A+B$ and let $\sigma : G\mapsto G/P$ be the canonical morphism. {\bf Case} 1. $|P|=1$. \begin{itemize} \item [(i)] If $G\neq L$ then $A$ is $L$--quasi-periodic and $|\phi(B)|=1$, where $\phi : G\mapsto G/L$ is the canonical morphism, by Theorem \ref{finalcor}.
\item [(ii)] If $G= L \neq \subgp{U}$, then by Theorem \ref{finalcor}, $B$ and $A$ are $\subgp{U}$--quasi-periodic, where $U=A^B-a,$ for some $a\in A^B.$ Also by Theorem \ref{finalcor}, $\phi(B+A)=G/\subgp{U}$, where $\phi : G\mapsto G/\subgp{U}$ is the canonical morphism. \item [(iii)] If $G= L= \subgp{U}$, then there is a proper subgroup $H$ such that $B$ and $A$ are $H$--quasi-periodic modular progressions, by Theorem \ref{finalcor}. \end{itemize} In order to show that $\{\phi(B),\phi (A)\}$ is a weak pair, it is enough to observe that there is a uniquely representable element in $\phi(B)+\phi (A)$. This follows easily since $A+B$ is aperiodic and since $A$ and $B$ are quasi-periodic. {\bf Case} 2. $A+B$ is periodic. By Case 1, there is a subgroup $K$ of $G/P$ such that $\sigma (A)$ and $\sigma (B)$ are $K$--quasi-periodic and $\{\tau (\sigma (A)),\tau (\sigma (B))\}$ is a weak pair, where $\tau : G/P\mapsto (G/P)/K$ is the canonical morphism. Put $\psi=\tau\circ \sigma$ and $Q=\sigma ^{-1}(K)$. Clearly $A$ and $B$ are $Q$--quasi-periodic, and clearly $\{\psi(A),\psi (B)\}$ is a weak pair.\end{proof} {\bf Acknowledgement}. The author wishes to thank an anonymous referee for valuable comments on a preliminary draft related to this work. \end{document}
\begin{document} \title{On Decomposition of the Last Passage Time of Diffusions} \begin{abstract} For a regular transient diffusion, we provide a decomposition of its last passage time to a certain state $\alpha$. This is accomplished by transforming the original diffusion into two diffusions using the occupation times of the regions above and below $\alpha$. Based on these two processes, the decomposition formula of the Laplace transform of the last passage time is derived explicitly in a simple form in terms of Green functions. This formula may be used for further investigations of diffusions with switching parameters. As one example, we demonstrate an application to a diffusion with two-valued drift. \end{abstract} \noindent Keywords: diffusion; last passage time; decomposition; occupation time; Green function \noindent Mathematics Subject Classification (2010): 60J60\\ \section{Introduction} \subsection{General setup} Fix a probability space $(\Omega,\hh,\p)$ and a filtration $\F=(\F_t)_{t\ge 0}$. Let $X=\{\omega(t),t\ge0;\p^x\}$ be a one-dimensional regular canonical diffusion starting at $x\in\R$. Its state space is given by $\mathcal{I}=(\ell,r)\subset\R$. Our analysis focuses on a decomposition of the last passage time of some fixed level $\alpha\in \mathcal{I}$ which is denoted by \begin{equation}\label{eq:lambda} \lambda_\alpha:=\sup\{t:\omega(t)=\alpha\} \end{equation} with $\sup\emptyset=0$. Our objective is to decompose the Laplace transform of $\lambda_\alpha$ into a simple formula that is convenient for use (see Proposition \ref{prop:laplace-product}). We prove it in Section \ref{sec:proof} and provide an example in Section \ref{sec:example}. In Section \ref{sec:appl}, as an application, we illustrate how the decomposition is implemented in computing the last passage time distribution of a diffusion with two-valued drift. Below we introduce necessary tools for our analysis.
For the basic facts regarding linear diffusions, we refer the reader to Chapter II in \citet{borodina-salminen}. The scale function and the speed measure of $X$ are given by $s(\cdot)$ and $m(\cdot)$, respectively. We assume $s$ and $m$ are absolutely continuous with respect to the Lebesgue measure and have smooth derivatives. The killing measure is given by $k(\cdot)$. We assume that killing does not occur in the interior of the state space; that is, $k(\diff x)=0$ for $x\in \mathcal{I}$. On the other hand, if $X$ hits $\ell$ or $r$, it is killed and immediately transferred to the cemetery $\Delta\notin \mathcal{I}$. The lifetime of $X$ is given by \begin{equation*} \xi=\inf\{t: \omega(t-)=\ell \;\text{or}\; r\}. \end{equation*} We assume that $X$ is transient. The transience is equivalent to one or both of the boundaries being attracting; that is, $s(\ell)>-\infty$ and/or $s(r)<+\infty$. To obtain concrete results, we set a specific assumption: \begin{assump}\normalfont\label{assump-s} \begin{equation*}\label{eq:assumption} s(\ell)>-\infty\quad\text{and}\quad s(r)=+\infty. \end{equation*} \end{assump} \noindent Then, it holds that \begin{equation*} \p^x\left(\lim_{t\to\xi}\omega(t)=\ell\right)=1, \quad \forall x\in \mathcal{I}. \end{equation*} For later reference, we state the definition of the killing rate of a diffusion: the killing rate $\gamma(x)$ at $x\in \mathcal{I}$ is \begin{equation}\label{eq:rate-def} \gamma(x):=\lim_{s\downarrow 0}\frac{1}{s}\left(1-\p^x(\xi>s)\right). \end{equation} We use superscripts $+$ and $-$ to denote the right and left derivatives of some function $f$ with respect to the scale function: \begin{equation*} f^+(x):=\lim_{h\downarrow 0}\frac{f(x+h)-f(x)}{s(x+h)-s(x)}, \quad f^-(x):=\lim_{h\uparrow 0}\frac{f(x+h)-f(x)}{s(x+h)-s(x)}. \end{equation*} The infinitesimal drift and diffusion parameters are given by $\mu(\cdot)$ and $\sigma(\cdot)$, respectively.
We let $\G$ denote the second-order differential operator \begin{eqnarray*}\label{eq:diff-operator} \G f(x)=\frac{1}{2}\sigma^2(x)f''(x)+\mu(x)f'(x), \quad x\in \mathcal{I}. \end{eqnarray*} For every $t\ge 0$, the transition function is given by $P_t: \mathcal{I}\times \B(\mathcal{I})\mapsto [0,1]$ such that for all $t,s\ge 0$ \begin{equation*} \p\left(X_{t+s}\in A\mid \F_s\right)=P_t(X_s,A), \quad \forall A\in\B(\mathcal{I}) \quad a.s. \end{equation*} For every $t>0$ and $x\in \mathcal{I}$, $P_t(x,\cdot):A\mapsto P_t(x,A)$ is absolutely continuous with respect to the speed measure $m$: \begin{equation*} P_t(x,A)=\int_Ap(t;x,y)m(\diff y), \quad A\in\B(\mathcal{I}). \end{equation*} The transition density $p$ may be taken to be positive, jointly continuous in all variables, and symmetric such that $p(t;x,y)=p(t;y,x)$. The Laplace transform of the hitting time $H_z:=\inf\{t:\omega(t)=z\}$ for $z\in \mathcal{I}$ is given by \begin{equation}\label{eq:hitting-time-laplace} \E^x\left[e^{-qH_z}\right]=\begin{cases} \frac{\psi_q(x)}{\psi_q(z)}, \quad x\le z,\\ \frac{\phi_q(x)}{\phi_q(z)}, \quad x\ge z, \end{cases} \end{equation} where the continuous positive functions $\psi_q$ and $\phi_q$ denote linearly independent solutions of the ODE $\G f=qf$ with $q>0$. Here $\psi_q$ is increasing while $\phi_q$ is decreasing. They are unique up to a multiplicative constant, once the boundary conditions at $\ell$ and $r$ are specified. Finally, the \emph{Green function} is defined as \begin{equation}\label{eq:green-q} G_q(x,y):=\begin{cases} \frac{\psi_q(x)\phi_q(y)}{w_q}, \quad x\le y,\\ \frac{\psi_q(y)\phi_q(x)}{w_q}, \quad x\ge y \end{cases} \end{equation} with the \emph{Wronskian} $w_q:=\psi_q^+(x)\phi_q(x)-\psi_q(x)\phi_q^+(x)=\psi_q^-(x)\phi_q(x)-\psi_q(x)\phi_q^-(x)$. It holds that $G_q(x,y)=\int_{0}^{\infty}e^{-qt}p(t;x,y)\diff t$ for $x,y\in \mathcal{I}$. 
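As a concrete sanity check (our own illustration, not part of the paper's argument), the identity $G_q(x,y)=\int_{0}^{\infty}e^{-qt}p(t;x,y)\diff t$ can be verified numerically for Brownian motion with drift $-\nu$, where $\psi_q(x)=e^{(\sqrt{\nu^2+2q}+\nu)x}$, $\phi_q(x)=e^{-(\sqrt{\nu^2+2q}-\nu)x}$ and $w_q=2\sqrt{\nu^2+2q}$. The function names below are hypothetical helpers for this sketch.

```python
import math

def green_q_closed(x, y, q, nu):
    """G_q(x,y) = psi_q(x) phi_q(y) / w_q for x <= y (symmetric otherwise),
    for Brownian motion with drift -nu, nu > 0."""
    d = math.sqrt(nu * nu + 2.0 * q)          # d = sqrt(nu^2 + 2q)
    x, y = min(x, y), max(x, y)               # use symmetry of G_q
    return math.exp((d + nu) * x - (d - nu) * y) / (2.0 * d)

def green_q_numeric(x, y, q, nu, T=60.0, n=50000):
    """Midpoint-rule evaluation of int_0^T e^{-qt} p(t;x,y) dt, where p is
    the transition density w.r.t. the speed measure m(dy) = 2 e^{-2 nu y} dy:
    p(t;x,y) = exp(-(y-x)^2/(2t) + nu(x+y) - nu^2 t/2) / (2 sqrt(2 pi t)).
    The substitution t = u^2 removes the 1/sqrt(t) singularity at t = 0."""
    total, h = 0.0, math.sqrt(T) / n
    for i in range(n):
        u = (i + 0.5) * h
        t = u * u
        p = math.exp(-(y - x) ** 2 / (2.0 * t) + nu * (x + y) - nu * nu * t / 2.0) \
            / (2.0 * math.sqrt(2.0 * math.pi * t))
        total += math.exp(-q * t) * p * 2.0 * u * h   # dt = 2u du
    return total
```

With, say, $x=0.2$, $y=0.7$, $q=1$, $\nu=0.5$, the two evaluations agree to several decimal places, which is exactly the content of $G_q(x,y)=\int_0^\infty e^{-qt}p(t;x,y)\diff t$ in this special case.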
Under Assumption \ref{assump-s}, the killing boundary $\ell$ is attracting and $\lim_{x\downarrow\ell}\E^x\left[e^{-qH_z}\right]=\frac{\psi_q(\ell+)}{\psi_q(z)}=0$ for $z\in \mathcal{I}$. Hence $\psi_q(\ell+)=0$. As the right boundary $r$ is not attracting, $\lim_{z\uparrow r}\E^x\left[e^{-qH_z}\right]=\frac{\psi_q(x)}{\psi_q(r-)}=0$ for $x\in \mathcal{I}$ and we obtain $\psi_q(r-)=+\infty$. Next, due to the transience of $X$, we define \begin{equation*}\label{eq:G0} G_0(x,y):=\lim_{q\downarrow 0}G_q(x,y)=\int_{0}^{\infty}p(t;x,y)\diff t<+\infty. \end{equation*} Following \citet[Section 4.11]{IM1974}, this quantity is represented by \begin{equation}\label{eq:green-0} G_0(x,y)=\begin{cases} \psi_0(x)\phi_0(y), \quad x\le y,\\ \psi_0(y)\phi_0(x), \quad x\ge y \end{cases} \end{equation} where the continuous positive functions $\psi_0$ and $\phi_0$ denote (linearly independent) solutions of the ODE $\G f=0$ such that \[w_0:=\psi_0^+(x)\phi_0(x)-\psi_0(x)\phi_0^+(x)=\psi_0^-(x)\phi_0(x)-\psi_0(x)\phi_0^-(x)=1.\] Here $\psi_0$ is increasing while $\phi_0$ is decreasing. These functions are uniquely determined based on the boundary conditions as shown in the following lemma. \begin{lemma}\label{lemma:psi-phi} Under Assumption \ref{assump-s}, the functions $\psi_0$ and $\phi_0$ in \eqref{eq:green-0} satisfy the following conditions: \begin{enumerate} \item $\phi_0\equiv1$, \item $\psi_0(\ell+)=0$, \item $\psi_0(r-)=+\infty$. \end{enumerate} \end{lemma} \begin{proof} Let $\ell<x\le y\le z<r$. Then, by \eqref{eq:hitting-time-laplace} \begin{align*}\label{eq:boundary-derivations} \lim_{q\downarrow 0}\frac{G_q(x,y)}{G_q(y,z)}&=\lim_{q\downarrow 0}\frac{\psi_q(x)\phi_q(y)}{\psi_q(y)\phi_q(z)}=\lim_{q\downarrow 0}\frac{\E^x\left[e^{-qH_y}\right]}{\E^z\left[e^{-qH_y}\right]} =\frac{\p^x(H_y<+\infty)}{\p^z(H_y<+\infty)}. 
\end{align*} On the other hand, by definition of $G_0$, we obtain \begin{equation*} \lim_{q\downarrow 0}\frac{G_q(x,y)}{G_q(y,z)}=\frac{G_0(x,y)}{G_0(y,z)}=\frac{\psi_0(x)\phi_0(y)}{\psi_0(y)\phi_0(z)}. \end{equation*} Hence \begin{equation}\label{eq:prob-G0-connection} \frac{\p^x(H_y<+\infty)}{\p^z(H_y<+\infty)}=\frac{\psi_0(x)\phi_0(y)}{\psi_0(y)\phi_0(z)}. \end{equation} For the killing boundary $\ell$, $\lim_{x\downarrow\ell}\p^x(H_y<+\infty)=0$ and we obtain $\psi_0(\ell+)=0$. As $\p^z(H_y<+\infty)=1$, the right-hand side in \eqref{eq:prob-G0-connection} does not depend on $z$. Thus, the function $\phi_0(z)$ takes the same value for every $z$ and we may set $\phi_0\equiv1$. By substituting $\p^z(H_y<+\infty)=1$ in \eqref{eq:prob-G0-connection}, we also obtain $\psi_0(r-)=+\infty$ due to $\lim_{y\uparrow r}\p^x(H_y<+\infty)=0$. \end{proof} Since $\psi_0$ solves $\G f=0$ and is increasing, we can set $\psi_0(x)=s(x)+\text{constant}$. Then, the boundary condition at $\ell$ determines the constant, i.e., \[\psi_0(x)=s(x)-s(\ell), \quad x\in \mathcal{I},\] which in turn leads to \begin{equation}\label{eq:G0-explicit} G_0(x,y)=(s(x)-s(\ell))\wedge (s(y)-s(\ell)), \end{equation} since $\phi_0\equiv 1$ by Lemma \ref{lemma:psi-phi}. \subsection{Applications of last passage times} The last passage time $\lambda_\alpha$ in \eqref{eq:lambda} is not a stopping time because it looks into the future path of the process. Last passage times have a wide range of applications in financial modeling as discussed in \citet{nikeghbali-platen}. These applications range from the analysis of default risk to insider trading and option valuation. \citet{elliott2000} and \citet{jeanblanc_rutkowski} discuss the valuation of defaultable claims with payoff depending on the last passage time of a firm's value to a certain level. See also \citet{coculescu2012} and Chapters 4 and 5 in \citet{jeanblanc2009}. 
\citet{egami-kevkhishvili-reversal} develop a new risk management framework for companies based on the last passage time of a leverage ratio to some alarming level. They derive the distribution of the time interval between the last passage time and the killing time, which corresponds to the default time in the financial context. Their analysis of actual company data demonstrates that the information regarding this time interval together with the distribution of the last passage time is useful for credit risk management. To distinguish the information available to a regular trader versus an insider, \citet{imkeller2002} uses the last passage time of a Brownian motion driving a stock price process. The last passage time, which is not a stopping time to a regular trader, becomes a stopping time to an insider by utilizing progressive enlargement of filtrations. This study illustrates how additional information provided by the last passage time can create arbitrage opportunities. Last passage times have also been used in European put and call option pricing. The related studies are presented in \citet{profeta2010}. These studies show that option prices can be expressed in terms of probability distributions of last passage times. See also \citet{cheridito2012}. \section{Decomposition of $X$ and its last passage time}\label{sec:proof} Let us now return to the transient diffusion $X$ on $\mathcal{I}=(\ell, r)$ with Assumption \ref{assump-s}. We write $X_t=X_t(\omega)=\omega(t)$. Consider the last passage time of some fixed level $\alpha\in \mathcal{I}$ for $X$. This time is denoted by $\lambda_\alpha$ and is defined in \eqref{eq:lambda}. As $X$ is a transient diffusion, $\lambda_\alpha<+\infty$ a.s. To ensure that $\lambda_\alpha>0$, let us fix the starting point $x\ge\alpha$. The distribution of the last passage time is given by \begin{equation*}\label{eq:lambda-dist} \p^x(\lambda_\alpha\in\diff t)=\frac{p(t;x,\alpha)}{G_0(\alpha,\alpha)}\diff t.
\end{equation*} See \citet[Proposition 4]{salminen1984}, \citet[Chapter II.3.20]{borodina-salminen}, \citet{egami-kevkhishvili-reversal}. Then, the Laplace transform is \begin{equation}\label{eq:lambda-laplace} \E^x\left[e^{-q\lambda_\alpha}\right]=\int_{0}^{\infty}e^{-qt}\frac{p(t;x,\alpha)}{G_0(\alpha,\alpha)}\diff t=\frac{G_q(x,\alpha)}{G_0(\alpha,\alpha)}. \end{equation} \subsection{Time-changed processes} Let us fix some $\alpha\in \mathcal{I}$ and consider the occupation times of the regions above and below $\alpha$ \begin{equation*}\label{eq:occup_time} \Gamma_+(t):=\int_{0}^{t}\mathbf{1}_{\{X_s\ge\alpha\}}\diff s \quad\text{and}\quad \Gamma_-(t):=\int_{0}^{t}\mathbf{1}_{\{X_s<\alpha\}}\diff s \end{equation*} together with their right inverses: \begin{equation*}\label{eq:inverse_occup} \Gamma_+^{-1}(t):=\inf\{s:\Gamma_+(s)>t\} \quad\text{and}\quad \Gamma^{-1}_-(t):=\inf\{s: \Gamma_-(s)>t\}. \end{equation*} Define $\lambda^A_\alpha:=\Gamma_+(\lambda_\alpha)$ and $\lambda^B_\alpha:=\Gamma_-(\lambda_\alpha)$. Then, it holds that \begin{equation*}\label{eq:lambda-decomp} \lambda_\alpha=\lambda^A_\alpha + \lambda^B_\alpha. \end{equation*} Here $B$ stands for ``\emph{below} the level $\alpha$" and $A$ stands for ``\emph{above} the level $\alpha$". We will use the following time-changed processes: \begin{equation}\label{eq:X-time-change} \hat{X}^A(t):=X(\Gamma_+^{-1}(t))\quad\text{and}\quad X^B(t):=X(\Gamma_-^{-1}(t)) . \end{equation} Our interest lies in how we can decompose the last passage time $\lambda_\alpha$ of $X$ to the point $\alpha$ in terms of $\hat{X}^A$ and $X^B$. Note that $\hat{X}^A$ and $X^B$ have the same speed measure and scale function as $X$ \citep[Theorem 10.12]{dynkin1}. Then, $X^B$ can be seen as the process for which $\alpha$ is a reflecting boundary. Similarly, $\hat{X}^A$ can be considered as the process for which $\alpha$ is an elastic boundary.
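The time changes in \eqref{eq:X-time-change} are easy to mimic on a discretized path. The following sketch (illustrative only; the helper name is ours) splits a sampled trajectory according to the clocks $\Gamma_+$ and $\Gamma_-$ and checks that the two clocks exhaust real time, i.e. $\Gamma_+(t)+\Gamma_-(t)=t$.

```python
import random

def split_by_occupation(path, dt, alpha):
    """Discrete analogue of eq. (X-time-change): the clock Gamma_+ runs while
    the path is at or above alpha (yielding samples of hat{X}^A / X^A) and
    Gamma_- runs while it is below alpha (yielding samples of X^B)."""
    x_above, x_below = [], []   # samples of the two processes on their own clocks
    g_plus = g_minus = 0.0      # Gamma_+(t) and Gamma_-(t) at the final time
    for x in path:
        if x >= alpha:
            x_above.append(x)
            g_plus += dt
        else:
            x_below.append(x)
            g_minus += dt
    return x_above, x_below, g_plus, g_minus

# Example on a random walk approximating Brownian motion with drift -0.5.
random.seed(0)
dt, nu, x = 0.001, 0.5, 0.0
path = []
for _ in range(10_000):
    x += -nu * dt + random.gauss(0.0, dt ** 0.5)
    path.append(x)
above, below, g_plus, g_minus = split_by_occupation(path, dt, 0.0)
# the two occupation clocks decompose real time: Gamma_+(t) + Gamma_-(t) = t
assert abs(g_plus + g_minus - dt * len(path)) < 1e-9
```

Applying $\Gamma_\pm$ to the last passage time in the same way gives the split $\lambda_\alpha=\lambda^A_\alpha+\lambda^B_\alpha$ used throughout this section.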
The hat is to stress that $\hat{X}^A$ is a killed process with the non-zero killing measure $\hat{k}^A$. See Figure \ref{fig:picture} for this probabilistic feature. Further note that \[\text{$\lambda_\alpha^A$ is considered as the killing time of $\hat{X}^A$ and $\lambda_\alpha^B$ is the last passage time of $X^B$ to level $\alpha$}. \] \begin{figure} \caption{A schematic expression of $\hat{X}^A$ and $X^B$.} \label{fig:picture} \end{figure} Let us introduce another diffusion $X^A$ on $[\alpha, r)$ for which $\alpha$ is a reflecting boundary. The process $X^A$ has the same speed measure and scale function as $\hat{X}^A$ (hence the same as $X$) but its killing measure is \emph{zero}. To be precise, we denote \begin{equation}\label{eq:Xhat} \hat{X}^A(t)=\begin{cases} X^A(t), \quad 0\le t <\lambda_\alpha^A\\ \Delta, \quad t\ge \lambda_\alpha^A \end{cases} \end{equation} to distinguish the killed process $\hat{X}^A$ from $X^A$. The killing measure of $\hat{X}^A$ satisfies $\hat{k}^A(\{\alpha\})=\gamma$ with the killing rate $\gamma>0$ and $\hat{k}^A(\diff x)=0$ for $x\neq \alpha$. Note that the speed measure of $\hat{X}^A$ satisfies $\hat{m}^A(\{\alpha\})=0$. The main result shows that the Laplace transform of the last passage time $\lambda_\alpha$ can be decomposed into two parts: \begin{proposition}\label{prop:laplace-product} Under Assumption \ref{assump-s}, the Laplace transform of $\lambda_\alpha$ in \eqref{eq:lambda-laplace} is represented as \begin{equation}\label{eq:prop-laplace} \E^\alpha[e^{-q\lambda_\alpha}]=\frac{G_q^A(\alpha, \alpha)}{G^A_q(\alpha, \alpha)+G^B_q(\alpha, \alpha)}\cdot \frac{G^B_q(\alpha, \alpha)}{G^B_0(\alpha, \alpha)}, \end{equation} where $G^A_\cdot(\cdot, \cdot)$ and $G^B_\cdot(\cdot, \cdot)$ are the Green functions of $X^A$ (not $\hat{X}^A$) and $X^B$, respectively. \end{proposition} \begin{proof} Lemmas \ref{lem:exponential}--\ref{lem:lemma3} in the next subsection lead to this result.
\end{proof} Before we start with the lemmas, we need to introduce the local time at $\alpha$ for $X^A$ and $X^B$ by denoting \[L^A(t):=L^A(t, \alpha)=L(\Gamma^{-1}_+(t), \alpha)\quad\text{and}\quad L^B(t):=L^B(t, \alpha)=L(\Gamma^{-1}_-(t), \alpha), \] where $L(\cdot, \alpha)$ is the local time of $X$ at $\alpha$. Let us also define the inverse local time processes \[ \rho^A(s):=\rho^A(s, \alpha)=\inf\{t: L^A(t, \alpha)>s\} \quad\text{and}\quad \rho^B(s):=\rho^B(s, \alpha)=\inf\{t: L^B(t, \alpha)>s\}. \] \begin{remark}\label{rem:indep}\normalfont Note that excursions of $X^A$ from $\alpha$ are independent of the excursions of $X^B$ from $\alpha$ due to the Markov property of $X$. For example, in Figure \ref{fig:picture}, the excursion of $X^B$ commencing at time $0$ by the clock $\Gamma_-(\cdot)$ occurs when $X^A$ returns to $\alpha$ at time $u$ by the clock $\Gamma_+(\cdot)$. This excursion of $X^A$ corresponds to the time interval $[0,t_1)$ in the real clock $(t)$ and is independent of $X^B$ and hence of $L^B$. Then, the excursion of $X^A$ commencing at time $u$ (see the upper left panel in Figure \ref{fig:picture}) by the clock $\Gamma_+(\cdot)$ occurs when $X^B$ returns to $\alpha$ at time $\lambda_\alpha^B$ by the clock $\Gamma_-(\cdot)$. This excursion of $X^B$ corresponds to the time interval $[t_1,t_2)$ in the real clock $(t)$ and is independent of $X^A$ and hence of $L^A$. In this way, the construction of $X^A$ and $X^B$ in \eqref{eq:X-time-change} and \eqref{eq:Xhat} implies that $L^A(\cdot)$ and $L^B(\cdot)$ are independent.\myBox \end{remark} \subsubsection{Diffusion with a reflecting boundary}\label{sec:reflecting} The diffusion $X^B$ on $(\ell,\alpha]$ is reflecting at the boundary $\alpha$. Recall that $X^B$ has the same scale function and speed and killing measures as $X$. The left boundary $\ell$ is a killing boundary for $X^B$. We use $B$ to denote quantities associated with $X^B$. 
From the boundary condition at $\alpha$, we have $m^B(\{\alpha\})=k^B(\{\alpha\})=0$ and $(\phi^B_q)^-(\alpha)=0$. For the left boundary, the conditions are ${\psi^B_q}(\ell+)=0$ and ${\psi^B_0}(\ell+)=0$. In addition, it holds that $\p^z(H^B_y<+\infty)=1$ for $y\le z\le\alpha$ and we again deduce from \eqref{eq:prob-G0-connection} applied to $X^B$ that $\phi^B_0\equiv1$. Then, the Green function $G_0^B(x,y)$ coincides with \eqref{eq:G0-explicit}: \begin{equation}\label{eq:G=GB} G_0(x,y)=G^B_0(x, y), \quad x, y\in(\ell,\alpha]. \end{equation} Recall that the diffusion $X^A$ on $[\alpha, r)$ is reflecting at the boundary $\alpha$ (see \eqref{eq:Xhat}). We denote its Green function by $G^A_q(x, y)$. Let us stress that $G^A_q$ is \emph{not} the Green function of $\hat{X}^A$. \subsection{The Laplace transform of $\lambda_\alpha$} Let us introduce a (generic) exponential random variable $\mathbf{e_q}$ with rate $q>0$ which is independent of $X$. Hence it is independent of both $\hat{X}^A$ and $X^B$. Recall \eqref{eq:Xhat} which states that $\hat{X}^A(t)=X^A(t)$ for $t\in [0, \lambda_\alpha^A)$. In this subsection, the argument is concerned with the time interval $[0, \lambda_\alpha^A)$, so that we deal with $X^A$, not $\hat{X}^A$. For simplicity, in the sequel we omit the subscript $\alpha$ to denote $\lambda:=\lambda_\alpha$, $\lambda^A:=\lambda^A_\alpha$, and $\lambda^B:=\lambda^B_\alpha$. Let us start with \begin{align}\label{eq:main} \E^\alpha[e^{-q\lambda}]&=\p^\alpha(\lambda \le \mathbf{e_q})=\p^\alpha(\Gamma_+(\lambda)\le \Gamma_+(\mathbf{e_q}), \Gamma_-(\lambda)\le \Gamma_-(\mathbf{e_q}))\nonumber \\ &=\p^\alpha(\lambda^A\le \Gamma_+(\mathbf{e_q}), \lambda^B\le \Gamma_- (\mathbf{e_q}))\nonumber \\ &=\p^\alpha\left[\lambda^A\le \Gamma_+(\mathbf{e_q})\mid \lambda^B\le \Gamma_- (\mathbf{e_q})\right]\p^\alpha(\lambda^B\le \Gamma_- (\mathbf{e_q})). \end{align} We shall compute explicitly the right-hand side of \eqref{eq:main}. 
Let us first consider the set $\{\omega: \lambda^B(\omega)\le \Gamma_-(\mathbf{e_q})(\omega)\}$. \begin{lemma}\label{lem:exponential} Let $\mathbf{e_q}$ be a (generic) exponential random variable with rate $q>0$. Define $P:=\{\omega: \lambda^B(\omega)\le \Gamma_-(\mathbf{e_q})(\omega)\}$ and $Q:=\{\omega:\lambda^B(\omega)\le \mathbf{e_q}(\omega)\}$. Then the sets $P$ and $Q$ are equal. Similarly, the sets $\{\omega: \lambda^A(\omega)\le \Gamma_+(\mathbf{e_q})(\omega)\}$ and $\{\omega:\lambda^A(\omega)\le \mathbf{e_q}(\omega)\}$ are equal. \end{lemma} \begin{proof} Suppose that $\omega\in P$. Then $\lambda^B\le \Gamma_-(\mathbf{e_q})\le \mathbf{e_q}$ by the definition of $\Gamma_-(\cdot)$, so that $\omega\in Q$. On the other hand, suppose that $\omega\in Q$. This implies, by the memoryless property, that \begin{equation}\label{eq:(1)} \mathbf{e_q}-\lambda^B=e'_q \circ \theta(\lambda^B) \end{equation} where $e'_q$ is another exponential random variable with rate $q$ and $\theta(\cdot)$ is the shift operator. Define $J:=\Gamma^{-1}_-(\mathbf{e_q})$. Since $\Gamma^{-1}_-(\lambda^B)=\lambda$, we have \begin{equation}\label{eq:(2)} \mathbf{e_q}-\lambda^B=J-\lambda=J'\circ \theta(\lambda) \end{equation} for some nonnegative random variable $J'$. From \eqref{eq:(1)} and \eqref{eq:(2)}, $J'$ is an exponential random variable with rate $q$. Then, the representation \begin{equation}\label{eq:J} J=\lambda+J'\circ \theta(\lambda) \end{equation} implies that it is also an exponential random variable with rate $q$. Indeed, $J$ is a continuous random variable and \eqref{eq:J} shows that $J$ has the memoryless property as $\mathbf{e_q}$ in \eqref{eq:(1)} does, so that $J$ must be an exponential random variable with rate $q$. Now, $\lambda\le J$ implies that $\lambda^B=\Gamma_-(\lambda)\le \Gamma_-(J)$. By rewriting $J$ as a generic exponential random variable $\mathbf{e_q}$, we conclude that $\omega\in P$.
\end{proof} The next two lemmas are concerned with the first term of \eqref{eq:main}, the conditional probability. \begin{lemma} For the exponential random variable $\mathbf{e_q}$ with rate $q>0$, we have \begin{equation*} \p^\alpha[\lambda^A\le \Gamma_+(\mathbf{e_q})\mid \lambda^B\le \Gamma_- (\mathbf{e_q})]=\p^\alpha[L^B(\Gamma_-(\mathbf{e_q}))<L^A(\Gamma_+(\mathbf{e_q}))\mid \lambda^B\le \Gamma_- (\mathbf{e_q})]. \end{equation*} \end{lemma} \begin{proof} Define \begin{equation}\label{eq:u} u:=\sup\{t<\lambda^A: X^A_t=\alpha\}; \end{equation} that is, the last time of visit to $\alpha$ before $\hat{X}^A$ is elastically killed at time $\lambda^A=\Gamma_+(\lambda)$. This time point $u$ is characterized as $\Gamma_-(\Gamma_+^{-1}(u))=\lambda^B$, and hence we have \begin{equation}\label{eq:LA=LB} L^A(u)=L^B(\lambda^B). \end{equation} It may be useful to see the schematic diagram in Figure \ref{fig:picture}. Since the local time $L^A(\cdot)$ does not increase until the next visit to $\alpha$ by $X^A$ (at time $\lambda^A$), we have \begin{equation*} \lambda^A=\inf\{t: L^A(t)>L^A(u)\}, \end{equation*} which implies $L^A(\lambda^A)>L^B(\lambda^B)$. Moreover, by the definition of $\lambda^B$, we have $L^B(\lambda^B)=L^B(\infty)$. Now we condition on $\lambda^B\le \Gamma_-(\mathbf{e_q})$. By the argument in the preceding paragraph, under this condition we have $\{\lambda^A\le \Gamma_+(\mathbf{e_q})\}=\{L^A(\Gamma_+(\mathbf{e_q}))>L^B(\Gamma_-(\mathbf{e_q}))\}$ due to \[L^A(\Gamma_+(\mathbf{e_q}))=L^A(\lambda^A)>L^B(\lambda^B)=L^B(\Gamma_-(\mathbf{e_q})),\] which proves the lemma.
\end{proof} \begin{lemma}\label{lem:lemma2} For the exponential random variable $\mathbf{e_q}$ with rate $q>0$, we have \begin{equation}\label{eq:lemma2} \p^\alpha[L^B(\Gamma_-(\mathbf{e_q}))<L^A(\Gamma_+(\mathbf{e_q}))\mid \lambda^B\le \Gamma_- (\mathbf{e_q})]=\frac{G_q^A(\alpha, \alpha)}{G^A_q(\alpha, \alpha)+G^B_q(\alpha, \alpha)}. \end{equation} \end{lemma} \begin{proof} First, we shall prove that the left-hand side of \eqref{eq:lemma2} simplifies to \begin{equation}\label{eq:interim} \p^\alpha[L^B(\Gamma_-(\mathbf{e_q}))<L^A(\Gamma_+(\mathbf{e_q}))\mid \lambda^B\le \Gamma_- (\mathbf{e_q})]=\p^\alpha(L^B(\mathbf{e_q})<L^A(\mathbf{e_q})). \end{equation} Indeed, given the fact $\lambda^B\le \Gamma_-(\mathbf{e_q})$, due to Lemma \ref{lem:exponential}, \begin{equation}\label{eq:reduceB-eq} \Gamma_-(\mathbf{e_q})-\lambda^B=\mathbf{e_q}\circ \theta(\lambda^B)=\mathbf{e_q}-\lambda^B. \end{equation} Here $\mathbf{e_q}$ denotes a generic exponential random variable. Let us denote (see the lower right panel in Figure \ref{fig:picture}) \[ c:=\inf\{t: \Gamma_-(t)\ge \lambda^B\}, \] for which the condition $\lambda^B\le \Gamma_-(\mathbf{e_q})$ implies that $c\le \mathbf{e_q}$. Since the time point $c$ is the left-end point of a region where $\Gamma_-(\cdot)$ becomes constant, it corresponds to the left-end point of an excursion of $X^A$ from level $\alpha$. Hence $\Gamma_+(c)=u$, the right-hand side being defined in \eqref{eq:u}. Using the same argument as in Lemma \ref{lem:exponential}, we have \begin{equation}\label{eq:reduceA-eq} \Gamma_+(\mathbf{e_q})-u=\mathbf{e_q} \circ \theta(u)=\mathbf{e_q}-u. \end{equation} In other words, equations \eqref{eq:reduceB-eq} and \eqref{eq:reduceA-eq} imply that, instead of evaluating $L^B$ and $L^A$ at $\Gamma_-(\mathbf{e_q})$ and $\Gamma_+(\mathbf{e_q})$, we can evaluate $L^B$ and $L^A$ both at time $\mathbf{e_q}$. Hence \eqref{eq:interim} is proved. Let us now evaluate $\p^\alpha(L^B(\mathbf{e_q})<L^A(\mathbf{e_q}))$. 
It is known that the random variables $L^B(\mathbf{e_q})$ and $L^A(\mathbf{e_q})$ are exponentially distributed and \[ \p^\alpha(L^A(\mathbf{e_q})>s)=\p^\alpha(\rho^A(s)<\mathbf{e_q})=\E^\alpha[e^{-q\rho^A(s)}]=\exp\left(-\frac{s}{G^A_q(\alpha, \alpha)}\right). \] See \citet[Section 7]{getoor1979} and \citet[Chapter II.2.14]{borodina-salminen}. Similarly, we have \begin{equation}\label{eq:rate-LB} \p^\alpha(L^B(\mathbf{e_q})>s)=\exp\left(-\frac{s}{G^B_q(\alpha, \alpha)}\right). \end{equation} Since $\mathbf{e_q}$ is independent of $X$, $L^A(\mathbf{e_q})$ and $L^B(\mathbf{e_q})$ are independent (see Remark \ref{rem:indep}). Hence \[ \p^\alpha(L^B(\mathbf{e_q})<L^A(\mathbf{e_q}))=\frac{\frac{1}{G^B_q(\alpha, \alpha)}}{\frac{1}{G^A_q(\alpha, \alpha)}+\frac{1}{G^B_q(\alpha, \alpha)}}, \] which yields \eqref{eq:lemma2}. \end{proof} \noindent By the two lemmas, we have computed the first term of \eqref{eq:main} on its right-hand side: \begin{equation}\label{eq:the-first-term} \p^\alpha[\lambda^A\le \Gamma_+(\mathbf{e_q})\mid \lambda^B\le \Gamma_- (\mathbf{e_q})]=\frac{G_q^A(\alpha, \alpha)}{G^A_q(\alpha, \alpha)+G^B_q(\alpha, \alpha)}. \end{equation} Let us proceed to the second term of \eqref{eq:main}. \begin{lemma}\label{lem:lemma3} It holds that for the exponential random variable $\mathbf{e_q}$ with rate $q>0$, \begin{equation}\label{eq:second-term} \p^\alpha(\lambda^B\le \Gamma_- (\mathbf{e_q}))=\frac{G^B_q(\alpha, \alpha)}{G^B_0(\alpha, \alpha)}. \end{equation} \end{lemma} \begin{proof} Due to Lemma \ref{lem:exponential}, $\p^\alpha(\lambda^B\le\Gamma_-(\mathbf{e_q}))=\p^\alpha(\lambda^B\le \mathbf{e_q})=\E^\alpha[e^{-q\lambda^B}]$. Using the expression of the Laplace transform of the last passage time in \eqref{eq:lambda-laplace}, we obtain \eqref{eq:second-term}. \end{proof} By combining Lemmas \ref{lem:lemma2} and \ref{lem:lemma3}, we obtain the result of Proposition \ref{prop:laplace-product}.
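The race between the two exponential local times behind \eqref{eq:the-first-term} can also be checked by simulation. The sketch below (ours, with generic rates standing in for $1/G^A_q(\alpha,\alpha)$ and $1/G^B_q(\alpha,\alpha)$) compares a Monte Carlo estimate of $\p(L^B<L^A)$ with the closed form used in Lemma \ref{lem:lemma2}.

```python
import random

def prob_b_beats_a(rate_a, rate_b, n=200_000, seed=1):
    """Monte Carlo estimate of P(E_B < E_A) for independent exponentials
    E_A ~ Exp(rate_a), E_B ~ Exp(rate_b).  The closed form is
    rate_b / (rate_a + rate_b), which is the G^A/(G^A + G^B) expression
    of Lemma (lemma2) after substituting rate = 1/G."""
    rng = random.Random(seed)
    hits = sum(rng.expovariate(rate_b) < rng.expovariate(rate_a) for _ in range(n))
    return hits / n

# rates 1/G^A_q and 1/G^B_q play the roles of rate_a and rate_b
rate_a, rate_b = 0.8, 1.7
closed_form = rate_b / (rate_a + rate_b)
estimate = prob_b_beats_a(rate_a, rate_b)
assert abs(estimate - closed_form) < 0.01
```

With $G^A=1/\text{rate}_a$ and $G^B=1/\text{rate}_b$, the closed form is exactly $\frac{G^A}{G^A+G^B}$, i.e. the first factor in \eqref{eq:prop-laplace}.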
\subsection{The killing rate of $\hat{X}^A$} In this subsection, we shall find the killing rate of $\hat{X}^A$ at $\alpha$ under the condition $\lambda^B\le \Gamma_-(\mathbf{e_q})$. We denote this rate by $\gamma_q$. For this purpose, we represent the first term on the right-hand side of \eqref{eq:main} in an alternative way. More specifically, we shall prove the following: \begin{proposition}\label{prop:killing-rate} The first term on the right-hand side of \eqref{eq:main} admits the following representation in terms of the killing rate $\gamma_q$: \begin{equation}\label{eq:alternative} \p^\alpha[\lambda^A\le \Gamma_+(\mathbf{e_q})\mid \lambda^B\le \Gamma_- (\mathbf{e_q})]=\frac{\gamma_q\cdot G^A_q(\alpha, \alpha)}{1+\gamma_q\cdot G^A_q(\alpha, \alpha)}, \end{equation} where $\gamma_q=\frac{1}{G^B_q(\alpha, \alpha)}$. \end{proposition} \begin{remark}\normalfont When we plug the value of $\gamma_q=\frac{1}{G^B_q(\alpha, \alpha)}$ into \eqref{eq:alternative}, we retrieve \eqref{eq:the-first-term}: \[ \p^\alpha[\lambda^A\le \Gamma_+(\mathbf{e_q})\mid \lambda^B\le \Gamma_- (\mathbf{e_q})]=\frac{\gamma_q\cdot G^A_q(\alpha, \alpha)}{1+\gamma_q \cdot G^A_q(\alpha, \alpha)}=\frac{G_q^A(\alpha, \alpha)}{G^A_q(\alpha, \alpha)+G^B_q(\alpha, \alpha)}. \]\myBox \end{remark} \begin{proof} Consider the killing time for the process $\hat{X}^A$, which has been denoted by $\lambda^A=\Gamma_+(\lambda)$. By \eqref{eq:rate-def}, the killing rate of $\hat{X}^A$ at $\alpha$ under the condition $\lambda^B\le \Gamma_- (\mathbf{e_q})$ is given by \begin{equation}\label{eq:gamma-q} \gamma_q:=\lim_{s\downarrow 0}\frac{1}{s}\left(1-\p^\alpha[\lambda^A>s\mid \lambda^B\le \Gamma_- (\mathbf{e_q})]\right). \end{equation} Let $\tilde{Y}$ denote a diffusion on $[\alpha,r)$ which has the same scale function and speed measure as $\hat{X}^A$ and for which $\alpha$ is a reflecting boundary. Let the killing measure of $\tilde{Y}$ be \emph{zero}.
We use the tilde sign to denote quantities associated with $\tilde{Y}$. Following \citet[Section 5.6]{IM1974} and \citet[Chapter II.4.22]{borodina-salminen}, one can kill the process $\tilde{Y}$ in the following way: The local time at $\alpha$ is $\tilde{L}(t):=\tilde{L}(t,\alpha)$ and we let $\tau$ be an independent exponential random variable with rate $\gamma_q$. Then, under the condition $\lambda^B\le \Gamma_- (\mathbf{e_q})$, $\hat{X}^A$ is distributed as the diffusion $\tilde{Y}$ that is killed at time $\inf\{t:\tilde{L}(t)\ge \tau\}$. Since $\mathbf{e_q}$ is independent of $X$ and hence of $\hat{X}^A$, under the condition $\lambda^B\le\Gamma_-(\mathbf{e_q})$, the scale function and the speed measure of $\hat{X}^A$ remain the same. Hence, for the purpose of computing \eqref{eq:gamma-q}, we have \begin{equation*}\label{eq:interim3} \p^\alpha[\lambda^A>s\mid \lambda^B\le\Gamma_-(\mathbf{e_q})]=\p^\alpha(\tau\ge\tilde{L}(s))=\E^\alpha[e^{-\gamma_q\tilde{L}(s)}], \end{equation*} from which we observe that \begin{equation}\label{eq:A-killing1} \p^\alpha[L^A(\lambda^A)>s\mid \lambda^B\le\Gamma_-(\mathbf{e_q})]=\p^\alpha[\lambda^A>\rho^A(s)\mid \lambda^B\le\Gamma_-(\mathbf{e_q})]=\p^\alpha(\tau\ge s)=e^{-\gamma_q s}. \end{equation} On the other hand, under the condition $\lambda^B\le \Gamma_- (\mathbf{e_q})$, the local time of $X^B$ does not increase after time $\lambda^B$, by the definition of $\lambda^B$, so that $L^B(\lambda^B)=L^B(\Gamma_-(\mathbf{e_q}))$, while $L^A(t)=L^B(\lambda^B)$ for $u\le t<\lambda^A$ (see \eqref{eq:u} and \eqref{eq:LA=LB}).
Therefore, \begin{align}\label{eq:A-killing2} \p^\alpha[L^A(\lambda^A)>s\mid \lambda^B\le\Gamma_-(\mathbf{e_q})]&=\p^\alpha[L^B(\lambda^B)>s\mid \lambda^B\le\Gamma_-(\mathbf{e_q})]=\p^\alpha(L^B(\Gamma_-(\mathbf{e_q}))>s) \nonumber \\ &=\p^\alpha(L^B(\mathbf{e_q})>s)=\exp\left(-\frac{s}{G^B_q(\alpha, \alpha)}\right), \end{align} where the last two equalities are due to Lemma \ref{lem:exponential} and \eqref{eq:rate-LB}, respectively. It follows from \eqref{eq:A-killing1} and \eqref{eq:A-killing2} that $\gamma_q=\frac{1}{G^B_q(\alpha, \alpha)}$, which is also confirmed by the definition \eqref{eq:rate-def} with \eqref{eq:A-killing2}: \begin{align*} \lim_{s\downarrow 0}\frac{1}{s}\left(1-\p^\alpha(L^B(\mathbf{e_q})>s)\right) &= \lim_{s\downarrow 0}\frac{1}{s}\left(1-\exp\left(-\frac{s}{G^B_q(\alpha, \alpha)}\right)\right) =\frac{1}{G^B_q(\alpha, \alpha)}, \end{align*} by evaluating $L^B$ at $\mathbf{e_q}$ as is justified in the proof of Lemma \ref{lem:exponential}. Given $\lambda^B\le \Gamma_- (\mathbf{e_q})$, the above argument shows that the way of killing $\tilde{Y}$ using the exponential random variable $\tau$ with rate $\gamma_q=\frac{1}{G^B_q(\alpha, \alpha)}$ is identical to the way in which one kills $X^A$ as in \eqref{eq:Xhat}. Recall also that the processes $\tilde{Y}$ and $X^A$ have the same scale function, speed measure, and boundary conditions, so that $\tilde{G}_q(x, y)=G_q^A(x, y)$ on $[\alpha, r)$.
Finally, we compute \eqref{eq:alternative}, denoting the inverse local time of $\tilde{Y}$ by $\tilde{\rho}(t):=\tilde{\rho}(t, \alpha)=\inf\{s: \tilde{L}(s, \alpha)>t\}$: \begin{align*} \p^\alpha[\lambda^A\le \Gamma_+(\mathbf{e_q})\mid \lambda^B\le \Gamma_- (\mathbf{e_q})]&=\E^\alpha[e^{-q\lambda^A}\mid \lambda^B\le \Gamma_- (\mathbf{e_q})]=\E^\alpha\left[e^{-q\inf\{t:\tilde{L}(t)\ge \tau\}}\right] \nonumber\\ &=\int_0^\infty\E^\alpha\left[e^{-q\inf\{t:\tilde{L}(t)\ge s\}}\right]\gamma_q e^{-\gamma_q s}\diff s=\int_0^\infty\E^\alpha\left[e^{-q\tilde{\rho}(s)}\right]\gamma_q e^{-\gamma_q s}\diff s \nonumber\\ &=\int_0^\infty e^{-\frac{s}{\tilde{G}_q(\alpha,\alpha)}}\gamma_q e^{-\gamma_q s}\diff s=\frac{\gamma_q\cdot\tilde{G}_q(\alpha,\alpha)}{1+\gamma_q\cdot\tilde{G}_q(\alpha,\alpha)}=\frac{\gamma_q\cdot G_q^A(\alpha,\alpha)}{1+\gamma_q\cdot G_q^A(\alpha,\alpha)}. \end{align*} In the second line, we used the fact that the jumps of the inverse local time process $\tilde{\rho}(s)$ occur at only countably many times, so that the value of the integral is not affected if $\inf\{t:\tilde{L}(t)\ge s\}$ is replaced by $\inf\{t:\tilde{L}(t)>s\}$. \end{proof} \section{Example: Brownian motion with drift}\label{sec:example} In this example, we consider the last passage time of the level $0$ for a Brownian motion with drift starting at $0$. We decompose its Laplace transform using Proposition \ref{prop:laplace-product}. Let $X$ be a Brownian motion with drift $\mu<0$ and set $\nu=-\mu>0$. The state space is $\mathcal{I}=(-\infty,+\infty)$ and both boundaries are natural. The scale function is $s(x)=\frac{1}{2\nu}(e^{2\nu x}-1)$ and we see that $\lim_{y\downarrow-\infty}s(y)=-\frac{1}{2\nu}>-\infty$. The generator is given by $\G f(x)=\frac{1}{2}f''(x)-\nu f'(x)$. Two linearly independent solutions to $\G f=qf$ are given by $\psi_q(x)=e^{(\sqrt{\nu^2+2q}+\nu)x}$ and $\phi_q(x)=e^{-(\sqrt{\nu^2+2q}-\nu)x}$.
Moreover, $\psi_q^+(x)=(\nu+\sqrt{\nu^2+2q})e^{(\sqrt{\nu^2+2q}-\nu)x}$ and $\phi_q^+(x)=(\nu-\sqrt{\nu^2+2q})e^{-(\sqrt{\nu^2+2q}+\nu)x}$. Thus, $w_q=2\sqrt{\nu^2+2q}$ and $G_q(x,0)=\frac{1}{2\sqrt{\nu^2+2q}}e^{-(\sqrt{\nu^2+2q}-\nu)x}$ for $x\ge 0$. On the other hand, $G_0(0,0)=s(0)-\lim_{y\downarrow-\infty}s(y)=\frac{1}{2\nu}$ and we obtain from \eqref{eq:lambda-laplace} \begin{equation}\label{eq:lambda-laplace-bm} \E^x\left[e^{-q\lambda_0}\right]=\frac{\nu}{\sqrt{\nu^2+2q}}e^{-(\sqrt{\nu^2+2q}-\nu)x}, \quad x\ge 0. \end{equation} \subsection{Brownian motion with drift on the negative axis reflecting at $0$} \label{sec:BM-below} Let us consider $X^B\in(-\infty,0]$. For this diffusion, the increasing and decreasing solutions to $\G f=qf$ are $\psi_q^B$ and $\phi_q^B$, respectively. The boundary condition at the reflecting boundary $0$ is $(\phi_q^B)^-(0)=0$. Now, $\phi_q^B(x)=c_1\psi_q(x)+c_2\phi_q(x)$ with some constants $c_1,c_2$. Due to the condition at $0$, these constants must satisfy $c_1(\nu+\sqrt{\nu^2+2q})+c_2(\nu-\sqrt{\nu^2+2q})=0$. We set $c_1=\frac{\sqrt{\nu^2+2q}-\nu}{2\sqrt{\nu^2+2q}}$ and $c_2=\frac{\sqrt{\nu^2+2q}+\nu}{2\sqrt{\nu^2+2q}}$. As for the increasing solution, there is no boundary condition at $0$ and we set $\psi_q^B(x)=\psi_q(x)$. Thus, the Wronskian is given by $w_q^B=\sqrt{\nu^2+2q}+\nu$. By the definition \eqref{eq:green-q} of the Green function, we obtain $G_q^B(0,0)=\frac{1}{\sqrt{\nu^2+2q}+\nu}$. We also have, by \eqref{eq:G=GB}, $G^B_0(0,0)=G_0(0,0)=\frac{1}{2\nu}$. Then, we obtain from \eqref{eq:second-term} \begin{equation}\label{eq:Gamma-mius-explicit} \p^0(\lambda_0^B\le \Gamma_-(\mathbf{e_q}))=\frac{G^B_q(0,0)}{G^B_0(0, 0)}=\frac{2\nu}{\sqrt{\nu^2+2q}+\nu}. \end{equation} Recall that this is the second term of the product in \eqref{eq:prop-laplace} of Proposition \ref{prop:laplace-product}.
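The explicit solutions and the Wronskian above are easy to verify numerically. The following sketch (Python; the values of $\nu$ and $q$ are illustrative assumptions for the test only, not part of the argument) checks by finite differences that $\psi_q$ and $\phi_q$ solve $\G f=qf$ and that $w_q=\psi_q^+\phi_q-\psi_q\phi_q^+=2\sqrt{\nu^2+2q}$:

```python
import math

nu, q = 0.7, 1.3                      # illustrative values (any nu, q > 0 work)
g = math.sqrt(nu**2 + 2*q)            # shorthand for sqrt(nu^2 + 2q)
psi = lambda x: math.exp((g + nu)*x)  # increasing solution psi_q
phi = lambda x: math.exp(-(g - nu)*x) # decreasing solution phi_q

def gen(f, x, h=1e-4):
    """Generator G f = f''/2 - nu f', via central finite differences."""
    d1 = (f(x + h) - f(x - h))/(2*h)
    d2 = (f(x + h) - 2*f(x) + f(x - h))/h**2
    return d2/2 - nu*d1

for x in (0.0, 0.3, 1.0):
    assert abs(gen(psi, x) - q*psi(x)) < 1e-3
    assert abs(gen(phi, x) - q*phi(x)) < 1e-3

def wronskian(x, h=1e-6):
    """w_q = psi^+ phi - psi phi^+, with f^+ = f'/s' and s'(x) = e^{2 nu x}."""
    sp = math.exp(2*nu*x)
    psi_p = (psi(x + h) - psi(x - h))/(2*h)/sp
    phi_p = (phi(x + h) - phi(x - h))/(2*h)/sp
    return psi_p*phi(x) - psi(x)*phi_p

# the Wronskian is constant in x and equals 2*sqrt(nu^2 + 2q)
assert abs(wronskian(0.2) - 2*g) < 1e-5
```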
\subsection{Brownian motion with drift on the positive axis reflecting at $0$}\label{sec:BM-above} Finally, let us consider $X^A$ on $[0,\infty)$. For this diffusion, the increasing and decreasing solutions to $\G f=qf$ are $\psi_q^A$ and $\phi_q^A$, respectively. Recall that in Propositions \ref{prop:laplace-product} and \ref{prop:killing-rate}, we can work with $X^A$, which is reflected at $\alpha$. At the reflecting boundary $0$, the condition is $(\psi_q^A)^+(0)=0$. There is no boundary condition at $0$ for $\phi^A_q$ and we set $\phi^A_q(x)=\phi_q(x)$. Now, $\psi_q^A(x)=d_1\psi_q(x)+d_2\phi_q(x)$ with some constants $d_1,d_2$. Due to the condition at $0$, these constants must satisfy $d_1(\nu+\sqrt{\nu^2+2q})+d_2(\nu-\sqrt{\nu^2+2q})=0$. We choose $d_1=\frac{\sqrt{\nu^2+2q}-\nu}{2\sqrt{\nu^2+2q}}$ and $d_2=\frac{\sqrt{\nu^2+2q}+\nu}{2\sqrt{\nu^2+2q}}$, so that the Wronskian is $w_q^A=\sqrt{\nu^2+2q}-\nu$. By the definition \eqref{eq:green-q} of the Green function, we obtain $G_q^A(0,0)=\frac{1}{\sqrt{\nu^2+2q}-\nu}$. Now we resort to \eqref{eq:the-first-term} (see also \eqref{eq:alternative}): \begin{equation}\label{eq:XA-explicit} \p^0[\lambda_0^A\le \Gamma_+(\mathbf{e_q})\mid \lambda_0^B\le \Gamma_- (\mathbf{e_q})]=\frac{G_q^A(0, 0)}{G^A_q(0, 0)+G^B_q(0, 0)}=\frac{\frac{G_q^A(0, 0)}{G_q^B(0, 0)}}{1+\frac{G_q^A(0, 0)}{G_q^B(0, 0)}}=\frac{\sqrt{\nu^2+2q}+\nu}{2\sqrt{\nu^2+2q}}. \end{equation} From \eqref{eq:lambda-laplace-bm} with $x=0$, \eqref{eq:Gamma-mius-explicit}, and \eqref{eq:XA-explicit}, we confirm that \eqref{eq:prop-laplace} holds. \section{Application}\label{sec:appl} We demonstrate how our results can be applied to a diffusion $X$ on $\mathcal{I}=(\ell,r)\subset\R$ whose parameters are different above and below some fixed level $\alpha\in \mathcal{I}$, which is discussed, for example, in Section 6.5 of \cite{karatzas} in the context of stochastic control problems.
The decomposition method of the last passage time in this paper is convenient when dealing with such processes. Under Assumption \ref{assump-s}, we can find the Laplace transform of the last passage time to $\alpha$ by decomposing $X$ into two diffusions as in \eqref{eq:X-time-change}. These two diffusions are treated separately in our framework and we can easily evaluate the Laplace transform using Proposition \ref{prop:laplace-product}. In other words, Proposition \ref{prop:laplace-product} allows us to bypass the often hard calculations related to $X$ (with switching parameters) and to reduce the problem to two processes without switching parameters. Let the infinitesimal drift and diffusion parameters of $X$ be $\mu(\cdot)$ and $\sigma(\cdot)$, respectively, such that \begin{align*} \mu(x)&=\mu^B(x)\mathbf{1}_{(\ell,\alpha)}(x)+\mu^A(x)\mathbf{1}_{[\alpha,r)}(x),\\ \sigma(x)&=\sigma^B(x)\mathbf{1}_{(\ell,\alpha)}(x)+\sigma^A(x)\mathbf{1}_{[\alpha,r)}(x). \end{align*} The parameters $\mu^A(\cdot)$ and $\sigma^A(\cdot)$ are set such that $s(r)=+\infty$, while $\mu^B(\cdot)$ and $\sigma^B(\cdot)$ ensure that $s(\ell)>-\infty$. Thus, Assumption \ref{assump-s} is satisfied by $X$. As in \eqref{eq:X-time-change}, we decompose $X$ into $\hat{X}^A$ and $X^B$, which are diffusions with non-switching parameters. Recall also $X^A$ in \eqref{eq:Xhat}. Propositions \ref{prop:laplace-product} and \ref{prop:killing-rate} hold for $X$ without any need of modification. Note that the Green functions $G_\cdot^A$ and $G_\cdot^B$ should be calculated based on the parameters of $X^A$ and $X^B$ separately. We illustrate the statements in the preceding paragraphs by the example of a Brownian motion $X$ with two-valued drift: \begin{align}\label{eq:X-switching} \diff X_t&=\mu(X_t)\diff t+\diff W_t, \\ \mu(X_t)&=\begin{cases} \mu^A, \quad X_t\ge 0,\\ \mu^B, \quad X_t< 0 \nonumber \end{cases} \end{align} with constants $\mu^A,\mu^B<0$. Here $W$ is a standard one-dimensional Brownian motion.
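The computation carried out next can be cross-checked numerically. The following sketch (Python; the drift values are illustrative assumptions for the test only) verifies that the product of Proposition \ref{prop:laplace-product}, assembled from the Green functions of the two constant-drift pieces, agrees with the closed-form expression \eqref{eq:lambda-laplace-switching} derived below, and that the latter reduces to the single-drift transform \eqref{eq:lambda-laplace-bm} when $\mu^A=\mu^B$:

```python
import math

def laplace_switching(muA, muB, q):
    """Laplace transform E^0[exp(-q lambda_0)] for two-valued drift (both drifts < 0)."""
    return -2*muB/(muA - muB + math.sqrt(muA**2 + 2*q) + math.sqrt(muB**2 + 2*q))

muA, muB, q = -0.4, -1.1, 0.9    # illustrative values

# Green-function building blocks, computed separately for X^A and X^B
G_q_B = 1/(math.sqrt(muB**2 + 2*q) - muB)   # from Section BM-below, with nu = -muB
G_0_B = -1/(2*muB)
G_q_A = 1/(math.sqrt(muA**2 + 2*q) + muA)   # from Section BM-above, with nu = -muA

# product formula of Proposition laplace-product
product = G_q_A/(G_q_A + G_q_B) * G_q_B/G_0_B
assert abs(product - laplace_switching(muA, muB, q)) < 1e-12

# with equal drifts the formula reduces to nu/sqrt(nu^2 + 2q)
nu = 0.8
assert abs(laplace_switching(-nu, -nu, q) - nu/math.sqrt(nu**2 + 2*q)) < 1e-12
```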
For this kind of $X$, we can compute the Laplace transform $\E^0[e^{-q\lambda_0}]$ easily with the aid of Proposition \ref{prop:laplace-product}: we transform $X$ into two Brownian motions with constant drifts by using \eqref{eq:X-time-change} and treat these two processes separately. First, $X^B$ is a Brownian motion on $(-\infty,0]$ with drift $\mu^B$ reflecting at $0$. From Section \ref{sec:BM-below}, we obtain $G_q^B(0,0)=\frac{1}{\sqrt{(\mu^B)^2+2q}-\mu^B}$ and $G^B_0(0,0)=-\frac{1}{2\mu^B}$. Next, $X^A$ is a Brownian motion on $[0,\infty)$ with drift $\mu^A$ reflecting at $0$. From Section \ref{sec:BM-above}, we obtain $G_q^A(0,0)=\frac{1}{\sqrt{(\mu^A)^2+2q}+\mu^A}$. Then, Proposition \ref{prop:laplace-product} yields \begin{align}\label{eq:lambda-laplace-switching} \E^0[e^{-q\lambda_0}]&=\frac{G_q^A(0, 0)}{G^A_q(0, 0)+G^B_q(0,0)}\cdot \frac{G^B_q(0, 0)}{G^B_0(0, 0)}=\frac{-2\mu^B}{\mu^A-\mu^B+\sqrt{(\mu^A)^2+2q}+\sqrt{(\mu^B)^2+2q}}. \end{align} This result can be confirmed by \citet{benes1980}, who derive the Laplace transform of the transition density function (with respect to the Lebesgue measure) of $X$ in \eqref{eq:X-switching}. We use this information to obtain $G_q(0,0)=\frac{1}{\mu^A-\mu^B+\sqrt{(\mu^A)^2+2q}+\sqrt{(\mu^B)^2+2q}}$ and $G_0(0,0)=-\frac{1}{2\mu^B}$. Here we used the fact that the speed measure of $X$ is given by $m(\diff x)=2e^{2\mu(x)x}\diff x$ for $x\in\mathcal{I}$. Then, \eqref{eq:lambda-laplace} yields \begin{equation*} \E^0[e^{-q\lambda_0}]=\frac{-2\mu^B}{\mu^A-\mu^B+\sqrt{(\mu^A)^2+2q}+\sqrt{(\mu^B)^2+2q}}, \end{equation*} which is the same as \eqref{eq:lambda-laplace-switching}. \end{document}
\begin{document} \title{Broadband chip-based source of quantum noise with an electrically-controllable beam splitter} \author{E. A. Vashukevich$^1$, V. V. Lebedev$^{2,3}$, I.~V.~Ilichev$^{2,3}$, P. M. Agruzov$^{2,3}$, A. V. Shamrai$^{2,3}$, V. M. Petrov$^{3}$, T. Yu. Golubeva$^1$} \affiliation{$^1$ Saint Petersburg State University, Universitetskaya nab. 7/9, St. Petersburg, 199034 Russian Federation,\\ $^2$ Ioffe Institute, Politekhnicheskaya str. 29, St. Petersburg, 194021 Russian Federation\\ $^3$ ITMO University, Kronverksky pr. 49, St. Petersburg, 197101 Russian Federation} \begin{abstract} For the first time, the theory and a practical realization of a broadband quantum noise generator based on an original integrated optical beam splitter in the form of a Mach-Zehnder interferometer are demonstrated. The beam splitter with a double output, made on a lithium niobate substrate, provides accurate electro-optical balancing of the homodyne quantum noise detection circuit. To the best of our knowledge, the experimentally obtained excess of quantum noise over classical noise of 12 dB in a frequency band of over 4 GHz represents the best parameters among quantum noise generators known from the literature. \end{abstract} \maketitle \section{Introduction} Quantum noise generators and random number generators based on them are in demand for many applications \cite{1,2,3}. The homodyne detection of vacuum fluctuations is one of the most efficient techniques to build a quantum noise generator.
As a rule, quantum noise generators are used for the subsequent development of quantum random number generators. To do this, the analog signal must be converted into a digital code \cite{4,5,6,7,8,9,10}. Quantum vacuum fluctuations are used as the physical source of entropy of the noise generator. Its technical implementation is based on a local oscillator, a beam splitter, and a balanced detection scheme that provides suppression of classical noise and registration of quantum shot noise. From the point of view of the informational throughput of the random number generator, one of the most important parameters here is the frequency band of the quantum noise at the output of the balanced detector \cite{6}. The maximum band of homodyne detection of vacuum fluctuations achieved experimentally so far is about 1 GHz \cite{8, 9}, which is due to the use of schemes based on so-called ``volumetric'' (bulk) optics. Integrated optical beam splitters based on silicon optical waveguides \cite{10} can only partially solve the problems of volumetric optics. The rather high absorption and photosensitivity at telecommunication wavelengths (1500--1600 nm) produce sources of additional classical noise and limit the maximum optical power, while the thermoelectric control used in \cite{10} for active tuning has a high inertia and is not suitable for broadband devices. This did not allow the generation of quantum noise in a band of more than 150 MHz. To construct a quantum noise generator, a scheme based on balanced homodyne detection of a vacuum field is widely used. However, such an implementation of the generator has a significant limitation: the lack of the ability to control interference when mixing fields on the beam splitter. Violation of the ideal symmetry of the beam splitter coefficients leads to a violation of the balance on the detectors, which negatively affects the visibility of the signal and, as a result, the speed of random number generation.
To solve this problem and implement control of the balanced detector, we used an integrated optical Mach-Zehnder interferometer formed by an input Y-branch and an output X-coupler, with electro-optical control of the phase difference between the arms of the interferometer. Note that the Y-branch, despite the presence of only one input port (Fig. \ref{Fig1}), serves as a mixer of the fields of a local oscillator and vacuum fluctuations. Vacuum fluctuations penetrate into the Y-branch from the substrate as leakage modes. It is not difficult to prove that there is a second port, given the unitarity of the conversion of the input radiation to the output produced by the beam splitter. By illuminating the circuit from the output side and varying the phase difference of the fields, it is possible to see the points of the output radiation through the substrate in a situation where the central mode is suppressed by destructive interference. It is these output points that correspond to the second, unlit (vacuum) input of the beam splitter when the circuit is normally illuminated from left to right. Thus, we will describe the input Y-branch of the scheme under consideration as a four-port device \cite{11}, like an X-coupler or a volume beam splitter, on one of the ports of which there is a field in the vacuum state. Then the whole system under consideration is similar to the ``usual'' Mach-Zehnder interferometer with two inputs and two outputs. \section{Theory} Let us consider the case when the strong classical field from the local oscillator $E_{LO}(z,t)$ enters the beam splitter at input 1, and only the vacuum field $\hat E_{vac}(z,t)$ enters at input 2 (Fig. \ref{Fig1}). After mixing on the first beam splitter, the fields are given a relative phase delay $\phi$, after which the fields are mixed again on the second beam splitter and detected.
It is the phase difference $\phi$ that acts as a parameter that additionally controls the interference conditions on the second beam splitter. The transformation of the fields on the first and second beam splitters can be set by the matrices $M_{BS,1}, M_{BS,2}$: \begin{eqnarray} M_{BS,i}=\begin{pmatrix}\cos{\left(\alpha_i\right)}&\sin{\left(\alpha_i\right)}\\\sin{\left(\alpha_i\right)}&-\cos{\left(\alpha_i\right)}\end{pmatrix},\;\;i=1,2. \end{eqnarray} Here the parameters $\alpha_1,\alpha_2$ are set so that $\cos{\left(\alpha_i\right)}$ is equal to the amplitude transmission coefficient of the beam splitter $t_i$, and $\sin{\left(\alpha_i\right)}$ is correspondingly equal to the reflection coefficient $r_i$. \begin{figure*} \caption{ a) Definition of the coefficients $r_i$ and $t_i$ for the volumetric beam splitter cube. b), c) The possible types of tunable integrated optical beam splitters. b) A beam splitter based on two X-couplers that form a Mach-Zehnder interferometer with 2 inputs and 2 outputs; input 1 is directly connected by optical fiber to the source of $E_{LO}$.} \label{Fig1} \end{figure*} It can be noted that with the chosen parametrization, the conservation law $r_i^2+t_i^2=1$ is fulfilled automatically.
The phase delay in one of the arms of the interferometer is given by the matrix: \begin{eqnarray} M_{Ph}=\begin{pmatrix}\exp{\{i \phi/2\}}&0\\0&\exp{\{-i \phi/2\}}\end{pmatrix} \end{eqnarray} Next, one can write down the expression for the fields at the output of the scheme $\hat E_{out,1},\hat E_{out,2}$ in terms of the initial fields: \begin{eqnarray} &&\binom{\hat E_{out,1}}{\hat E_{out,2}}=U\binom{ E_{LO}}{\hat E_{vac}}\\ &&U=M_{BS,2}\cdot M_{Ph}\cdot M_{BS,1}\label{4} \end{eqnarray} Here the elements of the transformation matrix are: \begin{eqnarray} &&U_{11}=e^{i\phi/2}\left(\cos{\left(\alpha_1\right)}\cos{\left(\alpha_2\right)}+e^{-i \phi}\sin{\left(\alpha_1\right)}\sin{\left(\alpha_2\right)}\right)\\ &&U_{12}=e^{i\phi/2}\left(\sin{\left(\alpha_1\right)}\cos{\left(\alpha_2\right)}-e^{-i\phi}\cos{\left(\alpha_1\right)}\sin{\left(\alpha_2\right)}\right)\;\;\;\;\;\;\\ &&U_{21}=e^{i\phi/2}\left(\cos{\left(\alpha_1\right)}\sin{\left(\alpha_2\right)}-e^{-i\phi}\sin{\left(\alpha_1\right)}\cos{\left(\alpha_2\right)}\right)\;\;\;\;\;\;\\ &&U_{22}=e^{-i\phi/2}\left(\cos{\left(\alpha_1\right)}\cos{\left(\alpha_2\right)}+e^{i\phi}\sin{\left(\alpha_1\right)}\sin{\left(\alpha_2\right)}\right)\;\;\;\;\;\; \end{eqnarray} The photocurrent operators on both detectors can be written as follows: \begin{eqnarray} &&\hat j_1= \hat E^\dag_{out,1}\hat E_{out,1}=(U^*_{11}E^*_{LO}+U^*_{12}\hat E^\dag_{vac})\nonumber\\&&\times(U_{11}E_{LO}+U_{12}\hat E_{vac})\\ &&\hat j_2= \hat E^\dag_{out,2}\hat E_{out,2}=(U^*_{21}E^*_{LO}+U^*_{22}\hat E^\dag_{vac})\nonumber\\&&\times(U_{21}E_{LO}+U_{22}\hat E_{vac}) \end{eqnarray} Now, for convenience, let us move on to the quadrature components (with a numeric representation for the classical field and operators for the quantum one): $E_{LO}=\varepsilon^\prime+ i\varepsilon^{\prime\prime}$, $\hat E_{vac}=\hat x+ i \hat y$; in addition, let us denote $|E_{LO}|^2=I_{LO}$. The differential signal $\hat j_-(\phi)$ can be written as follows (we denote $\alpha_\pm=2(\alpha_1\pm\alpha_2)$): \begin{eqnarray}&&\hat j_-(\phi)=\hat j_1-\hat j_2=\left[I_{LO}-\hat x^2-\hat y^2\right]\nonumber\\ &&\times\left[\cos^2(\phi/2)\cos(\alpha_-)+\sin^2(\phi/2)\cos(\alpha_+)\right]\nonumber\\ &&+2\left[\varepsilon^\prime\hat x+\varepsilon^{\prime\prime}\hat y\right]\nonumber\\ &&\times\left[\cos^2(\phi/2)\sin(\alpha_-)+\sin^2(\phi/2)\sin(\alpha_+)\right]\nonumber\\&&-2 \left[\varepsilon^\prime\hat y-\varepsilon^{\prime\prime}\hat x\right]\sin (\phi) \sin (2 \alpha_2) \end{eqnarray} In the case when the beam splitters are symmetric, $r_i=1/\sqrt{2}=t_i$, the transformation matrix has the simple form: \begin{eqnarray} U=\begin{pmatrix}\cos{\phi/2}&i \sin{\phi/2}\\i \sin{\phi/2}&\cos{\phi/2}\end{pmatrix} \end{eqnarray} The difference photocurrent with symmetric beam splitters $\hat j^{s}_-(\phi)$ is modulated by the phase difference $\phi$: \begin{eqnarray} &&\hat j^{s}_-(\phi)=(I_{LO} -\hat x^2 - \hat y^2)\cos{\phi} -2 (\varepsilon^\prime \hat y -\varepsilon^{\prime\prime}\hat x)\sin{\phi}\;\;\;\; \end{eqnarray} As can be seen, at the phase difference $\phi=\pi/2$ the first term, proportional to the intensity of the fields, is completely suppressed, and homodyne detection of the quadrature components of the quantum field is carried out in the Mach-Zehnder interferometer scheme. For simplicity, here and below we will select the phase of the local oscillator so that $\varepsilon^{\prime\prime}=0$; then the difference signal can be written as follows: \begin{eqnarray} \hat j^{s}_-(\pi/2)=-2\varepsilon^\prime \hat y \label{9} \end{eqnarray} Since the value of the quadrature component measured in the experiment is a random variable, we have the opportunity to generate a sequence of truly random numbers in the proposed scheme.
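The matrix algebra above can be checked numerically. The following sketch (Python/numpy; the angles are illustrative assumptions) verifies that $U=M_{BS,2}\cdot M_{Ph}\cdot M_{BS,1}$ is unitary for arbitrary $\alpha_1,\alpha_2$ and reduces to the stated simple form for symmetric beam splitters:

```python
import numpy as np

def bs(alpha):
    """Beam-splitter matrix M_BS with t = cos(alpha), r = sin(alpha)."""
    c, s = np.cos(alpha), np.sin(alpha)
    return np.array([[c, s], [s, -c]])

def phase(phi):
    """Phase-delay matrix M_Ph."""
    return np.diag([np.exp(1j*phi/2), np.exp(-1j*phi/2)])

def U(a1, a2, phi):
    return bs(a2) @ phase(phi) @ bs(a1)

phi = 0.7
# for arbitrary (asymmetric) splitters the transformation stays unitary
u = U(0.6, 0.9, phi)
assert np.allclose(u.conj().T @ u, np.eye(2))

# symmetric case r_i = t_i = 1/sqrt(2), i.e. alpha_i = pi/4
us = U(np.pi/4, np.pi/4, phi)
expected = np.array([[np.cos(phi/2), 1j*np.sin(phi/2)],
                     [1j*np.sin(phi/2), np.cos(phi/2)]])
assert np.allclose(us, expected)
```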
As can be seen from (\ref{9}), the measured noise signal is amplified in proportion to the magnitude of the local-oscillator field. Now we will take into account the differences between the real experimental situation and the ideal one described above. First of all, we take into account the possible asymmetry of the beam splitter. Let us assume that one of the beam splitters (for example, the output one) is asymmetric, choosing $r_2=\sqrt{0.49}, t_2=\sqrt{0.51}$. Now we write down the dependence of the difference current in the case of an asymmetric beam splitter $\hat j^{a}_-(\phi)$ on the phase, leaving the field of the local oscillator purely real: \begin{eqnarray} \hat j^{a}_-(\phi)=&& 0.9998 \cos (\phi) \left(I_{LO}-\hat x^2-\hat y^2\right)\nonumber\\&&-2\cdot0.9998\sin (\phi) \varepsilon^\prime \hat y-2\cdot0.02\varepsilon^\prime \hat x \label{10} \end{eqnarray} As one can see, the second quadrature component begins to appear in the signal. Moreover, the last term in (\ref{10}) is phase-independent. However, the contribution from the first term, proportional to the intensity of the fields, which is the most harmful for noise generation, can be completely compensated by choosing a suitable phase of the modulator. When the phase difference is $\phi=\frac{\pi}{2}$, we get: \begin{eqnarray} &&\hat j^{a}_-(\frac{\pi}{2})=-2\cdot0.9998 \varepsilon^\prime \hat y-2\cdot0.02\varepsilon^\prime \hat x \end{eqnarray} Note that the obtained form of the signal allows us to speak about the measurement of a generalized quadrature of the noise field, that is, a measurement in a basis rotated by some angle. Since the distribution of the vacuum field on the phase plane is absolutely symmetric, the rotation of the basis does not introduce any changes into the operation of the noise generator.
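The numerical coefficients in (\ref{10}) follow directly from $\sin(2\alpha_2)=2t_2r_2\approx 0.9998$ and $\cos(2\alpha_2)=t_2^2-r_2^2=0.02$; a quick check (Python):

```python
import math

# amplitude coefficients for the asymmetric splitter r2 = sqrt(0.49), t2 = sqrt(0.51)
r2, t2 = math.sqrt(0.49), math.sqrt(0.51)
alpha2 = math.atan2(r2, t2)           # so that cos(alpha2) = t2, sin(alpha2) = r2

sin2a2 = math.sin(2*alpha2)           # multiplies the phase-dependent terms
cos2a2 = math.cos(2*alpha2)           # multiplies the phase-independent x-quadrature term

assert abs(sin2a2 - 2*t2*r2) < 1e-12
assert abs(cos2a2 - (t2**2 - r2**2)) < 1e-12
assert abs(sin2a2 - 0.9998) < 1e-4    # coefficient 0.9998 in eq. (10)
assert abs(cos2a2 - 0.02) < 1e-10     # coefficient 0.02 in eq. (10)
```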
Thus, in the absence of other factors of ``imperfection'' of the scheme, the asymmetry of the beam splitter would be insignificant for the generation of random numbers. The key factor here, however, is that if the beam splitter is not symmetric, an additional classical noise component will be present in the difference current, which is completely subtracted in the symmetric circuit. We show this explicitly by introducing losses $\eta_1, \eta_2$ in the two arms of the interferometer, associated with the presence of classical noise. Then, instead of Eq. (\ref{4}) used above, the field at the output of the interferometer is given by the expression: \begin{eqnarray} &&\binom{\hat E_{out,1}}{\hat E_{out,2}}=\tilde U\binom{ E_{LO}}{\hat E_{vac}}\\ &&\tilde U=M_{BS,2}\cdot\begin{pmatrix}\eta_1&0\\0&\eta_2\end{pmatrix} M_{Ph}\cdot M_{BS,1} \end{eqnarray} We repeat all the calculations made without taking into account losses, preserving the assumptions made earlier about the symmetry of the first beam splitter and the realness of the field of the local oscillator. The difference current $\hat j_-(\phi)$ then has the form: \begin{eqnarray} &&\hat j_-(\phi)=\frac{1}{2} \cos(2\alpha_2) \left(\eta_1^2-\eta_2^2\right) \left(I_{LO}+\hat x^2+\hat y^2\right)\nonumber\\&&+\eta_1\eta_2 \sin(2\alpha_2)\cos(\phi)\left(I_{LO}-\hat x^2-\hat y^2\right)\nonumber\\ &&+\varepsilon^\prime \hat x \cos (2\alpha_2) \left(\eta_1^2+\eta_2^2\right)-2\eta_1\eta_2\varepsilon^\prime \hat y\sin (2 \alpha_2) \sin (\phi )\;\;\;\;\label{21} \end{eqnarray} In the case of a symmetric output beam splitter, $\alpha_2=\pi/4$, the expression (\ref{21}) is reduced by the choice of phase $\phi=\pi/2$ to (\ref{9}) multiplied by the extra factor $\eta_1\eta_2$.
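As a numerical sanity check of (\ref{21}) (Python/numpy; the splitter and loss values are illustrative assumptions matching the estimate made below), the coefficient of $I_{LO}$ vanishes for a symmetric output splitter at $\phi=\pi/2$, while for a slightly asymmetric splitter a small phase offset of order $10^{-4}\pi$ restores the cancellation:

```python
import numpy as np

def I_LO_coeff(phi, a2, h1, h2):
    """Coefficient of I_LO in the lossy difference current, from the first
    two terms of eq. (21): (1/2)cos(2a2)(h1^2-h2^2) + h1 h2 sin(2a2) cos(phi)."""
    return 0.5*np.cos(2*a2)*(h1**2 - h2**2) + h1*h2*np.sin(2*a2)*np.cos(phi)

# symmetric output splitter: the intensity term vanishes at phi = pi/2
assert abs(I_LO_coeff(np.pi/2, np.pi/4, 0.9, 0.85)) < 1e-12

# slightly asymmetric splitter with asymmetric losses (illustrative values)
a2 = 0.5*np.arccos(-0.02)
h1, h2 = 0.9, 0.85
# phase solving the balance condition for the I_LO coefficient
phi = np.arccos(-(h1**2 - h2**2)/(2*h1*h2)/np.tan(2*a2))
assert abs(I_LO_coeff(phi, a2, h1, h2)) < 1e-10
# the required offset from pi/2 is a few units of 1e-4 * pi
offset = abs(phi - np.pi/2)/np.pi
assert 1e-4 < offset < 1e-3
```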
However, as one can see, for any asymmetric beam splitter in (\ref{21}) there remains a phase-independent first term containing the intensity of the field of the local oscillator, which will dominate the signal of interest. This term can be removed by adjusting the phase $\phi$ and selecting it so that the following relation holds: \begin{eqnarray} \cos(\phi)\approx\frac{\left(\eta_1^2-\eta_2^2\right)}{2\eta_1\eta_2}\cot(2\alpha_2) \end{eqnarray} Such a choice of the modulator phase leads to complete mutual compensation of the first two terms in expression (\ref{21}). As a result, the expression for the difference photocurrent can again be represented as the amplification of a generalized noise quadrature (similar to Eq. (\ref{10})), where the gain is smaller than the original one by a factor of $\eta_1\eta_2$. It is interesting to estimate the amount of phase adjustment required for balancing the circuit. For example, for $\cos(2\alpha_2)=-0.02, \eta_1=0.9,\eta_2=0.85$, to compensate for the term containing $I_{LO}$ we need to add to the phase $\phi=\pi/2$ only about $3\times10^{-4}\pi$. Thus, the use of a phase modulator makes it possible to balance the circuit in the presence of not only asymmetric detector operation, but also various classical noises in the interferometer channels. \section{Experiment} We have proposed and experimentally implemented a broadband quantum noise generator using an integrated optical Mach-Zehnder interferometer with a single input and a double output as an electrically-controlled beam splitter (BS), made on the basis of optical waveguides in a lithium niobate crystal substrate (Fig. \ref{Fig2}). The $\hbox{LiNbO}_3$ congruent single crystal plate of the X-cut had the size $5\times50\times1$ mm$^3$. The single-mode channel optical waveguides were manufactured using the technology of thermal diffusion of Ti ions \cite{12}. Light propagated along the Y crystallographic axis.
Push-pull electrodes were deposited along one of the arms of the interferometer, which made it possible to adjust the amplitude transmission coefficient of the beam splitter $t_i$ using the electro-optical effect. \begin{figure} \caption{ The experimental realization: a quantum noise generator based on an integrated-optic Mach-Zehnder beam splitter with electrical control. LO is the local oscillator, BS is the beam splitter, FA is the fiber-optic assembly, BD is the balanced detector, OPC is the operating point control system. } \label{Fig2} \end{figure} A single-frequency laser with distributed feedback, with a wavelength of 1552 nm, a radiation linewidth of 170 kHz, and a power of 100 mW, was used as the local oscillator (LO). Special attention was paid to the design of the balanced detector (BD). The radiation from the beam splitter outputs was transmitted through the fiber assembly (FA) to the InGaAs p-i-n photodiodes (A, B) that form the balanced detector. The difference of the optical paths of the fiber-optic assembly did not exceed 0.1 of the operating wavelength. The band of each photodiode was 10 GHz, which provided a band of the balanced detector above 4 GHz. For this purpose, photodiodes with the closest possible frequency response and the same sensitivity of $\approx$0.78\,A/W were selected. A high saturation current ($\sim$30 mA) and a dark current of less than 1 $\mu$A provided a high dynamic range. The operating point control system (OPC) provided accurate balancing of the output currents ($<$0.1$\%$) \cite{13}. To suppress classical noise, anti-phase subtraction of synchronous signals was provided by equalizing the optical and electrical paths in the balanced circuit. The efficiency of the balanced photodetector was evaluated by the suppression of common-mode interference. To emulate a common-mode signal, the laser radiation was modulated in amplitude, while a differential signal was produced by applying modulation to the tunable integrated optical BS.
The suppression was defined as the ratio of the frequency response to a differential signal to the frequency response to a common-mode signal. Common-mode interference suppression of more than 15 dB was observed in the band over 3 GHz. The decrease in the suppression efficiency with increasing frequency was due to the increased requirements for the accuracy of phase matching and the difference in the frequency responses of the photodiodes. \begin{figure} \caption{ Experimental results. a): the spectral power density $N(f)$ at the output of the balanced detector. 1: local oscillator ``OFF'', 2: local oscillator ``ON''; b): the electrical power $P_{el}$.} \label{Fig3} \end{figure} A part of the electrical signal from the output of the balanced detector was sent to the OPC unit, which generated a feedback signal, a control voltage $\pm U_C$. This voltage was applied to the electrodes, which made it possible to change the phase delay between the arms and, consequently, to control the splitting coefficient of the beam splitter. The accuracy of the splitting control was no worse than 0.1$\%$ in power. Measurements of the noise signal at the output showed an excess of the spectral power density of the detected quantum noise of more than 12 dB above the level of technical noise of the measuring system with a preamplifier, in a band of more than 4 GHz (Fig. \ref{Fig3}, a). The level of classical noise caused by the relative intensity noise (RIN) of the laser is much smaller than the quantum noise, which is confirmed by the proximity to the linear dependence characteristic of quantum shot noise (Fig. \ref{Fig3}, b).
The power level of the recorded quantum noise is in good agreement with the theoretical estimate, which predicts linear growth: \begin{eqnarray} N(f)=2qP_{opt}AR_0|H_{pd}(f)|^2, \end{eqnarray} where $N(f)$ is the spectral power density, $q$ is the electron charge, $P_{opt}$ is the power at the input of the photodiodes, $A$ is the direct-current sensitivity of the photodiodes, $H_{pd}(f)$ is the transfer function of the balanced photodetector, and $R_0$ is the output load resistor of the balanced detector. According to the estimates obtained from the literature, we have developed a broadband quantum noise generator with, probably, the highest parameters at the present time. Experimentally, an excess of quantum noise over classical noise of more than 12 dB was obtained in a band of more than 3 GHz, which is among the best characteristics for this type of generator known from the literature. This generator is based on homodyne detection of quantum fluctuations, using a controlled beam splitter based on a Mach-Zehnder waveguide interferometer on a lithium niobate substrate and a high-frequency balanced detector. \section{Conclusion} We have designed and experimentally implemented a quantum noise source with remarkable characteristics. Three factors mainly determine the broadband performance of the source, with a spectral bandwidth of more than 3 GHz. First of all, such a broad generation band turns out to be possible due to the integrated-optical chip-based implementation of homodyne detection with the field mixer in the form of a Y-branch. It should be noted that, in contrast to traditional volumetric beam splitters or waveguide X-couplers, whose reflection and transmission coefficients depend quite critically on the light wavelength \cite{15}, integrated optical Y-branches are much more broadband elements. Thus, the spectral width of our generator is determined by the spectral characteristics of the detectors, not the beam splitters.
This factor allowed us to build a theory without considering the spectral dependence of the beam-splitter coefficients. The second important factor that makes it possible to achieve record results is the development of a feedback loop that allows us to quickly and accurately control the delay in the arm of the Mach-Zehnder interferometer. As shown in the theoretical part of the paper, fine tuning of the balance allows getting rid of the influence of classical noise in the system. The presence of asymmetric losses in the interferometer channels leads to degradation of the interference and to uncompensated currents, proportional to the homodyne intensity, mixing into the signal. Even a weak asymmetry can significantly impair the signal visibility because of the large homodyne amplitude. Controlling the interference phase using a feedback loop enabled a visibility of the quantum noise signal above the classical noise of more than 12 dB. It should be noted that in an ideal situation with no classical noise in the system, the asymmetry of the beam splitter does not worsen the observation parameters of the quantum noise but only rotates the observation basis, which is not significant for a symmetric noise distribution. The correctness of the quantum-mechanical description of the measurement in the presence of a feedback loop should also be discussed here. It is a well-known situation when feedback stabilizes the photoelectron flux but degrades the noise characteristics of the light that produces this flux \cite{16,17,18}. In that case, the direct connection between the operators of the light's quadratures and the photocurrent operator is lost. There is no such problem in our case, since the feedback controls not the quantum system but the classical one. Finally, the third factor is the experimental selection of detectors with the closest possible characteristics.
The selection of suitable photodiodes is an essential factor in improving the performance of the circuit, and we suggest that it is this block of the circuit that defines the cut-off frequency of the spectral characteristics of the device as a whole. \end{document}
\begin{document} \title[$H$-stability of Syzygy Bundles on Regular Surfaces]{$H$-stability of Syzygy Bundles on some regular Algebraic Surfaces} \author{H. Torres-L\'opez} \address{CONACyT - U. A. Matem\'aticas, U. Aut\'onoma de Zacatecas \newline Calzada Solidaridad entronque Paseo a la Bufa, \newline C.P. 98000, Zacatecas, Zac. M\'exico.} \email{[email protected]} \author{A. G. Zamora} \address{U. A. Matem\'aticas, U. Aut\'onoma de Zacatecas \newline Calzada Solidaridad entronque Paseo a la Bufa, \newline C.P. 98000, Zacatecas, Zac. M\'exico.} \email{[email protected]} \thanks{The first author was partially supported by project FORDECYT 265667. The second author was partially supported by Conacyt Grant CB 2015-257079} \subjclass[2000]{14J60} \keywords{Syzygy bundles, $H$-stability, Algebraic Surfaces} \newtheorem{Theorem}{Theorem}[section] \newtheorem{Lemma}[Theorem]{Lemma} \newtheorem{Definition}[Theorem]{Definition} \newtheorem{Proposition}[Theorem]{Proposition} \newtheorem{Corollary}[Theorem]{Corollary} \newtheorem{Remark}[Theorem]{Remark} \newtheoremstyle{TheoremNum} {\topsep}{\topsep} {\itshape} {} {\bfseries} {.} { } {\thmname{#1}\thmnote{ \bfseries #3}} \theoremstyle{TheoremNum} \newtheorem{rtheorem}{Theorem} \newtheorem{rlema}{Lemma} \begin{abstract} Let $L$ be a globally generated line bundle over a smooth irreducible complex projective surface $X$. The syzygy bundle $M_{L}$ is the kernel of the evaluation map $H^0(L)\otimes\mathcal O_X\to L$. We prove the $L$-stability of $M_L$ for Hirzebruch surfaces, del Pezzo surfaces and Enriques surfaces. The $(-K_X)$-stability of syzygy bundles $M_L$ over del Pezzo surfaces is also obtained. \end{abstract} \maketitle \section{Introduction} Let $X$ be a smooth irreducible projective variety over $\mathbb{C}$ and let $L$ be a globally generated line bundle over $X$ (from now on simply a generated bundle).
The kernel $M_L$ of the evaluation map $H^0(L)\otimes \mathcal{O}_X\rightarrow L$ fits into the following exact sequence \begin{equation}\label{dualspam} \xymatrix{ 0 \ar[r]^{} & M_{L} \ar[r]^{}& H^0(L)\otimes \mathcal{O}_X \ar[r]^{} & L \ar[r]^{} & 0.} \end{equation} \noindent The bundle $M_{L}$ is called a syzygy bundle. The rank of $M_{L}$ is $h^0(L)-1$. The vector bundles $M_L$ have been extensively studied from different points of view. When $X$ is a projective irreducible smooth curve of genus $g\geq 1$, L. Ein and R. Lazarsfeld showed in \cite{ein} that the syzygy bundle $M_L$ is stable for $d:=\text{deg}(L)>2g$ and semi-stable for $d=2g$ (see also \cite{butler}). After this, the semi-stability of $M_L$ was proved for line bundles with $\text{deg}(L)\geq 2g-\text{Cliff}(C)$ (see \cite[Corollary 5.4]{mistretastopino} and \cite[Theorem 1.3]{camerecurvas}). In \cite{ram}, Paranjape and Ramanan proved that $M_{K_C}$ is semi-stable and is stable if $C$ is non-hyperelliptic. In \cite{schneider}, Schneider showed that $M_L$ is semi-stable for a general curve $C$ (see also \cite{but}). The semi-stability for incomplete linear series over general curves was proved in \cite{leticiaynewstead}. In \cite{flenner}, Flenner showed the stability of $M_L$ for projective spaces. The stability of syzygy bundles for incomplete linear series in projective spaces has been studied by several authors (see \cite{coanda}, \cite{rosa}, \cite{brenner}). Recently, in \cite{CaucciLahoz2020}, the authors proved that given an Abelian variety $A$ and any ample line bundle $L$ on $A$, the syzygy bundle $M_{L^d}$ is $L$-stable if $d\ge 2$. In the case of a projective surface $X$, we must start by fixing a polarization $H$ and then ask for the $H$-stability of the syzygy bundle $M_L$.
Recall that $H$-stability for a vector bundle $E$ on $X$ means that for any sub-bundle $F\subset E$ $$\mu_H(F):= \frac{c_1(F).H}{\text{rk}(F)} < \mu_H(E):= \frac{c_1(E).H}{\text{rk}(E)}.$$ In the pioneering work of Camere (\cite{camere}) it was proved that $M_L$ is $L$-stable for any ample and generated $L$ on a $K3$ surface and for any generated $L$ with $L^2\ge 14$ on an abelian surface $X$. In \cite{lazarsfeldsurfaces}, Ein, Lazarsfeld and Mustopa fixed an ample divisor $L$ and an arbitrary divisor $D$ over $X$, and setting $L_d=dL+D$ ($d\in \mathbb{N}$) showed that $M_{L_d}$ is $L$-stable for $d\gg 0$. Most of our results derive from the following: \vskip0.1cm \textbf{Theorem 2.2} {\it Let $X$ be a smooth projective surface. Let $L$ be an ample and generated line bundle over $X$ and $H$ be a divisor such that an irreducible and non-singular curve $C$ exists in $|H|$. Assume that \begin{enumerate} \item $h^1(L-H)=0$; \item $h^0(H)\geq h^0(L\vert_C)$; \item $M_{L\vert_C}$ is semi-stable. \end{enumerate} Then $M_L$ is $H$-stable.} \vskip0.1cm Section 2 is devoted to the proof of Theorem \ref{Theorem2}. If $L=H$ and $X$ is regular, then conditions (1) and (2) are automatically satisfied and in order to prove the stability of $M_L$ it is sufficient to prove the semi-stability of $M_{L\vert_C}$. In section 3, using this idea, we obtain the $L$-stability of the syzygy bundle $M_L$ in the following cases: if $\vert L \vert$ contains either a genus $\le 1$ curve or a Brill--Noether general curve (Corollary \ref{Cor1}), if $X$ is either a del Pezzo surface (Corollary \ref{delPezzo}) or a Hirzebruch surface (Corollary \ref{F_n}), and if $X$ is an Enriques surface under the condition that $\text{Cliff}(C)\geq 2$ (Corollary \ref{Enriques}). Moreover, we study the stability of syzygy bundles over del Pezzo surfaces with respect to the anti-canonical polarization: \vskip0.1cm \textbf{Theorem 3.7} {\it Let $X$ be a del Pezzo surface and let $L$ be a generated line bundle on $X$.
If $\vert L \vert$ contains an irreducible curve, then the vector bundle $M_L$ is $(-K_X)$-stable.} \vskip0.1cm \vskip0.1cm \textbf{Conventions:} We work over the field of complex numbers $\mathbb{C}$. Given a coherent sheaf $\mathcal{G}$ on a variety $X$ we write $h^i(\mathcal{G})$ to denote the dimension of the $i$-th cohomology group $H^i(X,\mathcal{G})$. The sheaf $K_X$ will denote the canonical sheaf on $X$. A surface always means a smooth (or non-singular) projective complex irreducible surface. \section{Stability on some regular surfaces} The aim of this section is to prove Theorem \ref{Theorem2}. Let $X$ be a projective, irreducible and non-singular surface. Let $H$ be an ample line bundle on $X$. The $H$-slope of a vector bundle $E$ is given by \begin{eqnarray*} \mu_H(E):=\frac{c_1(E).H}{\text{rk}(E)}, \end{eqnarray*} where $c_1(E)$ is the first Chern class of $E$ and $\text{rk}(E)$ is the rank of $E$. In particular, the $H$-slope of $M_L$ is given by \begin{eqnarray*} \mu_H(M_L)=\frac{c_1(M_L).H}{\text{rk}(M_L)}=\frac{-L.H}{h^0(L)-1}. \end{eqnarray*} Sometimes we write $\mu(E)$ instead of $\mu_H(E)$ when there is no confusion about the choice of $H$. Note that if $C\in \vert H \vert $ is non-singular projective and irreducible, we can compute the $H$-slope of a vector bundle $E$ as \begin{eqnarray*} \mu_H(E)=\mu(E\vert_C):=\frac{\text{deg}(E\vert_C)}{\text{rk}(E)}, \end{eqnarray*} where $\text{deg}(E\vert_C)$ is the degree of the vector bundle $E$ restricted to the curve $C$. Assume that there exists an irreducible and non-singular curve $C$ in the linear system $\vert H \vert$ and take a point $x\in C$. For the remainder of the argument, $H$, $C \in \vert H \vert$ and $x\in C$ will be fixed. Note that $C\in \vert H\otimes m_x\vert$.
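As a purely illustrative numerical sketch of these slope formulas (a hypothetical example, not taken from the paper), take $X=\mathbb{P}^2$ and $L=H=\mathcal{O}_{\mathbb{P}^2}(2)$, so that $C\in|H|$ is a smooth conic:

```python
from fractions import Fraction

# Hypothetical example: X = P^2, L = H = O(2); C in |H| is a smooth conic (g = 0).
d = 2
h0_L = (d + 1) * (d + 2) // 2          # h^0(O(2)) = 6
LH = d * d                             # L.H = 4
mu_ML = Fraction(-LH, h0_L - 1)        # mu_H(M_L) = -L.H / (h^0(L) - 1) = -4/5

# Restriction to C: deg(L|_C) = L.H = 4 and h^0(L|_C) = deg + 1 = 5 on a rational curve.
deg_LC = LH
h0_LC = deg_LC + 1
mu_MLC = Fraction(-deg_LC, h0_LC - 1)  # mu(M_{L|_C}) = -4/4 = -1

# The strict inequality mu_H(M_L) = mu((M_L)|_C) > mu(M_{L|_C}) used later in the
# proof of Lemma \ref{MainTheorem} holds here:
assert mu_ML > mu_MLC
print(mu_ML, mu_MLC)
```

The example only checks arithmetic consistency of the two slope formulas; the geometric content (semi-stability of the restricted bundle) is what the rest of the section establishes.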
Given any sub-bundle $F\subset M_{L}$ we have, thanks to the condition $h^1(L-H)=0$, a commutative diagram: \begin{equation} \label{eq:ES} \xymatrix{0 \ar[r] & K \ar[r] \ar@{^{(}->}[d] & F|_C \ar[r] \ar@{^{(}->}[d] & N \ar[r] \ar@{^{(}->}[d]^{} & 0\\ 0\ar[r]^{} & H^0(L-H)\otimes \mathcal{O}_C \ar[r]_{}& (M_{L})|_C \ar[r]_{} & M_{L\vert_C} \ar[r]^{} & 0 ,\\ } \end{equation} (see \cite{lazarsfeldsurfaces}, page 76, and also \cite{camere}, proof of Proposition 3). \begin{Lemma}\label{MainTheorem} Let $X$ be a smooth projective surface. Let $L$ be an ample and generated line bundle over $X$ and $H$ be a divisor such that an irreducible and non-singular curve $C$ exists in $\vert H \vert$. Assume that the following statements are satisfied: \begin{enumerate} \item $h^1(L-H)=0$. \item $M_{L\vert_C}$ is semi-stable. \end{enumerate} Let $F\subset M_L$ be a sub-bundle with $\mu_H(F)\geq \mu_H(M_L)$. Then, $$ \text{rk}(F)\geq h^0(H)-1 + \text{rk}(K).$$ \end{Lemma} The proof of the Lemma is quite analogous to that of Lemma 1.1 in \cite{lazarsfeldsurfaces}. We only need to replace $B$ by $L-H$ and $L_d-B$ by $H$ and to compute the dimension of a fiber. We include a proof for the reader's convenience. \begin{proof} Note that for any vector bundle $E$ on $X$, $\mu_H(E)=\mu(E\vert_C)$. Therefore the hypothesis $\mu_H(F)\geq \mu_H(M_L)$ is equivalent to $\mu(F\vert_C)\ge \mu ((M_{L})\vert_C)$. If $K=0$ in the exact sequence \eqref{eq:ES}, then \begin{eqnarray*} \mu(N)=\mu(F\vert_C)\geq \mu((M_{L})\vert_C)>\mu(M_{L\vert_C}) \end{eqnarray*} gives a contradiction with the semi-stability of $M_{L\vert_C}$ and thus we have $K\neq 0$. The multiplication map of sections \begin{eqnarray*} \nu: \mathbb{P}(H^0(H\otimes m_x)) \times \mathbb{P}(H^0(L-H))\rightarrow \mathbb{P}(H^0(L\otimes m_x)) \end{eqnarray*} is a finite morphism.
After localizing at the given point $x$ we obtain a commutative diagram: \begin{equation*} \xymatrix{ & {(K)}_x \ar@{^{(}->}[r] \ar@{^{(}->}[d] & (F|_C)_{x} \ar@{^{(}->}[d] & & \\ & H^0(L-H) \ar@{^{(}->}[r]& {((M_{L})\vert_C)}_x=H^0(L\otimes m_x). & & \\ } \end{equation*} Let $Z= \nu^{-1}(\mathbb{P}((F|_C)_x))$; since $\nu$ is finite, $\dim \mathbb{P}((F|_C)_{x}) \ge \dim Z$. Given $s\in H^0( H\otimes m_x)$, the section $s$ induces the injective morphism \begin{eqnarray*} H^0(L-H) &\rightarrow & H^0(L\otimes m_x ) \\ \phi &\mapsto & s\otimes \phi. \end{eqnarray*} Therefore for any $\phi$ in the image of the morphism $(K)_x \to H^0(L-H)$, we have that $(s,\phi)\in Z$ and $\pi_2(s,\phi)=s$. It follows that the projection \begin{eqnarray*} \pi_2:Z\rightarrow \mathbb{P}(H^0(H\otimes m_x)) \end{eqnarray*} is dominant and the dimension of the general fibre is greater than or equal to $\text{rk}(K)-1$. Hence \begin{eqnarray}\label{cotaF} \text{rk}(F)\geq h^0(H)+\text{rk}(K)-1. \end{eqnarray} \end{proof} \begin{Theorem}\label{Theorem2} Let $X$ be a smooth projective complex surface. Let $L$ be an ample and generated line bundle over $X$ and $H$ be a divisor such that an irreducible and non-singular curve $C$ exists in $\vert H \vert$. Assume that: \begin{enumerate} \item $h^1(L-H)=0$, \item $h^0(H)\geq h^0(L\vert_C)$, \item $M_{L\vert_C}$ is semi-stable. \end{enumerate} Then $M_L$ is $H$-stable. \end{Theorem} \begin{proof} Assume that $M_L$ is not $H$-stable and let $F\subset M_L$ be a sub-bundle such that $\mu_H(F)\geq \mu_H(M_L)$. By Lemma \ref{MainTheorem}, we have \begin{eqnarray*} \text{rk}(F)\geq h^0(H)+\text{rk}(K)-1. \end{eqnarray*} We can repeat the proof of Lemma 1.2 in \cite{lazarsfeldsurfaces} replacing $B$ by $L-H$ and $L_d-B$ by $H$.
More specifically, in Lemma 1.2 in \cite{lazarsfeldsurfaces} an inequality (inequality (*) at the beginning of page 78) is obtained that, translated to our situation, just as we did with Lemma \ref{MainTheorem}, yields: $$\text{rk}(K)\ge h^0(L-H)\cdot\left(\frac{\text{rk} (F)}{\text{rk}(M_L)}\right).$$ Therefore, we have: \begin{eqnarray*} \text{rk}(F)&\geq& h^0(H)+\text{rk}(K)-1\\ &\geq & h^0(H)+ h^0(L-H)\cdot \left(\frac{\text{rk}(F)}{\text{rk}(M_{L})}\right)-1. \end{eqnarray*} That is, \begin{eqnarray*} \text{rk}(F)\geq \frac{(h^0(H)-1)\text{rk}(M_L)}{\text{rk}(M_L)-h^0(L-H)}. \end{eqnarray*} Since $\text{rk}(M_L)-h^0(L-H)=\text{rk}(M_{L\vert_C})=h^0(L\vert_C)-1$, from the hypothesis $(2)$ we obtain that $\text{rk}(F)\geq \text{rk}(M_L)$, which is impossible because $F$ is a sub-bundle of $M_L$. This proves the theorem. \end{proof} \section{Applications to regular surfaces} Theorem \ref{Theorem2} is especially useful when $X$ is a regular surface and $L=H$. In this case conditions $(1)$ and $(2)$ are automatically satisfied and only $(3)$ remains to be verified. Also, as $H=L$ is ample and generated, we can assume that $h^0(L)\ge 3$. Thus, a curve $C\in \vert L \vert$ exists which is non-singular and connected, and therefore irreducible (Bertini's Theorem). \begin{Corollary}\label{Cor1} Assume $X$ is regular and $L$ is an ample and generated line bundle on $X$. Let $C\in \vert L \vert$ be irreducible and non-singular and assume that either: \begin{enumerate} \item $g(C)\le 1$ or \item $C$ is Brill--Noether general. \end{enumerate} Then $M_L$ is $L$-stable. \end{Corollary} \begin{proof} Assume that $g(C)=0$. First note that $h^1(L\vert_C)=0$ because $L|_C$ is a globally generated bundle over $C$. Taking cohomology in the exact sequence: $$0\to M_{L\vert_C} \to H^0(L\vert_C)\otimes \mathcal{O}_C \to L\vert_C \to 0,$$ we see that $h^0( M_{L\vert_C})=h^1( M_{L\vert_C})=0$. By Grothendieck's Theorem, we get $M_{L\vert_C} \simeq \oplus \mathcal{O}_C(-1)$.
Therefore, $M_{L\vert_C}$ is semi-stable and the result follows at once from Theorem \ref{Theorem2}. Next, let $C$ be a non-singular elliptic curve and let $L$ be a line bundle on $C$ of degree $d\geq 2$; we claim that the syzygy bundle $M_L$ on $C$ is stable. Indeed, the rank of $M_L^{\vee}$ is $d-1$, therefore the slope of $M_L^{\vee}$ is given by \begin{eqnarray*} \mu(M_L^{\vee})=\frac{d}{d-1}. \end{eqnarray*} Let $F$ be a quotient of $M_L^{\vee}$; we want to prove that $\mu(M_L^{\vee})<\mu(F).$ Since $F^{\vee}$ is a sub-bundle of $M_L$, we get that $H^0(F^{\vee})=0$ and thus $H^1(F)=0$, by Serre duality. By the Riemann-Roch Theorem, we get \begin{eqnarray*} h^0(F)= \text{deg}(F) + \text{rk}(F)(1-g(C))+h^1(F)=\text{deg}(F). \end{eqnarray*} Since $F$ is globally generated over $C$, it follows that $h^0(F)\geq \text{rk}(F)$. Assume that $h^0(F)=\text{rk}(F)$; then the evaluation map \begin{eqnarray*} H^0(F)\otimes \mathcal{O}_C \to F \end{eqnarray*} is an isomorphism, which is impossible: $F$ would then be trivial, so $H^1(F)=\text{rk}(F)\cdot h^1(\mathcal{O}_C)\neq 0$ on an elliptic curve, contradicting $H^1(F)=0.$ Therefore $\text{deg}(F)\geq \text{rk}(F)+1$ and \begin{eqnarray*} \mu(F)\geq \frac{\text{rk}(F)+1}{\text{rk}(F)}>\frac{d}{d-1}=\mu(M_L^{\vee}), \end{eqnarray*} the last inequality following from $\text{rk}(F)<d-1.$ Finally, assume that $C$ is Brill--Noether general. By \cite{schneider} we have that $M_{L\vert_C}$ is semi-stable, and again the result follows from Theorem \ref{Theorem2}. \end{proof} Another general case that can be treated is when the anticanonical divisor $-K_X$ is nef. \begin{Proposition}\label{qigual0} Let $X$ be a regular surface and let $L$ be an ample and generated line bundle on $X$. Let $C\in |L|$ be irreducible and non-singular. Then: \begin{enumerate} \item if $-L.K_X\ge 2$, then $M_L$ is $L$-stable. \item if $-L.K_X= 1$ and $\text{Cliff}(C)\geq 1$, then $M_L$ is $L$-stable. \item if $L.K_X= 0$ and $\text{Cliff}(C)\geq 2$, then $M_L$ is $L$-stable. \end{enumerate} \end{Proposition} \begin{proof} If $g(C)\le 1$, then apply Corollary \ref{Cor1}.
If $-L.K_X\geq 2$, then by the Adjunction Formula we have $\deg (L\vert_C)=L^2\ge 2g(C)$. Therefore $M_{L\vert_C}$ is semi-stable and $M_L$ is $L$-stable by Theorem \ref{Theorem2}. This proves (1). The proofs of (2) and (3) are similar: using the Adjunction Formula we get $\deg (L\vert_C)=L^2\geq 2g(C)-\text{Cliff}(C),$ therefore $M_{L\vert_C}$ is semi-stable. \end{proof} Some particular cases of this situation are: \begin{Corollary}\label{delPezzo} Let $L$ be an ample and generated line bundle over a del Pezzo surface. Then $M_L$ is $L$-stable. \end{Corollary} \begin{proof} If $C\in \vert L \vert$ is irreducible and non-singular and $g(C)\le 1$, then the result follows from Corollary \ref{Cor1}. Otherwise, from $h^0(-K_X)\ge 2$ we have $-L.K_X\ge 2$ and the result follows from Proposition \ref{qigual0}. \end{proof} \begin{Corollary}\label{F_n} Let $\mathbb{F}_n$ be a non-singular Hirzebruch surface and let $L$ be an ample and generated line bundle. Then $M_L$ is $L$-stable. \end{Corollary} \begin{proof} The canonical line bundle is given by $K_{\mathbb{F}_n}= -2C_n-(n+2)F,$ where $C_n$ and $F$ are respectively the section and the fiber of the structural fibration $\mathbb{F}_n \to \mathbb{P}^1$. If $L$ is ample, then $c_1(L)=aC_n+bF$ with $a>0$ and $b>na.$ Since $-L.K_{\mathbb{F}_n}=2b-an+2a>3$, it follows that $M_L$ is $L$-stable by Proposition \ref{qigual0}. \end{proof} The $L$-stability of $M_L$ for $X$ a $K3$ surface was studied in (\cite{camere}, Theorem 1). As a byproduct of our method we can recover the quoted result: \begin{Corollary} Let $X$ be a smooth projective $K3$ surface and $L$ an ample and generated line bundle over $X$. Then $M_L$ is $L$-stable. \end{Corollary} \begin{proof} Let $C\in \vert L \vert$ be irreducible and non-singular. By the Adjunction Formula, $L\vert_C=K_C$ is the canonical line bundle. In \cite{ram}, Paranjape and Ramanan proved that $M_{K_C}$ is semi-stable. The Corollary follows from Theorem \ref{Theorem2}.
\end{proof} Similar results can be obtained for regular surfaces with numerically trivial canonical divisor: \begin{Corollary}\label{Enriques} Let $X$ be an Enriques surface. Let $L$ be an ample and generated line bundle on $X$. Assume that an irreducible and non-singular curve $C$ in the linear system $|L|$ exists such that $\text{Cliff}(C)\geq 2$. Then $M_L$ is $L$-stable. \end{Corollary} \begin{proof} Note that $\text{deg}(L|_C)=L^2=2g-2\geq 2g-\text{Cliff}(C)$, thus $M_{L\vert_C}$ is semi-stable. \end{proof} Finally, we address the case of $(-K_X)$-stability for del Pezzo surfaces. \begin{Theorem}\label{-KdelPezzo} Let $X$ be a del Pezzo surface and let $L$ be a globally generated line bundle on $X$. If $\vert L \vert$ contains an irreducible curve, then the vector bundle $M_L$ is $(-K_X)$-stable. \end{Theorem} \begin{proof} Note that $h^1(L)=h^2(L)=0.$ From the Riemann-Roch Theorem we get \begin{eqnarray*} h^0(L)=1+ \frac{L^2-L.K_X}{2}. \end{eqnarray*} Therefore the rank of $M_L$ is equal to $\frac{L^2-L.K_X}{2}$. We want to prove that $M_L^{\vee}$ is stable with respect to the polarization $-K_X$. Let $C$ be a non-singular projective curve in the linear system $|-K_X|.$ By the Adjunction Formula $C$ is an elliptic curve. The slope of $M_L^{\vee}$ with respect to $-K_X$ is given by \begin{eqnarray*} \mu(M_L^{\vee})=-\frac{2L.K_X}{L^2-L.K_X}. \end{eqnarray*} Let $F$ be a torsion-free quotient sheaf of $M_L^{\vee}$ of rank $0<\text{rk}(F)<\text{rk}(M_L^{\vee})$; then $F|_C$ is a quotient of $(M_L^{\vee})|_C$. We want to prove that \begin{eqnarray*} \mu(M_L^{\vee})<\mu(F). \end{eqnarray*} First we assume that $F|_C$ is a vector bundle on $C$. The following properties are satisfied: \begin{enumerate} \item[(i)] $H^1(C,F|_C)=0.$ \item[(ii)] $\text{deg}(F|_C)\geq \text{rk}(F)+1.$ \end{enumerate} Indeed, since $F$ is globally generated, $F|_C$ is globally generated, and therefore $h^0(F^{\vee}|_C)=0$ and $h^1(F|_C)=0$ by Serre duality; this proves (i).
To prove (ii), by the Riemann-Roch Theorem for curves, we get \begin{eqnarray*} h^0(F|_C)= \text{deg}(F|_C) + \text{rk}(F|_C)(1-g(C))+h^1(F|_C)=\text{deg}(F|_C). \end{eqnarray*} Since $F|_C$ is globally generated over $C$, it follows that $h^0(F|_C)\geq \text{rk}(F|_C)$. Assume that $h^0(F|_C)=\text{rk}(F|_C)$; then the evaluation map \begin{eqnarray*} H^0(F|_C)\otimes \mathcal{O}_C \to F|_C \end{eqnarray*} is an isomorphism, but this is impossible because $H^1(F|_C)=0.$ Therefore $\text{deg}(F|_C)\geq \text{rk}(F)+1$ and we obtain $(ii).$ Hence, we have \begin{eqnarray*} \mu(F)=\mu(F|_C)\geq 1+\frac{1}{\text{rk}(F|_C)}>1+\frac{2}{L^2-L.K_X}, \end{eqnarray*} therefore \begin{eqnarray*} \mu(M_L^{\vee})= -\frac{2L.K_X}{L^2-L.K_X} \leq 1+\frac{2}{L^2-L.K_X} < \mu(F), \end{eqnarray*} where the first inequality follows from the Adjunction Formula and the fact that $\vert L \vert$ contains an irreducible curve $C'$: it is equivalent to $L^2+L.K_X\geq -2$, that is, to $2p_a(C')-2\geq -2$. Hence $\mu(M_L^{\vee})<\mu(F)$. Now, if $F|_C$ is not a vector bundle, then $F|_C=E\oplus \tau$, where $E$ is a vector bundle and $\tau$ is a torsion sheaf over $C$. The above proof can be repeated to obtain $\mu(M_L^{\vee})<\mu(E)$. Then \begin{eqnarray*} \mu(M_L^{\vee})<\mu(E)\leq \mu(E\oplus \tau)=\mu(F). \end{eqnarray*} Therefore $M_L^{\vee}$ is stable with respect to the polarization $-K_X$. \end{proof} \end{document}
\begin{document} \title{The path to recent progress on small gaps between primes} \author{D. A. Goldston, J. Pintz and C. Y. Y{\i}ld{\i}r{\i}m} \thanks{The first author was supported by NSF grant DMS-0300563, the NSF Focused Research Group grant 0244660, and the American Institute of Mathematics; the second author by OTKA grants No. T38396, T43623, T49693 and the Balaton program; the third author by T\"{U}B\.{I}TAK } \date{\today} \maketitle \section*{1. Introduction } In the articles {\it Primes in Tuples I \& II} ([13], [14]) we have presented the proofs of some assertions about the existence of small gaps between prime numbers which go beyond the hitherto established results. Our method depends on tuple approximations. However, the approximations and the way of applying the approximations have changed over time, and some comments in this paper may provide insight as to the development of our work. First, here is a short narration of our results. Let \begin{equation} \theta(n) := \begin{cases} \log n &\text{ if $n$ is prime}, \\ 0 &\text{ otherwise}, \end{cases} \label{eq: 1} \end{equation} and \begin{equation} \Theta(N;q,a) := \sum_{\substack{n\leq N \\ n\equiv a \, (\bmod \, q)}}\theta(n). \label{eq: 2} \end{equation} In this paper $N$ will always be a large integer, $p$ will denote a prime number, and $p_{n}$ will denote the $n$-th prime. The prime number theorem says that \begin{equation} \lim_{x\to\infty}{|\{p: \; p \leq x\}|\over {x\over \log x}} =1 , \label{eq: 3} \end{equation} and this can also be expressed as \begin{equation} \sum_{n \leq x}\theta(n) \sim x \qquad {\rm as} \,\, x\to\infty. \label{eq: 4} \end{equation} It follows trivially from the prime number theorem that \begin{equation} \liminf_{n\to \infty}{p_{n+1}-p_{n}\over \log p_{n}} \leq 1 .
\label{eq: 5} \end{equation} By combining former methods with a construction of certain (rather sparsely distributed) intervals which contain more primes than the expected number by a factor of $e^{\gamma}$, Maier [25] had reached the best known result in this direction that \begin{equation} \liminf_{n\to \infty}{p_{n+1}-p_{n}\over \log p_{n}} \leq 0.24846... \;\; . \label{eq: 6} \end{equation} It is natural to expect that modulo $q$ the primes would be almost equally distributed in the reduced residue classes. The deepest knowledge on primes which plays a role in our method concerns a measure of the distribution of primes in reduced residue classes referred to as the level of distribution of primes in arithmetic progressions. We say that the primes have {\it level of distribution} $\alpha$ if \begin{equation} \sum_{q\leq Q}\max_{\substack{a \\ (a,q)=1}}\left| \Theta(N;q,a) - {N\over \phi(q)}\right| \ll {N\over (\log N)^{A}} \label{eq: 7} \end{equation} holds for any $A > 0$ and any arbitrarily small fixed $\epsilon > 0$ with \begin{equation} Q = N^{\alpha - \epsilon} . \label{eq: 8} \end{equation} The {\it Bombieri-Vinogradov theorem} provides the level ${1\over 2}$, while the {\it Elliott-Halberstam conjecture} asserts that the primes have level of distribution $1$. The Bombieri-Vinogradov theorem allows taking $Q=N^{{1\over 2}}(\log N)^{-B(A)}$ in (7), by virtue of which we have proved unconditionally in [13] that for any fixed $r\geq 1$, \begin{equation} \liminf_{n\to \infty}{p_{n+r}-p_{n}\over \log p_{n}} \leq (\sqrt{r}-1)^2 \;\, ; \label{eq: 9} \end{equation} in particular, \begin{equation} \liminf_{n\to \infty}{p_{n+1}-p_{n}\over \log p_{n}} = 0 . \label{eq: 10} \end{equation} In fact, assuming that the level of distribution of primes is $\alpha$, we obtain more generally than (9) that, for $r \geq 2$, \begin{equation} \liminf_{n\to \infty}{p_{n+r}-p_{n}\over \log p_{n}} \leq (\sqrt{r}- \sqrt{2\alpha})^2 . 
\label{eq: 11} \end{equation} Furthermore, assuming that $\alpha > {1\over 2}$, there exists an explicitly calculable constant $C(\alpha)$ such that for $k \geq C(\alpha)$ any sequence of $k$-tuples \begin{equation} \{(n+h_{1}, n+h_{2}, \ldots, n+h_{k})\}_{n=1}^{\infty} , \label{eq: 12} \end{equation} with the set of distinct integers $\mathcal H = \{h_{1}, h_{2}, \ldots, h_{k}\}$ {\it admissible} in the sense that $\displaystyle \prod_{i=1}^{k}(n+h_{i})$ has no fixed prime factor for every $n$, contains at least two primes infinitely often. For instance if $\alpha \geq 0.971$, then this holds for $k \geq 6$, giving \begin{equation} \liminf_{n\to \infty} (p_{n+1}-p_{n}) \leq 16, \label{eq: 13} \end{equation} in view of the shortest admissible $6$-tuple $(n, n+4, n+6, n+10, n+12, n+16)$. We note that the gaps obeying (9)--(11) constitute a positive proportion of all gaps of the corresponding kind. By incorporating Maier's method into ours we improved (9) to \begin{equation} \liminf_{n\to \infty}{p_{n+r}-p_{n}\over \log p_{n}} \leq e^{-\gamma}(\sqrt{r}-1)^2 , \label{eq: 14} \end{equation} but for these gaps we don't have a proof of there being a positive proportion of all gaps of this kind. (These results will appear in forthcoming articles.) In [14] the result (10) was considerably improved to \begin{equation} \liminf_{n\to \infty}{p_{n+1}-p_{n}\over (\log p_{n})^{{1\over 2}} (\log\log p_{n})^2 } < \infty. \label{eq: 15} \end{equation} In fact, the methods of [14] lead to a much more general result: When $\mathcal A \subseteq \Bbb N$ is a sequence satisfying $\mathcal A (N) := |\{n; n\leq N, n\in \mathcal A \}| > C(\log N)^{1/2}(\log\log N)^2 $ for all sufficiently large $N$, infinitely many of the differences of two elements of $\mathcal A$ can be expressed as the difference of two primes. \section*{2.
Former approximations by truncated divisor sums} The von Mangoldt function \begin{equation} \Lambda(n) := \begin{cases} \log p &\text{ if \, $n=p^{m}$, $m\in \Bbb Z^{+}$}, \\ 0 &\text{ otherwise}, \end{cases} \label{eq: 16} \end{equation} can be expressed as \begin{equation} \Lambda(n) = \sum_{d \mid n }\mu(d)\log({R\over d}) \qquad {\rm for} \,\, n > 1 \label{eq: 17} \end{equation} (for any $R>0$, since $\sum_{d\mid n}\mu(d)=0$ when $n>1$). Since the proper prime powers contribute negligibly, the prime number theorem (4) can be rewritten as \begin{equation} \psi(x) := \sum_{n \leq x}\Lambda(n) \sim x \qquad {\rm as} \,\, x\to\infty. \label{eq: 18} \end{equation} It is natural to expect that the truncated sum \begin{equation} \Lambda_{R}(n) := \sum_{\substack{d \mid n \\ d \leq R}}\mu(d)\log({R\over d}) \qquad {\rm for} \,\, n\geq 1 \label{eq: 19} \end{equation} mimics the behaviour of $\Lambda (n)$ on some averages. The beginning of our line of research is Goldston's [6] alternative rendering of the proof of Bombieri and Davenport's theorem on small gaps between primes. Goldston replaced the application of the circle method in the original proof by the use of the truncated divisor sum (19). The use of functions like $\Lambda_{R}(n)$ goes back to Selberg's work [27] on the zeros of the Riemann zeta-function $\zeta(s)$. The most beneficial feature of the truncated divisor sums is that they can be used in place of $\Lambda(n)$ on some occasions when it is not known how to work with $\Lambda(n)$ itself. The principal such situation arises in counting the primes in tuples.
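As a small numerical illustration (a sketch for the reader, not from the paper), the identity (17) and the truncation (19) can be compared directly: when every divisor of $n>1$ is at most $R$, one has $\Lambda_R(n)=\Lambda(n)$, while for a prime $p>R$ only $d=1$ contributes, so $\Lambda_R(p)=\log R$.

```python
import math

def mu(n):
    # Moebius function by trial factorization (fine for small n).
    res, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0          # square factor
            res = -res
        d += 1
    return -res if n > 1 else res

def Lambda(n):
    # von Mangoldt function: log p if n = p^m, else 0.
    for p in range(2, n + 1):
        if n % p == 0:
            while n % p == 0:
                n //= p
            return math.log(p) if n == 1 else 0.0
    return 0.0

def Lambda_R(n, R):
    # Truncated divisor sum (19): sum over d | n, d <= R of mu(d) log(R/d).
    return sum(mu(d) * math.log(R / d) for d in range(1, int(R) + 1) if n % d == 0)

# All divisors of 12 are <= 20, so Lambda_R agrees with Lambda(12) = 0:
assert math.isclose(Lambda_R(12, 20), Lambda(12), abs_tol=1e-12)
# For the prime 101 > R = 20 only d = 1 contributes, so Lambda_R = log R:
assert math.isclose(Lambda_R(101, 20), math.log(20), abs_tol=1e-12)
print("ok")
```

The second assertion makes the loss from truncation visible: at primes beyond $R$ the approximation returns $\log R$ rather than $\log p$.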
Let \begin{equation} \mathcal H = \{h_{1}, h_{2}, \ldots, h_{k}\} \;\; {\rm with} \;\; 1\leq h_{1}, \ldots, h_{k} \leq h \;\; {\rm distinct \,\, integers} \label{eq: 20} \end{equation} (the restriction of $h_{i}$ to positive integers is inessential; the whole set $\mathcal H$ can be shifted by a fixed integer with no effect on our procedure), and for a prime $p$ denote by $\nu_{p}(\mathcal H)$ the number of distinct residue classes modulo $p$ occupied by the elements of $\mathcal H$. The singular series associated with the $k$-tuple $\mathcal H$ is defined as \begin{equation} \mathfrak S (\mathcal H) : = \prod_{p}(1-{1\over p})^{-k}(1-{\nu_{p}(\mathcal H)\over p}). \label{eq: 21} \end{equation} Since $\nu_{p}(\mathcal H)=k$ for $p>h$, the product is convergent. The admissibility of $\mathcal H$ is equivalent to $\mathfrak S (\mathcal H) \neq 0$, and to $\nu_{p}(\mathcal H) \neq p$ for all primes. Hardy and Littlewood [23] conjectured that \begin{equation} \sum_{n \leq N}\Lambda(n;\mathcal H) := \sum_{n\leq N}\Lambda(n+h_{1}) \cdots \Lambda(n+h_{k}) = N(\mathfrak S (\mathcal H) + o(1)), \quad {\rm as} \,\, N\to\infty . \label{eq: 22} \end{equation} The prime number theorem is the $k=1$ case, and for $k\geq 2$ the conjecture remains unproved. (This conjecture is trivially true if $\mathcal H$ is inadmissible). A simplified version of Goldston's argument in [6] was given in [17] as follows. To obtain information on small gaps between primes, let \begin{equation} \psi(n,h) := \psi(n+h)-\psi(n) = \sum_{n<m\leq n+h}\Lambda(m), \quad \psi_{R}(n,h) := \sum_{n<m\leq n+h}\Lambda_{R}(m) , \label{eq: 23} \end{equation} and consider the inequality \begin{equation} \sum_{N < n\leq 2N}(\psi(n,h)-\psi_{R}(n,h))^{2} \geq 0. \label{eq: 24} \end{equation} The strength of this inequality depends on how well $\Lambda_{R}(n)$ approximates $\Lambda(n)$.
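The criterion $\nu_p(\mathcal H)\neq p$ for all primes is easy to check mechanically: only primes $p\leq k$ matter, since $\nu_p(\mathcal H)\leq k<p$ otherwise. A small illustrative script (not part of the original paper) verifies the admissible $6$-tuple with shifts $\{0,4,6,10,12,16\}$ mentioned above:

```python
# Admissibility check: H is admissible iff for every prime p <= |H| the shifts
# occupy fewer than p residue classes mod p (automatic for larger primes).
def is_admissible(H):
    k = len(H)
    for p in range(2, k + 1):
        if all(p % d for d in range(2, p)):      # p is prime
            if len({h % p for h in H}) == p:     # all classes hit => fixed prime factor
                return False
    return True

print(is_admissible([0, 4, 6, 10, 12, 16]))   # True
print(is_admissible([0, 2, 4]))               # False: hits all classes mod 3
```

The second call shows why $\{0,2,4\}$ fails: one of $n$, $n+2$, $n+4$ is always divisible by $3$, so $\nu_3=3$ and $\mathfrak S(\mathcal H)=0$.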
On multiplying out the terms and using from [6] the formulas \begin{align} & \sum_{n\leq N}\Lambda_{R}(n)\Lambda_{R}(n+k) \sim \mathfrak S(\{0,k\})N, \quad \sum_{n\leq N}\Lambda(n)\Lambda_{R}(n+k) \sim \mathfrak S(\{0,k\})N \quad (k\neq 0) \label{eq: 25} \\ & \sum_{n\leq N}\Lambda_{R}(n)^{2} \sim N\log R, \quad \sum_{n\leq N}\Lambda(n)\Lambda_{R}(n) \sim N\log R , \label{eq: 26} \end{align} valid for $|k| \leq R \leq N^{{1\over 2}}(\log N)^{-A}$, gives, taking $h=\lambda\log N$ with $\lambda \ll 1$, \begin{equation} \sum_{N < n\leq 2N}(\psi(n+h)-\psi(n))^{2} \geq (hN\log R + Nh^{2})(1-o(1)) \geq ({\lambda\over 2}+ \lambda^2 - \epsilon)N(\log N)^2 \label{eq: 27} \end{equation} (in obtaining this one needs the two-tuple case of Gallagher's singular series average given in (46) below, which can be traced back to Hardy and Littlewood's and Bombieri and Davenport's work). If the interval $(n, n+h]$ never contains more than one prime, then the left-hand side of (27) is at most \begin{equation} \log N \sum_{N < n\leq 2N}(\psi(n+h)-\psi(n)) \sim \lambda N (\log N)^2 , \label{eq: 28} \end{equation} which contradicts (27) if $\lambda > {1\over 2}$, and thus one obtains \begin{equation} \liminf_{n\to \infty}{p_{n+1}-p_{n}\over \log p_{n}} \leq {1\over 2} . \label{eq: 29} \end{equation} Later on Goldston et al. in [3], [4], [7], [15], [16], [18] applied this lower-bound method to various problems concerning the distribution of primes and in [8] to the pair correlation of zeros of the Riemann zeta-function. In most of these works the more delicate divisor sum \begin{equation} \lambda_{R}(n) := \sum_{r\leq R}{\mu^{2}(r)\over \phi(r)}\sum_{d\mid (r,n)}d\mu(d) \label{eq: 30} \end{equation} was employed especially because it led to better conditional results which depend on the Generalized Riemann Hypothesis. The left-hand side of (27) is the second moment for primes in short intervals. 
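To spell out the arithmetic behind this contradiction (a short expansion of the step, consistent with (27) and (28)): if each interval $(n, n+h]$ contained at most one prime, then comparing (27) with (28) would give \begin{eqnarray*} \Big({\lambda\over 2}+ \lambda^2 - \epsilon\Big)N(\log N)^2 \leq \lambda N (\log N)^2 (1+o(1)), \end{eqnarray*} that is, $\lambda^{2}-{\lambda\over 2} \leq \epsilon + o(1)$, which fails for any fixed $\lambda > {1\over 2}$ once $\epsilon$ is small enough and $N$ is large; this is how (29) is obtained.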
Gallagher [5] showed that the Hardy-Littlewood conjecture (22) implies that the moments for primes in intervals of length $h \sim \lambda\log N$ are the moments of a Poisson distribution with mean $\lambda$. In particular, it is expected that \begin{equation} \sum_{n\leq N}(\psi(n+h)-\psi(n))^2 \sim (\lambda + \lambda^{2}) N (\log N)^2 \label{eq: 31} \end{equation} which in view of (28) implies (10) but is probably very hard to prove. It is known from the work of Goldston and Montgomery [12] that assuming the Riemann Hypothesis, an extension of (31) for $1 \leq h \leq N^{1-\epsilon}$ is equivalent to a form of the pair correlation conjecture for the zeros of the Riemann zeta-function. We thus see that the factor ${1\over 2}$ in (27) is what is lost from the truncation level $R$, and an obvious strategy is to try to improve on the range of $R$ where (25)-(26) are valid. In fact, the asymptotics in (26) are known to hold for $R\leq N$ (the first relation in (26) is a special case of a result of Graham [21]). It is easy to see that the second relation in (25) will hold with $R=N^{\alpha - \epsilon}$, where $\alpha$ is the level of distribution of primes in arithmetic progressions. For the first relation in (25), however, one can prove that the formula is valid for $R=N^{1/2 + \eta}$ for a small $\eta > 0$, but unless one also assumes a somewhat unnatural level of distribution conjecture for $\Lambda_{R}$, one can go no further. Thus increasing the range of $R$ in (25) is not currently possible. However, there is another possible approach motivated by Gallagher's work [5]. In 1999 the first and third authors discovered how to calculate some of the higher moments of the short divisor sums (19) and (30). At first this was achieved through straightforward summation and only the triple correlations of $\Lambda_{R}(n)$ were worked out in [17].
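To illustrate (30) numerically (our own sketch, not part of the original argument), note that for a prime $n>R$ one has $(r,n)=1$ for every $r\leq R$, so the inner sum in (30) collapses to $1$ and $\lambda_R(n)=\sum_{r\leq R}\mu^2(r)/\phi(r)$, a sum classically known to be at least $\log R$; thus the weight is large exactly where $\Lambda$ is.

```python
from math import gcd, log

def factorize(n):
    """Prime factorization by trial division: returns {prime: exponent}."""
    fs, d = {}, 2
    while d * d <= n:
        while n % d == 0:
            fs[d] = fs.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        fs[n] = fs.get(n, 0) + 1
    return fs

def mobius(n):
    """Moebius function mu(n)."""
    fs = factorize(n)
    if any(e > 1 for e in fs.values()):
        return 0
    return -1 if len(fs) % 2 else 1

def phi(n):
    """Euler's totient function."""
    for p in factorize(n):
        n -= n // p
    return n

def lambda_R(n, R):
    """The divisor sum (30): sum_{r<=R} mu^2(r)/phi(r) * sum_{d|(r,n)} d mu(d)."""
    total = 0.0
    for r in range(1, R + 1):
        if mobius(r) != 0:
            g = gcd(r, n)
            total += sum(d * mobius(d) for d in range(1, g + 1) if g % d == 0) / phi(r)
    return total
```

For example, `lambda_R(101, 100)` and `lambda_R(103, 100)` agree (both arguments are primes exceeding $R=100$) and exceed $\log 100$.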
In applying these formulas, the idea of finding approximate moments with some expressions corresponding to (24) was eventually replaced with \begin{equation} \sum_{N < n\leq 2N}(\psi(n,h)-\rho\log N)(\psi_{R}(n,h)-C)^{2} \label{eq: 32} \end{equation} which if positive for some $\rho > 1$ implies that for some $n$ we have $\psi(n,h) \geq 2\log N$. Here $C$ is available to optimize the argument. Thus the problem was switched from trying to find a good fit for $\psi(n,h)$ with a short divisor sum approximation to the easier problem of trying to maximize a given quadratic form, or more generally a mollification problem. With just third correlations this resulted in (29), thus giving no improvement over Bombieri and Davenport's result. Nevertheless the new method was not totally fruitless, since it gave \begin{equation} \liminf_{n\to \infty}{p_{n+r}-p_{n}\over \log p_{n}} \leq r - {\sqrt{r}\over 2} , \label{eq: 33} \end{equation} whereas the argument leading to (29) gives $r-{1\over 2}$. Independently of us, Sivak [29] incorporated Maier's method into [17] and improved upon (33) by the factor $e^{-\gamma}$ (cf. (6) and (14)). Following [17], with considerable help from other mathematicians, in [20] the $k$-level correlations of $\Lambda_{R}(n)$ were calculated. This leap was achieved through replacing straightforward summation with complex integration upon the use of Perron type formulae. Thus it became feasible to approximate $\Lambda(n;\mathcal H)$, which was defined in (22), by \begin{equation} \Lambda_{R}(n;\mathcal H) := \Lambda_{R}(n +h_{1})\Lambda_{R}(n +h_{2}) \cdots \Lambda_{R}(n +h_{k}) .
\label{eq: 34} \end{equation} Writing \begin{equation} \Lambda_{R}(n; \mathbf H) := (\log R)^{k-|\mathcal H|}\Lambda_{R}(n;\mathcal H), \quad \psi_{R}^{(k)}(n,h) := \sum_{1\leq h_{1},\ldots, h_{k}\leq h} \Lambda_{R}(n; \mathbf H) , \label{eq: 35} \end{equation} where the distinct components of the $k$-dimensional vector $\mathbf H$ are the elements of the set $\mathcal H$, $\psi_{R}^{(j)}(n,h)$ provided the approximation to $\psi(n,h)^j$, and the expression \begin{equation} \sum_{N < n\leq 2N}(\psi(n,h)-\rho\log N)(\sum_{j=0}^{k} a_{j}\psi_{R}^{(j)}(n,h)(\log R)^{k-j})^2 \label{eq: 36} \end{equation} could be evaluated. Here the $a_{j}$ are constants available to optimize the argument. The optimization turned out to be a rather complicated problem which will not be discussed here, but the solution was recently completed in [19] with the result that for any fixed $\lambda > (\sqrt{r}-\sqrt{{\alpha\over 2}})^2 $ and $N$ sufficiently large, \begin{equation} \sum_{\substack{n\leq N \\ p_{n+r}-p_{n} \leq \lambda\log p_{n}}}1 \gg_{r} \sum_{\substack{p\leq N \\ p :\, {\rm prime}}}1 . \label{eq: 37} \end{equation} In particular, unconditionally, for any fixed $\eta >0$ and for all sufficiently large $N > N_{0}(\eta)$, a positive proportion of gaps $p_{n+1}-p_{n}$ with $p_{n}\leq N$ are smaller than $({1\over 4}+\eta)\log N$. This is numerically a little short of Maier's result (6), but (6) was shown to hold for a sparse sequence of gaps. The work [19] also turned out to be instrumental in Green and Tao's [22] proof that the primes contain arbitrarily long arithmetic progressions. The efforts made in 2003 using divisor sums which are more complicated than $\Lambda_{R}(n)$ and $\lambda_{R}(n)$ gave rise to more difficult calculations and didn't meet with success. During this work Granville and Soundararajan provided us with the idea that the method should be applied directly to individual tuples rather than sums over tuples which constitute approximations of moments. 
They replaced the earlier expressions with \begin{equation} \sum_{N < n\leq 2N}(\sum_{h_{i}\in \mathcal H}\Lambda(n+h_{i}) - r\log 3N)(\tilde{\Lambda}_{R}(n; \mathcal H))^2 , \label{eq: 38} \end{equation} where $\tilde{\Lambda}_{R}(n; \mathcal H)$ is a short divisor sum which should be large when $\mathcal H$ is a prime tuple. This is the type of expression which is used in the proof of the result described in connection with (12)--(13) above. However, for obtaining the results (9)--(11), we need arguments based on using (32) and (36). \section*{3. Detecting prime tuples} We call the tuple (12) a {\it prime tuple} when all of its components are prime numbers. Obviously this is equivalent to requiring that \begin{equation} P_{\mathcal H}(n) := (n+h_{1})(n+h_{2})\cdots (n+h_{k}) \label{eq: 39} \end{equation} is a product of $k$ primes. As the generalized von Mangoldt function \begin{equation} \Lambda_{k}(n) := \sum_{d \mid n}\mu(d)(\log {n\over d})^k \label{eq: 40} \end{equation} vanishes when $n$ has more than $k$ distinct prime factors, we may use \begin{equation} {1\over k!}\sum_{\substack{d\mid P_{\mathcal H}(n) \\ d \leq R}}\mu(d)(\log {R\over d})^k \label{eq: 41} \end{equation} for approximating prime tuples. (Here $1/k!$ is just a normalization factor. That (41) also counts some tuples by including proper prime power factors doesn't pose a threat, since in our applications their contribution is negligible.) But this idea by itself brings restricted progress: now the right-hand side of (6) can be replaced with $1-{\sqrt{3}\over 2}$. The efficiency of the argument is greatly increased if instead of trying to include tuples composed only of primes, one looks for tuples with primes in many components.
So in [13] we employ \begin{equation} \Lambda_{R}(n; \mathcal H, \ell) := {1\over (k+\ell)!}\sum_{\substack{d\mid P_{\mathcal H}(n) \\ d \leq R}}\mu(d)(\log {R\over d})^{k+\ell} , \label{eq: 42} \end{equation} where $|\mathcal H| = k$ and $0 \leq \ell \leq k$, and consider those $P_{\mathcal H}(n)$ which have at most $k+\ell$ distinct prime factors. In our applications the optimal order of magnitude of the integer $\ell$ turns out to be about $\sqrt{k}$. To implement this new approximation in the skeleton of the argument, the quantities \begin{equation} \sum_{n \leq N}\Lambda_{R}(n;\mathcal H_{1},\ell_{1}) \Lambda_{R}(n;\mathcal H_{2},\ell_{2}) , \label{eq: 43} \end{equation} and \begin{equation} \sum_{n \leq N}\Lambda_{R}(n;\mathcal H_{1},\ell_{1}) \Lambda_{R}(n;\mathcal H_{2},\ell_{2}) \theta(n+h_{0}) , \label{eq: 44} \end{equation} are calculated as $R,\, N \to \infty$. The latter has three cases according as $h_{0} \not\in \mathcal H_{1} \cup \mathcal H_{2}$, or $h_{0} \in \mathcal H_{1} \setminus \mathcal H_{2}$, or $h_{0} \in \mathcal H_{1} \cap \mathcal H_{2}$. Here $M= |\mathcal H_{1}| + |\mathcal H_{2}| + \ell_{1} + \ell_{2}$ is taken as a fixed integer which may be arbitrarily large. The calculation of (43) is valid with $R$ as large as $N^{{1\over 2}-\epsilon}$ and $h \leq R^{C}$ for any constant $C>0$. The calculation of (44) can be carried out for $R$ as large as $N^{{\alpha\over 2}-\epsilon}$ and $h \leq R$. It should be noted that in [19], in the same context, the usage of (34), which has $k$ truncations, greatly restricted the range of the divisors, for then $R \leq N^{{1\over 4k}-\epsilon}$ was needed. Moreover the calculations were more complicated compared to the present situation of dealing with only one truncation.
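The detector (42) (with (41) as the case $\ell=0$) is simple enough to evaluate directly; the sketch below is ours, purely for illustration. If every $n+h_i$ is prime and exceeds $R$, the only surviving divisor is $d=1$ and (42) attains its full value $(\log R)^{k+\ell}/(k+\ell)!$, while heavy sign cancellation suppresses the value when $P_{\mathcal H}(n)$ has many small prime factors.

```python
from math import factorial, log

def mobius(n):
    """Moebius function via trial division."""
    count, d = 0, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0
            count += 1
        d += 1
    if n > 1:
        count += 1
    return -1 if count % 2 else 1

def Lambda_R(n, H, R, ell=0):
    """Eq. (42): (1/(k+l)!) * sum over squarefree d <= R, d | P_H(n),
    of mu(d) * log(R/d)^(k+l)."""
    k = len(H)
    P = 1
    for h in H:
        P *= n + h
    total = 0.0
    for d in range(1, R + 1):
        if P % d == 0:
            mu = mobius(d)
            if mu:
                total += mu * log(R / d) ** (k + ell)
    return total / factorial(k + ell)
```

With $\mathcal H=\{0,2\}$, $\ell=0$ and $R=50$, the twin prime pair $(101,103)$ gives exactly $\tfrac12(\log 50)^2\approx 7.65$, far larger than the value at a nearby $n$ such as $98$, where $P_{\mathcal H}(98)=98\cdot 100$ has many small squarefree divisors and the sum nearly cancels.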
Requiring the positivity of the quantity \begin{equation} \sum_{n=N+1}^{2N}(\sum_{1 \leq h_{0}\leq h}\theta(n+h_{0}) - r\log 3N)(\sum_{\substack{\mathcal H \subset \{1,2,\ldots , h\} \\ |\mathcal H|=k}}\Lambda_{R}(n; \mathcal H, \ell))^2 , \qquad (h=\lambda\log 3N), \label{eq: 45} \end{equation} which can be calculated easily from asymptotic formulas for (43) and (44), and Gallagher's [5] result that with the notation of (20) for fixed $k$ \begin{equation} \sum_{\mathcal H}\mathfrak S (\mathcal H) \sim h^k \qquad {\rm as} \;\;\; h\to\infty , \label{eq: 46} \end{equation} yields the results (9)--(11). For the proof of the result mentioned in connection with (12), the positivity of (38) with $r=1$ and $\Lambda_{R}(n; \mathcal H,\ell)$ for an $\mathcal H$ satisfying (20) in place of $\tilde{\Lambda}_{R}(n; \mathcal H)$ is used. For (13), the positivity of an optimal linear combination of the quantities for (12) is pursued. The proof of (15) in [14] also depends on the positivity of (45) for $r=1$ and $h = {C\log N\over k}$ modified with the extra restriction \begin{equation} (P_{\mathcal H}(n), \displaystyle \prod_{p\leq \sqrt{\log N}}p)=1 \label{eq: 47} \end{equation} on the tuples to be summed over, but involves some essential differences from the procedure described above. Now the size of $k$ is taken as large as $c{\sqrt{\log N}\over (\log\log N)^2 }$ (where $c$ is a sufficiently small explicitly calculable absolute constant). This necessitates a much more refined treatment of the error terms arising in the argument, and in due course the restriction (47) is brought in to avoid the complications arising from the possibly irregular behaviour of $\nu_{p}(\mathcal H)$ for small $p$. In the new argument a modified version of the Bombieri-Vinogradov theorem is needed. Roughly speaking, in the version developed for this purpose, compared to (7) the range of the moduli $q$ is curtailed a little bit in return for a little stronger upper-bound. 
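Gallagher's average (46) can be sanity-checked numerically for $k=2$ (our own experiment; convergence is slow, the secondary term being of size $h\log h$). Since $\nu_p(\{h_1,h_2\})$ depends only on whether $p\mid h_1-h_2$, the sum over ordered pairs of distinct $h_1,h_2\in[1,h]$ reduces to $\sum_{d=1}^{h-1}2(h-d)\,\mathfrak S(\{0,d\})$.

```python
def primes_up_to(n):
    """Sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[:2] = b"\x00\x00"
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(sieve[p * p :: p]))
    return [p for p in range(2, n + 1) if sieve[p]]

PRIMES = primes_up_to(1000)  # truncation point for the Euler product (21)

def S2(d):
    """Truncated singular series S({0, d}); nu_p = 1 if p | d, else 2."""
    s = 1.0
    for p in PRIMES:
        nu = 1 if d % p == 0 else 2
        s *= (1 - 1 / p) ** (-2) * (1 - nu / p)
    return s

def gallagher_ratio(h):
    """(Sum of S(H) over ordered distinct pairs in [1, h]) divided by h^2."""
    return sum(2 * (h - d) * S2(d) for d in range(1, h)) / h ** 2
```

Already at $h=60$ the ratio is around $0.9$, consistent with (46); note also that `S2(1)` is exactly $0$, since $\{0,1\}$ is inadmissible.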
Moreover, instead of Gallagher's result (46) which was for fixed $k$ (though the result may hold for $k$ growing as some function of $h$, we do not know exactly how large this function can be, in addition to dealing with the problem of non-uniformity in $k$), the weaker property that $\sum_{\mathcal H}\mathfrak S (\mathcal H)/h^k$ is non-decreasing (apart from a factor of $1+o(1)$) as a function of $k$ is proved and employed. The whole argument is designed to give the more general result which was mentioned after (15). \section*{4. Small gaps between almost primes} In the context of our work, by {\it almost prime} we mean an {\it $E_2$-number}, i.e.\ a natural number which is a product of two distinct primes. We have been able to apply our methods to finding small gaps between almost primes in collaboration with S. W. Graham. For this purpose a Bombieri-Vinogradov type theorem for $\Lambda \ast \Lambda$ is needed, and the work of Motohashi [26] on obtaining such a result for the Dirichlet convolution of two sequences is readily applicable (see also [1]). In [9] alternative proofs of some results of [13], such as (10) and (13), are given couched in the formalism of the Selberg sieve. Denoting by $q_{n}$ the $n$-th $E_{2}$-number, in [9] and [10] it is shown that there is a constant $C$ such that for any positive integer $r$, \begin{equation} \liminf_{n\to\infty}(q_{n+r}-q_{n}) \leq Cre^{r} ; \label{eq: 48} \end{equation} in particular \begin{equation} \liminf_{n\to\infty}(q_{n+1}-q_{n}) \leq 6 . \label{eq: 49} \end{equation} Furthermore, in [11] proofs of a strong form of the Erd\H{o}s--Mirsky conjecture and related assertions have been obtained. \section*{5. Further remarks on the origin of our method} In 1950 Selberg was working on applications of his sieve method to the twin prime and Goldbach problems and invented a weighted sieve method that gave results which were later superseded by other methods and thereafter largely neglected.
Much later in 1991 Selberg published the details of this work in Volume II of his Collected Works [28], describing it as ``by now of historical interest only''. In 1997 Heath-Brown [24] generalized Selberg's argument from the twin prime problem to the problem of almost prime tuples. Heath-Brown let \begin{equation} \Pi = \prod_{i=1}^{k}(a_{i}n +b_{i}) \label{eq: 50} \end{equation} with certain natural conditions on the integers $a_{i}$ and $b_{i}$. Then the argument of Selberg (for the case $k=2$) and Heath-Brown for the general case is to choose $\rho > 0$ and the numbers $\lambda_{d}$ of the Selberg sieve so that, with $\tau$ the divisor function, \begin{equation} Q = \sum_{n\leq x}\{1-\rho\sum_{i=1}^{k}\tau(a_{i}n + b_{i})\}(\sum_{d \mid \Pi}\lambda_{d})^2 > 0 . \label{eq: 51} \end{equation} From this it follows that there is at least one value of $n$ for which \begin{equation} \sum_{i=1}^{k}\tau(a_{i}n + b_{i}) < {1\over \rho} . \label{eq: 52} \end{equation} Selberg found in the case $k=2$ that $\rho = {1\over 14}$ is acceptable, which shows that one of $n$ and $n+2$ has at most two, while the other has at most three prime factors for infinitely many $n$. Remarkably, this is exactly the same type of tuple argument of Granville and Soundararajan which we have used, and the similarity doesn't end here. Multiplying out, we have $Q = Q_{1} - \rho Q_{2}$ where \begin{equation} Q_{1} = \sum_{n\leq x}(\sum_{d \mid \Pi}\lambda_{d})^2 > 0 , \qquad Q_{2} = \sum_{i=1}^{k} \sum_{n\leq x} \tau(a_{i}n + b_{i})(\sum_{d \mid \Pi}\lambda_{d})^2 > 0 . \label{eq: 53} \end{equation} The goal is now to pick $\lambda_{d}$ optimally. As usual, the $\lambda_{d}$ are first made $0$ for $d>R$. At this point it appears difficult to find the exact solution to this problem. Further discussion of this may be found in [28] and [24].
Heath-Brown, desiring to keep $Q_{2}$ small, made the choice \begin{equation} \lambda_{d} = \mu(d)({\log (R/d)\over \log R})^{k+1}, \label{eq: 54} \end{equation} and with this choice we see \begin{equation} Q_{1} = {((k+1)!)^{2}\over (\log R)^{2k+2}} \sum_{n\leq x}(\Lambda_{R}(n; \mathcal H, 1))^2 . \label{eq: 55} \end{equation} Hence Heath-Brown used the approximation for a $k$-tuple with at most $k+1$ distinct prime factors. This observation was the starting point for our work with the approximation $\Lambda_{R}(n; \mathcal H, \ell)$. The evaluation of $Q_{2}$, with its $\tau$ weights, is much harder than that of $Q_{1}$ and requires Kloosterman sum estimates. The weight $\Lambda$ in $Q_{2}$ in place of $\tau$ requires essentially the same analysis as $Q_{1}$ if we use the Bombieri-Vinogradov theorem. Apparently these arguments were never viewed as directly applicable to primes themselves, and this connection was missed until now. \vspace*{.5cm} \footnotesize D. A. Goldston \,\,\, ([email protected]) Department of Mathematics San Jose State University San Jose, CA 95192 USA \\ J. Pintz \,\,\, ([email protected]) R\'enyi Mathematical Institute of the Hungarian Academy of Sciences H-1364 Budapest P.O.B. 127 Hungary \\ C. Y. Y{\i}ld{\i}r{\i}m \,\,\, ([email protected]) \vspace*{-.1cm} \begin{tabbing} tw \= Department of Mathematicsmath \= \& math \= \c Cengelk\"oy, Istanbul, P.K. 6, 81220 \= \kill \mbox{ } \> Department of Mathematics \> \mbox{ } \> Feza G\"ursey Enstit\"us\"u \\ \mbox{ } \> Bo\~{g}azi\c{c}i University \> \& \> \c Cengelk\"oy, Istanbul, P.K. 6, 81220 \\ \mbox{ } \> Bebek, Istanbul 34342 \> \mbox{ } \> Turkey \\ \mbox{ } \> Turkey \> \mbox{ } \> \mbox{ } \end{tabbing} \end{document}
\begin{document} \thanks{The author was partially supported by an AMS--Simons Travel Grant. Some of this work was completed while the author was a guest at the Max Planck Institute for Mathematics, and he thanks them for their hospitality.} \begin{abstract} For a fixed mod $p$ automorphic Galois representation, $p$-adic automorphic Galois representations lifting it determine points in universal deformation space. In the case of modular forms and under some technical conditions, B\"{o}ckle showed that every component of deformation space contains a smooth modular point, which then implies their Zariski density when coupled with the infinite fern of Gouv{\^e}a--Mazur. We generalize B\"{o}ckle's result to the context of polarized Galois representations for CM fields, and to two dimensional Galois representations for totally real fields. More specifically, under assumptions necessary to apply a small $R = \mathbb{T}$ theorem and an assumption on the local mod $p$ representation, we prove that every irreducible component of the universal polarized deformation space contains an automorphic point. When combined with work of Chenevier, this implies new results on the Zariski density of automorphic points in polarized deformation space in dimension three. \end{abstract} \title{On automorphic points in polarized deformation rings} \setcounter{tocdepth}{1} \tableofcontents \section*{Introduction}\label{sec:intro} Inspired by the $p$-adic deformation theory of modular forms, Mazur developed a deformation theory for Galois representations that has played a central role in modern algebraic number theory. If the fixed mod $p$ Galois representation arises from a modular eigenform, the $p$-adic deformation theory of its Hecke eigensystem can naturally be viewed as part of the $p$-adic deformation theory of the Galois representation. A natural (but vague) question is: are these two deformation theories \emph{the same}?
One way to make this precise is to ask whether or not the universal deformation ring for the mod $p$ Galois representation is isomorphic to an appropriate ``big'' $p$-adic Hecke algebra. Such a result is known as a ``big $R=\mathbb{T}$'' theorem, and implies that the theories of Galois representations and automorphic forms are intimately connected even when one leaves the geometric world. This is further illustrated by Emerton's strategy for proving the Fontaine--Mazur conjecture, which is to start with a big $R=\mathbb{T}$ theorem, and then identify the geometric locus inside deformation space with the classical modular points \cite{EmertonLocGlob}. The theory of pseudo-representations often allows one to deduce the existence of a surjection from a universal deformation ring $R^{\mathrm{univ}}$ to a big $p$-adic Hecke algebra $\mathbb{T}$. In these situations, a big $R=\mathbb{T}$ theorem follows from knowing that the classical automorphic points in $\Spec R$ are Zariski dense (at least up to reduced quotients). The first such result was proved by Gouv\^{e}a and Mazur \cite{GouveaMazur} in the context of $\GL_2$ over $\mathbb{Q}$, under the assumption that the universal deformation ring is formally smooth, i.e. a power series ring over $\mathcal{O}$, of the expected dimension. They constructed a sort of fractal in the rigid analytic generic fibre of $\Spec R$ that they called the \emph{infinite fern}. The infinite fern shows that the Zariski closure of the modular points has large dimension, and adding the assumption that $R^{\mathrm{univ}}$ is a power series ring over $\mathcal{O}$ of the correct dimension implies the Zariski density. B\"{o}ckle improved on this \cite{BockleDensity} using a novel idea.
He used explicit computations of local deformation rings \cites{RamakrishnaFinFlat,BockleDemuskin} together with small $R$ equals $\mathbb{T}$ theorems (of the type proved by Taylor and Wiles) to show that every irreducible component of universal deformation space contains a smooth modular point, under certain conditions on the residual representation. The smoothness of the point allows one to deduce that the component has the correct dimension, and then the infinite fern implies the modular points are Zariski dense in that component. In higher dimensions or for more general number fields, the situation is more subtle (see \cite{CalegariMazur}*{\S1.1}), and the naive generalization of the Zariski density statement in the rigid analytic generic fibre is false in general (see \cite{LoefflerDense} and \cite{CalegariEven2}*{\S5}). However, if one restricts to \emph{polarized} representations of the Galois group of CM or totally real fields, then the situation is much more hopeful, and a precise conjecture was made by Chenevier \cite{ChenevierFern}*{Conjecture~1.15}. Chenevier \cite{ChenevierFern} expanded and refined the construction of Gouv\^{e}a and Mazur to three dimensional Galois representations for CM fields $F$ in which $p$ is totally split and to two dimensional Galois representations for totally real fields $F$ of even degree over $\mathbb{Q}$ in which $p$ is totally split. Chenevier thus proves his conjecture, which is a higher dimensional analogue of the Gouv\^{e}a--Mazur conjecture, in these situations under the additional assumption that the universal deformation ring is formally smooth of the correct dimension. In this paper we give a new and more geometric interpretation of B\"{o}ckle's strategy, which affords substantial generalization. In principle, our methods apply any time the ``numerical coincidence'' discussed in \cite{CHT}*{\S1} holds.
We focus on the case of polarized representations of Galois groups of CM fields (in arbitrary dimension), and two dimensional representations of totally real fields. In order to state some of our main results, we set up some notation. Let $F$ be a CM field with maximal totally real subfield $F^+$. Let $S$ be a finite set of finite places of $F^+$ containing all those that ramify in $F$, let $F_S$ be the maximal extension of $F$ unramified outside of the places above those in $S$, and set $G_{F,S} = \mathrm{Gal}(F_S/F)$. Let $E$ be a finite extension of $\mathbb{Q}_p$ with ring of integers $\mathcal{O}$ and residue field $\mathbb{F}$. Assume we are given a continuous absolutely irreducible representation \[ \overline{\rho} : G_{F,S} \longrightarrow \GL_n(\mathbb{F}) \] such that $\overline{\rho}^c = \overline{\rho}^\vee \otimes \overline{\varepsilon}^{1-n}$, where $c\in G_{F^+}$ is some choice of complex conjugation, $\overline{\rho}^c$ is the conjugate representation given by $\overline{\rho}^c(\gamma) = \overline{\rho}(c\gamma c)$ for all $\gamma \in G_{F,S}$, $\overline{\rho}^\vee$ is the $\mathbb{F}$-linear dual of $\overline{\rho}$, and $\overline{\varepsilon}$ is the mod $p$ cyclotomic character. Letting $R^{\mathrm{univ}}$ denote the universal deformation ring for $\overline{\rho}$ on the category of complete Noetherian local $\mathcal{O}$-algebras with residue field $\mathbb{F}$, there is a quotient $R^{\mathrm{pol}}$ of $R^{\mathrm{univ}}$ that is universal for deformations $\rho$ such that $\rho^c \cong \rho^\vee \otimes \varepsilon^{1-n}$ (see \S\ref{sec:polardefring}). The Galois representations associated to regular algebraic conjugate self dual cuspidal automorphic representations of $\GL_n(\mathbb{A}_F)$ that lift $\overline{\rho}$ naturally yield $\overline{\mathbb{Q}}_p$-points of $\Spec R^{\mathrm{pol}}$. \begin{mainthm}\label{thm:intromain} Fix $\iota : \overline{\mathbb{Q}}_p \xrightarrow{\sim} \mathbb{C}$.
Assume $p\nmid 2n$ and that every place above $p$ in $F^+$ splits in $F$. Assume there is a regular algebraic conjugate self dual cuspidal automorphic representation $\pi$ of $\GL_n(\mathbb{A}_F)$ such that $\overline{\rho}\otimes\overline{\mathbb{F}}_p$ is isomorphic to the mod $p$ Galois representation attached to $\pi$ and $\iota$. Assume further: \begin{ass} \item Either \begin{itemize} \item $\rho_{\pi,\iota}|_{G_v}$ is potentially diagonalizable for each $v|p$ in $F$, or \item $\pi$ is $\iota$-ordinary. \end{itemize} \item $\overline{\rho}(G_{F(\zeta_p)})$ is adequate and $\zeta_p \notin F$. \item For each $v|p$ in $F$, there is no nonzero $\mathbb{F}[G_v]$-equivariant map $\overline{\rho}|_{G_v} \rightarrow \overline{\rho}|_{G_v}(1)$. \end{ass} Then for any irreducible component $\mathcal{C}$ of $\Spec R^{\mathrm{pol}}$, there is a regular algebraic conjugate self dual cuspidal automorphic representation whose associated Galois representation determines a $\overline{\mathbb{Q}}_p$-point of $\mathcal{C}$. \end{mainthm} We refer the reader to \cref{thm:mainPD,thm:mainord} below for slightly more general and refined statements, to \S\ref{sec:Local} for the definition of a potentially diagonalizable representation, to \S\ref{sec:AutGalRep} for a discussion of regular algebraic polarizable cuspidal automorphic representations and their associated Galois representations, and to \cref{def:GLadequate} for the definition of an adequate subgroup of $\GL_n(\mathbb{F})$. The characteristic zero points on universal polarized deformation rings arising from regular algebraic polarized cuspidal automorphic representations are known to be formally smooth in a great deal of generality; see \cite{MeSmooth}*{Theorem~C} and \cite{BHS}*{Corollaire~4.13}.
Combining this with our main theorems and Chenevier's infinite fern, we obtain new cases of Chenevier's conjecture (\cref{thm:dim3dense} below): \begin{mainthm}\label{thm:introdense} Fix $\iota : \overline{\mathbb{Q}}_p \xrightarrow{\sim} \mathbb{C}$. Assume that $n = 3$, and that $p>2$. Assume there is a regular algebraic conjugate self dual cuspidal automorphic representation $\pi$ of $\GL_3(\mathbb{A}_F)$ such that $\overline{\rho}$ is isomorphic to the mod $p$ Galois representation attached to $\pi$ and $\iota$. Assume further: \begin{ass} \item $p$ is totally split in $F$. \item For each $v|p$ in $F$, $\pi_v$ is unramified and $\rho_{\pi,\iota}|_{G_v}$ is potentially diagonalizable. If $p = 3$, then we further assume that $\pi$ is $\iota$-ordinary. \item $\overline{\rho}(G_{F(\zeta_p)})$ is adequate and $\zeta_p \notin F$. \item For each $v|p$ in $F$, there is no nonzero $\mathbb{F}[G_v]$-equivariant map $\overline{\rho}|_{G_v} \rightarrow \overline{\rho}|_{G_v}(1)$. \end{ass} Then, letting $\mathfrak{X}$ denote the rigid analytic generic fibre of $\Spf R^{\mathrm{pol}}$, the set of points in $\mathfrak{X}$ induced by Galois representations associated to regular algebraic conjugate self dual cuspidal automorphic representations of level prime to $p$ is Zariski dense. \end{mainthm} We refer the reader to \S\ref{sec:genfib} for an overview of the rigid analytic generic fibre. Along the way to proving \cref{thm:introdense} we deduce nice ring theoretic properties for $R^{\mathrm{pol}}$ (\cref{thm:CMgeom} below). However, using potential automorphy theorems, we can prove these nice ring theoretic properties in many cases without assuming residual automorphy. For example: \begin{mainthm}\label{mainthm:CMgeom} Assume that $p>2(n+1)$ and that every $v|p$ in $F^+$ splits in $F$. Assume further: \begin{ass} \item $\overline{\rho}|_{G_{F(\zeta_p)}}$ is absolutely irreducible.
\item For each $v|p$, if we write the semisimplification of $\overline{\rho}|_{G_v}$ as $(\overline{\rho}|_{G_v})^{\mathrm{ss}} = \oplus \overline{\rho}_i$ with each $\overline{\rho}_i$ irreducible, then $\overline{\rho}_i \not\cong \overline{\rho}_j(1)$ for any $i,j$. \end{ass} Then $R^{\mathrm{pol}}$ is an $\mathcal{O}$-flat, reduced, complete intersection ring of dimension $1 + \frac{n(n+1)}{2}[F^+:\mathbb{Q}]$. \end{mainthm} We refer the reader to \cref{thm:potCMgeom} for a more general statement (see \cref{rmk:PDlift}). This can be seen as answering a polarized version of a question of Mazur \cite{MazurDefGalRep}*{\S 1.10} in many cases. We also prove similar theorems in the context of $\GL_2$ over a totally real field, \cref{thm:mainHilbdet,thm:mainHilb,thm:Hilbgeom,thm:Hilbdense} below, and our results even yield new cases over $\mathbb{Q}$ (see \cref{rmk:overQ}). Most of the assumptions in the above theorems, and in any of the main theorems below, are used to invoke results in the literature. In particular, improvements in (small) $R = \mathbb{T}$ theorems or improvements in the infinite fern would immediately yield improvements in \cref{thm:intromain,thm:introdense}, respectively. The only additional assumption imposed in this paper is the assumption that there are no nonzero $\mathbb{F}[G_v]$-equivariant maps $\overline{\rho}|_{G_v} \rightarrow \overline{\rho}|_{G_v}(1)$. This is used to guarantee that the universal lifting rings of the local representations $\overline{\rho}|_{G_v}$ are regular for each $v|p$. We explain how this is used below. \subsection*{Strategy} As in B\"{o}ckle's work, we use a small $R = \mathbb{T}$ theorem to show that every irreducible component of the universal deformation ring contains an automorphic point. Under appropriate assumptions, including one that implies the universal local lifting ring at $p$ is regular, B\"{o}ckle shows that a suitable locus (i.e.
finite flat or ordinary) inside the universal local deformation space is cut out by the ``right number'' of equations. In order to do this, he uses explicit computations of local deformation rings, due to Ramakrishna \cite{RamakrishnaFinFlat} in the finite flat case, and due to himself \cite{BockleDemuskin} in the ordinary case. In arbitrary dimensions, the local Fontaine--Laffaille deformation rings are known to be power series in the correct number of variables, so one could proceed as in \cite{BockleDensity}. However, explicitly computing ordinary deformation rings in arbitrary dimensions seems intractable (even dimension two is hard). More importantly, in higher dimensions there are many types of mod $p$ Galois representations that have neither Fontaine--Laffaille nor ordinary lifts, so one would like a strategy that works for more general local conditions. The main idea of this paper is that, using the \emph{finiteness} of the universal global polarized deformation ring $R^{\mathrm{pol}}$ over the universal local lifting ring $R^{\mathrm{loc}}$ at places dividing $p$ (a principle the author first learned from Frank Calegari's blog Persiflage \cite{PersiflageFinite}, see also \cite{MeFrank}), we can turn our problem into one of intersections in $\Spec R^{\mathrm{loc}}$. This has the effect of allowing us to weaken a \emph{global} unobstructedness assumption to a \emph{local} unobstructedness assumption. More specifically, armed with a small $R = \mathbb{T}$ theorem, we can deduce the existence of an automorphic point on any irreducible component $\mathcal{C}$ of $\Spec R^{\mathrm{pol}}$ if we prove that the intersection of $\mathcal{C}$ and the generic fibre of our small deformation ring $R$ inside $\Spec R^{\mathrm{pol}}$ is nonempty. Using the finiteness of $R^{\mathrm{pol}}$ over $R^{\mathrm{loc}}$, we first turn this into a problem of an intersection in $\Spec R^{\mathrm{loc}}$.
Since $R^{\mathrm{loc}}$ is regular, we can use intersection theory in regular local rings and the lower bound for $\dim\mathcal{C}$ arising from Galois cohomology to prove that the intersection of the image of $\mathcal{C}$ in $\Spec R^{\mathrm{loc}}$ with the appropriate fixed weight $p$-adic Hodge theoretic locus $X^{\mathrm{loc}}$ has dimension $\ge 1$. One then uses the finiteness of $R^{\mathrm{pol}}$ over $R^{\mathrm{loc}}$ again to show there is a dimension $1$ point on the intersection of $\mathcal{C}$ and our small deformation ring $R$. But the small $R = \mathbb{T}$ theorem implies $R$ is finite over $\mathbb{Z}_p$, hence this point must be in the generic fibre. Let us make a few remarks. Firstly, it is crucial for our method that we know our small deformation ring is finite over $\mathbb{Z}_p$. Hence, it is not just the automorphy of $p$-adic Galois representations that we need, but the underlying $R = \mathbb{T}$ (or $R^{\mathrm{red}} = \mathbb{T}$) theorems. Secondly, the author thinks it is interesting to compare the above with one heuristic justification for the Fontaine--Mazur conjecture. Namely, since the conjectural dimension of $\Spec R^{\mathrm{pol}}[1/p]$ plus the dimension of our fixed weight $p$-adic Hodge theoretic locus $X^{\mathrm{loc}}$ in $\Spec R^{\mathrm{loc}}[1/p]$ equals the dimension of $\Spec R^{\mathrm{loc}}[1/p]$ (in favourable situations), one might imagine they intersect at finitely many points, and maybe even transversely. The assumption of residual automorphy guarantees that this intersection is nonempty, and a small $R = \mathbb{T}$ theorem implies that this intersection is a finite set of points. The main theorems in this article and their method of proof (under the appropriate assumptions) imply that if $\Spec R^{\mathrm{pol}}[1/p]$ intersects $X^{\mathrm{loc}}$, then every irreducible component of $\Spec R^{\mathrm{pol}}[1/p]$ intersects $X^{\mathrm{loc}}$. 
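The commutative algebra input behind this dimension count is Serre's intersection inequality in a regular local ring (the fact from intersection theory recalled in \S\ref{sec:mainlem}): for prime ideals $\mathfrak p,\mathfrak q$ of a regular local ring $A$,

```latex
\dim A/(\mathfrak p + \mathfrak q)
\;\geq\; \dim A/\mathfrak p \,+\, \dim A/\mathfrak q \,-\, \dim A .
```

Applied with $A = R^{\mathrm{loc}}$, with $A/\mathfrak p$ the image of the component $\mathcal C$, and with $A/\mathfrak q$ corresponding to $X^{\mathrm{loc}}$, the Galois cohomology lower bound on $\dim\mathcal C$ together with the dimension of $X^{\mathrm{loc}}$ makes the right-hand side at least $1$, which is exactly the positivity of dimension used above.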
Finally, since the argument above uses only the dimension of $X^{\mathrm{loc}}$ and that a small $R=\mathbb{T}$ theorem is known for the $p$-adic Hodge theoretic conditions defined by $X^{\mathrm{loc}}$, we have some flexibility in the choice of $X^{\mathrm{loc}}$, provided it is nonempty and has the correct dimension. For example, we can use this to conclude the existence of automorphic points on each irreducible component of $\Spec R^{\mathrm{pol}}$ whose local representations at places dividing $p$ have certain prescribed local properties. It is natural to wonder what happens when $R^{\mathrm{loc}}$ is not regular. We discuss an example in \S\ref{sec:eg} that illustrates the subtleties in this case. More specifically, we use an example of Serre to show that the conclusion of our main lemma, \cref{thm:thelemma}, may not hold when $R^{\mathrm{loc}}$ is not regular. In particular, one no longer has the flexibility in the choice of $X^{\mathrm{loc}}$ discussed above, and proving the main theorems of this paper in this case seems to require a better understanding of the irreducible components of the universal deformation rings in question. \subsection*{Outline} In \S\ref{sec:mainlem} we first recall a fact from intersection theory in regular local rings and prove our main lemma. We then recall the rigid analytic generic fibre of an affine formal scheme, and gather some facts regarding the relationship between the irreducible components of the generic fibre and those of the underlying affine scheme. In \S\ref{sec:GenDef} we recall some basics of deformation theory, in particular presentations and polarized deformation rings. We gather results from the literature on local deformation rings in \S\ref{sec:Local}. In \S\ref{sec:AutGalRep} we recall the notion of regular algebraic polarized cuspidal automorphic representations and their associated Galois representations. In \S\ref{sec:CM} we prove our main theorems for polarized Galois representations of CM fields. 
We do this after first recalling the small $R = \mathbb{T}$ theorems that we use in the proofs of our main theorems. Finally, \S\ref{sec:Hilb} treats the case of $\GL_2$ over totally real fields. \section*{Notation and Conventions} We fix a prime $p$ and an algebraic closure $\mathbb{Q}bar_p$ of $\mathbb{Q}_p$. We let $\mathcal{O}_{\mathbb{Q}bar_p}$ and $\mathbb{F}bar_p$ denote the ring of integers and residue field, respectively, of $\mathbb{Q}bar_p$, and let $\mathfrak{m}_{\mathbb{Q}bar_p}$ be the maximal ideal of $\mathcal{O}_{\mathbb{Q}bar_p}$. Throughout $E$ will denote a finite extension of $\mathbb{Q}_p$ inside $\mathbb{Q}bar_p$. We let $\mathcal{O}$ and $\mathbb{F}$ denote the ring of integers and residue field, respectively, of $E$. We denote the maximal ideal of a commutative local ring $A$ by $\mathfrak{m}_A$. We let $\mathbb{C}NL_{\mathcal{O}}$ be the category of complete local commutative Noetherian $\mathcal{O}$-algebras $A$ such that the structure map $\mathcal{O} \mathrm{rig}htarrow A$ induces an isomorphism $\mathbb{F} \xrightarrow{\sim} A/\mathfrak{m}_A$, and whose morphisms are local $\mathcal{O}$-algebra morphisms. We will refer to an object, resp. a morphism, in $\mathbb{C}NL_{\mathcal{O}}$ as a $\mathbb{C}NL_{\mathcal{O}}$-algebra, resp. a $\mathbb{C}NL_{\mathcal{O}}$-morphism. Let $R$ be a commutative ring equipped with a canonical map $R^\square \mathrm{rig}htarrow R$, resp. $R^{\mathrm{univ}} \mathrm{rig}htarrow R$, with $R^\square$ a universal lifting ring, resp. $R^{\mathrm{univ}}$ a universal deformation ring (see \S\ref{sec:GenDef}). Then for any homomorphism $x : R \mathrm{rig}htarrow A$, with $A$ a commutative ring, we denote by $\rho_x$ the pushforward of the universal lift, resp. universal deformation, via $R^\square \mathrm{rig}htarrow R \xrightarrow{x} A$, resp. via $R^{\mathrm{univ}} \mathrm{rig}htarrow R \xrightarrow{x} A$. Throughout $F$ will denote a number field. A CM field is always assumed to be imaginary. 
If $F$ is CM, its maximal totally real subfield will be denoted by $F^+$, and $\delta_{F/F^+}$ will denote the nontrivial $\{\pm 1\}$-valued character of $\mathrm{Gal}(F/F^+)$. We fix an algebraic closure $\overline{F}$ of $F$ and set $G_F = \mathrm{Gal}(\overline{F}/F)$. We will assume that all finite extensions $L/F$ are taken in $\overline{F}$. If $L/F$ is a finite Galois extension and $S$ is a finite set of finite places of $F$, we let $L_S$ denote the maximal extension of $L$ that is unramified outside of the places of $L$ above those in $S$ and the Archimedean places. Throughout $\zeta_p$ will denote a primitive $p$th root of unity in $\overline{F}$. For a finite place $v$ of $F$, we denote by $F_v$ its completion at $v$, and $\mathcal{O}_{F_v}$ its ring of integers. We fix an algebraic closure $\overline{F}_v$ of $F_v$ and let $G_v = \mathrm{Gal}(\overline{F}_v/F_v)$, and denote by $I_v$ the inertia subgroup of $G_v$. We will assume that all finite extensions of $F_v$ are taken inside of $\overline{F}_v$. We let $\mathbb{A}rt_{F_v} : F_v^\times \hookrightarrow G_v^{\mathrm{ab}}$ be the Artin reciprocity map normalized so that uniformizers are sent to geometric Frobenius elements. We denote the adeles of $F$ by $\mathbb{A}_F$, and let $\mathbb{A}rt_F : F^\times \backslash \mathbb{A}_F^\times \mathrm{rig}htarrow G_F^{\mathrm{ab}}$ be $\mathbb{A}rt_F = \prod_{v} \mathbb{A}rt_{F_v}$. Let $\mathbb{Z}_+^n$ be the set of tuples of integers $(\lambda_1,\ldots,\lambda_n)$ such that $\lambda_1 \ge \cdots \ge \lambda_n$. For any $v|p$ in $F$, we write $\mathrm{Hom}(F_v,\mathbb{Q}bar_p)$ for the set of continuous field embeddings $F_v \hookrightarrow \mathbb{Q}bar_p$. If $\sigma \in \mathrm{Hom}(F_v,\mathbb{Q}bar_p)$, we again write $\sigma$ for the induced embedding $F \hookrightarrow \mathbb{Q}bar_p$. 
Conversely, given a field embedding $\sigma : F \hookrightarrow \mathbb{Q}bar_p$, we again write $\sigma$ for the continuous embedding $F_v \hookrightarrow \mathbb{Q}bar_p$ induced by $\sigma$. If we are given an isomorphism $\iota : \mathbb{Q}bar_p \xrightarrow{\sim} \Omega$ of fields, and $\sigma: K\hookrightarrow \mathbb{Q}bar_p$ is an embedding of fields, we write $\iota\sigma$ for $\iota\circ\sigma$. If $r : G \mathrm{rig}htarrow \mathbb{A}ut_{\mathbb{Q}bar_p}(V)$ is a representation of a group $G$ on a $\mathbb{Q}bar_p$-vector space $V$, then we will denote by $\iota r$ the representation of $G$ on the $\Omega$-vector space $V\otimes_{\mathbb{Q}bar_p,\iota}\Omega$. If $\chi : F^\times \backslash \mathbb{A}_F^\times \mathrm{rig}htarrow \mathbb{C}^\times$ is a continuous character whose restriction to the connected component of $(F\otimes \mathbb{R})^\times$ is given by $x \mapsto \prod_{\sigma \in \mathrm{Hom}(F,\mathbb{C})} x_\sigma^{\lambda_\sigma}$ for some integers $\lambda_\sigma$, and $\iota : \mathbb{Q}bar_p \xrightarrow{\sim} \mathbb{C}$ is an isomorphism, we let $\chi_\iota : G_F \mathrm{rig}htarrow \mathbb{Q}bar_p^\times$ be the continuous character given by \[ \chi_\iota(\mathbb{A}rt_F(x)) = \iota^{-1} \Big( \chi(x) \prod_{\sigma \in \mathrm{Hom}(F,\mathbb{C})} x_\sigma^{-\lambda_\sigma} \Big) \prod_{\sigma\in \mathrm{Hom}(F,\mathbb{Q}bar_p)} x_\sigma^{\lambda_{\iota\sigma}}. \] We denote by $\varepsilon$ the $p$-adic cyclotomic character. We use covariant $p$-adic Hodge theory, and normalize our Hodge--Tate weights so that the Hodge--Tate weight of $\varepsilon$ is $-1$. Let $v$ be a place above $p$ in $F$, and let $\rho : G_v \mathrm{rig}htarrow \GL(V) \cong \GL_n(\mathbb{Q}bar_p)$ be a potentially semistable representation on an $n$-dimensional vector space over $\mathbb{Q}bar_p$. For $\sigma\in \mathrm{Hom}(F_v,\mathbb{Q}bar_p)$, we will write $\mathrm{HT}_\sigma(\rho)$ for the multiset of $n$ Hodge--Tate weights with respect to $\sigma$. 
Specifically, an integer $i$ appears in $\mathrm{HT}_\sigma(\rho)$ with multiplicity equal to the $\mathbb{Q}bar_p$-dimension of the $i$th graded piece of the $n$-dimensional filtered $\mathbb{Q}bar_p$-vector space $D_{\mathrm{dR}}(\rho)\otimes_{(F_v\otimes_{\mathbb{Q}_p}\mathbb{Q}bar_p)} \mathbb{Q}bar_p$, where $D_{\mathrm{dR}}(\rho) = (B_{\mathrm{dR}}\otimes_{\mathbb{Q}_p} V)^{G_v}$, $B_{\mathrm{dR}}$ is Fontaine's ring of de~Rham periods, and we view $\mathbb{Q}bar_p$ as an $F_v\otimes_{\mathbb{Q}_p} \mathbb{Q}bar_p$-algebra via $\sigma \otimes 1$. We say that $\rho$ has \emph{regular weight} if $\mathrm{HT}_\sigma(\rho)$ consists of $n$ distinct integers for each $\sigma\in \mathrm{Hom}(F_v,\mathbb{Q}bar_p)$. If this is the case, then there is $\lambda \in (\mathbb{Z}_+^n)^{\mathrm{Hom}(F_v,\mathbb{Q}bar_p)}$ such that $\mathrm{HT}_\sigma(\rho) = \{\lambda_{\sigma,j} + n - j\}_{j = 1,\ldots,n}$ for each $\sigma \in \mathrm{Hom}(F_v,\mathbb{Q}bar_p)$, and we call $\lambda$ the \emph{weight} of $\rho$. For a finite place $v$ of $F$, an $n$-dimensional \emph{inertial type} is a representation $\tau : I_v \mathrm{rig}htarrow \GL_n(\mathbb{Q}bar_p)$ of $I_v$ with open kernel that extends to a representation of the Weil group of $F_v$. We say $\tau$ is \emph{defined over} $E$ if it is the extension of scalars to $\mathbb{Q}bar_p$ of a representation valued in $\GL_n(E)$. If $\rho : G_v \mathrm{rig}htarrow \GL(V) \cong \GL_n(\mathbb{Q}bar_p)$ is a potentially semistable representation of $G_v$ on an $n$-dimensional vector space over $\mathbb{Q}bar_p$, we say that $\rho$ has \emph{inertial type} $\tau$ if the restriction to $I_v$ of the Weil--Deligne representation associated to $\rho$ is isomorphic to $\tau$. 
If $v|p$, this is equivalent to demanding that for every $\gamma \in I_v$, the trace of $\gamma$ acting on \[ D_{\mathrm{pst}}(\rho) := \varinjlim_{K/F_v \text{ finite}} (B_{\mathrm{st}} \otimes_{\mathbb{Q}_p} V)^{G_K} \] equals $\tr\tau(\gamma)$, where $B_{\mathrm{st}}$ denotes Fontaine's ring of semistable periods. \section{Irreducible components}\label{sec:mainlem} In this section, we first prove our main lemma, \cref{thm:thelemma}, which will allow us to deduce the existence of an automorphic point in every irreducible component of a (polarized) universal deformation ring from a small $R = \mathbb{T}$ theorem. We then recall Berthelot's rigid analytic generic fibre and record a lemma that allows us to deduce Zariski density statements in the generic fibre when there are multiple components. \subsection{Intersections and the main lemma}\label{sec:intersect} The following lemma is an easy consequence of intersection theory in regular local rings. \begin{lem}\label{thm:intersection} Let $B$ be a regular local commutative ring and let $R$ be a finite commutative $B$-algebra. For $\mathfrak{p}\in \Spec R$, $\mathfrak{q}\in \Spec B$ we have \[ \dim R/(\mathfrak{p},\mathfrak{q} R) \ge \dim R/\mathfrak{p} + \dim B/\mathfrak{q} - \dim B.\] \end{lem} \begin{proof} Let $\mathfrak{p}_B$ be the pullback of $\mathfrak{p}$ to $B$. If $\mathfrak{r}\in \Spec B$ is minimal containing $\mathfrak{p}_B + \mathfrak{q}$, then \cite{SerreLocAlg}*{Chapter V, Theorem 3} implies \[ \mathrm{ht}_B\mathfrak{r} \le \mathrm{ht}_B\mathfrak{p}_B + \mathrm{ht}_B\mathfrak{q}. \] Since $B$ is catenary, this implies \[ \dim B/\mathfrak{r} \ge \dim B/\mathfrak{p}_B + \dim B/\mathfrak{q} - \dim B.\] Since $R$ is finite over $B$, this in turn implies \[ \dim B/\mathfrak{r} \ge \dim R/\mathfrak{p} + \dim B/\mathfrak{q} - \dim B. 
\] Again using that $R$ is finite over $B$, we can find $\mathfrak{r}_R\in \Spec R$ containing $\mathfrak{p}$ and lying over $\mathfrak{r}\in\Spec B$, hence also containing $\mathfrak{r} R \supseteq \mathfrak{q} R$. Then \[ \dim R/(\mathfrak{p},\mathfrak{q} R) \ge \dim R/\mathfrak{r}_R = \dim B/\mathfrak{r} \ge \dim R/\mathfrak{p} + \dim B/\mathfrak{q} - \dim B.\qedhere\] \end{proof} We easily derive from this the following lemma, which is the linchpin in the proofs of our main theorems. We will use notation suggestive of our intended application to automorphic points in deformation rings. Recall that $E$ is a finite extension of $\mathbb{Q}_p$ with ring of integers $\mathcal{O}$. \begin{lem}\label{thm:thelemma} Let $R^{\mathrm{loc}}$ be a local commutative $\mathcal{O}$-algebra and let $X^{\mathrm{loc}}\subseteq\Spec R^{\mathrm{loc}}$ be a closed subscheme. Let $R$ be a commutative $R^{\mathrm{loc}}$-algebra and let $X = X^{\mathrm{loc}} \times_{\Spec R^{\mathrm{loc}}} \Spec R$. Let $\mathcal{C}$ be an irreducible component of $\Spec R$. Assume that \begin{ass} \item $R^{\mathrm{loc}}$ is a regular local ring; \item $X$ is finite over $\mathcal{O}$; \item $\dim \mathcal{C} + \dim X^{\mathrm{loc}} - \dim R^{\mathrm{loc}} \ge 1$. \end{ass} Then the intersection of $\mathcal{C}$ with $X\otimes_{\mathcal{O}} E$ in $\Spec R$ is nonempty. \end{lem} \begin{proof} Choose $\mathfrak{q} \in X^{\mathrm{loc}}$ such that $\dim X^{\mathrm{loc}} = \dim R^{\mathrm{loc}}/\mathfrak{q}$. Then $R/\mathfrak{q} R$ is finite over $\mathcal{O}$, which implies that $R/\mathfrak{m}_{R^{\mathrm{loc}}} R$ is finite over $R^{\mathrm{loc}}/\mathfrak{m}_{R^{\mathrm{loc}}}$, hence $R$ is finite over $R^{\mathrm{loc}}$. 
Then \cref{thm:intersection} implies that \[ \dim(\mathcal{C}\cap \Spec R/\mathfrak{q} R) \ge \dim \mathcal{C} + \dim R^{\mathrm{loc}}/\mathfrak{q} - \dim R^{\mathrm{loc}} \ge 1,\] so there is $\mathfrak{q}' \in \mathcal{C}\cap \Spec R/\mathfrak{q} R$ with $\dim R/\mathfrak{q}' = 1$. Since $R/\mathfrak{q} R$ is finite over $\mathcal{O}$, $p\notin \mathfrak{q}'$, and $\mathfrak{q}'$ is in the intersection of $\mathcal{C}$ and $\Spec (R/\mathfrak{q} R) [1/p]$. \end{proof} \subsection{Generic fibres}\label{sec:genfib} We recall the rigid analytic generic fibre of Berthelot \cite{BerthelotRigCohomSupp}*{\S0.2}, for which we use \cite{deJongCrysDieu}*{\S7} as a reference. Let $X = \Spf R$ be a Noetherian adic affine formal $\mathcal{O}$-scheme such that $R/I$ is a finite type $\mathbb{F}$-algebra, where $I\subset R$ is the largest ideal defining the topology on $R$. There is a rigid analytic space $X^{\mathrm{rig}}$, called the \emph{rigid analytic generic fibre} of $\Spf R$, that represents the functor that sends an $E$-affinoid algebra $A$ to the set of continuous $\mathcal{O}$-algebra morphisms $R \mathrm{rig}htarrow A$ (see \cite{deJongCrysDieu}*{\S7.1}). Moreover, $X \mapsto X^{\mathrm{rig}}$ is functorial, and there is a canonical $\mathcal{O}$-algebra morphism $R \mathrm{rig}htarrow \Gamma(X^{\mathrm{rig}},\mathcal{O}_{X^{\mathrm{rig}}})$. If $R$ is a $\mathbb{C}NL_{\mathcal{O}}$-algebra (which is the case of interest for us), then $(\Spf R)^{\mathrm{rig}}$ has the following concrete description: if $\mathcal{O}[[y_1,\ldots,y_g]]/(f_1,\ldots,f_k)$ is a presentation of the maximal $\mathcal{O}$-flat quotient of $R$, then $(\Spf R)^{\mathrm{rig}}$ is isomorphic to the locus in the open rigid analytic unit $g$-ball over $E$ cut out by the equations $f_1 = \cdots = f_k = 0$. The following is \cite{deJongCrysDieu}*{Lemma~7.1.9}. \begin{prop}\label{thm:genfibprop} Let $X = \Spf R$ be an affine formal $\mathcal{O}$-scheme as above. 
\begin{enumerate} \item\label{genfibprop:pts} There is a bijection between the points of $X^{\mathrm{rig}}$ and the set of maximal ideals of $R[1/p]$. This bijection is functorial in $R$. \item\label{genfibprop:locring} Let $x\in X^{\mathrm{rig}}$ correspond to the maximal ideal $\mathfrak{m} \subset R[1/p]$ under the bijection of \cref{genfibprop:pts}. There is a canonical morphism of local rings $R[1/p]_{\mathfrak{m}} \mathrm{rig}htarrow \mathcal{O}_{X^{\mathrm{rig}},x}$. This map is compatible with $R \mathrm{rig}htarrow \Gamma(X^{\mathrm{rig}},\mathcal{O}_{X^{\mathrm{rig}}})$, and induces an isomorphism on completions. \end{enumerate} \end{prop} \begin{lem}\label{thm:speczardense} Let $R$ be an $\mathcal{O}$-flat $\mathbb{C}NL_{\mathcal{O}}$-algebra, and let $X = \Spf R$. Let $Z$ be a set of maximal ideals in $R[1/p]$, and let $Z^{\mathrm{rig}} \subset X^{\mathrm{rig}}$ be the set of points corresponding to $Z$ under \cref{genfibprop:pts} of \cref{thm:genfibprop}. If $Z^{\mathrm{rig}}$ is Zariski dense in $X^{\mathrm{rig}}$, then $Z$ is Zariski dense in $\Spec R$. \end{lem} \begin{proof} Take $f\in R$ that vanishes at all $\mathfrak{m} \in Z$. Since $R$ is $\mathcal{O}$-flat, it suffices to prove $f$ is nilpotent in $R[1/p]$. Since $R[1/p]$ is Jacobson (see \cite{EGA4.3}*{Corollaire~10.5.8}), it further suffices to prove that $f$ belongs to every maximal ideal of $R[1/p]$. The Zariski density of $Z^{\mathrm{rig}}$ implies that the image of $f$ under $R \mathrm{rig}htarrow \Gamma(X^{\mathrm{rig}},\mathcal{O}_{X^{\mathrm{rig}}})$ vanishes at all points in $X^{\mathrm{rig}}$, which implies $f$ belongs to every maximal ideal of $R[1/p]$ by \cref{thm:genfibprop}. \end{proof} The converse is not true in general. For example, let $\widehat{\mathbb{G}}_m = \Spf \mathcal{O}[[t]]$ be the formal multiplicative group over $\mathcal{O}$, and let $Z$ be the set of maximal ideals in $\mathcal{O}[[t]][1/p]$ corresponding to $p$-power roots of unity in $\mathbb{Q}bar_p$. 
Then $Z$ is Zariski dense in $\Spec \mathcal{O}[[t]]$, but $\widehat{\mathbb{G}}_m^{\mathrm{rig}}$ is the open rigid analytic unit ball over $E$ with coordinate $t$, and every point in $Z^{\mathrm{rig}}$ is a zero of the analytic function $\log(1+t)$. Loeffler \cite{LoefflerDense} has shown that this observation has interesting consequences for universal deformation rings of one-dimensional Galois representations. To apply the principal theorems in this paper to Chenevier's conjecture, we will need to understand the relationship between irreducible components of universal deformation rings and irreducible components of their rigid analytic generic fibres. The following lemma follows from a result of Conrad \cite{ConradIrredRig}*{Theorem~2.3.1}. \begin{lem}\label{thm:genfibcomps} Let $R$ be an $\mathcal{O}$-flat, reduced $\mathbb{C}NL_{\mathcal{O}}$-algebra, and let $X = \Spf R$. The map that sends a minimal prime ideal $\mathfrak{q}$ of $R$ to $(\Spf R/\mathfrak{q})^{\mathrm{rig}}$ induces a bijection between the irreducible components of $\Spec R$ and the irreducible components of $X^{\mathrm{rig}}$. \end{lem} \begin{proof} Let $\{\mathfrak{q}_i\}_i$ be the minimal prime ideals of $R$. Since $(\cdot)^{\mathrm{rig}}$ takes closed immersions to closed immersions (see \cite{deJongCrysDieu}*{Proposition~7.2.4}), $(\Spf R/\mathfrak{q}_i)^{\mathrm{rig}}$ is a closed analytic subvariety of $X^{\mathrm{rig}}$. We wish to show that $\{(\Spf R/\mathfrak{q}_i)^{\mathrm{rig}}\}_i$ is the set of irreducible components of $X^{\mathrm{rig}}$. Let $\widetilde{R}$ be the normalization of $R$, and let $\phi : \Spf \widetilde{R} \mathrm{rig}htarrow \Spf R$ denote the canonical map. 
Since $R$ is excellent, the map that sends a maximal ideal $\tilde{\mathfrak{m}}$ of $\widetilde{R}$ to the kernel of the induced map $R \mathrm{rig}htarrow \widetilde{R}_{\tilde{\mathfrak{m}}}$ induces a bijection between the connected components of $\Spf \widetilde{R}$ and $\{\mathfrak{q}_i\}_i$ (see \cite{EGA4.2}*{Scholie~7.8.3(vii)}). Let $\widetilde{X}_i$ denote the connected component of $\Spf \widetilde{R}$ corresponding to the minimal prime $\mathfrak{q}_i$ of $R$, hence $\phi(\widetilde{X}_i) = \Spf R/\mathfrak{q}_i$. By \cite{ConradIrredRig}*{Theorem~2.3.1}, the irreducible components of $X^{\mathrm{rig}}$ are $\{\phi^{\mathrm{rig}}(\widetilde{X}_i)\}_i$, and the functoriality of $(\cdot)^{\mathrm{rig}}$ implies $\phi^{\mathrm{rig}}(\widetilde{X}_i) = (\Spf R/\mathfrak{q}_i)^{\mathrm{rig}}$. \end{proof} In \S\S\ref{sec:CM} and \ref{sec:Hilb}, we will use the above via the following lemma. \begin{lem}\label{thm:genfiblem} Let $R$ be an $\mathcal{O}$-flat, reduced, equidimensional $\mathbb{C}NL_{\mathcal{O}}$-algebra, and let $X = \Spf R$. Let $Z$ be a set of maximal ideals in $R[1/p]$, and let $Z^{\mathrm{rig}} \subset X^{\mathrm{rig}}$ be the set of points corresponding to $Z$ under \cref{genfibprop:pts} of \cref{thm:genfibprop}. Assume: \begin{ass} \item\label{genfiblem:dim} Every irreducible component of the Zariski closure of $Z^{\mathrm{rig}}$ has dimension equal to $\dim R[1/p]$. \item\label{genfiblem:comps} For every irreducible component $\mathcal{C}$ of $\Spec R$, there is $\mathfrak{m} \in Z\cap \mathcal{C}$ such that $R[1/p]_{\mathfrak{m}}$ is regular. \end{ass} Then $Z^{\mathrm{rig}}$ is Zariski dense in $X^{\mathrm{rig}}$. \end{lem} \begin{proof} Since $R[1/p]$ is equidimensional, $X^{\mathrm{rig}}$ is equidimensional of dimension $\dim R[1/p]$ by \cref{thm:genfibprop}. 
By \cref{genfiblem:comps} and \cref{thm:genfibcomps}, for every irreducible component $X^{\mathrm{rig}}_i$ of $X^{\mathrm{rig}}$, there is a point $x \in Z^{\mathrm{rig}}\cap X^{\mathrm{rig}}_i$ that lies on no other irreducible component of $X^{\mathrm{rig}}$. The lemma now follows from \cref{genfiblem:dim} and \cite{ConradIrredRig}*{Corollary~2.2.7}. \end{proof} \section{General deformation theory}\label{sec:GenDef} We recall some generalities in the deformation theory of group representations, and fix some notation that will be used in the rest of this article. \subsection{Universal and fixed determinant deformation rings}\label{sec:general} Let $\Delta$ be a profinite group satisfying the $p$-\emph{finiteness condition}: for any open subgroup $H$ of $\Delta$, there are only finitely many continuous homomorphisms $H \mathrm{rig}htarrow \mathbb{F}_p$. This implies that for any finite dimensional $\mathbb{F}$-vector space $M$ with continuous $\mathbb{F}$-linear action of $\Delta$, the cohomology groups $H^i(\Delta,M)$ are all finite dimensional, as is the group of continuous $1$-cocycles $Z^1(\Delta,M)$. Fix a continuous homomorphism \[ \overline{\rho} : \Delta \longrightarrow \GL_n(\mathbb{F}).\] A \emph{lift} of $\overline{\rho}$ to a $\mathbb{C}NL_{\mathcal{O}}$-algebra $A$ is a continuous homomorphism \[ \rho : \Delta \longrightarrow \GL_n(A) \] such that $\overline{\rho} = \rho \mod {\mathfrak{m}_A}$. A \emph{deformation} of $\overline{\rho}$ to $A$ is a $1+\mathrm{M}_n(\mathfrak{m}_A)$-conjugacy class of lifts. We will often abuse notation and denote a deformation by a lift in its conjugacy class. We let $D^\square$, resp. $D$, denote the set valued functor on $\mathbb{C}NL_{\mathcal{O}}$ that sends a $\mathbb{C}NL_{\mathcal{O}}$-algebra $A$ to the set of lifts, resp. deformations, of $\overline{\rho}$ to $A$. 
If we wish to emphasize $\overline{\rho}$, we will write $D_{\overline{\rho}}^\square$ and $D_{\overline{\rho}}$, respectively. The functor $D^\square$ is representable, and so is $D$ if $\mathrm{End}_{\mathbb{F}[\Delta]}(\overline{\rho}) = \mathbb{F}$ (see \cite{BockleDefTheory}*{Proposition~1.3}). The representing object for $D^\square$, denoted $R^\square$, is called the \emph{universal lifting ring} for $\overline{\rho}$. We denote by $\rho^\square$ the universal lift to $R^\square$. If $\mathrm{End}_{\mathbb{F}[\Delta]}(\overline{\rho}) = \mathbb{F}$, the object representing $D$, denoted $R^{\mathrm{univ}}$, is called the \emph{universal deformation ring} for $\overline{\rho}$. We denote by $\rho^{\mathrm{univ}}$ the universal deformation to $R^{\mathrm{univ}}$. If we wish to emphasize $\overline{\rho}$, we will write $R_{\overline{\rho}}^\square$ and $R_{\overline{\rho}}^{\mathrm{univ}}$, respectively. The following well-known lemma (see \cite{MazurDefFermat}*{\S12} and \cite{BLGGT}*{Lemma~1.2.1}) allows us to enlarge our coefficient field $E$, and we will sometimes invoke it without comment. \begin{lem}\label{thm:coefchange} Let $E'/E$ be a finite extension with ring of integers $\mathcal{O}'$ and residue field $\mathbb{F}'$. Let $\overline{\rho}' = \overline{\rho} \otimes_{\mathbb{F}} \mathbb{F}'$. \begin{enumerate} \item The universal $\mathbb{C}NL_{\mathcal{O}'}$-lifting ring $R_{\overline{\rho}'}^\square$ is canonically isomorphic to $R_{\overline{\rho}}^\square \otimes_{\mathcal{O}} \mathcal{O}'$. \item If $\mathrm{End}_{\mathbb{F}[\Delta]}(\overline{\rho}) = \mathbb{F}$, the universal $\mathbb{C}NL_{\mathcal{O}'}$-deformation ring $R_{\overline{\rho}'}^{\mathrm{univ}}$ is canonically isomorphic to $R_{\overline{\rho}}^{\mathrm{univ}} \otimes_{\mathcal{O}} \mathcal{O}'$. \end{enumerate} \end{lem} This lemma has the following consequence that we will use below. 
Let \[ \rho : \Delta \longrightarrow \GL_n(\mathcal{O}_{\mathbb{Q}bar_p}) \] be a continuous representation such that $\overline{\rho} \otimes \mathbb{F}bar_p = \rho \mod {\mathfrak{m}_{\mathbb{Q}bar_p}}$. The compactness of $\Delta$ implies there is a finite extension $E'/E$ inside $\mathbb{Q}bar_p$, with ring of integers $\mathcal{O}'$, such that $\rho$ takes image in $\GL_n(\mathcal{O}')$. Then \cref{thm:coefchange} implies there is a unique local $\mathcal{O}$-algebra morphism $x : R_{\overline{\rho}}^\square \mathrm{rig}htarrow \mathcal{O}_{\mathbb{Q}bar_p}$ such that $\rho = \rho_x$. Let $\ad$ denote the adjoint action of $\GL_n$ on its Lie algebra $\mathfrak{gl}_n$. Let $\ad(\overline{\rho})$ and $\ad^0(\overline{\rho})$ denote $\mathfrak{gl}_n(\mathbb{F})$ and its trace zero subspace $\mathfrak{sl}_n(\mathbb{F})$, respectively, each equipped with the adjoint action $\ad\circ \overline{\rho}$ of $\Delta$. \begin{prop}\label{thm:genunivpres} \begin{enumerate} \item There is a presentation \[ R^\square \cong \mathcal{O}[[x_1,\ldots,x_g]]/(f_1,\ldots,f_k)\] with $g = \dim_\mathbb{F} Z^1(\Delta,\ad(\overline{\rho}))$ and $k \le \dim_{\mathbb{F}} H^2(\Delta,\ad(\overline{\rho}))$. In particular, every irreducible component of $\Spec R^\square$ has dimension at least \[ 1+ \dim_\mathbb{F} Z^1(\Delta,\ad(\overline{\rho})) - \dim_\mathbb{F} H^2(\Delta,\ad(\overline{\rho})),\] and $R^\square$ is formally smooth over $\mathcal{O}$ if $H^2(\Delta,\ad(\overline{\rho})) = 0$. \item Assume $\mathrm{End}_{\mathbb{F}[\Delta]}(\overline{\rho}) = \mathbb{F}$. There is a presentation \[ R^{\mathrm{univ}} \cong \mathcal{O}[[x_1,\ldots,x_g]]/(f_1,\ldots,f_k)\] with $g = \dim_\mathbb{F} H^1(\Delta,\ad(\overline{\rho}))$ and $k \le \dim_{\mathbb{F}} H^2(\Delta,\ad(\overline{\rho}))$. 
In particular, every irreducible component of $\Spec R^{\mathrm{univ}}$ has dimension at least \[1+ \dim_\mathbb{F} H^1(\Delta,\ad(\overline{\rho})) - \dim_\mathbb{F} H^2(\Delta,\ad(\overline{\rho})),\] and $R^{\mathrm{univ}}$ is formally smooth over $\mathcal{O}$ if $H^2(\Delta,\ad(\overline{\rho})) = 0$. \end{enumerate} \end{prop} \begin{proof} The second part is \cite{BockleLocGlob}*{Theorem~2.4}. The first part is proved in the same way, since the tangent space of $D^\square$ is isomorphic to $Z^1(\Delta,\ad(\overline{\rho}))$ via the map $Z^1(\Delta,\ad(\overline{\rho})) \mathrm{rig}htarrow D^\square(\mathbb{F}[\varepsilon])$ given by $\kappa \mapsto (1+\varepsilon \kappa)\overline{\rho}$, where $\mathbb{F}[\varepsilon]$, with $\varepsilon^2 = 0$, is the ring of dual numbers over $\mathbb{F}$. \end{proof} Now let $\mu : \Delta \mathrm{rig}htarrow \mathcal{O}^\times$ be a continuous character such that $\det\overline{\rho} = \mu \mod {\mathfrak{m}_{\mathcal{O}}}$. Define subfunctors $D^{\square,\mu} \subseteq D^\square$ and $D^\mu\subseteq D$ that send a $\mathbb{C}NL_{\mathcal{O}}$-algebra $A$ to the set of lifts, resp. deformations, $\rho$ of $\overline{\rho}$ to $A$ such that $\det\rho = \mu$. These subfunctors are easily seen to be represented by the quotient of $R^\square$, resp. of $R^{\mathrm{univ}}$ (assuming $\mathrm{End}_{\mathbb{F}[\Delta]}(\overline{\rho}) = \mathbb{F}$), by the ideal generated by $\{\det\rho^\square(\delta) - \mu(\delta) \mid \delta \in \Delta\}$, resp. generated by $\{\det\rho^{\mathrm{univ}}(\delta) - \mu(\delta) \mid \delta\in \Delta\}$. The representing object for $D^{\square,\mu}$, denoted $R^{\square,\mu}$, is called the \emph{universal determinant} $\mu$ \emph{lifting ring} for $\overline{\rho}$. If $\mathrm{End}_{\mathbb{F}[\Delta]}(\overline{\rho}) = \mathbb{F}$, the object representing $D^\mu$, denoted $R^{\mu}$, is called the \emph{universal determinant} $\mu$ \emph{deformation ring} for $\overline{\rho}$. 
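To make the lower bound in \cref{thm:genunivpres} concrete, here is a standard illustration (a sketch under the assumption $\Delta = G_K$ for a finite extension $K/\mathbb{Q}_p$; it is not used elsewhere in the paper). Writing $h^i = \dim_{\mathbb{F}} H^i(G_K,\ad(\overline{\rho}))$, Tate's local Euler characteristic formula gives $h^1 - h^0 - h^2 = n^2[K:\mathbb{Q}_p]$ for the coefficients $\ad(\overline{\rho})$, while $\dim_{\mathbb{F}} Z^1(G_K,\ad(\overline{\rho})) = h^1 + n^2 - h^0$ since the coboundaries have dimension $n^2 - h^0$. Combining:

```latex
% Sketch: Delta = G_K with K/Q_p finite, coefficients ad(rhobar) of
% F-dimension n^2. Tate's local Euler characteristic formula yields
\[
  \dim_{\mathbb{F}} Z^1(G_K,\ad(\overline{\rho}))
  - \dim_{\mathbb{F}} H^2(G_K,\ad(\overline{\rho}))
  = n^2 + n^2[K:\mathbb{Q}_p].
\]
```

Hence, by \cref{thm:genunivpres}, every irreducible component of $\Spec R^{\square}$ has dimension at least $1 + n^2 + n^2[K:\mathbb{Q}_p]$ in this case.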
\begin{prop}\label{thm:gendetpres} Assume $p\nmid n$. \begin{enumerate} \item There is a presentation \[ R^{\square,\mu} \cong \mathcal{O}[[x_1,\ldots,x_g]]/(f_1,\ldots,f_k)\] with $g = \dim_\mathbb{F} Z^1(\Delta,\ad^0(\overline{\rho}))$ and $k \le \dim_{\mathbb{F}} H^2(\Delta,\ad^0(\overline{\rho}))$. In particular, every irreducible component of $\Spec R^{\square,\mu}$ has dimension at least \[ 1+ \dim_\mathbb{F} Z^1(\Delta,\ad^0(\overline{\rho})) - \dim_\mathbb{F} H^2(\Delta,\ad^0(\overline{\rho})),\] and $R^{\square,\mu}$ is formally smooth over $\mathcal{O}$ if $H^2(\Delta,\ad^0(\overline{\rho})) = 0$. \item Assume $\mathrm{End}_{\mathbb{F}[\Delta]}(\overline{\rho}) = \mathbb{F}$. There is a presentation \[ R^{\mu} \cong \mathcal{O}[[x_1,\ldots,x_g]]/(f_1,\ldots,f_k)\] with $g = \dim_\mathbb{F} H^1(\Delta,\ad^0(\overline{\rho}))$ and $k \le \dim_{\mathbb{F}} H^2(\Delta,\ad^0(\overline{\rho}))$. In particular, every irreducible component of $\Spec R^\mu$ has dimension at least \[1+ \dim_\mathbb{F} H^1(\Delta,\ad^0(\overline{\rho})) - \dim_\mathbb{F} H^2(\Delta,\ad^0(\overline{\rho})),\] and $R^\mu$ is formally smooth over $\mathcal{O}$ if $H^2(\Delta,\ad^0(\overline{\rho})) = 0$. \end{enumerate} \end{prop} \begin{proof} This is similar to \cref{thm:genunivpres} above. For example, see \cite{KisinModof2}*{Lemma~4.1.1} (since we have assumed $p\nmid n$, the decomposition $\ad(\overline{\rho}) = \ad^0(\overline{\rho})\oplus \mathbb{F}$ is $\Delta$-equivariant, so the groups denoted $H^i(G,\ad^0 V)'$ in \cite{KisinModof2}*{\S 4.1} are just $H^i(\Delta,\ad^0(\overline{\rho}))$ in our case). \end{proof} \subsection{Polarized deformation rings}\label{sec:polardefring} We now assume that $\Delta$ is an open index two subgroup of a profinite group $\Gamma$, and that there is $c\in \Gamma\smallsetminus \Delta$ of order $2$. So $\Gamma$ is the semidirect product of $\Delta$ and $\{1,c\}$. 
For any commutative ring $A$ and homomorphism $\rho : \Delta \mathrm{rig}htarrow \GL_n(A)$, we let $\rho^c$ denote the conjugate homomorphism, i.e. $\rho^c(\delta) = \rho(c\delta c)$ for all $\delta\in \Delta$, and let $\rho^\vee$ denote the $A$-linear dual of $\rho$, i.e. $\rho^\vee(\delta) = {}^t \rho(\delta)^{-1}$ for all $\delta\in \Delta$. Let $\mu : \Gamma \mathrm{rig}htarrow \mathcal{O}^\times$ be a continuous character, and let $\overline{\mu}: \Gamma\mathrm{rig}htarrow \mathbb{F}^\times$ be its reduction mod $\mathfrak{m}_{\mathcal{O}}$. We assume that \[ \overline{\rho}^c \cong \overline{\rho}^\vee \otimes\overline{\mu}.\] We then define a subfunctor $D^{\mathrm{pol}} \subseteq D$ by letting $D^{\mathrm{pol}}(A)$, for a $\mathbb{C}NL_{\mathcal{O}}$-algebra $A$, be the subset of deformations $\rho$ of $\overline{\rho}$ to $A$ such that $\rho^c \cong \rho^\vee \otimes \mu$. \begin{prop}\label{thm:poldefrep} Assume $\overline{\rho}$ is absolutely irreducible. Then $D^{\mathrm{pol}}$ is representable by a quotient $R^{\mathrm{pol}}$ of $R^{\mathrm{univ}}$. \end{prop} \begin{proof} Let $R^{\mathrm{pol}}$ be the quotient of $R^{\mathrm{univ}}$ by the ideal generated by \[ \{\tr\rho^{\mathrm{univ}}(c\delta c) - \tr{}^t \rho^{\mathrm{univ}}(\delta)^{-1}\mu(\delta) \mid \delta \in \Delta\}.\] The result now follows from \cite{CarayolAnneauLocal}*{Th\'{e}or\`{e}me~1}. \end{proof} If $\overline{\rho}$ is absolutely irreducible, we call the object $R^{\mathrm{pol}}$ representing $D^{\mathrm{pol}}$ the \emph{universal} $\mu$-\emph{polarized deformation ring} of $\overline{\rho}$. If we wish to emphasize the role of $\mu$, we will write $R^{\mu\mbox{-}\mathrm{pol}}$ for $R^{\mathrm{pol}}$. We recall the Clozel--Harris--Taylor group scheme $\mathcal{G}_n$, which is the group scheme over $\mathbb{Z}$ defined as the semidirect product \[ (\GL_n \times \GL_1) \rtimes \{1,\jmath\} = \mathcal{G}_n^0 \rtimes \{1,\jmath\}, \] where $\jmath (g,a) \jmath = (a \,{}^t\! 
g^{-1},a)$, and we let $\nu : \mathcal{G}_n \mathrm{rig}htarrow \GL_1$ be the homomorphism given by $\nu(g,a) = a$ and $\nu(\jmath) = -1$. If $A$ is any commutative ring and $r : \Gamma \mathrm{rig}htarrow \mathcal{G}_n(A)$ is a homomorphism such that $r(\Delta) \subseteq \mathcal{G}_n^0(A)$, we will write $r|_\Delta$ for the composite of the restriction of $r$ to $\Delta$ with the projection $\mathcal{G}_n^0(A) \mathrm{rig}htarrow \GL_n(A)$. In particular, we view $A^n$ as an $A[\Delta]$-module via $r|_{\Delta}$. The following is (part of) \cite{CHT}*{Lemma~2.1.1}. \begin{lem}\label{thm:Gdhoms} Let $A$ be a topological ring. There is a natural bijection between the following two sets. \begin{enumerate} \item Continuous homomorphisms $r : \Gamma \mathrm{rig}htarrow \mathcal{G}_n(A)$ inducing an isomorphism $\Gamma/\Delta \xrightarrow{\sim} \mathcal{G}_n(A)/\mathcal{G}_n^0(A)$. \item Triples $(\rho,\eta,\langle\cdot,\cdot\mathrm{rig}htarrowngle)$, where $\rho : \Delta \mathrm{rig}htarrow \GL_n(A)$ and $\eta : \Gamma \mathrm{rig}htarrow A^\times$ are continuous homomorphisms and $\langle \cdot, \cdot \mathrm{rig}htarrowngle$ is a perfect $A$-linear pairing on $A^n$ satisfying \[ \langle \rho(\delta) a, \rho(c\delta c) b \mathrm{rig}htarrowngle = \eta(\delta)\langle a, b \mathrm{rig}htarrowngle \quad \text{and} \quad \langle a, b \mathrm{rig}htarrowngle = - \eta(c) \langle b, a \mathrm{rig}htarrowngle \] for all $a,b\in A^n$ and $\delta \in \Delta$. \end{enumerate} Under this bijection, $\rho = r|_\Delta$, $\eta = \nu\circ r$, and $\langle a, b \mathrm{rig}htarrowngle = {}^t a P^{-1} b$ where $r(c) = (P,-\eta(c))\jmath$. \end{lem} In particular, our fixed $\overline{\rho}$ extends to a continuous homomorphism \[ \overline{r} : \Gamma \longrightarrow \mathcal{G}_n(\mathbb{F}),\] such that $\nu\circ\overline{r} = \overline{\mu}$, which we fix. 
For a $\mathrm{CNL}_{\mathcal{O}}$-algebra $A$, a \emph{lift} of $\overline{r}$ to $A$ is a continuous homomorphism $r : \Gamma \rightarrow \mathcal{G}_n(A)$ such that $r \mod \mathfrak{m}_A = \overline{r}$. A \emph{deformation} of $\overline{r}$ to $A$ is a $1+\mathrm{M}_n(\mathfrak{m}_A)$-conjugacy class of lifts. By \cref{thm:Gdhoms}, if $r$ is a deformation of $\overline{r}$ to a $\mathrm{CNL}_{\mathcal{O}}$-algebra $A$, then $r|_\Delta$ is a deformation of $\overline{\rho}$ to $A$. We say a lift or a deformation $r$ of $\overline{r}$ \emph{has multiplier} $\mu$ if $\nu \circ r = \mu$. We let $D_{\overline{r}}^{\mathrm{pol}}$ be the set-valued functor on $\mathrm{CNL}_{\mathcal{O}}$ that sends a $\mathrm{CNL}_{\mathcal{O}}$-algebra $A$ to the set of deformations of $\overline{r}$ to $A$ with multiplier $\mu$. \begin{prop}\label{thm:polisofunctors} Assume $p>2$ and $\overline{\rho}$ is absolutely irreducible. The map $D_{\overline{r}}^{\mathrm{pol}} \rightarrow D_{\overline{\rho}}^{\mathrm{pol}}$ given by $r \mapsto r|_\Delta$ is an isomorphism of functors. \end{prop} \begin{proof} This is exactly as in \cite{ChenevierFern}*{Lemma~1.5}. As the proof is short, we reproduce it for completeness. Let $A$ be a $\mathrm{CNL}_{\mathcal{O}}$-algebra. The map $D_{\overline{r}}^{\mathrm{pol}}(A) \rightarrow D_{\overline{\rho}}^{\mathrm{pol}}(A)$ is surjective by \cref{thm:Gdhoms}. Assume that $r_1,r_2$ are two lifts of $\overline{r}$ to $A$ such that $r_i|_{\Delta}$ are $1+\mathrm{M}_n(\mathfrak{m}_A)$-conjugate. Then replacing $r_2$ with a lift in its deformation class, we can assume $r_1|_\Delta = r_2|_\Delta$. Letting $P_i$ be given by $r_i(c) = (P_i,-\mu(c))\jmath$ as in \cref{thm:Gdhoms}, $P_1P_2^{-1}$ commutes with the image of $r_1|_\Delta = r_2|_\Delta$.
Since $\overline{\rho}$ is absolutely irreducible, $P_1 P_2^{-1} = \beta \in A^\times$; since $r_1$ and $r_2$ both lift $\overline{r}$, $\beta \in 1+\mathfrak{m}_A$; and since $p>2$, there is $\alpha \in 1+\mathfrak{m}_A$ such that $\alpha^2 = \beta$. Then $r_1 = \alpha r_2 \alpha^{-1}$, so $r_1$ and $r_2$ define the same deformation. \end{proof} In particular, when $p>2$ and $\overline{\rho}$ is absolutely irreducible, we view $R^{\mathrm{pol}}$ as the universal ($\mu$-polarized) deformation ring for $\overline{r}$. Since $\mathfrak{gl}_n = \Lie \GL_n \subset \Lie \mathcal{G}_n$, the adjoint action of $\GL_n$ on $\mathfrak{gl}_n$ extends to $\mathcal{G}_n$ by \[ \ad(g,a)(x) = gxg^{-1} \quad \text{and} \quad \ad(\jmath)(x) = -{}^t x.\] If $A$ is a commutative ring, and \[ r : \Gamma \longrightarrow \mathcal{G}_n(A) \] is a homomorphism, we write $\ad(r)$ for $\mathfrak{gl}_n(A)$ with the adjoint action $\ad\circ r$ of $\Gamma$. \begin{prop}\label{thm:genpolpres} Assume $p>2$ and $\overline{\rho}$ is absolutely irreducible. There is a presentation \[ R^{\mathrm{pol}} \cong \mathcal{O}[[x_1,\ldots,x_g]]/(f_1,\ldots,f_k)\] with $g = \dim_{\mathbb{F}} H^1(\Gamma,\ad(\overline{r}))$ and $k \le \dim_{\mathbb{F}} H^2(\Gamma,\ad(\overline{r}))$. In particular, every irreducible component of $\Spec R^{\mathrm{pol}}$ has dimension at least \[1+\dim_{\mathbb{F}} H^1(\Gamma,\ad(\overline{r})) - \dim_{\mathbb{F}} H^2(\Gamma,\ad(\overline{r})),\] and $R^{\mathrm{pol}}$ is formally smooth over $\mathcal{O}$ if $H^2(\Gamma,\ad(\overline{r})) = 0$. \end{prop} \begin{proof} This is a special case of \cite{CHT}*{Lemma~2.2.11 and Corollary~2.2.12}. (In our case, the sets denoted $S$ and $T$ in \cite{CHT}*{Lemma~2.2.11 and Corollary~2.2.12} are both empty. So in the notation of \cite{CHT}, $R_{\mathscr{S},T}^{\mathrm{loc}} = \mathcal{O}$ and $H^i_{\mathscr{S},T}(\Gamma,\ad(\overline{r})) =H^i(\Gamma,\ad(\overline{r}))$). 
\end{proof} \begin{rmk}\label{rmk:GL2pol} We could also define a polarized deformation functor in the case that $\overline{\rho} \cong \overline{\rho}^\vee \otimes\overline{\mu}$, which amounts to $\GSp_n$ or $\GO_n$-valued deformation theory with fixed multiplier character $\mu$. In particular, when $n=2$, $p>2$, and $\mu$ lifts $\det\overline{\rho}$, the universal $\mu$-polarized deformation functor equals the universal determinant $\mu$ deformation functor. It is useful to keep this in mind when comparing the results of \S\cref{sec:CM} and \S\cref{sec:Hilb}. \end{rmk} \section{Local deformation theory}\label{sec:Local} In this section we recall some results from the literature on local deformation rings that we will need later. \subsection{Setup}\label{sec:locsetup} Let $v$ be a finite place of $F$. Fix a continuous representation \[ \overline{\rho}_v : G_v \longrightarrow \GL_n(\mathbb{F}), \] and a continuous character $\mu : G_v \rightarrow \mathcal{O}^\times$ with $\mu \bmod {\mathfrak{m}_{\mathcal{O}}} = \det \overline{\rho}_v$. Denote by $R_v^\square$ the universal lifting ring for $\overline{\rho}_v$, and by $\rho_v^\square$ the universal lift. Let $R_v^{\square,\mu}$ be the universal determinant $\mu$ lifting ring, and denote by $\rho_v^\mu$ the universal $R_v^{\square,\mu}$-valued lift. We refer the reader to \cite{CHT}*{Definition~2.2.2} for the notion of a \emph{local deformation problem}. We will primarily use the following characterization (\cite{CHT}*{Lemma~2.2.3}): \begin{itemize} \item Any local deformation problem is representable by a quotient of $R_v^\square$.
\item A quotient $R$ of $R_v^\square$ represents a local deformation problem if and only if it satisfies the following: for any $\mathrm{CNL}_{\mathcal{O}}$-algebra $A$, lift $\rho$ of $\overline{\rho}_v$ to $A$, and $g \in 1+\mathrm{M}_n(\mathfrak{m}_A)$, the $\mathrm{CNL}_{\mathcal{O}}$-algebra morphism $R_v^\square \rightarrow A$ induced by $\rho$ factors through $R$ if and only if the $\mathrm{CNL}_{\mathcal{O}}$-algebra map $R_v^\square \rightarrow A$ induced by $g\rho g^{-1}$ factors through $R$. \end{itemize} \begin{lem}\label{thm:defquolem} Let $R$ be an $\mathcal{O}$-flat reduced quotient of $R_v^\square$. \begin{enumerate} \item\label{defquolem:flat} $R$ represents a local deformation problem if and only if for every finite totally ramified extension $E'/E$ with ring of integers $\mathcal{O}'$, lift $\rho$ of $\overline{\rho}_v$ to $\mathcal{O}'$, and $g \in 1+\mathrm{M}_n(\mathfrak{m}_{\mathcal{O}'})$, the $\mathrm{CNL}_{\mathcal{O}}$-algebra morphism $R_v^\square \rightarrow \mathcal{O}'$ induced by $\rho$ factors through $R$ if and only if the $\mathrm{CNL}_{\mathcal{O}}$-algebra map $R_v^\square \rightarrow \mathcal{O}'$ induced by $g\rho g^{-1}$ factors through $R$. \item\label{defquolem:irred} $R$ represents a local deformation problem if and only if $R/\mathfrak{q}$ represents a local deformation problem for each minimal prime $\mathfrak{q}$ of $R$. \end{enumerate} \end{lem} \begin{proof} Let $g \in 1+\mathrm{M}_n(\mathfrak{m}_{R_v^\square})$. Then the lift $g\rho_v^\square g^{-1}$ induces a $\mathrm{CNL}_{\mathcal{O}}$-algebra morphism $\phi_g : R_v^\square \rightarrow R_v^\square$. It is easy to see that a quotient $f : R_v^\square \rightarrow R'$ represents a local deformation problem if and only if $f \circ \phi_g = f$ for every $g\in 1+\mathrm{M}_n(\mathfrak{m}_{R_v^\square})$.
Using this reformulation, \cref{defquolem:flat} follows from the fact that the maximal ideals in $R_v^\square[1/p]$ are Zariski dense (\cite{EGA4.3}*{Proposition~10.5.3}), since $R$ is $\mathcal{O}$-flat and reduced and the residue field of any maximal ideal of $R_v^\square[1/p]$ is a finite totally ramified extension of $E$ (\cite{KW2}*{Proposition~2.2}). It similarly follows that $R$ represents a local deformation problem if $R/\mathfrak{q}$ does for each minimal prime $\mathfrak{q}$ of $R$, and the converse follows from the argument of \cite{BLGGT}*{Lemma~1.2.2}. \end{proof} \subsection{Residual characteristic $\ne p$}\label{sec:notp} We first assume that $v\nmid p$. We recall a quotient of $R_v^\square$ studied by Taylor \cite{TaylorIHES}*{Proposition~3.1} (where it is denoted $R_v^{\mathrm{loc}}/\mathcal{J}_v^{(1,\ldots,1)}$) and \cite{ThorneAdequate}*{Proposition~3.12}, which will appear when recalling results from the literature in \S\ref{sec:CMsmallRT}. \begin{prop}\label{thm:R1} Assume $\overline{\rho}_v$ is trivial. There is a quotient $R_v^1$ of $R_v^\square$ representing the following local deformation problem: for a lift $\rho$ of $\overline{\rho}_v$ to a $\mathrm{CNL}_{\mathcal{O}}$-algebra $A$, the induced $\mathrm{CNL}_{\mathcal{O}}$-algebra morphism $R_v^\square \rightarrow A$ factors through $R_v^1$ if and only if for every $\gamma\in I_v$, the characteristic polynomial of $\rho(\gamma)$ is $(X-1)^n$. The $\mathrm{CNL}_{\mathcal{O}}$-algebra $R_v^1$ is equidimensional of dimension $n^2+1$ and every generic point of $\Spec R_v^1$ has characteristic $0$. \end{prop} The construction of $R_v^1$ is easy to describe. Indeed, if $\overline{\rho}_v$ is trivial, then $\rho_v^\square|_{I_v}$ must factor through tame inertia.
Letting $X^n - a_{n-1}X^{n-1} + \cdots + (-1)^n a_0$ denote the characteristic polynomial of a generator of tame inertia, we let $R_v^1$ be the quotient of $R_v^\square$ by the ideal generated by $\{a_j - \binom{n}{j}\}_{j=0,\ldots,n-1}$. The following seems to be well known, but for lack of a concrete reference we include a proof. \begin{lem}\label{thm:R1factor} Let $A$ be a reduced $\mathrm{CNL}_{\mathcal{O}}$-algebra and let $\rho : G_v \rightarrow \GL_n(A)$ be a lift of $\overline{\rho}_v$. There is a finite extension $K/F_v$ such that the $\mathrm{CNL}_{\mathcal{O}}$-algebra morphism $R_{\overline{\rho}_v|_{G_K}}^\square \rightarrow A$ induced by $\rho|_{G_K}$ factors through $R_{\overline{\rho}_v|_{G_K}}^1$. \end{lem} \begin{proof} Assume first that $A$ is an integral domain. There is a finite extension $L/F_v$ such that the image of the wild inertia subgroup under $\rho|_{G_L}$ is trivial. Letting $t$ denote a generator of the tame inertia of $G_L$, $\Phi$ a lift of Frobenius to $G_L$, and $q$ the order of the residue field of $L$, the relation $\Phi^{-1}t \Phi = t^q$ implies that the eigenvalues of $\rho(t)$, in an algebraic closure of the fraction field of $A$, are stable under taking $q$th powers, hence are roots of unity. Passing to another finite extension $K/L$ to trivialize these eigenvalues, the image of inertia under $\rho|_{G_K}$ is unipotent. Now assume $A$ is only reduced. By the above discussion, for each minimal prime ideal $\mathfrak{q}$ of $A$, there is a finite extension $K_{\mathfrak{q}}/F_v$ such that $\rho(I_{K_{\mathfrak{q}}})$ is unipotent. Since $A$ is reduced, there is an injection \[ \GL_n(A) \hookrightarrow \prod_{\mathfrak{q}} \GL_n(A/\mathfrak{q}),\] the product being taken over all minimal prime ideals of $A$. We can then take $K$ to be any finite extension of $F_v$ containing each $K_{\mathfrak{q}}$ and such that $\overline{\rho}_v|_{G_K}$ is trivial.
\end{proof} \subsection{Residual characteristic $p$}\label{sec:p} For the remainder of this section we assume that $v|p$. \begin{lem}\label{thm:localsmooth} \begin{enumerate} \item If $H^0(G_v,\ad(\overline{\rho}_v)(1)) = 0$, then $R_v^\square$ is isomorphic to a power series ring over $\mathcal{O}$ in $n^2(1 + [F_v:\mathbb{Q}_p])$ variables. \item\label{localsmooth:det} If $p\nmid n$ and $H^0(G_v,\ad^0(\overline{\rho}_v)(1)) = 0$, then $R_v^{\square,\mu}$ is isomorphic to a power series ring over $\mathcal{O}$ in $(n^2 - 1)(1+ [F_v:\mathbb{Q}_p])$ variables. \end{enumerate} \end{lem} \begin{proof} The trace pairing on $\ad(\overline{\rho}_v)$ is perfect, and induces a perfect pairing on $\ad^0(\overline{\rho}_v)$ if $p\nmid n$. By Tate local duality, $\dim_{\mathbb{F}} H^2(G_v,\ad(\overline{\rho}_v)) = \dim_{\mathbb{F}}H^0(G_v,\ad(\overline{\rho}_v)(1))$, and $\dim_{\mathbb{F}} H^2(G_v,\ad^0(\overline{\rho}_v)) = \dim_{\mathbb{F}} H^0(G_v,\ad^0(\overline{\rho}_v)(1))$ if $p\nmid n$. The result then follows from part 1 of \cref{thm:genunivpres,thm:gendetpres}, and the local Euler--Poincar\'{e} characteristic formula. \end{proof} The following fundamental result is due to Kisin \cite{KisinPssDefRing}*{Corollary~2.7.7 and Theorem~3.3.8}. \begin{thm}\label{thm:crdefring} Fix $\lambda \in (\mathbb{Z}_+^n)^{\mathrm{Hom}(F_v,\overline{\mathbb{Q}}_p)}$ and an inertial type $\tau$ defined over $E$. There is a (possibly zero) $\mathcal{O}$-flat reduced quotient $R_v^{\lambda,\tau,\mathrm{cr}}$ of $R_v^\square$ such that an $E$-algebra morphism $x : R_v^\square[1/p] \rightarrow \overline{\mathbb{Q}}_p$ factors through $R_v^{\lambda,\tau,\mathrm{cr}}[1/p]$ if and only if $\rho_x$ is potentially crystalline of weight $\lambda$ and inertial type $\tau$. If nonzero, then $\Spec R_v^{\lambda,\tau,\mathrm{cr}}[1/p]$ is equidimensional of dimension $n^2+\frac{n(n-1)}{2}[F_v:\mathbb{Q}_p]$, and is formally smooth over $E$.
\end{thm} If $\tau = 1$, then we omit it from the notation, and just write $R_v^{\lambda,\mathrm{cr}}$ for $R_v^{\lambda,1,\mathrm{cr}}$. This theorem yields the following corollary, using the argument of \cite{EmertonGeeBM}*{\S4.3}. \begin{cor}\label{thm:crdefringdet} Assume $p\nmid n$. Fix $\lambda \in (\mathbb{Z}_+^n)^{\mathrm{Hom}(F_v,\overline{\mathbb{Q}}_p)}$ and an inertial type $\tau$ defined over $E$. There is a (possibly zero) $\mathcal{O}$-flat reduced quotient $R_v^{\lambda,\tau,\mathrm{cr},\mu}$ of $R_v^{\square,\mu}$ such that an $E$-algebra morphism $x : R_v^{\square,\mu}[1/p] \rightarrow \overline{\mathbb{Q}}_p$ factors through $R_v^{\lambda,\tau,\mathrm{cr},\mu}[1/p]$ if and only if $\rho_x$ is potentially crystalline of weight $\lambda$ and inertial type $\tau$. If nonzero, then $\Spec R_v^{\lambda,\tau,\mathrm{cr},\mu}[1/p]$ is equidimensional of dimension $n^2-1 + \frac{n(n-1)}{2}[F_v:\mathbb{Q}_p]$ and is formally smooth over $E$. \end{cor} In \cite{EmertonGeeBM}*{Lemma~4.3.1} it is assumed that $p>n$, but this is only used to guarantee that a character, denoted $\theta$ there, has an $n$th root. For this it suffices that for any positive integer $k$, the binomial coefficient $\binom{1/n}{k}$ is a $p$-adic integer, which holds whenever $p\nmid n$. Note that knowing whether or not the above rings are nonzero amounts to showing there is a potentially semistable or potentially crystalline lift with the required weight and inertial type (and determinant). In general, this seems to be a difficult problem. We recall the notion of a potentially diagonalizable representation from \cite{BLGGT}.
We say a lift \[ \rho : G_v \longrightarrow \GL_n(\mathcal{O}_{\overline{\mathbb{Q}}_p}) \] of $\overline{\rho}_v\otimes \overline{\mathbb{F}}_p$ is \emph{potentially diagonalizable of weight} $\lambda \in (\mathbb{Z}_+^n)^{\mathrm{Hom}(F_v,\overline{\mathbb{Q}}_p)}$ if there is a finite extension $K/F_v$ and continuous characters $\chi_1,\ldots,\chi_n : G_K \longrightarrow \mathcal{O}_{\overline{\mathbb{Q}}_p}^\times$, such that the following hold: \begin{itemize} \item $\rho|_{G_K}$ and $\chi_1\oplus\cdots\oplus \chi_n$ are both crystalline of weight $\lambda_K$, where $\lambda_K \in (\mathbb{Z}_+^n)^{\mathrm{Hom}(K,\overline{\mathbb{Q}}_p)}$ is given by $\lambda_{K,\sigma'} = \lambda_\sigma$ if $\sigma' : K \hookrightarrow \overline{\mathbb{Q}}_p$ extends $\sigma : F_v \hookrightarrow \overline{\mathbb{Q}}_p$; \item the $\mathcal{O}_{\overline{\mathbb{Q}}_p}$-points of $\Spec R_{\overline{\rho}_v|_{G_K}}^{\lambda,\mathrm{cr}}$ determined by $\rho|_{G_K}$ and $\chi_1\oplus \cdots \oplus \chi_n$ lie on the same irreducible component. \end{itemize} We say $\rho$ is \emph{potentially diagonalizable of regular weight} if there is $\lambda \in (\mathbb{Z}_+^n)^{\mathrm{Hom}(F_v,\overline{\mathbb{Q}}_p)}$ such that $\rho$ is potentially diagonalizable of weight $\lambda$. One can also define potentially diagonalizable lifts in the nonregular weight case, but we will not have use for them here. We also note that in \cite{BLGGT}, potentially diagonalizable is defined using irreducible components of $\Spec R_{\overline{\rho}_v|_{G_K}}^{\lambda,\mathrm{cr}}\otimes \overline{\mathbb{Q}}_p$, but it is equivalent to use irreducible components of $\Spec R_{\overline{\rho}_v|_{G_K}}^{\lambda,\mathrm{cr}}$, postcomposing $\chi_1\oplus\cdots\oplus\chi_n$ by an element of $\mathrm{Gal}(\overline{\mathbb{Q}}_p/E)$ if necessary.
For given $\lambda \in (\mathbb{Z}_+^n)^{\mathrm{Hom}(F_v,\overline{\mathbb{Q}}_p)}$ and inertial type $\tau$ defined over $E$, we will call an $\mathcal{O}_{\overline{\mathbb{Q}}_p}$-point $x$ of $\Spec R_v^{\lambda,\tau,\mathrm{cr}}$, or of $\Spec R_v^{\lambda,\tau,\mathrm{cr},\mu}$, \emph{potentially diagonalizable} if $\rho_x$ is. We will call an irreducible component of $\Spec R_v^{\lambda,\tau,\mathrm{cr}}$, or of $\Spec R_v^{\lambda,\tau,\mathrm{cr},\mu}$, \emph{potentially diagonalizable} if it contains a potentially diagonalizable point. Finally, we note that by \cite{BLGGT}*{Lemma~1.4.1}, the notion of being potentially diagonalizable depends only on $\rho \otimes \overline{\mathbb{Q}}_p$. Hence, it makes sense to talk about a $\overline{\mathbb{Q}}_p$-valued representation as being potentially diagonalizable without specifying an invariant lattice. This in particular applies to the restrictions to $G_v$ of the automorphic Galois representations of \S\ref{sec:AutGalRep}. It is not currently known whether or not every potentially crystalline representation is potentially diagonalizable, nor whether or not every residual representation has a potentially diagonalizable lift of regular weight. The latter question has been investigated by Gee--Herzig--Liu--Savitt \cite{GHLS}. In particular, in dimension two such a lift always exists, and we will use the following lemma in \S\ref{sec:Hilb}. \begin{lem}\label{thm:BTlift} Assume that $n = 2$, and that $\mu$ is de~Rham. Then $\overline{\rho}_v$ admits a regular weight potentially diagonalizable lift with determinant $\mu$. \end{lem} \begin{proof} First assume $\overline{\rho}_v$ is peu ramifi\'{e}e, in the sense of \cite{GHLS}*{Definition~2.1.3}. Fix $\lambda \in (\mathbb{Z}_+^2)^{\mathrm{Hom}(F_v,\overline{\mathbb{Q}}_p)}$ such that $\lambda_{\sigma,1} + 1 + \lambda_{\sigma,2} = \mathrm{HT}_\sigma(\mu)$ for each $\sigma\in \mathrm{Hom}(F_v,\overline{\mathbb{Q}}_p)$.
By \cite{GHLS}*{Corollary~2.1.11}, $\overline{\rho}_v$ has a potentially diagonalizable lift $\rho : G_v \longrightarrow \GL_2(\mathcal{O}_{\overline{\mathbb{Q}}_p})$ of weight $\lambda$. Enlarging $E$ if necessary, we can assume $\det\rho$ takes values in $\mathcal{O}^\times$, and $\mu(\det\rho)^{-1} : G_v \rightarrow 1+\mathfrak{m}_{\mathcal{O}}$ is finitely ramified by choice of $\lambda$. Since $p>2$, there is a continuous finitely ramified character $\eta : G_v \rightarrow 1+\mathfrak{m}_{\mathcal{O}}$ such that $\eta^2 = \mu(\det\rho)^{-1}$. Then $\rho\otimes\eta$ is a potentially diagonalizable lift of $\overline{\rho}_v$ of weight $\lambda$ and determinant $\mu$. It only remains to treat the case (see \cite{GHLS}*{Examples~2.1.4}) \[ \overline{\rho}_v \cong \begin{pmatrix} \overline{\chi} & c \\ & \overline{\chi}^{-1}\overline{\mu} \end{pmatrix} \] with $\overline{\chi}^2\overline{\mu}^{-1} = \overline{\varepsilon}$ and $c$ a tr\`{e}s ramifi\'{e}e cocycle. The result then follows from the argument of \cite{BLGGOrdLifts1}*{Lemma~6.1.6}. We sketch the details. Twisting, we may assume that $\mathrm{HT}_\sigma(\mu) > 1$ for each $\sigma : F_v\hookrightarrow \overline{\mathbb{Q}}_p$. Then any lift of the form \[ \rho \cong \begin{pmatrix} \chi & \ast \\ & \chi^{-1}\mu \end{pmatrix} \] with $\chi$ a finitely ramified character lifting $\overline{\chi}$ is potentially crystalline, hence potentially diagonalizable by \cite{BLGGT}*{Lemma~1.4.3}. Choose some finitely ramified lift $\chi$ of $\overline{\chi}$ and set $\psi:=\chi^2\mu^{-1}\varepsilon^{-1}$. Let $L$ be the line in $H^1(G_v,\mathbb{F}(\overline{\varepsilon}))$ spanned by the cohomology class of $c$, and let $H$ be the hyperplane in $H^1(G_v,\mathbb{F})$ annihilated by $L$ under local Tate duality. Fix a uniformizer $\varpi$ in $\mathcal{O}$.
Then there is $m\ge 1$ such that $\psi^{-1} \mod {\varpi^{m+1}} = 1 + \varpi^m\overline{\alpha}$ with $\overline{\alpha} : G_v \rightarrow \mathbb{F}$ a nontrivial homomorphism. An argument using local Tate duality, as in Case~2 of the proof of \cite{BLGGOrdLifts1}*{Lemma~6.1.6}, shows that it suffices to prove $\overline{\alpha} \in H$. We assume otherwise. Since $\overline{\rho}_v$ is tr\`{e}s ramifi\'{e}e, $H$ does not contain the unramified line, and we can find $\overline{a} \in \mathbb{F}^\times$ such that $\overline{\alpha} + u_{\overline{a}} \in H$, where $u_{\overline{a}} : G_v \rightarrow \mathbb{F}$ is the unramified homomorphism that sends $\mathrm{Frob}_v$ to $\overline{a}$. Choosing some $b\in \mathcal{O}^\times$ such that $\overline{a} = 2b \mod \varpi$, and replacing $\chi$ by $\chi$ times the unramified character that sends $\mathrm{Frob}_v$ to $(1+\varpi^m b)^{-1}$ yields $\overline{\alpha} \in H$, finishing the proof. \end{proof} Let $\rho : G_v \longrightarrow \GL_n(\overline{\mathbb{Q}}_p)$ be a continuous representation. We say $\rho$ is \emph{ordinary of weight} $\lambda \in (\mathbb{Z}_+^n)^{\mathrm{Hom}(F_v,\overline{\mathbb{Q}}_p)}$ if there is a $G_v$-stable decreasing filtration \[ \overline{\mathbb{Q}}_p^n = V_1 \supset V_2 \supset \cdots \supset V_{n+1} = \{0\} \] with one dimensional graded pieces and an open subgroup $U$ of $\mathcal{O}_{F_v}^\times$ such that for each $1\le j \le n$, letting $\psi_j : G_v \rightarrow \overline{\mathbb{Q}}_p^\times$ denote the character giving the $G_v$-action on $V_j/V_{j+1}$, we have \[ \psi_j\circ\mathrm{Art}_{F_v}(x) = \prod_{\sigma: F_v \hookrightarrow \overline{\mathbb{Q}}_p} \sigma (x)^{j - n -\lambda_{\sigma,j}} \] for all $x \in U$. We say $\rho$ is \emph{ordinary} of \emph{regular weight} if there is $\lambda\in (\mathbb{Z}_+^n)^{\mathrm{Hom}(F_v,\overline{\mathbb{Q}}_p)}$ such that it is ordinary of weight $\lambda$. The following is a result of Geraghty \cite{GeraghtyOrdinary}*{Lemma~3.3.3}.
\begin{thm}\label{thm:ordring} Let $\lambda \in (\mathbb{Z}_+^n)^{\mathrm{Hom}(F_v,\overline{\mathbb{Q}}_p)}$ and let $\tau$ be an inertial type defined over $E$. There is a (possibly zero) $\mathcal{O}$-flat reduced quotient $R_v^{\lambda,\tau,\mathrm{ord}}$ of $R_v^\square$ such that an $E$-algebra morphism $x : R_v^\square[1/p] \rightarrow \overline{\mathbb{Q}}_p$ factors through $R_v^{\lambda,\tau,\mathrm{ord}}[1/p]$ if and only if $\rho_x$ is ordinary of weight $\lambda$ and inertial type $\tau$. If nonzero, then $\Spec R_v^{\lambda,\tau,\mathrm{ord}}[1/p]$ is equidimensional of dimension $n^2 + \frac{n(n-1)}{2}[F_v:\mathbb{Q}_p]$ and admits an open dense subscheme that is formally smooth over $E$. \end{thm} Note that in order for $R_v^{\lambda,\tau,\mathrm{ord}}$ to be nonzero, $\tau$ must be a direct sum of finite order characters of $I_v$. \section{Automorphic Galois representations}\label{sec:AutGalRep} We recall the notion of regular algebraic polarized cuspidal automorphic representations and their associated Galois representations. Throughout this section, $F$ is assumed to be either a CM or totally real number field, with maximal totally real subfield $F^+$. Let $c \in G_{F^+}$ be a choice of complex conjugation. \subsection{Polarized automorphic Galois representations}\label{sec:pol} Following \cite{BLGGT}*{\S2.1}, we say that a pair $(\pi,\chi)$ is a \emph{polarized automorphic representation} of $\GL_n(\mathbb{A}_F)$ if \begin{itemize} \item $\pi$ is an automorphic representation of $\GL_n(\mathbb{A}_F)$; \item $\chi : (F^+)^\times \backslash \mathbb{A}_{F^+}^\times \rightarrow \mathbb{C}^\times$ is a continuous character with $\chi_v(-1)$ independent of $v | \infty$; \item $\pi^c \cong \pi^\vee \otimes (\chi\circ \mathrm{Nm}_{F/F^+}\circ \det)$. \end{itemize} When $F$ is totally real, the requirement that $\chi_v(-1)$ be independent of $v|\infty$ has been shown to be redundant by Patrikis \cite{PatrikisSign}*{Theorem~2.1}.
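As an orienting example (not needed later), consider $n = 1$: an automorphic representation of $\GL_1(\mathbb{A}_F)$ is a Hecke character $\psi$ of $F$, and the polarization condition unwinds as follows.

```latex
% For n = 1 we have \pi = \psi, \pi^\vee = \psi^{-1}, and \det = \mathrm{id},
% so the condition \pi^c \cong \pi^\vee \otimes (\chi \circ \mathrm{Nm}_{F/F^+} \circ \det) reads
\[
\psi^c = \psi^{-1}\cdot(\chi \circ \mathrm{Nm}_{F/F^+}),
\qquad \text{i.e.} \qquad
\psi \cdot \psi^c = \chi \circ \mathrm{Nm}_{F/F^+} .
\]
```

Thus, for $n = 1$ a polarization is precisely a descent of $\psi\cdot\psi^c$ to $\mathbb{A}_{F^+}^\times$.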
Note that when $F$ is CM, if $(\pi,\chi)$ is a polarized automorphic representation of $\GL_n(\mathbb{A}_F)$, then so is $(\pi,\chi\delta_{F/F^+})$. We do not specify a sign convention in this generality, unlike in \cite{BLGGT}*{\S2.1}, but will in the regular algebraic case below. Our convention will sometimes differ from that of \textit{loc. cit.}, but it ensures that the character $\chi_\iota \varepsilon^{1-n}$ of $G_{F^+}$ in \cref{thm:autgalrep} below is totally odd, which is more convenient for us (see \cref{thm:Gdhoms,thm:polringdim}). We say that an automorphic representation $\pi$ of $\GL_n(\mathbb{A}_F)$ is \emph{polarizable} if there is a character $\chi$ such that $(\pi,\chi)$ is a polarized automorphic representation. If $F$ is CM and $(\pi,\delta_{F/F^+}^n)$ is polarized, then we say that $\pi$ is \emph{conjugate self-dual}. An automorphic representation $\pi$ of $\GL_n(\mathbb{A}_F)$ is called \emph{regular algebraic} if $\pi_\infty$ has the same infinitesimal character as an irreducible algebraic representation of $\mathrm{Res}_{F/\mathbb{Q}} \GL_n$. If $\lambda = (\lambda_\sigma) \in (\mathbb{Z}_+^n)^{\mathrm{Hom}(F,\mathbb{C})}$, then we let $\xi_\lambda$ denote the irreducible algebraic representation of $\prod_{\sigma}\GL_n$ which is the tensor product over $\sigma \in \mathrm{Hom}(F,\mathbb{C})$ of the irreducible algebraic representations with highest weight $\lambda_\sigma$. We say a regular algebraic automorphic representation $\pi$ of $\GL_n(\mathbb{A}_F)$ has \emph{weight} $\lambda \in (\mathbb{Z}_+^n)^{\mathrm{Hom}(F,\mathbb{C})}$ if $\pi_\infty$ has the same infinitesimal character as $\xi_\lambda^\vee$. We will say a polarized automorphic representation $(\pi,\chi)$ of $\GL_n(\mathbb{A}_F)$ is cuspidal, respectively regular algebraic, if $\pi$ is.
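To fix ideas, we recall the standard dictionary in the classical case; we state it only as an example, with conventions as in \cite{ClozelRegAlg}, and it is not used in what follows. For $F = \mathbb{Q}$ and $n = 2$, a regular algebraic cuspidal $\pi$ of weight $\lambda = (\lambda_1,\lambda_2)$ corresponds, up to twist, to a classical cuspidal eigenform of weight

```latex
\[
k \;=\; \lambda_1 - \lambda_2 + 2 \;\ge\; 2 ,
\]
% so, up to twist, the classical weight k eigenforms arise from \lambda = (k-2, 0).
```

In particular, regularity ($\lambda_1 \ge \lambda_2$, with distinct Hodge--Tate weights) corresponds to $k \ge 2$.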
In this case $\chi$ is necessarily an algebraic character, and if $F$ is CM, we fix the sign of $\chi_v(-1)$ for $v | \infty$ as follows. Letting $q\in \mathbb{Z}$ be the unique integer such that $\chi \lvert \cdot \rvert_{\mathbb{A}_{F^+}}^q$ has finite order, we require $\chi_v(-1) = (-1)^{n+q}$ for each $v|\infty$ in $F^+$. Let $\pi$ be a regular algebraic polarizable cuspidal automorphic representation of $\GL_n(\mathbb{A}_F)$ of weight $\lambda$, and let $\iota : \overline{\mathbb{Q}}_p \xrightarrow{\sim} \mathbb{C}$ be an isomorphism. Let $v|p$ in $F$ and let $\varpi_v$ be a choice of uniformizer for $F_v$. For any integer $a\ge 1$, let $\mathrm{Iw}(v^{a,a})$ denote the subgroup of $\GL_n(\mathcal{O}_{F_v})$ of matrices that reduce to an upper triangular matrix modulo $\varpi_v^a$. For each $1\le j \le n$, the space $(\iota^{-1}\pi_v)^{\mathrm{Iw}(v^{a,a})}$ has an action of the Hecke operator \[ U_{\varpi_v}^{(j)} = \left[ \mathrm{Iw}(v^{a,a}) \begin{pmatrix} \varpi_v 1_j & \\ & 1_{n-j} \end{pmatrix} \mathrm{Iw}(v^{a,a}) \right], \] and these operators commute with one another. We define modified Hecke operators \[ U_{\lambda,\varpi_v}^{(j)} = \Big(\prod_{\sigma : F_v \hookrightarrow \overline{\mathbb{Q}}_p} \prod_{i=1}^j \sigma(\varpi_v)^{-\lambda_{\iota\sigma,n-i+1}}\Big) U_{\varpi_v}^{(j)}\] for each $1\le j \le n$. We say that $\pi$ is $\iota$-\emph{ordinary} if for each $v|p$, there is an integer $a \ge 1$ and a nonzero vector in $(\iota^{-1}\pi_v)^{\mathrm{Iw}(v^{a,a})}$ that is an eigenvector for each of $U_{\lambda,\varpi_v}^{(1)},\ldots,U_{\lambda,\varpi_v}^{(n)}$ with eigenvalues that are $p$-adic units. This definition does not depend on the choice of $\varpi_v$. We say a polarized regular algebraic cuspidal automorphic representation $(\pi,\chi)$ of $\GL_n(\mathbb{A}_F)$ is $\iota$-\emph{ordinary} if $\pi$ is. The following theorem is due to the work of many people.
We refer the reader to \cite{BLGGT}*{Theorem~2.1.1} and the references contained there for \cref{autgalrep:pol,autgalrep:pss,autgalrep:locglob} (noting that the assumption of an Iwahori fixed vector in part (4) of \cite{BLGGT}*{Theorem~2.1.1} can be removed by the main result of \cite{Caraianilp}), and to \cite{ThorneReducible}*{Theorem~2.4} for \cref{autgalrep:ord}. \begin{thm}\label{thm:autgalrep} Let $(\pi,\chi)$ be a regular algebraic, polarized, cuspidal automorphic representation of $\GL_n(\mathbb{A}_F)$, of weight $\lambda \in (\mathbb{Z}_+^n)^{\mathrm{Hom}(F,\mathbb{C})}$. Fix an isomorphism $\iota : \overline{\mathbb{Q}}_p \xrightarrow{\sim} \mathbb{C}$, and for each $v|p$ in $F$, let $\lambda_v = (\lambda_{v,\sigma}) \in (\mathbb{Z}_+^n)^{\mathrm{Hom}(F_v,\overline{\mathbb{Q}}_p)}$ be given by $\lambda_{v,\sigma} = \lambda_{\iota\sigma}$. There is a continuous semisimple representation \[ \rho_{\pi,\iota} : G_F \longrightarrow \GL_n(\overline{\mathbb{Q}}_p) \] satisfying the following properties. \begin{enumerate}[ref=\arabic*] \item\label{autgalrep:pol} There is a perfect symmetric pairing $\langle \cdot, \cdot \rangle$ on $\overline{\mathbb{Q}}_p^n$ such that for any $a,b\in \overline{\mathbb{Q}}_p^n$ and $\gamma\in G_F$, \[ \langle \rho_{\pi,\iota}(\gamma) a, \rho_{\pi,\iota}(c\gamma c) b \rangle = (\chi_\iota\varepsilon^{1-n})(\gamma)\langle a, b \rangle. \] \item\label{autgalrep:pss} For all $v|p$, $\rho_{\pi,\iota}|_{G_v}$ is potentially semistable of weight $\lambda_v$. \item\label{autgalrep:locglob} For any finite place $v$, \[\iota\mathrm{WD}(\rho_{\pi,\iota}|_{G_v})^{\mathrm{F}\mbox{-}\mathrm{ss}} \cong \rec_{F_v}(\pi_v\otimes \lvert\cdot\rvert^{\frac{1-n}{2}}). \] \item\label{autgalrep:ord} If $(\pi,\chi)$ is $\iota$-ordinary, then for each $v|p$, $\rho_{\pi,\iota}|_{G_v}$ is ordinary of weight $\lambda_v$.
\end{enumerate} \end{thm} In \cref{autgalrep:locglob}, $\rec_{F_v}$ is the local Langlands reciprocity map that takes irreducible admissible representations of $\GL_n(F_v)$ to Frobenius semi-simple Weil--Deligne representations, normalized as in \cite{HarrisTaylor} and \cite{HenniartLL}, and $\mathrm{WD}(\rho_{\pi,\iota}|_{G_v})$ is the $\overline{\mathbb{Q}}_p$-Weil--Deligne representation associated to $\rho_{\pi,\iota}|_{G_v}$. Since $G_{F,S}$ is compact, $\rho_{\pi,\iota}(G_{F,S})$ stabilizes a lattice. So conjugating $\rho_{\pi,\iota}$ if necessary, we may assume it takes values in $\GL_n(\mathcal{O}_{\overline{\mathbb{Q}}_p})$, and denote by \[ \overline{\rho}_{\pi,\iota} : G_{F,S} \longrightarrow \GL_n(\overline{\mathbb{F}}_p)\] the semisimplification of its reduction modulo $\mathfrak{m}_{\overline{\mathbb{Q}}_p}$, which is independent of the choice of lattice. \subsection{Hilbert modular forms}\label{sec:HilbGalReps} If $n = 2$ and $F$ is totally real, then any automorphic representation $\pi$ of $\GL_2(\mathbb{A}_F)$ is polarizable. More specifically, if $\chi$ denotes the central character of $\pi$, then $(\pi,\chi)$ is polarized. If $\pi$ is regular algebraic and cuspidal, then $\pi$ is a twist of the automorphic representation generated by a Hilbert modular cusp form (see \cite{ClozelRegAlg}*{\S 1.2.3}). \section{Polarized deformation rings and automorphic points}\label{sec:CM} In this section, we prove our main theorems on polarized deformation rings for Galois representations of CM fields. Before proceeding, we fix the assumptions and notation that will be used throughout this section. \subsection{Setup}\label{sec:polarsetup} We assume $p>2$, and fix an isomorphism $\iota : \overline{\mathbb{Q}}_p \xrightarrow{\sim} \mathbb{C}$. We assume that our fixed number field $F$ is CM, and denote by $F^+$ its maximal totally real subfield.
Fix a finite set of places $S$ of $F^+$ containing all places above $p$, and let $F_S$ be the maximal extension of $F$ unramified outside of the places of $F$ above $S$. Note that $F_S/F^+$ is Galois, and we set $G_{F^+,S} = \mathrm{Gal}(F_S/F^+)$ and $G_{F,S} = \mathrm{Gal}(F_S/F)$. We fix a choice of complex conjugation $c\in G_{F^+}$. We fix a continuous absolutely irreducible representation \[ \overline{\rho} : G_{F,S} \longrightarrow \GL_n(\mathbb{F}) \] and a continuous totally odd character $\mu : G_{F^+,S} \rightarrow\mathcal{O}^\times$ and assume there is a perfect symmetric pairing $\langle \cdot, \cdot \rangle$ on $\mathbb{F}^n$ such that \[ \langle \overline{\rho}(\gamma) a, \overline{\rho}(c\gamma c)b \rangle = \overline{\mu}(\gamma) \langle a, b \rangle \] for all $\gamma \in G_{F,S}$ and $a,b \in \mathbb{F}^n$, where $\overline{\mu} : G_{F^+,S} \rightarrow \mathbb{F}^\times$ is the reduction of $\mu$ modulo $\mathfrak{m}_{\mathcal{O}}$. Let $R^{\mathrm{pol}}$ be the universal $\mu$-polarized deformation ring for $\overline{\rho}$ as in \S\ref{sec:polardefring}. Since the pairing $\langle \cdot, \cdot \rangle$ is symmetric, we can and do fix an extension \[ \overline{r} : G_{F^+,S} \longrightarrow \mathcal{G}_n(\mathbb{F}) \] of $\overline{\rho}$ with $\nu \circ \overline{r} = \overline{\mu}$ as in \cref{thm:Gdhoms}. We also view $R^{\mathrm{pol}}$ as the universal $\mu$-polarized deformation ring of $\overline{r}$. \begin{lem}\label{thm:polringdim} There is a presentation \[ R^{\mathrm{pol}} \cong \mathcal{O}[[x_1,\ldots,x_g]]/(f_1,\ldots,f_k)\] with $g-k \ge \frac{n(n+1)}{2}[F^+:\mathbb{Q}]$. In particular, every irreducible component of $\Spec R^{\mathrm{pol}}$ has dimension at least $1+\frac{n(n+1)}{2}[F^+:\mathbb{Q}]$.
\end{lem} \begin{proof} The result follows from \cref{thm:genpolpres} and the global Euler--Poincar\'{e} characteristic formula, using that $H^0(G_{F^+,S},\ad(\overline{r})) = 0$ since $\overline{\rho}$ is absolutely irreducible, and that $\dim_{\mathbb{F}} \ad(\overline{r})^{c_v = 1} = \frac{n(n-1)}{2}$ for each $v|\infty$ since $\mu$ is totally odd (see \cite{CHT}*{Lemma~2.1.3}); note that $n^2 - \frac{n(n-1)}{2} = \frac{n(n+1)}{2}$. \end{proof} \begin{defn}\label{def:autpoint} Let $R$ be a quotient of $R^{\mathrm{pol}}$, let $x \in \Spec R(\overline{\mathbb{Q}}_p)$, and let $\rho_x$ be the pushforward of the universal $\mu$-polarized deformation via $R^{\mathrm{pol}} \twoheadrightarrow R \xrightarrow{x} \overline{\mathbb{Q}}_p$. We call $x$ an \emph{automorphic point} if there is a regular algebraic polarized cuspidal automorphic representation $(\pi,\chi)$ of $\GL_n(\mathbb{A}_F)$ such that $\rho_x \cong \rho_{\pi,\iota}$ and $\mu = \chi_\iota \varepsilon^{1-n}$. Given a finite extension $L/F$ of CM fields, we say $x$ is an $L$-\emph{potentially automorphic point} if there is a regular algebraic polarized cuspidal automorphic representation $(\pi,\chi)$ of $\GL_n(\mathbb{A}_L)$ such that $\rho_x|_{G_L} \cong \rho_{\pi,\iota}$ and $\mu|_{G_{L^+}} = \chi_\iota \varepsilon^{1-n}$. In either case, we say $x$ is $\iota$-\emph{ordinary} if $\pi$ is, and that $x$ has \emph{level prime to} $p$, resp.\ \emph{level potentially prime to} $p$, if for all $v|p$ the local representation $\pi_v$ is unramified, resp.\ becomes unramified after a finite base change. If $X^{\mathrm{rig}}$ is the rigid analytic generic fibre of $\Spf R$, and $x^{\mathrm{rig}} \in X^{\mathrm{rig}}$ is the point corresponding to $\ker(x) \subset R[1/p]$, then we say $x^{\mathrm{rig}}$ is an \emph{automorphic point} if $x$ is, and if this is the case we further say $x^{\mathrm{rig}}$ has \emph{level prime to} $p$ if $x$ does.
\end{defn} \subsection{Small $R = \mathbb{T}$ theorems from the literature}\label{sec:CMsmallRT} In this subsection we recall the small $R = \mathbb{T}$ theorems that are used in the proofs of our main theorems. Before stating them, we recall some terminology of \cite{CHT} for the deformation theory of $\overline{r}$. Let $\widetilde{S}$ be a finite set of finite places of $F$ such that every $w\in \widetilde{S}$ is split over some $v\in S$, and $\widetilde{S}$ contains at most one place above any $v\in S$. For each $w\in \widetilde{S}$, let $R_w$ be a quotient of $R_w^\square := R_{\overline{\rho}|_{G_w}}^\square$ that represents a local deformation problem. By \cref{thm:defquolem}, the rings $R_w$ in \cref{thm:PDsmallRT,thm:ordsmallRT,thm:potsmallRT} below represent local deformation problems. We refer to the tuple \[ \mathcal{S} = (F/F^+, S, \widetilde{S}, \mathcal{O}, \overline{r}, \mu, \{R_w\}_{w\in \widetilde{S}}) \] as a \emph{global} $\mathcal{G}_n$-\emph{deformation datum}. This differs from the definition in \cite{CHT}*{\S2.3} in that our ramification set $S$ may contain places that do not split in $F/F^+$, and $\widetilde{S}$ is not required to contain a place above every $v\in S$. As the results in \cite{ChenevierFern} make no assumption on the splitting behaviour in $F$ of the places in $S \smallsetminus \{v|p\}$, we also wish to make no such assumption. A \emph{type} $\mathcal{S}$ deformation of $\overline{r}$ is a deformation $r : G_{F^+,S} \rightarrow \mathcal{G}_n(A)$ with $A$ a $\CNL_{\mathcal{O}}$-algebra such that for any (equivalently, for one) lift $r$ in its equivalence class \begin{itemize} \item $\nu \circ r = \mu$, and \item for each $w\in \widetilde{S}$, the $\CNL_{\mathcal{O}}$-morphism $R_w^\square \rightarrow A$ induced by the lift $r|_{G_w}$ of $\overline{r}|_{G_w} = \overline{\rho}|_{G_w}$ factors through $R_w$.
\end{itemize} We let $D_{\mathcal{S}}$ be the set-valued functor on $\CNL_{\mathcal{O}}$ that takes a $\CNL_{\mathcal{O}}$-algebra $A$ to the set of deformations of type $\mathcal{S}$. It is easy to see that $D_{\mathcal{S}}$ is represented by a quotient $R_{\mathcal{S}}$ of $R^{\mathrm{pol}}$. Indeed, let $R_{\widetilde{S}}^\square = \widehat{\otimes}_{w\in \widetilde{S}} R_w^\square$ and $R_{\mathcal{S}}^{\mathrm{loc}} = \widehat{\otimes}_{w\in \widetilde{S}} R_w$, where the completed tensor products are taken over $\mathcal{O}$. A choice of lift $r$ in the equivalence class of the universal $\mu$-polarized deformation of $\overline{r}$ determines a $\CNL_{\mathcal{O}}$-algebra morphism $R_{\widetilde{S}}^\square \rightarrow R^{\mathrm{pol}}$ by $r \mapsto \{r|_{G_w}\}_{w\in\widetilde{S}}$. This $R_{\widetilde{S}}^\square$-algebra structure on $R^{\mathrm{pol}}$ may depend on the choice of lift $r$, but it is canonical up to $\CNL_{\mathcal{O}}$-automorphisms of $R_{\widetilde{S}}^\square$. We can then define \[ R_{\mathcal{S}} := R_{\mathcal{S}}^{\mathrm{loc}} \otimes_{R_{\widetilde{S}}^\square} R^{\mathrm{pol}}.\] We call $R_{\mathcal{S}}$ the \emph{universal type} $\mathcal{S}$ \emph{deformation ring}, and note that it has an $R_{\mathcal{S}}^{\mathrm{loc}}$-algebra structure that is canonical up to $\CNL_{\mathcal{O}}$-automorphisms of $R_{\mathcal{S}}^{\mathrm{loc}}$. The following lemma follows immediately from the construction of $R_{\mathcal{S}}$. \begin{lem}\label{thm:typeSquotient} Let $\widetilde{S}' \subseteq \widetilde{S}$, and let $\mathcal{S}'$ be the global $\mathcal{G}_n$-deformation datum \[ \mathcal{S}' := (F/F^+, S, \widetilde{S}', \mathcal{O}, \overline{r}, \mu, \{R_w\}_{w\in \widetilde{S}'}). \] Let $T = \widetilde{S} \smallsetminus \widetilde{S}'$.
There is a canonical isomorphism \[ R_{\mathcal{S}} \cong R_{\mathcal{S}'} \otimes_{(\widehat{\otimes}_{w\in T} R_w^\square)} (\widehat{\otimes}_{w\in T} R_w).\] In particular, if $R_w = R_w^\square$ for all $w\in T$, then there is a canonical isomorphism $R_{\mathcal{S}} \cong R_{\mathcal{S}'}$. \end{lem} Before proceeding, we introduce the conditions on the residual representation that appear as assumptions in the small $R = \mathbb{T}$ theorems we quote. We first recall the definition of an adequate subgroup of $\GL_n(\overline{\mathbb{F}})$ from \cite{GHTadequateIM}*{\S1}. \begin{defn}\label{def:GLadequate} A subgroup $\Gamma$ of $\GL_n(\overline{\mathbb{F}})$ is \emph{adequate} if the following hold: \begin{enumerate} \item $H^1(\Gamma,\overline{\mathbb{F}}) = 0$ and $H^1(\Gamma,\mathfrak{gl}_n(\overline{\mathbb{F}})/\mathfrak{z}) = 0$, where $\mathfrak{z}$ is the centre of $\mathfrak{gl}_n(\overline{\mathbb{F}})$; \item\label{GLadequate:End} $\mathrm{End}_{\overline{\mathbb{F}}}(\overline{\mathbb{F}}^n)$ is spanned by the semisimple elements in $\Gamma$. \end{enumerate} \end{defn} This is slightly more general than \cite{ThorneAdequate}*{Definition~2.3}, as it allows $p|n$. However, if $p\nmid n$ then the two definitions are equivalent. We note that \cref{GLadequate:End} implies that $\Gamma$ acts irreducibly on $\overline{\mathbb{F}}^n$. The following partial converse is a theorem of Guralnick--Herzig--Taylor--Thorne \cite{ThorneAdequate}*{Theorem~A.9}. \begin{thm}\label{thm:adequate} Let $\Gamma$ be a subgroup of $\GL_n(\overline{\mathbb{F}})$ that acts absolutely irreducibly on $\overline{\mathbb{F}}^n$. Let $\Gamma^0$ be the subgroup of $\Gamma$ generated by elements of $p$-power order. Let $d\ge 1$ be the maximal dimension of an irreducible $\Gamma^0$-submodule of $\overline{\mathbb{F}}^n$. If $p > 2(d+1)$, then $\Gamma$ is adequate and $p\nmid n$. \end{thm} We now state the small $R = \mathbb{T}$ theorems that we use.
The first is due to Barnet-Lamb--Gee--Geraghty--Taylor \cite{BLGGT}, with improvements by Barnet-Lamb--Gee--Geraghty \cite{BLGGU2}*{Appendix~A} and Dieulefait--Gee \cite{DieulefaitSym5}*{Appendix~B}. \begin{thm}\label{thm:PDsmallRT} Assume that $p\nmid 2n$ and that every $v|p$ in $F^+$ splits in $F$. For each $v|p$ in $F^+$, fix a choice of place $\tilde{v}$ of $F$ above $v$, and set $\widetilde{S}_p = \{\tilde{v}\}_{v|p \text{ in }F^+}$. Assume further: \begin{ass} \item $\mu$ is de~Rham. \item\label{PDsmallRT:PD} $\overline{\rho} \otimes \overline{\mathbb{F}}_p \cong \overline{\rho}_{\pi,\iota}$ and $\overline{\mu} = \chi_\iota\varepsilon^{1-n} \bmod {\mathfrak{m}_{\overline{\mathbb{Q}}_p}}$, where $(\pi,\chi)$ is a regular algebraic polarized cuspidal automorphic representation of $\GL_n(\mathbb{A}_F)$ such that $\rho_{\pi,\iota}|_{G_{\tilde{v}}}$ is potentially diagonalizable for each $\tilde{v} \in \widetilde{S}_p$. \item\label{PDsmallRT:adequate} $\overline{\rho}(G_{F(\zeta_p)})$ is adequate and $\zeta_p \notin F$. \end{ass} For each $\tilde{v} \in \widetilde{S}_p$, fix $\lambda_{\tilde{v}} \in (\mathbb{Z}_+^n)^{\mathrm{Hom}(F_{\tilde{v}},\overline{\mathbb{Q}}_p)}$ and an inertial type $\tau_{\tilde{v}}$ defined over $E$, and let $R_{\tilde{v}}$ be a quotient of $R_{\tilde{v}}^{\lambda_{\tilde{v}},\tau_{\tilde{v}},\mathrm{cr}}$ corresponding to a union of potentially diagonalizable irreducible components of $\Spec R_{\tilde{v}}^{\lambda_{\tilde{v}},\tau_{\tilde{v}},\mathrm{cr}}$. Let $\mathcal{S}$ be the global $\mathcal{G}_n$-deformation datum \[ \mathcal{S} = (F/F^+, S, \widetilde{S}_p, \mathcal{O}, \overline{r}, \mu, \{R_{\tilde{v}}\}_{\tilde{v}\in \widetilde{S}_p}). \] Then the following hold: \begin{enumerate} \item\label{PDsmallRT:finite} The universal type $\mathcal{S}$ deformation ring $R_{\mathcal{S}}$ is finite over $\mathcal{O}$.
\item\label{PDsmallRT:autpt} Every $x\in \Spec R_{\mathcal{S}}(\overline{\mathbb{Q}}_p)$ is automorphic of level potentially prime to $p$. \end{enumerate} \end{thm} \begin{proof} \Cref{PDsmallRT:autpt} follows from \cite{DieulefaitSym5}*{Theorem~9}. To show \cref{PDsmallRT:finite}, we will apply \cite{ThorneAdequate}*{Theorem~10.1}. However, we have not fixed individual irreducible components at the places in $\widetilde{S}_p$, nor imposed any condition at the places away from $p$. We now explain how we can reduce to this case using base change and \cite{BLGGU2}*{Theorem~6.8}. Note that both \cite{DieulefaitSym5}*{Appendix~B} and \cite{BLGGU2}*{Theorem~6.8} use the stronger definition of adequate that implies $p\nmid n$. To prove that $R_{\mathcal{S}}$ is finite over $\mathcal{O}$, it suffices to prove that $R_{\mathcal{S}}^{\mathrm{red}}$ is finite over $\mathcal{O}$. For this, it suffices to prove that $R_{\mathcal{S}}/\mathfrak{q}$ is finite over $\mathcal{O}$ for any minimal prime $\mathfrak{q}$ of $R_{\mathcal{S}}$, since $R_{\mathcal{S}}^{\mathrm{red}}$ injects into $\prod_{\mathfrak{q}} R_{\mathcal{S}}/\mathfrak{q}$. Fix a minimal prime $\mathfrak{q}$ of $R_{\mathcal{S}}$, and let $r_{\mathfrak{q}} : G_{F^+,S} \rightarrow \mathcal{G}_n(R_{\mathcal{S}}/\mathfrak{q})$ be the induced deformation. We now choose a finite solvable extension $L/F$ of CM fields, with maximal totally real subfield $L^+$, satisfying the following: \begin{itemize} \item $L$ is disjoint from the subfield of $\overline{F}$ fixed by the kernel of $\overline{r}|_{G_{F(\zeta_p)}}$. \item Letting $S_{L^+}$ denote the set of places in $L^+$ above those in $S$, each $w \in S_{L^+}$ splits in $L$. For each $w\in S_{L^+}$ we fix a choice of place $\tilde{w}$ in $L$ above $w$ such that if $w|p$, then $\tilde{w}$ lies above some $\tilde{v} \in \widetilde{S}_p$. We let $\widetilde{S}_L = \{ \tilde{w} \mid w \in S_{L^+}\}$.
\item For each $\tilde{w} \in \widetilde{S}_L$ with $\tilde{w} \nmid p$, the $\CNL_{\mathcal{O}}$-algebra map $R_{\tilde{w}}^\square \rightarrow R_{\mathcal{S}}/\mathfrak{q}$ induced by $r_{\mathfrak{q}}|_{G_{\tilde{w}}}$ factors through the quotient $R_{\tilde{w}}^1$ of \cref{thm:R1} (here we use \cref{thm:R1factor}). \item For each $\tilde{w} \in \widetilde{S}_L$ with $\tilde{w}|p$, $\tau_{\tilde{v}}|_{I_{\tilde{w}}} = 1$, where $\tilde{v}$ is the place in $\widetilde{S}_p$ below $\tilde{w}$. \end{itemize} We define a global $\mathcal{G}_n$-deformation datum \[ \mathcal{S}_L = (L/L^+, S_{L^+}, \widetilde{S}_L, \mathcal{O}, \overline{r}|_{G_{L^+}}, \mu|_{G_{L^+}}, \{R_{\tilde{w}}\}_{\tilde{w}\in \widetilde{S}_L})\] where \begin{itemize} \item for $\tilde{w}\nmid p$, $R_{\tilde{w}}$ is a quotient of $R_{\tilde{w}}^1$ by a minimal prime through which the $\CNL_{\mathcal{O}}$-algebra morphism $R_{\tilde{w}}^1 \rightarrow R_{\mathcal{S}}/\mathfrak{q}$ induced by $r_{\mathfrak{q}}|_{G_{\tilde{w}}}$ factors; \item for $\tilde{w} \mid p$, $R_{\tilde{w}}$ is a quotient of $R_{\tilde{w}}^{\lambda_{\tilde{w}},\mathrm{cr}}$ by a minimal prime through which the $\CNL_{\mathcal{O}}$-algebra morphism \[ R_{\tilde{w}}^{\lambda_{\tilde{w}},\mathrm{cr}}\longrightarrow R_{\tilde{v}} \longrightarrow R_{\mathcal{S}}/\mathfrak{q}\] induced by $r_{\mathfrak{q}}|_{G_{\tilde{w}}}$ factors. Here $\lambda_{\tilde{w},\sigma'} = \lambda_{\tilde{v},\sigma}$ if $\sigma' : L_{\tilde{w}} \hookrightarrow \overline{\mathbb{Q}}_p$ extends $\sigma : F_{\tilde{v}} \hookrightarrow \overline{\mathbb{Q}}_p$. \end{itemize} The deformation $r_{\mathfrak{q}}|_{G_{L^+}}$ is of type $\mathcal{S}_L$, so there is an induced $\CNL_{\mathcal{O}}$-algebra map $R_{\mathcal{S}_L} \rightarrow R_{\mathcal{S}}/\mathfrak{q}$, and it is finite by \cite{BLGGT}*{Lemma~1.2.3}. So we are reduced to showing $R_{\mathcal{S}_L}$ is finite over $\mathcal{O}$.
For $\tilde{w} \nmid p$, $R_{\tilde{w}}$ has characteristic zero by \cref{thm:R1}, so it contains a $\overline{\mathbb{Q}}_p$-point. For each $\tilde{w} \in \widetilde{S}_L$ with $\tilde{w}|p$, $R_{\tilde{w}}$ contains a potentially diagonalizable point by the choice of $R_{\tilde{w}}$ and $R_{\tilde{v}}$, where $\tilde{v}$ is the place of $F$ below $\tilde{w}$. We now apply \cite{BLGGU2}*{Theorem~6.8} to obtain a lift \[ r : G_{L^+,S_{L^+}} \longrightarrow \mathcal{G}_n(\mathcal{O}_{\overline{\mathbb{Q}}_p}) \] that defines a type $\mathcal{S}_L$ deformation of $\overline{r}|_{G_{L^+}}$. Then \cite{DieulefaitSym5}*{Appendix~B} implies there is a regular algebraic polarized cuspidal automorphic representation $(\pi',\chi')$ of $\GL_n(\mathbb{A}_L)$ such that $r|_{G_L}\otimes\overline{\mathbb{Q}}_p \cong \rho_{\pi',\iota}$ and $\chi'_\iota \varepsilon^{1-n} = \mu|_{G_{L^+}}$. Take $\tilde{w} \in \widetilde{S}_L$. If $\tilde{w} | p$, then $\Spec R_{\tilde{w}}[1/p]$ is the unique irreducible component of $\Spec R_{\tilde{w}}^{\lambda_{\tilde{w}},\mathrm{cr}}[1/p]$ containing the point determined by $r|_{G_{\tilde{w}}}$, as $R_{\tilde{w}}^{\lambda_{\tilde{w}},\mathrm{cr}}[1/p]$ is formally smooth. If $\tilde{w} \nmid p$, then $\Spec R_{\tilde{w}}[1/p]$ is the unique irreducible component of $\Spec R_{\tilde{w}}^\square[1/p]$ containing the point determined by $r|_{G_{\tilde{w}}}$ by \cite{BLGGT}*{Lemma~1.3.2}, using local-global compatibility, i.e.\ \cref{autgalrep:locglob} of \cref{thm:autgalrep}, and the fact that $\pi'$ being cuspidal implies that $\pi'_w$ is generic for all finite places $w$. We can now apply \cite{ThorneAdequate}*{Theorem~10.1}, using \cite{Thorne2adic}*{Proposition~7.1} in place of \cite{ThorneAdequate}*{Proposition~4.4} (see \cite{Thorne2adic}*{\S 7}). This completes the proof. \end{proof} For our purposes below, it would suffice to fix irreducible components at the places dividing $p$ in \cref{thm:PDsmallRT} above, but we do not want to fix irreducible components at places not dividing $p$ (see \cref{rmk:minimal} below).
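Before turning to the ordinary case, we record a heuristic for how such finiteness statements will be used (this is only bookkeeping, under the notation of \cref{thm:PDsmallRT}; the precise formulas appear in the proof of \cref{thm:mainPD} below). The fixed crystalline conditions at the places in $\widetilde{S}_p$ cut out a locus of codimension \[ \Bigl( n^2 - \frac{n(n-1)}{2} \Bigr)[F^+:\mathbb{Q}] = \frac{n(n+1)}{2}[F^+:\mathbb{Q}] \] in the formally smooth unrestricted local lifting space, while every irreducible component of $\Spec R^{\mathrm{pol}}$ has dimension at least $1 + \frac{n(n+1)}{2}[F^+:\mathbb{Q}]$ by \cref{thm:polringdim}. The expected dimension of the intersection of such a component with this locus is therefore at least $1$, and it is this excess that forces each component to meet the automorphic locus, which is finite over $\mathcal{O}$, in characteristic zero.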
We have a similar theorem in the ordinary case, due to Geraghty \cite{GeraghtyOrdinary} and Thorne \cite{ThorneAdequate}. \begin{thm}\label{thm:ordsmallRT} Assume $p>2$ and that every $v|p$ in $F^+$ splits in $F$. For each $v|p$ in $F^+$, fix a choice of place $\tilde{v}$ of $F$ above $v$, and set $\widetilde{S}_p = \{\tilde{v}\}_{v|p \text{ in }F^+}$. Assume further: \begin{ass} \item $\mu$ is de~Rham. \item\label{ordsmallRT:ord} $\overline{\rho} \otimes \overline{\mathbb{F}}_p \cong \overline{\rho}_{\pi,\iota}$ and $\overline{\mu} = \chi_\iota\varepsilon^{1-n} \bmod {\mathfrak{m}_{\overline{\mathbb{Q}}_p}}$, where $(\pi,\chi)$ is an $\iota$-ordinary regular algebraic polarized cuspidal automorphic representation of $\GL_n(\mathbb{A}_F)$. \item $\overline{\rho}(G_{F(\zeta_p)})$ is adequate and $\zeta_p \notin F$. \end{ass} For each $\tilde{v} \in \widetilde{S}_p$, fix $\lambda_{\tilde{v}} \in (\mathbb{Z}_+^n)^{\mathrm{Hom}(F_{\tilde{v}},\overline{\mathbb{Q}}_p)}$ and an inertial type $\tau_{\tilde{v}}$ defined over $E$, and let $R_{\tilde{v}}$ be a quotient of $R_{\tilde{v}}^{\lambda_{\tilde{v}},\tau_{\tilde{v}},\mathrm{ord}}$ corresponding to a union of irreducible components of $\Spec R_{\tilde{v}}^{\lambda_{\tilde{v}},\tau_{\tilde{v}},\mathrm{ord}}$. Let $\mathcal{S}$ be the global $\mathcal{G}_n$-deformation datum \[ \mathcal{S} = (F/F^+, S, \widetilde{S}_p, \mathcal{O}, \overline{r}, \mu, \{R_{\tilde{v}}\}_{\tilde{v}\in \widetilde{S}_p}). \] Then the following hold: \begin{enumerate} \item The universal type $\mathcal{S}$ deformation ring $R_{\mathcal{S}}$ is finite over $\mathcal{O}$. \item Every $x\in \Spec R_{\mathcal{S}}(\overline{\mathbb{Q}}_p)$ is $\iota$-ordinary automorphic. \end{enumerate} \end{thm} \begin{proof} It suffices to consider the case $R_{\tilde{v}} = R_{\tilde{v}}^{\lambda_{\tilde{v}},\tau_{\tilde{v}},\mathrm{ord}}$ for each $\tilde{v} \in \widetilde{S}_p$.
As with \cref{thm:PDsmallRT}, we are free to replace $F$ with a finite solvable extension disjoint from the subfield of $\overline{F}$ fixed by the kernel of $\overline{r}|_{G_{F(\zeta_p)}}$. After such an extension, we may assume that \begin{itemize} \item every $v\in S$ splits in $F$; \item for every $\tilde{v} \in \widetilde{S}_p$, $\tau_{\tilde{v}} = 1$. \end{itemize} By \cref{thm:typeSquotient}, we may also enlarge $\widetilde{S}_p$ to a set $\widetilde{S}$ containing exactly one place $w$ above each $v \in S$, setting $R_w = R_w^\square$ for each $w \in \widetilde{S}$ with $w\nmid p$. We are now in the setting of \cite{ThorneAdequate}, and our stated theorem is simply the combination of \cite{ThorneAdequate}*{Theorem~9.1 and Theorem~10.1}, again using \cite{Thorne2adic}*{Proposition~7.1} in place of \cite{ThorneAdequate}*{Proposition~4.4}. \end{proof} Finally, using the potential automorphy results of Barnet-Lamb--Gee--Geraghty--Taylor \cite{BLGGT}, we also have a potential version of the above two theorems. \begin{thm}\label{thm:potsmallRT} Assume $p>2$ and that every $v|p$ in $F^+$ splits in $F$. For each $v|p$ in $F^+$, fix a choice of place $\tilde{v}$ of $F$ above $v$, and set $\widetilde{S}_p = \{\tilde{v}\}_{v|p \text{ in }F^+}$. Assume further: \begin{ass} \item $\mu$ is de~Rham. \item\label{potsmallRT:adequate} $\overline{\rho}|_{G_{F(\zeta_p)}}$ is absolutely irreducible and $\zeta_p \notin F$. Moreover, letting $d$ denote the maximal dimension of an irreducible subrepresentation of the restriction of $\overline{\rho}$ to the closed subgroup of $G_{F}$ generated by all Sylow pro-$p$ subgroups, we assume $p>2(d+1)$.
\end{ass} For each $\tilde{v} \in \widetilde{S}_p$, fix $\lambda_{\tilde{v}} \in (\mathbb{Z}_+^n)^{\mathrm{Hom}(F_{\tilde{v}},\overline{\mathbb{Q}}_p)}$ and an inertial type $\tau_{\tilde{v}}$ defined over $E$, and choose $R_{\tilde{v}}$ such that one of the following holds: \begin{casez} \item\label{potsmallRT:PD} For each $\tilde{v} \in \widetilde{S}_p$, $R_{\tilde{v}}$ is a quotient of $R_{\tilde{v}}^{\lambda_{\tilde{v}},\tau_{\tilde{v}},\mathrm{cr}}$ corresponding to a union of potentially diagonalizable irreducible components of $\Spec R_{\tilde{v}}^{\lambda_{\tilde{v}},\tau_{\tilde{v}},\mathrm{cr}}$. \item\label{potsmallRT:ord} For each $\tilde{v} \in \widetilde{S}_p$, $R_{\tilde{v}}$ is a quotient of $R_{\tilde{v}}^{\lambda_{\tilde{v}},\tau_{\tilde{v}},\mathrm{ord}}$ corresponding to a union of irreducible components of $\Spec R_{\tilde{v}}^{\lambda_{\tilde{v}},\tau_{\tilde{v}},\mathrm{ord}}$. \end{casez} Let $\mathcal{S}$ be the global $\mathcal{G}_n$-deformation datum \[ \mathcal{S} = (F/F^+, S, \widetilde{S}_p, \mathcal{O}, \overline{r}, \mu, \{R_{\tilde{v}}\}_{\tilde{v}\in \widetilde{S}_p}). \] Then the following hold: \begin{enumerate} \item\label{potPDsmallRT:finite} The universal type $\mathcal{S}$ deformation ring $R_{\mathcal{S}}$ is finite over $\mathcal{O}$. \item\label{potPDsmallRT:autpt} Given any finite extension $F^{(\mathrm{avoid})}/F$, there is a finite extension of CM fields $L/F$, disjoint from $F^{(\mathrm{avoid})}$, such that every $x\in \Spec R_{\mathcal{S}}(\overline{\mathbb{Q}}_p)$ is $L$-potentially automorphic. \end{enumerate} \end{thm} \begin{proof} We can and do assume that $F^{(\mathrm{avoid})}$ contains the fixed field of the kernel of $\overline{r}|_{G_{F(\zeta_p)}}$. We first take a CM extension $M/F$, disjoint from $F^{(\mathrm{avoid})}$, such that for any finite place $w$ of $M$ lying over a place in $S$, $\overline{\rho}|_{G_w}$ is trivial.
In particular, for any finite place $w$ of $M$ lying above a place in $S$, $\overline{\rho}|_{G_w}$ admits a characteristic zero lift $\rho_w$, which we may assume satisfies $\rho_{cw}^c \cong \rho_w^\vee \otimes \mu|_{G_w}$. Then, using \cref{potsmallRT:adequate}, we can apply \cite{BLGGT}*{Proposition~3.3.1} to deduce the existence of a finite extension $L/M$ of CM fields, with maximal totally real subfield $L^+$, and a regular algebraic polarized cuspidal automorphic representation $(\pi,\chi)$ of $\GL_n(\mathbb{A}_L)$ such that \begin{itemize} \item $L$ is disjoint from $F^{(\mathrm{avoid})}$; \item $\overline{\rho}|_{G_L}\otimes\overline{\mathbb{F}}_p \cong \overline{\rho}_{\pi,\iota}$ and $\overline{\mu}|_{G_{L^+}} = \chi_\iota\varepsilon^{1-n} \bmod \mathfrak{m}_{\overline{\mathbb{Q}}_p}$; \item $\pi$ is unramified at $p$ and outside of $S$; \item $\pi$ is $\iota$-ordinary. \end{itemize} Let $S_{L^+}$ be the set of places of $L^+$ above $S$, $\widetilde{S}_L$ be the set of places in $L$ above $\widetilde{S}_p$, and $\widetilde{S}_{L,p}$ be the set of places in $\widetilde{S}_L$ dividing $p$. For each $\tilde{w} \in \widetilde{S}_{L,p}$, let $\lambda_{\tilde{w}} \in (\mathbb{Z}_+^n)^{\mathrm{Hom}(L_{\tilde{w}},\overline{\mathbb{Q}}_p)}$ be given by $\lambda_{\tilde{w},\sigma'} = \lambda_{\tilde{v},\sigma}$ if $\sigma' : L_{\tilde{w}} \hookrightarrow \overline{\mathbb{Q}}_p$ extends $\sigma : F_{\tilde{v}} \hookrightarrow \overline{\mathbb{Q}}_p$, and let $\tau_{\tilde{w}} = \tau_{\tilde{v}}|_{I_{\tilde{w}}}$.
We then consider the global $\mathcal{G}_n$-deformation datum \[\mathcal{S}_L = (L/L^+, S_{L^+}, \widetilde{S}_L, \mathcal{O}, \overline{r}|_{G_{L^+}}, \mu|_{G_{L^+}}, \{R_{\tilde{w}}\}_{\tilde{w}\in \widetilde{S}_{L,p}}),\] where \begin{itemize} \item if we are in \cref{potsmallRT:PD}, then for each $\tilde{w} \in \widetilde{S}_{L,p}$, $R_{\tilde{w}}$ is the quotient of $R_{\tilde{w}}^{\lambda_{\tilde{w}},\tau_{\tilde{w}},\mathrm{cr}}$ corresponding to the union of all potentially diagonalizable irreducible components; \item if we are in \cref{potsmallRT:ord}, then for each $\tilde{w} \in \widetilde{S}_{L,p}$, $R_{\tilde{w}} = R_{\tilde{w}}^{\lambda_{\tilde{w}},\tau_{\tilde{w}},\mathrm{ord}}$. \end{itemize} There is then a canonical $\CNL_{\mathcal{O}}$-algebra map \[ R_{\mathcal{S}_L} \longrightarrow R_{\mathcal{S}}, \] which is finite by \cite{BLGGT}*{Lemma~1.2.3}. The theorem now follows from \cref{thm:PDsmallRT} if we are in \cref{potsmallRT:PD}, and from \cref{thm:ordsmallRT} if we are in \cref{potsmallRT:ord}. \end{proof} \subsection{The main theorems in the CM case}\label{sec:mainCM} We first prove our main theorem in the potentially diagonalizable case. \begin{thm}\label{thm:mainPD} Assume that $p\nmid 2n$ and that every $v|p$ in $F^+$ splits in $F$. For each $v|p$ in $F^+$, fix a choice of place $\tilde{v}$ of $F$ above $v$, and set $\widetilde{S}_p = \{\tilde{v}\}_{v|p \text{ in }F^+}$. Assume further: \begin{ass} \item $\mu$ is de~Rham. \item\label{mainPD:aut} $\overline{\rho} \otimes \overline{\mathbb{F}}_p \cong \overline{\rho}_{\pi,\iota}$ and $\overline{\mu} = \chi_\iota\varepsilon^{1-n} \bmod {\mathfrak{m}_{\overline{\mathbb{Q}}_p}}$, where $(\pi,\chi)$ is a regular algebraic polarized cuspidal automorphic representation of $\GL_n(\mathbb{A}_F)$ such that $\rho_{\pi,\iota}|_{G_{\tilde{v}}}$ is potentially diagonalizable for each $\tilde{v} \in \widetilde{S}_p$. \item $\overline{\rho}(G_{F(\zeta_p)})$ is adequate and $\zeta_p \notin F$.
\item\label{mainPD:smooth} $H^0(G_{\tilde{v}},\ad(\overline{\rho})(1)) = 0$ for every $\tilde{v}\in \widetilde{S}_p$. \end{ass} Then any irreducible component of $\Spec R^{\mathrm{pol}}$ contains an automorphic point $x$ of level potentially prime to $p$. Moreover, assume that for every $\tilde{v} \in \widetilde{S}_p$, we are given $\lambda_{\tilde{v}} \in (\mathbb{Z}_+^n)^{\mathrm{Hom}(F_{\tilde{v}},\overline{\mathbb{Q}}_p)}$, an inertial type $\tau_{\tilde{v}}$ defined over $E$, and a nonzero potentially diagonalizable irreducible component $\mathcal{C}_{\tilde{v}}$ of $\Spec R_{\tilde{v}}^{\lambda_{\tilde{v}},\tau_{\tilde{v}},\mathrm{cr}}$. Then we may assume the $\overline{\mathbb{Q}}_p$-point of $\Spec R_{\tilde{v}}^\square$ determined by $\rho_x|_{G_{\tilde{v}}}$ lies in $\mathcal{C}_{\tilde{v}}$ for each $\tilde{v}\in\widetilde{S}_p$. \end{thm} \begin{proof} We first note that our \cref{mainPD:aut} implies that for each $\tilde{v}\in \widetilde{S}_p$, there is a choice of $\lambda_{\tilde{v}} \in (\mathbb{Z}_+^n)^{\mathrm{Hom}(F_{\tilde{v}},\overline{\mathbb{Q}}_p)}$, inertial type $\tau_{\tilde{v}}$ defined over $E$, and nonzero potentially diagonalizable irreducible component $\mathcal{C}_{\tilde{v}}$ of $\Spec R_{\tilde{v}}^{\lambda_{\tilde{v}},\tau_{\tilde{v}},\mathrm{cr}}$ (enlarging $E$ if necessary using \cref{thm:coefchange}). We fix such a choice for each $\tilde{v} \in \widetilde{S}_p$. Fix an irreducible component $\mathcal{C}$ of $\Spec R^{\mathrm{pol}}$. Set $R^{\mathrm{loc}} = \widehat{\otimes}_{\tilde{v}\in \widetilde{S}_p} R_{\tilde{v}}^\square$, and let \[ X^{\mathrm{loc}} = \Spec(\widehat{\otimes}_{\tilde{v}\in \widetilde{S}_p} R_{\tilde{v}}) \subset \Spec R^{\mathrm{loc}},\] where $R_{\tilde{v}}$ is the quotient of $R_{\tilde{v}}^{\lambda_{\tilde{v}},\tau_{\tilde{v}},\mathrm{cr}}$ corresponding to $\mathcal{C}_{\tilde{v}}$.
Choosing a lift $r : G_{F^+,S} \rightarrow \mathcal{G}_n(R^{\mathrm{pol}})$ in the class of the universal $\mu$-polarized deformation gives a local $\CNL_{\mathcal{O}}$-algebra morphism $R^{\mathrm{loc}} \rightarrow R^{\mathrm{pol}}$, and we let $X = X^{\mathrm{loc}} \times_{\Spec R^{\mathrm{loc}}} \Spec R^{\mathrm{pol}}$ under this map. Then $X = \Spec R_{\mathcal{S}}$, where $\mathcal{S}$ is the global $\mathcal{G}_n$-deformation datum \[ \mathcal{S} = (F/F^+, S, \widetilde{S}_p, \mathcal{O}, \overline{r}, \mu, \{R_{\tilde{v}}\}_{\tilde{v}\in\widetilde{S}_p}). \] \Cref{PDsmallRT:finite} of \cref{thm:PDsmallRT} implies that $X$ is finite over $\mathcal{O}$. We also have \begin{itemize} \item $R^{\mathrm{loc}}$ is isomorphic to a power series ring over $\mathcal{O}$ in $n^2\lvert \widetilde{S}_p \rvert + n^2[F^+:\mathbb{Q}]$ variables by \cref{mainPD:smooth} and \cref{thm:localsmooth}; \item $\dim X^{\mathrm{loc}} = 1 + \sum_{\tilde{v} \in \widetilde{S}_p} \bigl( n^2 + \frac{n(n-1)}{2}[F_{\tilde{v}}:\mathbb{Q}_p] \bigr) = 1+n^2\lvert \widetilde{S}_p\rvert + \frac{n(n-1)}{2}[F^+:\mathbb{Q}]$ by \cref{thm:crdefring}; \item $\dim \mathcal{C} \ge 1 + \frac{n(n+1)}{2}[F^+:\mathbb{Q}]$ by \cref{thm:polringdim}. \end{itemize} We can now apply \cref{thm:thelemma} to conclude that \[ \mathcal{C} \cap (X\otimes_{\mathcal{O}} E) = \mathcal{C} \cap \Spec R_{\mathcal{S}}[1/p] \ne \emptyset. \] Applying \cref{PDsmallRT:autpt} of \cref{thm:PDsmallRT} finishes the proof. \end{proof} Using the polarization, the condition $H^0(G_{\tilde{v}},\ad(\overline{\rho})(1)) = 0$ for all $\tilde{v} \in \widetilde{S}_p$ appearing in \cref{thm:mainPD} and \cref{thm:mainord,thm:CMgeom} below is equivalent to $H^0(G_w,\ad(\overline{\rho})(1)) = 0$ for all $w|p$ in $F$, which is the condition in \cref{thm:dim3dense} below.
This is also equivalent to there being no nonzero $\mathbb{F}[G_w]$-equivariant map $\overline{\rho}|_{G_w} \rightarrow \overline{\rho}|_{G_w}(1)$, which is how this condition was stated in the introduction. Our main theorem in the ordinary case is the following. \begin{thm}\label{thm:mainord} Assume $p>2$ and that every $v|p$ in $F^+$ splits in $F$. For each $v|p$ in $F^+$, fix a choice of place $\tilde{v}$ of $F$ above $v$, and set $\widetilde{S}_p = \{\tilde{v}\}_{v|p \text{ in }F^+}$. Assume further: \begin{ass} \item $\mu$ is de~Rham. \item\label{mainord:ord} $\overline{\rho} \otimes \overline{\mathbb{F}}_p \cong \overline{\rho}_{\pi,\iota}$ and $\overline{\mu} = \chi_\iota\varepsilon^{1-n} \bmod {\mathfrak{m}_{\overline{\mathbb{Q}}_p}}$, where $(\pi,\chi)$ is an $\iota$-ordinary regular algebraic polarized cuspidal automorphic representation of $\GL_n(\mathbb{A}_F)$. \item $\overline{\rho}(G_{F(\zeta_p)})$ is adequate and $\zeta_p \notin F$. \item $H^0(G_{\tilde{v}},\ad(\overline{\rho})(1)) = 0$ for every $\tilde{v}\in \widetilde{S}_p$. \end{ass} Then any irreducible component $\mathcal{C}$ of $\Spec R^{\mathrm{pol}}$ contains an $\iota$-ordinary automorphic point $x$. Moreover, assume that for every $\tilde{v} \in \widetilde{S}_p$, we are given $\lambda_{\tilde{v}} \in (\mathbb{Z}_+^n)^{\mathrm{Hom}(F_{\tilde{v}},\overline{\mathbb{Q}}_p)}$, an inertial type $\tau_{\tilde{v}}$ defined over $E$, and a nonzero irreducible component $\mathcal{C}_{\tilde{v}}$ of $\Spec R_{\tilde{v}}^{\lambda_{\tilde{v}},\tau_{\tilde{v}},\mathrm{ord}}$. Then we may assume the $\overline{\mathbb{Q}}_p$-point of $\Spec R_{\tilde{v}}^\square$ determined by $\rho_x|_{G_{\tilde{v}}}$ lies in $\mathcal{C}_{\tilde{v}}$ for each $\tilde{v}\in\widetilde{S}_p$.
\end{thm} \begin{proof} We first note that \cref{thm:autgalrep} and our \cref{mainord:ord} imply that for each $\tilde{v}\in \widetilde{S}_p$, there is a choice of $\lambda_{\tilde{v}} \in (\mathbb{Z}_+^n)^{\mathrm{Hom}(F_{\tilde{v}},\overline{\mathbb{Q}}_p)}$ and inertial type $\tau_{\tilde{v}}$ defined over $E$, such that $R_{\tilde{v}}^{\lambda_{\tilde{v}},\tau_{\tilde{v}},\mathrm{ord}} \ne 0$ (enlarging $E$ if necessary using \cref{thm:coefchange}). The proof is then almost identical to the proof of \cref{thm:mainPD}, taking $X^{\mathrm{loc}} = \Spec(\widehat{\otimes}_{\tilde{v}\in\widetilde{S}_p} R_{\tilde{v}})$, with $R_{\tilde{v}}$ the quotient of $R_{\tilde{v}}^{\lambda_{\tilde{v}},\tau_{\tilde{v}},\mathrm{ord}}$ by the minimal prime corresponding to $\mathcal{C}_{\tilde{v}}$, and using \cref{thm:ordsmallRT} instead of \cref{thm:PDsmallRT} and \cref{thm:ordring} instead of \cref{thm:crdefring}. \end{proof} \begin{cor}\label{thm:CMgeom} Let the assumptions be as in either \cref{thm:mainPD} or \cref{thm:mainord}. Then $R^{\mathrm{pol}}$ is an $\mathcal{O}$-flat, reduced, complete intersection ring of dimension $1 + \frac{n(n+1)}{2}[F^+:\mathbb{Q}]$. \end{cor} \begin{proof} \Cref{thm:mainPD} and \cref{thm:mainord} imply that for any minimal prime ideal $\mathfrak{q}$ of $R^{\mathrm{pol}}$ there is an automorphic point $x\in \Spec (R^{\mathrm{pol}}/\mathfrak{q}) (\overline{\mathbb{Q}}_p)$. In particular, this shows $R^{\mathrm{pol}}/\mathfrak{q}$ is $\mathcal{O}$-flat, and it has dimension $1 + \frac{n(n+1)}{2}[F^+:\mathbb{Q}]$ by \cite{MeSmooth}*{Theorem~C}. So $R^{\mathrm{pol}}$ is equidimensional of dimension $1+\frac{n(n+1)}{2}[F^+:\mathbb{Q}]$. This together with \cref{thm:polringdim} implies that $R^{\mathrm{pol}}$ is a complete intersection. This in turn implies that $R^{\mathrm{pol}}$ has no embedded prime ideals, and since $p$ does not belong to any minimal prime ideal, it is not a zero divisor and $R^{\mathrm{pol}}$ is $\mathcal{O}$-flat.
Applying \cite{MeSmooth}*{Theorem~C} again, we see that $R^{\mathrm{pol}}$ is generically regular. Since $R^{\mathrm{pol}}$ is generically regular and contains no embedded prime ideals, it is reduced. \end{proof} Strengthening the assumption on the residual representation slightly, we can apply potential automorphy theorems to deduce the conclusion of \cref{thm:CMgeom} without assuming residual automorphy. \begin{thm}\label{thm:potCMgeom} Assume that $p>2$ and that every $v|p$ in $F^+$ splits in $F$. For each $v|p$ in $F^+$, fix a choice of place $\tilde{v}$ of $F$ above $v$, and set $\widetilde{S}_p = \{\tilde{v}\}_{v|p \text{ in }F^+}$. Assume further: \begin{ass} \item $\mu$ is de~Rham. \item $\overline{\rho}|_{G_{F(\zeta_p)}}$ is absolutely irreducible and $\zeta_p \notin F$. Moreover, letting $d$ denote the maximal dimension of an irreducible subrepresentation of the restriction of $\overline{\rho}$ to the closed subgroup of $G_{F}$ generated by all Sylow pro-$p$ subgroups, we assume $p>2(d+1)$. \item\label{potCMgeom:smooth} $H^0(G_{\tilde{v}},\ad(\overline{\rho})(1)) = 0$ for every $\tilde{v}\in \widetilde{S}_p$. \item\label{potCMgeom:loc} One of the following holds: \begin{casez} \item\label{potCMgeom:PD} for each $\tilde{v} \in \widetilde{S}_p$, $\overline{\rho}|_{G_{\tilde{v}}}$ admits a regular weight potentially diagonalizable lift; \item\label{potCMgeom:ord} for each $\tilde{v} \in \widetilde{S}_p$, $\overline{\rho}|_{G_{\tilde{v}}}$ admits a regular weight ordinary lift. \end{casez} \end{ass} Then the following hold. \begin{enumerate} \item\label{potCMgeom:pot} For any given finite extension $F^{(\mathrm{avoid})}$ of $F$, there is a finite extension $L/F$ of CM fields, disjoint from $F^{(\mathrm{avoid})}$, such that any irreducible component of $\Spec R^{\mathrm{pol}}$ contains an $L$-potentially automorphic point. \item\label{potCMgeom:geom} $R^{\mathrm{pol}}$ is an $\mathcal{O}$-flat, reduced, complete intersection ring of dimension $1 + \frac{n(n+1)}{2}[F^+:\mathbb{Q}]$.
\end{enumerate} \end{thm} \begin{proof} By \cref{potCMgeom:loc}, for each $\tilde{v} \in \widetilde{S}_p$, there is a choice of $\lambda_{\tilde{v}} \in (\mathbb{Z}_+^n)^{\mathrm{Hom}(F_{\tilde{v}},\mathbb{Q}bar_p)}$ and an inertial type $\tau_{\tilde{v}}$ defined over $E$ (extending $E$ if necessary), such that \begin{itemize} \item $R_{\tilde{v}}^{\lambda_{\tilde{v}},\tau_{\tilde{v}},\mathrm{cr}}$ has a potentially diagonalizable point if we are in \cref{potCMgeom:PD}, in which case we fix a potentially diagonalizable irreducible component $\mathcal{C}_{\tilde{v}}$ of $\Spec R_{\tilde{v}}^{\lambda_{\tilde{v}},\tau_{\tilde{v}},\mathrm{cr}}$; \item $R_{\tilde{v}}^{\lambda_{\tilde{v}},\tau_{\tilde{v}},\mathrm{ord}} \ne 0$ if we are in \cref{potCMgeom:ord}, in which case we fix an irreducible component $\mathcal{C}_{\tilde{v}}$ of $\Spec R_{\tilde{v}}^{\lambda_{\tilde{v}},\tau_{\tilde{v}},\mathrm{ord}}$. \end{itemize} The proof of \cref{potCMgeom:pot} is then exactly as in \cref{thm:mainPD}, using \cref{thm:potsmallRT} instead of \cref{thm:PDsmallRT}, and the proof of \cref{potCMgeom:geom} is exactly as in \cref{thm:CMgeom}, using \cref{potCMgeom:pot} instead of \cref{thm:mainPD,thm:mainord} (note \cite{MeSmooth}*{Theorem~C} only requires potential automorphy). \end{proof} \begin{rmk}\label{rmk:PDlift} It is expected that \cref{potCMgeom:PD} of \cref{potCMgeom:loc} in \cref{thm:potCMgeom} always holds \cite{EmertonGeeBM}*{Conjecture~A.3}, and it is known in many cases by work of Gee--Herzig--Liu--Savitt \cite{GHLS}. For example, assume that there is a $G_{\tilde{v}}$-stable filtration $0 = U_0 \subset U_1\subset \cdots \subset U_k = \mathbb{F}^n$ whose graded pieces $U_i/U_{i-1}$ are irreducible, such that there is no nonzero $\mathbb{F}[G_{\tilde{v}}]$-morphism $U_{i-1}(-1) \rightarrow U_i/U_{i-1}$ for any $1\le i \le k$.
Then \cite{GHLS}*{Corollary~2.1.11} implies that $\overline{\rho}|_{G_{\tilde{v}}}$ admits a potentially diagonalizable lift of regular weight (see \cite{GHLS}*{Examples~2.1.4}). \end{rmk} \begin{rmk}\label{rmk:minimal} We note that it is possible to prove versions of the main theorems here without the potentially diagonalizable or ordinary hypothesis at the expense of a stronger assumption on the residual image. In particular, one would assume that each $v\in S$ splits in $F$ and that $H^0(G_w,\ad(\overline{\rho})(1)) = 0$ for \emph{all} places $w$ above a place in $S$, as opposed to just the ones above $p$. This is because the only general $R = \mathbb{T}$ theorem at our disposal, without a potentially diagonalizable or ordinary assumption, is the minimal $R = \mathbb{T}$ theorem \cite{ThorneAdequate}*{Theorem~7.1}. To apply \cref{thm:thelemma} in this situation, it would be necessary to include all places in $S$ when defining $R^{\mathrm{loc}}$ and $X^{\mathrm{loc}}$. This would then force us to require that the unrestricted local deformation rings are regular at all places in $S$. \end{rmk} \subsection{Density of automorphic points}\label{sec:dim3dense} We now combine our main theorems with \cite{MeSmooth} and the work of Chenevier \cite{ChenevierFern} to prove new cases of Chenevier's conjecture \cite{ChenevierFern}*{Conjecture~1.15}. Recall $F$ is a CM field with maximal totally real subfield $F^+$, and we have a fixed isomorphism $\mathbb{Q}bar_p \xrightarrow{\sim}\mathbb{C}$. We now assume that our finite set of finite places $S$ of $F^+$ contains all finite places that ramify in $F$ (as well as all those above $p$). We restrict ourselves to dimension $3$, i.e. \[ \overline{\rho} : G_{F,S} \longrightarrow \GL_3(\mathbb{F}) \] is continuous and absolutely irreducible. We also restrict ourselves to the conjugate self dual case, i.e.
we assume \[ \overline{\rho}^c \cong \overline{\rho}^\vee \otimes \overline{\mu} \quad \text{with} \quad \overline{\mu} = \varepsilon^{-2}\delta_{F/F^+} \bmod {\mathfrak{m}_{\mathcal{O}}}.\] \begin{thm}\label{thm:dim3dense} Let the assumptions and notation be as in \S\ref{sec:dim3dense} above. Let $R^{\mathrm{pol}}$ be the universal $\varepsilon^{-2}\delta_{F/F^+}$-polarized deformation ring for $\overline{\rho}$, and let $\mathfrak{X}$ be its rigid analytic generic fibre. Assume further: \begin{ass} \item $p > 2$ and is totally split in $F$. \item $\overline{\rho}\otimes \mathbb{F}bar_p \cong \overline{\rho}_{\pi,\iota}$, where $\pi$ is a regular algebraic conjugate self dual cuspidal automorphic representation of $\GL_3(\mathbb{A}_F)$ such that for each $w|p$ in $F$, $\pi_w$ is unramified and $\rho_{\pi,\iota}|_{G_w}$ is potentially diagonalizable. If $p = 3$, then we further assume that $\pi$ is $\iota$-ordinary. \item $\overline{\rho}(G_{F(\zeta_p)})$ is adequate and $\zeta_p \notin F$. \item $H^0(G_w,\ad(\overline{\rho})(1)) = 0$ for every $w|p$ in $F$. \end{ass} Then the set of automorphic points of level prime to $p$ in $\mathfrak{X}$ is Zariski dense. \end{thm} \begin{proof} By \cite{ChenevierFern}*{Theorem~A}, the Zariski closure in $\mathfrak{X}$ of the set of automorphic points of level prime to $p$ has dimension at least $6[F^+:\mathbb{Q}]$. (Chenevier actually works with the universal $\delta_{F/F^+}$-polarized deformation ring, but this is simply because of a difference in normalization: in \cite{ChenevierFern}, the $\rho_{\pi,\iota}$ are normalized so that $\rho_{\pi,\iota}^c \cong \rho_{\pi,\iota}^\vee$, whereas our normalization yields $\rho_{\pi,\iota}^c \cong \rho_{\pi,\iota}^\vee \otimes\varepsilon^{-2}$.) By \cref{thm:CMgeom}, $R^{\mathrm{pol}}$ is $\mathcal{O}$-flat, reduced, and equidimensional of dimension $1+6[F^+:\mathbb{Q}]$.
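Here $1+6[F^+:\mathbb{Q}]$ is simply the dimension formula of \cref{thm:CMgeom} specialized to $n = 3$:
\[ 1 + \frac{n(n+1)}{2}[F^+:\mathbb{Q}] = 1 + \frac{3\cdot 4}{2}[F^+:\mathbb{Q}] = 1 + 6[F^+:\mathbb{Q}]. \]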
\Cref{thm:mainPD,thm:mainord} imply that every irreducible component of $\Spec R^{\mathrm{pol}}$ contains an automorphic point of level prime to $p$, and \cite{MeSmooth}*{Theorem~C} implies that $(R^{\mathrm{pol}})_x^\wedge$ is formally smooth over $E$ for any such point $x$. Applying \cref{thm:genfiblem} finishes the proof. \end{proof} \Cref{thm:dim3dense} and \cref{thm:speczardense} immediately imply: \begin{cor}\label{thm:specdim3dense} Let the assumptions and notation be as in \cref{thm:dim3dense}. Then the set of automorphic points of level prime to $p$ in $\Spec R^{\mathrm{pol}}$ is Zariski dense. \end{cor} \section{The Hilbert modular case}\label{sec:Hilb} We now investigate the Hilbert modular case, i.e. the case of two dimensional representations of the absolute Galois group of a totally real field. We first fix some assumptions and notation that will be used throughout this section. \subsection{Setup}\label{sec:Hilbsetup} Throughout this section we assume $p>2$ and fix an isomorphism $\iota : \mathbb{Q}bar_p \xrightarrow{\sim} \mathbb{C}$. We assume that our number field $F$ is totally real, and we fix a finite set of finite places $S$ of $F$ containing all places above $p$. Let $F_S$ be the maximal extension of $F$ unramified outside of the places in $S$ and the infinite places, and we set $G_{F,S} = \mathrm{Gal}(F_S/F)$. We fix a continuous absolutely irreducible \[ \overline{\rho} : G_{F,S} \longrightarrow \GL_2(\mathbb{F})\] such that $\det\overline{\rho}$ is totally odd. We also fix a continuous character $\mu : G_{F,S} \rightarrow \mathcal{O}^\times$ such that $\det\overline{\rho} = \overline{\mu}$, where $\overline{\mu}$ denotes the reduction of $\mu$ modulo $\mathfrak{m}_{\mathcal{O}}$. We let $R^{\mathrm{univ}}$ be the universal deformation ring for $\overline{\rho}$, and let $R^\mu$ be the universal determinant $\mu$ deformation ring for $\overline{\rho}$.
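Recall that total oddness of $\det\overline{\rho}$ is the standard condition
\[ \det\overline{\rho}(c_v) = -1 \quad\text{for every complex conjugation } c_v \in G_{F,S}, \]
one for each of the $[F:\mathbb{Q}]$ real places $v$ of $F$.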
\begin{defn} Let $R$ be a quotient of $R^{\mathrm{univ}}$, let $x \in \Spec R(\mathbb{Q}bar_p)$, and let $\rho_x$ be the pushforward of the universal deformation via $R^{\mathrm{univ}} \twoheadrightarrow R \xrightarrow{x} \mathbb{Q}bar_p$. We say $x$ is an \emph{automorphic point} if there is a regular algebraic cuspidal automorphic representation $\pi$ of $\GL_2(\mathbb{A}_F)$ such that $\rho_x \cong \rho_{\pi,\iota}$. We say $x$ has \emph{level prime to} $p$ if $\pi_v$ is unramified for each $v|p$ in $F$. Given a finite extension $L/F$ of totally real fields, we say $x$ is an $L$-\emph{potentially automorphic point} if there is a regular algebraic cuspidal automorphic representation $\pi$ of $\GL_2(\mathbb{A}_L)$ such that $\rho_x|_{G_L} \cong \rho_{\pi,\iota}$. We say $x$ is an \emph{essentially automorphic point} if there is a regular algebraic cuspidal automorphic representation $\pi$ of $\GL_2(\mathbb{A}_F)$ and a continuous character $\psi : G_{F,S} \rightarrow \mathbb{Q}bar_p^\times$, such that $\rho_x \cong \rho_{\pi,\iota}\otimes \psi$. If $\pi_v$ is unramified for each $v|p$, then we say $x$ has \emph{level essentially prime to} $p$. If $X^{\mathrm{rig}}$ is the rigid analytic generic fibre of $\Spf R$, and $x^{\mathrm{rig}} \in X^{\mathrm{rig}}$ is the point corresponding to $\ker(x) \subset R[1/p]$, then we say $x^{\mathrm{rig}}$ is an \emph{automorphic point}, resp. an \emph{essentially automorphic point}, if $x$ is, and if this is the case we further say $x^{\mathrm{rig}}$ has \emph{level essentially prime to} $p$ if $x$ does. \end{defn} It is necessary to introduce the notion of essentially automorphic points to avoid assuming Leopoldt's conjecture. We will find use for the following standard lemma, cf. \cite{BockleGenFibre}*{Proposition~2.1}.
\begin{lem}\label{thm:det} Let $\Gamma$ be the maximal pro-$p$ abelian quotient of $G_{F,S}$, and let $\Psi : G_{F,S} \rightarrow \mathcal{O}[[\Gamma]]^\times$ be the tautological character. Let $\widetilde{\det\overline{\rho}} : G_{F,S} \rightarrow \mathcal{O}^\times$ be the Teichm\"{u}ller lift of $\det\overline{\rho}$, and let $\widehat{\mu}^{\frac{1}{2}}: G_{F,S} \rightarrow 1+\mathfrak{m}_{\mathcal{O}}$ be the unique character such that $(\widehat{\mu}^{\frac{1}{2}})^2 = \mu(\widetilde{\det\overline{\rho}})^{-1}$ (here we use that $p>2$). The $\mathbb{C}NL_{\mathcal{O}}$-algebra morphism $R^{\mathrm{univ}} \rightarrow R^\mu \widehat{\otimes}\, \mathcal{O}[[\Gamma]]$ induced by $\rho^\mu \otimes\widehat{\mu}^{\frac{1}{2}}\Psi$ is an isomorphism. \end{lem} This has the following immediate consequence, which will allow us to deduce the existence of automorphic points in the irreducible components of the nonfixed determinant deformation ring from the existence of automorphic points in the irreducible components of fixed determinant deformation rings. \begin{lem}\label{thm:comps} Let $\mathcal{C}$ be an irreducible component of $\Spec R^{\mathrm{univ}}$. After possibly enlarging $\mathcal{O}$, there is a finite $p$-power order character $\theta: G_{F,S} \rightarrow \mathcal{O}^\times$ such that some irreducible component of $\Spec R^{\theta\mu} \subseteq \Spec R^{\mathrm{univ}}$ is contained in $\mathcal{C}$, where $R^{\theta\mu}$ is the universal determinant $\theta\mu$ deformation ring for $\overline{\rho}$. \end{lem} \begin{prop}\label{thm:dimdet} There is a presentation \[ R^\mu \cong \mathcal{O}[[x_1,\ldots,x_g]]/(f_1,\ldots,f_k)\] with $g-k \ge 2[F:\mathbb{Q}]$. In particular, each irreducible component of $\Spec R^\mu$ has dimension at least $1+2[F:\mathbb{Q}]$.
\end{prop} \begin{proof} This follows from \cref{thm:gendetpres} and the global Euler--Poincar\'{e} characteristic formula, since $\det\overline{\rho}$ is totally odd. \end{proof} \begin{cor}\label{thm:dimnodet} There is a presentation \[ R^{\mathrm{univ}} \cong \mathcal{O}[[x_1,\ldots,x_g]]/(f_1,\ldots,f_k)\] with $g-k \ge 1+d_F+2[F:\mathbb{Q}]$, where $d_F$ is the Leopoldt defect for $F$ and $p$. In particular, every irreducible component of $\Spec R^{\mathrm{univ}}$ has dimension at least $2+d_F + 2[F:\mathbb{Q}]$. \end{cor} \begin{proof} Let $\Gamma$ be the maximal pro-$p$ abelian quotient of $G_{F,S}$. Then $\Gamma \cong \mathbb{Z}_p^{1+d_F} \times \Gamma_{\mathrm{tor}}$ with $\Gamma_{\mathrm{tor}}$ a product of finite cyclic groups of $p$-power order. From this it follows that $\mathcal{O}[[\Gamma]]$ has a presentation \[ \mathcal{O}[[\Gamma]] \cong \mathcal{O}[[y_1,\ldots,y_r]]/(g_1,\ldots,g_s)\] with $r-s = 1+d_F$. The corollary then follows from \cref{thm:dimdet} and \cref{thm:det}. \end{proof} We will need a smoothness result analogous to \cite{MeSmooth}*{Theorem~C} in the Hilbert modular case. \begin{thm}\label{thm:smoothHilb} Let $L/F$ be a finite extension of totally real fields, and let $x \in \Spec R^\mu(\mathbb{Q}bar_p) \subset \Spec R^{\mathrm{univ}}(\mathbb{Q}bar_p)$ be an $L$-potentially automorphic point. Let $(R^\mu)_x^\wedge$ and $(R^{\mathrm{univ}})_x^\wedge$ denote the respective localizations and completions at $x$. If $\overline{\rho}(G_{L(\zeta_p)})$ is adequate, then $(R^\mu)_x^\wedge$ and $(R^{\mathrm{univ}})_x^\wedge$ are formally smooth over $E$ of dimensions $2[F:\mathbb{Q}]$ and $1+d_F+2[F:\mathbb{Q}]$, respectively. \end{thm} \begin{proof} Let $\Gamma$ be the maximal pro-$p$ abelian quotient of $G_{F,S}$, and fix a splitting $\Gamma \cong \Gamma_{\mathrm{free}}\times\Gamma_{\mathrm{tor}}$ with $\Gamma_{\mathrm{free}} \cong \mathbb{Z}_p^{1+d_F}$ and $\Gamma_{\mathrm{tor}}$ finite.
Using \cref{thm:coefchange}, we can assume that $E$ contains the values of any $\mathbb{Q}bar_p$-character of $\Gamma_{\mathrm{tor}}$ as well as the values of $\det\rho_x$. Using \cref{thm:det} and choosing appropriate topological generators for $\Gamma_{\mathrm{free}}$, we have $R^{\mathrm{univ}} \cong R^\mu[[y_1,\ldots,y_{1+d_F}]][\Gamma_{\mathrm{tor}}]$, and we can assume $y_1,\ldots,y_{1+d_F}$ lie in the kernel of $x$. This implies $(R^{\mathrm{univ}})_x^\wedge \cong (R^\mu)_x^\wedge[[y_1,\ldots,y_{1+d_F}]]$, and the result for $R^{\mathrm{univ}}$ follows from that for $R^\mu$. The result for $R^\mu$ follows from \cite{MeSmooth}*{Theorem~B} using an argument as in \cite{KisinGeoDefs}*{\S8}. We give a sketch. Let $k = R^\mu[1/p]/\ker(x)$, and let $\rho : G_{F,S} \rightarrow \GL_2(k)$ be the pushforward of the universal $R^\mu$-valued deformation via $R^\mu[1/p] \twoheadrightarrow k$. We let $\ad^0(\rho)$ denote the trace zero subspace of the Lie algebra of $\GL_2(k)$, equipped with the adjoint $G_{F,S}$-action $\ad\circ \rho$. By \cite{MeSmooth}*{Theorem~B}, the geometric Bloch--Kato Selmer group \[ H_g^1(G_{F,S},\ad^0(\rho)) := \ker\big( H^1(G_{F,S},\ad^0(\rho)) \rightarrow \prod_{v|p} H^1(G_v,B_{\mathrm{dR}}\otimes_{\mathbb{Q}_p}\ad^0(\rho))\big)\] is trivial. Using this, the argument of \cite{KisinGeoDefs}*{Theorem~8.2} shows that $H^1(G_{F,S},\ad^0(\rho))$ injects into \[ \prod_{v|p} \mathrm{Fil}^0 (B_{\mathrm{dR}}\otimes_{\mathbb{Q}_p}\ad^0(\rho))^{G_v}, \] and this space has $k$-dimension $\sum_{v|p} 2[F_v:\mathbb{Q}_p] = 2[F:\mathbb{Q}]$, since the Hodge--Tate weights for $\rho|_{G_v}$ are distinct for each $v|p$ and each embedding $\sigma : F_v \hookrightarrow \mathbb{Q}bar_p$. On the other hand, using the argument of \cite{KisinOverConvFM}*{Proposition~9.5}, $H^1(G_{F,S},\ad^0(\rho))$ is isomorphic to the tangent space of $(R^\mu)_x^\wedge$ and $\dim(R^\mu)_x^\wedge \ge 2[F:\mathbb{Q}]$ by \cref{thm:dimdet}.
\end{proof} \subsection{Small $R = \mathbb{T}$ theorems from the literature}\label{sec:smallRTHilb} Let $\widetilde{S} \subseteq S$, and for each $v\in \widetilde{S}$, let $R_v$ be a quotient of $R_v^\square := R_{\overline{\rho}|_{G_v}}^\square$ that represents a local deformation problem. By \cref{thm:defquolem}, the rings $R_v$ in \cref{thm:smallHilb,thm:potsmallHilb} below represent local deformation problems. We refer to the tuple \[ \mathcal{S} = (F, S, \widetilde{S}, \mathcal{O}, \overline{\rho},\mu, \{R_v\}_{v\in \widetilde{S}}) \] as a \emph{global} $\GL_2$-\emph{deformation datum}. A \emph{type} $\mathcal{S}$ deformation of $\overline{\rho}$ is a deformation $\rho : G_{F,S} \rightarrow \GL_2(A)$ with $A$ a $\mathbb{C}NL_{\mathcal{O}}$-algebra such that for any (equivalently, for one) lift $\rho$ in its deformation class \begin{itemize} \item $\det\rho = \mu$, and \item for each $v\in \widetilde{S}$, the $\mathbb{C}NL_{\mathcal{O}}$-morphism $R_v^\square \rightarrow A$ induced by the lift $\rho|_{G_v}$ of $\overline{\rho}|_{G_v}$ factors through $R_v$. \end{itemize} The set valued functor $D_{\mathcal{S}}$ on $\mathbb{C}NL_{\mathcal{O}}$ that takes a $\mathbb{C}NL_{\mathcal{O}}$-algebra $A$ to the set of deformations of type $\mathcal{S}$ is representable. Indeed, a choice of lift $\rho^\mu$ in the universal $R^\mu$-valued deformation determines a $\mathbb{C}NL_{\mathcal{O}}$-algebra morphism $\widehat{\otimes}_{v\in\widetilde{S}} R_v^\square \rightarrow R^\mu$, which is canonical up to automorphisms of $\widehat{\otimes}_{v\in\widetilde{S}} R_v^\square$. Then $D_{\mathcal{S}}$ is represented by \[ R_{\mathcal{S}} := R^\mu \otimes_{(\widehat{\otimes}_{v\in\widetilde{S}} R_v^\square)} (\widehat{\otimes}_{v\in\widetilde{S}} R_v).\] We call $R_{\mathcal{S}}$ the \emph{universal type} $\mathcal{S}$ \emph{deformation ring}.
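For instance (a sanity check on the definition, not used in the sequel): if $\widetilde{S} = \{v\}$ consists of a single place and $R_v = R_v^\square/I_v$, then the tensor product above is simply a quotient of $R^\mu$:
\[ R_{\mathcal{S}} \cong R^\mu \otimes_{R_v^\square} R_v^\square/I_v \cong R^\mu/I_v R^\mu, \]
where $I_v R^\mu$ denotes the ideal of $R^\mu$ generated by the image of $I_v$ under the structure map $R_v^\square \rightarrow R^\mu$.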
We will deduce the following small $R = \mathbb{T}$ theorem from \cref{thm:PDsmallRT}, using an input due to Barnet-Lamb--Gee--Geraghty \cite{BLGGOrdLifts2}. As in \S\ref{sec:CM}, both \cref{smallHilb:finite,smallHilb:modular} are crucial for our main theorems in this section. \begin{thm}\label{thm:smallHilb} Recall we have assumed $p>2$ and $\det\overline{\rho}$ is totally odd. Assume further: \begin{ass} \item $\mu$ is de~Rham. \item $\overline{\rho} \otimes\mathbb{F}bar_p \cong \overline{\rho}_{\pi,\iota}$, where $\pi$ is a regular algebraic cuspidal automorphic representation of $\GL_2(\mathbb{A}_F)$. \item\label{smallHilb:5} $\overline{\rho}(G_{F(\zeta_p)})$ is adequate. \end{ass} For each $v|p$, let $\lambda_v \in (\mathbb{Z}_+^n)^{\mathrm{Hom}(F_v,\mathbb{Q}bar_p)}$ and $\tau_v$ be an inertial type defined over $E$, and let $R_v$ be a quotient of $R_v^{\lambda_v,\tau_v,\mathrm{cr},\mu}$ corresponding to a union of potentially diagonalizable irreducible components. Let $\mathcal{S}$ be the global $\GL_2$-deformation datum \[ \mathcal{S} = (F,S,S_p,\mathcal{O},\overline{\rho},\mu,\{R_v\}_{v\in S_p}),\] and let $R_{\mathcal{S}}$ be the universal type $\mathcal{S}$ deformation ring. Then \begin{enumerate} \item\label{smallHilb:finite} $R_{\mathcal{S}}$ is finite over $\mathcal{O}$. \item\label{smallHilb:modular} Every $\mathbb{Q}bar_p$-point of $\Spec R_{\mathcal{S}}$ is an automorphic point of level potentially prime to $p$. \end{enumerate} \end{thm} \begin{proof} By \cite{BLGGOrdLifts2}*{Theorem~2.1.2} (and its proof), there is a finite solvable totally real extension $L/F$, disjoint from the fixed field of $\overline{\rho}|_{G_{F(\zeta_p)}}$, and a regular algebraic cuspidal automorphic representation $\pi'$ of $\GL_2(\mathbb{A}_L)$ such that $\overline{\rho}|_{G_L}\otimes\mathbb{F}bar_p \cong \overline{\rho}_{\pi',\iota}$ and such that $\rho_{\pi',\iota}|_{G_w}$ is potentially diagonalizable for any $w|p$ in $L$.
Let $\chi$ denote the central character of $\pi'$, and let $S_L$ denote the set of places in $L$ above those in $S$. Now choose a quadratic CM extension $M/L$, disjoint from the fixed field of $\overline{\rho}|_{G_{L(\zeta_p)}}$, such that each $w|p$ in $L$ splits in $M$. Using the standard symplectic pairing for $\GL_2$ and \cref{thm:Gdhoms}, $\overline{\rho}|_{G_M}$ extends to a continuous homomorphism \[ \overline{r} : G_{L,S_L} \longrightarrow \mathcal{G}_2(\mathbb{F}) \] with $\nu\circ \overline{r} = \overline{\mu}|_{G_L}$. Using \cite{BLGGT}*{Lemma~2.2.2}, there is a regular algebraic cuspidal automorphic representation $\Pi$ of $\GL_2(\mathbb{A}_M)$ such that $(\Pi,\chi)$ is polarized and such that $\rho_{\pi',\iota}|_{G_M} = \rho_{\Pi,\iota}$. In particular, $\overline{r}|_{G_M}\otimes \mathbb{F}bar_p \cong \overline{\rho}_{\Pi,\iota}$, $\overline{\mu}|_{G_L} = \chi_\iota\varepsilon^{-1} \mod {\mathfrak{m}_{\mathbb{Q}bar_p}}$, and $\rho_{\Pi,\iota}|_{G_w}$ is potentially diagonalizable for each $w|p$ in $M$. Note also that $\overline{\rho}(G_{M(\zeta_p)})$ is adequate and $\zeta_p \notin M$, by choice of $M$. For each $w|p$ in $L$, fix a choice of $\tilde{w}$ above $w$ in $M$, and set $\widetilde{S}_p = \{\tilde{w}\}_{w|p \text{ in }L}$. For each $\tilde{w} \in \widetilde{S}_p$, if $v$ is the place below it in $F$, we let $\tau_{\tilde{w}} = \tau_v|_{I_{\tilde{w}}}$ and $\lambda_{\tilde{w}} \in (\mathbb{Z}_+^n)^{\mathrm{Hom}(M_{\tilde{w}},\mathbb{Q}bar_p)}$ be given by $\lambda_{\tilde{w},\sigma'} = \lambda_{v,\sigma}$ if $\sigma' : M_{\tilde{w}} \hookrightarrow \mathbb{Q}bar_p$ extends $\sigma : F_v \hookrightarrow \mathbb{Q}bar_p$. For each $\tilde{w} \in \widetilde{S}_p$, we then let $R_{\tilde{w}}$ be the quotient of $R_{\tilde{w}}^{\lambda_{\tilde{w}},\tau_{\tilde{w}},\mathrm{cr}}$ corresponding to the union of all potentially diagonalizable irreducible components.
We then have a global $\mathcal{G}_2$-deformation datum \[ \mathcal{S}_M = (M/L, S_L, \widetilde{S}_p, \mathcal{O}, \overline{r}, \mu, \{R_{\tilde{w}}\}_{\tilde{w} \in \widetilde{S}_p}),\] and we let $R_{\mathcal{S}_M}$ be the universal type $\mathcal{S}_M$ deformation ring. For any type $\mathcal{S}$ deformation $\rho$ of $\overline{\rho}$ to a $\mathbb{C}NL_{\mathcal{O}}$-algebra $A$, again using the standard symplectic pairing for $\GL_2$ and \cref{thm:Gdhoms}, we obtain a type $\mathcal{S}_M$ deformation \[ r : G_{L,S_L} \longrightarrow \mathcal{G}_2(A). \] We deduce the existence of a $\mathbb{C}NL_{\mathcal{O}}$-algebra map $R_{\mathcal{S}_M} \rightarrow R_{\mathcal{S}}$. A standard argument using Nakayama's Lemma and \cite{KW0}*{Lemma~3.6} shows that this map is finite. \Cref{smallHilb:finite} now follows from \cref{PDsmallRT:finite} of \cref{thm:PDsmallRT}, and \cref{smallHilb:modular} follows from \cref{PDsmallRT:autpt} of \cref{thm:PDsmallRT} and \cite{BLGGT}*{Lemma~2.2.2}. \end{proof} We again combine this with potential automorphy theorems. \begin{thm}\label{thm:potsmallHilb} Let the notation and assumptions be as in the beginning of this section, and assume further: \begin{ass} \item $\mu$ is de~Rham. \item $\overline{\rho}(G_{F(\zeta_p)})$ is adequate. \end{ass} For each $v|p$, let $\lambda_v \in (\mathbb{Z}_+^n)^{\mathrm{Hom}(F_v,\mathbb{Q}bar_p)}$ and $\tau_v$ be an inertial type defined over $E$, and let $R_v$ be a quotient of $R_v^{\lambda_v,\tau_v,\mathrm{cr},\mu}$ corresponding to a union of potentially diagonalizable irreducible components. Let $\mathcal{S}$ be the global $\GL_2$-deformation datum \[ \mathcal{S} = (F,S,S_p,\mathcal{O},\overline{\rho},\mu,\{R_v\}_{v\in S_p}),\] and let $R_{\mathcal{S}}$ be the universal type $\mathcal{S}$ deformation ring. Then \begin{enumerate} \item\label{potsmallHilb:finite} $R_{\mathcal{S}}$ is finite over $\mathcal{O}$.
\item\label{potsmallHilb:modular} Given any finite extension $F^{(\mathrm{avoid})}/F$, there is a finite extension of totally real fields $L/F$, disjoint from $F^{(\mathrm{avoid})}$, such that every $\mathbb{Q}bar_p$-point of $\Spec R_{\mathcal{S}}$ is $L$-potentially automorphic. \end{enumerate} \end{thm} \begin{proof} We can and do assume that $F^{(\mathrm{avoid})}$ contains the fixed field of $\overline{\rho}|_{G_{F(\zeta_p)}}$. By \cite{TaylorRmkFM}*{Corollary~1.7}, we can find a finite totally real extension $L/F$ and a regular algebraic cuspidal automorphic representation $\pi$ of $\GL_2(\mathbb{A}_L)$ such that $\overline{\rho}|_{G_L}\otimes \mathbb{F}bar_p \cong \overline{\rho}_{\pi,\iota}$. Further, by using the refinement \cite{HSBT}*{Proposition~2.1} of Moret-Bailly's theorem in place of \cite{TaylorRmkFM}*{Theorem~G}, we may assume $L$ is disjoint from $F^{(\mathrm{avoid})}$ (see also \cite{BLGGT}*{Theorem~3.1.2}). Let $S_L$, resp. $S_{L,p}$, denote the set of places of $L$ above $S$, resp. above $p$. For each $w \in S_{L,p}$, if $v$ is the place below it in $F$, we let $\tau_w = \tau_v|_{I_w}$ and $\lambda_w \in (\mathbb{Z}_+^n)^{\mathrm{Hom}(L_w,\mathbb{Q}bar_p)}$ be given by $\lambda_{w,\sigma'} = \lambda_{v,\sigma}$ if $\sigma' : L_w \hookrightarrow \mathbb{Q}bar_p$ extends $\sigma : F_v \hookrightarrow \mathbb{Q}bar_p$. For each $w\in S_{L,p}$, we then let $R_w$ be the quotient of $R_w^{\lambda_w,\tau_w,\mathrm{cr},\mu}$ corresponding to the union of all potentially diagonalizable irreducible components. We have the global $\GL_2$-deformation datum \[ \mathcal{S}_L = (L,S_L,S_{L,p},\mathcal{O},\overline{\rho}|_{G_L},\mu|_{G_L},\{R_w\}_{w\in S_{L,p}}),\] and the universal type $\mathcal{S}_L$-deformation ring $R_{\mathcal{S}_L}$. There is a natural map $R_{\mathcal{S}_L} \rightarrow R_{\mathcal{S}}$, which is finite (\cite{KW0}*{Lemma~3.6}). The theorem now follows from applying \cref{thm:smallHilb} to $R_{\mathcal{S}_L}$.
\end{proof} \subsection{The main theorems in the Hilbert modular case}\label{sec:mainHilb} We can now prove our main theorems in the Hilbert modular case. \begin{thm}\label{thm:mainHilbdet} Recall we have assumed $p>2$ and that $\det\overline{\rho}$ is totally odd. Assume further: \begin{ass} \item $\mu$ is de~Rham. \item\label{mainHilbdet:aut} $\overline{\rho} \otimes\mathbb{F}bar_p \cong \overline{\rho}_{\pi,\iota}$, where $\pi$ is a regular algebraic cuspidal automorphic representation of $\GL_2(\mathbb{A}_F)$. \item $\overline{\rho}(G_{F(\zeta_p)})$ is adequate. \item\label{mainHilbdet:smooth} $H^0(G_v,\ad^0(\overline{\rho})(1)) = 0$ for every $v|p$. \end{ass} Then any irreducible component $\mathcal{C}$ of $\Spec R^\mu$ contains an automorphic point $x$ of level potentially prime to $p$. Moreover, assume that for every $v|p$ in $F$, we are given $\lambda_v \in (\mathbb{Z}_+^n)^{\mathrm{Hom}(F_v,\mathbb{Q}bar_p)}$, an inertial type $\tau_v$ defined over $E$, and a nonzero potentially diagonalizable irreducible component $\mathcal{C}_v$ of $\Spec R_v^{\lambda_v,\tau_v,\mathrm{cr},\mu}$. Then we can assume that the $\mathbb{Q}bar_p$-point of $\Spec R_v^\square$ determined by $\rho_x|_{G_v}$ lies in $\mathcal{C}_v$ for each $v|p$. \end{thm} \begin{proof} We first note that by \cref{thm:BTlift}, for each $v|p$ there is a choice of $\lambda_v \in (\mathbb{Z}_+^n)^{\mathrm{Hom}(F_v,\mathbb{Q}bar_p)}$, inertial type $\tau_v$ defined over $E$, and nonzero potentially diagonalizable irreducible component $\mathcal{C}_v$ of $\Spec R_v^{\lambda_v,\tau_v,\mathrm{cr},\mu}$, which we fix. The proof now is similar to \cref{thm:mainPD}; we give the details. Fix an irreducible component $\mathcal{C}$ of $\Spec R^\mu$. 
Set $R^{\mathrm{loc}} = \widehat{\otimes}_{v\in S_p} R_v^{\square,\mu}$, and let \[ X^{\mathrm{loc}} = \Spec(\widehat{\otimes}_{v\in S_p} R_v) \subseteq \Spec R^{\mathrm{loc}}, \] where $R_v$ is the quotient of $R_v^{\lambda_v,\tau_v,\mathrm{cr},\mu}$ corresponding to $\mathcal{C}_v$. Choosing a lift $\rho^\mu : G_{F,S} \rightarrow \GL_2(R^\mu)$ in the class of the universal determinant $\mu$ deformation gives a local $\mathbb{C}NL_{\mathcal{O}}$-algebra morphism $R^{\mathrm{loc}} \rightarrow R^\mu$, and we let $X = X^{\mathrm{loc}} \times_{\Spec R^{\mathrm{loc}}} \Spec R^\mu$. Then $X = \Spec R_{\mathcal{S}}$, where $\mathcal{S}$ is the global $\GL_2$-deformation datum \[ \mathcal{S} = (F, S, S_p, \mathcal{O}, \overline{\rho}, \mu, \{R_v\}_{v\in S_p}). \] \Cref{smallHilb:finite} of \cref{thm:smallHilb} implies that $X$ is finite over $\mathcal{O}$. We also have \begin{itemize} \item $R^{\mathrm{loc}}$ is isomorphic to a power series ring over $\mathcal{O}$ in $3\lvert S_p \rvert + 3[F:\mathbb{Q}]$ variables by \cref{mainHilbdet:smooth} and \cref{thm:localsmooth}; \item $\dim X^{\mathrm{loc}} = 1+\sum_{v\in S_p} \big(3 + [F_v : \mathbb{Q}_p]\big) = 1+ 3\lvert S_p \rvert + [F:\mathbb{Q}]$ by \cref{thm:crdefringdet}; \item $\dim \mathcal{C} \ge 1 + 2[F:\mathbb{Q}]$ by \cref{thm:dimdet}. \end{itemize} We can now apply \cref{thm:thelemma} to conclude that \[ \mathcal{C} \cap (X\otimes_{\mathcal{O}} E) = \mathcal{C} \cap \Spec R_{\mathcal{S}}[1/p] \ne \emptyset. \] Applying \cref{smallHilb:modular} of \cref{thm:smallHilb} finishes the proof.
\end{proof} \begin{rmk}\label{rmk:comppss} An analogue of \cref{thm:mainHilbdet} can be proved with ``potentially diagonalizable" replaced with ``semistable with distinct Hodge--Tate weights", at the cost of assuming $p$ is totally split in $F$, by using the $R=\mathbb{T}$ theorem of Kisin \cite{KisinFM} and the subsequent improvements on the Breuil--M\'{e}zard conjecture due to Pa\v{s}k\={u}nas \cite{PaskunasBM} and Hu--Tan \cite{HuTanBM}. We leave the precise statements to the reader. \end{rmk} We also have the potential version of the above theorem. \begin{thm}\label{thm:mainpotHilb} Recall we have assumed $p>2$ and that $\det\overline{\rho}$ is totally odd. Assume further: \begin{ass} \item $\mu$ is de~Rham. \item\label{mainpotHilb:TW} $\overline{\rho}(G_{F(\zeta_p)})$ is adequate. \item\label{mainpotHilb:smooth} $H^0(G_v,\ad^0(\overline{\rho})(1)) = 0$ for every $v|p$. \end{ass} Then given any finite extension $F^{(\mathrm{avoid})}/F$, there is a finite extension $L/F$ of totally real fields, disjoint from $F^{(\mathrm{avoid})}$, such that every irreducible component $\mathcal{C}$ of $\Spec R^\mu$ contains an $L$-potentially automorphic point $x$. Moreover, assume that for every $v|p$ in $F$, we are given $\lambda_v \in (\mathbb{Z}_+^n)^{\mathrm{Hom}(F_v,\mathbb{Q}bar_p)}$, an inertial type $\tau_v$ defined over $E$, and a nonzero potentially diagonalizable irreducible component $\mathcal{C}_v$ of $\Spec R_v^{\lambda_v,\tau_v,\mathrm{cr},\mu}$. Then we can assume that the $\mathbb{Q}bar_p$-point of $\Spec R_v^\square$ determined by $\rho_x|_{G_v}$ lies in $\mathcal{C}_v$ for each $v|p$. \end{thm} \begin{proof} The proof is identical to the proof of \cref{thm:mainHilbdet}, except using \cref{thm:potsmallHilb} in place of \cref{thm:smallHilb}. \end{proof} \Cref{thm:comps,thm:mainHilbdet} immediately imply: \begin{cor}\label{thm:mainHilb} \begin{enumerate} \item\label{mainHilb:aut} Let the assumptions be as in \cref{thm:mainHilbdet}.
Then any irreducible component of $\Spec R^{\mathrm{univ}}$ contains an automorphic point. \item\label{mainHilb:pot} Let the assumptions be as in \cref{thm:mainpotHilb}. Then for any given finite extension $F^{(\mathrm{avoid})}$, there is a finite extension $L/F$ of totally real fields, disjoint from $F^{(\mathrm{avoid})}$, such that every irreducible component of $\Spec R^{\mathrm{univ}}$ contains an $L$-potentially automorphic point. \end{enumerate} \end{cor} As in \S\ref{sec:CM}, we also deduce ring theoretic properties for $R^{\mathrm{univ}}$ and $R^\mu$. \begin{cor}\label{thm:Hilbgeom} Recall we have assumed $p>2$ and that $\det\overline{\rho}$ is totally odd. Assume further: \begin{ass} \item $\overline{\rho}(G_{F(\zeta_p)})$ is adequate. \item $H^0(G_v,\ad^0(\overline{\rho})(1)) = 0$ for every $v|p$. \end{ass} Then $R^{\mathrm{univ}}$ and $R^\mu$ are $\mathcal{O}$-flat, reduced, complete intersection rings of dimensions $2+d_F+2[F:\mathbb{Q}]$ and $1+2[F:\mathbb{Q}]$, respectively. \end{cor} \begin{proof} The proof for $R^{\mathrm{univ}}$ is similar to that of \cref{thm:CMgeom}. We give the details. \Cref{mainHilb:pot} of \cref{thm:mainHilb} implies that there is a finite extension $L/F$ of totally real fields, disjoint from the fixed field of $\overline{\rho}|_{G_{F(\zeta_p)}}$, such that for any minimal prime ideal $\mathfrak{q}$ of $R^{\mathrm{univ}}$ there is an $L$-potentially automorphic point $x\in \Spec (R^{\mathrm{univ}}/\mathfrak{q}) (\mathbb{Q}bar_p)$. In particular, this shows $R^{\mathrm{univ}}/\mathfrak{q}$ is $\mathcal{O}$-flat, and it has dimension $2 + d_F + 2[F:\mathbb{Q}]$ by \cref{thm:smoothHilb}. So $R^{\mathrm{univ}}$ is equidimensional of dimension $2+d_F + 2[F:\mathbb{Q}]$. This together with \cref{thm:dimnodet} implies that $R^{\mathrm{univ}}$ is a complete intersection.
This in turn implies that $R^{\mathrm{univ}}$ has no embedded prime ideals, and since $p$ does not belong to any minimal prime ideal, it is not a zero divisor and $R^{\mathrm{univ}}$ is $\mathcal{O}$-flat. Applying \cref{thm:smoothHilb} again, we see that $R^{\mathrm{univ}}$ is generically regular. Since $R^{\mathrm{univ}}$ is generically regular and contains no embedded prime ideals, it is reduced. When $\mu$ is de~Rham, the proof for $R^\mu$ is the same, using \cref{thm:mainpotHilb} instead of \cref{mainHilb:pot} of \cref{thm:mainHilb}, and \cref{thm:dimdet} instead of \cref{thm:dimnodet}. When $\mu : G_{F,S} \rightarrow \mathcal{O}^\times$ is an arbitrary character lifting $\det\overline{\rho}$, we choose a character $\mu' : G_{F,S} \rightarrow \mathcal{O}^\times$ lifting $\det\overline{\rho}$ such that $\mu'$ is de~Rham. Since $p>2$, there is a character $(\mu'\mu^{-1})^{\frac{1}{2}} : G_{F,S} \rightarrow 1+\mathfrak{m}_{\mathcal{O}}$ such that $((\mu'\mu^{-1})^{\frac{1}{2}})^2 = \mu'\mu^{-1}$. Twisting the universal determinant $\mu$-deformation of $\overline{\rho}$ by $(\mu'\mu^{-1})^{\frac{1}{2}}$ induces a $\mathbb{C}NL_{\mathcal{O}}$-algebra isomorphism $R^{\mu'} \xrightarrow{\sim} R^\mu$. This completes the proof. \end{proof} Finally, combining the above with \cref{thm:smoothHilb} and the work of Gouvea--Mazur and Chenevier, we obtain: \begin{thm}\label{thm:Hilbdense} Assume $p>2$ and that $\det\overline{\rho}$ is totally odd. Let $\mathfrak{X}$ be the rigid analytic generic fibre of $\Spf R^{\mathrm{univ}}$. Assume further: \begin{ass} \item $p$ is totally split in $F$, and if $F \ne \mathbb{Q}$, then $[F:\mathbb{Q}]$ is even. \item\label{Hilbdense:aut} $\overline{\rho} \otimes\mathbb{F}bar_p \cong \overline{\rho}_{\pi,\iota}$, where $\pi$ is a regular algebraic cuspidal automorphic representation of $\GL_2(\mathbb{A}_F)$ such that for all $v|p$, $\pi_v$ is unramified and $\rho_{\pi,\iota}|_{G_v}$ is potentially diagonalizable.
\item\label{Hilbdense:adequate} $\overline{\rho}(G_{F(\zeta_p)})$ is adequate. \item\label{Hilbdense:smooth} $H^0(G_v,\ad^0(\overline{\rho})(1)) = 0$ for every $v|p$. \end{ass} Then the set of essentially automorphic points of level essentially prime to $p$ in $\mathfrak{X}$ is Zariski dense. If Leopoldt's conjecture holds for $F$ and $p$, then the set of automorphic points of level essentially prime to $p$ in $\mathfrak{X}$ is Zariski dense. \end{thm} \begin{proof} Note that if Leopoldt's conjecture holds for $F$ and $p$, then the set of essentially automorphic points of level essentially prime to $p$ and the set of automorphic points of level essentially prime to $p$ coincide, so the second claim follows from the first. The work of Gouvea and Mazur when $F = \mathbb{Q}$ (see \cite{Emertonpadic}*{Corollary~2.28}) and of Chenevier \cite{ChenevierFern}*{Theorem~5.9} when $[F:\mathbb{Q}]$ is even implies that the Zariski closure in $\mathfrak{X}$ of the set of essentially automorphic points of level essentially prime to $p$ has dimension at least $1+d_F + 2[F:\mathbb{Q}]$. By \cref{thm:Hilbgeom}, $R^{\mathrm{univ}}$ is $\mathcal{O}$-flat, reduced, and equidimensional of dimension $2+d_F + 2[F:\mathbb{Q}]$. Then by \cref{thm:smoothHilb} and \cref{thm:genfiblem}, it suffices to prove that every irreducible component of $\Spec R^{\mathrm{univ}}$ contains an automorphic point of level essentially prime to $p$. Fix an irreducible component $\mathcal{C}$ of $\Spec R^{\mathrm{univ}}$, and let $\mu = \det \rho_{\pi,\iota}$. By \cref{thm:comps}, there is a finite $p$-power order character $\theta : G_{F,S} \rightarrow \mathcal{O}^\times$ (extending $E$ if necessary) and an irreducible component $\mathcal{C}_{\theta\mu}$ of $\Spec R^{\theta\mu}$ such that $\mathcal{C}_{\theta\mu}\subset \mathcal{C}$. Since $p>2$, there is a finite $p$-power order character $\eta : G_{F,S} \rightarrow \mathcal{O}^\times$ such that $\eta^2 = \theta$.
Twisting by $\eta$ gives an isomorphism $R^{\mu} \xrightarrow{\sim} R^{\theta\mu}$, so there is an irreducible component $\mathcal{C}_\mu$ of $\Spec R^\mu$ such that twisting by $\eta$ yields an isomorphism $\mathcal{C}_\mu \xrightarrow{\sim} \mathcal{C}_{\theta\mu}$. By our assumption on $\pi$, for each $v|p$ there is a choice of $\lambda_v \in (\mathbb{Z}_+^n)^{\mathrm{Hom}(F_v,\overline{\mathbb{Q}}_p)}$ such that $R_v^{\lambda_v,\mathrm{cr}} \ne 0$ and has a potentially diagonalizable irreducible component $\mathcal{C}_v$. Applying \cref{thm:mainHilbdet}, we deduce there is an automorphic point $x$ on $\mathcal{C}_\mu$ whose image in $\Spec R_v^\square$ lies on $\mathcal{C}_v$ for each $v|p$. In particular, $\rho_x|_{G_v}$ is crystalline for each $v|p$. By local-global compatibility, we deduce that $x$ has level prime to $p$. Then $\rho_x \otimes \eta$ is an automorphic point on $\mathcal{C}_{\theta\mu} \subset \mathcal{C}$ of level essentially prime to $p$, which completes the proof. \end{proof} \Cref{thm:Hilbdense} and \cref{thm:speczardense} immediately imply: \begin{cor}\label{thm:specHilbdense} Let the assumptions and notation be as in \cref{thm:Hilbdense}. Then the set of essentially automorphic points of level essentially prime to $p$ in $\Spec R^{\mathrm{univ}}$ is Zariski dense. If Leopoldt's conjecture holds for $F$ and $p$, then the set of automorphic points of level essentially prime to $p$ in $\Spec R^{\mathrm{univ}}$ is Zariski dense. \end{cor} \begin{rmk}\label{rmk:primetop} Even if Leopoldt's conjecture holds for $F$ and $p$, it is necessary to use automorphic points of level essentially prime to $p$, i.e. it is not in general true that the automorphic points of prime to $p$ level are Zariski dense in $\Spec R^{\mathrm{univ}}$. Indeed, let $\Gamma$ be the maximal pro-$p$ abelian quotient of $G_{F,S}$, and let $\Gamma_{\mathrm{tor}}$ denote its torsion subgroup.
Then $\mathcal{O}[[\Gamma]]$ is the universal deformation ring for lifts of $\det\overline{\rho}$, and taking determinants yields a $\CNL_{\mathcal{O}}$-algebra map $\mathcal{O}[[\Gamma]] \rightarrow R^{\mathrm{univ}}$ that takes crystalline points of $\Spec R^{\mathrm{univ}}$ to crystalline points of $\Spec \mathcal{O}[[\Gamma]]$. Applying \cref{thm:det} with $\mu$ equal to the Teichm\"{u}ller lift of $\det\overline{\rho}$, we see that $\det\rho^{\mathrm{univ}}$ is the universal deformation of $\det\overline{\rho}$, from which it follows that the map $\Spec R^{\mathrm{univ}} \rightarrow \Spec \mathcal{O}[[\Gamma]]$ is surjective. Hence, if the crystalline points are dense in $\Spec R^{\mathrm{univ}}$, they will be dense in $\Spec \mathcal{O}[[\Gamma]]$, which is not true in general. For example, say there are two places $v$ and $w$ above $p$ in $F$ such that $F_v$ and $F_w$ both contain a primitive $p$th root of unity. The natural map \[\mu_p(\mathcal{O}_{F_v}) \times \mu_p(\mathcal{O}_{F_w}) \longrightarrow \Gamma_{\mathrm{tor}} \] coming from class field theory is injective ($F$ is totally real and $p>2$), so we can find a character $\theta : \Gamma_{\mathrm{tor}} \rightarrow \mathcal{O}^\times$ (extending $\mathcal{O}$ if necessary) that is trivial on $\mu_p(\mathcal{O}_{F_v})$ but not on $\mu_p(\mathcal{O}_{F_w})$. This character $\theta$ determines an irreducible component of $\Spec \mathcal{O}[[\Gamma]]$, and any point on this component yields a character whose restriction to $\Gamma_{\mathrm{tor}}$ equals $\theta$. But since $F$ is totally real, any crystalline character of $G_{F,S}$ is of the form $\phi\varepsilon^m$ for some $m\in\mathbb{Z}$ and a finite order character $\phi$ that is unramified at all places above $p$. In particular a crystalline character of $G_{F,S}$ is trivial on $\mu_p(\mathcal{O}_{F_v})$ if and only if it is trivial on $\mu_p(\mathcal{O}_{F_w})$.
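To spell out the last step (a routine verification, which we sketch here rather than quote): a finite order character $\phi$ that is unramified at all places above $p$ is trivial on the image of the local units at every $v|p$, while the cyclotomic character acts on that image through the norm, and for $p>2$
\[
\mathrm{Nm}_{F_v/\mathbb{Q}_p}\bigl(\mu_p(\mathcal{O}_{F_v})\bigr) \subset \mu_p(\mathbb{Q}_p) = \{1\}.
\]
Hence any character of the form $\phi\varepsilon^m$ is trivial on the image of $\mu_p(\mathcal{O}_{F_v})$ for every $v|p$, which gives the asserted equivalence.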
\end{rmk} \begin{rmk}\label{rmk:overQ} We now compare \cref{thm:Hilbdense} with the previous results \cite{BockleDensity} and \cite{EmertonLocGlob}*{Theorem~1.2.3} for $F = \mathbb{Q}$. We will temporarily use homological normalizations (i.e. uniformizers correspond to arithmetic Frobenii and Hodge--Tate weights are normalized so that $\varepsilon$ has Hodge--Tate weight $1$) for the Galois representations associated to modular forms, as this is more common in the literature we quote. First, we claim that \cref{Hilbdense:aut} always holds for absolutely irreducible odd $\overline{\rho}$ when $F = \mathbb{Q}$. Indeed, there is $m\in\mathbb{Z}$ such that $\overline{\rho}\otimes \overline{\varepsilon}^m$ has Serre weight $k \in [2,p+1]$. By the strong form of Serre's conjecture, there is a cuspform $f$ of level prime to $p$ and (classical) weight $k$ such that $\overline{\rho}_{f,\iota} \cong \overline{\rho} \otimes \overline{\varepsilon}^m$. If $k\le p$, then $\rho_{f,\iota}|_{G_p}$ is potentially diagonalizable by \cite{GaoLiuFLPD}*{Theorem~3.0.3}. If $k = p+1$, then $\overline{\rho}_{f,\iota}|_{I_p}$ is an extension of the trivial character by the cyclotomic character, and \cite{KW2}*{Lemma~3.5} implies that $\rho_{f,\iota}$ is ordinary crystalline, hence potentially diagonalizable by \cite{BLGGT}*{Lemma~1.4.3}. Applying the twist $(|\cdot|_{\mathbb{A}}\circ\det)^m$ to the regular algebraic cuspidal automorphic representation of $\GL_2(\mathbb{A}_{\mathbb{Q}})$ generated by $f$ yields a regular algebraic cuspidal automorphic representation $\pi$ satisfying \cref{Hilbdense:aut} of \cref{thm:Hilbdense}.
Secondly, the assumption that $H^0(G_v,\ad^0(\overline{\rho})(1))=0$ is easily checked to be equivalent to the following (extending scalars if necessary): \begin{itemize} \item $\overline{\rho}|_{G_v} \not\cong \overline{\chi} \otimes \begin{pmatrix} 1 & \ast \\ & \overline{\varepsilon} \end{pmatrix}$ for any character $\overline{\chi} : G_v \rightarrow \mathbb{F}^\times$, and \item if $[F_v(\zeta_p):F_v] = 2$, then $\overline{\rho}$ is not isomorphic to the induction of a character $\overline{\theta} : G_{F_v(\zeta_p)} \rightarrow \mathbb{F}^\times$. \end{itemize} In particular, \cref{thm:Hilbdense} removes the assumption from \cite{BockleDensity} and \cite{EmertonLocGlob}*{Theorem~1.2.3} requiring the semisimplification of $\overline{\rho}|_{G_p}$ to be nonscalar. However, there is one situation covered by \cite{BockleDensity} and \cite{EmertonLocGlob}*{Theorem~1.2.3} that we do not treat here, namely if $p=3$ and the projective image of $\overline{\rho}|_{G_{F(\zeta_3)}}$ is conjugate to $\PSL_2(\mathbb{F}_3)$. By \cite{BLGGU2}*{Proposition~6.5}, if $p>2$ and $\overline{\rho}|_{G_{F(\zeta_p)}}$ acts absolutely irreducibly but does not have adequate image, then either \begin{itemize} \item $p = 3$ and the projective image of $\overline{\rho}|_{G_{F(\zeta_3)}}$ is conjugate to $\PSL_2(\mathbb{F}_3)$, or \item $p = 5$ and the projective image of $\overline{\rho}|_{G_{F(\zeta_5)}}$ is conjugate to $\PSL_2(\mathbb{F}_5)$. \end{itemize} The latter case does not occur when $\det\overline{\rho}$ is totally odd and $p=5$ is unramified in $F$. It would be possible to treat the former case using the main ideas in this article and $\GL_2$-automorphy lifting theorems, but this would also require proving \cref{thm:smoothHilb} using $\GL_2$-patching, instead of quoting \cite{MeSmooth}*{Theorem~B}, so we do not pursue it here.
\end{rmk} \subsection{An example}\label{sec:eg} We finish by using an example due to Serre (see \cite{RibetIrred}*{\S2}) to illustrate the subtleties, in particular the failure of \cref{thm:thelemma}, that can occur when $R^{\mathrm{loc}}$ is not regular. We again use homological normalizations (i.e. uniformizers correspond to arithmetic Frobenii and Hodge--Tate weights are normalized so that $\varepsilon$ has Hodge--Tate weight $1$) for the Galois representations associated to modular forms, to be consistent with the literature we quote. There is a newform $f$ of weight $2$ and level $\Gamma_1(13)$, with $q$-expansion (see \cite{LMFDB1324a}) \begin{equation}\label{eq:qexp} f(q) = q + (-\zeta_6-1)q^2 + (2\zeta_6-2)q^3 + \zeta_6q^4 + (-2\zeta_6+1)q^5 + \cdots, \end{equation} where $\zeta_6 = e^{\frac{2\pi i}{6}}$. The coefficient field of $f$ is $\mathbb{Q}(\sqrt{-3})$, and the nebentypus of $f$ is the character $(\mathbb{Z}/13\mathbb{Z})^\times \rightarrow \mathbb{Q}(\sqrt{-3})^\times$ given by $2\mapsto \zeta_6$. Further, $f$ and its conjugate are a basis for $S_2(\Gamma_1(13))$. We take $E = \mathbb{Q}_3(\sqrt{-3})$, and let \[ \rho_f : G_{\mathbb{Q},\{3,13\}} \longrightarrow \GL_2(E) \] denote the $(\sqrt{-3})$-adic representation attached to $f$, and let \[ \overline{\rho}_f : G_{\mathbb{Q},\{3,13\}} \longrightarrow \GL_2(\mathbb{F}_3) \] be the residual representation. The residual representation $\overline{\rho} = \overline{\rho}_f$ is absolutely irreducible; however, its restriction to $G_{\mathbb{Q}(\zeta_3)}$ has abelian image. In particular, $H^0(G_3,\ad^0(\overline{\rho})(1)) \ne 0$. Let $\mu : G_{\mathbb{Q},\{3,13\}} \rightarrow \mathcal{O}^\times$ be the continuous character such that $\mu\varepsilon^{-1}$ is the quadratic character of $\mathrm{Gal}(\mathbb{Q}(\sqrt{13})/\mathbb{Q})$. Then $\det\overline{\rho}_f = \overline{\mu}$, the reduction of $\mu$ modulo $\mathfrak{m}_{\mathcal{O}}$.
Let $R^\mu$ be the universal determinant $\mu$-deformation ring for $\overline{\rho}$. Every irreducible component of $\Spec R^\mu$ has dimension at least $3$ by \cref{thm:dimdet}. Let $R^{\mathrm{loc}} = R_3^{\square,\mu}$. By \cref{thm:gendetpres} and the local Euler--Poincar\'{e} characteristic formula, $\dim R^{\mathrm{loc}} \ge 7$. We fix a lift in the class of the universal determinant $\mu$-deformation of $\overline{\rho}$, hence a $\CNL_{\mathcal{O}}$-algebra morphism $R^{\mathrm{loc}} \rightarrow R^\mu$. Note that $\rho_f|_{G_3}$ is crystalline with Hodge--Tate weights $\{0,1\}$, and $\det\rho_f|_{I_3} = \mu|_{I_3}$. So twisting $\rho_f|_{G_3}$ by an unramified character, if necessary, we see that $R_3^{(0,0),\mathrm{cr},\mu} \ne 0$. We let $X^{\mathrm{loc}} =\Spec R_3^{(0,0),\mathrm{cr},\mu} \subset \Spec R^{\mathrm{loc}}$, so $\dim X^{\mathrm{loc}} = 5$ by \cref{thm:crdefringdet}. In particular, for any irreducible component $\mathcal{C}$ of $\Spec R^\mu$, we have \[ \dim \mathcal{C} + \dim X^{\mathrm{loc}} - \dim R^{\mathrm{loc}} \ge 1.\] But, letting $X = X^{\mathrm{loc}} \times_{\Spec R^{\mathrm{loc}}} \Spec R^\mu$, we claim there is an irreducible component $\mathcal{C}$ of $\Spec R^\mu$ such that \[\mathcal{C} \cap (X\otimes_{\mathcal{O}} E) = \emptyset.\] (However, the justification below does show that a different choice of $X^{\mathrm{loc}}$ yields a nontrivial intersection with $\mathcal{C}$.) To see this, first note that by a version of Carayol's Lemma \cite{DiamondRefined}*{Lemma~2.1}, there is a modular lift $\rho_g$ of $\overline{\rho}_f$ with $g\in S_2(\Gamma_1(9\cdot 13))$ and $\det\rho_g = \mu$. By local-global compatibility, we have $\rho_g|_{I_{13}} = \mu|_{I_{13}} \oplus 1$. Let $\mathcal{C}$ be an irreducible component of $\Spec R^\mu$ containing the point induced by $\rho_g$.
Under the map $R_{13}^\square \rightarrow R^\mu$, the image of $\mathcal{C}$ in $\Spec R_{13}^\square$ is contained in some characteristic $0$ irreducible component of $\Spec R_{13}^\square$. Inertial types are locally constant on $\Spec R_{13}^\square[1/p]$ (see \cite{GeeTypes}*{\S2}), so we deduce that for any $\overline{\mathbb{Q}}_p$-point $x$ on $\mathcal{C}$, the semisimplification of $\rho_x|_{I_{13}}$ is isomorphic to $\mu|_{I_{13}}\oplus 1$. Since $\mu|_{I_{13}}\ne 1$, we further conclude that $\rho_x|_{I_{13}} \cong\mu|_{I_{13}}\oplus 1$. Now assume that $\mathcal{C} \cap (X\otimes_{\mathcal{O}} E) \ne \emptyset$, and choose a $\overline{\mathbb{Q}}_p$-point $x$ in this intersection. Then $\rho_x$ is unramified outside of $\{3,13,\infty\}$, is crystalline at $3$ with Hodge--Tate weights $\{0,1\}$, and $\rho_x|_{I_{13}} \cong\mu|_{I_{13}}\oplus 1$. The $q$-expansion \eqref{eq:qexp} shows $f$ is $3$-ordinary, so \[ \overline{\rho}_f|_{G_3} \cong \begin{pmatrix} \chi_1 & \ast \\ & \chi_2 \end{pmatrix}, \] with $\chi_2$ unramified and $\chi_1|_{I_3} = \overline{\varepsilon}|_{I_3}$. Using \cite{KW2}*{Lemma~3.5}, we see that $\rho_x$ is also ordinary, and since $\overline{\rho}_f$ is $3$-distinguished, we can apply \cite{SWirreducible}*{Theorem in \S1} to conclude that $\rho_x$ is modular. But by local-global compatibility, it must arise from an eigenform in $S_2(\Gamma_1(13))$ with a quadratic nebentypus. This is a contradiction, as $S_2(\Gamma_1(13))$ is spanned by $f$ and its conjugate, and the nebentypus of $f$ has order $6$. \begin{bibdiv} \begin{biblist} \bibselect{/home/patrick/Dropbox/Research/Refs} \end{biblist} \end{bibdiv} \end{document}
\begin{document} \begin{JGGarticle} {Reflections in Conics, Quadrics and Hyperquadrics via Clifford Algebra} {} {Daniel Klawitter} {\JGGaddress{ Dresden University of Technology, Germany} } \begin{JGGabstract}\\ In this article we present a new and not yet fully explored geometric algebra model. With this model a generalization of the conformal geometric algebra model is achieved. We discuss the geometric objects that can be represented. Furthermore, we show that the Pin group of this geometric algebra corresponds to the group of inversions with respect to quadrics in principal position. We discuss the construction for the two- and three-dimensional cases in detail and give the construction for arbitrary dimension.\\ [1mm]{\em Key Words: Clifford algebra, geometric algebra, generalized inversion, conic, quadric, hyperquadric}.\\ {\em MSC2010: 15A66, 51B99, 51M15, 51N15}. \end{JGGabstract} \section{Algebraic Background} \noindent Before we introduce the quadric geometric algebra we review some algebraic background. \subsection{Geometric Algebra} \begin{definition} Let $V$ be a real vector space of dimension $n$. Furthermore, let $b:V\times V\to \mathds{R}$ be a symmetric bilinear form on $V$. The pair $(V,b)$ is called \emph{quadratic space}. \end{definition} \noindent We denote the matrix belonging to $b$ by $\mathrm{B}=(\mathrm{B}_{ij})$ with $1\leq i,j\leq n$. Therefore $b(x_i,x_j)=\mathrm{B}_{ij}$ for basis vectors $x_i$ and $x_j$. \begin{definition} The Clifford algebra is defined by the relations \begin{equation} \label{EQ1} x_ix_j+x_jx_i=2\mathrm{B}_{ij},\quad 1\leq i,j\leq n. \end{equation} \end{definition} \noindent Usually the algebra is denoted by $\mathcal{C}\ell(V,b)$. By Sylvester's law of inertia we can always find a basis $\lbrace e_1,\dots,e_n \rbrace$ of $V$ such that $e_i^2$ is either $1,-1$ or $0$. \begin{definition} The numbers of basis vectors that square to $1$, $-1$, and $0$, respectively, form the \emph{signature} $(p,q,r)$.
If $r\neq 0$ we call the geometric algebra \emph{degenerate}. We will denote this Clifford algebra by $\mathcal{C}\ell_{(p,q,r)}$. \end{definition} \begin{remark} A quadratic real space with signature $(p,q,0)$ is abbreviated by $\mathds{R}^{p,q}$. \end{remark} \noindent With the new basis $\lbrace e_1,\dots,e_n \rbrace$ the relations \eqref{EQ1} become \begin{equation*} e_ie_j+e_je_i=0,\quad i\neq j \mbox{ and } e_ie_i=\mathrm{B}_{ii}. \end{equation*} In the remainder of this paper we shall abbreviate products of basis elements with lists \begin{equation*} e_{12\dots k}:=e_1e_2\dots e_k,\, \mbox{with $0\leq k\leq n$}. \end{equation*} The $2^n$ monomials \begin{equation*} e_{i_1}e_{i_2}\dots e_{i_k},\quad 0\leq k\leq n \end{equation*} form the standard basis of the Clifford algebra. The Clifford algebra and the exterior algebra are canonically isomorphic (as vector spaces). The dimension of a Clifford algebra is calculated by \begin{equation*} \dim \mathcal{C}\ell_{(p,q,r)}=\sum\limits_{i=0}^n{\dim {\bigwedge}^i V}=\sum\limits_{i=0}^n\binom{n}{i}=2^n. \end{equation*} Moreover, the Clifford algebra $\mathcal{C}\ell_{(p,q,r)}$ possesses a $\mathds{Z}_2$-grading,\ {\it i.e.}, it can be decomposed into an even and an odd part \begin{equation*} \mathcal{C}\ell_{(p,q,r)}=\mathcal{C}\ell_{(p,q,r)}^+ \oplus \mathcal{C}\ell_{(p,q,r)}^-=\bigoplus\limits_{\substack{i=0\\i \text{ even}}}^n{{\bigwedge}^i V} \oplus \bigoplus\limits_{\substack{i=0\\i \text{ odd}}}^n{{\bigwedge}^i V}. \end{equation*} The even part $\mathcal{C}\ell_{(p,q,r)}^+$ is a subalgebra, because the product of two even-graded monomials is again even-graded, since the generators cancel only in pairs. Elements contained in ${\bigwedge}^i V$ are called \emph{$i$-blades} and any $\mathds{R}$-linear combination of $i$-blades is called a \emph{multi-vector}. The product of invertible $1$-blades is called a \emph{versor}.
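The relations above already determine the product of any two basis blades. The following minimal sketch (our illustration, not part of the paper; all names are hypothetical) multiplies basis blades of $\mathcal{C}\ell_{(p,q,r)}$ with respect to an orthogonal basis, representing the blade $e_{i_1}\dots e_{i_k}$ as the sorted tuple $(i_1,\dots,i_k)$:

```python
def blade_mul(a, b, squares):
    """Product of two basis blades in a Clifford algebra with a
    diagonal metric e_i * e_i = squares[i].  Blades are sorted tuples
    of generator indices; returns (sign, blade)."""
    sign, gens = 1, list(a)
    for g in b:
        # move e_g left past larger generators: e_i e_j = -e_j e_i
        pos = len(gens)
        while pos > 0 and gens[pos - 1] > g:
            pos -= 1
            sign = -sign
        if pos > 0 and gens[pos - 1] == g:
            # adjacent equal generators contract: e_g e_g = squares[g]
            sign *= squares[g]
            gens.pop(pos - 1)
        else:
            gens.insert(pos, g)
    return sign, tuple(gens)

sig = [1, 1, -1]                                    # e.g. the algebra Cl(2,1)
assert blade_mul((0,), (1,), sig) == (1, (0, 1))    # e1 e2 = e12
assert blade_mul((1,), (0,), sig) == (-1, (0, 1))   # e2 e1 = -e12
assert blade_mul((2,), (2,), sig) == (-1, ())       # a generator squaring to -1
assert blade_mul((0, 1), (0, 1), sig) == (-1, ())   # (e12)^2 = -1
```

Extending this routine by $\mathds{R}$-linearity over the $2^n$ basis blades gives the full geometric product on multi-vectors.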
\subsection{Clifford Algebra Involutions} \noindent Two involutions that exist on every Clifford algebra are of interest for our purposes. The \emph{conjugation} is an \emph{anti-automorphism} denoted by an asterisk, see \cite{selig:geometricfundamentalsofrobotics}. Its effect on generators is given by $e_i^\ast=-e_i$. There is no effect on scalars. Extending the conjugation as an anti-automorphism yields \begin{equation} (e_{i_1}e_{i_2}\dots e_{i_k})^\ast=(-1)^k e_{i_k}\dots e_{i_2}e_{i_1},\quad 0\leq k \leq n,\ 1 \leq i_1<\ldots <i_k\leq n. \end{equation} The geometric product of a $1$-blade $\mathfrak{v}=\sum\limits_{i=1}^{n}{x_ie_i}\in{\bigwedge}^1 V$ with its conjugate results in \begin{equation*} \mathfrak{v}\mathfrak{v}^\ast=-x_1^2-x_2^2-\dots-x_p^2+x_{p+1}^2+\dots +x_{p+q}^2=-b(\mathfrak{v},\mathfrak{v}). \end{equation*} The map $N:\mathcal{C}\ell_{(p,q,r)}\to\mathcal{C}\ell_{(p,q,r)}$ with $N(\mathfrak{v}):=\mathfrak{v}\mathfrak{v}^\ast$ is called the \emph{norm} of the Clifford algebra, and the inverse of a vector $\mathfrak{v}\in\bigwedge^1 V$ is computed by $\mathfrak{v}^{-1}:=\frac{\mathfrak{v}^\ast}{N(\mathfrak{v})}$. For general multi-vectors the computation of inverses can be found in \cite{fontijne:EfficientImplementationofGeometricAlgebra}. Note that in general not every element is invertible. The other involution we are dealing with is the \emph{main involution}. It is denoted by $\alpha$ and defined by \begin{equation*} \alpha(e_{i_1}e_{i_2}\dots e_{i_k})=(-1)^k e_{i_1}e_{i_2}\dots e_{i_k},\quad 0\leq k \leq n,\ 1 \leq i_1<\ldots <i_k\leq n. \end{equation*} The main involution has no effect on the even subalgebra and it commutes with the conjugation, {\it i.e.}, $\alpha(\mathfrak{M}^\ast)=\alpha(\mathfrak{M})^\ast$ for arbitrary $\mathfrak{M}\in\mathcal{C}\ell_{(p,q,r)}$.
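Both involutions act on a basis $k$-blade only through a grade-dependent sign, so they are easy to tabulate; a quick numerical check (our illustration, not from the text):

```python
def conjugation_sign(k):
    # (e_{i1}...e_{ik})^* = (-1)^k e_{ik}...e_{i1}; sorting the reversed
    # product back costs k(k-1)/2 transpositions of distinct generators
    return (-1) ** k * (-1) ** (k * (k - 1) // 2)

def main_involution_sign(k):
    # the main involution negates every generator of a k-blade
    return (-1) ** k

# conjugation signs repeat with period 4 in the grade
assert [conjugation_sign(k) for k in range(6)] == [1, -1, -1, 1, 1, -1]
# the main involution fixes the even subalgebra
assert all(main_involution_sign(k) == 1 for k in range(0, 12, 2))
```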
\subsection{Clifford Algebra Products} \noindent On $1$-blades, {\it i.e.}, vectors $\mathfrak{a},\mathfrak{b}\in{\bigwedge}^1 V$ we can write the inner product in terms of the geometric product \begin{equation}\label{EQ10} \mathfrak{a}\cdot \mathfrak{b}:=\frac{1}{2}(\mathfrak{a}\mathfrak{b}+\mathfrak{b}\mathfrak{a}). \end{equation} A generalization of the inner product to blades can be found in \cite{hestenes:cliffordalgebra}. For $\mathfrak{A}\in{\bigwedge}^k V,\mathfrak{B}\in{\bigwedge}^l V$ the generalized inner product is defined by \begin{equation*} \mathfrak{A}\cdot \mathfrak{B}:=\left[ \mathfrak{A}\mathfrak{B} \right]_{\vert k-l\vert}, \end{equation*} where $\left[ \cdot \right]_{m}$, $m\in\mathds{N}$, denotes the grade-$m$ part. There is another product on $1$-blades, {\it i.e.}, the outer (or exterior) product \begin{equation}\label{EQ12} \mathfrak{a}\wedge \mathfrak{b}:=\frac{1}{2}(\mathfrak{a}\mathfrak{b}-\mathfrak{b}\mathfrak{a}). \end{equation} This product can also be generalized to blades, see again \cite{hestenes:cliffordalgebra}. For $\mathfrak{A}\in{\bigwedge}^k V,\mathfrak{B}\in{\bigwedge}^l V$ the generalized outer product is defined by \begin{equation*} \mathfrak{A}\wedge \mathfrak{B}:=\left[ \mathfrak{A}\mathfrak{B} \right]_{k+l}. \end{equation*} Thus, the exterior product of $\bigwedge V$ can be expressed in terms of the geometric product. From equations \eqref{EQ10} and \eqref{EQ12} it follows that for $1$-blades the geometric product can be written as the sum of the inner and the outer product \begin{equation*} \mathfrak{ab}=\mathfrak{a}\cdot \mathfrak{b}+\mathfrak{a}\wedge \mathfrak{b}. \end{equation*} More generally, this can be defined for multivectors with the commutator and the anti-commutator product, see \cite{perwass:geometricalgebra}. For treating geometric entities within this algebra context the definition of the \emph{inner product null space} and its dual, the \emph{outer product null space}, is needed.
\begin{definition} The \emph{inner product null space} (IPNS) of a blade $\mathfrak{A}\in{\bigwedge}^k V$, cf. \cite{perwass:geometricalgebra}, is defined by \begin{equation*} \mathds{NI}(\mathfrak{A}):=\left\lbrace \mathfrak{v}\in{\bigwedge}^1 V:\mathfrak{v}\cdot \mathfrak{A}=0 \right\rbrace. \end{equation*} Moreover, the \emph{outer product null space} (OPNS) of a blade $\mathfrak{A}\in{\bigwedge}^k V$ is defined by \begin{equation*} \mathds{NO}(\mathfrak{A}):=\left\lbrace \mathfrak{v}\in{\bigwedge}^1 V:\mathfrak{v}\wedge \mathfrak{A}=0 \right\rbrace. \end{equation*} \end{definition} \subsection{Pin and Spin groups} \noindent With respect to the geometric product the units of a Clifford algebra, denoted by $\mathcal{C}\ell_{(p,q,r)}^\times$, form a group. \begin{definition} The \emph{Clifford group} is defined by \begin{equation*} \Gamma(\mathcal{C}\ell_{(p,q,r)}):=\left\lbrace \mathfrak{g}\in\mathcal{C}\ell_{(p,q,r)}^\times\mid \alpha(\mathfrak{g})\mathfrak{v} \mathfrak{g}^{-1}\in {\bigwedge}^1 V \mbox{ for all } \mathfrak{v}\in{\bigwedge}^1 V \right\rbrace. \end{equation*} \end{definition} \noindent A proof that $\Gamma(\mathcal{C}\ell_{(p,q,r)})$ is indeed a group with respect to the geometric product can be found in \cite{gallier:cliffordalgebrascliffordgroups}. We define two important subgroups of the Clifford group. \begin{definition} The \emph{Pin group} is the subgroup of the Clifford group with\linebreak $N(\mathfrak{g})=\pm 1$. \begin{equation*} \mbox{Pin}_{(p,q,r)}\!:\!=\left\lbrace \mathfrak{g}\in\mathcal{C}\ell_{(p,q,r)}\mid \mathfrak{gg}^\ast\!=\!\pm 1\mbox{ and } \alpha(\mathfrak{g})\mathfrak{v}\mathfrak{g}^\ast\in {\bigwedge}^1 V \mbox{ for all } \mathfrak{v}\in{\bigwedge}^1 V \right\rbrace.
\end{equation*} Furthermore, we define the \emph{Spin group} by $\mathrm{Pin}_{(p,q,r)}\cap \mathcal{C}\ell_{(p,q,r)}^+$ \begin{equation*} \mbox{Spin}_{(p,q,r)}\!:=\!\left\lbrace \mathfrak{g}\in\mathcal{C}\ell_{(p,q,r)}^+\mid \mathfrak{gg}^\ast\!=\!\pm 1\mbox{ and } \alpha(\mathfrak{g})\mathfrak{v}\mathfrak{g}^\ast\in {\bigwedge}^1 V \mbox{ for all } \mathfrak{v}\in{\bigwedge}^1 V \right\rbrace. \end{equation*} \end{definition} \begin{remark} For non-degenerate Clifford algebras the Pin group is a double cover of the orthogonal group of the quadratic space $(V,b)$. Moreover, the Spin group is a double cover of the special orthogonal group of $(V,b)$. \end{remark} \section{Quadric Geometric Algebra}\label{section3} \noindent The geometric algebra we will study was first introduced by Zamora \cite{zamora:geometricalgebra}. We discuss the planar, {\it i.e.}, two-dimensional case in detail before we move on to higher dimensions. \subsection{The Embedding} The quadric geometric algebra for the two-dimensional case is constructed with a Clifford algebra over a six-dimensional vector space $V=\mathds{R}^6$. In fact, the term quadric could be replaced by the term conic in the two-dimensional case; nevertheless, we uniformly speak of the quadric geometric algebra and abbreviate this term by Q$n$GA, where $n$ denotes the dimension of the base space. The quadratic form we are using is derived from the quadratic form of the conformal geometric algebra used in \cite{dorst:gaviewer}: \[\mathrm{Q}=\left( \begin{array}{cccccc} 0 & 0 & -1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0 & 0 & 0\\ -1 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & -1\\ 0 & 0 & 0 & 0 & 1 & 0\\ 0 & 0 & 0 & -1 & 0 & 0\\ \end{array}\right) .\] The signature of the resulting algebra is $(p,q,r)=(4,2,0)$. For each axis, {\it i.e.}, the $x$- and the $y$-axis, a conformal embedding is performed, see \cite{dorst:geometricalgebra}.
Therefore, we have the embedding $\eta:\mathds{R}^2\rightarrow\bigwedge^1 V$, \begin{align} \mathds{R}^2\ni P&\mapsto \mathfrak{p}\in{\bigwedge}^1 V,\nonumber\\ P=(x,y)^\mathrm{T}&\mapsto e_1 + x e_2 + \frac{1}{2} x^2 e_3 + e_4 + y e_5 +\frac{1}{2} y^2 e_6=\mathfrak{p}.\label{EQ21} \end{align} Affine points $(x,y)^\mathrm{T}\in\mathds{R}^2$ are embedded as null vectors. This means \begin{equation}\label{EQ22} \eta(P)^2=0 \mbox{ for $P\in\mathds{R}^2$.} \end{equation} The projection on the generator subspace spanned by $e_1,e_2$, and $e_3$ is denoted by the subscript $x$ and the projection on $e_4,e_5,e_6$ by the subscript $y$. Due to the fact that the embedding is conformal (see \cite{dorst:geometricalgebra,zamora:geometricalgebra}) for both axes we get the additional conditions: \begin{equation}\label{EQ23} \eta(P)_x^2=0,\quad \eta (P)_y^2=0. \end{equation} In the following we call grade-1 elements satisfying \eqref{EQ22} and \eqref{EQ23} \emph{embedded points}. Let $P_1=(x_1,y_1)^\mathrm{T}\in\mathds{R}^2$ and $P_2=(x_2,y_2)^\mathrm{T}\in\mathds{R}^2$ be two points. The inner product of their images under $\eta$ results in \begin{align*} \eta(P_1)\cdot\eta(P_2)&=-\frac{1}{2}x_2^2+x_1x_2-\frac{1}{2}x_1^2-\frac{1}{2}y_2^2+y_1y_2-\frac{1}{2}y_1^2\\ &=-\frac{1}{2}\left( (x_2-x_1)^2+(y_2-y_1)^2\right)\\ &=-\frac{1}{2}d^2_E(P_1,P_2), \end{align*} where $d_E(P_1,P_2)$ denotes the Euclidean distance between the points $P_1$ and $P_2$. Note that this formula is only valid for normalized null vectors, {\it i.e.}, the homogeneous factors have to be equal to one. These vectors can be interpreted as images of points under $\eta$ and we call them normalized. They are characterized by \begin{equation}\label{EQ27} -e_3\cdot \mathfrak{p} = 1,\quad -e_6\cdot \mathfrak{p}=1. \end{equation} The inner product of an arbitrary embedded point with $e_3$ or $e_6$ is constant. Therefore, we can interpret these elements as points at infinity, see \cite{dorst:geometricalgebra}.
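This distance formula can be checked numerically: on coefficient vectors the inner product of $1$-blades is the bilinear form $\mathfrak{u}\cdot\mathfrak{v}=\mathfrak{u}^\mathrm{T}\mathrm{Q}\,\mathfrak{v}$ with the matrix $\mathrm{Q}$ from above. A small sketch (our code, with hypothetical names, not from the paper):

```python
import numpy as np

# quadratic form of the planar QGA, basis order e1, ..., e6
Q = np.array([[ 0, 0, -1,  0, 0,  0],
              [ 0, 1,  0,  0, 0,  0],
              [-1, 0,  0,  0, 0,  0],
              [ 0, 0,  0,  0, 0, -1],
              [ 0, 0,  0,  0, 1,  0],
              [ 0, 0,  0, -1, 0,  0]], dtype=float)

def eta(x, y):
    """Coefficient vector of the embedded point (x, y), Eq. (21)."""
    return np.array([1.0, x, x * x / 2, 1.0, y, y * y / 2])

def ip(u, v):
    """Inner product of 1-blades: u . v = u^T Q v."""
    return float(u @ Q @ v)

p1, p2 = eta(1.0, 2.0), eta(-3.0, 0.5)
assert abs(ip(p1, p1)) < 1e-12            # embedded points are null vectors
# the inner product recovers -1/2 times the squared Euclidean distance
assert abs(ip(p1, p2) + 0.5 * ((1 + 3) ** 2 + (2 - 0.5) ** 2)) < 1e-12
```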
Furthermore, the combination of the conditions \eqref{EQ27} results in \begin{equation*} -(e_3+e_6)\cdot \mathfrak{p} =2. \end{equation*} Thus, the geometric entity corresponding to $e_3+e_6$ can be interpreted as a point at infinity. The elements $e_3$ and $e_6$ represent the ideal points corresponding to each axis and $e_3+e_6$ represents a point at infinity contained in both axes. Geometrically, these three algebra elements describe the same point, {\it i.e.}, the point at infinity $\infty$, although they differ algebraically.\\ \noindent There are grade-1 elements that satisfy the conditions \eqref{EQ22} and \eqref{EQ23} without having a preimage in $\mathds{R}^2$. Examples are $e_3,e_6,e_3+e_6$ and algebra elements of the form: \begin{equation*} \mathfrak{u}_1=e_1+x_0e_2+\frac{1}{2}x_0^2e_3+e_6,\quad\quad \mathfrak{u}_2=e_3+e_4+y_0 e_5+\frac{1}{2}y_0^2 e_6. \end{equation*} If we determine the Euclidean distance of an embedded point to $\mathfrak{u}_1$ or $\mathfrak{u}_2$, the result is a complex number and depends on $x_0$ or $y_0$, respectively. Hence, these elements do not represent points. \subsection{Geometric entities} \noindent To calculate the preimage $\eta^{-1}$ of $\mathfrak{p}\in{\bigwedge}^1V$ representing an embedded point, {\it i.e.}, an algebra element fulfilling \eqref{EQ22} and \eqref{EQ23}, we determine its IPNS ($\mathds{NI}(\mathfrak{p})$) with respect to the embedding. This is called the \emph{geometric inner product null space} and, dually, the \emph{geometric outer product null space}, see \cite{perwass:geometricalgebra}.
\begin{definition} The \emph{geometric inner product null space} (GIPNS) and, dually, the \emph{geometric outer product null space} (GOPNS) of a $k$-blade $\mathfrak{A}\in{\bigwedge}^kV$ are defined by \begin{align*} \mathds{NI}_G(\mathfrak{A})&:=\left\lbrace (x,y)^\mathrm{T}\in\mathds{R}^2:\eta(x,y) \cdot \mathfrak{A}=0\right\rbrace, \\ \mathds{NO}_G(\mathfrak{A})&:=\left\lbrace (x,y)^\mathrm{T}\in\mathds{R}^2:\eta(x,y) \wedge \mathfrak{A}=0\right\rbrace . \end{align*} \end{definition} \begin{remark} When dealing with an algebra element and the corresponding geometric entity, we will explicitly mention which null space is meant. For example we will talk about inner product conics. This means the inner product null space defines a conic in $\mathds{R}^2$. \end{remark} \noindent Before we start the examination of geometric objects occurring in this model we define special $5$-blades that are necessary to change from inner product to outer product null spaces and vice versa. \begin{definition}\label{DEF8} On the one hand the $5$-blade \[\mathfrak{I}=e_2 \wedge e_5 \wedge e_1 \wedge e_4 \wedge (e_3+e_6)\] maps outer product null spaces to inner product null spaces \begin{equation*} \mathfrak{I}:{\bigwedge}^i V\rightarrow {\bigwedge}^{k-i} V,\quad\quad {\bigwedge}^i V \ni \mathfrak{v}\mapsto \mathfrak{v}\cdot \mathfrak{I} \in{\bigwedge}^{k-i} V, \end{equation*} with $i\in\left\lbrace 1,\ldots,4 \right\rbrace $ and $k=5$ for the planar quadric geometric algebra. On the other hand \[\mathfrak{I}^*:=e_2\wedge e_5 \wedge e_3\wedge e_6 \wedge(e_1+e_4)\] maps dual elements back to normal elements, {\it i.e.}, inner product null spaces to outer product null spaces. Left and right multiplication with these $5$-blades differ only by the factor $-1$ and describe the same geometric entity. \end{definition} \noindent With Def.
\ref{DEF8} we get (see \cite{zamora:geometricalgebra}) \begin{equation*} \mathds{NI}_G(\mathfrak{A})=\mathds{NO}_G(\mathfrak{A}\cdot \mathfrak{I}),\quad\quad \mathds{NO}_G(\mathfrak{A})=\mathds{NI}_G(\mathfrak{A}\cdot \mathfrak{I}^*). \end{equation*} Note that dualization is realized with the inner product. Now we take a look at the inner product null space of grade-1 elements that are not embedded points. Therefore, at least one of the conditions \eqref{EQ22} or \eqref{EQ23} is not satisfied. Let $\mathfrak{c}=-2a_1e_1+2a_2e_2-a_3e_3-2a_4e_4+2a_5e_5-a_6e_6$ be a general $1$-blade. The GIPNS is \begin{align*} \mathds{NI}_G(\mathfrak{c})&=\left\lbrace (x,y)^\mathrm{T} \in\mathds{R}^2 \mid\eta(x,y)\cdot \mathfrak{c}=0 \right\rbrace\\ &= \left\lbrace (x,y)^\mathrm{T} \in\mathds{R}^2 \mid a_1x^2+2a_2x+a_3+a_4y^2+2a_5y+a_6=0\right\rbrace . \end{align*} The GIPNS is a conic in principal position, because there is no term containing $xy$. Any conic is given by a symmetric coefficient matrix via \[\begin{pmatrix} 1 &x &y \end{pmatrix}\begin{pmatrix} a_{11}&a_{12}&a_{13}\\ a_{12}&a_{22}&a_{23}\\ a_{13}&a_{23}&a_{33}\\ \end{pmatrix}\begin{pmatrix} 1\\x\\y \end{pmatrix}=0.\] Therefore, we can define a bijection $\chi$ between those symmetric matrices which represent conics in principal position and $1$-blades by \begin{equation}\label{EQ37} \begin{pmatrix} a_0&a_2&a_5\\ a_2&a_1&0\\ a_5&0&a_4 \end{pmatrix}\mapsto 2a_1e_1-2a_2e_2+a_3e_3+2a_4e_4-2a_5e_5+a_6e_6. \end{equation} For the bijection \eqref{EQ37} we assume that $a_3:=\frac{1}{2}a_0$ and $a_6:=\frac{1}{2}a_0$. It would be sufficient to demand that $a_3+a_6=a_0$ to obtain the same conic, because the constant term equals $a_3+a_6$. This does not change the GIPNS of the conic. With Eq.
\eqref{EQ37} embedded points can be interpreted as circles whose radii are equal to zero.\\ \noindent After dualization an inner product conic $\mathfrak{c}$ becomes an outer product conic $\hat{\mathfrak{c}}$ that is a four-blade and can be generated by the outer product of four embedded points. These four points lie on the conic because \[\mathfrak{p}_i\in\mathds{NO}_G(\mathfrak{p}_1 \wedge \mathfrak{p}_2\wedge \mathfrak{p}_3 \wedge \mathfrak{p}_4), \mbox{ for $i=1,\ldots,4$}.\] The natural question that arises is: Is there a way to classify conics in this model? For this purpose we study the incidence of the conics with the three additional ideal points. If a conic contains both ideal elements $e_3$ and $e_6$, it automatically contains $e_3+e_6$ as well. First we look at the entities $\mathfrak{a}\in{\bigwedge}^1 V$ that contain the ideal points $e_3,e_6$, and therefore, also $e_3+e_6$. Thus, we get the conditions \begin{equation*} \mathfrak{a}\cdot e_3=2a_1=0,\quad \mathfrak{a}\cdot e_6=2a_4=0. \end{equation*} Hence, $a_1$ and $a_4$ have to vanish. The corresponding algebra element has the form \[\mathfrak{l}=2a_2e_2-a_3e_3+2a_5e_5-a_6e_6.\] Its GIPNS is calculated by \[\mathds{NI}_G(\mathfrak{l})=\left\lbrace (x,y)^\mathrm{T}\in\mathds{R}^2 \mid 2a_2x+2a_5y+a_3+a_6=0 \right\rbrace.\] Clearly, this entity describes an inner product line, and every line passes through $e_3, e_6$, and $e_3+e_6$. An algebra element that contains just $e_3$ or $e_6$ represents a parabola whose axis is parallel to the $x$-axis or the $y$-axis, respectively. An element that contains $e_3+e_6$, but neither $e_3$ nor $e_6$, is given by the condition $\mathfrak{a}\cdot (e_3+e_6)=2a_1+2a_4=0$. This means $a_1=-a_4$ and the corresponding conic is an equilateral hyperbola, {\it i.e.}, the asymptotes enclose an angle of $90^\circ$. All other conics in principal axes position can be obtained by the wedge product of four embedded points or by using the bijection between conics and the algebra elements, cf. Eq. \eqref{EQ37}.
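The incidence computations above are plain linear algebra over the coefficient vectors: the inner product of an embedded point with a $1$-blade is a bilinear form built from the block $\mathrm{D}$ of the quadratic form given later for the $n$-dimensional model. The following sketch (our own Python, with hypothetical helper names \texttt{ip}, \texttt{eta}, \texttt{blade}; not the paper's code) checks numerically that $\eta(x,y)\cdot\mathfrak{c}$ reproduces the conic polynomial $a_1x^2+2a_2x+a_3+a_4y^2+2a_5y+a_6$, and that $\mathfrak{a}\cdot e_3=2a_1$, $\mathfrak{a}\cdot e_6=2a_4$:

```python
# Block D of the quadratic form (two copies give the bilinear form of R^(4,2)).
D = [[0, 0, -1],
     [0, 1, 0],
     [-1, 0, 0]]

def ip(v, w):
    """Inner product of grade-1 coefficient vectors over (e1,...,e6)."""
    return sum(v[3*b + i] * D[i][j] * w[3*b + j]
               for b in range(2) for i in range(3) for j in range(3))

def eta(x, y):
    """Point embedding e1 + x e2 + x^2/2 e3 + e4 + y e5 + y^2/2 e6."""
    return [1.0, x, 0.5*x*x, 1.0, y, 0.5*y*y]

def blade(a1, a2, a3, a4, a5, a6):
    """Coefficients of c = -2a1 e1 + 2a2 e2 - a3 e3 - 2a4 e4 + 2a5 e5 - a6 e6."""
    return [-2*a1, 2*a2, -a3, -2*a4, 2*a5, -a6]

e3 = [0, 0, 1, 0, 0, 0]
e6 = [0, 0, 0, 0, 0, 1]

a = (1.0, -2.0, 3.0, 0.5, 1.5, -1.0)        # arbitrary sample coefficients
c = blade(*a)

# eta(x,y) . c reproduces the conic polynomial of the text
for (x, y) in [(0.0, 0.0), (1.0, 2.0), (-3.0, 0.5)]:
    poly = a[0]*x*x + 2*a[1]*x + a[2] + a[3]*y*y + 2*a[4]*y + a[5]
    assert abs(ip(eta(x, y), c) - poly) < 1e-12

# incidence with the ideal points: a . e3 = 2 a1 and a . e6 = 2 a4
assert abs(ip(c, e3) - 2*a[0]) < 1e-12
assert abs(ip(c, e6) - 2*a[3]) < 1e-12
```

In particular, the condition $\mathfrak{a}\cdot e_3=0$ used for the classification is precisely the vanishing of the $x^2$-coefficient.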
\begin{remark} Note that this description of conics also contains conics without real points. \end{remark} \noindent In the most general case two-blades correspond to inner product point quadruples. This can be seen from \begin{equation*} \mathds{NI}_G(\mathfrak{a}\wedge \mathfrak{b})=\mathds{NI}_G(\mathfrak{a})\cap \mathds{NI}_G(\mathfrak{b}), \end{equation*} see \cite{perwass:geometricalgebra}. Therefore, two-blades represent all points belonging to both conics that are represented by the vectors. If two non-degenerate conics do not intersect, the corresponding two-blade represents a complex inner product point quadruple. Furthermore, we can see by dualization that three-blades belong to outer product point quadruples. For two inner product lines $\mathfrak{l}_1,\mathfrak{l}_2$ the corresponding two-blade $\mathfrak{l}_1\wedge \mathfrak{l}_2$ represents an inner product pair of points, where one of the points is their affine intersection point and the other the point at infinity. \begin{example}\label{EX1} Let us generate a conic through four points. Therefore, we choose four points and embed them via \eqref{EQ21}. \begin{align*} P_1=(-1,0)^\mathrm{T}&\to\mathfrak{p}_1=e_1-e_2+\frac{1}{2}e_3+e_4,\\ P_2=(1,0)^\mathrm{T}\,\,\,\,\,&\to\mathfrak{p}_2=e_1+e_2+\frac{1}{2}e_3+e_4,\\ P_3=(0,-1)^\mathrm{T}&\to\mathfrak{p}_3=e_1+e_4-e_5+\frac{1}{2}e_6,\\ P_4=(0,1)^\mathrm{T}\,\,\,\,\,&\to\mathfrak{p}_4=e_1+e_4+e_5+\frac{1}{2}e_6. \end{align*} The corresponding inner product representation is calculated by \[\mathfrak{c}=\mathfrak{I} \cdot (\mathfrak{p}_1 \wedge \mathfrak{p}_2 \wedge \mathfrak{p}_3 \wedge \mathfrak{p}_4)=4e_1-e_3+4e_4-e_6.\] The GIPNS is given by \[\mathds{NI}_G(\mathfrak{c})=\left\lbrace (x,y)^\mathrm{T}\in\mathds{R}^2\mid -x^2+1-y^2=0\right\rbrace .\] With the bijection \eqref{EQ37} we see easily that $\mathfrak{c}$ is the image of the conic given by the diagonal matrix $\mathrm{diag}(1,-1,-1)$, {\it i.e.}, the unit circle centered at the origin.
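As a numerical cross-check of this example (our own sketch, not part of the original computation), one can verify that each embedded point is a null vector and is annihilated by the inner product with $\mathfrak{c}$, {\it i.e.}, lies on the unit circle:

```python
# Block D of the quadratic form; two copies give the bilinear form of R^(4,2).
D = [[0, 0, -1],
     [0, 1, 0],
     [-1, 0, 0]]

def ip(v, w):
    """Inner product of grade-1 coefficient vectors over (e1,...,e6)."""
    return sum(v[3*b + i] * D[i][j] * w[3*b + j]
               for b in range(2) for i in range(3) for j in range(3))

def eta(x, y):
    """Embedding e1 + x e2 + x^2/2 e3 + e4 + y e5 + y^2/2 e6."""
    return [1.0, x, 0.5*x*x, 1.0, y, 0.5*y*y]

c = [4.0, 0.0, -1.0, 4.0, 0.0, -1.0]        # 4 e1 - e3 + 4 e4 - e6

for (x, y) in [(-1.0, 0.0), (1.0, 0.0), (0.0, -1.0), (0.0, 1.0)]:
    p = eta(x, y)
    assert abs(ip(p, p)) < 1e-12            # embedded points are null vectors
    assert abs(ip(p, c)) < 1e-12            # each point lies on the conic
    assert abs(x*x + y*y - 1.0) < 1e-12     # i.e., on the unit circle
```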
\end{example} \subsection{Transformations} In this section we discuss transformations in this algebra. The Clifford algebra $\mathcal{C}\ell_{(4,2,0)}$ corresponds to the quadratic space $\mathds{R}^{(4,2)}$, and therefore, the sandwich action of vectors represents reflections in hyperplanes in this space. These transformations induce transformations of the base space $\mathds{R}^2$ that are in general not linear, because the embedding $\eta$ is quadratic. From the last section we know that the geometric entities corresponding to vectors are axis-aligned conics. Furthermore, a transformation acts via the sandwich operator and results again in a $k$-blade when applied to a $k$-blade, $k=1,\ldots,4$. We begin with an example: \begin{example}\label{EX2} Let $\mathfrak{c}$ be the circle from Ex. \ref{EX1} \[\mathfrak{c}=4e_1-e_3+4e_4-e_6\] and let $\mathfrak{p}=e_1+e_2+\frac{1}{2}e_3+e_4+2e_5+2e_6$ be $\eta(1,2)^\mathrm{T}$. Applying the sandwich operator to $\mathfrak{p}$ results in \[\mathfrak{p}'=\alpha(\mathfrak{c})\mathfrak{p}\mathfrak{c}^{-1}=5e_1+e_2-\frac{1}{2}e_3+5e_4+2e_5+e_6.\] Now we check if this entity is still an embedded point, {\it i.e.}, whether the conditions \eqref{EQ22} and \eqref{EQ23} hold. Condition \eqref{EQ22} is satisfied, but condition \eqref{EQ23} is not: \[{\mathfrak{p}'}_x^2=6,\quad {\mathfrak{p}'}_y^2=-6.\] Hence, $\mathfrak{p}'$ cannot be interpreted as an embedded point. The GIPNS of $\mathfrak{p}'$ is given by \[\mathds{NI}_G(\mathfrak{p}')=\left\lbrace (x,y)^\mathrm{T}\in\mathds{R}^2 \mid -\frac{5}{2}x^2+x-\frac{5}{2}y^2+2y-\frac{1}{2} =0\right\rbrace. \] This represents a pair of complex lines intersecting in the real point $(\frac{1}{5},\frac{2}{5})^\mathrm{T}$. \end{example} \noindent Ex. \ref{EX2} shows that in general a conic is mapped to another conic. In particular, we cannot map a circle of radius zero to a circle of radius zero, so we cannot define a mapping for points in this way.
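The claim of Ex. \ref{EX2} can be double-checked by completing the square: the quadric of $\mathfrak{p}'$ equals $-\frac{5}{2}\bigl((x-\frac{1}{5})^2+(y-\frac{2}{5})^2\bigr)$, so its only real point is $(\frac{1}{5},\frac{2}{5})^\mathrm{T}$. A short exact-arithmetic sketch (our own, using Python fractions):

```python
from fractions import Fraction as F

def q(x, y):
    # quadric of p' from Ex. EX2
    return -F(5, 2)*x*x + x - F(5, 2)*y*y + 2*y - F(1, 2)

def completed(x, y):
    # completed-square form: -5/2 * ((x - 1/5)^2 + (y - 2/5)^2)
    return -F(5, 2) * ((x - F(1, 5))**2 + (y - F(2, 5))**2)

# exact identity over the rationals, checked at sample points
for x in [F(0), F(1, 5), F(1), F(-3, 7)]:
    for y in [F(0), F(2, 5), F(-1), F(2, 3)]:
        assert q(x, y) == completed(x, y)

assert q(F(1, 5), F(2, 5)) == 0             # the unique real point
```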
Thus, we study the action of the transformations applied to vectors that correspond to conics. \begin{theorem} A conic represented by the vector $\mathfrak{a}\in\bigwedge^1 V$ is pointwise fixed under the transformation induced by itself. Furthermore, these transformations are involutions. \end{theorem} \begin{proof} First, we show that the conic corresponding to the transformation is preserved. To this end, we look at the action of a general vector \[\mathfrak{a}= -2a_1e_1+2a_2e_2-a_3e_3-2a_4e_4+2a_5e_5-a_6e_6\] on itself, and find \[\alpha(\mathfrak{a})\mathfrak{a}\mathfrak{a}^{-1}=\alpha(\mathfrak{a})=-\mathfrak{a}.\] Multiplication with a homogeneous factor does not change the GIPNS. Thus, the result is the conic represented by $\mathfrak{a}$ again. To show that the points of the conic $\mathfrak{a}$ are fixed under the transformation induced by $\mathfrak{a}$, we examine the action of $\mathfrak{a}$ on the intersection points of the conic $\mathfrak{a}$ with all lines containing the point $(0,0)$. These lines are given by \begin{align*} \mathfrak{l}(x,y)&=\mathfrak{I}\cdot\left( \eta(0,0)\wedge \eta(x,y)\wedge e_3 \wedge e_6\right)\\ &=\mathfrak{I}\cdot\left( (e_1+e_4)\wedge (e_1+xe_2+\frac{1}{2}x^2e_3+e_4+ye_5+\frac{1}{2}y^2e_6)\wedge e_3 \wedge e_6\right)\\ &=2ye_2-2xe_5. \end{align*} The intersection of all these lines with the conic $\mathfrak{a}$ is represented by \begin{align*} \mathfrak{l}(x,y)\wedge \mathfrak{a}=&-2a_3ye_{23}+4a_1ye_{12}-4a_4ye_{24}-4a_1xe_{15}+4(a_5y+a_2x)e_{25}\\ &-2a_3xe_{35}-2a_6ye_{26}-4a_4xe_{45}+2a_6xe_{56}. \end{align*} This two-blade represents the pair of common points of the conic and $\mathfrak{l}(x,y)$.
The application of the transformation induced by $\mathfrak{a}$ to $\mathfrak{l}(x,y)\wedge \mathfrak{a}$ results in \begin{align*} \alpha(\mathfrak{a})(\mathfrak{l}(x,y)\wedge \mathfrak{a})\mathfrak{a}^{-1}&=-2a_3ye_{23}\!+\!4a_1ye_{12}\!-\!4a_4ye_{24}\!-\!4a_1xe_{15}\\ &+4(a_5y+a_2x)e_{25}\!-\!2a_3xe_{35}\!-\!2a_6ye_{26}\!-\!4a_4xe_{45}\!+\!2a_6xe_{56}. \end{align*} This shows that all pairs of common points of the pencil of lines with the conic are fixed, and therefore, the whole conic is fixed pointwise. To see that the transformation is an involution we apply it twice to an arbitrary $k$-blade $\mathfrak{B}$ \[\alpha(\mathfrak{a})\alpha(\mathfrak{a})\mathfrak{B}\mathfrak{a}^{-1}\mathfrak{a}^{-1}=\alpha(\mathfrak{a}^2)\mathfrak{B}{(\mathfrak{a}^2)}^{-1}=\mathfrak{B}.\] The last equality follows because $\mathfrak{a}^2$ is a real number. \end{proof} \noindent Due to the fact that these transformations are represented as reflections with respect to hyperplanes in $\mathds{R}^{(4,2)}$, they are involutions and fix the corresponding hyperplane pointwise. This is the reason why we interpret these transformations as reflections or inversions with respect to conics. Furthermore, the whole group of transformations is generated by the action of vectors. Note that the image of a conic in principal position is always a conic in principal position in this model and that intersection point quadruples of a conic with the reflection conic stay fixed, regardless of whether the intersection points are real or complex. \begin{remark} The group of conformal transformations of a quadratic space $\mathds{R}^{(p,q)}$ can be described as the Pin group of a Clifford algebra $\mathcal{C}\ell_{(p+1,q+1,0)}$, see \cite{porteous:cliffordalgebrasandtheclassicalgroups}. Therefore, the group of conformal transformations of the \emph{Minkowski space} $\mathds{R}^{(3,1)}$ is isomorphic to the group of inversions with respect to conics in principal position except for a translation.
Especially for the planar quadric geometric algebra Q2GA we have the signature $(p,q,r)=(4,2,0)$, which is identical to the signature of the homogeneous model where we choose Lie's quadric as metric quadric. Thus, the Pin group of Q2GA is isomorphic to the group of Lie transformations. \end{remark} \subsection{Effect on Lines and Points} In Ex. \ref{EX2} we have seen that a circle with radius zero is mapped to a pair of complex conjugate lines intersecting in a real point. Therefore, we have to search for a better description of points in this model. One way to describe points as two-blades is to examine the intersection of two lines. If we take just the affine point of intersection, we can define an embedding of the affine plane via points of intersection of pairs of lines. Therefore, we take two lines through a given point $(x,y)^\mathrm{T}\in\mathds{R}^2$: the line through it parallel to the $y$-axis and the line through it parallel to the $x$-axis, and represent the point as their intersection. \begin{align} \mathfrak{p}&=(\mathfrak{I}\cdot (\eta(x,y)\wedge\eta(x,0)\wedge e_3\wedge e_6))\wedge (\mathfrak{I}\cdot (\eta(x,y)\wedge\eta(0,y)\wedge e_3\wedge e_6))\nonumber\\ &= -y^2xe_{23}-2yx e_{25}-yx^2 e_{35}+yx^2 e_{56}-y^2x e_{26}\nonumber\\ &= 2 e_{25}+x( e_{35}-e_{56})+y(e_{23}+ e_{26}), \label{EQ41} \end{align} where in the last step we removed the homogeneous factor $-xy$, which does not change the represented entity. Note that this element represents a pair of points, since two lines also meet in the point $\infty$. We describe an affine line by two points $p_1=(x_1,y_1)^\mathrm{T}$ and $p_2=(x_2,y_2)^\mathrm{T}$ lying on it.
The inner product line is derived by \begin{align} \mathfrak{l}(p_1,p_2)&=\mathfrak{I}\cdot\left( \eta(p_1)\wedge \eta(p_2)\wedge e_3\wedge e_6\right)\nonumber\\ &= -2(y_1-y_2) e_2+(x_1y_2-y_1x_2) e_3\nonumber\\ &+2(x_1-x_2) e_5+(x_1y_2-y_1x_2)e_6.\label{EQ42} \end{align} The GIPNS of this line is determined by \[\mathds{NI}_G(\mathfrak{l}(p_1,p_2))=\left\lbrace(x,y)\in\mathds{R}^2\mid (x_1-x_2)y+(y_2-y_1)x+(y_1x_2-x_1y_2)=0 \right\rbrace.\] \begin{theorem} The image of a line under an inversion with respect to a conic represented by the non-null vector $\mathfrak{a}\in{\bigwedge}^1 V$ is a conic. Moreover, for non-degenerate conics this conic is the image of $\mathfrak{a}$ under an affine transformation, {\it i.e.}, a translation and a scalar multiplication. \end{theorem} \begin{proof} To show this we concentrate on conics with no terms in $x$ and $y$. We can do this because we are just interested in the type of the image conic. Furthermore, we can perform translations by two reflections in parallel lines, and thus, we can carry over the results from the principal position to an arbitrary position. Moreover, we show the theorem only for non-degenerate conics. A conic with no terms in $x$ or $y$ is given by \[\mathfrak{a}=2a_1e_1+\frac{1}{2}a_0e_3+2a_4 e_4+\frac{1}{2}a_0e_6.\] Since we are interested in real conics, the coefficients $a_0,a_1$, and $a_4$ must not all have the same sign. In the sequel we assume that the conic is real. Now we can look at the matrix of the conic \[\mathrm{M}=\begin{pmatrix} 1&0&0\\0&\frac{a_1}{a_0}&0\\0&0&\frac{a_4}{a_0} \end{pmatrix}.\] The image of the set of lines \eqref{EQ42} is calculated by \begin{align*} \alpha(\mathfrak{a})\mathfrak{l}(p_1,p_2)\mathfrak{a}^{-1}&=-\frac{4a_1(x_1y_2-y_1x_2)}{a_0}e_1-2(y_1-y_2)e_2\\ &-\frac{4a_4(x_1y_2-y_1x_2)}{a_0}e_4+2(x_1-x_2)e_5.
\end{align*} The coefficient matrix of the corresponding conic is given by \[\mathrm{N}(p_1,p_2)=\frac{1}{a_0}\begin{pmatrix} 0& a_0(y_2-y_1)&a_0(x_1-x_2)\\ a_0(y_2-y_1) & 2a_1(x_1y_2-y_1x_2)&0\\ a_0(x_1-x_2)&0&2a_4(x_1y_2-y_1x_2) \end{pmatrix}.\] From this representation we see immediately that lines through the center of the conic are fixed, but not pointwise. In order to transform this matrix to diagonal form we apply the transformation \[p\mapsto\underbrace{\begin{pmatrix} 1&0&0\\\alpha&1&0\\\beta&0&1 \end{pmatrix}}_{=:\mathrm{T}}p,\mbox{ with $\alpha=-\frac{a_0(y_1-y_2)}{2a_1(x_1y_2-y_1x_2)}$ and $\beta=\frac{a_0(x_1-x_2)}{2a_4(x_1y_2-y_1x_2)}$}.\] Here $p=(1,x,y)^\mathrm{T}\in\mathds{R}^3$ is a point in the projective plane. The action of this coordinate transformation applied to the coefficient matrix of the conic yields \begin{align*} \mathrm{N}'(p_1,p_2)&=\mathrm{T}^{-\mathrm{T}}\mathrm{N}(p_1,p_2)\mathrm{T}^{-1}\\ &=\begin{pmatrix} -\frac{a_0^2(a_4(y_1-y_2)^2+a_1(x_1-x_2)^2)}{2a_4a_1(x_1y_2-y_1x_2)}&0&0\\ 0&\frac{2 a_1(x_1y_2-y_1x_2)}{a_0}&0\\ 0&0&\frac{2 a_4(x_1y_2-y_1x_2)}{a_0} \end{pmatrix}. \end{align*} If we look at the affine part of the conic, we see that this results in \[\mathrm{N}'(p_1,p_2)=\begin{pmatrix} 1&0&0\\0&ka_1&0\\0&0&ka_4 \end{pmatrix},\mbox{ with $k=-\frac{4a_1^2(x_1y_2-y_1x_2)^2a_4}{a_0^2(a_4(y_1-y_2)^2+a_1(x_1-x_2)^2)}$}.\] Therefore, the image is identical to the conic corresponding to $\mathfrak{a}$ except for a translation $\mathrm{T}$ and a scaling $k$. Furthermore, lines are mapped to real conics. \end{proof} \noindent To illustrate this, Fig. \ref{FIG1} shows the reflections of three intersecting lines with respect to a circle, an ellipse, a parabola, and a hyperbola. The inversion conic $\mathfrak{a}_i,\,i=1,\ldots,4$, is shown in red, while each pair (line, image of the line) is presented in one common colour.
The inversion conics are, from left to right and from top to bottom, given by \begin{equation*} \mathfrak{a}_1:x^2+y^2=1,\quad \mathfrak{a}_2:\frac{25}{16}x^2+\frac{16}{25}y^2=1,\quad \mathfrak{a}_3:x^2-y=1,\quad \mathfrak{a}_4:x^2-\frac{3}{4}y^2=1. \end{equation*} \begin{figure} \caption{Inversions with respect to conics} \label{FIG1} \end{figure} \subsection{Subgroups} In this section we examine some subgroups that are embedded naturally in the Pin and the Spin group of Q2GA. \paragraph{Rotation} First, we concentrate on the group that is generated by inversions with respect to lines passing through the origin. Therefore, we study the action of these mappings applied to points embedded via \eqref{EQ41}. Two lines through the origin may be represented by \begin{align*} \mathfrak{l}_1&=\mathfrak{I}\cdot\left( \eta(0,0)\wedge\eta(\cos\varphi,\sin\varphi)\wedge e_3\wedge e_6 \right),\\ \mathfrak{l}_2&=\mathfrak{I}\cdot\left( \eta(0,0)\wedge\eta(\cos\psi,\sin\psi)\wedge e_3\wedge e_6 \right). \end{align*} Furthermore, we are interested in orientation-preserving transformations, {\it i.e.}, elements from the Spin group. The composition of two reflections in $\mathfrak{l}_1$ and $\mathfrak{l}_2$ is given by their geometric product \begin{align} \mathfrak{l}_1\mathfrak{l}_2&=(\sin\psi\sin\varphi+\cos\psi\cos\varphi) +(\cos\psi\sin\varphi-\sin\psi\cos\varphi)e_{25}\nonumber\\ \intertext{and with the addition theorems for sine and cosine we conclude} \mathfrak{l}_1\mathfrak{l}_2&=\cos(\varphi-\psi)+\sin(\varphi-\psi)e_{25}\label{EQ43}. \end{align} \FloatBarrier \noindent The square of $e_{25}$ is $-1$. This means that two consecutive reflections in lines through the origin result in an algebra element that can be interpreted as a complex number of norm $1$. It is a well-known result that rotations in the plane can be described by such complex numbers. Let us look at the action of such an element applied to a point that is described by \eqref{EQ41}.
Let $\mathfrak{R}=\cos\varphi+\sin\varphi e_{25}$ be an element in the form of Equation \eqref{EQ43} and $\mathfrak{p}=2 e_{25}+x_0( e_{35}-e_{56})+y_0(e_{23}+ e_{26})$ a point of the form \eqref{EQ41}. We compute \begin{align*} \mathfrak{p}'=&\alpha(\mathfrak{R})\mathfrak{p}\mathfrak{R}^{-1}=\mathfrak{R}\mathfrak{p}\mathfrak{R}^{-1}\\ =&2 e_{25}+(-2\sin\varphi\cos\varphi x_0+2\cos(\varphi)^2 y_0-y_0)(e_{23}+e_{26})\\ +&(2\sin\varphi\cos\varphi y_0+2\cos(\varphi)^2 x_0-x_0) (e_{35}-e_{56}). \end{align*} The GIPNS of this entity can be computed, or we can simply read off the coordinates of the image point: \begin{align*} x&=2\cos(\varphi)^2 x_0+2\cos\varphi\sin\varphi y_0-x_0=\cos (2\varphi) x_0 +\sin (2\varphi) y_0 ,\\ y&=2\cos(\varphi)^2 y_0-2\cos\varphi\sin\varphi x_0-y_0=\cos (2\varphi) y_0 -\sin (2\varphi) x_0. \end{align*} Therefore, we can see that this transformation, indeed, is a rotation about the origin with the rotation angle $2\varphi$. So these elements constitute a double cover of the group $\mathrm{SO}(2)$. \begin{remark} From the fact that vectors are mapped to vectors by a reflection in a line it follows that axis-aligned conics have to be mapped to axis-aligned conics. Therefore, these mappings cannot be interpreted pointwise for conics. \end{remark} \paragraph{Translations} Now we aim at the group of planar Euclidean displacements $\mathrm{SE}(2)$. Therefore, we show that two consecutive reflections in parallel lines result in a translation. The group of planar Euclidean displacements can be generated as the semi-direct product of $\mathrm{SO}(2)$ and $\mathrm{T}(2)$, which describes the abelian translation group. Let $\mathfrak{l}_1$ and $\mathfrak{l}_2$ be two parallel lines and let $t_1, t_2$ be their distances from the origin. The lines are given by \begin{align*} \mathfrak{l}_1(\varphi,t_1)&=2\sin\varphi e_2-t_1 e_3-2\cos\varphi e_5-t_1 e_6,\\ \mathfrak{l}_2(\varphi,t_2)&=2\sin\varphi e_2-t_2 e_3-2\cos\varphi e_5-t_2 e_6.
\end{align*} The composition can be expressed with the geometric product as \begin{align} \mathfrak{T}(\varphi,t_1,t_2)&=\mathfrak{l}_1(\varphi,t_1)\mathfrak{l}_2(\varphi,t_2)\nonumber\\ &=2+(t_1-t_2)\sin\varphi (e_{23}+e_{26})+(t_1-t_2)\cos\varphi (e_{35}-e_{56}).\label{EQ44} \end{align} Applying the sandwich operator to a point $\mathfrak{p}$ results in \begin{align*} \alpha(\mathfrak{T})\mathfrak{p}\mathfrak{T}^{-1}&=\mathfrak{T}\mathfrak{p}\mathfrak{T}^{-1}\\ &=2e_{25}+(y_0-2t_2\cos\varphi+2t_1\cos\varphi)(e_{23}+e_{26})\\ &+(x_0-2t_1\sin\varphi+2t_2\sin\varphi) (e_{35}-e_{56}). \end{align*} The image is determined by \begin{equation*} x=x_0-\sin\varphi(2(t_1-t_2)),\quad\quad y=y_0+\cos\varphi(2(t_1-t_2)). \end{equation*} Therefore, the transformation is a translation in the direction normal to the given lines $\mathfrak{l}_1$ and $\mathfrak{l}_2$. \paragraph{The Group of planar Euclidean Displacements} Translations and rotations about the origin generate the entire group of planar Euclidean displacements $\mathrm{SE}(2)$. Furthermore, we can now examine the group that is generated by rotations and translations as a subgroup of the Spin group. These algebra elements have the form \FloatBarrier \[a_0 + a_1 e_{25}+a_2 (e_{23}+e_{26})+a_3 (e_{35}-e_{56}).\] The multiplication table of the geometric product for the generators $1,e_{25},e_{23}+e_{26},e_{35}-e_{56}$ is given in Table \ref{Table1}.
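The coordinate formulas obtained for rotations and translations can be verified symbolically. The following sketch (our own, using sympy; not the paper's code) checks the double-angle identities for the rotation and that the translation offset is normal to the reflecting lines, whose direction vector is $(\cos\varphi,\sin\varphi)^\mathrm{T}$:

```python
import sympy as sp

phi, x0, y0, t1, t2 = sp.symbols('phi x0 y0 t1 t2', real=True)

# Rotation: image coordinates from the sandwich action of
# R = cos(phi) + sin(phi) e25, as read off above.
xr = 2*sp.cos(phi)**2*x0 + 2*sp.cos(phi)*sp.sin(phi)*y0 - x0
yr = 2*sp.cos(phi)**2*y0 - 2*sp.cos(phi)*sp.sin(phi)*x0 - y0
assert sp.simplify(xr - sp.cos(2*phi)*x0 - sp.sin(2*phi)*y0) == 0
assert sp.simplify(yr - sp.cos(2*phi)*y0 + sp.sin(2*phi)*x0) == 0

# Translation: the offset of two consecutive reflections in parallel
# lines at distances t1, t2 from the origin.
dx = -2*(t1 - t2)*sp.sin(phi)
dy =  2*(t1 - t2)*sp.cos(phi)
# the offset is orthogonal to the line direction (cos(phi), sin(phi))
assert sp.expand(dx*sp.cos(phi) + dy*sp.sin(phi)) == 0
```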
\begin{table}[hbt] \begin{center} \begin{tabular}{|c|c|c|c|c|} \hline & $1$ & $e_{25}$ & $e_{23}+e_{26}$ & $e_{35}-e_{56}$ \\ \hline $1$ & $1$ & $e_{25}$ & $e_{23}+e_{26}$ & $e_{35}-e_{56}$ \\ \hline $e_{25}$ & $e_{25}$ & $-1$ & $e_{35}-e_{56}$ & $-(e_{23}+e_{26})$ \\ \hline $e_{23}+e_{26}$ & $e_{23}+e_{26}$ & $-(e_{35}-e_{56})$ & $0$ & $0$ \\ \hline $e_{35}-e_{56}$ & $e_{35}-e_{56}$ & $e_{23}+e_{26}$ & $0$ & $0$ \\ \hline \end{tabular} \caption{Multiplication table of planar displacements in $\mathcal{C}\ell_{(4,2,0)}$} \label{Table1} \end{center} \end{table} Hence, this is indeed a subgroup of the Spin group. Furthermore, it is isomorphic to a subgroup of the multiplicative group of dual quaternions, called planar dual quaternions. We can define a bijection by \[e_{25}\mapsto \mathbf{i},\quad (e_{23}+e_{26}) \mapsto \epsilon\mathbf{j},\quad (e_{35}-e_{56}) \mapsto \epsilon\mathbf{k}.\] \begin{remark} If we restrict ourselves to reflections in lines and circles, we are able to describe the group of conformal transformations of the plane. \end{remark} \paragraph{Inversions applied to Points} In this section we study the action of reflections with respect to a conic in principal position (centered at the origin) on points. The points are embedded as points of intersection of two lines, as discussed in the previous section. Furthermore, we have to note that the transformations map point pairs to point pairs. Hence, the pair of points of intersection of two lines (the affine and the ideal point) is mapped to a pair of points. All lines pass through $\infty$, and so the image of every line must pass through the image of this point. The generalization to conics not in principal position is obtained by the application of a coordinate transformation.
The inversion conic is given by \[\mathfrak{a}=\frac{1}{2}c_0 (e_3+e_6) +2 c_1 e_1 +2c_2 e_4.\] A point is represented as intersection of two lines (see \eqref{EQ41}) by \[\mathfrak{p} = 2 e_{25}+y_0 (e_{23}+e_{26})+x_0(e_{35}-e_{56}).\] Applying the sandwich operator to the point results in \begin{equation*} \mathfrak{p}'=\alpha(\mathfrak{a})\mathfrak{p}\mathfrak{a}^{-1}= -\frac{2y_0c_1}{c_0}e_{12}+\frac{2x_0c_1}{c_0}e_{15}-2e_{25}+\frac{2y_0c_2}{c_0}e_{24}+\frac{2x_0c_2}{c_0}e_{45}. \end{equation*} This is not of the form \eqref{EQ41}, and therefore, it is not the representation of the intersection of two lines. The GIPNS is calculated as \begin{align*} \mathds{NI}_G(\mathfrak{p}')=&\left\lbrace (x,y)^\mathrm{T} \in\mathds{R}^2\mid -2c_1(-x_0y+xy_0)e_1-(y_0c_1x^2+2yc_0+y_0c_2y^2)e_2\right. \\ &\left. -2c_2(-x_0y+xy_0)e_4+(x_0c_1x^2+x_0c_2y^2+2xc_0)e_5=0\right\rbrace. \end{align*} The solution set of this GIPNS can be written as \begin{equation*} x=-\frac{2c_0x_0}{c_1x_0^2+c_2 y_0^2},\quad\quad y=-\frac{2c_0y_0}{c_1x_0^2+c_2 y_0^2}. \end{equation*} Note that we excluded the solution $x=0,y=0$, which is the image of $\infty$ under the inversion. \begin{remark} Inversion with respect to conics that are not in principal position can be performed by the composition of an inversion and a rotation. Note that this rotation has to be applied pointwise. \end{remark} \subsection{Generalization to higher Dimensions} The main advantage of this geometric algebra model is its flexibility: it is no problem to change the dimension. We discuss the model for the $n$-dimensional case and show some examples for the three-dimensional case. We start with a real vector space of dimension $n$. For each axis we use a conformal embedding.
Therefore, the dimension of the geometric algebra is $2^{3n}$ and its quadratic form is given by \[\mathrm{Q}= \underbrace{\begin{pmatrix} \mathrm{D} & & & \\ & \mathrm{D} & & \\ & & \ddots & \\ & & & \mathrm{D} \end{pmatrix}}_{n\mbox{-times}},\quad \mathrm{D}=\begin{pmatrix} 0 & 0 & -1\\ 0 & 1 & 0\\ -1 & 0 &0 \end{pmatrix}. \] The embedding $\eta$ is realized by \begin{align} \eta:\mathds{R}^n&\to{\bigwedge}^1 V,\nonumber\\ (x_1,\ldots,x_n)&\mapsto e_1+x_1 e_2+\frac{1}{2}x_1^2e_3+\ldots +e_{3n-2}+x_n e_{3n-1}+\frac{1}{2}x_n^2 e_{3n}.\label{EQ51} \end{align} The conditions for an embedded point \eqref{EQ22} and \eqref{EQ23} generalize to \[\eta(P)^2=0,\quad\eta(P)_{x_1}^2=0,\quad\eta(P)_{x_2}^2=0, \ldots\quad \eta(P)_{x_n}^2=0.\] In analogy to Def. \ref{DEF8} we define: \begin{definition} The blade $\mathfrak{I}$ that maps outer product null spaces to inner product null spaces is defined by \[\mathfrak{I}=\bigwedge\limits_{\substack{i=1\\i\bmod 3=2}}^{3n} e_i \wedge \bigwedge\limits_{\substack{j=1\\j\bmod 3=1}}^{3n} e_j\wedge \sum\limits_{\substack{k=1\\k\bmod 3=0}}^{3n} e_k .\] Inner product null spaces can be mapped to outer product null spaces with the blade \[\mathfrak{I}^*:=\bigwedge\limits_{\substack{i=1\\i\bmod 3=2}}^{3n} e_i \wedge \bigwedge\limits_{\substack{j=1\\j\bmod 3=0}}^{3n} e_j\wedge \sum\limits_{\substack{k=1\\k\bmod 3=1}}^{3n} e_k.\] \end{definition} \noindent Grade-1 elements correspond to inner product axis-aligned hyperquadrics. As the dimension grows, the number of objects that can be represented grows, too. Blades of grade $k$ $(k\leq n)$ correspond to the intersection of $k$ hyperquadrics.
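As a quick sanity check (our own sketch, not the paper's code) one can confirm the signature of $\mathrm{Q}$: each block $\mathrm{D}$ has eigenvalues $1,1,-1$, so the quadratic space is $\mathds{R}^{(2n,n)}$, giving $\mathds{R}^{(4,2)}$ for $n=2$ and $\mathds{R}^{(6,3)}$ for $n=3$:

```python
import numpy as np

# Block D of the quadratic form, as given in the text.
D = np.array([[0, 0, -1],
              [0, 1, 0],
              [-1, 0, 0]], dtype=float)

def quadratic_form(n):
    """Q = block diagonal matrix with n copies of D (size 3n x 3n)."""
    return np.kron(np.eye(n), D)

for n in (2, 3, 4):
    eig = np.linalg.eigvalsh(quadratic_form(n))
    assert int(np.sum(eig > 0.5)) == 2 * n     # eigenvalue +1, multiplicity 2n
    assert int(np.sum(eig < -0.5)) == n        # eigenvalue -1, multiplicity n
```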
\paragraph{Quadrics in $3$ Dimensions} To construct the quadric geometric algebra for the three-dimensional space we use the quadratic space $\mathds{R}^{(6,3)}$ given by the nine-dimensional real vector space $\mathds{R}^9$ together with the quadratic form \[\mathrm{Q}= \begin{pmatrix} \mathrm{D} & & \\ & \mathrm{D} & \\ & & \mathrm{D} \end{pmatrix},\quad \mathrm{D}=\begin{pmatrix} 0 & 0 & -1\\ 0 & 1 & 0\\ -1 & 0 &0 \end{pmatrix}. \] For three dimensions the embedding $\eta$, see Eq.\ \eqref{EQ51}, has the following form \begin{align} \eta:\mathds{R}^3&\to {\bigwedge}^1{V},\nonumber\\ (x,y,z)^\mathrm{T}&\mapsto e_1+xe_2+\frac{1}{2}x^2 e_3+e_4+ye_5+\frac{1}{2}y^2 e_6+e_7+ze_8+\frac{1}{2}z^2 e_9\nonumber. \end{align} The conditions for an embedded point \eqref{EQ22} and \eqref{EQ23} generalize to \[\eta(P)^2=0,\quad \eta(P)_x^2=0,\quad\eta(P)_y^2=0,\quad \eta(P)_z^2=0.\] Moreover, the blades $\mathfrak{I}$ and $\mathfrak{I}^*$ are given by \begin{align} \mathfrak{I}&=(e_2\wedge e_5\wedge e_8) \wedge (e_1\wedge e_4\wedge e_7)\wedge (e_3+e_6+e_9),\nonumber\\ \mathfrak{I}^*&=(e_2\wedge e_5\wedge e_8) \wedge (e_3\wedge e_6\wedge e_9)\wedge (e_1+e_4+e_7)\nonumber. \end{align} The corresponding geometric algebra has dimension $2^9=512$. Any quadric in principal axes position in $\mathds{R}^3$ is uniquely determined by six values. This can be seen from the symmetric matrix of the equation of the quadric that has, in general, ten free entries. The fact that we are treating quadrics in principal position reduces the number of free entries to seven. Furthermore, this matrix representation is homogeneous, and therefore, we have six degrees of freedom. In analogy to Eq.
\eqref{EQ37} we obtain a bijection $\chi$ that is defined as \begin{equation*} \begin{pmatrix} c_0&c_2&c_5&c_8\\ c_2 & c_1 &0 &0\\ c_5 & 0 & c_4 & 0\\ c_8 &0 & 0 & c_7 \end{pmatrix}\mapsto \mathfrak{q}, \end{equation*} with \[\mathfrak{q}=2c_1e_1\!-\!2c_2e_2\!+\!\frac{1}{3}c_0e_3\!+\!2c_4e_4\!-\!2c_5e_5\!+\!\frac{1}{3}c_0e_6 \!+\!2c_7e_7\!-\!2c_8e_8\!+\!\frac{1}{3}c_0e_9.\] As we did for the planar case, we can now turn our attention to the intersection of three planes in order to get a pair of points containing one affine and one ideal point. We choose these planes to be parallel to the coordinate planes and passing through a given point $P=(x_0,y_0,z_0)^\mathrm{T}$. Expressed in terms of the quadric geometric algebra $\mathcal{C}\ell_{(6,3,0)}$ we get \begin{align*} \mathfrak{p}&=(\eta(x_0,y_0,z_0)^\mathrm{T}\wedge\eta(0,0,z_0)^\mathrm{T}\wedge\eta(x_0,0,z_0)^\mathrm{T}\wedge e_3\wedge e_6\wedge e_9)\cdot \mathfrak{I}\\ & \wedge (\eta(x_0,y_0,z_0)^\mathrm{T}\wedge \eta(0,y_0,0)^\mathrm{T}\wedge\eta(x_0,y_0,0)^\mathrm{T}\wedge e_3\wedge e_6\wedge e_9)\cdot \mathfrak{I}\\ & \wedge (\eta(x_0,y_0,z_0)^\mathrm{T}\wedge\eta(x_0,0,0)^\mathrm{T}\wedge\eta(x_0,y_0,0)^\mathrm{T}\wedge e_3\wedge e_6\wedge e_9)\cdot \mathfrak{I}\\ &=3e_{258}+(e_{358}-e_{568}+e_{589})x_0+({e_{238}+e_{268}-e_{289}})y_0\\ &+(-e_{235}+e_{256}+e_{259})z_0. \end{align*} \begin{remark} A representation of the group of Euclidean displacements $\mathrm{SE}(3)$ can be obtained by studying the composition of reflections in planes. Planes correspond to vectors that are obtained by $(\eta P_1\wedge\eta P_2\wedge\eta P_3\wedge e_3 \wedge e_6 \wedge e_9)\cdot\mathfrak{I}$.
\end{remark} \noindent Now we define an inner product inversion quadric with $P_1=( \frac{9}{10},0,0)^\mathrm{T},\,P_2=( -\frac{9}{10},0,0)^\mathrm{T},$ $P_3=( 0,\frac{3}{4},0)^\mathrm{T},\,P_4=( 0,-\frac{3}{4},0)^\mathrm{T},\,P_5=( 0,0,\frac{5}{4})^\mathrm{T},\,P_6=( 0,0,-\frac{5}{4})^\mathrm{T}$ \begin{equation*} \mathfrak{a}=(\eta P_1\wedge \eta P_2\wedge \eta P_3 \wedge \eta P_4\wedge \eta P_5\wedge \eta P_6)\cdot \mathfrak{I}=\frac{25}{9}e_1-\frac{3}{8}e_3+4e_4-\frac{3}{8}e_6+\frac{36}{25}e_7-\frac{3}{8}e_9. \end{equation*} The GIPNS of $\mathfrak{a}$ is given by \[\mathds{NI}_G(\mathfrak{a})=\left\lbrace (x,y,z)^\mathrm{T}\in\mathds{R}^3 \Big|\, \frac{100}{81}x^2+\frac{16}{9}y^2+\frac{16}{25}z^2-1=0\right\rbrace.\] In Fig. \ref{FIG2} (left) we can see the inversion of the unit cube $[-1,1]^3$ with respect to the ellipsoid defined by $\mathfrak{a}$. The image of every face of the cube, {\it i.e.}, of every plane is an ellipsoid passing through the origin. \noindent The action of the inversion given by $\mathfrak{a}$ on pairs of points can be written as \begin{equation*} f(x,y,z)=\frac{45^2}{2^2(30^2 y^2+25^2x^2+18^2z^2)}\left(x,y,z \right) ^\mathrm{T}. \end{equation*} Note that the point $\infty$ is mapped to the origin and that the origin is mapped to $\infty$. Therefore, we have a map from $\mathds{R}^3\backslash\left\lbrace 0 \right\rbrace \to \mathds{R}^3$. \begin{figure} \caption{Inversion of the unit cube with respect to an ellipsoid (left), inversion with respect to a hyperboloid (right)} \label{FIG2} \end{figure} One main advantage of this method is that we can calculate the image ellipsoid of one face of the cube (one plane) directly by applying the sandwich operator to the plane that is expressed as quadric. 
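Before computing images of planes, the stated point map $f$ itself can be checked directly. The following sketch (our own Python, not part of the original computation) verifies that the six construction points on the ellipsoid are fixed by $f$ and that $f$ is an involution on $\mathds{R}^3\backslash\left\lbrace 0 \right\rbrace$:

```python
# The stated inversion map for the ellipsoid a (coordinates only; the
# function name f follows the text).
def f(x, y, z):
    s = 45**2 / (2**2 * (30**2 * y**2 + 25**2 * x**2 + 18**2 * z**2))
    return (s * x, s * y, s * z)

# The six construction points lie on the ellipsoid and are fixed by f.
for p in [(0.9, 0.0, 0.0), (-0.9, 0.0, 0.0), (0.0, 0.75, 0.0),
          (0.0, -0.75, 0.0), (0.0, 0.0, 1.25), (0.0, 0.0, -1.25)]:
    q = f(*p)
    assert max(abs(q[i] - p[i]) for i in range(3)) < 1e-12

# f is an involution on R^3 minus the origin.
p = (0.3, -1.2, 2.5)
q = f(*f(*p))
assert max(abs(q[i] - p[i]) for i in range(3)) < 1e-12
```

The involution property is immediate from the structure $f(p)=\frac{k}{q(p)}\,p$ with $q$ quadratic, since $q(\lambda p)=\lambda^2 q(p)$.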
For example, the plane passing through $P_1=(1,1,1)^\mathrm{T},\,P_2=(1,1,-1)^\mathrm{T},\,P_3=(1,-1,1)^\mathrm{T}$ can be expressed as vector by \begin{figure} \caption{Inversion with respect to a cylinder (left), inversion with respect to an elliptic paraboloid (right)} \label{FIG3} \end{figure} \begin{equation*} \mathfrak{p}=(\eta P_1\wedge\eta P_2\wedge\eta P_3\wedge e_3\wedge e_6 \wedge e_9)\cdot \mathfrak{I}=3e_2+e_3+e_6+e_9. \end{equation*} To check that this is indeed the representation of the plane we compute the GIPNS \[\mathds{NI}_G(\mathfrak{p})=\left\lbrace (x,y,z)^\mathrm{T}\in\mathds{R}^3\mid 1-x=0 \right\rbrace.\] Applying the sandwich operator to $\mathfrak{p}$ results in \begin{equation*} \mathfrak{c}=\alpha(\mathfrak{a})\mathfrak{p} \mathfrak{a}^{-1}=-\frac{200}{27}e_1-3e_2-\frac{32}{3}e_4-\frac{96}{25}e_7 \end{equation*} with GIPNS \[\mathds{NI}_G(\mathfrak{c})=\left\lbrace (x,y,z)^\mathrm{T} \in\mathds{R}^3\mid \frac{10^2}{9^2}x^2-x + \frac{4^2}{3^2}y^2+\frac{4^2}{5^2}z^2=0 \right\rbrace. \] This is one of the ellipsoids displayed in Fig. \ref{FIG2} (left). Furthermore, we can intersect two inner product planes $\mathfrak{p}_1,\mathfrak{p}_2$ to get an inner product line that is an edge of the cube. After that we can apply the sandwich operator to the line and get the intersection curve (an ellipse) of the two ellipsoids that are the images of $\mathfrak{p}_1,\mathfrak{p}_2$. \FloatBarrier \begin{remark} It is more convenient to compute the sandwich operator by a conjugation instead of an inversion. This means we use $\alpha(\mathfrak{a}) \mathfrak{X} \mathfrak{a}^\ast$ with $\mathfrak{a}\in\mathcal{C}\ell^\times_{(p,q,r)}$ and $\mathfrak{X}\in{\bigwedge}^kV$. In general, the modified sandwich operator is easier to handle, because computing the inverse of a Clifford algebra element is computationally expensive.
Moreover, we are working in a projective setting, and therefore, multiplication with a homogeneous factor does not change the occurring geometric inner product and outer product null spaces. \end{remark} The second example in three-dimensional space is a hyperboloid of two sheets in principal position that is generated by \[\mathfrak{a}=(\eta P_1\wedge\eta P_2\wedge\eta P_3\wedge\eta P_4\wedge\eta P_5\wedge\eta P_6)\cdot \mathfrak{I}=-6e_1+e_3+\frac{9}{8}e_4+e_6+\frac{9}{8}e_7+e_9.\] Here, we have $P_1\!=\!(-1,0,0)^\mathrm{T},\,P_2\!=\!(1,0,0)^\mathrm{T},\,P_3\!=\!(2,0,4)^\mathrm{T},\,P_4\!=\!(2,0,-4)^\mathrm{T},$ $P_5\!=\!(2,4,0)^\mathrm{T}$, $P_6\!=\!(2,-4,0)^\mathrm{T}$. We calculate the GIPNS \[\mathds{NI}_G(\mathfrak{a})=\left\lbrace (x,y,z)^\mathrm{T}\in\mathds{R}^3\mid x^2-1-\frac{3}{16}y^2-\frac{3}{16}z^2=0 \right\rbrace. \] The mapping applied to pairs of points results in \[f(x,y,z)=\frac{16}{16x^2-3y^2-3z^2}(x,y,z)^\mathrm{T}.\] The image of the cube $[1,3]\times[-1,1]\times[-1,1]$ under this mapping is shown in Fig.\ \ref{FIG2} (right). Fig.\ \ref{FIG3} (left) shows the image of an inversion with respect to a cylinder given by $\mathfrak{a}=3e_1-2e_3+3e_4-2e_6-2e_9$ applied to a cube. The equation of the cylinder is derived as $x^2+y^2=4$. Planes that are not parallel to the axis of the cylinder are mapped to paraboloids. Another example is presented in Fig.\ \ref{FIG3} (right). The inversion quadric is an elliptic paraboloid given by $\mathfrak{a}=3e_1-2e_3+12e_4-2e_6+6e_8-2e_9$ respectively by $x^2+y^2-4z+4=0$. \FloatBarrier \section{Conclusion} \noindent The geometric algebra presented in this article serves a wide range of applications. A generalization of inversions with respect to conics, quadrics and even hyperquadrics in any dimension is possible with the use of the sandwich operator. Hyperquadrics in principal position are simply represented as grade-1 elements.
Furthermore, this model serves as a generalization of the conformal geometric algebra, see \cite{dorst:geometricalgebra}. Classical representations of groups are embedded in this algebra naturally. \end{JGGarticle} \end{document}
\begin{document} \title{Domino tilings of cylinders: the domino group \\ and connected components under flips} \author{Nicolau C. Saldanha} \maketitle \begin{abstract} We consider domino tilings of three-dimensional cubiculated regions. In order to study such tilings, we define new objects: the \textit{domino group} and \textit{domino complex} of a quadriculated disk. As an application, we study the problem of connectivity via flips. A flip is a local move: two neighboring parallel dominoes are removed and placed back in a different position. The twist is an integer associated to each tiling, which is invariant under flips. A balanced quadriculated disk ${\cal D}$ is {\em regular} if whenever two tilings ${\mathbf t}_0$ and ${\mathbf t}_1$ of ${\cal D} \times [0,N]$ have the same twist then ${\mathbf t}_0$ and ${\mathbf t}_1$ can be joined by a sequence of flips provided some extra vertical space is allowed. We show that ${\cal D}$ is regular if and only if its domino group is isomorphic to ${\mathbb{Z}} \oplus {\mathbb{Z}}/(2)$. We prove that a rectangle ${\cal D} = [0,L] \times [0,M]$ with $LM$ even is regular if and only if $\min\{L,M\} \ge 3$ and conjecture that in general ``large'' disks are regular. We also prove that if ${\cal D}$ is regular then the extra vertical space necessary to join by flips two tilings of ${\cal D} \times [0,N]$ with the same twist depends only on ${\cal D}$, not on the height $N$. Furthermore, almost every pair of tilings with the same twist can be joined via flips (with no extra space). In the cases where ${\cal D}$ is rectangular but not regular we prove partial results concerning the structure of the domino group: the group is not abelian and has exponential growth. Connected components via flips are now small and almost no pair of tilings with the same twist can be joined via flips. \end{abstract} \footnotetext{2010 {\em Mathematics Subject Classification}. Primary 05B45; Secondary 52C20, 52C22, 05C70. 
{\em Keywords and phrases} Three-dimensional tilings, dominoes, dimers} \bigbreak \section{Introduction} Let ${\cal D} \subset {\mathbb{R}}^2$ be a quadriculated region in the plane, i.e., a union of finitely many unit squares $[a,a+1] \times [b,b+1]$, $(a,b) \in {\mathbb{Z}}^2$. A {\em domino} is a closed $2\times 1$ or $1\times 2$ rectangle; a {\em domino tiling} of ${\cal D}$ is a covering of ${\cal D}$ by dominoes with disjoint interiors. In a more combinatorial language, ${\cal D}$ can be identified with a bipartite graph: vertices of the graph are unit squares in ${\cal D}$ and adjacent squares are joined by edges. A domino tiling of ${\cal D}$ is then a perfect matching; we prefer to speak of dominoes and tilings (instead of edges and matchings). A quadriculated region ${\cal D}$ is a planar quadriculated {\em disk} if ${\cal D}$ is contractible with contractible interior and therefore homeomorphic to the unit disk. In particular, ${\cal D}$ is connected and simply connected with connected interior. The color of a square is $(-1)^{a+b}$, with $+1$ corresponding to black and $-1$ to white. We always assume that our quadriculated regions ${\cal D}$ are {\em balanced} (equal number of white and black squares). We sometimes assume that ${\cal D}$ is {\em tileable} (admits at least one domino tiling). A quadriculated disk ${\cal D}$ is {\em nontrivial} if it has at least $6$ unit squares and at least one square has at least three neighbors; we usually also assume that our quadriculated disks are nontrivial. \begin{figure} \caption{Six examples of quadriculated regions, all connected and balanced. The first two are trivial disks. The third and fourth are nontrivial tileable quadriculated disks. The fifth one is a nontrivial balanced quadriculated disk which is not tileable. The sixth example is not a disk.} \label{fig:disks} \end{figure} \begin{figure} \caption{A tiling of the box $[0,4]\times [0,4]\times [0,2]$.
The orientation of ${\mathbb{R}}^3$.} \label{fig:twist2} \end{figure} A cubiculated region is a set ${\cal R} \subset {\mathbb{R}}^3$ which is a union of finitely many unit cubes $[a,a+1] \times [b,b+1] \times [c,c+1]$, $(a,b,c) \in {\mathbb{Z}}^3$. The color of a cube is $(-1)^{a+b+c}$. In this paper we always assume ${\cal R}$ to be balanced and contractible with contractible interior. A cylinder is a simple example of a cubiculated region: \[ {\cal R}_N = {\cal D} \times [0,N] \subset {\mathbb{R}}^3, \] where ${\cal D}$ is a fixed balanced quadriculated disk. A box is a special case of a cylinder: ${\cal D} = [0,L] \times [0,M]$, $LM$ even; ${\cal R}_N = {\cal D} \times [0,N]$. A (3D) domino is the union of two unit cubes with a common face, thus a $2\times 1\times 1$ rectangular cuboid. A (domino) tiling of ${\cal R}$ is a family of dominoes with disjoint interiors whose union is ${\cal R}$. Again, ${\cal R}$ can be identified with a bipartite graph, dominoes with edges and tilings with matchings. The set of domino tilings of a region ${\cal R}$ is denoted by ${\cal T}({\cal R})$. We follow \cite{primeiroartigo} and \cite{segundoartigo} in drawing tilings of cubiculated regions by floors, as in Figure \ref{fig:twist2}. Vertical dominoes (i.e., dominoes in the $z$ direction) appear as two squares, one in each of two adjacent floors; for visual clarity, we leave the right square unfilled. A flip is a local move in ${\cal T}({\cal R})$: two parallel and adjacent (3D) dominoes are removed and placed back in a different position. Examples of flips are shown in Figure \ref{fig:flipexample}. This is of course a natural generalization of the planar flip. It is well known that if ${\cal D} \subset {\mathbb{R}}^2$ is a quadriculated disk then any two tilings of ${\cal D}$ can be joined by a finite sequence of flips (\cite{thurston1990}, \cite{saldanhatomei1995}). This is not true for 3D regions and tilings.
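In code, a quadriculated or cubiculated region can be modeled as a set of integer tuples, with the color of a cell given by the parity of its coordinate sum; a minimal Python sketch (names illustrative):

```python
# Regions as sets of lattice cells; a cell's color is (-1)**sum(coords).

def is_balanced(cells):
    """True when the region has equally many cells of each color."""
    return sum((-1) ** sum(c) for c in cells) == 0

def rectangle(L, M):
    """Squares of [0,L] x [0,M], indexed by their lower-left corners."""
    return {(a, b) for a in range(L) for b in range(M)}

def cylinder(D, N):
    """Cubes of the cylinder D x [0,N], for a planar region D."""
    return {(a, b, c) for (a, b) in D for c in range(N)}
```

For instance the $3\times 3$ square is not balanced, but the cylinder over it with even height is.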
For ${\mathbf t}_0, {\mathbf t}_1 \in {\cal T}({\cal R})$, we write ${\mathbf t}_0 \approx {\mathbf t}_1$ if ${\mathbf t}_0$ and ${\mathbf t}_1$ can be joined by a finite sequence of flips. \begin{figure} \caption{Three tilings of the box $[0,4]\times[0,4]\times[0,3]$ and two flips.} \label{fig:flipexample} \end{figure} A trit is the only local move involving three dominoes which does not reduce to flips (see \cite{primeiroartigo}, \cite{segundoartigo}, \cite{FKMS}). The three dominoes involved are in three different directions and fill a $2\times 2\times 2$ box minus two opposite unit cubes. Figure \ref{fig:trit} shows a trit in the $3\times 3\times 2$ box; notice that the first tiling admits no flips. \begin{figure} \caption{Two tilings of the box $[0,3]\times[0,3]\times[0,2]$ joined by a trit. The first tiling has twist $-1$, the second one has twist $0$.} \label{fig:trit} \end{figure} For a fixed balanced quadriculated disk ${\cal D}$, let ${\mathbf t}_0 \in {\cal T}({\cal R}_{N_0})$ and ${\mathbf t}_1 \in {\cal T}({\cal R}_{N_1})$. These tilings can be concatenated to define a tiling ${\mathbf t}_0 \ast {\mathbf t}_1 \in {\cal T}({\cal R}_{N_0+N_1})$. If ${\mathbf t}_0$ and ${\mathbf t}_1$ are drawn as in our figures (say, Figures \ref{fig:flipexample} and \ref{fig:trit}), then the figure for ${\mathbf t}_0 \ast {\mathbf t}_1$ is obtained by concatenating the figures for ${\mathbf t}_0$ and ${\mathbf t}_1$. Equivalently, we may translate ${\mathbf t}_1$ by $(0,0,N_0)$ to obtain a tiling $\tilde{\mathbf t}_1$ of ${\cal D} \times [N_0,N_0+N_1]$. The set of dominoes forming ${\mathbf t}_0 \ast {\mathbf t}_1$ is the disjoint union of the set of dominoes forming ${\mathbf t}_0$ and $\tilde{\mathbf t}_1$. For $N$ even, there exists a tiling ${\mathbf t}_{\operatorname{vert},N} \in {\cal T}({\cal R}_N)$ such that all dominoes are vertical (i.e., of the form $[a,a+1]\times [b,b+1] \times [c,c+2]$); we call this the vertical tiling. 
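The concatenation $\ast$ and the vertical tiling are straightforward to express if a tiling is stored as a set of dominoes, each domino a frozenset of two cells; a sketch (illustrative names):

```python
# Tilings as frozensets of dominoes; a domino is a frozenset of two cells.

def vertical_tiling(D, N):
    """The all-vertical tiling t_vert,N of D x [0, N]; N must be even."""
    assert N % 2 == 0
    return frozenset(frozenset({(a, b, c), (a, b, c + 1)})
                     for (a, b) in D for c in range(0, N, 2))

def concatenate(t0, N0, t1):
    """t0 * t1, where t0 tiles D x [0, N0]: translate t1 up by N0."""
    shifted = frozenset(frozenset((x, y, z + N0) for (x, y, z) in d)
                        for d in t1)
    return t0 | shifted
```

With this encoding, concatenating two copies of ${\mathbf t}_{\operatorname{vert},2}$ gives exactly ${\mathbf t}_{\operatorname{vert},4}$.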
For ${\cal D} = [0,4]^2$, let ${\mathbf t}_{\operatorname{vert},2}, {\mathbf t}_0, {\mathbf t}_1 \in {\cal T}({\cal R}_2)$ be the tilings in Figure \ref{fig:442}. Clearly, ${\mathbf t}_0 \not\approx {\mathbf t}_1$ (neither admits a flip); a computation verifies that ${\mathbf t}_0 \ast {\mathbf t}_{\operatorname{vert},2} \approx {\mathbf t}_1 \ast {\mathbf t}_{\operatorname{vert},2}$. \begin{figure} \caption{Three tilings ${\mathbf t} \label{fig:442} \end{figure} Motivated by this example, we define a weaker equivalence relation $\sim$ on tilings (meaning that ${\mathbf t}_0 \approx {\mathbf t}_1$ always implies ${\mathbf t}_0 \sim {\mathbf t}_1$). Assume $N_0 \equiv N_1 \pmod 2$ and ${\mathbf t}_i \in {\cal T}({\cal R}_{N_i})$: ${\mathbf t}_0 \sim {\mathbf t}_1$ if and only if there exist $M_0 \in 2{\mathbb{N}}$ and $M_1 = N_0 + M_0 - N_1 \in 2{\mathbb{N}}$ such that ${\mathbf t}_0 \ast {\mathbf t}_{\operatorname{vert},M_0} \approx {\mathbf t}_1 \ast {\mathbf t}_{\operatorname{vert},M_1}$. Thus, for the tilings in Figure \ref{fig:442} we have ${\mathbf t}_0 \sim {\mathbf t}_1$ (in this case, we can take $M_0 = M_1 = 2$). In \cite{FKMS} the concept of a {\em refinement} of a tiling is introduced. Once we are familiar with the concept of refinement, it is not hard to see that if ${\mathbf t}_0, {\mathbf t}_1 \in {\cal T}({\cal R}_n)$ satisfy ${\mathbf t}_0 \sim {\mathbf t}_1$ then there exist refinements $\tilde{\mathbf t}_0, \tilde{\mathbf t}_1$ with $\tilde{\mathbf t}_0 \approx \tilde{\mathbf t}_1$ but the converse is not always true. We shall not require the concept of refinement in the present paper. Given a balanced quadriculated disk ${\cal D}$, we define the {\em full domino group $G_{{\cal D}}$} and the {\em even domino group $G^{+}_{{\cal D}}$}, a subgroup of index $2$ of $G_{{\cal D}}$. 
The set of elements of the group is the set of equivalence classes of $\sim$; for $G^{+}_{{\cal D}}$ we only take even values of $N$: \begin{equation} \label{eq:dominogroup} G_{{\cal D}} = \left( \bigsqcup_{N \in {\mathbb{N}}^\ast} {\cal T}({\cal R}_{N}) \right)/\sim \quad > \quad G^{+}_{{\cal D}} = \left( \bigsqcup_{N \in {\mathbb{N}}^\ast} {\cal T}({\cal R}_{2N}) \right)/\sim. \end{equation} As usual with spaces defined as a quotient under an equivalence relation, we abuse notation and think of the elements of $G_{{\cal D}}$ as tilings ${\mathbf t}$. The operation in $G_{{\cal D}}$ is $\ast$, the concatenation; the identity element is ${\mathbf t}_{\operatorname{vert},2}$. The inverse of a tiling ${\mathbf t} \in {\cal T}({\cal R}_N)$ is ${\mathbf t}^{-1} \in {\cal T}({\cal R}_N)$ obtained by reflecting in the $z$ coordinate. There is a homomorphism $G_{{\cal D}} \to \{\pm 1\}$ taking ${\mathbf t} \in {\cal T}({\cal R}_N)$ to $(-1)^N$; by construction, $G^{+}_{{\cal D}}$ is the kernel of this homomorphism. As we shall see in Section \ref{sect:plug}, $G_{{\cal D}}$ is a finitely presented group, the fundamental group of an explicit finite complex. The domino group $G_{{\cal D}}$ has different structures for different quadriculated disks ${\cal D}$ and it indicates the behavior of connected components under flips of the region ${\cal R}_N$, particularly for large values of $N$. The {\em twist} (see \cite{primeiroartigo}, \cite{segundoartigo}, \cite{FKMS}) is a group homomorphism $\operatorname{Tw}: G_{{\cal D}} \to {\mathbb{Z}}$. A valid definition of the twist for tilings of cylinders is presented in Section \ref{sect:twist}. (There are many equivalent definitions of twist: the one presented in \cite{FKMS} is very general and uses homology theory; the one presented in \cite{primeiroartigo} is more elementary but still complicated, and works only for certain regions. Unfortunately, none of these definitions is as simple as might be desired.)
We recall a few basic facts: trits change the value of the twist by adding $\pm 1$. For cylinders, taking a mirror image of a tiling changes the sign of its twist: in particular, $\operatorname{Tw}({\mathbf t}^{-1}) = -\operatorname{Tw}({\mathbf t})$. On the other hand, if a rotation preserves ${\cal R}$ then it also preserves twist. If ${\cal D}$ is not trivial then the map $\operatorname{Tw}$ is surjective; in particular, $G_{{\cal D}}$ is infinite. A quadriculated disk ${\cal D}$ is {\em regular} if $\operatorname{Tw}: G^{+}_{{\cal D}} \to {\mathbb{Z}}$ is an isomorphism, which implies $G_{{\cal D}} \approx {\mathbb{Z}} \oplus ({\mathbb{Z}}/(2))$. We conjecture that a quadriculated disk is regular unless it has a narrow bottleneck. The following theorem is a special case of this conjecture; more evidence in favor of the conjecture can be found in \cite{marreiros}. \begin{theo} \label{theo:rectangle} Let ${\cal D} = [0,L] \times [0,M]$ be a rectangle with $LM$ even. Then ${\cal D}$ is regular if and only if $\min\{L,M\} \ge 3$. \end{theo} \begin{figure} \caption{Three tilings ${\mathbf t}_0$, ${\mathbf t}_1$ and ${\mathbf t}_2$ with the same twist.} \label{fig:L2} \end{figure} As we shall see in Lemma \ref{lemma:thin} (in Section \ref{sect:thin}), for ${\cal D} = [0,2] \times [0,M]$, $M \ge 3$, there exists a surjective map $\phi: G_{{\cal D}} \to F_2 \ltimes {\mathbb{Z}}/(2)$ where $F_2$ is the free group with two generators. This implies that $G_{{\cal D}}$ has exponential growth and therefore $\operatorname{Tw}({\mathbf t}_0) = \operatorname{Tw}({\mathbf t}_1)$ is far from implying that ${\mathbf t}_0 \sim {\mathbf t}_1$.
For instance, the three tilings ${\mathbf t}_0$, ${\mathbf t}_1$ and ${\mathbf t}_2$ shown in Figure \ref{fig:L2} satisfy $\operatorname{Tw}({\mathbf t}_0) = \operatorname{Tw}({\mathbf t}_1) = \operatorname{Tw}({\mathbf t}_2)$ and ${\mathbf t}_0 \not\sim {\mathbf t}_1 \not\sim {\mathbf t}_2 \not\sim {\mathbf t}_0$ (this claim and the similar one in Figure \ref{fig:234} will follow from applying the map $\phi$, constructed in Lemma \ref{lemma:thin}). By definition, if ${\cal D}$ is a regular quadriculated disk and ${\mathbf t}_0, {\mathbf t}_1 \in {\cal T}({\cal R}_N)$ satisfy $\operatorname{Tw}({\mathbf t}_0) = \operatorname{Tw}({\mathbf t}_1)$ then there exists $M \in 2{\mathbb{N}}$ such that ${\mathbf t}_0 \ast {\mathbf t}_{\operatorname{vert},M} \approx {\mathbf t}_1 \ast {\mathbf t}_{\operatorname{vert},M}$. It is natural to ask about the size of $M$. \begin{theo} \label{theo:M} Let ${\cal D}$ be a regular quadriculated disk containing a $2\times 3$ rectangle. Then there exists $M$ (depending on ${\cal D}$ only) such that for all $N \in {\mathbb{N}}$ and for all ${\mathbf t}_0, {\mathbf t}_1 \in {\cal T}({\cal R}_N)$ if $\operatorname{Tw}({\mathbf t}_0) = \operatorname{Tw}({\mathbf t}_1)$ then ${\mathbf t}_0 \ast {\mathbf t}_{\operatorname{vert},M} \approx {\mathbf t}_1 \ast {\mathbf t}_{\operatorname{vert},M}$. \end{theo} There are several ways in which it would be desirable to improve this result, such as providing an estimate for $M$ (following the proof gives us at best a crude estimate for $M$, and the computations seem daunting even for small examples). We do not try to state or prove a related result for irregular disks; the proof of Theorem \ref{theo:M} suggests that we should decide whether the domino group $G_{{\cal D}}$ is hyperbolic (see Remark \ref{remark:hyperbolic}). The consequences of regularity are further explored in other papers; here we provide a sample.
Given a cubiculated region ${\cal R}$, we consider random tilings ${\mathbf T}$ of ${\cal R}$ (i.e., random variables ${\mathbf T}: \Omega \to {\cal T}({\cal R})$). Such random tilings are assumed to be chosen uniformly in ${\cal T}({\cal R})$. There are significant difficulties in implementing such random variables (with practical computer programs and correct probability distribution). In a related vein, even giving a good estimate for $|{\cal T}({\cal R})|$ (where ${\cal R}$ is a large box) is an open problem. We describe a probabilistic result which is stated and proved in \cite{saldanhaejc}. Consider the cardinality $|\ker(\operatorname{Tw})| \in {\mathbb{N}}^\ast \cup \{\infty\}$ of the normal subgroup $\ker(\operatorname{Tw}) < G^{+}_{{\cal D}}$, the kernel of $\operatorname{Tw}: G^{+}_{{\cal D}} \to {\mathbb{Z}}$. If $\ker(\operatorname{Tw})$ is infinite, $1/|\ker(\operatorname{Tw})|$ is understood to be equal to $0$. For nontrivial ${\cal D}$, $|\ker(\operatorname{Tw})| = 1$ if and only if ${\cal D}$ is regular. For ${\cal D} = [0,L] \times [0,M]$ (with $LM$ even), Theorem \ref{theo:rectangle} and Lemma \ref{lemma:thin} imply that: \[ \frac{1}{|\ker(\operatorname{Tw})|} = \begin{cases} 1, & \min\{L,M\} \ge 3, \\ 0, & \min\{L,M\} = 2. \end{cases} \] There are no known examples of irregular disks for which $|\ker(\operatorname{Tw})| \notin \{1,\infty\}$. Let ${\cal D}$ be a nontrivial quadriculated disk. Let $G^{+}_{{\cal D}}$ be the even domino group; let $\operatorname{Tw}: G^{+}_{{\cal D}} \to {\mathbb{Z}}$ be the twist map. Let ${\mathbf T}_0, {\mathbf T}_1$ be independent random tilings of ${\cal R}_N = {\cal D} \times [0,N]$; we have \begin{equation} \label{equation:limprob} \lim_{N \to \infty} \operatorname{Prob}[{\mathbf T}_0 \approx {\mathbf T}_1 | \operatorname{Tw}({\mathbf T}_0) = \operatorname{Tw}({\mathbf T}_1)] = \frac{1}{|\ker(\operatorname{Tw})|}. \end{equation} Equation \ref{equation:limprob} is essentially Theorem 6 from \cite{saldanhaejc}.
In particular, the probability tends to $1$ if and only if ${\cal D}$ is regular and tends to $0$ if ${\cal D} = [0,2] \times [0,M]$. Section \ref{sect:examples} lists a few computational examples. In Sections \ref{sect:plug} and \ref{sect:cork} we present some helpful concepts, such as that of a {\em plug}, a {\em floor} and a {\em cork}; we also construct certain tilings which will be important again later. In Section \ref{sect:groupcomplex} we construct a $2$-complex, the {\em domino complex} $\mathcal{C}_{{\cal D}}$ (where ${\cal D}$ is a balanced quadriculated disk): tilings of ${\cal R}_N = {\cal D} \times [0,N]$ correspond to closed paths of length $N$ in $\mathcal{C}_{{\cal D}}$ and the domino group $G_{{\cal D}}$ is the fundamental group $\pi_1(\mathcal{C}_{{\cal D}})$. In Section \ref{sect:twist} we present a self-contained definition of twist and prove some basic properties. Section \ref{sect:thin} is dedicated to proving that rectangles $[0,2] \times [0,M]$ (for $M \ge 3$) are {\em not} regular. The fact that $G_{{\cal D}}$ is the fundamental group of a finite $2$-complex implies that it is finitely presented, and gives us an explicit finite family of generators. This family is finite, but too large to be useful in explicit computations: in Section \ref{sect:gen} we present a far smaller, and therefore more manageable, family of generators. Given such a family, it is not hard to produce an algorithm which, given an explicit disk ${\cal D}$, will, if ${\cal D}$ is regular, produce a proof of this fact in finite time; if ${\cal D}$ is not regular the algorithm will run forever. For small examples, the algorithm can actually be executed. In Section \ref{sect:44} we walk through this algorithm for ${\cal D} = [0,4]\times[0,4]$, proving that it is regular; this is Lemma \ref{lemma:44}. Similarly, we see in Lemma \ref{lemma:34} that if $L,M \in [3,6] \cap {\mathbb{Z}}$ and $LM$ is even then ${\cal D} = [0,L]\times [0,M]$ is regular. 
In Section \ref{sect:thick} we prove Theorem \ref{theo:rectangle}. In Section \ref{sect:cD} we define the constant $c_{{\cal D}} \in {\mathbb{Q}} \cap (0,+\infty)$ for a regular disk ${\cal D}$ and construct a quasi-isometry between $\tilde\mathcal{C}_{{\cal D}}$ and the {\em spine}, a subcomplex $\tilde\mathcal{C}^{\bullet}_{{\cal D}} \subset \tilde\mathcal{C}_{{\cal D}}$ isometric to ${\mathbb{R}}$, with vertices in ${\mathbb{Z}}$. In Section \ref{sect:theoM} we prove Theorem \ref{theo:M}. Finally, Section \ref{sect:final} contains a few final remarks. The author thanks Juliana Freire, Caroline Klivans, Raphael de Marreiros, Pedro Milet and Breno Pereira for helpful conversations, comments and suggestions; Caroline Klivans read a preliminary version and provided detailed and helpful recommendations. The author thanks the referee for several insightful and productive suggestions. The author is also thankful for the generous support of CNPq, CAPES and FAPERJ (Brazil). \section{Examples} \label{sect:examples} For the $3\times 3\times 2$ box there exist $229$ tilings: exactly one tiling has twist $-1$ (the first tiling shown in Figure \ref{fig:trit}) and exactly one tiling has twist $+1$ (its mirror image), the other $227$ tilings have twist $0$ and form an equivalence class under $\approx$ (and also under $\sim$). For the $4\times 4\times 2$ box there exist tilings with the same twist but which can not be joined by a sequence of flips. Indeed, for this region ${\cal R}$, there are $32000$ tilings, $5$ possible values for the twist (from $-2$ to $+2$) and ${\cal T}({\cal R})$ has $9$ connected components via flips. All $31484$ tilings with twist $0$ are in the same connected component. The $256$ tilings with twist $+1$ form two connected components with $128$ tilings each (and similarly for twist $-1$). The two tilings in Figure \ref{fig:442} are the only tilings with twist $+2$: notice that neither admits a flip (similarly, there are two tilings with twist $-2$). 
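The small counts above can be reproduced directly; a Python sketch (illustrative names) that enumerates the tilings of a box by backtracking on the first uncovered cell and then counts connected components under flips:

```python
# Enumerate domino tilings of an L x M x N box and count flip components.
# The counts asserted below (229 tilings, 3 flip classes for 3x3x2) are
# the ones reported in the text.

def box(L, M, N):
    return [(x, y, z) for x in range(L) for y in range(M) for z in range(N)]

UNIT = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]

def add(c, u):
    return (c[0] + u[0], c[1] + u[1], c[2] + u[2])

def tilings(cells):
    cellset, order, result = set(cells), sorted(cells), []

    def rec(covered, dominoes):
        if len(covered) == len(order):
            result.append(frozenset(dominoes))
            return
        c = next(c for c in order if c not in covered)
        for u in UNIT:
            d = add(c, u)
            if d in cellset and d not in covered:
                rec(covered | {c, d}, dominoes + [frozenset({c, d})])

    rec(set(), [])
    return result

def flips(t):
    """Tilings reachable from t by one flip: two parallel adjacent
    dominoes are re-tiled the other way inside their 2x2x1 block."""
    out = []
    for dom in t:
        c1, c2 = sorted(dom)
        for u in UNIT:
            d1, d2 = add(c1, u), add(c2, u)
            if d1 in dom:            # u is the domino's own direction
                continue
            other = frozenset({d1, d2})
            if other in t:
                out.append((t - {dom, other})
                           | {frozenset({c1, d1}), frozenset({c2, d2})})
    return out

def flip_components(ts):
    seen, n = set(), 0
    for t in ts:
        if t not in seen:
            n += 1
            stack = [t]
            while stack:
                s = stack.pop()
                if s not in seen:
                    seen.add(s)
                    stack.extend(flips(s))
    return n
```

The same routines confirm that all nine tilings of the $2\times 2\times 2$ box form a single flip class, consistent with Lemma \ref{lemma:trivialdisk}.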
In this example, one can check that $\operatorname{Tw}({\mathbf t}_0) = \operatorname{Tw}({\mathbf t}_1)$ implies ${\mathbf t}_0 \sim {\mathbf t}_1$. For the $4\times 4\times 4$ box there are $5051532105$ tilings, $9$ possible values for the twist (from $-4$ to $+4$) and the set of tilings ${\cal T}({\cal R})$ has $93$ equivalence classes under $\approx$. The number of tilings of twist $0$ is $4413212553$, forming one giant connected component with $4412646453$ tilings, two components with $283044$ tilings each and $12$ isolated tilings. The number of tilings of twist $1$ is $310188792$, forming one giant component with $310185960$ tilings and $12$ components with $236$ tilings each. In this example a brute force computation verifies that $\operatorname{Tw}({\mathbf t}_0) = \operatorname{Tw}({\mathbf t}_1)$ implies ${\mathbf t}_0 \ast {\mathbf t}_{\operatorname{vert},2} \approx {\mathbf t}_1 \ast {\mathbf t}_{\operatorname{vert},2}$ (and therefore ${\mathbf t}_0 \sim {\mathbf t}_1$). In other words, if $\operatorname{Tw}({\mathbf t}_0) = \operatorname{Tw}({\mathbf t}_1)$ then, after adding two floors of vertical dominoes, the two resulting tilings are flip connected (compare with Theorem \ref{theo:M}). For the $4\times 4\times N$ box, $N$ a multiple of $4$, the possible values of the twist are $[-\frac32 N+2,\frac32 N-2] \cap {\mathbb{Z}}$ (see Figure \ref{fig:rocket} and Example \ref{example:cD44}). \begin{figure} \caption{This tiling ${\mathbf t}_1$ of ${\cal R}_4$, ${\cal D} = [0,8]^2$, admits no flips and satisfies $\operatorname{Tw}({\mathbf t}_1) = 0$.} \label{fig:noflip8} \end{figure} It is not easy to extend these computations to larger boxes but there are examples of isolated tilings (i.e., with no flips), including tilings of twist $0$. Figure \ref{fig:noflip8} shows an example of a tiling ${\mathbf t}_1$ of ${\cal R}_4$ for ${\cal D} = [0,8]^2$; the tiling ${\mathbf t}_1$ admits no flips and satisfies $\operatorname{Tw}({\mathbf t}_1) = 0$.
If ${\mathbf t}_2$ is obtained from ${\mathbf t}_1$ by rotating $90^{\circ}$ in $xy$ then $\operatorname{Tw}({\mathbf t}_{\operatorname{vert},4}) = \operatorname{Tw}({\mathbf t}_1) = \operatorname{Tw}({\mathbf t}_2) = 0$ and ${\mathbf t}_{\operatorname{vert},4} \not\approx {\mathbf t}_1 \not\approx {\mathbf t}_2 \not\approx {\mathbf t}_{\operatorname{vert},4}$ (since both ${\mathbf t}_1$ and ${\mathbf t}_2$ are isolated); it is not hard to verify (by brute force) that ${\mathbf t}_{\operatorname{vert},4} \sim {\mathbf t}_1 \sim {\mathbf t}_2 \sim {\mathbf t}_{\operatorname{vert},4}$. As proved in \cite{saldanhaejc}, the number of tilings of ${\cal R}_N$ per value of the twist approaches a normal distribution when the basis ${\cal D}$ is kept fixed and $N$ goes to infinity. This may seem surprising given the results for the cube $4\times 4\times 4$, but this region is too small for us to see the pattern. The normal distribution is proved in Theorem 4 of \cite{saldanhaejc}; it is also visible in Figure 2 of the same paper. \section{Floors and plugs} \label{sect:plug} In this section, let ${\cal D} \subset {\mathbb{R}}^2$ be a fixed but arbitrary balanced quadriculated disk (thus ${\cal D}$ is connected and simply connected, with connected interior). Recall that ${\cal D}$ is nontrivial if at least one unit square has at least $3$ neighbors (see Figure \ref{fig:disks}). Let $|{\cal D}|$ be the number of squares of ${\cal D}$ (so that $|{\cal D}|$ is even; if ${\cal D}$ is nontrivial then $|{\cal D}| \ge 6$). Rectangles provide us with a family of examples: ${\cal D} = [0,L] \times [0,M]$ where $L, M \in {\mathbb{N}}$, $2 \le L \le M$, and $LM$ is even. For a disk ${\cal D}$, a \emph{plug} is a balanced quadriculated subregion $p \subseteq {\cal D}$. In other words, a plug $p$ is a union of finitely many unit squares $[a,a+1] \times [b,b+1] \subset {\cal D}$ (with $(a,b) \in {\mathbb{Z}}^2$) such that the numbers of black and white squares in $p$ are equal.
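The plug count is easy to check for small disks; a Python sketch (illustrative names) enumerating all balanced sub-collections of squares of the $2\times 2$ square, which has $\binom{4}{2} = 6$ plugs:

```python
from itertools import combinations
from math import comb

# Enumerate the plugs (balanced sub-collections of squares) of a disk.
# With 2k squares there are C(2k, k) plugs: choose which k-subset of
# "slots" to fill so that black and white counts agree.

def plugs(D):
    squares = sorted(D)
    return [frozenset(sub)
            for r in range(len(squares) + 1)
            for sub in combinations(squares, r)
            if sum((-1) ** (a + b) for (a, b) in sub) == 0]

square22 = {(a, b) for a in range(2) for b in range(2)}
P = plugs(square22)
```

Both the empty plug and the full plug appear in the list.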
From a graph point of view, $p$ is a balanced induced subgraph of ${\cal D}$. Let ${\cal P}$ be the set of \emph{plugs} for ${\cal D}$. We allow for the \emph{empty plug} ${\mathbf p_\circ} = \emptyset \in {\cal P}$ and the \emph{full plug} ${\mathbf p_\bullet} = {\cal D} \in {\cal P}$. Each plug $p \in {\cal P}$ has a complement $p^c \in {\cal P}$: the interiors of $p$ and $p^c$ are disjoint and $p \cup p^c = {\cal D}$; thus, for instance, ${\mathbf p_\bullet} = {\mathbf p_\circ}^c$. We have $|{\cal P}| = \binom{2k}{k}$, $k = |{\cal D}|/2$. Given plugs $p, \tilde p \in {\cal P}$ and $N_0, N_1 \in {\mathbb{Z}}$, $N_1 > N_0 + 2$, we define the {\em cork} ${\cal R}_{N_0,N_1;p,\tilde p}$ to be the balanced cubiculated region \[ {\cal R}_{N_0,N_1;p,\tilde p} = ({\cal D} \times [N_0,N_1]) \smallsetminus \operatorname{int}((p \times [N_0,N_0+1]) \cup (\tilde p \times [N_1-1,N_1])). \] Thus, ${\cal R}_{N_0,N_1;p,\tilde p}$ is obtained from ${\cal R}_{N_0,N_1} = {\cal D} \times [N_0,N_1]$ by removing $p$ from the bottom floor $N_0+1$ (i.e., ${\cal D} \times [N_0,N_0+1]$) and $\tilde p$ from the top floor $N_1$. Notice that ${\cal R}_{0,N;{\mathbf p_\circ},{\mathbf p_\circ}} = {\cal R}_N$ and ${\cal R}_{0,N;{\mathbf p_\bullet},{\mathbf p_\bullet}}$ is a translated copy of ${\cal R}_{N-2}$. For disjoint $p, \tilde p \in {\cal P}$, consider the planar region ${\cal D}_{p, \tilde p} = {\cal D} \smallsetminus (p \sqcup \tilde p)$ (we make here the usual abuse of neglecting boundaries). The region ${\cal D}_{p, \tilde p}$ is balanced, but possibly neither connected nor tileable; we may also have ${\cal D}_{p, \tilde p}$ empty. Consider $p_{N_0}, p_{N_1} \in {\cal P}$ and the cork ${\cal R} = {\cal R}_{N_0,N_1; p_{N_0}, p_{N_1}}$. A tiling ${\mathbf t} \in {\cal T}({\cal R})$ can be described as a sequence of \emph{floors} and plugs: \begin{equation} \label{equation:floorsandplugs} (p_{N_0},f_{N_0+1},p_{N_0+1},f_{N_0+2},p_{N_0+2},\ldots ,p_{N_1-1},f_{N_1},p_{N_1}).
\end{equation} The $j$-th plug $p_j = \operatorname{plug}_j({\mathbf t}) \in {\cal P}$ is the union of the unit squares $[a,a+1]\times[b,b+1] \times \{j\}$ contained in ${\cal D} \times \{j\}$ and crossed by a vertical domino $[a,a+1]\times [b,b+1] \times [j-1,j+1]$ in the tiling ${\mathbf t}$. Similarly, the reduced $j$-th floor $f^{\ast}_j = \operatorname{floor}^{\ast}_j({\mathbf t}) \in {\cal T}({\cal D}_{p_{j-1},p_{j}})$ corresponds to the set of horizontal dominoes of ${\mathbf t}$ contained in ${\cal D} \times [j-1,j]$. Notice that $p_{j-1}$ and $p_{j}$ are disjoint (for all $j$). Figure \ref{fig:floorplug} shows a tiling represented as a sequence of reduced floors and plugs. The (full) $j$-th floor is $f_j = \operatorname{floor}_j({\mathbf t}) = (p_{j-1},f^{\ast}_j,p_j)$, so that the representation in Equation \ref{equation:floorsandplugs} is redundant. Figures \ref{fig:twist2}, \ref{fig:flipexample}, \ref{fig:trit}, \ref{fig:442} (and others) show examples of tilings represented as sequences of (full) floors. \begin{figure} \caption{A tiling of the box $4\times 4\times 4$ as a sequence of reduced floors and plugs.} \label{fig:floorplug} \end{figure} Given a floor $f = (p_0,f^\ast,p_1)$, define $f^{-1} = (p_1,f^\ast,p_0)$, also a valid floor; we say that $f$ and $f^{-1}$ differ by orientation only (see also Section \ref{sect:groupcomplex}). A floor $f = (p_0,f^\ast,p_1)$ is \emph{vertical} if $f^\ast = \emptyset$ (the empty tiling of the empty region ${\cal D}_{p_0,p_1}$), or, equivalently, if $p_1 = p_0^c$. From the point of view of tilings, a floor is vertical if all dominoes intersecting it are vertical. \begin{lemma} \label{lemma:trivialdisk} If ${\cal D}$ is a trivial quadriculated disk and ${\mathbf t}_0, {\mathbf t}_1 \in {\cal T}({\cal R}_N)$ then ${\mathbf t}_0 \approx {\mathbf t}_1$. \end{lemma} Recall that a disk ${\cal D}$ is nontrivial if at least one square has at least three neighbors (see Figure \ref{fig:disks}).
\begin{proof} If ${\cal D}$ is not the $2\times 2$ square then, as a graph, ${\cal D}$ is isomorphic to $[0,1] \times [0,M]$ and therefore, as a graph, ${\cal R}_N$ is isomorphic to the quadriculated disk $[0,M] \times [0,N]$. In other words, we have two interpretations of the same graph, one of them 2D, the other 3D. The meaning of flips is the same in both interpretations. We then know from \cite{thurston1990} and \cite{saldanhatomei1995} that ${\mathbf t}_0 \approx {\mathbf t}_1$. If ${\cal D} = [0,2]\times [0,2]$ then there are $6$ plugs: in all cases, vertical dominoes can be matched in adjacent pairs. Thus, a few vertical flips take any tiling to a tiling with no vertical dominoes and then to a base tiling. \end{proof} \section{Tilings of corks} \label{sect:cork} This section is about Lemma \ref{lemma:cork}, which states that, for sufficiently large $N$, the corks ${\cal R}_{0,N;{\mathbf p_\circ},p}$, ${\cal R}_{0,N;p,{\mathbf p_\circ}}$ admit tilings. Figure \ref{fig:cork} shows an example. \begin{figure} \caption{A plug $p$ and a tiling of the cork ${\cal R}_{0,N;{\mathbf p_\circ},p}$.} \label{fig:cork} \end{figure} Given $p \in {\cal P}$, let $|p|$ be the number of squares in $p$. If $N_1 - N_0$ is even and $p \in {\cal P}$ then there exists a unique tiling ${\mathbf t}_{\operatorname{vert}} \in {\cal T}({\cal R}_{N_0,N_1;p,p})$ such that all floors are vertical. For a balanced quadriculated disk ${\cal D}$ and a plug $p \in {\cal P}$, consider the cork ${\cal R}_{-N,N;p,p}$; a tiling ${\mathbf t} \in {\cal T}({\cal R}_{-N,N;p,p})$ is {\em even} if ${\mathbf t}$ is of the form: \begin{equation} \label{eq:bt0} {\mathbf t} = (p,f^{\ast}_{N},p_{N-1},f^{\ast}_{N-1}, \ldots, p_1,f^{\ast}_1,p_0,f^{\ast}_1,p_1,\ldots, f^{\ast}_{N-1},p_{N-1},f^{\ast}_N,p), \end{equation} i.e., if the tiling is symmetric with respect to the reflection on the $xy$ plane. An example of an even tiling is the vertical tiling ${\mathbf t}_{\operatorname{vert}} \in {\cal T}({\cal R}_{-N,N;p,p})$.
\begin{lemma} \label{lemma:eventiling} If a tiling ${\mathbf t} \in {\cal T}({\cal R}_{-N,N;p,p})$ is even then ${\mathbf t} \approx {\mathbf t}_{\operatorname{vert}}$. \end{lemma} \begin{proof} Begin with ${\mathbf t}_0 = {\mathbf t}$ as in Equation \ref{eq:bt0} above. Performing one vertical flip for each (horizontal) domino in $f_1^{\ast}$ takes us to \[ {\mathbf t}_1 = (p,f^{\ast}_{N},p_{N-1},f^{\ast}_{N-1}, \ldots, p_2,f^{\ast}_2,p_1,\emptyset,p_1^c,\emptyset,p_1,f^{\ast}_2,p_2,\ldots, f^{\ast}_{N-1},p_{N-1},f^{\ast}_N,p); \] performing three vertical flips for each domino in $f_2^{\ast}$ then takes us to \[ {\mathbf t}_2 = (p,f^{\ast}_{N},p_{N-1},f^{\ast}_{N-1}, \ldots, p_2,\emptyset,p_2^c,\emptyset,p_2,\emptyset,p_2^c,\emptyset,p_2,\ldots, f^{\ast}_{N-1},p_{N-1},f^{\ast}_N,p); \] proceed in this way to define a finite sequence ${\mathbf t}_0 \approx {\mathbf t}_1 \approx \cdots \approx {\mathbf t}_{N-1} \approx {\mathbf t}_N = {\mathbf t}_{\operatorname{vert}}$. \end{proof} \begin{lemma} \label{lemma:flipcork} Consider a balanced quadriculated disk ${\cal D}$ and a plug $p \in {\cal P}$. If $N$ is even and $N \ge |p|$ then there exists an even tiling ${\mathbf t} \in {\cal T}({\cal R}_{-N,N;p,p})$ such that $\operatorname{plug}_0({\mathbf t}) = {\mathbf p_\circ}$; in particular, ${\mathbf t} \approx {\mathbf t}_{\operatorname{vert}}$. \end{lemma} The hypothesis $N \ge |p|$ is convenient for the proof but not particularly important. In many examples a weaker hypothesis would also work; we do not attempt to improve this bound. The construction in the proof below will be used again, particularly in the proof of Lemma \ref{lemma:thicksublemma}. \begin{proof} Consider a spanning tree for ${\cal D}$; this spanning tree will be kept fixed during the construction. Define the {\em distance} between two squares of ${\cal D}$ as measured along the spanning tree; in particular, the distance is an even integer if the two squares have the same color and an odd integer otherwise.
In this proof we denote a horizontal domino in ${\cal R}_{-N,N;p,p}$ by $(s_a,s_b,c)$ where $s_a$ and $s_b$ are adjacent squares in ${\cal D}$ and $c \in \{-N+1, \ldots, N\} \subset {\mathbb{Z}}$ denotes the floor; similarly, a vertical domino is denoted by $(s,c,c+1)$. The proof is by induction on the even integer $|p|$. The case $|p| = 0$ (so that $p = {\mathbf p_\circ}$) is trivial (but perhaps too degenerate if we take $N = 0$). In general, given $p \in {\cal P}$ with $|p| \ge 2$, let $\ell$ be the (odd) minimal distance between a black square and a white square in $p$; let $s_0$ and $s_\ell$ be squares which realize this minimum value. Let $\tilde p = p \smallsetminus (s_0 \cup s_\ell) \in {\cal P}$ (with the usual abuse of notation) so that $|\tilde p| = |p| - 2$. Set $\tilde N = N-2$ and consider a tiling ${\mathbf t}_1 \in {\cal T}({\cal R}_{-N,N;p,p})$ such that ${\mathbf t}_{\operatorname{vert}} \approx {\mathbf t}_1$, $\operatorname{plug}_{-\tilde N}({\mathbf t}_1) = \operatorname{plug}_{\tilde N}({\mathbf t}_1) = \tilde p$ and all floors between $-\tilde N+1$ and $\tilde N$ are vertical. The restriction of ${\mathbf t}_1$ to ${\cal R}_{-\tilde N,\tilde N;\tilde p,\tilde p}$ is ${\mathbf t}_{\operatorname{vert}} \in {\cal T}({\cal R}_{-\tilde N,\tilde N;\tilde p,\tilde p})$. The desired result then follows by the induction hypothesis. We are left with constructing ${\mathbf t}_1$. Let $s_0, s_1, \ldots, s_\ell$ be the sequence of squares, read along the spanning tree, from the black square $s_0$ to the white square $s_\ell$. Notice that the minimality of $\ell$ implies that the squares $s_1, \ldots, s_{\ell - 1}$ do not belong to the plug $p$.
The horizontal dominoes of ${\mathbf t}_1$ are: \begin{gather*} (s_1,s_2,-N+1), (s_3,s_4,-N+1), \ldots, (s_{\ell-2},s_{\ell-1},-N+1); \\ (s_0,s_1,-N+2), (s_2,s_3,-N+2), \ldots, (s_{\ell-1},s_{\ell},-N+2); \\ (s_0,s_1,N-1), (s_2,s_3,N-1), \ldots, (s_{\ell-1},s_{\ell},N-1); \\ (s_1,s_2,N), (s_3,s_4,N), \ldots, (s_{\ell-2},s_{\ell-1},N). \end{gather*} All other dominoes of ${\mathbf t}_1$ are vertical, completing the construction of ${\mathbf t}_1$. It follows from Lemma \ref{lemma:eventiling} that ${\mathbf t}_1 \approx {\mathbf t}_{\operatorname{vert}}$. \end{proof} \begin{example} \label{example:R2} Figure \ref{fig:R2} illustrates this construction for ${\cal D} = [0,4]^2$. The even tiling ${\mathbf t}_1$, as in the proof above, is constructed (a few vertical floors are omitted). See also Figure \ref{fig:penultimate} in Section \ref{sect:thick}. \end{example} \begin{figure} \caption{A quadriculated disk ${\cal D} = [0,4]^2$.} \label{fig:R2} \end{figure} \begin{lemma} \label{lemma:cork} Given a quadriculated region ${\cal D} \subset {\mathbb{R}}^2$ as above, if $N \ge 2|{\cal D}|$ and $p \in {\cal P}$ then both corks ${\cal R}_{0,N;{\mathbf p_\circ},p}$, ${\cal R}_{0,N;p,{\mathbf p_\circ}}$ admit tilings. \end{lemma} \begin{proof} From Lemma \ref{lemma:flipcork} there exist tilings of ${\cal R}_{0,N;{\mathbf p_\circ},p}$ for all even $N$, $N \ge |{\cal D}|$: just restrict ${\mathbf t}$ in the statement of Lemma \ref{lemma:flipcork} to ${\cal R}_{0,N;{\mathbf p_\circ},p}$. In particular, taking $p = {\mathbf p_\bullet}$, there exists a tiling of ${\cal R}_{0,|{\cal D}|;{\mathbf p_\circ},{\mathbf p_\bullet}} = {\cal R}_{0,|{\cal D}|-1;{\mathbf p_\circ},{\mathbf p_\circ}}$. Also, there exists a vertical tiling of ${\cal R}_{0,2;{\mathbf p_\circ},{\mathbf p_\circ}}$; by concatenation, there exists a tiling of ${\cal R}_{0,N;{\mathbf p_\circ},{\mathbf p_\circ}}$ for every $N \ge |{\cal D}|$ (either even or odd).
Again by concatenation, there exists a tiling of ${\cal R}_{0,N;{\mathbf p_\circ},p}$ for every $p \in {\cal P}$ and every $N \ge 2|{\cal D}|$ (either even or odd). \end{proof} \begin{remark} \label{remark:cork} Lemma \ref{lemma:cork} gives us an explicit estimate for the minimum $N_p$ such that $N \ge N_p$ implies that ${\cal R}_{0,N;p,{\mathbf p_\circ}}$ admits tilings. We are not trying, however, to obtain the best estimate. A few examples should convince the reader that $N_p$ can often be taken to be significantly smaller than $2|{\cal D}|$. \end{remark} \section{The domino group $G_{{\cal D}}$ and the complex $\mathcal{C}_{{\cal D}}$} \label{sect:groupcomplex} The domino group has already been defined in a combinatorial way in the Introduction, Equation \eqref{eq:dominogroup}. Let us recall the definition. \begin{definition} \label{definition:dominogroup} Consider a fixed quadriculated disk ${\cal D}$. Let ${\cal T}({\cal R}_{\ast})$ be the set of all tilings of all regions ${\cal R}_{N} = {\cal D} \times [0,N]$, $N \in {\mathbb{N}}^\ast$. The equivalence relation $\sim$ of the Introduction is defined on ${\cal T}({\cal R}_{\ast})$: let $G_{{\cal D}}$ be the quotient set ${\cal T}({\cal R}_{\ast})/\sim$. The set $G_{{\cal D}}$ has a group structure. Indeed, the identity is the vertical tiling ${\mathbf t}_{\operatorname{vert}}$ of ${\cal R}_2$. The product of two tilings ${\mathbf t}_1 \in {\cal T}({\cal R}_{N_1})$ and ${\mathbf t}_2 \in {\cal T}({\cal R}_{N_2})$ is the concatenation ${\mathbf t}_1\ast {\mathbf t}_2 \in {\cal T}({\cal R}_{N_1+N_2})$. The inverse of ${\mathbf t}_1 \in {\cal T}({\cal R}_{N_1})$ is obtained by reflection on the horizontal plane. \end{definition} In this section we construct another of our main objects: the complex $\mathcal{C}_{{\cal D}}$. As we shall see in Equation \eqref{equation:dominogroup}, $G_{{\cal D}}$ equals the fundamental group of $\mathcal{C}_{{\cal D}}$, giving us an equivalent definition of the domino group in another language.
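The operations in Definition \ref{definition:dominogroup} are easy to experiment with if a tiling is encoded by its (redundant) alternating list of plugs and reduced floors, as in Equation \ref{equation:floorsandplugs}. The Python sketch below is only an illustration; the encoding of plugs as frozensets of squares and of reduced floors as frozensets of dominoes is our own choice.

```python
# A tiling of R_N encoded as the alternating list
#   [p_0, f*_1, p_1, ..., f*_N, p_N]
# of plugs (frozensets of squares) and reduced floors
# (frozensets of dominoes); this encoding is chosen for illustration only.

def concat(t1, t2):
    """Concatenation t1 * t2: the plugs at the interface must agree."""
    assert t1[-1] == t2[0], "interface plugs differ"
    return t1 + t2[1:]

def inverse(t):
    """Reflection on a horizontal plane: reverse the alternating list."""
    return t[::-1]

def t_vert(squares, n):
    """The vertical tiling of D x [0,n], n even: all reduced floors are
    empty and the plugs alternate between empty and full."""
    empty, full = frozenset(), frozenset(squares)
    t = [empty]
    for _ in range(n // 2):
        t += [frozenset(), full, frozenset(), empty]
    return t

D = [(x, y) for x in range(2) for y in range(2)]  # the 2 x 2 disk
```

With this encoding, `concat(t_vert(D, 2), t_vert(D, 2))` equals `t_vert(D, 4)`, and `t_vert(D, 2)` is its own inverse, as the identity of $G_{{\cal D}}$ must be.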
Given a quadriculated disk ${\cal D}$, we construct a $2$-complex $\mathcal{C}_{{\cal D}}$. The vertices of $\mathcal{C}_{{\cal D}}$ are the plugs $p \in {\cal P}$. We first construct a graph, which is almost (but not quite) the same thing as constructing the $1$-skeleton of $\mathcal{C}_{{\cal D}}$. The (undirected) edges of $\mathcal{C}_{{\cal D}}$ between two vertices (plugs) $p_0, p_1 \in {\cal P}$ are valid (full) floors of the form $f = (p_0, f^{\ast}, p_1)$. Thus, if $p_0$ and $p_1$ are not disjoint there is no edge between them; if $p_0$ and $p_1$ are disjoint, there is one edge for each tiling of ${\cal D}_{p_0,p_1}$. Notice that if $p_0 \ne p_1$ then $f = (p_0, f^{\ast}, p_1)$ and $f^{-1} = (p_1, f^\ast, p_0)$ define two orientations of the same edge. Notice also that there is one loop from ${\mathbf p_\circ}$ to itself for each tiling of ${\cal D}$; there are no other loops in $\mathcal{C}_{{\cal D}}$. There is a natural identification between tilings of the cork ${\cal R}_{N;p_0,p_N}$ and paths of length $N$ in $\mathcal{C}_{{\cal D}}$ from $p_0$ to $p_N$. Tilings of ${\cal R}_N$ correspond to closed paths of length $N$ in $\mathcal{C}_{{\cal D}}$ from ${\mathbf p_\circ}$ to itself. Indeed, both our usual figures of tilings (for instance, Figure \ref{fig:noflip8}) and descriptions as lists of plugs and floors (as in Equation \ref{equation:floorsandplugs}) can be directly interpreted as paths: each floor is an edge, the initial vertex of each edge is a plug indicated by white squares and the final vertex is a plug indicated by black squares. Loops from ${\mathbf p_\circ}$ to itself are interpreted in the graph theoretical sense: if we go from ${\mathbf p_\circ}$ to ${\mathbf p_\circ}$ in one move, we must specify which loop (i.e., which floor) is used, and nothing else. In particular, we do not have to specify what ``orientation'' of the loop was used; consistently, no such ``orientation'' exists for a horizontal floor. 
In this sense a graph with loops is not quite the same as a $1$-complex: this difficulty will be addressed by attaching certain $2$-cells (see below). Let us stay with the graph theoretical point of view for a little longer. It follows from Lemma \ref{lemma:cork} that $\mathcal{C}_{{\cal D}}$ is path connected and not bipartite: for sufficiently large $N$ there exist paths of length $N$ from any initial vertex to any final vertex. Let $A = A_{{\cal D}} \in {\mathbb{Z}}^{{\cal P} \times {\cal P}}$ be the adjacency matrix of $\mathcal{C}_{{\cal D}}$. We have $A_{(p,\tilde p)} = |{\cal T}({\cal D}_{p,\tilde p})|$; we set $|{\cal T}({\cal D}_{p,\tilde p})| = 0$ if $p$ and $\tilde p$ are not disjoint. From the description of tilings of corks as paths in $\mathcal{C}_{{\cal D}}$ we have \begin{equation} \label{eq:count} |{\cal T}({\cal R}_{0,N;p_{0},p_{N}})| = (A^{N})_{(p_0,p_N)} = \sum_{(p_1,\ldots,p_{N-1}) \in {\cal P}^{N-1}} \left( \prod_{1 \le j \le N} |{\cal T}({\cal D}_{p_{j-1},p_j})| \right). \end{equation} \goodbreak \begin{lemma} \label{coro:A1Npos} Consider a balanced quadriculated disk ${\cal D} \subset {\mathbb{R}}^2$; let $A = A_{{\cal D}}$ be the adjacency matrix of $\mathcal{C}_{{\cal D}}$. If $N \ge 4|{\cal D}|$ then all entries of $A^N$ are strictly positive. \end{lemma} \begin{proof} This is equivalent to saying that for all $N \ge 4|{\cal D}|$ and for all $p, \tilde p \in {\cal P}$ the cork ${\cal R}_{N;p,\tilde p}$ admits a tiling. Consider plugs $p, \tilde p \in {\cal P}$. Use Lemma \ref{lemma:cork} to obtain ${\mathbf t} \in {\cal T}({\cal R}_{0,2|{\cal D}|;p,{\mathbf p_\circ}})$ and $\tilde{\mathbf t} \in {\cal T}({\cal R}_{2|{\cal D}|,N;{\mathbf p_\circ},\tilde p})$ (the second one requires a translation by $(0,0,2|{\cal D}|)$). Concatenate ${\mathbf t}$ and $\tilde{\mathbf t}$ to obtain the desired tiling of ${\cal R}_{0,N;p,\tilde p}$. \end{proof} We complete the construction of the complex $\mathcal{C}_{{\cal D}}$ by attaching $2$-cells. 
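The counting formula in Equation \eqref{eq:count} can be checked numerically. The following Python sketch is an illustration only: the brute-force tiling counter and the choice of the tiny trivial disk ${\cal D}$ with two squares are ours. For this ${\cal D}$, tilings of ${\cal R}_N$ correspond to tilings of a $2 \times N$ rectangle, so the entries $(A^N)_{({\mathbf p_\circ},{\mathbf p_\circ})}$ are Fibonacci numbers.

```python
from itertools import combinations

# The trivial disk D with two squares; R_N = D x [0,N] is a 1 x 2 x N box.
CELLS = [(0, 0), (0, 1)]

def count_tilings(cells):
    """Count domino tilings of a planar set of unit squares by backtracking."""
    cells = frozenset(cells)
    if not cells:
        return 1
    s = min(cells)  # always pair the smallest remaining square
    return sum(count_tilings(cells - {s, t}) for t in cells
               if abs(s[0] - t[0]) + abs(s[1] - t[1]) == 1)

# Plugs are subsets of D; A_(p,q) = |T(D_{p,q})|, zero unless p, q are disjoint.
plugs = [frozenset(c) for k in range(len(CELLS) + 1)
         for c in combinations(CELLS, k)]
A = [[count_tilings(set(CELLS) - p - q) if not (p & q) else 0
      for q in plugs] for p in plugs]

def tilings_of_RN(n):
    """|T(R_n)| as the (p0, p0) entry of A^n, p0 the empty plug."""
    e = plugs.index(frozenset())
    v = [1 if i == e else 0 for i in range(len(plugs))]
    for _ in range(n):
        v = [sum(v[i] * A[i][j] for i in range(len(plugs)))
             for j in range(len(plugs))]
    return v[e]
```

For this ${\cal D}$ the sequence $|{\cal T}({\cal R}_N)|$, $N = 1, 2, 3, \ldots$, is $1, 2, 3, 5, 8, 13, \ldots$, as expected.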
First, we address the fact that a graph with loops is not quite the same thing as a $1$-complex. We solve this by attaching a disk with boundary $f\ast f$ for each floor $f$ of the form $f = ({\mathbf p_\circ},f^\ast,{\mathbf p_\circ})$, i.e., for each loop: this guarantees that $f$ and $f^{-1}$ are now homotopic. The other $2$-cells correspond to flips, and it is convenient to describe separately horizontal flips, which will correspond to bigons, and vertical flips, which will correspond to quadrilaterals. Let $p_0, p_1 \in {\cal P}$ be disjoint plugs; let $f^{\ast}_a$ and $f^{\ast}_b$ be tilings of ${\cal D}_{p_0, p_1}$ joined by a flip. Attach a bigon with vertices $p_0$ and $p_1$ and edges $f_a = (p_0,f^{\ast}_a,p_1)$ and $f_b = (p_0,f^{\ast}_b,p_1)$; see an example in Figure \ref{fig:hflip}. \begin{figure} \caption{A horizontal flip defines a $2$-cell in $\mathcal{C}_{{\cal D}}$.} \label{fig:hflip} \end{figure} Let $p_0, p_1, \tilde p_1, p_2 \in {\cal P}$ be plugs. Assume that $p_1$ is obtained from $\tilde p_1$ by removing two adjacent squares; let $d$ be the domino formed by these two squares. Assume that $\tilde p_1$ is disjoint from both $p_0$ and $p_2$; notice that this implies that $p_1$ is likewise disjoint from both $p_0$ and $p_2$. Let $\tilde f^{\ast}_1$ and $\tilde f^{\ast}_2$ be tilings of ${\cal D}_{p_0,\tilde p_1}$ and ${\cal D}_{\tilde p_1,p_2}$, respectively. Let $f^{\ast}_1$ and $f^{\ast}_2$ be tilings of ${\cal D}_{p_0, p_1}$ and ${\cal D}_{p_1,p_2}$ obtained from $\tilde f^{\ast}_1$ and $\tilde f^{\ast}_2$, respectively, by adding the domino $d$. Attach a quadrilateral with vertices $p_0, p_1, \tilde p_1, p_2$ and edges $f_1 = (p_0,f^{\ast}_1,p_1)$, $\tilde f_1 = (p_0,\tilde f^{\ast}_1,\tilde p_1)$, $f_2 = (p_1,f^{\ast}_2,p_2)$ and $\tilde f_2 = (\tilde p_1, \tilde f^{\ast}_2, p_2)$; see an example in Figure \ref{fig:vflip}.
\begin{figure} \caption{A vertical flip defines a $2$-cell in $\mathcal{C}_{{\cal D}}$.} \label{fig:vflip} \end{figure} The above identification between tilings of corks and paths in $\mathcal{C}_{{\cal D}}$ gives us a natural concatenation operation. Let $p_0, p_1, p_2 \in {\cal P}$ be plugs. Let ${\cal R}_{01} = {\cal R}_{N_0,N_1;p_0,p_1}$, ${\cal R}_{12} = {\cal R}_{N_1,N_2;p_1,p_2}$ and ${\cal R}_{02} = {\cal R}_{N_0,N_2;p_0,p_2}$ be corks. If ${\mathbf t}_{01} \in {\cal T}({\cal R}_{01})$ and ${\mathbf t}_{12} \in {\cal T}({\cal R}_{12})$ are tilings, concatenate them to define ${\mathbf t}_{02} = {\mathbf t}_{01} \ast {\mathbf t}_{12} \in {\cal T}({\cal R}_{02})$. The dominoes of ${\mathbf t}_{02}$ are: the dominoes of ${\mathbf t}_{01}$, the dominoes of ${\mathbf t}_{12}$ and vertical dominoes of the form $s \times [N_1-1,N_1+1]$ for $s \subset {\cal D}$ a square in the plug $p_1$. We thus have $\ast: {\cal T}({\cal R}_{01}) \times {\cal T}({\cal R}_{12}) \to {\cal T}({\cal R}_{02})$, \[ {\cal T}({\cal R}_{01}) \ast {\cal T}({\cal R}_{12}) = \{ {\mathbf t} \in {\cal T}({\cal R}_{02}) \;|\; \operatorname{plug}_{N_1}({\mathbf t}) = p_1 \}. \] \begin{lemma} \label{lemma:movevert} Let ${\cal D}$ be a balanced quadriculated disk; let $p_0, p_1 \in {\cal P}$ be plugs; let ${\mathbf t} \in {\cal T}({\cal R}_{N_0,N_1;p_0,p_1})$ be a tiling. Let ${\mathbf t}_{\operatorname{vert},p_0} \in {\cal T}({\cal R}_{N_0-2,N_0;p_0,p_0})$ and ${\mathbf t}_{\operatorname{vert},p_1} \in {\cal T}({\cal R}_{N_1,N_1+2;p_1,p_1})$ be the vertical tilings in the corks above. Let $\tilde{\mathbf t}_0 = {\mathbf t}_{\operatorname{vert},p_0} \ast {\mathbf t} \in {\cal T}({\cal R}_{N_0-2,N_1;p_0,p_1})$, $\tilde{\mathbf t}_1 = {\mathbf t} \ast {\mathbf t}_{\operatorname{vert},p_1} \in {\cal T}({\cal R}_{N_0,N_1+2;p_0,p_1})$. Apply a translation by $(0,0,2)$ (and abuse notation): $\tilde{\mathbf t}_0 \in {\cal T}({\cal R}_{N_0,N_1+2;p_0,p_1})$. Then $\tilde{\mathbf t}_0 \approx \tilde{\mathbf t}_1$.
\end{lemma} \begin{proof} In order to transform $\tilde{\mathbf t}_0$ into $\tilde{\mathbf t}_1$ it is better to focus on the horizontal dominoes and think of the vertical dominoes as background. We have to move every horizontal domino down by two floors. This is performed in increasing order of the $z$ coordinate, thus guaranteeing that, at the time a horizontal domino must move, the four unit cubes one or two floors below it are filled in by two vertical dominoes. Two flips then have the desired effect of moving down the horizontal domino. \end{proof} In the Introduction we gave a combinatorial definition of the equivalence relation $\sim$ which is weaker than $\approx$; we now extend it to tilings of corks. Consider ${\mathbf t}_0 \in {\cal T}({\cal R}_{0,N_0;p_a,p_b})$ and ${\mathbf t}_1 \in {\cal T}({\cal R}_{0,N_1;p_a,p_b})$ with $N_0$ and $N_1$ of the same parity. We write ${\mathbf t}_0 \sim {\mathbf t}_1$ if and only if there exist $M_0, M_1 \in 2{\mathbb{N}}$ with $N_0 + M_0 = N_1 + M_1$ and ${\mathbf t}_0 \ast {\mathbf t}_{\operatorname{vert},0} \approx {\mathbf t}_1 \ast {\mathbf t}_{\operatorname{vert},1}$ where ${\mathbf t}_{\operatorname{vert},i}$ is the vertical tiling of ${\cal R}_{N_i,N_i+M_i;p_b,p_b}$. The following lemma gives an alternative definition which is topological and natural. \begin{lemma} \label{lemma:homotopy} Consider ${\mathbf t}_0 \in {\cal T}({\cal R}_{0,N_0;p_a,p_b})$ and ${\mathbf t}_1 \in {\cal T}({\cal R}_{0,N_1;p_a,p_b})$: we have ${\mathbf t}_0 \sim {\mathbf t}_1$ if and only if the paths ${\mathbf t}_0$ and ${\mathbf t}_1$ from $p_a$ to $p_b$ are homotopic with fixed endpoints. \end{lemma} \begin{proof} First notice that the moves in the definition of $\sim$ (applying flips, adding or deleting pairs of vertical floors at the endpoint) are examples of homotopies with fixed endpoints, proving one implication.
For the other direction, notice that a homotopy with fixed endpoints between ${\mathbf t}_0$ and ${\mathbf t}_1$ allows for flips since in the definition of $\mathcal{C}_{{\cal D}}$ there are $2$-cells corresponding to flips. A homotopy allows for an extra operation. At any vertex (plug) $p_i$ and for any edge (floor) $f = (p_i,f^\ast,\tilde p)$ we may add two new edges to the path, thus modifying the path from ${\mathbf t} = (\ldots,f_i,p_i,f_{i+1},\ldots)$ (of length $N$) to $\tilde{\mathbf t} = (\ldots,f_i,p_i,f,\tilde p,f^{-1},p_i,f_{i+1},\ldots)$ (of length $N+2$). We may also do the same operation in reverse. We must therefore check that ${\mathbf t} \sim \tilde{\mathbf t}$. Indeed, add two vertical floors at the end of ${\mathbf t}$. Use Lemma \ref{lemma:movevert} to move (by flips) the two vertical floors to position $i$, thus obtaining the path (tiling) $(\ldots,f_i,p_i,f_{\operatorname{vert}},p_i^c,f_{\operatorname{vert}},p_i,f_{i+1},\ldots)$ (of length $N+2$). The above path is equivalent to $\tilde{\mathbf t}$ by vertical flips, as discussed in the proof of Lemma \ref{lemma:eventiling}. Also, the $2$-cells allow for a few other operations which may, at first, look new and require checking. For instance, the $2$-cell in Figure \ref{fig:vflip} was introduced to allow us to move from $(\ldots,p_0,f_1,p_1,f_2,p_2,\ldots)$ to $(\ldots,p_0,\tilde f_1,\tilde p_1,\tilde f_2,p_2,\ldots)$ (or vice-versa), which is a vertical flip and therefore consistent with $\sim$ (and even with $\approx$). This cell also has the perhaps unexpected effect of allowing us to move from $(\ldots,p_1,f_1^{-1},p_0,\tilde f_1,\tilde p_1, \ldots)$ to $(\ldots,p_1,f_2,p_2,\tilde f_2^{-1},\tilde p_1, \ldots)$. It is not hard to verify that this example is also consistent with $\approx$.
A case by case study reveals that all moves are indeed consistent with $\sim$; alternatively, adding and deleting pairs of matching floors shows that the previous paragraphs already provide a complete proof. \end{proof} Given a balanced quadriculated disk ${\cal D}$, we defined in the Introduction the {\em domino group $G_{{\cal D}}$}: elements of the group are tilings modulo $\sim$ and the operation is concatenation. It follows from Lemma \ref{lemma:homotopy} and Definition \ref{definition:dominogroup} that \begin{equation} \label{equation:dominogroup} G_{{\cal D}} = \pi_1(\mathcal{C}_{{\cal D}},{\mathbf p_\circ}), \end{equation} the fundamental group of the complex $\mathcal{C}_{{\cal D}}$ with base point ${\mathbf p_\circ} \in {\cal P}$. There is a homomorphism $G_{{\cal D}} \to {\mathbb{Z}}/(2)$ taking ${\mathbf t} \in {\cal T}({\cal R}_{N})$ to $N \bmod 2$. The normal subgroup $G^{+}_{{\cal D}} < G_{{\cal D}}$ is the kernel of this homomorphism. If ${\cal D}$ is tileable, a tiling ${\mathbf t}_c$ of ${\cal R}_1$ defines an element $c \in G_{{\cal D}}$ of order $2$: indeed, ${\mathbf t}_c \ast {\mathbf t}_c \approx {\mathbf t}_{\operatorname{vert}} \in {\cal T}({\cal R}_2)$. Let $H = \{e, c\} < G_{{\cal D}}$: then $G_{{\cal D}}$ is a semidirect product $G_{{\cal D}} = G^{+}_{{\cal D}} \ltimes H$; as we shall see, this is sometimes but not always a direct product. Consider the corresponding double cover $\mathcal{C}^{+}_{{\cal D}}$. The set of vertices of $\mathcal{C}^{+}_{{\cal D}}$ is ${\cal P}^{+} = {\cal P} \times {\mathbb{Z}}/(2)$: its elements are {\em plugs with parity}: a plug $p$ with the extra information of the parity of its position. Similarly, an oriented edge of $\mathcal{C}^{+}_{{\cal D}}$ is a {\em floor with parity}, a pair $(f,k)$ where $k \in {\mathbb{Z}}/(2)$ indicates the parity of its position; notice that $(f,k)^{-1} = (f^{-1},k+1)$.
It follows from Lemma \ref{lemma:trivialdisk} that if ${\cal D}$ is trivial then the domino group $G_{{\cal D}}$ is isomorphic to ${\mathbb{Z}}/(2)$. As we shall see in the next section, if ${\cal D}$ is nontrivial then $G_{{\cal D}}$ is infinite. \section{Twist} \label{sect:twist} Following \cite{segundoartigo}, we first recall the definition of the twist of a tiling ${\mathbf t} \in {\cal T}({\cal R}_N)$ (there are other definitions for other regions, as discussed in \cite{FKMS}). For a domino $d$, let $v(d) \in \{\pm e_1, \pm e_2, \pm e_3\} \subset {\mathbb{R}}^3$ be the unit vector from the center of the white cube to the center of the black cube of $d$. For $u \in \{\pm e_1, \pm e_2\}$, define the \emph{$u$-shade} of $X \subset {\mathbb{R}}^3$ to be \[ {\cal S}^{u}(X) = \operatorname{int}((X + [0,+\infty) u) \smallsetminus X); \quad X + [0,+\infty) u = \{x+tu; x \in X, t \in [0,+\infty)\}. \] (The case $u = \pm e_3$, which is also discussed in \cite{segundoartigo}, will not be considered in the present paper.) Given a tiling ${\mathbf t} \in {\cal T}({\cal R})$ and two dominoes $d_0$ and $d_1$ of ${\mathbf t}$, define the {\em effect of $d_0$ on $d_1$ along $u$} as $\tau^{u}(d_0,d_1) \in \{0,\pm\frac14\}$: \[ \tau^{u}(d_0,d_1) = \begin{cases} \frac14 \det(v(d_1),v(d_0),u), & d_1 \cap {\cal S}^{u}(d_0) \ne \emptyset, \\ 0, & \textrm{otherwise.} \end{cases} \] The {\em twist} of ${\mathbf t}$ is \[ \operatorname{Tw}({\mathbf t}) = \sum_{d_0,d_1 \in {\mathbf t}} \tau^{u}(d_0,d_1). \] It is shown in \cite{segundoartigo} that the value of $\operatorname{Tw}({\mathbf t})$ is always an integer and that it does not depend on the choice of $u \in \{\pm e_1, \pm e_2\}$ (see also Remark \ref{remark:cocycle}).
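The definitions above translate directly into code. In the Python sketch below (an illustration only), dominoes are pairs of unit cubes indexed by their integer corners; the coloring convention that a cube with even coordinate sum is black is our assumption. For a small mixed tiling of the $2\times 2\times 2$ box the twist is $0$ for every admissible $u$, as it must be since the disk $[0,2]^2$ is trivial.

```python
E1, E2 = (1, 0, 0), (0, 1, 0)

def det(r1, r2, r3):
    # determinant of the 3x3 matrix with rows r1, r2, r3
    (a, b, c), (d, e, f), (g, h, i) = r1, r2, r3
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

def v(d):
    """Unit vector from the white cube of d to its black cube.
    Coloring convention (our assumption): (x,y,z) is black iff x+y+z is even."""
    c0, c1 = d
    black, white = (c0, c1) if sum(c0) % 2 == 0 else (c1, c0)
    return tuple(b - w for b, w in zip(black, white))

def in_shade(d0, d1, u, reach=16):
    """Does d1 meet the u-shade of d0 (cells strictly beyond d0 along u)?
    'reach' just needs to exceed the size of the region."""
    shade = {tuple(c + k * x for c, x in zip(cell, u))
             for cell in d0 for k in range(1, reach)} - set(d0)
    return any(cell in shade for cell in d1)

def twist(t, u):
    """Tw(t): sum of tau^u(d0, d1) over ordered pairs of dominoes of t."""
    return sum(det(v(d1), v(d0), u)
               for d0 in t for d1 in t if in_shade(d0, d1, u)) / 4

# A tiling of the 2x2x2 box: two vertical dominoes over x = 0,
# two horizontal dominoes (along y) over x = 1.
t = [((0, 0, 0), (0, 0, 1)), ((0, 1, 0), (0, 1, 1)),
     ((1, 0, 0), (1, 1, 0)), ((1, 0, 1), (1, 1, 1))]
```

For this `t` the individual contributions along $u = e_1$ are $\pm\frac14$ but cancel, and the same total, $0$, is obtained for $u = \pm e_1, \pm e_2$.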
\begin{figure} \caption{The value of $4\tau^{e_2}(d,s)$ in examples.} \label{fig:tau} \end{figure} Notice that $\tau^{u}(d_0,d_1) = 0$ unless, for some $i \in \{0,1\}$ and for some $j$, we have that $d_i$ is horizontal and contained in ${\cal D} \times [j-1,j]$ and $d_{1-i}$ is vertical and intersects ${\cal D} \times [j-1,j]$. Given a planar domino $d \subset {\cal D}$ and a unit square $s \subset {\cal D}$ with disjoint interiors, define \[ \tau^u(d,s) = \tau^u(d \times [0,1], s \times [0,2]) + \tau^u(s \times [0,2], d \times [0,1]) \in \{0,\pm\frac14\}; \] examples are given in Figure \ref{fig:tau}. Given two disjoint plugs $p, \tilde p \in {\cal P}$ and a planar tiling $f \in {\cal T}({\cal D}_{p,\tilde p})$ define \begin{equation} \label{equation:cocycle} \tau^u(f,p) = \sum_{d \in f, s \in p} \tau^u(d,s) \in \frac14{\mathbb{Z}}; \qquad \tau^u(f; p, \tilde p) = \tau^u(f,\tilde p) - \tau^u(f,p) \in \frac14 {\mathbb{Z}}. \end{equation} Notice that $\tau^u(f; \tilde p, p) = -\tau^u(f; p, \tilde p)$. For ${\mathbf t} \in {\cal T}({\cal R}_N)$ we clearly have \[ \operatorname{Tw}({\mathbf t}) = \sum_{0 < j \le N} \tau^u(\operatorname{floor}_j({\mathbf t}); \operatorname{plug}_{j-1}({\mathbf t}), \operatorname{plug}_j({\mathbf t})). \] The values of $\tau^u(f,p)$ and $\tau^u(f;p,\tilde p)$ usually depend on the choice of $u$ and neither is necessarily an integer. Other papers (including \cite{FKMS}, \cite{primeiroartigo} and \cite{segundoartigo}) discuss several ways to think about the twist; we shall see another way in this paper. The twist defines a homomorphism $\operatorname{Tw}: G_{{\cal D}} \to {\mathbb{Z}}$. Recall that if ${\cal D}$ is trivial then $G_{{\cal D}} \approx {\mathbb{Z}}/(2)$ and therefore the map $\operatorname{Tw}$ is identically $0$. \begin{remark} \label{remark:cocycle} Equation \ref{equation:cocycle} defines $\tau^u$ as a function taking oriented edges of $\mathcal{C}_{{\cal D}}$ to real numbers.
In the language of homology, $\tau^u \in C^1(\mathcal{C}_{{\cal D}};{\mathbb{R}})$. It is not hard to check that $\tau^u \in Z^1 \subset C^1$, i.e., that $\tau^u$ is a cocycle; also, $\tau^{e_1} - \tau^{e_2} \in B^1 \subset Z^1$. The two cocycles $\tau^{e_1}$ and $\tau^{e_2}$ therefore define the same element of the cohomology: $[\tau^{e_1}] = [\tau^{e_2}] \in H^1(\mathcal{C}_{{\cal D}};{\mathbb{R}})$. It is well known that there exists a natural isomorphism $\operatorname{Hom}(\pi_1(X);{\mathbb{Z}}) \approx H^1(X)$. By following the construction of this isomorphism, we see that $\operatorname{Tw} \in \operatorname{Hom}(\pi_1(\mathcal{C}_{{\cal D}});{\mathbb{Z}})$ is taken to $[\tau^u]$ (for either $u = e_1$ or $u = e_2$). This provides us with an alternative justification for the facts that $\operatorname{Tw}$ does not depend on the choice of $u$ and is invariant under flips. \end{remark} \begin{lemma} \label{lemma:nontrivialtwist} Let ${\cal D}$ be a nontrivial balanced quadriculated disk and $N \ge 4|{\cal D}| + 3$. There exist tilings ${\mathbf t}_0$ and ${\mathbf t}_1$ of ${\cal R}_N = {\cal D} \times [0,N]$ with $\operatorname{Tw}({\mathbf t}_1) = \operatorname{Tw}({\mathbf t}_0) + 1$. In particular, the restriction $\operatorname{Tw}: G^{+}_{{\cal D}} \to {\mathbb{Z}}$ is surjective. \end{lemma} Again, we are not trying to obtain sharp estimates. For ${\cal D} = [0,2] \times [0,3]$, there exist tilings of twists $-1$, $0$ and $1$ in ${\cal R}_N = [0,2] \times [0,3] \times [0,N]$ for $N \ge 3$, as illustrated in Figure \ref{fig:233} for $N = 3$. As another example, Figure \ref{fig:girafa} shows tilings of twists $-1$, $0$ and $1$ of ${\cal R}_5$ for another nontrivial quadriculated disk ${\cal D}$.
\begin{figure} \caption{Tilings of twist $-1$, $0$ and $1$ of the box $[0,2] \times [0,3] \times [0,3]$.} \label{fig:233} \end{figure} \begin{figure} \caption{Tilings of twist $-1$, $0$ and $1$ of the region ${\cal D} \times [0,5]$.} \label{fig:girafa} \end{figure} \begin{proof} Assume without loss of generality that at least one square has neighbors in the directions $e_1$ and $\pm e_2$. Figure \ref{fig:nontrivialtwist} shows floors $2|{\cal D}|+1$, $2|{\cal D}|+2$ and $2|{\cal D}|+3$ for tilings ${\mathbf t}_0$ and ${\mathbf t}_1$. The other squares are filled in by vertical dominoes filling floors $2|{\cal D}|+1$ and $2|{\cal D}|+2$. Lemma \ref{lemma:cork} guarantees that the first $2|{\cal D}|$ floors and the last $N - (2|{\cal D}| + 3) \ge 2|{\cal D}|$ floors can be consistently tiled: tile them in the same way for ${\mathbf t}_0$ and ${\mathbf t}_1$. Set \[ d_j = \tau^{e_2}(\operatorname{floor}_j({\mathbf t}_1); \operatorname{plug}_{j-1}({\mathbf t}_1), \operatorname{plug}_j({\mathbf t}_1)) - \tau^{e_2}(\operatorname{floor}_j({\mathbf t}_0); \operatorname{plug}_{j-1}({\mathbf t}_0), \operatorname{plug}_j({\mathbf t}_0)). \] We have $d_j = 1$ for $j = 2|{\cal D}|+2$ and $d_j = 0$ otherwise. Thus $\operatorname{Tw}({\mathbf t}_1) - \operatorname{Tw}({\mathbf t}_0) = 1$, completing the proof. \end{proof} \begin{figure} \caption{Two tilings with twists differing by one. The central square in the first column has at least three neighbors. The two detached squares must exist (in some position in ${\cal D}$).} \label{fig:nontrivialtwist} \end{figure} Recall from the Introduction that a quadriculated disk ${\cal D}$ is {\em regular} if the map $\operatorname{Tw}: G^{+}_{{\cal D}} \to {\mathbb{Z}}$ is an isomorphism: it then follows that $G_{{\cal D}}$ is isomorphic to ${\mathbb{Z}} \oplus ({\mathbb{Z}}/(2))$. The aim of Sections \ref{sect:thin} to \ref{sect:thick} is to prove Theorem \ref{theo:rectangle}: a rectangle $[0,L] \times [0,M]$ is regular if and only if $\min\{L,M\} \ge 3$.
\section{Thin rectangles} \label{sect:thin} Consider ${\cal D} = [0,2] \times [0,3]$ and the two tilings ${\mathbf t}_0, {\mathbf t}_1 \in {\cal T}({\cal R}_4)$ shown in Figure \ref{fig:234}. The tilings satisfy $\operatorname{Tw}({\mathbf t}_0) = \operatorname{Tw}({\mathbf t}_1) = +1$ and ${\mathbf t}_0 \not\sim {\mathbf t}_1$. The second claim is not obvious; it follows from the main result in this section. \begin{figure} \caption{Two tilings ${\mathbf t}_0, {\mathbf t}_1 \in {\cal T}({\cal R}_4)$.} \label{fig:234} \end{figure} Let $F_2$ be the free group in $2$ generators $a$ and $b$. Let $\psi: {\mathbb{Z}}/(2) \to \operatorname{Aut}(F_2)$ be defined by $\psi(1)(a) = b^{-1}$, $\psi(1)(b) = a^{-1}$; let $G_2 = F_2 \ltimes_{\psi} {\mathbb{Z}}/(2)$ be the corresponding semidirect product; in other words, a presentation of $G_2$ is: \begin{equation} \label{eq:presentation} G_2 = \langle a, b, c | c^2 = e, cac = b^{-1}, cbc = a^{-1} \rangle. \end{equation} \begin{lemma} \label{lemma:thin} Let ${\cal D} = [0,2] \times [0,M]$, $M \ge 3$. Then there exists a surjective homomorphism $\phi: G_{{\cal D}} \to G_2$. \end{lemma} As we shall see, for ${\cal D} = [0,2]\times [0,3]$ and ${\mathbf t}_0, {\mathbf t}_1 \in {\cal T}({\cal R}_4)$ as in Figure \ref{fig:234}, we have $\phi({\mathbf t}_0) = a$ and $\phi({\mathbf t}_1) = b^{-1}$, which implies ${\mathbf t}_0 \not\sim {\mathbf t}_1$. The idea of the proof is to construct an explicit map taking oriented edges of $\mathcal{C}_{{\cal D}}$ to $G_2$. Most edges are taken to the identity; the exceptions are explicitly listed in Figures \ref{fig:oddfree} and \ref{fig:evenfree} and in Table \ref{tab:phree}. These edges belong to the boundary of a small number of $2$-cells and it is therefore easy to check that the boundary of such cells is taken to the identity.
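The group $G_2$ is small enough to experiment with directly. In the Python sketch below (an illustration only; the encoding of reduced words as strings, with capital letters denoting inverse generators, is our own choice), an element of $G_2 = F_2 \ltimes_{\psi} {\mathbb{Z}}/(2)$ is a pair $(w,k)$ with $w$ a reduced word and $k \in \{0,1\}$, the product twisting the second word by $\psi^{k}$; the defining relations of Equation \ref{eq:presentation} can then be checked mechanically.

```python
PSI = str.maketrans('aAbB', 'BbAa')  # psi(1): a -> b^{-1}, b -> a^{-1}

def reduce_word(w):
    """Freely reduce a word; capital letters denote inverse generators."""
    out = []
    for x in w:
        if out and out[-1] == x.swapcase():
            out.pop()  # cancel an adjacent inverse pair
        else:
            out.append(x)
    return ''.join(out)

def mul(g, h):
    """(w1, k1)(w2, k2) = (w1 . psi^k1(w2), k1 + k2) in the semidirect product."""
    (w1, k1), (w2, k2) = g, h
    w2 = w2.translate(PSI) if k1 else w2
    return (reduce_word(w1 + w2), (k1 + k2) % 2)

def inv(g):
    """Inverse: (w, k)^{-1} = (psi^k(w^{-1}), k), since psi has order 2."""
    w, k = g
    w = w[::-1].swapcase()
    return (w.translate(PSI) if k else w, k)

E, A, B, C = ('', 0), ('a', 0), ('b', 0), ('', 1)
```

In this encoding $c^2 = e$, $cac = b^{-1}$ and $cbc = a^{-1}$ hold, while $ab \ne ba$; in particular the images $a$ and $b^{-1}$ of the tilings ${\mathbf t}_0$ and ${\mathbf t}_1$ above are visibly distinct elements.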
\begin{proof} We first construct the restriction $\phi: G^{+}_{{\cal D}} \to F_2$ by working in $\mathcal{C}_{{\cal D}}^{+}$, the double cover of $\mathcal{C}_{{\cal D}}$ constructed at the end of Section \ref{sect:groupcomplex}. We provide an explicit map taking each floor with parity ${\mathbf f} = (f,k) = (p_0,f^{\ast},p_1,k)$ to $\phi({\mathbf f}) \in \{e,a,a^{-1},b,b^{-1}\} \subset F_2$ (recall that floors with parity are edges of $\mathcal{C}^{+}_{{\cal D}}$). Most floors are taken to the identity element $e$. It is helpful to consider first the case $M$ odd, then the case $M$ even. For odd $M$, $\phi({\mathbf f}) = e$ {\em unless}: the central column $[0,2] \times [\frac{M-1}{2},\frac{M+1}{2}]$ is occupied by a domino of $f^{\ast}$ {\em and} the plug $p_0$ marks exactly $\frac{M-1}{2}$ squares in the region $[0,2] \times [0,\frac{M-1}{2}]$, all of the same color. It then follows that all squares in ${\cal D} \smallsetminus [0,2] \times [\frac{M-1}{2},\frac{M+1}{2}]$ are marked by either $p_0$ or $p_1$, and the marking follows a checkerboard pattern. Given odd $M$, there exist only two floors $f_0$ and $f_1 = f_0^{-1}$ (and four floors with parity) satisfying the conditions above, shown in Figure \ref{fig:oddfree} for $M = 7$. Set ${\mathbf f}_0 = (f_0,0)$ and ${\mathbf f}_1 = (f_1,0)$ so that ${\mathbf f}_0^{-1} = (f_1,1)$ and ${\mathbf f}_1^{-1} = (f_0,1)$. \begin{figure} \caption{The floors $f_0$ and $f_1$ for ${\cal D} = [0,2] \times [0,7]$.} \label{fig:oddfree} \end{figure} Finally, set $\phi({\mathbf f}_0) = a$, $\phi({\mathbf f}_0^{-1}) = a^{-1}$, $\phi({\mathbf f}_1) = b$ and $\phi({\mathbf f}_1^{-1}) = b^{-1}$. In order to verify that $\phi$ indeed defines a group homomorphism $\phi: G^{+}_{{\cal D}} = \pi_1(\mathcal{C}^{+}_{{\cal D}}) \to F_2$ we must verify that for every $2$-cell the oriented boundary is taken to $e$. Since neither $f_0$ nor $f_1$ is part of the boundary of any $2$-cell, the result follows.
For even $M$, $\phi({\mathbf f}) = e$ {\em unless} exactly one of the following conditions holds: \begin{enumerate} \item{the column $[0,2] \times [\frac{M}{2}-1,\frac{M}{2}]$ is occupied by a domino of $f^{\ast}$ {\em and} the plug $p_0$ marks exactly $\frac{M}{2}-1$ squares in the region $[0,2] \times [0,\frac{M}{2}-1]$, all of the same color.} \item{the column $[0,2] \times [\frac{M}{2},\frac{M}{2}+1]$ is occupied by a domino of $f^{\ast}$ {\em and} the plug $p_0$ marks exactly $\frac{M}{2}-1$ squares in the region $[0,2] \times [\frac{M}{2}+1,M]$, all of the same color.} \end{enumerate} There are then four classes of floors for which $\phi$ is non trivial, shown in Figure \ref{fig:evenfree}; call them class $0$, $1$, $2$ and $3$ (in the order shown). \begin{figure} \caption{Four classes of floors for ${\cal D} = [0,2] \times [0,M]$, $M$ even.} \label{fig:evenfree} \end{figure} If ${\mathbf f} = (f,k)$ is of class $j$, the value of $\phi({\mathbf f})$ is shown in Table \ref{tab:phree}. \begin{table}[ht] \begin{center} \begin{tabular}{c | c c c c c c c c} $(j,k)$ & $(0,0)$ & $(1,0)$ & $(2,0)$ & $(3,0)$ & $(0,1)$ & $(1,1)$ & $(2,1)$ & $(3,1)$ \\ [0.5ex] \hline $\vphantom{\frac{\tilde f}{2}}\phi({\mathbf f})$ & $a$ & $b$ & $a^{-1}$ & $b^{-1}$ & $b^{-1}$ & $a^{-1}$ & $b$ & $a$ \\ \end{tabular} \caption{Values of $\phi({\mathbf f})$.} \label{tab:phree} \end{center} \end{table} Notice that the floors shown in Figure \ref{fig:badfloors} satisfy both conditions above: by definition, $\phi({\mathbf f}) = e$ if ${\mathbf f} = (f,k)$ for either example and for either value of $k$. \begin{figure} \caption{Floors for ${\cal D} = [0,2] \times [0,M]$, $M$ even, satisfying both conditions above.} \label{fig:badfloors} \end{figure} A case-by-case check shows that $\phi$ extends to a homomorphism since it takes the boundary of any $2$-cell to $e$. This completes the construction of the restriction $\phi: G^{+}_{{\cal D}} \to F_2$ for ${\cal D} = [0,2] \times [0,M]$, $M \ge 3$. 
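Notice also that the values in Table \ref{tab:phree} are equivariant with respect to the change of parity: for a floor $f$ of any of the four classes,
\[
\phi\bigl((f,1)\bigr) = \psi(1)\bigl(\phi\bigl((f,0)\bigr)\bigr),
\]
as one checks entry by entry (for instance, for class $0$ we have $\psi(1)(a) = b^{-1}$, and for class $2$ we have $\psi(1)(a^{-1}) = (b^{-1})^{-1} = b$). The same holds for the floors ${\mathbf f}_0$ and ${\mathbf f}_1$ in the odd case. This equivariance is consistent with the relations $cac = b^{-1}$ and $cbc = a^{-1}$ of Equation \ref{eq:presentation}.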
In all cases, it is not hard to create tilings ${\mathbf t} \in {\cal T}({\cal R}_N)$ (with $N$ even) such that $\phi({\mathbf t}) = a$ or $\phi({\mathbf t}) = b$; for the first two tilings ${\mathbf t}_0, {\mathbf t}_1 \in {\cal T}({\cal R}_6)$ in Figure \ref{fig:L2} we have $\phi({\mathbf t}_0) = a^{-1}$ and $\phi({\mathbf t}_1) = b$ (with $M = 5$). This, incidentally, proves the claim made in the Introduction; notice that $\phi({\mathbf t}_2) = e$. Finally, we extend the map to $G_{{\cal D}}$ by taking a tiling ${\mathbf t} \in {\cal T}({\cal R}_1)$ to the generator $c$ of $H \approx {\mathbb{Z}}/(2)$; it is easy to check that the relations in Equation \ref{eq:presentation} indeed hold. \end{proof} \begin{figure} \caption{Another example of a quadriculated disk which is not regular.} \label{fig:nonregulart} \end{figure} For $M = 3$ and $M = 4$ the map $\phi$ constructed above is an isomorphism. For $M \ge 5$ this is not the case; indeed, $\operatorname{Tw}|_{\ker(\phi) \cap G^{+}_{{\cal D}}}$ is surjective. We do not provide a proof of this claim here, but it follows easily from \cite{primeiroartigo}. The construction above is inspired by the results of \cite{primeiroartigo}; we focus here on the coefficients of highest degree of the Laurent polynomial $P_{{\mathbf t}} \in {\mathbb{Z}}[q,q^{-1}]$. \goodbreak \begin{remark} \label{remark:nonregulart} A construction similar to the proof of Lemma \ref{lemma:thin} shows that a few other quadriculated disks are likewise not regular. Consider, for instance, the disk ${\cal D}$ in Figure \ref{fig:nonregulart}. The floors $f_0$ and $f_1$ shown in the same figure have the same property as the floors of the same name in the proof for rectangles $[0,2]\times[0,M]$, $M$ odd. Indeed, it is easy to check that they belong to the boundary of no $2$-cell. We therefore also obtain a surjective homomorphism $\phi: G_{{\cal D}} \to G_2 = F_2 \ltimes {\mathbb{Z}}/(2)$. See \cite{marreiros} for far more on irregular disks. 
\end{remark} \section{Generators} \label{sect:gen} Our construction of $\mathcal{C}_{{\cal D}}$ yields a finite but complicated presentation of $G_{{\cal D}} = \pi_1(\mathcal{C}_{{\cal D}})$. We present a more convenient family of generators of $G_{{\cal D}}$, which will be particularly useful in order to prove that certain disks ${\cal D}$ are regular. A quadriculated disk ${\cal D}$ is {\em hamiltonian} if, seen as a graph, it admits a hamiltonian path. Notice that if a balanced quadriculated disk is hamiltonian then it is tileable: just place dominoes along a hamiltonian path. A rectangle ${\cal D} = [0,L] \times [0,M]$, $LM$ even, is an example of a hamiltonian disk (Figures \ref{fig:R2} and \ref{fig:rectangularpaths} show examples of hamiltonian paths). \begin{figure} \caption{Hamiltonian paths for rectangles $[0,L] \times [0,M]$.} \label{fig:rectangularpaths} \end{figure} Given a hamiltonian quadriculated disk, fix an arbitrary hamiltonian path $\gamma_0 = (s_1, \ldots, s_{|{\cal D}|})$ where the $s_i$ are distinct unit squares contained in ${\cal D}$ and $s_i$ and $s_{i+1}$ are adjacent (for all $i$). We say that a planar domino $d \subset {\cal D}$ is contained in the path $\gamma_0$ if $d = s_i \cup s_{i+1}$ (for some $i$). Similarly, we say that a horizontal domino $d \subset {\cal R}_N$ {\em respects} $\gamma_0$ if its projection $\tilde d \subset {\cal D}$ is a planar domino contained in $\gamma_0$; by definition, vertical dominoes always {\em respect} $\gamma_0$. For each plug $p$, construct a tiling ${\mathbf t}_p \in {\cal T}({\cal R}_{N;p,{\mathbf p_\circ}})$, $N$ even, as in the proof of Lemma \ref{lemma:flipcork}, using $\gamma_0$ as the spanning tree. Notice that all dominoes in ${\mathbf t}_p$ respect $\gamma_0$. Again, Figure \ref{fig:R2} provides an example. The following lemma shows that, given the hamiltonian path $\gamma_0$ and a plug $p \in {\cal P}$, the tiling ${\mathbf t}_p$ above is well defined up to flips. 
\begin{lemma} \label{lemma:welldefined} Consider a hamiltonian quadriculated disk ${\cal D}$ with a fixed path $\gamma_0$. Consider a plug $p \in {\cal P}$ and two tilings ${\mathbf t}_0 \in {\cal T}({\cal R}_{N_0;p,{\mathbf p_\circ}})$ and ${\mathbf t}_1 \in {\cal T}({\cal R}_{N_1;p,{\mathbf p_\circ}})$, where $N_0$ and $N_1$ are both even. If both ${\mathbf t}_0$ and ${\mathbf t}_1$ respect $\gamma_0$ then ${\mathbf t}_1^{-1} \ast {\mathbf t}_0 \approx {\mathbf t}_{\operatorname{vert}} \in {\cal T}({\cal R}_{N_0+N_1})$ and ${\mathbf t}_0 \ast {\mathbf t}_1^{-1} \approx {\mathbf t}_{\operatorname{vert}} \in {\cal T}({\cal R}_{N_0+N_1;p,p})$. In particular, the paths in $\mathcal{C}_{{\cal D}}$ defined by ${\mathbf t}_0$ and ${\mathbf t}_1$ are homotopic with fixed endpoints. \end{lemma} \begin{proof} Given a hamiltonian path $\gamma_0$ and a cork ${\cal R}_{N_i;p,{\mathbf p_\circ}}$, we construct a planar region $\tilde{\cal D}_p \subseteq [0,|{\cal D}|] \times [0,N_i]$ and a folding map from $\tilde{\cal D}_p$ to ${\cal R}_{N_i;p,{\mathbf p_\circ}}$. The folding map takes a unit square $[j-1,j]\times[k-1,k]$ to the unit cube $s_j \times [k-1,k]$. Consistently, we obtain the quadriculated disk $\tilde{\cal D}_p$ from $[0,|{\cal D}|] \times [0,N_i]$ by removing the unit squares $[j-1,j]\times[0,1]$ for which $s_j \subset p$; notice that $N_i \ge 2$ implies that $\tilde{\cal D}_p$ is contractible. A tiling ${\mathbf t}_i$ of ${\cal R}_{N_i;p,{\mathbf p_\circ}}$ respects $\gamma_0$ if and only if it can be unfolded, i.e., if and only if it is the image under the folding map of a tiling $\tilde{\mathbf t}_i$ of $\tilde{\cal D}_p$. The result follows from the well-known fact that two domino tilings of a quadriculated disk can be joined by a finite sequence of flips. \end{proof} Consider a hamiltonian disk ${\cal D}$ with a fixed path $\gamma_0$. 
Consider a floor $f_1 = (p_{0},f_1^{\ast},p_1)$; let $f_{\operatorname{vert}} = (p_1,\emptyset,p_1^{-1})$ be a matching vertical floor (recall that $p_1^{-1}$ is the complement of $p_1$). As above, construct tilings \[ {\mathbf t}_{p_0} \in {\cal T}({\cal R}_{N_{p_0};{p_0},{\mathbf p_\circ}}), \quad {\mathbf t}_{p_1^{-1}} \in {\cal T}({\cal R}_{N_{p_1^{-1}};p_1^{-1},{\mathbf p_\circ}}), \quad {\mathbf t}_{f_1} = {\mathbf t}_{p_0}^{-1} \ast f_1 \ast f_{\operatorname{vert}} \ast {\mathbf t}_{p_1^{-1}} \in {\cal T}({\cal R}_{N}) \] where $N = N_{p_0}+N_{p_1^{-1}}+2$: notice that the dominoes in the tiling ${\mathbf t}_{f_1}$ which do not respect $\gamma_0$ are all contained in the original floor $f_1$. \begin{lemma} \label{lemma:floorsast} Consider a hamiltonian quadriculated disk ${\cal D}$ and a fixed path $\gamma_0$. Consider $N$ even and a tiling ${\mathbf t} \in {\cal T}({\cal R}_N)$ with floors $f_1, \ldots, f_N$; we then have \[ {\mathbf t} \sim {\mathbf t}_{f_1} \ast {\mathbf t}_{f_2}^{-1} \ast \cdots \ast {\mathbf t}_{f_i}^{(-1)^{i+1}} \ast \cdots \ast {\mathbf t}_{f_N}^{-1}. \] \end{lemma} \begin{proof} For each $i$, write $f_i = (p_{i-1},f_i^{\ast},p_i)$. First, between each pair of floors $f_i$ and $f_{i+1}$ insert a large number of vertical floors. Second, as in the proof of Lemma \ref{lemma:flipcork}, for even $i$, modify the region between the original floors $f_i$ and $f_{i+1}$ from ${\mathbf t}_{\operatorname{vert}} \in {\cal T}({\cal R}_{N_i;p_i,p_i})$ to ${\mathbf t}_{p_i} \ast {\mathbf t}_{p_i}^{-1}$. Thus, after $f_i$ we now have an even number of floors (all of whose dominoes respect $\gamma_0$), followed by ${\mathbf p_\circ}$, followed by the same even number of floors, followed by $f_{i+1}$. Similarly, for odd $i$, repeat the construction between the original floors $f_i$ and $f_{i+1}$ but using $p_i^{-1}$ (instead of $p_i$); we thus obtain the desired tiling and complete the proof. \end{proof} We consider a special case of the above construction. 
Consider a domino $d \subset {\cal D}$ which is not contained in the path $\gamma_0$; thus, $d = s_i \cup s_j$ where $i+1 < j$. Consider a plug $p \in {\cal P}$ disjoint from $d$, so that $\tilde p = p \cup d$ is a plug distinct from $p$. Set $p_0 = p$, $p_1 = \tilde p^{-1}$ so that ${\cal D}_{p_0,p_1}$ consists only of the domino $d$. Let $f_1 = (p_0,d,p_1)$ and define ${\mathbf t}_{d;p} = {\mathbf t}_{f_1}$ as above. Thus, the only domino of ${\mathbf t}_{d;p}$ which does not respect $\gamma_0$ is $d \times [0,1]$. Figure \ref{fig:tdp} below illustrates this construction for ${\cal D} = [0,4] \times [0,4]$: we show a path $\gamma_0$, a domino $d$ not contained in $\gamma_0$, a plug $p \in {\cal P}$ and a valid tiling ${\mathbf t}_{d;p} \in {\cal T}({\cal R}_{-2,2})$. Notice that $\operatorname{plug}_0({\mathbf t}_{d;p}) = p$. The domino $d \times [0,1]$ is the only one not respecting $\gamma_0$; it appears in $\operatorname{floor}_1({\mathbf t}_{d;p})$, the third floor in the figure. \begin{figure} \caption{For ${\cal D} = [0,4] \times [0,4]$: a path $\gamma_0$, a domino $d$ not contained in $\gamma_0$, a plug $p \in {\cal P}$ and the tiling ${\mathbf t}_{d;p} \in {\cal T}({\cal R}_{-2,2})$.} \label{fig:tdp} \end{figure} Notice that if the construction of ${\mathbf t}_{d;p}$ above is performed with a domino $d$ which is contained in the path $\gamma_0$ (and any compatible plug $p$) we obtain a tiling ${\mathbf t}_{d;p}$ such that every domino respects $\gamma_0$: the tiling is then a tiling of $\gamma_0 \times [0,N]$ (for some positive even $N$) and therefore ${\mathbf t}_{d;p} \approx {\mathbf t}_{\operatorname{vert}}$. \begin{lemma} \label{lemma:decfloor} Consider a hamiltonian disk ${\cal D}$ with a fixed path $\gamma_0$. Consider a floor $f = (p,f^{\ast},\tilde p)$. Let $f^{\ast} = \{d_0, \ldots, d_{k-1}\}$ so that ${\cal D}_{p,\tilde p} = d_0 \cup \cdots \cup d_{k-1}$ and $k = \frac12 |{\cal D}_{p,\tilde p}|$. Let $p_0 = p$ and $p_i = p_{i-1} \cup d_{i-1}$ so that $p_k = \tilde p^{-1}$. Then \[ {\mathbf t}_{f} \sim {\mathbf t}_{d_0;p_0} \ast \cdots \ast {\mathbf t}_{d_i;p_i} \ast \cdots \ast {\mathbf t}_{d_{k-1};p_{k-1}}. 
\] \end{lemma} \begin{proof} Recall that ${\mathbf t}_{f} = {\mathbf t}_{p}^{-1} \ast f \ast f_{\operatorname{vert}} \ast {\mathbf t}_{\tilde p^{-1}}$. As in the proof of Lemma \ref{lemma:floorsast}, insert a large number of vertical floors around $f$. As in Lemma \ref{lemma:movevert}, the horizontal dominoes can be moved up or down: do this in such a way that they appear in the desired order, with significant vertical space between them. As in the proof of Lemma \ref{lemma:floorsast}, change the region between $d_i$ and $d_{i+1}$ to introduce ${\mathbf t}_{p_{i+1}}^{-1} \ast {\mathbf t}_{p_{i+1}}$. This yields the desired tiling and proves the lemma. \end{proof} Consider a hamiltonian disk ${\cal D}$ with a fixed path $\gamma_0$; assume without loss of generality that the color of the square $s_i$ is $(-1)^i$. Consider a domino $d = s_{i_{d,-}} \cup s_{i_{d,+}}$ not contained in $\gamma_0$, so that we may assume $i_{d,-} + 1 < i_{d,+}$. We say that $d$ decomposes $\gamma_0$ into the following three intervals: $I_{d;-1} = {\mathbb{Z}} \cap [1,i_{d,-}-1]$, $I_{d;0} = {\mathbb{Z}} \cap [i_{d,-}+1,i_{d,+}-1]$, $I_{d;+1} = {\mathbb{Z}} \cap [i_{d,+}+1,|{\cal D}|]$. Notice that the sets $I_{d;\pm 1}$ may be empty; the interval $I_{d;0}$ always has even and positive cardinality. Recall that a plug $p$ is compatible with $d$ if there is no unit square contained in both $p$ and $d$. Consider a plug $p \in {\cal P}$ compatible with $d$ and $j \in \{-1,0,+1\}$; define \[ \operatorname{flux}_j(d;p) = \sum_{i \in I_{d;j}, s_i \subset p} (-1)^i. \] A verbal description may be helpful; in order to compute $\operatorname{flux}_j(d;p)$ go through the list of squares contained in both $p$ and $I_{d;j}$: each such square contributes $+1$ or $-1$ according to its color. Notice that we always have $\operatorname{flux}_{-1}(d;p) + \operatorname{flux}_0(d;p) + \operatorname{flux}_{+1}(d;p) = 0$. 
Define $\operatorname{flux}(d;p) = (\operatorname{flux}_{-1}(d;p),\operatorname{flux}_0(d;p),\operatorname{flux}_{+1}(d;p)) \in H$ for $H = \{(\phi_{-1},\phi_0,\phi_{+1}) \in {\mathbb{Z}}^3 | \phi_{-1}+\phi_0+\phi_{+1}=0 \}$. For $d$, $p$ and ${\mathbf t}_{d;p}$ as in Figure \ref{fig:tdp}, we have $d = s_3 \cup s_6$, $I_{d;-1} = \{1,2\}$, $I_{d;0} = \{4,5\}$, $I_{d;+1} = \{7,\ldots, 16\}$, $\operatorname{flux}(d;p) = (0,-1,+1)$. \begin{lemma} \label{lemma:flux} Consider a hamiltonian disk ${\cal D}$ with a fixed path $\gamma_0$ and a domino $d \subset {\cal D}$ not contained in $\gamma_0$. Consider two plugs $p_0, p_1 \in {\cal P}$, both compatible with $d$. If $\operatorname{flux}(d;p_0) = \operatorname{flux}(d;p_1)$ then ${\mathbf t}_{d;p_0} \sim {\mathbf t}_{d;p_1}$. \end{lemma} \begin{proof} This proof requires familiarity with the main results of \cite{saldanhatomei1995}, particularly the concept of flux for quadriculated surfaces and Theorem 4.1. Assume without loss of generality that ${\mathbf t}_{d;p_0}, {\mathbf t}_{d;p_1} \in {\cal T}({\cal R}_{-N,N})$; we prove that ${\mathbf t}_{d;p_0} \approx {\mathbf t}_{d;p_1}$. Indeed, both can be interpreted as tilings of the quadriculated surface \[ (\gamma_0 \times [-N,N]) \smallsetminus ((s_{i_{d,-}} \times [0,1]) \cup (s_{i_{d,+}} \times [0,1])). \] Here we use folding as in the proof of Lemma \ref{lemma:welldefined}, so that the quadriculated region is a rectangle minus two unit squares. The hypothesis $\operatorname{flux}(d;p_0) = \operatorname{flux}(d;p_1)$ shows that the two tilings have the same flux in the sense of \cite{saldanhatomei1995}. It follows from Theorem 4.1 of \cite{saldanhatomei1995} that, interpreted as tilings of this surface, we have ${\mathbf t}_{d;p_0} \approx {\mathbf t}_{d;p_1}$. The resulting sequence of flips is also good in ${\cal R}_{-N,N}$, proving the claim and completing the proof. 
\end{proof} \begin{remark} \label{rem:twflux} Consider a hamiltonian disk ${\cal D}$ with a fixed path $\gamma_0$ and a domino $d \subset {\cal D}$ not contained in $\gamma_0$. Then there exists $s \in \{+1,-1\}$ such that for every plug $p$ compatible with $d$ we have \[ \operatorname{Tw}({\mathbf t}_{d;p}) = s \sum_j (-1)^j \operatorname{flux}_j(d;p). \] \end{remark} Given a hamiltonian disk ${\cal D}$, a fixed path $\gamma_0$ and a domino $d \subset {\cal D}$ not contained in $\gamma_0$, let \begin{gather*} \Phi_d = \{ (\phi_{-1},\phi_0,\phi_{+1}) \in H \;|\; \forall j, \phi_j \in [\phi_j^{\min},\phi_j^{\max}] \}, \\ \phi_j^{\min} = - |\{i \in I_{d;j} | (-1)^i = -1\}|, \qquad \phi_j^{\max} = |\{i \in I_{d;j} | (-1)^i = +1\}|. \end{gather*} Clearly, for all $p \in {\cal P}$, if $p$ is compatible with $d$ then $\operatorname{flux}(d;p) \in \Phi_d$; conversely, for all $\phi \in \Phi_d$ there exists $p \in {\cal P}$ such that $p$ is compatible with $d$ and $\operatorname{flux}(d;p) = \phi$. A {\em complete family} of compatible plugs for $d$ is a family $(p_{d,\phi})_{\phi \in \Phi_d}$ with $\operatorname{flux}(d;p_{d,\phi}) = \phi$ (for all $\phi \in \Phi_d$). \begin{coro} \label{coro:generators} Consider a hamiltonian disk ${\cal D}$ with a fixed path $\gamma_0$. For each domino $d \subset {\cal D}$ not contained in $\gamma_0$, consider a complete family of compatible plugs $(p_{d,\phi})_{\phi \in \Phi_d}$. Then the family of tilings $({\mathbf t}_{d;p_{d,\phi}})$ generates the domino group $G^{+}_{{\cal D}}$. \end{coro} \begin{proof} This follows directly from Lemmas \ref{lemma:floorsast}, \ref{lemma:decfloor} and \ref{lemma:flux}. \end{proof} \section{Small regular rectangles} \label{sect:44} In this section we apply the results of the previous section, particularly Corollary \ref{coro:generators}, to compute $G_{{\cal D}}$ for a few examples. \begin{lemma} \label{lemma:44} The rectangle ${\cal D} = [0,4] \times [0,4]$ is regular. 
Thus, $\operatorname{Tw}: G_{{\cal D}}^{+} \to {\mathbb{Z}}$ is an isomorphism. The group $G_{{\cal D}}$ is isomorphic to ${\mathbb{Z}} \oplus ({\mathbb{Z}}/(2))$, with generators $a$, the tiling shown in Figure \ref{fig:tdp}, and $c$, given by any tiling of ${\cal R}_1$; $a \in G_{{\cal D}}^{+}$ has twist $1$, $c$ has order $2$ and we have $a\ast c \approx c\ast a$. \end{lemma} \begin{proof} The proof is now a long computation. The verification that $a\ast c \approx c\ast a$ is given by an explicit sequence of flips. We apply Corollary \ref{coro:generators} to obtain a manageable list of generators of $G_{{\cal D}}^{+}$; for each generator ${\mathbf t}$ we compute $k = \operatorname{Tw}({\mathbf t})$ and verify (by an explicit sequence of flips) that ${\mathbf t} \sim a^k$. We use the same path $\gamma_0$ shown in Figure \ref{fig:tdp}. We first list all dominoes $d$ not contained in $\gamma_0$: there are $9$ such dominoes, three dominoes per column, each contained in a single row. For each such domino $d$, we list all the (finitely many) possible values of $\operatorname{flux}(d;\ast) \in H \subset {\mathbb{Z}}^3$. For instance, for $d$ as in Figure \ref{fig:tdp} we have, for any $p \in {\cal P}$, $|\operatorname{flux}_{-1}(d;p)| \le 1$ and $|\operatorname{flux}_0(d;p)| \le 1$, thus giving us a list of $9$ values. For each such value we obtain an explicit $p$, compute ${\mathbf t}_{d;p}$ and complete the verification as above. Notice that the tiling ${\mathbf t}_{d;p}$ in Figure \ref{fig:tdp}, used to define $a$, is an instance of this construction. This verification is performed by a computer, but there are some simplifications which significantly reduce the amount of computations involved. For instance, the fact that ${\cal D}$ is symmetric with respect to a vertical line reduces from $9$ to $6$ the number of dominoes $d$ to be checked. Also, for each $d$ it suffices to consider $\phi \in \Phi_d$ for which $|\phi_0|$ is maximal. 
Indeed, assume for concreteness that $d$ is in the first column. If $|\phi_0|$ does not have the largest possible value (for that $d$) then we may take a $p$ which marks neither of the unit squares one row below $d$. A few flips then take ${\mathbf t}_{d;p}$ to ${\mathbf t}_{\tilde d;p}$ where $\tilde d$ is the domino in the same column as $d$, one row lower; this domino $\tilde d$ can be assumed to have already been taken care of. After these simplifications, there are fewer than $20$ distinct cases to be verified, so the computer work can be double-checked by hand. \end{proof} \begin{lemma} \label{lemma:34} Let ${\cal D} = [0,L] \times [0,M]$ where $L, M \in [3,6] \cap {\mathbb{Z}}$ and $LM$ is even: the quadriculated disk ${\cal D}$ is regular. \end{lemma} \begin{proof} Again, the proof is a finite and manageable computation. Construct explicit candidates for generators $a$ and $c$. The tiling $a \in {\cal T}({\cal R}_4)$ can be formed by taking a copy of one of the tilings in Figure \ref{fig:234} in a $2\times 3\times 4$ box and placing vertical dominoes elsewhere (this is how we obtained $a$ for ${\cal D} = [0,4]\times [0,4]$ in Figure \ref{fig:tdp} and in Lemma \ref{lemma:44}). Notice that $\operatorname{Tw}(a) = +1$. It is nice but not strictly necessary to check that the choice of ${\mathbf t}_0$ or ${\mathbf t}_1$ and different positions for the box yield the same element of $G^{+}_{{\cal D}}$. For $c$, we take a tiling of ${\cal R}_1$; we verify that $a\ast c \sim c\ast a$. For each quadriculated disk, choose a path $\gamma_0$ and list the dominoes $d$ not contained in $\gamma_0$. For each $d$, list the finitely many possible values of $\operatorname{flux}(d;\ast)$. For each value of the flux, choose a plug $p$ and construct ${\mathbf t}_{d;p}$. For each such ${\mathbf t}_{d;p}$, compute $k = \operatorname{Tw}({\mathbf t}_{d;p})$ and verify (by an explicit sequence of flips) that ${\mathbf t}_{d;p} \sim a^k$. 
By Corollary \ref{coro:generators}, we are then done (as in Lemma \ref{lemma:44}, after a relatively short computer verification). \end{proof} The proofs of Lemmas \ref{lemma:44} and \ref{lemma:34} thus describe an algorithm. Given a hamiltonian quadriculated disk ${\cal D}$ containing a $2\times 3$ rectangle, we fix a path $\gamma_0$ and we construct $a \in {\cal T}({\cal R}_4)$ with $\operatorname{Tw}(a) = +1$ and vertical dominoes outside a $2\times 3\times 4$ box, as in Figures \ref{fig:234} and \ref{fig:tdp}. We make a list of the dominoes $d_i \subset {\cal D}$ not contained in $\gamma_0$ and for each domino we make a list of compatible plugs $p_j$ covering all possible values of $\operatorname{flux}(d_i;p_j)$. For each pair $(d_i;p_j)$ we construct a tiling ${\mathbf t}_{d_i;p_j}$ and compute $k = \operatorname{Tw}({\mathbf t}_{d_i;p_j})$. We then obtain a finite list of questions of the form: here are two tilings ${\mathbf t}_{d_i;p_j}$ and $a^k$; is it the case that ${\mathbf t}_{d_i;p_j} \sim a^k$? If the answer is {\em yes} in every case then this can be verified in finite time and we obtain a proof that ${\cal D}$ is regular (similar to the proofs of Lemmas \ref{lemma:44} and \ref{lemma:34}). If in some case the answer is {\em no} then ${\cal D}$ is not regular; in order to prove that ${\mathbf t}_0 \not\sim {\mathbf t}_1$ we then require a new idea or construction, as was the case for Lemma \ref{lemma:thin}. A natural question at this point is: exactly which quadriculated disks are regular? As of this writing we do not have a complete answer. Figure \ref{fig:regularornot} shows five small quadriculated disks. The first three are regular, as can be verified using the methods described above. The same methods applied to the last two are inconclusive, but suggest that they are most likely not regular. 
\begin{figure} \caption{Five quadriculated disks: which are regular?} \label{fig:regularornot} \end{figure} \section{Larger regular rectangles} \label{sect:thick} In this section we move from specific quadriculated disks to a large family of examples. \begin{lemma} \label{lemma:thick} Let ${\cal D} = [0,L] \times [0,M]$ where $L, M \ge 3$ and $LM$ is even: the quadriculated disk ${\cal D}$ is regular. \end{lemma} Together with Lemma \ref{lemma:thin}, this completes the proof of Theorem \ref{theo:rectangle}. We first prove a sublemma. \begin{lemma} \label{lemma:thicksublemma} Let $L \ge 3$ be a fixed number. If $L$ is odd and $[0,L] \times [0,M]$ is regular for both $M = 4$ and $M = 6$ then $[0,L] \times [0,M]$ is regular for any even $M > 6$. If $L$ is even and $[0,L] \times [0,M]$ is regular for all $M \in [3,6] \cap {\mathbb{Z}}$ then $[0,L] \times [0,M]$ is regular for any $M > 6$. \end{lemma} \begin{proof} The proof is by induction on $M$. We take the hamiltonian path $\gamma_0$ as indicated in Figure \ref{fig:rectangularpaths}. We apply Corollary \ref{coro:generators}: we must prove that for every domino $d \subset {\cal D}$ and $\phi \in \Phi_d$ there exists a compatible plug $p$ for which $\operatorname{flux}(d;p) = \phi$ and ${\mathbf t}_{d;p} \sim a^k$ (for some $k \in {\mathbb{Z}}$). Here again $a \in {\cal T}({\cal R}_4)$ is a tiling similar to the one shown in Figure \ref{fig:tdp}. Consider first $d$ in the first column. Clearly $|\phi_{-1}| + |\phi_0| < L$ for any $\phi \in \Phi_d$ and therefore also $|\phi_{+1}| < L$. For any $\phi \in \Phi_d$ take $p = p_{\phi} \in {\cal P}$ (with $\operatorname{flux}(d;p) = \phi$) by marking in $I_{d;+1}$ the first $|\phi_{+1}|$ unit squares of color $\operatorname{sign}(\phi_{+1})$ (as in the first example in Figure \ref{fig:thicksublemma}). This implies that $p$ marks only unit squares in the first four columns. 
Let $\tilde{\cal D} = [0,L] \times [0,4] \subseteq {\cal D}$ be the subdisk formed by the first four columns. Following the usual construction, all dominoes of ${\mathbf t}_{d;p}$ outside $\tilde{\cal D} \times [-N,N]$ are vertical with the same parity. Let $\tilde{\mathbf t}_{d;p} \in {\cal T}(\tilde{\cal D}\times [-N,N])$ be the restriction of ${\mathbf t}_{d;p}$ to $\tilde{\cal D}\times [-N,N]$. By hypothesis, if $N$ is taken sufficiently large then $\tilde{\mathbf t}_{d;p} \approx a^k$, i.e., there exists a sequence of flips taking one tiling to the other. By juxtaposing vertical dominoes in $({\cal D} \smallsetminus \tilde{\cal D}) \times [-N,N]$ we obtain ${\mathbf t}_{d;p} \approx a^k$, proving this first case. A similar argument holds if $d$ is in the last column. \begin{figure} \caption{A rectangle ${\cal D} = [0,L] \times [0,M]$ and four examples of plugs $p_{\phi}$.} \label{fig:thicksublemma} \end{figure} Consider now $d$ in some intermediate column (neither the first nor the last). We first consider the subcase where $\phi_{-1} \phi_{+1} \ge 0$. We clearly have $|\phi_0| < L$ and therefore also $|\phi_{-1}| + |\phi_{+1}| < L$. Construct $p = p_{\phi}$ by selecting squares in $I_{d;\pm 1}$ as near as possible to the columns occupied by $d$ (as in the second example in Figure \ref{fig:thicksublemma}). At most $6$ columns are occupied: take $\tilde{\cal D} \subset {\cal D}$ to be the union of the occupied columns. By hypothesis, $\tilde{\cal D}$ is regular; the proof proceeds as in the previous case. Consider finally the subcase where $d$ is in some intermediate column and $\phi_{-1} \phi_{+1} < 0$. Assume $\phi_{-1} < 0$ (the other case is similar). Let $l \le \lceil \frac{L}{2} \rceil$ be the number of unit squares of color $-1$ in the first column of ${\cal D}$. Thus, if $L$ is even then $l = \frac{L}{2}$; if $L$ is odd then either $l = \frac{L-1}{2}$ or $l = \frac{L+1}{2}$. Notice that $l$ is also the number of unit squares of color $+1$ in the last column of ${\cal D}$. 
If either $|\phi_{-1}| < l$ or $|\phi_{+1}| < l$ then $p = p_{\phi}$ can be constructed, as in the previous cases, so as to occupy at most $6$ columns and the proof proceeds as before (as in the third example in Figure \ref{fig:thicksublemma}). We may thus assume $|\phi_{-1}| \ge l$ and $|\phi_{+1}| \ge l$. Construct $p = p_\phi$ marking all unit squares of color $-1$ in the first column and all unit squares of color $+1$ in the last column (as in the fourth example in Figure \ref{fig:thicksublemma}). Let $\tilde{\cal D} = [0,L] \times [1,M-1] \subset {\cal D}$; $\tilde{\cal D}$ is regular by the induction hypothesis. Let $\tilde\phi = (\phi_{-1} + l, \phi_0, \phi_{+1} - l)$ and let $\tilde p$ be the plug for $\tilde{\cal D}$ obtained from $p$ by intersection, i.e., by discarding the $l$ marked squares in the first column and the $l$ marked squares in the last column. We have $\operatorname{flux}(d;\tilde p) = \tilde\phi$. Construct as usual the tiling $\tilde{\mathbf t}_{d;\tilde p} \in {\cal T}(\tilde{\cal D} \times [-\tilde N,\tilde N])$. By the induction hypothesis, we have $\tilde{\mathbf t}_{d;\tilde p} \approx a^k$ provided $\tilde N$ is taken large enough (and even); here $k = \operatorname{Tw}(\tilde{\mathbf t}_{d;\tilde p})$. We may also assume that $a$ occupies the last two rows and the three central columns of $\tilde{\cal D}$, thus leaving free at least the first row and the first and last columns of $\tilde{\cal D}$ (we assume here $M > 6$, as we can). Construct ${\mathbf t}_0 = {\mathbf t}_{d;p} \in {\cal T}({\cal D} \times [-N,N])$ as usual, matching the $l$ squares in the first column with the $l$ squares in the last column: these are the last matches to be addressed, and we may leave vertical space $[-\tilde N,\tilde N]$ for the previous matches, so that $N = \tilde N + 2l$. Notice that ${\mathbf t}_0$ respects the subregion $\tilde{\cal D} \times [-\tilde N,\tilde N]$ and coincides there with $\tilde{\mathbf t}_{d;\tilde p}$. 
We thus have ${\mathbf t}_0 \approx {\mathbf t}_1$, where ${\mathbf t}_1 \in {\cal T}({\cal D} \times [-N,N])$ coincides with $a^k$ in $\tilde{\cal D} \times [-\tilde N,\tilde N]$ and with ${\mathbf t}_0$ elsewhere. We are therefore left with proving that ${\mathbf t}_1 \approx {\mathbf t}_2$ where ${\mathbf t}_2 \in {\cal T}({\cal D} \times [-N,N])$ coincides with $a^k$ (and with ${\mathbf t}_1$) in $\tilde{\cal D} \times [-\tilde N,\tilde N]$ and is vertical elsewhere. \begin{figure} \caption{The path $\gamma_1$ and the floors $[\tilde N, \tilde N+2]$ for the tilings ${\mathbf t}_1$ and ${\mathbf t}_3$.} \label{fig:penultimate} \end{figure} We construct a new tiling ${\mathbf t}_3 \approx {\mathbf t}_1$; the floors $[-\tilde N,\tilde N]$ of ${\mathbf t}_1$ and ${\mathbf t}_3$ coincide. In order to construct the remaining floors of ${\mathbf t}_3$, first construct a path $\gamma_1$ coinciding with $\gamma_0$ in the first and last columns of ${\cal D}$ and such that its intersection with $\tilde{\cal D}$ is contained in the union of the first row and the first and last columns of $\tilde{\cal D}$, as illustrated in the right half of Figure \ref{fig:penultimate}. The floors $[-N,-\tilde N] \cup [\tilde N,N]$ of the tilings ${\mathbf t}_0$ and ${\mathbf t}_1$ are constructed as in the proof of Lemma \ref{lemma:flipcork} and Figure \ref{fig:R2}, using the original path $\gamma_0$: Figure \ref{fig:penultimate} shows the floors $[\tilde N, \tilde N+2]$. The tiling ${\mathbf t}_3$ is similarly constructed, but using, for the new floors, the path $\gamma_1$ instead. The fact that ${\mathbf t}_3 \approx {\mathbf t}_1$ is proved by looking at pairs of floors $[2z,2z+2]$ (with $z \in {\mathbb{Z}}$), as in Figure \ref{fig:penultimate}; we may either give an explicit sequence of flips or use the results from \cite{primeiroartigo}. Finally, we prove that ${\mathbf t}_2 \approx {\mathbf t}_3$. 
Indeed, they coincide by construction outside the quadriculated surface $\gamma_1 \times [-N,N]$, which is respected by both tilings. Thus, the problem of finding a sequence of flips from ${\mathbf t}_2$ to ${\mathbf t}_3$ is the problem of connecting by flips two rather explicit tilings of a quadriculated disk: this follows either from an explicit sequence of flips or from \cite{thurston1990} and \cite{saldanhatomei1995}. \end{proof} \begin{proof}[Proof of Lemma \ref{lemma:thick}] Apply Lemma \ref{lemma:thicksublemma} for each $L \in [3,6] \cap {\mathbb{Z}}$: the hypothesis is provided by Lemma \ref{lemma:34}. We thus have that $[0,L] \times [0,M]$ is regular provided $LM$ is even, $3 \le L \le 6$ and $M \ge 3$. Or, equivalently, provided $LM$ is even, $L \ge 3$ and $3 \le M \le 6$. Apply Lemma \ref{lemma:thicksublemma} again for each $L \ge 3$ to obtain the desired conclusion. \end{proof} It would of course be interesting to prove that a larger class of disks is regular. As mentioned in Section \ref{sect:44}, regular disks seem to be common. \section{The constant $c_{{\cal D}}$ and the spine $\tilde\mathcal{C}^{\bullet}_{{\cal D}}$} \label{sect:cD} Let ${\cal D}$ be a fixed but arbitrary non trivial regular disk. Let $\mathcal{C}_{{\cal D}}$ be the $2$-complex constructed in Section \ref{sect:groupcomplex}. Let $\Pi^{+}: \mathcal{C}^{+}_{{\cal D}} \to \mathcal{C}_{{\cal D}}$ be the double cover constructed at the end of Section \ref{sect:groupcomplex}. Since ${\cal D}$ is regular, we have that $\operatorname{Tw}: \pi_1(\mathcal{C}^{+}_{{\cal D}}) \to {\mathbb{Z}}$ is an isomorphism. Let $\Pi: \tilde\mathcal{C}_{{\cal D}} \to \mathcal{C}^{+}_{{\cal D}}$ be its universal cover. Let $\tilde{\cal P}$ be the set of vertices of $\tilde\mathcal{C}_{{\cal D}}$; select as base point a vertex $\tilde{\mathbf p_\circ} \in \tilde{\cal P}$ which is a preimage of ${\mathbf p_\circ} \in {\cal P}$. 
Notice that the set of preimages in $\tilde{\cal P}$ of the empty plug ${\mathbf p_\circ} \in {\cal P}$ is naturally identified with $\pi_1(\mathcal{C}_{{\cal D}}) \approx {\mathbb{Z}} \oplus {\mathbb{Z}}/(2)$. As in Remark \ref{remark:cocycle}, lift $\tau^u \in C^1(\mathcal{C}_{{\cal D}};{\mathbb{R}})$ to $\tau^u \in C^1(\tilde\mathcal{C}_{{\cal D}};{\mathbb{R}})$ (here $u \in \{e_1,e_2\}$ is fixed but arbitrary). Since $\tilde\mathcal{C}_{{\cal D}}$ is simply connected we have $\tau^u \in B^1(\tilde\mathcal{C}_{{\cal D}};{\mathbb{R}})$. Integrate $\tau^u$ to obtain a function $\operatorname{tw}: \tilde{\cal P} \to \frac14{\mathbb{Z}} \subset {\mathbb{R}}$ satisfying $\operatorname{tw}(\tilde{\mathbf p_\circ}) = 0$. Notice that if ${\mathbf t} \in {\cal T}({\cal R}_N)$ is interpreted as a path from $[0,N]$ to $\mathcal{C}_{{\cal D}}$ then such a path can be lifted to $\tilde{\mathbf t}: [0,N] \to \tilde\mathcal{C}_{{\cal D}}$ and we then have $\operatorname{Tw}({\mathbf t}) = \operatorname{tw}(\tilde{\mathbf t}(N)) - \operatorname{tw}(\tilde{\mathbf t}(0))$. Let $\sigma: \tilde\mathcal{C}_{{\cal D}} \to \tilde\mathcal{C}_{{\cal D}}$ be a generator of the group of deck transformations of the covering map $\Pi: \tilde\mathcal{C}_{{\cal D}} \to \mathcal{C}^{+}_{{\cal D}}$: choose $\sigma$ such that $\operatorname{tw}(\sigma(p)) = 1+\operatorname{tw}(p)$ for all $p \in \tilde{\cal P}$. We are interested in {\em tiling paths}: paths $\Gamma: [N_0,N_1] \to \mathcal{C}_{{\cal D}}$ (for $N_0, N_1 \in {\mathbb{Z}}$) taking integers to vertices (i.e., plugs) and intervals $[j-1,j]$ (for $j \in {\mathbb{Z}}$) to edges (i.e., floors). Such paths can of course be lifted to $\tilde\Gamma: [N_0,N_1] \to \tilde\mathcal{C}_{{\cal D}}$ with $\Pi^{+} \circ \Pi \circ \tilde\Gamma = \Gamma$. The lift is well defined if a starting point $\tilde\Gamma(N_0)$ is given.
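The integration of the coboundary $\tau^u$ into the potential $\operatorname{tw}$ is an instance of discrete integration on a simply connected complex, and can be sketched as follows (a minimal sketch in Python on a finite graph; the vertex labels and cochain values below are hypothetical illustrations, not the actual complex $\tilde\mathcal{C}_{{\cal D}}$):

```python
from collections import deque

def integrate_cochain(tau, base):
    # tau maps oriented edges (v, w) to values; the value on the
    # reversed edge (w, v) is implicitly the negative.  On a simply
    # connected complex an exact cochain integrates to a well-defined
    # potential, normalized to 0 at the base vertex.
    tw = {base: 0}
    queue = deque([base])
    while queue:
        v = queue.popleft()
        for (a, b), val in tau.items():
            for src, dst, s in ((a, b, val), (b, a, -val)):
                if src == v and dst not in tw:
                    tw[dst] = tw[v] + s
                    queue.append(dst)
    return tw
```

With $\tau$ given on a path of two edges, say `tau = {(0, 1): 0.25, (1, 2): 0.5}`, integrating from the base vertex `0` yields the potentials `0, 0.25, 0.75`, and the twist of a path is the difference of the potentials at its endpoints, as in the formula for $\operatorname{Tw}({\mathbf t})$ above.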
As discussed in Section \ref{sect:groupcomplex}, tiling paths correspond to tilings of corks ${\cal R}_{N_0,N_1;p_0,p_1}$ where $p_0 = \Gamma(N_0)$ and $p_1 = \Gamma(N_1)$. Consistently, write $\operatorname{Tw}(\Gamma) = \operatorname{tw}(\tilde\Gamma(N_1)) - \operatorname{tw}(\tilde\Gamma(N_0))$. \begin{figure} \caption{A simple closed tiling path $\Gamma: [0,8] \to \mathcal{C}_{{\cal D}}$ for ${\cal D} = [0,4]^2$.} \label{fig:rocket} \end{figure} A tiling path $\Gamma$ is {\em closed} if $\Gamma(N_0) = \Gamma(N_1)$; Figure \ref{fig:rocket} shows an example of a closed tiling path $\Gamma: [0,8] \to \mathcal{C}_{{\cal D}}$ for ${\cal D} = [0,4]^2$. A closed tiling path $\Gamma: [0,N] \to \mathcal{C}_{{\cal D}}$ is {\em simple} if $k_0, k_1 \in {\mathbb{Z}} \cap [0,N)$ and $\Gamma(k_0) = \Gamma(k_1)$ imply $k_0 = k_1$. Clearly, if $\Gamma: [0,N] \to \mathcal{C}_{{\cal D}}$ is a simple closed tiling path then $N \le |{\cal P}|$. Let $c_{{\cal D}} \in {\mathbb{Q}} \cap (0,+\infty)$ be the maximum value of $\operatorname{Tw}(\Gamma)/N$ taken over all simple closed tiling paths $\Gamma: [0,N] \to \mathcal{C}_{{\cal D}}$; since we are taking the maximum over a non empty finite set, the maximum is well defined. Let $\Gamma_{\bullet}: [0,N_{\bullet}] \to \mathcal{C}_{{\cal D}}$ be a simple closed tiling path for which this maximum value is achieved. Lemma \ref{lemma:cD} below provides alternative characterizations of $c_{{\cal D}}$. Lemma \ref{lemma:tropical} and Example \ref{example:cD44} show how to compute $c_{{\cal D}}$ for a given regular quadriculated disk ${\cal D}$. For $p_0, p_1 \in {\cal P}$ and $N \in {\mathbb{N}}$, let $m_{N;p_0,p_1} \in \{-\infty\} \cup \frac14 {\mathbb{Z}}$ be the maximum value of $\operatorname{Tw}(\Gamma)$ for $\Gamma: [0,N] \to \mathcal{C}_{{\cal D}}$ a tiling path with $\Gamma(0) = p_0$, $\Gamma(N) = p_1$. We follow here the convention that the maximum of the empty set is $-\infty$.
Thus, for instance, if ${\cal D}_{p_0,p_1}$ admits no tiling (in particular, if $p_0$ and $p_1$ are not disjoint) then $m_{1;p_0,p_1} = -\infty$. Even more degenerately, $m_{0;p_0,p_1}$ equals $0$ if $p_0 = p_1$ and $-\infty$ otherwise. The following result provides us with estimates for $m_{N;p_0,p_1}$. We will further discuss these numbers below; see Equation \ref{equation:tropicalpower} and Lemma \ref{lemma:tropical}. \begin{lemma} \label{lemma:cD} Let ${\cal D}$ be a fixed but arbitrary non trivial regular disk. Let $c_{{\cal D}} \in {\mathbb{Q}} \cap (0,+\infty)$ be as defined above. \begin{enumerate} \item{For any closed tiling path $\Gamma: [0,N] \to \mathcal{C}_{{\cal D}}$ we have $|\operatorname{Tw}(\Gamma)| \le c_{{\cal D}} N$.} \item{There exist constants $d_{-}, d_{+} \in {\mathbb{R}}$ (depending on ${\cal D}$ only; not depending on $N$, $p_0$ or $p_1$) such that, for all $p_0, p_1 \in {\cal P}$ and all $N \ge 4|{\cal D}|$, \[ c_{{\cal D}} N + d_{-} \le m_{N;p_0,p_1} \le c_{{\cal D}} N + d_{+}. \]} \end{enumerate} \end{lemma} \begin{proof} A {\em double point} for a closed tiling path $\Gamma: [0,N] \to \mathcal{C}_{{\cal D}}$ is a pair $\{k_0,k_1\} \subset {\mathbb{Z}} \cap [0,N)$ with $k_0 < k_1$ and $\Gamma(k_0) = \Gamma(k_1)$. We prove the first item by induction on the number of double points: if there are $0$ double points the curve is simple and the claim holds by definition of $c_{{\cal D}}$. Let $\{k_0,k_1\}$ be a double point. Let $\Gamma_1: [0,k_1-k_0] \to \mathcal{C}_{{\cal D}}$ and $\Gamma_2: [0,N-k_1+k_0] \to \mathcal{C}_{{\cal D}}$ be closed tiling paths defined by $\Gamma_1(k) = \Gamma(k_0+k)$ and $\Gamma_2(k) = \Gamma(k_1+k)$; if $k_1+k > N$ we interpret $\Gamma(k_1+k) = \Gamma(k_1+k-N)$. By the induction hypothesis we have $|\operatorname{Tw}(\Gamma_1)| \le c_{{\cal D}} (k_1-k_0)$ and $|\operatorname{Tw}(\Gamma_2)| \le c_{{\cal D}} (N-k_1+k_0)$.
From the definition of $\operatorname{Tw}$ we have $\operatorname{Tw}(\Gamma) = \operatorname{Tw}(\Gamma_1) + \operatorname{Tw}(\Gamma_2)$. We thus have $|\operatorname{Tw}(\Gamma)| \le |\operatorname{Tw}(\Gamma_1)| + |\operatorname{Tw}(\Gamma_2)| \le c_{{\cal D}} N$, completing the proof of the first item. Recall that $\Gamma_\bullet: [0,N_\bullet] \to \mathcal{C}_{{\cal D}}$ is a simple closed tiling path satisfying $\operatorname{Tw}(\Gamma_\bullet) = c_{{\cal D}} N_\bullet$. Let ${\mathbf t}_{\bullet} \in {\cal T}({\cal R}_{0,N_\bullet;p_{\bullet},p_{\bullet}})$ be the corresponding tiling, where $p_\bullet = \Gamma_\bullet(0) \in {\cal P}$. Let ${\mathbf t}_0 \in {\cal T}({\cal R}_{0,4|{\cal D}|;p_0,p_\bullet})$ be an arbitrary tiling (its existence is guaranteed by Lemma \ref{lemma:cork}). Similarly, for each $k \in [4|{\cal D}|,4|{\cal D}|+N_\bullet] \cap {\mathbb{Z}}$, let ${\mathbf t}_k \in {\cal T}({\cal R}_{0,k;p_\bullet,p_1})$ be an arbitrary tiling. Let $\tilde d$ be the minimum value of $\operatorname{Tw}({\mathbf t}_0) + \operatorname{Tw}({\mathbf t}_k)$ over all choices of $p_0, p_1 \in {\cal P}$ and $k$. For $N \ge 8|{\cal D}|$, consider the tiling ${\mathbf t} = {\mathbf t}_0 \ast {\mathbf t}_{\bullet}^j \ast {\mathbf t}_k \in {\cal T}({\cal R}_{0,N;p_0,p_1})$, where $k = 4|{\cal D}| + ( (N-8|{\cal D}|) \bmod N_{\bullet} )$ and $j = \lfloor (N-8|{\cal D}|)/N_{\bullet} \rfloor$. In particular, $jN_\bullet \ge N - 8|{\cal D}| - N_{\bullet}$. We have \[ \operatorname{Tw}({\mathbf t}) = \operatorname{Tw}({\mathbf t}_0) + c_{{\cal D}} jN_{\bullet} + \operatorname{Tw}({\mathbf t}_k) \ge c_{{\cal D}} N - c_{{\cal D}} (8|{\cal D}| + N_{\bullet}) + \tilde d, \] obtaining the desired $d_{-}$. For each pair $(p_0,p_1) \in {\cal P}^2$ let ${\mathbf t}_{p_1,p_0} \in {\cal T}({\cal R}_{0,4|{\cal D}|;p_1,p_0})$ be an arbitrary tiling.
For any ${\mathbf t} \in {\cal T}({\cal R}_{0,N;p_0,p_1})$ we have from the first item that $\operatorname{Tw}({\mathbf t} \ast {\mathbf t}_{p_1,p_0}) \le c_{{\cal D}} (N+4|{\cal D}|)$ and therefore \[ \operatorname{Tw}({\mathbf t}) \le c_{{\cal D}} N + 4c_{{\cal D}} |{\cal D}| - \operatorname{Tw}({\mathbf t}_{p_1,p_0}); \] taking the minimum of $\operatorname{Tw}({\mathbf t}_{p_1,p_0})$ over all $(p_0,p_1) \in {\cal P}^2$ gives us the desired $d_{+}$ and completes the proof. \end{proof} We now describe how to compute $c_{{\cal D}}$. In brief, the problem of computing $c_{{\cal D}}$ is the problem of computing an eigenvalue of a matrix, but in the tropical semifield. The subject of tropical mathematics is vast, and we do not assume any knowledge of it; \cite{litvinov} is a nice introductory text, with an ample bibliography. A semifield is an algebraic structure of the form $(A,1,\oplus,\otimes,\cdot^{-1})$ where $A$ is a set, $1 \in A$ and the binary operations $\oplus$ and $\otimes$ in $A$ are associative and commutative, satisfy the usual distributive law, and $\otimes$, together with $1$ and $\cdot^{-1}$, endows $A$ with an abelian multiplicative group structure. An obvious example is $A = (0,+\infty) \subset {\mathbb{R}}$ with the usual operations. The set $\frac14 {\mathbb{Z}}$ is a semifield with the operations $a \oplus b = \max\{a,b\}$ and $a \otimes b = a+b$: this is the {\em tropical} semifield. If ${\mathbf M}^N$ is the ${\cal P} \times {\cal P}$ matrix with entries $({\mathbf M}^N)_{p_0,p_1} = m_{N;p_0,p_1}$ (as in Lemma \ref{lemma:cD}) then ${\mathbf M}^N$ is the tropical power of ${\mathbf M} = {\mathbf M}^1$: \begin{equation} \label{equation:tropicalpower} ({\mathbf M}^{N_0+N_1})_{(p_0,p_2)} = \max_{p_1 \in {\cal P}}\; \left(({\mathbf M}^{N_0})_{(p_0,p_1)} + ({\mathbf M}^{N_1})_{(p_1,p_2)}\right). \end{equation} For a relatively small disk ${\cal D}$ it is therefore not hard to compute ${\mathbf M}^N$.
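The tropical power in Equation \ref{equation:tropicalpower}, together with the conjugation bound of Lemma \ref{lemma:tropical} below, can be sketched in a few lines (a sketch; the toy $2 \times 2$ matrix in the usage example is hypothetical, whereas the actual matrix ${\mathbf M}$ has entries $m_{1;p_0,p_1}$ indexed by plugs):

```python
NEG_INF = float('-inf')  # the tropical "zero": neutral for max, absorbing for +

def tropical_mul(A, B):
    # (A otimes B)[i][j] = max_k (A[i][k] + B[k][j])
    n = len(A)
    return [[max(A[i][k] + B[k][j] for k in range(n))
             for j in range(n)] for i in range(n)]

def tropical_power(M, N):
    # N-th tropical power of M (N >= 1), iterating the displayed equation
    P = M
    for _ in range(N - 1):
        P = tropical_mul(P, M)
    return P

def upper_bound_c(M, D, N):
    # m = max entry of the tropical conjugation D^{-1} M^N D,
    # where D is given as the list of its diagonal entries;
    # the lemma below then gives c_D <= m / N.
    P = tropical_power(M, N)
    n = len(M)
    m = max(-D[i] + P[i][j] + D[j] for i in range(n) for j in range(n))
    return m / N
```

For instance, for `M = [[NEG_INF, 1], [1, NEG_INF]]` the tropical square is `[[2, NEG_INF], [NEG_INF, 2]]`, and with the zero diagonal matrix the bound for $N = 2$ is $m/N = 1$.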
In this tropical context, a diagonal matrix $D$ has diagonal entries in $\frac14 {\mathbb{Z}}$ and off-diagonal entries equal to $-\infty$. The inverse $D^{-1}$ has diagonal entries $(D^{-1})_{p,p} = -D_{p,p}$ and the conjugation $D^{-1} {\mathbf M}^N D$ is of course defined with tropical operations: \[ (D^{-1}{\mathbf M}^N D)_{p_0,p_1} = -D_{p_0,p_0} + ({\mathbf M}^N)_{p_0,p_1} + D_{p_1,p_1}. \] The number $c_{{\cal D}}$ is (in this context) an eigenvalue of ${\mathbf M}$. \begin{lemma} \label{lemma:tropical} Let ${\cal D}$ be a regular quadriculated disk and let ${\mathbf M}$ be the tropical matrix constructed above. Let $D$ be a diagonal matrix and $N \in {\mathbb{N}}^\ast$; then \begin{equation} \label{equation:upperc} c_{{\cal D}} \le \frac{m}{N}, \qquad m = \max_{p_0,p_1 \in {\cal P}} (D^{-1}{\mathbf M}^N D)_{p_0,p_1}. \end{equation} \end{lemma} \begin{proof} It follows from Equation \ref{equation:tropicalpower} (and induction) that \[ m_{kN;p_0,p_1} = ({\mathbf M}^{kN})_{p_0,p_1} \le km + D_{p_0,p_0} - D_{p_1,p_1} \] for all $k \in {\mathbb{N}}^\ast$ (and for all $p_0, p_1 \in {\cal P}$). From Lemma \ref{lemma:cD}, $c_{{\cal D}} \le m/N$. \end{proof} \begin{example} \label{example:cD44} For ${\cal D} = [0,4]^2$, we have $c_{{\cal D}} = \frac32$. Indeed, the closed tiling path shown in Figure \ref{fig:rocket} implies $c_{{\cal D}} \ge \frac32$. On the other hand, for $N = 4$ it is not too hard to obtain a diagonal matrix $D$ for which $m = 6$ (where $m$ is defined in Equation \ref{equation:upperc}, Lemma \ref{lemma:tropical}): we therefore have $c_{{\cal D}} \le \frac32$. In this example, therefore, we can take $N_{\bullet} = 8$ and $\Gamma_{\bullet}: [0,N_{\bullet}] \to \mathcal{C}_{{\cal D}}$ as in Figure \ref{fig:rocket}. \end{example} Recall that $\Gamma_{\bullet}$ is a simple closed tiling path with $\operatorname{Tw}(\Gamma_{\bullet}) = c_{{\cal D}} N_{\bullet}$.
Extend it to define $\Gamma_{\bullet}: {\mathbb{R}} \to \mathcal{C}_{{\cal D}}$, periodic with period $N_{\bullet}$. Lift this path to define $\tilde\Gamma_{\bullet}: {\mathbb{R}} \to \tilde\mathcal{C}_{{\cal D}}$ with $\tilde\Gamma_{\bullet}(t+N_{\bullet}) = \sigma^m(\tilde\Gamma_{\bullet}(t))$ where $m = c_{{\cal D}} N_{\bullet}$. (Recall that $\sigma: \tilde\mathcal{C}_{{\cal D}} \to \tilde\mathcal{C}_{{\cal D}}$ is a deck transformation.) The {\em spine} of $\tilde\mathcal{C}_{{\cal D}}$ is $\tilde\mathcal{C}_{{\cal D}}^{\bullet} \subset \tilde\mathcal{C}_{{\cal D}}$, the image of $\tilde\Gamma_{\bullet}$. Clearly, the spine $\tilde\mathcal{C}_{{\cal D}}^{\bullet}$ is a $1$-complex isomorphic to ${\mathbb{R}}$, with integers being vertices. Endow both $\tilde\mathcal{C}_{{\cal D}}^{\bullet}$ and $\tilde\mathcal{C}_{{\cal D}}$ with metric structures: each edge has length $1$ and the distance between two vertices $p_0$ and $p_1$ is the minimal $N$ for which there exists a tiling path $\Gamma$ with $\Gamma(0) = p_0$ and $\Gamma(N) = p_1$. For each vertex $p$ of $\tilde\mathcal{C}_{{\cal D}}$, let $\Pi(p) \in \tilde\mathcal{C}_{{\cal D}}^{\bullet}$ be the vertex of $\tilde\mathcal{C}_{{\cal D}}^{\bullet}$ nearest to $p$; in case of a draw choose arbitrarily but preserve $\Pi(\sigma^m(p)) = \sigma^m(\Pi(p))$. Extend $\Pi$ to $1$ and $2$-cells, always preserving the identity $\Pi \circ \sigma^m = \sigma^m \circ \Pi$ and the fact that the restriction of $\Pi$ to $\tilde\mathcal{C}_{{\cal D}}^{\bullet}$ is the identity. Thus, $i: \tilde\mathcal{C}_{{\cal D}}^{\bullet} \to \tilde\mathcal{C}_{{\cal D}}$ and $\Pi: \tilde\mathcal{C}_{{\cal D}} \to \tilde\mathcal{C}_{{\cal D}}^{\bullet}$ are continuous maps taking vertices to vertices. Clearly $\Pi \circ i$ is the identity map. The following result shows that $i$ and $\Pi$ are quasi-isometries in the sense of Gromov. There is a vast literature on quasi-isometries; see for instance \cite{gromov} and \cite{sherdaverman}. 
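On a finite truncation of the complex, the edge-length metric and the nearest-vertex projection $\Pi$ described above can be sketched via breadth-first search (a sketch only: the adjacency lists are hypothetical, and ties are broken by list order rather than $\sigma^m$-equivariantly as required in the construction):

```python
from collections import deque

def graph_distances(adj, source):
    # breadth-first search: every edge has length 1, so BFS depth
    # computes the metric used for both the spine and the full complex
    dist = {source: 0}
    queue = deque([source])
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            if w not in dist:
                dist[w] = dist[v] + 1
                queue.append(w)
    return dist

def project_to_spine(adj, spine, p):
    # Pi(p): a spine vertex at minimal distance from p
    # (draws broken by the order of `spine`, not equivariantly)
    dist = graph_distances(adj, p)
    return min(spine, key=lambda q: dist.get(q, float('inf')))
```

On a path graph `0 - 1 - 2 - 3` with spine `[0, 1]`, the vertex `3` projects to `1`, the spine vertex at distance $2$ rather than $3$.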
We will keep the discussion self-contained. \begin{lemma} \label{lemma:quasi} If $p_0, p_1$ are vertices of the spine $\tilde\mathcal{C}_{{\cal D}}^{\bullet}$ then $d_{\tilde\mathcal{C}_{{\cal D}}^{\bullet}}(p_0,p_1) = d_{\tilde\mathcal{C}_{{\cal D}}}(p_0,p_1)$. There exists a constant $d$ such that $d_{\tilde\mathcal{C}_{{\cal D}}}(p,\Pi(p)) \le d$ for every vertex $p$ of $\tilde\mathcal{C}_{{\cal D}}$. \end{lemma} \begin{proof} Let $p_0, p_1$ be vertices of the spine $\tilde\mathcal{C}_{{\cal D}}^{\bullet}$; we may assume without loss of generality that $\operatorname{tw}(p_0) < \operatorname{tw}(p_1)$. Clearly $d_{\tilde\mathcal{C}_{{\cal D}}}(p_0,p_1) \le d_{\tilde\mathcal{C}_{{\cal D}}^{\bullet}}(p_0,p_1)$; assume by contradiction that $N_0 = d_{\tilde\mathcal{C}_{{\cal D}}}(p_0,p_1) < N_1 = d_{\tilde\mathcal{C}_{{\cal D}}^{\bullet}}(p_0,p_1)$. Take $\tilde N_1 > N_1$ with $\tilde N_1 = kN_{\bullet}$, $k \in {\mathbb{N}}^\ast$; set $\tilde N_0 = N_0 + \tilde N_1 - N_1$. Let $\Gamma: [0,N_0] \to \tilde\mathcal{C}_{{\cal D}}$ be a tiling path with $\Gamma(0) = p_0$, $\Gamma(N_0) = p_1$. Extend $\Gamma$ to $\Gamma: [0,\tilde N_0] \to \tilde\mathcal{C}_{{\cal D}}$ by following the spine $\tilde\mathcal{C}_{{\cal D}}^{\bullet}$ in the positive direction so that $\Gamma(\tilde N_0) = \sigma^{km}(p_0)$. The closed curve $\Gamma: [0,\tilde N_0] \to \mathcal{C}_{{\cal D}}$ satisfies $\operatorname{Tw}(\Gamma) = km > c_{{\cal D}}\tilde N_0$, violating Lemma \ref{lemma:cD}. For the second claim, consider the equivalence relation in $\tilde{\cal P}$ identifying $p$ with $\sigma^m(p)$. There are finitely many equivalence classes: let $X \subset \tilde{\cal P}$ be a set of representatives. Take $d = \max_{p \in X} d_{\tilde\mathcal{C}_{{\cal D}}}(p,\Pi(p))$: the claim follows from invariance under $\sigma^m$.
\end{proof} \section{Proof of Theorem \ref{theo:M}} \label{sect:theoM} Let $\Gamma_{\bullet}: [0,N_{\bullet}] \to \mathcal{C}_{{\cal D}}$ and the spine $\tilde\mathcal{C}^{\bullet}_{{\cal D}}$ be as in the previous section. Let $d$ be as in Lemma \ref{lemma:quasi}. Recall that $\tilde\mathcal{C}_{{\cal D}}$ is the universal cover of the finite complex $\mathcal{C}_{{\cal D}}$ and is therefore simply connected. Given two tiling paths $\Gamma_0: [0,N_0] \to \tilde\mathcal{C}_{{\cal D}}$ and $\Gamma_1: [0,N_1] \to \tilde\mathcal{C}_{{\cal D}}$ with $\Gamma_0(0) = \Gamma_1(0)$ and $\Gamma_0(N_0) = \Gamma_1(N_1)$ there exists therefore a homotopy with fixed endpoints between $\Gamma_0$ and $\Gamma_1$. We may combinatorialize the concept of homotopy (with fixed endpoints) as follows: the homotopy is a family $(\Gamma_s)_{s \in \frac1S{\mathbb{Z}} \cap [0,1]}$ of tiling paths (for some $S \in {\mathbb{N}}^\ast$), all with the same endpoints, where two consecutive tiling paths $\Gamma_{\frac{k}{S}}$ and $\Gamma_{\frac{k+1}{S}}$ differ by one of the two moves below. \begin{enumerate} \item{The two consecutive paths may have the same length and differ by a flip, in other words, by moving the path across one of the $2$-cells shown in Figures \ref{fig:hflip} and \ref{fig:vflip}.} \item{The two consecutive paths may have lengths differing by two; the longer one is obtained from the shorter one by inserting two adjacent floors (edges), one the inverse of the other.} \end{enumerate} Two tilings ${\mathbf t}_0, {\mathbf t}_1 \in {\cal T}({\cal R}_N)$ with $\operatorname{Tw}({\mathbf t}_0) = \operatorname{Tw}({\mathbf t}_1) = t$ thus correspond to two tiling paths $\Gamma_0, \Gamma_1: [0,N] \to \tilde\mathcal{C}_{{\cal D}}$ with the same endpoints: $\Gamma_0(0) = \Gamma_1(0) = {\mathbf p_\circ}$, $\Gamma_0(N) = \Gamma_1(N) = \sigma^t({\mathbf p_\circ})$. We know that a homotopy (with fixed endpoints) exists.
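The two combinatorial moves can be sketched on paths represented as lists of oriented edges, an edge being a pair of vertices (a sketch; only the endpoint bookkeeping is modeled, with no actual $2$-cell data from Figures \ref{fig:hflip} and \ref{fig:vflip}):

```python
def flip(path, i, new_pair):
    # Move 1: replace two consecutive edges by the other two sides of a
    # 2-cell; the length and the endpoints of the path are unchanged.
    old = path[i:i + 2]
    assert old[0][0] == new_pair[0][0] and old[1][1] == new_pair[1][1]
    return path[:i] + list(new_pair) + path[i + 2:]

def insert_backtrack(path, i, edge):
    # Move 2: lengthen the path by two, inserting an edge followed by
    # its inverse (the inverse of the oriented edge (v, w) is (w, v)).
    v, w = edge
    return path[:i] + [(v, w), (w, v)] + path[i:]

def delete_backtrack(path, i):
    # Inverse of move 2: remove a backtracking pair at position i.
    (v, w), pair = path[i], path[i + 1]
    assert pair == (w, v), "not a backtracking pair"
    return path[:i] + path[i + 2:]
```

A combinatorial homotopy is then a finite sequence of such moves, and the length-control argument below bounds the lengths of the intermediate paths.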
If all paths in the homotopy have length at most $N + M$ then ${\mathbf t}_0 \ast {\mathbf t}_{\operatorname{vert},M} \approx {\mathbf t}_1 \ast {\mathbf t}_{\operatorname{vert},M}$. In order to prove Theorem \ref{theo:M}, therefore, we need to control the length of the paths in a homotopy. \begin{proof}[Proof of Theorem \ref{theo:M}] Recall that $d$ is as in Lemma \ref{lemma:quasi}. Let $\tilde M$ be such that if $\Gamma_0, \Gamma_1$ are tiling paths with the same endpoints in $\tilde\mathcal{C}_{{\cal D}}$ with lengths $N_0, N_1 \le 4d + 4$ then there exists a homotopy (with fixed endpoints) from $\Gamma_0$ to $\Gamma_1$ such that all intermediate paths have length at most $\tilde M$. The existence of such an $\tilde M$ follows from the fact that the number of such pairs of paths with values in $\mathcal{C}_{{\cal D}}$ is finite; pairs differing by a deck transformation are equivalent. We claim that $M = \tilde M + 4d$ satisfies the statement of the theorem. Given an integer $t$ we construct a tiling path $\Gamma_0$ from ${\mathbf p_\circ} \in \tilde{\cal P}$ to $\sigma^t({\mathbf p_\circ})$ as follows. First construct the shortest arc from ${\mathbf p_\circ}$ to the spine $\tilde\mathcal{C}^{\bullet}_{{\cal D}}$: this will be the beginning of $\Gamma_0$. Next construct the shortest arc from the spine $\tilde\mathcal{C}^{\bullet}_{{\cal D}}$ to $\sigma^t({\mathbf p_\circ})$: this will be the end of $\Gamma_0$. Notice that the beginning and the end have length at most $d$ each. The middle of $\Gamma_0$ connects the final point of the first arc to the initial point of the second arc along the spine $\tilde\mathcal{C}^{\bullet}_{{\cal D}}$: from Lemma \ref{lemma:quasi}, it is the shortest arc connecting these two points. Let $N_0$ be the length of $\Gamma_0$.
Given a tiling ${\mathbf t}_1 \in {\cal T}({\cal R}_{N_1})$ with $\operatorname{Tw}({\mathbf t}_1) = t$, consider the corresponding tiling path $\Gamma_1: [0,{N_1}] \to \tilde\mathcal{C}_{{\cal D}}$ with endpoints $\Gamma_1(0) = {\mathbf p_\circ}$ and $\Gamma_1({N_1}) = \sigma^t({\mathbf p_\circ})$. From Lemma \ref{lemma:quasi} and the triangle inequality, ${N_1} \ge N_0 - 4d$. We construct a homotopy from $\Gamma_0$ to $\Gamma_1$. We first define intermediate paths $\Gamma_{\frac{s}{1+N_1}}$ for $s \in \{1,2,\ldots,N_1\}$ as follows. First follow $\Gamma_1$ from $\Gamma_1(0)$ to $\Gamma_1(s)$: call this the first part of $\Gamma_{\frac{s}{1+N_1}}$. We now construct the second part of $\Gamma_{\frac{s}{1+N_1}}$ just as we constructed $\Gamma_0$. More precisely: first construct the shortest arc from $\Gamma_1(s)$ to the spine $\tilde\mathcal{C}^{\bullet}_{{\cal D}}$: this will be the beginning of the second part of $\Gamma_{\frac{s}{1+N_1}}$. Next construct the shortest arc from the spine $\tilde\mathcal{C}^{\bullet}_{{\cal D}}$ to $\sigma^t({\mathbf p_\circ})$: this will be the end of the second part of $\Gamma_{\frac{s}{1+N_1}}$ (notice that it coincides with the end of $\Gamma_0$). The middle of the second part of $\Gamma_{\frac{s}{1+N_1}}$ again connects the final point of the first arc to the initial point of the second arc along the spine $\tilde\mathcal{C}^{\bullet}_{{\cal D}}$. As above, the length of $\Gamma_{\frac{s}{1+N_1}}$ is at most $N_1 + 4d$. We now need homotopies from $\Gamma_{\frac{s}{1+N_1}}$ to $\Gamma_{\frac{s+1}{1+N_1}}$. The case $s = N_1$ is easy: $\Gamma_{\frac{N_1}{1+N_1}}$ differs from $\Gamma_1$ just by the fact that at the end we add a path from $\sigma^t({\mathbf p_\circ})$ to the spine $\tilde\mathcal{C}^{\bullet}_{{\cal D}}$ and back. We may assume that it is the same path, so the homotopy consists of using the second move above to eliminate the difference (or perhaps the reader prefers to imitate the proof of Lemma \ref{lemma:eventiling}).
We thus focus on the case $s < N_1$. Notice that $\Gamma_{\frac{s}{1+N_1}}$ and $\Gamma_{\frac{s+1}{1+N_1}}$ coincide from $0$ to $s$. The path $\Gamma_{\frac{s}{1+N_1}}$ then follows an arc from $\Gamma_1(s)$ to $\tilde\mathcal{C}^{\bullet}_{{\cal D}}$ and then follows along the spine $\tilde\mathcal{C}^{\bullet}_{{\cal D}}$. The path $\Gamma_{\frac{s+1}{1+N_1}}$ first moves from $\Gamma_1(s)$ to $\Gamma_1(s+1)$, then follows an arc from $\Gamma_1(s+1)$ to $\tilde\mathcal{C}^{\bullet}_{{\cal D}}$ and then follows along the spine $\tilde\mathcal{C}^{\bullet}_{{\cal D}}$ (see Figure \ref{fig:homotopy}). \begin{figure} \caption{The two paths $\Gamma_{\frac{s}{1+N_1}}$ and $\Gamma_{\frac{s+1}{1+N_1}}$.} \label{fig:homotopy} \end{figure} There exist therefore intervals of length at most $4d$ each in the domains of $\Gamma_{\frac{s}{1+N_1}}$ and $\Gamma_{\frac{s+1}{1+N_1}}$ such that these two paths coincide outside the intervals. We construct a homotopy from one to the other changing these arcs only. By definition of $\tilde M$, such a homotopy exists using intermediate curves of length at most $\tilde M$. Thus, when we plug this smaller homotopy inside the larger one all intermediate curves have length at most $N_1 + M$, as desired. \end{proof} \begin{coro} \label{coro:giant} Let ${\cal D}$ be a regular quadriculated disk; let $M \in {\mathbb{N}}^{\ast}$ be as in Theorem \ref{theo:M}. For $N \in {\mathbb{N}}^{\ast}$ and $t \in {\mathbb{Z}}$, let ${\cal T}_{N,t} = \{ {\mathbf t} \in {\cal T}({\cal R}_N) \;|\; \operatorname{Tw}({\mathbf t}) = t \}$. Partition ${\cal T}_{N,t}$ by the equivalence relation $\approx$. All tilings ${\mathbf t} \in {\cal T}_{N,t}$ having {\em at least} $M$ vertical floors belong to the same connected component. \end{coro} \begin{proof} Let ${\mathbf t}_0, {\mathbf t}_1 \in {\cal T}_{N,t}$ be two such tilings.
Apply Lemma \ref{lemma:movevert} to move the vertical floors to the end and therefore obtain $\tilde{\mathbf t}_0, \tilde{\mathbf t}_1 \in {\cal T}({\cal R}_{N-M})$ with ${\mathbf t}_i \approx \tilde{\mathbf t}_i \ast {\mathbf t}_{\operatorname{vert},M}$ (for $i \in \{0,1\}$). From Theorem \ref{theo:M}, $\tilde{\mathbf t}_0 \ast {\mathbf t}_{\operatorname{vert},M} \approx \tilde{\mathbf t}_1 \ast {\mathbf t}_{\operatorname{vert},M}$, completing the proof. \end{proof} \begin{remark} \label{remark:diameter} It follows from the proof of Theorem \ref{theo:M} that there exists a linear bound (as a function of $N$) on the number of flips necessary to move in ${\cal T}({\cal R}_{N+M})$ from ${\mathbf t}_0 \ast {\mathbf t}_{\operatorname{vert},M}$ to ${\mathbf t}_1 \ast {\mathbf t}_{\operatorname{vert},M}$, where ${\mathbf t}_0, {\mathbf t}_1 \in {\cal T}({\cal R}_N)$, $\operatorname{Tw}({\mathbf t}_0) = \operatorname{Tw}({\mathbf t}_1)$. Indeed, it suffices to {\em first} establish the length of the longest path. The construction of $\Gamma_{\frac{s}{1+N_1}}$ is then done in the order above, with any extra length taken up by going back and forth along $\tilde\mathcal{C}^{\bullet}_{{\cal D}}$ at the start of the middle of the second part of $\Gamma_{\frac{s}{1+N_1}}$. \end{remark} \begin{remark} \label{remark:hyperbolic} The crucial property of the domino group $G_{{\cal D}} = \pi_1(\mathcal{C}_{{\cal D}})$ in the proof of Theorem \ref{theo:M} appears to be hyperbolicity (in the sense of Gromov; see \cite{gromov}). The group ${\mathbb{Z}}$ is of course a rather too special example of a hyperbolic group. Work in progress indicates that for some irregular regions the domino group $G_{{\cal D}}$ has exponential growth but also contains copies of ${\mathbb{Z}}^2$ and is therefore not hyperbolic. \end{remark} \section{Final remarks} \label{sect:final} The reader has probably noticed that many questions were left unanswered; we make a few remarks about them.
The most obvious question is probably: exactly which quadriculated disks are regular? This question is briefly discussed at the end of Section \ref{sect:44}. As mentioned there, we do not have a complete answer. Significant progress has been achieved in \cite{marreiros}, and there is more related work in progress. Another question is to compute the domino group $G_{{\cal D}}$ for examples of non regular quadriculated disks ${\cal D}$. Notice that even for rectangles $[0,2] \times [0,N]$ we did not compute the group; in fact it is not hard to see (particularly using \cite{primeiroartigo}) that the map we constructed is not an isomorphism. Here too, there is work in progress. Theorem \ref{theo:M} invites several questions. We might want to determine the best value of $M$ as a function of ${\cal D}$. Computations for the example ${\cal D} = [0,4]^2$ show that if $N = 4$ then $M = 2$ works; there is some evidence suggesting that $M = 2$ may work for all $N$. At this point, it is even consistent with what we know that $M$ can be taken to be a constant independent of ${\cal D}$; perhaps even $M = 2$ always works. It would also be interesting to obtain a similar result without the hypothesis that ${\cal D}$ be regular. The general case (when ${\cal D}$ is not regular) probably depends on the structure of the domino group $G_{{\cal D}}$; as discussed in Remark \ref{remark:hyperbolic}, the concept of hyperbolicity may be relevant. In \cite{saldanhaejc} we present some probabilistic results. In particular, we use Theorem \ref{theo:M} and Corollary \ref{coro:giant} (together with an estimate on the distribution of the twist) to say something about the number and sizes of the connected components (or equivalence classes) under $\approx$.
In particular, we see that small values of twist account for almost all tilings and that, given a small value for the twist, the connected component described in Corollary \ref{coro:giant} is a ``giant component'' which contains almost all tilings with that twist. Still, it would be interesting to have a better understanding of the smaller connected components. Another natural question is: what happens in dimensions $4$ and higher? This is the subject of \cite{KS}: the twist can be defined, but now with values in ${\mathbb{Z}}/(2)$. Many regions are regular; for regular regions, if two tilings have the same twist then almost always they can be joined by a finite sequence of flips. Also, if a little extra space is allowed, then two tilings with the same twist can always be joined by flips. Thus, the space of tilings has two twin giant components, one for each value of the twist. \noindent \footnotesize Departamento de Matem\'atica, PUC-Rio \\ Rua Marqu\^es de S\~ao Vicente, 225, Rio de Janeiro, RJ 22451-900, Brazil \\ \url{[email protected]} \end{document}
\begin{document} \title{Improved mixing time bounds for the \\ Thorp shuffle and $L$-reversal chain } \author{ {\sc Ben Morris}\thanks{Department of Mathematics, University of California, Davis. Email: {\tt [email protected]}. Research partially supported by a Sloan Fellowship and NSF grant DMS-0707144.} } \date{} \maketitle \begin{abstract} \noindent We prove a theorem that reduces bounding the mixing time of a card shuffle to verifying a condition that involves only pairs of cards, then we use it to obtain improved bounds for two previously studied models. E.~Thorp introduced the following card shuffling model in 1973. Suppose the number of cards $n$ is even. Cut the deck into two equal piles. Drop the first card from the left pile or from the right pile according to the outcome of a fair coin flip. Then drop from the other pile. Continue this way until both piles are empty. We obtain a mixing time bound of $O(\log^4 n)$. Previously, the best known bound was $O(\log^{29} n)$ and previous proofs were only valid for $n$ a power of $2$. We also analyze the following model, called the {\it $L$-reversal chain}, introduced by Durrett. There are $n$ cards arrayed in a circle. Each step, an interval of cards of length at most $L$ is chosen uniformly at random and its order is reversed. Durrett has conjectured that the mixing time is $O(\max(n, {n^3 \over L^3})\log n)$. We obtain a bound that is within a factor $O(\log^2 n)$ of this, the first bound within a poly log factor of the conjecture. \end{abstract} \setcounter{page}{1} \section{Introduction} \label{intro} Card shuffling has a rich history in mathematics, dating back to work of Markov \cite{markov} and Poincar\'e \cite{poincare}. A basic problem is to determine the mixing time, i.e., the number of shuffles necessary to mix up the deck (see Section \ref{secaps} for a precise definition).
A natural first step (used as far back as Borel and Ch\'eron \cite{bc} in 1940) is to determine the number of steps necessary to randomize single cards and pairs. Clearly this is always a lower bound for the mixing time. On the other hand, it is often not far from an upper bound as well; for a number of models of card shuffling (see, e.g., Diaconis and Shahshahani \cite{ds}, Wilson \cite{wilson}, or Bayer and Diaconis \cite{bd}) the mixing time is only a small factor (e.g.\ $O(1)$ or $O(\log n)$) larger than the time required to mix pairs. This suggests finding a general method that reduces bounding the mixing time (in the global sense that the distribution on all $n!$ permutations is roughly uniform) to verifying a local condition that involves only pairs of cards. In this paper, we introduce such a method and use it to analyze two previously studied models. In both cases we find an upper bound for the mixing time that is within a poly logarithmic factor of optimal. We study card shuffles that can be viewed as generalizations of three card Monte. In three card Monte, the cards are spread out face down on a table. In one step, the dealer chooses two cards, puts them together and then separates them quickly so that an observer cannot tell which is which. We call this operation a {\it collision}, and model it mathematically as a random permutation that is an even mixture of a transposition and the identity. We prove a general theorem that applies to any method of shuffling that uses collisions. The theorem bounds the change in relative entropy after many steps of the chain, based on a quantity related to the interactions between pairs of cards. Next we use the theorem to analyze two card shuffling models, the Thorp shuffle and Durrett's $L$-reversal model. \subsection{Applications} \label{secaps} In this section we describe two applications of our main theorem. First, we give a formal definition of the mixing time.
Let $p(x,y)$ be transition probabilities for a Markov chain on a finite state space $V$ with a uniform stationary distribution. For probability measures $\mu$ and $\nu$ on $V$, define the total variation distance $|| \mu - \nu || = {\textstyle{1\over2}} \sum_{x \in V} |\mu(x) - \nu(x) |$, and define the mixing time \begin{equation} \label{mixingtime} T_{\rm mix} = \min \{n: || p^n(x, \, \cdot) - {\cal U } || \leq {\textstyle{1\over4}} \mbox{ for all $x \in V$}\} \,, \end{equation} where ${\cal U }$ denotes the uniform distribution. Our first application is the Thorp shuffle, which is defined as follows. Assume that the number of cards, $n$, is even. Cut the deck into two equal piles. Drop the first card from the left pile or the right pile according to the outcome of a fair coin flip; then drop from the other pile. Continue this way, with independent coin flips deciding whether to drop {\sc left-right} or {\sc right-left} each time, until both piles are empty. The Thorp shuffle, despite its simple description, has been hard to analyze. Determining its mixing time has been called the ``longest-standing open card shuffling problem'' \cite{per}. In \cite{thorp} the author obtained the first poly log upper bound, proving a bound of $O(\log^{44} n)$, valid when $n$ is a power of $2$. Montenegro and Tetali \cite{mt-thorp} built on this to get a bound of $O(\log^{29} n)$. In the present paper, we dispense with the power-of-two assumption and get an improved bound of $O(\log^4 n)$. We also analyze a Markov chain that was introduced by Durrett \cite{durrett} as a model for the evolution of a genome (see \cite{durrettbio}). In the {\it $L$-reversal chain} there are two parameters, $n$ and $L$. The cards are located at the vertices of an $n$-cycle, which we label $0, \dots, n - 1$. Each step, a (nonempty) interval of cards of length at most $L$ is chosen uniformly at random and its order is reversed.
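One step of the Thorp shuffle as described above can be sketched as follows (a minimal simulation sketch; the convention that each dropped pair is read in order into the new deck is an assumption about the bookkeeping, not part of the definition):

```python
import random

def thorp_step(deck, rng=random):
    """One Thorp shuffle of an even-sized deck: cut into two equal
    piles, then repeatedly drop the next card from the left or the
    right pile first, a fresh fair coin deciding the order each time."""
    n = len(deck)
    assert n % 2 == 0, "the Thorp shuffle needs an even number of cards"
    left, right = deck[:n // 2], deck[n // 2:]
    out = []
    for l, r in zip(left, right):
        if rng.random() < 0.5:   # drop left-right
            out += [l, r]
        else:                    # drop right-left
            out += [r, l]
    return out
```

Iterating `thorp_step` then simulates the chain; each step uses $n/2$ independent fair coin flips and always produces a permutation of the input deck.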
By the coupon collector problem, on the order of $n \log n$ steps are needed to break adjacencies between neighboring pairs. Furthermore, the mixing time for a single card is on the order of ${n^3 \over L^3}$, because in each step the probability that a particular card moves is on the order of $L/n$, and each time a card moves it performs a step of a symmetric random walk with typical displacement on the order of $L$. These considerations led Durrett to the following conjecture. \\ \\ {\bf Conjecture (Durrett). } The mixing time for the $L$-reversal chain is $O(\max(n, {n^3 \over L^3})\log n)$. \\ \\ In \cite{durrett}, Durrett proves the corresponding lower bound using Wilson's technique \cite{wilson} based on eigenfunctions. The spectral gap was determined to be within constant factors of $\max(n, {n^3 \over L^3})$ by Cancrini, Caputo and Martinelli \cite{ccm}. The best previously-known bound for the mixing time, which could be obtained by applying standard comparison techniques, was within a factor $O(n^{2/3})$ of Durrett's conjecture in the worst case. Durrett's conjecture has presented a challenge to existing techniques. As shown in \cite{ccm}, the log Sobolev constant does not give the conjectured mixing time. Furthermore, the mixing time in $L^2$ (defined by replacing total variation distance by an appropriate $L^2$ distance in equation (\ref{mixingtime})) can be nearly $n^{1/3}$ times the conjecture, as the following example shows. Let $L = n^{2/3}$, so that the conjectured mixing time is $O(n \log n)$. We claim that in this case the $L^2$ mixing time is at least $c n^{4/3}$ for a constant $c$. Let $A$ be the event that cards $1, \dots, n/2$ occupy positions $1, \dots, n/2$ in any order.
If the initial ordering is the identity permutation, then after $t$ shuffles we have \begin{eqnarray*} \P(A) &\geq& \P(\mbox{none of the reversed intervals contained cards $1$ or $n/2$}) \\ &\geq& \Bigl(1 - {2L \over n} \Bigr)^t, \end{eqnarray*} which is much larger than ${n \choose n/2}^{-1}$ unless $t \geq c n^{4/3}$ for a constant $c$. Since mixing in $L^2$ implies convergence of transition probabilities, the $L^2$ mixing time is at least on the order of $n^{4/3}$, which is higher than the conjecture. This means that in order to prove the conjectured bound on the mixing time in total variation, one cannot use any method for bounding mixing times that gives a bound in $L^2$. In the present paper, we prove that the mixing time is $O\Bigl( (n \vee {n^3 \over L^3})\log^3n \Bigr)$. This is the first upper bound that is within a polylog factor of the conjecture. The remainder of this paper is organized as follows. In Section \ref{bk} we give some necessary background on entropy and prove some elementary inequalities. In Section \ref{gen} we define {\it Monte shuffles}, the general model of card shuffling to which our main theorem will apply. In Section \ref{secmain} we prove the main theorem. In Section \ref{secthorp} we analyze the Thorp shuffle and in Section \ref{seclrev} we analyze the $L$-reversal chain. \section{Background} \label{bk} For a probability distribution $\{p_i: i \in V\}$, define the (relative) entropy of $p$ by $\ent(p) = \sum_{i \in V} p_i \log (|V| p_i)$, where we define $0 \log 0 = 0$. The following well-known inequality links relative entropy to total variation distance. Let ${\cal U}$ denote the uniform distribution over $V$. Then \begin{equation} \label{totent} || p - {\cal U} || \leq \sqrt{ {\textstyle{1\over2}} \ent(p)}. \end{equation} If $X$ is a random variable (or random permutation) taking finitely many values, define $\ent(X)$ as the relative entropy of the distribution of $X$. Note that if $\P(X = i) = p_i$ for $i \in V$ then $\ent(X) = \E(\log (|V| p_X))$. We shall think of the distribution of a random permutation in $S_n$ as a sequence of probabilities of length $n!$, indexed by permutations in $S_n$. If $\f$ is a sigma-field, then we shall write $\ent(X {\,|\,}\f)$ for the relative entropy of the conditional distribution of $X$ given $\f$. Note that $\ent(X {\,|\,}\f)$ is a random variable. If $\pi$ is a random permutation in $S_n$, then for $1 \leq k \leq n$, define $\f_k = \sigma( \pi^{-1}(k), \dots, \pi^{-1}(n))$, and define $\ent(\pi, k) = \ent( \pi^{-1}(k) {\,|\,}\f_{k+1})$ (where we think of the conditional distribution of $\pi^{-1}(k)$ given $\f_{k+1}$ as being a sequence of length $k$). The standard entropy chain rule (see, e.g., \cite{cover}) gives the following proposition. \begin{proposition} \label{decomp} For any $i \leq n$ we have \[ \ent(\pi) = \E \Bigl(\ent( \pi {\,|\,}\f_{i})\Bigr) + \sum_{k=i}^n \E(\ent(\pi, k)). \] \end{proposition} To compute the relative entropy in the first term on the right-hand side, we think of the distribution of $\pi$ given $\f_i$ as a sequence of probabilities of length $(i-1)!$. \begin{remark} Substituting $i = 1$ into the formula gives $\ent(\pi) = \sum_{k=1}^n \E(\ent(\pi,k))$. \end{remark} If we think of $\pi$ as representing the order of a deck of cards, with $\pi(i) = \mbox{location of card $i$}$, then this allows us to think of $\E(\ent(\pi, k))$ as the portion of the overall entropy $\ent(\pi)$ that is attributable to location $k$.
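The chain rule decomposition above can be verified numerically on a tiny deck. In the sketch below (our illustration, not part of the paper's argument) a random distribution on the arrangements of $3$ cards is built, and its relative entropy is compared with the sum of the conditional pieces $\E(\ent(\pi,k))$.

```python
import itertools, math, random

def rel_ent(probs):
    """Relative entropy of a distribution w.r.t. uniform on its index set."""
    n = len(probs)
    return sum(p * math.log(n * p) for p in probs if p > 0)

random.seed(0)
n = 3
# represent a permutation by the arrangement a, where a[k] is the card
# in position k (so a plays the role of pi^{-1})
arrangements = list(itertools.permutations(range(n)))
weights = [random.random() for _ in arrangements]
total_w = sum(weights)
p = {a: w / total_w for a, w in zip(arrangements, weights)}

total_entropy = rel_ent(list(p.values()))

# sum over positions k of E(ent(pi, k)): condition on the cards in
# positions k+1, ..., n-1 and measure the entropy of the card in position k
decomposed = 0.0
for k in range(n):
    groups = {}
    for a, pr in p.items():
        dist = groups.setdefault(a[k + 1:], {})
        dist[a[k]] = dist.get(a[k], 0.0) + pr
    for tail, dist in groups.items():
        pg = sum(dist.values())
        decomposed += pg * rel_ent([v / pg for v in dist.values()])

assert abs(total_entropy - decomposed) < 1e-9   # chain rule holds exactly
```

Here the conditional distribution of the card in position $k$ has $k+1$ possible values, matching the sequence-of-length-$k$ convention above (shifted by one because the code indexes positions from $0$).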
If $S \subset \{1, \dots, n\}$ is a set of positions then we shall refer to the quantity $\sum_{k \in S} \ent(\pi, k)$ as the entropy that is {\it attributable to $S$}. \begin{definition} For $p, q \geq 0$, define $d(p, q) = {\textstyle{1\over2}} p\log p + {\textstyle{1\over2}} q \log q - {p + q \over 2} \log\Bigl({p + q \over 2} \Bigr)$. \end{definition} We will need the following proposition. \begin{proposition} \label{conv} Fix $p \geq 0$. The function $d(p,\,\cdot)$ is convex. \end{proposition} \begin{proof} A calculation shows that the second derivative is positive. \end{proof} Observe that $d(p,q) \geq 0$, with equality iff $p=q$, by the strict convexity of the function $x \to x \log x$. Furthermore, some calculations give \begin{equation} \label{eq-df} d(p, q) = {p + q \over 2} f\Bigl( {p-q \over p+q} \Bigr), \end{equation} where $f(\Delta) = {\textstyle{1\over2}} (1+ \Delta) \log(1 + \Delta) + {\textstyle{1\over2}} (1 - \Delta) \log(1 - \Delta)$. If $p = \{p_i: i \in V\}$ and $q = \{q_i: i \in V\}$ are both probability distributions on $V$, then we can define the ``distance'' $d(p,q)$ between $p$ and $q$ by $d(p,q) = \sum_{i \in V} d(p_i, q_i)$. (We use the term {\it distance} loosely and don't claim that $d(\cdot,\, \cdot)$ satisfies the triangle inequality.) Note that $d(p,q)$ is the difference between the average of the entropies of $p$ and $q$ and the entropy of the average (i.e., an even mixture) of $p$ and $q$. We will use the following projection lemma. \begin{lemma} \label{projection} Let $X$ and $Y$ be random variables with distributions $p$ and $q$, respectively. Fix a function $g$ and let $P$ and $Q$ be the distributions of $g(X)$ and $g(Y)$, respectively. Then $d(p, q) \geq d(P,Q)$. \end{lemma} \begin{proof} Let $S_i = \{x: g(x) = i\}$.
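The identity $d(p,q) = {p+q \over 2} f\bigl({p-q \over p+q}\bigr)$ and the convexity of $d(p,\cdot)$ are easy to check numerically; the following short sketch (our illustration only) does so.

```python
import math

def xlogx(x):
    """x log x with the convention 0 log 0 = 0."""
    return x * math.log(x) if x > 0 else 0.0

def d(p, q):
    """d(p,q) = 1/2 p log p + 1/2 q log q - ((p+q)/2) log((p+q)/2)."""
    return 0.5 * xlogx(p) + 0.5 * xlogx(q) - xlogx((p + q) / 2)

def f(delta):
    """f(D) = 1/2 (1+D) log(1+D) + 1/2 (1-D) log(1-D)."""
    return 0.5 * xlogx(1 + delta) + 0.5 * xlogx(1 - delta)

for p, q in [(0.3, 0.1), (1.0, 2.0), (2.5, 0.25)]:
    # the identity d(p,q) = ((p+q)/2) f((p-q)/(p+q))
    assert abs(d(p, q) - (p + q) / 2 * f((p - q) / (p + q))) < 1e-12
    # convexity of d(p, .): midpoint value lies below the chord
    assert d(p, (q + 1.0) / 2) <= (d(p, q) + d(p, 1.0)) / 2 + 1e-12

# d(p,q) >= 0 with equality iff p = q
assert d(0.7, 0.7) == 0.0 and d(0.7, 0.2) > 0
```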
Then \[ P_i = \sum_{x \in S_i} p_x; \hspace{.4 in} Q_i = \sum_{x \in S_i} q_x. \] We have \begin{eqnarray} d(p, q) &=& \sum_i \sum_{x \in S_i} d(p_x, q_x) \\ &=& \sum_i \sum_{x \in S_i} {p_x + q_x \over 2} f\Bigl( {p_x-q_x \over p_x+q_x} \Bigr) \\ \label{jen} &=& \sum_i \Bigl[ {P_i + Q_i \over 2}\Bigr] \sum_{x \in S_i} {p_x + q_x \over 2} \Bigl[ {P_i + Q_i \over 2}\Bigr]^{-1} f\Bigl( {p_x-q_x \over p_x+q_x} \Bigr). \end{eqnarray} Note that $f$ has a positive second derivative, hence is convex. Thus by Jensen's inequality, the quantity (\ref{jen}) is at least \begin{eqnarray} \sum_i \Bigl[ { P_i + Q_i \over 2} \Bigr] f\Bigl( \sum_{x \in S_i} {p_x + q_x \over 2} \Bigl[ {P_i + Q_i \over 2}\Bigr]^{-1} {p_x-q_x \over p_x+q_x} \Bigr) &=& \sum_i \Bigl[ { P_i + Q_i \over 2} \Bigr] f\Bigl( {P_i - Q_i \over P_i + Q_i} \Bigr) \\ &=& \sum_i d(P_i, Q_i) \\ &=& d(P, Q). \end{eqnarray} \end{proof} Let ${\cal U}$ denote the uniform distribution on $V$. Note that if $\mu$ is an arbitrary distribution on $V$, then $\ent(\mu)$ and $d(\mu, {\cal U})$ are both notions of a distance from $\mu$ to ${\cal U}$. The following lemma relates the two. \begin{lemma} \label{dent} For any distribution $\mu$ on $V$ we have \[ d(\mu, {\cal U}) \geq {c \over \log |V|} \ent(\mu), \] for a universal constant $c > 0$. \end{lemma} \begin{proof} Let $n = |V|$, define $\mt = n\mu$ and define $g:(0, \infty) \to \R$ by $g(x) = x \log x - (x - 1).$ Then \begin{eqnarray} \ent(\mu) &=& \sum_{i \in V} \mu(i) \log(n \mu(i)) \\ &=& {1 \over n} \sum_{i \in V} \Bigl[ \mt(i) \log \mt(i) - (\mt(i) - 1) \Bigr] \\ &=& {1 \over n} \sum_{i \in V} g(\mt(i)), \end{eqnarray} where the second equality holds because $\sum_{i \in V} (\mt(i) - 1) = 0$.
Thus it's enough to show that for a universal constant $c$ we have \begin{equation} \label{showd} d(\mu(i), {\textstyle{1 \over n}}) \geq {c \over n \log n} g( \mt(i)), \end{equation} for all $i \in V$. Fix $i \in V$ and let $x = \mt(i)$. Then by equation (\ref{eq-df}) we have \begin{eqnarray} d( \mu(i), {\textstyle{1 \over n}}) &=& {1 \over n} d(x, 1) \\ &=& {1 \over n} \Bigl( { x + 1 \over 2} \Bigr) f\Bigl( {x -1 \over x+1} \Bigr), \end{eqnarray} where $f(\Delta) = {\textstyle{1\over2}} (1+ \Delta) \log(1 + \Delta) + {\textstyle{1\over2}} (1 - \Delta) \log(1 - \Delta)$. Thus it remains to show that the function $R(x)$ defined by \begin{equation} \label{ratio} R(x) = {g(x) \over \Bigl( { x + 1 \over 2} \Bigr) f\Bigl( {x -1 \over x+1} \Bigr)} \end{equation} is at most $c^{-1} \log n$ on the interval $[0,n]$, for a constant $c>0$. Note that $R(x)$ is bounded on the interval $[0,2]$. (This can be seen by applying L'H\^opital's rule twice at the point $x = 1$.) Let $x \in [2,n]$. The denominator in (\ref{ratio}) is at least \[ {x \over 2} f\Bigl( {x - 1 \over x+1} \Bigr) \geq {x \over 2} f({\textstyle{1\over3}}), \] since the function $x \mapsto f\bigl((x-1)/(x+1)\bigr)$ is increasing on $[2, \infty)$. The numerator is $g(x) \leq x \log x \leq x \log n$. Thus $R(x) \leq 2\log n/f({\textstyle{1\over3}})$ on the interval $[2,n]$ and the proof is complete. \end{proof} \section{General set-up: card shuffles with collisions} \label{gen} \subsection{Collisions} \label{mset} We shall now define a {\it collision}, which is the basic ingredient in all of the card shuffles analyzed in the present paper. If $\pi$ is a random permutation in $S_n$ such that \[ \pi= \left\{\begin{array}{ll} \id & \mbox{with probability ${\textstyle{1\over2}}$;}\\ (a,b) & \mbox{with probability ${\textstyle{1\over2}}$,}\\ \end{array} \right. \] for some $a,b \in \{1, 2, \dots, n\}$ (where we write $\id$ for the identity permutation and $(a,b)$ for the transposition of $a$ and $b$), then we will call $\pi$ a {\it collision.} If $\pi$ and $\mu$ are permutations in $S_n$, then we write $\pi \mu$ for the composition $\mu \circ \pi$. A card shuffle can be described as a random permutation chosen from a certain probability distribution. If we start with the identity permutation and each shuffle has the distribution of $\pi$, then after $t$ steps the cards are distributed like $\pi_1 \cdots \pi_t$, where the $\pi_i$ are i.i.d.~copies of $\pi$. In this paper, we shall consider shuffling permutations $\pi$ that can be written in the form \begin{equation} \label{form} \pi = \nu c(a_1, b_1) c(a_2, b_2) \cdots c(a_k, b_k), \end{equation} where $\nu$ is an arbitrary random permutation, the numbers $a_1, \dots, a_k, b_1, \dots, b_k$ are distinct, and $c(a_j, b_j)$ is a collision of $a_j$ and $b_j$. The values of $a_j$ and $b_j$ and the number of collisions (which can be zero) may depend on $\nu$, but conditional on $\nu$ the $c(a_j, b_j)$ are independent collisions. We shall call shuffles of this type {\it Monte}. For $t \geq 1$, define $\pi_{(t)} = \pi_1 \cdots \pi_t$. \subsection{Warm-Up Lemma} \label{abuse} In this section we prove a simple lemma with a short proof that brings out many of the central ideas of our main theorem (Theorem \ref{maintheorem} below). We start with an easy proposition. \begin{proposition} \label{det} Let $\mu$ be a random permutation and suppose that $\pi$ is any fixed permutation. Then \[ \ent(\mu \pi) = \ent(\mu). \] \end{proposition} \begin{proof} Up to a re-labeling of indices, the random permutation $\mu \pi$ has the same distribution as $\mu$, hence the same relative entropy.
\end{proof} If $\pi$ is random and independent of $\mu$ then $\ent(\mu \pi) \leq \ent(\mu)$, which follows by conditioning on $\pi$, applying Proposition \ref{det}, and then applying Jensen's inequality to the function $x \to x \log x$. It follows that if $\pi_1, \pi_2, \dots$ are i.i.d.~copies of $\pi$ then $\ent(\pi_1 \cdots \pi_k)$ is nonincreasing in $k$. In this section we study the decay of entropy $\ent(\mu \pi) - \ent(\mu)$ in the case where the permutation $\pi$ is a collision. The following lemma relates to the case where $\pi$ is a collision between the $j$th card and another card of smaller index. The lemma says that the relative entropy is reduced by at least $c \E(\ent(\mu, j))/\log n$, on average (where ``on average'' means with respect to the different possible choices of indices $i\leq j$). \begin{lemma} \label{mainlemma} Let $\mu$ be a random permutation. Then for a universal constant $c$ we have \[ j^{-1} \sum_{i \leq j} \ent(\mu c(i, j)) \leq \ent(\mu) - c \E(\ent(\mu, j))/\log n. \] \end{lemma} \begin{proof} Using the abuse of notation ${\textstyle{1\over2}} \pi_1 + {\textstyle{1\over2}} \pi_2$ for a random permutation whose distribution is an even mixture of the distributions of $\pi_1$ and $\pi_2$, we have \[ \mu c(i, j) = {\textstyle{1\over2}} \mu + {\textstyle{1\over2}} \mu (i, j). \] Let ${\cal L}(X {\,|\,}\f)$ denote the conditional distribution of a random variable (or random permutation) $X$ given the sigma field $\f$. Let $\mt = \mu(i,j)$ (i.e., the product of $\mu$ and the transposition $(i,j)$). Note that $\mt$ and $\mu$ are the same, except that $\mt^{-1}(i) = \mu^{-1}(j)$ and $\mu^{-1}(i) = \mt^{-1}(j)$, and recall that $i \leq j$.
It follows that $\ent( \mt {\,|\,}\f_{j+1}) = \ent(\mu {\,|\,}\f_{j+1})$ and hence $ \ent ( \mu c(i,j) {\,|\,}\f_{j+1} ) - \ent(\mu {\,|\,}\f_{j+1}) = -d( {\cal L}( \mt {\,|\,}\f_{j+1}), {\cal L}(\mu {\,|\,}\f_{j+1}))$. But by the projection lemma, \begin{eqnarray*} d( {\cal L}( \mt {\,|\,}\f_{j+1}), {\cal L}(\mu {\,|\,}\f_{j+1})) &\geq& d({\cal L}(\mt^{-1}(j) {\,|\,}\f_{j+1}), {\cal L}(\mu^{-1}(j) {\,|\,}\f_{j+1})) \\ &=& d( {\cal L}(\mu^{-1}(i) {\,|\,}\f_{j+1}), {\cal L}(\mu^{-1}(j) {\,|\,}\f_{j+1})). \end{eqnarray*} Hence \begin{eqnarray} \nonumber j^{-1} \sum_{i \leq j} \ent(\mu c(i, j) {\,|\,}\f_{j+1}) - \ent(\mu {\,|\,}\f_{j+1}) &=& -j^{-1} \sum_{i \leq j} d({\cal L}(\mu^{-1}(i) {\,|\,}\f_{j+1}), {\cal L}(\mu^{-1}(j) {\,|\,}\f_{j+1})) \\ \nonumber &\leq& -d\Bigl( j^{-1}\sum_{i \leq j} {\cal L}(\mu^{-1}(i) {\,|\,}\f_{j+1}), {\cal L}(\mu^{-1}(j) {\,|\,}\f_{j+1}) \Bigr) \\ \nonumber &=& -d\Bigl({\cal U}, {\cal L}(\mu^{-1}(j) {\,|\,}\f_{j+1})\Bigr) \\ &\leq& -{c \over \log n} \ent\Bigl( {\cal L}(\mu^{-1}(j) {\,|\,}\f_{j+1})\Bigr), \end{eqnarray} where the first inequality is by Proposition \ref{conv} and the second is by Lemma \ref{dent}. Here ${\cal U}$ denotes the uniform distribution over $\{1, \dots, n\} - \{ \mu^{-1}(j+1), \dots, \mu^{-1}(n) \}$. Taking expectations gives \begin{eqnarray} \label{diff} j^{-1} \sum_{i \leq j} \E(\ent(\mu c(i, j) {\,|\,}\f_{j+1} )) - \E(\ent(\mu {\,|\,}\f_{j+1})) &\leq& -{c \over \log n} \E(\ent( \mu , j)). \end{eqnarray} Since $\ent(\mu, k) = \ent(\mu c(i,j), k)$ for all $k \geq j+1$, Proposition \ref{decomp} and equation (\ref{diff}) yield the lemma. \end{proof} \section{Main Theorem} \label{secmain} Let $\pi$ be a random permutation in $S_n$ that is Monte (i.e., can be written in the form (\ref{form})) and let $\pi_1, \pi_2, \dots$ be independent copies of $\pi$.
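The warm-up lemma's conclusion, that averaging a collision $c(i,j)$ over $i \leq j$ cannot increase relative entropy and strictly decreases it for a generic $\mu$, can be observed directly on a small deck. A sketch (our illustration; $0$-indexed positions, with a permutation represented by the arrangement of cards in positions):

```python
import itertools, math, random

def rel_ent(probs):
    n = len(probs)
    return sum(x * math.log(n * x) for x in probs if x > 0)

def swap(a, i, j):
    b = list(a)
    b[i], b[j] = b[j], b[i]
    return tuple(b)

random.seed(1)
n = 3
arrangements = list(itertools.permutations(range(n)))
weights = [random.random() for _ in arrangements]
total_w = sum(weights)
p = {a: w / total_w for a, w in zip(arrangements, weights)}

def collide(p, i, j):
    """Distribution of mu c(i,j): an even mixture of mu and mu followed by
    the transposition of positions i and j."""
    return {a: 0.5 * p[a] + 0.5 * p[swap(a, i, j)] for a in p}

ent_mu = rel_ent(list(p.values()))
for j in range(n):
    avg = sum(rel_ent(list(collide(p, i, j).values()))
              for i in range(j + 1)) / (j + 1)
    assert avg <= ent_mu + 1e-12   # entropy never increases on average
    if j > 0:
        assert avg < ent_mu        # strict decrease for generic mu
```

The non-increase holds term by term by the convexity of $x \to x \log x$, since each $\ent(\mu c(i,j))$ is the entropy of an even mixture of two distributions with equal entropy.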
For $t \geq 1$ let ${ \pi_{(t)}} = \pi_1 \cdots \pi_t$. \\ \\ {\bf Convention. } We shall use the following convention throughout. For integers $x$ with $1 \leq x \leq n$, we denote by {\it card $x$} the card initially in position $x$. \\ \\ For cards $x$ and $y$, say that {\it $x$ collides with $y$ at time $m$} if for some $i$ and $j$ we have ${ \pi_{(m)}}^{-1}(i) = x$, ${ \pi_{(m)}}^{-1}(j) = y$, and $\pi_m$ has a collision of $i$ and $j$. We will need the following definition. \begin{definition} For a random variable $X$, a finite set $S$ and a real number $A \in [0,1]$, say that the distribution of $X$ is $A$-uniform over $S$ if \[ \P(X = i) \geq A|S|^{-1}, \] for all $i \in S$. \end{definition} \begin{remark} If $A < 1$ then the distribution of $X$ need not be concentrated on $S$. (But if $A = 1$, then $X$ is uniform over $S$.) \end{remark} Our main theorem is a generalization of Lemma \ref{mainlemma}. It generalizes from a collision to an arbitrary Monte shuffle, and it bounds the loss in relative entropy after many steps. \begin{theorem} \label{maintheorem} Let $\pi$ be a Monte shuffle on $n$ cards. Fix an integer $t > 0$ and suppose that $T$ is a random variable taking values in $\{0, 1, \dots, t\}$, which is independent of the shuffles $\{\pi_i: i \geq 1\}$. For a card $x$, let $b(x)$ denote the first card to collide with $x$ after time $T$ (or $b(x) = x$ if there is no such card). Define the match $m(x)$ of $x$ by \[ m(x) := \left\{\begin{array}{ll} b(x) & \mbox{if $x = b(b(x))$;}\\ x & \mbox{otherwise.}\\ \end{array} \right. \] Suppose that for every card $i$ there is a constant $A_i \in [0,1]$ such that the distribution of $m(i)$ is $A_i$-uniform over $\{1, \dots, i\}$. Let $\mu$ be an arbitrary random permutation that is independent of $\{\pi_i : i \geq 1\}$. Then \[ \ent(\mu { \pi_{(t)}}) - \ent(\mu) \leq {-C \over \log n} \sum_{k=1}^n A_k E_k, \] where $E_k = \E( \ent(\mu, k))$ and $C$ is a universal constant. \end{theorem} \begin{proof} Let $\matches = (m(i): 1 \leq i \leq n)$. For $i$ and $j$ with $j \leq i$, let $c(i, j)$ be a collision of $i$ and $j$. Assume that all of the $c(i,j)$ are independent of $\mu$, ${ \pi_{(t)}}$ and each other. Note that \[ \Bigl[ \prod_{i: m(i) \leq i} c(i, m(i)) \Bigr] { \pi_{(t)}} \] has the same distribution as ${ \pi_{(t)}}$, so it is enough to bound the relative entropy of the distribution of $\mu \Bigl[ \prod_{i: m(i) \leq i} c(i, m(i)) \Bigr] { \pi_{(t)}}$. By expressing this as a mixture of conditional distributions given $\matches$ and ${ \pi_{(t)}}$, and then using Jensen's inequality applied to $x \to x \log x$, the entropy can be bounded above by the expected value of \begin{eqnarray} \ent\Bigl( \mu \Bigl[ \prod_{i: m(i) \leq i} c(i, m(i)) \Bigr] { \pi_{(t)}} \biggiven \matches, { \pi_{(t)}}\Bigr) &=& \ent\Bigl( \mu \Bigl[ \prod_{i: m(i) \leq i} c(i, m(i)) \Bigr] \biggiven \matches, { \pi_{(t)}}\Bigr) \\ &=& \label{bigperm} \ent\Bigl( \mu \Bigl[ \prod_{i: m(i) \leq i} c(i, m(i)) \Bigr] \biggiven \matches\Bigr), \end{eqnarray} where the first equality holds by Proposition \ref{det} and the second equality holds because the permutation $\mu$, the product of collisions $c(i, m(i))$ and ${ \pi_{(t)}}$ are conditionally independent given $\matches$. For $0 \leq k \leq n$, let \[ \nu_k = \prod_{i: m(i) \leq i \leq k} c(i, m(i)), \] so that $\nu_0 = \id$. Note that the right-hand side of (\ref{bigperm}) is $\ent( \mu \nu_n {\,|\,}\matches)$.
Since $\mu$ is independent of $\matches$, we have $\ent(\mu {\,|\,}\matches) = \ent(\mu)$ and hence \[ \ent(\mu \nu_n {\,|\,}\matches) - \ent(\mu) = \sum_{k=1}^n \ent(\mu \nu_k {\,|\,}\matches) - \ent(\mu \nu_{k-1} {\,|\,}\matches). \] Thus, it is enough to show that for every $k$ we have \begin{equation} \label{ourclaim} \E\Bigl( \ent( \mu \nu_k {\,|\,}\matches) - \ent( \mu \nu_{k-1} {\,|\,}\matches) \Bigr) \leq {-CA_k E_k \over \log n}. \end{equation} Note that if $m(k) > k$ then $\nu_k = \nu_{k-1}$. If $m(k) \leq k$ then $\nu_k = \nu_{k-1} \, c(k, m(k))$. We can now proceed in a way that is analogous to the proof of Lemma \ref{mainlemma}. Note that \[ \mu \nu_k = {\textstyle{1\over2}} \mu \nu_{k-1} + {\textstyle{1\over2}} \mu \nu_{k-1} (k, m(k)). \] Fix $i \leq k$, let $\lambda = \mu \nu_{k-1}$ and let $\lt = \lambda (k,i)$. Note that $\lt$ and $\lambda$ are the same, except that $\lt^{-1}(k) = \lambda^{-1}(i)$ and $\lambda^{-1}(k) = \lt^{-1}(i)$. Note also that $\nu_{k-1}$ has $k+1, \dots, n$ as fixed points, so $(\lambda^{-1}(k+1),\dots, \lambda^{-1}(n)) = (\mu^{-1}(k+1),\dots, \mu^{-1}(n))$. Let \begin{eqnarray*} \f_{k+1} &=& \sigma( \mu^{-1}(k+1), \dots, \mu^{-1}(n)) \\ &=& \sigma( \lambda^{-1}(k+1), \dots, \lambda^{-1}(n)), \end{eqnarray*} and define $\fkp = \sigma(\f_{k+1}, \matches)$. Then we have $\ent( \lt {\,|\,}\fkp) = \ent(\lambda {\,|\,}\fkp)$ and hence $$ \ent( \lambda c(k,i) {\,|\,}\fkp ) - \ent(\lambda {\,|\,}\fkp) = -d( {\cal L}( \lt {\,|\,}\fkp), {\cal L}(\lambda {\,|\,}\fkp)).$$ But by the projection lemma, \begin{eqnarray*} d\Bigl( {\cal L}( \lt {\,|\,}\fkp), {\cal L}(\lambda {\,|\,}\fkp)\Bigr) &\geq& d\Bigl({\cal L}(\lt^{-1}(k) {\,|\,}\fkp), {\cal L}(\lambda^{-1}(k) {\,|\,}\fkp)\Bigr) \\ &=& d\Bigl( {\cal L}(\lambda^{-1}(i) {\,|\,}\fkp), {\cal L}(\lambda^{-1}(k) {\,|\,}\fkp)\Bigr). \end{eqnarray*} Thus, since $m(k)$ is $\fkp$-measurable, on the event that $m(k) \leq k$ we have \begin{eqnarray*} \ent( \mu \nu_k {\,|\,}\fkp) - \ent( \mu \nu_{k-1} {\,|\,}\fkp) &=& \ent( \lambda c(k,m(k)) {\,|\,}\fkp ) - \ent(\lambda {\,|\,}\fkp) \\ &\leq& -d\Bigl( {\cal L}(\lambda^{-1}(m(k)) {\,|\,}\fkp), {\cal L}(\lambda^{-1}(k) {\,|\,}\fkp)\Bigr) \\ &=& - \sum_{i \leq k} {\mathbf 1}(m(k) = i) d\Bigl({\cal L}(\mu^{-1}(i) {\,|\,}\f_{k+1}), {\cal L}(\mu^{-1}(k) {\,|\,}\f_{k+1})\Bigr), \end{eqnarray*} where in the third line we replaced $\lambda$ by $\mu$ because $\nu_{k-1}$ does not contain the collision $c(k, m(k))$ and hence has $k$ and $m(k)$ as fixed points, and we replaced the sigma field $\fkp$ by $\f_{k+1}$ because $\mu$ is independent of $\matches$. Taking expectations gives \begin{eqnarray} \nonumber \E\Bigl( \ent( \mu \nu_k {\,|\,}\fkp) - \ent( \mu \nu_{k-1} {\,|\,}\fkp) \Bigr) &\leq& - \E\Bigl( \sum_{i \leq k} \P(m(k) = i) d\Bigl({\cal L}(\mu^{-1}(i) {\,|\,}\f_{k+1}), {\cal L}(\mu^{-1}(k) {\,|\,}\f_{k+1})\Bigr)\Bigr) \\ \nonumber &\leq& - \E\Bigl( A_k k^{-1} \sum_{i \leq k} d\Bigl({\cal L}(\mu^{-1}(i) {\,|\,}\f_{k+1}), {\cal L}(\mu^{-1}(k) {\,|\,}\f_{k+1})\Bigr) \Bigr) \\ &\leq& \label{dstuff} - \E \Bigl(A_k d\Bigl( k^{-1} \sum_{i \leq k} {\cal L}(\mu^{-1}(i) {\,|\,}\f_{k+1}), {\cal L}(\mu^{-1}(k) {\,|\,}\f_{k+1}) \Bigr) \Bigr), \end{eqnarray} where the second inequality follows by the $A_k$-uniformity of $m(k)$ and the independence of $m(k)$ and $\mu$, and the third inequality is by Proposition \ref{conv}. The first argument of $d(\cdot, \, \cdot)$ on the right-hand side of equation (\ref{dstuff}) is the uniform distribution over $\{1, \dots, n\} - \{\mu^{-1}(k+1), \dots, \mu^{-1}(n)\}$.
Thus the right-hand side of (\ref{dstuff}) is \begin{eqnarray} & & - A_k \E \Bigl( d\Bigl({\cal U}, {\cal L}(\mu^{-1}(k) {\,|\,}\f_{k+1}) \Bigr) \Bigr)\\ \label{diff2} &\leq& - {C A_k \over \log n} \E\Bigl(\ent\bigl( {\cal L}(\mu^{-1}(k) {\,|\,}\f_{k+1})\bigr)\Bigr) = - {C A_k E_k \over \log n}, \end{eqnarray} where the inequality holds by Lemma \ref{dent}. Since $\mu \nu_{k}$ and $\mu \nu_{k-1}$ agree in positions $k+1, \dots, n$, the portion of their respective entropies that is attributable to those positions coincides, hence Proposition \ref{decomp} and equation (\ref{diff2}) yield the theorem. \end{proof} \begin{remark} Since for any distribution $p$ we have $d(p, p) = 0$, equation (\ref{dstuff}) is still true if $m(k)$ is only $A_k$-uniform over $\{1, \dots, k-1\}$. So the assumptions of the theorem can be relaxed so that no lower bound is necessary on the probability that $m(k) = k$. \end{remark} \section{Thorp shuffle} \label{secthorp} In this section we show that Theorem \ref{maintheorem} implies an improved bound for the Thorp shuffle. Recall that the Thorp shuffle has the following description. Assume that the number of cards, $n$, is even. Cut the deck into two equal piles. Drop the first card from the left pile or the right pile according to the outcome of a fair coin flip; then drop from the other pile. Continue this way, with independent coin flips deciding whether to drop {\sc left-right} or {\sc right-left} each time, until both piles are empty. We will actually work with the time reversal of the Thorp shuffle, which clearly has the same mixing time. Suppose that we label the positions in the deck $0, 1, \dots, n-1$. Note that the Thorp shuffle can be described in the following way. Each step, for $x$ with $0 \leq x \leq {n \over 2} - 1$, the cards at positions $x$ and $x + n/2$ collide and are moved to positions $2x \bmod n$ and $2x + 1 \bmod n$.
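As a sanity check, the position map just described is inverted by the collision rule in which the cards at positions $x$ and $x+1$ ($x$ even) move to $x/2$ and $x/2 + n/2$. The sketch below (our illustration, with coupled coin flips) applies the forward map and then the inverse map and recovers the original deck.

```python
import random

def thorp_forward(deck, coins):
    """Cards at positions x and x + n/2 collide and land in positions
    2x and 2x + 1, in an order decided by coins[x]."""
    n = len(deck)
    out = [None] * n
    for x in range(n // 2):
        a, b = deck[x], deck[x + n // 2]
        if not coins[x]:
            a, b = b, a
        out[2 * x], out[2 * x + 1] = a, b
    return out

def thorp_reverse(deck, coins):
    """Inverse map: cards at positions x and x+1 (x even) collide and are
    moved to positions x/2 and x/2 + n/2."""
    n = len(deck)
    out = [None] * n
    for x in range(0, n, 2):
        a, b = deck[x], deck[x + 1]
        if not coins[x // 2]:
            a, b = b, a
        out[x // 2], out[x // 2 + n // 2] = a, b
    return out

rng = random.Random(1)
n = 16
deck = list(range(n))
coins = [rng.random() < 0.5 for _ in range(n // 2)]
assert thorp_reverse(thorp_forward(deck, coins), coins) == deck
```

With i.i.d.\ fair coins, running the inverse map forward in time is exactly the time reversal of the chain.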
Thus, the time reversal can be described as follows. Each step, for even numbers $x \in \{0, \dots, n-2\}$, the cards in positions $x$ and $x+1$ collide and are moved to positions $x/2$ and ${x/2} + n/2$. We write ${ \pi_{(t)}}$ for a product of $t$ i.i.d.~copies of the reverse Thorp shuffle. Our main lemma is the following. \begin{lemma} \label{tlem} Let $t = \lceil \log_2 n \rceil$. There is a universal constant $C$ such that for any random permutation $\mu$ we have \[ \ent(\mu { \pi_{(t)}}) \leq (1-C/\log^2 n) \ent( \mu). \] \end{lemma} \begin{proof} Partition the locations $0, \dots, n-1$ into intervals $I_m$ as follows. Let $I_0 = \{0\}$, and for $m = 1,2,\dots, \lceil \log_2 n \rceil,$ define $I_m = \{2^{m-1}, \dots, 2^m-1\} \cap \{0, \dots, n-1\}$. For $i \in \{0, \dots, n-1\}$, define $E_i = \E(\ent(\mu, i))$. We can write the entropy of $\mu$ as \[ \ent(\mu) = \sum_{m} \sum_{i \in I_m} E_i. \] Let $\mss$ be the value of $m$ that maximizes $\sum_{i \in I_m} E_i$. Then \[ \sum_{j \in I_\mss} E_j \geq {c \over \log n} \ent(\mu), \] for a constant $c$. Since the reverse Thorp shuffle is in Monte form, we may use Theorem \ref{maintheorem}. We will also use the remark immediately following Theorem \ref{maintheorem}, which says that the distribution of the card matched with $i$ need only be $A_i$-uniform over $\{j: j < i\}$ in order for the conclusions of the theorem to hold. Fix $\ms$ with $1 \leq \ms \leq \lceil \log_2 n \rceil$. We will show that the assumptions of the theorem hold with $t = \lceil \log_2 n \rceil$, \[ A_i = \left\{\begin{array}{ll} 1/8 & \mbox{if $i \in I_\ms$;}\\ 0 & \mbox{otherwise,}\\ \end{array} \right. \] and the random variable $T$ defined as follows. Let $T$ be any random variable that satisfies \begin{equation} \label{goodtime} \P(T = r) \ge 2^{r - \ms - 1}, \end{equation} for $r = 0, \dots, \ms$. \\ Fix $i \in I_\ms$.
We shall show that for any $j < i$ we have $\P(m(i) = j) \ge 1/8i$. Define $f: \Z \to \Z$ by $f(t) = \lfloor t/2 \rfloor$. Note that if $X_s(j)$ denotes the position of card $j$ at time $s$, then $X_s(j) = f(X_{s-1}(j)) + Z_s(j)$, where $Z_s(j)$ is a random ``offset'' whose distribution is uniform over $\{0, n/2\}$. Note that in each step of the shuffle, the distance between a pair of cards is cut roughly in half if they have the same offsets. More precisely, if $x > y$ then \begin{equation} \label{closer} f(x) - f(y) \leq \left\{\begin{array}{ll} (x-y)/2 & \mbox{if $x$ is odd or $y$ is even;}\\ (x-y)/2 + {\textstyle{1\over2}} & \mbox{otherwise.}\\ \end{array} \right. \end{equation} It follows that $\lceil \log_2(f(x) - f(y)) \rceil \leq \lceil \log_2 (x - y) \rceil$, and $\lceil \log_2(f(x) - f(y)) \rceil \leq \lceil \log_2 (x - y) \rceil - 1$ unless $x = y+1$ and $x$ is even. Say that two positions $x$ and $y$ are neighbors if $|x - y| = 1$ and $\min(x,y)$ is even. (Note that in each step of the reverse Thorp shuffle, the neighbors collide.) Since $n$ is even we can write $n/2 = 2^k l$ for some $k \geq 0$ and odd integer $l$. Fix $i$ and $j$ with $j \leq i$. First, we claim that $\P(\mbox{$X_\ms(j)$ is even}) \geq {\textstyle{1\over2}}$. To see this, note that $f^\ms(j) = 0$, where we write $f^r$ for the $r$-fold iterate of $f$. Hence, if $\ms \leq k$, then $X_\ms(j) = \sum_{r=0}^{\ms-1} 2^{-r} Z_{\ms-r}(j)$. Each of the $Z_{\ms-r}(j)$ is either $0$ or $2^k l$, so each term in the sum is even. Assume now that $\ms > k$. Suppose that the value of $Z_{\ms-k}(j)$ (which is either $0$ or $n/2$) is determined by an unbiased coin flip. For $\ms - k \leq s \leq \ms$, let $X'_s(j)$ be what the position of card $j$ at time $s$ would have been if the outcome of the coin flip determining $Z_{\ms-k}(j)$ had been different.
Since $f(x) - f(y) = {\textstyle{1\over2}}(x-y)$ if $x - y$ is even, it follows that $| X'_s(j) - X_s(j) | = 2^{\ms-s}l$ for $\ms-k \leq s \leq \ms$. Thus $| X'_\ms(j) - X_\ms(j) | = l$, which is odd. So one of $X'_\ms(j)$ and $X_\ms(j)$ is odd and the other is even. Since they have the same distribution, they are each even with probability ${\textstyle{1\over2}}$. Let $y_0 = X_0(i)$, and for $s \geq 1$ let $y_s = f(y_{s-1}) + Z_s(j)$, i.e., where card $i$ would be located after $s$ steps if its offsets were the same as those for $j$. Let $\tau = \min\{s: \mbox{$|y_s - X_s(j)| = 1$ and $X_s(j)$ is even}\}$. Since $|i - j| \leq 2^\ms$, equation (\ref{closer}) and the sentence immediately following it imply that there must be a value of $s \leq \ms$ such that $|y_s - X_s(j)| = 1$. Combining this with the fact that $X_\ms(j)$ is even with probability at least ${\textstyle{1\over2}}$ gives $\P(\tau \leq \ms) \geq {\textstyle{1\over2}}$. Furthermore, given $\tau = r$, the conditional probability that $X_s(i) = y_s$ for $0 \leq s \leq r$ (and hence $i$ and $j$ collide at time $\tau$) is $2^{-r}$. Finally, since assumption (\ref{goodtime}) gives $\P(T = r) \geq 2^{r - \ms - 1}$, it follows that $\P(m(i) = j) \geq 2^{-\ms-2} \geq {1 \over 8i}$. We have shown that the assumptions of Theorem \ref{maintheorem} are met with $t = \lceil \log_2 n \rceil$ and $A_i = 1/8$ for $i \in I_\ms$. Applying this with $\ms = \mss$ shows that for any random permutation $\mu$, we have $\ent( \mu { \pi_{(t)}}) \leq (1 - C/\log^2 n) \ent(\mu)$, for a universal constant $C$. It follows that for any $B \in \{1 ,2, \dots\}$ we have \begin{eqnarray*} \ent( \pi_{(B t \log^3 n)}) &\leq& (1 - C/\log^2 n)^{B \log^3 n} \,\ent(\id) \\ &\leq& n^{1 - CB} \log n, \end{eqnarray*} since $\ent(\id) = \log n! \leq n \log n$ and $1-u \leq e^{-u}$ for all $u$.
If $B$ is large enough so that $n^{1-CB} \log n \leq {\textstyle{1\over8}}$ for all $n$, then $\ent( \pi_{(Bt \log^3 n)}) \leq {\textstyle{1\over8}}$ and hence $|| \pi_{(Bt \log^3 n)} - {\cal U} || \leq {\textstyle{1\over4}}$ by equation (\ref{totent}). It follows that the mixing time is at most $Bt \log^3 n = O(\log^4 n)$. \end{proof} \section{$L$-reversal chain} \label{seclrev} In this section we analyze Durrett's $L$-reversal chain. Recall that the $L$-reversal chain has two parameters, $n$ and $L$. The cards are located at the vertices of an $n$-cycle, which we label $\{0, \dots, n -1\}$. Each step, a vertex $v$ and a number $l \in \{0, \dots, L\}$ are chosen independently and uniformly at random. Then the interval of cards $v, v+1, \dots, v + l$ is reversed, where the numbers are taken mod $n$. Equivalently, each step a (nonempty) interval of length at most $L$ (i.e., of size between $1$ and $L+1$) is chosen uniformly at random and reversed. We shall assume that $L > L_0$ for a suitable value of $L_0$ and that $n \geq 4L$. The cases where $L$ is constant and where $n \leq cL$ for a constant $c$ were both treated in \cite{durrett}. We put the shuffle in Monte form as follows. Let $\mu_{i,j}$ denote the permutation that reverses the cards in positions $i, i+1, \dots, j$ and leaves the rest unchanged. Let $Z$ be uniform over $\{1, \dots, L\}$. Choose $v$ uniformly at random from $\{0, \dots, n-1\}$ and let \begin{equation} \label{putinmonte} \pi = \left\{\begin{array}{ll} \mu_{v, v+L} & \mbox{with probability ${1 \over 2(L+1)}$;}\\ \mu_{v, v+L-1} & \mbox{with probability ${1 \over 2(L+1)}$;}\\ \mu_{v, v+Z}\, c(v, v+Z) & \mbox{with probability ${L \over (L+1)}$.}\\ \end{array} \right. \end{equation} Since $\mu_{v, v+2}(v, v+2) = \id$ and $\mu_{v, v+1}(v, v+1) = \id$, it is easily verified that $\pi$ has the distribution of an $L$-reversal shuffle.
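That the mixture (\ref{putinmonte}) reproduces the $L$-reversal step can be verified exactly on a small cycle. In the sketch below (our illustration, with positions encoded as tuples and exact rational arithmetic) both distributions are computed and compared.

```python
from collections import defaultdict
from fractions import Fraction
from itertools import product

def reverse_interval(n, v, l):
    """The move reversing positions v, v+1, ..., v+l (mod n), encoded as a
    tuple perm with perm[old position] = new position."""
    perm = list(range(n))
    for k in range(l + 1):
        perm[(v + k) % n] = (v + l - k) % n
    return tuple(perm)

def transposition(n, a, b):
    perm = list(range(n))
    perm[a], perm[b] = perm[b], perm[a]
    return tuple(perm)

def compose(p, q):
    """Apply move p, then move q."""
    return tuple(q[p[i]] for i in range(len(p)))

n, L = 6, 3

# direct L-reversal step: v and l chosen independently and uniformly
direct = defaultdict(Fraction)
for v, l in product(range(n), range(L + 1)):
    direct[reverse_interval(n, v, l)] += Fraction(1, n * (L + 1))

# Monte form: full reversals of lengths L and L-1, plus mu_{v,v+Z} c(v,v+Z)
monte = defaultdict(Fraction)
for v in range(n):
    pv = Fraction(1, n)
    monte[reverse_interval(n, v, L)] += pv * Fraction(1, 2 * (L + 1))
    monte[reverse_interval(n, v, L - 1)] += pv * Fraction(1, 2 * (L + 1))
    for z in range(1, L + 1):
        rz = reverse_interval(n, v, z)
        pz = pv * Fraction(1, L + 1)   # L/(L+1) times 1/L for each z
        monte[rz] += pz / 2            # collision acts as the identity
        monte[compose(rz, transposition(n, v, (v + z) % n))] += pz / 2

assert sum(direct.values()) == 1 and direct == monte
```

The key cancellation is the one noted above: reversing $v, \dots, v+z$ and then transposing the endpoints reverses only the interior, so every shorter reversal receives exactly the missing half of its mass.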
We write $\pi_{(t)}$ for a product of $t$ i.i.d.~copies of the $L$-reversal shuffle. Our main technical lemma is the following. \begin{lemma} \label{tl} There is a universal constant $C$ such that for any random permutation $\mu$ there is a value of $t \in \{1, \dots, {Cn^3 \over L^3}\}$ such that \[ \ent(\mu \pi_{(t)}) \leq (1-f(t)) \ent( \mu), \] where $f(t) = {\gamma \over \log^2 n} \Bigl( {t \over n} \wedge 1\Bigr)$, for a universal constant $\gamma$. \end{lemma} Before proving Lemma \ref{tl}, we first show how it gives the claimed mixing time bound. \begin{lemma} \label{mixingtime} The mixing time for the $L$-reversal chain is $O\Bigl( (n \vee {n^3 \over L^3})\log^3 n \Bigr)$. \end{lemma} \begin{proof} Let $t$ and $f$ be as defined in Lemma \ref{tl}. Then \begin{equation} \label{rell} {t \over f(t)} = \gamma^{-1} (\log^2 n)\, t \Bigl({n \over t} \vee 1 \Bigr) = \gamma^{-1} (\log^2 n) ({n \vee t}) \leq T, \end{equation} where $T = \gamma^{-1} \log^2 n \,[ n \vee {Cn^3 \over L^3}]$. Note that $1/T$ is a bound on the long run rate of entropy loss per unit of time. Lemma \ref{tl} implies that there is a $t_1 \in \{1, \dots, {Cn^3 \over L^3}\}$ such that \[ \ent(\pi_1 \cdots \pi_{t_1}) \leq (1-f(t_1)) \ent( \id), \] and a $t_2 \in \{1, \dots, {Cn^3 \over L^3}\}$ such that \[ \ent(\pi_1 \cdots \pi_{t_1 + t_2}) \leq (1-f(t_2)) \ent( \pi_1 \cdots \pi_{t_1}), \] etc. Continue this way to define $t_3, t_4$, and so on. For $j \geq 1$ let $\tau_j = \sum_{i=1}^j t_i$. Then \begin{eqnarray} \ent( \pi_{(\tau_j)}) &\leq& \Bigl[\prod_{i=1}^j (1 - f(t_i)) \Bigr] \ent(\id)\\ &\leq& \exp\Bigl( - \sum_{i=1}^{j} f(t_i) \Bigr) \ent(\id). \end{eqnarray} But since $t_i \leq T f(t_i)$ by equation (\ref{rell}), we have \[ \tau_j = \sum_{i=1}^{j} t_i \leq T \sum_{i=1}^{j} f(t_i).
\] It follows that \begin{eqnarray} \ent( \pi_{(\tau_j)}) &\leq& \exp\Bigl({-\tau_j \over T} \Bigr) \ent(\id). \end{eqnarray} Since $\ent(\id) = \log n! \leq n\log n$, it follows that if $\tau_j \geq T \log(8n \log n)$ we have $\ent(\pi_{(\tau_j)}) \leq {\textstyle{1\over8}}$ and hence $|| \pi_{(\tau_j)} - {\cal U} || \leq {\textstyle{1\over4}}$ by equation (\ref{totent}). It follows that the mixing time is $O(T \log(8n \log n)) = O\Bigl( (n \vee {n^3 \over L^3})\log^3 n \Bigr)$. \end{proof} We shall now prove Lemma \ref{tl}. \begin{proofof}{Proof of Lemma \ref{tl}} Let $m = \lceil \log_2 (n/L) \rceil$. Then we can partition the set of locations $\{0, \dots, n-1\}$ into $m+1$ intervals as follows. Let $I_0 = \{0, \dots, L\}$, and for $1 \leq k \leq m$ define $I_k = \{2^{k-1} L + 1, \dots, 2^k L\} \cap \{0, \dots, n-1\}$. Define $E_k = \E(\ent(\mu, k))$. Note that we can write the entropy of $\mu$ as \begin{equation} \label{eee1} \ent(\mu) = \sum_{k=0}^m \sum_{j \in I_k} E_j\, . \end{equation} Thus, if $\kss$ maximizes $\sum_{j \in I_k} E_j$, then \[ \sum_{j \in I_\kss} E_j \geq {1 \over m+1} \ent(\mu). \] Suppose first that $\kss=0$. Then we can take $t=1$. Let $\pi$ be a random permutation corresponding to one move of the $L$-reversal chain. Let $E$ be the event that $\pi$ reverses $a, a+1, \dots, b$ for some $0 \leq a \leq b \leq L$. Then (using an abuse of notation similar to that in Section \ref{abuse}) we can write $\pi$ as \[ \pi = \alpha \pi_1 + (1 - \alpha) \pi_2, \] where $\alpha = \P(E)$, $\pi_1$ is $\pi$ conditioned on $E$, and $\pi_2$ is $\pi$ conditioned on $E^c$.
Then $\mu \pi = \alpha \mu \pi_1 + (1 - \alpha) \mu \pi_2$ and hence \begin{eqnarray} \ent(\mu \pi) &=& \ent ( \alpha \mu \pi_1 + (1 - \alpha) \mu \pi_2) \\ &\leq& \alpha \ent( \mu \pi_1) + (1- \alpha) \ent(\mu \pi_2) \\ &\leq& \alpha \ent( \mu \pi_1) + (1- \alpha) \ent(\mu), \end{eqnarray} where both inequalities follow from the convexity of $x \mapsto x \log x$. It follows that \begin{equation} \label{ebou} \ent( \mu \pi) - \ent(\mu) \leq \alpha \Bigl[ \ent(\mu \pi_1) - \ent( \mu) \Bigr]. \end{equation} Note that $\pi_1$ does not move any of the cards in locations $\{L+1, \dots, n-1\}$. Hence by Proposition \ref{decomp}, the entropy difference $\ent(\mu \pi_1) - \ent(\mu)$ is the expected loss in entropy attributable to positions $\{0, \dots, L\}$, i.e., $\E\Bigl( \ent( \mu \pi_1 {\,|\,}\f_{L+1}) - \ent(\mu {\,|\,}\f_{L+1}) \Bigr)$, where $\f_{L+1} = \sigma( \mu^{-1}(L+1), \dots, \mu^{-1}(n-1))$. The permutation $\pi_1$ is a step of a modified $L$-reversal chain on the $L+1$ cards in the line graph $\{0, \dots, L\}$, reversing an interval of the form $a, a+1, \dots, b$ for $0 \leq a \leq b \leq L$. In Theorem 6 of \cite{durrett}, it is shown (by comparison with shuffling by random transpositions \cite{ds}; see \cite{dsc} for background on comparison techniques) that the log Sobolev constant for the $L$-reversal chain on $n$ cards is at most $B {n^3 \over L^2} \log n$ for a constant $B$. This remains true if we consider the modified $L$-reversal process on the line graph.
Thus $\pi_1$ has a log Sobolev constant that is at most $2BL \log L$, and hence (by the well-known relationship between the log Sobolev constant and the decay of relative entropy; see, e.g., \cite{miclo}) multiplying $\mu$ by $\pi_1$ reduces the relative entropy by at least $1/(B'L \log L)$ times the entropy attributable to positions $\{0, \dots, L\}$, for a constant $B'$. Thus the right hand side of (\ref{ebou}) is at most \begin{eqnarray} -\alpha(B'L \log L)^{-1} \sum_{j \in I_0} E_j &\leq& - (8B'n \log L)^{-1} \sum_{j \in I_0} E_j \\ &\leq& - (8B'n \log^2 n)^{-1} \ent(\mu), \end{eqnarray} where the first inequality follows from the fact that $\alpha \geq {L \over 8n}$, and the second from the fact that $\sum_{j \in I_0} E_j \geq \ent(\mu)/(m+1)$ together with $(m+1)\log L \leq \log^2 n$. Next we shall consider the case where $\kss \geq 1$, so that the interval is of the form $\{2^{k-1}L + 1, \dots, 2^k L\} \cap \{0, 1, \dots, n-1\}$. We will use Theorem \ref{maintheorem} to get a decay of entropy in this case. We make the following claim. \begin{claim} \label{lclaim} Fix $k \geq 1$. There are universal constants $C$ and $\alpha > 0$ such that if $t = {4^k C n / L}$, $T = t/2$ and \[ A_y = \left\{\begin{array}{ll} \alpha ({t \over n} \wedge 1) & \mbox{if $y \in I_k$;}\\ 0 & \mbox{otherwise,}\\ \end{array} \right. \] then the assumptions of Theorem \ref{maintheorem} are satisfied by $t, T,$ and the $A_y$. \end{claim} In order to prove this claim, it is helpful to know that the $L$-reversal chain enjoys certain monotonicity properties. Roughly speaking, the closer two cards are together, the more likely they are to collide after a given number of steps. Before proving Claim \ref{lclaim}, we shall verify these monotonicity properties. \\ \\ {\bf Two types of monotonicity.} Fix $x$ and $y$ in $\{0, \dots, n-1\}$ and let $x_m$ and $y_m$ denote the positions of cards $x$ and $y$, respectively, at time $m$. Define $Z_m = |x_m - y_m|$, i.e., the graph distance between $x_m$ and $y_m$ in the $n$-cycle.
Note that $Z_m$ is a Markov chain. We shall need the following lemma. \begin{lemma} \label{monlem1} Let $\phat$ denote the transition matrix of $Z_m$. Then $\phat$ is monotone, i.e., if $b \geq a$ then $\phat(b, \cdot) \sgeq \phat(a, \cdot)$, where $\sgeq$ denotes stochastic domination. \end{lemma} \begin{proof} Fix positions $u$ and $a$ with $a \leq n/2$, and let $N(a, u)$ denote the number of legal intervals (i.e., intervals of length at most $L$) that move the card in position $a$ to position $u$ without moving the card in position $0$. Then \[ N(a,u) = \left\{\begin{array}{ll} \min(u, \lfloor {\textstyle{1\over2}}(L - a + u) + 1 \rfloor) & \mbox{if $u < a$;}\\ \min(a, \lfloor {\textstyle{1\over2}}(L - u + a) + 1 \rfloor) & \mbox{if $u > a$.}\\ \end{array} \right. \] (Recall that we assume that $n \geq 4L$.) Suppose that $|x_m - y_m| = a$. For $u \leq n/2$, let $M(a,u)$ denote the number of legal intervals whose reversal at time $m$ would make $|x_{m+1} - y_{m+1}| = u$. If $a \neq u$ then $M(a,u)$ counts intervals that move $x$ but not $y$ and intervals that move $y$ but not $x$. Thus we have $M(a,u) = 2(N(a,u) + N(a,n-u))$. It is easily verified that $M(a, u)$ is nonincreasing in $a$ for $u < a \leq n/2$ and nondecreasing in $a$ for $0 < a < u$. It follows that $Z_m$ is monotone. \end{proof} We now prove that $Z_m$ has another type of monotonicity property. Note that in each move of the $L$-reversal process, there are exactly four cards that are adjacent to a different pair of cards after the move than they were before. We say that those cards are {\it cut} and write, e.g., ``card $i$ is cut at time $m$''. We say that a location is cut if the card in that location is cut.
\\ \\ {\bf The cut-stopped process.} It will be convenient to consider a modified version $Z'_m$ of $Z_m$, where we introduce two absorbing states $\sgood$ and $\sbad$, and have the following occur when either $x$ or $y$ is cut. If $x$ and $y$ are within a distance $L$ of each other, then $Z'_m$ transitions to $\sgood$; otherwise, it transitions to $\sbad$. We shall call this modified process the {\it cut-stopped process}. We can impose an order on the state space of $\{Z'_m: m \geq 0\}$ based on the order of the positive integers, with the additional states $\sgood$ and $\sbad$ as the minimum and maximum states, respectively. Our next lemma says that the cut-stopped process $Z'_m$ is monotone with respect to this order. \begin{lemma} \label{undom} The cut-stopped process is monotone. \end{lemma} \begin{proof} The proof is a slight modification of the proof of Lemma \ref{monlem1}. Suppose that $Z'_m = z$. Note that the probability of absorbing in $\sgood$ in the next step is a nonincreasing function of $z$, and the probability of absorbing in $\sbad$ in the next step is a nondecreasing function of $z$. The rest of the argument is almost identical to the proof of Lemma \ref{monlem1}. Fix positions $u$ and $a$ with $a \leq n/2$, and let $N'(a, u)$ denote the number of intervals of length at most $L$ that move the card in position $a$ to position $u$, but neither move the card in position $0$, cut position $0$, nor cut position $a$. Then \[ N'(a,u) = \left\{\begin{array}{ll} \max\bigl(0, \min(u-2, \lfloor {\textstyle{1\over2}}(L - a + u) \rfloor)\bigr) & \mbox{if $u < a$;}\\ \max\bigl(0, \min(a-2, \lfloor {\textstyle{1\over2}}(L - u + a)\rfloor)\bigr) & \mbox{if $u > a$.}\\ \end{array} \right. \] Suppose that $|x_m - y_m| = a$. For $u \leq n/2$, let $M'(a,u)$ denote the number of legal intervals that don't cut $x$ or $y$ and whose reversal at time $m$ would make $|x_{m+1} - y_{m+1}| = u$.
If $a \neq u$ then $M'(a,u) = 2(N'(a,u) + N'(a,n-u))$. It is easily verified that $M'(a, u)$ is nonincreasing in $a$ for $u < a \leq n/2$ and nondecreasing in $a$ for $0 < a < u$. It follows that $Z'_m$ is monotone. \end{proof} \noindent We are now ready to prove Claim \ref{lclaim}. For the convenience of the reader, we state the claim again. Recall that $I_k = \{2^{k-1} L + 1, \dots, 2^k L\} \cap \{0, \dots, n-1\}$. \\ \\ {\bf Claim \ref{lclaim}} {\it There are universal constants $C$ and $\alpha>0$ such that if $t = {4^k C n / L}$, $T = t/2$ and \[ A_y = \left\{\begin{array}{ll} \alpha ({t \over n} \wedge 1) & \mbox{if $y \in I_k$;}\\ 0 & \mbox{otherwise,}\\ \end{array} \right. \] then the assumptions of Theorem \ref{maintheorem} are satisfied by $t, T,$ and the $A_y$.} \begin{proof} Let $y \in I_k$. We need to show that if $x \leq y$, then with probability at least $A_y$, cards $x$ and $y$ collide between time $T$ and time $t$, and this is the first collision that either is involved in after time $T$. Fix $y \in I_k$ and $x$ with $x < y$. Let $\tau$ be the first time after time $T$ that either $x$ or $y$ is cut. Note that if $x$ and $y$ collide at time $\tau$ and $\tau \leq t$ then $m(x) = y$. Thus, given that $|x_\tau - y_\tau| \leq L$ and $\tau \leq t$, the conditional probability that $m(x) = y$ is at least $1/8L$. This is because the number of intervals that cut either $x$ or $y$ is at most $4L$, so the conditional probability that $x$ and $y$ are at the endpoints of the interval that is reversed at time $\tau$ is at least $1/4L$. The conditional probability that $x$ and $y$ collide is at least half of this. Thus it is enough to show that for a universal constant $\alpha$ we have \begin{equation} \label{dagger} \P(|x_\tau - y_\tau| \leq L, \tau \leq t) \geq \alpha \A L/y. \end{equation} For $m \geq 0$ let $Z_m = |x_m - y_m|$.
Let $\beta > 0$ be a constant and suppose that $L > 2\beta$. We claim that with probability bounded away from $0$ we have $Z_m \leq \beta L$ for some $m < T$. To see this, let $M = \min\{m: Z_m \leq L\}$. First, we will show that with probability bounded away from zero we have $M \leq T'$, where $T' = T/2$. Suppose that $Z_0 > L$. Let $X$ be a random variable with the distribution of $Z_1 - Z_0$ and let $X_1, X_2, \dots$ be i.i.d.~copies of $X$. Note that the random variable $Z_{T'} - Z_0$ can be coupled with the $X_i$ in such a way that $Z_{T'} - Z_0 \leq \sum_{i=1}^{T'} X_i$ on the event that $M > T'$. It follows that $\P(M \leq T') \geq \P( \sum_{i \leq T'} X_i \leq -Z_0)$. But since when $X$ is nonzero (which happens with probability on the order of $L/n$) it has a typical value on the order of $L$, it has second and third moments satisfying $\sigma^2 \geq {C_2 L^3/n}$ and $\rho \leq {C_3 L^4/n}$, respectively. Berry--Esseen bounds (see, e.g., \cite{durrettbook}) imply that for a universal constant $C_B$ we have \begin{equation} \label{be} | F_{T'}(x) - \Phi(x)| \leq {C_B \rho \over \sigma^3 \sqrt{T'}} \leq {C' L \over C y}, \end{equation} where $F_{T'}$ is the cumulative distribution function (cdf) of ${1 \over \sigma \sqrt{T'}} \sum_{i \leq T'} X_i$, $\Phi$ is the standard normal cdf, $C'$ is a constant that incorporates $C_2, C_3$ and $C_B$, and $C$ is the constant appearing in the definition of $t$. For the final inequality we use the fact that $t = 4T'$ is within constant factors of $C y^2 n/L^3$, since $y \in I_k$. Since $y \geq L$, the quantity (\ref{be}) can be made arbitrarily close to zero for sufficiently large $C$. It follows that $\sum_{i \leq T'} X_i$ is roughly normal with standard deviation a large constant times $y$, hence is less than $-Z_0$ with probability bounded away from zero. (Recall that $Z_0 = y - x \leq y$.)
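The variance scaling used above can be illustrated numerically. The step law below is a stylized stand-in for $X$ (a jump with probability exactly $L/n$, of uniform size in $\{1,\dots,L\}$ with a random sign), not the exact step distribution of the distance chain, but it reproduces the heuristic $\sigma^2 \approx L^3/3n$:

```python
import random

rng = random.Random(1)
n, L, steps, trials = 1000, 50, 200, 2000

def lazy_jump():
    """Stylized step: with probability L/n make a uniform jump in
    {-L,...,-1} U {1,...,L}, else stay put.  An illustrative model,
    not the true one-step law of Z_m."""
    if rng.random() < L / n:
        j = rng.randrange(1, L + 1)
        return j if rng.random() < 0.5 else -j
    return 0

sums = [sum(lazy_jump() for _ in range(steps)) for _ in range(trials)]
emp_var = sum(s * s for s in sums) / trials   # the mean is 0 by symmetry

# Predicted: Var(X) = (L/n) * E[U^2] with U uniform on {1,...,L},
# E[U^2] = (L+1)(2L+1)/6, so the sum has variance ~ steps * L^3 / (3n).
pred_var = steps * (L / n) * (L + 1) * (2 * L + 1) / 6
assert 0.7 < emp_var / pred_var < 1.4
```

With these parameters the sum of $T'$ such steps has standard deviation on the order of $\sqrt{T' L^3/n}$, which is the quantity the proof arranges to be a large constant times $y$.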
It follows that with probability bounded away from zero we have $Z_m \leq L$ for some $m \leq T/2$. Now note that if $x$ and $y$ are within distance $L$ then, given that one of them moves in the next step, the conditional probability that they are brought to within a distance $\beta L$ is bounded away from zero. Since $t$ is much larger than $n/L$, there is probability bounded away from zero that either $x$ or $y$ is moved between time $m$ and $m + T/2$. This verifies the claim. The above claim and the strong Markov property imply that in order to show (\ref{dagger}), it is enough to show that if $|i - j| \leq \beta L$, $m' \leq T/2$ and $\tau$ is the first time that $i$ or $j$ is cut after time $m'$, then for a universal constant $\alpha > 0$ we have $\P(|i_\tau - j_\tau| \leq L, \tau \leq m' + t/2) \geq \alpha \A L/y$. For every pair of cards $i$ and $j$, let $T(i,j)$ be the first time that either $i$ or $j$ is cut after time $m'$. Define $t' = \min(t/2, n)$. Let $A(i,j)$ be the event that $T(i,j) \leq m' + t'$ and at time $T(i,j)$ the distance between $i$ and $j$ is at most $L$. Let $f(i,j) = \P(A(i,j))$. Since $t' \leq t/2$, it is enough to prove that if $|i - j| \leq \beta L$ then \begin{equation} \label{rtp} f(i,j) \geq \alpha \A L/y. \end{equation} Since the probability that either $i$ or $j$ is involved in a cut on any given step is at most $8/n$, we have \begin{equation} \label{gamgam} f(i,j) \leq \min(1, 8t'/n). \end{equation} Also, note that \begin{eqnarray*} \sum_{i,j} f(i,j) &=& \sum_{l=m'+1}^{m' + t'} \,\, \sum_{i,j} \P(\mbox{ $T(i,j) \geq l$, $|i_{l} - j_{l}| \leq L$, either $i$ or $j$ is cut at time $l$}) \\ &=& \sum_{k=1}^{t'} \sum_{\twosubs{u < v}{|u - v| \leq L}} g(u,v,k), \end{eqnarray*} where $g(u,v,k)$ is the probability that cards in locations $u$ and $v$ are cut at time $m' + k$, but neither had been cut since time $m'$.
Since the $L$-reversal process is symmetric, it is its own time-reversal. Thus, $g(u,v,k)$ is the probability that either location $u$ or $v$ is cut in the first move, but neither the card in location $u$ at time $1$ nor the card in location $v$ at time $1$ is cut in the next $k - 1$ moves. This probability is at least $\sfrac{1}{n} \Bigl( {n - 8 \over n} \Bigr)^{t'-1}$. Since there are $nL$ such pairs $(u, v)$, summing over $u,v$ and $k$ gives \begin{eqnarray*} \sum_{i,j} f(i,j) &\geq& t' nL \frac{1}{n} \Bigl( {n - 8 \over n} \Bigr)^{t'-1} \\ &\geq& c' L t', \end{eqnarray*} for a universal constant $c'$, where the second inequality holds because $t' \leq n$. It follows that for any $i$ we have \begin{equation} \label{clt} \sum_{j} f(i,j) = {1 \over n} \sum_{i,j} f(i,j) \geq c' L t'/n \,. \end{equation} Let $g(i,j) = \P(A(i,j) \cap B(i,j))$ where $B(i,j)$ is the event that at no time before time $T(i,j)$ was the distance between $i$ and $j$ greater than $Dy$, where the constant $D$ is to be specified below. Note that \begin{eqnarray} \label{frown} \sum_j g(i,j) &\geq& \sum_j \Bigl[ f(i,j) - \P(A(i,j) \cap B^c(i,j)) \Bigr], \end{eqnarray} where $B^c(i,j)$ denotes the complement of $B(i,j)$. We claim that $\sum_{j} g(i,j) \geq c L t'/n$ for a universal constant $c$. To see this, fix a card $i$ and $k \leq t'$ and say that a card $u$ is {\it bad} if $|i_0 - u_0| \leq L$ and $\max_{0 \leq r \leq m' + k} |i_r - u_r| > Dy$. Since the $L$-reversal process is symmetric, and the probability that $i$ or $u$ is cut in any given step is at most $8/n$, we have \begin{equation} \label{sstar} \sum_j \P\Bigl( A(i,j) \cap B^c(i,j) \cap [T(i,j) = m' + k] \Bigr) \leq {8 \over n} \E(B), \end{equation} where $B$ is the number of bad cards. Let $u$ be a card initially within distance $L$ of card $i$.
If $u_m$ is the position of card $u$ at time $m$, then we can write $u_m = u + W_1 + \cdots + W_m \; (\bmod \,\, n)$, where $W_j \in \{-L, \dots, L\}$ is the displacement of card $u$ at time $j$. Define $u'_m = u + W_1 + \cdots + W_m$ (i.e., like $u_m$, but without the $\bmod \,\, n$), with a similar definition for $i'_m$. Then $u'_m$ is a symmetric random walk on the integers. Each step there is a jump with probability on the order of $L/n$ and the sizes of jumps are at most $L$. It follows that for sufficiently large $A$, the probability that $\max_{1 \leq m \leq k} |u'_m - u| > A ({kL \over n})^{1/2} L$ can be made arbitrarily close to zero. Since $k$ is at most a constant times ${y^2 n \over L^3}$, we have $A ({kL \over n})^{1/2} L \leq A' y$ for a constant $A'$. A similar argument applies to $i'_m$. Finally, since $|i_m - u_m| \leq |i'_m - u'_m|$ (where the first $| \cdot |$ refers to distance in the $n$-cycle), it follows that for any $\epsilon > 0$, if $D$ is large enough then $\P( \max_{1 \leq m \leq k} |i_m - u_m| > Dy) < \epsilon.$ Thus, since there are at most $2L$ cards initially within a distance $L$ of card $i$, we have $\E(B) \leq 2L\epsilon.$ Hence, summing equation (\ref{sstar}) over $k \leq t'$ gives \begin{equation} \label{smiley} \sum_j \P\Bigl( A(i,j) \cap B^c(i,j) \Bigr) \leq 16 L \epsilon \, t'/n \; . \end{equation} Combining this with equations (\ref{frown}) and (\ref{clt}) gives \begin{eqnarray} \label{ssstar} \sum_j g(i,j) &\geq& cL t'/n, \end{eqnarray} for a constant $c$, if $\epsilon$ is small enough. We now define $\beta$ to be a constant smaller than $c/32$.
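Both appeals to symmetry above rest on the fact that every legal move of the chain is an involution, so a single step has the same law as its inverse and the process equals its own time-reversal. A brute-force check of the involution property over all legal moves for a small instance (the helper below is ours, written to return a new list):

```python
def reversal(perm, v, l):
    """Return a copy of perm with positions v, ..., v+l (mod n) reversed."""
    n = len(perm)
    out = list(perm)
    pos = [(v + s) % n for s in range(l + 1)]
    for p, q in zip(pos, reversed(pos)):
        out[p] = perm[q]
    return out

n, L = 20, 4
identity = list(range(n))
for v in range(n):
    for l in range(L + 1):
        once = reversal(identity, v, l)
        twice = reversal(once, v, l)
        assert twice == identity  # every legal move is an involution
```

Since each move $\mu$ satisfies $\mu^2 = \id$ and all moves are equally likely, $\P(\sigma \to \tau) = \P(\tau \to \sigma)$ for all permutations $\sigma, \tau$, which is the symmetry the proof uses.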
Since for any $j$ we have $g(i,j) \leq f(i,j) \leq 8t'/n$ (by equation (\ref{gamgam})), we have $\sum_{j: |i - j| \leq \beta L} g(i,j) \leq 16 \beta L t'/n \leq cLt'/2n$, and hence \[ \sum_{j: |i-j| > \beta L} g(i,j) \geq cLt'/2n, \] by equation (\ref{ssstar}). Since $g(i,j) = 0$ for $|j-i| > Dy$, the average value of $g(i,j)$, where $j$ ranges over values such that $\beta L < |i - j| \leq Dy$, must be at least ${cLt' / 4Dyn} \geq {\alpha L \A /y}$, for a constant $\alpha$. Since both $Z_m$ and the cut-stopped process $Z'_m$ are monotone by Lemmas \ref{monlem1} and \ref{undom}, the function $g(i,j)$ is nonincreasing in $|i-j|$. It follows that $g(i,j) \geq {\alpha L \A /y}$ if $|i - j| \leq \beta L$. Since $g \leq f$, this verifies equation (\ref{rtp}), which completes the proof of Claim \ref{lclaim}. \end{proof} \noindent Using Claim \ref{lclaim} with $k = \kss$ and applying Theorem \ref{maintheorem} gives \[ \ent( \mu \pi_{(t)} ) - \ent(\mu) \leq {-C \over \log^2 n} \Bigl({t \over n} \wedge 1 \Bigr) \ent(\mu), \] for a universal constant $C$, and the proof of Lemma \ref{tl} is complete. \end{proofof} \noindent {\bf Acknowledgments.} I am grateful to A.~Soshnikov for many valuable conversations during the early stages of this work. \begin{thebibliography}{99} \bibitem{bd} Bayer, D.~and Diaconis, P. Tracing the dovetail shuffle to its lair. {\it Annals of Applied Probability} {\bf 2} (1992), pp.~294--313. \bibitem{bc} Borel, E.~and Ch\'eron, A. {\it Th\'eorie math\'ematique du bridge \`a la port\'ee de tous.} Gauthier-Villars (1940). \bibitem{ccm} Cancrini, N., Caputo, P. and Martinelli, F. Relaxation time of $L$-reversal chains and other chromosome shuffles. {\it Annals of Applied Probability} {\bf 16} (2006), pp.~1506--1527. \bibitem{cover} Cover, T. and Thomas, J.
(1991) {\it Elements of Information Theory.} Wiley. \bibitem{per} Diaconis, P. Personal communication. \bibitem{dsc} Diaconis, P. and Saloff-Coste, L. (1993). Comparison theorems for reversible Markov chains. {\it Ann.~Appl.~Prob.} {\bf 3}, pp.~696--730. \bibitem{ds} Diaconis, P. and Shahshahani, M. (1981). Generating a random permutation with random transpositions. {\it Z. Wahrsch. Verw. Gebiete} {\bf 57}, pp.~159--179. \bibitem{durrettbook} Durrett, R. (2003) {\it Probability: Theory and Examples.} Pacific Grove, CA: Wadsworth and Brooks/Cole. \bibitem{durrett} Durrett, R. (2003) Shuffling chromosomes. {\it J. Theoret. Probab.} {\bf 16}, pp.~725--750. \bibitem{durrettbio} Durrett, R., York, T. and Nielsen, R. (2007) Dependence of paracentric inversion rate on tract length. {\it BMC Bioinformatics} {\bf 8}. \bibitem{hoeffding} Hoeffding, W. Probability inequalities for sums of bounded random variables. {\it Journal of the American Statistical Association} {\bf 58} (1963), pp.~13--30. \bibitem{markov} Markov, A. Extension of the law of large numbers to dependent events (Russian). {\it Bull. Soc. Math. Kazan} {\bf 2}, pp.~155--156. \bibitem{miclo} Miclo, L. (1996) Sur les probl\`emes de sortie discrets inhomog\`enes. {\it Ann. Appl. Probab.} {\bf 6}, pp.~1112--1156. \bibitem{mt-thorp} Montenegro, R.~and Tetali, P. {\it Mathematical Aspects of Mixing Times in Markov Chains.} Foundations and Trends in Theoretical Computer Science, Now Publishers. \bibitem{thorp} Morris, B. The mixing time of the Thorp shuffle. {\it SIAM Journal on Computing}, STOC 2005 special issue. \bibitem{poincare} Poincar\'e, H. (1912) {\it Calcul des probabilit\'es}, 2nd ed. Gauthier-Villars, Paris. \bibitem{wilson} Wilson, D. (2004) Mixing times of lozenge tiling and card shuffling Markov chains. {\it Ann.~Appl.~Prob.} {\bf 14}, pp.~274--325. \end{thebibliography} \end{document}
\begin{document} \title[Humphreys' Conjecture]{A Note on Humphreys' Conjecture on Blocks} \author{Matthew Westaway} \email{[email protected]} \address{School of Mathematics, University of Birmingham, Birmingham, B15 2TT, UK} \date{\today} \subjclass[2020]{Primary: 17B35, 17B50, Secondary: 16D70, 17B10} \keywords{Reduced enveloping algebra, Block decomposition, Lie algebra, Humphreys' conjecture} \begin{abstract} Humphreys' conjecture on blocks parametrises the blocks of reduced enveloping algebras $U_\chi({\mathfrak g})$, where ${\mathfrak g}$ is the Lie algebra of a reductive algebraic group over an algebraically closed field of characteristic $p>0$ and $\chi\in{\mathfrak g}^{*}$. It is well known to hold under Jantzen's standard assumptions. We note here that it holds under slightly weaker assumptions, by utilising the full generality of certain results in the literature. We also provide a new approach to prove the result for ${\mathfrak g}$ of type $G_2$ in characteristic 3, a case in which the previously mentioned weaker assumptions do not hold. This approach requires some dimensional calculations for certain centralisers, which we conduct in the Appendix for all the exceptional Lie algebras in bad characteristic. \end{abstract} \maketitle \section{Introduction}\label{sec1} One of the most powerful tools in representation theory is the notion of the {\bf block decomposition}. Given a finite-dimensional ${\mathbb K}$-algebra $A$, the block decomposition of $A$ gives a partition of the set of irreducible $A$-modules. We may then study the representation theory of $A$ block-by-block. In particular, if we understand one block well (say, a block containing a trivial module) then we can often use {\bf translation functors} to gain insight into the structure of other blocks.
If $G$ is an algebraic group over an algebraically closed field ${\mathbb K}$ of characteristic $p>0$ and ${\mathfrak g}$ is its Lie algebra, then we may form, for each $\chi\in{\mathfrak g}^{*}$, the {\bf reduced enveloping algebra} $U_\chi({\mathfrak g})$. This is a finite-dimensional ${\mathbb K}$-algebra which is important to the representation theory of ${\mathfrak g}$, and so we would like to understand its blocks. The leading result in this direction is {\bf Humphreys' conjecture on blocks}. \begin{conj}[Humphreys' conjecture on blocks] Suppose $G$ is reductive and let $\chi\in{\mathfrak g}^{*}$ be nilpotent. Then there exists a natural bijection between the blocks of $U_\chi({\mathfrak g})$ and the set $\Lambda_\chi/W_{\bullet}$. In particular, $$\left\vert\{\mbox{Blocks of}\,\,\, U_\chi({\mathfrak g})\}\right\vert=\left\vert \Lambda_\chi/W_{\bullet}\right\vert.$$ \end{conj} Here, $\Lambda_\chi$ is a certain finite subset of ${\mathfrak h}^{*}$, where ${\mathfrak h}$ is the Lie algebra of a maximal torus $T$ of $G$, and $W$ is the Weyl group of $(G,T)$, which acts on ${\mathfrak h}^{*}$ via the dot-action and thus induces an equivalence relation on $\Lambda_\chi$. The requirement that $\chi$ is nilpotent means that $\chi$ vanishes on the Lie algebra ${\mathfrak b}$ of a Borel subgroup $B$ of $G$ (which we may assume contains $T$). This conjecture was proved by Humphreys \cite{Hu3.1} in 1971 for $\chi=0$, subject to the requirements that $G$ be semisimple and that $p>h$, where $h$ is the Coxeter number of $(G,T)$. Humphreys then extended the result further to $\chi$ in so-called standard Levi form in 1998 in \cite{Hu.1} (the paper \cite{Hu.1} doesn't explicitly state what assumptions are being made, but the argument holds for any connected reductive algebraic group whose derived group is simply-connected).
Under three assumptions (which we will call Jantzen's standard assumptions \cite{J1.1,J2.1} and denote (A), (B) and (C)), the conjecture was then proved by Brown and Gordon in \cite{BG.1} for all $\chi\in{\mathfrak g}^{*}$ when $p>2$, and then improved by Gordon in \cite{Go.1} to include the $p=2$ case (so long as (A), (B) and (C) still hold). In fact, under assumptions (A), (B) and (C), Humphreys' conjecture on blocks allows us to count the number of blocks of $U_\chi({\mathfrak g})$ for {\em all} $\chi\in{\mathfrak g}^{*}$, as these assumptions are sufficient to reduce the computation to the case of nilpotent $\chi$ (see \cite{FP1.1}, also Remark~\ref{NilpRed}, {\em infra}). Furthermore, Braun \cite{Br.1} recently proved the conjecture for ${\mathfrak g}={\mathfrak s}{\mathfrak l}_n$ with $p\vert n$, where assumptions (A) and (B) hold but (C) doesn't. In this case, however, the restriction to nilpotent $\chi$ is necessary, as the analogous result for semisimple $\chi$ was shown in \cite{Br.1} to fail when $p=n=3$. Let us now explain Jantzen's standard assumptions. These are: (A) that the derived group of $G$ is simply-connected; (B) that the prime $p$ is good for $G$; and (C) that there exists a non-degenerate $G$-invariant bilinear form on ${\mathfrak g}$. The primes that are not good for a given $G$ can be listed explicitly (and are all less than or equal to $5$), and the existence of a non-degenerate $G$-invariant bilinear form on ${\mathfrak g}$ holds whenever ${\mathfrak g}$ is simple. The question motivating this note is: what happens to Humphreys' conjecture on blocks for nilpotent $p$-characters if we remove assumptions (B) and/or (C)? We see in Section~\ref{sec3} that there is a natural surjection $f:\{\mbox{Blocks of}\,\,\, U_\chi({\mathfrak g})\}\to\Lambda_\chi/W_{\bullet}$ under only assumption (A). It turns out that this can be deduced from the literature \cite{J2.1,KW.1}.
Furthermore, we show in Theorem~\ref{BlockNumb} that the known proof of the injectivity of $f$ works without assumption (B). This therefore confirms Humphreys' conjecture for the almost-simple groups over algebraically closed fields of the following bad characteristics: \begin{cor}\label{ASGps} Let $G$ be an almost-simple group over an algebraically closed field ${\mathbb K}$ of bad characteristic $p>0$. Then Humphreys' conjecture holds for $G$ when $p=2$ and $G$ is of type $E_6, E_8$ or $G_2$, when $p=3$ and $G$ is of type $E_7, E_8$ or $F_4$, and when $p=5$ and $G$ is of type $E_8$. \end{cor} We also provide a different approach to the proof of the injectivity in Proposition~\ref{prop1}, which demonstrates that injectivity in fact holds whenever there exists a collection of irreducible modules of a certain nice form (namely, which are so-called {\bf baby Verma modules}). Premet's theorem \cite{Pr1.1} shows the existence of such irreducible modules under assumptions (A), (B) and (C), and we observe in Corollary~\ref{BlockG23} that the existence also holds for the almost-simple algebraic group of type $G_2$ in characteristic 3 (where assumption (C) fails). This proves Humphreys' conjecture on blocks for $G_2$ in characteristic 3, which could not be deduced using the previous approach. In the Appendix, we conduct some calculations with a view to finding other examples where these irreducible modules exist. Unfortunately, the calculations do not lead to further examples, but we hope the calculations are interesting in their own right, as they demonstrate divisibility bounds for irreducible modules for certain nice $\chi$ and small primes. {\bf Statements and Declarations:} The author was supported during this research by the Engineering and Physical Sciences Research Council, grant EP/R018952/1, and later by a research fellowship from the Royal Commission for the Exhibition of 1851.
{\bf Acknowledgments:} The author would like to thank Ami Braun and Dmitriy Rumynin for suggesting this question, and Simon Goodwin for engaging in many useful discussions regarding this subject and for comments on earlier versions of this paper. \section{Preliminaries on Lie algebras}\label{sec2} Throughout this note we work with a connected algebraic group $G$ over an algebraically closed field ${\mathbb K}$ of characteristic $p>0$. More precise assumptions on $G$ are given section-by-section, but it is always at least a reductive algebraic group with simply-connected derived subgroup. Inside $G$, we fix a maximal torus $T$ and a Borel subgroup $B$ of $G$ containing $T$. Write $X(T)$ for the character group of $T$, $Y(T)$ for the cocharacter group of $T$, and $\langle\cdot,\cdot\rangle:X(T)\times Y(T)\to{\mathbb Z}$ for the natural pairing. We write ${\mathfrak g}$ for the Lie algebra of $G$, ${\mathfrak b}$ for the Lie algebra of $B$ and ${\mathfrak h}$ for the Lie algebra of $T$. As Lie algebras of algebraic groups these are all restricted, so come equipped with $p$-th power maps ${\mathfrak g}\to{\mathfrak g}$ (resp. ${\mathfrak b}\to{\mathfrak b}$, ${\mathfrak h}\to{\mathfrak h}$) written $x\mapsto x^{[p]}$. Set $\Phi$ to be the root system of $G$ with respect to $T$, $\Phi^{+}$ to be the positive roots corresponding to $B$ and $\Pi$ to be the simple roots. For $\alpha\in \Phi$ we set $\alpha^\vee\in Y(T)$ to be the corresponding coroot, and we write ${\mathfrak g}_\alpha$ for the root space of $\alpha$ in ${\mathfrak g}$. We then define ${\mathfrak n}^{+}=\bigoplus_{\alpha\in\Phi^{+}}{\mathfrak g}_\alpha$ and ${\mathfrak n}^{-}=\bigoplus_{\alpha\in\Phi^{+}}{\mathfrak g}_{-\alpha}$, so ${\mathfrak g}={\mathfrak n}^{-}\oplus{\mathfrak h}\oplus{\mathfrak n}^{+}$.
For $\alpha\in \Phi$ we define $h_\alpha\coloneqq d\alpha^\vee(1)\in{\mathfrak h}$, and we choose $e_\alpha\in {\mathfrak g}_\alpha$ and $e_{-\alpha}\in{\mathfrak g}_{-\alpha}$ so that $[e_\alpha,e_{-\alpha}]=h_\alpha$ (see, for example, \cite{J4.1} for more details on this procedure). We also choose a basis $h_1,\ldots,h_d$ of ${\mathfrak h}$ with the property that $h_i^{[p]}=h_i$ for all $1\leq i\leq d$. Set $W$ to be the Weyl group of $\Phi$, which acts naturally on $X(T)$ and ${\mathfrak h}^{*}$. We fix $\rho\in X(T)\otimes_{{\mathbb Z}}{\mathbb Q}$ to be the half-sum of positive roots in $\Phi$. This then allows us to define the dot-action of $W$ on $X(T)$ as $w\cdot\lambda=w(\lambda+\rho)-\rho$ (noting that this action makes sense even if $\rho\notin X(T)$). When $\rho\in X(T)$, $d\rho(h_\alpha)=1$ for all $\alpha\in \Pi$. If $\rho\notin X(T)$, we may still define $d\rho\in{\mathfrak h}^{*}$ such that $d\rho(h_\alpha)=1$ for all $\alpha\in\Pi$, since the derived subgroup being simply-connected implies that these $h_\alpha$ are linearly independent in ${\mathfrak h}$. We may therefore define the dot action on ${\mathfrak h}^{*}$ similarly to how it was defined on $X(T)$. When we wish to specify that $W$ is acting through the dot-action, we may write $W_\bullet$ instead of $W$. We write $U({\mathfrak g})$ for the universal enveloping algebra of ${\mathfrak g}$. We write $Z_p$ for the central subalgebra of $U({\mathfrak g})$ generated by all $x^p-x^{[p]}$ with $x\in{\mathfrak g}$, which we call the {\bf $p$-centre} of $U({\mathfrak g})$. Given $\chi\in{\mathfrak g}^{*}$, we write $U_\chi({\mathfrak g})$ for the reduced enveloping algebra $U_\chi({\mathfrak g})\coloneqq U({\mathfrak g})/\langle x^p-x^{[p]}-\chi(x)^p \,\vert\, x\in{\mathfrak g}\rangle$.
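Note that, by the Poincar\'e--Birkhoff--Witt theorem, the images of the monomials in a fixed basis of ${\mathfrak g}$ with all exponents at most $p-1$ form a basis of $U_\chi({\mathfrak g})$, so that $$\dim_{{\mathbb K}}U_\chi({\mathfrak g})=p^{\dim{\mathfrak g}}$$ for every $\chi\in{\mathfrak g}^{*}$.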
Each irreducible ${\mathfrak g}$-module is finite-dimensional \cite[Theorem A.4]{J1.1} and so, by Schur's lemma, each irreducible ${\mathfrak g}$-module is a $U_\chi({\mathfrak g})$-module for some $\chi\in{\mathfrak g}^{*}$. For $\chi\in{\mathfrak g}^{*}$, we recall that the centraliser of $\chi$ in ${\mathfrak g}$ is defined as $c_{{\mathfrak g}}(\chi)\coloneqq \{x\in{\mathfrak g}\,\vert\,\chi([x,{\mathfrak g}])=0\}.$ The adjoint action of $G$ on ${\mathfrak g}$ induces the coadjoint action of $G$ on ${\mathfrak g}^{*}$, and if $\chi,\mu\in{\mathfrak g}^{*}$ lie in the same coadjoint $G$-orbit then $U_\chi({\mathfrak g})\cong U_\mu({\mathfrak g})$. The derived group of $G$ being simply-connected implies (see \cite{J2.1,KW.1}) that any $\mu\in{\mathfrak g}^{*}$ lies in the same $G$-orbit as some $\chi\in{\mathfrak g}^{*}$ with $\chi({\mathfrak n}^{+})=0$. Putting these two observations together, we always assume $\chi({\mathfrak n}^{+})=0$ throughout this paper. We can define, for each $\lambda\in{\mathfrak h}^{*}$, a one-dimensional ${\mathfrak b}$-module ${\mathbb K}_\lambda$ on which ${\mathfrak n}^{+}$ acts as zero and ${\mathfrak h}$ acts via $\lambda$. The assumption that $\chi({\mathfrak n}^{+})=0$ means that ${\mathbb K}_\lambda$ is a $U_\chi({\mathfrak b})$-module if and only if $\lambda\in\Lambda_\chi$, where \begin{equation*} \begin{split} \Lambda_\chi & \coloneqq \{\lambda\in{\mathfrak h}^{*}\mid\lambda(h)^p-\lambda(h^{[p]})=\chi(h)^p\,\,\mbox{for all}\,\, h\in{\mathfrak h}\} \\ & = \{\lambda\in{\mathfrak h}^{*}\mid\lambda(h_i)^p-\lambda(h_i)=\chi(h_i)^p\,\,\mbox{for all}\,\, 1\leq i\leq d\} \end{split} \end{equation*} Moreover, every irreducible $U_\chi({\mathfrak b})$-module is of this form.
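As a simple illustration, let $G=SL_2({\mathbb K})$ with $p$ odd and $\chi({\mathfrak b})=0$. Then ${\mathfrak h}$ is spanned by $h_\alpha$, which satisfies $h_\alpha^{[p]}=h_\alpha$, and identifying each $\lambda\in{\mathfrak h}^{*}$ with the scalar $\lambda(h_\alpha)$ we get $$\Lambda_\chi=\Lambda_0=\{\lambda\in{\mathbb K}\,\vert\,\lambda^p-\lambda=0\}={\mathbb F}_p.$$ The non-trivial element $s\in W$ acts as $s\cdot\lambda=-\lambda-2$, so the dot-action orbits on $\Lambda_0$ consist of the fixed point $\lambda=p-1$ together with $(p-1)/2$ orbits of size two, giving $\left\vert\Lambda_0/W_\bullet\right\vert=(p+1)/2$.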
We therefore may define the {\bf baby Verma module} $Z_\chi(\lambda)=U_\chi({\mathfrak g})\otimes_{U_{\chi}({\mathfrak b})}{\mathbb K}_\lambda$, a $U_\chi({\mathfrak g})$-module of dimension $p^N$, where $N=\left\vert\Phi^{+}\right\vert$. Every irreducible $U_\chi({\mathfrak g})$-module is a quotient of some baby Verma module (see \cite[Lem. B.4]{J1.1}). Since $W_\bullet$ acts on ${\mathfrak h}^{*}$, we may define an equivalence relation on $\Lambda_\chi$ by setting $\lambda\sim\mu$ if and only if there exists $w\in W$ with $w\cdot\lambda=\mu$. We write $\Lambda_\chi/W_{\bullet}$ for the set of equivalence classes of $\Lambda_\chi$ under this relation. If $\chi({\mathfrak b})=0$ then $\Lambda_\chi=\Lambda_0=\{d\lambda\in{\mathfrak h}^{*}\,\vert\,\lambda\in X(T)\}=X(T)/pX(T)$. In this case, $W_\bullet$ in fact acts on $\Lambda_\chi$, so $\Lambda_\chi/W_\bullet$ is the set of $W_\bullet$-orbits for this action. The condition that $\chi({\mathfrak b})=0$ is sufficiently important in this paper that we make the definition $${\mathfrak b}^{\perp}\coloneqq\{\chi\in{\mathfrak g}^{*}\,\vert\,\chi({\mathfrak b})=0\}.$$ We say that $\chi\in{\mathfrak b}^{\perp}$ is in {\bf standard Levi form} if there exists a subset $I\subseteq\Pi$ of simple roots such that $$\chi(e_{-\alpha})=\twopartdef{1}{\alpha\in I,}{0}{\alpha\in\Phi^{+}\setminus I.}$$ If $I=\Pi$ we say that $\chi$ is {\bf regular nilpotent in standard Levi form}. In general, we say $\chi\in{\mathfrak g}^{*}$ is {\bf regular nilpotent} if it is in the same $G$-orbit as the $\mu\in{\mathfrak g}^{*}$ which is regular nilpotent in standard Levi form. \section{Preliminaries on Blocks}\label{sec3} Let us briefly recall the definition of the {\bf blocks} of a finite-dimensional ${\mathbb K}$-algebra $A$ (one can find more details in \cite[I.16, III.9]{BGo.1}, for example).
We say that one irreducible $A$-module $M$ is {\bf linked to} another irreducible $A$-module $N$ if ${{\mbox{\rm Ext}}}^1(M,N)\neq 0$. This is not in general an equivalence relation, but we may pass to the equivalence relation it generates. The equivalence classes under the resulting equivalence relation are then called the {\bf blocks} of $A$. In this note, we are concerned with the case of $A=U_\chi({\mathfrak g})$ with $\chi\in{\mathfrak b}^\perp$. Under assumptions (A), (B) and (C) the results in this section are well-known -- for example, they are contained within the proof of Proposition C.5 in \cite{J1.1}. Nonetheless, we recall them to highlight when assumptions (A), (B) and (C) are or are not necessary. Remember from Section~\ref{sec2} that each irreducible $U_\chi({\mathfrak g})$-module is a quotient of a baby Verma module $Z_\chi(\lambda)$, and thus all irreducible $U_\chi({\mathfrak g})$-modules appear as composition factors of baby Verma modules. Recall also that the {\bf Grothendieck group} ${\mathscr G} (U_\chi({\mathfrak g}))$ of the category of finite-dimensional $U_\chi({\mathfrak g})$-modules is the abelian group generated by symbols $[M]$, for $M$ running over the collection of all finite-dimensional $U_\chi({\mathfrak g})$-modules, subject to the relation that $[P]+[N]=[M]$ if $$0\to P\to M\to N\to 0$$ is a short exact sequence of $U_\chi({\mathfrak g})$-modules. It is clear that in ${\mathscr G} (U_\chi({\mathfrak g}))$ we have, for $\lambda\in\Lambda_0$, $$[Z_\chi(\lambda)]=\sum_{\tiny L\in{{\mbox{\rm Irr}}}(U_\chi({\mathfrak g}))} [Z_\chi(\lambda):L][L],$$ where ${{\mbox{\rm Irr}}}(U_\chi({\mathfrak g}))$ is the set of isomorphism classes of irreducible $U_\chi({\mathfrak g})$-modules and $[Z_\chi(\lambda):L]$ indicates the composition multiplicity of $L$ in $Z_\chi(\lambda)$.
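To illustrate, take $G=SL_2({\mathbb K})$ with $p$ odd and $\chi=0$, and write $L(\lambda)$ for the irreducible $U_0({\mathfrak g})$-module of highest weight $\lambda\in\Lambda_0={\mathbb F}_p$. It is classical that $Z_0(p-1)=L(p-1)$ is irreducible (the Steinberg module), while for $0\leq\lambda\leq p-2$ we have $$[Z_0(\lambda)]=[L(\lambda)]+[L(s\cdot\lambda)],\qquad s\cdot\lambda=p-2-\lambda,$$ each factor occurring with multiplicity one. In particular, $[Z_0(\lambda)]$ depends only on the $W_\bullet$-orbit of $\lambda$, a phenomenon which holds in general, as we now explain.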
We wish to define the map $$f:\{\mbox{Blocks of}\,\, U_\chi({\mathfrak g})\}\to \{[Z_\chi(\lambda)]\,\vert\,\lambda\in \Lambda_0\}\subseteq {\mathscr G}(U_\chi({\mathfrak g})),$$ as follows. Let ${\mathfrak B}$ be a block of $U_\chi({\mathfrak g})$, and let $E$ be an irreducible module in this block. There must exist $\lambda\in \Lambda_0$ such that $E$ is a quotient of $Z_\chi(\lambda)$. We then define $f({\mathfrak B})=[Z_\chi(\lambda)]$. For this to be well-defined, it is necessary to see that it does not depend on our choice of $E\in{\mathfrak B}$ or on our choice of $Z_\chi(\lambda)\twoheadrightarrow E$. For this, we note that $U({\mathfrak g})^G\subseteq Z(U({\mathfrak g}))$ acts on the baby Verma module $Z_\chi(\lambda)$ via scalar multiplication as follows. Under the assumption that the derived group of $G$ is simply-connected (assumption (A)), the argument of Kac and Weisfeiler in \cite[Th. 1]{KW.1} (cf.\ \cite[Th. 9.3]{J2.1}) shows that there exists an isomorphism $\pi:U({\mathfrak g})^G\to S({\mathfrak h})^{W_{\bullet}}$, where the dot-action on $S({\mathfrak h})$ is obtained by identifying $S({\mathfrak h})$ with the algebra $P({\mathfrak h}^{*})$ of polynomial functions on ${\mathfrak h}^{*}$ and then defining $(w\cdot F)(\lambda)=F(w^{-1}\cdot\lambda)$ for $w\in W$, $F\in P({\mathfrak h}^{*})$ and $\lambda\in{\mathfrak h}^{*}$. This isomorphism allows us, as in \cite{J2.1}, to define a homomorphism ${{\mbox{\rm cen}}}_{\lambda}:U({\mathfrak g})^G\to{\mathbb K}$ which sends $u\in U({\mathfrak g})^G$ to $\pi(u)(\lambda)$, viewing $\pi(u)$ as an element of $P({\mathfrak h}^{*})$. Then $U({\mathfrak g})^G$ acts on $Z_\chi(\lambda)$ via the character ${{\mbox{\rm cen}}}_\lambda$, for $\lambda\in \Lambda_0$.
If $E$ and $E'$ lie in the same block then it is easy to see that $U({\mathfrak g})^G$ must act by the same character on both modules, and if $Z_{\chi}(\lambda_E)\twoheadrightarrow E$ and $Z_{\chi}(\lambda_{E'})\twoheadrightarrow E'$ then $U({\mathfrak g})^G$ acts on $E$ via ${{\mbox{\rm cen}}}_{\lambda_E}$ and on $E'$ via ${{\mbox{\rm cen}}}_{\lambda_{E'}}$. Thus, ${{\mbox{\rm cen}}}_{\lambda_E}={{\mbox{\rm cen}}}_{\lambda_{E'}}$ and so, as in \cite[Cor. 9.4]{J2.1} (see also \cite[Th. 2]{KW.1}), we have $\lambda_E\in W_\bullet \lambda_{E'}$. One may then observe, using \cite[C.2]{J1.1}, that $[Z_\chi(\lambda_E)]=[Z_{\chi}(\lambda_{E'})]$. This shows that $f$ is well-defined. Furthermore, $f$ is clearly surjective (just take the block containing an irreducible quotient of the desired $Z_{\chi}(\lambda)$). The above discussion also shows that $[Z_\chi(\lambda)]= [Z_\chi(\mu)]$ if and only if $\lambda\in W_\bullet\mu$. Thus, there is a bijection $$\{[Z_\chi(\lambda)]\,\vert\,\lambda\in \Lambda_0\}\leftrightarrow \Lambda_0/W_\bullet.$$ In particular, we get the following proposition (which also may more-or-less be found in \cite[C.5]{J1.1}), observing that at no point thus far have we required assumptions (B) or (C). \begin{prop}\label{Lower} Let $G$ be a connected reductive algebraic group over an algebraically closed field ${\mathbb K}$ of characteristic $p>0$, with simply-connected derived subgroup, and let $\chi\in{\mathfrak b}^\perp$. Then there exists a natural surjection between the set of blocks of $U_\chi({\mathfrak g})$ and the set $\Lambda_\chi/W_{\bullet}=\Lambda_0/W_{\bullet}$.
In particular, $$\left\vert\{\mbox{Blocks of}\,\,\, U_\chi({\mathfrak g})\}\right\vert\geq\left\vert \Lambda_0/W_{\bullet}\right\vert.$$ \end{prop} \begin{rmk} We have used in the above argument the fact that, when assumption (A) holds, there exists an isomorphism $U({\mathfrak g})^G\xrightarrow{\sim}S({\mathfrak h})^{W_\bullet}$. This result dates back to Kac and Weisfeiler \cite{KW.1}, who proved it for connected almost-simple algebraic groups under the assumption that $G\neq SO_{2n+1}({\mathbb K})$ when $p=2$.\footnote{In \cite[Th. 1]{KW.1} it is required that either $p\neq 2$ or $\rho\in X(T)$, where $\rho$ is the half sum of positive roots. This is then generalised to the given assumptions in \cite[Th. 1 BIS]{KW.1}. The $W$-action used in the latter theorem can be easily seen to be the same as the dot-action we are using.} According to Jantzen \cite[Rem. 9.3]{J2.1}, the argument of Kac and Weisfeiler holds for reductive ${\mathfrak g}$ whenever assumption (A) holds. Jantzen further gives an argument \cite[9.6]{J2.1} using reduction mod $p$ techniques which holds under his standard assumptions. In fact, slightly weaker assumptions are sufficient: assumption (B) is only needed to ensure $p$ is not a so-called torsion prime of $\Phi^\vee$ (in the sense of \cite[Prop. 8]{Dem.1}), which is also satisfied for the bad prime 3 in type $G_2$, while assumption (C) is only needed to ensure that the (derivatives of the) simple roots are linearly independent in ${\mathfrak h}^{*}$, which is also satisfied for $p=2$ in type $F_4$ and $p=3$ in type $G_2$. In particular, the argument of Kac-Weisfeiler is unnecessary for our later result (Corollary~\ref{BlockG23}) that Humphreys' conjecture on blocks holds for the almost-simple algebraic group of type $G_2$ in characteristic 3.
\end{rmk} \section{Upper bound} Humphreys' conjecture on blocks claims that the map $f$ defined in the previous section is, in fact, a bijection. What remains, therefore, is to show that $$\left\vert\{\mbox{Blocks of}\,\,\, U_\chi({\mathfrak g})\}\right\vert\leq\left\vert \Lambda_0/W_{\bullet}\right\vert.$$ Gordon \cite{Go.1} has shown that this inequality holds under assumptions (A), (B) and (C), and a similar argument is reproduced in \cite[C.5]{J1.1}. We give a version of this argument here in order to observe that it does not require assumption (B), and to highlight where assumption (C) is necessary: The discussion in Section~\ref{sec3} shows that $U_\chi({\mathfrak g})$ has $\left\vert \Lambda_0/W_{\bullet}\right\vert$ blocks if, for each $\lambda\in\Lambda_0$, all composition factors of the baby Verma module $Z_\chi(\lambda)$ lie in the same block. This property holds for the $\mu\in{\mathfrak g}^{*}$ which is regular nilpotent in standard Levi form, since each corresponding baby Verma module $Z_\mu(\lambda)$ has a unique maximal submodule and so is indecomposable, and it is well-known that all composition factors of an indecomposable module lie in the same block. Therefore $U_\chi({\mathfrak g})$ has $\left\vert \Lambda_0/W_{\bullet}\right\vert$ blocks for all $\chi$ in the $G$-orbit of $\mu$. Suppose now that the intersection of ${\mathfrak b}^\perp$ with the $G$-orbit of $\mu$ is dense in ${\mathfrak b}^\perp$. By \cite[Prop.
2.7]{Ga.1}, $$D_{\left\vert \Lambda_0/W_{\bullet}\right\vert}\coloneqq\{\chi\in{\mathfrak b}^{\perp}\,\vert\,U_\chi({\mathfrak g})\,\,\mbox{has at most}\,\,\left\vert \Lambda_0/W_{\bullet}\right\vert\,\,\mbox{blocks}\}$$ is closed in ${\mathfrak b}^{\perp}$. Since $(G\cdot\mu)\cap{\mathfrak b}^{\perp}\subseteq D_{\left\vert \Lambda_0/W_{\bullet}\right\vert}$ and $D_{\left\vert \Lambda_0/W_{\bullet}\right\vert}$ is closed, the density of $(G\cdot\mu)\cap{\mathfrak b}^{\perp}$ would force $D_{\left\vert \Lambda_0/W_{\bullet}\right\vert}={\mathfrak b}^{\perp}$, and Humphreys' conjecture on blocks would follow. When can we say that $(G\cdot\mu)\cap{\mathfrak b}^\perp$ is dense in ${\mathfrak b}^\perp$? Well, if there exists a $G$-equivariant isomorphism $\Theta:{\mathfrak g}\xrightarrow{\sim}{\mathfrak g}^{*}$, we can set $y\coloneqq\Theta^{-1}(\mu)$. Then \cite[6.3, 6.7]{J3.1} (which make no assumptions on $p$) establish that the $G$-orbit of $y$ is dense in the nilpotent cone ${\mathcal N}$ of ${\mathfrak g}$, and so the $G$-orbit of $\mu$ is dense in $\Theta({\mathcal N})$. Thus, $(G\cdot\mu)\cap {\mathfrak b}^\perp$ is dense in ${\mathfrak b}^{\perp}$, and so (cf. \cite[Th. 3.6]{Go.1}) under assumptions (A) and (C) we get Humphreys' conjecture on blocks: \begin{theorem}\label{BlockNumb} Let $G$ be a connected reductive algebraic group over an algebraically closed field ${\mathbb K}$ of characteristic $p>0$, with simply-connected derived subgroup. Suppose that there exists a $G$-module isomorphism $\Theta:{\mathfrak g}\xrightarrow{\sim}{\mathfrak g}^{*}$. Let $\chi\in{\mathfrak b}^{\perp}$. Then $$\left\vert\{\mbox{Blocks of}\,\,\, U_\chi({\mathfrak g})\}\right\vert=\left\vert \Lambda_\chi/W_{\bullet}\right\vert.$$ \end{theorem} \begin{rmk} It is straightforward to see that this theorem implies Corollary~\ref{ASGps} from the Introduction.
\end{rmk} \begin{rmk}\label{NilpRed} Suppose $\chi\in{\mathfrak g}^{*}$ with $\chi({\mathfrak n}^{+})=0$. Under assumption (C), there exists a $G$-module isomorphism $\Theta:{\mathfrak g}\to{\mathfrak g}^{*}$, so we may fix $x\in{\mathfrak g}$ such that $\Theta(x)=\chi$. In ${\mathfrak g}$ it is well-known that $x$ has a (unique) Jordan decomposition $x=x_s+x_n$, where $x_s$ is semisimple, $x_n$ is nilpotent, and $[x_s,x_n]=0$, and thus we may define the Jordan decomposition $\chi=\chi_s+\chi_n$ where $\chi_s=\Theta(x_s)$ and $\chi_n=\Theta(x_n)$. In fact, Kac and Weisfeiler \cite[Th. 4]{KW.1} show that a Jordan decomposition of $\chi$ may be defined even when assumption (C) does not hold, so long as assumption (A) does instead: we say that $\chi=\chi_s+\chi_n$ is a Jordan decomposition if there exists $g\in G$ such that $g\cdot\chi_s({\mathfrak n}^{+}\oplus{\mathfrak n}^{-})=0$, $g\cdot\chi_n({\mathfrak h}\oplus{\mathfrak n}^{+})=0$, and, for $\alpha\in\Phi^{+}$, $g\cdot\chi(h_\alpha)\neq 0$ only if $g\cdot\chi(e_{\pm\alpha})=0$. Under assumptions (A) and (B), Friedlander and Parshall \cite{FP1.1} show that there is an equivalence of categories between $\{U_\chi({\mathfrak g})-\mbox{modules}\}$ and $\{U_\chi({\mathfrak c}_{{\mathfrak g}}(\chi_s))-\mbox{modules}\}$ (the categories of finite-dimensional modules). It can then further be shown under those assumptions (see, for example, \cite[B.9]{J1.1}) that there is an equivalence of categories between $\{U_\chi({\mathfrak c}_{{\mathfrak g}}(\chi_s))-\mbox{modules}\}$ and $\{U_{\chi_n}({\mathfrak c}_{{\mathfrak g}}(\chi_s))-\mbox{modules}\}$. Under assumptions (A) and (B), this then often allows us to reduce representation-theoretic questions to the case of nilpotent $\chi$. 
When assumption (C) holds, we may do this for Humphreys' conjecture on blocks (we assume here that $\chi$ is chosen so that $g$ may be taken as $1$ in the definition of the Jordan decomposition, recalling that reduced enveloping algebras are unchanged by the coadjoint $G$-action on their corresponding $p$-character). The equivalence of categories between $\{U_\chi({\mathfrak g})-\mbox{modules}\}$ and $\{U_{\chi_n}({\mathfrak l})-\mbox{modules}\}$ (where ${\mathfrak l}\coloneqq {\mathfrak c}_{{\mathfrak g}}(\chi_s)$) clearly preserves the number of blocks of the respective algebras. Thus, Humphreys' conjecture on blocks for $({\mathfrak l},\chi_n)$ will imply it for $({\mathfrak g},\chi)$ if and only if $\left\vert\Lambda_\chi/W_{\bullet}\right\vert=\left\vert\Lambda_{\chi_n} /W'_\bullet\right\vert$, where $W'$ is the Weyl group corresponding to ${\mathfrak l}$. What is $W'$? Well, the root system for ${\mathfrak l}$ is $\{\alpha\in\Phi\mid \chi_s(h_\alpha)= 0\}$ so it is easy to see that $W'$ lies inside $W(\Lambda_\chi)$, the set of $w\in W$ which fix $\Lambda_\chi$ setwise (it is straightforward to see under our assumptions that it doesn't matter in defining this subgroup whether we consider the usual action or the dot-action of $W$, since $\rho\in \Lambda_0$). When assumption (C) holds, $W(\Lambda_\chi)$ is parabolic (see \cite[Lem. 7]{MR.1}, \cite[Prop. 1.15]{Hu4.1}), and so one can easily check that $W'=W(\Lambda_\chi)$ in this case (see \cite[Rem. 3.12(3)]{BG.1}).
This then obviously implies that $\left\vert\Lambda_\chi/W_{\bullet}\right\vert=\left\vert\Lambda_{\chi} /W'_\bullet\right\vert$, and so what remains is to show that $\left\vert\Lambda_{\chi} /W'_\bullet\right\vert=\left\vert\Lambda_{\chi_n} /W'_\bullet\right\vert$. One can check that there exists $\lambda\in \Lambda_\chi$ such that $w(\lambda)=\lambda$ for all $w\in W'=W(\Lambda_\chi)$. Then the map $\Lambda_\chi=\lambda+\Lambda_0\to \Lambda_0=\Lambda_{\chi_n}$, $\lambda+\tau\mapsto\tau$, induces a bijection $\Lambda_\chi/W'_\bullet\xrightarrow{\sim}\Lambda_{\chi_n}/W'_\bullet$ as required. Braun \cite[Th. 6.23, Ex. 6.25]{Br.1} has shown that when assumption (C) fails to hold, it can be the case that Humphreys' conjecture on blocks holds for nilpotent $\chi$ but fails for general $\chi$. Specifically, set ${\mathfrak g}={\mathfrak s}{\mathfrak l}_3$, $p=3$, and choose $\chi\in{\mathfrak s}{\mathfrak l}_3^{*}$ such that $\chi(e_{11}-e_{22})=\chi(e_{22}-e_{33})\neq 0$ (using $e_{ij}$ for the usual basis elements of ${\mathfrak g}{\mathfrak l}_3$). Recalling that the Weyl group for ${\mathfrak s}{\mathfrak l}_3$ is the symmetric group $S_3$, one can check that $W(\Lambda_\chi)=\{{{\mbox{\rm Id}}},(1,2,3),(1,3,2)\}$ and so is not a parabolic subgroup of $W$. Thus, $W'\neq W(\Lambda_\chi)$ and so there can be linkages under $W$ which do not exist under $W'$. In particular, choosing suitable $\chi$, one can use this to show that $\left\vert\Lambda_\chi/W_{\bullet}\right\vert<\left\vert\Lambda_{\chi_n} /W'_\bullet\right\vert$.
Braun's argument then shows that the latter value is the number of blocks of $U_{\chi_n}({\mathfrak l})$ and so the number of blocks of $U_\chi({\mathfrak g})$. We note that this argument highlights that \cite[Lem. 7]{MR.1} requires the assumption that $p$ be very good for the root system. \end{rmk} The argument above highlights one approach to proving Humphreys' conjecture on blocks; namely, to obtain the desired result it suffices to find a dense subset of ${\mathfrak b}^{\perp}$ lying inside $D_{\left\vert \Lambda_\chi/W_{\bullet}\right\vert}$. Note that ${\mathfrak b}^\perp\cong{\mathbb K}^N$, where $N=\left\vert\Phi^{+}\right\vert$, and recall that any non-empty open subset is dense in ${\mathbb K}^N$ when it is equipped with the Zariski topology. For each $\lambda\in \Lambda_0$, define $$C_\lambda\coloneqq \{\chi\in{\mathfrak b}^{\perp}\,\vert\,\,\mbox{All composition factors of }\, Z_\chi(\lambda)\,\,\mbox{are in the same block of }\,U_\chi({\mathfrak g})\,\},$$ and define $$C\coloneqq\bigcap_{\lambda\in \Lambda_0}C_\lambda.$$ It is straightforward from the arguments in Section~\ref{sec3} to see that $C\subseteq D_{\left\vert \Lambda_\chi/W_{\bullet}\right\vert}$. Furthermore, if for each $\lambda\in \Lambda_0$ we can find a dense open subset $\widehat{C}_\lambda$ of ${\mathfrak b}^{\perp}$ with $\widehat{C}_\lambda\subseteq C_\lambda$, then $$\widehat{C}\coloneqq\bigcap_{\lambda\in \Lambda_0}\widehat{C}_\lambda$$ would be a dense open subset of ${\mathfrak b}^{\perp}$ contained in $C\subseteq D_{\left\vert \Lambda_\chi/W_{\bullet}\right\vert}$.
Finding the desired $\widehat{C}_\lambda$ therefore provides an approach to proving Humphreys' conjecture on blocks, and in the rest of this section we explore one particular way of obtaining such $\widehat{C}_\lambda$. For each $\lambda\in \Lambda_0$, consider the set $$S_\lambda\coloneqq\{\chi\in{\mathfrak b}^{\perp}\,\vert\,Z_\chi(\lambda)\,\,\mbox{is an irreducible } U_\chi({\mathfrak g})\mbox{-module}\}.$$ It is remarked in \cite[C.6]{J1.1} that $S_\lambda$ is open in ${\mathfrak b}^\perp$. Specifically, if we define, for $s=1,\ldots,p^N-1$, the set $$N_{\lambda,s}=\{\chi\in{\mathfrak b}^{\perp}\,\vert\,Z_\chi(\lambda)\,\,\mbox{has a } U_\chi({\mathfrak g})\mbox{-submodule of dimension } s\},$$ then clearly $S_\lambda=\bigcap_{s=1}^{p^N-1}N_{\lambda,s}^c$ (where, for $X\subseteq {\mathfrak b}^\perp$, $X^c$ denotes ${\mathfrak b}^\perp\setminus X$). The openness of $S_\lambda$ then follows from the closedness of each $N_{\lambda,s}$ in ${\mathfrak b}^\perp$ (which is proved in \cite[C.6]{J1.1}, and one can check that the proof doesn't use assumptions (B) or (C)). \begin{prop}\label{prop1} Let $G$ be a connected reductive algebraic group over an algebraically closed field ${\mathbb K}$ of characteristic $p>0$, with simply-connected derived subgroup. Let $\chi\in{\mathfrak b}^{\perp}$, and suppose that for each $\lambda\in \Lambda_0$ there exists $\mu_\lambda\in {\mathfrak b}^{\perp}$ such that $Z_{\mu_\lambda}(\lambda)$ is an irreducible $U_{\mu_\lambda}({\mathfrak g})$-module. Then $\left\vert\{\mbox{Blocks of}\,\, U_\chi({\mathfrak g})\}\right\vert=\left\vert \Lambda_\chi/W_{\bullet}\right\vert$. \end{prop} \begin{proof} Our assumption guarantees that each $S_\lambda$, for $\lambda\in\Lambda_0$, is non-empty.
Each $S_\lambda$ is thus a dense open subset of ${\mathfrak b}^{\perp}$ and it is clear that $S_\lambda\subseteq C_\lambda$ for each $\lambda\in\Lambda_0$. We therefore have that $\bigcap_{\lambda\in \Lambda_0}S_\lambda$, being a finite intersection of dense open subsets, is a dense open subset of ${\mathfrak b}^{\perp}$. Since $\bigcap_{\lambda\in \Lambda_0}S_\lambda\subseteq C\subseteq D_{\left\vert \Lambda_\chi/W_{\bullet}\right\vert}$ and $D_{\left\vert \Lambda_\chi/W_{\bullet}\right\vert}$ is closed in ${\mathfrak b}^\perp$, taking closures gives ${\mathfrak b}^\perp=D_{\left\vert \Lambda_\chi/W_{\bullet}\right\vert}$. Hence, $\left\vert\{\mbox{Blocks of}\,\, U_\chi({\mathfrak g})\}\right\vert\leq \left\vert \Lambda_\chi/W_{\bullet}\right\vert$ and, together with Proposition~\ref{Lower}, this gives the desired result. \end{proof} \begin{cor}\label{BlockG23} Suppose $G$ is the almost-simple simply-connected algebraic group of type $G_2$ over an algebraically closed field ${\mathbb K}$ of characteristic $3$. If $\chi\in{\mathfrak g}^{*}$ satisfies $\chi({\mathfrak b})=0$, then $U_\chi({\mathfrak g})$ has exactly $\left\vert\Lambda_\chi/W_\bullet\right\vert=3$ blocks. \end{cor} \begin{proof} The calculations in Subsection~\ref{G23}, {\em infra}, show that, for each $\lambda\in \Lambda_0$, the regular nilpotent $\chi$ in standard Levi form gives an irreducible baby Verma module. The result then follows from Proposition~\ref{prop1} (and one can check directly that $\left\vert\Lambda_\chi/W_\bullet\right\vert=3$).
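One way to carry out this last check is via Burnside's lemma: $\Lambda_\chi=\Lambda_0\cong({\mathbb Z}/3)^2$ has nine elements and $W$ is dihedral of order $12$. For $w\in W$, the dot-action fixed points of $w$ on $\Lambda_0$ are the solutions of $(w-1)(\lambda+\rho)=0$, of which there are $9$ for $w=1$, $3$ for each of the six reflections and the two rotations of order $3$ (precisely the non-trivial elements admitting the eigenvalue $1$ modulo $3$), and $1$ for each of the remaining three rotations. Hence $$\left\vert\Lambda_0/W_\bullet\right\vert=\frac{1}{12}\left(9+8\cdot 3+3\cdot 1\right)=3.$$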
\end{proof} \begin{rmk} From the discussion in Section \ref{sec3}, it is sufficient to check the condition of Proposition~\ref{prop1} for a set of representatives $\lambda$ of the classes in $\Lambda_0/W_\bullet$. \end{rmk} \begin{rmk} By Premet's theorem \cite{Pr1.1,Pr3.1}, Proposition~\ref{prop1} gives a proof of Humphreys' conjecture on blocks when Jantzen's standard assumptions hold. This is similar to the proof of Theorem~\ref{BlockNumb}, {\em supra}. \end{rmk} Proposition~\ref{prop1} shows that Humphreys' conjecture on blocks holds when irreducible baby Verma modules exist. The next proposition shows what happens when they don't. \begin{prop}\label{NoIrred} Let $\lambda\in \Lambda_0$. If there does not exist $\mu_\lambda\in{\mathfrak b}^{\perp}$ such that $Z_{\mu_\lambda}(\lambda)$ is irreducible, then there exists $1\leq s\leq p^N-1$ such that, for all $\chi\in{\mathfrak b}^{\perp}$, $Z_\chi(\lambda)$ has an $s$-dimensional submodule. \end{prop} \begin{proof} If there does not exist $\mu_\lambda\in{\mathfrak b}^{\perp}$ such that $Z_{\mu_\lambda}(\lambda)$ is irreducible then, using the above notation, $$\bigcap_{s=1}^{p^N-1}N_{\lambda,s}^c=\emptyset.$$ Since each $N_{\lambda,s}^c$ is open in ${\mathfrak b}^{\perp}$, and each non-empty open set in ${\mathfrak b}^{\perp}$ is dense, this implies that there exists $1\leq s\leq p^N-1$ such that $N_{\lambda,s}^c=\emptyset$. This implies that $N_{\lambda,s}={\mathfrak b}^{\perp}$, as required. \end{proof} We end by observing an obvious generalisation of the statement that, for $\lambda\in\Lambda_0$, $S_\lambda$ is open dense in ${\mathfrak b}^{\perp}$ whenever there exists $\chi\in{\mathfrak b}^{\perp}$ with $Z_\chi(\lambda)$ irreducible. \begin{prop} Let $\lambda\in \Lambda_0$. Suppose that there exists $\chi_\lambda\in{\mathfrak b}^{\perp}$ and $0\leq k\leq N$ such that every submodule of $Z_{\chi_\lambda}(\lambda)$ has dimension divisible by $p^k$.
Then the subset $$V_\lambda\coloneqq\{\mu\in{\mathfrak b}^{\perp}\,\vert\,\mbox{Each } U_\mu({\mathfrak g})\mbox{-submodule of }\,Z_\mu(\lambda)\,\,\mbox{has dimension divisible by }\, p^{k}\}$$ is a dense open subset of ${\mathfrak b}^{\perp}$. \end{prop} \begin{proof} The result follows easily once we note that $$V_\lambda=\bigcap_{\substack{1\leq s\leq p^N \\ p^k\nmid s}}N_{\lambda,s}^c.$$ \end{proof} \begin{rmk} This proposition therefore allows us to use the results of Appendix~\ref{sec6} to find dense open subsets $V_\lambda$ of ${\mathfrak b}^{\perp}$. These subsets are thus candidates for the sets $\widehat{C}_\lambda$ discussed earlier; all that remains to show is that $V_\lambda\subseteq C_\lambda$ for all $\lambda\in\Lambda_0$. If this were to hold, then the previous discussion would give a proof of Humphreys' conjecture on blocks for such ${\mathfrak g}$. \end{rmk} \begin{thebibliography}{9999} \bibitem{Br.1} A. Braun, {\em The center of the enveloping algebra of the $p$-Lie algebras ${\mathfrak s}{\mathfrak l}_n$, ${\mathfrak p}{\mathfrak g}{\mathfrak l}_n$, ${\mathfrak p}{\mathfrak s}{\mathfrak l}_n$ when $p$ divides $n$}, J. Algebra {\bf 504} (2018), 217--290. \bibitem{BG.1} K. Brown, I. Gordon, {\em The ramification of centres: Lie algebras in positive characteristic and quantised enveloping algebras}, Math. Z. {\bf 238} (2001), 733--779. \bibitem{BGo.1} K. Brown, K. Goodearl, {\em Lectures on algebraic quantum groups}, Advanced Courses in Mathematics, CRM Barcelona, Birkh\"{a}user Verlag, Basel, 2002. \bibitem{Dem.1} M. Demazure, {\em Invariants sym\'{e}triques entiers des groupes de Weyl et torsion}, Invent. Math. {\bf 21} (1973), 287--301. \bibitem{FP1.1} E. Friedlander, B. Parshall, {\em Modular representation theory of Lie algebras}, Amer. J. Math. {\bf 110} (1988), 1055--1094. \bibitem{Ga.1} P. Gabriel, {\em Finite representation type is open} in: V. Dlab, P.
Gabriel (ed.), Proceedings of the International Conference on Representations of Algebras (Carleton 1974), Carleton Mathematical Lecture Notes, No. 9, Carleton University, Ottawa, Ont., 1974, 407pp. \bibitem{Go.1} I. Gordon, {\em Representations of semisimple Lie algebras in positive characteristic and quantum groups at roots of unity} in: A. Pressley (ed.), Quantum groups and Lie theory (Durham 1999), London Math. Soc. Lecture Note Ser., 290, Cambridge Univ. Press, Cambridge, 2001, pp. 149--167. \bibitem{Hu3.1} J. Humphreys, {\em Modular representations of classical Lie algebras and semisimple groups}, J. Algebra {\bf 19} (1971), 51--79. \bibitem{Hu.1} J. Humphreys, {\em Modular representations of simple Lie algebras}, Bull. Amer. Math. Soc. (N.S.) {\bf 35} (1998), 105--122. \bibitem{Hu4.1} J. Humphreys, {\em Reflection groups and Coxeter groups}, Cambridge Studies in Advanced Mathematics, 29. Cambridge University Press, Cambridge, 1990. \bibitem{J3.1} J.~C.~Jantzen, {\em Nilpotent orbits in representation theory}, Lie theory, 1--211, Progr. Math. {\bf 228}, Birkh\"{a}user Boston, Boston, MA, 2004. \bibitem{J1.1} J. C. Jantzen, {\em Representations of Lie algebras in positive characteristic}, Representation theory of algebraic groups and quantum groups, 175--218, Adv. Stud. Pure Math., 40, Math. Soc. Japan, Tokyo, 2004. \bibitem{J2.1} J. C. Jantzen, {\em Representations of Lie algebras in prime characteristic}, in Representation Theories and Algebraic Geometry, Proceedings (A. Broer, Ed.), pp.\ 185--235. Montreal, NATO ASI Series, Vol. C 514, Kluwer, Dordrecht, 1998. \bibitem{J4.1} J.~C.~Jantzen, {\em Subregular nilpotent representations of ${\mathfrak s}{\mathfrak l}_n$ and ${\mathfrak s}{\mathfrak o}_{2n+1}$}, Math. Proc. Cambridge Philos. Soc. {\bf 126} (1999), 223--257. \bibitem{KW.1} V. Kac, B. Weisfeiler, {\em Coadjoint action of a semi-simple algebraic group and the center of the enveloping algebra in characteristic $p$}, Indag. Math.
{\bf 38} (1976), 136--151. \bibitem{MR.1} I. Mirkovi\'{c}, D. Rumynin, {\em Centers of reduced enveloping algebras}, Math. Z. {\bf 231} (1999), 123--132. \bibitem{Pr1.1} A. Premet, {\em Support varieties of non-restricted modules over Lie algebras of reductive groups}, J. London Math. Soc. {\bf 55} (1997), 236--250. \bibitem{Pr3.1} A. Premet, {\em Irreducible representations of Lie algebras of reductive groups and the Kac-Weisfeiler conjecture}, Invent. Math. {\bf 121} (1995), 79--117. \end{thebibliography} \appendix \section{Divisibility bounds}\label{sec6} By Proposition~\ref{prop1}, Humphreys' conjecture on blocks holds whenever, for each $\lambda\in\Lambda_0$, there exists $\mu_\lambda\in{\mathfrak b}^\perp$ such that $Z_{\mu_\lambda}(\lambda)$ is an irreducible $U_{\mu_\lambda}({\mathfrak g})$-module. The natural choice for such $\mu_\lambda$ is the $\chi\in{\mathfrak g}^{*}$ which is regular nilpotent in standard Levi form. For such $\chi$, one way to try to show that each $Z_\chi(\lambda)$ is irreducible is to show that each $\dim(Z_\chi(\lambda))$ is divisible by $p^N$, where $N=\left\vert\Phi^{+}\right\vert$. This appendix contains some computations to determine some $k\leq N$ such that all $U_\chi({\mathfrak g})$-modules have dimension divisible by $p^k$. Unfortunately, except for the case of $G_2$ in characteristic $3$, we do not find $k$ to be equal to $N$ when assumptions (B) or (C) fail. In a few cases, we are even able to show that $p^N$ does {\em not} divide $\dim(Z_\chi(\lambda))$ for some $\lambda\in\Lambda_0$. In this appendix, we assume $G$ is an almost-simple simply-connected algebraic group over an algebraically closed field ${\mathbb K}$ of characteristic $p>0$, and we write $\Phi$ for its (indecomposable) root system.
Specifically, let $G_{{\mathbb Z}}$ be a split reductive group scheme over ${\mathbb Z}$ with root data $(X(T), \Phi, \alpha\mapsto\alpha^\vee)$, let $T_{\mathbb Z}$ be a split maximal torus of $G_{\mathbb Z}$, and let ${\mathfrak g}_{{\mathbb Z}}$ be the Lie ring of $G_{{\mathbb Z}}$. Throughout this appendix, we think of $G$ as being obtained from $G_{{\mathbb Z}}$ through base change, so $G=(G_{\mathbb Z})_{{\mathbb K}}$, $T=(T_{\mathbb Z})_{\mathbb K}$ and ${\mathfrak g}={\mathfrak g}_{{\mathbb Z}}\otimes_{{\mathbb Z}}{\mathbb K}$. In particular, the elements $e_\beta$ ($\beta\in \Phi$) and $h_\alpha$ ($\alpha\in\Pi$) form a Chevalley basis of ${\mathfrak g}$. Under these assumptions, ${\mathfrak g}$ is a simple Lie algebra unless $\Phi$ is of type $A_n$ with $p$ dividing $n+1$; of type $B_n$, $C_n$, $D_n$, $F_4$ or $E_7$ with $p=2$; or of type $E_6$ or $G_2$ with $p=3$ (see, for example, \cite[6.4(b)]{J2.2}). If ${\mathfrak g}$ is simple then there exists a $G$-equivariant isomorphism ${\mathfrak g}\xrightarrow{\sim}{\mathfrak g}^{*}$ coming from the Killing form, so assumption (C) holds. We also note that assumption (A) holds for all such $G$, since $G$ equals its derived subgroup. We consider here both those $G$ which satisfy assumption (C) and those which don't (i.e. we also consider those $G$ with ${\mathfrak g}$ non-simple). We focus our attention on the exceptional types $E_6$, $E_7$, $E_8$, $F_4$ and $G_2$. We generally assume throughout this appendix that $\chi$ is in standard Levi form with $I=\Pi$, although we don't make that assumption in this preliminary discussion. When $G$ satisfies assumptions (A), (B) and (C), Premet's theorem \cite{Pr1.2,Pr3.2} (proving the second Kac-Weisfeiler conjecture \cite{KW2.2}) shows that the dimension of each $U_\chi({\mathfrak g})$-module is divisible by $p^{\dim (G\cdot\chi)/2}$. We note also that when (A) and (B) hold but (C) does not - i.e.
when $\Phi=A_n$ and $p$ divides $n+1$ - Premet's theorem shows the same result for faithful irreducible $U_\chi({\mathfrak g})$-modules. When $\chi$ is regular nilpotent and assumption (C) holds, we know that $\dim (G\cdot\chi) /2=N$. Hence, in this situation we have that all irreducible $U_\chi({\mathfrak g})$-modules have dimension divisible by $p^{N}$. This means that all baby Verma modules are irreducible, and so all irreducible $U_\chi({\mathfrak g})$-modules are baby Verma modules. Outside of the setting of Premet's theorem, there are other ways to determine powers of $p$ which divide the dimensions of all $U_\chi({\mathfrak g})$-modules. Two particular results are relevant here. Both utilize the {\bf centraliser} in ${\mathfrak g}$ of $\chi\in{\mathfrak g}^{*}$, which the reader will recall is defined as $c_{{\mathfrak g}}(\chi)\coloneqq \{x\in{\mathfrak g}\,\vert\,\chi([x,{\mathfrak g}])=0\}.$ The first result comes from Premet and Skryabin \cite{PS.2}, and applies when the prime $p$ is {\bf non-special} for the root system $\Phi$. This means that $p\neq 2$ when $\Phi$ is $B_n$, $C_n$ or $F_4$, and $p\neq 3$ when $\Phi=G_2$ (i.e. $p$ does not divide any non-zero off-diagonal entry of the Cartan matrix). \begin{prop}\label{nonspec} Let $\chi\in{\mathfrak b}^{\perp}$, and let $d(\chi)\coloneqq\frac{1}{2}{\mbox{\rm codim}}_{{\mathfrak g}}(c_{\mathfrak g}(\chi))$. If $p$ is non-special for $\Phi$, then every $U_\chi({\mathfrak g})$-module has dimension divisible by $p^{d(\chi)}$. \end{prop} The second proposition we use is also due to Premet \cite{J2.2,Pr1.2,Pr2.2}. To apply it, recall that a restricted Lie algebra is called {\bf unipotent} if for all $x\in{\mathfrak g}$ there exists $r>0$ such that $x^{[p^r]}=0$, where $x^{[p^r]}$ denotes the image of $x$ under $r$ applications of $\,^{[p]}$. In particular, this applies to ${\mathfrak n}^{-}$ and any restricted subalgebras of it.
\begin{prop}\label{unip} Let $\chi\in{\mathfrak g}^{*}$. If ${\mathfrak m}$ is a unipotent restricted subalgebra of ${\mathfrak g}$ with $\chi([{\mathfrak m},{\mathfrak m}])=0$, $\chi({\mathfrak m}^{[p]})=0$ and ${\mathfrak m}\cap c_{\mathfrak g}(\chi)=0$, then every finite-dimensional $U_\chi({\mathfrak g})$-module is free over $U_\chi({\mathfrak m})$. \end{prop} In applying the second proposition when $\chi\in{\mathfrak b}^{\perp}$, the reader should note the following. Suppose ${\mathfrak m}$, a ${\mathbb K}$-subspace of ${\mathfrak g}$, has a basis consisting of elements $e_{-\alpha}$ for $\alpha\in\Psi$, where $\Psi$ is some subset of $\Phi^{+}$. The condition $\chi([{\mathfrak m},{\mathfrak m}])=0$ is clearly satisfied if $\chi(e_{-\alpha-\beta})=0$ for all $\alpha,\beta\in\Psi$. Furthermore, we have in $U({\mathfrak g})$ that $$\left(\sum_{\alpha\in\Psi}c_\alpha e_{-\alpha}\right)^{p}-\left(\sum_{\alpha\in\Psi}c_\alpha e_{-\alpha}\right)^{[p]}=\sum_{\alpha\in\Psi}c_\alpha^p (e_{-\alpha}^p-e_{-\alpha}^{[p]})=\sum_{\alpha\in\Psi}c_\alpha^p e_{-\alpha}^p$$ by the semilinearity of the map $x\mapsto x^p-x^{[p]}$, and we have $$\left(\sum_{\alpha\in\Psi}c_\alpha e_{-\alpha}\right)^{p}\in \sum_{\alpha\in\Psi}c_\alpha^p e_{-\alpha}^p + \sum_{\gamma_1,\ldots,\gamma_p\in\Psi}{\mathbb K} e_{-\gamma_1-\gamma_2-\cdots-\gamma_p}$$ where we interpret $e_{-\gamma_1-\gamma_2-\cdots-\gamma_p}=0$ if $-\gamma_1-\gamma_2-\cdots-\gamma_p\notin \Phi$. We hence conclude that $$\left(\sum_{\alpha\in\Psi}c_\alpha e_{-\alpha}\right)^{[p]}\in \sum_{\gamma_1,\ldots,\gamma_p\in\Psi}{\mathbb K} e_{-\gamma_1-\gamma_2-\cdots-\gamma_p}.$$ In particular, if $\chi(e_{-\gamma_1-\gamma_2-\cdots-\gamma_p})=0$ for all $\gamma_1,\ldots,\gamma_p\in\Psi$, we find that $\chi({\mathfrak m}^{[p]})=0$. 
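The containment just derived can be sanity-checked numerically in a small restricted Lie algebra, where the $p$-operation is simply the $p$-th matrix power. The following toy sketch (plain Python of our own, not the Sage code cited below) verifies that in ${\mathfrak n}^-\subset{\mathfrak s}{\mathfrak l}_3$ with $p=2$ one has $(c_1e_{-\alpha}+c_2e_{-\beta})^{[2]}=c_1c_2\,e_{-\alpha-\beta}$, which indeed lies in the span of the $e_{-\gamma_1-\gamma_2}$.

```python
# Toy check (not the paper's Sage computations): in n^- of sl_3 the restricted
# power x^[p] is the p-th matrix power, and for p = 2 the claim is that
# (c1*e_{-a} + c2*e_{-b})^[2] lies in the span of e_{-a-b}.
p = 2

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) % p for j in range(n)]
            for i in range(n)]

# Chevalley basis of n^- in sl_3: e_{-a} = E21, e_{-b} = E32, e_{-a-b} = E31.
E21 = [[0, 0, 0], [1, 0, 0], [0, 0, 0]]
E32 = [[0, 0, 0], [0, 0, 0], [0, 1, 0]]
E31 = [[0, 0, 0], [0, 0, 0], [1, 0, 0]]

for c1 in range(p):
    for c2 in range(p):
        x = [[(c1 * E21[i][j] + c2 * E32[i][j]) % p for j in range(3)]
             for i in range(3)]
        xp = matmul(x, x)  # x^[2] = x^2 for matrices
        expected = [[(c1 * c2 * E31[i][j]) % p for j in range(3)]
                    for i in range(3)]
        assert xp == expected  # x^[2] = c1*c2 * e_{-a-b}
print("ok")
```

The assertion holds for every choice of $c_1,c_2$, matching the span statement above in this smallest non-trivial case.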
Furthermore, if $\Psi$ satisfies the condition that $\alpha,\beta\in\Psi$, $\alpha+\beta\in\Phi$ implies $\alpha+\beta\in\Psi$ (we call this the condition of $\Psi$ being {\bf closed}), then it is enough to check that $\chi(e_{-\alpha-\beta})=0$ for all $\alpha,\beta\in\Psi$. Finally, we observe that $\Psi$ being closed is enough to show that ${\mathfrak m}$ is a subalgebra. So we may obtain a corollary to Proposition~\ref{unip}: \begin{cor} Let $\chi\in{\mathfrak b}^{\perp}$ and let $\Psi$ be a closed subset of $\Phi^{+}$. Suppose that $\chi(e_{-\alpha-\beta})=0$ for all $\alpha,\beta\in\Psi$. Furthermore, let ${\mathfrak m}$ be the subspace of ${\mathfrak g}$ with basis consisting of the $e_{-\alpha}$ with $\alpha\in\Psi$, and suppose that ${\mathfrak m}\cap c_{\mathfrak g}(\chi)=0$. Then every finite-dimensional $U_\chi({\mathfrak g})$-module has dimension divisible by $p^{\vert\Psi\vert}$. \end{cor} The above discussion actually shows that this corollary can be improved a bit. Given two roots $\alpha,\beta\in\Phi$, write $C_{\alpha,\beta}\coloneqq q+1$ where $q\in{\mathbb N}$ is maximal for the condition that $\beta-q\alpha$ lies in $\Phi$ (so, in particular, $[e_{\alpha},e_{\beta}]=\pm C_{\alpha,\beta} e_{\alpha+\beta}$ if $\alpha+\beta\in\Phi$). Let us say that $\Psi$ is {\bf $p$-closed} if, for all $\alpha,\beta\in\Psi$ with $\alpha+\beta\in \Phi$, either $\alpha+\beta\in \Psi$ or $p$ divides $C_{\gamma,\delta}$ for all $\gamma, \delta\in\Psi$ with $\gamma+\delta=\alpha+\beta$. Then we easily obtain the following. \begin{cor}\label{pclos} Let $\chi\in{\mathfrak b}^{\perp}$, and let $\Psi$ be a $p$-closed subset of $\Phi^{+}$. Suppose that $\chi(e_{-\alpha-\beta})=0$ for all $\alpha,\beta\in\Psi$ with $\alpha+\beta\in\Psi$.
Furthermore, let ${\mathfrak m}$ be the subspace of ${\mathfrak g}$ with basis consisting of the $e_{-\alpha}$ with $\alpha\in\Psi$, and suppose that ${\mathfrak m}\cap c_{\mathfrak g}(\chi)=0$. Then every finite-dimensional $U_\chi({\mathfrak g})$-module has dimension divisible by $p^{\vert\Psi\vert}$. \end{cor} Let us consider a bit further the condition that ${\mathfrak m}\cap c_{\mathfrak g}(\chi)=0$. Let $x\in {\mathfrak m}\cap c_{\mathfrak g}(\chi)$. We can then write $$x=\sum_{\alpha\in\Psi}c_\alpha e_{-\alpha}.$$ The fact that $x\in c_{\mathfrak g}(\chi)$ means that $\chi([x,{\mathfrak g}])=0$. This is equivalent to the requirement that $\chi([x,e_\beta])=0$ for all $\beta\in\Phi$ and $\chi([x,h])=0$ for all $h\in{\mathfrak h}$. Let $\Delta$ be the subset of $\Phi^{-}$ such that $\chi(e_{\alpha})\neq 0$ for $\alpha\in \Delta$. We then have, for $\beta\in\Phi$, that $$0=\chi([x,e_\beta])=\sum_{\substack{\gamma\in\Psi \\ \beta-\gamma\in\Delta}}c_{\gamma}\chi([e_{-\gamma},e_\beta])$$ and, for $h\in{\mathfrak h}$, that $$0=\chi([x,h])=\sum_{\substack{\gamma\in\Psi \\ -\gamma\in\Delta}}c_\gamma\gamma(h)\chi(e_{-\gamma}).$$ Showing that ${\mathfrak m}\cap c_{\mathfrak g}(\chi)=0$ then involves showing that there is no non-zero solution to these equations in $c_\gamma$.
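Since these are linear equations in the $c_\gamma$, they are easy to solve by machine. As a toy illustration only (in type $A_2$ rather than the exceptional types treated below; the coordinates and helper names are ours), the following sketch sets up the system $\chi([x,y])=0$ for ${\mathfrak g}={\mathfrak s}{\mathfrak l}_3$ with $\chi$ regular nilpotent in standard Levi form and ${\mathfrak m}={\mathfrak n}^-$, and computes $\dim({\mathfrak m}\cap c_{\mathfrak g}(\chi))$ over ${\mathbb F}_p$ by Gaussian elimination.

```python
# Toy illustration in sl_3 (not the paper's exceptional cases): compute
# dim(m ∩ c_g(chi)) over F_p, where m = n^- and chi(e_{-a}) = chi(e_{-b}) = 1,
# chi vanishing on the rest of the Chevalley basis.
def E(i, j):
    M = [[0] * 3 for _ in range(3)]
    M[i][j] = 1
    return M

def bracket(A, B):
    return [[sum(A[i][k] * B[k][j] - B[i][k] * A[k][j] for k in range(3))
             for j in range(3)] for i in range(3)]

def chi(M):
    return M[1][0] + M[2][1]  # picks out the e_{-a} and e_{-b} coefficients

def nullity_mod_p(eqs, ncols, p):
    # Gaussian elimination over F_p (p prime); returns dim of the kernel.
    rows = [[x % p for x in r] for r in eqs]
    rank, col = 0, 0
    while col < ncols and rank < len(rows):
        piv = next((r for r in range(rank, len(rows)) if rows[r][col]), None)
        if piv is None:
            col += 1
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        inv = pow(rows[rank][col], p - 2, p)
        rows[rank] = [(x * inv) % p for x in rows[rank]]
        for r in range(len(rows)):
            if r != rank and rows[r][col]:
                f = rows[r][col]
                rows[r] = [(a - f * b) % p for a, b in zip(rows[r], rows[rank])]
        rank += 1
        col += 1
    return ncols - rank

# m = n^- has basis e_{-a} = E(1,0), e_{-b} = E(2,1), e_{-a-b} = E(2,0).
m_basis = [E(1, 0), E(2, 1), E(2, 0)]
h_a = [[1, 0, 0], [0, -1, 0], [0, 0, 0]]  # h_a = E11 - E22
h_b = [[0, 0, 0], [0, 1, 0], [0, 0, -1]]  # h_b = E22 - E33
g_basis = [E(0, 1), E(1, 2), E(0, 2), E(1, 0), E(2, 1), E(2, 0), h_a, h_b]

# One linear equation chi([x, y]) = 0 per basis element y of g.
rows = [[chi(bracket(mb, y)) for mb in m_basis] for y in g_basis]

for p in (3, 5):
    print(p, nullity_mod_p(rows, 3, p))
```

In this toy case the intersection is zero for $p=5$ but one-dimensional for $p=3$, the prime dividing $n+1$ for which ${\mathfrak s}{\mathfrak l}_3$ fails to be simple, echoing the sensitivity to bad and special primes seen in the exceptional types below.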
\begin{table} \begin{center} \bgroup \renewcommand{\arraystretch}{2} \caption{Dimensions of centralisers of regular nilpotent elements and $p$-characters} \label{tab1} \begin{tabular}{|| p{2em} | p{2em} | p{4em} | p{4em} ||} \hline $G$ & $p$ & $\dim{\mathfrak c}_{\mathfrak g}(e)$ & $\dim{\mathfrak c}_{\mathfrak g}(\chi)$ \\ [0.5ex] \hline\hline $E_6$ & 2 & 8 & 8 \\ [0.5ex] \hline $E_6$ & 3* & 9 & 10 \\ [0.5ex] \hline $E_7$ & 2* & 14 & 15 \\ [0.5ex] \hline $E_7$ & 3 & 9 & 9 \\ [0.5ex] \hline $E_8$ & 2 & 16 & 16 \\ [0.5ex] \hline $E_8$ & 3 & 12 & 12 \\ [0.5ex] \hline $E_8$ & 5 & 10 & 10\\ [0.5ex] \hline $F_4$ & 2* & 8 & 6 \\ [0.5ex] \hline $F_4$ & 3 & 6 & 6 \\ [0.5ex] \hline $G_2$ & 2 & 4 & 4 \\ [0.5ex] \hline $G_2$ & 3* & 3 & 2 \\ [0.5ex] \hline \end{tabular} \egroup \end{center} \end{table} We now turn to the application of these propositions. In each case, we take $\chi$ to be regular nilpotent in standard Levi form and we apply one of the propositions or its corollaries to determine a divisibility bound for the dimensions of $U_\chi({\mathfrak g})$-modules. We do this for $\Phi$ of exceptional type. Principally, we compute the centraliser ${\mathfrak c}_{\mathfrak g}(\chi)$ and use its description to determine the bound. For $\Phi=G_2$ we give the explicit computations, but for the larger rank examples the results were obtained using Sage \cite{S.2}. Because of this, when there is a choice we take the structure coefficients to be as used in the Sage class {\fontfamily{pcr}\selectfont LieAlgebraChevalleyBasis\_with\_category}. However, we use the labelling of the simple roots as given in \cite{Sp.2}. \begin{rmk} Our computations of $\dim{\mathfrak c}_{\mathfrak g}(\chi)$ can be compared with the computations of $\dim{\mathfrak c}_{\mathfrak g}(e)$ for $e=\sum_{\alpha\in\Pi}e_\alpha$ which can be deduced from \cite[Cor. 2.5, Thm. 2.6]{Sp.2}. The results are listed in Table~\ref{tab1}.
When ${\mathfrak g}$ is simple, $\chi$ and $e$ are identified through the $G$-equivariant isomorphism ${\mathfrak g}\xrightarrow{\sim}{\mathfrak g}^{*}$, and thus ${\mathfrak c}_{\mathfrak g}(\chi)={\mathfrak c}_{\mathfrak g}(e)$. In the subsections below, we nonetheless include calculations of ${\mathfrak c}_{\mathfrak g}(\chi)$ for the bad primes for which ${\mathfrak g}$ is simple, since we give explicit bases for the centralisers in these cases and in some instances we use such bases to show the reducibility of the corresponding baby Verma modules. In the other cases (which we label with an asterisk (*) in Table~\ref{tab1}), however, we find that the dimensions of ${\mathfrak c}_{\mathfrak g}(e)$ and ${\mathfrak c}_{\mathfrak g}(\chi)$ differ from each other. Note also that we give in Table~\ref{tab1} the dimension of ${\mathfrak c}_{\mathfrak g}(\chi)$ for $G_2$ in characteristic 3, even though we do not give it in Subsection~\ref{G23} below, because it is easy to compute. \end{rmk} \begin{rmk} In our discussion of ${\mathfrak g}$ so far, the Lie algebra ${\mathfrak g}$ of $G$ has been obtained as ${\mathfrak g}={\mathfrak g}_{\mathbb Z}\otimes_{\mathbb Z} {\mathbb K}$, where ${\mathfrak g}_{\mathbb Z}$ is a ${\mathbb Z}$-form of the complex simple Lie algebra ${\mathfrak g}_{\mathbb C}$. In particular, ${\mathfrak g}_{{\mathbb Z}}$ is the ${\mathbb Z}$-form coming from the chosen Chevalley basis of ${\mathfrak g}_{\mathbb C}$, which is what gives our Chevalley basis of ${\mathfrak g}$. We may then also define ${\mathfrak g}_{{\mathbb F}_p}={\mathfrak g}_{\mathbb Z}\otimes_{{\mathbb Z}} {\mathbb F}_p$, so that ${\mathfrak g}={\mathfrak g}_{{\mathbb F}_p}\otimes_{{\mathbb F}_p}{\mathbb K}$. Therefore, if $\chi_{{\mathbb F}_p}:{\mathfrak g}_{{\mathbb F}_p}\to{\mathbb F}_p$ is a linear form, we may define $\chi:{\mathfrak g}\to{\mathbb K}$ by linear extension. It is clear that any $\chi$ in standard Levi form may be obtained in this way.
Our calculations in Sage are calculations with ${\mathfrak g}_{{\mathbb F}_p}$ and $\chi_{{\mathbb F}_p}$ rather than ${\mathfrak g}$. However, when $\chi$ is obtained through scalar extension from an ${\mathbb F}_p$-linear form, the above discussion shows that determining the elements of ${\mathfrak g}$ which lie in ${\mathfrak c}_{\mathfrak g}(\chi)$ comes down to finding solutions to certain linear equations with coefficients in ${\mathbb F}_p$. This in particular shows that ${\mathfrak c}_{{\mathfrak g}_{{\mathbb F}_p}}(\chi_{{\mathbb F}_p})\otimes_{{\mathbb F}_p}{\mathbb K}={\mathfrak c}_{{\mathfrak g}}(\chi)$, so our calculations over ${\mathbb F}_p$ also lead to the results over ${\mathbb K}$. \end{rmk} \subsection{$G_2$ in characteristic 2}\label{G22} Suppose $\Phi=G_2$ and $p=2$. Since $p$ is non-special in this case, we may apply Proposition~\ref{nonspec}. Let us therefore compute $c_{\mathfrak g}(\chi)$. Let $x\in{\mathfrak g}$ be written as $x=\sum_{\gamma\in\Phi}c_\gamma e_\gamma + \sum_{\gamma\in\Pi}d_\gamma h_\gamma$, with the $c_\gamma$, $d_\gamma$ lying in ${\mathbb K}$.
Then the relations required for $x\in c_{{\mathfrak g}}(\chi)$ are as follows: \begin{itemize} \item $0=\chi([x,e_{3\alpha+2\beta}])=0$, \item $0=\chi([x,e_{3\alpha+\beta}])=c_{-3\alpha-2\beta}\chi([e_{-3\alpha-2\beta},e_{3\alpha+\beta}])=c_{-3\alpha-2\beta}\chi(e_{-\beta})= c_{-3\alpha-2\beta}$, \item $0=\chi([x,e_{2\alpha+\beta}])=c_{-3\alpha-\beta}\chi([e_{-3\alpha-\beta},e_{2\alpha+\beta}])=c_{-3\alpha-\beta}\chi(e_{-\alpha})= c_{-3\alpha-\beta},$ \item $0=\chi([x,e_{\alpha+\beta}])=c_{-2\alpha-\beta}\chi([e_{-2\alpha-\beta},e_{\alpha+\beta}])=c_{-2\alpha-\beta}\chi(2e_{-\alpha})=0$, \item $0=\chi([x,e_{\beta}])=c_{-\alpha-\beta}\chi([e_{-\alpha-\beta},e_{\beta}])=c_{-\alpha-\beta}\chi(e_{-\alpha})= c_{-\alpha-\beta}$, \item $0=\chi([x,e_{\alpha}])=c_{-\alpha-\beta}\chi([e_{-\alpha-\beta},e_{\alpha}])=c_{-\alpha-\beta}\chi(-3e_{-\alpha})=-3c_{-\alpha-\beta}=c_{-\alpha-\beta}$, \item $0=\chi([x,h_\alpha])=c_{-\alpha}\chi(\alpha(h_\alpha)e_{-\alpha}) + c_{-\beta}\chi(\beta(h_\alpha)e_{-\beta})=2c_{-\alpha}-3c_{-\beta}=c_{-\beta}$, \item $0=\chi([x,h_\beta])=c_{-\alpha}\chi(\alpha(h_\beta)e_{-\alpha}) + c_{-\beta}\chi(\beta(h_\beta)e_{-\beta})=-c_{-\alpha} +2c_{-\beta}=c_{-\alpha}$, \item $0=\chi([x,e_{-\alpha}])=d_{\alpha}\chi([h_{\alpha},e_{-\alpha}]) + d_{\beta}\chi([h_{\beta},e_{-\alpha}])=d_{\alpha}\chi(-\alpha(h_\alpha)e_{-\alpha}) + d_{\beta}\chi(-\alpha(h_{\beta})e_{-\alpha})=-2d_{\alpha}+d_{\beta}=d_{\beta}$, \item $0=\chi([x,e_{-\beta}])=d_{\alpha}\chi([h_{\alpha},e_{-\beta}]) + d_{\beta}\chi([h_{\beta},e_{-\beta}])=d_{\alpha}\chi(-\beta(h_\alpha)e_{-\beta}) + d_{\beta}\chi(-\beta(h_{\beta})e_{-\beta})=3d_{\alpha}-2d_{\beta}=d_{\alpha}$, \item $0=\chi([x,e_{-\alpha-\beta}])=c_{\alpha}\chi([e_{\alpha},e_{-\alpha-\beta}]) + c_{\beta}\chi([e_{\beta},e_{-\alpha-\beta}])= 3c_{\alpha}\chi(e_{-\beta})- c_{\beta}\chi(e_{-\alpha})=c_{\alpha}+c_{\beta}$, \item $0=\chi([x,e_{-2\alpha-\beta}])=c_{\alpha+\beta}\chi([e_{\alpha+\beta},e_{-2\alpha-\beta}])=- 
2c_{\alpha+\beta}\chi(e_{-\alpha})=0$, \item $0=\chi([x,e_{-3\alpha-\beta}])=c_{2\alpha+\beta}\chi([e_{2\alpha+\beta},e_{-3\alpha-\beta}])=- c_{2\alpha+\beta}\chi(e_{-\alpha})= c_{2\alpha+\beta}$, \item $0=\chi([x,e_{-3\alpha-2\beta}])=c_{3\alpha+\beta}\chi([e_{3\alpha+\beta},e_{-3\alpha-2\beta}])=- c_{3\alpha+\beta}\chi(e_{-\alpha})= c_{3\alpha+\beta}$. \end{itemize} We therefore conclude that $$c_{{\mathfrak g}}(\chi)=\{ae_{-2\alpha-\beta} + b(e_{\alpha}+e_{\beta}) + ce_{\alpha+\beta} + de_{3\alpha+2\beta}\,\vert\,a,b,c,d\in{\mathbb K}\}$$ and so is 4-dimensional. Hence, $d(\chi)=\frac{1}{2}(14-4)=5$, and so by Proposition~\ref{nonspec} we conclude that every finite-dimensional $U_\chi({\mathfrak g})$-module has dimension divisible by $2^5$. We furthermore note that a $U_\chi({\mathfrak g})$-module of dimension $2^5$ does indeed exist in this case. Let $\lambda\in \Lambda_0$ be such that $\lambda(h_\beta)=0$, and let us write $\omega_1\in \Lambda_0$ for the map with $\omega_1(h_\alpha)=1$ and $\omega_1(h_\beta)=0$. We may then define a $U_\chi({\mathfrak g})$-module homomorphism $$Z_\chi(\lambda)\to Z_\chi(\lambda-\omega_1),\qquad v_{\lambda}\mapsto e_{-2\alpha-\beta}v_{\lambda-\omega_1}.$$ This has a kernel of dimension $2^5$ and so both the kernel and image of this homomorphism are $U_\chi({\mathfrak g})$-modules of dimension $2^5$. \subsection{$G_2$ in characteristic 3}\label{G23} Suppose $\Phi=G_2$ and $p=3$. Note that this Lie algebra is not simple, since it has an ideal generated by the short roots. In this case $p$ is not non-special for $\Phi$ so we cannot apply Proposition~\ref{nonspec}. Instead, we want to apply Proposition~\ref{unip}, and so we need to find an appropriate ${\mathfrak m}$. Take ${\mathfrak m}={\mathfrak n}^{-}$. In this case, $\Psi=\Phi^{+}$ is closed and $\chi(e_{-\gamma-\delta})=0$ for all $\gamma,\delta\in \Psi$. In the notation of the previous discussion, we have $\Delta=\{-\alpha,-\beta\}$.
Let $x=\sum_{\gamma\in\Phi^{+}} c_\gamma e_{-\gamma}$. Then the relations required for $x\in c_{{\mathfrak g}}(\chi)$ are as follows: \begin{itemize} \item $0=\chi([x,e_{3\alpha+2\beta}])=0$, \item $0=\chi([x,e_{3\alpha+\beta}])=c_{3\alpha+2\beta}\chi([e_{-3\alpha-2\beta},e_{3\alpha+\beta}])=c_{3\alpha+2\beta}\chi(e_{-\beta})= c_{3\alpha+2\beta}$, \item $0=\chi([x,e_{2\alpha+\beta}])=c_{3\alpha+\beta}\chi([e_{-3\alpha-\beta},e_{2\alpha+\beta}])=c_{3\alpha+\beta}\chi(e_{-\alpha})= c_{3\alpha+\beta}$, \item $0=\chi([x,e_{\alpha+\beta}])=c_{2\alpha+\beta}\chi([e_{-2\alpha-\beta},e_{\alpha+\beta}])=c_{2\alpha+\beta}\chi(2e_{-\alpha})= 2c_{2\alpha+\beta}$, \item $0=\chi([x,e_{\beta}])=c_{\alpha+\beta}\chi([e_{-\alpha-\beta},e_{\beta}])=c_{\alpha+\beta}\chi(e_{-\alpha})=c_{\alpha+\beta}$, \item $0=\chi([x,e_{\alpha}])=c_{\alpha+\beta}\chi([e_{-\alpha-\beta},e_{\alpha}])=c_{\alpha+\beta}\chi(-3e_{-\alpha})=0$, \item $0=\chi([x,h_\alpha])=c_{\alpha}\chi(\alpha(h_\alpha)e_{-\alpha}) + c_{\beta}\chi(\beta(h_\alpha)e_{-\beta})=2c_\alpha -3c_\beta=2c_\alpha$, \item $0=\chi([x,h_\beta])=c_{\alpha}\chi(\alpha(h_\beta)e_{-\alpha}) + c_{\beta}\chi(\beta(h_\beta)e_{-\beta})=-c_\alpha +2c_\beta$. \end{itemize} It is easy to see that these relations force $x=0$, so ${\mathfrak m}\cap c_{{\mathfrak g}}(\chi)=0$. Hence, Proposition~\ref{unip} shows that every finite-dimensional $U_\chi({\mathfrak g})$-module has dimension divisible by $3^6$, which is $3^{\dim{\mathfrak n}^{-}}$. So in this case each baby Verma module $Z_\chi(\lambda)$ is irreducible. \subsection{$F_4$ in characteristic 2}\label{F42} Set $\Phi=F_4$ and $p=2$. Since $p$ is not non-special in this case we need to use Proposition~\ref{unip}; in fact, we use Corollary~\ref{pclos}. Set ${\mathfrak m}$ to be the subspace of ${\mathfrak n}^{-}$ with basis given by the elements $e_{-\alpha}$ for $\alpha\in\Psi\coloneqq\Phi^{+}\setminus\{\alpha_2+ 2\alpha_3\}$. It is straightforward to see that $\Psi$ is 2-closed.
We want to see that ${\mathfrak m}\cap{\mathfrak c}_{\mathfrak g}(\chi)=0$. We do this by giving a basis of ${\mathfrak c}_{\mathfrak g}(\chi)$ as follows: \begin{enumerate} \item $e_{-\alpha_2-\alpha_3-\alpha_4} + e_{-\alpha_2-2\alpha_3}$; \item $e_{\alpha_3} +e_{\alpha_4}$; \item $e_{\alpha_3+\alpha_4}$; \item $e_{\alpha_1+\alpha_2+2\alpha_3+\alpha_4} + e_{\alpha_2+2\alpha_3+2\alpha_4}$; \item $e_{\alpha_1+2\alpha_2+3\alpha_3+\alpha_4}+e_{\alpha_1+2\alpha_2+2\alpha_3+2\alpha_4}$; \item $e_{2\alpha_1+3\alpha_2+4\alpha_3+2\alpha_4}$. \end{enumerate} It is clear from this basis description that ${\mathfrak c}_{\mathfrak g}(\chi)\cap{\mathfrak m}=0$. Hence, Corollary~\ref{pclos} applies and we get that every finite-dimensional $U_\chi({\mathfrak g})$-module has dimension divisible by $2^{\left\vert\Psi\right\vert}=2^{\left\vert\Phi^{+}\right\vert-1}=2^{23}$. Now, set ${\mathfrak r}$ to be the ${\mathbb K}$-subspace of ${\mathfrak g}$ generated by $e_\beta$ for all $\beta\in\Phi^{+}\setminus\{\alpha_3,\alpha_4\}$, by $h_1, h_2+h_3, h_3+h_4$, by $e_{\alpha_3}+e_{\alpha_4}$, and by $e_{-\alpha_2-\alpha_3-\alpha_4}+e_{-\alpha_2-2\alpha_3}, e_{-\alpha_3-\alpha_4}$ and $e_{-\alpha_3}+e_{-\alpha_4}$. This has dimension 29. One may check that it is in fact a subalgebra of ${\mathfrak g}$ (using that the characteristic of ${\mathbb K}$ is 2). One may also check that $\chi([{\mathfrak r},{\mathfrak r}])=0$, and that $\chi({\mathfrak r}^{[p]})=0$ (since the characteristic is 2, we have $(x+y)^{[2]}=[x,y]$ whenever $x^{[2]}=y^{[2]}=0$). Then $U_\chi({\mathfrak r})$ has dimension $2^{29}$ and has a 1-dimensional trivial module ${\mathbb K}_\chi$. Therefore, $U_\chi({\mathfrak g})\otimes_{U_\chi({\mathfrak r})}{\mathbb K}_\chi$ is a $U_\chi({\mathfrak g})$-module of dimension $2^{23}$, so the divisibility bound we found is sharp.
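The numerology in these subsections can be cross-checked against Table~\ref{tab1}: with $\dim{\mathfrak g}$ equal to $14$, $52$, $78$, $133$ and $248$ in types $G_2$, $F_4$, $E_6$, $E_7$ and $E_8$, the quantity $d(\chi)=\frac{1}{2}(\dim{\mathfrak g}-\dim{\mathfrak c}_{\mathfrak g}(\chi))$ can be tabulated at once. A throwaway script of our own (not the Sage code used for the centraliser computations):

```python
# Cross-check of d(chi) = (dim g - dim c_g(chi)) / 2 against the centraliser
# dimensions in Table 1; dim g and |Phi^+| are the standard values for the
# exceptional types.
dim_g = {"E6": 78, "E7": 133, "E8": 248, "F4": 52, "G2": 14}
num_pos_roots = {"E6": 36, "E7": 63, "E8": 120, "F4": 24, "G2": 6}
# (type, p): dim c_g(chi), taken from Table 1.
dim_c = {("E6", 2): 8, ("E6", 3): 10, ("E7", 2): 15, ("E7", 3): 9,
         ("E8", 2): 16, ("E8", 3): 12, ("E8", 5): 10,
         ("F4", 2): 6, ("F4", 3): 6, ("G2", 2): 4, ("G2", 3): 2}

for (typ, p), c in sorted(dim_c.items()):
    d = (dim_g[typ] - c) // 2
    print(typ, p, d, num_pos_roots[typ] - d)  # deficit from N = |Phi^+|
```

For the cases treated in the text this reproduces the quoted exponents ($2^5$ for $G_2$, $2^{23}$ and $3^{23}$ for $F_4$, $2^{35}$ and $3^{34}$ for $E_6$, $2^{59}$ for $E_7$), bearing in mind that Proposition~\ref{nonspec} itself only applies at the non-special primes.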
\subsection{$F_4$ in characteristic 3}\label{F43} Set $\Phi=F_4$ and $p=3$. Since $p$ is non-special in this case, we may apply Proposition~\ref{nonspec}. We must therefore give $c_{\mathfrak g}(\chi)$, and Sage computations show that ${\mathfrak c}_{\mathfrak g}(\chi)$ is the ${\mathbb K}$-subspace of ${\mathfrak g}$ with the following basis: \begin{enumerate} \item $e_{-\alpha_2-2\alpha_3-\alpha_4} + 2e_{-\alpha_1-\alpha_2-\alpha_3-\alpha_4} + e_{-\alpha_1-\alpha_2-2\alpha_3}$; \item $2e_{\alpha_1}+2e_{\alpha_2}+e_{\alpha_3} +e_{\alpha_4}$; \item $e_{\alpha_2+\alpha_3+\alpha_4}+2e_{\alpha_2+2\alpha_3} + 2e_{\alpha_1+\alpha_2+\alpha_3}$; \item $e_{\alpha_2+2\alpha_3+2\alpha_4} + e_{\alpha_1+2\alpha_2+2\alpha_3} + e_{\alpha_1+\alpha_2+2\alpha_3+\alpha_4}$; \item $e_{\alpha_1+2\alpha_2+3\alpha_3+\alpha_4}+2e_{\alpha_1+2\alpha_2+2\alpha_3+2\alpha_4}$; \item $e_{2\alpha_1+3\alpha_2+4\alpha_3+2\alpha_4}$. \end{enumerate} Therefore, $\dim c_{{\mathfrak g}}(\chi)=6$, and so $d(\chi)=23=\left\vert\Phi^{+}\right\vert-1$. Hence, every finite-dimensional $U_\chi({\mathfrak g})$-module has dimension divisible by $3^{23}$. \subsection{$E_6$ in characteristic 2}\label{E62} Suppose $\Phi=E_6$ and $p=2$. Since $p$ is non-special in this case, we may apply Proposition~\ref{nonspec}.
We must therefore give $c_{\mathfrak g}(\chi)$, and Sage computations show that ${\mathfrak c}_{\mathfrak g}(\chi)$ is the ${\mathbb K}$-subspace of ${\mathfrak g}$ with the following basis: \begin{enumerate} \item $e_{-\alpha_2-\alpha_3-\alpha_4}+e_{-\alpha_2-\alpha_3-\alpha_6} + e_{-\alpha_3-\alpha_4-\alpha_6}$; \item $e_{\alpha_1}+e_{\alpha_2}+e_{\alpha_3} +e_{\alpha_4} + e_{\alpha_5} +e_{\alpha_6}$; \item $e_{\alpha_1+\alpha_2}+e_{\alpha_2+\alpha_3} + e_{\alpha_3+\alpha_4} + e_{\alpha_3+\alpha_6} + e_{\alpha_4+\alpha_5}$; \item $e_{\alpha_1+\alpha_2+\alpha_3+\alpha_4} + e_{\alpha_2+\alpha_3+\alpha_4+\alpha_5} + e_{\alpha_1+\alpha_2+\alpha_3+\alpha_6}+e_{\alpha_3+\alpha_4+\alpha_5+\alpha_6}$; \item $e_{\alpha_1+\alpha_2+\alpha_3+\alpha_4+\alpha_6}+e_{\alpha_2+2\alpha_3+\alpha_4+\alpha_6} + e_{\alpha_2+\alpha_3+\alpha_4+\alpha_5+\alpha_6}$; \item $e_{\alpha_1+2\alpha_2+2\alpha_3+\alpha_4+\alpha_6}+e_{\alpha_1+\alpha_2+2\alpha_3+\alpha_4+\alpha_5+\alpha_6} + e_{\alpha_2+2\alpha_3+2\alpha_4+\alpha_5+\alpha_6}$; \item $e_{\alpha_1+2\alpha_2+2\alpha_3+\alpha_4+\alpha_5+\alpha_6} + e_{\alpha_1+\alpha_2+2\alpha_3+2\alpha_4 +\alpha_5 +\alpha_6}$; \item $e_{\alpha_1+2\alpha_2+3\alpha_3+2\alpha_4+\alpha_5+2\alpha_6}$. \end{enumerate} In particular we see that $\dim c_{{\mathfrak g}}(\chi)=8$, and so $d(\chi)=35=\left\vert\Phi^{+}\right\vert-1$. Hence, every finite-dimensional $U_\chi({\mathfrak g})$-module has dimension divisible by $2^{35}$. \subsection{$E_6$ in characteristic 3}\label{E63} Suppose $\Phi=E_6$ and $p=3$. Since $p$ is non-special in this case, we may apply Proposition~\ref{nonspec}.
We must therefore give $c_{\mathfrak g}(\chi)$, and Sage computations show that ${\mathfrak c}_{\mathfrak g}(\chi)$ is the ${\mathbb K}$-subspace of ${\mathfrak g}$ with the following basis: \begin{enumerate} \item $e_{-\alpha_2-\alpha_3-\alpha_4-\alpha_5}+2e_{-\alpha_3-\alpha_4-\alpha_5-\alpha_6} + e_{-\alpha_2-\alpha_3-\alpha_4-\alpha_6} + 2e_{-\alpha_1-\alpha_2-\alpha_3-\alpha_4}+e_{-\alpha_1-\alpha_2-\alpha_3-\alpha_6}$; \item $2e_{-\alpha_1}+e_{-\alpha_2}+2e_{-\alpha_4}+e_{-\alpha_5}$; \item $h_1+2h_2+h_4+2h_5$; \item $e_{\alpha_1}+e_{\alpha_2}+e_{\alpha_3} +e_{\alpha_4} + e_{\alpha_5} +e_{\alpha_6}$; \item $e_{\alpha_1+\alpha_2+\alpha_3}+e_{\alpha_2+\alpha_3+\alpha_4} + e_{\alpha_3+\alpha_4+\alpha_5} + e_{\alpha_2+\alpha_3+\alpha_6} + e_{\alpha_3+\alpha_4+\alpha_6}$; \item $e_{\alpha_1+\alpha_2+\alpha_3+\alpha_4} + e_{\alpha_2+\alpha_3+\alpha_4+\alpha_5} + 2e_{\alpha_1+\alpha_2+\alpha_3+\alpha_6}+2e_{\alpha_3+\alpha_4+\alpha_5+\alpha_6}$; \item $2e_{\alpha_1+\alpha_2+\alpha_3+\alpha_4+\alpha_6}+2e_{\alpha_2+2\alpha_3+\alpha_4+\alpha_6} + e_{\alpha_2+\alpha_3+\alpha_4+\alpha_5+\alpha_6}+2e_{\alpha_1+\alpha_2+\alpha_3+\alpha_4+\alpha_5}$; \item $2e_{\alpha_1+2\alpha_2+2\alpha_3+\alpha_4+\alpha_6}+e_{\alpha_1+\alpha_2+2\alpha_3+\alpha_4+\alpha_5+\alpha_6} + e_{\alpha_2+2\alpha_3+2\alpha_4+\alpha_5+\alpha_6}$; \item $e_{\alpha_1+2\alpha_2+2\alpha_3+\alpha_4+\alpha_5+\alpha_6} + 2e_{\alpha_1+\alpha_2+2\alpha_3+2\alpha_4 +\alpha_5 +\alpha_6}$; \item $e_{\alpha_1+2\alpha_2+3\alpha_3+2\alpha_4+\alpha_5+2\alpha_6}$. \end{enumerate} Hence, $\dim(c_{\mathfrak g}(\chi))=10$ and so $d(\chi)=34=\left\vert\Phi^{+}\right\vert-2$. Proposition~\ref{nonspec} then says that all $U_\chi({\mathfrak g})$-modules have dimension divisible by $3^{34}$.
Now, set ${\mathfrak r}$ to be the ${\mathbb K}$-subspace of ${\mathfrak g}$ generated by $e_\beta$ for all $\beta\in\Phi^{+}$, by $h_1, h_2, h_4, h_5$ and $h_6$, and by $2e_{-\alpha_1}+e_{-\alpha_2}$ and $2e_{-\alpha_4}+e_{-\alpha_5}$. This has dimension 43. One may check that it is in fact a subalgebra of ${\mathfrak g}$ (using that the characteristic is 3). One may also check that $\chi([{\mathfrak r},{\mathfrak r}])=0$ and that $\chi({\mathfrak r}^{[p]})=0$. Then $U_\chi({\mathfrak r})$ has dimension $3^{43}$ and has a 1-dimensional module ${\mathbb K}_\chi$. Therefore $U_\chi({\mathfrak g})\otimes_{U_\chi({\mathfrak r})}{\mathbb K}_\chi$ is a $U_\chi({\mathfrak g})$-module of dimension $3^{35}$. In particular, this shows that it is {\em not} true in this case that all baby Verma modules are irreducible $U_\chi({\mathfrak g})$-modules. It does not, however, show that our divisibility bound of $3^{34}$ is attained. \subsection{$E_7$ in characteristic 2}\label{E72} Suppose $\Phi=E_7$ and $p=2$. Since $p$ is non-special in this case, we may apply Proposition~\ref{nonspec}.
We must therefore give $c_{\mathfrak g}(\chi)$, and Sage computations show that ${\mathfrak c}_{\mathfrak g}(\chi)$ is the ${\mathbb K}$-subspace of ${\mathfrak g}$ with the following basis: \begin{enumerate} \item $e_{-\alpha_1-2\alpha_2-2\alpha_3-2\alpha_4-\alpha_5-\alpha_7} + e_{-\alpha_2-2\alpha_3-2\alpha_4-2\alpha_5-\alpha_6-\alpha_7} + e_{-\alpha_1-\alpha_2-2\alpha_3-2\alpha_4-\alpha_5-\alpha_6-\alpha_7} \\ + e_{-\alpha_1-\alpha_2-\alpha_3-2\alpha_4-2\alpha_5-\alpha_6-\alpha_7}$; \item $e_{-\alpha_1-\alpha_2-\alpha_3-\alpha_4-\alpha_5}+e_{-\alpha_3-2\alpha_4-\alpha_5-\alpha_7} + e_{-\alpha_1-\alpha_2-\alpha_3-\alpha_4-\alpha_7} + e_{-\alpha_2-\alpha_3-\alpha_4-\alpha_5-\alpha_7};$ \item $e_{-\alpha_3-\alpha_4-\alpha_5} + e_{-\alpha_4-\alpha_5-\alpha_7}+e_{-\alpha_3-\alpha_4-\alpha_7}$; \item $e_{-\alpha_1}+e_{-\alpha_3}+e_{-\alpha_7}$; \item $h_1+h_3+h_7$; \item $e_{\alpha_1}+e_{\alpha_2}+e_{\alpha_3} +e_{\alpha_4} + e_{\alpha_5} +e_{\alpha_6}+e_{\alpha_7}$; \item $e_{\alpha_1+\alpha_2}+e_{\alpha_2+\alpha_3} + e_{\alpha_3+\alpha_4} + e_{\alpha_4+\alpha_5} + e_{\alpha_4+\alpha_7}+e_{\alpha_5+\alpha_6}$; \item $e_{\alpha_1+\alpha_2+\alpha_3+\alpha_4} + e_{\alpha_2+\alpha_3+\alpha_4+\alpha_5} + e_{\alpha_2+\alpha_3+\alpha_4+\alpha_7} +e_{\alpha_4+\alpha_5+\alpha_6+\alpha_7}+e_{\alpha_3+\alpha_4+\alpha_5+\alpha_6}$; \item $e_{\alpha_1+\alpha_2+\alpha_3+\alpha_4+\alpha_7}+e_{\alpha_2+\alpha_3+\alpha_4+\alpha_5+\alpha_7} + e_{\alpha_3+\alpha_4+\alpha_5+\alpha_6+\alpha_7}+e_{\alpha_3+2\alpha_4+\alpha_5+\alpha_7}$; \item $e_{\alpha_1+\alpha_2+\alpha_3+2\alpha_4+\alpha_5+\alpha_7}+e_{\alpha_2+2\alpha_3+2\alpha_4+\alpha_5+\alpha_7} + e_{\alpha_2+\alpha_3+2\alpha_4+\alpha_5+\alpha_6+\alpha_7} + e_{\alpha_3+2\alpha_4+2\alpha_5+\alpha_6+\alpha_7}$; \item $e_{\alpha_1+\alpha_2+\alpha_3+2\alpha_4+\alpha_5+\alpha_6+\alpha_7} + e_{\alpha_2+2\alpha_3+2\alpha_4 +\alpha_5 +\alpha_6+\alpha_7} +e_{\alpha_2+\alpha_3+2\alpha_4+2\alpha_5+\alpha_6+\alpha_7}$; \item 
$e_{\alpha_1+2\alpha_2+2\alpha_3+2\alpha_4+\alpha_5+\alpha_7}+e_{\alpha_1+\alpha_2+2\alpha_3+2\alpha_4+\alpha_5+\alpha_6+\alpha_7} + e_{\alpha_1+\alpha_2+\alpha_3+2\alpha_4+2\alpha_5+\alpha_6+\alpha_7}$; \item $e_{\alpha_1+2\alpha_2+2\alpha_3+2\alpha_4+2\alpha_5+\alpha_6+\alpha_7}+e_{\alpha_1+\alpha_2+2\alpha_3+3\alpha_4+2\alpha_5+\alpha_6+\alpha_7} + e_{\alpha_2+2\alpha_3+3\alpha_4+2\alpha_5+\alpha_6+2\alpha_7}$; \item $e_{\alpha_1+2\alpha_2+3\alpha_3+3\alpha_4+2\alpha_5+\alpha_6+\alpha_7} + e_{\alpha_1+2\alpha_2+2\alpha_3+3\alpha_4+2\alpha_5+\alpha_6+2\alpha_7}$; \item $e_{\alpha_1+2\alpha_2+3\alpha_3+4\alpha_4+3\alpha_5+2\alpha_6+2\alpha_7}$. \end{enumerate} We conclude that $\dim({\mathfrak c}_{\mathfrak g}(\chi))=15$ and so $d(\chi)=59=\left\vert\Phi^{+}\right\vert-4$. We then conclude from Proposition~\ref{nonspec} that every finite-dimensional $U_\chi({\mathfrak g})$-module has dimension divisible by $2^{59}$. \subsection{$E_7$ in characteristic 3}\label{E73} Suppose $\Phi=E_7$ and $p=3$. Since $p$ is non-special in this case, we may apply Proposition~\ref{nonspec}.
We must therefore determine ${\mathfrak c}_{\mathfrak g}(\chi)$, and Sage computations show that ${\mathfrak c}_{\mathfrak g}(\chi)$ is the ${\mathbb K}$-subspace of ${\mathfrak g}$ with the following basis: \begin{enumerate} \item $e_{-\alpha_2-\alpha_3-\alpha_4-\alpha_5}+2e_{-\alpha_2-\alpha_3-\alpha_4-\alpha_7} + 2e_{-\alpha_3-\alpha_4-\alpha_5-\alpha_6} +e_{-\alpha_3-\alpha_4-\alpha_5-\alpha_7}+e_{-\alpha_4-\alpha_5-\alpha_6-\alpha_7}$; \item $e_{\alpha_1}+e_{\alpha_2}+e_{\alpha_3} +e_{\alpha_4} + e_{\alpha_5} +e_{\alpha_6}+e_{\alpha_7}$; \item $e_{\alpha_1+\alpha_2+\alpha_3}+e_{\alpha_2+\alpha_3+\alpha_4} + e_{\alpha_3+\alpha_4+\alpha_5} + e_{\alpha_3+\alpha_4+\alpha_7} + e_{\alpha_4+\alpha_5+\alpha_6}+e_{\alpha_4+\alpha_5+\alpha_7}$; \item $e_{\alpha_1+\alpha_2+\alpha_3+\alpha_4+\alpha_5} + e_{\alpha_2+\alpha_3+\alpha_4+\alpha_5+\alpha_6} + 2e_{\alpha_2+\alpha_3+\alpha_4+\alpha_5+\alpha_7} +e_{\alpha_3+2\alpha_4+\alpha_5+\alpha_7}+e_{\alpha_3+\alpha_4+\alpha_5+\alpha_6+\alpha_7}$; \item $2e_{\alpha_1+\alpha_2+\alpha_3+2\alpha_4+\alpha_5+\alpha_7}+e_{\alpha_1+\alpha_2+\alpha_3+\alpha_4+\alpha_5+\alpha_6+\alpha_7} + e_{\alpha_2+2\alpha_3+2\alpha_4+\alpha_5+\alpha_7}+e_{\alpha_2+\alpha_3+2\alpha_4+\alpha_5+\alpha_6+\alpha_7}\\+2e_{\alpha_3+2\alpha_4+2\alpha_5+\alpha_6+\alpha_7}$; \item $e_{\alpha_1+2\alpha_2+2\alpha_3+2\alpha_4+\alpha_5+\alpha_7}+e_{\alpha_1+\alpha_2+2\alpha_3+2\alpha_4+\alpha_5+\alpha_6+\alpha_7} + 2e_{\alpha_1+\alpha_2+\alpha_3+2\alpha_4+2\alpha_5+\alpha_6+\alpha_7}$; \item $e_{\alpha_1+2\alpha_2+2\alpha_3+2\alpha_4+2\alpha_5+\alpha_6+\alpha_7} + 2e_{\alpha_1+\alpha_2+2\alpha_3+3\alpha_4 +2\alpha_5 +\alpha_6+\alpha_7} +e_{\alpha_2+2\alpha_3+3\alpha_4+2\alpha_5+\alpha_6+2\alpha_7}$; \item $e_{\alpha_1+2\alpha_2+3\alpha_3+3\alpha_4+2\alpha_5+\alpha_6+\alpha_7}+2e_{\alpha_1+2\alpha_2+2\alpha_3+3\alpha_4+2\alpha_5+\alpha_6+2\alpha_7}$; \item $e_{\alpha_1+2\alpha_2+3\alpha_3+4\alpha_4+3\alpha_5+2\alpha_6+2\alpha_7}$.
\end{enumerate} In particular we see that $\dim {\mathfrak c}_{\mathfrak g}(\chi)=9$, and so $d(\chi)=62=\left\vert\Phi^{+}\right\vert-1$. Every finite-dimensional $U_\chi({\mathfrak g})$-module therefore has dimension divisible by $3^{62}$. \subsection{$E_8$ in characteristic 2}\label{E82} Suppose $\Phi=E_8$ and $p=2$. Since $p$ is non-special in this case, we may apply Proposition~\ref{nonspec}. We must therefore determine ${\mathfrak c}_{\mathfrak g}(\chi)$, and Sage computations show that ${\mathfrak c}_{\mathfrak g}(\chi)$ is the ${\mathbb K}$-subspace of ${\mathfrak g}$ with the following basis: \begin{enumerate} \item $e_{-\alpha_2-2\alpha_3-3\alpha_4-4\alpha_5 - 2\alpha_6-\alpha_7-2\alpha_8}+e_{-\alpha_1-\alpha_2-2\alpha_3-3\alpha_4 -3\alpha_5-2\alpha_6-\alpha_7-2\alpha_8} + e_{-\alpha_1-2\alpha_2-2\alpha_3-3\alpha_4-3\alpha_5-2\alpha_6-\alpha_7-\alpha_8} \\ +e_{-\alpha_1-2\alpha_2-2\alpha_3-2\alpha_4-3\alpha_5-2\alpha_6-\alpha_7-2\alpha_8}$; \item $e_{-\alpha_2-2\alpha_3-2\alpha_4-2\alpha_5-\alpha_6-\alpha_8} + e_{-\alpha_2-\alpha_3-\alpha_4-2\alpha_5-2\alpha_6-\alpha_7-\alpha_8} + e_{-\alpha_2-\alpha_3-2\alpha_4-2\alpha_5-\alpha_6-\alpha_7-\alpha_8} \\ + e_{-\alpha_3-2\alpha_4-2\alpha_5-2\alpha_6-\alpha_7-\alpha_8}$; \item $e_{-\alpha_2-\alpha_3-\alpha_4-\alpha_5-\alpha_6}+e_{-\alpha_2-\alpha_3-\alpha_4-\alpha_5-\alpha_8} + e_{-\alpha_4-2\alpha_5-\alpha_6-\alpha_8} + e_{-\alpha_3-\alpha_4-\alpha_5-\alpha_6-\alpha_8}$; \item $e_{-\alpha_4-\alpha_5-\alpha_6} + e_{-\alpha_4-\alpha_5-\alpha_8} + e_{-\alpha_5-\alpha_6-\alpha_8}$; \item $e_{\alpha_1}+e_{\alpha_2}+e_{\alpha_3} +e_{\alpha_4} + e_{\alpha_5} +e_{\alpha_6}+e_{\alpha_7}+e_{\alpha_8}$; \item $e_{\alpha_1+\alpha_2}+e_{\alpha_2+\alpha_3} + e_{\alpha_3+\alpha_4} + e_{\alpha_4+\alpha_5} + e_{\alpha_5+\alpha_6}+e_{\alpha_5+\alpha_8} + e_{\alpha_6+\alpha_7}$; \item $e_{\alpha_1+\alpha_2+\alpha_3+\alpha_4} + e_{\alpha_2+\alpha_3+\alpha_4+\alpha_5} +
e_{\alpha_3+\alpha_4+\alpha_5+\alpha_6} +e_{\alpha_3+\alpha_4+\alpha_5+\alpha_8}+e_{\alpha_5+\alpha_6+\alpha_7+\alpha_8} + e_{\alpha_4+\alpha_5+\alpha_6+\alpha_7}$; \item $e_{\alpha_1+\alpha_2+\alpha_3+\alpha_4+\alpha_5+\alpha_6+\alpha_8}+e_{\alpha_2+\alpha_3+\alpha_4+2\alpha_5+\alpha_6+\alpha_8} +e_{\alpha_3+2\alpha_4+2\alpha_5+\alpha_6+\alpha_8}+e_{\alpha_1+\alpha_2+\alpha_3+\alpha_4+\alpha_5+\alpha_6+\alpha_7} \\ +e_{\alpha_4+2\alpha_5+2\alpha_6+\alpha_7+\alpha_8} + e_{\alpha_3+\alpha_4+2\alpha_5+\alpha_6+\alpha_7+\alpha_8}$; \item $e_{\alpha_1+\alpha_2+\alpha_3+\alpha_4+\alpha_5+\alpha_6+\alpha_7+\alpha_8}+e_{\alpha_2+\alpha_3+\alpha_4+2\alpha_5+\alpha_6+\alpha_7+\alpha_8} + e_{\alpha_3+2\alpha_4+2\alpha_5+\alpha_6+\alpha_7+\alpha_8}\\+e_{\alpha_3+\alpha_4+2\alpha_5+2\alpha_6+\alpha_7+\alpha_8}$; \item $e_{\alpha_3+2\alpha_4+3\alpha_5+2\alpha_6+\alpha_7+2\alpha_8} + e_{\alpha_2+2\alpha_3+2\alpha_4 +2\alpha_5 +2\alpha_6+\alpha_7+\alpha_8}+e_{\alpha_2+\alpha_3+2\alpha_4+3\alpha_5+2\alpha_6+\alpha_7+\alpha_8}\\+e_{\alpha_1+\alpha_2+\alpha_3+2\alpha_4+2\alpha_5+2\alpha_6+\alpha_7+\alpha_8}$; \item $e_{\alpha_2+2\alpha_3+3\alpha_4+3\alpha_5+2\alpha_6+\alpha_7+\alpha_8}+e_{\alpha_2+2\alpha_3+2\alpha_4+3\alpha_5+2\alpha_6+\alpha_7+2\alpha_8}+e_{\alpha_1+2\alpha_2+2\alpha_3+2\alpha_4+2\alpha_5+2\alpha_6+\alpha_7+\alpha_8}\\+e_{\alpha_1+\alpha_2+2\alpha_3+2\alpha_4+3\alpha_5+2\alpha_6+\alpha_7+\alpha_8}$; \item $e_{\alpha_1+\alpha_2+2\alpha_3+3\alpha_4+3\alpha_5+2\alpha_6+\alpha_7+\alpha_8}+e_{\alpha_1+\alpha_2+2\alpha_3+2\alpha_4+3\alpha_5+2\alpha_6+\alpha_7+2\alpha_8} + e_{\alpha_1+2\alpha_2+2\alpha_3+2\alpha_4+3\alpha_5+2\alpha_6+\alpha_7+\alpha_8}$; \item $e_{\alpha_2+2\alpha_3+3\alpha_4+4\alpha_5+3\alpha_6+2\alpha_7+2\alpha_8} + e_{\alpha_1+\alpha_2+2\alpha_3+3\alpha_4+4\alpha_5+3\alpha_6+\alpha_7+2\alpha_8} + e_{\alpha_1+2\alpha_2+3\alpha_3+3\alpha_4+3\alpha_5+2\alpha_6+\alpha_7+2\alpha_8} \\+ 
e_{\alpha_1+2\alpha_2+2\alpha_3+3\alpha_4+4\alpha_5+2\alpha_6+\alpha_7+2\alpha_8}$; \item $e_{\alpha_1+2\alpha_2+3\alpha_3+4\alpha_4+4\alpha_5+2\alpha_6+\alpha_7+2\alpha_8} + e_{\alpha_1+2\alpha_2+2\alpha_3+3\alpha_4+4\alpha_5+3\alpha_6+2\alpha_7+2\alpha_8} + e_{\alpha_1+2\alpha_2+3\alpha_3+3\alpha_4+4\alpha_5+3\alpha_6+\alpha_7+2\alpha_8}$; \item $e_{\alpha_1+2\alpha_2+3\alpha_3+4\alpha_4+5\alpha_5+4\alpha_6+2\alpha_7+2\alpha_8} + e_{\alpha_1+2\alpha_2+3\alpha_3+4\alpha_4+5\alpha_5+3\alpha_6+2\alpha_7+3\alpha_8}$; \item $e_{2\alpha_1+3\alpha_2+4\alpha_3+5\alpha_4+6\alpha_5+4\alpha_6+2\alpha_7+3\alpha_8}$. \end{enumerate} In particular, $\dim({\mathfrak c}_{\mathfrak g}(\chi))=16$ and so $d(\chi)=116=\left\vert \Phi^{+}\right\vert -4$. Proposition~\ref{nonspec} then says that all finite-dimensional $U_\chi({\mathfrak g})$-modules have dimension divisible by $2^{116}$. \subsection{$E_8$ in characteristic 3}\label{E83} Suppose $\Phi=E_8$ and $p=3$. Since $p$ is non-special in this case, we may apply Proposition~\ref{nonspec}.
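As in the previous cases, the exponent is governed by the centralizer: $d(\chi)$ is half the dimension of the coadjoint orbit of $\chi$, i.e. $d(\chi)=\tfrac{1}{2}(\dim{\mathfrak g}-\dim{\mathfrak c}_{\mathfrak g}(\chi))$. The values obtained so far can be cross-checked against this relation, using the standard dimensions $\dim E_7=133$, $\dim E_8=248$ and $\left\vert\Phi^{+}(E_7)\right\vert=63$, $\left\vert\Phi^{+}(E_8)\right\vert=120$; the following bookkeeping script is our sketch, not part of the Sage computations referred to above:

```python
# Cross-check d(chi) = (dim g - dim c_g(chi)) / 2 against the reported
# values, each of which the text writes in the form |Phi^+| - k.
cases = [
    # (label, dim_g, |Phi^+|, dim c_g(chi), reported d(chi))
    ("E7, p=2", 133, 63, 15, 59),
    ("E7, p=3", 133, 63, 9, 62),
    ("E8, p=2", 248, 120, 16, 116),
]
for label, dim_g, npos, dim_c, d_reported in cases:
    d = (dim_g - dim_c) // 2   # half the coadjoint-orbit dimension
    assert d == d_reported, label
    print(label, ": d(chi) =", d, "= |Phi^+| -", npos - d)
```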
We must therefore determine ${\mathfrak c}_{\mathfrak g}(\chi)$, and Sage computations show that ${\mathfrak c}_{\mathfrak g}(\chi)$ is the ${\mathbb K}$-subspace of ${\mathfrak g}$ with the following basis: \begin{enumerate} \item $e_{-\alpha_1-\alpha_2-2\alpha_3-2\alpha_4 - 2\alpha_5-\alpha_6-\alpha_8}+e_{-\alpha_1-\alpha_2-\alpha_3-2\alpha_4 -2\alpha_5-\alpha_6-\alpha_7-\alpha_8} + e_{-\alpha_2-2\alpha_3-2\alpha_4-2\alpha_5-\alpha_6-\alpha_7-\alpha_8}\\+2e_{-\alpha_1-\alpha_2-\alpha_3-\alpha_4-2\alpha_5-2\alpha_6-\alpha_7-\alpha_8}+2e_{-\alpha_3-2\alpha_4-3\alpha_5-2\alpha_6-\alpha_7-\alpha_8}+e_{-\alpha_2-\alpha_3-2\alpha_4-2\alpha_5-2\alpha_6-\alpha_7-\alpha_8}$; \item $e_{-\alpha_3-\alpha_4-\alpha_5-\alpha_6} + 2e_{-\alpha_3-\alpha_4-\alpha_5-\alpha_8} + e_{-\alpha_4-\alpha_5-\alpha_6-\alpha_8} + e_{-\alpha_5-\alpha_6-\alpha_7-\alpha_8} + 2e_{-\alpha_4-\alpha_5-\alpha_6-\alpha_7}$; \item $e_{\alpha_1}+e_{\alpha_2}+e_{\alpha_3} +e_{\alpha_4} + e_{\alpha_5} +e_{\alpha_6}+e_{\alpha_7}+e_{\alpha_8}$; \item $e_{\alpha_1+\alpha_2+\alpha_3}+e_{\alpha_2+\alpha_3+\alpha_4} + e_{\alpha_3+\alpha_4+\alpha_5} + e_{\alpha_4+\alpha_5+\alpha_6} + e_{\alpha_4+\alpha_5+\alpha_8}+e_{\alpha_5+\alpha_6+\alpha_8} + e_{\alpha_5+\alpha_6+\alpha_7}$; \item $e_{\alpha_1+\alpha_2+\alpha_3+\alpha_4+\alpha_5+\alpha_6+\alpha_8} + 2e_{\alpha_2+\alpha_3+\alpha_4+2\alpha_5+\alpha_6+\alpha_8} + e_{\alpha_3+2\alpha_4+2\alpha_5+\alpha_6+\alpha_8}\\+2e_{\alpha_4+2\alpha_5+2\alpha_6+\alpha_7+\alpha_8}+e_{\alpha_2+\alpha_3+\alpha_4+\alpha_5+\alpha_6+\alpha_7+\alpha_8} + e_{\alpha_3+\alpha_4+2\alpha_5+\alpha_6+\alpha_7+\alpha_8}$; \item $e_{\alpha_1+\alpha_2+\alpha_3+\alpha_4+2\alpha_5+\alpha_6+\alpha_7+\alpha_8}+2e_{\alpha_1+\alpha_2+\alpha_3+2\alpha_4+2\alpha_5+\alpha_6+\alpha_8} +2e_{\alpha_2+\alpha_3+\alpha_4+2\alpha_5+2\alpha_6+\alpha_7+\alpha_8}\\+e_{\alpha_2+\alpha_3+2\alpha_4+2\alpha_5+\alpha_6+\alpha_7+\alpha_8}+e_{\alpha_2+2\alpha_3+2\alpha_4+2\alpha_5+\alpha_6+\alpha_8}$; \item
$e_{\alpha_1+2\alpha_2+2\alpha_3+2\alpha_4+2\alpha_5+\alpha_6+\alpha_8}+e_{\alpha_1+\alpha_2+2\alpha_3+2\alpha_4+2\alpha_5+\alpha_6+\alpha_7+\alpha_8} + e_{\alpha_3+2\alpha_4+3\alpha_5+2\alpha_6+\alpha_7+2\alpha_8}\\+e_{\alpha_2+2\alpha_3+2\alpha_4+2\alpha_5+2\alpha_6+\alpha_7+\alpha_8}+2e_{\alpha_2+\alpha_3+2\alpha_4+3\alpha_5+2\alpha_6+\alpha_7+\alpha_8} + e_{\alpha_1+\alpha_2+\alpha_3+2\alpha_4+2\alpha_5+2\alpha_6+\alpha_7+\alpha_8}$; \item $e_{\alpha_2+2\alpha_3+3\alpha_4+3\alpha_5+2\alpha_6+\alpha_7+\alpha_8} + 2e_{\alpha_2+2\alpha_3+2\alpha_4 +3\alpha_5 +2\alpha_6+\alpha_7+2\alpha_8}+e_{\alpha_1+2\alpha_2+2\alpha_3+2\alpha_4+2\alpha_5+2\alpha_6+\alpha_7+\alpha_8}\\+2e_{\alpha_1+\alpha_2+\alpha_3+2\alpha_4+3\alpha_5+2\alpha_6+\alpha_7+2\alpha_8} + 2e_{\alpha_1+\alpha_2+2\alpha_3+2\alpha_4+3\alpha_5+2\alpha_6+\alpha_7+\alpha_8}$; \item $e_{\alpha_2+2\alpha_3+3\alpha_4+4\alpha_5+3\alpha_6+2\alpha_7+2\alpha_8}+2e_{\alpha_1+\alpha_2+2\alpha_3+3\alpha_4+4\alpha_5+3\alpha_6+\alpha_7+2\alpha_8}+2e_{\alpha_1+2\alpha_2+3\alpha_3+3\alpha_4+3\alpha_5+2\alpha_6+\alpha_7+2\alpha_8}\\+e_{\alpha_1+2\alpha_2+2\alpha_3+3\alpha_4+4\alpha_5+2\alpha_6+\alpha_7+2\alpha_8}$; \item $e_{\alpha_1+2\alpha_2+3\alpha_3+4\alpha_4+4\alpha_5+2\alpha_6+\alpha_7+2\alpha_8}+e_{\alpha_1+2\alpha_2+2\alpha_3+3\alpha_4+4\alpha_5+3\alpha_6+2\alpha_7+2\alpha_8} \\+ 2e_{\alpha_1+2\alpha_2+3\alpha_3+3\alpha_4+4\alpha_5+3\alpha_6+\alpha_7+2\alpha_8}$; \item $e_{\alpha_1+2\alpha_2+3\alpha_3+4\alpha_4+5\alpha_5+4\alpha_6+2\alpha_7+2\alpha_8} + 2e_{\alpha_1+2\alpha_2+3\alpha_3+4\alpha_4+5\alpha_5+3\alpha_6+2\alpha_7+3\alpha_8}$; \item $e_{2\alpha_1+3\alpha_2+4\alpha_3+5\alpha_4+6\alpha_5+4\alpha_6+2\alpha_7+3\alpha_8}$. \end{enumerate} Therefore $\dim({\mathfrak c}_{\mathfrak g}(\chi))=12$ and so $d(\chi)=118=\left\vert \Phi^{+}\right\vert -2$.
Proposition~\ref{nonspec} then says that each finite-dimensional $U_\chi({\mathfrak g})$-module has dimension divisible by $3^{118}$. \subsection{$E_8$ in characteristic 5}\label{E85} Suppose $\Phi=E_8$ and $p=5$. Since $p$ is non-special in this case, we may apply Proposition~\ref{nonspec}. We must therefore determine ${\mathfrak c}_{\mathfrak g}(\chi)$, and Sage computations show that ${\mathfrak c}_{\mathfrak g}(\chi)$ is the ${\mathbb K}$-subspace of ${\mathfrak g}$ with the following basis: \begin{enumerate} \item $e_{-\alpha_1-\alpha_2-\alpha_3-\alpha_4 - \alpha_5-\alpha_6}+4e_{-\alpha_1-\alpha_2-\alpha_3-\alpha_4 -\alpha_5-\alpha_8} + e_{-\alpha_2-\alpha_3-\alpha_4-\alpha_5-\alpha_6-\alpha_8}+2e_{-\alpha_3-\alpha_4-2\alpha_5-\alpha_6-\alpha_8}\\+2e_{-\alpha_2-\alpha_3-\alpha_4-\alpha_5-\alpha_6-\alpha_7}+2e_{-\alpha_4-2\alpha_5-\alpha_6-\alpha_7-\alpha_8} + 3e_{-\alpha_3-\alpha_4-\alpha_5-\alpha_6-\alpha_7-\alpha_8}$; \item $e_{\alpha_1}+e_{\alpha_2}+e_{\alpha_3} +e_{\alpha_4} + e_{\alpha_5} +e_{\alpha_6}+e_{\alpha_7}+e_{\alpha_8}$; \item $e_{\alpha_1+\alpha_2+\alpha_3+\alpha_4+\alpha_5}+e_{\alpha_2+\alpha_3+\alpha_4+\alpha_5+\alpha_6} + e_{\alpha_2+\alpha_3+\alpha_4+\alpha_5+\alpha_8} + 2e_{\alpha_4+2\alpha_5+\alpha_6+\alpha_8} + 3e_{\alpha_3+\alpha_4+\alpha_5+\alpha_6+\alpha_8}\\ +e_{\alpha_3+\alpha_4+\alpha_5+\alpha_6+\alpha_7}+2e_{\alpha_4+\alpha_5+\alpha_6+\alpha_7+\alpha_8}$; \item $e_{\alpha_1+\alpha_2+\alpha_3+\alpha_4+\alpha_5+\alpha_6+\alpha_8} + 4e_{\alpha_2+\alpha_3+\alpha_4+2\alpha_5+\alpha_6+\alpha_8} + e_{\alpha_3+2\alpha_4+2\alpha_5+\alpha_6+\alpha_8}+3e_{\alpha_1+\alpha_2+\alpha_3+\alpha_4+\alpha_5+\alpha_6+\alpha_7}\\+4e_{\alpha_4+2\alpha_5+2\alpha_6+\alpha_7+\alpha_8}+3e_{\alpha_2+\alpha_3+\alpha_4+\alpha_5+\alpha_6+\alpha_7+\alpha_8} + e_{\alpha_3+\alpha_4+2\alpha_5+\alpha_6+\alpha_7+\alpha_8}$; \item $e_{\alpha_1+2\alpha_2+2\alpha_3+2\alpha_4+2\alpha_5+\alpha_6+\alpha_8}+e_{\alpha_1+\alpha_2+2\alpha_3+2\alpha_4+2\alpha_5+\alpha_6+\alpha_7+\alpha_8} +
2e_{\alpha_3+2\alpha_4+3\alpha_5+2\alpha_6+\alpha_7+2\alpha_8}\\+2e_{\alpha_3+2\alpha_4+3\alpha_5+2\alpha_6+\alpha_7+2\alpha_8}+3e_{\alpha_2+\alpha_3+2\alpha_4+3\alpha_5+2\alpha_6+\alpha_7+\alpha_8}+2e_{\alpha_1+\alpha_2+\alpha_3+2\alpha_4+2\alpha_5+2\alpha_6+\alpha_7+\alpha_8}$; \item $e_{\alpha_2+2\alpha_3+3\alpha_4+3\alpha_5+2\alpha_6+\alpha_7+\alpha_8}+4e_{\alpha_2+2\alpha_3+2\alpha_4+3\alpha_5+2\alpha_6+\alpha_7+2\alpha_8} + e_{\alpha_1+2\alpha_2+2\alpha_3+2\alpha_4+2\alpha_5+2\alpha_6+\alpha_7+\alpha_8}\\+2e_{\alpha_1+\alpha_2+\alpha_3+2\alpha_4+3\alpha_5+2\alpha_6+\alpha_7+2\alpha_8}+4e_{\alpha_1+\alpha_2+2\alpha_3+2\alpha_4+3\alpha_5+2\alpha_6+\alpha_7+\alpha_8}$; \item $e_{\alpha_2+2\alpha_3+3\alpha_4+4\alpha_5+3\alpha_6+2\alpha_7+2\alpha_8} + 4e_{\alpha_1+\alpha_2+2\alpha_3+3\alpha_4 +4\alpha_5 +3\alpha_6+\alpha_7+2\alpha_8}+4e_{\alpha_1+2\alpha_2+3\alpha_3+3\alpha_4+3\alpha_5+2\alpha_6+\alpha_7+2\alpha_8}\\+e_{\alpha_1+2\alpha_2+2\alpha_3+3\alpha_4+4\alpha_5+2\alpha_6+\alpha_7+2\alpha_8}$; \item $e_{\alpha_1+2\alpha_2+3\alpha_3+4\alpha_4+4\alpha_5+2\alpha_6+\alpha_7+2\alpha_8}+e_{\alpha_1+2\alpha_2+2\alpha_3+3\alpha_4+4\alpha_5+3\alpha_6+2\alpha_7+2\alpha_8}\\+4e_{\alpha_1+2\alpha_2+3\alpha_3+3\alpha_4+4\alpha_5+3\alpha_6+\alpha_7+2\alpha_8}$; \item $e_{\alpha_1+2\alpha_2+3\alpha_3+4\alpha_4+5\alpha_5+4\alpha_6+2\alpha_7+2\alpha_8}+4e_{\alpha_1+2\alpha_2+3\alpha_3+4\alpha_4+5\alpha_5+3\alpha_6+2\alpha_7+3\alpha_8}$; \item $e_{2\alpha_1+3\alpha_2+4\alpha_3+5\alpha_4+6\alpha_5+4\alpha_6+2\alpha_7+3\alpha_8}$. \end{enumerate} In particular we see that $\dim {\mathfrak c}_{\mathfrak g}(\chi)=10$, and so $d(\chi)=119=\left\vert\Phi^{+}\right\vert-1$. Hence, every finite-dimensional $U_\chi({\mathfrak g})$-module has dimension divisible by $5^{119}$. \end{document}
\begin{document} \title[]{$f$-EIKONAL HELIX SUBMANIFOLDS AND $f$-EIKONAL HELIX CURVES} \author{Evren Z\i plar} \address{Department of Mathematics, Faculty of Science, University of Ankara, Tando\u{g}an, Turkey} \email{[email protected]} \urladdr{} \author{Ali \c{S}enol} \address{Department of Mathematics, Faculty of Science, \c{C}ank\i r\i\ Karatekin University, \c{C}ank\i r\i , Turkey} \email{[email protected]} \author{Yusuf Yayl\i } \address{Department of Mathematics, Faculty of Science, University of Ankara, Tando\u{g}an, Turkey} \email{[email protected]} \thanks{} \urladdr{} \date{} \subjclass[2000]{ \ 53A04, 53B25, 53C40, 53C50.} \keywords{Helix submanifold; Eikonal function; Helix line.\\ Corresponding author: Evren Z\i plar, e-mail: [email protected]} \thanks{} \begin{abstract} Let $M\subset \mathbb{R} ^{n}$ be a Riemannian helix submanifold with respect to the unit direction $ d\in \mathbb{R} ^{n}$ and let $f:M\rightarrow \mathbb{R} $ be an eikonal function. We say that $M$ is an $f$-eikonal helix submanifold if for each $q\in M$ the angle between $\nabla f$ and $d$ is constant. Let $ M\subset \mathbb{R} ^{n}$ be a Riemannian submanifold and $\alpha :I\rightarrow M$ be a curve with unit tangent $T$. Let $f:M\rightarrow \mathbb{R} $ be an eikonal function along the curve $\alpha $. We say that $\alpha $ is an $f$-eikonal helix curve if the angle between $\nabla f$ and $T$ is constant along the curve $\alpha $; $\nabla f$ will be called the axis of the $f$-eikonal helix curve. The aim of this article is to establish relations between $f$-eikonal helix submanifolds and $f$-eikonal helix curves, and to investigate $f$-eikonal helix curves on Riemannian manifolds.
\end{abstract} \maketitle \section{Introduction} In the differential geometry of manifolds, a helix submanifold of $ \mathbb{R} ^{n}$ with respect to a fixed direction $d$ in $ \mathbb{R} ^{n}$ is defined by the property that its tangent spaces make a constant angle with the fixed direction $d$ (the helix direction); see [3]. Di Scala and Ruiz-Hern\'{a}ndez introduced the concept of these manifolds in [3]. Recently, M. Ghomi worked on the shadow problem posed by H. Wente and discussed the shadow boundary in [6]. Ruiz-Hern\'{a}ndez showed in [10] that shadow boundaries are related to helix submanifolds. Helix hypersurfaces have been studied in nonflat ambient spaces in [4,5]. Cermelli and Di Scala have also studied helix hypersurfaces in liquid crystals in [2]. The plan of this article is as follows. In Section 2, we give some important definitions and remarks which will be used in the other sections. In Section 3, we define $f$-eikonal helix submanifolds and $f$-eikonal helix curves. We also give an important property relating $f$-eikonal helix submanifolds to $f$-eikonal helix curves, see Theorem 3.2. In Theorems 3.1 and 3.3, we characterize when a curve on a manifold is an $f$-eikonal helix curve. Moreover, we give a relation between geodesic curves and $f$-eikonal helix curves, see Theorem 3.4. In Section 4, working in a 3-dimensional Riemannian manifold, we determine the axis of an $f$-eikonal helix curve and give a relation between the curvatures of the curve in Theorem 4.1. We then give a corollary concerning helix submanifolds. In Section 5, we specify the relation between $f$-eikonal helix curves and general helices. \section{Basic Definitions} \begin{definition} Given a submanifold $M\subset \mathbb{R} ^{n}$ and a unitary vector $d$ in $ \mathbb{R} ^{n}$, we say that $M$ is a helix with respect to $d$ if for each $q\in M$ the angle between $d$ and $T_{q}M$ is constant.
Let us recall that a unitary vector $d$ can be decomposed into its tangent and orthogonal components along the submanifold $M$, i.e. $d=\cos (\theta )T^{\ast }+\sin (\theta )\xi $ with $\left\Vert T^{\ast }\right\Vert =\left\Vert \xi \right\Vert =1$, where $T^{\ast }\in TM$ and $\xi \in \vartheta (M)$. The angle between $d$ and $T_{q}M$ is constant if and only if the tangential component of $d$ has constant length $\left\Vert \cos (\theta )T^{\ast }\right\Vert =\cos (\theta )$. We can assume that $0<\theta <\frac{ \pi }{2}$ and we say that $M$ is a helix of angle $\theta $. We will call $T^{\ast }$ and $\xi $ the tangent and normal directions of the helix submanifold $M$. We call $d$ the helix direction of $M$ and we will always assume $d$ to be unitary [3]. \end{definition} \begin{definition} Let $M\subset \mathbb{R} ^{n}$ be a helix submanifold of angle $\theta \neq \frac{\pi }{2}$ with respect to the direction $d\in \mathbb{R} ^{n}$. We will call the integral curves of the tangent direction $T^{\ast }$ of the helix $M$ the helix lines of $M$ with respect to $d$ [3]. \end{definition} \noindent \textbf{Remark 2.1 }\textit{We say that }$\xi $\textit{\ is parallel normal in the direction }$X\in TM$\textit{\ if }$\ \nabla _{X}^{\perp }\xi =0$\textit{. Here, }$\nabla ^{\perp }$\textit{\ denotes the normal connection of }$M$\textit{\ induced by the standard covariant derivative of the Euclidean ambient space. We denote by }$D$\textit{\ the standard covariant derivative in }$ \mathbb{R} ^{n}$\textit{\ and by }$\nabla $\textit{\ the induced covariant derivative in }$M$\textit{\ [3].} \begin{definition} Let $M$ be a submanifold of the Riemannian manifold $ \mathbb{R} ^{n}$ and let $D$ be the Riemannian connexion on $ \mathbb{R} ^{n}$.
For $C^{\infty \text{ }}$fields $X$ and $Y$ with domain $A$ on $M$ (and tangent to $M$), define $\nabla _{X}Y$ and $V(X,Y)$ on $A$ by decomposing $D_{X}Y$ into its unique tangential and normal components, respectively; thus, \begin{equation*} D_{X}Y=\nabla _{X}Y+V(X,Y)\text{. } \end{equation*} Then, $\nabla $ is the Riemannian connexion on $M$ and $V$ is a symmetric vector-valued 2-covariant $C^{\infty \text{ }}$tensor called the second fundamental tensor. The above decomposition equation is called the Gauss equation [7]. \end{definition} \noindent \textbf{Remark 2.2 }\textit{Let us observe that for any Euclidean helix submanifold }$M$\textit{, the following system holds for every }$ X\in TM$\textit{, where the helix direction }$d=\cos (\theta )T^{\ast }+\sin (\theta )\xi $\textit{: } \begin{equation} \cos (\theta )\nabla _{X}T^{\ast }-\sin (\theta )A^{\xi }(X)=0 \end{equation} \begin{equation} \cos (\theta )V(X,T^{\ast })+\sin (\theta )\nabla _{X}^{\bot }\xi =0 \end{equation} \textit{[3].} \begin{definition} Let $(M,g)$ be a Riemannian manifold, where $g$ is the metric. Let $ f:M\rightarrow \mathbb{R} $ be a function and let $\nabla f$ be its gradient, i.e., $df(X)=g(\nabla f$ $,X)$. We say that $f$ is eikonal if it satisfies \begin{equation*} \left\Vert \nabla f\right\Vert =\text{constant} \end{equation*} [3]. \end{definition} \begin{definition} Let $\alpha =\alpha (t):I\subset \mathbb{R} \rightarrow M$ be an immersed curve in a 3-dimensional Riemannian manifold $M$. The unit tangent vector field of $\alpha $ will be denoted by $T$.
Also, $ \kappa >0$ and $\tau $ will denote the curvature and torsion of $\alpha $, respectively. Therefore, if $\left\{ T,N,B\right\} $ is the Frenet frame of $ \alpha $ and $\nabla $ is the Levi-Civita connection of $M$, then one can write the Frenet equations of $\alpha $ as \begin{equation*} \nabla _{T}T=\kappa N \end{equation*} \begin{equation*} \nabla _{T}N=-\kappa T+\tau B \end{equation*} \begin{equation*} \nabla _{T}B=-\tau N \end{equation*} [1]. \end{definition} \begin{definition} Let $\alpha :I\subset \mathbb{R} \rightarrow E^{n}$ be a curve in $E^{n}$ with arc-length parameter $s$ and let $X$ be a unit constant vector of $E^{n}$. For all $s\in I$, if \begin{equation*} \left\langle V_{1},X\right\rangle =\cos (\varphi ),\varphi \neq \frac{\pi }{2 },\varphi =\text{constant,} \end{equation*} then the curve $\alpha $ is called a general helix in $E^{n}$, where $V_{1}$ is the unit tangent vector of $\alpha $ at its point $\alpha (s)$ and $ \varphi $ is the constant angle between the vector fields $V_{1}$ and $X$ [12]. \end{definition} Throughout the remaining sections, the submanifolds $M\subset \mathbb{R} ^{n}$ carry the metric induced by $ \mathbb{R} ^{n}$. \section{$f$-EIKONAL HELIX CURVES} In this section, we define $f$-eikonal helix submanifolds and $f$-eikonal helix curves. We also give an important property relating $f$-eikonal helix submanifolds to $f$-eikonal helix curves, see Theorem 3.2. In Theorems 3.1 and 3.3, we characterize when a curve on a manifold is an $f$-eikonal helix curve. Moreover, we give a relation between geodesic curves and $f$-eikonal helix curves, see Theorem 3.4. \begin{definition} Let $M\subset \mathbb{R} ^{n}$ be a Riemannian helix submanifold with respect to the unit direction $ d\in \mathbb{R} ^{n}$ and let $f:M\rightarrow \mathbb{R} $ be an eikonal function. We say that $M$ is an $f$-eikonal helix submanifold if for each $q\in M$ the angle between $\nabla f$ and $d$ is constant.
\end{definition} Note that, in Definition 3.1, $\left\langle \nabla f,d\right\rangle $ is constant, since $\left\Vert \nabla f\right\Vert $ is constant, $d$ is unitary, and the angle between $\nabla f$ and $d$ is constant. \begin{example} Let $M\subset \mathbb{R} ^{n}$ be a Riemannian helix submanifold with respect to the unit direction $ d\in \mathbb{R} ^{n}$. Let us assume that the tangent component of $d$ equals $\nabla f$ for an eikonal function $f:M\rightarrow \mathbb{R} $. By the definition of a helix submanifold, we have $\left\langle \nabla f,d\right\rangle =$ constant. That is, $M$ is an $f$-eikonal helix submanifold. \end{example} \begin{definition} Let $M\subset \mathbb{R} ^{n}$ be a Riemannian submanifold and $\alpha :I\rightarrow M$ be a curve with unit tangent $T$. Let $f:M\rightarrow \mathbb{R} $ be an eikonal function along the curve $\alpha $, i.e. $\left\Vert \nabla f\right\Vert =$ constant along the curve $\alpha $. We say that $\alpha $ is an $f$-eikonal helix curve if the angle between $\nabla f$ and $T$ is constant along the curve $\alpha $. $\nabla f$ will be called the axis of the $f$-eikonal helix curve. \end{definition} \begin{example} Let $M\subset \mathbb{R} ^{n}$ be a Riemannian submanifold and $\alpha :I\rightarrow M$ be a curve with unit tangent $T$. Let $f:M\rightarrow \mathbb{R} $ be an eikonal function along the curve $\alpha $. If $\nabla f$ equals $T$, then $\left\langle \nabla f,T\right\rangle =\left\langle T,T\right\rangle =$ constant. That is, $\alpha $ is an $f$-eikonal helix curve. \end{example} \begin{example} We consider the Riemannian manifold $M= \mathbb{R} ^{3}$. Let \begin{equation*} f:M\rightarrow \mathbb{R} \end{equation*} \begin{equation*} \left( x,y,z\right) \rightarrow f\left( x,y,z\right) =x^{2}+y^{2}+z \end{equation*} be a function defined on $M$.
Then, the curve \begin{equation*} \alpha :I\subset \mathbb{R} \rightarrow M \end{equation*} \begin{equation*} s\rightarrow \alpha \left( s\right) =(\cos \frac{s}{\sqrt{2}},\sin \frac{s}{ \sqrt{2}},\frac{s}{\sqrt{2}}) \end{equation*} is an $f$-eikonal helix curve on $M$. Firstly, we show that $f$ is an eikonal function along the curve $\alpha $. If we compute $\nabla f$, we find \begin{equation*} \nabla f=\left( 2x,2y,1\right) \text{.} \end{equation*} So, we get \begin{equation*} \left\Vert \nabla f\right\Vert =\sqrt{4\left( x^{2}+y^{2}\right) +1}\text{.} \end{equation*} And, if we compute $\left\Vert \nabla f\right\Vert $ along the curve $\alpha $, we find \begin{equation*} \left\Vert \nabla f\right\Vert |_{\alpha }=\sqrt{5}=\text{constant.} \end{equation*} That is, $f$ is an eikonal function along the curve $\alpha $. Now, we show that the angle between $\nabla f$ and $T$ (the unit tangent of $\alpha $) is constant along the curve $\alpha $. Since \begin{equation*} \nabla f|_{\alpha }=(2\cos \frac{s}{\sqrt{2}},2\sin \frac{s}{\sqrt{2}},1) \end{equation*} and \begin{equation*} T=(-\frac{1}{\sqrt{2}}\sin \frac{s}{\sqrt{2}},\frac{1}{\sqrt{2}}\cos \frac{s }{\sqrt{2}},\frac{1}{\sqrt{2}})\text{ ,} \end{equation*} we obtain \begin{equation*} \left\langle \nabla f|_{\alpha },T\right\rangle =\frac{1}{\sqrt{2}} \end{equation*} along the curve $\alpha $. Since $\left\Vert \nabla f\right\Vert |_{\alpha }=\sqrt{5}$ is also constant, the angle $\theta $ between $\nabla f$ and $T$, determined by $\cos (\theta )=\frac{1}{\sqrt{10}}$, is constant along the curve $\alpha $. Consequently, $\alpha $ is an $f$-eikonal helix curve on $M$. \end{example} \begin{example} We consider the Riemannian manifold $M= \mathbb{R} ^{n}$. Let \begin{equation*} f:M\rightarrow \mathbb{R} \end{equation*} \begin{equation*} \left( x_{1},x_{2},...,x_{n}\right) \rightarrow f\left( x_{1},x_{2},...,x_{n}\right) =a_{1}x_{1}+...+a_{n}x_{n}+c \end{equation*} be a function defined on $M$, where $a_{1},...,a_{n},c$ are constants. Then, all generalized helices with the axis $\left( a_{1},a_{2},...,a_{n}\right) $ are $f$-eikonal helices.
Let $\alpha $ be any generalized helix with the axis $X=\left( a_{1},a_{2},...,a_{n}\right) $. Then, the angle between the unit tangent of $ \alpha $ and $X$ is constant along the curve $\alpha $. On the other hand, since $\nabla f=\left( a_{1},a_{2},...,a_{n}\right) $, we have $X=\nabla f$. So, we can easily say that the angle between the unit tangent of $\alpha $ and $\nabla f$ is constant along the curve $\alpha $. Also, $\left\Vert \nabla f\right\Vert =$ constant. Consequently, the curve $\alpha $ is an $f$-eikonal helix. Since $\alpha $ is arbitrary, all generalized helices with the axis $\left( a_{1},a_{2},...,a_{n}\right) $ are $f$-eikonal helices. \end{example} \begin{theorem} Let $M\subset \mathbb{R} ^{n}$ be a Riemannian submanifold and $\alpha :I\rightarrow M$ be a curve with unit tangent $T$. Let $f:M\rightarrow \mathbb{R} $ be an eikonal function along the curve $\alpha $. Then, $\alpha $ is an $f$-eikonal helix curve if and only if $f$ is a linear function along the curve $\alpha $. \end{theorem} \begin{proof} Firstly, we assume that $\alpha $ is an $f$-eikonal helix curve. Since $f$ is an eikonal function along the curve $\alpha $, $\left\Vert \nabla f\right\Vert |_{\alpha }=$ constant. On the other hand, we know that $ X[f]=\left\langle \nabla f,X\right\rangle $ for each $X\in TM$ (see Definition 2.4). In particular, for $X=T$, we have \begin{eqnarray*} \left\langle \nabla f|_{\alpha },T\right\rangle &=&T[f] \\ &=&\frac{d}{ds}\left( f\circ \alpha \right) \text{ .} \end{eqnarray*} Since $\left\Vert \nabla f\right\Vert |_{\alpha }$ is constant and $\alpha $ is an $f$-eikonal helix curve, $\left\langle \nabla f|_{\alpha },T\right\rangle $ is constant. Thus, we obtain \begin{equation*} \frac{d}{ds}\left( f\circ \alpha \right) =\text{constant .} \end{equation*} In other words, $f|_{\alpha }$ is a linear function. Conversely, we assume that $f|_{\alpha }$ is a linear function.
Clearly, \begin{equation*} \frac{d}{ds}\left( f\circ \alpha \right) =\text{constant .} \end{equation*} Hence, we get \begin{equation*} \left\langle \nabla f|_{\alpha },T\right\rangle =\text{constant .} \end{equation*} And, since $\left\Vert \nabla f\right\Vert |_{\alpha }=$ constant and $T$ is unit, the angle between $\nabla f$ and $T$ is constant along the curve $ \alpha $. That is, $\alpha $ is an $f$-eikonal helix curve. \end{proof} \begin{theorem} Let $M\subset \mathbb{R} ^{n}$ be an $f$-eikonal helix submanifold. Then, the helix lines of $M$ are $f$-eikonal helix curves. \end{theorem} \begin{proof} Recall that $d=\cos (\theta )T^{\ast }+\sin (\theta )\xi $ is the decomposition of $d$ into its tangent and normal components. Let $\alpha $ be a helix line of $M$ with unit speed, that is, $\frac{d\alpha }{ds}=T^{\ast }$. Taking the inner product of $\nabla f$ with each part of $d$ along the helix lines of $M$, we obtain: \begin{equation*} \left\langle \nabla f,d\right\rangle =\cos (\theta )\left\langle \nabla f, \frac{d\alpha }{ds}\right\rangle +\sin (\theta )\left\langle \nabla f,\xi \right\rangle \end{equation*} Due to the fact that $M$ is an $f$-eikonal helix submanifold, $\left\langle \nabla f,d\right\rangle =$ constant along the helix lines of $M$. On the other hand, $\left\langle \nabla f,\xi \right\rangle =0$ since $\nabla f\in TM$. So, $\left\langle \nabla f,\frac{d\alpha }{ds}\right\rangle $ is constant along the helix lines of $M$. It follows that the helix lines of $M$ are $f$-eikonal helix curves. \end{proof} \begin{theorem} Let $i:M\rightarrow \mathbb{R} ^{n}$ be a submanifold and let $f:M\rightarrow \mathbb{R} $ be an eikonal function, where $M$ has the metric induced by $ \mathbb{R} ^{n}$. Let us assume that $\alpha :I\subset \mathbb{R} \rightarrow M$ is a unit speed curve (parametrized by the arc length function $s$) on $M$ with unit tangent $T$.
Then, $\alpha $ is an $f$-eikonal helix curve if and only if \begin{equation*} \beta (s)=\phi (\alpha (s))=\left( i(\alpha (s)),f(\alpha (s))\right) \subset \mathbb{R} ^{n}\times \mathbb{R} \end{equation*} is a general helix with the axis $d=(0,1)$. Here, $\phi :M\rightarrow \mathbb{R} ^{n}\times \mathbb{R} $ is given by $\phi (p)=\left( i(p),f(p)\right) $ and $i:M\rightarrow \mathbb{R} ^{n}$ is given by $i(p)=p$, where $p\in M$. \end{theorem} \begin{proof} We consider the curve $\beta (s)=\left( i(\alpha (s)),f(\alpha (s))\right) =\left( \alpha (s),f(\alpha (s))\right) $. Then, the tangent of $\beta $ is \begin{equation*} \beta'(s)=\left( T,\frac{d(f\circ \alpha )}{ds}\right) \text{,} \end{equation*} where $T$ is the unit tangent of $\alpha $. On the other hand, we know that $X[f]=\left\langle \nabla f,X\right\rangle $ for each $X\in TM$ (see Definition 2.4). In particular, for $X=T$, \begin{equation*} T[f]=\left\langle \nabla f,T\right\rangle \end{equation*} \begin{equation*} \frac{d\alpha }{ds}[f]=\left\langle \nabla f,T\right\rangle \end{equation*} and so, we have: \begin{equation*} \frac{d(f\circ \alpha )}{ds}=\left\langle \nabla f,T\right\rangle \text{.} \end{equation*} Therefore, we obtain \begin{equation} \beta'(s)=\left( T,\left\langle \nabla f,T\right\rangle \right) \text{.} \end{equation} Hence, taking the dot product of each side of (3.1) with $d$, we get: \begin{equation} \left\langle \beta'(s),d\right\rangle =\left\langle \nabla f,T\right\rangle \text{.} \end{equation} From the equality (3.2), we can write \begin{equation*} \left\Vert \beta'(s)\right\Vert \cos (\theta )=\left\langle \nabla f,T\right\rangle \text{,} \end{equation*} where $\theta $ is the angle between $d$ and $\beta'(s)$.
It follows that \begin{equation} \cos (\theta )=\frac{\left\langle \nabla f,T\right\rangle }{\sqrt{1+\left\langle \nabla f,T\right\rangle ^{2}}}\text{.} \end{equation} If $\alpha $ is an $f$-eikonal helix curve, i.e. $\left\langle \nabla f,T\right\rangle =$ constant, it can easily be seen that $\cos (\theta )=$ constant by using (3.3). That is, $\beta $ is a general helix with the axis $d=(0,1)$. Conversely, we assume that $\beta $ is a general helix, i.e. $\cos (\theta )=$constant. Hence, by using (3.3), we can write \begin{equation} \left\langle \nabla f,T\right\rangle ^{2}=\frac{\cos ^{2}(\theta )}{\sin ^{2}(\theta )}=\text{constant }(\theta \neq 0)\text{.} \end{equation} And so, from (3.4), we deduce that $\left\langle \nabla f,T\right\rangle =$ constant. In other words, $\alpha $ is an $f$-eikonal helix curve. \end{proof} \begin{theorem} Let $M\subset \mathbb{R} ^{n}$ be a complete connected smooth Riemannian submanifold without boundary and let $M$ be isometric to a Riemannian product $N\times \mathbb{R} $. Let us assume that $f:M\rightarrow \mathbb{R} $ is a non-trivial affine function (see the main theorem in [9]). Then, all geodesic curves on $M$ are $f$-eikonal helix curves. \end{theorem} \begin{proof} Since $f:M\rightarrow \mathbb{R} $ is an affine function, for each unit geodesic $\alpha :(-\infty ,\infty )\rightarrow M$ there are constants $a$ and $b\in \mathbb{R} $ such that \begin{equation*} f\left( \alpha (s)\right) =as+b \end{equation*} for all $s\in (-\infty ,\infty )$ (see [8] or [9]). On the other hand, we know that \begin{equation*} X[f]=\left\langle \nabla f,X\right\rangle \end{equation*} for each $X\in TM$.
In particular, for $X=T$ (the unit tangent of $\alpha $), \begin{equation*} T[f]=\left\langle \nabla f,T\right\rangle \end{equation*} \begin{equation*} \frac{d\alpha }{ds}[f]=\left\langle \nabla f,T\right\rangle \end{equation*} and so, we have \begin{equation} \frac{d(f\circ \alpha )}{ds}=\left\langle \nabla f,T\right\rangle \text{.} \end{equation} Moreover, since $f\left( \alpha (s)\right) =as+b$, $\dfrac{d(f\circ \alpha )}{ds}=$constant. Hence, from (3.5), we obtain \begin{equation*} \left\langle \nabla f,T\right\rangle =\text{constant} \end{equation*} along the curve $\alpha $. On the other hand, from Lemma 2.3 (see [11]), $\left\Vert \nabla f\right\Vert =$constant. Consequently, all geodesic curves on $M$ are $f$-eikonal helix curves. \end{proof} \begin{example} In Theorem 3.4, we take $M$ to be the cylindrical surface $S^{1}\times \mathbb{R} $, and we take $f$ to be the function \begin{equation*} f:S^{1}\times \mathbb{R} \rightarrow \mathbb{R} \end{equation*} \begin{equation*} \left( x,t\right) \rightarrow f\left( x,t\right) =t\text{ .} \end{equation*} Then, the curves $\alpha $ of the form \begin{equation*} \alpha \left( s\right) =\left( \cos (c\frac{s}{\sqrt{c^{2}+a^{2}}}+d),\sin (c\frac{s}{\sqrt{c^{2}+a^{2}}}+d),a\frac{s}{\sqrt{c^{2}+a^{2}}}+b\right) \end{equation*} are $f$-eikonal helix curves, since all geodesic curves on $M$ are the curves $\alpha $ with the unit tangent $T$, where $c^{2}+a^{2}\neq 0$. Here, $a$, $b$, $c$, $d$ are real numbers. In fact, $f\left( \alpha \left( s\right) \right) =a\dfrac{s}{\sqrt{c^{2}+a^{2}}}+b$ and $\dfrac{d\left( f\circ \alpha \right) }{ds}=$constant. So, $\left\langle \nabla f,T\right\rangle =\dfrac{d(f\circ \alpha )}{ds}=$constant. On the other hand, $\left\Vert \nabla f\right\Vert =$constant since $f$ is an affine function.
Consequently, since $\left\Vert \nabla f\right\Vert =$constant, $\left\Vert T\right\Vert =1$ and $\left\langle \nabla f,T\right\rangle =$constant, the angle between $\nabla f$ and $T$ is constant along the curves $\alpha $. In other words, the curves $\alpha $ are $f$-eikonal helix curves. \end{example} \section{THE AXIS OF $f$-EIKONAL HELIX CURVES} In this section, in a 3-dimensional Riemannian manifold, we determine the axis of an $f$-eikonal helix curve and we give the relation between the curvatures of the curve in Theorem 4.1. Then, we give important corollaries relating to helix submanifolds. \begin{theorem} Let $M\subset $ $\mathbb{R} ^{4}$ be a 3-dimensional Riemannian manifold and let $M$ be complete, connected, smooth and without boundary. Also, let $M$ be isometric to a Riemannian product $N\times \mathbb{R} $. Let us assume that $f:M\rightarrow \mathbb{R} $ is a non-trivial affine function (see the main theorem in [9]) and $\alpha :I\rightarrow M$ is an $f$-eikonal helix curve. Then, the following properties hold: \textit{(1) The axis of }$\alpha $: \begin{equation*} \nabla f=\left\Vert \nabla f\right\Vert \left( \cos (\theta )T+\sin (\theta )B\right) \text{,} \end{equation*} where $\theta $ is constant. (2) $\dfrac{\tau }{\kappa }=$constant. \end{theorem} \begin{proof} (1) Since $\alpha $ is an $f$-eikonal helix curve, we can write \begin{equation} \left\langle \nabla f,T\right\rangle =\text{constant.} \end{equation} If we take the derivative of each side of (4.1) in the direction $T$ on $M$, we have \begin{equation} \left\langle \nabla _{T}\nabla f,T\right\rangle +\left\langle \nabla f,\nabla _{T}T\right\rangle =0\text{.} \end{equation} On the other hand, from Lemma 2.3 (see [11]), $\nabla f$ is parallel in $M$, i.e. $\nabla _{X}\nabla f=0$ for arbitrary $X\in TM$.
So, we get $\nabla _{T}\nabla f=0$. Then, by using (4.2) and the Frenet formulas, we obtain \begin{equation} \kappa \left\langle \nabla f,N\right\rangle =0\text{.} \end{equation} Since $\kappa $ is assumed to be positive, (4.3) implies that $\left\langle \nabla f,N\right\rangle =0$. Hence, we can write the axis of $\alpha $ as \begin{equation} \nabla f=\lambda _{1}T+\lambda _{2}B\text{.} \end{equation} Taking the dot product of each side of (4.4) with $T$, we get \begin{equation} \left\langle \nabla f,T\right\rangle =\lambda _{1}=\left\Vert \nabla f\right\Vert \cos (\theta )\text{,} \end{equation} where $\theta $ is the angle between $\nabla f$ and $T$. And, since $\left\Vert \nabla f\right\Vert ^{2}=\lambda _{1}^{2}+\lambda _{2}^{2}$, we also have \begin{equation*} \lambda _{2}=\left\Vert \nabla f\right\Vert \sin (\theta ) \end{equation*} by using (4.5). Finally, the axis of $\alpha $ is \begin{equation*} \nabla f=\left\Vert \nabla f\right\Vert \left( \cos (\theta )T+\sin (\theta )B\right) \text{.} \end{equation*} (2) From the proof of (1), we can write \begin{equation} \left\langle \nabla f,N\right\rangle =0. \end{equation} If we take the derivative of each side of (4.6) in the direction $T$ on $M$, we have \begin{equation} \left\langle \nabla _{T}\nabla f,N\right\rangle +\left\langle \nabla f,\nabla _{T}N\right\rangle =0\text{.} \end{equation} And, from the proof of (1), $\nabla _{T}\nabla f=0$.
Hence, from (4.7), \begin{equation} \left\langle \nabla f,\nabla _{T}N\right\rangle =0\text{.} \end{equation} By using the Frenet formulas, from (4.8) we obtain \begin{equation} -\kappa \left\langle \nabla f,T\right\rangle +\tau \left\langle \nabla f,B\right\rangle =0\text{.} \end{equation} On the other hand, by using (4.4), we can write $\left\langle \nabla f,T\right\rangle =\lambda _{1}$ and $\left\langle \nabla f,B\right\rangle =\lambda _{2}$. Since $\lambda _{1}=\left\Vert \nabla f\right\Vert \cos (\theta )$ and $\lambda _{2}=\left\Vert \nabla f\right\Vert \sin (\theta )$ from the proof of (1), we obtain \begin{equation} \left\langle \nabla f,T\right\rangle =\left\Vert \nabla f\right\Vert \cos (\theta )\text{ and }\left\langle \nabla f,B\right\rangle =\left\Vert \nabla f\right\Vert \sin (\theta )\text{.} \end{equation} So, by using (4.9) and the equalities (4.10), we have \begin{equation*} \dfrac{\tau }{\kappa }=\cot (\theta )\text{=constant.} \end{equation*} This completes the proof of the Theorem. \end{proof} Theorem 4.1 has the following corollaries. \begin{corollary} Let $M\subset $ $\mathbb{R} ^{4}$ be a 3-dimensional Riemannian $f$-helix submanifold and let $M$ be complete, connected, smooth and without boundary. Also, let $M$ be isometric to a Riemannian product $N\times \mathbb{R} $. Let us assume that $f:M\rightarrow \mathbb{R} $ is a non-trivial affine function (see the main theorem in [9]). Then, $\dfrac{\tau }{\kappa }$ is constant along the helix lines of $M$. \end{corollary} \begin{proof} It follows from Theorem 4.1 and Theorem 3.2. \end{proof} \begin{corollary} Let $M\subset $ $\mathbb{R} ^{4}$ be a 3-dimensional Riemannian $f$-helix submanifold and let $M$ be complete, connected, smooth and without boundary. Also, let $M$ be isometric to a Riemannian product $N\times \mathbb{R} $.
Let us assume that $f:M\rightarrow \mathbb{R} $ is a non-trivial affine function (see the main theorem in [9]). Then, \begin{equation*} \nabla f=\left\Vert \nabla f\right\Vert \left( \cos (\theta )T+\sin (\theta )B\right) \end{equation*} along the helix line of $M$. In other words, $\nabla f$ is the axis of the helix line of $M$. \end{corollary} \begin{proof} It follows from Theorem 4.1 and Theorem 3.2. \end{proof} \section{THE RELATION BETWEEN $f$-EIKONAL HELIX CURVE AND GENERAL HELIX} In this section, we specify the relation between $f$-eikonal helix curves and general helices. \begin{lemma} Let $M\subset \mathbb{R} ^{n}$ be a Riemannian helix submanifold with respect to the unit direction $d\in \mathbb{R} ^{n}$ and $f:M\rightarrow \mathbb{R} $ be a function. Let us assume that $\alpha :I\subset \mathbb{R} \rightarrow M$ is a unit speed curve (parametrized by the arc length function $s$) on $M$ with unit tangent $T$. Then, the normal component $\xi $ of $d$ is parallel normal in the direction $T$ if and only if $(\nabla f)'\in TM$ along the curve $\alpha $, where $T^{\ast }=\nabla f$ is the unit tangent component of the direction $d$. \end{lemma} \begin{proof} We assume that the normal component $\xi $ of $d$ is parallel normal in the direction $T$.
Since $T$ and $\nabla f$ $\in TM$, from the Gauss equation in Definition 2.3, \begin{equation} D_{T}\nabla f=\nabla _{T}\nabla f+V(T,\nabla f) \end{equation} Since the normal component $\xi $ of $d$ is parallel normal in the direction $T$, i.e. $\nabla _{T}^{\bot }\xi =0$ (see Remark 2.1), from (2.2) in Remark 2.2 ($0<\theta <\frac{\pi }{2}$) \begin{equation} V(T,\nabla f)=0 \end{equation} So, by using (5.1), (5.2) and the Frenet formulas, we have: \begin{equation*} D_{T}\nabla f=\frac{d\nabla f}{ds}=(\nabla f)'=\nabla _{T}\nabla f\text{.} \end{equation*} That is, the vector field $(\nabla f)'\in TM$ along the curve $\alpha $, where $TM$ is the tangent space of $M$. Conversely, let us assume that $(\nabla f)'\in TM$ along the curve $\alpha $. Then, from the Gauss equation, $V(T,\nabla f)=0$. Hence, from (2.2) in Remark 2.2 ($0<\theta <\frac{\pi }{2}$), $\nabla _{T}^{\bot }\xi =0$. That is, the normal component $\xi $ of $d$ is parallel normal in the direction $T$. This completes the proof. \end{proof} \begin{lemma} All $f$-eikonal helix curves with the constant axis $\nabla f$ are general helices. \end{lemma} \begin{proof} For all $f$-eikonal helix curves, we know that the angle between $\nabla f$ and the unit tangent vector fields $T$ of these curves is constant along these curves. Moreover, by hypothesis, $\nabla f$ is constant. Therefore, the tangent vector fields of these curves make a constant angle with the constant direction $\nabla f$. It follows that all $f$-eikonal helix curves with the constant axis $\nabla f$ are general helices by using Definition 2.6. \end{proof} \begin{theorem} Let $M\subset \mathbb{R} ^{n}$ be a complete connected smooth Riemannian helix submanifold with respect to the unit direction $d\in \mathbb{R} ^{n}$ and $f:M\rightarrow \mathbb{R} $ be an affine function.
Let us assume that $\alpha =\alpha \left( s\right) :I\subset \mathbb{R} \rightarrow M$ is an $f$-eikonal helix curve on $M$ with unit tangent $T$. If the normal component $\xi $ of $d$ is parallel normal in the direction $T$, then the $f$-eikonal helix curve $\alpha $ is a general helix, where $T^{\ast }=\nabla f$ is the unit tangent component of the direction $d$. \end{theorem} \begin{proof} Since $T$ and $\nabla f$ $\in TM$, from the Gauss equation in Definition 2.3, \begin{equation} D_{T}\nabla f=\nabla _{T}\nabla f+V(T,\nabla f) \end{equation} Since $f$ is an affine function, $\nabla f$ is parallel in $M$, i.e. $\nabla _{X}\nabla f=0$ for arbitrary $X\in TM$; in particular, $\nabla _{T}\nabla f=0$ (see Lemma 2.3 in [11]). On the other hand, according to Lemma 5.1, $(\nabla f)'\in TM$ due to the fact that the normal component $\xi $ of $d$ is parallel normal in the direction $T$. Therefore, from the Gauss equation, $V(T,\nabla f)=0$. Hence, from (5.3), we have: \begin{equation*} D_{T}\nabla f=\frac{d\nabla f}{ds}=(\nabla f)'=0 \end{equation*} along the curve $\alpha $. That is, the axis $\nabla f$ of $\alpha $ is constant. So, Lemma 5.2 above implies that the $f$-eikonal helix curve $\alpha $ is a general helix. This completes the proof. \end{proof} \end{document}
\begin{document} \begin{abstract} Let $K_1,\: K_2\subset {\ensuremath{\mathbb{R}}}^2$ be two convex, compact sets. We would like to know if there are commuting torus homeomorphisms $f$ and $h$ homotopic to the identity, with lifts $\tilde f$ and $\tilde h$, such that $K_1$ and $K_2$ are their rotation sets, respectively. In this work, we prove some cases where this cannot happen, assuming some restrictions on the rotation sets. \end{abstract} \maketitle \section{Introduction} In a landmark paper \cite{mz89}, Misiurewicz and Ziemian introduced the concept of rotation sets for torus homeomorphisms in the isotopy class of the identity. This concept generalizes the notion of rotation number of circle homeomorphisms defined by H. Poincar\'e, and has proven to be a very effective tool in describing the behaviour of toral dynamics. For instance, the analysis of the rotation set can supply information on the abundance of periodic points \cite{f89}, the topological entropy of the map \cite{l} and sensitive dependence on initial conditions \cite{kf13}. We define rotation sets properly in section \ref{rotset} but, in a nutshell, the rotation set of a torus homeomorphism is the convex closure of all rotation vectors of individual points, that is, $$\rho(\tilde f, \tilde x)= \lim_{n\to\infty}\frac{\tilde f^n(\tilde x)- \tilde x}{n},$$ when such a limit exists. The study of rotation theory, while it has seen significant advances in the last decade, has still left several relevant open questions, and the most relevant is probably to characterize the possible rotation sets of torus homeomorphisms. Up until very recently, it was not known if there existed a convex compact set that was not the rotation set of some torus homeomorphism, but the first example of such sets appeared in \cite{ct}. The standing conjecture on the subject, proposed by Franks and Misiurewicz in \cite{fm}, poses that the following convex sets could not be the rotation sets of torus homeomorphisms.
\begin{itemize} \item[i] A segment of a line with irrational slope that contains a rational point in its interior. \item[ii] A segment of a line with irrational slope without rational points. \item[iii] A segment of a line with rational slope without rational points. \end{itemize} In \cite{ct}, the authors showed that the conjecture is true in case i, but A. Avila has announced a construction of a torus homeomorphism whose rotation set lies in class ii above. It is still not known whether the unit circle can be realized as a rotation set. A very recent and relevant work on the subject, \cite{pat}, exhibits a one-parameter family of torus homeomorphisms and studies the corresponding family of rotation sets. In this work we attempt to analyse possible connections between the study of group actions and the rotation theory of torus homeomorphisms. In particular, we were interested in determining, given a pair of commuting torus homeomorphisms, if there was some relationship between the respective rotation sets. The first question we tackled was to see if, given compact convex sets $K_1$ and $K_2$ such that each $K_i$ was a rotation set of a torus homeomorphism, could we also find a pair of commuting homeomorphisms $f_i$ such that $K_i$ was their respective rotation set? Our first result shows that this is not true in general. Specifically, we show that: \begin{main}\label{ncom0} Suppose $\rho(\tilde f)$ is a segment $J$ with rational slope containing rational points and $\rho(\tilde h)$ satisfies one of the following conditions: \begin{enumerate} \item It has nonempty interior. \item It is a nontrivial segment not parallel to $J$. \end{enumerate} Then, $\tilde f$ and $\tilde h$ do not commute. \end{main} The techniques developed for Theorem \ref{ncom0} actually give us a stronger result.
It concerns the rigidity of torus homeomorphisms as described by their deviations, that is, \begin{equation*} \textnormal{Desv} (\tilde f)= \sup_{\stackrel{n\in {\ensuremath{\mathbb{Z}}}}{x,y\in {\ensuremath{\mathbb{R}}}^2}} \left\|\left(\tilde f^{n}(x)-x\right)-\left(\tilde f^{n}(y)-y\right)\right\|. \end{equation*} If $v\in{\ensuremath{\mathbb{R}}}^2_*$, we define the \emph{deviation of $\ti f$ in the direction $v$} by \begin{equation*} \textnormal{Desv}_v (\ti f)= \sup_{\stackrel{n\in {\ensuremath{\mathbb{Z}}}}{x,y\in {\ensuremath{\mathbb{R}}}^2}} \left|\textnormal{Pr}_v\left[\left(\ti f^{n}(x)-x\right)-\left(\ti f^{n}(y)-y\right)\right]\right|. \end{equation*} If the deviation of $\ti f$ in the direction $v$ is unbounded, we will write $\textnormal{Desv}_v (\ti f)=\infty$. We say $f$ is \emph{annular} if some lift $\tilde f:{\ensuremath{\mathbb{R}}}^2\to {\ensuremath{\mathbb{R}}}^2$ of $f$ has uniformly bounded displacement in a rational direction; that is, there are $v\in {\ensuremath{\mathbb{Z}}}_*^2$ and $M>0$ such that \[ \left|\left\langle \tilde f^n(x)-x,v\right\rangle\right|\leqslant M,\] for every $x\in{\ensuremath{\mathbb{R}}}^2$ and $n\in{\ensuremath{\mathbb{Z}}}$. In that case we say $\ti f$ is $v^\bot$-\emph{annular}. The technical result shows that, in some cases, if $f$ and $g$ are commuting homeomorphisms and $f$ has deviation restrictions, then $g$ must also have deviation restrictions. \begin{main}\label{d0} Let $f, h\in \mathrm{Homeo}_0({\ensuremath{\mathbb{T}}}^2)$ commute and let $\tilde f,\tilde h$ be respective lifts. If $\tilde f$ is $v$-annular, for some $v\in{\ensuremath{\mathbb{Z}}}^2_*$, then $\textnormal{Desv}_{v^{\bot}}(\tilde h)<\infty$ or $\textnormal{Desv} (\tilde f)<\infty$. \end{main} A previous version of Theorem \ref{d0} for area preserving homeomorphisms was obtained by Benayon in his doctoral thesis \cite{mau}.
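The rotation vector and directional deviation introduced above are easy to experiment with numerically. The following Python sketch, which is only an illustration and not part of the paper (all helper names are ours), estimates the rotation vector of a lift by finite iteration and computes a finite-sample lower bound for $\textnormal{Desv}_v$, contrasting a rigid translation (a single rotation vector, zero deviation) with a skew map whose displacement in the direction $(1,0)$ depends on $y$ and whose deviation therefore grows linearly in $n$.

```python
import math

def rotation_vector(lift, z0, n=2000):
    """Finite-n estimate of rho(f~, z0) = lim (f~^n(z0) - z0) / n."""
    x, y = z0
    for _ in range(n):
        x, y = lift(x, y)
    return ((x - z0[0]) / n, (y - z0[1]) / n)

def desv_lower_bound(lift, v, n_max=50, samples=8):
    """Sup over n <= n_max and grid points x, y of
    |Pr_v[(f~^n(x) - x) - (f~^n(y) - y)]|, a lower bound for Desv_v(f~)."""
    vx, vy = v
    norm = math.hypot(vx, vy)
    pts = [(i / samples, j / samples)
           for i in range(samples) for j in range(samples)]
    orbit = {p: p for p in pts}          # current iterate of each grid point
    best = 0.0
    for _ in range(n_max):
        proj = []
        for p in pts:
            orbit[p] = lift(*orbit[p])
            dx, dy = orbit[p][0] - p[0], orbit[p][1] - p[1]
            proj.append((dx * vx + dy * vy) / norm)
        best = max(best, max(proj) - min(proj))
    return best

# A rigid translation: its rotation set is one point and its deviation is zero.
a, b = math.sqrt(2) - 1, 1 / 3
translation = lambda x, y: (x + a, y + b)

# A skew map commuting with the integer translations: the x-displacement
# depends on y, so Desv_(1,0) is unbounded (it grows linearly in n).
skew = lambda x, y: (x + 0.5 + 0.25 * math.sin(2 * math.pi * y), y)

rho = rotation_vector(translation, (0.25, 0.75))
d_translation = desv_lower_bound(translation, (1, 0))
d_skew = desv_lower_bound(skew, (1, 0))
```

For the translation the estimator recovers the vector $(a,b)$ and a deviation of essentially zero, while for the skew map the finite-sample deviation already exceeds any fixed bound as `n_max` grows, which is the behaviour the definitions above are designed to detect.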
The paper is organized as follows: the next section formally introduces all the relevant objects and describes the necessary tools, and section \ref{dem} is devoted to the proofs of the main results. \section{Preliminaries}\label{pre} \subsection{Notations} We will denote by ${\ensuremath{\mathbb{N}}}_*$, ${\ensuremath{\mathbb{Z}}}_*$ and ${\ensuremath{\mathbb{R}}}_*$ the natural, integer and real numbers without zero, respectively. We denote the two-dimensional torus by ${\ensuremath{\mathbb{T}}}^2={\ensuremath{\mathbb{R}}}^2 / {\ensuremath{\mathbb{Z}}}^2$. Let $\mathrm{Homeo}_0({\ensuremath{\mathbb{T}}}^2)$ be the space of homeomorphisms of ${\ensuremath{\mathbb{T}}}^2$ homotopic to the identity, $f\in \mathrm{Homeo}_0({\ensuremath{\mathbb{T}}}^2)$, $\tilde{f}:{\ensuremath{\mathbb{R}}}^{2}\to {\ensuremath{\mathbb{R}}}^{2}$ be any lift of $f$ and $\pi:{\ensuremath{\mathbb{R}}}^2\to {\ensuremath{\mathbb{T}}}^2$ the canonical projection. We also denote by $\left\langle\: ,\right\rangle$ the usual scalar product in ${\ensuremath{\mathbb{R}}}^2$ and by $\textnormal{Pr}_v:{\ensuremath{\mathbb{R}}}^2\to{\ensuremath{\mathbb{R}}}$ the orthogonal projection given by $$\textnormal{Pr}_v(x)=\frac{\left\langle x ,\: v\right\rangle}{\left\|v\right\|}.$$ We shall denote in ${\ensuremath{\mathbb{R}}}^2$ the integer translations $T_1(z)=z+(1,0)$ and $T_2(z)=z+(0,1)$, and the projections $\textnormal{Pr}_1(x,y)=x$ and $\textnormal{Pr}_2(x,y)=y$. If $v=(a,b)\in{\ensuremath{\mathbb{R}}}^2$, we write $v^{\bot}=(-b,a)$. We write $\textnormal{Conv} (A)$ for the convex hull of $A$. \subsection{Atkinson's Lemma} Another result that will be useful in the next section is Atkinson's Lemma from ergodic theory. See \cite{a}. \begin{pro} Let $(X,{\ensuremath{\mathcal{B}}},\mu)$ be a probability space and let $T:X\to X$ be an ergodic automorphism.
Consider $\phi:X\to {\ensuremath{\mathbb{R}}}$ an integrable map such that $\int \phi \: d\mu=0$; then for every $B\in {\ensuremath{\mathcal{B}}}$ and every $\epsilon >0$, \begin{align*} \mu\left(\bigcup_{n\in{\ensuremath{\mathbb{N}}}}B\cap T^{-n}(B)\cap\left\{x\in X:\; \left|\sum_{i=0}^{n-1}\phi(T^i(x))\right|<\epsilon\right\}\right)=\mu(B). \end{align*} \end{pro} \begin{cor}\label{atk} Let $X$ be a separable metric space, $f:X\to X$ be a homeomorphism and $\mu$ an $f$-invariant ergodic nonatomic Borel probability measure. If $\phi\in {\ensuremath{\mathbb{L}}}_1(\mu)$ is such that $\int \phi \: d\mu=0$, then for $\mu$-a.e. $x\in X$, there is an increasing sequence $(n_i)_{i\in{\ensuremath{\mathbb{N}}}}$ of integers such that \begin{equation*} f^{n_i}(x)\to x\ \ \ \textit{and} \ \ \ \sum_{k=0}^{n_i-1}(\phi\circ f^{k})(x)\to 0, \ \ \ \textit{as} \ \ \ i\to \infty. \end{equation*} \end{cor} \subsection{Rotation theory} \label{rotset} From now on, let $f\in \mathrm{Homeo}_0({\ensuremath{\mathbb{T}}}^2)$ and let $\tilde{f}:{\ensuremath{\mathbb{R}}}^{2}\to {\ensuremath{\mathbb{R}}}^{2}$ be any lift of $f$. The following definitions were introduced by Misiurewicz and Ziemian in \cite{mz89}. \begin{defi} Let $x\in{\ensuremath{\mathbb{T}}}^2$. If \begin{equation}\label{rot} \lim_{n\to\infty}\frac{\tilde{f}^n(\tilde{x})-\tilde{x}}{n} \end{equation} exists, for some $\tilde{x}\in\pi^{-1}(x)$, then the limit \eqref{rot} is denoted by $\rho(\tilde{f},x)$ and called \textsl{the rotation vector of $x$ by $\tilde{f}$}.
\end{defi} \begin{defi} The point-wise rotation set of $\tilde{f}$ is: \[\rho_p(\tilde{f})=\bigcup_{x\in {\ensuremath{\mathbb{T}}}^2}\rho(\tilde{f},x)\] \end{defi} \begin{defi} The \textsl{rotation set of $\tilde{f}$}, denoted by $\rho(\tilde{f})$, is the set of points $v\in{\ensuremath{\mathbb{R}}}^2$ such that there exist sequences $(n_i)$ in ${\ensuremath{\mathbb{Z}}}$ and $(x_i)$ in ${\ensuremath{\mathbb{R}}}^2$ with $n_i\to \infty$ as $i\to\infty$, and \[\lim_{i\to \infty} \frac{\tilde{f}^{n_i}(x_i)-x_i}{n_i}=v.\] \end{defi} Let $\phi:{\ensuremath{\mathbb{T}}}^2\to {\ensuremath{\mathbb{R}}}^2$ be the displacement function defined by $\phi(x)=\tilde f(\tilde x)-\tilde x$, where $\tilde x\in \pi^{-1}(x)$ and $\pi:{\ensuremath{\mathbb{R}}}^2\to {\ensuremath{\mathbb{T}}}^2$ is the natural projection. Denote the space of all $f$-invariant probability measures on ${\ensuremath{\mathbb{T}}}^2$ by ${\ensuremath{\mathcal{M}}} (f)$ and the subset of ergodic measures by ${\ensuremath{\mathcal{M}}}_e(f)$. \begin{defi} If $\mu\in {\ensuremath{\mathcal{M}}} (f)$, define its rotation vector by $\rho(\mu,\ti f)=\int \phi\ d\mu$. Also the sets \begin{align*} \rho_m(\ti f)=&\left\{\rho(\mu,\ti f); \mu\in {\ensuremath{\mathcal{M}}} (f) \right\}\ \text{and} \\ \rho_e(\ti f)=&\left\{\rho(\mu,\ti f); \mu\in {\ensuremath{\mathcal{M}}}_e (f)\right\}. \end{align*} \end{defi} \begin{pro}\label{birkof} If $\mu\in {\ensuremath{\mathcal{M}}}_e(f)$, then $\rho(\mu, \ti f)=\rho(\tilde f, x)$, for $\mu$-a.e. $x\in{\ensuremath{\mathbb{T}}}^2$. \end{pro} \begin{proof} If $\mu\in {\ensuremath{\mathcal{M}}}_e(f)$, it follows from the ergodic theorem that for $\mu$-a.e. $x\in {\ensuremath{\mathbb{T}}}^2$, \[\lim_{n\to \infty}\frac{1}{n}\sum_{k=0}^{n-1}{\phi\circ f^k(x)}=\int \phi\ d\mu.\] Therefore for $\mu$-a.e. $x\in{\ensuremath{\mathbb{T}}}^2$ and every $\tilde x\in \pi^{-1}(x)$, \begin{equation*} \rho(\tilde{f},x)=\lim_{n\to\infty}\frac{\tilde{f}^n(\tilde{x})-\tilde{x}}{n}=\rho(\mu,\ti f).
\end{equation*} \end{proof} \begin{teo}[\cite{mz89}]\label{convexo} It holds that \[\rho(\tilde f)=\textnormal{Conv}( \rho_p(\tilde f))= \textnormal{Conv} (\rho_e(\tilde f))=\rho_m(\ti f).\] \end{teo} \subsection{Bounded deviation} In this part we study the relationship between the deviation and rotation set concepts. \begin{lem}\label{proye} If $\textnormal{Desv}_u(\ti f)$ is bounded, for some $u\in{\ensuremath{\mathbb{R}}}^2_*$, then there is $a\in{\ensuremath{\mathbb{R}}}$ such that $\textnormal{Pr}_u(\rho(\ti f))=\left\{a\right\}$. \end{lem} In that case, $\rho (\tilde f)\subset L(u^{\bot},a):=\left\{tu^{\bot}+au;\;t\in{\ensuremath{\mathbb{R}}}\right\}$. \begin{proof} From the definition there is $M>0$ such that for all $x, y\in{\ensuremath{\mathbb{R}}}^2$ and $n\in{\ensuremath{\mathbb{Z}}}$, \begin{equation}\label{desv1} \left|\textnormal{Pr}_u\left[\left(\tilde f^{n}(x)-x\right)-\left(\tilde f^{n}(y)-y\right)\right]\right|\leqslant M. \end{equation} Set $p=\rho(\ti f, x_0)$ for some $x_0\in{\ensuremath{\mathbb{T}}}^2$, which exists since ${\ensuremath{\mathcal{M}}}_e(f)$ is not empty, with $\textnormal{Pr}_u(p)=a\in{\ensuremath{\mathbb{R}}}$. By $(\ref{desv1})$ it follows that $$\displaystyle \lim_{n\to\infty}\textnormal{Pr}_u\frac{\tilde f^{n}(y)-y}{n}=a,$$ for every $y\in{\ensuremath{\mathbb{R}}}^2$. Therefore $\left\{a\right\}=\textnormal{Pr}_u(\rho_p(\ti f))=\textnormal{Pr}_u(\rho(\ti f))$. \end{proof} \begin{ob}\label{int} It is an elementary exercise to show that if $\tilde f$ is $v$-annular, for some $v\in{\ensuremath{\mathbb{Z}}}_*^2$, then $\rho (\tilde f)\subset L(v,a):=\left\{tv+av^{\bot};\;t\in{\ensuremath{\mathbb{R}}}\right\}$. \end{ob} The following result shows that the converse to Remark \ref{int} is true in several cases. \begin{teo}[\cite{pa}]\label{pa} If $\rho(\tilde f)$ is a segment with rational slope containing rational points, then $f^k$ is annular, for some $k\in{\ensuremath{\mathbb{N}}}$.
\end{teo} \begin{pro}\label{wanul} If $\rho(\ti h)$ is a nontrivial segment, contained in the straight line $$L(w,b)=\left\{tw+bw^{\bot};\;t\in{\ensuremath{\mathbb{R}}}\right\},$$ where $b=\textnormal{Pr}_{w^{\bot}}(\rho(\ti h))$ and $w\in{\ensuremath{\mathbb{R}}}^2$, then $\textnormal{Desv}_{v^{\bot}}(\ti h)$ is unbounded, for every $v\in{\ensuremath{\mathbb{R}}}^2$ not parallel to $w$. \end{pro} \begin{proof} Suppose by contradiction that there is $M>0$ such that $\textnormal{Desv}_{v^{\bot}}(\ti h)\leqslant M$. We know, because of Lemma \ref{proye}, that $\rho(\ti h)$ is a subset of $$L(v,a)=\left\{tv+av^{\bot};\;t\in{\ensuremath{\mathbb{R}}}\right\},$$ where $a=\textnormal{Pr}_{v^{\bot}}(p)$, for every $p\in\rho(\ti h)$. So $\rho(\ti h)\subset [L(v,a)\cap L(w,b)]$. As $v$ and $w$ are not parallel, we can deduce that $\rho(\ti h)$ is a single point, a contradiction. \end{proof} \subsection{Annularity and commuting homeomorphisms} Let $f, h\in \mathrm{Homeo}_0({\ensuremath{\mathbb{T}}}^2)$ commute. In \cite{pk} it was shown that there exist lifts of $f$ and $h$ that also commute. Let $\ti f,\ti h$ be respective commuting lifts. The next lemma is contained in (\cite{mau}, Proposition 3.1). We include the proof here for the sake of completeness. \begin{lem}\label{mau} If $\ti f$ is $(0,1)$-annular and $\textnormal{Desv}_{(0,1)^{\bot}}(\ti h)$ is unbounded, then there exists a set $E$ such that \begin{itemize} \item[i.)] The set $E$ is open, $\ti f$-invariant and $T_2(E)=E$. \item[ii.)] The connected components of $E$ have uniformly bounded diameter. \item[iii.)] For every $z\in {\ensuremath{\mathbb{R}}}^2$, there is $u\in {\ensuremath{\mathbb{Z}}}^2$ such that $(z+u)\in E$, that is, $\pi(E)={\ensuremath{\mathbb{T}}}^2$. \end{itemize} \end{lem} \begin{proof} $i.)$ Denote by \[H=\left\{(x,y)\in{\ensuremath{\mathbb{R}}}^2;\: x\geqslant 0\right\}\] the closed half plane.
And define the set $$B=\overline{\bigcup_{n\in{\ensuremath{\mathbb{Z}}}}\tilde f^n(H)}.$$ Then, $\tilde f(B)=B$ and $T_2(B)=B$. Since $\tilde f$ is $(0,1)$-\emph{annular}, there is $M>0$ such that for every $z\in{\ensuremath{\mathbb{R}}}^2$ and $n\in{\ensuremath{\mathbb{Z}}}$, \begin{displaymath} \left|\textnormal{Pr}_1[\tilde f^n(z)-z]\right|\leqslant M. \end{displaymath} Thus, $$B\subset\left\{(x,y)\in{\ensuremath{\mathbb{R}}}^2;\; x\geqslant -M \right\}.$$ Denote $\Delta=\sup_{z\in{\ensuremath{\mathbb{R}}}^2}\left\|\tilde f(z)-z\right\|$ and $A^\prime=B\setminus\left[T^k_1(int(B))\right],$ where $k>2M+\Delta +1$. Note that $\ti f(A^\prime)=A^\prime$. Hence, $$A^{\prime\prime}=\left\{(x,y)\in{\ensuremath{\mathbb{R}}}^2;\: M\leqslant x\leqslant M + \Delta + 1\right\}\subset A^\prime.$$ Let $A$ be a connected component of $A^\prime$ such that $A^{\prime\prime}\subset A$. We can see that $\tilde f(A)\cap A\neq \emptyset$, thus $\tilde f(A)=A$. Furthermore $T_2(A)=A$, $\textnormal{diam}\:(\textnormal{Pr}_1(A))\leqslant k+2M$ and $(T_2\!\circ\! \tilde h^n)(A)=\tilde h^n(A)$, for every $n\in{\ensuremath{\mathbb{Z}}}$. Consider $l\in{\ensuremath{\mathbb{N}}}$ with $l>2(k+2M+1)$ such that the set $$F=\left\{(x,y)\in{\ensuremath{\mathbb{R}}}^2;\; k+M \leqslant x \leqslant l-M\right\}$$ satisfies $\textnormal{diam}\:(\textnormal{Pr}_1(F))>k+2M+2$. We can deduce that \[\textnormal{diam}\:[\textnormal{Pr}_1(B\setminus int(T_1^{l+k}(B)))]\leqslant l+k+2M.
\] Therefore, $$\textnormal{diam}\:[\textnormal{Pr}_1(B\setminus int(T_1^{2l+k}(B)))]\leqslant 2(l+k+2M).$$ Since $\textnormal{Desv}_{(1,0)}(\tilde h)$ is unbounded, there are $z_1,\: z_2\in{\ensuremath{\mathbb{R}}}^2$ and $n\in{\ensuremath{\mathbb{Z}}}$ such that \[\left|\textnormal{Pr}_1[(\tilde h^n(z_1)-z_1)-(\tilde h^n(z_2)-z_2)]\right|>2(l+k+2M).\] As $\textnormal{diam}\:(\textnormal{Pr}_1(A))\leqslant k+2M$, we have in particular that if $z_1,\: z_2\in A$ then $$\left|\textnormal{Pr}_1[\tilde h^n(z_1)-\tilde h^n(z_2)]\right|>2l+k+2M.$$ So we may suppose $\tilde h^n(z_1)\in A$ and $\tilde h^n(z_2)\in T_1^l(A)$, replacing $\ti h^n$ by $T^j_1\circ \ti h^n$, for some $j\in{\ensuremath{\mathbb{Z}}}$, if necessary. We define $$E=[T^k_1(int(B))\setminus T_1^l(B)]\setminus \tilde h^n(A).$$ We can see that $E$ is open, $\tilde f$-invariant and $T_2(E)=E$, because $B$ and $\tilde h^n(A)$ are. $ii.)$ We must show that there exists $M_0>0$ such that $\textnormal{diam}(U)\leqslant M_0$ for any connected component $U$ of $E$. Since $E$ is contained in $T^k_1(int(B))\setminus T_1^l(B)$, we deduce that $$\textnormal{diam}\:(\textnormal{Pr}_1(E))\leqslant (l+M)-(k-M)=M_1.$$ Let $\gamma:[0,1]\to {\ensuremath{\mathbb{R}}}^2$ be a path contained in $\tilde h^n(A)$, from $A$ to $T^l_1(A)$. As $(T^j_2\! \circ\! \tilde h^n)(A)=\tilde h^n(A)$, for every $n,\: j\in{\ensuremath{\mathbb{Z}}}$, the path $T^j_2(\gamma)$ is also contained in $\tilde h^n(A)$, for every $j\in{\ensuremath{\mathbb{Z}}}$, from $A$ to $T^l_1(A)$. But $\gamma$ is compact, so there exist $a,\: b\in{\ensuremath{\mathbb{R}}}$ such that $a\leqslant \textnormal{Pr}_2(z)\leqslant b$, for every $z\in \gamma$. Then, for any connected component $U$ of $E$, $\textnormal{diam}(\textnormal{Pr}_2(U))\leqslant \left|b-a+1\right|$. Thus, set $M_2=\left|b-a+1\right|$ and $M_0=\sqrt{M_1^2+M_2^2}$. $iii.)$ Let us see that for every $z\in{\ensuremath{\mathbb{R}}}^2$, there exists $u\in{\ensuremath{\mathbb{Z}}}^2$ such that $z+u\in E$.
Suppose by contradiction that there exists $z\in{\ensuremath{\mathbb{R}}}^2$ such that $(z+{\ensuremath{\mathbb{Z}}}^2)\cap E=\emptyset$. There is $u\in{\ensuremath{\mathbb{Z}}}^2$ such that $w=z+u\in F$, so $w\in \tilde h^n(A)$. Since $\textnormal{diam}(\textnormal{Pr}_1(F))>k+2M+2$, we can choose $u$ such that $T_1^j(w)\in (F\cap \tilde h^n(A))$, for $j=0,\ldots,k+2M+1$. Then the points $\tilde h^{-n}(T_1^j(w))\in A$, $j=0,\ldots,k+2M+1$, are $k+2M+2$ horizontal integer translates of a single point of $A$. This contradicts $\textnormal{diam}(\textnormal{Pr}_1(A))\leqslant k+2M$. \end{proof} \subsection{An auxiliary result} The next lemma can be found in \cite{kk}. \begin{lem}\label{kk} Let $f\in \mathrm{Homeo}_0({\ensuremath{\mathbb{T}}}^2)$, $A\in SL(2,{\ensuremath{\mathbb{Z}}})$ and let $h\in \mathrm{Homeo}({\ensuremath{\mathbb{T}}}^2)$ be isotopic to $f_A$. Let $\tilde f$ and $\tilde h$ be respective lifts. Then $$\rho(\tilde h \tilde f \tilde h^{-1})=A\rho(\tilde f).$$ In particular, $\rho(\tilde A \tilde f \tilde A^{-1})=A\rho(\tilde f).$ \end{lem} \begin{ob}\label{v} If $\tilde f$ is $v$-annular, for some $v\in{\ensuremath{\mathbb{Z}}}_*^2$, then there exists a homeomorphism conjugate to $\tilde f$ that is $(0,1)$-annular. In fact, given $v=(q,p)\in {\ensuremath{\mathbb{Z}}}^2$, there exist integers $a, b$ such that $pa+qb=1$. Consider the matrix \[ A= \begin{bmatrix} p & -q\\ b & a \end{bmatrix}. \] Hence $\det(A)=pa+qb=1$ and $A\cdot(q,p)=(pq-qp,\, bq+ap)=(0,1)$, so Lemma \ref{kk} shows that $A\tilde f A^{-1}$ is $(0,1)$-annular. \end{ob} \section{Theorems A and B} \label{dem} \subsection{Noncommuting homeomorphisms} Let us show how Theorem \ref{ncom} follows from Theorem \ref{d1}. \begin{main}\label{ncom} Suppose that $\rho(\tilde f)$ is a segment $J$ with rational slope containing rational points, and that $\rho(\tilde h)$ satisfies one of the following: \begin{enumerate} \item It has nonempty interior. \item It is a nontrivial segment not parallel to $J$. \end{enumerate} Then $\tilde f$ and $\tilde h$ do not commute.
\end{main} \begin{proof} Suppose by contradiction that $\tilde f$ and $\tilde h$ commute. By Theorem \ref{pa} there is $k\in{\ensuremath{\mathbb{N}}}$ such that $\tilde f^k$ is annular. Denote $\ti g=\ti f^k$, so $\tilde g$ and $\tilde h$ commute, and suppose that $\ti g$ is $v$-\emph{annular}, for some $v\in {\ensuremath{\mathbb{Z}}}^2_*$. We claim that in either case (1) or (2) above for $\rho(\ti h)$, $\textnormal{Desv}_{v^{\bot}}(\tilde h)$ is unbounded. In fact, the first case follows from Lemma \ref{proye}, and the second follows by applying Proposition \ref{wanul}. Hence Theorem \ref{d1} implies $\textnormal{Desv} (\tilde g)<\infty$. Therefore $\tilde g$ is a pseudo-rotation. This yields a contradiction, because $\rho(\tilde f)$ is a nontrivial segment. \end{proof} \subsection{Theorem B} From now on, let $f, h\in \mathrm{Homeo}_0({\ensuremath{\mathbb{T}}}^2)$ commute and let $\ti f,\ti h$ be respective commuting lifts. In the proof of the next proposition, we write $L=\left\{t(0,1);\; t\in{\ensuremath{\mathbb{R}}}\right\}$. \begin{pro}\label{d} If $\tilde f$ is $(0,1)$-annular, then $\textnormal{Desv}_{(1,0)}(\tilde h)<\infty$ or $\textnormal{Desv} (\tilde f)<\infty$. \end{pro} \begin{proof} Suppose that $\textnormal{Desv}_{(1,0)}(\tilde h)$ is unbounded. Then there is a set $E$ with the properties in Lemma \ref{mau}. Since the connected components of $E$ have uniformly bounded diameter, we can consider $M_0>0$ such that for every connected component $U$ of $E$, $\textnormal{diam}\;(U)< M_0$. \begin{afp}\label{m1} There exist $\overline{n}\in {\ensuremath{\mathbb{Z}}}^+$, $\overline{m}\in {\ensuremath{\mathbb{Z}}}$ and a connected component $U$ of $E$ such that $$\left\|\tilde f^{k\overline{n}}(x)-x-k(0,\overline{m})\right\|\leqslant M_0,$$ for all $k\in{\ensuremath{\mathbb{N}}}$ and every $x\in U$.
\end{afp} Indeed, by Remark \ref{int} we may suppose that $\rho(\tilde f)\subset L$, that is, $\rho(\tilde f)=\left\{0\right\}\times \left[a,b\right]$, with $a\leqslant b$, for $a,b\in{\ensuremath{\mathbb{R}}}$. Since $(0,a)$ is an extremal point of $\rho(\tilde f)$, by Theorem \ref{convexo} there exists $\mu\in {\ensuremath{\mathcal{M}}}_e(f)$ such that for $\mu$-a.e.\ $x\in{\ensuremath{\mathbb{T}}}^2$, $$\rho(\mu,f)=\int \phi\ d\mu=(0,a),$$ where $\phi:{\ensuremath{\mathbb{T}}}^2\to {\ensuremath{\mathbb{R}}}^2$ is the displacement function defined by $\phi(x)=\tilde f(\tilde x)-\tilde x$ for $\tilde x\in \pi^{-1}(x)$. Then it follows from Proposition \ref{birkof} that for $\mu$-a.e.\ $x\in{\ensuremath{\mathbb{T}}}^2$ and every $\tilde x\in \pi^{-1}(x)$, $$\lim_{n\to\infty}\frac{\tilde f^n(\tilde x)-\tilde x}{n} =(0,a).$$ Consider the first projection $\textnormal{Pr}_1:{\ensuremath{\mathbb{R}}}^2\to {\ensuremath{\mathbb{R}}}$ and the function $\phi_1:{\ensuremath{\mathbb{T}}}^2\to{\ensuremath{\mathbb{R}}}$ defined by $\phi_1(x)=\textnormal{Pr}_1(\phi)(x).$ Hence, $$\int \phi_1\ d\mu=0.$$ By Atkinson's Lemma (Corollary \ref{atk}), applied to the function $\phi_1$, for $\mu$-a.e.\ $x\in {\ensuremath{\mathbb{T}}}^2$ there is an increasing sequence of positive integers $(n_i)_{i\in{\ensuremath{\mathbb{N}}}}$ such that \begin{equation*} f^{n_i}(x)\to x\ \ \ \textit{and} \ \ \ \sum_{k=0}^{n_i-1}(\phi_1\circ f^{k})(x)\to 0, \ \ \ \textit{as} \ \ \ i\to \infty. \end{equation*} That is, for $\mu$-a.e.\ $x\in {\ensuremath{\mathbb{T}}}^2$ and $\ti x\in \pi^{-1}(x)$, \begin{equation}\label{at} \textnormal{Pr}_1[\tilde f^{n_i}(\tilde x)- \tilde x]\to 0, \ \ \ \textit{as} \ \ \ i\to \infty. \end{equation} Let $x\in {\ensuremath{\mathbb{T}}}^2$ satisfy (\ref{at}) and let $\tilde x\in\pi^{-1}(x)$. By property \textit{iii.)} of $E$, there is $u\in{\ensuremath{\mathbb{Z}}}^2$ such that $\tilde x+u=w\in U$, for some connected component $U$ of $E$.
Hence, there are sequences $(n_i)_{i\in{\ensuremath{\mathbb{N}}}}$ in ${\ensuremath{\mathbb{Z}}}^+$ and $(m_i)_{i\in{\ensuremath{\mathbb{N}}}}$ in ${\ensuremath{\mathbb{Z}}}$ such that $\tilde f^{n_i}(w)-T_v^{m_i}(w)\to 0$, as $i\to \infty$. As $U$ is an open set, let $\epsilon>0$ be such that $B_{\epsilon}(w)\subset U$. Then there is $i_0\in{\ensuremath{\mathbb{N}}}$ such that for all $i>i_0$, $\tilde f^{n_i}(w)\in T_v^{m_i}(B_{\epsilon}(w))\subset T_v^{m_i}(U)$. In particular, for some $\overline{m}\in{\ensuremath{\mathbb{Z}}}$, there is $\overline{n}\in{\ensuremath{\mathbb{Z}}}^+$ such that $\tilde f^{\overline{n}}(U)\cap T_v^{\overline{m}}(U)\neq \emptyset$. By property \textit{i.)} of $E$, the set $T_v^m(U)$ is also a connected component of $E$, for every $m\in{\ensuremath{\mathbb{Z}}}$. But $E$ is invariant under $\tilde f$, so $\tilde f$ permutes the connected components of $E$. Therefore $\tilde f^{\overline{n}}(U)=T_v^{\overline{m}}(U)$. By induction, we have $\tilde f^{k\overline{n}}(U)=T_v^{k\overline{m}}(U)$, for every $k\in{\ensuremath{\mathbb{N}}}.$ Since $M_0>\textnormal{diam}(U)$, \begin{equation*} \left\|\tilde{f} ^{k\overline{n}}(x)-x-k(0,\overline{m})\right\|\leqslant M_0, \ \ \ \textit {for every} \ \ x\in U \ \textit {and} \ k\in {\ensuremath{\mathbb{N}}}, \end{equation*} concluding the proof of Claim $\ref{m1}$. Now consider an integer translate $D$ of $[0,1]^2$ containing $w$. We claim that there exists a finite open cover $C=\bigcup_{j=1}^s\left\{U_j\right\}$ of $D$, where every $U_j$ is connected and, for some $u_j\in{\ensuremath{\mathbb{Z}}}^2$, $U_j+u_j$ is a connected component of $E$. In fact, by property $iii.)$ of $E$, for every point $y\in D$ there is $u_*\in{\ensuremath{\mathbb{Z}}}^2$ such that $(y+u_*)\in U^*_y$, for some connected component $U^*_y$ of $E$.
Denote $U^\prime_y=U^*_y-u_*$ and $C^\prime=\bigcup_{y\in D}\left\{U^\prime_y\right\}.$ By the properties of $E$, the set $C^\prime$ is an open cover of $D$ such that every $U^\prime_y + u_*$ is a connected component of $E$. Moreover, $C^\prime$ contains the set $U=U^*_w$. Since $D$ is compact, there is a finite subcover of $D$, denoted by \begin{align*} C=\bigcup_{j=1}^s\left\{U_j\right\}, \ \ \text{where}\ \left\{U_j\right\}\subset C^\prime,\ \text{for every}\ j=1,\ldots,s, \end{align*} as claimed. Set $U^*_j=U_j+u_j$ for some $u_j\in{\ensuremath{\mathbb{Z}}}^2$. Then $\tilde f^n(U_j)=\tilde f^n(U^*_j)-u_j$ is an integer translate of a connected component of $E$, for every $n\in{\ensuremath{\mathbb{Z}}}$, because $\ti f$ permutes the connected components of $E$. By property $ii.)$ of $E$, $\textnormal{diam}(\tilde f^n(U_j))< M_0,$ for every $n\in{\ensuremath{\mathbb{Z}}}$ and $j=1,\ldots, s$. We may assume that $U_1=U$, where $U$ is the open set of Claim $\ref{m1}$ (otherwise consider the finite cover $\left\{U\right\}\cup C$). Since $D$ is connected, without loss of generality we may list the open sets $U_j$, $j=1,\ldots,s$, of $C$ so that \begin{equation}\label{uj} U_j\cap\left[\bigcup_{1\leqslant l<j}U_l\right]\neq\emptyset,\ \ \textit{for every $\ j=2,\ldots,s$}. \end{equation} \begin{afp}\label{s} There exist $M>0$ and $p\in{\ensuremath{\mathbb{Q}}}^2$ such that $\left\|\tilde f^k(y)-y-kp\right\|\leqslant M$ for every $y\in D$ and $k\in{\ensuremath{\mathbb{N}}}$. \end{afp} To prove this claim, let us first show by induction that \begin{equation}\label{induc} \left\|\tilde f^{k\overline{n}}(y)-y-k(0,\overline{m})\right\|\leqslant (2j-1)M_0, \end{equation} for every $y\in U_j,\ $ $j=1,\ldots,s,\ $ and $k\in{\ensuremath{\mathbb{N}}}$, where $\overline{n}$ and $\overline{m}$ are as in Claim $\ref{m1}$. \begin{itemize} \item[i.] The case $j=1$ is Claim \ref{m1}. \item[ii.]
Suppose it holds for every $U_l$ with $1\leqslant l\leqslant j$, and let us treat the case $j+1$: by (\ref{uj}), we have $U_{j+1}\cap U_l\neq \emptyset$ for some $U_l$ with $1\leqslant l \leqslant j$. Then $\tilde f^{\overline{n}}(U_{j+1})\cap \tilde f^{\overline{n}}(U_{l})\neq \emptyset$, and by induction, $\tilde f^{k\overline{n}}(U_{j+1})\cap \tilde f^{k\overline{n}}(U_{l})\neq \emptyset$, for every $k\in{\ensuremath{\mathbb{N}}}.$ Since $\textnormal{diam}(\tilde f^n(U_j))< M_0$ for every $n\in{\ensuremath{\mathbb{Z}}}$ and $j=1,\ldots, s$, then for every $y\in U_{j+1}$, $k\in{\ensuremath{\mathbb{N}}}$ and some $x\in U_{j+1} \cap U_l$, we have \begin{align*} \left\|\tilde f^{k\overline{n}}(y)-y-k(0,\overline{m})\right\| &\leqslant \left\|\tilde f ^{k\overline{n}}(y)-\tilde f ^{k\overline{n}}(x)\right\|+ \left\|\tilde f ^{k\overline{n}}(x)-x-k(0,\overline{m})\right\| + \left\|x-y\right\|\\ & \leqslant M_0 + (2l-1)M_0 + M_0 \leqslant [2(j+1)-1]M_0. \end{align*} \end{itemize} Thus equation (\ref{induc}) holds. Setting $M_1=(2s-1)M_0$, it follows that $\left\|\tilde f ^{k\overline{n}}(y)-y-k(0,\overline{m})\right\|<M_1$, for every $y\in D$ and $k\in {\ensuremath{\mathbb{N}}}$. Denote $p=\frac{1}{\overline{n}}(0,\overline{m})\in {\ensuremath{\mathbb{Q}}}^2$. Given $k\in {\ensuremath{\mathbb{N}}}$, there are $t, r \in {\ensuremath{\mathbb{N}}}$ such that $k=t\overline{n}+r$, where $0\leqslant r< \overline{n}$. Hence, for every $y\in D$ and $k\in {\ensuremath{\mathbb{N}}}$, \begin{align*} \left\|\tilde{f} ^{k}(y)-y-kp\right\| & = \left\|\tilde{f} ^{t\overline{n}+r}(y)-y-(t\overline{n}+r)p\right\|\\ & \leqslant \left\|\tilde{f} ^r(\tilde{f}^{t\overline{n}}(y))-\tilde{f}^{t\overline{n}}(y)\right\| + \left\|\tilde{f} ^{t\overline{n}}(y)-y-t(0,\overline{m})\right\|+ \left\|rp\right\|\\ &\leqslant \overline{n}\ \sup_{z\in {\ensuremath{\mathbb{R}}}^2}\left\|\tilde{f}(z)-z\right\|+ M_1 + \overline{n}\left\|p\right\|=M, \end{align*} completing the proof of Claim $\ref{s}$.
Given $z\in{\ensuremath{\mathbb{R}}}^2$, let $u\in{\ensuremath{\mathbb{Z}}}^2$ be such that $z+u=y\in D$. Thus, \begin{align*} \left\|\tilde{f}^{k}(z)-z-kp\right\| = &\left\|\tilde{f} ^{k}(z+u)-(z+u)-kp\right\|= \left\|\tilde{f} ^{k}(y)-y-kp\right\| \leqslant M. \end{align*} Therefore, there exist $M>0$ and $p\in{\ensuremath{\mathbb{Q}}}^2$ such that for every $z\in {\ensuremath{\mathbb{R}}}^2$ and $k\in{\ensuremath{\mathbb{N}}}$, $$\left\|\tilde{f}^{k}(z)-z-kp\right\| \leqslant M.$$ This implies that $\textnormal{Desv}(\tilde{f})<\infty.$ \end{proof} Now let us consider the general case, for every $v\in{\ensuremath{\mathbb{Z}}}^2_*$. \begin{main}\label{d1} If $\tilde f$ is $v$-annular, for some $v\in{\ensuremath{\mathbb{Z}}}^2_*$, then $\textnormal{Desv}_{v^{\bot}}(\tilde h)<\infty$ or $\textnormal{Desv} (\tilde f)<\infty$. \end{main} \begin{proof} By Remark \ref{v}, if $\tilde f$ is $v$-annular then $A\tilde fA^{-1}$ is $(0,1)$-annular. In this way, the homeomorphisms $A\tilde fA^{-1}$ and $A\tilde hA^{-1}$ satisfy the hypotheses of Proposition \ref{d}. \end{proof} \end{document}
\begin{document} \title[Effective Katok's horseshoe theorem]{An effective version of Katok's horseshoe theorem for conservative $C^2$ surface diffeomorphisms} \date{\today} \maketitle \begin{abstract} For area-preserving $C^2$ surface diffeomorphisms, we give an explicit finite-information condition, on the exponential growth of the number of Bowen's $(n,\delta)$-balls needed to cover a positive proportion of the space, that is sufficient to guarantee positive topological entropy. This can be seen as an effective version of Katok's horseshoe theorem in the conservative setting. We also show that the analogous result is false in dimension larger than $3$. \end{abstract} \section{Introduction} Let $X$ be a compact smooth surface with a Riemannian metric. Denote by ${\rm Diff}^{r}_{\text{vol}}(X)$ the group of $C^r$ diffeomorphisms which preserve the volume form $m$ induced by the Riemannian metric. Without loss of generality, we assume that $m(X) = 1$. A well-known result of Katok, based on Pesin theory, says that if $f \in {\rm Diff}^{1+\epsilon}(X)$ has a non-zero Lyapunov exponent for some $f$-invariant non-atomic measure, then the topological entropy of $f$ is positive, and $f$ actually has invariant horseshoes that carry most of the topological entropy (see for example \cite{katok_ihes}, or \cite{kh}). In particular, this is the case for any $f \in {\rm Diff}^{1+\epsilon}_{\text{vol}}(X)$ having positive Lyapunov exponents on a positive measure set, or in other words, when $f$ has positive metric entropy by Pesin's formula. Besides the positivity of Lyapunov exponents, another manifestation of positive metric entropy is the exponential rate of growth of the number of Bowen $(n,\delta)$-balls (see Definition \ref{def:bowen}) that are needed to cover a definite proportion of $X$ (see for example \cite{kh}). \begin{defi} \label{def:bowen} Let $f : X \to X$ be a continuous map.
For any $\delta > 0$, any integer $n\geq 1$ and any $x \in X$, we define Bowen's $(n,\delta)$-ball centered at $x$ by \begin{align*} B_f(x, n, \delta) = \{y \mid d(f^{i}(x), f^{i}(y)) < \delta, \ \forall\, 0 \leq i \leq n-1 \}. \end{align*} Given an $f$-invariant measure $\mu$ and $\varepsilon \in (0,1)$, let $N_{f}(n, \delta, \varepsilon) = \inf_{\mathcal U} |\mathcal U|$, where the infimum is taken over all subsets $\mathcal U$ of $\{B_f(x, n, \delta)\}_{x \in X}$ such that the union of the $(n,\delta)$-balls in $\mathcal U$ has $\mu$-measure not less than $1-\varepsilon$. For a finite set $I$, we use $|I|$ to denote the cardinality of $I$. \end{defi} By the sub-additive growth of the number of Bowen balls and Katok's horseshoe theorem, the following statement follows by compactness: \textit{Fact: If the $C^2$ norm of $f$ is bounded by $D>0$, and if $h,\delta,\varepsilon>0$ are fixed, then there exists $n_0 = n_0(D,h,\delta,\varepsilon)>0$ such that if $N_{f}(n, \delta, \varepsilon) \geq e^{nh}$ for some integer $n > n_0$, then $f$ has positive topological entropy.} {\it Sketch of proof.} Assume by contradiction that there exist $h,\delta,\varepsilon>0$ and a sequence $f_{n}$, with a uniform bound on its $C^2$ norms, for which $N_{f_n}(n, \delta, \varepsilon) \geq e^{nh}$ and $h_{\rm top}(f_{n})=0$. By compactness we can, up to passing to a subsequence, assume that $f_n$ has a limit $f$ that is $C^{1+{\rm Lip}}$. Since for any $g$ the minimal number $N_{g}(n, \delta)$ of balls needed to cover all of $X$ is essentially sub-additive in $n$, we have, for a fixed $k \in \N$ and any $n$ sufficiently large, $N_{f_{n}}(k, \delta) \geq e^{kh/2}$. Therefore $N_{f}(k, \delta) \geq e^{kh/2}$ for any $k \in \N$, and hence $f$ has positive topological entropy. By Katok's horseshoe theorem, this contradicts the assumption $h_{\rm top}(f_n)=0$ for all $n$.
$\Box$ In this paper, we will give a direct proof of the above fact that also provides an explicit upper bound for $n_0(D,h,\delta,\varepsilon)$. Our bound will essentially be a tower-exponential of height $K \sim \log(\frac{\log A}{h})$, where $A=\|f\|_{C^1}$. The norm of the second derivative of $f$ enters into the argument of the tower-exponential bound. We will not use in our proof {\it any} ergodic theory. Our main tool is a finite-information closing lemma for a map $g\in {\rm Diff}^{2}_{\text{vol}}(X)$ that generalizes the one obtained in \cite[Theorem 4]{AFLXZ}. Theorem 4 in \cite{AFLXZ} asserts that if $x$ is such that $\norm{Dg^q(x)}$ is comparable to $\norm{Dg}^{\theta q}$, where $\theta$ is close to $1$ and $q$ is sufficiently large compared to powers of the $C^2$ norm of $g$, then there exists a hyperbolic periodic point that shadows a piece of a length $q$ orbit of $x$. A similar effective closing lemma was previously obtained by Climenhaga and Pesin in \cite{CP} for $C^{1+\epsilon}$-diffeomorphisms in any dimension, assuming however the existence of a splitting of the tangent spaces along a long orbit with some additional estimates of effective hyperbolicity. For an interesting application of the latter effective approach, we refer the reader to \cite{CDP}. In this note we will need a generalized version of the effective closing lemma in \cite{AFLXZ} that gives a shadowing of $x$ by a hyperbolic periodic orbit even when $\norm{Dg^q(x)}$ is much smaller than $\norm{Dg}^{\theta q}$, provided that $\norm{Dg^q(x)} \geq \norm{Dg(g^i(x))}^{\theta q}$ for most of the $i \in [0,q]$. An inductive use of this closing lemma allows one to obtain, under the growth condition on the $(n,\delta)$-balls, sufficiently many hyperbolic periodic points, with good control on their local stable and unstable manifolds, to ensure the existence of a horseshoe.
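For orientation, unrolling the recurrence defining the tower function $Tower$ introduced below gives, for the first few heights,
\begin{eqnarray*}
Tower(P_0,P_1,1)=P_0,\qquad Tower(P_0,P_1,2)=P_1^{P_0},\qquad Tower(P_0,P_1,3)=P_1^{P_1^{P_0}}.
\end{eqnarray*}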
Note that, in order to exploit the growth condition on the Bowen balls, we need sufficiently precise information from the shadowing property, which is not provided by a direct bootstrapping of Theorem 4 in \cite{AFLXZ}. With the same approach, we are also able to conclude positive topological entropy from derivative growth at an explicit time scale along a single, yet not too concentrated, orbit. \subsection{Statements of the main results} Throughout this note, $X$ is a compact surface with a volume form $m$. Without loss of generality, we assume that $m(X) = 1$. We will denote by $f : X \to X$ a $C^2$ diffeomorphism that preserves $m$ and such that, for constants $A,D > 0$, \begin{align} \label{norms} \tag{$*$} \left\{ \begin{array}{ll} \norm{Df}&\leq A, \\ \norm{D^2f}&\leq D. \end{array} \right. \end{align} Here $\norm{Df}$, $\norm{D^2f}$ denote respectively the suprema of the first and second derivatives of $f$. All the constants that appear in the text will implicitly depend on the surface $X$. To simplify notation, we define the following. \begin{defi} For $R_0,R_1 > 0$ and $K \in \Z_{+}$, we define the function $Tower: \R_{+}^2 \times \Z_{+} \to \R$ by the recurrence relation \begin{eqnarray} Tower(R_0, R_1, K) = \begin{cases} R_0, & K=1, \\ R_1^{Tower(R_0, R_1, K-1 )}, & K \geq 2. \end{cases} \end{eqnarray} \end{defi} Our main result is the following. \begin{Main}\label{mainA} There exists a constant $C_0 = C_0(X) >0$ such that the following is true. For any $A, D > 1$, $h \in ( 0, \log A]$, $\varepsilon \in (0,1)$, $\delta > 0$, denote by \begin{eqnarray}\label{defQ} P_0 &=& \max( \varepsilon^{-1}e^{C_0(\log(\frac{\log A}{h}))^2 + C_0}, C_0h^{-1}\log \delta^{-1} ), \\ P_1 &=& e^{C_0 h^{-1} \log D \log A}. \label{defQ1}
\end{eqnarray} If $f : X \to X$ is a $C^2$ diffeomorphism preserving $m$ that satisfies \eqref{norms}, and $N_{f}(n, \delta, \varepsilon) > e^{nh}$ for some $n \geq Tower(P_0, P_1,K_0)$, where $K_0 = \lceil C_0 \log(\frac{\log A}{h}) + C_0 \rceil$, then $f$ has positive topological entropy. \end{Main} Theorem \ref{mainA} gives positive topological entropy from complexity growth at an explicit large time scale. Some adaptation of the proof also allows us to conclude positive topological entropy from derivative growth at an explicit time scale along a single, yet not too concentrated, orbit. To formulate such a result precisely, we introduce the following notation. \begin{defi} Given a continuous map $f : X \to X$, for any subset $I \subset \Z$ and any $x \in X$, we set $Orb(f,x,I) = \{ f^{i}(x) \mid i \in I\}$. For constants $c, \delta > 0$, $\varepsilon \in (0,1)$, we say that $x$ is \textit{$(n, c, \delta,\varepsilon)$-sparse} if for any subset $I \subset \{0,\cdots, n-1 \}$ satisfying $|I| > c n$ we have $m(B( Orb(f,x, I), \delta)) > \varepsilon$. \end{defi} \begin{Main}\label{mainB} There exists a constant $C_0 = C_0(X) > 0$ such that the following is true. For any $A, D > 1$, $h \in ( 0, \log A]$, $\varepsilon \in (0,1)$, let \begin{eqnarray*} P_0 = \varepsilon^{-1}e^{C_0(\log(\frac{\log A}{h}))^2 + C_0}, \quad P_1 = e^{C_0 h^{-1} \log D \log A}. \end{eqnarray*} If $f : X \to X$ is a $C^2$ diffeomorphism preserving $m$ that satisfies \eqref{norms}, and there exists $x \in X$ such that for some $n \geq Tower(P_0,P_1,K_0)$, where $K_0 = \lceil C_0 \log(\frac{\log A}{h}) + C_0 \rceil$, we have \begin{itemize} \item $\norm{Df^{n}(x)} > e^{nh}$, \item $x$ is $(n, Tower(P_0,P_1,K_0-1)^{-1}, D^{-Tower(P_0, P_1, K_0-1)},\varepsilon)$-sparse, \end{itemize} then $f$ has positive topological entropy.
\end{Main} Observe that a non-concentration condition, such as the second condition of Theorem \ref{mainB}, is necessary to conclude positive entropy, for otherwise $x$ could just belong to a hyperbolic periodic orbit with a small period. We remark that Theorem \ref{mainA} does not hold in general in dimension at least $4$, as the following example shows. \begin{example} Denote by $\{ g_t \}_{t \in \R}$ a geodesic flow on $Y := S_1M$, the unit tangent space of a hyperbolic surface $M$, preserving the Liouville measure $\mu$. We set $h_0 := h_{\mu}(g_1) > 0$. Let ${\mathbb T} = \R / \Z$ be the circle and let $\varphi \in C^\infty({\mathbb T})$ be a function such that $\int_{{\mathbb T}} \varphi\, d\theta = 0$ and $\varphi |_{[0, \frac{1}{2}]} \equiv 1$. For any $\alpha \in \R$, denote by $R_{\alpha} : {\mathbb T} \to {\mathbb T}$ the rotation $\theta \mapsto \theta + \alpha \pmod{1}$, and consider the $C^2$ map $f_{\alpha} : {\mathbb T} \times Y \to {\mathbb T} \times Y$ defined as follows: \begin{eqnarray*} f_{\alpha}(\theta,x) = (\theta+\alpha, g_{\varphi(\theta)}(x)),\quad \forall\, (\theta,x) \in {\mathbb T} \times Y. \end{eqnarray*} \end{example} Observe that for any $\alpha \in \R$, $f_{\alpha}$ preserves the smooth measure $\nu := Leb_{{\mathbb T}} \times \mu$. It is clear that $\sup_{\alpha \in {\mathbb T}} \norm{f_{\alpha}}_{C^2} < \infty$. Moreover, we have the following proposition, which shows that Theorem \ref{mainA} does not hold in general in dimension at least $4$. \begin{prop} \label{prop counter} We have that \begin{itemize} \item[(1)] For any $\alpha\in \R - {\mathbb Q}$, the topological entropy $h_{top}(f_{\alpha})=0$.
\item[(2)] There exists $\delta > 0$ such that for any $\varepsilon \in (0,1)$ and any integer $n_0 > 0$, there exist $n > n_0$ and $\bar{\alpha} \in {\mathbb T}$ such that for any $\alpha \in [0, \bar{\alpha}]$ it holds that $N_{f_{\alpha}}(n, \delta, \varepsilon) > e^{\frac{nh_0}{2}}$. \end{itemize} \end{prop} \begin{proof} The Abramov--Rohlin formula for the entropy of a skew product yields (1) \cite{AR}. To see (1) directly, let $(q_n)_{n \in \N}$ be the sequence of denominators of the best rational approximations of $\alpha$. Then, by the Denjoy--Koksma theorem, the partial sums $S_{q_n}\varphi$, defined by $S_{q_n}\varphi(\theta) := \sum_{i=0}^{q_n-1} \varphi(\theta + i\alpha)$ for all $\theta \in {\mathbb T}$, converge uniformly in the $C^\infty$ topology to $0$ as $n$ tends to infinity. By direct computation, we see that \begin{eqnarray*} f_{\alpha}^{q_n}(\theta,x) = (\theta + q_n\alpha, g_{S_{q_n}\varphi(\theta)}(x)),\quad \forall\, (\theta,x) \in {\mathbb T} \times Y. \end{eqnarray*} This implies that $f_{\alpha}^{q_n}$ converges to ${\rm Id}$ in the $C^{\infty}$ topology as $n$ tends to infinity. By Ruelle's entropy inequality, such convergence can happen only if $h_{top}(f_{\alpha}) = 0$. To see (2), we notice that since $h_{\mu}(g_1) = h_0 > 0$, there exists $\delta > 0$ such that for any $\varepsilon \in (0,1)$ and any $n_0 > 0$, there exists $n > n_0$ such that $N_{g_1}(n, \delta, \varepsilon) > e^{\frac{nh_0}{2}}$. Then, choosing $\alpha$ sufficiently close to $0$ so that $i\alpha \in [0, \frac{1}{2}]$ for all $0 \leq i \leq n$, we have $f_{\alpha}^{i}(\theta,x) = (\theta+i\alpha, g_{i}(x))$ for any $(\theta,x) \in {\mathbb T} \times Y$ and any $0 \leq i \leq n$.
Then it is straightforward to see that $N_{f_{\alpha}}(n, \delta, \varepsilon) \geq N_{g_1}(n, \delta, \varepsilon) > e^{\frac{nh_0}{2}}$. This concludes the proof. \end{proof} \begin{nota} For any $n \geq 1$ and any $x \in X$, we denote $\mu_{x,n} = \frac{1}{n}\sum_{m=0}^{n-1} \delta_{f^{m}(x)}$. For any $x \in X$, any linear subspace $E \subset T_{x}X$ and any $r > 0$, we denote $B_{E}(r) = \{v \in E \mid \norm{v} < r \}$. For any subset $A \subset X$ and any $r > 0$, we denote $B(A, r) = \{x \mid d(x,A) < r\}$. For any measurable subset $K \subset X$, we use $|K|$ or $m(K)$ to denote the measure of $K$. We will use $c, c_1, \cdots$ to denote generic positive constants which are allowed to vary from line to line, and which may or may not depend on $X$, but are independent of everything else. Under these conventions, expressions like $cA \leq B \leq cA$ are legitimate. For two variables $A,B > 0$, we write $A \gg B$ (resp. $A \ll B$) if $A \geq c B$ (resp. $cA \leq B$) for some constant $c$ as above. \end{nota} \section{From hyperbolic points to positive entropy} \begin{defi} \label{def hyp pts} Let $g : X \to X$ be a $C^1$ diffeomorphism. For $\alpha \in (0,\pi)$, $r \in (0,1)$, a hyperbolic periodic point $y \in X$ of $g$ is said to be $(\alpha, r)$-hyperbolic if the following holds. Let $E^{s}(y), E^{u}(y)$ be respectively the stable and unstable directions at $y$. Then \begin{enumerate} \item The angle between $E^s(y)$ and $E^u(y)$ is at least $\alpha$, \item The local stable (resp. local unstable) manifold of $g$ at $y$ contains $\exp_{y}(graph(\gamma_s))$ (resp. $\exp_{y}(graph(\gamma_u))$), where $\gamma_s : B_{E^{s}(y)}(r) \to E^{u}(y)$ (resp. $\gamma_u : B_{E^{u}(y)}(r) \to E^{s}(y)$) is a Lipschitz function such that $\gamma_s(0)=0$ and $Lip(\gamma_s) < \frac{1}{100}$ (resp.
$\gamma_u(0) = 0$ and $Lip(\gamma_u) < \frac{1}{100}$). \end{enumerate} Moreover, we denote $\exp_y(graph(\gamma_s))$ (resp. $\exp_y(graph(\gamma_u))$) by $\mathcal{W}^{s}_{r}(y)$ (resp. $\mathcal{W}^{u}_{r}(y)$). For any $\alpha \in (0,\pi)$, $r > 0$, the set of all $(\alpha,r)$-hyperbolic points of $g$ is denoted by $\mathcal{H}(g,\alpha,r)$. To simplify notation, for any $\lambda \in (0,1)$, a $(\lambda^{2}, \lambda^{3})$-hyperbolic point of $g$ is said to be $\lambda$-hyperbolic. The set of all $\lambda$-hyperbolic points of $g$ is denoted by $\mathcal{H}(g, \lambda)$. \end{defi} \begin{defi}[Heteroclinic intersection] For any $C^1$ diffeomorphism $g : X \to X$ and any two distinct hyperbolic periodic points $p,q$ of $g$, we say that $p,q$ have a heteroclinic intersection if the stable manifold of $p$ intersects the unstable manifold of $q$ transversely, and the unstable manifold of $p$ intersects the stable manifold of $q$ transversely. \end{defi} The following simple lemma shows that for any given $\alpha,r$, there cannot be too many $(\alpha, r)$-hyperbolic points unless there is a heteroclinic intersection. \begin{prop}\label{lem alt} There exist $C_1,C_2 > 1$ depending only on $X$ such that for any $\alpha \in (0,\pi)$ and any $0 < r < C_1^{-1}$, if a $C^1$ diffeomorphism $g : X \to X$ satisfies $|\cal{H}(g,\alpha, r)| > C_2 r^{-2}\alpha^{-4}$, then there exists a heteroclinic intersection for $g$. In particular, $g$ has positive topological entropy. In particular, if $\lambda \ll 1$ and $|\mathcal{H}(g, \lambda)| \gg \lambda^{-14}$, then there exists a heteroclinic intersection for $g$.
\end{prop} \begin{proof} In order to be able to measure the angles between vectors in nearby tangent spaces, we cover the surface $X$ by finitely many $C^{\infty}$ local charts $\{ \psi : [-1,2]^2 \to X \}_{\psi \in \cal{B}}$ indexed by $\cal{B}$. For any three distinct points $x,y,z \in \R^2$, let $\angle(x,y,z)$ denote $\angle(x-y, z-y)$. For any $\beta > 0$ and any $v \in \R^2 \setminus \{0\}$, let $C(v, \beta) :=\{ u \mid \angle(u, v) < \beta \} \bigcup \{0\}$. We choose $\{ \psi : [-1,2]^2 \to X \}_{\psi \in \cal{B}}$ and a constant $c_0 > 0$, depending only on $X$, such that for any $x \in X$, any $\psi \in \cal{B}$ with $x \in \psi([0,1]^2)$ and any $v_1, v_2 \in T_xX \setminus \{0\}$, setting $\hat{x} := \psi^{-1}(x)$, $\hat{v}_1 := D\psi^{-1}(x,v_1)$, $\hat{v}_2 := D\psi^{-1}(x,v_2)$, we have: \begin{enumerate} \item $2^{-1}\angle(v_1, v_2) \leq \angle( \hat{v}_1, \hat{v}_2) \leq 2\angle(v_1,v_2)$, \item If $\norm{v_1}, \norm{v_2} < 2c_0^{-1}$, then $\psi^{-1}\exp_x(v_i)$ is defined and \begin{eqnarray*} 2^{-1}\angle(v_1,v_2) \leq \angle( \psi^{-1}\exp_x(v_1) , \hat{x}, \psi^{-1}\exp_x(v_2) ) \leq 2\angle(v_1,v_2). \end{eqnarray*} \end{enumerate} We fix an arbitrary smooth measure $\hat{m}$ on the compact manifold \begin{eqnarray*} \widehat{X} = \{(x,v_1,v_2) \mid x \in X,\ v_1,v_2 \in T_xX,\ \norm{v_1}= \norm{v_2}=c_0^{-1}\}.
\end{eqnarray*} Let $c_1 > 0$ be a large constant to be determined later. For any $(x,v_1,v_2) \in \widehat{X}$ and any $\psi \in \cal B$ such that $x \in \psi((0,1)^2)$, set $$Q_{\psi}(x,v_1,v_2) = \{(y, u_1,u_2) \in \widehat{X} \mid |\hat{x} - \hat{y}| <\frac{r\alpha}{c_1},\ \angle(\hat{v}_1, \hat{u}_1), \angle(\hat{v}_2 , \hat{u}_2) < \frac{\alpha}{40} \}.$$ Then there exists $c_2 > 0$ depending only on $X, c_1$ such that for all $(x,v_1,v_2) \in \widehat{X}$ and any $\psi \in \cal B$ with $x \in \psi((0,1)^2)$, we have $$\hat{m}(Q_{\psi}(x,v_1,v_2)) > c_2^{-1}r^2\alpha^4.$$ By the pigeonhole principle, there exists a constant $c_3 > 0$ depending only on $X, c_2$ such that whenever $|\cal H(g,\alpha,r)| > c_3 r^{-2}\alpha^{-4}$, there exist a chart $\psi \in \cal{B}$ and points $(y_i, v^{s}_i, v^{u}_i) \in \widehat{X}$, $i=1,2$, such that \begin{enumerate} \item $y_1,y_2 \in \cal{H}(g,\alpha, r) \bigcap \psi((0,1)^2)$ are two distinct points; \item for $i=1,2$, $\angle(v^{s}_i, v^{u}_i) \leq \frac{\pi}{2}$, and $v_i^s$ (resp. $v_i^u$) lies in the stable (resp. unstable) direction of $y_i$; \item $Q_{\psi}(y_1,v_1^s,v_1^u) \bigcap Q_{\psi}(y_2,v_2^s,v_2^u) \neq \emptyset.$ \end{enumerate} This implies that $|\hat{y}_1 - \hat{y}_2| < \frac{2r\alpha}{c_1}$, $\angle(\hat{v}_1^{s}, \hat{v}_2^{s}) < \frac{\alpha}{20}$ and $\angle( \hat{v}_1^{u}, \hat{v}_2^{u}) < \frac{\alpha}{20}$. For $i=1,2$, let us denote $\alpha_i = \angle(v_i^u, v_i^s)$. By the definition of $\cal H(g,\alpha, r)$ we have $\alpha_1,\alpha_2 \geq \alpha$. Then $\angle(\hat{v}_i^u, \hat{v}_i^s) \geq 2^{-1}\alpha_i$ for $i=1,2$.
Moreover, for $r \ll 1$ we have $\psi^{-1}( \mathcal W^{u}_r(y_i) ) \subset \hat{y}_i + C(\hat{v}_i^{u}, \frac{1}{20} \alpha_i)$: indeed, there exists $\gamma_u : B_{E^u(y_i)}(r) \to E^s(y_i)$ with $\mathrm{Lip}(\gamma_u) < \frac{1}{100}$ such that $\mathcal W^{u}_r(y_i) = \exp_{y_i}(\mathrm{graph}(\gamma_u))$ and $\mathrm{graph}(\gamma_u) \subset C(v_i^u, \frac{1}{40} \alpha_i)$. Similarly, we have $\psi^{-1}( \mathcal W^{s}_r(y_i)) \subset \hat{y}_i + C(\hat{v}_i^s, \frac{1}{20} \alpha_i)$. A straightforward calculation shows that, when $c_1$ is chosen sufficiently large, the points $y_1$, $y_2$ above have a heteroclinic intersection. Thus for any $r \ll 1$ and any $C^1$ diffeomorphism $g : X \to X$ with $|\cal{H}(g, \alpha, r)| \gg r^{-2} \alpha^{-4}$, there exists a heteroclinic intersection for $g$. It is a standard fact that for a $C^1$ surface diffeomorphism, the existence of a heteroclinic intersection implies positive topological entropy. This concludes the proof.
\end{proof}
\section{A closing lemma}
\begin{defi}
For any $\eta > 0$, any integer $l > 0$, any $C^0$ map $g : X \to X$ and any subset $Y \subset X$, a point $x \in X$ is said to be $(\eta, l,g)$-recurrent for $Y$ if we have
\begin{align*}
\frac{1}{l}|\{ 0 \leq j \leq l-1 \mid g^{j}(x) \in Y \}| > \eta.
\end{align*}
For any subset $Y \subset X$, we denote by
\begin{eqnarray*}
\mathcal{R}(Y, \eta, l, g) := \{ (\eta, l, g)-\mbox{recurrent points for $Y$} \}.
\end{eqnarray*}
\end{defi}
For any $\lambda, \xi > 0$, we set
\begin{eqnarray}
\label{defG}
\cal{G}(\lambda, \xi, g):= \bigcup_{y \in \mathcal{H}(g, \lambda)} B( \mathcal W^{u}_{\lambda^{3}}(y), \xi).
\end{eqnarray}
By our definition, we clearly have $\cal{G}(\lambda, \xi, g) = \cal{G}(\lambda, \xi, g^k)$ for any $ k \geq 1$, since $\mathcal{H}(g, \lambda) = \mathcal{H}(g^k, \lambda)$ for any $k \geq 1$.
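The visit frequency in this definition is easy to experiment with numerically. The sketch below is only an illustration: it uses an irrational rotation of the circle as a stand-in for $g$ and the arc $Y = [0,\frac12)$, neither of which comes from the text.

```python
# Toy illustration of (eta, l, g)-recurrence: an irrational circle rotation
# stands in for g, and Y = [0, 1/2).  Both choices are hypothetical, made
# only to show how the visit frequency in the definition behaves.

def is_recurrent(x, in_Y, eta, l, g):
    """True iff x is (eta, l, g)-recurrent for Y, i.e. the orbit
    x, g(x), ..., g^{l-1}(x) visits Y with frequency > eta."""
    hits = 0
    for _ in range(l):
        if in_Y(x):
            hits += 1
        x = g(x)
    return hits / l > eta

g = lambda x: (x + 0.6180339887) % 1.0   # rotation by the golden mean
in_Y = lambda x: x < 0.5                 # Y = [0, 1/2)

# The rotation equidistributes, so the visit frequency is close to 1/2:
print(is_recurrent(0.0, in_Y, 0.4, 1000, g))   # True
print(is_recurrent(0.0, in_Y, 0.6, 1000, g))   # False
```

For a fixed threshold $\eta$, the set $\mathcal{R}(Y,\eta,l,g)$ is exactly the set of starting points for which `is_recurrent` returns `True`.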
Theorem 4 in \cite{AFLXZ} can be strengthened to prove the following proposition.
\begin{prop}
\label{lem hyp pt}
There exist $C = C(X) > 1$ and an absolute constant $\theta_0 \in (\frac{1}{2},1)$ such that the following is true. For each $\Delta \geq 1$, we set
\begin{eqnarray}
\label{def eta L}
\eta = \eta( \Delta) := C^{-1}\Delta^{-2}\in (0,1).
\end{eqnarray}
Let $g : X \to X$ be a $C^2$ diffeomorphism preserving $m$. If for $A_1 \geq C$, $D_1 \geq A_1$, an integer $q \geq D_1^{C \Delta }$ and $x \in X$, we have the following:
\begin{enumerate}
\item $\norm{Dg} \leq A_1^{\Delta}$,
\item $\norm{D^2g} \leq D_1$,
\item $x \notin \cal R(\{y \mid \norm{Dg(y)}>A_1^{ \theta_{0}^{-1}} \}, \eta, q, g)$,
\item $\norm{Dg^{q}(x)} > A_1^{q}$,
\end{enumerate}
then
\begin{eqnarray*}
x \in \cal F(A_1,D_1, \Delta, q , g) := \bigcup_{1 \leq j \leq q} g^{-j }( \cal{G}( D_1^{-C \Delta}, A_1^{-\frac{ q }{2D_1^{C \Delta}}}, g) ).
\end{eqnarray*}
\end{prop}
The proof of Proposition \ref{lem hyp pt} follows closely that of Theorem 4 in \cite{AFLXZ}. In our case we need more precise information on the regularity of the local invariant manifolds, as well as on the location of the hyperbolic point. We defer its proof, which relies on many estimates from \cite{AFLXZ}, to Appendix A.
\section{Estimates along a tower exponential sequence}\label{Estimates along a tower exponential sequence}
Without loss of generality, we will always assume that $D,A$ in Theorems \ref{mainA}, \ref{mainB} satisfy
\begin{eqnarray}
\label{cond 1}
D > A \gg 1.
\end{eqnarray}
Then we can assume that for any $C^2$ map $g : X \to X$ such that $\norm{Dg}, \norm{D^2g} \leq D$, we have
\begin{eqnarray*}
\norm{D^2g^k} < D^{k}, \quad \forall k \geq 1.
\end{eqnarray*}
Let $C, \theta_0$ be defined in Proposition \ref{lem hyp pt}.
For $D,A,h$ given in Theorem \ref{mainA} or \ref{mainB}, set $C'$ to be a large positive constant depending only on $X$ to be determined later. We set
\begin{eqnarray}\label{def K Delta}
\quad \Delta = \frac{16 \log A}{h}, \quad K = \lceil \frac{\log(\frac{\Delta}{4})}{-\log \theta_0} \rceil \geq 2, \quad \eta = \eta(\Delta) \mbox{ ( see \eqref{def eta L} )}.
\end{eqnarray}
Define
\begin{eqnarray}
\label{HHH}
H = H (X, A, h) := C' \Delta.
\end{eqnarray}
Given an integer $n \geq 1$ and $\varepsilon \in ( 0, 1)$, we inductively define the following.
\begin{eqnarray}
q_0 &=& \lceil \varepsilon^{-1} e^{C'(\log \Delta)^{2} } \label{initial q} \rceil , \\
l_k&=&\left\{ \begin{array}{cl} \lceil D^{q_kH} \rceil \ , &0 \leq k \leq K-1 \\[1mm] \lceil \frac{n }{ q_{K}} \rceil , & k = K \\[1mm] \end{array} \right. , \quad q_{k+1} = q_{k}l_k \label{def l q}.
\end{eqnarray}
For $0 \leq k \leq K$, we set
\begin{eqnarray}
\lambda_k = D^{-C \Delta q_k}, \quad \xi_k = A^{-\frac{q_{k+1} \theta_0^{k+1}}{2D^{C \Delta q_k}}},
\end{eqnarray}
and set
\begin{eqnarray*}
Q_0 = \varepsilon^{-1}e^{C'(\log \Delta)^2 }, \quad Q_1 = e^{20C' h^{-1} \log D \log A} .
\end{eqnarray*}
We have the following simple lemma.
\begin{lemma}
\label{lemmacollectionsofproperties}
\begin{enumerate}
\item $e^{\frac{h}{16}} < A^{ \theta_0^{K+1}} \leq e^{\frac{h}{4}}$,
\item For any $C' \gg 1$ and all $0 \leq k \leq K-1$, we have $D^{q_k H} \leq l_{k} \leq Tower(Q_0, Q_1, k+2)$. If $n > Tower( Q_0, Q_1, K+3)$, then $l_{K} \geq D^{q_K H}$,
\item For any $C' \gg 1$, setting $\delta_0 = D^{-Tower( Q_0, Q_1,K+1)}$, we have
\begin{eqnarray*}
&& \xi_i \leq \xi_0, \quad \delta_0 < \min( \lambda^3_K, \xi_0), \\
&& C' \lambda_i^{-11} \max( \delta_0 , \xi_i) < \varepsilon, \quad \forall 0 \leq i \leq K.
\end{eqnarray*}
\end{enumerate}
\end{lemma}
We define for $0 \leq k \leq K$,
\begin{eqnarray}
\cal{G}_k &:=& \cal{G}(\lambda_k, \xi_k, f), \label{def G k} \\
\label{def F k}
\cal{F}_k &:=& \cal{F}(A^{q_k \theta_0^{k+1}}, D^{q_k}, \Delta , l_k, f^{q_k}).
\end{eqnarray}
The following is a corollary of Proposition \ref{lem hyp pt}.
\begin{cor}\label{corofproplemhyppt}
If $n > Tower(Q_0, Q_1, K+3)$, then for any $0 \leq k \leq K$ we have
\begin{eqnarray*}
x \notin \cal R( \{ y \mid \norm{Df^{q_{k}}(y)} > A^{q_k \theta_0^{k}} \}, \eta(\Delta), l_k, f^{q_k}) \bigcup \cal F_k \implies \norm{Df^{q_{k+1}}(x)} \leq A^{q_{k+1} \theta_0^{k+1}}.
\end{eqnarray*}
\end{cor}
\begin{proof}
By Lemma \ref{lemmacollectionsofproperties}(2), if $n > Tower(Q_0, Q_1, K+3)$ then for any $0 \leq k \leq K$, we have $l_k \geq D^{q_k H}$. By our choice of $A, D$, we have
\begin{eqnarray*}
\norm{Df^{q_k}} \leq A^{q_k}, \quad \norm{D^2f^{q_{k}}} < D^{q_k}, \quad \forall 0 \leq k \leq K.
\end{eqnarray*}
We take any $0 \leq k \leq K$ and an arbitrary point $x \in X$ such that $\norm{Df^{q_{k+1}}(x)} > A^{q_{k+1} \theta_0^{k+1}}$. It suffices to show that $x \in \cal R( \{ y \mid \norm{Df^{q_{k}}(y)} > A^{q_k \theta_0^{k}} \}, \eta(\Delta), l_k, f^{q_k}) \bigcup \cal F_k $. By Lemma \ref{lemmacollectionsofproperties}(1) we have $\norm{Df^{q_{k}}(x)} \leq A^{q_{k}} \leq (A^{q_k \theta_0^{k+1}})^{\frac{16 \log A}{h}}$. If $x \in \cal R( \{ y \mid \norm{Df^{q_{k}}(y)} > A^{q_k \theta_0^{k}} \}, \eta(\Delta), l_k, f^{q_k})$, we are done. Otherwise, we can verify conditions (1)-(4) in Proposition \ref{lem hyp pt} for $(f^{q_k}, A^{q_k \theta_0^{k+1}}, D^{q_k}, \frac{16 \log A}{h}, l_{k})$ in place of $(g, A_1, D_1, \Delta, q)$. We can then apply Proposition \ref{lem hyp pt} to the map $g = f^{q_k}$ to show that $x \in \cal F_k$.
This completes the proof.
\end{proof}
The following is a straightforward consequence of Proposition \ref{lem alt}.
\begin{cor}
\label{cor big G implies positive entropy}
For all $C' \gg 1$ the following is true. If at least one of the following holds: (1) there exists $0 \leq i \leq K$ such that $|\cal G_i | \geq \frac{\eta^{K-i}\varepsilon}{(K+1)l_i}$; (2) there exists $0 \leq i \leq K-1$ such that $m( B ( \cal G_i, D^{-Tower(Q_0, Q_1,K+3)} )) > \varepsilon $; then $f$ has a heteroclinic intersection, in which case $f$ has positive topological entropy.
\end{cor}
We include the proof of Corollary \ref{cor big G implies positive entropy} in Appendix \ref{AppB}.
\begin{rema}
\label{remarc'}
Given $A,D,h$ as in Theorem \ref{mainA} or \ref{mainB}, we will choose $C'$ to be sufficiently large so that the conclusions of both Lemma \ref{lemmacollectionsofproperties} and Corollary \ref{cor big G implies positive entropy} hold.
\end{rema}
\section{An iterative decomposition}
Let us say a few words about the general strategy behind the proofs of Theorem \ref{mainA} and Theorem \ref{mainB}. We will inductively define a sequence of decompositions of the surface $X$, denoted by $X = M_{i} \sqcup E_i$. To start the induction, we define $M_0 = X$ and $E_0 = \emptyset$. Assume that for $k \geq 0$, we have defined $M_k, E_k$ satisfying the following condition:
\centerline{\textit{For each $x \in M_k$, we have $\norm{Df^{q_k}(x)} \leq A^{q_k \theta_0^{k}}$.}}
Then $E_{k+1}$ is defined as the set of points that, up to some finite time scale, either run into $E_{k}$ with frequency $\geq \eta$, or are shadowed by hyperbolic orbits (of course the first case does not happen if $E_k$ is empty). We will use Proposition \ref{lem hyp pt} to show that the complement of $E_{k+1}$, defined as $M_{k+1}$, again satisfies the induction hypothesis.
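The recursion \eqref{def l q} makes the time scales $q_k$ grow tower-exponentially, which is what forces the tower-exponential thresholds in the statements above. A small numerical sketch makes this growth visible; the values $D=2$, $H=1$, $q_0=2$ below are hypothetical toy choices, far smaller than the actual constants in the text.

```python
# Tower-exponential growth of the time scales q_k from the recursion
# l_k = ceil(D^(q_k * H)),  q_{k+1} = q_k * l_k.
# D = 2, H = 1, q0 = 2 are hypothetical toy values, chosen only so the
# numbers stay printable; the actual constants in the text are far larger.

def time_scales(D, H, q0, K):
    qs = [q0]
    for _ in range(K):
        l = D ** (qs[-1] * H)      # l_k = D^(q_k * H), an exact integer here
        qs.append(qs[-1] * l)      # q_{k+1} = q_k * l_k
    return qs

qs = time_scales(2, 1, 2, 3)
# Binary lengths of q_0, ..., q_3 exhibit the tower growth:
print([q.bit_length() for q in qs])   # [2, 4, 12, 2060]
```

Already at the third step the time scale has thousands of binary digits, matching the role of the $Tower(Q_0,Q_1,\cdot)$ thresholds.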
We then argue that after roughly $K = O(\log (\frac{\log A}{h}))$ steps, $E_{K+1}$ has to be large. This will show that at some previous time scale, there are enough different hyperbolic points to create a heteroclinic intersection. The formal construction is the following. For all $0 \leq k \leq K+1$, we define $M_{k}, E_{k}$ through the following inductive formula. Let
\begin{eqnarray}
E_0 = \emptyset, \quad M_0 = X \label{def init}
\end{eqnarray}
and for all $0 \leq k \leq K$, define
\begin{eqnarray}
\label{defofek+1}
E_{k+1} &=& \mathcal{R}(E_{k}, \eta, l_k, f^{q_k}) \label{def E k+1} \bigcup \cal{F}_k,\\
M_{k+1} &=& X \setminus E_{k+1}. \label{def M k+1}
\end{eqnarray}
\begin{lemma}
\label{lem small der}
If $n > Tower(Q_0, Q_1, K+3)$, then for any $0 \leq k \leq K+1$ we have
\begin{align*}
x \in M_{k}&\implies \norm{Df^{q_{k}}(x)} \leq A^{q_{k} \theta_0^{k}}.
\end{align*}
\end{lemma}
\begin{proof}
This is clear when $k=0$ by $\norm{Df} \leq A$ and sub-multiplicativity. Assume that the lemma is valid for some integer $k \in \{0,\cdots, K\}$; then $\{ x \mid \norm{Df^{q_{k}}(x)} > A^{q_k \theta_0^{k}}\} \subset E_{k}$ (we regard the inclusion as valid if both sides are empty). By Corollary \ref{corofproplemhyppt} and \eqref{def E k+1}, we see that any $x \in X$ such that $\norm{Df^{q_{k+1}}(x)} > A^{q_{k+1} \theta_0^{k+1}}$ is contained in $E_{k+1}$. This completes the induction and thus finishes the proof.
\end{proof}
We will give the proofs of Theorems \ref{mainA} and \ref{mainB} in the next two subsections. In the following, we let $C, \theta_0$ be defined in Proposition \ref{lem hyp pt}, let $A,D,h > 0$ be given by Theorem \ref{mainA} or \ref{mainB}, and let $C'$ be sufficiently large depending only on $X$, satisfying Remark \ref{remarc'}.
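Since $f$ preserves area, the $(\eta,l,f^{q_k})$-recurrent points for $E_k$ occupy measure at most $\eta^{-1}|E_k|$, by Markov's inequality; this is the bound that propagates through the decomposition. A finite sanity check, with a random permutation of $\{0,\dots,N-1\}$ (which preserves counting measure) as a hedged stand-in for an area-preserving map, and arbitrary toy choices of $E$, $\eta$, $l$:

```python
# Finite sanity check of the Markov bound |R(E, eta, l, g)| <= eta^{-1} |E|
# for a measure-preserving g.  A permutation of {0, ..., N-1} preserves the
# counting measure; the permutation, E, eta, l are arbitrary toy choices.
import random

def recurrent_set(E, eta, l, perm):
    """Points whose forward orbit under perm visits E with frequency > eta."""
    out = set()
    for x in range(len(perm)):
        y, hits = x, 0
        for _ in range(l):
            if y in E:
                hits += 1
            y = perm[y]
        if hits / l > eta:
            out.add(x)
    return out

random.seed(0)
N, eta, l = 200, 0.3, 17
perm = list(range(N))
random.shuffle(perm)
E = set(range(25))                 # |E| = 25, so the bound is 25 / 0.3
R = recurrent_set(E, eta, l, perm)
assert len(R) <= len(E) / eta      # Markov: |R| <= eta^{-1} |E|
```

The bound holds because, by measure preservation, the average visit frequency over all starting points equals $|E|$, so only an $\eta^{-1}|E|$-fraction of points can exceed frequency $\eta$.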
\subsection{Proof of Theorem \ref{mainA}}
\begin{prop}
\label{propK}
Let $C_0$ in Theorem \ref{mainA} be sufficiently large. Then under the conditions of Theorem \ref{mainA}, we have
\begin{eqnarray*}
|E_{K+1}| \geq \varepsilon.
\end{eqnarray*}
\end{prop}
\begin{proof}
We first show the following lemma.
\begin{lemma}
\label{lem big ball}
Let $C_0$ in Theorem \ref{mainA} be sufficiently large, and let $n$ be given as in Theorem \ref{mainA}. Then for each $y \in M_{K+1}$, we have
\begin{eqnarray*}
B(y, e^{-2nh/5} \delta ) \subset B_f(y, n, \delta).
\end{eqnarray*}
\end{lemma}
\begin{proof}
It is clear from \eqref{def l q} that
\begin{eqnarray*}
l_{K} \in (\frac{n}{q_{K}}, \frac{21}{20} \frac{n}{q_{K}} ).
\end{eqnarray*}
Let $y \in M_{K+1}$. For each $0 \leq i \leq l_K-1$, we denote by
\begin{eqnarray*}
a_i = \log \norm{Df^{q_{K}}(f^{iq_{K}}(y))} , \quad \delta_{i} = e^{-2nh/5 + iq_{K}h/24+\sum_{j=0}^{i-1} a_j} \delta, \quad B_i = B(f^{iq_{K}}(y), \delta_{i}).
\end{eqnarray*}
By letting $C_0$ in Theorem \ref{mainA} be sufficiently large, we can ensure that $n > Tower(P_0, P_1, K_0) > Tower(Q_0, Q_1, K+3)$. Then by Lemma \ref{lemmacollectionsofproperties}(1) and Lemma \ref{lem small der}, we have for each $z \in M_{K}$, $ \log \norm{Df^{q_{K}}(z)} \leq q_{K} \theta_0^{K}\log A \leq \frac{hq_{K}}{4}$. Then by $y \in M_{K+1}$, \eqref{defofek+1} and Lemma \ref{lem small der}, we have $ y \notin \mathcal{R}(\{z \mid \log \norm{Df^{q_K}(z)} > \frac{hq_K}{4}\}, \eta, l_K, f^{q_K})$, thus
\begin{eqnarray*}
|\{ 0 \leq i \leq l_K-1 \mid a_i > \frac{hq_{K}}{4} \}| \leq \eta l_K.
\end{eqnarray*}
Since $0 \leq a_i \leq q_{K}\log A$ for any $0 \leq i \leq l_K-1$, we have
\begin{eqnarray*}
\sum_{j=0}^{i-1} a_j \leq \sum_{j=0}^{l_K-1} a_j \leq \eta l_K q_K \log A + \frac{l_Kq_{K}h}{4} \leq \frac{7l_Kq_{K}h}{24}, \quad \forall 0 \leq i \leq l_K-1.
\end{eqnarray*}
The last inequality follows from $\eta \leq \frac{h}{24 \log A}$, which is a consequence of \eqref{def K Delta}, \eqref{def eta L} and $h \in (0, \log A]$. Then for any $0 \leq i \leq l_K-1$, we have
\begin{eqnarray}
\label{term 100}
\delta_i \leq e^{-2nh/5 + l_Kq_{K}h/3} \delta \leq e^{-\frac{1}{20}nh} \delta.
\end{eqnarray}
We claim that for any integer $0 \leq i \leq l_K-1$,
\begin{eqnarray}
\label{lab pass}
f^{iq_{K}}(B_0) \subset B_i.
\end{eqnarray}
We first show that the above claim concludes the proof of our lemma. Indeed, for any $0 \leq l \leq n$, there exist $0 \leq i \leq l_K-1$, $0 \leq j \leq q_{K}-1$ such that $l = iq_{K} + j$. Then we have
\begin{eqnarray*}
f^{l}(B_0)= f^{j}(f^{iq_{K}}(B_0)) \subset f^{j}(B_{i}) \subset B(f^{l}(y), \delta).
\end{eqnarray*}
The last inclusion follows from $A^{q_{K}} \delta_i \leq A^{q_{K}}e^{-nh/20} \delta \leq \delta$, by $\norm{Df^j} \leq A^{q_{K}}$, \eqref{term 100} and $\frac{n}{q_{K}} \geq \frac{20\log D}{h}$. Now we obviously have \eqref{lab pass} for $i=0$. Assume that we have \eqref{lab pass} for some $0 \leq i \leq l_K-1$; we will show that \eqref{lab pass} holds for $i+1$. It suffices to show that $f^{q_{K}}(B_i) \subset B_{i+1}$.
Using the $C^{2}$ bound $\norm{D^2f^{q_{K}}} \leq D^{q_{K}}$ and $\frac{n}{q_{K}} \geq \frac{20\log D}{h}$, we see that for any $z \in B_i$,
\begin{eqnarray*}
\norm{Df^{q_{K}}(z)} &\leq& e^{a_i} + \delta_i D^{q_{K}}\\
&\leq& e^{a_i} + D^{q_{K}} e^{-nh/20} \delta \leq e^{a_i + hq_{K}/24}.
\end{eqnarray*}
Since $ \delta_{i+1} = e^{a_i + hq_{K}/24} \delta_{i}$, we obtain $f^{q_{K}}(B_i) \subset B_{i+1}$. This proves \eqref{lab pass} and concludes the proof of Lemma \ref{lem big ball}.
\end{proof}
To proceed with the proof of Proposition \ref{propK}, observe that by Lemma \ref{lem big ball}, $M_{K+1} = X\setminus E_{K+1}$ can be covered by $ce^{4nh/5} \delta^{-2}$ many Bowen $(n, \delta)$-balls. By \eqref{defQ}, $n > P_0$ and, by letting $C_0$ be large, we have $c \delta^{-2} < e^{P_0h/5} < e^{nh/5}$. This implies that $|E_{K+1}| \geq \varepsilon$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{mainA}]
Since $f$ is area preserving, by Markov's inequality we have
\begin{eqnarray*}
|\mathcal{R}(E_{k}, \eta, l_k, f^{q_k})| \leq \eta^{-1}|E_{k}|.
\end{eqnarray*}
Again by the fact that $f$ is area preserving, we obtain the following inequality from \eqref{def E k+1} and \eqref{def F k}:
\begin{eqnarray}
\label{ineq E}
|E_{k+1}| \leq\eta^{-1}|E_{k}| + |\cal{F}_k| \leq \eta^{-1}|E_{k}| + l_k | \cal{G}_k|.
\end{eqnarray}
By \eqref{ineq E} and \eqref{def init}, we have
\begin{eqnarray*}
|E_{K+1}| \leq \sum_{i=0}^{K} \eta^{i-K} l_i |\cal G_i|.
\end{eqnarray*}
Thus $|E_{K+1}| \geq \varepsilon$ implies that $|\cal G_i| \geq \eta^{K-i} \frac{\varepsilon}{(K+1)l_{i}}$ for some $0 \leq i \leq K$, which by Corollary \ref{cor big G implies positive entropy}(1) implies that $f$ has positive entropy.
\end{proof}
\subsection{Proof of Theorem \ref{mainB}}
The proof of Theorem \ref{mainB} is parallel to that of Theorem \ref{mainA}. The following proposition is an analogue of Proposition \ref{propK}.
\begin{prop}
\label{lem tilde e big}
Let $C_0$ in Theorem \ref{mainB} be sufficiently large, and let $n$ be as in Theorem \ref{mainB}. Then under the condition of Theorem \ref{mainB}, we have
\begin{eqnarray*}
\mu_{x,n}(E_K) \geq \frac{h}{2 \log A }.
\end{eqnarray*}
\end{prop}
\begin{proof}
By letting $C_0$ in Theorem \ref{mainB} be sufficiently large, we can ensure that $n > Tower(P_0, P_1, K_0) > Tower(Q_0, Q_1, K+3)$. Then by Lemma \ref{lem small der}, for each $y \in M_{K}$, we have $\norm{Df^{q_K}(y)} \leq A^{q_K \theta_0^{K}} \leq e^{\frac{q_Kh}{2}}$. We take a subset $\{p_1, \cdots, p_{l}\} \subset \{0,\cdots, n-q_K\}$ so that $I_j :=\{p_j,\cdots, p_j + {q_K}-1\}$, $1 \leq j \leq l$, are disjoint subsets of $\{0,\cdots, n-1\}$ and $f^{p_j}(x) \in M_{K}$ for all $j$. Moreover, we assume that for any $k \in \{0,\cdots, n-1\} \setminus \bigcup_{j=1}^l I_j$, we have $f^{k}(x) \in E_K$. The construction of $\{p_i\}_{i=1}^{l}$ is straightforward. Then by sub-multiplicativity, we have
\begin{eqnarray*}
\log\norm{Df^{n}(x)} &\leq& \sum_{i=1}^{l} \log \norm{Df^{q_K}(f^{p_i}(x))} + (n - l q_K) \log A \\
&\leq& \frac{1}{2} l h q_K + (n-lq_K) \log A.
\end{eqnarray*}
By the condition in Theorem \ref{mainB}, we have $\log \norm{Df^{n}(x)} > n h$. Thus $n(\log A - h) > l q_K( \log A - \frac{h}{2})$. Then we see that $\mu_{x,n}(E_K) \geq \frac{n-lq_K}{n} \geq \frac{h}{2 \log A} $.
\end{proof}
\begin{proof}[Proof of Theorem \ref{mainB}]
For any measurable set $B \subset X$, any integers $n,l \geq 1$ and any $x \in X$, we have
\begin{eqnarray*}
\mu_{x,n}(f^{-l}(B)) \leq \frac{l}{n} + \mu_{x,n}(B).
\end{eqnarray*}
Then for any $k=0,\cdots, K-1$, by Markov's inequality we have
\begin{eqnarray*}
\mu_{x,n}(\cal R(E_k, \eta, l_k, f^{q_k})) &\leq&(\eta l_k) ^{-1} \int \sum_{i=0}^{l_k-1} 1_{f^{-iq_{k}}(E_k)} d\mu_{x, n}\\
&\leq& (\eta l_k) ^{-1} \sum_{i=0}^{l_k-1} \mu_{x,n}(f^{-iq_{k}}(E_k)) \\
&\leq& (\eta l_k) ^{-1} \sum_{i=0}^{l_k-1} (\mu_{x,n}(E_k) + \frac{iq_{k}}{n}) \leq \eta^{-1}\mu_{x,n}(E_k) + \frac{q_{k+1}}{2n \eta}.
\end{eqnarray*}
Similarly, we have
\begin{eqnarray*}
\mu_{x,n}(\cal {F}_k) &\leq& \sum_{i=0}^{l_k-1} \mu_{x,n}(f^{-iq_k}(\cal G_k)) \\
&\leq& l_k \mu_{x,n}(\cal G_k) + \frac{l_kq_{k+1}}{n}.
\end{eqnarray*}
Then we have an inequality analogous to \eqref{ineq E}, as follows:
\begin{eqnarray*}
\label{ineq E2}
\mu_{x,n}(E_{k+1}) &\leq& \mu_{x,n}(\cal R(E_k, \eta, l_k, f^{q_k}))+ \mu_{x,n}(\cal {F}_k) \\
&\leq& \eta^{-1} \mu_{x,n}(E_k) + \frac{q_{k+1}}{2n\eta} + l_k \mu_{x,n}(\cal G_k) + \frac{l_kq_{k+1}}{n}. \nonumber
\end{eqnarray*}
By the simple observation that $l_k \geq l_0 \geq \eta^{-1}$ for all $0 \leq k \leq K$, we have
\begin{eqnarray*}
\mu_{x,n}(E_K) \leq \sum_{i=0}^{K-1} \eta^{-K+i+1}( l_i \mu_{x,n}(\cal G_i) + \frac{2l_{i}q_{i+1}}{n}).
\end{eqnarray*}
By \eqref{def l q} and Proposition \ref{lem tilde e big}, we see that there exists $0 \leq i \leq K-1$ such that
\begin{eqnarray*}
\mu_{x,n}(\cal G_i) &\geq& l_i^{-1}(\frac{\eta^{K-i-1}}{K}\frac{h}{2\log A } - \frac{2q_{K}l_{K-1}}{n})\geq l_i^{-2}.
\end{eqnarray*}
The last inequality follows from
\begin{eqnarray*}
\frac{2q_K l_{K-1}}{n} < \frac{2l_{K-1}}{l_K} < q_0^{-1} \leq e^{-C' (\log \Delta)^2} < \frac{\eta^{K}h}{4K \log A}, \\
\frac{\eta^{K}h}{4K \log A} > l_0^{-1} \geq l_i^{-1}, \quad \forall 0 \leq i \leq K-1,
\end{eqnarray*}
by letting $C'$ be larger than some absolute constant. In particular, by Lemma \ref{lemmacollectionsofproperties}(2), \eqref{def K Delta}, \eqref{initial q}, \eqref{def l q}, and by letting $C_0$ in Theorem \ref{mainB} be sufficiently large, we have
\begin{eqnarray*}
&\mu_{x,n}(\cal G_i) > Tower(Q_0, Q_1, K+1)^{-2} > Tower(Q_0, Q_1, K+2)^{-1} , \\
&K_0 \geq K +4, \quad P_i > Q_i, \ i=0,1.
\end{eqnarray*}
By the condition of Theorem \ref{mainB} that $x$ is $(n, Tower(P_0,P_1,K_0-1)^{-1}, D^{-Tower(P_0,P_1,K_0-1)},\varepsilon)$-sparse, we see that
\begin{eqnarray*}
m(B(\cal G_i, D^{-Tower(Q_0, Q_1,K+3)})) \geq m(B(\cal G_i, D^{-Tower(P_0, P_1,K_0-1)})) > \varepsilon.
\end{eqnarray*}
This concludes the proof by Corollary \ref{cor big G implies positive entropy}(2).
\end{proof}
\appendix
\section{}
In this section we prove the main technical result, Proposition \ref{lem hyp pt}. We start with a slight generalization of the Pliss lemma \cite{pliss}.
\begin{lemma}[a variant of Pliss]
\label{lem improved pliss}
For any real numbers $N \geq 1$, $1 > \theta_0 > \theta_1 > \theta_2 > 0$, $\eta \in (0,\frac{1}{2} \frac{1- \theta_0}{N- \theta_0}\frac{ \theta_1 - \theta_2}{N - \theta_2})$, any integer $n \geq 1$ and any real number $l > 0$, the following is true. Let $a_1,\ldots,a_n$ be real numbers such that
\begin{enumerate}
\item $a_i \leq Nl$ for all $1\leq i \leq n$,
\item $\sum_{i=1}^{n}a_i >n \theta_1l$,
\item $|\{ 1 \leq i \leq n \mid a_i > \theta_0 l\}|< \eta n$.
\end{enumerate}
Then there exist at least $\frac{ \theta_1- \theta_2}{1- \theta_2}n$ indices $i$ such that $\frac{1}{k}\sum_{j=i}^{i+k-1} a_j > \theta_2 l$ for all $1 \leq k\leq n+1-i$.
\end{lemma}
\begin{proof}
Denote by
$$A := \{i \mid \mbox{ there exists $1 \leq k \leq n+1-i$ such that $\frac{1}{k}\sum_{j=i}^{i+k-1} a_j \leq \theta_2l$}\}. $$
Without loss of generality, we assume that $A \neq \emptyset$, for otherwise the conclusion of Lemma \ref{lem improved pliss} is already true. Then, by the usual covering argument in the proof of the Pliss lemma, $A$ is contained in a non-empty set $I \subset \{1,\cdots, n\}$ satisfying $\frac{1}{|I|} \sum_{i \in I} a_i \leq \theta_2 l$. Then by (1),(2), we obtain that
\begin{eqnarray*}
l N |I^{c}| + l \theta_2|I| > l n \theta_1.
\end{eqnarray*}
By $l > 0$, the above inequality implies that $|I^c| \geq \frac{ \theta_1 - \theta_2}{N- \theta_2}n$. We claim that
\begin{eqnarray}
\label{lab improved pliss}
\frac{1}{|I^{c}|}\sum_{i \in I^{c}} a_i \leq l.
\end{eqnarray}
Indeed, if \eqref{lab improved pliss} were false, by (1) we would have at least $\frac{1- \theta_0}{N- \theta_0}|I^{c}| \geq \frac{1- \theta_0}{N- \theta_0}\frac{ \theta_1 - \theta_2}{N- \theta_2}n > \eta n$ indices $i \in I^{c}$ such that $a_i > \theta_0 l$, but this would contradict (3). Now we use (2) again, with the improved estimate \eqref{lab improved pliss} in place of (1), and obtain
\begin{eqnarray*}
l |I^{c}| + \theta_2 l |I| \geq \sum_{i \in I^{c}} a_i + \sum_{i \in I} a_i > n \theta_1 l.
\end{eqnarray*}
This implies that $|I^c| \geq \frac{ \theta_1 - \theta_2}{ 1 - \theta_2}n$. Since $A \subset I$, every index in $I^{c}$ satisfies the required property, which concludes the proof.
\end{proof}
Let $x$ be given by the condition of Proposition \ref{lem hyp pt}.
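Before turning to the chart construction, we note that the conclusion of the Pliss-type lemma above is easy to check by brute force on a toy sequence. In the sketch below, the sequence and the parameters $N=1$, $\theta_0 = 0.9$, $\theta_1 = 0.8$, $\theta_2 = 0.5$, $l = 1$ are hypothetical choices made to satisfy hypotheses (1)-(3); they are not taken from the text.

```python
# Brute-force check of the Pliss-type conclusion: for a sequence satisfying
# the hypotheses, at least (theta1 - theta2) / (1 - theta2) * n indices i
# have all forward averages (1/k) * sum_{j=i}^{i+k-1} a_j > theta2 * l.
# The sample sequence and parameters are toy choices, not from the text.

def pliss_indices(a, theta2, l):
    """Indices i (0-based) whose forward averages all exceed theta2 * l."""
    n = len(a)
    good = []
    for i in range(n):
        s, ok = 0.0, True
        for k in range(1, n - i + 1):
            s += a[i + k - 1]
            if s / k <= theta2 * l:
                ok = False
                break
        if ok:
            good.append(i)
    return good

theta1, theta2, l = 0.8, 0.5, 1.0
a = [0.9] * 80 + [0.5] * 20        # mean 0.82 > theta1; all a_i <= theta0 = 0.9
good = pliss_indices(a, theta2, l)
assert len(good) >= (theta1 - theta2) / (1 - theta2) * len(a)   # at least 60
print(len(good))                   # 80: exactly the indices 0..79 qualify
```

Here the guaranteed fraction is $\frac{0.8-0.5}{1-0.5} = 0.6$ of the $100$ indices, and the brute-force count indeed exceeds it.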
We will define a collection of charts along a sub-orbit of $x$, following the definitions and estimates in \cite{AFLXZ}. Let $v_{s}$ be a unit vector in the most contracting direction of $Dg^{q}(x)$ in $T_{x}X$, and let $v_{u}$ be a unit vector orthogonal to $v_{s}$. For each $0 \leq i \leq q$, we define
\begin{eqnarray*}
v^s_i&:=&\frac{Dg^{i}(v_s)}{\norm{Dg^{i}(v_s)}}, \quad v^u_i:=\frac{Dg^{i}(v_u)}{\norm{Dg^{i}(v_u)}}, \\
\lambda^s_{i}&:=&\log\frac{\norm{Dg(v^s_i)}}{\norm{v^s_i}}, \quad \lambda^u_{i} :=\log\frac{\norm{Dg(v^u_i)}}{\norm{v^u_i}}, \\
\overline{\lambda}^{e}_i &:=& \min\{ \lambda^u_i, -\lambda^s_i \}.
\end{eqnarray*}
Given $r > 0$, $\tau >0$, $\kappa>0$, we define an {\it $(r, \tau,\kappa)$-Box}, denoted by $U(r,\tau,\kappa)$, to be
\begin{eqnarray*}
U(r,\tau,\kappa) = \{(v,w) \in \R^2 \mid \norm{v} \leq r, \norm{w} \leq \tau + \kappa \norm{v} \}.
\end{eqnarray*}
For $\kappa>0$, we denote by
\begin{eqnarray*}
C(\kappa)&=&\{(v,w)\in \mathbb{R}^2 \mid \norm{w} < \kappa \norm{v} \},\\
\tilde{C}(\kappa)&=&\{(v,w)\in \mathbb{R}^2 \mid \norm{v} < \kappa \norm{w} \}.
\end{eqnarray*}
We will refer to these sets as cones. We now recall some definitions in \cite{AFLXZ}.

\noindent $\bullet$ A curve contained in $\mathbb{R}^2=\mathbb{R}_x\oplus \mathbb{R}_y$ is called a {\it $\kappa$-horizontal graph} if it is the graph of a Lipschitz function from a closed interval $I\subset\mathbb{R}_x$ to $\mathbb{R}_y$ with Lipschitz constant less than $\kappa$. Similarly, we define {\it $\kappa$-vertical graphs}.

\noindent $\bullet$ The boundary of an $(r,\tau,\kappa)$-Box $U$ is the union of two $0$-vertical graphs and two $\kappa$-horizontal graphs. We call these graphs, respectively, the {\it left (resp. right) vertical boundary of $U$} and the {\it upper (resp. lower) horizontal boundary of $U$}.
We call the union of the left and right vertical boundaries of $U$ the {\it vertical boundary of $U$}. Similarly, we call the union of the upper and lower horizontal boundaries of $U$ the {\it horizontal boundary of $U$}.

\noindent $\bullet$ Horizontal and vertical graphs which connect the boundaries of $U$ will be called full horizontal and full vertical graphs, as in the following definition. Given $r,\tau,\kappa, \eta >0$, for each $(r,\tau, \kappa)$-Box $U$, an {\it $\eta$-full horizontal graph of $U$} is an $\eta$-horizontal graph $L$ such that $L \subset U$ and the endpoints of $L$ are contained in the vertical boundary of $U$. Similarly, we define the {\it $\eta$-full vertical graphs of $U$}.

\noindent $\bullet$ We define an {\it $\eta$-horizontal strip of $U$} to be a subset of $U$ bounded by the vertical boundary of $U$ and two disjoint $\eta$-full horizontal graphs of $U$ which are both disjoint from the horizontal boundary of $U$. Similarly, we define {\it $\eta$-vertical strips of $U$}. As for Boxes, we define the horizontal and vertical boundaries of a strip.

\noindent $\bullet$ Given a Box $U$, a vertical strip $\cal{R}'$ of $U$ and a horizontal strip $\cal{R}$ of $U$, a homeomorphism that maps $\mathcal{R}'$ to $\mathcal{R}$ is said to be {\it regular} if it maps the horizontal (resp. vertical) boundary of $\mathcal{R}'$ homeomorphically to the horizontal (resp. vertical) boundary of $\mathcal{R}$.

We recall the definition of a hyperbolic map in \cite{AFLXZ}.
\begin{defi}
\label{def hyp map}
Given $r,\tau >0$, $0 <\kappa,\kappa',\kappa''<1$. Denote $U =U(r,\tau,\kappa)$, and let $\mathcal{R}_1$ be a $\kappa$-vertical strip of $U$ and $\mathcal{R}_2$ a $\kappa$-horizontal strip of $U$.
A $C^1$ diffeomorphism $G : \mathcal{R}_1 \to \R^2$ is called a hyperbolic map if it satisfies the following conditions:
\begin{eqnarray}
\label{R1R2}
\mbox{$G$ is a regular map from $\mathcal{R}_1$ to $\mathcal{R}_2$},\\
\label{preserve horizontal cone}\forall x\in \mathcal{R}_1, DG_x(C(\kappa'))\subset C(\frac{1}{2}\kappa'),\\
\label{preserve vertical cone}\forall x\in \mathcal{R}_2, DG^{-1}_x(\tilde{C}(\kappa'')) \subset \tilde{C}(\frac{1}{2}\kappa'').
\end{eqnarray}
The following is a sketch of a hyperbolic map.
\begin{figure}[ht!]
\centering
\includegraphics[width=120mm]{figure0.png}
\caption{ $\mathcal{R}_1$ is the topological rectangle $abcd$, $\mathcal{R}_2$ is the topological rectangle $a'b'c'd'$. Under a hyperbolic map $G$, $ab$ is mapped to $a'b'$. Similarly, $bc,cd,da$ are mapped respectively to $b'c',c'd',d'a'$. \label{overflow}}
\end{figure}
\end{defi}
For each $0 \leq n \leq q$, we set $x_n := g^{n}(x)$ and define $i_n : \R^2 \to T_{x_n}X$ by
\begin{eqnarray*}
i_n(a,b) = a v^{u}_n + bv^{s}_n.
\end{eqnarray*}
There exists a constant $R = R(X) > 0$ such that $\exp_{x_n}$ is a diffeomorphism restricted to $i_n(B(0, D_1^{-\Delta}R))$ and $\exp_{x_{n+1}}^{-1}$ is a diffeomorphism restricted to $g\exp_{x_n}i_n(B(0,D_1^{-\Delta}R))$. Denote by $g_n$ the $C^2$ diffeomorphism
\begin{eqnarray*}
\label{def g n}
g_n: B(0,D_1^{-\Delta}R) &\to& \mathbb{R}^2, \\
g_n(v,w)&=&i_{n+1}^{-1}\exp_{x_{n+1}}^{-1}g\exp_{x_n} i_n(v,w). \nonumber
\end{eqnarray*}
We set $M:=1000$, and
\begin{eqnarray*}
\bar{r} = D_1^{-3\Delta M}, \quad \bar{\kappa} = D_1^{-\Delta M}, \quad \delta = \frac{\log A_1}{100}.
\end{eqnarray*}
The main estimates in \cite{AFLXZ} are summarised in the following proposition.
\begin{prop}
\label{main prop in aflxz}
Under the conditions of Proposition \ref{lem hyp pt}, for some absolute constant $\theta_0 \in (0,1)$ sufficiently close to $1$ and $C > 0$ sufficiently large depending only on $X$, there exist a constant $C_1 = C_1(X)$, integers $0 \leq i_1 \leq i_2 \leq q$ and sequences of positive numbers $\{(r_n,\tau_n, \kappa_n, \tilde{\kappa}_{n}) \}_{i_1 \leq n \leq i_2}$ such that:
\begin{enumerate}
\item (Positive proportion)
\begin{eqnarray*}
i_2 - i_1 \geq D_1^{-C_1\Delta}q,
\end{eqnarray*}
\item (Tameness at the starting and ending points)
\begin{eqnarray*}
&&\cot \angle(E^u_{i_1}, E^s_{i_1}), \cot \angle(E^u_{i_2}, E^s_{i_2}) < \frac{D_1^{M\Delta }}{100}, \\
&&10^{6}\bar{r} \geq r_i \geq \tau_i, \quad \forall i_1 \leq i \leq i_2,\\
&&r_{i_1} = \tau_{i_1} = \bar{r}, \quad \kappa_{i_1} = \tilde{\kappa}_{i_1} =\bar{\kappa}, \\
&&r_{i_2} = 10^{6}\bar{r}, \quad \tau_{i_2} \leq \frac{1}{10}\bar{r}, \quad \kappa_{i_2} = \frac{1}{100}\bar{\kappa}, \quad \tilde{\kappa}_{i_2} = 100\bar{\kappa}, \\
&&\sum_{n=i_1}^{i_2-1} \lambda_n^{u}, \sum_{n=i_1}^{i_2-1} -\lambda_n^{s} \geq \frac{2}{3}(i_2-i_1)a,
\end{eqnarray*}
\item (Transversal mappings) With $r_n, \tau_n, \kappa_n$ as above, we let
\begin{eqnarray*}
U_n = U(r_n, \tau_n, \kappa_n), \quad C_n = C(\kappa_n), \quad \tilde{C}_n = \tilde{C}(\tilde{\kappa}_n).
\end{eqnarray*}
If $\Gamma$ is a $\kappa_n$-full horizontal graph of $U_n$, then $g_n(\Gamma) \bigcap U_{n+1}$ is a $\kappa_{n+1}$-full horizontal graph of $U_{n+1}$. Moreover, the image of the horizontal boundary of $U_n$ under $g_n$ is disjoint from the horizontal boundary of $U_{n+1}$, and the image of the vertical boundary of $U_n$ under $g_n$ is disjoint from the vertical boundary of $U_{n+1}$.
\item (Cone condition) Furthermore, for any $(v,w) \in U_n$, we have $(Dg_n)_{(v,w)}(C_n) \subset C_{n+1}$; for any $(v,w) \in g_n(U_n) \bigcap U_{n+1}$, we have $(Dg_n^{-1})_{(v,w)}(\tilde{C}_{n+1}) \subset \tilde{C}_{n}$. Moreover, for any $(v,w) \in U_n$ and any $(V,W) \in C_{n}$, letting $(\bar{V}, \bar{W}) = (Dg_n)_{(v,w)}(V,W)$, we have $|\bar{V}| \geq e^{\lambda_n^{u}- \delta}|V|$; for any $(v,w) \in g_n(U_n) \bigcap U_{n+1}$ and any $(V,W) \in \tilde{C}_{n+1}$, letting $(\bar{V}, \bar{W}) = (Dg_n^{-1})_{(v,w)}(V,W)$, we have $|\bar{W}| \geq e^{-\lambda_n^{s} - \delta}|W|$. \item (Hyperbolic map) Denote \begin{eqnarray*} J &=& i_{i_1}^{-1} \exp_{x_{i_1}}^{-1} \exp_{x_{i_2}} i_{i_2}, \\ G &=& i_{i_1}^{-1}\exp_{x_{i_1}}^{-1} g^{i_2-i_1}\exp_{x_{i_1}}i_{i_1} = J g_{i_2-1}\cdots g_{i_1}. \end{eqnarray*} There exist $\mathcal R_1$, a $100\bar{\kappa}$-vertical strip of $U_{i_1}$, and $\mathcal R_2$, a $100\bar{\kappa}$-horizontal strip of $U_{i_1}$, such that $G$ is a hyperbolic map from $\mathcal{R}_1$ to $\mathcal{R}_2$ with parameters $\kappa'= \bar{\kappa}$, $\kappa'' = 100 \bar{\kappa}$. Moreover, for each $0 \leq j \leq i_2-i_1$, we have $g_{i_1+j-1}\cdots g_{i_1}(\mathcal R_1) \subset U_{i_1+j}$. \end{enumerate} \end{prop} We will sketch the proof and refer to \cite{AFLXZ} for the detailed estimates. \begin{proof} Set $a = \log A_1$. Condition (4) in Proposition \ref{lem hyp pt} translates into \begin{eqnarray*} \frac{1}{q} \sum_{i=0}^{q-1} \lambda_i^{s} \leq -a, \quad \frac{1}{q} \sum_{i=0}^{q-1} \lambda_i^{u} \geq a.
\end{eqnarray*} Using condition (3) and Lemma \ref{lem improved pliss} in place of the Pliss lemma, by setting $\theta_0 \in (0,1)$ to be an absolute constant sufficiently close to $1$, and setting $C > 0$ to be sufficiently large depending only on $X$, we can show, analogously to Lemma 4.4 in \cite{AFLXZ}, that there are more than $q/2$ points in $\{g^{k}(x)| 0 \leq k \leq q-1\}$ that are \lq\lq good in the orbit of $x$\rq\rq. Here a point $g^{n}(x)$ is said to be \textit{good in the orbit of $x$} if $n \in [1, q-1]$ satisfies the following conditions: \begin{eqnarray} \frac{1}{k}\sum_{j=n}^{n+k-1}\overline{\lambda}_{j}^e&>&(1-\frac{1}{1000})\theta_0^{-1}a, \quad \forall 1\leq k\leq q-n,\label{fwd good 2}\\ \frac{1}{k}\sum_{j=n-k}^{n-1}\overline{\lambda}_{j}^e&>&(1-\frac{1}{1000})\theta_0^{-1}a, \quad \forall 1\leq k\leq n.\label{bwd good} \end{eqnarray} We can show, in analogy to Lemma 4.5, that $|\cot \angle(v^s_n, v^u_n)| \leq A_1^{3\Delta }$ for all $n$ such that $g^{n}(x)$ is good in the orbit. Then there exists an integer $0 \leq i \leq D_1^{-C_1 \Delta}q $ such that the subsequence $(x_{i + j \lceil D_1^{-C_1 \Delta}q\rceil} )_{0 \leq j \leq \lfloor \frac{q}{\lceil D_1^{-C_1 \Delta}q \rceil} \rfloor-1} $ contains at least $\frac{1}{3}D_1^{C_1 \Delta}$ many points which are good in the orbit of $x$.
By taking $C_1$ sufficiently large depending only on $X$, we can apply the pigeonhole principle to the above subsequence, as in the proof of Proposition 4.1 in \cite{AFLXZ}, and obtain $0 \leq i_1 < i_2 \leq q-1$ that satisfy the following conditions: \begin{enumerate} \item $i_2 - i_1 \geq D_1^{-\Delta C_1}q$, \item $\sum_{j=i_1}^{i_1+ k-1}\overline{\lambda}_{j}^e>(1-\frac{1}{1000})\theta_0^{-1}ak, \quad \forall 1\leq k \leq i_2-i_1,$ \item $\sum_{j=i_2-k}^{i_2-1}\overline{\lambda}_{j}^e>(1-\frac{1}{1000})\theta_0^{-1}ak, \quad \forall 1\leq k\leq i_2-i_1,$ \item The angles $\angle(v^s_{i_1}, v^u_{i_1}), \angle(v^s_{i_2},v^u_{i_2})$ satisfy \begin{eqnarray*} \log|\cot \angle(v^s_{i_1}, v^u_{i_1})|\leq 3\Delta a, \quad \log|\cot \angle(v^s_{i_2}, v^u_{i_2})|\leq 3\Delta a, \end{eqnarray*} \item Moreover, we have $d(g^{i_1}(x), g^{i_2}(x)) < D_1^{-\frac{C_1 \Delta}{200}}$, and \begin{eqnarray*} d_{T^1X}(v^s_{i_1}, v^s_{i_2})<D_1^{-\frac{C_1 \Delta}{200}}, \quad d_{T^1X}(v^u_{i_1}, v^u_{i_2})<D_1^{-\frac{C_1 \Delta}{200}}. \end{eqnarray*} \end{enumerate} We note the similarities between the above conditions and those of Definition 4.3 in \cite{AFLXZ}. However, here we have a large inverse power of $D_1$ in (5) instead of a small inverse power of $q$ as in Definition 4.3, (4) in \cite{AFLXZ}. This is sufficient for the rest of the proof, since $r_{i_1}$, $r_{i_2}$, $\angle( E^s_{i_1}, E^u_{i_1})$ and $\angle( E^s_{i_2}, E^u_{i_2})$ are bounded from below by $D^{-O(\Delta )}$. At this point, we can invoke the proof of Proposition 4.2, and obtain (2) as a consequence of Lemmas 4.6, 4.7, 4.8 in \cite{AFLXZ}; and obtain (3),(4) as a consequence of Proposition 4.5 in \cite{AFLXZ}. We obtain (5) following the proof of Proposition 4.4 in \cite{AFLXZ}. \end{proof} Now we are ready to conclude the proof of Proposition \ref{lem hyp pt}.
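Before turning to that proof, the cone conditions \eqref{preserve horizontal cone} and \eqref{preserve vertical cone} can be checked by hand on a linear toy model. In the sketch below (illustrative only), the diagonal matrix $A=\mathrm{diag}(4,1/4)$ is a hypothetical stand-in for $DG$, with $C(\kappa)=\{(V,W): |W|\le \kappa|V|\}$ the horizontal cone and $\tilde{C}(\kappa)=\{(V,W): |V|\le \kappa|W|\}$ the vertical cone.

```python
import numpy as np

def in_horizontal_cone(v, w, kappa):
    # C(kappa) = {(V, W) : |W| <= kappa |V|}
    return abs(w) <= kappa * abs(v)

def in_vertical_cone(v, w, kappa):
    # tilde C(kappa) = {(V, W) : |V| <= kappa |W|}
    return abs(v) <= kappa * abs(w)

# Hypothetical stand-in for DG: expansion by 4 horizontally, contraction by 1/4 vertically.
A = np.diag([4.0, 0.25])
A_inv = np.linalg.inv(A)
kappa = 0.5

for t in np.linspace(-kappa, kappa, 21):
    # (1, t) sweeps the horizontal cone C(kappa); its image must lie in C(kappa/2).
    v, w = A @ np.array([1.0, t])
    assert in_horizontal_cone(v, w, kappa / 2)
    # (t, 1) sweeps the vertical cone; its image under A^{-1} must lie in tilde C(kappa/2).
    v, w = A_inv @ np.array([t, 1.0])
    assert in_vertical_cone(v, w, kappa / 2)
```

In the proposition above these cone inclusions come with quantitative rates $e^{\lambda_n^u-\delta}$ and $e^{-\lambda_n^s-\delta}$; for this toy map both expansion factors equal $4$.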
\begin{proof}[Proof of Proposition \ref{lem hyp pt}] We apply Proposition \ref{main prop in aflxz} and obtain $i_1$, $i_2$, $\mathcal{R}_1$, $\mathcal{R}_2$, $G$, $U_i$, $C_i$, $\tilde{C}_i$ as in the proposition. We set $i= i_1, j= i_2$. By (5) in Proposition \ref{main prop in aflxz} and Proposition 4.3 in \cite{AFLXZ}, we obtain a hyperbolic periodic point in $\mathcal R_1 \bigcap \mathcal R_2$, denoted by $y$. We note the following lemma, whose proof follows from the standard construction of unstable/stable manifolds for uniformly hyperbolic maps using a graph transform argument; for this reason, we omit it. \begin{lemma} \label{lem standard graph transform} Let $r, \tau > 0$, $L > 1$, $0 < \kappa, \kappa', \kappa'' < 1$, $U = U(r, \tau, \kappa)$ and let $G : \mathcal{R}_1 \to \mathcal{R}_2$ be a hyperbolic map, where $\mathcal R_1$ (resp. $\mathcal R_2$) is the $\kappa$-vertical strip (resp. $\kappa$-horizontal strip) of $U$ as in Definition \ref{def hyp map}, and $\kappa', \kappa''$ satisfy inclusions \eqref{preserve horizontal cone}, \eqref{preserve vertical cone} respectively. Assume that (1) for each $x \in \mathcal R_1$ and each $(V, W) \in C(\kappa')$, setting $(\bar{V}, \bar{W}) = DG_x(V,W)$, we have $|\bar{V}| \geq L|V|$; (2) for each $x \in \mathcal R_2$ and each $(V, W) \in \tilde{C}(\kappa'')$, setting $(\bar{V}, \bar{W}) = DG_x^{-1}(V,W)$, we have $|\bar{W}| \geq L|W|$. Then there exists a hyperbolic fixed point of $G$, $y\in \mathcal R_1 \bigcap \mathcal R_2$, whose local unstable manifold in $\mathcal R_2$, denoted by $\mathcal{W}^u_{G}(y)$, is a $\kappa'$-horizontal graph, and whose local stable manifold in $\mathcal R_1$, denoted by $\mathcal{W}^s_{G}(y)$, is a $\kappa''$-vertical graph. Moreover we have $$G(\mathcal R_1) \subset B(\mathcal{W}^u_G(y), 2L^{-1} \mathrm{diam}(U)).$$ \end{lemma} We set $L = A^{\frac{j-i}{2}}$.
We now verify conditions (1),(2) of Lemma \ref{lem standard graph transform} for $L$, $G$, $U = U_{i_1}$, $\kappa = 100\bar{\kappa}$, $\kappa' = \bar{\kappa}$, $\kappa'' = 100\bar{\kappa}$. We only verify condition (2) in detail, since condition (1) can be verified in a similar fashion. By Proposition \ref{main prop in aflxz}(5), for any $i \leq n \leq j-1$, we have $g_{n+1}^{-1} \cdots g_{j-1}^{-1} J^{-1}(\mathcal R_2) = g_{n} \cdots g_{i}(\mathcal R_1) \subset U_{n+1} \bigcap g_{n}(U_{n})$. For any $i \leq n \leq j$, any $(v,w) \in \mathcal R_2$ and any $(V,W) \in \tilde{C}_{j}$ (here $\tilde{C}_j$ is given by Proposition \ref{main prop in aflxz}(3)), denote $(v_n, w_n) = g_{n}^{-1}\cdots g_{j-1}^{-1}J^{-1}(v,w)$, $(V_n, W_n) = D(J g_{j-1}\cdots g_{n})^{-1}_{(v,w)}(V,W)$. Then we have $(v_n, w_n) \in U_n$ for all $i \leq n \leq j$. By Proposition \ref{main prop in aflxz}(2),(4), we have $|W_i| \geq e^{\sum_{n=i}^{j-1}( -\lambda_n^{s} - \delta)}|W_j| \geq A^{\frac{j-i}{2}}|W| = L|W|$. By Lemma \ref{lem standard graph transform} and Proposition \ref{main prop in aflxz}(2), we obtain \begin{eqnarray*} G(\mathcal{R}_1) \subset B(\mathcal{W}^u_{G}(y), 200A^{-\frac{j-i}{2}}\bar{r}). \end{eqnarray*} Set $z = \exp_{x_{i_1}}i_{i_1}(y)$. By Proposition \ref{main prop in aflxz}(5) and the fact that $y$ is a hyperbolic fixed point of $G$, we conclude that $z$ is a $g$-hyperbolic periodic point. Then by Proposition \ref{main prop in aflxz} and by possibly increasing $C_1$ depending only on $X$, we can ensure that $z \in \mathcal{H}(g, D_1^{-C_1\Delta})$, and \begin{eqnarray*} g^{j}(x) \in \exp_{x_{i_1}}i_{i_1}G(\mathcal{R}_1) \subset B(\mathcal{W}^u_{D_1^{-3C_1\Delta}}(z), A_1^{-\frac{ q }{2D_1^{C_1 \Delta}}}). \end{eqnarray*} We conclude the proof by taking $C$ sufficiently large depending only on $X$.
\end{proof} \section{} \label{AppB} \begin{proof}[Proof of Corollary \ref{cor big G implies positive entropy}] In the following, we briefly denote $H(f, \alpha, r)$ by $H( \alpha, r)$, and denote $H(f, \lambda )$ by $H(\lambda )$. We first prove the corollary under condition (1). For any $ \alpha, r, \xi > 0$ and any $y \in \mathcal H( \alpha, r)$, \begin{eqnarray*} |B(\mathcal{W}^u_r(y), \xi)| \ll r \xi. \end{eqnarray*} It is clear from the definition of $\mathcal G$ in \eqref{defG} that for any $\lambda \in (0,1)$, \begin{eqnarray*} |\mathcal H(\lambda)| &\geq& |\mathcal G(\lambda, \xi, f) | / | B(\mathcal{W}^u_{\lambda^3}(y), \xi) | \\ &\gg& \lambda^{-3} \xi^{-1} | \mathcal G(\lambda, \xi, f) |. \end{eqnarray*} By \eqref{def G k} and Proposition \ref{lem alt}, it suffices to check that $|\mathcal G_i| \gg A^{-\frac{q_{i+1}\theta_0^{i+1}}{2D^{C \Delta q_i}}} D^{11C \Delta q_i}$. Since $x^{-1} > e^{-x}$ for $x \in (0, \infty)$, we have \begin{eqnarray*} |\mathcal G_i| &\geq& \frac{\eta^{K-i}\varepsilon}{(K+1)l_i} \\ &>& (10K)^{-1}\eta^{K} \varepsilon \frac{q_{i+1}\theta_0^{i+1}}{D^{C \Delta q_i}l_i} A^{-\frac{q_{i+1}\theta_0^{i+1}}{4D^{C \Delta q_i}}} \\ &\gg& A^{-\frac{q_{i+1}\theta_0^{i+1}}{2D^{C \Delta q_i}}} D^{11C \Delta q_i}.
\end{eqnarray*} The last inequality follows by taking $C' \gg 1$ and using: \begin{itemize} \item $K^{-1}\eta^{K}\varepsilon q_{i+1}\theta_0^{i+1} l_{i}^{-1} \gg 1$, since by \eqref{initial q} we have $q_{i+1} l_{i}^{-1}\theta_0^{i+1} \geq \frac{1}{2}q_{i}\theta_0^{i} \geq \frac{1}{2}q_0 \geq \frac{1}{2}\varepsilon^{-1}e^{C' (\log(\frac{\log A}{h}))^2 + C'} \gg \varepsilon^{-1} K \eta^{-K}$, \item $A^{\frac{q_{i+1}\theta_0^{i+1}}{4D^{C \Delta q_i}}} \geq D^{12C \Delta q_i}$, by $A \gg 1$, \eqref{HHH} and \eqref{def l q}. \end{itemize} Now we consider condition (2). We set $ \delta_0 = D^{-Tower(Q_0, Q_1, K+3)}$. By Lemma \ref{lemmacollectionsofproperties}(3), we have \begin{eqnarray*} \delta_0 \leq \xi_0 \mbox{ and } \delta_0 \leq \lambda^3_i, \quad \forall 0 \leq i \leq K. \end{eqnarray*} For any $\lambda, \xi \in (0,1)$, any $y \in \mathcal H( \lambda )$ and any $ \delta \in (0, \lambda^3 )$, we have \begin{eqnarray*} m( B(B(\mathcal{W}^u_{\lambda^3}(y), \xi), \delta )) \ll \lambda^3 \max( \xi, \delta). \end{eqnarray*} By \eqref{defG} and condition (2), we have for some $0 \leq i \leq K$ that \begin{eqnarray*} |\mathcal H( \lambda_i)| &\geq& m( B( \mathcal G(\lambda_i, \xi_i, f), \delta_0) ) / \sup_{y \in H(\lambda_i) }m(B(B(\mathcal{W}^u_{\lambda^3_i}(y), \xi_i), \delta_0)) \\ &\gg&  \varepsilon \lambda_i^{-3} \min( \xi_i^{-1} , \delta_0^{-1}). \end{eqnarray*} By Proposition \ref{lem alt}, it suffices to observe from Lemma \ref{lemmacollectionsofproperties} that \begin{eqnarray*} \varepsilon \gg \lambda_i^{-11} \max(\xi_i, \delta_0), \quad \forall 0 \leq i \leq K. \end{eqnarray*} \end{proof} \begin{thebibliography}{aaaa} \bibitem{AR} L. M. Abramov, V. A.
Rohlin, {\it The entropy of a skew product of measure preserving transformations}, AMS Translations, 48 (1965), 255-265. \bibitem{AFLXZ} A. Avila, B. Fayad, P. Le Calvez, D. Xu, Z. Zhang, \newblock {\it On mixing diffeomorphisms of the disk,} \newblock {arXiv:1509.06906.} \bibitem{CDP} V. Climenhaga, D. Dolgopyat, Y. Pesin, {\it Non-stationary non-uniform hyperbolicity: SRB measures for non-uniformly hyperbolic attractors,} Communications in Mathematical Physics, 346, issue 2 (2016), 553-602. \bibitem{CP} V. Climenhaga, Y. Pesin, {\it Hadamard-Perron theorems and effective hyperbolicity,} Ergodic Theory and Dynamical Systems, 36 (2016), 23-63. \bibitem{katok_ihes} A. Katok, {\it Lyapunov exponents, entropy and periodic orbits for diffeomorphisms,} Publications Math\'ematiques de l'IH\'ES, 51 (1980), 137-174. \bibitem{kh} A. Katok, B. Hasselblatt, {\it Introduction to the Modern Theory of Dynamical Systems.} Encyclopedia of Mathematics and its Applications 54. Cambridge: Cambridge University Press, 1995. \bibitem{pliss} V. Pliss, {\it On a conjecture of Smale,} Diff. Uravnenija, 8 (1972), 268-282. \end{thebibliography} \ \ \ {\footnotesize \noindent Bassam Fayad\\ Institut de Math\'ematiques de Jussieu--Paris Rive Gauche, UMR 7586 CNRS, Universit\'e Paris Diderot--Universit\'e Pierre et Marie Curie \\ E-mail: [email protected]} \ \ {\footnotesize \noindent Zhiyuan Zhang\\ Institut de Math\'{e}matique de Jussieu---Paris Rive Gauche, B\^{a}timent Sophie Germain, Bureau 652\\ 75205 PARIS CEDEX 13, FRANCE\\ E-mail: [email protected]} \end{document}
\begin{document} \title{Penalized bias reduction in extreme value estimation for censored Pareto-type data, and long-tailed insurance applications} \begin{abstract} {\noindent The subject of tail estimation for randomly censored data from a heavy tailed distribution receives growing attention, motivated by applications for instance in actuarial statistics. The bias of the available estimators of the extreme value index can be substantial and depends strongly on the amount of censoring. We review the available estimators, propose a new bias reduced estimator, and show how shrinkage estimation can help to keep the MSE under control. A bootstrap algorithm is proposed to construct confidence intervals. We compare these new proposals with the existing estimators through simulation. We conclude this paper with a detailed study of a long-tailed car insurance portfolio, which typically exhibits heavy censoring. } \end{abstract} \noindent {\bf Keywords:} Extreme value index; Pareto-type; Tail estimation; Random censoring; Bias reduction. \section{Introduction} \label{Sec1} Extreme value analysis under random right censoring is becoming more popular, with applications for example in survival analysis, reliability and insurance. For instance, in certain long-tailed insurance products, such as car liability insurance, long developments of claims are encountered. At evaluation of the portfolio a large proportion of the claims are then not fully developed and hence are censored. In the setting of random right censoring the variable of interest $X$ with distribution function (df) $F$ can be censored by a random variable $C$ with df $G$; observations of $X$ and $C$ are assumed to be independent. One then observes $Z=\min (X,C)$ with df $H$ satisfying $1-H=(1-F)(1-G)$, jointly with the indicator $\delta =1_{(X \leq C)}$ which equals 1 if the observation $Z$ is non-censored.
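The observation scheme just described is straightforward to simulate. The following sketch (illustrative only, with hypothetical strict Pareto choices for $F$ and $G$) draws $(Z_i,\delta_i)$ pairs and checks the identity $1-H=(1-F)(1-G)$ empirically at one test point.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Hypothetical strict Pareto choices: P(X > x) = x^{-2}, P(C > y) = y^{-1}, x, y > 1.
X = (1.0 - rng.uniform(size=n)) ** (-0.5)
C = (1.0 - rng.uniform(size=n)) ** (-1.0)

Z = np.minimum(X, C)             # observed value
delta = (X <= C).astype(int)     # 1 = non-censored, 0 = censored

# Empirical check of 1 - H = (1 - F)(1 - G) at z = 3:
z = 3.0
lhs = np.mean(Z > z)             # empirical 1 - H(z)
rhs = z ** (-2.0) * z ** (-1.0)  # (1 - F(z)) (1 - G(z))
assert abs(lhs - rhs) < 0.005

# A censored observation never exceeds the underlying variable of interest X.
assert np.all(Z <= X)
```

Only the pairs $(Z_i,\delta_i)$ are retained, mirroring the data actually available in the censored setting.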
Here we assume that $X$ and $C$ both are Pareto-type distributed with extreme value index (EVI) $\gamma_1 >0$ and $\gamma_2 >0$, i.e. \[ \bar{F} (x) = 1-F(x) = x^{-1/\gamma_1}\ell_1 (x) \mbox{ and } \bar{G}(y) = 1-G(y) = y^{-1/\gamma_2}\ell_2 (y), \;\; x,y>1, \] where both $\ell_1, \ell_2$ are slowly varying at infinity: \[ \ell_j(tx)/\ell_j (t) \to_{t \to \infty} 1, \;\; \mbox{ for every } x>1 \;\; (j=1,2). \] Note that $1-H$ then also belongs to the domain of attraction of an extreme value distribution with positive EVI $\gamma = { {\gamma_1 \gamma_2}\over{\gamma_1+\gamma_2}}$. Of course, the smaller $\gamma_2/\gamma_1$, the heavier the censoring. In long-tailed insurance applications as discussed above, the proportion of censored data can well be larger than 50\%, so that the situation $\gamma_2 < \gamma_1$ is then most relevant. In this paper we discuss the estimation of $\gamma_1$ based on independent and identically distributed (i.i.d.) observations $(Z_i,\delta_i)$ ($i=1,\ldots,n$) with $Z_i =\min (X_i,C_i)$ and $\delta_i=1_{(X_i\leq C_i)}$, where $(X_i,C_i)$ ($i=1,\ldots,n$) are i.i.d. random variables from $(F,G)$. In the next section we review the available estimators for $\gamma_1$ that were published in the literature. In Section 3 we propose a new bias reduced estimator which is based on an estimator proposed by Worms and Worms (2014). Moreover we show how shrinkage estimation, as introduced in Beirlant et al. (2017) in the non-censoring case, can also be used in the censoring context. In Section 4 a parametric bootstrap algorithm is proposed in order to construct confidence intervals for $\gamma_1$. We then report on a simulation study involving all available estimators and the proposed bootstrap algorithm. Finally we present a detailed motor third party liability (MTPL) case study. \section{A review of estimators of $\gamma_1$} \label{Sec2} In case there is no censoring (i.e.
$\gamma_2 =\infty$ and $1-H=1-F$), the Hill (1975) estimator is the benchmark estimator for $\gamma_1=\gamma$. Denoting the ordered $Z$ data by $Z_{1,n}\leq Z_{2,n} \leq \ldots \leq Z_{n,n}$, this estimator is given by \[ \hat{\gamma}_{Z,k}^{(H)}= {1 \over k}\sum_{j=1}^k \log {Z_{n-j+1,n} \over Z_{n-k,n}}. \] This estimator follows using maximum likelihood when approximating the distribution of the peaks $Z/t$ over a threshold $t$, given $Z> t$, by a simple Pareto distribution with density $y \mapsto \gamma^{-1} y^{-\gamma^{-1}-1}$, and taking a top order statistic $Z_{n-k,n}$ as a threshold $t$. \\ It can also be recovered by estimating the functional \begin{equation} L_t := \mathbb{E}(\log Z - \log t|Z>t)= \int_1^{\infty} {\bar{F}(ut) \over \bar{F}(t)} {du \over u}, \label{MElog} \end{equation} which tends to the extreme value index $\gamma$ of $Z$ as $t\to \infty$. In \eqref{MElog} $\bar{F}$ is estimated by the empirical survival function $1-\hat{F}_n$, again using $Z_{n-k,n}$ as a threshold $t$. This leads to an alternative expression for $\hat{\gamma}_{Z,k}^{(H)}$ by partial summation: \[ \hat{\gamma}_{Z,k}^{(H)}= {1 \over k}\sum_{j=1}^k j(\log Z_{n-j+1,n} - \log Z_{n-j,n}). \] While both approaches yield the same estimator in the non-censoring case, this is no longer true under random censoring. \begin{itemize} \item Beirlant et al. (2007) proposed the following estimator of $\gamma_1$ using the maximum likelihood approach: \begin{equation} \hat{\gamma}_{1,k}^{(H)} = {\hat{\gamma}_{Z,k}^{(H)} \over \hat{p}_k}, \label{Censored Hill} \end{equation} with $\hat{p}_k = {1\over k} \sum_{j = 1}^k \delta_{n-j+1,n}$ the proportion of non-censored observations among the largest $k$ observations of $Z$, where $\delta_{n-j+1,n}$ denotes the $\delta$ indicator attached to $Z_{n-j+1,n}$ $(1 \leq j \leq n)$. Indeed, $\hat{\gamma}_{Z,k}^{(H)}$ estimates $\gamma$ while $\hat{p}_k$ is shown to be a consistent estimator of $p=\gamma_2/(\gamma_1+\gamma_2)$. Einmahl et al.
(2008) enhanced the asymptotic analysis of this estimator and generalized this approach by considering any classical EVI estimator $\hat{\gamma}_{Z,k}^{(.)}$ of $\gamma$, proposing the estimators $ \hat{\gamma}_{1,k}^{(.)}={\hat{\gamma}_{Z,k}^{(.)}\over \hat{p}_k}$. See also Gomes and Oliveira (2003), Gomes and Neves (2011), and Brahimi et al. (2015) for other papers in this spirit. \item Worms and Worms (2014) essentially used the second approach, estimating \eqref{MElog} by replacing $1-F$ with the Kaplan-Meier estimator $1-\hat{F}^{KM}_n (x) = \Pi_{Z_{i,n} \leq x} \left(1- {1 \over n-i+1} \right)^{\delta_{i,n}}$, setting $1-\hat{F}^{KM}_n (Z_{n,n})=0$: \begin{equation} \hat{\gamma}_{1,k}^{(W)} = \sum_{j=1}^k {1-\hat{F}^{KM}_n (Z_{n-j+1,n}) \over 1-\hat{F}^{KM}_n (Z_{n-k,n})} \left( \log Z_{n-j+1,n}-\log Z_{n-j,n}\right). \label{Worms} \end{equation} Worms and Worms (2014) also introduced \begin{equation} \hat{\gamma}_{1,k}^{(KM)} = \sum_{j=1}^k {1-\hat{F}^{KM}_n (Z_{n-j+1,n}) \over 1-\hat{F}^{KM}_n (Z_{n-k,n})}{\delta_{n-j+1,n} \over j} \left( \log Z_{n-j+1,n}-\log Z_{n-k,n}\right). \label{WormsKM} \end{equation} Through simulations the estimator $\hat{\gamma}^{(W)}_{1,k}$ was found to have the best RMSE behaviour for smaller values of $k$; we therefore concentrate on $\hat{\gamma}_{1,k}^{(W)}$ below. Unfortunately, the asymptotic distribution of $\hat{\gamma}_{1,k}^{(W)}$ is not known to date. The asymptotic normality of a fixed threshold version of $\hat{\gamma}_{1,k}^{(KM)}$ can be derived from Worms and Worms (2017) in case $\gamma_1 < \gamma_2$. Brahimi et al. (2016) considered closely related estimators and obtained asymptotic results in case $\gamma_1 < \gamma_2$, i.e. $p >1/2$. \item In an objective Bayesian approach (see Zellner, 1971), Ameraoui et al. (2016) recently proposed several other estimators. They considered the maximum posterior (m) and the mean posterior (e) estimators based on the posterior density of $\gamma_1$.
The maximal data information (M) prior and a conjugate gamma prior with parameters $(a,b)$ were considered. It was also shown that the Jeffreys prior leads to special cases of the conjugate prior estimators by setting $a=b=0$. This then leads to the following estimators: \begin{eqnarray} \hat{\gamma}_{1,k}^{(m,M)}&=&{2 k\hat{\gamma}_{Z,k}^{(H)} \over 1+k\hat{p}_k+\sqrt{(1+k\hat{p}_k)^2+4k\hat{\gamma}_{Z,k}^{(H)}}}, \label{Bayes1} \\ \hat{\gamma}_{1,k}^{(e,a,b)} &=& {k\hat{\gamma}_{Z,k}^{(H)}+b \over k\hat{p}_k +a}, \label{Bayes2} \\ \hat{\gamma}_{1,k}^{(m,a,b)} &=& {k\hat{\gamma}_{Z,k}^{(H)}+b \over k\hat{p}_k +a-1}. \label{Bayes3} \end{eqnarray} \end{itemize} It is well-known that extreme value estimators often suffer from severe bias. In the random censoring case Einmahl et al. (2008) first derived the asymptotic bias of $\hat{\gamma}_{1,k}^{(H)}$, which was further detailed in Beirlant et al. (2016) under more specific assumptions on the slowly varying functions $\ell_1 $ and $\ell_2$, which are commonly proposed in extreme value statistics: \begin{eqnarray*} 1-F(x)&=&C_1 {x^{-1/\gamma_1}}(1+D_{1} x^{-\beta_1} (1+o(1))),\; x \to \infty,\\ 1-G(y)&=&C_2 {y^{-1/\gamma_2}}(1+D_{2} y^{-\beta_2} (1+o(1))),\; y \to \infty, \end{eqnarray*} where $\beta_1,\beta_2,C_1,C_2$ are positive constants and $D_1,D_2$ are real constants. Taking the bias of the Hill estimator $\hat{\gamma}^{(H)}_{Z,k}$ as a reference, it was observed that especially when $\beta_1 \leq \beta_2$ the bias of $\hat{\gamma}_{1,k}^{(H)}$ increases with decreasing value of $p$, i.e. for smaller $\gamma_2/\gamma_1$. Within the maximum likelihood approach, a bias reduced estimator was then proposed for the censoring case following the technique from Beirlant et al.
(2009) where the distribution of the excesses $X/t|X>t$ is approximated by the extended Pareto (EP) distribution with df $\mathbb{P}(Y \leq y)=1-(y\{1+\kappa_t (1-y^{-\beta_1})\})^{-1/\gamma_1}$ where $\kappa_t = \gamma_1 D_1 t^{-\beta_1} (1+o(1))$ as $t \to \infty$. The resulting estimator is given by \begin{equation} \hat{\gamma}_{1,k}^{(EP)} =\hat{\gamma}_{1,k}^{(H)}+C_{\hat{\gamma}_{1,k}^{(H)},\beta_*}{H_{Z,k}^{(-\beta_*)}\over \hat{p}_k} \left\{ H_{Z,k}^{(-\beta_*)}-\hat{\gamma}_{1,k}^{(H)}E_{Z,k}^{(c)}(-\beta_*) \right\} \label{biasreduced} \end{equation} where $\beta_{*}=\min(\beta_1,\beta_2)$ and \begin{eqnarray*} H_{Z,k}^{(-\beta_*)}&=&{1\over\beta_*} \left(1-{1\over{k}}\sum_{j=1}^k \left({Z_{n-j+1,n}\over Z_{n-k,n}}\right)^{-\beta_*}\right), \\ E_{Z,k}^{(c)}(-\beta_*)&=&{1\over{k}}\sum_{j=1}^k \delta_{n-j+1,n}{\left(Z_{n-j+1,n}\over{Z_{n-k,n}}\right)}^{-\beta_*},\\ C_{\gamma,\beta_*}&=&-{{(1+\gamma\beta_*)^3(1+2\gamma\beta_*)}\over{\gamma^4\beta_*^3}}. \end{eqnarray*} In this estimation procedure $\beta_*$ is assumed to be known. In fact, since in the definition of the EP distribution the term $y^{-\beta_*}$ is multiplied by the $\kappa_t$-factor, the asymptotic distribution of tail estimators based on the EP distribution will not depend on the asymptotic distribution of an estimator of $\beta_*$. One can hence also plug in estimators of the parameter $\beta_*$ of the distribution $H$ of $Z$ without increasing the bias in estimating $\gamma_1$. Estimators of $\rho_* = -\gamma \, \beta_*$ were discussed in Fraga Alves {\it et al.} (2003); given such an estimator $\hat{\rho}_*$, an estimator for $\beta_*$ is then given by $-\hat{\rho}_*/ \hat{\gamma}_{Z,k}^{(H)}$. In the simulations the sensitivity of the choice of $\rho_*$ was examined.
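To make the basic estimators reviewed in this section concrete, the following minimal sketch (hypothetical parameter values; strict Pareto samples, so the slowly varying parts are constant and no bias term is present) implements the Hill estimator $\hat{\gamma}_{Z,k}^{(H)}$ and the censored Hill estimator \eqref{Censored Hill}.

```python
import numpy as np

def hill(z, k):
    """Hill estimator based on the k largest order statistics of z."""
    z = np.sort(z)
    return float(np.mean(np.log(z[-k:]) - np.log(z[-k - 1])))

def censored_hill(z, delta, k):
    """Censored Hill estimator (Beirlant et al., 2007): Hill estimate on Z
    divided by the proportion of non-censored points among the top k."""
    order = np.argsort(z)
    p_hat = np.mean(np.asarray(delta, dtype=float)[order][-k:])
    return hill(z, k) / p_hat

# Simulated censored sample with hypothetical choices gamma_1 = 0.5, gamma_2 = 1.
rng = np.random.default_rng(1)
gamma1, gamma2, n, k = 0.5, 1.0, 100_000, 2_000
X = (1.0 - rng.uniform(size=n)) ** (-gamma1)   # P(X > x) = x^{-1/gamma1}
C = (1.0 - rng.uniform(size=n)) ** (-gamma2)
Z, delta = np.minimum(X, C), (X <= C)

# Hill on Z targets gamma = gamma1*gamma2/(gamma1+gamma2) = 1/3;
# the censored version corrects this back to gamma_1 = 0.5.
assert abs(hill(Z, k) - 1 / 3) < 0.05
assert abs(censored_hill(Z, delta, k) - gamma1) < 0.07
```

On Pareto-type data with a genuine slowly varying part, the choice of $k$ matters and the bias issue discussed above reappears.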
In \eqref{biasreduced} one can reparametrize $\beta_* \hat{\gamma}_{Z,k}^{(H)}$ by $-\rho_*$ with $\rho_* <0$, leading to \begin{equation} \hat{\gamma}_{1,k}^{(EP)}(\rho_*)= \hat{\gamma}_{1,k}^{(H)}- {\hat{\gamma}_{1,k}^{(H)} (1-\rho_*)^2(1-2\rho_*)\over \rho_*^3} \left\{ H_{Z,k}^{(\rho_*/ \hat{\gamma}_{Z,k}^{(H)})}-\hat{\gamma}_{1,k}^{(H)}E_{Z,k}^{(c)}(\rho_*/ \hat{\gamma}_{Z,k}^{(H)}) \right\}. \label{EPrho} \end{equation} It was shown that when using the correct value of $\beta_*$ or $\rho_*$, the asymptotic bias of $\hat{\gamma}_{1,k}^{(EP)}$ is 0 as long as $\sqrt{k} (k/n)^{\beta_*} = O(1)$, whereas the asymptotic bias of the original estimator $\hat{\gamma}^{(H)}_{1,k}$ is only 0 when $\sqrt{k} (k/n)^{\beta_*} \to 0$ as $k,n \to \infty$. Hence the bias is reduced over a larger range of values of $k \geq 1$ when choosing the threshold $t$ as $Z_{n-k,n}$. On the other hand, the variance of this bias reduced estimator was shown to be increased by the factor $\left({1+\gamma\beta_*\over \gamma \beta_*} \right)^2$ in comparison with the estimator $\hat{\gamma}^{(H)}_{1,k}$. Before comparing the different estimators through simulations, we next derive a bias reduced estimator starting from the Worms \& Worms estimator $\hat{\gamma}_{1,k}^{(W)}$ from \eqref{Worms}. Following Beirlant et al. (2017), we then also apply shrinkage estimation on $\kappa_t$, forcing this parameter to decrease to $0$ as $t \to \infty$ or $k \downarrow 1$, as is the case in the mathematical definition of $\kappa_t$. \section{Bias reduction of the Worms \& Worms estimator and penalized estimation of bias} \label{Sec3} Using the EP approximation to the survival function ${\bar{F}(ut) \over \bar{F}(t)} $ of the excesses $X/t|X>t$ leads to the following approximation of the integral expression of $L_t$ in \eqref{MElog}: as $t\to \infty$ \begin{equation} L_t = \int_1^{\infty} {\bar{F}(ut) \over \bar{F}(t)} {du \over u} = \gamma_1 - \kappa_t {\beta_1 \gamma_1 \over 1+\beta_1 \gamma_1}(1+o(1)).
\label{Lt} \end{equation} Similarly, considering $$E_t (-\beta_1):= \mathbb{E} \left( ({X \over t})^{-\beta_1}|X>t \right) = 1+ \int_1^{\infty}{\bar{F}(ut) \over \bar{F}(t)} du^{-\beta_1}$$ leads to \begin{equation} (1+\gamma_1\beta_1) E_t (-\beta_1) = 1 + {\kappa_t \over \gamma_1} {(\beta_1 \gamma_1)^2 \over 1+2\beta_1 \gamma_1}(1+o(1)). \label{Et} \end{equation} Substituting $\gamma_1$ in the left hand side of \eqref{Et} by the expression $L_t + \kappa_t {\beta_1 \gamma_1 \over 1+\beta_1 \gamma_1}$ which follows from \eqref{Lt}, one obtains for $t \to \infty$ that \begin{eqnarray*} && \hspace{-1cm}(1+\beta_1 L_t) E_t(-\beta_1) \\&=& 1+ \left\{ {\kappa_t \over \gamma_1} {(\beta_1 \gamma_1)^2 \over 1+2\beta_1 \gamma_1}- \kappa_t {\beta_1^2 \gamma_1 \over 1+\beta_1 \gamma_1}E_t(-\beta_1)\right\} (1+o(1))\\ &=& 1+ {\kappa_t \over \gamma_1}(\beta_1 \gamma_1)^2 [{1 \over 1+2\beta_1\gamma_1}-{1 \over (1+\beta_1\gamma_1)^2}](1+o(1))\\ &=& 1+ {\kappa_t \over \gamma_1}{ (\beta_1 \gamma_1)^4 \over (1+\beta_1\gamma_1)^2(1+2\beta_1\gamma_1)}(1+o(1)), \end{eqnarray*} where in the second step we approximated $E_t(-\beta_1)$ by $(1+\gamma_1\beta_1)^{-1}$. We now conclude that \begin{equation} \kappa_t = {L_t (1+\beta_1 L_t)^3(1+2\beta_1 L_t) \over (\beta_1 L_t)^4}\left\{E_t(-\beta_1)-{1 \over 1+\beta_1 L_t} \right\}(1+o(1)).
\label{deltatilde} \end{equation} Estimating $L_t$ at a random threshold $Z_{n-k,n}$ by $\hat{\gamma}_{1,k}^{(W)}$ and similarly $E_t(-\beta_1)$ by \[ \hat{E}_k (-\beta_1)= 1+ \sum_{j=1}^k {1-\hat{F}^{KM}_n (Z_{n-j+1,n}) \over 1-\hat{F}^{KM}_n (Z_{n-k,n})} \left( Z^{-\beta_1}_{n-j+1,n}- Z_{n-j,n}^{-\beta_1} \right)/Z^{-\beta_1}_{n-k,n}, \] we obtain the following bias reduced estimator for $\gamma_1$ combining \eqref{Lt} and \eqref{deltatilde}: \begin{equation} \hat{\gamma}_{1,k}^{(BR,W)}= \hat{\gamma}_{1,k}^{(W)}+ {\hat{\gamma}_{1,k}^{(W)} (1+\beta_1\hat{\gamma}_{1,k}^{(W)})^2(1+2\beta_1 \hat{\gamma}_{1,k}^{(W)})\over (\beta_1\hat{\gamma}_{1,k}^{(W)})^3} \left\{\hat{E}_k (-\beta_1)-{1 \over 1+\beta_1 \hat{\gamma}_{1,k}^{(W)}} \right\}. \label{BRW} \end{equation} In \eqref{BRW} one can reparametrize $\beta_1 \hat{\gamma}_{1,k}^{(W)}$ by $-\rho_1$ with $\rho_1 <0$, leading to \begin{equation} \hat{\gamma}_{1,k}^{(BR,W)}(\rho_1)= \hat{\gamma}_{1,k}^{(W)}- {\hat{\gamma}_{1,k}^{(W)} (1-\rho_1)^2(1-2\rho_1)\over \rho_1^3} \left\{\hat{E}_k (\rho_1/\hat{\gamma}_{1,k}^{(W)})-{1 \over 1-\rho_1} \right\}, \label{BRWrho} \end{equation} and we will study the sensitivity of the estimator with respect to the choice of $\rho_1$. In fact our objective will be to look for an appropriate choice of $\rho_1$ such that the plot of the estimates as a function of $k$ is as constant as possible, in order to assist practitioners. Also here the variance of the bias reduced estimator can be expected to be inflated compared with the original estimator $\hat{\gamma}_{1,k}^{(W)}$. This will be confirmed by the simulations in the next section. However, Beirlant et al. (2017) showed that this problem can be alleviated by forcing the bias estimator ${\hat{\gamma}_{1,k}^{(W)} (1-\rho_1)^2(1-2\rho_1)\over (-\rho_1)^3} \left\{\hat{E}_k (\rho_1/\hat{\gamma}_{1,k}^{(W)})-{1 \over 1-\rho_1} \right\}$ to decrease to 0 as $t \to \infty$ or $k \downarrow 1$.
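The Kaplan-Meier weighted estimator \eqref{Worms} on which these corrections build can be transcribed directly. The sketch below (illustrative only, with hypothetical parameter values; $1-\hat{F}^{KM}_n(Z_{n,n})$ is set to $0$ as in its definition) checks it on simulated strict Pareto data, where the bias term is absent.

```python
import numpy as np

def worms_estimator(z, delta, k):
    """Worms & Worms (2014) estimator of gamma_1: Kaplan-Meier weighted
    sum of the log-spacings above the threshold Z_{n-k,n}."""
    order = np.argsort(z)
    z_s = z[order]
    d_s = np.asarray(delta, dtype=float)[order]
    n = len(z_s)
    # 1 - F_KM at each order statistic: prod over Z_{i,n} <= x of (1 - 1/(n-i+1))^{delta_{i,n}}
    S = np.cumprod((1.0 - 1.0 / (n - np.arange(n))) ** d_s)
    S[-1] = 0.0                                # convention: 1 - F_KM(Z_{n,n}) = 0
    idx = np.arange(n - k, n)                  # positions of Z_{n-k+1,n}, ..., Z_{n,n}
    weights = S[idx] / S[n - k - 1]            # Kaplan-Meier ratios
    spacings = np.log(z_s[idx]) - np.log(z_s[idx - 1])
    return float(np.sum(weights * spacings))

rng = np.random.default_rng(2)
gamma1, gamma2, n, k = 0.5, 1.0, 100_000, 2_000
X = (1.0 - rng.uniform(size=n)) ** (-gamma1)   # P(X > x) = x^{-1/gamma1}
C = (1.0 - rng.uniform(size=n)) ** (-gamma2)
Z, delta = np.minimum(X, C), (X <= C)

# Without a slowly varying part the estimate should be close to gamma_1 = 0.5.
assert abs(worms_estimator(Z, delta, k) - gamma1) < 0.1
```

In the uncensored case ($\delta_i \equiv 1$) the Kaplan-Meier weights reduce to $(j-1)/k$, recovering a Hill-type weighted sum of log-spacings.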
Formally applying the shrinkage procedure from Beirlant et al. (2017) leads to a penalized version of \eqref{deltatilde}: \[ \kappa_t^s = {1+\beta_1 L_t \over \frac{\omega L_t}{k\sigma^2_{1,k,n}} +\frac{(\beta_1 L_t)^4}{L_t (1+\beta_1 L_t)^2(1+2\beta_1 L_t)}} \left\{ E_t(-\beta_1)-{1 \over 1+\beta_1 L_t} \right\}, \] where $\sigma^2_{1,k,n}= (k/n)^{-2\rho_1}$ and $\omega$ is a weight factor that allows one to control the penalization. The term $(\omega L_t)/(k\sigma^2_{1,k,n})$ makes the bias correction shrink for smaller values of $k$, i.e. when $k\sigma^2_{1,k,n} \to 0$, in which case the original estimator $\hat{\gamma}_{1,k}^{(W)}$ is asymptotically unbiased. This then leads to the penalized estimator \begin{equation} \hat{\gamma}_{1,k}^{(s,W)}(\rho_1) = \hat{\gamma}_{1,k}^{(W)} - {\rho_1 \over \frac{\omega \hat{\gamma}_{1,k}^{(W)} }{k\sigma^2_{1,k,n}} +\frac{\rho_1^4}{\hat{\gamma}_{1,k}^{(W)} (1-\rho_1)^2(1-2\rho_1)} } \left\{\hat{E}_k (\rho_1/\hat{\gamma}_{1,k}^{(W)})-{1 \over 1-\rho_1} \right\}. \label{pW} \end{equation} In a similar way the bias component in $\hat{\gamma}_{1,k}^{(EP)}(\rho_*)$ can be penalized for smaller values of $k$: \begin{equation} \hat{\gamma}_{1,k}^{(s,EP)}(\rho_*) = \hat{\gamma}_{1,k}^{(H)}- {\rho_* \over \frac{\omega \hat{\gamma}_{1,k}^{(H)} }{k\sigma^2_{*,k,n}} +\frac{\rho_*^4}{\hat{\gamma}_{1,k}^{(H)} (1-\rho_*)^2(1-2\rho_*)} } \left\{ H_{Z,k}^{(\rho_*/ \hat{\gamma}_{Z,k}^{(H)})}-\hat{\gamma}_{1,k}^{(H)}E_{Z,k}^{(c)}(\rho_*/ \hat{\gamma}_{Z,k}^{(H)}) \right\} \label{pEP} \end{equation} with $\sigma^2_{*,k,n}=(k/n)^{-2\rho_*}$. \section{Bootstrap confidence intervals for $\gamma_1$} \label{SecB} Given the lack of any distribution theory for the Worms and Worms estimator $\hat{\gamma}_{1,k}^{(W)}$, we here present a parametric bootstrap algorithm in order to construct confidence intervals for $\gamma_1$.
The main idea behind this bootstrap procedure is that for a value of $k$ where the bias of an estimator of $\gamma_1$ is 0, one may just as well simulate from simple Pareto distributions rather than from the true Pareto-type distributions $F$ and $G$ in order to construct samples of estimators. Also note that $$ \hat{\gamma}_{2,k}^{(W)} = \sum_{j=1}^k {1-\hat{G}^{KM}_n (Z_{n-j+1,n}) \over 1-\hat{G}^{KM}_n (Z_{n-k,n})} \left( \log Z_{n-j+1,n}-\log Z_{n-j,n}\right), $$ where $1-\hat{G}^{KM}_n (y)= \prod_{Z_{i,n} \leq y} \left(1- {1 \over n-i+1} \right)^{1-\delta_{i,n}}$ denotes the Kaplan-Meier estimator of $G$, jointly with its bias reduced versions (constructed in a similar way as in the preceding section, replacing $\delta_{n-j+1,n}$ by $1-\delta_{n-j+1,n}$), leads to estimates of $\gamma_2$. The procedure then runs as follows: \begin{itemize} \item Given a value of $\hat{k}_1$, respectively $\hat{k}_2$, where the bias of the estimator $\hat{\gamma}_{1,k}^{(W)}$, respectively $\hat{\gamma}_{2,k}^{(W)}$, is judged to be negligible, one can perform a parametric bootstrap using samples of size $n$ from $\left( \min(\hat{X}_i, \hat{C}_i),1_{(\hat{X}_i \leq \hat{C}_i)}\right)$ ($i=1,\ldots,n$) where $\hat{X}_i$, respectively $\hat{C}_i$, are simulated from a standard Pareto distribution with survival function $x^{-1/\hat{\gamma}_{1,\hat{k}_1}^{(W)}}, \, x>1$, respectively $y^{-1/\hat{\gamma}_{2,\hat{k}_2}^{(W)}}, \,y>1$. \item The values $\hat{k}_j$, $j=1,2$, are chosen from $$ \hat{k}_j = \max \{k: |\hat{\gamma}_{j,k}^{(W)}-\hat{\gamma}_{j,k}^{(s,W)}(\rho_j)| \leq \epsilon \} $$ for a small value of $\epsilon$. \item From each bootstrap sample one then retains a bootstrap estimate $\hat{\gamma}_{1,\hat{k}_1}^{(*,s,W)}$ of $\gamma_1$.
\item Finally, repeating this bootstrap sampling step $N$ times, we consider the empirical distribution of the values $\hat{\gamma}_{1,\hat{k}_1}^{(*,s,W)}(j)$ $(j=1,\ldots,N)$, and more specifically the $\lfloor N\alpha/2 \rfloor$ and $\lfloor N(1-\alpha/2) \rfloor$ empirical quantiles, in order to construct a $100(1-\alpha)\%$ confidence interval for $\gamma_1$. \end{itemize} In the next section we will test this bootstrap procedure by applying it to several simulated censored samples under different values of the proportion of non-censoring. \section{Finite sample simulations} \label{Sec4} As the asymptotic distribution of $\hat{\gamma}_{1,k}^{(W)}$, and hence also of $\hat{\gamma}_{1,k}^{(BR,W)}(\rho_1)$ and $\hat{\gamma}_{1,k}^{(s,W)}(\rho_1)$, is not known, we here consider a comparison using finite sample simulations. We report the simulation results for sample size $n=500$ from \begin{itemize} \item the Burr ($\eta,\tau,\lambda$) distribution with right tail function $$ 1-F(x) = \left({\eta \over \eta + x^{\tau}} \right)^{\lambda}, \; x>0, $$ with $\eta,\tau,\lambda >0$, and $\gamma = 1/(\tau\lambda), \beta = \tau, D= -\lambda \eta$; \item the Fr\'echet ($\alpha$) distribution with right tail function $$ 1-F(x) = 1-\exp (-x^{-\alpha}), \; x>0, $$ with $\alpha >0$, and $\gamma = 1/\alpha, \beta = \alpha, D= -1/2$.
\end{itemize} Here we present results concerning the bias and the root mean squared error of the different estimators of $\gamma_1$ discussed above, and of the bootstrap algorithm, in case \begin{itemize} \item Burr ($10,2,2$) censored by Burr ($10,5,2$) with $\gamma_1=0.25$ and $\gamma_2=0.10$, leading to heavy censoring with the proportion of non-censoring $p=0.286$; see Figure 1; \item Burr ($10,2,1$) censored by Burr ($10,2,1$) with $\gamma_1=\gamma_2=0.5$ so that $p=0.5$; see Figure 2; \item Burr ($10,5,2$) censored by Burr ($10,2,2$) with $\gamma_1=0.10$ and $\gamma_2=0.25$, with light censoring $p=0.714$; see Figure 3; \item Fr\'echet (2) censored by Fr\'echet (1) with $\gamma_1=0.5$ and $\gamma_2=1$, so that $p=2/3$; see Figure 4. \end{itemize} In each of these four cases we consider the results for \begin{itemize} \item $\hat{\gamma}_{1,k}^{(H)}$ from \eqref{Censored Hill}, $\hat{\gamma}_{1,k}^{(EP)}(\rho_*)$ from \eqref{EPrho}, and $\hat{\gamma}_{1,k}^{(s,EP)}(\rho_*)$ from \eqref{pEP} with $\omega=1$ and for different values of $\rho_*$ (left in Figures 1-4), \item $\hat{\gamma}_{1,k}^{(W)}$ from \eqref{Worms}, $\hat{\gamma}_{1,k}^{(KM)}$ from \eqref{WormsKM}, $\hat{\gamma}_{1,k}^{(BR,W)}(\rho_1)$ from \eqref{BRWrho}, and $\hat{\gamma}_{1,k}^{(s,W)}(\rho_1)$ from \eqref{pW} with $\omega=1$ and for different values of $\rho_1$ (middle of Figures 1-4), \item the Bayesian estimators $\hat{\gamma}_{1,k}^{(m,M)}$, $\hat{\gamma}_{1,k}^{(e,1,2)}$ and $\hat{\gamma}_{1,k}^{(m,1,2)}$ from \eqref{Bayes1}, \eqref{Bayes2} and \eqref{Bayes3} (right in Figures 1-4). \end{itemize} One observes that in the case $p<0.5$ the likelihood based estimator $\hat{\gamma}_{1,k}^{(H)}$ and its bias reduced versions, and the Bayesian estimators, have larger bias than the estimators derived from estimating the functional form $L_t$ in \eqref{MElog}. Bias reduction of $\hat{\gamma}_{1,k}^{(H)}$ helps only partially in such cases.
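The censored Burr samples used in Figures 1-3 can be generated by inverse transform sampling from the right tail function given above. A sketch under this assumption (function names are ours):

```python
import numpy as np

def rburr(n, eta, tau, lam, rng):
    """n Burr(eta, tau, lam) variates by inverse transform:
    solve (eta / (eta + x^tau))^lam = U for x, with U uniform on (0,1)."""
    u = rng.uniform(size=n)
    return (eta * (u ** (-1.0 / lam) - 1.0)) ** (1.0 / tau)

def censored_burr_sample(n, pars_x, pars_c, seed=0):
    """Observed pairs (Z_i, delta_i) with X ~ Burr(pars_x) censored
    by an independent C ~ Burr(pars_c)."""
    rng = np.random.default_rng(seed)
    x = rburr(n, *pars_x, rng)
    c = rburr(n, *pars_c, rng)
    return np.minimum(x, c), (x <= c).astype(int)
```

For example, `censored_burr_sample(500, (10, 2, 2), (10, 5, 2))` corresponds to the heavy censoring setting of Figure 1.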
In these cases the bias reduced and penalized estimators $\hat{\gamma}_{1,k}^{(BR,W)}$ and $\hat{\gamma}_{1,k}^{(s,W)}(\rho_1)$ perform well, with low bias and low RMSE over a long interval of values of $k$, which is quite helpful in choosing an appropriate value of $k$. This is in contrast with the bias of $\hat{\gamma}_{1,k}^{(W)}$ and $\hat{\gamma}_{1,k}^{(KM)}$, which decreases systematically with decreasing value of $k$. In order to evaluate the effect of the penalization in $\hat{\gamma}_{1,k}^{(s,W)}(\rho_1)$, we restricted the scale of the bias and RMSE plots in the Burr cases, see Figure 5. Especially in case $p \leq 0.5$ and for smaller values of $k$ the mean of the shrinkage estimator is much more stable, and ultimately for small $k$ the behaviour of this estimator follows that of $\hat{\gamma}_{1,k}^{(W)}$. The resulting RMSE is then also a lower envelope of the RMSE curves of $\hat{\gamma}_{1,k}^{(W)}$ and $\hat{\gamma}_{1,k}^{(BR,W)}$. On the other hand, when $p>0.5$, and especially in the Fr\'echet case, the basic estimators $\hat{\gamma}_{1,k}^{(H)}$, $\hat{\gamma}_{1,k}^{(W)}$ and $\hat{\gamma}_{1,k}^{(m,M)}$ work almost equally well, and this also holds for the different approaches to bias reduction. Within the group of Bayes estimators clearly $\hat{\gamma}_{1,k}^{(m,M)}$ works best. We also tested the proposed bootstrap procedure in the same cases as considered in Figures 1 to 4. We applied the algorithm with $\alpha=0.05$ and $N=1000$ to 1000 samples of size $n=500$. In Figure 6 the 1000 confidence intervals are given when choosing $\hat{k}_1$ and $\hat{k}_2$ adaptively using $\epsilon=0.01$, and when keeping $\hat{k}_1=\hat{k}_2$ fixed at $25 = 0.05 \times n$ throughout (this value of $k$ appears appropriate on the basis of Figures 1 to 4). Further simulations showed that for $n=1000$ keeping $\hat{k}_1=\hat{k}_2$ fixed at $0.04 \times n$ leads to confidence intervals that attain the required 95\% level closely.
The confidence intervals missing the correct value of $\gamma_1$ are put in dark grey. Of course, when the $k$ values are chosen adaptively, it is more difficult to attain the confidence level $1-\alpha = 0.95$. A deeper understanding of the distribution of $\hat{\gamma}_{1,k}^{(W)}$ and $\hat{E}_k (-\beta_1)$ appears necessary to enhance the adaptive choice of $k_1, k_2$ and the performance of the bootstrap algorithm. \section{A case study from car insurance} \label{Sec5} Finally, in order to illustrate the merits of the newly proposed method, we consider a data set with indexed total payments from a motor third party liability insurance company operating in the EU, with records from 1995 till 2010 with $n=849$ claims of which only 340 were completely developed at the end of 2010. For every claim the indexed cumulative payments are given at the end of every year until development. In Figure 6 we plotted the proportions of non-censored data $\hat{p}_k$ which are situated in the top $100k/n$\% of cumulative payments at the end of 2010, as a function of $k/n$. In practice most companies substitute the censored observations by ultimate predictions obtained through reserving techniques. Here we show how the extreme value methods for censored data can also be used directly, without ultimates, in order to obtain relevant extreme value predictions. Due to the long-tail nature of such portfolios, only the claims with arrival year between 1995 and 1999 can be considered to satisfy the condition of weak censoring $p > 0.5$ or $\gamma_1 < \gamma_2$ when using the information up to 2010. For this group of early claims 29\% is censored at the end of 2010, while the percentage of censoring is 60\% when considering all claims. Note also that when considering all claims the largest 20\% are all censored, whereas this percentage increases to 40\% for the claims arriving after 2003. Needless to say, extreme value methodology is quite challenging in such a case.
In order to illustrate the stability of the proposed bias reduction technique based on the Worms and Worms estimator over different percentages of censoring, we also split the full data set into groups along the arrival times in 1995-1999, 1998-2002, 2001-2005, 2004-2008, 2007-2010. In Figure 7 the original estimates $\hat{\gamma}_{1,k}^{(W)}$ and $\hat{\gamma}_{1,k}^{(BR,W)}(-3)$ are given as a function of $k/n$ for each of these subgroups and for the complete data set. The value $\rho_1=-3$ was chosen as it yields the most constant plots as a function of $k$. The plots of $\hat{\gamma}_{1,k}^{(W)}$ are steepest for the claims which are most recent in 2010. The stability of the bias reduced estimates over these subgroups is quite convincing, leading to estimates of $\gamma_1$ between 0.6 and 0.7. As another validity check, in Figure 8 we consider only the claims from 1995-1999 with their cumulative payments as of 2000 till 2010 in steps of 2 years. Note that in 2000 only 9\% of those claims were fully developed, while by 2010 this percentage had risen to 71\%. Again the bias reduced estimates $\hat{\gamma}_{1,k}^{(BR,W)}(-3)$ are remarkably stable over $k$. We also applied the bootstrap algorithm to this case study. Using $\epsilon = 0.01$ leads to $\hat{k}_1=73$, $\hat{k}_2=50$, $\hat{\gamma}_{1,73}^{(W)}=0.725$, and $\hat{\gamma}_{2,50}^{(W)}=0.652$, see Figure 10 (left). The confidence intervals for the different values of $k$ are given in Figure 10 (right), with special attention for the case $k=73$, which leads to the 95\% confidence interval $(0.48;0.91)$. Choosing $\hat{k}_1=\hat{k}_2$ fixed at 4 to 5\% of the sample size $n=849$, as suggested in the simulation section, leads to lower bounds that are somewhat lower than 0.48, as can be seen from Figure 10 (right).
\section{Conclusion} \label{Sec6} The estimator $\hat{\gamma}_{1,k}^{(W)}$ from Worms and Worms (2014) has the best RMSE behaviour among all available first order estimators of $\gamma_1$. In order to enhance the practical use of this estimator we proposed bias reduction and penalization techniques which lead to improved bias and RMSE behaviour. Moreover, a bootstrap procedure is proposed in order to construct confidence intervals. This is especially useful with long-tailed insurance products. In order to enhance the adaptive choice of the number of extreme data $k$, asymptotic representations of the estimators involved are needed for all cases, but especially in case of heavy censoring. This will be the subject of future work. \section{Acknowledgments} \label{Sec7} \noindent The authors are indebted to Rym and Julien Worms for helpful discussions and suggestions on this topic. This work is based on the research supported wholly/in part by the National Research Foundation of South Africa (Grant Number 102628). The Grantholder acknowledges that opinions, findings and conclusions or recommendations expressed in any publication generated by the NRF supported research is that of the author(s), and that the NRF accepts no liability whatsoever in this regard.
\begin{landscape} \begin{figure} \caption{Bias and RMSE for \textbf{Burr(10,2,2)} censored by Burr(10,5,2)} \end{figure} \begin{figure} \caption{Bias and RMSE for \textbf{Burr(10,2,1)} censored by Burr(10,2,1)} \end{figure} \begin{figure} \caption{Bias and RMSE for \textbf{Burr(10,5,2)} censored by Burr(10,2,2)} \end{figure} \begin{figure} \caption{Bias and RMSE for \textbf{Fr\'echet(2)} censored by Fr\'echet(1)} \end{figure} \begin{figure} \caption{Bias (top) and RMSE (bottom) for $\hat{\gamma}_{1,k}^{(s,W)}(\rho_1)$ in the Burr cases} \end{figure} \begin{figure} \caption{Simulated bootstrap 95\% confidence intervals with adaptive choice of $k_1$ and $k_2$ using $\epsilon = 0.01$ (4 left frames) and using $k_1=k_2=25=5\%n$ for every sample (4 right frames)} \end{figure} \end{landscape} \begin{figure} \caption{Car liability data: plot of $\hat{p}_k$ as a function of $k/n$} \end{figure} \begin{figure} \caption{Car liability data: $\hat{\gamma}_{1,k}^{(W)}$ and $\hat{\gamma}_{1,k}^{(BR,W)}(-3)$ for the different subgroups of arrival years} \end{figure} \begin{figure} \caption{Car liability data: $\hat{\gamma}_{1,k}^{(BR,W)}(-3)$ for the claims from 1995-1999 at different stages of development} \end{figure} \begin{figure} \caption{Car liability data, using all claims: $\hat{\gamma}_{1,k}^{(W)}$, $\hat{\gamma}_{2,k}^{(W)}$ and bootstrap confidence intervals} \end{figure} \end{document}
\begin{document} \author[K. Fellner, W. Prager, B.Q. Tang]{Klemens Fellner, Wolfgang Prager, Bao Q. Tang} \address{Klemens Fellner \break Institute of Mathematics and Scientific Computing, University of Graz, Heinrichstrasse 36, 8010 Graz, Austria} \email{[email protected]} \address{Wolfgang Prager \break Institute of Mathematics and Scientific Computing, University of Graz, Heinrichstrasse 36, 8010 Graz, Austria} \email{[email protected]} \address{Bao Quoc Tang \break Institute of Mathematics and Scientific Computing, University of Graz, Heinrichstrasse 36, 8010 Graz, Austria} \email{[email protected]} \subjclass[2010]{35B35, 35B40, 35F35, 35K37, 35Q92} \keywords{Reaction-diffusion systems; Entropy method; First order chemical reaction networks; Complex balance condition; Convergence to equilibrium} \begin{abstract} In this paper, the applicability of the entropy method for the trend towards equilibrium for reaction-diffusion systems arising from first order chemical reaction networks is studied. In particular, we present a suitable entropy structure for weakly reversible reaction networks without the detailed balance condition. We show by deriving an entropy-entropy dissipation estimate that for any weakly reversible network each solution trajectory converges exponentially fast to the unique positive equilibrium with computable rates. This convergence is shown to hold even in cases when the diffusion coefficients of all but one species are zero. For non-weakly reversible networks consisting of source, transmission and target components, it is shown that species belonging to a source or transmission component decay to zero exponentially fast while species belonging to a target component converge to the corresponding positive equilibria, which are determined by the dynamics of the target component and the mass injected from other components.
The results of this work, in some sense, complete the picture of the trend to equilibrium for first order chemical reaction networks. \end{abstract} \maketitle \numberwithin{equation}{section} \newtheorem{example}{Example}[section] \section{Introduction and Main results} This paper investigates the applicability of the entropy method and proves the convergence to equilibrium for reaction-diffusion systems which do not satisfy a detailed balance condition. The mathematical theory of (spatially homogeneous) chemical reaction networks goes back to the pioneering works of e.g. Horn, Jackson, Feinberg and the Volperts, see \cite{Fe79, Fe87, FeHo74, Ho72, Ho74, HoJa72,Vol72,Vol94} and the references therein. The aim is to study the dynamical system behaviour of reaction networks {\it independently of the values of the reaction rates}. It has been conjectured since the early 1970s that in a complex balanced system, the trajectories of the corresponding dynamical system always converge to a positive equilibrium. This conjecture was given the name Global Attractor Conjecture by Craciun et al. \cite{Cra09}. The conjecture in its full generality is -- up to our knowledge -- still unsolved, despite many attempts by mathematicians to attack this problem. Among the many previous works concerning the large time behaviour of chemical reaction networks, the majority of the existing results considers the spatially homogeneous ODE setting. The PDE setting in terms of reaction-diffusion systems is less studied. Also, detailed quantitative statements, e.g. rates of convergence to equilibrium, frequently constitute open questions even in the ODE setting. Our general aim is to prove quantitative results on the large-time behaviour of chemical reaction networks modelled by reaction-diffusion systems.
In the present work, we study reaction-diffusion systems arising from first order chemical reaction networks and show that all solution trajectories converge exponentially to the corresponding equilibria with explicitly computable rates. Our approach applies the so called entropy method. Going back to ideas of Boltzmann and Grad, the fundamental idea of the entropy method is to quantify the monotone decay of a suitable entropy functional (e.g. a convex Lyapunov functional) in terms of a \emph{functional inequality} connecting the time-derivative of the entropy, the so called entropy dissipation functional, back to the entropy functional itself, i.e. to derive a so called \emph{entropy entropy-dissipation (EED)} inequality. Such an EED inequality can only hold provided that all conservation laws are taken into account. After having established an EED inequality and applying it to global solutions of a dissipative evolutionary problem, a direct Gronwall argument implies convergence to equilibrium in relative entropy with rates and constants, which can be made explicit. By being based on functional inequalities (rather than on direct estimates on the solutions), a major advantage of the entropy method is its robustness with respect to model variations and generalisations. Moreover, the entropy method is per se a fully nonlinear approach. The fundamental idea of the entropy method originates from the pioneering works of kinetic theory by names like Boltzmann and Grad, investigating the trend to equilibrium of e.g. models of gas kinetics. A systematic effort in developing the entropy method for dissipative evolution equations did not start until much later, see e.g. the seminal works \cite{tos96,toscani_villani1,CJMTU,AMTU,DVinhom1} and the references therein for scalar (nonlinear) diffusion or Fokker-Planck equations, and in particular the paper of Desvillettes and Villani concerning the trend to equilibrium for the spatially inhomogeneous Boltzmann equation \cite{DVinhom2}.
The derivation of EED inequalities for scalar evolution equations is typically based on the Bakry-Emery strategy (see e.g. \cite{CJMTU,AMTU}), which seems to fail (or to become too involved) when applied to systems. The great challenge of the entropy method for systems is, therefore, to derive an entropy entropy-dissipation inequality which summarises (in the sense of measuring with a convex entropy functional) the entire dissipative behaviour of solutions to the (possibly nonlinear) dynamical system to which the EED inequality shall be applied. Preliminary results based on a (non-explicit) compactness-contradiction argument in 2D were obtained e.g. in \cite{Gro92,GGH96,GH97} in the context of semiconductor drift-diffusion models. The first proof of an EED inequality with explicitly computable constants and rates for specific nonlinear reaction-diffusion systems was given in \cite{DeFe06} and followed by e.g. \cite{DeFe_Con, DeFe08, DeFe15, BaFeEv14, MiHaMa14}. The application of these EED inequalities to global solutions of the corresponding reaction-diffusion systems proves (together with Csisz\'ar-Kullback-Pinsker type inequalities) the explicit convergence to equilibrium for these reaction-diffusion systems. We emphasise that all these previous results on entropy methods for systems assumed a {\it detailed balance condition} and, thus, featured the free energy functional as a natural convex Lyapunov functional. A main novelty of this paper lies in demonstrating how the entropy method can be generalised to first order reaction networks without detailed balance equilibria. In particular, we shall consider firstly \emph{weakly reversible networks} and secondly even more general \emph{composite systems consisting of source, transmission and target components} (see below for the precise definitions).
We feel that it is important to point out that, while there are certainly many classical approaches by which linear reaction-diffusion systems can be successfully dealt with, our task at hand is to clarify the entropic structure and the applicability of the entropy method for linear reaction networks as a first step before being able to turn to nonlinear problems in the future. {See \cite{DFT} for such a generalisation of the method to nonlinear reaction-diffusion systems satisfying the so-called complex balance condition (see Definition \ref{ComplexBalance} below).} The goal of the present work is to prove the explicit convergence to equilibrium for the complex balanced and more general reaction-diffusion systems corresponding to first order reaction networks. To be more precise, we study first order reaction networks of the form \begin{figure} \caption{A first-order chemical reaction network} \label{Reaction} \end{figure}\\ where $S_i, i = 1,2,\ldots, N$, are different chemical substances (or species) and $a_{ij}, a_{ji} \geq 0$ are reaction rate constants. In particular, $a_{ij}$ denotes the reaction rate constant of the reaction from the species $S_j$ to $S_i$. First order reaction networks appear in many classical models, see e.g. \cite{Smo,Rot}. More recently, first order catalytic reactions have been used to model transcription and translation of genes in \cite{TVO}. The evolution of the surface morphology during epitaxial growth involves the nucleation and growth of atomic islands, and these processes may be described by first order adsorption and desorption reactions coupled with diffusion along the surface. A first order reaction network can also be used to describe the reversible transitions between various conformational states of proteins (see e.g. \cite{Mayor03}). RNA also exists in several conformations, and the transitions between various folding states follow first order kinetics (see \cite{Bok03}).
In the present paper, we investigate the entropy method and the trend to equilibrium of reaction-diffusion systems modelling first order reaction networks with mass action kinetics. More precisely, we shall consider the reaction network $\mathcal{N}$ in the context of reaction-diffusion equations and assume that for all $i=1,2,\ldots,N$ the substances $S_i$ are described by spatial-temporal concentrations $u_i(x,t)$ at position $x\in \Omega$ and time $t\geq 0$. Here, $\Omega$ shall denote a bounded domain $\Omega \subset \mathbb{R}^n$ with sufficiently smooth boundary $\partial\Omega$ (that is $\partial\Omega\in C^{2+\alpha}$ to avoid all difficulties with boundary regularity, although the below methods should equally work under weaker assumptions) and the outer unit normal $\nu(x)$ for all $x\in\partial\Omega$. Due to the rescaling $x\to |\Omega|^{1/n} x$, we can moreover consider (without loss of generality) domains with normalised volume, i.e. $$ |\Omega|=1. $$ In addition, we assume that each substance $S_i$ diffuses with a diffusion rate $d_i\geq 0$ for all $i=1,2,\ldots,N$. 
Finally, we shall assume mass action law kinetics as model for the reaction rates, which leads to the following linear reaction-diffusion system: \begin{equation}\label{VectorSystem} \begin{cases} X_t = D\Delta X + AX, &\qquad x\in\Omega, \qquad t>0,\\ \partial_{\nu}X = 0, &\qquad x\in\partial\Omega, \qquad t>0,\\ X(x,0) = X_0(x)\ge0, &\qquad x\in\Omega, \end{cases} \end{equation} where $X(x,t) = [u_1(x,t), u_2(x,t), \ldots, u_N(x,t)]^{T}$ denotes the vector of concentrations subject to non-negative initial conditions $X_0(x) = [u_{1,0}(x)\ge0, u_{2,0}(x)\ge0, \ldots, u_{N,0}(x)\ge0]^T$, $D = \text{diag}(d_1,d_2,\ldots,d_N)$ denotes the diagonal diffusion matrix and the reaction matrix $A = (a_{ij}) \in \mathbb{R}^{N\times N}$ satisfies the following conditions: \begin{equation}\label{a_jj} \begin{cases} a_{ij} \geq 0, &\qquad \text{for all } i\not=j, \quad i,j =1,2,\ldots,N,\\ a_{jj} = -\sum_{i=1,i\not= j}^{N}a_{ij}, &\qquad \text{for all } j =1,2,\ldots,N. \end{cases} \end{equation} The conditions \eqref{a_jj} on the reaction matrix $A$ imply in particular that the vector $(1,1,\ldots,1)^T$ constitutes a left-eigenvector corresponding to the eigenvalue zero. Together with homogeneous Neumann boundary conditions this implies that solutions to \eqref{VectorSystem} admit the following {\it conservation of total mass}\,: \begin{equation}\label{MassConservation} \sum_{i=1}^{N}\int_{\Omega}u_i(x,t)dx = \sum_{i=1}^{N}\int_{\Omega}u_{i,0}(x)dx =: M>0, \qquad \text{ for all } t>0, \end{equation} where $M>0$ is the {\it initial total mass}, which we shall assume positive. If $X(x,t)\equiv X(t)$, then system \eqref{VectorSystem} reduces to the corresponding space-homogeneous ODE model. Independently of PDE- or ODE-setting, we recall the following definitions of equilibria from e.g. \cite{HoJa72,Fe79,Vol94}. 
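The structure imposed by \eqref{a_jj} and the resulting mass conservation \eqref{MassConservation} are easy to check numerically; the rate constants in the following sketch are hypothetical and serve only as an illustration:

```python
import numpy as np

# Hypothetical rate constants a_ij >= 0 (rate from S_j to S_i):
a21, a31, a12, a32, a13, a23 = 1.0, 2.0, 0.5, 1.5, 2.5, 0.3

# Reaction matrix built according to (a_jj): each diagonal entry is
# minus the sum of the remaining entries in its column.
A = np.array([[-(a21 + a31),  a12,           a13         ],
              [ a21,          -(a12 + a32),  a23         ],
              [ a31,           a32,          -(a13 + a23)]])

# (1,...,1) is a left eigenvector of A for the eigenvalue zero, so
# the total mass of the ODE system X' = AX is conserved:
# d/dt sum_i u_i = (1,...,1) A X = 0 for every state X.
assert np.allclose(np.ones(3) @ A, 0.0)
```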
\begin{definition}[Homogeneous Equilibrium]\label{Equilibria} \\ A state $X_{\infty} = (u_{1,\infty}, u_{2,\infty}, \ldots, u_{N,\infty})$ is called a homogeneous {equilibrium} or shortly equilibrium of the first order reaction network $\mathcal{N}$ if $AX_{\infty}=0$. \end{definition} \begin{definition}[Detailed Balance Equilibrium]\label{DetailedBalance} \\ A positive equilibrium state $X_{\infty} = (u_{1,\infty}, u_{2,\infty}, \ldots, u_{N,\infty})>0$ is called a {detailed balance equilibrium} for the reaction network $\mathcal{N}$ if a positive reaction rate constant $a_{ij}>0$ for $i\neq j$ implies also a positive reversed reaction rate constant $a_{ji}>0$ and that the forward and backward reaction rates balance at equilibrium, i.e. $$ a_{ji}u_{i,\infty} = a_{ij}u_{j,\infty}. $$ The reaction network $\mathcal{N}$ is said to satisfy the detailed balance condition if it admits a detailed balance equilibrium. \end{definition} \begin{definition}[Complex Balance Equilibrium]\label{ComplexBalance} \\ A positive equilibrium state $X_{\infty} = (u_{1,\infty}, u_{2,\infty}, \ldots, u_{N,\infty})>0$ is called a complex balance equilibrium for the reaction network $\mathcal{N}$ if for all $k=1,2,\ldots,N$, the total in-flow into the substance $S_k$ balances at equilibrium the total out-flow from $S_k$ to all other substances $S_i$, i.e. $$ \sum_{\{1\leq i\leq N:\; a_{ki}>0\}}a_{ki}u_{i,\infty} = \biggl(\sum_{\{1\leq j\leq N:\; a_{jk}>0\}}a_{jk}\biggr)u_{k,\infty}. $$ The reaction network $\mathcal{N}$ is called complex balanced if it admits a complex balance equilibrium. Moreover, for complex balanced chemical reaction networks, all equilibria are complex balanced, see e.g. \cite{Ho72}. \end{definition} \begin{example}[Detailed balance equilibria are complex balance equilibria] \\ It is easy to see that detailed balance equilibria are also complex balance equilibria, while the converse does not hold in general, even for reversible networks.
For example, consider the reaction network in Figure \ref{ReversibleNet}, \begin{figure} \caption{A reversible network} \label{ReversibleNet} \end{figure} where all reaction rate constants $a_{ij}>0$ are assumed positive and the network is thus fully reversible. The corresponding reaction-diffusion system with homogeneous Neumann boundary conditions \begin{equation}\label{3x3} \begin{cases} \partial_tu_1 - d_1\Delta u_1 = -(a_{21}+a_{31})u_1 + a_{12}u_2 + a_{13}u_3,\\ \partial_tu_2 - d_2\Delta u_2 = a_{21}u_1 -(a_{12}+a_{32})u_2 + a_{23}u_3,\\ \partial_tu_3 - d_3\Delta u_3 = a_{31}u_1 + a_{32}u_2 -(a_{13}+a_{23})u_3,\\ \partial_{\nu} u_1 = \partial_{\nu} u_2 = \partial_{\nu} u_3 = 0, \end{cases} \end{equation} exhibits the constant equilibrium $X_{\infty} = (u_{1,\infty}, u_{2,\infty}, u_{3,\infty})$ satisfying $AX_{\infty}=0$, i.e. \begin{equation}\label{equi3x3} \begin{cases} a_{12}u_{2,\infty} + a_{13}u_{3,\infty} = (a_{21}+a_{31})u_{1,\infty},\\ a_{21}u_{1,\infty} + a_{23}u_{3,\infty} = (a_{12}+a_{32})u_{2,\infty},\\ a_{31}u_{1,\infty} + a_{32}u_{2,\infty} = (a_{13}+a_{23})u_{3,\infty}, \end{cases} \end{equation} which has a unique nontrivial solution once the mass conservation \eqref{MassConservation} is taken into account. According to Definition \ref{ComplexBalance}, it is clear that any solution of \eqref{equi3x3} constitutes a complex balance equilibrium for all reaction rate constants $a_{ij}>0$. For $X_{\infty}$ to be a detailed balance equilibrium, however, it is additionally necessary that \begin{equation}\label{detailbalance3x3} \begin{cases} a_{12}u_{2,\infty} = a_{21}u_{1,\infty},\\ a_{23}u_{3,\infty} = a_{32}u_{2,\infty},\\ a_{31}u_{1,\infty} = a_{13}u_{3,\infty}, \end{cases} \end{equation} which obviously implies \eqref{equi3x3}.
Yet the equations \eqref{detailbalance3x3} can only have a solution if \begin{equation}\label{detailbalance3x3cond} \frac{a_{12}\cdot a_{23}\cdot a_{31}}{a_{21}\cdot a_{32}\cdot a_{13}} = 1 \end{equation} holds; in other words, if the product of the reaction rate constants multiplied in the clockwise sense of the above reaction network graph equals the product of the reaction rate constants multiplied in the counterclockwise sense. The condition \eqref{detailbalance3x3cond} is thus necessary and sufficient for system \eqref{3x3} to admit a detailed balance equilibrium. \end{example} \begin{remark}[General definition of detailed and complex balance] \\ The concepts of detailed balance and complex balance are also defined for general higher order chemical reaction networks, see e.g. \cite{Ho72}. For simplicity, we stated here the definition corresponding to the first order network $\mathcal{N}$. In general, one can roughly say that a state $X_{\infty}$ is called a complex balanced equilibrium if at equilibrium the total in-flow to each species $S_i$ is equal to the total out-flow from $S_i$. \end{remark} \begin{remark}[Detailed balance and reversibility] \\ It follows from Definition \ref{DetailedBalance} that if $\mathcal{N}$ satisfies the detailed balance condition, then it is also reversible in the sense that for any reaction $S_i \rightarrow S_j$ also the reverse reaction $S_j \rightarrow S_i$ takes place. \end{remark} The set of complex balanced systems is much larger than that of detailed balance systems. Horn already gave necessary and sufficient conditions for a network to satisfy the complex balance condition in \cite{Ho72}. For the convenience of the reader, we present in the following the associated definitions of directed graphs as representations of reaction networks. The image of the associated graphs will also help in following some of our main estimates.
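Before turning to the graph-theoretic definitions, the dichotomy above can be verified numerically. In the following sketch the rate constants are hypothetical, with $a_{31}$ chosen so that the cycle condition \eqref{detailbalance3x3cond} holds; the equilibrium is computed from the kernel of $A$ and then indeed satisfies the pairwise relations \eqref{detailbalance3x3}:

```python
import numpy as np

# Hypothetical rates; a31 is fixed so that a12*a23*a31 = a21*a32*a13.
a21, a12, a32, a23, a13 = 1.0, 2.0, 1.0, 3.0, 1.0
a31 = a21 * a32 * a13 / (a12 * a23)
assert np.isclose(a12 * a23 * a31 / (a21 * a32 * a13), 1.0)

A = np.array([[-(a21 + a31),  a12,           a13         ],
              [ a21,          -(a12 + a32),  a23         ],
              [ a31,           a32,          -(a13 + a23)]])

# Equilibrium: the kernel of A, normalised to total mass M = 1.
w, v = np.linalg.eig(A)
x = np.real(v[:, np.argmin(np.abs(w))])
x = x / x.sum()

# Detailed balance holds pairwise, not merely complex balance:
assert np.isclose(a21 * x[0], a12 * x[1])
assert np.isclose(a32 * x[1], a23 * x[2])
assert np.isclose(a13 * x[2], a31 * x[0])
```

Perturbing any single rate constant breaks the cycle condition, and the pairwise balance assertions then fail even though the kernel of $A$ still yields a (complex balance) equilibrium.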
A directed graph $G$ corresponding to a given reaction network $\mathcal{N}$ is defined by considering the substances $S_i, i = 1,2,\ldots, N,$ as the $N$ nodes of $G$, which are connected for all $i\not=j = 1,2,\ldots, N$ by an edge with starting node $S_i$ and finishing node $S_j$ if and only if the reaction $S_i \xrightarrow{a_{ji}} S_j$ occurs with a positive reaction rate constant $a_{ji}>0$. \begin{definition}[Linkage classes partition of a first order reaction network, Connected networks]\label{linkageclass} \\ \textcolor{black}{ A linkage class $\mathcal{L}$ of a first order network $\mathcal{N}$ is a maximal set of connected substances, i.e. $S_i, S_j \in \mathcal{L}$ implies that $S_i$ and $S_j$ are connected (in the sense {that there exist $S_i \equiv S_{r_1},S_{r_2}\ldots, S_{r_{k-1}}, S_{r_k} \equiv S_j$ such that for each $1\le \ell\le k-1$, either the reaction $S_{r_\ell} \to S_{r_{\ell+1}}$ or $S_{r_{\ell+1}}\to S_{r_{\ell}}$ happens}) but $S_i \in \mathcal{L}$ and $S_j \not\in \mathcal{L}$ implies that $S_i$ and $S_j$ are not connected. } \textcolor{black} {If a reaction network consists only of one linkage class, we shall call such a network \emph{connected.}} \end{definition} \begin{definition}[Weak reversibility of a first order reaction network]\label{weaklyreversible} \\ A {\color{black}first order reaction network $\mathcal N$} is called weakly reversible if for any {\color{black} reaction} $S_i \rightarrow S_j$ with $i\not= j$, there exists a {\color{black} chain of reactions} $S_j \equiv S_{j_1} \rightarrow S_{j_2}\rightarrow\ldots \rightarrow S_{j_r} \equiv S_i$ where $S_{j_1}, S_{j_2}, \ldots, S_{j_r}$ are other {\color{black}chemical substances of $\mathcal N$}. 
\textcolor{black}{If a reaction network $\mathcal N$ is weakly reversible, then we also call the corresponding directed graph $G$ \emph{weakly reversible}.} \end{definition} \begin{definition}[Strongly connected components of a directed graph]\label{stronglyconnected} \\ A subgraph $H\subset G$ of a directed graph $G$ is called a strongly connected component if for any two nodes $S_i, S_j$ in $H$, we can find a path from $S_i$ to $S_j$ of the form $S_i \rightarrow S_{i_1} \rightarrow \ldots \rightarrow S_{i_r} \rightarrow S_j$ with all $S_{i_1}, S_{i_2}, \ldots, S_{i_r}$ belonging to $H$. \textcolor{black}{We call a first order reaction network $\mathcal N$ strongly connected when its corresponding graph $G$ is \emph{strongly connected}.} \end{definition} \begin{remark}[Partition of weakly reversible first order reaction networks $\mathcal{N}$ into disjoint strongly connected components/subnetworks]\hfil\label{Partitions}\\ \textcolor{black}{ Firstly, it follows directly from Definition \ref{linkageclass} that any first order reaction network $\mathcal{N}$ can be uniquely partitioned into a pairwise disjoint union of linkage classes and each linkage class $\mathcal{L}$ constitutes a connected subnetwork $\mathcal{N}_{\mathcal{L}}$. In particular, for a weakly reversible first order reaction network $\mathcal N$, each linkage class $\mathcal{L}$ forms a connected weakly reversible subnetwork $\mathcal{N}_{\mathcal{L}}$ and it is straightforward to show that the directed graph corresponding to $\mathcal{N}_{\mathcal{L}}$ is strongly connected according to Definition \ref{stronglyconnected}. (Consider that for all reactions being part of the connection between $S_i, S_j\in \mathcal{N}_{\mathcal{L}}$, the weak reversibility implies the existence of a returning chain of reactions. Thus, there exist chains of reactions connecting $S_i$ to $S_j$ and vice versa.) 
} \textcolor{black}{ Secondly, any directed graph $G$ can be partitioned into a pairwise disjoint union of strongly connected components, all of which are weakly reversible according to Definition \ref{weaklyreversible}. Note that these strongly connected components can still be connected via ``non-weakly-reversible" reactions (see e.g. Figure \ref{NonReversibleReaction}). Therefore, for general directed graphs, multiple strongly connected components may constitute one linkage class. However, if the directed graph $G$ is additionally weakly reversible, then each strongly connected component has to constitute exactly one linkage class, since we have already seen that the weakly reversible subnetworks $\mathcal{N}_{\mathcal{L}}$ corresponding to one linkage class $\mathcal{L}$ are strongly connected. } \textcolor{black}{ Thus, for weakly reversible first order reaction networks $\mathcal{N}$, the partition into linkage classes is identical to the partition into strongly connected components of the corresponding directed graphs. } {\color{black} Therefore, with a marginal abuse of notation, we will use the terminology ``strongly connected component" or ``strongly connected subnetwork" both for such a connected weakly reversible first order reaction subnetwork $\mathcal{N}_{\mathcal{L}}$ and its corresponding strongly connected subgraph/component.} \end{remark} \begin{remark}[\textcolor{black}{Linkage classes of first order reaction networks can be treated independently}]\hfil\label{StronglyConnected}\\ For first order reaction networks, each node represents exactly one substance. \textcolor{black}{ Thus, any linkage class of a first order reaction network can be treated independently from the others. In particular, all the strongly connected components of a weakly reversible first order reaction network can be treated independently since these subnetworks form different linkage classes.
} For higher order reaction networks, \textcolor{black}{where the nodes of the corresponding graphs are so-called complexes consisting of multiple substances,} this is not necessarily true since one substance might need to be represented by different nodes. \end{remark} Because of Remarks \ref{Partitions} and \ref{StronglyConnected}, we will consider in Section \ref{weakly} \textcolor{black}{weakly reversible first order networks partitioned into strongly connected first order reaction subnetworks $\mathcal{N}_{\mathcal{L}}$, and each strongly connected component $\mathcal{N}_{\mathcal{L}}$ can (w.l.o.g.) be treated independently.} In Section \ref{non-weakly}, we will consider (w.l.o.g.) connected reaction networks $\mathcal{N}$ consisting of one linkage class, yet we shall not assume {\color{black} weak reversibility. Hence the corresponding directed graphs are not strongly connected and may consist of multiple strongly connected components, but the underlying undirected graphs are connected (see e.g. Figure \ref{NonReversibleReaction})}. \begin{lemma}[Strongly connected networks, irreducible reaction matrices and complex balance equilibria]\label{Characteristic}\hfil\\ \textcolor{black}{ For any first order reaction network $\mathcal N$ the following statements are equivalent:} \begin{itemize} \item The first order reaction network $\mathcal{N}$ is \textcolor{black}{strongly connected}. \item The corresponding reaction matrix $A$ of $\mathcal{N}$ is irreducible. \item The first order reaction network $\mathcal{N}$ is complex balanced and, for any positive mass $M>0$ (as set by the conservation law \eqref{MassConservation}), there exists a unique positive complex balance equilibrium $X_{\infty} = (u_{1,\infty}, u_{2,\infty}, \ldots, u_{N,\infty})>0$ of system \eqref{VectorSystem}, which satisfies \begin{equation}\label{Equilibrium} \begin{cases} AX_{\infty} = 0,\\ \sum_{i=1}^{N}u_{i,\infty} = M>0.
\end{cases} \end{equation} \end{itemize} \end{lemma} \begin{proof} The equivalence of strong connectivity for first order networks and irreducibility of the reaction matrix $A$ follows e.g. from \cite[Definition 2.1, page 46]{Seneta} and \cite[Theorem 3.2, page 78]{Min88}. Next, the Perron-Frobenius theorem implies for any irreducible reaction matrix $A$ and any positive mass $\sum_{i=1}^{N}u_{i,\infty}=M>0$ the existence of a unique positive equilibrium, see e.g. \cite{Seneta,Per07} and Lemma \ref{UniqueEqui} below. This equilibrium satisfies $AX_{\infty} = 0$ and is thus a complex balance equilibrium according to Definition \ref{ComplexBalance}. Hence, the strongly connected first order reaction network $\mathcal{N}$ is complex balanced (independently of the value of $M$). Finally, Lemma \ref{UniqueEqui} below implies that strongly connected first order reaction networks possessing a unique positive equilibrium (for fixed $M>0$) have irreducible reaction matrices $A$. \end{proof} \begin{remark}[Complex balanced higher order systems are necessarily weakly reversible] \\ For higher order reaction networks, it only holds that systems with a complex balance equilibrium are necessarily weakly reversible. Thus, weakly reversible systems constitute the more general class of reaction networks. \end{remark} \begin{remark}\label{PDEsetting} The equilibrium $X_{\infty}$ in \eqref{Equilibrium} is spatially homogeneous. Thus, it coincides with the equilibrium for the corresponding spatially homogeneous ODE system $X_t = AX$ of the reaction network given in Figure \ref{Reaction}. In \cite{Ar11} or \cite{SiMa}, the authors proved that $X(t) \longrightarrow X_{\infty}$ as $t\longrightarrow +\infty$. However, the methods used in those papers cannot be directly applied to prove the convergence to equilibrium for the PDE system \eqref{VectorSystem}.
\end{remark} The first main result of this paper concerns the convergence to equilibrium for weakly reversible reaction networks of the form displayed in Figure \ref{Reaction}. We apply the entropy method to prove explicit exponential convergence of solutions of system \eqref{VectorSystem} to the unique equilibrium. As mentioned above, all previous results on explicit EED inequalities (see e.g. \cite{DeFe06,DeFe_Con,DeFe08,DeFe15,BaFeEv14, MiHaMa14}) considered reaction-diffusion systems satisfying a detailed balance condition. In the current paper, we shall show that the following quadratic relative entropy between any two solutions $X=(u_1,\ldots,u_N)$ and $Y=(v_1,\ldots,v_N)$ \begin{equation}\label{RelativeEntropy_Quad} \mathcal{E}(X|Y)(t) = \sum_{i=1}^{N}\int_{\Omega}\frac{|u_i|^2}{v_i}dx \end{equation} is an entropy functional, see Lemma \ref{ExplicitEnDiss} below, which is the first key result of this paper. In particular, we can consider the special case $Y=X_{\infty}$ for such an entropy functional. By using the linearity of first order systems, it is then straightforward to check (by using \eqref{a_jj} and $AX_{\infty}=0$) that the quadratic relative entropy towards an equilibrium state $X_{\infty}$, i.e.
\begin{equation}\label{RelativeEntropy} \mathcal{E}(X-X_{\infty}|X_{\infty}) = \sum_{i=1}^{N}\int_{\Omega}\frac{|u_i - u_{i,\infty}|^2}{u_{i,\infty}}dx \end{equation} is equally an entropy functional, which decays monotonically in time according to the following explicit form of the entropy dissipation functional $\frac{d}{dt}\mathcal{E}(X- X_{\infty}|X_{\infty})=-\mathcal{D}(X-X_{\infty}|X_{\infty})$: \begin{equation}\label{EntropyDissipation} \begin{aligned} \mathcal{D}(X-X_{\infty}|X_{\infty}) &= 2\sum_{i=1}^{N}d_i\int_{\Omega}\frac{|\nabla (u_i- u_{i,\infty})|^2}{u_{i,\infty}}dx\\ &\quad\, + \sum_{i,j=1;i<j}^{N}(a_{ji}u_{i,\infty} + a_{ij}u_{j,\infty})\int_{\Omega}\left(\frac{u_i- u_{i,\infty}}{u_{i,\infty}} - \frac{u_j- u_{j,\infty}}{u_{j,\infty}}\right)^2dx\\ &= 2\sum_{i=1}^{N}d_i\int_{\Omega}\frac{|\nabla u_i|^2}{u_{i,\infty}}dx + \sum_{i,j=1;i<j}^{N}(a_{ji}u_{i,\infty} + a_{ij}u_{j,\infty})\int_{\Omega}\left(\frac{u_i}{u_{i,\infty}} - \frac{u_j}{u_{j,\infty}}\right)^2dx\\ &= \mathcal{D}(X|X_{\infty})= - \frac{d}{dt}\mathcal{E}(X|X_{\infty}) \ge 0.\\ \end{aligned} \end{equation} The dissipative structure of the quadratic relative entropy towards equilibrium \eqref{RelativeEntropy} is a special case of the generalised relative entropies discussed e.g. in \cite[Chapter 6]{Per07}. The entropy functional \eqref{RelativeEntropy_Quad}, i.e. the observation of the dissipativeness of the relative entropy between any two solutions, is however related to a general property of linear Markov processes, which was recently shown in \cite{FJ16}. With the help of the explicit form of entropy dissipation \eqref{EntropyDissipation}, we are able to show (in Lemma \ref{EEDEstimate} below) an entropy-entropy dissipation inequality of the form \begin{equation}\label{EEDa} \mathcal{D}(X-X_{\infty}|X_{\infty}) \geq \lambda\,\mathcal{E}(X-X_{\infty}|X_{\infty}), \end{equation} where $\lambda>0$ is an explicitly computable constant.
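The decay mechanism behind \eqref{EEDa} can be illustrated on the ODE part of the system (diffusion dropped). In the following Python sketch, the network, its rates and the initial data are illustrative assumptions: for the 3-cycle $S_1\to S_2\to S_3\to S_1$ with unit rates (an irreducible, complex balanced but not detailed balanced matrix $A$), the quadratic relative entropy \eqref{RelativeEntropy} decays monotonically along an explicit Euler discretisation of $X'=AX$:

```python
# Hypothetical 3-cycle S1 -> S2 -> S3 -> S1 with unit rates; the columns of
# the reaction matrix A sum to zero, cf. \eqref{a_jj}, so mass is conserved.
A = [[-1.0, 0.0, 1.0],
     [1.0, -1.0, 0.0],
     [0.0, 1.0, -1.0]]
X = [2.5, 0.4, 0.1]               # initial data, total mass M = 3
X_inf = [1.0, 1.0, 1.0]           # unique equilibrium: A X_inf = 0, mass 3

def entropy(X):
    """Quadratic relative entropy E(X - X_inf | X_inf)."""
    return sum((x - e) ** 2 / e for x, e in zip(X, X_inf))

dt, history = 1e-3, [entropy(X)]
for _ in range(10000):            # explicit Euler up to t = 10
    X = [x + dt * sum(A[i][j] * X[j] for j in range(3))
         for i, x in enumerate(X)]
    history.append(entropy(X))

# Monotone decay of the entropy along the discrete trajectory
assert all(b <= a + 1e-12 for a, b in zip(history, history[1:]))
print(history[0], history[-1])
```

For this matrix one can check by hand that the reaction part of the dissipation \eqref{EntropyDissipation} equals three times the weighted distance to equilibrium on the mass-conserving subspace, so the discrete entropy contracts strictly in every Euler step for small $dt$.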
Once the EED inequality \eqref{EEDa} is proven, the statement of the first main theorem follows from a standard Gronwall argument, see Section \ref{weakly} below: \begin{theorem}[Exponential equilibration of weakly reversible first order reaction networks] \label{FirstResult}\\ \textcolor{black}{ Consider a weakly reversible first order reaction network partitioned into linkage classes and (w.l.o.g.) any corresponding strongly connected subnetwork $\mathcal{N}_{\mathcal{L}}$. Assume for $\mathcal{N}_{\mathcal{L}}$} that the diffusion coefficients $d_i$ are positive for all $i = 1,2,\ldots, N$, and the initial mass $M$ is positive. Then, the unique global solution to the initial-boundary value problem \eqref{VectorSystem} converges exponentially to the unique positive equilibrium $X_{\infty} = (u_{1,\infty}, u_{2,\infty}, \ldots, u_{N,\infty})$, i.e. \begin{equation*} \sum_{i=1}^{N}\int_{\Omega}\frac{|u_i(t) - u_{i,\infty}|^2}{u_{i,\infty}}dx \leq e^{-\lambda t}\sum_{i=1}^{N}\int_{\Omega}\frac{|u_{i,0} - u_{i,\infty}|^2}{u_{i,\infty}}dx, \end{equation*} where the constant $\lambda >0$ {\color{black}depends explicitly on the reaction matrix $A$, the domain $\Omega$, the diffusion matrix $D$ and the initial mass $M$.} \end{theorem} \begin{remark}[Lyapunov functionals for ODE systems]\hfil\label{Lj}\\ {For ODE systems, Lyapunov functionals have mainly been considered in the analysis of nonlinear ODE systems. Moreover, for such nonlinear systems, $L^1$-type Lyapunov functionals are most commonly used in the study of the large-time behaviour.} For reaction-diffusion systems, however, $L^1$-functionals are not useful for the entropy method and proving explicit convergence to equilibrium, since they do not measure the spatial diffusion in an exploitable way.
We also remark that while logarithmic relative entropy functionals of the form \begin{equation}\label{Entropy-Like} V_{X_{\infty}}(X)(t) = \sum_{i=1}^{N}\left(u_i(\ln u_i - \ln u_{i,\infty} - 1) + u_{i,\infty}\right) \end{equation} were known to constitute a monotonically decaying Lyapunov functional for complex balanced ODE reaction networks (see e.g. \cite{HoJa72, Man,SiMa}), to our knowledge and somewhat surprisingly, no explicit expression of the {\it entropy dissipation} $-dV/dt$ in complex balanced systems has been derived so far. We also refer the reader to e.g. \cite{MiSi} for the stability of some mass action law reaction-diffusion systems, where the author used techniques of $\omega$-limit sets along with the monotonicity of an $L^1$-type Lyapunov functional. Our results in this paper are significantly stronger in the sense that we show, by using the entropy method, the exponential convergence to equilibrium with computable rates. In addition and in comparison to $\omega$-limit techniques, the entropy method also has the major advantage of relying on functional inequalities rather than on specific estimates of solutions to a given system. Having such functional entropy entropy-dissipation inequalities once and for all established makes the entropy method robust with respect to model variations and generalisations. As an example, it is the intrinsic robustness of the entropy method which makes it possible to also apply it to non weakly reversible reaction networks, see Theorems \ref{SourceTransmission} and \ref{Target} below. \end{remark} The assumption on the positivity of all diffusion coefficients in Theorem \ref{FirstResult} is not necessary as such. As already shown in e.g. \cite{DeFe_Con, BaFeEv14}, the combined effect of diffusion of a species and its weakly reversible reaction with other (possibly non-diffusive) species will lead to an indirect ``diffusion-effect" on the latter species.
This indirect diffusion-effect can also be measured in terms of functional inequalities. Hence the exponential convergence to equilibrium still holds for systems with partially degenerate diffusion. \textcolor{black}{ Note that the indirect ``diffusion transfer'' and the convergence results of this paper resemble to some degree the framework of hypocoercivity for evolution equations like linear kinetic Fokker-Planck equations, see e.g. \cite{Vil09,DMS,AAS}. However, while hypocoercivity typically requires the use of suitably constructed Lyapunov functionals, the indirect ``diffusion-effect" can be entirely expressed in functional inequalities linking the relative entropy and the associated entropy dissipation functional. The entropy method presented in this paper proves convergence to equilibrium essentially regardless of full or degenerate diffusion matrices.} The exponential convergence for weakly reversible systems \eqref{VectorSystem} with degenerate diffusion is stated in the following Theorem \ref{DegenerateDiffusion} to be proved in Section \ref{weakly} below: \begin{theorem}[Equilibration of linear networks with degenerate diffusion] \label{DegenerateDiffusion}\\ \textcolor{black}{ Consider a weakly reversible first order reaction network partitioned into linkage classes and (w.l.o.g.) any corresponding strongly connected subnetwork $\mathcal{N}_{\mathcal{L}}$. Assume that the initial mass $M$ is positive for $\mathcal{N}_{\mathcal{L}}$}. Moreover, assume that at least one diffusion coefficient $d_i$ is positive for some $i = 1, 2, \ldots, N$.
Then, the solution to \eqref{VectorSystem} converges exponentially fast to the unique positive equilibrium $X_{\infty} = (u_{1,\infty}, u_{2,\infty}, \ldots, u_{N,\infty})$: \begin{equation*} \sum_{i=1}^{N}\int_{\Omega}\frac{|u_i(t) - u_{i,\infty}|^2}{u_{i,\infty}}dx \leq e^{-\lambda' t}\sum_{i=1}^{N}\int_{\Omega}\frac{|u_{i,0} - u_{i,\infty}|^2}{u_{i,\infty}}dx \end{equation*} with a computable rate $\lambda'>0$, {\color{black}which depends explicitly on $A$, $\Omega$, $D$ and $M$}. \end{theorem} \begin{remark}[Same results for linear ODE reaction networks]\hfil\\ We remark that our approach can of course be adapted to equally apply to linear ODE reaction networks by eliminating the terms and calculations concerning spatial diffusion. Thus, all the results of this paper hold equally for such linear ODE systems. \end{remark} As the second main result of this manuscript, we shall derive an entropy approach and prove convergence to equilibrium for reaction networks as in Figure \ref{Reaction}, for which {\it the weak reversibility assumption does not hold}. For first order reaction networks, this implies that the system is not complex balanced, or in other words, that equilibria are not necessarily positive. Due to the lack of positivity of equilibria, it follows immediately that the relative entropy used for weakly reversible systems is not directly applicable. In the following, we propose a modified entropy approach. At first, it is necessary to understand the structure of non weakly reversible reaction networks. We state here the necessary terminology and the main ideas. Since for any non weakly reversible linkage class, the associated directed graph $G$ is connected (which means that the underlying undirected version of $G$ is a connected graph) but not strongly connected, $G$ consists of $r\geq 2$ strongly connected components, which we denote by $C_1, C_2, \ldots, C_r$. Then, we can construct a {\it directed acyclic graph} $G^C$, i.e.
$G^C$ is a directed graph with no directed cycles as follows: \begin{itemize} \item[a)] $G^C$ has as nodes the $r$ strongly connected components $C_1,C_2, \ldots, C_r$, \item[b)] for two nodes $C_i$ and $C_j$ of $G^C$, if there exists a reaction $C_i\ni S_{k} \xrightarrow{a_{\ell k}}S_{\ell} \in C_j$ with $a_{\ell k}>0$, then there also exists the edge $C_i \rightarrow C_j$ in $G^C$. \end{itemize} Due to the structure of $G^C$, its nodes, or equivalently the strongly connected components of $G$, can be labeled as one of the following three types: \begin{itemize} \item A strongly connected component $C_i$ is called a {\it source component} if there is no in-flow to $C_i$, i.e. there does not exist an edge $S_k\rightarrow S_j$ where $S_k\not\in C_i$ and $S_j\in C_i$. \item A strongly connected component $C_i$ is called a {\it target component} if there is no out-flow from $C_i$, i.e. there does not exist an edge $S_k \rightarrow S_j$ where $S_k\in C_i$ and $S_j\not\in C_i$. \item If $C_i$ is neither a source component nor a target component, then we call $C_i$ a {\it transmission component}. \end{itemize} \begin{example} Consider the reaction network in Figure \ref{NonReversibleReaction}. \begin{figure} \caption{A non-weakly reversible reaction network consisting of four strongly connected components} \label{NonReversibleReaction} \end{figure} The depicted network has $4$ strongly connected components $C_1 = \{S_1, S_2\},\, C_2 = \{S_3\},\, C_3 = \{S_4, S_5\},\, C_4 = \{S_6\}$, where $C_1$ is a source component, $C_2$ is a transmission component and $C_3, C_4$ are target components. \end{example} By definition, each of the three types of strongly connected components is subject to different dynamics, which can be written as follows: Let $C_i$ be a strongly connected component and denote by $X_i$ the concentrations within $C_i$. Moreover, denote by $A_i$ the reaction matrix formed by all reactions within the component $C_i$.
Then, we have \begin{itemize} \item for a source component $C_i$: \begin{equation*} \partial_t{X}_{i} - D_{i}\Delta X_{i} = A_iX_{i} + \mathcal{F}_i^{out}, \end{equation*} where $\mathcal{F}_i^{out}$ summarises the out-flow from the source component $C_i$. \item for a target component $C_i$: \begin{equation*} \partial_t{X}_{i} - D_{i}\Delta X_{i} = \mathcal{F}_i^{in} + A_iX_{i}, \end{equation*} where $\mathcal{F}_i^{in}$ summarises the in-flow into the target component $C_i$. \item for a transmission component $C_i$: \begin{equation*} \partial_t{X}_{i} - D_{i}\Delta X_{i} = \mathcal{F}_i^{in} + A_iX_{i} + \mathcal{F}_i^{out}, \end{equation*} where $\mathcal{F}_i^{in}$, $\mathcal{F}_i^{out}$ are the in/out-flow of the transmission component $C_i$. \end{itemize} In the dynamics of transmission and target components, the in-flow $\mathcal{F}_i^{in}$ depends only on species which do not belong to $C_i$, so that $\mathcal{F}_i^{in}$ can be treated as an external source for the system for $C_i$. However, it may happen that $\mathcal{F}_i^{in}$ contains inflow from species whose behaviour is not a-priori known. For acyclic graphs $G^C$, however, it is possible to avoid these difficulties, since the {\it topological order} of acyclic graphs allows one to re-order the $r$ strongly connected components $C_1,C_2,\ldots, C_r$ in such a way that for every edge $C_i \rightarrow C_j$ of $G^C$ it holds that $i<j$. This permits us to study the dynamics of all components $C_i$ sequentially according to the topological order: whenever we consider a transmission component (or later a target component) $C_i$, the required in-flow $\mathcal{F}_i^{in}$ contains only species whose behaviour is already known. Due to the structure of the network, it is expected that species belonging to source or transmission components subsequently lose mass, so that their concentrations decay to zero as time goes to infinity.
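The decomposition into strongly connected components and their classification into source, transmission and target types can be sketched algorithmically. In the following Python sketch the edge set is a hypothetical one, chosen only to be consistent with the component structure described for Figure \ref{NonReversibleReaction}; Kosaraju's two-pass depth-first search computes the components:

```python
# Hypothetical reactions: S1 <-> S2, S2 -> S3, S3 -> S4, S4 <-> S5, S3 -> S6.
edges = [(1, 2), (2, 1), (2, 3), (3, 4), (4, 5), (5, 4), (3, 6)]
nodes = sorted({v for e in edges for v in e})
adj = {v: [] for v in nodes}       # forward adjacency
radj = {v: [] for v in nodes}      # reversed adjacency (for the second pass)
for s, t in edges:
    adj[s].append(t)
    radj[t].append(s)

def kosaraju():
    """Strongly connected components via Kosaraju's algorithm."""
    order, seen = [], set()
    def dfs1(v):                   # first pass: record finish order
        seen.add(v)
        for w in adj[v]:
            if w not in seen:
                dfs1(w)
        order.append(v)
    for v in nodes:
        if v not in seen:
            dfs1(v)
    comp = {}
    def dfs2(v, c):                # second pass on the reversed graph
        comp[v] = c
        for w in radj[v]:
            if w not in comp:
                dfs2(w, c)
    c = 0
    for v in reversed(order):
        if v not in comp:
            c += 1
            dfs2(v, c)
    return comp

comp = kosaraju()

def kind(c):
    """Classify component c as source / transmission / target."""
    members = {v for v in nodes if comp[v] == c}
    has_in = any(comp[s] != c for s, t in edges if t in members)
    has_out = any(comp[t] != c for s, t in edges if s in members)
    return {(False, True): "source", (True, False): "target",
            (True, True): "transmission", (False, False): "isolated"}[(has_in, has_out)]

sccs = {c: sorted(v for v in nodes if comp[v] == c) for c in set(comp.values())}
print({tuple(m): kind(c) for c, m in sccs.items()})
```

For these hypothetical edges the algorithm recovers exactly the partition $\{S_1,S_2\}$ (source), $\{S_3\}$ (transmission), $\{S_4,S_5\}$ and $\{S_6\}$ (targets) described in the example above.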
In contrast, the species belonging to a target component converge to an equilibrium state, which is determined by the reactions within this component and by the mass ``injected'' from other components. Since the source and transmission components do not converge to positive equilibria, the relative entropy method used for weakly reversible networks is not directly applicable. Instead, for each component $C_i$ we will modify the entropy method by introducing an \emph{artificial equilibrium state with normalised mass}, which balances the reactions within $C_i$. The artificial equilibrium will allow us to consider a quadratic functional, which is similar to the relative entropy in weakly reversible networks and which can be proved to decay exponentially to zero. This result is stated in the following theorem: \begin{theorem}[Exponential decay to zero of source and transmission components] \label{SourceTransmission}\\ \textcolor{black}{Consider an arbitrary first order reaction network partitioned into linkage classes and (w.l.o.g.) any corresponding connected subnetwork $\mathcal{N}_{\mathcal{L}}$. Assume for $\mathcal{N}_{\mathcal{L}}$ that all diffusion coefficients $d_i$ are positive.} Then, for each $C_i$ being a source or a transmission component of $\mathcal{N}_{\mathcal{L}}$, there exist constants $K_i>0$ and $\lambda_i>0$ {\color{black}depending explicitly on $A_i$ and $\Omega$} such that, for any species $S_{\ell} \in C_i$, the concentration $u_{\ell}$ of $S_{\ell}$ decays exponentially to zero, i.e. \begin{equation*} \|u_{\ell}(t,\cdot)\|_{L^2(\Omega)}^2 \leq K_ie^{-\lambda_i t}, \qquad \text{ for all }\ t>0. \end{equation*} \end{theorem} For a target component $C_i$, due to the in-flow $\mathcal{F}_i^{in}$, the total mass of $C_i$ is not conserved but increases. Hence, $C_i$ does not possess an equilibrium which, as for weakly reversible networks, is explicitly given in terms of the reaction rates and the conserved initial total mass.
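This decay can be illustrated in the ODE setting (diffusion dropped, all rate constants set to $1$; the edge set is again a hypothetical one consistent with the component structure of Figure \ref{NonReversibleReaction}). The following Python sketch integrates $X'=AX$ by explicit Euler and checks that the species of the source component $\{S_1,S_2\}$ and the transmission component $\{S_3\}$ vanish while the total mass is conserved:

```python
# Hypothetical reactions (all rates 1): S1 <-> S2, S2 -> S3, S3 -> S4,
# S4 <-> S5, S3 -> S6.  Build the reaction matrix A column by column.
N = 6
reactions = [(1, 2), (2, 1), (2, 3), (3, 4), (4, 5), (5, 4), (3, 6)]
A = [[0.0] * N for _ in range(N)]
for s, t in reactions:
    A[t - 1][s - 1] += 1.0         # gain for the product species
    A[s - 1][s - 1] -= 1.0         # loss for the educt, cf. \eqref{a_jj}

X = [1.0] * N                      # initial data, total mass M = 6
dt = 1e-3
for _ in range(40000):             # explicit Euler up to t = 40
    X = [x + dt * sum(A[i][j] * X[j] for j in range(N))
         for i, x in enumerate(X)]

assert abs(sum(X) - 6.0) < 1e-9    # total mass is conserved
assert all(X[i] < 1e-4 for i in (0, 1, 2))   # source/transmission decay
print([round(x, 4) for x in X])
```

In this sketch the mass lost by $\{S_1,S_2,S_3\}$ splits evenly between the two out-reactions of $S_3$, so the target components $\{S_4,S_5\}$ and $\{S_6\}$ absorb the injected mass in the sense described below.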
However, since each target component is strongly connected and thus a weakly reversible reaction network with mass influx, there still exists a unique, positive equilibrium of $C_i$ denoted by $X_{i,\infty}$, which balances the reactions within $C_i$ and has a total mass which is the sum of the total initial mass of $C_i$ and the total ``injected mass'' from the other components via the in-flow $\mathcal{F}_i^{in}$ (see Lemma \ref{FinalStateTarget}). We emphasise that in general the injected mass is not given explicitly but depends on the time evolution of all the influencing species higher up with respect to the topological order of the graph $G^C$. Since the equilibrium $X_{i,\infty}$ is positive, we can again use a relative entropy functional to prove the convergence of the species belonging to a target component to their corresponding equilibrium states. \begin{theorem}[Exponential convergence for target components] \label{Target}\\ \textcolor{black}{Consider an arbitrary first order reaction network partitioned into linkage classes and (w.l.o.g.) any corresponding connected subnetwork $\mathcal{N}_{\mathcal{L}}$.} Assume for $\mathcal{N}_{\mathcal{L}}$ that all diffusion coefficients $d_i$ are positive. Then, for all target components $C_i = \{S_{i_1}, S_{i_2}, \ldots, S_{i_{N_i}}\}$ of $\mathcal{N}_{\mathcal{L}}$, where $N_i$ is the number of species belonging to $C_i$, there exists a unique positive equilibrium state $X_{i,\infty} = (u_{i_1,\infty}, \ldots, u_{i_{N_i},\infty})$ and the concentrations $u_{i_\ell}$ of $S_{i_\ell}$ converge exponentially to the corresponding equilibrium values \begin{equation*} \|u_{i_\ell}(t) - u_{i_\ell,\infty}\|_{L^2(\Omega)}^2 \leq K_ie^{-\lambda_i t}, \qquad \text{ for all } t>0, \end{equation*} {\color{black}with the constants $K_i>0$ and $\lambda_i >0$ depending explicitly on $A_i$, $\Omega$ and $D_i$ and on the equilibrium state $X_{i,\infty}$.
} \end{theorem} \textcolor{black}{ \begin{remark} Note that by Lemma \ref{FinalStateTarget}, the equilibrium state $X_{i,\infty}$ depends explicitly on the mass injected into the target component $C_i$, but that injected mass itself depends non-explicitly on the initial data and on the history of the reaction-diffusion network. \end{remark}} \begin{remark} We remark that in the same way as Theorem \ref{DegenerateDiffusion} generalises Theorem \ref{FirstResult} to allow for degenerate diffusion matrices, it is equally possible to generalise Theorems \ref{SourceTransmission} and \ref{Target} in the sense that it is sufficient to assume that for each target component at least one diffusion coefficient is positive. In particular, the proof of Theorem \ref{SourceTransmission} holds independently of the entries of the non-negative diffusion matrices $D_i$. \end{remark} \noindent\underline{Outline:} The rest of the paper is organised as follows: In Section 2, we present the entropy method for weakly reversible networks and prove exponential convergence to the positive equilibrium. Non weakly reversible networks will be investigated in Section 3. By using the structure of the underlying graphs, we are able to completely resolve the large-time behaviour of all species belonging to such first order networks. {We also remark that all constants in this manuscript are explicit in the sense that they are derived in constructive ways. However, since these constants are not optimal, we will denote them by using generic letters like $K_i$ or $\lambda_i$, etc. The issue of optimal rates and constants for the convergence is subtle, and can be investigated in future works.} \noindent\underline{Notation:} We shall use the shortcut $\overline{f}=\int_{\Omega} f(x) \,dx$, whenever $|\Omega|=1$, and $\|\cdot\|$ for the usual norm in $L^2(\Omega)$, i.e. \begin{equation*} \|f\|^2 = \int_{\Omega}|f(x)|^2dx.
\end{equation*} \section{Strongly connected first order networks}\label{weakly} \textcolor{black}{ In this section, we consider strongly connected first order reaction networks $\mathcal{N}$, for which the associated directed graph is strongly connected. This is w.l.o.g. by Remarks \ref{Partitions} and \ref{StronglyConnected}, since any weakly reversible first order reaction network can be partitioned into disjoint strongly connected components/subnetworks, which can be treated independently. } Moreover, we recall that $\Omega\subset\mathbb{R}^n$ is a bounded domain with smooth boundary $\partial\Omega$ (say $\partial\Omega\in C^{2+\alpha}$) and normalised volume $|\Omega| = 1$ (w.l.o.g. by rescaling). Finally, we recall the system \eqref{VectorSystem} \begin{equation}\label{VectorSystem_ReCall} \begin{cases} \partial_tX - D\Delta X = AX, &\quad x\in\Omega, \quad t>0,\\ \partial_{\nu}X = 0, &\quad x\in\partial\Omega, \quad t>0,\\ X(x,0) = X_0(x), &\quad x\in\Omega, \end{cases} \end{equation} where $X = [u_1, u_2, \ldots, u_N]^T$ denotes the vector of concentrations, the vector $X_0 = [u_{1,0}, u_{2,0}, \ldots, u_{N,0}]^T$ denotes the initial data, the diffusion matrix $D = \text{diag}(d_1,d_2,\ldots, d_N)$ and the reaction matrix $A = (a_{ij}) \in \mathbb{R}^{N\times N}$ satisfies \begin{equation}\label{a_jj_Recall} \begin{cases} a_{ij} \geq 0, &\qquad \text{for all } i\not=j, \quad i,j =1,2,\ldots,N,\\ a_{jj} = -\sum_{i=1,i\not= j}^{N}a_{ij}, &\qquad \text{for all } j =1,2,\ldots,N. \end{cases} \end{equation} Moreover, since $\mathcal{N}_{\mathcal{L}}$ is strongly connected, we know that the reaction matrix $A$ is irreducible, see Lemma \ref{Characteristic}. For the linear system \eqref{VectorSystem_ReCall}, the existence of a global unique solution follows by standard arguments, see e.g. 
\cite{Smo,Rot}: \begin{theorem}[Global well-posedness of linear reaction-diffusion networks]\hfil\\ For all given initial data $X_0 \in (L^2(\Omega))^N$, there exists a unique solution $X\in C([0,T];(L^2(\Omega))^N)\cap L^2(0,T;(H^1(\Omega))^N)$ for all $T>0$. Moreover, if $X_0 \geq 0$ then $X(t) \geq 0$ for all $t>0$. Finally, the solutions to \eqref{VectorSystem_ReCall} conserve the total mass \eqref{MassConservation} for all $t>0$: \begin{equation}\label{MassConservation_ReCall} \sum_{i=1}^{N}\int_{\Omega}u_i(x,t)dx = \sum_{i=1}^{N}\int_{\Omega}u_{i,0}(x)dx =: M >0, \end{equation} where the initial mass $M$ is assumed positive. \end{theorem} Lemma \ref{Characteristic} stated the equivalence of strong connectivity of first order reaction networks and irreducibility of the reaction matrices $A$, which follows e.g. from \cite[Definition 2.1, page 46]{Seneta} and \cite[Theorem 3.2, page 78]{Min88}. Moreover, Lemma \ref{Characteristic} stated the existence of a unique positive complex balance equilibrium to \eqref{VectorSystem_ReCall} for any given positive initial mass $M>0$. Concerning the proof of this part of Lemma \ref{Characteristic}, it remains to show the following \begin{lemma}[Unique positive equilibria for \textcolor{black}{strongly connected} networks with fixed mass $M$]\label{UniqueEqui}\hfil\\ The {\color{black} first order} reaction network $\mathcal{N}$ is \textcolor{black}{strongly connected} if and only if the system \eqref{VectorSystem_ReCall} admits a unique positive equilibrium for any fixed positive mass $M>0$. \end{lemma} \begin{proof} {\it Sufficiency:} Assume that $\mathcal{N}$ is \textcolor{black}{strongly connected}. Thanks to the first equivalence in Lemma \ref{Characteristic}, the reaction matrix $A$ is irreducible. Moreover, for large enough $\alpha>0$, we have that $A+ \alpha E$, where $E$ denotes the identity matrix, is nonnegative in the sense that all of its elements are nonnegative. We can then apply an extension of the Perron-Frobenius theorem, see e.g.
\cite[Theorem 2.6, page 46]{Seneta} or \cite[Chapter 6.3.1]{Per07}, to obtain the existence of a unique positive equilibrium, i.e. a positive right zero-eigenvector $X_{\infty} = (u_{1,\infty}, u_{2,\infty}, \ldots, u_{N,\infty})>0$ satisfying $AX_{\infty} = 0$ such that $\sum_{i=1}^{N}u_{i,\infty} = M>0$. {\it Necessity:} {{Now assume that \eqref{VectorSystem_ReCall} has a unique positive equilibrium $X_{\infty}$. Since $AX_{\infty} = 0$ and $X_{\infty}$ is uniquely determined by the mass conservation, we obtain that $\dim(\ker A) = 1$. Arguing by contradiction, assume that $\mathcal{N}$ is not \textcolor{black}{strongly connected}; then the reaction matrix $A$ is reducible, i.e. \begin{equation*} A = P^T\begin{pmatrix} B&0\\ C&D \end{pmatrix}P \end{equation*} for some permutation matrix $P$, in which $D$ is irreducible. Choose $d$ to be an eigenvector of $D$ corresponding to the eigenvalue zero (zero is an eigenvalue of $D$ since the columns of $D$ sum to zero). Then \begin{equation*} AP^T\begin{pmatrix} 0\\d \end{pmatrix} = P^T\begin{pmatrix}B&0\\C&D\end{pmatrix}\begin{pmatrix} 0\\d \end{pmatrix} = P^T\begin{pmatrix}0\\Dd\end{pmatrix} = \begin{pmatrix} 0\\0 \end{pmatrix} \end{equation*} which means that $P^T\begin{pmatrix} 0\\d \end{pmatrix}$ is an eigenvector of $A$ corresponding to the eigenvalue zero. Since $X_{\infty}$ is strictly positive, $P^T\begin{pmatrix} 0\\d \end{pmatrix}$ and $X_{\infty}$ are linearly independent, which leads to a contradiction with $\mathrm{dim}(\mathrm{ker} A) = 1$.}} \end{proof} In the following, we will use the entropy method to study the trend to equilibrium. More precisely, for two trajectories $X = (u_1, u_2, \ldots, u_N)$ and $Y = (v_1, v_2, \ldots, v_N)$ of \eqref{VectorSystem_ReCall}, where $Y(t)$ has non-zero components for all times $t>0$, we consider the following quadratic relative entropy functional \begin{equation}\label{RelativeEntropy_Recall} \mathcal{E}(X|Y)(t) = \sum_{i=1}^{N}\int_{\Omega}\frac{|u_i|^2}{v_i}dx.
\end{equation} The following key Lemma \ref{ExplicitEnDiss} provides an explicit expression of the entropy dissipation associated to \eqref{RelativeEntropy_Recall}: \begin{lemma}[Relative entropy dissipation functional]\label{ExplicitEnDiss}\hfil\\ Assume that $v_i(t) \not= 0$ for all $i = 1,2,\ldots, N$ and $t>0$. Then, we have \begin{align*} \mathcal{D}(X|Y) = -\frac{d}{dt}\mathcal{E}(X|Y)= 2\sum_{i=1}^{N}d_i\int_{\Omega}v_i\left|\nabla\Bigl(\frac{u_i}{v_i}\Bigr)\right|^2dx+ \!\!\sum_{i,j=1; i<j}^{N}\int_{\Omega}(a_{ij}v_j + a_{ji}v_i)\biggl(\frac{u_i}{v_i} - \frac{u_j}{v_j}\biggr)^{\!2}dx. \end{align*} \end{lemma} \begin{proof} For convenience we recall that $$ \partial_tu_i - d_i\Delta{u_i} = \sum_{j=1}^{N}a_{ij}u_j \qquad \text{ and } \qquad \partial_tv_i - d_i\Delta{v_i} = \sum_{j=1}^{N}a_{ij}v_j, $$ for all $i=1,\ldots, N$. Hence, we compute \begin{align} \frac{d}{dt}\mathcal{E}(X|Y) &= \sum_{i=1}^{N}\int_{\Omega}\left[2\frac{u_i}{v_i}\partial_tu_i - \frac{u_i^2}{v_i^2}\partial_tv_i\right]dx\nonumber \\ &= \sum_{i=1}^{N}\int_{\Omega}\left[2\frac{u_i}{v_i}\biggl(d_i\Delta{u_i} + \sum_{j=1}^{N}a_{ij}u_j\biggr) - \frac{u_i^2}{v_i^2}\biggl(d_i\Delta{v_i} + \sum_{j=1}^{N}a_{ij}v_j\biggr)\right]dx\nonumber \\ &= \sum_{i=1}^{N}\int_{\Omega}\left(2d_i\frac{u_i}{v_i}\Delta{u_i} - d_i\frac{u_i^2}{v_i^2}\Delta{v_i}\right)dx + \sum_{i=1}^{N}\int_{\Omega}\biggl(2\frac{u_i}{v_i}\sum_{j=1}^{N}a_{ij}u_j - \frac{u_i^2}{v_i^2}\sum_{j=1}^{N}a_{ij}v_j\biggr)dx\nonumber \\ &=: \sum_{i=1}^{N}\int_{\Omega}J^{(i)}_Ddx + \int_{\Omega}\sum_{i=1}^{N}J^{(i)}_Rdx\nonumber \\ &=: \mathcal{I}_D + \mathcal{I}_R.\label{e1} \end{align} Using integration by parts, we have \begin{align} \int_{\Omega}J^{(i)}_{D}dx &= \int_{\Omega}\left(2d_i\frac{u_i}{v_i}\Delta{u_i} - d_i\frac{u_i^2}{v_i^2}\Delta{v_i}\right)dx\nonumber \\ &= -2d_i\int_{\Omega}\left(\nabla\Bigl(\frac{u_i}{v_i}\Bigr)\nabla u_i - \frac{u_i}{v_i}\nabla\Bigl(\frac{u_i}{v_i}\Bigr)\nabla v_i\right)dx\nonumber \\ &= 
-2d_i\int_{\Omega}v_i\left|\nabla\Bigl(\frac{u_i}{v_i}\Bigr)\right|^2dx. \label{e2} \end{align} Thus, \begin{equation} \label{e3} \mathcal{I}_D = -2\sum_{i=1}^{N}d_i\int_{\Omega}v_i\left|\nabla\Bigl(\frac{u_i}{v_i}\Bigr)\right|^2dx. \end{equation} For the reaction terms $\mathcal{I}_R$, we use $ a_{ii} = -\sum_{j=1,j\not=i}^{N}a_{ji} $ to calculate \begin{align} J^{(i)}_R &= 2\frac{u_i}{v_i}\sum_{j=1}^{N}a_{ij}u_j - \frac{u_i^2}{v_i^2}\sum_{j=1}^{N}a_{ij}v_j\nonumber \\ &= 2\frac{u_i}{v_i}\biggl(\,\sum_{j=1,j\not=i}^{N}a_{ij}u_j + a_{ii}u_i\biggr) - \frac{u_i^2}{v_i^2}\biggl(\,\sum_{j=1,j\not=i}^{N}a_{ij}v_j + a_{ii}v_i\biggr)\nonumber \\ &= 2\frac{u_i}{v_i}\biggl(\,\sum_{j=1,j\not=i}^{N}a_{ij}u_j - u_i\sum_{j=1,j\not=i}^{N}a_{ji}\biggr) - \frac{u_i^2}{v_i^2}\biggl(\,\sum_{j=1,j\not=i}^{N}a_{ij}v_j - v_i\sum_{j=1,j\not=i}^{N}a_{ji}\biggr)\nonumber \\ &=\sum_{j=1,j\not=i}^{N}\left(2\frac{u_i}{v_i}(a_{ij}u_j - a_{ji}u_i) - \frac{u_i^2}{v_i^2}(a_{ij}v_j - a_{ji}v_i)\right). \label{e4} \end{align} Therefore, \begin{align} \mathcal{I}_R = \sum_{i=1}^{N}\int_{\Omega}J^{(i)}_Rdx&= \int_{\Omega}\sum_{i=1}^{N}\sum_{j=1,j\not=i}^{N}\left(2\frac{u_i}{v_i}(a_{ij}u_j - a_{ji}u_i) - \frac{u_i^2}{v_i^2}(a_{ij}v_j - a_{ji}v_i)\right)dx\nonumber\\ &= \sum_{i,j=1;i<j}^{N}\int_{\Omega}\biggl[2\frac{u_i}{v_i}(a_{ij}u_j - a_{ji}u_i) - \frac{u_i^2}{v_i^2}(a_{ij}v_j - a_{ji}v_i)\nonumber \\ &\qquad\qquad\qquad+ 2\frac{u_j}{v_j}(a_{ji}u_i - a_{ij}u_j) - \frac{u_j^2}{v_j^2}(a_{ji}v_i - a_{ij}v_j)\biggr]dx\nonumber\\ &= \sum_{i,j=1;i<j}^{N}\int_{\Omega}\biggl[2(a_{ij}u_j - a_{ji}u_i)\left(\frac{u_i}{v_i}-\frac{u_j}{v_j}\right)- (a_{ij}v_j - a_{ji}v_i)\left(\frac{u_i^2}{v_i^2}-\frac{u_j^2}{v_j^2}\right)\biggr]dx\nonumber \\ &= \sum_{i,j=1;i<j}^{N}\int_{\Omega}\left(\frac{u_i}{v_i}-\frac{u_j}{v_j}\right)\left[2(a_{ij}u_j-a_{ji}u_i) - (a_{ij}v_j - a_{ji}v_i)\left(\frac{u_i}{v_i}+\frac{u_j}{v_j}\right)\right]dx\nonumber \\ &= -\sum_{i,j=1;i<j}^{N}\int_{\Omega}(a_{ij}v_j +
a_{ji}v_i)\left(\frac{u_i}{v_i}-\frac{u_j}{v_j}\right)^2dx. \label{e5} \end{align} By combining \eqref{e1}, \eqref{e3} and \eqref{e5}, we obtain the result stated in the Lemma. \end{proof} In order to simplify the following calculations, we introduce the difference to the equilibrium $$ W:= (w_1,w_2,\ldots, w_N) = (u_1-u_{1,\infty}, u_{2} - u_{2,\infty}, \ldots, u_N - u_{N,\infty}) = X - X_{\infty}, $$ and remark that thanks to the linearity of the system, the difference $W$ is the solution to \eqref{VectorSystem_ReCall} subject to the shifted initial data $$ W(x,0) = X(x,0) - X_{\infty},\qquad \text{ for all } x\in\Omega. $$ Note that the total initial mass corresponding to $W$ is zero, i.e. $$ M_{W} := \sum_{i=1}^{N}\int_{\Omega}{w_{i,0}}\,dx = \sum_{i=1}^{N}\int_{\Omega}(u_{i,0}(x) - u_{i,\infty})\,dx = 0, $$ and that $W$ conserves the zero mass \begin{equation*} \sum_{i=1}^{N}\int_{\Omega}{w_i}(t,x)\,dx = 0, \qquad \text{ for all } t>0. \end{equation*} By using the relative entropy dissipation functional derived in Lemma \ref{ExplicitEnDiss}, we have \begin{equation*} \mathcal{D}(W|X_{\infty}) = 2\sum_{i=1}^{N}\int_{\Omega}d_i\,\frac{|\nabla w_i|^2}{u_{i,\infty}}\,dx + \sum_{i,j=1;i<j}^{N}(a_{ij}u_{j,\infty} + a_{ji}u_{i,\infty})\int_{\Omega}\Bigl( \frac{w_i}{u_{i,\infty}} - \frac{w_j}{u_{j,\infty}}\Bigr)^{\!2}dx. \end{equation*} The following entropy-entropy dissipation estimate is the key step in proving the convergence to equilibrium for \eqref{VectorSystem_ReCall}. \begin{lemma}[Entropy-Entropy Dissipation Estimate]\label{EEDEstimate}\hfil\\ There exists an explicit constant $\lambda>0$ {\color{black}depending explicitly on the reaction matrix $A$, the domain $\Omega$, the diffusion matrix $D$ and the initial mass $M$} such that \begin{equation*} \mathcal{D}(W|X_{\infty}) \geq \lambda\,\mathcal{E}(W|X_{\infty}).
\end{equation*} \end{lemma} \begin{proof} We divide the proof in several steps: \noindent {\bf Step 1.} (Additivity of the relative entropy w.r.t. spatial averages)\hfil\\ Straightforward calculation leads to \begin{align} \mathcal{E}(W|X_{\infty}) &= \sum_{i=1}^{N}\int_{\Omega}\frac{|w_i|^2}{u_{i,\infty}}dx= \sum_{i=1}^{N}\int_{\Omega}\frac{|w_i - \overline{w_i}|^2}{u_{i,\infty}}dx + \sum_{i=1}^{N}\frac{|\overline{w_i}|^2}{u_{i,\infty}}\nonumber\\ &= \mathcal{E}(W-\overline{W}|X_{\infty}) + \mathcal{E}(\overline{W}|X_{\infty}) \label{e7} \end{align} where we denote $\overline{W} = (\overline{w_1}, \overline{w_2}, \ldots, \overline{w_N})$ and we recall that $\overline{w_i}=\int_{\Omega} w_i\,dx$ for $i=1,\ldots,N$ due to $|\Omega|=1$. \noindent {\bf Step 2.} (Entropy dissipation due to diffusion)\\ By using Poincar\'e's inequality \begin{equation}\label{Poincare} \|\nabla f\|^2 \geq C_P\|f - \overline{f}\|^2, \qquad \text{ for all } f\in H^1(\Omega), \end{equation} we have \begin{align} \frac{1}{2}\mathcal{D}(W|X_{\infty}) &\geq \sum_{i=1}^{N}d_i\int_{\Omega}\frac{|\nabla w_i|^2}{u_{i,\infty}}dx \geq C_P\sum_{i=1}^{N}d_i\int_{\Omega}\frac{|w_i - \overline{w_i}|^2}{u_{i,\infty}}dx\nonumber \\ &\geq C_P\min\{d_1,d_2,\ldots,d_N\}\,\mathcal{E}(W-\overline{W}|X_{\infty}). \label{e8} \end{align} \noindent {\bf Step 3.} (Entropy dissipation due to reactions) \hfil\\ From \eqref{e7} and \eqref{e8}, it remains to control $$ \mathcal{E}(\overline{W}|X_{\infty}) = \sum_{i=1}^{N}\frac{\overline{w_i}^2}{u_{i,\infty}}.
$$ By using Jensen's inequality we have, recalling that $|\Omega| = 1$, \begin{align} \frac{1}{2}\mathcal{D}(W|X_{\infty}) &\geq \frac{1}{2}\sum_{i,j=1;i<j}^{N}(a_{ij}u_{j,\infty} + a_{ji}u_{i,\infty})\int_{\Omega}\left( \frac{w_i}{u_{i,\infty}} - \frac{w_j}{u_{j,\infty}}\right)^2dx\nonumber\\ &\geq \frac{1}{2}\sum_{i,j=1;i<j}^{N}(a_{ij}u_{j,\infty} + a_{ji}u_{i,\infty})\left( \frac{\overline{w_i}}{u_{i,\infty}} - \frac{\overline{w_j}}{u_{j,\infty}}\right)^2.\label{e9} \end{align} It then remains to prove that \begin{equation}\label{e10} \frac{1}{2}\sum_{i,j=1;i<j}^{N}(a_{ij}u_{j,\infty} + a_{ji}u_{i,\infty})\left( \frac{\overline{w_i}}{u_{i,\infty}} - \frac{\overline{w_j}}{u_{j,\infty}}\right)^2 \geq \gamma \sum_{i=1}^{N}\frac{\overline{w_i}^2}{u_{i,\infty}} \end{equation} for some $\gamma>0$. Note that if neither of the reactions $S_i \rightarrow S_j$ and $S_j \rightarrow S_i$ appears in the reaction network, then we have $a_{ij} = a_{ji} = 0$ and thus $$ a_{ij}u_{j,\infty} + a_{ji}u_{i,\infty} = 0. $$ Hence, the expression $$ \sum_{i,j=1;i<j}^{N}(a_{ij}u_{j,\infty} + a_{ji}u_{i,\infty})\left( \frac{\overline{w_i}}{u_{i,\infty}} - \frac{\overline{w_j}}{u_{j,\infty}}\right)^2 $$ may not contain all pairs $(i,j)$ with $i\not=j$. However, the weak reversibility of the network allows us to make all pairs $(i,j)$ with $i\not=j$ appear in the following sense: There exists an explicit constant $\xi>0$ such that \begin{equation}\label{e11} \sum_{i,j=1;i<j}^{N}(a_{ij}u_{j,\infty} + a_{ji}u_{i,\infty})\left( \frac{\overline{w_i}}{u_{i,\infty}} - \frac{\overline{w_j}}{u_{j,\infty}}\right)^2 \geq \xi\sum_{i,j=1;i<j}^{N}\left(\frac{\overline{w_i}}{u_{i,\infty}} - \frac{\overline{w_j}}{u_{j,\infty}}\right)^2. \end{equation} Indeed, assume that $a_{ij} = a_{ji} = 0$ for some $i\not=j$.
Due to the weak reversibility of the network, there exists a path from $S_i$ to $S_j$ as follows $$ S_i \equiv S_{j_1} \xrightarrow{a_{j_2j_1}} S_{j_2} \xrightarrow{a_{j_3j_2}} \ldots \xrightarrow{a_{j_{r}j_{r-1}}} S_{j_{r}} \equiv S_j $$ with $r\geq 3$ and $a_{j_kj_{k-1}} > 0$ for all $k = 2,3,\ldots, r$. Thus, with \[ {\color{black}0<\sigma = \min_{(a_{ij}, a_{ji})\not=(0,0);1\le i<j\le N}\{a_{ij}u_{j,\infty} + a_{ji}u_{i,\infty} \} \leq \min_{2\leq k\leq r}\{a_{j_kj_{k-1}}u_{j_{k-1},\infty} + a_{j_{k-1}j_k}u_{j_k,\infty} \}} \] we have \begin{multline} \sum_{k=2}^{r}(a_{j_kj_{k-1}}u_{j_{k-1},\infty} + a_{j_{k-1}j_k}u_{j_{k},\infty})\left(\frac{\overline{w_{j_{k}}}}{u_{j_k,\infty}} - \frac{\overline{w_{j_{k-1}}}}{u_{j_{k-1},\infty}}\right)^2\\ \geq {\sigma}\sum_{k=2}^{r}\left(\frac{\overline{w_{j_{k}}}}{u_{j_k,\infty}} - \frac{\overline{w_{j_{k-1}}}}{u_{j_{k-1},\infty}}\right)^2\\ \geq \frac{\sigma}{r-1}\left(\frac{\overline{w_{j_{1}}}}{u_{j_1,\infty}} - \frac{\overline{w_{j_{r}}}}{u_{j_{r},\infty}}\right)^2 \geq \frac{\sigma}{N-1}\left(\frac{\overline{w_{i}}}{u_{i,\infty}} - \frac{\overline{w_{j}}}{u_{j,\infty}}\right)^2, \label{e12} \end{multline} where the second inequality follows from the Cauchy-Schwarz inequality and the last one from $r\leq N$. Since there are at most $N(N-1)/2$ pairs $(i,j)$ with $a_{ij} = a_{ji} = 0$, we can repeat this procedure to finally get \eqref{e11} {\color{black}with $\xi = 2\sigma/(N(N-1)^2)$}. From \eqref{e10} and \eqref{e11}, we are left to find a constant $\gamma>0$ satisfying \begin{equation}\label{e13} \sum_{i,j=1;i<j}^{N}\left(\frac{\overline{w_i}}{u_{i,\infty}} - \frac{\overline{w_j}}{u_{j,\infty}}\right)^2 \geq \frac{2\gamma}{\xi}\sum_{i=1}^{N}\frac{\overline{w_i}^2}{u_{i,\infty}} \end{equation} with the constraint of the conserved zero total mass \begin{equation}\label{e14} \sum_{i=1}^{N}\overline{w_i} = 0.
\end{equation} Because of \eqref{e14}, \begin{equation}\label{e14_1} \sum_{i=1}^{N}\overline{w_i}^2 = -\sum_{i,j=1;i\neq j}^{N}\overline{w_i}\,\overline{w_j}= -2\sum_{i,j=1;i<j}^{N}\overline{w_i}\,\overline{w_j}. \end{equation} Therefore, setting $C = \min_{1\le i<j\le N}\frac{1}{u_{i,\infty}u_{j,\infty}}$, we can estimate \begin{multline} \sum_{i,j=1;i<j}^{N}\left(\frac{\overline{w_i}}{u_{i,\infty}} - \frac{\overline{w_j}}{u_{j,\infty}}\right)^2 \geq C\sum_{i,j=1;i<j}^{N}u_{i,\infty}u_{j,\infty}\left(\frac{\overline{w_i}}{u_{i,\infty}} - \frac{\overline{w_j}}{u_{j,\infty}}\right)^2 \\ \geq -2C\sum_{i,j=1;i<j}^{N}\overline{w_i}\;\overline{w_j} = C\sum_{i=1}^{N}\overline{w_i}^2 \geq C\min_{1\le k\le N}u_{k,\infty}\sum_{i=1}^{N}\frac{\overline{w_i}^2}{u_{i,\infty}}. \label{e14_2} \end{multline} In conclusion, we have proved \eqref{e13} {\color{black}with $\gamma = \frac{\xi}{2}\,C\min_{1\le k\le N}u_{k,\infty}$}, which in combination with \eqref{e11} implies \eqref{e10} and thus completes the proof of this Lemma. \end{proof} \begin{theorem}[Convergence to Equilibrium]\hfil\label{Convergence}\\ \textcolor{black}{ Consider (w.l.o.g.) a strongly connected subnetwork $\mathcal{N}$ of a weakly reversible first order reaction network. Assume for $\mathcal{N}$ that the diffusion coefficients $d_i$ are positive for all $i = 1,2,\ldots, N$, and the initial mass $M$ is positive.
} Then, the unique global solution to \eqref{VectorSystem_ReCall} converges to the unique positive equilibrium $X_{\infty}$ in the following sense: \begin{equation*} \sum_{i=1}^{N}\int_{\Omega}\frac{|u_i(t) - u_{i,\infty}|^2}{u_{i,\infty}}dx \leq e^{-\lambda t}\sum_{i=1}^{N}\int_{\Omega}\frac{|u_{i,0} - u_{i,\infty}|^2}{u_{i,\infty}}dx, \end{equation*} where the constant $\lambda >0$ {\color{black}is computed as in Lemma \ref{EEDEstimate}}. \end{theorem} \begin{proof} From Lemma \ref{EEDEstimate} we have \begin{equation*} \frac{d}{dt}\mathcal{E}(X-X_{\infty}|X_{\infty}) = -\mathcal{D}(X-X_{\infty}|X_{\infty}) \leq -\lambda\, \mathcal{E}(X - X_{\infty}|X_{\infty}). \end{equation*} By Gronwall's inequality, \begin{equation*} \mathcal{E}(X(t) - X_{\infty}|X_{\infty}) \leq e^{-\lambda t}\mathcal{E}(X_0 - X_{\infty}|X_{\infty}), \end{equation*} and the proof is complete. \end{proof} \textcolor{black}{ \begin{proof}[Proof of Theorem \ref{FirstResult}] Theorem \ref{FirstResult} is a direct consequence of Theorem \ref{Convergence} and the partition of weakly reversible first order reaction networks into strongly connected components. \end{proof} } We now turn to the case of degenerate diffusion, where some of the diffusion coefficients $d_i$ can be zero. In the proof of Theorem \ref{Convergence}, we have used the non-degenerate diffusion of all species in order to control the distance of the concentrations to their spatial averages (see estimate \eqref{e8}). This procedure must thus be adapted in the case of degenerate diffusion. It was already proven in \cite{DeFe_Con, BaFeEv14, MiHaMa14} that even if some diffusion coefficients vanish, one can still show exponential convergence to equilibrium provided the reactions are reversible. The technique used in these references is based on the fact that diffusion of one species, which is connected through a reversible reaction to another species, induces an indirect kind of ``diffusion effect'' on the latter species.
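The convergence mechanism above can be checked numerically in the spatially homogeneous setting, where \eqref{VectorSystem_ReCall} reduces to the ODE system $X' = AX$ and diffusion plays no role. The following sketch (the three-species cycle $S_1\xrightarrow{1} S_2\xrightarrow{2} S_3\xrightarrow{3} S_1$ and all numerical values are illustrative assumptions, not taken from the paper) computes the unique positive equilibrium of Lemma \ref{UniqueEqui} and verifies the monotone decay of the quadratic relative entropy predicted by Theorem \ref{Convergence}.

```python
# Illustrative sketch: equilibrium and entropy decay for a hypothetical
# weakly reversible cycle S1 -> S2 -> S3 -> S1 with rates 1, 2, 3.
# The reaction matrix A has nonnegative off-diagonal entries and zero column sums.

A = [[-1.0, 0.0, 3.0],
     [ 1.0, -2.0, 0.0],
     [ 0.0, 2.0, -3.0]]
M = 11.0  # conserved total mass

def equilibrium(A, M):
    """Solve A x = 0 with sum(x) = M: replace the (redundant) last equation
    of A x = 0 by the mass constraint and run Gauss-Jordan elimination."""
    n = len(A)
    B = [row[:] + [0.0] for row in A]
    B[-1] = [1.0] * n + [M]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(B[r][col]))  # partial pivoting
        B[col], B[piv] = B[piv], B[col]
        for r in range(n):
            if r != col and B[r][col] != 0.0:
                f = B[r][col] / B[col][col]
                B[r] = [x - f * y for x, y in zip(B[r], B[col])]
    return [B[i][n] / B[i][i] for i in range(n)]

x_inf = equilibrium(A, M)  # approximately [6.0, 3.0, 2.0] for this network

def entropy(x):
    # quadratic relative entropy  sum_i |x_i - x_{i,infty}|^2 / x_{i,infty}
    return sum((a - b) ** 2 / b for a, b in zip(x, x_inf))

# explicit Euler for X' = A X, starting with the whole mass in S1
x, dt = [11.0, 0.0, 0.0], 1e-3
E = [entropy(x)]
for _ in range(3000):
    x = [x[i] + dt * sum(A[i][j] * x[j] for j in range(3)) for i in range(3)]
    E.append(entropy(x))
# E decays monotonically, in accordance with D(W|X_inf) >= 0
```

This only illustrates the ODE part of the statement; the PDE case additionally needs the Poincar\'e inequality of Step 2 to control the spatial fluctuations.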
We will prove that this principle is still valid for weakly reversible reaction networks as considered in this section. \begin{theorem}[Convergence to Equilibrium with Degenerate Diffusion]\label{ConvDegDiffusion}\hfil\\ \textcolor{black}{ Consider (w.l.o.g.) a strongly connected subnetwork $\mathcal{N}$ of a weakly reversible first order reaction network. Assume for $\mathcal{N}$ that the initial mass $M$ is positive. Moreover, assume that at least one diffusion coefficient $d_i$ is positive for some $i = 1, 2, \ldots, N$. } Then, the solution to \eqref{VectorSystem_ReCall} converges exponentially to equilibrium via the following estimate \begin{equation*} \sum_{i=1}^{N}\int_{\Omega}\frac{|u_i(t) - u_{i,\infty}|^2}{u_{i,\infty}}dx \leq e^{-\lambda' t}\sum_{i=1}^{N}\int_{\Omega}\frac{|u_{i,0} - u_{i,\infty}|^2}{u_{i,\infty}}\,dx, \end{equation*} for some explicit rate $\lambda'>0$ {\color{black}which depends explicitly on $A$, $\Omega$, $D$ and $M$}. \end{theorem} \begin{proof} We aim for a similar entropy-entropy dissipation inequality as stated in Lemma \ref{EEDEstimate}, i.e. we want to find a constant $\lambda'>0$ such that \begin{equation}\label{e15} \mathcal{D}(W|X_{\infty}) \geq \lambda'\, \mathcal{E}(W|X_{\infty}) = \lambda'\,[\mathcal E(W-\overline{W}|X_{\infty}) + \mathcal E(\overline{W}|X_{\infty})]. \end{equation} Due to the degenerate diffusion, the diffusion part of $\mathcal{D}(W|X_{\infty})$ is insufficient to control $\mathcal E(W - \overline{W}|X_{\infty})$ as in \eqref{e8}, since some of the diffusion coefficients can be zero. This difficulty can be resolved by quantifying the fact that diffusion of one species is transferred to another species when connected via a weakly reversible reaction path.
Without loss of generality, we assume that $d_1>0$ and estimate $\mathcal{D}(W|X_{\infty})$ by \begin{equation}\label{e16} \mathcal{D}(W|X_{\infty}) \geq d_1\int_{\Omega}\frac{|\nabla w_1|^2}{u_{1,\infty}}\,dx + \sum_{i,j=1;i<j}^{N}(a_{ij}u_{j,\infty}+a_{ji}u_{i,\infty})\int_{\Omega}\left(\frac{w_i}{u_{i,\infty}} - \frac{w_j}{u_{j,\infty}}\right)^2dx. \end{equation} By arguments similar to \eqref{e11} and \eqref{e12}, we have \begin{equation}\label{e17} \mathcal{D}(W|X_{\infty}) \geq d_1\int_{\Omega}\frac{|\nabla w_1|^2}{u_{1,\infty}}dx + \xi\sum_{i,j=1;i<j}^{N}\int_{\Omega}\left(\frac{w_i}{u_{i,\infty}} - \frac{w_j}{u_{j,\infty}}\right)^2dx. \end{equation} To control $\mathcal{E}(W - \overline{W}|X_{\infty})$, we use the following estimate for all $i=2,3,\ldots,N$: \begin{equation}\label{e18} \int_{\Omega}\frac{|\nabla w_1|^2}{u_{1,\infty}}\,dx + \int_{\Omega}\left(\frac{w_1}{u_{1,\infty}} - \frac{w_i}{u_{i,\infty}}\right)^2dx \geq \beta\int_{\Omega}\frac{|w_i - \overline{w_i}|^2}{u_{i,\infty}}\,dx, \end{equation} {\color{black}with $\beta = \frac{1}{2\max_{1\le k\le N}u_{k,\infty}}\min\left\{C_P\,u_{1,\infty}, 1 \right\}$}: Indeed, thanks to Poincar\'{e}'s inequality $\|\nabla f\|^2 \geq C_P\|f - \overline{f}\|^2$, we estimate \begin{align} \int_{\Omega}\frac{|\nabla w_1|^2}{u_{1,\infty}}\,dx &+ \int_{\Omega}\left(\frac{w_1}{u_{1,\infty}} - \frac{w_i}{u_{i,\infty}}\right)^2dx \geq \int_{\Omega}\left[C_P\,\frac{|w_1 - \overline{w_1}|^2}{u_{1,\infty}} + \left(\frac{w_1 - \overline{w_1}}{u_{1,\infty}} + \frac{\overline{w_1}}{u_{1,\infty}}- \frac{w_i}{u_{i,\infty}}\right)^2\right]dx\nonumber \\ &\geq \frac 12\min\left\{C_P\,u_{1,\infty}, 1 \right\}\int_{\Omega}\left(\frac{\overline{w_1}}{u_{1,\infty}}- \frac{w_i}{u_{i,\infty}}\right)^2dx\nonumber \\ &= \frac 12\min\left\{C_P\,u_{1,\infty}, 1 \right\}\int_{\Omega}\left(\frac{\overline{w_1}}{u_{1,\infty}}- \frac{\overline{w_i}}{u_{i,\infty}} + \frac{\overline{w_i}}{u_{i,\infty}} - \frac{w_i}{u_{i,\infty}}\right)^2dx\nonumber \\ &= \frac 12\min\left\{C_P\,u_{1,\infty}, 1 \right\}\int_{\Omega}\left(\frac{\overline{w_1}}{u_{1,\infty}}- \frac{\overline{w_i}}{u_{i,\infty}}\right)^2dx + \frac 12\min\left\{C_P\,u_{1,\infty}, 1 \right\}\int_{\Omega}\left(\frac{\overline{w_i}}{u_{i,\infty}} - \frac{w_i}{u_{i,\infty}}\right)^2dx\nonumber \\ &\geq \frac{1}{2\max_{1\le k\le N}u_{k,\infty}}\min\left\{C_P\,u_{1,\infty}, 1 \right\}\int_{\Omega}\frac{|w_i - \overline{w_i}|^2}{u_{i,\infty}}\,dx, \label{e19} \end{align} where the second inequality uses $a^2 + (a+b)^2 \geq \frac{1}{2}b^2$, the second equality holds since the mixed term vanishes due to $\int_{\Omega}(w_i - \overline{w_i})\,dx = 0$, and the last inequality uses $u_{i,\infty} \leq \max_{1\le k\le N}u_{k,\infty}$. Now, thanks to \eqref{e17} and \eqref{e18} \begin{align} \mathcal{D}(W|X_{\infty}) &\geq {\color{black}\min\left\{\frac{d_1}{N},\frac{\xi}{2} \right\}\beta}\sum_{i=1}^{N}\int_{\Omega}\frac{|w_i - \overline{w_i}|^2}{u_{i,\infty}}dx + \frac{\xi}{2}\sum_{i,j=1;i<j}^{N}\int_{\Omega}\left(\frac{w_i}{u_{i,\infty}} - \frac{w_j}{u_{j,\infty}}\right)^2dx\nonumber \\ &\geq {\color{black}\min\left\{\frac{d_1}{N},\frac{\xi}{2} \right\}\beta}\,\mathcal{E}(W - \overline{W}|X_{\infty}) + \frac{\xi}{2}\sum_{i,j=1;i<j}^{N}\int_{\Omega}\left(\frac{\overline{w_i}}{u_{i,\infty}} - \frac{\overline{w_j}}{u_{j,\infty}}\right)^2dx\nonumber \\ &\geq {\color{black}\min\left\{\frac{d_1}{N},\frac{\xi}{2} \right\}\beta}\mathcal{E}(W - \overline{W}|X_{\infty}) + \frac{\gamma}{4}\mathcal{E}(\overline{W}|X_{\infty}) \qquad\qquad (\text{by using }\eqref{e13})\nonumber \\ &\geq \lambda'\,\mathcal{E}(W|X_{\infty})\label{e20} \end{align} {\color{black}with $\lambda' = \min\left\{\frac{\beta d_1}{N}, \frac{\xi \beta}{2}, \frac{\gamma}{4}\right\}$}. Thus \eqref{e15} is proved and the proof is complete.
\end{proof} \textcolor{black}{ \begin{proof}[Proof of Theorem \ref{DegenerateDiffusion}] Theorem \ref{DegenerateDiffusion} is a direct consequence of Theorem \ref{ConvDegDiffusion} and the partition of weakly reversible first order reaction networks into strongly connected components. \end{proof} } \begin{remark}\label{DiffReactCoupl} The estimate \eqref{e18} is usually interpreted as follows: the sum of the dissipation due to the diffusion of $w_1$ and the dissipation caused by the reaction between $w_1$ and $w_i$ is bounded below by the right-hand side of \eqref{e19}, which is essentially a diffusion dissipation term of the species $w_i$ (after having applied Poincar\'e's inequality). In this sense, a ``diffusion effect'' has been transferred onto $w_i$. We remark that while the presented proof for the linear case is straightforward, the proof of an analogous estimate to \eqref{e18} in nonlinear cases turns out to be quite tricky. Readers are referred to \cite{DeFe_Con} or \cite[Lemma 3.6]{BaFeEv14} for more details. \end{remark} \section{Non-weakly reversible networks}\label{non-weakly} In this section, we consider \textcolor{black}{(w.l.o.g.) reaction networks $\mathcal{N}$ which are not weakly reversible, yet form one linkage class. Thus, the corresponding directed graph $G$ is connected yet not strongly connected (i.e. the underlying undirected graph of $G$ is connected)}. We will show that in the large time behaviour, each species tends exponentially fast either to zero or to a positive equilibrium value depending on its position in the graph representing the network. For weakly reversible reaction-diffusion networks (corresponding to strongly connected graphs), it was proven in Section \ref{weakly} that each species converges exponentially fast to a unique, positive equilibrium value, which is given explicitly in terms of the reaction rates and the conserved initial total mass.
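To see the claimed dichotomy in the simplest possible instance, consider the made-up network $S_1 \xrightarrow{1} S_2$, whose source component $\{S_1\}$ loses its whole mass to the target component $\{S_2\}$. A minimal numerical sketch (spatially homogeneous data, so diffusion is irrelevant; all values are illustrative assumptions, not from the paper):

```python
# Hypothetical non weakly reversible network S1 -> S2 (rate 1).
# The matrix A has zero column sums, so the total mass is conserved,
# but the only equilibrium with mass M is (0, M): u1 dies out, u2 absorbs M.
A = [[-1.0, 0.0],
     [ 1.0, 0.0]]

x, dt = [5.0, 0.0], 1e-3           # initial data (u1, u2), total mass M = 5
for _ in range(10_000):            # explicit Euler up to t = 10
    x = [x[i] + dt * sum(A[i][j] * x[j] for j in range(2)) for i in range(2)]
u1, u2 = x                         # u1 is of order 5 e^{-10}, u2 is close to 5
```

Exactly this exponential extinction of source (and transmission) components, together with the history-dependent limit of the target components, is quantified in the remainder of this section.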
For non weakly reversible reaction networks, however, we will show that while the equilibria are still unique and attained exponentially fast, the equilibrium values are in general no longer explicitly given, but depend on the position within the graph and, in particular, on the history of the concentrations of the influencing species. Moreover, since non weakly reversible reaction networks \eqref{VectorSystem_ReCall} may no longer have positive equilibria, the relative entropy method used in Section \ref{weakly} is not directly applicable. Nevertheless, we will see that the relative entropy and the ideas of the entropy method still play an essential role in our analysis of non-weakly reversible networks. As the large time behaviour of the species depends on their position within the network, we need to first state some important properties of the graph $G$. The following Lemmas \ref{newgraph} and \ref{TopoOrder} are well known in graph theory; we refer the reader to the monograph \cite{BJG08} for reference. \begin{lemma}[Strongly connected components form acyclic graphs $G^C$] \label{newgraph}\hfil\\ Let $G$ be a directed graph which is connected, that is the underlying undirected graph of $G$ is connected, but not strongly connected, so that $G$ contains $r\geq 2$ strongly connected components, which we shall denote by $C_1, C_2, \ldots, C_r$. Thus, we can define a directed graph $G^C$ of strongly connected components as follows \begin{itemize} \item[-] $G^C$ has as nodes the $r$ strongly connected components $C_1, C_2, \ldots, C_r$, \item[-] for two nodes $C_i$ and $C_j$ of $G^C$, if there exists a reaction $C_i \ni S_k \xrightarrow{a_{\ell k}} S_\ell\in C_j$ with $a_{\ell k} >0$, then we define a directed edge $C_i \rightarrow C_j$ of $G^C$. \end{itemize} Then, the directed graph $G^C$ is acyclic, that is $G^C$ does not contain any cycles. \end{lemma} \begin{proof} The proof can be found in e.g.
\cite[Chapter 1]{BJG08}; it shows that if $G^C$ contained a cycle, then all components along this cycle would have formed a single strongly connected component in the first place. \end{proof} \begin{lemma}[{Topological order of acyclic graphs, \cite[Chapter 1]{BJG08}}]\label{TopoOrder}\hfil\\ There exists a reordering of the nodes of $G^C$ in such a way that for all direct edges $C_i \rightarrow C_j$ we always have $i < j$. \end{lemma} From now on, we will always consider topologically ordered graphs $G^C$. For each $i=1,2,\ldots, r$, we denote by $N_i$ the number of species belonging to $C_i$. For notational convenience later on, we shall set $L[0] = 0$ and introduce the cumulative number $L[i]$ of the species contained in all strongly connected components up to $C_i$, i.e. \begin{equation}\label{L} L[i] = N_1 + N_2 + \ldots + N_i \qquad \text{ for all } i=1,2,\ldots, r. \end{equation} We then reorder the species of the network $\mathcal{N}$ in such an order that the species belonging to the component $C_i$ are $S_{L[i-1]+1}, S_{L[i-1]+2}, \ldots, S_{L[i]}$ for all $i=1,2,\ldots, r$. Each component $C_i$ belongs to one of the following three types: \begin{itemize} \item {\it Source component}: $C_i$ is a source component if there is no in-flow to $C_i$, i.e. there does not exist an edge $C_i\not\ni S_k\rightarrow S_j \in C_i$, \item {\it Target component}: $C_i$ is a target component if there is no out-flow from $C_i$, i.e. there does not exist an edge $C_i\ni S_k \rightarrow S_j\not\in C_i$, \item {\it Transmission component}: If $C_i$ is neither a source component nor a target component, then $C_i$ is called a transmission component. \end{itemize} The above classification of strongly connected components greatly simplifies the notation for the corresponding dynamics, which quantify the behaviour of the species belonging to the three types of components.
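The decomposition into components, the topological order of Lemma \ref{TopoOrder} and the source/transmission/target classification are all easily computed. The sketch below (the five-species network $S_1\leftrightarrow S_2\to S_3\to S_4\leftrightarrow S_5$ is a made-up example; Kosaraju's algorithm is one standard choice, not prescribed here) returns the components $C_1,\ldots,C_r$ already in a topological order of $G^C$:

```python
# Illustrative sketch: strongly connected components, topological order of G^C,
# and source/transmission/target classification for a made-up 5-species network
# S1 <-> S2 (C1), S2 -> S3 (C2 = {S3}), S3 -> S4 <-> S5 (C3).

edges = {1: [2], 2: [1, 3], 3: [4], 4: [5], 5: [4]}  # j in edges[i] means S_i -> S_j

def sccs(edges):
    """Kosaraju's algorithm; the components come out in topological order of G^C."""
    nodes = list(edges)
    rev = {v: [] for v in nodes}
    for u in nodes:
        for v in edges[u]:
            rev[v].append(u)
    order, seen = [], set()
    def dfs(u, adj, out):
        # iterative DFS appending each node to `out` when its subtree is finished
        stack = [(u, iter(adj[u]))]
        seen.add(u)
        while stack:
            node, it = stack[-1]
            for v in it:
                if v not in seen:
                    seen.add(v)
                    stack.append((v, iter(adj[v])))
                    break
            else:
                stack.pop()
                out.append(node)
    for u in nodes:
        if u not in seen:
            dfs(u, edges, order)
    seen, comps = set(), []
    for u in reversed(order):          # decreasing finish time, on the reversed graph
        if u not in seen:
            comp = []
            dfs(u, rev, comp)
            comps.append(sorted(comp))
    return comps

def classify(comps, edges):
    """Label each component as source / transmission / target."""
    where = {u: k for k, comp in enumerate(comps) for u in comp}
    has_in, has_out = set(), set()
    for u in edges:
        for v in edges[u]:
            if where[u] != where[v]:
                has_out.add(where[u])
                has_in.add(where[v])
    return ["transmission" if k in has_in and k in has_out
            else "source" if k in has_out
            else "target" if k in has_in
            else "isolated" for k in range(len(comps))]
```

Here `sccs(edges)` yields `[[1, 2], [3], [4, 5]]` with classification source, transmission, target, matching the three types introduced above.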
In the following, we denote by $X_{i} = (u_{L[i-1]+1}, u_{L[i-1]+2}, \ldots, u_{L[i]})^T$ the concentration vector of the species belonging to $C_i$. The evolution of the species belonging to a component $C_i$ depends on the type of $C_i$: \begin{itemize} \item[(i)] For a source component $C_i$, the system for $X_i$ is of the form \begin{equation}\label{SourceSystem} \begin{cases} \partial_tX_{i} - D_i\Delta X_{i} = A_iX_{i} - F_{i}^{out}X_{i}, &x\in\Omega, \qquad t>0,\\ \partial_{\nu}X_{i} = 0, &x\in\partial\Omega, \qquad t>0,\\ X_{i}(x,0) = X_{i,0}(x), &x\in\Omega, \end{cases} \end{equation} where the diffusion matrix $D_i$ is \begin{equation}\label{DiffMatrix} D_i = \text{diag}(d_{L[i-1]+1}, d_{L[i-1]+2}, \ldots, d_{L[i]}) \in \mathbb{R}^{N_i\times N_i}, \end{equation} the reaction matrix $A_i$ is \begin{equation}\label{SelfReacMatrix} A_i = (a_{L[i-1]+k,L[i-1]+\ell})_{1\leq k, \ell \leq N_i} \in \mathbb{R}^{N_i\times N_i}, \end{equation} and the out-flow matrix is defined as \begin{equation}\label{OutFlow} F_{i}^{out} = \text{diag}(f_{L[i-1]+1}, f_{L[i-1]+2}, \ldots, f_{L[i]}) \in \mathbb{R}^{N_i\times N_i} \end{equation} with $$ f_{L[i-1]+k} = \sum_{\ell=L[i]+1}^{N}a_{\ell,L[i-1]+k} \qquad \forall k=1,2,\ldots, N_i, $$ where the lower summation index $L[i]+1$ follows from the topological order of the graph $G^C$. Roughly speaking, $f_{L[i-1]+k}$ is the sum of all the reaction rates from the species $S_{L[i-1]+k}$ to species outside of $C_i$.
It may happen that $f_{L[i-1]+k} = 0$ for some $k=1,2,\ldots,N_i$, but there exists at least one $k_0$ such that $f_{L[i-1]+k_0} > 0$ since $C_i$ is a source component.\\ \item[(ii)] If $C_i$ is a transmission component, the system for $X_i$ reads as \begin{equation}\label{TransmissionSystem} \begin{cases} \partial_tX_{i} - D_i\Delta X_{i} = \mathcal{F}_{i}^{in} + A_iX_{i} - F_{i}^{out}X_i, &x\in\Omega,\quad t>0,\\ \partial_{\nu}X_{i} = 0, &x\in\partial\Omega, \quad t>0,\\ X_{i}(x,0) = X_{i,0}(x), &x\in\Omega, \end{cases} \end{equation} where the diffusion matrix $D_i$, the reaction matrix $A_i$ and the out-flow matrix $F_{i}^{out}$ are defined as above in \eqref{DiffMatrix}, \eqref{SelfReacMatrix} and \eqref{OutFlow}, respectively. The in-flow vector $\mathcal{F}_{i}^{in}$ is defined by \begin{equation}\label{InFlow} \mathcal{F}_{i}^{in} = \begin{pmatrix} z_{L[i-1]+1}\\ z_{L[i-1]+2}\\ \vdots \\ z_{L[i]}\\ \end{pmatrix} \quad \text{ with } \quad z_{L[i-1]+\ell} = \sum_{k=1}^{L[i-1]}a_{L[i-1]+\ell, k}u_k. \end{equation} We remark that by studying all components $C_i$ within the topological order of $G^C$, the dynamics of the previous components $C_1, C_2, \ldots, C_{i-1}$ is already known at the time we analyse the component $C_i$. Thus, in system \eqref{TransmissionSystem} the in-flow vector $\mathcal{F}_{i}^{in}$ can be considered as a given external in-flow.\\ \item[(iii)] If $C_i$ is a target component, we can write \begin{equation}\label{TargetSystem} \begin{cases} \partial_tX_{i} - D_i\Delta X_{i} = \mathcal{F}_{i}^{in} + A_iX_{i}, &x\in\Omega,\quad t>0,\\ \partial_{\nu}X_{i} = 0, &x\in\partial\Omega, \quad t>0,\\ X_{i}(x,0) = X_{i,0}(x), &x\in\Omega, \end{cases} \end{equation} where the reaction matrix $A_i$ and the in-flow $\mathcal{F}_{i}^{in}$ are defined in the same way as above in \eqref{SelfReacMatrix} and \eqref{InFlow}.
\end{itemize} By modifying the relative entropy method of Section \ref{weakly}, we can now prove Theorem \ref{SourceTransmission}. \begin{proof}[Proof of Theorem \ref{SourceTransmission}] Since the ongoing outflow drains the mass of all source components and subsequently of all transmission components, the corresponding equilibrium values are expected to be zero and the relative entropy method used for weakly reversible networks is not directly applicable here. We instead introduce a concept of ``artificial equilibrium states with normalised mass'' for these components, which allows us to derive a quadratic entropy-like functional, which can be proved to decay exponentially. Due to their different dynamics, we have to distinguish the two cases: $C_i$ is a source component and $C_i$ is a transmission component. The aim of the proof is to show that if $C_i$ is a source or a transmission component then for all $k = 1, \ldots, N_i$, \begin{equation}\label{source_transmission} \|u_{L[i-1]+k}(t)\|_{L^2(\Omega)}^2 \leq K_ie^{-\lambda_i t}, \qquad \text{ for all } t\geq 0, \end{equation} for explicit constants $K_i>0$ and $\lambda_i>0$. In order to simplify the notation, we shall denote \begin{equation}\label{newnotation} v_k = u_{L[i-1]+k},\quad \text{and}\quad b_{k,\ell} = a_{L[i-1]+k, L[i-1]+\ell},\qquad \text{for all } 1\leq k, \ell \leq N_i. \end{equation} Then, the concentration vector $X_i$ and the reaction matrix $A_i$ can be rewritten as \begin{equation*} X_i = (v_1, v_2, \ldots, v_{N_i})\qquad \text{ and } \qquad A_i = (b_{k,\ell})_{1\leq k, \ell \leq N_i}. \end{equation*} Note that the index $i$ for the component $C_i$ is fixed.\\ \noindent {\bf Case 1:} $C_i$ is a source component. We recall the corresponding system from \eqref{SourceSystem} \begin{equation}\label{SourceSystem_Recall} \begin{cases} \partial_tX_{i} - D_i\Delta X_{i} = A_iX_{i} - F_{i}^{out}X_{i}, &x\in\Omega, \qquad t>0,\\ \partial_{\nu}X_{i} = 0, &x\in\partial\Omega, \qquad t>0,\\ X_{i}(x,0) = X_{i,0}(x), &x\in\Omega.
\end{cases} \end{equation} We now introduce an artificial equilibrium state $X_{i,\infty} = (v_{1,\infty}, v_{2,\infty}, \ldots, v_{N_i, \infty})^T$ with normalised mass to \eqref{SourceSystem_Recall}, which is defined as the solution of the system \begin{equation}\label{SourceEqui} \begin{cases} A_iX_{i,\infty} = 0,\\ v_{1,\infty} + v_{2,\infty} + \ldots + v_{N_i,\infty} = 1. \end{cases} \end{equation} Since $C_i$ is strongly connected, the matrix $A_i$ is irreducible, and it follows from Lemma \ref{UniqueEqui} that there exists a unique positive solution $X_{i,\infty}$ to \eqref{SourceEqui}. Here we notice that $X_{i,\infty}$ balances all reactions within $C_i$ while the total mass contained in $X_{i,\infty}$ is normalised to one. In the following we will study the evolution of the quadratic entropy-like functional \begin{equation}\label{ReEn_Source} \mathcal{E}(X_{i}|X_{i,\infty}) = \sum_{k=1}^{N_i}\int_{\Omega}\frac{|v_{k}|^2}{v_{k,\infty}}dx. \end{equation} By similar calculations as in Lemma \ref{ExplicitEnDiss}, we obtain the time derivative of this quadratic functional \begin{align}\label{EnDiss_Source} \mathcal{D}(X_{i}|X_{i,\infty}) &= -\frac{d}{dt}\mathcal{E}(X_{i}|X_{i,\infty})\nonumber \\ &= 2\sum_{k=1}^{N_i}d_{L[i-1]+k}\int_{\Omega}\frac{|\nabla v_{k}|^2}{v_{k,\infty}}dx\nonumber\\ &\quad + \sum_{k,\ell=1;k<\ell}^{N_i}(b_{k,\ell}v_{\ell,\infty} + b_{\ell,k}v_{k,\infty})\int_{\Omega}\left(\frac{v_{k}}{v_{k, \infty}} - \frac{v_{\ell}}{v_{\ell, \infty}}\right)^2dx\nonumber\\ &\quad + 2\sum_{k=1}^{N_i}f_{L[i-1]+k}\int_{\Omega}\frac{|v_k|^2}{v_{k,\infty}}dx. \end{align} We remark that since $C_i$ is a source component, there exists an index $k_0 \in \{1,2,\ldots,N_i\}$ such that the out-flow $f_{L[i-1]+k_0}$ is positive.
Then, an estimate similar to \eqref{e11} gives \begin{align} \mathcal{D}(X_{i}|X_{i,\infty})&\geq \xi\sum_{k,\ell=1;k<\ell}^{N_i}\int_{\Omega}\Bigl(\frac{v_{k}}{v_{k, \infty}} - \frac{v_{\ell}}{v_{\ell, \infty}}\Bigr)^{\!2}dx + 2f_{L[i-1]+k_0}\int_{\Omega}\frac{|v_{k_0}|^2}{v_{k_0,\infty}}\,dx\nonumber \\ &\geq {\color{black}\min\{\xi/2, f_{L[i-1]+k_0}/2N_i\}}\sum_{\ell=1;\ell\not=k_0}^{N_i}\int_{\Omega}\biggl[\Bigl(\frac{v_{\ell}}{v_{\ell, \infty}} - \frac{v_{k_0}}{v_{k_0, \infty}}\Bigr)^{\!2} + \frac{|v_{k_0}|^2}{v_{k_0,\infty}}\biggr]dx\nonumber\\ &\quad\, + f_{L[i-1]+k_0}\int_{\Omega}\frac{|v_{k_0}|^2}{v_{k_0,\infty}}dx\nonumber\\ &\geq \lambda_i\sum_{\ell=1}^{N_i}\int_{\Omega}\frac{|v_{\ell}|^2}{v_{\ell,\infty}}\,dx = \lambda_i\, \mathcal{E}(X_{i}|X_{i,\infty})\label{EnEnDissEst_Source} \end{align} {\color{black}with $\lambda_i = {\color{black}\min\{\xi/4, f_{L[i-1]+k_0}/4N_i\}}$}. It follows that \begin{equation*} \frac{d}{dt}\mathcal{E}(X_{i}|X_{i,\infty}) = -\mathcal{D}(X_{i}|X_{i,\infty})\leq -\lambda_i\, \mathcal{E}(X_{i}|X_{i,\infty}), \end{equation*} and thus \begin{equation*} \sum_{k=1}^{N_i}\int_{\Omega}\frac{|v_k(t)|^2}{v_{k,\infty}}dx = \mathcal{E}(X_{i}(t)|X_{i,\infty}) \leq e^{-\lambda_i t}\mathcal{E}(X_{i,0}|X_{i,\infty}), \end{equation*} or equivalently \begin{equation*} \|u_{L[i-1]+k}(t)\|^2 \leq e^{-\lambda_i t}\mathcal{E}(X_{i,0}|X_{i,\infty})\max_{1\le k\le N_i}\{v_{k,\infty}\} \qquad \text{ for all } t>0, \; \text{ for all } k=1,2,\ldots,N_i, \end{equation*} {\color{black}which proves \eqref{source_transmission} with $K_i = \mathcal{E}(X_{i,0}|X_{i,\infty})\max_{1\le k\le N_i}\{v_{k,\infty}\}$} in the case where $C_i$ is a source component. \noindent {\bf Case 2:} $C_i$ is a transmission component.
By recalling that the components $C_i$ are topologically ordered, we can assume without loss of generality that $u_\ell$, with $\ell=1,2,\ldots,L[i-1]$, obeys the following exponential decay \begin{equation}\label{Decay} \|u_\ell(t)\|^2 \leq K^*e^{-\lambda^* t}, \qquad \ell=1,2,\ldots,L[i-1], \quad\text{ for all } t>0, \end{equation} with {\color{black}$0<\lambda^* = \min\limits_{1\leq k\leq i-1}\lambda_k$ and $K^* = \max\limits_{1\leq k\leq i-1}K_k$}. We also recall the system for $C_i$, \begin{equation}\label{TransmissionSystem_ReCall} \begin{cases} \partial_tX_{i} - D_i\Delta X_{i} = \mathcal{F}_{i}^{in} + A_iX_{i} - F_{i}^{out}X_{i}, &x\in\Omega,\quad t>0,\\ \partial_{\nu}X_{i} = 0, &x\in\partial\Omega, \quad t>0,\\ X_{i}(x,0) = X_{i,0}(x), &x\in\Omega, \end{cases} \end{equation} where $\mathcal{F}_{i}^{in}$ is defined as in \eqref{InFlow}. Denote by $X_{i,\infty} = (v_{1,\infty}, \ldots, v_{N_i,\infty})^T$ the artificial equilibrium state of \eqref{TransmissionSystem_ReCall}, which is the unique positive solution to \begin{equation}\label{TransmissionEqui} \begin{cases} A_iX_{i,\infty} = 0,\\ v_{1,\infty} + v_{2,\infty} + \ldots + v_{N_i,\infty} = 1.
\end{cases} \end{equation} Again, we can compute the time derivative of \begin{equation}\label{ReEn_Transmission} \mathcal{E}(X_{i}|X_{i,\infty}) = \sum_{k=1}^{N_i}\int_{\Omega}\frac{|v_k|^2}{v_{k,\infty}}dx \end{equation} as \begin{align}\label{EnDiss_Transmission} \mathcal{D}(X_{i}|X_{i,\infty}) &= -\frac{d}{dt}\mathcal{E}(X_{i}|X_{i,\infty})\nonumber\\ &= 2\sum_{k=1}^{N_i}d_{L[i-1]+k}\int_{\Omega}\frac{|\nabla v_k|^2}{v_{k,\infty}}dx + \sum_{k,\ell=1;k<\ell}^{N_i}(b_{k,\ell}v_{\ell,\infty} + b_{\ell,k}v_{k,\infty})\int_{\Omega}\left(\frac{v_{k}}{v_{k, \infty}} - \frac{v_{\ell}}{v_{\ell, \infty}}\right)^2dx\nonumber\\ &\quad + 2\sum_{k=1}^{N_i}f_{L[i-1]+k}\int_{\Omega}\frac{|v_k|^2}{v_{k,\infty}}dx-2\sum_{k=1}^{N_i}\int_{\Omega}\biggl(\frac{v_k}{v_{k,\infty}}\sum_{\ell=1}^{L[i-1]}a_{L[i-1]+k,\ell}\,u_{\ell}\biggr)dx. \end{align} Because $C_i$ is a transmission component, there exists an index $k_0 \in \{1,\ldots, N_i\}$ such that $f_{L[i-1]+k_0}>0$. In comparison to \eqref{EnDiss_Source}, the dissipation $\mathcal{D}(X_i|X_{i,\infty})$ in \eqref{EnDiss_Transmission} has the additional term \begin{equation*} - 2\sum_{k=1}^{N_i}\int_{\Omega}\biggl(\frac{v_k}{v_{k,\infty}}\sum_{\ell=1}^{L[i-1]}a_{L[i-1]+k,\ell}\,u_{\ell}\biggr)dx \end{equation*} to be estimated.
Thanks to the decay \eqref{Decay} of $u_{\ell}$, we can estimate \begin{align}\label{InFlow_Estimate} \left|2\sum_{k=1}^{N_i}\int_{\Omega}\biggl(\frac{v_k}{v_{k,\infty}}\sum_{\ell=1}^{L[i-1]}a_{L[i-1]+k,\ell}\,u_{\ell}\biggr)dx\right| &\leq 2\sum_{k=1}^{N_i}\sum_{\ell=1}^{L[i-1]}a_{L[i-1]+k,\ell}\int_{\Omega}\left|\frac{v_{k}}{v_{k,\infty}}u_{\ell}\right|dx \nonumber\\ &\leq f_{L[i-1]+k_0}\sum_{k=1}^{N_i}\int_{\Omega}\frac{|v_k|^2}{v_{k,\infty}}dx + \kappa\sum_{\ell=1}^{L[i-1]}\|u_{\ell}\|^2\nonumber\\ &\leq f_{L[i-1]+k_0}\sum_{k=1}^{N_i}\int_{\Omega}\frac{|v_k|^2}{v_{k,\infty}}dx + \kappa K^*e^{-\lambda^* t} \end{align} {\color{black}with $\kappa = N_iL[i-1]\max\limits_{i<j}\{a_{ij}^2\}/(f_{L[i-1]+k_0}\min\limits_{k}\{v_{k,\infty}\})$}. Then, with the help of \eqref{InFlow_Estimate}, we estimate \begin{equation*} \begin{aligned} \mathcal{D}(X_{i}|X_{i,\infty}) \ge \sum_{k,\ell=1;k<\ell}^{N_i}(b_{k,\ell}v_{\ell,\infty} + b_{\ell,k}v_{k,\infty})\int_{\Omega}\left(\frac{v_{k}}{v_{k, \infty}} - \frac{v_{\ell}}{v_{\ell, \infty}}\right)^2dx + f_{L[i-1]+k_0}\int_{\Omega}\frac{|v_{k_0}|^2}{v_{k_0,\infty}}dx - \kappa K^* e^{-\lambda^* t}, \end{aligned} \end{equation*} and similarly to \eqref{EnEnDissEst_Source}, we obtain for ${\color{black}\overline{\lambda} = \min\{\xi/4, f_{L[i-1]+k_0}/4N_i\}},$ \begin{equation}\label{EnEnDissEst_Transmission} \mathcal{D}(X_{i}|X_{i,\infty}) \geq \overline{\lambda}\,\mathcal{E}(X_i|X_{i,\infty}) - \kappa K^* e^{-\lambda^* t}. \end{equation} From \eqref{EnEnDissEst_Transmission}, we can use the classical Gronwall lemma to obtain \begin{equation*} \mathcal{E}(X_{i}(t)|X_{i,\infty}) \leq K_ie^{-\lambda_it}, \end{equation*} {\color{black}with $\lambda_i = \min\{\overline{\lambda}, \lambda^*\}$ and $K_i = 2\max\{\mathcal{E}(X_{i,0}|X_{i,\infty}), \kappa K^* \}$}, which ends the proof in the case that $C_i$ is a transmission component.
\end{proof} For a target component, we need to define its corresponding equilibrium state. This equilibrium state balances the reactions within the component and has as total mass the sum of the initial total mass of the target component plus the total ``injected mass'' from the other components. In general, the injected mass will not be given explicitly but depends on the time evolution of the influencing species prior to $C_i$ in terms of the topological order. \begin{lemma}[Equilibrium state of target components]\label{FinalStateTarget}\hfil\\ For each target component $C_i$, if{ \begin{equation}\label{positivemass} \sum\limits_{k=1}^{N_i}\overline{u}_{L[i-1]+k,0} + \sum\limits_{k=1}^{N_i}\sum\limits_{\ell=1}^{L[i-1]}a_{L[i-1]+k,\ell}\int\limits_{0}^{+\infty}\overline{u_{\ell}}(s)ds > 0 \end{equation}} holds, then there exists a unique positive equilibrium state $X_{i,\infty} = (v_{1,\infty}, v_{2,\infty}, \ldots, v_{N_i,\infty})$ satisfying \begin{equation}\label{EquiTarget} \begin{cases} A_iX_{i,\infty} = 0,\\ \sum\limits_{k=1}^{N_i}v_{k,\infty} = \sum\limits_{k=1}^{N_i}\overline{u}_{L[i-1]+k,0} + \sum\limits_{k=1}^{N_i}\sum\limits_{\ell=1}^{L[i-1]}a_{L[i-1]+k,\ell}\int\limits_{0}^{+\infty}\overline{u_{\ell}}(s)ds. \end{cases} \end{equation} {Otherwise, if the sum in \eqref{positivemass} is zero, then the initial and the total injected mass into the target component $C_i$ are zero and the concentrations of the target component $C_i$ remain zero for all times.} \end{lemma} \begin{proof} By \eqref{Decay} we have for all $\ell = 1, 2, \ldots, L[i-1]$ that $\|u_{\ell}(t)\|^2 \leq K^* e^{-\lambda^*t}$.
Thus, Jensen's inequality yields \begin{equation}\label{Jensen} \int_{0}^{+\infty}\overline{u_{\ell}}(s)ds \leq \int_{0}^{+\infty}\|u_{\ell}(s)\|_{L^2(\Omega)}ds \leq \sqrt{K^*}\int_{0}^{+\infty}e^{-\frac{\lambda^*}{2}s}ds = \frac{2\sqrt{K^*}}{\lambda^*}, \end{equation} and the right hand side of the second equation in \eqref{EquiTarget} is finite. Therefore, the existence of a unique $X_{i,\infty}$ satisfying \eqref{EquiTarget} follows from Lemma \ref{UniqueEqui}. \end{proof} { \begin{remark} {The positive sign in assumption \eqref{positivemass} ensures that either initially or during the ongoing reactions positive mass is present in or injected into the component $C_i$.} When this assumption does not hold, then the target component does not possess a positive equilibrium and all of its concentrations remain zero for all times. For example, consider the network \begin{center}\scalebox{1}[1]{ \begin{tikzpicture} \node (c) at (2,2) {$S_1$} node (d) at (2,0) {$S_2$} node (e) at (4,0) {$S_4$} node(f) at(4,2){$S_3$}; \draw[arrows=->] ([xshift =0.5mm,yshift=-0.5mm]d.east) -- node [below] {\scalebox{.8}[.8]{$a_{42}$}} ([xshift=-0.5mm,yshift=-0.5mm]e.west); \draw[arrows=->] ([xshift =0.5mm]c) -- node [above] {\scalebox{.8}[.8]{$a_{31}$}} ([xshift=-0.5mm]f); \draw[arrows=->] ([yshift =-0.5mm]c) -- node [right] {\scalebox{.8}[.8]{$a_{21}$}} ([yshift=0.5mm]d); \end{tikzpicture}} \end{center} when the initial data of all species are zero except that of $S_3$. In this case, the target component $\{S_4\}$ will never receive any mass, and thus remains zero for all $t>0$. \end{remark}} We now begin the \begin{proof}[Proof of Theorem \ref{Target}] With the notations introduced in \eqref{L} and Lemma \ref{FinalStateTarget}, we identify the indexes in the statement of Theorem \ref{Target} as $i_k = L[i-1]+k$ and the equilibrium state $u_{i_k,\infty} = v_{k,\infty}$ for $k= 1,\ldots, N_i$.
The aim now is to prove for all $k=1,\ldots, N_i$, \begin{equation*} \|v_k(t) - v_{k,\infty}\|_{L^2(\Omega)}^2 \leq K_ie^{-\lambda_i t} \quad \text{ for all } t\geq 0 \end{equation*} for some explicit constants $K_i>0$ and $\lambda_i>0$. We recall the system for a target component $C_i$, \begin{equation}\label{TargetSystem_Recall} \begin{cases} \partial_tX_{i} - D_i\Delta X_{i} = \mathcal{F}_{i}^{in} + A_iX_{i}, &x\in\Omega,\quad t>0,\\ \partial_{\nu}X_{i} = 0, &x\in\partial\Omega, \quad t>0,\\ X_{i}(x,0) = X_{i,0}(x), &x\in\Omega, \end{cases} \end{equation} where \begin{equation*} \mathcal{F}_{i}^{in} = \begin{pmatrix} z_{L[i-1]+1}\\ z_{L[i-1]+2}\\ \vdots \\ z_{L[i]}\\ \end{pmatrix} \quad \text{ with } \quad z_{L[i-1]+\ell} = \sum_{k=1}^{L[i-1]}a_{L[i-1]+\ell, k}\,u_k. \end{equation*} Note that the total mass of $C_i$ is not conserved but increases in time due to the in-flow vector $\mathcal{F}^{in}_i$. To compute the total mass of $C_i$ at a time $t>0$, we sum up all the equations of \eqref{TargetSystem_Recall} and integrate over $\Omega$, which gives \begin{equation*} \frac{d}{dt}\sum_{k=1}^{N_i}\overline{u}_{L[i-1]+k}(t) = \sum_{k=1}^{N_i}\overline{z}_{L[i-1]+k}(t) = \sum_{k=1}^{N_i}\sum_{\ell=1}^{L[i-1]}a_{L[i-1]+k,\ell}\,\overline{u_{\ell}}(t) \end{equation*} thanks to the homogeneous Neumann boundary condition and the fact that $(1,\ldots,1)$ is a left eigenvector with eigenvalue zero of $A_i$ since $A_i$ is a reaction matrix. Thus, we have \begin{equation}\label{massTar} \sum_{k=1}^{N_i}\overline{u}_{L[i-1]+k}(t) = \sum_{k=1}^{N_i}\overline{u}_{L[i-1]+k,0} + \sum_{k=1}^{N_i}\sum_{\ell=1}^{L[i-1]}a_{L[i-1]+k,\ell}\int_{0}^{t}\overline{u_{\ell}}(s)ds.
\end{equation} {If the right hand side of \eqref{massTar} is zero for all times $t>0$, then $\overline{u}_{L[i-1]+k}(t)=0$ for all $k=1,\ldots,N_i$ and for all $t>0$, $X_{i,\infty}=0$, and the statement of the Theorem holds trivially.} {Otherwise, if the right hand side of \eqref{massTar} is positive for some time $t>0$, then assumption \eqref{positivemass} is satisfied and $X_{i,\infty}$ is a positive equilibrium.} Recalling the change of notation $v_{k} = u_{L[i-1] + k}$ in \eqref{newnotation}, we denote by $$w_k(t) = v_k(t) - v_{k,\infty} = u_{L[i-1]+k}(t) - v_{k,\infty}$$ the distance from $u_{L[i-1]+k}$ to its corresponding equilibrium state for all $k=1,2,\ldots,N_i$. This implies that $(w_k)_{k=1,\ldots,N_i}$ solves the system \eqref{TargetSystem_Recall} subject to the initial data $w_{k,0} = u_{L[i-1]+k,0} - v_{k,\infty}$ for all $k=1,2,\ldots, N_i$. We define $W_i = (w_1, w_2, \ldots, w_{N_i})$ and consider the relative entropy-like functional \begin{equation}\label{EntropyTarget} \mathcal{E}(W_{i}|X_{i,\infty}) = \sum_{k=1}^{N_i}\int_{\Omega}\frac{|w_k|^2}{v_{k,\infty}}dx = \sum_{k=1}^{N_i}\int_{\Omega}\frac{|w_k - \overline{w_k}|^2}{v_{k,\infty}}dx + \sum_{k=1}^{N_i}\frac{\overline{w_k}^2}{v_{k,\infty}} =: \mathcal{E}_1 + \mathcal{E}_2.
\end{equation} By using again arguments of Lemma \ref{ExplicitEnDiss}, we calculate the entropy dissipation \begin{align}\label{EnDissTarget} \mathcal{D}(W_{i}|X_{i,\infty}) &= -\frac{d}{dt}\mathcal{E}(W_{i}|X_{i,\infty})\nonumber\\ &= 2\sum_{k=1}^{N_i}d_{L[i-1]+k}\int_{\Omega}\frac{|\nabla w_k|^2}{v_{k,\infty}}dx + \sum_{k,\ell=1;k<\ell}^{N_i}(b_{k,\ell}v_{\ell,\infty} + b_{\ell,k}v_{k,\infty})\int_{\Omega}\left(\frac{w_{k}}{v_{k, \infty}} - \frac{w_{\ell}}{v_{\ell, \infty}}\right)^2dx\nonumber\\ &\quad - 2\sum_{k=1}^{N_i}\sum_{\ell=1}^{L[i-1]}a_{L[i-1]+k,\ell}\int_{\Omega}\frac{w_k}{v_{k,\infty}}\,u_{\ell}\,dx. \end{align} For the last term of \eqref{EnDissTarget}, we estimate \begin{align} \biggl|2&\sum_{k=1}^{N_i}\sum_{\ell=1}^{L[i-1]}a_{L[i-1]+k,\ell}\int_{\Omega}\frac{w_k}{v_{k,\infty}}\,u_{\ell}\,dx\biggr|\nonumber \\ &\leq 2\sum_{k=1}^{N_i}\sum_{\ell=1}^{L[i-1]}a_{L[i-1]+k,\ell}\int_{\Omega}\frac{|w_k - \overline{w_k}|}{v_{k,\infty}}|u_{\ell}|dx\ + 2\sum_{k=1}^{N_i}\sum_{\ell=1}^{L[i-1]}a_{L[i-1]+k,\ell}\,\frac{|\overline{w_k}|}{v_{k,\infty}}|\overline{u_{\ell}}|\nonumber \\ &\leq C_{P}\sum_{k=1}^{N_i}d_{L[i-1]+k}\int_{\Omega}\frac{|w_k - \overline{w_k}|^2}{v_{k,\infty}}dx +\kappa_1\sum_{\ell=1}^{L[i-1]}\|u_{\ell}\|^2+ \kappa_2\sum_{k=1}^{N_i}\frac{\overline{w_k}^2}{v_{k,\infty}} + \kappa_3\sum_{\ell=1}^{L[i-1]}\overline{u_{\ell}}^2\nonumber \\ &\leq \sum_{k=1}^{N_i}d_{L[i-1]+k}\int_{\Omega}\frac{|\nabla w_k|^2}{v_{k,\infty}}dx + \kappa_2\sum_{k=1}^{N_i}\frac{\overline{w_k}^2}{v_{k,\infty}}+ (\kappa_1+\kappa_3)K^*e^{-\lambda^*t},\label{LastTerm} \end{align} {\color{black} with \[ \kappa_1 = \frac{N_iL[i-1]\max\limits_{i<j}\{a_{ij}^2\}}{C_P\min\limits_{k}\{d_{L[i-1]+k}v_{k,\infty}\}}, \quad \kappa_2 = \frac 12\xi \max\limits_{k}\{v_{k,\infty}\},\quad \kappa_3 = \frac{N_iL[i-1]\max\limits_{i<j}\{a_{ij}^2\}}{\kappa_2\min\limits_{k}\{v_{k,\infty}\}}, \] where $\kappa_2$ is chosen in such a way that the last step of the
estimate \eqref{ee1_2} below is fulfilled}, and we have used $\|u_{\ell}(t)\|^2 \leq K^*e^{-\lambda^*t}$ for all $\ell = 1,\ldots, L[i-1]$ in the last estimate. By inserting \eqref{LastTerm} into \eqref{EnDissTarget}, we obtain \begin{align} \mathcal{D}(W_{i}|X_{i,\infty}) &\geq \sum_{k=1}^{N_i}d_{L[i-1]+k}\int_{\Omega}\frac{|\nabla w_k|^2}{v_{k,\infty}}dx+ \sum_{k,\ell=1;k<\ell}^{N_i}(b_{k,\ell}v_{\ell,\infty} + b_{\ell,k}v_{k,\infty})\left(\frac{\overline{w_{k}}}{v_{k, \infty}}- \frac{\overline{w_{\ell}}}{v_{\ell, \infty}}\right)^2\nonumber\\ &\quad - \kappa_2\sum_{k=1}^{N_i}\frac{\overline{w_k}^2}{v_{k,\infty}}- (\kappa_1+\kappa_3) K^*e^{-\lambda^* t} =: \mathcal D_1 + \mathcal{D}_2\label{ee1} \end{align} where $\mathcal{D}_1$ is the term containing the gradients and $\mathcal{D}_2$ is the rest of the right hand side. It follows from Poincar\'e's inequality that \begin{equation}\label{ee1_1} \begin{aligned} \mathcal D_1 \geq \sum_{k=1}^{N_i}d_{L[i-1]+k}\int_{\Omega}\frac{|\nabla w_k|^2}{v_{k,\infty}}dx\geq C_{P}\sum_{k=1}^{N_i}d_{L[i-1]+k}\int_{\Omega}\frac{|w_k-\overline{w_k}|^2}{v_{k,\infty}}dx \geq \kappa_4\mathcal{E}_1 \end{aligned} \end{equation} {\color{black}with $\kappa_4 = C_P\min\limits_{k}\{d_{L[i-1]+k}\}$}. To control $\mathcal{E}_2$, we use arguments similar to Step 3 in the proof of Lemma \ref{EEDEstimate}.
First, by using \eqref{massTar}, the total mass of $(w_k)_{1\leq k\leq N_i}$ is computed as \begin{align} \sum_{k=1}^{N_i}\overline{w_k}(t) = \sum_{k=1}^{N_i}\overline{u}_{L[i-1]+k}(t) - \sum_{k=1}^{N_i}v_{k,\infty}\nonumber &= \sum_{k=1}^{N_i}\overline{u}_{L[i-1]+k,0} + \sum_{k=1}^{N_i}\sum_{\ell=1}^{L[i-1]}a_{L[i-1]+k,\ell}\int_{0}^{t}\overline{u_{\ell}}(s)ds\nonumber \\ &\quad - \sum\limits_{k=1}^{N_i}\overline{u}_{L[i-1]+k,0} - \sum\limits_{k=1}^{N_i}\sum\limits_{\ell=1}^{L[i-1]}a_{L[i-1]+k,\ell}\int_{0}^{+\infty}\overline{u_{\ell}}(s)ds\nonumber\\ &= - \sum\limits_{k=1}^{N_i}\sum\limits_{\ell=1}^{L[i-1]}a_{L[i-1]+k,\ell}\int_{t}^{+\infty}\overline{u_{\ell}}(s)ds=: -\delta(t).\label{TargetInitMass} \end{align} Hence, \begin{equation}\label{kaka} -2\sum_{k,\ell=1;k<\ell}^{N_i}\overline{w_k}\,\overline{w_{\ell}} = -\sum_{k,\ell=1;k\neq \ell}^{N_i}\overline{w_k}\,\overline{w_{\ell}} = \sum_{k=1}^{N_i}\overline{w_k}^2 -\sum_{k,\ell=1}^{N_i}\overline{w_k}\,\overline{w_{\ell}} =\sum_{k=1}^{N_i}\overline{w_k}^2 - \delta^2(t).
\end{equation} By using \eqref{e11} and \eqref{kaka}, we estimate \begin{align} \mathcal D_2 &\geq \xi\sum_{k,\ell=1; k<\ell}^{N_i}\left(\frac{\overline{w_k}}{v_{k,\infty}} - \frac{\overline{w_\ell}}{v_{\ell,\infty}}\right)^2 - \kappa_2\sum_{k=1}^{N_i}\frac{\overline{w_k}^2}{v_{k,\infty}} - (\kappa_1+\kappa_3)K^*e^{-\lambda^*t}\nonumber\\ &\geq -2\xi\max\limits_{k<\ell}\{v_{k,\infty}v_{\ell,\infty}\}\sum_{k,\ell=1;k<\ell}^{N_i}\overline{w_k}\,\overline{w_{\ell}} - \kappa_2\sum_{k=1}^{N_i}\frac{\overline{w_k}^2}{v_{k,\infty}} - (\kappa_1+\kappa_3)K^*e^{-\lambda^*t}\nonumber\\ &= \xi\max\limits_{k<\ell}\{v_{k,\infty}v_{\ell,\infty}\}\left(\sum_{k=1}^{N_i}\overline{w_k}^2 - \delta^2\right) - \kappa_2\sum_{k=1}^{N_i}\frac{\overline{w_k}^2}{v_{k,\infty}} - (\kappa_1+\kappa_3)K^*e^{-\lambda^*t}\nonumber \\ &\geq \frac 12\xi \max\limits_{k}\{v_{k,\infty}\}\sum_{k=1}^{N_i}\frac{\overline{w_k}^2}{v_{k,\infty}} - \xi \max\limits_{k<\ell}\{v_{k,\infty}v_{\ell,\infty}\}\delta^2 - (\kappa_1+\kappa_3)K^*e^{-\lambda^* t}.\label{ee1_2} \end{align} It follows from \eqref{TargetInitMass} and $\overline{u_\ell} \leq \|u_{\ell}\| \leq \sqrt{K^*}e^{-\lambda^*t/2}$ that \begin{equation*} \delta^2 \leq N_iL[i-1]\max\limits_{i<j}\{a_{ij}^2\}\sum_{\ell=1}^{L[i-1]}\left(\int_{t}^{+\infty}\overline{u_{\ell}}(s)ds\right)^2 \leq \kappa_7e^{-\lambda^*t} \end{equation*} with {\color{black}$\kappa_7 = 4K^*N_iL[i-1]^2\max\limits_{i<j}\{a_{ij}^2\}(\lambda^*)^{-2}$}. Hence, \eqref{ee1_2} implies that \begin{equation}\label{keke} \mathcal D_2 \geq \frac 12\xi \max\limits_{k}\{v_{k,\infty}\}\sum_{k=1}^{N_i}\frac{\overline{w_k}^2}{v_{k,\infty}} - \max\{\kappa_7\xi\max\limits_{k<\ell}\{v_{k,\infty}v_{\ell,\infty}\}, (\kappa_1+\kappa_3) K^*\}e^{-\lambda^*t} = \kappa_5\mathcal{E}_2 - \kappa_6e^{-\lambda^*t}.
\end{equation} Combining \eqref{keke} and \eqref{ee1_1} yields \begin{equation}\label{ee2} \mathcal{D}(W_{i}| X_{i,\infty}) \geq \min\{\kappa_4, \kappa_5\}\mathcal{E}(W_{i}| X_{i,\infty}) - \kappa_6e^{-\lambda^*t}. \end{equation} Therefore, by applying a classical Gronwall lemma, \begin{equation}\label{ee4} \mathcal{E}(W_{i}(t)|X_{i,\infty}) \leq K_ie^{-\lambda_i t} \qquad \text{ for all } t\geq 0 \end{equation} {\color{black}with $\lambda_i = \min\{\kappa_4, \kappa_5,\lambda^* \}$ and $K_i = 2\max\{\mathcal{E}(W_{i,0}|X_{i,\infty}), \kappa_6\}$}. This completes the proof of the Theorem. \end{proof} \vskip 0.5cm \noindent{\bf Acknowledgements.} We would like to thank the anonymous referee for their valuable comments and suggestions, which improved the presentation of the paper. The third author is supported by International Research Training Group IGDK 1754. This work has partially been supported by NAWI Graz. \end{document}
\begin{document} \title[First-order, stationary MFGs with congestion]{First-order, stationary mean-field games with congestion} \author[D. Evangelista]{David Evangelista} \address[D. Evangelista]{ King Abdullah University of Science and Technology (KAUST), CEMSE Division, Thuwal 23955-6900. Saudi Arabia, and KAUST SRI, Center for Uncertainty Quantification in Computational Science and Engineering.} \email{[email protected]} \author[R. Ferreira]{Rita Ferreira} \address[R. Ferreira]{ King Abdullah University of Science and Technology (KAUST), CEMSE Division, Thuwal 23955-6900. Saudi Arabia, and KAUST SRI, Center for Uncertainty Quantification in Computational Science and Engineering.} \email{[email protected]} \author[D. A. Gomes]{Diogo A. Gomes} \address[D. A. Gomes]{ King Abdullah University of Science and Technology (KAUST), CEMSE Division, Thuwal 23955-6900. Saudi Arabia, and KAUST SRI, Center for Uncertainty Quantification in Computational Science and Engineering.} \email{[email protected]} \author[L. Nurbekyan]{Levon Nurbekyan} \address[L. Nurbekyan]{ King Abdullah University of Science and Technology (KAUST), CEMSE Division, Thuwal 23955-6900. Saudi Arabia, and KAUST SRI, Center for Uncertainty Quantification in Computational Science and Engineering.} \email{[email protected]} \author[V. Voskanyan]{Vardan Voskanyan} \address[V. Voskanyan]{ King Abdullah University of Science and Technology (KAUST), CEMSE Division, Thuwal 23955-6900. Saudi Arabia, and KAUST SRI, Center for Uncertainty Quantification in Computational Science and Engineering.} \email{[email protected]} \keywords{Mean-Field Game; Congestion; Calculus of Variations} \subjclass[2010]{35J47, 35A01} \thanks{ The authors were partially supported by King Abdullah University of Science and Technology (KAUST) baseline and start-up funds. } \date{\today} \begin{abstract} Mean-field games (MFGs) are models for large populations of competing rational agents that seek to optimize a suitable functional.
In the case of congestion, this functional takes into account the difficulty of moving in high-density areas. Here, we study stationary MFGs with congestion with quadratic or power-like Hamiltonians. First, using explicit examples, we illustrate two main difficulties: the lack of classical solutions and the existence of areas with vanishing density. Our main contribution is a new variational formulation for MFGs with congestion. This formulation was not previously known, and, thanks to it, we prove the existence and uniqueness of solutions. Finally, we consider applications to numerical methods. \end{abstract} \maketitle \section{Introduction} Mean-field games (MFGs) are a branch of game theory that studies systems with a large number of competing agents. These games were introduced in \cite{ll1,ll2,ll3} (see also \cite{LCDF}) and \cite{Caines2,Caines1} motivated by problems arising in population dynamics, mathematical economics, social sciences, and engineering. MFGs have been the focus of intense study in the last few years and substantial progress has been achieved. Congestion problems, which arise in models where the motion of agents in high-density regions is expensive, are a challenging class of MFGs. Many MFGs are determined by a system of a Hamilton--Jacobi equation coupled with a transport or Fokker--Planck equation. In congestion problems, these equations have singularities and, thus, their analysis requires particular care. Here, we study first-order stationary MFGs with congestion. Our main example is the system \begin{equation} \label{main} \begin{cases} \frac{|P+Du|^\gamma}{\gamma m^\alpha}+V(x)=g(m)+{\overline{H}}\\ -\operatorname{div}(m^{1-\alpha} |P+Du|^{\gamma-2}(P+Du))=0, \end{cases} \end{equation} where $x$ takes values on the $d$-dimensional torus, ${\mathbb{T}}^d$, and the unknowns are $u, m:{\mathbb{T}}^d\to {\mathbb{R}}$ and ${\overline{H}}\in {\mathbb{R}}$, with $m\geqslant 0$ and $\int_{{\mathbb{T}}^d} m\,dx=1$.
Here, $1\leqslant\alpha\leqslant \gamma<\infty$, $V:{\mathbb{T}}^d\to {\mathbb{R}}$, $V\in C^\infty({\mathbb{T}}^d)$, and $g:{\mathbb{R}}^+\to {\mathbb{R}}$ with $g(m)=G'(m)$ for some convex function $G:{\mathbb{R}}_0^+\to {\mathbb{R}}$ with $G\in C^\infty({\mathbb{R}}^+)\cap C({\mathbb{R}}_0^+)$. In particular, the convexity of $G$ gives that $g$ is monotonically increasing. The preceding MFG is a model where agents incur a large cost when moving in regions with a high agent density. The constant $-{\overline{H}}$ is the average cost per unit of time corresponding to the Lagrangian \[ L(x,v, m)=m^\alpha \frac{|v|^{\gamma'}}{\gamma'}+v\cdot P-V(x)+g(m), \] where $\frac 1 \gamma +\frac 1 {\gamma'}=1$. More precisely, the typical agent seeks to minimize the long-time average cost \[ \lim_{T\to \infty}\frac 1 T \int_0^T \bigg(m^\alpha \frac{|\dot{\bf x}(s)|^{\gamma'}}{\gamma'}+\dot{\bf x}(s)\cdot P-V({\bf x}(s))+g(m({\bf x}(s)))\bigg)ds . \] Due to this optimization process, agents avoid moving in high-density regions. Further, because $g$ is increasing, agents prefer to remain in low-density regions rather than in high-density regions. Finally, we observe that \(P\) determines the preferred direction of motion. In the stationary case, the theory for second-order MFGs without singularities is well understood. For example, the papers \cite{GM, GPM1, GPatVrt, PV15, GR} address the existence of classical solutions, and weak solutions were examined in \cite{bocorsporr}. In dimension one, a characterization of solutions for stationary MFGs was developed in \cite{Gomes2016b} (including non-monotone MFGs) and, in the case of congestion, in \cite{GNPr216, nurbekyan17}. The theory of weak solutions was considered in \cite{FG2}, where a general existence result was proven using a monotonicity argument. The monotonicity structure that many MFGs enjoy has important applications to numerical methods, see \cite{AFG}.
A review of MFG models can be found in \cite{GJS2} and a survey of regularity results in \cite{GPV}. The congestion problem was first introduced in \cite{LCDF} where a uniqueness condition was established. Next, the existence of solutions for stationary MFGs with congestion, positive viscosity, and a quadratic Hamiltonian was proved in \cite{GMit}. Subsequently, this problem was examined in more generality in \cite{GE}. The time-dependent case was considered in \cite{GVrt2} (classical solutions) and \cite{Graber2} (weak solutions). Later, \cite{Achdou2016} examined weak solutions for time-dependent problems. Apart from the results in \cite{FG2}, the one-dimensional examples in \cite{GNPr216, nurbekyan17}, and the radial cases in \cite{EvGomNur17}, little is known about first-order MFGs with congestion. The critical difficulties stem from two issues: first-order Hamilton--Jacobi equations provide little a priori regularity; second, because the transport equation is a first-order equation, we cannot use Harnack-type results and, thus, we cannot bound $m$ from below by a positive constant. Indeed, as we show in Section \ref{lcs}, $m$ can vanish. In many MFG problems, the regularity follows from a priori bounds that combine both equations in \eqref{main}. Here, with standard methods, we can only get relatively weak bounds, see Remark~\ref{rmk:onfoeinic}. For example, if \(0<\alpha\leqslant 1\), then there exists a constant, $C$, such that for any regular enough solution of \eqref{main}, we have \[ \int_{{\mathbb{T}}^d} \left[\left(\frac{|P+Du|^\gamma}{m^\alpha}\right) (1+m) + (m-1) g(m)\right]dx\leqslant C. \] In Section \ref{apbsec}, we examine a priori bounds for a class of MFGs that generalize \eqref{main}. While the bounds from Section \ref{apbsec} are interesting on their own, they are not enough to prove the existence of solutions. In the case of MFGs without congestion, a number of variational principles have been proposed, see \cite{ll1, LCDF}.
These are not only of independent interest but also have important applications, see, for example, \cite{MR3195846} for a study of efficiency loss in oscillator synchronization games, the recent results in \cite{GraCard} where variational principles are used to prove the existence of solutions for first-order MFGs, and \cite{MR3644590} where optimal transport methods are used to examine constrained MFGs, which is an alternative approach to model congestion. Hard congestion problems can be modeled by variational problems, see for example \cite{San12} or \cite{San16}. However, soft-congestion models such as \eqref{main} do not fit this framework. In Section \ref{vp}, we study a new variational problem for which \eqref{main} is the corresponding Euler--Lagrange equation. More concretely, let $G:{\mathbb{R}}_0^+\to {\mathbb{R}}$ with $G'=g$. Then, \eqref{main} is the Euler--Lagrange equation of the functional \begin{equation} \label{nvp} J[u,m]=\int_{{\mathbb{T}}^d} \left(\frac{|P+Du|^\gamma}{\gamma (\alpha-1) m^{\alpha-1}}-V m +G(m)\right) dx; \end{equation} that is, if $(u,m)$, with $u,m:{\mathbb{T}}^d\to {\mathbb{R}}$ and $m>0$, is a smooth enough minimizer of $J$ under the constraint \[ \int_{{\mathbb{T}}^d} m \,dx=1, \] then $(u,m)$ solves \eqref{main}. The existence of a minimizer of \eqref{nvp} is addressed as follows. First, the $\alpha=1$ case is examined in Section \ref{ccsec}. The $1<\alpha<\gamma$ case and the case where $\alpha=\gamma$ with $g(m)=m^\theta$, $\theta>0$, are addressed in Theorem \ref{thm:exist1}. Finally, the case $\alpha=\gamma$ with more general assumptions on $g$ is considered in Theorem \ref{thm:exist2}. The uniqueness of a solution is shown in Theorem \ref{thm:uniqmin}. Our new variational principle provides a new construction of weak solutions for MFGs that does not rely on the high-order regularizations in \cite{FG2}, nor does it require ellipticity as in \cite{GPatVrt, GPM1, GR, GM, PV15}.
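For the reader's convenience, the formal computation behind the claim that \eqref{nvp} has \eqref{main} as its Euler--Lagrange equation can be sketched as follows (a formal derivation, assuming $\alpha>1$ and $u$, $m$ smooth with $m>0$; boundary terms vanish by periodicity, and $-{\overline{H}}$ plays the role of the Lagrange multiplier of the mass constraint):

```latex
% Variation of J in u along a periodic test function \phi:
\[
0=\frac{d}{d\varepsilon}\Big|_{\varepsilon=0}J[u+\varepsilon\phi,m]
 =\frac{1}{\alpha-1}\int_{{\mathbb{T}}^d}m^{1-\alpha}|P+Du|^{\gamma-2}(P+Du)\cdot D\phi\,dx,
\]
% so, integrating by parts (no boundary terms on the torus), the second
% equation of the MFG system holds:
\[
-\operatorname{div}\big(m^{1-\alpha}|P+Du|^{\gamma-2}(P+Du)\big)=0.
\]
% Variation in m, using \partial_m\big[m^{1-\alpha}/(\alpha-1)\big]=-m^{-\alpha}
% and the multiplier -\overline{H} for the constraint \int_{{\mathbb{T}}^d}m\,dx=1:
\[
-\frac{|P+Du|^{\gamma}}{\gamma m^{\alpha}}-V(x)+g(m)=-{\overline{H}},
\]
% which, upon rearranging and recalling g=G', is the first equation of the system.
```

The multiplicative constant $\frac{1}{\alpha-1}$ in the first variation is harmless, which is why it does not appear in the divergence-form equation.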
Moreover, our methods suggest an alternative computational approach for stationary MFGs with congestion that complements the existing ones, see \cite{AFG}. Our variational methods apply neither to the case $0<\alpha<1$ nor to second-order MFGs. Sections~\ref{2dcase} and \ref{tsom} are devoted to the study of these cases. In Section \ref{2dcase}, we examine first-order MFGs with $0<\alpha<1$ in the two-dimensional case. There, we perform a change of variables for which our variational methods can be used. Next, in Section \ref{tsom}, we study various cases where we can reduce second-order MFG systems to scalar equations. Moreover, we show that these equations are equivalent to Euler--Lagrange equations of suitable functionals. Finally, in Sections \ref{num}--\ref{num9}, we use our results to develop numerical methods for MFGs. \section{Some explicit examples} \label{ee} Before developing the general theory, we consider three examples that illustrate some of the properties of \eqref{main}. First, we prove that \eqref{main} may fail to have classical solutions. Next, we examine the critical congestion case, $\alpha=1$. In this case, $u$ is constant and the existence of solutions to \eqref{main} can be addressed by solving algebraic equations. \subsection{Lack of classical solutions} \label{lcs} In general, \eqref{main} may not have classical solutions. To illustrate this behavior, we consider the case when $P=0,$ where the analysis is elementary. Here, to simplify the presentation, we take $\gamma=2$ and $g(m)=m$, but the analysis is similar for the general case. By adding a constant to $V$, we can assume without loss of generality that \begin{equation} \label{nv} \int_{{\mathbb{T}}^d} V\,dx=0. \end{equation} In this case, \eqref{main} becomes \begin{equation} \label{main2} \begin{cases} \frac{|Du|^2}{2 m^\alpha}+V(x)=m+{\overline{H}}\\ -\operatorname{div}(m^{1-\alpha} Du)=0.
\end{cases} \end{equation} Now, we assume that \((u,m, \overline H)\) is a classical solution to \eqref{main2} with \(m>0\) and \(\int_{{\mathbb{T}}^d} m\,dx =1\). Then, multiplying the second equation by $u$ and integrating over ${\mathbb{T}}^d$, we have \[ \int_{{\mathbb{T}}^d} m^{1-\alpha}|Du|^2\,dx=0. \] Hence, because $m$ does not vanish, $u$ is constant. Accordingly, the first equation in \eqref{main2} becomes \[ m=-{\overline{H}} + V(x). \] Using $\int_{{\mathbb{T}}^d}m \,dx=1$ and \eqref{nv}, we obtain \begin{equation}\label{eq:lcs} m=1+V(x). \end{equation} However, without further assumptions, $1+V$ may take negative values and, thus, \eqref{main} may not have a classical solution with $m>0$. For a general MFG of the form \[ \begin{cases} m^\alpha H(\frac{Du}{m^\alpha}, x)=g(m)+{\overline{H}}\\ \operatorname{div}(D_pH(\frac{Du}{m^\alpha}, x) m)=0 \end{cases} \] with a Hamiltonian $H:{\mathbb{R}}^d\times {\mathbb{T}}^d\to {\mathbb{R}}$ satisfying $D_pH(p,x)\cdot p>0$ for $p\in {\mathbb{R}}^d \backslash \{0\}$, a similar argument shows that $u$ is constant. \subsection{Critical congestion $\alpha=1$} \label{ccsec} If $\alpha=1$ and $\gamma=2$, the second equation in \eqref{main} becomes $\Delta u=0$. Hence, $u$ is constant. Therefore, the first equation in \eqref{main} is the following algebraic equation for $m$: \[ \frac{|P|^2}{2 m}-g(m)={\overline{H}}-V(x). \] Suppose that $g$ is increasing and that $P\neq 0$. Then, for each $x$ and for each fixed ${\overline{H}}$, the preceding equation has at most one solution, $m(x)>0$. Furthermore, the constant ${\overline{H}}$ is determined by the normalization condition on $m$. The $\gamma\neq 2$ case is similar; the second equation in \eqref{main} is \[ \operatorname{div}(|P+Du|^{\gamma-2}(P+Du))=0. \] The prior equation is the $\gamma$-Laplacian equation for the function $P\cdot x+u(x)$ and, again, $u$ is constant due to the periodicity.
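The algebraic characterisation above lends itself to a direct numerical treatment: solve the pointwise equation for $m(x)$ and tune ${\overline{H}}$ by bisection until the normalization $\int m\,dx=1$ holds. A minimal sketch for $\gamma=2$ and $g(m)=m$ (pure Python; the one-dimensional torus, $V(x)=\sin(2\pi x)$, and $P=1$ are hypothetical choices, not taken from the paper):

```python
# Critical congestion, alpha = 1, gamma = 2, g(m) = m:
# for each x, m(x) is the positive root of  m^2 + (Hbar - V(x)) m - |P|^2/2 = 0,
# and the constant Hbar is fixed by bisection so that the total mass is one.
import math

P = 1.0
xs = [k / 1000 for k in range(1000)]          # uniform grid on the torus [0, 1)
V = [math.sin(2 * math.pi * x) for x in xs]   # hypothetical potential, mean zero

def m_of(Hbar):
    # positive root of the pointwise quadratic; strictly decreasing in Hbar
    return [0.5 * (v - Hbar + math.sqrt((v - Hbar) ** 2 + 2 * P ** 2)) for v in V]

def mass(Hbar):
    return sum(m_of(Hbar)) / len(xs)          # quadrature of int m dx

lo, hi = -10.0, 10.0                          # mass(lo) > 1 > mass(hi)
for _ in range(60):                           # bisection on Hbar
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if mass(mid) > 1.0 else (lo, mid)
Hbar = 0.5 * (lo + hi)
m = m_of(Hbar)
print(Hbar, min(m))                           # m stays positive since P != 0
```

Since each pointwise root is strictly decreasing in ${\overline{H}}$, the total mass is monotone and the bisection converges; and, as observed above, $P\neq 0$ keeps the computed density positive.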
\section{Some formal a priori bounds} \label{apbsec} In this section, we prove some a priori bounds for the following more general version of \eqref{main}: \begin{equation}\label{mainGen} \begin{cases} m^{\bar{\alpha}} H\big(\frac{P+Du}{m^{\bar{\alpha}}}\big)+V(x)=g(m)+{\overline{H}}\\ -\operatorname{div}\left(mD_pH\big(\frac{P+Du}{m^{\bar{\alpha}}}\big) \right)=0, \end{cases} \end{equation} where \(H:{\mathbb{R}}^d\to{\mathbb{R}}\) is the Hamiltonian and \(\bar \alpha\) is the congestion parameter. We note that for \begin{equation}\label{hamilt1} H(p)=\frac{|p|^\gamma}{\gamma}, \end{equation} \eqref{mainGen} reduces to \eqref{main} by setting ${\bar{\alpha}}:=\frac{\alpha}{\gamma-1}$. The bounds established next give partial regularity for solutions of \eqref{main}. Unfortunately, this regularity is not sufficient to ensure the existence of solutions. We examine the existence of solutions in the next section using methods from the calculus of variations. As before, we assume that: \begin{hyp}\label{A0} The congestion parameter, ${\bar{\alpha}}$, is non-negative, $V\in C^\infty({\mathbb{T}}^d)$, and $g:{\mathbb{R}}^+\to {\mathbb{R}}$ is a $C^\infty$ monotonically increasing function. \end{hyp} Regarding the Hamiltonian, we work under the following assumptions: \begin{hyp}\label{A4} The Hamiltonian, $H:{\mathbb{R}}^d\to{\mathbb{R}}$, is a $C^\infty$ function. Moreover, there exists a constant, $C>0$, such that \begin{equation*} D_pH(p)\cdot p - H(p)\geqslant \frac 1 C H(p)-C \end{equation*} for all $p\in \mathbb R^d$. \end{hyp} \begin{hyp}\label{As1.2} There exist $\gamma>1$ and a constant, $C>0$, such that \[ \frac1C|p|^{\gamma}- C\leqslant H(p)\leqslant C\big(|p|^{\gamma}+1\big) \] for all $p\in \mathbb R^d$. 
\end{hyp} \begin{remark}\label{A3} Note that Assumptions~\ref{A4} and \ref{As1.2} imply that there exists a constant, $\tilde C>0$, such that \begin{equation*} D_pH(p)\cdot p - H(p)\geqslant \frac1{\tilde C}|p|^{\gamma}- \tilde C \end{equation*} for all $p\in \mathbb R^d$. \end{remark} \begin{hyp}\label{ADpH} For all \(\delta>0\), there exists a constant, $C_\delta>0$, such that \[ |D_pH(p)| \leqslant C_\delta + \delta H(p) \] for all $p\in \mathbb R^d$. \end{hyp} \begin{hyp}\label{ADpH2} There exists a constant, $C>0$, such that \[ |D_pH(p)| \leqslant C(|p|^{\gamma-1}+1) \] for all $p\in \mathbb R^d$. \end{hyp} \begin{hyp}\label{A5} There exists a constant, $\varepsilon\in (0, 1)$, such that for any symmetric matrix $M\in {\mathbb{R}}^{d\times d}$ and vector $p\in {\mathbb{R}}^d$, we have \[ \bar{\alpha}|D^2_{pp}H(p)M|^2|p|^2\leqslant 4(1-\varepsilon)(D_pH(p)\cdot p-H(p)) \operatorname{tr}(D^2_{pp} H(p)MM), \] where $|A|=\sup\limits_{x\neq 0} \frac{|Ax|}{|x|}$ for a matrix $A$. \end{hyp} The preceding assumptions are similar to the ones in \cite{GE}, where examples of Hamiltonians satisfying them are discussed. For example, it is not hard to check that $H(p)=\frac{|p|^{\gamma}}{\gamma}+b \cdot p$ with \(b\in{\mathbb{R}}^d\) satisfies the prior assumptions, including Assumption \ref{A5} for $\bar{\alpha}< 4/{\gamma}$ and $\gamma\in[1,2]$. Next, we prove two a priori estimates for solutions of \eqref{mainGen}, the first of which holds only for \({\bar{\alpha}} \leqslant 1\). \begin{proposition}\label{prop:foe} Suppose that Assumptions~\ref{A0}--\ref{ADpH} hold and that \({\bar{\alpha}} \leqslant 1\).
Then, there exists a constant, $C>0$, such that, for any smooth solution, $(u,m, {\overline{H}})$, of \eqref{mainGen} with \(m>0\) and \(\int_{{\mathbb{T}}^d} m\, dx =1\), we have \begin{equation} \label{eq:foe0} \begin{aligned} \int_{{\mathbb{T}}^d} \left[\Big|\frac{P+Du}{m^{\bar \alpha}}\Big|^\gamma ( m^{\bar \alpha } +m^{\bar \alpha +1}) + (m-1) g(m)\right]dx\leqslant C\bigg(1+ \int_{{\mathbb{T}}^d} m^{\bar \alpha +1}\, dx\bigg) . \end{aligned} \end{equation} \end{proposition} \begin{remark}\label{rmk:onfoe} We observe that, for several examples of \(g\), \eqref{eq:foe0} encodes \textit{better} integrability conditions. For instance, if there exist \(c>0\) and \(\theta>\bar\alpha\) such that \(\frac1c m^\theta - c \leqslant g(m) \leqslant c(1+ m^\theta)\) in the previous proposition, then \eqref{eq:foe0} can be replaced by \begin{equation} \label{eq:foe00} \begin{aligned} \int_{{\mathbb{T}}^d} \left[\Big|\frac{P+Du}{m^{\bar \alpha}}\Big|^\gamma ( m^{\bar \alpha } +m^{\bar \alpha +1}) + m^{\theta +1}\right]dx\leqslant C. \end{aligned} \end{equation} \end{remark} \begin{proof}[Proof of Proposition~\ref{prop:foe}] We first subtract the first equation in \eqref{mainGen} multiplied by $(m-1)$ from the second equation in \eqref{mainGen} multiplied by $u$, and then integrate the resulting equation over \({\mathbb{T}}^d\) to obtain \begin{equation}\label{eq:foe1} \begin{aligned} &\int_{{\mathbb{T}}^d} m^{\bar \alpha +1} \bigg(D_p H\Big(\frac{P+Du}{m^{\bar{\alpha}}} \Big)\cdot \frac{P+Du}{m^{\bar{\alpha}}}- H\Big( \frac{P+Du}{m^{\bar{\alpha}}} \Big)\bigg)dx + \int_{{\mathbb{T}}^d} m^{\bar \alpha} H\Big(\frac{P+Du}{m^{\bar{\alpha}}} \Big)\,dx \\ &\quad - \int_{{\mathbb{T}}^d} m D_p H\Big(\frac{P+Du}{m^{\bar{\alpha}}} \Big)\cdot P\,dx+ \int_{{\mathbb{T}}^d} (m-1) g(m) \,dx = \int_{{\mathbb{T}}^d} V(x) (m-1)\,dx\leqslant 2\Vert V\Vert_\infty , \end{aligned} \end{equation} where we used integration by parts and the condition \(\int_{{\mathbb{T}}^d} m\, dx =1\).
By Remark~\ref{A3}, we have \begin{align*} &\int_{{\mathbb{T}}^d} m^{\bar \alpha +1} \bigg(D_p H\Big(\frac{P+Du}{m^{\bar{\alpha}}} \Big)\cdot \frac{P+Du}{m^{\bar{\alpha}}}- H\Big( \frac{P+Du}{m^{\bar{\alpha}}} \Big)\bigg)dx\\ &\quad \geqslant \frac1{\tilde C} \int_{{\mathbb{T}}^d} m^{\bar \alpha +1} \Big|\frac{P+Du}{m^{\bar{\alpha}}} \Big|^\gamma \,dx - \tilde C \int_{{\mathbb{T}}^d} m^{\bar \alpha +1} \, dx. \end{align*} Moreover, by Assumption~\ref{As1.2}, we have \begin{equation} \label{eq:foe3} \begin{aligned} \int_{{\mathbb{T}}^d} m^{\bar \alpha} H\Big(\frac{P+Du}{m^{\bar{\alpha}}} \Big)\,dx \geqslant \frac1{ C} \int_{{\mathbb{T}}^d} m^{\bar \alpha } \Big|\frac{P+Du}{m^{\bar{\alpha}}} \Big|^\gamma \,dx - C \int_{{\mathbb{T}}^d} m^{\bar \alpha } \, dx. \end{aligned} \end{equation} Next, using Assumptions~\ref{As1.2} and~\ref{ADpH} with \(\delta=\frac{1}{2C(\tilde C+ C)|P|}\) and recalling that \(\int_{{\mathbb{T}}^d} m\, dx =1\), we can find a constant, \(\bar C>0\), such that \begin{equation} \label{eq:foe5} \begin{aligned} -\int_{{\mathbb{T}}^d} m D_p H\Big(\frac{P+Du}{m^{\bar{\alpha}}} \Big)\cdot P\,dx \geqslant -\frac1{2(C+\tilde C)} \int_{{\mathbb{T}}^d} m \Big|\frac{P+Du}{m^{\bar{\alpha}}} \Big|^\gamma \,dx - \bar C. \end{aligned} \end{equation} Finally, we observe that if \(0<{\bar{\alpha}} \leqslant 1\), then \(m\leqslant m^{\bar{\alpha}}\) if \(m\leqslant1\) and \(m\leqslant m^{{\bar{\alpha}}+1}\) if \(m\geqslant1\).
Hence, \begin{equation*} \begin{aligned} &-\frac1{2(C+\tilde C)} \int_{{\mathbb{T}}^d} m \Big|\frac{P+Du}{m^{\bar{\alpha}}} \Big|^\gamma \,dx\\ &\qquad \geqslant -\frac1{2C} \int_{{\mathbb{T}}^d} m^{\bar{\alpha}} \Big|\frac{P+Du}{m^{\bar{\alpha}}} \Big|^\gamma \,dx-\frac1{2\tilde C} \int_{{\mathbb{T}}^d} m^{{\bar{\alpha}} + 1} \Big|\frac{P+Du}{m^{\bar{\alpha}}} \Big|^\gamma \,dx, \end{aligned} \end{equation*} which, together with \eqref{eq:foe1}--\eqref{eq:foe5} and the estimate \(m^{\bar{\alpha}} \leqslant m^{{\bar{\alpha}}+1} + 1\), yields \eqref{eq:foe0}. \end{proof} \begin{proposition}\label{prop:foe2} Suppose that Assumptions~\ref{A0}--\ref{As1.2} and Assumption~\ref{ADpH2} hold. Then, there exists a constant, $C>0$, such that, for any smooth solution, $(u,m, {\overline{H}})$, of \eqref{mainGen} with \(m>0\) and \(\int_{{\mathbb{T}}^d} m\, dx =1\), we have \begin{equation} \label{eq:foe02} \begin{aligned} \int_{{\mathbb{T}}^d} \left[\Big|\frac{P+Du}{m^{\bar \alpha}}\Big|^\gamma ( m^{\bar \alpha } +m^{\bar \alpha +1}) + (m-1) g(m)\right]dx\leqslant C\bigg(1+ \int_{{\mathbb{T}}^d} m^{1-\bar \alpha(\gamma-1)}\, dx\bigg) . \end{aligned} \end{equation} \end{proposition} \begin{proof} We proceed exactly as in the proof of Proposition~\ref{prop:foe} up to the estimate \eqref{eq:foe5}. Here, to estimate the term on the left-hand side of \eqref{eq:foe5}, we argue as follows. 
Using Assumption~\ref{ADpH2} and the condition \(\int_{{\mathbb{T}}^d} m\, dx =1 \) first, and then the fact that \(|p|^{\gamma-1} \leqslant \frac{1}{2\tilde CC|P|} |p|^\gamma + \bar C\) for all \(p\in{\mathbb{R}}^d\) and for some positive constant \(\bar C\) independent of \(p\), we obtain \begin{equation}\label{eq:foe62} \begin{aligned} &-\int_{{\mathbb{T}}^d} m D_p H\Big(\frac{P+Du}{m^{\bar{\alpha}}} \Big)\cdot P\,dx \\ \geqslant& - C|P|\int_{{\mathbb{T}}^d} m^{1+{\bar{\alpha}}} \frac{|P+Du|^{\gamma-1}}{m^{{\bar{\alpha}}\gamma}}\,dx -C|P|\\ \geqslant&-\frac1{2\tilde C} \int_{{\mathbb{T}}^d} m^{1+{\bar{\alpha}}} \Big|\frac{P+Du}{m^{\bar{\alpha}}} \Big|^\gamma \,dx -C|P|\bigg(1+ \bar C \int_{{\mathbb{T}}^d} m^{1-\bar \alpha(\gamma-1) }\, dx\bigg) . \end{aligned} \end{equation} From \eqref{eq:foe1}--\eqref{eq:foe3}, \eqref{eq:foe62}, and using the estimate \(m^{\bar{\alpha}} \leqslant m^{{\bar{\alpha}}+1} + 1\), we deduce \eqref{eq:foe02}. \end{proof} \begin{remark}\label{rmk:onfoe2} If \({\bar{\alpha}}(\gamma-1) \leqslant1\) and, for instance, there exist \(c>0\) and \(\theta>0\) such that \(\frac1c m^\theta - c \leqslant g(m) \leqslant c(1+ m^\theta)\) in the previous proposition, then \eqref{eq:foe02} can be replaced by \eqref{eq:foe00}. \end{remark} \begin{remark}\label{rmk:onfoeinic} As we mentioned before, \eqref{mainGen} reduces to \eqref{main} for ${\bar{\alpha}}=\frac{\alpha}{\gamma-1}$ and \(H\) given by \eqref{hamilt1}. In this case, the condition \(0<{\bar{\alpha}}\leqslant 1\) is equivalent to \(0<\alpha\leqslant\gamma-1\), while \({\bar{\alpha}}(\gamma-1) \leqslant1\) is equivalent to \(0<\alpha\leqslant 1\). For \(\alpha\) and \(\gamma\) in this range, Propositions~\ref{prop:foe} and \ref{prop:foe2} and Remarks~\ref{rmk:onfoe} and \ref{rmk:onfoe2} provide a priori estimates for smooth solutions of \eqref{main}. \end{remark} The following proposition gives an a priori second-order estimate. 
\begin{proposition} Suppose that Assumptions~\ref{A0} and \ref{A5} hold. Then, there exists a constant, $C>0$, such that, for any smooth solution, $(u,m, {\overline{H}})$, of \eqref{mainGen} with \(m>0\) and \(\int_{{\mathbb{T}}^d} m\, dx =1\), we have \begin{equation}\label{SecOrdEst} \int_{{\mathbb{T}}^d}\operatorname{tr}\Big(D^2_{pp}H\Big(\frac{P+Du}{m^{\bar{\alpha}}} \Big)D^2uD^2u\Big)m^{1-{\bar{\alpha}}} \,dx+\int_{{\mathbb{T}}^d} g'(m)|Dm|^2\,dx \leqslant C. \end{equation} \end{proposition} \begin{proof} For simplicity, we omit the argument, $\frac{P+Du}{m^{\bar{\alpha}}}$, of the Hamiltonian and its derivatives. Differentiating the first equation in \eqref{mainGen} with respect to $x_k$ and using the Einstein summation convention, we have \begin{equation}\label{HJdiff} {\bar{\alpha}} m^{{\bar{\alpha}}-1}m_{x_k}H +D_{p_i}H u_{x_ix_k}-{\bar{\alpha}} \frac{ (P_i+u_{x_i})D_{p_i} H m_{x_k}}{m} +V_{x_k}(x)=g'(m)m_{x_k}. \end{equation} Next, we note that \[ (D_{p_i}H u_{x_ix_k})_{x_k}=\frac{D^2_{p_ip_j}H u_{x_jx_k}u_{x_ix_k}}{m^{\bar{\alpha}}}-\frac{{\bar{\alpha}} D^2_{p_ip_j}H(P_j+u_{x_j})m_{x_k}u_{x_i x_k}}{m^{{\bar{\alpha}}+1}} +D_{p_i}H (u_{x_kx_k})_{x_i}. \] By differentiating \eqref{HJdiff} with respect to $x_k$ and using the previous equality, we obtain \begin{align}\label{HJ2diff} \begin{split} ({\bar{\alpha}} m^{{\bar{\alpha}}-1}m_{x_k}H)_{x_k}+& \frac{D^2_{p_ip_j}H u_{x_jx_k}u_{x_ix_k}}{m^{\bar{\alpha}}} -\frac{{\bar{\alpha}} D^2_{p_ip_j}H(P_j+u_{x_j})m_{x_k} u_{x_i x_k}}{m^{{\bar{\alpha}}+1}}\\ +D_{p_i}H (u_{x_kx_k})_{x_i} &-\left({\bar{\alpha}} \frac{(P_i+u_{x_i}) D_{p_i}H m_{x_k}}{m}\right)_{x_k} +V_{x_kx_k}(x)=(g'(m)m_{x_k})_{x_k}. \end{split} \end{align} Now, we multiply the second equation in \eqref{mainGen} by $u_{x_kx_k}$ and integrate by parts. Accordingly, we get the identity \[ 0=\int_{{\mathbb{T}}^d}-\operatorname{div}(mD_pH)u_{x_kx_k}\,dx=\int_{{\mathbb{T}}^d}m D_{p_i}H(u_{x_kx_k})_{x_i}\,dx.
\] Next, we multiply \eqref{HJ2diff} by $m$, integrate by parts, and use the prior identity to derive \begin{align*} &\int_{{\mathbb{T}}^d} \operatorname{tr} (D^2_{pp}H D^2uD^2u) m^{1-{\bar{\alpha}}}\,dx+\int_{{\mathbb{T}}^d}g'(m)|Dm|^2 \,dx\\ &\quad\leqslant \int_{{\mathbb{T}}^d}{\bar{\alpha}} m^{{\bar{\alpha}}-1}H|Dm|^2\,dx-\int_{{\mathbb{T}}^d}{\bar{\alpha}} \frac{(P+Du)\cdot D_pH|Dm|^2}{m}\,dx-\int_{{\mathbb{T}}^d}m \Delta V \,dx\\ &\qquad+\int_{{\mathbb{T}}^d}\frac{{\bar{\alpha}} |D^2_{pp}H D^2u||P+Du| |Dm| }{m^{{\bar{\alpha}}}}\,dx\\ &\quad\leqslant \int_{{\mathbb{T}}^d}{\bar{\alpha}} m^{{\bar{\alpha}}-1}|Dm|^2\left(H-D_pH\cdot \frac{P+Du}{m^{\bar{\alpha}}}\right) dx +\int_{{\mathbb{T}}^d}\frac{{\bar{\alpha}} |D^2_{pp}H D^2u||P+Du||Dm|}{m^{{\bar{\alpha}}}}dx \\ & \qquad+ \Vert V\Vert_{C^2({\mathbb{T}}^d)}, \end{align*} where in the last inequality we used that $m$ is a probability density. Setting $Q:=\frac{P+Du}{m^{\bar{\alpha}}}$ and \(C:= \Vert V\Vert_{C^2({\mathbb{T}}^d)}\), so far we proved that \begin{equation}\label{soe1} \begin{aligned} &\int_{{\mathbb{T}}^d}\operatorname{tr}(D^2_{pp}H(Q)D^2uD^2u)m^{1-{\bar{\alpha}}} \,dx+\int_{{\mathbb{T}}^d} g'(m)|Dm|^2\,dx\\ &\quad\leqslant \int_{{\mathbb{T}}^d}{\bar{\alpha}} m^{{\bar{\alpha}}-1}|Dm|^2\left(H(Q)-D_pH(Q)\cdot Q\right) \,dx +\int_{{\mathbb{T}}^d}{\bar{\alpha}} |D^2_{pp}H(Q) D^2u||Q||Dm|dx + C. 
\end{aligned} \end{equation} Finally, from Assumption~\ref{A5} and Cauchy's inequality, we have \begin{align*} &{\bar{\alpha}} |D_{pp}^2H(Q) D^2u ||Q||Dm|\\ &\quad\leqslant 2\sqrt{{\bar{\alpha}}} \sqrt{(D_pH(Q)\cdot Q-H(Q))}m^{\frac{{\bar{\alpha}}-1}{2} } |Dm|\sqrt{ (1-\varepsilon)\operatorname{tr}(D^2_{pp} H(Q)D^2u D^2u) } m^{\frac{1-{\bar{\alpha}}}{2} } \\ &\quad\leqslant \bar{\alpha}(D_pH(Q)\cdot Q -H(Q)) m^{\bar{\alpha}-1} |Dm|^2 + (1-\varepsilon) \operatorname{tr}(D_{pp}^2H(Q)D^2uD^2u) m^{1-{\bar{\alpha}} }, \end{align*} which together with \eqref{soe1} gives \[ \int_{{\mathbb{T}}^d}\varepsilon \operatorname{tr}(D^2_{pp}H(Q)D^2uD^2u)m^{1-{\bar{\alpha}}} dx+\int_{{\mathbb{T}}^d} g'(m)|Dm|^2dx\leqslant C.\qedhere \] \end{proof} It is worth mentioning that the estimate \eqref{SecOrdEst} is an analog of the second-order estimates proved in \cite{ll2}. For $g(m)=m^\theta$, \eqref{SecOrdEst} can be combined with the Sobolev embedding theorem to yield improved integrability for $m$. \section{A variational problem} \label{vp} In this section, we study a minimization problem associated with the functional in \eqref{nvp}, $J$. To incorporate the case in which \(m\) is zero on a set of positive measure, we consider the following extension of \(J\). Let \(\bar J\) be the functional defined by \begin{equation} \label{nvpnew} \bar J[u,m]=\int_{{\mathbb{T}}^d} \left[\bar f(\nabla u, m)-V m +G(m)\right] dx, \end{equation} where, for \((p,m)\in{\mathbb{R}}^d \times {\mathbb{R}}^+_0\), \begin{equation}\label{barf} \begin{aligned} \bar f (p,m) = \begin{cases} \frac{|P+p|^\gamma}{\gamma (\alpha-1) m^{\alpha-1}} & \text{if } m\not=0,\\ +\infty & \text{if } m=0 \text{ and } p\not= -P,\\ 0 &\text{if } m=0 \text{ and } p= -P.
\end{cases} \end{aligned} \end{equation} We aim to prove the existence and uniqueness of solutions to the variational problem \begin{equation} \label{mmz} \min_{(u,m)\in {\mathcal{A}}_{q,r}} \bar J[u,m], \end{equation} where $\bar J$ is given by \eqref{nvpnew} and ${\mathcal{A}}_{q,r}$ is the set \[ {\mathcal{A}}_{q,r}=\left\{(u,m)\in W^{1,q}({\mathbb{T}}^d) \times L^r({\mathbb{T}}^d):\int_{{\mathbb{T}}^d} u \,dx=0,\int_{{\mathbb{T}}^d} m \,dx=1,m\geqslant 0\right\}, \] with $q\geqslant1$ and \(r\geqslant 1\) to be chosen later. To this end, we prove in Section~\ref{auxformin} that \(\bar f \) is convex and lower semi-continuous. These properties entail the sequential weak lower semi-continuity of \(\bar J\) in an appropriate function space. This result is a key ingredient in the proof of the existence of solutions to \eqref{mmz}, which is presented in Section~\ref{secexist}. Next, in Section~\ref{uniqmin}, we discuss the uniqueness of these solutions. Finally, in Section~\ref{varexplicit}, we further characterize the solutions in the \(P=0\) case. This characterization will be useful to validate our numerical methods in Sections~\ref{num}--\ref{num9}. \subsection{Lower semi-continuity properties of $\bar J$}\label{auxformin} Here, we study the lower semi-continuity of the functional $\bar J$ given by \eqref{nvpnew}. We first prove that \(\bar f \) defined in \eqref{barf} is convex and lower semi-continuous. \begin{lem} \label{barfcxlsc} Suppose that $1<\alpha\leqslant\gamma$. Then, $\bar f$ given by \eqref{barf} is convex and lower semi-continuous in \({\mathbb{R}}^d \times {\mathbb{R}}^+_0\). \end{lem} \begin{proof} We begin by proving that \(\bar f\) is convex in \({\mathbb{R}}^d\times {\mathbb{R}}^+\). Without loss of generality, we assume that \(P=0\). Fix \(\alpha\) and \(\gamma\) such that $1<\alpha\leqslant\gamma$, and set \(\kappa_1 = \gamma - \alpha +1\) and \(\kappa_2 = \frac{\gamma}{\gamma - \alpha +1}\).
Note that \(1 \leqslant \kappa_1 <\gamma \) and \(1 < \kappa_2 \leqslant \gamma\). Moreover, for \((p,m)\in{\mathbb{R}}^d \times {\mathbb{R}}^+\), we may rewrite \(\bar f \) as \[ \bar f(p,m) =\frac{1}{\gamma(\alpha-1)} \left( m \left|\frac{p}{m}\right|^{\kappa_2} \right) ^{\kappa_1} = \phi\left( m\,\psi\left( \frac{p}{m} \right)\right), \] where \(\phi(t) = \frac{1}{\gamma(\alpha-1)}t^{\kappa_1}\) for \(t \in {\mathbb{R}}^+_0\) and \(\psi(p)=|p|^{\kappa_2}\) for \(p\in{\mathbb{R}}^d\). Because \(\psi\) is a convex function in \({\mathbb{R}}^d\), its perspective function, defined by \(\bar \psi (p,m)= m\,\psi\left( \frac{p}{m} \right)\), is convex in \({\mathbb{R}}^d\times {\mathbb{R}}^+\) (see, for instance, \cite[Lemma~2]{DaMa08}). Because \(\phi\) is an increasing convex function in \({\mathbb{R}}^+_0\), \(\bar f\) is a convex function in \({\mathbb{R}}^d\times {\mathbb{R}}^+\). Next, we prove that \(\bar f\) is convex in \({\mathbb{R}}^d\times {\mathbb{R}}^+_0\). Let \(\lambda \in (0,1)\), \(p_1,\, p_2 \in {\mathbb{R}}^d\), \(m_1,\, m_2 \in {\mathbb{R}}^+_0\). We want to show that \begin{equation}\label{barfcx} \begin{aligned} \bar f(\lambda (p_1,m_1) + (1-\lambda) (p_2, m_2)) \leqslant \lambda\bar f (p_1,m_1) + (1-\lambda) \bar f(p_2, m_2). \end{aligned} \end{equation} Because the convexity of \(\bar f\) in \({\mathbb{R}}^d\times {\mathbb{R}}^+\) covers the case \(m_1,\, m_2>0\), it remains to prove that \eqref{barfcx} holds when either \(m_1=0\) or \(m_2=0\). Consider first the \(m_1=0 \) case. If \(p_1\not=-P\), or if \(m_2=0\) and \(p_2\not=-P\), then the right-hand side of \eqref{barfcx} equals \(\infty\); thus, \eqref{barfcx} holds in these sub-cases. If \(p_1=-P\) and \(p_2=-P\), then \eqref{barfcx} reduces to the condition \(0\leqslant 0\); thus, \eqref{barfcx} holds in this sub-case. Finally, if \(p_1=-P\), \(p_2\not=-P\), and \(m_2\not=0\), then \eqref{barfcx} reduces to the condition \((1-\lambda)^{\gamma-\alpha}\leqslant 1\); thus, \eqref{barfcx} also holds in this sub-case because \(1-\lambda \in (0,1)\) and \(\gamma-\alpha\geqslant0\).
The \(m_2=0\) case is analogous. Finally, we prove that $\bar f$ is lower semi-continuous in \({\mathbb{R}}^d \times {\mathbb{R}}^+_0\). This amounts to showing that if \((p,m),\, (p_j,m_j) \in {\mathbb{R}}^d \times {\mathbb{R}}^+_0\), \(j\in{\mathbb{N}}\), are such that \((p_j,m_j) \to (p,m)\) in \({\mathbb{R}}^d \times {\mathbb{R}}^+_0\) as \(j\to\infty\), then \begin{equation}\label{barflsc} \begin{aligned} \bar f(p,m) \leqslant \liminf_{j\to\infty} \bar f(p_j,m_j). \end{aligned} \end{equation} Because \(\bar f\) is convex in \({\mathbb{R}}^d\times {\mathbb{R}}^+_0\), it is continuous in the interior of its effective domain. Thus, we are left to prove that \eqref{barflsc} holds when \(m=0\). Assume that \(m=0\). If \(p=-P\), then \eqref{barflsc} holds because \(\bar f(p,m) = \bar f(-P,0) =0 \) in this case. If \(p\not=-P\), then \(p_j\not=-P\) for all \(j\in{\mathbb{N}}\) sufficiently large. For any such \(j\), we have \begin{equation*} \begin{aligned} \bar f (p_j,m_j) = \begin{cases} \frac{|P+p_j|^\gamma}{\gamma (\alpha-1) m_j^{\alpha-1}} & \text{if } m_j\not=0,\\ +\infty & \text{if } m_j=0 . \end{cases} \end{aligned} \end{equation*} Define \(S=\{j\in{\mathbb{N}}\!: \, m_j \not =0\}\). If \(S\) has finite cardinality, then \(\bar f(p_j,m_j) = \infty\) for all \(j\in{\mathbb{N}}\) sufficiently large; thus \eqref{barflsc} holds. If \(S\) has infinite cardinality, then \[\liminf_{j\to\infty} \bar f(p_j,m_j) = \liminf_{j\to\infty\atop j\in S} \bar f(p_j,m_j) = \liminf_{j\to\infty\atop j\in S} \frac{|P+p_j|^\gamma}{\gamma (\alpha-1) m_j^{\alpha-1}} = \frac{|P+p|^\gamma}{0^+}= \infty;\] thus, \eqref{barflsc} holds. \end{proof} The following proposition is a simple consequence of \cite[Theorem~5.14]{FoLe07} and is closely related to the sequential weak lower semi-continuity of \(\bar J\). \begin{proposition}\label{slscLq} Let \(\bar f\) be the function given by \eqref{barf} with \(1<\alpha \leqslant \gamma\).
Then, the functional \[ (v_1,v_2) \in L^1({\mathbb{T}}^d;{\mathbb{R}}^d) \times L^1({\mathbb{T}}^d; {\mathbb{R}}^+_0) \mapsto \int_{{\mathbb{T}}^d} \left[ \bar f(v_1(x), v_2(x)) + G(v_2(x)) \right]\, dx \] is sequentially lower semi-continuous with respect to the weak convergence in \(L^1({\mathbb{T}}^d) \times L^1({\mathbb{T}}^d; {\mathbb{R}}^+_0)\). \end{proposition} \begin{proof} Because \(G\) is a real-valued, convex function on \({\mathbb{R}}^+_0\), we have \(G(m)\geqslant -C_0(1+ m)\) for all \(m\in{\mathbb{R}}^+_0\) and for some positive constant \(C_0\) independent of \(m\). Then, by Lemma~\ref{barfcxlsc} and the non-negativeness of \(\bar f\), the mapping \begin{equation*} \begin{aligned} (p,m)\in {\mathbb{R}}^d \times {\mathbb{R}}^+_0 \mapsto \bar f(p,m) + G(m) \end{aligned} \end{equation*} is convex and lower semi-continuous in \({\mathbb{R}}^d \times {\mathbb{R}}^+_0\) and bounded from below by \(-C_0(1+ |m|)\) for all \((p,m)\in {\mathbb{R}}^d \times {\mathbb{R}}^+_0\). Consequently, Proposition~\ref{slscLq} is an immediate consequence of \cite[Theorem~5.14]{FoLe07}. \end{proof} As a simple corollary to the previous proposition, we obtain the following lower semi-continuity result on \(\bar J\). \begin{corollary}\label{barJwlsc} Let \(q, \, r, r'\geqslant 1\) be such that \(\frac{1}{r} + \frac{1}{r'}= 1\). Then, the functional $\bar J$ given by \eqref{nvpnew} with \(V\in L^{r'}({\mathbb{T}}^d)\) is sequentially weakly lower semi-continuous in \(W^{1,q}({\mathbb{T}}^d) \times L^r({\mathbb{T}}^d;{\mathbb{R}}^+_0)\). \end{corollary} \begin{proof} We observe first that the functional \begin{equation*} \begin{aligned} m\mapsto \int_{{\mathbb{T}}^d} V(x) m(x)\, dx \end{aligned} \end{equation*} is continuous with respect to the weak convergence in \(L^r({\mathbb{T}}^d)\) because \(V\in L^{r'}({\mathbb{T}}^d)\).
To conclude, we invoke Proposition~\ref{slscLq} and recall that because ${\mathbb{T}}^d$ is compact, sequential weak lower semi-continuity in \(L^1 \times L^1\) implies sequential weak lower semi-continuity in \(L^q \times L^r\). \end{proof} Next, we prove the lower semi-continuity in the sense of measures of the first integral term in \(\bar J\). This result will be useful for proving the existence of solutions to \eqref{mmz} when \(\alpha=\gamma\). We recall that if \((X,\mathfrak{M})\) is a measurable space and \(\vartheta: \mathfrak{M} \to {\mathbb{R}}^n\) is a vectorial measure, then the total variation of \(\vartheta\) is the measure \(\Vert \vartheta\Vert: \mathfrak{M} \to[0,\infty)\) defined for all \(E\in \mathfrak{M}\) by \begin{equation}\label{totalvar} \begin{aligned} \Vert \vartheta\Vert(E)= \sup \bigg\{ \sum_{i=1}^\infty |\vartheta(E_i)|\!: \, \{E_i\}\subset\mathfrak{M} \text{ is a partition of } E \bigg\}. \end{aligned} \end{equation} Moreover, given \(\tilde E \in \mathfrak{M} \), we denote by \(\vartheta\lfloor\tilde E\) the restriction of \(\vartheta\) to \(\tilde {E}\), which is the measure given by \(\big(\vartheta\lfloor\tilde E\big)(E)=\vartheta(E\cap \tilde E)\) for \(E\in \mathfrak{M}\). \begin{remark}\label{ontotalvar} Observe that if \((X,\mathfrak{M})\) is a measurable space and \(\vartheta: \mathfrak{M} \to {\mathbb{R}}^n\) is a vectorial measure, then \begin{equation*} \begin{aligned} \frac{1}{n}\sum_{j=1}^n \Vert \vartheta_j \Vert (E) \leqslant \Vert \vartheta\Vert(E) \leqslant \sum_{j=1}^n \Vert \vartheta_j \Vert (E) \end{aligned} \end{equation*} for all \(E\in \mathfrak{M}\), where each \(\Vert \vartheta_j \Vert (E)\) is given by \eqref{totalvar} (with \(n=1\)).
\end{remark} In what follows, \(\mathcal{L}^d\) stands for the \(d\)-dimensional Lebesgue measure. \begin{proposition}\label{slscM} Let \(\bar f\) be the function given by \eqref{barf} with \(\alpha = \gamma\). If \((v_1^n,v_2^n)_{n\in{\mathbb{N}}} \subset L^1({\mathbb{T}}^d;{\mathbb{R}}^d) \times L^1({\mathbb{T}}^d; {\mathbb{R}}^+_0)\) and \(\vartheta\in {\mathcal{M}}({\mathbb{T}}^d;{\mathbb{R}}^d \times {\mathbb{R}}^+_0) \) are such that \begin{equation*} \begin{aligned} (v_1^n,v_2^n) \mathcal{L}^d\lfloor{{\mathbb{T}}^d} \buildrel{\hskip-.6mm\star}\over\weakly \vartheta \enspace \text{weakly-$\star$ in } {\mathcal{M}}({\mathbb{T}}^d;{\mathbb{R}}^d \times {\mathbb{R}}^+_0), \end{aligned} \end{equation*} then \begin{equation*} \begin{aligned} \liminf_{n\to\infty} \int_{{\mathbb{T}}^d} \bar f(v_1^n(x),v_2^n(x))\, dx \geqslant \int_{{\mathbb{T}}^d} \bar f\left( \frac{d\vartheta}{d \mathcal{L}^d} (x)\right) dx + \int_{{\mathbb{T}}^d} \bar f^\infty\left( \frac{d\vartheta_s}{d \Vert\vartheta_s\Vert} (x)\right) d \Vert\vartheta_s\Vert(x), \end{aligned} \end{equation*} where \(\vartheta = \frac{d\vartheta}{d \mathcal{L}^d} \mathcal{L}^d\lfloor{\mathbb{T}}^d +\vartheta_s \) is the Lebesgue--Besicovitch decomposition of \(\vartheta\) with respect to the \(d\)-dimensional Lebesgue measure, \(\frac{d\vartheta_s}{d \Vert\vartheta_s\Vert}\) is the Radon--Nikodym derivative of \(\vartheta_s\) with respect to its total variation, and \(\bar f^\infty: {\mathbb{R}}^d \times {\mathbb{R}}^+_0 \to [0,\infty] \) is the recession function of \(\bar f\); that is, for \((p,m)\in{\mathbb{R}}^d \times {\mathbb{R}}^+_0\), \begin{equation}\label{barfrec} \begin{aligned} \bar f^\infty (p,m) = \begin{cases} \frac{|p|^\gamma}{\gamma (\gamma-1) m^{\gamma-1}} & \text{if } m\not=0,\\ +\infty & \text{if } m=0 \text{ and
} p\not= 0,\\ 0 &\text{if } m=0 \text{ and } p= 0. \end{cases} \end{aligned} \end{equation}\end{proposition} \begin{proof} In view of Remark~\ref{ontotalvar}, \cite[Theorem~5.19]{FoLe07} holds when we consider the total variation defined by \eqref{totalvar} (compare with \cite[Definition~1.183]{FoLe07}). To conclude, it suffices to use \cite[Theorem~4.70]{FoLe07} to characterize the recession function of \(\bar f\), taking into account that \(\bar f\) is proper, convex, and lower semi-continuous on \({\mathbb{R}}^d\times {\mathbb{R}}^+_0\). Thus, we obtain \begin{equation*} \begin{aligned} \bar f^\infty (p,m) = \lim_{t\to\infty} \frac{\bar f((-P,0)+t(p,m)) - \bar f(-P,0)}{t}, \end{aligned} \end{equation*} from which we derive \eqref{barfrec}. \end{proof} \subsection{Existence of solutions}\label{secexist} Here, we examine the existence of solutions to the minimization problem \eqref{mmz}. From Theorems~\ref{thm:exist1} and \ref{thm:exist2} below, it follows that for all \(1< \alpha\leqslant\gamma\), there exists a solution to this problem. \begin{theorem}\label{thm:exist1} Assume that \(V\in L^\infty({\mathbb{T}}^d)\) and \begin{itemize} \item[($\mathcal{G}$1)] \(G\) is coercive; that is, \(\displaystyle\lim_{z\to+\infty}\frac{G(z)}{z} = +\infty\). \end{itemize} Then, the minimization problem \eqref{mmz} has a solution \((u,m) \in {\mathcal{A}}_{ \gamma/\alpha,1}\) for all \(1<\alpha< \gamma\). Moreover, if \begin{itemize} \item[($\mathcal{G}$2)] there exist positive constants, \(\theta\) and \(C\), such that \(\displaystyle G(z)\geqslant\frac1C z^{\theta+1} - C \text{ for all } z>0\), \end{itemize} then the minimization problem \eqref{mmz} has a solution \((u,m) \in {\mathcal{A}}_{ {\gamma(1+\theta)}/{(\alpha + \theta)},1+\theta}\) for all \(1<\alpha\leqslant\gamma\).
\end{theorem} \begin{proof} We start by observing that for any $q\geqslant 1$ and \(r\geqslant 1\), we have \begin{equation} \label{mmzfinite} -\sup_{{\mathbb{T}}^d} V + G(1) \leqslant \inf_{(u,m)\in {\mathcal{A}}_{q,r}} \bar J[u,m] \leqslant \frac{|P|^\gamma}{\gamma (\alpha-1) } + \Vert V\Vert_{L^1({\mathbb{T}}^d)} + G(1), \end{equation} using the condition \(\int_{{\mathbb{T}}^d} m \, dx =1\), the convexity of \(G\) together with Jensen's inequality, and the non-negativeness of \(\bar f\) to obtain the lower bound; to obtain the upper bound, we use \(u=0\) and \(m=1\) as test functions. Let \(1<\alpha\leqslant\gamma\), and set \(q=\tfrac{\gamma}{\alpha}\) and \(r=1\). Let \((u_n,m_n)_{n\in{\mathbb{N}}}\subset {\mathcal{A}}_{q,1}\) be an infimizing sequence for \eqref{mmz}; that is, a sequence \((u_n,m_n)_{n\in{\mathbb{N}}}\subset {\mathcal{A}}_{q,1}\) such that \begin{equation}\label{infseq} \begin{aligned} \liminf_{n\to\infty} \bar J[u_n,m_n] = \inf_{(u,m)\in {\mathcal{A}}_{q,1}} \bar J[u,m]. \end{aligned} \end{equation} Extracting a subsequence if necessary, we may assume that the lower limit on the left-hand side of \eqref{infseq} is a limit and, in view of \eqref{mmzfinite}, \begin{equation*} \begin{aligned} \sup_{n\in{\mathbb{N}}} \left|\bar J[u_n,m_n]\right| \leqslant C \end{aligned} \end{equation*} for some positive constant \(C\). This estimate, the condition \(\int_{{\mathbb{T}}^d} m_n \, dx = 1\), and the non-negativeness of \(\bar f\) and of \(\int_{{\mathbb{T}}^d}\left[G(m_n) - G(1)\right]dx\) yield \begin{equation} \label{boundsDum} \begin{aligned} \int_{{\mathbb{T}}^d} \bar f(\nabla u_n, m_n)\,dx \leqslant C+|G(1)|+ \Vert V\Vert_\infty \enspace \text{ and } \int_{{\mathbb{T}}^d} G(m_n)\,dx \leqslant C+2|G(1)|+ \Vert V\Vert_\infty \end{aligned} \end{equation} for all \(n\in{\mathbb{N}}\). Recalling the definition of \(\bar f\), the first condition in \eqref{boundsDum} implies that \(m_n>0\) a.e.\! 
in \(U_n=\{x\in{\mathbb{T}}^d\!: \, \nabla u_n\not=-P\}\) and \begin{equation*} \begin{aligned} \int_{{\mathbb{T}}^d} \bar f(\nabla u_n, m_n)\,dx = \int_{U_n} \frac{|P+\nabla u_n|^\gamma}{\gamma (\alpha-1) m_n^{\alpha-1}} \, dx \leqslant C+|G(1)|+ \Vert V\Vert_\infty. \end{aligned} \end{equation*} Recalling that \(q=\tfrac{\gamma}{\alpha}\) and using the preceding estimate and Young's inequality, we obtain \begin{equation} \label{Dun1} \begin{aligned} \int_{{\mathbb{T}}^d} |P+\nabla u_n|^q \, dx&= \int_{U_n} |P+\nabla u_n|^{\frac{\gamma}{\alpha}} \, dx = \int_{U_n} \frac{|P+\nabla u_n|^{\frac{\gamma}{\alpha}}}{m_n^{\frac{\alpha-1}{\alpha}}} m_n^{\frac{\alpha-1}{\alpha}}\, dx\\ & \leqslant \frac{1}{\alpha} \int_{U_n} \frac{|P+\nabla u_n|^\gamma}{ m_n^{\alpha-1}} \, dx + \frac{\alpha-1}{\alpha} \int_{{\mathbb{T}}^d} m_n \, dx \\ &\leqslant \frac{\alpha-1}{\alpha} \left[\gamma (C+|G(1)|+ \Vert V\Vert_\infty ) + 1\right]. \end{aligned} \end{equation} Assume now that \(G\) satisfies ($\mathcal{G}$1) and that \(\alpha<\gamma\). Note that \(q>1\) in this case. Extracting a subsequence if necessary, from \eqref{Dun1} together with Poincar\'e--Wirtinger's inequality and from the second estimate in \eqref{boundsDum} together with ($\mathcal{G}$1) and De la Vall\'ee Poussin's criterion, there exists \((\bar u,\bar m) \in {\mathcal{A}}_{ q,1}\) such that \begin{equation*} \begin{aligned} u_n \rightharpoonup \bar u \text{ weakly in } W^{1,q}({\mathbb{T}}^d) \enspace \text{ and }\enspace m_n \rightharpoonup \bar m \text{ weakly in } L^1({\mathbb{T}}^d). \end{aligned} \end{equation*} Thus, invoking Corollary~\ref{barJwlsc}, we have \begin{equation*} \begin{aligned} \inf_{(u,m)\in {\mathcal{A}}_{q,1}} \bar J[u,m] \leqslant \bar J[\bar u,\bar m] &\leqslant\liminf_{n\to\infty} \bar J[u_n,m_n] = \inf_{(u,m)\in {\mathcal{A}}_{q,1}} \bar J[u,m]. 
\end{aligned} \end{equation*} Hence, recalling that \(q=\gamma/\alpha\), we conclude that \((\bar u,\bar m) \in {\mathcal{A}}_{ \gamma/\alpha,1}\) satisfies \begin{equation*} \begin{aligned} \bar J[\bar u,\bar m] = \min_{(u,m)\in {\mathcal{A}}_{\gamma/\alpha,1}} \bar J[u,m]. \end{aligned} \end{equation*} Assume now that ($\mathcal{G}$2) holds. Then, in particular, ($\mathcal{G}$1) holds. Let \(1<\alpha\leqslant\gamma\), and set \(q=\gamma({1+\theta})/({\alpha+\theta})\) and \(r=1+\theta\); note that \(q>\tfrac{\gamma}{\alpha}\geqslant 1\). Let \((u_n,m_n)_{n\in{\mathbb{N}}}\subset {\mathcal{A}}_{q,r}\) be an infimizing sequence for \eqref{mmz}. Arguing as above, we conclude that \eqref{boundsDum} holds. Hence, using the definition of \(\bar f\) and ($\mathcal{G}$2), we obtain \begin{equation}\label{betterDunmn} \begin{aligned} \sup_{n\in{\mathbb{N}}} \int_{U_n} \frac{|P+\nabla u_n|^\gamma}{ m_n^{\alpha-1}} \, dx <\infty\enspace \text{ and } \enspace\int_{{\mathbb{T}}^d} m_n^{\theta+1}\,dx <\infty, \end{aligned} \end{equation} where, as before, \(U_n=\{x\in{\mathbb{T}}^d\!: \, \nabla u_n\not=-P\}\) and \(m_n>0\) a.e.\! in \(U_n\). Set \(a=\frac{(\alpha-1)(1+\theta)}{\alpha+\theta}\), \(b=\frac{\alpha+\theta}{1+\theta}\), and \(b'=\frac{b}{b-1} = \frac{\alpha+\theta}{\alpha-1}\). Note that \(b,\, b'>1\), \(ab = \alpha-1\), \(qb = \gamma\), and \(ab'=r\). Then, arguing as in \eqref{Dun1} and using \eqref{betterDunmn}, we obtain \begin{equation*} \begin{aligned} \sup_{n\in{\mathbb{N}}} \int_{{\mathbb{T}}^d} |P+\nabla u_n|^q \, dx &= \sup_{n\in{\mathbb{N}}} \left( \int_{U_n} \frac{|P+\nabla u_n|^{q}}{m_n^a} m_n^{a}\, dx\right) \\ &\leqslant \sup_{n\in{\mathbb{N}}} \left( \frac{1}{b} \int_{U_n} \frac{|P+\nabla u_n|^{qb}}{ m_n^{ab}} \, dx + \frac{1}{b'} \int_{{\mathbb{T}}^d} m_n^{ab'} \, dx \right) <\infty. 
\end{aligned} \end{equation*} Reasoning once more as in the preceding case, we conclude that there exists \((\bar u,\bar m) \in {\mathcal{A}}_{ {\gamma(1+\theta)}/{(\alpha + \theta)},1+\theta}\) satisfying \begin{equation*} \begin{aligned} \bar J[\bar u,\bar m] = \min_{(u,m)\in {\mathcal{A}}_{ {\gamma(1+\theta)}/{(\alpha + \theta)},1+\theta}} \bar J[u,m]. \end{aligned} \qedhere \end{equation*} \end{proof} \begin{remark}\label{onV} If ($\mathcal{G}$2) holds, then Theorem~\ref{thm:exist1} remains valid under the weaker assumption \(V\in L^{\frac{\theta+1}{\theta}}({\mathbb{T}}^d)\) with \(\sup_{{\mathbb{T}}^d} V \in {\mathbb{R}}\). \end{remark} Before proving the existence of solutions in the case in which \(\alpha=\gamma\) and \(G\) satisfies Assumption~($\mathcal{G}$1) in Theorem~\ref{thm:exist1}, we briefly recall some properties of the space, \(BV({\mathbb{T}}^d)\), of functions of bounded variation in \({\mathbb{T}}^d\). We say that \(u\in BV({\mathbb{T}}^d) \) if \(u\in L^1({\mathbb{T}}^d)\) and its distributional derivative, \(Du\), belongs to \({\mathcal{M}}({\mathbb{T}}^d;{\mathbb{R}}^d)\); that is, there exists a Radon measure, \(Du \in {\mathcal{M}}({\mathbb{T}}^d;{\mathbb{R}}^d) \), such that for all \(\phi\in C^1({\mathbb{T}}^d)\), we have \begin{equation*} \begin{aligned} \int_{{\mathbb{T}}^d} u(x) \nabla \phi(x)\, dx = - \int_{{\mathbb{T}}^d} \phi(x) \, dDu(x). \end{aligned} \end{equation*} The space \(BV({\mathbb{T}}^d)\) is a Banach space when endowed with the norm \(\Vert u \Vert_{BV({\mathbb{T}}^d)}= \Vert u\Vert_{L^1({\mathbb{T}}^d)} + \Vert Du\Vert({\mathbb{T}}^d)\). Moreover, we have the following compactness property.
If \((u_n)_{n\in{\mathbb{N}}}\) is such that \(\sup_{n\in{\mathbb{N}}} \Vert u_n \Vert_{BV({\mathbb{T}}^d)} < \infty\), then, extracting a subsequence if necessary, there exists \(u\in BV({\mathbb{T}}^d)\) such that \((u_n)_{n\in{\mathbb{N}}}\) weakly-\(\star\) converges to \(u\) in \(BV({\mathbb{T}}^d)\), written \(u_n\overset{\star}{\rightharpoonup} u\) weakly-\(\star\) in \(BV({\mathbb{T}}^d)\); that is, \(u_n \to u \) (strongly) in \(L^1({\mathbb{T}}^d)\) and \(Du_n\overset{\star}{\rightharpoonup} Du\) weakly-\(\star\) in \({\mathcal{M}}({\mathbb{T}}^d;{\mathbb{R}}^d)\). Given \(u\in BV({\mathbb{T}}^d)\), the Radon--Nikodym derivative of \(Du\) with respect to the \(d\)-dimensional Lebesgue measure is denoted by \(\nabla u\) and the singular part of the Lebesgue--Besicovitch decomposition of \(D u\) with respect to the \(d\)-dimensional Lebesgue measure is denoted by \(D^su\); thus, \begin{equation*} \begin{aligned} Du= \nabla u {\mathcal{L}}^d\lfloor {\mathbb{T}}^d + D^s u \end{aligned} \end{equation*} stands for the Lebesgue--Besicovitch decomposition of \(D u\) with respect to the \(d\)-dimensional Lebesgue measure. By the polar decomposition theorem, we have that \(\big|\frac{dD^s u}{d \Vert D^s u\Vert}(x)\big|=1\) for \(\Vert D^s u\Vert\)-a.e.\! \(x\in{\mathbb{T}}^d\). Finally, we note that \(u\in BV({\mathbb{T}}^d) \) belongs to \( W^{1,1}({\mathbb{T}}^d)\) if and only if \(D^s u \equiv0\). In this case, \(Du = \nabla u {\mathcal{L}}^d\lfloor {\mathbb{T}}^d\), where \(\nabla u\) is the usual (weak) gradient of \(u\); moreover, in that case, \(\Vert Du\Vert({\mathbb{T}}^d) = \int_{{\mathbb{T}}^d} |\nabla u|\,dx\). \begin{theorem}\label{thm:exist2} Assume that \(\alpha=\gamma\), \(V\in L^\infty({\mathbb{T}}^d)\), and \(G\) satisfies Assumption~($\mathcal{G}$1) in Theorem~\ref{thm:exist1}. Then, the minimization problem \eqref{mmz} has a solution \((u,m) \in {\mathcal{A}}_{1,1}\).
\end{theorem} \begin{proof} As at the beginning of the proof of Theorem~\ref{thm:exist1}, let \((u_n,m_n)_{n\in{\mathbb{N}}}\subset {\mathcal{A}}_{1,1}\) be an infimizing sequence for \eqref{mmz}; that is, a sequence \((u_n,m_n)_{n\in{\mathbb{N}}}\subset {\mathcal{A}}_{1,1}\) satisfying \eqref{infseq}. Observe that \eqref{boundsDum} and \eqref{Dun1} are valid for \(q=1\) and \(r=1\). Thus, extracting a subsequence if necessary, from \eqref{Dun1} together with Poincar\'e--Wirtinger's inequality and from the second estimate in \eqref{boundsDum} together with ($\mathcal{G}$1), there exists \((\bar u,\bar m) \in BV({\mathbb{T}}^d)\times L^1({\mathbb{T}}^d)\) such that \begin{equation*} \begin{aligned} u_n \overset{\star}{\rightharpoonup} \bar u \text{ weakly-\(\star\) in }BV({\mathbb{T}}^d) \enspace \text{ and }\enspace m_n \rightharpoonup \bar m \text{ weakly in } L^1({\mathbb{T}}^d). \end{aligned} \end{equation*} We claim that \((\bar u,\bar m) \in{\mathcal{A}}_{1,1} \). We first observe that because \((u_n,m_n)_{n\in{\mathbb{N}}}\subset {\mathcal{A}}_{1,1}\), the above weak convergences imply that \(\bar m \geqslant 0\) a.e.\! in \({\mathbb{T}}^d\), \(\int_{{\mathbb{T}}^d} \bar m\, dx =1\), \(\int_{{\mathbb{T}}^d} \bar u\, dx =0\), and \((\nabla u_n,m_n)\mathcal{L}^d\lfloor {\mathbb{T}}^d \overset{\star}{\rightharpoonup} (D\bar u,\bar m\mathcal{L}^d\lfloor {\mathbb{T}}^d) \) weakly-\(\star\) in \({\mathcal{M}}({\mathbb{T}}^d;{\mathbb{R}}^d\times{\mathbb{R}}^+_0)\). We further observe that the Lebesgue--Besicovitch decomposition of \((D\bar u,\bar m\mathcal{L}^d\lfloor {\mathbb{T}}^d)\) with respect to the \(d\)-dimensional Lebesgue measure is \begin{equation*} \begin{aligned} (D\bar u,\bar m\mathcal{L}^d\lfloor {\mathbb{T}}^d) = (\nabla\bar u, \bar m )\mathcal{L}^d\lfloor {\mathbb{T}}^d + (D^s\bar u,0) \end{aligned} \end{equation*} and that \(\Vert (D^s\bar u,0)\Vert = \Vert D^s\bar u\Vert \).
Thus, by Proposition~\ref{slscM}, it follows that \begin{equation*} \begin{aligned} \liminf_{n\to\infty} \int_{{\mathbb{T}}^d} \bar f(\nabla u_n,m_n)\, dx \geqslant \int_{{\mathbb{T}}^d} \bar f\left( \nabla\bar u, \bar m\right) dx + \int_{{\mathbb{T}}^d} \bar f^\infty\left( \frac{dD^s\bar u}{d \Vert D^s\bar u\Vert} ,0\right) d \Vert D^s\bar u\Vert(x). \end{aligned} \end{equation*} Because \(\bar f\) and \(\bar f^\infty\) are nonnegative functions, from the first uniform estimate in \eqref{boundsDum}, it follows that \begin{equation*} \begin{aligned} \int_{{\mathbb{T}}^d} \bar f^\infty\left( \frac{dD^s\bar u}{d \Vert D^s\bar u\Vert} ,0\right) d \Vert D^s\bar u\Vert(x) \leqslant C+|G(1)|+ \Vert V\Vert_\infty. \end{aligned} \end{equation*} In view of the definition of \(\bar f^\infty\) and the fact that \(\big|\frac{dD^s\bar u}{d \Vert D^s\bar u\Vert}(x)\big|=1\) for \(\Vert D^s\bar u\Vert\)-a.e.\! \(x\in{\mathbb{T}}^d\), this last estimate is only possible if \( \Vert D^s\bar u\Vert \equiv 0\). Hence, \(D^s\bar u\equiv 0\) as well. This proves that \(\bar u\in W^{1,1}({\mathbb{T}}^d)\). Thus, \((\bar u,\bar m) \in{\mathcal{A}}_{1,1} \) and \begin{equation*} \begin{aligned} \liminf_{n\to\infty} \int_{{\mathbb{T}}^d} \bar f(\nabla u_n,m_n)\, dx \geqslant \int_{{\mathbb{T}}^d} \bar f\left( \nabla\bar u, \bar m\right) dx. \end{aligned} \end{equation*} Because we also have \begin{equation*} \begin{aligned} \liminf_{n\to\infty} \int_{{\mathbb{T}}^d} \left[ -V m_n +G(m_n)\right] dx \geqslant \int_{{\mathbb{T}}^d} \left[-V \bar m +G(\bar m)\right] dx, \end{aligned} \end{equation*} arguing as in the proof of Theorem~\ref{thm:exist1}, we conclude that \((\bar u,\bar m) \in {\mathcal{A}}_{ 1,1}\) satisfies \begin{equation*} \begin{aligned} \bar J[\bar u,\bar m] = \min_{(u,m)\in {\mathcal{A}}_{1,1}} \bar J[u,m].
\end{aligned}\qedhere \end{equation*} \end{proof} \subsection{Uniqueness of minimizers} \label{uniqmin} In this subsection, we study the uniqueness of solutions to the minimization problem \eqref{mmz}. We show that, in particular, the solutions provided by Theorems~\ref{thm:exist1} and \ref{thm:exist2} are unique. \begin{theorem}\label{thm:uniqmin} Let \(1<\alpha\leqslant\gamma\) and \(q,r \geqslant 1\). Assume that \(G\) is strictly convex in \({\mathbb{R}}^+_0\) and that \(V\in L^{\tfrac{r}{r-1}}({\mathbb{T}}^d)\) is such that \(\sup_{{\mathbb{T}}^d} V \in {\mathbb{R}}\). Then, there is at most one solution to \eqref{mmz}. \end{theorem} \begin{proof} Assume that \((u_1,m_1),(u_2,m_2) \in {\mathcal{A}}_{q,r} \) are such that \begin{equation*} \begin{aligned} \bar J[u_1,m_1] = \bar J[u_2,m_2] =\min_{(u,m)\in {\mathcal{A}}_{q,r}} \bar J[u,m]. \end{aligned} \end{equation*} We want to show that \(u_1=u_2\) and \(m_1=m_2\) a.e.\! in \({\mathbb{T}}^d\). Due to \eqref{mmzfinite}, we have that \(j_0:= \min_{(u,m)\in {\mathcal{A}}_{q,r}} \bar J[u,m]\in {\mathbb{R}}\). Then, using H\"older's inequality and Jensen's inequality together with the convexity of \(G\) and the condition \(\int_{{\mathbb{T}}^d} m_1\, dx =1\), it follows that \begin{equation*} \begin{aligned} 0\leqslant\int_{{\mathbb{T}}^d} \bar f (\nabla u_1, m_1)\, dx \leqslant j_0 + \Vert V\Vert_{L^{\tfrac{r}{r-1}}({\mathbb{T}}^d)} \Vert m_1\Vert_{L^{r}({\mathbb{T}}^d)} - G(1). \end{aligned} \end{equation*} Thus, \(\bar f (\nabla u_1, m_1)<\infty\) a.e.\! in \({\mathbb{T}}^d\). In particular, \(\nabla u_1= - P\) a.e.\! in \(\{x\in{\mathbb{T}}^d\!: \, m_1=0\}\). Similarly, \(\bar f (\nabla u_2, m_2)<\infty\) a.e.\! in \({\mathbb{T}}^d\) and \(\nabla u_2= - P\) a.e.\! in \(\{x\in{\mathbb{T}}^d\!: \, m_2=0\}\). Set \(u=\tfrac{u_1 + u_2}{2}\) and \(m=\tfrac{m_1 + m_2}{2}\).
In view of the convexity of the function \((p,m)\in{\mathbb{R}}^d \times {\mathbb{R}}^+_0 \mapsto \bar f(p,m) - V(x)m + G(m)\) (see Lemma~\ref{barfcxlsc}), we have \begin{equation*} \begin{aligned} j_0\leqslant \bar J[u,m] \leqslant \frac12 \bar J[u_1,m_1] + \frac12 \bar J[u_2,m_2] = \frac12 j_0 + \frac12 j_0 = j_0. \end{aligned} \end{equation*} Consequently, \(\bar J[u,m] = j_0\) and \begin{equation*} \begin{aligned} 0&= \frac12 \bar J[u_1,m_1] + \frac12 \bar J[u_2,m_2] - \bar J[u,m]\\ & = \int_{{\mathbb{T}}^d} \Big(\frac12 \bar f(\nabla u_1, m_1) + \frac12 \bar f(\nabla u_2, m_2) - \bar f(\nabla u, m) + \frac12 G(m_1) + \frac12 G(m_2) - G(m) \Big)\, dx. \end{aligned} \end{equation*} Because of the convexity of \((p,m)\in{\mathbb{R}}^d \times {\mathbb{R}}^+_0 \mapsto \bar f(p,m) + G(m)\), the integrand in the last integral is nonnegative. Hence, \begin{equation*} \begin{aligned} \frac12 \bar f(\nabla u_1, m_1) + \frac12 \bar f(\nabla u_2, m_2) - \bar f(\nabla u, m) + \frac12 G(m_1) + \frac12 G(m_2) - G(m)=0 \end{aligned} \end{equation*} a.e. in \({\mathbb{T}}^d\). Invoking the convexity of \(\bar f\) and \(G\) once more, the previous equality implies that \begin{equation}\label{uniqbarf} \begin{aligned} \begin{cases} \frac12 \bar f(\nabla u_1, m_1) + \frac12 \bar f(\nabla u_2, m_2) - \bar f(\nabla u, m) =0 \\ \frac12 G(m_1) + \frac12 G(m_2) - G(m)=0 \end{cases} \end{aligned} \end{equation} a.e.\! in \({\mathbb{T}}^d\). Because \(G\) is strictly convex, it follows from the second identity in \eqref{uniqbarf} that \(m_1 = m_2\) a.e.\! in \({\mathbb{T}}^d\). Consequently, the first identity in \eqref{uniqbarf} reduces to \begin{equation*} \begin{aligned} \begin{cases} \displaystyle \frac{\frac12|P+ \nabla u_1|^\gamma + \frac12|P+ \nabla u_2|^\gamma - |P+ \nabla u|^\gamma}{\gamma (\alpha-1) {m_1}^{\alpha-1} } =0 & \text{a.e.\! in } \{x\in{\mathbb{T}}^d\!: \, m_1\not=0\}\\ \displaystyle \nabla u_1= \nabla u_2 = -P & \text{a.e.\!
in } \{x\in{\mathbb{T}}^d\!: \, m_1=0\}. \end{cases} \end{aligned} \end{equation*} Because \(\gamma>1\), we conclude that \(\nabla u_1= \nabla u_2\) a.e.\! in \({\mathbb{T}}^d\), which, together with the condition \(\int_{{\mathbb{T}}^d} u_1\,dx = \int_{{\mathbb{T}}^d} u_2\,dx = 0\), yields \(u_1= u_2\) a.e.\! in \({\mathbb{T}}^d\). \end{proof} As an immediate consequence of Theorem~\ref{thm:uniqmin}, we obtain the following result. \begin{corollary}\label{Cor:uniqmin} If, in addition to the hypotheses of Theorem~\ref{thm:exist1} (respectively, Theorem~\ref{thm:exist2}), we assume that \(G\) is strictly convex in \({\mathbb{R}}^+_0\), then the solution to \eqref{mmz} provided by Theorem~\ref{thm:exist1} (respectively, Theorem~\ref{thm:exist2}) is unique. \end{corollary} \subsection{The \(P=0\) case}\label{varexplicit} Here, we further characterize the solutions of \eqref{mmz} when \(P=0\). Assume that \(P=0\) and that \((\bar u, \bar m) \in {\mathcal{A}}_{q,r}\) satisfies \begin{equation*} \bar J[\bar u, \bar m] = \min_{(u,m)\in {\mathcal{A}}_{q,r}} \bar J[u,m]. \end{equation*} Because this minimum is finite, the definition of \(\bar f\) yields \(\bar m >0\) a.e.\!~in the set \(\{x\in{\mathbb{T}}^d\!: \, \nabla\bar u \not= 0\}\). Consequently, the inequality \(\bar J[\bar u, \bar m] \leqslant \bar J[0, \bar m]\) gives \begin{equation*} \begin{aligned} \int_{\{x\in{\mathbb{T}}^d\!: \, \nabla\bar u \not= 0 \}} \frac{|\nabla \bar u|^\gamma}{\gamma (\alpha-1) \bar m^{\alpha-1}} \, dx \leqslant 0. \end{aligned} \end{equation*} This last estimate is possible only if the set \(\{x\in{\mathbb{T}}^d\!: \, \nabla\bar u \not= 0\}\) has zero measure. Therefore, we conclude that \(\nabla \bar u = 0\) a.e.\!~in \({\mathbb{T}}^d\), which, together with the restriction \(\int_{{\mathbb{T}}^d} \bar u \, dx = 0\), implies that \(\bar u =0\) a.e.\!~in \({\mathbb{T}}^d\).
Moreover, we have \begin{equation*} \begin{aligned} \bar J[0, \bar m] = \min_{(u,m)\in {\mathcal{A}}_{q,r}} \bar J[u,m] \leqslant \inf_{m\in L^r({\mathbb{T}}^d), m\geqslant 0, \atop \int_{{\mathbb{T}}^d} m \, dx =1} \bar J[0,m] \leqslant \bar J[0, \bar m]. \end{aligned} \end{equation*} Thus, \(\bar m\) satisfies \begin{equation}\label{mwhenP0} \begin{aligned} \tilde J [\bar m] = \min_{m\in L^r({\mathbb{T}}^d), m\geqslant 0, \atop \int_{{\mathbb{T}}^d} m \, dx =1} \tilde J[m], \end{aligned} \end{equation} where \begin{equation*} \begin{aligned} \tilde J[m] =\int_{{\mathbb{T}}^d} \big( -V m +G(m)\big)\, dx. \end{aligned} \end{equation*} Furthermore, we observe that if $G \in C^{\infty}({\mathbb{R}}^+)\cap C({\mathbb{R}}_0^+)$ is coercive and strictly convex then \eqref{mwhenP0} can be solved explicitly. More specifically, the solution, $\bar{m}$, is given by \begin{equation}\label{eq:explweaksolHgeneral} \bar{m}(x)=(G^*)'(V(x)-{\overline{H}}),\quad x\in {\mathbb{T}}^d, \end{equation} where ${\overline{H}} \in {\mathbb{R}}$ is the unique number such that \(\int_{{\mathbb{T}}^d} \bar m\, dx =1\), and $G^*$ is the Legendre transform of $G$; that is, $G^*(q)=\sup\limits_{m\geqslant 0} \{q\cdot m -G(m)\}$. Note that if $G'(0+)=-\infty$ then $(G^*)'(q)>0$ for $q\in {\mathbb{R}}$, and $\bar{m}(x)>0,~x\in{\mathbb{T}}^d$ independently of $V$ and ${\overline{H}}$. Thus, the triplet $(0,\bar{m},{\overline{H}})$ defined by \eqref{eq:explweaksolHgeneral} is the unique classical solution of \eqref{main2}. Alternatively, if $G'(0+)>-\infty$ then $(G^*)'(q)\geqslant 0$ for $q\in{\mathbb{R}}$, with equality if and only if $q\leqslant G'(0)$. Therefore, $\bar{m}$ defined by \eqref{eq:explweaksolHgeneral} may vanish at some points for a particular choice of $V$. More precisely, $\bar{m}(x)=0$ at points $x\in {\mathbb{T}}^d$ for which $V(x) - {\overline{H}}\leqslant G'(0)$.
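Formula \eqref{eq:explweaksolHgeneral} is straightforward to evaluate numerically. The following Python sketch is illustrative only: the grid resolution, the potential $V$, and the coupling $G(m)=m\log m-m$ (for which $(G^*)'(q)=e^{q}$ and $G'(0+)=-\infty$) are our own choices, not taken from the text. It determines $\overline{H}$ by bisection, using that $\int_{{\mathbb{T}}^d}(G^*)'(V-\overline{H})\,dx$ is strictly decreasing in $\overline{H}$.

```python
import numpy as np

# Illustrative sketch (not from the paper): evaluating the explicit formula
# m_bar(x) = (G*)'(V(x) - Hbar) of (eq:explweaksolHgeneral) on a uniform
# grid of the 1-torus.  The grid size, the potential V, and the coupling
# G(m) = m log m - m (so that (G*)'(q) = e^q and G'(0+) = -infinity) are
# assumptions made for this example.

N = 400
x = np.arange(N) / N                       # uniform grid on the 1-torus
V = 0.5 * np.cos(2 * np.pi * x)            # sample potential

def Gstar_prime(q):
    return np.exp(q)                       # (G*)'(q) for G(m) = m log m - m

def mass(Hbar):
    return np.mean(Gstar_prime(V - Hbar))  # Riemann sum for the mass of m_bar

# mass is strictly decreasing in Hbar, so the normalization "mass = 1"
# pins down Hbar; we locate it by bisection.
lo, hi = -10.0, 10.0
for _ in range(80):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if mass(mid) > 1.0 else (lo, mid)
Hbar = 0.5 * (lo + hi)
m_bar = Gstar_prime(V - Hbar)

print(abs(np.mean(m_bar) - 1.0) < 1e-10)   # mass constraint satisfied
```

For this particular coupling, the normalization is also explicit, $\bar m=e^{V}/\int_{{\mathbb{T}}^d}e^{V}\,dx$, which gives an independent check of the computed $\overline{H}$.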
Next, we take \(G(m)=\frac{m^2}{2}\) and calculate the solution, $\bar{m}$, of \eqref{mwhenP0} using \eqref{eq:explweaksolHgeneral}. We use this solution in Section~\ref{num} to validate our numerical method. For this coupling, \(G\), we have that $G^*(q)=\frac{(q^+)^2}{2},~q\in {\mathbb{R}}$. Therefore, according to \eqref{eq:explweaksolHgeneral}, we obtain \begin{equation}\label{eq:explweaksolH} \begin{aligned} \bar m(x) = (V(x) - \overline H)^+, \end{aligned} \end{equation} where \(\overline H \in {\mathbb{R}}\) is such that \(\int_{{\mathbb{T}}^d} \bar m\, dx =1\). In particular, if \(\inf_{{\mathbb{T}}^d} V \geqslant \int _{{\mathbb{T}}^d} V\, dx -1\), then \begin{equation*} \begin{aligned} \bar m(x) = V(x) + 1 - \int _{{\mathbb{T}}^d} V\, dx. \end{aligned} \end{equation*} Moreover, if \(\inf_{{\mathbb{T}}^d} V > \int _{{\mathbb{T}}^d} V\, dx -1\), then \[(\bar u, \bar m,\overline H)= \bigg(0,V+1-\int _{{\mathbb{T}}^d} V\, dx, -1+\int _{{\mathbb{T}}^d} V\, dx\bigg)\] is the unique classical solution of \eqref{main2} in light of Corollary~\ref{Cor:uniqmin}. \section{The two-dimensional case} \label{2dcase} The variational approach considered in the preceding section requires $1<\alpha\leqslant \gamma$. However, in the two-dimensional case, if \(0<\alpha<1\) and \(\gamma> 1\), we can use properties of divergence-free vector fields to deduce an equivalent equation for which our variational method can be applied. In this section, the space dimension is always $d=2$. Moreover, given \(Q=(q_1, q_2) \in{\mathbb{R}}^2\), we set \(Q^\perp=(-q_2,q_1)\). From the second equation in \eqref{main}, there exists a constant vector, $Q\in {\mathbb{R}}^2$, and a scalar function, $\psi$, such that \begin{equation}\label{divfree1} m^{1-\alpha}|P+Du|^{\gamma-2}(P+Du)= Q^\perp+(D\psi)^\perp. \end{equation} Consequently, \begin{equation*} m^{1-\alpha}|P+Du|^{\gamma-1}=|Q+D\psi|. 
\end{equation*} Raising the prior expression to the power $\gamma'$, where $\gamma'=\frac{\gamma}{\gamma-1}$, and rearranging the terms, we obtain \[ \frac{|P+Du|^{\gamma}}{m^\alpha}=\frac{|Q+D\psi|^{\gamma'}}{m^{\alpha-(\alpha-1)\gamma'}}. \] Therefore, \[ \frac{|P+Du|^{\gamma}}{\gamma m^\alpha}+V(x)-g(m)-\overline{H} = \frac{|Q+D\psi|^{\gamma'}}{\gamma m^{{\tilde{\alpha}}}}+V(x)-g(m)-\overline{H}, \] with \[ {\tilde{\alpha}}=\alpha-(\alpha-1)\gamma'. \] Moreover, from \eqref{divfree1}, we have \begin{equation}\label{eq:PperpDuPerp} P^\perp+(Du)^\perp= m^{1-{\tilde{\alpha}}} |Q+D\psi|^{\gamma'-2} (Q+D\psi). \end{equation} Accordingly, \[ \operatorname{div}(m^{1-{\tilde{\alpha}}} |Q+D\psi|^{\gamma'-2} (Q+D\psi) )=0. \] Thus, \eqref{main} can be rewritten as \[ \begin{cases} \frac{|Q+D\psi|^{\gamma'}}{\gamma'm^{{\tilde{\alpha}}}}+\frac{\gamma}{\gamma'}V(x)- \frac{\gamma}{\gamma'}g(m)=\frac{\gamma}{\gamma'}\overline{H}\\ \operatorname{div}(m^{1-{\tilde{\alpha}}} |Q+D\psi|^{\gamma'-2} (Q+D\psi) )=0. \end{cases} \] Finally, we notice that if $0<\alpha<1$ and $\gamma>1$, we have $1<{\tilde{\alpha}}<\gamma'$. That is, we obtain an equation of the form of \eqref{main} with exponents ${\tilde{\alpha}}$ and $\gamma'$ in the place of $\alpha$ and $\gamma$. Furthermore, ${\tilde{\alpha}}$ and $\gamma'$ now belong to the range where our prior results apply. \section{Transformations of second-order MFGs} \label{tsom} Now, we discuss a method to transform second-order MFGs into a scalar PDE. For $\alpha=0$, we recover the Hopf-Cole transformation, used in the context of MFGs in \cite{LCDF,ll1, GeRC}, and \cite{MR2928382}, for example, and further generalized in \cite{MR3377677}. Moreover, we obtain extensions of these transformations for the case $\alpha>0$. We also make connections between these systems and the calculus of variations. In Section~\ref{soquadH}, we examine problems with a quadratic Hamiltonian and with \(\alpha=1\). 
Then, in Section~\ref{otherH}, we extend our analysis to more general Hamiltonians and any congestion parameter. \subsection{Quadratic Hamiltonian and $\alpha=1$}\label{soquadH} Here, we examine the following elliptic version of \eqref{main} for $\alpha=1$ and $\gamma=2$: \begin{equation} \label{mainel} \begin{cases} -\Delta u+ \frac{|P+Du|^2}{2 m}+V(x)=g(m)+{\overline{H}}\\ -\Delta m -\Delta u=0. \end{cases} \end{equation} From the second equation and the periodicity, we get \[ u=\mu-m \] for some real $\mu$. Accordingly, we replace $u$ in the first equation in \eqref{mainel} and obtain \begin{equation} \label{mmm} \Delta m+ \frac{|P-Dm|^2}{2m}+V(x)=g(m)+{\overline{H}}. \end{equation} As we show next, if $P=0$, the preceding equation is equivalent to the Euler--Lagrange equation of an integral functional. First, we take $P=0$ and multiply \eqref{mmm} by $m^{1/2}$. Then, \eqref{mmm} becomes \[ m^{1/2}\Delta m+ \frac{|Dm|^2}{2m^{1/2}}+V(x)m^{1/2}=m^{1/2}g(m)+{\overline{H}} m^{1/2}. \] Now, we set $\psi=m^{3/2}$ and conclude that \[ \frac 2 3 \Delta \psi+V(x) \psi^{1/3}=g(\psi^{2/3}) \psi^{1/3}+{\overline{H}} \psi^{1/3}. \] The foregoing equation is the Euler--Lagrange equation of the functional \[ \hat J(\psi)=\int_{{\mathbb{T}}^d} \frac{|D\psi|^2}{3}-\frac 3 4 V(x) \psi^{4/3} +\hat G(\psi)+\frac 3 4{\overline{H}} \psi^{4/3}\, dx, \] where, for \(z\geqslant0\), \begin{equation*} \begin{aligned} \hat G(z) = \int_0^z g(r^{2/3})r^{1/3}\, dr.
\end{aligned} \end{equation*} \subsection{Other Hamiltonians}\label{otherH} Here, we consider the system \begin{equation}\label{Eq:statcong2} \begin{cases} -\Delta u+ m^{\alpha}H\!\left(\frac{Du+P}{m^{\alpha}}\right)=g(m)+\overline{H}-V(x)\\ -\Delta m-\operatorname{div} \left(m D_pH\!\left(\frac{Du+P}{m^{\alpha}}\right)\right)=0, \end{cases} \end{equation} where the Hamiltonian, $H\colon{\mathbb{R}}^d\to{\mathbb{R}}$, is the Legendre transform of a strictly convex and coercive Lagrangian, $L\colon{\mathbb{R}}^d\to{\mathbb{R}}$; that is, \[ H(p)=\sup_{v\in{\mathbb{R}}^d} \{-v\cdot p - L(v)\}. \] In the preceding definition, the maximizer, $v^*$, is given by \begin{equation} \label{eq:LTmax} \begin{aligned} p=-D_vL(v^*(p)) \hbox{ and } v^*(p)=-D_pH(p). \end{aligned} \end{equation} Hence, \begin{equation} \label{eq:HbyLT} \begin{aligned} H(p)=v^*(p)\cdot D_vL(v^*(p))-L(v^*(p)). \end{aligned} \end{equation} Next, we relax \eqref{Eq:statcong2} by replacing $Du$ by a function, $w\colon{\mathbb{T}}^d\to{\mathbb{R}}^d$. Then, the second equation in \eqref{Eq:statcong2} can be written as \begin{equation*} \operatorname{div}\left(Dm+mD_pH\!\left(\frac{w+P}{m^{\alpha}}\right)\right)=0. \end{equation*} Accordingly, we introduce the divergence-free vector field $Q\colon{\mathbb{T}}^d\to{\mathbb{R}}^d $ given by \[ Q:=-Dm-mD_pH\!\left(\frac{w+P}{m^{\alpha}}\right)\!. \] Therefore, we obtain the system \begin{equation}\label{Eq:statcong2'} \begin{cases} -\operatorname{div}(w)+ m^{\alpha}H\!\left(\frac{w+P}{m^{\alpha}}\right)=g(m)+\overline{H}-V(x),\\ -Dm-mD_pH\!\left(\frac{w+P}{m^{\alpha}}\right)=Q,\\ \operatorname{div}(Q)=0. \end{cases} \end{equation} Note that if $(u,m, {\overline{H}})$ is a solution of \eqref{Eq:statcong2}, then $(w, m,{\overline{H}})=(Du, m,{\overline{H}})$ solves \eqref{Eq:statcong2'}. The converse implication does not necessarily hold and will be discussed in Remark~\ref{Rmk:converse} below.
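The identities \eqref{eq:LTmax} and \eqref{eq:HbyLT} can be checked directly for the power Hamiltonian $H(p)=\frac{1}{\gamma}|p|^{\gamma}$, whose Lagrangian is $L(v)=\frac{1}{\gamma'}|v|^{\gamma'}$. The short Python sketch below is an illustration under these power-case assumptions; the sample point $p$ and the exponent $\gamma=3$ are arbitrary choices of ours.

```python
import numpy as np

# Illustrative check of (eq:LTmax) and (eq:HbyLT) for the power pair
# H(p) = |p|^gamma / gamma and L(v) = |v|^{gamma'} / gamma', where
# v*(p) = -D_p H(p) = -|p|^{gamma-2} p.  The point p and gamma = 3
# are arbitrary sample values.

gamma = 3.0
gp = gamma / (gamma - 1.0)                 # conjugate exponent gamma'
p = np.array([0.7, -0.4])

L = lambda v: np.linalg.norm(v) ** gp / gp
DvL = lambda v: np.linalg.norm(v) ** (gp - 2) * v
H = lambda p: np.linalg.norm(p) ** gamma / gamma

v_star = -np.linalg.norm(p) ** (gamma - 2) * p      # v*(p) = -D_p H(p)

print(np.allclose(-DvL(v_star), p))                 # p = -D_v L(v*), True
print(np.allclose(v_star @ DvL(v_star) - L(v_star), H(p)))  # (eq:HbyLT), True
```

The same two lines can serve as a sanity check for any other strictly convex, coercive $L$ once $D_vL$ and $H$ are supplied.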
Now, we note that the second equation in \eqref{Eq:statcong2'} gives \[ \frac{Dm+Q}{m}=-D_pH\!\left(\frac{w+P}{m^{\alpha}}\right)\!. \] Hence, using the second identity in \eqref{eq:LTmax} with $p=\frac{w+P}{m^{\alpha}}$, we obtain $\frac{Dm+Q}{m}=v^*(\frac{w+P}{m^{\alpha}})$. Consequently, the first identity in \eqref{eq:LTmax} gives \[ \frac{w+P}{m^{\alpha}}=-D_vL\!\left(\frac{Dm+Q}{m}\right) \] and, in view of \eqref{eq:HbyLT}, \[ H\!\left(\frac{w+P}{m^{\alpha}}\right)=\frac{Dm+Q}{m}\cdot D_vL\!\left(\frac{Dm+Q}{m}\right)-L\!\left(\frac{Dm+Q}{m}\right)\!. \] Using the two preceding identities in \eqref{Eq:statcong2'}, we obtain \begin{equation}\label{Eq:statcong2b} \begin{cases} \operatorname{div} \left(m^{\alpha} D_vL\!\left(\frac{Dm+Q}{m}\right)\right)+ m^{\alpha}\left(\frac{Dm+Q}{m}\cdot D_vL\!\left(\frac{Dm+Q}{m}\right)-L\!\left(\frac{Dm+Q}{m}\right)\right)=g(m)+\overline{H}-V(x),\\ w=-P-m^{\alpha}D_vL(\frac {Dm+Q} m),\\ \operatorname{div}(Q)=0. \end{cases} \end{equation} Next, we consider a few cases when the system \eqref{Eq:statcong2b} can be further simplified. Assume that $Q=0$ and $H(p)=\frac{1}{\gamma}|p|^{\gamma}$. In this case, we obtain a solution of \eqref{Eq:statcong2} by solving \eqref{Eq:statcong2'}. To see this, we observe first that $L(v)=\frac{1}{\gamma'}|v|^{\gamma'}$, where \(\gamma'=\tfrac{\gamma}{\gamma-1}\); consequently, \eqref{Eq:statcong2b} becomes \begin{equation*} \operatorname{div}\left(m^{\alpha-\gamma'+1}|Dm|^{\gamma'-2} Dm\right)+\frac 1 {\gamma} m^{\alpha-\gamma'}|Dm|^{\gamma'}=g(m)+\overline{H}-V(x). \end{equation*} Next, for $\beta$ to be selected later, we use the change of variables $m=\psi^{\beta}$ to get \[ \beta^{\gamma'-1}\operatorname{div}\left(\psi^{\alpha\beta-\gamma'+1}|D\psi|^{\gamma'-2} D\psi\right)+ \frac {\beta^{\gamma'}} {\gamma} \psi^{\alpha\beta-\gamma'}|D\psi|^{\gamma'}=g(\psi^{\beta})+\overline{H}-V(x). 
\] We rewrite the preceding equation as \begin{equation*} \begin{aligned} &\beta^{\gamma'-1}\psi^{\alpha\beta-\gamma'+1}\operatorname{div}\left(|D\psi|^{\gamma'-2} D\psi\right)+\beta^{\gamma'-1}\left(\alpha\beta-\gamma'+1+ \tfrac {\beta} {\gamma}\right) \psi^{\alpha\beta-\gamma'}|D\psi|^{\gamma'}\\ &\quad=g(\psi^{\beta})+\overline{H}-V(x). \end{aligned} \end{equation*} Now, we choose $\beta$ such that the second term on the left-hand side of the previous identity vanishes; that is, \begin{equation} \label{eq:betaQ0} \begin{aligned} \beta=\frac{\gamma'-1}{\alpha+1/\gamma} = \frac{\gamma'}{\alpha\gamma + 1}. \end{aligned} \end{equation} Accordingly, we obtain \begin{equation}\label{eq.pLap} \beta^{\gamma'-1}\Delta_{\gamma'} \psi=\left(g(\psi^{\beta})+\overline{H}-V(x)\right)\psi^{\frac{\beta}{\gamma}}, \end{equation} where $\Delta_{p}$ is the $p$-Laplacian operator, $\Delta_{p} \psi=\operatorname{div} (|D\psi|^{p-2}D\psi)$. We note that \eqref{eq.pLap} is the Euler--Lagrange equation of the functional \[ \hat J[\psi]=\int_{{\mathbb{T}}^d} \bigg[\beta^{\gamma'-1}\frac{|D\psi|^{\gamma'}}{\gamma'} +\hat{G}(\psi)-\frac{\gamma}{\beta+\gamma}(V(x)-\overline{H}) {\psi^{\frac{\beta +\gamma}{\gamma}} }\bigg] dx \] where \(\beta\) is given by \eqref{eq:betaQ0} and $\hat{G}(z):=\int_0^z g(r^{\beta})r^{\frac{\beta}{\gamma}}\,dr.$ Note that the unknown $\overline{H}$ is determined by the constraint \begin{equation*} \begin{aligned} \int_{{\mathbb{T}}^d} \psi^{\beta}\,dx=1. \end{aligned} \end{equation*} In particular, for $\gamma=2$, we have \(\beta=\tfrac2{2\alpha +1}\) and \(m=\psi^{\tfrac2{2\alpha +1}}\), where $\psi$ solves \begin{equation}\label{Eq:statcongquad} \tfrac 2 {(2\alpha+1)}\Delta \psi=\Big(g\big(\psi^{\frac 2 {2\alpha+1}}\big)+\overline{H}-V(x) \Big)\psi^{\frac 1 {2\alpha+1}}.
\end{equation} As before, \eqref{Eq:statcongquad} is the Euler--Lagrange equation of the functional \[ \hat J[\psi]=\int_{{\mathbb{T}}^d} \bigg[\frac{|D\psi|^2}{2\alpha+1} +\hat{G}(\psi)-\frac{2\alpha+1}{2(\alpha+1)}(V(x)-\overline{H})\psi^{\frac{2(\alpha+1)}{2\alpha+1}} \bigg] dx \] where $\hat{G}(z)=\int_0^z g(r^{\frac{2}{2\alpha+1}})r^{\frac{1}{2\alpha+1}}\,dr$, and ${\overline{H}}$ is chosen such that the constraint \begin{equation*} \begin{aligned} \int_{{\mathbb{T}}^d} \psi^{\frac 2 {2\alpha+1}}\,dx=1 \end{aligned} \end{equation*} holds. Furthermore, for $\alpha=0$, \eqref{eq.pLap} corresponds to the generalized Hopf-Cole transformation from \cite{MR3377677}. \begin{remark} In the case without congestion, which corresponds to $\alpha=0$, and without any restrictions on either $Q$ or $L$, \eqref{Eq:statcong2b} has the form \begin{equation*} \operatorname{div} \left( D_vL\!\left(\tfrac{Dm+Q}{m}\right)\right)+ \tfrac{Dm+Q}{m}\cdot D_vL\!\left(\tfrac{Dm+Q}{m}\right)-L\!\left(\tfrac{Dm+Q}{m}\right)=g(m)+\overline{H}-V(x). \end{equation*} This equation is the Euler--Lagrange equation of the functional \[ \tilde J[m]=\int_{{\mathbb{T}}^d} \left[mL\!\left(\frac{Dm+Q}{m}\right)-G(m)-V(x)m \right]dx \] subject to the constraint \(\int_{{\mathbb{T}}^d} m\,dx=1\). \end{remark} \begin{remark}\label{Rmk:converse} From a solution $(m, w,{\overline{H}})$ to \eqref{Eq:statcong2b}, we recover a solution to \eqref{Eq:statcong2} if and only if $w=-P-m^{\alpha}D_vL(\frac {Dm+Q} m)$ is a gradient of a function, $u:{\mathbb{T}}^d \to {\mathbb{R}}$. There are two instances when this holds easily; namely: \begin{itemize} \item[(i)] Assume that $d=1$ and $(m, w,{\overline{H}})$ solves \eqref{Eq:statcong2b} with $\int_{{\mathbb{T}}} w\,dx =0$. Then, identifying functions on \({\mathbb{T}}\) with periodic functions on \([0,1]\) and setting \(u(x):=\int_0^x w(t)\,dt\), we conclude that $(m, u,{\overline{H}})$ solves \eqref{Eq:statcong2}.
\item[(ii)] Assume that $Q\equiv0$, the Lagrangian is quadratic, and $\psi>0$ solves \eqref{Eq:statcongquad}. In this case, $w=-P-m^{\alpha}D_vL(\frac {Dm} m)=-P-m^{\alpha-1}Dm$ is a gradient if and only if $P=0$. Then, assuming further that \(P=0\), setting $m:=\psi^{\frac 2 {2\alpha +1}}$ and $u:=-\frac{m^{\alpha}}{\alpha}+c$, \(c\in{\mathbb{R}}\), we conclude that $(u,m, {\overline{H}})$ solves \eqref{Eq:statcong2}, where \({\overline{H}}\) is determined by the constraint \(\int_{{\mathbb{T}}^d} \psi^{\frac 2 {2\alpha+1}}\,dx=1 \). This transformation of $m$ and $u$ generalizes the well-known Hopf-Cole transform to the congestion case. Finally, note that if \(\alpha=1\), we recover the case treated in Section~\ref{soquadH}. \end{itemize} However, in general, the condition for $w$ to be a gradient is more restrictive. For instance, assume that $Q\equiv 0 $ and that the Lagrangian is radial, $L(v)=l(|v|)$ with $l:{\mathbb{R}}_0^+\to {\mathbb{R}}$ of class $C^2$ and $rl''(r)-l'(r)\neq 0$ for all $r>0$. Then, \[w=-P-m^{\alpha}D_vL\Big(\frac {Dm} m\Big)\] is a gradient only if \[ 0=(w_i)_{x_j}-(w_j)_{x_i}=m^{\alpha}\frac{l'(r)-r l''(r)}{r^3}(v_i (v\cdot v_{x_j})-v_j (v\cdot v_{x_i})) \] for all $ i,j \in \{1,\ldots, d\}$, where $r=|Dm|/m$ and $v:=Dm$. Consequently, $v_i (v\cdot v_{x_j})-v_j (v\cdot v_{x_i})=0$ for all $ i,j \in\{1,\ldots, d\}$, which implies that there is a scalar function $\lambda\colon{\mathbb{T}}^d\to {\mathbb{R}}$ such that $v\cdot v_{x_i}=\lambda v_i$ for all \(i\in\{1,\ldots,d\}\). Hence, $m$ must satisfy the identity \begin{equation*} D^2mDm= \lambda Dm. \end{equation*} This identity is rather restrictive in higher dimensions; thus, in general, the solutions to \eqref{Eq:statcong2b} may not correspond to solutions to \eqref{Eq:statcong2}.
Finally, note that for radially symmetric Lagrangians, $L(v)=l(|v|)$, the only case when $w$ is automatically a gradient is the case when $r l''(r)=l'(r)$ for $r>0$, which corresponds to the quadratic Lagrangian discussed in (ii) above. \end{remark} \section{Numerical solution for the first-order MFGs with congestion, with $1<\alpha\leqslant\gamma$ } \label{num} In this section, we compute and analyze numerical solutions for the variational problem \eqref{mmz}. First, in Section~\ref{sub:discretization}, we describe the numerical scheme. Then, in Section~\ref{sub:num1d}, we examine problems in one dimension and discuss the corresponding numerical experiments. Finally, in Section~\ref{sub:num2d}, we perform and discuss our numerical experiments for the two-dimensional case. \subsection{Discretization} \label{sub:discretization} Here, we detail our numerical scheme. For simplicity, we consider the two-dimensional setting; our methods can easily be adapted to other dimensions. We fix an integer, $N\in{\mathbb{N}}$, and denote by ${\mathbb{T}}_N^2$ the square grid in the two-dimensional torus, ${\mathbb{T}}^2$, with grid size $h=\frac 1 N$. Let $x_{i,j}\in {\mathbb{T}}_N^2$ represent a point with coordinates $(ih,jh)$. A grid function is a vector $\varphi \in{\mathbb{R}}^{N^2}$ whose components, $\varphi_{i,j}$, are determined by the values of a function \(\tilde \varphi:{\mathbb{T}}^2_N \to{\mathbb{R}}\) at \(x_{i,j}\); that is, \(\varphi_{i,j}= \tilde \varphi (x_{i,j})\) for $i,j\in\{0,\ldots,N-1\}$. Because we are in the periodic setting, we let $\varphi_{i+N,j}=\varphi_{i,j+N}=\varphi_{i,j}$.
For $\varphi\in{\mathbb{R}}^{N^2}$, we define the 5-point stencil, central-difference scheme \begin{align}\label{diffschemes} \begin{split} (D_{1}^h \varphi)_{i,j}&=\frac{-\varphi_{i+2,j}+8\varphi_{i+1,j}-8\varphi_{i-1,j} +\varphi_{i-2,j}}{12h}\\ (D_{2}^h \varphi)_{i,j}&=\frac{-\varphi_{i,j+2}+8\varphi_{i,j+1}-8\varphi_{i,j-1} +\varphi_{i,j-2}}{12h}. \end{split} \end{align} The discrete gradient vector is defined by \begin{equation}\label{gradCentralDiff} [D^h\varphi]_{i,j} = \left((D_1^h \varphi)_{i,j},(D_2^h \varphi)_{i,j}\right) \in {\mathbb{R}}^2. \end{equation} Let $u,m,V\in {\mathbb{R}}^{N^2}$ be grid functions and $P=(p_1,p_2)\in {\mathbb{R}}^2$ be a vector. We discretize $\bar f$ in \eqref{barf} as follows. We define a function $f_h:{\mathbb{R}}^{2N^2}\times{\mathbb{R}}^{N^2}\to {\mathbb{R}}^{N^2}$ by setting, for $i,j\in\{0,\ldots,N-1\}$, \[ ( f_h([D_h u],m))_{i,j}= \begin{cases} \frac{1}{m_{i,j}^{\alpha -1}\gamma(\alpha -1) } \left(\left(p_1+(D^h_1 u)_{i,j}\right)^2\right.\\ \left.\qquad\qquad +\left(p_2+(D^h_2 u)_{i,j}\right)^2\right) ^{\gamma /2} & \quad \mbox{if $m_{i,j}\neq 0$}\\ \infty & \quad \mbox{if $m_{i,j}= 0$ and $(D^hu)_{i,j} \neq -P$}\\ 0 & \quad \mbox{if $m_{i,j}= 0$ and $(D^hu)_{i,j}=-P$}\\ \end{cases} \] Next, we construct a discrete version of $\bar J$, $ J_h:{\mathbb{R}}^{N^2}\times {\mathbb{R}}^{N^2}\to{\mathbb{R}}$, in the following manner. For $u,m\in {\mathbb{R}}^{N^2}$, we define \[ J_h(u,m)=h^2\sum_{i,j=0}^{N-1}\big( (f_h([D_h u],m))_{i,j}- V_{i,j}m_{i,j}+G(m_{i,j})\big).
\] Accordingly, in the discrete setting, the problem \eqref{mmz} becomes \begin{equation}\label{mmzdiscrete} \min_{(u,m)\in \mathcal A_h} J_h(u,m), \end{equation} where \begin{equation}\label{eq:setAh} {\mathcal{A}}_h=\bigg\{(u,m)\in {\mathbb{R}}^{2N^2}:h^2\sum_{i,j=0}^{N-1} u_{i,j} =0,h^2\sum_{i,j=0}^{N-1} m_{i,j}=1,m_{i,j}\geqslant 0\,\, \forall i,j\in\{0,\ldots,N-1\}\bigg\}. \end{equation} \begin{remark} As an alternative to the 5-point stencil, centered differences, we can discretize $\bar J$ using monotone finite differences \begin{align*} (D^h_1 \varphi)^{+}_{i,j}=\frac{\varphi_{i+1,j}-\varphi_{i,j}}{h}, \quad (D^h_2 \varphi)^{+}_{i,j}=\frac{\varphi_{i,j+1}-\varphi_{i,j}}{h}. \end{align*} Here, $|P+Du|$ is discretized as follows: \begin{align*} |P+D^h \varphi |_{i,j}=&\max(-p_1-(D^h_1 \varphi)^{+}_{i,j},0)+\max(p_1+(D^h_1 \varphi)^{+}_{i-1,j},0)\\&+\max(-p_2-(D^h_2 \varphi)^{+}_{i,j},0)+\max(p_2+(D^h_2 \varphi)^{+}_{i,j-1},0). \end{align*} However, in our numerical experiments, the results with the 5-point centered-difference discretization and the ones with this monotone discretization were similar. Moreover, our numerical tests were faster using \eqref{gradCentralDiff} than using the monotone discretization. Therefore, in our numerical computations, we use the central-difference scheme in \eqref{diffschemes}. \end{remark} \subsection{Numerical experiments in one dimension}\label{sub:num1d} To validate our approach, we start by considering the one-dimensional version of the discretized variational problem \eqref{mmzdiscrete} with \(P=0\) and \(G(m) = \frac{m^2}{2}\).
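The following Python sketch (our own illustration; the optimizer choice, grid size, small lower bound on $m$, and the particular small potential are assumptions, not prescriptions of the text) sets up this one-dimensional discrete problem with the periodic fourth-order differences of \eqref{diffschemes} and minimizes it with SciPy's SLSQP. With a small potential of zero mean, the minimizer is $\bar u=0$, $\bar m=V+1$:

```python
import numpy as np
from scipy.optimize import minimize

N = 16
h = 1.0 / N
x = np.arange(N) * h
V = 0.2 * np.cos(2 * np.pi * (x - 0.25))   # small potential: V + 1 > 0
alpha, gamma = 1.5, 2.0

def Dh(phi):
    # fourth-order central difference with periodic wrap-around
    return (-np.roll(phi, -2) + 8 * np.roll(phi, -1)
            - 8 * np.roll(phi, 1) + np.roll(phi, 2)) / (12 * h)

def J(z):
    # one-dimensional analogue of J_h with P = 0 and G(m) = m^2/2
    u, m = z[:N], z[N:]
    f = Dh(u) ** 2 / (m ** (alpha - 1) * gamma * (alpha - 1))
    return h * np.sum(f - V * m + 0.5 * m ** 2)

cons = [{'type': 'eq', 'fun': lambda z: h * np.sum(z[:N])},        # mean of u is 0
        {'type': 'eq', 'fun': lambda z: h * np.sum(z[N:]) - 1.0}]  # total mass 1
bnds = [(None, None)] * N + [(1e-6, None)] * N   # m >= 0, with a small floor

z0 = np.concatenate([np.zeros(N), np.ones(N)])
res = minimize(J, z0, method='SLSQP', constraints=cons, bounds=bnds,
               options={'maxiter': 500, 'ftol': 1e-12})
u, m = res.x[:N], res.x[N:]
# expected minimizer: u = 0 and m = V + 1 (the grid sum of V is zero)
```

Since $J$ depends on $u$ only through $|D^hu|^2$, the $u$- and $m$-parts essentially decouple at the minimizer.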
In this case, the unique solution of \eqref{mmz} is (see Section~\ref{varexplicit}) \begin{equation}\label{explmmzP0} \begin{aligned} (\bar u, \bar m)= \big(0, (V(x) - \overline H)^+\big), \end{aligned} \end{equation} where \(\overline H\) is such that \(\int_{{\mathbb{T}}} \bar m(x)\, dx = 1\). As a first example, we choose the congestion exponent $\alpha=1.5$, $\gamma=2$, and the potential \[ V(x)=\frac 1 2 \cos\left(2\pi \left(x-\frac 1 4\right)\right), \] as shown in Fig.~\ref{fig:plotVval}. Note that $\int_{{\mathbb{T}}}V\,dx=0 $ and \(\inf_{{\mathbb{T}}} V = -\frac{1}{2}\). As proved in Section~\ref{varexplicit}, \((\bar u,\bar m) = (0, V+1)\) is the minimizer of \eqref{mmz} and \((\bar u,\bar m, \overline H) = (0, V+1, -1)\) is the classical solution of \eqref{main2}. The numerical solutions (with \(N=200\)) for the density, $m$, and the value function, $u$, are shown, respectively, in Figs.~\ref{fig:plotmval} and \ref{fig:plotuval}. As expected, $u\equiv 0$ and, because the potential does not have regions where it is \textit{too negative}, the graph of \(m\) resembles that of \(V\) everywhere. In Fig.~\ref{fig:plotmvVal}, we depict the absolute error between the numerical solution $m(\cdot)$ and the explicit solution $\bar m(\cdot)=V(\cdot)+1$. The maximum absolute error between these two functions is of order $10^{-8}$. Next, we slightly modify the potential, but in such a way that we are in the borderline case for which \eqref{explmmzP0} does not provide a classical solution of \eqref{main2}. More precisely, we consider the potential \[ V(x)=\cos\left(2\pi \left(x-\frac 1 4\right)\right), \] as shown in Fig.~\ref{fig:plotVvalb}. In this case, $\int_{{\mathbb{T}}}V\,dx=0 $, \(\inf_{{\mathbb{T}}} V =V(\frac{3}{4})=-1\), and \((\bar u,\bar m) = (0, V+1)\) is the minimizer of \eqref{mmz}. Note that \(\bar m (\frac{3}{4})=0\); elsewhere in \({\mathbb{T}}\), \(\bar m\) is positive.
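In practice, the constant \(\overline H\) in \eqref{explmmzP0} can be computed by bisection, since \(H\mapsto\int_{{\mathbb{T}}}(V-H)^+\,dx\) is continuous and nonincreasing. A sketch (uniform-grid quadrature is our own choice; for the first potential above, the exact value is \(\overline H=-1\)):

```python
import numpy as np

def find_Hbar(V, target=1.0, iters=200):
    """Bisection for H such that mean((V - H)^+) == target.

    V holds samples of the potential on a uniform periodic grid, so the
    grid mean approximates the integral over the torus."""
    # bracket: mass >= target at lo, mass = 0 at hi
    lo, hi = V.min() - target - 1.0, V.max()
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if np.mean(np.maximum(V - mid, 0.0)) > target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

x = np.arange(1000) / 1000.0
V = 0.5 * np.cos(2 * np.pi * (x - 0.25))
Hbar = find_Hbar(V)   # exact value is -1: V + 1 >= 1/2 and has mean 1
```

For potentials that dip well below $\overline H$, the same routine returns the constant that truncates the density on the region where the potential is too negative.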
The numerical solutions (with \(N=200\)) for the density, $m$, and the value function, $u$, are shown, respectively, in Figs.~\ref{fig:plotmvalb} and \ref{fig:plotuvalb}. As expected, $u\equiv 0$ and, as before, the graph of \(m\) resembles that of \(V\) everywhere. In Fig.~\ref{fig:plotmvValb}, we depict the absolute error between the numerical solution $m(\cdot)$ and the explicit solution $\bar m(\cdot)=V(\cdot)+1$. The maximum absolute error between these two functions is of order $10^{-6}$. Finally, concerning the validation of our method in the one-dimensional case, we consider a potential for which \eqref{explmmzP0} is far from being a classical solution of \eqref{main2}. Namely, we consider the potential, \[ V(x)=10\cos\left(2\pi \left(x-\frac 1 4\right)\right), \] shown in Fig.~\ref{fig:plotVvalc}. The numerical solutions for the density, $m$, and the value function, $u$, are shown, respectively, in Figs.~\ref{fig:plotmvalc} and \ref{fig:plotuvalc}. As expected, $u\equiv 0$ and, in the regions where the potential is not too negative, the graph of \(m\) resembles that of \(V\). In Fig.~\ref{fig:plotmvValc}, we depict the absolute error between the numerical solution $m(\cdot)$ and the explicit solution $\bar m(\cdot)$ given by \eqref{explmmzP0}. To understand the error between \(m\) and \(\bar m\), we performed the numerical simulation for several grid sizes, \(h=\tfrac1N\). In Table~\ref{Table1d}, we present the numerical error and the corresponding running time for \(N\in\{100, 200, 400, 600, 800, 1000\}\). We observe that the error is linear in the grid size and the running time is quadratic in the number of nodes, as expected for a minimization algorithm that relies on matrix inversions. The simulations were performed on a laptop with a 2.5 GHz Intel Core i7 processor and 16 GB 1600 MHz DDR3 memory.
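The first-order convergence reported above can be read off Table~\ref{Table1d} directly by computing observed rates between consecutive grids. A sketch, with the maximum absolute errors copied from the table:

```python
import numpy as np

# grid sizes and maximum absolute errors from Table 1
Ns   = np.array([100, 200, 400, 600, 800, 1000])
errs = np.array([0.03083580, 0.01517990, 0.00768403,
                 0.00522876, 0.00385371, 0.00308331])

# observed order: err ~ C h^r  =>  r = log(e_k / e_{k+1}) / log(N_{k+1} / N_k)
rates = np.log(errs[:-1] / errs[1:]) / np.log(Ns[1:] / Ns[:-1])
# each entry of `rates` is close to 1, i.e. the error is O(h)
```

The observed orders fluctuate around one, consistent with the claim that the error is linear in the grid size.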
Next, we show the effect of the preferred direction, $P$, by plotting the behavior of $u$ and $m$ for $P\in\{0,1,2,3,4\} $ in Fig.~\ref{fig:solValidationExp2}. We recall that, if \(P\not=0\), we are not aware of closed-form solutions of \eqref{mmz}. For the example in Fig.~\ref{fig:solValidationExp2}, we chose $V(x)= e^{-(x-\frac{1}{2})^2}$, $G(m) = m^2$, $\alpha= 1.5$, and $\gamma=2$. We observe that the larger \(P\) is, the flatter the graph of \(m\) becomes. In other words, as \(P\) grows, agents prefer to move more and their distribution becomes more homogeneous. This effect was first observed for MFGs without congestion in \cite{Gomes2016b}. Finally, in Fig.~\ref{fig:solExpr134}, we illustrate the dependence of the solution on the congestion parameter, \(\alpha\). Precisely, we depict the behavior of the solution for the potential \[V(x)=10\sin\left(2\pi\left(x+\frac 1 4\right)\right)\] and for $\alpha\in\{1.001, 1.2, 1.4, 2\},$ $P=1$, $\gamma=2$, and \(G(m) = m^3\). As expected, the density resembles the potential in the regions where the potential is not \textit{too negative}. This explains the formation of regions with almost no agents (see Fig.~\ref{fig:plotmls4}). Note that the congestion exponent, $\alpha$, determines the strength of the congestion effects. For instance, in the regions where the potential is too negative and, therefore, with fewer agents (see Fig.~\ref{fig:plotmls4}), we see that the higher the congestion exponent is, the lower the density value is. However, these differences are compensated in regions where the potential is positive, where we see that the higher the congestion exponent is, the higher the density value is. This effect is expected because the mass of the system is preserved.
\begin{figure} \caption{Potential $V$} \label{fig:plotVval} \caption{Density $m$} \label{fig:plotmval} \caption{Value function $u$} \label{fig:plotuval} \caption{Absolute numerical error} \label{fig:plotmvVal} \caption{Numerical solution of the variational problem \eqref{mmz}} \label{fig:solValidation} \end{figure} \begin{figure} \caption{Potential $V$} \label{fig:plotVvalb} \caption{Density $m$} \label{fig:plotmvalb} \caption{Value function $u$} \label{fig:plotuvalb} \caption{Absolute numerical error} \label{fig:plotmvValb} \caption{Numerical solution of the variational problem \eqref{mmz}} \label{fig:solValidationb} \end{figure} \begin{figure} \caption{Potential $V$} \label{fig:plotVvalc} \caption{Density $m$} \label{fig:plotmvalc} \caption{Value function $u$} \label{fig:plotuvalc} \caption{Absolute numerical error} \label{fig:plotmvValc} \caption{Numerical solution of the variational problem \eqref{mmz}} \label{fig:solValidationc} \end{figure} \begin{table} \begin{center} \begin{tabular}{ | l | | c | c | c| } \hline \textbf{Grid size} & \textbf{Max abs. error} & \textbf{Mean abs. error} & \textbf{Running time (s)} \\ \hline \hline $N=100$ & 0.03083580&0.010075100 & 4.12853\\ \hline $N=200$ & 0.01517990 & 0.004908920 & 6.41835\\ \hline $N=400$ & 0.00768403 & 0.002471920 & 24.1912 \\ \hline $N=600$ & 0.00522876 & 0.001681400 &72.1434 \\ \hline $N=800$ & 0.00385371 & 0.001246080 &105.094 \\ \hline $N=1000$ & 0.00308331 & 0.000994913 &182.188 \\ \hline \end{tabular} \end{center} \caption{Numerical error and running time computation. Comparison between the closed-form solution, $\bar m(x)=(V(x)-{\overline{H}})^+$, where ${\overline{H}}\in{\mathbb{R}}$ is such that $\int_{{\mathbb{T}}}\bar m\, dx=1$, and the numerical solution of the variational problem \eqref{mmz}.
Here, $G(m) = \frac{m^2}{2}$, $P=0$, $V(x)=10 \cos\left(2\pi \left(x-\frac 1 4\right)\right)$, $\alpha= 1.5$, and $\gamma=2$.}\label{Table1d} \end{table} \begin{figure} \caption{Potential $V$} \label{fig:plotVvalExp2} \caption{Value function $u$} \label{fig:plotuvalExp2} \caption{Density $m$} \label{fig:plotmvValExp2} \caption{Numerical solution of the variational problem \eqref{mmz}} \label{fig:solValidationExp2} \end{figure} \begin{figure} \caption{Potential $V$} \label{fig:plotV3} \caption{Value function $u$} \label{fig:plotuls4} \caption{Density $m$} \label{fig:plotmls4} \caption{Numerical solution of the variational problem \eqref{mmz}} \label{fig:solExpr134} \end{figure} \subsection{Numerical experiments in two dimensions}\label{sub:num2d} Here, we solve the discretized variational problem~\eqref{mmzdiscrete}. We start by validating our numerical method by considering first the \(P=(0,0)\) case. We recall that, in this case, we have determined closed-form solutions of \eqref{mmz} in Section~\ref{varexplicit}. Precisely, we treat first the example in which we choose the potential \begin{equation}\label{eq:potential2DValid} V(x,y)= 10\sin\left(2\pi \left(x+\frac 1 4\right)\right)\cos\left(2\pi \left(y+\frac 1 4\right)\right), \end{equation} the coupling \(G(m)=\frac{m^2}{2}\), \(P=(0,0)\), $\alpha= 1.5$, and $\gamma=2$ (see Fig.~\ref{fig:solExpr2c}). The numerical solutions (with $N=50$) for the density, $m$, and the value function, $u$, are shown, respectively, in Figs.~\ref{fig:plotmlsvc} and \ref{fig:plotulsvc}. As expected, $u\equiv 0$ and, in the regions where the potential is not \textit{too negative}, the graph of \(m\) resembles that of \(V\). In Fig.~\ref{fig:plotmVvc}, we depict the absolute error between the numerical solution, $m$, and the explicit solution of \eqref{mmz}, $\bar m$, given by \eqref{eq:explweaksolH}. Moreover, we performed the numerical simulation for several grid sizes, \(h=\tfrac1N\).
In Table~\ref{Table2d}, we present the numerical error between \(m\) and \(\bar m\) and the corresponding running time for \(N\in\{20, 40, 80\}\). We observe that the error decreases faster than linearly in $\frac 1 N$ and that the running time increases somewhat faster than quadratically in $N^2$, the total number of nodes. Next, in Fig.~\ref{fig:solExpr1}, we choose $P=(p_1,p_2)=(1,3)$, $V(x,y)= \sin\left(2\pi \left(x+\frac 1 4\right)\right)\cos\left(2\pi \left(y+\frac 1 4\right)\right)$, $G(m)=m^3$, $\alpha=1.5$, and $\gamma=2$. The density, $m$, and the value function, $u$, for this example are displayed in Figs.~\ref{fig:plotmls} and \ref{fig:plotuls}, respectively. In this figure, we note the asymmetry caused by the vector $P$, which introduces a preferred direction of motion. Finally, in Fig.~\ref{fig:solExpr12}, we illustrate the solution of \eqref{mmzdiscrete} for $P=(-1,3)$, $G(m) = \frac{m^2}{2}+ m^3$, $V(x,y)= e^{-\sin\left(2\pi \left(x+\frac 1 4\right)\right)^2}\cos\left(2\pi \left(y-\frac 1 4\right)\right)$, $\alpha= 2$, and $\gamma=2.5.$ Note that the numerical solution is robust with respect to the choice of the parameters and the functions $V$ and $G$. Moreover, the density resembles the potential in all the considered experiments. \begin{figure} \caption{Potential $V$} \label{fig:plotVvc} \caption{Density $m$} \label{fig:plotmlsvc} \caption{Value function $u$} \label{fig:plotulsvc} \caption{Absolute numerical error} \label{fig:plotmVvc} \caption{Numerical solution of the variational problem \eqref{mmz}} \label{fig:solExpr2c} \end{figure} \begin{figure} \caption{Potential $V$} \label{fig:plotVvc2} \caption{The density $m$} \label{fig:plotmls} \caption{The value function $u$} \label{fig:plotuls} \caption{Numerical solution of the variational problem \eqref{mmz}} \label{fig:solExpr1} \end{figure} \begin{table} \begin{center} \begin{tabular}{ | l | | c | c | c| } \hline \textbf{Grid size} & \textbf{Max abs.
error} & \textbf{Mean abs. error} & \textbf{Running time (s)} \\ \hline \hline $N=20$ & 0.03982920& 0.011831300 & 3.39795 \\ \hline $N=40$ & 0.00691211 & 0.002116950 &222.576 \\ \hline $N=80$ & 0.00108425& 0.000318745 & 6032.06 \\ \hline \end{tabular} \end{center} \caption{Numerical error and running time computation. Comparison between the closed-form solution, $\bar m(x)=(V(x)-{\overline{H}})^+$, where ${\overline{H}}\in{\mathbb{R}}$ is such that $\int_{{\mathbb{T}}^2}\bar m\, dx=1$, and the numerical solution of the variational problem \eqref{mmz}. Here, $P=(0,0)$, $V(x,y)=10 \sin\left(2\pi \left(x+\frac 1 4\right)\right)\cos\left(2\pi \left(y+\frac 1 4\right)\right)$, $G(m) = \frac{m^2}{2}$, $\alpha= 1.5$, and $\gamma=2$.}\label{Table2d} \end{table} \begin{figure} \caption{Potential $V$} \label{fig:plotV2} \caption{Density $m$} \label{fig:plotmls2} \caption{Value function $u$} \label{fig:plotuls2} \caption{Numerical solution of the variational problem \eqref{mmz}} \label{fig:solExpr12} \end{figure} \section{Numerical solution for the two-dimensional first-order MFGs with congestion, with $0<\alpha<1$ and $\gamma>1$}\label{num8} In this section, we numerically solve the following MFG system, introduced in Section \ref{2dcase}: \begin{equation}\label{eq:hjb2d} \begin{cases} \frac{|P+Du|^{\gamma}}{\gamma m^{\alpha}}+V(x)-g(m)=\overline{H}&\quad \mbox{in }\; {\mathbb{T}}^2\\ \operatorname{div}(m^{1-\alpha} |P+Du|^{\gamma-2} (P+Du) )=0 &\quad \mbox{in }\; {\mathbb{T}}^2\\ m>0, \int_{{\mathbb{T}}^2} m(x)\,dx=1 \end{cases} \end{equation} for $0<\alpha<1$ and $\gamma>1$. In the sequel, we describe a procedure to obtain a numerical solution of \eqref{eq:hjb2d}. \paragraph{\textbf{Step 1.}} First, we define \[ \gamma'=\frac{\gamma}{\gamma-1} \quad\mbox{and}\quad \tilde{\alpha}=\alpha-(\alpha-1)\gamma'. \] Note that $\gamma'>1$ and $1<{\tilde{\alpha}}<\gamma'$.
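Step 1 is a one-line computation. The following sketch checks, for the parameters used in the experiment below ($\alpha=0.8$, $\gamma=2$), that the transformed exponents satisfy $\gamma'>1$ and $1<\tilde{\alpha}<\gamma'$:

```python
def transform_exponents(alpha, gamma):
    # Step 1: conjugate exponent and transformed congestion exponent
    gamma_p = gamma / (gamma - 1.0)
    alpha_t = alpha - (alpha - 1.0) * gamma_p
    return gamma_p, alpha_t

gamma_p, alpha_t = transform_exponents(0.8, 2.0)
# gamma' = 2 and alpha~ = 0.8 + 0.2*2 = 1.2, so 1 < alpha~ < gamma'
```

In general, for $0<\alpha<1$ one has $\tilde\alpha=\alpha+(1-\alpha)\gamma'>1$ and $\tilde\alpha-\gamma'=\alpha(1-\gamma')<0$, so the transformed problem falls in the range treated in the previous section.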
\paragraph{\textbf{Step 2.}} Next, using the results from Section \ref{2dcase}, we transform \eqref{eq:hjb2d} into \begin{equation}\label{eq:Transformed} \begin{cases} \frac{|Q+D\psi|^{\gamma'}}{\gamma' m^{{\tilde{\alpha}}}}+\frac{\gamma}{\gamma'}V(x)-\frac{\gamma}{\gamma'}g(m)= {\frac{\gamma}{\gamma'}} \overline{H}&\quad \mbox{in }\; {\mathbb{T}}^2\\ \operatorname{div}(m^{1-{\tilde{\alpha}}} |Q+D\psi|^{\gamma'-2} (Q+D\psi) )=0&\quad \mbox{in }\; {\mathbb{T}}^2\\ m>0,\int_{{\mathbb{T}}^2} m(x)\,dx=1. \end{cases} \end{equation} Then, for each $Q$, we solve the above system. For this, we formulate a variational principle analogous to \eqref{mmz}. Namely, for fixed $Q$, we minimize the functional \begin{equation}\label{eq:Jpsiandm} J[\psi,m] = \int_{{\mathbb{T}}^2}\left( \frac{|Q+D\psi|^{\gamma'}}{\gamma' ({\tilde{\alpha}} - 1) m^{{\tilde{\alpha}}-1}} -\frac{\gamma}{\gamma'}V m +\frac{\gamma}{\gamma'}G(m)\right) dx \end{equation} under the constraints $\int_{{\mathbb{T}}^2}\psi \,dx=0$, $\int_{{\mathbb{T}}^2}m\, dx= 1$, and $m>0$, using the corresponding discretization for $\psi$ and $m$. That is, considering the grid functions, $\psi,m\in {\mathbb{R}}^{N^2}$, we numerically solve \begin{equation}\label{eq:minJpsiandm} \min_{(\psi,m)\in \mathcal A_h} J_h[\psi,m], \end{equation} where \begin{equation}\label{eq:discreteJpsiandm} J_h[\psi,m]=h^2\sum_{i,j=0}^{N-1}\left( ( f_h([D_h \psi],m))_{i,j} - \frac{\gamma}{\gamma'}V_{i,j}m_{i,j}+\frac{\gamma}{\gamma'}G(m_{i,j})\right) \end{equation} with \[ ( f_h([D_h \psi],m))_{i,j}=\frac{1}{m_{i,j}^{{\tilde{\alpha}}-1 }\gamma'({\tilde{\alpha}} -1) } \left(\left(q_1+(D^h_1 \psi)_{i,j}\right)^2 +\left(q_2+(D^h_2 \psi)_{i,j}\right)^2\right) ^{\gamma' /2} \] for $m_{i,j}\not=0$ and $i,j\in\{0,\ldots,N-1\}$. The set $\mathcal{A}_{h}$ is the same as in \eqref{eq:setAh}, replacing $u$ by $\psi$. Moreover, the discretization scheme is the one given in \eqref{diffschemes}--\eqref{gradCentralDiff}.
\paragraph{\textbf{Step 3.}} So far, for each \(Q\), we have $\psi$ and $m$ satisfying \eqref{eq:minJpsiandm}. Next, we determine the corresponding vector \(P\). By \eqref{eq:PperpDuPerp}, we have \[ P^\perp+(Du)^\perp= m^{1-{\tilde{\alpha}}} |Q+D\psi|^{\gamma'-2} (Q+D\psi). \] Because $u$ is periodic, by integrating the previous expression in ${\mathbb{T}}^2$, we obtain \begin{equation}\label{eq:Pperp} P^\perp = \int_{{\mathbb{T}}^2}m^{1-{\tilde{\alpha}}} |Q+D\psi|^{\gamma'-2} (Q+D\psi)\,dx. \end{equation} We observe that $P^\perp$ can be obtained using the functional $J$. In fact, let \[ \vartheta[\psi,m]=\frac{|Q+D\psi|^{\gamma'}}{\gamma' ({\tilde{\alpha}} - 1) m^{{\tilde{\alpha}}-1}} -\frac{\gamma}{\gamma'}V m +\frac{\gamma}{\gamma'}G(m) \] denote the integrand of $J$. Taking the variational derivative of $\vartheta$ with respect to $m$ in the direction of $1$, we get \[ \frac{\delta}{\delta m} \vartheta [\psi,m]=\lim_{\varepsilon\to 0}\frac{\vartheta [\psi,m+\varepsilon]-\vartheta[\psi,m]}{\varepsilon}= -\frac{|Q+D\psi|^{\gamma'}}{\gamma' m^{{\tilde{\alpha}}}} -\frac{\gamma}{\gamma'}V +\frac{\gamma}{\gamma'}g(m). \] Next, differentiating the expression above with respect to $Q$, we obtain \[ -\frac{|Q+D\psi|^{\gamma'-2}(Q+D\psi)}{ m^{{\tilde{\alpha}}}}. \] Finally, multiplying this last expression by $m$ and integrating over ${\mathbb{T}}^2$, we obtain the right-hand side of \eqref{eq:Pperp}. Therefore, $P^\perp$ is alternatively given by \begin{equation} \label{eq:formPper} \begin{aligned} P^\perp = -\int_{{\mathbb{T}}^2}\frac{\partial}{\partial Q}\left(\frac{\delta}{\delta m} \vartheta[\psi,m](Q)\right)m\,dx.
\end{aligned} \end{equation} Recalling that $P^\perp = (-p_2,p_1)$, we observe that the discrete version of \eqref{eq:formPper} is \begin{align}\label{eq:p1p2} \begin{split} p_1 &= h^2\sum_{i,j=0}^{N-1}\big(\mathbf{D}^h_{q_2}(\mathbf{D}^h_m (\vartheta[\psi,m])(q_2))\big)_{i,j}m_{i,j}\\ p_2 &= -h^2\sum_{i,j=0}^{N-1}\big(\mathbf{D}^h_{q_1}(\mathbf{D}^h_m (\vartheta[\psi,m])(q_1))\big)_{i,j}m_{i,j}. \end{split} \end{align} \begin{remark} The notation $(\mathbf{D}^h_{x}(f(x)))_{i,j}$ must be understood as the $(i,j)$-node of the symbolic derivative of a grid function, $f$, with respect to $x$, whereas $(\mathbf{D}_f^h g[f])_{i,j}$ is the symbolic variational derivative of $g$ with respect to $f$ in the direction of $1$. In our implementation, we use symbolic calculus to compute those derivatives in a straightforward and automated fashion. Here, we obtain \eqref{eq:p1p2} using standard symbolic manipulations applied to $J$. \end{remark} \paragraph{\textbf{Step 4.}} Finally, we use $m$ and \(P\) from the previous steps and solve, for $u$, the Hamilton--Jacobi equation, \begin{equation}\label{eq:hjbEffHamiltgamma} \frac{|P+Du|^{\gamma}}{\gamma m^{\alpha}}+V(x)-g(m)=\overline{H} \quad \mbox{in }\;{\mathbb{T}}^2. \end{equation} Under the notation of Section \ref{sub:discretization}, we use the following monotone scheme for $|P+D^h u |_{i,j}^{\gamma}$, $i,j\in\{0,\ldots, N-1\}$: \begin{align*} |P+D^h u|_{i,j}^{\gamma}=&\max(-p_1-(D^h_1 u)^{+}_{i,j},0)^{\gamma}+\max(p_1+(D^h_1 u)^{+}_{i-1,j},0)^{\gamma}\\&+\max(-p_2-(D^h_2 u)^{+}_{i,j},0)^{\gamma}+\max(p_2+(D^h_2 u)^{+}_{i,j-1},0)^{\gamma}.
\end{align*} From \cite{LPV}, the solutions $u^{(\beta)}$ of \[ \beta u^{(\beta)} + \frac{|P+Du^{(\beta)}|^{\gamma}}{\gamma m^{\alpha}}+V(x)-g(m)=0 \quad \mbox{in }\;{\mathbb{T}}^2 \] converge to a solution of \eqref{eq:hjbEffHamiltgamma} as $\beta\to 0$; that is, $\beta u^{(\beta)}$ converges uniformly to $-{\overline{H}}$, and $u^{(\beta)}-\max_{x\in{\mathbb{T}}^2} u^{(\beta)}(x)$ converges uniformly, up to subsequences, to $u$. We then use the notation for grid functions from Section~\ref{sub:discretization}, and numerically solve, for small $\beta>0$, the discrete problem \[ \beta u^{(\beta)}_{i,j} + \frac{|D^h u^{(\beta)} + P|_{i,j}^{\gamma}}{\gamma m^\alpha_{i,j}} +V_{i,j}-g(m_{i,j}) = 0\quad \mbox{in ${\mathbb{T}}^2_N$}. \] Hence, we set \[ \overline{H}^{(\beta)} = \max\{u^{(\beta)}_{i,j},\; i,j\in\{0,\ldots, N-1\} \}. \] Finally, the solution of \eqref{eq:hjbEffHamiltgamma} is approximated by \[ u_{i,j}\cong u^{(\beta)}_{i,j} - \overline{H}^{(\beta)}. \] \subsection{Numerical experiments} In Fig.~\ref{fig:solTransf}, we illustrate the solution of \eqref{eq:hjb2d} for $\alpha=0.8$, $\gamma=2$, and $P=(1,3)$. Moreover, we use the coupling $G(m) = m^3$ and the potential $$V(x,y)= \sin\left(2\pi \left(x+\frac 1 4\right)\right)\cos\left(2\pi \left(y+\frac 1 4\right)\right).$$ In this example, we have $Q=(3,-1)$. The value function, $u$, and the density, $m$, are displayed in Figs.~\ref{fig:plotuTransf} and \ref{fig:plotmTransf}, respectively. Note that, even with $\alpha<1$, the density is similar to the density in Fig.~\ref{fig:plotmls}. However, for this example, the value function behaves differently from what we observed in Fig.~\ref{fig:plotuls}.
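Step 4 can be sketched in one dimension as a fixed-point iteration on the discounted equation; everything below (the relaxation parameter, the iteration count, and the test data) is our own choice, not a prescription of the text. For constant data the answer is explicit, $u^{(\beta)}\equiv-(V-g(m))/\beta$, so $-\beta u^{(\beta)}$ recovers $\overline H=V-g(m)$:

```python
import numpy as np

def solve_discounted(V, g_m, m, P=0.0, alpha=1.0, gamma=2.0,
                     beta=0.1, tau=0.5, iters=4000):
    """Relaxation for the discounted equation
        beta*u + |P + D^h u|^gamma / (gamma * m^alpha) + V - g(m) = 0
    on a uniform periodic 1d grid, with the monotone (upwind) scheme."""
    N = V.size
    h = 1.0 / N
    u = np.zeros(N)
    for _ in range(iters):
        Dp = (np.roll(u, -1) - u) / h       # forward differences (D^h u)^+
        ham = (np.maximum(-P - Dp, 0.0) ** gamma
               + np.maximum(P + np.roll(Dp, 1), 0.0) ** gamma)
        u -= tau * (beta * u + ham / (gamma * m ** alpha) + V - g_m)
    return u

# constant data: u^(beta) = -(V - g(m))/beta, and Hbar = V - g(m) = 0.3
N = 32
u = solve_discounted(V=np.full(N, 0.3), g_m=np.zeros(N), m=np.ones(N))
Hbar = -0.1 * u.mean()   # -beta * u^(beta), with beta = 0.1 (default above)
```

For nonconstant data, the relaxation parameter must additionally satisfy a CFL-type restriction tied to the Lipschitz constant of the Hamiltonian.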
\begin{figure} \caption{Potential $V$} \label{fig:plotVTransf} \caption{Density $m$} \label{fig:plotmTransf} \caption{Value function $u$} \label{fig:plotuTransf} \caption{Numerical solution of \eqref{eq:hjb2d}} \label{fig:solTransf} \end{figure} \section{Numerical solution for the second-order MFGs with congestion, with $\gamma=2$ and $1<\alpha<\gamma$} \label{num9} In this section, for illustration purposes, we numerically solve a variational problem introduced in Section \ref{otherH}, which corresponds to a second-order MFG with congestion in the quadratic case. More precisely, we consider the minimization problem \begin{equation}\label{eq:varfomul2ndord} \min_{\psi \geqslant 0}\hat J[\psi], \quad\hat J[\psi]=\int_{{\mathbb{T}}^d} \bigg[\frac{|D\psi|^2}{2\alpha+1} +\hat{G}(\psi)-\frac{2\alpha+1}{2(\alpha+1)}(V(x)-\overline{H})\psi^{\frac{2(\alpha+1)}{2\alpha+1}} \bigg] dx, \end{equation} where, for \(z\geqslant0\), \begin{equation}\label{eq:g9.1} \hat{G}(z)=\int_0^z g(r^{\frac{2}{2\alpha+1}})r^{\frac{1}{2\alpha+1}}\,dr \end{equation} and ${\overline{H}}$ is chosen such that the constraint \begin{equation}\label{eq:intpsi1} \begin{aligned} \int_{{\mathbb{T}}^d} \psi^{\frac 2 {2\alpha+1}}\,dx=1 \end{aligned} \end{equation} holds. Recall that the Euler--Lagrange equation of the functional in \eqref{eq:varfomul2ndord} is \eqref{Eq:statcongquad}. Moreover, \eqref{Eq:statcongquad} corresponds to \eqref{eq.pLap} for $\gamma=2$. Using the notation for grid functions introduced in Section \ref{sub:discretization}, we present the discretized version of \eqref{eq:varfomul2ndord} in the two-dimensional case as follows.
For a grid function, $\psi\in {\mathbb{R}}^{N^2}$, we numerically solve \begin{equation} \min_{\psi\geqslant 0} J_h[\psi], \end{equation} where \[ J_h[\psi]=h^2\sum_{i,j=0}^{N-1}\left( ( f_h([D_h \psi]))_{i,j}+\hat{G}(\psi_{i,j})- \frac{2\alpha +1 }{2(\alpha +1)}(V_{i,j} - \overline{H})\psi_{i,j}^{\frac{2(\alpha+1)}{2\alpha+1}}\right) \] and \[ ( f_h([D_h \psi]))_{i,j}=\frac{1}{(2\alpha + 1)} \left(\left((D^h_1 \psi)_{i,j}\right)^2 +\left((D^h_2 \psi)_{i,j}\right)^2\right). \] Moreover, as in the previous sections, the discretization scheme follows \eqref{diffschemes}--\eqref{gradCentralDiff}. Recall that $P=(0,0)$ here. In the following subsection, we perform a numerical experiment to illustrate our method. \subsection{Numerical experiment in the two-dimensional case} Here, we depict a numerical solution of \eqref{eq:varfomul2ndord} for the potential \[ V(x,y)= e^{-\sin\left(2\pi \left(x+\frac 1 4\right)\right)^2}\sin\left(2\pi \left(y-\frac 1 4\right)\right), \] the coupling $g(r) = r^3$ (see \eqref{eq:g9.1}), and $\alpha= 1.5$ (see Fig.~\ref{fig:solSecOr}). For this example, the value of $\overline{H}$ such that \eqref{eq:intpsi1} is satisfied is approximately $\overline{H} \cong -3.001$. The density, $m$, is displayed in Fig.~\ref{fig:plotmSecOr}. In this example, we note that the density still resembles the potential. However, diffusion tends to spread the distribution of the agents, making it more uniform. \begin{figure} \caption{Potential $V$} \label{fig:plotVSecOr} \caption{Density $m$} \label{fig:plotmSecOr} \caption{Numerical solution of \eqref{eq:varfomul2ndord}} \label{fig:solSecOr} \end{figure} \section{Conclusions} In this paper, we develop a new variational formulation for systems of first-order MFGs with congestion. This variational principle provides a new construction of weak solutions and leads to a novel numerical approach through an optimization problem.
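For the coupling used in this experiment, \(g(r)=r^3\) and \(\alpha=1.5\), the antiderivative in \eqref{eq:g9.1} is available in closed form, \(\hat G(z)=\int_0^z r^{7/4}\,dr=\frac{4}{11}z^{11/4}\). The following sketch (our own cross-check) compares this against numerical quadrature:

```python
import numpy as np

alpha = 1.5
a, b = 2.0 / (2 * alpha + 1), 1.0 / (2 * alpha + 1)   # exponents 1/2 and 1/4

def g(r):
    return r ** 3

def G_hat(z, n=200001):
    # trapezoidal quadrature of the integrand g(r^a) * r^b on [0, z]
    r = np.linspace(0.0, z, n)
    v = g(r ** a) * r ** b                # = r^(3/2) * r^(1/4) = r^(7/4)
    h = r[1] - r[0]
    return h * (v.sum() - 0.5 * (v[0] + v[-1]))

closed_form = (4.0 / 11.0) * 2.0 ** (11.0 / 4.0)   # G_hat(2) in closed form
```

When no closed form is available, the same quadrature can be tabulated once and interpolated inside the discrete functional.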
Even though the variational structure strongly depends on the form of the MFGs, it is often possible to modify the variational principle. This approach was followed in Section~\ref{2dcase} for the first-order case and in Section~\ref{tsom} for the second-order case. Moreover, in Sections \ref{num}--\ref{num9}, we use these results to obtain new numerical solutions for MFGs with congestion. \end{document}
\begin{document} \title{Semidualizing Modules and Rings of Invariants} \author{Billy Sanders} \begin{abstract} We show there exist no nontrivial semidualizing modules for nonmodular rings of invariants of order $p^n$ with $p$ a prime. \end{abstract} \keywords{Rings of Invariants, Semidualizing Module} \maketitle \section{Introduction} This paper is concerned with the existence of nontrivial semidualizing modules. Recall \begin{definition} A finitely generated $S$-module $C$ is semidualizing if the map $S\to\hm_S(C,C)$ given by $s\mapsto(x\mapsto sx)$ is an isomorphism and $\ext_S^{i>0}(C,C)=0$. \end{definition} This is equivalent to saying that $S$ is totally $C$-reflexive. Examples always include $S$ and the dualizing module, if it exists; we call these the trivial semidualizing modules. Semidualizing modules were first discovered by Foxby in \cite{Foxby72}. They were later rediscovered by various other authors, including Vasconcelos, who called them spherical modules, and Golod, who referred to them as suitable modules. In \cite{Vasconcelos74}, Vasconcelos asks whether there are only finitely many nonisomorphic semidualizing modules. This question is answered in the affirmative in \cite{Christensen08} for equicharacteristic Cohen-Macaulay algebras, and in \cite{Nasseh12} for the semilocal case. Since their discovery, semidualizing modules have been the focus of much research. See, for example, \cite{AryaTakahashi09}, \cite{Gerko05}, \cite{Vasconcelos74}, \cite{Nasseh12}, \cite{Jorgensen12}, \cite{Sather-Wagstaff09}, and \cite{Sather-Wagstaff09b}. It is natural to ask which rings have only trivial semidualizing modules. In \cite{Jorgensen12}, Jorgensen, Leuschke, and Sather-Wagstaff give a very nice characterization of rings with a dualizing module and only trivial semidualizing modules. However, this characterization is somewhat abstract, and it is difficult to tell whether the conditions hold for a particular ring.
Also, in \cite{Sather-Wagstaff09}, Sather-Wagstaff proves results relating the existence of nontrivial semidualizing modules to Bass numbers. In this paper, we pose the following question: \begin{question} If a ring $S$ has a nice (e.g. rational) singularity, then does $S$ have only trivial semidualizing modules? \end{question} The evidence suggests the answer is yes. In \cite{Dao11}, Celikbas and Dao show that only trivial semidualizing modules exist over Veronese subrings, which have a quotient singularity and hence a rational singularity. Furthermore, Sather-Wagstaff shows in \cite{Sather-Wagstaff07} that only trivial semidualizing modules exist for determinantal rings, which also have a rational singularity. It is proven in \cite[Example 4.2.14]{Sather-Wagstaff09b} that all Cohen-Macaulay rings with minimal multiplicity have no nontrivial semidualizing modules. Since a rational singularity and dimension 2 imply minimal multiplicity, all rings with a rational singularity and dimension 2 have no nontrivial semidualizing modules. The following example shows that there are dimension 3 rings with a rational singularity that do not have minimal multiplicity. \begin{example} Let $$S=k[[x,y,z]]^{(3)}=k[[x^3,y^3,z^3,x^2y,x^2z,y^2x,y^2z,z^2x,z^2y,xyz]],$$ which is the third Veronese subring in three variables. For the multiplicity of $S$ to be minimal, it must equal $\edim S-\dim S+1=10-3+1=8$. However, setting $\bar{S}=S/(x^3,y^3,z^3)S$, we have $e(S)=e(\bar{S})=\lambda(\bar{S})$, where $\lambda$ denotes length. Since $$\bar{S}=k\oplus kx^2y\oplus kx^2z\oplus ky^2x\oplus ky^2z\oplus kz^2x\oplus kz^2y\oplus kxyz\oplus kx^2y^2z^2,$$ we thus have $e(S)=9$. \end{example} In this paper, we add to the evidence that suggests that the answer to Question 1 is yes by investigating the case where $S$ is a ring of invariants, a large class of rings with rational singularity. The following theorem is the main result of this paper.
\begin{utheorem} If $S$ is a power series ring over a field $k$ in finitely many variables and $G$ is a cyclic group of order $p^l$ acting on $S$ with $\operatorname{char} k\ne p$, then $S^G$ has only trivial semidualizing modules. \end{utheorem} Our approach to the proof of this result, relying on Lemma \ref{lemma}, differs from those of the results in \cite{Dao11} and \cite{Sather-Wagstaff07}. In each of those papers, the key technique involves counting the number of generators, whereas we use Lemma \ref{lemma}. See Section 2 for a further explanation. Section 2 gives preliminary results concerning rings of invariants and semidualizing modules and also gives a sketch of the proof. Section 3 proves a key technical theorem about when a ring has only trivial semidualizing modules, and then Section 4 uses this result to prove our main theorem. All rings considered in this paper will be Noetherian and commutative. \section{Preliminaries} In this section, let $S$ be a Noetherian ring. The proof relies upon the following lemma from \cite{Jorgensen12}. \begin{lemma}\label{lemma} If $C$ is a semidualizing $S$-module and $D$ is a dualizing module for $S$, then the homomorphism $\eta:C\otimes\hm_S(C,D)\to D$ given by $x\otimes \varphi\mapsto \varphi(x)$ is an isomorphism. \end{lemma} The map $\eta$ being an isomorphism is a strong condition, since $D$ is torsionless and since tensor products often have torsion elements. We will exploit this map using the following lemma from \cite[Fact 2.4]{Sather-Wagstaff07} and \cite[Theorem 3.1]{Gerko04}. \begin{lemma} If $C$ is a semidualizing $S$-module and $S$ is a normal domain, then $C$ is reflexive and hence an element of the class group. \end{lemma} Therefore, when $S$ is normal, $\hm(C,D)$ is the element of the class group associated with $C^{-1}\circ D$, and all three modules involved in Lemma \ref{lemma} are elements of the class group.
In Theorem \ref{theorem1}, with strong assumptions on $S$, we show that $A\otimes B$ has torsion for any elements $A$ and $B$ in the class group of $S$ which are not isomorphic to $S$. The construction of a torsion element is easy; however, it requires considerable work to show that this element is not zero in the tensor product. With this setup, because of Lemma \ref{lemma} and since $D$ does not have torsion, nontrivial semidualizing modules cannot exist. The proof also requires the following lemma, which is proven in \cite[Proposition 2.2.1]{Sather-Wagstaff09b}. \begin{lemma}\label{completion} If $R\to S$ is a faithfully flat extension, then $C$ is a semidualizing $R$-module if and only if $C\otimes S$ is a semidualizing $S$-module. \end{lemma} For the remainder of this paper, let $R$ be a polynomial ring in finitely many variables over an algebraically closed field $k$, and let $G$ be a finite group acting linearly on $R$. We shall assume that the characteristic of $k$ does not divide the order of the group. To prove the main result, Section 4 shows that when $|G|=p^l$ for some prime $p$, $R^G$ satisfies the assumptions of Theorem \ref{theorem1}. In order to do this, we need the following definition and lemma. \begin{definition} Given a character $\chi:G\to k^\times$, we denote by $R_\chi$ the set of relative invariants, namely, the polynomials $f\in R$ such that $gf=\chi(g) f$ for all $g\in G$. \end{definition} Note that $R_\chi$ is an $R^G$-module. The following lemma is from \cite[Theorem 3.9.2]{Benson93}. \begin{lemma} The ring $R^G$ is a normal domain whose class group is the subgroup $H\subseteq \hm(G,k^\times)$ consisting of the characters that contain all the pseudoreflections in their kernel. Furthermore, for any $\chi\in H$, the relative invariants $R_{\chi^{-1}}$ form the reflexive module corresponding to the element $\chi$. \end{lemma} \section{Class Groups} In this section, let $S$ be a Noetherian ring.
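To make the definition of relative invariants concrete, we record a small example; this illustration is ours and is not taken from the cited sources. \begin{example} Let $R=k[x,y]$ and let $G=\langle g\rangle$ be cyclic of order $n$, acting by $gx=\zeta x$ and $gy=\zeta y$ for a primitive $n$th root of unity $\zeta\in k$. A monomial satisfies $g(x^ay^b)=\zeta^{a+b}x^ay^b$, so for the character $\chi_j$ defined by $\chi_j(g)=\zeta^j$ we get $$R_{\chi_j}=\bigoplus_{a+b\,\equiv\, j\ (\mathrm{mod}\ n)} k\,x^ay^b.$$ In particular, $R_{\chi_0}=R^G$ is spanned by the monomials whose degree is divisible by $n$, and each $R_{\chi_j}$ is an $R^G$-module, since multiplying by an invariant does not change the residue of the degree modulo $n$. \end{example}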
We say that an element $\mu$ in an $S$-module $M$ is indivisible if there exist no nonunit $a\in S$ and $\nu\in M$ such that $\mu=a\nu$. \begin{lemma} Suppose $S$ is a $k$-algebra, with $k$ a field, and $M$ and $N$ are $S$-modules. Furthermore, suppose $f\in M$ and $g\in N$ are indivisible, and $\gamma\in M$ and $\rho\in N$ are not unit multiples of $f$ and $g$ respectively. If there exist $k$-bases $E,F,X$ of $M,N,S$ respectively with $f,\gamma\in E$ and $g,\rho\in F$ such that for every $\xi\in X$, $\varepsilon\in E$ and $\eta\in F$, $\xi\varepsilon$ is a $k$-linear multiple of an element of $E$ and $\xi\eta$ is a $k$-linear multiple of an element of $F$, then $f\otimes g-\gamma\otimes\rho$ is not zero in $M\otimes_{S} N$. \end{lemma} \begin{proof} Suppose that such bases $E,F,X$ exist. Let $\mathfrak{F}$ denote the free abelian group functor. Recall that for any modules $U$ and $V$ over a ring $R$, we construct $U\otimes_R V$ by quotienting $\mathfrak{F}(U\times V)$ by the subgroup, which we will call $K_{U,V}(R)$, generated by the elements of the form $$(v_1,u_1+u_2)-(v_1,u_1)-(v_1,u_2)$$ $$(v_1+v_2,u_1)-(v_1,u_1)-(v_2,u_1)$$ $$(\lambda v_1,u_1)-(v_1,\lambda u_1)$$ with $v_i\in U$, $u_i\in V$ and $\lambda\in R$. Hence, $M\otimes_S N\cong \mathfrak{F}(M\times N)/K_{M,N}(S)$ and $M\otimes_k N\cong \mathfrak{F}(M\times N)/K_{M,N}(k)$. Notice that, since $k\subseteq S$, $K_{M,N}(k)\subseteq K_{M,N}(S)$. So $M\otimes_S N$ is a quotient of $M\otimes_k N$.
Specifically, we have the following isomorphisms: $$\frac{M\otimes_k N}{K_{M,N}(S)/K_{M,N}(k)}\cong \frac{\mathfrak{F}(M\times N)/K_{M,N}(k)}{K_{M,N}(S)/K_{M,N}(k)}\cong \frac{\mathfrak{F}(M\times N)}{K_{M,N}(S)}\cong M\otimes_S N.$$ We claim that every element of $K_{M,N}(S)/K_{M,N}(k)\subseteq M\otimes_k N$ is of the form $$\sum_{s=1}^r \lambda_s(\mu_s\tau_s\otimes\nu_s)-\lambda_s(\mu_s\otimes\tau_s\nu_s)$$ with $\lambda_s\in k$, $\mu_s\in E$, $\nu_s\in F$ and $\tau_s\in X\setminus k$. Take $z\in K_{M,N}(S)/K_{M,N}(k)$. Since the generators of $K_{M,N}(S)$ of the form $(v_1,u_1+u_2)-(v_1,u_1)-(v_1,u_2)$ and $(v_1+v_2,u_1)-(v_1,u_1)-(v_2,u_1)$ are in $K_{M,N}(k)$, we may write $$z=\sum_i (m_it_i\otimes n_i-m_i\otimes t_in_i)$$ with $m_i\in M$, $n_i\in N$, and $t_i\in S$. However, since $E,F,X$ are bases of $M,N,S$ respectively, we may also write $$m_i=\sum_j \alpha_{i,j}\mu_{i,j}\qquad n_i=\sum_l \beta_{i,l}\nu_{i,l}\qquad t_i=\sum_k \kappa_{i,k}\tau_{i,k}$$ with each $\alpha_{i,j},\beta_{i,l},\kappa_{i,k}\in k$, $\mu_{i,j}\in E$, $\nu_{i,l}\in F$ and $\tau_{i,k}\in X$.
So we have \begin{align*} z &=\sum_i (m_it_i\otimes n_i-m_i\otimes t_in_i)\\ &=\sum_i\left(\left(\sum_j \alpha_{i,j}\mu_{i,j}\right)\left(\sum_k \kappa_{i,k}\tau_{i,k}\right)\otimes\left(\sum_l \beta_{i,l}\nu_{i,l}\right)-\left(\sum_j \alpha_{i,j}\mu_{i,j}\right)\otimes\left(\sum_k \kappa_{i,k}\tau_{i,k}\right)\left(\sum_l \beta_{i,l}\nu_{i,l}\right)\right)\\ &=\sum_{i,j,k,l}\left( \alpha_{i,j}\mu_{i,j}\kappa_{i,k}\tau_{i,k}\otimes\beta_{i,l}\nu_{i,l}-\alpha_{i,j}\mu_{i,j}\otimes\kappa_{i,k}\tau_{i,k}\beta_{i,l}\nu_{i,l}\right)\\ &=\sum_{i,j,k,l}\alpha_{i,j}\beta_{i,l}\kappa_{i,k}\left( \mu_{i,j}\tau_{i,k}\otimes\nu_{i,l}-\mu_{i,j}\otimes\tau_{i,k}\nu_{i,l}\right)\\ &=\sum_{i,j,k,l}\alpha_{i,j}\beta_{i,l}\kappa_{i,k}( \mu_{i,j}\tau_{i,k}\otimes\nu_{i,l})-\alpha_{i,j}\beta_{i,l}\kappa_{i,k}(\mu_{i,j}\otimes\tau_{i,k}\nu_{i,l}) \end{align*} Lastly, if $\tau_{i,k}$ is in $k$, then $\mu_{i,j}\tau_{i,k}\otimes\nu_{i,l}-\mu_{i,j}\otimes\tau_{i,k}\nu_{i,l}$ is already zero in $M\otimes_k N$. Therefore, setting $\lambda_{i,j,k,l}=\alpha_{i,j}\beta_{i,l}\kappa_{i,k}\in k$, the claim is shown. Now suppose $f\otimes g-\gamma\otimes\rho$ is zero in $M\otimes_S N$. Then in $M\otimes_k N$, we may write $$f\otimes g-\gamma\otimes\rho=\sum_{s=1}^r \lambda_s(\mu_s\tau_s\otimes\nu_s)-\lambda_s(\mu_s\otimes\tau_s\nu_s)$$ with $\lambda_s\in k$, $\mu_s\in E$, $\nu_s\in F$ and $\tau_s\in X\setminus k$. Now $Z=\{a\otimes b\mid a\in E, b\in F\}$ is a $k$-basis of $M\otimes_k N$. Since $f,\gamma\in E$ and $g,\rho\in F$, both $f\otimes g$ and $\gamma\otimes\rho$ are in $Z$. By assumption, each $\mu_s\tau_s\otimes\nu_s$ and $\mu_s\otimes\tau_s\nu_s$ is a $k$-linear multiple of an element of $Z$. Thus, $f\otimes g$ must be a $k$-linear multiple of either $\mu_s\tau_s\otimes\nu_s$ or $\mu_s\otimes\tau_s\nu_s$ for some $s$.
But since $f$ and $g$ are indivisible, while for each $s$ both $\mu_s\tau_s$ and $\tau_s\nu_s$ are divisible by the nonunit $\tau_s$, this is a contradiction. Therefore, $f\otimes g-\gamma\otimes\rho$ cannot be zero in $M\otimes_S N$. \end{proof} Take a ring $S$ with class group $L$ with operation $\circ$. Let $T=\bigoplus_{A\in L} A$. We can give this $S$-module an $L$-graded $S$-algebra structure. For any $A,B\in L$, recall that $A\circ B=\hm(\hm(A\otimes_S B,S),S)\in L$. We define the multiplication on the homogeneous elements of $T$ via the natural map $\varphi_{A,B}: A\otimes_S B\to \hm(\hm(A\otimes_S B,S),S)$ by setting $ab=\varphi_{A,B}(a\otimes b)$ for any $a\in A$ and $b\in B$. We extend this multiplication linearly to the nonhomogeneous elements of $T$. Since $S$ is contained in $T$, this algebra is unital, and, because $\hm(\hm(A\otimes_S B,S),S)\cong\hm(\hm(B\otimes_S A,S),S)$, it is commutative as well. This construction is similar to an algebra considered in \cite{TomariWatanabe92}. \begin{theorem}\label{theorem1} Let $S$ be a Noetherian $k$-algebra, with $k$ a field. Suppose $L$ is finite and cyclic with generator $\Lambda$. Also suppose that the $L$-grading on $T$ can be refined to a grading $\Gamma$ such that every $\Gamma$-homogeneous component is one dimensional. If there exists a $\Gamma$-homogeneous element $x\in \Lambda\subseteq T$ such that $x^n\in \Lambda^n\subseteq T$ is indivisible (as an element of an $S$-module) for all $n\in \mathbb{N}$ strictly less than $|\Lambda|$, then for any $A,B\in L$ where neither $A$ nor $B$ is isomorphic to $S$, the module $A\otimes_S B$ has torsion. \end{theorem} \begin{proof} Since $\Lambda$ generates $L$, there exist $a,b\in\mathbb{N}$ such that $\Lambda^a=A$ and $\Lambda^b=B$, and hence $x^a\in A$ and $x^b\in B$. Since neither $A$ nor $B$ is isomorphic to $S$, we may take $a$ and $b$ strictly positive and strictly less than $|L|$, and so $x^a$ and $x^b$ are indivisible.
We may assume without loss of generality that $a\ge b$. Let $Q$ be a minimal homogeneous generating set of $B$ which contains $x^b$. We may assume every element of $Q$ is indivisible since, by the Noetherian condition, we can replace any divisible element by an indivisible one. Since $B$ is not isomorphic to $S$ and is torsionless, we know that $Q$ has another element $y$ besides $x^b$. Besides being indivisible and homogeneous, $y$ is also not a unit multiple of $x^b$. Set $z=x^a\otimes y -yx^{a-b}\otimes x^b$. We show that $z$ is a torsion element. Since $x^{a-b}$ is in $\Lambda^{a-b}$ and $y$ is in $B=\Lambda^b$, the product $yx^{a-b}$ lies in $\Lambda^a$, which is $A$. Thus $z$ is in $A\otimes_S B$. Furthermore, for any $f\in (A\circ B)^{-1}$ we have $x^ayf,x^{a+b} f\in S$. Thus we have $$(x^ayf)z=x^{2a}yf\otimes y-yx^{a-b}\otimes x^{a+b}yf=x^{2a}yf\otimes y-x^{2a}yf\otimes y=0.$$ Thus, to show that $z$ is a torsion element, it suffices to show that $z$ is not zero in $A\otimes_S B$. Note that, by construction, $x^a$ and $y$ are indivisible, and since $y$ and $x^b$ are not unit multiples of each other, neither are $x^a$ and $yx^{a-b}$. Also, $yx^{a-b}$ is homogeneous since $x^{a-b}$ is. We can therefore choose $\Gamma$-homogeneous bases $E$ and $F$ of $A$ and $B$ respectively such that $x^a,yx^{a-b}\in E$ and $y,x^b\in F$. Similarly, we can choose a $\Gamma$-homogeneous basis $X$ of $S$. Since every $\Gamma$-homogeneous component of $T$ is one dimensional, for every $\xi\in X$, $\varepsilon\in E$ and $\eta\in F$, $\xi\varepsilon$ is a linear multiple of an element of $E$ and $\xi\eta$ is a linear multiple of an element of $F$. Thus $z$ meets the hypotheses of the previous lemma, with $f=x^a$, $g=y$, $\gamma=yx^{a-b}$ and $\rho=x^b$. Therefore, $z$ is not zero in $A\otimes_S B$. \end{proof} \begin{corollary}\label{main} Assume the setup of Theorem \ref{theorem1} and that $S$ has a dualizing module. Then $S$ has no nontrivial semidualizing modules. \end{corollary} \begin{proof} Let $C$ be a semidualizing module for $S$.
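The torsion element of Theorem \ref{theorem1} can be seen concretely in the simplest Veronese ring; the following illustration is ours and is only meant to make the recipe explicit. \begin{example} Let $S=k[x^2,xy,y^2]\subseteq R=k[x,y]$ be the invariant ring of $G=\{\pm 1\}$. Its class group is cyclic of order two, generated by the class $\Lambda$ of the module of odd polynomials, which is generated over $S$ by $x$ and $y$. Taking $A=B=\Lambda$ and $a=b=1$, with $x$ in the role of $x^a$ and $y$ as the second generator, the construction above produces $$z=x\otimes y-y\otimes x\in\Lambda\otimes_S\Lambda.$$ The element $xy\in S$ annihilates $z$: $$(xy)z=x^2y\otimes y-xy^2\otimes x=x^2y\otimes y-y\otimes x^2y=x^2y\otimes y-x^2y\otimes y=0,$$ where we used $xy^2\otimes x=y\otimes (xy)x$ and that $x^2\in S$ may be moved across the tensor sign. Since $z\neq 0$, the module $\Lambda\otimes_S\Lambda$ has torsion, as the theorem predicts. \end{example}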
Then $C\otimes \hm (C,D)\cong D$, where $D$ is a dualizing module. However, $\hm(C,D)\cong C^{-1}\circ D$ is also an element of the class group. Thus, by the previous theorem, since $D$ is torsionless, either $C$ or $\hm(C,D)$ is isomorphic to $S$. Therefore, $C$ is isomorphic to $S$ or $D$. \end{proof} \section{Semidualizing Modules of Rings of Invariants} Let $R$ be the polynomial ring in $d$ variables over $k$. We can apply the previous results to the semidualizing modules over rings of invariants of certain cyclic groups, but first we need a lemma. \begin{lemma}\label{lemma2} Assume $k$ is an algebraically closed field. If $G$ is a finite cyclic group acting linearly on $R$, generated by $g$ whose order is not divisible by the characteristic of $k$, then there exist algebraically independent $x_1,\dots,x_d\in R$ such that $R=k[x_1,\dots,x_d]$ and $gx_i=\zeta^{\eta_i}x_i$ with $\zeta\in k$ a primitive $|G|$th root of unity. \end{lemma} \begin{proof} By putting $g$ in Jordan canonical form, it is an easy exercise to see that $g$ is diagonalizable, since $|G|$ and $\operatorname{char} k$ are coprime. Thus, we may choose an eigenbasis $x_1,\dots,x_d$ of $R_1$. So $gx_i=\xi_ix_i$ with $\xi_i\in k$. Since $g^{|G|}$ acts as the identity, each $\xi_i$ must be a $|G|$th root of unity, and so we may write $\xi_i=\zeta^{\eta_i}$, where $\zeta$ is some fixed primitive $|G|$th root of unity. Also, $R$ is isomorphic to the symmetric algebra of $R_1$, which is a polynomial ring in the variables $x_1,\dots,x_d$. Hence, $x_1,\dots,x_d$ are algebraically independent. \end{proof} To apply the results of Section 3, we will observe that in this case $$T=\bigoplus_{\chi\in L} R_{\chi^{-1}}\subseteq R,$$ where $L$ is the class group of $R^G$. The desired grading $\Gamma$ of $T$ will be the monomial grading with respect to the variables $x_1,\dots,x_d$ defined in the previous lemma.
Before we proceed, however, we need to show that this grading is a refinement of the $L$-grading, to which end the following lemma suffices. \begin{lemma} If $G$ consists of diagonal matrices, then for any character $\chi:G\to k^\times$, the set of all monomials in $R_\chi$ is a $k$-basis. \end{lemma} \begin{proof} Let $X$ be the set of all monomials in $R_\chi$. Since distinct monomials are linearly independent, $X$ is linearly independent. Take any $g\in G$. Then for each $i$, $gx_i=\lambda_ix_i$ with $\lambda_i\in k$. So for any $\underline{x}^{\underline{\alpha}}=x_1^{a_1}\cdots x_d^{a_d}$ in $R$, we have $$g\underline{x}^{\underline{\alpha}}=g(x_1^{a_1}\cdots x_d^{a_d})=(\lambda_1x_1)^{a_1}\cdots (\lambda_dx_d)^{a_d}=\underline{\lambda}\, x_1^{a_1}\cdots x_d^{a_d}=\underline{\lambda}\,\underline{x}^{\underline{\alpha}}$$ with $\underline{\lambda}=\lambda_1^{a_1}\cdots \lambda_d^{a_d}$. Take any $f\in R_\chi$. We may write $f=\kappa_1\underline{x}^{\underline{\alpha}_1}+\cdots+\kappa_m\underline{x}^{\underline{\alpha}_m}$. On the one hand, we know that $$gf=g(\kappa_1\underline{x}^{\underline{\alpha}_1}+\cdots+\kappa_m\underline{x}^{\underline{\alpha}_m}) =\kappa_1 g\underline{x}^{\underline{\alpha}_1}+\cdots+\kappa_m g\underline{x}^{\underline{\alpha}_m} =\kappa_1\underline{\lambda}_1\underline{x}^{\underline{\alpha}_1}+\cdots+\kappa_m\underline{\lambda}_m\underline{x}^{\underline{\alpha}_m}$$ with $\underline{\lambda}_i=\lambda_1^{a_{1,i}}\cdots \lambda_d^{a_{d,i}}$. Since $f$ is in $R_\chi$, we also know that $$gf=\chi(g)f=\chi(g)\kappa_1\underline{x}^{\underline{\alpha}_1}+\cdots+\chi(g)\kappa_m\underline{x}^{\underline{\alpha}_m}.$$ However, since monomials are linearly independent, this means that for each $i$, $\kappa_i\underline{\lambda}_i=\chi(g)\kappa_i$, and so $\underline{\lambda}_i=\chi(g)$.
Therefore, for each $i$, $\underline{x}^{\underline{\alpha}_i}$ is in $R_\chi$ and thus also in $X$. Hence, $X$ spans $R_\chi$ and is a basis. \end{proof} \begin{prop}\label{prop1} Suppose $S$ is a power series ring over a field $k$ in $d$ variables and $G$ is a cyclic group of order $n$ acting linearly on $S$ with $\operatorname{char} k$ not dividing $n$. If $g$ generates $G$ and has a primitive $n$th root of unity as an eigenvalue, then $S^G$ has only trivial semidualizing modules. \end{prop} \begin{proof} By Lemma \ref{completion}, since $\bar{k}\otimes S^G$ is a faithfully flat extension of $S^G$, $C$ is a semidualizing $S^G$-module if and only if $\bar{k}\otimes_{S^G} C$ is a semidualizing $\bar{k}\otimes S^G$-module. Thus, if there are no nontrivial semidualizing modules for $\bar{k}\otimes S^G$, then there are none for $S^G$. So, we may assume that $k$ is algebraically closed. Since $G$ is cyclic and generated by $g$, a character in $\hm(G,k^\times)$ is completely determined by the image of $g$. However, $g$ can only be sent to an $n$th root of unity. Since $k$ is algebraically closed and $\operatorname{char} k$ does not divide $n$, there are $n$ distinct $n$th roots of unity, which form a cyclic group. Therefore, $G$ is isomorphic to $\hm(G,k^\times)$. Since the class group of $R^G$ is a subgroup of $\hm(G,k^\times)$, the class group must be cyclic. By Lemma \ref{lemma2}, we may write $R=k[x_1,\dots,x_d]$, where $gx_i=\zeta^{\eta_i}x_i$ with $\zeta\in k$ a primitive $|G|$th root of unity. The assumption that $g$ has a primitive $n$th root of unity as an eigenvalue tells us that we may assume that $\eta_1=1$. Define $\chi: G\to k^\times$ by $g\mapsto \zeta^{-1}$. Since $\zeta^{-1}$ is a primitive $|G|$th root of unity, $\chi$ generates $\hm(G,k^\times)$. So, for some $\lambda\in\mathbb{N}$, $\chi^\lambda$ generates the class group $L$. Assume that $\lambda$ is as small as possible.
Note that $gx^\lambda_1=(\zeta x_1)^\lambda=\zeta^\lambda x^\lambda_1$, and so $x^\lambda_1\in R_{\chi^{-\lambda}}$, the reflexive module corresponding to $\chi^{\lambda}$. Since we have chosen $\lambda$ to be as small as possible, $|\chi^\lambda|=n/\lambda$. Thus, for each $1\le\nu<|\chi^{-\lambda}|=n/\lambda$, the product $\lambda\nu$ is strictly less than $n$. Since the smallest power of $x_1$ that is invariant is $x_1^n$, this means that $(x_1^\lambda)^\nu$ is indivisible. Therefore, using the monomial grading, the conditions of Corollary \ref{main} and Theorem \ref{theorem1} are satisfied, and thus $R^G$ has no nontrivial semidualizing modules. Since $S^G$ is the completion of $R^G$, and completion is faithfully flat, we are done by Lemma \ref{completion}. \end{proof} We can recover the nonmodular case of \cite[Corollary 3.21]{Dao11}. \begin{corollary} There exist no nontrivial semidualizing modules over nonmodular Veronese subrings. \end{corollary} \begin{proof} Let $g$ be a $d\times d$ diagonal matrix whose entries are all $\zeta_n$, a primitive $n$th root of unity. Then the $n$th Veronese subring in $d$ variables is $k[[x_1,\dots,x_d]]^G$, where $G$ is the group generated by $g$. Since the order of $G$ is $n$, the result follows from the previous proposition. \end{proof} We now come to our main theorem. \begin{theorem}\label{thm} If $S$ is a power series ring over a field $k$ in finitely many variables and $G$ is a cyclic group of order $p^l$ acting on $S$ with $\operatorname{char} k\ne p$, then $S^G$ has only trivial semidualizing modules. \end{theorem} \begin{proof} By Lemma \ref{lemma2}, we may write $R=k[x_1,\dots,x_d]$, where $gx_i=\zeta^{\eta_i}x_i$ with $\zeta\in k$ a primitive $|G|$th root of unity. We may assume that $\zeta^{\eta_1}$ has the greatest order of all the $\zeta^{\eta_i}$, and we set $z=|\zeta^{\eta_1}|$.
Since each $|\zeta^{\eta_i}|$ is a power of $p$ that is at most $z$, we have that $|\zeta^{\eta_i}|$ divides $z$ for each $i$, and so $(\zeta^{\eta_i})^z=1$. Thus, viewing $g$ as a diagonal matrix with entries $\zeta^{\eta_i}$, $g^z$ is the identity, and so $n\le z$. But $z$ is at most $n$, giving us equality. Hence, $\zeta^{\eta_1}$ is a primitive $n$th root of unity. However, since our choice of $\zeta$ is arbitrary, we may assume that $\eta_1=1$. In short, we have $gx_1=\zeta x_1$. The result follows from the previous proposition. \end{proof} The proofs of Theorem \ref{thm} and Proposition \ref{prop1} show that Theorem \ref{theorem1} applies to the class of rings under consideration. Thus we actually have the following result, which resolves in the affirmative a special case of Conjecture 1.3 in \cite{Gotoetal2013}. \begin{corollary} Assume the setup of the previous theorem, and let $D$ be a dualizing module for $S$. If $M$ is a reflexive module of rank 1 and $M\otimes_S \hm_S(M,D)$ is torsion-free, then $M$ is isomorphic to either $S$ or $D$. \end{corollary} \begin{proof} Since $M$ and $\hm_S(M,D)$ are both elements of the class group, and since Theorem \ref{theorem1} applies, either $M$ or $\hm(M,D)$ is isomorphic to $S$. The latter case implies that $M\cong D$. \end{proof} \end{document}
\begin{document} \title{Simple dual braids, noncrossing partitions and Mikado braids of type $D_n$} \begin{abstract} We show that the simple elements of the dual Garside structure of an Artin group of type $D_n$ are Mikado braids, giving a positive answer to a conjecture of Digne and the second author. To this end, we use an embedding of the Artin group of type $D_n$ in a suitable quotient of an Artin group of type $B_n$ noticed by Allcock, of which we give a simple algebraic proof here. This allows one to give a characterization of the Mikado braids of type $D_n$ in terms of those of type $B_n$ and also to describe them topologically. Using this topological representation and Athanasiadis and Reiner's model for noncrossing partitions of type $D_n$, which can be used to represent the simple elements, we deduce the above-mentioned conjecture. \end{abstract} ~\\ \noindent\textbf{AMS 2010 Mathematics Classification}: ~20F36, ~20F55.\\ \noindent\textbf{Keywords.} Coxeter groups, Artin-Tits groups, dual braid monoids, Garside theory, noncrossing partitions. \tableofcontents \thispagestyle{empty} \section{Introduction} The dual braid monoid $B_c^*$ of a Coxeter system $(W,S)$ of spherical type was introduced by Bessis \cite{Dual} and depends on the choice of a standard Coxeter element $c\in W$ (a product of all the elements of $S$ in some order). It is generated by a copy $T_c$ of the set $T$ of reflections of $W$, that is, elements which are conjugate to elements of $S$. As a Garside monoid, it embeds into its group of fractions, which was shown by Bessis to be isomorphic to the Artin group $A_W$ corresponding to $W$. Unfortunately, this isomorphism is poorly understood, and the proof of its existence requires a case-by-case argument \cite[Fact 2.2.4]{Dual}.
The aim of this note is to study properties of the simple elements $\mathrm{Div}(c)$ in $B_c^*$ viewed inside $A_W$ in case $W$ is of type $D_n$ and to show that they are \textit{Mikado braids}, that is, that they can be represented as a quotient of two positive canonical lifts of elements of $W$. These braids appeared in work of Dehornoy \cite{D} in type $A_n$ and in work of Dyer \cite{Dyernil} for arbitrary Coxeter systems and have many interesting properties. For example, they satisfy an analogue of Matsumoto's Lemma in Coxeter groups \cite[Section 9]{Dyernil}. We refer the reader to \cite[Section 9]{Dyernil}, \cite[Section 4]{DG} (there the Mikado braids are called \textit{rational permutation braids}, while the terminology Mikado braids rather refers to braids viewed topologically; it is shown however in \cite{DG} that both are equivalent) or \cite[Section 3.2]{Twisted} for more on the topic. Another important property is that their images in the Iwahori-Hecke algebra $H(W)$ of the Coxeter system $(W,S)$ have positivity properties; let us be more precise. There is a natural group homomorphism $a: A_W\longrightarrow H(W)^\times$. If $\beta\in A_W$ is a Mikado braid and if we express its image $a(\beta)$ in the canonical basis $\{C_w~|~w \in W\}$ of the Hecke algebra, then the coefficients are Laurent polynomials with positive coefficients (see \cite[Section 8]{DG}). This is one of the main motivations for studying Mikado braids, and showing that simple dual braids are Mikado braids. This last property was conjectured for an arbitrary Coxeter system $(W,S)$ of spherical type in \cite{DG}, and shown to hold in all the irreducible types different from $D_n$ \cite[Theorems 5.12, 6.6, 7.1]{DG}. In the classical types $A_n$ and $B_n$, the conjecture is proven using a topological characterization of Mikado braids: it can be seen on any reduced braid diagram (resp. symmetric braid diagram in type $B_n$) whether a braid is a Mikado braid or not. 
The present paper gives topological models for Mikado braids of type $D_n$, similar to those given in types $A_n$ and $B_n$ in \cite{DG}, and solves the above conjecture in the remaining type $D_n$: \begin{theorem}\label{theorem:main} Let $c$ be a standard Coxeter element in a Coxeter group $(W, S)$ of type $D_n$. Then every element of $\mathrm{Div}(c)$ is a Mikado braid. \end{theorem} As a consequence, every simple dual braid in every spherical type Artin group is a Mikado braid, the reduction to the irreducible case being immediate. Licata and Queffelec recently informed us that they also have a proof of the conjecture in types $A, D, E$ with a different approach using categorification \cite{QL}. To prove the conjecture, we proceed as follows. Firstly, we explicitly realize the Artin group $A_{D_n}$ of type $D_n$ as an index two subgroup of a quotient of the Artin group $A_{B_n}$ of type $B_n$. The existence of such a realization, which is of independent interest, is not new: it was noticed by Allcock \cite[Section 4]{Allcock}. We give a simple proof of it here (Proposition~\ref{prop:quotient}). This allows us to realize elements of $A_{D_n}$ topologically by Artin braids. We then characterize Mikado braids of type $D_n$ as the images of those Mikado braids of type $B_n$ which surject onto elements of $W_{D_n}\subseteq W_{B_n}$ under the canonical map from $A_{B_n}$ onto $W_{B_n}$ (Theorem~\ref{thm:mikado_bd}). This implies that Mikado braids of type $D_n$ satisfy a nice topological condition, and gives a model for their study in terms of symmetric Artin braids, because elements of $A_{B_n}$ can be realized as symmetric Artin braids on $2n$ strands (see Section~\ref{typeB}).
Using Athanasiadis and Reiner's graphical model \cite{AR} for $c$-noncrossing partitions of type $D_n$ (which are in canonical bijection with the simple elements $\mathrm{Div}(c)$ of $B_c^*$; we denote this bijection by $x\mapsto x_c$, where $x$ is a $c$-noncrossing partition), we attach to every such noncrossing partition $x$ an Artin braid $\beta_x$ of type $B_n$, whose image in the above mentioned quotient is precisely the element $x_c\in A_{D_n}$ (Section~\ref{main}). Using the topological characterization of Mikado braids of type $B_n$ from~\cite{DG}, we then prove that $\beta_x$ is a Mikado braid of type $B_n$ (Proposition~\ref{beta_mik}), which concludes by the above mentioned characterization of Mikado braids of type $D_n$ (Theorem~\ref{theorem:main}).\\ ~\\ \textbf{Acknowledgments.} We thank Luis Paris for useful discussions with the second author and Jon McCammond for pointing out the reference~\cite{MS}. \section{Artin groups of type $D_n$ inside quotients of Artin groups of type $B_n$} \subsection{Coxeter groups and Artin groups} This section is devoted to recalling basic facts on Coxeter groups and their Artin groups. We refer the reader to \cite{Bou, Hum} or \cite{BjBr} for more on the topic. A \defn{Coxeter system} $(W,S)$ is a group $W$ generated by a set $S$ of involutions subject to additional \defn{braid relations}, that is, relations of the form $st\cdots = ts\cdots$ for $s,t\in S$, $s\neq t$. Here $st\cdots$ denotes a strictly alternating product of $s$ and $t$, and the number $m_{st}$ of factors in the left hand side equals the number $m_{ts}$ of factors in the right hand side. We have $m_{st}\in\{2, 3,\dots\}\cup\{\infty\}$, the case $m_{st}=\infty$ meaning that there is no relation between $s$ and $t$. Let $\ell:W\rightarrow \mathbb{Z}_{\geq 0}$ be the length function with respect to the set of generators $S$.
Finite irreducible Coxeter groups are classified into four infinite families of types $A_n$, $B_n$, $D_n$, $I_2(m)$ and six exceptional groups of types $E_6, E_7, E_8, F_4, H_3, H_4$. If $X$ is a given type, we denote by $(W_X, S_X)$ a Coxeter system of this type. The \defn{Artin group} $A_W$ attached to the Coxeter system $(W,S)$ is generated by a copy $\bS$ of the elements of $S$, subject only to the braid relations. This gives rise to a canonical surjection $\pi: A_W\twoheadrightarrow W$ induced by $\bs\mapsto s$. If $W$ has type $X$, we simply denote $A_W$ by $A_X$. The canonical map $\pi$ has a set-theoretic section $W\hookrightarrow A_W$ built as follows: let $w=s_1 s_2\cdots s_k$ be a reduced expression for $w$, that is, we have $s_i\in S$ for all $i=1,\dots, k$ and $k=\ell(w)$. Then the lift $\bs_1 \bs_2\cdots \bs_k$ in $A_W$ is independent of the chosen reduced expression, and we therefore denote it by $\mathbf{w}$. This is a consequence of the fact that in every Coxeter group, one can pass from any reduced expression of a fixed element $w$ to any other just by applying a sequence of braid relations. The element $\mathbf{w}$ is the \defn{canonical positive lift} of $w$. \subsection{Embeddings of Coxeter groups}\label{emb:cox} Let $(W_{B_n}, S_{B_n})$ be a Coxeter system of type $B_n$. We will identify it with the signed permutation group as follows: let $S_{-n,n}$ be the group of permutations of $[-n,n]=\{-n,-n+1,\dots, -1,1,\dots, n\}$ and define $$W_{B_n}:=\{w\in S_{-n,n}~|~w(-i)=-w(i),~\mbox{for all}~ i\in[-n,n]\}.$$ Then setting $s_0:=(-1, 1)$ and $s_i=(i, i+1)(-i, -i-1)$ for all $i=1, \dots, n-1$ we get that $S_{B_n}=\{s_0, s_1,\dots, s_{n-1}\}$ is a simple system for $W_{B_n}$ (see \cite[Section 8.1]{BjBr}). Let $(W_{D_n}, S_{D_n})$ be a Coxeter system of type $D_n$.
Recall that $W_{D_n}$ can be realized as an index two subgroup of $W_{B_n}$ as follows: setting $t_0=s_0 s_1 s_0$, $t_i=s_i$ for all $i=1,\dots, n-1$ we have that $S_{D_n}:=\{t_0, t_1,\dots, t_{n-1}\}$ is a simple system for the Coxeter group $W_{D_n}=\left\langle t_0, t_1,\dots, t_{n-1}\right\rangle$ of type $D_n$ (see \cite[Section 8.2]{BjBr}). In the following, a Coxeter group of type $D_n$ will always be viewed inside $W_{B_n}$, with the above identifications. \subsection{Embeddings of Artin groups} We assume the reader to be familiar with Artin groups attached to Coxeter groups and refer to \cite[Chapter IX]{Garside} for basic results. Notice that there are two surjective maps $q_B: A_{B_n}\longrightarrow A_{A_{n-1}}$, $q_D: A_{D_n}\longrightarrow A_{A_{n-1}}$ defined as follows: if we denote by $\{\sigma_1, \dots, \sigma_{n-1}\}$ the set of standard Artin generators of the $n$-strand Artin braid group $A_{A_{n-1}}$, then $q_B(\mathbf{s}_0)=1$, $q_B(\mathbf{s}_i)=\sigma_i$ for $i\neq 0$, while $q_D(\mathbf{t}_0)=\sigma_1$, $q_D(\mathbf{t}_i)=\sigma_i$ for all $i\neq 0$ (see~\cite[Section 2.1]{CP}). Both maps $q_B$ and $q_D$ are split and one can write $A_{X_n}\cong \mathrm{ker}(q_X)\rtimes A_{A_{n-1}}$ for $X\in\{B,D\}$. Crisp and Paris showed that the embedding of $W_{D_n}$ in $W_{B_n}$ which we recalled in Subsection~\ref{emb:cox} does not come from an embedding $\varphi: A_{D_n}\longrightarrow A_{B_n}$ such that $q_D=q_B\circ\varphi$ \cite[Proposition 2.6]{CP}. In this section we show that there is an embedding of $A_{D_n}$ inside a quotient $\widetilde{A}_{B_n}$ of $A_{B_n}$; this embedding can be seen as a natural lift of the embedding of Coxeter groups and has the expected properties (see Lemma~\ref{lem:comp}). This is mostly a reformulation of results of Allcock \cite[Sections 2 and 4]{Allcock}, but we will give a simple algebraic proof of this fact here. 
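As a sanity check on these identifications, one can verify the smallest case directly; the following computation is ours, added for illustration. \begin{example} For $n=2$, we have $s_0=(-1,1)$ and $s_1=(1,2)(-1,-2)$ in $W_{B_2}$. One computes that $s_0s_1=(1,2,-1,-2)$ is a $4$-cycle, so $s_0s_1$ has order $4$, matching $m_{s_0s_1}=4$ in type $B_2$. Moreover, $$t_0=s_0s_1s_0=(1,-2)(-1,2),$$ the signed transposition sending $1\mapsto -2$ and $2\mapsto -1$, and one checks that $t_0t_1=t_1t_0=-\mathrm{id}$, in accordance with the fact that $W_{D_2}$ has type $A_1\times A_1$. \end{example}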
\begin{definition} Define $\widetilde{A}_{B_n}$ to be the quotient of $A_{B_n}$ by the smallest normal subgroup containing $\bs_0^2$. \end{definition} It follows immediately from this definition that the canonical map $\pi_n: A_{B_n}\twoheadrightarrow W_{B_n}$ factors through $\widetilde{A}_{B_n}$ via two surjective maps $\pi_{n,1}: A_{B_n}\twoheadrightarrow \widetilde{A}_{B_n}$ and $\pi_{n,2}: \widetilde{A}_{B_n}\twoheadrightarrow W_{B_n}$. \begin{rmq} In \cite[Definition 3.3]{MS}, a similar group, called the \textit{middle group}, is considered. It is defined as the quotient of $A_{B_n}$ by the smallest normal subgroup containing $\bs_1^2$ (as a consequence, every $\bs_i^2$ for $i\geq 1$ is equal to $1$ in the quotient since $\bs_i$ lies in the same conjugacy class as $\bs_1$). \end{rmq} Denote by $s_i'$, $i=0,\dots, n-1$ the image of $\mathbf{s}_i\in A_{B_n}$ in $\widetilde{A}_{B_n}$, for all $s_i\in S_{B_n}$. Set $t_0'= s_0' s_1' s_0'$ and $t_i'= s_i'$ for $i=1, \dots, n-1$. \begin{lemma}\label{RelationsDn} The elements $t_0', t_1',\dots t_{n-1}'$ satisfy the braid relations of type $D_n$, that is, we have $$t_0' t_1'=t_1' t_0', ~t_0' t_2' t_0'= t_2' t_0' t_2', ~t_i ' t_{i+1}' t_i'= t_{i+1}' t_i' t_{i+1}'~\mbox{for all}~ i=1, \dots, n-2,$$ $$t_i't_j'=t_j' t_i'\text{ if }|i-j|>1\text{ and }\{i,j\}\neq\{0, 2\}.$$ \end{lemma} \begin{proof} All the relations except the second one are immediate consequences of the type $B_n$ braid relations satisfied by the $s_0', s_1', \dots, s_{n-1}'$. For the second relation we have \begin{eqnarray*} t_0' t_2' t_0' &=& s_0' s_1' s_0' s_2' s_0' s_1' s_0'=s_0' s_1' s_0'^2 s_2' s_1' s_0'= s_0' s_1' s_2' s_1' s_0'=s_0' s_2' s_1' s_2' s_0'\\ &=& s_2' s_0' s_1' s_0' s_2'= t_2' t_0' t_2'. \end{eqnarray*} \end{proof} An immediate corollary is \begin{corollary} There is a group homomorphism $\iota_n: A_{D_n}\longrightarrow \widetilde{A}_{B_n}$ defined by $\iota_n(\mathbf{t}_i)=t_i'$ for all $i=0, \dots, n-1$. 
\end{corollary} The maps defined so far fit into the following commutative diagram. \begin{lemma}\label{lem:diag} There is a commutative diagram $$\xymatrix{ A_{B_n} \ar@{->>}[rd]_{\pi_{n}^B} \ar@{->>}[r]^{\pi_{n,1}} & \relax \widetilde{A}_{B_n} \ar@{->>}[d]^{\pi_{n,2}} & \langle t_0', \dots, t_{n-1}'\rangle \ar@{^{(}->}[l] \ar@{->>}[d]_{\pi_{n}^D} & A_{D_n} \ar@{->>}[l] \\ ~ & W_{B_n} & W_{D_n} \ar@{^{(}->}[l] & ~ }$$ \noindent where $\pi_n^D: \langle t_0', \dots, t_{n-1}'\rangle \longrightarrow W_{D_n}$ is defined by $\pi_n^D(t_i')=t_i$ for all $i=0,\dots, n-1$. \end{lemma} \begin{rmq} In Proposition~\ref{prop:quotient} below we will show that the map $\iota_n$ is injective; hence $\pi_n^D$ is in fact simply the canonical surjection $A_{D_n}\twoheadrightarrow W_{D_n}$. \end{rmq} \begin{proof} We have to show that the composition of $\pi_{n,2}$ and $\langle t_0',\dots, t_{n-1}'\rangle\hookrightarrow \widetilde{A}_{B_n}$ factors through $W_{D_n}$. It suffices to show that the image of $t_i'$ under this composition is precisely $t_i$ (viewed inside $W_{B_n}$ via the embedding $W_{D_n}\hookrightarrow W_{B_n}$) for all $i=0,\dots, n-1$, which is immediate. \end{proof} \begin{proposition}\label{prop:quotient} The homomorphism $\iota_n$ is injective and $\langle t_0', \ldots , t_{n-1}'\rangle$ is a subgroup of $\widetilde{A}_{B_n}$ of index two. Hence $A_{D_n}$ can be identified with the subgroup of $\widetilde{A}_{B_n}$ generated by the $t_i'$, $i=0, \dots, n-1$. \end{proposition} \begin{proof} We first notice that, as an immediate consequence of Lemma~\ref{lem:diag}, the subgroup $U:= \langle t_0', \ldots , t_{n-1}'\rangle\subseteq \widetilde{A}_{B_n}$ is proper since $W_{D_n}$ is a proper subgroup of $W_{B_n}$. As conjugation by $s_0'$ interchanges $t_0'$ and $t_1'$ (indeed $s_0' t_0' s_0'=t_1'$), and as $s_0'$ commutes with $t_i'$ for $i = 2, \ldots, n-1$, the involution $s_0'$ normalizes $U$ and induces on $U$ an automorphism of order $2$ (which is in fact an outer automorphism).
Therefore $U\cup Us_0'$ is a subgroup of $\widetilde{A}_{B_n}$, and since it contains all the generators $s_i'$ it is the whole group; hence $U = \iota_n(A_{D_n})$ is of index $2$ in $\widetilde{A}_{B_n}$. Next we determine a presentation of $U$ using the Reidemeister-Schreier algorithm (see for instance \cite{LS}). We take as a Schreier transversal $T:=\{1, s_0'\}$ for the right cosets of $U$ in $\widetilde{A}_{B_n}$. This yields the generating set $$\{ts_i' \overline{ts_i'}^{-1}~|~ t \in T~\mbox{and}~0 \leq i \leq n-1\} = \{t_i'~|~0 \leq i \leq n-1\}$$ where $\overline{x}$ is the representative of $Ux$ in $T$ for $x \in \widetilde{A}_{B_n}$ (the generators $ts_0' \overline{ts_0'}^{-1}$, which are trivial, are discarded). Application of this algorithm and of Tietze transformations (see \cite{LS}) then precisely yields the braid relations as stated in Lemma~\ref{RelationsDn}. This shows that $\iota_n$ is injective. \end{proof} From now on we identify the subgroup $\langle t_0', t_1',\dots, t_{n-1}'\rangle\subseteq\widetilde{A}_{B_n}$ with $A_{D_n}$ and we set $\mathbf{t}_i = t_i'$ for all $i=0,\dots, n-1$. Note that by definition of $\widetilde{A}_{B_n}$, the map $q_B$ factors through $\widetilde{A}_{B_n}$, giving rise to a surjection $\widetilde{q}_B: \widetilde{A}_{B_n}\longrightarrow A_{A_{n-1}}$. Then we have \begin{lemma}\label{lem:comp} The map $\iota_n$ satisfies $\widetilde{q}_B\circ \iota_n= q_D$. \end{lemma} \begin{proof} We have $q_D(\mathbf{t}_0)= \sigma_1$ and $(\widetilde{q}_B\circ \iota_n)(\mathbf{t}_0)= \widetilde{q}_B(s_0' s_1' s_0')= q_B(\bs_0) q_B (\bs_1) q_B(\bs_0)= q_B(\bs_1)=\sigma_1$. For $i\geq 1$ we have $q_D(\bt_i)=\sigma_i= q_B(\bs_i)= \widetilde{q}_B(s_i')=(\widetilde{q}_B\circ \iota_n)(\bt_i)$. \end{proof} \begin{definition} Given $x\in W_{D_n}$, we denote by $\mathbf{x}^D$ the canonical positive lift of $x$ in $A_{D_n}$ (which we will systematically view inside $\widetilde{A}_{B_n}$) and by $\mathbf{x}^B$ the canonical positive lift of $x$ in $A_{B_n}$. \end{definition} \begin{proposition}\label{prop:lifts} Let $x\in W_{D_n}$. We have $\pi_{n,1}(\mathbf{x}^B)=\mathbf{x}^D$.
\end{proposition} \begin{proof} Let $t_{i_1} t_{i_2}\cdots t_{i_k}$ be an $S_{D_n}$-reduced expression of $x$ in $W_{D_n}$. Replacing $t_0$ by $s_0 s_1 s_0$ and $t_i$ by $s_i$ for $i=1,\dots, n-1$ we get a word in the elements of $S_{B_n}$ representing $x$. Note that this may not be a reduced expression for $x$ in $W_{B_n}$. It suffices to show that one can transform the above word into a reduced expression for $x$ in $W_{B_n}$ just by applying braid relations of type $B_n$ and the relation $s_0^2=1$. We prove the above statement by induction on $k$. If $k=1$ then the claim holds since $t_i$, $i\geq 1$ is replaced by $s_i$ while $t_0$ is replaced by $s_0 s_1 s_0$ which is $S_{B_n}$-reduced. Hence assume that $k>1$. By induction the claim holds for $x'=t_{i_2}\cdots t_{i_k}$. By \cite[Propositions 8.1.2, 8.2.2]{BjBr} one has that $s_j$, $j\geq 1$ is a left descent of $x'$ in $W_{B_n}$ if and only if it is a left descent of $x'$ in $W_{D_n}$. Hence we can assume that $t_{i_1}=t_0$ and that it is the only left descent of $x$ in $W_{D_n}$. Firstly, assume that $s_0$ is a left descent of $x'$ in $W_{B_n}$, hence $s_0$ is not a left descent of $s_0 x'$. We claim that it suffices to show that $s_1$ is not a left descent of $s_0 x'$: indeed, this implies that $\ell(s_0 s_1 s_0 x')=\ell(s_0 x')+2$ (where $\ell$ is the length function in $W_{B_n}$) by the lifting property (see~\cite[Corollary 2.2.8(i)]{BjBr}). Moreover by induction we can get every $S_{B_n}$-reduced decomposition of $x'$ using only the claimed relations, hence we can by induction get a reduced expression for $x'$ starting with $s_0$ with these relations. The only additional relation to apply to get a reduced decomposition of $x$ is the deletion of the subword $s_0^2$ (using $s_0^2=1$) which appears when appending $s_0 s_1 s_0$ at the left of such a reduced expression of $x'$. Hence assume that $s_1 s_0 x' < s_0 x'$ in $W_{B_n}$, i.e., that $s_1$ is a left descent of $s_0 x'$.
By~\cite[Proposition 8.1.2]{BjBr} it follows that $x'^{-1} s_0 (1)>x'^{-1} s_0 (2)$ which implies that $x'^{-1}(-1) >x'^{-1}(2)$, hence $-x'^{-1}(2)>x'^{-1}(1)$. But by~\cite[Proposition 8.2.2]{BjBr} this precisely means that $t_0$ is a left descent of $x'$, a contradiction. Now assume that $s_0$ is not a left descent of $x'$ in $W_{B_n}$. Then $s_1$ is not a left descent of $x'$ in $W_{B_n}$, otherwise using \cite[Proposition 8.1.2]{BjBr} again it would be a left descent of $t_{i_1} x'$ in $W_{B_n}$, hence in $W_{D_n}$ by \cite[Proposition 8.2.2]{BjBr}, a contradiction. It follows that a reduced expression for $y=s_1 s_0 x'$ in $W_{B_n}$ is obtained by concatenating $s_1 s_0$ at the left of a reduced expression for $x'$ (which we can obtain by induction). If $s_0 y > y$ then we are done, while if $s_0 y < y$ then by Matsumoto's Lemma we can obtain a reduced expression of $y$ starting with $s_0$ just by applying type $B_n$ braid relations. Deleting the $s_0^2$ at the beginning of the word we then have a reduced expression of $x$. \end{proof} \begin{rmq} The fact that reduced expressions of an element $x\in W_{D_n}$ can be transformed into reduced expressions in $W_{B_n}$ as we did in the proof above had been noticed by Hoefsmit in his thesis~\cite[Section 2.3]{Hoef} without proof. The fact that $A_{D_n}$ can be realized as a subgroup of $\widetilde{A}_{B_n}$ also implies that the corresponding Iwahori-Hecke algebra $H(W_{D_n})$ of type $D_n$ embeds into the two-parameter Iwahori-Hecke algebra $H(W_{B_n})$ of type $B_n$ where the parameter corresponding to the conjugacy class of $s_0$ is specialized at $1$. This is precisely what Hoefsmit uses to study representations of Iwahori-Hecke algebras of type $D_n$ using the representation theory of those algebras in type $B_n$.
\end{rmq} \section{Mikado braids of type $B_n$ and $D_n$} \subsection{Mikado braids of type $B_n$}\label{typeB} We recall from \cite{DG} the following \begin{definition} Let $(W,S)$ be a finite Coxeter system with Artin group $A_W$. An element $\beta\in A_W$ is a \defn{Mikado braid} if there exist $x,y\in W$ such that $\beta=\mathbf{x}^{-1}\mathbf{y}$. We denote by $\mathrm{Mik}(W)$ (or $\mathrm{Mik}(X)$ if $W$ is of type $X$) the set of Mikado braids in $A_W$. \end{definition} We briefly recall results from \cite[Section 6.2]{DG} on topological realizations of Mikado braids in type $B_n$ which will be needed later on. The Artin group $A_{B_n}$ embeds into $A_{A_{2n-1}}$, which is isomorphic to the Artin braid group on $2n$ strands. Labeling the strands by $-n, \dots, -1, 1, \dots, n$, every simple generator in $S_{B_n}\subseteq S_{-n,n}$ is then lifted to an Artin braid as follows. The generator $\bs_0$ exchanges the strands $1$ and $-1$, while the generator $\bs_i$, $i=1,\dots, n-1$ exchanges the strands $i$ and $i+1$ as well as the strands $-i$ and $-i-1$ (in both crossings, the strand coming from the right passes over the strand coming from the left, like in the right picture in Figure~\ref{figure:ref1}). Those braids in $A_{A_{2n-1}}$ which are in $A_{B_n}$ are precisely those braids which are fixed by the automorphism replacing each crossing of the strands $i, i+1$ by the crossing of the strands $-i, -i-1$ of the same type, for all $i$. We call these braids \defn{symmetric}. There is the following graphical characterization of Mikado braids in $A_{B_n}$ \begin{theorem}[{\cite[Theorem 6.3]{DG}}]\label{thm:dg_b} Let $\beta\in A_{B_n}$. The following are equivalent \begin{enumerate} \item The braid $\beta$ is a Mikado braid, that is, there are $x, y\in W_{B_n}$ such that $\beta=\mathbf{x}^{-1}\mathbf{y}$.
\item There is an Artin braid in $A_{A_{2n-1}}$ representing $\beta$, such that one can inductively remove pairs of symmetric strands, one of the two strands being above all the other strands (so that the symmetric one is under all the other strands). \end{enumerate} \end{theorem} Note that in the second item above, we remove pairs of strands instead of single strands so that at each step of the process, the obtained braid is still symmetric (hence in $A_{B_n}$). \subsection{Mikado braids of type $D_n$ inside $\widetilde{A}_{B_n}$} The aim of this subsection is to prove the following result, relating Mikado braids of type $D_n$ to Mikado braids of type $B_n$: \begin{theorem}\label{thm:mikado_bd} The Mikado braids of type $D_n$ viewed inside $\widetilde{A}_{B_n}$ are precisely the images of those Mikado braids of type $B_n$ which surject onto elements of $W_{D_n}$, that is, we have $$\mathrm{Mik}(D_n)=\{ \pi_{n,1}(\beta)~|~\beta\in\mathrm{Mik}(B_n)~\text{and}~\pi_n^B(\beta)\in W_{D_n}\}.$$ \end{theorem} \begin{proof} Let $\gamma\in\mathrm{Mik}(D_n)\subseteq\widetilde{A}_{B_n}$. Then there exist $x,y\in W_{D_n}$ such that $\gamma=(\mathbf{x}^D)^{-1}\mathbf{y}^D$. Note that by Lemma~\ref{lem:diag} we have $\pi_{n,2}(\gamma)=x^{-1} y\in W_{D_n}$. But by Proposition~\ref{prop:lifts} we have $\gamma=\pi_{n,1}(\beta)$ where $\beta=(\mathbf{x}^B)^{-1}\mathbf{y}^B\in\mathrm{Mik}(B_n)$, which shows the first inclusion. Conversely, let $\beta\in \mathrm{Mik}(B_n)$ such that $\pi_{n}^B(\beta)\in W_{D_n}$. We have to show that $\pi_{n,1}(\beta)\in\mathrm{Mik}(D_n)$. By definition there are $x,y\in W_{B_n}$ such that $\beta=(\mathbf{x}^B)^{-1} \mathbf{y}^B$. Since $\pi_n^B(\beta)=x^{-1}y\in W_{D_n}$, if either $x$ or $y$ is in $W_{D_n}$ then both of them are in $W_{D_n}$ in which case we are done by Proposition~\ref{prop:lifts}. Hence assume that $x, y\notin W_{D_n}$. 
Since $W_{D_n}$ is a subgroup of $W_{B_n}$ of index two and $s_0\notin W_{D_n}$ there are $x', y'\in W_{D_n}$ such that $x=s_0 x'$, $y=s_0 y'$. It follows that $\mathbf{x}^B= \mathbf{s}_0^{\pm 1} \mathbf{x'}^B$ (the exponent depending on whether $s_0 x > x$ or not) and $\mathbf{y}^B=\mathbf{s}_0^{\pm 1} \mathbf{y'}^B$. Hence since the image of $\mathbf{s}_0$ in $\widetilde{A}_{B_n}$ has order two, using Proposition~\ref{prop:lifts} again we have $\pi_{n,1}(\beta)=(\mathbf{x'}^D)^{-1} \mathbf{y'}^D$, which concludes the proof. \end{proof} \section{Dual braid monoids} \subsection{Noncrossing partitions}Let $(W,S)$ be a Coxeter system of spherical type. Let $T=\bigcup_{w\in W} w S w^{-1}$ denote the set of reflections in $W$ and ${{\ell_T}}:W\longrightarrow \mathbb{Z}_{\geq 0}$ the corresponding length function. A \defn{standard Coxeter element} in $(W,S)$ is a product of all the elements of $S$, in some order. We define a partial order $\leq_T$ on $W$ by $$u\leq_T v\Leftrightarrow {{\ell_T}}(u)+{{\ell_T}}(u^{-1}v)={{\ell_T}}(v).$$ In this case we say that $u$ is a \defn{prefix} of $v$. Let $c$ be a standard Coxeter element. The set $\mathbb{N}C(W, c)$ of \defn{$c$-noncrossing partitions} consists of all the $x\in W$ such that $x\leq_T c$. The poset $(\mathbb{N}C(W,c),\leq_T)$ is a lattice, isomorphic to the lattice of noncrossing partitions when $W= W_{A_n} \cong \mathfrak{S}_{n+1}$. See~\cite{Arm} for more on the topic. \begin{rmq} There are several (inequivalent) definitions of Coxeter elements (see for instance~\cite[Section 2.2]{BGRW}). The above definitions still make sense for more general Coxeter elements, but for the realization of the dual braid monoids (which are introduced in the next section) inside Artin groups the Coxeter element is required to be standard (see~\cite[Remark 5.11]{DG}). \end{rmq} \subsection{Dual braid monoids}\label{Sub:DualBraid} We recall the definition and properties of dual braid monoids.
For a detailed introduction to the topic the reader is referred to \cite{Dual,DG} or \cite{Garside}. Dual braid monoids were introduced by Bessis \cite{Dual}, generalizing definitions of Birman, Ko and Lee \cite{BKL} and Bessis, Digne and Michel \cite{BDM} to all the spherical types. Let $(W, S)$ be a finite Coxeter system. Denote by $T$ the set of reflections in $W$ and by $A_W$ the corresponding Artin-Tits group. Let $c$ be a standard Coxeter element in $W$. Bessis defined the \defn{dual braid monoid} attached to the triple $(W, T, c)$ as follows. Take as generating set a copy $T_c:= \{t_c~|~t \in T\}$ of $T$ and set $$B_c^*:=\langle t_c\in T_c~ |~t_c t'_c= (tt't)_c t_c~\text{if }tt'\leq_T c\rangle.$$ The defining relations of $B_c^*$ are called the \defn{dual braid relations} with respect to $c$. We mention some properties of $B_c^*$, which can be found in \cite{Dual}. The monoid $B_c^*$ is infinite and embeds into $A_W$. In fact, $B_c^*$ is a Garside monoid, hence it embeds into its group of fractions $\mathrm{Frac}(B_c^*)$ and the word problem in $\mathrm{Frac}(B_c^*)$ is solvable. Bessis showed that $\mathrm{Frac}(B_c^*)$ is isomorphic to $A_W$, but his proof requires a case-by-case analysis (see \cite[Fact 2.2.4]{Dual}) and the isomorphism is difficult to understand explicitly. More precisely, the embedding $B_c^*\subseteq A_W$ sends $s_c$ to $\mathbf{s}$ for every $s\in S$. In \cite[Proposition 3.13]{DG}, a formula for the elements of $T_c$ (which are the atoms of the monoid $B_c^*$) as products of the Artin generators is given, but it does not give in general a braid word of shortest possible length. \begin{exple}\label{ex:dual} Let $(W,S)$ be of type $A_2$ and $c\in W$ be the Coxeter element $s_1 s_2$ where $s_i=(i,i+1)$. Then we have the dual braid relation $(s_1)_c (s_2)_c=(s_1 s_2 s_1)_c (s_1)_c$.
Hence inside $A_W$, the atom $(s_1 s_2 s_1)_c$ corresponding to the non-simple reflection $s_1 s_2 s_1$ is equal to $\mathbf{s}_1\mathbf{s}_2 \mathbf{s}_1^{-1}$. \end{exple} Like every Garside monoid, $B_c^*$ has a finite set of \defn{simple elements}, which form a lattice under left divisibility. They are defined as follows. For $x\in \mathbb{N}C(W,c)$, let $x=t_1 t_2\cdots t_k$ be a \defn{$T$-reduced expression} of $x$, that is, a reduced expression as a product of reflections. Bessis showed that, as a consequence of a dual Matsumoto property \cite[Section 1.6]{Dual}, the element $x_c:= (t_1)_c (t_2)_c\cdots (t_k)_c\in B_c^*$ is independent of the choice of the $T$-reduced expression of $x$ and is therefore well-defined. The Garside element is the lift $c_c$ of $c$ and the set $\mathrm{Div}(c)$ of simple elements (that is, of (left) divisors of $c_c$) is given by $\mathrm{Div}(c):=\{ x_c~|~ x\in\mathbb{N}C(W,c)\}$. There is an isomorphism of posets $(\mathbb{N}C(W,c),\leq_T)\cong(\mathrm{Div}(c),\leq), x\mapsto x_c$, where $\leq$ is the left-divisibility order in $B_c^*$. In general, we are only able to determine the elements of $\mathrm{Div}(c)$ as words in the classical Artin generators $\mathbf{S}$ of $A_W$ by an inductive application of the dual braid relations. It is therefore difficult to study properties of elements of $\mathrm{Div}(c)$ viewed inside $A_W$. Note that the composition $B_c^*\hookrightarrow A_W\twoheadrightarrow W$ sends every product $(t_1)_c (t_2)_c\cdots (t_k)_c$, $t_i\in T$ to $t_1 t_2\cdots t_k$. \subsection{Standard Coxeter elements in $W_{D_n}$}\label{sec:cox} In this subsection, we characterize standard Coxeter elements in $W_{D_n}$ in terms of signed permutations. This will be needed to introduce graphical representations of $c$-noncrossing partitions of type $D_n$ in Section~\ref{graphical}. Recall that $W_{D_n} \subseteq W_{B_n}$ and that $w(-i)=-w(i)$ for all $i\in[-n,n]$ and all $w \in W_{B_n}$.
In $W_{B_n}$, cycles of the form $(i_1, \dots, i_r, -i_1, \dots, -i_r)$ are abbreviated by $[i_1, \dots, i_r]$ and called \defn{balanced cycles}, and those of the form $(i_1, \dots, i_r)( -i_1, \dots, -i_r)$ by $((i_1, \dots, i_r ))$ and called \defn{paired cycles}. The set of reflections in $W_{D_n}$ is $$T:= T_{D_n} :=\{ (i,j)(-i,-j) \mid i, j\in\{-n, \dots, n\}, i\neq \pm j\},$$ and every $w \in W_{D_n}$ can be written as a product of disjoint cycles in which there is an even number of balanced cycles (see~\cite[Section 2]{AR}). \begin{lemma}\label{lem:std} An element $c\in W_{D_n}$ ($n\geq 3$) is a standard Coxeter element if and only if $c=(i_1, -i_1)(i_2, \dots, i_{n}, -i_2, \dots, -i_{n})$ where $\{ i_1, \dots, i_{n}\}=\{1, 2, 3, \dots, n\}$, $i_1\in\{1,2\}$ and the sequence $i_2\cdots i_{n}$ is first increasing, then decreasing. \end{lemma} \begin{proof} The proof is by induction on $n$. The case $n=3$ is easy to check by hand. Let $c$ be a standard Coxeter element in $W_{D_n}$, $n\geq 4$. Then either $s_{n-1} c$ or $c s_{n-1}$ is a standard Coxeter element in $W_{D_{n-1}}$, in which case induction and a straightforward computation show that $c$ is of the required form. Conversely if $c$ is of the above form, then since $(i_1, -i_1)$ commutes with $s_{n-1}$ either $s_{n-1} c$ or $c s_{n-1}$ is of the above form in $W_{D_{n-1}}$, hence is a standard Coxeter element in $W_{D_{n-1}}$, implying that $c$ is a standard Coxeter element in $W_{D_n}$. \end{proof} Elements in $\mathbb{N}C(W_{D_n}, c)$ will be described below via a graphical representation. \section{Simple dual braids of type $D_n$ are Mikado braids}\label{main} The aim of this section is to show Theorem~\ref{theorem:main}, that is, that simple dual braids of type $D_n$ are Mikado braids. \subsection{Outline of the proof} The proof proceeds as follows.
\begin{itemize} \item\textbf{Step 1.} We describe in Section~\ref{graphical} a pictorial model for the elements $x\in\mathbb{N}C(W_{D_n}, c)$ which is due to Athanasiadis and Reiner \cite{AR}. In this model the element $x$ is represented by a diagram consisting of non-intersecting polygons joining labeled points on a circle. The labeling depends on the choice of the standard Coxeter element $c$; more precisely, we first need to write the Coxeter element as a signed permutation (as in Lemma~\ref{lem:std}). \item\textbf{Step 2.} We slightly modify the diagram from Step $1$ associated to $x\in\mathbb{N}C(W_{D_n}, c)$ to obtain a new diagram $N_x$ consisting of non-intersecting polygons joining labeled points on a circle. The only difference with the Athanasiadis-Reiner model is that there is a point with two labels in the latter, which we split into two different points. As we will see, the diagram $N_x$ is not unique in general, but we will show that all the information which we will use from the diagram $N_x$ is independent of the chosen diagram representing $x$. From this new diagram $N_x$, we build a topological braid $\beta_x$ lying in an Artin group $A_{B_n}$ of type $B_n$ (viewed inside $A_{A_{2n-1}}$, hence $\beta_x$ is a symmetric braid on $2n$ strands). We first explain how to define the diagram $N_x$ for elements of $T_{D_n}\subseteq\mathbb{N}C(W_{D_n}, c)$ and we then do it for all $x\in\mathbb{N}C(W_{D_n}, c)$. \item\textbf{Step 3.} We show that the braids $\pi_{n, 1}(\beta_t)\in \widetilde{A}_{B_n}$, for $t\in T_{D_n}$, lie in $A_{D_n}$ and satisfy the dual braid relations with respect to $c$. This will follow from the more general statement that if $x \leq_T xt\leq_T c$ with $t\in T_{D_n}$, then $\pi_{n, 1}(\beta_x) \pi_{n, 1}(\beta_t)=\pi_{n,1}(\beta_{xt})$.
This property and the fact that $\pi_{n, 1}(\beta_s)=\mathbf{s}$ for all $s\in S_{D_n}$ will be enough to conclude that $\pi_{n, 1}(\beta_x)$ is equal to the simple dual braid $x_c$ for all $x\in \mathbb{N}C(W_{D_n}, c)$ (this is explained in the proof of Corollary~\ref{cor:sdb}). In particular we also show that $\pi_{n,1}(\beta_x)$ does not depend on the choice of the diagram $N_x$. \item\textbf{Step 4.} We show that the braid $\beta_x$, $x\in \mathbb{N}C(W_{D_n}, c)$ is a Mikado braid in $A_{B_n}$ by using the topological characterization of \cite{DG}. Recall that $\beta_x$ is defined graphically, as an Artin braid on $2n$ strands. Together with Step $3$ and Theorem~\ref{thm:mikado_bd}, it follows that $x_c=\pi_{n, 1}(\beta_x)$ is a Mikado braid, which proves Theorem~\ref{theorem:main}. \end{itemize} \subsection{Graphical model for noncrossing partitions}\label{graphical} Athanasiadis and Reiner found a graphical model for noncrossing partitions of type $D_n$. We present it here (with slightly different conventions). First we explain how to label a circle depending on the choice of the standard Coxeter element $c$. Given a standard Coxeter element $c=(i_1, -i_1)(i_2, \dots, i_{n}, -i_2, \dots, -i_{n})$ in $W_{D_n}$, where the notation is as in Lemma~\ref{lem:std} (after cyclically rotating the second cycle we may moreover assume that $i_2=-n$), we place $2n-2$ points (labeled by $i_2, \dots, i_n, -i_2, \dots, -i_n$) on a circle as follows: point $-n$ is at the top of the circle while point $n$ is at the bottom. The remaining points all have distinct heights depending on their labels: if $i<j$ then point $i$ is higher than point $j$. Moreover, when going along the circle in clockwise order starting at $i_2=-n$, the points must be met in the order $i_2 i_3 \cdots i_n (-i_2) (-i_3)\cdots (-i_n)$. Finally, we add a point at the center of the circle, labeled by $\pm i_1$.
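The labeling just described depends on writing $c$ in the normal form of Lemma~\ref{lem:std}. As an illustrative aside (not part of the paper's arguments; the tuple encoding of signed permutations is an ad hoc choice of ours), that characterization can be confirmed exhaustively in rank $4$ by comparing the $4!$ products of the simple generators with the set of signed permutations of the claimed form:

```python
from itertools import permutations

# Signed permutations of {±1,...,±4} encoded as tuples (w(1),...,w(4)).
def compose(u, v):
    img = lambda w, i: w[i - 1] if i > 0 else -w[-i - 1]
    return tuple(img(u, img(v, i)) for i in range(1, len(u) + 1))

n = 4
e = tuple(range(1, n + 1))
s = [(-1, 2, 3, 4)]                       # s_0: sign change in position 1
for i in range(1, n):
    w = list(e); w[i - 1], w[i] = w[i], w[i - 1]
    s.append(tuple(w))                    # s_i = (i, i+1)(-i, -i-1)
t = [compose(s[0], compose(s[1], s[0]))] + s[1:]   # simple system of D_4

# all standard Coxeter elements: products of the n simple generators
coxeter = set()
for p in permutations(range(n)):
    w = e
    for i in p:
        w = compose(w, t[i])
    coxeter.add(w)

def unimodal(seq):
    # first increasing, then decreasing (monotone sequences allowed)
    peak = seq.index(max(seq))
    return (all(seq[k] < seq[k + 1] for k in range(peak)) and
            all(seq[k] > seq[k + 1] for k in range(peak, len(seq) - 1)))

# elements (i_1,-i_1)(i_2,...,i_n,-i_2,...,-i_n) as in the lemma
claimed = set()
for i1 in (1, 2):
    rest = [k for k in range(1, n + 1) if k != i1]
    for seq in permutations(rest):
        if not unimodal(list(seq)):
            continue
        c = [0] * n
        c[i1 - 1] = -i1
        cyc = list(seq) + [-k for k in seq]
        for a, b in zip(cyc, cyc[1:] + cyc[:1]):
            if a > 0:
                c[a - 1] = b      # c(a) = b; negative positions follow by symmetry
        claimed.add(tuple(c))
```

For instance, the product $t_0t_1t_2t_3$ gives the signed permutation $(1,-1)(2,3,4,-2,-3,-4)$, which is of the claimed form with $i_1=1$.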
Athanasiadis and Reiner showed that $c$-noncrossing partitions are those for which there exists a graphical representation as follows (in their description, we have $i_1= n$; this corresponds to a choice of Coxeter element which is not standard; however, by conjugation we can assume it to be standard and to have $i_1\in\{ 1, 2\}$, since the $c$-noncrossing partition lattices are isomorphic for all Coxeter elements $c$). Given $x\in\mathbb{N}C(W_{D_n}, c)$, consider its cycle decomposition inside $S_{-n,n}$ and associate to each cycle the polygon given by the convex hull of the points labeled by elements in the support of the cycle. It results in a noncrossing diagram, i.e., the various obtained polygons do not intersect, with two possible exceptions: if there is a polygon $Q$ of $x$ with $i_1\in Q$, $-i_1\notin Q$, then $-Q$ is also a polygon of $x$. Thus the two polygons $Q$ and $-Q$ will have the middle point in common (note that since $x$ is a signed permutation, for every polygon $P$ of $x$ we have that $-P$ is also a polygon of $x$, possibly with $P=-P$). The second case appears when the decomposition of $x$ has a product of factors of the form $[j][i_1]$ for some $j\neq \pm i_1$. In this case, to avoid confusion with the noncrossing representation of the reflection $((j, i_1))$ (or $((j, -i_1))$) we have to choose an alternative way of representing this product. Note that the cycle $[j]$ should be considered as a polygon $P$ such that $P=-P$. By analogy with the situation where there is such a polygon and where the point $\pm i_1$ lies inside $P$, we represent $[j]$ by two curves both joining $j$ to $-j$ and not intersecting except at the points $\pm j$, in such a way that the point $\pm i_1$ lies between these two curves.
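As an illustrative cross-check of the model (again with an ad hoc tuple encoding of signed permutations; this computation is not part of the paper), enumerating the absolute order on $W_{D_3}$ by brute force recovers $|\mathbb{N}C(W_{D_3}, c)| = 14$, as expected since $W_{D_3}\cong W_{A_3}\cong \mathfrak{S}_4$ and the noncrossing partition lattice of $\mathfrak{S}_4$ has $14$ elements:

```python
# Brute-force enumeration of NC(W_{D_3}, c); signed permutations are
# encoded as tuples (w(1), w(2), w(3)) with w(-i) = -w(i) implied.
def compose(u, v):
    img = lambda w, i: w[i - 1] if i > 0 else -w[-i - 1]
    return tuple(img(u, img(v, i)) for i in range(1, len(u) + 1))

def inverse(w):
    inv = [0] * len(w)
    for i, wi in enumerate(w, start=1):
        inv[abs(wi) - 1] = i if wi > 0 else -i
    return tuple(inv)

n = 3
e = tuple(range(1, n + 1))
# reflections (i,j)(-i,-j) and (i,-j)(-i,j) for 1 <= i < j <= n
refs = []
for i in range(1, n + 1):
    for j in range(i + 1, n + 1):
        for sign in (1, -1):
            w = list(e)
            w[i - 1], w[j - 1] = sign * j, sign * i
            refs.append(tuple(w))

# absolute length = distance from the identity in the Cayley graph w.r.t. refs
ellT = {e: 0}
frontier = [e]
while frontier:
    nxt = []
    for w in frontier:
        for r in refs:
            x = compose(r, w)
            if x not in ellT:
                ellT[x] = ellT[w] + 1
                nxt.append(x)
    frontier = nxt

t0, t1, t2 = (-2, -1, 3), (2, 1, 3), (1, 3, 2)
c = compose(t0, compose(t1, t2))          # a standard Coxeter element
nc = [x for x in ellT
      if ellT[x] + ellT[compose(inverse(x), c)] == ellT[c]]
```

Here `nc` collects exactly the prefixes of $c$ in the order $\leq_T$, i.e. the elements of $\mathbb{N}C(W_{D_3}, c)$.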
Conversely, to every noncrossing diagram with the above properties, one can associate an element $x$ of $\mathbb{N}C(W_{D_n},c)$ as follows: we send each polygon $P$ with labels $j_1, j_2, \dots, j_k$ (read in clockwise order) to the cycle $(j_1, j_2, \dots, j_k)$, except in case $P=-P$. Each single point with label $i$ is sent to the one-cycle $(i)$, except $i_1$ in case there is a polygon $P$ with $P=-P$ (in which case $\pm i_1$ lie inside $P$). In this last case, if $P$ is labeled by $j_1,j_2,\dots,j_k$ then we send it to the product of cycles $(i_1, -i_1)(j_1,j_2,\dots,j_k)$ (like in the middle example of Figure~\ref{figure:ref4}). The element $x$ is then the product of all the cycles associated to all the polygons of the noncrossing diagram (note that they are disjoint). Note that when the middle point lies in two different polygons, one has to specify in which polygon the label $i_1$ lies. Examples are given in Figure~\ref{figure:ref4} and we refer to~\cite{AR} for more details. \begin{figure} \caption{Examples of noncrossing diagrams for $x_1=((1,-8))((7,5,-2))$, $x_2=((8,7,5))[6,3,1][2]$, $x_3=((6,3,-4))\in\mathbb{N}C(W_{D_8},c)$.} \label{figure:ref4} \end{figure} \subsection{The diagram $N_x$ and the braid $\beta_x$} To define the diagram $N_x$, we slightly modify the labeling of the circle given in the previous section by splitting the point $\pm i_1$ into two points placed on the vertical axis of the circle consistently with their labels (all the points should be placed such that the point $i$ is higher than the point $j$ if $i<j$). An example is given in Figure~\ref{figure:ordre} and we call this labeling the \defn{$c$-labeling} of the circle. The idea is then to start from Athanasiadis and Reiner's graphical representation of $x\in\mathbb{N}C(W_{D_n}, c)$ and just split the middle point into two points.
For convenience we may represent the polygons by curvilinear polygons, since in some cases, because of the splitting, it might not be possible to draw the polygons without intersections. Depending on the situation we will add an edge joining the two points $i_1$ and $-i_1$: we explain in more detail below how to draw the diagrams $N_x$, first when $x$ is a reflection, then in general. \begin{figure}\end{figure} \subsubsection{Pictures for reflections}\label{pic:ref} Reflections are all of the form $t= c_1 c_2$, where $c_1$ and $c_2$ are two $2$-cycles with opposite support. If $c_1=(i,j)$, we will draw a curvilinear ``polygon'' with two edges both joining $i$ to $j$. We then orient the polygon in counterclockwise order. We do the same for $c_2=(-i,-j)$ in such a way that the second curvilinear polygon does not intersect the first one. In some cases, there is not a unique way of drawing two such curvilinear polygons with the condition that the resulting diagram should be noncrossing. We explain how to do it below by separating the set of reflections into three classes. Firstly, assume that ${\sf{supp}}(c_1)=\{i, j\}\subseteq \{1, \dots, n\}$, then $N_t$ is drawn as in the left picture of Figure~\ref{figure:ref1}. Now assume that ${\sf{supp}}(c_1)=\{i, j\}$ with $i\in \{1,\dots, n\}\backslash \{i_1\}$, $j\in\{-1,\dots, -n\}\backslash \{-i_1\}$. In that case, we draw the two curvilinear polygons in such a way that the two middle points labeled by $\pm i_1$ lie between them, as done in Figure~\ref{figure:ref2}. The last case is when $c_1=(i_1, j)$ with $j\in\{-1,\dots,-n\}\backslash\{-i_1\}$. In that case, there are two ways of drawing the curvilinear polygon (see the left pictures of Figure~\ref{figure:ref3}). We can choose either of the two pictures for $N_t$.
Starting from such a noncrossing diagram, we then associate an Artin braid $\beta_t$ on $2n$ strands to it, by first projecting the noncrossing diagram to the right (as done in the left pictures of Figures~\ref{figure:ref1} and \ref{figure:ref2}), i.e., putting all the points on the same vertical line, obtaining a new graph for the noncrossing partition. This new graph can then be viewed as a braid diagram, viewed from the bottom: a curve joining point $k$ to point $\ell$ corresponds to the $k$-th strand ending at $\ell$, while single points without a curve starting or ending at them correspond to unbraided strands. If a point has nothing to its right (resp. to its left), it means that the corresponding unbraided strand is above all the others (resp. below all the others). The points lying to the right (resp. to the left) of a curve correspond to an unbraided strand lying above (resp. below) the strand corresponding to that curve. See the above-mentioned figures. Note that in the case of Figure~\ref{figure:ref3}, the two braids $\beta_t$ obtained from the two different diagrams $N_t$ are distinct in $A_{B_n}$, but their images $\pi_{n, 1}(\beta_t)$ in the quotient $\widetilde{A}_{B_n}$ are the same, since we can invert the crossings corresponding to the generator $\mathbf{s}_0$ (the relation $\mathbf{s}_0^2=1$ holds in the quotient). \begin{figure} \caption{The diagram $N_t$ for $t=(3,6)(-3,-6)$ and the braid $\beta_t$.} \label{figure:ref1} \end{figure} \begin{figure} \caption{The diagram $N_t$ for $t=(3,-5)(-3,5)$ and the braid $\beta_t$.} \label{figure:ref2} \end{figure} \begin{figure} \caption{Two diagrams $N_t$ for $t=(2,-7)(-2,7)$ and the corresponding Artin braids $\beta_t$.
Note that the two Artin braids on the right are equal in the quotient $\widetilde{A}_{B_n}$.} \label{figure:ref3} \end{figure} We now generalize the above pictorial process, by associating a (possibly non-unique) noncrossing diagram $N_x$ and an Artin braid $\beta_x$ to \textit{every} $x\in\mathbb{N}C(W_{D_n}, c)$. \subsubsection{Pictures for noncrossing partitions}\label{pic:all} To obtain a noncrossing diagram $N_x$ with oriented curvilinear polygons from $x$ as we did for reflections in the previous section, we proceed as follows: we orient every polygon of the noncrossing partition in counterclockwise order (note that this is the opposite orientation to the one given by the corresponding cycle of $x$, that is, an arrow $j_2\rightarrow j_1$ means that the cycle of $x$ sends $j_1$ to $j_2$; hence this orientation corresponds to $x^{-1}$). Polygons reduced to a single edge are replaced by curvilinear polygons with two edges as we did for reflections in Section~\ref{pic:ref}. Again we split the point with labels $\pm i_1$ into two points with labels $-i_1$ and $i_1$ respectively, as in Figure~\ref{figure:ordre}. In the case where the middle point in the Athanasiadis-Reiner model has no edge starting at it and does not lie inside a symmetric polygon, the two points $i_1$ and $-i_1$ have no edge starting at them in the new diagram. In the case where there are two distinct polygons $P$ and $-P$ sharing the middle point, they are separated so that each point lies in the correct curvilinear polygon (see Figure~\ref{figure:split}): there might be several non-isotopic diagrams which work when separating $P$ from $-P$ (in case $P$ is a $2$-cycle we precisely get what we already noticed and explained in Figure~\ref{figure:ref3}). A similar argument to the one given in Figure~\ref{figure:ref3} shows that the images in $\widetilde{A}_{B_n}$ of the various Artin braids $\beta_x$ obtained from the distinct diagrams $N_x$ at the end of the process explained below will be equal.
In case there is a symmetric polygon $P=-P$ or a factor $[j][i_1]$ in $x$, we add a curvilinear polygon with two edges joining $-i_1$ to $i_1$, oriented in counterclockwise order (recall that in the noncrossing representation of $[j][i_1]$, the factor $[j]$ is already represented by a curvilinear ``polygon'' with two edges and the point $i_1$ inside it; here we orient this polygon in counterclockwise order as in all other cases). \begin{figure} \caption{Splitting of two polygons with common middle point.} \label{figure:split} \end{figure} Once one has the diagram $N_x$ with oriented curvilinear polygons, as on the right of Figure~\ref{figure:split}, we proceed exactly as we did for reflections in Section~\ref{pic:ref} to obtain $\beta_x$: firstly, we put all the black points on a vertical line and project the noncrossing diagram to obtain a picture as in the left pictures in Figures~\ref{figure:ref1} and \ref{figure:ref2}; this diagram gives the Artin braid $\beta_x$ viewed from the bottom. We illustrate this process for the noncrossing diagram of the element $x_2=((8,7,5))[6,3,1][2]$ of Figure~\ref{figure:ref4} in Figure~\ref{figure:split_braid}. Note that as a consequence of this procedure, the orientation we put on polygons, which, as we already noticed at the beginning of the subsection, is not the one corresponding to $x$ but to $x^{-1}$, defines the permutation induced by the strands of $\beta_x$. The fact that the permutation induced by the strands of $\beta_x$ is $x^{-1}$ rather than $x$ comes from the fact that our convention is to concatenate Artin braids from top to bottom. Note that we can always recover the braid from the middle diagram without ambiguity, because all the strands strictly go either up or down, except possibly in one case: if $i_1= 2$ and $x=(1,-1)(2,-2)\in \mathbb{N}C(W_{D_n},c)$, the strands joining $1$ to $-1$ and $-1$ to $1$ do not strictly go up or down.
In that case we represent the braid as done in Figure~\ref{figure:1_2} in the next subsection. \begin{figure} \caption{The Artin braid $\beta_{x_2}$.} \label{figure:split_braid} \end{figure} \begin{figure} \caption{The Artin braid $\beta_t$ for $t=(1,-1)(2,-2)$ in case $i_1=2$.} \label{figure:1_2} \end{figure} In this way, we associate to every noncrossing partition $x\in\mathbb{N}C(W_{D_n},c)$ an Artin braid $\beta_x\in A_{B_n}$. For some $x$ there are several possible $\beta_x\in A_{B_n}$ as illustrated in Figure~\ref{figure:ref3}, but they have the same image under $\pi_{n,1}$, hence $\pi_{n, 1}(\beta_x)$ is well-defined. We have \begin{proposition}\label{prop:dual_diagrams} Let $x\in W_{D_n}$, $t\in T_{D_n}$ such that $x\leq_T xt\leq_T c$. Then $$\pi_{n,1}(\beta_x \beta_{t})=\pi_{n,1}(\beta_{xt}).$$ \end{proposition} \begin{proof} The situation $x\in\mathbb{N}C(W_{D_n}, c)$, $t\in T_{D_n}$ and $x\leq_T xt\leq_T c$ corresponds precisely to a cover relation in the noncrossing partition lattice of type $D_n$. These cover relations were described by Athanasiadis and Reiner \cite[Section 3]{AR}: there are three families of them. Setting $y=xt$, we have that $x$ is obtained from $y$ by replacing one or two balanced cycles or one paired cycle as follows: $$[j_1, j_2,\dots, j_k]\mapsto [j_1, \dots, j_{\ell}] ((j_{\ell+1}, \dots, j_k)), ~1\leq \ell < k \leq n-1,$$ $$((j_1, j_2,\dots, j_k))\mapsto ((j_1,\dots, j_{\ell}))((j_{\ell+1}, \dots, j_k)), ~1\leq \ell < k \leq n-1,$$ $$[j_1, \dots, j_{\ell}][j_{\ell+1}, \dots, j_k]\mapsto ((j_1, \dots, j_k)), ~1\leq \ell < k \leq n-1.$$ Note that in the last case, we have either $\ell=1$ and $j_1=\pm i_1$ or $k=\ell+1$ and $j_k=\pm i_1$ since $x$ is a noncrossing partition. Indeed, the noncrossing partition has at most one polygon $P$ with $P=-P$, in which case the middle point lies inside $P$.
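As an aside, the first family of cover relations can be sanity-checked directly at the level of signed permutations. The sketch below is ours, not part of the proof: the encodings (`balanced_cycle`, `paired_cycle`) and the convention that the product $xt$ applies $t$ first are our choices for illustration. It verifies one instance of $[j_1,\dots,j_k]\mapsto [j_1,\dots,j_\ell]\,((j_{\ell+1},\dots,j_k))$ with $t=((j_\ell,j_k))$, for $k=4$, $\ell=2$.

```python
# Our sanity check: the first family of cover relations in NC(W(D_n), c),
# viewed as an identity of signed permutations of {+-1, ..., +-n}.

def paired_cycle(*js):
    # ((j_1, ..., j_k)): j_i -> j_{i+1} cyclically, and the same on the -j_i.
    p, k = {}, len(js)
    for i, j in enumerate(js):
        p[j] = js[(i + 1) % k]
        p[-j] = -js[(i + 1) % k]
    return p

def balanced_cycle(*js):
    # [j_1, ..., j_k]: the 2k-cycle (j_1, ..., j_k, -j_1, ..., -j_k).
    seq = list(js) + [-j for j in js]
    return {j: seq[(i + 1) % len(seq)] for i, j in enumerate(seq)}

def compose(x, t, n):
    # Product xt with our convention: apply t first, then x.
    return {j: x.get(t.get(j, j), t.get(j, j))
            for j in range(-n, n + 1) if j != 0}

def full(p, n):
    # Extend p by the identity to all of {+-1, ..., +-n}.
    return {j: p.get(j, j) for j in range(-n, n + 1) if j != 0}

# Cover relation [1,2,3,4] -> [1,2]((3,4)) with t = ((j_l, j_k)) = ((2,4)).
n = 4
y = balanced_cycle(1, 2, 3, 4)
x = compose(balanced_cycle(1, 2), paired_cycle(3, 4), n)
t = paired_cycle(2, 4)
assert compose(x, t, n) == full(y, n)   # x * t = y as signed permutations
```

The same check passes for the degenerate instance $k=2$, $\ell=1$, i.e. $[j_1,j_2] = [j_1]\,((j_1,j_2))$ under the same composition convention.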
We have to show that the braid that we obtain by the concatenation $\beta_x\star\beta_t$ has the same image in $\widetilde{A}_{B_n}$ as $\beta_{xt}$. It is easy to deduce from the noncrossing representations $N_x$ what the result of the concatenation of two such braids is. By the process explained above, the noncrossing diagram itself can be considered as an Artin braid, viewed inside a circle or rather a cylinder. An edge of a curvilinear polygon represents a strand, and the orientation indicates the startpoint and the endpoint of that strand. Consider the case where the cover relation $xt\mapsto x$ is the first one above, that is, it consists of breaking a symmetric polygon into a symmetric polygon and two opposite cycles. This means that $xt$ has the two symmetric factors $[i_1]$ and $[j_1,j_2,\dots, j_k]$ while $x$ has the same factors as $xt$ except that the two symmetric factors are replaced by $$[i_1] [j_1, \dots, j_\ell] (( j_{\ell+1}, \dots, j_k))$$ for some $\ell\in \{1, \dots, k-1\}$ and $t=((j_\ell, j_k))$. We have $k\geq 2$. All the other polygons of $x$ and $xt$ have support disjoint from $\{\pm j_1, \ldots , \pm j_k\}$, hence when concatenating $\beta_x\star \beta_t$ it is graphically clear that they will stay unchanged: indeed, these polygons are disjoint from the two curvilinear polygons associated to the reflection $t$. Hence we can assume that $xt=[i_1][j_1,j_2,\dots, j_k]$ and $x=[i_1] [j_1, \dots, j_\ell] (( j_{\ell+1}, \dots, j_k))$. The situation is depicted in Figure~\ref{figure:cover} below. \begin{figure} \caption{Concatenating diagrams corresponding to the cover relation $$[i_1] [j_1, \dots, j_k]\mapsto[i_1] [j_1, \dots, j_\ell] (( j_{\ell+1}, \dots, j_k)).$$} \label{figure:cover} \end{figure} In the concatenated diagram, the strand starting at $j_1$ first goes to $-j_\ell$ inside $\beta_x$, then the strand starting at $-j_\ell$ goes to $-j_k$ inside $\beta_t$.
Hence the result is that the strand starting at $j_1$ goes to $-j_k$, and can be drawn as in the diagram on the right since there is no obstruction for such an isotopy. Similarly, the strand starting at $-j_{\ell+1}$ first goes to $-j_k$, then to $j_{\ell}$, hence is isotopic to the strand which goes directly from $-j_{\ell+1}$ to $-j_{\ell}$ as drawn in the picture on the right. The same happens on the other side, while all other strands stay unchanged. It follows that the result of the concatenation corresponds to the diagram on the right, which is precisely the diagram $N_{xt}$ associated to $xt$. Hence we have the claim in the case where the cover relation is the one described, with $k\geq 2$. It remains to show the same for the other two cover relations; we treat the case of the last one and leave the second one to the reader. Note that in the case where the cover relation is given by $$[j_1, \dots, j_{\ell}][j_{\ell+1}, \dots, j_k]\mapsto ((j_1, \dots, j_k)),$$ we have either $\ell=1$ and $j_1=\pm i_1$ or $\ell+1=k$ and $j_k=\pm i_1$. Assume that $\ell+1=k$ and $j_k=-i_1$; the case where $j_k=i_1$, as well as the cases where $\ell=1$, $j_1=\pm i_1$, are similar. We have $x=((j_1, \dots, j_\ell, -i_1))$, $t=((j_\ell, i_1))$. In this case, there are two possible diagrams $N_x$ for $x$ and the same holds for $N_t$ (see Figure~\ref{figure:ref3} for an illustration in the case where the noncrossing partition is a reflection). Since the corresponding braids $\beta_x$ obtained from the two different diagrams $N_x$ have the same image under $\pi_{n,1}$ we can choose any diagram among the two, but the diagram $N_t$ has to be chosen to be compatible with the diagram $N_x$ if we want to do the same proof as for the first cover relation. One of the two situations is represented in Figure~\ref{figure:cover2}. Arguing as in the first case we then get the diagram on the right of the figure for the concatenation $\beta_x \star \beta_t$.
This diagram is the diagram $N_{xt}$ up to the orientation of the two curves joining $i_1$ to $-i_1$: but changing their orientation corresponds to inverting a middle crossing in $\beta_{xt}$, which gives rise to a braid which has the same image in $\widetilde{A}_{B_n}$ thanks to the relation $\mathbf{s}_0^2=1$. This proves the claim. \begin{figure} \caption{Concatenating diagrams corresponding to the cover relation $$[j_1,\dots, j_{\ell}][j_{\ell+1}, \dots, j_k]\mapsto ((j_1, \dots, j_k)).$$} \label{figure:cover2} \end{figure} \end{proof} \begin{corollary}\label{cor:sdb} Let $x\in\mathbb{N}C(W_{D_n}, c)$. Then $\pi_{n,1}(\beta_x)=x_c$. \end{corollary} \begin{proof} Recall that $S_{D_n}=\{(1,-2)(-1,2)\}\cup\{ (i, i+1)(-i,-i-1)~|~i=1,\dots, n-1\}$. By construction of the braid $\beta_t$ from the diagram $N_t$ we have that $\pi_{n,1}(\beta_s)=\mathbf{s}$ for all $s\in S_{D_n}$, and it is a general fact that $s_c=\mathbf{s}$ for every simple reflection $s$. Hence we have the claim in case $x$ is in $S_{D_n}$, and in particular $\pi_{n,1}(\beta_x)$ lies in $A_{D_n}$. Since by Proposition~\ref{prop:dual_diagrams} the elements $\pi_{n,1}(\beta_t)$ with $t\in T_{D_n}$ satisfy the dual braid relations with respect to $c$, we claim that $\pi_{n,1}(\beta_t)=t_c$ for all $t\in T_{D_n}$. Indeed, for all $t\in T_{D_n}$, we can always find $s\in S_{D_n}$ such that either $st\leq_T c$ or $ts\leq_T c$, say, $st\leq_T c$, and $\ell_S(sts) < \ell_S(t)$ (this can be seen for instance using the noncrossing representation of $t$). It follows that we have the dual braid relation $$\pi_{n,1}(\beta_s)\pi_{n, 1}(\beta_t)=\pi_{n,1}(\beta_{sts}) \pi_{n,1}(\beta_s).$$ Arguing by induction on $\ell_S(t)$, we have that $\pi_{n,1}(\beta_q)=q_c$ for every reflection $q$ occurring in the above equality except possibly $t$. Thanks to the dual braid relation $s_c t_c= (sts)_c s_c$ we get that $\pi_{n,1}(\beta_t)=t_c$ and in particular that $\pi_{n,1}(\beta_t)\in A_{D_n}$.
Now for arbitrary $x \in \mathbb{N}C(W_{D_n}, c)$ we can use Proposition~\ref{prop:dual_diagrams} as well as the fact that $x\leq_T xt \leq_T c$, $t \in T_{D_n}$, implies that $(xt)_c = x_ct_c$ (see the end of Subsection~\ref{Sub:DualBraid}) to get by induction on $\ell_T(x)$ that $\pi_{n,1}(\beta_x)=x_c$. \end{proof} \subsection{Simple dual braids are Mikado braids}\label{end} In all the examples drawn in the figures given in the previous sections, we see that the Artin braids $\beta_x$ resulting from simple dual braids are Mikado braids: they indeed satisfy the topological condition given in point $(2)$ of Theorem~\ref{thm:dg_b}. This is the main statement which we want to prove here. \begin{proposition}\label{beta_mik} Let $x\in\mathbb{N}C(W_{D_n}, c)$. Then $\beta_x\in A_{B_n}$ is a Mikado braid. \end{proposition} \begin{proof} As $\beta_x \in A_{B_n}$, it suffices to verify condition $(2)$ of Theorem~\ref{thm:dg_b}. Note that except in case $x=(1,-1)(2,-2)$ and $i_1=2$ (in which case the braid $\beta_x$ which is drawn in Figure~\ref{figure:1_2} is obviously Mikado), the diagram which we obtained from $N_x$ by putting all the dots on the same vertical line (as done in Figures~\ref{figure:ref1} and \ref{figure:ref2}; we call this diagram a \textit{vertical diagram}) has the following property: each oriented curve joining two points either strictly increases or strictly decreases, and every two such distinct curves never cross. The first property follows from the fact that the diagram is obtained from $N_x$ by projecting to the right a curve which is already either strictly increasing or strictly decreasing, while the second follows from the fact that the polygons in $N_x$ do not cross. In such a diagram, consider a curve joining two points and going up with respect to the orientation, with no other curve lying at its right. It follows from the discussion in the paragraph above that such a curve always exists.
Every single point lying at the right of such a curve corresponds to a vertical unbraided strand in $\beta_x$ which lies above all the other strands. Therefore, every such point can be removed in the vertical diagram, and the symmetric point lying at the left of the curve which is symmetric to the original curve can be removed simultaneously: it corresponds to removing a vertical unbraided strand lying above all the other strands in $\beta_x$, and simultaneously removing the symmetric unbraided strand lying below all the other strands, giving a new braid $\beta_x'$ lying in $A_{B_{n-1}}$ since we removed a symmetric pair of strands. After removing all such points in the vertical diagram, the original curve has nothing at its right, hence corresponds to a strand which lies above all the other strands, and we can therefore remove it, as well as its symmetric strand. Again we obtain an element which lies in an Artin group of type $B_m$ for a smaller $m$. Going on inductively, we can remove every strand corresponding to a curve, obtaining a braid which stays symmetric at each step. If after removing the last curve we still have points, these correspond to vertical unbraided strands which can be removed. This concludes the proof, by Theorem~\ref{thm:dg_b}. We illustrate the above procedure in Example~\ref{ex_proof} below. \end{proof} Note that we could define more generally vertical diagrams (not necessarily corresponding to simple dual braids) and associate to them an Artin braid, which would therefore always be Mikado. \begin{exple}\label{ex_proof} We illustrate the procedure given in the proof of Proposition~\ref{beta_mik} in case $x$ is the element $x_2=((8,7,5))[6,3,1][2]$ from Figure~\ref{figure:ref4}. The vertical diagram and the braid $\beta_x$ are given in Figure~\ref{figure:split_braid}. The blue curve joining $6$ to $-1$ in the vertical diagram has no other curve lying at its right.
There is only the single point $4$, which corresponds in $\beta_x$ to a strand which lies above all the others, with the symmetric strand $-4$ lying below all the others. Removing the pair of strands $4$ and $-4$, we get a symmetric braid on $14$ strands, hence in $A_{B_{7}}$. We can then remove the strand corresponding to the original curve joining $6$ to $-1$ as well as its symmetric strand, since there is no remaining strand lying above it. Going on inductively we eventually remove all pairs of strands. \end{exple} As a corollary, we obtain the main result: \begin{theorem}\label{thm:main} Let $x\in\mathbb{N}C(W_{D_n},c)$. Then $x_c$ is a Mikado braid. \end{theorem} \begin{proof} By Corollary~\ref{cor:sdb} we have that $\pi_{n,1}(\beta_x)=x_c$ for every $x\in\mathbb{N}C(W_{D_n},c)$. But by Proposition~\ref{beta_mik}, $\beta_x$ is a Mikado braid in $A_{B_n}$. Applying Theorem~\ref{thm:mikado_bd} we get that $x_c=\pi_{n,1}(\beta_x)$ is a Mikado braid in $A_{D_n}$. \end{proof} \end{document}
\begin{document} \title{A Simple Vector Proof of Feuerbach's Theorem} \begin{abstract} The celebrated theorem of Feuerbach states that the nine-point circle of a nonequilateral triangle is tangent to both its incircle and its three excircles. In this note, we give a simple proof of Feuerbach's Theorem using straightforward vector computations. All required preliminaries are proven here for the sake of completeness. \end{abstract} \section{Notation and Background} Let $\triangle ABC$ be a nonequilateral triangle. We denote its side-lengths by $a,b,c$, its semiperimeter by $s = \half (a + b + c)$, and its area by $K$. Its {\em classical centers} are the circumcenter $O$, the incenter $I$, the centroid $G$, and the orthocenter $H$ (Figure \ref{HGIO}). The nine-point center $N$ is the midpoint of $OH$ and the center of the nine-point circle, which passes through the side-midpoints $A',B',C'$ and the feet of the three altitudes. The Euler Line Theorem states that $G$ lies on $OH$ with $OG : GH = 1 : 2$. We write $E_a,E_b,E_c$ for the excenters opposite $A,B,C$, respectively; these are points where one internal angle bisector meets two external angle bisectors. Like $I$, the points $E_a,E_b,E_c$ are equidistant from the lines $AB$, $BC$, and $CA$, and thus center three circles each of which is tangent to those lines. These are the excircles, pictured in Figure \ref{excenters}. The {\em classical radii} are the circumradius $R$ ($= |OA| = |OB| = |OC|$), the inradius $r$, and the exradii $r_a,r_b,r_c$. The following area formulas are well known (see, e.g., \cite{C} and \cite{CG}): \[ K = \frac{abc}{4R}=rs=r_a(s-a)=\sqrt{s(s-a)(s-b)(s-c)}. \] Feuerbach's Theorem states that {\em the incircle is internally tangent to the nine-point circle, while the excircles are externally tangent to it} \cite{F}. Two of the four points of tangency can be seen in Figure \ref{excenters}. \section{Vector Formalism} We view the plane as $\mathbb{R}^2$ with its standard vector space structure. 
Given $\triangle ABC$, the vectors $A - C$ and $B - C$ are linearly independent. Thus for any point $X$, we may write $X - C = \alpha(A - C) + \beta(B - C)$ for unique $\alpha, \beta \in \mathbb{R}$. Defining $\gamma = 1 - \alpha - \beta$, we find that \[ X = \alpha A + \beta B + \gamma C, \;\;\;\;\;\; \alpha + \beta + \gamma = 1. \] This expression for $X$ is unique. One says that $X$ has {\em barycentric coordinates} $(\alpha, \beta, \gamma)$ with respect to $\triangle ABC$ (see, e.g., \cite{C}). The barycentric coordinates are particularly simple when $X$ lies on a side of $\triangle ABC$: \begin{thm}\label{XonBC} Let $X$ lie on side $BC$ of $\triangle ABC$. Then, with respect to $\triangle ABC$, $X$ has barycentric coordinates $(0,|CX|/a,|BX|/a)$. \end{thm} \begin{proof} Since $X$ lies on line $BC$ between $B$ and $C$, there is a unique scalar $t$ such that $X - B = t(C - B)$ and $0 < t < 1$. Taking norms and using $t > 0$, we find $|BX| = |t||BC| = ta$, i.e., $t = |BX|/a$. Rearranging, $X = 0A + (1 - t)B + tC$, in which the coefficients sum to $1$. Finally, $1 - t = (a - |BX|)/a = |CX|/a$. \end{proof} \begin{figure} \caption{The classical centers and the Euler division $OG : GH = 1 : 2$.} \label{HGIO} \end{figure} \begin{figure} \caption{The excenter $E_a$ and $A$-excircle; Feuerbach's theorem.} \label{excenters} \end{figure} The next theorem reduces the computation of a distance $|XY|$ to the simpler distances $|AY|$, $|BY|$, and $|CY|$, when $X$ has known barycentric coordinates. \begin{thm}\label{distance} Let $X$ have barycentric coordinates $(\alpha, \beta, \gamma)$ with respect to $\triangle ABC$. Then for any point $Y$, \[ {|XY|}^2 = \alpha {|AY|}^2 + \beta {|BY|}^2 + \gamma {|CY|}^2 - (\beta \gamma a^2 + \gamma \alpha b^2 + \alpha \beta c^2). 
\eqno{(*)} \] \end{thm} \begin{proof} Using the common abbreviation $V^2 = V \pmb{\cdot} V$, we compute first that \begin{align*} {|XY|}^2 &= {(Y - X)}^2 \\ &= {(Y - \alpha A - \beta B - \gamma C)}^2 \\ &= {\{ \alpha(Y - A) + \beta(Y - B) + \gamma(Y - C) \}}^2 \\ &= \alpha^2 {|AY|}^2 + \beta^2{|BY|}^2 + \gamma^2 {|CY|}^2 \\ & \hspace{.5in} + 2 \alpha \beta \, (Y - A) \pmb{\cdot} (Y - B) + 2 \alpha \gamma \, (Y - A) \pmb{\cdot} (Y - C) \\ & \hspace{.5in} + 2 \beta \gamma \, (Y - B) \pmb{\cdot} (Y - C). \end{align*} On the other hand, we may compute $c^2$ as follows: \[ {(B - A)}^2 = {\{(Y - A) - (Y - B)\}}^2 = {|AY|}^2 + {|BY|}^2 - 2 \, (Y - A) \pmb{\cdot} (Y - B). \] Thus $2 \alpha \beta \, (Y - A) \pmb{\cdot} (Y - B) = \alpha \beta {|AY|}^2 + \alpha \beta {|BY|}^2 - \alpha \beta c^2$. Substituting this and its analogues into the preceding calculation, the total coefficient of ${|AY|}^2$, for example, becomes $\alpha^2 + \alpha \beta + \alpha \gamma = \alpha(\alpha + \beta + \gamma) = \alpha$. The result is formula $(*)$. \end{proof} \section{Distances from $N$ to the Vertices} \begin{lem}\label{Gbarys} The centroid $G$ has barycentric coordinates $(\frac{1}{3}, \frac{1}{3}, \frac{1}{3})$. \end{lem} \begin{proof} Let $G'$ be the point with barycentric coordinates $(\frac{1}{3}, \frac{1}{3}, \frac{1}{3})$, and we will prove $G=G'$. Let $A'$ and $B'$ be the midpoints of sides $BC$ and $AC$ respectively. By Theorem \ref{XonBC}, $A' = \half B + \half C$. Now we calculate \[ \mbox{$\frac{1}{3}$}A+\mbox{$\frac{2}{3}$}A' = \mbox{$\frac{1}{3}$}A+\mbox{$\frac{2}{3}$}(\half B + \half C)=\mbox{$\frac{1}{3}$}A + \mbox{$\frac{1}{3}$}B + \mbox{$\frac{1}{3}$}C=G'\] which implies that $G'$ is on segment $AA'$. Similarly, we find that $G'$ is on segment $BB'$. However the intersection of lines $AA'$ and $BB'$ is $G$, and so $G=G'$. \end{proof} \begin{lem}\label{EulerDivision}\emph{(Euler Line Theorem)} $H-O=3(G-O)$ \end{lem} \begin{proof} Let $H'=O+3(G-O)$ and we will prove $H=H'$.
By Lemma \ref{Gbarys}, \[H'-O=3(G-O)=A+B+C-3O=(A-O)+(B-O)+(C-O).\] We use this to calculate \begin{align*} (H'-A)\pmb{\cdot}(B-C) &= \{(H'-O)-(A-O)\}\pmb{\cdot}\{(B-O)- (C-O)\} \\ &= \{(B-O)+(C-O)\}\pmb{\cdot}\{(B-O)-(C-O)\} \\ &= |BO|^2-|CO|^2 \\ &= 0 \end{align*} Therefore $H'$ is on the altitude from $A$ to $BC$. Similarly, $H'$ is on the altitude from $B$ to $AC$, but since $H$ is defined to be the intersection of the altitudes, it follows that $H=H'$. \end{proof} \begin{lem}\label{AOdotBO} $(A - O) \pmb{\cdot} (B - O) = R^2 - \half c^2$. \end{lem} \begin{proof} One has \begin{align*} c^2 &= {(A - B)}^2 \\ &= {\{(A - O) - (B - O)\}}^2 \\ &= {|OA|}^2 + {|OB|}^2 - 2 \, (A - O) \pmb{\cdot} (B - O) \\ & = 2R^2 - 2 \, (A - O) \pmb{\cdot} (B - O). \qedhere \end{align*} \end{proof} We now calculate $|AN|$, $|BN|$, $|CN|$, which are needed in Theorem \ref{distance}. \begin{thm}\label{distanceAN} $4{|AN|}^2 = R^2-a^2+b^2+c^2$. \end{thm} \begin{proof} Since $N$ is the midpoint of $OH$, we have $H - O = 2(N - O)$. Combining this observation with Lemma \ref{EulerDivision}, and using Lemma \ref{AOdotBO}, we obtain \begin{align*} 4{|AN|}^2 &= {\{2(A - O) - 2(N - O)\}}^2 \\ &= {\{ (A - O) - (B - O) - (C - O)\}}^2 \\ &= {|AO|}^2 + {|BO|}^2 + {|CO|}^2 \\ & \hspace{.3in} - 2 \, (A - O) \pmb{\cdot} (B - O) - 2 \, (A - O) \pmb{\cdot} (C - O) \\ & \hspace{.3in} + 2 \, (B - O) \pmb{\cdot} (C - O) \\ &= 3R^2 - 2(R^2 - \half c^2) - 2(R^2 - \half b^2) + 2(R^2 - \half a^2) \\ &= R^2 - a^2 + b^2 + c^2. \qedhere \end{align*} \end{proof} \section{Proof of Feuerbach's Theorem} \begin{thm}\label{Ibarys} The incenter $I$ has barycentric coordinates $(a/2s, b/2s, c/2s)$. \end{thm} \begin{proof} Let $I'$ be the point with barycentric coordinates $(a/2s, b/2s, c/2s)$, and we will prove $I=I'$. Let $F$ be the foot of the bisector of $\angle A$ on side $BC$. 
Applying the Law of Sines to $\triangle ABF$ and $\triangle ACF$, and using $\sin(\pi - x) = \sin x$, we find that \[ \frac{|BF|}{c} = \frac{\sin(\angle BAF)}{\sin(\angle BFA)} = \frac{\sin(\angle CAF)}{\sin(\angle CFA)} = \frac{|CF|}{b}. \] The equations $b|BF| = c|CF|$ and $|BF| + |CF| = a$ jointly imply that $|BF| = ac/(b + c)$. By Theorem \ref{XonBC}, $F = (1 - t)B + tC$, where $t = |BF|/a = c/(b + c)$. Now, \[\mbox{$\frac{b+c}{2s}$}F+\mbox{$\frac{a}{2s}$}A=\mbox{$\frac{b+c}{2s}$}(\mbox{$\frac{b}{b+c}$}B+\mbox{$\frac{c}{b+c}$}C)+\mbox{$\frac{a}{2s}$}A=\mbox{$\frac{a}{2s}$}A+\mbox{$\frac{b}{2s}$}B+\mbox{$\frac{c}{2s}$}C=I'\] which implies that $I'$ is on the angle bisector of $\angle A$. Similarly, $I'$ is on the angle bisector of $\angle B$, but since $I$ is the intersection of these two lines, this implies $I=I'$. \end{proof} We are now in a position to prove Feuerbach's Theorem. \begin{thm}[Feuerbach, 1822]\label{FT} In a nonequilateral triangle, the nine-point circle is internally tangent to the incircle and externally tangent to the three excircles. (For historical details, see {\em \cite{F}} and {\em \cite{M}}.) \end{thm} \begin{proof} Consider the incircle. From elementary geometry, two nonconcentric circles are internally tangent if and only if the distance between their centers is equal to the absolute difference of their radii. Therefore we must prove that $|IN| = |\half R - r|$. Here, $\half R$ is the radius of the nine-point circle, as the latter is the circumcircle of the midpoint-triangle $\triangle A'B'C'$. We set $X = I$ and $Y = N$ in Theorem \ref{distance}, with Theorems \ref{distanceAN} and \ref{Ibarys} supplying the distances $|AN|$, $|BN|$, $|CN|$, and the barycentric coordinates of $I$. 
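Before carrying out the symbolic computation, the target identity $|IN| = |\half R - r|$ (and its excircle analogue) can be spot-checked numerically. The sketch below is our own addition: it takes the $13$-$14$-$15$ triangle, computes the centers directly from coordinates, and checks both tangency distances.

```python
# Our numeric spot check of |IN| = |R/2 - r| and |E_a N| = R/2 + r_a,
# on the 13-14-15 triangle with a = |BC| = 15, b = |CA| = 13, c = |AB| = 14.
from math import dist, isclose, sqrt

A, B, C = (0.0, 0.0), (14.0, 0.0), (5.0, 12.0)
a, b, c = dist(B, C), dist(C, A), dist(A, B)
s = (a + b + c) / 2
K = sqrt(s * (s - a) * (s - b) * (s - c))           # Heron's formula: K = 84
R, r, r_a = a * b * c / (4 * K), K / s, K / (s - a)  # circum-, in-, A-exradius

def bary(al, be, ga):
    # Point with barycentric coordinates (al, be, ga) w.r.t. triangle ABC.
    return (al * A[0] + be * B[0] + ga * C[0],
            al * A[1] + be * B[1] + ga * C[1])

def circumcenter(P, Q, S):
    # Standard coordinate formula for the circumcenter.
    (x1, y1), (x2, y2), (x3, y3) = P, Q, S
    d = 2 * (x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2))
    ux = ((x1**2 + y1**2) * (y2 - y3) + (x2**2 + y2**2) * (y3 - y1)
          + (x3**2 + y3**2) * (y1 - y2)) / d
    uy = ((x1**2 + y1**2) * (x3 - x2) + (x2**2 + y2**2) * (x1 - x3)
          + (x3**2 + y3**2) * (x2 - x1)) / d
    return (ux, uy)

O = circumcenter(A, B, C)
G = ((A[0] + B[0] + C[0]) / 3, (A[1] + B[1] + C[1]) / 3)
H = (3 * G[0] - 2 * O[0], 3 * G[1] - 2 * O[1])      # Euler line: H = 3G - 2O
N = ((O[0] + H[0]) / 2, (O[1] + H[1]) / 2)          # nine-point center
I = bary(a / (2 * s), b / (2 * s), c / (2 * s))     # incenter (Theorem 6)
E_a = bary(-a / (2 * (s - a)), b / (2 * (s - a)), c / (2 * (s - a)))

assert isclose(dist(I, N), abs(R / 2 - r))          # internal tangency
assert isclose(dist(E_a, N), R / 2 + r_a)           # external tangency
```

For this triangle $R = 8.125$, $r = 4$, and $|IN| = 0.0625 = \half R - r$ exactly.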
For brevity, we use {\em cyclic sums}, in which the displayed term is transformed under the permutations $(a,b,c)$, $(b,c,a)$, and $(c,a,b)$, and the results are summed (thus, symmetric functions of $a,b,c$ may be factored through the summation sign, and $\sum_{\circlearrowright} a = a + b + c = 2s$). The following computation results: \begin{align*} {|IN|}^2 &= \sum_{\pmb{\circlearrowright}} \bigg(\frac{a}{2s}\bigg) \frac{R^2 - a^2 + b^2 + c^2}{4} \, - \sum_{\pmb{\circlearrowright}} \bigg(\frac{b}{2s} \cdot \frac{c}{2s} \bigg) a^2 \\ &= \frac{R^2}{8s} \bigg[ \sum_{\pmb{\circlearrowright}} a \bigg] + \frac{1}{8s} \bigg[ \sum_{\pmb{\circlearrowright}} (-a^3 + ab^2 + ac^2) \bigg] - \frac{abc}{(2s)^2} \bigg[ \sum_{\pmb{\circlearrowright}} a \bigg] \\ &= \frac{R^2}{4} + \frac{(- a + b + c)(a - b + c)(a + b - c) + 2abc}{8s} - \frac{abc}{2s} \\ &= \frac{R^2}{4} + \frac{(2s - 2a)(2s - 2b)(2s - 2c)}{8s} - \frac{abc}{4s} \\ &= \frac{R^2}{4} + \frac{(K^2/s)}{s} - \frac{4RK}{4s} \\ & = \left(\half R\right)^2 + r^2 - Rr \\ & = \left(\half R - r\right)^2. \end{align*} The two penultimate steps use the area formulas of Section 1---in particular, $K = rs = abc/4R$ and $K^2 = s(s - a)(s - b)(s - c)$. A similar calculation applies to the $A$-excircle, with two modifications: (i) $E_a$ has barycentric coordinates \[ \left( \frac{-a}{2(s - a)}, \frac{b}{2(s - a)}, \frac{c}{2(s - a)} \right), \] and (ii) in lieu of $K = rs$, one uses $K = r_a(s - a)$. The result, $|E_a N| = \half R + r_a$, means that the nine-point circle and the $A$-excircle are externally tangent. \end{proof} \noindent {\bf Michael Scheer} is a secondary student now entering his senior year at Stuyvesant High School, the technical academy in New York City's TriBeCa district. \\ \texttt{[email protected]} \end{document}
\begin{document} \title[Short Title]{On some strong convergence results of a new Halpern-type iterative process for quasi-nonexpansive mappings and accretive operators in Banach spaces} \author{$^{1}$Kadri DOGAN} \address{Department of Mathematical Engineering, Yildiz Technical University, Davutpasa Campus, Esenler, 34210 \.{I}stanbul, Turkey} \email{[email protected]} \author{$^{2}$Vatan KARAKAYA} \curraddr{Department of Mathematical Engineering, Yildiz Technical University, Davutpasa Campus, Esenler, 34210 \.{I}stanbul, Turkey} \email{[email protected]} \subjclass[2010]{47H09, 47H10, 37C25} \keywords{Iterative process, accretive operator, strong convergence, sunny nonexpansive retraction, uniformly convex Banach space} \dedicatory{$^{(1,2)}$Department of Mathematical Engineering, Yildiz Technical University, Davutpasa Campus, Esenler, 34210 \.{I}stanbul, Turkey} \begin{abstract} In this study, we introduce a new iterative process to approximate common fixed points of an infinite family of quasi-nonexpansive mappings, and we obtain an iterative sequence that converges strongly to a common fixed point of these mappings in a uniformly convex Banach space. We also prove that this process approximates zeros of an infinite family of accretive operators, and we obtain a strong convergence result for these operators. \end{abstract} \maketitle \section{Introduction and preliminaries} Throughout this study, the sets of all non-negative integers and of all real numbers are denoted by $ \mathbb{N} $ and $ \mathbb{R} $, respectively. Geometric properties of Banach spaces and nonlinear algorithms have been a topic of intensive research efforts, in particular within the past 30 years or so. Some geometric properties of Banach spaces play a crucial role in fixed point theory. In the first part of the study, we review these geometric concepts, most of which are well known. We begin with some basic notations.
In 1936, Clarkson \cite{Clrk36} published a remarkable study on uniform convexity. It signalled the beginning of extensive research efforts on the geometry of Banach spaces and its applications. Most of the results indicated in this work were developed in 1991 or later. Let $C$ be a nonempty, closed and convex subset of a Banach space $B$, and let $B^{\ast }$ be the dual space of $B$. We define the modulus of convexity of $B$, $\delta _{B}(\epsilon )$, as follows: \begin{equation*} \delta _{B}(\epsilon )=\inf \left\{ 1-\frac{\left\Vert a+b\right\Vert }{2} :a,b\in \overline{B(0,1)},\left\Vert a-b\right\Vert \geq \epsilon \right\} . \end{equation*} The modulus of convexity is a real-valued function from $[0,2]$ to $ [0,1]$ which is continuous on $[0,2)$. A Banach space is uniformly convex if and only if $\ \delta _{B}(\epsilon )>0$ for all $\epsilon >0.$ Let $B$ be a normed space and $S_{B}=\left\{ a\in B:\left\Vert a\right\Vert =1\right\} $ the unit sphere of $B$. The norm of $B$ is G\^{a}teaux differentiable at a point $a\in S_{B}$ if for each $b\in S_{B}$ the limit \begin{equation*} \frac{d}{dt}\left( \left\Vert a+tb\right\Vert \right) |_{t=0}=\lim_{t\rightarrow 0}\frac{\left\Vert a+tb\right\Vert -\left\Vert a\right\Vert }{t} \end{equation*} exists. The norm of $B$ is said to be G\^{a}teaux differentiable if it is G\^{a}teaux differentiable at each point of $S_{B}$; in this case, $B$ is called smooth. The norm of $B$ is said to be uniformly G\^{a}teaux differentiable if for each $b\in S_{B}$, the limit is approached uniformly for $a\in S_{B}$. Similarly, if the norm of $B$ is uniformly G\^{a}teaux differentiable, then $B$ is called uniformly smooth. A normed space $B$ is called strictly convex if for all $a,b\in B$ with $a\neq b$ and $\left\Vert a\right\Vert =\left\Vert b\right\Vert =1,$ we have \begin{equation*} \left\Vert \lambda a+\left( 1-\lambda \right) b\right\Vert <1,\text{ \ for all }\lambda \in \left( 0,1\right) .
\end{equation*} Now, as a consequence of the above definitions, we state the following two theorems without proof. \begin{theorem} \cite{Chidume09} Let $B$ be a Banach space. $1)$ $B$ is uniformly convex if and only if $B^{\ast }$ is uniformly smooth. $2)$ $B$ is uniformly smooth if and only if $B^{\ast }$ is uniformly convex. \end{theorem} \begin{theorem} \cite{Chidume09} Every uniformly smooth space is reflexive. \end{theorem} A self mapping $\phi $ on $\left[ 0,\infty \right) $ is said to be a gauge function if it is continuous and strictly increasing with $\phi \left( 0\right) =0$. Let $\phi $ be a gauge function and let $B$ be any normed space. The mapping $J_{\phi }:B\rightarrow 2^{B^{\ast }}$ defined by \begin{equation*} J_{\phi }a=\left\{ f\in B^{\ast }:\left\langle a,f\right\rangle =\left\Vert a\right\Vert \left\Vert f\right\Vert ;\left\Vert f\right\Vert =\phi \left( \left\Vert a\right\Vert \right) \right\} \end{equation*} for all $a\in B$ is called the duality map with gauge function $\phi $. If $\phi \left( t\right) =t$, then $J_{\phi }=J$ is called the normalized duality map. Let \begin{equation*} \psi \left( t\right) =\int_{0}^{t}\phi \left( \varsigma \right) d\varsigma \text{, \ \ }t\geq 0\text{;} \end{equation*} then $\psi \left( \delta t\right) \leq \delta \psi \left( t\right) $ for each $\delta \in \left( 0,1\right) $. The function \begin{equation*} \rho \left( t\right) =\sup \left\{ \frac{\left\Vert a+b\right\Vert +\left\Vert a-b\right\Vert }{2}-1:a,b\in B,\left\Vert a\right\Vert =1\text{ and }\left\Vert b\right\Vert =t\right\} \end{equation*} is called the modulus of smoothness of $B$, where $\rho :\left[ 0,\infty \right) \rightarrow \left[ 0,\infty \right) $. Moreover, $ \lim_{t\rightarrow 0}\frac{\rho \left( t\right) }{t}=0$ if and only if $B$ is uniformly smooth. Assume that $q\in \mathbb{R} $ is chosen in the interval $\left( 1,2\right] $.
If a Banach space $B$ is $q$-uniformly smooth, then there exists a fixed constant $c>0$ such that $\rho \left( t\right) \leq ct^{q}$. For $q>2$, there is no $q$-uniformly smooth Banach space; this assertion was shown by Cioranescu in \cite{Cioranescu13}. The space $B$ is said to have a weakly sequentially continuous duality map if the duality mapping $J$ is single-valued and sequentially continuous from the weak topology of $B$ to the weak$^{\ast }$ topology of $B^{\ast }$; see \cite{Cioranescu13, Reich92}. Let $C$ be a nonempty subset of a Banach space $B$ and $T:C\rightarrow B$ a nonself mapping. Let $F\left( T\right) =\left\{ a\in C:Ta=a\right\} $ denote the set of fixed points of $T$. The map $T:C\rightarrow B$ is said to be: 1) nonexpansive if $\left\Vert Ta-Tb\right\Vert \leq \left\Vert a-b\right\Vert $ for all $a,b\in C$; 2) quasi-nonexpansive if $\left\Vert Ta-p\right\Vert \leq \left\Vert a-p\right\Vert $ for all $a\in C$ and $p\in F\left( T\right) $. The following iterative process was defined by Dogan and Karakaya \cite{dogan}. Let $C$ be a convex subset of a normed space $B$ and $T:C\rightarrow C$ a self map.
\begin{eqnarray} x_{0} &=&x\in C \label{it} \\ f\left( T,x_{n}\right) &=&\left( 1-\wp _{n}\right) x_{n}+\xi _{n}Tx_{n}+\left( \wp _{n}-\xi _{n}\right) Ty_{n} \notag \\ y_{n} &=&\left( 1-\zeta _{n}\right) x_{n}+\zeta _{n}Tx_{n} \notag \end{eqnarray} for $\ n\geq 0$, where the sequences $\left\{ \xi _{n}\right\} ,~\left\{ \wp _{n}\right\} , $ $\left\{ \zeta _{n}\right\} $ satisfy the following conditions: $C_{1})$ $\wp _{n}\geq \xi _{n}$; $C_{2})$ $\left\{ \wp _{n}-\xi _{n}\right\} _{n=0}^{\infty },\left\{ \wp _{n}\right\} _{n=0}^{\infty },\left\{ \zeta _{n}\right\} _{n=0}^{\infty },\left\{ \xi _{n}\right\} _{n=0}^{\infty }\subset \left[ 0,1\right] $; $C_{3})$ $\sum_{n=0}^{\infty }\wp _{n}=\infty .$ In 1967, Halpern \cite{Halpern67} was the first to introduce the following iteration process for a nonexpansive mapping $T$. For any initial value $ a_{0}\in C$ and any fixed $u\in C$, with $\varphi _{n}\in \left[ 0,1\right] $ given by $\varphi _{n}=n^{-b}$, \begin{equation} a_{n+1}=\varphi _{n}u+\left( 1-\varphi _{n}\right) Ta_{n}\text{ \ \ \ \ \ } \forall n\in \mathbb{N} \text{,} \label{h} \end{equation} where $b\in \left( 0,1\right) $.
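The Halpern scheme $(\ref{h})$ is easy to experiment with numerically. The sketch below is our own illustration, not part of the paper: the choices $T(x)=x/2$ (a contraction on $\mathbb{R}$, hence nonexpansive, with unique fixed point $0$), $u=1$, and $b=1/2$ are ours, and the iterate is seen to approach the fixed point as $\varphi_n\to 0$.

```python
# Our minimal numerical sketch of the Halpern iteration (h):
#   a_{n+1} = phi_n * u + (1 - phi_n) * T(a_n),  phi_n = n^(-b),  b in (0,1).

def halpern(T, a0, u, b=0.5, steps=200_000):
    a = a0
    for n in range(1, steps + 1):
        phi = n ** (-b)                      # phi_n -> 0, sum phi_n = infinity
        a = phi * u + (1 - phi) * T(a)
    return a

# T(x) = x/2 is nonexpansive on R with F(T) = {0}; the iterates tend to 0.
a = halpern(lambda x: x / 2, a0=10.0, u=1.0)
assert abs(a) < 0.01
```

Note that conditions $C_1$ and $C_2$ hold for $\varphi_n=n^{-b}$ with $b\in(0,1)$; the example only illustrates the scheme, it is not a substitute for the convergence theorems discussed below.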
In 1977, Lions \cite{Lions77} showed that the iteration process $\left( \ref{h}\right) $ converges strongly to a fixed point of $T$ when $\left\{ \varphi _{n}\right\} _{n\in \mathbb{N} }$ satisfies the first three of the following conditions: $\left( C_{1}\right) $ $\ \lim_{n\rightarrow \infty }\varphi _{n}=0;$ $\left( C_{2}\right) \sum_{n=1}^{\infty }\varphi _{n}=\infty ;$ $\left( C_{3}\right) \ \lim_{n\rightarrow \infty }\frac{\varphi _{n+1}-\varphi _{n}}{\varphi _{n+1}^{2}}=0$; $\left( C_{4}\right) $ $\sum_{n=1}^{\infty }\left\vert \varphi _{n+1}-\varphi _{n}\right\vert <\infty ;$ $\left( C_{5}\right) $ $\lim_{n\rightarrow \infty }\frac{\varphi _{n+1}-\varphi _{n}}{\varphi _{n+1}}=0$; $\left( C_{6}\right) $ $\left\vert \varphi _{n+1}-\varphi _{n}\right\vert \leq o\left( \varphi _{n+1}\right) +\sigma _{n}$, $\sum_{n=1}^{\infty }\sigma _{n}<\infty .$ By varying the above conditions, several authors obtained various results in different spaces. Let us list the main ones as follows: $\left( 1\right) $ In \cite{Wittmann92}, Wittmann showed that the sequence $\left\{ a_{n}\right\} _{n\in \mathbb{N} }$ converges strongly to a fixed point of $T$ under the conditions $C_{1}$, $ C_{2} $ and $C_{4}$. $\left( 2\right) $ In \cite{Reich80, Reich94}, Reich showed that the sequence $\left\{ a_{n}\right\} _{n\in \mathbb{N} }$ converges strongly to a fixed point of $T$ in uniformly smooth Banach spaces under the conditions $C_{1}$, $C_{2}$ and $C_{6}$. $\left( 3\right) $ In \cite{Shioji97}, Shioji and Takahashi showed that the sequence $\left\{ a_{n}\right\} _{n\in \mathbb{N} }$ converges strongly to a fixed point of $T$ in Banach spaces with uniformly G\^{a}teaux differentiable norms under the conditions\ $C_{1}$, $ C_{2} $ and $C_{4}$. $\left( 4\right) $ In \cite{Xu02}, Xu showed that the sequence $\left\{ a_{n}\right\} _{n\in \mathbb{N} }$ converges strongly to a fixed point of $T$ under the conditions\ $C_{1}$, $ C_{2}$ and $C_{5}$.
Are the conditions $C_{1}$ and $C_{2}$ enough to guarantee the strong convergence of the iteration process $\left( \ref{h}\right) $ for quasi-nonexpansive mappings (see \cite{Halpern67})? This question was answered positively by several authors; see \cite{Chidume06, Suzuki07, Hu08, Song09, Saejung10, Nilsrakoo11, Li13, Pang13}. However, in \cite{Suzuki09} it was shown that the answer to this open question is negative for nonexpansive mappings in Hilbert spaces. The effective domain and the range of $A:B\rightarrow 2^{B}$ are denoted by $dom\left( A\right) =\left\{ a\in B:Aa\neq \varnothing \right\} $ and $R\left( A\right) $, respectively. Let $J:B\rightarrow 2^{B^{\ast }}$ be the duality mapping. The map $A$ is said to be accretive if, for all $a,b\in dom\left( A\right) $ and all $x\in Aa$, $y\in Ab$, there exists $j\in J\left( a-b\right) $ such that $\left\langle x-y\text{, }j\right\rangle \geq 0$. If $R\left( I+rA\right) =B$ for each $r>0$, then the accretive map $A$ is called an $m$-accretive operator. Throughout this paper, let $A:B\rightarrow 2^{B}$ be an accretive operator that has a zero. For $r>0$, we can define the single-valued mapping $J_{r}=\left( I+rA\right) ^{-1}:B\rightarrow dom\left( A\right) $, called the resolvent of $A$. Let $A^{-1}\left( 0\right) =\left\{ a\in B:0\in Aa\right\} $.
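The resolvent can be made concrete in the one-dimensional case. The sketch below assumes, for illustration only, the single-valued accretive operator $A(x)=x-1$ on $\mathbb{R}$, for which $A^{-1}(0)=\{1\}$; solving $y+rA(y)=x$ gives the closed form $J_{r}(x)=(x+r)/(1+r)$, and the fixed points of $J_{r}$ are exactly the zeros of $A$.

```python
# Minimal sketch of the resolvent J_r = (I + rA)^{-1} on the real line.
# Illustrative assumption: A(x) = x - 1 is a single-valued accretive
# (monotone) operator on R with A^{-1}(0) = {1}.
# Solving y + r*A(y) = x gives the closed form J_r(x) = (x + r)/(1 + r).

def J(r, x):
    return (x + r) / (1 + r)

# The fixed points of J_r are exactly the zeros of A, for every r > 0:
for r in (0.5, 1.0, 10.0):
    assert abs(J(r, 1.0) - 1.0) < 1e-12   # 1 is a fixed point of each J_r

# Iterating the resolvent (the proximal point iteration) drives the
# iterates toward the zero of A:
x = 5.0
for _ in range(100):
    x = J(1.0, x)
print(x)  # close to 1
```

This one-line example also illustrates the identity $A^{-1}(0)=F(J_{r})$ recalled below.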
It is known that $A^{-1}\left( 0\right) =F\left( J_{r}\right) $ for all $r>0$; see \cite{Yao09, Takahashi00}. Let $B$ be a reflexive, smooth and strictly convex Banach space and let $C$ be a nonempty, closed and convex subset $\left( ccs\right) $ of $B$. Under these conditions, for any $a\in B$ there exists a unique point $z\in C$ such that \begin{equation*} \left\Vert z-a\right\Vert =\min_{t\in C}\left\Vert t-a\right\Vert \text{;} \end{equation*} see \cite{Takahashi00}. \begin{definition} \label{d1}\cite{Takahashi00} If $P_{C}a=z$, then the map $P_{C}:B\rightarrow C$ is called the metric projection. \end{definition} Assume that $a\in B$ and $z\in C$; then $z=P_{C}a$ if and only if $\left\langle z-t,J\left( a-z\right) \right\rangle \geq 0$ for all $t\in C$. In a real Hilbert space $H$, the metric projection $P_{C}:H\rightarrow C$ is nonexpansive; however, in a Banach space $B$ the projection $P_{C}:B\rightarrow C$ need not be nonexpansive, where $C$ is a nonempty, closed and convex subset of $B$; see \cite{Goebel84}. \begin{definition} \cite{Reich73} Let $C\subset D$ be subsets of a Banach space $B$. A mapping $Q:C\rightarrow D$ is said to be sunny if $Q\left( \delta x+\left( 1-\delta \right) Qx\right) =Qx$ for each $x\in C$ and $\delta \in \left[ 0,1\right) $. \end{definition} $Q$ is said to be a retraction if and only if $Q^{2}=Q$. $Q$ is a sunny nonexpansive retraction if it is sunny, nonexpansive and a retraction. In what follows, we will need the following lemmas in order to prove the main results. \begin{lemma} \label{l1}\cite{Xu02} Let $B$ be a Banach space with weakly sequentially continuous duality mapping $J_{\phi }$. Then \begin{equation*} \psi \left( \left\Vert a+b\right\Vert \right) \leq \psi \left( \left\Vert a\right\Vert \right) +2\left\langle b,j_{\phi }\left( a+b\right) \right\rangle \end{equation*} for $a,b\in B$.
Taking $J$ in place of $J_{\phi }$, we have \begin{equation*} \left\Vert a+b\right\Vert ^{2}\leq \left\Vert a\right\Vert ^{2}+2\left\langle b,j\left( a+b\right) \right\rangle \end{equation*} for $a,b\in B$. \end{lemma} \begin{lemma} \label{l2}\cite{Gossez72} Let $B$ be a Banach space with weakly sequentially continuous duality mapping $J_{\phi }$ and let $C$ be a $ccs$ of $B$. Let $T:C\rightarrow C$ be a nonexpansive operator with $F\left( T\right) \neq \varnothing $. Then, for each $u\in C$, there exists $a\in F\left( T\right) $ such that \begin{equation*} \left\langle u-a,J\left( b-a\right) \right\rangle \leq 0 \end{equation*} for all $b\in F\left( T\right) $. \end{lemma} \begin{lemma} \label{l3}\cite{Xu022} Let $B$ be a reflexive Banach space with weakly sequentially continuous duality mapping $J_{\phi }$ and let $C$ be a $ccs$ of $B$. Assume that $T:C\rightarrow C$ is a nonexpansive operator. Let $z_{t}\in C$ be the unique solution in $C$ of the equation $z_{t}=tu+\left( 1-t\right) Tz_{t}$, where $u\in C$ and $t\in \left( 0,1\right) $. Then $T$ has a fixed point if and only if $\left\{ z_{t}\right\} _{t\in \left( 0,1\right) }$ remains bounded as $t\rightarrow 0^{+}$, and in this case $\left\{ z_{t}\right\} _{t\in \left( 0,1\right) }$ converges strongly, as $t\rightarrow 0^{+}$, to a fixed point of $T$. If we define the sunny nonexpansive retraction $Q:C\rightarrow F\left( T\right) $ by \begin{equation*} Q\left( u\right) =\lim_{t\rightarrow 0}z_{t}\text{,} \end{equation*} then $Q\left( u\right) $ solves the variational inequality \begin{equation*} \left\langle u-Q\left( u\right) ,J_{\phi }\left( b-Q\left( u\right) \right) \right\rangle \leq 0\text{, }u\in C\text{ and }b\in F\left( T\right) \text{.} \end{equation*} \end{lemma} One of the most useful and remarkable results in the theory of nonexpansive mappings is the demiclosedness principle. It is defined as follows.
\begin{definition} \cite{Pang13} Let $B$ be a Banach space, $C$ a nonempty subset of $B$, and $T:C\rightarrow B$ a mapping. The mapping $T$ is said to be demiclosed at the origin if, for any sequence $\{a_{n}\}_{n\in \mathbb{N} }$ in $C$, $a_{n}\rightharpoonup p$ and $\left\Vert Ta_{n}-a_{n}\right\Vert \rightarrow 0$ imply that $Tp=p$. \end{definition} \begin{lemma} \label{l4}\cite{Browder68} Let $B$ be a reflexive Banach space having a weakly sequentially continuous duality mapping $J_{\phi }$ with a gauge function $\phi $, let $C$ be a $ccs$ of $B$ and let $T:C\rightarrow B$ be a nonexpansive mapping. Then $I-T$ is demiclosed at each $p\in B$; i.e., for any sequence $\{a_{n}\}_{n\in \mathbb{N} }$ in $C$ which converges weakly to $a$ and such that $(I-T)a_{n}\rightarrow p$ strongly, it follows that $(I-T)a=p$. (Here $I$ is the identity operator of $B$ into itself.) In particular, taking $p=0$, we obtain $a\in F\left( T\right) $. \end{lemma} \begin{lemma} \label{l5}\cite{Park94} Let $\left\{ \mu _{n}\right\} _{n\in \mathbb{N} }$ be a nonnegative real sequence satisfying the inequality \begin{equation*} \mu _{n+1}\leq \left( 1-\varphi _{n}\right) \mu _{n}+\varphi _{n}\epsilon _{n}\text{,} \end{equation*} and assume that $\left\{ \varphi _{n}\right\} _{n\in \mathbb{N} }$ and $\left\{ \epsilon _{n}\right\} _{n\in \mathbb{N} }$ satisfy the following conditions: $\left( 1\right) $ $\left\{ \varphi _{n}\right\} _{n\in \mathbb{N} }\subset \left[ 0,1\right] $ and $\dsum\limits_{n=1}^{\infty }\varphi _{n}=\infty $, $\left( 2\right) $ $\lim \sup_{n\rightarrow \infty }\epsilon _{n}\leq 0$, or $\left( 3\right) $ $\dsum\limits_{n=1}^{\infty }\varphi _{n}\epsilon _{n}<\infty $. Then $\lim_{n\rightarrow \infty }\mu _{n}=0$. \end{lemma} \begin{lemma} \label{l6}\cite{Takahashi00} Let $B$ be a real Banach space, and let $A$ be an $m$-accretive operator on $B$. For $t>0$, let $J_{t}$ be the resolvent operator related to $A$ and $t$.
Then \begin{equation*} \left\Vert J_{k}a-J_{l}a\right\Vert \leq \frac{\left\vert k-l\right\vert }{k}\left\Vert a-J_{k}a\right\Vert \text{, for all }k,l>0\text{ and }a\in B\text{.} \end{equation*} \end{lemma} \begin{lemma} \label{l7}\cite{Mainge08} Let $\left\{ \mu _{n}\right\} _{n\in \mathbb{N} }$ be a sequence of real numbers such that there exists a subsequence $\left\{ \mu _{n_{i}}\right\} _{i\in \mathbb{N} }$ of $\left\{ \mu _{n}\right\} _{n\in \mathbb{N} }$ which satisfies $\mu _{n_{i}}<\mu _{n_{i}+1}$ for all $i\geq 0$. Consider the sequence $\left\{ \eta _{\left( n\right) }\right\} _{n\geq n_{0}}\subset \mathbb{N} $ defined by \begin{equation*} \eta _{\left( n\right) }=\max \left\{ k\leq n:\mu _{k}\leq \mu _{k+1}\right\} \text{.} \end{equation*} Then $\left\{ \eta _{\left( n\right) }\right\} _{n\geq n_{0}}$ is a nondecreasing sequence satisfying $\lim_{n\rightarrow \infty }\eta _{\left( n\right) }=\infty $. Moreover, for all $n\geq n_{0}$ it holds that $\mu _{\eta _{\left( n\right) }}\leq \mu _{\eta _{\left( n\right) +1}}$ and $\mu _{n}\leq \mu _{\eta _{\left( n\right) +1}}$. \end{lemma} \begin{lemma} \label{l8}\cite{Chang10} Let $B$ be a uniformly convex Banach space and let $t>0$ be a constant. Then there exists a continuous, strictly increasing and convex function $g:\left[ 0,2t\right) \rightarrow \left[ 0,\infty \right) $ such that \begin{equation*} \left\Vert \dsum\limits_{i=0}^{\infty }\rho _{i}a_{i}\right\Vert ^{2}\leq \dsum\limits_{i=0}^{\infty }\rho _{i}\left\Vert a_{i}\right\Vert ^{2}-\rho _{k}\rho _{l}g\left( \left\Vert a_{k}-a_{l}\right\Vert \right) \end{equation*} for all $k,l\geq 0$, $a_{i}\in B_{t}=\left\{ z\in B:\left\Vert z\right\Vert \leq t\right\} $ and $\rho _{i}\in \left( 0,1\right) $ for $i\geq 0$, with $\dsum\limits_{i=0}^{\infty }\rho _{i}=1$. \end{lemma} \section{Main results} \begin{theorem} Let $B$ be a real uniformly convex Banach space having the normalized duality mapping $J$ and let $C$ be a $ccs$ of $B$.
Assume that $\left\{ T_{i}\right\} _{i\in \mathbb{N} \cup \left\{ 0\right\} }$ is an infinite family of quasi-nonexpansive mappings $T_{i}:C\rightarrow C$ such that $F=\dbigcap\limits_{i=0}^{\infty }F\left( T_{i}\right) \neq \varnothing $ and, for each $i\geq 0$, $T_{i}-I$ is demiclosed at zero. Let $\left\{ v_{n}\right\} _{n\in \mathbb{N} }$ be a sequence generated by \begin{equation} \left\{ \begin{array}{l} v_{1}\text{, }u\in C\text{ arbitrarily chosen,} \\ v_{n+1}=\xi _{n}u+\left( 1-\zeta _{n}\right) T_{0}v_{n}+\left( \zeta _{n}-\xi _{n}\right) T_{0}w_{n}\text{,} \\ w_{n}=\varphi _{n,0}v_{n}+\dsum\limits_{i=1}^{\infty }\varphi _{n,i}T_{i}v_{n}\text{, }n\geq 0\text{,} \end{array} \right. \label{itnew} \end{equation} where $\left\{ \zeta _{n}\right\} _{n\in \mathbb{N} }$, $\left\{ \xi _{n}\right\} _{n\in \mathbb{N} }$ and $\left\{ \varphi _{n,i}\right\} _{n\in \mathbb{N} ,i\in \mathbb{N} \cup \left\{ 0\right\} }$ are sequences in $\left[ 0,1\right] $ satisfying the following control conditions: \qquad $\left( 1\right) ~\lim_{n\rightarrow \infty }\xi _{n}=0$; \qquad $\left( 2\right) ~\dsum\limits_{n=1}^{\infty }\xi _{n}=\infty $; \qquad $\left( 3\right) ~\varphi _{n,0}+\dsum\limits_{i=1}^{\infty }\varphi _{n,i}=1$, for all $n\in \mathbb{N} $; \qquad $\left( 4\right) ~\lim \inf_{n\rightarrow \infty }\zeta _{n}\varphi _{n,0}\varphi _{n,i}>0$, for all $i\in \mathbb{N} $. Then $\left\{ v_{n}\right\} _{n\in \mathbb{N} }$ converges strongly, as $n\rightarrow \infty $, to $P_{F}u$, where $P_{F}:B\rightarrow F$ is the metric projection. \end{theorem} \begin{proof} The proof consists of three steps. Step 1. We prove that $\left\{ v_{n}\right\} _{n\in \mathbb{N} }$, $\left\{ w_{n}\right\} _{n\in \mathbb{N} }$ and $\left\{ T_{i}v_{n}\right\} _{n\in \mathbb{N} ,i\in \mathbb{N} \cup \left\{ 0\right\} }$ are bounded.
Firstly, we show that $\left\{ v_{n}\right\} _{n\in \mathbb{N} }$ is bounded. Let $p\in F$ be fixed. By Lemma \ref{l8}, we have the following inequality \begin{eqnarray} \left\Vert w_{n}-p\right\Vert ^{2} &=&\left\Vert \varphi _{n,0}v_{n}+\dsum\limits_{i=1}^{\infty }\varphi _{n,i}T_{i}v_{n}-p\right\Vert ^{2} \notag \\ &\leq &\varphi _{n,0}\left\Vert v_{n}-p\right\Vert ^{2}+\dsum\limits_{i=1}^{\infty }\varphi _{n,i}\left\Vert T_{i}v_{n}-p\right\Vert ^{2}-\varphi _{n,0}\varphi _{n,i}g\left( \left\Vert v_{n}-T_{i}v_{n}\right\Vert \right) \notag \\ &\leq &\varphi _{n,0}\left\Vert v_{n}-p\right\Vert ^{2}+\dsum\limits_{i=1}^{\infty }\varphi _{n,i}\left\Vert v_{n}-p\right\Vert ^{2}-\varphi _{n,0}\varphi _{n,i}g\left( \left\Vert v_{n}-T_{i}v_{n}\right\Vert \right) \notag \\ &=&\left\Vert v_{n}-p\right\Vert ^{2}-\varphi _{n,0}\varphi _{n,i}g\left( \left\Vert v_{n}-T_{i}v_{n}\right\Vert \right) \label{1} \\ &\leq &\left\Vert v_{n}-p\right\Vert ^{2}\text{.} \notag \end{eqnarray} This shows that \begin{eqnarray*} \left\Vert v_{n+1}-p\right\Vert &=&\left\Vert \xi _{n}u+\left( 1-\zeta _{n}\right) T_{0}v_{n}+\left( \zeta _{n}-\xi _{n}\right) T_{0}w_{n}-p\right\Vert \\ &\leq &\xi _{n}\left\Vert u-p\right\Vert +\left( 1-\zeta _{n}\right) \left\Vert T_{0}v_{n}-p\right\Vert +\left( \zeta _{n}-\xi _{n}\right) \left\Vert T_{0}w_{n}-p\right\Vert \\ &\leq &\xi _{n}\left\Vert u-p\right\Vert +\left( 1-\zeta _{n}\right) \left\Vert v_{n}-p\right\Vert +\left( \zeta _{n}-\xi _{n}\right) \left\Vert w_{n}-p\right\Vert \\ &\leq &\xi _{n}\left\Vert u-p\right\Vert +\left( 1-\xi _{n}\right) \left\Vert v_{n}-p\right\Vert \\ &\leq &\max \left\{ \left\Vert u-p\right\Vert \text{, }\left\Vert v_{n}-p\right\Vert \right\} \text{.} \end{eqnarray*} Continuing by induction, we obtain \begin{equation*} \left\Vert v_{n+1}-p\right\Vert \leq \max \left\{ \left\Vert u-p\right\Vert \text{, }\left\Vert v_{1}-p\right\Vert \right\} \text{, }\forall n\in \mathbb{N} \text{.}
\end{equation*} Therefore, $\left\Vert v_{n+1}-p\right\Vert $ is bounded, which implies that $\left\{ v_{n}\right\} _{n\in \mathbb{N} }$ is bounded. Furthermore, it is easily shown that $\left\{ T_{i}v_{n}\right\} _{n\in \mathbb{N} ,i\in \mathbb{N} \cup \left\{ 0\right\} }$ and $\left\{ w_{n}\right\} _{n\in \mathbb{N} }$ are bounded too. Step 2. We show that, for any $n\in \mathbb{N} $, \begin{equation} \left\Vert v_{n+1}-z\right\Vert ^{2}\leq \left( 1-\xi _{n}\right) \left\Vert v_{n}-z\right\Vert ^{2}+2\xi _{n}\left\langle u-z,J\left( v_{n+1}-z\right) \right\rangle \text{.} \label{2} \end{equation} By considering $\left( \ref{1}\right) $, we have \begin{equation} \left\Vert w_{n}-z\right\Vert ^{2}=\left\Vert v_{n}-z\right\Vert ^{2}-\varphi _{n,0}\varphi _{n,i}g\left( \left\Vert v_{n}-T_{i}v_{n}\right\Vert \right) \text{.} \label{3} \end{equation} Then $\left( \ref{3}\right) $ implies that \begin{eqnarray} \left\Vert v_{n+1}-z\right\Vert ^{2} &=&\left\Vert \xi _{n}u+\left( 1-\zeta _{n}\right) T_{0}v_{n}+\left( \zeta _{n}-\xi _{n}\right) T_{0}w_{n}-z\right\Vert ^{2} \notag \\ &\leq &\xi _{n}\left\Vert u-z\right\Vert ^{2}+\left( 1-\zeta _{n}\right) \left\Vert T_{0}v_{n}-z\right\Vert ^{2}+\left( \zeta _{n}-\xi _{n}\right) \left\Vert T_{0}w_{n}-z\right\Vert ^{2} \notag \\ &\leq &\xi _{n}\left\Vert u-z\right\Vert ^{2}+\left( 1-\zeta _{n}\right) \left\Vert v_{n}-z\right\Vert ^{2} \label{4} \\ &&+\left( \zeta _{n}-\xi _{n}\right) \left[ \left\Vert v_{n}-z\right\Vert ^{2}-\varphi _{n,0}\varphi _{n,i}g\left( \left\Vert v_{n}-T_{i}v_{n}\right\Vert \right) \right] \notag \\ &=&\xi _{n}\left\Vert u-z\right\Vert ^{2}+\left( 1-\xi _{n}\right) \left\Vert v_{n}-z\right\Vert ^{2} \notag \\ &&-\zeta _{n}\varphi _{n,0}\varphi _{n,i}g\left( \left\Vert v_{n}-T_{i}v_{n}\right\Vert \right) +\xi _{n}\varphi _{n,0}\varphi _{n,i}g\left( \left\Vert v_{n}-T_{i}v_{n}\right\Vert \right) \text{.} \notag \end{eqnarray} Assume that $K_{1}=\sup_{n\in \mathbb{N} }\left\{ \left\vert \left\Vert u-z\right\Vert ^{2}-\left\Vert v_{n}-z\right\Vert ^{2}\right\vert +\xi _{n}\varphi _{n,0}\varphi _{n,i}g\left( \left\Vert v_{n}-T_{i}v_{n}\right\Vert \right) \right\} $. It follows from $\left( \ref{4}\right) $ that \begin{equation} \zeta _{n}\varphi _{n,0}\varphi _{n,i}g\left( \left\Vert v_{n}-T_{i}v_{n}\right\Vert \right) \leq \left\Vert v_{n}-z\right\Vert ^{2}-\left\Vert v_{n+1}-z\right\Vert ^{2}+\xi _{n}K_{1}\text{.} \label{5} \end{equation} By Lemma \ref{l1} and $\left( \ref{1}\right) $, we have \begin{eqnarray*} \left\Vert v_{n+1}-z\right\Vert ^{2} &=&\left\Vert \xi _{n}u+\left( 1-\zeta _{n}\right) T_{0}v_{n}+\left( \zeta _{n}-\xi _{n}\right) T_{0}w_{n}-z\right\Vert ^{2} \\ &=&\left\Vert \xi _{n}\left( u-z\right) +\left( 1-\zeta _{n}\right) \left( T_{0}v_{n}-z\right) +\left( \zeta _{n}-\xi _{n}\right) \left( T_{0}w_{n}-z\right) \right\Vert ^{2} \\ &\leq &\left\Vert \left( 1-\zeta _{n}\right) \left( T_{0}v_{n}-z\right) +\left( \zeta _{n}-\xi _{n}\right) \left( T_{0}w_{n}-z\right) \right\Vert ^{2} \\ &&+2\left\langle \xi _{n}\left( u-z\right) ,J\left( v_{n+1}-z\right) \right\rangle \\ &\leq &\left( 1-\zeta _{n}\right) \left\Vert T_{0}v_{n}-z\right\Vert ^{2}+\left( \zeta _{n}-\xi _{n}\right) \left\Vert T_{0}w_{n}-z\right\Vert ^{2} \\ &&+2\left\langle \xi _{n}\left( u-z\right) ,J\left( v_{n+1}-z\right) \right\rangle \\ &\leq &\left( 1-\zeta _{n}\right) \left\Vert v_{n}-z\right\Vert ^{2}+\left( \zeta _{n}-\xi _{n}\right) \left\Vert w_{n}-z\right\Vert ^{2} \\ &&+2\xi _{n}\left\langle u-z,J\left( v_{n+1}-z\right) \right\rangle \\ &\leq &\left( 1-\zeta _{n}\right) \left\Vert v_{n}-z\right\Vert ^{2}+\left( \zeta _{n}-\xi _{n}\right) \left\Vert v_{n}-z\right\Vert ^{2} \\ &&+2\xi _{n}\left\langle u-z,J\left( v_{n+1}-z\right) \right\rangle \\ &=&\left( 1-\xi _{n}\right) \left\Vert v_{n}-z\right\Vert ^{2}+2\xi _{n}\left\langle u-z,J\left( v_{n+1}-z\right) \right\rangle \text{.} \end{eqnarray*} Step 3.
We show that $v_{n}\rightarrow z$ as $n\rightarrow \infty $. For this step, we will examine two cases. Case 1. Suppose that there exists $n_{0}\in \mathbb{N} $ such that $\left\{ \left\Vert v_{n}-z\right\Vert \right\} _{n\geq n_{0}}$ is nonincreasing. Then the sequence $\left\{ \left\Vert v_{n}-z\right\Vert \right\} _{n\in \mathbb{N} }$ is convergent, and it is clear that $\left\Vert v_{n}-z\right\Vert ^{2}-\left\Vert v_{n+1}-z\right\Vert ^{2}\rightarrow 0$ as $n\rightarrow \infty $. In view of condition $\left( 4\right) $ and $\left( \ref{5}\right) $, we have \begin{equation*} \lim_{n\rightarrow \infty }g\left( \left\Vert v_{n}-T_{i}v_{n}\right\Vert \right) =0\text{.} \end{equation*} From the properties of $g$, we have \begin{equation*} \lim_{n\rightarrow \infty }\left\Vert v_{n}-T_{i}v_{n}\right\Vert =0\text{.} \end{equation*} Also, we can write the differences $w_{n}-v_{n}$ and $v_{n+1}-w_{n}$ as follows: \begin{eqnarray} w_{n}-v_{n} &=&\varphi _{n,0}v_{n}+\dsum\limits_{i=1}^{\infty }\varphi _{n,i}T_{i}v_{n}-v_{n} \notag \\ &=&\dsum\limits_{i=1}^{\infty }\varphi _{n,i}\left( T_{i}v_{n}-v_{n}\right) \text{,} \label{6} \end{eqnarray} and \begin{equation*} v_{n+1}-w_{n}=\xi _{n}u+\left( 1-\zeta _{n}\right) T_{0}v_{n}+\left( \zeta _{n}-\xi _{n}\right) T_{0}w_{n}-w_{n}\text{,} \end{equation*} \begin{eqnarray} \left\Vert v_{n+1}-w_{n}\right\Vert &=&\left\Vert \xi _{n}\left( u-T_{0}w_{n}\right) +\zeta _{n}\left( T_{0}w_{n}-T_{0}v_{n}\right) +\left( T_{0}v_{n}-w_{n}\right) \right\Vert \notag \\ &\leq &\xi _{n}\left\Vert u-T_{0}w_{n}\right\Vert +\zeta _{n}\left\Vert T_{0}v_{n}-T_{0}w_{n}\right\Vert +\left\Vert T_{0}v_{n}-w_{n}\right\Vert \notag \\ &\leq &\xi _{n}\left\Vert u-T_{0}w_{n}\right\Vert +\zeta _{n}\left\Vert v_{n}-w_{n}\right\Vert +\left\Vert T_{0}v_{n}-w_{n}\right\Vert \text{.} \label{7} \end{eqnarray} These imply that \begin{equation} \lim_{n\rightarrow \infty }\left\Vert v_{n+1}-w_{n}\right\Vert =0\text{\quad and\quad }\lim_{n\rightarrow \infty }\left\Vert w_{n}-v_{n}\right\Vert =0\text{.} \label{k} \end{equation} By the expressions in $\left( \ref{k}\right) $, we obtain \begin{equation*} \left\Vert v_{n+1}-v_{n}\right\Vert \leq \left\Vert w_{n}-v_{n}\right\Vert +\left\Vert v_{n+1}-w_{n}\right\Vert \text{.} \end{equation*} This implies that \begin{equation} \lim_{n\rightarrow \infty }\left\Vert v_{n+1}-v_{n}\right\Vert =0\text{.} \label{8} \end{equation} Previously, we have shown that the sequence $\left\{ v_{n}\right\} _{n\in \mathbb{N} }$ is bounded. Therefore, there exists a subsequence $\left\{ v_{n_{j}}\right\} _{j\in \mathbb{N} }$ of $\left\{ v_{n}\right\} _{n\in \mathbb{N} }$ such that $v_{n_{j}+1}\rightharpoonup l$. By the demiclosedness principle at zero, it is concluded that $l\in F$. Considering the above facts and Definition \ref{d1}, we obtain \begin{eqnarray} \limsup_{n\rightarrow \infty }\left\langle u-z,J\left( v_{n+1}-z\right) \right\rangle &=&\lim_{j\rightarrow \infty }\left\langle u-z,J\left( v_{n_{j}+1}-z\right) \right\rangle \notag \\ &=&\left\langle u-z,J\left( l-z\right) \right\rangle \label{9} \\ &=&\left\langle u-P_{F}u,J\left( l-P_{F}u\right) \right\rangle \notag \\ &\leq &0\text{.} \notag \end{eqnarray} By Lemma \ref{l5}, we have the desired result. Case 2.
Let $\left\{ n_{j}\right\} _{j\in \mathbb{N} }$ be a subsequence of $\left\{ n\right\} _{n\in \mathbb{N} }$ such that \begin{equation*} \left\Vert v_{n_{j}}-z\right\Vert \leq \left\Vert v_{n_{j}+1}-z\right\Vert \text{, for all }j\in \mathbb{N} \text{.} \end{equation*} Then, in view of Lemma \ref{l7}, there exists a nondecreasing sequence $\left\{ m_{k}\right\} _{k\in \mathbb{N} }\subset \mathbb{N} $ such that \begin{equation*} \left\Vert z-v_{m_{k}}\right\Vert <\left\Vert z-v_{m_{k}+1}\right\Vert \text{\quad and\quad }\left\Vert z-v_{k}\right\Vert \leq \left\Vert z-v_{m_{k}+1}\right\Vert \text{, }\forall k\in \mathbb{N} \text{.} \end{equation*} Rewriting inequality $\left( \ref{5}\right) $ along this subsequence, we have \begin{eqnarray*} \zeta _{m_{k}}\varphi _{m_{k},0}\varphi _{m_{k},i}g\left( \left\Vert v_{m_{k}}-T_{i}v_{m_{k}}\right\Vert \right) &\leq &\left\Vert v_{m_{k}}-z\right\Vert ^{2}-\left\Vert v_{m_{k}+1}-z\right\Vert ^{2}+\xi _{m_{k}}K_{1} \\ &\leq &\xi _{m_{k}}K_{1}\text{, }\forall k\in \mathbb{N} \text{.} \end{eqnarray*} Considering the conditions $\left( 1\right) $ and $\left( 2\right) $, we obtain \begin{equation*} \lim_{k\rightarrow \infty }g\left( \left\Vert v_{m_{k}}-T_{i}v_{m_{k}}\right\Vert \right) =0\text{.} \end{equation*} It follows that \begin{equation*} \lim_{k\rightarrow \infty }\left\Vert v_{m_{k}}-T_{i}v_{m_{k}}\right\Vert =0\text{.} \end{equation*} Therefore, using the same argument as in Case 1, we have \begin{equation*} \limsup_{k\rightarrow \infty }\left\langle u-z,J\left( v_{m_{k}}-z\right) \right\rangle =\limsup_{k\rightarrow \infty }\left\langle u-z,J\left( v_{m_{k}+1}-z\right) \right\rangle \leq 0\text{.} \end{equation*} Using $\left( \ref{2}\right) $, we get \begin{equation*} \left\Vert v_{m_{k}+1}-z\right\Vert ^{2}\leq \left( 1-\xi _{m_{k}}\right) \left\Vert v_{m_{k}}-z\right\Vert ^{2}+2\xi _{m_{k}}\left\langle u-z,J\left( v_{m_{k}+1}-z\right) \right\rangle \text{.} \end{equation*} Previously, we have shown that the inequality $\left\Vert v_{m_{k}}-z\right\Vert \leq \left\Vert v_{m_{k}+1}-z\right\Vert $ holds, and hence \begin{eqnarray*} \xi _{m_{k}}\left\Vert v_{m_{k}}-z\right\Vert ^{2} &\leq &\left\Vert v_{m_{k}}-z\right\Vert ^{2}-\left\Vert v_{m_{k}+1}-z\right\Vert ^{2}+2\xi _{m_{k}}\left\langle u-z,J\left( v_{m_{k}+1}-z\right) \right\rangle \\ &\leq &2\xi _{m_{k}}\left\langle u-z,J\left( v_{m_{k}+1}-z\right) \right\rangle \text{.} \end{eqnarray*} Hence, we get \begin{equation} \lim_{k\rightarrow \infty }\left\Vert v_{m_{k}}-z\right\Vert =0\text{.} \label{10} \end{equation} Considering the expressions $\left( \ref{9}\right) $ and $\left( \ref{10}\right) $, we obtain \begin{equation*} \lim_{k\rightarrow \infty }\left\Vert v_{m_{k}+1}-z\right\Vert =0\text{.} \end{equation*} Finally, since $\left\Vert v_{k}-z\right\Vert \leq \left\Vert v_{m_{k}+1}-z\right\Vert $ for all $k\in \mathbb{N} $ and $v_{m_{k}}\rightarrow z$ as $k\rightarrow \infty $, we conclude that $v_{n}\rightarrow z$ as $n\rightarrow \infty $. \end{proof} We obtain the following corollary for a single mapping. \begin{corollary} Let $B$ be a real uniformly convex Banach space having the normalized duality mapping $J$ and let $C$ be a $ccs$ of $B$. Assume that $T:C\rightarrow C$ is a quasi-nonexpansive mapping, $F$ is the set of fixed points of $T$, and $T-I$ is demiclosed at zero. Let $\left\{ v_{n}\right\} _{n\in \mathbb{N} }$ be a sequence generated by \begin{equation*} \left\{ \begin{array}{l} v_{1}\text{, }u\in C\text{ arbitrarily chosen,} \\ v_{n+1}=\xi _{n}u+\left( 1-\zeta _{n}\right) Tv_{n}+\left( \zeta _{n}-\xi _{n}\right) Tw_{n}\text{,} \\ w_{n}=\left( 1-\varphi _{n}\right) v_{n}+\varphi _{n}Tv_{n}\text{, }n\geq 0\text{,} \end{array} \right.
\end{equation*} where $\left\{ \zeta _{n}\right\} _{n\in \mathbb{N} }$, $\left\{ \xi _{n}\right\} _{n\in \mathbb{N} }$ and $\left\{ \varphi _{n}\right\} _{n\in \mathbb{N} }$ are sequences in $\left[ 0,1\right] $ satisfying the following control conditions: $\qquad \left( 1\right) ~\lim_{n\rightarrow \infty }\xi _{n}=0$; $\qquad \left( 2\right) ~\dsum\limits_{n=1}^{\infty }\xi _{n}=\infty $; $\qquad \left( 3\right) ~\lim \inf_{n\rightarrow \infty }\zeta _{n}\left( 1-\varphi _{n}\right) \varphi _{n}>0$. Then $\left\{ v_{n}\right\} _{n\in \mathbb{N} }$ converges strongly, as $n\rightarrow \infty $, to $P_{F}u$, where $P_{F}:B\rightarrow F$ is the metric projection. \end{corollary} \begin{theorem} Let $B$ be a real uniformly convex Banach space having a weakly sequentially continuous duality mapping $J_{\phi }$ and let $C$ be a $ccs$ of $B$ such that $\overline{D(A_{i})}\subset C\subset \dbigcap\limits_{r>0}R(I+rA_{i})$ for each $i\in \mathbb{N} $. Assume that $\left\{ A_{i}\right\} _{i\in \mathbb{N} \cup \left\{ 0\right\} }$ is an infinite family of accretive operators satisfying the range condition, and let $r_{n}>0$ and $r>0$ be such that $\lim_{n\rightarrow \infty }r_{n}=r$. Let $J_{r_{n}}^{A_{i}}=(I+r_{n}A_{i})^{-1}$ be the resolvent of $A_{i}$. Let $\left\{ v_{n}\right\} _{n\in \mathbb{N} }$ be a sequence generated by \begin{equation} \left\{ \begin{array}{l} v_{1}\text{, }u\in C\text{ arbitrarily chosen,} \\ v_{n+1}=\xi _{n}u+\left( 1-\zeta _{n}\right) J_{r_{n}}^{A_{0}}v_{n}+\left( \zeta _{n}-\xi _{n}\right) J_{r_{n}}^{A_{0}}w_{n}\text{,} \\ w_{n}=\varphi _{n,0}v_{n}+\dsum\limits_{i=1}^{\infty }\varphi _{n,i}J_{r_{n}}^{A_{i}}v_{n}\text{, }n\geq 0\text{,} \end{array} \right.
\label{res} \end{equation} where $\left\{ \zeta _{n}\right\} _{n\in \mathbb{N} }$, $\left\{ \xi _{n}\right\} _{n\in \mathbb{N} }$ and $\left\{ \varphi _{n,i}\right\} _{n\in \mathbb{N} ,i\in \mathbb{N} \cup \left\{ 0\right\} }$ are sequences in $\left[ 0,1\right] $ satisfying the following control conditions: \qquad $\left( 1\right) ~\lim_{n\rightarrow \infty }\xi _{n}=0$; \qquad $\left( 2\right) ~\dsum\limits_{n=1}^{\infty }\xi _{n}=\infty $; \qquad $\left( 3\right) ~\varphi _{n,0}+\dsum\limits_{i=1}^{\infty }\varphi _{n,i}=1$, for all $n\in \mathbb{N} $; \qquad $\left( 4\right) ~\lim \inf_{n\rightarrow \infty }\zeta _{n}\varphi _{n,0}\varphi _{n,i}>0$, for all $i\in \mathbb{N} $. If $Q_{Z}:B\rightarrow Z$ is the sunny nonexpansive retraction such that $Z=\dbigcap\limits_{i=0}^{\infty }A_{i}^{-1}\left( 0\right) \neq \varnothing $, then $\left\{ v_{n}\right\} _{n\in \mathbb{N} }$ converges strongly, as $n\rightarrow \infty $, to $Q_{Z}u$. \end{theorem} \begin{proof} The proof consists of three steps. We note that $Z$ is closed and convex. Set $z=Q_{Z}u$. Step 1. We prove that $\left\{ v_{n}\right\} _{n\in \mathbb{N} }$, $\left\{ w_{n}\right\} _{n\in \mathbb{N} }$ and $\left\{ J_{r_{n}}^{A_{i}}v_{n}\right\} _{n\in \mathbb{N} ,i\in \mathbb{N} \cup \left\{ 0\right\} }$ are bounded. Firstly, we show that $\left\{ v_{n}\right\} _{n\in \mathbb{N} }$ is bounded. Let $p\in Z$ be fixed.
By Lemma \ref{l8}, we have the following inequality \begin{eqnarray} \left\Vert w_{n}-p\right\Vert ^{2} &=&\left\Vert \varphi _{n,0}v_{n}+\dsum\limits_{i=1}^{\infty }\varphi _{n,i}J_{r_{n}}^{A_{i}}v_{n}-p\right\Vert ^{2} \notag \\ &\leq &\varphi _{n,0}\left\Vert v_{n}-p\right\Vert ^{2}+\dsum\limits_{i=1}^{\infty }\varphi _{n,i}\left\Vert J_{r_{n}}^{A_{i}}v_{n}-p\right\Vert ^{2}-\varphi _{n,0}\varphi _{n,i}g\left( \left\Vert v_{n}-J_{r_{n}}^{A_{i}}v_{n}\right\Vert \right) \notag \\ &\leq &\varphi _{n,0}\left\Vert v_{n}-p\right\Vert ^{2}+\dsum\limits_{i=1}^{\infty }\varphi _{n,i}\left\Vert v_{n}-p\right\Vert ^{2}-\varphi _{n,0}\varphi _{n,i}g\left( \left\Vert v_{n}-J_{r_{n}}^{A_{i}}v_{n}\right\Vert \right) \notag \\ &=&\left\Vert v_{n}-p\right\Vert ^{2}-\varphi _{n,0}\varphi _{n,i}g\left( \left\Vert v_{n}-J_{r_{n}}^{A_{i}}v_{n}\right\Vert \right) \notag \\ &\leq &\left\Vert v_{n}-p\right\Vert ^{2}\text{.} \label{j1} \end{eqnarray} This shows that \begin{eqnarray*} \left\Vert v_{n+1}-p\right\Vert &=&\left\Vert \xi _{n}u+\left( 1-\zeta _{n}\right) J_{r_{n}}^{A_{0}}v_{n}+\left( \zeta _{n}-\xi _{n}\right) J_{r_{n}}^{A_{0}}w_{n}-p\right\Vert \\ &\leq &\xi _{n}\left\Vert u-p\right\Vert +\left( 1-\zeta _{n}\right) \left\Vert J_{r_{n}}^{A_{0}}v_{n}-p\right\Vert +\left( \zeta _{n}-\xi _{n}\right) \left\Vert J_{r_{n}}^{A_{0}}w_{n}-p\right\Vert \\ &\leq &\xi _{n}\left\Vert u-p\right\Vert +\left( 1-\zeta _{n}\right) \left\Vert v_{n}-p\right\Vert +\left( \zeta _{n}-\xi _{n}\right) \left\Vert w_{n}-p\right\Vert \\ &\leq &\xi _{n}\left\Vert u-p\right\Vert +\left( 1-\xi _{n}\right) \left\Vert v_{n}-p\right\Vert \\ &\leq &\max \left\{ \left\Vert u-p\right\Vert \text{, }\left\Vert v_{n}-p\right\Vert \right\} \text{.} \end{eqnarray*} Continuing by induction, we obtain \begin{equation*} \left\Vert v_{n+1}-p\right\Vert \leq \max \left\{ \left\Vert u-p\right\Vert \text{, }\left\Vert v_{1}-p\right\Vert \right\} \text{, }\forall n\in \mathbb{N} \text{.}
\end{equation*} Therefore, $\left\Vert v_{n+1}-p\right\Vert $ is bounded, which implies that $\left\{ v_{n}\right\} _{n\in \mathbb{N} }$ is bounded. Furthermore, it is easily shown that $\left\{ J_{r_{n}}^{A_{i}}v_{n}\right\} _{n\in \mathbb{N} ,i\in \mathbb{N} \cup \left\{ 0\right\} }\ $and $\left\{ w_{n}\right\} _{n\in \mathbb{N} }$ are bounded too. Step 2. We show that, for any $n\in \mathbb{N} $, \begin{equation} \left\Vert v_{n+1}-z\right\Vert ^{2}\leq \left( 1-\xi _{n}\right) \left\Vert v_{n}-z\right\Vert ^{2}+2\xi _{n}\left\langle u-z,J_{\phi }\left( v_{n+1}-z\right) \right\rangle \text{.} \label{j2} \end{equation} By considering $\left( \ref{j1}\right) $, we have \begin{equation} \left\Vert w_{n}-z\right\Vert ^{2}=\left\Vert v_{n}-z\right\Vert ^{2}-\varphi _{n,0}\varphi _{n,i}g\left( \left\Vert v_{n}-J_{r_{n}}^{A_{i}}v_{n}\right\Vert \right) \text{.} \label{j3} \end{equation} Then $\left( \ref{j3}\right) $ implies that \begin{eqnarray} \left\Vert v_{n+1}-z\right\Vert ^{2} &=&\left\Vert \xi _{n}u+\left( 1-\zeta _{n}\right) J_{r_{n}}^{A_{0}}v_{n}+\left( \zeta _{n}-\xi _{n}\right) J_{r_{n}}^{A_{0}}w_{n}-z\right\Vert ^{2} \notag \\ &\leq &\xi _{n}\left\Vert u-z\right\Vert ^{2}+\left( 1-\zeta _{n}\right) \left\Vert J_{r_{n}}^{A_{0}}v_{n}-z\right\Vert ^{2}+\left( \zeta _{n}-\xi _{n}\right) \left\Vert J_{r_{n}}^{A_{0}}w_{n}-z\right\Vert ^{2} \notag \\ &\leq &\xi _{n}\left\Vert u-z\right\Vert ^{2}+\left( 1-\zeta _{n}\right) \left\Vert v_{n}-z\right\Vert ^{2}+\left( \zeta _{n}-\xi _{n}\right) \left[ \left\Vert v_{n}-z\right\Vert ^{2}-\varphi _{n,0}\varphi _{n,i}g\left( \left\Vert v_{n}-J_{r_{n}}^{A_{i}}v_{n}\right\Vert \right) \right] \label{j4} \\ &=&\xi _{n}\left\Vert u-z\right\Vert ^{2}+\left( 1-\xi _{n}\right) \left\Vert v_{n}-z\right\Vert ^{2}-\zeta _{n}\varphi _{n,0}\varphi _{n,i}g\left( \left\Vert v_{n}-J_{r_{n}}^{A_{i}}v_{n}\right\Vert \right) +\xi _{n}\varphi _{n,0}\varphi _{n,i}g\left( \left\Vert v_{n}-J_{r_{n}}^{A_{i}}v_{n}\right\Vert
\right) \text{.} \notag \end{eqnarray} Let $K_{2}=\sup_{n\in \mathbb{N} }\left\{ \left\vert \left\Vert u-z\right\Vert ^{2}-\left\Vert v_{n}-z\right\Vert ^{2}\right\vert +\xi _{n}\varphi _{n,0}\varphi _{n,i}g\left( \left\Vert v_{n}-J_{r_{n}}^{A_{i}}v_{n}\right\Vert \right) \right\} $. It follows from $\left( \ref{j4}\right) $ that \begin{equation} \zeta _{n}\varphi _{n,0}\varphi _{n,i}g\left( \left\Vert v_{n}-J_{r_{n}}^{A_{i}}v_{n}\right\Vert \right) \leq \left\Vert v_{n}-z\right\Vert ^{2}-\left\Vert v_{n+1}-z\right\Vert ^{2}+\xi _{n}K_{2} \text{.} \label{j5} \end{equation} By Lemma \ref{l1} and $\left( \ref{j1}\right) $, we have \begin{eqnarray*} \left\Vert v_{n+1}-z\right\Vert ^{2} &=&\left\Vert \xi _{n}u+\left( 1-\zeta _{n}\right) J_{r_{n}}^{A_{0}}v_{n}+\left( \zeta _{n}-\xi _{n}\right) J_{r_{n}}^{A_{0}}w_{n}-z\right\Vert ^{2} \\ &=&\left\Vert \xi _{n}\left( u-z\right) +\left( 1-\zeta _{n}\right) \left( J_{r_{n}}^{A_{0}}v_{n}-z\right) +\left( \zeta _{n}-\xi _{n}\right) \left( J_{r_{n}}^{A_{0}}w_{n}-z\right) \right\Vert ^{2} \\ &\leq &\left\Vert \left( 1-\zeta _{n}\right) \left( J_{r_{n}}^{A_{0}}v_{n}-z\right) +\left( \zeta _{n}-\xi _{n}\right) \left( J_{r_{n}}^{A_{0}}w_{n}-z\right) \right\Vert ^{2}+2\left\langle \xi _{n}\left( u-z\right) ,J_{\phi }\left( v_{n+1}-z\right) \right\rangle \\ &\leq &\left( 1-\zeta _{n}\right) \left\Vert J_{r_{n}}^{A_{0}}v_{n}-z\right\Vert ^{2}+\left( \zeta _{n}-\xi _{n}\right) \left\Vert J_{r_{n}}^{A_{0}}w_{n}-z\right\Vert ^{2}+2\left\langle \xi _{n}\left( u-z\right) ,J_{\phi }\left( v_{n+1}-z\right) \right\rangle \\ &\leq &\left( 1-\zeta _{n}\right) \left\Vert v_{n}-z\right\Vert ^{2}+\left( \zeta _{n}-\xi _{n}\right) \left\Vert w_{n}-z\right\Vert ^{2}+2\xi _{n}\left\langle u-z,J_{\phi }\left( v_{n+1}-z\right) \right\rangle \\ &\leq &\left( 1-\zeta _{n}\right) \left\Vert v_{n}-z\right\Vert ^{2}+\left( \zeta _{n}-\xi _{n}\right) \left\Vert v_{n}-z\right\Vert ^{2}+2\xi _{n}\left\langle u-z,J_{\phi
}\left( v_{n+1}-z\right) \right\rangle \\ &=&\left( 1-\xi _{n}\right) \left\Vert v_{n}-z\right\Vert ^{2}+2\xi _{n}\left\langle u-z,J_{\phi }\left( v_{n+1}-z\right) \right\rangle \text{.} \end{eqnarray*} Step 3. We show that $v_{n}\rightarrow z$ as $n\rightarrow \infty $. For this step, we will examine two cases. Case 1. Suppose that there exists $n_{0}\in \mathbb{N} $ such that $\left\{ \left\Vert v_{n}-z\right\Vert \right\} _{n\geq n_{0}}$ is nonincreasing. Since it is also bounded below, the sequence $\left\{ \left\Vert v_{n}-z\right\Vert \right\} _{n\in \mathbb{N} }$ is convergent. Thus, it is clear that $\left\Vert v_{n}-z\right\Vert ^{2}-\left\Vert v_{n+1}-z\right\Vert ^{2}\rightarrow 0$ as $n\rightarrow \infty $. In view of condition $\left( 4\right) $ and $\left( \ref{j5} \right) $, we have \begin{equation*} \lim_{n\rightarrow \infty }g\left( \left\Vert v_{n}-J_{r_{n}}^{A_{i}}v_{n}\right\Vert \right) =0\text{.} \end{equation*} From the properties of $g$, we have \begin{equation*} \lim_{n\rightarrow \infty }\left\Vert v_{n}-J_{r_{n}}^{A_{i}}v_{n}\right\Vert =0\text{.} \end{equation*} Next, we estimate the differences $w_{n}-v_{n}$ and $v_{n+1}-w_{n}$ as follows: \begin{eqnarray} w_{n}-v_{n} &=&\varphi _{n,0}v_{n}+\dsum\limits_{i=1}^{\infty }\varphi _{n,i}J_{r_{n}}^{A_{i}}v_{n}-v_{n} \notag \\ &=&\dsum\limits_{i=1}^{\infty }\varphi _{n,i}\left( J_{r_{n}}^{A_{i}}v_{n}-v_{n}\right) \text{,} \label{j6} \end{eqnarray} and \begin{equation*} v_{n+1}-w_{n}=\xi _{n}u+\left( 1-\zeta _{n}\right) J_{r_{n}}^{A_{0}}v_{n}+\left( \zeta _{n}-\xi _{n}\right) J_{r_{n}}^{A_{0}}w_{n}-w_{n} \end{equation*} \begin{eqnarray} \left\Vert v_{n+1}-w_{n}\right\Vert &=&\left\Vert \xi _{n}\left( u-w_{n}\right) +\left( 1-\zeta _{n}\right) \left( J_{r_{n}}^{A_{0}}v_{n}-w_{n}\right) +\left( \zeta _{n}-\xi _{n}\right) \left( J_{r_{n}}^{A_{0}}w_{n}-w_{n}\right) \right\Vert \notag \\ &\leq &\xi _{n}\left\Vert u-w_{n}\right\Vert +\left( 1-\zeta _{n}\right)
\left\Vert J_{r_{n}}^{A_{0}}v_{n}-w_{n}\right\Vert +\left( \zeta _{n}-\xi _{n}\right) \left\Vert J_{r_{n}}^{A_{0}}w_{n}-w_{n}\right\Vert . \notag \end{eqnarray} These imply that \begin{equation} \lim_{n\rightarrow \infty }\left\Vert v_{n+1}-w_{n}\right\Vert =0\text{ \ \ \ \ \ \ \ \ \ \ \ \ and \ \ \ \ }\lim_{n\rightarrow \infty }\left\Vert w_{n}-v_{n}\right\Vert =0\text{.} \label{m} \end{equation} By the expressions in $\left( \ref{m}\right) $, we obtain \begin{equation*} \left\Vert v_{n+1}-v_{n}\right\Vert \leq \left\Vert w_{n}-v_{n}\right\Vert +\left\Vert v_{n+1}-w_{n}\right\Vert \text{.} \end{equation*} This implies that \begin{equation} \lim_{n\rightarrow \infty }\left\Vert v_{n+1}-v_{n}\right\Vert =0\text{.} \label{j8} \end{equation} By Lemma \ref{l6} and $\left( \ref{j6}\right) $, we have \begin{equation*} \left\Vert v_{n}-J_{r}^{A_{i}}v_{n}\right\Vert \leq \left\Vert v_{n}-J_{r_{n}}^{A_{i}}v_{n}\right\Vert +\left\Vert J_{r_{n}}^{A_{i}}v_{n}-J_{r}^{A_{i}}v_{n}\right\Vert \leq \left\Vert v_{n}-J_{r_{n}}^{A_{i}}v_{n}\right\Vert +\frac{\left\vert r_{n}-r\right\vert }{r_{n}}\left\Vert v_{n}-J_{r_{n}}^{A_{i}}v_{n}\right\Vert . \end{equation*} This implies that \begin{equation*} \lim_{n\rightarrow \infty }\left\Vert v_{n}-J_{r}^{A_{i}}v_{n}\right\Vert =0 \text{, for all }i\in \mathbb{N} . \end{equation*} Previously, we have shown that the sequence $\left\{ v_{n}\right\} _{n\in \mathbb{N} }$ is bounded. Therefore, there exists a subsequence $\left\{ v_{n_{j}}\right\} _{j\in \mathbb{N} }$ of $\left\{ v_{n}\right\} _{n\in \mathbb{N} }$ such that $v_{n_{j}+1}\rightarrow l$, where $l\in F\left( J_{r}^{A_{i}}\right) $ for all $i\in \mathbb{N} $.
This, together with Lemma \ref{l1}, implies that \begin{eqnarray} \limsup_{n\rightarrow \infty }\left\langle u-z,J_{\phi }\left( v_{n+1}-z\right) \right\rangle &=&\lim_{j\rightarrow \infty }\left\langle u-z,J_{\phi }\left( v_{n_{j}+1}-z\right) \right\rangle \notag \\ &=&\left\langle u-z,J_{\phi }\left( l-z\right) \right\rangle \label{j9} \\ &\leq &0\text{.} \notag \end{eqnarray} By Lemma \ref{l5}, we obtain the desired result. Case 2. Let $\left\{ n_{j}\right\} _{j\in \mathbb{N} }$ be a subsequence of $\left\{ n\right\} _{n\in \mathbb{N} }$ such that \begin{equation*} \left\Vert v_{n_{j}}-z\right\Vert \leq \left\Vert v_{n_{j}+1}-z\right\Vert \text{, for all }j\in \mathbb{N} \text{.} \end{equation*} Then, in view of Lemma \ref{l7}, there exists a nondecreasing sequence $\left\{ m_{k}\right\} _{k\in \mathbb{N} }\subset \mathbb{N} $ such that \begin{equation*} \left\Vert z-v_{m_{k}}\right\Vert <\left\Vert z-v_{m_{k}+1}\right\Vert \text{ \ \ \ \ \ and \ \ \ }\left\Vert z-v_{k}\right\Vert \leq \left\Vert z-v_{m_{k}+1}\right\Vert \text{, }\forall k\in \mathbb{N} \text{.} \end{equation*} Rewriting inequality $\left( \ref{j5}\right) $ along this subsequence, we have \begin{eqnarray*} \zeta _{m_{k}}\varphi _{m_{k},0}\varphi _{m_{k},i}g\left( \left\Vert v_{m_{k}}-J_{r_{m_{k}}}^{A_{i}}v_{m_{k}}\right\Vert \right) &\leq &\left\Vert v_{m_{k}}-z\right\Vert ^{2}-\left\Vert v_{m_{k}+1}-z\right\Vert ^{2}+\xi _{m_{k}}K_{2} \\ &\leq &\xi _{m_{k}}K_{2}\text{, }\forall k\in \mathbb{N} \text{.} \end{eqnarray*} Considering the conditions $\left( 1\right) $ and $\left( 4\right) $, we obtain \begin{equation*} \lim_{k\rightarrow \infty }g\left( \left\Vert v_{m_{k}}-J_{r_{m_{k}}}^{A_{i}}v_{m_{k}}\right\Vert \right) =0\text{.} \end{equation*} It follows that \begin{equation*} \lim_{k\rightarrow \infty }\left\Vert v_{m_{k}}-J_{r_{m_{k}}}^{A_{i}}v_{m_{k}}\right\Vert =0\text{.} \end{equation*} Therefore, using the same argument as Case 1, we have \begin{equation*}
\limsup_{k\rightarrow \infty }\left\langle u-z,J_{\phi }\left( v_{m_{k}}-z\right) \right\rangle =\limsup_{k\rightarrow \infty }\left\langle u-z,J_{\phi }\left( v_{m_{k}+1}-z\right) \right\rangle \leq 0\text{.} \end{equation*} Using $\left( \ref{j2}\right) $, we get \begin{equation*} \left\Vert v_{m_{k}+1}-z\right\Vert ^{2}\leq \left( 1-\xi _{m_{k}}\right) \left\Vert v_{m_{k}}-z\right\Vert ^{2}+2\xi _{m_{k}}\left\langle u-z,J_{\phi }\left( v_{m_{k}+1}-z\right) \right\rangle \text{.} \end{equation*} Previously, we have shown that the inequality $\left\Vert v_{m_{k}}-z\right\Vert \leq \left\Vert v_{m_{k}+1}-z\right\Vert $ holds, and hence \begin{eqnarray*} \xi _{m_{k}}\left\Vert v_{m_{k}}-z\right\Vert ^{2} &\leq &\left\Vert v_{m_{k}}-z\right\Vert ^{2}-\left\Vert v_{m_{k}+1}-z\right\Vert ^{2}+2\xi _{m_{k}}\left\langle u-z,J_{\phi }\left( v_{m_{k}+1}-z\right) \right\rangle \\ &\leq &2\xi _{m_{k}}\left\langle u-z,J_{\phi }\left( v_{m_{k}+1}-z\right) \right\rangle \text{.} \end{eqnarray*} Hence, we get \begin{equation} \lim_{k\rightarrow \infty }\left\Vert v_{m_{k}}-z\right\Vert =0\text{.} \label{j10} \end{equation} Considering the expressions $\left( \ref{j9}\right) $ and $\left( \ref{j10} \right) $, we obtain \begin{equation*} \lim_{k\rightarrow \infty }\left\Vert v_{m_{k}+1}-z\right\Vert =0\text{.} \end{equation*} Finally, since $\left\Vert v_{k}-z\right\Vert \leq \left\Vert v_{m_{k}+1}-z\right\Vert $ for all $k\in \mathbb{N} $, it follows that $v_{k}\rightarrow z$ as $k\rightarrow \infty $. \end{proof} \begin{theorem} Let $B$ be a real uniformly convex Banach space having a G\^{a}teaux differentiable norm, and let $C$ be a $ccs$ of $B$ such that $\overline{D(A_{i})} \subset C\subset \dbigcap\limits_{r>0}R(I+rA_{i})$ for each $i\in \mathbb{N} $.
Assume that $\left\{ A_{i}\right\} _{i\in \mathbb{N} \cup \left\{ 0\right\} }$ is an infinite family of accretive operators satisfying the range condition, and let $r_{n}>0$ and $r>0$ be such that $\lim_{n\rightarrow \infty }r_{n}=r$. Let $J_{r_{n}}^{A_{i}}=(I+r_{n}A_{i})^{-1}$ be the resolvent of $A_{i}$. Let $\left\{ v_{n}\right\} _{n\in \mathbb{N} }$ be a sequence generated by \begin{equation} \left\{ \begin{array}{c} v_{1}\text{, }u\in C\text{ arbitrarily chosen,} \\ v_{n+1}=\xi _{n}u+\left( 1-\zeta _{n}\right) J_{r_{n}}^{A_{0}}v_{n}+\left( \zeta _{n}-\xi _{n}\right) J_{r_{n}}^{A_{0}}w_{n}\text{,} \\ w_{n}=\varphi _{n,0}v_{n}+\dsum\limits_{i=1}^{\infty }\varphi _{n,i}J_{r_{n}}^{A_{i}}v_{n}\text{, }\ n\geq 0\text{,} \end{array} \right. \end{equation} where $\left\{ \zeta _{n}\right\} _{n\in \mathbb{N} }$, $\left\{ \xi _{n}\right\} _{n\in \mathbb{N} }$ and $\left\{ \varphi _{n,i}\right\} _{n\in \mathbb{N} ,i\in \mathbb{N} \cup \left\{ 0\right\} }$ are sequences in $\left[ 0,1\right] $ satisfying the following control conditions: \qquad $\left( 1\right) ~\lim_{n\rightarrow \infty }\xi _{n}=0$; \qquad $\left( 2\right) ~\dsum\limits_{n=1}^{\infty }\xi _{n}=\infty $; \qquad $\left( 3\right) ~\varphi _{n,0}+\dsum\limits_{i=1}^{\infty }\varphi _{n,i}=1$, for all $n\in \mathbb{N} $; \qquad $\left( 4\right) ~\liminf_{n\rightarrow \infty }\zeta _{n}\varphi _{n,0}\varphi _{n,i}>0$, for all $i\in \mathbb{N} $. If $Q_{Z}:B\rightarrow Z$ is the sunny nonexpansive retraction such that $Z=\dbigcap\limits_{i=1}^{\infty }A_{i}^{-1}\left( 0\right) \neq \varnothing $, then $\left\{ v_{n}\right\} _{n\in \mathbb{N} }$ converges strongly as $n\rightarrow \infty $ to $Q_{Z}u$. \end{theorem} \end{document}
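The iteration in the theorem can be checked numerically in a toy setting. The sketch below is our illustration, not part of the paper: it specialises to $B=\mathbb{R}$ with $A_{i}x=c_{i}x$, $c_{i}>0$ (accretive, with resolvent $J_{r}^{A_{i}}x=x/(1+rc_{i})$ and common zero set $Z=\{0\}$, so $Q_{Z}u=0$), truncates the infinite family at $N$ operators, and uses assumed parameter choices $\xi_{n}=1/(n+1)$, $\zeta_{n}=1/2$, $\varphi_{n,0}=1/2$, $\varphi_{n,i}=1/(2N)$, which satisfy conditions (1)-(4) for the retained operators.

```python
# Numerical sketch (illustrative assumptions, not the paper's setting):
# the Halpern-type resolvent scheme of the theorem on B = R.

def run_scheme(u=1.0, v1=5.0, N=5, steps=20000):
    c = [1.0 + 0.5 * i for i in range(N + 1)]       # c_0, ..., c_N > 0
    zeta = 0.5                                      # zeta_n constant
    v = v1
    for n in range(1, steps + 1):
        xi = 1.0 / (n + 1)                          # conditions (1), (2); xi_n <= zeta_n
        r_n = 1.0 + 1.0 / n                         # r_n -> r = 1
        J = lambda i, x: x / (1.0 + r_n * c[i])     # resolvent J_{r_n}^{A_i}
        # w_n = phi_{n,0} v_n + sum_i phi_{n,i} J_{r_n}^{A_i} v_n, weights sum to 1
        w = 0.5 * v + sum((0.5 / N) * J(i, v) for i in range(1, N + 1))
        v = xi * u + (1 - zeta) * J(0, v) + (zeta - xi) * J(0, w)
    return v

print(abs(run_scheme()))   # iterates approach Q_Z u = 0
```

With these choices the iterates decay like $O(1/n)$, driven by the anchoring term $\xi_{n}u$; the limit $0$ is exactly $Q_{Z}u$ for this scalar example.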
\begin{document} \title[Predictive control with noise]{Stabilisation of difference equations with noisy prediction-based control} \author[E. Braverman, C. Kelly and A. Rodkina] {E. Braverman, C. Kelly and A. Rodkina} \address{Department of Mathematics, University of Calgary, Calgary, Alberta T2N1N4, Canada} \email{[email protected]} \address{Department of Mathematics \\ The University of the West Indies, Mona Campus, Kingston, Jamaica} \email{[email protected]} \address{Department of Mathematics \\ The University of the West Indies, Mona Campus, Kingston, Jamaica} \email{[email protected]} \thanks{The first author was supported by the NSERC grant RGPIN-2015-05976, all the authors were supported by American Institute of Mathematics SQuaRE program} \thanks{Corresponding author email: {\tt [email protected]}} \begin{abstract} We consider the influence of stochastic perturbations on stability of a unique positive equilibrium of a difference equation subject to prediction-based control. These perturbations may be multiplicative $$x_{n+1}=f(x_n)-\left( \alpha + l\xi_{n+1} \right) (f(x_n)-x_n), \quad n=0, 1, \dots$$ if they arise from stochastic variation of the control parameter, or additive $$x_{n+1}=f(x_n)-\alpha(f(x_n)-x_n) +l\xi_{n+1}, \quad n=0, 1, \dots $$ if they reflect the presence of systemic noise. We begin by relaxing the control parameter in the deterministic equation, and deriving a range of values for the parameter over which all solutions eventually enter an invariant interval. Then, by allowing the variation to be stochastic, we derive sufficient conditions (less restrictive than known ones for the unperturbed equation) under which the positive equilibrium will be globally a.s. asymptotically stable: i.e. the presence of noise improves the known effectiveness of prediction-based control. Finally, we show that systemic noise has a ``blurring'' effect on the positive equilibrium, which can be made arbitrarily small by controlling the noise intensity. 
Numerical examples illustrate our results. {\bf AMS Subject Classification:} 39A50, 37H10, 34F05, 39A30, 93D15, 93C55 {\bf Keywords:} stochastic difference equations; prediction-based control; multiplicative noise; additive noise \end{abstract} \maketitle \section{Introduction} \label{sec:intr} The dynamics of discrete maps can be complicated, and various methods may be introduced to control their asymptotic behaviour. In addition, both the intrinsic dynamics and the control may involve stochasticity. We may ask the following of stochastically perturbed difference equations: \begin{enumerate} \item If the original (non-stochastic) map has chaotic or unknown dynamics, can we stabilise the equation by introducing a control with a stochastic component? \item If the non-stochastic equation is either stable or has known dynamics (for example, a stable two-cycle \cite{BR_2012}), do those dynamics persist when a stochastic perturbation is introduced? \end{enumerate} In this article, we consider both these questions in the context of prediction-based control (PBC, or predictive control). Ushio and Yamamoto \cite{uy99} introduced PBC as a method of stabilising unstable periodic orbits of \begin{equation} \label{eq:intr1} x_{n+1}=f(x_n), \quad x_0>0, \quad n\in {\mathbb N}_0, \end{equation} where ${\mathbb N}_0=\{0,1,2,\dots\}$. The method overcomes some of the limitations of delayed feedback control (introduced by Pyragas~\cite{pyr}), and does not require the a priori approximation of periodic orbits, as does the OGY method developed by Ott et al.~\cite{OGY}. The general form of PBC is $$x_{n+1}=f(x_n)-\alpha(f^k(x_n)-x_n), \quad x_0>0, \quad n\in {\mathbb N}_0, $$ where $\alpha \in (0,1)$ and $f^k$ is the $k$th iterate of $f$. If $k=1$, PBC becomes \begin{equation} \label{eq:intr2} x_{n+1}=f(x_n)- \alpha (f(x_n)-x_n)= (1-\alpha)f(x_n)+ \alpha x_n, \quad x_0>0, \quad n\in {\mathbb N}_0.
\end{equation} Recently, it has been shown how PBC can be used to manage population size via population reduction by ensuring that the positive equilibrium of a class of one-dimensional maps commonly used to model population dynamics is globally asymptotically stable after the application of the control~\cite{FL2010}. Similar effects are also possible if it is not feasible to apply the control at every timestep. This variation on the technique is referred to as PBC-based pulse stabilisation~\cite{BerLizCAMWA,LizPotsche14}. Here, we investigate the influence of stochastic perturbations on the ability of PBC to induce global asymptotic stability of a positive point equilibrium of a class of equations of the form \eqref{eq:intr1}. It is reasonable to introduce noise in one of two ways. First, the implementation of PBC relies upon a controlling agent to change the state of the system in a way characterised by the value of the control parameter $\alpha$. In reality we expect that such precise control is impossible, and the actual change will be characterised by a control sequence $\{\alpha_n\}_{n\in\mathbb{N}_0}$ with terms that vary randomly around $\alpha$ with some distribution. This will lead to a state-dependent, or multiplicative, stochastic perturbation. Second, the system itself may be subject to extrinsic noise, which may be modelled by a state-independent, or additive, perturbation. The fact that stochastic perturbation can stabilise an unstable equilibrium has been understood since the 1950s: consider the well-known example of the pendulum of Kapica \cite{Kapica}. More recently, a general theory of stochastic stabilisation and destabilisation of ordinary differential equations has developed from \cite{Mao}: a comprehensive review of the literature is presented in \cite{AMR1}. This theory extends to functional differential equations: for example \cite{Appleby,AKMR2} and references therein. 
Stochastic stabilisation and destabilisation are also possible for difference equations; see, for example, \cite{ABR,AMR06}. However, the qualitative behaviour of stochastic difference equations may be dramatically different from that seen in the continuous-time case, and must be investigated separately. For example, in \cite{KR09}, solutions of a nonlinear stochastic difference equation with multiplicative noise arising from an Euler discretisation of an It\^o-type SDE are shown to demonstrate monotonic convergence to a point equilibrium with high probability. This behaviour is not possible in the continuous-time limit. Now, consider the structure of the map $f$. We impose a Lipschitz-type assumption on the function $f$ around the unique positive equilibrium $K$. \begin{assumption} \label{as:slope} $f:[0,\infty) \to [0,\infty)$ is a continuous function, $f(x)>0$ for $x>0$, $f(x)>x$ for $x \in (0,K)$, $f(x)<x$ for $x >K$, and there exists $M \geq 1$ such that \begin{equation} \label{eq:Mcond} |f(x)-K| \leq M |x-K|. \end{equation} \end{assumption} Note that under Assumption \ref{as:slope} the function $f$ has a unique positive equilibrium $K$. We will also suppose that $f$ is decreasing on an interval that includes $K$: \begin{assumption} \label{as:3} There is a point $c<K$ such that $f(x)$ is monotone decreasing on $[c,\infty)$. \end{assumption} It is quite common for Assumptions~\ref{as:slope} and \ref{as:3} to hold for models of population dynamics, and in particular for models characterised by a unimodal map: we illustrate this with Examples \ref{ex:Ric}-\ref{ex:BH}. It follows from Singer~\cite{Singer} that, when additionally $f$ has a negative Schwarzian derivative $(Sf)(x)=f'''(x)/f'(x) - \frac{3}{2}(f''(x)/f'(x))^2<0$, the equilibrium $K$ is globally asymptotically stable if and only if it is locally asymptotically stable.
In each case, as the system parameter grows, the stable equilibrium loses its stability and is replaced by a stable cycle; further period-doubling bifurcations follow, leading eventually to chaotic behaviour. \begin{example} \label{ex:Ric} For the Ricker model \begin{equation} \label{eq:ricker} x_{n+1} = x_n e^{r(1-x_n)}, \quad x_0>0, \quad n\in {\mathbb N}_0, \end{equation} Assumptions~\ref{as:slope} and \ref{as:3} both hold with $K=1$, and the global maximum is attained at $c=1/r<K=1$ for $r>1$. Let us note that for $r \leq 1$ the positive equilibrium is globally asymptotically stable and the convergence of solutions to $K$ is monotone. However, for $r>2$ the equilibrium becomes unstable. \end{example} \begin{example} \label{ex:log} The truncated logistic model \begin{equation} \label{eq:logistic} x_{n+1} = \max\left\{ r x_n (1-x_n), 0 \right\}, \quad x_0>0, \quad n\in {\mathbb N}_0, \end{equation} with $r>1$ and $c=\frac{1}{2} <K =1-1/r$, also satisfies Assumptions~\ref{as:slope} and \ref{as:3}. Again, for $r \leq 2$, the equilibrium $K$ is globally asymptotically stable, with monotone convergence to $K$, while for $r>3$ the equilibrium $K$ is unstable. \end{example} \begin{example} \label{ex:BH} For the modified Beverton-Holt equations \begin{equation} \label{eq:BHm1} x_{n+1} = \frac{Ax_n}{1+Bx_n^{\gamma}},\quad A>1,~B>0,~\gamma >1, \quad x_0>0, \quad n\in {\mathbb N}_0, \end{equation} and \begin{equation} \label{eq:BHm2} x_{n+1} = \frac{Ax_n}{(1+Bx_n)^{\gamma}},\quad A>1,~B>0,~\gamma>1, \quad x_0>0, \quad n\in {\mathbb N}_0 \end{equation} Assumption~\ref{as:slope} holds. Also, \eqref{eq:BHm1} and \eqref{eq:BHm2} satisfy Assumption~\ref{as:3} as long as the point at which the map on the right-hand side takes its maximum value is less than the point equilibrium.
If Assumption~\ref{as:3} is not satisfied, the function is monotone increasing up to the unique positive point equilibrium, and thus all solutions converge to the positive equilibrium, and the convergence is monotone. If all $x_n>K$, we have a monotonically decreasing sequence. If we fix $B$ in \eqref{eq:BHm1} and \eqref{eq:BHm2} and let $A$ grow, the equation loses stability and undergoes a transition to chaos through a series of period-doubling bifurcations. \end{example} The article has the following structure. In Section 2 we relax the control parameter $\alpha$, replacing it with the variable control sequence $\{\alpha_n\}_{n\in\mathbb{N}_0}$, yielding the equation \begin{equation} \label{eq:intr3} x_{n+1}=f(x_n)- \alpha_n (f(x_n)-x_n)= (1-\alpha_n)f(x_n)+ \alpha_n x_n, \quad x_0>0, \quad n\in {\mathbb N}_0. \end{equation} We identify a range over which $\{\alpha_n\}_{n\in\mathbb{N}_0}$ may vary deterministically while still ensuring the global asymptotic stability of the positive equilibrium $K$. We confirm that, without imposing any constraints on the range of values over which the control sequence $\{\alpha_n\}_{n\in\mathbb{N}_0}$ may vary, there exists an invariant interval, containing $K$, under the controlled map. We then introduce constraints on terms of the sequence $\{\alpha_n\}_{n\in\mathbb{N}_0}$ which ensure that all solutions will eventually enter this invariant interval. In Section 3, we assume that the variation of $\alpha_n$ around $\alpha$ is bounded and stochastic, which results in a PBC equation with multiplicative noise of intensity $l$. After identifying constraints on $\alpha$ and $l$ under which a domain of local stability for $K$ exists for all trajectories, we demonstrate that the presence of an appropriate noise perturbation in fact ensures that almost all trajectories will eventually enter this domain of local stability, hence providing global a.s. asymptotic stabilisation of $K$.
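The stabilising effect of deterministic PBC on the examples above is easy to observe numerically. The following sketch is our illustration, not part of the analysis; the choices $r=3$, $\alpha=0.6$ and the initial values are assumptions made here. It applies the PBC scheme \eqref{eq:intr2} to the Ricker map \eqref{eq:ricker} of Example~\ref{ex:Ric}, whose equilibrium $K=1$ is unstable for $r>2$:

```python
# Illustrative simulation (assumed parameters r = 3.0, alpha = 0.6):
# prediction-based control applied to the Ricker map, K = 1.
import math

def ricker(x, r=3.0):
    return x * math.exp(r * (1.0 - x))

def pbc_orbit(x0, alpha, steps=100, r=3.0):
    x = x0
    for _ in range(steps):
        fx = ricker(x, r)
        # x_{n+1} = f(x_n) - alpha (f(x_n) - x_n) = (1-alpha) f(x_n) + alpha x_n
        x = (1 - alpha) * fx + alpha * x
    return x

print(abs(pbc_orbit(0.5, alpha=0.6) - 1.0))  # controlled orbit settles at K = 1
```

With $\alpha=0.6$ the linearisation at $K$ has multiplier $\alpha+(1-\alpha)(1-r)=-0.2$, so the controlled orbit converges geometrically to $K$, while the uncontrolled map ($\alpha=0$) is chaotic for this $r$.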
The known range of values of $\alpha$ under which this stabilisation occurs is larger than for the deterministic PBC equation, and in this sense the stochastic perturbation improves the stabilising properties of PBC. In Section 4, we suppose that the noise is acting systemically rather than through the control parameter, which results in a PBC equation with additive noise. In this setting it is possible to show that, under certain conditions on the noise intensity $l$, the noise causes a ``blurring'' of the positive equilibrium $K$ in the sense that the controlled solutions will enter and remain within a neighbourhood of $K$, and the size of that neighbourhood can be made arbitrarily small by an appropriate choice of $l$. Finally, Section 5 contains some simulations that illustrate the results of the article, and a brief summary. \section{Deterministic Equations with Variable PBC} \label{sec:det} We begin by relaxing the control variable in the deterministic PBC equation \eqref{eq:intr2}, both as a generalisation to equations of the form \eqref{eq:intr3} and to support our analysis of the system with stochastically varying control in Section \ref{sec:3}. The deterministic PBC equation \eqref{eq:intr3} with variable control parameter may be written in the form \begin{equation} \label{eq:Fa} x_{n+1}=F_{\alpha_n}(x_n),\quad x_0>0, \quad n\in {\mathbb N}_0, \end{equation} where \begin{equation} \label{eq:F} F_{\alpha}(x):= \alpha x+ (1-\alpha)f(x). \end{equation} The following result extends \cite[Theorem 2.2]{BerLizCAMWA} to develop conditions on the magnitude of variation of $\alpha_n$ for solutions of \eqref{eq:Fa} to approach the positive equilibrium $K$ at some minimum rate. \begin{lemma} \label{lem:PBC} Let Assumption~\ref{as:slope} hold and each $\alpha_n$ satisfy $\alpha_n \in [a,1)$, where $$a \in \left( 1-\frac{1}{M},1 \right).$$ Let $\{x_n\}_{n\in\mathbb{N}_0}$ be any solution of \eqref{eq:Fa} with $x_0>0$.
Then \begin{enumerate} \item[(i)] the sequence $\{|x_n-K|\}_{n \in {\mathbb N}_0}$ is non-increasing; \item[(ii)] If there is $b \in (0,1)$ for which $\alpha_n \leq b <1$ for all $n\in\mathbb{N}_0$, then for any $x_0>0$ \[ \lim_{n\to\infty}x_n=K; \] \item[(iii)] If in addition Assumption~\ref{as:3} holds, there exists $n_0 \in {\mathbb N}_0$ such that $x_n \geq c$ for $n\geq n_0$ and \begin{equation} \label{decay} \left| x_{n+1}-K \right| = \left| F_{\alpha_n}(x_{n})-K \right| \leq \gamma \left| x_{n}-K \right|, \end{equation} where \begin{equation} \label{gamma} \gamma = \max \{ b, 1-a \}. \end{equation} \end{enumerate} \end{lemma} \begin{proof} We address each part in turn. \begin{itemize} \item[Part (i):] First, we prove convergence in the case where the signs of $\{x_n-K\}_{n\in\mathbb{N}_0}$ are eventually constant: solutions eventually remain either above or below the positive equilibrium $K$. Suppose that there exists $n_1 \in {\mathbb N}_0$ such that $x_j<K$ for $j \geq n_1$. Then the subsequence $\{x_j\}_{j\geq n_1}$ is monotone increasing, since by Assumption \ref{as:slope} and \eqref{eq:F}, \[ x_j<K\,\,\Rightarrow\,\,f(x_j)>x_j\,\,\Rightarrow\,\, F_{\alpha_j}(x_j)>x_j\,\,\Rightarrow\,\, x_{j+1}>x_j. \] Next, we consider the case when the terms of $\{x_n-K\}_{n\in\mathbb{N}_0}$ change signs infinitely often. Note that we need to take into consideration only the indices $i$ where $x_i<K$ and $F_{\alpha_i}(x_i)>K$ or $x_{i}>K$ and $F_{\alpha_{i}}(x_{i}) < K$. At any $n$ where $(x_n-K)(x_{n+1}-K)>0$, we have $|x_{n+1}-K|<|x_n-K |$. Subsequences of $\{x_n\}_{n\in\mathbb{N}_0}$ that do not switch in this way will approach $K$ monotonically, as proven above. We must prove that $|x_{i+1}-K|<|x_i-K|$ at these switches as well. Suppose first that $x_i \in (0, K)$ and $F_{\alpha_i}(x_i) > K$. Then $f(x_i)>K$ necessarily, since otherwise $$ F_{\alpha_i}(x_i)=\alpha_i x_i+(1-\alpha_i)f(x_i) \le \alpha_i K+(1-\alpha_i)K=K.
$$ It is also the case that $x_{i+1} \in (K,f(x_i))$, since $$ x_{i+1}=F_{\alpha_i}(x_i) \le \alpha_i f(x_i)+(1-\alpha_i)f(x_i) = f(x_i). $$ Note that $a \in (1-1/M,1)$ implies $1-a < \frac{1}{M}$. Since $\alpha_i\geq a$, it follows that $(1-\alpha_i)M<1$, and we have from $F_{\alpha_i}(x_i) > K$ and \eqref{eq:F} that \begin{eqnarray*} \left| x_{i+1}-K \right| & = & F_{\alpha_i}(x_{i})-K \\ & = & (1-\alpha_i) (f(x_i)- K) - \alpha_i (K-x_i) \\ & \leq & (1-\alpha_i) M(K-x_i) - \alpha_i (K-x_i) \\ & \leq & |x_i-K| - \alpha_i |x_i-K| \\ & \leq & (1-a) \left| x_{i}-K \right|. \end{eqnarray*} By similar reasoning, if $x_i > K$ and $F_{\alpha_i}(x_i) < K$ we have $f(x_i) < K$ and $x_{i+1} \in (f(x_i),K)$. Therefore \begin{eqnarray*} \left| x_{i+1}-K \right| & = & K - F_{\alpha_i}(x_{i}) \\ & = & (1-\alpha_i) (K-f(x_i)) - \alpha_i (x_i-K) \\ & \leq & (1-\alpha_i) M(x_i-K) - \alpha_i (x_i-K) \\ & \leq & (1-\alpha_i) \left| x_{i}-K \right| \\ & \leq & (1-a) \left| x_{i}-K \right|. \end{eqnarray*} Thus, $\{|x_n-K|\}_{n \in {\mathbb N}_0}$ is a non-increasing sequence, and Part (i) of the statement of the lemma is verified. \item[Part (ii):] $\{|x_n-K|\}_{n \in {\mathbb N_0}}$ is a non-increasing positive sequence if no terms of the sequence $\{x_n\}_{n\in\mathbb{N}_0}$ coincide with $K$. If $x_n < K$ for all $n\geq n_1$ then $\lim\limits_{n\to\infty}x_n=L>0$. This implies in turn that the left-hand side of $$ x_{n+1}-x_n=(1-\alpha_n)(f(x_n)-x_n) $$ tends to zero, and so the right-hand side also tends to zero. From $1-\alpha_n \geq 1-b>0$ and continuity of $f$, we have $f(L)=L$, so the limit $L$ can only be $K$. The case where $x_n>K$ for all $n\geq n_1$ is treated similarly. If $(x_{n+1}-K)(x_n-K)<0$ for infinitely many $n$, then $|x_{n+1}-K|\leq (1-a) |x_n-K|$ at each such $n$, which implies \begin{equation} \label{add_star} \lim_{n \to \infty} \left| x_n-K \right| =0. \end{equation} Therefore $\lim\limits_{n \to \infty} x_n = K$, and Part (ii) of the statement of the lemma is confirmed.
\item[Part (iii):] Let Assumption~\ref{as:3} hold. By \eqref{add_star}, since $c\in(0,K)$, there exists some $n_0 \in {\mathbb N_0}$ such that, for $n\geq n_0$, $|x_{n}-K | \leq K-c$, and thus $x_{n} \in [c,\infty)$. Further we consider only $n \geq n_0$. Also, it has been established above that under the common conditions holding for Parts (i)-(iii) in the statement of the lemma, $|x_n-K|$ is non-increasing. Let $i$ be an index where a switch across the equilibrium $K$ occurs, i.e. $(x_i-K)(x_{i+1}-K)<0$. Then, from the analysis above, \[ |x_{i+1}-K| \leq (1-a) |x_i-K| \leq \gamma |x_i-K|. \] If $(x_n-K)(x_{n+1}-K)=0$, then $x_j=K$ for all $j>n$, so \eqref{decay} is satisfied in this situation. It remains to consider the case where $(x_n-K)(x_{n+1}-K)>0$, $n \geq n_0$. Suppose first that $x_n<K$, $x_{n+1}<K$ for some $n\geq n_0$. Then $x_n \in [c,K]$, $f(x_n)>K$ and, as $\alpha_n \leq b \leq \gamma$, \begin{eqnarray*} \left| x_{n+1}-K \right| & = & K - F_{\alpha_n}(x_{n}) = (1-\alpha_n) (K-f(x_n)) + \alpha_n (K-x_n) \\ & < & \alpha_n (K-x_n) \leq b (K-x_n) \leq \gamma |K-x_n|. \end{eqnarray*} When $x_n > K$, $x_{n+1}>K$ for some $n\geq n_0$, we have $f(x_n)<K$ and \begin{eqnarray*} \left| x_{n+1}-K \right| & = & F_{\alpha_n}(x_{n}) -K = (1-\alpha_n) (f(x_n)-K) + \alpha_n (x_n-K) \\ & < & \alpha_n (x_n-K) \leq b (x_n-K) \leq \gamma |x_n-K|. \end{eqnarray*} Taking these results in aggregate we can conclude that $|x_{n+1}-K| \leq \gamma |x_n-K|$ for any $n\geq n_0$, where $\gamma$ is defined in \eqref{gamma}. \end{itemize} \end{proof} In the case of $\alpha_n \equiv \alpha$, we obtain the following corollary, highlighting the existence of an invariant interval under the map $F_\alpha$ when the control parameter $\alpha$ is constant.
\begin{corollary} \label{new_add1} If Assumptions~\ref{as:slope} and \ref{as:3} hold and $\displaystyle \alpha \in \left( 1 - \frac{1}{M}, 1 \right)$ then $F_{\alpha}$ defined by \eqref{eq:F} maps the interval $[c,2K - c]$ into $(c,2K-c)$, and satisfies \begin{equation} \label{contraction} |F_{\alpha}(x)-K| \leq \gamma |x-K|, \quad x \geq c, \end{equation} with \begin{equation} \label{def_gamma} \gamma=\max\{ \alpha, 1-\alpha \}. \end{equation} \end{corollary} \begin{proof} By Lemma~\ref{lem:PBC}, Part (i), $$|F_{\alpha}(x)-K| \leq |x-K|, \quad x>0.$$ Thus, $F_{\alpha}$ maps the interval $[d,2K - d]$ into itself for any $d<K$; in particular, $F_{\alpha}$ maps $[c,2K - c]$ into itself. Next, let $x\geq c$. By Lemma~\ref{lem:PBC}, Part (iii), for $x \in [c,2K - c]$, $$\left|F_{\alpha} (x) - K\right| \leq \gamma |x-K|,$$ where $\gamma$, defined in \eqref{gamma}, takes the form $\gamma=\max\{ \alpha, 1-\alpha \}$. Since $\gamma<1$, it follows that $F_{\alpha}: [c,2K - c]\to (c,2K-c)$. \end{proof} Our next step is to prove the existence of an invariant interval under the PBC map when the control parameter $\alpha_n$ is allowed to vary on the interval $[0,1]$. Note first that if Assumptions~\ref{as:slope} and \ref{as:3} hold, the maximum of $f(x)$ on $[0,\infty)$ is attained on $[0,c]$. We construct the endpoints of the invariant interval as follows. \begin{definition}\label{def:mus} Under Assumptions~\ref{as:slope} and \ref{as:3}, define \begin{enumerate} \item $\mu_0 \in [0,c]$ to be the smallest point where the maximum of $f$ is attained: \begin{equation} \label{eq:mu0} \mu_0 := \inf \left\{ x \in [0,c] \,\middle|\, f(x)= \max_{s \in [0,\infty)} f(s) \right\} ; \end{equation} \item $\mu_2$ to be the value of this maximum: \begin{equation} \label{eq:mu2} \mu_2 := f(\mu_0) \geq f(c)>f(K)=K>c, \quad f(x) \leq \mu_2, \quad x \in [0,\infty); \end{equation} \item $\mu_1$ to be the image of $\mu_2$ under $f$: \begin{equation}\label{eq:mu1def} \mu_1:= f(\mu_2).
\end{equation} \end{enumerate} \end{definition} \begin{remark} By Assumption~\ref{as:3}, $f$ decreases on $[c,\infty)$ and $\mu_2>K>c$, thus \begin{equation} \label{eq:mu1} \mu_1 = f(\mu_2) < f(K)=K. \end{equation} \end{remark} \begin{lemma} \label{lemma_mu} Suppose that Assumptions~\ref{as:slope} and \ref{as:3} hold, and let $F_{\alpha_n}$ be the PBC map defined in \eqref{eq:F}. For any $\alpha_n \in [0,1]$, $$F_{\alpha_n}\left([\mu_1,\mu_2]\right)\subseteq [\mu_1,\mu_2].$$ \end{lemma} \begin{proof} First, we prove that $f\left([\mu_1,\mu_2]\right)\subseteq [\mu_1,\mu_2]$. By Parts (1) and (2) of Definition \ref{def:mus}, we have $f(x) \leq \mu_2$ for any $x \in [\mu_1,\mu_2]$. Since $K\in[\mu_1,\mu_2]$, we consider the subintervals $[\mu_1,K]$ and $[K,\mu_2]$ in turn. If $x \in [\mu_1,K]$, then $f(x)\geq x \geq \mu_1$. If $x \in [K,\mu_2]$ then, since $f$ is decreasing on this interval, $f(x) \geq f(\mu_2)=\mu_1$. Thus, $f\left([\mu_1,\mu_2]\right)\subseteq [\mu_1,\mu_2]$. Furthermore, for any $x \in [\mu_1,\mu_2]$, $$ F_{\alpha_n}(x) = \alpha_n x + (1-\alpha_n)f(x) \geq \alpha_n \mu_1+(1-\alpha_n) \mu_1 = \mu_1, $$ and $$ F_{\alpha_n}(x) \leq \alpha_n \mu_2 +(1-\alpha_n) \mu_2 = \mu_2. $$ We conclude that $F_{\alpha_n}\left([\mu_1,\mu_2]\right)\subseteq [\mu_1,\mu_2]$, as required. \end{proof} The final result in this section shows that terms of the sequence $\{\alpha_n\}_{n\in\mathbb{N}_0}$ may be constrained in such a way that solutions of the PBC equation \eqref{eq:Fa} eventually enter, and thereafter remain, within the interval $[\mu_1,\mu_2]$. We will use this approach to obtain global stochastic stability conditions later in the article. \begin{lemma} \label{lemma_add2} Suppose that Assumptions~\ref{as:slope} and \ref{as:3} hold and there exists $\delta \in \left(0, \frac{1}{2}\right)$ such that \begin{equation} \label{alphadelta} \alpha_n \in (\delta,1-\delta) \end{equation} for every $n \in {\mathbb N_0}$.
Then, for each $x_0>0$, there exists $n_0 \in {\mathbb N_0}$ such that the solution $x_n$ of \eqref{eq:Fa} satisfies $x_n \in [\mu_1,\mu_2]$ for $n \geq n_0$, where $\mu_1$ and $\mu_2$ are defined in \eqref{eq:mu1def} and \eqref{eq:mu2}, respectively. \end{lemma} \begin{proof} We will show that for $x_0 \leq \mu_1$ (and similarly for $x_0 \geq \mu_2$), there exists $n_0 \in {\mathbb N_0}$ such that $x_{n_0} \in [\mu_1,\mu_2]$. It will then follow from Lemma \ref{lemma_mu} that $x_{n_0+j}\in [\mu_1,\mu_2]$ for any $j \in {\mathbb N_0}$. Suppose first that $x_0 \leq \mu_1$. Then $f(x_0)>x_0$, and $$ x_1=\alpha_0 x_0 + (1-\alpha_0)f(x_0) > x_0. $$ Repeating this step confirms that $\{x_n\}_{n\in\mathbb{N}_0}$ is an increasing sequence as long as $x_n<K$. Thus, either there exists $n_0 \in {\mathbb N_0}$ such that $x_{n_0} > \mu_1$ or $x_n$ is a bounded increasing sequence which has a limit $d \leq \mu_1<K$. In the latter case, obviously $f(d)> d$. Denote $$ \Delta := \frac{\delta}{2}(f(d)-d), $$ where $\delta$ is as defined in \eqref{alphadelta}. Here $\Delta>0$, so by continuity of $f$ and convergence of $\{x_n\}_{n\in\mathbb{N}_0}$ to $d$, there is $n_1 \in {\mathbb N_0}$ such that for any $n \geq n_1$, $$ f(x_n)-x_n > \frac{\Delta}{\delta}. $$ Then, for any $n \geq n_1$, \begin{eqnarray*} x_{n+1}&=& \alpha_n x_n + (1-\alpha_n)f(x_n)=x_n+(1-\alpha_n)(f(x_n)-x_n)\\ &>& x_n+ \frac{1-\alpha_n}{\delta} \Delta > x_n+\Delta. \end{eqnarray*} So $x_{n_1+j} \geq \mu_1$ for any $j \geq \mu_1/\Delta$, which contradicts the convergence of $\{x_n\}_{n\in\mathbb{N}_0}$ to $d \leq \mu_1$. Moreover, by Definition \ref{def:mus}, $f(x) \leq \mu_2$, so that $x_n \leq \mu_2$ for all $n\in\mathbb{N}_0$. We conclude that there exists $n_0 \in {\mathbb N_0}$ such that $x_{n_0} \in [\mu_1,\mu_2]$. Suppose next that $x_0>\mu_2>K$. If there is an $n_2 \in {\mathbb N_0}$ such that $x_{n_2} < \mu_2$, then either we revert to the previous case or $x_{n_2} \in [\mu_1,\mu_2]$.
Otherwise, $\{x_n\}_{n\in\mathbb{N}_0}$ is a decreasing sequence with a limit $d \geq \mu_2>K$, where $f(d)<d$; moreover, $f(d)<K$, as $f$ is decreasing on $[c,\infty)$, $c<K$. As before, setting $\Delta := \frac{\delta}{2}(d-f(d))>0$, by the continuity of $f$ and convergence of $\{x_n\}_{n\in\mathbb{N}_0}$ to $d$, there exists $n_3 \in {\mathbb N_0}$ such that $$x_n-f(x_n)> \frac{\Delta}{\delta}, \quad n \geq n_3. $$ Then, for any $n \geq n_3$, $$ x_{n+1}= \alpha_n x_n + (1-\alpha_n)f(x_n)=x_n-(1-\alpha_n)(x_n-f(x_n)) < x_n - \frac{1-\alpha_n}{\delta} \Delta < x_n-\Delta. $$ So there exists $j\in\mathbb{N}_0$ such that $x_j \leq \mu_2$, contradicting $x_j > d \geq \mu_2$. We conclude that, for all solutions $\{x_n\}_{n\in\mathbb{N}_0}$ of \eqref{eq:Fa}, there exists $n_0 \in {\mathbb N_0}$ such that $x_n \in [\mu_1,\mu_2]$ for $n \geq n_0$. \end{proof} \section{Multiplicative Noise} \label{sec:3} Let $(\Omega, {\mathcal{F}}, \{\mathcal{F}_n\}_{n \in \mathbb{N}_0}, {\mathbb{P}})$ be a complete, filtered probability space, and let $\{\xi_n\}_{n\in\mathbb{N}_0}$ be a sequence of independent and identically distributed random variables with common density function $\phi$. The filtration $\{\mathcal{F}_n\}_{n \in\mathbb{N}_0}$ is naturally generated by this sequence: $\mathcal{F}_{n} = \sigma \{\xi_{i} : 1\leq i\leq n\}$, for $n\in\mathbb{N}_0$. Among all sequences $\{x_n\}_{n \in \mathbb{N}_0}$ of random variables we consider those for which $x_n$ is $\mathcal{F}_n$-measurable for all $n \in \mathbb{N}_0$. We use the standard abbreviation ``a.s.'' for ``almost sure'' or ``almost surely'' with respect to ${\mathbb{P}}$. In this section, we allow the control parameter $\alpha$ to vary stochastically, by setting $\alpha_n=\alpha+l\xi_{n+1}$ for each $n\in\mathbb{N}_0$, where $l$ controls the intensity of the perturbation and the sequence $\{\xi_n\}_{n\in\mathbb{N}_0}$ additionally satisfies the following assumption.
\begin{assumption} \label{as:chi1} Let $\{\xi_n\}_{n\in\mathbb{N}_0}$ be a sequence of independent and identically distributed continuous random variables with common density function $\phi$ supported on the interval $[-1,\nu]$, for some $\nu\geq 1$. \end{assumption} \begin{remark} The support of each $\xi_n$ is asymmetric if $\nu>1$, allowing for perturbations where the potential magnitude in the positive direction is larger than that in the negative direction. Note that the possibility that $\mathbb{E}\xi_n=0$ is not ruled out in that case. \end{remark} This leads to the following PBC equation with stochastic control \begin{equation} \label{eq:multi} x_{n+1}= \max \left\{ f(x_n)-\left(\alpha + l\xi_{n+1} \right) (f(x_n)-x_n), 0 \right\}, \quad x_0>0, \quad n\in {\mathbb N}_0. \end{equation} The right-hand side is truncated to ensure that physically unrealistic negative population sizes cannot occur. Our first result in this section applies when the perturbation support is symmetric, and provides a bound on the stochastic intensity $l$ that will ensure the convergence of all solution trajectories to the positive equilibrium $K$. A minimum asymptotic convergence rate is also determined. \begin{theorem} \label{th_multi} Let $\{x_n\}_{n\in\mathbb{N}_0}$ be any solution of equation \eqref{eq:multi} with $x_0>0$. Suppose that Assumptions~\ref{as:slope} and \ref{as:chi1} hold, the latter with $\nu=1$. If \[ \alpha \in \left(1-\frac{1}{M},1\right) \] and \begin{equation} \label{eq:noise} l < \min\left\{ \alpha -\left(1-\frac{1}{M}\right), \,1-\alpha \right\}, \end{equation} then for all $\omega\in\Omega$ \begin{equation}\label{eq:xconv} \lim_{n\to\infty}x_n(\omega)=K,\quad x_0>0. 
\end{equation} If in addition Assumption~\ref{as:3} holds, then there exists a finite random number $n_0(\omega)$ such that for $n\geq n_0(\omega)$, $\{x_n(\omega)\}_{n\in\mathbb{N}_0}$ satisfies \eqref{decay} with \begin{equation} \label{eq:beta} \gamma = \max\left\{ 1- \alpha + l, \, \alpha+l \right\} \end{equation} for all $\omega\in\Omega$. \end{theorem} \begin{proof} If \eqref{eq:noise} and Assumptions~\ref{as:slope} and \ref{as:chi1} ($\nu=1$) hold, then for any $\omega \in \Omega$, $$\alpha+l \xi_n(\omega) \geq \alpha-l > 1 - \frac{1}{M} \quad \mbox{ and } \quad \alpha+l \xi_n(\omega) \leq \alpha+l <1 .$$ Lemma~\ref{lem:PBC} implies that \eqref{eq:xconv} holds for all $\omega\in\Omega$. If in addition Assumption~\ref{as:3} holds, then for all $\omega\in\Omega$, by Lemma~\ref{lem:PBC}, there exists $n_0(\omega)\in \mathbb{N}_0$ such that, for $n\ge n_0(\omega)$, we have $x_n(\omega) \geq c$ and $$|x_{n+1}(\omega)-K| \leq \gamma |x_n(\omega)-K|,$$ where $$\gamma = \max\left\{ 1- (\alpha-l), \alpha+l \right\} = \max\left\{ 1- \alpha + l,\alpha+l \right\},$$ which concludes the proof. \end{proof} The Lipschitz-type condition with global constant $M$ given by \eqref{eq:Mcond} in Assumption \ref{as:slope} implies a local Lipschitz-type condition: for any $\varepsilon\in(0,K)$, there exists $M_\varepsilon\leq M$ such that \begin{equation} \label{eq:M1cond} |f(x)-K| \leq M_\varepsilon |x-K|, \quad x \in (K-\varepsilon,K+\varepsilon). \end{equation} In practice, $M_\varepsilon$ can be significantly less than $M$. We use the Ricker map to illustrate this statement. \begin{example} \label{ex_Ricker} Consider the Ricker model given by \eqref{eq:ricker} with $r>2$, so that the positive equilibrium $K=1$ is unstable. We provide lower bounds on $M$ and $M_\varepsilon$ for this model. Note first that $M$ cannot be less than the magnitude of the slope connecting the point $(K,f(K))=(1,1)$ with the maximum point $(1/r, \exp(r-1)/r)$.
This is given by $$ \frac{e^{r-1} -r}{r-1} \leq M, $$ where the left-hand side is greater than $r-1$ for $r \geq 2.8$. For instance, $r=5$ leads to the estimate $M>12$. Now consider $M_\varepsilon$. The derivative $f'(x)=(1-rx)e^{r(1-x)}$ at $x=1$ is $f'(1)=1-r$, so by continuity of the derivative of the map in \eqref{eq:ricker}, for any $M_\varepsilon>|1-r|=r-1$ there will be some interval $(1-\varepsilon,1+\varepsilon)$ upon which \eqref{eq:M1cond} holds. Again, $r=5$ leads to the estimate $M_\varepsilon>4$. Let us take $M_\varepsilon=4.5$; then \eqref{eq:M1cond} is satisfied with $\varepsilon =0.06$: \begin{equation} \label{Ricker_cond} |f(x)-1| \leq 4.5 |x-1|, \quad x \in (0.94, 1.06). \end{equation} In fact, the right endpoint of the interval upon which this inequality holds can be chosen to be arbitrarily large, since $|f(x)-1| < 4(x-1)$ for $x>1$. \end{example} \begin{remark} The ability to choose $M_\varepsilon<M$ will help us to identify where stochastic perturbations may act to stabilise the positive equilibrium $K$. \end{remark} The following Borel-Cantelli lemma (see, for example, \cite[Chapter 2.10]{Shir}) will be used in the proof of Lemma~\ref{cor:barprob}. \begin{lemma} \label{lem:BC} Let $A_1, \dots, A_n, \dots$ be a sequence of independent events. If $\displaystyle \sum_{i=1}^\infty \mathbb P\{A_i\}=\infty$, then $\mathbb P\{A_n \,\, \text{occurs infinitely often}\}=1.$ \end{lemma} \begin{lemma} \label{cor:barprob} Let $\xi_1, \dots , \xi_n, \dots$ be a sequence of independent identically distributed random variables such that $\mathbb P \left\{\xi_n\in (a,b)\right\}=\tau\in (0, 1)$ for some interval $(a,b)$, $a<b$, and each $n\in\mathbb{N}_0$. Let $n_0$ be an a.s. finite random number. Then for each $j \in {\mathbb N_0}$, \begin{multline*} \mathbb{P}\left[\text{There exists } \mathcal N=\mathcal N(j)\in (n_0, \infty)\right.\\ \left.\text{ such that (s.t.)
} \xi_{\mathcal N}\in (a,b), \, \xi_{\mathcal N+1}\in (a,b), \dots, \xi_{\mathcal N+j}\in (a,b)\right]=1. \end{multline*} \end{lemma} \begin{proof} Denote \begin{eqnarray*} B_1&:=&\{\xi_{1}\in (a,b), \, \xi_{2}\in (a,b), \dots, \xi_{j+1}\in (a,b)\}, \\ B_2&:=&\{\xi_{j+2}\in (a,b), \, \xi_{j+3}\in (a,b), \dots, \xi_{2(j+1)}\in (a,b)\},\\ &\vdots&\\ B_i&:=&\{\xi_{(i-1)(j+1)+1}\in (a,b), \, \xi_{(i-1)(j+1)+2}\in (a,b), \dots, \xi_{i(j+1)}\in (a,b)\}, \quad \text{for each} \quad i\in \mathbb N, \end{eqnarray*} so that each block consists of $j+1$ consecutive terms. The events in the sequence $\{B_n\}_{n\in\mathbb{N}}$ are mutually independent, since terms of the sequence $\{\xi_n\}_{n\in\mathbb{N}_0}$ are mutually independent, and each $\xi_i$ appears in one and only one of the events $B_n$. Moreover, we have \[ \mathbb P\{B_i\}=\tau^{j+1} \] and therefore \[ \sum_{i=1}^\infty\mathbb{P}\{B_i\}=\infty. \] The Borel-Cantelli Lemma, Lemma~\ref{lem:BC}, yields \[ \mathbb P\{B_n \,\, \text{occurs infinitely often}\}=1. \] Denoting \[ m(\omega)=\min\left\{ n> n_0(\omega): B_n(\omega) \,\, \text{occurs} \right\}, \qquad \mathcal N(\omega):=(m(\omega)-1)(j+1)+1 > n_0(\omega), \] we complete the proof. \end{proof} \begin{lemma} \label{lemma_add1} Suppose that Assumption~\ref{as:slope} holds, $M_\varepsilon>1$ and $\varepsilon \in (0,K)$ are constants for which \eqref{eq:M1cond} is valid and the unperturbed control parameter satisfies \[ \alpha\in \left(1-\frac 1{M_\varepsilon}, \, 1\right). \] Suppose also that Assumption \ref{as:chi1} holds and further that the perturbation intensity satisfies \begin{equation} \label{for_cond2} l<\min\left\{\alpha-\left(1-\frac{1}{M_\varepsilon}\right), \, \frac{1-\alpha}{\nu} \right\}. \end{equation} Let $\{x_n\}_{n\in\mathbb{N}_0}$ be any solution of equation \eqref{eq:multi} with $x_0>0$. The following is true for all $\omega\in\Omega$: if $x_j(\omega) \in (K-\varepsilon,K+\varepsilon)$ for some $j \in {\mathbb N_0}$, then $x_n(\omega) \in (K-\varepsilon,K+\varepsilon)$ for any $n \geq j$ and \[ \lim_{n \to \infty} x_n(\omega)=K.
\] \end{lemma} \begin{proof} Denote $\alpha_n(\omega)=\alpha + l \xi_{n+1}(\omega)$ for any $\omega\in\Omega$. We fix and notationally suppress the trajectory $\omega$ for the remainder of this proof. From \eqref{for_cond2} and Assumption \ref{as:chi1} we have \begin{equation}\label{eq:1map} 0<\alpha-l<\alpha_n\le \alpha+ \nu l<1, \end{equation} and \[ 0<1-\alpha-\nu l<1-\alpha_n<1-\alpha+l<1. \] Moreover, \begin{equation}\label{eq:MeBound} (1-\alpha_n) M_\varepsilon<(1-\alpha+l)M_\varepsilon<1. \end{equation} Let $x_j \in (K-\varepsilon,K+\varepsilon)$ for some $j \in {\mathbb N_0}$, and suppose that $x_j,x_{j+1}\neq K$. There are two possibilities: either $(x_{j+1}-K)(x_j-K)>0$ or $(x_{j+1}-K)(x_j-K)<0$. If $(x_{j+1}-K)(x_j-K)>0$ then, as in the proof of Lemma~\ref{lem:PBC}, $|x_{j+1}-K|<|x_j-K|$ and thus $x_{j+1} \in (K-\varepsilon,K+\varepsilon)$. If instead, $(x_{j+1}-K)(x_j-K)<0$, then we must consider two further sub-cases. Suppose first that $x_j \in (K-\varepsilon, K)$, then $f(x_j)>K$, so that $F_{\alpha_j}(x_j) > K$, and $x_{j+1} \in (K,f(x_j))$. Therefore, by \eqref{eq:F}, \eqref{eq:M1cond} and \eqref{eq:MeBound}, \begin{eqnarray*} \left| x_{j+1}-K \right| & = & F_{\alpha_j}(x_j)-K \\ & = & (1-\alpha_j) (f(x_j)- K) - \alpha_j (K-x_j) \\ & \leq & (1-\alpha_j) M_\varepsilon(K-x_j) - \alpha_j (K-x_j) \\ & \leq & |x_j-K| - \alpha_j |x_j-K|\\ &\leq& (1-\alpha + l) \left| x_{j}-K \right|. \end{eqnarray*} It follows that $x_{j+1} \in (K-\varepsilon,K+\varepsilon)$. Next, when $x_j \in (K,K+\varepsilon)$ we have $f(x_j) < K$, so that $F_{\alpha_j}(x_j) < K$ and $x_{j+1} \in (f(x_j),K)$. Again this yields \begin{eqnarray*} \left| x_{j+1}-K \right| & = & K - F_{\alpha_j}(x_{j}) \\ & = & (1-\alpha_j) (K-f(x_j)) - \alpha_j (x_j-K) \\ & \leq & (1-\alpha_j) M_\varepsilon(x_j-K) - \alpha_j (x_j-K) \\ & \leq & \left| x_{j}-K \right| - \alpha_j \left| x_{j}-K \right| \\ & \leq & (1-\alpha+l) \left| x_{j}-K \right|, \end{eqnarray*} where $0<1-\alpha+l<1$. 
Thus $x_{j+1} \in (K-\varepsilon,K+\varepsilon)$, and by induction, all $x_i \in (K-\varepsilon,K+\varepsilon)$, $i \geq j$. We also conclude that the sequence $\{| x_{n}-K |\}_{n\geq j}$ is non-negative and monotone non-increasing and therefore has a limit which can only be zero. The result follows. \end{proof} \begin{lemma} \label{lemma_add3} Suppose that Assumptions~\ref{as:slope}, \ref{as:3} and condition \eqref{eq:M1cond}, with $\varepsilon \in (0,K)$ and $M_\varepsilon \in (1,M)$, hold. Suppose further that Assumption~\ref{as:chi1} is satisfied, and \begin{equation} \label{eq_prob} a:= \alpha - l > 1-\frac{1}{M_\varepsilon}, \quad 1-\frac{1}{M} < b:= \alpha + l \nu <1. \end{equation} Let $\{x_n\}_{n\in\mathbb{N}_0}$ be any solution of equation \eqref{eq:multi} with $x_0>0$. Then \[ \lim_{n \to \infty} x_n = K,\quad a.s. \] \end{lemma} \begin{proof} By Lemma~\ref{lemma_add1}, it suffices to prove that \[ \mathbb{P}\left[\text{There exists }\mathcal N_1<\infty\text{ s.t. }x_{\mathcal N_1} \in (K - \varepsilon, K+ \varepsilon)\right]=1. \] Denote as usual $\alpha_n = \alpha + l \xi_{n+1}$ and fix $\varepsilon>0$. Let $N_1$ and $N_2$ be nonrandom positive integers defined by \eqref{def:N1} and \eqref{def:N2} respectively. Our proof is in three parts. In Part (i) we show that each trajectory $x_n(\omega)$ reaches $[\mu_1, \mu_2]$ in an $\omega$-dependent number of steps and stays there forever. In Part (ii) we prove that each trajectory $x_n(\omega)$ then enters $[c, \mu_2]$ in fewer than $N_1$ steps. In Part (iii) we verify that each trajectory then enters $[K-\varepsilon, K+\varepsilon]$ in fewer than $N_2$ steps. \begin{itemize} \item[Part (i):] Let $c$ be the constant associated with the map $f$ in Assumption \ref{as:3}, and let $\mu_1$ and $\mu_2$ be as defined by \eqref{eq:mu1def} and \eqref{eq:mu2} in Definition \ref{def:mus}. 
Since $f(x) >x$ for any $x \in [\mu_1, c]$ and $f$ is continuous, we can introduce \begin{equation} \label{d1} d_1 := \min_{x \in [\mu_1, c]} (f(x)-x) >0, \end{equation} and \begin{equation} \label{def:N1} N_1 := \left[ \frac{c-\mu_1}{(1-\alpha - \nu l)d_1} \right] +1, \end{equation} where $[t]$ denotes the integer part of $t$. Next, we denote $a$ as in \eqref{eq_prob} and fix some $a_1$ satisfying \begin{equation} \label{def:a} a_1 \in \left( 1 - \frac{1}{M}, \,\alpha+l\nu \right). \end{equation} Denote \begin{equation} \label{gam1} \gamma = \max\{ b, 1-a\}= \max\{ \alpha + l \nu, 1 - \alpha + l \}, \end{equation} where $b$ was defined in \eqref{eq_prob} and from which it follows that $\gamma\in (0, 1)$. Additionally denote \begin{equation} \label{def:N2} N_2 := \left[ \left. \ln \left( \max \left\{ \frac{K-c}{\varepsilon}, \frac{\mu_2 - K}{\varepsilon},1 \right\} \right) \right/ (-\ln \gamma) \right] + 2. \end{equation} From \eqref{eq_prob}, we have \begin{equation} \label{est:alphan} \alpha_n \in (a,b) = (\alpha-l, \alpha+l\nu)\subset (\delta, 1-\delta), \quad 1-\alpha_n \geq 1-\alpha-l\nu, \end{equation} where $\delta>0$ can be chosen, for example, to satisfy the inequality $$ \delta <\min \left\{ a,1-b \right\}. $$ So the conditions of Lemma~\ref{lemma_add2} are satisfied and we can deduce that for each trajectory $\omega\in\Omega$, there is a finite $n_0(\omega)$ such that $x_{j}(\omega) \in [\mu_1,\mu_2]$ for $j \geq n_0(\omega)$. Lemma~\ref{cor:barprob} implies that, given the constant integers $N_1$ and $N_2$ defined by \eqref{def:N1} and \eqref{def:N2} respectively, the finite random number $n_0$, and the interval $\left(\frac {a_1-\alpha}l, \nu\right)$, where $a_1$ is defined by \eqref{def:a}, \begin{multline}\label{def:mathcalN} \mathbb{P}\left[\text{There exists }\mathcal{N}\in (n_0, \infty)\text{ s.t. }\right.\\ \left.\xi_k\in \left(\frac {a_1-\alpha}l, \nu\right)\text{ for all } k=\mathcal N+1, \dots, \mathcal N+N_1+N_2 \right]=1.
\end{multline} Since $a_1<\alpha+l\nu$, we have \[ \left(\frac {a_1-\alpha}l, \nu\right)\cap (-1, \nu)\neq \emptyset, \] so that \[ \mathbb P\left\{\xi_k\in \left(\frac {a_1-\alpha}l, \nu\right)\right\}>0,\quad \text{for all}\quad k\in\mathbb{N}_0. \] Also, on a given trajectory $\omega\in\Omega$, \begin{equation} \label{cond:alphak} \xi_k(\omega)\in \left(\frac {a_1-\alpha}l, \nu\right)\quad \Rightarrow\quad\alpha_k(\omega)\in (a_1, \alpha+l\nu). \end{equation} \item[Part (ii):] From Assumption \ref{as:slope} and \eqref{est:alphan}, note that for any fixed trajectory $\omega\in\Omega$, as long as $x_n(\omega) \in [\mu_1,c]$, we have \begin{eqnarray*} x_{n+1}(\omega) & = & \alpha_n(\omega) x_n(\omega) +(1-\alpha_n(\omega))f(x_n(\omega))\\ &=&x_n(\omega)+(1-\alpha_n(\omega))(f(x_n(\omega))-x_n(\omega)) \\ & \geq & x_n(\omega) + (1-\alpha_n(\omega)) d_1\\ & \geq &x_n(\omega)+ (1-\alpha -l\nu) d_1, \end{eqnarray*} and hence at least one of $x_{n+1}(\omega), \dots, x_{n+N_1}(\omega)$ is in $[c,\mu_2]$. \item[Part (iii):] For any fixed trajectory $\omega\in\Omega$, if $x_n(\omega) \in [c,\mu_2]$ and $N_2$ successive terms of the subsequence $\{\alpha_k(\omega)\}_{k=n}^{\infty}$ satisfy \begin{equation} \label{big_alpha} \alpha_{k-1}(\omega)=\alpha+l \xi_{k}(\omega) \in (a_1, \alpha + l\nu), \quad k=n+1, n+2, \dots, n+N_2, \end{equation} we have $x_{n+N_2} \in (K-\varepsilon, K+\varepsilon)$. For the proof we assume that at least one of $K-c$, $\mu_2-K$ is not less than $\varepsilon$; otherwise any $x_n(\omega) \in [c,\mu_2]$ already lies in $(K-\varepsilon, K+\varepsilon)$. Choose $\omega$ to be any trajectory in the a.s. event described by \eqref{def:mathcalN}, and suppose that $x_n(\omega) \in [c, K-\varepsilon] \cup [K+\varepsilon, \mu_2]$. It follows from \eqref{def:mathcalN} and \eqref{cond:alphak} that on this trajectory $\alpha_{k-1}(\omega)$ satisfies \eqref{big_alpha}, for $k=n+1, \dots, n+N_2$.
Applying Lemma~\ref{lem:PBC} with $b=\alpha+\nu l$, $a_1$ as chosen in \eqref{def:a} in place of $a$, and $\gamma$ defined in \eqref{gam1}, we arrive at $$ \left| x_{j+1}(\omega)-K \right| \leq \gamma \left| x_{j}(\omega)-K \right|, $$ for $j=n, n+1, \dots, n+N_2-1$. If $x_n(\omega) \in [c,K-\varepsilon]$, this relation implies \begin{eqnarray*} \left| x_{n+N_2}(\omega) -K \right| &\leq& {\gamma}^{N_2} \left| x_n(\omega)-K \right|\\ &<& {\gamma}^{-\log_{\gamma}((K-c)/\varepsilon)} \left| x_n(\omega)-K \right| \\ &=& \frac{\varepsilon}{K-c} \left| x_n(\omega)-K \right| \leq \varepsilon. \end{eqnarray*} Similarly, for $x_n(\omega) \in [K+\varepsilon,\mu_2]$, \begin{eqnarray*} \left| x_{n+N_2}(\omega) -K \right| &\leq& {\gamma}^{N_2} \left| x_n(\omega)-K \right| \\ &<& {\gamma}^{-\log_{\gamma}((\mu_2-K)/\varepsilon)} \left| x_n(\omega)-K \right| \\ &=& \frac{\varepsilon}{\mu_2-K} \left| x_n(\omega)-K \right| \leq \varepsilon. \end{eqnarray*} \end{itemize} Now we bring together all parts of the proof. In Part (i), we verified that there exists a finite random number $n_0$ such that, for any $\omega\in\Omega$, $x_n(\omega) \in [\mu_1,\mu_2]$ for $n \geq n_0(\omega)$. For $N_1$ and $N_2$ and $a_1$, defined in \eqref{def:N1}, \eqref{def:N2} and \eqref{def:a}, respectively, \eqref{def:mathcalN} holds for some random number $\mathcal N$. Then, in particular, $x_n\in [\mu_1,\mu_2]$ for all $n \ge\mathcal N(\omega)>n_0(\omega)$. In Part (ii), we proved that, for any $\omega\in\Omega$, there exists $k(\omega)\in [0, N_1]$ such that \[ x_{\mathcal N(\omega)+k(\omega)}\in [c, \mu_2]. \] Finally, in Part (iii), we showed that \[ \mathbb{P}\left[\omega\in \Omega: |x_{\mathcal{N}_1(\omega)}(\omega)-K|\leq \varepsilon \quad \text{for} \quad \mathcal{N}_1:=\mathcal{N}+N_1+N_2 \right]=1. \] An application of Lemma~\ref{lemma_add1} concludes the proof.
\end{proof} We recall that by Corollary \ref{new_add1}, the positive equilibrium of the unperturbed PBC equation with constant $\alpha$ is globally asymptotically stable if we choose $\alpha>1-1/M$. The next theorem presents conditions under which the introduction of a stochastic perturbation of $\alpha$ has the effect of a.s. stabilising the positive equilibrium. In both cases, the presence of noise has the effect of ensuring that on an event of probability one, solutions (regardless of the positive initial value) will eventually enter the domain of local stability predicted by Lemma \ref{lemma_add1}. In this sense, we are showing that pathwise local stability, together with an appropriate noise perturbation, implies a.s. global stability. In the first part, we require the support of the stochastic perturbation to be symmetric ($\nu=1$). A.s. global stability of the equilibrium of the stochastic PBC can be achieved by an appropriate choice of noise intensity $l$ if $\alpha\leq 1-1/M$, as long as $\alpha$ is closer to $1-1/M$ than to $1-1/M_\varepsilon$. Hence the parameter range corresponding to known global asymptotic stability is extended by an appropriate stochastic perturbation. In the second part, we show that the a.s. stability region can be extended further, to $\alpha > 1-1/M_\varepsilon$, by allowing the support of the perturbations to extend to the right ($\nu>1$), so that, with probability one, values of the sequence $\{\alpha_n\}_{n\in\mathbb{N}_0}$ exceed $1-1/M$ sufficiently often on an a.s. event. \begin{theorem} \label{multi_local} Suppose that Assumptions~\ref{as:slope} and \ref{as:3} hold, and there exist $\varepsilon>0$ and $M_\varepsilon$ with $M>M_\varepsilon>1$ such that \eqref{eq:M1cond} is satisfied.
Suppose further that one of the two following conditions holds: \begin{enumerate} \item Assumption~\ref{as:chi1} holds with $\nu=1$, \begin{equation}\label{eq:al1} \alpha \in\left (1 - \frac{1}{2 M_\varepsilon}-\frac{1}{2M},1\right), \end{equation} and \begin{equation} \label{eq:noi_M1} l \in \left( \max\left\{1-\frac{1}{M} -\alpha,0\right\},\, \min\left\{\alpha-\left(1-\frac{1}{M_\varepsilon}\right),\,1-\alpha\right\}\right); \end{equation} \item Assumption~\ref{as:chi1} holds with $\nu>1$, and \begin{equation}\label{alpha_cond} \alpha \in \left(1 - \frac{1}{M_\varepsilon},1\right) \end{equation} is such that \begin{equation} \label{nu_cond} 1-\frac{1}{M} -\alpha < \nu \left( \alpha - \left(1 - \frac{1}{M_\varepsilon}\right) \right) \end{equation} and \begin{equation} \label{eq:n_M1} l \in \left( \frac{1}{\nu} \max \left\{ 1-\frac{1}{M} -\alpha, 0 \right\}, \min\left\{ \alpha - \left(1 -\frac{1}{M_\varepsilon}\right), \frac{1-\alpha}{\nu} \right\} \right). \end{equation} \end{enumerate} Let $\{x_n\}_{n\in\mathbb{N}_0}$ be any solution of equation \eqref{eq:multi} with $x_0>0$. Then \[ \lim_{n\to\infty}x_n=K,\quad \text{a.s.} \] \end{theorem} \begin{proof} We treat each part in turn, showing that the conditions of Lemma \ref{lemma_add3} are satisfied. \begin{enumerate} \item Let Assumption~\ref{as:chi1} hold with $\nu=1$. By \eqref{eq:al1} $$ 1-\frac{1}{M} -\alpha < \alpha-\left(1-\frac{1}{M_\varepsilon}\right), $$ and so the interval for $l$ in \eqref{eq:noi_M1} is non-empty. Thus, applying \eqref{eq:noi_M1}, \[ \alpha+l<1, \quad \alpha+l>1-\frac{1}{M} \] and, again by \eqref{eq:noi_M1}, \[ \alpha-l>\alpha-\left(\alpha-\left(1-\frac{1}{M_\varepsilon}\right)\right)=1-\frac 1{M_\varepsilon}. \] The result then follows from Lemma \ref{lemma_add3}. \item Let Assumption~\ref{as:chi1} hold with $\nu>1$. It is clear that $(1-1/M-\alpha)/\nu<(1-\alpha)/\nu$, and this with \eqref{nu_cond} ensures that the interval for $l$ in \eqref{eq:n_M1} is non-empty.
It remains to see that \eqref{eq:n_M1} implies $$ l\nu < 1- \alpha, \quad \alpha - l>1 -\frac{1}{M_\varepsilon}, $$ and \[ \alpha+l \nu> 1 - \frac{1}{M}. \] The result then follows from Lemma \ref{lemma_add3}. \end{enumerate} \end{proof} \section{Additive Noise} \label{sec:4} Consider the case where the PBC equation is perturbed externally, and therefore includes an additive stochastic term \begin{equation} \label{eq:add} x_{n+1}= \max \left\{ f(x_n)-\alpha (f(x_n)-x_n) + l\xi_{n+1}, 0 \right\}, \quad x_0>0, \quad n \in {\mathbb N_0}. \end{equation} Since the perturbation intensity $l$ is fixed, a.s. asymptotic convergence to the positive point equilibrium $K$ is impossible. However, we can show that if $K$ is a stable equilibrium of the unperturbed prediction-based controlled equation, then solutions of the perturbed PBC equation eventually enter and remain within a neighbourhood of $K$, where the size of that neighbourhood depends on $l$. In this section we suppose that the support of the perturbation is symmetric ($\nu=1$). \begin{lemma} \label{lem:add_noise} Suppose that Assumptions~\ref{as:slope},\ref{as:3} and \ref{as:chi1} (with $\nu=1$) hold, and that \[ \alpha \in \left(1-\frac{1}{M},1\right). \] Let $\{x_n\}_{n\in\mathbb{N}_0}$ be any solution of \eqref{eq:add} with $x_0>0$. If \begin{equation} \label{eq:l_cond} l< (1-\gamma)(K-c), \end{equation} where $\gamma$ is defined by \eqref{def_gamma} in the statement of Corollary \ref{new_add1}, then the following statement holds: for any $\varepsilon>0$, \begin{equation}\label{eq:bounds} \mathbb{P}\left[\text{There exists } \mathcal{N}_0<\infty\text{ s.t. }x_n\in\left(K-\frac{l}{1-\gamma}-\varepsilon, K+\frac{l}{1-\gamma}+ \varepsilon\right),\,n\geq\mathcal{N}_0\right]=1. \end{equation} \end{lemma} \begin{proof} Our proof has three parts. \begin{itemize} \item[Part (i):] We show that for all $\omega\in\Omega$, if $x_n(\omega) \in [c,2K-c]$, then all $x_j(\omega) \in (c,2K-c)$, for $j \geq n$. 
Since $\alpha \in (1-1/M,1)$, by \eqref{contraction} in Corollary~\ref{new_add1}, Assumptions \ref{as:3} and \ref{as:chi1} ($\nu=1$), and \eqref{eq:l_cond}, if $x_n(\omega)\in [c, 2K-c]$ and $F_\alpha$ is defined by \eqref{eq:F}, \begin{eqnarray*} \left| x_{n+1}(\omega)-K \right| & = & \left| F_{\alpha} (x_n(\omega)) + l\xi_{n+1}(\omega) - K \right| \\ &\leq& \left| F_{\alpha} (x_n(\omega)) - K \right| + \left| l\xi_{n+1}(\omega) \right| \\ & \leq & \gamma \left| x_{n}(\omega)-K \right| + l\\ &<& \gamma (K-c) + (1-\gamma) (K-c) = K-c, \end{eqnarray*} thus $x_{n+1}(\omega) \in (c,2K-c)$. By induction, $x_j(\omega) \in (c,2K-c)$ for any $j > n$.\\ \item[Part (ii):] We prove that for any $x_0 \notin [c,2K-c]$ \[ \mathbb{P}\left[\text{There exists }\mathcal N_1<\infty\text{ s.t. }x_{\mathcal{N}_1} \in [c,2K-c]\right]=1. \] If $x_0 \in [0,c]$ then the following is true for all $\omega\in\Omega$: either $x_n(\omega) \in [0,c]$ for all $n\in \mathbb N_0$, or $x_m(\omega) >c$ for some $m \in \mathbb N_0$. If $x_n(\omega) \in [0,c]$ for all $n\in \mathbb N_0$, denote \begin{equation} \label{def:r} r=\left[ \frac{2 c}{l} \right] +1. \end{equation} By Lemma~\ref{cor:barprob}, \begin{equation*} \mathbb{P}\left[\text{There exists }\mathcal N_2<\infty\text{ s.t. }\xi_{j} \in \left( \frac{1}{2}, 1\right)\text{ when }j=\mathcal N_2+1, \mathcal N_2+2, \dots, \mathcal N_2+r\right]=1. \end{equation*} Note that if $\omega\in\Omega$ is such that for some $n\in\mathbb{N}_0$, $x_n(\omega) \in (0, c]$ and $\displaystyle \xi_{n+1}(\omega) \in \left( \frac{1}{2}, 1\right)$, then, since $f(x_n(\omega))\geq x_n(\omega)$ implies $F_{\alpha}(x_n(\omega))\geq x_n(\omega)$, we have \begin{equation*} x_{n+1}(\omega) = F_{\alpha}(x_n(\omega))+l\xi_{n+1}(\omega) \geq x_n(\omega) + \frac{l}{2}. \end{equation*} Therefore $$ x_{\mathcal N_2+1}(\omega) \geq x_{\mathcal N_2}(\omega)+\frac{l}{2}, ~~\dots~, x_{\mathcal N_2+r}(\omega)\geq x_{\mathcal N_2}(\omega)+r\frac{l}{2} > c,$$ which makes the first case, $x_n(\omega) \in [0,c]$ for all $n\in \mathbb N_0$, impossible.
If $x_0> 2K-c$ then, by Assumption~\ref{as:3}, $F_{\alpha} (x_0) < F_{\alpha} (2K-c)$, so, by our choice of $l$, \begin{eqnarray*} x_1(\omega) &<& F_{\alpha} (2K-c) + l \xi_1(\omega)\\ &<& K+l \\ &<& K + \alpha(K-c)\\ &<& K+K-c\\&=&2K-c. \end{eqnarray*} Together with Part (i), this allows us to conclude that \begin{equation}\label{eq:n1rand} \mathbb{P}\left[\text{There exists } \mathcal{N}_1<\infty\text{ s.t. }x_{\mathcal{N}_1} \in [c,2K-c]\right]=1. \end{equation} \item[Part (iii):] We may now proceed to prove \eqref{eq:bounds}. By \eqref{contraction} in the statement of Corollary~\ref{new_add1}, \[ |F_{\alpha}(x)-K | \leq \gamma |x-K|,\quad x\in[c,2K-c]. \] By Parts (i) and (ii) we only need to consider trajectories $\omega$ belonging to the a.s. event referred to in \eqref{eq:n1rand}. Introducing the nonnegative sequence $\{y_n(\omega)\}_{n\geq\mathcal{N}_1(\omega)}$ with $y_n(\omega) := \left| x_{n}(\omega)-K \right|$, we notice that \begin{equation} \label{eq:flower} 0\leq y_{n+1}(\omega) \leq \gamma y_n(\omega) + l. \end{equation} There are two possibilities. \begin{enumerate} \item If $\displaystyle y_n(\omega)< \frac{l}{1-\gamma}$ then $$ y_{n+1}(\omega) \leq \gamma y_n(\omega) + l < \frac{ \gamma l + (1-\gamma) l}{1-\gamma} = \frac{l}{1-\gamma}, $$ and therefore all successive terms will remain in the interval $\left[0,\frac{l}{1-\gamma}\right)$. \item If $\displaystyle y_n(\omega) \geq \frac{l}{1-\gamma}$ then $$ y_{n+1}(\omega) \leq \gamma y_n(\omega) + l \leq \gamma y_n(\omega) + (1-\gamma) y_n(\omega) = y_n(\omega), $$ and thus $\{y_n(\omega)\}_{n\geq\mathcal{N}_1(\omega)}$ is a non-negative, non-increasing sequence for as long as each $y_n \geq l/(1-\gamma)$. \end{enumerate} Hence, either $\{y_n(\omega)\}_{n\geq\mathcal{N}_1(\omega)}$ has a limit $\displaystyle \lim_{n \to \infty} y_n(\omega) = A(\omega)$ satisfying, by \eqref{eq:flower}, $$A(\omega) \leq \frac{l}{1-\gamma},$$ or it eventually drops below $l/(1-\gamma)$.
Therefore, for any $\varepsilon>0$, \[ \mathbb{P}\left[\text{There exists } \mathcal{N}_0<\infty\text{ s.t. }y_n \in \left(0,\frac{l}{1-\gamma}+\varepsilon\right),\, n \geq \mathcal N_0\right]=1, \] which immediately implies \eqref{eq:bounds}, and the statement of the lemma. \end{itemize} \end{proof} Finally, we show that it follows from Lemma \ref{lem:add_noise} that the neighbourhood of $K$ into which solutions eventually settle can be made arbitrarily small by placing an additional constraint on the noise intensity $l$. \begin{theorem} Suppose that Assumptions~\ref{as:slope}, \ref{as:3} and \ref{as:chi1} (with $\nu=1$) hold, and that \[ \alpha \in \left(1-\frac{1}{M},1\right). \] Let $\{x_n\}_{n\in\mathbb{N}_0}$ be any solution of equation \eqref{eq:add} with $x_0>0$. For any $\varepsilon_1>0$, there exists $l$ satisfying \eqref{eq:l_cond} such that \begin{equation} \label{eq:bound1} \mathbb{P}\left[\text{There exists } \mathcal{N}<\infty\text{ s.t. }x_n \in \left[ \max\left\{ K-\varepsilon_1, 0 \right\}, K+\varepsilon_1 \right],\, n\geq\mathcal{N}\right]=1. \end{equation} \end{theorem} \begin{proof} In the statement of Lemma~\ref{lem:add_noise}, let us choose $$\varepsilon \leq \frac{\varepsilon_1}{2}, \quad l \leq \min\left\{ \varepsilon(1-\gamma), (1-\gamma)(K-c) \right\};$$ then an application of \eqref{eq:bounds} in Lemma~\ref{lem:add_noise} completes the proof. \end{proof} \section{Numerical Examples and Discussion} Our numerical experiments are mostly concerned with the stabilising effect of the multiplicative noise. First, let us illustrate stabilisation of the chaotic Ricker model using PBC with multiplicative noise. \begin{example} \label{ex_Ricker_1} Consider the chaotic Ricker map \eqref{eq:ricker} with $r=5$. As mentioned in Example~\ref{ex_Ricker}, inequality \eqref{eq:Mcond} is satisfied with $M>12$, while for $M_1=4.5$ and $\varepsilon =0.6$, \eqref{Ricker_cond} holds.
As $M \approx 12.8624$, we can take $M=12.87$ in further computations, so that \eqref{eq:Mcond} is satisfied. Thus, according to Theorem~\ref{multi_local}, we should choose $\alpha$ such that $$ \alpha > 1 - \frac{1}{M_\varepsilon} \approx 0.778, \quad \alpha -l >1 - \frac{1}{M_\varepsilon},$$ $$\alpha+ \nu l > 1 - \frac{1}{M} \approx 0.9223, \quad \alpha+ \nu l <1. $$ Let us take $\alpha =0.8$, $l= 0.02$, $\nu=1$. Then $\alpha+l\xi_n > 1 - 1/M_\varepsilon$, and Fig.~\ref{figure1} shows fast convergence of solutions to the equilibrium $K=1$. \begin{figure} \caption{Ten runs of the difference equation with $f$ as in (\protect\ref{eq:ricker}).} \label{figure1} \end{figure} Next, let us take $\alpha<1 - 1/M_\varepsilon$, for example, $\alpha=0.5$. Fig.~\ref{figure2} illustrates the dynamics of the Ricker equation with deterministic PBC ($l=0$), and the multiplicative uniformly distributed noise with the growing perturbation amplitudes $l=0.05, 0.1, 0.2$. \begin{figure} \caption{Five runs of the difference equation with $f$ as in (\protect\ref{eq:ricker}).} \label{figure2} \end{figure} Finally, let us fix $\alpha=0.5$, $l=0.05$ and increase $\nu$. The random variable $\xi$ is chosen as $\displaystyle \xi = e^{\ln(\nu+1) \ln(2\zeta)/\ln 2} - 1 $, where $\zeta$ is uniformly distributed on $(0,1)$. As $\zeta<0.5$ leads to $\xi<0$, half of the perturbations are negative. We can observe the stabilising effect of larger $\nu$ in Fig.~\ref{figure3}. \begin{figure} \caption{Five runs of the difference equation with $f$ as in (\protect\ref{eq:ricker}).} \label{figure3} \end{figure} \end{example} Next, we illustrate the stabilising effect of a multiplicative stochastic perturbation in non-controlled models. \begin{example} Let us consider the example of Singer \cite[p.~266]{Singer} where a map has a locally stable equilibrium, together with an attractive cycle.
Denote $$ F(x)=7.86 x - 23.31 x^2+ 28.75x^3- 13.30x^4, \quad a:=F(0.99) \approx 0.055438317. $$ Consider the function \begin{equation} \label{eq:S3.1} f(x) = \left\{ \begin{array}{ll} F(x),& \mbox{ if~~ } x \in [0,0.99], \\ \frac{F(0.99)}{x+0.01}, & \mbox{ if~~ } x \in (0.99, \infty). \end{array} \right. \end{equation} Thus \begin{equation} f(x) = \left\{ \begin{array}{ll} 7.86 x - 23.31 x^2+ 28.75x^3- 13.30x^4, & \mbox{ if~~ } x \in [0,0.99], \\ \frac{100 F(0.99)}{100x+1}, & \mbox{ if~~ } x \in (0.99, \infty). \end{array} \right. \label{Singfunc} \end{equation} Following \cite{Singer}, we notice that $F$ has a locally stable fixed point $K \approx 0.7263986$ together with a locally stable period two orbit $(\theta_1, \theta_2) \approx (0.3217591,0.9309168)$. Here $c \approx 0.3239799$. In Fig.~\ref{figure5a}, we illustrate the function $f(x)$ introduced in \eqref{Singfunc} on the segment $[0,1.5]$, together with a two-cycle which is locally stable in the absence of perturbations. \begin{figure} \caption{The map $f(x)$ introduced in (\protect\ref{Singfunc}).} \label{figure5a} \end{figure} In the following 10 numerical simulations, the initial point $x_0=0.3217$ is chosen very close to the lower value $0.3217591\dots$ of the stable 2-cycle. Here the perturbations are uniformly distributed in $[-l,l]$ with amplitude $l=0.02$; that is, $\xi$ is uniformly distributed in $[-1,1]$ in \begin{equation} x_{n+1}=(1+ l \xi)f(x_n). \label{pert_1} \end{equation} We observe that the stochastic perturbation can make a locally (though not globally) asymptotically stable equilibrium globally asymptotically stable. An important condition of this global stability is that there is a neighbourhood of the equilibrium which is invariant for any perturbations. On the other hand, occasional perturbations should be large enough to drive solutions away from the stable 2-cycle.
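The simulation just described can be reproduced in a few lines; a minimal sketch of \eqref{pert_1} with $f$ from \eqref{Singfunc}, $l=0.02$, $x_0=0.3217$, and $\xi$ uniform on $[-1,1]$ (the seed is arbitrary):

```python
# Minimal simulation of the perturbed map (pert_1):
# x_{n+1} = (1 + l*xi_{n+1}) f(x_n), with f from (Singfunc).
import random

def f(x):
    if x <= 0.99:
        return 7.86*x - 23.31*x**2 + 28.75*x**3 - 13.30*x**4
    a = 7.86*0.99 - 23.31*0.99**2 + 28.75*0.99**3 - 13.30*0.99**4
    return a / (x + 0.01)        # equals 100*F(0.99)/(100*x + 1)

random.seed(1)                   # arbitrary seed, for reproducibility
l, x = 0.02, 0.3217              # amplitude and initial point from the text
traj = [x]
for _ in range(2000):
    x = (1 + l * random.uniform(-1, 1)) * f(x)
    traj.append(x)
```

With $l=0$ the trajectory stays on the 2-cycle; with $l=0.02$ the runs reported above are eventually attracted to the fixed point $K \approx 0.7263986$.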
If we increase the amplitude to $l=0.03$, the attraction of solutions to the locally stable equilibrium becomes faster; see Fig.~\ref{figure5}, right. \begin{figure} \caption{Ten runs of the difference equation (\protect\ref{pert_1}).} \label{figure5} \end{figure} \end{example} The theoretical results of the present paper and the numerical simulations of this section illustrate the following conclusions: \begin{enumerate} \item As expected, in the presence of either multiplicative or additive stochastic perturbations, the unique positive equilibrium can become blurred. \item However, for a class of maps that includes commonly occurring models of population dynamics, stochasticity can contribute to the stability of this equilibrium. First, the range of the control parameter for which any solution of the controlled system converges to this (blurred) equilibrium expands. Second, even when the positive equilibrium of the deterministic equation is not globally attractive, its blurred version can become attractive under perturbations, as the numerical examples illustrate. \end{enumerate} \end{document}
\begin{document} \newtheorem{remm}{Remark} \setlength\abovedisplayskip{7pt} \setlength\belowdisplayskip{7pt} \setlength\abovedisplayshortskip{7pt} \setlength\belowdisplayshortskip{7pt} \allowdisplaybreaks \setlength{\parindent}{1em} \setlength{\parskip}{0em} \addtolength{\oddsidemargin}{3pt} \begin{frontmatter} \title{ Observer-Based Delay-Adaptive Compensator for 3-D Space Formation of Multi-Agent Systems with Leaders' Actuation} \thanks[footnoteinfo]{Corresponding author: Jie~Qi. The work was supported by the National Natural Science Foundation of China (62173084, 61773112). Mamadou Diagne appreciates the support of the National Science Foundation of the USA under the grant number (FAIN): 2222250.} \author[Wang,Qi]{Shanshan Wang}\ead{wss\[email protected]}, \author[UMICH]{Mamadou Diagne}\ead{[email protected]}, \author[Qi]{Jie Qi}\ead{[email protected]} \address[Wang]{Department of Control Science and Engineering, University of Shanghai for Science and Technology, Shanghai, P. R. China, 200093} \address[Qi]{College of Information Science and Technology, Donghua University, Shanghai, P. R. China, 201620} \address[UMICH]{Department of Mechanical and Aerospace Engineering, University of California San Diego, La Jolla, CA, USA, 92093} \begin{keyword} Multi-agent system; Unknown input delay; PDE Backstepping; Adaptive control; Formation control. \end{keyword} \begin{abstract} This paper considers the collective dynamics control of large-scale multi-agent systems (MAS) in a 3-D space subject to an unknown delay in the leaders' actuation. The communication graph of the agents is defined on a mesh-grid 2-D cylindrical surface. We model the agents' collective dynamics by a complex-valued and a real-valued 2-D reaction-advection-diffusion partial differential equation (PDE) whose states represent the 3-D position coordinates of the agents. The leader agents on the boundary are subject to an unknown actuator delay due to the computation and information transmission time.
We design a delay-adaptive controller for the 2-D PDE by using PDE Backstepping combined with a Lyapunov functional method, where the latter is used for the design of the delay update law. In order to achieve distributed formation control, we also design a boundary observer that estimates the positions of all the agents from the measurement of the position of the leaders' neighbors. Capitalizing on our recent result on the control of 1-D parabolic PDEs with unknown input delay, we use Fourier series expansion to bridge the control of 1-D PDEs to that of 2-D PDEs. To design the update law for the 2-D system, a new target system is constructed to establish the closed-loop local boundedness of the system trajectories in $H^2$ norm and asymptotic convergence for both state and output feedback configurations. The effectiveness of our formation control approach is verified by numerical simulations. \end{abstract} \end{frontmatter} \section{Introduction} MAS's cooperative formation control has attracted significant attention because of its many applications in diverse engineering fields, including UAV formation flying \cite{Mora2015,Du2019,Li2021}, multi-robot collaboration \cite{alonso2019distributed, Wang2017MAS}, vehicle queues \cite{Fax2004, Ren2007}, and satellite clusters \cite{Zetocha2000,zhou2013decentralized}. Communication delay (the exchange of information between agents) and input delay (the measurement, transmission, and processing of data required by actuators) are frequently present in MAS, causing poor performance and instability. Control laws that compensate for delay-induced instability are an efficient way to overcome these challenges. In the past few decades, studies on communication delay have been extensively conducted in MAS, based on high-order models and consensus protocol \cite{hou2017consensus, tian2012high, yu2010some}. 
Among the existing results, \cite{Lee2006} proposes a general protocol framework for directed graphs with non-uniform communication delay, where a communication-delay-independent consensus condition is obtained. \cite{Tian2008} considers a first-order multi-agent system with input and communication delays and finds a consensus condition which only depends on the input delay by means of frequency-domain analysis. A necessary and sufficient condition for leader-following consensus of a MAS with input delays is derived in \cite{zhu2015event}, by constructing a suitable triggering mechanism. Meanwhile, a similar condition is also presented in \cite{yu2010some} for second-order consensus in multi-agent dynamical systems with input delays. Applying the Artstein-Kwon-Pearson reduction method which converts delay-dependent systems into delay-free systems, \cite{ai2021distributed} studies the fixed-time event-triggered consensus problem for linear MAS with input delay. Employing a Lyapunov method for a mean-square consensus problem of leader-following stochastic MAS with input time-dependent or constant delay, \cite{tan2017leader} establishes sufficient conditions for achieving consensus. The leader-follower consensus problem for nonlinear MAS with unknown nonuniform time-varying input delay is investigated in \cite{9107091}, where for each follower an output-feedback-based distributed finite-dimensional controller independent of delay is constructed. Predominantly, attention has been drawn to the effect of input delay on the follower agents, which is substantially different from our contribution that considers actuated leader agents \cite{qi2019control} in a 3-D infinite-dimensional setup. Moreover, as stated in \cite{Lee2006,Lin2014}, communication delays between the followers are easier to handle and may not even influence the consensus condition.
Meanwhile, almost all of those works are based on ordinary differential equation (ODE) models; that is, the dynamic state of each agent needs to be described by an ODE, so the greater the number of agents, the higher the complexity of the system. Compared to ODE models, the PDE-based approach has the advantage of capturing the evolution of a large-scale multi-agent system in a compact form \cite{Meurer2011, FREUDENTHALER2020108897, WEI201947,TERUSHKIN2021109697, Frihauf2011, Qi2015, qi2017wave}, using diffusion PDEs whose states represent the position coordinates of the agents. The diffusion term, namely the Laplace operator, plays the role of the consensus protocol in ODE-based MAS models. The formation actuators, defined as the leader agents located at the boundary of the communication topology, require more information and computational effort than the follower agents and are more likely to be subject to delays that impact the formation control. Using the nominal delay-compensated boundary control law proposed in \cite{MKrstic2009,Wang2017}, the authors of \cite{qi2019control} designed a boundary feedback law for MAS in 3-D space under a constant input delay. However, the delay value is unlikely to be measurable in practice, and only the bounds of the delay can be estimated. Only a few works address MAS with unknown input delays, and they rely on ODE models. An example is \cite{LIU2021104927}, which determines the delay bound under which regulated state synchronization of MAS with unknown and nonuniform input delays can be achieved. In \cite{Zhangunknowndelay2021}, a similar bound of delay is found for semi-global state synchronization of a MAS with actuator saturation and unknown nonuniform input delay. For MAS modeled by PDEs, however, little literature so far considers unknown delays. This paper considers formation control in 3-D space of a multi-agent system with unknown actuator delay.
The collective dynamics of the MAS is modeled by two diffusion 2-D PDEs; one is a complex-valued PDE whose states represent the agents' positions in coordinates $(x,y)$ and the other is a real-valued PDE whose states represent the agents' positions in coordinate $z$. We employ the PDE Backstepping combined with a Lyapunov method to design an output feedback adaptive controller consisting of the unknown delay update law and the observer with the purpose of realizing distributed control of the formation. We propose the first study of formation control of MAS with unknown input delay in 3-D space via the PDE approach. The main contributions of the paper include \begin{enumerate} \item The PDE backstepping method is usually used for the control design of 1-D PDEs. In the paper, we introduce Fourier series expansion to reduce the dimensionality of the 2-D system to $n$ 1-D systems. Then, we assemble them back in 2-D space and prove the convergence of the resulting series. \item Compared to our previous work \cite{WANG2021109909} in 1-D, we construct a new 2-D target system involving a nonlinear term which increases the difficulty of stability analysis. For the 2-D system, $H^2$ norm boundedness is required to ensure the continuity of the state, which is the necessary condition for topology connectivity of the agents' network. \item In order to achieve distributed formation control, a boundary observer is designed to estimate the positions of all the agents by measuring the positions of the leaders' neighbors. We also prove the local asymptotic convergence of the output feedback system in $H^2$ norm. \end{enumerate} This paper is organized as follows. Section \ref{Multi-agent} introduces the PDE-based model for a MAS with actuation delay. Section \ref{controller} presents the delay-adaptive control design for the MAS collective dynamics subject to unknown actuation delay. 
Update laws for the unknown delay are designed, and the local stability of the MAS is analyzed in Section \ref{adaptive} and Section \ref{5-proof}. Observer-based boundary feedback control laws are designed in Section \ref{observer} and supportive simulation results are provided in Section \ref{simu}. The paper ends with concluding remarks and future works presented in Section \ref{perspec}. \textbf{Notation:} Throughout the paper, we adopt the following notation for $\chi(s,\theta)$ on the cylindrical surface \cite{tang2017formation,qi2019control}: The $L^2$ norm is defined as \begin{align} \label{equ-define11} \|\chi(s,\theta)\|_{L^2}= \left(\int_{0}^{1}\int_{-\pi}^{\pi}|\chi(s,\theta)|^2 \mathrm{d}\theta\mathrm{d}s\right)^\frac{1}{2}, \end{align} for $\chi(s,\theta)\in L^{2}((0,1)\times(-\pi,\pi))$. To save space, we set $\|\chi\|^2=\|\chi(s,\theta)\|^2_{L^2}$. The Sobolev norm $\|\cdot\|_{H^1}$ is defined as \begin{align} \label{equ-define12} \|\chi\|^2_{H^1} =&\|\chi\|^{2}+ \|\partial_s\chi\|^{2} +\|\partial_\theta\chi\|^{2}, \end{align} for $\chi(s,\theta)\in H^{1}((0,1)\times(-\pi,\pi))$. The Sobolev norm $\|\cdot\|_{H^2}$ is defined as \begin{align} \label{equ-define13} \|\chi\|^2_{H^2}=&\|\chi\|_{H^1}^{2}+\|\partial^2_{s}\chi\|^{2}+2\|\partial_{s\theta}\chi\|^{2}+\|\partial^2_{\theta}\chi\|^{2}, \end{align} for $\chi(s,\theta)\in H^{2}((0,1)\times(-\pi,\pi))$. \section{\protect Multi-Agent PDE Model}\label{Multi-agent} \subsection{Model description} Following \cite{qi2019control}, we consider a group of agents located on an undirected topology graph on a cylindrical surface, with indices $(i,j),~i=1,..., \ M,~j=1,..., N$, moving in a 3-D space under the coordinate axes $(x,y,z)$. A complex-valued state $u=x+\mathrm{j}y$ is defined to simplify the expression of the components on the $(x,y)$ axes.
Embedding the discrete indexes $(i,j)$ of the agents into a continuous domain $\Omega=\{(s, \theta):0<{s}<{1}$, $-\pi\leq{\theta}\leq{\pi}\}$ as $M, ~N\to {\infty}$ (see Figure \ref{figure-topology}), we obtain the following continuum model of the collective dynamics of a large-scale multi-agent system: \begin{figure} \caption{Cylindrical surface topology prescribing the communication relationship among agents. The agents at the uppermost and lowermost layers are leaders. Each follower has four neighbors.} \label{figure-topology} \end{figure} \begin{align} \label{equ-u_0} &{\partial}_{t}u(s,{\theta},t)={\Delta}{u}(s,{\theta},t) +\beta_1{\partial}_su(s,{\theta},t)+{\lambda_{1}}{u}(s,{\theta},t), \\ \label{equ-z_0} &{\partial}_{t}{z}(s,{\theta},t)={\Delta}{z}(s,{\theta},t) +\beta_2{\partial}_sz(s,{\theta},t)+{\lambda_{2}}{z}(s,{\theta},t), \\ \label{equ-bdu} &{u}(s,0,t)={u}(s,2{\pi},t),\\\label{bnd-uzero} &{u}(0,{\theta},t)=f_1(\theta),\\ &{u}(1,{\theta},t)=g_1(\theta)+{U}({\theta},t-D), \label{bnd-U} \\ \label{equ-bdz} &{z}(s,0,t)={z}(s,2{\pi},t),\\ &{z}(0,{\theta},t)=f_2(\theta),\\ \label{equ-bdz1} &{z}(1,{\theta},t)=g_2(\theta)+{Z}({\theta},t-D), \end{align} where $(s,{\theta},t) \in \Omega\times{\mathbb{R}^+}$, $u$, $\lambda_{1}$, $ \beta_1 \in{\mathbb{C}}$, $z$, $\lambda_2$, $\beta_2 \in{\mathbb{R}}$. The coordinates $(s,\theta)$ are the spatial variables denoting the indexes of the agents in the continuum and ${\Delta}$ represents the following Laplace operator \begin{align} \label{equ-lapu} {\Delta}{u}(s,{\theta},t) & ={\partial}^2_{s}u(s,{\theta},t)+{\partial}^2_{\theta}u(s,{\theta},t), \\ \label{equ-lapz} {\Delta}{z}(s,{\theta},t) & ={\partial}^2_{s}z(s,{\theta},t)+{\partial}^2_{\theta}z(s,{\theta},t), \end{align} which is regarded as the ``consensus operator'' for PDE representations \cite{Ferrari2006}.
Note that the boundary conditions (\ref{equ-bdu}) and (\ref{equ-bdz}) are periodic on the cylinder surface (see Figure \ref{figure-topology}) while $f_1(\theta)$, $g_1(\theta)$, $f_2(\theta)$ and $g_2(\theta)$ are non-zero bounded boundary conditions for the states $u$ and $z$, respectively. To control the MAS to desired formations, we consider a configuration where the agents at the boundaries $s=0$ and $s=1$ are the leaders that drive all the agents to a prescribed equilibrium. In \eqref{bnd-U} and \eqref{equ-bdz1}, we define the input delay $D>0$ affecting the actuated leaders and caused by communication lags in leader-follower configurations and measurement constraints. In practice, the exact value of the delay is hard to measure; only the bounds of the unknown delay can be estimated, so we assume: \newtheorem{assuption}{\bf{Assumption}} \begin{assuption}\label{as1} \rm{Assume that the delay satisfies $D\in \{D\in \mathbb{R}^+|\underline{D}\leq D\leq \overline {D}\}$, where $\underline{D}$ and $\overline {D}$ are the known lower and upper bounds, respectively.} \end{assuption} \begin{remm}\rm{Letting $\partial_t u(s,\theta,t)=0$ and $\partial_t z(s,\theta,t)=0$, one can solve \eqref{equ-u_0}--\eqref{equ-bdz1} without control and get steady-state profiles $\bar{u}(s,\theta)$ and $\bar{z}(s,\theta)$, which express the desired formations as functions of the values of the parameters $\lambda_{1}$, $\beta_1$, $\lambda_2$, $\beta_2$ and the open-loop boundary conditions $f_1(\theta)$, $g_1(\theta)$, $f_2(\theta)$ and $g_2(\theta)$ \cite{qi2019control}.
} \end{remm} \section{{Delay-}adaptive boundary controller design}\label{controller} First, define the error between the actual system and the desired system as $\tilde{u}(s,{\theta},t)=u(s,{\theta},t)-\bar{u}(s,{\theta})$, and then introduce a change of variable $\phi(s,{\theta},t)=e^{\frac{1}{2}\beta_1s}\tilde{u}(s,{\theta},t)$ to remove the convection term: \begin{align} \label{equ-phi-ori} &\partial_{t}{\phi}(s,{\theta},t)={\Delta}{\phi}(s,{\theta},t)+{\lambda'_1}{\phi}(s,{\theta},t),~~s\in(0,1),\\ &{\phi}(s,0,t)={\phi}(s,2{\pi},t),\\ &{\phi}(0,{\theta},t)=0,\quad{\phi}(1,{\theta},t)=\Phi(\theta,t-D),\label{initial-phi-ori} \end{align} where $\lambda'_1=\lambda_1-\frac{1}{4}\beta_1^2$ and ${\Phi}({\theta},t-D)=e^{\frac{1}{2}\beta_1}{U}({\theta},t-D)$. By employing a transport-PDE representation $\vartheta$ of the delay appearing in \eqref{bnd-U}, we transform the error system \eqref{equ-phi-ori}--\eqref{initial-phi-ori} as follows: \begin{align} \label{equ-phi} &\partial_{t}{\phi}(s,{\theta},t)={\Delta}{\phi}(s,{\theta},t)+{\lambda'_1}{\phi}(s,{\theta},t),~~s\in(0,1),\\ &{\phi}(s,0,t)={\phi}(s,2{\pi},t),\\ &{\phi}(0,{\theta},t)=0,\quad {\phi}(1,{\theta},t)={\vartheta}(0,{\theta},t),\\ &D\partial_{t}{\vartheta}(s,{\theta},t)=\partial_{s}{\vartheta}(s,{\theta},t),\label{equ-vartheta-0}\\ &\vartheta(s,{0},t)=\vartheta(s,2\pi,t),\quad\vartheta(1,{\theta},t)=\Phi(\theta,t),\label{initial-vartheta} \end{align} where $\vartheta(s,{\theta},t)=\Phi({\theta},t+D(s-1))$, defined in $\Omega\times\mathbb{R}^+$. In the following, we will adopt the design method presented in \cite{qi2019control} to derive the dynamic boundary adaptive controller of the states $u$ and $z$, but limit our analysis to the $u$ component of the state, as a similar approach applies to the $z$ dynamics.
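The transport representation can be verified directly: by construction, $\vartheta(s,\theta,t)=\Phi(\theta,t+D(s-1))$ satisfies $D\partial_t\vartheta=\partial_s\vartheta$ with $\vartheta(1,\theta,t)=\Phi(\theta,t)$. A minimal numerical sketch (suppressing $\theta$, with illustrative choices $\Phi=\sin$ and $D=0.7$):

```python
# Check that theta(s,t) = Phi(t + D*(s-1)) solves the transport equation
# D * d/dt theta = d/ds theta, with boundary condition theta(1,t) = Phi(t).
import math

D = 0.7                          # illustrative delay value
Phi = math.sin                   # illustrative boundary signal

def theta(s, t):
    return Phi(t + D * (s - 1.0))

h, s0, t0 = 1e-5, 0.4, 1.3
dt = (theta(s0, t0 + h) - theta(s0, t0 - h)) / (2 * h)   # time derivative
ds = (theta(s0 + h, t0) - theta(s0 - h, t0)) / (2 * h)   # space derivative
print(abs(D * dt - ds))          # ~0 up to finite-difference error
print(theta(1.0, 2.0) == Phi(2.0))  # True: boundary condition holds
```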
\subsection{Fourier series expansions} In order to transform the 2-D system \eqref{equ-phi}--\eqref{initial-vartheta} into a family of 1-D systems indexed by $n$, we introduce the Fourier series expansion \cite{vazquez2016explicit} as \begin{align} \label{fourier-phi} &{\phi}(s,{\theta},t)=\sum_{n=-\infty}^{\infty}{\phi}_{n}(s,t)e^{\mathrm{j}n\theta},\\ \label{fourier-vartheta} &{\vartheta}(s,{\theta},t)=\sum_{n=-\infty}^{\infty}{\vartheta}_{n}(s,t)e^{\mathrm{j}n\theta}, \\ \label{fourier-Phi} &{\Phi}({\theta},t)=\sum_{n=-\infty} ^{\infty}{\Phi}_{n}(t)e^{\mathrm{j}n\theta}, \end{align} where $\phi_n$, ${\vartheta}_{n}$, $\Phi_n$ are the $n$th Fourier coefficients, which are independent of the angular argument $\theta$. As an illustration, one of the coefficients in \eqref{fourier-phi}--\eqref{fourier-Phi} is given below \begin{align} \phi_n(s,t)=\frac{1}{2\pi}\int_{-\pi}^{\pi}\phi(s,\psi,t)e^{-\mathrm jn\psi}\mathrm d\psi. \end{align} Introducing \eqref{fourier-phi}--\eqref{fourier-Phi} into \eqref{equ-phi}--\eqref{initial-vartheta}, we get the following 1-D PDEs for the Fourier coefficients ${\phi}_{n}(s,t)$ and ${\vartheta}_{n}(s,t)$ \begin{align} \label{phi-n} &\partial_t{\phi}_{n}(s,t)=\partial^2_s{\phi}_{n}(s,t)+(\lambda'_1-n^2){\phi}_{n}(s,t),~s\in(0,1),\\ &{\phi}_{n}(0,t)=0, \quad {\phi}_{n}(1,t)={\vartheta}_{n}(0,t), \label{bnd-phi1} \\ &D\partial_t{\vartheta}_{n}(s,t)={\partial_s}{\vartheta}_{n}(s,t), \quad {\vartheta}_{n}(1,t)={\Phi}_{n}(t).\label{initial-vartheta-n} \end{align} We design a feedback adaptive controller $\Phi_n$ to stabilize each cascade system $(\phi_n, \vartheta_n)$ in \eqref{phi-n}--\eqref{initial-vartheta-n} by postulating the following transformations {\begin{align} \label{equ-transformation} {w}_n(s,t)&=\mathcal{T}_n[\phi_n](s,t)={\phi}_n(s,t)-\int_{0}^s k_n(s,\tau)\phi_n(\tau,t)\mathrm{d}\tau,\\ \label{equ-transformation2} \nonumber{h}_n(s,t)&=\mathcal{T}_n[\vartheta_n](s,t)={-\hat D(t)}\int_{0}^s p_n(s,\tau,\hat D(t))\vartheta_n(\tau,t)\mathrm{d}\tau\\
&+{\vartheta}_n(s,t)-\int_{0}^1 \gamma_n(s,\tau,\hat D(t))\phi_n(\tau,t)\mathrm{d}\tau, \end{align} with the inverse transformations \begin{align} \label{equ-transformation-inverse} {\phi}_n(s,t)&=\mathcal{T}_n^{-1}[w_n](s,t)={w}_n(s,t)+\int_{0}^s l_n(s,\tau)w_n(\tau,t)\mathrm{d}\tau,\\ \label{equ-transformation2-inverse} \nonumber{\vartheta}_n(s,t)&=\mathcal{T}_n^{-1}[h_n](s,t)={\hat D(t)}\int_{0}^s q_n(s,\tau,\hat D(t))h_n(\tau,t)\mathrm{d}\tau\\ &+h_n(s,t)+\int_{0}^1 \eta_n(s,\tau,\hat D(t))w_n(\tau,t)\mathrm{d}\tau, \end{align}} where the kernels are defined on ${\mathcal D}=\{(s,\tau,\hat D(t)):0\leq \tau \leq s \leq 1\}$, and $\hat D(t)$ is the estimate of the unknown input delay\footnote{For the sake of simplicity, $\hat D(t)$ is denoted by $\hat D$ in the remaining part of our developments.}. Hence, by the PDE backstepping method, \eqref{phi-n}--\eqref{initial-vartheta-n} maps into the following target system, parameterized by the delay estimation error $\tilde D=D-\hat D$ \begin{align} &\partial_t{w}_{n}(s,t)=\partial^2_s{w}_n(s,t)-n^2{w}_n(s,t),\label{equ-w_n-0}\\ &{w}_n(0,t)=0,\quad{w}_n(1,t)={h}_n(0,t),\\ &D{\partial_th}_n(s,t)={\partial_sh}_n(s,t)-\tilde D P_{1n}(s,t)-D\dot{\hat D}P_{2n}(s,t),\\ &{h}_n(1,t)=0,\label{equ-h_n-0} \end{align} where \begin{align} \nonumber P_{1n}&(s,t)=\int_{0}^1\bigg(-{\partial_\tau}\gamma_n(s,1,\hat D)l_n(1,\tau)+\frac{1}{\hat D}\partial_s \gamma_n(s,\tau,\hat D)\\ \nonumber&+\frac{1}{\hat D}\int_{\tau}^1\partial_s \gamma_n(s,\xi,\hat D)l_{n}(\xi,\tau)\mathrm{d}\xi\bigg)w_{n}(\tau,t)\mathrm{d}\tau\\ &-{\partial_\tau}\gamma_n(s,1,\hat D)h_{n}(0,t),\\ \nonumber P_{2n}&(s,t)=\int_{0}^1\bigg(\int_\tau^1\partial_{{\hat D}} \gamma_n(s,\tau,\hat D)l_n(\xi,\tau)\mathrm d\xi+\partial_{{\hat D}} \gamma_n(s,\tau,\hat D)\\ \nonumber&+\int_{0}^s (p_n(s,\xi,\hat D)+{\hat D}\partial_{{\hat D}} p_n(s,\xi,\hat D))\eta_n(\xi,\tau,\hat D)\mathrm{d}\xi\bigg) \nonumber\\ \nonumber&\cdot w_n(\tau,t)\mathrm{d}\tau+\int_{0}^s\bigg({\hat D}\partial_{{\hat D}} p_n(s,\tau,\hat D)+
p_n(s,\tau,\hat D)\\ \nonumber&+\hat D\int_{\tau}^s (p_n(s,\xi,\hat D)+{\hat D}\partial_{{\hat D}} p_n(s,\xi,\hat D))q_n(\xi,\tau,\hat D)\mathrm{d}\xi \bigg) \\ &\cdot h_n(\tau,t)\mathrm d\tau. \end{align} The mapping \eqref{equ-transformation}, \eqref{equ-transformation2} is well defined if the kernel functions $k_n(s,\tau)$, $\gamma_n(s,\tau)$ and $p_n(s,\tau)$ satisfy \begin{align} \label{equ-kn} &\partial^2_s{k}_{n}(s,\tau)=\partial^2_\tau{k}_{n}(s,\tau)+\lambda'_1{k}_{n}(s,\tau),\\ &{k}_{n}(s,0)=0,\quad {k}_{n}(s,s)=-\frac{\lambda'_1}{2}s,\label{bnd-kn}\\ &\partial_s{\gamma}_{n}(s,\tau,\hat D)=\hat D(\partial^2_\tau{\gamma}_{n}(s,\tau,\hat D)+(\lambda'_1-n^2){\gamma}_{n}(s,\tau,\hat D)),\label{gamman}\\ &{\gamma}_{n}(s,0,\hat D)= {\gamma}_{n}(s,1,\hat D)=0, \label{bnd-gamman}\\ &{\gamma}_{n}(0,\tau,\hat D)={k}_{n}(1,\tau),\label{initial-gamman} \\ &\partial_sp_{n}(s,\tau,\hat D)=-\partial_\tau p_{n}(s,\tau,\hat D),\label{pn}\\ &p_{n}(s,1,\hat D)=-\partial_\tau{\gamma}_{n}(s,\tau,\hat D)|_{\tau=1}.\label{bnd-pn} \end{align} The solutions of the above gain kernel PDEs are given by \begin{align} \label{equ-kn1} &{k}_{n}(s,\tau)=-\lambda'_1\tau\frac{I_1(\sqrt{\lambda'_1(s^2-{\tau}^2)})}{\sqrt{\lambda'_1(s^2-{\tau}^2)}},\\ \label{equ-gamman5} \nonumber&\gamma_n(s,\tau,\hat D)=2\sum_{i=1}^{\infty}e^{\hat D(\lambda'_1-n^2-i^2{\pi^2})s}{\mathrm{sin}}(i\pi\tau) \int_{0}^{1}{\mathrm{sin}}(i\pi\xi)\\ &~~~~~~~~~~~~~~~~~\cdot k(1,\xi){\mathrm{d}{\xi}},\\ &{p}_{n}(s,\tau,\hat D)=-{\partial}_2{\gamma}_{n}(s-\tau,1,\hat D), \end{align} where $\partial_2\gamma_{n}(\cdot,\cdot)$ denotes the derivative of $\gamma_n(\cdot,\cdot,\cdot)$ with respect to the second argument.
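The closed-form kernel \eqref{equ-kn1} can be checked against the boundary conditions \eqref{bnd-kn}: since $I_1(z)/z \to 1/2$ as $z \to 0$, the formula gives $k_n(s,s)=-\lambda'_1 s/2$, and the factor $\tau$ gives $k_n(s,0)=0$. A minimal numerical sketch, with \texttt{lam} standing for $\lambda'_1$ and the real value $3.0$ chosen only for illustration:

```python
# Verify the boundary conditions (bnd-kn) for the kernel (equ-kn1):
# k(s,tau) = -lam*tau*I1(z)/z with z = sqrt(lam*(s^2 - tau^2)).
# lam stands for lambda_1'; the value 3.0 is illustrative only.

def i1_over_z(z2):
    """Power series of I1(z)/z as a function of z^2; tends to 1/2 at 0."""
    term, total = 0.5, 0.0
    for m in range(40):
        total += term
        term *= z2 / (4.0 * (m + 1) * (m + 2))
    return total

lam = 3.0
def k(s, tau):
    return -lam * tau * i1_over_z(lam * (s*s - tau*tau))

s = 0.8
print(k(s, 0.0) == 0.0)          # True: k(s,0) = 0
print(k(s, s), -lam * s / 2)     # equal: k(s,s) = -lam*s/2
```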
{Similarly, one can get the kernels in the inverse transformations \eqref{equ-transformation-inverse}, \eqref{equ-transformation2-inverse}:} \begin{align} \label{equ-ln1} &{l}_{n}(s,\tau)=-\lambda'_1\tau\frac{J_1(\sqrt{\lambda'_1(s^2-{\tau}^2)})}{\sqrt{\lambda'_1(s^2-{\tau}^2)}},\\ \label{equ-eta5} \nonumber&\eta_n(s,\tau,\hat D)=2\sum_{i=1}^{\infty}e^{-\hat D(n^2+i^2\pi^2)s}{\mathrm{sin}}(i\pi\tau) \int_{0}^{1}{\mathrm{sin}}(i\pi\xi)\\ &~~~~~~~~~~~~~~~~~\cdot l(1,\xi){\mathrm{d}{\xi}},\\ &{q}_{n}(s,\tau,\hat D)=-{\partial}_2{\eta}_{n}(s-\tau,1,\hat D). \end{align} From \eqref{initial-vartheta-n}, \eqref{equ-transformation2} and \eqref{equ-h_n-0}, the 1-D delay-compensated adaptive controller reads \begin{align} \label{equ-controller} \nonumber{U}_n(t)=&{\hat D}\int_{0}^1 p_n(1,\tau,\hat D)\vartheta_n(\tau,t)\mathrm{d}\tau \\&+\int_{0}^1 \gamma_n(1,\tau,\hat D)\phi_n(\tau,t)\mathrm{d}\tau. \end{align} \subsection{2-D delay-compensated adaptive controller} In order to obtain the 2-D delay-compensated adaptive controller, we assemble all the $n$ 1-D transformations defined in \eqref{equ-transformation}--\eqref{equ-transformation2} in the form of Fourier series to recover the 2-D domain components and then get \begin{align}\label{transform-phi-w} \nonumber w(s,\theta,t)&{=\sum_{n=-\infty}^{\infty}{w}_{n}(s,t)e^{\mathrm{j}n\theta}}\\ &={\phi}(s,\theta,t) -\int_{0}^{s}{k}(s,\tau){\phi} (\tau,\theta,t){\mathrm{d}{\tau}},\\ \nonumber{h}(s,{\theta},t) &{=\sum_{n=-\infty}^{\infty}h_{n}(s,t)e^{\mathrm{j}n\theta}}={\vartheta}(s,{\theta},t)\\ \nonumber&-\int_{0}^1\int_{-\pi}^\pi \gamma(s,\tau,\theta,\psi,\hat D) w(\tau,\psi,t)\mathrm{d}\psi\mathrm{d}\tau \nonumber \\ &-\hat D\int_{0}^s\int_{-\pi}^\pi p(s,\tau,\theta,\psi,\hat D)\vartheta(\tau,\psi,t)\mathrm{d}\psi\mathrm{d}\tau, \label{transform-var-h} \end{align} where $k(s,\tau)$ is defined in \eqref{equ-kn1}, and the related 2-D kernels are given as \begin{align} \label{kernel-gamma} \gamma(s,\tau, \theta,\psi,\hat
D)=&2{Q(s,\theta-\psi,\hat D)}\sum_{i=1}^{\infty}e^{\hat D(\lambda'_1-i^2\pi^2)s} \mathrm{sin}(i\pi\tau)\nonumber\\ &\times \int_{0}^{1}\mathrm{sin}(i\pi\xi)k(1,\xi)\mathrm{d}\xi,\\ \label{kernel-p} p(s,\tau,\theta,\psi,\hat D)=&-\partial_2\gamma(s-\tau,1,\theta-\psi,\hat D), \end{align} where $\partial_2\gamma(\cdot,\cdot,\cdot,\cdot)$ denotes the derivative of $\gamma(\cdot,\cdot,\cdot,\cdot)$ with respect to its second argument. For all $s\in[0,1]$, we define \begin{align} \label{equ-Poisson} {Q(s,\theta-\psi,\hat D)}=\frac{1}{2\pi}\sum_{n=-\infty}^{\infty} e^{-\hat Dn^2s}e^{\mathrm{j}n(\theta-\psi)}. \end{align} {Note that $0<Q(s,\theta-\psi,\hat D)\leq P(e^{-\hat Ds},\theta -\psi)$, where $P$ denotes the Poisson kernel.} Using the properties of Poisson kernels \cite{Brown2009}, one gets the boundedness of the kernel functions $\gamma(s,\tau, \theta,\psi,\hat D)$ and $p(s,\tau, \theta,\psi,\hat D)$. In a similar way, the inverse transformations of \eqref{transform-phi-w} and \eqref{transform-var-h} are given by \begin{align} \label{equ-trans3} &{\phi}(s,{\theta},t)={w}(s,{\theta},t)+\int_{0}^s l(s,\tau)w(\tau,{\theta},t)\mathrm{d}\tau,\\ &\nonumber{\vartheta}(s,{\theta},t)=\int_{0}^1\int_{-\pi}^\pi \eta(s,\tau,\theta,\psi,\hat D)w(\tau,\psi,t)\mathrm{d}\psi\mathrm{d}\tau\\ \label{equ-trans4} &~~~~{+h}(s,{\theta},t)+\hat D\int_{0}^s\int_{-\pi}^\pi q(s,\tau,\theta,\psi,\hat D)h(\tau,\psi,t)\mathrm{d}\psi\mathrm{d}\tau, \end{align} where the gain kernels $l$, $q$ and $\eta$ are defined as \begin{align} \label{equ-kn2} &l(s,\tau)=-\lambda'_1\tau\frac{J_1(\sqrt{\lambda'_1(s^2-{\tau}^2)})}{\sqrt{\lambda'_1(s^2-{\tau}^2)}},\quad \\ &\eta(s,\tau,\theta,\psi,\hat D)=2{Q(s,\theta-\psi,\hat D)}\sum_{i=1}^{\infty}e^{-\hat Di^2\pi^2s}\mathrm{sin}(i\pi\tau)\nonumber \\ &~~~~~~~~~~~~~~~~~~~~\cdot\int_{0}^{1}\mathrm{sin}(i\pi\xi)l(1,\xi)\mathrm{d}\xi,\label{equ-rn2}\\ &q(s,\tau,\theta,\psi,\hat D)=-{\partial_2}{\eta}(s-\tau,1,\theta,\psi,\hat D)\label{equ-qn2}.
\end{align} From \eqref{equ-controller}, \eqref{kernel-gamma} and \eqref{kernel-p}, we obtain the following delay-adaptive control law: \begin{align} \label{equ-endU} &\nonumber U({\theta},t)=\int_0^1 \int_{-\pi}^\pi \gamma(1,\tau,\theta,\psi,\hat D)e^{-\frac{1}{2}\beta_1(1-\tau)}(u(\tau,{\psi},t)\\ \nonumber&~~~~-\bar{u}(\tau,{\psi}))\mathrm{d}\psi\mathrm{d}\tau-\int_{t-D}^{t}\int_{-\pi}^\pi {\partial_2}{\gamma} (\frac{t-\nu}{\hat D},1,\theta,\psi,\hat D)e^{\frac{1}{2}\beta_1}\\ &~~~~\cdot U({\psi},\nu)\mathrm{d}\psi\mathrm{d}\nu. \end{align} \subsection{Target system for the plant with unknown input delay} Similarly, in order to obtain the 2-D target system for the plant with unknown input delay, we assemble all the $n$ 1-D target systems defined in \eqref{equ-w_n-0}--\eqref{equ-h_n-0} in the form of Fourier series to return to the 2-D domain \begin{align} \label{equ-aw0-adp} &\partial_t{w}(s, {\theta}, t)={\Delta}{w}(s, {\theta}, t), \\ &{w}(s, 0, t)={w}(s, 2{\pi}, t), \\ &{w}(0, {\theta}, t)=0, \quad {w}(1, {\theta}, t)={h}(0, {\theta}, t)\label{bnd-aw-adp}, \\ & D\partial_t{h}(s, {\theta}, t)=\partial_s{h}(s, {\theta}, t)-\tilde DP_1(s, \theta, t)-D\dot{\hat D}P_2(s, \theta, t), \label{equ-ah-adp}\\ &{h}(s, 0, t)={h}(s, 2{\pi}, t),\quad {h}(1, {\theta}, t)=0,\label{initial-ah-adp} \end{align} with \begin{align} \nonumber P_{1}(s, \theta, t)=&\int_0^1\int_{-\pi}^\pi M_1(s, \tau, \theta, \psi,t)w(\tau, \psi, t)\mathrm d\psi\mathrm d\tau\\ &+\int_{-\pi}^\pi M_{2}(s, \theta, \psi,t)h(0, \psi, t)\mathrm d\psi, \label{equ-P1}\\ \nonumber P_{2}(s, \theta, t)=&\int_0^1\int_{-\pi}^{\pi}M_3(s, \tau, \theta, \psi,t)w(\tau, \psi, t)\mathrm d\psi\mathrm d\tau\\ &+\int_0^s\int_{-\pi}^{\pi}M_4(s, \tau, \theta, \psi,t)h(\tau, \psi, t)\mathrm d\psi\mathrm d\tau, \label{equ-P2} \end{align} where $M_{i}$, $i=1, 2, 3, 4$, are the functions defined below: \begin{align} \label{equ-M1} &\nonumber M_1(s, \tau, \theta, \psi,t)=\frac{1}{\hat D}\bigg(\int_{\tau}^{1}\gamma_s(s,\xi,\theta,
\psi,\hat D)l(\xi, \tau)\mathrm d\xi\\ &~~~~+\gamma_{s}(s, \tau,\theta, \psi,\hat D)\bigg)-{\partial_\tau}\gamma(s, 1, \theta, \psi,\hat D)l(1,\tau),\\ &M_2(s,\tau, \theta, \psi,t)=-{\partial_\tau}\gamma(s, 1, \theta, \psi,\hat D), \\ &\nonumber M_{3}(s, \tau, \theta, \psi,t)=\int_\tau^1\gamma_{\hat D}(s, \xi, \theta, \psi,\hat D)l(\xi, \tau)\mathrm d\xi\\ \nonumber&~~~~+\int_0^s\int_{-\pi}^\pi(p(s, \xi, \theta, \varphi,\hat D)+\hat Dp_{\hat D}(s, \tau, \theta, \varphi,\hat D))\\ &~~~~\cdot\eta(\xi, \tau, \varphi, \psi,\hat D)\mathrm d \varphi\mathrm d\xi+\gamma_{\hat D}(s, \tau, \theta, \psi,\hat D),\\ &\nonumber M_{4}(s, \tau, \theta, \psi,t)=p(s, \tau, \theta, \psi,\hat D)+\hat Dp_{\hat D}(s, \tau, \theta, \psi,\hat D)\\ \nonumber&~~~~+\hat D\int_\tau^s\int_{-\pi}^\pi(p(s, \xi, \theta, \varphi,\hat D)+\hat Dp_{\hat D}(s, \xi, \theta, \varphi,\hat D))\\ &~~~~\cdot q(\xi, \tau, \varphi, \psi,\hat D)\mathrm d \varphi\mathrm d\xi.\label{equ-M4} \end{align} \section{The main result}\label{adaptive} To estimate the unknown parameter $D$, we construct the following update law \begin{align} \label{equ-law1} \dot{\hat D}=\varrho\mathrm{Proj}_{[\underline D, \overline D]}\{\tau(t)\}, ~~~~0<\varrho<1, \end{align} where $\tau(t)$ is given as \begin{align} \label{equ-tau} \tau(t)=-2\int_0^1\int_{-\pi}^{\pi}(1+s)h(s, \theta, t) P_1(s, \theta, t)\mathrm d\theta\mathrm ds, \end{align} and the standard projection operator is defined as follows: \begin{align} \mathrm{Proj}_{[\underline D,\overline D]}\{\tau(t)\}=\left\{ \begin{array}{rcl} 0 ~~~~ & & {\hat D=\underline D \text{ and } \tau(t)<0},\\ 0 ~~~~ & & {\hat D=\overline D \text{ and } \tau(t)>0},\\ \tau(t)~~ & & {\text{otherwise}}. \end{array} \right.\label{equ-law2} \end{align} Our claim is that the time-delayed multi-agent system studied in this paper achieves stable formation control; in other words, the state of the error system \eqref{equ-phi}--\eqref{initial-vartheta} tends to zero under the effect of the adaptive controller \eqref{equ-endU}.
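As a concrete illustration, the projection-based update law \eqref{equ-law1}, \eqref{equ-law2} can be sketched numerically. This is only a schematic with placeholder values for $\underline D$, $\overline D$, $\varrho$, the step size, and the signal $\tau(t)$; it is not the closed-loop implementation.

```python
def proj(hat_D, tau, D_lo, D_hi):
    """Standard projection operator: freeze the update when the estimate
    sits on a bound of [D_lo, D_hi] and tau pushes it further outside."""
    if hat_D <= D_lo and tau < 0.0:
        return 0.0
    if hat_D >= D_hi and tau > 0.0:
        return 0.0
    return tau

def update_hat_D(hat_D, tau, rho, D_lo, D_hi, dt):
    """One explicit Euler step of the update law hat_D' = rho * Proj{tau}."""
    return hat_D + dt * rho * proj(hat_D, tau, D_lo, D_hi)

# Hypothetical bounds and gain: the true delay D is assumed to lie in [0.5, 2.0].
D_lo, D_hi, rho, dt = 0.5, 2.0, 0.5, 1e-2
hat_D = D_lo                                              # start on the lower bound
hat_D = update_hat_D(hat_D, -1.0, rho, D_lo, D_hi, dt)    # tau < 0: update frozen
print(hat_D)                                              # -> 0.5
hat_D = update_hat_D(hat_D, +1.0, rho, D_lo, D_hi, dt)    # tau > 0: estimate moves up
print(hat_D)                                              # -> 0.505
```

The projection keeps $\hat D(t)\in[\underline D,\overline D]$ for all $t$, which is what the stability analysis below relies on.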
The following theorem is established. \newtheorem{theorem}{\textbf{Theorem}} \begin{theorem} \label{theorem1} \rm{Consider the closed-loop system consisting of the plant \eqref{equ-phi}--\eqref{initial-vartheta}, the control law \eqref{equ-endU}, the update law \eqref{equ-law1}, \eqref{equ-tau} under Assumption \ref{as1}. Local boundedness and asymptotic convergence of the system trajectories are guaranteed, i.e., there exist positive constants $\mathcal M_1$, $\mathcal{R}_1$ such that if the initial conditions $(\phi_0,\vartheta_0,\hat D_0)$ satisfy $\Psi_{1}(0)<\mathcal M_1$, where \begin{align}\label{psi} \nonumber \Psi_{1}&(t)=\rVert \phi\rVert^2+\rVert\partial_s \phi\rVert^2+\rVert\partial_\theta \phi\rVert^2+\rVert\Delta \phi\rVert^2+\rVert\partial_t \phi\rVert^2+\rVert \partial_{ts} \phi\rVert^2\\ &\nonumber+\rVert\partial_{t\theta} \phi\rVert^2+\rVert \vartheta\rVert^2+\rVert\partial_s \vartheta\rVert^2+\rVert\partial_\theta \vartheta\rVert^2+\rVert\Delta \vartheta\rVert^2\\ \nonumber&+\rVert \partial_{s\theta\theta } \vartheta\rVert^2+\rVert\partial_{ss\theta} \vartheta\rVert^2+\rVert\vartheta(0, \cdot, t)\rVert^2+\rVert\partial_\theta \vartheta(0, \cdot, t)\rVert^2\\ &+\rVert\partial_\theta^2 \vartheta(0, \cdot, t)\rVert^2+\rVert\partial_t \vartheta(0, \cdot, t)\rVert^2+\rVert\partial_{t\theta} \vartheta (0, \cdot, t)\rVert^{2}+\tilde D^2, \end{align} the following holds: \begin{align} \Psi_{1}(t)\leq {\mathcal R}_1\Psi_{1}(0), \quad \forall t\geq0\label{equ-Psi}; \end{align} furthermore, \begin{align} &\lim_{t\to \infty}\max_{(s,\theta)\in[0, 1]\times[-\pi,\pi]}|\phi(s, \theta, t)|=0, \label{equ-the-6-phi}\\ &\lim_{t\to \infty}\max_{(s,\theta)\in[0, 1]\times[-\pi,\pi]}|\vartheta(s, \theta, t)|=0.\label{equ-the-6-v} \end{align}} \end{theorem} \begin{remm} \rm{Only a local stability result is obtained due to the existence of the unbounded boundary input operator combined with the presence of highly nonlinear terms in the target system
\eqref{equ-aw0-adp}--\eqref{initial-ah-adp}. In comparison to \cite{WANG2021109909}, the need to ensure continuity of the communication topology of the multi-agent system in three-dimensional space leads us to consider more complex norms of the system state (see \eqref{psi}) for the stability analysis.}\end{remm} \section{Proof of the main result}\label{5-proof} We introduce the following change of variables \begin{align} m(s, \theta, t)=w(s, \theta, t)-sh(0, \theta, t), \label{equ-m-w} \end{align} to obtain homogeneous boundary conditions for the target system \begin{align} \label{equ-am0-adp} &\partial_t{m}(s, {\theta}, t)={\Delta}{m}(s, {\theta}, t)+s\partial_{\theta}^2h(0, \theta, t)-s\partial_t h(0, \theta, t), \\ &{m}(s, 0, t)={m}(s, 2{\pi}, t), \\ &{m}(0, {\theta}, t)={m}(1, {\theta}, t)=0, \label{bnd-am-adp}\\ &D\partial_t{h}(s, {\theta}, t)=\partial_s{h}(s, {\theta}, t)-\tilde DP_1(s, \theta, t)-D\dot{\hat D}P_2(s, \theta, t), \label{equ-amh-adp}\\ &{h}(s, 0, t)={h}(s, 2{\pi}, t),\quad h(1, {\theta}, t)=0,\label{initial-amh-adp} \end{align} where $w(s, \theta, t)$ in $P_{i}(s, \theta, t)$, $i=1, 2$, is rewritten as $w(s, \theta, t)-sh(0, \theta, t)$. We will prove Theorem \ref{theorem1} by \begin{enumerate} \item proving the norm equivalence between the target system \eqref{equ-am0-adp}--\eqref{initial-amh-adp} and the error system \eqref{equ-phi}--\eqref{initial-vartheta} through Proposition \ref{proposition4-1}, \item analyzing the local stability of the target system \eqref{equ-am0-adp}--\eqref{initial-amh-adp}, and then deriving the stability of the error system based on the norm-equivalence argument, \item and establishing the regulation of the states $\phi(s, \theta, t)$ and $\vartheta(s, \theta, t)$. \end{enumerate} \noindent{\bf (1) Norm equivalence} We prove the equivalence between the error system \eqref{equ-phi}--\eqref{initial-vartheta} and the target system \eqref{equ-am0-adp}--\eqref{initial-amh-adp} in the following proposition.
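Before stating the proposition, here is a minimal numerical check that the shift \eqref{equ-m-w} indeed yields the homogeneous boundary conditions \eqref{bnd-am-adp}: any $w$ with $w(0,\theta,t)=0$ and $w(1,\theta,t)=h(0,\theta,t)$ is mapped to an $m$ vanishing at $s=0$ and $s=1$. The profiles below are arbitrary placeholders, not solutions of the plant.

```python
import numpy as np

s = np.linspace(0.0, 1.0, 101)
theta = np.linspace(-np.pi, np.pi, 64)
S, TH = np.meshgrid(s, theta, indexing="ij")

h0 = np.cos(theta)                      # placeholder boundary trace h(0, theta, t)
# Placeholder w satisfying w(0,theta)=0 and w(1,theta)=h(0,theta):
W = S * np.cos(TH) + np.sin(np.pi * S) * np.sin(TH)

M = W - S * h0[None, :]                 # change of variables m = w - s*h(0, ., t)

print(np.max(np.abs(M[0])))             # m(0, theta) = 0 (exactly)
print(np.max(np.abs(M[-1])))            # m(1, theta) = 0 (up to roundoff)
```

Both boundary rows vanish, which is what makes the Poincar\'e-type estimates used later applicable to $m$.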
\newtheorem{proposition}{\textbf{Proposition}} \begin{proposition}\label{proposition4-1} \rm{The following estimates hold between the state of the error system \eqref{equ-phi}--\eqref{initial-vartheta}, and the state of the target system \eqref{equ-am0-adp}--\eqref{initial-amh-adp}: \begin{align}\label{theo1} \nonumber&\rVert \phi\rVert^2+\rVert\partial_s \phi\rVert^2+\rVert\partial_\theta \phi\rVert^2+\rVert\Delta \phi\rVert^2+\rVert\partial_t \phi\rVert^2+\rVert \partial_{ts} \phi\rVert^2\\ &\nonumber+\rVert\partial_{t\theta} \phi\rVert^2+\rVert \vartheta\rVert^2+\rVert\partial_s \vartheta\rVert^2+\rVert\partial_\theta \vartheta\rVert^2+\rVert\Delta \vartheta\rVert^2+\rVert \partial_{s\theta\theta } \vartheta\rVert^2\\ \nonumber&+\rVert\partial_{ss\theta} \vartheta\rVert^2+\rVert\vartheta(0, \cdot,t)\rVert^2+\rVert\partial_\theta \vartheta(0, \cdot, t)\rVert^2+\rVert\partial_\theta^2 \vartheta(0,\cdot, t)\rVert^2\\ &\nonumber+\rVert\partial_t \vartheta(0,\cdot, t)\rVert^2+\rVert\partial_{t\theta} \vartheta (0, \cdot, t)\rVert^2\\ \nonumber\leq& R_1(\rVert m\rVert^2+\rVert\partial_sm\rVert^2+\rVert\partial_\theta m\rVert^2+\rVert\Delta m\rVert^2+\rVert\partial_tm\rVert^2+\rVert \partial_{ts}m\rVert^2\\ \nonumber&+\rVert\partial_{t\theta}m\rVert^2+\rVert h\rVert^2+\rVert\partial_sh\rVert^2+\rVert\partial_\theta h\rVert^2+\rVert\Delta h\rVert^2+\rVert \partial_{s\theta\theta }h\rVert^2\\ \nonumber&+\rVert\partial_{ss\theta}h\rVert^2+\rVert h(0, \cdot, t)\rVert^2+\rVert\partial_\theta h(0, \cdot, t)\rVert^2+\rVert\partial_\theta^2h(0, \cdot, t)\rVert^2\\ &+\rVert\partial_th(0, \cdot, t)\rVert^2+\rVert\partial_{t\theta}h (0, \cdot, t)\rVert^{2}),\\ \nonumber&\rVert m\rVert^2+\rVert\partial_sm\rVert^2+\rVert\partial_\theta m\rVert^2+\rVert\Delta m\rVert^2+\rVert\partial_tm\rVert^2+\rVert \partial_{ts}m\rVert^2\\ \nonumber&+\rVert\partial_{t\theta}m\rVert^2+\rVert h\rVert^2+\rVert\partial_sh\rVert^2+\rVert\partial_\theta h\rVert^2+\rVert\Delta h\rVert^2+\rVert 
\partial_{s\theta\theta }h\rVert^2\\ \nonumber&+\rVert\partial_{ss\theta}h\rVert^2+\rVert h(0, \cdot, t)\rVert^2+\rVert\partial_\theta h(0, \cdot, t)\rVert^2+\rVert\partial_\theta^2h(0, \cdot, t)\rVert^2\\ \nonumber&+\rVert\partial_th(0, \cdot, t)\rVert^2+\rVert\partial_{t\theta}h (0, \cdot, t)\rVert^{2}\\ \nonumber\leq&R_2(\rVert \phi\rVert^2+\rVert\partial_s \phi\rVert^2+\rVert\partial_\theta \phi\rVert^2+\rVert\Delta \phi\rVert^2+\rVert\partial_t \phi\rVert^2+\rVert \partial_{ts} \phi\rVert^2\\ &\nonumber+\rVert\partial_{t\theta} \phi\rVert^2+\rVert \vartheta\rVert^2+\rVert\partial_s \vartheta\rVert^2+\rVert\partial_\theta \vartheta\rVert^2+\rVert\Delta \vartheta\rVert^2\\ \nonumber&+\rVert \partial_{s\theta\theta } \vartheta\rVert^2+\rVert\partial_{ss\theta} \vartheta\rVert^2+\rVert\vartheta(0, \cdot, t)\rVert^2+\rVert\partial_\theta \vartheta(0, \cdot, t)\rVert^2\\ &+\rVert\partial_\theta^2 \vartheta(0, \cdot, t)\rVert^2+\rVert\partial_t \vartheta(0, \cdot, t)\rVert^2+\rVert\partial_{t\theta} \vartheta (0, \cdot, t)\rVert^2),\label{theo2} \end{align} where $R_i$, $i=1, 2$ are sufficiently large positive constants.} \end{proposition} The proof of Proposition \ref{proposition4-1} is given in Appendix \ref{apA}. Next, we show the local stability for the closed-loop system consisting of the $(\phi, \vartheta)$-system under the control law \eqref{equ-endU}, and with the update law \eqref{equ-law1}--\eqref{equ-tau}.
\noindent{\bf (2) Local stability analysis} Since the error system \eqref{equ-phi}--\eqref{initial-vartheta} is equivalent to the target system \eqref{equ-am0-adp}--\eqref{initial-amh-adp}, we establish the local stability of the target system by introducing the following Lyapunov-Krasovskii-type functional, \begin{align}\label{equ-6-V-adp} \nonumber V_{1}&(t)=b_1\int_0^1\int_{-\pi}^\pi \bigg(|m|^2+|\partial_sm|^2+|\partial_\theta m|^2+|\Delta m|^2+|\partial_tm|^2\\ &\nonumber+|\partial_{ts}m|^2+|\partial_{t\theta}m|^2\bigg)\mathrm d\theta\mathrm ds+D\int_0^1\int_{-\pi}^\pi(1+s)\bigg(|h|^2\\ \nonumber&+|\partial_sh|^2+|\partial_\theta h|^2+|\Delta h|^2+|\partial_{s\theta\theta}h|^2+|\partial_{ss\theta}h|^2\bigg)\mathrm d\theta\mathrm ds\\ &\nonumber+b_2D\int_{-\pi}^\pi\bigg(|h(0, \theta, t)|^2+|\partial_\theta h(0, \theta, t)|^2+|\partial_\theta^2 h(0, \theta, t)|^2\\ &+|\partial_th(0, \theta, t)|^2+|\partial_{t\theta}h(0, \theta, t)|^2\bigg)\mathrm d\theta+\frac{\tilde D^{2}}{2\varrho}. \end{align} First, taking the time derivative of the first term of \eqref{equ-6-V-adp}, and using the Cauchy--Schwarz inequality, Young's inequality, and integration by parts, we obtain \begin{align} \label{equ-6-first-adp} \nonumber&\frac{\mathrm d}{\mathrm dt}b_1\int_0^1\int_{-\pi}^\pi (|m|^2+|\partial_sm|^2+|\partial_\theta m|^2+|\Delta m|^2+|\partial_tm|^2\\ \nonumber&+|\partial_{ts}m|^2+|\partial_{t\theta}m|^2)\mathrm d\theta\mathrm ds\\ \nonumber=&2b_1\int_0^1\int_{-\pi}^\pi m(\Delta m-s\partial_th(0, \theta, t)+s\partial_{\theta}^2h(0, \theta, t))\mathrm d\theta\mathrm ds\\ \nonumber&-2b_1\int_0^1\int_{-\pi}^\pi \Delta m(\Delta m-s\partial_th(0, \theta, t)+s\partial_{\theta}^2h(0, \theta, t))\mathrm d\theta\mathrm ds\\ \nonumber&+2b_1\int_0^1\int_{-\pi}^\pi\Delta m\Delta\partial_tm\mathrm d\theta\mathrm ds+2b_1\int_0^1\int_{-\pi}^\pi\partial_tm(\Delta\partial_t m\\ \nonumber&-s\partial_t^2h(0, \theta, t)+s\partial_{t\theta\theta}h(0, \theta, t))\mathrm d\theta\mathrm
ds-2b_1\int_0^1\int_{-\pi}^\pi\Delta\partial_{t}m\\ \nonumber&\cdot(\Delta\partial_t m-s\partial_t^2h(0, \theta, t)+s\partial_{t\theta\theta}h(0, \theta, t))\mathrm d\theta\mathrm ds\\ \nonumber\leq&-b_1(\frac{3}{8}-\frac{1}{\sigma_2}-\frac{1}{\sigma_3})\rVert{m}\rVert^2-\frac{b_1}{2}\rVert\partial_s{m}\rVert^2-2b_1\rVert\partial_\theta{m}\rVert^2\\ \nonumber&-b_1(2-\frac{1}{\sigma_{1}}-\frac{1}{\sigma_{4}}-\frac{1}{\sigma_{5}})\rVert\Delta m\rVert^2-b_1(\frac{3}{8}-\frac{1}{\sigma_6}-\frac{1}{\sigma_7})\\ \nonumber&\cdot\rVert\partial_t{m}\rVert^2-\frac{b_1}{2}\rVert\partial_{ts}{m}\rVert^2-2b_1\rVert\partial_{t\theta}{m}\rVert^2-b_1(2-\sigma_{1}-\sigma_{8}\\ \nonumber&-\sigma_{9})\rVert\Delta\partial_t {m}\rVert^2+\int_{-\pi}^{\pi}\bigg(\frac{b_1(\sigma_3+\sigma_5)}{3}|\partial_th(0, \theta, t)|^2\\ \nonumber&+\frac{b_1(\sigma_2+\sigma_{4})}{3}|\partial^2_{\theta}h(0, \theta, t)|^2+b_1(\frac{\sigma_7}{3}+\frac{1}{3\sigma_9})|\partial_t^2h(0,\\ & \theta, t)|^2+b_1(\frac{\sigma_6}{3}+\frac{1}{3\sigma_{8}})|\partial_{t\theta\theta}h(0, \theta, t)|^2\bigg)\mathrm d\theta, \end{align} where $\sigma_i>0$, $i=1,2,...,9$. 
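The free constants $\sigma_1,\dots,\sigma_9$ above enter through Young's inequality $2ab\leq\sigma a^{2}+\sigma^{-1}b^{2}$, $\sigma>0$, applied to each cross term. A quick numerical sanity check of this elementary bound on random data (illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)
a, b = rng.normal(size=1000), rng.normal(size=1000)

# For every sigma > 0: 2ab <= sigma*a^2 + b^2/sigma, since
# sigma*a^2 + b^2/sigma - 2ab = (sqrt(sigma)*a - b/sqrt(sigma))^2 >= 0.
for sigma in (0.25, 1.0, 8.0):          # mimics the roles of sigma_1..sigma_9
    lhs = 2.0 * a * b
    rhs = sigma * a**2 + b**2 / sigma
    print(bool(np.all(lhs <= rhs + 1e-12)))   # True for each sigma
```

Choosing $\sigma$ large or small trades off how much weight falls on each factor, which is exactly how the negative terms in \eqref{equ-6-first-adp} are preserved.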
Then, taking the time derivative of the other two terms of \eqref{equ-6-V-adp}, we get \begin{align} \label{equ-6-second-adp} \nonumber&\frac{\mathrm{d}}{\mathrm{d}t}\bigg(D\int_{0}^{1}\int_{-\pi}^{\pi}(1+s)(|h|^2+|\partial_{s}h|^2+|\partial_{\theta}h|^2+|\Delta h|^2\\ &\nonumber+|\partial_{s\theta\theta }h|^2+|\partial_{ss\theta }h|^2){\mathrm{d}\theta}\mathrm{d}s+b_2D\int_{-\pi}^{\pi}(|h(0, \theta, t)|^2\\ \nonumber&+|\partial_{\theta}h(0, \theta, t)|^2+|\partial_{\theta}^2h(0, \theta, t)|^2+|\partial_th(0, \theta, t)|^2\\ \nonumber&+|\partial_{t\theta}h(0, \theta, t)|^2){\mathrm{d}\theta}\bigg)\\ \nonumber&\leq-\int_{-\pi}^{\pi}(|h(0, \theta, t)|^2+|\partial_{s}h(0, \theta, t)|^2+|\partial_{\theta}h(0, \theta, t)|^2\\ \nonumber&+|\Delta h(0, \theta, t)|^2+|\partial_{s\theta\theta}h(0, \theta, t)|^2+|\partial_{ss\theta}h(0, \theta, t)|^2){\mathrm{d}\theta}\\ \nonumber&+2\int_{-\pi}^{\pi}(|\partial_{s }h(1, \theta, t)|^2+|\partial_{s}^2{h}(1, \theta, t)|^2+|\partial_{s\theta\theta}h(1, \theta, t)|^2\\ \nonumber&+2|\partial_{s\theta}{h}(1, \theta, t)|^2+|\partial_{ss\theta }h(1, \theta, t)|^2)\mathrm{d}\theta-\int_{0}^{1}\int_{-\pi}^{\pi}(1+s)\\ \nonumber&\cdot(|h|^2+|\partial_sh|^2+|\partial_\theta h|^2+|\Delta h|^2+|\partial_{s\theta \theta }h|^2+|\partial_{ss\theta}h|^2){\mathrm{d}\theta}{\mathrm{d}s}\\ \nonumber&+b_{2}\int_{-\pi}^{\pi}(\sigma_{10}|h(0, \theta, t)|^2+\frac{1}{\sigma_{10}}|\partial_{s}h(0, \theta, t)|^2+\frac{1}{\sigma_{11}}\\ \nonumber&\cdot|\partial_{\theta}h(0, \theta, t)|^2+{\sigma_{11}}|\partial_{s\theta}h(0, \theta, t)|^2+\frac{1}{\sigma_{12}}|\partial_{\theta}^2h(0, \theta, t)|^2\\ \nonumber&+{\sigma_{12}}|\partial_{s\theta\theta}h(0, \theta, t)|^2+{D\sigma_{13}}|\partial_{t}h(0, \theta, t)|^2+\frac{D}{\sigma_{13}}|\partial_{t}^2h(0, \\ \nonumber&\theta, t)|^2+D\sigma_{14}|\partial_{t\theta}h(0, \theta, t)|^2+\frac{D}{\sigma_{14}}|\partial_{tt\theta}h(0, \theta, t)|^2){\mathrm{d}\theta}\\ \nonumber&-2\tilde 
D\int_{0}^{1}\int_{-\pi}^{\pi}(1+s)(hP_1+\partial_sh\partial_sP_1+\partial_\theta h\partial_\theta P_1+\Delta h(\partial^2_s P_1\\ \nonumber&+\partial^2_\theta P_1)+\partial _{s\theta\theta } h\partial_{s\theta\theta } P_1+\partial_{ss\theta } h\partial_{ss\theta} P_1){\mathrm{d}\theta}{\mathrm{d}s}\\ \nonumber&+b_2\int_{-\pi}^{\pi}(h(0, \theta, t)P_1(0, \theta, t)+\partial_\theta h(0, \theta, t)\partial_\theta P_1(0, \theta, t)\\ \nonumber&+\partial_\theta^2 h(0, \theta, t)\partial_\theta^2 P_1(0, \theta, t))\mathrm d\theta-2D\dot{\hat D}(t)\int_{0}^{1}\int_{-\pi}^{\pi}(1+s)\\ \nonumber&\cdot(hP_2+\partial_sh\partial_sP_2+\partial_\theta h\partial_\theta P_2+\Delta h(\partial^2_s P_2+\partial^2_\theta P_2)\\ \nonumber&+\partial_{s\theta\theta } h\partial_{s\theta \theta } P_2+\partial_{ss\theta } h\partial_{ss\theta } P_2){\mathrm{d}\theta}{\mathrm{d}s}+b_2\int_{-\pi}^{\pi}(h(0, \theta, t)\\ \nonumber&\cdot P_2(0, \theta, t)+\partial_\theta h(0, \theta, t)\partial_\theta P_2(0, \theta, t)+\partial_\theta^2 h(0, \theta, t)\\ &\cdot\partial_\theta^2 P_2(0, \theta, t))\mathrm{d}\theta, \end{align} where $\sigma_i>0$, $i=10,11,...,14$. In addition, since $m$ vanishes at $s=0$ and $s=1$, Poincar\'e's inequality gives \begin{align} \label{equ-w1-adp} &\int_{0}^{1}\int_{-\pi}^{\pi}|m|^2{\mathrm{d}\theta}{\mathrm{d}s}\leq{4\int_{0}^{1}\int_{-\pi}^{\pi}|\partial_sm|^2{\mathrm{d}\theta}{\mathrm{d}s}},\\ &\int_{0}^{1}\int_{-\pi}^{\pi}|\partial_t m|^2{\mathrm{d}\theta}{\mathrm{d}s}\leq{4\int_{0}^{1}\int_{-\pi}^{\pi}|\partial_{ts}m|^2{\mathrm{d}\theta}{\mathrm{d}s}}.
\end{align} From the above estimates \begin{align} \label{equ-P11-adp} \nonumber\dot{V}_{1}&(t)\leq-b_1\bigg(\frac{3}{8}-\frac{1}{\sigma_2}-\frac{1}{\sigma_3}\bigg)\rVert{m}\rVert^2-\frac{b_1}{2}\rVert\partial_s{m}\rVert^2\\ \nonumber&-2b_1\rVert\partial_\theta{m}\rVert^2-b_1\bigg(2-\frac{1}{\sigma_{1}}-\frac{1}{\sigma_{4}}-\frac{1}{\sigma_{5}}\bigg)\rVert\Delta m\rVert^2\\ \nonumber&-b_1\bigg(\frac{3}{8}-\frac{1}{\sigma_6}-\frac{1}{\sigma_7}\bigg)\rVert\partial_t{m}\rVert^2-\frac{b_1}{2}\rVert\partial_{ts}{m}\rVert^2-2b_1\rVert\partial_{t\theta}{m}\Vert^2\\ \nonumber&-b_1\bigg(2-\sigma_{1}-\sigma_{8}-\sigma_{9}\bigg)\rVert\Delta\partial_t {m}\rVert^2-(\rVert h\rVert^2+\rVert\partial_sh\rVert^2\\ \nonumber&+\rVert\partial_\theta h\rVert^2+\rVert\Delta h\rVert^2+\rVert\partial_{s\theta \theta }h\rVert^2+\rVert\partial_{ss\theta}h\rVert^2)\\ \nonumber&-\int_{-\pi}^{\pi}\bigg((1-b_{2}\sigma_{10})|h(0, \theta, t)|^2+(1-\frac{b_{2}}{\sigma_{10}}-\frac{3b_2\sigma_{13}}{D}\\ \nonumber&-\frac{b_1(\sigma_3+\sigma_5)}{D^{2}})|\partial_sh(0, \theta, t)|^2+(1-\frac{b_{2}}{\sigma_{11}})|\partial_\theta h(0, \theta, t)|^2\\ \nonumber&+(1-\frac{7b_2{}}{D^3\sigma_{13}}-\frac{7b_1}{3D^4}({\sigma_7}+\frac{1}{\sigma_9}))|\partial_s^2h(0, \theta, t)|^2+(1\\ \nonumber&-\frac{b_1(\sigma_2+\sigma_{4})}{3}-\frac{b_2}{\sigma_{12}})|\partial^2_{\theta}h(0, \theta, t)|^2+(2-b_{2}\sigma_{11}\\ \nonumber&-\frac{3b_{2}\sigma_{14}}{D})|\partial_{s\theta}h(0, \theta, t)|^2+(1-\frac{7b_{2}}{D^3\sigma_{14}})|\partial_{ss\theta}h(0, \theta, t)|^2\\ \nonumber&+{(1-\frac{b_1}{D^2}({\sigma_6}+\frac{1}{\sigma_{8}})-\sigma_{12}b_{2})|\partial_{s\theta\theta}h(0, \theta, t)|^2\bigg){\mathrm{d}\theta}}\\ \nonumber&-\tilde DE_{1}(t)-D\dot{\hat D}E_{2}(t)+\tilde D^2E_{3}(t)+\dot{\hat D}^2E_{4}(t)+\ddot{\hat D}^2E_{5}(t)\\ &-\dot{\hat D}\frac{\tilde D}{\varrho}, \end{align} where \begin{align} \nonumber E_1&(t)=2\int_{0}^{1}\int_{-\pi}^{\pi}(1+s)(hP_1+\partial_sh\partial_sP_1+\partial_\theta h\partial_\theta 
P_1\\ \nonumber&+\Delta h\partial^2_s P_1+\Delta h\partial^2_\theta P_1+\partial _{s\theta\theta} h\partial_{s\theta\theta}P_1+\partial_{ss\theta} h\partial_{ss\theta} P_1)\mathrm{d}\theta\mathrm{d}s\\ \nonumber&+2b_2\int_{-\pi}^{\pi}(h(0, \theta, t)P_1(0, \theta, t)+\partial_\theta h(0, \theta, t)\partial_\theta P_1(0, \theta, t)\\ &+\partial_\theta^2 h(0, \theta, t)\partial_\theta^2 P_1(0, \theta, t)){\mathrm{d}\theta}{\mathrm{d}s}, \\ \nonumber E_2&(t)=2\int_{0}^{1}\int_{-\pi}^{\pi}(1+s)(hP_2+\partial_sh\partial_sP_2+\partial_\theta h\partial_\theta P_2\\ \nonumber&+\Delta h\partial^2_s P_2+\Delta h\partial^2_\theta P_2+\partial_{s\theta\theta} h\partial_{s\theta \theta} P_2+\partial_{ss\theta } h\partial_{ss\theta } P_2){\mathrm{d}\theta}{\mathrm{d}s}\\ \nonumber&+2b_2\int_{-\pi}^{\pi}(h(0, \theta, t)P_2(0, \theta, t)+\partial_\theta h(0, \theta, t)\partial_\theta P_2(0, \theta, t)\\ &+\partial_\theta^2 h(0, \theta, t)\partial_\theta^2 P_2(0, \theta, t)){\mathrm{d}\theta}{\mathrm{d}s}, \\ \nonumber E_3&(t)=\int_{-\pi}^{\pi}\bigg((\frac{b_1(\sigma_3+\sigma_5)}{D^2}+\frac{3b_{2}\sigma_{13}}{D})|P_1(0, \theta, t)|^{2}\\ \nonumber&+\frac{b_1}{D^2}({\sigma_6}+\frac{1}{\sigma_8})|\partial_\theta^2 P_1(0, \theta, t)|^{2}+\frac{3b_{2}\sigma_{14}}{D}|\partial_\theta P_1(0, \theta, t)|^{2}\\ \nonumber&+{7}(\frac{b_1}{3D^{2}}({\sigma_{7}}+\frac{1}{\sigma_{9}})+\frac{b_{2}}{\sigma_{14}D})(\frac{1}{D^2}|\partial_sP_1(0, \theta, t)|^{2}\\ \nonumber&+|\partial_t P_1(0, \theta, t)|^{2})+\frac{7b_{2}}{\sigma_{15}D}(\frac{1}{D^2}|\partial_{s\theta}P_1(0, \theta, t)|^{2}\\ \nonumber&+|\partial_{t\theta} P_1(0, \theta, t)|^{2})+4(|P_1(1,\theta,t)|^2+2|\partial_{\theta} P_1(1, \theta, t)|^{2}\\ \nonumber&+|\partial_{\theta}^2 P_1(1, \theta, t)|^{2})+12(|\partial_{s}P_1(1, \theta, t)|^{2}+|\partial_{s\theta}P_1(1, \theta, t)|^{2}\\ &+D^{2}|\partial_t P_1(1, \theta, t)|^{2}+D^{2}|\partial_{t\theta}P_1(1, \theta, t)|^{2})\bigg){\mathrm{d}\theta}, \\ \nonumber 
E_4&(t)=\int_{-\pi}^{\pi}\bigg((\frac{7b_1}{3D^{2}}({\sigma_{7}}+\frac{1}{\sigma_{9}})+\frac{7b_{2}}{D\sigma_{13}})(|P_1(0, \theta, t)|^{2}\\ \nonumber&+|\partial_s P_2(0, \theta, t)|^{2}+D^{2}|\partial_t P_2(0, \theta, t)|^{2})+b_1({\sigma_6}+\frac{1}{\sigma_8})\\ \nonumber&\cdot|\partial_\theta^2 P_2(0, \theta, t)|^{2}+\frac{7b_{2}}{D\sigma_{14}}(|\partial_\theta P_1(0, \theta, t)|^{2}\\ \nonumber&+|\partial_{s\theta} P_2(0, \theta, t)|^{2}+D^{2}|\partial_{t\theta} P_2(0, \theta, t)|^{2})+(b_1(\sigma_{3}+\sigma_{5})\\ \nonumber&+3b_{2}D\sigma_{13})|P_2(0, \theta, t)|^{2}+3b_{2}D\sigma_{14}|\partial_\theta P_2(0, \theta, t)|^{2}\\ \nonumber&+4D^{2}(|P_2(1, \theta, t)|^{2}+2|\partial_{\theta} P_2(1, \theta, t)|^{2}+|\partial_{\theta}^2 P_2(1, \theta, t)|^{2})\\ \nonumber&+12D^{2}(|P_1(1, \theta, t)|^{2}+|\partial_{s}P_2(1, \theta, t)|^{2}+|\partial_\theta P_{1}(1, \theta, t)|^{2}\\ \nonumber&+D^{2}|\partial_tP_2(1, \theta, t)|^{2}+|\partial_{s\theta}P_2(1, \theta, t)|^{2}\\ &+D^{2}|\partial_{t\theta} P_2(1, \theta, t)|^{2})\bigg){\mathrm{d}\theta}, \\ \nonumber E_5&(t)=\int_{-\pi}^{\pi}\bigg((\frac{7b_1}{3}(\sigma_{7}+\frac{1}{\sigma_{9}})+\frac{7b_{2}D}{\sigma_{13}})|P_2(0, \theta, t)|^{2}\\ \nonumber&+\frac{7Db_{2}}{\sigma_{14}}|\partial_{\theta} P_2(0, \theta, t)|^{2}+12D^{4}|P_2(1, \theta, t)|^{2}\\ &+12D^{4}|\partial_{\theta}P_2(1, \theta, t)|^{2}\bigg)\mathrm{d}\theta. 
\end{align} By setting $\sigma_1=1$, $\sigma_2=\sigma_3=8$, $\sigma_4=\sigma_5=3$, $\sigma_6=\sigma_7=8$, $\sigma_8=\sigma_9=\frac{1}{3}$, $\sigma_{10}=\sigma_{11}=\sigma_{12}=\sigma_{13}=\sigma_{14}=1$, $0<b_1<\min\{\frac{3}{11}$, $\frac{\underline D^2}{11}$, $\frac{3\underline D^4}{77}\}$, $0<b_2<\min\{\frac{2\underline D}{3+\overline D}$, $ \frac{3-11b_{1}}{3}$, $\frac{\underline D^2-11b_{1}}{\overline D^2}$, $\frac{\underline D^3}{7}$, $\frac{\underline D^2-11b_1}{\overline D(3+2\overline D)}$, $\frac{3\underline D^4-77b_1}{3\overline D(\overline D^3+7)}\}$, we get the following estimate \begin{align} \label{equ-P12-adp} \nonumber\dot{V}_1(t)\leq&-\kappa_{1} V_2(t)-\tilde DE_{1}(t)-D\dot{\hat D}E_{2}(t)+\tilde D^2E_{3}(t)\\ &+\dot{\hat D}^2E_{4}(t)+\ddot{\hat D}^2E_{5}(t)-\dot{\hat D}\frac{\tilde D}{\varrho}, \end{align} where $\kappa_1=\min\{\frac{b_1}{8},~1-\frac{11b_1}{\underline D^{2}}-{b_{2}}-\frac{3b_{2}}{\underline D}\}> 0$ and \begin{align}\label{equ-V0-ada} \nonumber V_2(t)&=\rVert m\rVert^2+\rVert\partial_sm\rVert^2+\rVert\partial_\theta m\rVert^2+\rVert\Delta m\rVert^2+\rVert\partial_tm\rVert^2\\ \nonumber&+\rVert \partial_{ts} m\rVert^2+\rVert\partial_{t\theta} m\rVert^2+\rVert h\rVert ^2+\rVert \partial_s h\rVert ^2+\rVert \partial_\theta h\rVert ^2+\rVert \Delta h\rVert ^2\\ &+\rVert \partial_{s\theta\theta}h\rVert ^2+\rVert \partial_{ss\theta}h\rVert ^2+\rVert h(0, \cdot, t)\rVert^2. 
\end{align} With the help of Agmon's, the Cauchy--Schwarz, and Young's inequalities, lengthy but straightforward calculations yield the following estimates: \begin{align}\label{equ-V0-L0} &E_1(t)\leq11L_{1}V_2(t), \quad\quad E_2(t)\leq11L_{1}V_2(t), \\ &E_3(t)\leq(\alpha_1+\alpha_2\varrho^2 L_{1}^2V_2(t)^2+\tilde D^2\alpha_2)L_{1}V_2(t), \\ &E_4(t)\leq (\alpha_3+\alpha_4\varrho^2L_{1}^2V_2(t)^2+\tilde D^2\alpha_4)L_{1}V_2(t), \\ &E_5(t)\leq \alpha_4L_{1}V_2(t), \quad \dot{\hat D}(t)\leq \varrho L_{1}V_2(t), \\ &\ddot{\hat D}(t)\leq\varrho(1+\varrho L_{1}V_2(t)+|\tilde{D}|)L_{1}V_2(t), \label{equ-V0-L1} \end{align} where \begin{align} \alpha_1&=\frac{11(13\overline D^{2}+7)b_1}{3\underline D^{4}}+\frac{2(10\overline D^{2}+7)b_{2}}{\underline D^3}+24\overline D^{2}+40,\\ \alpha_2&=\frac{77b_1}{3\underline D^2}+\frac{14b_{2}}{\underline D}+24\overline D^{2}, \\ \alpha_3&=\frac{11(13\overline D^{2}+14)b_1}{3\underline D^{2}}+\frac{4(5\overline D^{2}+7)b_{2}}{\underline D}+64\overline D^{2}+24\overline D^4,\\ \alpha_4&=\frac{77b_1}{3}+{14\overline Db_{2}}+24\overline D^{4}, \end{align} and $L_1$ is a sufficiently large positive constant, whose estimation is similar to that in Appendix \ref{apA}. Then, combining \eqref{equ-V0-L0}--\eqref{equ-V0-L1}, one can get \begin{align} \label{equ-6-dot_V} \nonumber\dot{V}_{1}&(t)\leq-\kappa_1 V_2(t)+|\tilde D|(8+\frac{1}{\varrho})L_{1}V_{2}(t)+8\overline DL_{1}^2V_{2}(t)^2\\ \nonumber&+\tilde D^2\alpha_1L_{1}V_2(t)+\tilde D^2(2\alpha_2+12\alpha_4)L_{1}^3V_2(t)^3+\tilde D^4\alpha_2L_{1}\\ &\cdot V_2(t)+(\alpha_3+12\alpha_4)L_{1}^3V_2(t)^3+28\alpha_4L_{1}^5V_2(t)^5. \end{align} From \eqref{equ-6-V-adp}, it is easy to get \begin{align} \tilde{D}^2\leq 2\varrho V_{1}(t)-2\varrho\zeta_1 V_2(t), \end{align} where $\zeta_1=\min\{b_1, ~\underline D, ~b_2\underline D\}$.
Using the Cauchy--Schwarz and Young inequalities, one can deduce that \begin{align} \left|\tilde{D}\right|\leq& \frac{\varepsilon_1}{2}+\frac{\tilde D^2}{2\varepsilon_1}\leq\frac{\varepsilon_1}{2}+\frac{\varrho}{\varepsilon_1}V_1(t)-\frac{\varrho\zeta_1}{\varepsilon_1} V_2(t). \end{align} Hence, \begin{align} \nonumber\dot V_{1}&(t)\leq-\bigg(\frac{\kappa_1}{2} -8\varrho^2\alpha_2 L_{1}V_1(t)^2\bigg)V_2(t)-\bigg(\frac{\kappa_1}{2}-L_{1}(8 \\ \nonumber&+\frac{1}{\varrho})(\frac{\varepsilon_1}{2}+\frac{\varrho}{\varepsilon_1}V_{1}(t))-2\varrho\alpha_1 L_{1}V_1(t)\bigg) V_{2}(t)-L_{1}\\ \nonumber&\cdot\bigg(\frac{\varrho\zeta_1 }{\varepsilon_1}(8+\frac{1}{\varrho})-((\alpha_3+12\alpha_4)L_{1}^{2}+8\alpha_2\varrho^2\zeta_1^{2})V_2(t)\\ \nonumber&-8\overline DL_{1}\bigg)V_2(t)^2-2{\varrho }L_{1}\bigg({\alpha_1}\zeta_1-{(\alpha_2+13\alpha_4)}{L_{1}^{2}} V_2(t)\\ \nonumber& \cdot V_1(t)\bigg)V_2(t)^2-L_{1}^3\bigg(2\varrho (\alpha_2+13\alpha_4)\zeta_1-28\alpha_4L_{1}^{2} V_2(t)\bigg) \\ &\cdot V_2(t)^4.\label{equ-V1-last0} \end{align} Again, using \eqref{equ-6-V-adp} we have \begin{align} \zeta_1 V_2(t)\leq V_1(t).\label{VI} \end{align} Substituting \eqref{VI} into \eqref{equ-V1-last0}, we derive the following estimate \begin{align} \nonumber\dot V_{1}&(t)\leq-\bigg(\frac{\kappa_1}{2} -8\varrho^2\alpha_2 L_{1}V_1(t)^2\bigg)V_2(t)-\bigg(\frac{\kappa_1}{2} -L_{1}(8\\ \nonumber&+\frac{1}{\varrho})(\frac{\varepsilon_1}{2}+\frac{\varrho}{\varepsilon_1}V_{1}(t))-2\varrho\alpha_1 L_1V_1(t)\bigg)V_2(t)-L_{1}\\ \nonumber&\cdot\bigg(\frac{\varrho\zeta_1 }{\varepsilon_1} (8+\frac{1}{\varrho})-(\frac{(\alpha_3+12\alpha_4)L_{1}^{2}}{\zeta_1 } +8\alpha_2\varrho^2\zeta_1 ^{2})V_1(t)\\ \nonumber&-8\overline DL_{1}\bigg)V_2(t)^2-2{\varrho }L_1\bigg({\alpha_1}\zeta_1-\frac{(\alpha_2+13\alpha_4)L_{1}^2}{\zeta_1 } V_1(t)^2\bigg) \\ &\cdot V_2(t)^2-L_{1}^3\bigg(2\varrho(\alpha_2+13\alpha_4)\zeta_1-\frac{28\alpha_4L_{1}^{2}}{\zeta_1 } V_1(t)\bigg) V_2(t)^4.\label{equ-V1-last} \end{align} Let
$\varepsilon_1$ be defined as \begin{align} \varepsilon_1<&\min\left\{\frac{\kappa_1\varrho}{L_{1}(8\varrho+1)}, \frac{(8\varrho+1)\zeta_1 }{8\varrho\overline D L_{1}}\right\}, \end{align} and require $V_{1}(0)\leq \mu_1$, where \begin{align} \label{equ-initial-coditions1} \nonumber\mu_1\triangleq&\min\left\{\frac{\varepsilon_1(\kappa_1{\varrho }-(8\varrho+1)L_{1}\varepsilon_1)}{2\varrho L_{1}(8\varrho+1+2\alpha_1\varrho\varepsilon_1))}, {\frac{\sqrt{\kappa_1} }{4\varrho\sqrt{\alpha_2L_{1}}} },\right.\\ \nonumber& \frac{\sqrt{\alpha_1}\zeta_1 }{\sqrt{(\alpha_2+13\alpha_4)}L_{1}},\frac{\varrho(\alpha_2+13\alpha_4)\zeta_1 }{14\alpha_4L_{1}^{2}},\\ &\left.\frac{\zeta_1 ((8\varrho+1)\zeta_1 -8\overline DL_{1}\varepsilon_1) }{\varepsilon_1(4\varrho^{2}\alpha_2\zeta_1^2+(\alpha_3+12\alpha_4)L_{1}^{2})}\right\}. \end{align} Therefore, \begin{align} \nonumber\dot V_{1}(t)\leq&-(\delta_1(t)+\delta_2(t))V_2(t)-(\delta_3(t)+\delta_4(t))V_2(t)^2\\ &-\delta_5(t)V_2(t)^4, \end{align} where \begin{align} \delta_1(t)=&\frac{\kappa_1}{2} -L_{1}(8+\frac{1}{\varrho})(\frac{\varepsilon_1}{2}+\frac{\varrho}{\varepsilon_1}V_{1}(t))-2\varrho\alpha_1 L_{1}V_1(t), \label{equ-6-delta1}\\ \delta_2(t)=&\frac{\kappa_1}{2} -4\varrho^2\alpha_2 L_{1}V_1(t)^2, \\ \nonumber\delta_3(t)=&L_{1}\bigg(\frac{\varrho\zeta_1 }{\varepsilon_1}(8+\frac{1}{\varrho}) -8\overline DL_{1}-(\frac{(\alpha_3+12\alpha_4)L_{1}^{2}}{\zeta_1 } \\ &+4\varrho^2\alpha_2)V_1(t)\bigg), \\ \delta_4(t)=&2{\varrho}L_{1}\left({\alpha_1}\zeta_1 -\frac{(\alpha_2+13\alpha_4)L_{1}^{2}}{\zeta_1 } V_1(t)^2\right), \\ \delta_5(t)=&L_{1}^3\bigg({2{\varrho }{(\alpha_2+13\alpha_4)}}\zeta_1 -\frac{28\alpha_4L_{1}^{2}}{\zeta_1 } V_1(t)\bigg), \end{align} are nonnegative functions if the initial condition satisfies \eqref{equ-initial-coditions1}. Thus, $V_{1}(t)\leq V_{1}(0),~\forall t\geq0$.
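The sign conditions on $\delta_1,\dots,\delta_5$ can be checked numerically for concrete constants. All values below ($\kappa_1$, $L_1$, $\zeta_1$, $\varrho$, $\varepsilon_1$, $\alpha_i$, $\overline D$) are placeholders chosen only for illustration, not the constants produced by the proof:

```python
# Placeholder constants (illustration only, not from the proof).
kappa1, L1, zeta1, rho, eps1 = 1.0, 1.0, 0.5, 0.5, 0.02
a1, a2, a3, a4, Dbar = 1.0, 1.0, 1.0, 1.0, 2.0

def deltas(V1):
    """Evaluate delta_1..delta_5 as functions of the Lyapunov value V1."""
    d1 = kappa1/2 - L1*(8 + 1/rho)*(eps1/2 + rho/eps1*V1) - 2*rho*a1*L1*V1
    d2 = kappa1/2 - 4*rho**2*a2*L1*V1**2
    d3 = L1*(rho*zeta1/eps1*(8 + 1/rho) - 8*Dbar*L1
             - ((a3 + 12*a4)*L1**2/zeta1 + 4*rho**2*a2)*V1)
    d4 = 2*rho*L1*(a1*zeta1 - (a2 + 13*a4)*L1**2/zeta1*V1**2)
    d5 = L1**3*(2*rho*(a2 + 13*a4)*zeta1 - 28*a4*L1**2/zeta1*V1)
    return d1, d2, d3, d4, d5

# For small enough V1(0) (the role played by mu_1), every delta_i is nonnegative,
# so V1 is nonincreasing; for large V1(0) the first condition fails.
print(all(d >= 0 for d in deltas(0.001)))   # True
print(deltas(10.0)[0] >= 0)                 # False: delta_1 turns negative
```

This mirrors the local nature of the result: the nonnegativity region of the $\delta_i$ is what restricts the admissible initial data.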
Using \eqref{theo1}, we can get \begin{align} \nonumber \Psi_{1}(t)\leq &\max\{R_1, 1\}(\rVert m\rVert^2+\rVert\partial_sm\rVert^2+\rVert\partial_\theta m\rVert^2+\rVert\Delta m\rVert^2\\ \nonumber&+\rVert\partial_tm\rVert^2+\rVert \partial_{ts}m\rVert^2+\rVert\partial_{t\theta}m\rVert^2+\rVert h\rVert^2+\rVert\partial_sh\rVert^2\\ \nonumber&+\rVert\partial_\theta h\rVert^2+\rVert\Delta h\rVert^2+\rVert \partial_{s\theta\theta }h\rVert^2+\rVert\partial_{ss\theta}h\rVert^2\\ \nonumber&+\rVert h(0, \cdot, t)\rVert^2+\rVert\partial_\theta h(0, \cdot, t)\rVert^2+\rVert\partial_\theta^2h(0, \cdot, t)\rVert^2\\ \nonumber&+\rVert\partial_th(0, \cdot, t)\rVert^2+\rVert\partial_{t\theta}h (0, \cdot, t)\rVert^{2})\\ \nonumber\leq& \frac{\max\{R_1, 1\}}{\min\{b_1, \underline D, b_2\underline D, \frac{1}{2\varrho}\}}V_{1}(t)\\ \triangleq&{\mu}_2V_{1}(t)\leq{\mu}_2V_{1}(0), \label{equ-initial-coditions2} \end{align} where $\Psi_{1}(t)$ is defined in \eqref{psi} and $\mu_2=\frac{\max\{R_1, 1\}}{\min\{b_1,~ \underline D, ~b_2\underline D, ~\frac{1}{2\varrho}\}}$. Hence, combining \eqref{equ-initial-coditions1} and \eqref{equ-initial-coditions2}, we have $\mathcal M_1=\mu_1\mu_2$.
From \eqref{theo2} and \eqref{equ-6-V-adp}, one gets \begin{align} \nonumber V_{1}(t)\leq &b_1(\rVert m\rVert^2+\rVert\partial_sm\rVert^2+\rVert\partial_\theta m\rVert^2+\rVert\Delta m\rVert^2+\rVert\partial_tm\rVert^2\\ \nonumber&+\rVert \partial_{ts}m\rVert^2+\rVert\partial_{t\theta}m\rVert^2)+2\overline D(\rVert h\rVert^2+\rVert\partial_sh\rVert^2+\rVert\partial_\theta h\rVert^2\\ \nonumber&+\rVert\Delta h\rVert^2+\rVert \partial_{s\theta\theta }h\rVert^2+\rVert\partial_{ss\theta}h\rVert^2)+b_2\overline D(\rVert h(0, \cdot, t)\rVert^2\\ \nonumber&+\rVert\partial_th(0, \cdot, t)\rVert^2+\rVert\partial_{t\theta}h (0, \cdot, t)\rVert^{2}+\rVert\partial_\theta h(0, \cdot, t)\rVert^2\\ \nonumber&+\rVert\partial_\theta^2h(0, \cdot, t)\rVert^2)+\frac{\tilde D^2}{2\varrho}\\ \leq&\max\left\{\max\{b_1, 2\overline D, b_2\overline D\}R_{2}, \frac{1}{2\varrho}\right\}\Psi_{1}(t). \end{align} Knowing that \begin{align} V_{1}(0)\leq\max\{\max\{b_1, 2\overline D, b_2\overline D\}R_{2}, \frac{1}{2\varrho}\}\Psi_{1}(0), \end{align} we arrive at \begin{align} &\Psi_{1}(t)\leq {\mathcal{R}}_{1}\Psi_{1}(0),\\ &{\mathcal{R}}_{1}=\mu_2\max\left\{\max\{b_1, 2\overline D, b_2\overline D\}R_{2}, \frac{1}{2\varrho}\right\}, \end{align} which proves the local stability of the closed-loop system. Next, we will prove the regulation of the cascaded system $(\phi,\vartheta)$ to complete the proof of Theorem \ref{theorem1}. \noindent{\bf (3) Regulation of the cascaded system} From \eqref{equ-6-V-adp} and \eqref{equ-V1-last}, we obtain the boundedness of all terms in \eqref{equ-V0-ada}, and then, based on \eqref{theo1}, we also obtain the boundedness of all terms of $\Psi_{1}(t)$.
We will prove \eqref{equ-the-6-phi} and \eqref{equ-the-6-v} in Theorem \ref{theorem1} by applying Lemma D.2 in \cite{Krstic2010} to ensure the following facts: \begin{itemize} \item all terms in \eqref{equ-V0-ada} are square integrable in time, \item $\frac{\mathrm d}{\mathrm dt}(\rVert m\rVert^2)$, $\frac{\mathrm d}{\mathrm dt}(\rVert h\rVert^2)$ and $\frac{\mathrm d}{\mathrm dt}(\rVert\partial_{s}h\rVert^2)$ are bounded. \end{itemize} Knowing that \begin{align} \int_0^t\rVert m(\tau)\rVert^2\mathrm d\tau\leq\frac{1}{\inf_{0\leq\tau\leq t}\delta_1(\tau)}\int_0^t\delta_1(\tau)V_2(\tau)\mathrm d\tau, \label{equ-square-w} \end{align} and using \eqref{equ-6-delta1}, the following inequality holds: \begin{align} \nonumber \inf_{0\leq\tau \leq t}\delta_1(\tau)\geq&\frac{\kappa_1}{2} -L_{1}(8+\frac{1}{\varrho})\left(\frac{\varepsilon_1}{2} +\frac{\varrho}{\varepsilon_1}V_{1}(0)\right)\\ &-2\varrho\alpha_1 L_{1}V_1(0).\label{equ-square1} \end{align} Since $\dot V_{1}\leq-(\delta_1(t)+\delta_2(t))V_2(t)-(\delta_3(t)+\delta_4(t))V_2(t)^2-\delta_5(t) V_2(t)^4$ and the $\delta_i(t)$ are nonnegative functions, we have \begin{align} \dot V_1\leq-\delta_1(t)V_2(t).\label{equ-square00} \end{align} Integrating \eqref{equ-square00} over $[0,\ t]$ leads to \begin{align} \int_0^t\delta_1(\tau)V_2(\tau)\mathrm d\tau\leq V_1(0)\leq \mu_1.\label{equ-square2} \end{align} Substituting \eqref{equ-square1} and \eqref{equ-square2} into \eqref{equ-square-w}, we conclude that $\rVert m\rVert$ is square-integrable in time. Similarly, one can establish that the other terms in \eqref{equ-V0-ada} are square-integrable in time.
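The chain \eqref{equ-square-w}--\eqref{equ-square2} only uses positivity of $\delta_1$ and $\rVert m\rVert^2\leq V_2$; a quick numerical sanity check with synthetic stand-in signals (not the actual closed-loop trajectories):

```python
import numpy as np

# Synthetic stand-ins: any positive delta_1 and any V_2 dominating ||m||^2
# satisfy int ||m||^2 <= (1/inf delta_1) * int delta_1 * V_2, as in (equ-square-w).
t = np.linspace(0.0, 10.0, 2001)
dt = t[1] - t[0]
V2 = np.exp(-t)                    # stand-in for V_2(tau)
m_sq = 0.8 * V2                    # stand-in for ||m(tau)||^2 <= V_2(tau)
delta1 = 0.3 + 0.1 * np.cos(t)     # positive stand-in for delta_1(tau)

lhs = float(np.sum(m_sq) * dt)                        # int_0^t ||m||^2 dtau
rhs = float(np.sum(delta1 * V2) * dt / delta1.min())  # right-hand side of (equ-square-w)
```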
To prove that $\frac{\mathrm d}{\mathrm dt}(\rVert m\rVert^2)$, $\frac{\mathrm d}{\mathrm dt}(\rVert h\rVert^2)$ and $\frac{\mathrm d}{\mathrm dt}(\rVert\partial_{s}h\rVert^2)$ are bounded, we define the Lyapunov function \begin{align} \nonumber &V_3(t)=\frac{1}{2}\int_0^1\int_{-\pi}^\pi m^2\mathrm d\theta\mathrm ds+\frac{b_3D}{2}\int_0^1\int_{-\pi}^\pi(1+s)h^2\mathrm d\theta\mathrm ds\\ &~~~+\frac{b_3D}{2}\int_0^1\int_{-\pi}^\pi(1+s)(\partial_{s}h)^{2}\mathrm d\theta\mathrm ds, \label{equ-Vnew3} \end{align} where $b_3$ is a positive constant. Taking the derivative of \eqref{equ-Vnew3} with respect to time, and using integration by parts and Young's inequality, the following holds: \begin{align} \nonumber\dot V_3&(t)\leq-\rVert \partial_sm\rVert^2-b_3\rVert h\rVert^2-b_3\rVert \partial_sh\rVert^2+(\frac{1}{2\iota_7}+\frac{1}{2\iota_8})\rVert m\rVert^2\\ \nonumber&+\frac{\iota_7}{6}\rVert \partial_{\theta\theta}h\rVert^2+\frac{\iota_7}{6}\rVert \partial_{\theta\theta s}h\rVert^2+\frac{\iota_8}{2\underline D^2}|\tilde D|^2\rVert P_1(0, \cdot, t)\rVert^2\\ \nonumber&-(\frac{b_3}{2}-\frac{\iota_8}{2\underline D^2})\rVert\partial_sh(0, \cdot, t)\rVert^2+\frac{\iota_8}{2}|\dot{\hat D}|^2\rVert P_2(0, \cdot, t)\rVert^2+4b_3\\ \nonumber&\cdot|\tilde D|^{2}\rVert h\rVert\rVert P_1\rVert+2b_3|\tilde D|^2\rVert P_1(1, \cdot, t)\rVert^2+4b_3{|\tilde D|}\rVert h\rVert \rVert P_{1}\rVert\\ \nonumber&+2b_3\overline D^{2}|\dot{\hat D}|^2\rVert P_2(1, \cdot, t)\rVert^2+4b_3|\tilde D|^{2}\rVert \partial_sh\rVert\rVert \partial_sP_1\rVert\\ &+4b_3{|\tilde D|}\rVert h\rVert \rVert P_{2}\rVert+4b_3\overline D|\dot{\hat{D}}|\rVert \partial_sh\rVert \rVert\partial_sP_{2}\rVert.
\end{align} Setting $\iota_7=\iota_8=8$ and $b_3>\frac{4}{\underline D^2}$, we have \begin{align} \nonumber\dot V_3\leq&-\frac{1}{8}\rVert m\rVert^2-b_3\rVert h\rVert^2-b_3\rVert \partial_{s}h\rVert^2+2b_3|\tilde{D}|^{2}\\ \nonumber&+\frac{4}{3}(\rVert \partial_{\theta\theta}h\rVert^2+\rVert \partial_{\theta\theta s}h\rVert^2)+\frac{4|\tilde D|^2}{\underline D^2} \rVert P_1(0, \cdot, t)\rVert^2\\ \nonumber&+4|\dot{\hat D}|^2\rVert P_2(0, \cdot, t)\rVert^2+2b_3|\tilde D|^2\rVert P_1(1, \cdot, t)\rVert^2\\ \nonumber&+2b_3\overline D^{2}|\dot{\hat D}|^2\rVert P_2(1, \cdot, t)\rVert^2+2b_3\rVert P_1\rVert^2+2b_3\rVert \partial_s P_1\rVert^2\\ \nonumber&+2b_3\rVert P_2\rVert^2+2b_3\rVert \partial_s P_2\rVert^2\\ \leq&-c_{1}V_3+f_1(t)V_{3}+f_2(t), \label{equ-6-wt} \end{align} where we use Young's and Agmon's inequalities, $c_{1}=\min\{\frac{1}{4},~\frac{1}{2\overline D}\}$, and \begin{align} f_1&(t)=\frac{{2}\tilde D^{2}}{\underline D}(\tilde D^{2}+\overline D^2|\dot{\hat{D}}|^2), \\ \nonumber f_2&(t)=\frac{4}{3}(\rVert \partial_{\theta\theta}h\rVert^2+\rVert \partial_{\theta\theta s}h\rVert^2)+2b_3|\tilde D|^2\rVert P_1(1, \cdot, t)\rVert^2\\ \nonumber& +2b_3\overline D^{2}|\dot{\hat D}|^2\rVert P_2(1, \cdot, t)\rVert^2+\frac{4|\tilde D|^2}{\underline D^2} \rVert P_1(0, \cdot, t)\rVert^2+4|\dot{\hat D}|^2\\ \nonumber&\cdot\rVert P_2(0, \cdot, t)\rVert^2+2b_3\rVert P_1\rVert^2+2b_3\rVert \partial_s P_1\rVert^2+2b_3\rVert P_2\rVert^2\\ &+2b_3\rVert \partial_s P_2\rVert^2. \end{align} Combining \eqref{equ-P1} and \eqref{equ-P2}, we get that $|\dot{\hat D}|$, $\rVert P_1(0,\cdot, t)\rVert^2$, $\rVert P_2(0,\cdot, t)\rVert^2$, $\rVert P_1(1,\cdot, t)\rVert^2$, $\rVert P_2(1,\cdot, t)\rVert^2$, $\rVert P_1\rVert^2$ and $\rVert P_2\rVert^2$ are bounded and integrable. Therefore, $f_1(t)$ and $f_2(t)$ are bounded and integrable functions of time.
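The boundedness claim drawn from \eqref{equ-6-wt} is an instance of the comparison principle: if $\dot V\leq -c_1V+f_1(t)V+f_2(t)$ with $f_1, f_2\geq 0$ integrable, then $V$ stays below the Gronwall-type bound $(V(0)+\int_0^\infty f_2)\exp(\int_0^\infty f_1)$. A minimal forward-Euler sketch with synthetic integrable $f_1, f_2$ (not the actual closed-loop signals):

```python
import math

# Comparison ODE dV/dt = -c1*V + f1(t)*V + f2(t) with integrable f1, f2:
# V remains bounded even though f1, f2 temporarily push it upward.
c1, V, dt, T = 0.25, 1.0, 1e-3, 20.0
int_f1 = int_f2 = 0.0
Vmax = V
for k in range(int(T / dt)):
    tk = k * dt
    f1 = math.exp(-tk)           # synthetic integrable stand-in
    f2 = 0.5 * math.exp(-tk)     # synthetic integrable stand-in
    V += dt * (-c1 * V + f1 * V + f2)
    int_f1 += dt * f1
    int_f2 += dt * f2
    Vmax = max(Vmax, V)
bound = (1.0 + int_f2) * math.exp(int_f1)   # Gronwall-type bound with V(0) = 1
```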
Thus, from \eqref{equ-6-wt}, we deduce that $\dot V_3<\infty$, which proves the boundedness of $\frac{\mathrm d}{\mathrm dt}(\rVert m\rVert^2)$, $\frac{\mathrm d}{\mathrm dt}(\rVert h\rVert^2)$ and $\frac{\mathrm d}{\mathrm dt}(\rVert\partial_{s}h\rVert^2)$. Moreover, by Lemma D.2 in \cite{Krstic2010}, it holds that $\rVert m\rVert$, $\rVert h\rVert$, $ \rVert\partial_{s}h\rVert\to 0$ as $t\to\infty$. Since $\rVert h(0,\cdot,t)\rVert^2\leq 2\rVert h\rVert\rVert\partial_{s}h\rVert$, it follows that $\rVert h(0,\cdot,t)\rVert^2\to 0$ as $t\to\infty$. From \eqref{equ-trans3} and \eqref{equ-m-w}, one can get \begin{align} \nonumber \rVert \phi\rVert^{2}\leq&4\bigg(1+\int_{0}^{1}\int_{0}^{1}|l(s, \tau)|^{2}\mathrm d\tau\mathrm ds\bigg)\rVert m\rVert^{2}\\ &+4\int_{0}^{1}\int_{0}^{1}|l(s, \tau)|^{2}\mathrm d\tau\mathrm ds\bigg(\rVert h\rVert^{2}+\rVert \partial_sh\rVert^2\bigg). \end{align} Hence, $\rVert\phi\rVert^{2}\to 0$ as $t\to\infty$. Since $\rVert\phi\rVert_{H^2}$ is bounded, Agmon's inequality gives $\phi(s, \theta, t)^2\leq C\rVert \phi\rVert_{L^2}\rVert \phi\rVert_{H^2}$, and hence $\phi(s,\theta,t)$ is regulated. Similarly, $\vartheta(s,\theta,t)$ is also regulated. \section{Observer Design}\label{observer} We use the PDE backstepping method to design an observer that estimates the states of all agents. For the error system \eqref{equ-phi-ori}--\eqref{initial-phi-ori}, the controller state $U(\theta, t-D)$ is determined by the state feedback of the $\phi(s, \theta, t)$ system and the controller state $D$ time units in the past. Therefore, once an observer is designed for the $\phi(s, \theta, t)$ system, output-feedback control of the multi-agent system with input delay can be realized.
\subsection{Observer design} To estimate the positions of all agents under a collocated actuation-sensing configuration, where the available measurement is the boundary derivative $u_s(1, \theta, t)$, we propose the following observer for the $\phi$-system \eqref{equ-phi-ori}--\eqref{initial-phi-ori}: \begin{align} \label{equ-6-hatphi} \nonumber&\partial_{t}{\hat\phi}(s, \theta, t)=\Delta{\hat\phi}(s, \theta, t)+{\lambda'_1}{\hat\phi}(s, \theta, t)\\ &~~~~~~~~~~~~~~~~~~~+p_{1}(s)(\phi_s(1, \theta, t)-\hat\phi_s(1, \theta, t)), \\ &\hat{\phi}(s,0,t)=\hat{\phi}(s,2{\pi},t),\quad {\hat\phi}(0, \theta, t)=0, \\ &{\hat\phi}(1, \theta, t)=\Phi(\theta,t-D)+p_{10}(\phi_s(1, \theta, t)-\hat\phi_s(1, \theta, t)),\label{initial-6-hatphi-bud} \end{align} where $\hat\phi(s, \theta, t)=e^{\frac{1}{2}\beta_1 s}(\hat u(s, \theta, t)-\overline u(s, \theta))$, and $p_1(s)$ and $p_{10}$ are the observer gains. These gains are derived by introducing the estimation error $\tilde\phi(s, \theta, t)=\phi(s, \theta, t)-\hat\phi(s, \theta, t)$, leading to the error system: \begin{align} \label{equ-6-errophi} &\partial_{t}{\tilde\phi}(s, \theta, t)=\Delta{\tilde\phi}(s, \theta, t)+{\lambda'_1}{\tilde\phi}(s, \theta, t)-p_{1}(s)\partial_s\tilde\phi(1, \theta, t), \\ &\tilde{\phi}(s,0,t)=\tilde{\phi}(s,2{\pi},t),\quad{\tilde\phi}(0, \theta, t)=0, \\ &{\tilde\phi}(1, \theta, t)=-p_{10}\partial_s\tilde\phi(1, \theta, t).
\label{initial-6-errophi-bud} \end{align} The following transformation \begin{align} \tilde \phi(s, \theta, t)=\tilde w(s, \theta, t)-\int_s^1{P}(s, \tau)\tilde w(\tau, \theta, t)\mathrm d\tau, \label{equ-6-errotranwphin} \end{align} maps the error system \eqref{equ-6-errophi}--\eqref{initial-6-errophi-bud} into the following target system \begin{align} \label{equ-6-errown} &\partial_{t}{\tilde w}(s, \theta, t)=\Delta{\tilde w}(s, \theta, t),\quad {\tilde w}(s, 0, t)={\tilde w}(s, 2{\pi}, t), \\ &{\tilde w}(0, \theta, t)=0, \quad{\tilde w}(1, \theta, t)=0, \label{initial-6-wn-bud} \end{align} where \begin{align}\label{solution-6-P} P(s,\tau)=&-\lambda'_1s\frac{I_1(\sqrt{\lambda'_1(\tau^2-s^2)})}{\sqrt{\lambda'_1(\tau^2-s^2)}}, \end{align} and the observer gains are $p_1(s)=-\lambda'_1s\frac{I_1(\sqrt{\lambda'_1(1-s^2)})}{\sqrt{\lambda'_1(1-s^2)}}$ and $p_{10}=0$. Now, introducing the variable transformation $\hat u(s, \theta, t)=\overline u(s, \theta)+e^{-\frac{1}{2}\beta_1 s}\hat \phi(s, \theta, t)$, we obtain the following observer for the $u$-system: \begin{align} \label{equ-6-hatu} &\nonumber\partial_{t}{\hat u}(s, \theta, t)=\Delta{\hat u }(s, \theta, t)+\beta_1{\partial}_{s}{\hat u }(s, \theta, t)+{\lambda_1}{\hat u}(s, \theta, t)\\ &~~-\lambda'_1s\frac{I_1(\sqrt{\lambda'_1(1-s^2)})}{\sqrt{\lambda'_1(1-s^2)}}(\partial_s u(1, \theta, t)-\partial_s\hat u(1, \theta, t)), \\ &\hat{u}(s,0,t)=\hat{u}(s,2{\pi},t),\quad{\hat u}(0, \theta, t)=f_1(s, \theta), \\ &{\hat u}(1, \theta, t)=g_1(s, \theta)+U(\theta, t-D), \label{equ-6-hatu-bud} \end{align} where \begin{align} \label{equ-6-U-hat} \nonumber U({\theta}, t)=&-\int_{t-\hat D}^{t}\int_{-\pi}^\pi {\gamma} (1, 1-\frac{t-\nu}{\hat D},\hat D, \theta-\psi)e^{\frac{1}{2}\beta_1}\\ \nonumber&\cdot U({\psi}, \nu)\mathrm{d}\psi\mathrm{d}\nu+ \int_0^1 \int_{-\pi}^\pi \gamma(1, \tau,\hat D, \theta-\psi)\\ &\cdot e^{-\frac{1}{2}\beta_1(1-\tau)}(\hat u(\tau, {\psi}, t)-\bar{u}(\tau, {\psi}))\mathrm{d}\psi\mathrm{d}\tau.
\end{align} Similarly, the observer for the $z$ system can be deduced. To analyze the stability of $(u,\hat u)$, we consider the closed-loop system composed of \eqref{equ-u_0}, \eqref{equ-bdu}--\eqref{bnd-U}, the adaptive controller \eqref{equ-6-U-hat}, and the observer \eqref{equ-6-hatu}--\eqref{equ-6-hatu-bud}, and state the following theorem. \begin{theorem} \label{therom-6-2} \rm {Consider the closed-loop system consisting of the plant \eqref{equ-phi}--\eqref{initial-vartheta}, adaptive controller \eqref{equ-6-U-hat}, observer \eqref{equ-6-hatu}--\eqref{equ-6-hatu-bud}, and update law \eqref{equ-law1}--\eqref{equ-tau} under Assumption \ref{as1}. Local boundedness and asymptotic convergence of the system trajectories are guaranteed, i.e., there exist positive constants ${\mathcal M}_2$, ${\mathcal{R}}_2$ such that if the initial conditions $(\hat \phi_0,\vartheta_0,\tilde \phi_0,\hat D)$ satisfy $\Psi_{2}(0)<{\mathcal M}_{2}$, where\begin{align}\label{equ-psi-2} \nonumber \Psi_{2}&(t)=\rVert \hat \phi\rVert^2+\rVert\partial_s\hat \phi\rVert^2+\rVert\partial_\theta\hat \phi\rVert^2+\rVert\Delta\hat \phi\rVert^2+\rVert\partial_t \hat \phi\rVert^2+\rVert \partial_{ts} \hat \phi\rVert^2\\ &\nonumber+\rVert\partial_{t\theta} \hat \phi\rVert^2+\rVert \vartheta\rVert^2+\rVert\partial_s \vartheta\rVert^2+\rVert\partial_\theta \vartheta\rVert^2+\rVert\Delta \vartheta\rVert^2\\ \nonumber&+\rVert \partial_{s\theta\theta } \vartheta\rVert^2+\rVert\partial_{ss\theta} \vartheta\rVert^2+\rVert\partial_t \vartheta(0, \cdot, t)\rVert^2+\rVert\partial_{t\theta} \vartheta (0, \cdot, t)\rVert^{2}\\ \nonumber&+\rVert \tilde \phi\rVert^2+\rVert\partial_s\tilde \phi\rVert^2+\rVert\partial_\theta\tilde \phi\rVert^2+\rVert\Delta\tilde \phi\rVert^2+\rVert\partial_t \tilde \phi\rVert^2+\rVert \partial_{ts} \tilde \phi\rVert^2\\ &+\rVert\partial_{t\theta} \tilde \phi\rVert^2+\rVert\Delta\partial_{t} \tilde \phi\rVert^2+\tilde D^2, \end{align} the following holds: \begin{align} \Psi_{2}(t)\leq
{\mathcal R}_{2}\Psi_{2}(0), \quad \forall t\geq0; \end{align} furthermore, \begin{align} &\lim_{t\to \infty}\max_{(s,\theta)\in[0, 1]\times[-\pi,\pi]}|\hat \phi(s, \theta, t)|=0, \label{equ-6-hat-phi}\\ &\lim_{t\to \infty}\max_{(s,\theta)\in[0, 1]\times[-\pi,\pi]}|\tilde \phi(s, \theta, t)|=0, \label{equ-6-tilde-phi}\\ &\lim_{t\to \infty}\max_{(s,\theta)\in[0, 1]\times[-\pi,\pi]}|\vartheta(s, \theta, t)|=0.\label{equ-6-v} \end{align}} \end{theorem} Based on the variable transformations $\phi(s, \theta, t)=e^{\frac{1}{2}\beta_1 s}(u(s, \theta, t)-\overline u(s,\theta))$ and $\hat \phi(s, \theta, t)=e^{\frac{1}{2}\beta_1 s}(\hat u(s, \theta, t)-\overline u(s, \theta))$, the $(u,\hat u)$-system is equivalent to the $(\phi,\hat\phi)$-system. Meanwhile, the $(\phi,\hat\phi)$-system is equivalent to the $(\hat \phi,\vartheta,\tilde\phi)$-system. Thus, the $(u,\hat u)$-system is equivalent to the $(\hat \phi,\vartheta,\tilde\phi)$-system. Therefore, we prove the stability of the $(\hat \phi,\vartheta,\tilde\phi)$-system using the full-state feedback control method.
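As a side note on implementation, the observer gain $p_1(s)=-\lambda'_1 s\,I_1(\sqrt{\lambda'_1(1-s^2)})/\sqrt{\lambda'_1(1-s^2)}$ obtained above is smooth up to $s=1$ because $I_1(x)/x\to 1/2$ as $x\to 0$. A dependency-free numerical sketch using the power series of the modified Bessel function $I_1$ (the value $\lambda'_1=4$ is a hypothetical placeholder):

```python
import math

def i1_over_x(x):
    """I_1(x)/x = sum_k (x^2/4)^k / (2 * k! * (k+1)!), smooth at x = 0."""
    q, term, total = x * x / 4.0, 0.5, 0.0
    for k in range(30):
        total += term
        term *= q / ((k + 1) * (k + 2))
    return total

def observer_gain_p1(s, lam=4.0):
    """p_1(s) = -lam * s * I_1(x)/x with x = sqrt(lam*(1 - s^2)); lam ~ lambda'_1."""
    x = math.sqrt(lam * (1.0 - s * s))
    return -lam * s * i1_over_x(x)

g0 = observer_gain_p1(0.0)   # no correction at the clamped end s = 0
g1 = observer_gain_p1(1.0)   # equals -lam/2, by the limit I_1(x)/x -> 1/2
```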
We introduce the following transformations \begin{align} &\hat w(s, \theta, t)=\hat\phi(s, \theta, t)-\int_0^sk(s, \tau)\hat\phi(\tau, \theta, t)\mathrm d\tau, \label{equ-out-6-hattranwphi}\\ &\nonumber h(s, \theta, t)=\vartheta(s, \theta, t)\\ \nonumber&~~~-\int_0^1\int_{-\pi}^\pi\gamma(s, \tau, \theta, \psi, \hat D)\hat\phi(\tau, \psi, t)\mathrm d\psi\mathrm d\tau\\ &~~~-\hat D\int_0^s\int_{-\pi}^\pi p(s, \tau, \theta, \psi, \hat D)\vartheta(\tau, \psi, t)\mathrm d\psi\mathrm d\tau, \label{equ-out-6-hattranzv}\\ &\tilde \phi(s, \theta, t)=\tilde w(s, \theta, t)-\int_s^1P(s, \tau)\tilde w(\tau, \theta, t)\mathrm d\tau, \label{equ-out-6-tildetranwphi} \end{align} which map the $(\hat \phi,\vartheta,\tilde\phi)$-system \eqref{equ-6-hatphi}--\eqref{initial-6-errophi-bud} into the following target system \begin{align} \label{equ-out-6-hatw} &\partial_{t}{\hat w}(s, \theta, t)=\Delta{\hat w}(s, \theta, t)+{\mathcal{P}(s)}\partial_s\tilde w(1, \theta, t), \\ &{\hat w}(s, 0, t)={\hat w}(s, 2{\pi}, t),\quad{\hat w}(0, \theta, t)=0,\\ &{\hat w}(1, \theta, t)=h(0, \theta, t), \\ \nonumber &D\partial_{t}{h}(s, \theta, t)={\partial}_{s}{h}(s, \theta, t)-\tilde DP_1(s, \theta, t)-D\dot{\hat D}P_2(s, \theta, t)\\ &~~~-\int_0^1\int_{-\pi}^\pi\gamma(s, \tau, \theta, \psi,\hat D)p_1(\tau)\partial_s\tilde w(1, \psi, t)\mathrm d\psi\mathrm d\tau, \\ &h(s, 0, t)=h(s, 2{\pi}, t),\quad{h}(1, \theta, t)=0, \label{initial-6-hatz-bud}\\ \label{equ-out-6-tildew} &\partial_{t}{\tilde w}(s, \theta, t)={\partial}_{s}^2{\tilde w}(s, \theta, t),\quad{\tilde w}(s, 0, t)={\tilde w}(s, 2{\pi}, t) \\ &{\tilde w}(0, \theta, t)={\tilde w}(1, \theta, t)=0, \label{equ-out-6-tildew-bud} \end{align} where $\mathcal{P}(s)=p_{1}(s)-\int_0^sk(s, \tau)p_1(\tau)\mathrm d\tau$.
Defining $\hat m(s, \theta, t)=\hat w(s, \theta, t)-s h(0, \theta, t)$, the target system can be rewritten with homogeneous boundary conditions: \begin{align} \label{equ-out-6-hatm} \nonumber&\partial_{t}{\hat m}(s, \theta, t)=\Delta{\hat m}(s, \theta, t)+\mathcal{P}(s)\partial_s \tilde w(1, \theta, t)-s\partial_t h(0, \theta, t)\\ &~~~~-s\partial_{\theta}^2 h(0, \theta, t), \\ &{\hat m}(s, 0, t)={\hat m}(s, 2{\pi}, t),~~~{\hat m}(0, \theta, t)={\hat m}(1, \theta, t)=0, \\ \nonumber&D\partial_{t}{h}(s, \theta, t)={\partial}_{s}{h}(s, \theta, t)-\tilde DP_1(s, \theta, t)-D\dot{\hat D}P_2(s, \theta, t)\\ &~~~~-\int_0^1\int_{-\pi}^\pi\gamma(s, \tau,\theta, \psi, \hat D)p_1(\tau)\partial_s\tilde w(1, \psi, t)\mathrm d\psi\mathrm d\tau, \label{initial-6-hath-bud-last}\\ &{h}(s, 0, t)={h}(s, 2{\pi}, t),\quad{h}(1, \theta, t)=0, \label{equ-out-6-hath}\\ &\partial_{t}{\tilde w}(s, \theta, t)={\partial}_{s}^2{\tilde w}(s, \theta, t),\quad {\tilde w}(s, 0, t)={\tilde w}(s, 2{\pi}, t),\\ &{\tilde w}(0, \theta, t)={\tilde w}(1, \theta, t)=0.
\label{equ-out-6-tildew-bud-last} \end{align} Taking the derivative of the following Lyapunov function \begin{align} \nonumber V_4&(t)=e_1\int_{0}^{1}\int_{-\pi}^{\pi}(|\hat m|^2+|\partial_s\hat m|^2+|\partial_\theta \hat m|^2+|\Delta{\hat m}|^2+|\partial_t\hat m|^2\\ \nonumber&+|\partial_{ts}\hat m|^2+|\partial_{t\theta}\hat m|^2){\mathrm{d}\theta}{\mathrm{d}s}+D\int_{0}^{1}\int_{-\pi}^{\pi}(1+s)(|h|^2\\ \nonumber&+|\partial_sh|^2+|\partial_\theta h|^2+|\Delta h|^2+|\partial_{s\theta\theta}h|^2+|\partial_{ss\theta}h|^2){\mathrm{d}\theta}{\mathrm{d}s}\\ \nonumber&+e_2D\int_{-\pi}^{\pi}(|\partial_th(0, \theta, t)|^2+|\partial_{t\theta}h(0, \theta, t)|^2){\mathrm{d}\theta}\\ \nonumber&+\frac{e_3}{2}\int_0^1\int_{-\pi}^{\pi}(|\tilde w|^2+|\partial_s\tilde w|^2+|\partial_\theta\tilde w|^2+|\Delta\tilde w|^2+|\partial_{t}\tilde w|^2\\ &+|\partial_{ts}\tilde w|^2+|\partial_{t\theta}\tilde w|^2+|\Delta\partial_{t}\tilde w|^2)\mathrm d\theta\mathrm ds+\frac{\tilde D^2}{2\varrho},\label{equ-out-6-V2} \end{align} where $e_i, \ i=1, 2, 3$, are positive constants, and using integration by parts, Young's, Poincar\'e's, and Cauchy--Schwarz inequalities, we have \begin{align} \nonumber\dot V_4&(t)\leq-e_{1}(\frac{3}{8}-\frac{1}{r_{2}}-\frac{1}{r_3}-\frac{a_1}{r_4})\rVert{m}\rVert^2-\frac{e_{1}}{2}\rVert\partial_s{m}\rVert^2\\ \nonumber&-2e_{1}\rVert\partial_\theta {m}\rVert^2-e_{1}(2-\frac{1}{r_1}-\frac{1}{r_5}-\frac{1}{r_6}-\frac{a_{1}}{r_7})\rVert\Delta{m}\rVert^2\\ \nonumber&-e_{1}(\frac{3}{8}-\frac{1}{r_8}-\frac{1}{r_9}-\frac{a_{1}}{r_{10}})\rVert\partial_t {m}\rVert^2-\frac{e_{1}}{2}\rVert\partial_{ts}{m}\rVert^2\\ \nonumber&-2{e_{1}}\rVert\partial_{t\theta}{m}\rVert^2-e_{1}(2-r_1-r_{11}-r_{12}-a_1r_{13})\\ \nonumber&\rVert\Delta(\partial_t{m})\rVert^2-(4-\frac{e_{2}}{r_{22}}-\frac{e_{2}c_{9}}{r_{23}}-\frac{4}{D^{2}}(\frac{e_{1}(r_{3}+r_{6})}{3}+\frac{e_{2}D}{r_{20}}))\\ \nonumber&\cdot\int_{-\pi}^{\pi}|\partial_sh(0, \theta, t)|^2\mathrm
d\theta-(4-{e_{2}}{r_{22}}-\frac{10}{D^{4}}({e_{2}D}{r_{20}}\\ \nonumber&+e_{1}(\frac{r_{9}}{3}+\frac{1}{3r_{12}})))\int_{-\pi}^{\pi}|\partial_s^2h(0, \theta, t)|^2\mathrm d\theta-(8-\frac{4e_{2}D}{r_{21}})\\ \nonumber&\cdot\int_{-\pi}^{\pi}|\partial_{s\theta}h(0, \theta, t)|^2{\mathrm{d}\theta}-(4-\frac{4e_{1}}{D^{2}}(\frac{r_{8}}{3}+\frac{1}{3r_{11}}))\\ \nonumber&\cdot\int_{-\pi}^{\pi}|\partial_{s\theta\theta}h(0, \theta, t)|^2\mathrm{d}\theta-(4-\frac{e_{2}(r_{2}+r_{15})}{3})\\ \nonumber&\cdot\int_{-\pi}^{\pi}|\partial^2_{\theta}h(0, \theta, t)|^2{\mathrm{d}\theta-4(1-\frac{1}{r_{14}})\rVert h\rVert^2}-4(1-\frac{1}{r_{15}})\\ \nonumber&\cdot\rVert\partial_sh\rVert^2-(4-\frac{e_{2}Dr_{21}}{D^{4}})\int_{-\pi}^{\pi}(|\partial_{ss\theta}h(0, \theta, t)|^2){\mathrm{d}\theta}\\ \nonumber&-4(1-\frac{1}{r_{16}})\rVert\partial_\theta h\rVert^2-4(1-\frac{2}{r_{17}})\rVert\Delta h\rVert^2-4(1-\frac{1}{r_{18}})\\ \nonumber&\cdot\rVert\partial_{s\theta\theta} h\rVert^2-4(1-\frac{1}{r_{19}})\rVert\partial_{ss\theta} h\rVert^2-(\frac{e_{3}}{2}-3(e_{1}a_{1}(r_{4}+r_{7})\\ \nonumber&+4c_{13}r_{14}+4c_{16}r_{15}+4c_{17}r_{16}+4(c_{16}+c_{17})r_{17}+4c_{18}r_{18}\\ \nonumber&+4c_{19}r_{19}+c_{9}e_2r_{23}+24(c_{1}+c_{5})+72(c_{2}+c_{4})\\ \nonumber&+\frac{4e_{2}Dc_{10}}{r_{21}}+\frac{4}{D^{2}}(\frac{e_{2}D}{r_{20}}+\frac{e_{1}(r_3+r_6)}{3})+\frac{e_{2}r_{21}c_{11}}{D^{3}}\\ \nonumber&+\frac{4e_{1}c_{12}}{D^{2}}(\frac{r_{8}}{3}+\frac{1}{3r_{11}})+\frac{10c_{9}}{D^{4}}(e_{2}Dr_{20}+e_1(\frac{r_{9}}{3}+\frac{1}{3r_{12}}))))\\ \nonumber&\cdot\rVert\partial_s^2\tilde w(s, \theta, t)\rVert^2-(\frac{e_{3}}{2}-3(e_1a_1(r_{10}+\frac{1}{r_{13}})+72D^{2}(c_{1}\\ \nonumber&+c_{3})+\frac{10}{D^{2}}(e_1(\frac{r_{9}}{3}+\frac{1}{3r_{12}})c_{8}+e_{2}Dr_{20}+\frac{e_{2}r_{21}c_{10}}{D}))\\ \nonumber&\cdot\rVert\partial_{tss}\tilde w(s, \theta, t)\rVert^2-\frac{e_3}{2}(\rVert w\rVert^2+\rVert\partial_s\tilde w\rVert^2+2\rVert\partial_\theta\tilde w\rVert^2+\rVert\Delta\tilde w\rVert^2\\ 
\nonumber&+\rVert\partial_{t}\tilde w\rVert^2+\rVert\partial_{ts}\tilde w\rVert^2+2\rVert\partial_{t\theta}\tilde w\rVert^2+\rVert\Delta\partial_{t}\tilde w\rVert^2)-D\dot{\hat D}E_7(t)\\ &-\tilde DE_6(t)+\tilde D^{2}E_8(t)+\dot{\hat D}^{2}E_9(t)+\ddot{\hat D}^{2}E_{10}(t)-\dot{\hat D}\frac{\tilde D}{\varrho}, \end{align} where \begin{align} \nonumber&\int_{-\pi}^\pi \bigg(\int_0^1\int_{-\pi}^\pi \gamma(1, \theta, \psi, \tau) p_1\partial_s\tilde w(1, \psi, t)\mathrm d\psi\mathrm d\tau\bigg )^{2} d\theta\\ &\leq 3c_1\rVert\partial_s^2\tilde w\rVert^2, \\ \nonumber&\int_{-\pi}^\pi \bigg(\int_0^1\int_{-\pi}^\pi \partial_s\gamma(1, \theta, \psi, \tau, \hat D) p_1 \partial_s\tilde w(1, \psi, t)\mathrm d\psi\mathrm d\tau\bigg)^{2}\mathrm d\theta \\ &\leq 3c_2\rVert\partial_s^2\tilde w\rVert^2, \\ \nonumber&\int_{-\pi}^\pi \bigg(\int_0^1\int_{-\pi}^\pi \partial_\theta\gamma(1, \theta, \psi, \tau, \hat D) p_1 \partial_s\tilde w(1, \psi, t)\mathrm d\psi\mathrm d\tau\bigg)^{2}\mathrm d\theta \\ &\leq 3c_3\rVert\partial_s^2\tilde w\rVert^2, \\ \nonumber&\int_{-\pi}^\pi \bigg(\int_0^1\int_{-\pi}^\pi \partial_{s\theta}\gamma(1, \theta, \psi, \tau, \hat D) p_1 \partial_s\tilde w(1, \psi, t)\mathrm d\psi\mathrm d\tau\bigg)^{2}\mathrm d\theta \\ &\leq 3c_4\rVert\partial_s^2\tilde w\rVert^2, \\ \nonumber&\int_{-\pi}^\pi \bigg(\int_0^1\int_{-\pi}^\pi \partial_{\theta}^2\gamma(1, \theta, \psi, \tau, \hat D) p_1 \partial_s\tilde w(1, \psi, t)\mathrm d\psi\mathrm d\tau\bigg)^{2}\mathrm d\theta \\ &\leq 3c_5\rVert\partial_s^2\tilde w\rVert^2, \\ \nonumber&\int_{-\pi}^\pi \bigg(\int_0^1\int_{-\pi}^\pi \partial_{\hat D}\gamma(1, \theta, \psi, \tau, \hat D) p_1 \partial_s\tilde w(1, \psi, t)\mathrm d\psi\mathrm d\tau\bigg)^{2}\mathrm d\theta \\ &\leq 3c_6\rVert\partial_s^2\tilde w\rVert^2, \\ \nonumber&\int_{-\pi}^\pi \bigg(\int_0^1\int_{-\pi}^\pi \partial_{\hat D\theta}\gamma(1, \theta, \psi, \tau, \hat D) p_1 \partial_s\tilde w(1, \psi, t)\mathrm d\psi\mathrm d\tau\bigg)^{2}\mathrm 
d\theta \\ &\leq 3c_7\rVert\partial_s^2\tilde w\rVert^2, \\ \nonumber&\int_{-\pi}^\pi\bigg (\int_0^1\int_{-\pi}^\pi \gamma(0, \theta, \psi, \tau, \hat D) p_1 \partial_s\tilde w(1, \psi, t)\mathrm d\psi\mathrm d\tau\bigg)^{2}\mathrm d\theta \\ &\leq 3c_8\rVert\partial_s^2\tilde w\rVert^2, \\ \nonumber&\int_{-\pi}^\pi\bigg (\int_0^1\int_{-\pi}^\pi \partial_{s}\gamma(0, \theta, \psi, \tau, \hat D) p_1 \partial_s\tilde w(1, \psi, t)\mathrm d\psi\mathrm d\tau\bigg)^{2}\mathrm d\theta\ \\ &\leq 3c_9\rVert\partial_s^2\tilde w\rVert^2, \\ \nonumber&\int_{-\pi}^\pi \bigg(\int_0^1\int_{-\pi}^\pi \partial_{\theta}\gamma(0, \theta, \psi, \tau, \hat D) p_1 \partial_s\tilde w(1, \psi, t)\mathrm d\psi\mathrm d\tau\bigg)^{2}\mathrm d\theta \\ &\leq 3c_{10}\rVert\partial_s^2\tilde w\rVert^2, \\ \nonumber&\int_{-\pi}^\pi \bigg(\int_0^1\int_{-\pi}^\pi \partial_{s\theta}\gamma(0, \theta, \psi, \tau, \hat D) p_1 \partial_s\tilde w(1, \psi, t)\mathrm d\psi\mathrm d\tau\bigg)^{2}\mathrm d\theta \\ &\leq 3c_{11}\rVert\partial_s^2\tilde w\rVert^2, \\ \nonumber&\int_{-\pi}^\pi \bigg(\int_0^1\int_{-\pi}^\pi \partial_{\theta}^2\gamma(0, \theta, \psi, \tau, \hat D) p_1 \partial_s\tilde w(1, \psi, t)\mathrm d\psi\mathrm d\tau\bigg)^{2}\mathrm d\theta \\ &\leq 3c_{12}\rVert\partial_s^2\tilde w\rVert^2, \\ \nonumber &\int_0^{1}\int_{-\pi}^\pi \bigg(\int_0^1\int_{-\pi}^\pi\gamma p_1 \partial_s\tilde w(1, \psi, t)\mathrm d\psi\mathrm d\tau \bigg)^{2}d\theta\mathrm ds \\ &\leq 3c_{13}\rVert\partial_s^2\tilde w\rVert^2, \\ \nonumber &\int_0^{1}\int_{-\pi}^\pi \bigg(\int_0^1\int_{-\pi}^\pi\partial_s\gamma p_1 \partial_s\tilde w(1, \psi, t)\mathrm d\psi\mathrm d\tau \bigg)^{2}d\theta\mathrm ds \\ &\leq 3c_{14}\rVert\partial_s^2\tilde w\rVert^2, \\ \nonumber &\int_0^{1}\int_{-\pi}^\pi \bigg(\int_0^1\int_{-\pi}^\pi\partial_\theta\gamma p_1 \partial_s\tilde w(1, \psi, t)\mathrm d\psi\mathrm d\tau\bigg)^{2}d\theta\mathrm ds \\ &\leq 3c_{15}\rVert\partial_s^2\tilde w\rVert^2, \\ \nonumber 
&\int_0^{1}\int_{-\pi}^\pi \bigg(\int_0^1\int_{-\pi}^\pi\partial_s^2\gamma p_1 \partial_s\tilde w(1, \psi, t)\mathrm d\psi\mathrm d\tau \bigg)^{2}d\theta\mathrm ds \\ &\leq 3c_{16}\rVert\partial_s^2\tilde w\rVert^2, \\ \nonumber &\int_0^{1}\int_{-\pi}^\pi \bigg(\int_0^1\int_{-\pi}^\pi\partial_\theta^2\gamma p_1 \partial_s\tilde w(1, \psi, t)\mathrm d\psi\mathrm d\tau \bigg)^{2}d\theta\mathrm ds \\ &\leq 3c_{17}\rVert\partial_s^2\tilde w\rVert^2, \\ \nonumber &\int_0^{1}\int_{-\pi}^\pi \bigg(\int_0^1\int_{-\pi}^\pi\partial_{s\theta\theta}\gamma p_1 \partial_s\tilde w(1, \psi, t)\mathrm d\psi\mathrm d\tau \bigg)^{2}d\theta\mathrm ds \\ &\leq 3c_{18}\rVert\partial_s^2\tilde w\rVert^2, \\ \nonumber &\int_0^{1}\int_{-\pi}^\pi \bigg(\int_0^1\int_{-\pi}^\pi\partial_{ss\theta}\gamma p_1 \partial_s\tilde w(1, \psi, t)\mathrm d\psi\mathrm d\tau \bigg)^{2}d\theta\mathrm ds \\ &\leq 3c_{19}\rVert\partial_s^2\tilde w\rVert^2, \end{align} where $c_i$, $i=1,2,3,...,19$ can be bounded based on {a similar estimation method in Appendix \ref{apA}}. 
Define $E_i$, $i=6,7,\ldots,10$, as follows: \begin{align} \nonumber &E_6(t)=8\int_{0}^{1}\int_{-\pi}^{\pi}(1+s)(hP_1+\partial_sh \partial_sP_1+\partial_\theta h \partial_\theta P_1\\ \nonumber&~~~~+\Delta h\partial^2_s P_1+\Delta h\partial^2_\theta P_1+\partial _{s\theta\theta} h\partial_{s\theta\theta}P_1+\partial_{ss\theta} h \partial_{ss\theta} P_1)\mathrm{d}\theta\mathrm{d}s\\ &~~~~+2e_2\int_{-\pi}^{\pi}\partial_s h(0, \theta, t)\partial_s P_1(0, \theta, t)\mathrm{d}\theta, \\ \nonumber &E_7(t)=8\int_{0}^{1}\int_{-\pi}^{\pi}(1+s)(h P_2+\partial_sh\partial_sP_2+\partial_\theta h\partial_\theta P_2\\ \nonumber&~~~~+\Delta h\partial^2_s P_2+\Delta h \partial^2_\theta P_2+\partial _{s\theta\theta} h \partial_{s\theta\theta}P_2+\partial_{ss\theta} h\partial_{ss\theta} P_2)\mathrm{d}\theta\mathrm{d}s\\ &~~~~+2e_2\int_{-\pi}^{\pi}\partial_s h(0, \theta, t)\partial_s P_2(0, \theta, t)\mathrm{d}\theta, \\ \nonumber &E_8(t)=\int_{-\pi}^{\pi}\bigg(\frac{4}{D^2}(\frac{e_{1}(r_3+r_6)}{3}+\frac{e_{2}D}{r_{20}})|P_1(0, \theta, t)|^{2}\\ \nonumber&~~~~+\frac{4e_{2}D}{r_{21}}|\partial_\theta P_1(0, \theta, t)|^{2}+\frac{4e_{1}}{D^2}(\frac{r_{8}}{3}+\frac{1}{3r_{11}})|\partial_\theta^2 P_1(0, \theta, t)|^{2}\\ \nonumber&~~~~+\frac{10{e_{2}}{r_{21}}}{D}(\frac{1}{D^2}|\partial_{s\theta}P_1(0, \theta, t)|^{2} +|\partial_{t\theta} P_1(0, \theta, t)|^{2})\\ \nonumber&~~~~+\frac{10}{D^2}({e_{1}}(\frac{r_{9}}{3}+\frac{1}{3r_{12}})+{e_{2}}{r_{20}D})(\frac{1}{D^2}|\partial_sP_1(0, \theta, t)|^{2}\\ \nonumber&~~~~+|\partial_t P_1(0, \theta, t)|^{2})+24(|P_1(1, \theta, t)|^{2}+|\partial_\theta^2P_1(1, \theta, t)|^{2}\\ \nonumber&~~~~+3|\partial_{s}P_1(1, \theta, t)|^{2}+3D^{2}|\partial_t P_1(1, \theta, t)|^{2}\\ &~~~~+3|\partial_{s\theta}P_1(1, \theta, t)|^{2}+3D^{2}|\partial_{t\theta}P_1(1, \theta, t)|^{2})\bigg){\mathrm{d}\theta,}\\ \nonumber& E_9(t)=\int_{-\pi}^{\pi}\bigg(\frac{10}{D^2}(e_{1}(\frac{r_{9}}{3}+\frac{1}{3r_{12}})+{e_{2}}{r_{20}D})(|P_1(0, \theta,
t)|^{2}\\ \nonumber&~~~~+|\partial_s P_2(0, \theta, t)|^{2}+D^{2}|\partial_t P_2(0, \theta, t)|^{2})+\frac{4e_1}{3}({r_8}+\frac{1}{r_{11}})\\ \nonumber&~~~~\cdot|\partial_\theta^2 P_2(0, \theta, t)|^{2}+\frac{10e_{2}r_{21}}{D}(|\partial_\theta P_1(0, \theta, t)|^{2}\\ \nonumber&~~~~+|\partial_{s\theta} P_2(0, \theta, t)|^{2}+D^{2}|\partial_{t\theta} P_2(0, \theta, t)|^{2})+4(\frac{e_1}{3}(r_{3}\\ \nonumber&~~~~+r_{6})+{\frac{e_{2}D}{r_{20}}})|P_2{{(0, \theta, t)|^{2}}}{+{\frac{4e_{2}D^{3}}{r_{21}}}|\partial_\theta P_2(0, \theta, t)|^{2}}\\ \nonumber&~~~~+24D^{2}(|P_2(1, \theta, t)|^{2}+|\partial_{\theta}^2P_2(1, \theta, t)|^{2})+72D^{2}\\ \nonumber&~~~~\cdot(|P_1(1, \theta, t)|^{2}+|\partial_{s}P_2(1, \theta, t)|^{2}+D^{2}|\partial_tP_2(1, \theta, t)|^{2}\\ \nonumber&~~~~+|\partial_{\theta}P_1(1, \theta, t)|^{2}+|\partial_{s\theta}P_2(1, \theta, t)|^{2}\\ \nonumber&~~~~+D^{2}|\partial_{t\theta} P_2(1, \theta, t)|^{2})\bigg){\mathrm{d}\theta}+\bigg(216D^2(c_{6}+c_{7})\\ &~~~~+\frac{30}{D^2}e_{1}(\frac{r_{9}}{3}+\frac{1}{3r_{12}})\bigg)\rVert\partial_s \tilde w\rVert^2,\\ \nonumber &E_{10}(t)=\int_{-\pi}^{\pi}\bigg(10(e_{1}(\frac{r_{9}}{3}+\frac{1}{3r_{12}})+{e_{2}}{r_{20}D})|P_2(0, \theta, t)|^2\\ \nonumber&~~~~+72D^{4}(|P_2(1, \theta, t)|^{2}+|\partial_{s\theta}P_2(1, \theta, t)|^{2})\\ &~~~~+10De_{2}r_{21}|\partial_{\theta} P_2(0, \theta, t)|^{2}\bigg){\mathrm d\theta}. 
\end{align} Setting $r_1=1$, $r_2=r_3=8$, $r_4>8a_1$, $r_5=r_6=3$, $r_7>3a_1$, $r_8=r_9=8$, $r_{10}>8a_1$, $r_{11}=r_{12}=\frac{1}{3}$, $r_{13}<\frac{1}{3a_1}$, $r_{14}>1$, $r_{15}>1$, $r_{16}>1$, $r_{17}>2$, $r_{18}>1$, $r_{19}>1$, ${e_1}<\min\big\{\frac{12}{11}, \frac{3\underline D^{2}}{22}, \frac{6\underline D^{4}}{55}\big\}$, $e_2<\min \big\{\frac{6\underline D^{2}-44e_{1}}{3\overline D(\overline D(c_9+1)+4)}, \frac{12\underline D^{4}-110e_{1}}{3\overline D(\overline D^{3}+10)}, \frac{2}{\overline D}, 4\underline D^{3}\big\}$, and $e_{3}>\max\big\{6(e_{1}a_{1}(r_{4}+r_{7})+4c_{13}r_{14}+4c_{14}r_{15}+4c_{15}r_{16}+4(c_{16}+c_{17})r_{17}+4c_{18}r_{18}+4c_{19}r_{19}+c_{9}e_2r_{23}+24(c_{1}+c_{5})+72(c_{2}+c_{4})+\frac{4}{D^{2}}(\frac{e_{2}D}{r_{20}}+\frac{e_{1}(r_3+r_6)}{3})+\frac{4e_{1}c_{12}}{D^{2}}(\frac{r_{8}}{3}+\frac{1}{3r_{11}})+\frac{4e_{2}Dc_{10}}{r_{21}}+\frac{e_{2}r_{21}c_{11}}{D^{3}}+\frac{10c_{9}}{D^{4}}(e_{2}Dr_{20}+e_1(\frac{r_{9}}{3}+\frac{1}{3r_{12}}))), 6(e_1a_1(r_{10}+\frac{1}{r_{13}})+72D^{2}(c_{1}+c_{3})+\frac{10}{D^{2}}(e_{2}Dr_{20}+e_1(\frac{r_{9}}{3}+\frac{1}{3r_{12}})c_{8}+\frac{e_{2}r_{21}c_{10}}{D}))\big\}$, one arrives at the following inequality \begin{align} \label{equ-out-P12-adp} \nonumber&\dot{V}_4(t)\leq-\kappa_2V_5(t)-\tilde DE_{6}(t)-D\dot{\hat D}E_{7}(t)+\tilde D^2E_{8}(t)\\ &+\dot{\hat D}^2E_{9}(t)+\ddot{\hat D}^2E_{10}(t)-\dot{\hat D}\frac{\tilde D}{\varrho}, \end{align} where $\kappa_2=\min\big\{e_{1}(\frac{1}{8}-\frac{a_1}{r_4}), e_{1}(\frac{1}{8}-\frac{a_{1}}{r_{10}}), e_{1}(\frac{1}{3}-\frac{a_{1}}{r_7}), 2, e_3\big\}$ and \begin{align}\label{equ-out-V0-ada} \nonumber V_5&(t)=\rVert\hat m\rVert^2+\rVert\partial_s\hat m\rVert^2+\rVert\partial_\theta \hat m\rVert^2+\rVert\Delta{\hat m}\rVert^2+\rVert\partial_t\hat m\rVert^2+\rVert h\rVert^2\\ \nonumber&+\rVert\partial_{ts}\hat m\rVert^2+\rVert\partial_{t\theta}\hat m\rVert^2+\rVert\partial_sh\rVert^2+\rVert\partial_\theta
h\rVert^2+\rVert\Delta h\rVert^2+\rVert\partial_{s\theta\theta}h\rVert^2\\ \nonumber&+\rVert\partial_{ss\theta}h\rVert^2+\rVert\tilde w\rVert^2+\rVert\partial_s\tilde w\rVert^2+\rVert\partial_\theta\tilde w\rVert^2+\rVert\Delta\tilde w\rVert^2+\rVert\partial_{t}\tilde w\rVert^2\\ &+\rVert\partial_{ts}\tilde w\rVert^2+\rVert\partial_{t\theta}\tilde w\rVert^2+\rVert\Delta\partial_{t}\tilde w\rVert^2+\rVert h(0, \cdot, t)\rVert^2. \end{align} After a lengthy calculation, we obtain the following estimates: \begin{align}\label{equ-out-V0-L0} &E_6(t)\leq \alpha_6L_{2}V_5(t), \quad\quad E_7(t)\leq \alpha_6L_{2}V_5(t), \\ &E_8(t)\leq\alpha_7L_{2}V_5(t)+\alpha_8L_{2}^3V_5(t)^3+\tilde D^2\alpha_8L_{2}V_5(t), \\ &E_9(t)\leq \alpha_9L_{2}V_5(t)+\alpha_{10}L_{2}^3V_5(t)^3+\tilde D^2\alpha_{10}L_{2}V_5(t), \\ &E_{10}(t)\leq \alpha_{10}L_{2}V_5(t), \quad \dot{\hat D}(t)\leq L_{2}V_5(t), \\ &\ddot{\hat D}(t)\leq 2L_{2}V_5(t)+3L_{2}^2V_5(t)^2+2\tilde{D}L_{2}V_5(t), \label{equ-out-V0-L1} \end{align} where \begin{align} &\alpha_6=28+e_2, \\ & \nonumber\alpha_7=\frac{(198\overline D^{2}+110)e_{1}}{3\underline D^4}+e_{2}\bigg(\frac{10(r_{20}+r_{21})(1+\overline D^{2})}{\underline D^{3}}\\ &~~~~+\frac{4}{r_{20}\underline D}+\frac{4\overline D}{r_{21}}\bigg)+144\overline D^{2}+192, \\ &\alpha_8=\frac{110e_{1}}{3\underline D^2}+\frac{10e_{2}(r_{20}+r_{21})}{\underline D}+144\overline D^{2}, \\ &\nonumber\alpha_9=\frac{e_1(198\overline D^{2}+220)}{3\underline D^{2}}+e_{2}\bigg({\frac{4\overline D}{r_{20}}}+\frac{4\overline D^{3}}{r_{21}}+(\frac{20}{\underline D}+{10}{\overline D})\\ &~~~~\cdot(r_{20}+{r_{21}})\bigg)+216\overline D^2(c_{6}+c_{7})+336\overline D^{2}+144\overline D^{4}, \\ &\alpha_{10}=\frac{110e_{1}}{3}+{10e_{2}}{\overline D}(r_{20}+r_{21})+144\overline D^{4}, \end{align} and $L_2$ is a sufficiently large positive constant, whose estimation is similar to the method in Appendix \ref{apA}.
Therefore, from \eqref{equ-out-V0-L0}--\eqref{equ-out-V0-L1}, we get \begin{align} \label{equ-out-6-dot_V} \nonumber\dot{V}_4&(t)\leq-\kappa_2 V_5(t)+|\tilde D|(\alpha_6+\frac{1}{\varrho})L_2V_5(t)+\alpha_6\overline DL_2^{2}V_5(t)^{2}\\ \nonumber&+\tilde D^2\alpha_7L_2V_5(t)+\tilde D^2(\alpha_8+13\alpha_{10})L_2^3V_5(t)^3+\tilde D^4\alpha_8\\ &\cdot L_2V_5(t)+(\alpha_9+12\alpha_{10})L_2^{3}V_5(t)^{3}+28\alpha_{10}L_2^5V_5(t)^5, \end{align} and based on \eqref{equ-out-6-V2}, we have $\tilde{D}(t)^2\leq 2\varrho V_{4}(t)-2\varrho\zeta_2 V_5(t)$, where $\zeta_2=\min\{e_1, ~ 4\underline D,~ e_2\underline D,~ \frac{e_3}{2}\}.$ Using Young's and the Cauchy--Schwarz inequalities, the following holds \begin{align} &\left|\tilde{D}\right|\leq\frac{\varepsilon_2}{2}+\frac{\varrho}{\varepsilon_2}V_4(t)-\frac{\varrho\zeta_2}{\varepsilon_{2}} V_5(t). \end{align} Using \eqref{equ-out-6-V2}, we have $\zeta_2V_5(t)\leq V_{4}(t)$ and \begin{align} \label{equ-out-6-P12-adp} \nonumber\dot{V}_{4}&(t)\leq-\bigg(\frac{\kappa_2}{2}-L_2(\alpha_6+\frac{1}{\varrho})(\frac{\varepsilon_2}{2}+\frac{\varrho}{\varepsilon_2}V_{4}(t))-2\varrho\alpha_7L_2 \\ \nonumber&\cdot V_{4}(t)\bigg)V_{5}(t)-\bigg(\frac{\kappa_2}{2}-8\varrho^{2}\alpha_8L_2V_{4}(t)^{2}\bigg)V_5(t)-L_2\bigg(\frac{\varrho\zeta_2}{\varepsilon_2}\\ \nonumber&\cdot(\alpha_6+\frac{1}{\varrho})-(8\varrho^{2}\alpha_8\zeta_2+\frac{(\alpha_9+12\alpha_{10})L_2^{2}}{\zeta_2})V_{4}(t)-\alpha_6\overline D\\ \nonumber&\cdot L_2\bigg)V_5(t)^{2}-2\varrho L_2\bigg(\zeta_2\alpha_7- \frac{(\alpha_8+13\alpha_{10})L_2^2}{\zeta_2}V_{4}(t)^{2}\bigg)V_5(t)^{2}\\ &-2L_2^3\bigg(\varrho\zeta_2(\alpha_8+13\alpha_{10})-\frac{14\alpha_{10}L_2^2}{\zeta_2}V_{4}(t)\bigg)V_5(t)^4.
\end{align} Setting $\varepsilon_{2}$ such that $\varepsilon_{2}<\min\left\{\frac{\kappa_2\varrho}{L_2(\alpha_6\varrho+1)}, \frac{(\alpha_6\varrho+1)\zeta_2}{\alpha_6\varrho\overline D L_2^2}\right\}$ and requiring the initial condition $V_{4}(0)\leq \mu_3$, where \begin{align} \label{equ-out-initial-coditions1} \nonumber\mu_3\triangleq\min&\bigg\{\frac{\varepsilon_{2}(\kappa_2{\varrho }-(\alpha_6\varrho+1)L_2\varepsilon_{2})}{2\varrho L_2(\alpha_6\varrho+2\alpha_7\varrho\varepsilon_{2}+1)},~ \sqrt{\frac{\kappa_2 }{16\varrho^2\alpha_8L_2}},\\ \nonumber&\frac{\sqrt{\alpha_7}{\zeta_2}}{\sqrt{(\alpha_8+13\alpha_{10})}L_{2}},~\frac{\varrho(\alpha_8+13\alpha_{10}){\zeta_2}^{2}}{14\alpha_{10}L_{2}^2},\\ &\frac{{\zeta_2}((\alpha_6\varrho+1)\zeta_2-\alpha_6\overline DL_2\varepsilon_{2})}{\varepsilon_{2}(8\varrho^{2}\alpha_8{\zeta_2}^{2}+(\alpha_9+12\alpha_{10})L_{2}^{2})}\bigg\}, \end{align} we get $\dot V_{4}\leq-(\delta_6(t)+\delta_7(t))V_5(t)-(\delta_8(t)+\delta_9(t))V_5(t)^2-\delta_{10}(t)V_5(t)^4$ with \begin{align} &\delta_6(t)=\frac{\kappa_2}{2}-L_{2}(\alpha_6+\frac{1}{\varrho})(\frac{\varepsilon_{2}}{2}+\frac{\varrho}{\varepsilon_{2}}V_{4}(t))-2\varrho\alpha_7 L _{2}V_{4}(t), \\ &\delta_7(t)=\frac{\kappa_2}{2}-8\varrho^{2}\alpha_8L_{2}V_{4}(t)^{2}, \\ &\nonumber\delta_8(t)=L_{2}(\frac{\varrho}{\varepsilon_{2}}(\alpha_6+\frac{1}{\varrho})\zeta_2-\alpha_6\overline DL_{2}-(8\varrho^{2}\alpha_8{\zeta_2 }\\ &~~~~~~~~~~+\frac{(\alpha_9+12\alpha_{10})L_{2}^{2}}{{\zeta_2}})V_{4}(t)), \\ &\delta_9(t)=2\varrho L_{2}(\alpha_7\zeta_2- \frac{(\alpha_8+13\alpha_{10})L_{2}^2}{{\zeta_2}}V_{4}(t)^{2}), \\ &\delta_{10}(t)=2L_{2}^3(\varrho{\zeta_2}(\alpha_8+13\alpha_{10})-\frac{14\alpha_{10}L_{2}^2}{{\zeta_2}}V_{4}(t)). \end{align} If the initial condition satisfies \eqref{equ-out-initial-coditions1}, then $\delta_i(t)$, $i=6,7,\ldots,10$, are nonnegative.
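The bound on $|\tilde D|$ in the preceding step is an instance of Young's inequality, $|\tilde D|\leq \frac{\varepsilon_2}{2}+\frac{\tilde D^2}{2\varepsilon_2}$, combined with the estimate on $\tilde D^2$. A minimal numerical sanity check of the elementary inequality (the test values of $d$ and $\varepsilon$ are arbitrary, not taken from the analysis):

```python
# Sanity check of the Young-type bound used above to trade |D tilde| for its
# square:  |d| <= eps/2 + d^2/(2*eps)  for every eps > 0, which follows from
# (|d| - eps)^2 >= 0.  The test values of d and eps are arbitrary.

def young_bound(d, eps):
    """Right-hand side of |d| <= eps/2 + d**2/(2*eps)."""
    return eps / 2.0 + d * d / (2.0 * eps)

for d in [-3.0, -0.5, 0.0, 0.2, 1.7, 10.0]:
    for eps in [0.1, 0.5, 1.0, 4.0]:
        assert abs(d) <= young_bound(d, eps) + 1e-12
```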
Thus, $V_{4}(t)\leq V_{4}(0)$ for all $t\geq0$, which concludes the proof of local stability of the target system $(\hat m,h,\tilde w)$ \eqref{equ-out-6-hatw}--\eqref{equ-out-6-tildew-bud}. Based on \eqref{equ-out-6-P12-adp}, one can obtain the boundedness of all terms in \eqref{equ-out-6-V2}. Next, to prove that the states of the $(\hat m,h,\tilde w)$ system converge to zero, one proceeds in two steps. First, one proves that all terms in \eqref{equ-out-V0-ada} are square integrable in time. Second, one establishes that $\frac{\mathrm d}{\mathrm dt}(\rVert\hat m\rVert^2)$, $\frac{\mathrm d}{\mathrm dt}(\rVert h\rVert^2)$, $\frac{\mathrm d}{\mathrm dt}(\rVert\partial_{s}h\rVert^2)$ and $\frac{\mathrm d}{\mathrm dt}(\rVert \tilde w\rVert^2)$ are bounded. The proof is similar to the regulation proof in Theorem 1 and is therefore omitted. Based on Lemma D.2 in \cite{Krstic2010}, we get $\rVert\hat m\rVert$, $\rVert h\rVert$, $\rVert \partial_{s}h\rVert$, $\rVert \tilde w\rVert\to 0$ as $t\to\infty$, and since $\rVert h(0,\cdot,t)\rVert^2\leq 2\rVert h\rVert\rVert \partial_{s}h\rVert$, it follows that $\rVert h(0,\cdot,t)\rVert^2\to 0$ as $t\to\infty$. From \eqref{equ-trans3} and \eqref{equ-m-w}, one obtains \begin{align} \nonumber \rVert \hat \phi\rVert^{2}\leq&4(1+\int_{0}^{1}\int_{0}^{1}|l(s, \tau)|^{2}\mathrm d\tau\mathrm ds)\rVert \hat m\rVert^{2}\\ &+4\int_{0}^{1}\int_{0}^{1}|l(s, \tau)|^{2}\mathrm d\tau\mathrm ds\rVert h(0,\cdot,t)\rVert^2, \end{align} and consequently $\rVert\hat \phi\rVert^{2}\to 0$ as $t\to\infty$. Since $\rVert\hat \phi\rVert_{H^2}$ is bounded and, by Agmon's inequality, $\hat \phi(s, \theta, t)^2\leq C\rVert \hat \phi\rVert_{L^2}\rVert\hat \phi\rVert_{H^2}$, we conclude that $\hat \phi(s,\theta,t)$ is regulated. Similarly, from \eqref{equ-out-6-tildetranwphi}, $\rVert\tilde \phi\rVert_{H^2}$ and $\rVert\vartheta\rVert_{H^2}$ are bounded, and $\tilde \phi(s,\theta,t)$ and $\vartheta(s,\theta,t)$ are also regulated.
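The trace bound $\rVert h(0,\cdot,t)\rVert^2\leq 2\rVert h\rVert\rVert\partial_s h\rVert$ used above can be checked numerically on a sample profile that vanishes at $s=1$ (the boundary behavior assumed here); the profile and grid sizes below are illustrative choices, not the paper's:

```python
import math

# Numerical sanity check of the trace-type bound ||h(0,.,t)||^2 <= 2||h|| ||d_s h||
# on the sample profile h(s, theta) = (1 - s) sin(theta), which vanishes at s = 1.

NS, NT = 200, 200
HS, HT = 1.0 / NS, 2.0 * math.pi / NT

def h(s, th):
    return (1.0 - s) * math.sin(th)

def dsh(s, th):
    return -math.sin(th)

def l2_sq(f):
    """Midpoint-rule approximation of the squared L^2 norm on [0,1] x [-pi,pi]."""
    return sum(f((i + 0.5) * HS, -math.pi + (j + 0.5) * HT) ** 2 * HS * HT
               for i in range(NS) for j in range(NT))

# Squared L^2(theta) norm of the trace h(0, ., t); exact value is pi.
trace_sq = sum(h(0.0, -math.pi + (j + 0.5) * HT) ** 2 * HT for j in range(NT))

lhs, rhs = trace_sq, 2.0 * math.sqrt(l2_sq(h)) * math.sqrt(l2_sq(dsh))
assert lhs <= rhs
```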
Thus, we have completed the proof of Theorem \ref{therom-6-2}. \section{Numerical Simulations}\label{simu} \subsection{Control laws for the leaders and the followers} In order to implement the control laws of the followers, we discretize the PDEs \eqref{equ-u_0} and \eqref{equ-z_0}. For $u\in\Omega$, we define the following discretized grid \begin{align} \label{equ-disway} s_i=(i-1)h_s, ~~\theta_j=(j-1)h_\theta,~~ d_k=(k-1)\Delta D, \end{align} for $i=1,...,M$, $j=1,...,N$, $k=1,...,M'$, where $h_s=\frac{1}{M-1}$, $h_\theta=\frac{2\pi}{N-1}$ and $\Delta D=\frac{D}{M'-1}$. Using a three-point central difference approximation, the control laws of the follower agents $(i,j)$ are written as \begin{align} \label{equ-disu} \dot{u}_{ij}=&\frac{(u_{i+1,j}-u_{i,j}) -(u_{i,j}-u_{i-1,j})}{h_s^2}+\beta_1\frac{u_{i+1,j}-u_{i-1,j}}{2h_s} \nonumber\\ &+\frac{(u_{i,j+1}-u_{i,j}) -(u_{i,j}-u_{i,j-1})}{h_\theta^2} +\lambda_1{u}_{i,j}, \end{align} where $i=2,...,M-1$, $j=1,...,N$, and all the state variables in the $\theta$ direction are $2\pi$-periodic, namely, $u_{i,1}=u_{i,N}$. The leader agents with a guiding role at the boundary $s=0$, namely $i=1$, are set as $u_{1,j}=f_1(\theta_j)$. For the leader agents at the boundary $s=1$, namely $i=M$, from the discretized form of \eqref{equ-endU}, the state feedback control action is given by \begin{align} \label{equ-disU} u_{M,j}(t)&= \sum_{m=1}^{M}\sum_{l=1}^{N}{a}_{m,l}\gamma_{j,m,l}e^{-\frac{1}{2}\beta_1(1-s_m)}(u_{m,l}(t)-\bar{u}_{m,l}(t)) \nonumber\\ &-\sum_{k=1}^{M'}\sum_{l=1}^{N}a'_{k,l} \gamma'_{j,k,l}u_{M,l}(t-D+d_k)+\overline u_{M,j}, \end{align} where $\gamma_{j,m,l}$ and $\gamma'_{j,k,l}$ can be discretized from \eqref{kernel-gamma} and \eqref{kernel-p}, and $M$, $N$, and $M'$ are odd numbers, as required by Simpson's rule. The control laws for the $z$-coordinate can be obtained in a similar way.
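The semi-discretized follower law \eqref{equ-disu} can be sketched as follows; the grid sizes and the values of $\beta_1$, $\lambda_1$ below are illustrative, not the paper's, and the leader rows are left untouched since they are governed by their own laws:

```python
# A minimal sketch of the semi-discretized follower law (equ-disu): three-point
# central differences in s and theta, with 2*pi-periodicity in theta.

def follower_rhs(u, h_s, h_theta, beta1, lambda1):
    """du/dt for the follower rows (interior i); the leader rows (first and
    last) are governed separately and are left at zero rate here."""
    M, N = len(u), len(u[0])
    du = [[0.0] * N for _ in range(M)]
    for i in range(1, M - 1):
        for j in range(N):
            jp, jm = (j + 1) % N, (j - 1) % N      # periodic theta indices
            du[i][j] = ((u[i + 1][j] - 2.0 * u[i][j] + u[i - 1][j]) / h_s ** 2
                        + beta1 * (u[i + 1][j] - u[i - 1][j]) / (2.0 * h_s)
                        + (u[i][jp] - 2.0 * u[i][j] + u[i][jm]) / h_theta ** 2
                        + lambda1 * u[i][j])
    return du

# A constant field with lambda1 = 0 is an equilibrium: all interior rates vanish.
u0 = [[1.0] * 8 for _ in range(6)]
rates = follower_rhs(u0, 0.2, 0.8, beta1=1.0, lambda1=0.0)
assert all(abs(r) < 1e-12 for row in rates for r in row)
```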
In practice, one uses the estimated values of $u_{i,j}$, which are derived by running the observer on the actuated leaders’ central processing unit (CPU). The observer only requires one boundary measurement $\partial_s u(1, \theta, t)$, which can be approximated through finite-difference schemes. Here, we use a second-order accurate approximation of the observer variable (see \cite{leveque2007finite} and \cite{qi2019parabolic}) as \begin{align} \partial_s u(1, \theta_j,t)\approx\frac{3 u_{M,j}-4 u_{M-1,j}+ u_{M-2,j}}{2h_s}. \end{align} Therefore, each actuated leader $(M, j)$ needs to measure the positions of its nearest and next-nearest neighbors. The approximation error from the discretization process is $O(h^2_s + h^2_\theta)$. Higher-order approximations can help reduce errors at the cost of increased communication and controller complexity. Moreover, PDE-based methods are well suited for analyzing large-scale systems, since increasing the number of agents reduces the discretization error. \subsection{Simulation results} A formation control simulation example with $51\times50$ agents on a mesh grid in 3-D space is presented to illustrate the performance of the proposed control laws with unknown input delay. The true value of the input delay is $D=2$, and the lower and upper bounds of the unknown delay are $\underline D=0.1$ and $\overline D=4$, respectively. The adaptive gain is set to $\varrho=0.05$. The parameters of the model are set as $\lambda_1=\lambda_2=10$, $\beta_1=\beta_2=0$.
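The one-sided boundary stencil used for the observer measurement above is second-order accurate and, in fact, exact for quadratic profiles; a quick check (the grid size is illustrative):

```python
# The backward three-point stencil for d_s u at the right boundary s = 1.
# It is exact for quadratic profiles: with u(s) = s^2 we get d_s u(1) = 2.

def ds_at_right_boundary(u_col, h_s):
    """Second-order one-sided approximation of d_s u at the last grid point."""
    return (3.0 * u_col[-1] - 4.0 * u_col[-2] + u_col[-3]) / (2.0 * h_s)

M = 51
h_s = 1.0 / (M - 1)
u_col = [(i * h_s) ** 2 for i in range(M)]   # u(s) = s^2 sampled on the grid
assert abs(ds_at_right_boundary(u_col, h_s) - 2.0) < 1e-9
```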
The control goal is to drive the formation of the agents from an initial equilibrium state characterized by the boundary values $f_1(\theta)=-e^{\mathrm j\theta}+e^{-\mathrm j2\theta}$, $g_1(\theta)=e^{\mathrm j\theta}-e^{-\mathrm j2\theta}$, $f_2(\theta)=-1.9$, $g_2(\theta)=1.9$ and the parameters $\lambda_1=\lambda_2=10$, $\beta_1=\beta_2=0$ to a desired formation with boundary $f_1(\theta)=g_1(\theta)=e^{\mathrm j\theta}$, $f_2(\theta)=0$, $g_2(\theta)=1.3$ and the parameters $\lambda_1=30$, $\lambda_2=20$, $\beta_1=\beta_2=1$. Figure \ref{fig:6-formation} shows the formation diagram (snapshots of the evolution in time) of the 3D multi-agent formation, from the initial to the desired formation, with an initial value of the unknown delay estimate $\hat D(0)=4$. The six snapshots of the formation states illustrate the smooth evolution of the collective dynamics between two different reference formations when the input delay is arbitrarily large and unknown. Figure \ref{fig:6-control} shows the time-evolution of the control signals; it is clear that the control efforts tend to zero and ensure the stability of the whole dynamics. In Figure \ref{figure2}, (a) shows the dynamics of the update rate of the unknown parameter, $\dot {\hat {D}}$, when its initial value is $\hat D(0)=4$. It is clear that the update rate gradually tends to zero over time; (b) describes the estimate of the unknown input delay for the system subject to the designed adaptive control law for the given initial value $\hat D(0)=4$: the estimated delay $\hat D$ gradually converges to the true value $D=2$. \begin{figure} \caption{The adaptive formation change process of the multi-agent system with unknown delay initial value $\hat D(0)=4$.
(a) $t=0s$ (b) $t=0.09s$ (c) $t=0.12s$ (d) $t=0.2s$ (e) $t=5.6s$ (f) $t=40s$.} \label{fig:6-formation} \end{figure} \begin{figure} \caption{The time-evolution of the control signals} \label{fig:6-control} \end{figure} \begin{figure} \caption{(a) The dynamics of the update law $\dot{\hat D}$; (b) the estimate of the unknown input delay $\hat D$.} \label{figure2} \end{figure} In Figure \ref{fig:6-error}, (a) and (b) show the tracking error of the agents indexed by $i=5$, $i=15$, $i=30$, and $i=51$ (actuator leaders) and the average over all the agents, in the horizontal and vertical directions, respectively, under adaptive boundary control. It can be seen from the figure that the tracking error gradually tends to $0$ as time evolves; (c) and (d) show the observer error of the same agents and the average over all the agents, in the horizontal and vertical directions, respectively, under adaptive boundary control. It can be seen from the figure that the observer error smoothly tends to $0$ as time evolves. Finally, (e) and (f) show the $L^2$-norm of the average tracking error of all the agents in the horizontal and vertical directions, respectively. It can be seen that if the estimate of the unknown delay $\hat D$ does not match the true value of the delay $D=2$, namely if a delay mismatch occurs, the tracking error diverges. \begin{figure} \caption{(a) Tracking error of $u$ system under adaptive control, (b) Tracking error of $z$ system under adaptive control, (c) Observation error of $u$ system under adaptive control, (d) Observation error of $z$ system under adaptive control, (e) Average tracking error of $u$ system, (f) Average tracking error of $z$ system.} \label{fig:6-error} \end{figure} \section{Conclusion}\label{perspec} This paper studies the formation control of MASs with unknown input delay under a cylindrical communication topology in 3-D space. Adaptive controllers and observers are designed to achieve the desired 3D formation with stable transitions.
The update law of the unknown parameter and the local stability are characterized using a Lyapunov method. The level of complexity substantially increases with the dimensionality, so a Fourier series is introduced to reduce the PDE describing the two-dimensional cylindrical communication topology to a one-dimensional system. The local stability and asymptotic state convergence of the target system are analyzed by an elaborate Lyapunov function, and the original system inherits local stability through its equivalence with the target system. In future work, we will consider unknown plant coefficients together with input delay. \appendices \section{The Proof of Proposition \ref{proposition4-1}}\label{apA} To prove the estimates between the state of the error system \eqref{equ-phi}--\eqref{initial-vartheta} and the state of the target system \eqref{equ-am0-adp}--\eqref{initial-amh-adp}, $S_i$, $i=1,2$, are constructed using bounds on the integrals of the kernels. As an example, consider the $L^2$ norm of $\vartheta(s,\theta,t)$. From equation \eqref{equ-trans4}, we get the following estimate \begin{align}\label{equ-vs} &\int_0^1\int_{-\pi}^\pi | {\vartheta}(s,\theta,t)|^2\mathrm d\theta\mathrm ds\nonumber\\ \nonumber\leq&3\rVert h\rVert^2+3\int_0^1\int_{-\pi}^\pi\bigg(\int_{0}^1\int_{-\pi}^\pi \eta(s,\tau,\theta,\psi,\hat D)w(\tau,\psi,t)\\ \nonumber&\mathrm{d}\psi\mathrm{d}\tau\bigg)^2\mathrm d\theta\mathrm ds+3\hat D^2 \int_0^1\int_{-\pi}^\pi\bigg(\int_{0}^s\int_{-\pi}^\pi q(s,\tau,\theta,\psi,\\ &\hat D)h(\tau,\psi,t)\mathrm{d}\psi\mathrm{d}\tau \bigg)^2\mathrm d\theta\mathrm ds.
\end{align} From \eqref{equ-rn2} and \eqref{equ-qn2}, we have \begin{align} & \eta(s,\tau,\theta,\psi,\hat D)={Q(s,\theta-\psi,\hat D)} F(s,\tau),\\ &q(s,\tau,\theta,\psi,\hat D)={Q(s-\tau,\theta-\psi,\hat D)}G(s,\tau), \end{align} where \begin{align} F(s,\tau)&=2\sum_{i=1}^{\infty}e^{-\hat Di^2\pi^2s}\sin(i\pi\tau)\int_{0}^{1}\sin(i\pi\xi)l(1,\xi)\mathrm{d}\xi,\\ G(s,\tau)&=-2\sum_{i=1}^{\infty}e^{-\hat Di^2\pi^2(s-\tau)}i\pi(-1)^{i}\int_{0}^{1}\sin(i\pi\xi)l(1,\xi)\mathrm{d}\xi. \end{align} For the second term on the right-hand side of the inequality \eqref{equ-vs}, using Fourier series, the following relations hold (see \eqref{equ-Poisson} and \eqref{fourier-Phi})\begin{align} {Q(s,\theta-\psi,\hat D)}&=\frac{1}{2\pi}\sum_{n=-\infty}^{\infty}e^{-\hat Dn^2s}e^{\mathrm{j}n(\theta-\psi)},\label{a5}\\ w(s,\theta,t)&=\sum_{n=-\infty}^{\infty}w_n(s,t)e^{\mathrm{j}n\theta}\label{a6}. \end{align} {Then, \begin{align} \nonumber&\int_{-\pi}^\pi Q(s,\theta-\psi,\hat D)w(\tau,{\psi},t)\mathrm{d}\psi\\ \nonumber&=\frac{1}{2\pi}\sum_{n=-\infty}^{\infty}\sum_{m=-\infty}^{\infty}e^{-\hat Dn^2s}w_m(\tau,t)\int_{-\pi}^\pi e^{\mathrm{j}n(\theta-\psi)}e^{\mathrm{j}m\psi}\mathrm{d}\psi\\ &=\sum_{n=-\infty}^{\infty}e^{-\hat Dn^2s}e^{\mathrm{j}n\theta}w_n(\tau,t), \end{align} where we have used the orthogonality property of the Fourier basis (the same conclusion is reached using the convolution theorem).
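The orthogonality computation above can be checked numerically by truncating the kernel series $Q$ and integrating it against a single Fourier mode; a pure-Python sketch in which the truncation order, quadrature size, and the test values of $s$, $\theta$, $\hat D$, $n$ are arbitrary illustrative choices:

```python
import cmath
import math

# Numerical check of the identity derived above:
#   int_{-pi}^{pi} Q(s, theta - psi, Dhat) e^{j n psi} dpsi
#       = e^{-Dhat n^2 s} e^{j n theta},
# with Q truncated to |k| <= KMAX and a midpoint-rule quadrature in psi.

KMAX, NPSI = 30, 512

def Q(s, x, dhat):
    """Truncated Fourier series of the 2*pi-periodic heat kernel."""
    return sum(cmath.exp(-dhat * k * k * s + 1j * k * x)
               for k in range(-KMAX, KMAX + 1)) / (2.0 * math.pi)

def convolve_mode(s, theta, dhat, n):
    """Quadrature of Q(s, theta - psi, dhat) * e^{j n psi} over psi."""
    h = 2.0 * math.pi / NPSI
    total = 0.0 + 0.0j
    for m in range(NPSI):
        psi = -math.pi + (m + 0.5) * h
        total += Q(s, theta - psi, dhat) * cmath.exp(1j * n * psi) * h
    return total

s, theta, dhat, n = 0.3, 1.1, 0.8, 3
lhs = convolve_mode(s, theta, dhat, n)
rhs = cmath.exp(-dhat * n * n * s + 1j * n * theta)
assert abs(lhs - rhs) < 1e-9
```

The equispaced quadrature integrates the oscillatory cross terms $e^{\mathrm j(n-k)\psi}$ to zero exactly, so only the $k=n$ term of the kernel survives, mirroring the derivation above.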
} From Parseval's theorem, the following can be deduced \begin{align} \nonumber&\int_{-\pi}^\pi\bigg( \int_{-\pi}^\pi Q(s,\theta-\psi,\hat D)w(\tau,{\psi},t)\mathrm{d}\psi F(s,\tau)\bigg)^2\mathrm d\theta\\ \nonumber&=2\pi\sum_{n=-\infty}^{\infty}e^{-2\hat Dn^2s}|w_n(\tau,t)|^{2}|F(s,\tau)|^{2}\\ &\leq 2\pi\sum_{n=-\infty}^{\infty}|w_n(\tau,t)|^{2}|F(s,\tau)|^{2}, \end{align} and then \begin{align} \nonumber&\int_0^1\int_{-\pi}^\pi\bigg( \int_0^1\int_{-\pi}^\pi Q(s,\theta-\psi,\hat D)F(s,\tau)\\ \nonumber&\cdot w(\tau,{\psi},t)\mathrm{d}\psi\mathrm d\tau\bigg)^2\mathrm d\theta\mathrm ds\\ \nonumber&\leq\int_0^1\bigg(\int_{-\pi}^\pi \int_0^1\bigg(\int_{-\pi}^\pi Q(s,\theta-\psi,\hat D)w(\tau,{\psi},t)\mathrm{d}\psi\bigg)^2\mathrm d\tau\mathrm d\theta\\ \nonumber&\cdot \int_0^1|F(s,\tau)|^2\mathrm d\tau\bigg)\mathrm ds\\ \nonumber&\leq\int_0^1 \int_0^1\int_{-\pi}^\pi\bigg(\int_{-\pi}^\pi Q(s,\theta-\psi,\hat D)w(\tau,{\psi},t)\mathrm{d}\psi\bigg)^2\mathrm d\tau\mathrm d\theta\mathrm ds\\ \nonumber&\cdot \int_0^1\int_0^1|F(s,\tau)|^2\mathrm d\tau\mathrm ds\\ \nonumber&\leq2\pi\int_0^1\sum_{n=-\infty}^{\infty}|w_n(\tau,t)|^{2}\mathrm d\tau\int_0^1\int_0^1|F(s,\tau)|^2\mathrm d\tau\mathrm ds\\ &=\int_0^1\int_0^1|F(s,\tau)|^2\mathrm d\tau\mathrm ds\rVert w\rVert^2. \end{align} Similarly, for the third term, based on \eqref{a5}, \eqref{a6} and the orthogonality property of the Fourier basis, \begin{align} \nonumber&\int_{-\pi}^\pi Q(s-\tau,\theta-\psi,\hat D)h(\tau,{\psi},t)\mathrm{d}\psi\\ &=\sum_{n=-\infty}^{\infty}e^{-\hat Dn^2(s-\tau)}e^{\mathrm{j}n\theta}h_n(\tau,t).
\end{align} From Parseval's theorem, the following can be deduced \begin{align} \nonumber&\int_{-\pi}^\pi\bigg( \int_{-\pi}^\pi Q(s-\tau,\theta-\psi,\hat D)G(s,\tau)h(\tau,{\psi},t)\mathrm{d}\psi \bigg)^2\mathrm d\theta\\ \nonumber&=2\pi\sum_{n=-\infty}^{\infty}e^{-2\hat Dn^2(s-\tau)}|h_n(\tau,t)|^{2}|G(s,\tau)|^{2}\\ &\leq 2\pi\sum_{n=-\infty}^{\infty}|h_n(\tau,t)|^{2}|G(s,\tau)|^{2}, \end{align} and then \begin{align} \nonumber&\int_0^1\int_{-\pi}^\pi\bigg( \int_0^s\int_{-\pi}^\pi Q(s-\tau,\theta-\psi,\hat D)G(s,\tau)\\ \nonumber&\cdot h(\tau,{\psi},t)\mathrm{d}\psi\mathrm d\tau\bigg)^2\mathrm d\theta\mathrm ds\\ \nonumber&\leq\int_0^1 \int_0^s\int_{-\pi}^\pi\bigg(\int_{-\pi}^\pi Q(s-\tau,\theta-\psi,\hat D)\\ \nonumber&\cdot h(\tau,{\psi},t)\mathrm{d}\psi\bigg)^2\mathrm d\tau\mathrm d\theta\mathrm ds\int_0^1\int_0^s|G(s,\tau)|^2\mathrm d\tau\mathrm ds\\ \nonumber&\leq2\pi\int_0^1\sum_{n=-\infty}^{\infty}|h_n(\tau,t)|^{2}\mathrm d\tau\int_0^1\int_0^s|G(s,\tau)|^2\mathrm d\tau\mathrm ds\\ &=\int_0^1\int_0^s|G(s,\tau)|^2\mathrm d\tau\mathrm ds\rVert h\rVert^2.
\end{align} Thus, we get \begin{align}\label{equ-vs0} &\int_0^1\int_{-\pi}^\pi | {\vartheta}(s,\theta,t)|^2\mathrm d\theta\mathrm ds\leq3\rVert h\rVert^2\nonumber\\ \nonumber&+3\int_0^1\int_{-\pi}^\pi\bigg(\int_{0}^1\int_{-\pi}^\pi {Q(s,\theta-\psi,\hat D)} F(s,\tau)w(\tau,\psi,t)\\ \nonumber&\mathrm{d}\psi\mathrm{d}\tau\bigg)^2\mathrm d\theta\mathrm ds+3\hat D^2 \int_0^1\int_{-\pi}^\pi\bigg(\int_{0}^s\int_{-\pi}^\pi {Q(s-\tau,\theta-\psi,\hat D)}\\ \nonumber&\cdot G(s,\tau)h(\tau,\psi,t)\mathrm{d}\psi\mathrm{d}\tau \bigg)^2\mathrm d\theta\mathrm ds\\ \nonumber\leq&3\bigg(1+\overline D^2\int_0^1\int_0^s|G(s,\tau)|^2\mathrm d\tau\mathrm ds\bigg)\rVert h\rVert^2\\ &+3\int_0^1\int_0^1|F(s,\tau)|^2\mathrm d\tau\mathrm ds\rVert w\rVert^2, \end{align} where $\int_0^1\int_0^s|G(s,\tau)|^2\mathrm d\tau\mathrm ds$ and $\int_0^1\int_0^1|F(s,\tau)|^2\mathrm d\tau\mathrm ds$ were shown to be bounded in our previous work \cite{WANG2021109909}. Applying the same method to the remaining terms completes the proof of Proposition \ref{proposition4-1}. \end{document}
\begin{document} \title{\Large\textbf{Algebraic structures of multipartite quantum systems}} \thispagestyle{empty} \begin{abstract} We investigate the relation between multilinear mappings and multipartite states. We show that the isomorphism between multilinear mappings and the tensor product completely characterizes decomposable multipartite states in a mathematically well-defined manner. \end{abstract} \section{Introduction} The characterization of multipartite states is a fascinating subject in fundamental quantum theory, with interesting applications in quantum information and quantum computing. Many multipartite states can be used in different algorithms or schemes for quantum computation. For example, entangled cluster states are the building blocks of the one-way quantum computer, a scheme for universal quantum computation. In recent years we have witnessed some progress in the quantification and classification of multipartite states, but this problem is still open and needs further investigation. In this paper we discuss the structure of multipartite product states in a clear and abstract algebraic way. Our main interest is in multilinear, or $m$-linear, mappings of complex vector spaces. We will establish an isomorphism between these maps and tensor product states. In Section \ref{bilmb} we give an introduction to the structure of bilinear mappings and the conditions under which there is an isomorphism between these bilinear mappings and the tensor product. We also show that this construction defines the separable set of general bipartite states. Moreover, we establish a relation between this construction and the concurrence. In Section \ref{bilmm}, we generalize our results from bilinear to multilinear, or $m$-linear, mappings and the construction of the tensor product for such mappings.
We will show that this construction describes the separable set of general multipartite states. Finally, we will establish a relation between this algebraic construction and the generalized concurrence. An introduction to the theory of multilinear mappings and algebras can be found in \cite{Greub1}, which is also our main reference. \section{Bilinear mapping and bipartite state}\label{bilmb} In this section we give an introduction to bilinear mappings and the tensor product. Let $\mathcal{V}_{1},\mathcal{V}_{2}$ be two linear spaces and consider the mapping \begin{equation}\label{mlinear} \Phi:\mathcal{V}_{1}\times\mathcal{V}_{2}\longrightarrow\mathcal{M}. \end{equation} This map is called bilinear if it satisfies the following conditions: \begin{enumerate} \item $\Phi(\lambda \ket{\psi^{1}_{1}}+\mu \ket{\psi^{2}_{1}},\ket{\psi_{2}})= \lambda\Phi( \ket{\psi^{1}_{1}},\ket{\psi_{2}}) +\mu\Phi( \ket{\psi^{2}_{1}},\ket{\psi_{2}})$ for all $\ket{\psi^{1}_{1}},\ket{\psi^{2}_{1}}\in\mathcal{V}_{1}$, $\ket{\psi_{2}}\in\mathcal{V}_{2}$, and $\lambda, \mu \in\mathbb{C}$, \item $\Phi(\ket{\psi_{1}},\lambda \ket{\psi^{1}_{2}}+\mu \ket{\psi^{2}_{2}})= \lambda\Phi( \ket{\psi_{1}},\ket{\psi^{1}_{2}}) +\mu\Phi( \ket{\psi_{1}},\ket{\psi^{2}_{2}})$ for all $\ket{\psi^{1}_{2}},\ket{\psi^{2}_{2}}\in\mathcal{V}_{2}$ and $\ket{\psi_{1}}\in\mathcal{V}_{1}$. \end{enumerate} We call the bilinear mapping a bilinear function when the target space $\mathcal{M}$ is the scalar field. In this paper we are mostly interested in complex vector spaces, and especially finite-dimensional ones. The most important observation about a bilinear mapping is that the set $\mathcal{S}$ of all vectors in $\mathcal{M}$ of the form $\Phi( \ket{\psi_{1}},\ket{\psi_{2}})$ with $\ket{\psi_{1}}\in\mathcal{V}_{1}$ and $\ket{\psi_{2}}\in\mathcal{V}_{2}$ is not in general a linear subspace of the target space $\mathcal{M}$. To make this important point clear we give an example based on a pair of qubits.
In this case we have $\mathcal{V}_{1}=\mathcal{V}_{2}=\mathbb{C}^{2}$ and $\mathcal{M}=\mathbb{C}^{4}$. Let also $\ket{\psi_{1}}=\alpha^{1}_{1}\ket{e_{1}}+\alpha^{1}_{2}\ket{e_{2}}$ and $\ket{\psi_{2}}=\alpha^{2}_{1}\ket{e_{1}}+\alpha^{2}_{2}\ket{e_{2}}$. In this case the bilinear mapping $\Phi( \ket{\psi_{1}},\ket{\psi_{2}})$ is defined by \begin{equation} \Phi(\ket{\psi_{1}},\ket{\psi_{2}})=\alpha^{1}_{1}\alpha^{2}_{1}\ket{f_{1}} +\alpha^{1}_{1}\alpha^{2}_{2}\ket{f_{2}}+\alpha^{1}_{2}\alpha^{2}_{1}\ket{f_{3}} +\alpha^{1}_{2}\alpha^{2}_{2}\ket{f_{4}}, \end{equation} where $\ket{f_{j}}$ is a basis for $\mathcal{M}$. Then a vector $\ket{\psi}=\sum_{j}\alpha^{j}\ket{f_{j}}\in\mathcal{M}$ is contained in the set $\mathcal{S}$ if and only if its components satisfy the following condition: \begin{equation} \alpha^{1}\alpha^{4}=\alpha^{2}\alpha^{3}. \end{equation} This is exactly the separability condition for a pair of qubits. Let us consider a pure two-qubit state $\ket{\Psi}=\sum^{1}_{i_{1}=0}\sum^{1}_{i_{2}=0} \alpha_{i_{1}i_{2}} \ket{i_{1}}\otimes\ket{i_{2}}$. For this state the separability condition reads $\alpha_{00}\alpha_{11}=\alpha_{01}\alpha_{10}$. This equation also gives a well-known measure of entanglement, the concurrence $C(\ket{\Psi})=2|\alpha_{00}\alpha_{11}-\alpha_{01}\alpha_{10}|$ of a pair of qubits \cite{Wootters98}. Thus it is very important to investigate the algebra of product states for bipartite and multipartite states. In the following sections we will use the notation $\mathrm{Im}\Phi$ for the subspace of $\mathcal{M}$ generated by the set $\mathcal{S}$. \begin{defn} Let $\mathcal{V}_{1},\mathcal{V}_{2}$ be two complex vector spaces and consider the mapping $ \Phi:\mathcal{V}_{1}\times\mathcal{V}_{2}\longrightarrow\mathcal{M} $.
Then the pair $(\Phi,\mathcal{M})$ is called a tensor product if and only if the following conditions are satisfied: \begin{itemize} \item $\mathrm{I}$: the image of the bilinear mapping equals the target space, $\mathrm{Im}\Phi=\mathcal{M}$; \item $\mathrm{II}$: if there is a bilinear mapping $ \Psi:\mathcal{V}_{1}\times\mathcal{V}_{2}\longrightarrow\mathcal{N} $, where $\mathcal{N}$ is an arbitrary complex vector space, then there exists a linear mapping $\Theta:\mathcal{M}\longrightarrow\mathcal{N}$ such that the following diagram $$\xymatrix{\mathcal{V}_{1}\times\mathcal{V}_{2} \ar[d]_{\Psi}\ar[r]_{\Phi}&\mathcal{M} \ar[d]_{\Theta}\\ \mathcal{N}\ar@{=}[r]&\mathcal{N} }$$ is commutative, that is, $\Psi=\Theta\circ\Phi$. \end{itemize} \end{defn} If $(\Phi,\mathcal{M})$ is a tensor product, then we denote $\mathcal{M}=\mathcal{V}_{1}\otimes\mathcal{V}_{2}$ and $\Phi(\ket{\phi_{1}},\ket{\phi_{2}})=\ket{\phi_{1}}\otimes\ket{\phi_{2}}$. Moreover, the bilinearity is given by \begin{enumerate} \item $(\lambda \ket{\psi^{1}_{1}}+\mu \ket{\psi^{2}_{1}})\otimes\ket{\psi_{2}}= \lambda\ket{\psi^{1}_{1}}\otimes\ket{\psi_{2}} +\mu\ket{\psi^{2}_{1}}\otimes\ket{\psi_{2}}$ for all $\ket{\psi^{1}_{1}},\ket{\psi^{2}_{1}}\in\mathcal{V}_{1}$, $\ket{\psi_{2}}\in\mathcal{V}_{2}$, and $\lambda, \mu \in\mathbb{C}$, \item $\ket{\psi_{1}}\otimes(\lambda \ket{\psi^{1}_{2}}+\mu \ket{\psi^{2}_{2}})= \lambda\ket{\psi_{1}}\otimes\ket{\psi^{1}_{2}} +\mu \ket{\psi_{1}}\otimes\ket{\psi^{2}_{2}}$ for all $\ket{\psi^{1}_{2}},\ket{\psi^{2}_{2}}\in\mathcal{V}_{2}$ and $\ket{\psi_{1}}\in\mathcal{V}_{1}$. \end{enumerate} Next, we give some elementary properties of the tensor product. For example, every vector $0\neq\ket{\psi}\in\mathcal{V}_{1}\otimes\mathcal{V}_{2}$ can be written as $\ket{\psi}=\sum^{k}_{i=1}\ket{\psi^{i}_{1}}\otimes\ket{\psi^{i}_{2}}$ for linearly independent vectors $\ket{\psi^{i}_{1}}$ and $\ket{\psi^{i}_{2}}$.
To see this, choose a representation of $\ket{\psi}$ that minimizes $k$. For $k=1$ it follows easily that $\ket{\psi^{1}_{1}}\neq0$ and $\ket{\psi^{1}_{2}}\neq0$, so $\ket{\psi^{1}_{1}}$ and $\ket{\psi^{1}_{2}}$ are linearly independent. Next, we treat the case $k\geq 2$. If the vectors $\ket{\psi^{i}_{1}}$ were linearly dependent, we could assume that $\ket{\psi^{k}_{1}}=\sum^{k-1}_{i=1}\gamma^{i}\ket{\psi^{i}_{1}}$, so we would have \begin{eqnarray} \ket{\psi}&=&\sum^{k-1}_{i=1}\ket{\psi^{i}_{1}}\otimes\ket{\psi^{i}_{2}} +\sum^{k-1}_{i=1}\gamma^{i}\ket{\psi^{i}_{1}}\otimes\ket{\psi^{k}_{2}} \\\nonumber&=&\sum^{k-1}_{i=1}\ket{\psi^{i}_{1}}\otimes(\ket{\psi^{i}_{2}}+\gamma^{i}\ket{\psi^{k}_{2}})= \sum^{k-1}_{i=1}\ket{\psi^{i}_{1}}\otimes\ket{\psi^{i'}_{2}}. \end{eqnarray} This shows that $k$ was not minimal, a contradiction. In the same way one shows that the vectors $\ket{\psi^{i}_{2}}$ are linearly independent. Now, let $\mathcal{V}_{1},\mathcal{V}_{2}$ be two complex vector spaces and let $\mathcal{V}_{1}\otimes\mathcal{V}_{2}$ be a tensor product of these spaces. Moreover, let $\mathcal{L}(\mathcal{V}_{1}\otimes\mathcal{V}_{2};\mathcal{M})$ denote the space of linear mappings $\mathcal{V}_{1}\otimes\mathcal{V}_{2}\longrightarrow\mathcal{M}$ and $\mathcal{B}(\mathcal{V}_{1},\mathcal{V}_{2};\mathcal{M})$ the space of bilinear mappings $\mathcal{V}_{1}\times\mathcal{V}_{2}\longrightarrow\mathcal{M}$. Then we have the following isomorphism \begin{equation} \mathcal{L}(\mathcal{V}_{1}\otimes\mathcal{V}_{2};\mathcal{M})\longrightarrow \mathcal{B}(\mathcal{V}_{1},\mathcal{V}_{2};\mathcal{M}), \end{equation} defined by $\Phi(\Theta)=\Theta\circ\otimes$ for all $\Theta\in\mathcal{L}(\mathcal{V}_{1}\otimes\mathcal{V}_{2};\mathcal{M})$. The proof follows from conditions I and II for the tensor product.
Moreover, the correspondence between the linear map $\Theta\in\mathcal{L}(\mathcal{V}_{1}\otimes\mathcal{V}_{2};\mathcal{M})$ and the bilinear map $\Psi\in\mathcal{B}(\mathcal{V}_{1},\mathcal{V}_{2};\mathcal{M})$ is visualized in the following commutative diagram \begin{equation}\xymatrix{\mathcal{V}_{1}\times\mathcal{V}_{2} \ar[d]_{\otimes}\ar[r]_{\Psi}&\mathcal{M} \ar@{=}[d]\\ \mathcal{V}_{1}\otimes\mathcal{V}_{2}\ar[r]_{\Theta}&\mathcal{M} } \end{equation} Thus we have the following proposition. \begin{prop} Let $\Psi\in\mathcal{B}(\mathcal{V}_{1},\mathcal{V}_{2};\mathcal{M})$ be a bilinear map and $\Theta\in\mathcal{L}(\mathcal{V}_{1}\otimes\mathcal{V}_{2};\mathcal{M})$ be the induced linear map. Then $\Theta$ is surjective (respectively injective) if and only if $\Psi$ satisfies condition $\mathrm{I}$ (respectively $\mathrm{II}$) of the tensor product. \end{prop} That $\Theta$ is surjective follows from $\mathrm{Im}\Psi=\mathrm{Im}\Theta$. Now, if we assume that $\Theta$ is injective, then $(\mathrm{Im}\Psi,\Psi)$ is a tensor product for $\mathcal{V}_{1}$ and $\mathcal{V}_{2}$, and every bilinear mapping $\Phi:\mathcal{V}_{1}\times\mathcal{V}_{2}\longrightarrow\mathcal{N}$ induces a linear mapping $\Upsilon:\mathrm{Im}\Psi\longrightarrow\mathcal{N}$ such that $\Phi(\ket{\psi_{1}},\ket{\psi_{2}})=\Upsilon\Psi(\ket{\psi_{1}},\ket{\psi_{2}})$. Then $\Psi$ satisfies condition $\mathrm{II}$, since if $\Theta'$ is an extension of $\Upsilon$ to a map $\Theta':\mathcal{M}\longrightarrow\mathcal{N}$, then $\Phi(\ket{\psi_{1}},\ket{\psi_{2}})=\Theta'\Psi(\ket{\psi_{1}},\ket{\psi_{2}})$. The converse follows by assuming that $\Psi$ satisfies condition $\mathrm{II}$ and showing that $\Theta$ is injective. As an example let us look at general bipartite states.
For such a system we have a bilinear mapping $\Sigma:\mathbb{C}^{N_{1}}\times\mathbb{C}^{N_{2}}\longrightarrow\mathrm{M}^{N_{1}\times N_{2}}$ defined by \begin{equation}\label{matrixbi} (\alpha^{1}_{1},\alpha^{2}_{1},\ldots,\alpha^{N_{1}}_{1})\times (\alpha^{1}_{2},\alpha^{2}_{2},\ldots,\alpha^{N_{2}}_{2})\longrightarrow \left( \begin{array}{cccc} \alpha^{1}_{1}\alpha^{1}_{2}& \alpha^{1}_{1}\alpha^{2}_{2} & \cdots &\alpha^{1}_{1}\alpha^{N_{2}}_{2} \\ \alpha^{2}_{1}\alpha^{1}_{2}& \alpha^{2}_{1}\alpha^{2}_{2} & \cdots &\alpha^{2}_{1}\alpha^{N_{2}}_{2} \\ \vdots & \vdots&\ddots &\vdots \\ \alpha^{N_{1}}_{1}\alpha^{1}_{2}& \alpha^{N_{1}}_{1}\alpha^{2}_{2} & \cdots &\alpha^{N_{1}}_{1}\alpha^{N_{2}}_{2} \\ \end{array} \right). \end{equation} For this bilinear mapping the pair $(\mathrm{M}^{N_{1}\times N_{2}},\Sigma)$ is a tensor product of $\mathbb{C}^{N_{1}}$ and $\mathbb{C}^{N_{2}}$, that is, \begin{equation}\xymatrix{\mathbb{C}^{N_{1}}\times\mathbb{C}^{N_{2}} \ar[d]_{\otimes}\ar[r]_{\Sigma}&\mathrm{M}^{N_{1}\times N_{2}} \ar@{=}[d]\\ \mathbb{C}^{N_{1}}\otimes\mathbb{C}^{N_{2}}\ar[r]_{\Theta}&\mathrm{M}^{N_{1}\times N_{2}} } \end{equation} and it thus represents the product states of general bipartite systems. Let us consider a general pure bipartite state $\ket{\Psi}=\sum^{N_{1}-1}_{i_{1}=0}\sum^{N_{2}-1}_{i_{2}=0} \alpha_{i_{1}i_{2}} \ket{i_{1}}\otimes\ket{i_{2}}$ and the set of product coefficient matrices $\{(\alpha_{i_{1}i_{2}}): \alpha_{k_{1}k_{2}}\alpha_{l_{1}l_{2}}=\alpha_{l_{1}k_{2}}\alpha_{k_{1}l_{2}},\ \forall\, k_{j},l_{j}\in\{0,\ldots,N_{j}-1\},\ j=1,2\}$. Then for this state the separability condition is given by $\alpha_{k_{1}k_{2}}\alpha_{l_{1}l_{2}}=\alpha_{l_{1}k_{2}}\alpha_{k_{1}l_{2}}$.
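The separability condition above says that every $2\times2$ minor of the coefficient matrix $(\alpha_{i_{1}i_{2}})$ vanishes; summing the squared moduli of the minors gives, up to a normalization factor, the generalized concurrence. A minimal numerical sketch (the test matrices are illustrative, and the normalization is omitted):

```python
import itertools

# The separability condition: every 2x2 minor of the coefficient matrix
# (alpha_{i1 i2}) vanishes.  Summing |minor|^2 over all index pairs gives the
# generalized concurrence up to the normalization factor, which is omitted here.

def concurrence_sq(alpha):
    """Sum of squared moduli of all 2x2 minors of the matrix alpha."""
    n1, n2 = len(alpha), len(alpha[0])
    total = 0.0
    for k1, l1 in itertools.combinations(range(n1), 2):
        for k2, l2 in itertools.combinations(range(n2), 2):
            minor = alpha[k1][k2] * alpha[l1][l2] - alpha[l1][k2] * alpha[k1][l2]
            total += abs(minor) ** 2
    return total

# A product state is an outer product of two vectors: every minor vanishes.
a, b = [1.0, 2.0, -1.0], [0.5, 1.5]
product = [[ai * bj for bj in b] for ai in a]
assert concurrence_sq(product) < 1e-12

# Unnormalized Bell state |00> + |11>: a single nonzero minor of modulus 1.
bell = [[1.0, 0.0], [0.0, 1.0]]
assert concurrence_sq(bell) == 1.0
```

For the qubit case this recovers, up to normalization, the concurrence $2|\alpha_{00}\alpha_{11}-\alpha_{01}\alpha_{10}|$ of Section \ref{bilmb}.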
This equation also gives a general expression for the concurrence \begin{equation} C(\ket{\Psi})=\left(\mathcal{N}\sum^{N_{1}-1}_{l_{1}>k_{1}=0}\sum^{N_{2}-1}_{l_{2}>k_{2}=0}|\alpha_{k_{1}k_{2}}\alpha_{l_{1}l_{2}} -\alpha_{l_{1}k_{2}}\alpha_{k_{1}l_{2}}|^{2}\right)^{1/2} \end{equation} of a general bipartite state \cite{Hosh4}. Now we discuss direct decompositions, which is one interesting property of the tensor product. Let $\mathcal{V}_{1}$ and $\mathcal{V}_{2}$ be two complex vector spaces admitting direct decompositions $\mathcal{V}_{1}=\sum_{r}\mathcal{V}^{r}_{1}$ and $\mathcal{V}_{2}=\sum_{s}\mathcal{V}^{s}_{2}$. Moreover, assume that the pair $(\mathcal{V}_{1}\otimes\mathcal{V}_{2},\otimes)$ is a tensor product of these spaces. Then $\mathcal{V}_{1}\otimes\mathcal{V}_{2}$ is the direct sum of the subspaces $\mathcal{V}^{r}_{1}\otimes\mathcal{V}^{s}_{2}$, that is $\mathcal{V}_{1}\otimes\mathcal{V}_{2}=\sum_{r}\sum_{s}\mathcal{V}^{r}_{1}\otimes\mathcal{V}^{s}_{2}$. The first condition $\mathrm{I}$ follows from the observation that $\mathcal{V}_{1}\otimes\mathcal{V}_{2}$ is generated by $\ket{\psi_{1}}\otimes\ket{\psi_{2}}$ for $\ket{\psi_{j}}\in\mathcal{V}_{j}$, $j=1,2$. But $\ket{\psi_{1}}\otimes\ket{\psi_{2}}=\sum_{r}\sum_{s}\ket{\psi^{r}_{1}}\otimes\ket{\psi^{s}_{2}}$ for $\ket{\psi_{1}}=\sum_{r}\ket{\psi^{r}_{1}}$ and $\ket{\psi_{2}}=\sum_{s}\ket{\psi^{s}_{2}}$ with $\ket{\psi^{r}_{1}}\in\mathcal{V}^{r}_{1}$ and $\ket{\psi^{s}_{2}}\in\mathcal{V}^{s}_{2}$. Thus $\mathcal{V}_{1}\otimes\mathcal{V}_{2}$ is the sum of the subspaces $\mathcal{V}^{r}_{1}\otimes\mathcal{V}^{s}_{2}$. It is more difficult to show that the decomposition is direct; the proof can be found in \cite{Greub1}. \section{Multilinear mapping and multipartite states}\label{bilmm} In this section we will give an introduction to multilinear mappings and tensor products.
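As a sanity check of the concurrence formula, the following Python sketch evaluates $\sum_{l_1>k_1}\sum_{l_2>k_2}|\alpha_{k_1k_2}\alpha_{l_1l_2}-\alpha_{l_1k_2}\alpha_{k_1l_2}|^2$. The normalization constant $\mathcal{N}$ is left as a parameter; taking $\mathcal{N}=4$ (our assumption, not fixed in the text) reproduces the familiar two-qubit concurrence $C=2|\alpha_{00}\alpha_{11}-\alpha_{01}\alpha_{10}|$.

```python
from math import sqrt

def concurrence(alpha, norm=4):
    """Concurrence of a pure bipartite state from its coefficient matrix:
    C = sqrt(N * sum_{l1>k1} sum_{l2>k2} |a_{k1k2}a_{l1l2} - a_{l1k2}a_{k1l2}|^2).
    The normalization N is an assumption here; N=4 matches the two-qubit case."""
    n1, n2 = len(alpha), len(alpha[0])
    total = 0.0
    for k1 in range(n1):
        for l1 in range(k1 + 1, n1):
            for k2 in range(n2):
                for l2 in range(k2 + 1, n2):
                    total += abs(alpha[k1][k2]*alpha[l1][l2]
                                 - alpha[l1][k2]*alpha[k1][l2])**2
    return sqrt(norm * total)

s = 2**-0.5
print(concurrence([[s, 0], [0, s]]))   # 1.0 for the Bell state (maximally entangled)
print(concurrence([[1, 0], [0, 0]]))   # 0.0 for a product state
```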
The relation between multilinear mappings and the tensor product gives the product states of multipartite systems. Let $\mathcal{V}_{1},\mathcal{V}_{2},\ldots,\mathcal{V}_{m}$ be $m$ complex vector spaces. Then the mapping $\Psi:\mathcal{V}_{1}\times\mathcal{V}_{2}\times\cdots\times\mathcal{V}_{m}\longrightarrow\mathcal{M}$ is called $m$-linear if for every $j$ with $1\leq j\leq m$ we have \begin{eqnarray} &&\Psi(\ket{\psi_{1}},\ldots,\ket{\psi_{j-1}},\lambda\ket{\psi_{j}}+\mu\ket{\phi_{j}},\ket{\psi_{j+1}}, \ldots,\ket{\psi_{m}})\\\nonumber&&=\lambda\Psi(\ket{\psi_{1}},\ldots,\ket{\psi_{j-1}},\ket{\psi_{j}},\ket{\psi_{j+1}}, \ldots,\ket{\psi_{m}})\\\nonumber&&+\mu\Psi(\ket{\psi_{1}},\ldots,\ket{\psi_{j-1}},\ket{\phi_{j}},\ket{\psi_{j+1}}, \ldots,\ket{\psi_{m}}), \end{eqnarray} where $\ket{\psi_{j}},\ket{\phi_{j}}\in\mathcal{V}_{j}$ and $\lambda,\mu\in\mathbb{C}$. We also denote the image of the mapping $\Psi$ by $\mathrm{Im}\Psi$, that is, the subspace of $\mathcal{M}$ which is generated by the vectors $\Psi(\ket{\psi_{1}},\ket{\psi_{2}},\ldots,\ket{\psi_{m}})$. Moreover, let $\mathcal{L}(\mathcal{V}_{1},\mathcal{V}_{2},\ldots,\mathcal{V}_{m};\mathcal{M})$ be the set of all $m$-linear maps $\mathcal{V}_{1}\times\mathcal{V}_{2}\times\cdots\times\mathcal{V}_{m}\longrightarrow\mathcal{M}$. Then we can obtain a linear structure on $\mathcal{L}(\mathcal{V}_{1},\mathcal{V}_{2},\ldots,\mathcal{V}_{m};\mathcal{M})$ by defining the following operations \begin{equation}(\Psi+\Phi)(\ket{\psi_{1}},\ket{\psi_{2}},\ldots,\ket{\psi_{m}})= \Psi(\ket{\psi_{1}},\ket{\psi_{2}},\ldots,\ket{\psi_{m}})+\Phi(\ket{\psi_{1}},\ket{\psi_{2}},\ldots,\ket{\psi_{m}}) \end{equation} and $(\lambda\Psi)(\ket{\psi_{1}},\ket{\psi_{2}},\ldots,\ket{\psi_{m}})= \lambda\Psi(\ket{\psi_{1}},\ket{\psi_{2}},\ldots,\ket{\psi_{m}})$.
\begin{defn} Let $\mathcal{V}_{1},\mathcal{V}_{2},\ldots,\mathcal{V}_{m}$ be $m$ complex vector spaces and consider the $m$-linear mapping \begin{equation}\label{mlinear} \Phi:\mathcal{V}_{1}\times\mathcal{V}_{2}\times\cdots\times\mathcal{V}_{m}\longrightarrow\mathcal{M}. \end{equation} Then the pair $(\Phi,\mathcal{M})$ is called a tensor product if and only if the following conditions are satisfied: \begin{itemize} \item $\mathrm{I}_{\otimes}$: the image of the multilinear mapping equals the target space, $\mathrm{Im}\Phi=\mathcal{M}$; \item $\mathrm{II}_{\otimes}$: if there is a multilinear mapping $ \Psi:\mathcal{V}_{1}\times\mathcal{V}_{2}\times\cdots\times\mathcal{V}_{m}\longrightarrow\mathcal{N} $, where $\mathcal{N}$ is an arbitrary complex vector space, then there exists a linear mapping $\Theta:\mathcal{M}\longrightarrow\mathcal{N}$ such that the following diagram $$\xymatrix{\mathcal{V}_{1}\times\mathcal{V}_{2}\times\cdots\times\mathcal{V}_{m} \ar[d]_{\Psi}\ar[r]_{\Phi}&\mathcal{M} \ar[d]_{\Theta}\\ \mathcal{N}\ar@{=}[r]&\mathcal{N} }$$ is commutative, that is $\Psi=\Theta\circ\Phi$. \end{itemize} \end{defn} We denote the tensor product $(\Phi,\mathcal{M})$ of the spaces $\mathcal{V}_{j}$ by $(\mathcal{V}_{1}\otimes\mathcal{V}_{2}\otimes\cdots\otimes\mathcal{V}_{m},\bigotimes^{m})$ and write $\Phi(\ket{\phi_{1}},\ket{\phi_{2}},\ldots,\ket{\phi_{m}})=\ket{\phi_{1}}\otimes\ket{\phi_{2}}\otimes\cdots\otimes\ket{\phi_{m}}$. Now, let $\mathcal{V}_{1},\mathcal{V}_{2},\ldots,\mathcal{V}_{m}$ be $m$ complex vector spaces and $\mathcal{V}_{1}\otimes\mathcal{V}_{2}\otimes\cdots\otimes\mathcal{V}_{m}$ be a tensor product of these spaces.
Moreover, let $\mathcal{L}(\mathcal{V}_{1}\otimes\mathcal{V}_{2}\otimes\cdots\otimes\mathcal{V}_{m};\mathcal{M})$ denote the space of linear mappings $\mathcal{V}_{1}\otimes\mathcal{V}_{2}\otimes\cdots\otimes\mathcal{V}_{m}\longrightarrow\mathcal{M}$ and $\mathcal{L}(\mathcal{V}_{1},\mathcal{V}_{2},\ldots,\mathcal{V}_{m};\mathcal{M})$ the space of multilinear mappings $\mathcal{V}_{1}\times\mathcal{V}_{2}\times\cdots\times\mathcal{V}_{m}\longrightarrow\mathcal{M}$. Then we have the following isomorphism \begin{equation} \mathcal{L}(\mathcal{V}_{1}\otimes\mathcal{V}_{2}\otimes\cdots\otimes\mathcal{V}_{m};\mathcal{M})\longrightarrow \mathcal{L}(\mathcal{V}_{1},\mathcal{V}_{2},\ldots,\mathcal{V}_{m};\mathcal{M}) \end{equation} which is defined by $\Theta\longmapsto\Theta\circ\otimes$ for all $\Theta\in\mathcal{L}(\mathcal{V}_{1}\otimes\mathcal{V}_{2}\otimes\cdots\otimes\mathcal{V}_{m};\mathcal{M})$. The proof follows from the conditions $\mathrm{I}_{\otimes}$ and $\mathrm{II}_{\otimes}$ of the tensor product. Moreover, the correspondence between the linear map $\Theta\in\mathcal{L}(\mathcal{V}_{1}\otimes\mathcal{V}_{2}\otimes\cdots\otimes\mathcal{V}_{m};\mathcal{M})$ and the multilinear map $\Psi\in\mathcal{L}(\mathcal{V}_{1},\mathcal{V}_{2},\ldots,\mathcal{V}_{m};\mathcal{M})$ is visualized in the following commutative diagram \begin{equation}\xymatrix{\mathcal{V}_{1}\times\mathcal{V}_{2}\times\cdots\times\mathcal{V}_{m} \ar[d]_{\otimes}\ar[r]_{~~~~\Phi}&\mathcal{M} \ar@{=}[d]\\ \mathcal{V}_{1}\otimes\mathcal{V}_{2}\otimes\cdots\otimes\mathcal{V}_{m}\ar[r]_{~~~~\Theta}&\mathcal{M} } \end{equation} Thus in the general case we have the following proposition. \begin{prop} Let $\Phi\in\mathcal{L}(\mathcal{V}_{1},\mathcal{V}_{2},\ldots,\mathcal{V}_{m};\mathcal{M})$ be a multilinear map and $\Theta\in\mathcal{L}(\mathcal{V}_{1}\otimes\mathcal{V}_{2}\otimes\cdots\otimes\mathcal{V}_{m};\mathcal{M})$ be the induced linear map.
Then $\Theta$ is surjective and injective if and only if $\Phi$ satisfies the conditions $\mathrm{I}_{\otimes}$ and $\mathrm{II}_{\otimes}$ of the tensor product, respectively. \end{prop} Let $\mathcal{V}_{1},\mathcal{V}_{2},\ldots,\mathcal{V}_{m}$ be complex vector spaces. Then the mapping \begin{equation} \Psi:\mathcal{L}(\mathcal{V}_{1})\times\mathcal{L}(\mathcal{V}_{2})\times\cdots\times\mathcal{L}( \mathcal{V}_{m})\longrightarrow\mathcal{L}(\mathcal{V}_{1}\otimes\mathcal{V}_{2}\otimes\cdots\otimes\mathcal{V}_{m}) \end{equation} given by $\Psi(\Theta_{1},\ldots,\Theta_{m})(\ket{\psi_{1}}\otimes\cdots\otimes\ket{\psi_{m}})=\Theta_{1}(\ket{\psi_{1}})\otimes \cdots\otimes\Theta_{m}(\ket{\psi_{m}})$ is a tensor product for the spaces $\mathcal{L}(\mathcal{V}_{j})$. We now give an example to visualize the relation between our $m$-linear construction and multipartite product states. Let us consider a multipartite system where $\mathcal{V}_{j}=\mathbb{C}^{N_{j}}$ for all $1\leq j\leq m$; a general state is given by \begin{equation} \ket{\Psi}=\sum^{N_{1}-1,N_{2}-1,\ldots,N_{m}-1}_{i_{1},i_{2},\ldots,i_{m}=0} \alpha_{i_{1}i_{2}\cdots i_{m}}\ket{i_{1}} \otimes\ket{i_{2}}\otimes\cdots\otimes\ket{i_{m}}.
\end{equation} Then we have the following commutative diagram \begin{equation}\xymatrix{\mathbb{C}^{N_{1}}\times\mathbb{C}^{N_{2}}\times\cdots\times\mathbb{C}^{N_{m}} \ar[d]_{\otimes}\ar[r]_{~~~~\Phi}&\mathrm{M}^{N_{1}\times{N}_{2}\times\cdots\times{N}_{m}} \ar@{=}[d]\\ \mathbb{C}^{N_{1}}\otimes\mathbb{C}^{N_{2}}\otimes\cdots\otimes\mathbb{C}^{N_{m}}\ar[r]_{~~~~\Theta}& \mathrm{M}^{N_{1}\times{N}_{2}\times\cdots\times{N}_{m}} } \end{equation} where $\mathrm{M}^{N_{1}\times{N}_{2}\times\cdots\times{N}_{m}}=\left(\alpha_{i_{1}i_{2}\cdots i_{m}}\right)_{1\leq i_{j}\leq N_{j}}$ is a multidimensional matrix, defined as follows \begin{equation} \mathrm{M}^{N_{1}\times\cdots \times N_{m}}=\{\alpha_{i_{1}\ldots i_{m}}\in\mathbb{C}^{N_{1}}\times\cdots\times\mathbb{C}^{N_{m}}:\mathcal{S}^{k_{j}l_{j}}_{1\leq j\leq m}=0,\ \forall\, j=1,2,\ldots, m\}, \end{equation} where \begin{eqnarray}\label{eq: submeasure}\mathcal{S}^{k_{j}l_{j}}_{1\leq j\leq m} &=&\alpha_{k_{1}k_{2}\cdots k_{m}}\alpha_{l_{1}l_{2}\cdots l_{m}}-\\\nonumber&& \alpha_{k_{1}k_{2}\cdots k_{j-1}l_{j}k_{j+1}\cdots k_{m}}\alpha_{l_{1}l_{2}\cdots l_{j-1}k_{j}l_{j+1}\cdots l_{m}}. \end{eqnarray} This construction also gives a general expression for the concurrence \begin{equation} C(\ket{\Psi})=\left(\mathcal{N}\sum^{N_{1}-1}_{l_{1}>k_{1}=0}\cdots \sum^{N_{3}-1}_{l_{3}>k_{3}=0}|\mathcal{S}^{k_{j}l_{j}}_{1\leq j\leq 3}|^{2}\right)^{1/2} \end{equation} of a general three-partite state. Note also that for a mixed state a measure of entanglement can be constructed by taking the infimum over all pure decompositions of a given state using the above expression for the concurrence. This is our main result for general multipartite states. This result is also related to the construction of the Segre variety given in \cite{Hosh4}.
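A small Python sketch of the terms $\mathcal{S}^{k_{j}l_{j}}$ above (function names and test states are our own, not from the text): every such term vanishes on a product state, while a GHZ-type state has nonvanishing terms, consistent with the defining equations of the product-state variety.

```python
from itertools import product

def s_terms(alpha, dims):
    """All quadratic terms S^{k_j l_j}: for multi-indices k, l and each slot j,
    alpha_k*alpha_l - alpha_{k with l_j in slot j}*alpha_{l with k_j in slot j}."""
    m = len(dims)
    idx = list(product(*(range(n) for n in dims)))
    out = []
    for k in idx:
        for l in idx:
            for j in range(m):
                k2, l2 = list(k), list(l)
                k2[j], l2[j] = l[j], k[j]       # swap the j-th index
                out.append(alpha[k]*alpha[l] - alpha[tuple(k2)]*alpha[tuple(l2)])
    return out

# product state a (x) b (x) c: every S term vanishes
a, b, c = [1, 2], [3, 4], [5, 6]
alpha = {(i, j, k): a[i]*b[j]*c[k] for i in range(2) for j in range(2) for k in range(2)}
print(max(abs(s) for s in s_terms(alpha, (2, 2, 2))))    # 0

# GHZ state (|000> + |111>)/sqrt(2): some S terms are nonzero (entangled)
amp = 2**-0.5
ghz = {t: 0.0 for t in product(range(2), repeat=3)}
ghz[(0, 0, 0)] = ghz[(1, 1, 1)] = amp
print(max(abs(s) for s in s_terms(ghz, (2, 2, 2))) > 0)  # True
```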
We can also construct a measure of entanglement for general multipartite states based on the multidimensional matrix $\mathrm{M}^{N_{1}\times{N}_{2}\times\cdots\times{N}_{m}}$ with some additional structures \cite{Hosh6}. \begin{flushleft} \textbf{Acknowledgments:} The author acknowledges the financial support of the Japan Society for the Promotion of Science (JSPS). \end{flushleft} \end{document}
\begin{document} \title{New method of averaging diffeomorphisms based on Jacobian determinant and curl vector} \author{Xi Chen, Guojun Liao \\ \multicolumn{1}{p{.7\textwidth}}{\centering\emph{\scriptsize{Department of Mathematics, \\University of Texas at Arlington, Arlington, Texas 76019, \textsc{USA}\\[email protected]}}}} \date{} \maketitle \begin{abstract} Averaging diffeomorphisms is a challenging problem with important applications in areas such as medical image atlases. The simple Euclidean average can neither guarantee that the averaged transformation is a diffeomorphism, nor produce reasonable results when there is a local rotation. The goal of this paper is to propose a new approach to averaging diffeomorphisms based on the Jacobian determinant and the curl vector of the diffeomorphisms. Instead of averaging the diffeomorphisms directly, we average the Jacobian determinants and the curl vectors, and then construct a diffeomorphism based on the averaged Jacobian determinant and averaged curl vector as the average of the diffeomorphisms. Numerical examples with convincing results are presented to demonstrate the method. \end{abstract} {\bf Keywords: averaging diffeomorphisms, Jacobian determinant, curl vector, construction of diffeomorphism} \section{Introduction} Numerical construction of differentiable and invertible transformations is an interesting and challenging problem. In \cite{Liao2009}, two methods are formulated based on a $div$-$curl$-$ode$ system or a $div$-$curl$ system only. The latter is demonstrated in numerical examples to accurately reconstruct 2D and 3D diffeomorphisms from their divergence and curl vector. The problem of constructing a diffeomorphism with prescribed Jacobian determinant has been solved by the deformation method \cite{Liao2004, Chen2016}. But the prescribed Jacobian determinant alone cannot uniquely determine a diffeomorphism.
In \cite{Chen2016,Chen2015}, an innovative variational method is proposed for numerical construction of a diffeomorphism with prescribed Jacobian determinant and prescribed curl vector.\\ This new method enables us to define a new approach to robustly averaging given diffeomorphisms so that the average is guaranteed to be a diffeomorphism. In fact, this research is motivated by the building of medical image atlases. An atlas is a model image of an organ constructed by ``averaging'' images of a group of healthy individuals or patients \cite{Evans2012, Mandal2012}. Atlases can be used to detect and monitor health problems and diseases such as Alzheimer's disease. Constructing a correct atlas is a very challenging task. This is mainly due to the rich variability of anatomy in individuals of the population. In particular, both the building and use of an atlas are greatly dependent on non-rigid image registration techniques which bring the images to a common coordinate system by nonlinear transformations. Early methods \cite{Thompson2002} of building an atlas from a set of images consist of two steps: (1) registering each image to an initial template; (2) averaging the warped (resampled) images.\\ In order to reduce the blurring of the atlas due to registration errors, we adopt the idea of \cite{Vaillant2004,Twining2008} and \cite{Asigny2006} that the registration transformations are averaged first and then the template image is re-sampled on the averaged transformation. Thus we build an atlas in three steps: (1) constructing an initial template or simply selecting one of the images as a template, and then registering all images to the template by diffeomorphisms; (2) averaging the registration diffeomorphisms; and (3) re-sampling the template image on the averaged diffeomorphism.
This paper is mainly focused on the second step: averaging the diffeomorphisms.\\ Since the goal of image atlases is to quantify variability and to detect abnormality in individuals, brain image atlases must be correctly built. But current methods of building and using atlases are not the best possible in terms of their theoretical foundation, reliability, and computational efficiency. A thorough review of the available literature indicates that there is no agreement in the medical imaging community on how the ``average'' of diffeomorphisms should be defined and computed. The widely used Euclidean average may not have the averaged size and may even fail to be a diffeomorphism. This is illustrated by the graphics below. Other proposed methods are restricted to the particular diffeomorphisms that are generated by certain registration methods. Moreover, these methods again do not guarantee that the ``average'' has the averaged size or averaged local rotation. From a mathematical point of view, the main problem is how to generate diffeomorphisms with prescribed Jacobian determinant and prescribed curl vector. In this paper, we propose an innovative approach to averaging general diffeomorphisms, including those arising from brain imaging atlases. In Section 2, a new definition of the average is given. In Section 3, the variational method for construction of diffeomorphisms based on the Jacobian determinant and the curl vector is briefly reviewed. In Section 4, numerical examples are presented. \section{Average of Diffeomorphisms} \subsection{The Euclidean Average} The commonly used Euclidean average of a set of transformations simply averages the Cartesian coordinates of the points specified by the transformations.\\ We now examine what can go wrong with the Euclidean average and convince the readers that the correct average must have the averaged size and the averaged local rotation. To illustrate this point, we use a grid to approximate a transformation.
Each grid node in a uniform grid on $\Omega = [0, 1]\times[0, 1]$ is sent to a point in $\Omega$. We keep the same connectivity between the adjacent nodes and get a new grid in $\Omega$, which is a good approximation if the grid spacing is very small. Let $\Phi_1$ be the identity transformation from $\Omega$ to $\Omega$; let $\Phi_2$ be a diffeomorphism from $\Omega$ to $\Omega$. A small cell with corners $\{A, B, C, D\}$ is sent by $\Phi_1$ to the same cell in $\Omega$. We now consider three examples of $\Phi_2$. First, suppose $\Phi_2$ is locally a translation followed by a counterclockwise rotation through an angle of $0^\circ$, which sends the cell $\{A, B, C, D\}$ to the cell $\{A', B', C', D'\}$ shown in Fig. 1(a). The Euclidean average of the four pairs of corners is denoted by $\{\overline{A},\overline{B},\overline{C},\overline{D}\}$. Note that the cell formed by $\{\overline{A},\overline{B},\overline{C},\overline{D}\}$ is a meaningful average of the two cells $\{A, B, C, D\}$ and $\{A', B', C', D'\}$. But when the rotation angle increases from $0^\circ$ to $90^\circ$ and to $180^\circ$, the resulting Euclidean average deteriorates, as shown in Fig. 1(b) and (c). Suppose $\Phi_2$ is locally the same translation followed by a counterclockwise rotation through an angle of $90^\circ$, which sends the cell $\{A, B, C, D\}$ to the cell $\{A', B', C', D'\}$ shown in Fig. 1(b). The Euclidean average $\{\overline{A},\overline{B},\overline{C},\overline{D}\}$ is too small to be a meaningful average of the two cells $\{A, B, C, D\}$ and $\{A', B', C', D'\}$. Suppose now $\Phi_2$ is locally the same translation followed by a counterclockwise rotation through an angle of $180^\circ$, which sends the same grid cell to the cell $\{A', B', C', D'\}$ shown in Fig. 1(c). Then the Euclidean average, cell $\{\overline{A},\overline{B},\overline{C},\overline{D}\}$, shrinks to a point P. This means the ``cell'' degenerates to a point and the Jacobian determinant becomes zero!
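The degeneration just described can be checked numerically. A minimal Python sketch (the cell corners, rotation center, and translation are illustrative choices, not taken from the paper's figures) averages the cells corner by corner and measures the signed area of the result with the shoelace formula:

```python
from math import cos, sin, pi

def shoelace(cell):
    """Signed area of a polygon given as a list of (x, y) corners."""
    n = len(cell)
    return 0.5 * sum(cell[i][0]*cell[(i+1) % n][1] - cell[(i+1) % n][0]*cell[i][1]
                     for i in range(n))

def phi2(p, theta, t=(0.25, 0.25), c=(0.5, 0.5)):
    """Local model of Phi_2: rotation by theta about c composed with translation t."""
    x, y = p[0] - c[0], p[1] - c[1]
    return (c[0] + t[0] + x*cos(theta) - y*sin(theta),
            c[1] + t[1] + x*sin(theta) + y*cos(theta))

cell = [(0, 0), (1, 0), (1, 1), (0, 1)]    # Phi_1 = identity on this cell
for deg in (0, 90, 180):
    th = deg * pi / 180
    img = [phi2(p, th) for p in cell]
    avg = [((p[0]+q[0])/2, (p[1]+q[1])/2) for p, q in zip(cell, img)]
    print(deg, round(shoelace(avg), 6))
# areas shrink roughly as 1.0, 0.5, 0.0: at 180 degrees the averaged "cell"
# collapses to a point and the Jacobian determinant vanishes
```

The area of the averaged cell equals $\det((I+R_\theta)/2)=(1+\cos\theta)/2$, which is exactly the deterioration the figures illustrate.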
These figures show clearly that local rotations play a crucial role in the study of shapes. The role of local rotation will also be demonstrated in Figure 2 below. \begin{figure} \caption{Euclidean average of transformation $\Phi_1$ and different $\Phi_2$.} \end{figure} \subsection{Some Non-Euclidean Approaches} There are currently two non-Euclidean approaches to averaging of diffeomorphisms arising from nonlinear image registration. Miller's group at Johns Hopkins University has been a main contributor in this research area. Their approach (see \cite{Vaillant2004}) is based on their image registration method called Large Deformation Diffeomorphic Metric Matching (LDDMM) \cite{Beg2005} by time-varying velocity fields. The theoretical foundation of their approach is that a certain quantity called the ``moment'' is time invariant in their method of image registration. Thus the moments of diffeomorphisms at $t=0$ determine the diffeomorphisms generated at any time $t>0$. They perform statistics, such as determining the average, on the initial moments. Then, the average of these diffeomorphisms is generated from the averaged moments by their method of image registration. There are two main issues with this approach: (1) It only treats a special type of diffeomorphisms that are generated by their specific methods of image registration; but these diffeomorphisms are rather restrictive, as can be seen in their experiments in \cite{Beg2005} (see Examples below); (2) The numerical implementation of the approach is sensitive, since the moment is in fact the inverse of the Green's function of a linear differential operator (i.e. the Laplace operator $\Delta+0.1\cdot Identity$), and therefore exists only in a weak sense. As a consequence, in \cite{Vaillant2004} it was only formulated and tested for diffeomorphisms constructed by matching a few dozen landmark points in one of the two images ($256\times256\times64$) with the same number of landmark points in the other.
Seven years have passed and no work has been published for more general diffeomorphisms that are generated based on image intensity.\\ \cite{Twining2008} described in detail the mathematical foundation of Large Deformation Diffeomorphic Metric Matching (LDDMM) and proposed a variant of the approach in \cite{Vaillant2004}. This paper considers the problem of defining an average of diffeomorphisms, motivated primarily by the image registration problem, where diffeomorphisms are used to align images of large deformation. Constructing an average on the diffeomorphism group will enable the quantitative analysis of these diffeomorphisms to discover the normal and abnormal variation of structures in a population. Based on splines, they construct an average for particular choices of boundary conditions on the space on which the diffeomorphism acts, and for a particular class of metrics on the diffeomorphism group, which define a class of diffeomorphic interpolating splines. The geodesic equation is computed for this class of metrics, and they show how it can be solved in the spline representation. Furthermore, they demonstrate that the spline representation generates submanifolds of the diffeomorphism group. Instead of matching landmark points as in \cite{Vaillant2004}, their approach is based on matching spline control nodes in the images. It determines the usual Euclidean average of control nodes of a set of diffeomorphisms, and then generates the ``average'' of these diffeomorphisms by spline interpolation. Explicit computational examples are included, showing how this average can be constructed in practice, and that the use of the geodesic distance allows better classification of variation than that obtained using just a Euclidean metric on the space of diffeomorphisms.
This method is a variant of the method in \cite{Vaillant2004} and is only suitable for diffeomorphisms generated by Large Deformation Diffeomorphic Metric Matching (LDDMM).\\ In \cite{Avants2011}, another variant of the method in \cite{Vaillant2004} is proposed. It proposes a method of atlas building that is based on their version of Large Deformation Diffeomorphic Metric Matching (LDDMM), called symmetric normalization (SyN). It is included as part of an open source software package called the Advanced Neuroimaging Tools (ANTs) developed at the University of Pennsylvania. The image registration method SyN received a top ranking in \cite{Klein2009}, and a groupwise version called SyGN is used to build an atlas from a group of images. As commented before, as a variant, SyN uses a smoothing term to regularize the optimization of a similarity measure. The smoothing term is based on a linear differential operator $L= a \cdot\Delta + b\cdot identity$, which inevitably alters the optimization problem: a trade-off between the similarity and a smoothing term is optimized. Instead, we should optimize the similarity measure in the space of all diffeomorphisms, not on a subspace defined by a linear differential operator $L$. Thus, there is room for significant improvement in registration accuracy. Also, this atlas building method works with diffeomorphisms arising from their groupwise symmetric normalization (SyGN) only.\\ In \cite{Asigny2006}, a different approach is proposed which in theory is suitable only for small deformations in images, i.e. it works in theory only for diffeomorphisms that are near the identity transformation. The approach, called the Log-Euclidean, is also based on generating diffeomorphisms from a time-independent velocity field. The diffeomorphisms this approach treats are even more restrictive compared to the previous approach.
Nonetheless, their numerical algorithms appear to be robust and fast, and numerical experiments can still go through for diffeomorphisms that are not near the identity; but, the author indicated that the results obtained by the Log-Euclidean are not very different from that of the Euclidean average (which may not be diffeomorphic). \subsection{Proposed Approach to Averaging Diffeomorphisms} Let $\Omega = [0,1]\times[0,1]$ in $R^2$ or $\Omega= [0,1]\times[0,1]\times[0,1]$ in $R^3$. Suppose there are diffeomorphisms $\Phi_i, i = 1, 2\ldots K$, from $\Omega$ into itself such that $\Phi_i = identity$ on $\partial\Omega$ (other boundary conditions are possible). Our concept of average is based on Jacobian determinants and the curl vector field. The former controls the size of volume elements; the latter controls the local rotation. We can show both theoretically and computationally that together these two quantities under proper boundary conditions uniquely determine a transformation \cite{Liao2009, Chen2016}. \\ To motivate our definition of the average, let us revisit Fig.1(a), (b), (c), and ask ourselves what is the correct average of the cells $\{A, B, C, D\}$ and $\{A', B', C', D'\}$ in each of the three cases. For Fig. 1(a), we should simply take the cell $\{\overline{A},\overline{B},\overline{C},\overline{D}\}$ as the average since it has the correct size and correct orientation. This expected mean is denoted by $\{A^*, B^*, C^*, D^*\}$ in Fig. 2(a). For Fig. 1(b), the cell $\{\overline{A},\overline{B},\overline{C},\overline{D}\}$ has the correct orientation since it forms $45^\circ$ with both cell $\{A, B, C, D\}$ and cell $\{A', B', C', D'\}$. But its size is too small. Since cell $\{A, B, C, D\}$ and cell $\{A', B', C', D'\}$ have the same size, we expect their average to have the same size also. The expected average is denoted by $\{A^*, B^*, C^*, D^*\}$ in Fig. 2(b). For Fig. 
1(c), the cell $\{\overline{A},\overline{B},\overline{C},\overline{D}\}$ is reduced to the point P. Since the relative rotation is $180^\circ$, the expected average should form a $90^\circ$ angle with both cell $\{A, B, C, D\}$ and cell $\{A', B', C', D'\}$. Its size should also be the same as cell $\{A, B, C, D\}$ and cell $\{A', B', C', D'\}$. The cell denoted in Fig. 2(c) by $\{A^*, B^*, C^*, D^*\}$ is the expected average.\\ \begin{figure} \caption{Expected average of transformation $\Phi_1$ and different $\Phi_2$.} \end{figure} Next, we introduce the definition of the average and describe our approach to computing it by differential equations.\\ \textbf{Proposed Definition of the Average of Diffeomorphisms:}\\ The average of given diffeomorphisms $\Phi_i, i = 1, 2,\ldots, K$, is defined as the diffeomorphism $\Phi$ which has the following properties: \begin{enumerate} \item $J(\Phi)=\sum w_i J(\Phi_i)$, where $J$ is the Jacobian determinant of a diffeomorphism; \item $curl (\Phi) =\sum w_i curl(\Phi_i)$, where $curl$ is the curl vector field of a diffeomorphism; \end{enumerate} and the $w_i$'s are positive weights whose sum is $1$. (For instance, we may simply take all $w_i = 1/K$; or we can define $w_i$ as proportional to the distance between $\Phi_i(x)$ and the Euclidean mean $\sum \Phi_i/K$.)\\ Thus, mathematically, the problem of determining the average of the diffeomorphisms $\Phi_i, i = 1, 2,\ldots, K$, is to construct a diffeomorphism with prescribed Jacobian determinant $f=\sum w_i J(\Phi_i)$ and prescribed curl vector field $g =\sum w_i curl(\Phi_i)$ under proper boundary conditions.\\ In the next section, we present a variational method which computes a diffeomorphism $\Phi$ with prescribed Jacobian determinant and curl robustly and efficiently. \section{A Variational Method} The most technical step is to construct a diffeomorphism with prescribed Jacobian determinant and prescribed curl vector.
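The two averaged quantities in the definition above can be computed directly from grid representations of the $\Phi_i$. Below is a minimal 2D sketch (assuming NumPy is available; the grid resolution, the test maps, and the equal weights are illustrative, not the paper's data): it approximates $J(\Phi_i)$ and the scalar 2D curl by central finite differences and forms the weighted averages $f_0=\sum w_i J(\Phi_i)$, $g_0=\sum w_i\,curl(\Phi_i)$.

```python
import numpy as np

def jac_and_curl(phi_x, phi_y, h):
    """Jacobian determinant and (scalar, 2D) curl of a map given by its
    coordinate arrays on a uniform grid, via finite differences."""
    ux, uy = np.gradient(phi_x, h, h)   # d(phi_x)/dx, d(phi_x)/dy
    vx, vy = np.gradient(phi_y, h, h)   # d(phi_y)/dx, d(phi_y)/dy
    return ux*vy - uy*vx, vx - uy

n, h = 33, 1/32
x, y = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n), indexing="ij")

# Phi_1 = identity; Phi_2 = rotation by theta about the center (0.5, 0.5)
theta = np.pi / 6
cx, cy = x - 0.5, y - 0.5
phi2_x = 0.5 + cx*np.cos(theta) - cy*np.sin(theta)
phi2_y = 0.5 + cx*np.sin(theta) + cy*np.cos(theta)

J1, c1 = jac_and_curl(x, y, h)
J2, c2 = jac_and_curl(phi2_x, phi2_y, h)
f0 = 0.5*J1 + 0.5*J2      # averaged Jacobian determinant
g0 = 0.5*c1 + 0.5*c2      # averaged curl
# For these linear maps, J = 1 everywhere and curl of the rotation is 2*sin(theta),
# so f0 = 1 and g0 = sin(theta) (finite differences are exact on linear maps).
print(np.allclose(f0, 1.0), np.allclose(g0, np.sin(theta)))
```

These $f_0$ and $g_0$ are exactly the data fed to the variational method of the next section.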
This problem is formulated as follows:\\ Given $f_0>0$ and $\bm{g}_0$ on $\Omega$, minimize the energy functional \[ E(\bm{\Phi},f_0,\bm{g}_0)=\frac 1 2\int_\Omega[(J(\bm{\Phi}(x))-f_0(x))^2+(curl(\bm{\Phi}(x))-\bm{g}_0(x))^2]dx \] subject to the constraints: \[ \bm{\Phi}(x)=\bm{\Phi_0}(x)+\bm{u}(x), \] where $\bm{u}$ satisfies: \[ \left\{ \begin{array}{ll} \textrm{div} \bm{u}&=f\\ \textrm{curl} \bm{u}&=\bm{g} \qquad in \ \Omega\\ \bm{u}&=0 \qquad on \ \partial\Omega. \end{array} \right. \] In the numerical optimization, the \textrm{div} and \textrm{curl} equations are replaced by \[ \Delta \bm{u}=(f_1,f_2,f_3)=\bm{F}. \] Thus, the diffeomorphism $\Phi$ is constructed by optimizing $E(\bm{\Phi},f_0,\bm{g}_0)$ with respect to the control function $\bm{F}=(f_1,f_2,f_3)$. Note: the Jacobian determinant $J(\bm{\Phi})$ and the curl vector $curl(\bm{\Phi})$ are matched to $f_0$ and $\bm{g}_0$ in the $L_2$ norm.\\ In \cite{Chen2016, Chen2015}, the gradient $\frac{\partial E}{\partial \bm{F}}$ of $E(\bm{\Phi},f_0,\bm{g}_0)$ is derived. \\ An optimization algorithm is implemented based on the method of gradient descent. In the following example, the coordinates of the grid in Fig. 3 are used to represent the diffeomorphism, and to calculate the Jacobian determinant $f_0$ and the curl vector $\bm{g}_0$. Our algorithm generated a grid that is almost identical to the original grid, see Figs. 4 and 5. \begin{figure} \caption{Test transformation $\Phi_0$} \end{figure} \begin{figure} \caption{Reconstruction of a transformation grid from its cell size and local rotation. \footnotesize{The black star dots $*$ represent $\bm{\Phi}$.}} \end{figure} \begin{figure} \caption{Reconstruction of a transformation grid from its cell size and local rotation. \footnotesize{The black star dots $*$ represent $\bm{\Phi}$.}} \end{figure} \section{Averaging Two Diffeomorphisms} We now give a numerical example of averaging two diffeomorphisms by our proposed method.
We use the variational method to construct the average of the diffeomorphisms $\Phi_1$ and $\Phi_2$, which are approximated by two grids, see Fig. 6(a) and (b) below. These two grids are obtained by rotating the grid $\Phi_0$ of the previous example through an angle $\theta$ in the clockwise and the counter-clockwise direction, respectively. Since the cell size does not change under rotations, and the curls of the two rotations are opposite to each other, we should expect the correct average of $\Phi_1$ and $\Phi_2$ to be $\Phi_0$. We define \[ \left\{ \begin{aligned} f_0&=\frac 1 2(J(\Phi_1)+J(\Phi_2))\\ g_0&=\frac 1 2(curl(\Phi_1)+curl(\Phi_2)). \end{aligned} \right. \] Using the above $f_0$ and $g_0$ in the variational method, we generated the average $\Phi^*$ of $\Phi_1$ and $\Phi_2$, which is almost identical to $\Phi_0$, see Fig. 6(e, f). The Euclidean average of $\Phi_1$ and $\Phi_2$, shown in Fig. 6(c, d), is remarkably different from $\Phi_0$. \begin{figure} \caption{Averaging two diffeomorphisms by the proposed method.} \end{figure} \section{Conclusion} In this paper, a new concept of averaging diffeomorphisms is proposed. It is based on a variational method of constructing diffeomorphisms with prescribed Jacobian determinant and curl vector. It is demonstrated by a numerical example that the method is more realistic than the Euclidean average, and that the numerical algorithm is accurate and efficient. Also, the method can be extended naturally to the general 3-dimensional case \cite{Chen2016}. \end{document}
\begin{document} \title{Extended Euler-Lagrange and Hamiltonian Conditions in Optimal Control of Sweeping Processes with Controlled Moving Sets}\vspace*{-0.2in} \small{\bf Abstract.} This paper concerns optimal control problems for a class of sweeping processes governed by discontinuous unbounded differential inclusions that are described via normal cone mappings to controlled moving sets. Largely motivated by applications to hysteresis, we consider a general setting where moving sets are given as inverse images of closed subsets of finite-dimensional spaces under nonlinear differentiable mappings dependent on both state and control variables. Developing the method of discrete approximations and employing generalized differential tools of first-order and second-order variational analysis allow us to derive nondegenerate necessary optimality conditions for such problems in extended Euler-Lagrange and Hamiltonian forms involving the Hamiltonian maximization. The latter conditions of the Pontryagin Maximum Principle type are the first in the literature for optimal control of sweeping processes with control-dependent moving sets. {\bf Key words.} optimal control, sweeping process, variational analysis, discrete approximations, generalized differentiation, Euler-Lagrange and Hamiltonian formalisms, maximum principle, rate-independent operators {\bf AMS subject classifications.} 49J52, 49J53, 49K24, 49M25, 90C30 \newtheorem{Theorem}{Theorem}[section] \newtheorem{Proposition}[Theorem]{Proposition} \newtheorem{Remark}[Theorem]{Remark} \newtheorem{Lemma}[Theorem]{Lemma} \newtheorem{Corollary}[Theorem]{Corollary} \newtheorem{Definition}[Theorem]{Definition} \newtheorem{Example}[Theorem]{Example} \newtheorem{Assumptions}[Theorem]{Assumptions} \renewcommand{\theequation}{\thesection.\arabic{equation}} \normalsize \def\proof{\normalfont {\noindent\itshape Proof. 
\hspace*{6pt}\ignorespaces}} \def\endproof{$\hfill\triangle$\vspace*{0.1in}}\vspace*{-0.15in} \section{Introduction} \setcounter{equation}{0} The basic sweeping process (``processus du rafle'') was introduced by Moreau \cite{mor_frict} in the form \begin{eqnarray}\label{sp} \dot{x}(t)\in-N\big(x(t);C(t)\big)\;\mbox{ a.e. }\;t\in[0,T], \end{eqnarray} where $N(x;\Omega)$ stands for the normal cone to a convex set $\Omega\subset\mathbb{R}^n$ at $x$ defined by \begin{eqnarray}\label{nor-conv} N(x;\Omega):=\left\{\begin{array}{ll} \big\{v\in\mathbb{R}^n\big|\;\langle v,u-x\rangle\le 0\;\mbox{ for all }\;u\in\Omega\big\}&\mbox{if }\;x\in\Omega,\\ \emptyset&\mbox{otherwise}, \end{array}\right. \end{eqnarray} and where the convex variable set $C(t)$ evolves continuously in time. It has been realized that the Cauchy problem $x(0)=x_0\in C(0)$ for \eqref{sp} admits a unique solution (see, e.g., \cite{ct}), and hence it makes no sense to consider optimization problems for the sweeping differential inclusion \eqref{sp} itself. This is totally different from the well-developed optimal control theory for Lipschitzian differential inclusions of the type \begin{eqnarray}\label{di} \dot x(t)\in F\big(x(t)\big)\;\mbox{ a.e. }\;t\in[0,T], \end{eqnarray} which arose from the classical one for controlled differential equations \begin{eqnarray}\label{de} \dot x(t)=f\big(x(t),u(t)\big),\;\;u(t)\in U\;\mbox{ a.e. }\;t\in[0,T] \end{eqnarray} with $F(x):=f(x,U)=\{y\in\mathbb{R}^n|\;y=f(x,u)\;\mbox{ for some }\;u\in U\}$ in \eqref{di}; see, e.g., the books \cite{mordukhovich,vinter} with the references therein as well as more recent publications devoted to optimal control of \eqref{di}.
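The well-posedness of the Cauchy problem for \eqref{sp} is constructive: Moreau's classical catching-up scheme discretizes the dynamics by projecting the current state onto the next moving set. The following minimal sketch uses assumed illustrative data (a moving interval $C(t)=[t,t+1]$ in $\mathbb{R}$), not an example from this paper:

```python
# Moreau's catching-up scheme for the basic sweeping process:
# x_{j+1} = proj(x_j; C(t_{j+1})).  Illustrative 1-D data (assumed,
# not from the paper): the moving set is the interval C(t) = [t, t+1].

def proj_interval(x, lo, hi):
    """Euclidean projector onto the closed interval [lo, hi]."""
    return min(max(x, lo), hi)

def catching_up(x0, T=2.0, k=200):
    h = T / k
    x, traj = x0, [x0]
    for j in range(1, k + 1):
        t = j * h
        # one step of dx/dt in -N(x; C(t)): project onto the new set
        x = proj_interval(x, t, t + 1.0)
        traj.append(x)
    return traj

traj = catching_up(x0=0.5)
# The state is untouched until the left endpoint c(t) = t reaches it,
# after which it is dragged along: x = max(x0, t) at the mesh points.
```

The sketch also illustrates why no optimization arises for \eqref{sp} alone: the trajectory is uniquely determined by $x_0$ and the moving set.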
It was suggested in \cite{chhm}, probably for the first time in the literature, to formulate optimal control problems for \eqref{sp} by entering control functions into the moving sets $C(t)$ in \eqref{sp}, i.e., considering the moving set {\em control parametrization} in the form \begin{eqnarray}\label{cp} C(t)=C\big(u(t)\big)\;\mbox{ for all }\;t\in[0,T] \end{eqnarray} with respect to some collections of admissible controls $u(\cdot)$ satisfying appropriate constraints. In this way we arrive at new and very challenging classes of optimal control problems on minimizing certain Bolza-type cost functionals over feasible solutions to {\em highly non-Lipschitzian} unbounded differential inclusions under {\em irregular pointwise} state-control constraints \begin{eqnarray}\label{state-con} x(t)\in C\big(u(t)\big)\;\mbox{ for all }\;t\in[0,T], \end{eqnarray} which intrinsically arise from \eqref{sp} and \eqref{cp} due to the normal cone construction in \eqref{nor-conv}. It turns out that not only the results but also the methods developed in optimal control theory for controlled differential equations \eqref{de} and Lipschitzian differential inclusions \eqref{di} are not suitable for applications to the new classes of sweeping control systems that appear in this way. Papers \cite{chhm,h1} present significant extensions to sweeping control systems of type \eqref{sp}, \eqref{cp}, and \eqref{state-con} of the {\em method of discrete approximations} developed in \cite{m95,mordukhovich} for Lipschitzian differential inclusions of type \eqref{di}. Major new ideas in the obtained extensions consist of marrying the discrete approximation approach to recently established {\em second-order subdifferential calculus} and {\em explicit computations} of the corresponding second-order constructions of variational analysis.
The strongest results established by such a device in \cite{h1} concern necessary optimality conditions for the generalized Bolza problem with the controlled sweeping dynamics in \eqref{sp}, \eqref{cp}, and \eqref{state-con} described by the moving convex polyhedra of the type \begin{eqnarray}\label{sw-con2} C(t):=\big\{x\in\mathbb{R}^n\big|\;\langle u_i(t),x\rangle\le b_i(t),\;i=1,\ldots,m\big\},\quad t\in[0,T], \end{eqnarray} where both actions $u_i(t)$ and $b_i(t)$ are involved in control. Other developments of the discrete approximation approach to derive necessary conditions for controlled sweeping systems with controls not only in the moving sets \eqref{cp} but also in additive perturbations of \eqref{sp} are given in \cite{cm1}--\cite{cm3}, where the reader can find applications of the obtained results to the practical crowd motion model of traffic equilibrium. The method of discrete approximations was also implemented in \cite{dfm} to study various optimal control issues for evolution inclusions governed by one-sided Lipschitzian mappings and in \cite{h2} for those described by maximal monotone operators in Hilbert spaces, but without deriving necessary optimality conditions. Note that the necessary optimality conditions obtained in the previous papers \cite{cm1}--\cite{h1} do not contain the formalism of the Pontryagin Maximum Principle (PMP) \cite{pontryagin} (i.e., the maximization of the corresponding Hamiltonian function) established in classical optimal control of \eqref{de} and then extended to optimal control problems for Lipschitzian differential inclusions of type \eqref{di}.
To the best of our knowledge, necessary optimality conditions involving the maximization of the corresponding Hamiltonian were first obtained for sweeping control systems in \cite{bk}, where the authors considered a sweeping process with a strictly smooth, convex, and solid set $C(t)\equiv C$ in \eqref{sp} while with control functions entering linearly an adjacent ordinary differential equation. Further results with the maximum condition for global (as in \cite{bk}) minimizers were derived in \cite{ac} for the sweeping control system \begin{eqnarray}\label{sp1} \dot x(t)\in f\big(x(t),u(t)\big)-N\big(x(t);C(t)\big)\;\mbox{ a.e. }\;t\in[0,T], \end{eqnarray} where measurable controls $u(t)$ enter the additive smooth term $f$ while the uncontrolled moving set $C(t)$ is compact, uniformly prox-regular (close enough to convexity), and possesses a ${\cal C}^3$-smooth boundary for each $t\in[0,T]$ under some other assumptions. The very recent paper \cite{pfs} also concerns a (generally nonautonomous) sweeping control system in form \eqref{sp1} and derives necessary optimality conditions of the PMP type for global minimizers provided that the convex, solid, and compact set $C(t)\equiv C$ therein is defined by $C:=\{x\in\mathbb{R}^n|\;\psi(x)\le 0\}$ via a ${\cal C}^2$-smooth function $\psi$ under other assumptions, which partly differ from those of \cite{ac}. The {\em penalty-type} approximation methods developed in \cite{ac}, \cite{bk}, and \cite{pfs} are different from each other, significantly based on the {\em smoothness} of uncontrolled moving sets while being totally distinct from the method of discrete approximations employed in our previous papers and in what follows.\vspace*{0.02in} This paper addresses sweeping control systems modeled as \begin{eqnarray}\label{evo_equa} \dot x(t)\in f\big(t,x(t)\big)-N\big(g(x(t));C(t,u(t))\big)\;\mbox{ a.e.
}\;t\in[0,T],\quad x(0)=x_0\in C\big(0,u(0)\big), \end{eqnarray} where the controlled moving set is given by \begin{eqnarray}\label{mov-set} C(t,u):=\big\{x\in\mathbb{R}^n\big|\;\psi(t,x,u)\in\Theta\big\},\quad(t,u)\in[0,T]\times\mathbb{R}^m, \end{eqnarray} with $f\colon[0,T]\times\mathbb{R}^n\to\mathbb{R}^n$, $g\colon\mathbb{R}^n\to\mathbb{R}^n$, $\psi\colon[0,T]\times\mathbb{R}^n\times\mathbb{R}^m\to\mathbb{R}^s$, and $\Theta\subset\mathbb{R}^s$. Definition~\eqref{mov-set} amounts to saying that $C(t,u)$ is the {\em inverse image} of the set $\Theta$ under the mapping $x\mapsto\psi(t,g(x),u)$ for any $(t,u)$. Throughout the paper we assume that the set $\Theta$ is {\em locally closed} around the reference point. We do not impose any convexity of $C(t,u)$ and use in \eqref{evo_equa} the (basic, limiting, Mordukhovich) {\em normal cone} to an arbitrary locally closed set $\Omega\subset\mathbb{R}^n$ at $\bar{x}\in\mathbb{R}^n$ defined by \begin{eqnarray}\label{nc} N(\bar{x};\Omega):=\left\{\begin{array}{ll} \big\{v\in\mathbb{R}^n\big|\;\exists x_k\to\bar{x},\;\alpha_k\ge 0,\;w_k\in\Pi(x_k;\Omega),\;\alpha_k(x_k-w_k)\to v\big\}&\mbox{if }\;\bar{x}\in\Omega,\\ \emptyset&\mbox{otherwise}, \end{array}\right. \end{eqnarray} where $\Pi(x;\Omega)$ stands for the Euclidean projector of $x$ onto $\Omega$. When $\Omega$ is convex, the normal cone \eqref{nc} reduces to the one \eqref{nor-conv} in the sense of convex analysis, but in general the multifunction $x\rightrightarrows N(x;\Omega)$ is nonconvex-valued while satisfying a {\em full calculus} together with the associated subdifferential of extended-real-valued functions and coderivative of set-valued mappings considered below. Such a calculus is due to {\em variational/extremal principles} of variational analysis; see \cite{mordukhovich,mord,rw} for more details.
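The projector-based construction in \eqref{nc} can be probed numerically. The sketch below uses an assumed toy set (not from the paper), $\Omega=(-\infty,0]\cup[1,\infty)\subset\mathbb{R}$, and samples proximal-normal directions $\alpha_k(x_k-w_k)$ with $w_k\in\Pi(x_k;\Omega)$ near a reference point, recovering $N(0;\Omega)=[0,\infty)$ and $N(1;\Omega)=(-\infty,0]$:

```python
# Sampling the limiting normal cone construction via the Euclidean
# projector, for the assumed toy set Omega = (-inf, 0] u [1, +inf).

def proj(x):
    """Euclidean projector onto Omega (nearest-point map)."""
    if x <= 0.0 or x >= 1.0:
        return x
    return 0.0 if x < 0.5 else 1.0

def normal_directions(xbar, radius=0.4, samples=1000):
    """Signs of the proximal-normal directions x - w, w = proj(x),
    generated by points x near xbar."""
    signs = set()
    for i in range(-samples, samples + 1):
        x = xbar + radius * i / samples
        d = x - proj(x)
        if d > 0:
            signs.add(1)
        elif d < 0:
            signs.add(-1)
    return signs

print(normal_directions(0.0))  # {1}:  N(0; Omega) = [0, +inf)
print(normal_directions(1.0))  # {-1}: N(1; Omega) = (-inf, 0]
```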
Our major goal here is to study the optimal control problem $(P)$ of minimizing the cost functional \begin{eqnarray}\label{eq:MP} {\rm minimize}\;J[x,u]:=\varphi\big(x(T)\big)+\int_0^T\ell\big(t,x(t),u(t),\dot x(t),\dot u(t)\big)dt \end{eqnarray} over absolutely continuous control actions $u(\cdot)$ and the corresponding absolutely continuous trajectories $x(\cdot)$ of the sweeping differential inclusion \eqref{evo_equa} generated by the controlled moving set \eqref{mov-set}. It follows from \eqref{evo_equa} and the normal cone definition \eqref{nc} that the optimal control problem in \eqref{evo_equa} and \eqref{eq:MP} intrinsically contains the {\em pointwise constraints} on both {\em state} and {\em control} functions \begin{eqnarray*}\label{const} \psi\big(t,g(x(t)),u(t)\big)\in\Theta\;\mbox{ for all }\;t\in[0,T]. \end{eqnarray*} Note that the optimal control problem studied in \cite{h1} is a particular case of our problem $(P)$ that corresponds to the choice of $g(x):=x$ (the identity operator), $\psi(t,x,u):=Ax-b$, and $\Theta:=\mathbb{R}^m_-$ in \eqref{evo_equa} and \eqref{mov-set}. Besides being attracted by challenging mathematical issues, our interest in the more general sweeping control problems considered in this paper is largely motivated by applications to {\em rate-independent operators} that frequently appear, e.g., in various plasticity models and in the study of hysteresis. We discuss these and related topics in more detail in Section~\ref{application-example} and will also devote a separate paper to such applications. While the underlying approach to deriving necessary optimality conditions for local minimizers of the above problem $(P)$ is the usage of the {\em method of discrete approximations} and {\em generalized differentiation}, similarly to \cite{h1} and our other publications on sweeping optimal control, some important elements of our technique here are significantly different from the previous developments.
On the one hand, the new/modified technique allows us to establish {\em nondegenerate} necessary optimality conditions for local minimizers of $(P)$ in the {\em extended Euler-Lagrange} form for more general sweeping systems while relaxing several restrictive technical assumptions of \cite{h1} in its polyhedral setting. On the other hand, we obtain optimality conditions of the {\em Hamiltonian/PMP} type, which are new even for polyhedral moving sets as in \cite{h1} under an additional surjectivity assumption. In fact, the optimality conditions in the PMP form are the {\em first results} of this type for sweeping processes with controlled moving sets.\vspace*{0.02in} The rest of the paper is organized as follows. In Section~\ref{prelim} we formulate and discuss our standing assumptions and present the necessary preliminaries from first-order and second-order generalized differentiation that are widely used for deriving the main results of the paper. Section~\ref{discrete-approximations} concerns discrete approximations of feasible and local optimal solutions to the sweeping control problem $(P)$ with the verification of the required strong convergence. In Section~\ref{optimality-conditions} we derive the extended Euler-Lagrange conditions for local optimal solutions to $(P)$ by passing to the limit from discrete approximations and using the second-order subdifferential calculations. Section~\ref{hamiltonian} contains necessary optimality conditions of the PMP type involving the maximization of the new Hamiltonian function, discusses relationships with the conventional Hamiltonians, and presents an example showing that the maximum principle in the conventional Hamiltonian form fails in our framework. More examples of some practical meaning in the areas of elastoplasticity and hysteresis are given in Section~\ref{application-example}. The final Section~\ref{conclusion} discusses some directions of future research.
Throughout the paper we use standard notation of variational analysis and control theory; see, e.g., \cite{mordukhovich,rw,vinter}. Recall that $\mathbb{N}:=\{1,2,\ldots\}$, that $A^*$ stands for the transposed/adjoint matrix to $A$, and that $\mathbb{B}$ denotes the closed unit ball of the space in question.\vspace*{-0.2in} \section{Standing Assumptions and Preliminaries}\label{prelim} \setcounter{equation}{0} Let us first formulate the major assumptions on the given data of problem $(P)$ that are standing throughout the whole paper. Since our approach to derive necessary optimality conditions for $(P)$ is based on the method of discrete approximations, we impose the a.e.\ continuity of the functions involved with respect to the time variable, although it is not needed for the results dealing with discrete systems before passing to the limit. Note also that the time variable is never included in subdifferentiation. As mentioned above, the constraint set $\Theta$ in \eqref{mov-set} is assumed to be locally closed unless otherwise stated.\vspace*{0.05in} Our {\em standing assumptions} are as follows:\vspace*{0.05in} {\bf (H1)} There exists $L_f>0$ such that $\|f(t,x)-f(t,y)\|\le L_f\|x-y\|$ for all $x,y\in\mathbb{R}^n,\;t\in[0,T]$, and the mapping $t\mapsto f(t,x)$ is a.e.\ continuous on $[0,T]$ for each $x\in\mathbb{R}^n$. {\bf(H2)} There exists $L_g>0$ such that $\|g(x)-g(y)\|\le L_g\|x-y\|$ for all $x,y\in\mathbb{R}^n$. {\bf(H3)} For each $(t,u)\in[0,T]\times\mathbb{R}^m$, the mapping $\psi_{t,u}(x):=\psi(t,x,u)$ is ${\cal C}^2$-smooth around the reference points with the surjective derivative $\nabla\psi_{t,u}(x)$ satisfying \begin{eqnarray*} \|\nabla\psi_{t,u}(x)-\nabla\psi_{t,v}(x)\|\le L_{\psi}\|u-v\| \end{eqnarray*} with the uniform Lipschitz constant $L_\psi$. Furthermore, the mapping $t\mapsto\psi(t,x,u)$ is a.e.\ continuous on $[0,T]$ for each $x\in\mathbb{R}^n$ and $u\in\mathbb{R}^m$.
{\bf(H4)} There are a number $\tau>0$ and a mapping $\vartheta\colon\mathbb{R}^n\times\mathbb{R}^n\times\mathbb{R}^n\times\mathbb{R}^m\to\mathbb{R}^m$ locally Lipschitz continuous and uniformly bounded on bounded sets such that for all $t\in[0,T]$, $\bar{v}\in N(\psi_{(t,\bar{u})}(\bar{x});\Theta)$, and $x\in\psi^{-1}_{(t,u)}(\Theta)$ with $u:=\bar{u}+\vartheta(x-\bar{x},x,\bar{x},\bar{u})$ there exists $v\in N(\psi_{(t,u)}(x);\Theta)$ satisfying $\|v-\bar{v}\|\le\tau\|x-\bar{x}\|$. {\bf(H5)} The cost functions $\varphi\colon\mathbb{R}^n\to\overline{\R}:=(-\infty,\infty]$ and $\ell(t,\cdot)\colon\mathbb{R}^{2(n+m)}\to\overline{\R}$ in \eqref{eq:MP} are bounded from below and lower semicontinuous (l.s.c.) around a given feasible solution to $(P)$ for a.e.\ $t\in[0,T]$, while the integrand $\ell$ is a.e.\ continuous in $t$ and is uniformly majorized by a summable function on $[0,T]$.\vspace*{0.05in} Assumption (H4) is technical and seems to be the most restrictive. Let us show nevertheless that it holds automatically in the polyhedral setting of \cite{h1} and also for some nonconvex moving sets.\vspace*{-0.05in} \begin{Proposition}{\bf(validity of (H4) for controlled polyhedra).}\label{poly} Let \begin{eqnarray*} \psi\big(t,x,(u,b)\big):=\langle x,u\rangle-b\;\mbox{ and }\;\Theta=\mathbb{R}^m_- \end{eqnarray*} in \eqref{mov-set}. Then condition ${\rm(H4)}$ is satisfied. \end{Proposition}\vspace*{-0.05in} {\bf Proof.} Pick $\bar{v}\in N(\langle\bar{x},\bar{u}\rangle-\bar{b};\mathbb{R}^m_-)$, $x\in\mathbb{R}^n$ and denote $\vartheta(x,y,z,u):=(0,\langle x,u\rangle)$. Choose $(u,b):=(\bar{u},\bar{b})+\vartheta(x-\bar{x},x,\bar{x},\bar{u})$ and hence get $u=\bar{u}$ and $b=\bar{b}+\langle x-\bar{x},\bar{u}\rangle$, which results in $\langle\bar{x},\bar{u}\rangle-\bar{b}=\langle x,u\rangle-b$. Then $N(\langle\bar{x},\bar{u}\rangle-\bar{b};\mathbb{R}^m_-)=N(\langle x,u\rangle-b;\mathbb{R}^m_-)$.
Thus condition (H4) is satisfied with $v:=\bar{v}$ for any number $\tau\ge 1$ therein. $\hfill\triangle$\vspace*{0.05in} The following simple example illustrates that (H4) is also satisfied in standard nonconvex settings.\vspace*{-0.05in} \begin{Example}{\bf(validity of (H4) for nonconvex moving sets).}\label{h4n} {\rm Consider the nonconvex set \begin{eqnarray*} C(t,u)=\big\{x\in\mathbb{R}\big|\;x^2\ge-u+1\big\}, \end{eqnarray*} which corresponds to $\psi(x,u):=x^2+u-1$ and $\Theta:=[0,\infty)$ in \eqref{mov-set}. To verify (H4) in this setting, denote $\vartheta(x,y,z,u):=-x(y+z)$ and pick any $\bar{v}\in N(\bar{x}^2+\bar{u}-1;[0,\infty))$ and $x\in\mathbb{R}$. Choosing now $u:=\bar{u}-(x-\bar{x})(x+\bar{x})$, we get $x^2+u-1=\bar{x}^2+\bar{u}-1$ and thus verify (H4) with $v:=\bar{v}$ for every $\tau\ge 1$.} \end{Example}\vspace*{-0.05in} Let us next discuss condition (H3), which plays a significant role in deriving some major results of the paper. The surjectivity condition therein, which is equivalent in the finite-dimensional setting under consideration to the full rank of the Jacobian matrix $\nabla\psi_{t,u}(x)$, amounts to the {\em metric regularity} of the mapping $x\mapsto\psi_{t,u}(x)$ by the seminal Lyusternik-Graves theorem; see, e.g., \cite[Theorem~1.57]{mordukhovich}. The following normal cone calculus rule is a consequence of \cite[Theorem~1.17]{mordukhovich}.\vspace*{-0.05in} \begin{Proposition}{\bf(normal cone representation for inverse images).}\label{nor_inve} Under the validity of {\rm(H3)} the normal cone \eqref{nc} to the controlled moving set \eqref{mov-set} is represented by \begin{eqnarray*} N\big(x;C(t,u)\big)=\nabla\psi_{t,u}(x)^*N\big(\psi_{t,u}(x);\Theta\big)\;\mbox{ whenever }\;\psi(t,x,u)\in\Theta.
\end{eqnarray*} \end{Proposition} To proceed further, we recall some constructions of first-order and second-order generalized differentiation for functions and multifunctions/set-valued mappings needed in what follows; see \cite{mordukhovich,mord} for detailed expositions. All these constructions are generated geometrically by our basic normal cone \eqref{nc}. Given a set-valued mapping $F\colon\mathbb{R}^n\rightrightarrows\mathbb{R}^q$ and a point $(\bar{x},\bar{y})\in\mbox{\rm gph}\, F$ from its graph \begin{eqnarray*} \mbox{\rm gph}\, F:=\big\{(x,y)\in\mathbb{R}^n\times\mathbb{R}^q\big|\;y\in F(x)\big\}, \end{eqnarray*} the {\em coderivative} $D^*F(\bar{x},\bar{y})\colon\mathbb{R}^q\rightrightarrows\mathbb{R}^n$ of $F$ at $(\bar{x},\bar{y})$ is defined by \begin{eqnarray}\label{cod} D^*F(\bar{x},\bar{y})(u):=\big\{v\in\mathbb{R}^n\big|\;(v,-u)\in N\big((\bar{x},\bar{y});\mbox{\rm gph}\, F\big)\big\},\quad u\in\mathbb{R}^q, \end{eqnarray} where $\bar{y}$ is omitted in the notation if $F\colon\mathbb{R}^n\to\mathbb{R}^q$ is single-valued. If furthermore $F$ is ${\cal C}^1$-smooth around $\bar{x}$ (or merely strictly differentiable at this point), we have $D^*F(\bar{x})(v)=\{\nabla F(\bar{x})^*v\}$ via the adjoint Jacobian matrix. In general, the coderivative \eqref{cod} is a positively homogeneous multifunction satisfying comprehensive calculus rules and providing complete characterizations of major well-posedness properties in variational analysis related to Lipschitzian stability, metric regularity, and linear openness; see \cite{mordukhovich,rw}.
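The smooth reduction $D^*F(\bar{x})(v)=\{\nabla F(\bar{x})^*v\}$ of the coderivative \eqref{cod} can be checked by finite differences. A minimal sketch with an assumed toy map $F(x_1,x_2)=(x_1x_2,\,x_1+x_2^2)$, not an example from the paper:

```python
# For a C^1 single-valued map the coderivative reduces to the adjoint
# Jacobian: D*F(xbar)(v) = {grad F(xbar)^T v}.  Finite-difference check
# for the assumed toy map F(x1, x2) = (x1*x2, x1 + x2**2).

def F(x):
    return [x[0] * x[1], x[0] + x[1] ** 2]

def adjoint_jacobian_apply(x, v, eps=1e-6):
    """Approximate grad F(x)^T v by central differences."""
    out = []
    for i in range(len(x)):
        xp = list(x); xm = list(x)
        xp[i] += eps; xm[i] -= eps
        Fp, Fm = F(xp), F(xm)
        out.append(sum((Fp[j] - Fm[j]) / (2 * eps) * v[j]
                       for j in range(len(v))))
    return out

xbar, v = [2.0, 3.0], [1.0, -1.0]
# Jacobian at xbar is [[3, 2], [1, 6]] (rows = components of F), so
# grad F(xbar)^T v = [3*1 + 1*(-1), 2*1 + 6*(-1)] = [2, -4].
print(adjoint_jacobian_apply(xbar, v))
```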
For an extended-real-valued function $\varphi\colon\mathbb{R}^n\to\overline{\R}$ finite at $\bar{x}$, i.e., with $\bar{x}\in\mbox{\rm dom}\,\varphi$, the (first-order) {\em subdifferential} of $\varphi$ at $\bar{x}$ is defined geometrically by \begin{eqnarray}\label{1sub} \partial\varphi(\bar{x}):=\big\{v\in\mathbb{R}^n\big|\;(v,-1)\in N\big((\bar{x},\varphi(\bar{x}));\mbox{\rm epi}\,\varphi\big)\big\} \end{eqnarray} via the normal cone \eqref{nc} to the epigraphical set $\mbox{\rm epi}\,\varphi:=\{(x,\alpha)\in\mathbb{R}^{n+1}|\;\alpha\ge\varphi(x)\}$. If $\varphi(x):=\delta_\Omega(x)$, the indicator function of a set $\Omega$ equal to $0$ for $x\in\Omega$ and to $\infty$ otherwise, we get $\partial\varphi(\bar{x})=N(\bar{x};\Omega)$. Given further $\bar{v}\in\partial\varphi(\bar{x})$, the {\em second-order subdifferential} (or generalized Hessian) $\partial^2\varphi(\bar{x},\bar{v})\colon\mathbb{R}^n\rightrightarrows\mathbb{R}^n$ of $\varphi$ at $\bar{x}$ relative to $\bar{v}$ is defined as the coderivative of the first-order subdifferential by \begin{eqnarray}\label{2sub} \partial^2\varphi(\bar{x},\bar{v})(u):=(D^*\partial\varphi)(\bar{x},\bar{v})(u),\quad u\in\mathbb{R}^n, \end{eqnarray} where $\bar{v}=\nabla\varphi(\bar{x})$ is omitted when $\varphi$ is differentiable at $\bar{x}$. If $\varphi$ is ${\cal C}^2$-smooth around $\bar{x}$, then \eqref{2sub} reduces to the classical (symmetric) Hessian matrix \begin{eqnarray*} \partial^2\varphi(\bar{x})(u)=\big\{\nabla^2\varphi(\bar{x})u\big\}\;\mbox{ for all }\;u\in\mathbb{R}^n. \end{eqnarray*} For applications in this paper we also need partial versions of the above subdifferential constructions for functions of two variables $\varphi\colon\mathbb{R}^n\times\mathbb{R}^m\to\overline{\R}$.
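As a sanity check of the Hessian reduction of \eqref{2sub} for ${\cal C}^2$ functions, the directional derivative of the gradient can be approximated by finite differences. The sketch below uses an assumed toy function $\varphi(x_1,x_2)=x_1^2x_2+x_2^3$, not an example from the paper:

```python
# For a C^2 function the second-order subdifferential collapses to the
# Hessian action u -> grad^2 phi(xbar) u.  Finite-difference sketch for
# the assumed toy function phi(x1, x2) = x1**2 * x2 + x2**3.

def grad_phi(x):
    return [2 * x[0] * x[1], x[0] ** 2 + 3 * x[1] ** 2]

def hessian_apply(x, u, eps=1e-5):
    """Approximate grad^2 phi(x) u as a directional derivative of grad phi."""
    xp = [x[i] + eps * u[i] for i in range(len(x))]
    xm = [x[i] - eps * u[i] for i in range(len(x))]
    gp, gm = grad_phi(xp), grad_phi(xm)
    return [(gp[i] - gm[i]) / (2 * eps) for i in range(len(x))]

xbar, u = [1.0, 2.0], [1.0, 1.0]
# Hessian at xbar: [[2*x2, 2*x1], [2*x1, 6*x2]] = [[4, 2], [2, 12]],
# hence grad^2 phi(xbar) u = [6, 14].
print(hessian_apply(xbar, u))
```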
Consider the {\em partial first-order subdifferential} mapping $(x,w)\mapsto\partial_x\varphi(x,w)$ of $\varphi(x,w)$ with respect to $x$ given by \begin{eqnarray*}\label{1par} \partial_x\varphi(x,w):=\big\{v\in\mathbb{R}^n\big|\;v\;\mbox{ is a subgradient of }\;\varphi_w:=\varphi(\cdot,w)\;\mbox{ at }\;x\big\}=\partial\varphi_w(x) \end{eqnarray*} and then, picking $(\bar{x},\bar{w})\in\mbox{\rm dom}\,\varphi$ and $\bar{v}\in\partial_x\varphi(\bar{x},\bar{w})$, define the {\em partial second-order subdifferential} of $\varphi$ with respect to $x$ at $(\bar{x},\bar{w})$ relative to $\bar{v}$ by \begin{eqnarray}\label{2par} \partial^2_x\varphi(\bar{x},\bar{w},\bar{v})(u):=\big(D^*\partial_x\varphi\big)(\bar{x},\bar{w},\bar{v})(u)\;\mbox{ for all }\;u\in\mathbb{R}^n. \end{eqnarray} If $\varphi$ is ${\cal C}^2$-smooth around $(\bar{x},\bar{w})$, we have the representation \begin{eqnarray*} \partial^2_x\varphi(\bar{x},\bar{w})(u)=\big\{\big(\nabla^2_{xx}\varphi(\bar{x},\bar{w})^*u,\nabla^2_{xw}\varphi(\bar{x},\bar{w})^*u\big)\big\}\;\mbox{ for all }\;u\in\mathbb{R}^n. \end{eqnarray*} Taking into account the controlled moving set structure \eqref{mov-set}, important roles in this paper are played by the parametric constraint system \begin{eqnarray}\label{ctild} S(w):=\big\{x\in\mathbb{R}^n\big|\;\psi(x,w)\in\Theta\big\},\quad w\in\mathbb{R}^m, \end{eqnarray} and the {\em normal cone mapping} ${\cal N}\colon\mathbb{R}^n\times\mathbb{R}^m\rightrightarrows\mathbb{R}^n$ associated with \eqref{ctild} by \begin{eqnarray}\label{nm} {\cal N}(x,w):=N\big(x;S(w)\big)\;\mbox{ for }\;x\in S(w).
\end{eqnarray} It is easy to see that the mapping ${\cal N}$ in \eqref{nm} admits the composite representation \begin{eqnarray}\label{2comp} {\cal N}(x,w)=\partial_x\varphi(x,w)\;\mbox{ with }\;\varphi(x,w):=\big(\delta_{\Theta}\circ\psi\big)(x,w) \end{eqnarray} via the ${\cal C}^2$-smooth mapping $\psi\colon\mathbb{R}^n\times\mathbb{R}^m\to\mathbb{R}^s$ from \eqref{ctild} and the indicator function $\delta_\Theta$ of the closed set $\Theta\subset\mathbb{R}^s$. It follows directly from \eqref{2comp} due to the second-order subdifferential construction \eqref{2par} that \begin{eqnarray}\label{nm1} \partial^2_x\varphi(\bar{x},\bar{w},\bar{v})(u)=D^*{\cal N}(\bar{x},\bar{w},\bar{v})(u)\;\mbox{ for any }\;\bar{v}\in{\cal N}(\bar{x},\bar{w})\;\mbox{ and }\;u\in\mathbb{R}^n. \end{eqnarray} Applying now the second-order chain rule from \cite[Theorem~3.1]{BR} to the composition in \eqref{nm1} allows us to compute the coderivative of the normal cone mapping \eqref{nm} via the given data of \eqref{ctild}.\vspace*{-0.06in} \begin{Proposition}{\bf(coderivative of the normal cone mapping for inverse images).}\label{morout} Assume that $\psi$ is ${\cal C}^2$-smooth around $(\bar{x},\bar{w})$, and that the partial Jacobian matrix $\nabla_x\psi(\bar{x},\bar{w})$ is of full rank.
Then for each $\bar{v}\in{\cal N}(\bar{x},\bar{w})$ there is a unique vector $\bar{p}\in N_{\Theta}(\psi(\bar{x},\bar{w})):=N(\psi(\bar{x},\bar{w});\Theta)$ satisfying \begin{eqnarray}\label{unique_re} \nabla_{x}\psi(\bar{x},\bar{w})^*\bar{p}=\bar{v} \end{eqnarray} and such that the coderivative of the normal cone mapping is computed for all $u\in\mathbb{R}^n$ by \begin{eqnarray*} D^*{\cal N}(\bar{x},\bar{w},\bar{v})(u)=\left[\begin{array}{c} \nabla_{xx}^{2}\langle\bar p,\psi\rangle(\bar{x},\bar{w})\\\nabla_{xw}^{2}\langle\bar p,\psi\rangle(\bar{x},\bar{w}) \end{array}\right]u+\nabla\psi(\bar{x},\bar{w})^*D^*N_{\Theta}\big(\psi(\bar{x},\bar{w}),\bar p\big)\big(\nabla_{x}\psi(\bar{x},\bar{w})u\big). \end{eqnarray*} \end{Proposition}\vspace*{-0.02in} Thus Proposition~\ref{morout} reduces the computation of $D^*{\cal N}$ to that of $D^*N_{\Theta}$, which has been computed via the given data for broad classes of sets $\Theta$; see, e.g., \cite{heoutsur,mord,MO07,BR} for more details and references.\vspace*{-0.15in} \section{Discrete Approximations}\label{discrete-approximations} \setcounter{equation}{0} In this section we construct a well-posed sequence of discrete approximations for feasible solutions to the constrained sweeping dynamics in \eqref{evo_equa}, \eqref{mov-set} and for local optimal solutions to the sweeping optimal control problem $(P)$. The results obtained here establish the strong $W^{1,2}$-convergence of discrete approximations while being free of generalized differentiation. They are certainly of independent interest, regardless of their subsequent applications to deriving necessary optimality conditions for problem $(P)$.\vspace*{0.02in} Starting with discrete approximations of the sweeping differential inclusion \eqref{evo_equa}, we replace the time derivative therein by the Euler finite difference $\dot x(t)\approx[x(t+h)-x(t)]/h$ and proceed as follows.
For each $k\in\mathbb{N}$ define $h_k:=T/k$ and consider the discrete mesh $T_k:=\{t_j^k:=jh_k|\,j=0,1,\ldots,k\}$. Then the sequence of discrete approximations of \eqref{evo_equa} is given by \begin{eqnarray}\label{evo_equa_discrete} \frac{x^k_{j+1}-x^k_j}{h_k}\in f(t^k_j,x^k_{j})-N\big(g(x_j^k);C(t^k_j,u_j^k)\big),\;j=0,\ldots,k-1;\;x^k_0=x_0\in C\big(0,u(0)\big). \end{eqnarray} The following major result on a {\em strong approximation} of {\em feasible solutions} to the controlled sweeping process in \eqref{evo_equa}, \eqref{mov-set} by feasible solutions to the discretized systems in \eqref{evo_equa_discrete} is essentially different from the related one in \cite[Theorem~3.1]{h1}. Besides applying to more general systems, it eliminates or significantly relaxes some restrictive technical assumptions imposed in \cite{h1} for the polyhedral controlled sets \eqref{sw-con2} by using another proof technique under the surjectivity assumption in (H3). Note furthermore that the choices of the reference feasible pair $(\bar{x}(\cdot),\bar{u}(\cdot))$ in \cite{h1} and Theorem~\ref{feasible-approx} below are also different: instead of the actual choice of $(\bar{x}(\cdot),\bar{u}(\cdot))\in W^{2,\infty}[0,T]\times W^{2,\infty}[0,T]$ we now have the less restrictive pick $(\bar{x}(\cdot),\bar{u}(\cdot))\in{\cal C}^1[0,T]\times{\cal C}[0,T]$ and establish its strong approximation in the $W^{1,2}\times{\cal C}$ topology instead of $W^{1,2}\times W^{1,2}$ in \cite{h1}. This actually affects the types of local minimizers for which we derive necessary optimality conditions in the setting of this paper; see Definition~\ref{ilm} below.
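To make the Euler scheme \eqref{evo_equa_discrete} concrete, in simple convex settings the normal cone inclusion can be realized by a projection step. The sketch below is an assumed illustration (with $g$ the identity, $f(t,x)=-x$, and the moving half-line $C(t,u)=[u(t),\infty)$ for $u(t)=\sin t$), not the construction used in the proof:

```python
import math

# Projection-based realization of the Euler scheme for
#   xdot in f(t,x) - N(x; C(t, u(t))),   C(t, u) = [u, +inf),
# with the assumed data f(t,x) = -x and control u(t) = sin(t).

def sweep_euler(x0, T=6.0, k=600):
    h = T / k
    x, traj = x0, [(0.0, x0)]
    for j in range(k):
        t_next = (j + 1) * h
        x_free = x + h * (-x)              # unconstrained Euler step
        x = max(x_free, math.sin(t_next))  # projection onto C(t, u(t))
        traj.append((t_next, x))
    return traj

traj = sweep_euler(x0=2.0)
# The state decays until it hits the moving floor sin(t) and is then
# swept along its rising parts; feasibility x >= u(t) holds on the mesh.
```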
If however the mapping $g$ is linear in \eqref{evo_equa} and if $\bar{u}(\cdot)\in W^{1,2}[0,T]$, we obtain the same $W^{1,2}$-approximation of the reference control as in the polyhedral framework of \cite{h1}.\vspace*{-0.05in} \begin{Theorem}{\bf(strong discrete approximation of feasible sweeping solutions).}\label{feasible-approx} Let the pair $(\bar{x}(\cdot),\bar{u}(\cdot))\in{\cal C}^1[0,T]\times{\cal C}[0,T]$ satisfy \eqref{evo_equa} with the moving set $C(t,u)$ from \eqref{mov-set} under the validity of the standing assumptions in {\rm(H1)--(H4)}. Then there exists a sequence $\{(x^k(\cdot),u^k(\cdot))\}$ of piecewise linear functions on $[0,T]$ satisfying the discrete inclusions \eqref{evo_equa_discrete} with $(x^k(0),u^k(0))=(x_0,\bar{u}(0))$ and such that $\{(x^k(\cdot),u^k(\cdot))\}$ converges to $(\bar{x}(\cdot),\bar{u}(\cdot))$ in the norm topology of $W^{1,2}([0,T];\mathbb{R}^{n})\times{\cal C}([0,T];\mathbb{R}^m)$. If in addition the mapping $g(\cdot)$ in \eqref{evo_equa} is linear and $\bar{u}(\cdot)\in W^{1,2}([0,T];\mathbb{R}^m)$, then the sequence $\{(x^k(\cdot),u^k(\cdot))\}$ converges to $(\bar{x}(\cdot),\bar{u}(\cdot))$ in the norm topology of $W^{1,2}([0,T];\mathbb{R}^n)\times W^{1,2}([0,T];\mathbb{R}^m)$. \end{Theorem}\vspace*{-0.05in} {\bf Proof.} Fix $k\in\mathbb{N}$, choose $x_0^k:=x_0$ and $u^k_0:=\bar{u}(0)$, and then construct $(x_j^k,u_j^k)$ for $j=1,\ldots,k$ by induction. Suppose that $x_j^k$ is known and satisfies $\|x_j^k-\bar{x}(t_j^k)\|\le 1$ without loss of generality. Define \begin{eqnarray}\label{ep} \epsilon_j:=\|\bar{x}(t^k_{j+1})-\bar{x}(t^k_j)-h_k\dot{\bar{x}}(t^k_j)\|\;\mbox{ for all }\;j=0,\ldots,k-1 \end{eqnarray} and deduce from the validity of \eqref{evo_equa} at $t_j^k$ that $-\dot{\bar{x}}(t_j^k)+f(t_j^k,\bar{x}(t_j^k))\in N\big(g(\bar{x}(t_j^k));C(t_j^k,\bar{u}(t_j^k))\big)$.
Using $\bar{x}(\cdot)\in{\cal C}^1([0,T];\mathbb{R}^n)$ allows us to find $\eta>0$ such that \begin{eqnarray*} \|\nabla\psi_{t_j^k,\bar{u}(t_j^k)}\|\le\eta\;\mbox{ and }\;\|-\dot{\bar{x}}(t_j^k)+f\big(t_j^k,\bar{x}(t_j^k)\big)\|\le\eta\;\mbox{ whenever }\;k\in\mathbb{N}\;\mbox{ and }\;j=0,\ldots,k. \end{eqnarray*} The surjectivity of $\nabla\psi_{t_j^k,\bar{u}(t_j^k)}$ ensures by the open mapping theorem the existence of $M>0$ for which \begin{eqnarray*} \mathbb{B}\subset\big(\nabla\psi_{t_j^k,\bar{u}(t_j^k)}\big)^*(M\mathbb{B}). \end{eqnarray*} Combining it with Proposition~\ref{nor_inve} tells us that \begin{eqnarray*} N\big(g(\bar{x}(t_j^k));C(t_j^k,\bar{u}(t_j^k))\big)\cap\eta\mathbb{B}=\big(\nabla\psi_{t_j^k,\bar{u}(t_j^k)}\big)^*\Big(N\big(\psi_{t_j^k,\bar{u}(t_j^k)}(g(\bar{x}(t_j^k)));\Theta\big)\cap\eta M\mathbb{B}\Big). \end{eqnarray*} Since $-\dot{\bar{x}}(t_j^k)+f(t_j^k,\bar{x}(t_j^k))\in N\big(g(\bar{x}(t_j^k));C(t_j^k,\bar{u}(t_j^k))\big)\cap\eta\mathbb{B}$, we find $w\in N(\psi_{t_j^k,\bar{u}(t_j^k)}(g(\bar{x}(t_j^k)));\Theta)$ with $\|w\|\le\eta M$ satisfying the equality \begin{eqnarray*} -\dot{\bar{x}}(t_j^k)+f\big(t_j^k,\bar{x}(t_j^k)\big)=\nabla\psi_{t_j^k,\bar{u}(t_j^k)}^*w. \end{eqnarray*} Using now the mapping $\vartheta(\cdot)$ from (H4) gives us vectors $u_j^k$ and $\tilde w\in N(\psi_{t_j^k,u_j^k}(g(x_j^k));\Theta)$ for which \begin{eqnarray*}\label{d} u_j^k=\bar{u}(t_j^k)+\vartheta\big(g(x_j^k)-g(\bar{x}(t_j^k)),g(x_j^k),g(\bar{x}(t_j^k)),\bar{u}(t_j^k)\big)\;\mbox{ and }\;\|w-\tilde w\|\le\tau\|g(x_j^k)-g(\bar{x}(t_j^k))\|. \end{eqnarray*} By the assumed uniform boundedness of the mapping $\vartheta(\cdot)$ it is easy to adjust $\tau>0$ so that $\|u_j^k-\bar{u}(t_j^k)\|\le\tau\|g(x_j^k)-g(\bar{x}(t_j^k))\|$.
Denoting $v_j^k:=\big(\nabla\psi_{t_j^k,u_j^k}\big)^*\tilde w$, we get $v_j^k\in N(g(x_j^k);C(t_j^k,u_j^k))$ by employing Proposition~\ref{nor_inve} and then arrive at the following estimates: \begin{align*} \big\|v_j^k-\big(-\dot{\bar{x}}(t_j^k)+f(t_j^k,\bar{x}(t_j^k))\big)\big\|&=\big\|\big(\nabla\psi_{t_j^k,\bar{u}(t_j^k)}\big)^*w-\big(\nabla\psi_{t_j^k,u_j^k}\big)^*\tilde w\big\|\\&\le\big\|\big(\nabla\psi_{t_j^k,\bar{u}(t_j^k)}-\nabla\psi_{t_j^k,u_j^k}\big)^*w\big\|+\big\|\big(\nabla\psi_{t_j^k,u_j^k}\big)^*(w-\tilde w)\big\|\\ &\le\eta M L_{\psi}\|u_j^k-\bar{u}(t_j^k)\|+\tau L_g\|x_j^k-\bar{x}(t_j^k)\|\big(\eta+L_{\psi}\|u_j^k-\bar{u}(t_j^k)\|\big)\\ &\le\Big(\eta M L_{\psi}\tau L_g+\tau L_g\big(\eta+L_{\psi}\tau L_g\|x_j^k-\bar{x}(t_j^k)\|\big)\Big)\|x_j^k-\bar{x}(t_j^k)\|\\ &\le\Big(\eta M L_{\psi}\tau L_g+\tau L_g\eta+L_{\psi}\tau^2L^2_g\|x_j^k-\bar{x}(t_j^k)\|\Big)\|x_j^k-\bar{x}(t_j^k)\|. \end{align*} Denoting further $\alpha:=\eta M L_{\psi}\tau L_g+\tau\eta L_g$ and $\beta:=L_{\psi}\tau^2 L^2_g$, we get from the above that \begin{eqnarray}\label{derivative_est} \big\|v_j^k-\big(-\dot{\bar{x}}(t_j^k)+f(t_j^k,\bar{x}(t_j^k))\big)\big\|\le\big(\alpha+\beta\|x_j^k-\bar{x}(t_j^k)\|\big)\|x_j^k-\bar{x}(t_j^k)\|. \end{eqnarray} Now we are ready to construct the next iterate $x^k_{j+1}$ by \begin{eqnarray*} x^k_{j+1}:=x^k_j+h_k f(t^k_j,x^k_j)-h_k v_j^k \end{eqnarray*} and thus conclude that inclusion \eqref{evo_equa_discrete} holds at the discrete time $j$. It follows from the arguments below that for any $k\in\mathbb{N}$ sufficiently large we always have $\|x^k_{j+1}-\bar{x}(t_{j+1}^k)\|\le 1$. This completes the induction process and gives us therefore a sequence $\{(x^k(\cdot),u^k(\cdot))\}$ defined on the discrete mesh $T_k$ for large $k\in\mathbb{N}$ and satisfying therein the discretized sweeping inclusion \eqref{evo_equa_discrete} with the controlled moving set \eqref{mov-set}.
Next we prove that the piecewise linear extensions $(x^k(t),u^k(t))$, $0\le t\le T$, of the above sequence to the continuous-time interval $[0,T]$ converge to the reference pair $(\bar{x}(\cdot),\bar{u}(\cdot))$ in the norm topology of $W^{1,2}([0,T];\mathbb{R}^n)\times{\cal C}([0,T];\mathbb{R}^m)$. To proceed, fix any $\epsilon>0$ and recall the definition of $\epsilon_j$ in \eqref{ep}. Taking into account that $\bar{x}(\cdot)\in{\cal C}^1([0,T];\mathbb{R}^n)$, we get that $\epsilon_j\le h_k\epsilon$ for all $j=0,\ldots,k-1$ and all $k\in\mathbb{N}$ sufficiently large. This gives us the relationships \begin{align*} \|x_{j+1}^k-\bar{x}(t_{j+1}^k)\|&\le\|x^k_j+h_k f(t^k_j,x^k_j)-h_k v_j^k-\bar{x}(t_{j+1}^k)\|\\ &\le\|x_j^k-\bar{x}(t_j^k)\|+h_k\|f(t^k_j,x^k_j)-f(t^k_j,\bar{x}(t^k_j))\|+h_k\|f(t^k_j,\bar{x}(t^k_j))-v_j^k-\dot{\bar{x}}(t_j^k)\|+\epsilon_j\\ &\le\big(1+(\alpha+L_f)h_k+\beta h_k\|x_j^k-\bar{x}(t_j^k)\|\big)\|x_j^k-\bar{x}(t_j^k)\|+\epsilon_j\\ &\le\big(1+(\alpha+L_f)h_k+\beta h_k\|x_j^k-\bar{x}(t_j^k)\|\big)\|x_j^k-\bar{x}(t_j^k)\|+h_k\epsilon, \end{align*} and therefore we arrive at the following estimate: \begin{eqnarray*} \|x_{j+1}^k-\bar{x}(t_{j+1}^k)\|\le\big(1+(\alpha+L_f)h_k+\beta h_k\|x_j^k-\bar{x}(t_j^k)\|\big)\|x_j^k-\bar{x}(t_j^k)\|+h_k\epsilon. \end{eqnarray*} Define further the quantities $a_j:=\|x_j^k-\bar{x}(t_j^k)\|$ for all $j=0,\ldots,k$ and $k\in\mathbb{N}$. Observe that $a_{j+1}\le(1+(\alpha+L_f) h_k+\beta h_k a_j)a_j+h_k \epsilon$ for $j=0,\ldots,k-1$, and hence the numbers $a_j$ are majorized by solutions of the difference equation \begin{eqnarray*} \frac{a_{j+1}-a_j}{h_k}=(\alpha+L_f)a_j+\beta a_j^2 +\epsilon,\quad j=0,\ldots,k-1. \end{eqnarray*} Denoting by $a_{\epsilon}(\cdot)$ the solution of the differential equation \begin{eqnarray*} \dot a(t)=(\alpha+L_f)a(t)+\beta a^2(t)+\epsilon,\quad t\in[0,T],\quad a(0)=0, \end{eqnarray*} we see that $a_\epsilon(t)\to 0$ uniformly on $[0,T]$ as $\epsilon\downarrow 0$.
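The behavior of the comparison solution $a_\epsilon(\cdot)$ can be observed numerically. The following sketch (illustrative only, with hypothetical values for the constants $\alpha$, $L_f$, $\beta$, $T$, which are not fixed by the text) integrates the comparison ODE by forward Euler and confirms that its maximum on $[0,T]$ shrinks with $\epsilon$:

```python
# Numerical illustration (not part of the proof): the comparison solution
# a_eps of  a' = (alpha + L_f) a + beta a^2 + eps,  a(0) = 0,
# shrinks uniformly on [0, T] as eps -> 0.  The constants below are
# hypothetical placeholders, not values coming from the paper.

def a_eps_max(eps, alpha=1.0, L_f=1.0, beta=0.5, T=1.0, steps=10_000):
    """Forward-Euler integration of the comparison ODE; returns max a(t)."""
    h = T / steps
    a, a_max = 0.0, 0.0
    for _ in range(steps):
        a += h * ((alpha + L_f) * a + beta * a * a + eps)
        a_max = max(a_max, a)
    return a_max

if __name__ == "__main__":
    for eps in (1e-1, 1e-2, 1e-3):
        print(eps, a_eps_max(eps))
```

For small $\epsilon$ the quadratic term is negligible and $a_\epsilon(t)\approx\frac{\epsilon}{\alpha+L_f}\big(e^{(\alpha+L_f)t}-1\big)$, so the maximum decays linearly in $\epsilon$, matching the uniform convergence claimed above.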
It readily implies that \begin{eqnarray*} \max\big\{\|x_j^k-\bar{x}(t_j^k)\|,\;j=0,\ldots,k\big\}\to 0\;\mbox{ as }\;k\to\infty. \end{eqnarray*} This verifies, in particular, that $\max\{\|x_j^k-\bar{x}(t_j^k)\|,\;j=0,\ldots,k\}\le 1$ for all $k\in\mathbb{N}$ sufficiently large, which was needed to complete the induction process. Since we have \begin{eqnarray}\label{est} \|u_j^k-\bar{u}(t_j^k)\|\le\tau L_g\|x_j^k-\bar{x}(t_j^k)\|\;\mbox{ for all }\;j=0,\ldots,k \end{eqnarray} as shown above, the control sequence $\{u^k(t)\}$ converges to $\bar{u}(\cdot)$ strongly in ${\cal C}([0,T];\mathbb{R}^m)$. Let us next justify the strong $W^{1,2}$-convergence of the trajectories $x^k(\cdot)$ to $\bar{x}(\cdot)$ on $[0,T]$. We have \begin{align*} \int_0^T \|\dot x^k(t)-\dot{\bar{x}}(t)\|^2dt&=\sum_{j=0}^{k-1}\int_{t^k_j}^{t^k_{j+1}}\|f(t_j^k,x_j^k)-v_j^k-\dot{\bar{x}}(t)\|^2dt\\ &\le 2\sum_{j=0}^{k-1}h_k\big\|f(t_j^k,x_j^k)-v_j^k-\dot{\bar{x}}(t^k_j)\big\|^2+2\sum_{j=0}^{k-1}\int_{t^k_j}^{t^k_{j+1}}\|\dot{\bar{x}}(t^k_j)-\dot{\bar{x}}(t)\|^2dt, \end{align*} where the last term converges to zero due to $\bar{x}(\cdot)\in{\cal C}^1([0,T];\mathbb{R}^n)$. The first term therein also converges to zero by the following estimates valid for all $j=0,\ldots,k-1$: \begin{align*} \|f(t_j^k,x_j^k)-v_j^k-\dot{\bar{x}}(t^k_j)\|&\le\big\|f\big(t_j^k,\bar{x}(t_j^k)\big)-v_j^k-\dot{\bar{x}}(t^k_j)\big\|+\|f(t_j^k,x_j^k)-f(t_j^k,\bar{x}(t_j^k))\|\\ &\le\big(L_f+\alpha+\beta\|x_j^k-\bar{x}(t_j^k)\|\big)\|x_j^k-\bar{x}(t^k_j)\|, \end{align*} which is due to \eqref{derivative_est}. Thus we get $\int_0^T\|\dot x^k(t)-\dot{\bar{x}}(t)\|^2dt\to 0$ as $k\to\infty$. Since $x_0^k=\bar{x}(0)$, the latter verifies that $\{x^k(\cdot)\}$ strongly converges to $\bar{x}(\cdot)$ in $W^{1,2}([0,T];\mathbb{R}^n)$.
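The mechanism behind this $W^{1,2}$-convergence is elementary: the derivative of a piecewise linear interpolant of a ${\cal C}^1$ curve approaches the curve's derivative in $L^2$ as the mesh refines. A toy check, with the hypothetical sample curve $\bar x(t)=\sin t$ standing in for a sweeping trajectory:

```python
import math

# Toy illustration of the W^{1,2}-convergence of piecewise linear
# extensions: for the sample curve xbar(t) = sin(t) on [0, T], the
# interpolant x^k built on the uniform mesh t_j = j*T/k satisfies
# int_0^T |d/dt x^k - d/dt xbar|^2 dt -> 0 as k -> infinity.
# (The curve is a hypothetical stand-in, not data from the paper.)

def w12_derivative_gap(k, T=1.0, n_quad=20):
    """L^2 distance squared between the interpolant's slope and cos(t)."""
    h = T / k
    total = 0.0
    for j in range(k):
        slope = (math.sin((j + 1) * h) - math.sin(j * h)) / h
        # midpoint-rule quadrature on each mesh interval
        for i in range(n_quad):
            t = j * h + (i + 0.5) * h / n_quad
            total += (slope - math.cos(t)) ** 2 * (h / n_quad)
    return total

if __name__ == "__main__":
    print([round(w12_derivative_gap(k), 8) for k in (10, 40, 160)])
```

The gap decays like $O(h_k^2)$ for smooth curves, which is faster than the proof needs; the argument above only requires convergence to zero.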
To complete the proof of the theorem, it remains to show that if $\bar{u}(\cdot)\in W^{1,2}([0,T];\mathbb{R}^m)$ and the mapping $g(\cdot)$ is linear, then $u^k(\cdot)\to\bar{u}(\cdot)$ strongly in $W^{1,2}([0,T];\mathbb{R}^m)$. Denoting \begin{eqnarray*} A^k_j:=\big(g(x_j^k),g(\bar{x}(t_j^k)),\bar{u}(t_j^k)\big)\;\mbox{ for all }\;j=0,\ldots,k\;\mbox{ and }\;k\in\mathbb{N} \end{eqnarray*} and using the local Lipschitz continuity of $d(\cdot)$ with constant $L_d>0$, we get \begin{align*} &\Big\|\frac{u^k_{j+1}-u_j^k}{h_k}-\frac{\bar{u}(t_{j+1}^k)-\bar{u}(t_j^k)}{h_k}\Big\|=\frac{1}{h_k}\big\|d\big(g(x_{j+1}^k)-g(\bar{x}(t_{j+1}^k)), A^k_{j+1}\big)-d\big(g(x_j^k)-g(\bar{x}(t_j^k)),A^k_j\big)\big\|\\ &\le\frac{1}{h_k}\big\|d\big(g(x_{j+1}^k)-g(\bar{x}(t_{j+1}^k)),A^k_{j+1}\big)-d\big(g(x_j^k)-g(\bar{x}(t_j^k)),A^k_{j+1}\big)\big\|\\ &+\frac{1}{h_k}\big\|d\big(g(x_j^k)-g(\bar{x}(t_j^k)),A^k_{j+1}\big)-d\big(g(x_j^k)-g(\bar{x}(t_j^k)),A^k_j\big)\big\|\\ &\le\frac{L_d L_g}{h_k}\big\|\big(x_{j+1}^k-\bar{x}(t_{j+1}^k)\big)-\big(x_j^k-\bar{x}(t_j^k)\big)\big\|\\ &+M\frac{L_d}{h_k}\big(\|g(x_{j+1}^k)-g(x_j^k)\|+\big\|g\big(\bar{x}(t_{j+1}^k)\big)-g\big(\bar{x}(t_j^k)\big)\big\|+\|\bar{u}(t_{j+1}^k)-\bar{u}(t_j^k)\|\big), \end{align*} where $M>0$ is sufficiently large. This shows that \begin{align*} &\int_0^T\|\dot u^k(t)-\dot{\bar{u}}(t)\|^2dt\le M\int_0^T\|\dot x^k(t)-\dot{\bar{x}}(t)\|^2dt\\ &+M\sum_{j=0}^{k-1}h_k\Big(\|g(x_{j+1}^k)-g(x_j^k)\|^2+\big\|g\big(\bar{x}(t_{j+1}^k)\big)-g\big(\bar{x}(t_j^k)\big)\big\|^2+\|\bar{u}(t_{j+1}^k)-\bar{u}(t_j^k)\|^2\Big) \end{align*} and thus verifies the claimed convergence under the assumptions made. $\triangle$\vspace*{0.05in} The two approximation results established in Theorem~\ref{feasible-approx} allow us to apply the method of discrete approximations to deriving necessary optimality conditions for {\em two types} of local minimizers in problem $(P)$.
The first type treats the trajectory and control components of the optimal pair $(\bar{x}(\cdot),\bar{u}(\cdot))$ in the same way and reduces in fact to the {\em intermediate $W^{1,2}$-minimizers} introduced in \cite{m95} in the general framework of differential inclusions and then studied in \cite{cm1}--\cite{h1} for various controlled sweeping processes. The second type seems to be {\em new in control theory}; it treats control and trajectory components differently and applies to problems $(P)$ whose running costs do not depend on control velocities.\vspace*{-0.05in} \begin{Definition}{\bf(local minimizers for controlled sweeping processes).}\label{ilm} Let the pair $(\bar{x}(\cdot),\bar{u}(\cdot))$ be feasible to problem $(P)$ under the standing assumptions made. {\bf(i)} We say that $(\bar{x}(\cdot),\bar{u}(\cdot))$ is a {\sc local $W^{1,2}\times W^{1,2}$-minimizer} for $(P)$ if $\bar{x}(\cdot)\in W^{1,2}([0,T];\mathbb{R}^n)$, $\bar{u}(\cdot)\in W^{1,2}([0,T];\mathbb{R}^m)$, and \begin{eqnarray}\label{lm1} J[\bar{x},\bar{u}]\le J[x,u]\;\mbox{ for all }\;x(\cdot)\in W^{1,2}([0,T];\mathbb{R}^n)\;\mbox{ and }\;u(\cdot)\in W^{1,2}([0,T];\mathbb{R}^m) \end{eqnarray} sufficiently close to $(\bar{x}(\cdot),\bar{u}(\cdot))$ in the norm topology of the corresponding spaces in \eqref{lm1}. {\bf(ii)} Assume that the running cost $\ell(\cdot)$ in \eqref{eq:MP} does not depend on $\dot u$. We say that the pair $(\bar{x}(\cdot),\bar{u}(\cdot))$ is a {\sc local $W^{1,2}\times{\cal C}$-minimizer} for $(P)$ if $\bar{x}(\cdot)\in W^{1,2}([0,T];\mathbb{R}^n)$, $\bar{u}(\cdot)\in{\cal C}([0,T];\mathbb{R}^m)$, and \begin{eqnarray}\label{lm2} J[\bar{x},\bar{u}]\le J[x,u]\;\mbox{ for all }\;x(\cdot)\in W^{1,2}([0,T];\mathbb{R}^n)\;\mbox{ and }\;u(\cdot)\in{\cal C}([0,T];\mathbb{R}^m) \end{eqnarray} sufficiently close to $(\bar{x}(\cdot),\bar{u}(\cdot))$ in the norm topology of the corresponding spaces in \eqref{lm2}.
\end{Definition}\vspace*{-0.05in} Our main attention in what follows is paid to deriving {\em necessary optimality conditions} for both types of local minimizers in Definition~\ref{ilm} by developing appropriate versions of the method of discrete approximations. It is clear that any local $W^{1,2}\times{\cal C}$-minimizer for $(P)$ is also a local $W^{1,2}\times W^{1,2}$-minimizer for this problem, provided that we restrict the class of feasible controls to $W^{1,2}$-functions. Thus necessary optimality conditions for local $W^{1,2}\times W^{1,2}$-minimizers are also necessary for local $W^{1,2}\times{\cal C}$-ones in this framework, while not vice versa. On the other hand, we may deal with local $W^{1,2}\times{\cal C}$-minimizers without imposing anything but the continuity assumption on feasible controls, provided that the running cost in \eqref{eq:MP} does not depend on control velocities. Note furthermore that considering a $W^{1,2}$-neighborhood of the trajectory part $\bar{x}(\cdot)$ in both settings of Definition~\ref{ilm} leads us to potentially more selective necessary optimality conditions for such minimizers than for conventional strong local minimizers and global solutions to $(P)$. It has been well recognized in the calculus of variations and optimal control, starting with the pioneering studies by Bogolyubov and Young, that limiting procedures dealing with continuous-time dynamical systems involving time derivatives require a certain {\em relaxation stability}, which means that the value of cost functionals does not change under the convexification of the dynamics and running cost with respect to velocity variables; see, e.g., \cite{mord,vinter} for more details and references. In sweeping control theory, such issues have been investigated in \cite{et,t} for controlled sweeping processes somewhat different from $(P)$.
To consider an appropriate relaxation of our problem $(P)$, denote \begin{eqnarray}\label{F} F=F(t,x,u):=f(t,x)-N\big(g(x);C(t,u)\big) \end{eqnarray} and formulate the {\em relaxed optimal control problem} $(R)$ as a counterpart of $(P)$ with the replacement of the cost functional \eqref{eq:MP} by the convexified one \begin{eqnarray*}\label{R} {\rm minimize}\;\hat J[x,u]:=\varphi\big(x(T)\big)+\int_0^T\hat\ell_F\big(t,x(t),u(t),\dot x(t),\dot u(t)\big)dt, \end{eqnarray*} where $\hat\ell_F(t,x,u,\cdot,\cdot)$ is defined as the largest l.s.c.\ convex function majorized by $\ell(t,x,u,\cdot,\cdot)$ on the convex closure of the set $F$ in \eqref{F} with $\hat\ell_F:=\infty$ otherwise. Then we say that the pair $(\bar{x}(\cdot),\bar{u}(\cdot))$ is a {\em relaxed local $W^{1,2}\times W^{1,2}$-minimizer} for $(P)$ if in addition to the conditions of Definition~\ref{ilm}(i) we have $J[\bar{x},\bar{u}]=\hat J[\bar{x},\bar{u}]$. Similarly we define a {\em relaxed local $W^{1,2}\times{\cal C}$-minimizer} for $(P)$ in the setting of Definition~\ref{ilm}(ii). Note that, in contrast to the original problem $(P)$, the convexified structure of the relaxed problem $(R)$ provides an opportunity to establish the {\em existence} of global optimal solutions in the prescribed classes of controls and trajectories. It is not a goal of this paper, but we refer the reader to \cite[Theorem~4.1]{cm3} and \cite[Theorem~4.2]{t} for some particular settings of controlled sweeping processes in the classes of $W^{1,2}\times W^{1,2}$ and $W^{1,2}\times{\cal C}$ feasible pairs $(\bar{x}(\cdot),\bar{u}(\cdot))$, respectively. There is clearly no difference between the problems $(P)$ and $(R)$ if the normal cone in \eqref{F} is convex and the integrand $\ell$ in \eqref{eq:MP} is convex with respect to velocity variables.
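In convex-analytic terms, the convexified integrand admits a standard representation (a classical fact of convex analysis, not stated in the text, valid whenever the restricted integrand has an affine minorant): restricting $\ell$ to the velocity set and taking the Legendre--Fenchel biconjugate in the velocity variables recovers the largest l.s.c.\ convex minorant,
```latex
% Sketch of the biconjugate description of the convexified integrand:
% restrict \ell to velocities w feasible for the sweeping dynamics and
% take the second conjugate in the velocity variables (w,v).
\[
  \hat\ell_F(t,x,u,\cdot,\cdot)=\bigl(\ell_F(t,x,u,\cdot,\cdot)\bigr)^{**},
  \qquad
  \ell_F(t,x,u,w,v):=
  \begin{cases}
    \ell(t,x,u,w,v) & \text{if } w\in F(t,x,u),\\
    \infty & \text{otherwise}.
  \end{cases}
\]
```
This reformulation is only a convenient description of $\hat\ell_F$; the paper's arguments use the convexified problem $(R)$ directly.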
On the other hand, the measure continuity/nonatomicity on $[0,T]$ and the differential inclusion structure of the sweeping process \eqref{evo_equa} create the environment where any local minimizer of the types under consideration is also a relaxed one. Without delving into details here, we just mention the possibility of deriving such a {\em local relaxation stability} from \cite[Theorem~4.2]{t} for {\em strong} local (in the ${\cal C}$-norm) minimizers of $(P)$, provided that the controlled moving set $C(t,u)$ in \eqref{mov-set} is convex and continuously depends on its variables.\vspace*{0.02in} Given now a relaxed local minimizer $(\bar{x}(\cdot),\bar{u}(\cdot))$ of the types introduced in Definition~\ref{ilm}, we construct appropriate sequences of discrete-time optimal control problems corresponding to each type therein separately. For {\em brevity and simplicity}, from now on we restrict ourselves to the setting of $(P)$ where $g(x):=x$ and $f:=0$ while $\psi$ and $\ell$ do not depend on $t$. The reader can easily check that the procedure developed below is applicable to the general version of $(P)$ under the standing assumptions made.
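In this simplified setting the discretized dynamics reduce to the classical catching-up scheme of Moreau: each step projects the previous state onto the current moving set, which realizes the discrete normal-cone inclusion. A minimal one-dimensional sketch (not the paper's construction; the moving set $C(u)=[u-r,u+r]$ and the driving control are hypothetical choices for illustration):

```python
# A minimal numerical sketch (not the paper's construction) of Moreau's
# catching-up scheme for the simplified sweeping dynamics
#   xdot(t) in -N(x(t); C(u(t))),  x(0) = x0,
# with the hypothetical moving set C(u) = [u - r, u + r] in R^1 driven
# by a given control u(t).  Each step projects the previous state onto
# the current set, which realizes the discrete normal-cone inclusion.

def catching_up(x0, u, r, T, k):
    """Return the discrete states x_0, ..., x_k on the uniform mesh."""
    h = T / k
    xs = [x0]
    for j in range(1, k + 1):
        lo, hi = u(j * h) - r, u(j * h) + r
        xs.append(min(max(xs[-1], lo), hi))  # projection onto [lo, hi]
    return xs

if __name__ == "__main__":
    # the set [u(t)-1, u(t)+1] moves right with speed 2 and drags the state
    xs = catching_up(x0=0.0, u=lambda t: 2.0 * t, r=1.0, T=2.0, k=200)
    print(xs[-1])
```

Here the state rests until the left endpoint of the moving interval reaches it and is then dragged along at the set's speed, the typical rate-independent behavior of sweeping trajectories.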
If the pair $(\bar{x}(\cdot),\bar{u}(\cdot))$ is a relaxed local {\em $W^{1,2}\times W^{1,2}$-minimizer} of $(P)$, we fix $\varepsilon>0$ sufficiently small to accommodate the $W^{1,2}\times W^{1,2}$-neighborhood of $(\bar{x}(\cdot),\bar{u}(\cdot))$ in Definition~\ref{ilm}(i) and for each $k\in\mathbb{N}$ define the approximation problem $(P^1_k)$ as follows: \begin{eqnarray*}\label{cost-pk} \begin{array}{ll} \mbox{minimize}\;\; J_k[z^k]:=&\displaystyle\varphi(x_k^k)+\displaystyle{h_k}\sum\limits_{j=0}^{k-1}\displaystyle{\ell\Big(x_j^k,u_j^k,\frac{x_{j+1}^k-x_j^k}{h_k}, \displaystyle\frac{u_{j+1}^k-u_j^k}{h_k}\Big)}\\ &+h_k\displaystyle\sum\limits_{j=0}^{k-1}\int\limits_{{t^k_j}}^{{t^k_{j+1}}}\displaystyle\Big(\Big\|\frac{x_{j+1}^k-x_j^k}{h_k}-\dot{\bar{x}}(t)\Big\|^2+ \Big\|\displaystyle\frac{u_{j+1}^k-u_j^k}{h_k}-\dot{\bar{u}}(t)\Big\|^2\Big)\,dt \end{array} \end{eqnarray*} over collections $z^k:=(x_0^k,\ldots,x_k^k,u_0^k,\ldots,u_k^k)$ subject to the constraints \begin{eqnarray}\label{discrete-inclusion} x_{j+1}^k\in x_j^k+h_k F(x_j^k,u_j^k)\;\mbox{ for }\;j=0,\ldots,k-1\;\mbox{ with }\;\big(x^k_0,u^k_0\big)=\big(x_0,\bar{u}(0)\big), \end{eqnarray} \begin{eqnarray}\label{state-constraint} (x_k^k,u^k_k)\in\psi^{-1}(\Theta), \end{eqnarray} \begin{eqnarray}\label{nei} \big\|(x^k_j,u^k_j)-\big(\bar{x}(t^k_j),\bar{u}(t^k_j)\big)\big\|\le\varepsilon/2\;\mbox{ for }\;j=0,\ldots,k, \end{eqnarray} \begin{eqnarray*}\label{neighborhood1} \sum\limits_{j=0}^{k-1}\int\limits_{{t^k_j}}^{{t^k_{j+1}}}\displaystyle\Big(\Big\|\frac{x_{j+1}^k-x_j^k}{h_k}\displaystyle- \dot{\bar{x}}(t)\Big\|^2+\Big\|\displaystyle\frac{u_{j+1}^k-u_j^k}{h_k}-\dot{\bar{u}}(t)\Big\|^2\Big)\,dt\le\frac{\varepsilon}{2}.
\end{eqnarray*} If the pair $(\bar{x}(\cdot),\bar{u}(\cdot))$ is a relaxed local {\em $W^{1,2}\times{\cal C}$-minimizer} of $(P)$, fix $\varepsilon>0$ sufficiently small to accommodate the $W^{1,2}\times{\cal C}$-neighborhood of $(\bar{x}(\cdot),\bar{u}(\cdot))$ in Definition~\ref{ilm}(ii) and for each $k\in\mathbb{N}$ define the approximation problem $(P^2_k)$ in the following way corresponding to \eqref{lm2}: \begin{eqnarray*}\label{cost-pk1} \begin{array}{ll} \mbox{minimize}\;\; J_k[z^k]:=&\displaystyle\varphi(x_k^k)+\displaystyle{h_k}\sum\limits_{j=0}^{k-1}\displaystyle{\ell\Big(x_j^k,u_j^k,\frac{x_{j+1}^k-x_j^k}{h_k}\Big)}\\ &+\displaystyle\sum\limits_{j=0}^{k}\big\|u_j^k-\bar{u}(t_j^k)\big\|^2+\displaystyle\sum\limits_{j=0}^{k-1}\int\limits_{{t^k_j}}^{{t^k_{j+1}}}\displaystyle\Big\|\frac{x_{j+1}^k-x_j^k}{h_k} -\dot{\bar{x}}(t)\Big\|^2dt \end{array} \end{eqnarray*} over $z^k=(x_0^k,\ldots,x_k^k,u_0^k,\ldots,u_k^k)$ subject to the constraints in \eqref{discrete-inclusion}--\eqref{nei} and \begin{eqnarray*}\label{neighborhood2} \sum\limits_{j=0}^{k-1}\int\limits_{{t^k_j}}^{{t^k_{j+1}}}\displaystyle\Big\|\frac{x_{j+1}^k-x_j^k}{h_k}\displaystyle- \dot{\bar{x}}(t)\Big\|^2 dt\le\frac{\varepsilon}{2}. \end{eqnarray*} To proceed further with the method of discrete approximations, we need to make sure that the approximating problems $(P^i_k)$, $i=1,2$, admit optimal solutions. This is indeed the case due to Theorem~\ref{feasible-approx} and the {\em robustness} (closed-graph property) of our basic normal cone \eqref{nc}.\vspace*{-0.05in} \begin{Proposition}{\bf (existence of discrete optimal solutions).}\label{ex-disc} Under the imposed standing assumptions {\rm(H1)--(H5)}, each problem $(P^i_k)$, $i=1,2$, has an optimal solution for all $k\in\mathbb{N}$ sufficiently large.
\end{Proposition}\vspace*{-0.05in} {\bf Proof.} It follows from Theorem~\ref{feasible-approx} and the constructions of $(P^1_k)$, $(P^2_k)$ that the set of feasible solutions to each of these problems is nonempty for all large $k\in\mathbb{N}$. Applying the classical Weierstrass existence theorem, observe that the boundedness of the feasible sets follows directly from the constraint structures in $(P^1_k)$ and $(P^2_k)$. The remaining closedness of the feasible sets for these problems is a consequence of the robustness property of the normal cone \eqref{nc} that determines the discrete inclusions \eqref{discrete-inclusion}. $\triangle$\vspace*{0.05in} The next theorem establishes the strong convergence in the corresponding spaces of extended discrete optimal solutions for discrete approximation problems to the given relaxed local minimizers for $(P)$.\vspace*{-0.05in} \begin{Theorem}{\bf(strong convergence of discrete optimal solutions).}\label{conver} In addition to the standing assumptions {\rm(H1)--(H5)}, suppose that the cost functions $\varphi$ and $\ell$ are continuous around the given local minimizer. The following assertions hold: {\bf(i)} If $(\bar{x}(\cdot),\bar{u}(\cdot))$ is a relaxed local $W^{1,2}\times W^{1,2}$-minimizer for $(P)$, then any sequence of piecewise linear extensions on $[0,T]$ of the optimal solutions $(\bar{x}^k(\cdot),\bar{u}^k(\cdot))$ to $(P^1_k)$ converges to $(\bar{x}(\cdot),\bar{u}(\cdot))$ in the norm topology of $W^{1,2}([0,T];\mathbb{R}^n)\times W^{1,2}([0,T];\mathbb{R}^m)$ as $k\to\infty$. {\bf(ii)} If $(\bar{x}(\cdot),\bar{u}(\cdot))$ is a relaxed local $W^{1,2}\times{\cal C}$-minimizer for $(P)$, then any sequence of piecewise linear extensions on $[0,T]$ of the optimal solutions $(\bar{x}^k(\cdot),\bar{u}^k(\cdot))$ to $(P^2_k)$ converges to $(\bar{x}(\cdot),\bar{u}(\cdot))$ in the norm topology of $W^{1,2}([0,T];\mathbb{R}^n)\times{\cal C}([0,T];\mathbb{R}^m)$ as $k\to\infty$.
\end{Theorem}\vspace*{-0.05in} {\bf Proof.} To verify assertion (i), we proceed similarly to the proof of \cite[Theorem~3.4]{h1} with the usage of the normal cone robustness for \eqref{nc} instead of Attouch's theorem in the convex setting of \cite{h1}. The proof of assertion (ii) goes along the same lines, observing that the required ${\cal C}$-compactness of the control sequence follows from the control-state relationship of type \eqref{est} valid due to assumption (H4). $\triangle$\vspace*{-0.1in} \section{Extended Euler-Lagrange Conditions for Sweeping Solutions}\label{optimality-conditions} \setcounter{equation}{0} Having the strong convergence results of Theorem~\ref{conver} as the quintessence of the discrete approximation well-posedness justified in Section~\ref{discrete-approximations}, we now proceed first with deriving necessary optimality conditions in both discrete problems $(P^1_k)$ and $(P^2_k)$ for each $k\in\mathbb{N}$ and then with the subsequent passage to the limit therein as $k\to\infty$. In this way we arrive at necessary optimality conditions for relaxed local minimizers in $(P)$ of both $W^{1,2}\times W^{1,2}$ and $W^{1,2}\times{\cal C}$ types. Observe that for each fixed $k\in\mathbb{N}$ both problems $(P^1_k)$ and $(P^2_k)$ belong to the class of finite-dimensional mathematical programs with nonstandard geometric constraints \eqref{discrete-inclusion} and \eqref{state-constraint}. We can handle them by employing appropriate tools of variational analysis that revolve around the normal cone \eqref{nc}.\vspace*{-0.05in} \begin{Theorem}{\bf(necessary optimality conditions for $(P^1_k)$).}\label{discrete} Fix $k\in\mathbb{N}$ and consider an optimal solution $\bar{z}^k:=(x_0,\bar{x}^k_1,\ldots,\bar{x}_k^k,\bar{u}_0^k,\ldots,\bar{u}_{k}^{k})$ to problem $(P^1_k)$, where $F$ may be a general closed-graph mapping.
Suppose that the cost functions $\varphi$ and $\ell$ are locally Lipschitzian around the corresponding components of the optimal solution and denote the quantities \begin{eqnarray}\label{th} \Big(\theta_{j}^{xk},\theta_{j}^{uk}\Big):=2\int\limits_{{t_j^k}}^{{t_{j+1}^k}}\Big(\frac{\bar{x}_{j+1}^k-\bar{x}_j^k}{h_k} \displaystyle-\dot{\bar{x}}(t),\frac{\bar{u}_{j+1}^k-\bar{u}_j^k}{h_k}-\dot{\bar{u}}(t)\Big)dt,\quad j=0,\ldots,k-1. \end{eqnarray} Then there exist dual elements $\lambda^k\ge 0$, $p_j^k=(p^{xk}_j,p^{uk}_j)\in\mathbb{R}^n\times\mathbb{R}^m$ as $j=0,\ldots,k$ and subgradient vectors \begin{eqnarray}\label{sublneighborhood1} \big(w^{xk}_{j},w^{uk}_{j},v^{xk}_{j},v^{uk}_{j}\big)\in\partial\ell\left(\bar{x}^k_j,\bar{u}^k_j,\frac{\bar{x}_{j+1}^k-\bar{x}_j^k}{h_k},\frac{\bar{u}_{j+1}^k-\bar{u}_j^k}{h_k}\right),\quad j=0,\ldots,k-1, \end{eqnarray} such that the following conditions are satisfied: \begin{eqnarray}\label{nontriv} \lambda^k+\sum_{j=0}^{k-1}\|p^{xk}_j\|+\|p^{uk}_0\|+\|p^{xk}_k\|+\|p^{uk}_k\|\ne 0, \end{eqnarray} \begin{eqnarray}\label{transversality_end_d} -(p^{xk}_k,p^{uk}_k)\in\lambda^k\big(\partial\varphi(\bar{x}_k^k),0\big)+N\big((\bar{x}^k_k,\bar{u}^k_k);\psi^{-1}(\Theta)\big), \end{eqnarray} \begin{eqnarray}\label{psiu} p^{uk}_{j+1}=\lambda^k(v^{uk}_j+h_k^{-1}\theta^{uk}_j),\;j=0,\ldots,k-1, \end{eqnarray} \begin{eqnarray}\label{euler1} \begin{split} &\left(\frac{p_{j+1}^{xk}-p_j^{xk}}{h_k}-\lambda^k w_j^{xk},\frac{p_{j+1}^{uk}-p_j^{uk}}{h_k}- \lambda^k w_j^{uk},p_{j+1}^{xk}-\lambda^k\Big(v_j^{xk}+\frac{1}{h_k}\theta_j^{xk}\Big)\right)\\ &\qquad\in N\Big(\Big(\bar{x}_j^k,\bar{u}_j^k,\frac{\bar{x}_{j+1}^k-\bar{x}_j^k}{h_k}\Big);\mbox{\rm gph}\, F\Big),\quad j=0,\ldots,k-1.
\end{split} \end{eqnarray} \end{Theorem}\vspace*{-0.05in} {\bf Proof.} It follows the lines in the proof of \cite[Theorem~5.1]{h1} by reducing $(P^1_k)$ to a problem of mathematical programming. The usage of necessary optimality conditions for such problems and calculus rules of generalized differentiation for the basic constructions \eqref{nc} and \eqref{1sub} available in the books \cite{mordukhovich,mord,rw} allows us to arrive at \eqref{sublneighborhood1}--\eqref{euler1} due to the particular structure of the data in $(P^1_k)$. $\triangle$\vspace*{0.05in} The same approach works for verifying the necessary optimality conditions for problem $(P^2_k)$ presented in the next theorem, which also takes into account the specific structure of this problem.\vspace*{-0.07in} \begin{Theorem}{\bf(necessary optimality conditions for $(P^2_k)$).}\label{discrete1} Let $\bar{z}^k:=(x_0,\bar{x}^k_1,\ldots,\bar{x}_k^k,\bar{u}_0^k,\ldots,\bar{u}_{k}^{k})$ be an optimal solution to problem $(P^2_k)$ in the framework of Theorem~{\rm\ref{discrete}}. Consider the quantities \begin{eqnarray*} \theta_{j}^{xk}:=2\int\limits_{{t_j^k}}^{{t_{j+1}^k}}\Big(\frac{\bar{x}_{j+1}^k-\bar{x}_j^k}{h_k} \displaystyle-\dot{\bar{x}}(t)\Big)dt,\quad\theta_{j}^{uk}:=2\big(\bar{u}_j^k-\bar{u}(t_j^k)\big),\quad j=0,\ldots,k.
\end{eqnarray*} Then there exist dual elements $\lambda^k\ge 0$, $p_j^k=(p^{xk}_j,p^{uk}_j)\in\mathbb{R}^n\times\mathbb{R}^m$ as $j=0,\ldots,k$ and subgradient vectors \begin{eqnarray*} \big(w^{xk}_{j},w^{uk}_{j},v^{xk}_{j}\big)\in\partial\ell\left(\bar{x}^k_j,\bar{u}^k_j,\frac{\bar{x}_{j+1}^k-\bar{x}_j^k}{h_k}\right),\quad j=0,\ldots,k-1, \end{eqnarray*} satisfying the following necessary optimality conditions: \begin{eqnarray*} \lambda^k+\sum_{j=0}^{k-1}\|p^{xk}_j\|+\|p^{uk}_0\|+\|p^{xk}_k\|\ne 0, \end{eqnarray*} \begin{eqnarray*} -(p^{xk}_k,0)\in\lambda^k\big(\partial\varphi(\bar{x}_k^k),0\big)+N\big((\bar{x}^k_k,\bar{u}^k_k);\psi^{-1}(\Theta)\big), \end{eqnarray*} \begin{eqnarray*} p^{uk}_{j+1}=0,\quad j=0,\ldots,k-1, \end{eqnarray*} \begin{eqnarray*} \begin{split} &\left(\frac{p_{j+1}^{xk}-p_j^{xk}}{h_k}-\lambda^k w_j^{xk},\frac{p_{j+1}^{uk}-p_j^{uk}}{h_k}- \lambda^k (w_j^{uk}+\theta_{j}^{uk}),p_{j+1}^{xk}-\lambda^k\Big(v_j^{xk}+\frac{1}{h_k}\theta_j^{xk}\Big)\right)\\ &\qquad\in N\Big(\Big(\bar{x}_j^k,\bar{u}_j^k,\frac{\bar{x}_{j+1}^k-\bar{x}_j^k}{h_k}\Big);\mbox{\rm gph}\, F\Big),\quad j=0,\ldots,k-1. \end{split} \end{eqnarray*} \end{Theorem}\vspace*{-0.05in} Now we are ready to derive necessary optimality conditions for both types of (relaxed) local minimizers for $(P)$ from Definition~\ref{ilm} by passing to the limit from those in Theorems~\ref{discrete} and \ref{discrete1} with taking into account the convergence results from Section~\ref{discrete-approximations} and the calculus results of generalized differentiation presented in Section~\ref{prelim}. The reader can see that the obtained optimality conditions for both types of local minimizers are pretty similar under the imposed assumptions. This is largely due to the achieved discrete approximation convergence in Theorem~\ref{conver} and the structures of the discretized problems.
Necessary optimality conditions for relaxed local $W^{1,2}\times W^{1,2}$-minimizers of $(P)$ were derived in \cite[Theorem~6.1]{h1} for polyhedral moving sets \eqref{sw-con2} under significantly more restrictive assumptions, which basically cover the case of $(\bar{x}(\cdot),\bar{u}(\cdot))\in W^{2,\infty}([0,T];\mathbb{R}^{n+m})$. Note that the {\em linear independence constraint qualification} (LICQ) condition on generating polyhedral vectors imposed therein is a counterpart of our surjectivity assumption (H3) in the polyhedral setting of \cite{h1}.\vspace*{-0.05in} \begin{Theorem}{\bf(necessary optimality conditions for the controlled sweeping process).}\label{necopt} Let $(\bar{x}(\cdot),\bar{u}(\cdot))$ be a local minimizer for problem $(P)$ of the types specified below. In addition to the standing assumptions, suppose that $\psi=\psi(x,u)$ is ${\cal C}^2$-smooth with respect to both variables while $\varphi$ and $\ell$ are locally Lipschitzian around the corresponding components of the optimal solution. The following assertions hold: {\bf(i)} If $(\bar{x}(\cdot),\bar{u}(\cdot))$ is a relaxed local $W^{1,2}\times W^{1,2}$-minimizer, then there exist a multiplier $\lambda\ge 0$, an adjoint arc $p(\cdot)=(p^x,p^u)\in W^{1,2}([0,T];\mathbb{R}^n\times\mathbb{R}^m)$, a signed vector measure $\gamma\in C^*([0,T];\mathbb{R}^s)$, as well as pairs $(w^x(\cdot),w^u(\cdot))\in L^2([0,T];\mathbb{R}^n\times\mathbb{R}^m)$ and $(v^x(\cdot),v^u(\cdot))\in L^\infty([0,T];\mathbb{R}^n\times\mathbb{R}^m)$ with \begin{eqnarray}\label{co} \big(w^x(t),w^u(t),v^x(t),v^u(t)\big)\in{\rm co}\,\partial\ell\big(\bar{x}(t),\bar{u}(t),\dot{\bar{x}}(t),\dot{\bar{u}}(t)\big)\;\mbox{ a.e.
}\;t\in[0,T] \end{eqnarray} satisfying the collection of necessary optimality conditions: $\bullet$ {\sc Primal-dual dynamic relationships}: \begin{eqnarray}\label{hamiltonx} \dot p(t)=\lambda w(t)+\left[\begin{array}{c} \nabla_{xx}^{2}\big\langle\eta(t),\psi\big\rangle\big(\bar{x}(t),\bar{u}(t)\big)\\ \nabla_{xw}^{2}\big\langle\eta(t),\psi\big\rangle\big(\bar{x}(t),\bar{u}(t)\big) \end{array}\right]\big(-\lambda v^x(t)+q^x(t)\big)\;\mbox{ a.e. }\;t\in[0,T], \end{eqnarray} \begin{eqnarray}\label{hamilton2ub} q^u(t)=\lambda v^u(t)\;\mbox{ a.e. }\;t\in[0,T], \end{eqnarray} where $\eta(\cdot)\in L^{2}([0,T];\mathbb{R}^s)$ is a uniquely defined vector function determined by the representation \begin{eqnarray}\label{etajkl} \dot{\bar{x}}(t)=-\nabla_x\psi\big(\bar{x}(t),\bar{u}(t)\big)^*\eta(t)\;\mbox{ a.e. }\;t\in[0,T] \end{eqnarray} with $\eta(t)\in N(\psi(\bar{x}(t),\bar{u}(t));\Theta)$, and where $q\colon[0,T]\to\mathbb{R}^n\times\mathbb{R}^m$ is a function of bounded variation on $[0,T]$ with its left-continuous representative given, for all $t\in[0,T]$ except at most a countable subset, by \begin{eqnarray}\label{q} q(t)=p(t)-\int_{[t,T]}\nabla\psi\big(\bar{x}(\tau),\bar{u}(\tau)\big)^*d\gamma(\tau). \end{eqnarray} $\bullet$ {\sc Measured coderivative condition}: Considering the $t$-dependent outer limit \begin{eqnarray*} \mathop{{\rm Lim}\,{\rm sup}}_{|B|\to 0}\frac{\gamma(B)}{|B|}(t):=\Big\{y\in\mathbb{R}^s\Big|\;\exists\,\mbox{ sequence }\;B_k\subset[0,T]\;\mbox{ with }\;t\in B_k,\;|B_k|\to 0,\;\frac{\gamma(B_k)}{|B_k|}\to y\Big\} \end{eqnarray*} over Borel subsets $B\subset[0,T]$ with the Lebesgue measure $|B|$, for a.e.\ $t\in[0,T]$ we have \begin{eqnarray}\label{maximumcondition} D^*N_\Theta\big(\psi(\bar{x}(t),\bar{u}(t)),\eta(t)\big)\big(\nabla_x\psi(\bar{x}(t),\bar{u}(t))(q^x(t)-\lambda v^x(t))\big)\cap\mathop{{\rm Lim}\,{\rm sup}}_{|B|\to 0}\frac{\gamma(B)}{|B|}(t)\ne\emptyset.
\end{eqnarray} $\bullet$ {\sc Transversality condition} at the right endpoint: \begin{eqnarray}\label{pxkkc} -\big(p^x(T),p^u(T)\big)\in\lambda\big(\partial\varphi(\bar{x}(T)),0\big)+\nabla\psi\big(\bar{x}(T),\bar{u}(T)\big)^*N_\Theta\big(\psi(\bar{x}(T),\bar{u}(T))\big). \end{eqnarray} $\bullet$ {\sc Measure nonatomicity condition}: Whenever $t\in[0,T)$ with $\psi(\bar{x}(t),\bar{u}(t))\in{\rm int}\,\Theta$ there is a neighborhood $V_t$ of $t$ in $[0,T]$ such that $\gamma(V)=0$ for any Borel subset $V$ of $V_t$. $\bullet$ {\sc Nontriviality condition}: \begin{eqnarray}\label{nontriv2} \lambda+\sup_{t \in[0,T]}\|p(t)\|+\|\gamma\|\ne 0\;\mbox{ with }\;\|\gamma\|:=\sup_{\|x\|_{{\cal C}([0,T])}=1}\int_{[0,T]}x(s)d\gamma. \end{eqnarray} {\bf(ii)} If $(\bar{x}(\cdot),\bar{u}(\cdot))$ is a relaxed local $W^{1,2}\times{\cal C}$-minimizer, then all the conditions \eqref{hamiltonx}--\eqref{nontriv2} in {\rm(i)} hold with the replacement of the quadruple $(w^x(\cdot),w^u(\cdot),v^x(\cdot),v^u(\cdot))$ in \eqref{co} by the triple $(w^x(\cdot),w^u(\cdot),v^x(\cdot))\in L^2([0,T];\mathbb{R}^n)\times L^2([0,T];\mathbb{R}^m)\times L^\infty([0,T];\mathbb{R}^n)$ satisfying the inclusion \begin{eqnarray*} \big(w^x(t),w^u(t),v^x(t)\big)\in{\rm co}\,\partial\ell\big(\bar{x}(t),\bar{u}(t),\dot{\bar{x}}(t)\big)\;\mbox{ a.e. }\;t\in[0,T]. \end{eqnarray*} \end{Theorem}\vspace*{-0.05in} {\bf Proof.} We give it only for assertion (i), since the proof of (ii) is similar with taking into account the type of convergence $\bar{u}^k(\cdot)\to\bar{u}(\cdot)$ achieved in Theorem~\ref{conver}(ii) and the fact that the running cost $\ell$ in Definition~\ref{ilm}(ii) does not depend on the control velocity $\dot{u}$.
To verify assertion (i), deduce first from \eqref{euler1} and Proposition~\ref{morout} that for each $k\in\mathbb{N}$ and $j=0,\ldots,k-1$ there is a unique vector $\eta_j^k\in N_{\Theta}(\psi(\bar{x}_j^k,\bar{u}_j^k))$ satisfying the conditions
\begin{eqnarray*}
\nabla_{x}\psi(\bar{x}_j^k,\bar{u}_j^k)^*\eta_j^k=-\frac{\bar{x}_{j+1}^k-\bar{x}_j^k}{h_k},
\end{eqnarray*}
\begin{eqnarray}\label{p_iteration}
\frac{p_{j+1}^{xk}-p_j^{xk}}{h_k}-\lambda^k w_j^{k}=\left[\begin{array}{c} \nabla_{xx}^{2}\langle\eta_j^k,\psi\rangle(\bar{x}_j^k,\bar{u}_j^k)\\ \nabla_{xw}^{2}\langle\eta_j^k,\psi\rangle(\bar{x}_j^k,\bar{u}_j^k) \end{array}\right]u+\nabla\psi(\bar{x}_j^k,\bar{u}_j^k)^*\gamma_j^k
\end{eqnarray}
with $u:=p_{j+1}^{xk}-\lambda^k\Big(v_j^{xk}+\frac{1}{h_k}\theta_j^{xk}\Big)$ and some vectors
\begin{eqnarray}\label{gamma_dis}
\gamma_j^k\in D^*N_{\Theta}\big(\psi(\bar{x}_j^k,\bar{u}_j^k),\eta_j^k\big)\Big(\nabla_{x}\psi(\bar{x}_j^k,\bar{u}_j^k)\Big(p_{j+1}^{xk}-\lambda^k\Big(v_j^{xk}+\frac{1}{h_k}\theta_j^{xk}\Big)\Big)\Big).
\end{eqnarray}
Taking this into account, we get from \eqref{nontriv} the improved nontriviality condition
\begin{eqnarray}\label{nontriv1}
\lambda^k+\|p^{uk}_0\|+\|p^{xk}_k\|+\|p^{uk}_k\|\ne 0\;\mbox{ for all }\;k\in\mathbb{N}
\end{eqnarray}
with the validity of \eqref{psiu} as well as $\lambda^k\ge 0$ and the relationships in \eqref{th} and \eqref{sublneighborhood1} of Theorem~\ref{discrete}. Now we proceed with passing to the limit as $k\to\infty$ in the obtained optimality conditions for discrete approximations. Since some arguments in this procedure are similar to those used in \cite[Theorem~6.1]{h1} in a more special setting, we skip them for brevity while focusing on the significantly new developments.

In particular, the existence of the claimed quadruple $(w^x(\cdot),w^u(\cdot),v^x(\cdot),v^u(\cdot))$ satisfying \eqref{co} is proved as in \cite{h2}, while the existence of the uniquely defined $\eta(\cdot)\in L^2([0,T];\mathbb{R}^s)$ solving the differential equation \eqref{etajkl} follows from representation \eqref{unique_re} by repeating the limiting procedure of \cite[Theorem~6.1]{h1}. Next we define $q^k(\cdot)=(q^{xk}(\cdot),q^{uk}(\cdot))$ by extending $p_j^k$ piecewise linearly to $[0,T]$ with $q^{k}(t^k_j):=p^{k}_j$ for $j=0,\ldots,k$. Construct $\gamma^k(\cdot)$ on $[0,T]$ by
\begin{eqnarray*}
\gamma^k(t):=\gamma^k_j\;\mbox{ for }\;t\in[t^k_j,t^k_{j+1}),\quad j=0,\ldots,k-1,
\end{eqnarray*}
with $\gamma^k(T):=0$ and consider the auxiliary functions
\begin{eqnarray}\label{vt}
\vartheta^{k}(t):=\max\big\{t_{j}^{k}\big|\;t_{j}^{k}\le t,\;0\le j\le k\big\}\;\mbox{ for all }\;t\in[0,T],\;k\in\mathbb{N},
\end{eqnarray}
so that $\vartheta^k(t)\to t$ uniformly on $[0,T]$ as $k\to\infty$. Since $\vartheta^{k}(t)=t_{j}^{k}$ for all $t\in[t_{j}^{k},t_{j+1}^{k})$ and $j=0,\ldots,k-1$, the equations in \eqref{p_iteration} can be rewritten as
\begin{eqnarray}\label{qk}
\dot q^{k}(t)-\lambda^k w^{k}(t)=\left[\begin{array}{c} \nabla_{xx}^{2}\big\langle\eta^k(t),\psi\big\rangle\big(\bar{x}^k(\vartheta^k(t)),\bar{u}^k(\vartheta^k(t))\big)\\ \nabla_{xw}^{2}\big\langle\eta^k(t),\psi\big\rangle\big(\bar{x}^k(\vartheta^k(t)),\bar{u}^k(\vartheta^k(t))\big) \end{array}\right]u+\nabla\psi\big(\bar{x}^k(\vartheta^k(t)),\bar{u}^k(\vartheta^k(t))\big)^*\gamma^k(t),
\end{eqnarray}
where $u:=q^{xk}(\vartheta^k_+(t))-\lambda^k(v^{xk}(t)+\theta^{xk}(t))$ for every $t\in(t^k_j,t^k_{j+1})$, $j=0,\ldots,k-1$, and where $\vartheta^k_+(t):=t^k_{j+1}$ for $t\in[t^k_j,t^k_{j+1})$.

Define now $p^k(\cdot)=(p^{xk}(\cdot),p^{uk}(\cdot))$ on $[0,T]$ by setting
\begin{eqnarray*}
p^{k}(t):=q^{k}(t)+\displaystyle\int_{t}^{T}\nabla\psi\big(\bar{x}^k(\vartheta^k(\tau)),\bar{u}^k(\vartheta^k(\tau))\big)^*\gamma^k(\tau)d\tau
\end{eqnarray*}
for every $t\in[0,T]$. This gives us $p^k(T)=q^k(T)$ together with the differential relation
\begin{eqnarray*}
\dot p^{k}(t)=\dot q^{k}(t)-\nabla\psi\big(\bar{x}^k(\vartheta^k(t)),\bar{u}^k(\vartheta^k(t))\big)^*\gamma^k(t)
\end{eqnarray*}
holding for a.e.\ $t\in[0,T]$. Substituting the latter into \eqref{qk}, we get
\begin{eqnarray*}
\dot p^{k}(t)-\lambda^k w^{k}(t)=\left[\begin{array}{c} \nabla_{xx}^{2}\big\langle\eta^k(t),\psi\big\rangle\big(\bar{x}^k(\vartheta^k(t)),\bar{u}^k(\vartheta^k(t))\big)\\ \nabla_{xw}^{2}\big\langle\eta^k(t),\psi\big\rangle\big(\bar{x}^k(\vartheta^k(t)),\bar{u}^k(\vartheta^k(t))\big) \end{array}\right]u
\end{eqnarray*}
for every $t\in(t^k_j,t^k_{j+1})$ and $j=0,\ldots,k-1$. Define further the vector measures $\gamma^k$ by
\begin{eqnarray*}
\int_B d\gamma^k:=\int_B\gamma^k(t)dt\;\mbox{ for every Borel subset }\;B\subset[0,T]
\end{eqnarray*}
and observe that, due to the positive homogeneity with respect to $(\lambda^k,p^k,\gamma^k)$ of all the expressions in the statement of Theorem~\ref{discrete}, the nontriviality condition \eqref{nontriv1} can be rewritten as
\begin{eqnarray}\label{alphak}
\lambda^k+\|q^{uk}(0)\|+\|p^k(T)\|+\int_0^T\|\gamma^{k}(t)\|dt=1\;\mbox{ for all }\;k\in\mathbb{N},
\end{eqnarray}
which tells us that all the sequential terms in \eqref{alphak} are uniformly bounded.
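The piecewise constructions used above are elementary but worth making explicit. The following sketch (with placeholder nodal values standing in for the discrete adjoints $p^k_j$) builds the mesh function $\vartheta^k$ of \eqref{vt} and the piecewise linear extension $q^k$, and confirms that $\vartheta^k(t)\to t$ uniformly at the rate of the mesh size $T/k$.

```python
import numpy as np

T, k = 1.0, 8                          # horizon and number of mesh intervals (toy values)
t_mesh = np.linspace(0.0, T, k + 1)    # t_j^k = jT/k, j = 0, ..., k

def theta(t):
    """theta^k(t): the largest mesh point t_j^k not exceeding t."""
    j = min(int(np.floor(t * k / T)), k)
    return t_mesh[j]

p = np.sin(np.arange(k + 1, dtype=float))   # stand-ins for the discrete adjoints p_j^k

def q(t):
    """Piecewise linear extension with q(t_j^k) = p_j (via numpy.interp)."""
    return float(np.interp(t, t_mesh, p))

# theta^k(t) -> t uniformly: the gap never exceeds the mesh size T/k.
gap = max(abs(t - theta(t)) for t in np.linspace(0.0, T, 1000))
print(gap <= T / k + 1e-12)
```

This uniform convergence is what allows the discrete relations at the mesh points to pass to the almost-everywhere relations of the theorem in the limit $k\to\infty$.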
Following the proof of \cite[Theorem~6.1]{h1}, we obtain the relationships \eqref{hamiltonx}, \eqref{q}, and \eqref{pxkkc}, where the form of the transversality condition \eqref{pxkkc} benefits from the ``full" counterpart of the calculus rule of Proposition~\ref{nor_inve} for normals to inverse images, which is valid under the full rank assumption in (H3). The measure nonatomicity condition of this theorem is also verified similarly to \cite[Theorem~6.1]{h1}. Next we establish the new measured coderivative condition \eqref{maximumcondition}, which was not obtained in \cite{h1} even in the particular framework therein. Rewrite first \eqref{gamma_dis} in the form
\begin{eqnarray*}
\gamma^k(t)\in D^*N_{\Theta}\big(\psi(\bar{x}^k(\vartheta^k(t)),\bar{u}^k(\vartheta^k(t))),\eta^k(t)\big)\Big(\nabla_{x}\psi\big(\bar{x}^k(\vartheta^k(t)),\bar{u}^k(\vartheta^k(t))\big)u\Big)\;\mbox{ a.e. }\;t\in[0,T]
\end{eqnarray*}
via the functions $\vartheta^k(t)$ from \eqref{vt}. Since $\gamma^k(t)$ is a step vector function for each $k\in\mathbb{N}$, for any $t\ne t_j^k$, $j=0,\ldots,k$, we can choose a number $\delta_k>0$ so small that every Borel set $B$ containing $t$ with $|B|\le\delta_k$ does not contain any mesh points. Hence $\gamma^k(t)$ remains constant on each such set $B$, and we can write the representation
\begin{eqnarray*}
\gamma^k(t)=\frac{1}{|B|}\int_{B}\gamma^k(\tau)d\tau\;\mbox{ whenever }\;t\in B\;\mbox{ and }\;|B|\le\delta_k.
\end{eqnarray*}
The separability of the space $C([0,T];\mathbb{R}^s)$ and the boundedness of $\gamma^k(\cdot)$ in $C^*([0,T];\mathbb{R}^s)$ by \eqref{alphak} allow us to select a subsequence of $\{\gamma^k(\cdot)\}$ (no relabeling) that weak$^*$ converges in $C^*([0,T];\mathbb{R}^s)$ to some $\gamma(\cdot)$.

As a result, we get without loss of generality that
\begin{eqnarray*}
\int_B\gamma^k(\tau)d\tau\to\int_B\gamma(\tau)d\tau\;\mbox{ as }\;k\to\infty
\end{eqnarray*}
for any Borel set $B$. To proceed further, choose a sequence of Borel sets $B_k\subset[0,T]$ such that $t\in B_k$, $|B_k|\to 0$, and $\frac{1}{|B_k|}\int_{B_k}\gamma(\tau)d\tau\to\alpha$ as $k\to\infty$ for some $\alpha\in\mathbb{R}^s$. It follows from the constructions above that
\begin{eqnarray*}
\gamma^k(t)\to\alpha\in\Big(\mathop{{\rm Lim}\,{\rm sup}}_{|B|\to 0}\frac{1}{|B|}\int_B\gamma(\tau)d\tau\Big)(t)=\mathop{{\rm Lim}\,{\rm sup}}_{|B|\to 0}\frac{\gamma(B)}{|B|}(t).
\end{eqnarray*}
Taking into account the robustness of the coderivative with respect to all of its variables, we arrive at
\begin{eqnarray*}
\alpha\in D^*N_\Theta\big(\psi(\bar{x}(t),\bar{u}(t)),\eta(t)\big)\big(\nabla_x\psi(\bar{x}(t),\bar{u}(t))(q^x(t)-\lambda v^x(t))\big)\cap\mathop{{\rm Lim}\,{\rm sup}}_{|B|\to 0}\frac{\gamma(B)}{|B|}(t)
\end{eqnarray*}
for a.e.\ $t\in[0,T]$, which verifies the measured coderivative condition \eqref{maximumcondition}. To complete the proof of the theorem, it remains to justify the nontriviality condition \eqref{nontriv2}. Arguing by contradiction, suppose that \eqref{nontriv2} fails, and thus find sequences $\lambda^k\to 0$ and $\|p_j^k\|\to 0$ as $k\to\infty$ uniformly in $j$. It follows from \eqref{alphak} and \eqref{hamilton2ub} that $\int_0^T\|\gamma^{k}(t)\|dt\to 1$ as $k\to\infty$. Define now the sequence of measurable vector functions $\beta^k\colon[0,T]\to\mathbb{R}^s$ by
\begin{eqnarray*}
\beta^k(t):=\left\{\begin{array}{cl} \displaystyle\frac{\gamma^k(t)}{\|\gamma^k(t)\|}&{\rm if}\;\gamma^k(t)\ne 0,\\ 0&{\rm if}\;\gamma^k(t)=0 \end{array}\right.\quad\mbox{ for all }\;t\in[0,T].
\end{eqnarray*}
Using the Jordan decomposition $\gamma^k=(\gamma^k)^+-(\gamma^k)^-$ gives us a subsequence of $\{\gamma^k\}$ and a Borel vector measure $\gamma=\gamma^+-\gamma^-$ such that $\{(\gamma^{k})^+\}$ weak$^*$ converges to $\gamma^+$ and $\{(\gamma^{k})^-\}$ weak$^*$ converges to $\gamma^-$ in $C^*([0,T];\mathbb{R}^s)$. Taking into account the uniform boundedness of $\beta^k(\cdot)$ on $[0,T]$ allows us to apply the convergence result of \cite[Proposition~9.2.1]{vinter} (with $A=A_i:=\mathbb{B}_{\mathbb{R}^s}$ for all $i\in\mathbb{N}$ therein) and thus find Borel measurable vector functions $\beta^+,\beta^-\colon[0,T]\to\mathbb{R}^s$ so that, up to a subsequence, $\{\beta^k(\gamma^k)^+\}$ weak$^*$ converges to $\beta^+\gamma^+$ and $\{\beta^k(\gamma^k)^-\}$ weak$^*$ converges to $\beta^-\gamma^-$. As a result, we get
\begin{align*}
&\int_0^T\beta^+(t)d\gamma^+(t)-\int_0^T\beta^-(t)d\gamma^-(t)=\lim_{k\to\infty}\Big(\int_0^T\beta^k(t)d(\gamma^k)^+(t)-\int_0^T\beta^k(t)d(\gamma^k)^-(t)\Big)\\
&=\lim_{k\to\infty}\int_0^T\beta^k(t)d\big((\gamma^k)^+-(\gamma^k)^-\big)(t)=\lim_{k\to\infty}\int_0^T\beta^k(t)d\gamma^k(t)=\lim_{k\to\infty}\int_0^T\|\gamma^{k}(t)\|dt=1.
\end{align*}
This means that $\|\gamma\|=\|\gamma^+\|+\|\gamma^-\|\ne 0$, which contradicts the assumed failure of \eqref{nontriv2}. $\triangle$\vspace*{0.05in}

Note that the nontrivial optimality conditions obtained in Theorem~\ref{necopt} do not generally exclude the possibility of their validity for every feasible solution, although even in this case they may be useful, as is shown by examples. The following consequence of Theorem~\ref{necopt} presents effective sufficient conditions ensuring {\em nondegenerate} optimality conditions for the considered local minimizers of $(P)$.
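The role of the Jordan decomposition in the above nontriviality argument can be checked on a toy discrete measure. In this sketch (with an arbitrary sign-changing step density standing in for $\gamma^k$), the function $\beta^k=\gamma^k/\|\gamma^k\|$ integrates against $\gamma^k$ to the total variation, which is exactly the quantity normalized to one in \eqref{alphak}.

```python
import numpy as np

# A step "density" of gamma^k on a uniform grid of [0, T] (toy values).
T, n = 1.0, 400
dt = T / n
dens = np.sin(np.linspace(0.0, 6.0, n))        # changes sign, like gamma^k(t)

pos, neg = np.maximum(dens, 0.0), np.maximum(-dens, 0.0)   # Jordan parts
beta = np.sign(dens)                                       # beta^k(t) = gamma^k/||gamma^k||

tv = np.sum(np.abs(dens)) * dt                 # total variation ||gamma^k||
print(np.allclose(dens, pos - neg))            # gamma^k = (gamma^k)^+ - (gamma^k)^-
print(np.isclose(np.sum(beta * dens) * dt, tv))  # int beta^k d(gamma^k) = ||gamma^k||
print(np.isclose((np.sum(pos) + np.sum(neg)) * dt, tv))
```

The same identity, carried through the weak$^*$ limits of the positive and negative parts separately, is what forces $\gamma\ne 0$ in the contradiction argument above.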
The reader can find more discussions on nondegeneracy in \cite{arutyunov,AK,vinter} for classical optimal control problems and Lipschitzian differential inclusions and in \cite{h1} for sweeping ones over polyhedral controlled sets.\vspace*{-0.05in}

\begin{Corollary}{\bf(nondegeneracy).}\label{degeneracy} In the setting of Theorem~{\rm\ref{necopt}}, suppose that $\eta(T)$ is well defined and that $\theta=0$ is the only vector satisfying the relationships
\begin{eqnarray}\label{int_con_relax}
\theta\in D^*N_\Theta\big(\psi(\bar{x}(T),\bar{u}(T)),\eta(T)\big)(0),\quad \nabla\psi\big(\bar{x}(T),\bar{u}(T)\big)^*\theta\in-\nabla\psi\big(\bar{x}(T),\bar{u}(T)\big)^*N_\Theta\big(\psi(\bar{x}(T),\bar{u}(T))\big).
\end{eqnarray}
Then the necessary optimality conditions of Theorem~{\rm\ref{necopt}} hold with the enhanced nontriviality
\begin{eqnarray}\label{enhanced_non}
\lambda+{\rm mes}\big\{t\in[0,T]\big|\;q(t)\ne 0\big\}+\|q(0)\|+\|q(T)\|>0.
\end{eqnarray}
\end{Corollary}\vspace*{-0.02in}

{\bf Proof.} Arguing by contradiction, suppose that \eqref{enhanced_non} fails, which yields $\lambda=0$, $q(0)=0$, $q(T)=0$, and $q(t)=0$ for a.e.\ $t\in[0,T]$. It follows from \eqref{hamiltonx} that $p\equiv p(T)$ on $[0,T]$. By using \eqref{q} and the fact that $\nabla\psi(\bar{x}(t),\bar{u}(t))$ has full rank on $[0,T]$, we get $\gamma:=\theta_1\delta_{\{0\}}+\theta_2\delta_{\{T\}}$ for some $\theta_1,\theta_2\in\mathbb{R}^s$ via the Dirac measures. Let us now check that $\theta_1=\theta_2=0$. Since $\eta(T)$ is well defined and since $q(T)=0$ and $\lambda=0$, we conclude that condition \eqref{maximumcondition} holds at $t=T$ being equivalent to
\begin{eqnarray}\label{condition1}
\theta_2\in D^*N_\Theta\big(\psi(\bar{x}(T),\bar{u}(T)),\eta(T)\big)(0).
\end{eqnarray}
On the other hand, it follows from $q(T)=0$ due to \eqref{q} that $p(T)=\nabla\psi(\bar{x}(T),\bar{u}(T))^*\theta_2$.

Using further \eqref{pxkkc} tells us that $\nabla\psi(\bar{x}(T),\bar{u}(T))^*\theta_2\in-\nabla\psi\big(\bar{x}(T),\bar{u}(T)\big)^*N_\Theta\big(\psi(\bar{x}(T),\bar{u}(T))\big)$. Then it follows from \eqref{int_con_relax} and \eqref{condition1} that $\theta_2=0$, which yields $\theta_1=0$ and gives us a contradiction. $\triangle$\vspace*{-0.07in}

\begin{Remark}{\bf(discussions on nondegeneracy).}\label{example_nondegenerate} {\rm It is easy to see that the enhanced nontriviality condition \eqref{enhanced_non} excludes the degeneracy case of $\lambda:=0$, $p=q\equiv 0$, and $\gamma:=\delta_{\{T\}}$ in Theorem~\ref{necopt}. Furthermore, the interiority assumption $\psi(\bar{x}(T),\bar{u}(T))\in{\rm int}\,\Theta$ yields \eqref{int_con_relax}, while not vice versa. To illustrate this, consider the following {\em example}: minimize the cost
\begin{eqnarray*}
J[u]:=\frac{1}{2}\big(x(2)-1\big)^2+\int_0^1\big(u(t)+2-t\big)^2dt+\int_1^2\big(u(t)+1\big)^2dt
\end{eqnarray*}
over the dynamics $-\dot x(t)\in N(x(t);(-\infty,-u])$ with $x(0)=3/2$ and $u(0)=-2$. We can directly check that the only optimal trajectory in this problem is given by $\bar{x}(t)=3/2$ on $[0,1/2]$, $\bar{x}(t)=2-t$ on $[1/2,1]$, and $\bar{x}(t)=1$ on $[1,2]$. It is generated by the optimal control $\bar{u}(t)=t-2$ on $[0,1]$ and $\bar{u}(t)=-1$ on $(1,2]$. To check the nondegeneracy condition \eqref{int_con_relax} in this example with $\psi(x,u)=x+u$ and $\Theta=(-\infty,0]$, observe that the second inclusion therein reduces to
\begin{eqnarray*}
(\alpha,\alpha)\in-N\big((1,-1);\{(x,u)|\,x+u\le 0\}\big),
\end{eqnarray*}
which is equivalent to $\alpha\le 0$. The first inclusion reads as
\begin{eqnarray*}
\alpha\in D^*N_{\mathbb{R}_-}(0,0)(0)=[0,\infty),
\end{eqnarray*}
giving us $\alpha\ge 0$. Thus $\alpha=0$, and condition \eqref{int_con_relax} is satisfied.

On the other hand, we see that the point $\psi(\bar{x}(2),\bar{u}(2))=0$ does not belong to the interior of the set $\Theta$.} \end{Remark}\vspace*{-0.25in}

\section{Hamiltonian Formalism and Maximum Principle}\label{hamiltonian}
\setcounter{equation}{0}
The necessary optimality conditions for $(P)$ obtained in Theorem~\ref{necopt} and Corollary~\ref{degeneracy} are of the extended {\em Euler-Lagrange} type, which is pivotal in optimal control of Lipschitzian differential inclusions; see \cite{mordukhovich,vinter}. The result of \cite[Theorem~1.34]{mordukhovich} tells us that the Euler-Lagrange framework involving coderivatives implies the {\em maximum condition} of the Weierstrass-Pontryagin type for problems with convex velocities provided that the velocity mapping is {\em inner semicontinuous} (e.g., Lipschitzian), which is never the case in our setting \eqref{F}. Nevertheless, we show in what follows that the Hamiltonian formalism and the maximum condition can be derived from the measured coderivative condition of Theorem~\ref{necopt} in rather broad and important situations by using {\em coderivative calculations} available in variational analysis. The first result of this section deals with problem $(P)$ in the case where $\Theta:=\mathbb{R}^s_-$. In this case we consider the set of {\em active constraint indices}
\begin{eqnarray}\label{active_constr}
I(x,u):=\big\{i\in\{1,\ldots,s\}\big|\;\psi_i(x,u)=0\big\}.
\end{eqnarray}
It follows from Proposition~\ref{nor_inve} under (H3) that for each $v\in-N(x;C(u))$ there is a unique collection $\{\alpha_i\}_{i\in I(x,u)}$ with $\alpha_i\le 0$ and $v=\sum_{i\in I(x,u)}\alpha_i[\nabla_x\psi(x,u)]_i$.
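The unique representation of $v\in-N(x;C(u))$ through the active gradients can be computed directly. The sketch below (with a hypothetical constraint map $\psi$ whose active rows are linearly independent, as (H3) guarantees) recovers the active index set \eqref{active_constr} and the coefficients $\alpha_i$ by solving the corresponding linear system.

```python
import numpy as np

def psi(x, u):
    # Hypothetical constraint map psi : R^2 x R^2 -> R^3 with Theta = R^3_-.
    return np.array([x[0] + u[0], x[1] + u[1], x[0] - 5.0])

def grad_x_psi(x, u):
    # Rows are the partial gradients [nabla_x psi(x, u)]_i.
    return np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 0.0]])

def active_and_coeffs(x, u, v, tol=1e-9):
    """Active set I(x, u) and the coefficients alpha_i with
    v = sum_{i in I} alpha_i [nabla_x psi(x, u)]_i (unique under LICQ)."""
    I = [i for i, val in enumerate(psi(x, u)) if abs(val) < tol]
    G = grad_x_psi(x, u)[I]                          # active gradient rows
    alpha, *_ = np.linalg.lstsq(G.T, v, rcond=None)  # solve G^T alpha = v
    return I, alpha

x, u = np.zeros(2), np.zeros(2)                      # psi(x, u) = (0, 0, -5)
I, alpha = active_and_coeffs(x, u, v=np.array([-2.0, -3.0]))
print(I, alpha)   # active indices and coefficients alpha_i <= 0
```

Under LICQ the least-squares step reduces to solving a nonsingular system, so the coefficients are unique, matching the uniqueness claim above.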
Given $\nu\in\mathbb{R}^s$, define the vector $[\nu,v]\in\mathbb{R}^n$ by
\begin{eqnarray}\label{nu}
[\nu,v]:=\sum_{i\in I(x,u)}\nu_i\alpha_i\big[\nabla_x\psi(x,u)\big]_i
\end{eqnarray}
and introduce the {\em modified Hamiltonian} function
\begin{eqnarray}\label{new_hamiltonian}
H_{\nu}(x,u,p):=\sup\big\{\big\langle[\nu,v],p\big\rangle\big|\;v\in-N\big(x;C(u)\big)\big\},\quad(x,u,p)\in\mathbb{R}^n\times\mathbb{R}^m\times\mathbb{R}^n.
\end{eqnarray}
The following consequence of Theorem~\ref{necopt} and Corollary~\ref{degeneracy} shows that the measured coderivative condition \eqref{maximumcondition} yields the {\em maximization} of the modified Hamiltonian \eqref{new_hamiltonian} at local optimal solutions to $(P)$ with the polyhedral constraints corresponding to $\Theta=\mathbb{R}^s_-$ in \eqref{mov-set}.\vspace*{-0.05in}

\begin{Corollary}{\bf(maximum condition for polyhedral sweeping control systems under surjectivity).}\label{maximumconditionorthant} In the frameworks of Theorem~{\rm\ref{necopt}} and Corollary~{\rm\ref{degeneracy}} with $\Theta=\mathbb{R}^s_-$, we have the corresponding necessary optimality conditions therein together with the following maximum condition: there is a measurable vector function $\nu\colon[0,T]\to\mathbb{R}^s$ such that $\nu(t)\in\mathop{{\rm Lim}\,{\rm sup}}_{|B|\to 0}\frac{\gamma(B)}{|B|}(t)$ and
\begin{eqnarray}\label{novelmaxcond}
\big\langle\big[\nu(t),\dot{\bar{x}}(t)\big],q^x(t)-\lambda v^x(t)\big\rangle=H_{\nu(t)}\big(\bar{x}(t),\bar{u}(t),q^x(t)-\lambda v^x(t)\big)=0\;\mbox{ a.e. }\;t\in[0,T].
\end{eqnarray}
\end{Corollary}\vspace*{-0.05in}

{\bf Proof.} Let us show that the maximum condition \eqref{novelmaxcond} follows from the measured coderivative condition \eqref{maximumcondition}. To proceed, we need to compute the coderivative of the normal cone mapping $D^*N_{\mathbb{R}^s_-}$, which has been done in variational analysis in several forms; see, e.g., \cite{mord,BR} for more discussions and references. We use here the one taken from \cite{MO07}: if $D^*N_{\mathbb{R}^s_-}(w,\xi)(u)\ne\emptyset$, then
\begin{eqnarray}\label{orthant}
D^*N_{\mathbb{R}^s_-}(w,\xi)(u)=\left\{\omega\in\mathbb{R}^{s}\left\vert\begin{array}{ll} \omega_i=0&\mbox{if }\;w_i<0\;\mbox{ or if }\;w_i=0,\;\xi_{i}=0,\;u_i<0,\\ \omega_i\ge 0&\mbox{if }\;w_i=0,\;\xi_{i}=0,\;u_i\ge 0,\\ \omega_i\in\mathbb{R}&\mbox{if }\;\xi_{i}>0,\;u_i=0 \end{array}\right.\right\}.
\end{eqnarray}
The measured coderivative condition \eqref{maximumcondition} reads in this case as
\begin{eqnarray*}
\nu(t)\in D^*N_{\mathbb{R}^s_-}\big(\psi(\bar{x}(t),\bar{u}(t)),\eta(t)\big)\big(\nabla_x\psi(\bar{x}(t),\bar{u}(t))(q^x(t)-\lambda v^x(t))\big)\;\mbox{ a.e. }\;t\in[0,T]
\end{eqnarray*}
with a vector function $\nu(t)\in\mathop{{\rm Lim}\,{\rm sup}}_{|B|\to 0}\frac{\gamma(B)}{|B|}(t)$, which can be selected (Lebesgue) measurable on $[0,T]$ due to well-known measurable selection results; see, e.g., \cite{rw,vinter}. It follows from \eqref{orthant} that
\begin{eqnarray}\label{eta}
\big[\eta_{i}(t)>0\big]\Longrightarrow\Big[\big\langle\lambda v^x(t)-q^{x}(t),\big[\nabla_x\psi\big(\bar{x}(t),\bar{u}(t)\big)\big]_i\big\rangle=0\Big]\;\mbox{ for a.e. }\;t\in[0,T],\quad i=1,\ldots,s,
\end{eqnarray}
which gives us by equation \eqref{etajkl} that
\begin{eqnarray}\label{LHS-MX}
\big\langle\big[\nu(t),\dot{\bar{x}}(t)\big],\lambda v^x(t)-q^{x}(t)\big\rangle=0\;\mbox{ a.e. }\;t\in[0,T].
\end{eqnarray}
On the other hand, we get from \eqref{active_constr}--\eqref{new_hamiltonian} with $I:=I(\bar{x}(t),\bar{u}(t))$ that
\begin{eqnarray}\label{RHS-MX}
H_{\nu(t)}\big(\bar{x}(t),\bar{u}(t),q^x(t)-\lambda v^x(t)\big)=\sup\Big\{\sum_{i\in I}\alpha_i\nu_i(t)\big\langle\big[\nabla_x\psi\big(\bar{x}(t),\bar{u}(t)\big)\big]_i,q^x(t)-\lambda v^x(t)\big\rangle\Big|\;\alpha_i\le 0\Big\}
\end{eqnarray}
for a.e.\ $t\in[0,T]$. Applying now \eqref{orthant} gives us the implication
\begin{eqnarray*}
\big[i\in I\big(\bar{x}(t),\bar{u}(t)\big)\big]\Longrightarrow\Big[\nu_i(t)\big\langle\big[\nabla_x\psi\big(\bar{x}(t),\bar{u}(t)\big)\big]_i,q^x(t)-\lambda v^x(t)\big\rangle\ge 0\Big]\;\mbox{ a.e. }\;t\in[0,T].
\end{eqnarray*}
Combining this with \eqref{RHS-MX}, we get $H_{\nu(t)}(\bar{x}(t),\bar{u}(t),q^x(t)-\lambda v^x(t))=0$ for a.e.\ $t\in[0,T]$ and thus arrive at the maximum condition \eqref{novelmaxcond}, where the other equality was established in \eqref{LHS-MX}. $\triangle$\vspace*{0.05in}

Observe that the explicit coderivative computation \eqref{orthant} plays a crucial role in deriving the maximum condition \eqref{novelmaxcond} in Corollary~\ref{maximumconditionorthant}. Available second-order calculus rules and coderivative evaluations for normal cone mappings allow us to derive more general results of the maximum principle type in sweeping optimal control. The next theorem addresses the case where the set $\Theta$ in \eqref{mov-set} is given by
\begin{eqnarray}\label{h}
\Theta=h^{-1}(\mathbb{R}^l_-):=\big\{z\in\mathbb{R}^s\big|\;h(z)\in\mathbb{R}^l_-\big\}
\end{eqnarray}
via a smooth mapping $h\colon\mathbb{R}^s\to\mathbb{R}^l$. As mentioned above, the surjectivity condition on the Jacobian $\nabla h(\bar{z})$ at a fixed point $\bar{z}$ corresponds to the LICQ condition at $\bar{z}$.
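Returning to the orthant case of Corollary~\ref{maximumconditionorthant}, the componentwise description \eqref{orthant} is simple enough to encode directly. The following sketch implements the membership test $\omega\in D^*N_{\mathbb{R}^s_-}(w,\xi)(u)$ under the standing assumption that the coderivative set is nonempty; in particular, it reproduces the one-dimensional computation $D^*N_{\mathbb{R}_-}(0,0)(0)=[0,\infty)$ from the example of Remark~\ref{example_nondegenerate}.

```python
def in_coderivative_orthant(omega, w, xi, u, tol=1e-12):
    """Componentwise membership test for omega in D*N_{R^s_-}(w, xi)(u),
    following the case analysis of the formula in the text (a sketch:
    the formula is only applied when the coderivative set is nonempty)."""
    for om, wi, xii, ui in zip(omega, w, xi, u):
        if wi < -tol:                               # inactive constraint: omega_i = 0
            ok = abs(om) <= tol
        elif abs(wi) <= tol and abs(xii) <= tol:    # w_i = 0, xi_i = 0
            ok = abs(om) <= tol if ui < -tol else om >= -tol
        elif xii > tol and abs(ui) <= tol:          # xi_i > 0, u_i = 0: omega_i free
            ok = True
        else:                                       # no case applies: empty set
            ok = False
        if not ok:
            return False
    return True

print(in_coderivative_orthant([0.5], [0.0], [0.0], [0.0]))    # alpha >= 0 admitted
print(in_coderivative_orthant([-0.5], [0.0], [0.0], [0.0]))   # alpha < 0 excluded
```

Such an explicit test is exactly what makes the implication \eqref{eta} checkable index by index along a discretized trajectory.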
Dealing with linear mappings $h(z):=Az-b$, we may replace the LICQ condition in the coderivative evaluation by the weaker {\em positive LICQ} (PLICQ) condition at $\bar{z}$ that is discussed and implemented in \cite{h1}. It is used in what follows.\vspace*{-0.07in}

\begin{Theorem}{\bf(maximum principle in sweeping optimal control).}\label{max-pr} Consider the control problem $(P)$ in the frameworks of Theorem~{\rm\ref{necopt}} and Corollary~{\rm\ref{degeneracy}} with the set $\Theta$ given by \eqref{h}, where $h\colon\mathbb{R}^s\to\mathbb{R}^l$ is ${\cal C}^2$-smooth around the local optimal solution $\bar{z}(t):=(\bar{x}(t),\bar{u}(t))$ for all $t\in[0,T]$. Suppose that either $\nabla h(\bar{z}(t))$ is surjective, or $h(\cdot)$ is linear and the PLICQ assumption is fulfilled at $\bar{z}(t)$ on $[0,T]$. Then, in addition to the corresponding necessary optimality conditions of the statements above, the maximum condition \eqref{novelmaxcond} holds with a measurable vector function $\nu\colon[0,T]\to\mathbb{R}^s$ satisfying the inclusion
\begin{eqnarray}\label{tildegg1}
\nu(t)\in D^*N_{\mathbb{R}^l_-}\big(h(\psi(\bar{x}(t),\bar{u}(t))),\mu(t)\big)\big(\nabla_x\psi(\bar{x}(t),\bar{u}(t))(q^x(t)-\lambda v^x(t))\big)\;\mbox{ a.e. }\;t\in[0,T],
\end{eqnarray}
where $\mu\colon[0,T]\to\mathbb{R}^l$ is also measurable and such that
\begin{eqnarray*}
\mu(t)\in N_{\mathbb{R}^l_-}\big(h(\psi(\bar{x}(t),\bar{u}(t)))\big)\;\mbox{ with }\;\eta(t)=\nabla h\big(\psi(\bar{x}(t),\bar{u}(t))\big)^*\mu(t)\;\mbox{ a.e. }\;t\in[0,T].
\end{eqnarray*}
\end{Theorem}\vspace*{-0.05in}

{\bf Proof.} As in the proof of Corollary~\ref{maximumconditionorthant}, we derive from the measured coderivative condition \eqref{maximumcondition} the existence of a measurable function $\widetilde\nu\colon[0,T]\to\mathbb{R}^s$ satisfying the inclusion
\begin{eqnarray}\label{v1}
\widetilde\nu(t)\in D^*N_{h^{-1}(\mathbb{R}^l_-)}\big(\psi(\bar{x}(t),\bar{u}(t)),\eta(t)\big)\big(\nabla_x\psi(\bar{x}(t),\bar{u}(t))(q^x(t)-\lambda v^x(t))\big)\;\mbox{ a.e. }\;t\in[0,T].
\end{eqnarray}
Assuming that $\nabla h(\bar{z}(t))$ is surjective on $[0,T]$ and applying the second-order chain rule from \cite[Theorem~1.127]{mordukhovich} together with the aforementioned measurable selection results, we find measurable functions $\nu\colon[0,T]\to\mathbb{R}^s$ and $\mu\colon[0,T]\to\mathbb{R}^l$ satisfying the conditions \eqref{tildegg1} and \eqref{v1} as well as
\begin{eqnarray*}
\widetilde\nu(t)=\nabla^2\big\langle\mu(t),h\big\rangle\big(\psi(\bar{x}(t),\bar{u}(t))\big)+\nabla h\big(\psi(\bar{x}(t),\bar{u}(t))\big)^*\nu(t),
\end{eqnarray*}
which uniquely determines $\nu(t)$ from $\widetilde\nu(t)$ for a.e.\ $t\in[0,T]$. The validity of the maximum condition follows now from the proof of Corollary~\ref{maximumconditionorthant} due to the relationships above. In the case where $h$ is linear and the PLICQ holds, we proceed similarly by applying the evaluation of $D^*N_{h^{-1}(\mathbb{R}^l_-)}$ from \cite[Lemma~4.2]{h1} without claiming that $\nu(t)$ is uniquely defined by $\widetilde\nu(t)$. $\triangle$\vspace*{-0.05in}

\begin{Remark}{\bf(discussions on the maximum principle).}\label{max-disc} {\rm The necessary optimality conditions of the maximum principle type obtained in Corollary~\ref{maximumconditionorthant} and Theorem~\ref{max-pr} are the first in the literature for sweeping processes with controlled moving sets. Note that our form of the modified Hamiltonian \eqref{new_hamiltonian} is different from the {\em conventional Hamiltonian} form
\begin{eqnarray}\label{hamilton}
H(x,p):=\sup\big\{\langle p,v\rangle\big|\;v\in F(x)\big\}
\end{eqnarray}
used in optimal control of Lipschitzian differential inclusions $\dot x\in F(x)$, which extends the Hamiltonian of classical optimal control. We show below in Example~\ref{counterexample} that the maximum principle via the conventional Hamiltonian \eqref{hamilton} {\em fails} for our problem $(P)$. The reason is that \eqref{hamilton} does not reflect the {\em implicit} state constraints, which do not appear in Lipschitzian problems while being an essential part of the sweeping dynamics; see the discussions in Section~1. Note also that the Hamiltonian form \eqref{new_hamiltonian} is different from those used in \cite{ac,bk,pfs} for deriving maximum principles in sweeping control problems with uncontrolled moving sets, which are significantly diverse from our problem $(P)$. We can see that the new maximum principle form \eqref{novelmaxcond} incorporates vector measures that appear through the measured coderivative condition \eqref{maximumcondition}. The fact that measures naturally arise in descriptions of necessary optimality conditions for optimal control problems with state constraints was first realized by Dubovitskii and Milyutin \cite{duboviskii} and has since been fully accepted in the literature; see, e.g., \cite{arutyunov,vinter} and the references therein.
There are interesting connections between our form of the maximum principle for controlled sweeping processes and the Hamiltonian formalism in models of contact and nonsmooth mechanics (see, e.g., \cite{brog,razavy}), which we plan to investigate fully in subsequent publications.} \end{Remark}\vspace*{-0.07in}

The next example shows that the maximum principle in the conventional form used for Lipschitzian differential inclusions \cite{arutyunov,vinter} with the standard Hamiltonian \eqref{hamilton} fails for the sweeping control problem $(P)$, while our new form obtained in Theorem~\ref{max-pr} holds.\vspace*{-0.05in}

\begin{Example}{\bf(failure of the conventional maximum principle for sweeping processes).}\label{counterexample} {\rm We consider the optimal control problem for the sweeping process taken from \cite[Example~7.6]{h1}, where the controlled moving set is defined by \eqref{sw-con2} with control actions $u_j(t)$ and $b_j(t)$. Specify the initial data as
\begin{eqnarray*}
n=m=2,\;x_0=(1,1),\;T=1,\;\varphi(x)=\frac{\|x\|^2}{2},\;\mbox{ and }\;\ell(t,x,u,b,\dot x,\dot u,\dot b):=\frac{1}{2}\big(\dot{b}_1^2+\dot{b}_2^2\big)
\end{eqnarray*}
and fix the $u$-controls as $\bar{u}_1\equiv(1,0)$, $\bar{u}_2\equiv(0,1)$. The necessary optimality conditions of Corollary~\ref{degeneracy} give us in this case the following relationships on $[0,1]$:
\begin{itemize}
\item[(1)] $w(\cdot)=0$, $v^x(\cdot)=0$, $v^b(\cdot)=\big(\dot{b}_1(\cdot),\dot{b}_2(\cdot)\big)$;\quad(2)$\;\dot{\bar{x}}_i(t)\ne 0\Longrightarrow q^x_i(t)=0$,\;$i=1,2$,
\item[(3)] $p^b(\cdot)$ is constant with nonnegative components; $-p_i^x(\cdot)=\lambda\bar{x}(1)+p_i^b(\cdot)\bar{u}_i$ are constant for $i=1,2$,
\item[(4)] $q^x(t)=p^x-\gamma([t,1])$,\quad $q^b(t)=\lambda\dot{\bar{b}}(t)=p^b+\gamma([t,1])$ for a.e.\ $t\in[0,1]$,
\item[(5)] $\lambda+\|q(0)\|+\|p(1)\|\ne 0$ with $\lambda\ge 0$.
\end{itemize}
Observe first that the pair $\bar{x}(t)=(1,1)$ and $\bar{b}(t)=(1,1)$ on $[0,1]$ satisfies these necessary conditions with $p_1^x=p_2^x=-1$, $p_1^b=p_2^b=\gamma_1=\gamma_2=0$, and $\lambda=1$. The conventional Hamiltonian \eqref{hamilton} reads now as
\begin{eqnarray*}
H(x,b,p)=\sup\big\{\langle p,v\rangle\big|\;v\in-N\big(x;C\big(((1,0),(0,1)),b\big)\big)\big\},
\end{eqnarray*}
and we get by direct calculation that
\begin{align*}
H\big(\bar{x}(t),\bar{b}(t),q^x(t)-\lambda v^x(t)\big)=&H\big((1,1),(1,1),(-1,-1)\big)\\
=&\sup\big\{\big\langle(-1,-1),v\big\rangle\big|\;v\in-N\big((1,1);C\big(((1,0),(0,1)),(1,1)\big)\big)\big\}\\
=&\sup\big\{\big\langle(-1,-1),v\big\rangle\big|\;v_1\le 0,\;v_2\le 0\big\}=\infty,
\end{align*}
while $\langle\dot{\bar{x}}(t),q^x(t)-\lambda v^x(t)\rangle=0$, and thus the conventional maximum principle fails in this example. At the same time, the new maximum condition \eqref{novelmaxcond} holds trivially with $\nu(t)\equiv 0$ on $[0,1]$.} \end{Example}\vspace*{-0.05in}

The following consequence of Theorem~\ref{max-pr} provides a natural sufficient condition for the validity of the maximum principle in terms of the conventional Hamiltonian \eqref{hamilton}.\vspace*{-0.05in}

\begin{Corollary}{\bf(maximum principle in the conventional form).}\label{con-mp} Assume that in the setting of Theorem~{\rm\ref{max-pr}} we have the condition
\begin{eqnarray*}
\eta_i(t)>0\;\mbox{ for all }\;i\in I\big(\bar{x}(t),\bar{u}(t)\big)\;\mbox{ and a.e. }\;t\in[0,T],
\end{eqnarray*}
where $\eta(t)\in N(\psi(\bar{x}(t),\bar{u}(t));\Theta)$ is uniquely defined by \eqref{etajkl}. Then
\begin{eqnarray*}
\big\langle\dot{\bar{x}}(t),q^x(t)-\lambda v^x(t)\big\rangle=H\big(\bar{x}(t),\bar{u}(t),\bar{b}(t),q^x(t)-\lambda v^x(t)\big)=0\;\mbox{ a.e. }\;t\in[0,T].
\end{eqnarray*}
\end{Corollary}\vspace*{-0.05in}

{\bf Proof.} This follows from \eqref{eta} and its counterpart in the proof of Theorem~\ref{max-pr} by the definitions of the modified and conventional Hamiltonians. $\triangle$

\section{Applications to Elastoplasticity and Hysteresis}\label{application-example}
\setcounter{equation}{0}
In this section we discuss some applications of the obtained necessary optimality conditions to a fairly general class of problems related to elastoplasticity and hysteresis. Let us consider the model of this type discussed in \cite[Section~3.2]{adly}, which can be described in our form, where $Z$ is a closed convex subset with ${\rm int}\,Z\ne\emptyset$ of the $\frac{1}{2}n(n+1)$-dimensional vector space $E$ of symmetric $n\times n$ tensors. Using the notation of \cite{adly}, define the strain tensor $\epsilon=\{\epsilon_{ij}\}$ by $\epsilon:=\epsilon^e+\epsilon^p$, where $\epsilon^e$ is the elastic strain and $\epsilon^p$ is the plastic strain. The elastic strain $\epsilon^e$ depends linearly on the stress tensor $\sigma=\{\sigma_{ij}\}$, i.e., $\epsilon^e=A^2\sigma$, where $A$ is a constant symmetric positive-definite matrix. The {\em principle of maximal dissipation} says that
\begin{eqnarray}\label{elasto-plastic}
\big\langle\dot\epsilon^p(t),z-\sigma(t)\big\rangle\le 0\;\mbox{ for all }\;z\in Z.
\end{eqnarray}
It is shown in \cite{adly} that the variational inequality \eqref{elasto-plastic} is equivalent to the {\em sweeping process}
\begin{eqnarray}\label{sweeping-elasto}
\dot\zeta(t)\in-N\big(\zeta(t);C(t)\big),\;\zeta(0)=A\sigma(0)-A^{-1}\epsilon(0)\in C(0),
\end{eqnarray}
where $\zeta(t):=A\sigma(t)-A^{-1}\epsilon(t)$ and $C(t):=-A^{-1}\epsilon(t)+AZ$. It can be rewritten in the framework of our problem $(P)$ with $x:=\zeta$, $u:=\epsilon$, $\psi(x,u):=x+A^{-1}u$, and $\Theta:=AZ$. Thus we can apply Theorem~\ref{necopt} to this class of hysteresis operators with a {\em general} elasticity domain $Z$.
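For a numerical feeling of the dynamics \eqref{sweeping-elasto}, one can run Moreau's catching-up scheme, which discretizes a sweeping process by projecting the current state onto the moving set. The sketch below is a minimal scalar model assuming $A=1$ and the elasticity domain $Z=[-1,1]$, so that $C(t)=[-\epsilon(t)-1,-\epsilon(t)+1]$ and $\zeta$ reproduces the classical play hysteresis operator; the strain input is a hypothetical sinusoid.

```python
import numpy as np

# Moreau's catching-up scheme for the elastoplastic sweeping process:
#   zeta_{j+1} = proj_{C(t_{j+1})}(zeta_j),  C(t) = -A^{-1} eps(t) + A Z.
# Scalar sketch with A = 1 and Z = [-1, 1], i.e. C(t) = [-eps(t)-1, -eps(t)+1].

eps = lambda t: 2.0 * np.sin(t)                  # hypothetical strain input
T, N = 10.0, 10000
ts = np.linspace(0.0, T, N + 1)
zeta = np.empty(N + 1)
zeta[0] = 0.0                                    # zeta(0) in C(0) = [-1, 1]
for j in range(N):
    lo, hi = -eps(ts[j + 1]) - 1.0, -eps(ts[j + 1]) + 1.0
    zeta[j + 1] = min(max(zeta[j], lo), hi)      # projection onto the interval C(t_{j+1})

# The computed trajectory stays in the moving set at every mesh point.
print(all(-eps(t) - 1.0 - 1e-9 <= z <= -eps(t) + 1.0 + 1e-9
          for t, z in zip(ts, zeta)))
```

The state moves only when the boundary of $C(t)$ catches it, which is the hysteresis (play) behavior underlying the elastoplastic models discussed above.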
Note that a similar model is considered in \cite{herzog1}, but only for the von Mises yield criterion. The results obtained here give us the flexibility of applications to many different elastoplasticity models, including those with the Drucker-Prager, Mohr-Coulomb, Tresca, and von Mises yield criteria. In the following example we apply Theorem~\ref{necopt} to solve a meaningful optimal control problem generated by the elastoplasticity dynamics \eqref{sweeping-elasto}.\vspace*{-0.05in} \begin{Example}{\bf(optimal control in elastoplasticity).}\label{example1} {\rm Consider the dynamic optimization problem: \begin{eqnarray*} \mbox{minimize }\;J[\epsilon]:=\int_0^1\frac{1}{2}\|\dot\epsilon(t)\|^2dt+\frac{1}{2}\|\zeta(1)-\zeta_1\|^2 \end{eqnarray*} over feasible solutions to the sweeping process \eqref{sweeping-elasto} with the initial point $\zeta(0)\in C(0)$. Observe that the linear function $\bar\epsilon(t):=tz+\bar\epsilon(0)$, with appropriate adjustments of the starting and terminal points, is an optimal control for the corresponding problem $(P)$. Remembering the above notation for $x,u,\psi$, and $\Theta$, we derive from Theorem~\ref{necopt} the following necessary optimality conditions: \begin{eqnarray*} \begin{array}{ll} (1)\;\;\dot p(t)=\lambda (0,0),\quad(2)\;\;q^u(t)=\lambda \frac{d}{dt}\bar \epsilon(t)\;\mbox{ a.e. }\;t\in[0,T],\\ (3)\;\;q(t)=p(t)-\displaystyle\int_{[t,T]}(1,1)d\gamma(s)\;\mbox{ a.e. }\;t\in[0,T],\\ (4)\;\;-\big(p^x(1),p^b(1)\big)=\lambda\big(\big(\zeta(1)-\zeta_1\big),0\big)+N\big((\zeta(1),\epsilon(1));\{(\zeta,\epsilon)|\,\zeta+A^{-1}\epsilon\in AZ\}\big),\\ (5)\;\;\lambda +\sup_{t\in[0,T]}\|p(t)\|+\|\gamma\|\ne 0. \end{array} \end{eqnarray*} Assuming that $\zeta(T)\in{\rm int}\,C(T)$ and using the measure nonatomicity condition, we get from (1)--(5) that $\lambda=1$.
It follows from the linearity of $\bar\epsilon$ that $\frac{d}{dt}\bar\epsilon(t)\equiv a$ for some constant $a$, and that the choice of $p\equiv 0$, $q\equiv(\lambda a,\lambda a)$, and $\gamma \equiv(-a,-a)\delta_{\{1\}}$ fulfills all the conditions (1)--(5).}$\triangle$ \end{Example}\vspace*{-0.05in} Note that in the above example we actually guessed the form of optimal solutions and then checked the fulfillment of the obtained necessary optimality conditions. The following example does more by showing how to calculate an optimal solution by using the necessary optimality conditions of Theorem~\ref{necopt} together with the maximum condition from Corollary~\ref{maximumconditionorthant}.\vspace*{-0.05in} \begin{Example}{\bf(calculation of optimal solutions).}\label{Ex2} {\rm Consider here the optimal control problem taken from the example in Remark \ref{example_nondegenerate}. Applying the necessary optimality conditions of Corollary~\ref{degeneracy} with the enhanced nontriviality condition \eqref{enhanced_non}, we get the following: \begin{eqnarray*} \begin{array}{ll} (1)\;\;w=0,\;v(t)=\big(0,2(\bar{u}(t)+2-t)\big)\;\mbox{ if }\;t\in[0,1]\;\mbox{ and }\;v(t)=\big(0,2(\bar{u}(t)+1)\big)\;\mbox{ if }\;t\in(1,2],\\ (2)\;\;\dot{\bar{x}}(t)\in-N\big(\bar{x}(t);(-\infty,-\bar{u}(t)]\big)\;\mbox{ a.e. }\;t\in[0,2],\\ (3)\;\;\big(\dot{p}^x,\dot{p}^u\big)(t)=\big(0,0\big)\;\mbox{ a.e. }\;t\in[0,2],\\ (4)\;\;q^u(t)=2\lambda\big(\bar{u}(t)+2-t\big)\;\mbox{ if }\;t\in[0,1]\;\mbox{ and }\;q^u(t)=2\lambda\big(\bar{u}(t)+1\big)\;\mbox{ if }\;t\in(1,2],\\ (5)\;\;(q^x,q^u)(t)=(p^x,p^u)(t)-\displaystyle\Big(\int_t^2d\gamma,\int_t^2d\gamma\Big)\;\mbox{ a.e.
}\;t\in[0,2],\\ (6)\;\;-\big(p^x(2),p^u(2)\big)=\lambda\big(\big(\bar{x}(2)-1\big),0\big)+N\big((\bar{x}(2),\bar{u}(2));\{(x,u)|\,x+u\le 0\}\big),\\ (7)\;\;\lambda +{\rm mes}\big\{t\in[0,2]\big|\;q(t)\ne 0\big\}+\|q(0)\|+\|q(T)\|>0, \end{array} \end{eqnarray*} together with the measure coderivative condition telling us that \begin{eqnarray}\label{MXC} \nu(t)\in D^*N_{\mathbb{R}_-}\big(\bar{x}(t)+\bar{u}(t),-\dot{\bar{x}}(t)\big)\big(q^x(t)\big)\;\mbox{ a.e. }\;t\in[0,2], \end{eqnarray} where $\nu(t)\in\mathop{{\rm Lim}\,{\rm sup}}_{|B|\to 0}\frac{\gamma(B)}{|B|}(t)$. To proceed, take $\sigma\in(0,2]$ such that the state constraint is inactive on $[0,\sigma)$ while it is active on $[\sigma,2]$. It follows from (3) that $(p^x,p^u)$ is constant on $[0,2]$. Assuming by contradiction that $\lambda=0$, we get from (4) that $q^u\equiv 0$, and so $q^x\equiv 0$ by (5). This shows that (7) is violated, and thus $\lambda=1$. Since $\bar{x}(t)+\bar{u}(t)<0$ on $[0,\sigma)$, it follows from \eqref{MXC} and \eqref{orthant}, which gives us the maximum condition \eqref{novelmaxcond}, that $\nu(t)=0$ for all $t\in[0,\sigma)$. Then $q^u(t)$ remains constant on $[0,\sigma)$ by (5). Combining this with (4) shows that $\bar{u}(t)=t-2$ on $[0,\sigma]$. Using now (2) and the fact that $\bar{x}(t)+\bar{u}(t)<0$ for $t\in[0,\sigma)$, we get $\bar{x}\equiv 3/2$ on $[0,\sigma]$. Then $\bar{x}(\sigma)+\bar{u}(\sigma)=\sigma-2+3/2=0$ and $\sigma=1/2$. Assuming further that $\dot{\bar{x}}(t)<0$ on $(1/2,\sigma_1)$ for some $\sigma_1\in(1/2,2]$ and using \eqref{MXC} together with \eqref{orthant} tells us that $q^x\equiv 0$ on $(1/2,\sigma_1)$. Applying (5) again, we have $\bar{u}(t)=t-2$ on $[1/2,\sigma_1]$. Since $\dot{\bar{x}}(t)<0$ on $(1/2,\sigma_1)$, it follows from (2) that $\bar{x}(t)=-\bar{u}(t)=2-t$ on $[1/2,\sigma_1]$; thus $\bar{x}(t)$ moves towards $1$. This means that when $\bar{x}(\sigma_1)=2-\sigma_1=1$, the object stops moving, i.e., $\dot{\bar{x}}(t)=0$ on $[1,2]$.
Note that (5) yields $\nu(t)=-\dot q^u(t)=-\dot q^x(t)$ on $[0,2]$. If furthermore $\bar{u}(t)$ is strictly increasing, then $q^u(t)$ is also strictly increasing. Since $\nu(t)=-\dot q^u(t)$, we get $\nu(t)<0$. Then \eqref{MXC} and \eqref{orthant} imply that $\dot{\bar{x}}(t)<0$, a contradiction. Assuming that $\bar{u}(t)$ is strictly decreasing tells us by (4) that $q^u(t)$ is strictly decreasing, and so $\nu(t)<0$ by $\nu(t)=-\dot q^u(t)$. In this case conditions \eqref{MXC} and \eqref{orthant} yield $q^x(t)\ge 0$, while $q^x(2)\le 0$ by (6). This shows that $q^x(t)=0$ on $[1,2]$ and so $\nu(t)=-\dot q^x(t)=0$, which contradicts the condition $\nu(t)<0$. Thus $\bar{u}(t)$ remains constant on $[1,2]$, and we find an optimal solution to this problem.} \end{Example}\vspace*{-0.25in} \section{Conclusions}\label{conclusion} \setcounter{equation}{0} This paper presents new results on extended Euler-Lagrange and Hamiltonian optimality conditions for a rather general class of controlled sweeping processes. In our future research in this direction we plan to focus on optimal control problems governed by {\em rate-independent operators} having the following description. Given two functionals $E\colon[0,T]\times Z\to\mathbb{R}$ and $\Psi\colon Z\times Z\to[0,\infty)$ on a Banach (or finite-dimensional) space $Z$, we consider the {\em doubly nonlinear evolution inclusion} \begin{eqnarray*} 0\in\partial_v\Psi\big(z(t),\dot z(t)\big)+\partial E\big(t,z(t)\big)\;\mbox{ a.e. }\;t\in[0,T]. \end{eqnarray*} If $E$ is smooth, the inclusion above is equivalent to \begin{eqnarray*} \dot z(t)\in N_{C(z(t))}\big(\nabla E(t,z(t))\big)\;\mbox{ a.e. }\;t\in[0,T], \end{eqnarray*} where $\{C(z)\}_{z\in Z}$ is the family of closed convex subsets of $Z$ related to $\Psi$ by the formula \begin{eqnarray*} \Psi(z,v):=\sup\big\{(\sigma,v)\big|\;\sigma\in C(z)\big\}\;\mbox{ for all }\;z,v\in Z.
\end{eqnarray*} Among our major applications we plan to consider practical hysteresis models, especially those arising in problems of contact and nonsmooth mechanics; see \cite{brog,razavy}.\vspace*{-0.2in} \end{document}
\begin{document} \title{\TitleFont Robust Stability of Optimization-based State Estimation} \author{Wuhua~Hu \thanks{W. Hu is with the Institute for Infocomm Research, Agency for Science, Technology and Research (A$^\star$STAR), Singapore, E-mail: [email protected]}} \maketitle \begin{abstract} Optimization-based state estimation is useful for nonlinear or constrained dynamic systems for which few general methods with established properties are available. The two fundamental forms are moving horizon estimation (MHE), which uses the most recent measurements within a moving time horizon, and its theoretical ideal, full information estimation (FIE), which uses all measurements up to the time of estimation. Despite extensive studies, the stability analyses of FIE and MHE for discrete-time nonlinear systems with bounded process and measurement disturbances remain an open challenge. This work aims to provide a systematic solution to this challenge. First, we prove that FIE is robustly globally asymptotically stable (RGAS) if the cost function admits a property mimicking the incremental input/output-to-state stability (i-IOSS) of the system and has a sufficient sensitivity to the uncertainty in the initial state. Second, we establish an explicit link from the RGAS of FIE to that of MHE, and use it to show that MHE is RGAS under enhanced conditions if the moving horizon is long enough to suppress the propagation of uncertainties. The theoretical results imply flexible MHE designs with assured robust stability for a broad class of i-IOSS systems. Numerical experiments on linear and nonlinear systems are used to illustrate the designs and support the findings.
\end{abstract} \begin{IEEEkeywords} Nonlinear systems; moving-horizon estimation; full information estimation; state estimation; bounded disturbances; robust stability \end{IEEEkeywords} \thispagestyle{empty} \section{Introduction} Optimization-based state estimation refers to an estimation method that estimates the state of a system via an optimization approach, in which the optimization utilizes all or a subset of the information available about the system up to the time of estimation. It has advantages in handling nonlinear or constrained systems for which few general state estimation methods with established properties are available \cite{rawlings2014moving}. Full information estimation (FIE) is an ideal form of optimization-based estimation which uses all measurements up to the time of estimation. In the absence of constraints, FIE is equivalent to Kalman filtering (KF) when the system is linear time-invariant and the cost function has an appropriate quadratic form \cite{rawlings2014moving}. Since the number of measurements grows with time, it is impractical to implement FIE. This motivates the development of moving horizon estimation (MHE) as its practical approximation, which uses only the latest batch of measurements to do the estimation. The idea of MHE dates back to the 1960s \cite{jazwinski1968limited}, motivated by the goal of making KF robust to modeling errors. However, it is only recently that the idea has gradually developed into a field, i.e., the field of MHE \cite{muske1995nonlinear,allgower1999nonlinear,rawlings2012optimization,rawlings2014moving}. The recent developments include theoretical and applied research on MHE, which investigates the stability and the implementation issues of MHE. Theoretical research has concentrated on the stability conditions of MHE.
Early research assumed linear systems \cite{muske1993receding,rao2001constrained,alessandri2003receding}, and later nonlinear systems \cite{michalska1995moving,alessandri1999neural,rao2003constrained,alessandri2008moving}. { Part of the stability results were obtained by assuming the presence of measurement disturbances but the absence of process disturbances, e.g., \cite{alessandri1999neural}, and most were obtained by assuming the presence of both disturbances, e.g., \cite{muske1993receding,michalska1995moving,rao2001constrained,rao2003constrained,alessandri2003receding,alessandri2008moving}. } The stability results were derived under different formulations of the problem. For example, references \cite{muske1993receding,rao2001constrained,alessandri2003receding,alessandri2010advances,liu2013moving} considered either linear or nonlinear systems and each assumed a quadratic cost function that accounts for a quadratic arrival cost and quadratic penalties on the measurement fitting errors. In contrast, references \cite{rao2003constrained,rawlings2012optimization,hu2015optimization,ji2016robust,muller2017nonlinear} considered general nonlinear systems and assumed a general form of cost functions that are not necessarily quadratic. The recent review in \cite{rawlings2012optimization} provides a concise and general view of the problem, which relies on the concept of incremental input/output-to-state stability (i-IOSS; refer to Definition \ref{def: i-IOSS} in Section \ref{sec: Notation-and-Preliminaries}) for detectability of nonlinear systems \cite{sontag1997output} and the concept of robust global asymptotic stability (RGAS) for robust stability of a state estimator \cite{rawlings2009model,rawlings2012optimization}.
The review revealed two major challenges that were open in the field \cite{rawlings2012optimization,rawlings2014moving}: (i) the search for conditions and a proof of the RGAS of MHE in the presence of bounded disturbances, and (ii) the development of suboptimal MHE that enables an efficient computation of the solution. As an initial step toward tackling challenge (i), reference \cite{hu2015optimization} identified a broad class of cost functions that ensure the RGAS of FIE, and the cost functions were shown to admit a more specific form for a class of i-IOSS systems considered in \cite{ji2016robust}. { The implication for the RGAS of MHE was further investigated in \cite{muller2017nonlinear} based on the results of \cite{ji2016robust}, which showed that MHE is RGAS if the same conditions are enhanced properly. Moreover, in \cite{muller2017nonlinear} the convergence of MHE for convergent disturbances was proved under the enhanced conditions, and the MHE was shown to be RGAS even if the cost function does not have the max terms that are needed in the stability analyses of FIE \cite{ji2016robust}. } Other relevant progress was reported in \cite{liu2013moving}, which assumed a quadratic cost function and used a nonlinear deterministic observer to generate useful constraints so that the MHE results in bounded estimation errors under certain conditions. Some earlier developments are also available in \cite{alessandri2008moving,alessandri2010advances}, which assumed a quadratic cost function for MHE and also imposed an observability condition and some Lipschitz conditions on the system. The other developments of MHE are mainly in applied research, which mostly aimed at reducing the online computational complexity of MHE for applications in large-dimensional and nonlinear systems \cite{rawlings2014moving}. Interested readers are referred to \cite{zavala2008fast,kuhl2011real,lopez2012moving,wynn2014convergence,alessandri2016moving_cdc} and the references therein for relevant examples.
This paper follows the general view of MHE developed in \cite{rawlings2012optimization}, and aims to present a systematic solution to the aforementioned open challenge (i). It significantly extends our conference paper \cite{hu2015optimization}, which identified sufficient conditions for the RGAS of FIE. The major new contributions are the following. First, an explicit link is established from the RGAS of FIE to that of MHE. Second, based on this link we prove the RGAS of MHE under both general and specialized conditions, depending on the stability property of a system. The convergence of MHE to the true state is also established for a system subjected to disturbances that converge to zero. Third, two numerical examples are developed with rigorous analyses to support the theoretical findings. The numerical results are compared with those obtained by KF in the linear case and by the extended Kalman filter (EKF) \cite{ljung1979asymptotic} in the nonlinear case, demonstrating the advantages of MHE in the considered situations. { Since the discrete-time system and the MHE are assumed to have general forms, the theoretical results imply flexible MHE designs with assured robust stability for a broad class of systems. This constitutes a key difference from the recent stability results obtained in \cite{muller2017nonlinear}, which are applicable to a subset of the considered systems. } The rest of the paper is organized as follows. Section \ref{sec: Notation-and-Preliminaries} introduces the notation and some preliminaries. Section \ref{sec: optimization-based estimation} defines the ideal and the practical forms of optimization-based state estimation, i.e., FIE and MHE. Section \ref{sec: RGAS of FIE} proves the robust stability of FIE under general and then specialized conditions. Section \ref{sec: RGAS of the MHE} reveals its implication for MHE and subsequently proves the robust stability of MHE under enhanced conditions.
Convergence of the MHE is also proved for disturbances that are convergent to zero. Section \ref{sec:numerical example} presents two numerical examples to illustrate the flexible MHE designs. Section \ref{sec:Discussion} provides a brief discussion on ways to tackle the computational challenge of MHE. Finally, Section \ref{sec:Conclusion} concludes the paper with a remark on the future work. \section{Notation and Preliminaries \label{sec: Notation-and-Preliminaries}} The notation mostly follows the convention in \cite{rawlings2012optimization}. The symbols $\mathbb{R}$, $\mathbb{R}_{\ge0}$ and $\mathbb{I}_{\ge0}$ denote the sets of real numbers, nonnegative real numbers and nonnegative integers, respectively, and $\mathbb{I}_{a:b}$ denotes the set of integers from $a$ to $b$. The constraints $t\ge0$ and $t\in\mathbb{I}_{\ge0}$ are used interchangeably to refer to the set of discrete times. The symbol $\left|\cdot\right|$ denotes the Euclidean norm of a vector or the 2-norm of a matrix, depending on the argument. The bold symbol $\boldsymbol{x}_{a:b}$ denotes a sequence of vector-valued variables $\{x_{a},\,x_{a+1},\,...,\thinspace x_{b}\}$, and with a function $f$ acting on a vector $x$, $f(\boldsymbol{x}_{a:b})$ stands for the sequence of function values $\{f(x_{a}),\,f(x_{a+1}),\,...,\thinspace f(x_{b})\}$. The notation $\left\Vert \boldsymbol{x}_{a:b}\right\Vert $ refers to $\max_{a\le i\le b}\left|x_{i}\right|$ if $a\le b$ and to 0 if $a>b$. Throughout the paper, $t$ refers to a discrete time, and as a subscript it indicates dependence on time $t$. In contrast, the subscripts or superscripts $x$, $w$ and $v$ are used exclusively to indicate a function or variable that is associated with the state ($x$), disturbance ($w$) or measurement noise ($v$), and they do not imply dependence relationships. The frequently used $\mathcal{K}$, $\mathcal{K}_{\infty}$, $\mathcal{L}$ and $\mathcal{K}L$ functions are defined as follows.
\begin{defn} ($\mathcal{K}$, $\mathcal{K}_{\infty}$, $\mathcal{L}$ and $\mathcal{K}L$ functions) A function $\alpha:\mathbb{R}_{\ge0}\to\mathbb{R}_{\ge0}$ is a $\mathcal{K}$ function if it is continuous, zero at zero, and strictly increasing, and a $\mathcal{K}_{\infty}$ function if $\alpha$ is a $\mathcal{K}$ function and satisfies $\alpha(s)\to\infty$ as $s\to\infty$. A function $\varphi:\mathbb{R}_{\ge0}\to\mathbb{R}_{\ge0}$ is an $\mathcal{L}$ function if it is continuous, nonincreasing and satisfies $\varphi(t)\to0$ as $t\to\infty$. A function $\beta:\mathbb{R}_{\ge0}\times\mathbb{R}_{\ge0}\to\mathbb{R}_{\ge0}$ is a $\mathcal{K}L$ function if, for each $t\ge0$, $\beta(\cdot,t)$ is a $\mathcal{K}$ function and for each $s\ge0$, $\beta(s,\cdot)$ is an $\mathcal{L}$ function. \end{defn} The following properties of the $\mathcal{K}$ and $\mathcal{K}L$ functions will be used in later analyses. \begin{lem} \cite{khalil2002nonlinear,rawlings2012optimization} Given a $\mathcal{K}$ function $\alpha$ and a $\mathcal{K}L$ function $\beta$, the following holds for all $a_{i}\in\mathbb{R}_{\ge0}$, $i\in\mathbb{I}_{1:n}$, and all $t\in\mathbb{R}_{\ge0}$, \[ \alpha\left(\sum_{i=1}^{n}a_{i}\right)\le\sum_{i=1}^{n}\alpha(na_{i}),\,\,\beta\left(\sum_{i=1}^{n}a_{i},t\right)\le\sum_{i=1}^{n}\beta(na_{i},t).
\] \end{lem} \begin{defn} \label{def: i-IOSS}(i-IOSS \cite{rawlings2012optimization,sontag1997output}) The system $x_{t+1}=f(x_{t},w_{t})$, $y_{t}=h(x_{t})$ is i-IOSS if there exist functions $\beta\in\mathcal{K}L$ and $\alpha_{1},\alpha_{2}\in\mathcal{K}$ such that for every two initial states $x_{0}^{(1)},x_{0}^{(2)}$, and two sequences of disturbances $\boldsymbol{w}_{0:t-1}^{(1)},\boldsymbol{w}_{0:t-1}^{(2)}$, the following inequality holds for all $t\in\mathbb{I}_{\ge0}$: \begin{align} & \left|x_{t}(x_{0}^{(1)},\boldsymbol{w}_{0:t-1}^{(1)})-x_{t}(x_{0}^{(2)},\boldsymbol{w}_{0:t-1}^{(2)})\right|\le\beta(\left|x_{0}^{(1)}-x_{0}^{(2)}\right|,t)\nonumber \\ & +\alpha_{1}(\left\Vert \boldsymbol{w}_{0:t-1}^{(1)}-\boldsymbol{w}_{0:t-1}^{(2)}\right\Vert )+\alpha_{2}(\left\Vert h(\boldsymbol{x}_{0:t}^{(1)})-h(\boldsymbol{x}_{0:t}^{(2)})\right\Vert ),\label{eq:definition - i-IOSS} \end{align} where $x_{t}^{(i)}$ is a shorthand of $x_{t}(x_{0}^{(i)},\boldsymbol{w}_{0:t-1}^{(i)})$ for $i=1$ and 2. \end{defn} The definition of i-IOSS can be interpreted as a ``detectability'' concept for nonlinear systems \cite{sontag1997output}, as the state may be ``detected'' from the \emph{noise-free} output by (\ref{eq:definition - i-IOSS}). In particular, if in \eqref{eq:definition - i-IOSS} $\beta(s,t)=\alpha(s)a^{t}$ for all $s,t\ge0$, with $\alpha\in\mathcal{K}$ and $a$ being a constant within $(0,1)$, we say that the system is \emph{exponentially i-IOSS} or \emph{exp-i-IOSS} for short. This can be viewed as extending the exponential input-to-state stability \cite{grune1999input,liu2010exponential} to the context of i-IOSS. \begin{defn} \label{def: (-factorizable-} ($\mathcal{K}dL$ function) A $\mathcal{K}L$ function $\beta$ is called a $\mathcal{K}dL$ function if there exist functions $\alpha\in\mathcal{K}$ and $\varphi\in\mathcal{L}$ such that $\beta(s,t)=\alpha(s)\varphi(t)$, for all $s,t\ge0$. 
\end{defn} As an example, the $\mathcal{K}L$ function $se^{-t}$ is a $\mathcal{K}dL$ function for $s,t\ge0$. The next lemma shows the general interest of $\mathcal{K}dL$ functions. \begin{lem} \label{lem: factorizable-KL-bound} ($\mathcal{K}dL$ bound, Lemma 8 in \cite{sontag1998comments}) Given an arbitrary $\mathcal{K}L$ function $\beta$, there exists a $\mathcal{K}dL$ function $\bar{\beta}$ such that $\beta(s,t)\le\bar{\beta}(s,t)$ for all $s,t\ge0$. \end{lem} Lemma \ref{lem: factorizable-KL-bound} implies that the i-IOSS property defined by means of a $\mathcal{K}L$ function can be defined equivalently using a $\mathcal{K}dL$ function. This is useful in the later analyses of FIE and MHE. The following definition of a Lipschitz continuous function will also be used in the analysis of MHE. \begin{defn} \label{def: Lipschitz-continuous} (Lipschitz continuous function) A function $f:\mathbb{R}^{n}\to\mathbb{R}^{m}$ is Lipschitz continuous over a subset $\mathbb{S}\subseteq\mathbb{R}^{n}$ if there is a constant $c$ such that $|f(x)-f(y)|\le c|x-y|$ for all $x,\thinspace y\in\mathbb{S}$. \end{defn} \section{Optimization-based State Estimation\label{sec: optimization-based estimation}} Consider a discrete-time nonlinear system described by \begin{equation} x_{t+1}=f(x_{t},w_{t}),\,\,y_{t}=h(x_{t})+v_{t},\label{eq:system} \end{equation} where $x_{t}\in\mathbb{R}^{n}$ is the system state, $y_{t}\in\mathbb{R}^{p}$ the measurement, $w_{t}\in\mathbb{R}^{g}$ the process disturbance, and $v_{t}\in\mathbb{R}^{p}$ the measurement disturbance, all at time $t$. { Since control inputs known up to the estimation time can be treated as given constants, they do not cause difficulty to the optimization defined later and the related analyses, and hence are ignored for brevity in the problem formulation \cite{rawlings2012optimization}.
} The functions $f$ and $h$ are assumed to be continuous and known, and the initial state $x_{0}$ and the disturbances $(w_{t},v_{t})$ are modeled as unknown but \emph{bounded} variables. Given a time $t$, the state estimation problem is to find an optimal estimate of the state $x_{t}$ based on measurements $\{y_{\tau}\}$ for $\tau$ in a given time set with $\tau\le t$. In the ideal case, all measurements up to time $t$ are used, leading to the so-called FIE; in the practical case, only measurements within a limited distance from time $t$ are used, yielding the so-called MHE. Both FIE and MHE can be cast as optimization problems as defined next. Let the decision variables of FIE be $(\boldsymbol{\chi}_{0:t},\boldsymbol{\omega}_{0:t-1},\boldsymbol{\nu}_{0:t})$, which correspond to the system variables $(\boldsymbol{x}_{0:t},\boldsymbol{w}_{0:t-1},\boldsymbol{v}_{0:t})$, and the optimal decision variables be $(\hat{\boldsymbol{x}}_{0:t},\hat{\boldsymbol{w}}_{0:t-1},\hat{\boldsymbol{v}}_{0:t})$. Since $\hat{\boldsymbol{x}}_{0:t}$, which consists of optimal estimates at all sampled times, is uniquely determined once $\hat{x}_{0}$ and $\hat{\boldsymbol{w}}_{0:t-1}$ are known, the decision variables essentially reduce to $(\chi_{0},\boldsymbol{\omega}_{0:t-1},\boldsymbol{\nu}_{0:t})$. { Here, although $\boldsymbol{\nu}_{0:t}$ is uniquely determined by $\chi_{0}$ and $\boldsymbol{\omega}_{0:t-1}$, we keep it for the convenience of expressing bounds and penalty costs to be defined on $\boldsymbol{\nu}_{0:t}$. } Given a present time $t\in\mathbb{I}_{\ge0}$, let $\bar{x}_{0}$ be the prior estimate of the initial state, which may be obtained from the initial or historical measurements. The uncertainty in the initial state is thus represented by $\chi_{0}-\bar{x}_{0}$.
Denote the time-dependent cost function as $V_{t}(\chi_{0}-\bar{x}_{0},\boldsymbol{\omega}_{0:t-1},\boldsymbol{\nu}_{0:t})$, which penalizes uncertainties in the initial state, the process and the measurements. Then, FIE is defined by the following optimization problem: \begin{equation} \begin{aligned}\mathrm{FIE:}\thinspace\thinspace & \inf V_{t}(\chi_{0}-\bar{x}_{0},\boldsymbol{\omega}_{0:t-1},\boldsymbol{\nu}_{0:t})\\ \mathrm{subject~to,~} & \chi_{\tau+1}=f(\chi_{\tau},\omega_{\tau}),\,\,\forall\tau\in\mathbb{I}_{0:t-1},\\ & y_{\tau}=h(\chi_{\tau})+\nu_{\tau},\,\,\forall\tau\in\mathbb{I}_{0:t},\\ & \chi_{0}\in\mathbb{B}_{x}^{0},\thinspace\thinspace\boldsymbol{\omega}_{0:t-1}\in\mathbb{B}_{w}^{0:t-1},\,\,\boldsymbol{\nu}_{0:t}\in\mathbb{B}_{v}^{0:t}, \end{aligned} \label{eq: FIE} \end{equation} where $\{\chi_{0},\thinspace\boldsymbol{\omega}_{0:t-1},\thinspace\boldsymbol{\nu}_{0:t}\}$ are the decision variables. Here $\mathbb{B}_{x}^{0}$, $\mathbb{B}_{w}^{0:t-1}$ and $\mathbb{B}_{v}^{0:t}$ denote the sets of bounded initial states, bounded sequences of process and measurement disturbances, respectively, for all $t\in\mathbb{I}_{\ge0}$, of which the latter two sets may vary with time. Since the optimal decision variable $\hat{x}_\tau$, for any $\tau\le t$, is dependent on the time $t$ when the FIE instance is defined, to be unambiguous we use $\hat{x}_{t}^{*}$ to exclusively represent $\hat{x}_{t}$ that is obtained from the FIE instance defined at time $t$. This keeps $\hat{x}_{t}^{*}$ unique, while $\hat{x}_{t}$ varies as the FIE is renewed with new measurements as time elapses. 
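To make the FIE optimization concrete, here is a minimal numerical sketch of \eqref{eq: FIE} for a scalar linear system with a quadratic cost. The system $x_{t+1}=ax_t+w_t$, $y_t=x_t+v_t$, the numerical values, and the coarse grid search standing in for the infimum are all illustrative choices of ours, not taken from the paper's examples.

```python
# FIE sketch for the scalar system x_{t+1} = a x_t + w_t, y_t = x_t + v_t
# with quadratic cost V_t = |chi0 - x0_bar|^2 + sum_k w_k^2 + sum_k nu_k^2.
# The infimum over (chi0, w_0..w_{t-1}) is approximated by brute-force search
# over a coarse grid; all numerical values are illustrative.
from itertools import product

a = 0.9                                      # known dynamics
t = 2                                        # estimation time
y = [a**k for k in range(t + 1)]             # noise-free data from x_0 = 1
x0_bar = 1.0                                 # prior estimate of x_0

grid = [k * 0.25 for k in range(-4, 5)]      # candidate values in [-1, 1]

def rollout(chi0, w):
    """Model-consistent states chi_0..chi_t generated from (chi0, w)."""
    chi = [chi0]
    for wk in w:
        chi.append(a * chi[-1] + wk)
    return chi

def cost(chi0, w):
    chi = rollout(chi0, w)
    nu = [y[k] - chi[k] for k in range(t + 1)]     # measurement fitting errors
    return (chi0 - x0_bar)**2 + sum(wk**2 for wk in w) + sum(nk**2 for nk in nu)

best = min(((c0,) + w for c0 in grid for w in product(grid, repeat=t)),
           key=lambda z: cost(z[0], z[1:]))
x_hat_t = rollout(best[0], best[1:])[-1]     # FIE estimate of x_t at time t
```

With noise-free data and a prior equal to the truth, the grid contains the exact minimizer $(\chi_0,\boldsymbol{\omega})=(1,\mathbf{0})$, so the search returns cost $0$ and the estimate $\hat{x}_2^{*}=a^2$.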
Given a constant $T\in\mathbb{I}_{\ge0}$, if the measurements are limited to the $T+1$ measurements backwards from and including the present time $t$, then the following optimization defines MHE, i.e., \begin{equation} \begin{aligned}\mathrm{MHE:~} & \inf V_{T}(\chi_{t-T}-\bar{x}_{t-T},\boldsymbol{\omega}_{t-T:t-1},\boldsymbol{\nu}_{t-T:t})\\ \mathrm{subject~to,~} & \chi_{\tau+1}=f(\chi_{\tau},\omega_{\tau}),\,\,\forall\tau\in\mathbb{I}_{t-T:t-1},\\ & y_{\tau}=h(\chi_{\tau})+\nu_{\tau},\,\,\forall\tau\in\mathbb{I}_{t-T:t},\\ \chi_{t-T}\in\mathbb{B}_{x}^{t-T},\,\,& \boldsymbol{\omega}_{t-T:t-1}\in\mathbb{B}_{w}^{t-T:t-1},\,\,\boldsymbol{\nu}_{t-T:t}\in\mathbb{B}_{v}^{t-T:t}, \end{aligned} \label{eq: MHE} \end{equation} where $\{\chi_{t-T},\thinspace\boldsymbol{\omega}_{t-T:t-1},\thinspace\boldsymbol{\nu}_{t-T:t}\}$ are the decision variables, $\bar{x}_{t-T}$ is a prior estimate of $x_{t-T}$, and $\mathbb{B}_{x}^{t-T}$, $\mathbb{B}_{w}^{t-T:t-1}$ and $\mathbb{B}_{v}^{t-T:t}$ denote the bounding sets for the time period from $t-T$ to $t$. We use $\hat{x}_{t}^{\star}$ to represent $\hat{x}_{t}$ that is obtained from the MHE instance defined at time $t$. In this way, $\hat{x}_{t}^{\star}$ remains unique although $\hat{x}_{t}$ varies as the MHE instance is renewed with new measurements. Since the cost function is in the same form as in FIE except for the truncated argument variables, the MHE defined in \eqref{eq: MHE} is named the \textit{associated} MHE of the FIE defined in \eqref{eq: FIE}, and vice versa. { By definition, FIE uses all available historical measurements to perform state estimation. Thus its computational complexity increases with time and ultimately becomes intractable, which makes FIE impractical for applications.
For this reason, FIE is studied mainly for its theoretical interest: its performance can be viewed as a limit or benchmark that MHE tries to approach, and its stability can be a good starting point for the stability analysis of MHE, which will become clear later on. } An important issue in designing FIE and MHE is to identify conditions under which the associated optimizations have optimal solutions such that the state estimates satisfy the RGAS property defined below. Let $\boldsymbol{x}_{0:t}(x_{0},\boldsymbol{w}_{0:t-1})$ denote a state sequence generated from an initial condition $x_{0}$ and a disturbance sequence $\boldsymbol{w}_{0:t-1}$. \begin{defn} (RGAS \cite{rawlings2012optimization}) The estimate $\hat{x}_{t}$ of the state $x_{t}$ is based on a partial or full sequence of the noisy measurements, $\boldsymbol{y}_{0:t}=h(\boldsymbol{x}_{0:t}(x_{0},\boldsymbol{w}_{0:t-1}))+\boldsymbol{v}_{0:t}$. The estimate is RGAS if for all $x_{0}$ and $\bar{x}_{0}$, there exist functions $\beta_{x}\in\mathcal{K}L$ and $\alpha_{w},\alpha_{v}\in\mathcal{K}$ such that the following inequality holds for all $\boldsymbol{w}_{0:t-1}\in\mathbb{B}_{w}^{0:t-1}$, $\boldsymbol{v}_{0:t}\in\mathbb{B}_{v}^{0:t}$ and $t\in\mathbb{I}_{\ge0}$: \begin{align} & \left|x_{t}-\hat{x}_{t}\right|\nonumber \\ & \le\beta_{x}(\left|x_{0}-\bar{x}_{0}\right|,t)+\alpha_{w}(\left\Vert \boldsymbol{w}_{0:t-1}\right\Vert )+\alpha_{v}(\left\Vert \boldsymbol{v}_{0:t}\right\Vert ).\label{def: RGAS} \end{align} \end{defn} Note that the last measurement $y_{t}$, and hence the corresponding fitting error $v_{t}$, are considered in the above inequality, whereas they are absent from the original definition \cite{rawlings2012optimization}. To have FIE or MHE that is RGAS, the cost function needs to penalize the uncertainties appropriately, and meanwhile the system dynamics should satisfy certain conditions. We present such sufficient conditions for FIE and MHE respectively in the next two sections.
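As a toy illustration of how the MHE recursion \eqref{eq: MHE} propagates the prior (the prior $\bar{x}_{t-T}$ of each window is taken from an earlier MHE estimate), consider a scalar linear system with a quadratic cost. The dynamics, the horizon length, and the coarse grid search replacing the infimum are all illustrative choices of ours, not taken from the paper's examples.

```python
# MHE sketch for x_{t+1} = a x_t + w_t, y_t = x_t + v_t with horizon T = 1:
# at each time t only y_{t-T..t} are used, and the prior xbar_{t-T} is the
# MHE estimate produced earlier for time t-T. All numbers are illustrative;
# a coarse grid search stands in for the infimum.
from itertools import product

a, T = 0.5, 1
x_true = [1.0]
for _ in range(4):                   # noise-free trajectory x_0..x_4
    x_true.append(a * x_true[-1])
y = list(x_true)                     # measurements with v_t = 0

grid = [k * 0.0625 for k in range(-16, 17)]   # candidate values in [-1, 1]

def mhe_step(xbar, y_win):
    """Minimize |chi - xbar|^2 + sum w^2 + sum nu^2 over one horizon window."""
    best_chi, best_cost = None, float("inf")
    for chi0, *w in product(grid, repeat=len(y_win)):
        chi = [chi0]
        for wk in w:
            chi.append(a * chi[-1] + wk)
        c = (chi0 - xbar)**2 + sum(wk**2 for wk in w) \
            + sum((y_win[k] - chi[k])**2 for k in range(len(y_win)))
        if c < best_cost:
            best_chi, best_cost = chi, c
    return best_chi

estimates = [1.0]                    # prior at time 0 taken equal to the truth
for t in range(1, len(y)):
    lo = max(0, t - T)
    estimates.append(mhe_step(estimates[lo], y[lo:t + 1])[-1])
```

With noise-free data the window estimates reproduce the true trajectory exactly; with disturbances, the horizon length $T$ governs how fast the influence of a poor prior is suppressed, which is the mechanism behind the robust-stability conditions for MHE discussed later.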
\begin{rem} \label{rem: on-notation} In the above formulation of FIE, the notation of an estimate, $\hat{\cdot}_{\tau}$ for $\tau\in\mathbb{I}_{0:t}$, is a shorthand of $\hat{\cdot}_{\tau}(x_{0},\boldsymbol{w}_{0:t-1},\boldsymbol{v}_{0:t})$. Similar shorthand is used in MHE. The meaning of $\hat{\cdot}_{\tau}$ hence depends on time, and will be explained if ambiguity arises. \end{rem} \section{Robust Stability of FIE\label{sec: RGAS of FIE}} This section summarizes the results on the robust stability of FIE which were obtained in our recent conference paper \cite{hu2015optimization}. The results are rephrased for ease of understanding, and some changes are also included and explained. The stability results rely on the following two assumptions. \begin{assumption} \label{assump: A1} { The cost function of FIE is given as: $V_{t}(\chi_{0}-\bar{x}_{0},\boldsymbol{\omega}_{0:t-1},\boldsymbol{\nu}_{0:t})=V_{t,1}(\chi_{0}-\bar{x}_{0})+V_{t,2}(\boldsymbol{\omega}_{0:t-1},\boldsymbol{\nu}_{0:t})$, for $t\in\mathbb{I}_{\ge0}$, where $V_{t,1}$ and $V_{t,2}$ are continuous functions and satisfy the following inequalities for all $\chi_{0}\in\mathbb{B}_{x}^{0}$, $\boldsymbol{\omega}_{0:t-1}\in\mathbb{B}_{w}^{0:t-1}$, $\boldsymbol{\nu}_{0:t}\in\mathbb{B}_{v}^{0:t}$ and $t\in\mathbb{I}_{\ge0}$: \begin{align} \underbar{\ensuremath{\rho}}_{x}(\left|\chi_{0}-\bar{x}_{0}\right|,t)\le V_{t,1}(\chi_{0}-\bar{x}_{0})\le\rho_{x}(\left|\chi_{0}-\bar{x}_{0}\right|,t),\label{eq: Assumption 1a}\\ \underbar{\ensuremath{\gamma}}_{w}(\left\Vert \boldsymbol{\omega}_{0:t-1}\right\Vert )+\underbar{\ensuremath{\gamma}}_{v}(\left\Vert \boldsymbol{\nu}_{0:t}\right\Vert )\le V_{t,2}(\boldsymbol{\omega}_{0:t-1},\boldsymbol{\nu}_{0:t})\nonumber \\ \le\gamma_{w}(\left\Vert \boldsymbol{\omega}_{0:t-1}\right\Vert )+\gamma_{v}(\left\Vert \boldsymbol{\nu}_{0:t}\right\Vert ),\label{eq: Assumption 1b} \end{align} where $\underbar{\ensuremath{\rho}}_{x},\rho_{x}\in\mathcal{K}L$ and
$\underbar{\ensuremath{\gamma}}_{w},\underbar{\ensuremath{\gamma}}_{v},\gamma_{w},\gamma_{v}\in\mathcal{K}_{\infty}$. } \end{assumption} \begin{assumption} \label{assump: A2} The $\mathcal{K}$ and $\mathcal{K}L$ functions in (\ref{eq:definition - i-IOSS}), (\ref{def: RGAS}) and (\ref{eq: Assumption 1a}) satisfy the following inequality for all $s_{x},s_{w},s_{v},t\ge0$: \begin{align} & \beta\left(s_{x}+\underbar{\ensuremath{\rho}}_{x}^{-1}\left(\rho_{x}(s_{x},t)+\gamma_{w}(s_{w})+\gamma_{v}(s_{v}),t\right),t\right)\nonumber \\ & \le\bar{\beta}_{x}(s_{x},t)+\bar{\alpha}_{w}(s_{w})+\bar{\alpha}_{v}(s_{v}), \label{eq: Assumption 2} \end{align} in which $\underbar{\ensuremath{\rho}}_{x}^{-1}(\cdot,t)$ refers to the inverse of $\underbar{\ensuremath{\rho}}_{x}(s,t)$ in its first argument $s$ at time $t$, and $\bar{\beta}_{x}$, $\bar{\alpha}_{w}$ and $\bar{\alpha}_{v}$ are certain $\mathcal{K}L$, $\mathcal{K}$ and $\mathcal{K}$ functions, respectively. \end{assumption} If we interpret the cost function $V_{t}$ as a measure of the deviation of a state estimate of a disturbed system from the true state of the corresponding undisturbed system, then Assumption \ref{assump: A1} basically requires the deviation to be both lower and upper bounded in an i-IOSS-like manner. { The assumption ensures that the FIE has sufficient sensitivity to the involved uncertainties, i.e., the sub-cost $V_{t,1}(\chi_{0}-\bar{x}_{0})$ decays neither too fast nor too slowly with respect to $\chi_{0}-\bar{x}_{0}$, and the sub-cost $V_{t,2}(\boldsymbol{\omega}_{0:t-1},\boldsymbol{\nu}_{0:t})$ is lower and upper bounded by strictly increasing functions of the norms of $\boldsymbol{\omega}_{0:t-1}$ and $\boldsymbol{\nu}_{0:t}$. 
} { In Assumption \ref{assump: A2}, if we interpret the argument, $\rho_{x}(s_{x},t)+\gamma_{w}(s_{w})+\gamma_{v}(s_{v})$, as a metric of the deviation from the prior of the true initial state (as caused by the uncertainties $s_x$, $s_w$ and $s_v$), then the function $\underbar{\ensuremath{\rho}}_{x}^{-1}$ aims to infer an upper bound of the deviation based on this metric. The inferred bound, $\underbar{\ensuremath{\rho}}_{x}^{-1}\left(\rho_{x}(s_{x},t)+\gamma_{w}(s_{w})+\gamma_{v}(s_{v}),t\right)$, is added to $s_x$ to form an error bound of the inferred initial state. Subsequently, this error bound is required to be small enough such that the induced error is bounded by an i-IOSS-like bound after being suppressed by the i-IOSS property of the system (as indicated by the $\mathcal{K}L$ function $\beta$ here). Alternatively, we may simply interpret Assumption \ref{assump: A2} as requiring that the FIE be more sensitive than the system to the uncertainty in the initial state, so that accurate inference of the initial state is possible. The conditions of Assumption \ref{assump: A2} and their interpretations will become more concrete in Lemmas \ref{lem:The-FIE-defined}-\ref{lem:The-FIE-defined3} presented below. } The robust stability of FIE is then established under Assumptions \ref{assump: A1} and \ref{assump: A2}. \begin{thm} \label{thm:RGAS-of-FIE}(RGAS of FIE) The FIE defined in \eqref{eq: FIE} is RGAS if the following three conditions are satisfied: 1) the system described in \eqref{eq:system} is i-IOSS, 2) the cost function of the FIE satisfies Assumptions \ref{assump: A1}-\ref{assump: A2}, and 3) the infimum of the optimization in the FIE is attainable, i.e., exists and is numerically obtainable. \end{thm} { Condition 3) is needed because the state estimate is assumed to be computed as an optimal solution to the optimization defined in \eqref{eq: FIE}. } As a result, the $\mathcal{K}$ and $\mathcal{K}L$ functions of the RGAS property (cf. 
Definition \ref{def: RGAS}) can be obtained explicitly as: \begin{multline} \beta_{x}(\left|x_{0}-\bar{x}_{0}\right|,\,t)=\bar{\beta}_{x}\left(\left|x_{0}-\bar{x}_{0}\right|,\,t\right)\\ +\alpha_{1}\left(3\underbar{\ensuremath{\gamma}}_{w}^{-1}\left(3\rho_{x}(\left|x_{0}-\bar{x}_{0}\right|,\,t)\right)\right)\\ +\alpha_{2}\left(3\underbar{\ensuremath{\gamma}}_{v}^{-1}\left(3\rho_{x}(\left|x_{0}-\bar{x}_{0}\right|,\,t)\right)\right),\label{eq: beta_x} \end{multline} \begin{multline} \alpha_{w}(\left\Vert \boldsymbol{w}_{0:t-1}\right\Vert )=\bar{\alpha}_{w}\left(\left\Vert \boldsymbol{w}_{0:t-1}\right\Vert \right)\\ +\alpha_{1}\left(3\left\Vert \boldsymbol{w}_{0:t-1}\right\Vert +3\underbar{\ensuremath{\gamma}}_{w}^{-1}\left(3\gamma_{w}(\left\Vert \boldsymbol{w}_{0:t-1}\right\Vert )\right)\right)\\ +\alpha_{2}\left(3\underbar{\ensuremath{\gamma}}_{v}^{-1}\left(3\gamma_{w}(\left\Vert \boldsymbol{w}_{0:t-1}\right\Vert )\right)\right),\label{eq: alpha_w} \end{multline} \begin{multline} \alpha_{v}(\left\Vert \boldsymbol{v}_{0:t}\right\Vert )=\bar{\alpha}_{v}\left(\left\Vert \boldsymbol{v}_{0:t}\right\Vert \right)+\alpha_{1}\left(3\underbar{\ensuremath{\gamma}}_{w}^{-1}\left(3\gamma_{v}(\left\Vert \boldsymbol{v}_{0:t}\right\Vert )\right)\right)\\ +\alpha_{2}\left(3\left\Vert \boldsymbol{v}_{0:t}\right\Vert +3\underbar{\ensuremath{\gamma}}_{v}^{-1}\left(3\gamma_{v}(\left\Vert \boldsymbol{v}_{0:t}\right\Vert )\right)\right).\label{eq: alpha_v} \end{multline} \begin{rem} As shown in \cite{hu2015optimization}, FIE can be proved to converge to the true state when the disturbances converge to zero, provided the feasible sets $\mathbb{B}_{w}^{0:t-1}$ and $\mathbb{B}_{v}^{0:t}$ restrict the disturbance estimates to converge to zero. However, it is unclear whether there exists a form of cost function such that the conclusion remains true without imposing this restriction. 
\end{rem} A more specific form of the sub-cost function $V_{t,2}$ that satisfies Assumption \ref{assump: A1} is given by the following:{\small{}{} \begin{align} & V_{t,2}(\boldsymbol{\omega}_{0:t-1},\,\boldsymbol{\nu}_{0:t}):=\dfrac{\lambda_{w}}{t}\sum_{\tau\in\mathbb{I}_{0:t-1}}l_{w,\tau}(\omega_{\tau})+\dfrac{\lambda_{v}}{t+1}\sum_{\tau\in\mathbb{I}_{0:t}}l_{v,\tau}(\nu_{\tau})\label{eq: specific form of the sub-cost}\\ & \,\,\,\,+(1-\lambda_{w})\max_{\tau\in\mathbb{I}_{0:t-1}}l_{w,\tau}(\omega_{\tau})+(1-\lambda_{v})\max_{\tau\in\mathbb{I}_{0:t}}l_{v,\tau}(\nu_{\tau}),\nonumber \end{align} }for given constants $\lambda_{w},\,\lambda_{v}\in[0,\,1)$, in which the functions $l_{w,\tau}$ and $l_{v,\tau}$ satisfy the following inequalities for all $\boldsymbol{\omega}_{0:t-1}\in\mathbb{B}_{w}^{0:t-1}$ and $\boldsymbol{\nu}_{0:t}\in\mathbb{B}_{v}^{0:t}$: \begin{equation} \begin{gathered}\underbar{\ensuremath{\gamma}}_{w}'(|\omega_{\tau}|)\le l_{w,\tau}(\omega_{\tau})\le\gamma_{w}'(|\omega_{\tau}|),\\ \underbar{\ensuremath{\gamma}}_{v}'(|\nu_{\tau}|)\le l_{v,\tau}(\nu_{\tau})\le\gamma_{v}'(|\nu_{\tau}|), \end{gathered} \label{eq: specific-subcost-form - b} \end{equation} where $\underbar{\ensuremath{\gamma}}_{w}',\,\underbar{\ensuremath{\gamma}}_{v}',\,\gamma_{w}',\,\gamma_{v}'\in\mathcal{K}_{\infty}$. In the sub-cost function, the terms associated with $\omega_{\tau}$ vanish if $t=0$. On the other hand, more specific forms of the sub-cost function $V_{t,1}$ that satisfy Assumptions \ref{assump: A1} and \ref{assump: A2} can be obtained if the $\mathcal{K}L$ function $\beta$ of the i-IOSS property of the system belongs to one of two particular types. The derivation is based on the next lemma. 
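The sub-cost \eqref{eq: specific form of the sub-cost} can be sketched directly as a mix of time-averaged and worst-case stage costs. In the sketch below, the quadratic stage cost `quad` is a hypothetical choice satisfying \eqref{eq: specific-subcost-form - b}, and the stage costs are taken time-invariant for simplicity:

```python
def sub_cost_V2(w_est, v_est, l_w, l_v, lam_w=0.5, lam_v=0.5):
    """Sketch of the sub-cost V_{t,2}: a convex mix of averaged and worst-case
    stage costs of the estimated disturbances w_est = (omega_0..omega_{t-1})
    and fitting errors v_est = (nu_0..nu_t).  The stage costs l_w, l_v play
    the roles of l_{w,tau}, l_{v,tau}, taken time-invariant here."""
    t = len(w_est)  # v_est has length t + 1
    cost = lam_v / (t + 1) * sum(l_v(v) for v in v_est) \
        + (1 - lam_v) * max(l_v(v) for v in v_est)
    if t > 0:  # the omega-terms vanish when t = 0, as noted in the text
        cost += lam_w / t * sum(l_w(w) for w in w_est) \
            + (1 - lam_w) * max(l_w(w) for w in w_est)
    return cost

quad = lambda s: s * s  # hypothetical K_infinity stage cost
print(sub_cost_V2([1.0, 2.0], [0.0, 1.0, 2.0], quad, quad))
```

Setting $\lambda_{w}=\lambda_{v}=1$ would recover a purely averaged (least-squares-like) cost, while $\lambda_{w}=\lambda_{v}=0$ penalizes only the worst-case stage costs.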
\begin{lem} \label{lem:The-FIE-defined}Assumption \ref{assump: A2} is satisfied if the $\mathcal{K}L$ functions $\beta$ in \eqref{eq:definition - i-IOSS} and $\underbar{\ensuremath{\rho}}_{x},\thinspace\rho_{x}$ in \eqref{eq: Assumption 1a} are $\mathcal{K}dL$ functions in the form of $\beta(s,\,t)=\mu_{1}(s)\varphi_{1}(t)$, $\underbar{\ensuremath{\rho}}_{x}(s,\,t)=\mu_{2}(s)\varphi_{2}(t)$ and $\ensuremath{\rho}_{x}(s,\,t)=\mu_{3}(s)\varphi_{2}(t)$, with $\mu_{1},\,\mu_{2},\thinspace\mu_{3}\in\mathcal{K}$ and $\varphi_{1},\varphi_{2}\in\mathcal{L}$, and if, furthermore, for any $\pi\in\mathcal{K}$ there exists $\pi'\in\mathcal{K}$ such that \begin{equation} \mu_{1}\left(3\mu_{2}^{-1}\left(\frac{\pi(s)}{\varphi_{2}(t)}\right)\right)\varphi_{1}(t)\le\pi'(s)\label{eq: key-stability-condition} \end{equation} holds for all $s,\thinspace t\ge0$. \end{lem} \begin{IEEEproof} The proof is the same as that of Corollary 1 in \cite{hu2015optimization}, except that a tighter upper bound is used in the deduction: \begin{align} & \beta\left(s_{x}+\underbar{\ensuremath{\rho}}_{x}^{-1}\left(\rho_{x}(s_{x},\,t)+\gamma_{w}(s_{w})+\gamma_{v}(s_{v}),\, t\right),\,t\right)\nonumber \\ & \le\beta\left(3s_{x}+3\underbar{\ensuremath{\rho}}_{x}^{-1}\left(3\rho_{x}(s_{x},\,t),\,t\right),\,t\right)\nonumber \\ & \,\,\,\,\,\,+\beta\left(3\underbar{\ensuremath{\rho}}_{x}^{-1}\left(3\gamma_{w}(s_{w}),\,t\right),\,t\right)+\beta\left(3\underbar{\ensuremath{\rho}}_{x}^{-1}\left(3\gamma_{v}(s_{v}),\,t\right),\,t\right),\nonumber \end{align} of which the three terms can be proved to be upper bounded by $\mathcal{K}L$, $\mathcal{K}$ and $\mathcal{K}$ functions, respectively, by use of \eqref{eq: key-stability-condition}. \end{IEEEproof} In Lemma \ref{lem:The-FIE-defined}, the assumption that $\beta$ is a $\mathcal{K}dL$ function entails no loss of generality, because a $\mathcal{K}dL$ function can always be assigned as an alternative whenever the original $\mathcal{K}L$ function $\beta$ is not in $\mathcal{K}dL$ form (cf. 
Lemma \ref{lem: factorizable-KL-bound}). { The condition that $\underbar{\ensuremath{\rho}}_{x}$ and $\rho_{x}$ in \eqref{eq: Assumption 1a} are $\mathcal{K}dL$ functions is not imposed on the system dynamics, but is a requirement on the cost function of FIE. } The key condition thus boils down to \eqref{eq: key-stability-condition}, which is basically an alternative of the more general condition \eqref{eq: Assumption 2}. Therefore, the previous interpretation of \eqref{eq: Assumption 2} (or Assumption \ref{assump: A2}) is applicable to \eqref{eq: key-stability-condition}. Based on Lemma \ref{lem:The-FIE-defined}, we can prove that the FIE admits an even more specific cost function if the system is i-IOSS with a $\mathcal{K}L$ bound in the rational form. \begin{lem} \label{lem:The-FIE-defined2}Assumption \ref{assump: A2} is satisfied if the following three conditions are satisfied: a) the system \eqref{eq:system} is i-IOSS as per \eqref{eq:definition - i-IOSS} in which the $\mathcal{K}L$ bound is explicitly given as $\beta(s,t)=c_{1}s^{a_{1}}(t+1)^{-b_{1}}$ for some constants $c_{1},a_{1},b_{1}>0$ and all $s,t\ge0$, and b) the sub-cost function $V_{t,1}$ is defined as \begin{align*} V_{t,1}(\chi_{0}-\bar{x}_{0}) & =c_{2}|\chi_{0}-\bar{x}_{0}|^{a_{2}}(t+1)^{-b_{2}}, \end{align*} with $c_{2},\thinspace a_{2},\thinspace b_{2}>0$, and c) the parameters $a_{2}$ and $b_{2}$ satisfy $\frac{a_{2}}{b_{2}}\ge\frac{a_{1}}{b_{1}}$. \end{lem} This lemma implies the main result of \cite{ji2016robust} if the design parameter $b_{2}$ is fixed to 1 (with a minor difference that here the FIE is able to utilize the last measurement in the estimation, whose fitting error is penalized through $\nu_{t}$). Moreover, if the system described in \eqref{eq:system} is exp-i-IOSS, then the conclusion remains valid by replacing the rational-form $\mathcal{K}L$ bound in Lemma \ref{lem:The-FIE-defined2} with an exponential form. 
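Condition c) of the rational-form case is a plain parameter check; a minimal sketch (function name hypothetical):

```python
def v1_params_ok(a1, b1, a2, b2):
    """Condition c) of the rational-form lemma: the prior sub-cost
    V_{t,1}(x) = c2 * |x|**a2 * (t+1)**(-b2) is admissible when
    a2 / b2 >= a1 / b1, i.e. the prior cost must not decay faster,
    relative to its growth exponent, than the i-IOSS bound does."""
    assert min(a1, b1, a2, b2) > 0, "all exponents must be positive"
    return a2 / b2 >= a1 / b1

print(v1_params_ok(1.0, 2.0, 1.0, 1.0))  # prints True  (1/1 >= 1/2)
print(v1_params_ok(1.0, 1.0, 1.0, 2.0))  # prints False (1/2 <  1/1)
```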
\begin{lem} \label{lem:The-FIE-defined3}Assumption \ref{assump: A2} is satisfied if the following three conditions are satisfied: a) the system \eqref{eq:system} is exp-i-IOSS as per \eqref{eq:definition - i-IOSS} in which the $\mathcal{K}L$ function is explicitly given as $\beta(s,\,t)=c_{1}s^{a_{1}}b_{1}^{t}$ for some constants $c_{1},a_{1}>0$ and $0<b_{1}<1$ and all $s,t\ge0$, and b) the sub-cost function $V_{t,1}$ is defined as \begin{align*} V_{t,1}(\chi_{0}-\bar{x}_{0}) & =c_{2}|\chi_{0}-\bar{x}_{0}|^{a_{2}}b_{2}^{t}, \end{align*} with $a_{2}>0$ and $0<b_{2}<1$, and c) the parameters $a_{2}$ and $b_{2}$ satisfy $\sqrt[a_{2}]{b_{2}}\ge\sqrt[a_{1}]{b_{1}}$. \end{lem} In condition b) of Lemma \ref{lem:The-FIE-defined3}, the constraint $b_{2}<1$ is required to make sure that $c_{2}s^{a_{2}}b_{2}^{t}$ is a $\mathcal{K}L$ function of $s$ and $t$, so as to satisfy Assumption \ref{assump: A1} required by Theorem \ref{thm:RGAS-of-FIE}. \begin{rem} As shown in \cite{glas1987exponential}, the set of exponentially stable systems is dense in the set of asymptotically stable systems. Assuming exp-i-IOSS systems in practice, as in Lemma \ref{lem:The-FIE-defined3}, thus entails little loss of generality. \end{rem} \section{Robust Stability of MHE\label{sec: RGAS of the MHE}} { At any discrete time, an MHE instance can be treated as an associated FIE instance that is confined to the same optimization horizon. Thus, the associated FIE instance being RGAS implies the RGAS of MHE within its present optimization horizon. If we interpret this as MHE being robust \textit{locally} asymptotically stable (RLAS) within each optimization horizon of a given size, then the challenge reduces to identifying the conditions under which RLAS implies RGAS of MHE. } To that end, we need an assumption on the prior estimate of the initial state of each MHE instance. 
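The RLAS-to-RGAS mechanism just described can be previewed numerically: assuming each horizon of length $T$ contracts the initial-state error by a factor $\eta$ while adding a fixed disturbance contribution, iterating the per-horizon bound yields a geometric series (all names and numbers below are hypothetical):

```python
def mhe_error_bound(s0, t, T, eta, dist_term):
    """Iterate the per-horizon bound: over each window of length T the
    initial-state error contracts by eta (0 < eta < 1) while picking up the
    disturbance/noise contribution dist_term = alpha_w(||w||) + alpha_v(||v||).
    After n = t // T windows the bound is eta**n * s0 + dist_term * sum_{i<n} eta**i."""
    n = t // T
    geo = (1.0 - eta ** n) / (1.0 - eta)  # partial geometric sum, < 1 / (1 - eta)
    return eta ** n * s0 + dist_term * geo

# With eta = 0.5 the initial-error term halves every horizon, while the
# accumulated disturbance term stays below dist_term / (1 - eta).
print(mhe_error_bound(s0=1.0, t=40, T=10, eta=0.5, dist_term=0.1))
```

This is exactly the shape of the bound obtained by induction in the proof below: a decaying term in the initial error plus a disturbance term bounded by the factor $\frac{1}{1-\eta}$.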
\begin{assumption} \label{assump: A3} Given any time $t\ge T+1$, the prior estimate $\bar{x}_{t-T}$ of $x_{t-T}$ satisfies the following constraint: \[ |x_{t-T}-\bar{x}_{t-T}|\le|x_{t-T}-\hat{x}_{t-T}^{\star}|. \] \end{assumption} The assumption is trivially satisfied if $\bar{x}_{t-T}$ is set to $\hat{x}_{t-T}^{\star}$, which is the past MHE estimate obtained at time $t-T$. Alternatively, a better $\bar{x}_{t-T}$ might be obtained with smoothing techniques that use measurements both before and after time $t-T$ \cite{aravkin2016generalized}. Since a rigorous derivation is non-trivial, this extension is left for future research. The next lemma links the robust stability and convergence of MHE with those of its associated FIE. \begin{lem}\label{lem: MHE-FIE} (Stability link from FIE to MHE) Consider the MHE under Assumption \ref{assump: A3}. Let the uncertainty in the initial state be bounded as $|x_{0}-\bar{x}_{0}|\le M_{0}$, and the disturbances be bounded as $|w_{t}|\le M_{w}$ and $|v_{t}|\le M_{v}$ for all $t\in\mathbb{I}_{\ge0}$. Given a constant $\eta\in(0,\thinspace1)$, the following two conclusions hold: { a) If the associated FIE is RGAS as per \eqref{def: RGAS}, in which the $\mathcal{K}L$ function $\beta_{x}$ satisfies $\beta_{x}(s,\thinspace t)\le\mu(s)\varphi(t)$ for some $\mu\in\mathcal{K}$, $\varphi\in\mathcal{L}$ and all $s\in\mathbb{R}_{\ge0}$, $t\in\mathbb{I}_{\ge0}$, and if there exists $T_{\bar{s},\eta}$ such that $\mu(s)\varphi(T_{\bar{s},\eta})\le\eta s$ for all $s\in[0,\thinspace\bar{s}]$ with $\bar{s}:=\beta_{x}(M_{0},\thinspace0)+\frac{1}{1-\eta}\left(\alpha_{w}(M_{w})+\alpha_{v}(M_{v})\right)$, then MHE is RGAS for all $T\ge T_{\bar{s},\eta}$. In particular, if $\mu(s)$ is Lipschitz continuous at the origin, then $T_{\bar{s},\eta}$ exists and can be determined from the inequality $\varphi(T_{\bar{s},\eta})\le\frac{\eta s^{*}}{\mu(s^{*})}$, with $s^{*}:=\arg\min_{s\in[0,\thinspace\bar{s}]}\frac{s}{\mu(s)}$. 
} b) If the associated FIE estimate ($\hat{x}_{t}^{*}$) converges to the true state, i.e., the estimate satisfies $|x_{t}-\hat{x}_{t}^{*}|\le\rho'(|x_{0}-\bar{x}_{0}|,\thinspace t)\le\mu'(|x_{0}-\bar{x}_{0}|)\varphi'(t)$ for some $\rho'\in\mathcal{K}L$, $\mu'\in\mathcal{K}$, $\varphi'\in\mathcal{L}$ and all $t\in\mathbb{I}_{\ge0}$, and if there exists $T_{\bar{s}',\eta}$ such that $\mu'(s)\varphi'(T_{\bar{s}',\eta})\le\eta s$ for all $s\in[0,\thinspace\bar{s}']$ with $\bar{s}':=\rho'(M_{0},\thinspace0)$, then the MHE estimate ($\hat{x}_{t}^{\star}$) converges to the true state for all $T\ge T_{\bar{s}',\eta}$. In particular, if $\mu'(s)$ is Lipschitz continuous at the origin, then $T_{\bar{s}',\eta}$ exists and can be determined from the inequality $\varphi'(T_{\bar{s}',\eta})\le\frac{\eta s^{\star}}{\mu'(s^{\star})}$, with $s^{\star}:=\arg\min_{s\in[0,\thinspace\bar{s}']}\frac{s}{\mu'(s)}$. \end{lem} \begin{IEEEproof} a)\emph{ RGAS.} Given $t\in\mathbb{I}_{0:T-1}$, the MHE estimate ($\hat{x}_{t}^{\star}$) is the same as the associated FIE estimate ($\hat{x}_{t}^{*}$), and so the estimation error norm $\left|x_{t}-\hat{x}_{t}^{\star}\right|$ satisfies the RGAS inequality by \eqref{def: RGAS}. Specifically, under Assumption \ref{assump: A3} the RGAS inequality implies that, for all $t\in\mathbb{I}_{0:T-1}$, \begin{equation} \left|x_{t}-\bar{x}_{t}\right|\le\beta_{x}(M_{0},\thinspace0)+\alpha_{w}(M_{w})+\alpha_{v}(M_{v})\le\bar{s}.\label{eq: error bound from 0 to T} \end{equation} Next, we proceed to prove that the RGAS property is maintained for all $t\in\mathbb{I}_{\ge T}$. Given $t\in\mathbb{I}_{\ge T}$, define $n=\left\lfloor \frac{t}{T}\right\rfloor $, which is the largest integer that is less than or equal to $\frac{t}{T}$. So $t-nT$ belongs to the set $\mathbb{I}_{0:T-1}$, and hence $\left|x_{t-nT}-\bar{x}_{t-nT}\right|$ satisfies the preceding inequality, i.e., $\left|x_{t-nT}-\bar{x}_{t-nT}\right|\le\bar{s}$. 
Treat the MHE defined at time $t-(n-1)T$ as the associated FIE confined to the time interval $[t-nT,\thinspace t-(n-1)T]$. Therefore, the MHE satisfies the RGAS property within this interval, that is, by \eqref{def: RGAS} we have: \begin{align*} & \left|x_{t-(n-1)T}-\hat{x}_{t-(n-1)T}^{\star}\right|\le\beta_{x}(\left|x_{t-nT}-\bar{x}_{t-nT}\right|,\thinspace T)\\ & +\alpha_{w}(\left\Vert \boldsymbol{w}_{t-nT:t-(n-1)T-1}\right\Vert )+\alpha_{v}(\left\Vert \boldsymbol{v}_{t-nT:t-(n-1)T}\right\Vert )\\ & \le\mu(\left|x_{t-nT}-\bar{x}_{t-nT}\right|)\varphi(T)+\alpha_{w}(\left\Vert \boldsymbol{w}_{0:t-1}\right\Vert )+\alpha_{v}(\left\Vert \boldsymbol{v}_{0:t}\right\Vert ). \end{align*} Since $\left|x_{t-nT}-\bar{x}_{t-nT}\right|\in[0,\thinspace\bar{s}]$ and $\varphi(T)$ decreases with $T$, for all $T\ge T_{\bar{s},\eta}$ we have \begin{align*} & \left|x_{t-(n-1)T}-\hat{x}_{t-(n-1)T}^{\star}\right|\\ & \le\mu(\left|x_{t-nT}-\bar{x}_{t-nT}\right|)\varphi(T_{\bar{s},\eta})+\alpha_{w}(\left\Vert \boldsymbol{w}_{0:t-1}\right\Vert )+\alpha_{v}(\left\Vert \boldsymbol{v}_{0:t}\right\Vert )\\ & \le\eta\left|x_{t-nT}-\bar{x}_{t-nT}\right|+\alpha_{w}(\left\Vert \boldsymbol{w}_{0:t-1}\right\Vert )+\alpha_{v}(\left\Vert \boldsymbol{v}_{0:t}\right\Vert ) \end{align*} where the second inequality follows from the definition of $T_{\bar{s},\eta}$. Repeat the above reasoning for the MHE defined at time $t-(n-2)T$ with $T\ge T_{\bar{s},\eta}$, yielding \begin{align*} & \left|x_{t-(n-2)T}-\hat{x}_{t-(n-2)T}^{\star}\right|\\ & \le\beta_{x}(\left|x_{t-(n-1)T}-\bar{x}_{t-(n-1)T}\right|,\thinspace T)\\ & \quad+\alpha_{w}(\left\Vert \boldsymbol{w}_{t-(n-1)T:t-(n-2)T-1}\right\Vert )\\ & \quad+\alpha_{v}(\left\Vert \boldsymbol{v}_{t-(n-1)T:t-(n-2)T}\right\Vert )\\ & \overset{\text{Assump. 
\ref{assump: A3}}}{\le}\beta_{x}(\left|x_{t-(n-1)T}-\hat{x}_{t-(n-1)T}^{\star}\right|,\thinspace T)\\ & \quad+\alpha_{w}(\left\Vert \boldsymbol{w}_{0:t-1}\right\Vert )+\alpha_{v}(\left\Vert \boldsymbol{v}_{0:t}\right\Vert )\\ & \le\beta_{x}\left(\eta\left|x_{t-nT}-\bar{x}_{t-nT}\right|+\alpha_{w}(\left\Vert \boldsymbol{w}_{0:t-1}\right\Vert )+\alpha_{v}(\left\Vert \boldsymbol{v}_{0:t}\right\Vert ),\thinspace T\right)\\ & \quad+\alpha_{w}(\left\Vert \boldsymbol{w}_{0:t-1}\right\Vert )+\alpha_{v}(\left\Vert \boldsymbol{v}_{0:t}\right\Vert )\\ & \le\mu\left(\eta\left|x_{t-nT}-\bar{x}_{t-nT}\right|+\alpha_{w}(\left\Vert \boldsymbol{w}_{0:t-1}\right\Vert )+\alpha_{v}(\left\Vert \boldsymbol{v}_{0:t}\right\Vert )\right)\\ & \quad\times\varphi(T_{\bar{s},\eta})+\alpha_{w}(\left\Vert \boldsymbol{w}_{0:t-1}\right\Vert )+\alpha_{v}(\left\Vert \boldsymbol{v}_{0:t}\right\Vert )\\ & \le\eta\left(\eta\left|x_{t-nT}-\bar{x}_{t-nT}\right|+\alpha_{w}(\left\Vert \boldsymbol{w}_{0:t-1}\right\Vert )+\alpha_{v}(\left\Vert \boldsymbol{v}_{0:t}\right\Vert )\right)\\ & \quad+\alpha_{w}(\left\Vert \boldsymbol{w}_{0:t-1}\right\Vert )+\alpha_{v}(\left\Vert \boldsymbol{v}_{0:t}\right\Vert )\\ & =\eta^{2}\left|x_{t-nT}-\bar{x}_{t-nT}\right| \\ & \quad+(1+\eta)\left(\alpha_{w}(\left\Vert \boldsymbol{w}_{0:t-1}\right\Vert )+\alpha_{v}(\left\Vert \boldsymbol{v}_{0:t}\right\Vert )\right). \end{align*} In the deduction, the inequality \eqref{eq: error bound from 0 to T} has been used to show that $\eta\left|x_{t-nT}-\bar{x}_{t-nT}\right|+\alpha_{w}(\left\Vert \boldsymbol{w}_{0:t-1}\right\Vert )+\alpha_{v}(\left\Vert \boldsymbol{v}_{0:t}\right\Vert )\le\bar{s}$, and so the inequality $\mu(s)\varphi(T_{\bar{s},\eta})\le\eta s$ remains applicable. 
By induction, we obtain \begin{align*} \left|x_{t}-\hat{x}_{t}^{\star}\right| & \le\eta^{n}\left|x_{t-nT}-\bar{x}_{t-nT}\right| \\ & \quad+\sum_{i=0}^{n-1}\eta^{i}\left(\alpha_{w}(\left\Vert \boldsymbol{w}_{0:t-1}\right\Vert )+\alpha_{v}(\left\Vert \boldsymbol{v}_{0:t}\right\Vert )\right)\\ & \le\eta^{\left\lfloor \frac{t}{T}\right\rfloor }\left|x_{t-nT}-\bar{x}_{t-nT}\right| \\ & \quad+\frac{1}{1-\eta}\left(\alpha_{w}(\left\Vert \boldsymbol{w}_{0:t-1}\right\Vert )+\alpha_{v}(\left\Vert \boldsymbol{v}_{0:t}\right\Vert )\right), \end{align*} for all $T\ge T_{\bar{s},\eta}$. Since MHE satisfies the RGAS property within the time interval $[0,\thinspace t-nT]$, it follows that \begin{align*} & \left|x_{t-nT}-\bar{x}_{t-nT}\right|\le\left|x_{t-nT}-\hat{x}_{t-nT}^{\star}\right|\\ & \le\beta_{x}(\left|x_{0}-\bar{x}_{0}\right|,\thinspace t-nT)\\ & \quad+\alpha_{w}(\left\Vert \boldsymbol{w}_{0:t-nT-1}\right\Vert )+\alpha_{v}(\left\Vert \boldsymbol{v}_{0:t-nT}\right\Vert )\\ & \le\beta_{x}(\left|x_{0}-\bar{x}_{0}\right|,\thinspace0)+\alpha_{w}(\left\Vert \boldsymbol{w}_{0:t-1}\right\Vert )+\alpha_{v}(\left\Vert \boldsymbol{v}_{0:t}\right\Vert ). 
\end{align*} Consequently, \begin{align*} & \left|x_{t}-\hat{x}_{t}^{\star}\right| \\ & \le\eta^{\left\lfloor \frac{t}{T}\right\rfloor }\left(\beta_{x}(\left|x_{0}-\bar{x}_{0}\right|,\thinspace0)+\alpha_{w}(\left\Vert \boldsymbol{w}_{0:t-1}\right\Vert )+\alpha_{v}(\left\Vert \boldsymbol{v}_{0:t}\right\Vert )\right)\\ & \quad+\frac{1}{1-\eta}\left(\alpha_{w}(\left\Vert \boldsymbol{w}_{0:t-1}\right\Vert )+\alpha_{v}(\left\Vert \boldsymbol{v}_{0:t}\right\Vert )\right)\\ & \le\eta^{\left\lfloor \frac{t}{T}\right\rfloor }\beta_{x}(\left|x_{0}-\bar{x}_{0}\right|,\thinspace0) \\ & \quad+\frac{2-\eta}{1-\eta}\left(\alpha_{w}(\left\Vert \boldsymbol{w}_{0:t-1}\right\Vert )+\alpha_{v}(\left\Vert \boldsymbol{v}_{0:t}\right\Vert )\right)\\ & =:\beta_{x}'(\left|x_{0}-\bar{x}_{0}\right|,\thinspace t)+\alpha_{w}'(\left\Vert \boldsymbol{w}_{0:t-1}\right\Vert )+\alpha_{v}'(\left\Vert \boldsymbol{v}_{0:t}\right\Vert ), \end{align*} for all $T\ge T_{\bar{s},\eta}$, where $\beta_{x}'\in\mathcal{K}L$ and $\alpha_{w}',\thinspace\alpha_{v}'\in\mathcal{K}$. Therefore, the MHE satisfies the RGAS property for all $t\in\mathbb{I}_{\ge0}$, which completes the proof of the major conclusion. Since $\mu(0)=0$ and $\mu(s)$ is non-negative and strictly increasing for all $s\in\mathbb{R}_{\ge0}$, if $\mu(s)$ is in addition Lipschitz continuous at the origin, it follows that the value of $\frac{\mu(s)}{s}$ must be positive and bounded above for all $s\in(0,\thinspace\bar{s}]$. Consequently, $\frac{s}{\mu(s)}$ is positive and bounded below away from zero. That is, the minimizer $s^{*}:=\arg\min_{s\in[0,\thinspace\bar{s}]}\frac{s}{\mu(s)}$ exists and is well-defined. By the property of an $\mathcal{L}$ function, it follows that there exists $T_{\bar{s},\eta}>0$ such that $\varphi(T_{\bar{s},\eta})\le\frac{\eta s^{*}}{\mu(s^{*})}$ and the MHE is RGAS. This proves the remaining part of the conclusion. 
b)\emph{ Convergence.} If the associated FIE estimate ($\hat{x}_{t}^{*}$) converges to the true state, then by Lemma 4.5 of \cite{khalil2002nonlinear} and Lemma \ref{lem: factorizable-KL-bound} in Section \ref{sec: Notation-and-Preliminaries}, there exist $\rho'\in\mathcal{K}L$, $\mu'\in\mathcal{K}$ and $\varphi'\in\mathcal{L}$ such that \[ |x_{t}-\hat{x}_{t}^{*}|\le\rho'(|x_{0}-\bar{x}_{0}|,\thinspace t)\le\mu'(|x_{0}-\bar{x}_{0}|)\varphi'(t), \] for all $t\in\mathbb{I}_{\ge0}$. The proof then continues as in part a), but with the $\mathcal{K}L$ function $\beta_{x}(s,\thinspace t)$ replaced by $\rho'(s,\thinspace t)$ and the $\mathcal{K}$ functions $\alpha_{w}$ and $\alpha_{v}$ set to zero. We reach the conclusion that $|x_{t}-\hat{x}_{t}^{\star}|\le\varrho(|x_{0}-\bar{x}_{0}|,\thinspace t)$ for some $\varrho\in\mathcal{K}L$, if $T\ge T_{\bar{s}',\eta}$. This implies that the MHE estimate ($\hat{x}_{t}^{\star}$) converges to the true state ($x_{t}$), which completes the proof of the major conclusion. The remaining part of the conclusion, with $\mu'(s)$ Lipschitz continuous at the origin, is proved as in the last paragraph of part a). \end{IEEEproof} { From the above proof, we see that $\bar{s}$ and $\bar{s}'$ in Lemma \ref{lem: MHE-FIE} are essentially upper bounds of the uncertainty in the initial state of an MHE instance defined at any time. They are used to define the ranges of the uncertainty within which the conditions of the lemma need to hold. This avoids a stronger condition in which $\bar{s}$ or $\bar{s}'$ would be assumed infinite. } Lemma \ref{lem: MHE-FIE} indicates that the robust stability of MHE is implied by the \textit{enhanced} robust stability of its associated FIE. The enhancing condition requires the moving horizon size ($T$) to be large enough that the inequality $\mu(s)\varphi(T)\le\eta s$ holds whenever the initial state estimation error ($s$) takes a value within a bounded range. 
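The horizon-size condition $\mu(s)\varphi(T)\le\eta s$ can be checked numerically. The sketch below searches for the smallest admissible integer $T$ under hypothetical choices of $\mu$ and $\varphi$ (the function name and the grid search are illustrative, not part of the analysis):

```python
def min_horizon(mu, phi, eta, s_bar, s_grid=200):
    """Smallest integer T with mu(s) * phi(T) <= eta * s for all s in (0, s_bar]:
    it suffices that phi(T) <= eta * min_s (s / mu(s)); the minimum is searched
    on a finite grid here.  mu is a K function that is Lipschitz at the origin
    (so s / mu(s) stays bounded below) and phi is an L function decreasing to 0."""
    grid = [s_bar * (k + 1) / s_grid for k in range(s_grid)]
    thresh = eta * min(s / mu(s) for s in grid)
    T = 0
    while phi(T) > thresh:  # phi decreases to 0, so the loop terminates
        T += 1
    return T

# Hypothetical mu(s) = 2s (so s / mu(s) = 0.5) and phi(t) = 1 / (t + 1):
# the condition becomes 1 / (T + 1) <= 0.25, i.e. T >= 3.
print(min_horizon(lambda s: 2.0 * s, lambda t: 1.0 / (t + 1.0), eta=0.5, s_bar=1.0))  # prints 3
```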
(The lower bound on $T$ can be less conservative if the size of the moving horizon adapts to the variable $s$ while keeping the inequality satisfied.) { With $0<\eta<1$, the condition basically requires each MHE instance to be based on sufficient measurements so that the effect of the estimation error of the initial state decays over time. } The conditions of Lemma \ref{lem: MHE-FIE} become more specific if the $\mathcal{K}$ functions $\mu$ and $\mu'$ have special forms. For example, if $\mu(s)=\mu'(s)=c_{1}s^{a_{1}}$ with $a_{1}\ge1$ and $c_{1}>0$, both of which are Lipschitz continuous at the origin, then the conditions of conclusion a) reduce to $T\ge T_{\bar{s},\eta}$ with $T_{\bar{s},\eta}$ satisfying $\varphi(T_{\bar{s},\eta})\le\frac{\eta}{c_{1}\bar{s}^{a_{1}-1}}$, which further reduces to $\varphi(T_{\bar{s},\eta})\le\frac{\eta}{c_{1}}$ if $a_{1}=1$, and meanwhile the conditions of conclusion b) reduce to $T\ge T_{\bar{s}',\eta}$ with $T_{\bar{s}',\eta}$ satisfying $\varphi'(T_{\bar{s}',\eta})\le\frac{\eta}{c_{1}\bar{s}'^{a_{1}-1}}$, which further reduces to $\varphi'(T_{\bar{s}',\eta})\le\frac{\eta}{c_{1}}$ if $a_{1}=1$. Note that, if $0<a_{1}<1$ in these two cases, then $\mu(s)$ and $\mu'(s)$ are not Lipschitz continuous at the origin and the RGAS of MHE may not follow. With the explicit link established between the stability of MHE and that of its associated FIE, we are able to prove the RGAS of MHE by enhancing the conditions that establish the RGAS of FIE. In the following, the symbol $\bar{s}$ remains the constant defined in Lemma \ref{lem: MHE-FIE}. \begin{thm} \label{thm: RGAS-of-MHE}(RGAS of MHE) Suppose that the system described in \eqref{eq:system} is i-IOSS and the infimum of the MHE defined in \eqref{eq: MHE} is attainable (i.e., exists and is numerically obtainable). 
Given Assumption \ref{assump: A3} and any $\eta\in(0,\thinspace1)$, the MHE is RGAS for all $T\ge T_{\eta,\bar{s}}$ if its associated FIE satisfies Assumptions \ref{assump: A1}-\ref{assump: A2} and the involved $\mathcal{K}$ and $\mathcal{K}L$ functions satisfy \begin{multline} \bar{\beta}_{x}(s,\thinspace T_{\eta,\bar{s}})+\alpha_{1}\left(3\underbar{\ensuremath{\gamma}}_{w}^{-1}\left(3\rho_{x}(s,\thinspace T_{\eta,\bar{s}})\right)\right)\\ +\alpha_{2}\left(3\underbar{\ensuremath{\gamma}}_{v}^{-1}\left(3\rho_{x}(s,\thinspace T_{\eta,\bar{s}})\right)\right)\le\eta s,\label{eq: RGAS-of-MHE-extrac-condition} \end{multline} for all $s\in[0,\thinspace\bar{s}]$. Furthermore, if both the disturbance $w_{t}$ and the noise $v_{t}$ converge to zero as $t$ goes to infinity, then the MHE estimate $\hat{x}_{t}^{\star}$ converges to the true state $x_{t}$. \end{thm} \begin{IEEEproof} (a) \textit{RGAS}. Under the conditions excluding Assumption \ref{assump: A3} and inequality \eqref{eq: RGAS-of-MHE-extrac-condition}, the FIE associated with the MHE is RGAS by Theorem \ref{thm:RGAS-of-FIE}. In the resulting RGAS property, the $\mathcal{K}L$ bound function is obtained as $\beta_{x}\left(s,\,t\right)=\bar{\beta}_{x}\left(s,\,t\right)+\alpha_{1}\left(3\underbar{\ensuremath{\gamma}}_{w}^{-1}\left(3\rho_{x}(s,\,t)\right)\right)+\alpha_{2}\left(3\underbar{\ensuremath{\gamma}}_{v}^{-1}\left(3\rho_{x}(s,\,t)\right)\right)$ (cf. \eqref{eq: beta_x}). Then, inequality \eqref{eq: RGAS-of-MHE-extrac-condition} implies that $\beta_{x}\left(s,\,T\right)\le\eta s$ for all $T\ge T_{\eta,\bar{s}}$ and $s\in[0,\thinspace\bar{s}]$. Consequently, the conclusion follows from conclusion a) of Lemma \ref{lem: MHE-FIE}. (b) \textit{Convergence}. Since the disturbance $w_{t}$ and the noise $v_{t}$ converge to zero, for any $\epsilon>0$ there exists a time $t_{\epsilon}$ such that $\alpha_{w}(\sup_{\tau\ge t_{\epsilon}}|w_{\tau}|)<\epsilon/3$ and $\alpha_{v}(\sup_{\tau\ge t_{\epsilon}}|v_{\tau}|)<\epsilon/3$. 
By the definition of a $\mathcal{K}L$ function, there also exists a time $\tau_{\epsilon}$ such that $\beta_{x}(|x_{t_{\epsilon}}-\bar{x}_{t_{\epsilon}}|,\thinspace\tau_{\epsilon})<\epsilon/3$. Given $t\ge t_{\epsilon}+\tau_{\epsilon}$, by the induction in part (a) of the proof of Lemma \ref{lem: MHE-FIE}, we observe that, under the conditions of this theorem, the same RGAS inequality \eqref{def: RGAS} of the MHE remains valid if the intermediate state $x_{t_{\epsilon}}$ is treated as the initial state. Consequently, we obtain \begin{align*} |\hat{x}^{\star}_{t}-x_{t}| & \le \beta_{x}(|x_{t_{\epsilon}}-\bar{x}_{t_{\epsilon}}|,\thinspace t-t_{\epsilon})\\ & \quad+\alpha_{w}(\left\Vert \boldsymbol{w}_{t_{\epsilon}:t-1}\right\Vert )+\alpha_{v}(\left\Vert \boldsymbol{v}_{t_{\epsilon}:t}\right\Vert )\\ & <\epsilon/3+\epsilon/3+\epsilon/3=\epsilon, \end{align*} which implies that the MHE estimate $\hat{x}^{\star}_{t}$ converges to $x_{t}$ as $t$ goes to infinity. This completes the proof. \end{IEEEproof} { The proof shows that the left-hand side of \eqref{eq: RGAS-of-MHE-extrac-condition} is precisely the $\mathcal{K}L$ bound component in the RGAS property of the FIE that is associated with the MHE. Inequality \eqref{eq: RGAS-of-MHE-extrac-condition} basically requires the $\mathcal{K}L$ bound to be contractive with respect to the estimation error of the initial state for each MHE instance. This is made possible by requiring the MHE to implement a sufficiently large moving horizon, as indicated by $T_{\eta,\bar{s}}$. } \begin{rem} \label{rem: convergence-of-FIE} The convergence of FIE is not proved under the same conditions given in Theorem \ref{thm: RGAS-of-MHE}. This is because, given an initial condition ($x_{0}$), the RGAS property of FIE is exclusively associated with $x_{0}$ and is not applicable if an intermediate state ($x_{t_{\epsilon}}$ for any $t_{\epsilon}\in\mathbb{I}_{>0}$) is used to replace $x_{0}$ in the property. 
This makes it invalid to apply a similar RGAS inequality to establish the convergence as per the above proof. \end{rem} \begin{lem} \label{lem: RGAS of MHE - KdL form}Given the conditions of Lemma \ref{lem:The-FIE-defined}, the condition \eqref{eq: RGAS-of-MHE-extrac-condition} holds true if the involved $\mathcal{K}$ functions $\{\mu_{1},\,\mu_{2},\,\mu_{3},\,\alpha_{1},\,\alpha_{2}\}$, $\mathcal{K}_{\infty}$ functions $\{\underbar{\ensuremath{\gamma}}_{w},\thinspace\underbar{\ensuremath{\gamma}}_{v}\}$ and $\mathcal{L}$ functions $\{\varphi_{1},\,\varphi_{2}\}$ satisfy the following inequality: \begin{multline} \mu_{1}\left(3s+3\mu_{2}^{-1}\left(3\mu_{3}(s)\right)\right)\varphi_{1}(T_{\eta,\bar{s}})\\ +\alpha_{1}\left(3\underbar{\ensuremath{\gamma}}_{w}^{-1}\left(3\mu_{3}(s)\varphi_{2}(T_{\eta,\bar{s}})\right)\right)\\ +\alpha_{2}\left(3\underbar{\ensuremath{\gamma}}_{v}^{-1}\left(3\mu_{3}(s)\varphi_{2}(T_{\eta,\bar{s}})\right)\right)\le\eta s,\label{eq: key-stability-condition-3} \end{multline} for all $s\in[0,\thinspace\bar{s}]$. \end{lem} \begin{IEEEproof} The proof amounts to showing that inequality \eqref{eq: key-stability-condition-3} implies inequality \eqref{eq: RGAS-of-MHE-extrac-condition} under the conditions of Lemma \ref{lem:The-FIE-defined}. By Lemma \ref{lem:The-FIE-defined} and its proof, we have $\rho_{x}(s,\thinspace t)=\mu_{3}(s)\varphi_{2}(t)$ and $\bar{\beta}_{x}(s,t)=\beta\left(3s+3\mu_{2}^{-1}\left(3\mu_{3}(s)\right),\,t\right)=\mu_{1}\left(3s+3\mu_{2}^{-1}\left(3\mu_{3}(s)\right)\right)\varphi_{1}(t)$. Substituting these specific $\mathcal{K}L$ functions into \eqref{eq: RGAS-of-MHE-extrac-condition} turns it into \eqref{eq: key-stability-condition-3}, which hence completes the proof. 
\end{IEEEproof} \begin{lem} \label{lem: RGAS MHE special case 1}Given conditions a)-c) of Lemma \ref{lem:The-FIE-defined2}, the condition \eqref{eq: RGAS-of-MHE-extrac-condition} holds true if the $\mathcal{K}_{\infty}$ functions $\{\underbar{\ensuremath{\gamma}}_{w},\,\underbar{\ensuremath{\gamma}}_{v}\}$ and the $\mathcal{K}$ functions $\{\alpha_{1},\thinspace\alpha_{2}\}$ in the given conditions satisfy the following inequality: \begin{multline} c_{1}\left[3\left(1+\sqrt[a_{2}]{3}\right)\right]^{a_{1}}s^{a_{1}}\left(T_{\eta,\bar{s}}+1\right)^{-b_{1}}\\ +\alpha_{1}\left(3\underbar{\ensuremath{\gamma}}_{w}^{-1}\left(3c_{2}s^{a_{2}}\left(T_{\eta,\bar{s}}+1\right)^{-b_{2}}\right)\right)\\ +\alpha_{2}\left(3\underbar{\ensuremath{\gamma}}_{v}^{-1}\left(3c_{2}s^{a_{2}}(T_{\eta,\bar{s}}+1)^{-b_{2}}\right)\right)\le\eta s,\label{eq: key-stability-condition-3-1} \end{multline} for all $s\in[0,\thinspace\bar{s}]$ and $t\in\mathbb{I}_{\ge0}$. \end{lem} \begin{IEEEproof} With $\mu_{1}(s)=c_{1}s^{a_{1}}$, $\mu_{2}(s)=\mu_{3}(s)=c_{2}s^{a_{2}}$, $\varphi_{1}(t)=(t+1)^{-b_{1}}$ and $\varphi_{2}(t)=(t+1)^{-b_{2}}$, it is straightforward to show that \eqref{eq: key-stability-condition-3-1} is equivalent to \eqref{eq: key-stability-condition-3}. The conclusion follows immediately from Lemma \ref{lem: RGAS of MHE - KdL form}. \end{IEEEproof} Note that, to satisfy inequality \eqref{eq: key-stability-condition-3-1}, the parameter $a_{1}$ of the i-IOSS system must satisfy $a_{1}\ge1$.
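To make the role of $T_{\eta,\bar{s}}$ concrete, a condition of the form \eqref{eq: key-stability-condition-3-1} can be checked numerically by scanning over the horizon length. The Python sketch below is illustrative only: the constants and the $\mathcal{K}$ and $\mathcal{K}_{\infty}$ functions are hypothetical stand-ins, not values prescribed by the lemma.

```python
import numpy as np

def lhs(T, s, c1=3.04, a1=1.0, c2=1.0, a2=2.0, b1=1.0, b2=1.0,
        alpha1=lambda r: r, alpha2=lambda r: 0.0 * r,
        gw_inv=np.sqrt, gv_inv=np.sqrt):
    """Left-hand side of the power-law condition; gw_inv/gv_inv are the
    inverses of the (here quadratic, hence sqrt) lower bounds on the
    disturbance sub-costs. All defaults are illustrative stand-ins."""
    term0 = c1 * (3 * (1 + 3 ** (1 / a2))) ** a1 * s ** a1 * (T + 1) ** (-b1)
    term1 = alpha1(3 * gw_inv(3 * c2 * s ** a2 * (T + 1) ** (-b2)))
    term2 = alpha2(3 * gv_inv(3 * c2 * s ** a2 * (T + 1) ** (-b2)))
    return term0 + term1 + term2

def min_horizon(eta=0.5, s_bar=1.0, T_max=10000):
    """Smallest T with lhs(T, s) <= eta * s over a grid of s in (0, s_bar]."""
    s = np.linspace(1e-6, s_bar, 200)
    for T in range(T_max):
        if np.all(lhs(T, s) <= eta * s):
            return T
    return None  # no feasible horizon found up to T_max
```

With these stand-ins the left-hand side scales linearly in $s$, so the scan terminates; with $a_{1}<1$ it would fail for small $s$, consistent with the requirement $a_{1}\ge1$ noted above.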
\begin{lem} \label{lem: RGAS MHE special case 2}Given conditions a)-c) of Lemma \ref{lem:The-FIE-defined3}, the condition \eqref{eq: RGAS-of-MHE-extrac-condition} holds true if the $\mathcal{K}_{\infty}$ functions $\{\underbar{\ensuremath{\gamma}}_{w},\,\underbar{\ensuremath{\gamma}}_{v}\}$ and the $\mathcal{K}$ functions $\{\alpha_{1},\thinspace\alpha_{2}\}$ in the given conditions satisfy the following inequality: \begin{multline} c_{1}\left[3\left(1+\sqrt[a_{2}]{3}\right)\right]^{a_{1}}s^{a_{1}}b_{1}^{T_{\eta,\bar{s}}}+\alpha_{1}\left(3\underbar{\ensuremath{\gamma}}_{w}^{-1}\left(3c_{2}s^{a_{2}}b_{2}^{T_{\eta,\bar{s}}}\right)\right)\\ +\alpha_{2}\left(3\underbar{\ensuremath{\gamma}}_{v}^{-1}\left(3c_{2}s^{a_{2}}b_{2}^{T_{\eta,\bar{s}}}\right)\right)\le\eta s,\label{eq: key-stability-condition-3-2} \end{multline} for all $s\in[0,\thinspace\bar{s}]$ and $t\in\mathbb{I}_{\ge0}$. \end{lem} \begin{IEEEproof} With $\mu_{1}(s)=c_{1}s^{a_{1}}$, $\mu_{2}(s)=\mu_{3}(s)=c_{2}s^{a_{2}}$, $\varphi_{1}(t)=b_{1}^{t}$ and $\varphi_{2}(t)=b_{2}^{t}$, it is straightforward to show that \eqref{eq: key-stability-condition-3-2} is equivalent to \eqref{eq: key-stability-condition-3}. The conclusion follows immediately from Lemma \ref{lem: RGAS of MHE - KdL form}. \end{IEEEproof} { Inequalities \eqref{eq: key-stability-condition-3}-\eqref{eq: key-stability-condition-3-2} are materializations of inequality \eqref{eq: RGAS-of-MHE-extrac-condition} under more specific conditions on the i-IOSS property of the system and the cost function of the MHE. Therefore, the remark and interpretation of \eqref{eq: RGAS-of-MHE-extrac-condition} given after the proof of Theorem \ref{thm: RGAS-of-MHE} are applicable to these three inequalities.
} Analogous to the case of FIE, a specific sub-cost function $V_{T,2}$ for MHE to satisfy the conditions of Lemma \ref{lem: RGAS MHE special case 1} or \ref{lem: RGAS MHE special case 2} can take the following form: \begin{align} & V_{T,2}(\boldsymbol{\omega}_{t-T:t-1},\,\boldsymbol{\nu}_{t-T:t}) \nonumber\\ & :=\dfrac{\lambda_{w}}{T}\sum_{\tau\in\mathbb{I}_{t-T:t-1}}l_{w,\tau}(\omega_{\tau})+\dfrac{\lambda_{v}}{T+1}\sum_{\tau\in\mathbb{I}_{t-T:t}}l_{v,\tau}(\nu_{\tau}) \nonumber\\ & +(1-\lambda_{w})\max_{\tau\in\mathbb{I}_{t-T:t-1}}l_{w,\tau}(\omega_{\tau})+(1-\lambda_{v})\max_{\tau\in\mathbb{I}_{t-T:t}}l_{v,\tau}(\nu_{\tau}), \label{eq: specific-form-of-MHE-cost-function} \end{align} with given constants $\lambda_{w},\,\lambda_{v}\in[0,\,1)$. Here the functions $l_{w,\tau}$ and $l_{v,\tau}$ are bounded as per \eqref{eq: specific-subcost-form - b}, and the resulting $\mathcal{K}_{\infty}$ bound functions $\{\underbar{\ensuremath{\gamma}}_{w},\,\underbar{\ensuremath{\gamma}}_{v}\}$ which are associated with $V_{T,2}$ satisfy inequality \eqref{eq: key-stability-condition-3-1} (or \eqref{eq: key-stability-condition-3-2}) of Lemma \ref{lem: RGAS MHE special case 1} (or \ref{lem: RGAS MHE special case 2}). \begin{rem} \label{rem: on-max-term} { It can be shown that, if the sub-cost $V_{T,1}$ admits a form which decays with a higher order of $T$ than $V_{T,2}$ in \eqref{eq: specific-form-of-MHE-cost-function} does, then the MHE remains RGAS even if the weight parameters $\lambda_{w}$ and $\lambda_{v}$ take the value of 1 (i.e., no max terms exist in $V_{T,2}$ in \eqref{eq: specific-form-of-MHE-cost-function}). However, the $\mathcal{K}$ functions in the resulting RGAS property will depend on the size of the moving horizon ($T$) implemented in MHE.
Motivated by a relevant proof in \cite{muller2017nonlinear}, the proof can be developed by showing that $|\hat{w}_\tau|$, $\forall \tau \in \mathbb{I}_{t-T:t-1}$, and consequently $\left\Vert \hat{\boldsymbol{w}}_{t-T:t-1}\right\Vert$, is upper bounded by a sum of $\mathcal{K}$ functions that are dependent on $T$, which is likewise applicable to $|\hat{v}_\tau|$, $\forall \tau \in \mathbb{I}_{t-T:t}$. The remaining proof is first to prove the RLAS of MHE (cf. the beginning of Section \ref{sec: RGAS of the MHE}) by following the routine of the proof for the RGAS of FIE (refer to \cite{hu2015optimization}), and then to use the result to establish the RGAS of MHE by following the routine of the proof of Lemma \ref{lem: MHE-FIE}. The conclusion can be generalized by assuming $V_{T,1}$ and $V_{T,2}$ to be general $\mathcal{K}L$ functions. Similar conclusions, however, are not proved for the associated FIE. } \end{rem} \section{Numerical Examples\label{sec:numerical example}} This section applies MHE to estimate the states of a linear system and a nonlinear system. The two systems are provably i-IOSS, and were subject to Gaussian disturbances, each of which was truncated to the range of $[-3\sigma,\thinspace3\sigma]$, with $\sigma^{2}$ representing the variance of the disturbance. { In MHE, the prior $\bar{x}_{t-T}$ of state $x_{t-T}$ is chosen to be equal to the past MHE estimate $\hat{x}_{t-T}^{\star}$ for all $t\ge T+1$, which makes Assumption \ref{assump: A3} always satisfied. } { The optimization problems in MHE were solved in MATLAB (version R2010b) running on a laptop with an Intel(R) Core(TM) i7-6700HQ CPU @ 2.60 GHz. Specifically, in both examples the optimization problems were solved by the ``fmincon'' solver which implements an interior-point algorithm. The iteration limits were set large enough that the optimal estimates were returned.
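As a language-agnostic illustration of what each MHE instance solves (the experiments here used MATLAB's ``fmincon''), the sketch below poses the optimization over the initial-state guess and the disturbance sequence, eliminating the fitting errors through the measurement equation. All names are illustrative and the bound constraints are omitted; this is a minimal sketch, not the implementation used in this paper.

```python
import numpy as np
from scipy.optimize import minimize

def mhe_step(y_win, x_prior, f, h, cost, T, nx, nw):
    """One MHE instance over the window y_win = (y_{t-T}, ..., y_t):
    optimize (chi_{t-T}, omega_{t-T:t-1}); nu_tau := y_tau - h(x_tau)."""
    def objective(z):
        chi, w = z[:nx], z[nx:].reshape(T, nw)
        x, nu = chi, [y_win[0] - h(chi)]
        for tau in range(T):
            x = f(x, w[tau])
            nu.append(y_win[tau + 1] - h(x))
        return cost(chi - x_prior, w, np.array(nu))
    z0 = np.concatenate([np.asarray(x_prior, float), np.zeros(T * nw)])
    res = minimize(objective, z0, method="BFGS")
    # roll the optimal initial guess forward to obtain the current estimate
    x = res.x[:nx]
    for tau in range(T):
        x = f(x, res.x[nx:].reshape(T, nw)[tau])
    return x
```

For a smooth quadratic cost this is an unconstrained smooth problem; the $3\sigma$ bounds used in the examples below would turn it into a constrained solve.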
} The performance of MHE is compared with that of KF (with dynamic gains) in the linear case and with that of EKF in the nonlinear case. The performance is evaluated by the mean error and the mean absolute error (MAE) of the estimation, the latter being defined as \begin{align*} {\rm MAE} & =\frac{1}{N(t_{f}+1)}\sum_{i=1}^{N}\sum_{t=0}^{t_{f}}\sum_{j=1}^{n}|x_{t,j}^{(i)}-\hat{x}_{t,j}^{(i)}|, \end{align*} where $t_{f}$ is the simulation duration, $N$ is the number of random instances of the initial state and the disturbance sequence, $x_{t,j}^{(i)}$ is the $j$th state of a state vector $x_{t}^{(i)}$ for time $t$ in instance $i$, and $\hat{x}_{t,j}^{(i)}$ denotes the corresponding estimate. In both examples, we set $N=100$ and $t_{f}=60$. \subsection{A linear system} Consider a linear discrete-time system described by: \begin{align*} &\left[\begin{array}{c} x_{1,t+1}\\ x_{2,t+1}\\ x_{3,t+1} \end{array}\right] =\left[\begin{array}{ccc} 0.74 & 0.21 & -0.25\\ 0.09 & 0.86 & -0.19\\ -0.09 & 0.18 & 0.50 \end{array}\right]\left[\begin{array}{c} x_{1,t}\\ x_{2,t}\\ x_{3,t} \end{array}\right]\\ & \quad+\left[\begin{array}{c} w_{1,t}\\ w_{2,t}\\ w_{3,t} \end{array}\right], \quad y_{t} =0.1x_{1,t}+2x_{2,t}+x_{3,t}+v_{t},\thinspace\thinspace\forall t\ge0. \end{align*} The disturbances $\{w_{1,t},\thinspace\thinspace w_{2,t},\thinspace\thinspace w_{3,t}\}$ and noise $\{v_{t}\}$ are four sequences of independent, zero-mean, truncated Gaussian noises with variances given by $\sigma_{w_{1}}^{2}=\sigma_{w_{2}}^{2}=\sigma_{w_{3}}^{2}=\sigma_{w}^{2}=0.04$ and $\sigma_{v}^{2}=0.01$, respectively. The initial state $x_{0}$ is a random variable independent of the disturbances and noise, and follows a Gaussian distribution with a mean of $\bar{x}_{0}$, with the variances of the three elements all given by $\sigma_{0}^{2}:=1$. The prior estimate of the initial state is given as $\bar{x}_{0}=[1\thinspace\thinspace1\thinspace\thinspace-1]^{\top}$.
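The simulation setup above can be sketched as follows (Python; the truncation is implemented here by clipping at $\pm3\sigma$, which is one way of realizing a truncated Gaussian, and the MAE routine follows the formula above, i.e., it sums over state components without dividing by $n$):

```python
import numpy as np

A = np.array([[0.74, 0.21, -0.25],
              [0.09, 0.86, -0.19],
              [-0.09, 0.18, 0.50]])
C = np.array([0.1, 2.0, 1.0])

def trunc_gauss(sigma, size, rng):
    """Zero-mean Gaussian samples clipped to [-3*sigma, 3*sigma]."""
    return np.clip(rng.normal(0.0, sigma, size), -3 * sigma, 3 * sigma)

def simulate(x0, t_f, sigma_w=0.2, sigma_v=0.1, rng=None):
    """x_{t+1} = A x_t + w_t and y_t = C x_t + v_t for t = 0, ..., t_f.
    Defaults match sigma_w^2 = 0.04 and sigma_v^2 = 0.01."""
    rng = rng or np.random.default_rng(0)
    xs, ys = [np.asarray(x0, float)], []
    for t in range(t_f + 1):
        ys.append(C @ xs[-1] + trunc_gauss(sigma_v, (), rng))
        if t < t_f:
            xs.append(A @ xs[-1] + trunc_gauss(sigma_w, 3, rng))
    return np.array(xs), np.array(ys)

def mae(x_true, x_hat):
    """Average over instances/times of the componentwise absolute error sum."""
    err = np.abs(np.asarray(x_true) - np.asarray(x_hat))
    return err.sum(axis=-1).mean()
```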
The system is exp-i-IOSS by Lemma \ref{lem: LTI-variant exp-i-IOSS} established in the Appendix. By the definition of an i-IOSS system in \eqref{eq:definition - i-IOSS}, the $\mathcal{K}L$ and $\mathcal{K}$ bound functions are obtained as $\beta(s,\thinspace t)=3.04s\cdot0.9{}^{t}$, $\alpha_{1}(s)=30.3s$ and $\alpha_{2}(s)\equiv0$. By Lemma \ref{lem: RGAS MHE special case 2} and Theorem \ref{thm: RGAS-of-MHE}, for the MHE to be RGAS we can specify its cost function as{ \begin{align} & V_{T}(\chi_{t-T},\thinspace\boldsymbol{\omega}_{t-T:t-1},\thinspace\boldsymbol{\nu}_{t-T:t})\nonumber \\ & :=\frac{|\chi_{t-T}-\hat{x}_{t-T}|^{2}b_{2}^{T}}{\sigma_{0}^{2}}+V_{T,2}(\boldsymbol{\omega}_{t-T:t-1},\thinspace\boldsymbol{\nu}_{t-T:t})\label{eq: LE - Vt} \end{align} with \begin{align} & V_{T,2}(\boldsymbol{\omega}_{t-T:t-1},\thinspace\boldsymbol{\nu}_{t-T:t})\nonumber \\ & :=\dfrac{\lambda_{w}}{\sigma_{w}^{2}T}\sum_{\tau=t-T}^{t-1}|\omega_{\tau}|^{2}+\dfrac{\lambda_{v}}{\sigma_{v}^{2}(T+1)}\sum_{\tau=t-T}^{t}|\nu_{\tau}|^{2}\nonumber \\ & +\dfrac{1-\lambda_{w}}{\sigma_{w}^{2}}\|\boldsymbol{\omega}_{t-T:t-1}\|^{2}+\dfrac{1-\lambda_{v}}{\sigma_{v}^{2}}\|\boldsymbol{\nu}_{t-T:t}\|^{2}.\label{eq: LE - Vt-bar} \end{align} }The resulting MHE is named MHE I, to distinguish it from another MHE defined later. With this choice of cost function, the $\mathcal{K}_{\infty}$ bound functions associated with the disturbances are derived as $\underbar{\ensuremath{\gamma}}_{w}(s)=(1-\lambda_{w})s^{2}/\sigma_{w}^{2}$, $\gamma_{w}(s)=s^{2}/\sigma_{w}^{2}$, $\underbar{\ensuremath{\gamma}}_{v}(s)=(1-\lambda_{v})s^{2}/\sigma_{v}^{2}$, and $\ensuremath{\gamma}_{v}(s)=s^{2}/\sigma_{v}^{2}$. To satisfy the conditions of Lemma \ref{lem: RGAS MHE special case 2}, it suffices to choose $b_{2}=0.81$, which satisfies $0.9^{2}\le b_{2}<1$, and any $\lambda_{w},\,\lambda_{v}\in[0,\,1)$.
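In code, the cost \eqref{eq: LE - Vt}-\eqref{eq: LE - Vt-bar} of MHE I reads as follows (Python sketch; here $\|\cdot\|$ is taken as the maximum over the window of the per-step Euclidean norm, matching the max terms in \eqref{eq: specific-form-of-MHE-cost-function}, and the default parameter values mirror the choices above):

```python
import numpy as np

def V_T_mhe1(chi_err, w, v, T, b2=0.81, lam_w=0.99, lam_v=0.99,
             sig0=1.0, sig_w=0.2, sig_v=0.1):
    """Cost of MHE I: a prior term discounted by b2**T plus the sub-cost
    V_{T,2} mixing time-averaged and sup-norm quadratic penalties."""
    chi_err = np.atleast_1d(np.asarray(chi_err, float))
    w = np.atleast_2d(np.asarray(w, float))   # rows indexed by time step
    v = np.atleast_2d(np.asarray(v, float))
    w2, v2 = (w ** 2).sum(axis=1), (v ** 2).sum(axis=1)   # |.|^2 per step
    prior = (chi_err @ chi_err) * b2 ** T / sig0 ** 2
    return (prior
            + lam_w / (sig_w ** 2 * T) * w2.sum()
            + lam_v / (sig_v ** 2 * (T + 1)) * v2.sum()
            + (1 - lam_w) / sig_w ** 2 * w2.max()
            + (1 - lam_v) / sig_v ** 2 * v2.max())
```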
Given a moving horizon size specified by $T$, we solve the MHE subject to $\left\Vert \chi_{0}\right\Vert \le3\sigma_{0}$, $\left\Vert \boldsymbol{\omega}_{t-T:t-1}\right\Vert \le3\sigma_{w}$ and $\left\Vert \boldsymbol{\nu}_{t-T:t}\right\Vert \le3\sigma_{v}$, yielding the state estimate for each $t\in\mathbb{I}_{0:t_{f}}$. The MAEs of the estimates when $b_{2}$ and $T$ took different values are shown in Fig. \ref{fig: LE - MAE-T-b2 }. As observed, MHE I with $\lambda_{w}=\lambda_{v}=0.99$ outperformed KF if the horizon size $T$ and the parameter $b_{2}$ were large enough. In contrast, MHE I was inferior to KF when $\lambda_{w}=\lambda_{v}=0$, regardless of the values of $T$ and $b_{2}$. The observations are corroborated by the results of a random instance, as shown in Fig. \ref{fig: LE - exemplary ME-t}. We see that MHE I outperformed KF during the early stage of estimation and became almost equivalent to KF afterwards. In addition, the results in Fig. \ref{fig: LE - MAE-T-b2 } indicate that a small moving horizon size, e.g., $T=10$, is sufficient for the MHE to offer a competitive estimation, and that the improvement in the estimation performance is marginal once the horizon is large enough. The feasible size can thus be smaller than the sufficient size predicted by Lemma \ref{lem: RGAS MHE special case 2}, which is 39 for $\lambda_{w}=\lambda_{v}=0$ and 57 for $\lambda_{w}=\lambda_{v}=0.99$. \begin{figure} \caption{MAE performances of MHE for different values of $T$ and $b_{2}$.} \label{fig: LE - MAE-T-b2 } \end{figure} \begin{figure} \caption{Mean error performances of KF and MHE I. MHE I was implemented with $\lambda_{w}=\lambda_{v}=0.99$.} \label{fig: LE - exemplary ME-t} \end{figure} Alternatively, if the i-IOSS property is expressed by using a looser $\mathcal{K}L$ bound with $\beta(s,t):=3.04s\cdot(t+1)^{\ln0.9}$ (cf.
Lemma \ref{lem: LTI-variant exp-i-IOSS} and the remark that follows), then by Lemma \ref{lem: RGAS MHE special case 1} a different valid cost function can be defined as: \begin{multline} V_{T}'(\chi_{t-T},\thinspace\boldsymbol{\omega}_{t-T:t-1},\thinspace\boldsymbol{\nu}_{t-T:t})=\frac{|\chi_{t-T}-\hat{x}_{t-T}|^{2}}{\sigma_{0}^{2}(T+1)^{b_{2}}}\\ +V_{T,2}(\boldsymbol{\omega}_{t-T:t-1},\thinspace\boldsymbol{\nu}_{t-T:t}),\label{eq: LE - alternative Vt} \end{multline} where $V_{T,2}(\boldsymbol{\omega}_{t-T:t-1},\thinspace\boldsymbol{\nu}_{t-T:t})$ is the same as in \eqref{eq: LE - Vt-bar}. To satisfy the stability conditions in Lemma \ref{lem: RGAS MHE special case 1}, it is sufficient to choose $b_{2}=0.21$, which satisfies $0<b_{2}\le-2\ln0.9$. We refer to the resulting MHE as MHE II. Simulations were performed on the same random instances for different values of $T$ and $b_{2}$, and the results are again shown in Fig. \ref{fig: LE - MAE-T-b2 }. The state estimation results are slightly better than those obtained by MHE I across different values of $T$. Similar observations were made when the parameters $b_{2}$'s of the two MHEs took values in the ranges identified by Lemmas \ref{lem: RGAS MHE special case 1} and \ref{lem: RGAS MHE special case 2}. Simulations also showed that both MHE I and MHE II remained stable even if $b_{2}$ took values beyond the identified ranges, which indicates that the derived stability conditions are sufficient but not necessary. The optimization problems solved in both MHEs are convex. The solution times averaged over the whole simulation period (i.e., 60 time units) and 100 random instances are summarized in Table \ref{tbl: computational-times-example1}, for different sizes of moving horizons. The average solution times were less than 1.4 secs for both MHE I and MHE II if the parameters $\lambda_w$ and $\lambda_v$ were set to 0.99, and increased if $\lambda_w$ and $\lambda_v$ were set to 0 as the optimization became more challenging to solve.
Moreover, in each case the solution time increased with the size of the moving horizon. \begin{table} \caption{Average solution time (in secs) for the linear system.} \label{tbl: computational-times-example1} \centering{} \begin{tabular}{p{2.7cm}|llllll} \hline Moving horizon size & 5 & 10 & 15 & 20 & 25 & 30\tabularnewline \noalign{\vskip3pt} \hline Time for MHE I, with\newline$\lambda_{w}=\lambda_{v}=0.99$ & 0.12 & 0.30 & 0.49 & 0.72 & 1.00 & 1.33\tabularnewline\noalign{\vskip4pt} Time for MHE I, with\newline$\lambda_{w}=\lambda_{v}=0$ & 0.60 & 1.48 & 2.24 & 2.93 & 3.49 & 3.92\tabularnewline\noalign{\vskip4pt} Time for MHE II, with\newline$\lambda_{w}=\lambda_{v}=0.99$ & 0.10 & 0.30 & 0.45 & 0.66 & 0.91 & 1.19\tabularnewline\noalign{\vskip4pt} Time for MHE II, with\newline$\lambda_{w}=\lambda_{v}=0$ & 0.60 & 1.46 & 2.11 & 2.58 & 3.16 & 3.79\tabularnewline \hline \end{tabular} \end{table} Next, we compare the performances of MHE and KF when the measurements contain outliers. In this case, the noise was a mixture of two truncated Gaussian noises: a nominal noise with a variance of $\sigma_{v}^{2}$, occurring with a probability of $p$, and an intermittent large noise with a variance of $100\sigma_{v}^{2}$, occurring with a probability of $(1-p)$ \cite{aravkin2016generalized}. The system disturbances $w_{1,t}$, $w_{2,t}$ and $w_{3,t}$ were generated as the same Gaussian disturbance with a variance of $\sigma_{w}^{2}$. In the simulations, we set $p=0.9$, $\sigma_{v}=0.1$ and $\sigma_{w}=0.02$. The other simulation settings were the same as before.
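The mixture noise just described can be generated as follows (Python sketch; truncation is again realized by clipping, at $3\sigma$ of whichever mixture component is drawn):

```python
import numpy as np

def outlier_noise(sigma_v, p, size, rng):
    """With probability p a nominal sample of std sigma_v, otherwise an
    intermittent sample of variance 100*sigma_v^2 (std 10*sigma_v);
    each sample is clipped at 3 times its own std."""
    sig = np.where(rng.random(size) < p, sigma_v, 10 * sigma_v)
    v = rng.normal(0.0, 1.0, size) * sig
    return np.clip(v, -3 * sig, 3 * sig)
```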
In this case, the cost function of the MHE was specified as{ \begin{multline*} V_{T}(\chi_{t-T},\thinspace\boldsymbol{\omega}_{t-T:t-1},\thinspace\boldsymbol{\nu}_{t-T:t}):=\frac{|\chi_{t-T}-\hat{x}_{t-T}|^{2}b_{2}^{T}}{\sigma_{0}^{2}}\\ +\dfrac{\lambda_{w}}{\sigma_{w}^{2}T}\sum_{\tau=t-T}^{t-1}|\omega_{\tau}|^{2}+\dfrac{\lambda_{v}}{\sigma_{v}(T+1)}\sum_{\tau=t-T}^{t}|\nu_{\tau}|\\ +\dfrac{1-\lambda_{w}}{\sigma_{w}^{2}}\|\boldsymbol{\omega}_{t-T:t-1}\|^{2}+\dfrac{1-\lambda_{v}}{\sigma_{v}}\max_{\tau\in\mathbb{I}_{t-T:t}}|\nu_{\tau}|, \end{multline*} }which imposes 1-norm instead of 2-norm penalties on the fitting errors $\{\nu_{\tau}\}_{t-T\le\tau\le t}$ to better account for outliers in the measurements \cite{ke2005robust,hedengren2015overview,aravkin2016generalized}. The MHE also incorporates the knowledge of identical disturbances by including the equality constraints $\omega_{1,\tau}=\omega_{2,\tau}=\omega_{3,\tau}$ for all $\tau\in\mathbb{I}_{t-T:t-1}$. In contrast, the KF fully implemented 2-norm penalties, and the knowledge of identical disturbances was incorporated by specifying their covariance matrix as $\sigma_{w}^{2}\boldsymbol{1}_{3}$, where $\boldsymbol{1}_{3}$ is a $3\times3$ matrix with all elements equal to 1. \begin{figure} \caption{Performances of MHE and KF based on measurements with outliers. Subplots (a)-(c) present the mean estimation errors, and (d)-(g) show the estimates for a random instance.} \label{fig: LE - mixed gaussian noise} \end{figure} As shown in Fig. \ref{fig: LE - mixed gaussian noise}(a)-(c), the state estimates obtained by MHE outperformed those obtained by KF during the whole simulation period, in terms of both mean and variance of the estimation errors. Fig. \ref{fig: LE - mixed gaussian noise}(d)-(f) show the true trajectories of the two states and their associated KF and MHE estimates in a random instance. The results confirm the superiority of MHE in this case.
Indeed, this is due to the more accurate recovery of the measurement noise sequence by means of the 1-norm penalties applied on the measurement fitting errors, which is supported by the noise estimates shown in Fig. \ref{fig: LE - mixed gaussian noise}(g). \subsection{A nonlinear system} \label{subsec: nonlinear example} Consider a nonlinear continuous-time system described by \begin{equation} \begin{aligned}\left[\begin{array}{c} \dot{x}_{1,t}\\ \dot{x}_{2,t} \end{array}\right] & =\left[\begin{array}{c} -2kx_{1,t}^{2}+w_{t}\\ kx_{1,t}^{2} \end{array}\right],\\ y_{t} & =x_{1,t}+x_{2,t}+v_{t},\thinspace\thinspace\forall t\ge0, \end{aligned} \label{eq: nonlinear-system} \end{equation} where $k=0.16$. When $w_{t}$ is constantly zero, the system describes an ideal gas-phase irreversible reaction in a well-mixed, constant-volume, isothermal batch reactor, where $x_{1,t}$ and $x_{2,t}$ represent the partial pressures and $y_{t}$ the reactor pressure measurement \cite{haseltine2005critical,ji2016robust}. In normal operations, the states and measurements are non-negative, i.e., $x_{1,t},\thinspace x_{2,t},\thinspace y_{t}\ge0$ for all $t\ge0$. We assume that $x_{2,0}\ge c_{0}$ for a certain positive constant $c_{0}$. This implies that $x_{2,t}\ge c_{0}>0$ for all $t\ge0$ because $x_{2,t}$ increases with $t$. First we prove that the system is i-IOSS, a property often assumed without proof in the literature, e.g., in \cite{ji2016robust}. Given two initial conditions $x_{0}^{(1)}:=[x_{1,0}^{(1)}\thinspace\thinspace x_{2,0}^{(1)}]^{\top}$ and $x_{0}^{(2)}:=[x_{1,0}^{(2)}\thinspace\thinspace x_{2,0}^{(2)}]^{\top}$, let the corresponding state trajectories be denoted as $x_{t}^{(1)}$ and $x_{t}^{(2)}$. Define $\delta x_{1,t}=x_{1,t}^{(1)}-x_{1,t}^{(2)}$ and $p_{t}=|\delta x_{1,t}|$.
The dynamics of $p_{t}$ is then derived as \begin{align*} \dot{p}_{t} & =\dfrac{\delta x_{1,t}}{|\delta x_{1,t}|}\delta\dot{x}_{1,t}=\dfrac{\delta x_{1,t}}{|\delta x_{1,t}|}\left(-2k(x_{1,t}^{(1)}+x_{1,t}^{(2)})\delta x_{1,t}+\delta w_{t}\right)\\ & =-2k(x_{1,t}^{(1)}+x_{1,t}^{(2)})|\delta x_{1,t}|+\dfrac{\delta x_{1,t}}{|\delta x_{1,t}|}\delta w_{t}\\ & \le-2kc_{0}p_{t}+|\delta w_{t}|. \end{align*} By the comparison lemma (Lemma 3.4 of \cite{khalil2002nonlinear}), it follows that \begin{align*} |\delta x_{1,t}| & =p_{t}\le p_{0}e^{-2kc_{0}t}+\int_{0}^{t}e^{-2kc_{0}(t-\tau)}|\delta w_{\tau}|d\tau\\ & \le|\delta x_{1,0}|e^{-2kc_{0}t}+\dfrac{1-e^{-2kc_{0}t}}{2kc_{0}}\left\Vert \delta\boldsymbol{w}_{0:t}\right\Vert \\ & \le|\delta x_{0}|e^{-2kc_{0}t}+\frac{\left\Vert \delta\boldsymbol{w}_{0:t}\right\Vert }{2kc_{0}}. \end{align*} Therefore, the gap between the two full states is bounded as follows: \begin{align*} & |x_{t}^{(1)}-x_{t}^{(2)}|\le\left|\delta x_{1,t}\right|+\left|\delta x_{2,t}\right|\le2\left|\delta x_{1,t}\right|+\left|\delta(y_{t}-v_{t})\right|\\ & \le2|\delta x_{0}|e^{-2kc_{0}t}+\dfrac{\left\Vert \delta\boldsymbol{w}_{0:t}\right\Vert }{kc_{0}}+\left\Vert \delta(\boldsymbol{y}_{0:t}-\boldsymbol{v}_{0:t})\right\Vert \\ & =:\beta(|\delta x_{0}|,\thinspace t)+\alpha_{1}\left(\left\Vert \delta\boldsymbol{w}_{0:t}\right\Vert \right)+\alpha_{2}\left(\left\Vert \delta(\boldsymbol{y}_{0:t}-\boldsymbol{v}_{0:t})\right\Vert \right), \end{align*} where $\beta\in\mathcal{K}L$ and $\alpha_{1},\alpha_{2}\in\mathcal{K}$. Thus, the system described in \eqref{eq: nonlinear-system} is i-IOSS by Definition \ref{def: i-IOSS}. Let $w_{t}$ and $v_{t}$ be Gaussian white noises with variances equal to $0.001^{2}$ and $0.01^{2}$, respectively. 
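The bound just derived can be sanity-checked numerically by Euler-integrating two copies of the $x_{1}$-dynamics (Python sketch; the initial conditions, the constant disturbance gap and the horizon are illustrative, chosen so that $x_{1,t}^{(1)}+x_{1,t}^{(2)}\ge c_{0}$ holds on the simulated interval, as the derivation requires):

```python
import numpy as np

def x1_flow(x1_0, w_seq, k=0.16, dt=1e-3):
    """Euler integration of dot(x1) = -2*k*x1**2 + w with a step-wise w."""
    xs = [x1_0]
    for w in w_seq:
        xs.append(xs[-1] + dt * (-2 * k * xs[-1] ** 2 + w))
    return np.array(xs)

def check_bound(x1a0=0.5, x1b0=0.4, k=0.16, c0=0.1, dt=1e-3,
                t_end=10.0, dw=0.01):
    """Check |dx1_t| <= |dx1_0| e^{-2 k c0 t} + ||dw|| / (2 k c0)."""
    n = int(t_end / dt)
    xa = x1_flow(x1a0, np.full(n, dw), k, dt)   # trajectory with disturbance
    xb = x1_flow(x1b0, np.zeros(n), k, dt)      # undisturbed trajectory
    t = dt * np.arange(n + 1)
    bound = abs(x1a0 - x1b0) * np.exp(-2 * k * c0 * t) + dw / (2 * k * c0)
    assert np.all(xa + xb >= c0)   # standing assumption of the derivation
    return bool(np.all(np.abs(xa - xb) <= bound + 1e-9))
```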
Let the initial state $x_{0}$ follow a Gaussian distribution with a mean of $\bar{x}_{0}$ and a covariance of $\sigma_{0}^{2}I_{2}$, where $\bar{x}_{0}:=[0.1,\thinspace4.5]$, $\sigma_{0}^{2}=9$ and $I_{2}$ is a $2\times2$ identity matrix. { In the simulations, we applied the Euler-Maruyama method \cite{higham2001algorithmic} to obtain discrete counterparts of the stochastic differential equations in \eqref{eq: nonlinear-system}, and the discretization step size is given by $T_{s}$. } According to Lemma \ref{lem: RGAS MHE special case 2}, the MHE in discrete time is RGAS when the moving horizon size $T$ is large enough, if its cost function takes the form of \eqref{eq: LE - Vt}-\eqref{eq: LE - Vt-bar} and is equipped with $b_{2}=e^{-4kc_{0}T_{s}}$. With $c_{0}=0.1$ and $T_{s}=0.1$ (smaller step sizes were found to yield similar results), Monte-Carlo simulations were performed for $T$ varying from 2 to 30 and the average state estimation results are shown in Fig. \ref{fig: NLE - MAE-T}(a). As observed, MHE outperformed EKF once the moving horizon size $T$ was larger than 2 for $\lambda_{w}=\lambda_{v}=0.99$ and 4 for $\lambda_{w}=\lambda_{v}=0$. The observations were reflected in the results of a random instance as shown in Fig. \ref{fig: NLE - MAE-T}(b)-(d), in which the prior estimate of the initial state was given by $\bar{x}_{0}$ and the MHE parameters were chosen as $\lambda_{w}=\lambda_{v}=0.99$ and $T=15$. \begin{figure} \caption{MAE and mean error performances of EKF and MHE. (a) MAE performances: the red dash-dot line shows the results of EKF, and the blue dash curve with stars and the blue solid curve with circles correspond to the results of MHE with the two settings $\lambda_{w}=\lambda_{v}=0.99$ and $\lambda_{w}=\lambda_{v}=0$.} \label{fig: NLE - MAE-T} \end{figure} The computational times for solving the optimization problems in MHE with different sizes of moving horizons are summarized in Table \ref{tbl: computational-times-example2}.
Since the optimizations are nonlinear and non-convex, the computational times were much longer than those in the previous example, in which the optimizations are convex. This reflects the challenge that persists and needs to be tackled in MHE for nonlinear systems. \begin{table} \caption{Average solution time (in secs) for the nonlinear system.} \label{tbl: computational-times-example2} \centering{} \begin{tabular}{p{2.5cm}|llllll} \hline Moving horizon size & 5 & 10 & 15 & 20 & 25 & 30\tabularnewline \noalign{\vskip3pt} \hline Time for MHE with\newline$\lambda_{w}=\lambda_{v}=0.99$ & 0.32 & 0.70 & 1.21 & 1.78 & 2.62 & 3.59\tabularnewline\noalign{\vskip4pt} Time for MHE with\newline$\lambda_{w}=\lambda_{v}=0$ & 0.90 & 2.89 & 5.45 & 8.11 & 11.57 & 14.50 \tabularnewline \hline \end{tabular} \end{table} \section{Discussion\label{sec:Discussion}} This section provides a brief discussion on solving the optimization problem defined in \eqref{eq: MHE} for MHE. If both the state and the measurement equations are linear, and if the bound sets are convex, then the optimization defined in \eqref{eq: MHE} is convex when a convex cost function is used. In this case, the optimization problem can be solved efficiently to global optimality using state-of-the-art convex solvers \cite{Boyd2004}, even if the MHE implements a large moving horizon size. In practice, however, the state or the measurement equation is often nonlinear. This makes the optimization defined in \eqref{eq: MHE} non-convex, and the computation of a globally optimal solution becomes time-consuming. This can preclude the application of MHE in cases where computational time is a key concern. To tackle the challenge, researchers have proposed solving \eqref{eq: MHE} for suboptimal solutions. For instance, in \cite{alessandri2008moving} the authors assume that the cost values attained by suboptimal solutions are within a fixed gap of the globally optimal costs.
Yet it is unclear if there exist optimization solvers that can keep the assumption valid without violating the tight requirement on computational efficiency. In general, it remains an open challenge to ensure the RGAS of MHE when only suboptimal solutions are obtained for the series of optimization involved. Let the (global) optimal and the suboptimal solution of $(\chi_{t-T},\,\boldsymbol{\omega}_{t-T:t-1},\,\boldsymbol{\nu}_{t-T:t})$ be denoted as $(\hat{x}_{t-T},\,\hat{\boldsymbol{w}}_{t-T:t-1},\,\hat{\boldsymbol{v}}_{t-T:t})$ and $(\check{x}_{t-T},\,\check{\boldsymbol{w}}_{t-T:t-1},\,\check{\boldsymbol{v}}_{t-T:t})$, respectively. The following result may shed some light on the ways to tackle this challenge. \begin{thm} \label{thm: RGAS-of-suboptimal-MHE} Given Assumption \ref{assump: A3} and any $\eta\in(0,\thinspace1)$, the MHE which implements a suboptimal solution for the optimization in \eqref{eq: MHE} is RGAS for all $T\ge T_{\eta,\bar{s}}$ if the following conditions are satisfied: a) the suboptimal solution yields a cost value which satisfies the following inequality, \begin{multline} V_t(\check{x}_{t-T},\,\check{\boldsymbol{w}}_{t-T:t-1},\,\check{\boldsymbol{v}}_{t-T:t})\\ \le \gamma\left(V_t(\hat{x}_{t-T},\,\hat{\boldsymbol{w}}_{t-T:t-1},\,\hat{\boldsymbol{v}}_{t-T:t})\right) \label{eq: suboptimal-MHE-condition} \end{multline} with a certain $\gamma\in \mathcal{K}$; and b) the cost function of the associated FIE satisfies Assumption \ref{assump: A1}, a modified Assumption \ref{assump: A2} and a modified inequality \eqref{eq: RGAS-of-MHE-extrac-condition}, in which the modifications are to replace the $\mathcal{K}L$ function $\rho_x$ and the $\mathcal{K}$ functions $\gamma_w,\,\gamma_v$ used in Assumption \ref{assump: A2} and inequality \eqref{eq: RGAS-of-MHE-extrac-condition} with the $\mathcal{K}L$ function $\gamma\circ3\rho_x$ and $\mathcal{K}$ functions $\gamma\circ3\gamma_w$, $\gamma\circ3\gamma_v$, respectively. 
\end{thm} \begin{IEEEproof} By condition a) and Assumption \ref{assump: A1}, we have \begin{align*} & \underbar{\ensuremath{\rho}}_{x}(\left|\check{x}_{0}-\bar{x}_{0}\right|,t)+\underbar{\ensuremath{\gamma}}_{w}(\left\Vert \check{\boldsymbol{w}}_{0:t-1}\right\Vert )+\underbar{\ensuremath{\gamma}}_{v}(\left\Vert \check{\boldsymbol{v}}_{0:t}\right\Vert )\\ & \le V_{T}(\check{x}_{0}-\bar{x}_{0},\,\check{\boldsymbol{w}}_{0:t-1},\,\check{\boldsymbol{v}}_{0:t})\\ & \le\gamma\left(V_{T}(\hat{x}_{0}-\bar{x}_{0},\,\hat{\boldsymbol{w}}_{0:t-1},\,\hat{\boldsymbol{v}}_{0:t})\right)\\ & \le\gamma\left(V_{T}(x_{0}-\bar{x}_{0},\,\boldsymbol{w}_{0:t-1},\,\boldsymbol{v}_{0:t})\right)\\ & \le\gamma\left(\rho_{x}(\left|x_{0}-\bar{x}_{0}\right|,\,t)+\gamma_{w}(\left\Vert \boldsymbol{w}_{0:t-1}\right\Vert )+\gamma_{v}(\left\Vert \boldsymbol{v}_{0:t}\right\Vert )\right)\\ & \le\check{\rho}_{x}(\left|x_{0}-\bar{x}_{0}\right|,\,t)+\check{\gamma}_{w}(\left\Vert \boldsymbol{w}_{0:t-1}\right\Vert )+\check{\gamma}_{v}(\left\Vert \boldsymbol{v}_{0:t}\right\Vert ) \end{align*} where $\check{\rho}_{x}:=\gamma\circ3\rho_{x}$, $\check{\gamma}_{w}:=\gamma\circ3\gamma_{w}$ and $\check{\gamma}_{v}:=\gamma\circ3\gamma_{v}$, which are $\mathcal{K}L$, $\mathcal{K}$ and $\mathcal{K}$ functions, respectively. The lower and the eventual upper bounds of $V_{T}(\check{x}_{0}-\bar{x}_{0},\,\check{\boldsymbol{w}}_{0:t-1},\,\check{\boldsymbol{v}}_{0:t})$ are in the same forms of the counterparts obtained for a global optimal solution (refer to part (a) of the proof of Theorem 1 in \cite{hu2015optimization}). The only differences are that the left-hand side of the first inequality is expressed by the suboptimal instead of optimal solution variables, and that the right-hand side of the last inequality is described using the new functions $\check{\rho}_x$, $\check{\gamma}_w$ and $\check{\gamma}_v$, instead of $\rho_x$, $\gamma_w$ and $\gamma_v$. 
Consequently, under the modified Assumption \ref{assump: A2}, the proof of the associated FIE (which implements suboptimal solutions) being RGAS can be developed by following the same routine as part (a) of the proof of Theorem 1 in \cite{hu2015optimization}. It remains to show that an additional condition, the counterpart of \eqref{eq: RGAS-of-MHE-extrac-condition} when the MHE implements suboptimal solutions, is also satisfied. This is done by applying the modified inequality \eqref{eq: RGAS-of-MHE-extrac-condition} stated in condition b). The proof is then complete. \end{IEEEproof} Theorem \ref{thm: RGAS-of-suboptimal-MHE} indicates that MHE can be RGAS even if it implements suboptimal solutions for the series of optimizations that are revealed over time. To that end, each suboptimal solution needs to satisfy certain conditions, say, inequality \eqref{eq: suboptimal-MHE-condition}, which requires the yielded cost value to be upper bounded by a $\mathcal{K}$ function of the counterpart that results from an optimal solution. Since the condition does not restrict the form of the $\mathcal{K}$ function, it implies flexibility in obtaining the suboptimal solutions and hence also a direction for future research. On the other hand, we may model the system dynamics and the measurements using discrete-time linear equations in which the disturbances and noises lump all unmodeled nonlinear dynamics (including unknown external disturbances and noises). This will enable MHE to solve only convex programs, but meanwhile may sacrifice estimation accuracy due to the increased uncertainties. Future research may be conducted to design appropriate convex MHEs that balance estimation accuracy against computational complexity. In addition, as remarked after the proof of Lemma \ref{lem: MHE-FIE}, it is possible to change the size of the moving horizon of MHE online while keeping the MHE RGAS.
This would enable MHE with an adaptive moving horizon, which can be computationally more efficient on average than MHE implementing a moving horizon of a fixed size. \section{Conclusion\label{sec:Conclusion}} This paper proved the robust global asymptotic stability (RGAS) of full information estimation (FIE) and its practical approximation, moving horizon estimation (MHE), under general settings. The results indicate that both FIE and MHE lead to bounded estimation errors under mild conditions for an incrementally input/output-to-state stable (i-IOSS) system subject to bounded system and measurement disturbances. The stability conditions require that the cost function to be optimized have a property resembling the i-IOSS property of the system, but with a higher sensitivity to the uncertainty in the initial state. The stability of MHE additionally requires that the moving horizon be long enough to suppress temporal propagation of the estimation errors. Under the same conditions, the MHE was also shown to converge to the true state if the disturbances converge to zero in time. When dealing with constrained nonlinear systems, MHE has to solve a non-convex program at each estimation point. Searching for a globally optimal solution to the program requires considerable computational resources which are often unaffordable in applications. This problem has motivated considerable efforts to develop robustly stable MHE that relies on suboptimal but computationally more efficient solutions. We provided a brief discussion of this direction, which will hopefully contribute to the future development of a systematic and practical solution to this equally important problem. \section*{Acknowledgment} The author thanks the anonymous reviewers for their valuable comments that have helped to improve the quality of the paper. The author is also grateful to Dr. Keyou You and Dr. Lihua Xie for their help during the early development of the results.
\section*{Appendix: A supporting lemma and its proof} \begin{lem} \label{lem: LTI-variant exp-i-IOSS} Consider a system described by \eqref{eq:system}, where $f(x_{t},\thinspace w_{t})=Ax_{t}+g(x_{t})+w_{t}$ with $g$ being a nonlinear function, and $h$ is a linear or nonlinear measurement function. Suppose that $A$ is diagonalizable as $P^{-1}\Lambda P$ for a certain non-singular matrix $P$ and a diagonal matrix $\Lambda$. If the spectral radius of $A$, denoted by $\rho(A)$, is less than one and the nonlinear functions satisfy $|g(x_{t}^{(1)})-g(x_{t}^{(2)})|\le L|h(x_{t}^{(1)})-h(x_{t}^{(2)})|$ for all admissible $x_{t}^{(1)}$ and $x_{t}^{(2)}$ and a positive constant $L$, then the system is exp-i-IOSS as per \eqref{eq:definition - i-IOSS}, in which the $\mathcal{K}L$ function can be specified as $\beta(s,\thinspace t)=|P^{-1}||P|s\rho^{t}(A)\le c_{1}s(t+1)^{-b_{1}}$ for all $s,\thinspace t\ge0$, and the $\mathcal{K}$ functions as $\alpha_{1}(s)=\frac{|P^{-1}||P|}{1-\rho(A)}\cdot s$ and $\alpha_{2}(s)=\frac{L|P^{-1}||P|}{1-\rho(A)}\cdot s$ for all $s\ge0$. Here the parameters $b_{1}$ and $c_{1}$ are positive constants satisfying $c_{1}\ge\max_{t\in\mathbb{I}_{\ge0}}|P^{-1}||P|\rho^{t}(A)(t+1)^{b_{1}}$. \end{lem} \begin{IEEEproof} Given two initial conditions $x_{0}^{(1)}$ and $x_{0}^{(2)}$ and corresponding disturbance sequences $\boldsymbol{w}_{0:t-1}^{(1)}$ and $\boldsymbol{w}_{0:t-1}^{(2)}$, denote the resulting system states at time $t$ by $x_{t}^{(1)}$ and $x_{t}^{(2)}$, respectively. 
By using the analytical expressions of the two states, we have { {\footnotesize{} \begin{align*} & |x_{t}^{(1)}-x_{t}^{(2)}|\\ = & \left|A^{t}(x_{0}^{(1)}-x_{0}^{(2)})+\sum_{l=0}^{t-1}A^{t-1-l}\left(g(x_{l}^{(1)})+w_{l}^{(1)}-g(x_{l}^{(2)})-w_{l}^{(2)}\right)\right|\\ \le & |A^{t}||x_{0}^{(1)}-x_{0}^{(2)}|+\left(\begin{array}{c} \|\boldsymbol{w}_{0:t-1}^{(1)}-\boldsymbol{w}_{0:t-1}^{(2)}\|\\ +\|g(\boldsymbol{x}_{0:t-1}^{(1)})-g(\boldsymbol{x}_{0:t-1}^{(2)})\| \end{array}\right)\sum_{k=0}^{t-1}|A^{k}|\\ \le & |P^{-1}||P|\left(\begin{array}{c} \rho^{t}(A)|x_{0}^{(1)}-x_{0}^{(2)}|\\ +\left(\begin{array}{c} \|\boldsymbol{w}_{0:t-1}^{(1)}-\boldsymbol{w}_{0:t-1}^{(2)}\|\\ +L\|h(\boldsymbol{x}_{0:t-1}^{(1)})-h(\boldsymbol{x}_{0:t-1}^{(2)})\| \end{array}\right)\sum_{k=0}^{t-1}\rho^{k}(A) \end{array}\right)\\ \le & |P^{-1}||P|\left(\begin{array}{c} \rho^{t}(A)|x_{0}^{(1)}-x_{0}^{(2)}|+\dfrac{1}{1-\rho(A)}\|\boldsymbol{w}_{0:t-1}^{(1)}-\boldsymbol{w}_{0:t-1}^{(2)}\|\\ +\dfrac{L}{1-\rho(A)}\|h(\boldsymbol{x}_{0:t}^{(1)})-h(\boldsymbol{x}_{0:t}^{(2)})\| \end{array}\right). \end{align*} }}The last inequality implies that the system is i-IOSS and in particular exp-i-IOSS by Definition \ref{def: i-IOSS}, in which the $\mathcal{K}L$ function $\beta$ and the $\mathcal{K}$ functions $\{\alpha_{1},\thinspace\alpha_{2}\}$ are specified by the lemma. Given $b_{1}\ge0$, define $c_{1,\min}^{b_{1}}=\max_{t\in\mathbb{I}_{\ge0}}|P^{-1}||P|\rho^{t}(A)(t+1)^{b_{1}}$. Since the maximum exists when $\rho(A)<1$, $c_{1,\min}^{b_{1}}$ is well defined. Therefore, there always exists $c_{1}\ge c_{1,\min}^{b_{1}}$ such that $|P^{-1}||P|\rho^{t}(A)\le c_{1}(t+1)^{-b_{1}}$ for all $t\ge0$. The conclusions of the lemma follow immediately. 
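As a quick numerical illustration of why $c_{1,\min}^{b_{1}}$ is well defined, the following sketch scans the maximum and checks the resulting polynomial bound; the values standing in for $|P^{-1}||P|$, $\rho(A)$ and $b_{1}$ are assumptions of the example, not quantities from the lemma.

```python
# Numerical sanity check (illustrative constants, not from the lemma itself):
# with rho(A) < 1, the map t -> kappa * rho^t * (t+1)^{b1} attains a finite
# maximum, and any c1 at least that maximum gives kappa*rho^t <= c1*(t+1)^{-b1}.
kappa = 2.0        # stands in for |P^{-1}||P|
rho, b1 = 0.8, 1.5

horizon = 10_000   # the maximizer is small, so a long scan is more than enough
c1_min = max(kappa * rho**t * (t + 1)**b1 for t in range(horizon))
c1 = c1_min        # any c1 >= c1_min works

assert all(kappa * rho**t <= c1 * (t + 1)**(-b1) + 1e-12 for t in range(horizon))
```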
\end{IEEEproof}{\footnotesize \par} By the lemma, if a system is exp-i-IOSS with the $\mathcal{K}L$ bound function given as $\beta(s,\thinspace t)=c_{1}s^{a_{1}}b_{1}^{t}$ for certain positive constants $a_{1}$, $b_{1}$ and $c_{1}$ with $b_{1}<1$, then it is always feasible to specify a looser $\mathcal{K}L$ bound function as $\beta'(s,\thinspace t)=c_{1}'s^{a_{1}}(t+1)^{-b_{1}'}$ for certain positive constants $b_{1}'$ and $c_{1}'$ satisfying $c_{1}'\ge\max_{t\in\mathbb{I}_{\ge0}}c_{1}b_{1}^{t}(t+1)^{b_{1}'}$. For example, if $b_{1}'$ is set to $1$, then $c_{1}'$ can be specified as $\max_{t\in\mathbb{I}_{\ge0}}c_{1}b_{1}^{t}(t+1)$, which exists when $0\le b_{1}<1$. For another example, if $b_{1}'$ is set to $-\ln b_{1}$, then $c_{1}'$ can be specified as equal to $c_{1}$ (which is equal to $\max_{t\in\mathbb{I}_{\ge0}}c_{1}b_{1}^{t}(t+1)^{-\ln b_{1}}$). \begin{IEEEbiography}[{\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{Wuhua_Hu}}]{Wuhua Hu} received the BEng degree in Automation in 2005 and the MEng degree in Detecting Technique and Automation Device in 2007 from Tianjin University, China. He received the PhD degree in Communication Engineering from Nanyang Technological University (NTU), Singapore, in 2012. He is currently a Research Scientist with the Institute for Infocomm Research, Agency for Science, Technology and Research (A$^\star$STAR), Singapore. Before joining A$^\star$STAR, he worked as a Research Fellow first with the School of Mechanical and Aerospace Engineering and then the School of Electrical and Electronic Engineering, NTU, Singapore, from Aug 2011 to Mar 2016. His research interests lie in modeling, estimation, control and optimization of dynamical systems, with applications to smart and greener power and energy systems. \end{IEEEbiography} \end{document}
\begin{document} \title{Asymptotic analysis of a particle system with mean-field interaction} \author{Anatoli Manita \thanks{Supported by the Russian Foundation for Basic Research (RFBR grant 02-01-00945).},\\ {\small Faculty of Mathematics and Mechanics,}\\ {\small Moscow State University, 119992, Moscow, Russia.}\\ {\small E-mail:[email protected]}\and Vadim Shcherbakov\thanks{Supported by the Russian Foundation for Basic Research (RFBR grant 01-01-00275). On leave from the Laboratory of Large Random Systems, Faculty of Mathematics and Mechanics, Moscow State University, 119992, Moscow, Russia.},\\ {\small CWI, Postbus 94079, 1090 GB,}\\ {\small Amsterdam, The Netherlands}\\ {\small E-mail:[email protected]}} \begin{comment} Vadim Shcherbakov\thanks{Supported by RFBR grant 01-01-00275. },\\ {\small Laboratory of Large Random Systems,}\\ {\small Faculty of Mathematics and Mechanics,}\\ {\small Moscow State University, 119992, Moscow, Russia.}\\ {\small E-mail:[email protected]}} \end{comment} \date{} \maketitle \vspace*{-5ex} \begin{abstract} We study a system of~$N$ interacting particles on~$\mathbf{Z}$. The~stochastic dynamics consists of two components: a free motion of each particle (independent random walks) and a~pair-wise interaction between particles. The~interaction belongs to the class of {\it mean-field\/} interactions and models a {\it rollback synchronization\/} in asynchronous networks of processors for distributed simulation. First of all we study the empirical measure generated by the particle configuration on $\mathbf{R}$. We~prove that if space, time and a parameter of the interaction are appropriately scaled (hydrodynamical scale), then the empirical measure converges weakly to a deterministic limit as $N$ goes to infinity. The~limit process is defined as a weak solution of some partial differential equation. We~also study the~long time evolution of the particle system with a fixed number of particles. 
The~Markov chain formed by the individual positions of the particles is not ergodic. Nevertheless it is possible to introduce {\it relative} coordinates and prove that the~new Markov chain is ergodic while the system as a whole moves with an asymptotically constant mean speed which differs from the mean drift of the free particle motion. \textbf{MSC 2000:} 60K35, 60J27, 60F99. \end{abstract} \section{Introduction} \label{intro} We study an interacting particle system which models a set of processors performing parallel simulations. The system can be described as follows. Consider $N\geq 2$ particles moving in~${\textbf {Z}}$. Let $x_{i}(t)$ be the position at time $t$ of the $i$-th particle, $1\leq i\leq N$. Each particle has three clocks. The first, the second and the third clock attached to the $i$-th particle ring at the moments of time given by mutually independent Poisson processes $\Pi _{i,\alpha },\, \Pi _{i,\beta }$ and $\Pi _{i,\mu _{N}}$ with intensities $\alpha $, $\beta $ and $\mu _{N}$ respectively. These triples of Poisson processes for different indices are also independent. Consider a particle with index $i$. If the first attached clock rings, then the particle jumps to the nearest right site: $x_{i}\rightarrow x_{i}+1$; if the second attached clock rings, then the particle jumps to the nearest left site: $x_{i}\rightarrow x_{i}- 1$. At moments when the third attached clock rings a particle with index $j$ is chosen with probability $1/N$ and, if $x_{i}>x_{j}$, then the $i$-th particle is relocated: $x_{i}\rightarrow x_{j}$. It is supposed that all these changes occur instantaneously. This type of interaction between the particles is motivated by probabilistic models in the theory of parallel simulations in computer science (\cite{Mitra}, \cite{T1,T3} and \cite{GrMP,MM-Time-Sync}). 
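For concreteness, the dynamics just described can be sketched as an event-driven (Gillespie-type) simulation; the parameter values and the initial configuration below are illustrative assumptions, not taken from the paper.

```python
import random

def simulate(N=50, alpha=1.0, beta=0.5, mu_N=0.2, t_max=10.0, seed=0):
    """Event-driven sketch of the N-particle dynamics: with rate alpha a
    particle jumps right, with rate beta it jumps left, and with rate mu_N
    it picks a uniform index j and is relocated to x_j whenever x_i > x_j."""
    rng = random.Random(seed)
    x = [0] * N                             # all particles start at the origin
    t, total = 0.0, N * (alpha + beta + mu_N)  # total jump rate of the system
    while True:
        t += rng.expovariate(total)         # waiting time to the next event
        if t > t_max:
            return x
        i = rng.randrange(N)                # the clock that rang belongs to i
        u = rng.uniform(0.0, alpha + beta + mu_N)
        if u < alpha:
            x[i] += 1                       # first clock: jump right
        elif u < alpha + beta:
            x[i] -= 1                       # second clock: jump left
        else:
            j = rng.randrange(N)            # third clock: rollback interaction
            if x[i] > x[j]:
                x[i] = x[j]

positions = simulate()
```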
The main peculiarity of these models is that a group of processors performs a large-scale simulation, with each processor doing a specific part of the task. The processors share data during the simulation, so their activity must be synchronized. In practice, this synchronization is achieved by applying a so-called {\it rollback\/} procedure which is based on a~massive message exchange between different processors (see~\cite[Sect.~1.4 and Ch.~8]{BertTsit}). One says that $x_i(t)$ is the {\it local time\/} of the $i$-th processor while $t$ is the real (absolute) time. If~we interpret the variable $x_i(t)$ as the amount of work done by processor~$i$ up to the time moment~$t$, then the interaction described above imitates this synchronization procedure. Note that from the point of view of general stochastic particle systems the interaction between the particles is essentially non-local. We are interested in the analysis of the asymptotic behaviour of this particle system. First of all we consider the situation when the number of particles goes to infinity. For every finite $N$ and $t$ we can define an empirical measure generated by the particle configuration. It~is a point measure with atoms at integer points. An atom at a point $k$ equals the proportion of particles with coordinate $k$ at time $t$. It is convenient in our case to consider an empirical tail function corresponding to the measure. It means that we consider $ \xi _{N,x}(t)={\displaystyle \frac{1}{N}}\, \sum \limits _{i=1}^{N}\mathbf{1}\left(x_{i}(t)\geq x\right), $ the proportion of particles having coordinates not less than $x\in \mathbf{R}$. The problem is to find an appropriate time scale $t_{N}$ and a sequence of interaction parameters $\mu_{N}$ to obtain a non-trivial limit dynamics of the process $\xi_{N, [xN]}(t_{N})$ as $N\rightarrow \infty $. The cases $\alpha\neq \beta$ and $\alpha=\beta$ require different scalings of time and of the interaction constant $\mu_{N}$. 
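In code the empirical tail function is elementary; the following sketch (with a toy configuration chosen purely for illustration) checks its defining properties: it is nonincreasing in $x$, with limits $1$ and $0$.

```python
def empirical_tail(positions, x):
    """xi_{N,x}: the proportion of particles with coordinate >= x."""
    return sum(1 for p in positions if p >= x) / len(positions)

positions = [3, 1, 4, 1, 5, 9, 2, 6]        # a toy configuration (N = 8)
values = [empirical_tail(positions, x) for x in range(-2, 12)]

# Defining properties of a tail function: nonincreasing, with limits 1 and 0.
assert all(a >= b for a, b in zip(values, values[1:]))
assert values[0] == 1.0 and values[-1] == 0.0
```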
We prove that there exist non-trivial deterministic limit processes in both cases as $N$ goes to infinity if we rescale time and the interaction constant as $t_{N}=tN,\,\mu_{N}=\mu/N$ in the first case and as $t_{N}=tN^{2},\,\mu_{N}=\mu/N^{2}$ in the second case. The limit processes are defined as weak solutions of certain partial differential equations (PDE). It should be noted that the PDE relating to the zero drift situation is the famous \emph{Kolmogorov--Petrowski--Piscounov} equation (KPP-equation, \cite{KoPP}). This result was announced in \cite{MaSch}. Another issue we address in the paper is the long time evolution of the particle system with a fixed number of particles. It is easy to see that the Markov chain $x(t)=\{x_{i}(t),\,i=1,\ldots ,N\},\,t\geq 0,$ is not ergodic. Nevertheless the particle system possesses some relative stability. We introduce new coordinates $y_{i}(t)=x_{i}(t)-\min_{j}x_{j}(t),\,i=1,\ldots ,N$, and prove that the countable Markov chain $y(t)=\{y_{i}(t),\,i=1,\ldots ,N\},\,t\geq 0,$ is ergodic and converges exponentially fast to its stationary distribution. We show also that the center of mass of the system moves with an asymptotically constant speed. It appears that due to the interaction between the particles this speed differs from the mean drift of the free particle motion. It should be noted that the choice of the interaction may vary depending on the situation. Various modifications of the model can be considered and similar results can be obtained using the same methods. We have chosen the described model just for the sake of concreteness. Probabilistic models for parallel computation have been considered before by other authors. The paper~\cite{Mitra} deals with a model consisting of two interacting processors ($N=2$). 
It contains a rigorous study of the long-time behavior of the system and formulae for some performance characteristics. Unfortunately, there are not too many mathematical results about multi-processor models (\cite{MadWalMes,GuAkFu,AkChFuSer,T1,T3}). Usually the mathematical components of these papers take the form of preparatory considerations preceding some large numerical simulation. The paper~\cite{GrMP} is of special interest because it rigorously studies the behavior of a model of parallel computation with $N$ processor units in the limit $N\rightarrow\infty$. The stochastic dynamics of~\cite{GrMP} is different from the dynamics studied in the present paper, and the main results of~\cite{GrMP} concern a so-called thermodynamical limit. The authors prove that in the limit the evolution of the system can be described by some integro-differential equation. In the present study we propose a model whose dynamics is simple from the point of view of numerical simulations and, at the same time, provides us with a new probabilistic interpretation of some important PDEs including the classical KPP-equation. The paper is organised as follows. We formally define the particle system, introduce some notation and formulate the main results in Section \ref{model}. Sections \ref{proof} and \ref{stability} contain the proofs of the main results. In Section \ref{travel} we discuss solutions of the limiting equations. \paragraph{Acknowledgments.} We are thankful to Dr.\ T.~Voznesenskaya (Faculty of Computational Mathematics and Cybernetics, Moscow University) who first introduced us to stochastic algorithms for parallel computations. The authors would like to thank Prof.\ V.~Bogachev (Faculty of Mechanics and Mathematics, Moscow University) for the helpful discussions on convergence of measures on general topological vector spaces and for the suggested references. We are also grateful to Prof.\ V.~Malyshev for his warm encouragement and valuable comments on the present manuscript. 
\section{The model and main results} \label{model} Formally, the process $x(t)=\{x_{i}(t),i=1,\ldots ,N\}$, describing the positions of the particles, is a continuous time countable Markov chain taking values in ${\textbf {Z}}^{N}$ and having the following generator \begin{multline} G_{N}g(x)=\sum\limits _{i=1}^{N} \left( \alpha \left(g\left(x+e_{i}^{\left( {\scriptscriptstyle N} \right)}\right)-g(x)\right)+ \beta \left(g\left(x-e_{i}^{\left({\scriptscriptstyle N} \right)}\right)-g(x)\right) \right)+\\ {}+\sum \limits _{i=1}^{N}\sum \limits _{j\neq i}\left(g\left(x-e_{i}^{\left( {\scriptscriptstyle N} \right)} \left(x_{i}-x_{j}\right)\right)-g(x)\right)I_{\{x_{i}>x_{j}\}} \frac{\mu_{N}}{N}\, , \end{multline} where $x=(x_{1},\ldots ,x_{N})\in {\textbf {Z}}^{N}$, $g:\;{\textbf {Z}}^{N}\rightarrow \textbf {R}$ is a bounded function, $e_{i}^{\left({\scriptscriptstyle N} \right)}$ is the $N$-dimensional vector with all components zero except the $i$-th, which equals $1$, and $I_{\{x_{i}>x_{j}\}}$ is the indicator of the set $\{x_{i}>x_{j}\}$. Define \begin{equation} \xi _{N,k}(t)=\frac{1}{N}\, \sum _{i=1}^{N}I_{\{x_{i}(t)\geq k\}},\quad k\in \mathbf{Z}. \label{ksi} \end{equation} The process $\xi _{N}(t)=\{\xi _{N,k}(t),\, k\in \mathbf{Z}\}$ is a Markov process with state space $H_{N}$, the set of all non-negative nonincreasing sequences $z=\{z_{k},k\in {\textbf {Z}}\}$ such that $z_{k}\in \{l/N,\,l=0,1,\ldots,N\}$ for any $k\in {\textbf {Z}}$ and \[ \lim _{k\rightarrow -\infty }z_{k}=1,\qquad \lim _{k\rightarrow +\infty }z_{k}=0. 
\] The generator of the process $\xi _{N}(t)$ is given by the following formula \begin{align} L_{N}f(z) & =N\sum \limits _{k}((f(z+e_{k}/N)-f(z))\alpha (z_{k-1}-z_{k})+(f(z-e_{k}/N)-f(z))\beta (z_{k}-z_{k+1}))\label{L2}\\ & +N\mu _{N}\sum \limits _{l<k}(f(z-(e_{l+1}+\ldots +e_{k})/N)-f(z))(z_{k}-z_{k+1})(z_{l}-z_{l+1})\, ,\nonumber \end{align} where $e_{i},i\in {\textbf {Z}}$, is the infinite dimensional vector with all components zero except the $i$-th, which equals $1$, and $f:\;H_{N}\rightarrow \textbf {R}$ is a bounded function. \subsection{Hydrodynamical behavior of the particle system} \label{hydro} Denote $\zeta_{N,x}(t)=\xi_{N,[Nx]}(t),\, \, x\in \mathbf{R}$. The process $\zeta _{N}(t)$ takes values in $H=H(\mathbf{R})$, the set of all non-negative nonincreasing functions which are right continuous with left limits and have the following limits \[ \lim _{x\rightarrow -\infty }\psi (x)=1,\, \lim _{x\rightarrow \infty }\psi (x)=0. \] Denote by $S(\textbf{R})$ the Schwartz space of infinitely differentiable functions such that for all $m,n\in\textbf{Z}_+$ $$ \|f\|_{m,n}=\sup_{x\in\textbf{R}}|x^m f^{(n)}(x)|<\infty . $$ Recall that $S(\textbf{R})$ equipped with the natural topology given by the seminorms $\|\cdot\|_{m,n}$ is a Fr\'echet space~(\cite{RS}). Define for every $h\in H$ a functional \[ (h,f)=\int \limits _{\textbf{R}}h(x)f(x)dx,\,\, f\in S(\textbf{R}), \] on the Schwartz space $S(\textbf{R})$. The following bound (which uses $0\leq h\leq 1$) yields that for each $h\in H$ the map $(h,\cdot)$ is a~continuous linear functional on $S(\textbf{R})$: $$ |(h,f)|\leq \int \limits _{\textbf{R}}|f(x)|(1+x^{2})\,\frac{dx}{1+x^{2}}\leq \pi(\|f\|_{\infty}+\|x^{2}f\|_{\infty})= \pi(\|f\|_{0,0}+\|f\|_{2,0}), $$ where $\|\cdot\|_{\infty}$ is the supremum norm. Thus the set of functions $H(\textbf{R})$ is naturally embedded into the space of all continuous linear functionals on $S({\textbf{R}})$, namely into the space $S'(\textbf{R})$ of tempered distributions. 
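The continuity bound above can be illustrated numerically. The sketch below uses the illustrative choices $h(x)=\mathbf{1}(x<0)$ (an element of $H(\mathbf{R})$) and $f(x)=e^{-x^{2}}$, and approximates the pairing $(h,f)$ by a Riemann sum.

```python
import math

# Illustrative profile h in H(R): nonincreasing, limits 1 and 0.
h = lambda x: 1.0 if x < 0 else 0.0
f = lambda x: math.exp(-x * x)          # a Schwartz test function

# Riemann-sum approximation of the pairing (h, f) = int h(x) f(x) dx.
dx = 1e-3
xs = [k * dx for k in range(-20_000, 20_000)]
pairing = sum(h(x) * f(x) for x in xs) * dx

# Seminorms ||f||_{0,0} = sup|f(x)| and ||f||_{2,0} = sup|x^2 f(x)|.
n00 = max(abs(f(x)) for x in xs)
n20 = max(abs(x * x * f(x)) for x in xs)

# For this h and f the pairing is int_{-inf}^{0} e^{-x^2} dx = sqrt(pi)/2,
# and the continuity bound |(h, f)| <= pi*(n00 + n20) holds comfortably.
assert abs(pairing - math.sqrt(math.pi) / 2) < 2e-3
assert abs(pairing) <= math.pi * (n00 + n20)
```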
We will interpret $\zeta _{N}(t)$ as a stochastic process taking its values in the space $S'(\textbf{R})$. There are two reasons for embedding $H(\textbf{R})$ into $S'(\textbf{R})$ and considering the $S'(\textbf{R})$-valued processes. The first reason is that due to some nice topological properties of $S'(\textbf{R})$ we can use in Section~\ref{proof} many powerful results from the theory of weak convergence of probability distributions on topological vector spaces. And, secondly, the choice of $S'(\textbf{R})$ as a state space is convenient from the point of view of a possible future study of stochastic fluctuation fields around the deterministic limits obtained in our main Theorem~\ref{thydro}. In the sequel we mainly deal with the strong topology ($\mbox{\itshape{s.t.}}$) on $S'(\textbf{R})$ (see Section~\ref{top}). From now on we fix some $T>0$ and consider $\zeta _{N}$ as a random element in the Skorokhod space $D([0,T], S'(\textbf{R}))$ of all mappings of $[0,T]$ to $(S'(\textbf{R}),\mbox{\itshape{s.t.}})$ that are right continuous and have left-hand limits in the strong topology on $S'(\textbf{R})$. Note that $(S'(\textbf{R}),\mbox{\itshape{s.t.}})$ is not a metrisable topological space, therefore it is not evident how to define the Skorokhod topology on the space $D([0,T], S'(\textbf{R}))$. To do this we follow~\cite{Mitoma} and refer to Section~\ref{top}. Now we are able to consider the probability distributions of the processes $\left(\zeta _{N}(tN^{a}), t\in [0,T]\right)$, $a=1,2,$ as probability measures on a measurable space $\left(D([0,T], S'(\textbf{R})),\mathcal{B}_{D([0,T], S'(\textbf{R}))}\right)$ where $\mathcal{B}_{D([0,T], S'(\textbf{R}))}$ is the corresponding Borel $\sigma$-algebra. It was proved in~\cite{Jakub} that $\mathcal{B}_{D([0,T], S'(\textbf{R}))}=\mathcal{C}_{D([0,T], S'(\textbf{R}))}$, where $\mathcal{C}_{D([0,T], S'(\textbf{R}))}$ is the $\sigma$-algebra of cylindrical subsets. 
Consider the following two Cauchy problems \begin{align} u_{t}(t,x) & =-\lambda u_{x}(t,x)+\mu (u^{2}(t,x)-u(t,x))\, ,\label{eq:ur-asym}\\ u(0,x) & =\psi (x)\nonumber \end{align} and \begin{align} u_{t}(t,x) & =\gamma u_{xx}(t,x)+\mu (u^{2}(t,x)-u(t,x))\, ,\label{eq:ur-KPP}\\ u(0,x) & =\psi (x)\nonumber \end{align} where $u_{t},u_{x}$ and $u_{xx}$ are the first and second derivatives of $u$ with respect to $t$ and $x$. Notice that the equation~(\ref{eq:ur-KPP}) is a particular case of the famous \emph{Kolmogorov--Petrowski--Piscounov} equation (KPP-equation, \cite{KoPP}). We will deal with weak solutions of the equations (\ref{eq:ur-asym}) and (\ref{eq:ur-KPP}) in the sense of Definition \ref{weak}. Fix $T>0$ and denote by $C_{0,T}^{\infty}=C_{0}^{\infty}([0,T]\times {\textbf {R}})$ the space of infinitely differentiable functions with compact support which vanish at $t=T$. \begin{dfn} \label{weak} \begin{description} \item[(i)] The bounded measurable function \(u(t,x)\) is called a weak (or generalized) solution of the Cauchy problem (\ref{eq:ur-asym}) in the region $[0,T]\times \textbf{R}$ if the following integral equation holds for any function \(f \in C_{0,T}^{\infty}\) \begin{multline*} \int\limits_{0}^{T}\int\limits_{\textbf{R}}\left(u(t,x)(f_{t}(t,x) +\lambda f_{x}(t,x)) +\mu u(t,x)(1-u(t,x))f(t,x)\right)dxdt\\ +\int\limits_{\textbf{R}}u(0,x)f(0,x)dx=0. \end{multline*} \item[(ii)] The bounded measurable function \(u(t,x)\) is called a weak (or generalized) solution of the Cauchy problem (\ref{eq:ur-KPP}) in the region $[0,T]\times \textbf{R}$ if the following integral equation holds for any function \(f \in C_{0,T}^{\infty}\) \begin{multline*} \int\limits_{0}^{T}\int\limits_{\textbf{R}}\left(u(t,x)(f_{t}(t,x) +\gamma f_{xx}(t,x)) +\mu u(t,x)(1-u(t,x))f(t,x)\right)dxdt\\ +\int\limits_{\textbf{R}}u(0,x)f(0,x)dx=0. \end{multline*} \end{description} \end{dfn} In Subsection~\ref{edinstv} we will show that both Cauchy problems (\ref{eq:ur-asym}) and (\ref{eq:ur-KPP}) have 
\emph{unique weak solutions} in the sense of Definition~\ref{weak}. Here we just want to mention that this problem is not trivial. Indeed, the equation (\ref{eq:ur-asym}) is an example of a quasilinear first order partial differential equation. It is known that in general equations of this type might have more than one weak solution, and it is only possible to guarantee uniqueness of the solution which satisfies the so-called entropy condition. The most general form of this condition was introduced by Kruzhkov in \cite{Kruzhkov}, where he also proved his famous uniqueness theorem. Fortunately, in our particular case of the equation (\ref{eq:ur-asym}) the situation is quite simple due to the simplicity of the characteristics: they are given by the straight lines $x(t)=\lambda t+ C$, do not intersect each other and do not produce shocks. Detailed discussions of the problem of uniqueness for equations~(\ref{eq:ur-asym}) and (\ref{eq:ur-KPP}) are presented in Subsection~\ref{edinstv}. The first theorem we formulate describes the evolution of the system at the hydrodynamical scale. \begin{thr} \label{thydro} Assume that the initial particle configuration \(\xi_{N}(0)=\{\xi_{N,k}(0),\,k\in {\textbf{Z}}\}\) is such that for any function \(f\in S(\textbf{R})\) \begin{equation} \label{initial} \lim\limits_{N\rightarrow\infty}\frac{1}{N}\sum\limits_{k}\xi_{N,k}(0)f(k/N)= \int\limits_{{\textbf{R}}}\psi(x)f(x)dx, \end{equation} where \(\psi \in H(\textbf{R})\). 
\begin{description} \item[(i)] If \(\alpha-\beta=\lambda\neq 0\) and \(\mu_{N}=\mu/N\), then the sequence $\{Q_{N,\lambda}^{\left(T\right)}\}_{N=2}^{\infty}$ of probability distributions of the random processes $\{\zeta_{N}(tN),\, t\in [0,T]\}_{N=2}^{\infty}$ converges weakly as \(N\rightarrow \infty\) to the probability measure $Q_{\lambda}^{\left(T\right)}$ on $D([0,T], S'(\textbf{R}))$ supported by a trajectory $u(t,x)$ which is the unique weak solution of the equation (\ref{eq:ur-asym}) with the initial condition \(u(0,x)=\psi(x)\), and, as a function of $x$, $u(t,\cdot)\in H(\textbf{R})$ for any $t\geq 0$. \item[(ii)] If \(\alpha=\beta=\gamma>0\) and \(\mu_{N}=\mu/N^{2}\), then the sequence $\{Q_{N,\gamma}^{\left(T\right)}\}_{N=2}^{\infty}$ of probability distributions of the random processes $\{\zeta_{N}(tN^{2}),\, t\in [0,T]\}_{N=2}^{\infty}$ converges weakly as \(N\rightarrow \infty\) to the probability measure $Q_{\gamma}^{\left(T\right)}$ on $D([0,T], S'(\textbf{R}))$ supported by a trajectory $u(t,x)$ which is the unique weak solution of the equation (\ref{eq:ur-KPP}) with the initial condition \(u(0,x)=\psi(x)\), and, as a function of $x$, $u(t,\cdot)\in H(\textbf{R})$ for any $t\geq 0$. \end{description} \end{thr} \subsection{Long time behavior of the particle system with a fixed number of particles} \label{connection} The number of particles is fixed in this section. Consider the following stochastic process $y(t)=(y_{1}(t),\ldots ,y_{N}(t))$, where $$y_{i}(t)=x_{i}(t)-\min\limits_{j}x_{j}(t).$$ Note that $x_k-x_l=y_k-y_l$ for any pair $k,l$. It is easy to see that $y(t)$ is a continuous time Markov chain on the state space $$\Gamma =\bigcup _{k}\Gamma _{k}\subset \mathbf{Z}_{+}^{N},$$ where $\Gamma _{k}:=\left\{ \left(z_{1},\ldots ,z_{k-1},0,z_{k+1}, \ldots,z_{N}\right):\, z_{j}\in \mathbf{Z}_{+}\right\}$. 
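The relative coordinates $y_{i}(t)$ are straightforward to track in a simulation of the embedded jump chain; the sketch below (with illustrative rates and an arbitrary initial configuration) checks the identities $\min_{i}y_{i}=0$ and $x_{k}-x_{l}=y_{k}-y_{l}$.

```python
import random

def step(x, alpha, beta, mu_N, rng):
    """One transition of the embedded jump chain of the particle system."""
    N = len(x)
    i = rng.randrange(N)
    u = rng.uniform(0.0, alpha + beta + mu_N)
    if u < alpha:
        x[i] += 1                     # first clock: jump right
    elif u < alpha + beta:
        x[i] -= 1                     # second clock: jump left
    else:
        j = rng.randrange(N)          # third clock: rollback interaction
        if x[i] > x[j]:
            x[i] = x[j]

rng = random.Random(1)
x = [rng.randrange(-5, 6) for _ in range(20)]   # arbitrary initial positions
for _ in range(5000):
    step(x, 1.0, 0.5, 0.3, rng)

y = [xi - min(x) for xi in x]         # relative coordinates y_i = x_i - min_j x_j
assert min(y) == 0 and all(v >= 0 for v in y)
assert all(x[k] - x[l] == y[k] - y[l] for k in range(20) for l in range(20))
```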
\begin{thr} \label{ergodic} The Markov chain $\left(y(t),t\geq 0\right)$ is ergodic and converges exponentially fast to its stationary distribution $\pi$: $$ \sum_{y\in\Gamma} |P(y(t)=y)-\pi(y)| \leq C_{1}\exp(-C_{2}t) $$ uniformly in the initial distribution of $y(0)$. \end{thr} \section{Proof of Theorem \ref{thydro}} \label{proof} \subsection{Plan of the proof} The proof of the convergence uses the following well-known general idea (see, for example, \cite[\S~5]{SmFom}). Let $\{a_n\}$ be a sequence in some Hausdorff topological space and assume that $\{a_n\}$ satisfies the following two properties: (a) any subsequence of $\{a_n\}$ contains a converging subsequence (this property is called \emph{sequential compactness}); (b) $\{a_n\}$ has at most one limit point. Then the sequence $\{a_n\}$ has a limit. In our situation the role of $\{a_n\}$ is played by the sequences $\{Q_{N,\lambda}^{\left(T\right)}\}_{N=2}^{\infty}$ and $\{Q_{N, \gamma}^{\left(T\right)}\}_{N=2}^{\infty}$. Our proof consists of the following steps. \emph{Step 1}. We fix an arbitrary $T>0$ and prove that the sequences of probability measures $\{Q_{N,\lambda}^{\left(T\right)}\}_{N=2}^{\infty}$ and $\{Q_{N, \gamma}^{\left(T\right)}\}_{N=2}^{\infty}$ are tight. We use the Mitoma theorem (\cite{Mitoma}) and apply martingale techniques widely used in the theory of hydrodynamical limits of interacting particle systems (\cite{PreDeMas,KipLan}). It is important to note that if a topological space $\mathcal{V}$ is not metrisable then, generally speaking, the tightness of a family of distributions on $\mathcal{V}$ does not imply sequential compactness (see, for example, \cite[V.~2, \S~8.6]{VBogach}). So, in general, the above property~(a) does not follow directly from Step~1. But in our concrete case $\mathcal{V}=D([0,T], S'(\textbf{R}))$ we can proceed as follows. 
It was shown in~\cite{Jakub} that any compact subset of $D([0,T], S'(\textbf{R}))$ is metrisable. Due to this property we can apply the theorem from~\cite[Th.~2, \S~5]{SmFom} which states that (under the assumption of metrisability of compact subsets) the tightness of a family of measures implies its sequential compactness. All this justifies the next step. \emph{Step 2}. We show that a measure which is a limit of some subsequence of the sequence $\{Q_{N,\lambda}^{\left(T\right)}\}_{N=2}^{\infty}$ (or $\{Q_{N, \gamma}^{\left(T\right)}\}_{N=2}^{\infty}$) is supported by the weak solutions of the partial differential equation (\ref{eq:ur-asym}) (or, correspondingly, (\ref{eq:ur-KPP})\,). Then we note that each of the equations (\ref{eq:ur-asym}) and (\ref{eq:ur-KPP}) has a unique weak solution (Subsection~\ref{edinstv}). This gives the above property (b). \subsection{Technical lemmas} We start with some bounds which will be used throughout the proof. Denote \[ R_{f}(z)=\frac{1}{N}\sum _{k}f(k/N)z_{k}, \] for $z\in H_{N}$ and $f\in S({\textbf{R}})$. \begin{lm} \label{lb1} \begin{description} \item[(i)] If \(\alpha\neq \beta\) and \(\mu_{N}=\mu/N\), then for any $z\in H_{N}$ \begin{equation} \label{b1} \left|L_{N}R_{f}(z)\right|\leq \frac{C}{N}, \end{equation} and \begin{equation} \label{b2} N\left(L_{N}R_{f}^{2}(z)-2R_{f}(z)L_{N}R_{f}(z)\right)= O\left(\frac{1}{N}\right). \end{equation} \item[(ii)] If \(\alpha=\beta\) and \(\mu_{N}=\mu/N^2\), then for any $z\in H_{N}$ \begin{equation} \label{b12} \left|L_{N}R_{f}(z)\right|\leq \frac{C}{N^2}, \end{equation} and \begin{equation} \label{b22} N^2\left(L_{N}R_{f}^{2}(z)-2R_{f}(z)L_{N}R_{f}(z)\right)= O\left(\frac{1}{N^2}\right). \end{equation} \end{description} In both cases $C=C(f,\alpha ,\beta ,\mu )$. \end{lm} \paragraph{Proof of Lemma \ref{lb1}.} We will prove the bounds (\ref{b1}) and (\ref{b2}); the other ones can be proved similarly. 
We start with the bound (\ref{b1}). Using the equations \begin{align*} R_{f}(z+e_{k}/N)-R_{f}(z)& =\frac{f(k/N)}{N^{2}},\\ R_{f}(z-e_{k}/N)-R_{f}(z)& =-\frac{f(k/N)}{N^{2}} \end{align*} we get that for every $z\in H_{N}$ \begin{align*} L_{N}R_{f}(z)= & \frac{1}{N}\sum \limits _{k}z_{k}(\beta f((k-1)/N)-(\alpha +\beta )f(k/N)+\alpha f((k+1)/N))\\ & -\frac{\mu }{N^{2}}\sum\limits_{k}f(k/N)z_{k}(1-z_{k}). \end{align*} For any function $f\in S\left(\mathbf{R}\right)$ consider its upper Darboux sum \begin{equation} \label{e-uds} U_N^+ (f)=\, \frac{1}{N}\, \sum _{k\in \mathbf{Z}}\, \max_{y\in \left[k/N,(k+1)/N\right]}f\left(y\right). \end{equation} Since $U_N^+ (f)\rightarrow \int f(x)\, dx$ as $N\rightarrow \infty $, the sequence $\left\{ U_N^+ (f)\right\} _{N=1}^{\infty }$ is bounded in $N$ for any fixed $f$. We have uniformly in~$z\in H_{N}$ \begin{eqnarray*} \left|L_{N}R_{f}(z)\right| & \leq & \frac{1}{N}\left( \sum_{k}\left|\alpha (f\left((k+1)/N\right)-f\left(k/N\right))- \beta (f\left(k/N\right)-f\left((k-1)/N\right) )\right|\right)\\ & + & \frac{\mu }{N}\left(\frac{1}{N}\sum _{k}\left|f(k/N)\right|\right)\\ & \leq & \frac{1}{N}\left(|\alpha -\beta |U_N^+ \left(\left|f_{x}\right|\right)+\mu U_N^+ \left(\left|f\right|\right)\right), \end{eqnarray*} where $f_{x}=df(x)/dx$. So the bound (\ref{b1}) is proved. Let us prove the bound (\ref{b2}). Note that $L_{N}=L_{N}^{\left(0\right)}+L_{N}^{\left(1\right)}$, where \begin{multline*} L_{N}^{\left(0\right)}f(z)=N\sum _{k}(f(z+e_{k}/N)-f(z))\alpha (z_{k-1}-z_{k})+\\ N\sum _{k}(f(z-e_{k}/N)-f(z))\beta (z_{k}-z_{k+1}) \end{multline*} and \[ L_{N}^{\left(1\right)}f(z)=\mu\sum _{l<k}(f(z-(e_{l+1}+\ldots +e_{k})/N)-f(z))(z_{k}-z_{k+1})(z_{l}-z_{l+1}). 
\] Using the equations \begin{align*} R_{f}^{2}(z+e_{k}/N)-R_{f}^{2}(z) & =\left(2R_{f}(z)+\frac{f(k/N)}{N^{2}}\right)\frac{f(k/N)}{N^{2}},\\ R_{f}^{2}(z-e_{k}/N)-R_{f}^{2}(z) & =-\left(2R_{f}(z)-\frac{f(k/N)}{N^{2}}\right)\frac{f(k/N)}{N^{2}}, \end{align*} one can obtain that for any $z\in H_{N}$ \begin{align} L_{N}^{\left(0\right)}R_{f}^{2}(z) & =\alpha N\sum \limits _{k}(R_{f}^{2}(z+e_{k}/N)-R_{f}^{2}(z))(z_{k-1}-z_{k})\nonumber \\ & +\beta N\sum \limits _{k}(R_{f}^{2}(z-e_{k}/N)-R_{f}^{2}(z))(z_{k}-z_{k+1})\nonumber \\ & =2R_{f}(z)\frac{\alpha }{N}\sum \limits _{k}f(k/N)(z_{k-1}-z_{k})+\frac{\alpha }{N^{3}}\sum \limits _{k}f^{2}(k/N)(z_{k-1}-z_{k})\nonumber \\ & -2R_{f}(z)\frac{\beta }{N}\sum \limits _{k}f(k/N)(z_{k}-z_{k+1})+\frac{\beta }{N^{3}}\sum \limits _{k}f^{2}(k/N)(z_{k}-z_{k+1})\nonumber \\ & =2R_{f}(z)L_{N}^{\left(0\right)}R_{f}(z)+O\left(\frac{1}{N^{2}}\right). \label{LL1} \end{align} Direct calculation gives that for any function $g_{k,j}(z)=z_{k}z_{j},k<j$, on $H_{N}$ \begin{equation} L_{N}^{\left(1\right)}g_{k,j}(z)=-\frac{\mu }{N}(z_{k}z_{j}(1-z_{k})+z_{k}z_{j}(1-z_{j}))+\frac{\mu }{N^{2}}z_{j}(1-z_{k}). \end{equation} Using this formula we get that for any $z\in H_{N}$ \begin{align} L_{N}^{\left(1\right)}R_{f}^{2}(z) & =-2R_{f}(z)\frac{\mu }{N^{2}}\sum \limits _{k}f(k/N)z_{k}(1-z_{k})\nonumber \\ & +\frac{2\mu }{N^{4}}\sum \limits _{k<j}f(k/N)f(j/N)z_{j}(1-z_{k})+ \frac{\mu }{N^{4}}\sum \limits _{k}f^{2}(k/N)z_{k}(1-z_{k})\nonumber \\ & =2R_{f}(z)L_{N}^{\left(1\right)}R_{f}(z)+O\left(\frac{1}{N^{2}}\right). \label{LL2} \end{align} Summing the formulas (\ref{LL1}) and (\ref{LL2}) we obtain \begin{equation} \label{sum1} L_{N}R_{f}^{2}(z)=2R_{f}(z)L_{N}R_{f}(z)+O\left(\frac{1}{N^{2}}\right). \end{equation} Lemma \ref{lb1} is proved. \subsection{Tightness} We will make all considerations for part ${\bf (i)}$ ($\alpha\neq\beta$ and $\mu_{N}=\mu/N$) of the theorem. 
All reasonings and conclusions remain valid for part ${\bf (ii)}$ ($\alpha=\beta$ and $\mu_{N}=\mu/N^{2}$) of the theorem with evident changes. We will denote by $P_{N,\lambda}^{\left(T\right)}$ the probability distribution on the path space $D([0,T], H_{N})$ corresponding to the process $\xi_{N}(t)$ and by $E_{N,\lambda}^{\left(T\right)}$ the expectation with respect to this measure. Theorem 4.1 in \cite{Mitoma} (see also Section~\ref{mit}) yields that the tightness of the sequence $\{Q_{N, \lambda}^{\left(T\right)}\}_{N=2}^{\infty}$ will be proved if we prove the tightness of the sequence of distributions of the one-dimensional projections $\{(\zeta_{N}(tN),f),t\in [0,T]\}_{N=2}^{\infty}$ for every $f\in S({\textbf{R}})$. So fix $f\in S({\textbf{R}})$ and consider the sequence of distributions of the processes $(\zeta _{N}(tN), f),\, t\in [0,T]$. Note that the probability distribution of the process $(\zeta _{N}(tN), f)$ is a probability measure on $D([0,T],{\textbf{R}})$, the Skorokhod space of real-valued functions. By the definition of the process $\zeta _{N}(tN)$ we have that \[ (\zeta _{N}(tN),f)=\sum_{k}\xi_{N,k}(tN) \int\limits_{k/N}^{(k+1)/N}f(x)dx. \] It is easy to see that $$ (\zeta_{N}(tN),f)=R_{f}(\xi_{N}(tN)) + \phi_{N}(t), $$ where the random process $\phi_{N}(t)$ satisfies $|\phi_{N}(t)|<C(f)/N$ for any $t\geq 0$ and sufficiently large $N$. Therefore it suffices to prove tightness of the sequence of distributions of the random processes $\{R_{f}(\xi_{N}(tN)),\,t\in [0,T]\}_{N=2}^{\infty}$. Introduce two random processes \begin{equation} \label{mart1} W_{f,N}(t)=R_{f}(\xi _{N}(tN))-R_{f}(\xi _{N}(0))- N\int \limits_{0}^{t}L_{N}R_{f}(\xi _{N}(sN))ds, \end{equation} and \[ V_{f,N}(t)=(W_{f,N}(t))^{2}-\int\limits_{0}^{t}Z_{f,N}(s)ds, \] where \begin{equation} \label{zz} Z_{f,N}(s)=N\left(L_{N}R_{f}^{2}(\xi _{N}(sN))-2R_{f}(\xi _{N}(sN))L_{N}R_{f}(\xi _{N}(sN))\right).
\end{equation} It is well known (Theorem 2.6.3 in \cite{PreDeMas} or Lemma A1.5.1 in \cite{KipLan}) that the processes $W_{f,N}(t)$ and $V_{f,N}(t)$ are martingales. The bound (\ref{b1}) yields that \[ \left|N\int \limits _{\tau }^{\tau +\theta }L_{N}R_{f}(\xi _{N}(sN))ds\right|\leq C\theta \quad (a.s.) \] for any time moment $\tau $. Thus the sequence of probability distributions of the random processes $\{N\int_{0}^{t}L_{N}R_{f}(\xi_{N}(sN))ds,\,t\in [0,T] \}_{N=2}^{\infty}$ is tight by Theorems~\ref{bil} and~\ref{ald} from the Appendix. The bound (\ref{b2}) yields that \begin{equation} \label{kvad} E_{N,\lambda}^{\left(T\right)}(W_{f,N}(\tau +\theta ) -W_{f,N}(\tau ))^{2}=E_{N, \lambda}^{\left(T\right)} \left(\int\limits_{\tau}^{\tau + \theta }Z_{f,N}(s)ds\right)\leq \frac{C\theta }{N} \end{equation} for any stopping time $\tau \geq 0$, since $V_{f,N}(s)$ is a martingale. Using this estimate and the Chebyshev inequality we obtain that the sequence of probability distributions of the martingales $\{W_{f,N}(t),\,t\in [0,T]\}_{N=2}^{\infty }$ is also tight by Theorem~\ref{ald}. Thus the sequence of probability distributions of the processes $\{R_{f}(\xi_{N}(tN)),\,t\in [0,T]\}_{N=2}^{\infty}$ is tight by the equation (\ref{mart1}) and the assumption (\ref{initial}) and, hence, the sequence of probability measures $\{Q_{N, \lambda}^{\left(T\right)}\}_{N=2}^{\infty}$ is tight by Theorem 4.1 in \cite{Mitoma}. \subsection{Characterization of a limit point} We are going to show now that there is a unique limit point of the sequence $\{Q_{N, \lambda}^{\left(T\right)}\}_{N=2}^{\infty}$ and that this limit point is supported by trajectories which are weak solutions of the partial differential equation (\ref{eq:ur-asym}) in the sense of Definition \ref{weak}.
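Before proceeding, the $O(1/N)$ replacement of the pairing $(\zeta_{N}(tN),f)$ by the Riemann-type sum $R_{f}(\xi_{N}(tN))$, used in the tightness argument above, can be seen in a small numerical sketch. The Gaussian test function and the frozen monotone step configuration below are our sample choices, not taken from the text:

```python
import math

def f(x):
    return math.exp(-x * x)  # smooth, rapidly decaying test function

def cell_integral(a, b, m=8):
    # composite Simpson rule for the cell integrals appearing in (zeta_N, f)
    h = (b - a) / m
    s = f(a) + f(b)
    for i in range(1, m):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

def discrepancy(N, K=6):
    # monotone step profile: xi_k = 1 for k/N < 1/2 and 0 otherwise,
    # truncated to |k/N| <= K where f is negligible
    occupied = [k for k in range(-K * N, K * N) if k < N // 2]
    R = sum(f(k / N) for k in occupied) / N                       # R_f(xi_N)
    P = sum(cell_integral(k / N, (k + 1) / N) for k in occupied)  # (zeta_N, f)
    return abs(P - R)

print(discrepancy(50), discrepancy(500))
```

The two printed discrepancies decay roughly like $1/N$, in line with the bound $|\phi_{N}(t)|<C(f)/N$.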
Let $f(s,x)\in C_{0,T}^{\infty}$ and denote $$ R_{f}(t,\xi_{N}(tN))=\frac{1}{N}\sum_{k}\xi_{N,k}(tN)f(t,k/N). $$ Define as before two random processes $$ W_{f,N}'(t)=R_{f}(t,\xi _{N}(tN))-R_{f}(0,\xi _{N}(0))- \int \limits_{0}^{t}\left(\partial/\partial s+NL_{N}\right) R_{f}(s,\xi_{N}(sN))ds, $$ and $$ V_{f,N}'(t)=(W_{f,N}'(t))^{2}-\int \limits _{0}^{t}Z_{f,N}'(s)ds, $$ where $$ Z_{f,N}'(s)=N\left(L_{N}R_{f}^{2}(s,\xi _{N}(sN))-2R_{f}(s,\xi _{N}(sN))L_{N}R_{f}(s,\xi _{N}(sN))\right). $$ By Lemma A1.5.1 in \cite{KipLan} the processes $W_{f,N}'(t)$ and $V_{f,N}'(t)$ are martingales. \begin{comment} Using the calculations that we have done in the proof of Lemma \ref{lb1} we obtain that for any $0\leq t \leq T$ \begin{align} W_{f,N}'(t)& = R_{f}(t,\xi _{N}(tN))-R_{f}(0,\xi _{N}(0)) \nonumber \\ &+\frac{1}{N}\int\limits_{0}^{t}\sum\limits_{k}(-f_{s}(s,k/N) -\lambda f_{x}(s,k/N))\xi_{N,k}(sN))ds \label{rasn} \\ &-\frac{\mu }{N}\int\limits_{0}^{t}\sum\limits_{k} f(s,k/N)\xi_{N,k}(sN)(1-\xi_{N,k}(sN))ds +O\left(\frac{1}{N}\right),\nonumber \end{align} where $f_{s}(s,x)=\partial f(s,x)/\partial s$ and $f_{x}(s,x)=\partial f(s,x)/\partial x$. We can rewrite (\ref{rasn}) as follows \end{comment} It is easy to see that \begin{multline} \label{rasn1} W_{f,N}'(t) =(\zeta_{N} (tN),f)- \int\limits_{0}^{t}(\zeta_{N}(sN),f_{s}+\lambda f_{x}+\mu f)ds - (\zeta_{N} (0),f)\\+ \mu\int\limits_{0}^{t}R_{f}(s,\xi^{2}(sN))ds+O\left(\frac{1}{N}\right). \end{multline} We are going to approximate the nonlinear term in (\ref{rasn1}) by quantities that make sense in the space of generalised functions, since we treat the distributions of the processes as probability measures on the space $D([0,T], S'(\textbf{R}))$. Let $\varkappa \in C_{0}^{\infty}({\textbf{R}})$ be a non-negative function such that $\int _{\textbf{R}}\varkappa (y)\, dy=1$.
Denote $\varkappa _{\varepsilon } (y)= \varkappa \left(y/\varepsilon \right)/\varepsilon$ for $0<\varepsilon\leq 1$ and let $ (\varkappa _{\varepsilon } \ast\varphi (s))(x)=\int_{{\textbf{R}}} \varkappa _{\varepsilon } (x-y)\varphi (y,s)dy $ be the convolution of a generalised function $\varphi (s,\cdot)$ with the test function $\varkappa _{\varepsilon }$. \begin{lm} \label{cut0} The following uniform estimate holds: $$ \left|R_{f}(s,\xi ^{2}(sN))-((\varkappa _{\varepsilon }*\zeta _{N}(sN))^{2},f)\right|\leq F_{1}(\varepsilon )+ F_{2}(\varepsilon N), $$ where the functions $F_1$ and $F_2$ do not depend on $\xi$ and $s$, and \[ \lim _{\varepsilon \downarrow 0}F_{1}(\varepsilon )= \lim _{r\rightarrow+\infty} F_{2}(r)=0. \] \end{lm} \emph{Proof.} For definiteness we assume that $\varkappa(x)=0$ for $x\in (-\infty,-1-\delta') \cup (1+\delta',\infty)$ for some positive $\delta'$. It is easy to see that if $x\in [k/N,(k+1)/N)$ for some $k$, then \begin{align*} (\varkappa _{\varepsilon }\ast \zeta_{N}(sN))(x)&= \sum_{j}\xi_{N,j}(sN)\int\limits_{j/N}^{(j+1)/N}\varkappa _{\varepsilon }(x-y)dy\\ &=\frac{1}{N}\sum\limits_{j}\varkappa _{\varepsilon } ((k-j)/N)\xi_{N,j}(sN)+ g_1(N,\varepsilon,x,\xi_{N}(sN)), \end{align*} where the function $g_1(N,\varepsilon,x,\xi_{N}(sN))$ can be bounded as follows: \begin{eqnarray*} |g_1(N,\varepsilon,x,\xi_{N}(sN))| & \leq & \sum _{j}\int \limits _{j/N}^{(j+1)/N}\left|\varkappa _{\varepsilon } (x-y)-\varkappa _{\varepsilon } \left(\frac{k-j}{N}\right)\right|\, dy\\ & \leq & \frac{1}{N}\sum _{m}\, 2\max _{w\in \left[(m-1)/N,m/N\right]}\frac{1}{\varepsilon ^{2}}\left|\varkappa '\left(\frac{w}{\varepsilon }\right)\right|\cdot \frac{1}{N}\\ & = & \frac{2}{\left(N\varepsilon \right)^{2}}\sum _{m} \max _{v\in\left[(m-1)/(N\varepsilon ),m/(N\varepsilon )\right]} \left|\varkappa'\left(v\right)\right| \, =\, \frac{2\,U_{N\varepsilon }^{+}\left(|\varkappa 
'|\right)} {N\varepsilon } \end{eqnarray*} (see (\ref{e-uds}) for the notation~$U^+$). Note that if $\varepsilon$ is fixed then $U_{N\varepsilon }^{+}\left(|\varkappa '|\right)=O(1)$ as $N\rightarrow\infty$. This representation implies that $$((\varkappa _{\varepsilon }\ast \zeta_{N}(sN))^2,f)= \frac{1}{N}\sum\limits_{k}f(k/N)\left( \frac{1}{N}\sum\limits_{j}\varkappa _{\varepsilon }((k-j)/N)\xi_{N,j}(sN)\right)^2 +O\left(\frac{1}{\varepsilon N}\right).$$ Therefore $$ |R_{f}(s,\xi^{2}(sN))- ((\varkappa _{\varepsilon }\ast \zeta_{N}(sN))^2,f)|\leq J_{f,s}(\delta',\varepsilon,N) + K(\varepsilon ,N)+O\left(\frac{1}{\varepsilon N}\right), $$ where $$ J_{f,s}(\delta',\varepsilon,N)= \frac{2}{N}\sum\limits_{k} |f(s,k/N)|\frac{1}{N}\sum\limits_{j:|j-k|<(1+\delta')\varepsilon N} \varkappa _{\varepsilon } ((k-j)/N)|\xi_{N,k}(sN)-\xi_{N,j}(sN)| $$ and \[ K(\varepsilon ,N)=Const\cdot \left| \frac{1}{N}\sum _{m}\varkappa _{\varepsilon } \left(\frac{m}{N}\right)-1 \right|. \] Evidently, $\displaystyle K(\varepsilon ,N)=Const\cdot \left| \frac{1}{N\varepsilon}\sum _{m}\varkappa \left(\frac{m}{N\varepsilon} \right)-1 \right|$ and, hence, $K(\varepsilon ,N)$ tends to $0$ in the limit ``$\varepsilon$ is fixed, $N\to\infty$''. So we can include $K(\varepsilon ,N)$ into $F_{2}(\varepsilon N)$. Consider now the term $J_{f,s}(\delta',\varepsilon,N)$. Using the monotonicity of the trajectories we obtain that for any $j$ such that $|j-k|<(1+\delta')\varepsilon N$ $$ \left|\xi_{N,k}(sN)-\xi_{N,j}(sN)\right|\leq \xi_{N,k-[(1+\delta')\varepsilon N]}(sN)-\xi_{N,k+[(1+\delta')\varepsilon N]}(sN). $$ Thus we have that $$ J_{f,s}(\delta',\varepsilon,N)\leq \frac{2}{N}\sum\limits_{k} |f(s,k/N)|(\xi_{N,k-[(1+\delta')\varepsilon N]}(sN)-\xi_{N,k+[(1+\delta')\varepsilon N]}(sN)).
$$ Summing by parts we get the following bound: \begin{align*} J_{f,s}(\delta',\varepsilon,N)&\leq \frac{2}{N}\sum\limits_{k} \left(|f(s,(k+[(1+\delta')\varepsilon N])/N)|- |f(s,(k-[(1+\delta')\varepsilon N])/N)|\right)\xi_{N,k}(sN)\\ & \leq\frac{2}{N}\sum\limits_{k} |f(s,(k+[(1+\delta')\varepsilon N])/N)-f(s,(k-[(1+\delta')\varepsilon N])/N)| \\ & \leq Const\cdot MD \cdot (1+\delta')\varepsilon, \end{align*} where $M=\max _{x,s}\left|f_{x}(s,x)\right|$ and $D$ is the diameter of $\mbox{supp}\,f_{x}(s,x)$. Note that the last inequality is uniform in the trajectories. Lemma \ref{cut0} is proved. \begin{lm} \label{cut} For every $\delta>0$ $$ \limsup\limits_{\varepsilon\rightarrow 0}\limsup\limits_{N\rightarrow \infty} P_{\lambda,N}^{\left(T\right)} \left(\left|\int\limits_{0}^{T}\Bigl(R_{f}(s,\xi^{2}(sN))- ((\varkappa _{\varepsilon }\ast \zeta_{N}(sN))^2,f)\Bigr)\,ds\right|>\delta\right)=0. $$ \end{lm} The proof of this lemma is omitted, since it is a direct consequence of Lemma~\ref{cut0}.
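The estimate of Lemma \ref{cut0} can be illustrated numerically on a frozen monotone step profile, for which $\xi^{2}=\xi$. The sketch below replaces the $C_{0}^{\infty}$ mollifier by a triangular kernel in order to have closed formulas; the kernel, the Gaussian test function and the parameter values are our assumptions, not taken from the text:

```python
import math

def f(x):
    return math.exp(-x * x)  # smooth decaying test function

def smooth_step(x, eps):
    # closed form of (kappa_eps * zeta)(x) for zeta = 1_{x<0} and the
    # triangular kernel kappa(u) = max(0, 1 - |u|)
    s = x / eps
    if s <= -1.0:
        return 1.0
    if s >= 1.0:
        return 0.0
    return (1.0 - s) ** 2 / 2.0 if s >= 0.0 else 1.0 - (1.0 + s) ** 2 / 2.0

def discrepancy(N, eps, K=6, m=20000):
    # R_f(xi^2) = R_f(xi) for the 0/1 step profile xi_k = 1_{k<0}
    R = sum(f(k / N) for k in range(-K * N, 0)) / N
    # ((kappa_eps * zeta_N)^2, f) by a Riemann sum on [-K, K]
    h = 2.0 * K / m
    B = sum(smooth_step(-K + i * h, eps) ** 2 * f(-K + i * h)
            for i in range(m + 1)) * h
    return abs(R - B)

print(discrepancy(200, 0.05), discrepancy(1000, 0.01))
```

The discrepancy shrinks when $\varepsilon$ decreases while $\varepsilon N$ stays large, which is exactly the regime $F_{1}(\varepsilon)+F_{2}(\varepsilon N)\to 0$ of the lemma.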
It is easy to see that for any $f\in C_{0,T}^{\infty}$ the map $F_{f,T,\varepsilon}(\varphi ):D([0,T],S')\rightarrow {\textbf{R}}_{+}$ defined by $$ F_{f,T,\varepsilon}(\varphi )=\left| \int\limits_{0}^{T}\left((\varphi(s),f_{s}+\lambda f_{x}+\mu f) -\mu ((\varkappa _{\varepsilon }\ast\varphi (s))^2,f)\right)ds +(\varphi (0),f)\right| $$ is continuous; therefore for any $\delta>0$ the set $\left\{ \varphi \in D([0,T],S'):F_{f,T,\varepsilon}(\varphi )>\delta \right\} $ is open and hence \[ \limsup\limits_{\varepsilon\rightarrow 0}Q_{\lambda}^{\left(T\right)}\left(\varphi: F_{f,T,\varepsilon}(\varphi )>\delta\right)\leq \limsup\limits_{\varepsilon\rightarrow 0}\liminf_{N\rightarrow \infty }Q_{N, \lambda}^{\left(T\right)}\left(\varphi: F_{f,T,\varepsilon}(\varphi)>\delta \right), \] where $Q_{\lambda}^{\left(T\right)}$ is a limit point of the sequence $\{Q_{N, \lambda}^{\left(T\right)}\}_{N=2}^{\infty }$. Obviously, \begin{align} \label{qpp} Q_{N, \lambda}^{\left(T\right)}\left(\varphi: F_{f,T,\varepsilon}(\varphi)>\delta \right)& \leq P_{N, \lambda}^{\left(T\right)} \left(\sup_{t\leq T}|W^{'}_{f,N}(t)| >\delta/2 \right)\\ &+P_{N, \lambda}^{\left(T\right)}\left( \left|\int\limits_{0}^{T}\left(R_{f}(s,\xi^{2}(sN))- ((\varkappa _{\varepsilon }\ast \zeta_{N}(sN))^2,f)\right)ds\right|>\delta/2\right).\nonumber \end{align} It is easy to see that the bound (\ref{b2}) obtained in Lemma \ref{lb1} for the process $Z_{f,N}(s)$ is also valid for the process $Z_{f,N}'(s)$; therefore for any $t$ $$ E_{N, \lambda}^{\left(T\right)}(W_{f,N}'(t))^{2}= E_{N, \lambda}^{\left(T\right)}\left(\int\limits_{0}^{t }Z_{f,N}'(s)ds\right)\leq \frac{Ct}{N}, $$ since $V_{f,N}'(t)$ is a martingale.
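The maximal inequality invoked at the next step can be checked exactly on a toy example. The sketch below verifies Kolmogorov's inequality $\mathsf{P}(\max_{k\leq n}|S_{k}|\geq c)\leq \mathsf{E}S_{n}^{2}/c^{2}$ for the simple $\pm 1$ random walk (a martingale) by exhaustive enumeration of all paths; the parameters $n$ and $c$ are our choice:

```python
from itertools import product

n, c = 12, 4
paths = 2 ** n
hits = 0            # paths whose running maximum of |S_k| reaches c
second_moment = 0   # sum of S_n^2 over all paths
for steps in product((-1, 1), repeat=n):
    s, m = 0, 0
    for x in steps:
        s += x
        m = max(m, abs(s))
    hits += (m >= c)
    second_moment += s * s

prob = hits / paths                        # P( max_{k<=n} |S_k| >= c )
bound = (second_moment / paths) / c ** 2   # E[S_n^2] / c^2 = n / c^2
print(prob, bound)
```

For independent $\pm1$ steps $\mathsf{E}S_{n}^{2}=n$ exactly, and the enumerated maximal probability indeed stays below $n/c^{2}$.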
Kolmogorov's inequality implies that for any $\delta >0$ \begin{equation} P_{N, \lambda}^{\left(T\right)}\left(\sup _{t\leq T}|W_{f,N}'(t)|\geq \delta \right)\leq \delta^{-2}E_{N, \lambda}^{\left(T\right)}(W_{f,N}'(T))^{2}=\delta^{-2} E_{N, \lambda}^{\left(T\right)}\left(\int \limits_{0}^{T}Z_{f,N}'(s)ds\right)\leq \frac{CT}{N\delta ^{2}}. \label{kolm} \end{equation} The second term in (\ref{qpp}) tends to zero by Lemma \ref{cut} as $N\rightarrow \infty$ and $\varepsilon\rightarrow 0$. Therefore for any $f\in C_{0,T}^{\infty}$ and $\delta >0$ \begin{equation}\label{e-Qd} \limsup\limits_{\varepsilon\rightarrow 0} Q_{\lambda}^{\left(T\right)} \left(\varphi: F_{f,T,\varepsilon}(\varphi )>\delta\right)=0. \end{equation} Let us prove that we can replace the convolution in $F_{f,T,\varepsilon}$ by its limit, which is well defined with respect to the measure $Q_{\lambda}^{\left(T\right)}$. First of all we note that for any $C>0$ the set $B_{C}({\textbf{R}})$ of measurable functions $h$ such that $\|h\|_{\infty}\leq C$ is a closed subset of $S'({\textbf{R}})$ in both the strong and the weak topology. Indeed, consider a sequence of functions $g_{n}\in B_{C},\, n\geq 1$, and assume this sequence converges in $S'$ to some tempered distribution $G\in S'$. We are going to show that this generalized function is determined by some measurable function bounded by the same constant $C$. It is easy to see that for every $n\geq 1$ $$ \left|\int\limits_{\textbf{R}} g_{n}(x)f(x)dx\right|\leq C\|f\|_{L^1},\,\, f \in S, $$ where $\|\cdot\|_{L^1}$ is the norm in $L^1$, the space of all integrable functions.
So the limit linear functional $G$ on $S$ is also continuous in the $L^1$-norm: $$|G(f)|\leq C\|f\|_{L^1},\,\,f \in S.$$ The space $S$ is a linear subspace of $L^1$, therefore by the Hahn--Banach theorem (Theorem III-5 in \cite{RS}) the linear functional $G$ can be extended to a continuous linear functional $\tilde{G}$ on $L^{1}$ with the same norm and such that $\tilde{G}|_{S}=G$. Using the theorem on the general form of a continuous linear functional on $L^1$ (\cite{RS}) we obtain that $$ G(f)=\int\limits_{\textbf{R}}g(x)f(x)dx,\,\, f\in L^{1}, $$ where $g$ is a measurable bounded function. Obviously, $\|g\|_{\infty}\leq C$. Obviously, for any $N\geq 2$ and fixed $t \in [0,T]$ we have that $Q_{N, \lambda}^{\left(T\right)}\left(\varphi (t,\cdot)\in B_1 \right)=1$, where $\varphi (t,\cdot)=\varphi(t)$ is the coordinate variable on $D([0,T],S')$. Therefore if some subsequence $\{Q_{N', \lambda}^{\left(T\right)}\}$ of $\{Q_{N, \lambda}^{\left(T\right)}\}_{N=2}^{\infty}$ converges weakly to a limit point $Q_{\lambda}^{\left(T\right)}$, then for any fixed $t\in [0,T]$ \begin{equation} \label{closed} Q_{\lambda}^{\left(T\right)}\left(\varphi(t)\in B_{1}\right)\geq \limsup_{N'\rightarrow \infty}Q_{N', \lambda}^{\left(T\right)} \left(\varphi(t) \in B_{1} \right)=1, \end{equation} since $B_{1}$ is closed. The next lemma gives an important property of the convolution on the set of bounded functions $L^{\infty}$. \begin{lm}\label{closed2} Fix $\varphi\in L^{\infty}$. Then 1) for any $0\leq s\leq T$ $$((\varkappa _{\varepsilon }\ast \varphi (s))^2,f)- \int_{{\textbf{R}}}\varphi^2(s,x)f(x)\, dx \to 0 \qquad (\varepsilon\to 0); $$ 2) for any $0\leq t\leq T$ $$\int\limits_0^t\biggl\{ \, ((\varkappa _{\varepsilon }\ast \varphi (s))^2,f)- \int_{{\textbf{R}}}\varphi^2(s,x)f(x)\, dx \,\biggr\}\, ds \to 0 \qquad (\varepsilon\to 0).
$$ \end{lm} \emph{Proof of Lemma~\ref{closed2}.} \begin{eqnarray*} \left|((\varkappa _{\varepsilon }\ast \varphi (s))^2,f) - \int_{{\textbf{R}}}\varphi^2(s,x)f(x)\, dx\right|&\leq & \int\limits_{{\textbf{R}}}\,\Bigl|\bigl((\varkappa _{\varepsilon }\ast \varphi (s))(x)\bigr)^2 - \varphi^2(s,x)\Bigr| \, |f(x)| \,dx\\ &\leq& 2 \|\varphi\|_{\infty} \int\limits_{{\textbf{R}}}\,\bigl|(\varkappa _{\varepsilon }\ast \varphi (s))(x) - \varphi(s,x)\bigr| \, |f(x)| \,dx. \end{eqnarray*} To get the last inequality we used the identity $a^2-b^2= (a+b)(a- b)$ and the fact that $\|\varkappa _{\varepsilon }\ast\varphi\|_{\infty}\leq \|\varphi\|_{\infty}$. To finish the proof it suffices to apply the well-known result about convergence of $(\varkappa _{\varepsilon }\ast\varphi)(s,\cdot)$ to $\varphi(s,\cdot)$ in $L^1_{loc}$. This proves the first statement of the lemma. To get the second statement we use again the boundedness of $\varphi$ and $\varkappa _{\varepsilon }\ast\varphi$ and apply the Lebesgue dominated convergence theorem. Lemma \ref{closed2} is proved. On the set $\mbox{supp}\, Q_{\lambda}^{\left(T\right)}$ we can define the functional $$ F^0_{f,T}(\varphi)= \int\limits _{0}^{T}\bigl((\varphi (s),f_{s}+\lambda f_{x}+\mu f) -\mu (\varphi^{2}(s),f)\bigr)ds+(\varphi (0),f). $$ The equation (\ref{closed}) and Lemma \ref{closed2} yield that for any $\varphi\in \mbox{supp}\, Q_{\lambda}^{\left(T\right)}\,\,$ $F_{f,T,\varepsilon}(\varphi)\to F^0_{f,T}(\varphi)$ as $\varepsilon\to 0$. This implies that for any $\delta_1>0$ $$ Q_{\lambda}^{\left(T\right)} \left(\varphi: |F_{f,T,\varepsilon}(\varphi )-F^0_{f,T}(\varphi)| >\delta_1\right) \rightarrow 0\qquad (\varepsilon\to 0).
$$ Combining this with (\ref{e-Qd}) we get that a limit point of the sequence $\{Q_{N,\lambda}^{\left(T\right)}\}_{N=2}^{\infty }$ is concentrated on the trajectories $\varphi (t,\cdot),\,t\in [0,T],$ taking values in the set of regular bounded functions and satisfying the following integral equation \[ \int\limits _{0}^{T}\bigl((\varphi (s),f_{s}+\lambda f_{x}+\mu f) -\mu (\varphi^{2}(s),f)\bigr)ds+(\varphi (0),f)=0 \] for any $f\in C_{0,T}^{\infty}$. This means that each such trajectory is a weak solution of the equation (\ref{eq:ur-asym}) in the sense of Definition \ref{weak}. \subsection{Uniqueness of a weak solution} \label{edinstv} \paragraph{The first-order equation.} Using the method of Theorem~1 in the celebrated paper \cite{Oleinik} of Oleinik we show here that for any measurable bounded initial function $\psi(x)$ there is at most one weak solution of the equation (\ref{eq:ur-asym}) in the sense of Definition \ref{weak}, and no entropy condition is required. Let $u(t,x)$ and $v(t,x)$ be two weak solutions of the equation (\ref{eq:ur-asym}) in the region $[0,T]\times \textbf{R}$ with the same initial condition $\psi$ (not necessarily from $H$). Definition \ref{weak} implies that \begin{multline} \label{eqt0} \int\limits_{0}^{T}\int\limits_{\textbf{R}}(u(t,x)-v(t,x))\bigl(f_{t}(t,x) +\lambda f_{x}(t,x) +\mu (1-u(t,x)-v(t,x))f(t,x)\bigr)dxdt=0, \end{multline} for any $f\in C_{0,T}^{\infty}$. Consider the following sequence of equations \begin{equation} \label{eqt1} f_{t}(t,x)+\lambda f_{x}(t,x)+g_{n}(t,x)f(t,x)=F(t,x), \end{equation} with any infinitely differentiable function $F$ equal to zero outside of a certain bounded region lying in the half-plane $t\geq\delta_1>0$, where $\delta_1$ is an arbitrarily small number. The functions $g_{n}(t,x)$ are uniformly bounded for all $x,t, n\geq 1$ and converge in $L^{1}_{loc}$ to the function $g(t,x)=\mu (1- u(t,x)-v(t,x))$ as $n \rightarrow \infty$.
The solution $f_{n}(t,x)\in C_{0,T}^{\infty}$ of the equation (\ref{eqt1}) is given by the following formula (formula (2.8) in \cite{Oleinik}): $$ f_{n}(t,x)=\int\limits_{T}^{t}F(s, x+\lambda (s-t)) \exp\left\{\int\limits_{t}^{s}g_{n}(\tau,x+ \lambda (\tau-t))d\tau\right\}ds. $$ Equation (\ref{eqt0}) yields that \begin{equation} \label{eqt2} \int\limits_{\textbf{R}_{+}}\int\limits_{\textbf{R}}(u(t,x)-v(t,x))F(t,x)dxdt =\int\limits_{\textbf{R}_{+}}\int\limits_{\textbf{R}} (u(t,x)-v(t,x))(g(t,x)-g_{n}(t,x))f_{n}(t,x)dxdt. \end{equation} The right-hand side of (\ref{eqt2}) is arbitrarily small for sufficiently large $n$ and, since the left-hand side of (\ref{eqt2}) does not depend on $n$, it is equal to zero. Therefore $u=v$, since $F$ is arbitrary. The \emph{existence} of a weak solution of the equation (\ref{eq:ur-asym}) follows from the general theory for quasilinear equations of the first order (for example, Theorem 8 in \cite{Oleinik}). In the particular case of the equation (\ref{eq:ur-asym}) it is possible to obtain an explicit formula for a weak solution. First of all we note that the Cauchy problem \begin{equation} u_{t}(t,x)=-\lambda u_{x}(t,x)+\mu (u^2(t,x)-u(t,x)), \qquad u(0,x)=\psi (x), \label{eq:1-por} \end{equation} has a unique classical solution if $\psi\in C^1(\mathbf{R})$, and there is an explicit formula for this solution. Indeed, using the substitution $u^{\circ }(t,x)=u(t,x-\lambda t)$ we transform the equation (\ref{eq:1-por}) into the equation $$u^{\circ } _{t}(t,x)=-\mu u^{\circ } (t,x)(1-u^{\circ } (t,x)),\qquad u^{\circ } (0,x)=\psi (x).$$ Considering $x$ as a parameter we obtain an ordinary differential equation which can be solved explicitly; the solution is given by \begin{equation} \label{eq-uo-ex} u^{\circ }(t,x)=\frac{\psi (x)e^{-\mu t}}{1-\psi (x)+\psi (x)e^{-\mu t}}.
\end{equation} So, if $\psi\in C^1(\mathbf{R})$, then the unique classical solution of the equation (\ref{eq:1-por}) is given by the following formula: \begin{equation} \label{solut1} u(t,x)=\frac{\psi (x+\lambda t)e^{-\mu t}}{1-\psi (x+\lambda t)+ \psi (x+\lambda t)e^{-\mu t}}. \end{equation} If we approximate any measurable bounded function $g$ in $L_{loc}^{1}$ by a sequence of smooth functions $\{g_{n}, n\geq 1\}$, then the sequence of corresponding weak solutions $\{u_n(t,x), n\geq 1\}$, where $u_{n}(t,x)$ is defined by the formula (\ref{solut1}) with $\psi=g_n$, converges in $L_{loc}^{1}$ to the weak solution of the equation with initial condition $g$ by Theorem 11 in \cite{Oleinik} or Theorem 1 in \cite{Kruzhkov}. It is easy to show by a direct calculation that the $L_{loc}^{1}$-limit of the sequence $\{u_n(t,x), n\geq 1\}$ is given by the same formula (\ref{solut1}) with $\psi=g$. The formula (\ref{solut1}) yields that if $\psi\in H(\textbf{R})$, then $u(t,\cdot)\in H(\textbf{R})$ as a function of $x$ for any fixed $t\geq 0$. If a function $u(t,x)$ is a weak solution of the equation (\ref{eq:1-por}), then this function is differentiable at a point $(t,x)$ iff the initial condition $\psi(y)$ is differentiable at the point $y=x+\lambda t$. \paragraph{The KPP equation.} The equation (\ref{eq:ur-KPP}) is a quasilinear parabolic equation of the second order and is a particular case of the famous KPP equation from~\cite{KoPP}. It is known that there exists a unique weak solution $u(t,x)$ of the problem $$ u_{t}(t,x)=\gamma u_{xx}(t,x)+\mu (u^2(t,x)-u(t,x)), \qquad u(0,x)=\psi(x),$$ for any bounded measurable initial function $\psi$; this solution is in fact a unique classical solution, and if $\psi\in H(\mathbf{R})$, then $u(t,\cdot)\in H(\textbf{R})$ as a function of $x$ for any fixed $t$. We refer to the paper \cite{Oleinik} for more details.
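The formula (\ref{eq-uo-ex}) can be verified numerically by finite differences. The sketch below checks that it solves the logistic equation $u^{\circ}_{t}=-\mu u^{\circ}(1-u^{\circ})$ with $u^{\circ}(0,x)=\psi(x)$; the value of $\mu$ and the smooth profile $\psi$ are our sample choices:

```python
import math

mu = 1.3  # sample value of mu (assumption)

def psi(x):
    # a smooth monotone profile with values in (0, 1)
    return 1.0 / (1.0 + math.exp(x))

def u0(t, x):
    # formula (eq-uo-ex): psi(x) e^{-mu t} / (1 - psi(x) + psi(x) e^{-mu t})
    p = psi(x)
    return p * math.exp(-mu * t) / (1.0 - p + p * math.exp(-mu * t))

h = 1e-5
residual = 0.0
for t in (0.5, 1.5, 3.0):
    for x in (-2.0, 0.0, 1.0, 2.5):
        ut = (u0(t + h, x) - u0(t - h, x)) / (2.0 * h)  # central difference in t
        rhs = -mu * u0(t, x) * (1.0 - u0(t, x))
        residual = max(residual, abs(ut - rhs))
print(residual)  # of the order of the finite-difference error
```

Since $x$ enters only as a parameter, the same check at several values of $x$ confirms the pointwise logistic behaviour claimed above.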
\section{System with a fixed number of particles} \label{stability} In this section we deal with the situation ``$N$ is fixed, $t\rightarrow\infty$''. \subsection{Proof of Theorem \ref{ergodic}} Let $\sigma (y,w)$ be the rate of transition from the state $y=(y_{1},\ldots ,y_{N})\in \Gamma $ to the state $w=(w_{1},\ldots,w_{N})\in \Gamma $ for the Markov chain $y(t)$. Define \( \sigma (y)=\sum\limits _{w\not =y}\sigma (y,w). \) From the definition of the particle system it follows that \[ \sigma (y)=(\alpha +\beta )N+\frac{\mu _{N}}{N}\sum _{(i,j)}I_{\left\{ y_{i}>y_{j}\right\} }. \] Since $\displaystyle \sum _{(i,j)}I_{\left\{ y_{i}>y_{j}\right\} }\leq N(N-1)/2$ we have uniformly in $y\in \Gamma $ \begin{equation} \La _{*,N} \, \leq \, \sigma (y)\, \leq \, \La _{N}^{*} \label{eq:lambda-below-above} \end{equation} with $\La _{*,N}=(\alpha +\beta )N$ and $\La _{N}^{*}=(\alpha +\beta )N+\mu_{N}(N-1)/2$. A discrete time Markov chain $\left\{ Y(n),\, n=0,1,\ldots \right\} $ on the state space $S$ with transition probabilities \begin{equation} \label{eq:def-p} p (y,w)\equiv \mathsf{P} \left\{ Y(n+1)=w\, |\, Y(n)=y\right\} =\begin{cases} \displaystyle \frac{\sigma (y,w)}{\sigma (y)}\, , & y\not =w\\ 0\, , & y=w,\end{cases} \end{equation} is the embedded Markov chain of the continuous time Markov chain~$\left(y(t),\, t\geq 0\right)$. Theorem \ref{ergodic} is a consequence of the following statement. \begin{lm} \label{Deblin} The Markov chain $\left\{ Y(n),\, n=0,1,\ldots\right\}$ is irreducible, aperiodic and satisfies the Doeblin condition: there exist $\varepsilon >0$, $m_{0}\in \mathbf{N}$ and a finite set $A\subset \Gamma$ such that \begin{equation} \label{eq:deb} \mathsf{P} \left\{ Y(m_{0})\in A\, |\, Y(0)=Y_{0}\right\} \, \geq \varepsilon, \end{equation} for any $Y_{0}\in S$. Therefore this Markov chain is ergodic (\cite{FayMM}).
\end{lm} \emph{Proof of Lemma~\ref{Deblin}.} We are going to show that condition~(\ref{eq:deb}) holds with $A=\left\{ (0,\ldots ,0)\right\} $, $m_{0}=N$, \[ \varepsilon =\left(\frac{\min \left(\alpha ,\beta ,\mu _{N}/N\right)} {\La _{N}^{*}}\right)^{N}>0. \] The transition probabilities of the Markov chain $\left\{ Y(n),\, n=0,1,\ldots\right\}$ are uniformly bounded from below in the following sense: if a pair of states $(z,v)$ is such that $\sigma (z,v)>0$ (or, equivalently, $p (z,v)>0$), then (\ref{eq:def-p}) implies that \begin{equation} \label{eq:uni-bound} p (z,v)>\min\left(\alpha ,\beta,\mu _{N}/N\right)/\La _{N}^{*}. \end{equation} So to prove~(\ref{eq:deb}) we only need to show that for any $y$ there exists a sequence of states \begin{equation}\label{eq-cepochka} v^{0}=y,\quad v^{1},\quad v^{2},\quad \ldots ,\quad v^{N}=(0,\ldots ,0) \end{equation} which can be visited successively by the Markov chain $\left\{ Y(n),\,n=0,1,\ldots\right\}$, i.e.\ $p (v^{n-1},v^{n})>0$ for every $n=1,\ldots,N$, and hence $$ \mathsf{P} \left\{ Y(N)\in A\, |\, Y(0)=y\right\} \, \geq\, \prod_{n=1}^{N}p (v^{n-1},v^{n}) \, \geq \, \left(\frac{\min \left(\alpha, \beta, \mu _{N}/N\right)}{\La _{N}^{*}}\right)^{N} $$ as a consequence of the uniform bound (\ref{eq:uni-bound}). To prove the existence of the sequence (\ref{eq-cepochka}) let us assume first that $y=(y_{1},\ldots,y_{N})\not =0$. Choose and fix some $r$ such that $y_{r}=\displaystyle \max _{i}y_{i}>0$. Denote by \[ n_{0}=\#\left\{ j:\, y_{j}=0\right\} \] the number of leftmost particles. Let the rightmost particle $y_{r}$ make $n_{0}$ steps to the right: \[ v^{n}-v^{n-1}=e_{r}^{({\scriptscriptstyle N} )},\quad \quad n=1,\ldots,n_{0}. \] This can be done using jumps to the nearest right state. So $Y(n_{0})=v^{n_{0}}$ has exactly $n_{0}$ particles at $0$ and $N-n_{0}$ particles out of $0$.
Denote by $i_{n_{0}+1}<i_{n_{0}+2}<\cdots <i_{N}$ the indices of the particles with $v_{i_{a}}^{n_{0}}>0,\,a=n_{0}+1,\ldots,N$. Let now the Markov chain $Y$ transfer each of these particles to~$0$: \[ v^{a}-v^{a-1}=-v_{i_{a}}^{n_{0}}e_{i_{a}}^{({\scriptscriptstyle N} )},\quad \quad a=n_{0}+1,\ldots,N. \] This is possible due to the transitions provided by the interaction. To complete the proof we need to consider the case $y=(y_{1},\ldots ,y_{N})=0$. It is quite easy: \[ v^{1}=e_{1}^{({\scriptscriptstyle N} )},\quad v^{2}=2e_{1}^{({\scriptscriptstyle N} )},\quad v^{3}=3e_{1}^{({\scriptscriptstyle N} )},\quad \ldots ,\quad v^{N-1}=(N-1)e_{1}^{({\scriptscriptstyle N} )},\quad v^{N}=0. \] The proof of the lemma is complete. Denote by $\hat{\pi} =\left(\hat{\pi} (y),\, y=(y_{1},\ldots ,y_{N})\in \Gamma \right)$ the unique stationary distribution of the Markov chain $\left\{ Y(n),\, n=0,1,\ldots\right\}$. {\it The proof of Theorem \ref{ergodic}\/} is now easy. First of all let us show that the uniform bound $\sigma (y)\geq \La _{*,N} $ implies the existence of a stationary distribution for the Markov chain $y(t)$. Indeed, it is easy to check that if $\hat{\pi} $ is the stationary distribution of the embedded Markov chain $Y$ and $Q$ is the infinitesimal matrix of the chain $y(t)$, then the vector with positive components $s=\left(s(w),w\in S\right)$ defined as \[ s(w)=\frac{\hat{\pi} (w)}{\sigma (w)} \] satisfies the equation $sQ=0$. So for the existence of a stationary distribution of the chain $y(t)$ it is sufficient to show that $\displaystyle \sum _{w\in S}s(w)<+\infty $. It is easy to check the last condition: \[ \sum _{w\in S}s(w)=\sum _{w\in S}\frac{\hat{\pi} (w)}{\sigma (w)}\leq \frac{1}{\La _{*,N} }\sum _{w\in S}\hat{\pi} (w)=\frac{1}{\La _{*,N} }.
\] Therefore the continuous-time Markov chain $\left(y(t),\, t\geq 0\right)$ has a stationary distribution~$\pi =\left(\pi (y),\, y\in \Gamma \right)$ of the following form: \[ \pi (y)=\frac{\displaystyle \frac{\hat{\pi} (y)}{\sigma (y)}}{\displaystyle \sum _{w\in \Gamma } \frac{\hat{\pi} (w)}{\sigma (w)}}\, . \] Denote $p_{yw}(t)=\mathsf{P} \left\{ y(t)=w\, |\, y(0)=y\right\} $. The next step is to prove that the continuous-time Markov chain $y(t)$ is ergodic. To do this we show that the following Doeblin condition holds: {\it for some $j_{0}\in S$ there exist $h>0$ and $0<\delta <1$ such that $p_{ij_{0}}(h)\geq \delta $ for all $i\in S$. } It is well known (\cite{FayMM}) that this condition implies ergodicity and moreover \[ |p_{ij}(t)-\pi (j)|\, \leq \, \left(1-\delta \right)^{[t/h]}. \] Let $\tau_{k},\,k\geq 0,$ be the sojourn time of the Markov chain $y(t)$ in its $k$-th consecutive state. Conditioned on the sequence of the chain states $y_{k},\,k\geq 0,$ the joint distribution of the random variables $\tau _{k},\, k\geq 0,$ coincides with the joint distribution of independent random variables exponentially distributed with parameters $\sigma (y_k),\, k=0,1,\ldots,n$, so the transition probabilities of the chain $y(t)$ are \[ p_{yw}(t)=\sum _{n}\sum _{(y\rightarrow w)}\mathsf{P} \left\{ (y\rightarrow w)\right\} \int\limits_{\Delta _{t}^{n}}\quad e^{-\sigma (w)(t-t_{n})} \prod\limits_{k=1}^{n}\sigma (y_{k-1})e^{-\sigma (y_{k-1})(t_{k}-t_{k-1})} \, dt_{1}\ldots dt_{n}, \] where $n$ corresponds to the number of jumps of the chain $y$ during the time interval $[0,t]$, the inner sum is taken over all trajectories $(y\rightarrow w)=\left\{ y=y_{0},y_{1},\ldots ,y_{n}=w\right\} $ with $n$ jumps, the integration is over $\Delta _{t}^{n}=\left\{ 0=t_{0}\leq t_1\leq \cdots \leq t_{n}\leq t\right\} $, and $$ \mathsf{P} \left\{ (y\rightarrow w)\right\} =p (y,y_{1})p (y_{1},y_{2})\cdots 
p (y_{n-1},w) $$ is the probability of the corresponding path for the embedded chain. The equation (\ref{eq:lambda-below-above}) implies that the integrand in $p_{yw}(t)$ is uniformly bounded from below by the expression \[ \left(\La _{*,N} \right)^{n} \exp (-\La _{N}^{*} t_{1})\cdots \exp (-\La _{N}^{*} (t_{n}-t_{n-1}))\exp (-\La _{N}^{*} (t-t_{n})), \] and, hence, \begin{align*} \int\limits_{\Delta _{t}^{n}} e^{-\sigma (w)(t-t_{n})} \prod\limits_{k=1}^{n}\sigma (y_{k-1})e^{-\sigma (y_{k-1})(t_{k}-t_{k-1})} \, dt_{1}\ldots dt_{n}& \geq \frac{\left(\La _{*,N} t\right)^{n}}{n!}e^{-\La _{N}^{*} t}\\ & = \mathsf{P} \left\{ \Pi _{t}=n\right\} e^{-\left(\La _{N}^{*} -\La _{*,N} \right)t}, \end{align*} where $\Pi _{t}$ is a Poisson process with parameter $\La _{*,N} $. This provides us with a lower bound for the transition probabilities of the time-continuous chain: \[ p_{yz_{0}}(t)\geq \left(\sum _{n}p^{n}(y,z_{0})\mathsf{P} \left\{ \Pi _{t}=n\right\} \right)e^{-\left(\La _{N}^{*} -\La _{*,N} \right)t}. \] It is now easy to get a lower bound for the probabilities $p^{n}(y,z_{0})$. Fix some $z_{0}\in S$ and denote $\xi =\hat{\pi} (z_{0})$. It follows from the \underbar{\it ergodicity} of the chain $Y$ that for any fixed $z_{0}$ \[ p ^{m}(y,z_{0})\geq \hat{\pi} (z_{0})/2=\xi/2>0 \] for all $m\geq m_{1}=m_{1}(y)$. For a \underbar{\it Doeblin} Markov chain we have a stronger conclusion, namely the above number $m_{1}$ does not depend on $y$. Let us fix such an $m_1$ and show that the continuous-time Markov chain $y(t)$ satisfies the Doeblin condition.
Indeed, \begin{eqnarray*} p_{yz_{0}}(t) & \geq & \left(\sum _{n<m_{1}}p^{n}(y,z_{0}) \mathsf{P} \left\{ \Pi _{t}=n\right\} +\sum _{n\geq m_{1}}p^{n}(y,z_{0}) \mathsf{P} \left\{ \Pi _{t}=n\right\} \right)e^{-\left(\Lambda _{N}^{*} -\Lambda _{*,N} \right)t}\\ & \geq & \sum _{n\geq m_{1}}p^{n}(y,z_{0})\mathsf{P} \left\{ \Pi _{t}=n\right\} e^{-\left(\Lambda _{N}^{*} -\Lambda _{*,N} \right)t}\\ & \geq & \frac{\xi }{2}\mathsf{P} \left\{ \Pi _{t}\geq m_{1}\right\} e^{-\left(\Lambda _{N}^{*} -\Lambda _{*,N} \right)t}. \end{eqnarray*} Hence, the Doeblin condition holds: we choose any $z_{0}$ as $j_{0}$, take a corresponding $m_{1}$, fix any $h>0$ and put \[ \delta =\frac{\xi }{2}\, \mathsf{P} \left\{ \Pi _{h}\geq m_{1}\right\} e^{-\left(\Lambda _{N}^{*} -\Lambda _{*,N} \right)h}. \] This completes the proof of the theorem. \subsection{Evolution of the center of mass} Consider the following function on the state space $\mathbf{Z}^{N}$: $m(x_{1},\ldots ,x_{N})=\left(x_{1}+\cdots +x_{N}\right)/N.$ If each particle has mass~$1$ and $x_{1}(t),\ldots ,x_{N}(t)$ are the positions of the particles, then $m(x_{1}(t),\ldots ,x_{N}(t))$ is the \emph{center of mass} of the system. We are interested in the evolution of $\mathsf{E} \, m(x_{1}(t),\ldots ,x_{N}(t))$.
A direct calculation gives \begin{eqnarray} \left(G_{N}m\right)(x_{1},\ldots ,x_{N}) & = & \sum _{i=1}^{N}\left(\frac{\alpha }{N}-\frac{\beta }{N}\right)+\sum _{i=1}^{N}\sum _{j\not =i}\left(-\frac{x_{i}-x_{j}}{N}\right)I_{\left\{ x_{i}>x_{j}\right\} }\frac{\mu _{N}}{N}\nonumber\\ & = & (\alpha -\beta )-\frac{\mu _{N}}{N^{2}}\sum _{i<j}\left|x_{i}-x_{j}\right|,\label{eq:g-m} \end{eqnarray} where we have used the following equalities \begin{eqnarray*} m\left(x\pm e_{i}^{\left({\scriptscriptstyle N} \right)}\right)-m\left(x\right)&=& \pm \frac{1}{N}, \\ \quad m\left(x-(x_{i}-x_{j})e_{i}^{\left({\scriptscriptstyle N} \right)}\right)-m\left(x\right)&=&-\frac{x_{i}-x_{j}}{N},\\ \left(x_{i}-x_{j}\right)I_{\left\{ x_{i}>x_{j}\right\} }+\left(x_{j}-x_{i}\right)I_{\left\{ x_{j}>x_{i}\right\} }&=& \left|x_{i}-x_{j}\right|. \end{eqnarray*} Note that the summand \[ -\frac{\mu _{N}}{N^{2}}\sum _{i<j}\left|x_{i}-x_{j}\right| \] added by the interaction to the ``free dynamics'' drift $(\alpha -\beta )$ depends only on the relative positions of the particles. So the center of mass of the system moves with a speed which tends to the value \[ (\alpha -\beta )-\mu _{N}\, \frac{N-1}{2N}\, \, \mathsf{E} _{\pi }\, \left|x_{1}-x_{2}\right| \] as $t$ goes to infinity. Here $\mathsf{E}_{\pi }\, \left|x_{1}- x_{2}\right|$ is the mean distance between two particles calculated with respect to the stationary measure $\pi $ of the chain $y(t)$. Using this fact and Theorem \ref{ergodic} we can describe the \emph{long time behavior of the particle system} in the initial coordinates $x$ as follows. Theorem \ref{ergodic} means that the system of stochastic interacting particles possesses some relative stability. In the coordinates~$y$ the system approaches its equilibrium state exponentially fast.
In the meantime the particle system, considered as a ``single body'', moves with an asymptotically constant speed. The speed differs from the mean drift of the free particle motion, and this difference is due to the interaction between the particles. \section{On travelling waves and long time evolution of solutions of PDEs} \label{travel} We deal here with partial differential equations in the variables $(t,x)\in{\mathbf{R}}_+\times{\mathbf{R}}$. \begin{dfn} A function $w=w(x)$ is called a travelling wave solution of some PDE if there exists $v\in \mathbf{R}$ such that the function \(u(t,x)=w(x-vt)\) is a solution of this PDE. The number $v$ is the speed of the wave~$w$. \end{dfn} \noindent We are interested only in the travelling waves having the following properties: \textbf{U1)} $w(x)\in[0,1]$; \textbf{U2)} $w(x)$ and $dw(x)/dx$ have limits as $x\rightarrow \pm \infty $ and, besides, $w(-\infty )=1$ and $w(+\infty )=0$. We identify two travelling waves $w_{1}(x)$ and $w_{2}(x)$ if $w_{1}(x)=w_{2}(x-c)$ for some $c$. For any probabilistic solution $0\leq u(t,x)\leq 1$ we define a function $r(t)$ such that \( u(t,r(t))\equiv \frac{1}{2}. \) Let a function $w(x)$ be a travelling wave solution. Without loss of generality we can assume that $w(0)=1/2$. \begin{dfn} A solution $u(t,x)$ converges \emph{in form} to the travelling wave $w(x)$ if \[ \lim\limits _{t\rightarrow +\infty }u(t,x+r(t))=w(x) \] uniformly on any finite interval. The solution $u(t,x)$ converges \emph{in speed} to the travelling wave $w(x)$ if the derivative $r'(t)=dr(t)/dt$ exists and \( \lim\limits _{t\rightarrow +\infty }r'(t)=v,\) where $v$ is the speed of the travelling wave $w(x)$.
\end{dfn} \paragraph{First order equation.} It is easy to check that for equation~(\ref{eq:ur-asym}), for every $v<\lambda $, there exists a unique (up to shift) travelling wave solution having properties U1--U2; this travelling wave solution is given by the formula $\displaystyle w_v(x)=\left(1+\exp \left(\frac{\mu }{\lambda -v}x\right)\right)^{-1}. $ \begin{prop} \label{trav1} If for some $C>0,\nu >0$ the initial profile $\psi (x)\in H(\mathbf{R})$ of the equation (\ref{eq:ur-asym}) has the asymptotic behavior $ 1-\psi (x)\sim C\exp (\nu x) $ as $x\rightarrow -\infty$, then there exists $x_0\in{\mathbf{R}}$ such that for every $x\in \mathbf{R}$ $$ |u(t,x)-w_{v}(x-x_0-vt)|\rightarrow 0 $$ as $t\rightarrow \infty$, where $\displaystyle v=\lambda -\mu/\nu$. \end{prop} The proof of Proposition \ref{trav1} is a direct calculation based on the exact formula (\ref{solut1}). The formula (\ref{eq-uo-ex}) yields that $u^{\circ }(t_{1},x)\geq u^{\circ }(t_{2},x)$ for any $t_{1}<t_{2}$, and this observation immediately implies the following statement: for every $x$, \ $u^{\circ }(t,x)\rightarrow I_{\left\{ y:\, \psi (y)=1\right\} }(x)$ as $t\rightarrow \infty$. As a direct application of this property we obtain the following \begin{prop} \label{eq:for} Assume that the initial profile has the form $\psi (x)=I_{\left\{ y<b\right\} }(x)$ for some $b\in \mathbf{R}$. Then $$|u(t,x)-I_{\left\{ y<b\right\} }(x-\lambda t)|\rightarrow 0,\qquad t\rightarrow \infty.$$ \end{prop} So the function $w(x)=I_{\left\{ y\leq 0\right\} }(x)$ is the unique (up to shift) non-increasing right-continuous travelling wave corresponding to the maximal possible speed $v=\lambda $. It is easy to see that this function is the limiting case of~$w_v(x)$ as $v\rightarrow\lambda-0$.
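The closed-form profile $w_v$ above is easy to sanity-check numerically. The following Python sketch (an illustration, not part of the paper; the parameter values $\lambda=2$, $\mu=1$, $v=0.5$ are arbitrary choices) verifies the normalization $w_v(0)=1/2$, properties U1--U2, and the steepening of $w_v$ towards the indicator $I_{\{x\leq 0\}}$ as $v\to\lambda-0$:

```python
import math

def w(x, lam=2.0, mu=1.0, v=0.5):
    """Travelling-wave profile w_v(x) = (1 + exp(mu/(lam-v) * x))**(-1), for v < lam."""
    return 1.0 / (1.0 + math.exp(mu / (lam - v) * x))

# normalization: w_v(0) = 1/2
assert abs(w(0.0) - 0.5) < 1e-12
# U1: values in [0,1] and w is decreasing; U2: w(-inf) = 1, w(+inf) = 0
assert 0.0 < w(5.0) < w(-5.0) < 1.0
assert w(-60.0) > 1.0 - 1e-12 and w(60.0) < 1e-12
# as v -> lambda - 0 the exponent mu/(lam-v) blows up and w_v steepens
# towards the step profile I_{x <= 0}
assert w(-0.1, v=1.99) > 0.99 and w(0.1, v=1.99) < 0.01
```

The last two assertions illustrate the limiting statement above: for $v$ close to $\lambda$ the wave is already indistinguishable from the step function away from a small neighbourhood of the origin.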
\paragraph{Second order equation.} The existence of travelling waves for parabolic partial differential equations has been studied in many papers following~\cite{KoPP}. A review of many results can be found in~\cite{Volpert} (see also \cite{Smol}); for completeness of the text we mention some of them. Reformulating the well-known results of~\cite{Volpert}, we obtain that travelling waves of the equation~(\ref{eq:ur-KPP}) can move only from the right to the left. It means that the speed of any travelling wave is negative and, moreover, is bounded away from~0. \begin{prop} For equation~(\ref{eq:ur-KPP}), for every $v\leq v_{*}=-\sqrt{4\gamma \mu}$ there exists a unique (up to shift) travelling wave solution with speed~$v$. There are no other travelling wave solutions satisfying the conditions U1 and U2. \end{prop} If a function $f=f(x)$ is such that $f(x)\leq 1$, $f(x)\rightarrow 1$ as $x\rightarrow -\infty $ and there exists a limit \( \varkappa =\lim\limits _{x\rightarrow -\infty }{x}^{-1}{\log (1-f(x))}>0, \) then the number $\varkappa $ is called the \emph{Lyapunov exponent of the function $f$ (at minus infinity)}. It is well known (\cite{Volpert}) that for the equation~(\ref{eq:ur-KPP}) a travelling wave $w(x)$ with speed~$v$ has the following Lyapunov exponent at minus infinity: \[ \varkappa (v)=\left(-v-\sqrt{v^{2}-4\gamma \mu }\right)/\left(2\gamma\right).\] Hence we get that for the travelling wave with minimal in absolute value speed $v_{*}=-\sqrt{4\gamma \mu }$ the Lyapunov exponent is \( \varkappa (v_{*})=\sqrt{{\mu }/{\gamma }}. \) \begin{prop}[\cite{Volpert}] Assume that an initial function $\psi (x)$ has a Lyapunov exponent $\varkappa $.
Then a) if $\varkappa \geq \sqrt{\mu/\gamma}$, then the solution $u(t,x)$ of the problem~(\ref{eq:ur-KPP}) converges in form and in speed to the travelling wave moving with the minimal speed $v_{*}=-\sqrt{4\gamma \mu }$; b) if $\varkappa <\sqrt{\mu/\gamma}$, then the solution $u(t,x)$ of the problem~(\ref{eq:ur-KPP}) converges in form and in speed to the travelling wave with speed \( \displaystyle v=-\left(\gamma\varkappa +\frac{\mu }{\varkappa }\right),\) or, in other words, $\varkappa (v)=\varkappa $. \end{prop} We see from the above analysis that both the first order PDE~(\ref{eq:ur-asym}) and the second order PDE~(\ref{eq:ur-KPP}) exhibit similar long-time behavior of their solutions. This seems very natural if we recall from Theorem~\ref{hydro} that both equations arise as hydrodynamical approximations of the same stochastic particle system. \appendix \section{Appendix} \subsection{Strong topology on the Skorokhod space} \label{top} Recall that the Schwartz space $S({\mathbf{R}} )$ is a Fr\'echet space (see~\cite{RS}). In the dual space $S^{\prime }({\mathbf{R}} )$ of tempered distributions there are at least two ways to define a topology (both non-metrizable): 1) the \emph{weak topology} on $S^{\prime }({\mathbf{R}} )$, in which all functionals \( \left( \, \cdot \, ,\phi \right) ,\quad \phi \in S({\mathbf{R}} ) \) are continuous; 2) the \emph{strong topology} ($\mbox{\itshape{s.t.}}$) on $S^{\prime }({\mathbf{R}} )$, which is generated by the set of seminorms \[ \left\{ \rho _{A}(M)=\sup _{\phi \in A}\left|\left( M,\phi \right) \right|\, :\, \, A\subset S({\mathbf{R}} )\ \text{bounded}\right\} . \] Below we shall consider $S^{\prime }({\mathbf{R}} )$ as equipped with the strong topology. The problem of introducing the Skorokhod topology on the space $D_{T}(S^{\prime}):=D([0,T],S^{\prime}({\mathbf{R}} ))$ was studied in~\cite{Mitoma} and~\cite{Jakub}. We follow these papers.
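The relation between the Lyapunov exponent and the selected wave speed is easy to check numerically. The sketch below (an illustration only; the values $\gamma=2$, $\mu=0.5$ are arbitrary choices) verifies that $\varkappa(v_*)=\sqrt{\mu/\gamma}$ and that, in case b), the speed $v=-(\gamma\varkappa+\mu/\varkappa)$ indeed satisfies $\varkappa(v)=\varkappa$:

```python
import math

def kappa(v, gamma=1.0, mu=1.0):
    """Lyapunov exponent at -infinity of the travelling wave with speed v <= v*."""
    return (-v - math.sqrt(v * v - 4 * gamma * mu)) / (2 * gamma)

def speed(k, gamma=1.0, mu=1.0):
    """Wave speed selected by an initial profile with exponent k < sqrt(mu/gamma), case b)."""
    return -(gamma * k + mu / k)

gamma, mu = 2.0, 0.5
v_star = -math.sqrt(4 * gamma * mu)
# minimal-speed wave: kappa(v*) = sqrt(mu/gamma)
assert abs(kappa(v_star, gamma, mu) - math.sqrt(mu / gamma)) < 1e-12
# for kappa < sqrt(mu/gamma), kappa(speed(kappa)) recovers kappa, i.e. kappa(v) = varkappa
for k in (0.1, 0.25, 0.4):
    assert abs(kappa(speed(k, gamma, mu), gamma, mu) - k) < 1e-12
```

The identity follows from $v^2-4\gamma\mu=(\gamma\varkappa-\mu/\varkappa)^2$ when $v=-(\gamma\varkappa+\mu/\varkappa)$; the code merely confirms it in floating point.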
For each seminorm $\rho _{A}$ on $S'(\mathbf{R})$ we define the following pseudometric on $D([0,T], S'(\mathbf{R}))$: \[ d_{A}(y,z)=\inf _{\lambda \in \Lambda }\left\{ \sup _{t}\rho_A(y_{t}-z_{\lambda (t)}) +\sup _{t\not =s}\left|\log \frac{\lambda (t)-\lambda (s)}{t-s}\right|\right\} ,\quad y,z\in D([0,T], S'(\mathbf{R})),\] where the $\inf$ is taken over the set $\Lambda =\left\{ \lambda =\lambda (t),\, t\in [0,T]\right\} $ of all strictly increasing continuous mappings of $[0,T]$ onto itself. Introducing on $D([0,T], S'(\mathbf{R}))$ the projective limit topology of $\{d_A(\cdot,\cdot)\}$, we get a completely regular topological space. \subsection{Mitoma theorem} \label{mit} Let $\mathcal{B}_{D_{T}(S^{\prime})}$ be the corresponding Borel $\sigma $-algebra. Let $\left\{ P_{n}\right\} $ be a sequence of probability measures on $\left(D_{T}(S^{\prime}),\mathcal{B}_{D_{T}(S^{\prime})}\right)$. For each $\phi \in S({\mathbf{R}} )$ consider the map $\mathcal{I} _{\phi }:\, y\in D_{T}(S^{\prime})\mapsto (y,\phi )\in D_{T}({\mathbf{R}} )$. The following result belongs to I.~Mitoma~\cite{Mitoma}. \begin{thr} Suppose that for any $\phi \in S({\mathbf{R}} )$ the sequence $\left\{ P_{n}\mathcal{I} _{\phi }^{-1}\right\} $ is tight in $D_{T}({\mathbf{R}} )$. Then the sequence~$\left\{ P_{n}\right\} $ itself is tight in~$D_{T}(S^{\prime})$. \end{thr} \subsection{Probability measures on the Skorokhod space: tightness} Let $\left\{ \left(\xi _{t}^{n},t\in [0,T]\right)\right\} _{n\in \mathbf{N}}$ \ be a sequence of real random processes whose trajectories are right-continuous and admit left-hand limits for every $0<t\leq T$. We will consider $\xi ^{n}$ as random elements with values in the Skorokhod space $D_{T}({\mathbf{R}} ):=D\left([0,T],{\mathbf{R}} ^{1}\right)$ with the standard topology.
Denote by $P_{T}^{n}$ the distribution of $\xi ^{n}$, defined on the measurable space $\left(D_{T}({\mathbf{R}} ),\mathcal{B}\left(D_{T}({\mathbf{R}} )\right)\right)$. The following result can be found in~\cite{billingsley}. \begin{thr} \label{bil} ~\ The sequence of probability measures $\left\{ P_{T}^{n}\right\} _{n\in \mathbf{N}}$ is tight iff the following two conditions hold: 1) for any $\varepsilon >0$ there is $\, C(\varepsilon )>0$ such that \[ \sup _{n}P_{T}^{n}\left(\sup _{0\leq t\leq T}\left|\xi _{t}^{n}\right|>C(\varepsilon )\right)\leq \varepsilon \, ; \] 2) for any $\varepsilon >0$ \[ \lim _{\gamma \rightarrow 0}\limsup _{n}P_{T}^{n}\left(\xi _{\cdot }:\, w^{\prime }(\xi ;\gamma )>\varepsilon \right)=0\, , \] where for any function $f:\, [0,T]\rightarrow {\mathbf{R}} $ and any $\gamma >0$ we define \[ w^{\prime }(f;\gamma )=\inf _{\left\{ t_{i}\right\} _{i=1}^{r}}\, \max _{i<r}\, \sup _{t_{i}\leq s<t<t_{i+1}}\left|f(t)-f(s)\right|\, , \] and the $\inf $ is over all partitions of the interval $[0,T]$ such that \[ 0=t_{0}<t_{1}<\cdots <t_{r}=T,\qquad t_{i}-t_{i-1}>\gamma ,\quad i=1,\ldots ,r. \] \end{thr} The following theorem is known as the sufficient condition of Aldous~\cite[Proposition 1.6]{KipLan}. \begin{thr} \label{ald} Condition 2) of the previous theorem follows from the condition \[ \forall \varepsilon >0\qquad \lim _{\gamma \rightarrow 0}\, \limsup _{n}\, \sup _{\tau \in \mathcal{R}_{T},\, \theta \leq \gamma }P_{T}^{n}\left(\left|\xi _{\tau +\theta }-\xi _{\tau }\right|>\varepsilon \right)=0\, , \] where $\mathcal{R}_{T}$ \ is the set of Markov moments (stopping times) not exceeding $T$. \end{thr} \end{document}
\begin{document} \author{Yuliya Babenko, Tatyana Leskevich, Jean-Marie Mirebeau} \title{Sharp asymptotics of the $L_p$ approximation error for interpolation on block partitions} \date{} \begin{abstract} Adaptive approximation (or interpolation) takes into account local variations in the behavior of the given function, adjusts the approximant depending on them, and hence yields a smaller error of approximation. The question of constructing an optimal approximating spline {\it for each function} proved to be very hard. In fact, no polynomial time algorithm of adaptive spline approximation can be designed and no exact formula for the optimal error of approximation can be given. Therefore, the next natural question is to study the asymptotic behavior of the error and construct asymptotically optimal sequences of partitions. In this paper we provide sharp asymptotic estimates for the error of interpolation by splines on block partitions in $\mathbb{R}^d$. We consider various projection operators to define the interpolant and provide the analysis of the exact constant in the asymptotics as well as its explicit form in certain cases. \end{abstract} \noindent Yuliya Babenko\\ Department of Mathematics and Statistics\\ Sam Houston State University\\ Box 2206\\ Huntsville, TX, USA 77340-2206\\ Phone: 936.294.4884\\ Fax: 936.294.1882\\ Email: [email protected]\\ \noindent Tatyana Leskevich\\ Department of Mathematical Analysis \\ Dnepropetrovsk National University \\ pr.
Gagarina, 72, \\ Dnepropetrovsk, UKRAINE, 49050 \\ Email: [email protected] \\ \noindent Jean-Marie Mirebeau\\ UPMC Univ Paris 06, UMR 7598, Laboratoire Jacques-Louis Lions, F-75005, Paris, France\\ CNRS, UMR 7598, Laboratoire Jacques-Louis Lions, F-75005, Paris, France\\ Email: [email protected]\\ \section{Introduction} The goal of this paper is to study the adaptive approximation by interpolating splines defined over block partitions in $\mathbb{R}^d$. With the help of the introduced projection operator we shall handle the general case, and then apply the obtained estimates to several different interpolating schemes most commonly used in practice. Our approach is to introduce the ``error function'', which reflects the interaction of the approximation procedure with polynomials. Throughout the paper we shall study the asymptotic behavior of the approximation error and, whenever possible, the explicit form of the error function, which plays a major role in finding the constants in the formulae for exact asymptotics. \subsection{The projection operator} Let us first introduce the definitions that will be necessary to state the main problem and the results of this paper. We consider a fixed integer $d\geq 1$ and we denote by $x=(x_1, \cdots , x_d)$ the elements of $\mathbb R^d$. A block $R$ is a subset of $\mathbb R^d$ of the form $$ R = \prod_{1\leq i\leq d} [a_i,b_i], $$ where $a_i< b_i$ for all $1\leq i\leq d$. For any block $R\subset \mathbb R^d$, by $L_{p}(R)$, $1\le p\le \infty$, we denote the space of measurable functions $f:R \to \mathbb{R}$ for which the value $$ \|f\|_p = \|f\|_{L_p(R)} : = \left\{\begin{array}{ll} \left(\displaystyle\int\limits_{R} |f (x) | ^p dx\right)^{\frac 1p}, &{\rm if}\;\;\; 1\leq p < \infty, \\ [10pt] {\rm ess\,sup} \{|f (x) | : \; x \in R \}, &{\rm if}\;\;\; p =\infty.
\end{array}\right. $$ is finite. We also consider the space $C^0(R)$ of continuous functions on $R$ equipped with the uniform norm $\|\cdot\|_{L_\infty(R)}$. We shall make frequent use of the canonical block ${\mathbb I}^d$, where ${\mathbb I}$ is the interval $$ {\mathbb I} := \left[-\frac 1 2, \frac 1 2\right]. $$ Next we define the space $V := C^0({\mathbb I}^d)$ and the norm $\|\cdot\|_V := \|\cdot\|_{L_\infty({\mathbb I}^d)}$. Throughout this paper we consider a linear and bounded (hence, continuous) operator $ \mathrm{I} : V\to V. $ This implies that there exists a constant $C_{\mathrm{I}}$ such that \begin{equation} \label{contI} \|\mathrm{I} u\|_V \leq C_{\mathrm{I}} \|u\|_V \text{ for all } u\in V. \end{equation} We assume furthermore that $\mathrm{I}$ is a projector, which means that it satisfies \begin{equation} \label{projAxiom} \mathrm{I}\circ \mathrm{I} = \mathrm{I}. \end{equation} Let $R$ be an arbitrary block. It is easy to show that there exist a unique $x_0\in \mathbb R^d$ and a unique diagonal matrix $D$ with positive diagonal coefficients such that the transformation \begin{equation} \label{defPhi} \phi(x) := x_0+ Dx \ \text{ satisfies } \ \phi({\mathbb I}^d) = R. \end{equation} The volume of $R$, denoted by $|R|$, is equal to $\det(D)$. For any function $f\in C^0(R)$ we then define \begin{equation} \label{interpR} \mathrm{I}_R f := \mathrm{I}(f\circ\phi)\circ \phi^{-1}. \end{equation} Note that \begin{equation} \label{changeRect} \|f-\mathrm{I}_R f\|_{L_p(R)} = (\det D)^{\frac 1 p}\|f \circ \phi - \mathrm{I}(f\circ\phi)\|_{L_p({\mathbb I}^d)}.
\end{equation} A block partition ${\cal R}$ of a block $R_0$ is a finite collection of blocks whose union covers $R_0$ and whose pairwise intersections have zero Lebesgue measure. If ${\cal R}$ is a block partition of a block $R_0$ and if $f\in C^0(R_0)$, by $ \mathrm{I}_{\cal R} f \in L_\infty(R_0) $ we denote the (possibly discontinuous) function which coincides with $\mathrm{I}_R f$ on the interior of each block $R\in {\cal R}$. {\bf Main Question.} The purpose of this paper is to understand the asymptotic behavior of the quantity $$ \|f-\mathrm{I}_{{\cal R}_N} f \|_{L_p(R_0)} $$ {\it for each given function $f$} on $R_0$ from some smoothness class, where $({\cal R}_N)_{N\geq 1}$ is a sequence of block partitions of $R_0$ that are optimally adapted to $f$. Note that the exact value of this error can be explicitly computed only in trivial cases. Therefore, the natural question is to study the asymptotic behavior of the error, i.e. its behavior as the number of elements of the partition ${{\cal R}_N}$ tends to infinity. Most of our results hold with only the assumption \eqref{contI} of continuity of the operator $\mathrm{I}$, the projection axiom \eqref{projAxiom}, and the definition of $\mathrm{I}_R$ given by \eqref{interpR}. Our analysis therefore applies to various projection operators $\mathrm{I}$, such as the $L_2$-orthogonal projection on a space of polynomials, or the spline interpolation schemes described in \S\ref{exampleI}. \subsection{History} The main problem formulated above is interesting for functions of arbitrary smoothness as well as for various classes of splines (for instance, splines of higher order, interpolating splines, best approximating splines, etc.). In the univariate case general questions of this type have been investigated by many authors. The results are more or less complete and have numerous applications (see, for example, ~\cite{LSh}).
Fewer results are known in the multivariate case. Most of them concern approximation by splines on triangulations (for a review of existing results see, for instance, ~\cite{Gr3, KL, us, Cohen, JM}). However, in applications where preferred directions exist, box partitions are sometimes more convenient and efficient. The first result on the error of interpolation on rectangular partitions by bivariate splines linear in each variable (or bilinear) is due to D'Azevedo~\cite{Daz1}, who obtained local (on a single rectangle) error estimates. In ~\cite{PhD} Babenko obtained the exact asymptotics for the error (in the $L_1$, $L_2$, and $L_{\infty}$ norms) of interpolation of $C^2({\mathbb I}^d)$ functions by bilinear splines. In ~\cite{JAT} Babenko generalized the result to interpolation and quasi-interpolation of a function $f\in C^2({\mathbb I}^d)$ with arbitrary but fixed throughout the domain signature (number of positive and negative second-order partial derivatives). However, the norm used to measure the error of approximation was the uniform norm. In this paper we use a different, more abstract, approach which allows us to obtain the exact asymptotics of the error in a more general framework which can be applied to many particular interpolation schemes by an appropriate choice of the interpolation operator. In general, the constant in the asymptotics is implicit. However, imposing additional assumptions on the interpolation operator allows us to compute the constant explicitly. The paper is organized as follows. Section \ref{subsecApprox} contains the statements of the main approximation results. The closer study of the error function, as well as its explicit formulas under some restrictions, can be found in Section \ref{studyK}. The proofs of the theorems about the asymptotic behavior of the error are contained in Section \ref{proofTh}.
\subsection{Polynomials and the error function} In order to obtain the asymptotic error estimates we need to study the interaction of the projection operator $\mathrm{I}$ with polynomials. The notation $\alpha$ always refers to a $d$-vector of non-negative integers $$ \alpha = (\alpha_1,\cdots , \alpha_d) \in \mathbb{Z}_+^d. $$ For each $\alpha$ we define the following quantities $$ |\alpha|:= \sum_{1\leq i\leq d} \alpha_i, \quad \alpha! := \prod_{1\leq i\leq d} \alpha_i!, \quad \max(\alpha) := \max_{1\leq i\leq d} \alpha_i. $$ We also define the monomial $$ X^\alpha := \prod_{1\leq i\leq d} X_i^{\alpha_i}, $$ where the variable is $X=(X_1,...,X_d)\in \mathbb{R}^d$. Finally, for each integer $k\geq 0$ we define the following three vector spaces of polynomials \begin{equation} \label{defPk} \begin{array}{lcl} {\mathbb{P}}_k & :=& \Vect\{X^\alpha : \; |\alpha| \leq k\}, \\ {\mathbb{P}}_k^* &:=& \Vect\{X^\alpha : \; \max(\alpha) \leq k \text{ and } |\alpha| \leq k+1\}, \\ {\mathbb{P}}_k^{**} &:=& \Vect\{X^\alpha : \; \max(\alpha) \leq k\}. \end{array} \end{equation} Note that clearly $\dim({\mathbb{P}}_k^{**}) = (k+1)^d$. In addition, a classical combinatorial argument shows that $$ \dim {\mathbb{P}}_k = \binom {k+d}{d}. $$ Furthermore, $$ \dim{\mathbb{P}}_k^* = \dim{\mathbb{P}}_{k+1} - d = \binom {k+d+1}{d} - d. $$ By $V_{\mathrm{I}}$ we denote the image of $\mathrm{I}$, which is a subspace of $V = C^0({\mathbb I}^d)$.
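The three dimension counts above can be confirmed by brute-force enumeration of the multi-indices $\alpha$. The short Python sketch below (illustrative, not part of the paper) does this for small $k$ and $d$:

```python
from itertools import product
from math import comb

def dims(k, d):
    """Count the monomials X^alpha spanning P_k, P_k^* and P_k^** of (defPk) by enumeration."""
    # components alpha_i in {0, ..., k+1} suffice for all three membership tests below
    alphas = list(product(range(k + 2), repeat=d))
    P_k   = sum(1 for a in alphas if sum(a) <= k)                       # |alpha| <= k
    P_ks  = sum(1 for a in alphas if max(a) <= k and sum(a) <= k + 1)   # max <= k, |alpha| <= k+1
    P_kss = sum(1 for a in alphas if max(a) <= k)                       # max(alpha) <= k
    return P_k, P_ks, P_kss

for k in range(4):
    for d in range(1, 4):
        P_k, P_ks, P_kss = dims(k, d)
        assert P_k == comb(k + d, d)           # dim P_k = binom(k+d, d)
        assert P_ks == comb(k + d + 1, d) - d  # dim P_k^* = binom(k+d+1, d) - d
        assert P_kss == (k + 1) ** d           # dim P_k^** = (k+1)^d
```

The middle count also illustrates why $\dim{\mathbb{P}}_k^* = \dim{\mathbb{P}}_{k+1}-d$: the only multi-indices with $|\alpha|\leq k+1$ excluded by the constraint $\max(\alpha)\leq k$ are the $d$ vectors $(k+1)e_i$.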
Since $\mathrm{I}$ is a projector \eqref{projAxiom}, we have \begin{equation} \label{defVI} V_{\mathrm{I}} = \{\mathrm{I}(f): \;\; f\in V\} = \{f \in V : \;\; f=\mathrm{I}(f)\}. \end{equation} From this point on, the integer $k$ is fixed and defined as follows: \begin{equation} \label{defk} k = k(\mathrm{I}) := \max \{k'\geq 0 : \; {\mathbb{P}}_{k'} \subset V_{\mathrm{I}} \}. \end{equation} Hence, the operator $\mathrm{I}$ reproduces polynomials of total degree less than or equal to $k$. (If $k=\infty$, then we obtain, using the density of polynomials in $V$ and the continuity of $\mathrm{I}$, that $\mathrm{I}(f) = f$ for all $f\in V$. We exclude this case from now on.) In what follows, by $m$ we denote the integer defined by \begin{equation} \label{defm} m = m(\mathrm{I}) :=k+1, \end{equation} where $k = k(\mathrm{I})$ is defined in \eqref{defk}. By ${\mathbb{H}}_m$ we denote the space of homogeneous polynomials of degree $m$: $$ {\mathbb{H}}_m := \Vect\{ X^\alpha : \; |\alpha| = m\}. $$ We now introduce a function $K_I$ on ${\mathbb{H}}_m$, further referred to as the ``error function''. \begin{definition}[Error Function] For all $\pi \in {\mathbb{H}}_m$, \begin{equation} \label{defK} K_I(\pi) := \inf_{|R|=1} \|\pi - \mathrm{I}_R \pi\|_{L_p(R)}, \end{equation} where the infimum is taken over all blocks $R$ of unit $d$-dimensional volume. \end{definition} The error function $K_I$ plays a major role in our asymptotic error estimates developed in the next subsection. Hence, we dedicate \S2 to its close study, and we provide its explicit form in various cases. The optimization \eqref{defK} among blocks can be rephrased as an optimization among diagonal matrices.
Indeed, if $|R|=1$, then there exist a unique $x_0\in \mathbb R^d$ and a unique diagonal matrix $D$ with positive coefficients such that $R = \phi({\mathbb I}^d)$ with $\phi(x) = x_0+ Dx$. Furthermore, the homogeneous component of degree $m$ is the same in both $\pi\circ \phi$ and $\pi\circ D$, hence $\pi\circ \phi - \pi\circ D \in {\mathbb{P}}_k$ (recall that $m = k+1$) and therefore this polynomial is reproduced by the projection operator $\mathrm{I}$. Using the linearity of $\mathrm{I}$, we obtain $$ \pi \circ \phi - \mathrm{I} (\pi \circ \phi) = \pi \circ D - \mathrm{I} (\pi \circ D). $$ Combining this with \eqref{changeRect}, we obtain that \begin{equation} \label{KD} K_I(\pi) = \inf_{\substack{\det D = 1\\D\geq 0}} \|\pi\circ D - \mathrm{I}(\pi\circ D)\|_{L_p({\mathbb I}^d)}, \end{equation} where the infimum is taken over the set of diagonal matrices with non-negative entries and unit determinant. \subsection{Examples of projection operators} \label{exampleI} In this section we define several possible choices for the projection operator $\mathrm{I}$ which are consistent with \eqref{defk} and, in our opinion, are most useful for practical purposes. However, many other possibilities could be considered. \begin{definition}[$L_2({\mathbb I}^d)$ orthogonal projection] \label{L2Proj} We may define $\mathrm{I}(f)$ as the $L_2({\mathbb I}^d)$ orthogonal projection of $f$ onto one of the spaces of polynomials ${\mathbb{P}}_k$, ${\mathbb{P}}_k^*$ or ${\mathbb{P}}_k^{**}$ defined in \eqref{defPk}.
\end{definition} If the projection operator $\mathrm{I}$ is chosen as in Definition \ref{L2Proj}, then a simple change of variables shows that for any block $R$, the operator $\mathrm{I}_R$ defined by \eqref{interpR} is the $L_2(R)$ orthogonal projection onto the same space of polynomials. To introduce several possible interpolation schemes for which we obtain the estimates using our approach, we consider a set $U_k\subset {\mathbb I}$ of cardinality $\#(U_k) = k+1$ (special cases are given below). For any ${\bf u} = (u_1, \cdots, u_d) \in U_k^d$ we define an element of ${\mathbb{P}}_k^{**}$ as follows: $$ \mu_{\bf u} (X):= \prod_{1 \leq i\leq d} \left( \prod_{\substack{v\in U_k\\ v\neq u_i}} \frac{X_i - v}{u_i-v}\right) \in {\mathbb{P}}_k^{**}. $$ Clearly, $ \mu_{\bf u} ({\bf u}) = \mu_{\bf u} (u_1, \cdots , u_d) = 1 $ and $ \mu_{\bf u}({\bf v}) = \mu_{\bf u}(v_1, \cdots , v_d) = 0 $ if ${\bf v} = (v_1, \cdots , v_d)\in U_k^d$ and ${\bf v}\neq {\bf u}$. It follows that the elements of $B := (\mu_{\bf u})_{{\bf u}\in U_k^d}$ are linearly independent. Since $\# (B) = \#(U_k^d) = (k+1)^d = \dim({\mathbb{P}}_k^{**})$, $B$ is a basis of ${\mathbb{P}}_k^{**}$. Therefore, any element $\mu \in {\mathbb{P}}_k^{**}$ can be written in the form $$ \mu(X) = \sum_{{\bf u}\in U_k^d} \lambda_{\bf u} \mu_{\bf u}(X). $$ It follows that there is a unique element $\mu\in {\mathbb{P}}_k^{**}$ such that $\mu({\bf u}) = f({\bf u})$ for all ${\bf u}\in U_k^d$. We define $\mathrm{I} f := \mu$, namely $$ (\mathrm{I} f)(X) := \sum_{{\bf u}\in U_k^d} f({\bf u}) \mu_{\bf u}(X) \in {\mathbb{P}}_k^{**}.
$$ We may take $U_k$ to be the set of $k+1$ equispaced points on ${\mathbb I}$: \begin{equation} \label{interpEqui} U_k = \left\{-\frac 1 2 + \frac n k : \; 0 \leq n \leq k\right\}. \end{equation} We obtain a different, but equally relevant, operator $\mathrm{I}$ by choosing $U_k$ to be the set of Tchebychev points on ${\mathbb I}$: \begin{equation} \label{interpTche} U_k = \left\{\frac 1 2 \cos\left( \frac {n\pi} k\right) : \; 0 \leq n \leq k\right\}. \end{equation} Different interpolation procedures can be used to construct $\mathrm{I}$. Another convenient interpolation scheme is to take $$ \mathrm{I}(f) \in {\mathbb{P}}_k^* $$ with $\mathrm{I}(f) = f$ on a subset of $U_k^d$. This subset contains $\dim {\mathbb{P}}_k^*$ points, which it is convenient to choose first on the boundary of ${\mathbb I}^d$ and then (if needed) at some interior lattice points. Note that since $\dim {\mathbb{P}}_k^*< \#(U_k^d) =(k+1)^d$, it is always possible to construct such an operator. If the projection operator $\mathrm{I}$ is chosen as described above, then for any block $R$ and any $f\in C^0(R)$, $\mathrm{I}_R(f)$ is the unique element of the respective space of polynomials which coincides with $f$ at the images $\phi(p)$ of the points $p$ mentioned in the definition of $\mathrm{I}$ under the transformation $\phi$ described in \eqref{defPhi}. \subsection{Main results} \label{subsecApprox} In order to obtain the approximation results we often impose a slight technical restriction (which can be removed, see for instance ~\cite{us}) on sequences of block partitions, which is defined as follows.
\mathbfegin{equation}gin{definition}[admissibility] We say that a sequence $({\cal R}_N)_{N\geq 1}$ of block partitions of a block $R_0$ is {\bf v}arepsilonmph{admissible} if $\#({\cal R}_N) \lefteq N$ for all $N \geq 1$, and \mathbfegin{equation} \leftabel{defadmi} \sup_{N\geq 1} \lefteft( N^{\frac 1 d} \sup_{R \inftyn {\cal R}_N} \diam(R)\rightight) < \inftynfty. {\bf v}arepsilonnd{equation} {\bf v}arepsilonnd{definition} We recall that the approximation error is measured in the $L_p$ norm, where the exponent $p$ is fixed and $1\lefteq p \lefteq \inftynfty$. We define $\tildeldeau\inftyn (0, \inftynfty)$ by \mathbfegin{equation} \leftabel{defTau} \frac 1 \tildeldeau := \frac m d+ \frac 1 p. {\bf v}arepsilonnd{equation} In the following estimates we identify $d^m f(x)$ with an element of ${\rightm \hbox{I\kern-.2em\hbox{H}}}_m$ according to \mathbfegin{equation} \leftabel{dmfHm} \frac{d^m f(x)}{m!} \sim \sum_{|\mathbfalpha| = m} \frac{{\bf p}artial^m f(x)}{{\bf p}artial x^\mathbfalpha} \frac{X^\mathbfalpha}{\mathbfalpha!}. {\bf v}arepsilonnd{equation} We now state the asymptotically sharp lower bound for the approximation error of a function $f$ on an admissible sequence of block partitions. \mathbfegin{equation}gin{theorem} \leftabel{thLower} Let $R_0$ be a block and let $f\inftyn C^m(R_0)$. For any admissible sequence of block partitions $({\cal R}_N)_{N\geq 1}$ of $R_0$, $$ \leftiminf_{N\tildeldeo \inftynfty} N^{\frac m d}\|f-\inftynterp_{{\cal R}_N} f\|_{L_p(R_0)} \geq \lefteft\|K_I\lefteft(\frac{d^m f}{m!}\rightight)\rightight\|_{L_\tildeldeau(R_0)}. $$ {\bf v}arepsilonnd{theorem} The next theorem provides an upper bound for the projection error of a function $f$ when an optimal sequence of block partitions is used. It confirms the sharpness of the previous theorem. \mathbfegin{equation}gin{theorem} \leftabel{thUpper} Let $R_0$ be a block and let $f\inftyn C^m(R_0)$.
Then there exists a (perhaps non-admissible) sequence $({\cal R}_N)_{N\geq 1}$, $\#{\cal R}_N \lefteq N$, of block partitions of $R_0$ satisfying \mathbfegin{equation} \leftabel{upperEstim} \leftimsup_{N\tildeldeo \inftynfty} N^{\frac m d}\|f-\inftynterp_{{\cal R}_N} f\|_{L_p(R_0)} \lefteq \lefteft\|K_I\lefteft(\frac{d^m f}{m!}\rightight)\rightight\|_{L_\tildeldeau(R_0)}. {\bf v}arepsilonnd{equation} Furthermore, for all ${\bf v}arepsilon>0$ there exists an admissible sequence $({\cal R}_N^{\bf v}arepsilon)_{N\geq 1}$ of block partitions of $R_0$ satisfying \mathbfegin{equation} \leftabel{upperEstimEps} \leftimsup_{N\tildeldeo \inftynfty} N^{\frac m d}\|f-\inftynterp_{{\cal R}_N^{\bf v}arepsilon} f\|_{L_p(R_0)} \lefteq \lefteft\|K_I\lefteft(\frac{d^m f}{m!}\rightight)\rightight\|_{L_\tildeldeau(R_0)}+{\bf v}arepsilon. {\bf v}arepsilonnd{equation} {\bf v}arepsilonnd{theorem} An important feature of these estimates is the ``$\leftimsup$''. Recall that the upper limit of a sequence $(u_N)_{N\geq N_0}$ is defined by $$ \leftimsup_{N\tildeldeo \inftynfty} u_N := \leftim_{N\tildeldeo \inftynfty} \sup_{n\geq N} u_n, $$ and is in general strictly smaller than the supremum $\sup_{N\geq N_0} u_N$. It is still an open question to find an appropriate upper estimate of $\sup_{N\geq N_0} N^{\frac{m} d} \|f-\inftynterp_{{\cal R}_N} f\|_{L_p(R_0)}$ when optimally adapted block partitions are used. In order to have more control of the quality of approximation on various parts of the domain we introduce a positive weight function $\Omega\inftyn C^0(R_0)$. For $1\lefteq p\lefteq \inftynfty$ and for any $u\inftyn L_p(R_0)$ as usual we define \mathbfegin{equation} \leftabel{defWeight} \|u\|_{L_p(R_0, \Omega)} := \|u\Omega\|_{L_p(R_0)}. 
{\bf v}arepsilonnd{equation} \mathbfegin{equation}gin{remark} \leftabel{remarkWeight} Theorems \rightef{thLower}, \rightef{thUpper} and \rightef{thNoEps} below also hold when the norm $\| \cdot\|_{L_p(R_0)}$ (resp.\ $\| \cdot\|_{L_\tildeldeau(R_0)}$) is replaced with the weighted norm $\| \cdot\|_{L_p(R_0, \Omega)}$ (resp.\ $\| \cdot\|_{L_\tildeldeau(R_0, \Omega)}$) defined in \inftyref{defWeight}. {\bf v}arepsilonnd{remark} In the following section we shall use some restrictive hypotheses on the interpolation operator in order to obtain an explicit formula for the shape function. In particular, Propositions \rightef{propOdd}, \rightef{propEven}, and equation \inftyref{Kdmf} show that, under some assumptions, there exists a constant $C=C(\inftynterp)>0$ such that $$ \frac 1 {C} K_I\lefteft(\frac{d^m f}{m!}\rightight) \lefteq \sqrt[d]{\lefteft|{\bf p}rod_{1\lefteq i\lefteq d} \frac{{\bf p}artial^m f}{{\bf p}artial x_i^m}\rightight|} \lefteq C K_I\lefteft(\frac{d^m f}{m!}\rightight). $$ These restrictive hypotheses also allow us to improve slightly the estimate \inftyref{upperEstimEps} as follows. \mathbfegin{equation}gin{theorem} \leftabel{thNoEps} If the hypotheses of Proposition \rightef{propOdd} or \rightef{propEven} hold, and if $ K_I\lefteft(\frac{d^m f}{m!}\rightight) >0 $ everywhere on $R_0$, then there exists an {\bf v}arepsilonmph{admissible} sequence of partitions $({\cal R}_N)_{N\geq 1}$ which satisfies the optimal estimate \inftyref{upperEstim}. {\bf v}arepsilonnd{theorem} The proofs of Theorems \rightef{thLower}, \rightef{thUpper} and \rightef{thNoEps} are given in \S\rightef{proofTh}. Each of these proofs can be adapted to weighted norms, hence establishing Remark \rightef{remarkWeight}. Some details on how to adapt the proofs to the case of weighted norms are provided at the end of each proof.
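Before turning to the study of the error function, the tensor-product interpolation operator of \S\rightef{exampleI} can be illustrated by a short numerical sketch (the names \texttt{nodes\_equi}, \texttt{nodes\_cheb}, \texttt{mu} and \texttt{interp} are ours and purely illustrative, not part of the text above); it builds the cardinal basis $(\mu_{\bf u})$ on $U_k^d$ and checks that polynomials of degree at most $k$ in each variable are reproduced exactly, while a pure power of degree $k+1$ is not.

```python
import math
from itertools import product

def nodes_equi(k):
    # k+1 equi-spaced points on I = [-1/2, 1/2], as in (interpEqui)
    return [-0.5 + n / k for n in range(k + 1)]

def nodes_cheb(k):
    # k+1 Tchebychev points on I, as in (interpTche)
    return [0.5 * math.cos(n * math.pi / k) for n in range(k + 1)]

def mu(U, m, x):
    # Cardinal basis function mu_u for the node u = (U[m[0]], ..., U[m[d-1]]):
    # a product over coordinates of one-dimensional Lagrange factors.
    val = 1.0
    for axis, i in enumerate(m):
        for j, v in enumerate(U):
            if j != i:
                val *= (x[axis] - v) / (U[i] - v)
    return val

def interp(f, U, d):
    # (interp f)(x) = sum_u f(u) mu_u(x), an element of P_k^{**}
    indices = list(product(range(len(U)), repeat=d))
    def If(x):
        return sum(f(tuple(U[i] for i in m)) * mu(U, m, x) for m in indices)
    return If
```

For instance, with $k=2$, $d=2$ and equi-spaced nodes, the operator reproduces $x_1^2x_2^2 + x_1 - 1 \inftyn {\rightm \hbox{I\kern-.2em\hbox{P}}}_2^{**}$ exactly, whereas the interpolant of $x_1^3$ (degree $k+1$) differs from it, in accordance with the definition \inftyref{defk} of $k$.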
\section{Study of the error function} \leftabel{studyK} In this section we perform a close study of the error function $K_I$, since it plays a major role in our asymptotic error estimates. In the first subsection \S\rightef{studyKGen} we investigate general properties which are valid for any continuous projection operator $\inftynterp$. However, we are not able to obtain an explicit form of $K_I$ under such general assumptions. Recall that in \S\rightef{exampleI} we presented several possible choices of projection operators $\inftynterp$ that are most likely to be used in practice. In \S\rightef{subsecproperties} we identify four important properties shared by these examples. These properties are used in \S \rightef{subsecExact} to obtain an explicit form of $K_I$. \subsection{General properties} \leftabel{studyKGen} The error function $K_I$ obeys the following important invariance property with respect to diagonal changes of coordinates. \mathbfegin{equation}gin{prop} \leftabel{propInv} For all ${\bf p}i\inftyn {\rightm \hbox{I\kern-.2em\hbox{H}}}_m$ and all diagonal matrices $D$ with non-negative coefficients $$ K_I({\bf p}i\circ D) = (\det D)^{\frac m d} K_I({\bf p}i). $$ {\bf v}arepsilonnd{prop} {\noindent \mathbff Proof: } We first assume that the diagonal matrix $D$ has positive diagonal coefficients. Let $\overline D$ be a diagonal matrix with positive diagonal coefficients and which satisfies $\det \overline D = 1$. Let also ${\bf p}i\inftyn {\rightm \hbox{I\kern-.2em\hbox{H}}}_m$. Then $$ {\bf p}i \circ (D \overline D) = {\bf p}i \circ ((\det D)^{\frac 1 d} \tildeldeilde D) = (\det D)^{\frac m d} {\bf p}i \circ \tildeldeilde D, $$ where $ \tildeldeilde D := (\det D)^{- \frac 1 d} D\overline D $ satisfies $\det \tildeldeilde D = \det \overline D=1$ and is uniquely determined by $\overline D$.
According to \inftyref{KD} we therefore have \mathbfegin{equation}gin{eqnarray*} K_I({\bf p}i \circ D) &=& \inftynf_{\substack{\det \overline D = 1\\ \overline D \geq 0}} \|{\bf p}i\circ (D \overline D) - \inftynterp({\bf p}i\circ (D \overline D))\|_{L_p({\mathbb I}^d)}\\ &=& (\det D)^{\frac m d} \inftynf_{\substack{\det \tildeldeilde D = 1\\ \tildeldeilde D \geq 0}} \|{\bf p}i\circ \tildeldeilde D - \inftynterp({\bf p}i\circ \tildeldeilde D)\|_{L_p({\mathbb I}^d)}\\ &=& (\det D)^{\frac m d} K_I({\bf p}i), {\bf v}arepsilonnd{eqnarray*} which concludes the proof in the case where $D$ has positive diagonal coefficients.\\ Let us now assume that $D$ is a diagonal matrix with non-negative diagonal coefficients and such that $\det(D) = 0$. Let $D'$ be a diagonal matrix with positive diagonal coefficients, and such that $D=DD'$ and $\det D' = 2$. We obtain $$ K_I({\bf p}i \circ D) = K_I({\bf p}i \circ (DD')) = 2^{\frac m d} K_I({\bf p}i \circ D), $$ which implies that $K_I({\bf p}i \circ D) = 0$ and concludes the proof. $\diamond$\\ The next proposition shows that the exponent $p$ used for measuring the approximation error plays a rather minor role. By $K_p$ we denote the error function associated with the exponent $p$. \mathbfegin{equation}gin{prop} \leftabel{propEquiv} There exists a constant $c>0$ such that for all $1\lefteq p_1\lefteq p_2\lefteq \inftynfty$ we have on ${\rightm \hbox{I\kern-.2em\hbox{H}}}_m$ $$ c K_\inftynfty\lefteq K_{p_1} \lefteq K_{p_2} \lefteq K_{\inftynfty}. $$ {\bf v}arepsilonnd{prop} {\noindent \mathbff Proof: } For any function $f\inftyn V = C^0({\mathbb I}^d)$ and for any $1\lefteq p_1\lefteq p_2\lefteq \inftynfty$ by a standard convexity argument we obtain that $$ \|f\|_{L_1({\mathbb I}^d)} \lefteq \|f\|_{L_{p_1}({\mathbb I}^d)} \lefteq \|f\|_{L_{p_2}({\mathbb I}^d)} \lefteq \|f\|_{L_{\inftynfty}({\mathbb I}^d)}. 
$$ Using \inftyref{KD}, it follows that $$ K_1\lefteq K_{p_1}\lefteq K_{p_2}\lefteq K_\inftynfty $$ on ${\rightm \hbox{I\kern-.2em\hbox{H}}}_m$. Furthermore, the following semi-norms on ${\rightm \hbox{I\kern-.2em\hbox{H}}}_m$ $$ |{\bf p}i|_1 := \|{\bf p}i-\inftynterp {\bf p}i\|_{L_1({\mathbb I}^d)} \tildeldeext{ and } |{\bf p}i|_\inftynfty := \|{\bf p}i-\inftynterp {\bf p}i\|_{L_\inftynfty({\mathbb I}^d)} $$ vanish precisely on the same subspace of ${\rightm \hbox{I\kern-.2em\hbox{H}}}_m$, namely $V_{\inftynterp} \cap {\rightm \hbox{I\kern-.2em\hbox{H}}}_m = \{{\bf p}i \inftyn {\rightm \hbox{I\kern-.2em\hbox{H}}}_m : \; {\bf p}i = \inftynterp {\bf p}i\}$. Since ${\rightm \hbox{I\kern-.2em\hbox{H}}}_m$ has finite dimension, it follows that they are equivalent. Hence, there exists a constant $c>0$ such that $c|\cdot |_\inftynfty\lefteq |\cdot |_1$ on ${\rightm \hbox{I\kern-.2em\hbox{H}}}_m$. Using \inftyref{KD}, it follows that $cK_\inftynfty\lefteq K_1$, which concludes the proof. $\diamond$\\ \subsection{Desirable properties of the projection operator} \leftabel{subsecproperties} The examples of projection operators presented in \S\rightef{exampleI} share some important properties which allow us to obtain the explicit expression of the error function $K_I$. These properties are defined below and called $H_{\bf p}m$, $H_\sigma$, $H_*$ or $H_{**}$. They are satisfied when the operator $\inftynterp$ is the interpolation at equi-spaced points (see \inftyref{interpEqui}), at Tchebychev points (see \inftyref{interpTche}), and typically for other interpolation point sets of practical interest. They are also satisfied when $\inftynterp$ is the $L_2({\mathbb I}^d)$ orthogonal projection onto ${\rightm \hbox{I\kern-.2em\hbox{P}}}_k^*$ or ${\rightm \hbox{I\kern-.2em\hbox{P}}}_k^{**}$ (Definition \rightef{L2Proj}). The first property reflects the fact that a coordinate $x_i$ on ${\mathbb I}^d$ can be changed to $-x_i$, independently of the projection process.
\mathbfegin{equation}gin{definition}[$H_{\bf p}m$ hypothesis] We say that the interpolation operator $\inftynterp$ satisfies the $H_{\bf p}m$ hypothesis if for any diagonal matrix $D$ with entries in ${\bf p}m 1$ we have for all $f\inftyn V$ $$ \inftynterp(f\circ D) = \inftynterp(f) \circ D. $$ {\bf v}arepsilonnd{definition} The next property implies that the different coordinates $x_1, \cdots, x_d$ on ${\mathbb I}^d$ play symmetrical roles with respect to the projection operator. \mathbfegin{equation}gin{definition}[$H_\sigma$ hypothesis] We say that $\inftynterp$ satisfies the $H_\sigma$ hypothesis if for any permutation matrix $M_\sigma$, i.e.\ $(M_\sigma)_{ij} := \delta_{i \sigma(j)}$ for some permutation $\sigma$ of $\{1,\cdots, d\}$, we have for all $f\inftyn V$ $$ \inftynterp (f\circ M_\sigma) = \inftynterp( f) \circ M_\sigma. $$ {\bf v}arepsilonnd{definition} According to \inftyref{defk}, the projection operator $\inftynterp$ reproduces the space of polynomials ${\rightm \hbox{I\kern-.2em\hbox{P}}}_k$. However, in many situations the space $V_{\inftynterp}$ of functions reproduced by $\inftynterp$ is larger than ${\rightm \hbox{I\kern-.2em\hbox{P}}}_k$. In particular $V_{\inftynterp} = {\rightm \hbox{I\kern-.2em\hbox{P}}}_k^{**}$ when $\inftynterp$ is the interpolation on equi-spaced or Tchebychev points, and $V_{\inftynterp} = {\rightm \hbox{I\kern-.2em\hbox{P}}}_k$ (resp.\ ${\rightm \hbox{I\kern-.2em\hbox{P}}}_k^*$, ${\rightm \hbox{I\kern-.2em\hbox{P}}}_k^{**}$) when $\inftynterp$ is the $L_2({\mathbb I}^d)$ orthogonal projection onto ${\rightm \hbox{I\kern-.2em\hbox{P}}}_k$ (resp.\ ${\rightm \hbox{I\kern-.2em\hbox{P}}}_k^*$, ${\rightm \hbox{I\kern-.2em\hbox{P}}}_k^{**}$). It is particularly useful to know whether the projection operator $\inftynterp$ reproduces the elements of ${\rightm \hbox{I\kern-.2em\hbox{P}}}_k^*$, and we therefore give a name to this property. Note that it clearly does not hold for the $L_2({\mathbb I}^d)$ orthogonal projection onto ${\rightm \hbox{I\kern-.2em\hbox{P}}}_k$.
\mathbfegin{equation}gin{definition}[$H_*$ hypothesis] The following inclusion holds: $$ {\rightm \hbox{I\kern-.2em\hbox{P}}}_k^* \subset V_{\inftynterp}. $$ {\bf v}arepsilonnd{definition} Conversely, it is useful to know that some polynomials, and in particular the pure powers $x_i^m$, are not reproduced by $\inftynterp$. \mathbfegin{equation}gin{definition}[$H_{**}$ hypothesis] $$ \tildeldeext{If } \ \sum_{1\lefteq i\lefteq d} \leftambda_i x_i^m \inftyn V_{\inftynterp} \ \tildeldeext{ then } \ (\leftambda_1, \cdots, \leftambda_d) = (0, \cdots , 0). $$ {\bf v}arepsilonnd{definition} This condition obviously holds if $\inftynterp(f)\inftyn {\rightm \hbox{I\kern-.2em\hbox{P}}}_k^{**}$ (polynomials of degree $\lefteq k$ in each variable) for all $f$. Hence, it holds for all the examples of projection operators given in the previous subsection \S\rightef{exampleI}. \subsection{Explicit formulas} \leftabel{subsecExact} In this section we provide the explicit expression for $K_I$ when some of the hypotheses $H_{\bf p}m$, $H_\sigma$, $H_*$ or $H_{**}$ hold. Let ${\bf p}i\inftyn {\rightm \hbox{I\kern-.2em\hbox{H}}}_m$ and let $\leftambda_i$ be the corresponding coefficient of $X_i^m$ in ${\bf p}i$, for all $1\lefteq i\lefteq d$. We define $$ K_*({\bf p}i) := \sqrt[d]{{\bf p}rod_{1\lefteq i\lefteq d} |\leftambda_i|} $$ and $$ s({\bf p}i) := \#\{1\lefteq i\lefteq d : \; \leftambda_i >0\}. $$ If $\frac{d^m f(x)}{m!}$ is identified, via \inftyref{dmfHm}, with an element of ${\rightm \hbox{I\kern-.2em\hbox{H}}}_m$, then one has \mathbfegin{equation} \leftabel{Kdmf} K_*\lefteft(\frac{d^m f(x)}{m!}\rightight) = \frac 1 {m!} \sqrt[d]{\lefteft|{\bf p}rod_{1\lefteq i\lefteq d} \frac{{\bf p}artial^m f}{{\bf p}artial x_i^m}(x)\rightight|}.
{\bf v}arepsilonnd{equation} \mathbfegin{equation}gin{prop} \leftabel{propOdd} If $m$ is odd and if $H_{\bf p}m$, $H_\sigma$ and $H_*$ hold, then $$ K_p({\bf p}i) = C(p) K_*({\bf p}i), $$ where $$ C(p) := \lefteft\|\sum_{1\lefteq i\lefteq d} X_i^m - \inftynterp \lefteft(\sum_{1\lefteq i\lefteq d} X_i^m\rightight)\rightight\|_{L_p({\mathbb I}^d)}>0. $$ {\bf v}arepsilonnd{prop} \mathbfegin{equation}gin{prop} \leftabel{propEven} If $m$ is even and if $H_\sigma$, $H_*$ and $H_{**}$ hold then $$ K_p({\bf p}i) = C(p,s({\bf p}i)) \, K_*({\bf p}i). $$ Furthermore, \mathbfegin{equation} \leftabel{eqEven} C(p,0) = C(p,d) = \lefteft\|\sum_{1\lefteq i\lefteq d} X_i^m - \inftynterp \lefteft(\sum_{1\lefteq i\lefteq d} X_i^m\rightight)\rightight\|_{L_p({\mathbb I}^d)}>0. {\bf v}arepsilonnd{equation} Other constants $C(p,s)$ are positive and obey $C(p,s) = C(p,d-s)$.\\ {\bf v}arepsilonnd{prop} \noindent Next we turn to the proofs of Propositions \rightef{propOdd} and \rightef{propEven}. {\bf p}aragraph{Proof of Proposition \rightef{propOdd}} Let ${\bf p}i\inftyn {\rightm \hbox{I\kern-.2em\hbox{H}}}_m$ and let $\leftambda_i$ be the coefficient of $X_i^m$ in ${\bf p}i$. Denote by $$ {\bf p}i_* := \sum_{ 1\lefteq i\lefteq d} \leftambda_i X_i^m $$ so that ${\bf p}i-{\bf p}i_* \inftyn {\rightm \hbox{I\kern-.2em\hbox{P}}}_k^*$ and, more generally, ${\bf p}i \circ D - {\bf p}i_* \circ D \inftyn {\rightm \hbox{I\kern-.2em\hbox{P}}}_k^*$ for any diagonal matrix $D$. The hypothesis $H_*$ states that the projection operator $\inftynterp$ reproduces the elements of ${\rightm \hbox{I\kern-.2em\hbox{P}}}_k^*$, and therefore $$ {\bf p}i\circ D - \inftynterp ({\bf p}i\circ D) = {\bf p}i_*\circ D - \inftynterp ({\bf p}i_*\circ D). $$ Hence, $ K_I({\bf p}i) = K_I({\bf p}i_*) $ according to \inftyref{KD}. If there exists $i_0$, $1\lefteq i_0\lefteq d$, such that $\leftambda_{i_0} = 0$, then we denote by $D$ the diagonal matrix of entries $D_{ii} = 1$ if $i\neq i_0$ and $0$ if $i=i_0$. 
Applying Proposition \rightef{propInv} we find $$ K_I({\bf p}i) = K_I({\bf p}i_*) = K_I({\bf p}i_*\circ D) = (\det D)^{\frac m d} K_I({\bf p}i_*) = 0, $$ which proves the claim in this case. We now assume that all the coefficients $\leftambda_i$, $1\lefteq i\lefteq d$, are different from $0$, and we denote by ${\bf v}arepsilon_i$ the sign of $\leftambda_i$. Applying Proposition \rightef{propInv} to the diagonal matrix $D$ with entries $D_{ii} = |\leftambda_i|^{\frac 1 m}$ we find that $$ K_I({\bf p}i) = K_I({\bf p}i_*) = (\det D)^{\frac m d} K_I({\bf p}i_*\circ D^{-1}) = K_*({\bf p}i) \, K_I\lefteft(\sum_{1\lefteq i\lefteq d} {\bf v}arepsilon_i X_i^m\rightight). $$ Using the $H_{\bf p}m$ hypothesis with the diagonal matrix $D$ of entries $D_{ii} = {\bf v}arepsilon_i$, and recalling that $m$ is odd, we find that $$ K_I\lefteft(\sum_{1\lefteq i\lefteq d} {\bf v}arepsilon_i X_i^m\rightight) = K_I\lefteft(\sum_{1\lefteq i\lefteq d} X_i^m\rightight). $$ We now define the functions $$ g_i := X_i^m - \inftynterp(X_i^m) \tildeldeext{ for } 1\lefteq i \lefteq d. $$ It follows from \inftyref{KD} that $$ K_I\lefteft(\sum_{1\lefteq i\lefteq d} X_i^m\rightight) = \inftynf_{{\bf p}rod_{1\lefteq i\lefteq d} a_i = 1} \lefteft\|\sum_{1\lefteq i\lefteq d} a_i g_i\rightight\|_{L_p({\mathbb I}^d)}, $$ where the infimum is taken over all $d$-vectors of positive reals of product $1$. Let us consider such a $d$-vector $(a_1, \cdots , a_d)$, and a permutation $\sigma$ of the set $\{1, \cdots, d\}$. The $H_\sigma$ hypothesis implies that the quantity $$ \lefteft\|\sum_{1\lefteq i\lefteq d} a_{\sigma(i)} g_i\rightight\|_{L_p({\mathbb I}^d)} $$ is independent of $\sigma$.
Hence, summing over all permutations, we obtain \mathbfegin{equation} \leftabel{sumPerm} \lefteft\|\sum_{1\lefteq i\lefteq d} a_i g_i\rightight\|_{L_p({\mathbb I}^d)} = \frac 1 {d!} \sum_{\sigma} \lefteft\|\sum_{1\lefteq i\lefteq d} a_{\sigma(i)} g_i\rightight\|_{L_p({\mathbb I}^d)} \geq \frac 1 d \lefteft\|\sum_{1\lefteq i\lefteq d} g_i\rightight\|_{L_p({\mathbb I}^d)} \sum_{1\lefteq i\lefteq d} a_i. {\bf v}arepsilonnd{equation} The right-hand side is minimal when $ a_1 = \cdots = a_d = 1 $, which shows that $$ \lefteft\|\sum_{1\lefteq i\lefteq d} a_i g_i\rightight\|_{L_p({\mathbb I}^d)} \geq \lefteft\|\sum_{1\lefteq i\lefteq d} g_i\rightight\|_{L_p({\mathbb I}^d)} = C(p) $$ with equality when $a_i=1$ for all $i$. Note as a corollary that \mathbfegin{equation} \leftabel{existsRPos} K_I({\bf p}i_{\bf v}arepsilon) = \|{\bf p}i_{\bf v}arepsilon - \inftynterp({\bf p}i_{\bf v}arepsilon)\|_{L_p({\mathbb I}^d)} = C(p) \ \tildeldeext{ where } \ {\bf p}i_{\bf v}arepsilon = \sum_{1\lefteq i\lefteq d} {\bf v}arepsilon_i X_i^m. {\bf v}arepsilonnd{equation} It remains to prove that $C(p)>0$. Using the hypothesis $H_{\bf p}m$, we find that for all $\mu_i \inftyn\{{\bf p}m 1\}$ we have $$ \lefteft\|\sum_{1\lefteq i\lefteq d}\mu_i g_i\rightight\|_{L_p({\mathbb I}^d)} = C(p). $$ In particular, for any $1\lefteq i_0\lefteq d$ one has $$ 2\|g_{i_0}\|_{L_p({\mathbb I}^d)} \lefteq \lefteft\|\sum_{1\lefteq i\lefteq d} g_i\rightight\|_{L_p({\mathbb I}^d)} + \lefteft\|2g_{i_0} - \sum_{1\lefteq i\lefteq d} g_i\rightight\|_{L_p({\mathbb I}^d)} \lefteq 2 C(p). $$ If $C(p) = 0$, it follows that $g_{i_0} = 0$ and therefore that $X_{i_0}^m = \inftynterp(X_{i_0}^m)$, for any $1\lefteq i_0\lefteq d$. Using the assumption $H_*$, we find that the projection operator $\inftynterp$ reproduces all the polynomials of degree $m= k+1$, which contradicts the definition \inftyref{defk} of the integer $k$. 
$\diamond$\\ {\bf p}aragraph{Proof of Proposition \rightef{propEven}} We define $\leftambda_i$, ${\bf p}i_*$ and ${\bf v}arepsilon_i\inftyn\{{\bf p}m 1\}$ as before and we find, using similar reasoning, that $$ K_I({\bf p}i) = K_*({\bf p}i) K_I\lefteft(\sum_{1\lefteq i\lefteq d} {\bf v}arepsilon_i X_i^m\rightight). $$ \noindent For $0\lefteq s\lefteq d$ we define $$ C(p, s) := K_I\lefteft(\sum_{1\lefteq i\lefteq s} X_i^m - \sum_{s+1\lefteq i\lefteq d} X_i^m\rightight). $$ From the hypothesis $H_\sigma$ it follows that $K_I({\bf p}i) = K_*({\bf p}i) C(p,s({\bf p}i))$. Using again $H_\sigma$ and the fact that $K_I({\bf p}i) = K_I(-{\bf p}i)$ for all ${\bf p}i \inftyn {\rightm \hbox{I\kern-.2em\hbox{H}}}_m$, we find that $$ C(p,s) = K_I\lefteft(\sum_{1\lefteq i\lefteq s} X_i^m - \sum_{s+1\lefteq i\lefteq d} X_i^m\rightight) = K_I\lefteft(-\lefteft(\sum_{1\lefteq i\lefteq d-s} X_i^m - \sum_{d-s+1\lefteq i\lefteq d} X_i^m\rightight)\rightight) = C(p,d-s). $$ We define $g_i := X_i^m - \inftynterp(X_i^m)$, as in the proof of Proposition \rightef{propOdd}. We obtain the expression for $C(p,0)$ by summing over all permutations as in \inftyref{sumPerm}: $$ C(p,0) = \lefteft\|\sum_{1\lefteq i\lefteq d} g_i\rightight\|_{L_p({\mathbb I}^d)}. $$ This concludes the proof of the first part of Proposition \rightef{propEven}. We now prove that $C(p,s)>0$ for all $1\lefteq p\lefteq \inftynfty$ and all $s\inftyn \{0, \cdots, d\}$. To this end we define the following quantity on $\mathbb R^d$ $$ \|a\|_K := \lefteft\|\sum_{1\lefteq i\lefteq d}a_i g_i\rightight\|_{L_p({\mathbb I}^d)}. $$ Note that $\|a\|_K = 0$ if and only if $$ \sum_{1\lefteq i\lefteq d} a_i X_i^m = \sum_{1\lefteq i\lefteq d} a_i \inftynterp(X_i^m), $$ and the hypothesis $H_{**}$ precisely states that this equality occurs if and only if $a_i = 0$ for all $1\lefteq i\lefteq d$. Hence, $\|\cdot \|_K$ is a norm on $\mathbb R^d$.
Furthermore, let $$ E_s := \lefteft\{a\inftyn \mathbb R_+^s\tildeldeimes \mathbb R_-^{d-s} : \; {\bf p}rod_{1\lefteq i \lefteq d} |a_i| = 1\rightight\}. $$ Then $$ C(p,s) = \inftynf_{a\inftyn E_s} \|a\|_K. $$ Since $E_s$ is a closed subset of $\mathbb R^d$ which does not contain the origin, this infimum is attained. It follows that $C(p,s)>0$, and that there exists a rectangle $R_{\bf v}arepsilon$ of unit volume such that \mathbfegin{equation} \leftabel{existsREven} K_I({\bf p}i_{\bf v}arepsilon) = \|{\bf p}i_{\bf v}arepsilon - \inftynterp {\bf p}i_{\bf v}arepsilon\|_{L_p(R_{\bf v}arepsilon)} = C(p, s({\bf p}i_{\bf v}arepsilon)) \ \tildeldeext{ where } \ {\bf p}i_{\bf v}arepsilon = \sum_{1\lefteq i\lefteq d} {\bf v}arepsilon_i X_i^m. {\bf v}arepsilonnd{equation} $\diamond$\\ \section{Proof of the approximation results} \leftabel{proofTh} In this section, let the block $R_0$, the integer $m$, the function $f\inftyn C^m(R_0)$ and the exponent $p$ be fixed. We conduct our proofs for $1\lefteq p<\inftynfty$ and provide comments on how to adjust our arguments for the case $p=\inftynfty$. For each $x\inftyn R_0$ by $\mu_x$ we denote the $m$-th degree Taylor polynomial of $f$ at the point $x$, \mathbfegin{equation} \leftabel{defmux} \mu_x = \mu_x(X) := \sum_{|\mathbfalpha| \lefteq m}\frac{{\bf p}artial^{|\mathbfalpha|} f}{{\bf p}artial x^\mathbfalpha}(x) \, \frac{(X-x)^\mathbfalpha}{\mathbfalpha!}, {\bf v}arepsilonnd{equation} and we define ${\bf p}i_x$ to be the homogeneous component of degree $m$ in $\mu_x$, \mathbfegin{equation} \leftabel{defpix} {\bf p}i_x = {\bf p}i_x(X) := \sum_{|\mathbfalpha| = m}\frac{{\bf p}artial^m f}{{\bf p}artial x^\mathbfalpha}(x) \, \frac{X^\mathbfalpha}{\mathbfalpha!}. {\bf v}arepsilonnd{equation} Since ${\bf p}i_x$ and $\mu_x$ are polynomials of degree $m$, their $m$-th derivative is constant, and clearly $d^m {\bf p}i_x=d^m \mu_x = d^m f(x)$.
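For concreteness, the polynomials $\mu_x$ of \inftyref{defmux} and ${\bf p}i_x$ of \inftyref{defpix} can be sketched numerically in the case $m=2$, $d=2$ (the names \texttt{mu\_x}, \texttt{pi\_x} and the sample data below are ours, purely illustrative); the sketch checks that $\mu_x$ reproduces a quadratic $f$ exactly, and that $\mu_x - {\bf p}i_x$ has degree at most $m-1$.

```python
def mu_x(f_val, grad, hess, x):
    # Degree-m Taylor polynomial mu_x of (defmux) for m = 2, d = 2:
    # mu_x(X) = f(x) + <grad, X - x> + (1/2) (X - x)^T hess (X - x)
    def poly(X):
        h = [X[0] - x[0], X[1] - x[1]]
        quad = 0.5 * (hess[0][0]*h[0]*h[0] + 2*hess[0][1]*h[0]*h[1]
                      + hess[1][1]*h[1]*h[1])
        return f_val + grad[0]*h[0] + grad[1]*h[1] + quad
    return poly

def pi_x(hess):
    # Homogeneous degree-m part pi_x of (defpix), m = 2, d = 2:
    # pi_x(X) = (1/2) X^T hess X
    def poly(X):
        return 0.5 * (hess[0][0]*X[0]*X[0] + 2*hess[0][1]*X[0]*X[1]
                      + hess[1][1]*X[1]*X[1])
    return poly
```

Since the quadratic parts of $\mu_x$ and ${\bf p}i_x$ coincide, their difference is affine, which is the observation used just below to obtain \inftyref{muzPiz}.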
In particular, for any $x\inftyn R_0$ the polynomial $\mu_x - {\bf p}i_x$ belongs to ${\rightm \hbox{I\kern-.2em\hbox{P}}}_k$ (recall that $k=m-1$) and is therefore reproduced by the projection operator $\inftynterp$. It follows that for any $x\inftyn R_0$ and any block $R$ \mathbfegin{equation} \leftabel{muzPiz} {\bf p}i_x- \inftynterp_R {\bf p}i_x = \mu_x -\inftynterp_R \mu_x. {\bf v}arepsilonnd{equation} In addition, we introduce a measure $\rightho$ of the degeneracy of a block $R$: $$ \rightho(R) := \frac{\diam(R)^d}{|R|}. $$ Given any function $g\inftyn C^m(R)$ and any $x\inftyn R$ we can define, similarly to \inftyref{defpix}, a polynomial $\hat {\bf p}i_x\inftyn {\rightm \hbox{I\kern-.2em\hbox{H}}}_m$ associated with $g$ at $x$. We then define \mathbfegin{equation} \leftabel{normdmg} \|d^m g\|_{L_\inftynfty(R)}:=\sup_{x\inftyn R} \lefteft(\sup_{|u| = 1} |\hat {\bf p}i_x(u)|\rightight). {\bf v}arepsilonnd{equation} \mathbfegin{equation}gin{prop} There exists a constant $C = C(m,d)>0$ such that for any block $R$ and any function $g\inftyn C^m(R)$ \mathbfegin{equation} \leftabel{localIso} \|g-\inftynterp_R g\|_{L_p(R)} \lefteq C |R|^{\frac 1 \tildeldeau} \rightho(R)^{\frac m d} \|d^m g\|_{L_\inftynfty(R)}. {\bf v}arepsilonnd{equation} {\bf v}arepsilonnd{prop} {\noindent \mathbff Proof: } Let $x_0\inftyn R$ and let $g_0$ be the Taylor polynomial of $g$ of degree $m-1$ at the point $x_0$, defined by $$ g_0(X) := \sum_{|\mathbfalpha| \lefteq m-1}\frac{{\bf p}artial^\mathbfalpha g(x_0)}{{\bf p}artial x^\mathbfalpha}\frac{(X-x_0)^\mathbfalpha}{\mathbfalpha!}. $$ Let $x\inftyn R$ and let $x(t) = x_0+ t (x-x_0)$. By the integral form of the Taylor remainder, $$ g(x) = g_0(x)+ m\inftynt_{t=0}^1 \frac{d^m g_{x(t)}(x-x_0)}{m!} (1-t)^{m-1} dt. $$ Hence, \mathbfegin{equation} \leftabel{ineqGG0} |g(x) - g_0(x)|\lefteq m\inftynt_{t=0}^1 \|d^m g\|_{L_\inftynfty(R)} |x-x_0|^m (1-t)^{m-1}dt \lefteq \|d^m g\|_{L_\inftynfty(R)} \diam(R)^m.
{\bf v}arepsilonnd{equation} Since $g_0$ is a polynomial of degree at most $m-1$, we have $g_0 = \inftynterp g_0$. Hence, \mathbfegin{equation}gin{eqnarray*} \|g-\inftynterp_R g\|_{L_p(R)} &\lefteq & |R|^{\frac 1 p} \|g-\inftynterp_R g\|_{L_\inftynfty(R)}\\ & =& |R|^{\frac 1 p} \|(g-g_0)-\inftynterp_R (g-g_0)\|_{L_\inftynfty(R)}\\ &\lefteq & (1+C_{\inftynterp}) |R|^{\frac 1 p} \|g-g_0\|_{L_\inftynfty(R)}, {\bf v}arepsilonnd{eqnarray*} where $C_{\inftynterp}$ is the operator norm of $\inftynterp : V\tildeldeo V$. Combining this estimate with \inftyref{ineqGG0}, we obtain \inftyref{localIso}. $\diamond$\\ \subsection{Proof of Theorem \rightef{thLower} (Lower bound)} The following lemma allows us to bound the interpolation error of $f$ on the block $R$ from below. \mathbfegin{equation}gin{lemma} \leftabel{lemmaLower} For any block $R\subset R_0$ and $x\inftyn R$ we have $$ \|f- \inftynterp_R f\|_{L_p(R)} \geq |R|^{\frac 1 \tildeldeau} \lefteft(K_I({\bf p}i_x) - \omega(\diam R) \rightho(R)^{\frac m d}\rightight), $$ where the function $\omega$ is positive, depends only on $f$ and $m$, and satisfies $\omega(\delta) \tildeldeo 0$ as $\delta\tildeldeo 0$. {\bf v}arepsilonnd{lemma} {\noindent \mathbff Proof: } Let $h:= f-\mu_x$, where $\mu_x$ is defined in \inftyref{defmux}. Using \inftyref{muzPiz}, we obtain \mathbfegin{equation}gin{eqnarray*} \|f- \inftynterp_R f\|_{L_p(R)} & \geq & \|{\bf p}i_x - \inftynterp_R {\bf p}i_x\|_{L_p(R)} - \|h - \inftynterp_R h\|_{L_p(R)}\\ & \geq & |R|^{\frac 1 \tildeldeau} K_I({\bf p}i_x) - \|h - \inftynterp_R h\|_{L_p(R)}, {\bf v}arepsilonnd{eqnarray*} and according to \inftyref{localIso} we have $$ \|h - \inftynterp_R h\|_{L_p(R)} \lefteq C_0 |R|^{\frac 1 \tildeldeau} \rightho(R)^{\frac m d} \|d^m h\|_{L_\inftynfty(R)}. $$ Observe that $$ \|d^m h\|_{L_\inftynfty(R)} = \|d^m f - d^m{\bf p}i_x\|_{L_\inftynfty(R)} = \|d^m f - d^m f(x)\|_{L_\inftynfty(R)}. $$ We introduce the modulus of continuity $\omega_*$ of the $m$-th derivatives of $f$.
\mathbfegin{equation} \leftabel{defOmega} \omega_*(r) := \sup_{\substack{x_1,x_2\inftyn R_0 :\\ |x_1 - x_2|\lefteq r}} \|d^m f(x_1)-d^m f(x_2)\|= \sup_{\substack{x_1,x_2\inftyn R_0 :\\ |x_1-x_2|\lefteq r}} \lefteft( \sup_{|u|\lefteq 1} |{\bf p}i_{x_1}(u) - {\bf p}i_{x_2} (u)|\rightight). {\bf v}arepsilonnd{equation} By setting $\omega = C_0\, \omega_*$ we conclude the proof of this lemma. $\diamond$\\ We now consider an admissible sequence of block partitions $({\cal R}_N)_{N\geq 0}$. For all $N\geq 0$, $R\inftyn {\cal R}_N$ and $x\inftyn R$, we define $$ {\bf p}hi_N(x) := |R| {\bf q}quad \hbox{and} {\bf q}quad {\bf p}si_N(x) := \lefteft(K_I({\bf p}i_x) - \omega(\diam(R)) \rightho(R)^{\frac m d} \rightight)_+, $$ where $\leftambda_+ := \max\{\leftambda,0\}$. We now apply H\"older's inequality $ \inftynt_{R_0} f_1 f_2 \lefteq \|f_1\|_{L_{p_1}(R_0)}\|f_2\|_{L_{p_2}(R_0)} $ with the functions $$ f_1 = {\bf p}hi_N^{\frac {m \tildeldeau} d} {\bf p}si_N^\tildeldeau \ \tildeldeext{ and } f_2 = {\bf p}hi_N^{-\frac {m \tildeldeau} d} $$ and the exponents $ p_1 = \frac p \tildeldeau \ \tildeldeext{ and } \ p_2 = \frac d {m\tildeldeau}. $ Note that $\frac 1 {p_1}+ \frac 1 {p_2} = \tildeldeau \lefteft(\frac 1 p + \frac m d\rightight) = 1$. Hence, \mathbfegin{equation} \leftabel{holderPsi} \inftynt_{R_0} {\bf p}si_N^\tildeldeau \lefteq \lefteft(\inftynt_{R_0} {\bf p}hi_N^{\frac{m p} d} {\bf p}si_N^{p}\rightight)^{\frac \tildeldeau p} \lefteft(\inftynt_{R_0} {\bf p}hi_N^{-1}\rightight)^{\frac {m\tildeldeau} d}. {\bf v}arepsilonnd{equation} \noindent Note that $\inftynt_{R_0} {\bf p}hi_N^{-1} = \# ({\cal R}_N)\lefteq N$. Furthermore, if $R\inftyn {\cal R}_N$ and $x\inftyn R$ then according to Lemma \rightef{lemmaLower} $$ {\bf p}hi_N(x)^{\frac m d} {\bf p}si_N(x) =|R|^{\frac 1 \tildeldeau-\frac 1 p} {\bf p}si_N(x) \lefteq|R|^{-\frac 1 p} \|f-\inftynterp_R f\|_{L_p(R)}.
$$ Hence, \mathbfegin{equation} \leftabel{intphipsi} \lefteft[\inftynt_{R_0} {\bf p}hi_N^{\frac{m p} d} {\bf p}si_N^{p}\rightight]^{\frac 1 p} \lefteq \lefteft[\sum_{R\inftyn {\cal R}_N} \frac 1 {|R|} \inftynt_R \|f-\inftynterp_R f\|_{L_p(R)}^p\rightight]^{\frac 1 p}= \|f-\inftynterp_{{\cal R}_N} f\|_{L_p(R_0)}. {\bf v}arepsilonnd{equation} Inequality \inftyref{holderPsi} therefore leads to \mathbfegin{equation} \leftabel{upperPsi} \|{\bf p}si_N\|_{L_\tildeldeau(R_0)} \lefteq \|f- \inftynterp_{{\cal R}_N} f\|_{L_p(R_0)} N^{\frac m d}. {\bf v}arepsilonnd{equation} \noindent Since the sequence $({\cal R}_N)_{N\geq 0}$ is admissible, there exists a constant $C_A>0$ such that for all $N$ and all $R\inftyn {\cal R}_N$ we have $\diam(R)\lefteq C_AN^{-\frac 1 d}$. We introduce a subset ${\cal R}'_N\subset {\cal R}_N$ which collects the most degenerate blocks $$ {\cal R}'_N = \{ R\inftyn {\cal R}_N : \; \rightho(R)\geq \omega(C_AN^{-\frac 1 d})^{-\frac{1} m}\}, $$ where $\omega$ is the function defined in Lemma \rightef{lemmaLower}. By $R'_N$ we denote the portion of $R_0$ covered by ${\cal R}'_N$. For all $x\inftyn R_0\setminus R'_N$ we obtain $$ {\bf p}si_N(x)\geq K_I({\bf p}i_x) -\omega(C_A N^{-\frac 1 d})^{1-\frac 1 d}. $$ We define ${\bf v}arepsilon_N := \omega(C_A N^{-\frac 1 d})^{1-\frac 1 d}$ and we notice that ${\bf v}arepsilon_N \tildeldeo 0$ as $N \tildeldeo \inftynfty$. Hence, $$ \mathbfegin{equation}gin{array}{ll} \|{\bf p}si_N\|_{L_\tildeldeau(R_0)}^\tildeldeau & \geq \lefteft \|\lefteft(K_I({\bf p}i_x) -{\bf v}arepsilon_N\rightight)_+\rightight\|_{L_\tildeldeau(R_0\setminus R'_N)}^\tildeldeau\\ & \geq \lefteft \|\lefteft(K_I({\bf p}i_x) -{\bf v}arepsilon_N\rightight)_+\rightight\|_{L_\tildeldeau(R_0)}^\tildeldeau -C^\tildeldeau |R'_N|, {\bf v}arepsilonnd{array} $$ where $C:=\max_{x\inftyn R_0}K_I({\bf p}i_x)$.
Next we observe that $|R'_N|\tildeldeo 0$ as $N\tildeldeo +\inftynfty$: indeed for all $R\inftyn {\cal R}'_N$ we have $$ |R| = \diam(R)^d \rightho(R)^{-1} \lefteq C_A^d N^{-1} \omega(C_A N^{-\frac 1 d})^{\frac 1 m}. $$ Since $\#({\cal R}'_N)\lefteq N$, we obtain $|R'_N|\lefteq C_A^d \omega(C_A N^{-\frac 1 d})^{\frac 1 m}$, and the right-hand side tends to $0$ as $N\tildeldeo \inftynfty$. We thus obtain $$ \leftiminf_{N\tildeldeo \inftynfty} \|{\bf p}si_N\|_{L_\tildeldeau(R_0)} \geq \leftim_{N\tildeldeo \inftynfty} \lefteft \|\lefteft(K_I({\bf p}i_x) -{\bf v}arepsilon_N\rightight)_+\rightight\|_{L_\tildeldeau(R_0)} = \|K_I({\bf p}i_x)\|_{L_\tildeldeau(R_0)}. $$ Combining this result with \inftyref{upperPsi}, we conclude the proof of the announced estimate. Note that this proof also works with the exponent $p = \inftynfty$ by changing $$ \lefteft(\inftynt_{R_0} {\bf p}hi_N^{\frac{m p} d} {\bf p}si_N^{p}\rightight)^{\frac \tildeldeau p} \ \tildeldeext{ into } \ \|{\bf p}hi_N^{\frac m d} {\bf p}si_N\|_{L_\inftynfty(R_0)}^\tildeldeau $$ in \inftyref{holderPsi} and performing the standard modification in \inftyref{intphipsi}. \mathbfegin{equation}gin{remark} As announced in Remark \rightef{remarkWeight}, this proof can be adapted to the weighted norm $\|\cdot\|_{L_p(R_0, \Omega)}$ associated to a positive weight function $\Omega\inftyn C^0(R_0)$ and defined in \inftyref{defWeight}. For that purpose let $r_N := \sup \{ \diam(R) : \; R \inftyn {\cal R}_N\}$ and let $$ \Omega_N(x) := \inftynf_{\substack{x'\inftyn R_0\\ |x-x'|\lefteq r_N}} \Omega(x'). $$ The sequence of functions $\Omega_N$ increases with $N$ and tends uniformly to $\Omega$ as $N\tildeldeo \inftynfty$. If $R\inftyn {\cal R}_N$ and $x\inftyn R$, then $$ \|f-\inftynterp_R f\|_{L_p(R,\Omega)} \geq \Omega_N(x) \|f-\inftynterp_R f\|_{L_p(R)}. $$ The main change in the proof is that the function ${\bf p}si_N$ should be replaced with ${\bf p}si'_N := \Omega_N {\bf p}si_N$. Other details are left to the reader. 
\end{remark} $\diamond$\\
\subsection{Proof of the upper estimates}
The proofs of Theorems \ref{thUpper} and \ref{thNoEps} are based on the actual construction of an asymptotically optimal sequence of block partitions. To that end we introduce the notion of a local block specification.
\begin{definition}{\bf (local block specification)}
\label{defBlockSpec}
A local block specification on a block $R_0$ is a (possibly discontinuous) map $x \mapsto R(x)$ which associates to each point $x\in R_0$ a block $R(x)$, in such a way that:
\begin{itemize}
\item the volume $|R(x)|$ is a positive continuous function of the variable $x \in R_0$;
\item the diameter is bounded: $\sup \{\diam(R(x)) : \; x \in R_0\}<\infty$.
\end{itemize}
\end{definition}
The following lemma shows that it is possible to build sequences of block partitions of $R_0$ adapted, in a certain sense, to a local block specification.
\begin{lemma}
\label{lemmaSeqBlock}
Let $R_0$ be a block in $\mathbb{R}^d$ and let $x\mapsto R(x)$ be a local block specification on $R_0$. Then there exists a sequence $({\cal P}_n)_{n\geq 1}$ of block partitions of $R_0$, ${\cal P}_n = {\cal P}_n^1 \cup {\cal P}_n^2$, satisfying the following properties.
\begin{itemize}
\item (The number of blocks in ${\cal P}_n$ is asymptotically controlled)
\begin{equation}
\label{limCardRn}
\lim_{n \to \infty} \frac{\#({\cal P}_n)}{n^{2d}} = \int_{R_0} |R(x)|^{-1} dx.
\end{equation}
\item (The elements of ${\cal P}_n^1$ follow the block specification) For each $R\in {\cal P}_n^1$ there exists $y\in R_0$ such that
\begin{equation}
\label{n2Ry}
R \text{ is a translate of } n^{-2} R(y), \text{ and } |x-y| \leq \frac{\diam(R_0)}{n} \text{ for all } x\in R.
\end{equation}
\item (The elements of ${\cal P}_n^2$ have small diameter)
\begin{equation}
\label{smallDiam}
\lim_{n \to \infty } \left( n^2 \sup_{R\in {\cal P}_n^2} \diam(R)\right) =0.
\end{equation}
\end{itemize}
\end{lemma}
{\noindent \bf Proof: } See Appendix. $\diamond$\\
We recall that the block $R_0$, the exponent $p$ and the function $f\in C^m(R_0)$ are fixed, and that at each point $x\in R_0$ the polynomial $\pi_x\in \mathbb{H}_m$ is defined by \eqref{defpix}. The sequence of block partitions described in the previous lemma is now used to obtain an asymptotic error estimate.
\begin{lemma}
\label{lemmaNn}
Let $x \mapsto R(x)$ be a local block specification such that for all $x\in R_0$
\begin{equation}
\label{unitError}
\|\pi_x -\interp_{R(x)}(\pi_x)\|_{L_p(R(x))} \leq 1.
\end{equation}
Let $({\cal P}_n)_{n\geq 1}$ be a sequence of block partitions satisfying the properties of Lemma \ref{lemmaSeqBlock}, and for all $N\geq 0$ let
$$
n(N) := \max\{ n\geq 1 : \; \#({\cal P}_n) \leq N\}.
$$
Then ${\cal R}_N := {\cal P}_{n(N)}$ is an admissible sequence of block partitions and
\begin{equation}
\label{limsupR}
\limsup_{N\to \infty} N^{\frac m d} \|f-\interp_{{\cal R}_N} f \|_{L_p(R_0)} \leq \left(\int_{R_0} |R(x)|^{-1} dx\right)^{\frac 1 \tau}.
\end{equation}
\end{lemma}
{\noindent \bf Proof: } Let $n \geq 0$ and let $R\in {\cal P}_n$. If $R\in {\cal P}_n^1$, let $y\in R_0$ be as in \eqref{n2Ry}.
Using \eqref{localIso} we find
\begin{eqnarray*}
\|f-\interp_R f\|_{L_p(R)} &\leq& \|\pi_y-\interp_R \pi_y\|_{L_p(R)} + \|(f-\pi_y)-\interp_R (f-\pi_y)\|_{L_p(R)} \\
&\leq & n^{-\frac {2d} \tau} \|\pi_y-\interp_{R(y)} \pi_y\|_{L_p(R(y))} + C |R|^{\frac 1 p} \diam(R)^m \|d^m f-d^m \pi_y\|_{L_\infty(R)}\\
&\leq & n^{-\frac {2d} \tau} + C n^{-\frac {2d} \tau} |R(y)|^{\frac 1 p} \diam(R(y))^m \|d^m f-d^m f(y)\|_{L_\infty(R)}\\
&\leq & n^{-\frac {2d} \tau} (1+C' \omega_*(n^{-1}\diam(R_0))),
\end{eqnarray*}
where $C' := C \sup_{y\in R_0} |R(y)|^{\frac 1 p} \diam(R(y))^m$, which is finite by Definition \ref{defBlockSpec}, and where $\omega_*$ denotes the modulus of continuity of the $m$-th derivatives of $f$, defined in \eqref{defOmega}. We now define, for all $n \geq 1$,
$$
\delta_n := n^2 \sup_{R \in {\cal P}_n^2} \diam(R).
$$
According to \eqref{smallDiam} one has $\delta_n \to 0$ as $n \to \infty$. If $R\in {\cal P}_n^2$, then $\diam(R)\leq n^{-2} \delta_n$ and therefore $|R|\leq \diam(R)^d\leq n^{-2d} \delta_n^d$. Using \eqref{localIso} again, and recalling that $\frac 1 \tau = \frac m d+ \frac 1 p$, we find
$$
\|f-\interp_R f\|_{L_p(R)} \leq C |R|^{\frac 1 p} \diam(R)^m \|d^m f\|_{L_\infty(R_0)} \leq C'' n^{-\frac {2d} \tau} \delta_n^{\frac d \tau},
$$
where $C'' = C \|d^m f\|_{L_\infty(R_0)}$. From the previous observations it follows that
$$
\|f-\interp_{{\cal P}_n} f\|_{L_p(R_0)} \leq \#({\cal P}_n)^{\frac 1 p} \max_{R\in {\cal P}_n} \|f-\interp_R f\|_{L_p(R)} \leq \#({\cal P}_n)^{\frac 1 p} n^{-\frac {2d} \tau} \max\{1+ C'\omega_*(n^{-1}\diam(R_0)), \, C'' \delta_n^{\frac d \tau}\}.
$$
Hence,
$$
\limsup_{n\to \infty} \#({\cal P}_n)^{-\frac 1 p} n^{\frac {2d} \tau}\|f-\interp_{{\cal P}_n} f\|_{L_p(R_0)} \leq 1.
$$
Combining the last equation with \eqref{limCardRn}, we obtain
$$
\limsup_{n\to \infty} \#({\cal P}_n)^{\frac m d} \|f-\interp_{{\cal P}_n} f\|_{L_p(R_0)} \leq \left(\int_{R_0} |R(x)|^{-1} dx\right)^{\frac 1 \tau}.
$$
The sequence of block partitions ${\cal R}_N := {\cal P}_{n(N)}$ clearly satisfies $\#({\cal R}_N)/N\to 1$ as $N \to \infty$ and therefore leads to the announced estimate \eqref{limsupR}. Furthermore, it follows from the boundedness of $\diam(R(x))$ on $R_0$ and from the properties of ${\cal P}_n$ described in Lemma \ref{lemmaSeqBlock} that
$$
\sup_{n\geq 1}\left( \#({\cal P}_n)^{\frac 1 d} \sup_{R \in {\cal P}_n} \diam(R) \right)< \infty,
$$
which implies that ${\cal R}_N$ is an admissible sequence of partitions. $\diamond$\\
We now choose adequate local block specifications in order to obtain the estimates announced in Theorems \ref{thUpper} and \ref{thNoEps}. For any $M\geq \diam({\mathbb I}^d) = \sqrt d$ we define the modified error function
\begin{equation}
\label{defKM}
K_M(\pi) := \inf_{\substack{|R| = 1,\\ \diam(R)\leq M}} \|\pi - \interp_R \pi\|_{L_p(R)},
\end{equation}
where the infimum is taken over blocks of unit volume and diameter smaller than $M$. It follows from a compactness argument that this infimum is attained and that $K_M$ is a continuous function on $\mathbb{H}_m$. Furthermore, for all $\pi\in \mathbb{H}_m$, $M\mapsto K_M(\pi)$ is a decreasing function of $M$ which tends to $K_I(\pi)$ as $M \to \infty$. For all $x\in R_0$ we denote by $R_M^*(x)$ a block which realises the infimum in $K_M(\pi_x)$.
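As a concrete illustration of the error functional behind \eqref{defKM}, the following sketch evaluates the interpolation error in dimension $d=1$, assuming (as for multilinear splines) that $\interp_R$ interpolates at the endpoints of the block; the function name and the midpoint quadrature are ours, not the paper's. In $d=1$ every unit-volume block is a unit interval and the error is translation invariant, so for $\pi(x)=\lambda x^2$ and $p=2$ the infimum is attained trivially and equals $|\lambda|\,\|x(1-x)\|_{L_2(0,1)}=|\lambda|/\sqrt{30}$.

```python
import numpy as np

def interp_error_Lp(lam, a, p=2, n=200000):
    """L_p error on [a, a+1] of interpolating pi(x) = lam*x^2
    linearly at the endpoints (1-d stand-in for interp_R)."""
    dx = 1.0 / n
    x = a + (np.arange(n) + 0.5) * dx          # midpoint quadrature nodes
    fa, fb = lam * a**2, lam * (a + 1.0)**2
    line = fa + (fb - fa) * (x - a)            # the linear interpolant
    return (np.sum(np.abs(lam * x**2 - line)**p) * dx)**(1.0 / p)

# The error depends only on the shape of the block, not on its position,
# so in d = 1 the infimum over unit-volume blocks is the error on any unit interval.
```

Translation invariance is what makes the infimum in \eqref{defKM} a finite-dimensional minimization over block shapes only.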
Hence,
$$
|R_M^*(x)| = 1, \quad \diam(R_M^*(x))\leq M, \quad \text{ and } \quad K_M(\pi_x) = \|\pi_x - \interp_{R_M^*(x)} \pi_x\|_{L_p(R_M^*(x))}.
$$
We define a local block specification on $R_0$ as follows:
\begin{equation}
\label{defRM}
R_M(x) := (K_M(\pi_x) + M^{-1})^{- \frac \tau d} R_M^*(x).
\end{equation}
We now observe that
$$
\|\pi_x - \interp_{R_M(x)} \pi_x\|_{L_p(R_M(x))} = K_M(\pi_x) (K_M(\pi_x)+ M^{-1})^{-1} \leq 1.
$$
Hence, according to Lemma \ref{lemmaNn}, there exists a sequence $({\cal R}_N^M)_{N\geq 1}$ of block partitions of $R_0$ such that
$$
\limsup_{N\to \infty} N^{\frac m d} \|f-\interp_{{\cal R}^M_N} f \|_{L_p(R_0)} \leq \|K_M(\pi_x)+M^{-1}\|_{L_\tau (R_0)}.
$$
Using our previous observations on the function $K_M$, we see that
$$
\lim_{M \to \infty} \|K_M(\pi_x)+M^{-1}\|_{L_\tau (R_0)} = \|K_I(\pi_x)\|_{L_\tau (R_0)}.
$$
Hence, given $\varepsilon >0$, we can choose $M(\varepsilon)$ large enough that
$$
\|K_{M(\varepsilon)}(\pi_x)+M(\varepsilon)^{-1}\|_{L_\tau (R_0)} \leq \|K_I(\pi_x)\|_{L_\tau (R_0)}+ \varepsilon,
$$
which concludes the proof of the estimate \eqref{upperEstimEps} of Theorem \ref{thUpper}. For each $N$ let $M=M(N)$ be such that
$$
N^{\frac m d} \|f-\interp_{{\cal R}^M_N} f \|_{L_p(R_0)} \leq \|K_M(\pi_x)+M^{-1}\|_{L_\tau (R_0)}+M^{-1}
$$
and $M(N) \to \infty$ as $N \to \infty$. Then the (possibly non-admissible) sequence of block partitions ${\cal R}_N := {\cal R}_N^{M(N)}$ satisfies \eqref{upperEstim}, which concludes the proof of Theorem \ref{thUpper}. $\diamond$\\
We now turn to the proof of Theorem \ref{thNoEps}, which for the most part follows the same scheme.
There exist $d$ functions $\lambda_1(x), \cdots, \lambda_d(x) \in C^0(R_0)$ and a map $x \mapsto \pi_*(x) \in \mathbb{P}^*_k$ such that for all $x\in R_0$ we have
$$
\pi_x = \sum_{1\leq i\leq d} \lambda_i(x) X_i^m + \pi_*(x).
$$
The hypotheses of Theorem \ref{thNoEps} state that $K_I\left(\frac {d^m f}{m!}\right) = K_I(\pi_x)$ does not vanish on $R_0$. It follows from Propositions \ref{propOdd} and \ref{propEven} that the product $\lambda_1(x)\cdots \lambda_d(x)$ is nonzero for all $x\in R_0$. We denote by $\varepsilon_i\in \{\pm 1\}$ the sign of $\lambda_i$, which is therefore constant over the block $R_0$, and we define
$$
\pi_\varepsilon := \sum_{1\leq i\leq d} \varepsilon_i X_i^m.
$$
The proofs of Propositions \ref{propEven} and \ref{propOdd} show that there exists a block $R_\varepsilon$, satisfying $|R_\varepsilon| = 1$, such that $K_I(\pi_\varepsilon) = \|\pi_\varepsilon-\interp_{R_\varepsilon} \pi_\varepsilon\|_{L_p(R_\varepsilon)}$. By $D(x)$ we denote the diagonal matrix with entries $|\lambda_1(x)|, \cdots, |\lambda_d(x)|$, and we define
$$
R^*(x) := (\det D(x))^{\frac 1 {md}} D(x)^{- \frac 1 m} R_\varepsilon.
$$
Clearly, $|R^*(x)| = 1$. Using \eqref{changeRect} and the homogeneity of $\pi_x\in \mathbb{H}_m$, we find that
$$
\|\pi_x - \interp_{R^*(x)} \pi_x\|_{L_p(R^*(x))} = (\det D(x))^{\frac 1 d} K_I(\pi_\varepsilon) = K_I(\pi_x).
$$
We then define the local block specification
\begin{equation}
\label{defR}
R(x) := K_I(\pi_x)^{-\frac \tau d}R^*(x).
\end{equation}
The admissible sequence $({\cal R}_N)_{N \geq 1}$ of block partitions constructed in Lemma \ref{lemmaNn} then satisfies the optimal upper estimate \eqref{upperEstim}, which concludes the proof of Theorem \ref{thNoEps}. $\diamond$\\
\begin{remark}[Adaptation to weighted norms]
Lemma \ref{lemmaNn} still holds if \eqref{unitError} is replaced with
$$
\Omega(x) \|\pi_x -\interp_{R(x)}(\pi_x)\|_{L_p(R(x))} \leq 1
$$
and if the $L_p(R_0)$ norm is replaced with the weighted $L_p(R_0, \Omega)$ norm in \eqref{limsupR}. Replacing the block $R_M(x)$ defined in \eqref{defRM} with
$$
R'_M(x) := \Omega(x)^{- \frac \tau d}R_M(x),
$$
one easily obtains the extension of Theorem \ref{thUpper} to weighted norms. Similarly, replacing the block $R(x)$ defined in \eqref{defR} with $R'(x) := \Omega(x)^{- \frac \tau d}R(x)$, one obtains the extension of Theorem \ref{thNoEps} to weighted norms.
\end{remark}
$\diamond$\\
\appendix
\begin{center}
\large APPENDIX
\end{center}
\section{Proof of Lemma \ref{lemmaSeqBlock}}
By ${\cal Q}_n$ we denote the standard partition of $R_0\subset \mathbb{R}^d$ into $n^d$ identical blocks of diameter $n^{-1} \diam(R_0)$, illustrated on the left in Figure 1. For each $Q \in {\cal Q}_n$ we denote by $x_Q$ the barycenter of $Q$ and we consider the tiling ${\cal T}_Q$ of $\mathbb{R}^d$ formed by the block $n^{-2}R(x_Q)$ and its translates. We define ${\cal P}_n^1(Q)$ and ${\cal P}_n^1$ as follows:
$$
{\cal P}_n^1(Q) := \{R\in {\cal T}_Q : \; R \subset Q\} \ \text{ and } \ {\cal P}_n^1 := \bigcup_{Q\in {\cal Q}_n} {\cal P}_n^1(Q).
$$
Comparing volumes, we obtain
$$
\#({\cal P}_n^1) = \sum_{Q\in {\cal Q}_n} \#({\cal P}_n^1(Q)) \leq \sum_{Q \in {\cal Q}_n} \frac{|Q|}{|n^{-2} R(x_Q)|} = n^{2d} \sum_{Q \in {\cal Q}_n} |Q|\, |R(x_Q)|^{-1}.
$$
From this point, using the continuity of $x \mapsto |R(x)|$, one easily shows that $\frac {\#({\cal P}_n^1)}{n^{2d}} \to \int_{R_0} |R(x)|^{-1} dx$ as $n \to \infty$. Furthermore, property \eqref{n2Ry} clearly holds. In order to construct ${\cal P}_n^2$, we first define two sets of blocks ${\cal P}_n^{2*}(Q)$ and ${\cal P}_n^{2*}$ as follows:
$$
{\cal P}_n^{2*}(Q) := \{R\cap Q : \; R \in {\cal T}_Q \text{ and } R\cap \partial Q\neq \emptyset\} \ \text{ and } \ {\cal P}_n^{2*} := \bigcup_{Q\in {\cal Q}_n} {\cal P}_n^{2*}(Q).
$$
Comparing the surface of $\partial Q$ with the dimensions of $R(x_Q)$, we find that
$$
\#({\cal P}_n^{2*}(Q)) \leq C n^{d-1},
$$
where $C$ is independent of $n$ and of $Q\in {\cal Q}_n$. Therefore $\#({\cal P}_n^{2*})\leq C n^{2d-1}$. The set of blocks ${\cal P}_n^2$ is then obtained by subdividing each block of ${\cal P}_n^{2*}$ into $o(n)$ (for instance, $\lfloor \ln(n)\rfloor^d$) identical sub-blocks, in such a way that $\#({\cal P}_n^2)$ is $o(n^{2d})$ and the requirement \eqref{smallDiam} is met.
\begin{figure}
\centering
\includegraphics[width=4cm,height=4cm]{frame.pdf}
\hspace{1cm}
\includegraphics[width=4cm,height=4cm]{tile.pdf}
\caption{(Left) the initial uniform (coarse) tiling ${\cal Q}_3$ of $R_0$. (Right) the set of blocks ${\cal P}_n^1$ in green and the set of blocks ${\cal P}_n^{2*}$ in red.}
\end{figure}
\begin{thebibliography}{99}
\bibitem{Bab} V.F. Babenko, Interpolation of continuous functions by piecewise linear ones, Math. Notes, 24, no. 1 (1978), 43--53.
\bibitem{us} V. Babenko, Yu. Babenko, A. Ligun, A. Shumeiko, On asymptotical behavior of the optimal linear spline interpolation error of $C^2$ functions, East J. Approx., V. 12, N. 1 (2006), 71--101.
\bibitem{us2} Yu. Babenko, V. Babenko, D. Skorokhodov, Exact asymptotics of the optimal $L_{p,\Omega}$-error of linear spline interpolation, East J. Approx., V. 14, N. 3 (2008), 285--317.
\bibitem{PhD} Yu. Babenko, On the asymptotic behavior of the optimal error of spline interpolation of multivariate functions, PhD thesis, 2006.
\bibitem{JAT} Yu. Babenko, Exact asymptotics of the uniform error of interpolation by multilinear splines, to appear in J. Approx. Theory.
\bibitem{KL} K. B\"or\"oczky, M. Ludwig, Approximation of convex bodies and a momentum lemma for power diagrams, Monatsh. Math., V. 127, N. 2 (1999), 101--110.
\bibitem{Cohen} A. Cohen, J.-M. Mirebeau, Adaptive and anisotropic piecewise polynomial approximation, chapter 4 of Multiscale, Nonlinear and Adaptive Approximation, Springer, 2009.
\bibitem{Daz1} E.F. D'Azevedo, Are bilinear quadrilaterals better than linear triangles?, SIAM J. Sci. Comput., 22, no. 1 (2000), 198--217.
\bibitem{DDI} L. Demaret, N. Dyn, A. Iske, Image compression by linear splines over adaptive triangulations, IEEE Transactions on Image Processing.
\bibitem{Toth} L. Fejes T\'oth, Lagerungen in der Ebene, auf der Kugel und im Raum, 2nd edn., Springer, Berlin, 1972.
\bibitem{Gr3} P. Gruber, Error of asymptotic formulae for volume approximation of convex bodies in $E^d$, Monatsh. Math., 135 (2002), 279--304.
\bibitem{LSh} A.A. Ligun, A.A. Shumeiko, Asymptotic methods of curve recovery, Kiev Inst. of Math. NAS of Ukraine, 1997 (in Russian).
\bibitem{JM} J.-M. Mirebeau, Optimally adapted finite element meshes, Constructive Approximation, 2010.
\bibitem{Nadler} E. Nadler, Piecewise linear best $L_2$ approximation on triangles, in: C.K. Chui, L.L. Schumaker, J.D. Ward (Eds.), Approximation Theory V, Academic Press (1986), 499--502.
\bibitem{kodla1} H. Pottmann, R. Krasauskas, B. Hamann, K. Joy, W. Seibold, On piecewise linear approximation of quadratic functions, J. Geom. Graph., 4, no. 1 (2000), 31--53.
\end{thebibliography}
\end{document}
\begin{document} \title{Stable States with Non-Zero Entropy under Broken $\mathcal{PT}$-Symmetry} \author{Jingwei Wen} \affiliation{State Key Laboratory of Low-Dimensional Quantum Physics and Department of Physics, Tsinghua University, Beijing 100084, China} \author{Chao Zheng} \affiliation{Department of Physics, College of Science, North China University of Technology - Beijing 100144, China} \author{Zhangdong Ye} \affiliation{State Key Laboratory of Low-Dimensional Quantum Physics and Department of Physics, Tsinghua University, Beijing 100084, China} \author{Tao Xin} \email{[email protected]} \affiliation{Shenzhen Institute for Quantum Science and Engineering, Southern University of Science and Technology, Shenzhen 518055, China} \affiliation{Guangdong Provincial Key Laboratory of Quantum Science and Engineering, Southern University of Science and Technology, Shenzhen 518055, Guangdong, China} \author{Guilu Long} \email{[email protected]} \affiliation{State Key Laboratory of Low-Dimensional Quantum Physics and Department of Physics, Tsinghua University, Beijing 100084, China} \affiliation{Frontier Science Center for Quantum Information, Beijing 100084, China} \affiliation{Beijing National Research Center for Information Science and Technology, Beijing 100084, China} \affiliation{Beijing Academy of Quantum Information Sciences, Beijing 100193, China} \begin{abstract} $\mathcal{PT}$-symmetric non-Hermitian systems have been widely studied and explored, both in theory and in experiment, in recent years because of their various interesting features. In this work, we focus on the dynamical features of a triple-qubit system, one qubit of which evolves under a local $\mathcal{PT}$-symmetric Hamiltonian. A new kind of abnormal dynamic pattern in the entropy evolution process is identified, which presents a parameter-dependent stable state determined by the non-Hermiticity of the Hamiltonian in the broken phase of $\mathcal{PT}$-symmetry.
The entanglement and mutual information of a two-body subsystem can increase beyond their initial values, a phenomenon that exists neither in Hermitian systems nor in the two-qubit $\mathcal{PT}$-symmetric system. Moreover, an experimental demonstration of the stable states of a non-Hermitian system with non-zero entropy and entanglement is realized on a four-qubit quantum simulator with nuclear spins. Our work reveals the distinctive dynamic features of the triple-qubit $\mathcal{PT}$-symmetric system and paves the way for practical quantum simulation of multi-party non-Hermitian systems on quantum computers. \end{abstract} \maketitle \emph{Introduction.---}In conventional quantum mechanics, the Hamiltonian of a closed system is required to be Hermitian \cite{Nielsen}, which guarantees the reality of the energy spectrum and the unitarity of the corresponding time-evolution operators. However, Hermiticity is a sufficient but not necessary condition for real eigenvalues, and in 1998 \cite{Bender}, Bender and Boettcher found that a class of Hamiltonians satisfying joint $\mathcal{P}$ (spatial reflection) and $\mathcal{T}$ (time reversal) symmetry instead of Hermiticity can still have real eigenvalues in the unbroken phase \cite{nonlinear,add_pt}. Moreover, there exist critical points for the phase transition from the $\mathcal{PT}$-unbroken phase to the broken phase, called exceptional points or branch points \cite{EP1,cir_EP,pt_the1}. Because of the various peculiar characteristics of this kind of non-Hermitian system, such as the violation of the no-signaling principle \cite{violation,violation_exp}, entanglement restoration \cite{entanglepra2014, wenpt} and reversible-irreversible criticality in information flow \cite{flow2017theory,flow2019exp}, $\mathcal{PT}$-symmetric quantum mechanics has attracted continuous attention and research interest from many perspectives.
Recently, there has been related research on its potential applications in reconstructing standard quantum theory \cite{reconstruct,reconstruct2}, and it has been shown that a unitary evolution can be introduced by redefining the inner product of quantum states \cite{pt_inner2015,Nori_nogo}, which makes the theory equivalent to Hermitian quantum theory. \begin{figure} \caption{The multi-qubit system with a local $\mathcal{PT}$-symmetric Hamiltonian.} \label{model} \end{figure} In experiment, many quantum processes of $\mathcal{PT}$-symmetric systems, such as symmetry-breaking transitions \cite{exp_transition1,exp_transition2,exp_transition3,exp_transition4,exp_transition5}, observation of exceptional points \cite{exp_ep1,exp_ep2} and topological features \cite{exp_topo1,exp_topo2,exp_topo3}, have been demonstrated, mainly in optical systems \cite{flow2019exp}, nuclear spins \cite{wenpt,exp_pt0}, ultracold atoms \cite{exp_transition1}, nitrogen-vacancy centers \cite{exp_transition2}, and superconducting systems \cite{exp_ep1}, by introducing balanced gain and loss or state-selective dissipation. Moreover, some previous studies \cite{violation_exp,wenpt,flow2019exp} focus on the two-body non-Hermitian system shown in Fig.~\ref{model}, where two qubits (Alice and Bob) are entangled initially and one of them (Alice) evolves under a local $\mathcal{PT}$-symmetric Hamiltonian. Such a two-qubit model can lead to oscillations of entropy and entanglement in the unbroken phase of $\mathcal{PT}$-symmetry, which violates the property of entanglement monotonicity \cite{entanglepra2014, wenpt}. In particular, the entropy and entanglement of both qubits decay exponentially to zero in the broken phase and form stable states, which do not vary with time. Such stable states, whose dynamic process is named the normal dynamic pattern (NDP) here, are related only to the quantum phase and are independent of the degree of non-Hermiticity.
In this work, however, we find that when the system is extended from a two-body to a triple-body model, another kind of evolution process, named the abnormal dynamic pattern (ADP) here, arises. Subsystems evolving under the ADP can present novel non-Hermiticity-related stable states with non-zero entropy. By controlling the local system of Alice, the entanglement and mutual information between Bob and Charlie can be redistributed and even increased beyond the initial value, which does not happen in the two-qubit $\mathcal{PT}$-symmetric system. Theoretical and numerical analyses are introduced to study the properties of the quantum states that retain partial information in the broken phase of $\mathcal{PT}$-symmetry. By enlarging the system with ancillary qubits and encoding the subsystem with the non-Hermitian Hamiltonian with postselection, an experimental demonstration of the stable states in the ADP is realized on a four-qubit quantum simulator based on a quantum circuit algorithm. \emph{Entropy of stable states.---}We focus on the dynamical features of a composite system consisting of three qubits, initialized in the Greenberger-Horne-Zeilinger (GHZ) state \cite{GHZ} $\ket{\psi_{0}}=(\ket{000}+\ket{111})/\sqrt{2}$, for which the reduced density matrix of each single qubit is the maximally mixed state $\rho_{\textup{single}}=I/2$. Then one of the qubits, say Alice's, applies the local operator $U_{A}=e^{-i\hat{H}_{\mathcal{PT}}t}$ (set $\hbar=1$) to her own system, with the $\mathcal{PT}$-symmetric Hamiltonian \begin{equation} \hat{H}_{\mathcal{PT}}=s(\sigma_{x}+ir \sigma_{z}), \label{eq1} \end{equation} where $\sigma_{i}~(i=x,y,z)$ are the Pauli matrices. The parameter $s>0$ represents the energy scale and $r>0$ is the degree of non-Hermiticity.
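A quick numerical check of the spectrum of the Hamiltonian in Eq.~\eqref{eq1} (a sketch using numpy; the helper name is ours): the eigenvalues are $\pm s\sqrt{1-r^{2}}$, real in the unbroken phase $r<1$, purely imaginary in the broken phase $r>1$, and coalescing at $r_{ep}=1$.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)   # Pauli sigma_x
sz = np.array([[1, 0], [0, -1]], dtype=complex)  # Pauli sigma_z

def spectrum(s, r):
    """Eigenvalues of H_PT = s(sigma_x + i*r*sigma_z), i.e. +/- s*sqrt(1 - r^2)."""
    return np.linalg.eigvals(s * (sx + 1j * r * sz))

# r < 1: real pair with energy gap w = 2*s*sqrt(1 - r^2); r > 1: imaginary pair.
```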
The $\mathcal{PT}$-symmetric Hamiltonian $\hat{H}_{\mathcal{PT}}$ satisfies $(\mathcal{PT})\hat{H}_{\mathcal{PT}}(\mathcal{PT})^{-1}=\hat{H}_{\mathcal{PT}}$, where the operator $\mathcal{P}=\sigma_{x}$ and $\mathcal{T}$ corresponds to complex conjugation. The energy gap of the Hamiltonian, $w=2s\sqrt{1-r^{2}}$, is real as long as $r<1$, which means the $\mathcal{PT}$-symmetry is unbroken. The condition $r>1$ leads to a broken phase, with a transition at the exceptional point $r_{ep}=1$. The three-body Hamiltonian can be expressed as $\hat{H}_{\mathcal{PT}}^{3}=\hat{H}_{\mathcal{PT}}\otimes I_{B} \otimes I_{C}$. The density matrix $\rho(t)$ of the whole system is obtained from the time-evolution operator with a renormalized quantum state \cite{flow2017theory} \begin{equation} \rho(t)=\frac{e^{-i\hat{H}_{\mathcal{PT}}^{3}t}\rho(0)e^{i\hat{H}_{\mathcal{PT}}^{3\dagger}t}}{\textup{tr}[e^{-i\hat{H}_{\mathcal{PT}}^{3}t}\rho(0)e^{i\hat{H}_{\mathcal{PT}}^{3\dagger}t}]}. \label{evolution2} \end{equation} \begin{figure} \caption{Two kinds of dynamical evolution pattern. (a) The entropy $S(\rho_{A})$; (b) the entropy $S(\rho_{B})$; (c) the Bloch vectors of the stable states of Alice and Bob.} \label{PT3_entropy} \end{figure} \begin{figure*} \caption{(a) Mutual information $\textup{I}(A:B)$; (b) the stable value $\textup{I}_{s}$ and the variation $\Delta \textup{I}(r)$ of $\textup{I}(B:C)$; (c) the concurrence of the Bob-Charlie subsystem.} \label{fig_entanglement} \end{figure*} The joint reduced states of the two-body subsystems are $\rho_{ij}=\textup{tr}_{k}(\rho)$, while the single-body reduced density matrices are $\rho_{i}=\textup{tr}_{jk}(\rho)~(i,j,k=A,B,C)$. We focus on the dynamical features of the von Neumann entropy $S(\rho)=-\textup{tr}(\rho \log_{2}\rho)$ \cite{Nielsen} and plot the evolution process within a total time $T$ in the different phases in Fig.~\ref{PT3_entropy}\textcolor{blue}{(a), (b)}. It can be concluded that in the triple-qubit $\mathcal{PT}$-symmetric system the single-body entropy of Alice still evolves under the NDP: the entropy oscillates in the unbroken phase and the amplitude increases as the parameter $r$ approaches $r_{ep}$.
Once the exceptional point is crossed, the entropy decays exponentially to zero and tends to a stable state; these stable states are indistinguishable in terms of their entropy evolution characteristics. However, the dynamic pattern of $S(\rho_{B})$ changes and another kind of pattern, the ADP, shows up: the entropy still oscillates in the unbroken phase, whereas in the broken phase of $\mathcal{PT}$-symmetry the entropy of the stable state does not decrease to zero exponentially, but stabilizes at a value related to the degree of non-Hermiticity. In other words, there exists a parameter-dependent stable state in the subsystem of the multi-party $\mathcal{PT}$-symmetric system, and its entropy decreases as the non-Hermiticity increases. Such a stable state can maintain partial entropy in the system under broken $\mathcal{PT}$-symmetry. Based on the evolution equation, we can determine the reduced density matrix of Bob's qubit \begin{equation} \rho_{B}=\frac{1}{N} \begin{pmatrix} \vert C \vert^2+(A-B)^2&0\\ 0&\vert C \vert^2+(A+B)^2 \\ \end{pmatrix}, \end{equation} where $A=\cos(wt/2)$, $B=(-2rs/w)\sin(wt/2)$, $C=(-2is/w)\sin(wt/2)$ and $N=2(\vert C \vert^2+A^2+B^2)$ is the normalization constant. We focus on the stable state in the broken phase, $\rho_{B}^{\textup{ss}}=\frac{1}{2}I+\frac{\sqrt{r^2-1}}{2r}\sigma_{z}$. At the exceptional point, the density matrix of the stable state is the maximally mixed state, with entropy $S(\rho_{B})=1$. However, as the non-Hermiticity increases, the stable state tends to $\rho_{B}^{\textup{ss}}=\ket{0}\bra{0}$, a pure state with entropy $S(\rho_{B})=0$. This can be viewed as a quantum-state purification induced by the increase of non-Hermiticity, and the resulting pure state is stable in time.
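The stable state $\rho_{B}^{\textup{ss}}$ can be reproduced numerically. The sketch below (numpy only; the function name and implementation choices are ours) evolves the GHZ state under $e^{-i\hat{H}_{\mathcal{PT}}^{3}t}$, renormalizes as in Eq.~\eqref{evolution2}, and traces out Alice and Charlie; for $r>1$ and large $t$ the result approaches $\frac{1}{2}I+\frac{\sqrt{r^{2}-1}}{2r}\sigma_{z}$.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def rho_B(s, r, t):
    """Bob's reduced density matrix after evolving the GHZ state under H_PT x I x I."""
    H = s * (sx + 1j * r * sz)
    vals, vecs = np.linalg.eig(H)                 # H is diagonalizable for r != 1
    U = vecs @ np.diag(np.exp(-1j * vals * t)) @ np.linalg.inv(vecs)
    ghz = np.zeros(8, dtype=complex)
    ghz[0] = ghz[7] = 1 / np.sqrt(2)              # (|000> + |111>)/sqrt(2)
    psi = np.kron(U, np.eye(4)) @ ghz             # local non-unitary evolution on Alice
    rho = np.outer(psi, psi.conj())
    rho /= np.trace(rho)                          # renormalized state, Eq. (2)
    # partial trace over Alice and Charlie: indices (A,B,C ; A',B',C')
    return np.einsum('abcadc->bd', rho.reshape(2, 2, 2, 2, 2, 2))
```

In the Hermitian limit $r=0$ the same routine leaves Bob maximally mixed, as expected for a local unitary acting on one party of the GHZ state.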
We can then calculate the analytical expression of the von Neumann entropy of Bob's qubit in the ADP, \begin{equation} \begin{split} S(\rho_{B}^{\textup{ss}})=\log_{2}\frac{2}{\cos\theta(\sec\theta+\tan\theta)^{\sin\theta}}>0, \end{split} \label{Sb} \end{equation} where $\cos\theta=1/r$ and $\theta \in [0,\pi/2)$ \cite{supp}. The entropy vanishes only in the limit $\theta\to\pi/2$, that is, when the non-Hermiticity of the system is infinite. As for the entropy evolution in the NDP, the density matrix of Alice's stable state under broken $\mathcal{PT}$-symmetry is $\rho_{A}^{\textup{ss}}=\rho_{B}^{\textup{ss}}-D(r)\cdot\sigma_{y}$. The stable states of Alice and Bob thus have the same population distribution, but $\rho_{A}^{\textup{ss}}$ has off-diagonal elements, which decay with the power-law damping function $D(r)=1/(2r)$ \cite{supp}. Such effects can be modeled as a phase-damping process induced by non-Hermiticity, leading to the loss of quantum information to the environment. As shown in Fig.~\ref{PT3_entropy}\textcolor{blue}{(c)}, we plot the Bloch vectors of the stable states of Alice and Bob in the Bloch sphere. When $r$ increases from the exceptional point to a large enough value, the Bloch vector of Alice's stable state rotates along the Bloch-sphere surface from the point $(0,-1,0)$ towards the north pole on the $z$-axis, with norm $\vert \vert \vec{r}_{A} \vert \vert = 1$ throughout the process. Therefore, the entropy of the stable state is \begin{equation} S(\rho_{A}^{\textup{ss}})=-\sum_{i=1,2}\lambda_{i}^{A}\log_2{\lambda_{i}^{A}} \equiv 0, \end{equation} with eigenvalues $\lambda_{1,2}^{A}=0,1$, independent of the non-Hermiticity parameter; this is what happens in an evolution process obeying the NDP. In contrast, the norm of the Bloch vector of Bob's stable state is $\vert \vert \vec{r}_{B} \vert \vert =\sin\theta\le 1$: it starts at the center of the Bloch sphere at the exceptional point and moves towards the north pole as the non-Hermiticity increases.
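Equation~\eqref{Sb} can be cross-checked numerically (a sketch; the helper names are ours): the eigenvalues of $\rho_{B}^{\textup{ss}}$ are $(1\pm\sin\theta)/2$ with $\cos\theta=1/r$, and their entropy reproduces the closed form.

```python
import numpy as np

def S_direct(r):
    """Entropy of rho_B^ss from its eigenvalues (1 +/- sqrt(r^2 - 1)/r)/2."""
    lam = 0.5 + np.sqrt(r**2 - 1) / (2 * r) * np.array([1.0, -1.0])
    lam = lam[lam > 0]                      # 0*log(0) = 0 convention
    return float(-np.sum(lam * np.log2(lam)))

def S_closed(r):
    """Closed form of Eq. (Sb): log2(2 / (cos t * (sec t + tan t)^sin t)), cos t = 1/r."""
    t = np.arccos(1.0 / r)
    return float(np.log2(2.0 / (np.cos(t) * (1/np.cos(t) + np.tan(t))**np.sin(t))))
```

The identity follows from $\cos^{2}\theta=(1+\sin\theta)(1-\sin\theta)$, which lets the binary entropy of $(1\pm\sin\theta)/2$ collapse into the stated closed form.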
Moreover, besides the stable states, there exists another kind of non-Hermiticity-related quantum state satisfying $dS(t)/dt=0$, at the specific points $P=(t_{P}(r),S_{P}(r))$ labeled by the gray dashed line in Fig.~\ref{PT3_entropy}\textcolor{blue}{(b)}. During the evolution from the point $P$ to the stable state in the ADP, the entropy increases; such a turning point does not exist in the NDP. As the non-Hermiticity parameter increases, the entropy of the quantum state at the time point $t_{P}$ gradually approaches $S(\rho_{B}^{\textup{ss}})$, and the duration and intensity of the entropy-increasing process gradually weaken until it disappears. \emph{Entanglement evolution.---}Furthermore, we investigate the dynamical features of the correlations and entanglement in the triple-party $\mathcal{PT}$-symmetric system. Entropic quantities are generally used to quantify correlations; for a two-body system with density matrix $\rho_{ij}$, the amount of information shared between the two parts can be characterized by the mutual information, defined as $\textup{I}(i:j)=S(\rho_i)+S(\rho_j)-S(\rho_{ij})\ge 0$. The mutual information is always non-negative and vanishes only when $i$ and $j$ are in a product state, which makes $\textup{I}(i:j)$ a genuine measure of correlations \cite{mutual}. It is usually believed that local trace-preserving quantum operations can never increase the mutual information \cite{Nielsen}, but this can be violated in the two-qubit $\mathcal{PT}$-symmetric system, without exceeding the initial value \cite{wenpt,flow2019exp}. In the triple-qubit $\mathcal{PT}$-symmetric system, this property still holds for the Alice-Bob subsystem under the NDP, as shown in Fig.~\ref{fig_entanglement}\textcolor{blue}{(a)}. However, we find that the evolution of $\textup{I}(B:C)$ in the ADP, which has non-zero mutual information in the broken phase, can present mutual information beyond the initial value.
Moreover, the mutual information $\textup{I}(B:C)$ oscillates with a maximum deviation $\textup{I}_{\textup{md}}$ from the initial value at a series of discrete time points, but tends to stabilize at the value $\textup{I}_{s}$ after passing the exceptional point. We define a variation measure $\Delta \textup{I}(r)=\mathcal{O}(r_{ep}-r)\textup{I}_{\textup{md}}+\mathcal{O}(r-r_{ep})\textup{I}_{s}-\textup{I}(t_{0})$ to quantify the increase of mutual information, where $\mathcal{O}(\cdot)$ is the Heaviside step function. We can conclude from Fig. \ref{fig_entanglement}\textcolor{blue}{(b)} that the stable value $\textup{I}_{s}$ decreases with $r$ and that the subsystem (Bob-Charlie) has the maximal available information at the exceptional point. However, the increase of mutual information stops at $r_{\textup{MI}}\approx1.5978$, a critical point for the increase of mutual information, which is different from the exceptional point of the $\mathcal{PT}$-symmetry. In other words, the critical point of the phase transition is not the critical point for the increase of accessible information in the triple-qubit $\mathcal{PT}$-symmetric system. It is noted that there exists an anti-corresponding relation between the entropy and mutual-information evolutions because $\textup{I}(B:C,t_\infty)=2S(\rho_B^{\textup{ss}})$: the two-body subsystems that follow the NDP in entropy evolution can present the ADP in mutual-information evolution, and vice versa. To evaluate the degree of entanglement in the two-body subsystems, we can also use the concurrence \cite{concurrence} \begin{equation} C(\rho_{ij})=\textup{Max} \{ 0,\sqrt{\lambda_{1}}-\sqrt{\lambda_{2}}-\sqrt{\lambda_{3}}-\sqrt{\lambda_{4}} \} \end{equation} where $\lambda_{i}$ are the eigenvalues of $\rho_{ij}(\sigma_{y}\otimes\sigma_{y})\rho^{*}_{ij}(\sigma_{y}\otimes\sigma_{y})$ in decreasing order. We numerically calculate the dynamical evolution and find that $C(\rho_{AB})=C(\rho_{AC})=0$ at all times, i.e., these pairs remain unentangled throughout the evolution. 
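Wootters' formula above translates directly into code. A minimal sketch, used only for illustration (the Bell-state check is a standard sanity test, not experimental data):

```python
import numpy as np

def concurrence(rho):
    # Wootters concurrence for a two-qubit density matrix:
    # C = max(0, sqrt(l1) - sqrt(l2) - sqrt(l3) - sqrt(l4)),
    # with l_i the decreasing eigenvalues of rho (sy x sy) rho* (sy x sy)
    sy = np.array([[0, -1j], [1j, 0]])
    yy = np.kron(sy, sy)
    rho_tilde = yy @ rho.conj() @ yy
    lam = np.sort(np.linalg.eigvals(rho @ rho_tilde).real)[::-1]
    lam = np.sqrt(np.clip(lam, 0, None))   # clip tiny negatives from roundoff
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

# Maximally entangled Bell state: C = 1; maximally mixed state: C = 0
bell = np.zeros((4, 4)); bell[[0, 0, 3, 3], [0, 3, 0, 3]] = 0.5
assert abs(concurrence(bell) - 1.0) < 1e-9
assert concurrence(np.eye(4) / 4) == 0.0
```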
It is Alice's $\mathcal{PT}$-symmetric operator that introduces the non-Hermiticity, yet the subsystems containing Alice's qubit show no entanglement oscillation, in contrast to the two-qubit counterparts. For $C(\rho_{BC})$, however, concurrence emerges in both the unbroken and broken phases, although the local Hamiltonians of Bob and Charlie are Hermitian. This evolution pattern is consistent with the entropic quantity of mutual information. We identify the amplitude of the concurrence during the evolution by $A(r)=C_{\textup{max}}$, and it can be concluded from Fig. \ref{fig_entanglement}\textcolor{blue}{(c)} that the concurrence of $\rho_{BC}^{\textup{ss}}$ decreases and stabilizes at $C_{s}=1/r$, which presents a power-law decay with increasing non-Hermiticity after the exceptional point. \begin{figure} \caption{Experimental sample and quantum circuit. Three of the four controllable qubits are used as the work system and the last one is an ancillary qubit. The whole process is divided into initial state preparation, $\mathcal{PT}$-symmetric evolution, and measurement.} \label{fig_exp1} \end{figure} \begin{figure} \caption{(a) The off-diagonal elements of the stable states $\rho_{A}^{\textup{ss}}$ versus the non-Hermiticity parameter. (b) The entropy of Bob and the concurrence between Bob and Charlie in the stable states with broken $\mathcal{PT}$-symmetry.} \label{fig_exp2} \end{figure} \emph{Experimental observation of stable states.---}In the experiment, we focus on demonstrating the dynamical evolution of the entropy of stable states in the triple-qubit system with local $\mathcal{PT}$-symmetric operators on a liquid nuclear magnetic resonance quantum simulator. The sample used is ${}^{13}C$-labeled iodotrifluoroethylene (C$_{2}$F$_{3}$I), and the qubits in the blue box of Fig. \ref{fig_exp1} encode the work system, while another nucleus, ${}^{19}F_{3}$, is chosen as the ancillary system to realize the $\mathcal{PT}$-symmetric operator \cite{wenpt}. The operators in the dotted box initialize the work system to the GHZ state. 
To realize the quantum simulation of the non-unitary evolution induced by the $\mathcal{PT}$-symmetric Hamiltonian on Alice, we decompose the non-Hermitian Hamiltonian evolution into a linear combination of unitary operators and realize the simulation in an enlarged Hilbert space with post-selection \cite{dual1,dual2,dual3,dual4}. In the quantum circuit, $H$ denotes the Hadamard gate and the 1-controlled gate is $V_{2}=\sigma_{z}$. The single-qubit operator $V_{0}$ and the 0-controlled $V_{1}$ are parameter-dependent quantum gates, with concrete forms \begin{equation} \begin{split} V_{0}=\begin{pmatrix} \cos\phi&-\sin\phi \\ \sin\phi&\cos\phi \end{pmatrix},~ V_{1}=\begin{pmatrix} \cos\phi_{1}&i\sin\phi_{1} \\ i\sin\phi_{1}&\cos\phi_{1} \end{pmatrix} \end{split} \end{equation} where $ \phi=\arcsin\frac{r\sin{(wt/2)}}{M_{1}}$, $\phi_{1}=\arcsin\frac{-\sin{(wt/2)}}{M_{2}}$ and $M_{1}=[1-r^2\cos{wt}]^{1/2}$, $M_{2}=[1-r^2\cos^2{(wt/2)}]^{1/2}$ \cite{supp}. Then the evolution can be realized via single-qubit operations and two-qubit controlled gates. We take several different parameter points in the experiment. All operations are realized using shaped pulses \cite{grape11,grape22}, which are robust to static-field distributions and inhomogeneity, and the durations of the experimental pulses are within 15 ms. At the end of the quantum circuit, we obtain the density matrix of the work system by observing the probe spin ${}^{13}C$ in the subspace $\ket{0}$ of the ancillary qubit \cite{YangB}. We trace out different qubits of the experimental stable states to find the subsystems with different dynamical patterns. As shown in Fig. \ref{fig_exp2}\textcolor{blue}{(a)}, the off-diagonal elements of the density matrix of Alice's stable states present a power-law decay with increasing non-Hermiticity, which is consistent with the damping function $D(r)$. This is how quantum states behave in the NDP, as analyzed above. 
In the ADP, we experimentally determine the entropy of Bob and the concurrence between Bob and Charlie in Fig. \ref{fig_exp2}\textcolor{blue}{(b)}, both of which present parameter-dependent non-zero values in the stable states with broken $\mathcal{PT}$-symmetry. The experimental results for the entropy match well with the theoretical expectation of Eq. (\ref{Sb}) under different parameter conditions. The inset panels show the density matrices of Bob's quantum states with the minimal and maximal non-Hermiticity in the experimental parameter setup, with average fidelities over 0.989; we can see that with increasing non-Hermiticity, $\rho_{B}^{\textup{ss}}$ gradually evolves from the maximally mixed state to a pure state. \emph{Conclusion.---}We investigate the evolution of entropy and entanglement in a triple-qubit system with a local $\mathcal{PT}$-symmetric operation from theoretical and experimental perspectives. Two kinds of dynamic patterns, named the ADP and the NDP, are found in this system; in the ADP, the entropy and entanglement stabilize at non-Hermiticity-related non-zero values, a behavior that does not exist in the two-qubit counterparts. Two-body subsystems in the ADP present the maximum entanglement increase at the exceptional point, and the mutual information can increase beyond its initial value. A new critical point $r_{\textup{MI}}$ is determined in the broken phase, where the accessible information switches from increasing to decreasing compared with the initial condition. Based on the four-qubit quantum simulator, we experimentally observe the stable states of the non-Hermitian system with nuclear spins, and the results confirm the theoretical analysis. Our work shows that when the $\mathcal{PT}$-symmetric system is extended from two bodies to three, different physical properties occur, and the enhancement of entanglement and mutual information has important physical significance. 
In particular, there are potential applications in quantum communication and quantum eavesdropping, achieved by regulating and controlling the channel capacity of the system with local $\mathcal{PT}$-symmetric operators on the third party. This work was supported by the National Key R$\&$D Program of China (2017YFA0303700), the Key R$\&$D Program of Guangdong province (2018B030325002), Beijing Advanced Innovation Center for Future Chip (ICFC) and the National Natural Science Foundation of China under Grant No. 11774197. C. Z. is supported by the National Natural Science Foundation of China under Grant No. 11705004. T. X. is also supported by the National Natural Science Foundation of China (Grants No. 11905099, and No. U1801661), Guangdong Basic and Applied Basic Research Foundation (Grant No. 2019A1515011383), and Guangdong Provincial Key Laboratory (Grant No. 2019B121203002). \begin{thebibliography}{99} \bibitem{Nielsen} M. A. Nielsen and I. L. Chuang, \textit{Quantum Computation and Quantum Information: 10th Anniversary Edition}, 10th ed. (Cambridge University Press, New York, 2011). \bibitem{Bender} C. M. Bender and S. Boettcher, \href{https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.80.5243} {\text{Phys. Rev. Lett.} \textbf{80,} 5243 (1998)}. \bibitem{nonlinear} V. V. Konotop, J. Yang, and D. A. Zezyulin, \href{https://journals.aps.org/rmp/abstract/10.1103/RevModPhys.88.035002} {\text{Rev. Mod. Phys.} \textbf{88,} 035002 (2016)}. \bibitem{add_pt} R. El-Ganainy, K. G. Makris, M. Khajavikhan, Z. H. Musslimani, S. Rotter, and D. N. Christodoulides, \href{https://www.nature.com/articles/nphys4323} {\text{Nature Physics.} \textbf{14,} 11 (2018)}. \bibitem{EP1} T. J. Milburn, J. Doppler, C. A. Holmes, S. Portolan, S. Rotter, and P. Rabl, \href{https://journals.aps.org/pra/abstract/10.1103/PhysRevA.92.052124} {\text{Phys. Rev. A.} \textbf{92,} 052124 (2015)}. \bibitem{cir_EP} D. 
Heiss, \href{https://www.nature.com/articles/nphys3864} {\text{Nature Physics.} \textbf{12,} 823 (2016)}. \bibitem{pt_the1} C. M. Bender, D. C. Brody, H. F. Jones, and B. K. Meister, \href{https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.98.040403} {\text{Phys. Rev. Lett.} \textbf{98,} 040403 (2007)}. \bibitem{violation} Y.-C. Lee, M.-H. Hsieh, S. T. Flammia, and R.-K. Lee, \href{https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.112.130404} {\text{Phys. Rev. Lett.} \textbf{112,} 130404 (2014)}. \bibitem{violation_exp} J.-S. Tang, Y. T. Wang, S. Yu, D. Y. He, J. S. Xu, B. H. Liu, G. Chen, Y. N. Sun, K. Sun, Y. J. Han, C.-F. Li, and G.-C. Guo, \href{https://www.nature.com/articles/nphoton.2016.144#citeas} {\text{Nature Photonics.} \textbf{10,} 642 (2016)}. \bibitem{entanglepra2014} S.-L. Chen, G.-Y. Chen, and Y.-N. Chen, \href{https://journals.aps.org/pra/abstract/10.1103/PhysRevA.90.054301} {\text{Phys. Rev. A.} \textbf{90,} 054301 (2014)}. \bibitem{wenpt} J.-W. Wen, C. Zheng, X.-Y. Kong, S.-J. Wei, T. Xin, and G.-L. Long, \href{https://journals.aps.org/pra/abstract/10.1103/PhysRevA.99.062122} {\text{Phys. Rev. A.} \textbf{99,} 062122 (2019)}. \bibitem{flow2017theory} K. Kawabata, Y. Ashida, and M. Ueda, \href{https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.119.190401} {\text{Phys. Rev. Lett.} \textbf{119,} 190401 (2017)}. \bibitem{flow2019exp} L. Xiao, K.-K. Wang, X. Zhan, Z.-H. Bian, K. Kawabata, M. Ueda, W. Yi, and P. Xue, \href{https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.123.230401} {\text{Phys. Rev. Lett.} \textbf{123,} 230401 (2019)}. \bibitem{reconstruct} C. M. Bender, D. C. Brody, and H. F. Jones, \href{https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.89.270401} {\text{Phys. Rev. Lett.} \textbf{89,} 270401 (2002)}. \bibitem{reconstruct2} C. M. Bender, D. W. Hook, P. N. Meisinger, and Q.-H. Wang, \href{https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.104.061601} {\text{Phys. Rev. 
Lett.} \textbf{104,} 061601 (2010)}. \bibitem{pt_inner2015} S. Croke, \href{https://journals.aps.org/pra/abstract/10.1103/PhysRevA.91.052113} {\text{Phys. Rev. A.} \textbf{91,} 052113 (2015)}. \bibitem{Nori_nogo} F. Minganti, A. Miranowicz, R. W. Chhajlany, I. I. Arkhipov, and F. Nori, \href{https://journals.aps.org/pra/abstract/10.1103/PhysRevA.100.062118} {\text{Phys. Rev. A.} \textbf{100,} 062118 (2019)}. \bibitem{exp_transition1} J. Li, A. K. Harter, J. Liu, L. de Melo, Y. N. Joglekar, and L. Luo, \href{https://www.nature.com/articles/s41467-019-08596-1} {\text{Nature Communications.} \textbf{10,} 855 (2019)}. \bibitem{exp_transition2} Y. Wu, W. Liu, J. Geng, X. Song, X. Ye, C.-K. Duan, X. Rong, and J. Du, \href{https://science.sciencemag.org/content/364/6443/878.abstract} {\text{Science.} \textbf{364,} 878 (2019)}. \bibitem{exp_transition3} A. Guo, G. J. Salamo, D. Duchesne, R. Morandotti, M. Volatier-Ravat, V. Aimez, G. A. Siviloglou, and D. N. Christodoulides, \href{https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.103.093902} {\text{Phys. Rev. Lett.} \textbf{103,} 093902 (2009)}. \bibitem{exp_transition4} C. E. R\"{u}ter, K. G. Makris, R. El-Ganainy, D. N. Christodoulides, M. Segev, and D. Kip, \href{https://www.nature.com/articles/nphys1515} {\text{Nature Physics.} \textbf{6,} 192 (2010)}. \bibitem{exp_transition5} C. M. Bender, B. K. Berntson, D. Parker, and E. Samuel, \href{https://aapt.scitation.org/doi/pdf/10.1119/1.4789549?class=pdf} {\text{Am. J. Phys.} \textbf{81,} 173 (2013)}. \bibitem{exp_ep1} M. Naghiloo, M. Abbasi, Y. N. Joglekar, and K. W. Murch, \href{https://www.nature.com/articles/s41567-019-0652-z} {\text{Nature Physics.} \textbf{15,} 1232 (2019)}. \bibitem{exp_ep2} K. Ding, G. Ma, Z. Q. Zhang, and C. T. Chan, \href{https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.121.085702} {\text{Phys. Rev. Lett.} \textbf{121,} 085702 (2018)}. \bibitem{exp_topo1} L. Xiao, X. Zhan, Z. H. Bian, K. K. Wang, X. Zhang, X. P. Wang, J. Li, K. 
Mochizuki, D. Kim, N. Kawakami, W. Yi, H. Obuse, B. C. Sanders, and P. Xue, \href{https://www.nature.com/articles/nphys4204} {\text{Nature Physics.} \textbf{13,} 1117 (2017)}. \bibitem{exp_topo2} N. X. A. Rivolta, H. Benisty, and B. Maes, \href{https://journals.aps.org/pra/abstract/10.1103/PhysRevA.96.023864} {\text{Phys. Rev. A.} \textbf{96,} 023864 (2017)}. \bibitem{exp_topo3} X. Ni, D. Smirnova, A. Poddubny, D. Leykam, Y. Chong, and A. B. Khanikaev, \href{https://journals.aps.org/prb/abstract/10.1103/PhysRevB.98.165129} {\text{Phys. Rev. B.} \textbf{98,} 165129 (2018)}. \bibitem{exp_pt0} C. Zheng, L. Hao, and G. L. Long, \href{https://royalsocietypublishing.org/doi/full/10.1098/rsta.2012.0053} {\text{Phil. Trans. R. Soc. A.} \textbf{371,} 20120053 (2013)}. \bibitem{GHZ} D. M. Greenberger, M. A. Horne, A. Shimony, and A. Zeilinger, \href{https://aapt.scitation.org/doi/abs/10.1119/1.16243} {\text{Am. J. Phys.} \textbf{58,} 1131 (1990)}. \bibitem{mutual} G. Camilo, G. T. Landi, and S. Eli\"{e}ns, \href{https://doi.org/10.1103/PhysRevB.99.045155} {\text{Phys. Rev. B.} \textbf{99,} 045155 (2019)}. \bibitem{concurrence} W. K. Wootters, \href{https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.80.2245} {\text{Phys. Rev. Lett.} \textbf{80,} 2245 (1998)}. \bibitem{dual1} G. L. Long, \href{http://iopscience.iop.org/article/10.1088/0253-6102/45/5/013/meta} {\text{Commun. Theor. Phys.} \textbf{45,} 825 (2006)}. \bibitem{dual2} U. G\"{u}nther and B. F. Samsonov, \href{https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.101.230404} {\text{Phys. Rev. Lett.} \textbf{101,} 230404 (2008)}. \bibitem{dual3} M.-Y. Huang, R.-K. Lee, L.-J. Zhang, S.-M. Fei, and J.-D. Wu, \href{https://dx.doi.org/10.1103/PhysRevLett.123.080404} {\text{Phys. Rev. Lett.} \textbf{123,} 080404 (2019)}. \bibitem{dual4} C. Zheng, \href{http://iopscience.iop.org/article/10.1209/0295-5075/123/40002} {\text{Europhysics Letters.} \textbf{123,} 40002 (2018)}. \bibitem{grape11} N. Khaneja, T. 
Reiss, C. Kehlet, T. Schulte-Herbr\"{u}ggen, and S. J. Glaser, \href{https://www.sciencedirect.com/science/article/pii/S1090780704003696?} {\text{J. Magn. Reson.} \textbf{172,} 296 (2005)}. \bibitem{grape22} C. A. Ryan, C. Negrevergne, M. Laforest, E. Knill, and R. Laflamme, \href{https://doi.org/10.1103/PhysRevA.78.012328} {\text{Phys. Rev. A.} \textbf{78,} 012328 (2008)}. \bibitem{YangB} H. Wang, S. Wei, C. Zheng, X. Kong, J. Wen, X. Nie, J. Li, D. Lu, and T. Xin, \href{https://journals.aps.org/pra/abstract/10.1103/PhysRevA.102.012610} {\text{Phys. Rev. A.} \textbf{102,} 012610 (2020)}. \bibitem{supp} See Supplementary Material for details. \end{thebibliography} \section{Supplementary Material} \section{Derivation of Entropy Evolution of Stable States} In the triple-qubit system, the non-unitary operator on Alice induced by the $\mathcal{PT}$-symmetric Hamiltonian is \begin{equation} \begin{split} U_{A}=e^{i\phi} \begin{pmatrix} \cos\frac{wt}{2\hbar}+\frac{2rs}{w}\sin\frac{wt}{2\hbar}&\frac{-2is}{w}\sin\frac{wt}{2\hbar}\\ \frac{2is}{w}\sin\frac{wt}{2\hbar}&\cos\frac{wt}{2\hbar}-\frac{2rs}{w}\sin\frac{wt}{2\hbar}\\ \end{pmatrix} \end{split} \end{equation} where $\phi$ is a phase factor. Because the evolution operator of the triple-qubit system factorizes into single-body operators, the operator on the whole system can be expressed as $U_{3}=U_{A}\otimes I_{B} \otimes I_{C}$. Then the time-dependent quantum state of the triple-qubit system, up to the normalization constant, is $\rho(t)=U_{3}\rho(0)U_{3}^{\dagger}$. 
We need to trace out the other qubits to find the density matrix of Bob, which presents an ADP in the entropy evolution; this can be realized by \begin{equation} \begin{split} \rho_{B}(t)=&\sum_{i,j=0,1}(\bra{i} \otimes I\otimes \bra{j})\rho(t)(\ket{i} \otimes I\otimes \ket{j})\\ =&\frac{1}{N} \begin{pmatrix} \vert C \vert^2+(A-B)^2&0\\ 0&\vert C \vert^2+(A+B)^2 \\ \end{pmatrix} \end{split} \end{equation} where $A=\cos(wt/2\hbar)$, $B=(-2rs/w)\sin(wt/2\hbar)$, $C=(-2is/w)\sin(wt/2\hbar)$ and $N=2(\vert C \vert^2+A^2+B^2)$ is the normalization constant. In the unbroken phase of the $\mathcal{PT}$-symmetry, each element of the quantum state oscillates periodically in time. However, when the symmetry is broken, the energy gap becomes a pure imaginary number and we set $w/2\hbar=ik$, where $k$ is a positive real number. According to the Euler formulas, we can decompose each element of the quantum state into an exponentially growing term and an exponentially decaying term, where the latter can be discarded in the long-time limit, \begin{eqnarray} \lim_{t \to \infty} \begin{cases} \cos^2\frac{wt}{2\hbar}=e^{2kt}/4 \\ \sin^2\frac{wt}{2\hbar}=-e^{2kt}/4\\ \cos\frac{wt}{2\hbar}\sin\frac{wt}{2\hbar}=ie^{2kt}/4\\ \end{cases} \end{eqnarray} Then the eigenvalues of the renormalized density matrix $\rho_{B}^{\textup{ss}}$ are $\lambda_{1,2}^{B}=\frac{r\pm\sqrt{r^2-1}}{2r}$, and we can calculate the analytical expression of the von Neumann entropy of the stable state in the ADP \begin{equation} \begin{split} S(\rho_{B}^{\textup{ss}})=&-\sum_{i=1,2}\lambda_{i}^{B} \log_{2}\lambda_{i}^{B} \\ =& \log_{2}2r-\frac{\sqrt{r^2-1}}{r}\log_{2}(r+\sqrt{r^2-1}) \\ =& \log_{2}\frac{2}{\cos\theta(\sec\theta+\tan\theta)^{\sin\theta}} \\ \end{split} \end{equation} where $\cos\theta=1/r$ and $\theta \in [0,\pi/2)$. 
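The long-time limit above can also be reached numerically from the finite-time populations. A minimal sketch: it sets $\hbar=1$, measures time in units of $1/k$, and assumes $w=2s\sqrt{1-r^{2}}$ (the single-qubit energy gap; this relation between $s$ and $w$ is an assumption, since the text does not fix the two parameters separately):

```python
import numpy as np

def rho_B_diag(kt, r):
    # Broken phase: w/(2*hbar) = i*k; with the assumed gap w = 2*s*sqrt(1-r^2),
    # 2s/w = -1j/sqrt(r^2-1) for r > 1.
    x = -1j / np.sqrt(r**2 - 1)          # 2s/w
    A = np.cosh(kt)                      # cos(wt/2hbar)
    s_ = 1j * np.sinh(kt)                # sin(wt/2hbar)
    B = -r * x * s_                      # (-2rs/w) sin(wt/2hbar), real
    C = -1j * x * s_                     # (-2is/w) sin(wt/2hbar)
    p1 = abs(C)**2 + abs(A - B)**2
    p2 = abs(C)**2 + abs(A + B)**2
    return np.array([p1, p2]) / (p1 + p2)

r = 2.0
lam = rho_B_diag(20.0, r)                # kt = 20 is effectively the t -> inf limit
expected = [(r + np.sqrt(r**2 - 1)) / (2*r), (r - np.sqrt(r**2 - 1)) / (2*r)]
assert np.allclose(np.sort(lam), np.sort(expected))
```

The populations converge exponentially (corrections of order $e^{-2kt}$) to $\lambda^{B}_{1,2}=(r\pm\sqrt{r^{2}-1})/(2r)$.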
Based on the variable substitution, we can rewrite the quantum state as \begin{equation} \begin{split} \rho_{B}^{\textup{ss}}=\frac{I+\sin\theta\cdot \sigma_{z}}{2} \end{split} \end{equation} and the Bloch vector is $\vec{r}_{B}=(0,0,\sin\theta)$. So we can see that the purity of the stable state is parameter-dependent, and the stable state evolves from the maximally mixed state to the pure state $\ket{0}$ when increasing the non-Hermiticity. However, when we turn to the stable quantum state of Alice's qubit, the density matrix has off-diagonal elements, and the stable state in the broken phase is \begin{equation} \begin{split} \rho_{A}^{\textup{ss}}=& \lim_{t\rightarrow\infty} \frac{1}{N} \begin{pmatrix} \vert C \vert^2+(A-B)^2&2BC\\ -2BC&\vert C \vert^2+(A+B)^2 \\ \end{pmatrix} \\ =& \begin{pmatrix} \frac{r+\sqrt{r^2-1}}{2r} &\frac{i}{2r}\\ \frac{-i}{2r}&\frac{r-\sqrt{r^2-1}}{2r} \\ \end{pmatrix}\\ =&\frac{I-\cos\theta\cdot \sigma_{y}+\sin\theta\cdot \sigma_{z}}{2} \end{split} \end{equation} where the damping function is $D(r)=\cos\theta/2=1/2r$ and the norm of the Bloch vector is $\vert \vert \vec{r}_{A} \vert \vert = \sqrt{\cos^2\theta+\sin^2\theta}=1$, which is parameter-independent. The eigenvalues of the stable state of Alice are \begin{equation} \begin{split} \lambda_{1,2}^{A}&=\frac{\rho_{A(11)}^{\textup{ss}}+\rho_{A(22)}^{\textup{ss}}}{2}\pm \\ &~~~\frac{\sqrt{(\rho_{A(11)}^{\textup{ss}}-\rho_{A(22)}^{\textup{ss}})^2+4\rho_{A(12)}^{\textup{ss}}\rho_{A(21)}^{\textup{ss}}}}{2} \\ &=\frac{1\pm\sqrt{(r^2-1)/r^2+1/r^2}}{2}\\ &=0,1 \end{split} \end{equation} As a result, the entropy of the stable state in the NDP remains zero at all times and is parameter-independent. It follows that in the broken phase, the mutual information of the subsystem of Bob and Charlie is $I(B:C,t\to\infty)=S(\rho_B^{\textup{ss}})+S(\rho_C^{\textup{ss}})-S(\rho_{BC}^{\textup{ss}})=2S(\rho_B^{\textup{ss}})$. 
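Combining the relation $I(B:C,t\to\infty)=2S(\rho_B^{\textup{ss}})$ with the initial GHZ value $I(t_0)=1$, the critical point $r_{\textup{MI}}$ of the main text solves $S(\rho_B^{\textup{ss}})=1/2$. A minimal bisection sketch ($S$ is monotonically decreasing for $r>1$; the bracket $[1,10]$ is an arbitrary choice):

```python
import math

def S_B(r):
    # Entropy of Bob's stable state, eigenvalues (r +/- sqrt(r^2 - 1)) / (2r)
    lam = [(r + math.sqrt(r*r - 1)) / (2*r), (r - math.sqrt(r*r - 1)) / (2*r)]
    return -sum(l * math.log2(l) for l in lam if l > 0)

# Alice's stable state is pure (eigenvalues 0 and 1), so S_A = 0 for every r > 1
# and I(B:C, t->inf) = 2*S_B(r); the critical point solves 2*S_B(r) = I(t0) = 1.
lo, hi = 1.0 + 1e-9, 10.0
for _ in range(60):                      # bisection on the decreasing function S_B
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if S_B(mid) > 0.5 else (lo, mid)
r_MI = 0.5 * (lo + hi)
assert abs(r_MI - 1.5978) < 1e-3         # matches r_MI in the main text
```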
So by numerically solving the equation $S(\rho_B^{\textup{ss}})=1/2$, we can determine the critical point $r_{\textup{MI}}$ in the broken phase. \section{Experimental Simulation of the Stable States} \subsection{Initialization} The experiments for simulating the stable states in the triple-qubit system with a local $\mathcal{PT}$-symmetric operator are carried out on a 600 MHz nuclear magnetic resonance platform at room temperature (298 K) with a four-qubit sample, ${}^{13}C$-labeled iodotrifluoroethylene dissolved in d-chloroform. The spectrometer is equipped with a superconducting magnet which creates a strong magnetic field (14.1 T). The sample is placed in the static magnetic field along the $z$-direction, and the internal Hamiltonian under the weak coupling approximation is \begin{equation} \begin{split} H_{int}=-\sum^{4}_{i=1}\pi\nu_{i}\sigma^{i}_{z}+\sum^{4}_{i<j}\frac{\pi}{2}J_{ij}\sigma^{i}_{z}\sigma^{j}_{z} \end{split} \end{equation} where $\nu_{i}$ is the chemical shift and $J_{ij}$ is the J-coupling strength between the $i$th and $j$th nuclei. The experimentally identified parameters of this molecule are shown in Fig. \ref{fig_supp1}. The initialization process of quantum computation in a liquid nuclear magnetic resonance system starts from a thermal equilibrium state obeying the Boltzmann distribution: \begin{equation} \begin{split} \rho_{eq}=\frac{e^{-H_{int}/k_{B}T}}{\textup{tr}(e^{-H_{int}/k_{B}T})} \end{split} \end{equation} where $k_{B}$ is the Boltzmann constant and $T$ is the thermodynamic temperature. Under the conditions that $\| H_{int}/k_{B}T\| \ll 1$ and $J_{kl}\ll \omega_{i}$, the thermal equilibrium state on our platform can be approximated as \begin{equation} \rho_{eq} \approx \frac{1}{2^{4}}(I^{\otimes 4}+\sum_{i}^{4}\frac{\hbar w_{i} \sigma_{z}^{i}}{2k_{B}T}) \label{eq2} \end{equation} where the notation $I$ is the identity matrix and $\sigma_{z}$ is a Pauli matrix. 
To initialize the system, we need to drive the quantum system from the highly mixed state $\rho_{eq}$, which cannot be used as an initial state, to the pseudo-pure state (PPS) \begin{equation} \rho_{\textup{pps}} = \frac{1-\epsilon}{2^4}I^{\otimes 4} + \epsilon\ket{0000}\bra{0000} \end{equation} where $\epsilon\approx10^{-5}$ is the polarization. The first term can be neglected since the identity matrix does not evolve under any unitary propagator and cannot be observed. We prepared the PPS from the thermal equilibrium state with the selective-transition method \cite{pps_lineselect,Yang}, which is realized by unitary operators and field gradient pulses in the $z$-direction (Gz). The unitary operators redistribute the diagonal elements, and the Gz pulse is used to eliminate the undesired coherences except the zero-quantum coherence of the spins. After these processes, the PPS is prepared, and this state serves as the starting point for subsequent computation tasks. \begin{figure} \caption{Molecular structure and parameters of the sample ${}^{13}C$-labeled iodotrifluoroethylene.} \label{fig_supp1} \end{figure} \subsection{Quantum Simulation of $\mathcal{PT}$-Symmetric Operator} To realize the simulation of the non-unitary dynamical process induced by the $\mathcal{PT}$-symmetric Hamiltonian, we encode the non-unitary evolution into a unitary process by adding an ancillary qubit and form a gate-based quantum circuit, which is friendly for experiments. This technique is known as the linear combination of unitaries, a universal subroutine in designing and developing quantum algorithms \cite{dual11}. We first create superposition states on the ancillary system, and then perform controlled operations on the work system. The physical picture is that different unitary operations are implemented simultaneously on the work system but in different subspaces, and the final result is obtained in a specific subspace of the ancillary system according to the practical algorithm design. 
Specifically, suppose that the operator for creating superposition states is $V_{0}=[\cos\phi,-\sin\phi; \sin\phi,\cos\phi ]$ and the non-unitary evolution operator can be decomposed into the form $U_{A}=\cos\phi \cdot V_{1}+\sin\phi \cdot V_{2}$, where \begin{equation} \begin{split} V_{1}=\begin{pmatrix} \cos\phi_{1}&i\sin\phi_{1} \\ i\sin\phi_{1}&\cos\phi_{1} \\ \end{pmatrix}, V_{2}=\begin{pmatrix} \cos\phi_{2}&-i\sin\phi_{2} \\ i\sin\phi_{2}&-\cos\phi_{2} \\ \end{pmatrix} \end{split} \end{equation} Under the unitarity restriction on $V_{i}$ $(i=0,1,2)$, the choice of these operators is not unique. This construction leads to the four equations \begin{eqnarray} \begin{cases} \cos\phi \cdot \cos\phi_{1}=\cos\frac{wt}{2\hbar}\\ \sin\phi \cdot \cos\phi_{2}=\frac{2rs}{w}\sin\frac{wt}{2\hbar}\\ \cos\phi \cdot \sin\phi_{1}=\frac{-2s}{w}\sin\frac{wt}{2\hbar}\\ \sin\phi \cdot \sin\phi_{2}=0\\ \end{cases} \end{eqnarray} By solving these equations, we can determine the angles $\phi$ and $\phi_{1,2}$ as given in the main text. It is worth noting that $\tan\phi_{2}=0$, which leads to $V_{2}=\sigma_{z}$. The single-qubit operator $V_{0}$ and the two-qubit operator $V_{1}$ are parameter-dependent quantum gates, while the other unitary quantum gates do not vary with the parameters of the $\mathcal{PT}$-symmetric Hamiltonian. In the broken phase of the $\mathcal{PT}$-symmetry, the operators can be determined from the experimental parameter setup and decomposed into single-qubit operations and controlled-NOT gates, as shown in Fig. \ref{fig_supp2}. Quantum evolutions according to the quantum circuit we constructed are optimized by gradient ascent pulse engineering \cite{grape1,grape2}. Each shaped pulse is simulated to have fidelity over 99.5\% \cite{fidelity}, is robust to static-field distributions and inhomogeneity, and the durations of the experimental pulses are within 15 ms. 
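The closed forms for $\phi$ and $\phi_{1}$ given in the main text satisfy these constraints up to a common normalization factor $M_{1}/\sqrt{1-r^{2}}$, which the post-selection removes. A minimal numerical check, assuming $\hbar=1$ and $w=2s\sqrt{1-r^{2}}$ (the unbroken-phase energy gap; this relation between $s$ and $w$ is an assumption, as the text does not fix it):

```python
import math

def check(r, s, t):
    # Closed forms from the main text (hbar = 1, unbroken phase r < 1)
    w = 2 * s * math.sqrt(1 - r**2)
    c, sn = math.cos(w*t/2), math.sin(w*t/2)
    M1 = math.sqrt(1 - r**2 * math.cos(w*t))
    M2 = math.sqrt(1 - r**2 * c**2)
    phi  = math.asin(r * sn / M1)
    phi1 = math.asin(-sn / M2)
    norm = M1 / math.sqrt(1 - r**2)   # common factor removed by post-selection
    # phi2 = 0, so sin(phi)*sin(phi2) = 0 holds trivially; check the other three:
    assert math.isclose(math.cos(phi)*math.cos(phi1), c / norm, abs_tol=1e-9)
    assert math.isclose(math.cos(phi)*math.sin(phi1), (-2*s/w)*sn / norm, abs_tol=1e-9)
    assert math.isclose(math.sin(phi), (2*r*s/w)*sn / norm, abs_tol=1e-9)

for params in [(0.3, 1.0, 0.5), (0.5, 1.0, 0.7), (0.8, 2.0, 0.2)]:
    check(*params)
```

The identities follow from $\cos\phi=M_{2}/M_{1}$ and $\cos\phi_{1}=\cos(wt/2)\sqrt{1-r^{2}}/M_{2}$, which hold whenever $\cos(wt/2)\ge 0$.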
\begin{figure} \caption{Quantum gate decomposition for the simulation of the triple-qubit system with a local $\mathcal{PT}$-symmetric operator.} \label{fig_supp2} \end{figure} \begin{table}[htbp] \caption{Decomposition scheme of the quantum algorithm with single-qubit gates and the controlled-NOT gate according to the parameter setting in the experiment. $R_{i}(\alpha)~(i=y,z)$ is a rotation operator along the $i$-axis with angle $\alpha$.} \centering \renewcommand\arraystretch{1.5} \begin{tabular}{cccccc} \hline \hline $A_1$:&$R_{z}(1.5\pi)R_{y}(\phi_{1})$&$A_2$:&$R_{z}(1.5\pi)R_{y}(\pi)$&$V_{0}$:&$R_{y}(0.5\pi)$\\ $B_1$:&$R_{y}(-\phi_{1})R_{z}(-\pi)$&$B_2$:&$R_{y}(-\pi)R_{z}(-1.5\pi)$&P:&$\begin{pmatrix} \begin{smallmatrix} 1&0\\0&-i \end{smallmatrix} \end{pmatrix}$\\ $C_1$:&$R_{z}(-0.5\pi)$ &X:&\multicolumn{3}{c}{$R_{z}(-0.5\pi)R_{y}(\pi)R_{z}(0.5\pi)$}\\ \hline \hline \end{tabular} \label{Tab:table} \end{table} \subsection{Measurement and Results} After the entanglement creation and the $\mathcal{PT}$-symmetric evolution, the quantum measurement is performed on a bulk ensemble of molecules, which means the readout is an ensemble-averaged macroscopic measurement. At the end of the quantum circuit, all experimental data are extracted from the free-induction decay (FID), which is the signal induced by the precessing magnetization of the sample in a surrounding detection coil. The signal is then subjected to a Fourier transformation, and the resulting spectral lines are fitted, yielding a set of measurement data. As the precession frequencies of different spins are distinguishable, they can be individually detected, and all the observations are made on the probe spin ${}^{13}C$ \cite{Yang}. By fitting the ${}^{13}C$ spectrum, the real and imaginary parts of the peaks are extracted, which correspond to $\langle \hat{\sigma}_{1}^{x} \rangle $ and $\langle\hat{\sigma}_{1}^{y}\rangle$, respectively. 
Then we can reconstruct all the density matrix elements in the subspace where the ancillary qubit is $\ket{0}$ to obtain the target stable states of the triple-qubit work system under different experimental parameter setups. We plot the fidelities of different subsystems between the experimental results and the theoretical expectations in Fig. \ref{fig_supp3}, with average fidelities over 0.98. The corresponding density matrices of the stable states, which evolve under the NDP and the ADP respectively, are shown in Fig. \ref{fig_supp4}, and both of them present the quantum-state purification phenomenon with increasing non-Hermiticity. \begin{figure} \caption{Fidelities of subsystems between the experimental stable states and the theoretical expectations under different non-Hermiticity. The average fidelities are labeled by lines with corresponding colors.} \label{fig_supp3} \end{figure} \begin{figure} \caption{The experimentally identified density matrices of the stable states under different degrees of non-Hermiticity. Figures in the first row from (a) to (d) represent the quantum state of Alice in the NDP ($r_1 \rightarrow r_{4}$).} \label{fig_supp4} \end{figure} \end{document}
\begin{document} \title{Weighted homomorphisms on the \\ $p$-analog of the Fourier-Stieltjes algebra\\ induced by piecewise affine maps} \titlerunning{weighted homomorphisms of $B_p(G)$} \author{Mohammad Ali Ahmadpoor\inst{1}\orcidID{0000-0001-6902-1916} \and Marzieh Shams Yousefi \inst{2}\orcidID{0000-0003-0426-708X}} \authorrunning{M. A. Ahmadpoor \& M. Shams Yousefi} \institute{{${}^{1}$ School of Mathematics and Statistics, Carleton University, Ottawa, Canada\\ ${}^{2}$ Department of Pure Mathematics, Faculty of Mathematical Sciences, University of Guilan, Rasht, Iran\\ \email{[email protected]}\\ \email{[email protected]}}} \maketitle \begin{abstract} In this paper, for $p\in(1,\infty)$ we study the $p$-complete boundedness of weighted homomorphisms on the $p$-analog of the Fourier-Stieltjes algebras, $B_p(G)$, based on the $p$-operator space structure defined by the authors. Here, for a locally compact group $G$, the space $B_p(G)$ stands for Runde's definition of the $p$-analog of the Fourier-Stieltjes algebra, and the implemented $p$-operator space structure comes from the duality between $B_p(G)$ and the algebra of universal $p$-pseudofunctions, $UPF_p(G)$. It is established that the homomorphism $\Phi_\alpha:B_p(G)\to B_p(H)$, defined by $\Phi_\alpha(u)=u\circ\alpha$ on $Y$ and zero otherwise, is $p$-completely contractive when the continuous and proper map $\alpha :Y\subseteq H\to G$ is affine, and it is $p$-completely bounded whenever $\alpha$ is a piecewise affine map. Moreover, we assume that $Y$ belongs to the coset ring generated by open and amenable subgroups of $H$. To obtain the result, by utilizing the properties of $QSL_p$-spaces and representations on them, the relation between $B_p(G/N)$ and a closed subalgebra of $B_p(G)$ is shown, where $N$ is a closed normal subgroup of $G$. Additionally, the $p$-complete boundedness of several well-known maps on such algebras is obtained. 
\keywords{Completely bounded homomorphisms\and $p$-analog of the Fourier-Stieltjes algebras\and $QSL_p$-spaces\and Piecewise affine maps} \textbf{MSC2010:} Primary 46L07; Secondary 43A30, 47L10. \end{abstract} \section{Introduction}\label{subsection1.1} Let $G$ be a locally compact group. The Fourier algebra, $A(G)$, and the Fourier-Stieltjes algebra, $B(G)$, on the locally compact group $G$, were introduced by Eymard in 1964 \cite{EYMARD1964}. The general form of special types of maps on the Fourier and Fourier-Stieltjes algebras has been studied extensively. For example, when $G$ is an Abelian locally compact group, $A(G)$ is nothing but $L_1(\widehat{G})$, where $\widehat{G}$ is the Pontrjagin dual group of $G$, and $B(G)$ is isometrically isomorphic to the measure algebra $M(\widehat{G})$. In this case, Cohen in \cite{COHEN1960-1} and \cite{COHEN1960-2} studied homomorphisms from $L_1(G)$ to $M(H)$, for Abelian groups $G$ and $H$, and gave the general form of these maps as maps weighted by a piecewise affine map on the underlying groups.\\ By \cite{BLECHER1992,EFFROSRUAN1991}, we know that $A(G)$ and $B(G)$ are operator spaces, as the predual of a von Neumann algebra and the dual of a $C^*$-algebra, respectively. Amini and the second author studied the relation between the operator and order structures of these algebras via amenability \cite{SHAMSAMINISADY2010}. Ilie in \cite{ILIE2004} and \cite{ILIESPRONK2005} studied the completely bounded homomorphisms from the Fourier to the Fourier-Stieltjes algebras. It is shown that for a continuous piecewise affine map $\alpha:Y\subseteq H\rightarrow G$, the homomorphism $\Phi_\alpha:A(G)\rightarrow B(H)$, defined through \begin{align*} \Phi_\alpha u=\left\{ \begin{array}{ll} u\circ\alpha& \text{on} \: Y\\ 0& \text{o.w.} \end{array}\right. ,\quad u\in A(G), \end{align*} is completely bounded. 
Moreover, in the cases that $\alpha$ is an affine map and a homomorphism, the homomorphism $\Phi_\alpha$ is completely contractive and completely positive, respectively. The Fig\`a-Talamanca-Herz algebras were introduced by Fig\`a-Talamanca for Abelian locally compact groups \cite{FIGATALAMANCA1965}, and were generalized to arbitrary locally compact groups by Herz \cite{HERZ1971}. For $p\in (1,\infty)$, the coefficient functions of the left regular representation of a locally compact group $G$ on $L_{p}(G)$ form the Fig\`a-Talamanca-Herz algebra $A_p(G)$, and we have $A_2(G)=A(G)$. Therefore, the Fig\`a-Talamanca-Herz algebras can be considered as the $p$-analog of the Fourier algebras. Daws in \cite{DAWS2010} introduced a $p$-operator space structure, with an extensive application to $A_p(G)$, which generalizes the natural and non-trivial operator space structure of $A(G)$.\\ Oztop and Spronk in \cite{OZTOPSPRONK2012}, and Ilie in \cite{ILIE2013}, studied the $p$-completely bounded homomorphisms on the Fig\`a-Talamanca-Herz algebras, using this $p$-operator space structure. In \cite{ILIE2013} it is shown that the map $\Phi_\alpha :A_p(G)\rightarrow A_p(H)$, defined via \begin{align*} \Phi_\alpha u= \left\{ \begin{array}{ll} u\circ \alpha & \text{on} \: Y\\ 0 & \text{o.w} \end{array}\right.,\quad u\in A_p(G), \end{align*} is a $p$-completely (bounded) contractive homomorphism for a continuous proper (piecewise) affine map $\alpha : Y\subseteq H\rightarrow G $ in the case that the locally compact group $H$ is amenable. Runde in \cite{RUNDE2005} introduced a $p$-analog of the Fourier-Stieltjes algebras, $B_p(G)$, using the theory of $ QSL_p $-spaces and representations on these spaces. Additionally, the $p$-operator space structure of $B_p(G)$ was fully described by the authors in \cite{AHSH2021-I}, where it is shown that $B_p(G)$ is a $p$-operator space, as the dual space of the algebra of universal $p$-pseudofunctions $UPF_p(G)$.
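For orientation, we record the standard computation identifying a single coefficient function of $\lambda_p$ as a convolution: for $\xi\in L_p(G)$ and $\eta\in L_{p'}(G)\cong L_p(G)^*$, writing $\check{\xi}(y)=\xi(y^{-1})$,
\begin{align*}
u(x)=\langle\lambda_{p}(x)\xi,\eta\rangle=\int_G \xi(x^{-1}y)\eta(y)\,dy=\int_G \eta(y)\check{\xi}(y^{-1}x)\,dy=(\eta\ast\check{\xi})(x),
\end{align*}
so the elements of $A_p(G)$ are precisely the $\ell^1$-sums of such convolutions, and for $p=2$ this recovers the classical description of $A(G)$.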
The second author of this paper studied the $p$-analog of the Fourier-Stieltjes algebras on inverse semigroups in \cite{shams}. In this paper, for a continuous proper piecewise affine map $\alpha : Y\subseteq H\rightarrow G $, we study the weighted map $\Phi_\alpha : B_p(G)\rightarrow B_p(H)$ which is defined by \begin{align}\label{eq1} \Phi_\alpha u= \left\{ \begin{array}{ll} u\circ \alpha & \text{on} \: Y\\ 0 & \text{o.w} \end{array}\right.,\quad u\in B_p(G). \end{align} We will show that when $\alpha$ is an affine map, $\Phi_\alpha$ is a $p$-complete contraction, and in the case that $\alpha$ is a piecewise affine map, it is a $p$-completely bounded homomorphism. To this aim, we impose an amenability assumption on the open subgroups of $H$. Our approach to the $p$-operator space structure on the $p$-analog of the Fourier-Stieltjes algebra is to use the structure that this space inherits from its predual. The paper is organized as follows. In Section \ref{SECTIONPRELIMINARIES}, we give the required definitions and theorems about the $p$-analog of the Fourier-Stieltjes algebras and representations on $QSL_p$-spaces, and then list some crucial, previously obtained properties of the algebra $B_p(G)$. As a consequence, in Section \ref{SECTIONSPECIALMAPS} we will establish the $p$-complete boundedness of some well-known operators on the $p$-analog of the Fourier-Stieltjes algebras (Theorem \ref{IMPORTANTMAPPING}). The obtained results will be applied in the final section, Section \ref{SECTIONINDUCEDHOMO}, to generalize Ilie's results on homomorphisms of the Fig\`a-Talamanca-Herz algebras in \cite{ILIE2013}. \section{Preliminaries}\label{SECTIONPRELIMINARIES} Herewith, we divide the prerequisites into three parts. First, we give general notions and theorems concerning representations and specific algebras on locally compact groups, together with some useful tools to deal with our problem.
Next, we focus on some properties of the $p$-analog of the Fourier-Stieltjes algebra and its $p$-operator space structure. Finally, we introduce some special maps regarding our main result. Throughout this paper, $G$ and $H$ are locally compact groups, and for $p\in (1,\infty)$, the number $p'$ is its conjugate exponent, i.e. $1/p+1/p'=1$. We commence with the essential description of $QSL_p$-spaces, and representations of groups on such spaces. For more information one can see \cite{RUNDE2005}. \begin{definition}\label{def1} A representation of a locally compact group $ G $ is a pair $ (\pi , E) $, where $ E $ is a Banach space and $ \pi $ is a group homomorphism that maps each element $x\in G$ to an invertible isometric operator $\pi(x)$ on $E$. This homomorphism is continuous with respect to the given topology on $ G $ and the strong operator topology on $ \mathcal{B}(E) $. \end{definition} \begin{remark} Each representation $ (\pi , E) $ can be lifted to a representation of the group algebra $L_1(G)$ on $ E $. Denoting this homomorphism with the same symbol $ \pi $, it is defined through \begin{align}\label{11} &\pi(f)=\int f(x)\pi (x)dx,\ f\in L_1(G),\\ &\langle \pi(f)\xi, \eta\rangle =\int f(x)\langle \pi (x)\xi, \eta\rangle dx,\quad \xi\in E,\:\eta\in E^*,\nonumber \end{align} where the integral \eqref{11} converges with respect to the strong operator topology. \end{remark} \begin{definition} For two representations $ (\pi , E)$ and $(\rho , F) $ of the locally compact group $G$, we have the following terminologies.
\begin{enumerate} \item $ (\pi , E)$ and $ (\rho , F) $ are called equivalent, if there exists an invertible isometric map $T : E\rightarrow F$ for which the following diagram commutes for each $x\in G$, \begin{displaymath} \xymatrix{ E \ar[r]^{\pi(x)} & E \ar[d]^{T\quad .} \\ F \ar[r]_{\rho(x)} \ar[u]^{T^{-1}} & F } \end{displaymath} \item The representation $ (\pi , E)$ has a subrepresentation $ (\rho , F) $, if $F$ is a closed subspace of $E$, and for each $x\in G$, the operator $\rho(x)$ is the restriction of $\pi(x)$ to the subspace $F$. \item We say that $ (\pi , E)$ contains $ (\rho , F) $ and write $(\rho , F) \subseteq (\pi , E)$, if $(\rho , F)$ is equivalent to a subrepresentation of $ (\pi , E) $. \end{enumerate} \end{definition} \begin{definition} \begin{enumerate} \item A Banach space is called an $L_p$-space if it is of the form $L_p(X)$ for some measure space $X$. \item A Banach space is called a $ QSL_p $-space if it is isometrically isomorphic to a quotient of a subspace of an $ L_p $-space. \end{enumerate} \end{definition} We denote by $ \text{Rep}_p(G) $ the collection of all (equivalence classes of) representations of $G $ on a $ QSL_p $-space. \begin{definition} A representation of a Banach algebra $\mathcal{A}$ is a pair $ (\pi , E)$, where $E$ is a Banach space, and $\pi $ is a contractive algebra homomorphism from $\mathcal{A}$ to $\mathcal{B}(E)$. We call $ (\pi , E)$ isometric if $\pi $ is an isometry, and essential if the linear span of $\{\pi(a)\xi \: : \ a \in \mathcal{A},\: \xi\in E\} $ is dense in $E$. \end{definition} \begin{remark}\label{rem11} For a locally compact group $G$ there exists a one-to-one correspondence between representations of $G$ and essential representations of $L_1(G)$.
\end{remark} \begin{definition} \begin{enumerate} \item If for a representation $ (\pi , E)\in \text{Rep}_p(G) $ there is an element $\xi_0\in E$ such that \begin{align*} E=\overline{\{\pi (f)\xi_0\ : f\in L_1(G)\}{}}^{\|\cdot\|_E}, \end{align*} then $(\pi,E)$ is said to be cyclic, and the set of all such cyclic representations of the group $G$ on $QSL_p$-spaces is denoted by $\text{Cyc}_p(G)$. \item If a representation $ (\pi , E)\in \text{Rep}_p(G)$ contains (up to an isometry) each cyclic representation, then it is called $p$-universal. \end{enumerate} \end{definition} \begin{definition} We say that the function $u:G\rightarrow\mathbb{C}$ is a coefficient function of a representation $(\pi,E)$, if there exist $\xi\in E$ and $\eta\in E^*$ such that \begin{align*} u(x)=\langle\pi(x)\xi,\eta\rangle,\quad x\in G. \end{align*} \end{definition} The left regular representation of $G$ on $L_{p}(G)$ is denoted by $ \lambda_{p}$ and is defined as follows \begin{align*} &\lambda_{p}: G\rightarrow \mathcal{B}(L_{p}(G)),\quad\lambda_{p}(x)\xi(y)=\xi(x^{-1}y),\quad \xi\in L_{p}(G),\:x,y\in G, \end{align*} and its coefficient functions provide a description of the Fig\`a-Talamanca-Herz algebras. \begin{definition} \begin{enumerate} \item The set of all linear combinations of the coefficient functions of the representation $(\pi,E)\in\text{Rep}_p(G)$, namely elements of the form \begin{align}\label{TTT1111} u(x)=\sum_{n=1}^\infty\langle\pi(x)\xi_n,\eta_n\rangle,\quad x\in G,\; (\xi_n)_{n=1}^\infty\subseteq E,\quad (\eta_n)_{n=1}^\infty\subseteq E^* \end{align} where \begin{align}\label{TTT222} \sum_{n=1}^\infty\|\xi_n\|\|\eta_n\|<\infty, \end{align} together with the norm \begin{align}\label{TTT333} \|u\|=\inf\bigg\{\sum_{n=1}^\infty\|\xi_n\|\|\eta_n\|\ :\ u\;\text{as}\;\eqref{TTT1111}\;\text{with}\;\eqref{TTT222}\bigg\} \end{align} is denoted by $A_{p,\pi}$, and is called the $p$-analog of the $\pi$-Fourier space.
\item In particular, if $(\pi,E)=(\lambda_p,L_p(G))$ then it is simply denoted by $A_p(G)$. \item More generally, if $(\pi,E)$ is a $p$-universal representation, then this space is denoted by $B_p(G)$. \end{enumerate} \end{definition} \begin{remark}\label{REMARKRUNDERUNDE} \begin{enumerate} \item By \cite[Theorem 4.7]{RUNDE2005}, the space $B_p(G)$, equipped with the norm defined as above and pointwise operations, is a commutative unital Banach algebra, and it is called the $p$-analog of the Fourier-Stieltjes algebra. Additionally, it has been shown that the space $A_p(G)$ is a Banach algebra, which is called the Fig\`a-Talamanca-Herz algebra. \item The $p$-analog of the Fourier-Stieltjes algebra has been studied, for example in \cite{COWLING1979}, \cite{FORREST1994}, \cite{MIAO1996} and \cite{PIER1984}, as the multiplier algebra of the Fig\`a-Talamanca-Herz algebra. In this paper, we follow the construction of Runde in definition and notation (see \cite{RUNDE2005}), in which we have swapped the indices $p$ and $p'$. \item By \cite[Corollary 5.3]{RUNDE2005}, if we denote the multiplier algebra of $A_p(G)$ by $\mathcal{M}(A_p(G))$, we have the following contractive embeddings \begin{align*} A_p(G)\subseteq B_p(G)\subseteq \mathcal{M}(A_p(G)). \end{align*} \item One may express $u$ in \eqref{TTT1111} as \begin{align}\label{TTT444} u(x)=\sum_{n=1}^\infty\langle\pi_n(x)\xi_n,\eta_n\rangle,\quad x\in G,\ \xi_n\in E_n,\ \eta_n\in E_n^*, \end{align} for which \eqref{TTT222} holds and where $(\pi_n,E_n)\subseteq(\pi,E)$, and $(\pi_n,E_n)\in\text{Cyc}_p(G)$, for all $n\in \mathbb{N}$.
In this case, the norm \eqref{TTT333} can be replaced by \begin{align*} \|u\|=\inf\bigg\{\sum_{n=1}^\infty\|\xi_n\|\|\eta_n\|\ :\ u\;\text{as}\;\eqref{TTT444}\;\text{with}\;\eqref{TTT222}\bigg\} \end{align*} \end{enumerate} \end{remark} \begin{definition} \begin{enumerate} \item For a representation $(\pi,E)\in\text{Rep}_p(G)$, the closure of the space $\{\pi(f)\ :\ f\in L_1(G)\}$ with respect to the norm $\|\cdot\|_{\mathcal{B}(E)}$ is denoted by $PF_{p,\pi}(G)$ and it is called the algebra of $p$-pseudofunctions associated with $(\pi,E)$. \item If $(\pi,E)$ is a $p$-universal representation, then it is denoted by $UPF_p(G)$ and is called the algebra of universal $ p $-pseudofunctions. \item If $(\pi,E)=(\lambda_p,L_p(G))$, then it is simply denoted by $PF_{p}(G)$. \end{enumerate} \end{definition} \iffalse \begin{definition} Let $ (\pi , E)\in \text{Rep}_p(G) $. \begin{enumerate} \item For each $f\in L_1(G)$, let $\| f\|_{\pi}:=\|\pi(f)\|_{\mathcal{B}(E)}$, then $\|\cdot\|_{\pi}$ defines an algebra seminorm on $L_1(G)$. \item By $ PF_{p,\pi}(G) $, we mean the algebra of $ p $-pseudofunctions associated with $ (\pi , E) $, which is the closure of $ \pi( L_1(G)) $ in $ \mathcal{B}(E) $. \item If $ (\pi , E) = (\lambda_p,L_p(G))$, we denote $PF_{p,\lambda_p}(G)$ by $PF_p(G)$. \item If $ (\pi , E) $ is $ p $-universal, we denote $ PF_{p,\pi}(G) $ by $ UPF_{p}(G)$, and call it the algebra of universal $ p $-pseudofunctions. \end{enumerate} \end{definition} \fi \iffalse \begin{remark} \begin{enumerate} \item For $ p = 2 $, the algebra $ PF_{p}(G) $ is the reduced group $ C^* $-algebra, and $ UPF_{p}(G)$ is the full group $ C^* $-algebra of $ G $. \item If $ (\rho , F)\in \text{Rep}_p(G) $ is such that $ (\pi , E) $ contains every cyclic subrepresentation of $ (\rho , F)$, then $\|\cdot\|_{\rho}\leq\|\cdot\|_{\pi}$ holds. In particular, the definition of $ UPF_{p}(G)$ is independent of a particular $ p $-universal representation. 
\item With $\langle\cdot,\cdot\rangle$ denoting $L_1(G)-L_\infty(G)$ duality, and with $(\pi,E)$ a $p$-universal representation of $G$, we have \begin{align*} \| f\|_\pi=\sup\{|\langle f,g\rangle|\ :\ g\in B_p(G),\ \|g\|_{B_p(G)}\leq 1\},\quad f\in L_1(G). \end{align*} \end{enumerate} \end{remark} \fi The next lemma states that $ B_p(G) $ is a dual space. \begin{lemma}{\cite[Lemma 6.5]{RUNDE2005}}\label{RUNDEDUALITY} Let $ (\pi , E)\in \text{Rep}_{p}(G)$. Then, for each $ \phi\in PF_{p,\pi}(G)^*$, there is a unique $g\in B_p(G) $, with $\|g\|_{B_p(G)}\leq \|\phi\| $ such that \begin{equation}\label{duality} \langle\pi (f),\phi\rangle=\int_G f(x)g(x)dx,\qquad f\in L_1(G). \end{equation} Moreover, if $ (\pi , E) $ is $p$-universal, we have $\|g\|_{B_p(G)}= \|\phi\| $. \end{lemma} \begin{definition} For a representation $(\pi, E)\in\text{Rep}_p(G)$ the dual space $PF_{p,\pi}(G)^*$ will be denoted by $B_{p,\pi}$, following the tradition initiated in \cite{ARSAC1976}, and it is called the $p$-analog of the $\pi$-Fourier-Stieltjes algebra. \end{definition} \begin{remark} \begin{enumerate} \item The duality between $B_{p,\pi}$ and $PF_{p,\pi}(G)$ can be stated via the relation below, \begin{align*} \langle \pi(f),u\rangle=\int_G u(x)f(x)dx,\quad f\in L_1(G),\ u\in B_{p,\pi}. \end{align*} \item On top of that, \begin{align*} &\| u\|=\sup_{\|f\|_\pi\leq 1}|\langle \pi(f), u\rangle|=\sup_{\|f\|_\pi\leq 1}|\int_G u(x)f(x)dx|,\quad u\in B_{p,\pi},\\ &\| f\|_\pi=\sup_{\|u\|\leq 1}|\langle \pi(f), u\rangle|=\sup_{\|u\|\leq 1}|\int_G u(x)f(x)dx|,\quad f\in L_1(G).\\ \end{align*} \item Additionally, the customary notation for $PF_{p,\lambda_p}(G)^*$ is $PF_{p}(G)^*$, instead of $B_{p,\lambda_p}$. For the procedure of obtaining the space $B_{p,\pi}$ one may refer to \cite[Subsection 3.2]{AHSH2021-II}. \end{enumerate} \end{remark} Now, we state a result from \cite{RUNDE2005} adapted to our notation.
\begin{theorem}\label{THEOREMRUNDE2005} \begin{enumerate} \item For a representation $(\pi,E)\in\text{Rep}_p(G)$, the inclusion $B_{p,\pi}\subseteq B_p(G)$ is contractive, and is an isometric isomorphism whenever $(\pi,E)$ is a $p$-universal representation. \item We have the following contractive inclusions \begin{align*} PF_{p}(G)^*=B_{p,\lambda_p}\subseteq B_p(G)\subseteq \mathcal{M}(A_p(G)), \end{align*} and all inclusions become equalities in the case that $G$ is amenable. \end{enumerate} \begin{proof} See \cite[Theorem 6.6 and Theorem 6.7]{RUNDE2005}. \end{proof} \end{theorem} A concrete $p$-operator space is a closed subspace of $\mathcal{B}(E)$, for some $QSL_p$-space $E$. In this case, for each $n\in\mathbb{N}$, one can define a norm $\|\cdot\|_n$ on $\mathbb{M}_n(X)=\mathbb{M}_n\otimes X$ by identifying $ \mathbb{M}_n(X) $ with a subspace of $ \mathcal{B}(l_p^n\otimes_p E) $. So, we have the family of norms $\Big(\|\cdot\|_n\Big)_{n\in\mathbb{N}}$ satisfying: \begin{enumerate} \item[$\mathcal{D}_\infty:$] For $u\in \mathbb{M}_n(X)$ and $v\in \mathbb{M}_m(X)$, we have that $ \|u\oplus v\|_{n+m}=\max\{\|u\|_n,\|v\|_m\} $. Here $u\oplus v\in \mathbb{M}_{n+m}(X)$ has block representation $\begin{pmatrix} u & 0 \\ 0 & v \end{pmatrix}. $ \item[$\mathcal{M}_p:$] For every $u\in \mathbb{M}_m(X)$ and $\alpha\in \mathbb{M}_{n,m}$, $\beta\in \mathbb{M}_{m,n}$, we have that $$ \|\alpha u\beta\|_n\leq\|\alpha\|_{\mathcal{B}(l^m_p,l^n_p)}\|u\|_m\|\beta\|_{\mathcal{B}(l^n_p,l^m_p)}. $$ \end{enumerate} Additionally, an abstract $p$-operator space is a Banach space $X$ equipped with a family of norms $(\|\cdot\|_n)_{n\in\mathbb{N}}$ on $\mathbb{M}_n(X)$ satisfying the two axioms above.
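As a quick consistency check of the two axioms (our remark; the argument is standard), note that $\mathcal{M}_p$ alone already forces one half of $\mathcal{D}_\infty$: taking the norm-one scalar matrices $\alpha=\begin{pmatrix} I_n & 0\end{pmatrix}\in\mathbb{M}_{n,n+m}$ and $\beta=\begin{pmatrix} I_n \\ 0\end{pmatrix}\in\mathbb{M}_{n+m,n}$, we have $\alpha(u\oplus v)\beta=u$, whence
\begin{align*}
\|u\|_{n}=\|\alpha(u\oplus v)\beta\|_{n}\leq\|\alpha\|_{\mathcal{B}(l_p^{n+m},l_p^{n})}\|u\oplus v\|_{n+m}\|\beta\|_{\mathcal{B}(l_p^{n},l_p^{n+m})}=\|u\oplus v\|_{n+m},
\end{align*}
and similarly $\|v\|_{m}\leq\|u\oplus v\|_{n+m}$; axiom $\mathcal{D}_\infty$ asserts that the resulting lower bound $\max\{\|u\|_n,\|v\|_m\}$ is attained exactly.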
\begin{definition} A linear operator $\Psi :X\rightarrow Y$ between two $p$-operator spaces is called $p$-completely bounded, if $\|\Psi\|_{\text{p-cb}}=\sup_{n\in\mathbb{N}}\|\Psi^{(n)}\|<\infty$, and $p$-completely contractive if $\|\Psi\|_{\text{p-cb}}=\sup_{n\in\mathbb{N}}\|\Psi^{(n)}\|\leq 1$, where $\Psi^{(n)} :\mathbb{M}_n(X)\rightarrow\mathbb{M}_n(Y)$ is defined in the natural way. \end{definition} \iffalse \begin{theorem}\cite[Theorem 4.3]{DAWS2010}\label{DAWSTHEOREM} Let $X$ be a $p$-operator space. There exists a $p$-complete isometry $\varphi : X^*\rightarrow\mathcal{B}(l_p(I))$ for some index set $I$. \end{theorem} \fi \begin{lemma}\cite[Lemma 4.5]{DAWS2010}\label{DAWSLEMMA} If $ \Psi:X\rightarrow Y $ is a $p$-completely bounded map between two $p$-operator spaces $X$ and $Y$, then $\Psi^* :Y^*\rightarrow X^*$ is $p$-completely bounded, with $\|\Psi^*\|_{\text{p-cb}}\leq \|\Psi\|_{\text{p-cb}}$. \end{lemma} \begin{remark} \begin{enumerate} \item It should be noticed that the converse of Lemma \ref{DAWSLEMMA} is not necessarily true, unless $X$ is a closed subspace of $\mathcal{B}(E)$, for some $L_p$-space $E$. \item In comparison with \cite{ILIE2013}, because of the above explanation, a major difference in our work is that we need to study the predual maps of some crucial $p$-completely bounded maps (see Theorem \ref{IMPORTANTMAPPING}), instead of their dual maps, as is done in the classical theory. \end{enumerate} \end{remark} Since the main purpose of this paper is to study the $p$-complete boundedness of operators on the $p$-analog of the Fourier-Stieltjes algebras, we need to specify the $p$-operator space structure on these algebras. We briefly explain this structure in the following. For a representation $(\pi,E)\in\text{Rep}_p(G)$, in \cite[Proposition 3.11]{AHSH2021-I}, it is indicated that the algebra $PF_{p,\pi}(G)$ is $p$-completely isomorphic to a closed subspace of $\mathcal{B}(\mathcal{E})$, for some $QSL_p$-space $\mathcal{E}$.
Indeed, the space $\mathcal{E}$ has been constructed through the $l_p$-direct sum of $QSL_p$-spaces associated with cyclic subrepresentations of $(\pi,E)$ in a particular process. As a consequence, it has been obtained that for a representation $(\pi,E)\in\text{Rep}_p(G)$, the algebra of $p$-pseudofunctions $PF_{p,\pi}(G)$ is a $p$-operator space \cite[Theorem 3.12]{AHSH2021-I}. Moreover, a $p$-operator space structure for the algebra $B_p(G)$ is derived from its predual $UPF_p(G)$. In fact, we have the following sequence of results from \cite{AHSH2021-I} in this regard. \begin{theorem}{\cite[Theorem 4.5]{AHSH2021-I}}\label{TH45} The algebra of universal $p$-pseudofunctions $UPF_p(G)$ has a non-trivial $p$-operator space structure. \end{theorem} \begin{theorem}\cite[Theorem 4.7]{AHSH2021-I}\label{TH47} For $p\in (1,\infty)$, the Banach algebra $B_p(G)$ is a $p$-operator space. \end{theorem} The next proposition is the outcome of all the material mentioned before. \begin{proposition}\cite[Proposition 4.8]{AHSH2021-I}\label{P48} For a locally compact group $G$, and a number $p\in (1,\infty)$, the identification $B_p(G)=UPF_p(G)^*$ is a $p$-completely isometric isomorphism. \end{proposition} In Section \ref{SECTIONINDUCEDHOMO}, we will study the homomorphisms on the $p$-analog of the Fourier-Stieltjes algebras induced by a continuous map $\alpha :Y\subseteq H\rightarrow G$, in the cases that $\alpha$ is a homomorphism, an affine map, or a piecewise affine map, and $Y$ is in the coset ring of $H$. So, we give some preliminaries here.\\ For a locally compact topological group $H$, let $\Omega_0(H)$ denote the ring of subsets generated by the open cosets of $H$. By \cite{ILIE2013} we have \begin{align}\label{OPENCOSETRINGS} \Omega_0(H)=\left\{Y\backslash\cup_{i=1}^nY_i \: : \begin{array}{ll} &Y\:\text{is an open coset of }\: H,\\ & Y_1,\ldots ,Y_n\:\text{open subcosets of infinite index in}\: Y\\ \end{array}\right\}.
\end{align} Moreover, for a set $Y\subseteq H$, by $\text{Aff}(Y)$ we mean the smallest coset containing $Y$, and if $Y=Y_0\backslash\cup_{i=1}^nY_i\in\Omega_0(H)$, then $\text{Aff}(Y)=Y_0$.\\ Similarly, let us denote by $\Omega_{\text{am-}0}(H)$ the ring generated by open cosets of open amenable subgroups of $H$, that is, the ring of subsets of the form $Y\backslash\cup_{i=1}^nY_i$, where $Y$ is an open coset of an open amenable subgroup of $H$ and $Y_i$ is an open subcoset of infinite index in $Y$ (which has to be an open coset of an open amenable subgroup), for $i=1,\ldots,n$. \begin{definition}\label{DEFPIECEWISE} Let $\alpha : Y\subseteq H\rightarrow G$ be a map. \begin{enumerate} \item The map $\alpha$ is called an affine map on an open coset $Y$ of an open subgroup $H_0$, if \begin{equation*} \alpha(xy^{-1}z)=\alpha(x)\alpha(y)^{-1}\alpha(z),\qquad x,y,z\in Y. \end{equation*} \item The map $\alpha$ is called a piecewise affine map if \begin{enumerate} \item there are pairwise disjoint $ Y_i\in\Omega_0(H)$, for $ i=1,\ldots , n $, such that $Y=\cup_{i=1}^nY_i$, \item there are affine maps $\displaystyle{\alpha_i : \text{Aff}(Y_i)\subseteq H\rightarrow G}$, for $ i=1,\ldots , n $, such that \begin{equation*} \alpha |_{Y_i}=\alpha_i |_{Y_i}. \end{equation*} \end{enumerate} \end{enumerate} \end{definition} \begin{definition} If $X$ and $Y$ are locally compact spaces, then a map $\alpha :Y\rightarrow X$ is called proper, if $\alpha^{-1}(K)$ is a compact subset of $Y$, for every compact subset $K$ of $ X $. \end{definition} \begin{proposition}\cite[Proposition 4]{DUNKLRAMIREZ1971}\label{ramirez} Let $\alpha : H\rightarrow G$ be a continuous group homomorphism. Then $\alpha$ is proper if and only if the bijective homomorphism $\tilde{\alpha}: H/{\ker\alpha}\rightarrow \alpha(H)=G_0$ is a topological group isomorphism and $\ker\alpha$ is compact.
\end{proposition} \begin{remark}\label{AFFFIINEREMMM} \begin{enumerate} \item Proposition \ref{ramirez} implies that every continuous proper homomorphism is automatically a closed map. Therefore, $\alpha(H)$ is a closed subgroup of $G$. Additionally, $\ker\alpha$ is a compact normal subgroup of $H$. \item It is well-known that $\tilde{\alpha}$ is a group isomorphism, if and only if $\alpha$ is an open homomorphism into $\alpha(H)$, with the relative topology. \item\label{affine-remark}{\cite[Remark 2.2]{ILIE2004}} If $Y=h_0H_0$ is an open coset of an open subgroup $H_0\subseteq H$, and $\alpha : Y\subseteq H\rightarrow G$ is an affine map, then there exists a group homomorphism $ \beta $ associated to $\alpha$ such that \begin{align}\label{affine-homomorphism} &\beta : H_0\subseteq H\rightarrow G,\quad\beta (h)= \alpha(h_0)^{-1}\alpha(h_0h),\quad h\in H_0. \end{align} \item \label{HOMAFFPROPER} It is clear that $\alpha$ is a proper affine map if and only if $\beta$ is a proper homomorphism. \item\cite[Lemma 8]{ILIE2013}\label{LEMMA8888} Let $Y\in\Omega_0(H)$, and let $\alpha:\text{Aff}(Y)\rightarrow G$ be an affine map such that $\alpha|_{Y}$ is proper; then $\alpha$ is proper. \end{enumerate} \end{remark} In the next stage of preparation for our results, we bring some facts about the extension of a function $u\in B_p(G_0)$ to a function in $B_p(G)$, where $G_0\subseteq G$ is an open subgroup of the locally compact group $G$. We refer the interested reader to \cite[Subsection 3.3]{AHSH2021-II} for a detailed description. Let $G_0\subseteq G$ be any subset, and let $u:G_0\rightarrow \mathbb{C}$ be a function. Let $u^\circ$ denote the extension of $u$ to $G$ by setting it to zero outside of $G_0$, i.e. \begin{align*} u^\circ=\left\{ \begin{array}{ll} u&\text{on}\; G_0\\ 0&\text{o.w.} \end{array}\right. . \end{align*} The following lemma is of particular importance in the sequel. \begin{lemma}{\cite[Lemma 3]{AHSH2021-II}}\label{LEMMARESTRICTIONMAP} Let $(\pi,E)\in\text{Rep}_p(G)$.
Then the restriction of $\pi$ to the open subgroup $G_0$, which is denoted by $(\pi_{G_0},E)$, belongs to $\text{Rep}_p(G_0)$. Moreover, for each $f\in L_1(G_0)$ and each $g\in L_1(G)$, we have the following relations \begin{align}\label{w0} \pi_{G_0}(f)=\pi(f^\circ),\quad\text{and}\quad \pi_{G_0}(g|_{G_0})=\pi(g\chi_{G_0}). \end{align} \end{lemma} The next proposition is the main building block of the extension problem. \begin{proposition}\cite[Proposition 4]{AHSH2021-II}\label{PROPEXREC} Let $G$ be a locally compact group and $G_0$ be its open subgroup, and let $(\pi,E)\in \text{Rep}_p(G)$. Then the following statements hold. \begin{enumerate} \item\label{PROPEXREC1} The map $S_{\pi_{G_0}}:PF_{p,\pi_{G_0}}(G_0)\rightarrow PF_{p,\pi}(G)$ defined via $S_{\pi_{G_0}}(\pi_{G_0}(f))=\pi(f^\circ)$, for $f \in L_1(G_0)$, is an isometric homomorphism. In fact, we have the following isometric identification \begin{align*} PF_{p,\pi_{G_0}}(G_0) =\overline{\{\pi(f)\ :\ f\in L_1(G),\;\text{supp}(f)\subseteq G_0 \}{}}^{\|\cdot\|_{\mathcal{B}(E)}}\subseteq PF_{p,\pi}(G). \end{align*} \item\label{PROPEXREC2} The linear restriction mapping $R_\pi : B_{p,\pi}\rightarrow B_{p,\pi_{G_0}}$, which is defined for $u\in B_{p,\pi}$ as $R_\pi(u) =u|_{G_0}$, is the dual map of $S_{\pi_{G_0}}$, and is a quotient map. \item\label{PROPEXREC2.5} The extension map $E_\pi: B_{p,\pi_{G_0}}\rightarrow B_{p,\pi}$, defined via $E_\pi(u)=u^\circ$, is an isometric map. \item\label{PROPEXREC3} The restriction mapping $R : B_p(G)\rightarrow B_p(G_0)$ is a contraction. \item\label{PROPEXREC4} When $(\pi,E)$ is also a $p$-universal representation, we have the following contractive inclusions \begin{align*} PF_p(G_0)^*\subseteq B_{p,\pi_{G_0}}\subseteq B_p(G_0)\subseteq\mathcal{M}(A_p(G_0)). \end{align*} Under the assumption that $G_0$ is amenable, we have the isometric identification below \begin{align*} PF_p(G_0)^* = B_{p,\pi_{G_0}}= B_p(G_0)=\mathcal{M}(A_p(G_0)).
\end{align*} \end{enumerate} \end{proposition} The next theorem plays a key role in the study of the weighted homomorphisms \eqref{eq1}; it is stated as Proposition 5 in \cite{AHSH2021-II}. \begin{theorem}[Extension Theorem]\label{PROPEXTENSION} Let $G$ be a locally compact group and $G_0$ be its open subgroup. Then \begin{enumerate} \item\label{PROPEXTENSION1} the extension mapping $E_{MM} : \mathcal{M}(A_p(G_0))\rightarrow \mathcal{M}(A_p(G))$, defined for $u \in \mathcal{M}(A_p(G_0))$ via $E_{MM} (u) = u^\circ$, is an isometric map. \item\label{PROPEXTENSION2} for every $u\in B_p(G_0)$, we have $u^\circ\in \mathcal{M}(A_p(G))$, and the map $E_{BM} : B_p(G_0)\rightarrow \mathcal{M}(A_p(G))$, with $u\mapsto u^\circ$, is a contraction. \item\label{PROPEXTENSION3} if $G_0$ is also an amenable subgroup, then for every $u \in B_p(G_0)$, we have $u^\circ \in B_p(G)$, and the associated extension map $E_{BB} : B_p(G_0)\rightarrow B_p(G)$ is an isometric one. \end{enumerate} \end{theorem} \begin{remark}\label{REMARKEXTENSION} It is worthwhile to notice that when the subgroup $G_0\subseteq G$ is an amenable open one, a $p$-universal representation of $G_0$ can be induced by restriction of a $p$-universal representation of $G$ to $G_0$. \end{remark} The next and final theorem of this section is a consequence of Theorem \ref{PROPEXTENSION} and can be found in \cite{AHSH2021-II}. \begin{theorem}\label{THEOREMCONCLUSION} Let $G$ and $H$ be locally compact groups, and $\alpha :Y=\cup_{k=1}^nY_k\subseteq H\rightarrow G$ be a continuous piecewise affine map with disjoint $Y_k\in\Omega_{\text{am-}0}(H)$, for $k=1,\ldots,n$. Then $u\in B_p(G)$ implies that $(u\circ \alpha)^\circ\in B_p(H)$, and consequently, the weighted homomorphism $\Phi_\alpha:B_p(G)\rightarrow B_p(H)$ is a well-defined bounded homomorphism.
\end{theorem} \section{Special $p$-completely bounded operators on $B_p(G)$}\label{SECTIONSPECIALMAPS} The essential features of the $p$-analog of the Fourier-Stieltjes algebra and the algebra of $p$-pseudofunctions have been provided so far, and this section is devoted to developing the critical tools for dealing with operators on $B_p(G)$. The upcoming theorem reveals the relation between $p$-universal representations of a locally compact group $G$ and the quotient group $G/N$, where $N$ is a closed normal subgroup. Recall the canonical quotient map below \begin{align*} q:G\rightarrow G/N, \quad q(x)=xN,\quad x\in G, \end{align*} which is a continuous and onto homomorphism, for a closed normal subgroup $N$ of $G$. \begin{theorem}\label{PROPQUOTIENT} Let $N\subseteq G$ be a closed normal subgroup. Then we have the following statements. \begin{enumerate} \item\label{PROPQUOTIENT1} $(\rho,F)\in\text{Rep}_p(G/N)$ implies that $(\rho\circ q,F)\in\text{Rep}_p(G)$ and the identification \begin{align*} PF_{p,\rho}(G/N)=PF_{p,\rho\circ q}(G), \end{align*} is an isometric isomorphism. \item\label{PROPQUOTIENT2} Each representation $(\pi,E)\in\text{Rep}_p(G)$ induces a representation $(\tilde{\pi}_K,K)\in\text{Rep}_p(G/N)$ so that \begin{align*} (\tilde{\pi}_K\circ q,K)\subseteq(\pi,E), \end{align*} where \begin{align}\label{K-SPACE} K=\big\{\xi\in E\ :\ \pi(n)\xi=\xi,\; \text{for all}\; n\in N\big\}. \end{align} Moreover, if the representation $(\pi,E)$ is $p$-universal, then the induced representation $(\tilde{\pi}_K,K)$ is $p$-universal, as well. \item\label{PROPQUOTIENT3} Denoting by $B_p(G:N)$ the subalgebra of functions in $B_p(G)$ that are constant on each coset of $N$, we have \begin{align*} B_p(G:N)=B_p(G/N). \end{align*} \item\label{PROPQUOTIENT4} The map $\Phi_q:B_p(G/N)\rightarrow B_p(G)$ defined through $\Phi_q(u)=u\circ q$, for $u\in B_p(G/N)$, is an isometric map. \end{enumerate} \begin{proof} \begin{enumerate} \item Let $(\rho,F)\in\text{Rep}_p(G/N)$.
Evidently, we have $(\rho\circ q,F)\in\text{Rep}_p(G)$. Recall the natural map $P:L_1(G)\rightarrow L_1(G/N)$ from \cite{FOLLAND1995}, \begin{align}\label{P-MAP} Pf(xN)=\int_Nf(xn)dn,\quad xN\in G/N. \end{align} For each $x\in G$, $\zeta\in F$, and $\psi\in F^*$ we have $\langle\rho\circ q(x)\zeta,\psi\rangle =\langle\rho(xN)\zeta,\psi\rangle$, and since for each continuous function $u:G/N\rightarrow\mathbb{C}$, and $f\in L_1(G)$, we have $P((u\circ q)\cdot f)=u\cdot Pf$, it follows that \begin{align*} \langle\rho\circ q(f)\zeta,\psi\rangle =\langle\rho(Pf)\zeta,\psi\rangle,\ f\in L_1(G),\; \zeta\in F,\;\psi\in F^*. \end{align*} Consequently, we have \begin{align}\label{K1} \rho\circ q(x)=\rho(xN),\quad\text{and}\quad \rho\circ q(f)=\rho(Pf),\quad x\in G,\; f\in L_1(G). \end{align} The relation \eqref{K1} means that $PF_{p,\rho\circ q}(G)=PF_{p,\rho}(G/N)$, isometrically, by the fact that the map $P$ is onto. \item Let $(\pi,E)\in\text{Rep}_p(G)$. Recall the closed subspace $K$ of $E$ in \eqref{K-SPACE}. The space $K$ is a $QSL_p$-space and, on top of that, it is invariant under $\pi$, i.e. \begin{align*} \pi(x)K\subseteq K,\quad\text{and}\quad \pi(f)K\subseteq K,\qquad x\in G, \ f\in L_1(G). \end{align*} So, if $(\pi_K,K)$ denotes the representation of $G$ that maps every element $x\in G$ to $\pi_K(x)=\pi(x)|_K$, then \begin{align*} (\pi_K,K)\in\text{Rep}_p(G),\quad\text{and}\quad (\pi_K,K)\subseteq (\pi,E). \end{align*} Now, put \begin{align*} \tilde{\pi}_K:G/N\rightarrow \mathcal{B}(K),\quad \tilde{\pi}_K(xN)=\pi(x),\quad xN\in G/N. \end{align*} Then by the definition of $K$, the pair $(\tilde{\pi}_K,K)$ is well-defined and belongs to $\text{Rep}_p(G/N)$. Additionally, \begin{align*} (\tilde{\pi}_K\circ q,K) =(\pi_K,K)\subseteq (\pi,E). \end{align*} The idea for the $p$-universality part is based on the fact that each element $(\rho,F)$ in $\text{Cyc}_p(G/N)$ corresponds to $(\rho\circ q,F)$ in $\text{Cyc}_p(G)$.
Indeed, if $(\rho,F)\in\text{Cyc}_p(G/N)$ is associated with the cyclic vector $\xi_0\in F$, then we shall show that $(\rho\circ q,F)$ is a cyclic representation with the same vector $\xi_0$. To this end, assume that $\xi\in F$ is an arbitrary vector. Then due to the fact that $(\rho,F)$ is cyclic, there exists $(g_i)_{i\in\mathbb{I}}\subseteq L_1(G/N)$ such that \begin{align*} \lim_{i\in\mathbb{I}}\|\rho(g_i)\xi_0-\xi\|=0. \end{align*} On the other hand, the map $P$ in \eqref{P-MAP} is onto, therefore, for each $i\in\mathbb{I}$, there exists $ f_i\in L_1(G)$ so that $P(f_i)=g_i$. Thus, via \eqref{K1}, we have \begin{align*} \lim_{i\in\mathbb{I}}\|\rho\circ q (f_i)\xi_0-\xi\|=0. \end{align*} Hence, $(\rho\circ q,F)$ is a cyclic representation of $G$ associated with $\xi_0$. Now, if $(\pi, E)$ is a $p$-universal representation of $G$, then by the definition, it contains $(\rho\circ q,F)$, and we have (up to an isometry) \begin{align*} F\subseteq E,\quad \rho\circ q(f)=\pi(f)|_F,\quad f\in L_1(G). \end{align*} Moreover, by the definition of $K$, we have $F\subseteq K$, and we obtain \begin{align*} (\rho\circ q,F)\subseteq (\pi_K,K),\quad\text{and consequently} \quad (\rho,F)\subseteq (\tilde{\pi}_K,K). \end{align*} Now, since $(\tilde{\pi}_K,K)$ contains every cyclic representation $(\rho,F)$ of $G/N$, it is a $p$-universal representation. \item Let \begin{align*} u(x)=\langle\pi(x)\xi_0,\eta_0\rangle,\quad x\in G, \end{align*} be a function which is not identically zero and is constant on each coset of $N$, where $(\pi,E)\in\text{Rep}_p(G)$, $\xi_0\in E$ and $\eta_0\in E^*$. Assume that \begin{align*} F_{\eta_0}=\overline{\{\pi(f)^*\eta_0\ : \ f\in L_1(G)\}{}}^{\|\cdot\|_{E^*}}, \end{align*} and then define $E_{\eta_0}=E/F_{\eta_0}^{\bot}$. Consequently, we have $E_{\eta_0}=F_{\eta_0}^*$, or equivalently, $E_{\eta_0}^*=F_{\eta_0}$.
So, we may assume that the function $u$ is a coefficient function of the representation $(\pi_{\eta_0 }, E_{\eta_0})=(\pi|_{E_{\eta_0}}, E_{\eta_0})$. Now, similarly to \eqref{K-SPACE}, define \begin{align*} E_{u}=\left\{ \xi\in E_{\eta_0}\ : \ \pi_{\eta_0 }(n)\xi=\xi ,\; \text{for all} \; n\in N\right\}. \end{align*} One can easily check that $\xi_0\in E_u$. Indeed, since $u\in B_p(G:N)$, for an arbitrary element $f\in L_1(G)$ we have \begin{align*} \langle \pi_{\eta_0 }(n)\xi_0,\pi_{\eta_0 }(f)^*\eta_0\rangle&= \langle \pi_{\eta_0 }(f)\pi_{\eta_0 }(n)\xi_0,\eta_0\rangle\\ &=\int f(x)\langle \pi_{\eta_0 }(x)\pi_{\eta_0 }(n)\xi_0,\eta_0\rangle dx\\ &=\int f(x)\langle\pi_{\eta_0 }(xn)\xi_0,\eta_0\rangle dx\\ &=\int f(x)u(xn) dx\\ &=\int f(x)u(x) dx\\ &=\langle \xi_0,\pi_{\eta_0 }(f)^*\eta_0\rangle . \end{align*} This implies that $\pi_{\eta_0}(n)\xi_0=\xi_0$, for all $n\in N$. Following the notation of Part \eqref{PROPQUOTIENT2}, if we put $(\pi_u,E_u)=(\pi_{\eta_0}|_{E_u},E_u)$ then $(\pi_u,E_u)\in\text{Rep}_p(G)$, and the representation $(\tilde{\pi}_u,E_u)$ is well-defined and belongs to $\text{Rep}_p(G/N)$. Moreover, if we consider \begin{align}\label{U-TILDE} \tilde{u}(xN)=\langle\tilde{\pi}_u(xN)\xi_0,\eta_0\rangle,\quad xN\in G/N, \end{align} then $\tilde{u}\in B_p(G/N)$ and $u=\tilde{u}\circ q$. Let us define \begin{align} & k_N:B_p(G:N)\rightarrow B_p(G/N),\quad k_N(u)=\tilde{u},\label{MAPSKN}\\ & l_N: B_p(G/N)\rightarrow B_p(G:N),\quad l_N(u)=u\circ q,\label{MAPSLN} \end{align} where $\tilde{u}$ is as in \eqref{U-TILDE}. Evidently, these maps are contractive homomorphisms and are inverses of each other. Therefore, they are isometric isomorphisms. \item This is straightforward from the previous parts. Indeed, Part \eqref{PROPQUOTIENT2} implies that the map $\Phi_q$ is a contraction, and Part \eqref{PROPQUOTIENT3} gives the isometry.
\end{enumerate} \end{proof} \end{theorem} The next theorem is the first main result of this paper; it will be applied to obtain the results on weighted homomorphisms on the $p$-analog of the Fourier-Stieltjes algebras. For clarity, we recall the notion of the $p$-tensor product $E\tilde{\otimes}_p F$ of two $QSL_p$-spaces $E$ and $F$, which is defined in \cite{RUNDE2005}. In fact, Runde introduced a norm $\|\cdot\|_p$ on the algebraic tensor product $E\otimes F$ which enjoys several pivotal properties. An important one is that the completion $E\tilde{\otimes}_p F$ of $E\otimes F$ with respect to $\|\cdot\|_p$ is a $QSL_p$-space. Furthermore, for two representations $(\pi,E)$ and $(\rho,F)$ of the locally compact group $G$ in $\text{Rep}_p(G)$, the representation $(\pi\otimes\rho,E\tilde{\otimes}_p F)$ is well-defined and belongs to $\text{Rep}_p(G)$. As a result, for two functions $u(x)=\langle\pi(x)\xi,\eta\rangle$ and $v(x)=\langle\rho(x)\zeta,\psi\rangle$, $(x\in G)$, their pointwise product is a coefficient function of the representation $(\pi\otimes\rho, E\tilde{\otimes}_p F)$, i.e., \begin{align*} u\cdot v(x)=\langle(\pi(x)\otimes\rho(x))(\xi\otimes\zeta),\eta\otimes\psi\rangle,\quad x\in G . \end{align*} For more details on the $p$-tensor product $\tilde{\otimes}_p$ see \cite[Theorem 3.1 and Corollary 3.2]{RUNDE2005}. \begin{theorem}\label{IMPORTANTMAPPING} Let $p\in (1,\infty)$ and $G$ be a locally compact group. Then the following statements hold. \begin{enumerate} \item\label{ITEMIDENTITYMAP} For any $(\rho, F)\in \text{Rep}_{p}(G)$, the identity map $I:B_{p,\rho}\rightarrow B_p(G)$ is a $p$-completely contractive map. \item\label{ITEMRESTRICTIONMAP} For an open subgroup $G_0$ of $G$, the restriction map $R: B_p(G)\rightarrow B_p(G_0)$ is a $p$-completely contractive homomorphism.
\item\label{ITEMTRANSLATIONMAP} For an element $a\in G$, the translation map $L_a:B_p(G)\rightarrow B_p(G)$, defined through $L_a(u)={}_a u$, where ${}_au(x)=u(ax)$, for $x\in G$, is a $p$-completely contractive map. \item\label{ITEMQUOTIENTMAP} The homomorphism $\Phi_q: B_p(G/N)\rightarrow B_p(G)$, with $\Phi_q(u)= u\circ q$, is a $p$-completely contractive homomorphism. \item\label{ITEMEXTENSIONMAP} For an open amenable subgroup $G_0$ of $G$, the extension map $E_{BB}:B_p(G_0)\rightarrow B_p(G)$ is a $p$-completely contractive homomorphism. \item\label{ITEMMULTIPLICATIONYYY} For an open coset $Y$ of an open subgroup $G_0$ of $G$, the map $M_Y:B_p(G)\rightarrow B_p(G)$, with $M_Y(u)=u\cdot\chi_Y$, is a $p$-completely contractive homomorphism. More generally, for a set $Y\in\Omega_0(G)$, the map $M_Y$ is a $p$-completely bounded homomorphism. \end{enumerate} \begin{proof} \begin{enumerate} \item We want to prove that for each $(\rho,F)\in\text{Rep}_{p}(G)$, the following map is a $p$-complete contraction: \begin{align}\label{ID} I: B_{p,\rho}\rightarrow B_p(G),\quad I(u)=u . \end{align} Let $(\pi ,E)$ be a $p$-universal representation of $G$ that contains the representation $(\rho,F)$. The following relations hold between $(\rho ,F)$ and $(\pi, E)$: \begin{align*} F\subseteq E, \quad \rho(x)=\pi(x)|_{F},\quad\text{and}\quad\rho(f)=\pi(f)|_{F},\quad x\in G,\; f\in L_1(G). \end{align*} Since $\rho(f)=\pi(f)|_{F}$, we have $\|\rho(f)\|\leq\|\pi(f)\|$. Additionally, the map $I$ is weak$^*$-weak$^*$ continuous, and it is a contraction by Theorem \ref{THEOREMRUNDE2005}. Define \begin{align*} {}_*I: UPF_{p}(G)\rightarrow PF_{p,\rho}(G),\quad {}_*I(\pi(f))=\pi(f)|_{F}=\rho(f); \end{align*} then ${}_*I$ is the predual of the map \eqref{ID}, since for every $f\in L_1(G)$ and $u\in B_{p,\rho}$ we have $\langle\pi(f), I(u)\rangle=\langle\rho(f),u\rangle$.
The following calculations show that ${}_*I$ is a $p$-complete contraction: for each $n\in\mathbb{N}$ and $[\rho(f_{ij})]\in\mathbb{M}_n(PF_{p,\rho}(G))$, we have \begin{align*} \|[\rho(f_{ij})]\|_n&=\sup\bigg\{\|[\rho(f_{ij})](\xi_{j})_{j=1}^n\|\ :\ (\xi_j)_{j=1}^n\subseteq F,\; \sum_{j=1}^n\|\xi_j\|^p\leq 1\bigg\}\\ &=\sup\bigg\{\|[\pi(f_{ij})](\xi_{j})_{j=1}^n\|\ :\ (\xi_j)_{j=1}^n\subseteq F,\; \sum_{j=1}^n\|\xi_j\|^p\leq 1\bigg\}\\ &\leq\sup\bigg\{\|[\pi(f_{ij})](\xi_{j})_{j=1}^n\|\ :\ (\xi_j)_{j=1}^n\subseteq E,\; \sum_{j=1}^n\|\xi_j\|^p\leq 1\bigg\}\\ &=\|[\pi(f_{ij})]\|_n. \end{align*} Hence $\|[\rho(f_{ij})]\|_n\leq \|[\pi(f_{ij})]\|_n$, and it follows that \begin{align*} \|I\|_{\text{p-cb}}\leq \|{}_*I\|_{\text{p-cb}}\leq 1 . \end{align*} \item Let $G_0\subseteq G$ be an open subgroup and recall the maps $R$ and $S=S_\pi$ (for a $p$-universal representation $(\pi,E)$ of $G$) in Proposition \ref{PROPEXREC} together with the relation \eqref{w0} from Lemma \ref{LEMMARESTRICTIONMAP}. Since $S^*=R$ and the map $S$ is in fact an identity, $S$ is a $p$-completely isometric map, and consequently, \begin{align*} \|R\|_{\text{p-cb}}=\|S^*\|_{\text{p-cb}}\leq\|S\|_{\text{p-cb}}=1. \end{align*} \item For $a\in G$, consider the following map \begin{align*} L_a:B_p(G)\rightarrow B_p(G),\quad L_a(u)={}_au,\quad {}_au(x)=u(ax),\; x\in G. \end{align*} The predual of the map $L_a$ is as follows: \begin{align*} &{}_*L_a :UPF_{p}(G)\rightarrow UPF_p(G),\quad {}_*L_a(\pi(f))=\pi(\lambda_p(a)f), \end{align*} and it is clearly $p$-completely contractive; consequently, so is $L_a$. On the other hand, the map $L_a$ has the inverse $L_{a^{-1}}$, which is $p$-completely contractive as well, so $L_a$ is a $p$-completely isometric map. \item Let $N\subseteq G$ be a closed normal subgroup.
Recalling all the notation from Theorem \ref{PROPQUOTIENT}, we let $(\pi,E)\in\text{Rep}_p(G)$ be a $p$-universal representation and $(\tilde{\pi}_K,K)$ be the induced $p$-universal representation of $G/N$. For functions $f\in L_1(G)$ and $u\in B_p(G/N)$, we have \begin{align}\label{EQPHQ} \langle\pi(f),\Phi_q(u)\rangle=\langle\pi(f),u\circ q\rangle=\langle\tilde{\pi}(Pf),u\rangle. \end{align} This implies that the map $\Phi_q$ is weak$^*$-weak$^*$ continuous, and by this we define the predual map ${}_*\Phi_q$ as follows: \begin{align*} {}_*\Phi_q : PF_{p,\tilde{\pi}\circ q}(G)\rightarrow UPF_{p}(G/N),\quad {}_*\Phi_q(\tilde{\pi}\circ q(f))=\tilde{\pi}(P f),\quad f\in L_1(G), \end{align*} and by \eqref{EQPHQ} we have $({}_*\Phi_q)^*=\Phi_q$. By a relation similar to \eqref{K1}, we have $\tilde{\pi}\circ q(f)=\tilde{\pi}(Pf)$, which means that the predual map ${}_*\Phi_q$ is an identity; it is a $p$-completely isometric map via the following computation: \begin{align*} \|{}_*\Phi_q^{(n)}([\tilde{\pi}\circ q(f_{ij})])\|_n= \|[\tilde{\pi}(Pf_{ij})]\|_n= \|[\tilde{\pi}\circ q(f_{ij})]\|_n. \end{align*} Therefore, we have $\|\Phi_q\|_{\text{p-cb}}\leq 1$. \item Let $G_0\subseteq G$ be an open amenable subgroup, and $u\in B_p(G_0)$. By Theorem \ref{PROPEXTENSION}, the extension map $E_{BB}$ is well-defined, \begin{align*} &E_{BB}: B_p(G_0)\rightarrow B_p(G),\quad E_{BB}(u)=u^\circ . \end{align*} Let $(\pi ,E)$ be a $p$-universal representation of $G$. We denote the restriction of $(\pi,E)$ to $G_0$ by $(\pi_{G_0}, E)$, which is a $p$-universal representation of $G_0$ by Remark \ref{REMARKEXTENSION}. We note that, by the relation \begin{align}\label{EQEXT} \langle \pi(f), u^\circ\rangle=\langle\pi_{G_0}(f|_{G_0}),u\rangle,\quad f\in L_1(G),\ u\in B_p(G_0), \end{align} the map $E_{BB}$ is weak$^*$-weak$^*$ continuous.
So, we define the predual map ${}_*E_{BB}$ as follows: \begin{align*} {}_*E_{BB}: UPF_p(G)\rightarrow UPF_p(G_0),\quad {}_*E_{BB}(\pi(f)):=\pi_{G_0}(f|_{G_0}), \end{align*} and by \eqref{EQEXT} we have $({}_*E_{BB})^*=E_{BB}$. Note that since $\chi_{G_0}\in B_p(G)$, via \cite[Theorem 2]{AHSH2021-II}, $\chi_{G_0}$ is a normalized coefficient function of $(\pi, E)$, i.e. there exist $\xi_\chi\in E$ and $\eta_\chi\in E^*$ so that \begin{align}\label{p0} \|\xi_\chi\|=\|\eta_\chi\|=1, \ \text{and}\quad \chi_{G_0}(x)=\langle\pi(x)\xi_\chi,\eta_\chi\rangle,\quad x\in G. \end{align} On the other hand, for $f\in L_1(G)$, $\xi\in E$, and $\eta\in E^*$, we have \begin{align*} \langle\pi(f\chi_{G_0})\xi,\eta\rangle&=\int_{G}f(x)\chi_{G_0}(x)\langle\pi(x)\xi,\eta\rangle dx\\ &=\int_{G}f(x)\langle\pi(x)\xi_\chi,\eta_\chi\rangle\langle\pi(x)\xi,\eta\rangle dx\\ &=\int_{G}f(x)\langle(\pi(x)\otimes\pi(x))(\xi_\chi\otimes\xi),\eta_\chi\otimes\eta\rangle dx\\ &=\langle(\pi\otimes\pi(f))(\xi_\chi\otimes\xi),\eta_\chi\otimes\eta\rangle, \end{align*} which implies that \begin{align}\label{Z1} \langle\pi(f\chi_{G_0})\xi,\eta\rangle=\langle(\pi\otimes\pi(f))(\xi_\chi\otimes\xi),\eta_\chi\otimes\eta\rangle,\quad f\in L_1(G),\ \xi\in E,\ \eta\in E^*. \end{align} Therefore, by combining equality \eqref{Z1} with \eqref{w0}, we have \begin{align}\label{EQNEW} \langle\pi_{G_0}(f|_{G_0})\xi,\eta\rangle=\langle(\pi\otimes\pi(f))(\xi_\chi\otimes\xi),\eta_\chi\otimes\eta\rangle,\quad f\in L_1(G),\ \xi\in E,\ \eta\in E^*. \end{align} Additionally, since $(\pi,E)$ is a $p$-universal representation and \begin{align*} (\pi, E)\subseteq (\pi\otimes\pi, E\tilde{\otimes}_p E), \end{align*} the representation $(\pi\otimes\pi, E\tilde{\otimes}_p E)$ may be taken as a $p$-universal representation of $G$. Let \begin{align*} {}_*E_{BB}^{(n)}: \mathbb{M}_n(UPF_p(G))\rightarrow \mathbb{M}_n(UPF_p(G_0)),\quad {}_*E_{BB}^{(n)}([\pi(f_{ij})]):=[\pi_{G_0}(f_{ij}|_{G_0})].
\end{align*} Then via \eqref{EQNEW} we have \begin{small} \begin{align*} \|{}_*E_{BB}^{(n)}([\pi(f_{ij})])\|_n&=\|[\pi_{G_0}(f_{ij}|_{G_0})]\|_n\\ &=\sup\bigg\{|\sum_{i,j=1}^n\langle\pi_{G_0}(f_{ij}|_{G_0})\xi_j,\eta_i\rangle|\ :\ (\xi_j)_{j=1}^n\subseteq E, \\ &\qquad\qquad (\eta_i)_{i=1}^n\subseteq E^*,\;\sum_{j=1}^n\|\xi_j\|^p\leq 1,\; \sum_{i=1}^n\|\eta_i\|^{p'}\leq 1\bigg\}\\ &\leq\sup\bigg\{|\sum_{i,j=1}^n\langle(\pi\otimes\pi(f_{ij}))\phi_j,\psi_i\rangle|\ :\ (\phi_j)_{j=1}^n\subseteq E\tilde{\otimes}_pE,\\ &\qquad\qquad(\psi_i)_{i=1}^n\subseteq E^*\tilde{\otimes}_{p'}E^*,\;\sum_{j=1}^n\|\phi_j\|^p\leq 1,\; \sum_{i=1}^n\|\psi_i\|^{p'}\leq 1\bigg\}\\ &=\|[\pi\otimes\pi(f_{ij})]\|_n, \end{align*} \end{small} and since the norm of $UPF_p(G)$ is independent of the choice of $p$-universal representation (see Theorem \ref{TH45}), we have $\|{}_*E_{BB}\|_{\text{p-cb}}\leq 1$, which implies that $\|E_{BB}\|_{\text{p-cb}}\leq 1$. \item Let $(\pi,E)$ be a $p$-universal representation. By \cite[Corollary 2]{AHSH2021-II}, for $Y\in\Omega_0(G)$ the map $M_Y: B_p(G)\rightarrow B_p(G)$ with $M_Y(u)=u\cdot \chi_Y$ is well-defined, and \begin{align*} \|M_Y\|\leq 2^{m_Y}. \end{align*} On the other hand, the following relation shows that this map is weak$^*$-weak$^*$ continuous: \begin{align}\label{EQMULTBP} \langle\pi(f),u\cdot\chi_Y\rangle=\langle\pi(f\cdot\chi_Y),u\rangle,\quad f\in L_1(G),\ u\in B_p(G). \end{align} So, one may define its predual map as follows: \begin{align*} {}_*M_Y:UPF_p(G)\rightarrow UPF_p(G),\quad {}_*M_Y(\pi(f))=\pi(f\cdot\chi_Y), \end{align*} and by \eqref{EQMULTBP} we have $({}_*M_Y)^*=M_Y$. \begin{enumerate} \item[Step 1:] To prove the claim, first suppose that $Y$ is itself an open coset.
Similarly to \eqref{p0}, the function $\chi_Y$ is a normalized coefficient function of the representation $(\pi,E)$, which means that there are elements $\xi_Y\in E$ and $\eta_Y\in E^*$ such that \begin{align*} \|\xi_Y\|=\|\eta_Y\|=1,\ \text{and}\quad \chi_Y(x)=\langle\pi(x)\xi_Y,\eta_Y\rangle,\quad x\in G. \end{align*} So, for a matrix $[\pi(f_{ij})]\in \mathbb{M}_n(UPF_p(G))$, through the relation \eqref{Z1}, we have \begin{small} \begin{align*} \|[\pi(f_{ij}\cdot\chi_Y)]\|_n&=\sup\bigg\{|\sum_{i,j=1}^n\langle\pi(f_{ij}\cdot\chi_Y)\xi_j,\eta_i\rangle|\ :\ (\xi_j)_{j=1}^n\subseteq E,\\ &\qquad\qquad (\eta_i)_{i=1}^n\subseteq E^*,\; \sum_{j=1}^n\|\xi_j\|^p\leq 1,\; \sum_{i=1}^n\|\eta_i\|^{p'}\leq 1 \bigg\}\\ &\leq \sup\bigg\{|\sum_{i,j=1}^n\langle\pi\otimes\pi(f_{ij})\phi_j,\psi_i\rangle|\ :\ (\phi_j)_{j=1}^n\subseteq E\tilde{\otimes}_pE,\\ &\qquad\qquad (\psi_i)_{i=1}^n\subseteq E^*\tilde{\otimes}_{p'}E^*,\; \sum_{j=1}^n\|\phi_j\|^p\leq 1,\; \sum_{i=1}^n\|\psi_i\|^{p'}\leq 1 \bigg\}\\ &=\|[\pi\otimes\pi(f_{ij})]\|_n. \end{align*} \end{small} By these computations, we obtain that the map ${}_*M_Y$ is a $p$-complete contraction, and therefore $\|M_Y\|_{\text{p-cb}}\leq 1$, since the $p$-operator norm of $UPF_{p}(G)$ is independent of the choice of $p$-universal representation. \item[Step 2:] Now let $Y=Y_0\backslash\cup_{i=1}^m Y_i \in\Omega_0(G)$. Then we have \begin{align*} M_{Y}=M_{{Y_0}}-\bigg(\sum_{i=1}^m M_{Y_i} -\sum_{i<j}M_{{Y_i\cap Y_j}}+\sum_{i<j<k}M_{{Y_i\cap Y_j\cap Y_k}}-\ldots+(-1)^{m+1}M_{{Y_1\cap\ldots\cap Y_m}}\bigg). \end{align*} Therefore, we have $\|M_Y\|_{\text{p-cb}}\leq 2^{m_Y}$. \end{enumerate} \end{enumerate} \end{proof} \end{theorem} The following is an immediate consequence of Theorem \ref{PROPQUOTIENT} and Theorem \ref{IMPORTANTMAPPING}-\eqref{ITEMQUOTIENTMAP}. \begin{corollary} For a closed normal subgroup $N$ of a locally compact group $G$, the identification $B_p(G/N)=B_p(G:N)$ is a $p$-complete isometry.
\begin{proof} Recall the maps $k_N$ and $l_N$ in \eqref{MAPSKN} and \eqref{MAPSLN}. Evidently, they are weak$^*$-weak$^*$ continuous. Then, with the notation of Theorem \ref{PROPQUOTIENT}, let $(\pi,E)$ be a $p$-universal representation of $G$. It should be noted that elements of the algebra $B_p(G:N)$ are coefficient functions of the representation $(\tilde{\pi}\circ q, K)$. Hence one may define their predual maps as follows: \begin{align*} &{}_*k_N: UPF_p(G/N)\to PF_{p,\tilde{\pi}\circ q}(G),\quad {}_*k_N(\tilde{\pi}(f))=\tilde{\pi}\circ q(g),\quad f\in L_1(G/N),\\ &\text{where}\; g\in L_1(G),\; \text{with}\; P(g)=f,\\ &{}_*l_N: PF_{p,\tilde{\pi}\circ q}(G) \to UPF_p(G/N),\quad {}_*l_N(\tilde{\pi}\circ q (g))=\tilde{\pi} (P(g)),\quad g\in L_1(G ). \end{align*} By \eqref{K1} the map ${}_*k_N$ is well-defined, and in fact, both maps ${}_*k_N$ and ${}_*l_N$ are identities and inverses of each other, from which the result follows. \end{proof} \end{corollary} \begin{remark} The importance of Theorem \ref{IMPORTANTMAPPING}-\eqref{ITEMIDENTITYMAP} is that, when working with maps whose ranges are subspaces of the $p$-analog of the Fourier-Stieltjes algebras, we may restrict ourselves to these ranges, as we have done in the remaining parts of Theorem \ref{IMPORTANTMAPPING}. \end{remark} \section{$p$-Completely bounded homomorphisms on $B_p(G)$ induced by proper piecewise affine maps}\label{SECTIONINDUCEDHOMO} As an application of the previous sections, we are ready to study homomorphisms $\Phi_\alpha : B_p(G)\rightarrow B_p(H)$ of the form \begin{align*} \Phi_\alpha u= \left\{ \begin{array}{ll} u\circ \alpha & \text{on} \: Y\\ 0 & \text{otherwise} \end{array}\right.,\quad u\in B_p(G), \end{align*} where $\alpha :Y\subseteq H\rightarrow G$ is a proper continuous piecewise affine map, $Y=\cup_{i=1}^nY_i$, and the sets $Y_i\in\Omega_{\text{am-}0}(H)$, $i=1,\ldots,n$, are pairwise disjoint.
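At the level of characteristic functions, the decomposition of $M_Y$ used in Step 2 of the proof of Theorem \ref{IMPORTANTMAPPING}-\eqref{ITEMMULTIPLICATIONYYY} is the inclusion--exclusion identity $\chi_{Y_0\setminus\cup_{i}Y_i}=\chi_{Y_0}\prod_{i=1}^m(1-\chi_{Y_i})$, expanded into $2^m$ signed terms. The following Python sketch checks this identity on hypothetical finite sets standing in for the cosets (assuming, as in the definition of $\Omega_0(G)$, that each $Y_i$ is contained in $Y_0$):

```python
from itertools import combinations

def chi(S, x):
    # characteristic function of the set S
    return 1 if x in S else 0

def chi_diff(Y0, Ys, x):
    # chi of Y0 \ (Y1 ∪ ... ∪ Ym), expanded by inclusion-exclusion
    # into 2^m signed terms, mirroring the decomposition of M_Y
    m = len(Ys)
    total = 0
    for k in range(m + 1):
        for idx in combinations(range(m), k):
            inter = set(Y0)
            for i in idx:
                inter &= Ys[i]
            total += (-1) ** k * chi(inter, x)
    return total

# hypothetical data: Y1, Y2 are subsets of Y0
Y0 = set(range(10))
Ys = [{1, 2, 3}, {3, 4, 5}]
direct = Y0 - set().union(*Ys)
assert all(chi_diff(Y0, Ys, x) == chi(direct, x) for x in range(12))
```

Since each of the $2^m$ summands corresponds to a multiplication operator that is $p$-completely contractive by Step 1, the triangle inequality yields the bound $2^{m_Y}$.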
We will give some results in the sequel, and we need the following lemma in this regard. For the general form of this lemma, see \cite[Lemma 1]{ILIE2013} and the related references there, e.g. \cite{DERIGHETTI1982}. \begin{lemma}\label{LEMMACALPHA} Let $G$ and $H$ be locally compact groups and let $\alpha :H\rightarrow G$ be a proper homomorphism that is onto. Then there is a constant $c_\alpha>0$ such that \begin{align*} \int_H f\circ\alpha(h)dh=c_\alpha\int_G f(x)dx,\quad f\in L_1(G). \end{align*} \end{lemma} \begin{proposition}\label{ITEMHOMOMORPHISMPHI} Let $G$ and $H$ be locally compact groups and $\alpha :H\rightarrow G$ be a proper continuous group homomorphism. Then the homomorphism $\Phi_\alpha : B_p(G)\rightarrow B_p(H)$, of the form $\Phi_\alpha(u)=u\circ\alpha$, is a well-defined $p$-completely contractive homomorphism. \begin{proof} Let $(\pi,E)$ be a $p$-universal representation of $G$. Obviously, $(\pi\circ\alpha , E)\in\text{Rep}_p(H)$, and $\Phi_\alpha$ is a contractive map whose range is the subspace of $B_p(H)$ consisting of coefficient functions of the representation $(\pi\circ\alpha , E)$, i.e. the space $B_{p,\pi\circ\alpha}$. We divide our proof into two steps. \begin{enumerate} \item[Step 1:] First, we suppose that $\alpha :H\rightarrow G$ is a continuous isomorphism. In this case, $(\pi\circ\alpha , E)$ is a $p$-universal representation of $H$, and by Lemma \ref{LEMMACALPHA}, for every $f\in L_1(H)$ and $u\in B_p(G)$, we have \begin{align*} \langle\pi\circ\alpha(f),u\circ\alpha\rangle&=\int_H f(h)u\circ\alpha(h)dh\\ &=\int_H (f\circ\alpha^{-1})\circ\alpha(h)u\circ\alpha(h)dh\\ &=c_\alpha\int_G f\circ\alpha^{-1}(x)u(x)dx\\ &=c_\alpha\langle\pi(f\circ\alpha^{-1}),u\rangle. \end{align*} Consequently, the map $\Phi_\alpha$ is weak$^*$-weak$^*$ continuous, and we define \begin{align*} {}_*\Phi_\alpha :UPF_p(H)\rightarrow UPF_p(G),\quad{}_*\Phi_\alpha(\pi\circ\alpha(f)):=c_\alpha\pi(f\circ\alpha^{-1}).
\end{align*} According to the above relation, we have $({}_*\Phi_\alpha)^*=\Phi_\alpha$. On the other hand, for every $\xi\in E$ and $\eta\in E^*$, we have \begin{align*} \langle\pi\circ\alpha(f)\xi,\eta\rangle & =\int_{H}f(h)\langle\pi\circ\alpha(h)\xi,\eta\rangle dh\\ & =\int_{H}f\circ\alpha^{-1}\circ\alpha(h)\langle\pi\circ\alpha(h)\xi,\eta\rangle dh\\ & =c_\alpha\int_{G}f\circ\alpha^{-1}(x)\langle\pi(x)\xi,\eta\rangle dx\\ &=\langle c_\alpha\pi(f\circ\alpha^{-1})\xi,\eta\rangle, \end{align*} which means $\pi\circ\alpha (f)=c_\alpha\pi(f\circ\alpha^{-1})$. Consequently, ${}_*\Phi_\alpha$ is an identity map, and hence a $p$-completely isometric map: \begin{align*} \|{}_*\Phi_\alpha^{(n)}([\pi\circ\alpha(f_{ij})])\|_n & =\|[c_\alpha\pi(f_{ij}\circ\alpha^{-1})]\|_n =\|[\pi\circ\alpha(f_{ij})]\|_n. \end{align*} Therefore, $\|\Phi_\alpha\|_{\text{p-cb}}\leq\|{}_*\Phi_\alpha\|_{\text{p-cb}}=1$. \item[Step 2:] Now let $\alpha :H\rightarrow G$ be any proper continuous homomorphism. Let $G_0=\alpha(H)$, and $N=\ker\alpha$. Let us define \begin{align*} \tilde{\alpha} :H/N\rightarrow G_0,\quad \tilde{\alpha}(xN)=\alpha(x),\quad x\in H; \end{align*} then by Proposition \ref{ramirez}, the map $\tilde{\alpha}$ is a continuous isomorphism, $N$ is a compact normal subgroup of $H$, and $G_0$ is an open subgroup of $G$. Therefore, $\alpha=\tilde{\alpha}\circ q $. By Step 1, the map $\Phi_{\tilde{\alpha}}$ is $p$-completely contractive, and via Theorem \ref{IMPORTANTMAPPING}-\eqref{ITEMRESTRICTIONMAP}-\eqref{ITEMQUOTIENTMAP}, the following composition shows that $\Phi_\alpha$ is $p$-completely contractive: \begin{align*} \Phi_\alpha=\Phi_q\circ\Phi_{\tilde{\alpha}}\circ R . \end{align*} \end{enumerate} \end{proof} \end{proposition} For the next theorem, we have to impose the amenability assumption on the subgroups of $H$, because of Theorem \ref{PROPEXTENSION}.
\begin{theorem}\label{ITEMAFFINE} Let $G$ and $H$ be two locally compact groups, $Y$ be an open coset of an open amenable subgroup of $H$, and $\alpha :Y\subseteq H\rightarrow G$ be a continuous proper affine map. Then the map $\Phi_\alpha : B_p(G)\rightarrow B_p(H)$, defined as \begin{align}\label{Z2} \Phi_\alpha(u)=\left\{ \begin{array}{ll} u\circ\alpha, & \text{on}\; Y,\\ 0,&\text{otherwise,} \end{array}\right.,\quad u\in B_p(G), \end{align} is $p$-completely contractive. More generally, if $\alpha$ is a continuous proper piecewise affine map and $Y=\cup_{i=1}^nY_i$, where the pairwise disjoint sets $Y_i$ belong to $\Omega_{\text{am-}0}(H)$, then the map $\Phi_\alpha$ is $p$-completely bounded. \begin{proof} By Theorem \ref{THEOREMCONCLUSION}, the map \eqref{Z2} is well-defined; it is contractive when the map $\alpha$ is affine on an open coset $Y$ of an open amenable subgroup $H_0$ of $H$, and it is bounded in the case that the map $\alpha$ is piecewise affine on the set $Y=\cup_{i=1}^nY_i$, with disjoint $Y_i\in\Omega_{\text{am-}0}(H)$, for $i=1,\ldots, n$, of the form $Y_i=Y_{i,0}\backslash\cup_{j=1}^{m_i}Y_{i,j}$. Let $\alpha :Y=y_0H_0\rightarrow G$ be a continuous proper affine map on the open coset $Y=y_0H_0$, where $H_0$ is an open amenable subgroup of $H$. By Remark \ref{AFFFIINEREMMM}-\eqref{affine-remark}, there exists a continuous group homomorphism $\beta :H_0\subseteq H\rightarrow G$ associated with $\alpha$ such that \begin{align*} \beta(h)=\alpha(y_0)^{-1}\alpha(y_0h),\quad h\in H_0, \end{align*} and it is proper via Remark \ref{AFFFIINEREMMM}-\eqref{HOMAFFPROPER}. Now, consider the following composition: \begin{align*} \Phi_\alpha= L_{{y_0}^{-1}}\circ E_{BB}\circ\Phi_\beta\circ L_{\alpha(y_0)}; \end{align*} then by Proposition \ref{ITEMHOMOMORPHISMPHI} and Theorem \ref{IMPORTANTMAPPING}-\eqref{ITEMTRANSLATIONMAP}-\eqref{ITEMEXTENSIONMAP}, the map $\Phi_\alpha$ is a $p$-completely contractive homomorphism.
Next, we consider the piecewise affine case. Let the map $\alpha :Y\subseteq H\rightarrow G$ be a proper and continuous piecewise affine map. Then for some $n\in\mathbb{N}$ there are pairwise disjoint sets $Y_i\in\Omega_{\text{am-}0}(H)$, $i=1,\ldots,n$, such that $Y=\cup_{i=1}^{n}Y_i$ and, for $i=1,\ldots,n$, the map $\alpha_i:{Aff(Y_i)}\rightarrow G$ is an affine map with $\alpha_i|_{Y_i}=\alpha|_{Y_i}$. Moreover, by Remark \ref{AFFFIINEREMMM}-\eqref{LEMMA8888}, each affine map $\alpha_i$ is proper. Therefore, by considering \begin{align*} \Phi_\alpha= \sum_{i=1}^n M_{Y_i}\circ\Phi_{\alpha_i}, \end{align*} and through the above computations for the maps $\Phi_{\alpha_i}$, we have \begin{align*} \|\Phi_\alpha\|_{\text{p-cb}}\leq \sum_{i=1}^n 2^{m_{Y_i}}. \end{align*} Here $m_{Y_i}$ is the number corresponding to $Y_i$, as in Theorem \ref{IMPORTANTMAPPING}-\eqref{ITEMMULTIPLICATIONYYY}. \end{proof} \end{theorem} \end{document}
\begin{document} \begin{abstract} We study the relation between Poisson algebras and representations of Lie conformal algebras. We establish a setting for the calculation of a Gr\"obner--Shirshov basis in a module over an associative conformal algebra and apply this technique to Poisson algebras considered as conformal modules over appropriate associative envelopes of current Lie conformal algebras. As a result, we obtain a setting for the calculation of a Gr\"obner--Shirshov basis in a Poisson algebra. \end{abstract} \maketitle \section{Introduction} Conformal algebras (also known as Lie vertex algebras) appear in the theory of vertex operator algebras as a formal language describing the singular part of the operator product expansion (OPE) \cite{Kac1998}. The entire OPE of vertex operators may be completely recovered from the singular part by adding the single operation of normally ordered product, which is known to be left-symmetric \cite{BK2002}. In this note, we consider a relation between Poisson algebras and representations of conformal algebras. As a possible application, the equality problem in the class of Poisson algebras reduces to the equality problem for modules over associative conformal algebras. Suppose $P$ is a Poisson algebra over a field $\Bbbk $ of characteristic zero and $L$ is a Lie subalgebra of $P$. That is, $P$ is a commutative algebra equipped with a Lie bracket $\{\cdot,\cdot\}$ satisfying the Leibniz rule \[ \{x,yz\} = y\{x,z\} +z\{x,y\},\quad x,y,z\in P. \] For example, if $L$ is a Lie algebra then the associated graded algebra $\mathrm{gr}\,U(L)$ of the universal enveloping associative algebra $U(L)$ is a Poisson algebra denoted $P(L)$; its bracket extends the Lie product on~$L$. For $a\in L$, $u\in P$, denote by $(a\oo{\lambda } u)$ the polynomial $\{a,u\}+\lambda au$ in a formal variable $\lambda $ with coefficients in~$P$.
Denote $H=\Bbbk [\partial ]$, and extend the operation $(\cdot \oo{\lambda } \cdot )$ to the free $H$-modules $H\otimes L$, $H\otimes P$ as follows: \[ (f(\partial )a\oo{\lambda } g(\partial )u ) = f(-\lambda )g(\partial +\lambda ) (a\oo\lambda u). \] It is straightforward to check \cite{Kol2020} that \begin{equation}\label{eq:ReprLieJacobi} (x\oo\lambda (y\oo\mu w)) - (y\oo\mu (x\oo\lambda w)) = [x\oo\lambda y]\oo{\lambda +\mu} w \end{equation} for $x,y\in H\otimes L$, $w\in H\otimes P$, where $[x\oo\lambda y]$ in the right-hand side is the conformal Lie bracket in the current conformal algebra $\mathop {\fam 0 Cur}\nolimits L$. Therefore, in particular, given a Lie algebra $L$, the free $H$-module generated by the Poisson enveloping algebra $P(L)$ is a conformal module over $\mathop {\fam 0 Cur}\nolimits L$. As in the case of ordinary Lie algebras, a conformal module structure over a conformal algebra gives rise to a module structure over its universal associative conformal envelope \cite{Ro2000}. In contrast to ordinary algebras, the universal enveloping associative conformal algebra for a given conformal algebra is not unique, and the choice of an appropriate one is determined by the locality function on the conformal linear maps corresponding to the particular representation. In our case, consider the conformal linear maps $\rho(a)_\lambda \in \mathop {\fam 0 Cend}\nolimits (H\otimes P)$, $a\in L$, sending $w\in P$ to $(a\oo\lambda w) = \{a,w\} + \lambda aw$. Evaluate the $\lambda $-product of two such maps: \begin{multline*} (\rho(a) \oo\lambda \rho(b))_\mu w = \rho(a)_\lambda (\rho(b)_{\mu-\lambda } w) \\ = \{a,\{b,w\}\} +\lambda a \{b,w\} +(\mu-\lambda ) \{a, bw\} + \lambda (\mu-\lambda )abw. \end{multline*} The result is a quadratic polynomial in $\lambda $, so the locality level of $\rho(a)$ and $\rho(b)$ is $N=3$. Hence, $H\otimes P$ is a conformal module over $U(\mathop {\fam 0 Cur}\nolimits L, N=3)$.
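The locality computation above can be verified symbolically. A minimal SymPy sketch, using as a hypothetical stand-in for $P$ the polynomial Poisson algebra $\Bbbk[q,p]$ with the canonical bracket $\{q,p\}=1$ (so that the span of $q$, $p$, $1$ is a Lie subalgebra), confirms that the composition $\rho(a)_\lambda\rho(b)_{\mu-\lambda}$ is quadratic in $\lambda$, i.e., that the locality level is $N=3$:

```python
import sympy as sp

q, p, lam, mu = sp.symbols('q p lambda mu')

def bracket(f, g):
    # canonical Poisson bracket on k[q, p]: {q, p} = 1
    return sp.expand(sp.diff(f, q) * sp.diff(g, p) - sp.diff(f, p) * sp.diff(g, q))

def act(a, nu, w):
    # the action (a o_nu w) = {a, w} + nu * a * w
    return sp.expand(bracket(a, w) + nu * a * w)

w = q**2 * p                                  # a sample module element
composed = act(q, lam, act(p, mu - lam, w))   # rho(q)_lam rho(p)_{mu - lam} w
assert sp.degree(composed, lam) == 2          # quadratic in lambda, so N = 3
```

The $\lambda^2$-coefficient comes from the term $\lambda(\mu-\lambda)abw$ of the displayed expansion, so it vanishes only when $abw=0$.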
The latter associative conformal algebra was studied in \cite{KK2020}. Given a Lie algebra $L$, we find an explicit set of defining relations of $H\otimes P(L)$ as a conformal $\mathop {\fam 0 Cur}\nolimits L$-module. In particular, if $L$ is the free Lie algebra $\mathop {\fam 0 Lie}\nolimits (X)$ generated by a set $X$ then $P(L)$ is the free Poisson algebra $\mathop {\fam 0 Pois}\nolimits (X)$. For a set $S\subset \mathop {\fam 0 Pois}\nolimits (X)$, denote by $I$ the ideal in $\mathop {\fam 0 Pois}\nolimits (X)$ generated by $S$. Then $H\otimes I\subset H\otimes \mathop {\fam 0 Pois}\nolimits (X)$ is a conformal submodule over $\mathop {\fam 0 Cur}\nolimits \mathop {\fam 0 Lie}\nolimits (X)$. Therefore, finding a Gr\"obner--Shirshov basis in the conformal module allows solving the equality problem in $\mathop {\fam 0 Pois}\nolimits (X)$. This is an approach alternative to the theory of Gr\"obner--Shirshov bases in Poisson algebras \cite{BCZ19}. \section{Conformal endomorphisms and universal associative envelopes} Let $L$ be a (Lie) conformal algebra, i.e., a left $H$-module equipped with a $\lambda $-bracket \[ [\cdot\oo\lambda \cdot ] : L\otimes L \to H[\lambda ]\otimes_H L \] which is sesqui-linear and satisfies skew-commutativity \[ [x\oo\lambda y] = -[y\oo{-\lambda-\partial} x], \quad x,y\in L, \] and the Jacobi identity \[ [ x \oo\lambda [y\oo\mu z]] - [y\oo\mu [ x \oo\lambda z]] = [ [x\oo\lambda y]\oo{\lambda+\mu } z ], \quad x,y,z\in L. \] A representation of $L$ on a left $H$-module $M$ is defined by a sesqui-linear $\lambda $-operation \[ (\cdot \oo\lambda \cdot ): L\otimes M \to H[\lambda ]\otimes _H M \] which satisfies \eqref{eq:ReprLieJacobi} for all $x,y\in L$, $w\in M$. In other words, a representation is a map sending $x\in L$ to the operation $\rho(x) = (x\oo\lambda \cdot ): M\to H[\lambda ]\otimes _H M$.
The operations $\rho(x)$, $x\in L$, are conformal linear operators on $M$ in the sense of \cite{Kac1998}, i.e., $\rho : L\to \mathop {\fam 0 Cend}\nolimits M$. Recall that $\mathop {\fam 0 Cend}\nolimits M$ has a $\lambda $-operation $(\cdot \oo\lambda \cdot )$ whose values are not necessarily polynomial: if $\rho(x) = (x\oo\lambda \cdot )$, $\rho(y) = (y\oo\lambda \cdot )$ then \[ \begin{aligned} (\rho(x)\oo\lambda \rho(y)):{}& M \to H[[\lambda ]] [\mu] \otimes _H M, \\ & w\mapsto (x\oo\lambda (y\oo{\mu-\lambda} w)), \quad w\in M. \end{aligned} \] If for every $x,y\in L$ the image of $(\rho(x)\oo\lambda \rho(y))$ is a polynomial in $\lambda $ (in particular, this is the case when $M$ is a finitely generated $H$-module) then by Dong's Lemma (see, e.g., \cite{Kac1998}) $\rho(L)$ generates an associative conformal subalgebra $E=E(L,\rho)$ in $\mathop {\fam 0 Cend}\nolimits M$. Denote by $E^{(-)}$ the commutator Lie conformal algebra based on $E$: \[ [f\oo\lambda g] = (f\oo\lambda g) - (g\oo{-\lambda -\partial } f),\quad f,g\in E. \] The relation \eqref{eq:ReprLieJacobi} and the definition of $(\rho(x)\oo\lambda \rho(y))$ imply $[\rho(x)\oo\lambda \rho(y)] = \rho([ x \oo\lambda y])$ for $x,y\in L$. Therefore, $E$ is an associative envelope of $L$. Every associative envelope of a Lie conformal algebra is an image of an appropriate universal enveloping associative conformal algebra \cite{Ro2000}. The choice of the universal envelope is determined by the degrees of the polynomials $(\rho(x)\oo\lambda \rho(y))$, $x,y\in L$. If $\deg_\lambda (\rho(x)\oo\lambda \rho(y)) <N(x,y)$ then $E =E(L,\rho )$ is an image of the associative conformal algebra $U(L,N)$. Since $M$ is a conformal module over $L$, the universal property of $U(L,N)$ implies that there exists a representation of $U(L,N)$ on $M$ extending $\rho $. Note that the canonical map $L\to U(L,N)$ is not necessarily injective.
For example, if $L=\mathop {\fam 0 Cur}\nolimits\mathfrak g$ is the current Lie conformal algebra over a Lie algebra $\mathfrak g$ and $M=L$ is the regular conformal $L$-module then $(\rho(a)\oo\lambda \rho(b)) = \mathrm{ad}\, a \,\mathrm{ad}\, b \in \mathop {\fam 0 End}\nolimits \mathfrak g$, $a,b\in \mathfrak g$, so $N(a,b)=1$. The corresponding universal enveloping associative conformal algebra $U(\mathop {\fam 0 Cur}\nolimits\mathfrak g, N=1)$ is just $\mathop {\fam 0 Cur}\nolimits U(\mathfrak g)\mathfrak g$. For the same $L$, if $M=H\otimes P(\mathfrak g)$, and $(a\oo\lambda w) = \{a,w\} + \lambda aw$, $a\in \mathfrak g$, $w\in P(\mathfrak g)$, then $N(a,b)\le 3$ for $a,b\in \mathfrak g$. Hence, the corresponding envelope is an image of $U(\mathop {\fam 0 Cur}\nolimits \mathfrak g, N=3)$, and $H\otimes P(\mathfrak g)$ is a conformal module over the associative conformal algebra $U(\mathop {\fam 0 Cur}\nolimits \mathfrak g, N=3)$. Obviously, it is generated by the single element $1=1\otimes 1$. The problem addressed in this paper is to determine the defining relations and find a complete (confluent) set of rewriting rules for this conformal module. \section{Gr\"obner--Shirshov bases of conformal modules} Let $C$ be a conformal algebra generated by a set $X$. Then $C$ is an image of an appropriate free associative conformal algebra \cite{Ro1999} generated by $X$ relative to a locality function $N: X\times X\to \mathbb Z_+$. Denote this free algebra by $\mathop {\fam 0 Conf}\nolimits(X,N)$. The latter may be presented as follows \cite{Ko2020}.
Denote by $A(X)$ the ``ordinary'' associative algebra generated by the set \[ B(X)=\{\partial \}\cup \{L_n^a,R_n^a \mid a\in X,\,n\in \mathbb Z_+\} \] with the defining relations \begin{align}\label{eq:rel_A(X)1} & L_n^a\partial - \partial L_n^a -n L_{n-1}^a, \\ & R_n^a\partial - \partial R_n^a -n R_{n-1}^a, \label{eq:rel_A(X)2} \\ & R_n^aL_m^b - L_m^b R_n^a. \label{eq:rel_A(X)3} \end{align} For a given function $N:X\times X\to \mathbb Z_+$ consider the left $A(X)$-module $M(X,N)$ generated by $X$ with the following relations: \begin{align}\label{eq:mod_A(X)1} & L_n^ab, \quad n\ge N(a,b),\\ & R_m^ba - \sum\limits_{s= 0}^{N(a,b)-m}(-1)^{m+s}\frac{1}{s!}\partial^{s}L_{m+s}^a b, \quad m\in \mathbb Z_+, \label{eq:mod_A(X)2} \end{align} where $a,b\in X$. \begin{proposition}[\cite{Ko2020}]\label{cor:FreeBasis_module} The free associative conformal algebra $\mathop {\fam 0 Conf}\nolimits(X,N)$ is an $A(X)$-module relative to \[ \begin{gathered} (a\oo\lambda u) = \sum\limits_{n\ge 0} \dfrac{\lambda^n}{n!} L_n^a u, \quad (u\oo{-\lambda-\partial} a) = \sum\limits_{n\ge 0} \dfrac{\lambda^n}{n!} R_n^a u, \\ \quad u\in \mathop {\fam 0 Conf}\nolimits(X,N),\ a\in X. \end{gathered} \] Moreover, $\mathop {\fam 0 Conf}\nolimits(X,N)$ and $M(X,N)$ are isomorphic as $A(X)$-modules. \end{proposition} A relation in $\mathop {\fam 0 Conf}\nolimits (X,N)$ may be rewritten as a relation in the free $A(X)$-module generated by $X$. Finding a Gr\"obner--Shirshov basis for such a module \cite{KangLee2000} is the same as finding the Gr\"obner--Shirshov basis for an associative conformal algebra. Hence, every associative conformal algebra $C$ may be presented as a quotient of $M(X,N)$ with a Gr\"obner--Shirshov basis $S\subset A(X)\otimes \Bbbk X$. Now, consider a (left) conformal module $M$ over $C$ generated by a set $Y$ relative to a locality function $N': X\times Y\to \mathbb Z_+$. 
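Before passing to modules, note that the relations \eqref{eq:rel_A(X)1}--\eqref{eq:rel_A(X)3} are naturally oriented as rewriting rules ($\partial$ moves to the left, $R$-operators move to the right past $L$-operators), so every word in $A(X)$ can be brought to the form $\partial^s L\dots L\,R\dots R$. A minimal computational sketch of this rewriting (the encoding of the generators and all function names are ours; this is an illustration, not the computer algebra computation used for the actual bases):

```python
from collections import defaultdict

# Generators of A(X): the string 'd' stands for \partial, while the tuples
# ('L', n, a) and ('R', n, a) stand for L_n^a and R_n^a (a is a label from X).
# A linear combination of words is a dict {word (tuple of symbols): coefficient}.

def rewrite_pair(u, v):
    """Apply one defining relation of A(X) to the adjacent pair (u, v).
    Returns a dict {replacement word: coefficient}, or None if the pair
    is already in normal form (derivations first, then L's, then R's)."""
    if u == 'd':
        return None
    if u[0] == 'L' and v == 'd':          # L_n^a d -> d L_n^a + n L_{n-1}^a
        _, n, a = u
        out = {('d', u): 1}
        if n > 0:
            out[(('L', n - 1, a),)] = n
        return out
    if u[0] == 'R' and v == 'd':          # R_n^a d -> d R_n^a + n R_{n-1}^a
        _, n, a = u
        out = {('d', u): 1}
        if n > 0:
            out[(('R', n - 1, a),)] = n
        return out
    if u[0] == 'R' and v != 'd' and v[0] == 'L':   # R_n^a L_m^b -> L_m^b R_n^a
        return {(v, u): 1}
    return None

def normal_form(comb):
    """Rewrite a linear combination of words until no relation applies."""
    result = defaultdict(int)
    stack = list(comb.items())
    while stack:
        word, c = stack.pop()
        for i in range(len(word) - 1):
            rep = rewrite_pair(word[i], word[i + 1])
            if rep is not None:
                for mid, c2 in rep.items():
                    stack.append((word[:i] + mid + word[i + 2:], c * c2))
                break
        else:                              # no pair was reducible
            result[word] += c
    return {w: c for w, c in result.items() if c != 0}
```

For instance, the word $L_1^a\partial$ reduces to $\partial L_1^a + L_0^a$, in accordance with \eqref{eq:rel_A(X)1}.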
The defining relations $T$ of $M$ may be written as elements of the free $A(X)$-module generated by $Y$ in the same way as it is done with the relations of~$C$. The following statement describes the setting for Gr\"obner--Shirshov bases computation in $M$, which is less technical than the one proposed in \cite{ChenLu2020}. Moreover, our technique is based on the ``ordinary'' associative algebra, and thus the computations may be performed with any of the existing computer algebra packages. Let $C$ be an associative conformal algebra generated by a set $X$ (relative to a locality function $N$) with defining relations $S\subset A(X)\otimes \Bbbk X$. Suppose $M$ is a left conformal $C$-module generated by a set $Y$ (relative to a locality function $N'$) with defining relations $T\subset A(X)\otimes \Bbbk Y$. Then the split null extension $C\oplus M$ is an associative conformal algebra generated by $X\cup Y$ relative to the locality function $N$ that extends the localities on $X\times X$ and $X\times Y$ by $N(Y,X)=N(Y,Y)=0$. \begin{theorem}\label{thm:GSB-mod} The defining relations of $C\oplus M$ are $S\cup T$ along with $L_n^y u$, $y\in Y$, $n\ge 0$, $u\in A(X\cup Y)\otimes \Bbbk(X\cup Y)$. \end{theorem} \begin{proof} Recall that $(M\oo\lambda M) = (M\oo\lambda C) = 0$ in $C\oplus M$. Note that \eqref{eq:rel_A(X)1}--\eqref{eq:rel_A(X)3} is a Gr\"obner--Shirshov basis, so $A(X)\subset A(X\cup Y)$ and $A(Y)\subset A(X\cup Y)$. It is sufficient to prove that an arbitrary element of $A(X\cup Y)\otimes (\Bbbk X \oplus \Bbbk Y)$ is equivalent modulo the relations mentioned in the statement either to an element of $A(X)\otimes \Bbbk X$ or to an element of $A(X)\otimes \Bbbk Y$. First, assume $f \in A(X\cup Y)\otimes \Bbbk Y$, $f = u\otimes y$, where $u$ is a monomial in $A(X\cup Y)$. Without loss of generality we may suppose \[ u = \partial ^s L_{n_1}^{x_1}\dots L_{n_k}^{x_k} R_{m_1}^{z_1}\dots R_{m_p}^{z_p}, \] where $x_i\in X$, $z_j\in X\cup Y$. 
If $z_p\in X$ then the image of $u\otimes y $ is zero due to the relation \eqref{eq:mod_A(X)2} which allows rewriting $R_n^xy$ via $L_n^yx$. If $z_p\in Y$ then one may apply the relation in \eqref{eq:mod_A(X)2} and obtain zero again due to $L_n^y$. Hence, $u$ contains only $L_{n_i}^{x_i}\in A(X)$ as required. Next, assume $f = u\otimes x$, where $u$ is a monomial in $A(X\cup Y)$ and $x\in X$. Present $u$ in the same form as above and choose the maximal $j=1,\dots, p$ such that $z_j\in Y$. (If there is no such $j$ then $f$ belongs to $A(X)\otimes \Bbbk X$.) Then $R_{m_{j+1}}^{z_{j+1}}\dots R_{m_p}^{z_p}x $ may be rewritten in terms of $L_n^z$, $z\in X$, $n\ge 0$ via \eqref{eq:mod_A(X)2}. One may interchange $R_{m_j}^{z_j}$ with all these operators via \eqref{eq:rel_A(X)3} and thus reduce $f$ to a linear combination of words ending with $R_m^{z_j} z$, $z\in X$. The latter rewrite to expressions ending with $z_j$. If there exists at least one more $z_l \in Y$ ($l=1,\dots, j-1$) then $f$ is equivalent to zero as shown in the previous paragraph. Otherwise, $f$ is equivalent to an expression from $A(X)\otimes \Bbbk Y$ as required. \end{proof} \begin{corollary}\label{cor:GSB_mod} A Gr\"obner--Shirshov basis of a conformal module $M$ over an associative conformal algebra $C$ consists of all those relations from a Gr\"obner--Shirshov basis of $C\oplus M$ that contain a single letter $y\in Y$. \end{corollary} In order to construct a Gr\"obner--Shirshov basis of $M$ according to Corollary \ref{cor:GSB_mod} one should proceed as follows. Define the associative algebra $A$ generated by $B(X\cup Y)\setminus \{L_n^y \mid y\in Y, n\in \mathbb Z_+\}$ with respect to the defining relations \eqref{eq:rel_A(X)1}--\eqref{eq:rel_A(X)3} (for $a\in X\cup Y$). The reason is that we do not need $L_n^y$, which is the identically zero operator, but we need $R_n^y$ for $y\in Y$. 
Consider the free $A$-module generated by $X\cup Y$ relative to the defining relations \eqref{eq:mod_A(X)1}--\eqref{eq:mod_A(X)2} (for $a\in X$, $b\in X\cup Y$). Add the defining relations $S\cup T$ and find a Gr\"obner--Shirshov basis of the obtained set of relations. Finally, choose those relations that contain a single letter from $Y$. \section{Poisson enveloping algebras as conformal modules} Let us apply the technique described above to find a Gr\"obner--Shirshov basis of $H\otimes P(\mathfrak g)$ as a conformal module over the associative conformal algebra $U=U(\mathop {\fam 0 Cur}\nolimits\mathfrak g, N=3)$. As a necessary ingredient, we need the Gr\"obner--Shirshov basis of $U$ found in \cite{KK2020}. Let us fix a linear basis $X$ of $\mathfrak g$, and let $Y=\{1\}$. Assume the set $X\cup Y$ is linearly ordered in such a way that $1>X$. We will denote $R_n^1$ simply by $R_n$. For associative envelopes of Lie conformal algebras, the algebra $A$ may be slightly modified by adding the family of commutation relations on $L_n^x$. 
Namely, let $A$ be an associative algebra generated by the set \[ B = \{\partial, R_n, L_n^a, R_n^a \mid n\ge 0, a\in X \} \] relative to the following defining relations written as rewriting rules: \begin{equation}\label{eq:U3-rulesA} \begin{gathered} \partial L_0^a \to L_0^a\partial , \quad \partial L_1^a\to L_1^a\partial -L_0^a, \\ L_n^a\partial \to \partial L_n^a + nL_{n-1}^a,\quad n\ge 2, \\ R_n^a\partial \to \partial R_n^a + nR_{n-1}^a,\quad n\ge 0, \\ R_n \partial \to \partial R_n + nR_{n-1},\quad n\ge 0, \\ R_m^aL_n^b \to L_n^bR_m^a,\quad n,m\ge 0, \\ R_m L_n^b \to L_n^bR_m ,\quad n,m\ge 0, \\ L_n^aL_m^b \to L_m^bL_n^a + L_{n+m}^{[a,b]},\quad (n,a)>_{lex}(m,b); \end{gathered} \end{equation} Consider a left $A$-module generated by the set $X\cup \{1\}$ with defining relations \begin{equation}\label{eq:U3-rulesB} \begin{gathered} L_n^ab\to 0,\quad R_n^ab\to 0,\quad n\ge 3, \\ L_0^a 1\to 0,\quad L_n^a 1 \to 0,\quad n\ge 2, \\ R_n^a 1\to 0, \quad R_n1\to 0,\quad n\ge 0, \\ R_2^ab \to L_2^ab,\quad R_1^ab\to L_1^{a}b , \\ R_2a \to 0, \quad R_1a\to -L_1^a1,\quad R_0a\to -\partial L_1^a1, \\ R_0^ab \to L_0^ab - [a,b]; \end{gathered} \end{equation} \begin{equation}\label{eq:U3-rulesC} \begin{gathered} L_2^ab \to L_2^ba,\quad a>b, \\ \partial L_2^ab \to L_1^ab +L_1^ba , \\ L_1^a\partial b \to L_1^b\partial a + 3L_0^ab-3L_0^ba -2[a,b], \quad a>b. \end{gathered} \end{equation} In order to get a Gr\"obner--Shirshov basis of $U(\mathop {\fam 0 Cur}\nolimits\mathfrak g, N=3)$ it is enough to choose the relations without $R_m$ or $1$ and add the following rewriting rules \cite{KK2020}. 
\begin{equation}\label{eq:GSB-Ds} \begin{gathered} L_1^a\partial^sb \to L_1^b\partial^sa - (s+2)L_0^b\partial^{s-1} a + (s+2)L_0^a\partial^{s-1}b - 2\partial^{s-1}[a, b], \\ \quad s\ge2,\ a>b, \end{gathered} \end{equation} \begin{gather} L_2^aL_2^bc \to 0, \quad a,b,c\in X, \label{eq:GSB-2.2} \\ L_1^aL_2^bc \to L_1^bL_2^ca ,\quad b\le c<a, \label{eq:GSB-1.2} \\ L_1^aL_2^bc \to L_1^bL_2^ac ,\quad b<a\le c, \label{eq:GSB-1.2'} \end{gather} \begin{multline} L_1^aL_1^bc \to L_1^c L_1^a b + L_0^b L_2^c a - L_0^c L_2^a b + L_2^c [a,b] + L_2^a [c, b],\\ c<a\le b, \label{eq:GSB-1.1'} \end{multline} \begin{multline} L_1^aL_1^bc \to L_1^a L_1^c b + L_0^b L_2^a c - L_0^c L_2^a b + L_2^a [c, b] \\ + L_2^b [c,a] + L_2^c [a,b],\quad a\le c<b, \label{eq:GSB-1.1} \end{multline} \begin{multline} L_0^aL_1^bc \to L_0^a L_1^c b + L_0^b L_1^a c + L_0^c L_1^b a - L_0^b L_1^c a - L_0^c L_1^a b + L_1^{[c,a]} b \\ + L_1^{[a,b]} c + L_1^{[b,c]} a - L_1^c [a,b] - L_1^a [b,c] - L_1^b [c,a] , \quad c<b<a, \label{eq:GSB-0.1} \end{multline} \begin{theorem}\label{thm:PBW} The rules \eqref{eq:U3-rulesA}--\eqref{eq:GSB-0.1} form a set of defining relations for the split null extension $E = U(\mathop {\fam 0 Cur}\nolimits\mathfrak g, N=3)\oplus (H\otimes P(\mathfrak g))$. To construct a Gr\"obner--Shirshov basis, it is enough to add the rules \[ L_0^a\partial^s 1 \to 0,\quad a\in X, \ s>0, \] and \begin{equation}\label{eq:Leibniz} L_0^a L_1^{b_1}\dots L_1^{b_k} \partial ^s 1 \to \sum\limits_{i=1}^k L_1^{b_1}\dots L_1^{[a,b_i]} \dots L_1^{b_k} \partial ^s1 \end{equation} for $a,b_1,\dots ,b_k\in X$, $b_1\le \dots \le b_k$, $s\ge 0$, $k\ge 1$. \end{theorem} Hence, the specific defining relations of $H\otimes P(\mathfrak g)$ as a conformal module over $U(\mathop {\fam 0 Cur}\nolimits \mathfrak g, N=3)$ are \[ L_n^a1\to 0,\quad n=0,2,3,\dots . \] \begin{proof} Let us start with the intersection of the second relation in \eqref{eq:U3-rulesC} with $R_0\partial \to \partial R_0 $. 
On the one hand, \begin{multline*} R_0\partial L_2^a b \to \partial R_0L_2^ab \to \partial L_2^aR_0b \to -\partial L_2^a\partial L_1^b 1 \to -\partial L_2^aL_1^b \partial 1 + \partial L_2^aL_0^b 1 \\ \to -\partial L_1^bL_2^a\partial 1 -\partial L_3^{[a,b]}\partial 1 + \partial L_0^bL_2^a1 +\partial L_3^{[a,b]}1 \to -2\partial L_1^bL_1^a 1 \\ \to -2L_1^bL_1^a\partial 1 + 2L_0^bL_1^a 1 + 2L_1^bL_0^a 1 \to -2L_1^bL_1^a\partial 1 + 2L_0^bL_1^a 1. \end{multline*} On the other hand, \begin{multline*} R_0L_1^a b + R_0L_1^ba \to -L_1^a\partial L_1^b1 - L_1^b\partial L_1^a 1 \to -L_1^a L_1^b \partial 1 - L_1^b L_1^a \partial 1 \\ + L_1^aL_0^b1 + L_1^bL_0^a1 \to -2L_1^bL_1^a\partial 1 - L_2^{[a,b]} \partial 1 \to -2L_1^bL_1^a\partial 1 - 2L_1^{[a,b]}1 \end{multline*} Therefore, the composition is \[ L_0^bL_1^a 1 + L_1^{[a,b]} 1 \] which is equivalent to the rewriting rule \[ L_0^bL_1^a 1 \to L_1^{[b,a]} 1,\quad a,b\in X. \] The latter is exactly \eqref{eq:Leibniz} for $k=1$. Proceed by induction on $k$. Assume \eqref{eq:Leibniz} holds for some $k$ (with $s=0$); then the composition coming from the intersection with $L_1^{b_{k+1}}L_0^a \to L_0^aL_1^{b_{k+1}} + L_1^{[b_{k+1},a]}$ leads to the same sort of rule for $k+1$. The compositions of the rule \eqref{eq:Leibniz} with $\partial L_0^a\to L_0^a\partial $ give rise to the desired relations for $s\ge 1$. It is straightforward to check that the remaining compositions are all trivial. For example, let us consider the intersection of \eqref{eq:GSB-0.1} with $R_1L_0^a\to L_0^aR_1$. On the one hand, \[ R_1L_0^aL_1^bc \to L_0^aL_1^bR_1c \to -L_0^aL_1^bL_1^c1 \to -L_1^{[a,b]}L_1^c 1 - L_1^bL_1^{[a,c]} 1. 
\] On the other hand, \begin{multline*} R_1L_0^aL_1^bc \to L_0^a L_1^c R_1b + L_0^b L_1^a R_1c + L_0^c L_1^b R_1a - L_0^b L_1^c R_1 a - L_0^c L_1^a R_1b \\ + L_1^{[c,a]} R_1b + L_1^{[a,b]} R_1 c + L_1^{[b,c]} R_1a - L_1^c R_1[a,b] - L_1^a R_1[b,c] - L_1^b R_1[c,a] \\ \to -L_0^a L_1^c L_1^b 1 - L_0^b L_1^a L_1^c1 - L_0^c L_1^b L_1^a1 +L_0^b L_1^c L_1^a 1 + L_0^c L_1^a L_1^b 1 - L_1^{[c,a]} L_1^b 1 \\ - L_1^{[a,b]} L_1^c 1 -L_1^{[b,c]} L_1^a 1 + L_1^c L_1^{[a,b]} 1 + L_1^a L_1^{[b,c]} 1 + L_1^b L_1^{[c,a]} 1 \\ = \big( -L_0^a L_1^c L_1^b 1 + L_1^{[a,c]} L_1^b 1 + L_1^c L_1^{[a,b]} 1 \big) +\big (-L_0^b L_1^a L_1^c1 + L_1^{[b,a]} L_1^c 1 \\ + L_1^a L_1^{[b,c]} 1 \big) + \big( L_0^c L_1^a L_1^b 1 -L_0^c L_1^b L_1^a1 \big) + \big (L_0^b L_1^c L_1^a 1 -L_1^{[b,c]} L_1^a 1 \big) + L_1^b L_1^{[c,a]} 1 \\ \to L_0^c L_2^{[a,b]} 1 + L_1^c L_1^{[b,a]} 1 + L_1^bL_1^{[c,a]} 1. \end{multline*} The last two expressions are equal modulo the relations $L_2^x1\to 0$, $x\in \mathfrak g$. Note that: \[ L_2^aL_1^b 1 \to L_1^bL_2^a 1 + L_3^{[a,b]}1 \to 0\] \[ L_n^a \partial 1 \to \partial L_n^a 1 + n L_{n-1}^a 1 \to 0, \qquad n\ge 3\] We will use these last two relations without explicit mention. Let us check \eqref{eq:U3-rulesC}. 
First relation, multiplied by $R_0$, $a>b$: \begin{multline*} R_0L_2^a b \to - L_2^a \partial L_1^b 1 \to - L_2^aL_1^b \partial 1 + L_2^aL_0^b1\to -L_1^bL_2^a\partial 1 - L_3^{[a,b]}\partial 1 \\ \to -L_1^bL_2^a\partial 1 - \partial L_3^{[a,b]}1 - 3L_2^{[a,b]}1 \to -L_1^bL_2^a\partial 1 \to -L_1^b\partial L_2^a 1 - 2 L_1^bL_1^a 1 \\\to -2 L_1^bL_1^a 1\end{multline*} \[ R_0 L_2^b a \to \dots - 2L_1^aL_1^b 1 \to -2 L_1^bL_1^a 1 - 2L_2^{[a,b]}1 \to -2L_1^bL_1^a 1\] First relation, multiplied by $R_1$, $a>b$: \[ R_1L_2^ab\to L_2^aR_1b\to -L_2^aL_1^b1\to - L_1^bL_2^a1-L_3^{[a,b]}1\to 0 \] \[ R_1L_2^ba\to\dots\to 0 \] Second relation, multiplied by $R_0$: \begin{multline*} R_0\partial L_2^a b \to - \partial L_2^a \partial L_1^b 1 \to -\partial L_2^a L_1^b \partial 1 \to -\partial L_1^b L_2^a \partial 1 \\\to -\partial L_1^b \partial L_2^a 1 - 2 \partial L_1^b L_1^a 1 \\\to -2L_1^b\partial L_1^a 1 + 2 L_0^b L_1^a 1 \to -2 L_1^b L_1^a \partial 1 + 2 L_1^{[b,a]}1 \end{multline*} \begin{multline*} R_0L_1^a b + R_0 L_1^b a \to - L_1^a L_1^b \partial 1 - L_1^bL_1^a \partial 1 = -2L_1^bL_1^a\partial 1 - L_2^{[a,b]}\partial 1 \\\to -2L_1^bL_1^a\partial 1 - 2 L_1^{[a,b]}1 = -2L_1^bL_1^a\partial 1 + 2 L_1^{[b,a]}1 \end{multline*} Second relation, multiplied by $R_1$: \begin{multline*} R_1\partial L_2^a b \to -\partial L_2^a L_1^b 1 - L_2^a L_1^b \partial 1 \to -\partial L_1^b L_2^a 1 - L_1^b L_2^a \partial 1 \to - 2L_1^b L_1^a 1 \end{multline*} \begin{multline*} R_1L_1^a b + R_1 L_1^b a \to - L_1^a L_1^b 1 - L_1^b L_1^a 1 \to -2L_1^bL_1^a 1 - L_2^{[a,b]}1 \to -2 L_1^bL_1^a 1 \end{multline*} Third relation, multiplied by $R_0$, $a>b$: \begin{multline*} R_0L_1^a\partial b \to L_1^a\partial R_0b\to -L_1^a\partial^2L_1^b1\to -L_1^a\partial L_1^b\partial 1 + L_1^a\partial L_0^b 1 \\\to -L_1^a L_1^b \partial^2 1 +L_1^aL_0^b \partial 1 \to -L_1^a L_1^b \partial^2 1 \to -L_1^bL_1^a\partial^2 1 - L_2^{[a,b]}\partial^2 1 \\ \to -L_1^bL_1^a\partial^2 1 - \partial L_2^{[a,b]}\partial 1 - 
2L_1^{[a,b]}\partial 1 \\\to -L_1^bL_1^a\partial^2 1 - \partial^2 L_2^{[a,b]}1 - 2\partial L_1^{[a,b]}1 - 2L_1^{[a,b]}\partial 1\to -L_1^bL_1^a\partial^2 1 -4L_1^{[a,b]}\partial 1 \end{multline*} \begin{multline*} R_0L_1^b\partial a + 3R_0L_0^ab-3R_0L_0^ba-2R_0[a,b] \\ \to - L_1^b\partial^2 L_1^a1 -3L_0^a\partial L_1^b1 + 3L_0^b\partial L_1^a1 + 2 \partial L_1^{[a,b]}1 \\\to -L_1^b\partial L_1^a\partial 1 + L_1^b\partial L_0^a 1 - 3 L_0^aL_1^b\partial 1 + 3 L_0^bL_1^a\partial 1 + 2L_1^{[a,b]}\partial 1 \\ \to -L_1^bL_1^a \partial^2 1 + L_1^bL_0^a\partial 1 -3 L_0^aL_1^b\partial 1 + 3 L_0^bL_1^a\partial 1 + 2L_1^{[a,b]}\partial 1 \\ \to -L_1^bL_1^a \partial^2 1 -3 L_0^aL_1^b\partial 1 + 3 L_0^bL_1^a\partial 1 + 2L_1^{[a,b]}\partial 1 \\\to -L_1^bL_1^a \partial^2 1 - 3 L_1^{[a,b]}\partial 1 + 3L_1^{[b,a]}\partial 1+2L_1^{[a,b]}\partial 1 \to -L_1^bL_1^a\partial^2 1 -4L_1^{[a,b]}\partial 1 \end{multline*} Third relation, multiplied by $R_1$, $a>b$: \begin{multline*} R_1L_1^a\partial b \to - L_1^a\partial L_1^b 1 - L_1^a\partial L_1^b 1 \to -2L_1^aL_1^b\partial 1 \to -2L_1^bL_1^a \partial 1 - 2L_2^{[a,b]}\partial 1 \\\to -2L_1^bL_1^a \partial 1 - 4L_1^{[a,b]}1 \end{multline*} \begin{multline*} R_1L_1^b\partial a + 3R_1L_0^ab-3R_1L_0^ba-2R_1[a,b] \\ \to -L_1^b\partial L_1^a 1 - L_1^b \partial L_1^a 1 - 3L_0^aL_1^b 1 + 3 L_0^b L_1^a 1 + 2 L_1^{[a,b]}1 \\\to -2L_1^bL_1^a\partial 1 -3 L_1^{[a,b]}1 + 3 L_1^{[b,a]}1 + 2 L_1^{[a,b]}1 \to -2L_1^bL_1^a\partial 1 -4L_1^{[a,b]}1 \end{multline*} Let us check \eqref{eq:GSB-Ds}, $s\ge 2$, $a>b$. 
Left part of the relation, multiplied by $R_0$: \begin{multline*} R_0L_1^a\partial^s b \to -L_1^a\partial^{s+1}L_1^b 1 \to -L_1^a \partial^s L_1^b \partial 1 \to - L_1^a L_1^b \partial^{s+1} 1 - sL_1^a L_0^b \partial^s 1 \\\to -L_1^bL_1^a\partial^{s+1}1 -L_2^{[a,b]}\partial^{s+1} 1 - sL_0^bL_1^a \partial^s 1 - sL_1^{[a,b]}\partial^s 1 \\\to -L_1^bL_1^a\partial^{s+1}1-L_2^{[a,b]}\partial^{s+1}1-sL_1^{[b,a]}\partial^s 1 - s L_1^{[a,b]}\partial^s 1 \\\to -L_1^bL_1^a\partial^{s+1}1-L_2^{[a,b]}\partial^{s+1}1\to -L_1^bL_1^a\partial^{s+1}1-2(s+1)L_1^{[a,b]}\partial^{s}1 \end{multline*} Right part of the relation (one term at a time), multiplied by $R_0$: \begin{multline*} R_0L_1^b\partial^s a \to - L_1^b\partial^{s+1} L_1^a 1 \to -L_1^bL_1^a\partial^{s+1}1 + (s+1)L_1^bL_0^a\partial^{s} 1 \\\to -L_1^bL_1^a\partial^{s+1}1 \end{multline*} \begin{multline*} -R_0(s+2)L_0^b\partial^{s-1}a \to (s+2)L_0^b\partial^{s}L_1^a 1 \\\to (s+2)L_0^bL_1^a \partial^s 1 - s(s+2)L_0^bL_0^a \partial^{s-1} 1\to (s+2) L_1^{[b,a]}\partial^{s}1 \end{multline*} \[ R_0(s+2)L_0^a\partial^{s-1}b \to -(s+2)L_1^{[a,b]}\partial^s 1 \] \begin{multline*} -2R_0\partial^{s-1}[a,b]\to 2\partial^{s}L_1^{[a,b]}1 \to 2L_1^{[a,b]}\partial^{s}1 - 2sL_0^{[a,b]}\partial^{s-1}1 \\\to 2L_1^{[a,b]}\partial^{s}1 \end{multline*} So, the right part goes to \begin{multline*} -L_1^bL_1^a\partial^{s+1}1 + (s+2) L_1^{[b,a]}\partial^{s}1 -(s+2)L_1^{[a,b]}\partial^s 1 + 2L_1^{[a,b]}\partial^{s}1 \\\to -L_1^bL_1^a\partial^{s+1}1 + 2(s+1) L_1^{[b,a]}\partial^{s}1 = -L_1^bL_1^a\partial^{s+1}1 - 2(s+1) L_1^{[a,b]}\partial^{s}1 \end{multline*} Left part of relation \eqref{eq:GSB-Ds}, multiplied by $R_1$. 
\begin{multline*} R_1L_1^a\partial^s b \to -L_1^a\partial^s L_1^b 1 - sL_1^a\partial^{s}L_1^b 1 \to -(s+1)L_1^aL_1^b\partial^s 1 - s L_1^aL_0^b\partial^{s-1}1 \\\to -(s+1)L_1^bL_1^a\partial^s 1 - (s+1)L_2^{[a,b]}\partial^s 1 \to -(s+1)L_1^bL_1^a\partial^s 1 - 2(s+1)s L_1^{[a,b]}\partial^{s-1}1 \end{multline*} Right part of relation \eqref{eq:GSB-Ds} (one term at a time), multiplied by $R_1$: \[ R_1L_1^b\partial^s a \to -(s+1)L_1^bL_1^a\partial^s 1 \] \begin{multline*} -R_1(s+2)L_0^b\partial^{s-1}a \to (s+2)L_0^b\partial^{s-1}L_1^a 1 + (s-1)(s+2)L_0^b\partial^{s-1}L_1^a 1 \\\to s(s+2)L_1^{[b,a]}\partial^{s-1}1 \end{multline*} \[ R_1(s+2)L_0^a\partial^{s-1}b \to -s(s+2)L_1^{[a,b]}\partial^{s-1}1 \] \[ -2R_1\partial^{s-1}[a,b]\to 2\partial^{s-1}L_1^{[a,b]}1 + 2(s-1)\partial^{s-1}L_1^{[a,b]} 1 \to 2sL_1^{[a,b]}\partial^{s-1} 1 \] So, the right part goes to \[ -(s+1)L_1^bL_1^a\partial^s 1 -2s(s+1)L_1^{[a,b]}\partial^{s-1}1 \] Let us check \eqref{eq:GSB-2.2}, $b\le c < a$. \begin{multline*}R_0L_2^aL_2^b c \to - L_2^aL_2^b \partial L_1^c 1 \to -L_2^aL_1^cL_2^b \partial 1 \to -L_1^cL_2^aL_2^b \partial 1 \\\to -2L_1^cL_2^a L_1^b 1 \to 0\end{multline*} \[R_1L_2^aL_2^b c \to -L_2^aL_2^bL_1^c 1 \to 0 \] Let us check \eqref{eq:GSB-1.2}, $b \le c < a$. \[R_0L_1^aL_2^b c \to -L_1^aL_2^b\partial L_1^c 1 \to -L_1^a\partial L_2^bL_1^c 1 - 2 L_1^aL_1^bL_1^c 1 \to -2L_1^bL_1^cL_1^a 1 \] \[ R_0L_1^bL_2^c a \to -2 L_1^bL_1^cL_1^a 1\] \[ R_1L_1^aL_2^b c \to -L_1^aL_2^bL_1^c 1 \to 0\] \[ R_1L_1^bL_2^c a \to 0\] \eqref{eq:GSB-1.2'} is checked similarly. Let us check \eqref{eq:GSB-1.1'}, $c< a \le b$. 
Left part of relation, multiplied by $R_0$ \begin{multline*} R_0L_1^aL_1^b c \to - L_1^aL_1^b\partial L_1^c 1 \to -L_1^aL_1^bL_1^c \partial 1 \to -L_1^a L_1^c L_1^b \partial 1 - L_1^aL_2^{[b,c]}\partial 1 \\\to -L_1^cL_1^aL_1^b \partial 1 - L_2^{[a,c]}L_1^b \partial 1 - 2L_1^aL_1^{[b,c]} 1 \\\to -L_1^cL_1^aL_1^b \partial 1 - L_1^bL_2^{[a,c]}\partial 1 -2 L_1^aL_1^{[b,c]} 1 \\\to -L_1^cL_1^aL_1^b \partial 1 - 2L_1^bL_1^{[a,c]}1 -2L_1^aL_1^{[b,c]}1 \end{multline*} Right part of relation, multiplied by $R_0$ (one term at a time) \[ R_0L_1^cL_1^a b \to -L_1^cL_1^aL_1^b \partial 1 \] \begin{multline*} R_0L_0^bL_2^c a \to -L_0^bL_2^c\partial L_1^a 1 \to -L_0^bL_2^cL_1^a \partial 1 \to -L_0^bL_1^aL_2^c \partial 1 \\\to -2L_0^bL_1^aL_1^c 1 \to -2 L_1^{[b,c]}L_1^a 1 -2 L_1^cL_1^{[b,a]}1 \end{multline*} \[ -R_0L_0^cL_2^a b \to 2L_0^cL_1^aL_1^b 1 \to 2 L_1^{[c,a]}L_1^b 1 + 2 L_1^a L_1^{[c,b]}1 \] \[ R_0L_2^c[a,b]\to -L_2^c\partial L_1^{[a,b]}1 \to -L_1^{[a,b]}L_2^c\partial 1 \to -2L_1^{[a,b]}L_1^c 1\] \[ R_0L_2^a[c,b] \to -2 L_1^{[c,b]}L_1^a 1 \] So, right part goes to \[ -L_1^cL_1^aL_1^b\partial 1 + 2 L_1^{[c,a]}L_1^b 1 + 2 L_1^a L_1^{[c,b]}1 \] Now, \eqref{eq:GSB-1.1'}, multiplied by $R_1$. Left part: \[ R_1L_1^aL_1^b c \to -L_1^aL_1^bL_1^c 1 \to -L_1^aL_1^cL_1^b1 \to -L_1^cL_1^aL_1^b 1 \] Right part: \[ R_1L_1^cL_1^a b \to -L_1^cL_1^aL_1^b 1\] \[ R_1L_0^bL_2^c a \to - L_0^bL_2^cL_1^a 1 \to 0 \] \[ -R_1L_0^cL_2^a b \to 0\] \[ R_1L_2^c[a,b] \to - L_2^cL_1^{[a,b]} 1 \to 0\] \[ R_1L_2^a[c,b]\to 0\] So, right part goes to \[-L_1^cL_1^aL_1^b 1\] Let us check \eqref{eq:GSB-1.1}, $a\le c < b$ . 
Left part, multiplied by $R_0$: \begin{multline*} R_0L_1^aL_1^b c \to - L_1^aL_1^b\partial L_1^c 1 \to -L_1^aL_1^bL_1^c \partial 1 \to -L_1^a L_1^c L_1^b \partial 1 - L_1^aL_2^{[b,c]}\partial 1 \\\to -L_1^aL_1^cL_1^b \partial 1 - 2L_1^aL_1^{[b,c]} 1 \end{multline*} Right part, multiplied by $R_0$: \[ R_0L_1^aL_1^c b \to -L_1^aL_1^cL_1^b \partial 1\] \begin{multline*} R_0L_0^bL_2^a c \to -L_0^bL_2^aL_1^c \partial 1 \to -L_0^bL_1^cL_2^a \partial 1 \\\to -2L_0^bL_1^aL_1^c 1 \to -2L_1^{[b,a]}L_1^c1 -2L_1^aL_1^{[b,c]}1 \end{multline*} \[ -R_0 L_0^cL_2^a b \to -2L_0^cL_1^aL_1^b 1 \to 2L_1^{[c,a]}L_1^b 1 + 2L_1^aL_1^{[c,b]}1 \] \[ R_0L_2^x[y,z]\to - L_2^x L_1^{[y,z]} \partial 1 \to -2L_1^{[y,z]}L_1^x 1 \] In the last relation, $x,y,z\in \{a,b,c\}$. So, the right part goes to \[ -L_1^aL_1^cL_1^b \partial 1 -2L_1^aL_1^{[b,c]}1 \] The check for \eqref{eq:GSB-1.1}, multiplied by $R_1$, is quite similar to that for \eqref{eq:GSB-1.1'}, since: \[ R_1L_2^x[y,z]\to -L_2^xL_1^{[y,z]}1 \to 0 \] Let us check \eqref{eq:GSB-0.1}, $c<b<a$. Left part of the relation, multiplied by $R_0$: \begin{multline*} R_0L_0^aL_1^b c \to -L_0^aL_1^b L_1^c \partial 1 \to -L_0^aL_1^cL_1^b \partial 1 - 2L_0^aL_1^{[b,c]} 1 \\\to -L_1^{[a,c]}L_1^b\partial 1-L_1^cL_1^{[a,b]}\partial 1 - 2L_1^{[a,[b,c]]}1 \end{multline*} Right part of the relation, multiplied by $R_0$: \[ R_0L_0^aL_1^c b \to -L_1^{[a,c]}L_1^b 1 - L_1^cL_1^{[a,b]}1 \] \[ R_0L_0^bL_1^a c -R_0L_0^bL_1^ca \to L_0^bL_1^cL_1^a\partial 1 + 2L_0^bL_1^{[a,c]} 1 - L_0^bL_1^cL_1^a\partial 1 \to 2L_1^{[b,[a,c]]}1 \] \[ R_0L_0^cL_1^b a -R_0L_0^cL_1^ab \to -L_0^cL_1^bL_1^a\partial 1 + L_0^cL_1^b L_1^a\partial 1 + 2L_0^cL_1^{[a,b]}1 \to 2L_1^{[c,[a,b]]}1 \] \[ R_0L_1^{[c,a]}b+R_0L_1^{[a,b]}c+R_0L_1^{[b,c]}a \to -L_1^{[c,a]}L_1^{b}\partial 1 -L_1^{[a,b]}L_1^c \partial 1 - L_1^{[b,c]}L_1^a \partial 1 \] \[ -R_0L_1^c[a,b]-R_0L_1^a[b,c]-R_0L_1^b[c,a]\to L_1^cL_1^{[a,b]}\partial 1+L_1^aL_1^{[b,c]}\partial 1+L_1^bL_1^{[c,a]}\partial 1\] So, the composition is 0 by the Jacobi identity. 
Left part of the relation, multiplied by $R_1$: \[ R_1L_0^aL_1^b c \to -L_0^aL_1^bL_1^c 1 \to -L_0^aL_1^cL_1^b 1 \to -L_1^{[a,c]}L_1^b 1 - L_1^cL_1^{[a,b]}1 \] Right part of the relation, multiplied by $R_1$: \[ R_1L_0^aL_1^c b \to -L_1^{[a,c]}L_1^b 1 - L_1^cL_1^{[a,b]}1 \] \[ R_1L_0^bL_1^a c -R_1L_0^bL_1^ca \to -L_0^bL_1^aL_1^c 1 + L_0^bL_1^cL_1^a 1 \to 0 \] \[ R_1L_0^cL_1^b a -R_1L_0^cL_1^ab \to -L_0^cL_1^bL_1^a 1 + L_0^cL_1^aL_1^b 1 \to 0\] \[ R_1L_1^{[c,a]}b+R_1L_1^{[a,b]}c+R_1L_1^{[b,c]}a \to -L_1^{[c,a]}L_1^{b}1 -L_1^{[a,b]}L_1^c 1 - L_1^{[b,c]}L_1^a 1 \] \[ -R_1L_1^c[a,b]-R_1L_1^a[b,c]-R_1L_1^b[c,a]\to L_1^cL_1^{[a,b]}1+L_1^aL_1^{[b,c]}1+L_1^bL_1^{[c,a]}1\] So, the composition is zero. This completes the proof. \end{proof} \begin{corollary} A linear basis of $H\otimes P(\mathfrak g)$ as an $H$-module consists of the words $L_1^{b_1}\dots L_1^{b_k}1$, $b_1\le \dots \le b_k$. \end{corollary} Indeed, the reduced words ending with $1$ do not contain $L_0^a$ or $L_n^a$ for $n\ge 2$. The result agrees with the classical Poincar\'e--Birkhoff--Witt Theorem for $P(\mathfrak g) =\mathrm{gr}\,U(\mathfrak g)$. Applied to the case when $\mathfrak g$ is a free Lie algebra $\mathop {\fam 0 Lie}\nolimits (G)$ generated by a set $G$, Theorem~\ref{thm:PBW} provides us with a setting for calculating a Gr\"obner--Shirshov basis in the free Poisson algebra. Namely, every element of the free Poisson algebra $\mathop {\fam 0 Pois}\nolimits (G) =P(\mathop {\fam 0 Lie}\nolimits (G))$ may be presented as a rewriting rule in the free $A(X)$-module generated by a single element 1, where $X$ is the set of nonassociative Lyndon--Shirshov words in the alphabet~$G$. 
\begin{corollary}[Composition-Diamond Lemma for Poisson algebras] If a set $S\subset \mathop {\fam 0 Pois}\nolimits (G)$ defines a set of rewriting rules that have no nontrivial compositions with \eqref{eq:U3-rulesA} and \eqref{eq:Leibniz}, then $S$, along with the rules mentioned in Theorem~\ref{thm:PBW}, is a Gr\"obner--Shirshov basis of $H\otimes \mathop {\fam 0 Pois}\nolimits(G\mid S)$ considered as a conformal module over $U(\mathop {\fam 0 Cur}\nolimits\mathop {\fam 0 Lie}\nolimits(G), N=3)$. \end{corollary} \end{document}
\begin{document} \title{Stuck Walks} \begin{abstract} We investigate the asymptotic behaviour of a class of self-interacting nearest neighbour random walks on the one-dimensional integer lattice which are pushed by a particular linear combination of their own local time on edges in the neighbourhood of their current position. We prove that in a range of the relevant parameter of the model such random walkers can be eventually confined to a finite interval of length depending on the parameter value. The phenomenon arises as a result of competing self-attracting and self-repelling effects where in the named parameter range the former wins. \noindent {\sc MSC2010: 60K37, 60K99, 60J55} \noindent {\sc Key words and phrases:} self-interacting random walk, local time, trapping \end{abstract} \section{Introduction and main result} \label{s:intro} Let $(X_n, n \ge 0)$ be a nearest neighbour path on the one-dimensional integer lattice $\Z$, and define for each $n\in\N$ and $j\in\Z$, its local time $\ell (n,j)$ on unoriented edges: $$ \ell(n,j):=\#\{1\le m\le n\,:\, \{X_{m-1}, X_m \}=\{j-1,j\}\}. $$ Throughout this paper the unoriented edge connecting the sites $j-1$ and $j$ will be denoted by $j$. We fix a real parameter $\alpha$ and define \begin{align} \label{Delta} \Delta(n,j) := -\alpha\ell(n,j-1)+\ell(n,j)-\ell(n,j+1)+\alpha\ell(n,j+2) \end{align} for all $j \in \Z$ and $n \ge 0$. We then also define $ \Delta_n = \Delta (n, X_n)$, which is therefore a particular linear combination of the number of visits by $X$ before time $n$ to the edges near $X_n$. 
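These definitions are straightforward to evaluate on a concrete path; a minimal sketch (the function names are ours, and the edge $\{j-1,j\}$ is labelled by $j$ as in the text):

```python
from collections import Counter

def edge_local_times(path):
    """ell(n, j): number of traversals of the unoriented edge {j-1, j}
    during the first n steps of the nearest neighbour path X_0, ..., X_n.
    The edge {j-1, j} is labelled by its larger endpoint j."""
    ell = Counter()
    for prev, cur in zip(path, path[1:]):
        ell[max(prev, cur)] += 1
    return ell

def delta(path, j, alpha):
    """Delta(n, j) = -alpha*ell(n, j-1) + ell(n, j) - ell(n, j+1) + alpha*ell(n, j+2),
    where n is the number of steps of the given path."""
    ell = edge_local_times(path)
    return -alpha * ell[j - 1] + ell[j] - ell[j + 1] + alpha * ell[j + 2]
```

For the path $0,1,0,1,2,1$ one has $\ell(5,1)=3$ and $\ell(5,2)=2$, so $\Delta_5=\Delta(5,1)=3-2=1$ for any $\alpha$, and the walker sitting at $X_5=1$ feels a push to the right.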
We consider a special type of self-interacting random walk $(X_n, n \ge 0)$ with long memory started from $X_0=0$, whose law is described by the following ``dynamics'': for all $n \ge 0$, \begin{align} \label{law} \condprob{X_{n+1} = X_n \pm1}{{\mathcal F}_n} = \frac{\exp\{\pm\beta\Delta_n\}} {\exp\{\beta\Delta_n\}+\exp\{-\beta\Delta_n\}} \end{align} where $\beta>0$ is another fixed parameter of the problem and ${\mathcal F}_n = \sigma (X_0, \ldots, X_n)$. In plain words, if $ \Delta_n$ is positive (respectively, negative), then the walker will prefer to jump to the right (resp., to the left) at its $(n+1)$-st jump. We are interested in the long time asymptotic behaviour of the walk. The parameter $\alpha$ plays a crucial role. Depending on its value the qualitative behaviour varies spectacularly. The role of the parameter $\beta$ is less dramatic. The driving mechanism \eqref{law} is a generalization of the rules governing the so-called ``true'' self-avoiding random walk (or true self-repelling walk -- we will refer to it as the TSRW) in 1d. Choosing $\alpha=0$ we obtain the TSRW with edge repulsion (the latter looks at each step at the number of times it has previously jumped along its two neighbouring edges, and favors the less-visited one) while choosing $\alpha=-1$ corresponds to the TSRW with site repulsion. In these two cases non-degenerate scaling limits for $n^{-2/3}X(n)$ are proved \cite{toth_95}, \cite{toth_werner_98}, respectively conjectured \cite{amit_parisi_peliti_83}, \cite{obukhov_peliti_83}, \cite{peliti_pietronero_87}. See the survey \cite{toth_01} for more information about these two cases. It turns out that depending on the value of the parameter $\alpha$, a rather rich phase diagram emerges. We refer to \cite {erschler_toth_werner_10} for background and motivation. For $\alpha$ in $[-1,1/3)$, we expect a similar scaling behaviour as for the TSRW. 
It is intuitively clear that for $\alpha$ close to $0$ the mechanism can be viewed as a minor perturbation of the TSRW case. The fact that the interval of parameters where this kind of asymptotic behaviour is expected is exactly $\alpha\in[-1,1/3)$ follows from more detailed arguments relying on the fact that this is the range of parameters where the coefficients of the linear combination defining $\Delta$ in \eqref{Delta} correspond to a positive definite sequence. For details of this argument see \cite{erschler_toth_werner_10}, \cite{ttv}. For $\alpha\in(-\infty,-1)$, when the walk is repelled by its past visits to its neighbouring edges and even more strongly by its second-neighbouring edges, a kind of slowing down phenomenon seems to occur, where the walk gets slowed down by self-built trapping environments. A more detailed discussion can be found in \cite {erschler_toth_werner_10}. The results of the present paper will concern the range of values where $\alpha$ is positive. In this case, the walk is repelled by its local time on the edges adjacent to its current position, but attracted by its previous visits to its two next-to-neighbouring edges. As we shall see, when $\alpha>1/3$, the self-attractiveness can win and the walk can remain stuck forever on a finite interval of consecutive sites, while this fails to hold for $\alpha\le1/3$. It is therefore natural to define the (possibly infinite) interval ${\mathcal L}$ of points that are visited infinitely often by the walk. We will use a simple explicit sequence of values $(\alpha_{L}, L \ge 1)$ that decays to $1/3$, defined as follows: $\alpha_{1} = + \infty$ and for all $L \ge 2$, $$ \alpha_{L} = \frac {1} { 1 + 2 \cos (2 \pi/(L+2))}. $$ Our main result is the following: \begin {theorem} \label {thm:main} Suppose that $L \ge 1$. Then: \begin {itemize} \item If $\alpha \in (\alpha_{L+1}, \alpha_{L})$, then the probability that $\# {\mathcal L}= L +2$ is positive. 
\item If $\alpha < \alpha_{L+1}$, then almost surely, $\# {\mathcal L} > L +2 $. \end {itemize} \end {theorem} Note our convention of denoting discrete interval length: When $\# {\mathcal L} = L+2$, this means that the number of \emph{interior} lattice sites of ${\mathcal L}$ is $L$. Thus, $L+1$ will be the number of lattice edges in the interval (i.e., the length of the interval) and $L+2$ will be the number of sites in the closed interval, including the endpoints. Such a discrete interval will be of the type $\{x,x+1, \dots,x+L,x+L+1\}$, $x\in\Z$, with endpoints $x$ and $x+L+1$. \begin {figure}[htbp] \begin {center} \includegraphics [height=1.9in]{0.4.traj.eps} \includegraphics [height=1.9in]{0.4.lt.eps} \caption {A trajectory and its local time when $\alpha = 0.4$} \end {center} \end {figure} It implies in particular that $\# {\mathcal L} \in \{ 0, \infty \}$ almost surely when $\alpha \le 1/3 = \inf_{L \ge 2} \alpha_{L} $, i.e.\ that the range of the walk is infinite. This trapping phenomenon when $\alpha > 1/3$ is reminiscent of the asymptotic behaviour of the \emph{vertex reinforced random walk} in 1d, cf. \cite{pemantle_92}, \cite{pemantle_volkov_99}, \cite{tarres_04}. However, the differences are also conspicuous. The reinforcement scheme is of a different type, and here, there is no clear monotone attractiveness in the self-interaction mechanism, rather a competition between self-attraction and self-repulsion, where the self-attraction wins. Due to this, the size of the trapping range increases to infinity as the parameter value approaches the borderline between the confined and non-confined regimes, at $\alpha =1/3$. We will in fact also give a more precise description of the asymptotic behavior of the walk, or rather of its local times on edges, in the case where it is trapped. 
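A quick numerical sanity check of the threshold sequence $(\alpha_L)$ (the helper name \texttt{alpha} is ours): $\alpha_2 = 1/(1+2\cos(\pi/2)) = 1$, the sequence is strictly decreasing, and it converges to $1/3$ from above, consistent with $\inf_{L\ge 2}\alpha_L = 1/3$.

```python
import math

def alpha(L):
    """The threshold sequence of the theorem: alpha_1 = +infinity and
    alpha_L = 1 / (1 + 2*cos(2*pi/(L+2))) for L >= 2."""
    if L == 1:
        return math.inf
    return 1.0 / (1.0 + 2.0 * math.cos(2.0 * math.pi / (L + 2)))

# As L grows, cos(2*pi/(L+2)) increases to 1, so alpha_L strictly
# decreases to 1/3; the trapping window (alpha_{L+1}, alpha_L) shrinks.
```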
In the scenario that we will describe, and which happens with positive probability, the renormalized local time profile $ (n^{-1} \ell (n, j ), j \in \Z)$ becomes deterministic in the large-scale limit: we will describe for each $\alpha$ in $(\alpha_{L+1}, \alpha_L)$ an explicit sequence $u_1, \ldots, u_{L+1}$ (which follows the values of a $\sin^2$ curve along an arithmetic sequence) such that when ${\mathcal L} = \{ x , \ldots , {x+L+1} \}$ for some $x\in\Z$, one then necessarily has (up to a set of zero probability) $$ \lim_{n \to \infty} \frac {1}{n} ( \ell (n,x+1), \ldots, \ell (n,x+ L+1)) = (u_1, \ldots, u_{L+1}).$$ \section{Probabilistic part of the proof} \label{s:forcibly_confined_walk} Let $L \ge 1$ and $\alpha < \alpha_{L}$ be fixed throughout most of this section (note that by definition $\alpha_{1} = \infty$). We define an auxiliary nearest neighbour random walk $(Y_n , n \ge 0)$ confined \emph{by force} to the interval $[0,L+1]$. We will study some properties of $Y$ and see whether it is possible to couple $X$ and $Y$ in such a way that (with positive probability) they coincide forever. By abuse of notation we will also denote by $\ell(n,j)$, respectively by $\Delta(n,j)$ and ${\mathcal F}_n$, the local time on edges, respectively the linear combinations and the $\sigma$-field, defined for $Y$ just as for $X$. The law of this walk started from $Y_0= 0$ is described by its dynamics \begin{align*} \condprob{Y_{n+1}= Y_n \pm1}{{\mathcal F}_n} = \left\{ \begin{array}{cl} \displaystyle \frac{1\pm1}{2} & \text{ if } Y_n=0,\\[13pt] \displaystyle \frac{\exp\{\pm\beta\Delta_n\}} {\exp\{\beta\Delta_n\}+\exp\{-\beta\Delta_n\}} &\text{ if } Y_n \in\{1,\dots,L\},\\[13pt] \displaystyle \frac{1\mp1}{2} & \text{ if } Y_n =L+1. \end{array} \right.
\end{align*} So, $Y$ behaves exactly as $X$ except that when it is on the boundary of the interval $[0, L+1]$, it is forced to jump inwards. When $Y_n \notin \{0, L+1 \}$, we can interpret $\Delta_n$ as a local stream felt by the walker (due to its past) at time $n$. If $\Delta_n$ is positive, it will tend to jump to the right, whereas when $\Delta_n$ is negative, it will tend to jump to the left. We say that when it jumps (from $Y_n$ to $Y_{n+1}$) in the direction \emph{opposite} to the one suggested by the sign of $\Delta_n$, and when $Y_n \in [1, L]$, it performs an upstream jump of intensity $|\Delta_n|$. We will be interested in the relation between the maximal stream that the walker has ``successfully overcome'' before time $n$ and the maximal value of $\Delta (n,j)$. The main ingredient of the proof of Theorem \ref {thm:main} is the following \emph{deterministic} statement, which says that there is no way of building up a stream larger than $D \ge D_0$ somewhere in the interior of the interval without having earlier performed an upstream jump of intensity larger than $\varepsilon D$ somewhere. Its proof is given in Section \ref{s:proof_of_propo_mds}. \begin{proposition} \label{prop:mds} Suppose that $\alpha < \alpha_{L}$. There exist constants $\varepsilon=\varepsilon(\alpha, L)>0$ and $D_0=D_0(\alpha, L)<\infty$ such that for any nearest neighbour walk trajectory $(Y_n, n \ge 0) $ in $\{0,\dots,L+1\}$, any $D \ge D_0$ and any $n$, at least one of the following two statements holds: \begin{itemize} \item For all $j\in\{1,\dots,L\}$, $\abs{\Delta(n,j)}\le D$. \item During its first $n$ steps, the walk $Y$ has performed at least one upstream jump of intensity larger than $\varepsilon D$.
\end {itemize} \end{proposition} \medbreak Note that for any $n \ge 0$, if $\Delta_n >0$ and $Y_n \not= L+1$, the conditional probability that $Y_{n+1} - Y_n = -1$ given ${\mathcal F}_n$ is smaller than $\exp ( - 2 \beta \Delta_n)$. The symmetric statement holds when $\Delta_n < 0$ and $Y_n \not= 0$. It follows readily that, for any positive $n$ and $D$, the probability that $Y$ does an upstream jump of intensity greater than $\varepsilon D$ at its $n$-th jump is smaller than $\exp (-2 \beta \varepsilon D)$. Hence, the proposition implies that for all $D > D_0$ and all $n \ge 0$, $$ \prob {\max_{j\in\{1,\dots,L\}} \abs{\Delta(n,j)} \ge D } \le n e^{-2 \beta \varepsilon D}. $$ A Borel--Cantelli argument immediately implies that for the walk $Y$: \begin{corollary} \label{co:nobigstream} There exists a constant $c$ such that almost surely, $ \max_{j\in\{1,\dots,L\}} \abs{\Delta(n,j)} \le c \log n$ for all large $n$. \end{corollary} We see in particular that when $n$ is very large, all $L$ values $\Delta (n,1), \ldots, \Delta(n,L)$ are very small compared to $n$. We can keep in mind that these are simple linear combinations of the $L+1$ non-negative numbers $\ell (n,1), \ldots, \ell (n, L+1)$ that also satisfy $$ \ell (n,1) + \ldots + \ell (n, L+1) = n . $$ This leads us naturally to study the set of possible solutions $(l_1, \ldots, l_{L+1})$ to the linear system of $L+1$ equations given by \begin{equation} \label {refu} d_1 = d_2 = \cdots = d_L = 0 \hbox { and } l_1 + \cdots + l_{L+1} = 1, \end {equation} where $d_j = - \alpha l_{j-1} + l_j - l_{j+1} + \alpha l_{j+2}$, with the convention $l_{0}=l_{L+2} = 0$.
Note that the conditions $d_{1} = \ldots = d_L = 0$ mean that $ l_0=0 , \ldots, l_{L+1}, l_{L+2}=0$ are part of a bi-infinite ``Fibonacci-type'' sequence $(\tilde l_j,j \in \Z)$ that satisfies $\tilde d_j=0$ for all $j \in \Z$ (with obvious notation). Note that then, $$d_0 = - l_1 + \alpha l_2 = l_0 - l_1 + \alpha l_2 = \tilde d_0 + \alpha \tilde l_{-1} = \alpha \tilde l_{-1}$$ and similarly $d_{L+1} = -\alpha \tilde l_{L+3}$. Such bi-infinite sequences lie on a periodic curve as soon as $\alpha > 1/3$, and then, if we define $\omega \in (0, \pi)$ by $$ \cos (\omega ) = \frac {1- \alpha}{2 \alpha}, $$ such a bi-infinite sequence is necessarily of the form $$ \tilde l_j = A + B \cos ( \omega j + \varphi ) $$ for some $A$, $B$ and $\varphi$. The fact that $\tilde l_0 = 0 $ shows that it is possible to take $$ \tilde l_j = B ( \cos (\omega j + \varphi ) - \cos (\varphi)).$$ It is then immediate to check that when $\alpha < \alpha_{L}$, one can take $$ \varphi (\alpha) :=\frac {1}{2} ( {2\pi} - (L+2) \omega) $$ (so that $\tilde l_{L+2} = 0$ as well) and that $(u_1, \ldots , u_{L+1})$ given by the formula $$ u_j = \frac { \cos(\varphi ) - \cos(\omega j + \varphi )}{Z} = \frac { \sin^2 ( (\varphi /2) + j ( \omega/2) ) - \sin^2 ( \varphi/2)}{Z'} $$ (where $Z$ and $Z'$ are the normalisation constants chosen so that $u_1 + \cdots + u_{L+1} = 1$) is the solution to our system of equations. Similarly, when $\alpha \le 1/3$, the solution to (\ref {refu}) can easily be worked out. In any case, we can observe that it is non-negative (as soon as $\alpha < \alpha_L$), and that: \begin{itemize} \item When $\alpha < \alpha_{L+1}$ and $(l) = (u)$, $d_0<0$ and $d_{L+1} > 0$. \item When $\alpha \in ( \alpha_{L+1} , \alpha_{L} ) $ and $(l) = (u)$, $d_0 >0$ and $d_{L+1} < 0$. \end {itemize} This will be important later.
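The solution just described is easy to verify numerically. The sketch below (ours; all names are invented for the illustration) builds the profile from $\omega$, choosing $\varphi$ so that both boundary values $\tilde l_0$ and $\tilde l_{L+2}$ vanish, normalises it, and then checks the linear system \eqref{refu} together with the stated signs of $d_0$ and $d_{L+1}$ on both sides of $\alpha_{L+1}$:

```python
import math

def alpha_crit(K):
    """alpha_K = 1/(1 + 2*cos(2*pi/(K+2))), with alpha_1 = +infinity."""
    return math.inf if K == 1 else 1.0 / (1.0 + 2.0 * math.cos(2.0 * math.pi / (K + 2)))

def profile(alpha, L):
    """Return (l_0, ..., l_{L+2}) with l_0 = l_{L+2} = 0 and l_1 + ... + l_{L+1} = 1."""
    omega = math.acos((1.0 - alpha) / (2.0 * alpha))   # needs alpha > 1/3
    phi = (2.0 * math.pi - (L + 2) * omega) / 2.0      # makes cos(omega*(L+2)+phi) = cos(phi)
    raw = [math.cos(phi) - math.cos(omega * j + phi) for j in range(L + 3)]
    Z = sum(raw[1:L + 2])
    return [v / Z for v in raw]

def d(l, j, alpha):
    """d_j = -alpha*l_{j-1} + l_j - l_{j+1} + alpha*l_{j+2}, with l = 0 outside 0..L+2."""
    ext = [0.0] + list(l) + [0.0]                      # ext[i] = l_{i-1}
    return -alpha * ext[j] + ext[j + 1] - ext[j + 2] + alpha * ext[j + 3]

L = 3
for alpha, sign in [(0.5 * (alpha_crit(L) + alpha_crit(L + 1)), +1),       # in (alpha_{L+1}, alpha_L)
                    (0.5 * (alpha_crit(L + 1) + alpha_crit(L + 2)), -1)]:  # below alpha_{L+1}
    l = profile(alpha, L)
    assert all(l[j] > 0 for j in range(1, L + 2))      # u_1, ..., u_{L+1} are positive
    assert all(abs(d(l, j, alpha)) < 1e-12 for j in range(1, L + 1))  # d_1 = ... = d_L = 0
    assert sign * d(l, 0, alpha) > 0                   # sign of d_0 flips at alpha_{L+1}
    assert sign * d(l, L + 1, alpha) < 0               # ... and so does the sign of d_{L+1}
```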
Corollary \ref{co:nobigstream} therefore implies in particular that $\Delta (n, j) / n \to 0$ almost surely when $ n \to \infty$ for all $j \in \{1, \ldots,L \}$, which in turn implies that almost surely, $$ \lim_{n \to \infty } \frac{\ell(n,j)}{n} = u_j $$ for all $j=1, \ldots , L+1$ (this reasoning in fact also yields an asymptotic upper bound on the rate of convergence). Note that this implies also that almost surely, $\Delta (n, 0) / n \to d_0$ and $\Delta (n, L+1) / n \to d_{L+1} $ when $n \to \infty$. \medbreak In order to prove Theorem \ref{thm:main}, we now have to see if it is possible to couple $X$ and $Y$ in such a way that they stick together with positive probability. We first try to couple the walks starting at time $0$. Given the similar dynamics, the optimal way to couple them is quite obvious: If we first define $(X_n)$, we can then simply define $Y_n =X_n$ as long as $ n \le \tau $, where $$\tau := \inf \{ t \ge 0 \ : \ X_t \notin [0, L+1] \}.$$ In order for $\tau$ to be greater than $t$, it therefore suffices that at each of the times $n \in \{ 0, 1, \ldots, t \}$ at which $Y_n \in \{ 0, L+1 \}$ (we call ${\mathcal N}_t$ this random set of times), $X$ jumps inwards (i.e., not out of our interval).
Hence, $$ \condprob{ \tau > t}{ Y_0, \ldots, Y_t } = \prod_{n \in {\mathcal N}_t} \left(\frac { e^{\beta \Delta_n} 1_{\{Y_n=0\}} + e^{-\beta \Delta_n}1_{\{Y_n=L+1\}} }{ e^{\beta \Delta_n} + e^{-\beta \Delta_n}} \right).$$ The previous description of the asymptotic behaviour of $Y$ (and of $\Delta ( n, L+1)$ and $\Delta (n, 0)$) immediately implies on the one hand that $\tau < \infty$ almost surely if $\alpha < \alpha_{L+1}$ (because $\Delta (n, 0)$ tends to $-\infty$), and on the other hand that the probability that $\tau = \infty$ is strictly positive (by a simple Borel--Cantelli argument, due to the fact that $\Delta(n, 0) \sim d_0 n $ and $\Delta (n, L+1) \sim d_{L+1} n $ almost surely when $n \to \infty$) when $\alpha \in (\alpha_{L+1}, \alpha_{L})$. \medbreak Hence, we have proved that: \begin{itemize} \item When $\alpha < \alpha_{L+1}$, the probability that $X$ stays forever in $\{0, 1, \ldots, L+1\}$ is zero. \item When $\alpha \in (\alpha_{L+1}, \alpha_{L})$, the probability that $X$ stays forever in $\{0, \ldots, L+1 \}$ is positive. Furthermore, in this case, the asymptotic local time profile of $Y$ (and therefore also of $X$) satisfies $\ell(n,j) / n \to u_j$ as $n \to \infty$ for all $j \in \{1, \ldots, L+1\}$. \end {itemize} To conclude the proof of Theorem \ref {thm:main}, it remains to notice that when $\alpha < \alpha_{L+1}$, the previous argument can be immediately adapted to show that for all $m\ge 1$ and any finite nearest-neighbour sequence $x_0, \ldots, x_{m}$, the conditional probability that $X_n \in \{x_{m}, 1+ x_{m}, \ldots , L+1+ x_m \}$ for all $n \ge m$, given $\{ X_0 = x_0, \ldots, X_{m}=x_m \}$, is zero. It suffices to couple $(X_{m+n} - X_m, n \ge 0)$ with $Y$, where the local time has been suitably initialized.
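These two regimes are easy to observe numerically. The following simulation sketch (ours, purely illustrative) runs the forced walk $Y$; it assumes $\beta = 1$ (a model parameter fixed elsewhere), reads $\Delta_n$ as $\Delta(n, Y_n)$, and labels by $j$ the edge $\{j-1,j\}$, so that each step increments exactly one of $\ell(\cdot,1),\ldots,\ell(\cdot,L+1)$. For $\alpha$ in the middle of $(\alpha_{L+1},\alpha_L)$ one sees $\ell(n,j)/n$ close to $u_j$, with $\Delta(n,0)>0$ and $\Delta(n,L+1)<0$ as predicted.

```python
import math, random

def simulate_Y(alpha, L, beta, n_steps, seed=1):
    """Forced walk Y on {0,...,L+1}; ell[j] counts the crossings of the edge {j-1,j}."""
    rng = random.Random(seed)
    ell = [0] * (L + 3)                 # edge labels 0..L+2 (only 1..L+1 can be crossed)
    def le(j):                          # local time, zero outside the stored range
        return ell[j] if 0 <= j <= L + 2 else 0
    def Delta(j):                       # Delta(n,j) = -a*l(j-1) + l(j) - l(j+1) + a*l(j+2)
        return -alpha * le(j - 1) + le(j) - le(j + 1) + alpha * le(j + 2)
    y = 0
    for _ in range(n_steps):
        if y == 0:
            step = 1                    # forced inward jump at the left boundary
        elif y == L + 1:
            step = -1                   # forced inward jump at the right boundary
        else:
            x = 2.0 * beta * Delta(y)   # P(step = +1) = e^{b*D}/(e^{b*D} + e^{-b*D})
            p_right = 1.0 / (1.0 + math.exp(-x)) if x >= 0 else math.exp(x) / (1.0 + math.exp(x))
            step = 1 if rng.random() < p_right else -1
        ell[max(y, y + step)] += 1      # the crossed edge {y, y+step} has label max(y, y+step)
        y += step
    return ell, Delta

L, beta, n = 3, 1.0, 200_000
crit = lambda K: 1.0 / (1.0 + 2.0 * math.cos(2.0 * math.pi / (K + 2)))
alpha = 0.5 * (crit(L) + crit(L + 1))   # middle of the trapping window (alpha_{L+1}, alpha_L)

# predicted limiting profile (u_1, ..., u_{L+1}): phi chosen so l~_0 = l~_{L+2} = 0
omega = math.acos((1.0 - alpha) / (2.0 * alpha))
phi = (2.0 * math.pi - (L + 2) * omega) / 2.0
raw = [math.cos(phi) - math.cos(omega * j + phi) for j in range(1, L + 2)]
u = [v / sum(raw) for v in raw]

ell, Delta = simulate_Y(alpha, L, beta, n)
assert sum(ell[1:L + 2]) == n                                      # one edge crossed per step
assert max(abs(ell[j] / n - u[j - 1]) for j in range(1, L + 2)) < 0.05
assert Delta(0) > 0 and Delta(L + 1) < 0                           # signs of d_0 and d_{L+1}
```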
Similarly, the ``successful'' coupling argument between $X$ and $Y$ can be started after these $m$ steps and shows that when $\alpha \in (\alpha_{L+1}, \alpha_{L})$ and the walk eventually gets trapped in $[x, x+ L+1]$ for some $x$, then $(\ell (n,x+1), \ldots , \ell (n, x+L+1) ) \sim (n u_1, \ldots, n u_{L+1})$ when $n \to \infty$. \begin{figure}[htbp] \begin{center} \includegraphics [height=1.8in]{0.34.traj.eps} \includegraphics [height=1.8in]{0.34.lt.eps} \caption {The trapping phenomenon: A trajectory and its local time when $\alpha = 0.34$} \end {center} \end {figure} \section{Proof of the main combinatorial statement} \label{s:proof_of_propo_mds} Our goal in this section is to prove Proposition \ref{prop:mds}. As we have already noticed, this is a deterministic statement about nearest-neighbour paths. When $L=1$, the statement turns out to be straightforward (the walk is confined to two edges). So, we restrict ourselves to $L \ge 2$ (so that $\alpha < \alpha_{L} \le \alpha_2 = 1$). Recall that $(u_1, \ldots, u_{L+1})$ is the unique solution to the linear system (\ref {refu}) (with unknown $(l_1, \ldots, l_{L+1})$) and that the values of $u_j$ for $j =1, \ldots, L+1$ are all positive. By continuity, it follows that by choosing $\gamma$ small enough, we can ensure that, if \begin{equation} \label {system} \max (| d_1 |, \ldots, |d_L| ) \le \gamma \hbox { and } l_1 + \ldots + l_{L+1} = 1, \end {equation} then the $l_j$'s are as close as we wish to the $u_j$'s. It follows (from the corresponding signs of $d_0$ and $d_{L+1}$ for $(u)$) that: \begin{itemize} \item If $\alpha < \alpha_{L+1}$, there exists $\gamma ( \alpha, L)$ such that if $\gamma \le \gamma (\alpha, L)$, then (\ref {system}) implies that $d_0< 0$ and $d_{L+1}>0 $.
\item If $\alpha \in (\alpha_{L+1}, \alpha_{L})$, there exists $\gamma (\alpha, L)$ such that if $\gamma \le \gamma (\alpha, L)$, then (\ref {system}) implies that $d_0 > 0$ and $d_{L+1} < 0$. \end {itemize} Let us now choose our constants $D_0$ and $\varepsilon$. Recall that $\alpha < \alpha_{L} \le \alpha_{2} =1$. From now on, we choose (and fix) $\gamma$ such that $$ \gamma < \frac {1}{100} \min (1, \gamma (\alpha, L), \gamma (\alpha, L-1), \ldots, \gamma (\alpha, 2)).$$ We then define $$ \varepsilon:=\gamma^{L(L+1)} \hbox { and } D_0:= 10/ \varepsilon. $$ The main role of $D_0$ will be to ensure that all time-intervals that we will talk about are non-empty as soon as $D \ge D_0$ (we will not mention this explicitly throughout the proofs). Suppose that $n\mapsto Y_n \in\{0,\dots,L+1\}$ is a given nearest-neighbour trajectory. First we define some particular times, corresponding to first appearances of streams of certain intensity. For all $M\in(0,\infty)$, we let (using the convention $\inf \emptyset = \infty$) \begin{align*} \theta_{+}(M) &:= \inf\{n:\Delta(n,j)\ge \gamma^{jL} M \text{ for some } j\in\{1,\dots,L\}\}, \\ \theta_{-}(M) &:= \inf\{n:\Delta(n,L+1-j)\le - \gamma^{jL} M \text{ for some } j\in\{1,\dots,L\}\}, \end {align*} and we finally let $\sigma (M)$ denote the first time at which the walk makes an upstream jump of intensity greater than $M$. \medbreak We will prove that with our choice of $\varepsilon$ and $D_0$, necessarily $\theta_+ (D) \ge \sigma (\varepsilon D )$ for all $D \ge D_0$. By symmetry, we then necessarily also have $\theta_- (D) \ge \sigma (\varepsilon D)$. But because $\gamma<1$, this implies that $$ \sigma ( \varepsilon D) \le \min ( \theta_+ (D) , \theta_- (D) ) \le \inf\{n:\max_{j\in\{1,\dots,L\}} | \Delta(n,j)| \ge D \}, $$ which completes the proof of the proposition.
\medbreak We will prove by contradiction that $\theta_+ (D) \ge \sigma (\varepsilon D )$. We will therefore from now on assume that $Y$ belongs to the set $\mathcal{B}_+$ of paths $Y$ such that $\theta_+(D) < \sigma(\varepsilon D)$ for some given $D \ge D_0$, i.e., that $Y$ has created a ``very strong'' stream to the right (in the sense defined by $\theta_+(D)$) without having done any upstream jump of intensity larger than $\varepsilon D$. \medbreak We start with two simple preliminary remarks. \begin{lemma} \label{lemma:conf} (i) For any $n_1<n_2$ and $j\in\{1,\dots,L\}$, $$\abs{\Delta(n_2,j)- \Delta(n_1,j)}\le n_2-n_1.$$ (ii) Let $1<M<\infty$ be fixed, $j \in\{1,\dots,L\}$ and $n_1<n_2<\sigma(M)$. If \begin{align*} \min_{n\in[n_1,n_2]}\Delta(n,j)> M+1 \end{align*} then the walk is confined to the interval $\{j,\dots,L+1\}$ for the whole time-span $[n_1,n_2]$. \end{lemma} Part (ii) means that if there is a strong positive stream somewhere, then the walk is located to the right of this stream, unless it has performed a strong upstream jump before. The symmetric result also holds, i.e., if there is a strong negative stream somewhere, then (unless the walk has performed a strong upstream jump before) the walk is to the left of the stream. \begin{proof} (i) This is straightforward, since for all $j\in\{1,\dots,L\}$ and $n\in\N$, \begin{align} \Delta(n+1,j)-\Delta(n,j)\in\{-1,-\alpha,0,\alpha, 1\}. \end{align} (ii) Let \begin{align*} \hat n:=\max\{n<n_1:\Delta(n,j)\le M+1\} .\end{align*} Then $\Delta (\hat n + 1, j) > M+1 $, so that $\Delta (\hat n , j) > M$. Since $\Delta ( \hat n , j) < \Delta (\hat n +1 , j)$, one necessarily has $\{Y(\hat n), Y(\hat n +1)\}=\{j-1,j\}$ or $\{Y(\hat n), Y(\hat n +1)\}=\{j+1,j+2\}$.
But the possibility $\{ Y(\hat n)=j, Y(\hat n +1)=j-1 \}$ is excluded, since this would correspond to an upstream jump, which is in conflict with the assumption $n_1<\sigma(M)$. Hence, $Y( \hat n + 1) \ge j$ and then, between $\hat n$ and $n_2$, the strong stream at $j$ will prevent the walk from jumping to the left of $j$. \end{proof} Since $\theta_+ (D)$ is finite, we can define the following: \begin{align*} \overline n & := \theta_+ (D) = \min \{ n \ : \ \Delta (n, j) \ge \gamma^{jL} D \hbox { for some } j \in \{1, \ldots, L \} \} \\ J &:= \max\{j\in\{1,\dots,L\}: \Delta(\overline n,j)\ge \gamma^{jL} D\}, \\ \underline n &:= \max\{n\le\overline n: \Delta(n,J)\le{\gamma^{JL} D}/ {2}\}. \end{align*} Throughout the proof we restrict ourselves to the time-span $[\underline n, \overline n]$ (this is the time-interval where we will detect a contradiction, i.e., the necessity of an upstream jump -- in fact, we will zoom into smaller time-intervals). Note that by the definition of $\overline n$ and $\underline n$ and (i) of Lemma \ref{lemma:conf}, \begin{equation} \label {ref26} \overline n - \underline n \ge {\gamma^{JL} D}/ {2}. \end {equation} Furthermore, the definition of $\underline n$ and our choice of $\gamma$, $\varepsilon$ and $D$ show that $$ \min_{n\in[\underline n, \overline n]}\Delta(n,J) = \Delta ( \underline n, J) \ge \frac {\gamma^{JL} D}{2} - 1 > 2\varepsilon D > \varepsilon D + 1, $$ and thus, because of Lemma \ref {lemma:conf}-(ii), any walk $Y$ in $\mathcal{B}_+$ is confined to the interval $[J,L+1]$ for the whole time-span $[\underline n,\overline n]$ (recall that we assume that $\overline n < \sigma (\varepsilon D)$). It is important to notice here that the interval $[J, L+1]$ is strictly shorter than $[0,L+1]$.
Let us choose $\widehat n$ to be the smallest integer such that $$ \widehat n \ge \underline n + \frac{\gamma^{JL} D}{2} $$ (mind that the symbol $\widehat n$ will be used as a temporary variable which will be redefined in various ``subroutines'' of the proof) and note that, because of (\ref {ref26}), $ \widehat n \le \overline n$. We now define \begin{align*} l_i &:= \ell(\widehat n, J+i)-\ell(\underline n, J+i), && i\in\{1,\dots,L-J+1\}, \\ d_i &:= \Delta(\widehat n, J+i)-\Delta(\underline n, J+i), && i\in\{1,\dots,L-J\}. \end{align*} Then $(l_1, \ldots, l_{L-J+1})$ is a family of non-negative numbers, and $$ \| l \| := | l_1+ \ldots + l_{L-J+1} | = \widehat n - \underline n \ge \frac{\gamma^{JL} D}{2}. $$ Let us define $$C = C(J,D) = \gamma^{L(J+1)} D.$$ The reader might want to keep in mind that $\gamma$ is small and that $ D \ge C \ge \varepsilon D$. We will soon prove the following lemma: \begin{lemma} \label{lemma:bound_on_Delta} For all $Y \in \mathcal{B}_+$, for all $j\in\{J+1,\dots,L\}$, $$ \max_{n\in[\underline n,\overline n]}\abs{\Delta(n,j)} \le C \left(\frac{1}{\gamma}\right)^{j-(J+1)} . $$ \end{lemma} Let us now show how it implies the proposition. Note that then, for all $j \in \{ J+1, \ldots, L \}$, $$ \max_{n\in[\underline n,\overline n]} \abs{\Delta(n,j)} \le D \gamma^{(J+1)L} \gamma^{J+1 - L } \le D \gamma^{JL + 1 } \le 2 \gamma \| l \| . $$ Hence $$ \max_{j\in\{1,\dots,L -J \}} \abs{d_j} \le 4 \gamma \norm{l}. $$ Recall that $L+1-J \le L$; our choice of $\gamma \le \gamma (\alpha, L-J+1)$ therefore implies that $$ d_0 = \Delta(\widehat n, J)-\Delta(\underline n,J) < 0, $$ which is in contradiction with the definition of $\underline n$. We conclude that $\mathcal{B}_+$ is indeed empty, and that the proposition holds.
\medbreak It now remains to prove Lemma \ref{lemma:bound_on_Delta}: First we note that by the definition of $\overline n$, for all $j\in\{J+1,\dots,L\}$, $$ \max_{n \le \overline n} \Delta(n,j) \le {\gamma^{jL}} D \le \gamma^{(J+1)L} D = C \le C \left(\frac{1}{\gamma}\right)^{j-(J+1)}. $$ So, it remains to control the negative streams, i.e., to prove that for all $j \in \{ J+1, \ldots, L \}$, \begin{align} \label{remains} \min_{n\in[\underline n,\overline n]} \Delta(n,j) \ge -C \left(\frac{1}{\gamma}\right)^{j-(J+1)} . \end{align} We will proceed by induction for $j=J+1,J+2, \dots,L$, from the left to the right. \medbreak We start with the case where $j=J+1$. We proceed in two steps. First we prove the following slightly stronger bound at time $\underline n$: \begin{align} \label{bound_beginning_1} \Delta(\underline n, J+1)\ge-\frac {C}{2}. \end{align} Assume the contrary, i.e., that $ \Delta(\underline n, J+1) < - C/2$. Note that the definitions of $\gamma$ and $\varepsilon$ show that $$ \frac {C}{4} = \frac {\gamma^{L(J+1)} D}{4} \ge \frac {\gamma^{L \times L} D}{4} \ge 2 \gamma^{L(L+1)} D = 2 \varepsilon D.$$ Furthermore, note that $C/4 \le \overline n - \underline n$. Hence, from Lemma \ref{lemma:conf} and the fact that $\overline n \le \sigma (\varepsilon D)$, it follows that $Y$ is confined to the interval $[J,J+1]$ during $C/4$ steps, i.e., that it bounces back and forth on this single edge for at least $C/{4}$ steps after $\underline n$. If we let ${\breve n}$ denote the integer part of $ \underline n + C/4$, it therefore follows that on the one hand ${\breve n}\le \overline n$ and on the other hand $$ \Delta({\breve n}, J) - \Delta(\underline n, J)<0, $$ which contradicts the definition of $\underline n$. Hence, we conclude that \eqref{bound_beginning_1} indeed holds. \medbreak Next, we want to study what happens on the entire interval $[ \underline n, \overline n ]$.
The quantity $\Delta (n, J+1)$ has to decrease from above $-C/2$ to below $-C$. We let \begin{align*} & \widehat n:=\inf\{n\ge\underline n: \Delta(n,J+1)<-C\} \\ & \wt n:= \max\{n\le\widehat n: \Delta(n,J+1)\ge -{C}/{2}\} \end{align*} (note that these definitions of $\widehat n$ and $\wt n$ are also temporary and will be redefined in other ``subroutines'' of the proof). Assume that $ \widehat n \le \overline n$. From \eqref{bound_beginning_1} it follows that (for all $Y \in \mathcal{B}_+$), $ \wt n>\underline n$ so that $[\wt n, \widehat n ]\subset [\underline n , \overline n]$. But the definition of $\wt n$ shows that \begin{align*} \max_{n\in[\wt n, \widehat n]} \Delta(n,J+1) \le -\frac{C}{2} + 1 < - \varepsilon D, \end{align*} so that we can deduce as before (using the fact that $\sigma (\varepsilon D) \ge \overline n$) from Lemma \ref{lemma:conf} that on the whole time span $[\wt n, \widehat n]$ the walk is confined to the interval $[J, J+1]$ and thus we readily get $$ \Delta(\widehat n,J+1)-\Delta(\wt n, J+1) > 0, $$ which contradicts the definition of $\wt n$ and $\widehat n$. We conclude that (for all $Y \in \mathcal{B}_+$) the bound \eqref{remains} holds for $j=J+1$. \medbreak The induction step follows next. If $J=L-1$ then we are done. So, assume that $J<L-1$, let $K\in\{J+1,\dots,L-1\}$ and assume that \eqref{remains} holds for $j\in\{J+1,\dots,K\}$. Our goal is to prove it for $j=K+1$. Again, we divide this into two steps. Let us first prove the following slightly stronger bound at time $\underline n$: \begin{align} \label{bound_beginning_2} \Delta(\underline n, K+1) \ge -\frac{C}{2} \left(\frac{1}{\gamma}\right)^{K-J} . \end{align} Assume the contrary. At time $\underline n$, we have a stream to the right at $J$ and a stream to the left at $K+1$. Note that on the one hand, $$ \frac{C}{4} \left(\frac{1}{\gamma}\right)^{K-J} \ge \frac {C}{4} > 2 \varepsilon D.
$$ Note also on the other hand that $$ \frac {C}{4} \left( \frac {1}{\gamma} \right)^{K-J} \le \frac {D}{4} \gamma^{L(J+1) + J - K } \le \frac {D}{4} \gamma^{LJ} \le \frac {\overline n - \underline n}{2}.$$ From Lemma \ref{lemma:conf} it therefore follows that $Y$ is confined to $[J,K+1]$ for at least $$\frac{C}{4} (1/ \gamma)^{K-J}$$ steps and that, if we let ${\breve n}$ be the smallest integer such that $$ {\breve n}\ge \underline n + \frac{C}{4} \left(\frac{1}{\gamma}\right)^{K-J}, $$ then $${\breve n}\le \overline n.$$ Then, define \begin{align*} l_i &:= \ell({\breve n}, J+i)-\ell(\underline n, J+i), && i\in\{1,\dots,K-J+1\}, \\ \label{didef1} d_i &:= \Delta({\breve n}, J+i)-\Delta(\underline n, J+i), && i\in\{1,\dots,K-J\}. \end{align*} Then, for $ l = (l_1, \ldots, l_{K-J +1})$, we see that \begin{align*} \norm{l} = {\breve n}- \underline n \ge \frac{C}{4} \left(\frac{1}{\gamma}\right)^{K-J}. \end{align*} By the inductive assumption, we have $$ \max_{n\in [\underline n, {\breve n}]} \max_{j\in\{J+1,\dots,K\}} \abs{\Delta(n,j)} \le C \left(\frac{1}{\gamma}\right)^{K-(J+1)} \le 4 \gamma \norm{l} $$ and thus $$\abs{d_j} \le 8 \gamma \norm{l}$$ for all $ j\in\{1,\dots,K-J\}$. Our choice of $\gamma$ therefore ensures that $$ d_0 = \Delta({\breve n}, J)-\Delta(\underline n, J)<0, $$ which is in contradiction with the definition of $\underline n$ and the fact that ${\breve n}\in [\underline n, \overline n ]$. Hence, we conclude that \eqref{bound_beginning_2} holds for all $Y \in \mathcal{B}_+$. \medbreak We now want to derive the lower bound on the entire time-interval $[\underline n , \overline n]$. We now let \begin{align*} & \widehat n:=\inf\left\{n\ge\underline n: \Delta(n,K+1)<-C \left(\frac{1}{\gamma}\right)^{K-J}\right\} \\ & \wt n:= \max\left\{n\le\widehat n: \Delta(n,K+1) \ge -\frac{C}{2} \left(\frac{1}{\gamma}\right)^{K-J} \right\}. \end{align*} Assume that $ \widehat n \le \overline n$.
From \eqref{bound_beginning_2}, we know that $ \wt n>\underline n$ for all $Y \in \mathcal{B}_+$. But \begin{align*} \max_{n\in[\wt n, \widehat n]} \Delta(n,K+1) & = \Delta ( \wt n , K+1 ) \le -\frac{D}{2} \gamma^{(J+1)L} \left(\frac{1}{\gamma}\right)^{K-J} + 1 \\ &< - \frac {D}{2} \gamma^{L \times L} + 1 \le -3 \varepsilon D + 1 \le - 2\varepsilon D, \end{align*} so that it follows that on the whole time span $[\wt n, \widehat n]$ the walk is confined to the interval $[J, K+1]$. We now define \begin{align*} l_i &:= \ell(\widehat n, J+i)-\ell(\wt n, J+i), && i\in\{1,\dots,K-J+1\}, \\ d_i &:= \Delta(\widehat n, J+i)-\Delta(\wt n, J+i), && i\in\{1,\dots,K-J\}. \end{align*} Then, if $l = ( l_1, \ldots, l_{K-J+1})$, we get that $$ \norm{l} = \widehat n - \wt n \ge \frac{C}{4} \left(\frac{1}{\gamma}\right)^{K-J} . $$ By the inductive assumption we have $$ \max_{n\in [\underline n, \widehat n]} \max_{j\in\{J+1,\dots,K\}} \abs{\Delta(n,j)} \le C \left(\frac{1}{\gamma}\right)^{K-(J+1)} \le 4 \gamma \norm{l} $$ and thus \begin{align*} \abs{d_j} \le 8 \gamma \norm{l}, \qquad j\in\{1,\dots,K-J\}. \end{align*} Hence, our choice of $\gamma$ ensures that \begin{align*} d_{K-J+1} = \Delta(\widehat n, K+1)-\Delta(\wt n, K+1)>0, \end{align*} which is in conflict with the definition of $\wt n$ and $\widehat n$. We conclude that \eqref{remains} holds for $j=K+1$, which concludes the proof of the Lemma. \section {Concluding remarks} We now make some remarks on the proof, and list a few open problems that are directly related to the models investigated in the present paper. Other related problems are discussed in \cite {erschler_toth_werner_10}. \begin{itemize} \item Just as in many other self-interacting random walks, it is rather hard to get direct information on the dynamics of the walker.
The strategy of the proof presented in the present paper is to use some a priori information about the local time profile, in order to deduce the properties of the walker. As a consequence, there are many intuitive results that cannot be derived in this way (and it would of course be very nice to prove them). \item A natural guess is that in the case where $\alpha < \alpha_{L}$ and the confined walk $Y$ actually visits all sites of the interval infinitely often, it should be the case that, if one defines $$ \Lambda_n = ( \ell (n, 1), \ldots, \ell (n, L+1)) - ( n u_1, \ldots, n u_{L+1}), $$ then the Markov chain $( Y_n ,\Lambda_n)_{n \ge 0}$ should be positive recurrent and have an invariant distribution. In particular, this would imply not only the convergence of $(\ell (n, 1), \ldots, \ell (n,L+1)) / n $ towards $(u_1, \ldots, u_{L+1})$ but it would give a much finer description of the limiting behavior of the profile. Unfortunately, it seems quite difficult to get an explicit expression for such a stationary distribution. It would actually be sufficient to prove the tightness of some well-chosen functional in order to prove the existence of the stationary measure, but we have not been able to find a simple way to approach this. \item One very natural question that the previous approach would enable one to tackle (but that could possibly be studied by other means too) is to show that, in the cases where we have proved that $\# {\mathcal L} = L+2 $ holds with positive probability, the probability that $\# {\mathcal L}= K+2$ for some $K > L$ is zero. \item A related question deals of course with the behavior of the walk when $\alpha$ is equal to one of the critical values. It seems intuitively clear that when $\alpha= \alpha_{L+1}$, then the walk will almost surely not be stuck on $L+2$ sites.
This is because the corresponding $d_0$ and $d_{L+1}$ are equal to zero, so that for infinitely many times $n$, if $X$ were to stick to $Y$ forever, it would be at the edge of the interval and have a probability bounded from below to jump out of it. But our proof (which controls only the first-order behavior of the local-time profile) is not able to control this, nor to prove that $\# {\mathcal L}$ is equal to $L+3$ with positive probability. \item Even though it is a trivial observation, we would like to notice that for generic $\alpha$'s, it is the case that there exist many $K > L+1$ such that one can find many non-negative $(l_1, \ldots, l_{K+1})$ with $d_1 = \cdots = d_{K}=0$, $d_0 > 0$ and $d_{K+1}< 0$. This happens in fact as soon as the distance between $(K+1) \omega$ and $2 \pi \Z$ is smaller than all distances between the $j \omega$'s and $2 \pi \Z$ for $j \in \{ 1, \ldots, K \}$ (so that for non-rational $\pi / \omega$, there are infinitely many such $K$'s). Note that if one wishes to find a stationary measure for the couple $(Y, \Lambda) $ as described above, then one would need to exclude these larger values of $K$, which is an indication that it could in fact be a rather complicated issue. \item The arguments that we have developed in the present paper seem to be adaptable in order to derive analogous results in some cases where the self-interaction depends on the local time at more than the four edges neighbouring $X_n-1$, $X_n$, $X_{n+1}$ and $X_{n+2}$ from the left. For instance, one could replace the definition of $\Delta (n, j )$ by $$ \alpha_k \ell (n, j-k) + \ldots + \alpha_1 \ell(n,j-1)+\alpha_0 \ell(n,j)- \alpha_0 \ell(n,j+1) - \ldots - \alpha_k \ell (n, j+1+k) ;$$ the condition needed for the arguments to go through then deals with the roots of the corresponding characteristic polynomial, i.e., with the behaviour of the corresponding generalized Fibonacci sequence.
\end {itemize} \medbreak \noindent{\bf Acknowledgements.} BT thanks the kind hospitality of Ecole Normale Sup\'erieure, Paris, where part of this work was done. The research of BT is partially supported by the Hungarian National Research Fund, grant no. K60708. The research of WW is supported in part by ANR-06-BLAN-00058. The cooperation of the authors is facilitated by the French-Hungarian bilateral mobility grant Balaton/02/2008. \begin{thebibliography}{99} \bibitem{amit_parisi_peliti_83} D.\ Amit, G.\ Parisi, L.\ Peliti: Asymptotic behavior of the `true' self-avoiding walk. {\sl Phys.\ Rev.\ B}, {\bf 27}: 1635--1645 (1983) \bibitem{erschler_toth_werner_10} A. Erschler, B. T\'oth, W. Werner: Some locally self-interacting walks on the integers, preprint (2010). \bibitem{obukhov_peliti_83} S.\ P.\ Obukhov, L.\ Peliti: Renormalisation of the ``true'' self-avoiding walk. {\sl J.\ Phys.\ A}, {\bf 16}: L147--L151 (1983) \bibitem{peliti_pietronero_87} L.\ Peliti, L.\ Pietronero: Random walks with memory. {\sl Riv.\ Nuovo Cimento}, {\bf 10}: 1--33 (1987) \bibitem{pemantle_92} R. Pemantle: Vertex-reinforced random walk. {\sl Probab. Theory Rel. Fields}, {\bf 92}: 117--136 (1992) \bibitem{pemantle_volkov_99} R. Pemantle, S. Volkov: Vertex-reinforced random walk on $\Z$ has finite range. {\sl Ann. Probab.}, {\bf 27}: 1368--1388 (1999) \bibitem{tarres_04} P. Tarr\`es: Vertex-reinforced random walk on $\Z$ eventually gets stuck on five points. {\sl Ann. Probab.}, {\bf 32}: 2650--2701 (2004) \bibitem{ttv} P. Tarr\`es, B. T\'oth, B. Valk\'o: Diffusivity bounds for 1d Brownian polymers. To appear in {\sl Ann. Probab.} (2010), http://arxiv.org/abs/0911.2356 \bibitem{toth_95} B. T\'oth: `True' self-avoiding walk with bond repulsion on $\Z$: limit theorems. {\sl Ann. Probab.}, {\bf 23}: 1523--1556 (1995) \bibitem{toth_01} B. T\'oth: Self-interacting random motions.
In: {\sl Proceedings of the 3rd European Congress of Mathematics}, Barcelona 2000, vol. 1, pp. 555--565, Birkh\"auser, 2001. \bibitem{toth_werner_98} B. T\'oth, W. Werner: The true self-repelling motion. {\sl Probab. Theory Rel. Fields}, {\bf 111}: 375--452 (1998) \end{thebibliography} \noindent Affiliations and e-mails of authors: \\[10pt] {\sc Anna Erschler}, CNRS, D\'epartement de Math\'ematiques, Universit\'e Paris Sud Orsay,\\ email: {\tt [email protected]} \\[10pt] {\sc B\'alint T\'oth}, Institute of Mathematics, Budapest University of Technology,\\ email: {\tt [email protected]} \\[10pt] {\sc Wendelin Werner}, D\'epartement de Math\'ematiques, Universit\'e Paris Sud Orsay, and DMA, \'Ecole Normale Sup\'erieure\\ email: {\tt [email protected]} \end{document}
\begin{document} \title{Comments on ``Fractional Extreme Value Adaptive Training Method: Fractional Steepest Descent Approach''} \author{Abdul~Wahab and Shujaat Khan \thanks{A. Wahab is with NUTECH School of Applied Sciences and Humanities, National University of Technology, Sector I-12, Main IJP Road, 44000, Islamabad, Pakistan (e-mail: [email protected]).} \thanks{S. Khan is with Bio-imaging, Signal Processing and Learning (BISPL) Laboratory, Department of Bio and Brain Engineering, Korea Advanced Institute of Science and Technology, 34141, Daejeon, South Korea (e-mail: [email protected]).} \thanks{This work was supported by the Korea Research Fellowship Program through the National Research Foundation (NRF) funded by the Ministry of Science and ICT (NRF- 2015H1D3A1062400).} \thanks{Manuscript received December 15, 2017.}} \markboth{IEEE Transactions on Neural Networks and Learning Systems,~Vol.~-, No.~-, December~2017} {Wahab and Khan: Comments on ``Fractional Extreme Value Adaptive Training Method: Fractional Steepest Descent Approach''} \maketitle \begin{abstract} In this comment, we raise serious concerns over the derivation of the rate of convergence of the fractional steepest descent algorithm in the Fractional Adaptive Learning (FAL) approach presented in ``Fractional Extreme Value Adaptive Training Method: Fractional Steepest Descent Approach'' [IEEE Trans. Neural Netw. Learn. Syst., vol. 26, no. 4, pp. 653--662, April 2015]. We substantiate that the estimate of the rate of convergence is overly optimistic. We also draw attention towards a critical flaw in the design of the algorithm stymieing its applicability for broad adaptive learning problems. Our claims are based on analytical reasoning supported by experimental results. \end{abstract} \begin{IEEEkeywords} Fractional calculus, fractional differential, fractional energy norm, fractional extreme point, fractional gradient. 
\end{IEEEkeywords} \IEEEpeerreviewmaketitle \section{Introduction} \IEEEPARstart{T}{he} Least Mean Squares (LMS) algorithm is a widely used tool in adaptive signal processing due to its stable performance and simple implementation. However, its convergence is slow. Accordingly, many variants of LMS have been proposed in recent years in order to achieve an accelerated convergence without compromising on the steady-state residual error. In the same spirit, the FAL method based on a \emph{fractional steepest descent approach} was proposed in \cite{paper}. Unfortunately, the rate of convergence of the FAL algorithm is derived in terms of an approximation of the general update rule that furnishes an unreliable estimate. We elaborate on this issue in Section \ref{ss:error}. Further, we draw attention towards a critical flaw in the design of the algorithm stymieing its applicability to general adaptive learning problems in Section \ref{ss:limitation}. The consequences of these flaws on the proposed method are discussed in Section \ref{s:consequences}. A brief conclusion is provided in Section \ref{s:conclusion}. \section{Main Remarks}\label{s:mistake} In order to facilitate the ensuing discussion, we follow the notation and equation numbering used in \cite{paper}; the corrected and the new numbers are distinguished by a superposed asterisk and a prime, respectively. \subsection{Remarks on Convergence Analysis}\label{ss:error} In \cite{paper}, the update equation of the proposed FAL algorithm based on fractional gradient descent is provided in \eqref{eq:19} as \begin{align} \label{eq:19} s_{k+1}=s_k-\frac{2\mu\eta}{\Gamma(3-\nu)}\left(s_k-s^{\nu *}\right)^2s_k^{-\nu}, \quad\text{if } \nu\neq 1,2,3. \tag{19} \end{align} Since \eqref{eq:19} is nonlinear, it is not straightforward to derive an explicit expression for $s_k$. 
Towards this end, $s_k$ is regarded as a discrete sample of a continuous function $s(t)$ at $t=k$ in \cite{paper}, and \eqref{eq:19} is converted to an ordinary differential equation (ODE), \begin{align} \label{eq:20} D^1_ts(t)\cong\frac{-2\mu\eta}{\Gamma(3-\nu)\nu (s^{\nu *})^{\nu-1} s(t)}\left[s(t)-s^{\nu *}\right]^2, \tag{20} \end{align} using a power series expansion of $s^\nu$ about $s^{\nu *}$ (furnishing $s^\nu\cong\nu(s^{\nu*})^{\nu-1} s$). Here, $D^1_t$ is the derivative with respect to $t$. The ODE \eqref{eq:20} is solved in \cite{paper} for $s(t)$, thereby furnishing \begin{align} \label{eq:21} s_k\cong s^{\nu *} + e^{\left(\frac{-2\mu\eta k}{\Gamma(3-\nu)\nu\left(s^{\nu *}\right)^{\nu-1}}\right)}, \quad\text{if } \nu\neq 1,2,3. \tag{21} \end{align} We argue that the expression \eqref{eq:21}, on which the entire convergence analysis is based, is an \emph{unreliable approximation} of the solution to \eqref{eq:20}. In fact, by separation of variables, \eqref{eq:20} yields \begin{align} \ln\left |s(t)-s^{\nu *}\right|-\frac{s^{\nu *}}{s(t)-s^{\nu *}}\cong \frac{-2\mu\eta t}{\Gamma(3-\nu)\nu\left(s^{\nu *}\right)^{\nu-1}}+C, \label{eq:b} \tag{1'} \end{align} where $C$ is the constant of integration whose value is determined by the initial input $s_0=s(0)$. Specifically, \begin{align} \label{eq:d} C\cong \ln\left |s_0-s^{\nu *}\right|-[{s^{\nu *}}/({s_0-s^{\nu *}})]. \tag{2'} \end{align} Substituting \eqref{eq:d} in \eqref{eq:b} and setting $s(t)=s_k$, one gets \begin{align} (s_k-s^{\nu *}) &\cong\left(s_0-s^{\nu *}\right)e^{\left(\frac{-2\mu\eta k}{\Gamma(3-\nu)\nu\left(s^{\nu *}\right)^{\nu-1}}\right)} e^{\left(\frac{s^{\nu *}}{s_k-s^{\nu *}}\right)}e^{\left(-\frac{s^{\nu *}}{s_0-s^{\nu *}}\right)}. \label{eq:21*} \tag{21*} \end{align} Remark that \eqref{eq:21} is different from the correct solution \eqref{eq:21*} to the ODE \eqref{eq:20}. 
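The discrepancy between \eqref{eq:21} and \eqref{eq:21*} can be checked numerically; a minimal sketch in Python, using the illustrative values $s^{\nu*}=4.2856$, $s_0=15$, $\chi=0.25$ that appear in the discussion of Fig. 5 below (lumping the constants into the single rate $\chi$ is our shorthand, not notation from \cite{paper}):

```python
import math

# Illustrative parameters: sought value s*, initial value s0, and the
# lumped rate chi = 2*mu*eta / (Gamma(3-nu)*nu*(s*)**(nu-1)).
s_star, s0, chi = 4.2856, 15.0, 0.25

def F(s):
    """Left-hand side of (1'): ln|s - s*| - s*/(s - s*)."""
    return math.log(abs(s - s_star)) - s_star / (s - s_star)

C = F(s0)                 # constant of integration, Eq. (2')

# Pick a state s between s* and s0 and recover the time t from (1').
s = 5.0
t = (C - F(s)) / chi

# Right-hand side of the implicit solution (21*).
rhs_21_star = (s0 - s_star) * math.exp(-chi * t) \
    * math.exp(s_star / (s - s_star)) * math.exp(-s_star / (s0 - s_star))

# The approximation (21) drops C and the second term of (1').
s_from_21 = s_star + math.exp(-chi * t)

print(rhs_21_star - (s - s_star))   # ~0: (21*) is consistent with (1')
print(abs(s_from_21 - s))           # ~0.71: (21) is far off at this state
```

The first print confirms that \eqref{eq:21*} is an exact rewriting of the separated relation \eqref{eq:b}; the second shows the error incurred by \eqref{eq:21} at an intermediate state.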
In fact, if one chooses $C\cong 0$ and neglects the second term on the LHS of \eqref{eq:b} while solving ODE \eqref{eq:20}, one gets \eqref{eq:21}. In Section \ref{s:consequences}, we substantiate that $C$ cannot simply be neglected under the parametric setting of \cite{paper}. Moreover, the removal of the second term leads to an unreliable estimate of the rate of convergence. \subsection{Technical Flaw in the Algorithmic Design}\label{ss:limitation} The FAL approach in \cite{paper} is proposed for seeking a minimizer $s^{\nu *}$ of the energy norm \cite[Eq. (6)]{paper} in the real domain $\mathbb{R}$. Both negative and positive minimizers are sought in \cite{paper}. However, the update equation \eqref{eq:19} of the FAL algorithm contains a fractional power of $s_k$, which becomes complex whenever $s_k<0$. In particular, for $s<0$, ${d^{\nu} E}/{ds^{\nu}}$ with $\nu = 1/2$ and $3/2$ is pure imaginary. In this situation, $s_{k+1}$ will be complex since \eqref{eq:19} is also derived from ${d^{\nu} E}/{ds^{\nu}}$. Consequently, the FAL method is not expected to converge to a real value. In order to elaborate on this point, we evaluate ${d^{\nu} E}/{ds^{\nu}}$ (based on \cite[Eq. (8)]{paper}) using the same parameters as in \cite[Sect. IV-B]{paper}, i.e., we set $E^1_{\min}=10$, $\eta=2$, $s^{1,*}=5$, $1<\nu\leq 2$, and the domain $-4<s<8$ as used for \cite[Figs. 2(e), 2(d)]{paper}. Then, for $\nu=3/2$, \begin{align} \frac{d^{3/2} E}{ds^{3/2}} =&-\frac{1}{\sqrt{\pi}}\left(30s^{-3/2}+20s^{-1/2}-8s^{1/2}\right), \label{eq:7'} \tag{3'} \end{align} which contains fractional powers of $s\in (-4, 8)$. In particular, at $s=-1$, \begin{align} \frac{d^{3/2} E}{ds^{3/2}}\Bigg|_{s=-1} = -\frac{2\iota}{\sqrt{\pi}}, \label{eq:8'} \tag{4'} \end{align} where $\iota = \sqrt{-1}$. Similarly, the $1/2$-order derivative of the energy norm (based on \cite[Eq. (8)]{paper}) can be calculated as \begin{align} \frac{d^{1/2} E}{ds^{1/2}} =&\frac{4}{3\sqrt{\pi}}\left(45s^{-1/2}-30s^{1/2}+4s^{3/2}\right), \label{eq:9'} \tag{5'} \end{align} with parameters as in \cite[Fig. 2(a)]{paper}. Especially, at $s=-1$, \begin{align} \frac{d^{1/2} E}{ds^{1/2}}\Big|_{s=-1} =&-\frac{316\iota}{3\sqrt{\pi}}. \label{eq:10'} \tag{6'} \end{align} As a result, \eqref{eq:19} is also complex since it is based on the same expression of the fractional derivative. Consequently, the future updates $s_{k+1}$ will be complex and the algorithm will not converge to a real value as anticipated. In order to substantiate this, we plotted the expressions \eqref{eq:7'} and \eqref{eq:9'} in Fig. \ref{fig:FD} over the domain $(-4,8)$ using the same parameters as in \cite[Fig. 2]{paper}. It is observed that ${d^{\nu} E}/{ds^{\nu}}$ is real as long as $s>0$ and is pure imaginary for $s<0$. Note also that ${d^{\nu} E}/{ds^{\nu}}$ is singular at $s=0$, which itself makes the tacit initial value $s_0=0$ problematic. \begin{figure} \caption{Fractional derivative of the energy norm.} \label{fig:FD1} \label{fig:FD2} \label{fig:FD} \end{figure} \section{Discussion}\label{s:consequences} \subsection{Reliability of the Rate of Convergence}\label{ss:reliability} Let us discuss some consequences of the flaws indicated in Section \ref{ss:error}. First, it is worth pointing out that the FAL approach is based on the left Riemann-Liouville fractional derivative \cite[Eq. (3)]{paper} (instead of the Gr\"{u}nwald-Letnikov derivative as claimed in \cite{paper}) with $a=0$. Therefore, FAL is valid only for $s>0$ and $s_0=0$. Consequently, Eq. \eqref{eq:d} suggests that $C\cong \ln |s^{\nu*}|+1$. Since $s^{\nu*}$ is the unknown sought value, one cannot simply set $C\cong 0$ in \eqref{eq:b} to get \eqref{eq:21}. On the other hand, the approximation \eqref{eq:21}, derived from \eqref{eq:21*} by ignoring $\exp\left({s^{\nu *}}/{(s_k-s^{\nu *})}\right)$ and choosing $C\cong 0$, is highly unreliable. The convergence analysis in \cite{paper} is based entirely on the estimate \eqref{eq:21}. By choosing $\mu$ such that \begin{align} \lim_{k\to+\infty} k \chi=+\infty,\quad\text{with}\quad \chi:= \frac{2\mu\eta}{\Gamma(3-\nu)\nu \left(s^{\nu *}\right)^{\nu-1}}>0, \label{eq:f} \tag{7'} \end{align} it is suggested in \cite{paper} that the \emph{algorithm converges at the rate $\exp(-\chi k )$}. In fact, since $(s_k)_{k\in\mathbb{N}}$ is assumed to converge to $s^{\nu*}$, $(s_k-s^{\nu*})\to 0$ as $k\to +\infty$. Hence, $s^{\nu *}/(s_k-s^{\nu *})\to+\infty$ and consequently, $\exp\left(s^{\nu *}/(s_k-s^{\nu*})\right)\to +\infty$ when $s^{\nu *}/(s_k-s^{\nu *})$ is positive and $k\to+\infty$. Therefore, the product $\exp\left(-\chi k\right) \exp\left(s^{\nu *}(s_k-s^{\nu *})^{-1}\right)$ has an indeterminate form $0\times \infty$. One cannot guarantee that it will approach $0$. Even if it does, the factor $\exp\left(s^{\nu *}(s_k-s^{\nu *})^{-1}\right)$ will severely impede the decay of $\exp\left(-\chi k\right)$, so that $\exp(-\chi k)$ is an overly optimistic estimate of the rate of convergence of FAL. In order to elaborate on this point, we have compared the rates of convergence based on \eqref{eq:19}, \eqref{eq:21}, and \eqref{eq:21*} in Fig. \ref{fig:rates}. We choose the same parameters as in \cite[Fig. 5(a)]{paper}. The computational results indicate that the FAL (with update rule \eqref{eq:19}) converges at a very slow rate compared to that predicted by \eqref{eq:21}. When $\chi=0.25$, \eqref{eq:21} suggests that FAL converges to the sought value $s^{\nu *}=4.2856$ after only 29 iterations with $s_{29}\approx 4.2856$. On the contrary, \eqref{eq:19} suggests that after $k=1948$ iterations $s_k\approx 4.316$. On the other hand, \eqref{eq:21*} predicts that a steady state is achieved at $k=414$ with $s_{414}\approx 4.2856$. 
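The gap between the actual iteration and the exponential estimate can be reproduced with a short computation; a minimal sketch in Python, assuming $\nu=3/2$ together with the values $s^{\nu*}=4.2856$, $s_0=15$, $\chi=0.25$ quoted above (the constant $2\mu\eta/\Gamma(3-\nu)$ is recovered from $\chi$ as $\chi\,\nu\,(s^{\nu*})^{\nu-1}$; the tolerance is our choice):

```python
import math

nu, s_star, s0, chi = 1.5, 4.2856, 15.0, 0.25
c = chi * nu * s_star ** (nu - 1)   # c = 2*mu*eta / Gamma(3 - nu)
tol = 0.05

# Iterate the actual update rule (19): s_{k+1} = s_k - c (s_k - s*)^2 s_k^(-nu).
s, k_direct = s0, 0
while abs(s - s_star) > tol and k_direct < 100_000:
    s = s - c * (s - s_star) ** 2 * s ** (-nu)
    k_direct += 1

# Iteration count predicted by the exponential estimate (21):
# |s_k - s*| = exp(-chi*k) < tol.
k_est_21 = math.ceil(math.log(1.0 / tol) / chi)

print(k_direct, k_est_21)   # the direct iteration needs far more steps
```

The direct iteration of \eqref{eq:19} requires an order of magnitude more steps than \eqref{eq:21} predicts, in line with the counts reported above.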
Similarly, when $\chi=1.75$, the actual number of iterations for FAL to achieve a steady state is $k=1741$, whereas \eqref{eq:21} and \eqref{eq:21*} predict $k=5$ and $k=56$, respectively. The following remarks are in order. First, \eqref{eq:21} does not provide any reliable estimate for the rate of convergence, as the actual convergence is roughly two orders of magnitude slower than the predicted rate. Second, \eqref{eq:21*} also predicts a convergence almost an order of magnitude faster than the actual rate, yet it provides a much better estimate than \eqref{eq:21}. Third, based on these observations, it seems inappropriate to consider $s_k$ as a discrete sample of a continuous function $s(t)$, the assumption on which both \eqref{eq:21} and \eqref{eq:21*} are based. \begin{figure} \caption{Estimation of the rate of convergence.} \label{fig:rates1} \label{fig:rates2} \label{fig:rates} \end{figure} \subsection{Consequences of the Flaw in Algorithmic Design}\label{ss:conseq} In view of the remarks in Section \ref{ss:limitation}, it is clear that for negative sought values, the FAL update weight $s_{k+1}$ in \eqref{eq:19} becomes complex and cannot converge to a real negative desired output. As mentioned above, the fractional gradient (\cite[Eq. (8)]{paper}) is valid over the domain $(0,s)$. Therefore, the algorithm cannot be used for negative values of the independent variable. This is the main reason why the fractional derivative appears to be complex for $s<0$. As a consequence, almost every simulation in \cite{paper} is affected and is unreliable. \begin{enumerate} \item \cite[Figs. 2(a), (b), (d), (e), (g), and (h)]{paper} are erroneous, as the fractional derivative in all these cases is complex. Particularly for $\nu=1/2$ or $3/2$, the fractional derivative is pure imaginary (see, for example, \eqref{eq:7'}-\eqref{eq:10'} or Fig. \ref{fig:FD} in this note). For $\nu=0$ (i.e., no derivative is taken), the quadratic energy function \cite[Eq. 
(6)]{paper} is expected to have a parabolic graph. However, in \cite[Fig. 2(h)]{paper}, it appears to be a straight line, which is impossible. Similar observations also hold for \cite[Figs. 2(b), (d), (e), and (g)]{paper}. \item In \cite[Fig. 3(b)]{paper}, the fractional derivative is evaluated over the domain $s<0$. Therefore, the derivative should be complex-valued. \item In \cite[Fig. 4(b)]{paper}, a negative optimal value $s_2^{\nu *}=-0.6406$ is sought. In fact, starting from $s_{20}=-0.25$ with the parameters for \cite[Fig. 4(b)]{paper}, even $s_{21}$ becomes complex if \cite[Eq. \eqref{eq:19}]{paper} is used. If \cite[Eq. \eqref{eq:21}]{paper} is used, then the exponent on the RHS becomes complex (due to the term $(s^{\nu *})^{\nu-1}$). \item In \cite[Fig. 5]{paper}, the rate of convergence is evaluated for different choices of $\mu$ and $\chi$. As discussed in Section \ref{ss:reliability}, the displayed results are misleading and overly optimistic (see Fig. \ref{fig:rates} in this note). Note that $s_0=15$ is assumed for \cite[Fig. 5]{paper} whereas $s_0=0$ is tacitly assumed in the derivation of the FAL algorithm. \item The results in \cite[Fig. 6]{paper} are also affected by the complex outputs when the $x$ or the $y$ component is varying over a part of the negative axis, as multi-dimensional FAL is essentially a generalization of the 1-D FAL. \end{enumerate} \subsection{Comparison to \cite{Bershad}} In \cite{Bershad}, Bershad, Wen, and So have already discussed the unsuitability of fractional learning frameworks for adaptive signal processing \cite{Raja}. Theoretical observations in this note can be compared to those made in \cite{Bershad} through a variety of experimental results (see \cite[Sect. 1 and Remark 1]{Bershad}). In fact, it is well-known that the LMS algorithm is a stochastic version of the steepest descent algorithm when the statistics of the input are unknown. Thus, \cite[Eq. (1)]{Bershad} can be compared directly to \cite[Eq. (19)]{paper}. 
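The pure-imaginary values \eqref{eq:8'} and \eqref{eq:10'} discussed above (and the complex outputs noted in \cite{Bershad}) can be verified directly with principal-branch complex powers; a minimal sketch in Python evaluating the closed forms \eqref{eq:7'} and \eqref{eq:9'}:

```python
import math

def dE_3_2(s):
    """d^{3/2}E/ds^{3/2}, the closed form of Eq. (eq:7')."""
    s = complex(s)  # principal branch for fractional powers
    return -(30 * s**-1.5 + 20 * s**-0.5 - 8 * s**0.5) / math.sqrt(math.pi)

def dE_1_2(s):
    """d^{1/2}E/ds^{1/2}, the closed form of Eq. (eq:9')."""
    s = complex(s)
    return 4 * (45 * s**-0.5 - 30 * s**0.5 + 4 * s**1.5) / (3 * math.sqrt(math.pi))

# At s = -1 both derivatives are pure imaginary, matching (eq:8') and (eq:10').
print(dE_3_2(-1))   # ~ -2i/sqrt(pi)
print(dE_1_2(-1))   # ~ -316i/(3 sqrt(pi))
# For s > 0 they are real (imaginary part vanishes).
print(dE_3_2(2.0).imag, dE_1_2(2.0).imag)
```

Any iterate landing on the negative axis therefore produces a complex update, exactly as argued in Section \ref{ss:limitation}.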
Based on extensive experiments, the following conclusions have been drawn in \cite[p. 225]{Bershad}. \begin{enumerate} \item The fractional variants of the LMS are only useful when all the update weights are positive, but their performance is then merely comparable to that of the LMS. That is, under no condition do fractional variants of LMS perform better than the standard LMS. \item In the case where some of the update weights are negative, the fractional variants of LMS render complex outputs (see \cite[Remark 1]{Bershad}). Moreover, even when the absolute operator is employed in the fractional algorithms (see, for instance, Refs. 3 and 5 in \cite{Bershad}), their performance is inferior to that of the standard LMS. Finally, if only the real part of the complex update weight is employed, the fractional LMS reduces to LMS with a slower convergence rate. \end{enumerate} Observe that the FAL method proposed in \cite{paper} has drawbacks similar to those highlighted in \cite{Bershad} for fractional frameworks for adaptive signal processing. Precisely, as debated in Sections \ref{ss:reliability} and \ref{ss:conseq}, the FAL method has limited applicability for a broad spectrum of adaptive learning problems due to complex outputs and has a slow convergence rate when the update iterates remain real. \section{Conclusion}\label{s:conclusion} In this comment, some serious concerns over the derivation of the rate of convergence of the Fractional Adaptive Learning (FAL) approach proposed in \cite{paper} were raised. It is established that the convergence analysis performed in \cite{paper} is unreliable in general and that the FAL algorithm converges much more slowly than anticipated. It was also highlighted that the FAL method can practically work only over positive domains. Over negative domains, or whenever its iterative update becomes negative, the FAL algorithm furnishes a complex output due to the presence of fractional powers in its update rule. 
In this situation, the algorithm is not expected to converge to a real sought value. Moreover, owing to the analogy of the FAL algorithm with fractional variants of Least Mean Squares (LMS) for adaptive signal processing \cite{Raja}, the analysis performed by Bershad, Wen, and So \cite{Bershad} suggests that FAL is not better than LMS under any condition. Their performances are nearly the same, but the FAL approach is much more complicated than LMS. Needless to say, the multi-dimensional variant of the FAL inherits the same flaws and is equally unreliable. \ifCLASSOPTIONcaptionsoff \fi \end{document}
\begin{document} \maketitle \begin{abstract} Let $\Omega\subset\mathbb R^n$ be a bounded mean convex domain. If $\alpha<0$, we prove the existence and uniqueness of classical solutions of the Dirichlet problem in $\Omega$ for the $\alpha$-singular minimal surface equation with arbitrary continuous boundary data. \end{abstract} {\it AMS Subject Classification:} 35J60, 53A10, 53C42 \noindent {\it Keywords:} Dirichlet problem, singular minimal surface, continuity method, a priori estimates \section{Introduction and statement of results} Let $\Omega\subset\mathbb R^n$ be a smooth domain and $\alpha$ a given constant. We consider the existence of classical solutions $u\in C^2(\Omega)\cap C^0(\overline{\Omega})$, $u>0$ in $\overline{\Omega}$, of the Dirichlet problem \begin{eqnarray} &&\mbox{div}\left(\dfrac{Du}{\sqrt{1+|Du|^2}}\right)= \frac{\alpha}{u\sqrt{1+|Du|^2}}\quad \mbox{in $\Omega$}\label{eq1}\\ &&u=\varphi\quad \mbox{on $\partial\Omega,$}\label{eq2} \end{eqnarray} where $D$ and div are the gradient and divergence operators and $\varphi>0$ is a positive continuous function on $\partial\Omega$. We call Equation (\ref{eq1}) the {\it $\alpha$-singular minimal surface equation}, and the graph $\Sigma_u=\{(x,u(x)):x\in\Omega\}$ is an {\it $\alpha$-singular minimal hypersurface}, or simply, a singular minimal surface. Equation (\ref{eq1}) is an equation of mean curvature type because the mean curvature $H$ of $\Sigma_u$ is $H=\alpha/(nu\sqrt{1+|Du|^2})$. In the limit case $\alpha=0$, Equation (\ref{eq1}) is the classical minimal surface equation. The theory of singular minimal surfaces has been intensively studied since the works of Bemelmans, Dierkes and Huisken, among others: see \cite{bd,bht,di,di1,di2,dh,ke,ni}. 
An interesting case is $\alpha=1$, because the hypersurface $\Sigma_u$ then has the lowest center of gravity, which generalizes to the $n$-dimensional case the analogous property of the catenary curve (\cite{bht,dh}). Another case of interest is $\alpha=-n$, where now $\Sigma_u$ is a minimal hypersurface in the upper halfspace model of hyperbolic space. Usually, the existence of examples of singular minimal surfaces has been considered from the parametric viewpoint by solving the Plateau problem. However, the existence of singular minimal graphs has only been studied in \cite{bht} (see also \cite{di6}). Indeed, the existence of a solution of (\ref{eq1})-(\ref{eq2}) for $\alpha>0$ was proved there in bounded mean convex domains of $\mathbb R^n$, provided the size of $\Omega$ is small in relation to the boundary data $\varphi$. Recall that $\Omega$ is said to be mean convex if the mean curvature $H_{\partial\Omega}$ of $\partial\Omega$ with respect to the inner normal is nonnegative at every point. Thus the result in \cite{bht} is an approach to the known result of Jenkins and Serrin in \cite{js}, which asserts the existence of a minimal graph for arbitrary continuous boundary data $\varphi$ if and only if $\Omega$ is a bounded mean convex domain. The geometric properties of singular minimal surfaces change drastically depending on the sign of $\alpha$. In this paper, when $\alpha$ is negative, we are able to extend the Jenkins-Serrin result without assumptions on the size of $\Omega$. The existence result is established by our next theorem. \begin{theorem} \label{t1} Let $\Omega\subset\mathbb R^n$ be a bounded mean convex domain with $C^{2,\gamma}$ boundary $\partial\Omega$ for some $\gamma\in (0,1)$. Assume $\alpha<0$. If $\varphi\in C^{2,\gamma}(\partial\Omega)$ is a positive function, then there exists a unique positive solution $u\in C^{2,\gamma}(\overline{\Omega})$ of (\ref{eq1})-(\ref{eq2}). \end{theorem} If the assumption of mean convexity of $\Omega$ fails at some point, we cannot show that Theorem \ref{t1} is no longer true, that is, that there exists boundary data $\varphi$ for which no solution exists. The corresponding result for minimal graphs in hyperbolic space and constant boundary data was proved by Lin in \cite[Th. 2.1]{li}. The proof of Theorem \ref{t1} involves the continuity method, deforming (\ref{eq1})-(\ref{eq2}) into a uniparametric family of Dirichlet problems by varying the value of $\alpha$, together with the classical techniques of a priori estimates for elliptic equations: we refer the reader to \cite{gt} as a general reference. Our proof cannot be extended to the case $\alpha>0$ because of the absence of a priori $C^0$ estimates: if $\alpha>0$, one has to prevent $|u|\rightarrow 0$ for a solution $u$ in the continuity method. This paper is organized as follows. In Section \ref{sec2} we recall the maximum and comparison principles for Equation (\ref{eq1}) as well as the behavior of the radial solutions. In Sections \ref{sec3} and \ref{sec4} we deduce the height and gradient estimates, respectively, and finally, the last Section \ref{sec5} presents the proof of Theorem \ref{t1} following the known continuity method. \section{Preliminaries}\label{sec2} As a consequence of the maximum principle for elliptic equations of divergence type, we have: \begin{proposition}[Touching principle]\label{pr21} Let $\Sigma_i$ be two $\alpha$-singular minimal surfaces, $i=1,2$. If $\Sigma_1$ and $\Sigma_2$ have a common tangent interior point $p$ and $\Sigma_1$ lies above $\Sigma_2$ around $p$, then $\Sigma_1$ and $\Sigma_2$ coincide on an open set around $p$. \end{proposition} We also need to state the known comparison principle in the context of $\alpha$-singular minimal surfaces. 
Define the operator \begin{equation}\label{op} \begin{split} Q[u] &= (1+|Du|^2)\Delta u-u_iu_ju_{ij}-\frac{\alpha(1+|Du|^2)}{u}\\ & = a_{ij}(Du)u_{ij}+{\mathbf b}(u,Du), \end{split} \end{equation} where $$a_{ij}=(1+|Du|^2)\delta_{ij}-u_iu_j,\quad {\mathbf b}= - \frac{\alpha(1+|Du|^2)}{u}.$$ Here we denote $u_i=\partial u/\partial x_i$, $1\leq i\leq n$, and we assume the summation convention of repeated indices. It is immediate that $u$ is a solution of Equation (\ref{eq1}) if and only if $Q[u]=0$. Further observe that the function $\mathbf{b}$ is non-increasing in $u$ for each $(x,Du)\in\Omega\times\mathbb R^n$ because $\alpha<0$. In particular, we may apply the classical comparison principle for elliptic equations (\cite[Th. 10.1]{gt}). \begin{proposition}[Comparison principle] Let $\Omega\subset\mathbb R^n$ be a bounded domain. If $u,v\in C^2(\Omega)\cap C^0(\overline{\Omega})$ satisfy $Q[u]\geq Q[v]$ and $u\leq v$ on $\partial\Omega$, then $u\leq v$ in $\Omega$. \end{proposition} We now prove the uniqueness of solutions of (\ref{eq1})-(\ref{eq2}) when $\alpha$ is negative. \begin{proposition}\label{pr-u} Let $\Omega\subset\mathbb R^n$ be a bounded domain and $\alpha<0$. The solution of (\ref{eq1})-(\ref{eq2}), if it exists, is unique. \end{proposition} \begin{proof} The uniqueness is a consequence of the fact that the right-hand side of (\ref{eq1}) is non-decreasing in $u$ (\cite[Th. 10.1]{gt}). \end{proof} We point out that the above result fails if $\alpha>0$, as can be seen by taking suitable examples in the class of rotational $\alpha$-singular minimal surfaces. We now describe the behavior of the radial solutions of Equation (\ref{eq1}). Denote $u=u(r)$, $r=|x|$, so that $\Sigma_u$ is a rotational singular minimal hypersurface. The behavior of $u$ depends strongly on the sign of $\alpha$: see \cite{ke,lo}. 
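For radial functions $u=u(r)$, Equation (\ref{eq1}) reduces by a standard computation to the ODE $u''/(1+u'^2)+(n-1)u'/r=\alpha/u$. The qualitative behavior for $\alpha<0$ described below can be illustrated numerically; a minimal sketch in Python (the choices $n=2$, $\alpha=-2$, $u(0)=1$, the step size and the stopping threshold are illustrative, not values from the paper):

```python
import math

# Radial reduction of the alpha-singular minimal surface equation:
#   u'' = (1 + u'^2) * (alpha/u - (n-1)*u'/r),  u(0) = u0, u'(0) = 0.
n, alpha, u0 = 2, -2.0, 1.0

h = 1e-4
r = h
u = u0 + alpha / (2 * n * u0) * r**2   # series start: u''(0) = alpha/(n*u0)
up = alpha / (n * u0) * r

# Explicit Euler integration until the profile almost reaches height 0.
while u > 0.05 and r < 10.0:
    upp = (1 + up**2) * (alpha / u - (n - 1) * up / r)
    u += h * up
    up += h * upp
    r += h

print(r, u, up)   # height ~0 is reached at a finite radius with steep slope
```

The run ends at a finite radius with $u$ near $0$ and $u'$ strongly negative, consistent with the maximal bounded ball $B_R$ and the boundary behavior stated in the next proposition.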
For our purposes, we only need the case $\alpha<0$. \begin{proposition}\label{pr-rot} Let $\alpha<0$ and let $u=u(r)$ be a radial solution of (\ref{eq1}). Then $u$ is a concave function whose maximal domain is a bounded ball $B_R=\{x\in\mathbb R^n: |x|<R\}$ with $$\lim_{r\rightarrow R} u(r)=0,\quad \lim_{r\rightarrow R} u'(r)=-\infty.$$ \end{proposition} Recall that homotheties from the origin ${\mathbf O}\in\mathbb R^n$ preserve Equation (\ref{eq1}), that is, if $u$ is a solution of (\ref{eq1}), then $\lambda u(x/\lambda)$, $\lambda>0$, also satisfies (\ref{eq1}). As a consequence of Proposition \ref{pr-rot} and using homotheties, we establish the solvability of (\ref{eq1})-(\ref{eq2}) when $\Omega$ is an arbitrary ball and $\varphi$ is any positive constant. \begin{proposition}\label{pr25} Let $\alpha<0$. Then for any $r, c>0$, there exists a unique radial solution $u$ of (\ref{eq1}) in $\Omega=B_r$ with $u=c$ on $\partial B_r$. \end{proposition} \begin{proof} Let $u=u(r)$ be any radial solution of (\ref{eq1}) defined on its maximal domain $B_R$. Take $\lambda>0$ sufficiently large so that $u_\lambda(x)=\lambda u(x/\lambda)$ has the property that the bounded domain determined by its graph $\Sigma_{u_\lambda}$ and the plane $\mathbb R^n\times\{0\}$ contains the ball $B_r\times\{c\}$. Let $\lambda$ decrease until some value $\lambda_0$ such that $\Sigma_{u_{\lambda_0}}$ intersects $B_r\times\{c\}$. Then the function $u_{\lambda_0}$ is the solution that we are looking for. \end{proof} \section{Height estimates}\label{sec3} In this section we obtain $C^0$ a priori estimates for solutions of (\ref{eq1})-(\ref{eq2}) when $\alpha<0$. 
\begin{proposition} \label{pr-31} Let $\Omega\subset\mathbb R^n$ be a bounded domain and $\alpha<0$. If $u$ is a positive solution of (\ref{eq1})-(\ref{eq2}), then there exists a constant $C_1=C_1(\alpha,\Omega,\varphi)>0$ such that \begin{equation}\label{eh} \min_{\partial\Omega}\varphi\leq u\leq C_1 \quad \mbox{in $\Omega$}. \end{equation} \end{proposition} \begin{proof} Since the right-hand side of (\ref{eq1}) is negative, we have $\inf_\Omega u=\min_{\partial\Omega}\varphi$ by the maximum principle. The upper estimate for $u$ is obtained by comparing $\Sigma_u$ with radial solutions of (\ref{eq1}). Precisely, let $B_R\subset\mathbb R^n$ be a ball centered at the origin ${\mathbf O}$ of radius $R>0$ sufficiently large that $\overline{\Omega}\subset B_R$. Set $\varphi_M=\max_{\partial\Omega}\varphi$. By Proposition \ref{pr25}, let $v=v(r)$ be the radial solution of (\ref{eq1}) with $v=\varphi_M$ on $\partial B_R$. Let $\lambda>1$ be sufficiently large that $\lambda\Sigma_v\cap\Sigma_u=\emptyset$. Notice that the hypersurface $\lambda\Sigma_v$ is a singular minimal hypersurface for the same constant $\alpha$ as $\Sigma_v$. Let $\lambda$ decrease to $1$. By Proposition \ref{pr21}, a contact at an interior point between $\lambda \Sigma_v$ and $\Sigma_u$ is not possible because $\partial(\lambda\Sigma_v)\cap\partial\Sigma_u=\lambda(\partial\Sigma_v)\cap\partial\Sigma_u=\emptyset$ for all $\lambda>1$. Therefore we arrive at the initial position $\lambda=1$ and find $\Sigma_v\cap\Sigma_u=\emptyset$. Consequently, $u<v\leq \sup_\Omega v:=C_1$, and $C_1$ depends only on $\alpha$, $\Omega$ and $\varphi$. \end{proof} Of particular interest is the case when $\varphi=c>0$ is a constant function on $\partial\Omega$. 
Then we may improve the estimate (\ref{eh}) by the preceding argument, comparing with all singular minimal surfaces of rotational type. Among all of them, we choose the rotational example $\Sigma_v$ with lowest height. This is achieved when, after a horizontal translation if necessary, $B_R$ is the circumscribed ball of $\Omega$. In such a case, the inequality (\ref{eh}) becomes $c<u\leq v\leq v(0)$ in $\Omega$. \section{Gradient estimates}\label{sec4} Firstly, we derive estimates for $\sup_\Omega|Du|$ in terms of $\sup_{\partial\Omega}|Du|$. In the next result, the fact that $\alpha$ is negative is essential. \begin{proposition}[Interior gradient estimates] \label{pr-41} Let $\Omega\subset\mathbb R^n$ be a bounded domain and $\alpha<0$. If $u\in C^2(\Omega)\cap C^1(\overline{\Omega})$ is a positive solution of (\ref{eq1})-(\ref{eq2}), then the maximum of the gradient is attained at some boundary point, that is, $$\max_{\overline{\Omega}}|Du|=\max_{\partial\Omega}|Du|.$$ \end{proposition} \begin{proof} We know that (\ref{eq1}) can be expressed as (\ref{op}). Let $v^k=u_k$, $1\leq k\leq n$, and differentiate (\ref{op}) with respect to $x_k$, obtaining for each $k$, \begin{equation}\label{eq3} \left((1+|Du|^2)\delta_{ij}-u_iu_j\right)v_{ij}^k+2\left(u_i\Delta u-u_ju_{ij}-\frac{\alpha u_i}{u}\right)v_i^k+\frac{\alpha(1+|Du|^2)}{u^2}v^k=0. \end{equation} Equation (\ref{eq3}) is a linear elliptic equation in the function $v^k$ and, in addition, the coefficient of $v^k$ is negative because $\alpha<0$. By the maximum principle \cite[Th. 3.7]{gt}, $|v^k|$, and hence $|Du|$, does not attain an interior maximum. In particular, if $u$ is a solution of (\ref{eq1}), the maximum of $|Du|$ on the compact set $\overline{\Omega}$ is attained at some boundary point, proving the result. 
\end{proof} Once Proposition \ref{pr-41} is proved, the problem of finding a priori estimates of $|Du|$ reduces to finding them along $\partial\Omega$. We now address this by proving that $u$ admits barriers from above and from below along $\partial\Omega$. It is at this point that we use the mean convexity of $\Omega$. \begin{proposition}[Boundary gradient estimates] \label{pr42} Let $\Omega\subset\mathbb R^n$ be a bounded mean convex domain and $\alpha<0$. If $u\in C^2(\Omega)\cap C^1(\overline{\Omega})$ is a positive solution of (\ref{eq1})-(\ref{eq2}), then there exists a constant $C_2=C_2(\alpha,\Omega, C_1,\|\varphi\|_{2;\Omega})$ such that $$\max_{\partial\Omega}|Du|\leq C_2.$$ \end{proposition} \begin{proof} We consider the operator $Q[u]$ defined in (\ref{op}). A lower barrier for $u$ is obtained by considering the subsolution $v^0$ of the Dirichlet problem for the minimal surface equation in $\Omega$ with the same boundary data $\varphi$: the existence of $v^0$ is assured by the Jenkins-Serrin result (\cite{js}). Because $Q[v^0]>0=Q[u]$ and $v^0=u$ on $\partial\Omega$, we conclude $v^0<u$ in $\Omega$ by the comparison principle. We now find an upper barrier for $u$. Here we use the distance function in a small tubular neighborhood of $\partial\Omega$ in $\Omega$. The following arguments are standard: see \cite[Ch. 14]{gt} for details. Consider the distance function $d(x)=\mbox{dist}(x,\partial\Omega)$ and let $\epsilon>0$ be sufficiently small that $\mathcal{N}_\epsilon=\{x\in\overline{\Omega}: d(x)<\epsilon\}$ is a tubular neighborhood of $\partial\Omega$. The value of $\epsilon$ will be specified later. 
We can parametrize $\mathcal{N}_\epsilon$ using normal coordinates $x\equiv (t,\pi(x)) \in\mathcal{N}_\epsilon$, where we write $x=\pi(x)+t\nu(\pi(x))$ for some $t\in [0,\epsilon)$, where $\pi:\mathcal{N}_\epsilon\rightarrow\partial\Omega$ is the orthogonal projection and $\nu$ is the unit normal vector to $\partial\Omega$ pointing towards $\Omega$. Among the properties of the function $d$, we know that $d$ is $C^2$, $|Dd|(x)=1$, and $\Delta d(x) \leq -(n-1)H_{\partial\Omega}(\pi(x))\leq 0$ for all $x\in\mathcal{N}_\epsilon$, where the last inequality holds because $\Omega$ is mean convex. Define in $\mathcal{N}_\epsilon$ the function $w=h\circ d+\varphi$, where we extend $\varphi$ to $\mathcal{N}_\epsilon$ by setting $\varphi(x)=\varphi(\pi(x))$. Here $h(t)=a\log(1+bt)$, with $a,b>0$ to be chosen later. Note that $h\in C^\infty[0,\infty)$ and $h''=-h'^2/a$. The computation of $Q[w]$ leads to $$Q[w]=a_{ij}(h''d_id_j+h'd_{ij}+\varphi_{ij})-\frac{\alpha}{w}(1+|Dw|^2).$$ From $|Dd|=1$, it follows that $\langle D(Dd)_x\xi,Dd(x)\rangle=0$ for all $\xi\in\mathbb R^n$. If $\{e_i\}_i$ is the canonical basis of $\mathbb R^n$, then taking $\xi=e_i$ we find $d_{ij}d_j=0$. Thus \begin{eqnarray*} w_iw_jd_{ij}&=&(h'd_i+\varphi_i)(h'd_j+\varphi_j)d_{ij}=(h'^2d_i+2h'\varphi_i)d_jd_{ij}+\varphi_i \varphi_jd_{ij}\\ &=&\varphi_{i}\varphi_jd_{ij}\geq |D\varphi|^2\Delta d. \end{eqnarray*} Using this inequality and the definition of $a_{ij}$ in (\ref{op}), we derive $$a_{ij}d_{ij}=(1+|Dw|^2)\Delta d-w_iw_j d_{ij}\leq(1+|Dw|^2-|D\varphi|^2)\Delta d.$$ Since $|\xi|^2\leq a_{ij}\xi_i\xi_j\leq(1+|Dw|^2)|\xi|^2$ for all $\xi\in\mathbb R^n$, we have $a_{ij}d_id_j\geq 1$ and $a_{ij}\varphi_{ij}\leq (1+|Dw|^2)|D^2\varphi|$, where $|D^2\varphi|=\sum_{ij}\sup_{\overline{\Omega}}|\varphi_{ij}|$. 
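For the reader's convenience, the properties of the profile $h$ invoked above follow from a direct computation: $$h'(t)=\frac{ab}{1+bt}>0,\qquad h''(t)=-\frac{ab^2}{(1+bt)^2}=-\frac{1}{a}\left(\frac{ab}{1+bt}\right)^2=-\frac{h'(t)^2}{a},$$ so $h$ is increasing and concave on $[0,\infty)$.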
By using that $h'>0$ and $\Delta d\leq 0$, we find \begin{eqnarray*} Q[w]&\leq& h''+h'\Delta d(1+|Dw|^2-|D\varphi|^2)+\left(-\frac{\alpha}{w}+|D^2 \varphi|\right)(1+|Dw|^2)\\ &\leq & h''+\left(-\frac{\alpha}{w}+|D^2 \varphi|\right)(1+|Dw|^2)\\ &\leq&h''+\left(-\frac{\alpha}{w}+|D^2 \varphi|\right)(1+h'^2+|D\varphi|^2+2h'|D\varphi|). \end{eqnarray*} In the tubular neighborhood $\mathcal{N}_\epsilon$, we have \begin{equation}\label{ww} w=a\log(1+bd)+\varphi\geq a\log(1+b)-\|\varphi\|_{0;\Omega}>0, \end{equation} where the last inequality holds if $a\log(1+b)$ is sufficiently large, which we assume from now on. In particular, and because $\alpha<0$, we find $$-\frac{\alpha}{w}+|D^2 \varphi|\leq \frac{-\alpha}{a\log(1+b)-\|\varphi\|_{0;\Omega}}+\|D^2\varphi\|_{0;\Omega}:=\beta.$$ Therefore, taking into account that $h''=-h'^2/a$, we deduce $$Q[w]\leq \left(\beta-\frac{1}{a}\right)h'^2+2\beta h'\|D\varphi\|_{0;\Omega}+\beta(1+\|D\varphi\|_{0;\Omega}^2).$$ We take $a= c/\log(1+b)$, with $c>0$ to be chosen later. Then the above inequality for $Q[w]$ reads \begin{equation}\label{qq} Q[w]\leq \left(\beta-\frac{\log(1+b)}{c}\right)\frac{c^2b^2}{(1+bt)^2}+2\beta \|D\varphi\|_{0;\Omega} \frac{cb}{1+bt}+\beta(1+\|D\varphi\|_{0;\Omega}^2), \end{equation} where we again write $x\equiv(t,\pi(x))$ in normal coordinates. For $b$ sufficiently large, the factor $\beta-\log(1+b)/c$ in (\ref{qq}) is negative. Viewing the right-hand side of (\ref{qq}) as a continuous function $\phi(t)$, $t>0$, we find that $\phi(0)<0$ for $b$ large enough. 
Since $\partial\Omega$ is compact, a continuity argument provides $\epsilon>0$ small enough to ensure that $\phi(t)<0$ for $t\in [0,\epsilon)$. This fixes definitively the tubular neighborhood $\mathcal{N}_\epsilon$ of $\partial\Omega$ and, furthermore, we conclude that $Q[w]<0$ for $b$ large enough. In order to ensure that $w$ is a local upper barrier in $\mathcal{N}_\epsilon$ for the Dirichlet problem (\ref{eq1})-(\ref{eq2}), and because we will apply the comparison principle, we have to prove that \begin{equation}\label{mm} u\leq w\quad \mbox{on $\partial\mathcal{N}_\epsilon$}. \end{equation} On $\partial\mathcal{N}_\epsilon\cap\partial\Omega$, the distance function is $d=0$, so $w=\varphi=u$. On the other hand, on $\partial\mathcal{N}_\epsilon\setminus\partial\Omega$, we find $w=h(\epsilon)+\varphi=a\log(1+b\epsilon)+\varphi$. Denote $\mu=C_1+\|\varphi\|_{0;\Omega}$, where $C_1$ is the constant of Proposition \ref{pr-31}. Take $c>0$ sufficiently large so that $$c\geq\frac{\mu\log(1+b)}{\log(1+b\epsilon)}.$$ With this choice of $c$, we infer that $u\leq w$ on $\partial\mathcal{N}_\epsilon\setminus\partial\Omega$. Moreover, taking $c$ larger if necessary, we ensure that $w>0$ in (\ref{ww}). Thus (\ref{mm}) holds. Because $Q[w]<0=Q[u]$, we conclude $u\leq w$ in $\mathcal{N}_\epsilon$ by the comparison principle. Consequently, we have proved the existence of lower and upper barriers for $u$ in $\mathcal{N}_\epsilon$, namely, $v^0\leq u\leq w$ in $\mathcal{N}_\epsilon$. 
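Let us spell out why the above choice of $c$ gives $u\leq w$ on $\partial\mathcal{N}_\epsilon\setminus\partial\Omega$, a step used in the comparison argument. Since $a=c/\log(1+b)$, the condition on $c$ is equivalent to $a\log(1+b\epsilon)\geq\mu$, hence $$w=a\log(1+b\epsilon)+\varphi\geq \mu-\|\varphi\|_{0;\Omega}=C_1\geq u \quad\mbox{on $\partial\mathcal{N}_\epsilon\setminus\partial\Omega$,}$$ by the estimate $\sup_\Omega u\leq C_1$ of Proposition \ref{pr-31}.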
Hence we deduce $$\max_{\partial\Omega}|Du|\leq C_2:=\max\{\|Dw\|_{0;\partial\Omega}, \|Dv^0\|_{0;\partial\Omega}\},$$ and both values $\|Dw\|_{0;\partial\Omega}$ and $\|Dv^0\|_{0;\partial\Omega}$ depend only on $\alpha$, $\Omega$, $C_1$ and $\varphi$. This completes the proof of the proposition. \end{proof} \section{Proof of Theorem \ref{t1}}\label{sec5} We establish the solvability of the Dirichlet problem (\ref{eq1})-(\ref{eq2}) by applying a slightly modified method of continuity, in which the boundary data are kept fixed along the deformation (see \cite[Sec. 17.2]{gt}). Define the family of Dirichlet problems parametrized by $t\in [0,1]$ by $$\mathcal{P}_t: \left\{\begin{array}{cll} Q_t[u]&=&0 \mbox{ in $\Omega$}\\ u&=& \varphi \mbox{ on $\partial\Omega,$} \end{array}\right.$$ where $$Q_t[u]= (1+|Du|^2)\Delta u-u_iu_ju_{ij}-\frac{\alpha t(1+|Du|^2)}{u}.$$ The graph $\Sigma_{u_t}$ of a solution $u_t$ is a $(t\alpha)$-singular minimal surface. As usual, let $$\mathcal{A}=\{t\in [0,1]: \exists u_t\in C^{2,\gamma}(\overline{\Omega}), u_t>0, Q_t[u_t]=0, {u_t}_{|\partial\Omega}=\varphi\}.$$ The proof consists in showing that $1\in \mathcal{A}$. For this, we prove that $\mathcal{A}$ is a non-empty, open and closed subset of $[0,1]$. \begin{enumerate} \item The set $\mathcal{A}$ is not empty. Observe that $0\in\mathcal{A}$: if $t=0$, then $u_0$ is the solution $v^0$ provided by the Jenkins-Serrin theorem (\cite{js}). Notice that $v^0>0$ by the maximum principle. \item The set $\mathcal{A}$ is open in $[0,1]$. Given $t_0\in\mathcal{A}$, we need to prove that there exists $\epsilon>0$ such that $(t_0-\epsilon,t_0+\epsilon)\cap [0,1]\subset\mathcal{A}$. Define the map $T(t,u)=Q_t[u]$ for $t\in\mathbb R$ and $u\in C^{2,\gamma}(\overline{\Omega})$. Then $t_0\in\mathcal{A}$ if and only if $T(t_0,u_{t_0})=0$. 
If we prove that the derivative of $Q_t$ with respect to $u$, say $(DQ_t)_u$, at the point $u_{t_0}$ is an isomorphism, then the Implicit Function Theorem yields the existence of an open set $\mathcal{V}\subset C^{2,\gamma}(\overline{\Omega})$, with $u_{t_0}\in \mathcal{V}$, and a $C^1$ function $\xi:(t_0-\epsilon,t_0+\epsilon)\rightarrow \mathcal{V}$ for some $\epsilon>0$, such that $\xi(t_0)=u_{t_0}>0$ and $T(t,\xi(t))=0$ for all $t\in (t_0-\epsilon,t_0+\epsilon)$; this guarantees that $\mathcal{A}$ is an open subset of $[0,1]$. Proving that $(DQ_t)_u$ is an isomorphism amounts to showing that for any $f\in C^\gamma(\overline{\Omega})$, there exists a unique solution $v\in C^{2,\gamma}(\overline{\Omega})$ of the linear equation $Lv:=(DQ_t)_u(v)=f$ in $\Omega$ with $v=\varphi$ on $\partial\Omega$. The computation of $L$ gives $$Lv=(DQ_t)_uv=a_{ij}(Du)v_{ij}+\mathcal{B}_i(u,Du,D^2u)v_i+{\mathbf c}(u,Du)v,$$ where $a_{ij}$ is as in (\ref{op}) and $$\mathcal{B}_i=2\left( \Delta u-\frac{\alpha t}{u}\right)u_i-2u_ju_{ij},\quad {\mathbf c}=\frac{\alpha t(1+|Du|^2)}{u^2}.$$ Since $\alpha<0$, the function ${\mathbf c}$ satisfies ${\mathbf c}\leq 0$, and existence and uniqueness are assured by standard theory (\cite[Th. 6.14]{gt}). \item The set $\mathcal{A}$ is closed in $[0,1]$. Let $\{t_k\}\subset\mathcal{A}$ with $t_k\rightarrow t\in [0,1]$. For each $k\in\mathbb{N}$, there exists $u_k\in C^{2,\gamma}(\overline{\Omega})$, $u_k>0$, such that $Q_{t_k}[u_k]=0$ in $\Omega$ and $u_k=\varphi$ on $\partial\Omega$. Define the set $$\mathcal{S}=\{u\in C^{2,\gamma}(\overline{\Omega}): \exists t\in [0,1]\mbox{ such that }Q_{t}[u]=0 \mbox{ in }\Omega, u_{|\partial\Omega}=\varphi\}.$$ Then $\{u_k\}\subset\mathcal{S}$. 
If we prove that the set $\mathcal{S}$ is bounded in $C^{1,\beta}(\overline{\Omega})$ for some $\beta\in[0,\gamma]$, then, since $a_{ij}=a_{ij}(Du)$ in (\ref{op}), Schauder theory shows that $\mathcal{S}$ is bounded in $C^{2,\beta}(\overline{\Omega})$; in particular, $\mathcal{S}$ is precompact in $C^2(\overline{\Omega})$ (see Th. 6.6 and Lem. 6.36 in \cite{gt}). Thus there exists a subsequence $\{u_{k_l}\}\subset\{u_k\}$ converging in $C^2(\overline{\Omega})$ to some $u\in C^2(\overline{\Omega})$. Since $T:[0,1]\times C^2(\overline{\Omega})\rightarrow C^0(\overline{\Omega})$ is continuous, it follows that $Q_t[u]=T(t,u)=\lim_{l\rightarrow\infty}T(t_{k_l},u_{k_l})=0$ in $\Omega$. Moreover, $u_{|\partial\Omega}=\lim_{l\rightarrow\infty} {u_{k_l}}_{|\partial\Omega}=\varphi$ on $\partial\Omega$, so $u\in C^{2,\gamma}(\overline{\Omega})$ and, consequently, $t\in \mathcal{A}$. The above reasoning shows that $\mathcal{A}$ is closed in $[0,1]$ provided we find a constant $M$, independent of $t\in\mathcal{A}$, such that $$ \|u_t\|_{C^1(\overline{\Omega})}=\sup_\Omega |u_t|+\sup_\Omega|Du_t|\leq M. $$ In fact, the $C^0$ and $C^1$ estimates proved in Sections \ref{sec3} and \ref{sec4} for $u_1=u_t$, that is, for the parameter value $t=1$, suffice, as we now show. The $C^0$ estimates for $u_t$ follow from the comparison principle. Indeed, let $t_1<t_2$, $t_i\in [0,1]$, $i=1,2$. Then $Q_{t_1}[u_{t_1}]=0$ and $$Q_{t_1}[u_{t_2}]=-\frac{(t_1-t_2)\alpha(1+|Du_{t_2}|^2)}{u_{t_2}}<0$$ because $\alpha<0$. Since $u_{t_1}=u_{t_2}$ on $\partial\Omega$, the comparison principle yields $u_{t_1}<u_{t_2}$ in $\Omega$. This proves that the solutions $u_{t_i}$ are ordered increasingly with respect to the parameter $t$. Consequently, by (\ref{eh}), we find \begin{equation}\label{ut} \sup_\Omega u_t \leq \sup_\Omega u_1 \leq C_1. 
\end{equation} In order to obtain the gradient estimates for the solution $u_t$, the same computations as in Proposition \ref{pr42} show that $\sup_{\partial\Omega}|Du_t|$ is bounded by a constant depending on $\alpha$, $\Omega$, $\varphi$ and $\|u_t\|_{0;\Omega}$. Moreover, by (\ref{ut}), the value $\|u_t\|_{0;\Omega}$ is bounded by $C_1$, which depends only on $\alpha$, $\varphi$ and $\Omega$. \end{enumerate} The above three steps prove the existence part of Theorem \ref{t1}. The uniqueness is a consequence of Proposition \ref{pr-u}, and this completes the proof of the theorem. \begin{remark} We point out that a $C^0$-version of Theorem \ref{t1} holds for continuous positive boundary values $\varphi$. Indeed, let $\varphi\in C^0(\partial\Omega)$ be given. Let $\{\varphi_k^{+}\}, \{\varphi_k^{-}\}\subset C^{2,\gamma}(\partial\Omega)$ be monotonic sequences of functions converging to $\varphi$ in the $C^0$ norm from above and from below, respectively. It follows from Theorem \ref{t1} that there exist solutions $u_k^{+}, u_k^{-}\in C^{2,\gamma}(\overline{\Omega})$ of the $\alpha$-singular minimal surface equation (\ref{eq1}) such that ${u_k^{+}}_{|\partial\Omega}=\varphi_k^{+}$ and ${u_k^{-}}_{|\partial\Omega}=\varphi_k^{-}$. The sequences $\{u_k^{\pm}\}$ are uniformly bounded in the $C^0$ norm since, by the comparison principle, $$u_1^{-}\leq\ldots \leq u_k^{-}\leq u_{k+1}^{-}\leq\ldots \leq u_{k+1}^{+}\leq u_k^+\leq\ldots\leq u_1^{+}$$ for every $k$. By the proof of Theorem \ref{t1}, the sequences $\{u_k^{\pm}\}$ have a priori $C^1$ estimates depending only on $\alpha$, $\Omega$, $\varphi$ and the $C^0$ estimates. Using classical Schauder theory again (\cite[Th. 
6.6]{gt}), the sequence $\{u_k^{\pm}\}$ contains a subsequence $\{v_k\}\subset C^{2,\gamma}(\overline{\Omega})$ converging in the $C^2$ norm, uniformly on compact subsets of $\Omega$, to a solution $u\in C^2(\Omega)$ of (\ref{eq1}). Since ${u_k^{\pm}}_{|\partial\Omega}=\varphi_k^{\pm}$ and $\{\varphi_k^{\pm}\}$ converge to $\varphi$, it follows that $u$ extends continuously to $\overline{\Omega}$ with $u_{|\partial\Omega}=\varphi$. \end{remark} \begin{thebibliography}{99} \bibitem{bd} J. Bemelmans, U. Dierkes, \textit{On a singular variational integral with linear growth, I: Existence and regularity of minimizers}. Arch. Rat. Mech. Anal. \textbf{100} (1987), 83--103. \bibitem{bht} R. B\"{o}hme, S. Hildebrandt, E. Tausch, \textit{The two-dimensional analogue of the catenary}. Pacific J. Math. \textbf{88} (1980), 247--278. \bibitem{di} U. Dierkes, \textit{A geometric maximum principle, Plateau's problem for surfaces of prescribed mean curvature, and the two-dimensional analogue of the catenary}, in: Partial Differential Equations and Calculus of Variations, pp. 116--141, Springer Lecture Notes in Mathematics 1357 (1988). \bibitem{di1} U. Dierkes, \textit{A Bernstein result for energy minimizing hypersurfaces}, Calc. Var. Partial Differential Equations \textbf{1} (1993), no. 1, 37--54. \bibitem{di6} U. Dierkes, \textit{On the regularity of solutions for a singular variational problem}, Math. Z. \textbf{225} (1997), 657--670. \bibitem{di2} U. Dierkes, \textit{Singular minimal surfaces}, in: Geometric Analysis and Nonlinear Partial Differential Equations, pp. 177--193, Springer, Berlin, 2003. \bibitem{dh} U. Dierkes, G. Huisken, \textit{The $n$-dimensional analogue of the catenary: existence and nonexistence}. Pacific J. Math. \textbf{141} (1990), 47--54. \bibitem{gt} D. Gilbarg, N. S. 
Trudinger, \textit{Elliptic Partial Differential Equations of Second Order}. Second edition. Springer-Verlag, Berlin, 1983. \bibitem{js} H. Jenkins, J. Serrin, \textit{The Dirichlet problem for the minimal surface equation in higher dimensions}, J. Reine Angew. Math. \textbf{229} (1968), 170--187. \bibitem{ke} J. B. Keiper, \textit{The axially symmetric $n$-tectum}, preprint, Toledo University (1980). \bibitem{li} F. H. Lin, \textit{On the Dirichlet problem for minimal graphs in hyperbolic space}. Invent. Math. \textbf{96} (1989), 593--612. \bibitem{lo} R. L\'opez, \textit{Invariant singular minimal surfaces}, Ann. Global Anal. Geom. \textbf{53} (2018), 521--541. \bibitem{ni} J. C. C. Nitsche, \textit{A non-existence theorem for the two-dimensional analogue of the catenary}. Analysis \textbf{6} (1986), 143--156. \bibitem{se} J. B. Serrin, \textit{The problem of Dirichlet for quasilinear elliptic differential equations with many independent variables}. Phil. Trans. R. Soc. Lond. \textbf{264} (1969), 413--496. \end{thebibliography} \end{document}
\begin{document} \section{Introduction} An essential step in the development of viable quantum technologies is to achieve precise control over quantum dynamics \cite{techno1,techno2}. In many situations, optimal performance relies on the ability to create particular target states. However, in dynamically reaching such states the quantum adiabatic theorem \cite{adtheorem} poses a formidable challenge, since finite-time driving inevitably causes parasitic excitations \cite{transitions1,transitions2,transitions3,transitions4}. Acknowledging and addressing this issue, the field of ``shortcuts to adiabaticity'' (STA) \cite{Torrontegui2013,review1,review2,review3} has developed a variety of techniques that make it possible to facilitate effectively adiabatic dynamics in finite time. Recent years have seen an explosion of work on, for instance, counterdiabatic driving \cite{CD1,CD2,CD3,LMGsta,psta2,psta3,psta4,psta5}, the fast-forward method~\cite{fastforw1,fastforw2,fastforw3,fastforw4}, time-rescaling~\cite{resta1,resta2}, methods based on identifying the adiabatic invariant~\cite{invar1,invar2,invar3,invar4}, and even generalizations to classical dynamics \cite{csta1,csta2,Iram2020}. For comprehensive reviews of the various techniques we refer to the recent literature \cite{review1,review2,review3}. Among these different paradigms, counterdiabatic driving (CD) stands out as the only method that forces evolution through the adiabatic manifold at all times. However, experimentally realizing the CD method requires applying a complicated control field, which often involves non-local terms that are hard to implement in many-body systems~\cite{LMGsta,psta3}. This may be particularly challenging if the system is not readily accessible, for instance due to geometric restrictions of the experimental set-up. 
In the present paper, we propose an alternative method to achieve transitionless quantum driving by leveraging the system's (realistically) inevitable interaction with the environment. Our novel paradigm is inspired by ``envariance,'' which is short for entanglement-assisted invariance. Envariance is a symmetry of composite quantum systems, first described by Wojciech H. Zurek~\cite{born1}. Consider a quantum state $|\psi_{\mc{SE}}\rangle$ that lives on a composite quantum universe comprising the system, $\mc{S}$, and its environment, $\mc{E}$. Then, $|\psi_{\mc{SE}}\rangle$ is called envariant under a unitary map $u_{\mc{S}} \otimes \mathbb{I}_{\mc{E}}$ if and only if there exists another unitary $\mathbb{I}_{\mc{S}} \otimes u_{\mc{E}}$ acting on $\mc{E}$, such that the composite state remains unaltered after applying both maps, i.e., $\left(u_{\mc{S}} \otimes \mathbb{I}_{\mc{E}}\right)|\psi_{\mc{SE}}\rangle= |\phi_{\mc{SE}}\rangle$ and $\left(\mathbb{I}_{\mc{S}} \otimes u_{\mc{E}}\right)|\phi_{\mc{SE}}\rangle= |\psi_{\mc{SE}}\rangle$. In other words, the state is envariant if the action of a unitary on $\mc{S}$ can be inverted by applying a unitary on $\mc{E}$. Envariance has been essential in deriving Born's rule~\cite{born1,born2}, and in formulating a novel approach to the foundations of statistical mechanics~\cite{deffner2016foundations}. Moreover, experiments~\cite{envexp1,envexp2} showed that this inherent symmetry of composite quantum states is indeed a physical reality, or rather a ``quantum fact of life'' with no classical analog~\cite{born2}. Drawing inspiration from envariance, we develop a novel method for transitionless quantum driving. In the following, we will see that instead of inverting the action of a unitary on $\mc{S}$, we can suppress undesirable transitions in the energy eigenbasis of $\mc{S}$ by applying a control field on the environment $\mc{E}$. 
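To illustrate envariance with the simplest possible example (a standard one, in the spirit of Ref.~\cite{born1}), consider two qubits in the Bell state $|\psi_{\mc{SE}}\rangle=(|0\rangle_{\mc{S}}|0\rangle_{\mc{E}}+|1\rangle_{\mc{S}}|1\rangle_{\mc{E}})/\sqrt{2}$. The phase unitary $u_{\mc{S}}=|0\rangle\langle 0|+e^{i\phi}|1\rangle\langle 1|$ acting on $\mc{S}$ alone is undone by the counter-rotation $u_{\mc{E}}=|0\rangle\langle 0|+e^{-i\phi}|1\rangle\langle 1|$ acting on $\mc{E}$ alone, $$\left(\mathbb{I}_{\mc{S}}\otimes u_{\mc{E}}\right)\left(u_{\mc{S}}\otimes\mathbb{I}_{\mc{E}}\right)|\psi_{\mc{SE}}\rangle=|\psi_{\mc{SE}}\rangle,$$ so $|\psi_{\mc{SE}}\rangle$ is envariant under $u_{\mc{S}}\otimes\mathbb{I}_{\mc{E}}$.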
In particular, we consider the unitary evolution of an ensemble of composite states $\{|\psi_{\mc{SE}}\rangle\}$, on a Hilbert space $\mc{H}_{\mc{S}}\otimes \mc{H}_{\mc{E}}$ of arbitrary dimension, and we determine the general analytic form of the time-dependent driving on $\mc{H}_{\mc{E}}$ that suppresses undesirable transitions in the system of interest $\mc{S}$. This general driving on the environment $\mc{E}$ guarantees that the system $\mc{S}$ evolves through the adiabatic manifold at all times. We dub this technique \textit{Environment-Assisted Shortcuts To Adiabaticity}, or ``EASTA'' for short. In addition, we prove that the cost associated with the EASTA technique is exactly equal to that of counterdiabatic driving. We illustrate our results in a simple two-qubit model, where the system and the environment are each described by a single qubit. Finally, we conclude by discussing a few implications of our results in the general context of decoherence theory and Quantum Darwinism. \section{Counterdiabatic Driving} We start by briefly reviewing counterdiabatic driving to establish notions and notation. Consider a quantum system $\mc{S}$, in a Hilbert space $\mc{H}_{\mc{S}}$ of dimension $d_{\mc{S}}$, driven by the Hamiltonian $H_{0}(t)$ with instantaneous eigenvalues $\{E_{n}(t)\}_{n \in \llbracket 0, \ d_{\mc{S}}-1 \rrbracket}$ and eigenstates $\{|n(t) \rangle \}_{n \in \llbracket 0, \ d_{\mc{S}}-1 \rrbracket}$. For slowly varying $H_{0}(t)$, according to the quantum adiabatic theorem~\cite{adtheorem}, the driving of $\mc{S}$ is transitionless. 
In other words, if the system starts in the eigenstate $|n(0) \rangle$ at $t=0$, it evolves into the eigenstate $|n(t) \rangle$ at time $t$ (up to a phase factor), \begin{equation} |\psi_n(t)\rangle\equiv U(t)|n(0) \rangle = e^{-\frac{i}{\hbar} \int_{0}^{t} E_{n}(s) ds- \int_{0}^{t} \langle n \mid \partial_{s} n \rangle ds} |n(t) \rangle \equiv e^{-\frac{i}{\hbar} f_{n}(t)} |n(t) \rangle. \end{equation} For arbitrary driving $H_{0}(t)$, namely for driving rates larger than the typical energy gaps, the system undergoes transitions. However, it has been shown~\cite{CD1,CD2,CD3} that the addition of a counterdiabatic field $H_{\text{\tiny CD}}(t)$ forces the system to evolve through the adiabatic manifold. Using the total Hamiltonian \begin{equation} H = H_{0}(t) +H_{\text{\tiny CD}}(t) = H_{0}(t)+i \hbar \sum_{n}\left(|\partial_{t} n\rangle\langle n|-\langle n \mid \partial_{t} n\rangle | n\rangle\langle n|\right)\,, \end{equation} the system evolves with the corresponding unitary $U_{\text{\tiny CD}}(t)= \sum_{n} |\psi_n(t)\rangle \langle n(0)|$ such that \begin{equation} U_{\text{\tiny CD}}(t)|n(0) \rangle = e^{-\frac{i}{\hbar} f_{n}(t)} |n(t) \rangle. \end{equation} This evolution is exact no matter how fast the system is driven by the total Hamiltonian. However, the counterdiabatic driving (CD) method requires adding a complicated counterdiabatic field $H_{\text{\tiny CD}}(t)$ involving highly non-local terms that are hard to implement in a many-body set-up~\cite{LMGsta,psta3}. Constructing this counterdiabatic field requires determining the instantaneous eigenstates $\{|n(t) \rangle \}_{n \in \llbracket 0, \ d_{\mc{S}}-1 \rrbracket}$ of the time-dependent Hamiltonian $H_{0}(t)$. 
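As a consistency check (a standard computation in the CD literature, spelled out here for convenience), one can verify directly that $U_{\text{\tiny CD}}(t)=\sum_n e^{-\frac{i}{\hbar}f_n(t)}|n(t)\rangle\langle n(0)|$ solves the Schr{\"o}dinger equation generated by the total Hamiltonian. Writing $\dot f_n=E_n-i\hbar\langle n|\partial_t n\rangle$, we have \begin{align*} i\hbar\,\partial_t U_{\text{\tiny CD}} &=\sum_n e^{-\frac{i}{\hbar}f_n}\left(\dot f_n |n\rangle+i\hbar|\partial_t n\rangle\right)\langle n(0)| \\ &=\sum_n e^{-\frac{i}{\hbar}f_n}\left(E_n|n\rangle+i\hbar|\partial_t n\rangle-i\hbar\langle n|\partial_t n\rangle|n\rangle\right)\langle n(0)| =\left(H_0+H_{\text{\tiny CD}}\right)U_{\text{\tiny CD}}, \end{align*} using $H_0|n\rangle=E_n|n\rangle$ and the definition of $H_{\text{\tiny CD}}$ above.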
Moreover, changing the dynamics of the system of interest (i.e., adding the counterdiabatic field) requires direct access to, and control of, $\mc{S}$. In the following, we will see how (at least) the second issue can be circumvented by relying on the environment $\mc{E}$ that inevitably couples to the system of interest. In particular, we make use of the entanglement between system and environment to avoid any transitions in the system. To this end, we construct the unique driving of the environment $\mc{E}$ that counteracts the transitions in $\mc{S}$. \section{Open system dynamics and STA for mixed states} \label{sec3} We start by stating three crucial assumptions: (i) the joint state of the system $\mc{S}$ and the environment $\mc{E}$ is described by an initial wave function $| \psi_{\mc{SE}}(0) \rangle$ evolving unitarily according to the Schr{\"o}dinger equation; (ii) the environment's degrees of freedom do not interact with each other; (iii) the $\mc{S}$-$\mc{E}$ joint state belongs to the ensemble of \textit{singly branching states}~\cite{BKWHZ}. These branching states have the general form \begin{equation} | \psi_{\mc{SE}} \rangle= \sum^{N-1}_{n=0} \sqrt{p_n} |n \rangle \bigotimes^{N_{\mc{E}}}_{l=1}|\mc{E}^{l}_n \rangle, \label{branch1} \end{equation} where $p_n \in [0,\ 1]$ is the probability associated with the $n$th branch of the wave function, with orthonormal states $|n \rangle \in \mc{H}_{\mc{S}}$ and $\bigotimes^{N_{\mc{E}}}_{l=1} |\mc{E}^{l}_n \rangle \in \mc{H}_{\mc{E}}$. Without loss of generality we can further assume $\sqrt{p_n}=1/\sqrt{N}$ for all $n \in \llbracket 0, \ N-1 \rrbracket$, since if $\sqrt{p_n}\neq 1/\sqrt{N}$ we can always find an extended Hilbert space~\cite{born1,born2} such that the state $| \psi_{\mc{SE}} \rangle$ becomes even. 
Thus, we can consider a branching state $| \psi_{\mc{SE}} \rangle$ of the simpler form \begin{equation} | \psi_{\mc{SE}} \rangle= \frac{1}{\sqrt{N}} \sum^{N-1}_{n=0} |n \rangle \bigotimes^{N_{\mc{E}}}_{l=1}|\mc{E}^{l}_n \rangle. \label{branch2} \end{equation} In the following, we will see that EASTA can actually only be facilitated for even states \eqref{branch2}\footnote{In Appendix~\ref{a}, we show that EASTA cannot be implemented for arbitrary probabilities (i.e., $(\exists \ n); \ \sqrt{p_n} \neq 1/\sqrt{N}$).}. \subsection{Two-level environment $\mc{E}$} We start with the instructive case of a two-level environment, cf. Fig.~\ref{sta}. To this end, consider the branching state \begin{equation} | \psi_{\mc{SE}}(0) \rangle= \frac{1}{\sqrt{2}} |g (0) \rangle\otimes|\mc{E}_g (0) \rangle+\frac{1}{\sqrt{2}} |e (0) \rangle\otimes|\mc{E}_e (0) \rangle, \label{twol} \end{equation} where the states $|\mc{E}_g (0) \rangle$ and $|\mc{E}_e (0) \rangle$ form a basis of the environment $\mc{E}$, and the states $|g (0) \rangle$ and $|e (0) \rangle$ represent the ground and excited states of $\mc{S}$ at $t=0$, respectively. It is then easy to see that there exists a unique unitary $U^{\prime}$ such that the system evolves through the adiabatic manifold in each branch of the wave function, \begin{equation} (\exists! \ U^{\prime}); \ \ (U \otimes U^{\prime})| \psi_{\mc{SE}}(0) \rangle= (U_{\text{\tiny CD}} \otimes \mathds{I}_{\mc{E}})| \psi_{\mc{SE}}(0) \rangle. 
\end{equation} Starting from the above equality, we obtain \begin{equation} U |g (0) \rangle\otimes U^{\prime}|\mc{E}_g (0) \rangle + U |e (0) \rangle \otimes U^{\prime}|\mc{E}_e (0) \rangle = e^{-\frac{i}{\hbar} f_{g}(t)} |g (t) \rangle\otimes|\mc{E}_g (0) \rangle+ e^{-\frac{i}{\hbar} f_{e}(t)} |e (t) \rangle\otimes|\mc{E}_e (0) \rangle. \end{equation} Projecting the environment $\mc{E}$ onto the state ``$|\mc{E}_g (0) \rangle$'', we have \begin{equation} U |g (0) \rangle \langle \mc{E}_g (0)|U^{\prime}|\mc{E}_g (0) \rangle + U |e (0) \rangle \langle \mc{E}_g (0)|U^{\prime}|\mc{E}_e (0) \rangle = e^{-\frac{i}{\hbar} f_{g}(t)} |g (t) \rangle, \end{equation} equivalently written as \begin{equation} (U^{\prime}_{g,g}) U |g (0) \rangle + (U^{\prime}_{g,e}) U |e (0) \rangle = e^{-\frac{i}{\hbar} f_{g}(t)} |g (t) \rangle, \end{equation} which implies \begin{equation} (U^{\prime}_{g,g}) |g (0) \rangle + (U^{\prime}_{g,e}) |e (0) \rangle = e^{-\frac{i}{\hbar} f_{g}(t)} U^{\dagger}|g (t) \rangle. \end{equation} Therefore, \begin{equation} U^{\prime}_{g,g}= e^{-\frac{i}{\hbar} f_{g}(t)} \langle g (0)|U^{\dagger}|g (t) \rangle, \ \text{and} \ \ U^{\prime}_{g,e}= e^{-\frac{i}{\hbar} f_{g}(t)} \langle e (0)|U^{\dagger}|g (t) \rangle. \end{equation} Additionally, by projecting $\mc{E}$ onto the state ``$|\mc{E}_e (0) \rangle$'' we get \begin{equation} U^{\prime}_{e,g}= e^{-\frac{i}{\hbar} f_{e}(t)} \langle g (0)|U^{\dagger}|e (t) \rangle, \ \text{and} \ \ U^{\prime}_{e,e}= e^{-\frac{i}{\hbar} f_{e}(t)} \langle e (0)|U^{\dagger}|e (t) \rangle. 
\end{equation} It is straightforward to check that the operator $U^{\prime}$, which reads \begin{equation} \label{Uprime} U^{\prime}= \begin{pmatrix} U^{\prime}_{g,g} & U^{\prime}_{g,e} \\ U^{\prime}_{e,g} & U^{\prime}_{e,e} \end{pmatrix}, \end{equation} is indeed a unitary on $\mc{E}$. In conclusion, we have constructed a unique unitary map that acts only on $\mc{E}$, yet counteracts transitions in $\mc{S}$. Note that coupling the system and environment implies that the state of the system is no longer described by a wave function. Hence the usual counterdiabatic scheme evolves the density matrix $\rho_{\mc{S}}(0)$ to another density matrix $\rho_{\mc{S}}(t)$, such that both matrices have the same populations and coherences in the instantaneous eigenbasis of $H_{0}(t)$ (which is what EASTA accomplishes, as well). \begin{figure}\end{figure} \subsection{$N$-level environment $\mc{E}$} We can easily generalize the two-level analysis to an $N$-level environment. As in the above description, coupling the system to the environment leads to a branching state of the form \begin{equation} | \psi_{\mc{SE}}(0) \rangle= \frac{1}{\sqrt{N}} \sum^{N-1}_{n=0} |n (0) \rangle\otimes|\mc{E}_n (0) \rangle, \end{equation} where the states $\{|\mc{E}_n (0) \rangle\}_n$ form a basis of the environment $\mc{E}$. We can then construct a unique unitary $U^{\prime}$ such that the system evolves through the adiabatic manifold in each branch of the wave function, \begin{equation} (\exists! \ U^{\prime}); \ \ (U \otimes U^{\prime})| \psi_{\mc{SE}}(0) \rangle= (U_{\text{\tiny CD}} \otimes \mathds{I}_{\mc{E}})| \psi_{\mc{SE}}(0) \rangle. \end{equation} The proof follows the exact same strategy as in the two-level case, and we find \begin{equation} (\forall \ (m,n) \in \llbracket 0, N-1 \rrbracket^{2} ); \ \ U^{\prime}_{m,n}= e^{-\frac{i}{\hbar} f_{m}(t)} \langle n(0)|U^{\dagger}|m(t) \rangle. 
\label{mainr1} \end{equation} The above expression for the elements of the unitary $U^{\prime}$ is our main result, which holds for any driving $H_{0}(t)$ and any $N$-dimensional system. \subsection{Process cost} Having established the general analytic form of the unitary applied on the environment, the next logical step is to compute and compare the cost of both schemes: (a) the usual counterdiabatic scheme and (b) the environment-assisted shortcut scheme presented above (cf. Figure~\ref{sta}). More specifically, we now compare the time integral of the instantaneous cost~\cite{cost5} for both driving schemes~\cite{cost1,cost2,cost3,cost4,cost5}, (a) $C_{\text{\tiny CD}}(t)= (1/\tau)\int_{0}^{t} \| H_{\text{\tiny CD}}(s) \| ds$ and (b) $C_{\text{env}}(t)= (1/\tau)\int_{0}^{t} \| H_{\text{env}}(s) \| ds$ ($\|\cdot\|$ is the operator norm), where the driving Hamiltonian on the environment can be determined from the expression of $U^{\prime}(t)$, $H_{\text{env}}(t)= i\hbar \frac{dU^{\prime}(t)}{dt} U^{\prime \dagger}(t)$. In fact, from~\myeqref{mainr1} it is not too hard to see that the field applied on the environment $H_{\text{env}}(t)$ has the same eigenvalues as the counterdiabatic field $H_{\text{\tiny CD}}(t)$, since there exists a similarity transformation between $H_{\text{env}}(t)$ and $H^{*}_{\text{\tiny CD}}(t)$. Therefore, the cost of both processes is exactly the same, $C_{\text{\tiny CD}}=C_{\text{env}}$, for any arbitrary driving $H_{0}(t)$. Details of the derivation can be found in Appendix~\ref{aa}. Note that for $t=\tau$, the above definition of the cost becomes the total cost for the duration ``$\tau$'' of the process. \subsection{Illustration} We illustrate our results in a simple two-qubit model, where the system and the environment are each described by a single qubit. Note that the environment can live in a larger Hilbert space while still being characterized as a virtual qubit \cite{QD20}.
The aforementioned virtual qubit notion simply means that the state of the environment has rank equal to two. We choose a driving Hamiltonian $H_{0}(t)$ such that \begin{equation} H_{0}(t)= \frac{B}{2} \sigma_x + \frac{J(t)}{2} \sigma_z, \end{equation} where $J(t)$ is the driving/control field, $B$ is a constant, and $\sigma_z$ and $\sigma_x$ are Pauli matrices. Depending on the physical context, $B$ and $J(t)$ can be interpreted in various ways. In particular, as noted in Ref.~\cite{analyticqubit}, in some contexts the constant $B$ can be regarded as the energy splitting between the two levels~\cite{h1,h2,h3}, and in others, the driving $J(t)$ can be interpreted as a time-varying energy splitting between the states~\cite{J1,J2,J3,J4}. To illustrate our results we choose \begin{equation} (\forall \ t \in [0,\ \tau]); \ J(t) = B \cos^{2}\left(\frac{\pi t}{2\tau}\right). \end{equation} The above driving evolves the system beyond the adiabatic manifold, and we quantify this by plotting, in Figure~\ref{illust1}, the overlap between the evolved state $| \phi_{n}(t) \rangle \equiv U(t) |n(0) \rangle$ and the instantaneous eigenstate $|n(t) \rangle$ of the Hamiltonian $H_{0}(t)$, for $n \in \{g,e\}$. To illustrate our main result, we also plot the overlap between the states resulting from the two shortcut schemes (illustrated in Figure~\ref{sta}): the first scheme is the usual counterdiabatic (CD) driving, where we add a counterdiabatic field $H_{\text{\tiny CD}}$ to the system of interest, and we denote the resulting composite state by ``$|\psi^{\text{\tiny CD}}_{\mc{SE}} \rangle$''. The second scheme is the environment-assisted shortcut to adiabaticity (EASTA), and we denote the resulting composite state by ``$|\psi^{\text{\tiny EASTA}}_{\mc{SE}} \rangle$''.
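The two-qubit illustration can also be reproduced numerically. The following NumPy sketch (ours, not the authors' code; $B=\tau=\hbar=1$ and the step count are illustrative choices) builds the time-ordered propagator for $H_{0}(t)$, constructs $U^{\prime}$ from the matrix-element formula of the main result with the phases $f_m$ set to zero, and checks both its unitarity and that, branch by branch, $U^{\prime}$ returns the system to the adiabatic manifold:

```python
import numpy as np

# Pauli matrices and illustrative parameters (hbar = 1)
sx = np.array([[0., 1.], [1., 0.]], dtype=complex)
sz = np.array([[1., 0.], [0., -1.]], dtype=complex)
B, tau, steps = 1.0, 1.0, 4000
dt = tau / steps

def H0(t):
    J = B * np.cos(np.pi * t / (2 * tau)) ** 2
    return 0.5 * B * sx + 0.5 * J * sz

# Time-ordered propagator U(tau) from small-step exponentials
U = np.eye(2, dtype=complex)
for k in range(steps):
    w, P = np.linalg.eigh(H0((k + 0.5) * dt))
    U = P @ np.diag(np.exp(-1j * w * dt)) @ P.conj().T @ U

# Instantaneous eigenbases at t = 0 and t = tau (columns are |n(0)>, |n(tau)>)
_, V0 = np.linalg.eigh(H0(0.0))
_, Vt = np.linalg.eigh(H0(tau))

# U'_{m,n} = <n(0)| U^dag |m(tau)>  (phases f_m omitted for this check)
W = V0.conj().T @ U.conj().T @ Vt   # W[n, m] = <n(0)|U^dag|m(tau)>
Uprime = W.T

# U' is unitary, and in each branch it undoes the diabatic transitions:
# sum_n U'_{m,n} U |n(0)> = |m(tau)>
assert np.allclose(Uprime @ Uprime.conj().T, np.eye(2))
for m in range(2):
    branch = sum(Uprime[m, n] * (U @ V0[:, n]) for n in range(2))
    assert np.allclose(branch, Vt[:, m])
```

The branch identity holds exactly here because $\sum_n |n(0)\rangle\langle n(0)| = \mathds{I}$, independently of how strongly the chosen driving violates adiabaticity.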
Confirming our analytic results, the local driving on the environment ensures that the system evolves through the adiabatic manifold at all times, since the state overlap is equal to one for all $t \in [0,\tau]$. Finally, we compute and plot the cost of both shortcut schemes and verify that they are equal to each other for all times ``$t$'' (cf. Figure~\ref{illust1}, panel (b)) and for all ``$\tau$'' (cf. Figure~\ref{illust1}, panel (c)). \section{Concluding remarks} \subsection{Summary} In the present manuscript, we considered branching states $\{|\psi_{\mc{SE}}\rangle\}$ on a Hilbert space $\mc{H}_{\mc{S}}\otimes \mc{H}_{\mc{E}}$ of arbitrary dimension, and we derived the general analytic form of the time-dependent driving on $\mc{H}_{\mc{E}}$ which guarantees that the system $\mc{S}$ evolves through the adiabatic manifold at all times. Through this \textit{Environment-Assisted Shortcuts To Adiabaticity} scheme, we explicitly showed that the environment can act as a proxy to control the dynamics of the system of interest. Moreover, for branching states $|\psi_{\mc{SE}}\rangle$ with equal branch probabilities, we further proved that the cost associated with the EASTA technique is exactly equal to that of counterdiabatic driving. We illustrated our results in a simple two-qubit model, where the system and the environment are each described by a single qubit. It is interesting to note that, while we focused in the present manuscript on counterdiabatic driving, the technique can readily be generalized to any type of control unitary map ``$U_{\text{\tiny control}}$'', resulting in a desired evolved state $|\kappa_n(t) \rangle \equiv U_{\text{\tiny control}} |n(0) \rangle$. The corresponding unitary $U^{\prime}$ on $\mc{H}_{\mc{E}}$ then has the form \begin{equation} (\forall \ (m,n) \in \llbracket 0, N-1 \rrbracket^{2} ); \ \ U^{\prime}_{m,n}= \langle n(0)|U^{\dagger}|\kappa_m(t) \rangle.
\label{mainr111} \end{equation} In the special case in which the evolved state is equal to the $n$th instantaneous eigenstate of $H_{0}(t)$ (up to a phase factor), \begin{equation} |\kappa_n(t) \rangle=e^{-\frac{i}{\hbar} f_{n}(t)} |n(t) \rangle, \end{equation} we recover the main result of the manuscript. The above generalization illustrates the broad scope of our results. Any control unitary on the system $\mc{S}$ can be realized solely by acting on the environment $\mc{E}$, without altering the dynamics of the system of interest $\mc{S}$ (i.e., for any arbitrary driving $H_{0}(t)$, hence any driving rate). \subsection{Envariance and pointer states} In the present work, we leveraged the presence of an environment to induce desired dynamics in a quantum system. Interestingly, our novel method for shortcuts to adiabaticity relies on branching states, which play an essential role in decoherence theory and in the framework of Quantum Darwinism. In open system dynamics~\cite{decoh1,decoh2,decoh3}, the interaction between system and environment superselects the states that survive the decoherence process, aka the pointer states~\cite{pointer1,pointer2}. It is exactly these pointer states that are the starting point of our analysis, and for which EASTA is designed. While previous studies \cite{psta6,open1,open2} have explored STA methods for open quantum systems, to the best of our understanding the environment was only considered as a passive source of additional noise described by quantum master equations. In our paradigm, we recognize the active role that an environment plays in quantum dynamics, which is inspired by envariance and reminiscent of the mind-set of Quantum Darwinism.
In this framework~\cite{QD1,QD2,QD3,QD4,QD5,QD6,QD7,QD8,QD9,QD10,QD11,QD12,QD13,QD14,QD15,QD16,QD17,QD19,QD20}, the environment is understood as a communication channel through which we learn about the world around us, i.e., we learn about the state of systems of interest by eavesdropping on environmental degrees of freedom~\cite{QD20}. Thus, in the true spirit of the teachings of Wojciech H. Zurek, we have come to understand the agency of quantum environments and the useful role they can assume. To this end, we have applied a small part of the many lessons we learned from working with Wojciech, to connect and merge tools from seemingly different areas of physics and to gain a deeper and more fundamental understanding of nature. \appendix \numberwithin{equation}{section} \renewcommand{\theequation}{\thesection\arabic{equation}} \section{Cost of environment-assisted shortcuts to adiabaticity} \label{aa} \setcounter{equation}{0} In this appendix, we show that CD and EASTA have the same cost. Generally, we have \begin{equation} \begin{split} H_{\text{env}}(t)&= i\hbar \frac{dU^{\prime}(t)}{dt} U^{\prime \dagger}(t),\\ &= i\hbar \sum_{i,j} \sum_{k} \frac{dU^{\prime}_{i,k}}{dt} (U^{\prime}_{j,k})^{*} |\mc{E}_{i}(0)\rangle \langle \mc{E}_{j}(0)|.\\ \end{split} \end{equation} From the main result in~\myeqref{mainr1}, we obtain \begin{equation} H_{\text{env}}(t)= \sum_{i,j} \left(\sum_{k} \left(\langle k(0)|i\hbar\partial_{t}U^{\dagger}|\psi_{i}(t) \rangle (U^{\prime}_{j,k})^{*} + i\hbar\langle k(0)|U^{\dagger}|\partial_{t}\psi_{i}(t) \rangle(U^{\prime}_{j,k})^{*}\right)\right)|\mc{E}_{i}(0)\rangle \langle \mc{E}_{j}(0)|.
\end{equation} Given that $H_{0}(t)= i\hbar \frac{dU(t)}{dt} U^{\dagger}(t)$, we also have \begin{equation} H_{\text{env}}(t)= \sum_{i,j} \left(\sum_{k} \left(\langle k(0)|(-U^{\dagger}H_{0})|\psi_{i}(t) \rangle (U^{\prime}_{j,k})^{*} + i\hbar\langle k(0)|U^{\dagger}|\partial_{t}\psi_{i}(t) \rangle(U^{\prime}_{j,k})^{*}\right)\right)|\mc{E}_{i}(0)\rangle \langle \mc{E}_{j}(0)|, \end{equation} which implies \begin{equation} H_{\text{env}}(t)= \sum_{i,j} \left(\sum_{k} \left(\langle k(0)|(-U^{\dagger}H_{0})|\psi_{i}(t) \rangle (U^{\prime}_{j,k})^{*} + \langle k(0)|U^{\dagger}H|\psi_{i}(t) \rangle(U^{\prime}_{j,k})^{*}\right)\right)|\mc{E}_{i}(0)\rangle \langle \mc{E}_{j}(0)|, \end{equation} and hence, \begin{equation} H_{\text{env}}(t)= \sum_{i,j} \left(\sum_{k} \langle k(0)|U^{\dagger}H_{\text{\tiny CD}}|\psi_{i}(t) \rangle(U^{\prime}_{j,k})^{*}\right)|\mc{E}_{i}(0)\rangle \langle \mc{E}_{j}(0)|. \end{equation} Using $| \phi_{k}(t) \rangle \equiv U(t) |k(0) \rangle$ we can write \begin{equation} H_{\text{env}}(t)= \sum_{i,j} \left(\sum_{k} \langle \psi_{j}(t)|\phi_k(t) \rangle \langle \phi_k(t)|H_{\text{\tiny CD}}|\psi_{i}(t) \rangle\right)|\mc{E}_{i}(0)\rangle \langle \mc{E}_{j}(0)|. \end{equation} Therefore, \begin{equation} H_{\text{env}}(t)= \sum_{i,j} \left( \langle \psi_{j}(t)|H_{\text{\tiny CD}}|\psi_{i}(t) \rangle\right)|\mc{E}_{i}(0)\rangle \langle \mc{E}_{j}(0)|.
\end{equation} By definition, we also have \begin{equation} H_{\text{\tiny CD}}= \sum_{i,j} \left( \langle \psi_{i}(t)|H_{\text{\tiny CD}}|\psi_{j}(t) \rangle\right)|\psi_{i}(t)\rangle \langle \psi_{j}(t)|, \end{equation} hence, \begin{equation} H^{T}_{\text{\tiny CD}}=H^{*}_{\text{\tiny CD}}= \sum_{i,j} \left( \langle \psi_{j}(t)|H_{\text{\tiny CD}}|\psi_{i}(t) \rangle\right)|\psi_{i}(t)\rangle \langle \psi_{j}(t)|. \end{equation} Thus, there exists a similarity transformation between $H^{*}_{\text{\tiny CD}}$ and $H_{\text{env}}$, and $C_{\text{\tiny CD}}=C_{\text{env}}$ for any arbitrary driving $H_{0}(t)$. The similarity transformation is given by the matrix $S=\sum_{j} |\mc{E}_j(0)\rangle \langle \psi_j(t)|$, such that $SH^{*}_{\text{\tiny CD}}S^{-1}=H_{\text{env}}$. Since we proved that the Hamiltonians $H_{\text{\tiny CD}}$ and $H_{\text{env}}$ have the same eigenvalues, our result remains valid for other definitions of the cost function $\mc{C}$ which might involve other norms (e.g., the Frobenius norm). \section{Generalization to arbitrary branching probabilities} \label{a} Finally, we briefly inspect the case of non-even branching states. We begin by noting the consequences of our assumptions. In particular, we have assumed that the state of system+environment evolves unitarily. Thus, consider a joint map of the form $U \otimes M$, where $U$ is a unitary on $\mc{S}$. Then it is a simple exercise to show that the map $M$, on $\mc{E}$, is also unitary, $MM^{\dagger}=M^{\dagger}M=\mathbb{I}$. In what follows, we prove by contradiction that there exists no unitary map $M$ that suppresses transitions in $\mc{S}$ for branching states with arbitrary probabilities.
Consider \begin{equation} | \psi_{\mc{SE}}(0) \rangle= \sum^{N-1}_{n=0} \sqrt{p_n} |n (0) \rangle \bigotimes^{N_{\mc{E}}}_{l=1}|\mc{E}^{l}_n (0) \rangle, \label{bran1} \end{equation} and assume that there exists a unitary map $M$ on $\mc{E}$ that suppresses transitions in $\mc{S}$, i.e., \begin{equation} \sum^{N-1}_{n=0} \sqrt{p_n} U|n (0) \rangle \otimes \left(M \bigotimes^{N_{\mc{E}}}_{l=1} |\mc{E}^{l}_n (0) \rangle \right)=\sum^{N-1}_{n=0} \sqrt{p_n} e^{-\frac{i}{\hbar} f_{n}(t)} |n(t) \rangle \otimes \left(\bigotimes^{N_{\mc{E}}}_{l=1} |\mc{E}^{l}_n (0) \rangle \right). \end{equation} Following the same steps as in Section~\ref{sec3}, we obtain \begin{equation} (\forall \ (m,n) \in \llbracket 0, N-1 \rrbracket^{2} ); \ \ M_{m,n}= \sqrt{\frac{p_m}{p_n}} e^{-\frac{i}{\hbar} f_{m}(t)} \langle n(0)|U^{\dagger}|m(t) \rangle. \label{map1} \end{equation} Comparing the above map with our main result in~\myeqref{mainr1}, we conclude that the additional factor $\sqrt{\frac{p_m}{p_n}}$ violates unitarity, and hence EASTA cannot work for non-even branching states \eqref{bran1}. This can be seen more explicitly from the form of the matrices $MM^{\dagger}$ and $M^{\dagger}M$.
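The violation of unitarity by the factor $\sqrt{p_m/p_n}$ is easy to confirm numerically. A minimal sketch, assuming a Haar-random $U$ and an arbitrary orthonormal target basis (both illustrative; phases omitted, since they do not affect unitarity):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 3

def haar(n):
    # Unitary from the QR factorization of a complex Gaussian matrix
    q, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
    return q

U, V = haar(N), haar(N)       # columns of V play the role of the targets |m(t)>
W = (U.conj().T @ V).T        # W[m, n] = <n(0)|U^dag|m(t)>, a unitary matrix

def M(p):
    # M_{m,n} = sqrt(p_m / p_n) <n(0)|U^dag|m(t)>  (phases omitted)
    s = np.sqrt(np.asarray(p, dtype=float))
    return (s[:, None] / s[None, :]) * W

M_even = M([1/3, 1/3, 1/3])     # equal branch probabilities
M_uneven = M([0.7, 0.2, 0.1])   # uneven branch probabilities

assert np.allclose(M_even @ M_even.conj().T, np.eye(N))        # unitary
assert not np.allclose(M_uneven @ M_uneven.conj().T, np.eye(N))  # not unitary
```

For equal weights the prefactor matrix collapses to all ones and $M$ reduces to the unitary $U^{\prime}$ of the main text; for generic uneven weights the deviation of $MM^{\dagger}$ from the identity is of order one, in line with Eq.~\eqref{counter1}.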
Generally, and by dropping the superscript in the environmental states, $\bigotimes^{N_{\mc{E}}}_{l=1}|\mc{E}^{l}_n (0) \rangle \equiv |\mc{E}_n (0) \rangle$, we have \begin{equation} MM^{\dagger}= \sum_{i,j,k} M_{i,k} M^{*}_{j,k} |\mc{E}_i(0)\rangle \langle \mc{E}_j(0)|. \end{equation} From the expression of the elements of $M$ (cf.~\myeqref{map1}), and by adopting the notation $| \phi_{n} \rangle \equiv U(t) |n(0) \rangle$, we get \begin{equation} MM^{\dagger}= \sum_{i,j,k} \frac{\sqrt{p_ip_j}}{p_k} \langle \phi_k | \psi_i \rangle \langle \psi_j | \phi_k \rangle|\mc{E}_i(0)\rangle \langle \mc{E}_j(0)|, \end{equation} which implies \begin{equation} MM^{\dagger}= \mathbb{I}+\sum_{i,j} \sqrt{\frac{p_j}{p_i}} \langle \psi_j | D_{(i)}|\psi_i \rangle|\mc{E}_i(0)\rangle \langle \mc{E}_j(0)|, \label{counter1} \end{equation} such that the matrix \begin{equation} D_{(i)} = \sum_k \frac{p_i}{p_k} |\phi_k\rangle \langle \phi_k |-\mathbb{I} \end{equation} is diagonal in the basis spanned by the orthonormal vectors $\{|\phi_k\rangle\}_k$. This matrix is generally (for any choice of $H_{0}(t)$ and initial state of $\mc{S}$) different from the null matrix for non-equal branch probabilities. A similar decomposition can be made for the matrix $M^{\dagger}M$, such that \begin{equation} M^{\dagger}M= \mathbb{I}+\sum_{i,j} \sqrt{\frac{p_i}{p_j}} \langle \phi_j | \mc{D}_{(i)}|\phi_i \rangle|\mc{E}_i(0)\rangle \langle \mc{E}_j(0)|, \label{counter2} \end{equation} where \begin{equation} \mc{D}_{(i)} = \sum_k \frac{p_k}{p_i} |\psi_k\rangle \langle \psi_k |-\mathbb{I}.
\end{equation} In conclusion, for branching states with non-equal probabilities there is no unitary map that guarantees that the system evolves through the adiabatic manifold at all times and for any arbitrary driving $H_{0}(t)$. Hence we can realize the EASTA technique only for a system maximally entangled with its environment (cf.~\myeqref{bran1} with $\sqrt{p_n}=1/\sqrt{N}$ for all $n \in \llbracket 0, \ N-1 \rrbracket$), or in the general case (non-equal branch probabilities) when we can access an extended Hilbert space. \funding{S.D. acknowledges support from the U.S. National Science Foundation under Grant No. DMR-2010127. This research was supported by grant number FQXi-RFP-1808 from the Foundational Questions Institute and Fetzer Franklin Fund, a donor advised fund of Silicon Valley Community Foundation (SD).} \acknowledgments{We would like to thank Wojciech H. Zurek for many years of mentorship and his unwavering patience and willingness to teach us how to think about the mysteries of the quantum universe. Enlightening discussions with Agniva Roychowdhury are gratefully acknowledged.} \conflictsofinterest{The authors declare no conflict of interest.} \end{document}
\begin{document} \title[Compressible NSF flows at steady-state]{Compressible Navier--Stokes--Fourier flows at steady-state} \author{Luisa Consiglieri} \thanks{Dedicated to my coauthor and beloved father Victor Consiglieri.} \address{Luisa Consiglieri, Independent Researcher Professor, European Union} \urladdr{\href{http://sites.google.com/site/luisaconsiglieri}{http://sites.google.com/site/luisaconsiglieri}} \begin{abstract} Heat conducting compressible viscous flows are governed by the Navier--Stokes--Fourier (NSF) system. In this paper, we study the NSF system supplemented with the Newton law of cooling for the heat transfer at the boundary. On one part of the boundary, we consider the Navier slip boundary condition, while on the remaining part the inlet and outlet occur. The existence of a weak solution is proved via a new fixed point argument. With this new approach, the weak solvability is possible in Lipschitz domains, by resorting to \(L^q\)-Neumann problems with \(q>n\). Thus, standard existence results can be applied to auxiliary problems and the claim follows by compactness techniques. Quantitative estimates are established. \end{abstract} \keywords{Compressible Navier--Stokes--Fourier system; Navier slip boundary conditions; Newton law of cooling; inlet/outlet flows; Helmholtz decomposition.} \subjclass[2010]{Primary: 76N06, 80A19; Secondary: 35Q35, 35Q79, 35R05, 35B45.} \maketitle \section{Introduction} Heat conductive flows are described by a coupled system consisting of the equations of continuity, motion and energy. The study of compressible flows hinges on the ability to solve the continuity equation, which presents its own difficulties. We refer to \cite{bveiga87} for the existence of stationary solutions if the transport coefficients are, at least, of class \(W^{2,p}(\Omega)\) with \(p>n\). Several works deal with barotropic flows, where the pressure is a function of the density only.
To cover the physical point of view, namely, the adiabatic exponent \(\gamma=5/3\) for monoatomic gases or \(\gamma=7/5\) for diatomic gases at ordinary temperatures \SIrange{150}{600}{\kelvin}, the imposed assumption on the pressure has been studied as a function of the adiabatic exponent \(\gamma\). To deal with this, renormalized bounded energy weak solutions, in the context of the theory introduced by P.L. Lions \cite{plions}, are proved to exist for \(\gamma \geq 5/3\) if \(n=3\). Since then, the admissible adiabatic exponents have become more realistic. In \cite{frehse-w}, the renormalized bounded energy weak solutions are proved to exist under the assumption that the adiabatic exponent satisfies \(\gamma> 4/3\). We refer to \cite{brezina} for the existence of renormalized weak solutions for \(\gamma> (3+\sqrt{41})/8\), for flows powered by volume potential forces in a rectangular domain with periodic boundary conditions, and recently, to \cite{plotni-w} for \(\gamma>1\), in a bounded domain with no-slip boundary condition. For a general case, a fixed point for the Navier--Stokes system is obtained in \cite{valli} via the Schauder theorem, under smallness of the \(H^3\)-norm of the velocity field, provided the system has smooth coefficients. The higher order derivatives are essential in establishing the estimate of div\,\(\mathbf{u}\). We recall that a fluid flowing at low velocity is described by the Stokes equations and not by the Navier--Stokes equations. Nonisothermal steady-state studies are well known and there exists a vast literature under the Dirichlet condition, for instance, on optimal control of low Mach number flows \cite{imanu} and on uniqueness \cite{padula}, and the literature cited therein. Better regularity of solutions by introducing the effective viscous flux \(G=p-(2\mu +\lambda)\mbox{div}\,\mathbf{u}\) is only possible for constant viscosities \(\mu\) and \(\lambda\) (see \cite{frehse-w}, and the references therein).
With this assumption, the authors in \cite{mucha} prove the existence of weak solutions by replacing, in the NSF system, the energy equation by the total energy equation. This new system has the particularity that, upon adding the equations, the pressure and the dissipation disappear in the establishment of the crucial estimates. Here, we consider the transport coefficients as temperature and spatially dependent. The behavior of the transport coefficients does not allow standard techniques \cite{sarka2009} such as, for instance, the use of either the above \(G\) or the inverse of the Stokes operator. The inhomogeneous boundary value problems are, in contrast, less common. We refer to \cite{plotni} for the existence of continuous strong solutions to the NSF problem under the assumptions that the Reynolds number and the inverse viscosity ratio are small and the Mach number satisfies Ma\,\(\ll 1\). The study of the NSF system in which the source/sink is the heat transfer at the boundary, given by the Newton law of cooling, is applicable to physical situations arising in biomedical engineering (as, for instance, thermal ablation for the treatment of thyroid nodules \cite{chung,radz}) as well as in geological engineering (as, for instance, the natural gas flow in wells in the region where a single phase occurs). \textit{A priori} estimates are the core of a fixed point argument. However, they are usually deduced from the boundedness property of the operators. Then there exists a universal constant that is abstract, that is, it does not reflect the dependence on the data. To fill this gap, additional attention is paid to the determination of quantitative estimates in which the dependence on the data is explicit. The outline of this paper is as follows. The next section is concerned with the modeling of the problem under study and the description of the model itself.
Section \ref{smain} is devoted to the mathematical framework, the establishment of the data assumptions, and the statement of the main theorems. In Section \ref{strat}, we delineate the fixed point argument. The following sections (Sections \ref{sZO}, \ref{sdens} and \ref{sSOLA}) concentrate on the well-posedness of three auxiliary problems, namely a Dirichlet--Navier problem for the velocity field, an inlet/outlet problem for the density scalar and a Dirichlet--Robin problem for the temperature. The remaining sections (Sections \ref{smain1} and \ref{smain2}) are devoted to the proofs of the main theorems, respectively, Theorems \ref{main} and \ref{main2}. \section{Statement of the problem} \label{stt} Let \( \Omega\) be a bounded domain (connected open set) of \( \mathbb R^n\), \( (n= 2,3)\), with Lipschitz boundary. The boundary \(\partial\Omega\) consists of three pairwise disjoint relatively open \( (n-1)\)-dimensional submanifolds, \( \Gamma_\mathrm{in}\), \(\Gamma_\mathrm{out}\) and \( \Gamma\), with positive Lebesgue measures, which satisfy \[ \mathrm{cl}(\Gamma_\mathrm{in}) \cup \mathrm{cl}(\Gamma_\mathrm{out}) \cup \mathrm{cl}(\Gamma)=\partial\Omega, \] where cl stands for the set closure. The heat conducting fluid at steady-state is governed by the Navier--Stokes--Fourier equations \begin{align} \nabla\cdot(\rho\mathbf{u})&= 0\label{mass}\\ \rho (\mathbf{u}\cdot\nabla)\mathbf{u} -\nabla\cdot\sigma &=\rho \mathbf{g}\label{motion}\\ \label{heateqs} \rho \mathbf{u}\cdot\nabla e-\nabla\cdot (k(\theta)\nabla \theta) &=\sigma:D\mathbf{u} \mbox{ in } \Omega . \end{align} Here, the unknown functions are the density \(\rho\), the velocity field \(\mathbf{u}\), and the specific internal energy \( e\). We denote \( \zeta:\varsigma=\zeta_{ij}\varsigma_{ij}\), taking into account the convention of implicit summation over repeated indices. The gravitational force \(\mathbf{g}\) and the dissipation \( \sigma:D\mathbf{u}\) are negligible.
Notice that neglecting the external force fields does not imply that the fluid is at rest. Indeed, the fluid flow is driven both by inlet and outlet flows and by heat transfer on the boundary. In the case of ideal gases, the specific internal energy \( e\) is related to the absolute temperature \(\theta\) by the linear relationship \( e = c_v \theta,\) where \( c_v\) denotes the specific heat capacity of the fluid at constant volume. Thus, the energy equation \eqref{heateqs} can be written in terms of the temperature. Assuming that the thermal conductivity \( k\) is a function dependent on both the temperature and the space variable, the smoothness of the temperature depends on this coefficient. The Cauchy stress tensor \(\sigma\), which is temperature dependent, obeys the constitutive law \begin{equation}\label{diff} \sigma =-p \mathsf{I}+\mu(\theta)D\mathbf{u}+\lambda(\theta) {\rm tr}(D\mathbf{u}) \mathsf{I}, \quad{\rm tr}(D\mathbf{u})=\mathsf{I}:D\mathbf{u}= \nabla\cdot\mathbf{u}, \end{equation} where \(\mathsf{I}\) denotes the identity (\(n\times n\))-matrix, \( D=(\nabla+\nabla^T)/2\) the symmetric gradient, and \( \mu\) and \( \lambda\) are the viscosity coefficients, in accordance with the second law of thermodynamics, \begin{equation}\label{mu} \mu(\theta)>0,\quad \nu(\theta):=\lambda(\theta)+\mu(\theta)/n\geq 0, \end{equation} with \(\nu\) denoting the bulk (or volume) viscosity and \(\mu/2\) being the shear (or dynamic) viscosity. The pressure \( p\) in the case of ideal gases obeys the Boyle--Mariotte law \begin{equation}\label{boyle} p= R_\mathrm{specific}\rho\theta, \end{equation} where \( R_\mathrm{specific} =R/M\) is the specific gas constant, with \(R= \SI{8.314}{\joule\per\mole\per\kelvin}\) being the gas constant and \(M\) denoting the molar mass. To understand the range of values involved, we give some well-known values for dry air.
For air (assumed to be at the atmospheric pressure \(p=\SI{101.325}{\kilo\pascal}\)), the molar mass of dry air is \(M=\SI{28.96}{\kilogram\per\kilo\mole}\); at temperature \(\theta=\SI{298.15}{\kelvin}\) (\(=\SI{25}{\celsius}\)), the density is then \(\rho =\SI{1.184}{\kilogram\per\cubic\metre}\). Thus, we have \(R_\mathrm{specific}=\SI{287}{\joule\per\kilogram\per\kelvin}\). Dry air can be assumed to be diatomic, hence \(c_v =5R/2\). The dynamic viscosity is \(\mu/2=\SI{0.018}{\milli\pascal\second}\) and the bulk viscosity \(\nu = 0.8 \mu/2\) \cite{gu-ubachs}. Similar values are known for O\(_2\) (see Table 1). \begin{table}[h] \caption{Parameters at the atmospheric pressure \cite{kmn,lae}}\label{table1} \begin{tabular}{|c|c|c|c|c|} \hline \(\theta\) & \(\mu/2\) (Air) & \(\mu/2\) (O\(_2\)) & \(k\) (Air) & \(k\) (O\(_2\)) \\ \([\si{\kelvin}]\) & \( [10^{-5}\si{\pascal\second}]\) & \( [10^{-5}\si{\pascal\second}]\) & \( [10^{-2}\si{\watt\per\metre\per\kelvin}]\)& \( [10^{-2}\si{\watt\per\metre\per\kelvin}]\) \\ \hline 100 &0.7 & 0.8 &0.9 & 1.0 \\ 200 &1.3 & 1.5& 1.8 & 1.8 \\ 300 &1.9& 2.0 &2.6 & 2.7 \\ 500 & 2.7 & 3.0 & 4.0 & 4.3 \\ 800 &3.7 & 4.2 & 5.7 & 6.6 \\ 1000 &4.3 & 4.9 & 6.8 & 8.0 \\ \hline \end{tabular} \end{table} The triple point of air is reached at a temperature of \SI{59.75}{\kelvin} (\(=-\SI{213.4}{\celsius}\)) and a corresponding pressure (whose value varies from author to author depending on the assumed air composition). Thus, a minimum temperature \(\theta_0\) is admissible. The values of the velocity, however, range from flows with Reynolds number Re\,\(\ll 1\), in which case the flow is described by the Stokes equations, up to flows in the turbulent regime Re\,\(\geq 10^6\). This means Re\,\(> 6.5\times 10^4 v L\), with \(v\) standing for an average velocity and \(L\) for the maximum length of the cross-section of the domain, under the above conditions. We notice that, in this work, only the specific heat capacity is assumed to be constant.
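The dry-air figures quoted above are easy to cross-check from the Boyle--Mariotte law; a quick numerical sketch (the values and variable names are taken from the text, the arithmetic is ours):

```python
# Cross-check of the dry-air values quoted in the text (SI units)
R = 8.314          # gas constant [J/(mol K)]
M_air = 28.96e-3   # molar mass of dry air [kg/mol]
p = 101325.0       # atmospheric pressure [Pa]
theta = 298.15     # temperature [K]

R_spec = R / M_air            # specific gas constant [J/(kg K)], ~287
rho = p / (R_spec * theta)    # Boyle--Mariotte law p = R_spec * rho * theta, ~1.184

mu_dyn = 1.8e-5               # dynamic viscosity of air near 300 K [Pa s]
re_coeff = rho / mu_dyn       # Re = rho * v * L / mu_dyn = re_coeff * v * L

print(round(R_spec), round(rho, 3), re_coeff)
```

The last quantity recovers the coefficient in the bound Re\,\(> 6.5\times 10^4 v L\): \(\rho/\mu_{\mathrm{dyn}} \approx 6.6\times 10^{4}\,\si{\second\per\square\metre}\).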
This assumption is essential to keep the thermal conductivity dependent on the space variable, while replacing the specific internal energy by the temperature as the unknown to seek. We leave all the remaining coefficients dependent on the temperature (see, for instance, Table 1) and on the space variable. On the Dirichlet boundary \(\Gamma_{D}= \mathrm{int}(\mathrm{cl}(\Gamma_\mathrm{in})\cup \mathrm{cl}(\Gamma_\mathrm{out}))\), we assume the inhomogeneous Dirichlet boundary conditions \begin{equation}\label{bdD} \rho=\rho_\infty \quad\mbox{and}\quad \mathbf{u}=\mathbf{u}_{D}. \end{equation} These represent both the inflow (\(u_\mathrm{in}:=\mathbf{u}_{D}\cdot\mathbf{n} <0\)) and the outflow (\(u_\mathrm{out}:=\mathbf{u}_{D}\cdot\mathbf{n} >0\)). On the remaining boundary \(\Gamma\), the fluid does not penetrate the solid wall, and it obeys the Navier slip boundary condition \begin{equation}\label{ud1} u_N:=\mathbf{u}\cdot\mathbf{n}=0,\qquad \tau_T=-\gamma (\theta)\mathbf{u}_T, \end{equation} where \( \mathbf{n}\) stands for the unit outward vector to \( \Gamma\), \(u_N,\mathbf{u}_T\) are the normal and tangential components of the velocity vector, respectively, \(\tau_T=\tau\cdot \mathbf{n}-\tau_N\mathbf{n}\) and \(\tau_N =( \tau\cdot \mathbf{n})\cdot\mathbf{n}\) are the tangential and normal components of the deviatoric stress tensor \(\tau=\sigma +p\mathsf{I}\), respectively, and \(\gamma\) denotes the friction coefficient. For the heat transfer conditions, it is admissible to assume a prescribed temperature at the inlet, that is, we consider the Dirichlet condition \begin{equation} \theta=\theta_\mathrm{in}\ \mbox{ on }\Gamma_\mathrm{in}.\label{tin} \end{equation} For the sake of simplicity, we assume \(\theta_\mathrm{in}\) to be a positive constant.
Alternatively, we might assume that \(\theta_\mathrm{in}\) may be extended to a function \(\tilde\theta_\mathrm{in}\in H^1(\Omega).\) On the boundary \(\Gamma_N=\partial\Omega\setminus \mathrm{cl}(\Gamma_\mathrm{in})\), we assume the Newton law of cooling \begin{equation}\label{newton} k(\theta)\nabla \theta\cdot\mathbf{n}+h_c (\theta) (\theta-\theta_\mathrm{e})=0, \end{equation} where \( h_c\) denotes the heat transfer coefficient and \( \theta_\mathrm{e}\) represents a given (possibly nonconstant) external temperature. This condition is mathematically known as the Robin condition. The heat source/sink is completely driven from the boundary, and we denote \[ \theta_0 =\left\lbrace \begin{array}{ll} \theta_\mathrm{in} & \mbox{on }\Gamma_\mathrm{in}\\ \theta_\mathrm{e} = & \left\lbrace \begin{array}{l} \theta_\mathrm{w} \mbox{ on }\Gamma\\ \theta_\mathrm{out} \mbox{ on }\Gamma_\mathrm{out}. \end{array} \right. \end{array} \right. \] \section{Main Results} \label{smain} We assume that \( \Omega \subset\mathbb{R}^n\) is a bounded domain with boundary \( \partial\Omega\in C^{0,1}\). The standard notation of Lebesgue and Sobolev spaces is used. Let us define the Hilbert spaces \begin{align*} {H}_{\mathrm{in}}^{1}(\Omega)&:=\{ v\in {H}^{1}(\Omega):\ v=0\mbox{ on }\Gamma_\mathrm{in}\};\\ \mathbf{V}&:=\{\mathbf{v}\in \mathbf{H}^{1}(\Omega):\ \mathbf{v}=\mathbf{0}\mbox{ on }\Gamma_D,\ \mathbf{v}\cdot\mathbf{n}=0\mbox{ on }\Gamma\}, \end{align*} endowed with the norms, respectively, \begin{align*} \|v\|_{1,2,\Omega}&=\left( \|\nabla v\|_{2,\Omega}^2+ \|v\|_{2,\Gamma}^2 \right)^{1/2} ;\\ \|\mathbf{v}\|_\mathbf{V}&= \left(\| D\mathbf{v}\|_{2,\Omega}^2+ \|\mathbf{v}\|_{2,\Gamma}^2 \right)^{1/2} .
\end{align*} The meaning of the condition \(\mathbf{v}\cdot\mathbf{n}=0\) on \(\Gamma\) should be understood as \[ \langle \mathbf{v}\cdot\mathbf{n} ,v \rangle_{\Gamma}=0,\quad\forall v\in { H}^{1/2}_{00}(\Gamma)= \{v\in H^{1/2}(\partial \Omega):\ v=0\quad\mbox{on } \Gamma_D \}, \] where the symbol \(\langle\cdot,\cdot\rangle_\Gamma\) stands for the duality pairing \(\langle\cdot,\cdot\rangle_{Y'\times Y}\), where \(Y={ H}^{1/2}_{00}(\Gamma)\). \begin{definition}[NSF problem]\label{mainbj} We say that the triplet \((\rho,\mathbf{u}, \theta)\) is a weak solution to the NSF problem if it satisfies the integral identities \begin{align}\label{rhow} \int_\Omega \rho \mathbf{u}\cdot\nabla v\dif{x}= \int_{\Gamma_D}\rho_\infty\mathbf{u}_D\cdot\mathbf{n} v \dif{s} ,\quad\forall v\in W^{1,q'}(\Omega); \\ \int_{\Omega}\rho (\mathbf{u}\cdot\nabla )\mathbf{u}\cdot\mathbf{v}\dif{x} +\int_{\Omega}\mu(\theta)D\mathbf{u}:D\mathbf{v} \dif{x} +\int_{\Omega} \lambda(\theta)\nabla\cdot\mathbf{u}\nabla\cdot\mathbf{v} \dif{x} +\nonumber\\ +\int_{\Gamma}\gamma(\theta)\mathbf{u}_T\cdot \mathbf{v}_T\dif{s} =\int_{\Omega} p\nabla \cdot\mathbf{v}\dif{x} ,\quad\forall \mathbf{v}\in \mathbf{V}; \qquad \label{motionw}\\ c_v \int_{\Omega} \rho \mathbf{u}\cdot\nabla \theta v \dif{x} + \int_\Omega k(\theta)\nabla \theta\cdot\nabla v \dif{x} +\int_{\Gamma_N} h_c(\theta) \theta v\dif{s} = \nonumber\\ = \int_{\Gamma_N} h(\theta) v \dif{s} ,\quad\forall v\in H^{1}_\mathrm{in}(\Omega),\label{heatw} \end{align} subject to \eqref{boyle}, \eqref{bdD} and \eqref{tin}. Here, \(q'\) stands for the conjugate exponent of \(q\), \textit{i.e.} \(1/q'+1/q=1\), and \(h=h_c\theta_\mathrm{e}\). \end{definition} \begin{remark} The variational formulations \eqref{rhow}-\eqref{heatw} are standardly derived from the NSF system \eqref{mass}-\eqref{heateqs} by the Green formula. 
We point out that the general formula \[ \langle \rho (\mathbf{u}\cdot\nabla) \mathbf{u} , \mathbf{v}\rangle = \langle \tau_T,\mathbf{v} \rangle_\Gamma -\langle p , \mathbf{v}\cdot \mathbf{n}\rangle_\Gamma -\int_{\Omega}\sigma :D\mathbf{v}\dif{x} \] holds for any \( \mathbf{v}\in \mathbf{V}\), provided that \(\nabla\cdot\sigma \in \mathbf{V}'\) \cite{lap2011}. \end{remark} The following assumptions on the physical parameters appearing in the equations are made: \begin{description} \item[(H1)] The viscosities \(\mu\) and \(\lambda\) are Carath\'eodory functions from \(\Omega\times\mathbb{R}\) into \(\mathbb{R}\) such that \begin{align}\label{mu1} \exists \mu_\# >0: &\ \mu(x,e) \geq \mu_\# ; \\ \label{mu2} \exists \mu^\# >0: &\ \mu(x,e) \leq \mu^\# ; \\ \exists \lambda^\# >0: &\ |\lambda(x,e)| \leq \lambda^\# , \label{nu3} \end{align} for a.e. \(x\in\Omega\) and for all \( e\in\mathbb{R}\). \item[(H2)] The thermal conductivity \(k\) is a Carath\'eodory function from \(\Omega\times\mathbb{R}\) into \(\mathbb{R}\) such that \begin{equation}\label{defchi} \exists k^\#, k_\# >0 : \quad k_\#\leq k(x,e)\leq k^\#, \end{equation} for a.e. \(x\in\Omega\) and for all \( e\in\mathbb{R}\). \item[(H3)] The friction coefficient \(\gamma\) is a continuous function from \(\mathbb{R}\) into \(\mathbb{R}\) such that \begin{equation}\label{g1} \exists \gamma^\#, \gamma_\# >0 : \quad \gamma_\#\leq \gamma(e)\leq \gamma^\#,\quad \forall e\in \mathbb{R}. \end{equation} \item[(H4)] The heat transfer coefficient \(h_c\) is a Carath\'eodory function from \(\Gamma_N\times\mathbb{R}\) into \(\mathbb{R}\) such that \begin{align}\label{defhm} \exists h^\# >0: &\ h_c(e)\leq h^\#\mbox{ a.e. on }\Gamma_N;\\ \exists h_\#>0:& \ h_c(e)\geq h_\# \mbox{ a.e. on }\Gamma; \label{h1}\\ &\ h_c(e)\geq 0\mbox{ a.e. on }\Gamma_\mathrm{out},\label{hout} \end{align} for all \(e\in \mathbb{R}\). Moreover, \(h=\theta_\mathrm{e}h_c\) with \(\theta_\mathrm{e}\in L^\infty (\Gamma_N)\).
\item[(H5)] The boundary term \(\rho_\infty \mathbf{u}_{D}\cdot\mathbf{n}\in L^q (\Gamma_{D})\), for some \(q>n\), satisfies the compatibility condition \begin{equation}\label{cc} \int_{\Gamma_{D}}\rho_\infty \mathbf{u}_{D}\cdot\mathbf{n} \dif{s}=0. \end{equation} There exists \(\widetilde{\mathbf{u}}_D\in \mathbf{H}^{1}(\Omega)\) whose trace equals \( \mathbf{u}_{D}\) on \(\Gamma_D\) and whose normal trace vanishes on \(\Gamma\). Indeed, the trace operator has a continuous right inverse and, in particular, it is surjective from \( \mathbf{W}^{1,q}(\Omega)\) onto \( \mathbf{W}^{1-1/q,q}(\partial\Omega)\). \end{description} \begin{remark}\label{rsob} We denote by \(p^*=pn/(n-p)\) the critical Sobolev exponent related to the embedding \(W^{1,p}(\Omega)\hookrightarrow L^{p^*}(\Omega)\), if \(p<n\). For the sake of simplicity, we also denote by \(p^*\) any real value greater than one, if \(p=n\). The Rellich--Kondrachov compact embedding holds for any exponent between \(1\) and the critical Sobolev exponent \(p^*\). Notice that the Morrey embedding \(W^{1,q}(\Omega)\hookrightarrow C^{0,1-n/q}(\Omega)\) holds for \(q>n\). \end{remark} \begin{remark}\label{meaningful} All terms in the integral identities \eqref{rhow}-\eqref{heatw} are meaningful. The nonlinear terms, namely the convective term in \eqref{motionw} and the advective term in \eqref{heatw}, are justified in Lemma \ref{lemb}, with \(\mathbf{m}=\rho\mathbf{u}\in \mathbf{L}^{q}(\Omega)\), \(q>n\), \textit{i.e.} \(\rho\in L^r(\Omega)\) and \(\mathbf{u}\in \mathbf{H}^1(\Omega)\), with \begin{equation}\label{defr} \frac{1}{q} = \frac{1}{r}+\frac{1}{p} \quad\mbox{ with } r = \frac{2p}{p-4} >\frac{2n}{4-n} ,\ 4<p<2^*\ (n=2,3). \end{equation} Observe that \(r> 2n/(4-n)\) follows from \(1/n> 1/q = 1/r +1/p >1/r+1/2^*\), while \(r= 2p/(p-4)\) follows from \( 1/r+1/p=1/q\) together with \(1/q+1/p=1/2\).
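These exponent relations can be double-checked with exact rational arithmetic; a minimal sketch in which the sample values \(n=3\) and \(p=5\) are merely illustrative:

```python
from fractions import Fraction

# Sample values: n = 3, so 2* = 2n/(n-2) = 6 and p must satisfy 4 < p < 6.
n = 3
p = Fraction(5)

q = 1 / (Fraction(1, 2) - 1 / p)  # from 1/q + 1/p = 1/2
r = 2 * p / (p - 4)               # the claimed value r = 2p/(p-4)

assert 1 / q == 1 / r + 1 / p         # 1/q = 1/r + 1/p holds exactly
assert q > n                          # hence m = rho*u lies in L^q with q > n
assert r > Fraction(2 * n, 4 - n)     # and r > 2n/(4-n)
```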
\end{remark} Let us state our first main theorem, where the density function is only defined a.e. in \(\Omega\). \begin{theorem}\label{main} Let the assumptions (H1)-(H5) be fulfilled. For any \(M\in\mathbb{N}\), there exists a triplet \((\rho,\mathbf{u}, \theta)\) such that \begin{itemize} \item \(\rho\) is a measurable function satisfying \(\rho\mathbf{u}\in \mathbf{L}^q(\Omega)\), with \(n<q<n+\varepsilon\), for some \(\varepsilon\) depending on \(\Omega\); \item \(\mathbf{u}\in \widetilde{\mathbf{u}}_D +\mathbf{V}\); \item \(\theta\in (\theta_\mathrm{in}+ H^1_\mathrm{in}(\Omega) ) \cap L^\infty (\Omega) \), \end{itemize} which is a weak solution to the NSF problem, with \eqref{boyle} replaced by \begin{equation}\label{paux} p_M=T_M( \rho) R_\mathrm{specific}\theta. \end{equation} Here, \(T_M\) stands for the truncation operator, \textit{i.e.} \(T_M(z)=z\) for \(0\leq z\leq M\) and \(T_M\equiv M\) otherwise. \end{theorem} Let us state our second main theorem, where the density function is assumed to have \(L^r\)-regularity, for some \(r>2n/(4-n)\) (\(n=2,3\)). \begin{theorem}\label{main2} Under the conditions of Theorem \ref{main}, the NSF problem admits at least one solution in \( {L}^r(\Omega) \times (\widetilde{\mathbf{u}}_D + \mathbf{V})\times H^1(\Omega)\), provided that \(\rho\in L^r (\Omega) \) satisfies \begin{equation}\label{arho} \|\rho\|_{r,\Omega}\leq \mathcal{R}, \end{equation} for some positive constant \(\mathcal{R}\) independent of \(M\), and \(r\) verifying \eqref{defr}. Moreover, the following quantitative estimates \begin{align}\label{u1} \|\mathbf{u}-\widetilde{\mathbf{u}}_D\|_{\mathbf{V}} \leq& \max\left\lbrace \frac{n}{(n-1)\mu_\#}, \frac{1}{\gamma_\#}\right\rbrace \left( R_4 |\Omega|^{1/2-1/r} + R_1 \|\widetilde{\mathbf{u}}_D\|_{p,\Omega} \right.\nonumber \\ & \left.
+ \mu^\#\|D\widetilde{\mathbf{u}}_D\|_{2,\Omega} +\lambda^\#\|\nabla\cdot\widetilde{\mathbf{u}}_D\|_{2,\Omega}\right) \nonumber \\ & +\sqrt{\frac{\gamma^\#} {\min\left\lbrace\frac{n-1}{n}\mu_\#,\gamma_\#\right\rbrace} }\|\widetilde{\mathbf{u}}_D\|_{2,\Gamma}; \\ \|\theta\|_{1,2,\Omega} \leq& R_2\label{t1} \end{align} hold, where \(R_1\), \(R_2\) and \(R_4\) are defined in \eqref{cotamm}, \eqref{cotaeg} and \eqref{r4}, respectively. \end{theorem} \begin{remark} The quantitative estimate \eqref{u1} may be simplified if, for instance, in the assumption (H5) we assume the existence of \(\widetilde{\mathbf{u}}_D\in \mathbf{H}^{1}(\Omega)\) having the trace \[ \widetilde{\mathbf{u}}_D=\left\{ \begin{array}{ll} \mathbf{u}_{D}& \mbox{ on }\Gamma_D\\ \mathbf{0}&\mbox{ on }\Gamma \end{array}\right. \] instead. \end{remark} \section{Strategy} \label{strat} Our strategy rests on the observation that the velocity field is not the appropriate quantity for the fixed point argument: the velocity field is not directly measurable, whereas the linear momentum is easier to determine physically. For fixed \(q>n\) and \(r> 2\), we define the closed set \begin{equation} K_{q,r} :=\lbrace \mathbf{m}\in \mathbf{L}^q(\Omega): \ \eqref{defm}\mbox{ holds}\rbrace \times H^{1}(\Omega) \times L^r (\Omega) \label{defv} \end{equation} in the reflexive Banach space \(\mathbf{L}^q(\Omega)\times H^{1}(\Omega)\times L^r (\Omega)\).
For fixed \(M\in\mathbb{N}\), we build an operator \(\mathcal{T}\) \begin{align*} \mathcal{T}: (\mathbf{m},\xi,\pi)\in K_{q,r} &\mapsto \mathbf{w}=\mathbf{w}(\mathbf{m},\xi,\pi)\quad\mbox{(Dirichlet--Navier problem)} \\ &\mapsto \mathbf{u}=\mathbf{w}+ \widetilde{\mathbf{u}}_D\\ &\mapsto \rho =\rho (\mathbf{u}) \quad\mbox{(Inlet/outlet problem)} \\ &\mapsto \theta = \theta (\mathbf{m},\xi )\quad (\mbox{Dirichlet--Robin problem)} \\ &\mapsto (\rho\mathbf{u}, \theta, p_M) \end{align*} with \(\mathbf{m}\in\mathbf{L}^q(\Omega)\) satisfying \begin{equation}\label{defm} \int_\Omega \mathbf{m}\cdot \nabla v \dif{x}= \int_{\Gamma_D}\rho_\infty\mathbf{u}_D\cdot\mathbf{n} v \dif{s} ,\quad\forall v\in W^{1,q'}(\Omega). \end{equation} Here, we consider three auxiliary problems. \begin{description} \item[\textbf{(Dirichlet--Navier problem)}] The auxiliary velocity \(\mathbf{w}\in\mathbf{V}\) is the unique solution to the Dirichlet--Navier problem defined by \begin{align}\label{fluidw} -\int_\Omega \mathbf{m}\otimes \mathbf{w}:\nabla\mathbf{v} \dif{x}+ \int_\Omega \mu(\xi)D\mathbf{w}:D\mathbf{v}\dif{x}+ \int_\Omega\lambda(\xi)\nabla\cdot\mathbf{w}\nabla\cdot\mathbf{v} \dif{x} \nonumber \\ +\int_{\Gamma}\gamma(\xi)\mathbf{w}_T\cdot \mathbf{v}_T\dif{s} = \int_\Omega \pi\nabla\cdot\mathbf{v} \dif{x}+ \mathcal{G}(\mathbf{m},\xi , \widetilde{\mathbf{u}}_D, \mathbf{v}) ,\ \forall \mathbf{v}\in \mathbf{V}, \end{align} with \begin{align*} \mathcal{G}(\mathbf{m},\xi, \widetilde{\mathbf{u}}_D, \mathbf{v}):=& \int_\Omega \mathbf{m}\otimes \widetilde{\mathbf{u}}_D:\nabla\mathbf{v} \dif{x} - \int_{\Gamma}\gamma(\xi)\widetilde{\mathbf{u}}_D\cdot \mathbf{v}_T\dif{s}\\ & - \int_\Omega \left( \mu(\xi)D\widetilde{\mathbf{u}}_D:D\mathbf{v}+\lambda(\xi)\nabla\cdot\widetilde{\mathbf{u}}_D\nabla\cdot\mathbf{v} \right)\dif{x}. 
\end{align*} \item[\textbf{(Inlet/outlet problem)}] The auxiliary density \(\rho\) is the unique solution to the inlet/outlet problem defined by \begin{equation}\label{syst2} \int_\Omega \rho \mathbf{u}\cdot \nabla v \dif{x}= \int_{\Gamma_D}\rho_\infty\mathbf{u}_D\cdot\mathbf{n} v \dif{s},\quad\forall v\in W^{1,q'}(\Omega). \end{equation} \item[\textbf{(Dirichlet--Robin problem)}] The auxiliary temperature \(\theta-\theta_\mathrm{in} \in H^{1}_\mathrm{in}(\Omega)\) is the unique weak solution to the Dirichlet--Robin problem defined by \begin{align} c_v\int_\Omega \mathbf{m}\cdot\nabla\theta v \dif{x} +&\int_\Omega k(\xi)\nabla \theta \cdot \nabla v \dif{x} \nonumber \\ &+\int_{\Gamma_N} h_c(\xi) \theta v \dif{s} = \int_{\Gamma_N} h(\xi) v \dif{s},\quad\forall v\in H^{1}_\mathrm{in}(\Omega).\label{newtonpb} \end{align} \end{description} Finally, the auxiliary pressure is given by \eqref{paux}. Let us establish some properties of the linearized convective and advective terms, which are the key points of this paper. \begin{lemma}\label{lemb} Let \( \Omega \subset\mathbb{R}^n\) be a bounded Lipschitz domain. For each \( \mathbf{m}\in \mathbf{L}^{q}(\Omega)\), \(q>n\), which verifies \eqref{defm}, the following functionals are well defined and continuous: \begin{description} \item[(convective)] \( \mathbf{u}\in \mathbf{H}^1(\Omega) \mapsto \langle B\mathbf{u},\mathbf{v}\rangle:= \int_\Omega \mathbf{m}\otimes \mathbf{u}:\nabla\mathbf{v} \dif{x},\) for all \(\mathbf{v}\in \mathbf{V}\). Moreover, \(B\) is skew-symmetric in the sense that \begin{equation}\label{skew} \langle B\mathbf{u}, \mathbf{v}\rangle =-\int_\Omega (\mathbf{m}\cdot \nabla) \mathbf{u}\cdot\mathbf{v} \dif{x}\quad\forall \mathbf{u}\in \mathbf{H}^1(\Omega) \ \forall \mathbf{v}\in \mathbf{V} \end{equation} and, in particular, \( \langle B\mathbf{v},\mathbf{v}\rangle=0\) holds for all \(\mathbf{v}\in\mathbf{V}\).
\item[(advective)] \( e\in H^1(\Omega)\mapsto\int_\Omega \mathbf{m}\cdot\nabla e v \dif{x},\) for all \(v \in H^1(\Omega).\) Assuming (H5), the relation \begin{equation}\label{advt} \int_\Omega \mathbf{m}\cdot\nabla e v \dif{x} = \int_{\Gamma_D}\rho_\infty\mathbf{u}_D\cdot\mathbf{n} ev \dif{s} -\int_\Omega \mathbf{m}\cdot\nabla v e \dif{x} \end{equation} holds for any \(e ,v\in H^1(\Omega).\) \end{description} \end{lemma} \begin{proof} The wellposedness of each functional is a consequence of the H\"older inequality, with exponents \(q\), \(p\) and \(2\) such that \[ \frac{1}{2^*}<\frac{1}{p}=\frac{1}{2}-\frac{1}{q} \Leftrightarrow q>n, \] and the Rellich--Kondrachov embedding \( H^1(\Omega)\hookrightarrow\hookrightarrow L^{p}(\Omega) \) (cf. Remark \ref{rsob}). The skew-symmetry of \(B\), \eqref{skew}, follows from the relation \[ \langle B\mathbf{u}, \mathbf{v}\rangle +\int_\Omega (\mathbf{m}\cdot \nabla) \mathbf{u}\cdot\mathbf{v} \dif{x} =\int_\Omega \mathbf{m}\cdot \nabla ( \mathbf{u}\cdot\mathbf{v}) \dif{x} = \int_{\Gamma_D}\rho_\infty\mathbf{u}_D\cdot\mathbf{n} ( \mathbf{u}\cdot\mathbf{v}) \dif{s} \] by using \eqref{defm} with \( \mathbf{u}\cdot\mathbf{v}\in W^{1,q'}(\Omega)\), \(1<q'<n/(n-1)\). In \eqref{advt}, the wellposedness of the boundary integral follows from the H\"older inequality, with exponents \[ \frac{1}{t} + 2 \frac{n-2}{2(n-1)}= 1 \Leftrightarrow t = \left\{ \begin{array}{ll} n-1&\mbox{if } n=3,4\\ \mbox{arbitrary} &\mbox{if } n=2 \end{array}\right. \] and considering the embedding \(H^1(\Omega) \hookrightarrow L^{2(n-1)/(n-2)}(\partial\Omega)\) and \(\rho_\infty\mathbf{u}_D\cdot\mathbf{n} \in L^q(\Gamma_D)\), where \(q>t\). \end{proof} \section{Wellposedness of the Dirichlet--Navier problem} \label{sZO} The following properties are well known in fluid mechanics theory. However, the quantitative estimate is essential in the fixed point argument, and we will make it precise.
\begin{proposition}\label{pcotau} Let the assumptions (H1), (H3) and (H5) be fulfilled. For each (\(\mathbf{m},\xi,\pi)\in K_{q,2} \), with \(q> n\), let \(\mathbf{w}\in \mathbf{V}\) be a solution to the problem \eqref{fluidw}. Then, the following quantitative estimate \begin{align}\label{cotau} \min\left\lbrace\frac{n-1}{n}\mu_\#,\gamma_\#\right\rbrace \|\mathbf{w}\|_{\mathbf{V}}^2\leq \frac{n}{(n-1)\mu_\#} \left( \|\pi\|_{2,\Omega}+\|\mathbf{m}\|_{q,\Omega} \|\widetilde{\mathbf{u}}_D\|_{p,\Omega} \right.\nonumber \\ \left. + \mu^\#\|D\widetilde{\mathbf{u}}_D\|_{2,\Omega} +\lambda^\#\|\nabla\cdot\widetilde{\mathbf{u}}_D\|_{2,\Omega}\right)^2 +\gamma^\#\|\widetilde{\mathbf{u}}_D\|_{2,\Gamma}^2 \end{align} holds, with \(1\leq p< 2^*\) being such that \(1/q+1/p=1/2\). \end{proposition} \begin{proof} This proof is standard, but we sketch it because of its quantitative expression. Choose \(\mathbf{v}=\mathbf{w}\in \mathbf{V}\) as a test function in \eqref{fluidw}, and use Lemma \ref{lemb} to find \begin{align*} \int_\Omega \mu(\xi)|D\mathbf{w}|^2\dif{x}+ \int_\Omega\lambda(\xi)|\nabla\cdot\mathbf{w}|^2 \dif{x} +\int_{\Gamma}\gamma(\xi)|\mathbf{w}_T|^2\dif{s} \nonumber \\ \leq \left(\|\pi\|_{2,\Omega}+\|\mathbf{m}\|_{q,\Omega} \|\widetilde{\mathbf{u}}_D\|_{p,\Omega}+ 2\| \mu(\xi)D\widetilde{\mathbf{u}}_D\|_{2,\Omega} +\|\lambda(\xi)\nabla\cdot\widetilde{\mathbf{u}}_D\|_{2,\Omega} \right)\|\nabla\mathbf{w}\|_{2,\Omega} \\ +\frac{1}{2}\|\sqrt{\gamma(\xi)}\widetilde{\mathbf{u}}_D\|_{2,\Gamma}^2 +\frac{1}{2}\|\sqrt{\gamma(\xi)} \mathbf{w}_T\|_{2,\Gamma}^2 \end{align*} taking the H\"older and Young inequalities into account and using the fact that \( (\nabla\cdot\mathbf{w})^2\leq|\nabla\mathbf{w}|^2\).
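The algebraic fact behind the next display is the pointwise trace inequality \(({\rm tr}\, S)^2\leq n|S|^2\) for symmetric matrices \(S\) (applied to \(S=D\mathbf{w}\), so that \(|D\mathbf{w}|^2-\frac{1}{n}|\nabla\cdot\mathbf{w}|^2\geq 0\)); a randomized numerical sketch, with illustrative matrices:

```python
import random

random.seed(0)

def trace_inequality_holds(n):
    # Draw a random symmetric n x n matrix S, playing the role of Dw.
    S = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i, n):
            S[i][j] = S[j][i] = random.uniform(-1.0, 1.0)
    tr = sum(S[i][i] for i in range(n))                             # tr S = div w
    frob2 = sum(S[i][j] ** 2 for i in range(n) for j in range(n))   # |S|^2
    # Cauchy--Schwarz on the diagonal gives (tr S)^2 <= n |S|^2,
    # equivalently |S|^2 - (1/n)(tr S)^2 >= 0.
    return tr ** 2 <= n * frob2 + 1e-12

for n in (2, 3):
    assert all(trace_inequality_holds(n) for _ in range(1000))
```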
Since \( n\lambda(\xi)+\mu(\xi)\geq 0\), applying \eqref{mu1} and \eqref{g1} we have \begin{align*} \frac{n-1}{n}\mu_\#\|D\mathbf{w}\|_{2,\Omega}^2+\frac{\gamma_\#}{2}\| \mathbf{w}_T\|_{2,\Gamma}^2 \\ \leq\int_\Omega \mu(\xi)\left(|D\mathbf{w}|^2-\frac{1}{n}|\nabla\cdot\mathbf{w}|^2\right) \dif{x} +\frac{1}{2}\int_{\Gamma}\gamma(\xi)|\mathbf{w}_T|^2\dif{s} \nonumber \\ \leq \frac{n-1}{2n}\mu_\# \|D\mathbf{w}\|_{2,\Omega}^2+ \frac{n}{2(n-1)\mu_\#}\left( \|\pi\|_{2,\Omega}+\|\mathbf{m}\|_{q,\Omega} \|\widetilde{\mathbf{u}}_D\|_{p,\Omega} \right.\\ \left.+ \mu^\#\|D\widetilde{\mathbf{u}}_D\|_{2,\Omega} +\lambda^\#\|\nabla\cdot\widetilde{\mathbf{u}}_D\|_{2,\Omega}\right)^2 +\frac{\gamma^\#}{2}\|\widetilde{\mathbf{u}}_D\|_{2,\Gamma}^2. \end{align*} Then, readjusting the above estimate, we conclude the proof of Proposition \ref{pcotau}. \end{proof} The following proposition asserts the existence and uniqueness of the auxiliary velocity field. \begin{proposition}\label{pau} Let the assumptions (H1), (H3) and (H5) be fulfilled. For each (\(\mathbf{m},\xi,\pi)\in K_{q,2} \), with \(q> n\), the problem \eqref{fluidw} admits a unique solution \(\mathbf{w}\in\mathbf{V}\). \end{proposition} \begin{proof} Let \(a\) be the (non symmetric) bilinear form on \(\mathbf{V}\times \mathbf{V}\), which is associated to the energy functional \( J:\mathbf{V}\rightarrow\mathbb R\), defined by \begin{align*} J(\mathbf{v})=& \int_\Omega \left(\mu(\xi)\frac{|D\mathbf{v}|^2}{2}+\lambda(\xi)\frac{|\nabla\cdot\mathbf{v}|^2}{2} -\pi\nabla\cdot\mathbf{v} \right)\dif{x} +\\ &+\int_{\Gamma}\gamma(\xi)\frac{|\mathbf{v}_T|^2}{2}\dif{s} - \mathcal{G}(\mathbf{m},\xi, \widetilde{\mathbf{u}}_D,\mathbf{v}).
\end{align*} The Fr\'echet derivative \(J'\) is well defined \cite[Appendix C]{stru} as the derivative of a Nemytskii operator \( F:\Omega\times \mathbb R^n\times\mathbb M^{n\times n}_{\rm sym}\rightarrow\mathbb R\), and the form \(a\) is the sum of the nonsymmetric and symmetric parts \[ a(\mathbf{w},\mathbf{v})=-\int_\Omega \mathbf{m}\otimes \mathbf{w}:\nabla\mathbf{v} \dif{x}+\langle J'(\mathbf{w}),\mathbf{v}\rangle. \] Then, the existence and uniqueness of the solution are a consequence of the Lax--Milgram Lemma \cite{lions}. \end{proof} We conclude this section by proving the continuous dependence. \begin{proposition}[Continuous dependence]\label{pum} Let \(\lbrace (\mathbf{m}_m,\xi_m,\pi_m)\rbrace_{m\in\mathbb{N}}\) be a sequence weakly convergent in \( K_{q,r} \), for some \(q> n\) and \(r>2\). Then, the corresponding solutions \(\mathbf{w}_m=\mathbf{w}(\mathbf{m}_m,\xi_m,\pi_m)\) to the problem \eqref{fluidw}\(_m\), for each \(m\in\mathbb{N}\), weakly converge to \(\mathbf{w}=\mathbf{w}(\mathbf{m},\xi,\pi)\) in \(\mathbf{V}\), which is the solution to the problem \eqref{fluidw} corresponding to the weak limit (\(\mathbf{m},\xi,\pi\)). \end{proposition} \begin{proof} Let us take the sequences \begin{align*} \mathbf{m}_m\rightharpoonup\mathbf{m}&\mbox{ in } \mathbf{L}^q(\Omega) \mbox{ for some } q> n;\\ \xi_m\rightharpoonup\xi&\mbox{ in } H^{1}(\Omega);\\ \pi_m\rightharpoonup\pi&\mbox{ in }L^r(\Omega)\mbox{ for some }r> 2. \end{align*} The Rellich--Kondrachov embeddings \( H^{1}(\Omega)\hookrightarrow\hookrightarrow L^2(\Omega)\) and \( H^{1}(\Omega)\hookrightarrow\hookrightarrow L^2(\Gamma)\) yield \( \xi_m\rightarrow\xi\) in \( L^2(\Omega)\) and \(L^2(\Gamma)\).
The continuity of the Nemytskii operators, \(\mu\), \(\lambda\) and \(\gamma\), and the Lebesgue dominated convergence theorem imply \begin{align*} \mu(\xi_m)D\mathbf{v}\rightharpoonup\mu(\xi)D\mathbf{v} &\mbox{ in } [L^2(\Omega)]^{n\times n}; \\ \lambda(\xi_m)\nabla\cdot \mathbf{v}\rightharpoonup\lambda(\xi)\nabla\cdot \mathbf{v} &\mbox{ in } L^2(\Omega); \\ \gamma(\xi_m)\mathbf{v}_T\rightharpoonup\gamma(\xi)\mathbf{v}_T &\mbox{ in } \mathbf{L}^2(\Gamma). \end{align*} Let \(\mathbf{w}_m=\mathbf{w}(\mathbf{m}_m,\xi_m,\pi_m)\) be the corresponding solution to the problem \eqref{fluidw}\(_m\), for each \(m\in\mathbb{N}\). The uniform estimate \eqref{cotau} allows us to extract at least one subsequence, still denoted by \(\mathbf{w}_m\), weakly convergent to some \( \mathbf{w}\in \mathbf{V}\). Consequently, we have \begin{align*} \nabla\mathbf{w}_m\rightharpoonup \nabla\mathbf{w}&\mbox{ in } [L^2(\Omega)]^{n\times n};\\ \mathbf{w}_m\rightarrow \mathbf{w} &\mbox{ in }\mathbf{L}^p(\Omega)\mbox{ and in }\mathbf{L}^2(\partial\Omega), \end{align*} for \(p<2^*\). The above convergences allow us to pass to the limit as \( m\) tends to infinity in \eqref{fluidw}\(_m\), concluding that \( \mathbf{w}\) satisfies the system \eqref{fluidw}. \end{proof} \section{Existence and uniqueness of density solution} \label{sdens} In this section, our objective is not to apply the artificial viscosity technique, which approximates the continuity equation by an elliptic equation through a vanishing viscosity (also known as elliptic approximation), as is usual. Instead, our argument proceeds in the spirit of the Helmholtz decomposition \[ \mathbf{a}=\mathbf{a}_{\bm{\omega}}+\mathbf{a}_\psi , \] where \begin{itemize} \item \(\mathbf{a}_{\bm{\omega}}=\nabla\times {\bm{\omega}}\) stands for the solenoidal (divergence-free) component, \textit{i.e.} it satisfies \(\nabla\cdot \mathbf{a}_{\bm{\omega}} = 0\) in \(\Omega\).
\item \(\mathbf{a}_\psi =\nabla\psi\) stands for the irrotational (curl-free) component, \textit{i.e.} it satisfies \(\nabla\times \mathbf{a}_\psi =\mathbf{0}\) in \(\Omega\). \end{itemize} We refer to \cite{galdi} for the weak \(L^q\)-solution to the Dirichlet--Laplace problem being motivated by the Weyl decomposition. On the one hand, we consider the Neumann--Laplace problem \begin{align} \Delta\psi &= 0\qquad \mbox{ in }\Omega\label{NL1}\\ \nabla\psi\cdot\mathbf{n} &= \rho_\infty \mathbf{u}_{D}\cdot\mathbf{n} \mbox{ on }\Gamma_{D}\\ \nabla\psi\cdot\mathbf{n} &= 0\qquad \mbox{ on }\Gamma, \label{NL3} \end{align} with the zero mean value datum \(g:= \rho_\infty \mathbf{u}_{D}\cdot\mathbf{n}\chi_{\Gamma_{D}} \in L^q(\partial\Omega)\hookrightarrow \left( B^{q'}_{1/q}(\partial\Omega)\right)'\), where the Besov space under the usual notation \( B^{1/q}_{q',q'}\) is in fact the Slobodetskii space \( W^{1/q,q'}\) for \(0<1/q<1/n\) and \(1<q'<n/(n-1)\). Thanks to potential theory \cite{fabes,geng-shen}, the problem \eqref{NL1}-\eqref{NL3}, with the zero mean value datum \(g\in B^q_{-1/q}(\partial\Omega)=\left( B^{q'}_{1/q}(\partial\Omega)\right)'\), admits a unique solution \(\psi\in W^{1,q}(\Omega)\), represented by \[ \psi(x) =\int_{\partial\Omega}G_N(x,y)g(y)\dif{s}_y + \overline{\psi} \] where \(G_N(x,y)=E(x-y)+\phi(y)\) is the Green function of the second kind, \textit{i.e.} it solves the Neumann--Poisson boundary value problem \(\Delta G_N(x,\cdot)=\delta_x +1/|\Omega|\) in \(\Omega\) and \(\nabla G_N\cdot\mathbf{n}=0\) on \(\partial\Omega\) \cite{ijpde14}. Here, \(\delta_x\) is the Dirac delta function at the point \(x\). The function \(E\), the fundamental solution for \(\Delta\) in \(\mathbb{R}^n\) with pole at the origin, is given by \begin{align*} E(x)&= \frac{1}{2\pi}\ln|x|\ \mbox{ if } n=2;\\ E(x)&= -\frac{1}{4\pi}\frac{1}{|x|}\quad\mbox{ if } n=3.
\end{align*} The function \(\phi\) solves \(\Delta\phi =1/|\Omega|\) in \(\Omega\) and \(\nabla (E(x-\cdot)+\phi)\cdot\mathbf{n}=0\) on \(\partial\Omega\). Thanks to the compatibility condition \eqref{cc}, the solution of the Neumann problem is unique up to the additive constant \(\overline{\psi}= |\Omega|^{-1}\int_\Omega \psi \dif{x}\), where \( |\Omega| \triangleq \mathrm{meas} (\Omega)\). The solution \(\psi\), the so-called scalar potential, satisfies the estimate \begin{equation}\label{cotapsi} \|\nabla\psi\|_{q,\Omega}\leq C_q \| \rho_\infty \mathbf{u}_{D}\cdot\mathbf{n} \|_{q,\partial\Omega}, \end{equation} for any \(1< q <\infty\) if \(\Omega\) is of class \(C^1\), and for the sharp ranges \(4/3-\varepsilon < q < 4+\varepsilon\) if \(n=2\) or \(3/2-\varepsilon< q <3+\varepsilon\) if \(n=3\) and \(\Omega\) is bounded and Lipschitz, with \(\varepsilon>0\) depending on \(\Omega\) and \(C_q>0\) depending on \(n\), \(q\), and the Lipschitz character of \(\Omega\) \cite{geng-shen,geng-shen10,mitrea}. Some specific results are known for convex domains for \(1<q<\infty\) if \(n=2\) and for \(1<q<4\) if \(n=3\) \cite{dong,geng-shen10}. On the other hand, we find the corresponding vector field that makes the decomposition possible. First, let us establish the existence of our auxiliary density function in the two-dimensional space. \begin{proposition}[\(n=2\)]\label{p2D} Let \((\mathbf{m},\xi,\pi)\in \mathbf{L}^q(\Omega)\times H^{1}(\Omega)\times L^r (\Omega) \) and let \(\mathbf{u}\in \mathbf{H}^1(\Omega)\) be the corresponding solution to \eqref{fluidw} obtained in Section \ref{sZO}. Then, there exists a unique function \(\rho\) verifying \begin{equation}\label{drho2} \rho\mathbf{u}=\nabla \psi +\nabla\times\bm{\omega}\mbox{ a.e. in } \Omega,\end{equation} with \(\psi\) being the unique solution to \eqref{NL1}-\eqref{NL3} and for some \( \bm{\omega}\) in \(\mathbf{W}^{1,q}(\Omega)\). In particular, \(\rho\) is non-negative. Moreover, \eqref{syst2} holds.
\end{proposition} \begin{proof} Let \(\psi\in W^{1,q}(\Omega)\) be the unique solution to \eqref{NL1}-\eqref{NL3}, which verifies \eqref{cotapsi}, for \(1< q <2+\varepsilon\), with \(\varepsilon>0\) depending on \(\Omega\) and \(C_q>0\) depending on \(n\), \(q\), and the Lipschitz character of \(\Omega\). By the potential theory \cite{amrouche,mitrea}, it suffices to seek a unique non-negative density function that satisfies \begin{equation}\label{weyl} \rho \mathbf{u} = \nabla\psi +\mathbf{z}, \end{equation} with \(\mathbf{z}\) belonging to \[ \mathbf{H}_q :=\{\mathbf{v}\in \mathbf{L}^{q}(\Omega):\ \nabla\cdot\mathbf{v}=0\mbox{ in }\Omega,\ \mathbf{v}\cdot\mathbf{n}=0\mbox{ on }\partial\Omega\}. \] Taking in \eqref{weyl} the inner product with \(\mathbf{u}\), we obtain \begin{equation}\label{rho2d} \rho = \frac{1}{|\mathbf{u}|^2} (\nabla\psi+\mathbf{z} )\cdot\mathbf{u} \quad \mbox{ in } \Omega[|\mathbf{u}|\not=0], \end{equation} otherwise, we define \(\rho= \rho_0\) in \( \Omega[|\mathbf{u}|=0]\) (cf. Remark \ref{rho0}). Hereafter, the set \(A[\mathcal{S}]\) means \(\{x\in A:\, \mathcal{S}(x)\}\), with \(\mathcal{S}\) denoting a sentence to be satisfied pointwise (a.e.) in \(A\), which may represent either \(\Omega\) or \(\Gamma\). Taking in \eqref{weyl} the inner product with \(\mathbf{u}_\bot=(-u_2,u_1)\in \mathbf{L}^{p}(\Omega)\), for any \(1<p<\infty\), we find the relation \[ \mathbf{z}\cdot \mathbf{u}_\bot= -(u_1\partial_2\psi - u_2\partial_1\psi)= -\nabla \times \bm{\psi}\cdot\mathbf{u}, \] taking \(\bm{\psi}=(0,0,\psi)\) into account.
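Note that the planar field \(\nabla\times\bm{\psi}=(\partial_2\psi,-\partial_1\psi)\) is automatically divergence-free; a finite-difference sanity check with the illustrative potential \(\psi(x,y)=x^2y\):

```python
# psi(x, y) = x^2 y, so grad psi = (2xy, x^2) and
# curl (0, 0, psi) = (d psi/dy, -d psi/dx) = (x^2, -2xy),
# whose divergence is d/dx (x^2) + d/dy (-2xy) = 2x - 2x = 0.
def psi(x, y):
    return x * x * y

h = 1e-5  # central-difference step

def d1(f, x, y):
    return (f(x + h, y) - f(x - h, y)) / (2 * h)

def d2(f, x, y):
    return (f(x, y + h) - f(x, y - h)) / (2 * h)

def curl_psi(x, y):
    return (d2(psi, x, y), -d1(psi, x, y))

def div_curl_psi(x, y):
    c1 = lambda a, b: curl_psi(a, b)[0]
    c2 = lambda a, b: curl_psi(a, b)[1]
    return d1(c1, x, y) + d2(c2, x, y)

for (x, y) in [(0.3, 0.7), (1.2, -0.4), (-0.8, 0.5)]:
    assert abs(div_curl_psi(x, y)) < 1e-5
```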
We emphasize that the absolute value of the real number \(\nabla \times \bm{\psi}\cdot\mathbf{u}/|\mathbf{u}|=\nabla\psi\cdot \mathbf{u}_\bot/|\mathbf{u}|\in L^q(\Omega)\) is the magnitude of the vector rejection of \(\nabla\psi\) from \(\mathbf{u}\), which is defined as \begin{equation}\label{rej} \nabla\psi -\frac{\nabla\psi\cdot \mathbf{u}_\bot}{|\mathbf{u}|^2} \mathbf{u}_\bot =\left(\nabla\psi\cdot \frac{\mathbf{u}}{|\mathbf{u}|}\right) \frac{\mathbf{u}}{|\mathbf{u}|}, \end{equation} where the right-hand side stands for the vector projection of \(\nabla\psi\) onto \(\mathbf{u}\). In the following, for the sake of simplicity, we assume that \(\nabla\psi \cdot\mathbf{u}_\bot \geq 0\), otherwise we may similarly argue by redefining \(\mathbf{u}_\bot=(u_2,-u_1)\). Let us consider the following two cases. \textsc{Case 1.} If \(\nabla \times \bm{\psi}\cdot\mathbf{u}=0\), it means that \[ \nabla\psi = \pm |\nabla\psi|\frac{\mathbf{u}}{|\mathbf{u}|}. \] If \(\angle(\nabla\psi, \mathbf{u})=0\), we may take \(\mathbf{z}= \mathbf{0} \) in \eqref{weyl}. Then, we obtain \(\rho =|\nabla\psi|/ |\mathbf{u}|\). In particular, \(\rho \) is unique and non-negative. Notice that the case \(\angle(\nabla\psi, \mathbf{u})=\pi\) does not occur a.e. in \(\Omega\), because it leads to the contradiction \[ - |(\nabla\psi)^*| /|\mathbf{u}_D| \mathbf{u}_D\cdot \mathbf{n} =\rho_\infty \mathbf{u}_D\cdot \mathbf{n} \mbox{ on }\Gamma_D, \] where \((\nabla\psi)^*\) denotes the nontangential maximal function of \(\nabla\psi\) \cite{geng-shen,geng-shen10}. If \(\angle(\nabla\psi, \mathbf{u})=\pi\) in an open ball \(B\subset\subset \Omega\), we may take \(\mathbf{z}= -2\nabla\psi \) that fulfills \eqref{weyl} in \(B\). Then, we obtain \(\rho =|\nabla\psi|/ |\mathbf{u}|\), which is unique and non-negative. \textsc{Case 2.} If \(\nabla \times \bm{\psi}\cdot\mathbf{u}\not=0\), it means that \(\cos(\angle(\nabla\psi, \mathbf{u}))<1\).
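The identity \eqref{rej} is nothing but the expansion of \(\nabla\psi\) in the orthogonal frame \(\{\mathbf{u},\mathbf{u}_\bot\}\); a small numerical check, with illustrative vectors standing for \(\nabla\psi\) and \(\mathbf{u}\):

```python
import math

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1]

g = (1.5, -0.7)          # plays the role of grad psi
u = (2.0, 1.0)           # plays the role of the velocity u
u_perp = (-u[1], u[0])   # u_perp = (-u2, u1)
norm2 = dot(u, u)        # |u|^2

# left-hand side of (rej): remove the u_perp-component (the rejection) from grad psi
lhs = tuple(g[i] - dot(g, u_perp) / norm2 * u_perp[i] for i in range(2))
# right-hand side of (rej): the projection of grad psi onto u
rhs = tuple(dot(g, u) / norm2 * u[i] for i in range(2))

assert all(math.isclose(lhs[i], rhs[i]) for i in range(2))
```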
Next, we will need three auxiliary functions, denoted by \(\mathbf{a}\), \(\varphi\) and \(\mathbf{F}\). In accordance with the latter case, we define the vector \(\mathbf{a}\in \mathbf{L}^q(\Omega)\) as follows \begin{equation}\label{refle} \nabla\psi +\mathbf{a}=|\nabla\psi|\frac{\mathbf{u}}{|\mathbf{u}|} :=\rho_1\mathbf{u}. \end{equation} This definition captures both cases (a) \(\nabla\psi \cdot\mathbf{u}>0\) and (b) \(\nabla\psi \cdot\mathbf{u}\leq 0\) (see Fig. \ref{plane2D}). \begin{figure} \caption{Graphical representation of \(\nabla\psi\) and \(\mathbf{a}\).} \label{plane2D} \end{figure} By the \(L^q\)-Helmholtz--Weyl decomposition (see e.g. \cite{geng-shen10,mitrea}), the vector \(\mathbf{a}\) may be decomposed as \(\mathbf{a}=\nabla\varphi+ \mathbf{F}\). Here, \(\varphi\in W^{1,q}(\Omega)\) is the unique (up to additive constants) solution to the variational problem \begin{equation}\label{NPoisson} \int_\Omega\nabla\varphi\cdot\nabla v\dif{x} = \int _\Omega\mathbf{a}\cdot\nabla v \dif{x}, \quad \forall v\in W^{1,q'}(\Omega). \end{equation} The existence and uniqueness of this scalar potential in the quotient space \(W^{1,q}(\Omega)/\mathbb{R}\) is guaranteed by the range values of \(q\) for Lipschitz domains. Setting \[ \mathbf{F}= \mathbf{a}- \nabla\varphi, \] the field \(\mathbf{F}\) is unique and belongs to \(\mathbf{H}_q\). Moreover, we have \begin{equation}\label{cotas} \max\lbrace \|\nabla\varphi\|_{q,\Omega},\|\mathbf{F}\|_{q,\Omega}\rbrace \leq C_q\|\mathbf{a}\|_{q,\Omega}, \end{equation} where \(C_q\) depends only on \(q\) and the Lipschitz character of \(\Omega\).
Taking \eqref{refle} and the decomposition \eqref{rej} for \(\varphi\) into account, we have \begin{align} \label{relation} \rho_1 \mathbf{u} -\left( \nabla\varphi\cdot \frac{\mathbf{u}}{|\mathbf{u}|}\right) \frac{ \mathbf{u}}{|\mathbf{u}|} &=\nabla\psi +\mathbf{F} +\left( \nabla\varphi\cdot \frac{\mathbf{u}_\bot}{|\mathbf{u}|}\right) \frac{ \mathbf{u}_\bot}{|\mathbf{u}|}\\ \rho_2&= \left\{\begin{array}{ll} \rho_1-\nabla\varphi\cdot\mathbf{u}/|\mathbf{u}|^2& \mbox{ if } \rho_1-\nabla\varphi \cdot\mathbf{u}/|\mathbf{u}|^2>0\\ 0 &\mbox{ otherwise} \end{array}\right.\nonumber \end{align} It remains to evaluate the last term of the above relation, namely \[ \mathbf{f} = \left( \nabla\varphi\cdot \frac{\mathbf{u}_\bot}{|\mathbf{u}|}\right) \frac{ \mathbf{u}_\bot}{|\mathbf{u}|}. \] Equivalently, \begin{align*} f_t(t,n)&=0 \\ f_n(t,n) &=\nabla\varphi\cdot\mathbf{e}_n, \end{align*} by taking the change of coordinates \[ \left[ \begin{matrix} \mathbf{e}_t\\ \mathbf{e}_n \end{matrix} \right]= \left[ \begin{matrix} u_1/|\mathbf{u}| & u_2/|\mathbf{u}|\\ - u_2/|\mathbf{u}| & u_1/|\mathbf{u}| \end{matrix} \right] \left[ \begin{matrix} \mathbf{e}_1\\ \mathbf{e}_2 \end{matrix} \right] \] into account. Hence, we define \(z_t\) and \(z_n\) such that \[ z_n = f_n \quad\mbox{ and }\quad z_t = -\int\partial_n z_n \dif{t} +\rho_3(n), \] with \(\rho_3\) being such that \(z_t<0\). By returning to the Cartesian coordinate system, the boundary condition \(\mathbf{z}\cdot\mathbf{n}=0\) guarantees the uniqueness of \(\rho_3\). Finally, we choose the vector \(\mathbf{z}\in\mathbf{H}_q\) such that \[ \mathbf{z} = \mathbf{F} + z_t\mathbf{e}_t+z_n\mathbf{e}_n , \] by recalling the vector \(\mathbf{F}\) from \eqref{relation}. From \eqref{drho2}, the function \(\rho\) verifies the variational formulation \eqref{syst2}, which concludes the proof of Proposition \ref{p2D}. \end{proof} \begin{remark}\label{rho0} We denote by \(\rho_0\) the constant density at STP (standard temperature and pressure).
Notice that the velocity may vanish, the so-called stagnation. The lack of an upper bound for the density is related to the fact that \(|\mathbf{u}|\rightarrow 0\) forces \( \rho\rightarrow\infty\). However, neither the velocity function nor the density function is continuous. This suggests that some upper bound may be possible, but it is still an open problem. \end{remark} Next, we study the three-dimensional case. \begin{proposition}[\(n=3\)]\label{p3D} Let \((\mathbf{m},\xi,\pi)\in \mathbf{L}^q(\Omega)\times H^{1}(\Omega)\times L^r (\Omega) \) and let \(\mathbf{u}\) be the corresponding solution to \eqref{fluidw} obtained in Section \ref{sZO}. Then, there exists a unique function \(\rho\) verifying \begin{equation}\label{drho3} \rho\mathbf{u}=\nabla \psi +\nabla\times\bm{\omega} \mbox{ a.e. in } \Omega, \end{equation} with \(\psi\) being the unique solution to \eqref{NL1}-\eqref{NL3} and for some \( \bm{\omega}\) in \(\mathbf{W}^{1,q}(\Omega)\). In particular, \(\rho\) is non-negative. Moreover, \eqref{syst2} holds. \end{proposition} \begin{proof} Let \(\psi\in W^{1,q}(\Omega)\) be the unique solution to \eqref{NL1}-\eqref{NL3}, which verifies \eqref{cotapsi}, for \(3/2-\varepsilon< q <3+\varepsilon\), with \(\varepsilon>0\) depending on \(\Omega\) and \(C_q>0\) depending on \(n\), \(q\), and the Lipschitz character of \(\Omega\). In the three-dimensional space, arguing as in the two-dimensional case (Proposition \ref{p2D}), we seek \begin{equation}\label{defrho} \rho=(\nabla \psi +\nabla\times\bm{\omega})\cdot \mathbf{u}/|\mathbf{u}|^2\quad \mbox{ in } \Omega[|\mathbf{u}|\not=0] \end{equation} otherwise, we define \(\rho= \rho_0\) on \( \Omega[|\mathbf{u}|=0]\) (cf. Remark \ref{rho0}). 
As the objective is to find a scalar function, the argument of the proof of Proposition \ref{p2D} may be repeated in the plane formed by the vectors \(\mathbf{u}\) and \(\nabla\psi\), \textit{i.e.} we consider the local coordinate system \((\mathbf{e}_t,\mathbf{e}_n,\mathbf{0})\), where \(\mathbf{e}_t = \mathbf{u}/|\mathbf{u}|\) and \(\mathbf{e}_n = \nabla\psi\times\mathbf{u}/|\nabla\psi\times\mathbf{u}|\). Therefore, there exists a vector potential \(\bm{\omega}\) such that \(\nabla\times \bm{\omega} =\rho \mathbf{u}-\nabla\psi\), \textit{i.e.} \eqref{drho3}, which may be chosen uniquely \cite{amrouche}. \end{proof} Finally, we are in a position to determine the estimate for the linear momentum (cf. \eqref{cotapsi}). \begin{corollary}\label{crho} Let \(\Omega\) be Lipschitz. For \(n = 2,3\), let \(\rho\) be the unique function given in Propositions \ref{p2D} and \ref{p3D}. Then, the estimate \begin{equation}\label{cotamm} \| \rho\mathbf{u} \|_{q,\Omega}\leq 3 C_q \| \rho_\infty \mathbf{u}_{D}\cdot\mathbf{n} \|_{q,\partial\Omega}:=R_1 \end{equation} holds, for any \(n< q <n+\varepsilon\) and \(n=2,3\), with \(\varepsilon\) depending on \(\Omega\) and \(C_q\) depending on \(n\), \(q\), and the Lipschitz character of \(\Omega\). \end{corollary} \section{Wellposedness of the Dirichlet--Robin problem} \label{sSOLA} The existence of the solution \(\theta\in H^{1}(\Omega)\), which satisfies \eqref{tin}, to the problem \eqref{newtonpb} is stated in the following proposition. \begin{proposition}[Existence and uniqueness]\label{pae} Let the assumptions (H2) and (H4)-(H5) be fulfilled. For each \((\mathbf{m},\xi)\in \mathbf{L}^q(\Omega)\times H^{1}(\Omega)\), which verifies \eqref{defm}, the problem \eqref{newtonpb} admits a unique solution \(\theta\in H^1(\Omega)\) such that \(\theta=\theta_\mathrm{in}\) on \(\Gamma_\mathrm{in}\). 
Moreover, the estimate \begin{equation}\label{cotaeg} \|\nabla \theta\|_{2,\Omega}^2+ \|\theta\|_{2,\Gamma}^2 \leq \frac{h^\#}{\min\lbrace 2k_\#, h_\#\rbrace } \|\theta_\mathrm{in} + \theta_\mathrm{e}\|_{2,\Gamma_N}^2 := R_2^2 \end{equation} holds. \end{proposition} \begin{proof} The existence and uniqueness of \( \theta=u+\theta_\mathrm{in}\), with \(u\in H^1_\mathrm{in}(\Omega)\), solving \eqref{newtonpb} is standard by the Lax--Milgram Lemma. The problem \eqref{newtonpb} reads \[ a(u,v)= \int_{\Gamma_N} h_c(\xi)(\theta_\mathrm{e}-\theta_\mathrm{in}) v \dif{s}, \quad\forall v\in H^{1}_\mathrm{in}(\Omega), \] where the continuous bilinear form \(a\) from \(H^{1}_\mathrm{in}(\Omega)\times H^{1}_\mathrm{in}(\Omega)\) into \(\mathbb{R}\), is defined by \[ a(u,v)=c_v\int_\Omega \mathbf{m}\cdot\nabla u v \dif{x} +\int_\Omega k(\xi)\nabla u \cdot \nabla v \dif{x} +\int_{\Gamma_N} h_c(\xi) u v \dif{s}. \] Moreover, using the assumptions \eqref{defchi} and \eqref{h1}-\eqref{hout}, the form \(a\) is coercive: \begin{align*} a( u,u)= c_v\int_\Omega \mathbf{m}\cdot\nabla(u ^2/2) \dif{x} +\int_\Omega k(\xi) |\nabla u |^2 \dif{x} +\int_{\Gamma_N} h_c(\xi) u^2 \dif{s} \nonumber \\ \geq \min\lbrace k_\#,h_\#\rbrace \left(\|\nabla u\|_{2,\Omega}^2+\|u\|_{2,\Gamma}^2\right), \end{align*} taking \(u^2\in W^{1,q'}(\Omega)\) into account, that is, \eqref{defm} reads \[ \int_\Omega \mathbf{m}\cdot\nabla (u^2/2) \dif{x}=\int_{\Gamma_\mathrm{out}}\rho_\infty\mathbf{u}_D\cdot \mathbf{n} u^2/2\dif{s}\geq 0. \] The estimate \eqref{cotaeg} follows by choosing \(v=\theta-\theta_\mathrm{in}\) as a test function in \eqref{newtonpb}, arguing as above, considering that \(\nabla u=\nabla\theta\) and \[ k_\#\|\nabla \theta\|_{2,\Omega}^2+\frac{1}{2}\|\sqrt{h_c(\xi)} \theta\|_{2,\Gamma_N}^2 \leq\frac{1}{2}\left( \|\sqrt{h_c(\xi)} (\theta_\mathrm{in} + \theta_\mathrm{e} ) \|_{2,\Gamma_N}^2\right) \] after routine computations. 
\end{proof} The following minimum-maximum principle is standard, although the argument differs in the treatment of the advective and boundary terms. For the reader's convenience, we provide the proof. \begin{proposition}[Minimum-maximum principle]\label{maxmin} Let \(\theta\in H^{1}(\Omega)\) be a solution to the problem \eqref{newtonpb}. Then, the lower and upper bounds \begin{equation}\label{tmax} \mathrm{ess}\inf_{\partial\Omega}\theta_0 \leq \theta \leq \mathrm{ess}\sup_{\partial\Omega}\theta_0\mbox{ a.e. in } \Omega \end{equation} hold. \end{proposition} \begin{proof} Let us define \(T_\mathrm{min}=\mathrm{ess}\inf\lbrace \theta_0(x):\, x\in\partial\Omega\rbrace\). Let us choose \(\phi(\theta)=(\theta-T_\mathrm{min})^-=\min\lbrace \theta-T_\mathrm{min},0\rbrace\in H^1_\mathrm{in}(\Omega) \) as a test function in \eqref{newtonpb}. Applying the assumptions \eqref{defchi} and \eqref{h1}-\eqref{hout}, we have \[ \int_\Omega \mathbf{m}\cdot\nabla\theta \phi(\theta) \dif{x}+ k_\#\|\nabla \theta\|_{2,\Omega[\theta<T_\mathrm{min}]}^2 + h_\#\| \theta-T_\mathrm{min}\|_{2,\Gamma[\theta<T_\mathrm{min}]}^2 \leq 0. \] Since the advective term verifies \begin{align*} \int_\Omega \mathbf{m}\cdot\nabla\theta \phi(\theta) \dif{x}&= \int_{\Omega[\theta<T_\mathrm{min}]}\mathbf{m}\cdot \nabla (\phi^2(\theta)/2) \dif{x} \\ &=\int_{\Gamma_D}\rho_\infty\mathbf{u}_D\cdot\mathbf{n} \phi^2(\theta)/2 \dif{s} = \int_{\Gamma_\mathrm{out}} \rho_\infty u_\mathrm{out} \phi^2(\theta)/2 \dif{s} \geq 0 , \end{align*} taking \eqref{defm} and next \eqref{h1}-\eqref{hout} into account, we deduce \begin{equation} \label{eqeq1} k_\#\|\nabla \phi(\theta)\|_{2,\Omega}^2 +h_\#\| \phi(\theta)\|_{2,\Gamma}^2 \leq 0. \end{equation} Then, we conclude that \(\phi(\theta)=0\) in \(\Omega\), which means that the lower bound is proved. 
The upper bound is analogously proved, by defining \(T_\mathrm{max}=\mathrm{ess}\sup\lbrace \theta_0(x):\, x\in\partial\Omega\rbrace\) and choosing \(\phi(\theta)=(\theta-T_\mathrm{max})^+=\max\lbrace \theta-T_\mathrm{max},0\rbrace\in H^1_\mathrm{in}(\Omega) \) as a test function in \eqref{newtonpb}. \end{proof} We conclude this section by proving the continuous dependence. \begin{proposition}[Continuous dependence]\label{pem} Let \(\lbrace (\mathbf{m}_m,\xi_m)\rbrace_{m\in\mathbb{N}}\) be a weakly convergent sequence in \(\mathbf{L}^q(\Omega)\times H^{1}(\Omega)\), for some \(q>n\). Then, the corresponding solutions \(\theta_m=\theta(\mathbf{m}_m,\xi_m)\in H^{1}(\Omega)\) to the problem \eqref{newtonpb}\(_m\), for each \(m\in\mathbb{N}\), weakly converge to \(\theta=\theta(\mathbf{m},\xi)\), which is the solution to the problem \eqref{newtonpb} corresponding to the weak limit (\(\mathbf{m},\xi\)). \end{proposition} \begin{proof} Let us take the sequences \begin{align*} \mathbf{m}_m\rightharpoonup\mathbf{m}&\mbox{ in } \mathbf{L}^q (\Omega);\\ \xi_m\rightharpoonup\xi&\mbox{ in } H^{1}(\Omega). \end{align*} The Rellich--Kondrachov embeddings \( H^{1}(\Omega)\hookrightarrow\hookrightarrow L^2(\Omega)\) and \( H^{1}(\Omega)\hookrightarrow\hookrightarrow L^2(\partial\Omega)\) yield \( \xi_m\rightarrow\xi\) in \( L^2(\Omega)\) and \(L^2(\partial\Omega)\). On the one hand, from \( \xi_m\rightarrow\xi\) in \( L^1(\Omega)\) and a.e. in \( \Omega\), and the assumption \eqref{defchi}, the continuity property of the Nemytskii operator associated to the leading coefficient \( k\) implies that \begin{align*} k(\cdot,\xi_m)\rightarrow k(\cdot,\xi) &\mbox{ a.e. in }\Omega;\\ k(\xi_m)\nabla v \rightarrow k(\xi)\nabla v &\mbox{ in } \mathbf{L}^2(\Omega). \end{align*} On the other hand, from \( \xi_m\rightarrow\xi\) in \( L^1(\partial\Omega)\) and a.e. 
on \( \partial\Omega\), and the assumptions \eqref{defhm}-\eqref{hout}, the continuity property of the Nemytskii operator associated to the boundary coefficient \( h_c\) implies that \begin{align*} h_c(\cdot,\xi_m)\rightarrow h_c(\cdot,\xi) &\mbox{ a.e. on }\Gamma_N;\\ h_c(\xi_m) v \rightarrow h_c(\xi) v &\mbox{ in }L^2(\Gamma_N). \end{align*} For each \(m\in\mathbb{N}\), let \(\theta_m=\theta(\mathbf{m}_m,\xi_m)\) be the corresponding solution to the problem \eqref{newtonpb}\(_m\). The uniform estimate \eqref{cotaeg} allows us to extract at least one subsequence, still denoted by \(\theta_m\), of the solutions \(\theta_m=\theta(\mathbf{m}_m,\xi_m)\) weakly convergent to some \( \theta\in H^{1}(\Omega)\). The above convergences are not sufficient for the passage to the limit, as \( m\) tends to infinity, in \eqref{newtonpb}\(_m\). It remains to pass the advective term to the limit. To this aim, we prove the following strong convergence \( \nabla \theta_m\rightarrow\nabla \theta \) in \( L^2 (\Omega)\). Arguing as in \cite{m3as2006}, we apply the assumption \eqref{defchi} and we decompose to obtain \[ k_\# \int_{\Omega}|\nabla(\theta_m -\theta)|^2\mathrm{d}x\leq \int_{\Omega} (k(\xi_m)\nabla\theta_m -k(\xi_m)\nabla\theta)\cdot\nabla(\theta_m -\theta)\dif{x} = \mathcal{I}_1 - \mathcal{I}_2, \] with \begin{align*} \mathcal{I}_1 &= \int_{\Omega} k(\xi_m)\nabla\theta_m\cdot\nabla (\theta_m -\theta)\dif{x}\\ \mathcal{I}_2 &= \int_{\Omega} k(\xi_m)\nabla\theta\cdot\nabla (\theta_m -\theta)\dif{x} \longrightarrow 0 \mbox{ as } m\rightarrow \infty. \end{align*} Next, to prove that \(\mathcal{I}_1\) also tends to zero, we take \(v= \theta_m-\theta \) as a test function in \eqref{newtonpb}\(_m\). 
Hence, we obtain \begin{align*} \int_{\Omega} \mathbf{m}_m \cdot \nabla\frac{(\theta_m -\theta)^2}{2} \dif{x}+ \mathcal{I}_1 = \int_{\Omega} \mathbf{m}_m\cdot \nabla\theta (\theta_m- \theta) \dif{x} & \\ + \int_{\partial\Omega} (h(\xi_m) - h_c(\xi_m)\theta_m) (\theta_m -\theta)\dif{s} \longrightarrow 0&\mbox{ as } m\rightarrow \infty, \end{align*} taking the Rellich--Kondrachov embeddings \( H^1(\Omega)\hookrightarrow\hookrightarrow L^p (\Omega)\), with \(p<2^*\), and \( H^1(\Omega)\hookrightarrow\hookrightarrow L^2 (\partial\Omega)\) into account for \(n=2,3\). Then, applying the relation \eqref{defm} to the left-hand side of the above equality, we find the claim, \textit{i.e.} the strong convergence. Then, the passage to the limit yields that \(\theta\) satisfies \eqref{newtonpb}, concluding Proposition \ref{pem}. \end{proof} \section{Existence of a fixed point to the problem ({\sc Proof of Theorem \ref{main}})} \label{smain1} We will apply the following Tychonoff extension of the Schauder fixed point theorem to the weak topology \cite[pp. 453-456 and 470]{dsch}. \begin{theorem}\label{fpt} Let \( K\) be a nonempty weakly sequentially compact convex subset of a locally convex linear topological vector space \( V\). Let \( \mathcal{T}:K\rightarrow K\) be a weakly sequentially continuous operator. Then \( \mathcal{T}\) has at least one fixed point. \end{theorem} Let \(V= \mathbf{L}^q(\Omega)\times H^{1}(\Omega)\times L^r (\Omega)\) and \(K_{q,r}\) be the nonempty convex set defined in \eqref{defv}. We define \( K = K_{q,r}\cap B\), where \(B\) is the closed (bounded) ball, with radii \(R_1,R_2,R_3>0\) defined in \eqref{cotamm}, \eqref{cotaeg} and \eqref{cotapm}, respectively. In the reflexive Banach space \(V\), the closed, convex and bounded set \(K\) is compact for the weak topology \(\sigma (V,V')\), \textit{i.e.} it is weakly sequentially compact. Let \(\mathcal{T}\) be the operator defined in Section \ref{strat}. The fixed point argument (cf. 
Theorem \ref{fpt}) guarantees the existence of the required solution, by proving the following two propositions, namely, Propositions \ref{pt1} and \ref{pt2}. \begin{proposition}\label{pt1} Let the assumptions (H1)-(H5) be fulfilled. Then, the operator \( \mathcal{T}\) is well defined and it maps \(K\) into itself. \end{proposition} \begin{proof} The well-definedness of \(\mathcal{T}\) is a consequence of Proposition \ref{pau}, Corollary \ref{crho}, and Proposition \ref{pae}. In order to prove that \( \mathcal{T}\) maps \(K\) into itself, let \( (\mathbf{m},\xi,\pi)\in K\) and \[ \mathcal{T}(\mathbf{m},\xi,\pi) = (\rho\mathbf{u}, \theta, p_M) . \] That is, we seek \(R_1,R_2,R_3>0\) such that \begin{align*} \|\mathbf{m}\|_{q,\Omega}\leq R_1,\qquad \|\xi\|_{1,2,\Omega}\leq R_2, \qquad \|\pi\|_{r,\Omega}\leq R_3;\\ \|\rho\mathbf{u}\|_{q,\Omega}\leq R_1,\qquad \|\theta\|_{1,2,\Omega}\leq R_2, \qquad \|p_M\|_{r,\Omega}\leq R_3. \end{align*} Thanks to Corollary \ref{crho}, the quantitative estimate \eqref{cotamm} guarantees the existence of \(R_1\), for \(q\) depending on the smoothness of the domain \(\Omega\). Thanks to Proposition \ref{pae}, the quantitative estimate \eqref{cotaeg} guarantees the existence of \(R_2\). The existence of \(R_3\) is due to the definition \eqref{paux}; concretely, we have \begin{equation}\label{cotapm} \|p_M\|_{r,\Omega}\leq M |\Omega|^{1/r} R_\mathrm{specific} \mathrm{ess}\sup_{\partial\Omega}\theta_0 :=R_3, \end{equation} by considering the estimate \eqref{tmax}. \end{proof} \begin{proposition}\label{pt2} Let the assumptions (H1)-(H5) be fulfilled. Then, the operator \( \mathcal{T}\) is weakly sequentially continuous. 
\end{proposition} \begin{proof} Let \(\lbrace (\mathbf{m}_m,\xi_m,\pi_m)\rbrace_{m\in\mathbb{N}}\) be a sequence in \(V\) weakly convergent to (\(\mathbf{m},\xi,\pi) \), namely \begin{align*} \mathbf{m}_m\rightharpoonup\mathbf{m}&\mbox{ in }\mathbf{L}^q(\Omega);\\ \xi_m\rightharpoonup\xi&\mbox{ in } H^{1}(\Omega);\\ \pi_m\rightharpoonup\pi&\mbox{ in }L^r(\Omega). \end{align*} Thanks to Proposition \ref{pum}, the corresponding solutions \(\mathbf{w}_m=\mathbf{w}(\mathbf{m}_m,\xi_m,\pi_m) \in \mathbf{V} \) to the problem \eqref{fluidw}\(_m\), for each \(m\in\mathbb{N}\), weakly converge to the solution \(\mathbf{w}=\mathbf{w}(\mathbf{m},\xi,\pi)\) to the problem \eqref{fluidw}. Thus, we get \[ \mathbf{u}_m\rightharpoonup \mathbf{u}\mbox{ in }\mathbf{H}^1(\Omega) . \] Consequently, we get \(\mathbf{u}_m\rightarrow \mathbf{u}\) a.e. in \(\Omega\). Notice that \(\mathbf{u}\) satisfies \begin{align*} -\int_{\Omega} \mathbf{m}\times \mathbf{u}:\nabla \mathbf{v}\dif{x} +\int_{\Omega}\mu(\xi)D\mathbf{u}:D\mathbf{v} \dif{x} +\int_{\Omega} \lambda(\xi)\nabla\cdot\mathbf{u}\nabla\cdot\mathbf{v} \dif{x} \\ +\int_{\Gamma}\gamma(\xi)\mathbf{u}_T\cdot \mathbf{v}_T\dif{s} =\int_{\Omega} \pi\nabla \cdot\mathbf{v}\dif{x}, \end{align*} for all \(\mathbf{v}\in \mathbf{V}\), and the convective term verifies \eqref{skew}. Let \(\rho_m\) be the unique solution given at Propositions \ref{p2D} and \ref{p3D}, for \(n = 2,3\), respectively. Then, it follows that \(\rho_m\) converges a.e. to \(\rho\) in \(\Omega\). Thanks to Corollary \ref{crho}, we have \(\rho_m\mathbf{u}_m\rightharpoonup\rho\mathbf{u}\) in \(\mathbf{L}^q(\Omega)\), whose limit satisfies \eqref{syst2}. Thanks to Proposition \ref{pem}, the corresponding solutions \(\theta_m=\theta(\mathbf{m}_m,\xi_m)\) to the problem \eqref{newtonpb}\(_m\), for each \(m\in\mathbb{N}\), weakly converge to the solution \(\theta=\theta(\mathbf{m},\xi)\) in \(H^1(\Omega)\). 
Thus, \(\theta_m\) strongly converges to \(\theta\) in \(L^p(\Omega)\), for \(1<p<2^*\). Thanks to \eqref{cotapm} and the Lebesgue dominated convergence theorem, we have \[ T_M(\rho_m)\theta_m \rightharpoonup T_M(\rho)\theta \mbox{ in }L^r(\Omega). \] Then, the operator \( \mathcal{T}\) is weakly sequentially continuous, which finishes the proof of Proposition \ref{pt2}. \end{proof} Therefore, we are in a position to obtain the fixed point \[ (\mathbf{m},\xi,\pi) = (\rho\mathbf{u}, \theta, p_M), \] which is the required solution. Finally, the argument of Proposition \ref{maxmin}, with the auxiliary problem \eqref{newtonpb} being replaced by the variational problem \eqref{heatw}, can be applied to obtain the \(L^\infty\)-regularity of the temperature \(\theta\), and the proof of Theorem \ref{main} is concluded. \section{Passage to the limit as \(M\rightarrow\infty\) ({\sc Proof of Theorem \ref{main2}})} \label{smain2} The proof of the main result relies on compactness arguments. Under the assumption \eqref{arho}, the solution \((\rho_M,\mathbf{u}_M,\theta_M)\) determined in Theorem \ref{main} satisfies \begin{align} \|\rho_M\|_{r,\Omega}\leq \mathcal{R}; \\ \|\rho_M\mathbf{u}_M\|_{q,\Omega}\leq R_1; \label{r1}\\ \|\theta_M\|_{1,2}\leq R_2, \label{r3} \end{align} considering \(R_1\) and \(R_2\) from \eqref{cotamm} and \eqref{cotaeg}, respectively. Arguing as in \eqref{cotapm} with \(\mathcal{R}\) replacing \(M |\Omega|^{1/r} \), we get \begin{equation}\label{r4} \|p_M\|_{r,\Omega}\leq \mathcal{R} R_\mathrm{specific} \mathrm{ess}\sup_{\partial\Omega}\theta_0 :=R_4. \end{equation} Hence, we can extract a subsequence of \(p_M\), still labeled by \(p_M\), weakly convergent to \(p\) in \(L^r(\Omega)\). 
Considering \eqref{r1} and \eqref{r4}, the estimate \eqref{cotau} reads \begin{align} \min\left\lbrace\frac{n-1}{n}\mu_\#,\gamma_\#\right\rbrace \|\mathbf{w}\|_{\mathbf{V}}^2\leq \frac{n}{(n-1)\mu_\#} \left( R_4 |\Omega|^{1/2-1/r} + R_1 \|\widetilde{\mathbf{u}}_D\|_{p,\Omega} \right.\nonumber \\ \left. + \mu^\#\|D\widetilde{\mathbf{u}}_D\|_{2,\Omega} +\lambda^\#\|\nabla\cdot\widetilde{\mathbf{u}}_D\|_{2,\Omega}\right)^2 +\gamma^\#\|\widetilde{\mathbf{u}}_D\|_{2,\Gamma}^2.\label{cotau4} \end{align} Then, the convergences \begin{align*} \rho_M\rightharpoonup \rho &\mbox{ in } L^r (\Omega); \\ \mathbf{u}_M\rightharpoonup\mathbf{u}&\mbox{ in } \mathbf{H}^1(\Omega) ;\\ \theta_M\rightharpoonup\theta &\mbox{ in } H^{1}(\Omega), \end{align*} hold, as \(M\) tends to infinity. From the above convergences, we identify the limit \[p=\rho R_\mathrm{specific}\theta. \] The quantitative estimates \eqref{u1}-\eqref{t1} are established from the estimates \eqref{cotau4} and \eqref{cotaeg}, respectively. Therefore, the proof of Theorem \ref{main2} is concluded. \subsection*{Acknowledgements.} This preprint is a submitted manuscript. The Version of Record of this article is published in S\~ao Paulo Journal of Mathematical Sciences, and is available online at https://doi.org/10.1007/s40863-021-00262-z \end{document}
\begin{document} \title{A central series associated with $V(G)$} \begin{abstract} We generalize Lewis's result about a central series associated with the vanishing off subgroup. We write $V_{1}=V(G)$ for the vanishing off subgroup of $G$, and $V_{i}=[V_{i-1},G]$ for the terms in this central series. Lewis proved that there exists a positive integer $n$ such that if $V_{3} < G_{3}$, then $|G:V_{1}|=|G':V_{2}|^{2}=p^{2n}$. Let $D_{3}/V_{3} = C_{G/V_{3}}(G'/V_{3})$. He also showed that if $V_{3} < G_{3}$, then either $|G:D_{3}|=p^{n}$ or $D_{3}=V_{1}$. We show that if $V_{i} <G_{i}$ for $i\ge 4,$ where $G_{i}$ is the $i$-th term in the lower central series of $G$, then $|G_{i-1}:V_{i-1}|=|G:D_{3}|.$ \end{abstract} \section{Introduction} Throughout this paper, $G$ is a finite group. We write ${\rm{Irr}}(G)$ for the set of irreducible characters of $G$ and ${\rm{nl}}(G)=\{\chi\in {\rm{Irr}}(G) \mid \chi(1)\neq 1\}.$ Define the vanishing off subgroup of $G$, denoted by $V(G)$, by $V(G)=\langle g\in G \mid$ there exists $\chi \in$ nl$(G) $ such that $ \chi(g) \neq 0 \rangle.$ This subgroup was first introduced by Lewis in \cite{Lewis1}. 
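As an illustrative aside (not part of the paper's arguments), the definition of $V(G)$ can be checked by hand on the dihedral group of order $8$: its unique nonlinear irreducible character has degree $2$ and vanishes outside the center, so $V(G)$ is the center, of order $2$. The following sketch hard-codes the well-known character values on class representatives; the labels are hypothetical shorthand.

```python
# V(G) for the dihedral group of order 8, from its character table.
# The single nonlinear irreducible character chi has degree 2 and
# vanishes outside {1, r^2}; class representatives are labeled
# "1", "r2" (central), "r", "s", "sr".
chi = {"1": 2, "r2": -2, "r": 0, "s": 0, "sr": 0}
class_size = {"1": 1, "r2": 1, "r": 2, "s": 2, "sr": 2}

# Generators of V(G): elements g with chi(g) != 0 for some nonlinear chi.
generators = [g for g, v in chi.items() if v != 0]

# Here the nonvanishing classes {1} and {r^2} already form the
# subgroup <r^2> = Z(G), so its order is the sum of those class sizes.
order_of_V = sum(class_size[g] for g in generators)
print(generators, order_of_V)
```

In this example $V(G)=G'=Z(G)$ has order $2$, so $V_{2}=[V_{1},G]=1$.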
Note that $V(G)$ is the smallest subgroup of $G$ such that all nonlinear irreducible characters vanish on $G\setminus V(G).$ Let $G_{i}$ denote the $i$-th term of the lower central series, defined by $G_{1}=G,$ $G_{2}=G'=[G,G],$ and $G_{i}=[G_{i-1},G]$ for $i\ge 3.$ We are going to study a central series associated with the vanishing off subgroup, defined inductively by $V_{1}=V(G)$, and $V_{i}=[ V_{i-1}, G ]$ for $i\ge 2.$ Lewis proved in \cite{Lewis1} that $G_{i+1}\le V_{i}\le G_{i}.$ In \cite{Lewis1}, Lewis showed that when $V_{i}<G_{i},$ we have $V_{j}<G_{j}$ for all $j$ such that $1\le j\le i.$ Also, in \cite{Lewis1}, Lewis proved that if $V_{2} < G_{2}$, then there exists a prime $p$ such that $G_{i}/V_{i }$ is an elementary abelian $p$-group for all $i\ge 1.$ In addition, he proved that there exists a positive integer $n$ such that if $V_{3} < G_{3}$, then $|G:V_{1}|=|G':V_{2}|^{2}=p^{2n}$. We are able to generalize the results in \cite{Lewis1} to the case where $V_{i}<G_{i}$ for $i>3.$ Also, we prove that the index of $V_{i-1}$ in $G_{i-1}$ is the same as the index of $D_{3}$ in $G.$ We define some subgroups that are useful to prove our results. First, set $D_{3}/V_{3} = C_{G/V_{3}}(G'/V_{3}).$ Lewis proved in \cite{Lewis1} that if $V_{3} < G_{3}$, then either $|G:D_{3}|=\sqrt{|G:V_{1}|}$ or $D_{3}=V_{1}$. To study the case when $i>3,$ we define some more subgroups. For each integer $i\ge3,$ set $ Y_{i}/V_{i}=Z(G/V_{i})$ and $D_{i}/V_{i}=C_{G/V_{i}}(G_{i-1}/V_{i}).$ We say that $G_{k}$ is $H_{1}$ if for every normal subgroup $N$ of $G$ where $V_{k}\le N<G_{k}$ we have $V_{k-1}/N=(G_{k-1}/N)\cap Y_{k}(G/N).$ In \cite{Lewis1}, it was proved that $G_{3}$ is $H_{1}.$ Under the additional hypothesis that $G'/V_{i}$ is abelian, we are able to show that $G_{i}$ is $H_{1}$ for all $i>3.$ We are also interested in computing the index of $V_{i}$ in $G_{i}$. 
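The lower central series just recalled can be computed for a small example with SymPy (an illustrative aside; the groups in this paper are arbitrary finite groups). For the dihedral group of order $8$, which is nilpotent of class $2$, the terms have orders $8$, $2$, $1$.

```python
from sympy.combinatorics.named_groups import DihedralGroup

# Lower central series G = G_1 >= G_2 = G' >= G_3 = [G_2, G] >= ...
# for the dihedral group of order 8 (nilpotence class 2).
G = DihedralGroup(4)                      # order 8
series = G.lower_central_series()
orders = [H.order() for H in series]
print(orders)                             # terms shrink: 8, 2, then 1
```

Here $G_{2}=G'$ is the center of order $2$ and $G_{3}=[G',G]=1$, matching the printed orders.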
We will see that this index depends on the size of $D_{3}.$ In other words, it depends on the size of the centralizer of $G'$ modulo $V_{i}.$ We now come to our first theorem. When $V_{k}<G_{k}$, and $G_{i}$ is $H_{1}$ for $i= 4,\cdots,k,$ we are able to prove that $D_{k}=D_{3},$ which is very useful for proving some of the results of this paper. \begin{thm} Assume that $V_{k}< G_{k}$, $G'/V_{k}$ is abelian, and $G_{i}$ is $H_{1}$ for all $i=3,\cdots,k$. Then $D_{k}=D_{3}.$ \end{thm} Our second theorem should be considered to be the main result of this paper. We are able to prove that $|G_{i-1}:V_{i-1}|=|G:D_{3}|,$ for every $i \ge 4$, where $V_{i}< G_{i}$, and $G'/V_{i} $ is abelian. Hence, for a nilpotent group of class $c,$ if $V_{c} <G_{c},$ and $G'/V_{c}$ is abelian, then we have $|G_{i-1}:V_{i-1}|=|G:D_{3}|$ for all $4\le i\le c,$ and $|G_{c}:V_{c}|\le |G:D_{3}|.$ \begin{thm} Assume that $V_{k}< G_{k}$ and that $G'/V_{k}$ is abelian, for some $k\geq3.$ Then\\ (a) $|G_{k-1}:V_{k-1}|=|G:D_{3}|$ for $k\ge 4.$\\ (b) $D_{k}=D_{3}.$\\ (c) $G_{k}$ is $H_{1}$.\\ (d) $|G_{k}:V_{k}|\le |G:D_{3}|.$ \end{thm} Let $G$ be a finite group; we say that $G$ is a Camina group if $cl(x)=xG'$ for every $x\in G\setminus G'.$ If $3\le i\le k-1,$ then $V_{i}$ will satisfy the same hypothesis as a Camina group. So, $D_{i}=D_{3},$ $G_{i}$ is $H_{1}$ and when $i\ge 4,$ $|G_{i-1}:V_{i-1}|=|G:D_{3}|.$ Note that the above result was motivated by MacDonald's bound on subgroups in \cite{MacDonald1}, where he proved that $|G_{3}|\le |G:G'|$ for a Camina group $G.$ Our motivation for adding the hypothesis $G'/V_{k}$ abelian is that the results in \cite{MacDonald1} were under the hypothesis that $G$ is metabelian (i.e., $G'$ is abelian). Hence, proving this conclusion under a similar metabelian hypothesis seems like a reasonable first step. In the Camina group case, removing the metabelian hypothesis required totally different techniques. 
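As an illustrative aside, the Camina condition $cl(x)=xG'$ for all $x\in G\setminus G'$ can be verified directly on a small example with SymPy. The dihedral group of order $8$ is a Camina group of nilpotence class $2$ (the theorems above concern higher class); the sketch below compares each conjugacy class with the corresponding coset of $G'$.

```python
from sympy.combinatorics.named_groups import DihedralGroup

# Check the Camina condition cl(x) = xG' for every x in G \ G'
# on the dihedral group of order 8 (a Camina group of class 2).
G = DihedralGroup(4)
elems = list(G.elements)
der = set(G.derived_subgroup().elements)

def is_camina():
    for x in elems:
        if x in der:
            continue
        conj_class = {(~g) * x * g for g in elems}   # cl(x)
        if conj_class != {x * d for d in der}:       # compare with xG'
            return False
    return True

print(is_camina())
```

Since $G'$ is normal, the left and right cosets $xG'$ and $G'x$ coincide, so the set comparison is unambiguous.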
In closing, as an application of our techniques we answer an open question about Camina groups. In \cite{MacDonald1}, MacDonald conjectured that if $G$ is a Camina group of nilpotence class $3,$ then $|G_{3}| \le p^{n},$ where $p^{2n}=|G:G'|.$ He gave a sketch of a proof. But Dark and Scoppola observed in \cite{Drak1} that MacDonald's proof was not conclusive. So, they proved that if $G$ is a Camina group of nilpotence class $3,$ then $|G_{3}| \le p^{\frac{3n}{2}}.$ In our third theorem, we give a conclusive proof of MacDonald's conjecture. \begin{thm} If $G$ is a Camina group of nilpotence class $3$ with $|G:G'|=p^{2n},$ then $|G_{3}| \le p^{n}.$ \end{thm} Acknowledgement: I would like to thank my advisor, Dr. Mark Lewis, for his input and the useful weekly discussions regarding this paper. This research is a part of my doctoral dissertation. \section{General Lemmas} In this section, we prove some lemmas that are useful for the proofs of our theorems. Also, some of these facts give us a good idea about the relation between the lower central series and the central series associated with the vanishing off subgroup that we defined in the introduction. Lewis showed in \cite{Lewis1} that both series are related by proving that $V_{i}\le G_{i}\le V_{i-1}.$ We now show that if $G_{k}$ is $H_{1},$ then $V_{k-1}=G_{k-1}\cap Y_{k}.$ \begin{lemma}\label{Hone} Assume that $V_{k}<G_{k}.$ If there exists $N$ such that $ V_{k}\le N<G_{k}$ with $V_{k-1}/N=(G_{k-1}/N) \cap Z(G/N),$ then $V_{k-1}=G_{k-1}\cap Y_{k}.$ \end{lemma} \begin{proof} Observe that $Y_{k}/N\le Z(G/N).$ We have $$V_{k-1}/N \le (Y_{k}\cap G_{k-1})/N = (Y_{k}/N) \cap (G_{k-1}/N) \le Z(G/N)\cap (G_{k-1}/N) =V_{k-1}/N.$$ Thus, we obtain equality throughout, and $V_{k-1}=G_{k-1}\cap Y_{k}$ as desired. \end{proof} As an immediate consequence, note that if $G_{k}$ is $H_{1},$ then $V_{k-1}=G_{k-1}\cap Y_{k}.$ This next lemma is well known. 
\begin{lemma}\label{twoone} If $G$ is nilpotent and $|G_{i}|=p,$ then for every $x \in G_{i-1}\setminus( G_{i-1}\cap Y_{i})$, we have $cl(x)=xG_{i}.$ \end{lemma} \begin{proof} Because $G$ is nilpotent, we can write $G=P\times Q$ where $P$ is a $p$-group and $Q$ is a $p'$-group. Hence, $G_{i-1}= P_{i-1}\times Q_{i-1}.$ As $|G_{i}|=p$, we have $G_{i}=P_{i}$. In particular, $Q_{i-1}\le Z(G).$ Observe that $G_{i-1}/G_{i}$ is central in $G/G_{i}.$ Thus, it follows that $cl(x) \subseteq xG_{i}.$ We deduce that $|cl(x)|\leq p$. Recall that $x \in G_{i-1}\setminus Y_{i},$ which implies that $Q \le C_{G}(x)$. Now, $|cl(x)|=|G:C_{G}(x)|$ divides $|G:Q|=|P|$. Therefore, $|cl(x)|$ is either $1$ or $p$. Since $x$ is not central, we must have $|cl(x)|=p=|xG_{i}|.$ We conclude that $cl(x)=xG_{i}.$ \end{proof} Now, we get a relationship between the central series associated with the vanishing off subgroup of the whole group and a quotient group of that group. \begin{lemma}\label{twothree} Assume that $V_{k}<G_{k},$ for some $k\ge 3.$ Then for every normal subgroup $ N<G_{k}$ we have $V_{i}(G/N)=V_{i}/N$ for every $2\leq i \leq k.$ \end{lemma} \begin{proof} We prove this by induction. In Lemma 2.2 in \cite{Lewis1}, we have $V_{1}(G/V_{2})= V(G)/V_{2}.$ Let $X/N=V(G/N).$ By Lemma 3.3 in \cite{Lewis1}, $X\le V(G).$ On the other hand, $V_{2}/N$ is normal in $G/N.$ By Lemma 3.3 in \cite{Lewis1} applied to $G/N$, we have $V(G)/V_{2} =V_{1}(G/V_{2})= V_{1}((G/N)/(V_{2}/N))\le V(G/N)/(V_{2}/N)=(X/N)/(V_{2}/N)\cong X/V_{2}.$ So, $V(G)\le X.$ We deduce that $X=V(G),$ and $V_{2}(G/N)=V_{2}/N.$ This is the initial case of the induction. Now, suppose that $i>2$ and assume that $V_{i-1}(G/N)=V_{i-1}/N.$ Therefore, $V_{i}(G/N)=[ V_{i-1}(G/N), G/N] =[V_{i-1}/N,G/N]=[V_{i-1},G]N/N=V_{i}/N$ as desired. \end{proof} Now, we see the importance of the $H_{1}$ hypothesis. 
\begin{lemma}\label{twotwo} If $V_{i} =1$ and $G_{i}$ is $H_{1}$, then for every $x \in G_{i-1} \setminus V_{i-1}$ we have $cl(x)=xG_{i}.$ \end{lemma} \begin{proof} Since $V_{i}=1$, we have $G_{i}$ is central in $G.$ Thus, $[x,G]$ is central. This implies that $[x,G]=\{x^{-1}x^{g}\mid g\in G\}.$ It follows that the map $a\mapsto x^{-1}a$ is a bijection from $cl(x)$ to $[x,G].$ Hence, $cl(x)=xG_{i}$ if and only if $[x,G] = G_{i}$. Since $x\in G_{i-1},$ it follows that $[x,G] \leq G_{i}.$ Suppose that $[x,G]<G_{i},$ and we want to find a contradiction. We can find $N$ such that $[x,G] \leq N < G_{i},$ where $|G_{i}: N|=p$. Since $x\not\in Y_{i},$ $[x,G]\neq 1.$ Thus, $N>1.$ Applying Lemma \ref{twothree}, it is not difficult to see that $V_{i-1}(G/N) = V_{i-1}/N$. Notice that $xN \in Y_{i}(G/N)$. On the other hand, we have $xN \in G_{i-1}/N = (G/N)_{i-1}.$ Thus, since $G_{i}$ is $H_{1},$ we have $xN \in Y_{i}(G/N) \cap (G_{i-1}/N) =V_{i-1}(G/N) \leq V_{i-1}/N$. Therefore, $x \in V_{i-1},$ which contradicts the choice of $x$. \end{proof} The following result is a nice consequence of Lemma \ref{twotwo} that gives us a good idea about the irreducible characters in ${\rm{Irr}} (G | G_{k}).$ \begin{lemma} If $V_{k} = 1$ and $G_{k}$ is $H_{1}$, then all the characters in ${\rm{Irr}} (G | G_{k})$ vanish on $G_{k-1} \setminus V_{k-1}$. \end{lemma} \begin{proof} Consider $x\in G_{k-1}\setminus V_{k-1}$. By Lemma \ref{twotwo} we have $ cl(x) = xG_{k}$. 
Applying the second orthogonality relation, which is Theorem 2.18 in \cite{Isaacs1}, we obtain $$|G|/|G_{k}|= |G|/|cl(x)|=|C_{G}(x)|= \sum_{\chi \in {\rm{Irr}}(G)}|\chi(x)|^{2} = \sum_{\chi \in {\rm{Irr}}(G/G_{k})}|\chi(x)|^{2} + \sum_{\chi \in {\rm{Irr}}(G\mid G_{k})}|\chi(x)|^{2}.$$ Since $G_{k-1}/G_{k}$ is central in $G/G_{k},$ we can use the second orthogonality relation in $G/G_{k}$ to see that $$ |G:G_{k}| = \sum_{\chi \in {\rm{Irr}}(G/G_{k})}|\chi(xG_{k})|^{2} = \sum_{\chi \in {\rm{Irr}}(G/G_{k})}|\chi(x)|^{2}.$$ Hence, $$\sum_{\chi \in {\rm{Irr}}(G\mid G_{k})}|\chi(x)|^{2} =0.$$ Since $|\chi(x)|^{2} \geq 0$ for each $ \chi \in {\rm{Irr}}(G\mid G_{k}),$ this implies that all characters in $ {\rm{Irr}}(G\mid G_{k})$ vanish on $G_{k-1}\setminus V_{k-1}$ as desired. \end{proof} Define $E_{i}/(G_{i-1}\cap Y_{i})=C_{G/{(G_{i-1}\cap Y_{i})}}(G_{i-2}/(G_{i-1}\cap Y_{i})).$ We know that $V_{i-1}\le G_{i-1}.$ Since $V_{i}=[V_{i-1},G],$ we have $V_{i-1}\le Y_{i},$ and hence, $V_{i-1}\le G_{i-1} \cap Y_{i}.$ Because $[G_{i-1},D_{i-1}]\le V_{i-1}\le G_{i-1} \cap Y_{i},$ it follows that $D_{i-1} \le E_{i}.$ Recall, as a consequence of Lemma \ref{Hone}, that if $G_{i}$ is $H_{1},$ then $V_{i-1}=G_{i-1}\cap Y_{i}.$ Hence, $D_{i-1}/V_{i-1}=C_{G/V_{i-1}}(G_{i-2}/V_{i-1})=C_{G/(G_{i-1}\cap Y_{i})}(G_{i-2}/(G_{i-1}\cap Y_{i}))=E_{i}/(G_{i-1}\cap Y_{i}).$ In particular, $D_{i-1}=E_{i}.$ Notice that our next lemma is the only time we use the hypothesis that $G'/V_{i}$ is abelian. \begin{lemma}\label{fivee} Let $V_{i} < G_{i},$ suppose that $i\ge 4,$ and assume that $G'/V_{i}$ is abelian. Then $D_{i}\leq E_{i}$. \end{lemma} \begin{proof} We may assume that $V_{i} =1.$ Hence, $ D_{i}= C_{G}(G_{i-1})$, $G'$ is abelian, and $Y_{i}=Z(G).$ Since $ G'$ is abelian, we obtain $[G, D_{i}, G_{i-2} ] \leq [ G', G'] = 1$. On the other hand, we have $[G_{i-2}, G, D_{i} ]=[ G_{i-1},D_{i}] =1$. 
By the Three Subgroups Lemma, which is Lemma 8.27 in \cite{Isaacs2}, we get $ [D_{i}, G_{i-2}, G ]=1.$ Therefore, $[D_{i}, G_{i-2}] \leq Y_{i}$. Now, we know that $[D_{i}, G_{i-2}]= [G_{i-2},D_{i}] \leq G_{i-1},$ and $[ D_{i},G_{i-2}] \leq G_{i-1}\cap Y_{i}$. We conclude that $D_{i}\leq E_{i},$ as desired. \end{proof} In the next lemma, we get an upper bound for the index of $D_{i}$ in $G.$ \begin{lemma}\label{seventeenprimee} Assume that $V_{i} =1.$ If $|G_{i}|=p,$ then $|G:D_{i}|\le |G_{i-1}:G_{i-1}\cap Y_{i}|$. \end{lemma} \begin{proof} By Theorem 1 in \cite{Lewis1}, we know that $G_{i-1}/V_{i-1}$ is an elementary abelian $p$-group. Hence, we can find $x_{1}, \cdots ,x_{t} \in G_{i-1}\setminus Y_{i}$, such that $G_{i-1} =\langle x_{1}, \cdots ,x_{t}, G_{i-1}\cap Y_{i} \rangle$, where $ |G_{i-1} : G_{i-1}\cap Y_{i}| =p^{t}$. Since $|G_{i}|=p$, we know by Lemma \ref{twoone} that $|G:C_{G}(x_{j})|=p$ for all $j= 1, \cdots , t$. Thus, $$|G:D_{i}|=|G:\bigcap_{j=1}^{t} C_{G}(x_{j})|\le \prod_{j=1}^{t}|G:C_{G}(x_{j})|=p^{t} = |G_{i-1} : G_{i-1}\cap Y_{i}|.$$ \end{proof} In our next lemma, we prove a very interesting isomorphism that will be a key to obtaining the index of $V_{i}$ in $G_{i}.$ \begin{lemma}\label{seventeeen} Assume that $ G_{i}$ is $H_{1}.$ Let $a \in G_{i-1} \setminus V_{i-1}$ and set $K/V_{i}=C_{G/V_{i}}(aV_{i}).$ Then $ G/K \cong G_{i}/V_{i}$. \end{lemma} \begin{proof} Without loss of generality, we may assume that $V_{i}=1$. Consider the map from $G$ to $ G_{i}$ defined by $ g \mapsto [g, a]$. Since $a \in G_{i-1}$, we have $[g, a] \in G_{i} $ for every $ g\in G.$ Hence, this map is well defined. Also, we know that $ G_{i}$ is central in $G.$ Thus, this map is a homomorphism with kernel $K$. By Lemma \ref{twotwo}, this map is onto. Therefore, by the First Isomorphism Theorem, we conclude that $ G/K \cong G_{i}.$ \end{proof} Now, we prove the following result. 
\begin{cor}\label{seventeencomposite} Assume that $ G_{i}$ is $H_{1}.$ Then $|G_{i}:V_{i}|\le |G:D_{i}|.$ \end{cor} \begin{proof} Let $a$ and $K$ be as in Lemma \ref{seventeeen}. Since $a\in G_{i-1}$ and $D_{i}/V_{i}=C_{G/V_{i}}(G_{i-1}/V_{i}),$ we know that $D_{i}\le K.$ Hence, $|G_{i}:V_{i}|=|G:K|\le |G:D_{i}|.$ \end{proof} The following result is very useful in proving our main theorem. \begin{lemma}\label{twentyy} Assume that $V_{i}< G_{i}$, $G'/V_{i}$ is abelian, and $G_{i-1}$ is $H_{1},$ for $i\ge 4.$ Let $a \in G_{i-2} \setminus V_{i-2}$ and set $K/V_{i-1}=C_{G/V_{i-1}}(aV_{i-1}).$ Then $ K \leq D_{i}$. \end{lemma} \begin{proof} We may assume that $V_{i}=1.$ Hence, $V_{i-1}$ is central in $G$, $G'$ is abelian, $Y_{i}= Z(G),$ and $D_{i}=C_{G}(G_{i-1})$. Fix $x\in K,$ and let $w\in G$ be arbitrary. Notice that $[a, x] \in V_{i-1} \leq Y_{i}$. Thus, $[a , x, w]=1.$ Also, $[x,w]\in G'.$ Because $i\ge 4$, we have $G_{i-2}\le G',$ so $a\in G'$. Since $G'$ is abelian, $[x, w, a]\in [G',G']=1$. Therefore, by Hall's Identity, which is Lemma 8.26 in \cite{Isaacs2}, we obtain $[w, a , x]=1$. This implies that $x$ centralizes $[w,a].$ Since $a \not\in V_{i-2}$ and $G_{i-1}$ is $H_{1},$ we deduce by Lemma \ref{twotwo} that as $w$ runs through all of $G$, $[w,a]$ runs through all of $G_{i-1}$. Hence, $x$ centralizes $G_{i-1}.$ Thus, $x\in D_{i}.$ Therefore, $K\leq D_{i}.$ \end{proof} As a consequence of the previous lemma, we get the following corollary. \begin{cor}\label{corone} Assume that $V_{i}< G_{i}$, $G'/V_{i}$ is abelian, and $G_{i-1}$ is $H_{1},$ for $i\ge 4.$ Then $D_{i-1}\le D_{i}.$ \end{cor} \begin{proof} Let $a \in G_{i-2} \setminus V_{i-2}$ and set $ K/V_{i-1}=C_{G/V_{i-1}}(aV_{i-1}).$ Then by Lemma \ref{twentyy} we have $ K \leq D_{i}.$ Also, we know that $D_{i-1}\le K.$ Thus, $D_{i-1}\le D_{i}.$ \end{proof} We now get an upper bound for $|G_{i-1}:G_{i-1}\cap Y_{i}|.$ \begin{lemma}\label{fifty} Assume that $V_{i} <G_{i}$ and $G_{i-1}$ is $H_{1}$.
Then $|G:E_{i}|\ge |G_{i-1}:G_{i-1}\cap Y_{i}|$. \end{lemma} \begin{proof} Fix $a\in G_{i-2}\setminus V_{i-2},$ and consider the map $f$ from $G$ to $G_{i-1}/V_{i-1}$ defined by $f(g)=[a,g]V_{i-1}.$ As in the proof of Lemma \ref{seventeeen}, we know that $f$ is an onto homomorphism. It follows that $f$ maps $G/E_{i}$ onto $G_{i-1}/f(E_{i}).$ Thus, $|G_{i-1}:f(E_{i})| \leq |G:E_{i}|$. Since $a\in G_{i-2},$ $[E_{i},a]\le G_{i-1} \cap Y_{i},$ and thus $f(E_{i})\leq G_{i-1}\cap Y_{i}$. Then $|G_{i-1}:G_{i-1}\cap Y_{i}| \leq |G_{i-1}:f(E_{i})|$. Hence, $|G:E_{i}|\ge |G_{i-1}:G_{i-1}\cap Y_{i}|,$ as required. \end{proof} \section{Proofs of Theorems 1, 2, and 3} In this section, we prove our three theorems using the general lemmas that we proved in the previous section. Now, we prove Theorem 1. \begin{proof}[Proof of Theorem 1] The initial case of the induction, $i=3,$ holds trivially, since $D_{3}=D_{3}.$ Assume that the theorem is true for $i-1.$ We are going to prove it for $i.$ By hypothesis, we know that $G_{i}$ is $H_{1},$ and by Lemma \ref{Hone}, we have $V_{i-1}=G_{i-1}\cap Y_{i}.$ This implies $E_{i} =D_{i-1}.$ By the inductive hypothesis we know that $D_{i-1}=D_{3},$ and so, $E_{i}=D_{3}.$ By Lemma \ref{fivee}, we obtain $D_{i}\le E_{i}.$ Applying Corollary \ref{corone}, we conclude that $D_{i-1}\le D_{i}.$ Thus, $D_{i-1}\le D_{i} \le E_{i} =D_{i-1}.$ Therefore, we deduce that $D_{i}=E_{i}=D_{i-1}=D_{3}.$ \end{proof} Now, we are ready to prove our second theorem. \begin{proof}[Proof of Theorem 2] We are going to prove this theorem by induction. Notice that the initial case of induction $(i=3)$ is done by Lewis in \cite{Lewis1}. Now, assume that the theorem is true for $k= i-1 $. We are going to prove it for $k=i$. Also in this proof, without loss of generality, we may assume that $V_{i}=1$.
We also know by the inductive hypothesis that $D_{i-1}=D_{3}$ and $G_{i-1}$ is $H_{1}.$ Now, by Lemma \ref{fivee} we have that $D_{i} \le E_{i}.$ By Corollary \ref{corone}, we have $D_{i}\le D_{i-1}$. First we assume that $|G_{i}|=p.$ Thus, we obtain $$|G:D_{i}|\ge |G:D_{i-1}| \ge |G_{i-1}: V_{i-1}| \ge |G_{i-1}: G_{i-1}\cap Y_{i}|.$$ But by Lemma \ref{seventeenprimee}, we have $ |G:D_{i}|\le |G_{i-1}: G_{i-1}\cap Y_{i}|.$ Hence, equality holds throughout the above chain of inequalities. Therefore, $V_{i-1}=G_{i-1} \cap Y_{i},$ and $|G_{i-1}:V_{i-1}|=|G:D_{i-1}|.$ Now, assume that $|G_{i}| >p.$ Consider a normal subgroup $N$ such that $V_{i}\le N < G_{i}$ and $|G_{i}:N| =p$. The above argument shows that $V_{i-1}(G/N)=Y_{i}(G/N)\cap (G_{i-1}/N).$ Thus, $G_{i}$ satisfies $H_{1}.$ By strong induction, we have that $G_{4},\cdots ,G_{i-1}$ satisfy $H_{1}.$ Thus, we may apply Theorem 1 to see that $D_{i}=D_{3}.$ Define $D_{iN}/N=C_{G/N}(G_{i-1}/N).$ Note that $D_{i}\le D_{iN},$ and so $D_{iN}=D_{3}.$ The above argument yields $|G:D_{3}|=|G:D_{i-1}|=|G_{i-1}:V_{i-1}|.$ To prove part (d), since $G_{i}$ is $H_{1},$ by Corollary \ref{seventeencomposite} we obtain $|G_{i}:V_{i}|\le |G:D_{i}|,$ as desired. \end{proof} Now, we prove Theorem 3, which completes the proof of MacDonald's conjecture in \cite{MacDonald1} about the order of $G_{3}$ in the case when $G$ is a Camina group of nilpotence class $3.$ \begin{proof}[Proof of Theorem 3] Note that $V_{1}=G_{2}.$ Hence, $V_{2}=G_{3}.$ We deduce that $V_{3}=G_{4}=1.$ Let $a\in G_{2}\setminus V_{2}$ and set $K =C_{G}(a).$ Thus, by Lemma \ref{seventeeen}, we have $G_{3}\cong G/K.$ But $D_{3} \le K.$ By MacDonald in \cite{MacDonald1}, we know that $|G:D_{3}| =p^{n}.$ Thus, $$|G_{3}|= |G:K| \le |G:D_{3}| =p^{n}.$$ \end{proof} \end{document}
\begin{document} \begin{abstract} The first two authors [Proc. Lond. Math. Soc. (3) {\bf 114}(1):1--34, 2017] classified the behaviour near zero for all positive solutions of the perturbed elliptic equation with a critical Hardy--Sobolev growth $$-\Delta u=|x|^{-s} u^{2^\star(s)-1} -\mu u^q \hbox{ in }B\setminus\{0\},$$ where $B$ denotes the open unit ball centred at $0$ in $\mathbb{R}^n$ for $n\geq 3$, $s\in (0,2)$, $2^\star(s):=2(n-s)/(n-2)$, $\mu>0$ and $q>1$. For $q\in (1,2^\star-1)$ with $2^\star=2n/(n-2)$, it was shown in the op. cit. that the positive solutions with a non-removable singularity at $0$ could exhibit up to three different singular profiles, although their existence was left open. In the present paper, we settle this question for all three singular profiles in the maximal possible range. As an important novelty for $\mu>0$, we prove that for every $q\in (2^\star(s) -1,2^\star-1)$ there exist infinitely many positive solutions satisfying $|x|^{s/(q-2^\star(s)+1)}u(x)\to \mu^{-1/(q-2^\star(s)+1)}$ as $|x|\to 0$, using a dynamical system approach. Moreover, we show that there exists a positive singular solution with $\liminf_{|x|\to 0} |x|^{(n-2)/2} u(x)=0$ and $\limsup_{|x|\to 0} |x|^{(n-2)/2} u(x)\in (0,\infty)$ if (and only if) $q\in (2^\star-2,2^\star-1)$. \end{abstract} \maketitle \section{Introduction and main results}\label{Sec1} The Hardy--Sobolev inequality is obtained by interpolating between the Sobolev inequality ($s=0$) and the Hardy inequality ($s=2$): For every $s\in (0,2)$ and $n\geq 3$, there exists a positive constant $K_{s,n}$ such that $$ \int_{\mathbb{R}^n} |\nabla u|^2\,dx \geq K_{s,n} \left( \int_{\mathbb{R}^n} |x|^{-s} |u|^{2^\star(s)} \,dx \right)^{\frac{2}{2^\star(s)}} \quad \text{for all } u\in C^\infty_c(\mathbb{R}^n), $$ where $ 2^\star(s):=2(n-s)/(n-2)$ denotes the critical Hardy--Sobolev exponent. The critical Sobolev exponent $2^\star$ corresponds to $2^\star(s)$ with $s=0$. 
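The interpolation just described can be recorded in a two-line arithmetic check; the following Python sketch (a trivial numerical aside, not part of the paper) verifies that $2^\star(s)=2(n-s)/(n-2)$ recovers the Sobolev exponent at $s=0$, the Hardy endpoint at $s=2$, and decreases in between:

```python
def two_star(s, n):
    """Critical Hardy-Sobolev exponent 2^*(s) = 2(n - s)/(n - 2)."""
    return 2*(n - s)/(n - 2)

n = 5                                        # any dimension n >= 3
assert two_star(0, n) == 2*n/(n - 2)         # s = 0: the Sobolev exponent 2^*
assert two_star(2, n) == 2                   # s = 2: the Hardy endpoint
assert 2 < two_star(1, n) < two_star(0, n)   # 2^*(s) is decreasing in s
```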
Recent results and challenges on the Hardy--Sobolev inequalities are surveyed by Ghoussoub--Robert in \cite{GR1}, see also \cite{GR3}. For $s\in (0,2)$, the best Hardy--Sobolev constant $K_{s,n}$ is attained by a one-parameter family $(U_\eta)_{\eta>0}$ of functions \begin{equation} \label{ulamb} U_\eta(x):= c_{n,s} \,\eta^{\frac{n-2}{2}} \left( \eta^{2-s}+|x|^{2-s}\right)^{-\frac{n-2}{2-s}}\quad \text{for } x\in\mathbb{R}^n, \end{equation} where $c_{n,s}:=\left((n-s)(n-2)\right)^{1/(2^\star(s)-2)}$ is a positive normalising constant. The functions $U_\eta$ are the only positive {\em non-singular} solutions of the equation (see Chen--Lin \cite{ChenLin} and Chou--Chu \cite{cc}) \begin{equation} \label{rnsol} -\Delta U=|x|^{-s} U^{2^\star(s)-1} \quad \text{in } \mathbb{R}^n\setminus\{0\}. \end{equation} Moreover, any positive $C^2(\mathbb{R}^n\setminus\{0\})$ {\em singular} solution $U$ of \eqref{rnsol} is radially symmetric around $0$ and $v(t)=e^{-(n-2)t/2} U(e^{-t})$ is a positive periodic function of $t$ in $\mathbb{R}$ (see Hsia--Lin--Wang \cite{HLW}). The isolated singularity problem has been studied extensively, see V\'eron's monograph \cite{Ve}. Recent works of the first author and her collaborators such as \cites{CCi,FC,CirRob} give a full classification of the isolated singularities for various classes of elliptic equations. In this paper, we settle an open question arising from \cite{CirRob} with regard to the {\em existence} of all the singular profiles at zero for the positive solutions of the perturbed non-linear elliptic equation \begin{equation}\label{Eq0} -\Delta u=|x|^{-s} u^{2^\star(s)-1}-\mu u^q \qquad \text{for } x\in B(0,R)\setminus\{0\},\end{equation} where $\mu$ is a {\em positive} parameter, $q>1$ and $s\in (0,2)$. By $B(0,R)$ we denote the open ball in $\mathbb{R}^n$ $(n\geq 3)$ centred at $0$ with radius $R>0$. 
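That $U_\eta$ in \eqref{ulamb} solves \eqref{rnsol} can be confirmed symbolically. A minimal SymPy sketch (the software and the sample values $n=4$, $s=1$ are our choice) checks that the radial residual $-\Delta U_\eta-|x|^{-s}U_\eta^{2^\star(s)-1}$ vanishes identically:

```python
import sympy as sp

r, eta = sp.symbols('r eta', positive=True)
n, s = 4, 1                                    # sample dimension and Hardy exponent
p = sp.Rational(2*(n - s), n - 2)              # critical exponent 2^*(s)
c = sp.Integer((n - s)*(n - 2))**(1/(p - 2))   # normalising constant c_{n,s}
U = c*eta**sp.Rational(n - 2, 2)*(eta**(2 - s) + r**(2 - s))**(-sp.Rational(n - 2, 2 - s))

# radial Laplacian of a function of r = |x|: Delta u = u'' + (n-1) u'/r
residual = -(sp.diff(U, r, 2) + (n - 1)*sp.diff(U, r)/r) - r**(-s)*U**(p - 1)
assert sp.simplify(residual) == 0
```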
The first two authors have proved in \cite{CirRob} that the positive singular solutions of \eqref{Eq0} can exhibit up to {\em three} types of singular profiles at zero in a suitable range for $q$: $\bullet$ A {\bf (ND) type} profile (for ``Non Differential'') if $$\lim_{|x|\to 0}|x|^{\frac{s}{q-(2^\star(s)-1)}}u(x)=\mu^{-\frac{1}{q-(2^\star(s)-1)}}.\eqno{(ND)}$$ $\bullet$ A profile of {\bf (MB) type} (for ``Multi-Bump'') in the sense that there exists a sequence $(r_k)_{k\geq 0}$ of positive numbers decreasing to $0$ such that $r_{k+1}=o(r_k)$ as $k\to +\infty$ and $$ u(x)=\left(1+o(1)\right)\sum_{k=0}^\infty U_{r_k}(x) \hbox{ as }|x|\to 0,\hbox{ where }U_\eta\hbox{ is as in } \eqref{ulamb}.\eqno{(MB)} $$ $\bullet$ A profile of {\bf (CGS) type} (for ``Caffarelli--Gidas--Spruck'') if there exists a positive periodic function $v\in C^\infty(\mathbb{R})$ such that $$\lim_{|x|\to 0}\left(|x|^{\frac{n-2}{2}}u(x)-v(-\log |x|)\right)=0.\eqno{(CGS)}$$ \noindent The case $q=2^\star-1$ in \eqref{Eq0} was fully dealt with in \cite{CirRob}. Hence, in the sequel we assume that $q\not=2^\star-1$. We recall the relevant classification result from \cite{CirRob}: \begin{theorem}[\cite{CirRob}]\label{thm:1} Let $u\in C^\infty(B(0,R)\setminus\{0\})$ be an arbitrary positive solution to \eqref{Eq0}. \begin{itemize} \item If $q>2^\star-1$, then $0$ is a removable singularity; \item If $2^\star(s)-1<q<2^\star-1$, then either $0$ is a removable singularity, or $u$ develops a profile of type (CGS), (MB) or (ND); \item If $1<q\leq 2^\star(s)-1$, then either $0$ is a removable singularity, or $u$ has a profile of type (CGS) or (MB). \end{itemize} Moreover, if $u$ develops a profile of (MB) type, then $2^\star-2<q<2^\star-1$. \end{theorem} However, no examples of the three singular profiles of Theorem~\ref{thm:1} were given in \cite{CirRob}, leaving open the question of their existence.
In the present paper, we fill this gap by proving the following: \begin{theorem}\label{Th0} The three singular profiles of Theorem \ref{thm:1} actually do exist. \end{theorem} The existence assertion of Theorem \ref{Th0} is a corollary of the following precise result: \begin{theorem}\label{Th1} Equation \eqref{Eq0} admits positive radially symmetric solutions developing (CGS), (MB) and (ND) profiles in the exact range of parameters given by Theorem \ref{thm:1}. More precisely, when $q\in\left(1,2^\star-1\right)$, there exists $R_0>0$ such that for every $R\in\left(0,R_0\right)$, the following hold: \begin{itemize} \item[{\rm (i)}] For every $\gamma>0$, there exists a unique positive radial solution $u_\gamma$ of \eqref{Eq0} with a removable singularity at $0$ and $\lim_{|x|\to 0}u_\gamma\left(x\right)=\gamma$. \item[{\rm (ii)}] If $q>2^\star-2$, then \eqref{Eq0} has at least one positive (MB) solution. \item[{\rm (iii)}] For every positive singular solution $U$ of \eqref{rnsol}, there exists a unique positive radial (CGS) solution $u$ of \eqref{Eq0} with asymptotic profile $U$ near zero. \item[{\rm (iv)}] If $q>2^\star\left(s\right)-1$, then \eqref{Eq0} admits infinitely many positive (ND) solutions. \end{itemize} \end{theorem} \begin{remark} If $q\in (1,2^\star(s)-1)$, then all positive radial solutions of \eqref{Eq0} extend as positive radial solutions in $\mathbb{R}^n\setminus\{0\}$. For $ q\in [2^\star(s)-1, 2^\star-1)$, any positive radial non-(ND) solution $u$ of \eqref{Eq0} extends as a positive radial solution at least in $B(0,R^*)\setminus\{0\}$ with $R^*$ independent of $u$ (see Lemma~\ref{pos}).
\end{remark} From the three singular profiles of \eqref{Eq0}, only the (CGS) type is reminiscent of the asymptotics of the local singular solutions for the Yamabe problem in the case of a flat background metric ($\mu=s=0$) studied in Caffarelli--Gidas--Spruck \cite{CGS} (see also Korevaar--Mazzeo--Pacard--Schoen \cite{KM} for a refined asymptotics and Marques \cite{Ma} for the case of a general background metric). But for $\mu>0$, the introduction of the perturbation term in \eqref{Eq0} yields two new singular profiles: the (ND) and (MB) types. An important novelty in this paper is the {\em existence of infinitely many} positive radial (ND) solutions for \eqref{Eq0} when $q\in (2^\star(s)-1,2^\star-1)$. To the best of our knowledge, there are no previous existence results for this type of singularity, which arises as a consequence of studying \eqref{Eq0} with a critical Hardy--Sobolev growth (i.e., $s\in (0,2)$) rather than with a critical Sobolev growth ($s=0$). Since \eqref{upli} fails for the (ND) solutions, neither Pohozaev-type arguments nor Fowler-type transformations relevant for (CGS) or (MB) profiles can be used. Specific to the (ND) solutions, the {\em first term} in their asymptotics arises from the competition generated in the right-hand side of \eqref{Eq0} and not directly from the differential structure. To overcome this obstacle, we rewrite the radial form of \eqref{Eq0} as a dynamical system using an original transformation involving {\em three} variables, see \eqref{var3}. The variable $X_1$ in \eqref{var3} is suggestive of a second order term in the asymptotics of the (ND) solutions, which will make apparent the differential structure of our equation in a dynamical systems setting. Nevertheless, by linearising the flow around the critical point, we find a positive eigenvalue, a null one and a negative eigenvalue, so that we cannot apply the classical Hartman--Grobman theorem.
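This spectral picture can be checked numerically on the explicit vector field \eqref{ahh} of Sect.~\ref{Sec6}. A short NumPy sketch (the sample parameters $n=5$, $s=1$, $q=2.2$, $\mu=2$ are our choice) assembles the Jacobian at the critical point by central differences and compares its eigenvalues with $\pm\lambda_1$ and $0$, where $\lambda_1=\mu^{-\zeta/2}\beta^{-1}\sqrt{q-2^\star(s)+1}$ as in the proof of Lemma~\ref{exist:sol}:

```python
import numpy as np

n, s, q, mu = 5, 1.0, 2.2, 2.0           # sample parameters with 2^*(s)-1 < q < 2^*-1
two_star_s = 2*(n - s)/(n - 2)
theta = s/(q - two_star_s + 1)
beta  = (q - 1)*theta/2 - 1
zeta  = (two_star_s - 2)/(q - two_star_s + 1)

def H(xi):
    """The vector field (H_1, H_2, H_3) of eq. (ahh)."""
    x1, x2, x3 = xi
    pos = max(1 - x1*x2, 0.0)            # positive part (1 - x1 x2)_+
    return np.array([
        x1*x2 + (q - two_star_s + 1)/beta*(1 - x1*x2)*x3,
        -x2**2,
        mu**(-zeta)/beta*x1*pos**zeta + x2*(x3 - theta)*(x3 - theta + n - 2)/beta,
    ])

# Jacobian at the critical point (0,0,0) by central differences
h = 1e-6
J = np.column_stack([(H(h*e) - H(-h*e))/(2*h) for e in np.eye(3)])

lam1 = mu**(-zeta/2)/beta*np.sqrt(q - two_star_s + 1)
eigs = np.sort(np.linalg.eigvals(J).real)
assert np.allclose(eigs, [-lam1, 0.0, lam1], atol=1e-5)
```

The check is of course no substitute for the linearisation carried out in the proof; it merely confirms the sign pattern of the spectrum for one admissible choice of parameters.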
Instead, we shall use Theorem~\ref{71} in the Appendix, which invokes the notion of center-stable manifold and ideas of Kelley \cite{Kel}. For $1<q<2^\star-1$, Theorem~\ref{thm:1} yields that every positive non-(ND) solution of \eqref{Eq0} satisfies \begin{equation} \label{upli} \limsup_{|x|\to 0} |x|^{\frac{n-2}{2}} u(x)<\infty. \end{equation} Moreover, \eqref{upli} holds for every positive solution of \eqref{Eq0} when $q\in (1,2^\star(s)-1]$. Note that \eqref{upli} is crucial for Pohozaev-type arguments \cite{CirRob}, on the basis of which we prove in Sect.~\ref{sec-thm2} the non-existence of smooth positive solutions for \eqref{Eq0} in star-shaped domains, subject to $u=0$ on the boundary. \begin{theorem} \label{thm2} Let $\mu>0$ and $s\in (0,2)$ be arbitrary. Let $\Omega$ be a smooth bounded domain in $\mathbb{R}^n$ ($n\geq 3$) such that $0\in \Omega$. Assume that $\Omega$ is star-shaped with respect to $0$. Then, for every $q\in (1,2^\star(s)-1]$, there are no positive smooth solutions for the problem \begin{equation} \label{diri} \left\{ \begin{aligned} & -\Delta u=|x|^{-s} u^{2^\star(s)-1}-\mu u^q && \text{in } \Omega\setminus\{0\},&\\ & u=0 && \text{on } \partial \Omega.& \end{aligned} \right. \end{equation} If $ q\in (2^\star(s)-1, 2^\star-1)$, then \eqref{diri} admits no positive smooth solutions of non-(ND) type. \end{theorem} Motivated by the problem of finding a metric conformal to the flat metric of $\mathbb{R}^n$ such that $K(x)$ is the scalar curvature of the new metric, Chen--Lin \cites{CLn1,CLn2,ChenLin} and Lin \cite{Lin} analysed the local behaviour of the positive singular solutions $u\in C^2(B(0,1)\setminus\{0\})$ to \begin{equation} \label{ccli} -\Delta u=K(x) \,u^{2^\star-1}\quad \text{in } B(0,1)\setminus\{0\},\end{equation} where $K$ is a positive continuous function on $B(0,1)$ in $\mathbb{R}^n$ ($n\geq 3$) with $K(0)=1$.
Moreover, $K$ was always assumed to be a $C^1$ function on $B(0,1)\setminus\{0\}$ such that \begin{equation} \label{gag} 0<\underline{L}:=\liminf_{|x|\to 0} |x|^{1-\ell} |\nabla K(x)|\leq \overline{L}:=\limsup_{|x|\to 0} |x|^{1-\ell} |\nabla K(x)|<\infty\ \ \text{for some } \ell>0. \end{equation} In the above-mentioned works (see also Lin--Prajapat \cite{LinPra} and Taliaferro--Zhang \cite{TaZa}), the following question was investigated: {\em Under what conditions on $K$ are the positive singular solutions of \eqref{rnsol} with $s=0$ asymptotic models at zero for the positive singular solutions of \eqref{ccli}?} This question was settled positively in any of the following situations: \begin{enumerate} \item[(a)] Assumption \eqref{gag} holds for $\ell\geq (n-2)/2$ (see \cite{ChenLin}*{Theorems 1.1 and 1.2}); \item[(b)] Assumption \eqref{gag} holds with $\ell\in (0,(n-2)/2)$, together with {\em extra} conditions, see \cite{Lin}*{Theorem~1.2}. \end{enumerate} Extra conditions in situation (b) are needed to guarantee a positive answer to the above question. Otherwise, for every $0<\ell<(n-2)/2$, Chen--Lin \cite{ChenLin}*{Theorem 1.6} provided general positive radial functions $K(r)$ non-increasing in $r=|x|\in [0,1]$ with $K(0)=1$ such that \eqref{gag} holds and \eqref{ccli} has a positive singular solution with $\liminf_{|x|\to 0} |x|^{(n-2)/2}u(x)=0$. The importance of condition \eqref{gag} in settling the above question can be inferred from our next result, a by-product of Theorem~\ref{Th1}(ii): For every $0<\ell<\min\{(n-2)/2,2\}$ and $s\in (0,2)\setminus\{\ell\}$, we construct a positive continuous function $K$ on $B(0,R)$ for some $R>0$ with $K(0)=1$ such that {\em exactly one inequality in \eqref{gag} fails}, and yet \eqref{sgt} admits a positive singular solution whose asymptotics at zero {\em cannot} be modelled by any positive singular solution of \eqref{rnsol}.
\begin{cor} \label{corol} For every $0<\ell<\min\{(n-2)/2,2\} $ and $s\in (0,2)\setminus \{\ell\}$, there exist $R>0$ and a positive $C^1$-function $K$ on $B(0,R)\setminus\{0\}$ in $\mathbb{R}^n$ ($n\geq 3$) with $K<\lim_{|x|\to 0} K(x)=1$ on $B(0,R)\setminus\{0\}$ such that $0=\underline L<\overline L<\infty$ if $\ell<s$ and $0<\underline L<\overline L=\infty$ if $\ell>s$, yet \begin{equation} \label{sgt} -\Delta u =K(x) |x|^{-s} u^{2^\star(s)-1} \quad \text{in } B(0,R)\setminus \{0\} \end{equation} admits a positive singular solution with $\liminf_{|x|\to 0} |x|^{(n-2)/2}u(x)=0$. \end{cor} \paragraph{Structure of the paper.} In Sect.~\ref{Sec6}, we prove Theorem~\ref{Th1}(iv) on the existence of infinitely many positive (ND) solutions for \eqref{Eq0}. In Sect.~\ref{sec-thm2}, we establish Theorem~\ref{thm2}, together with uniform {\em a priori} estimates for the positive radial solutions of \eqref{Eq0} satisfying \eqref{upli} (see Proposition~\ref{Pr}). In Sect.~\ref{Sec3}, by setting $u(r)=y(\xi)$ with $\xi=r^{(2-s)/2}$, we reduce the assertion of Theorem~\ref{Th1}(i) on removable singularities to the existence and uniqueness of the solution for \eqref{rem} on an interval $[0,T]$. The latter follows from Biles--Robinson--Spraker \cite{BRS}*{Theorems~1 and 2}. In Sect.~\ref{Sec4}, after giving the proof of Corollary~\ref{corol}, we use an argument influenced by Chen--Lin \cite{ChenLin} to prove the existence of (MB) solutions for \eqref{Eq0} in the whole possible range $q \in (2^\star-2,2^\star-1)$. In Sect.~\ref{Sec5}, with a dynamical system approach, we prove Theorem~\ref{Th1}(iii): the positive singular solutions of \eqref{rnsol} serve as asymptotic models for the positive radial (CGS) solutions of \eqref{Eq0}. For a dynamical approach to Emden--Fowler equations and systems, see Bidaut-V\'eron--Giacomini \cite{BG}. 
The results in this paper give the existence and profile at infinity for the positive solutions to $$ -\Delta \tilde u=|x|^{-s} \tilde u^{2^\star(s) -1} -\mu |x|^{(n-2)q-(n+2)} \tilde u^q\quad \text{for } |x|>1/R $$ by using the Kelvin transform $\tilde u(x)=|x|^{2-n}u(x/|x|^2)$, where $u$ is a positive solution of \eqref{Eq0}. \section{(ND) solutions}\label{Sec6} In this section, we let $q\in (2^\star(s)-1,2^\star-1)$ and prove Theorem~\ref{Th1}(iv), restated below. \begin{proposition} \label{NDproof} Assume that $q\in (2^\star(s)-1,2^\star-1)$. Then, there exists $R_0>0$ such that for every $R\in (0,R_0)$, equation \eqref{Eq0} admits infinitely many positive (ND) solutions. \end{proposition} The proof of Proposition~\ref{NDproof} takes place in several steps. First, we reformulate the radial form of \eqref{Eq0} as a first order autonomous differential system using a new transformation, see \eqref{var3}. \subsection{Formulation of our problem as a dynamical system} We first assume that $u$ is a positive radial (ND) solution of \eqref{Eq0}. We define \begin{equation} \label{newb} \vartheta:=\frac{s}{q-2^\star(s)+1},\quad \beta:= \frac{\left(q-1\right)\vartheta}{2}-1,\quad \zeta:=\frac{2^\star(s)-2}{q-2^\star(s)+1}. \end{equation} We introduce a new transformation involving three functions $X_1$, $X_2$ and $X_3$ as follows \begin{equation} \label{var3} X_1(t)=t\left( 1-\mu r^{s} u^{q-2^\star(s)+1}\right),\ \ \quad X_2(t)=\frac{1}{t},\quad X_3(t)=\frac{ru'(r)}{u(r)}+\vartheta, \end{equation} where $t:=r^{-\beta}$ and $\beta,\vartheta$ are given by \eqref{newb}. Since $u$ is a positive radial (ND) solution of \eqref{Eq0}, that is, $ \lim_{r\to 0^+} r^{\vartheta} u(r)=\mu^{-1/(q-2^\star(s)+1)}$, it follows that \begin{equation} \label{sens}\left\{ \begin{array}{l} 1-X_1(t) X_2(t)=\mu r^s u(r)^{q-2^\star(s)+1}>0\quad \text{for all }t\in [2R^{-\beta},\infty),\\ X_1(t) X_2(t)\to 0\ \text{as } t\to \infty. \end{array} \right. 
\end{equation} If we set $\vec X=(X_1,X_2,X_3)$, then, as one easily checks, we have that \begin{equation} \label{grid} \vec X'(t) =(H_1(\vec X(t)), H_2(\vec X(t)),H_3(\vec X(t)))\end{equation} for all $t\in [2R^{-\beta},\infty)$, where $H_1$, $H_2$ and $H_3$ are real-valued functions defined on $\mathbb{R}^3$ by \begin{equation} \label{ahh} \left\{ \begin{aligned} &H_1(\xi_1,\xi_2,\xi_3):= \xi_1 \xi_2+\beta^{-1} (q-2^\star(s)+1) (1-\xi_1 \xi_2) \xi_3,\\ & H_2(\xi_1,\xi_2,\xi_3):= -\xi_2^2,\\ & H_3(\xi_1,\xi_2,\xi_3):=\beta^{-1} \mu^{-\zeta} \xi_1 (1-\xi_1\xi_2)_+^{\zeta}+\beta^{-1} \xi_2 (\xi_3-\vartheta) (\xi_3-\vartheta+n-2). \end{aligned} \right. \end{equation} By $\xi_+$ we mean the positive part of $\xi$. We define $\vec Y:=(Y_1, Y_2,Y_3)$, where $\vec Y(t)=\vec X(t+2R^{-\beta})$ for all $ t\geq 0$. Then, \eqref{grid} gives that $\vec Y'(t)=(H_1(\vec Y(t)),H_2(\vec Y(t)), H_{3}(\vec Y(t)))$ for all $ t\in [0,\infty)$. To get more regularity, for any $\varepsilon\in (0,1)$, we choose $\Psi_\varepsilon\in C^1(\mathbb{R})$ such that $\Psi_\varepsilon(t)=t^{\zeta}$ for all $t\geq \varepsilon$. By choosing $\varepsilon_0\in (0,1)$ small enough and using \eqref{sens}, we find that \begin{equation} \label{sys} \vec Y'(t)=(H_1(\vec Y(t)),H_2(\vec Y(t)), H_{3,\Psi_\varepsilon}(\vec Y(t)))\quad \text{for all } t\in [0,\infty) \end{equation} for every $\varepsilon\in (0,\varepsilon_0)$, where the function $H_{3,\Psi_\varepsilon}:\mathbb{R}^3\to \mathbb{R}$ is defined by $$ H_{3,\Psi_\varepsilon}(\xi_1,\xi_2,\xi_3):= \beta^{-1} \mu^{-\zeta} \xi_1 \Psi_\varepsilon(1-\xi_1\xi_2)+\beta^{-1} \xi_2 (\xi_3-\vartheta) (\xi_3-\vartheta+n-2). $$ \subsection{Existence of solutions for \eqref{sys}} Using $\vartheta$, $\beta$ and $\zeta$ in \eqref{newb}, we define $\Upsilon$ and $\Gamma$ by \begin{equation} \label{upga} \Upsilon:=\mu^{\zeta/2} \sqrt{q-2^\star(s)+1}\quad \text{and}\quad \Gamma:=\vartheta\left(n-2-\vartheta \right)\mu^{\zeta}. 
\end{equation} \begin{lemma} \label{exist:sol} Let $q\in (2^\star(s)-1,2^\star-1)$ and $\varepsilon\in (0,1)$. Fix $\Psi_\varepsilon\in C^1(\mathbb{R})$ such that $\Psi_\varepsilon(t)=t^{\zeta}$ for all $t\geq \varepsilon$. For every $\delta>0$ small, there exist $r_0\in (0,\delta/2)$ and a Lipschitz function $w:[0,r_0]\times [-r_0,r_0]\to [-r_0,r_0]$ such that for any $(Y_{2,0},Z_{3,0})\in (0,r_0]\times [-r_0,r_0]$, the system \eqref{sys} subject to the initial condition \begin{equation} \label{iniy} \vec Y(0)=(\Upsilon (w(Y_{2,0},Z_{3,0})-Z_{3,0})+\Gamma Y_{2,0} ,Y_{2,0}, w(Y_{2,0},Z_{3,0})+Z_{3,0}) \end{equation} has a solution $\vec Y(t)=(Y_1(t),Y_2(t),Y_3(t))$ for all $t\geq 0$ satisfying \begin{equation} \label{aay} \lim_{t\to +\infty} \vec Y(t)=(0,0,0). \end{equation} Moreover, we have $Y_2(t)=1/(t+Y_{2,0}^{-1})$ for all $t\geq 0$. \end{lemma} \begin{proof} Since $\Psi_\varepsilon(1)=1$, we find that $(0,0,0)$ is a critical point of \eqref{sys}. Linearising the flow around $(0,0,0)$, we get one {\em unstable} eigenvalue $\lambda_1=\mu^{-\zeta/2} \beta^{-1}\sqrt{q-2^\star(s)+1}$ with associated eigenvector $(\Upsilon,0,1)$, one {\em null} eigenvalue with associated eigenvector $(\Gamma,1,0)$ and one {\em stable} eigenvalue $-\lambda_1$ with associated eigenvector $(-\Upsilon,0,1)$. For $\vec Z=(Z_1,Z_2,Z_3)$, using a change of coordinates \begin{equation} \label{vss} \vec Y=(\Upsilon (Z_1-Z_3)+\Gamma Z_2,Z_2,Z_1+Z_3), \ \text{i.e.,} \ \vec Z=\left(\frac{Y_1-\Gamma Y_2+\Upsilon Y_3}{2\Upsilon} ,Y_2, \frac{\Gamma Y_2+\Upsilon Y_3-Y_1}{2\Upsilon}\right), \end{equation} we bring the system \eqref{sys} to a diagonal form, namely \begin{equation} \label{zig} \vec Z'(t)=(\lambda_1 Z_1(t)+h_1(\vec Z(t)),-Z_2^2(t), -\lambda_1 Z_3(t)+h_3(\vec Z(t)))\quad \text{for all } t\geq 0. \end{equation} For any $\delta>0$ small, the functions $h_1$ and $h_3$ are $C^1$ on the ball $B_\delta(0)$ in $\mathbb R^3$ centred at $0$ with radius $\delta$.
Moreover, for some constant $C_1>0$, the functions $h_1$ and $h_3$ satisfy \begin{equation} \label{nano} |h_1(\vec \xi)|+ |h_3(\vec \xi)|\leq C_1\sum_{j=1}^3 \xi_j^2\ \text{and} \ |\nabla h_1(\vec \xi)|+|\nabla h_3(\vec \xi)|\leq C_1\sum_{j=1}^3 |\xi_j| \end{equation} for all $ \vec \xi=(\xi_1,\xi_2,\xi_3)\in B_\delta(0)$. By \eqref{vss}, proving Lemma~\ref{exist:sol} is equivalent to showing that for every small $\delta>0$, there exist $r_0\in (0,\delta/2)$ and a Lipschitz map $w:[0,r_0]\times [-r_0,r_0]\to [-r_0,r_0]$ such that for all $(Y_{2,0},Z_{3,0})\in (0,r_0]\times [-r_0,r_0]$, the system \eqref{zig} subject to \begin{equation} \label{init} \vec Z(0)=(w(Y_{2,0},Z_{3,0}), Y_{2,0},Z_{3,0}) \end{equation} has a solution $\vec Z(t)$ for all $t\geq 0$ with $\lim_{t\to +\infty} \vec Z(t)=(0,0,0)$. Linearising the flow for \eqref{zig} around $(0,0,0)$ yields one null eigenvalue, and the classical Hartman--Grobman theorem does not apply to \eqref{zig}. In the Appendix, using the notion of center-stable manifold and inspired by Kelley \cite{Kel}, we prove Theorem~\ref{71}, which can be applied to \eqref{zig} due to \eqref{nano}. This ends the proof. \qed \end{proof} \subsection{Proof of Proposition~\ref{NDproof}} For fixed $\varepsilon\in (0,1)$, we choose $\Psi_\varepsilon\in C^1(\mathbb{R})$ such that $\Psi_\varepsilon(t)=t^{\zeta}$ for all $t\geq \varepsilon$. Let $\delta\in (0,(1-\varepsilon)^{1/2})$. Let $r_0\in (0,\delta/2)$ and $w:[0,r_0]\times [-r_0,r_0]\to [-r_0,r_0]$ be given by Lemma~\ref{exist:sol}. We fix $Y_{2,0}:=r_0/2$. Then for any fixed $Z_{3,0}\in [-r_0,r_0]$, the system \eqref{sys}, subject to the initial condition \eqref{iniy}, has a solution $\vec Y(t)$ for all $t\geq 0$ such that \eqref{aay} holds. Moreover, we find that $Y_2(t)=1/(t+Y_{2,0}^{-1})$ for all $t\geq 0$. Let $t_0>0$ be large such that $\vec Y(t)\in B_\delta (0)$ for all $ t\geq t_0$.
Using that $0<\varepsilon< 1-\delta^2$, for all $ t\geq t_0$, we get that $1-Y_1(t)Y_2(t)>\varepsilon$ so that $ \Psi_\varepsilon(1-Y_1(t)Y_2(t))=(1-Y_1(t) Y_2(t))^\zeta$. Hence, we have $H_{3,\Psi_\varepsilon}(\vec Y(t))=H_3(\vec Y(t)) $ for all $t\geq t_0$. For every $t\geq T:=t_0+Y_{2,0}^{-1}$, we define $\vec X(t)$ by $ \vec X(t):=\vec Y(t-Y_{2,0}^{-1}) $, which yields that $X_2(t)=1/t$. Then, $\vec X(t)$ is a solution of the system \eqref{grid} for all $t\geq T $ such that $ \lim_{t\to \infty} \vec X(t)=(0,0,0)$. With $\vartheta$ and $\beta$ given by \eqref{newb} and $t:=r^{-\beta}$, we define $u(r)$ as in \eqref{var3}. Then $u$ is a positive radial (ND) solution of \eqref{Eq0} with $R:=T^{-1/\beta}$. The above construction leads to an infinite number of positive radial (ND) solutions for \eqref{Eq0} by varying $Z_{3,0}$ in $[-r_0,r_0]$. This completes the proof. \qed \section{Consequences of Pohozaev's identity} \label{sec-thm2} In this section, using Pohozaev's identity, we prove Theorem~\ref{thm2}, followed by uniform {\em a priori} estimates for the positive radial solutions of \eqref{Eq0} satisfying \eqref{upli} (see Proposition~\ref{Pr}). Let $u$ be any positive solution of \eqref{Eq0} with $q\in (1,2^\star-1)$ such that \eqref{upli} holds. As in \cite{CirRob}, for every $r\in (0,R)$, we denote by $P_r^{(q)}(u) $ the Pohozaev-type integral associated to $u$, namely \begin{equation} \label{poh-int} P_{r}^{(q)}(u):=\int_{\partial B(0,r)} \left[ (x,\nu) \left(\frac{|\nabla u|^2}{2}-\frac{u^{2^\star(s)}}{2^\star(s)|x|^s}+\mu\frac{u^{q+1}}{q+1} \right)-T(x,u)\, \partial_\nu u\right]d\sigma, \end{equation} where $T(x,u)=(x,\nabla u(x))+(n-2) u(x)/2$. Here, $\nu$ denotes the unit outward normal at $\partial B(0,r)$.
Assuming $u$ satisfies \eqref{upli}, it was shown in \cite{CirRob} that the limit $P^{(q)}(u):=\lim_{r\to 0^+}P_r^{(q)}(u)$ exists and \begin{equation} \label{van} P^{(q)}(u)\geq 0\end{equation} with strict inequality if and only if $u$ is a (CGS) solution of \eqref{Eq0}. We refer to $P^{(q)}(u)$ as the {\em asymptotic Pohozaev integral}. We introduce the notation \begin{equation} \label{lamb} \lambda:=(n-2)(2^\star-1-q)/2\ \ \text{and}\ \ c_{\mu,q,n}:= \lambda\mu /(q+1). \end{equation} Both $\lambda$ and $c_{\mu,q,n} $ are positive by the assumption $q\in (1,2^\star-1)$. \subsection{Proof of Theorem~\ref{thm2}} Let $q\in (1, 2^\star-1)$. Suppose that \eqref{diri} admits a positive smooth solution $u$ satisfying \eqref{upli}. From $u=0$ on $\partial \Omega$, we have $\nabla u=\left(\partial_\nu u\right) \nu$ for $x\in \partial \Omega$, where $\nu$ denotes the unit outward normal at $\partial \Omega$. For every $r>0$ small, by applying the Pohozaev identity as in \cite{CirRob}*{Proposition~6.1} for $\omega=\omega_r=\Omega\setminus \overline{B(0,r)}$, we get that \begin{equation} \label{find} -\frac{1}{2}\int_{\partial \Omega} (x,\nu) |\nabla u|^2\,d\sigma = P_r^{(q)}(u) +c_{\mu,q,n} \int_{\omega_r} u^{q+1}\,dx. \end{equation} By letting $r\to 0^+$ in \eqref{find} and using \eqref{van}, we arrive at \begin{equation} \label{find2} -\frac{1}{2}\int_{\partial \Omega} (x,\nu) |\nabla u|^2\,d\sigma = P^{(q)}(u) +c_{\mu,q,n} \int_{\Omega} u^{q+1}\,dx\geq 0. \end{equation} Since $\Omega$ is star-shaped with respect to the origin, we have $(x,\nu)>0$ on $\partial \Omega$. Then, \eqref{find2} can only hold when $\nabla u\equiv 0$ on $\partial \Omega$ and $u\equiv 0$ in $\Omega$. Hence, \eqref{diri} has no positive smooth solutions satisfying \eqref{upli}. Using the comments before the statement of Theorem~\ref{thm2}, we finish the proof. \qed \subsection{Uniform {\em a priori} estimates} Let $q\in (1,2^\star-1)$.
For the positive radial solutions $u$ of \eqref{Eq0} satisfying \eqref{upli}, we derive uniform {\em a priori} estimates. These are crucial for proving the existence of (MB) solutions in Proposition~\ref{mb} and (CGS) solutions in Proposition~\ref{CGS}. We define \begin{equation} \label{defyz} \left\{ \begin{aligned} & {\bar R}(u):=\sup \{R>0:\ u\ \text{is a positive radial solution of \eqref{Eq0}}\}, \\ & \ z(r):=r^{\frac{n-2}{2}} u(r)\ \text{for } r\in (0,R),\ \ F_0(\xi):=\frac{(n-2)^2}{4}\xi^2-\frac{2}{2^\star(s)}\xi^{2^\star(s)}\ \ \text{for } \xi\geq 0. \end{aligned}\right. \end{equation} If $u$ has a removable singularity at $0$ or $u$ is a solution of (MB) type, then $\liminf_{r\to 0^+}z\left(r\right)=0$. If $u$ is a (CGS) solution, then from~\cite{CirRob}, we can derive that \begin{equation}\label{P7} 0<\liminf_{r\to 0^+}z\left(r\right)\le \left[(n-2)/2\right]^{2/(2^\star(s)-2)}:=M_0. \end{equation} For $R>0$, we also define \begin{equation} \label{psir} F_R(\xi):=\frac{(n-2)^2}{4}-\frac{2}{2^\star(s)} \xi^{2^\star(s)-2}+ \frac{2\mu R^{\lambda} \xi^{q-1}}{q+1}\ \ \text{for } \xi\geq 0.\end{equation} For $F_0$ given by \eqref{defyz}, let $\Lambda_0$ denote the unique positive solution of $F_0(\xi)=0$, that is \begin{equation}\label{PrEq1} \Lambda_0:=\left[(n-2)(n-s)/4\right]^{\frac{1}{2^\star(s)-2}}. \end{equation} For any $\Lambda>\Lambda_0$, we have $F_0(\Lambda)<0$. Let $R_\Lambda$ denote the unique $R>0$ for which $F_{R}(\Lambda)=0$: \begin{equation} \label{rlambda} R_\Lambda:= \left[-\frac{(q+1)F_0(\Lambda)}{2\mu \Lambda^{q+1}}\right]^{\frac{1}{\lambda}}>0. \end{equation} Moreover, it holds that \begin{equation}\label{RLambda} \Lambda=\sup\left\{\xi>0:\,F_{R_\Lambda}\left(t\right)>0\quad \text{for all } t\in (0,\xi)\right\}. \end{equation} \begin{proposition}[Uniform {\em a priori} estimates] \label{Pr} Let $q\in (1,2^\star-1)$.
Then for every $\Lambda>\Lambda_0$, there exists $R_\Lambda>0$ as in \eqref{rlambda} such that any positive radial solution of \eqref{Eq0} with $R\in (0,R_\Lambda)$ satisfying \eqref{upli} can be extended as a positive radial solution of \eqref{Eq0} in $B(0,R_\Lambda]\setminus\{0\}$ and \begin{equation}\label{PrEq2} r^{\frac{n-2}{2}}u\left(r\right)<\Lambda \qquad\text{for all } r\in (0,R_\Lambda]. \end{equation} \end{proposition} Let $\omega_{n-1}$ denote the volume of the Euclidean $(n-1)$-sphere $\mathbb S^{n-1}$ in $\mathbb{R}^n$. Let $\lambda$ and $c_{\mu,q,n}$ be given by \eqref{lamb}. For $q>2^\star(s)-1$, we define $\ell_q$ as follows: \begin{equation} \label{grand} \ell_q:= \frac{(2-s)(q+1)}{ (n-s)(q-1)} \left[\frac{(n-2)(n-s)(q-1)}{4(q-2^\star(s)+1)}\right]^{-\frac{q-2^\star(s)+1}{2^\star(s)-2}}. \end{equation} \noindent A key tool in proving Proposition~\ref{Pr} is given by Lemma~\ref{pos}, which is of interest in its own right. \begin{lemma} \label{pos} Let $q\in (1,2^\star-1)$. Let $u$ be a positive radial solution of \eqref{Eq0} satisfying \eqref{upli}. \begin{itemize} \item[{\rm (a)}] For all $r\in (0,\bar R)$, the functions $z$ and $F_r(z)$ in \eqref{defyz} and \eqref{psir}, respectively, satisfy \begin{equation} \label{hihi1} z^2(r) \,F_r(z(r)) =\frac{2P^{(q)}(u)}{\omega_{n-1}}+ [rz'(r)]^2+2 c_{\mu,q,n} \int_0^r \xi^{n-1} u^{q+1}(\xi)\,d\xi. \end{equation} \item[{\rm (b)}] If $\bar R<+\infty$, then $\liminf_{r\nearrow \bar R} u(r)>0$ and $\limsup_{r\nearrow \bar R}u(r)=+\infty$. \item[{\rm (c)}] If $1<q<2^\star(s)-1$, then $\bar R=+\infty$. \item[{\rm (d)}] If $q=2^\star(s)-1$, then $\bar R\geq (1/\mu)^{1/s}$. \item[{\rm (e)}] If $q\in (2^\star(s)-1,2^\star-1)$, then $\bar R>(\ell_q/\mu)^{1/\lambda}$, where $\ell_q$ is given by \eqref{grand}.
\end{itemize} \end{lemma} \begin{remark} We have $\ell_q\to 1$ as $q\searrow 2^\star(s)-1$ and using $F_0$ in \eqref{defyz}, we get \begin{equation} \label{lqsup} \ell_q=\frac{q+1}{2}\sup_{\Lambda\in (\Lambda_0,\infty)} \frac{-F_0(\Lambda)}{\Lambda^{q+1}}. \end{equation} \end{remark} \begin{proof} From our assumptions, it follows that $\lim_{r\to 0^+} r^n u^{q+1}(r)=0$. \begin{proof}[Proof of (a)] Since $u$ is a radial solution of \eqref{Eq0}, the {\em Pohozaev-type integral} $P_r^{(q)}(u)$ satisfies \begin{equation} \label{red} \frac{2P_{r}^{(q)}(u)}{\omega_{n-1}} = -[rz'(r)]^2+z^2(r)\,F_r(z(r))\quad \text{for all } r\in (0,\bar R). \end{equation} By the Pohozaev identity, see \cite{CirRob}*{Proposition~6.1}, for every $0<r_1<r<\bar R$, we find that \begin{equation} \label{dif} P_{r}^{(q)}(u)-P_{r_1}^{(q)}(u)=\omega_{n-1} c_{\mu,q,n} \int_{r_1}^{r} \xi^{n-1} u^{q+1}(\xi)\,d\xi. \end{equation} Letting $r_1\to 0^+$ in \eqref{dif}, for any $r\in (0,\bar R)$, we find that \begin{equation} \label{trei} P_r^{(q)}(u)=P^{(q)}(u)+\omega_{n-1} c_{\mu,q,n} \int_0^r \xi^{n-1} u^{q+1}(\xi)\,d\xi. \end{equation} Then we conclude \eqref{hihi1} by using \eqref{red} and \eqref{trei}. The proof of (a) is now complete. \qed \end{proof} \begin{proof}[Proof of (b)] Assume that $\bar R<+\infty$. To prove that $\liminf_{r\nearrow \bar R}u(r)>0$, we proceed by contradiction. Assume that for a sequence $(r_k)_{k\geq 1}$ of positive numbers with $r_k\nearrow \bar R$ as $k\to \infty$, we have $\lim_{k\to \infty} u(r_k)= 0$, that is $\lim_{k\to \infty} z(r_k)=0$. We let $r=r_k$ in \eqref{hihi1}, then pass to the limit $k\to \infty$ to obtain a contradiction. For the other claim in (b), assume that $\limsup_{r\nearrow \bar R} u(r)<+\infty$. Then $\limsup_{r\nearrow \bar R} z(r)<+\infty$ since $\bar R<+\infty$. By the classical ODE theory, it follows that $\limsup_{r\nearrow \bar R}|u'(r)|=\infty$. 
On the other hand, by \eqref{hihi1}, we get that $\limsup_{r\nearrow \bar R} |rz'(r)|<+\infty$, which shows that $\limsup_{r\nearrow \bar R} |u'(r)|<+\infty$. This contradiction completes the proof of (b). \qed \end{proof} \begin{proof}[Proof of (c)] Let $q<2^\star(s)-1$. If $\bar R<\infty$, then there exists a sequence $(r_k)_{k\geq 1}$ in $(0,\bar R)$ with $\lim_{k\to \infty}r_k=\bar R$ and $\lim_{k\to \infty} z(r_k)= +\infty$. By letting $r=r_k$ in \eqref{hihi1} and $k\to \infty$, the left-hand side of \eqref{hihi1} diverges to $-\infty$ as $k\to \infty$, which is a contradiction. This proves that $\bar R=+\infty$. \qed \end{proof} \begin{proof}[Proof of (d)] Let $q=2^\star(s)-1$. We argue by contradiction. Assume that $\bar R<\left(1/\mu\right)^{1/s}$. Then, there exists $(r_k)_{k\geq 1}$ in $(0,\bar R)$ with $\lim_{k\to \infty}r_k= \bar R$ and $\lim_{k\to \infty}z(r_k)= +\infty$. Since $r^{\lambda}=r^{s}<\bar R^{s}$ for all $r\in (0,\bar R)$, from \eqref{hihi1} and the definition of $F_r$ in \eqref{psir} (with $R=r$), we have \begin{equation} \label{figu} \frac{(n-2)^2 z^2(r_k)}{4}- \frac{2\left(1-\mu \bar R^{s}\right)z^{2^\star(s)}(r_k)}{2^\star(s)}> 0 \quad \text{for all } k\geq 1. \end{equation} By letting $k\to \infty$ in \eqref{figu} and using that $1-\mu \bar R^{s}>0$, we get that the left-hand side of \eqref{figu} tends to $-\infty$ as $k\to \infty$. This contradiction proves that $\bar R\geq \left(1/\mu\right)^{1/s}$. \qed \end{proof} \begin{proof}[Proof of (e)] Let $q\in (2^\star(s)-1,2^\star-1)$. To prove $\bar R>\left(\ell_q/\mu\right)^{1/\lambda}$ with $\ell_q$ as in \eqref{grand}, it suffices to assume $\bar R<+\infty$. Let $F_{\bar{R}}$ be the function $F_R$ in \eqref{psir} with $R=\bar R$. We distinguish two cases: \noindent {\sc Case 1}: If $u$ has a removable singularity at $0$, or $u$ is a (MB) solution, then $\liminf_{r\to 0^+} z(r)=0$ using that $z(r)=r^{\frac{n-2}{2}} u(r)$. 
Since $\limsup_{r\nearrow {\bar R}} z(r)=+\infty$, to ensure \eqref{hihi1} for a positive radial solution $u$ of \eqref{Eq0} which is {\em neither} (CGS) {\em nor} (ND), it is necessary to have \begin{equation} \label{exi} F_{\bar{R}}(\xi)>0 \quad \text{for all } \xi\in [0,\infty). \end{equation} We next study the monotonicity of $F_{\bar{R}}$. We see that $F_{\bar{R}}$ has only one positive critical point $\xi_c$ defined by \begin{equation} \label{definX} \xi_c:=\left( \frac{(2-s)(q+1)}{\mu (n-s)(q-1) \bar R^{\lambda} }\right)^{\frac{1}{q-2^\star(s)+1}}. \end{equation} Moreover, $\xi_c$ is a global minimum point for $F_{\bar{R}}$ on $[0,\infty)$. Thus, \eqref{exi} holds if and only if $F_{\bar{R}}(\xi_c)> 0$, which corresponds to $\bar R>\left(\ell_q/\mu\right)^{1/\lambda}$. \noindent {\sc Case 2:} If $u$ is a radial (CGS) solution of \eqref{Eq0}, then we need $F_{\bar{R}}(\xi)>0$ for every $\xi\geq \liminf_{r\to 0^+} z(r)$. If $M_0$ in \eqref{P7} satisfies $M_0\leq \xi_c$, then $\bar R>\left(\ell_q/\mu\right)^{1/\lambda}$ is necessary to have $F_{\bar{R}}(\xi)>0$ for every $\xi\in [\liminf_{r\to 0^+} z(r),+\infty)$. If $M_0> \xi_c $, then from \eqref{definX} and \eqref{P7}, we get \begin{equation} \label{minim2} \bar R^\lambda > \frac{(2-s)(q+1)}{\mu (n-s)(q-1)} \left(\frac{n-2}{2}\right)^{-\frac{2(q-2^\star(s)+1)}{2^\star(s)-2}}, \end{equation} which again implies $\bar R>\left(\ell_q/\mu\right)^{1/\lambda}$. \noindent We have established the assertion of (e) in both Cases 1 and 2. \qed \end{proof} \noindent This completes the proof of Lemma~\ref{pos}. \qed \end{proof} \noindent {\bf Proof of Proposition~\ref{Pr}.} For any $q\in [2^\star(s)-1,2^\star-1)$, we denote $R^*=R^*(q)$ as follows $$ R^*:=\left\{\begin{aligned} & (1/\mu)^{1/s} && \text{if } q=2^\star(s)-1,&\\ & (\ell_q/\mu)^{1/\lambda} && \text{if } 2^\star(s)-1<q<2^\star-1.& \end{aligned} \right.$$ Let $\Lambda>\Lambda_0$ be fixed.
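Before proceeding, the closed form \eqref{rlambda}, the supremum formula \eqref{lqsup}, and the inequality $R_\Lambda\leq R^*$ used below can be sanity-checked numerically. The sketch below is not part of the proof; the values $n=5$, $s=1$, $\mu=1$, $q=2$ are hypothetical sample parameters in $(2^\star(s)-1,2^\star-1)$.

```python
# Numerical sanity check (hypothetical sample values n = 5, s = 1, mu = 1,
# q = 2): verify F_{R_Lambda}(Lambda) = 0 for R_Lambda as in (rlambda),
# compare the closed form (grand) for ell_q with the supremum formula
# (lqsup), and check R_Lambda <= R^*.
n, s, mu = 5, 1, 1.0
p = 2*(n - s)/(n - 2)                 # 2^*(s)
two_star = 2*n/(n - 2)                # 2^*
q = 2.0                               # in (2^*(s)-1, 2^*-1) = (5/3, 7/3)
lam = (n - 2)*(two_star - 1 - q)/2    # lambda from (lamb)

def F0(x):                            # F_0 from (defyz)
    return (n - 2)**2/4*x**2 - 2/p*x**p

def FR(R, x):                         # F_R from (psir)
    return (n - 2)**2/4 - 2/p*x**(p - 2) + 2*mu*R**lam*x**(q - 1)/(q + 1)

Lam0 = ((n - 2)*(n - s)/4)**(1/(p - 2))                # Lambda_0 from (PrEq1)
ell_q = (2 - s)*(q + 1)/((n - s)*(q - 1)) \
        * ((n - 2)*(n - s)*(q - 1)/(4*(q - p + 1)))**(-(q - p + 1)/(p - 2))
Rstar = (ell_q/mu)**(1/lam)                            # R^* for q > 2^*(s)-1

# (lqsup): ell_q = (q+1)/2 * sup_{Lambda > Lambda_0} (-F0(Lambda))/Lambda^{q+1}
sup_grid = max((q + 1)/2*(-F0(L))/L**(q + 1)
               for L in (Lam0 + 0.01*k for k in range(1, 20000)))
assert abs(sup_grid - ell_q) < 1e-3*ell_q

for Lam in (Lam0 + 1, 10.0, 50.0):
    R_Lam = (-(q + 1)*F0(Lam)/(2*mu*Lam**(q + 1)))**(1/lam)   # (rlambda)
    assert abs(FR(R_Lam, Lam)) < 1e-9     # F_{R_Lambda}(Lambda) = 0
    assert R_Lam <= Rstar + 1e-12         # R_Lambda <= R^*
```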
Let $u$ be any positive radial solution of \eqref{Eq0} with $R\in (0,R_\Lambda)$ such that \eqref{upli} holds. From Lemma~\ref{pos}, the maximum radius of existence $\bar R=\bar R(u)$ for $u$ satisfies $\bar R=+\infty$ if $1<q<2^\star(s)-1$, $\bar R\geq R^*$ for $q=2^\star(s)-1$ and $\bar R>R^*$ for $2^\star(s)-1<q<2^\star-1$. From \eqref{lqsup} and \eqref{rlambda}, we have $R_\Lambda\leq R^*$ for all $2^\star(s)-1< q<2^\star-1$. When $q=2^\star(s)-1$, using the definition of $F_0$ and $R^*$, we see easily that $R_\Lambda<R^*$. Hence, we can extend $u$ as a positive radial solution of \eqref{Eq0} in $B(0,R_\Lambda]\setminus\{0\}$ for all $1<q<2^\star-1$. \noindent We now prove \eqref{PrEq2}. Assume by contradiction that \eqref{PrEq2} fails, that is, $z(r_0)\geq \Lambda$ for some $r_0\in (0,R_\Lambda]$, where $z(r):=r^{\frac{n-2}{2}}u(r)$ is defined as in \eqref{defyz}. Since $z(r_0)\geq \Lambda>\Lambda_0>M_0$, the Intermediate Value Theorem, together with \eqref{RLambda} and \eqref{P7}, gives that there exists $r_1\in\left(0,r_0\right)$ such that $z\left(r_1\right)=\Lambda$. Hence, using Lemma~\ref{pos}(a) at $r=r_1$, together with \eqref{van} and the strict monotonicity of $R\longmapsto F_R(\Lambda)$, we find that $ 0\leq \Lambda^2\,F_{r_1}(\Lambda)< \Lambda^2\,F_{R_{\Lambda}}(\Lambda) =0$. This contradiction ends the proof of Proposition~\ref{Pr}. \qed \section{Removable singularities}\label{Sec3} The assertion of Theorem~\ref{Th1}(i) follows from Lemma~\ref{pos} and Lemma~\ref{remove} below. \begin{lemma} \label{remove} For $q>1$ and every $\gamma\in (0,\infty)$, there exists $R>0$ such that \eqref{Eq0} has a unique positive radial solution $u_\gamma$ with a removable singularity at $0$ and $\lim_{r\to 0^+} u_\gamma(r)=\gamma$. \end{lemma} \begin{proof} Fix $\gamma\in (0,\infty)$ arbitrarily. We consider the following initial value problem: \begin{equation} \label{rem} \left\{ \begin{aligned} & y''(\xi)+a\, y'(\xi)/\xi+4 ( y^{2^\star(s)-1}-\mu\, \xi^{\frac{2s}{2-s}}\,y^q)/(2-s)^2 =0 \ \text{for } \xi>0,\\ & y(0)=\gamma,\quad y'(0)=0, \end{aligned} \right.
\end{equation} where we denote $a:=(2n-s-2)/(2-s)$. By Biles--Robinson--Spraker \cite{BRS}*{Theorems~1 and 2}, for every $\gamma>0$, there exists a unique positive solution $y_\gamma$ of \eqref{rem} on some interval $[0,T]$ with $T>0$. A solution $y$ of \eqref{rem} is defined in \cite{BRS} as follows: \begin{enumerate} \item[(a)] $y$ and $y'$ are absolutely continuous on $[0,T]$; \item[(b)] $y$ satisfies the ODE in \eqref{rem} a.e. on $[0,T]$; \item[(c)] $y$ satisfies the initial conditions in \eqref{rem}. \end{enumerate} Since $a>1$, the function $\xi\longmapsto \xi^a y_\gamma '(\xi)$ is absolutely continuous on $[0,T]$. From \eqref{rem}, we have $$ \left(\xi^a y_\gamma '(\xi)\right)'= -\frac{4}{(2-s)^2} \xi^a \left( y_\gamma ^{2^\star(s)-1}-\mu\, \xi^{\frac{2s}{2-s}}\,y_\gamma ^q \right)\quad \text{a.e. in } [0,T]. $$ Thus, for all $\xi\in [0,T]$, we find that $$ \xi^a y_\gamma '(\xi)=-\frac{4}{(2-s)^2} \int_0^\xi t^a \left( y_\gamma ^{2^\star(s)-1}(t)-\mu\, t^{\frac{2s}{2-s}}\,y_\gamma ^q(t)\right)\,dt. $$ By the property (a) for $y_\gamma$, we find that $y_\gamma \in C^2(0,T]$ satisfies the ODE in \eqref{rem} on $ (0,T]$. The change of variable $u_\gamma(r)=y_\gamma(\xi)$ with $\xi=r^{(2-s)/2}$ yields that $u_\gamma$ is a positive radial $C^2(0,R]$-solution of \eqref{Eq0} with $R=T^{2/(2-s)}$ and $\lim_{r\to 0^+}u_\gamma(r)=\gamma$. This proves the existence claim. \noindent We now show the uniqueness claim: any positive radial $C^2(0,R]$-solution $u$ of \eqref{Eq0} for some $R>0$ such that $\lim_{r\to 0^+}u(r)=\gamma$ must coincide with $u_\gamma$ on their common domain of existence. Indeed, using the change of variable $u(r)=y(\xi)$ with $\xi=r^{(2-s)/2}$, we get that $y\in C^2(0,R^{(2-s)/2}]$ satisfies the differential equation in \eqref{rem} for all $\xi\in (0,R^{(2-s)/2})$ and $\lim_{\xi \to 0^+}y(\xi)=\lim_{r\to 0^+}u(r)=\gamma$. Hence, $y$ can be extended by continuity at $0$ by defining $y(0)=\gamma$. 
To conclude that $y$ is a solution of \eqref{rem} on $[0,R^{(2-s)/2}]$ in the sense of \cite{BRS}, that is, $y$ satisfies properties (a)--(c) stated above with $T=R^{(2-s)/2}$, it suffices to show that \begin{equation} \label{f} y'(\xi)\to 0\ \text{and }\ y''(\xi)\to -2\,\gamma^{2^\star(s)-1}/[(n-s)(2-s)]\ \ \text{as } \xi\to 0^+. \end{equation} This would give that $y\in C^2[0,R^{(2-s)/2}]$, and then, by applying Theorem~2 in \cite{BRS}, we conclude that $y=y_\gamma$ on $[0,\min\{T, R^{(2-s)/2}\}]$, proving our uniqueness assertion. \noindent We prove \eqref{f}. Since $u$ is a positive radial solution of \eqref{Eq0} with $\lim_{r\to 0^+} u(r)=\gamma$, we have \begin{equation} \label{rad} r^{-(n-1-s)} \left( r^{n-1} u'(r) \right)'=-u^{2^\star(s)-1}+\mu r^s u^q\to -\gamma^{2^\star(s)-1} \quad \text{as } r\to 0^+. \end{equation} Hence, the function $r\longmapsto r^{n-1} u'(r)$ is decreasing on some interval $(0,r_0)$ for small $r_0>0$. Thus, there exists $\lim_{r\to 0^+} r^{n-1} u'(r)= \theta\in (-\infty, \infty]$. We next show that $\theta=0$. Assume by contradiction that $\theta\not= 0$. Then choosing $\min\{\theta,0\}<c<\max\{\theta,0\}$, we find that $h(r)= u(r)+c(n-2)^{-1} r^{2-n}$ is decreasing (respectively, increasing) on $(0,r_1)$ for $r_1>0$ small when $\theta<0$ (respectively, when $\theta>0$). Since $\lim_{r\to 0^+} h(r)=-\infty$ if $\theta<0$ and $\lim_{r\to 0^+} h(r)=+\infty$ if $\theta>0$, we arrive at a contradiction. This proves that $\lim_{r\to 0^+} r^{n-1} u'(r)=0.$ Hence, by \eqref{rad}, we get that $ \lim_{r\to 0^+}r^{s-1}u'(r)= -\gamma^{2^\star(s)-1}/(n-s)$. Coming back to the $\xi$ variable, we obtain \eqref{f}. This ends the proof of Lemma~\ref{remove}. \qed \end{proof} \section{(MB) solutions}\label{Sec4} In Sect.~\ref{sec-51} we prove Corollary~\ref{corol}. In Sect.~\ref{sec-52} we prove Theorem~\ref{Th1}(ii) given as Proposition~\ref{mb}. 
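The change of variable $u_\gamma(r)=y_\gamma(\xi)$ with $\xi=r^{(2-s)/2}$, used in the proof of Lemma~\ref{remove} above, can be sanity-checked symbolically. The sketch below is not part of the proof: it assumes `sympy` is available, uses the hypothetical sample values $n=4$, $s=1$, and tests the identity on the concrete function $y(\xi)=e^{\xi}$, verifying that multiplying the radial form of \eqref{Eq0} by $4r^s/(2-s)^2$ and substituting $r=\xi^{2/(2-s)}$ reproduces the ODE in \eqref{rem}.

```python
import sympy as sp

# Symbolic sanity check (outside the manuscript) of the change of variable
# u(r) = y(xi), xi = r^{(2-s)/2}, tested on the concrete function
# y(xi) = exp(xi) with sample values n = 4, s = 1 (so 2^*(s) - 1 = 2 and
# a = (2n-s-2)/(2-s) = 5); mu and q are kept symbolic.
n, s = 4, 1
mu, q = sp.symbols('mu q', positive=True)
rv, xi = sp.symbols('rv xi', positive=True)
p = sp.Rational(2*(n - s), n - 2)            # 2^*(s) = 3
a = sp.Rational(2*n - s - 2, 2 - s)          # a = 5

# u(r) = y(r^{(2-s)/2}) with y = exp; radial form of (Eq0):
# u'' + (n-1)/r u' + r^{-s} u^{2^*(s)-1} - mu u^q = 0
U = sp.exp(rv**sp.Rational(2 - s, 2))
eq0 = sp.diff(U, rv, 2) + (n - 1)/rv*sp.diff(U, rv) + rv**(-s)*U**(p - 1) - mu*U**q

# The ODE (rem), evaluated on y = exp at the point xi:
y = sp.exp(xi)
rem = sp.diff(y, xi, 2) + a*sp.diff(y, xi)/xi \
      + 4*(y**(p - 1) - mu*xi**sp.Rational(2*s, 2 - s)*y**q)/(2 - s)**2

# Multiplying (Eq0) by 4 r^s/(2-s)^2 and substituting r = xi^{2/(2-s)}
# must reproduce (rem):
residual = sp.simplify((4*rv**s/(2 - s)**2*eq0).subs(rv, xi**sp.Rational(2, 2 - s)))
residual = sp.simplify(residual - rem)
```

Since the identity is linear in no unknown and every coefficient of $y''$, $y'$, $y^{2^\star(s)-1}$ and $y^q$ must match, a single generic test function already detects any coefficient error.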
\subsection{Proof of Corollary~\ref{corol}} \label{sec-51} For every $0<\ell<\min\{(n-2)/2,2\}$, we set $q:=2^\star -1 -2\ell/(n-2)$ so that $q\in (2^\star-2,2^\star-1)$ with $q>1$. Then, for every $s\in (0,2)$, Theorem~\ref{Th1}(ii) yields a positive radial (MB) solution $u_{MB}$ of \eqref{Eq0} for some $R>0$. We define $z(r)=r^{(n-2)/2} u_{MB}(r)$ for $r\in (0,R)$. Since $z^*:=\limsup_{r\to 0^+}z(r)\in (0,\infty)$ and $z_*=\liminf_{r\to 0^+} z(r)=0$, the asymptotics of $u_{MB}$ at zero is different from that of any positive singular solution of \eqref{rnsol}. By defining $$K(r)=1-\mu r^s u_{MB}(r)^{q-2^\star(s)+1}\ \text{and } C_{s,\ell}:=2(s-\ell)/(n-2)\ \text{for } r= |x|\in (0,R),$$ we see that $u=u_{MB} $ is a positive singular solution of \eqref{sgt}. Moreover, we find that \begin{equation} \label{home} |r^{1-\ell} K'(r)| =\mu [z(r)]^{C_{s,\ell}}\left|\ell+ C_{s,\ell}\,rz'(r)/z(r)\right|\quad \text{for all } r\in (0,R) . \end{equation} We have $C_{s,\ell}>0$ when $\ell<s$ and $C_{s,\ell}<0$ when $\ell>s$. With $\underline L$ and $\overline L$ as in \eqref{gag}, we prove that \begin{equation} \label{scor} \underline{L}=0<\overline L<\infty\ \text{if } \ell\in (0, s),\ \text{ whereas } 0<\underline L<\overline L=\infty\ \text{ if } \ell>s.\end{equation} Indeed, since $P^{(q)}(u_{MB})=0$ and $z^*<\infty$, Lemma~\ref{pos}(a) yields that \begin{equation} \label{vic} \limsup_{r\to 0^+} r|z'(r)|/z(r)<\infty \ \text{ and } F_0(z(r))-[rz'(r)]^2\to 0\ \text{ as }r\to 0^+,\end{equation} where $F_0$ is given by \eqref{defyz}. Hence, $\underline L=0$ and $\overline L<\infty$ if $\ell\in (0,s)$. Since $z_*=0$, for every $\rho\in (0,z^*)$, there exists a sequence $\{r_k\}$ of positive numbers decreasing to $0$ as $k\to \infty$ such that $\lim_{k\to \infty} z(r_k)=\rho$. Then, by \eqref{vic}, we have $\lim_{k\to \infty} (r_kz'(r_k))^2=F_0(\rho)$. 
For suitable $\rho$, using $r_k$ in \eqref{home}, we get that $\overline L>0$ for $\ell\in (0,s)$, and correspondingly $0<\underline L<\infty$ for $\ell>s$. It remains to show that $\overline L=\infty$ if $\ell>s$. Assume, on the contrary, that $\overline L<\infty$. Then \eqref{home} forces $R_k z'(R_k)/z(R_k)\to -\ell/C_{s,\ell}$ for every sequence $\{R_k\}$ of positive numbers decreasing to $0$ such that $\lim_{k\to \infty}z(R_k)= 0$. Lemma~\ref{pos}(a) gives that $ F_{R_k}(z(R_k))\geq [R_k z'(R_k)/z(R_k)]^2$. Letting $k\to \infty$, we would have $(n-2)^2/4\geq \ell^2/C^2_{s,\ell}$, which is a contradiction with $s>0$. Thus, \eqref{scor} holds and $K$ satisfies the properties in Corollary~\ref{corol}. \qed \subsection{Existence of (MB) solutions} \label{sec-52} \begin{proposition} \label{mb} Let $q\in (1,2^\star-1)$. If, in addition, $q> 2^\star-2$, then for $R>0$ small, \eqref{Eq0} admits at least one positive radial (MB) solution $u$, that is, \begin{equation} \label{mbsol} \liminf_{r\to 0^+} r^{\frac{n-2}{2}} u(r)=0\quad \text{and} \quad \limsup_{r\to 0^+} r^{\frac{n-2}{2}} u(r)\in (0,\infty). \end{equation} \end{proposition} \begin{proof} We use an argument inspired by Chen--Lin \cite{ChenLin}. Let $(\gamma_i)_{i\geq 1}$ be an increasing sequence of positive numbers with $\lim_{i\to \infty} \gamma_i=\infty$. By Lemmas~\ref{remove} and \ref{pos}, for every $i\geq 1$, there exists $R_i>0$ such that \eqref{Eq0}, subject to $\lim_{|x|\to 0^+} u(x)=\gamma_i$, admits a unique positive radial $C^2(0,R_i]$-solution $u_{\gamma_i}$. From now on, we use $u_i$ instead of $u_{\gamma_i}$. Let $\Lambda>\Lambda_0$ be fixed, where $\Lambda_0$ is given by \eqref{PrEq1}. By Proposition~\ref{Pr}, there exists $R_\Lambda>0$ such that $u_i$ can be extended as a positive radial $C^2(0,R_\Lambda]\cap C[0,R_\Lambda]$-solution of \eqref{Eq0} in $(0,R_\Lambda]$ satisfying \begin{equation} \label{upp} u_i(0)=\gamma_i,\quad r^{\frac{n-2}{2}} u_i(r)\leq \Lambda\ \text{for all } r\in (0,R_\Lambda]\ \text{and every } i\geq 1.
\end{equation} \noindent {\sc Claim: } For any $u_0>0$, there exist $r_0\in (0,R_\Lambda)$ and $i_0\geq 1$ such that $$ u_i(r_0)\geq u_0\quad \text{for all } i\geq i_0. $$ \noindent We now complete the proof of Proposition~\ref{mb} assuming the Claim. From \eqref{upp}, there exists a subsequence of $(u_i)$, relabeled $(u_i)$, converging uniformly to $u_{\infty}$ on any compact subset of $(0,R_\Lambda]$. Moreover, $u_i\to u_\infty$ in $C^2_{\rm loc}(0,R_\Lambda]$ and $u_\infty$ is a radial solution of \eqref{Eq0}. The above Claim yields $\limsup_{r\to 0^+} u_\infty(r)=\infty$, that is, $u_\infty$ has a non-removable singularity at $0$. By \eqref{upp}, we get $\limsup_{r\to 0^+} r^{\frac{n-2}{2}}u_\infty(r)\in (0,\infty)$. Since $q<2^\star-1$, we thus find that $u_\infty^{q+1}\in L^1(B(0,R_\Lambda))$. We have $P^{(q)}(u_i)=0$ for all $i\geq 1$. By letting $u=u_i$ in \eqref{trei} and \eqref{red}, then passing to the limit $i\to +\infty$, we find that \begin{equation} \label{uui} P_r^{(q)}(u_\infty)= c_{\mu,q,n} \int_{B(0,r)} u_\infty^{q+1}(x)\,dx \quad \text{for all } r\in (0,R_\Lambda]. \end{equation} By letting $r\to 0^+$ in \eqref{uui}, we find that $P^{(q)}(u_\infty)=0$. Hence, by \eqref{van}, $u_\infty$ is not a (CGS) solution of \eqref{Eq0}. As $u_\infty$ does not have a removable singularity at $0$, we conclude that $u_\infty$ is a radial (MB) solution of \eqref{Eq0}, that is, $u_\infty$ satisfies \eqref{mbsol}. This ends the proof of Proposition~\ref{mb}. \qed \end{proof} \begin{proof}[Proof of the Claim] Suppose the contrary. Then for some $u_0>0$ and any $r_0\in (0, R_\Lambda)$, there exists a subsequence of $(u_i)$, relabeled $(u_i)$, such that \begin{equation} \label{mim} u_i(r_0)<u_0\quad \text{for all } i\geq 1.
\end{equation} We apply the following transformation \begin{equation} \label{tr} w_i(t)=r^{\frac{n-2}{2}} u_i(r)\quad \text{with } t=\log r.\end{equation} By $w_i'(t)$ and $w_i''(t)$, we denote the first and second derivative of $w_i$ with respect to $t$, respectively. Then $w_i$ satisfies the equation \begin{equation} \label{weq} w_i''(t)-f(w_i(t)) =\mu e^{\lambda t} w_i^q(t)\quad \text{for } -\infty<t< \log R_\Lambda, \end{equation} where $\lambda:=(n-2)(2^\star-1-q)/2$ and $f:[0,\infty)\to \mathbb{R} $ is defined by \begin{equation}\label{fdefi} f(\xi):=(n-2)^2\xi/{4} -\xi^{2^\star(s)-1}\quad \text{for all } \xi\geq 0. \end{equation} From \eqref{upp}, we have that \begin{equation} \label{ww} w_i(t)\in (0,\Lambda]\quad \text{ for all } t\in (-\infty,\log R_\Lambda]\ \text{and } i\geq 1.\end{equation} \noindent The proof of the Claim is now divided into five steps: \begin{step} \label{step51} The family $(w_i'(t))_{i\geq 1}$ is uniformly bounded on $(-\infty, \log R_\Lambda]$. \end{step} \begin{proof}[Proof of Step \ref{step51}] Using $F_R$ in \eqref{psir} with $R=e^t$, we define $E_i:(-\infty,\log R_\Lambda]\to \mathbb{R}$ by \begin{equation} \label{ei} E_i(t):= \left(w_i'(t)\right)^2-w_i^2(t)\,F_{e^t} (w_i(t)). \end{equation} We have $\lambda>0$ (since $q<2^\star-1$) and $\lim_{t\to -\infty} w_i(t)=0 $. By Lemma~\ref{pos}(a), we find that \begin{equation}\label{eii} E_i(t)=-2 c_{\mu,q,n}\int_{-\infty}^t e^{\lambda \xi} w_i^{q+1}(\xi)\,d\xi\ \text{ and }\ E_i'(t)=-2 c_{\mu,q,n}\,e^{\lambda t} w_i^{q+1}(t)<0 \end{equation} for all $t\in (-\infty,\log R_\Lambda)$. It follows that \begin{equation} \label{limei} \lim_{t\to -\infty} w_i'(t)=\lim_{t\to -\infty} E_i(t)=0.\end{equation} From \eqref{eii}, we have $E_i<0$ on $(-\infty,\log R_\Lambda]$. Thus, by \eqref{ww}, we get that $(w_i'(t))_{i\geq 1}$ is uniformly bounded for $t\in (-\infty,\log R_\Lambda]$, completing Step~\ref{step51}. 
\qed \end{proof} \begin{step} \label{step52} For $\varepsilon_0>0$ and $r_0\in (0,R_\Lambda)$ small such that $ r_0^{(n-2)/2} u_0<\varepsilon_0/2$, we set $$ \mathcal F_i:=\left\{ t\in (-\infty, \log r_0):\ w_i(t)\geq \varepsilon_0\right\}\quad \text{for all } i\geq 1.$$ Then there exists $i_0\geq 1$ such that $$w_i(\log r_0)<\varepsilon_0/2\hbox{ and }\mathcal F_i\not=\emptyset \quad \text{for every } i\geq i_0.$$ \end{step} \begin{proof}[Proof of Step~\ref{step52}] For $0<\varepsilon_0<\left[(n-2)/2\right] ^{(n-2)/(2-s)}$, we define \begin{equation} \label{beta} \beta_0:=2 \varepsilon_0^{2^\star(s)-2}\left(n-2+ \sqrt{(n-2)^2-4\varepsilon_0^{2^\star(s)-2}}\right)^{-1} \hbox{ so that }0<\beta_0<\frac{n-2}{2}\hbox{ is small}. \end{equation} Since $\beta_0\to 0^+$ as $\varepsilon_0\to 0^+$, we can take $\varepsilon_0>0$ small enough such that $\beta_0$ is smaller than $\min\{(n-2)/4, 2/q, (2-s)/(2^\star(s)-1)\}$. Our choice of $r_0$ and \eqref{mim} yield that \begin{equation} \label{r0} w_i(\log r_0)=r_0^{\frac{n-2}{2}} u_i(r_0)<r_0^{\frac{n-2}{2}} u_0<\varepsilon_0/2\quad \text{for all } i\geq 1. \end{equation} To end Step~\ref{step52}, we show by contradiction that there exists $i_0\geq 1$ such that $\mathcal F_i\not=\emptyset$ for all $i\geq i_0$. Indeed, suppose that for a subsequence $(w_{i_k})_{k\geq 1}$ of $(w_i)_{i\geq 1}$, we have \begin{equation} \label{gr} w_{i_k}(t)<\varepsilon_0\quad \text{ for all } t\in (-\infty, \log r_0]\ \text{ and every } k\geq 1. \end{equation} Let $k\geq 1$ be arbitrary. Using \eqref{gr} and \eqref{beta}, we infer that \begin{equation} \label{es} \beta_0\left(n-2-\beta_0\right)=\varepsilon_0^{2^\star(s)-2} > w_{i_k}^{2^\star(s)-2}(t) \end{equation} for all $t\leq \log r_0$. From \eqref{weq} and \eqref{es}, we obtain that \begin{equation} \label{sec} w_{i_k}''(t)> \left[(n-2)/2-\beta_0\right]^2 w_{i_k}(t)\quad \text{for all } t\leq \log r_0.
\end{equation} In particular, $t\longmapsto w_{i_k}'(t)$ is increasing on $(-\infty,\log r_0]$. Since $\lim_{t\to -\infty} w_{i_k}'(t)=0$, we find that $w_{i_k}'(t)>0$ for all $t\leq \log r_0$. Set $$\mathcal G_{i_k}(t) :=(w_{i_k}'(t))^2- \left[(n-2)/2-\beta_0\right]^2 w_{i_k}^2(t).$$ Using \eqref{sec}, we get that $\mathcal G_{i_k}$ is increasing on $(-\infty,\log r_0]$ and $\lim_{t\to -\infty} \mathcal G_{i_k}(t)=0$. Thus, $\mathcal G_{i_k}>0$ on $(-\infty,\log r_0]$, which implies that $$w_{i_k}'(t)>\left[(n-2)/2-\beta_0\right] w_{i_k}(t)\quad \text{for all } t\leq \log r_0.$$ Thus, $t\longmapsto e^{-\left(\frac{n-2}{2}-\beta_0\right)t}w_{i_k}(t)$ is increasing on $(-\infty, \log r_0]$. Using \eqref{tr} and \eqref{r0}, we find that \begin{equation} \label{ah} u_{i_k}(r)\leq c_0\, r^{-\beta_0}\quad \text{for every }r\in (0,r_0]\ \text{ and all }k\geq 1,\end{equation} where $c_0:=\left(\varepsilon_0/2\right) r_0^{-\frac{n-2}{2}+\beta_0} $. Since $\beta_0$ can be made arbitrarily small, it follows from \eqref{ah} that the right-hand side of \eqref{Eq0} with $u=u_{i_k}$ is uniformly bounded in $L^p(B(0,r_0))$ for some $p>n/2$. Then, $u_{i_k}$ satisfies \eqref{Eq0} in $\mathcal D'(B(0,r_0))$ (in the sense of distributions) and $(u_{i_k})_{k\geq 1}$ is uniformly bounded in $W^{2,p}(B(0,r_0))$ for some $p>n/2$. Hence, $(u_{i_k}(r))_{k\geq 1}$ is uniformly bounded in $ r\in [0,r_0/2]$, which leads to a contradiction with $u_{i_k}(0)=\gamma_{i_k}\to \infty$ as $k\to \infty$. This ends the proof of Step~\ref{step52}. \qed \end{proof} \noindent For $i\geq i_0$, we define $$t_i:=\sup\left\{ t\in (-\infty, \log r_0):\ w_i(t)\geq \varepsilon_0\right\}.$$ It follows from Step \ref{step52} that $t_i$ is well-defined and that $t_i\in (-\infty,\log r_0)$ for all $i\geq i_0$. \begin{step} \label{step53} We claim that for every $i\geq i_0$, the function $w_i$ is decreasing on $[t_i,\bar t_i]$ for some $\bar t_i\in (t_i,\log r_0]$. 
Moreover, by diminishing $\varepsilon_0>0$ and $r_0>0$, there exist positive constants $c_1,c_2$ independent of $\varepsilon_0$ and $i$ such that \begin{equation} \label{sa} w_i(\bar t_i)\geq c_1 \varepsilon_0^{\frac{q+2}{2}} e^{\frac{\lambda t_i}{2}} \quad \text{and}\quad \bar t_i-t_i\leq \frac{2}{n-2} \log \frac{ \varepsilon_0}{w_i(\bar t_i)} +c_2. \end{equation} In addition, if $\bar t_i<\log r_0$, then $w_i$ is increasing on $[\bar t_i,\log r_0]$ and \begin{equation} \label{sa2} \log r_0-\bar t_i\leq \frac{2}{n-2} \log \frac{w_i(\log r_0)} {w_i(\bar t_i)} +c_3, \end{equation} where $c_3>0$ is a constant independent of $\varepsilon_0$ and $i$.\end{step} \begin{proof}[Proof of Step~\ref{step53}] Let $i\geq i_0$ be arbitrary. By Step~\ref{step52}, we have $w_i(t)\leq \varepsilon_0$ for every $t\in [t_i,\log r_0]$. Since \eqref{es} holds for all $t\in [t_i,\log r_0]$, as in the proof of Step~\ref{step52}, we recover \eqref{sec}, with $w_{i_k}$ replaced by $w_i$, for all $t\in [t_i,\log r_0]$. Hence, $t\longmapsto w_i'(t)$ is increasing on $[t_i,\log r_0]$ since $ w_i''(t)>0$ for all $t\in [t_i,\log r_0]$. We next distinguish two cases: \noindent {\sc Case 1:} $w_i'(t)\not=0$ for all $t\in [t_i,\log r_0)$. Hence, $w_i'<0$ on $[t_i,\log r_0)$ using that $w_i<\varepsilon_0$ on $(t_i,\log r_0]$. \noindent {\sc Case 2:} $w_i'(\bar t_i)=0$ for some $\bar t_i\in [t_i,\log r_0)$. Then, $w_i'<0$ on $[t_i,\bar t_i)$ and $w_i'>0$ on $(\bar t_i,\log r_0]$. \noindent In both cases, $w_i$ is decreasing on $[t_i,\bar t_i]$, where \begin{enumerate} \item $\bar t_i=\log r_0$ in Case~1; \item $\bar t_i\in (t_i,\log r_0)$ and $w_i'(t)>0$ for all $t\in (\bar t_i,\log r_0]$ in Case~2. \end{enumerate} \noindent Unless explicitly mentioned, the argument below applies to both Case~1 (when $\bar t_i= \log r_0$) and Case~2 (when $\bar t_i\in (t_i,\log r_0)$). \noindent From \eqref{weq}, we have that \begin{equation} \label{vv} w_i''(t)\geq f(w_i(t))\quad \text{for all } t\in [t_i,\log r_0].
\end{equation} Thus, using \eqref{vv}, we find that \begin{equation} \label{m0} t\longmapsto \left(w_i'(t)\right)^2- F_0(w_i(t)) \end{equation} \begin{enumerate} \item[(a)] is non-increasing on $[t_i,\bar t_i]$ (in both Case 1 and Case 2); \item[(b)] is non-decreasing on $[\bar t_i,\log r_0]$ in Case 2. \end{enumerate} {\em Proof of the first inequality in \eqref{sa}.} By \eqref{r0} and $w_i(t_i)=\varepsilon_0$, we infer that there exists $\widetilde t_i\in (t_i,\log r_0)$ such that $w_i(\widetilde t_i)=\varepsilon_0/2$ and, moreover, $\bar t_i\in (\widetilde t_i,\log r_0]$. Hence, there exists $\xi_i\in [t_i,\widetilde t_i]$ such that $$-\varepsilon_0/2=w_i(\widetilde t_i)-w_i(t_i)=w_i'(\xi_i)(\widetilde t_i-t_i).$$ By Step~\ref{step51}, $(|w_i'(t)|)_{i\geq 1}$ is uniformly bounded on $(-\infty, \log R_\Lambda)$ so that \begin{equation} \label{new} \widetilde t_i-t_i \geq c \varepsilon_0\quad \text{for some constant } c>0.\end{equation} From \eqref{ww}, \eqref{ei} and \eqref{eii}, there exists $\widetilde c>0$ such that \begin{equation} \label{u1} -\widetilde c \,w_i^2(t)\leq E_i(t) \leq E_i(\widetilde t_i)\quad \text{for every } \widetilde t_i<t \leq \log r_0. \end{equation} Moreover, using \eqref{new}, together with $E_i(t_i)<0$ and $w_i\geq \varepsilon_0/2$ on $[t_i,\widetilde t_i]$, we obtain that \begin{equation} \label{u2} E_i(\widetilde t_i)=E_i(t_i)-\frac{2\lambda\mu}{q+1} \int_{t_i}^{\widetilde t_i} e^{\lambda t} w_i^{q+1}(t)\,dt\leq - \frac{ \lambda \mu c\, \varepsilon_0^{q+2} e^{\lambda t_i}}{2^{q}\left(q+1\right) } . \end{equation} Since $\bar t_i\in (\widetilde t_i, \log r_0]$, by combining \eqref{u1} and \eqref{u2}, there exists $c_1>0$ such that \begin{equation} \label{bb} w_i(\bar t_i)\geq c_1 \varepsilon_0^{\frac{q+2}{2}} \,e^{\frac{\lambda t_i}{2}}, \end{equation} where $c_1>0$ is independent of $\varepsilon_0$ and $i$. 
\qed {\em Proof of the second inequality in \eqref{sa}.} From \eqref{m0}, for all $ t\in [t_i, \bar t_i)$, we have \begin{equation} \label{bbb} [w_i'(t)]^2- F_0(w_i(t)) \geq -F_0(w_i(\bar t_i)), \end{equation} which, jointly with $w_i'(t)<0$ and $F_0$ increasing on $[0,\varepsilon_0]$, yields that \begin{equation} \label{hh} -w_i'(t)\,\left[ F_0(w_i(t)) -F_0(w_i(\bar t_i))\right]^{-1/2}\geq 1 \ \text{for all } t\in [t_i,\bar t_i). \end{equation} Hence, for all $t\in [t_i,\bar t_i)$, by integrating \eqref{hh} over $[t,\bar t_i]$, we get that \begin{equation} \label{not1} \bar t_i-t\leq \int_{w_i(\bar t_i)}^{w_i(t)} \frac{d\eta}{\left[F_0(\eta)- F_0(w_i(\bar t_i))\right]^{1/2}} =: \mathcal D_i(t). \end{equation} We shall prove below that \begin{equation} \label{pp} \mathcal D_i(t)\leq \frac{2}{n-2} \left(\log \frac{w_i(t)}{w_i(\bar t_i)} +\log 2\right) + \tilde k w_i^{2^\star(s)-2}(t) \end{equation} for all $t\in [t_i,\bar t_i)$, where $\tilde k>0$ is a constant independent of $\varepsilon_0$ and $i$. Then, since $w_i\leq \varepsilon_0$ on $[t_i,\bar t_i]$, from \eqref{not1} and \eqref{pp}, we conclude the proof of the second inequality in \eqref{sa}. {\em Proof of \eqref{pp}.} For every $\xi\geq 0$, we define \begin{equation} \label{m1} g_i(\xi):= \left(\frac{n-2}{2}\right)^2 \xi^2- \frac{2}{2^\star(s)}\,\xi^{2^\star(s)}\left[ w_i(\bar t_i)\right]^{2^\star(s)-2}. \end{equation} By a change of variable, we find that \begin{equation} \label{di} \mathcal D_i(t)= \int_{1}^{ w_i(t)/w_i (\bar t_i)} \frac{d\xi}{\left[ g_i(\xi)- g_i(1)\right]^{1/2}}\quad \text{for all } t\in [t_i,\bar t_i). \end{equation} By the definition of $g_i$ in \eqref{m1}, for each $\xi>1$, we have \begin{equation} \label{m2} \frac{g_i(\xi)-g_i(1)}{\xi^2-1}= \left(\frac{n-2}{2}\right)^2 -\frac{2[w_i(\bar t_i)]^{2^\star(s)-2}}{2^\star(s)} \frac{\xi^{2^\star(s)} -1}{\xi^2-1}.
\end{equation} Since $ \left(2^\star(s)-1\right) \xi^{ 2^\star(s)}-2^\star(s)\, \xi^{2^\star(s)-2}+1$ vanishes at $\xi=1$ and increases for $ \xi\geq 1$, we get that $\xi^{2^\star(s)} -1$ is bounded from above by $ 2^\star(s)\, \xi^{2^\star(s)-2}(\xi^2-1)$ for all $\xi\geq 1$. Hence, for any $1<\xi\leq \varepsilon_0/w_i(\bar t_i)$, we find that \begin{equation} \label{m4} \frac{[w_i( \bar t_i)]^{2^\star(s)-2}}{2^\star(s)} \frac{\xi^{2^\star(s)} -1}{\xi^2-1}\leq \left[ w_i(\bar t_i)\,\xi\right]^{2^\star(s)-2}\leq \varepsilon_0^{2^\star(s)-2}. \end{equation} Since we fix $\varepsilon_0>0$ small, there exists a positive constant $k$, independent of $\varepsilon_0$ and $i$, such that \begin{equation} \label{st} \left[\frac{\xi^2-1}{g_i(\xi)-g_i(1)}\right]^{1/2}\leq \frac{2}{n-2}+2k \left[w_i(\bar t_i)\,\xi\right]^{2^\star(s)-2} \end{equation} for every $ 1<\xi\leq \varepsilon_0/w_i(\bar t_i)$. Since $w_i(t)\leq w_i(t_i)\leq \varepsilon_0 $ for each $t\in [t_i,\bar t_i)$, using \eqref{st} in \eqref{di}, we get \begin{equation} \label{mm} \mathcal D_i(t)\leq \frac{2}{n-2}\int_1^{h_i(t)}\frac{d\xi}{\left[\xi^2-1\right]^{1/2}}+2k \left[w_i(\bar t_i)\right]^{2^\star(s)-2} \mathcal E_i(t), \end{equation} where for every $t\in [t_i,\bar t_i)$, we define $h_i(t)$ and $\mathcal E_i(t)$ by \begin{equation} \label{jj} h_i(t):=\frac{w_i(t)}{w_i(\bar t_i)} \quad \text{and} \quad {\mathcal E}_i(t):=\int_{1}^{h_i(t)} \frac{\xi^{2^\star(s)-2}} {\left(\xi^2-1\right)^{1/2}}\,d\xi. \end{equation} A simple calculation gives that there exists $C>0$ such that for every $t\in [t_i,\bar t_i) $, we have \begin{equation} \label{int1} {\mathcal E}_i(t)\leq C h_i^{2^\star(s)-2}(t). \end{equation} Using \eqref{jj} and \eqref{int1} in \eqref{mm}, we reach \eqref{pp} with $\tilde k$ large enough. This completes the proof of the inequalities in \eqref{sa}.
\qed {\em Proof of \eqref{sa2} in Case 2 (when $\bar t_i\in (t_i, \log r_0)$).} Recall that $w_i$ is increasing on $[\bar t_i, \log r_0]$ so that using \eqref{r0}, we get that $w_i(t)\leq w_i(\log r_0)<\varepsilon_0/2\quad \text{for all } t\in [\bar t_i, \log r_0].$ Moreover, the function in \eqref{m0} is non-decreasing on $[\bar t_i, \log r_0]$. Hence, we recover \eqref{bbb} for all $ t\in (\bar t_i, \log r_0]$. Since this time $w_i'>0$ on $(\bar t_i, \log r_0]$, instead of \eqref{not1}, we find that \begin{equation} \label{off} w_i'(t) \left[ F_0(w_i(t))-F_0(w_i(\bar t_i))\right]^{-1/2}\geq 1\quad \text{for every } t\in (\bar t_i,\log r_0]. \end{equation} Using $\mathcal D_i(t)$ given by \eqref{not1}, we see that by integrating \eqref{off} over $[\bar t_i,t]$, we obtain that \begin{equation} \label{com} t-\bar t_i\leq \mathcal D_i(t)\quad \text{for all } t\in (\bar t_i, \log r_0]. \end{equation} As in the case $t\in [t_i,\bar t_i)$, we can prove \eqref{pp} for all $t\in (\bar t_i,\log r_0]$, which jointly with \eqref{com}, gives the existence of a constant $c_3>0$ independent of $\varepsilon_0$ and $i$ such that \begin{equation} \label{ccc} t-\bar t_i\leq \frac{2}{n-2}\log \frac{w_i(t)}{w_i(\bar t_i)}+c_3\quad \text{for all } t\in (\bar t_i, \log r_0]. \end{equation} By letting $t=\log r_0$ in \eqref{ccc}, we conclude \eqref{sa2}. This proves the assertions of Step~\ref{step53}. \qed \end{proof} \begin{step} \label{step54} Proof of the Claim in Case~1 of Step~\ref{step53}: $\bar t_i=\log r_0$. \end{step} \begin{proof}[Proof of Step~\ref{step54}] \noindent Suppose that $w_i'<0$ on $ [t_i,\log r_0)$. \noindent The second inequality in \eqref{sa} of Step~\ref{step53} reads as follows \begin{equation} \label{ou} \log r_0-t_i\leq \frac{2}{n-2}\log \frac{\varepsilon_0}{w_i(\log r_0)}+c_2. \end{equation} The first inequality in \eqref{sa} and \eqref{r0} give that $ r_0^{\frac{n-2}{2}}u_0\geq c_1 \varepsilon_0^{\frac{q+2}{2}} \,e^{\frac{\lambda t_i}{2}}$.
By applying $\log$ to this inequality and to \eqref{bb} (with $\bar t_i=\log r_0$), respectively, we find that \begin{equation} \label{ou1} \lambda t_i/2\leq [(n-2)/2] \log r_0+c_4 \left(\log u_0+\log (1/\varepsilon_0) \right) \end{equation} for some constant $c_4>0$ independent of $\varepsilon_0$ and $i$, and \begin{equation} \label{ou2} \log (w_i(\log r_0)) \geq \lambda t_i/2+[(q+2)/2] \log \varepsilon_0 +\log c_1. \end{equation} Using \eqref{ou2} into \eqref{ou}, we deduce that \begin{equation} \label{ou3} \log r_0\leq \left[1-\lambda/(n-2) \right] t_i+c_5\log (1/\varepsilon_0) \end{equation} for a constant $c_5>0$ independent of $\varepsilon_0$ and $i$. We have \begin{equation} \label{theta} \Theta:= 2(q-2^\star+2)/(2^\star-1-q)>0\quad \text{since } q\in (2^\star-2,2^\star-1).\end{equation} Plugging into \eqref{ou3} the estimate on $t_i$ from \eqref{ou1}, we conclude that \begin{equation}\label{con} -\Theta \log r_0\leq c_6 \left[\log u_0 +\log (1/\varepsilon_0)\right], \end{equation} where $c_6$ is a positive constant independent of $\varepsilon_0$ and $i$. Since $\Theta>0$, we can choose $r_0>0$ small so that the left-hand side of \eqref{con} is bigger than twice the right-hand side of \eqref{con}, which contradicts \eqref{con}. This completes Step~\ref{step54}. \qed \end{proof} \begin{step} \label{step55} Proof of the Claim in Case~2 of Step~\ref{step53}: $\bar t_i\in (t_i,\log r_0)$. \end{step} \begin{proof}[Proof of Step~\ref{step55}] We have $w_i'<0$ on $[t_i,\bar t_i)$ and $w_i'>0$ on $(\bar t_i, \log r_0]$. The first inequality of \eqref{sa} yields \begin{equation} \label{eee} 2\log w_i(\bar t_i)\geq (q+2) \log \varepsilon_0+\lambda t_i+2\log c_1.
\end{equation} By adding the second inequality of \eqref{sa} to that of \eqref{sa2}, we get \begin{equation} \label{aaa} \log r_0-t_i\leq \frac{2}{n-2} \left[\log \varepsilon_0 +\log w_i(\log r_0)-2\log w_i(\bar t_i) \right]+ C_1, \end{equation} where $C_1>0$ is a constant independent of $\varepsilon_0$ and $i$. By \eqref{r0}, we have \begin{equation} \label{abc} \log w_i(\log r_0)\leq \log u_0+[(n-2)/2]\,\log r_0. \end{equation} Using \eqref{eee} and \eqref{abc} into \eqref{aaa}, we obtain that \begin{equation} \label{cdcd} \left[ 2\lambda/(n-2)-1\right] t_i\leq C_2 \left[\log (1/\varepsilon_0)+\log u_0\right], \end{equation} where $C_2>0$ is a constant independent of $\varepsilon_0$ and $i$. Since the coefficient of $t_i$ in \eqref{cdcd} equals $2^\star-2-q$, which is negative from the assumption $q>2^\star-2$, using that $t_i<\log r_0$, we infer that \begin{equation} \label{ggg} \left(2^\star-2-q\right) \log r_0\leq C_2 \left[\log (1/\varepsilon_0)+\log u_0\right]. \end{equation} By choosing $r_0>0$ small so that the left-hand side of \eqref{ggg} is greater than twice the right-hand side of \eqref{ggg}, we reach a contradiction. This proves Step~\ref{step55}. \qed \end{proof} \noindent From Steps~\ref{step54} and \ref{step55} above, we conclude the proof of the Claim. \qed \end{proof} \section{(CGS) solutions}\label{Sec5} This section is devoted to the proof of part (iii) of Theorem~\ref{Th1}, restated below. \begin{proposition} \label{CGS} Let $q\in (1,2^\star-1)$. There exists $R_0>0$ such that for every $R\in (0,R_0)$ and any positive singular solution $U$ of \eqref{rnsol}, there exists a unique positive radial (CGS) solution $u$ of \eqref{Eq0} with asymptotic profile $U$ near zero. \end{proposition} \begin{proof} Let $f$ be given by \eqref{fdefi}. Let $U$ be a positive singular solution of \eqref{rnsol}.
Then, by defining $\varphi(t)=e^{-(n-2)t/2}U(e^{-t})$ for $t\in \mathbb{R}$, we see that $\varphi\in C^\infty\left(\mathbb{R}\right)$ is a positive periodic solution of \begin{equation}\label{Th1CGSEq1} \varphi''(t)=f(\varphi(t))\quad \text{ for all } t\in \mathbb{R}.\end{equation} Let $\mathcal P$ denote the set of all positive smooth periodic solutions of \eqref{Th1CGSEq1} to be described in Sect.~\ref{cgs61}. We next show that Proposition~\ref{CGS} is equivalent to Lemma~\ref{lem62}, the proof of which will be given in Sect.~\ref{cgs62}. \begin{lemma} \label{lem62} Let $q\in (1,2^\star-1)$. For every $\varphi\in\mathcal P$, there exists $T_0=T_0(\varphi)>0$ large for which the non-autonomous first order system \begin{equation}\label{Th1CGSEq2} \left\{ \begin{aligned} & (V',W')=(W,f(V)+\mu e^{-\lambda t} V^q)\qquad\text{in }[T_0,\infty),\\ & V>0\ \text{on } [T_0,\infty),\\ \end{aligned} \right. \end{equation} has a unique solution satisfying \begin{equation} \label{til} \left( V(t),W(t)\right)-\left(\varphi(t),\varphi'(t)\right)\to (0,0)\quad \text{as } t\to \infty. \end{equation} \end{lemma} \noindent Indeed, assume that Proposition~\ref{CGS} holds. Then, for every $\varphi\in \mathcal P$, we use the transformation \begin{equation} \label{vite} \varphi(t)=r^{\frac{n-2}{2}}U(r), \quad V(t)=r^{\frac{n-2}{2}} u(r),\quad W(t)=V'(t) \quad \text{with } t=\log (1/r),\end{equation} where $u$ is the unique positive radial (CGS) solution of \eqref{Eq0} satisfying $\lim_{r\to 0^+} u(r)/U(r)=1$. Hence, we obtain that $(V,W)$ is a solution of \eqref{Th1CGSEq2} for any $T_0>\log \left(1/R\right)$ and, moreover, $V\left(t\right)-\varphi\left(t\right)\to0$ as $ t\to \infty$. Using \eqref{Th1CGSEq1}, we find that $W^\prime(t)-\varphi^{\prime\prime}(t)\to 0$ as $t\to \infty$. Hence, $W-\varphi'$ is uniformly continuous on $[T_0,+\infty)$. Since $\lim_{t\to \infty} (V-\varphi)(t)= 0$, a Barbalat-type argument gives that $W\left(t\right)-\varphi'\left(t\right)\to 0$ as $t\to +\infty$.
This shows that Proposition~\ref{CGS} implies Lemma~\ref{lem62}. We now prove the reverse implication. If Lemma~\ref{lem62} holds, then for every positive singular solution $U$ of \eqref{rnsol}, by using \eqref{vite} and Proposition \ref{Pr}, we get a unique positive radial (CGS) solution $u$ of \eqref{Eq0} satisfying $\lim_{r\to 0^+} u(r)/U(r)=1$. \subsection{Description of $\mathcal P$} \label{cgs61} We show that the set $\mathcal P$ of all positive smooth periodic solutions of \eqref{Th1CGSEq1} is given by \eqref{pers}. This is basically standard ODE theory. We state only the essential steps and leave the details to the reader. The function $F_0$ in \eqref{defyz} is increasing on $[0,M_0]$ and decreasing on $[M_0,\infty)$ with $M_0$ given by \eqref{mino}. Thus $F_0$ reaches its maximum $\overline \sigma$ at $M_0$, where \begin{equation} \label{mino}M_0:=\left(\frac{n-2}{2}\right)^{\frac{n-2}{2-s}} \quad \text{and} \quad \overline\sigma:=F_0(M_0)=\frac{2-s}{n-s}\left(\frac{n-2}{2}\right)^{\frac{2\left(n-2\right)}{2-s}}.\end{equation} Note that $M_0$ is the only positive zero of $f$. Let $\varphi\in \mathcal P$. Since $F_0(\xi)=2\int_0^\xi f(t)\,dt$ for all $\xi\ge 0$, from \eqref{Th1CGSEq1}, there exists a constant $\sigma>0$ such that \begin{equation} \label{pohs} \left(\varphi'(t)\right)^2=F_0(\varphi(t))- \sigma \quad \text{for all }t\in \mathbb{R}. \end{equation} In fact, by taking $\mu=0$ in \eqref{red} for $u=U$ with $U$ given by \eqref{vite}, we precisely obtain that $\sigma=2P(U)/\omega_{n-1}>0$, where $P(U)$ is the Pohozaev invariant associated to the positive singular solution $U$ of \eqref{rnsol}. From \eqref{mino} and \eqref{pohs}, we must have $$ 0<\sigma\leq {\overline \sigma}\quad {\rm and }\quad \varphi \equiv M_0 \ \text{ on } \mathbb{R}\ \text{ if } \sigma=\overline \sigma. $$ Let $\sigma\in (0,\overline \sigma)$ be fixed. Let $a_\sigma$ and $b_\sigma$ denote the two positive solutions of $F_0(\xi)=\sigma$ with $0<a_\sigma<M_0<b_\sigma$.
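For the reader's convenience, we recall the one-line computation behind the first integral \eqref{pohs}: since $F_0'=2f$ on $[0,\infty)$, equation \eqref{Th1CGSEq1} yields $$ \frac{d}{dt}\left[ \left(\varphi'(t)\right)^2-F_0(\varphi(t))\right]=2\varphi'(t)\left[\varphi''(t)-f(\varphi(t))\right]=0\quad \text{for all } t\in \mathbb{R}, $$ so that $\left(\varphi'\right)^2-F_0(\varphi)$ is constant on $\mathbb{R}$, equal to $-\sigma$ in the notation of \eqref{pohs}.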
It follows from standard analysis of the ODE \eqref{Th1CGSEq1} that for any $\sigma\in (0,\overline\sigma)$, there is a unique solution $\varphi_\sigma$ to \eqref{Th1CGSEq1} such that $\min_\mathbb{R}\varphi_\sigma=\varphi_\sigma(0)=a_\sigma<b_\sigma= \max_{\mathbb{R}}\varphi_\sigma$. Moreover, $\varphi_\sigma$ is periodic and we let $2t_\sigma>0$ be its principal period. For every $\tau\in \mathbb S^1$, let $\varphi_{\sigma,\tau}$ denote the function whose graph is obtained from that of $\varphi_\sigma$ by a horizontal shift by $(t_\sigma/\pi) \text{Arg}\,\tau$ units, where $\text{Arg} \,\tau$ denotes the principal argument of $\tau$. Note that $\varphi_\sigma=\varphi_{\sigma,\tau_0}$ with $\tau_0=(1,0)\in \mathbb S^1$. It follows that \begin{equation} \label{pers} \mathcal P=\{\varphi_{\overline\sigma}\}\cup \{ \varphi_{\sigma,\tau}\}_{(\sigma,\tau)\in (0,\overline \sigma)\times \mathbb S^1}, \end{equation} where $\varphi_{\overline\sigma}\equiv M_0$ and $\varphi_{\sigma,\tau}(t)=\varphi_\sigma\left(t-(t_\sigma/\pi) \text{Arg}\, \tau\right)$ for all $ t\in \mathbb{R}$. \subsection{Proof of Lemma~\ref{lem62}} \label{cgs62} We first prove Lemma~\ref{lem62} for $\varphi\in \{\varphi_{\sigma,\tau}\}_{(\sigma,\tau) \in (0,\sigma_0)\times \mathbb S^1}$ with $\sigma_0\in (0,\overline\sigma)$ and second for $\varphi\in \{\varphi_{\overline \sigma}\}\cup \{ \varphi_{\sigma,\tau}\}_{(\sigma,\tau)\in [\sigma_0,\overline \sigma )\times \mathbb S^1}$ with $\sigma_0\in (0,\overline \sigma)$ close enough to $\overline \sigma$. \begin{step} \label{step621}For any $\sigma_*\in (0,\sigma_0)$ fixed, there exists $T_0>0$ large such that for every $\varphi=\varphi_{\sigma,\tau}$ with $\left(\sigma,\tau\right)\in (\sigma_*,\sigma_0)\times\mathbb S^1$, the system \eqref{Th1CGSEq2}, subject to \eqref{til}, admits a unique solution $(V_{\sigma,\tau},W_{\sigma,\tau})$.
\end{step} \begin{proof}[Proof of Step~\ref{step621}] For the existence proof, we make a suitable transformation and use the Fixed Point Theorem for a contraction mapping. Let $I_0$ be an open interval such that $\left(\sigma_*,\sigma_0\right)\Subset I_0\Subset\left(0,\overline{\sigma}\right)$. The key here is that for every $\left(\sigma,\tau\right)\in I_0\times\mathbb S^1$, both $\varphi_{\sigma,\tau}$ and $\partial_t\varphi_{\sigma,\tau}=\varphi'_{\sigma,\tau}$ are differentiable with respect to $\sigma$. This does not hold for $\sigma=\overline\sigma$. By differentiating \eqref{pohs} with respect to $\sigma$ and using \eqref{Th1CGSEq1}, we get $$ f(\varphi_{\sigma,\tau}(t)) \frac{d\left[\varphi_{\sigma,\tau}(t)\right]}{d\sigma}-\partial_t\varphi_{\sigma,\tau} (t) \frac{d \left[\partial_t\varphi_{\sigma,\tau}(t)\right]}{d\sigma}=\frac{1}{2}\quad \text{for all } t\in \mathbb{R}. $$ We see that there exists $C_*>0$ such that for every $\left(\sigma,\tau\right)\in I_0\times\mathbb S^1$, we have \begin{equation} \label{dstar} | \partial_t\varphi_{\sigma,\tau}(t)|+\left| \frac{d\left[\varphi_{\sigma,\tau}(t)\right]}{d\sigma}\right| \leq C_*\quad \text{for all } t\in \mathbb{R}. \end{equation} Moreover, there exists $T_0>0$ such that $C_* e^{-\lambda T_0/2}<a_0:=\inf\left\{a_\sigma:\,\sigma\in I_0\right\}$, where $a_\sigma$ is the smallest positive root of $F_0(\xi)=\sigma$. Let $\mathcal X_{T_0}$ denote the set of all continuous functions $(f_1,f_2):[T_0,\infty)\to \mathbb{R}^2$ with $ e^{\lambda t/2} (|f_1(t)|+|f_2(t)|)\leq 1$ for all $ t\geq T_0$. If we define $$\|(f_1,f_2)\|:=\sup_{t\geq T_0}\left\{ e^{\lambda t/2} \left(|f_1(t)|+|f_2(t)|\right)\right\},$$ then $(\mathcal X_{T_0},\|\cdot\|)$ is a complete metric space. 
For $(\widehat V,\widehat W)\in \mathcal X_{T_0}$ and recalling that $\varphi_{\sigma,\tau}''=f(\varphi_{\sigma,\tau})$ on $\mathbb{R}$, we consider the following transformation: \begin{equation} \label{Th1CGSEq5} \begin{bmatrix} V(t)-\varphi_{\sigma,\tau}(t) \\ W(t)-\varphi_{\sigma,\tau}'(t) \end{bmatrix} =\begin{bmatrix} \partial_t\varphi_{\sigma,\tau} (t)& \frac{d\left[\varphi_{\sigma,\tau}(t)\right]}{d\sigma}\\ f(\varphi_{\sigma,\tau}(t)) & \frac{d \left[\partial_t\varphi_{\sigma,\tau}(t)\right]}{d\sigma} \end{bmatrix} \begin{bmatrix} \widehat V(t)\\ \widehat W(t) \end{bmatrix}\quad \text{for } t\in [T_0,\infty). \end{equation} Note that the matrix and its inverse are both uniformly bounded with respect to $\left(\sigma,\tau\right)\in I_0\times\mathbb S^1$. In particular, \eqref{Th1CGSEq5} yields that \begin{equation} \label{vii} V(t):= \varphi_{\sigma,\tau}(t)+\widehat V (t)\partial_t\varphi_{\sigma,\tau}(t)+\widehat W(t) \frac{d\left[\varphi_{\sigma,\tau}(t)\right]}{d\sigma}. \end{equation} From \eqref{dstar}, \eqref{vii} and our choice of $T_0$, we find that \begin{equation} \label{inn} |V(t)-\varphi_{\sigma,\tau}(t)|\leq C_* e^{-\lambda t/2} \| (\widehat V,\widehat W)\| \leq C_* e^{-\lambda T_0/2} <a_0\quad \text{for all } t\geq T_0. \end{equation} For every $t\geq T_0$ and $\left(\sigma,\tau\right)\in I_0\times\mathbb S^1$, we have $\varphi_{\sigma,\tau}(t)\geq a_{\sigma}\geq a_0$ since $a_{\sigma}$ is increasing in $\sigma$. Thus, \eqref{inn} proves that $V$ in \eqref{vii} is positive on $[T_0,\infty)$ for all $(\widehat V,\widehat W)\in \mathcal X_{T_0}$. For simplicity of reference, using $V$ in \eqref{vii} for $(\widehat V,\widehat W)\in \mathcal X_{T_0}$, we define $$ g_{\sigma,\tau,\widehat V,\widehat W}(t):=f(V(t))-f(\varphi_{\sigma,\tau}(t)) -(V(t)-\varphi_{\sigma,\tau}(t)) f'(\varphi_{\sigma,\tau}(t))+\mu e^{-\lambda t} V^q(t). 
$$ By \eqref{inn}, there exist positive constants $C_0$ and $C_1$ such that for all $(\widehat V,\widehat W)\in \mathcal X_{T_0}$, \begin{equation} \label{gb} |g_{\sigma,\tau,\widehat V,\widehat W}(t)|\leq C_0 |V(t)-\varphi_{\sigma,\tau}(t)|^2+\mu e^{-\lambda t} V^q(t)\leq C_1 e^{-\lambda t} \end{equation} for every $t\in [T_0,\infty)$ and $\left(\sigma,\tau\right)\in I_0\times\mathbb S^1$. Remark that \eqref{Th1CGSEq2} is equivalent to the system \begin{equation} \label{Th1CGSEq6} ( \widehat V'(t),\widehat W'(t))= \left( 2g_{\sigma,\tau,\widehat V,\widehat W}(t) \frac{d\left[\varphi_{\sigma,\tau}(t)\right]}{d\sigma},- 2 g_{\sigma,\tau,\widehat V,\widehat W}(t) \,\partial_t\varphi_{\sigma,\tau}(t) \right) \ \text{on } [T_0,\infty) . \end{equation} For every $(\widehat V,\widehat W)\in \mathcal X_{T_0}$ and $t\geq T_0$, we define $$ \Phi_{\sigma,\tau}(\widehat V,\widehat W)(t)= \left(-2\int_t^\infty g_{\sigma,\tau,\widehat V,\widehat W}(y) \frac{d\left[\varphi_{\sigma,\tau}(y)\right]}{d\sigma}\,dy, 2\int_t^\infty g_{\sigma,\tau,\widehat V,\widehat W}(y) \,\partial_t\varphi_{\sigma,\tau}(y)\,dy\right). $$ We next prove the existence of $T_0>0$ large such that $\Phi_{\sigma,\tau}$ maps $\mathcal X_{T_0}$ into $\mathcal X_{T_0}$ and $\Phi_{\sigma,\tau}$ is a contraction mapping on $\mathcal X_{T_0}$ for every $\left(\sigma,\tau\right)\in I_0\times\mathbb S^1$. From \eqref{dstar}, \eqref{gb} and the definition of $(\mathcal X_{T_0},\|\cdot\|)$, we have \begin{equation} \label{phm} \|\Phi_{\sigma,\tau} (\widehat V,\widehat W)\| \leq 2C_* \sup_{t\geq T_0} \left\{ e^{\lambda t/2} \int_t^\infty |g_{\sigma,\tau,\widehat V,\widehat W}(y)|\,dy\right\}\leq \frac{2 C_* C_1}{\lambda} e^{-\lambda T_0/2}. \end{equation} Thus, for large $T_0>0$, we find that $\Phi_{\sigma,\tau} (\widehat V,\widehat W)\in \mathcal X_{T_0}$ for every $(\widehat V,\widehat W)\in \mathcal X_{T_0}$ and all $\left(\sigma,\tau\right)\in I_0\times\mathbb S^1$. 
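In more detail, the last inequality in \eqref{phm} follows from \eqref{gb} by an explicit integration: for every $t\geq T_0$, we have $$ e^{\lambda t/2}\int_t^\infty |g_{\sigma,\tau,\widehat V,\widehat W}(y)|\,dy\leq C_1\, e^{\lambda t/2}\int_t^\infty e^{-\lambda y}\,dy=\frac{C_1}{\lambda}\, e^{-\lambda t/2}\leq \frac{C_1}{\lambda}\, e^{-\lambda T_0/2}. $$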
For every $(\widehat V_1,\widehat W_1)$ and $(\widehat V_2,\widehat W_2)$ in $ \mathcal X_{T_0}$, we have $$ \|(\widehat V_1,\widehat W_1)- (\widehat V_2,\widehat W_2)\|= \sup _{t\geq T_0} \left\{ e^{\frac{\lambda t}{2}} \widehat S(t) \right\}\text{ with } \widehat S(t):=|(\widehat V_1-\widehat V_2)(t)|+ |(\widehat W_1-\widehat W_2)(t)| .$$ Hence, there exist positive constants $C_2$ and $C_3$ such that for every $\left(\sigma,\tau\right)\in I_0\times\mathbb S^1$ $$ \begin{aligned} \| \Phi_{\sigma,\tau}(\widehat V_1,\widehat W_1)-\Phi_{\sigma,\tau}(\widehat V_2,\widehat W_2) \| & \leq 2C_* \sup_{t\geq T_0} \left\{ e^{\frac{\lambda t}{2}}\int_t^\infty \left|g_{\sigma,\tau,\widehat V_1,\widehat W_1}(y)- g_{\sigma,\tau,\widehat V_2,\widehat W_2}(y) \right|\,dy \right\}\\ & \leq C_2 \sup_{t\geq T_0} \left\{ e^{\frac{\lambda t}{2}} \int_t^\infty \left[ (\widehat S(y))^2 +e^{-\lambda y} \widehat S(y)\right]\,dy\right\}\\ & \leq C_3 e^{-\frac{\lambda T_0 }{2}}\| (\widehat V_1,\widehat W_1)- (\widehat V_2,\widehat W_2) \| \end{aligned} $$ for all $(\widehat V_1,\widehat W_1),(\widehat V_2,\widehat W_2)\in \mathcal X_{T_0}$. Thus, for $T_0>0$ large, $\Phi_{\sigma,\tau}$ is a contraction on $ \mathcal X_{T_0}$ for all $\left(\sigma,\tau\right)\in I_0\times\mathbb S^1$. Hence, $\Phi_{\sigma,\tau}$ has a unique fixed point in $\mathcal X_{T_0}$, say $(\widehat V_{\sigma,\tau},\widehat W_{\sigma,\tau})$, which gives a unique solution in $\mathcal X_{T_0}$ of \eqref{Th1CGSEq6} such that $\lim_{t\to \infty}(\widehat V_{\sigma,\tau},\widehat W_{\sigma,\tau})(t)= (0,0)$. By \eqref{dstar} and $(\widehat V,\widehat W)=(\widehat V_{\sigma,\tau},\widehat W_{\sigma,\tau})$ in \eqref{Th1CGSEq5}, we obtain a solution $(V_{\sigma,\tau},W_{\sigma,\tau})$ of \eqref{Th1CGSEq2} satisfying \eqref{til} with $\varphi=\varphi_{\sigma,\tau}$. 
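For completeness, we indicate why \eqref{Th1CGSEq2} and \eqref{Th1CGSEq6} are equivalent, as claimed above. Let $M_{\sigma,\tau}(t)$ denote the matrix in \eqref{Th1CGSEq5}. The identity obtained by differentiating \eqref{pohs} with respect to $\sigma$ shows that its determinant is constant: $$ \det M_{\sigma,\tau}(t)=\partial_t\varphi_{\sigma,\tau}(t)\,\frac{d \left[\partial_t\varphi_{\sigma,\tau}(t)\right]}{d\sigma}-f(\varphi_{\sigma,\tau}(t))\,\frac{d\left[\varphi_{\sigma,\tau}(t)\right]}{d\sigma}=-\frac{1}{2}\quad \text{for all } t\in \mathbb{R}. $$ Moreover, both columns of $M_{\sigma,\tau}$ solve the equation \eqref{Th1CGSEq2} linearized at $(\varphi_{\sigma,\tau},\varphi_{\sigma,\tau}')$ with $\mu=0$, so that substituting \eqref{Th1CGSEq5} into \eqref{Th1CGSEq2} leaves precisely $$ M_{\sigma,\tau}(t)\begin{bmatrix} \widehat V'(t)\\ \widehat W'(t)\end{bmatrix}=\begin{bmatrix} 0\\ g_{\sigma,\tau,\widehat V,\widehat W}(t)\end{bmatrix}, $$ and inverting $M_{\sigma,\tau}$, whose determinant equals $-1/2$, gives \eqref{Th1CGSEq6}.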
Moreover, $(V_{\sigma,\tau},W_{\sigma,\tau})$ is continuous in $\left(\sigma,\tau\right)$ since $\Phi_{\sigma,\tau}$ is continuous in $\left(\sigma,\tau\right)$. \noindent To prove uniqueness, on $\Omega_0:=I_0\times\mathbb S^1\times(0,e^{-T_0})$, we define the functions $H,G:\Omega_0\to\mathbb{R}^3$ by $$H\left(\sigma,\tau,r\right):=\left(V_{\sigma,\tau}\left(t\left(r\right)\right),W_{\sigma,\tau}\left(t\left(r\right)\right),r\right)\hbox{ and }G\left(\sigma,\tau,r\right):=\left(\varphi_{\sigma,\tau}\left(t\left(r\right)\right),\varphi'_{\sigma,\tau}\left(t\left(r\right)\right),r\right)$$ for every $\left(\sigma,\tau,r\right)\in\Omega_0$, where $t\left(r\right):=\log \left(1/r\right)$. From our construction, $H$ is continuous. Since $V_{\sigma,\tau}$ is a solution to a second-order ODE and $W_{\sigma,\tau}=V_{\sigma,\tau}'$, the uniqueness theorem for ODEs yields that $H$ is one-to-one in $\Omega_0$. Clearly, $G$ is also continuous and one-to-one in $\Omega_0$. Thus, by applying the Domain Invariance Theorem, we obtain that $H$ and $G$ are open. Moreover, since the functions $\left\{\varphi_{\sigma,\tau}\right\}_{\left(\sigma,\tau\right)\in I_0\times\mathbb S^1}$ are periodic, we see that $G\left(\Omega_0\right)=\Sigma_0\times(0,e^{-T_0})$ for some domain $\Sigma_0$ in $\mathbb{R}^2$. Let $H_0:\Sigma_0\times(-e^{-T_0},e^{-T_0})\to\mathbb{R}^3$ be the function defined as $$H_0\left(\xi_1,\xi_2,r\right):=\left\{\begin{aligned}&H\left(G^{-1}\left(\xi_1,\xi_2,r\right)\right)&&\text{if }r>0\\&\left(\xi_1,\xi_2,0\right)&&\text{if }r=0\\&J\left(H\left(G^{-1}\left(\xi_1,\xi_2,-r\right)\right)\right)&&\text{if }r<0,\end{aligned}\right.$$ where $J (\xi_1,\xi_2,r):=(\xi_1,\xi_2,-r)$. Since $H$ and $G$ are one-to-one in $\Omega_0$, we obtain that $H_0$ is one-to-one in $\Sigma_0\times(-e^{-T_0},e^{-T_0})$. Moreover, since $H$ and $G^{-1}$ are continuous in $\Omega_0$, we obtain that $H_0$ is continuous in $\Sigma_0\times[(-e^{-T_0},e^{-T_0})\backslash\left\{0\right\}]$. 
As regards the continuity of $H_0$ on $\Sigma_0\times\left\{0\right\}$, for every $\left(\zeta_1,\zeta_2\right)\in\Sigma_0$ and $\left(\xi_1,\xi_2,r\right)\in\Sigma_0\times[(-e^{-T_0},e^{-T_0})\backslash\left\{0\right\}]$, we write \begin{multline}\label{Continuity} \left|H_0\left(\xi_1,\xi_2,r\right)-H_0\left(\zeta_1,\zeta_2,0\right)\right|\le\left|H_0\left(\xi_1,\xi_2,r\right)-\left(\xi_1,\xi_2,r\right)\right|+\left|\left(\xi_1,\xi_2,r\right)-\left(\zeta_1,\zeta_2,0\right)\right|\\ \le\left|\left(V_{\sigma,\tau}\left(t\left(\left|r\right|\right)\right)-\varphi_{\sigma,\tau}\left(t\left(\left|r\right|\right)\right),W_{\sigma,\tau}\left(t\left(\left|r\right|\right)\right)-\varphi'_{\sigma,\tau}\left(t\left(\left|r\right|\right)\right)\right)\right|+\left|\left(\xi_1,\xi_2,r\right)-\left(\zeta_1,\zeta_2,0\right)\right|, \end{multline} where $\left(\sigma,\tau,\left|r\right|\right)=G^{-1}\left(\xi_1,\xi_2,\left|r\right|\right)$. Since $(\widehat V_{\sigma,\tau},\widehat W_{\sigma,\tau})\in \mathcal X_{T_0}$, it follows from \eqref{dstar}, \eqref{Th1CGSEq5} and \eqref{Continuity} that for every $\left(\zeta_1,\zeta_2,0\right)\in\Sigma_0\times\left\{0\right\}$, $H_0\left(\xi_1,\xi_2,r\right)\to H_0\left(\zeta_1,\zeta_2,0\right)$ as $\left(\xi_1,\xi_2,r\right)\to\left(\zeta_1,\zeta_2,0\right)$ i.e., $H_0$ is continuous at $\left(\zeta_1,\zeta_2,0\right)$. By another application of the Domain Invariance Theorem, we obtain that $H_0$ is open. We let $\Sigma_*$ be the domain such that $\Sigma_*\times\left\{r\right\}=G(\left(\sigma_*,\sigma_0\right)\times\mathbb S^1,r )$ for every $r>0$. 
In particular, since $\Sigma_*$ is open, we obtain that for every $\left(\sigma,\tau\right)\in (\sigma_*,\sigma_0)\times\mathbb S^1$, every solution $(X\left(t\right),Y\left(t\right))$ of \eqref{Th1CGSEq2} satisfying \begin{equation} \label{til2} \left( X(t),Y(t)\right)-\left(\varphi_{\sigma,\tau}(t),\varphi_{\sigma,\tau}'(t)\right)\to (0,0)\quad \text{as } t\to \infty \end{equation} must satisfy $(X(t\left(r\right)),Y(t\left(r\right)))\in \Sigma_*$ for small $r>0$. Since $\Sigma_*\times\left\{0\right\}\Subset H_0(\Sigma_0\times(-e^{-T_0},e^{-T_0}))$ and $H_0$ is open, we obtain that there exists $R_0\in(0,e^{-T_0})$ such that $\Sigma_*\times \left[-R_0,R_0\right]\subset H_0(\Sigma_0\times(-e^{-T_0},e^{-T_0}))$. It then follows from the definitions of $H_0$ and $\Sigma_*$ that $\Sigma_*\times\left(0,R_0\right]\subset H(I_0\times\mathbb S^1\times(0,R_0])$. Therefore, for every solution $(X\left(t\right),Y\left(t\right))$ of \eqref{Th1CGSEq2} satisfying \eqref{til2} for some $\left(\sigma,\tau\right)\in (\sigma_*,\sigma_0)\times\mathbb S^1$, we obtain $(X(t\left(r\right)),Y(t\left(r\right)),r)\in H(I_0\times\mathbb S^1\times(0,R_0])$ and so $(X(t\left(r\right)),Y(t\left(r\right)))=(V_{\sigma,\tau}(t\left(r\right)),W_{\sigma,\tau}(t\left(r\right)))$ for small $r>0$. Hence, for every $\varphi=\varphi_{\sigma,\tau}$ with $\left(\sigma,\tau\right)\in(\sigma_*,\sigma_0)\times\mathbb S^1$, we conclude that $(V_{\sigma,\tau},W_{\sigma,\tau})$ is the unique solution of \eqref{Th1CGSEq2} satisfying \eqref{til}. This ends Step~\ref{step621}. \qed \end{proof} \begin{step} \label{step622} Proof of Lemma~\ref{lem62} if $\varphi\in \{ \varphi_{\sigma,\tau}\}_{(\sigma,\tau)\in [\sigma_0,\overline \sigma ]\times {\mathbb S}^1}$ for $\sigma_0\in (0,\overline \sigma)$ close enough to $\overline \sigma$.
\end{step} \begin{proof}[Proof of Step~\ref{step622}] For $(\sigma,\tau)\in (0,\overline\sigma]\times \mathbb{S}^1$, we search for $T_0\in\mathbb{R}$ and $V,W\in C^1([T_0,+\infty))$ such that \eqref{Th1CGSEq2} holds and $(V(t),W(t))-(\varphi_{\sigma,\tau}(t),\varphi_{\sigma,\tau}'(t))\to (0,0)$ as $t\to\infty$. Writing $\tilde{V}=V-\varphi_{\sigma,\tau}$ and $\tilde{W}=W-\varphi_{\sigma,\tau}'$, this is equivalent to finding $T_0\in\mathbb{R}$ and $\tilde{V},\tilde{W}\in C^1([T_0,+\infty))$ such that \begin{equation}\label{syst:tilde} \left\{ \begin{aligned} & (\tilde{V}',\tilde{W}')=(\tilde{W},f(\varphi_{\sigma,\tau}+\tilde{V})-f(\varphi_{\sigma,\tau})+\mu e^{-\lambda t}(\varphi_{\sigma,\tau}+\tilde{V})^q ) \quad \text{in } [T_0,\infty),\\ & (\tilde{V}(t),\tilde{W}(t))\to (0,0)\ \ \text{as } t\to +\infty. \end{aligned}\right. \end{equation} We define $L(\varphi_{\sigma,\tau},\varphi_{\overline\sigma})(t):=f'(\varphi_{\sigma,\tau}(t))-f'(\varphi_{\overline\sigma}(t))$ for $t\in \mathbb{R}$ and $$A:=\left(\begin{array}{cc} 0 & 1\\ f'(M_0) & 0\end{array}\right).$$ Since $\varphi_{\overline\sigma}\equiv M_0$ and $\varphi_{\sigma,\tau}\to\varphi_{\overline\sigma}$ as $\sigma\to\overline\sigma$ uniformly with respect to $\tau\in \mathbb{S}^1$, we get that \begin{equation}\label{lim:H} \lim_{\sigma\to\overline\sigma}\sup_{\tau\in \mathbb{S}^1,\, t\in\mathbb{R}}|L(\varphi_{\sigma,\tau},\varphi_{\overline\sigma})(t)|=0. \end{equation} With a Taylor expansion, we write \begin{eqnarray*} f(\varphi_{\sigma,\tau}+\tilde{V})-f(\varphi_{\sigma,\tau})&=&f'(\varphi_{\overline\sigma})\tilde{V}+L(\varphi_{\sigma,\tau},\varphi_{\overline\sigma})\tilde{V}+Q(\varphi_{\sigma,\tau},\tilde{V}) \end{eqnarray*} with $|Q(\varphi_{\sigma,\tau},\tilde{V})|\leq C |\tilde{V}|^2$. 
Therefore, the system in \eqref{syst:tilde} rewrites as follows \begin{equation}\left(\begin{array}{c} \tilde{V}\\ \tilde{W}\end{array}\right)^\prime=A\left(\begin{array}{c} \tilde{V}\\ \tilde{W}\end{array}\right)+\left(\begin{array}{c} 0\\ L(\varphi_{\sigma,\tau},\varphi_{\overline\sigma})\tilde{V}+Q(\varphi_{\sigma,\tau},\tilde{V})+\mu e^{-\lambda t}(\varphi_{\sigma,\tau}+\tilde{V})^q\end{array}\right). \end{equation} Since $f'(M_0)<0$, we get that $A$ has two conjugate pure imaginary eigenvalues. Therefore, there exists $C>0$ such that $\Vert e^{tA}\Vert+\Vert e^{-tA}\Vert\leq C$ for all $t\in\mathbb{R}$, where $\Vert\cdot\Vert$ is any operator norm on $\mathbb{R}^2$. For all $t\geq T_0$, we define $\tilde{X}(t):=e^{-t A}\left(\begin{array}{c} \tilde{V}(t)\\ \tilde{W}(t)\end{array}\right)$ and $$\Phi_{\varphi_{\sigma,\tau}}(t,\tilde{X}):=e^{-tA}\left(\begin{array}{c} 0\\ L(\varphi_{\sigma,\tau},\varphi_{\overline\sigma})(e^{tA}\tilde{X})_1+Q(\varphi_{\sigma,\tau},(e^{tA}\tilde{X})_1)+\mu e^{-\lambda t}(\varphi_{\sigma,\tau}+(e^{tA}\tilde{X})_1)^q\end{array}\right), $$ where $(e^{tA}\tilde{X})_1$ denotes the first coordinate of $e^{tA}\tilde{X}\in \mathbb{R}^2$. Then getting a solution to \eqref{syst:tilde} amounts to finding a solution $\tilde{X}\in C^1([T_0,+\infty),\mathbb{R}^2)$ to \begin{equation}\label{syst:tilde:2} \tilde{X}'(t)=\Phi_{\varphi_{\sigma,\tau}}(t,\tilde{X}(t))\hbox{ for }t\geq T_0\hbox{ and }\lim_{t\to +\infty}\tilde{X}(t)=0. \end{equation} As in Step \ref{step621}, we find a solution to \eqref{syst:tilde:2} via the Fixed Point Theorem for contracting maps on a complete metric space. Since $Q(\varphi_{\sigma,\tau},\tilde{V})$ is quadratic in $\tilde{V}$, the last two terms of the second coordinate of $\Phi_{\varphi_{\sigma,\tau}}(t,\tilde{X})$ are tackled as in Step \ref{step621}. 
The first term is linear in $\tilde{V}$ and controlled by $L(\varphi_{\sigma,\tau},\varphi_{\overline\sigma})$: with \eqref{lim:H}, this term is contracting for $\sigma$ close enough to $\overline\sigma$. Mimicking the existence proof of Step \ref{step621}, we get the following: \noindent There exist $\varepsilon>0$ and $T_0>0$ such that for every $(\sigma,\tau)\in [\overline\sigma-3\varepsilon,\overline\sigma]\times \mathbb{S}^1$, there exists a solution $(V_{\sigma,\tau},W_{\sigma,\tau})\in C^1([T_0,+\infty),\mathbb{R}^2)$ to \eqref{Th1CGSEq2} such that \eqref{til} holds for $\varphi=\varphi_{\sigma,\tau}$. Moreover, since $(\sigma,\tau)\longmapsto (\varphi_{\sigma,\tau},\varphi_{\sigma,\tau}')$ is continuous on $(0,\overline\sigma]\times \mathbb{S}^1$ (despite the lack of differentiability in $\sigma$ at $\overline\sigma$), the continuous dependence of fixed points on parameters yields that $(\sigma,\tau)\longmapsto (V_{\sigma,\tau},W_{\sigma,\tau})$ is continuous on $[\overline\sigma-3\varepsilon,\overline\sigma]\times \mathbb{S}^1$. Here we have taken the supremum norm on $C^0([T_0,+\infty),\mathbb{R}^2)$: via the fixed point construction, we also get that this holds with a weighted norm. \noindent We only sketch the uniqueness proof. For $\tau_0=(1,0)\in \mathbb{S}^1$ and every $\xi\in B(0,2\varepsilon)\subset \mathbb{R}^2$, we define $$\sigma(\xi):=\overline\sigma-|\xi|\hbox{ and }\left\{\tau(\xi):=\xi/|\xi|\hbox{ if }\xi\neq 0\hbox{ and }\tau(0)=\tau_0\right\}. $$ Due to the uniqueness of the solution for $\sigma=\overline{\sigma}$, one can check the continuity of the mappings $\xi\longmapsto (\varphi_{\sigma(\xi),\tau(\xi)},\varphi_{\sigma(\xi),\tau(\xi)}')$ and $\xi\longmapsto (V_{\sigma(\xi),\tau(\xi)},W_{\sigma(\xi),\tau(\xi)})$ on $B(0,2\varepsilon)$.
We introduce the domain $\Omega_0:=B(0,2\varepsilon)\times(0,e^{-T_0})$ and the functions $H,G:\Omega_0\to\mathbb{R}^3$ defined as $$H\left(\xi,r\right):=\left(V_{\sigma,\tau}\left(t\right),W_{\sigma,\tau}\left(t\right),r\right)\hbox{ and }G\left(\xi,r\right):=\left(\varphi_{\sigma,\tau}\left(t\right),\varphi'_{\sigma,\tau}\left(t\right),r\right)$$ for every $\left(\xi,r\right)\in\Omega_0$, where $t=t\left(r\right):=\log \left(1/r\right)$, $\sigma=\sigma(\xi)$ and $\tau=\tau(\xi)$. Arguing as in Step \ref{step621} and with some extra care for the case $\xi=0$, we get the uniqueness of the solution of \eqref{Th1CGSEq2} satisfying \eqref{til} for $\varphi=\varphi_{\sigma,\tau}$. This ends the proof of Step~\ref{step622}. \qed \end{proof} This completes the proof of Lemma~\ref{lem62} and thus of Proposition~\ref{CGS}. \qed \end{proof} \section{Appendix} \label{appen} Here, we establish Theorem~\ref{71}, a critical result that was used in the proof of Lemma~\ref{exist:sol}. The proof of Theorem~\ref{71} is strongly inspired by Kelley's paper \cite{Kel}. We denote by $B_\delta(0)\subset \mathbb{R}^3$ the ball centered at $0$ with radius $\delta>0$. For any $r_0>0$, we set $D_{r_0}:=[0,r_0]\times [-r_0,r_0]$. \begin{theorem} \label{71} Let $h_j\in C^1(B_\delta(0))$ for some $\delta>0$ with $j=1,2,3$. Suppose there exist constants $C_1>0$ and $p>1$ such that for all $\vec \xi=(\xi_1,\xi_2,\xi_3)\in B_\delta(0)$, we have \begin{equation} \label{hy}\left\{ \begin{aligned} &\sum_{j=1}^3 |h_j(\vec \xi)|\leq C_1\sum_{j=1}^3 \xi_j^2\ \text{and}\ \ \sum_{j=1}^3 |\nabla h_j(\vec \xi)|\leq C_1\sum_{j=1}^3 |\xi_j|,\\ & h_2(\vec \xi)\leq -C_1|\xi_2|^p\ \text{and } h_2(\xi_1,0,\xi_3)=0.
\end{aligned} \right.\end{equation} For fixed $a>0$ and $c<0$, we consider the system \begin{equation}\label{syst:th:bis} \left\{\begin{aligned} &\vec {\mathcal Z}'(t)=(a{\mathcal Z}_1(t)+h_1(\vec {\mathcal Z}(t)),h_2(\vec {\mathcal Z}(t)),c{\mathcal Z}_3(t)+ h_3(\vec {\mathcal Z}(t)))\quad \text{for }t\geq 0,\\ & \vec {\mathcal Z}(0)=(x_0,y_0,z_0). \end{aligned} \right. \end{equation} Then there exist $r_0\in (0,\delta/2)$ and a Lipschitz function $w: D_{r_0}\to [-r_0,r_0]$ such that for all $(y_0,z_0)\in D_{r_0}$ and $x_0=w(y_0,z_0)$, the initial value system \eqref{syst:th:bis} has a solution $\vec {\mathcal Z}$ on $[ 0,\infty)$ and \begin{equation} \label{mn}\lim_{t\to +\infty} \vec {\mathcal Z}(t)=(0,0,0).\end{equation} Moreover, we have that the parametrized surface $({\mathcal Z}_2,{\mathcal Z}_3)\longmapsto (w({\mathcal Z}_2,{\mathcal Z}_3),{\mathcal Z}_2,{\mathcal Z}_3)$ is stable in the sense that ${\mathcal Z_1}(t)=w({\mathcal Z}_2(t),{\mathcal Z}_3(t))$ for all $t\geq 0$. \end{theorem} \begin{proof} Since $h_j\in C^1(B_\delta(0))$ for $1\leq j\leq 3$, the Cauchy--Lipschitz theory applies to the system. For $r_0\in (0,\delta/2)$ and $C_2>0$, we define $\mathcal X$ as the set of all continuous functions $w:D_{r_0}\to [-r_0,r_0]$ such that $w(0,0)=0$ and $w$ is $C_2$-Lipschitz. Note that $(\mathcal X,\Vert\cdot\Vert_\infty)$ is a complete metric space. For any $w\in \mathcal X$, we consider the system \begin{equation} \label{syst:w} \tag{$S_w$} \left\{ \begin{aligned} & (y',z')= \left( h_2 (w(y,z),y,z), cz+h_3(w(y,z),y,z) \right) \quad \text{on } [0,\infty),\\ & (y(0),z(0))=(y_0,z_0). \end{aligned} \right. \end{equation} We now divide the proof of Theorem~\ref{71} in five Steps. \begin{step} \label{step71} Let $r_0\in (0,\delta/2)$ be such that $4C_1(1+C_2^2) \,r_0\leq |c|$. If $(y_0,z_0)\in D_{r_0}$, then the flow $\Phi_t^w(y_0,z_0)$ associated to \eqref{syst:w} is defined for all $t\in [0,+\infty)$. 
If we set \begin{equation} \label{yzx} (y(t),z(t)):=\Phi_t^w(y_0,z_0)\quad \text{for all }t\in [0,\infty),\end{equation} then $ 0\leq y(t)\leq y_0$ and $ |z(t)|\leq \max\{y_0,|z_0|\}$ on $[0,\infty)$. Moreover, we have \begin{equation} \label{ac} \lim_{t\to \infty} (y(t),z(t))=(0,0).\end{equation} \end{step} \begin{proof}[Proof of Step~\ref{step71}] Let $(y_0,z_0)\in D_{r_0}$ be arbitrary. Since the Cauchy--Lipschitz theory applies, the initial value problem \eqref{syst:w} has a unique solution $(y,z) $ on an interval $[0,b)$ with $b>0$. We prove the following: \begin{itemize} \item[(i)] $y\equiv 0$ if $y_0=0$ and $0<y(t)\leq y_0$ for all $t\in [0,b)$ if $y_0\in (0,r_0]$; \item[(ii)] $|z(t)|\leq \max\{y_0,|z_0|\}$ for every $t\in [0,b)$. \end{itemize} \begin{proof}[Proof of (i)] We write $h_2 (w(y(t),z(t)),y(t),z(t))=\widehat h_2(t,y(t))$ for $t\in [0,b)$, where $\widehat h_2(t,y)$ is continuous in $t\in [0,b)$ and Lipschitz with respect to $y\in [0,r_0]$. The assumption \eqref{hy} yields $\widehat h_2(\cdot,0)=0$ on $[0,b)$ and $\widehat h_2(t,y(t))\leq 0$ for all $t\in [0,b)$. The claim of (i) holds since $y'(t)\leq 0$ on $[0,b)$ and $y$ is the unique solution of $y'(t)=\widehat h_2(t,y(t))$ for $t\in [0,b)$, subject to $y(0)=y_0$. \qed \end{proof} \begin{proof}[Proof of (ii)] Since $c<0$, using the system $(S_w)$, we find that \begin{equation}\label{eq:zdeux} (z^2)'=2\left(-|c|z^2+z \,h_3(w(y,z),y,z)\right)\quad \text{on } [0,b). \end{equation} Since $w$ is a $C_2$-Lipschitz function, using the hypothesis on $h_3$ in \eqref{hy}, we have \begin{equation}\label{control:h} \left| z\,h_3(w(y,z),y,z)\right|\leq C_1 |z| \left[w^2(y,z)+y^2+z^2\right]\leq C_1\left(1+C_2^2\right)|z|\left(y^2+z^2\right). \end{equation} Using $|z|\leq r_0$ and the choice of $r_0>0$, from \eqref{control:h} we obtain that \begin{equation} \label{nuc} \left| z\,h_3(w(y,z),y,z)\right|\leq |c|\max \{y^2,z^2\}/2\quad \text{on } [0,b). 
\end{equation} If $y_0=0$, then $y\equiv 0$ on $[0,b)$ by (i). From \eqref{nuc} and \eqref{eq:zdeux}, we have $(z^2)'\leq 0$ on $[0,b)$, which yields $|z(t)|\leq |z_0|$ for all $t\in [0,b)$, proving (ii) if $y_0=0$. We now prove (ii) when $y_0>0$. If there exists $t_0\in [0,b)$ such that $|z(t_0)|=y_0$, then using (i) and \eqref{nuc}, we find that $ \left| z\,h_3(w(y,z),y,z)\right|(t_0) <|c|z^2(t_0)$. Thus, \eqref{eq:zdeux} yields that $(z^2)'(t_0)<0$. This means that $|z(t)|=y_0$ has at most one solution in $[0,b)$. Hence, one of the following holds: \begin{itemize} \item[(a)] $|z(t)|\leq y_0$ for all $t\in [0,b)$, which immediately yields (ii); \item[(b)] $|z(t)|\geq y_0$ for all $t\in [0,b)$; \item[(c)] For some $t_0\in (0,b)$, we have $|z|> y_0$ on $[0,t_0)$ and $|z|< y_0$ on $(t_0,b)$. \end{itemize} Using \eqref{nuc} in \eqref{eq:zdeux}, we get $(z^2)'<0$ on $[0,b)$ in case (b) and on $[0,t_0)$ in case (c), since $\max\{y^2,z^2\}=z^2$ there. Thus, we find that $|z|\leq |z_0|$ on $[0,b)$ in case (b) and on $[0,t_0)$ in case (c). This proves (ii) when $y_0>0$. \qed \end{proof} By (i), (ii) and the blow-up alternative for solutions of ODEs, the flow $\Phi_t^w(y_0,z_0)$ associated with \eqref{syst:w} is defined for all $t\in [0,+\infty)$. Let $(y(t),z(t))$ be as in \eqref{yzx}. \begin{proof}[Proof of \eqref{ac}] If $y_0=0$, then $y\equiv 0$ on $[0,\infty)$. If $y_0>0$, then $y>0$ on $[0,\infty)$. The hypothesis on $h_2$ in \eqref{hy} implies that $(y^{1-p})'(t)\geq (p-1)C_1$ for all $t\geq 0$. By integration, we get that $\lim_{t\to +\infty}y(t)= 0$. Hence, for every $\varepsilon>0$, there exists $t_\varepsilon>0$ large such that $0\leq y\leq \varepsilon$ on $[t_\varepsilon,\infty)$. To prove that $\lim_{t\to +\infty}z(t)= 0$, we show that there exists $\widetilde t_\varepsilon\geq t_\varepsilon$ such that $|z(t)|\leq \varepsilon$ for all $t\geq \widetilde t_\varepsilon$.
Indeed, by an argument similar to the proof of (ii), it can be shown that $|z(t)|=\varepsilon$ has at most one solution in $[t_\varepsilon,\infty)$. The case $|z|\geq \varepsilon$ on $[t_\varepsilon,\infty)$ cannot occur: otherwise, from \eqref{eq:zdeux} and \eqref{nuc}, we would have $(z^2)'\leq -|c| z^2\leq -|c|\varepsilon^2$ on $[t_\varepsilon,\infty)$, leading to a contradiction. Hence, either $|z|\leq \varepsilon$ on $[t_\varepsilon,\infty)$ or there exists $\widetilde t_\varepsilon\in (t_\varepsilon,\infty)$ such that $|z|>\varepsilon$ on $[t_\varepsilon,\widetilde t_\varepsilon)$ and $|z|<\varepsilon$ on $(\widetilde t_\varepsilon,\infty)$. In either case, the conclusion $\lim_{t\to +\infty}z(t)= 0$ follows. This proves \eqref{ac}. \qed \end{proof} The proof of Step~\ref{step71} is now complete. \qed \end{proof} \begin{step} \label{step72} For any $\rho>0$, let $r_0\in (0,\delta/2)$ be as in Step~\ref{step71} and satisfy $3C_1(3+2C_2) r_0<\rho$. Then for any $w_j\in \mathcal X$ and $\big(y_{0}^{(j)},z_{0}^{(j)}\big)\in D_{r_0}$ with $j=1,2$, we have \begin{equation} \left| (y_1,z_1)-(y_2,z_2) \right|(t)\leq e^{\rho t}\Big(\| w_1-w_2\|_\infty+ \big|(y_{0}^{(1)},z_{0}^{(1)})-(y_{0}^{(2)},z_{0}^{(2)})\big|\Big) \end{equation} for all $t\in [0,\infty)$, where we denote $(y_j(t),z_j(t)):=\Phi_t^{w_j}(y_{0}^{(j)},z_{0}^{(j)})$ for $j=1,2$. \end{step} \begin{proof}[Proof of Step~\ref{step72}] We denote $Y:=y_1-y_2$ and $Z:=z_1-z_2$. It suffices to prove that \begin{equation} \label{suf} e^{-2\rho t} (Y^2+Z^2)(t) \leq \|w_1-w_2\|_\infty^2+(Y^2+Z^2)(0)\quad \text{for all } t\geq 0. \end{equation} When clear, we drop the dependence on $t$ in the notation. For $j=1,2$, we set \begin{equation} \label{Ldef} P_j:=(w_j(y_j,z_j),y_j,z_j) \ \text{and } L:= Y\left[h_2(P_1)-h_2(P_2)\right]+Z\left[h_3(P_1)-h_3(P_2)\right].
\end{equation} By a simple calculation, we see that \begin{equation} \label{hop} \left(e^{-2\rho t}(Y^2+Z^2)\right)^\prime=2 e^{-2\rho t}\left[ -\rho \left(Y^2+Z^2\right)-\left|c\right|Z^2+L\right]. \end{equation} We show that $L$ in \eqref{Ldef} satisfies \begin{equation} \label{esti} |L|\leq 3C_1 r_0\left[(3+2C_2) (Y^2+Z^2)+ \|w_1-w_2\|^2_\infty\right]. \end{equation} \begin{proof}[Proof of \eqref{esti}] Since $\max\{|y_j|,|z_j|, |w_j(y_j,z_j)|\}\leq r_0$ for $j=1,2$, by the assumption on $|\nabla h_2|$ and $|\nabla h_3|$ in \eqref{hy}, we infer that \begin{equation} \label{lin} \sup_{\xi\in [0,1]}|(\nabla \phi)(\xi P_1+(1-\xi)P_2)| \leq 3C_1 r_0 \end{equation} with $\phi=h_2$ and $\phi=h_3$. Therefore, we get that \begin{equation} \label{ha1} |L|\leq 3C_1r_0 |P_1-P_2| \left(|Y|+|Z|\right)\leq 3\sqrt{2} C_1 r_0 |P_1-P_2| \sqrt{Y^2+Z^2}.\end{equation} Set $a_1=\|w_1-w_2\|_\infty $ and $a_2=\sqrt{Y^2+Z^2}$. Using \eqref{Ldef} and that $w_1$ is $C_2$-Lipschitz, we get \begin{equation} \label{ha2} \begin{aligned}|P_1-P_2|& \leq |w_1(y_1,z_1)-w_2(y_2,z_2)|+a_2 \\ &\leq |w_1(y_1,z_1)-w_1(y_2,z_2)|+a_1+ a_2\leq (1+C_2)a_2+a_1. \end{aligned}\end{equation} Plugging \eqref{ha2} into \eqref{ha1}, then using the inequality $2a_1 a_2\leq a_1^2+a_2^2$, we conclude \eqref{esti}. \qed \end{proof} Using \eqref{esti} in \eqref{hop}, we get that \begin{equation} \label{mio} \left(e^{-2\rho t}\left(Y^2+Z^2\right)(t)\right)^\prime \leq 2 e^{-2\rho t} \left[ \alpha_0 (Y^2+Z^2)(t)+ 3C_1r_0 \| w_1-w_2\|_\infty^2 \right], \end{equation} where $\alpha_0:= 3C_1(3+2C_2) r_0-\rho$ is negative by our choice of $r_0$. Hence, from \eqref{mio}, for every $t\in (0,\infty)$, we deduce that \begin{equation} \label{yoo} \left(e^{-2\rho t}\left(Y^2+Z^2\right)(t)\right)^\prime\leq 2\rho\, e^{-2\rho t} \| w_1-w_2\|_\infty^2. \end{equation} By integrating \eqref{yoo}, we obtain \eqref{suf}, which completes Step \ref{step72}.
\qed \end{proof} \begin{step} \label{step73} Let $r_0\in (0,\delta/2)$ be as in Step~\ref{step72} with $\rho=a/2$ and $6C_1(1+C_2)r_0<a C_2$. Then, the map $T: \mathcal X\to \mathcal X$ is well-defined, where for every $w\in \mathcal X$, we put $$Tw\left(y_0,z_0\right):=-\int_0^\infty e^{-a t}h_1\left(w\left(\Phi_t^w(y_0,z_0)\right), \Phi_t^w(y_0,z_0)\right)\, dt\ \ \text{for all }(y_0,z_0)\in D_{r_0}. $$ \end{step} \begin{proof}[Proof of Step~\ref{step73}] For all $(y_0,z_0)\in D_{r_0}$, we define $(y(t),z(t)):=\Phi_t^w(y_0,z_0)$. We now observe that for all $t\geq 0$, $h_1(w(y(t),z(t)),y(t),z(t))$ stays bounded since $\max\{|w(y(t),z(t))|,|y(t)|,|z(t)|\}\leq r_0$. Then, $Tw\left(y_0,z_0\right)$ is well-defined since $a>0$. From $w(0,0)=0$, we have $\Phi_t^w(0,0)=(0,0)$ for all $t\geq 0$, which yields $Tw \left(0,0\right)=0$. To prove that $Tw\in \mathcal X$, it remains to show that $Tw$ ranges in $[-r_0, r_0]$ and $Tw$ is $C_2$-Lipschitz. Indeed, using \eqref{hy}, for every $\left(y_0,z_0\right)\in D_{r_0}$, we find that \begin{equation} |Tw\left(y_0,z_0\right)|\leq C_1 \int_0^\infty e^{-at}(w^2(y,z)+y^2+z^2)\, dt \leq \frac{3C_1r_0^2}{a}. \end{equation} Since $3C_1r_0<a$, we have $|Tw\left(y_0,z_0\right)|\leq r_0$ so that $Tw$ ranges in $[-r_0,r_0]$. We prove that $Tw$ is $C_2$-Lipschitz. We fix $(y_{0}^{(j)},z_{0}^{(j)})\in D_{r_0}$ for $j=1,2$, then define $(y_j(t),z_j(t)):=\Phi_t^{w}\big(y_{0}^{(j)},z_{0}^{(j)}\big)$ and $P_j(t):=(w(y_j,z_j),y_j,z_j)(t)$ for all $t\geq 0$. By the definition of $Tw$, we see that \begin{equation} \label{hp} \big| Tw\big(y_{0}^{(1)},z_{0}^{(1)}\big)-Tw\big(y_{0}^{(2)},z_{0}^{(2)}\big)\big| \leq \int_0^\infty e^{-at}\left|h_1(P_1)-h_1(P_2)\right|\, dt. \end{equation} Since \eqref{lin} holds for $\phi=h_1$, using $w_1=w_2=w$ in \eqref{ha2}, we get that $$ \left|h_1(P_1)-h_1(P_2)\right| \leq 3 C_1(1+C_2)r_0\left| (y_1,z_1)-(y_2,z_2) \right|.
$$ Using that $6C_1(1+C_2)r_0<aC_2$ and taking $\rho=a/2$ in Step~\ref{step72}, we arrive at \begin{equation} \label{hp2} \left|h_1(P_1)-h_1(P_2)\right| \leq \frac{aC_2}{2} e^{\frac{a t}{2}} \big| \big(y_0^{(1)},z_0^{(1)}\big)-\big(y_0^{(2)},z_0^{(2)}\big)\big|. \end{equation} From \eqref{hp} and \eqref{hp2}, we see that $Tw$ is $C_2$-Lipschitz, completing Step~\ref{step73}. \qed \end{proof} \begin{step} \label{step74} If, in addition to the conditions of Step~\ref{step73}, we have $12C_1 r_0(2+C_2)<a$, then $T$ is a contraction on $\mathcal X$. \end{step} \begin{proof}[Proof of Step~\ref{step74}] For $w_1,w_2\in \mathcal X$ and $(y_0,z_0)\in D_{r_0}$, we define $$ (y_j(t),z_j(t)):=\Phi_t^{w_j}(y_0,z_0) \quad\text{and} \quad P_j(t):=(w_j(y_j(t),z_j(t)),y_j(t),z_j(t)) $$ for all $t\geq 0$ and $j=1,2$. As in Step~\ref{step72} with $\rho=a/2$, we obtain that \begin{equation} \label{tu1} \begin{aligned} |h_1(P_1)-h_1(P_2)| &\leq 3C_1 r_0 |P_1-P_2| \leq 3C_1 r_0\left[(1+C_2) |(y_1,z_1)-(y_2,z_2)|+\|w_1-w_2\|_\infty\right]\\ &\leq 3C_1 r_0 (2+C_2) e^{\frac{at}{2}}\|w_1-w_2\|_\infty . \end{aligned} \end{equation} Then, using \eqref{tu1} and our choice of $r_0$, we see that $$ \left| (Tw_1-Tw_2)(y_0,z_0)\right|\leq \int_0^\infty e^{-at} |h_1(P_1)-h_1(P_2)|\,dt<\frac{1}{2}\|w_1-w_2\|_\infty. $$ Therefore, $T$ is $1/2$-Lipschitz, so it is a contraction mapping. This ends Step \ref{step74}. \qed \end{proof} \begin{step} \label{step75} Let $r_0\in (0,\delta/2)$ be as in Step~\ref{step74}. Then, there exists $w\in \mathcal X$ such that for every $(y_0,z_0)\in D_{r_0}$ and $x_0=w(y_0,z_0)$, the initial value system \eqref{syst:th:bis} has a solution $\vec {\mathcal Z}=(x,y,z)$ defined on $[0,\infty)$ and satisfying \eqref{mn}. \end{step} \begin{proof}[Proof of Step~\ref{step75}] The choice of $r_0$ in Step~\ref{step74} depends only on $a,|c|, C_1, C_2$.
Picard's fixed point theorem yields the existence of $w\in \mathcal X$ such that $Tw=w$, where $T$ is given by Step~\ref{step73}, that is, \begin{equation}\label{eq:w} w(y,z)=-\int_0^\infty e^{-a\xi}h_1\left(w\left(\Phi_\xi^w(y,z)\right), \Phi_\xi ^w(y,z)\right)\, d\xi \end{equation} for all $(y,z)\in D_{r_0}$. We fix $(y_0,z_0)\in D_{r_0}$ arbitrarily. We show that $\vec {\mathcal Z}(t)=(x(t),y(t),z(t))$ is a solution to the initial value system \eqref{syst:th:bis}, subject to \eqref{mn}, where we define \begin{equation} \label{defxyz} (y(t),z(t)):=\Phi_t^w(y_0,z_0)\quad \text{and}\quad x(t):=w(y(t),z(t))\ \text{for all } t\geq 0.\end{equation} Indeed, Step~\ref{step71} yields that $(y',z')=(h_2(x,y,z),cz+h_3(x,y,z))$ on $ [0,\infty)$. In view of $w(0,0)=0$ and $\lim_{t\to +\infty}(y(t),z(t))= (0,0)$, we get $\lim_{t\to +\infty} x(t)=0$, proving \eqref{mn}. Since $$\Phi_\xi^w(y(t),z(t))=\Phi_\xi ^w\circ\Phi_t ^w(y_0,z_0)=\Phi_{t+\xi}^w(y_0,z_0)=(y(t+\xi),z(t+\xi))$$ for all $\xi,t\geq 0$, from \eqref{eq:w} and \eqref{defxyz}, we obtain that \begin{equation}\label{exp:x} \begin{aligned} x(t)&= -\int_0^\infty e^{-a\xi}h_1\left(w\left(\Phi_\xi ^w(y(t),z(t))\right), \Phi_\xi ^w(y(t),z(t))\right)\, d\xi\\ &= -\int_0^\infty e^{-a\xi}h_1\left(w\left(y(t+\xi),z(t+\xi)\right), y(t+\xi),z(t+\xi)\right)\, d\xi\\ &= -e^{at}\int_t^\infty e^{-a \theta}h_1\left(w\left(y(\theta),z(\theta)\right), y(\theta),z(\theta)\right)\, d\theta \end{aligned}\end{equation} for all $t\geq 0$. This ends Step~\ref{step75} since $ x\in C^1[0,+\infty)$ and $x'=ax+h_1(x,y,z)$ on $[0,\infty)$. \qed \end{proof} Using the definition of $\mathcal X$ and Step~\ref{step75}, we finish the proof of Theorem~\ref{71}. \qed \end{proof} \noindent{\bf Acknowledgements:} Work on this project started during the first author's visits to McGill University (July--August 2017) and the University of Lorraine (September--October 2017).
She is very grateful for the support and hospitality of her co-authors while carrying out research at their institutions. \end{document}
\begin{document} \title{A quantum neural network computes its own relative phase } \author{\IEEEauthorblockN{Elizabeth C. Behrman} \IEEEauthorblockA{Department of Mathematics, Statistics, and Physics\\ Wichita State University\\ Wichita, Kansas 67260--0033\\ email: [email protected]} \and \IEEEauthorblockN{James E. Steck} \IEEEauthorblockA{Department of Aerospace Engineering\\ Wichita State University\\ Wichita, Kansas 67260--0044\\ email: [email protected]}} \maketitle \begin{abstract} Complete characterization of the state of a quantum system made up of subsystems requires determination of relative phase, because of interference effects between the subsystems. For a system of qubits used as a quantum computer this is especially vital, because the entanglement, which is the basis for the quantum advantage in computing, depends intricately on phase. We present here a first step towards that determination, in which we use a two-qubit quantum system as a quantum neural network, which is trained to compute and output its own relative phase. \end{abstract} \IEEEpeerreviewmaketitle \section{Introduction} Entanglement is the root of the power of quantum computers\cite{genentref}; thus, the production and measurement of entanglement are essential if we are ever to be successful in making full use of the potential of quantum computing. This turns out to be a very hard problem. In previous work, we have proposed a method to find an entanglement witness for a general, unknown, quantum input state, using dynamic learning to find parameters for the quantum system that make it calculate its own entanglement. We called this a quantum neural network (QNN) \cite{eb08}. The basic idea is that contained in the system itself is the information about its entanglement: If we find, through learning, an appropriate set of parameters for the system, then it can extract the entanglement of its initial state as an output measure of the state at some final time. 
We imagine that our quantum system evolves under some Hamiltonian containing adjustable parameters; we find that set of parameters such that our designated output function (the qubit-qubit correlation function) is mapped onto the correct values for the entanglement of the initial state. Our entanglement witness gave good results for large classes of input states, including both pure and mixed states. Unlike the case with any other witness (see, {\it e.g.}, \cite{toth}), the input state did not need to be ``close'' to any particular state. We have also \cite{nabic,eb12} extended our work to the 3-, 4-, and 5-qubit cases, and found that as the size of the system grows, the amount of additional training necessary diminishes; thus, our method may be very practical for use on large computational systems. Figure 1 shows some representative results, in which we compare our entanglement witness to the entanglement of formation \cite{wootters} for 50,000 randomly generated states for the 2-qubit system. These are pure states with real coefficients on the usual (``charge'') basis. The agreement is excellent. Unfortunately, these results do not carry over to the more general case of complex coefficients. See Figure 2, which shows a similar set but with complex coefficients. Indeed, as we showed \cite{eb08}, it is impossible to find any single measurable which will not exhibit anomalous oscillation; all witnesses do so. But is there a way to get around this difficulty? \begin{figure} \caption{QNN entanglement witness for 50,000 randomly generated pure states with real coefficients, compared with the entanglement of formation.} \label{realfig} \end{figure} \begin{figure} \caption{As in Figure \ref{realfig}, but for states with complex coefficients.} \label{cxcoeffig} \end{figure} There are, of course, ways to determine more information about the state; if we know the entire density matrix we can, at least for the 2-qubit system, simply calculate the entanglement of formation (as we ourselves did to generate the comparison data for Figure \ref{realfig}).
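For reference, the 2-qubit entanglement of formation used for comparison above has a closed form in terms of the Wootters concurrence \cite{wootters}. The following NumPy sketch illustrates that calculation; the helper names are ours, and this is not the implementation used to generate the figures:

```python
import numpy as np

def concurrence(rho):
    """Wootters concurrence of a 2-qubit density matrix."""
    sy = np.array([[0.0, -1.0j], [1.0j, 0.0]])
    flip = np.kron(sy, sy)
    # Spin-flipped state: rho_tilde = (sy x sy) rho* (sy x sy)
    rho_tilde = flip @ rho.conj() @ flip
    # Square roots of the eigenvalues of rho * rho_tilde, in decreasing order
    lam = np.sort(np.sqrt(np.abs(np.linalg.eigvals(rho @ rho_tilde))))[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

def entanglement_of_formation(rho):
    """EoF = h((1 + sqrt(1 - C^2))/2), with h the binary entropy."""
    c = concurrence(rho)
    x = (1.0 + np.sqrt(max(0.0, 1.0 - c * c))) / 2.0
    if x >= 1.0:
        return 0.0
    return -x * np.log2(x) - (1.0 - x) * np.log2(1.0 - x)

# Maximally entangled Bell state (|00> + |11>)/sqrt(2)
psi = np.array([1.0, 0.0, 0.0, 1.0], dtype=complex) / np.sqrt(2.0)
rho_bell = np.outer(psi, psi.conj())
```

For this Bell state both the concurrence and the entanglement of formation equal 1, while for a product state such as $|00>$ both vanish.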
For the 2-qubit system this may not be unreasonable. But for the eventual goal of a large computational system, this can become quite daunting, since the number of parameters necessary goes like $2^{2N}$, where $N$ is the number of qubits. Perhaps dynamic learning can allow us to find a shortcut. This paper is a first step in that direction. If we knew or could determine the relative phases $\{\theta\}$ of the basis states, we could apply the (unitary) phase shift operator $e^{-i \theta}$ to each relevant part of our input state. Since the coefficients would then be real, we could then perform our entanglement witness measurement and achieve results like those in Figure \ref{realfig}. In 2005, Yang and Han \cite{han} found an algorithm for determining the relative phase between parts of the n-qubit Bell (or GHZ) state, $ \sqrt{p}|0...0> + e^{i \phi} \sqrt{1-p}|1...1>$. They showed that performing a Hadamard transform on each qubit puts the system in a state in which the probability of finding an even number of qubits in the state $|1>$ is given by $p_{even} = \frac{1}{2} + \sqrt{p(1-p)} \cos{\phi}$. Given a large number of copies of the state, it is then possible to determine both $p$ and $\phi$. Here we show that, with our QNN, we can extend this result, for the 2-qubit system, in two ways. First, we show that we can also find the phase offset in an EPR state, $ a_{01}|01> + e^{i \theta}a_{10}|10>$.
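The Yang--Han relation is easy to verify directly in the 2-qubit case; a minimal numerical sketch (the function name is ours):

```python
import numpy as np

H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)  # Hadamard gate

def p_even(p, phi):
    """Probability of finding an even number of qubits in |1> after a
    Hadamard on each qubit of sqrt(p)|00> + e^{i phi} sqrt(1-p)|11>."""
    psi = np.zeros(4, dtype=complex)
    psi[0] = np.sqrt(p)                           # coefficient of |00>
    psi[3] = np.exp(1j * phi) * np.sqrt(1.0 - p)  # coefficient of |11>
    out = np.kron(H, H) @ psi
    # Even number of 1s: |00> (index 0) and |11> (index 3)
    return abs(out[0]) ** 2 + abs(out[3]) ** 2

p, phi = 0.3, 0.8
predicted = 0.5 + np.sqrt(p * (1.0 - p)) * np.cos(phi)
```

The computed parity probability matches the closed form $\frac{1}{2} + \sqrt{p(1-p)}\cos\phi$ for every $p$ and $\phi$, which is what makes $p$ and $\phi$ recoverable from measurement statistics on many copies.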
Second, we show that we can also find the phase offset for any of the partially entangled states consisting of an EPR or a Bell state with some contaminant: \begin{eqnarray} a_{00}|00> + a_{01}|01> + e^{i \phi}a_{11}|11> \\ \nonumber a_{00}|00> + a_{10}|10> + e^{i \phi}a_{11}|11> \\ \nonumber a_{00}|00> + a_{01}|01> + e^{i \theta}a_{10}|10> \\ \nonumber a_{01}|01> + e^{i \theta}a_{10}|10> + a_{11}|11> \\ \nonumber e^{i \xi}a_{01}|01> + a_{10}|10>+ a_{11}|11> \\ \nonumber a_{00}|00> + e^{i \xi}a_{01}|01> + a_{10}|10> \end{eqnarray} \section{Dynamic learning: quantum neural network (QNN)\label{qnn}} We consider a 2-qubit system whose Hamiltonian is: \begin{equation} H = K_{A} \sigma_{xA} + K_{B} \sigma_{xB} + \varepsilon_{A} \sigma_{zA} + \varepsilon_{B} \sigma_{zB} + \zeta \sigma_{zA} \sigma_{zB} \label{hamiltonian} \end{equation} where $\{ \sigma \}$ are the Pauli operators corresponding to each of the two qubits, A and B, $K_{A}$ and $K_{B}$ are the tunneling amplitudes, $\varepsilon_{A}$ and $\varepsilon_{B}$ are the biases, and $\zeta$ is the qubit-qubit coupling. The time evolution of the system is then given by the Schr\"{o}dinger equation: \begin{equation} \frac{d \rho}{dt} = \frac{1}{i \hbar}[H, \rho] \label{schr} \end{equation} where $\rho$ is the density matrix and $H$ is the Hamiltonian. The parameters $\{ K,\varepsilon,\zeta \}$ control the time evolution of the system in the sense that, if one or more of them is changed, the way a given state will evolve in time will also change. This is the basis for using our quantum system as a neural network. The role of the ``weights'' of the network is played by the parameters of the Hamiltonian, $\{ K,\varepsilon,\zeta \}$, all of which we take to be experimentally adjustable as functions of time (see, {\it e.g.}, \cite{yamamoto}, for the case of SQuID charge qubits).
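For concreteness, Equation \ref{hamiltonian} and the evolution \ref{schr} can be sketched numerically as follows. Here we set $\hbar=1$ and use constant, purely illustrative parameter values, whereas the actual network uses time-dependent parameters; this is our own sketch, not the Simulink implementation:

```python
import numpy as np

sx = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)
sz = np.array([[1.0, 0.0], [0.0, -1.0]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def hamiltonian(KA, KB, eA, eB, zeta):
    """H = KA sxA + KB sxB + eA szA + eB szB + zeta szA szB (Eq. ref{hamiltonian})."""
    return (KA * np.kron(sx, I2) + KB * np.kron(I2, sx)
            + eA * np.kron(sz, I2) + eB * np.kron(I2, sz)
            + zeta * np.kron(sz, sz))

def rk4_step(rho, Hm, dt):
    """One fixed-step RK4 update of d rho/dt = -i [Hm, rho] (hbar = 1)."""
    f = lambda r: -1j * (Hm @ r - r @ Hm)
    k1 = f(rho)
    k2 = f(rho + 0.5 * dt * k1)
    k3 = f(rho + 0.5 * dt * k2)
    k4 = f(rho + dt * k3)
    return rho + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

# Illustrative constant parameters; evolve a Bell state for 100 fixed steps
Hm = hamiltonian(2.5, 2.5, 0.1, 0.1, 0.5)
psi = np.array([1.0, 0.0, 0.0, 1.0], dtype=complex) / np.sqrt(2.0)
rho = np.outer(psi, psi.conj())
for _ in range(100):
    rho = rk4_step(rho, Hm, 0.05)
```

A fixed-step RK4 update of the commutator equation preserves the trace and Hermiticity of $\rho$ to rounding error, mirroring the role played by ODE4 in the Simulink simulation described below.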
By adjusting the parameters using a neural learning algorithm we can train the system to evolve in time to a set of chosen target outputs at the final time $t_{f}$, in response to a corresponding (one-to-one) set of given inputs. Because the time evolution is quantum mechanical (and, we assume, coherent), a quantum mechanical function, like an entanglement witness of the initial state, can be mapped to an observable of the system's final state, a measurement made at the final time $t_{f}$. The time evolution of the quantum system is calculated by integrating the Schr\"{o}dinger equation numerically in MATLAB Simulink, using ODE4 (Runge-Kutta), with a fixed integration step size of 0.05 ns \cite{matlab}. The system is initialized in each input state in the training set, in turn, then allowed to evolve for 190 ns. A measurement is then made at the final time; this is the ``output'' of the network. An error, $target-output$, is calculated, and the parameters are adjusted slightly to reduce the error. This is repeated for each $(input,target)$ pair multiple times until the calculation converges on parameters that work well for the entire training set. Complete details, including a derivation of the quantum dynamic learning paradigm using backpropagation \cite{lecun} in time \cite{werbos}, are given in \cite{eb08}. We choose the usual ``charge basis'', in which each qubit's state is given as 0 or 1. All of the parameters $\{ K,\varepsilon,\zeta \}$ were taken to be functions of time; in contrast to our earlier work \cite{eb08,nabic,eb12}, in which the parameters were taken to be piecewise constant in time, we have, here, allowed the parameters to be continuous functions of time. For the backpropagation learning, the output error needs to be propagated backward through time \cite{werbos}, so the integration has to be carried out from the final time $t_{f}$ to $0$.
To implement this in MATLAB Simulink, a change of variable is made by letting $t' = t_{f}-t$, and the simulation is run forward in $t'$. \section{Training of the phase indicator} In the charge basis, we can write a general pure state of the system at time $t=0$ as \begin{eqnarray} |\Psi(0)>=a_{00}|00>+a_{01}e^{i\xi}|01>+a_{10}e^{i \theta}|10> \\ \nonumber +a_{11}e^{i \phi}|11> \end{eqnarray} where normalization requires that \begin{equation} \sqrt{a_{00}^{2}+a_{01}^{2}+a_{10}^{2}+a_{11}^{2}}=1 \end{equation} Since an \underline{overall} phase is physically meaningless, we may take out any overall phase factor; that is, without loss of generality we may take the coefficient of the $|00>$ basis state to be real. We then write each of the other coefficients as its magnitude times a phase factor; thus, each $a_{nm}$ will be a real number, and the phase factor, if any, will be written in explicitly. As discussed above, the state of the system evolves under the Hamiltonian, Equation \ref{hamiltonian}, to another state, $|\Psi(t_{f})>$, at the final time $t_{f}$. At that final time we make a measurement. In the terminology of neural network learning: (1) the input to the neural network is the initial state $|\Psi(0)>$ at time $t=0$ of the quantum system; (2) the output of the neural network is a quantum measure made on the final state at the final time $t=t_{f}$ of the quantum system; and (3) the trainable weights of the neural network are the time histories of the adjustable parameters of the quantum system. The network is trained on a set of training pairs, each of which consists of (input, correct output). Each training pair is presented to the network, the output is calculated, the error computed, and the weights changed so as to decrease the error \cite{eb08}. Each pass through the entire training set is called an epoch.
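The parametrization above (real amplitudes, explicit phase factors, real $|00>$ coefficient) can be written as a small helper; the function name and the normalization enforced inside it are ours:

```python
import numpy as np

def initial_state(a, xi, theta, phi):
    """|Psi(0)> = a00|00> + a01 e^{i xi}|01> + a10 e^{i theta}|10>
    + a11 e^{i phi}|11>, with real amplitudes a = (a00, a01, a10, a11)."""
    a = np.asarray(a, dtype=float)
    a = a / np.linalg.norm(a)  # enforce the normalization condition
    phases = np.array([1.0, np.exp(1j * xi), np.exp(1j * theta), np.exp(1j * phi)])
    return a * phases

psi0 = initial_state([1.0, 1.0, 1.0, 1.0], 0.2, 0.7, -0.4)
```

By construction the state is normalized and the $|00>$ coefficient is real, so the three remaining angles $\xi$, $\theta$, $\phi$ carry all the relative phase information.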
As with all good science, we began with what was already known \cite{han}: namely, that it is possible to extract relative phase information from the Bell state, $|Bell>= a_{00}|00> + a_{11}e^{i \phi}|11>$. Because we are using a \underline{learning} process, it is important to see how much information we can get with as little input as possible. Thus, our original training set consisted of only 11 training pairs, using only equal-amplitude Bell states $|Bell>= a_{00}|00> + a_{11}e^{i \phi}|11>$ with $a_{00}=a_{11}=\frac{1}{\sqrt{2}}$, where the phase angle $\phi$ varies from $-\pi/2$ to $\pi/2$ as $\phi = -\frac{\pi}{2} + \frac{(n-1)\pi}{10}$, for $n=1:11$. The network output is the absolute magnitude squared of the projection of the final state of the quantum system onto the state $|11>$, {\it i.e.}, the probability of the system's being found in the state $|11>$. The correct, or target, output for these equal-amplitude Bell states is taken to be just $\cos^{2}(\phi/2)$. That is, the (input,output) pairs are \begin{eqnarray} input = |\Psi(0)>= \frac{1}{\sqrt{2}}(|00> + e^{i \phi}|11>) \\ \nonumber output = |<11|\Psi(t_{f})>|^{2} \rightarrow target = \cos^{2}(\phi/2) \end{eqnarray} The network was trained for 10 epochs, on a total of 11 training pairs. The average RMS error of all 11 training pairs after training is 0.0127. A plot of RMS error vs epoch is shown in Figure \ref{trainBfig}. A plot of output vs target for the 11 training pairs is shown in Figure \ref{trainresBfig}. A plot of the trained parameters as functions of time is shown in Figure \ref{paramBfig}. Each is a simple oscillatory function. Note that the trained tunneling amplitude functions $K_{A}$ and $K_{B}$ lie right on top of each other, as do $\epsilon_{A}$ and $\epsilon_{B}$, which is unsurprising given the symmetry of the training set. \begin{figure} \caption{RMS error per training pair vs. epoch (pass through the training set) for the $\phi$ phase offset indicator.
The training set of 11 (input,output) pairs is given in the text. \label{trainBfig} \end{figure} \begin{figure} \caption{Results for the training set for the $\phi$ phase offset indicator, showing the deviation of the output, $|<11|\Psi(t_{f})>|^{2}$, from the target, $\cos^{2}(\phi/2)$.} \label{trainresBfig} \end{figure} \begin{figure} \caption{The trained parameters $K_{A}$, $K_{B}$, $\varepsilon_{A}$, $\varepsilon_{B}$, and $\zeta$ as functions of time.} \label{paramBfig} \end{figure} To see if the network has generalized ({\it i.e.}, \underline{learned} as opposed to having simply curve-fitted), we then tested (with no additional training) on a set of Bell states of random relative magnitude, that is, on states of the type $|Bell>= a_{00}|00> + a_{11}e^{i \phi}|11>$ with now randomly generated numbers for $a_{00}$ and $a_{11}$ (such that the state remained normalized, {\it i.e.}, $\sqrt{a_{00}^{2}+a_{11}^{2}} = 1$). From \cite{han} we knew that it was unlikely that we would be able to train to the same simple target function $\cos^{2}(\phi/2)$, and so it transpired; however, we found that a simple analogue, $2(\frac{1}{2} - a_{00}^2)^{2} a_{11}^{2} + 2a_{00}a_{11}\cos^{2}(\phi/2)$, did work quite well. Again, the measurable is the probability of the system's being found in the $|11>$ state at the final time, {\it i.e.}, $|<11|\Psi(t_{f})>|^{2}$. Note that this target function reduces to the target function used for training, when $a_{00}=a_{11}=\frac{1}{\sqrt{2}}$, and maintains the necessary symmetry. These data are plotted in Figure \ref{test3fig} (blue triangles). As can easily be seen in the figure, agreement is excellent. The ability of the system to map onto the target function depends on its being an entangled state \cite{han}; however, as long as we adjust the target function appropriately, full entanglement is, clearly, not necessary. Thus it ought also to be possible to find the phase offset for a partially entangled input state, {\it e.g.}, of the form $a_{00}|00> + a_{01}|01> + e^{i \phi}a_{11}|11>$. How do we do this?
We consider a probability-weighted target function, equal to our earlier targets in the special case of the pure Bell state, but adjusted for the diminished entanglement. We are guided here by symmetry and by earlier analytic results \cite{han}, in which it was found that, while the relative phase was extractable, it was not easily separable from the amplitude information, and, in fact, had to be measured separately (hence the necessity for ``many copies'' of the original state). Experimentation eventually gave us the following (relatively) simple functions. For the Bell state, $a_{00}|00> + a_{11}e^{i\phi}|11>$, the target function for the output $|<11|\Psi(t_{f})>|^{2}$ is: \begin{equation} target_{Bell} = 2(\frac{1}{2} - a_{00}^2)^{2} a_{11}^{2} + 2a_{00}a_{11}\cos^{2}(\phi/2) \label{Belltarget} \end{equation} For the $|BP_{1}>= a_{00}|00> + a_{01}|01> + a_{11}e^{i\phi}|11> $ state, the target function for the output $|<11|\Psi(t_{f})>|^{2}$ is: \begin{eqnarray} target_{BP_{1}} = 2|\frac{1}{3}-a_{01}^{2}|a_{00}^{2}a_{11}^{2} + 3|\frac{1}{3}-a_{00}^{2}|a_{01}^{2}a_{11}^{2} \\ \nonumber + 2a_{00}a_{11}\cos^{2}(\phi/2) \label{BP1target} \end{eqnarray} For the $|BP_{2}>= a_{00}|00> + a_{10}|10> + e^{i\phi}a_{11}|11>$ state, the target function for the output $|<11|\Psi(t_{f})>|^{2}$ is: \begin{eqnarray} target_{BP_{2}} = 2|\frac{1}{3}-a_{10}^{2}|a_{00}^{2}a_{11}^{2} + 3|\frac{1}{3}-a_{00}^{2}|a_{10}^{2}a_{11}^{2} \\ \nonumber + 2a_{00}a_{11}\cos^{2}(\phi/2) \label{BP2target} \end{eqnarray} Note that these target functions agree with the functions used for \underline{training}: that is, the training states $\frac{1}{\sqrt{2}}[ |00> + e^{i\phi}|11>]$, for $\phi: -\pi/2$ to $\pi/2$, had a target function given by $\cos^{2}(\phi/2)$; this is exactly what the target function in Equation \ref{Belltarget} reduces to, in the case $a_{00}=a_{11}=\frac{1}{\sqrt{2}}$.
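These target functions are straightforward to tabulate; the sketch below (helper names ours) also confirms the equal-amplitude reduction just noted:

```python
import numpy as np

def target_bell(a00, a11, phi):
    """Target for the output |<11|Psi(t_f)>|^2 on a Bell state
    a00|00> + a11 e^{i phi}|11> (Equation labeled Belltarget)."""
    return 2.0 * (0.5 - a00**2) ** 2 * a11**2 + 2.0 * a00 * a11 * np.cos(phi / 2.0) ** 2

def target_bp(a00, a_c, a11, phi):
    """Common form of the BP1/BP2 targets; a_c is the contaminant
    amplitude, a01 for |BP1> or a10 for |BP2>."""
    return (2.0 * abs(1.0 / 3.0 - a_c**2) * a00**2 * a11**2
            + 3.0 * abs(1.0 / 3.0 - a00**2) * a_c**2 * a11**2
            + 2.0 * a00 * a11 * np.cos(phi / 2.0) ** 2)

eq = 1.0 / np.sqrt(2.0)  # equal-amplitude Bell state
```

At $a_{00}=a_{11}=1/\sqrt{2}$ the first term of $target_{Bell}$ vanishes and the expression collapses to $\cos^{2}(\phi/2)$, the function used for training.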
Similarly, the target functions for the $|BP>$ states both reduce to the function tested on for the pure Bell states in the case of equal amplitudes of $\frac{1}{\sqrt{3}}$. Testing results for 550 randomly generated states of all three types, for all values of the angle $\phi$, are shown in Figure \ref{test3fig}. Agreement is quite good, even remarkable, considering that the system was trained only on 11 phase angles for an equal-amplitude Bell state. The average RMS error per pair over all 550 testing pairs after training is 0.0270. \begin{figure} \caption{Results for the testing set for the phase offset $\phi$, consisting of three types of states: unequal-amplitude Bell states $a_{00}|00> + a_{11}e^{i \phi}|11>$, and the partially entangled states $|BP_{1}>$ and $|BP_{2}>$.} \label{test3fig} \end{figure} With some confidence in our method, we now extend to the corresponding states of the two qubit system that also can have maximal entanglement, the EPR states, $|\Psi(0)>= a_{01}|01> + a_{10}e^{i \theta}|10>$. By symmetry, these states are ``the same'' as the Bell states; thus, we would expect that similar training ought to be able to map the phase shift to the projection onto the $|10>$ basis state. However, with \underline{no further training}, we were \underline{also} able to recover \underline{this} information! In other words, the neural net, trained to map $\phi$ information to the projection onto the basis state $|11>$, \underline{also} maps the $\theta$ information to the projection onto the $|10>$ basis state. For equal amplitudes, $a_{01}=a_{10}=\frac{1}{\sqrt{2}}$, we again use the simple cosine function, $\cos^{2}(\theta/2)$; we use the analogous measure on the final state, $|<10|\Psi(t_{f})>|^{2}$.
That is, the (input,output) pairs for equal-amplitude EPR states are \begin{eqnarray} input =|\Psi(0)>= \frac{1}{\sqrt{2}}(|01> + e^{i \theta}|10>) \\ \nonumber output = |<10|\Psi(t_{f})>|^{2}\rightarrow target =\cos^{2}(\theta/2) \end{eqnarray} For non-equal amplitude EPR states, and for the analogous partially entangled EPR states, we employ exactly analogous target functions as with the Bell states. For the EPR state, $a_{01}|01> + a_{10}e^{i\theta}|10>$, we take the target function for the output $|<10|\Psi(t_{f})>|^{2}$ to be: \begin{equation} target_{EPR} = 2(\frac{1}{2} - a_{01}^2)^{2} a_{10}^{2} + 2a_{01}a_{10}\cos^{2}(\theta/2) \label{EPRtarget} \end{equation} For the $|EP_{1}>= a_{00}|00> + a_{01}|01> + a_{10}e^{i\theta}|10> $ state, the target function for the output $|<10|\Psi(t_{f})>|^{2}$ is: \begin{eqnarray} target_{EP_{1}} = 2|\frac{1}{3}-a_{00}^{2}|a_{01}^{2}a_{10}^{2} + 3|\frac{1}{3}-a_{01}^{2}|a_{00}^{2}a_{10}^{2} \\ \nonumber + 2a_{01}a_{10}\cos^{2}(\theta/2) \label{EP1target} \end{eqnarray} For the $|EP_{2}>= a_{01}|01> + a_{10}e^{i\theta}|10> + a_{11}|11>$ state, the target function for the output $|<10|\Psi(t_{f})>|^{2}$ is: \begin{eqnarray} target_{EP_{2}} = 2|\frac{1}{3}-a_{11}^{2}|a_{01}^{2}a_{10}^{2} + 3|\frac{1}{3}-a_{01}^{2}|a_{11}^{2}a_{10}^{2} \\ \nonumber + 2a_{01}a_{10}\cos^{2}(\theta/2) \label{EP2target} \end{eqnarray} Results for testing on 550 randomly generated states of all three types are shown in Figure \ref{test4fig}. \begin{figure} \caption{Results for the testing set on the phase offset $\theta$, consisting of input states of three types: (1) unequal-amplitude EPR states $a_{01}|01> + a_{10}e^{i \theta}|10>$; (2) $|EP_{1}>$; and (3) $|EP_{2}>$.} \label{test4fig} \end{figure} If we can recover phase offset information on both $|11>$ and $|10>$ projections, we ought to be able to do so on $|01>$. And so we can. Again we test only (\underline{no} additional training), using, this time, the projection onto the $|01>$ state, and looking for information about the phase offset term multiplied by that basis state.
Our target functions are the exact analogues of the $|EPR>$ and $|EP_{1,2}>$ targets. For the EPR state with the $\xi$ offset, $a_{01}e^{i\xi}|01> + a_{10}|10>$, the target function for the output $|<01|\Psi(t_{f})>|^{2}$ is: \begin{equation} target_{EPRx} = 2(\frac{1}{2} - a_{10}^2)^{2} a_{01}^{2} + 2a_{10}a_{01}\cos^{2}(\xi/2) \label{EPRxtarget} \end{equation} For the $|EP_{3}>= a_{01}e^{i\xi}|01> + a_{10}|10> + a_{11}|11>$ state, the target function for the output $|<01|\Psi(t_{f})>|^{2}$ is: \begin{eqnarray} target_{EP_{3}} = 2|\frac{1}{3}-a_{11}^{2}|a_{10}^{2}a_{01}^{2} + 3|\frac{1}{3}-a_{10}^{2}|a_{11}^{2}a_{01}^{2} \\ \nonumber + 2a_{10}a_{01}\cos^{2}(\xi/2) \label{EP4target} \end{eqnarray} For the $|EP_{4}>= a_{00}|00> + a_{01}e^{i\xi}|01> + a_{10}|10> $ state, the target function for the output $|<01|\Psi(t_{f})>|^{2}$ is: \begin{eqnarray} target_{EP_{4}} = 2|\frac{1}{3}-a_{00}^{2}|a_{10}^{2}a_{01}^{2} + 3|\frac{1}{3}-a_{10}^{2}|a_{00}^{2}a_{01}^{2} \\ \nonumber + 2a_{10}a_{01}\cos^{2}(\xi/2) \label{EP3target} \end{eqnarray} Results are shown in Figure \ref{test5fig} for 550 randomly generated states. \begin{figure} \caption{Results for the testing set on the phase offset $\xi$, consisting of input states of three types: (1) EPR states $a_{01}e^{i\xi}|01> + a_{10}|10>$, (2) $|EP_{3}>$, and (3) $|EP_{4}>$.} \label{test5fig} \end{figure} \section{Conclusion} We have shown that a two-qubit quantum system, considered as a trainable quantum neural net, can compute its own phase offsets. The training is not difficult: the training set consisted of only 11 training pairs, of a single type, and the net was trained for only 10 epochs. Agreement is not perfect, and, doubtless, a more complicated target function could be devised to achieve better agreement. But if we are considering inverting these functions, in order to perform the rotations that would enable our use of the entanglement estimator discussed in the Introduction, simplicity is also important.
Because our method relies on the phase offset's being on a basis state that carries entanglement, it is not completely general; however, since our goal is to estimate the entanglement of a general input state, this does not really matter: no phase correction is necessary for an unentangled state, and such a correction would make no difference to the calculation if made. Our previous work \cite{nabic, eb12}, which extended our work on entanglement in 2-qubit systems to $n$-qubit systems, indicates that extension of our present results to multiple-qubit systems should be possible without too much difficulty. The ease with which we are able to extract multiple angle information is encouraging. It should not be too difficult to perform the inverse rotations and, thereby, to form a good and reliable estimate of the entanglement with only a very few measurements, even for many-qubit systems. We are currently working on these calculations, and on the extension of our results to mixed systems. \end{document}
\begin{document} \title{Large dimensional random $k$-circulants} \author{Arup Bose} \address{Stat-Math Unit, Indian Statistical Institute, Kolkata.} \thanks{AB partially supported by J. C. Bose Fellowship, Government of India.} \author{Joydip Mitra} \address{Management Development Institute, Gurgaon.} \author{Arnab Sen} \address{Dept. of Statistics, U.C. Berkeley.} \date{\today} \keywords{eigenvalue, circulant, $k$-circulant, empirical spectral distribution, limiting spectral distribution, central limit theorem, normal approximation, spectral radius, Gumbel distribution.} \subjclass[2000]{Primary 60B20, Secondary 60B10, 60F05, 62E20, 62G32} \begin{abstract} Consider random $k$-circulants $A_{k,n}$ with $n \to \infty, k=k(n)$, whose input sequence $\{a_l\}_{l \ge 0}$ is independent with mean zero and variance one and $\sup_n n^{-1}\sum_{l=1}^n \mathbb{E} |a_l|^{2+\delta}< \infty$ for some $\delta > 0$. Under suitable restrictions on the sequence $\{k(n)\}_{ n \ge 1}$, we show that the limiting spectral distribution (LSD) of the suitably scaled eigenvalues exists, and we identify the limits. In particular, we prove the following: Suppose $g \ge 1$ is fixed and $p_1$ is the smallest prime divisor of $g$. Suppose $P_g=\prod_{j=1}^g E_j$ where $\{E_j\}_{1 \le j \le g}$ are i.i.d.\ exponential random variables with mean one. (i) If $k^g = -1+ s n$ where $s=1$ if $g=1$ and $s = o(n^{p_1 -1})$ if $g>1$, then the empirical spectral distribution of $n^{-1/2}A_{k,n}$ converges weakly in probability to $U_1P_g^{1/2g}$ where $U_1$ is uniformly distributed over the $(2g)$th roots of unity, independent of $P_g$. (ii) If $g \ge 2$ and $k^g = 1+ s n$ with $s = o(n^{p_1-1})$, then the empirical spectral distribution of $n^{-1/2}A_{k,n}$ converges weakly in probability to $U_2P_g^{1/2g}$ where $U_2$ is uniformly distributed over the unit circle in $\mathbb R^2$, independent of $P_g$.
On the other hand, if $k \ge 2 $, $ k= n^{o(1)}$ with $\gcd(n,k) = 1$, and the input is i.i.d.\ standard normal variables, then $F_{n^{-1/2}A_{k,n}}$ converges weakly in probability to the uniform distribution over the circle with center at $(0,0)$ and radius $r = \exp( \mathbb{E} [ \log \sqrt{E_1}] )$. We also show that when $n=k^2+1\to \infty$, and the input is i.i.d.\ with finite $(2+\delta)$ moment, then the spectral radius, with appropriate scaling and centering, converges to the Gumbel distribution. \end{abstract} \bigskip \maketitle \section{Introduction}\label{section:intro} For any (random) $n\times n$ matrix $B$, let $\mu_1(B), \ldots , \mu_n(B) \in \mathbb C = \mathbb R^2$ denote its eigenvalues including multiplicities. Then the empirical spectral distribution (ESD) of $B$ is the (random) distribution function on $\mathbb R^2$ given by $$F_{B} (x, y) = n^{-1} \# \Big \{ j : \mu_j(B) \in (-\infty, x] \times (-\infty, y], \ 1 \le j \le n \Big \}.$$ \noindent For a sequence of random $n \times n$ matrices $\{ B_n\}_{ n \ge 1}$, if the corresponding ESDs $F_{B_n}$ converge weakly (either almost surely or in probability) to a (nonrandom) distribution $F$ in the space of probability measures on $\mathbb R^2$ as $n \rightarrow \infty$, then $F$ is called the limiting spectral distribution (LSD) of $\{B_n\}_{n \ge 1}$. See Bai (1999)~\cite{Bai99}, Bose and Sen (2007)~\cite{Bosesen2008} and Bose, Sen and Gangopadhyay (2009)~\cite{bosesengangopadhyay09} for descriptions of several interesting situations where the LSD exists and can be explicitly specified. Another important quantity associated with a matrix is its spectral radius. For any matrix $B$, its spectral radius $\texttt{sp}(B)$ is defined as \[ \texttt{sp}(B) := \max \Big \{|\mu|: \mu \ \text{ is an eigenvalue of } B \Big\}, \] where $|z|$ denotes the modulus of $z \in \mathbb C$.
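Both quantities are mechanical to compute once the eigenvalues are in hand. The following NumPy sketch (ours, purely illustrative; the $2\times 2$ matrix is chosen only because its eigenvalues $\pm i$ are known exactly) evaluates the ESD $F_B(x,y)$ and $\texttt{sp}(B)$:

```python
import numpy as np

B = np.array([[0.0, -1.0],
              [1.0,  0.0]])       # rotation matrix; eigenvalues are +i and -i
mu = np.linalg.eigvals(B)         # eigenvalues, with multiplicities

def esd(mu, x, y):
    # F_B(x, y) = n^{-1} #{ j : Re(mu_j) <= x and Im(mu_j) <= y }
    return float(np.mean((mu.real <= x) & (mu.imag <= y)))

sp = float(np.abs(mu).max())      # spectral radius sp(B) = max_j |mu_j|
```

Here `esd(mu, 0.5, 2.0)` is $1$ (both eigenvalues lie in the quadrant), `esd(mu, 0.5, 0.0)` is $1/2$ (only $-i$ qualifies), and `sp` is $1$.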
For classical random matrix models such as the Wigner matrix and the i.i.d.\ matrix, the limiting distribution of an appropriately normalized spectral radius is known for Gaussian entries (see, for example, Forrester (1993)~\cite{forrester93}, Johansson (2000)~\cite{johansson00}, Tracy and Widom (2000)~\cite{tracywidom00} and Johnstone (2001)~\cite{Johnstone01}), which was later extended by Soshnikov~\cite{soshnikov1, soshnikov2} to more general entries. Suppose $\underline{a} =\{a_l\}_{ l \ge 0}$ is a sequence of real numbers (called the {\it input} sequence). For positive integers $k$ and $n$, define the $n \times n$ square matrix \[A_{k,n}(\underline{a}) = \left[ \begin{array}{cccc} a_0 & a_1 & \ldots & a_{n-1} \\ a_{n-k} & a_{n-k+1} & \ldots & a_{n-k-1} \\ a_{n-2k} & a_{n-2k+1} & \ldots & a_{n-2k-1} \\ & & \vdots & \\ \end{array} \right]_{n \times n}. \] All subscripts appearing in the matrix entries above are calculated modulo $n$. Our convention will be to start the row and column indices from zero. Thus, the $0$th row of $A_{k,n}(\underline{a})$ is $\left( a_0,\ a_1,\ a_2,\ \ldots,\ a_{n-1} \right).$ For $0 \le j < n-1$, the $(j+1)$-th row of $A_{k,n}$ is a right-circular shift of the $j$-th row by $k$ positions (equivalently, by $k \mbox{ mod } n$ positions). We will write $A_{k,n}(\underline{a})=A_{k,n}$; it is called a $k$-\emph{circulant matrix}. Note that $A_{1, n}$ is the well-known circulant matrix. Without loss of generality, $k$ may always be reduced modulo $n$. Our goal is to study the LSD and the distributional limit of the spectral radius of suitably scaled $k$-circulant matrices $A_{k, n}(\underline{a})$ when the input sequence $\underline{a} = \{a_l\}_{ l\ge 0}$ consists of i.i.d.\ random variables.
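The row-shift description translates directly into the entry formula $(A_{k,n})_{j,i} = a_{(i-jk) \bmod n}$. A short Python sketch (ours, for illustration only) builds $A_{k,n}$ from an input sequence:

```python
def k_circulant(a, k):
    """Build A_{k,n}: row j is the right-circular shift of row j-1 by k
    positions, i.e. entry (j, i) equals a_{(i - j*k) mod n}."""
    n = len(a)
    return [[a[(i - j * k) % n] for i in range(n)] for j in range(n)]

A = k_circulant([0, 1, 2, 3, 4], k=2)
# Row 0 is (a_0, ..., a_4); row 1 begins with a_{n-k} = a_3, as in the display.
```

With $n=5$ and $k=2$, row $1$ is $(a_3, a_4, a_0, a_1, a_2)$, the right-circular shift of row $0$ by two positions.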
\subsection{Why study $k$-circulants?} Much of the usefulness of the circulant matrix stems from its deep connection to the Toeplitz matrix: while the former has an explicit and easy-to-state spectral decomposition, the spectral analysis of the latter is much harder and more challenging in general. If the input $\{a_l\}_{ l \ge 0}$ is square summable, then the circulant approximates the corresponding Toeplitz in various senses as the dimension grows. Indeed, this approximating property is exploited to obtain the LSD of the Toeplitz matrix as the dimension increases. See Gray (2006)~\cite{gray06} for a recent and relatively easy account. When the input sequence is i.i.d.\ with positive variance, it is no longer square summable. In that case, while the LSD of the (symmetric) circulant is normal (see Bose and Mitra (2002)~\cite{Bosemitra02} and Massey, Miller and Sinsheimer (2007)~\cite{masseymillersinsheimer07}), the LSD of the (symmetric) Toeplitz is nonnormal (see Bryc, Dembo and Jiang (2006)~\cite{brycdembojiang06} and Hammond and Miller (2005)~\cite{hammil05}). On the other hand, consider the random symmetric band Toeplitz matrix, where the banding parameter $m$, which essentially is a measure of the number of nonzero entries, satisfies $m \to \infty$ and $m/n\to 0$. Then again, its spectral distribution is approximated well by that of the corresponding banded symmetric circulant. See for example Kargin (2009)~\cite{kargin09} and Bose and Basak (2009)~\cite{bosebasak09}. Similarly, the LSD of the $(n-1)$-circulant was derived in Bose and Mitra (2002)~\cite{Bosemitra02} (who called it the reverse circulant matrix). This has been used in the study of symmetric band Hankel matrices. See Bose and Basak (2009)~\cite{bosebasak09}. The circulant matrices are diagonalized by the Fourier matrix $F = ((F_{s,t})), F_{s,t} = e^{2 \pi i st/n}/ \sqrt{n}, 0 \le s, t < n$.
Their eigenvalues are the discrete Fourier transform of the input sequence $\{a_l\}_{0 \le l < n}$ and are given by $\lambda_t= \sum_{l=0}^{n-1} a_l e^{2\pi i tl /n}, 0 \le t < n$. The eigenvalues of the circulant matrices crop up crucially in time series analysis. For example, the periodogram of a sequence $\{ a_l\}_{ l \ge 0}$ is defined as $n^{-1}|\sum_{l=0}^{n-1} a_l e^{2\pi i jl /n}|^2$, $-\lfloor\frac{n-1}{2}\rfloor \le j \le \lfloor\frac{n-1}{2}\rfloor$, and is a simple function of the eigenvalues of the corresponding circulant matrix. The study of the properties of the periodogram is fundamental in the spectral analysis of time series. See for instance Fan and Yao (2003)~\cite{fanyao03}. The maximum of the periodogram, in particular, has been studied in Mikosch (1999)~\cite{Mikosch99}. The $k$-circulant matrix and its block versions arise in many different areas of mathematics and statistics - from multi-level supersaturated design of experiments (Georgiou and Koukouvinos (2006)~\cite{georgiou06}) to spectra of De Bruijn graphs (Strok (1992)~\cite{strok1992circulant}) and $(0,1)$-matrix solutions to $A^m = J_n$ (Wu, Jia and Li (2002)~\cite{wu2002g}) - just to name a few. See also the book by Davis (1979)~\cite{Davis79} and the article by Pollock (2002)~\cite{Pollock02}. The $k$-circulant matrices with random input sequence are examples of so-called `patterned' matrices. Deriving the LSD for general patterned matrices has drawn significant attention in the recent literature. See for example the review article by Bai (1999)~\cite{Bai99}, or the more recent Bose and Sen (2008)~\cite{Bosesen2008} and Bose, Sen and Gangopadhyay (2009)~\cite{bosesengangopadhyay09}. However, there do not seem to have been any studies of the general random $k$-circulant, either with respect to the LSD or with respect to the spectral radius. It seems natural to investigate these. The LSDs of the $1$-circulant, the $1$-circulant with symmetry restriction, and the $(n-1)$-circulant are known.
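The DFT description of the circulant eigenvalues can be verified directly: for $A_{1,n}$ and $\omega = e^{2\pi i/n}$, the Fourier vector $v_t=(\omega^{tl})_{0\le l<n}$ is an eigenvector with eigenvalue $\lambda_t=\sum_{l} a_l\omega^{tl}$. A self-contained Python sketch (ours, illustrative only):

```python
import cmath

def circulant(a):
    # A_{1,n}: entry (j, i) = a_{(i - j) mod n}
    n = len(a)
    return [[a[(i - j) % n] for i in range(n)] for j in range(n)]

a = [1.0, 2.0, 0.5, -1.0, 3.0]
n = len(a)
w = cmath.exp(2j * cmath.pi / n)
lam = [sum(a[l] * w ** (t * l) for l in range(n)) for t in range(n)]  # DFT of input

A = circulant(a)
# Check A v_t = lambda_t v_t for every Fourier vector v_t = (w^{t l})_l.
errs = []
for t in range(n):
    v = [w ** (t * l) for l in range(n)]
    Av = [sum(A[j][i] * v[i] for i in range(n)) for j in range(n)]
    errs.append(max(abs(Av[j] - lam[t] * v[j]) for j in range(n)))
```

The check works because $(Av_t)_j = \sum_i a_{(i-j) \bmod n}\,\omega^{ti} = \omega^{tj}\sum_m a_m\omega^{tm} = \lambda_t (v_t)_j$, using $\omega^n = 1$.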
It seems interesting to investigate the possible LSDs that may arise from $k$-circulants. Likewise, the limit distributions of the spectral radius of the circulant and of the $(n-1)$-circulant are both Gumbel. It seems natural to ask what happens to the distributional limit of the spectral radius for general $k$-circulants. \subsection{Main results and discussion} \subsubsection{Limiting spectral distributions} The LSDs for $k$-circulant matrices are known in a few important special cases. If the input sequence $\{a_l\}_{l \ge 0}$ is i.i.d.\ with finite third moment, then the limit distribution of the circulant matrices ($k=1$) is bivariate normal (Bose and Mitra (2002)~\cite{Bosemitra02}). For the \emph{symmetric} circulant with i.i.d.\ input having finite second moment, the LSD is real normal (Bose and Sen (2007)~\cite{Bosesen2008}). For the $k$-circulant with $k=n-1$, the LSD is the symmetric version of the positive square root of the exponential variable with mean one (Bose and Mitra (2002)~\cite{Bosemitra02}). Clearly, for many combinations of $k$ and $n$, a lot of eigenvalues are zero. Later we provide a formula for the eigenvalues. From this, if $k$ is prime and $n=m \times k$ where $\gcd(m,k) = 1$, then $0$ is an eigenvalue with multiplicity $(n-m)$. To avoid this degeneracy and to keep our exposition simple, we primarily restrict our attention to the case when $\gcd(k, n)=1$. In general, the structure of the eigenvalues depends on the number-theoretic relation between $k$ and $n$, and the LSD may vary widely. In particular, the LSD is not `continuous' in $k$. In fact, while the ESD of the usual circulant matrices $n^{-1/2}A_{1, n}$ is bivariate normal, the ESD of the $2$-circulant matrices $n^{-1/2}A_{2, n}$ for large odd $n$ looks like a solar ring (see Figure~\ref{fig:case3&4}). The next theorem tells us that the radial component of the LSD of $k$-circulants with $k \ge 2$ is always degenerate, at least when the input sequence is i.i.d.
normal, as long as $k = n^{o(1)}$ and $\gcd(k, n)=1$. \begin{figure} \caption{Eigenvalues of $100$ realizations of $n^{-1/2}A_{2,n}$ for large odd $n$.} \label{fig:case3&4} \end{figure} \begin{theorem}\label{thm:degenerate} Suppose $\{a_l\}_{ l \ge 0}$ is an i.i.d.\ sequence of $N(0,1)$ random variables. Let $k \ge 2 $ be such that $ k= n^{o(1)}$ and $n \to \infty $ with $\gcd(n,k) = 1$. Then $F_{n^{-1/2}A_{k,n}}$ converges weakly in probability to the uniform distribution over the circle with center at $(0,0)$ and radius $r = \exp( \mathbb{E} [ \log \sqrt{E}] )$, $E$ being an exponential random variable with mean one. \end{theorem} \begin{remark} Since $-\log E$ has the standard Gumbel distribution, which has mean $\gamma$ where $\gamma \approx 0.57721$ is the Euler-Mascheroni constant, it follows that $r = e^{ - \gamma/2} \approx 0.74930$. \end{remark} In view of Theorem~\ref{thm:degenerate}, it is natural to consider the case when $k^{g}= \Omega(n)$ and $\gcd(k, n)=1$ where $g$ is a fixed integer. In the next two theorems, we consider two special cases of the above scenario, namely when $n$ divides $k^g \pm 1$. Consider the following assumption.\\ \noindent \textbf{Assumption \texttt{I}.} The sequence $\{a_l\}_{l\ge 0}$ is independent with mean zero, variance one and for some $ \delta > 0$, $$\sup_n n^{-1}\sum_{l=0}^{n-1} \mathbb{E}|a_l|^{2+\delta} < \infty.$$ We are now ready to state our main theorems on the existence of the LSD. \begin{theorem} \label{theo:lsd12} Suppose $\{a_l\}_{l \ge 0}$ satisfies Assumption \texttt{I}. Fix $g \ge 1$ and let $p_1$ be the smallest prime divisor of $g$. Suppose $k^g = -1+ s n$ where $s=1$ if $g=1$ and $s = o(n^{p_1 -1})$ if $g>1$. Then $F_{n^{-1/2}A_{k,n}}$ converges weakly in probability to $U_1(\prod_{j=1}^g E_j)^{1/2g}$ as $n \to \infty$ where $\{ E_j\}_{ 1 \le j \le g}$ are i.i.d.\ exponentials with mean one and $U_1$ is uniformly distributed over the $(2g)$th roots of unity, independent of $\{E_j\}_{ 1 \le j \le g}$.
\end{theorem} \begin{theorem}\label{theo:lsd45} Suppose $\{a_l\}_{l \ge 0}$ satisfies Assumption \texttt{I}. Fix $g \ge 1$ and let $p_1$ be the smallest prime divisor of $g$. Suppose $k^g = 1+ s n$ where $s =0$ if $g=1$ and $s = o(n^{p_1-1})$ if $g>1$. Then $F_{n^{-1/2}A_{k,n}}$ converges weakly in probability to $U_2(\prod_{j=1}^g E_j)^{1/2g}$ as $n \to \infty$ where $\{E_j\}_{1 \le j \le g}$ are i.i.d.\ exponentials with mean one and $U_2$ is uniformly distributed over the unit circle in $\mathbb R^2$, independent of $\{E_j\}_{1 \le j \le g}$. \end{theorem} \begin{figure} \caption{Eigenvalues of $20$ realizations of $n^{-1/2}A_{k,n}$.} \label{fig:case1&2} \end{figure} \begin{remark} (1) Theorem~\ref{theo:lsd12} and Theorem~\ref{theo:lsd45} recover the LSDs of the $k$-circulants for $k=n-1$ and $k=1$ respectively. \noindent (2) While the radial coordinates of the LSDs described in Theorems \ref{theo:lsd12} and \ref{theo:lsd45} are the same, their angular coordinates differ. One puts its mass only at the discrete points $e^{i 2 \pi j/ 2g}, 1 \le j \le 2g$, on the unit circle, while the other spreads its mass uniformly over the entire unit circle. See Figure \ref{fig:case1&2}. \noindent (3) The restriction on $s = (k^g \pm 1)/n$ in the above two theorems seems to be a natural one. Suppose $g$ is a prime, so that $g = p_1$. In this case, if $s \ge n^{p_1 -1} $, then $k$ becomes greater than or equal to $n$, violating the assumption that $k < n$. \noindent (4) We cannot expect similar LSDs to hold for more general cases like $k^g = \pm r + n$, $r >1$ fixed. Compare Figure \ref{fig:case1&2} and Figure \ref{fig:case5&6}. \end{remark} \begin{figure} \caption{Eigenvalues of $100$ realizations of $n^{-1/2}A_{k,n}$.} \label{fig:case5&6} \end{figure} \subsubsection{Spectral radius} For the $k$-circulant, first suppose that the input sequence is i.i.d.\ standard normal. When $k=1$, it is easy to check that the squared moduli of the eigenvalues are exponentials and that they are independent of each other.
Hence, the appropriately scaled and normalized spectral radius converges to the Gumbel distribution. But when the input sequence is i.i.d.\ but not necessarily normal, that independence structure is lost. A careful use of Koml\'{o}s-Major-Tusn\'{a}dy type sharp normal approximation results is needed to deal with this case. See Davis and Mikosch (1999)~\cite{Mikosch99}. These approximations imply that the limit continues to be Gumbel. The spectral radius of the $(n-1)$-circulant is the same as that of the circulant and hence it has the same limit. See also Bryc and Sethuraman (2009)~\cite{brycsethuraman09}, who use the same approach for the symmetric circulant. Now let $g=2$ and, for further simplicity, assume that $n = k^2+1$. If the input sequence is i.i.d.\ standard normal, then the moduli of the nonzero eigenvalues are independent and distributed as $(E_1E_2)^{1/4}$, where $E_j, j=1,2$, are i.i.d.\ standard exponentials. Thus, the behavior of the spectral radius is the same as that of the maxima of i.i.d.\ variables each distributed as $(E_1E_2)^{1/4}$. This is governed by the tail behaviour of $E_1E_2$. We deduce this tail behaviour via properties of Bessel functions and the limit again turns out to be Gumbel. Now, as suggested by the results of Davis and Mikosch (1999)~\cite{Mikosch99}, even when the input sequence is only assumed to be i.i.d.\ and not necessarily normal, under suitable moment conditions some kind of invariance principle holds and the same limit persists. We show that this is indeed the case. \begin{theorem} \label{theo:max} Suppose $\{a_l\}_{l\ge 0}$ is an i.i.d.\ sequence of random variables with mean zero and variance $1$ and $ \mathbb{E} |a_l|^\gamma < \infty $ for some $\gamma > 2$.
If $n=k^2+1$ then \[ \frac{ \texttt{sp}( n^{-1/2}A_{k, n}) -d_q } { c_q} \] converges in distribution to the standard Gumbel as $n \to \infty$, where $ q =q(n) = { \lfloor \frac{n}{4} \rfloor}$ and the normalizing constants $ c_n $ and $d_n$ can be taken as follows: \begin{equation}\label{eq:normalize} c_n =(8\log n)^{-1/2}\ \ \text{and} \ \ d_n=\frac{ (\log n)^{1/2} }{\sqrt 2}\left (1+\frac{1}{4}\frac{\log\log n}{\log n}\right)+\frac{1}{2(8\log n)^{1/2}} \log \frac{\pi}{2}. \end{equation} \end{theorem} In the next section we state the basic eigenvalue formula for the $k$-circulant and develop some essential properties of the eigenvalues. In Section \ref{sec:lsd} and Section \ref{sec:spectralnorm} we state and prove the results on the LSD and the spectral radius respectively. An Appendix reproves the known eigenvalue formula for the $k$-circulant. \section{Eigenvalues of the $k$-circulant}\label{section:eigenvalues} We first describe the eigenvalues of a $k$-circulant and prove some related auxiliary properties. The eigenvalue formula, in particular, is already known; see for example Zhou (1996)~\cite{Zhou96}. We provide a more detailed analysis which we later use in our study of the LSD and the spectral radius. Let \begin{equation}\label{eq:lambda} \omega=\omega_n := \cos(2\pi/n)+i\sin(2\pi/n),\ i^2=-1\ \ \text{and} \ \ \lambda_t = \sum\limits_{l=0}^{n-1} a_{l} \omega^{tl}, \ \ 0 \le t < n. \end{equation} \begin{remark} Note that $\{ \lambda_t, 0 \le t < n \}$ are the eigenvalues of the usual circulant matrix $A_{1,n}$. \end{remark} Let $p_1 <p_2<\cdots < p_c$ be all the common prime factors of $n$ and $k$. Then we may write \begin{equation}\label{eq:decomposition} n=n^{\prime} \prod_{q=1}^{c} p_q^{\beta_q} \ \text{ and } \ \ k=k^{\prime} \prod_{q=1}^{c} p_q^{\alpha_q}.\end{equation} Here $\alpha_q,\ \beta_q \ge 1$ and $n^{\prime}$, $k^{\prime}$, $p_q$ are pairwise relatively prime.
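The decomposition \eqref{eq:decomposition} is purely computational. The following Python helper (ours, a sketch for illustration) extracts $n'$, $k'$ and the exponent pairs $(\alpha_q, \beta_q)$ over the common prime factors:

```python
import math

def common_prime_decomposition(n, k):
    """Return (n', k', factors) with n = n' * prod p^beta and
    k = k' * prod p^alpha, the product running over the primes p
    dividing both n and k; factors maps p -> (alpha, beta)."""
    g = math.gcd(n, k)
    primes, p = [], 2
    while p * p <= g:                 # primes dividing gcd(n, k)
        if g % p == 0:
            primes.append(p)
            while g % p == 0:
                g //= p
        p += 1
    if g > 1:
        primes.append(g)
    nprime, kprime, factors = n, k, {}
    for p in primes:                  # divide each common prime out fully
        alpha = beta = 0
        while nprime % p == 0:
            nprime //= p
            beta += 1
        while kprime % p == 0:
            kprime //= p
            alpha += 1
        factors[p] = (alpha, beta)
    return nprime, kprime, factors

nprime, kprime, factors = common_prime_decomposition(24, 10)
```

For instance, $n=24$ and $k=10$ share only the prime $2$, giving $n'=3$, $k'=5$ and $(\alpha,\beta)=(1,3)$; note that $n'$, $k'$ and the common primes are indeed pairwise relatively prime.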
We will show that $(n-n^{\prime})$ eigenvalues of $A_{k,n}$ are zero and $n^{\prime}$ eigenvalues are non-zero functions of $\underline{a}$. To identify the non-zero eigenvalues of $A_{k,n}$, we need some preparation. For any positive integer $m$, the set $\mathbb Z_{m}$ has its usual meaning, that is, $\mathbb Z_{m} = \{ 0, 1,2, \ldots, m-1\}.$ We introduce the following family of sets \begin{equation}\label{eq:Sx} S(x) := \Big \{xk^b \text{ mod } n^{\prime}: b \ge 0 \Big \}, \quad x \in \mathbb Z_{n'}. \end{equation} We observe the following facts about the family of sets $\{ S(x) \}_{ x \in \mathbb Z_{n'}}$.\\ \noindent (I) Let $g_x = \#S(x)$. We call $g_x$ the {\it order} of $x$. Note that $g_0 = 1$. It is easy to see that $$S(x) = \{xk^b \text{ mod } n^{\prime}: 0 \le b < g_x \}.$$ An alternative description of $g_x$, which we will use extensively later, is the following. For $x \in \mathbb Z_{n^{\prime}}$, let \[ \mathcal O_x = \{b > 0 \ : b \text{ is an integer and } xk^b = x \text{ mod } n' \}. \] Then $g_x = \min \mathcal O_x$, that is, $g_x$ is the smallest positive integer $b$ such that $xk^b = x \text{ mod } n'$.\\ \noindent (II) The distinct sets from the collection $\{S(x) \}_{ x \in \mathbb Z_{n'}}$ form a partition of $\mathbb Z_{n^{\prime}}$. To see this, first note that $x \in S(x)$ and hence $\bigcup_{x \in \mathbb Z_{n'}}S(x) = \mathbb Z_{n'}$. Now suppose $S(x) \cap S(y) \neq \emptyset$. Then $xk^{b_1} = yk^{b_2} \text{ mod } n^{\prime}$ for some integers $b_1, b_2 \ge 1$. Multiplying both sides by $k^{mg_x-b_1}$, where $m$ is large enough that $mg_x \ge b_1$, we see that $x \in S(y)$, so that $S(x) \subseteq S(y)$. Hence, reversing the roles, $S(x)=S(y)$.
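Facts (I) and (II) are easy to check by brute force. The following Python sketch (ours, illustrative; it assumes $\gcd(k,n')=1$, so that multiplication by $k$ permutes $\mathbb Z_{n'}$ and each orbit closes up at its starting point) computes the sets $S(x)$ and the orders $g_x$:

```python
def S(x, k, nprime):
    """S(x) = { x k^b mod n' : b >= 0 }; its size is the order g_x.
    Assumes gcd(k, n') = 1 so the orbit returns to x."""
    orbit, y = [], x % nprime
    while y not in orbit:
        orbit.append(y)
        y = (y * k) % nprime
    return frozenset(orbit)

nprime, k = 15, 2                     # gcd(k, n') = 1
sets = {S(x, k, nprime) for x in range(nprime)}       # distinct S(x)
g = {x: len(S(x, k, nprime)) for x in range(nprime)}  # orders g_x
```

For $n'=15$, $k=2$ the distinct sets partition $\mathbb Z_{15}$, $g_0=1$, $g_1=4$ (since $2$ has multiplicative order $4$ modulo $15$), and every $g_x$ divides $g_1$, in line with Lemma \ref{lem:partitionproperties}(ii) below.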
We call the distinct sets in $\{S(x) \}_{ x \in \mathbb Z_{n'}}$ the {\em eigenvalue partition} of $\mathbb Z_{n^{\prime}}$ and denote the partitioning sets and their sizes by \begin{equation}\label{eq:partitionsets}\mathcal{P}_0 = \{0\}, \mathcal{P}_1, \ldots, \mathcal{P}_{\ell -1}\ \ \text{and}\ \ n_j = \#\mathcal{P}_j, \ 0 \le j < \ell.\end{equation} Define \begin{equation}\label{eq:xj} \Pi_j := \prod_{t \in \mathcal{P}_j} \lambda_{tn/n'}, \ \ j=0, 1, \ldots , \ell-1.\end{equation} The following theorem provides the formula for the eigenvalues of $A_{k,n}$. Since this is from a Chinese article which may not be easily accessible to all readers, we have provided a proof in the Appendix. \begin{theorem}[Zhou (1996)~\cite{Zhou96}] \label{theo:formula} The characteristic polynomial of $A_{k,n}$ is given by \begin{equation}\label{eq:evalue} \chi \left(A_{k,n} \right) (\lambda)=\lambda^{n-n^{\prime}} \prod_{j=0}^{\ell-1} \left( \lambda^{n_j} - \Pi_j \right). \end{equation} \end{theorem} \subsection{Some properties of the eigenvalue partition $\{\mathcal{P}_j, 0 \le j < \ell\}$} We collect some simple but useful properties of the eigenvalue partition in the following lemma. \begin{lemma}\label{lem:partitionproperties} (i) Let $x, y \in \mathbb Z_{n'}$. If $n'-t_0 \in S(y)$ \ for some $t_0 \in S(x)$, then for every $t \in S(x) $, we have $n' - t \in S(y)$. \\ \noindent (ii) Fix $x \in \mathbb Z_{n'} $. Then $g_x$ divides $g$ for every $g \in \mathcal O_x$. Furthermore, $g_x$ divides $g_1$ for each $x \in \mathbb Z_{n'} $. \\ \noindent (iii) Suppose $g$ divides $g_1$. Set $m := \gcd(k^g-1,n')$. Let $X(g)$ and $Y(g)$ be defined as \begin{eqnarray} X(g):= \Big \{x: x \in\mathbb Z_{n'} \ \ \text{and}\ \ x \ \ \text{has order} \ \ g \Big \}, \ \ Y(g) := \Big \{ bn'/m \ : \ 0 \le b < m \Big \}.
\end{eqnarray} Then $$X(g) \subseteq Y(g), \ \ \#Y(g) = m\ \ \text{and}\ \ \bigcup_{h : h|g} X(h) = Y(g).$$ \end{lemma} \begin{proof} (i) Since $t \in S(x) = S(t_0)$, we can write $t = t_0k^b \text{ mod } n' $ for some $b \ge 0$. Therefore, $n'-t = (n'-t_0) k^b \text{ mod } n' \in S(n' - t_0) = S(y)$. \noindent (ii) Fix $g \in \mathcal O_x$. Since $g_x$ is the smallest element of $\mathcal O_x$, it follows that $g_x \le g$. Suppose, if possible, $g = qg_x+r$ where $0 < r < g_x$. From the fact that $xk^{g_x} = x \text{ mod } n'$, it then follows that \[ x = xk^{g} \text{ mod } n^{\prime} = xk^{qg_x +r } \text{ mod } n^{\prime} = xk^r \text{ mod } n^{\prime}. \] This implies that $r \in \mathcal O_x$ and $r <g_x$, which contradicts the fact that $g_x$ is the smallest element in $\mathcal O_x$. Hence, we must have $r=0$, proving that $g_x$ divides $g$. Next, note that $k^{g_1} = 1 \text{ mod } n'$, implying that $xk^{g_1} = x \text{ mod } n'$. Therefore $g_1 \in \mathcal O_x$, proving the second assertion. \noindent (iii) Clearly, $\#Y(g) = m$. Fix $x \in X(h)$ where $h$ divides $g$. Then, $xk^g = x(k^h)^{g/h} = x \mbox{ mod } n^{\prime}$, since $g/h$ is a positive integer. Therefore $n'$ divides $x(k^g-1)$. So, $n'/m$ divides $x(k^g-1)/m$. But $n'/m$ is relatively prime to $(k^g-1)/m$ and hence $n'/m$ divides $x$. So, $x=bn'/m$ for some integer $b \ge 0$. Since $0 \le x <n'$, we have $0 \le b < m$, and $x \in Y(g)$, proving $\bigcup_{h : h|g} X(h) \subseteq Y(g)$ and in particular, $ X(g) \subseteq Y(g)$. On the other hand, take $ 0 \le b < m$. Then $\left( bn'/m\right ) k^g = \left( bn'/m\right)\text{ mod } n'$. Hence, $g \in \mathcal O_{bn'/m}$, which implies, by part (ii) of the lemma, that $ g_{bn'/m}$ divides $g$. Therefore, $ Y(g) \subseteq \bigcup_{h : h|g} X(h) $, which completes the proof. $\Box$ \end{proof} \begin{lemma} \label{theo:count} Let $g_1 = q_1^{\gamma_1} q_2^{\gamma_2}\ldots q_m^{\gamma_m}$ where $q_1 < q_2 < \ldots < q_m$ are primes.
Define for $1 \le j \le m$, \[ L_j := \left\{ q_{i_1}q_{i_2} \cdots q_{i_j} : 1 \le i_1 < \ldots < i_j \le m \right\} \] and \[G_j = \sum\limits_{\ell_j \in L_j} \# Y(g_1/\ell_j) = \sum\limits_{\ell_j \in L_j} \gcd \left(k^{g_1/\ell_j}-1,n' \right). \] Then we have \noindent (i) $\# \left\{ x \in \mathbb Z_{n^{\prime}}: g_x < g_1 \right\} = G_1 - G_2 + G_3 - G_4 + \cdots$.\\ \noindent (ii) $G_1 - G_2 + G_3 - G_4 + \cdots \le G_1.$ \end{lemma} \begin{proof} Fix $x \in \mathbb Z_{n'} $. By Lemma~\ref{lem:partitionproperties}(ii), $g_x$ divides $g_1$ and hence we can write $g_x = q_1^{\eta_1} \ldots q_m^{\eta_m}$ where $0 \le \eta_b \le \gamma_b$ for $1 \le b \le m$. Since $g_x < g_1$, there is at least one $b$ such that $\eta_b< \gamma_b$. Suppose that exactly $h$-many $\eta$'s are equal to the corresponding $\gamma$'s, where $0\le h < m$. To keep notation simple, we will assume that $\eta_b=\gamma_b,\ 1 \le b \le h$ and $\eta_b<\gamma_b,\ h+1 \le b \le m$. \noindent (i) Then $x \in Y(g_1/q_b)$ for $h+1 \le b \le m$ and $x \not \in Y(g_1/q_b)$ for $1 \le b \le h$. So, $x$ is counted $(m-h)$ times in $G_1$. Similarly, $x$ is counted ${m-h \choose 2}$ times in $G_2$, ${m-h \choose 3}$ times in $G_3$, and so on. Hence, the total number of times $x$ is counted in $(G_1-G_2+G_3-\ldots)$ is \[ {m-h \choose 1} - {m-h \choose 2} + {m-h \choose 3}- \ldots = 1. \] (ii) Note that $m-h \ge 1$. Further, each element of the set $\left\{ x \in \mathbb Z_{n^{\prime}}: g_x < g_1 \right\}$ is counted once in $G_1-G_2+G_3- \ldots$ and $(m-h)$ times in $G_1$. The result follows immediately. \end{proof} \subsection{Asymptotic negligibility of lower order elements} We will now consider the elements in $\mathbb Z_{n^{\prime}}$ with order less than that of $1 \in \mathbb Z_{n^{\prime}} $, which has the highest order $g_1$. We will need the proportion of such elements in $\mathbb Z_{n^{\prime}}$.
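The inclusion-exclusion count in Lemma~\ref{theo:count}(i) is easy to confirm numerically. The sketch below (our code, assuming $\gcd(k,n')=1$) counts $\{x : g_x < g_1\}$ directly and via the alternating sum $G_1 - G_2 + G_3 - \cdots$:

```python
import math
from itertools import combinations

def order(x, k, nprime):
    # g_x: least b >= 1 with x * k^b = x (mod n'); assumes gcd(k, n') = 1
    b, y = 1, (x * k) % nprime
    while y != x % nprime:
        y = (y * k) % nprime
        b += 1
    return b

def prime_divisors(m):
    out, p = [], 2
    while p * p <= m:
        if m % p == 0:
            out.append(p)
            while m % p == 0:
                m //= p
        p += 1
    if m > 1:
        out.append(m)
    return out

nprime, k = 9, 2
g1 = order(1, k, nprime)                                  # highest order
direct = sum(1 for x in range(nprime) if order(x, k, nprime) < g1)
qs = prime_divisors(g1)                                   # q_1 < ... < q_m
alternating = 0
for j in range(1, len(qs) + 1):                           # G_1 - G_2 + ...
    for sub in combinations(qs, j):
        ell = math.prod(sub)
        alternating += (-1) ** (j + 1) * math.gcd(k ** (g1 // ell) - 1, nprime)
```

For $n'=9$, $k=2$ one has $g_1=6$, the elements of order less than $6$ are $\{0,3,6\}$, and the alternating sum also evaluates to $3$.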
So, we define \begin{equation}\label{eq:upsilon} \upsilon_{k,n'} :=\frac{1}{n'}\# \{ x \in\mathbb Z_{n^{\prime}} : g_x < g_1 \}. \end{equation} To derive the LSD in the special cases we have in mind, the asymptotic negligibility of $\upsilon_{k,n'}$ turns out to be important. The following two lemmas establish upper bounds on $\upsilon_{k,n'}$ and will be used crucially later. \begin{lemma} \label{lem:minus_one} (i) If $g_1 =2$, then $\displaystyle \upsilon_{k,n'} = \gcd(k-1, n')/n'$.\\ \noindent (ii) If $g_1 \ge 4$ is even, and $k^{g_1/2} = -1 \mbox{ mod } n'$, then $\displaystyle \upsilon_{k,n'} \le {n'}^{-1}\Big(1 + \sum_{b| g_1, \ b \ge 3} \gcd(k^{g_1/b} -1, n')\Big).$ \noindent (iii) If $g_1 \ge 2$ and $q_1$ is the smallest prime divisor of $g_1$, then $\displaystyle \upsilon_{k,n'} <2{n'}^{-1}k^{g_1/q_1}.$ \end{lemma} \begin{proof} Part (i) is immediate from Lemma \ref{theo:count}, which asserts that $n' \upsilon_{k,n'} = \# Y(1) = \gcd (k -1, n')$. \noindent (ii) Fix $x \in \mathbb Z_{n'}$ with $g_x < g_1$. Since $g_x$ divides $g_1$ and $g_x <g_1$, $g_x$ must be of the form $g_1/b$ for some integer $b \ge 2$ such that $g_1/b$ is an integer. If $b=2$, then $ xk^{g_1/2} = x k^{g_x} = x \text{ mod } n^{\prime}$. But $k^{g_1/2} = -1 \mbox{ mod } n^{\prime}$ and so, $xk^{g_1/2} = -x \mbox{ mod } n^{\prime}$. Therefore, $2x = 0 \mbox{ mod } n^{\prime}$ and $x$ can be either $0$ or $n^{\prime}/2$, provided, of course, $n^{\prime}/2$ is an integer. But $ g_0 =1 < 2 \le g_1/2$, so $x$ cannot be $0$. So, there is at most one element in the set $X(g_1/2)$.
Thus we have, \begin{align*} \# \{ x \in \mathbb Z_{n^{\prime}} : g_x < g_1 \} & = \#X(g_1/2) + \sum_{b| g_1, \ b \ge 3} \# \{ x \in \mathbb Z_{n^{\prime}} : g_x =g_1/b \} \\ & = \#X(g_1/2) + \sum_{b| g_1, \ b \ge 3} \# X(g_1/b) \\ & \le 1+ \sum_{b| g_1, \ b \ge 3} \#Y(g_1/b) \quad [ \text{by Lemma \ref{lem:partitionproperties}(iii)}] \\ &= 1 + \sum_{b| g_1, \ b \ge 3} \gcd(k^{g_1/b} -1, n') \quad [ \text{by Lemma \ref{lem:partitionproperties}(iii)}]. \end{align*} (iii) As in Lemma \ref{theo:count}, let $g_1 = q_1^{\gamma_1} q_2^{\gamma_2}\ldots q_m^{\gamma_m}$ where $q_1 < q_2 < \ldots < q_m$ are primes. Then by Lemma \ref{theo:count}, \begin{align*} n^{\prime} \times \upsilon_{k,n^{\prime}} = G_1-G_2+G_3-G_4+ \ldots \le G_1 & = \sum\limits_{b=1}^{m} \gcd(k^{g_1/q_b}-1,n^{\prime}) \\ & < \sum\limits_{b=1}^{m} k^{g_1/q_b} \le 2k^{g_1/q_1} \end{align*} where the last inequality follows from the observation \[ \sum\limits_{b=1}^{m} k^{g_1/q_b} \le k^{g_1/q_1}\sum\limits_{b=1}^{m} k^{ - g_1(q_b - q_1)/q_1 q_b} \le k^{g_1/q_1}\sum\limits_{b=1}^{m} k^{ - (q_b - q_1) } \le k^{g_1/q_1} \sum\limits_{b=1}^{m} k^{ - (b - 1) } \le 2 k^{g_1/q_1}. \] \end{proof} \begin{lemma}\label{lem:gcdab} Let $b$ and $c$ be two fixed positive integers. Then for any integer $k \ge2 $, the following inequality holds in each of the four cases: \[\gcd(k^b \pm 1, k^c \pm 1) \le k^{ \gcd( b, c)} +1. \] \end{lemma} \begin{proof} The assertion follows trivially if one of $b$ and $c$ divides the other. So, we assume, without loss of generality, that $b < c$ and $b$ does not divide $c$. Since $k^c \pm1 =k^{c-b}(k^b +1) + (-k^{c-b} \pm 1)$, we can write \[ \gcd(k^b + 1, k^c \pm 1) = \gcd(k^b + 1, k^{c - b} \mp 1). \] Similarly, \[ \gcd(k^b - 1, k^c \pm 1) = \gcd(k^b - 1, k^{c - b} \pm 1).
\] Moreover, if we write $c_1 = c - \lfloor c/b \rfloor b$, then by repeating the above step $\lfloor c/b \rfloor$ times, we see that $\gcd(k^b \pm 1, k^c \pm 1) $ equals one of $\gcd(k^b \pm 1, k^{c_1} \pm 1)$. Now if $c_1$ divides $b$, then $\gcd(b, c) = c_1$ and we are done. Otherwise, we can repeat the whole argument with $c_1$ and $b$ playing the roles of $b$ and $c$ to deduce that $\gcd(k^b \pm 1, k^{c_1} \pm 1)$ is one of $\gcd(k^{b_1} \pm 1, k^{c_1} \pm 1)$ where $b_1 = b - \lfloor b/c_1 \rfloor c_1$. We continue in a similar fashion, reducing at each step one of the two exponents of $k$ in the gcd, and the lemma follows once we recall Euclid's recursive algorithm for computing the gcd of two numbers. \end{proof} \begin{lemma} \label{lem:fkn} (i) Fix $g \ge 1$. Suppose $k^g = -1+ s n$, $n \to \infty$ with $s=1$ if $g=1$ and $s = o(n^{p_1 -1})$ if $g>1$, where $p_1$ is the smallest prime divisor of $g$. Then $g_1=2g$ for all but finitely many $n$ and $\upsilon_{k,n} \to 0.$\\ \noindent (ii) Suppose $k^g = 1+ s n$, $g \geq 1$ fixed, $n \to \infty$ with $s=0$ if $g=1$ and $s = o(n^{p_1 -1})$ if $g>1$, where $p_1$ is the smallest prime divisor of $g$. Then $g_1=g$ for all but finitely many $n$ and $\upsilon_{k,n} \to 0.$ \end{lemma} \begin{proof} (i) First note that $\gcd(n, k)=1$ and therefore $n'=n$. When $g=1$, it is easy to check that $g_1 = 2$ and, by Lemma~\ref{lem:minus_one}(i), $\upsilon_{k, n} \le 2/n$. Now assume $g>1$. Since $k^{2g} = (sn-1)^2 =1 \text{ mod } n$, $g_1$ divides $2g$. Observe that $g_1 \ne g =2g/2$ because $k^g = -1 \text{ mod } n$. If $ g_1 = 2g /b$, where $b$ divides $2g$ and $b \ge 3$, then by Lemma~\ref{lem:gcdab}, \[ \gcd(k^{g_1}-1, n) = \gcd\Big (k^{2g/b}-1, (k^{g}+1)/s \Big) \le \gcd\Big (k^{2g/b}-1, k^{g}+1\Big ) \le k^{ \gcd(2g/b, \ g)} +1.\] Note that since $\gcd(2g/b, g)$ divides $g$ and $\gcd(2g/b, g) \le 2g/b < g$, we have $\gcd(2g/b, g) \le g/p_1$.
Consequently, \begin{equation}\label{eq:2gb_neg} \gcd(k^{2g/b}-1, n) \le k^{g/p_1} +1 \le (sn - 1)^{1/p_1} +1 = o(n), \end{equation} which contradicts the fact that $k^{g_1} = 1 \text{ mod } n $, which implies that $\gcd(k^{g_1}-1, n) = n$. Hence, $g_1 = 2g$. Now by Lemma \ref{lem:minus_one}(ii), it is enough to show that for any fixed $ b \ge 3$ such that $b$ divides $g_1$, \[ \gcd(k^{g_1/b} -1, n)/n = o(1) \ \ \text{as}\ \ n \to \infty,\] which we have already proved in \eqref{eq:2gb_neg}. \noindent (ii) Again $\gcd(n, k)=1$ and $n'=n$. The case $g=1$ is trivial, as then $g_x = 1$ for all $x \in \mathbb Z_n$ and $\upsilon_{k,n}= 0$. Since $k^g = 1 \text{ mod } n$, $ g_1$ divides $g$. If $g_1< g$, then $g_1 \le g/p_1$, which implies that $k^{g_1} \le k^{g/p_1} = (sn+1)^{1/p_1} = o(n)$, a contradiction. Thus, $g = g_1$. Now Lemma \ref{lem:minus_one}(iii) immediately yields \[ \upsilon_{k,n} < \frac{2 k^{g_1/p_1}}{n} \le \frac{2 (1+ sn)^{1/p_1}}{n} = o(1). \] \end{proof} \section{Proof of Theorem~\ref{thm:degenerate}, Theorem~\ref{theo:lsd12} and Theorem~\ref{theo:lsd45}}\label{sec:lsd} \subsection{Properties of eigenvalues of Gaussian circulant matrices} Suppose $\{a_l\}_{ l \ge 0}$ are independent, mean zero and variance one random variables. Fix $n$.
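Before analysing the eigenvalues, we note that the number-theoretic quantities of the preceding lemmas are easy to check by brute force. The following Python sketch is purely illustrative (the helper names are ours, and we assume $\gcd(k,n)=1$ so that $n'=n$); it verifies Lemma~\ref{lem:minus_one}(i) and the bound of Lemma~\ref{lem:gcdab} on small instances:

```python
from math import gcd

def g_order(x, k, n):
    # g_x = least b >= 1 with x * k^b == x (mod n); terminates since gcd(k, n) == 1
    b, y = 1, (x * k) % n
    while y != x:
        y = (y * k) % n
        b += 1
    return b

def upsilon(k, n):
    # upsilon_{k,n} = (1/n) * #{x in Z_n : g_x < g_1}
    g1 = g_order(1, k, n)
    return sum(1 for x in range(n) if g_order(x, k, n) < g1) / n

# Lemma (i): if g_1 = 2 then upsilon_{k,n} = gcd(k - 1, n)/n; e.g. k = n - 1
n, k = 12, 11
assert g_order(1, k, n) == 2
assert upsilon(k, n) == gcd(k - 1, n) / n

# gcd lemma: gcd(k^b +/- 1, k^c +/- 1) <= k^gcd(b, c) + 1, in all four sign cases
for k, b, c in [(2, 4, 6), (3, 5, 7), (5, 6, 9)]:
    for s1 in (-1, 1):
        for s2 in (-1, 1):
            assert gcd(k**b + s1, k**c + s2) <= k**gcd(b, c) + 1
```

The brute-force search over all of $\mathbb Z_n$ is of course only feasible for small $n$; it is meant as a sanity check, not as part of the argument.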
For $1 \le t < n$, let us split $\lambda_t$ into real and imaginary parts as $\lambda_t = a_{t,n} + i b_{t, n}$, that is, \begin{equation}\label{eq:atnbtn} a_{t, n}= \sum\limits_{l=0}^{n-1} a_{l} \cos \left( \frac{2 \pi t l}{n}\right),\ \ b_{ t,n}=\sum\limits_{l=0}^{n-1} a_{l} \sin \left( \frac{2 \pi t l}{n}\right).\end{equation} We shall use the following standard orthogonality relations: \begin{align}\label{eq:ortho} \sum_{l=0}^{n-1} \cos \left( \frac{2 \pi t l}{n}\right) \sin \left( \frac{2 \pi t' l}{n}\right)=0, \ \forall t, t' &\text{ and } \ \sum_{l=0}^{n-1} \cos^2 \left( \frac{2 \pi t l}{n}\right)=\sum_{l=0}^{n-1} \sin^2 \left( \frac{2 \pi t l}{n}\right)= n/2 \quad \forall 0< t < n .\\ \sum_{l=0}^{n-1} \cos \left( \frac{2 \pi t l}{n}\right) \cos \left( \frac{2 \pi t' l}{n}\right)=0, \ &\text{ and } \sum_{l=0}^{n-1} \sin \left( \frac{2 \pi t l}{n}\right) \sin \left( \frac{2 \pi t' l}{n}\right)=0 \quad \forall t \ne t' \ \ (\text{ mod } n). \end{align} For $z \in \mathbb C$, by $\bar z$ we mean, as usual, the complex conjugate of $z$. For all $0< t, t' < n$, the following identities can easily be verified using the above orthogonality relations: $$\mathbb{E}( a_{t,n}b_{t,n}) = 0, \ \ \text{and} \ \ \mathbb{E} (a_{t,n}^2 )= \mathbb{E}(b_{t,n}^2 ) = n/2,$$ $$\bar \lambda_t = \lambda_{n-t}, \ \ \mathbb{E}( \lambda_t \lambda_{t'}) = n \mathbb{I}( t+t'=n), \ \ \mathbb{E}( |\lambda_t|^2)=n.$$ The following lemma will be used in the proofs of Theorem~\ref{theo:lsd12} and Theorem~\ref{theo:lsd45}. \begin{lemma}\label{lem:product} \noindent Fix $k$ and $n$. Suppose that $\{a_l\}_{0 \le l < n}$ are i.i.d. standard normal random variables. Recall the notations $\mathcal{P}_j$ and $\Pi_j$ from Section~\ref{section:eigenvalues}. Then \\ \noindent(a) For every $n$, the variables $n^{-1/2}a_{t, n}, n^{-1/2}b_{t, n}$, $ 0 < t < n/2$, are i.i.d. normal with mean zero and variance $1/2$.
Consequently, any subcollection $\{\Pi_{j_1}, \Pi_{j_2}, \ldots\}$ of $\{\Pi_j\}_{0 \le j < \ell}$, such that no member of the corresponding partition blocks $\{\mathcal{P}_{j_1}, \mathcal{P}_{j_2}, \ldots\}$ is a conjugate of any other, consists of mutually independent random variables. \\ \noindent(b) Suppose $1 \le j < \ell$ and $\mathcal{P}_j \cap (n - \mathcal{P}_j) =\emptyset$. Then $n^{-n_j/2}\Pi_j$ is distributed as the $n_j$-fold product of i.i.d.\ random variables, each of which is distributed as $E^{1/2} U$ where $E$ and $U$ are independent random variables, $E$ is exponential with mean one and $U$ is uniform over the unit circle in $\mathbb R^2$.\\ \noindent (c) Suppose $1 \le j < \ell$, $\mathcal{P}_j = n - \mathcal{P}_j$ and $n/2 \not \in\mathcal{P}_j$. Then $n^{-n_j/2}\Pi_j$ is distributed as the $(n_j/2)$-fold product of i.i.d.\ exponential random variables with mean one. \end{lemma} \begin{proof} (a) Being linear combinations of $\{a_l\}_{0 \le l < n}$, the variables $n^{-1/2}a_{t, n}, n^{-1/2}b_{t, n}$, $ 0 < t < n/2$, are jointly Gaussian. By (\ref{eq:ortho}), they have mean zero and variance $1/2$, and are independent. \noindent (b) By part (a) of the lemma, note that $n^{-1/2} \lambda_t =n^{-1/2}a_{t, n}+ i n^{-1/2}b_{t, n}$ is a complex normal random variable whose real and imaginary parts have mean zero and variance $1/2$ for every $0< t < n$, and moreover, these are independent by the given restriction on $\mathcal{P}_j$. The assertion follows from the observation that such a complex normal has the same distribution as $E^{1/2} U$. \noindent (c) If $t \in \mathcal{P}_j$ then $n-t \in \mathcal{P}_j$ too and $t \ne n - t$. Thus $n^{-1} \lambda_t \lambda_{n-t} = n^{-1}( a_{t, n}^2 + b^2_{t, n})$ which, by part (a), is distributed as $Y/2$ where $Y$ is chi-square with two degrees of freedom. Note that $Y/2$ has the same distribution as an exponential random variable with mean one.
The proof is complete once we observe that $n_j$ is necessarily even and the $\lambda_t$'s associated with $\mathcal{P}_j$ can be grouped into $n_j/2$ disjoint pairs as above, which are mutually independent. \end{proof} \subsection{Proof of Theorem~\ref{thm:degenerate} } Recall the notation $\lambda_j, \ell, \mathcal{P}_j, n_j$ and $g_x$ from Section \ref{section:eigenvalues}. By Theorem \ref{theo:formula}, the eigenvalues of $n^{-1/2}A_{k,n}$ are given by \[ \exp \Big(\frac{2 \pi i s}{ n_j} \Big) \times \Big(\prod_{t \in \mathcal{P}_j} |n^{-1/2}\lambda_{t}| \Big) ^{1/n_j}, \quad 1\le s \le n_j, \ 0 \le j < \ell, \] where $i = \sqrt {-1}$. Fix any $\varepsilon>0$ and $0< \theta_1< \theta_2< 2\pi$. Define \[ B(\theta_1,\theta_2, \varepsilon) = \Big\{(x, y) \in \mathbb R^2: r -\varepsilon <\sqrt{ x^2 +y^2} < r+\varepsilon, \ \tan^{-1} (y/x) \in [\theta_1, \theta_2] \Big\}. \] Clearly, it is enough to prove that as $n \to \infty$, \begin{equation}\label{eq:main_th2} \frac 1n \sum_{ j=0}^{\ell -1} \sum_{ s = 1}^{n_j} \mathbb{I} \left( \exp \Big(\frac{2 \pi i s}{ n_j} \Big) \times \Big(\prod_{t \in \mathcal{P}_j} |n^{-1/2}\lambda_{t}| \Big) ^{1/n_j} \in B(\theta_1,\theta_2, \varepsilon) \right)\stackrel{P}{\to} \frac{(\theta_2 - \theta_1)}{2\pi}. \end{equation} Note that for a fixed positive integer $C$, we have \begin{align*} n^{-1}\sum_{ 1 \le j < \ell: n_j \le C} n_j & \le n^{-1} \sum_{u = 2}^C \# \Big \{ 1 \le x < n: g_x = u \Big \}\\ &\le n^{-1} \sum_{u = 2}^C\# \Big \{ 1 \le x < n: x k^u = x\text{ mod } n \Big \} \\ &= n^{-1} \sum_{u = 2}^C \# \Big \{ 1 \le x < n: x (k^u- 1) = s n \text{ for some } s \ge 1\Big \} \\ & \le n^{-1} \sum_{u = 2}^C (k^u - 1) \le n^{-1} Ck^C \to 0 \ \ \text{as}\ \ n \to \infty. \end{align*} Therefore, if we define \[ N_C = \sum_{ j =0 : \ n_j \le C}^{\ell -1} n_j ,\] then the above result combined with the fact that $\mathcal{P}_0 = \{0\}$ yields $N_C/n \to 0$.
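The counting bound used in the display above is also easy to confirm by enumeration. The sketch below is illustrative only (the helper name is ours); it checks that the number of fixed points of $x \mapsto xk^u$ on $\{1,\ldots,n-1\}$ equals $\gcd(k^u-1,n)-1$ and is therefore at most $k^u-1$:

```python
from math import gcd

def count_u_fixed(k, u, n):
    # #{1 <= x < n : x * k^u == x (mod n)}, the set bounding {x : g_x = u}
    return sum(1 for x in range(1, n) if (x * pow(k, u, n)) % n == x)

# the count equals gcd(k^u - 1, n) - 1, hence is at most k^u - 1
for n in (12, 35, 101):
    for k in (2, 3, 5):
        if gcd(k, n) > 1:
            continue
        for u in (1, 2, 3):
            c = count_u_fixed(k, u, n)
            assert c == gcd(k**u - 1, n) - 1
            assert c <= k**u - 1
```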
With $C> (2\pi)/ (\theta_2 - \theta_1)$, the left side of \eqref{eq:main_th2} can be rewritten as \begin{align} \label{eq:th2_intermediate} &\frac 1n \sum_{ j=0}^{\ell -1} \# \left \{s :\frac{2 \pi s}{ n_j} \in [\theta_1, \theta_2] , s = 1, 2 , \ldots, n_j \right \} \times \mathbb{I} \left( \Big(\prod_{t \in \mathcal{P}_j} |n^{-1/2}\lambda_{t}| \Big) ^{1/n_j} \in (r- \varepsilon, r+\varepsilon) \right) \notag \\ = & \frac{n-N_C}{n} \frac 1{n - N_C} \sum_{ j=0, \ n_j >C}^{\ell -1} n_j \times n_j^{-1} \# \left \{s :\frac{ s}{ n_j} \in (2 \pi )^{-1}[\theta_1, \theta_2] , s = 1, \ldots, n_j \right \} \notag \\ & \hspace{4cm} \times \mathbb{I} \left( \Big(\prod_{t \in \mathcal{P}_j} |n^{-1/2}\lambda_{t}| \Big) ^{1/n_j} \in (r- \varepsilon, r+\varepsilon) \right) + O\left (\frac{N_C}{n}\right) \notag\\ = & \frac 1{n - N_C} \sum_{ j=0, \ n_j >C}^{\ell -1} n_j \times \left ( \frac{(\theta_2 - \theta_1)}{2\pi} + O(C^{-1}) \right) \times \mathbb{I} \left( \Big(\prod_{t \in \mathcal{P}_j} |n^{-1/2}\lambda_{t}| \Big) ^{1/n_j} \in (r- \varepsilon, r+\varepsilon) \right) + O\left (\frac{N_C}{n}\right) \notag \\ =& \frac 1{n - N_C} \sum_{ j=0, \ n_j >C}^{\ell -1} n_j \times \frac{(\theta_2 - \theta_1)}{2\pi} \times \mathbb{I} \left( \Big(\prod_{t \in \mathcal{P}_j} |n^{-1/2}\lambda_{t}| \Big) ^{1/n_j} \in (r- \varepsilon, r+\varepsilon) \right) + O(C^{-1})+O\left (\frac{N_C}{n}\right) \notag\\ = & \frac{(\theta_2 - \theta_1)}{2\pi} - \frac{(\theta_2 - \theta_1)}{2\pi} \times \frac 1{n - N_C} \sum_{ j=0, \ n_j >C}^{\ell -1} n_j \times \mathbb{I} \left( \Big(\prod_{t \in \mathcal{P}_j} |n^{-1/2}\lambda_{t}| \Big) ^{1/n_j} \not \in (r- \varepsilon, r+\varepsilon) \right) + O(C^{-1})+O\left (\frac{N_C}{n}\right).
\end{align} To show that the second term in the above expression converges to zero in $L^1$, hence in probability, it remains to prove that \begin{equation} \label{eq:prodexpC} \mathbb{P} \left( \Big(\prod_{t \in \mathcal{P}_j} |n^{-1/2}\lambda_{t}| \Big) ^{1/n_j} \not \in (r- \varepsilon, r+\varepsilon) \right) \end{equation} is uniformly small for all $j$ such that $n_j>C$ and for all but finitely many $n$, provided we take $C$ sufficiently large. By Lemma~\ref{lem:product}, for each $1 \le t < n$, $|n^{-1/2}\lambda_t|^2 $ is an exponential random variable with mean one, and $\lambda_t$ is independent of $\lambda_{t'}$ if $t' \ne n-t$, while $|\lambda_t| = |\lambda_{t'}|$ otherwise. Let $ E, E_1, E_2, \ldots$ be i.i.d.\ exponential random variables with mean one. Observe that depending on whether or not $\mathcal{P}_j$ is conjugate to itself, \eqref{eq:prodexpC} equals, respectively, \[ \mathbb{P} \left( \Big(\prod_{t=1}^{n_j/2} E_t \Big) ^{1/n_j} \not \in (r- \varepsilon, r+\varepsilon) \right) \ \ \text{or} \ \ \mathbb{P} \left( \Big(\prod_{t=1}^{n_j} \sqrt E_t \Big) ^{1/n_j} \not \in (r- \varepsilon, r+\varepsilon) \right).\] The theorem now follows by letting first $n \to \infty $ and then $C \to \infty$ in \eqref{eq:th2_intermediate} and by observing that the Strong Law of Large Numbers implies that \[ \left( \prod_{t=1}^{C} \sqrt E_t \right)^{1/C} \to r = \exp( \mathbb{E} [ \log \sqrt E] ) \quad \text{almost surely, \ \ \ as } C \to \infty. \] $\square$ \subsection{Invariance Principle} For a set $B \subseteq \mathbb R^d, d \ge 1$, let $(\partial B)^{\eta}$ denote the `$\eta$-boundary' of the set $B$, that is, $(\partial B)^\eta := \Big \{ y \in \mathbb R^d: \|y - z\|\leq \eta \text{ for some } z \in \partial B \Big \}$. By $\Phi_{d}(\cdot)$ we always mean the probability distribution of a $d$-dimensional standard normal vector.
We drop the subscript $1$ and write just $\Phi(\cdot)$ to denote the distribution of a standard normal random variable. The proof of the following lemma follows easily from Theorem 18.1, page 181 of Bhattacharya and Ranga Rao (1976)~\cite{Ranga76}, and we omit it. \begin{lemma}\label{rangarao} Let $X_1,\ldots,X_{m}$ be $\mathbb R^d$-valued independent, mean zero random vectors and let $V_m={m}^{-1}\sum_{j=1}^{m} \mathrm{Cov} (X_j)$ be positive-definite. Let $G_m$ be the distribution of ${m}^{-1/2}T_m (X_1+X_2+ \cdots+X_m)$, where $T_m$ is the symmetric, positive-definite matrix satisfying $T_m^2=V_m^{-1}$. If for some $\delta>0$, $\mathbb{E} \|X_j\|^{(2+\delta)}<\infty$ for each $1 \le j \le m$, then there exist constants $C_i =C_i(d)$, $i=1, 2$, such that for any Borel set $A \subseteq \mathbb R^{d}$, \begin{eqnarray*} |G_m(A)-\Phi_{d}(A)|&\leq & C_1 m^{-\delta/2}\Big(m^{-1} \sum_{j=1}^{m} \mathbb{E}\| T_mX_j\|^{(2+\delta)}\Big)+2 \sup_{ y \in \mathbb R^d} \Phi_d \Big ( (\partial A)^{\eta} - y \Big),\\ &\leq & C_1 m^{-\delta/2}(\lambda _{\min}(V_m))^{-(2+\delta)}\rho_{2+\delta}+2 \sup_{ y \in \mathbb R^d} \Phi_d\Big ( (\partial A)^{\eta} - y \Big), \end{eqnarray*} where $\rho_{2+\delta}=m^{-1} \sum_{j=1}^{m} \mathbb{E}\| X_j\| ^{(2+\delta)}$, $\lambda _{\min}(V_m)>0$ is the smallest eigenvalue of $V_m$, and $ \eta = C_2 \rho_{2+\delta} m^{-\delta/2}$. \end{lemma} \subsection{Proof of Theorem~\ref{theo:lsd12}} Since $\gcd(k,n)=1$, $n^{\prime}=n$ in Theorem \ref{theo:formula} and hence there are no zero eigenvalues. By Lemma \ref{lem:fkn}(i), $\upsilon_{k,n} \to 0$ and hence the corresponding eigenvalues do not contribute to the LSD. It remains to consider only the eigenvalues corresponding to the sets $\mathcal{P}_j$ of size {\it exactly} equal to $g_1$. From Lemma \ref{lem:fkn}(i), $ g_1 =2g$ for $n$ sufficiently large.
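The conclusion $g_1 = 2g$ of Lemma~\ref{lem:fkn}(i) can be seen directly in toy cases. A quick illustrative sketch (the helper name is ours):

```python
def mult_order(k, n):
    # g_1 = least b >= 1 with k^b == 1 (mod n); assumes gcd(k, n) == 1
    b, y = 1, k % n
    while y != 1:
        y = (y * k) % n
        b += 1
    return b

# k^g = -1 mod n forces g_1 = 2g: e.g. g = 2 with k = 2, n = 5, or k = 3, n = 10
assert 2**2 % 5 == 5 - 1 and mult_order(2, 5) == 4
assert 3**2 % 10 == 10 - 1 and mult_order(3, 10) == 4
# and g = 1 with k = n - 1 gives g_1 = 2
assert mult_order(12, 13) == 2
```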
Recall the quantities $n_j = \#\mathcal{P}_j$, $\Pi_j = \prod_{t \in \mathcal{P}_j} \lambda_{t}$, $0 \le j < \ell$, where $\lambda_t = \sum\limits_{l=0}^{n- 1} a_{l} \omega^{tl}$. Also, for every integer $t \ge 0$, $tk^g = (-1+ sn) t = -t \text{ mod } n$, so that $\lambda_t$ and $\lambda_{n-t}$ belong to the same partition block $S(t) = S(n-t)$. Thus each $\Pi_j$ is a nonnegative real number. Let us define \[ J_n = \{ 0 \le j < \ell : \#\mathcal{P}_j = 2g \},\] so that $n = 2g \#J_n+n\upsilon_{k,n}$. Since $\upsilon_{k,n} \to 0$, $(\#J_n)^{-1}n \to 2g$. Without any loss, we denote the index set of such $j$ as $J_n=\{1, 2, \ldots, \#J_n\}$. Let $1, \varrho, \varrho^2, \ldots, \varrho^{2g-1}$ be all the $(2g)$th roots of unity. Since $n_j=2g$ for every $j \in J_n$, the eigenvalues corresponding to the set $\mathcal{P}_j$ are \[\Pi_j^{1/2g}, \ \Pi_j^{1/2g} \varrho, \ \ldots, \ \Pi_j^{1/2g}\varrho^{2g-1}.\] Hence, it suffices to consider only the empirical distribution of $\Pi_j^{1/2g}$ as $j$ varies over the index set $J_n$: if this sequence of empirical distributions has a limiting distribution $F$, say, then the LSD of the original sequence $n^{-1/2}A_{k,n} $ will be $(r, \theta)$ in polar coordinates, where $r$ is distributed according to $F$, $\theta$ is distributed uniformly across all the $(2g)$th roots of unity, and $r$ and $\theta$ are independent. With this in mind, and remembering the scaling $\sqrt n$, we consider $$F_{n}(x) =(\#J_n)^{-1}\sum_{j=1}^{\#J_{n}} \mathbb{I}\left( \left[n^{-g}\Pi_j\right]^{\frac{1}{2g}}\leq x \right).$$ Since the set of $\lambda$ values corresponding to any $\mathcal{P}_j$ is closed under conjugation, there exists a set $\mathcal{A}_j \subset \mathcal{P}_j$ of size $g$ such that \begin{equation*} \mathcal{P}_j = \{ x : x \in \mathcal{A}_j \text{ or } n- x \in \mathcal{A}_j \}.
\end{equation*} Combining each $\lambda_t$ with its conjugate, and recalling the definition of $\{a_{t, n}\}$ and $\{b_{t, n}\}$ in (\ref{eq:atnbtn}), we may write $\Pi_j$ as $$\Pi_j=\prod_{t \in \mathcal{A}_j } ( a_{t, n}^2+ b_{t, n}^2).$$ First assume that the random variables $\{a_l\}_{ l \ge 0}$ are i.i.d.\ standard normal. Then by Lemma~\ref{lem:product}(c), $F_{n}$ is the usual empirical distribution of $\#J_n$ observations on $(\prod_{j=1}^g E_j)^{1/2g}$, where $\{E_j\}_{1 \le j \le g}$ are i.i.d.\ exponentials with mean one. Thus by the Glivenko-Cantelli lemma, this converges to the distribution of $(\prod_{j=1}^g E_j)^{1/2g}$. Though the variables involved in the empirical distribution form a triangular sequence, the convergence is still almost sure due to the bounded nature of the indicator functions involved. This may be proved easily by applying Hoeffding's inequality and the Borel-Cantelli lemma. As mentioned earlier, the eigenvalues corresponding to any partition block $\mathcal{P}_j$ are all the $(2g)$th roots of the product $\Pi_j$. Thus, the limit claimed in the statement of the theorem holds. So we have proved the result when the random variables $\{a_l\}_{l \ge0}$ are i.i.d. standard normal. Now suppose that the variables $\{a_l\}_{l \ge0}$ are not necessarily normal. This case is tackled by normal approximation arguments similar to those of Bose and Mitra (2002)~\cite{Bosemitra02}, who deal with the case $k=n-1$ (and hence $g=1$). We now sketch some of the main steps. The basic idea remains the same, but in this general case a technical complication arises, as we need to control the Gaussian measure of the $\eta$-boundaries of some non-convex sets once we apply the invariance lemma (Lemma~\ref{rangarao}). We overcome this difficulty by a suitable compactness argument.
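The limiting radial law $(\prod_{j=1}^g E_j)^{1/2g}$ identified above is also easy to see in simulation. The sketch below is illustrative only (standard library only, helper name ours); it samples the radius and checks its concentration near $r=\exp(\mathbb{E}[\log \sqrt E])=e^{-\gamma/2}$, the almost-sure limit appearing in the proof of Theorem~\ref{thm:degenerate}:

```python
import math
import random

random.seed(0)

def sample_radius(g):
    # one draw of (E_1 * ... * E_g)^(1/(2g)) with E_j i.i.d. Exp(1)
    prod = 1.0
    for _ in range(g):
        prod *= random.expovariate(1.0)
    return prod ** (1.0 / (2 * g))

# for large g the radius concentrates near r = e^{-gamma/2} (about 0.749)
gamma = 0.57721566490153286  # Euler-Mascheroni constant
r = math.exp(-gamma / 2)
draws = [sample_radius(50) for _ in range(2000)]
assert abs(sum(draws) / len(draws) - r) < 0.05
```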
We start by defining $$F(x)=\mathbb{P}\left( \Big( \prod_{j=1}^g E_j \Big)^{1/2g}\leq x \right), \ \ x \in \mathbb R.$$ To show that the ESD converges to the required LSD in probability, we show that for every $x>0$, $$\mathbb{E}[F_{n}(x)] \rightarrow F(x)\ \ \mbox {and}\ \ \mathrm{Var}[F_{n}(x)]\rightarrow 0. $$ Note that for $x > 0$, $$\mathbb{E}[F_{n}(x)] = (\# J_n)^{-1} \sum_{j=1}^{\# J_n}\mathbb{P}\Big(n^{-g}\Pi_j \leq x^{2g}\Big). $$ Lemma \ref{lem:product} motivates using normal approximations. Towards using Lemma~\ref{rangarao}, define the $2g$-dimensional random vectors $$X_{l, j}= 2^{1/2} \left(a_{l} \cos \left( \frac{2 \pi t l}{n}\right), \ \ a_{l} \sin \left( \frac{2 \pi t l}{n}\right):\ \ t \in \mathcal{A}_j \right), \quad 0 \le l < n , \ 1 \le j \le \# J_n.$$ Note that $$\mathbb{E}(X_{l,j})=0 \ \ \text{and} \ \ n^{-1} \sum_{l=0}^{n-1}\text{Cov} (X_{l,j})=I_{2g} \ \ \forall \ j.$$ Fix $x>0$. Define the set $A \subseteq \mathbb R^{2g}$ as $$A:= \Big \{ (x_j, y_j : 1 \le j \le g): \prod_{j=1}^g \Big [2^{-1}( x_j^2 + y_j^2) \Big ]\leq x^{2g} \Big \}.$$ Note that $$\Big \{n^{-g}\Pi_j \leq x^{2g} \Big\} = \Big \{ n^{-1/2}\sum_{l=0}^{n-1} X_{l,j}\in A \Big\}.$$ We want to prove that $$\mathbb{E}[F_n(x)]- \Phi_{2g}(A) =(\#J_n)^{-1} \sum_{j=1}^{\#J_n} \left( \mathbb{P}\Big ( n^{-g} \Pi_j \leq x^{2g}\Big ) - \Phi_{2g}(A) \right) \rightarrow 0.$$ For that, it suffices to show that for every $\varepsilon>0$ there exists $N = N(\varepsilon)$ such that for all $n \ge N$, \[ \sup_{ j \in J_n} \left |\mathbb{P} \Big ( n^{-1/2}\sum_{l=0}^{n-1} X_{l,j}\in A \Big) -\Phi_{2g}(A) \right |\leq \varepsilon.\] Fix $\varepsilon>0$. Find $M_1>0$ large such that $\Phi([-M_1, M_1]^c) \le \varepsilon/(8g)$. By Assumption \texttt{I}, $\mathbb{E}(n^{-1/2} a_{t, n})^2 = \mathbb{E}(n^{-1/2} b_{t, n})^2 = 1/2$ for any $ n \ge 1$ and $0 < t < n$.
Now by Chebyshev's inequality, we can find $M_2>0$ such that for each $n \ge 1$ and each $0 < t< n$, \[ \mathbb{P} ( |n^{-1/2} a_{t, n}| \ge M_2) \le \varepsilon/(8g) \quad \text{ and } \ \ \mathbb{P} ( |n^{-1/2} b_{t, n}| \ge M_2) \le \varepsilon/(8g).\] Set $M = \max \{ M_1, M_2\}$. Define the set $B := \Big \{ (x_j, y_j: 1 \le j \le g) \in \mathbb R^{2g}: |x_j |, |y_j| \le M \ \ \forall j \Big\}$. Then for all sufficiently large $n$, \begin{align*} \left |\mathbb{P} \Big ( n^{-1/2}\sum_{l=0}^{n-1} X_{l,j}\in A \Big) - \Phi_{2g}(A )\right | \le \left | \mathbb{P} \Big ( n^{-1/2}\sum_{l=0}^{n-1} X_{l,j}\in A \cap B \Big) - \Phi_{2g}(A \cap B ) \right | + \varepsilon/2. \end{align*} We now apply Lemma~\ref{rangarao} to $A \cap B$ to obtain \[ \left | \mathbb{P} \Big ( n^{-1/2}\sum_{l=0}^{n-1} X_{l,j}\in A \cap B \Big) - \Phi_{2g}(A \cap B ) \right | \le C_1 n^{-\delta/2}\rho_{2+\delta} + 2 \sup_{z \in \mathbb R^{2g}} \Phi_{2g} \Big( (\partial (A \cap B) )^\eta - z \Big) \] where $$\rho_{2+\delta} =\sup_{ j \in J_n} n^{-1}\sum_{l=0}^{n-1} \mathbb{E}\|X_{l,j}\|^{2+\delta} \quad \text{ and } \ \ \eta = \eta(n) = C_2 \rho_{2+\delta} n^{-\delta/2}.$$ Note that $ \rho_{2+\delta} $ is uniformly bounded in $n$ by Assumption \texttt{I}. It thus remains to show that \[ \sup_{z \in \mathbb R^{2g}} \Phi_{2g} \Big( (\partial (A \cap B) )^\eta - z \Big) \le \varepsilon/8\] for all sufficiently large $n$. Note that \begin{align*} \sup_{z \in \mathbb R^{2g}} \Phi_{2g} \Big( (\partial (A \cap B) )^\eta - z \Big) &\le \sup_{z \in \mathbb R^{2g}} \int_{ (\partial (A \cap B) )^\eta} \phi(x_1 - z_1) \ldots \phi(y_{g} -z_{2g} ) dx_1 \ldots dy_{g} \\ &\le \int_{ (\partial (A \cap B) )^\eta} dx_1 \ldots dy_{g}. \end{align*} Finally, note that $\partial (A \cap B) $ is a {\em compact} $(2g-1)$-dimensional manifold which has zero measure under the $2g$-dimensional Lebesgue measure.
By compactness of $\partial (A \cap B) $, we have \[ (\partial (A \cap B))^{\eta} \downarrow \partial (A \cap B) \qquad \text{ as } \eta \to 0, \] and the claim follows by the Dominated Convergence Theorem. This proves that for $x >0$, \ $ \mathbb{E}[F_{n}(x)] \rightarrow F(x)$. To show that $\mathrm{Var}[F_{n}(x)]\rightarrow 0$, since the variables involved are all bounded, it is enough to show that $$n^{-2} \sum_{j\neq j^\prime} \text{Cov} \left ( \mathbb{I} \Big ( n^{-g} \Pi_j \leq x^{2g}\Big ), \ \mathbb{I} \Big ( n^{-g}\Pi_{j^\prime} \leq x^{2g}\Big )\right ) \rightarrow 0.$$ Along the lines of the proof used to show $\mathbb{E}[F_{n}(x)] \rightarrow F(x)$, one may now extend the vectors with $2g$ coordinates defined above to ones with $4g$ coordinates and proceed exactly as above to verify this. We omit the routine details. This completes the proof of Theorem~\ref{theo:lsd12}.\\ $\square$ \begin{remark} In view of Theorem \ref{theo:formula}, the above theorem can easily be extended to yield an LSD that has some positive mass at the origin. For example, fix $g>1$ and a positive integer $m$. Also, fix $m$ primes $q_1, q_2, \ldots, q_m$ and $m$ positive integers $\beta_1, \beta_2, \ldots, \beta_m$. Suppose $k$ and $n$ tend to infinity in such a way that \begin{itemize} \item[(i)] $k = q_1q_2 \ldots q_m \hat k $ and $n = q_1^{\beta_1} q_2^{\beta_2} \ldots q_m^{\beta_m}\hat n$ \ with $\hat k$ and $\hat n \to \infty$, \item[(ii)] $k^g = -1+ s \hat n$ where $s = o(\hat n^{p_1-1}) = o(n^{p_1-1})$ and $p_1$ is the smallest prime divisor of $g$. \end{itemize} Then $F_{{ n}^{-1/2}A_{k, n}}$ converges weakly in probability to the distribution which has $1- \prod_{j=1}^{m} q_j^{-\beta_j}$ mass at zero, and the rest of the probability mass is distributed as $U_1(\prod_{j=1}^g E_j)^{1/2g}$, where $U_1$ and $\{ E_j\}_{ 1 \le j \le g}$ are as in Theorem \ref{theo:lsd12}.
\end{remark} \subsection{Proof of Theorem~\ref{theo:lsd45}} We will not present the detailed proof of Theorem \ref{theo:lsd45} here, but let us sketch the main idea. First of all, note that $\gcd(k, n) =1$ under the given hypothesis. When $g=1$, then $k=1$ and the eigenvalue partition is the trivial partition consisting of singletons only, and clearly the partition sets $\mathcal{P}_j$, unlike in the previous theorem, are not self-conjugate. For $g \ge 2$, by Lemma~\ref{lem:fkn}(ii), it follows that $g_1 = g$ for $n$ sufficiently large and $\upsilon_{k,n} \to 0$. In this case also, the partition sets $\mathcal{P}_j$ are not necessarily self-conjugate. Indeed, we will show that the number of indices $j$ such that $\mathcal{P}_j$ is self-conjugate is asymptotically negligible compared to $n$. For that, we need to bound the cardinality of the following sets for $ 1 \le b < g_1=g$: \begin{align*} W_b := \Big\{ 0< t <n: tk^b = - t\text{ mod } n \Big \} = \Big \{ 0< t <n : n| t(k^b+1) \Big \}. \end{align*} Note that $t_0(b) := n/\gcd(n, k^b+1)$ is the minimum element of $W_b$ and every other element of $W_b$ is a multiple of $t_0(b)$. Thus the cardinality of $W_b$ can be bounded by \[ \#W_b \le n/ t_0(b) = \gcd( n, k^b+1).\] Let us now estimate $\gcd( n, k^b+1)$. For $ 1 \le b < g$, \begin{align*} \gcd( n, k^b+1) \le \gcd( k^g - 1, k^b+1) \le k^{\gcd(g, b)} +1 \le k^{g/p_1} +1 = (1+sn)^{1/p_1}+1 = o(n), \end{align*} which implies \begin{align*} n^{-1} \sum \limits_{ 1 \le b < g} \# W_b= o(1), \end{align*} as desired. So, we can ignore the partition sets which are self-conjugate. Let $J_n$ denote the set of all those indices $j$ for which $\#\mathcal{P}_j = g$ and $\mathcal{P}_j \cap (n - \mathcal{P}_j) =\emptyset$. Without loss, we assume that $J_n = \{ 1, 2, \ldots, \#J_n\}$. Let $1, \varrho, \varrho^2, \ldots, \varrho^{g-1}$ be all the $g$th roots of unity.
The eigenvalues corresponding to the set $\mathcal{P}_j, j \in J_n$, are \[\Pi_j^{1/g}, \ \Pi_j^{1/g} \varrho, \ \ldots, \ \Pi_j^{1/g}\varrho^{g-1}.\] For $j \in J_n$, unlike in the previous theorem, $\Pi_j=\prod_{t \in \mathcal{P}_j } ( a_{t, n}+ i b_{t, n})$ will be complex. Hence, we need to consider the empirical distribution \[ G_n(x, y) = \frac{1}{ g \#J_n} \sum_{j=1}^{\# J_n} \sum_{r=1}^g \mathbb{I} \left( \Pi_j^{1/g} \varrho^{r-1} \le x+ iy \right), \quad x, y \in \mathbb R, \] where for two complex numbers $w = x_1 + iy_1$ and $z = x_2 + iy_2$, by $w \le z$ we mean $x_1 \le x_2$ and $y_1 \le y_2$. If $\{a_l\}_{ l \ge 0}$ are i.i.d.\ $N(0,1)$, by Lemma~\ref{lem:product}, $\Pi_j^{1/g}, j \in J_n$, are independent and each of them is distributed as $\Big (\prod_{t=1}^gE_t \Big)^{1/2g} U_2$ as given in the statement of the theorem. This, coupled with the fact that $\varrho^{r-1}U_2$ has the same distribution as $U_2$ for each $1 \le r \le g$, implies that $\{G_n \}_{n \ge 1}$ converges to the desired LSD (say $G$) as described in the theorem. When $\{a_l\}_{ l \ge 0}$ are not necessarily normal but only satisfy Assumption \texttt{I}, we show that $\mathbb{E} G_n (x, y) \to G(x, y)$ and $\mathrm{Var}(G_n(x, y)) \to 0$ using the same line of argument as in the proof of Theorem~\ref{theo:lsd12}. For that, we again define the $2g$-dimensional random vectors $$X_{l, j}= 2^{1/2} \left(a_{l} \cos \left( \frac{2 \pi t l}{n}\right), \ \ a_{l} \sin \left( \frac{2 \pi t l}{n}\right):\ \ t \in \mathcal{P}_j \right), \quad 0 \le l < n , \ 1 \le j \le \# J_n,$$ which satisfy $$\mathbb{E}(X_{l,j})=0 \ \ \text{and} \ \ n^{-1} \sum_{l=0}^{n-1}\text{Cov} (X_{l,j})=I_{2g} \ \ \forall \ j.$$ Fix $x, y \in \mathbb R$.
Define the set $A \subseteq \mathbb R^{2g}$ as $$A:= \left \{ (x_j, y_j : 1 \le j \le g): \left( \prod_{j=1}^g \Big [2^{-1/2}( x_j + iy_j) \Big ] \right)^{1/g} \leq x + iy \right \}$$ so that $$\Big \{ \Pi_j^{1/g}\varrho^{r-1} \leq x +iy \Big\} = \Big \{ n^{-1/2}\sum_{l=0}^{n-1} X_{l,j}\in \varrho^{g+1-r} A \Big\}.$$ The rest of the proof can be completed following the proof of Theorem~\ref{theo:lsd12}, once we realize that for each $1 \le r \le g$, $\partial (\varrho^{g+1-r} A)$ is again a $(2g-1)$-dimensional manifold which has zero measure under the $2g$-dimensional Lebesgue measure. $\square$ \section{Proof of Theorem 5}\label{sec:spectralnorm} We start by defining the Gumbel distribution with parameter $\theta > 0$. \begin{definition} A probability distribution is said to be Gumbel with parameter $\theta > 0$ if its cumulative distribution function is given by \[ \Lambda_{\theta} (x)=\exp\{-\theta \exp (- x) \}, \ \ x \in \mathbb R. \] The special case $\theta =1$ is known as the standard Gumbel distribution and its cumulative distribution function is simply denoted by $\Lambda (\cdot)$. \end{definition} \begin{lemma} \label{lem:maxima} Let $E_1$ and $E_2$ be i.i.d. exponential random variables with mean one. Then \noindent (i) \begin{equation} \label{eq:m1} \overline K(x) := \mathbb{P} \Big( E_1E_2 > x \Big)= \int_0^\infty\exp(-y)\exp(-xy^{-1})dy \asymp \pi^{1/2} x^{1/4} \exp(-2x^{1/2}) \end{equation} as $x \to \infty$.\\ \noindent (ii) Let $G$ be the distribution of $(E_1E_2)^{1/4}$.
If $G_t$ are i.i.d.\ random variables with the distribution $G$, and $G^{(n)}:=\max \Big \{G_t: 1 \leq t \leq n \Big\}$, then \[ \frac{G^{(n)}-d_n}{c_n} \stackrel{\mathcal{D}}{\rightarrow} \Lambda, \] where $c_n$ and $d_n$ are normalising constants which can be taken as follows: \begin{equation}\label{eq:normalize} c_n =(8\log n)^{-1/2}\ \ \text{and} \ \ d_n=\frac{ (\log n)^{1/2} }{\sqrt 2}\left (1+\frac{1}{4}\frac{\log\log n}{\log n}\right)+\frac{1}{2(8\log n)^{1/2}} \log \frac{\pi}{2}. \end{equation} \end{lemma} \begin{proof} (i) Differentiating \eqref{eq:m1} twice, we get \begin{equation} \label{eq:m2} \frac{d^2}{dx^2}\overline K(x)=\int_0^\infty y^{-2}\exp(-y)\exp(-xy^{-1})dy, \end{equation} which implies that $\overline K$ satisfies the differential equation \begin{align}\label{eq:m3} x\frac{d^2}{dx^2}\overline K(x) -\overline K(x) &= - \int_0^{\infty} (1 - xy^{-2})\exp\bigl( - (y+ xy^{-1} )\bigr)dy \notag \\ &= \exp\bigl( - (y+ xy^{-1} )\bigr) \Big|_0^{\infty} =0, \ \ \text{for } x>0, \end{align} with the boundary conditions $\overline K(0)=1$ and $\overline K(\infty)=0$. From the theory of second-order differential equations, the only solution to \eqref{eq:m3} is $$\overline K(x)= \pi x^{1/2}H_1^1(2ix^{1/2}), \ \ x> 0,$$ where $i^2=-1$. The function $H_1^1$ is given by (see Watson (1944)~\cite{Watson44}) $$H_1^1(x)= J_1(x)+iY_1(x)$$ where $J_1$ and $Y_1$ are order one Bessel functions of the first and second kind respectively. It also follows from the asymptotic properties of the Bessel functions $J_1$ and $Y_1$ that \begin{equation}\label{eq:ktail} \overline K(x) \asymp \pi^{1/2} x^{1/4} \exp(-2x^{1/2}) \ \ \text{as} \ \ x \to \infty. \end{equation} \noindent (ii) Now from (\ref{eq:ktail}), \begin{equation}\label{eq:gtaildef} \overline G(x)=\mathbb{P}\{ (E_1E_2)^{1/4} > x\} \asymp \pi^{1/2} x \exp(-2x^2) \ \ \text{as} \ \ x \to \infty.
\end{equation} By Proposition 1.1 and the development on pages 43 and 44 of Resnick (1996)~\cite{Resnick96}, we need to show that $$\overline G(x)=\theta(x)(1-F_\#(x)) \ \ \text{where} \ \lim_{x\to \infty} \theta(x)=\theta > 0, $$ and that there exist some $x_0 $ and a function $f$ such that $f(y) > 0 $ for $y > x_0$, $f$ is absolutely continuous with $f^\prime(x)\to 0$ as $x \to \infty$, and \begin{equation} 1-F_\#(x)=\exp\Bigl( {-\int_{x_0}^x (1/f(y))dy}\Bigr), \,\ \ x> x_0. \end{equation} Moreover, a choice for the normalizing constants is then given by \begin{equation}\label{eq:defanbn} d^*_n=\Big( 1/(1-F_\#)\Big)^{-1}(n), \ \ c^*_n=f(d^*_n). \end{equation} Then \[ \frac{G^{(n)}-d^*_n}{c^*_n} \stackrel{\mathcal{D}}{\rightarrow} \Lambda_{\theta}.\] Towards this end, define for $ x\geq 1$, \begin{equation}\label{eq:candF} \theta(x)\asymp \pi^{1/2}e^{-2}, \ \ \ \ 1-F_\#(x)=x\exp{\Bigl(-2(x^2-1)\Bigr)}, \ \ x \geq 1=x_0. \end{equation} To solve for $f$, take $\log$ on both sides: \begin{equation} \log x-2(x^2-1)=-\int_1^x\frac{1}{f(y)}dy. \end{equation} Differentiating, $$\frac{1}{x}-2(2x)=-\frac{1}{f(x)}$$ or $$f(x)=\frac{x}{4x^2-1} \asymp \frac{1}{4x}\ \ \text{as}\ \ x\to \infty.$$ Note that $d^*_n$ (to be obtained) will tend to $\infty$ as $n \to \infty$. Hence $$c^*_n=f(d^*_n)\asymp(4d^*_n)^{-1}.$$ We now proceed to obtain (the asymptotic form of) $d^*_n$. Using the defining equation (\ref{eq:defanbn}), \begin{equation}\label{eq:defbn}d^*_n\exp\Bigl(-2((d^*_n)^2-1)\Bigr)=n^{-1}. \end{equation} Clearly, from the above, we may write $$d^*_n=\Big(\frac{\log n}{2}\Big )^{1/2}(1+\delta_n)$$ where $\delta_n \to 0$ is a \emph{positive} sequence to be appropriately chosen.
Thus, again using (\ref{eq:defbn}), we obtain $$(\log n) (\delta_n^2+2\delta_n)-\Big(\frac{1}{2}\log \log n+\xi_n\Big)=0,$$ where $$\xi_n=2-\frac{1}{2}\log 2+\log (1+\delta_n).$$ ``Solving" the quadratic, and then using the expansion $\sqrt{1+x}=1+\frac{1}{2}x+O(x^2)$ as $x\to 0$, we easily see that $$\delta_n=\frac{-2+\sqrt{4+4(\frac{1}{2}\log \log n +\xi_n)/\log n}}{2}= \frac{1}{2}\left (\frac{\frac{1}{2}\log \log n +\xi_n}{\log n}\right)+O\left(\frac{(\log \log n)^2}{(\log n)^2}\right).$$ Hence $$d^*_n=\Big(\frac{\log n}{2}\Big)^{1/2}\left(1+ \frac{\frac{1}{2}\log\log n+\xi_n}{2\log n}\right)+O\left(\frac{(\log \log n)^2}{(\log n)^{3/2}}\right).$$ Simplifying, and dropping appropriate small order terms, we see that \[ \frac{G^{(n)}-\hat d_n}{\hat c_n} \stackrel{\mathcal{D}}{\rightarrow} \Lambda_{\pi^{1/2} e^{-2}},\] where \begin{eqnarray*} \hat c_n = (8 \log n)^{-1/2} \ \ \text{and} \ \ \hat d_n = \frac{ (\log n)^{1/2} }{\sqrt 2}\left(1+\frac{1}{4}\frac{\log\log n}{\log n}\right)+ \frac{1}{(8\log n)^{1/2}} (2-\frac{1}{2}\log 2 ). \end{eqnarray*} To convert the above convergence to the standard Gumbel distribution, we use the following result of de Haan and Ferreira (2006) \cite[Theorem 1.1.2]{Haan06}, which says that the following two statements are equivalent for any sequences of constants $c_n>0$, $d_n$ and any nondegenerate distribution function $H$. \noindent (i) For each continuity point $x$ of $H$, \[ \lim_{n \to \infty} G^n( c_n x + d_n) = H(x).\] (ii) For each continuity point $x > 0$ of $H^{-1}(e^{-1/x} )$, \[ \lim_{ t \to \infty} \frac{ \left( 1/ (1 - G) \right)^{-1}( tx) - d_{[t]} }{c_{[t]}} = H^{-1}(e^{-1/x} ). \] Now the relation $\Lambda_{\theta}^{-1}(e^{-1/x} ) - \Lambda^{-1}(e^{-1/x} ) = \log \theta$ and a simple calculation yield that \[ c_n = \hat c_n, \ \ \ d_n = \hat d_n + \hat c_n \log(\pi^{1/2} e^{-2} ).\] \end{proof} \subsection{Some preliminary lemmas} First of all, note that $\gcd(k, n) =1$ and hence $n' =n$.
It is easy to check that $g_1 = 4$ and \[ \{ x \in \mathbb Z_n : g_x < g_1 \} = \left \{ \begin{array}{cc} \{0, n/2\} & \text{ if $n$ is even} \\ \{0\} & \text{ if $n$ is odd}. \end{array} \right.\] Thus the eigenvalue partition of $\{0, 1, 2, \ldots, n-1\}$ can be listed as $\mathcal{P}_1, \mathcal{P}_2, \ldots, \mathcal{P}_q $, each of which is of size $4$. Since each $\mathcal{P}_j, 1 \le j \le q$, is self-conjugate, we can find a set $\mathcal{A}_j \subset \mathcal{P}_j$ of size $2$ such that \begin{equation}\label{eq:a_t} \mathcal{P}_j = \{ x : x \in \mathcal{A}_j \text{ or } n- x \in \mathcal{A}_j \}. \end{equation} For any sequence of random variables $b = \{b_l\}_{ l \ge 0}$, define \begin{equation} \label{eq:srformula1} \beta_{b, n}(j) = n^{-2} \prod_{ t \in \mathcal{A}_j} \left | \sum\limits_{l=0}^{n-1} b_{l} \omega^{ t l} \right |^2, \ \ \ \omega = \exp \left(\frac{2 \pi i }{n} \right), \quad 1 \le j \le q. \end{equation} The next lemma helps us to go from bounded to unbounded entries. For each $n \ge 1$, define a triangular array of centered random variables $\{ \bar a^{(n)}_l \}_{ 0 \le l < n} $ by \[ \bar a_l = \bar a^{(n)}_l = a_l I_{|a_l| \le n^{ 1/ \gamma} } - \mathbb{E} a_l I_{|a_l| \le n^{ 1/ \gamma} }.\] \begin{lemma}[Truncation] \label{lem:trunc} Assume $\mathbb{E} | a_l|^{ \gamma} < \infty$ for some $\gamma > 2$. Then, almost surely, \[ \max_{ 1 \le j \le q} (\beta_{a, n}(j) ) ^{1/4} - \max_{ 1 \le j \le q} (\beta_{\bar a, n}(j) ) ^{1/4} = o(1).\] \end{lemma} \begin{proof} Since $\sum_{l=0}^{n-1} \omega^{ t l} = 0$ for $0 < t <n$, it follows that $\beta_{\bar a, n}(j) = \beta_{\tilde a, n}(j)$, where \[ \tilde a_l = \tilde a^{(n)} _l = \bar a_l + \mathbb{E} a_l I_{|a_l| \le n^{ 1/ \gamma} }=a_l I_{|a_l| \le n^{ 1/ \gamma}}.\] By the Borel-Cantelli lemma, with probability one, $ \sum_{l=0}^{\infty} |a_l| I_{|a_l | > l^{1/\gamma} }$ is finite and has only finitely many non-zero terms.
Thus there exists an integer $N \ge 0$, which may depend on the sample point, such that \begin{align}\label{eq:trucbc} \sum_{l=m}^{n-1}| \tilde a^{(n)}_l - a_l | = \sum_{l=m}^{n-1} |a_l| I_{|a_l | > n^{1/\gamma} } \le \sum_{l=m}^{\infty} |a_l| I_{|a_l | > l^{1/\gamma} } = \sum_{l=m}^{N} |a_l| I_{|a_l | > l^{1/\gamma} }. \end{align} Consequently, if $ m > N$, the left side of \eqref{eq:trucbc} is zero. Therefore, the terms of the two sequences $\{a_l \}_{ m \le l < n} $ and $\{\tilde a^{(n)}_l\}_{ m \le l < n}$ are identical almost surely for all sufficiently large $n$, and the assertion follows immediately. \end{proof} \begin{lemma}[Bonferroni inequality] \label{bonferroni} Let $(\Omega, \mathcal F, \mathbb{P})$ be a probability space and let $B_1, B_2, \ldots, B_n$ be events from $\mathcal F$. Then for every integer $m \ge 1$, \begin{equation} \label{e:bonf} \sum_{j=1}^{2m} (-1)^{j-1} S_{j,n} \le \mathbb{P} \Big( \bigcup_{i=1}^n B_i \Big ) \le \sum_{j=1}^{2m-1} (-1)^{j-1}S_{j,n}, \end{equation} where \[ S_{j,n} := \sum_{ 1 \le i_1 < i_2 < \ldots < i_j \le n} \mathbb{P} \Big( \bigcap_{l=1}^j B_{i_l} \Big ). \] \end{lemma} \begin{lemma}\label{lem:tailestimation} Fix $ x \in \mathbb R$. Let $E_1, E_2, c_n$ and $d_n$ be as in Lemma~\ref{lem:maxima}. Let $\sigma^2_n = n^{-c}$, $c>0$.
Then there exists some positive constant $K=K(x)$ such that \[ \mathbb{P} \left ( (E_1 E_2)^{1/4} > (1 + \sigma^2_n)^{-1/2} (c_n x + d_n) \right) \le \frac{K}{n},\ \ x \in \mathbb R.\] \end{lemma} \begin{proof} Since $(1 +y)^{-1/2} \ge 1 - y/2$ for $y > 0$, \[ \mathbb{P} \left ( (E_1 E_2)^{1/4} > (1 + \sigma^2_n)^{-1/2} (c_n x + d_n) \right) \le \mathbb{P} \left ( (E_1 E_2)^{1/4} > (1 - \sigma^2_n/2 )(c_n x + d_n) \right).\] Recall the representation $$ \mathbb{P}( (E_1 E_2)^{1/4} > x ) =\theta(x)(1-F_\#(x)) \text{ as } x \to \infty.$$ Note that $(1 - \sigma^2_n/2) (c_n x + d_n) = d^*_n + (d_n - d^*_n) + c_n x- (c_n x + d_n) \sigma^2_n/2 = d^*_n +o_x(1)$, where we use the facts that $c_n \to 0 $, $ (d_n - d^*_n)/ c_n = o(1)$ and $d_n \sim \sqrt{ \log n}$. The lemma now easily follows once we note that $ 1- F_{\#} (d^*_n) = n^{-1}$. \end{proof} \subsection{A strong invariance principle} We now state the normal approximation result and a suitable corollary that we need. For $d \ge 1$ and any distinct integers $i_1, i_2, \ldots, i_d$ from $\Big \{ 1,2, \ldots, \lceil \frac{n-1}{2}\rceil \Big \}$, define \[ v_{2d}(l) = \left( \cos\left ( \frac{ 2 \pi {i_j} l}{n} \right), \sin \left ( \frac{ 2 \pi {i_j} l}{n}\right) : 1 \le j \le d \right)^T, \quad l \in \mathbb Z_n.\] Let $\varphi_{\Sigma}(\cdot)$ denote the density of the $2d$-dimensional Gaussian vector having mean zero and covariance matrix $\Sigma$, and let $I_{2d}$ be the identity matrix of order $2d$. \begin{lemma}[Normal approximation, Davis and Mikosch (1999) \cite{Mikosch99}] \label{lm:normalapprox} Fix $d \ge 1$ and $\gamma > 2$, and let $\tilde p_n$ be the density function of \[ 2^{1/2} n^{-1/2} \sum_{l=0}^{n-1}( \bar a_{l} + \sigma_n N_{l} ) v_{2d}(l),\] where $\{N_l\}_{ l \ge 0}$ is a sequence of i.i.d.\ $N(0,1)$ random variables, independent of $\{a_l\}_{ l \ge 0}$, and $\sigma_n^2 = \mathrm{Var}(\bar a_0)s_n^2$.
If $n ^{-2c} \ln n \le s_n^2 \le 1 $ with $c = 1/2 - (1-\delta)/ \gamma$ for arbitrarily small $\delta >0$, then the relation \[ \tilde p_n(x) = \varphi_{(1+ \sigma_n^2)I_{2d}}(x) (1 + \varepsilon_n ) \quad \text{ with } \varepsilon_n \to 0 \] holds uniformly for $\|x\|^3 = o_d ( n^{1/2 - 1/ \gamma}), \ x \in \mathbb R^{2d} $. \end{lemma} \begin{corollary}\label{cor:approx_by_normal} Let $\gamma > 2$ and $\sigma_n^2 = n^{-c}$, where $c$ is as in Lemma \ref{lm:normalapprox}. Let $B \subseteq \mathbb R^{2d}$ be a measurable set. Then \[ \left | \int_{B} \tilde p_n(x) dx- \int_{B} \varphi_{(1+ \sigma_n^2)I_{2d}}(x) dx \right | \le \varepsilon_n \int_{B} \varphi_{(1+ \sigma_n^2)I_{2d}}(x) dx + O_d (\exp ( - n^{\eta} ) ), \] for some $\eta > 0$ and uniformly over all the $d$-tuples of distinct integers $ 1 \le i_1< i_2< \ldots < i_d \le \lceil \frac{n-1}{2}\rceil$. \end{corollary} \begin{proof} Set $r = n^{\alpha}$, where $0< \alpha < 1/2 - 1/ \gamma $. Using Lemma \ref{lm:normalapprox}, we have \begin{align*} &\left | \int_{B} \tilde p_n(x) dx- \int_{B} \varphi_{(1+ \sigma_n^2)I_{2d}}(x) dx \right | \\ \le& \left | \int_{B \cap \{\|x \| \le r \}} \tilde p_n(x) dx- \int_{B\cap \{\|x \| \le r \}} \varphi_{(1+ \sigma_n^2)I_{2d}}(x) dx \right| + \int_{B\cap \{\|x \| > r \}} \tilde p_n(x) dx + \int_{B\cap \{\|x \| > r \}} \varphi_{(1+ \sigma_n^2)I_{2d}}(x) dx\\ \le& \varepsilon_n \int_{B \cap \{\|x \| \le r \}} \varphi_{(1+ \sigma_n^2)I_{2d}}(x) dx + \int_{ \{\|x \| > r \}} \tilde p_n(x) dx + \int_{ \{\|x \| > r \} } \varphi_{(1+ \sigma_n^2)I_{2d}}(x) dx=T_1+T_2+T_3 \ \ \text{(say)}. \end{align*} Let $v^{(j)}_{2d}(l)$ denote the $j$-th coordinate of $v_{2d}(l)$, $1 \le j \le 2d$.
Then, using the normal tail bound $ \mathbb{P} \big( | N(0, \sigma^2) | > x \big ) \le 2 e^{-x/\sigma}$ for $x > 0$, \begin{align*} T_2&=\int_{ \{\|x \| > r \}} \tilde p_n (x) dx = \mathbb{P} \left ( \Big \| 2^{1/2} n^{-1/2} \sum_{l=0}^{n-1}( \bar a_{l} + \sigma_n N_{l} ) v_{2d}(l) \Big \| > r \right)\\ &\le 2d \max_{ 1 \le j \le 2d} \mathbb{P} \left ( \Big | 2^{1/2} n^{-1/2} \sum_{l=0}^{n-1}( \bar a_{l} + \sigma_n N_{l} ) v^{(j)}_{2d}(l) \Big | > r/(2d) \right)\\ &\le 2d \max_{ 1 \le j \le 2d} \mathbb{P} \left ( \Big | n^{-1/2} \sum_{l=0}^{n-1} \bar a_{l} v^{(j)}_{2d}(l) \Big | > r /(4\sqrt 2 d) \right) + 4d \exp \Big( - r n^{c/2} /(4\sqrt 2d) \Big). \end{align*} Note that $\bar a_{l} v^{(j)}_{2d}(l), 0 \le l < n$, are independent, have mean zero and variance at most one, and are bounded by $2n^{1/\gamma}$. Therefore, by applying Bernstein's inequality and simplifying, for some constant $K >0$, \begin{align*} \mathbb{P} \left ( \Big | n^{-1/2} \sum_{l=0}^{n-1} \bar a_{l} v^{(j)}_{2d}(l) \Big | > r /(4\sqrt 2 d) \right) \le \exp( - K r^2). \end{align*} Further, \[ T_3 =\int_{ \{\|x \| > r \} } \varphi_{(1+ \sigma_n^2)I_{2d}}(x) dx \le 4d \exp( - r/4d). \] Combining the above estimates finishes the proof. \end{proof} \subsection{Proof of Theorem~\ref{theo:max}} First assume that $n$ is even. Then $k$ must be odd and $S(n/2) =\{ n/2\}$. Thus, with the previous notation, \[ \texttt{sp}( n^{-1/2}A_{k,n} ) = \max \left \{ \max_{ 1 \le j \le q} (\beta_{a, n}(j) ) ^{1/4}, |n^{-1/2}\lambda_0|, \ |n^{-1/2}\lambda_{n/2}| \right \}.\] Since $d_q \to \infty$ and $c_q \to 0$, by Chebyshev's inequality, we have \[ \sup_{0 \le t < n} \mathbb{P} \left ( |n^{-1/2}\lambda_t| \ge x c_q + d_q \right) \to 0, \quad \text{for each } x \in \mathbb R.\] Thus, finding the limiting distribution of $ \texttt{sp}(n^{-1/2}A_{k,n} )$ is asymptotically equivalent to finding the limiting distribution of $\max_{ 1 \le j \le q} (\beta_{a,n}(j) ) ^{1/4}$.
Clearly, this is also true if $n$ is odd, as that case is even simpler. Now, as in the proof of Theorem \ref{theo:lsd12}, first assume that $\{a_l\}_{ l \ge 0}$ are i.i.d.\ standard normal. Let $\{ E_j \}_{j \ge 1}$ be i.i.d.\ standard exponentials. By Lemma~\ref{lem:product}, it easily follows that $$\mathbb{P}\Big(\max_{ 1 \le t \le q} (\beta_{a, n}(t) ) ^{1/4} > c_qx+ d_q\Big) =\mathbb{P}\Big((E_{2j-1}E_{2j})^{1/4} > c_qx+ d_q \text{ for some } 1 \le j \le q\Big).$$ The theorem then follows in this special case from Lemma \ref{lem:maxima}. We now tackle the general case by using truncation of $\{a_l\}_{ l \ge 0}$, Bonferroni's inequality and the strong normal approximation result given in the previous subsections. Fix $x \in \mathbb R$. For notational convenience, define \begin{align*} Q^{(n)}_1 &:= \mathbb{P} \left( \max_{ 1 \le j \le q} (\beta_{\bar a + \sigma_n N, n}(j) ) ^{1/4} > c_q x + d_q\right),\\ Q^{(n)}_2 &:=\mathbb{P} \left( \max_{ 1 \le j \le q} (1+ \sigma_n^2)^{1/2} (E_{2j-1} E_{2j} ) ^{1/4} > c_q x + d_q \right ) , \end{align*} where $\{N_l\}_{ l \ge 0}$ is a sequence of i.i.d.\ standard normal random variables. Our goal is to approximate $ Q^{(n)}_1$ by the simpler quantity $Q^{(n)}_2$. By Bonferroni's inequality, for all $m \ge 1$, \begin{equation}\label{eq:Ssandwich} \sum_{j=1}^{2m} (-1)^{j-1} S_{j,n} \le Q^{(n)}_1 \le \sum_{j=1}^{2m-1} (-1)^{j-1}S_{j,n}, \end{equation} where \[ S_{j,n} = \sum_{ 1 \le t_1 < t_2 < \ldots < t_j \le q} \mathbb{P} \left( (\beta_{\bar a + \sigma_n N, n}(t_1) ) ^{1/4} > c_q x + d_q, \ldots, (\beta_{\bar a + \sigma_n N, n}(t_j) ) ^{1/4} > c_q x + d_q \right ).
\] Similarly, we have \begin{equation}\label{eq:Tsandwich} \sum_{j=1}^{2m} (-1)^{j-1} T_{j,n} \le Q^{(n)}_2 \le \sum_{j=1}^{2m-1} (-1)^{j-1}T_{j,n}, \end{equation} where \[ T_{j,n} = \sum_{ 1 \le t_1 < t_2 < \ldots < t_j \le q} \mathbb{P} \Big( (1+ \sigma_n^2)^{1/2} (E_{2t_1-1} E_{2t_1} ) ^{1/4} > c_q x + d_q, \ldots,(1+ \sigma_n^2)^{1/2} (E_{2t_j-1} E_{2t_j} ) ^{1/4} > c_q x + d_q \Big ). \] Therefore, the difference between $Q^{(n)}_1$ and $Q^{(n)}_2$ can be bounded as follows: \begin{equation}\label{eq:maindiff} \sum_{j=1}^{2m} (-1)^{j-1} (S_{j,n} - T_{j,n} ) - T_{2m+1, n} \le Q^{(n)}_1 -Q^{(n)}_2 \le \sum_{j=1}^{2m-1} (-1)^{j-1}(S_{j,n} - T_{j,n}) + T_{2m, n}, \end{equation} for each $m \ge 1$. By independence and Lemma \ref{lem:tailestimation}, there exists $K = K(x)$ such that \begin{equation}\label{eq:Tbound} T_{j,n} \le { n \choose j} \frac{K^j}{n^j} \le \frac{K^j}{j!} \quad \text{for all } n, j \ge 1. \end{equation} Consequently, $\lim_{j \to \infty} \limsup_n T_{j,n} = 0$. Now fix $j \ge 1$. Let us bound the difference between $S_{j,n}$ and $T_{j,n}$. Let $\mathcal{A}_{t}$, defined in \eqref{eq:a_t}, be represented as $\mathcal{A}_{t}=\{ e_{t}, e'_{t} \}$. For $1 \le t_1 < t_2 < \ldots < t_j \le q$, define \[ v_{2j}(l) = \left ( \cos \left ( \frac{ 2 \pi l e_{t_1} }{n} \right) , \sin\left ( \frac{ 2 \pi l e_{t_1} }{n} \right) , \cos \left ( \frac{ 2 \pi l e'_{t_1} }{n} \right) , \sin \left ( \frac{ 2 \pi l e'_{t_1} }{n} \right) , \ldots, \cos \left( \frac{ 2 \pi l e_{t_j}' }{n} \right) , \sin \left ( \frac{ 2 \pi l e_{t_j}' }{n} \right) \right). \] Then, \begin{align*} \mathbb{P} \Big( (\beta_{\bar a + \sigma_n N, n}(t_1) ) ^{1/4} > c_q x + d_q, \ldots, (\beta_{\bar a + \sigma_n N, n}(t_j) ) ^{1/4} > c_q x + d_q \Big )\\ = \mathbb{P} \Big( 2^{1/2}n^{-1/2} \sum_{l = 0}^{n-1} (\bar a_l + \sigma_n N_l) v_{2j}(l) \in B^{(j)}_n \Big), \end{align*} where \[ B^{(j)}_n := \left \{ y \in \mathbb R^{4j}: (y_{4t +1} ^2+ y_{4t+2}^2)^{1/4} (y_{4t +3} ^2+ y_{4t+4}^2)^{1/4} > 2^{1/2}( c_qx+ d_q), 0 \le t < j \right \}.
\] By Corollary \ref{cor:approx_by_normal} and the fact $N_1^2 + N_2^2 \stackrel{\mathcal{D}}{=} 2E_1$, we deduce that uniformly over all the $j$-tuples $1 \le t_1< t_2 < \ldots < t_j \le q$, \begin{align*} &\left| \mathbb{P} \Big( 2^{1/2}n^{-1/2} \sum_{l = 0}^{n-1} (\bar a_l + \sigma_n N_l) v_{2j}(l) \in B^{(j)}_n \Big) - \mathbb{P}\Big( (1+ \sigma_n^2)^{1/2} (E_{2t_m -1} E_{2t_m})^{1/4} > c_qx+ d_q, 1 \le m \le j \Big) \right|\\ &\le \varepsilon_n \mathbb{P}\Big( (1+ \sigma_n^2)^{1/2} (E_{2t_m -1} E_{2t_m})^{1/4} > c_qx+ d_q, 1 \le m \le j \Big) + O(\exp(-n^{\eta}) ). \end{align*} Therefore, as $n \to \infty$, \begin{equation}\label{eq:STdiff} |S_{j,n} - T_{j,n}| \le \varepsilon_n T_{j,n} + { n \choose j} O(\exp(-n^{\eta}) ) \le \varepsilon_n \frac{K^j}{j!} + o(1) \to 0, \end{equation} where $O(\cdot)$ and $o(\cdot)$ are uniform over $j$. Hence, using (\ref{eq:Ssandwich}), (\ref{eq:Tsandwich}), (\ref{eq:Tbound}) and (\ref{eq:STdiff}), we have \[ \limsup_n |Q^{(n)}_1 - Q^{(n)}_2| \le \limsup_n T_{2m+1, n} + \limsup_n T_{2m,n} \quad \text{ for each } m \ge 1. \] Letting $m \to \infty$, we conclude $ \lim_n (Q^{(n)}_1 - Q^{(n)}_2)= 0$. Since by Lemma \ref{lem:maxima}, \[ \displaystyle \max_{ 1 \le j \le q} (E_{2j-1} E_{2j} ) ^{1/4} = O_p( (\log n)^{1/2} ) \ \ \text{and}\ \ \sigma_n^2 = n^{-c},\] it follows that \[ \displaystyle \frac{ (1+ \sigma_n^2)^{1/2} \max_{ 1 \le j \le q} (E_{2j-1} E_{2j} ) ^{1/4} - d_q}{c_q} \stackrel{\mathcal{D}}{\to} \Lambda\] and consequently, \begin{align*} \displaystyle \frac{ \max_{ 1 \le j \le q} (\beta_{\bar a + \sigma_n N, n}(j) ) ^{1/4} - d_q}{c_q} \stackrel{\mathcal{D}}{\to} \Lambda.
\end{align*} In view of Lemma \ref{lem:trunc}, it now suffices to show that \[ \max_{ 1 \le j \le q} (\beta_{\bar a + \sigma_n N, n}(j) ) ^{1/4} - \max_{ 1 \le j \le q} (\beta_{\bar a , n}(j) ) ^{1/4} = o_p( c_q).\] We use the basic inequality \[ \big| |z_1z_2| - |w_1w_2| \big| \le \big(|z_1| + |w_2| \big) \max \Big \{ |z_1 - w_1|, |z_2 - w_2| \Big \}, \quad z_1, z_2, w_1, w_2 \in \mathbb C, \] to obtain \begin{align*} \left | \max_{ 1 \le j \le q} (\beta_{\bar a + \sigma_n N, n}(j) ) ^{1/2} - \max_{ 1 \le j \le q} (\beta_{\bar a , n}(j) ) ^{1/2} \right | &\le \Big (M_n( \bar a + \sigma_n N) + M_n( \bar a) \Big ) M_n( \sigma_n N) \\ &\le \Big (2 M_n( \bar a + \sigma_n N) + M_n( \sigma_n N) \Big ) M_n( \sigma_n N), \end{align*} where, for any sequence of random variables $X = \{X_l\}_{ l \ge 0}$, $$M_n(X) := \max_{ 1 \le t \le n} \left | n^{-1/2} \sum\limits_{l=0}^{n-1} X_{l}\omega^{ t l} \right |.$$ As a trivial consequence of Theorem 2.1 of Davis and Mikosch (1999) \cite{Mikosch99}, we have \[M_n^2( \sigma_n N) = O_p( \sigma_n \log n)\ \ \text{and} \ \ M_n^2( \bar a + \sigma_n N) = O_p(\log n).\] Together with $\sigma_n = n^{-c/2}$, they imply that \[ \max_{ 1 \le j \le q} (\beta_{\bar a + \sigma_n N, n}(j) ) ^{1/2} - \max_{ 1 \le j \le q} (\beta_{\bar a , n}(j) ) ^{1/2} = o_p(n^{-c/4}).\] From the inequality \[ |\sqrt{y_1} - \sqrt{y_2}| \le \frac{1}{ \min \{ \sqrt{y_1}, \sqrt{y_2} \} } |y_1 - y_2|, \ \ y_1, y_2 > 0,\] it easily follows that \[ \max_{ 1 \le j \le q} (\beta_{\bar a + \sigma_n N, n}(j) ) ^{1/4} - \max_{ 1 \le j \le q} (\beta_{\bar a , n}(j) ) ^{1/4} = o_p(n^{-c/8}) = o_p(c_q).\] This completes the proof of Theorem \ref{theo:max}. $\Box$ \section{Concluding remarks and open problems} To establish the LSD of $k$-circulants for more general subsequential choices of $(k, n)$, a much more comprehensive study of the orbits of the translation operator acting on the ring $\mathbb Z_{n'}$ by $\mathbb T_k(x) = x k \text{ mod } n'$ is required.
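The orbit structure of $\mathbb T_k$ is easy to explore computationally. The following sketch (our own illustration, not part of the paper; the function name is hypothetical) counts, for each orbit size $h$ of $x \mapsto kx \bmod n'$, the number of elements $\#\{x \in \mathbb Z_{n'} : g_x = h\}$, which is exactly the kind of proportion whose limiting behaviour is discussed here.

```python
def orbit_size_counts(k, nprime):
    """Count elements of Z_{n'} by the size g_x of their orbit under
    x -> k*x (mod n'); assumes gcd(k, n') == 1, so the map is a bijection.

    Returns a dict {h: number of x with g_x == h}."""
    seen = [False] * nprime
    counts = {}
    for x in range(nprime):
        if seen[x]:
            continue
        # Trace the orbit of x until it closes up.
        orbit_len, y = 0, x
        while not seen[y]:
            seen[y] = True
            orbit_len += 1
            y = (y * k) % nprime
        counts[orbit_len] = counts.get(orbit_len, 0) + orbit_len
    return counts
```

For instance, multiplication by $2$ on $\mathbb Z_{15}$ has the fixed point $\{0\}$, the orbit $\{5,10\}$, and three orbits of size $4$, so `orbit_size_counts(2, 15)` returns `{1: 1, 2: 2, 4: 12}`.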
In particular, one may perhaps first establish an asymptotic negligibility criterion similar to that given in Lemma~\ref{lem:fkn}. Then one may proceed along lines similar to the proofs of Theorems~\ref{theo:lsd12} and \ref{theo:lsd45}: first use the abundant independence structure among the eigenvalues of the $k$-circulant matrices when the input sequence is i.i.d.\ normal, as given in Lemma~\ref{lem:product}, and then claim universality through an appropriate use of the invariance principle. What particularly complicates matters is that in general there {\it may} be contributing classes of several sizes, as opposed to only one (of size $2g$ or $g$) that we saw in Theorems~\ref{theo:lsd12} and \ref{theo:lsd45} respectively. Thus it is also interesting to investigate whether we can select $k = k(n)$ in a relatively simple way so that there exist finitely many positive integers $h_1, h_2, \ldots, h_r, r>1 $, with \[ \# \big \{ x \in \mathbb Z_{n'} : g_x = h_j \big \}/n' \to c_j >0, \quad 1 \le j \le r, \] where $c_1 + \ldots+ c_r =1$. In that case the LSD would be an attractive mixture distribution. Establishing the limit distribution of the spectral radius for general subsequential choices of $(k, n)$ appears to be even more challenging. In fact, even under the set up of Theorems~\ref{theo:lsd12} and \ref{theo:lsd45}, this seems to be a nontrivial problem. As a first step, one needs to find the max-domain of attraction for $(\prod_{j=1}^{g} E_j)^{1/2g}$, where $\{E_j\}_{ 1 \le j \le g}$ are i.i.d.\ exponentials, which requires a detailed understanding of the behaviour of $\mathbb{P}(\prod_{j=1}^g E_j > x)$ as $x \to \infty$. When $ g > 2$, we were unable to locate any results on this. Our preliminary investigation shows that this is fairly involved and we are currently working on this problem.
Moreover, an extra layer of difficulty arises while dealing with the spectral radius from the fact that we cannot immediately ignore some `bad' classes of eigenvalues whose proportions are asymptotically negligible, like we did while establishing the LSD. \section*{Appendix} Here we provide a proof of Theorem \ref{theo:formula}. Recall that for any two positive integers $k$ and $n$, $p_1 <p_2<\ldots < p_c$ are all their common prime factors, so that $$n=n^{\prime} \prod_{q=1}^{c} p_q^{\beta_q} \ \text{ and } \ \ k=k^{\prime} \prod_{q=1}^{c} p_q^{\alpha_q},$$ where $\alpha_q,\ \beta_q \ge 1$ and $n^{\prime}$, $k^{\prime}$, $p_q$ are pairwise relatively prime. Define \begin{equation}\label{eq:modrelation} m := \max_{1 \le q \le c} \lceil \beta_q/\alpha_q \rceil, \ \ \ [t]_{m,b} := tk^m \mbox{ mod } b,\ b \mbox{ a positive integer}. \end{equation} Let $e_{m,d}$ be a $d \times 1$ vector whose only nonzero element is $1$ at the $(m \mbox{ mod } d)$-th position, let $E_{m,d}$ be the $d \times d$ matrix with $e_{jm,d}$, $ 0 \leq j < d$, as its columns, and for dummy symbols $\delta_0, \delta_1, \ldots$, let $\Delta_{m,b,d}$ be the diagonal matrix given below. \begin{eqnarray}e_{m,d}&=& \left[ \begin{array} {c} 0 \\ \vdots \\ 1\\ \vdots \end{array} \right]_{d\times 1}, \\ E_{m,d} &=& \left[ e_{0,d}\ \ e_{m,d}\ \ e_{2m,d} \ldots e_{(d-1)m,d}\right],\\ \Delta_{m,b,d} &=& \text{diag} \left[ \delta_{[0]_{m,b}},\ \delta_{[1]_{m,b}},\ \ldots,\ \delta_{[j]_{m,b}},\ \ldots,\ \delta_{[d-1]_{m,b}}\right]. \end{eqnarray}Note that \[\Delta_{0,b,d}= \text{diag} \left[ \delta_{0\mbox{ mod } b},\ \delta_{1\mbox{ mod } b},\ \ldots,\ \delta_{j\mbox{ mod } b},\ \ldots,\ \delta_{d-1\mbox{ mod } b} \right]. \] \begin{lemma} \label{res:permu} Let $ \pi=( \pi(0),\ \pi(1),\ \ldots,\ \pi(b-1) )$ be a permutation of $(0,1, \ldots,b-1)$. Let \[ P_{\pi} = \left[ e_{\pi(0), b}\ e_{\pi(1), b}\ \ldots e_{\pi(b-1), b} \right].
\] Then $ P_{\pi}$ is a permutation matrix and the $(i,j)$-th element of $P_{\pi}^T E_{k,b}\Delta_{0,b,b} P_{\pi}$ is given by \[(P_{\pi}^T E_{k,b}\Delta_{0,b,b} P_{\pi})_{i,j} = \left\{ \begin{array}{ll} \delta_t & \mbox{if } (i,j) = (\pi^{-1}(kt \mbox{ mod } b), \pi^{-1}(t)), \ \ 0 \le t < b\\ 0 & \mbox{otherwise.}\\ \end{array} \right. \] \end{lemma} The proof is easy and we omit it. In what follows, $\chi(A)(\lambda)$ stands for the characteristic polynomial of the matrix $A$ evaluated at $\lambda$, but for ease of notation, we shall suppress the argument $\lambda$ and write simply $\chi(A)$.\begin{lemma} \label{res:2} Let $k$ and $b$ be positive integers. Then \begin{eqnarray} \chi \left( A_{k,b} \right) &=& \chi \left( E_{k,b} \Delta_{0, b, b} \right), \end{eqnarray}where $\delta_j = \sum_{l=0}^{b-1} a_{l} \omega^{j l}, \ 0 \le j < b$, $\omega = \cos(2\pi/b) +i \sin(2\pi/b)$, $i^2=-1$. \end{lemma} \begin{proof} Define the $b \times b$ permutation matrix \[ P_b = \left[ \begin{array}{cc} \underline{0} & I_{b-1} \\ 1 & \underline{0}^T \end{array} \right] .\] Observe that for $0 \le j < b$, the $j$-th row of $A_{k,b}$ can be written as $a^T P_{b}^{jk}$, where $P_{b}^{jk}$ stands for the $jk$-th power of $P_b$. From direct calculation, it is easy to verify that $P_b = UDU^*$ is a spectral decomposition of $P_b$, where \begin{eqnarray} D&=&\mbox{diag}(1,\omega,\ldots,\omega^{b-1}), \\ U &=& [ u_0\ u_1\ \cdots \ u_{b-1} ] \mbox{ with } u_j = b^{-1/2} (1, \omega^{j}, \omega^{2j}, \ldots, \omega^{(b-1)j} ), \ 0 \le j < b. \end{eqnarray} Note that $\delta_j = b^{1/2}\, a^T u_j, \ 0 \le j < b.$ From easy computations, it now follows that \[ U^*A_{k,b}U = E_{k,b} \Delta_{0,b,b}, \] so that $\chi \left( A_{k,b} \right) = \chi \left( E_{k,b} \Delta_{0,b,b} \right)$, proving the lemma. \end{proof} \begin{lemma} \label{res:3} Let $k$ and $b$ be positive integers and $x = b/\gcd(k, b)$.
For dummy variables $\gamma_{0},\ \gamma_{1},\ \gamma_{2},\ldots, \gamma_{b-1}$, let \[\Gamma = \text{diag} \left( \gamma_{0},\ \gamma_{1},\ \gamma_{2},\ldots, \gamma_{b-1} \right).\] Then \begin{eqnarray} \chi \left( E_{k,b} \times \Gamma \right) &=& \lambda^{b-x} \chi \left( E_{k,x} \times \text{diag} \left( \gamma_{0 \text{ mod }b},\ \gamma_{k \text{ mod }b},\ \gamma_{2k \text{ mod }b },\ldots, \gamma_{(x-1)k \text{ mod } b} \right) \right). \end{eqnarray} \end{lemma} \begin{proof} Define the matrices \[ B_{b \times x} = \left[ e_{0,b} \ e_{k,b} \ e_{2k,b} \ \ldots \ e_{(x-1)k,b} \right] \mbox{\ and\ \ } P = \left[ B \ B^c \right], \] where $B^c$ consists of those columns (in any order) of $I_b$ that are not in $B$. This makes $P$ a permutation matrix. Clearly, $E_{k,b} = \left[ B \ B \ \cdots \ B \right]$, which is a $b \times b$ matrix of rank $x$, and we have \[ \chi \left( E_{k,b} \Gamma \right) = \chi \left( P^T E_{k,b} \Gamma P \right). \] Note that \[\begin{array}{lcl} P^T E_{k,b} \Gamma P & = & \left[ \begin{array}{llcl} I_{x} & I_{x} & \ldots & I_{x} \\ 0_{(b-x) \times x} & 0_{(b-x) \times x} & \ldots & 0_{(b-x) \times x} \\ \end{array} \right] \Gamma P \\ & & \\ & = & \left[ \begin{array}{c} C \\ 0_{(b-x) \times b} \\ \end{array} \right] P \\ & & \\ & = & \left[ \begin{array}{c} C \\ 0_{(b-x) \times b} \\ \end{array} \right] \left[ B \ B^c \right] = \left[ \begin{array}{cc} CB & CB^c \\ 0 & 0 \\ \end{array} \right], \end{array} \] where \[ \begin{array}{lcl} C & = & \left[ I_{x} \ I_{x} \ \cdots \ I_{x} \right] \Gamma \\ & & \\ & = &\left[ I_{x} \ I_{x} \ \cdots \ I_{x} \right] \times \text{diag} (\gamma_{0},\ \gamma_{1},\ \ldots,\ \gamma_{b-1}). \\ \end{array} \] Clearly, the characteristic polynomial of $ P^T E_{k,b} \Gamma P$ does not depend on $CB^c$, explaining why we did not bother to specify the order of columns in $B^c$.
Thus we have \[ \chi \left( E_{k,b} \Gamma \right) = \chi \left( P^T E_{k,b} \Gamma P \right) = \lambda^{b-x} \chi \left( CB \right). \] It now remains to show that $CB=E_{k,x} \times \text{diag} \left( \gamma_{0 \text{ mod }b},\ \gamma_{k \text{ mod }b},\ \gamma_{2k \text{ mod }b },\ldots, \gamma_{(x-1)k \text{ mod } b} \right)$. Note that the $j$-th column of $B$ is $e_{jk,b}$. So the $j$-th column of $CB$ is actually the $(jk \mbox { mod } b )$-th column of $C$. Now, the $(jk \mbox { mod } b )$-th column of $C$ is $\gamma_{jk \text{ mod } b\ }e_{jk \text{ mod } x}$. So \[ CB = E_{k,x} \times \text{diag} \left( \gamma_{0 \text{ mod }b},\ \gamma_{k \text{ mod }b},\ \gamma_{2k \text{ mod }b },\ldots, \gamma_{(x-1)k \text{ mod } b} \right), \] and the lemma is proved. \end{proof} \begin{proof}[Proof of Theorem \ref{theo:formula}] We first prove the theorem for $A_{k , n^{\prime}}$. Since $k$ and $n^{\prime}$ are relatively prime, by Lemma \ref{res:2}, \[ \chi(A_{k,n^{\prime}}) =\chi(E_{k,n^{\prime}} \Delta_{0,n^{\prime},n^{\prime}}). \] Let the sets $S_0$, $S_1$, $\ldots$ form a partition of $\{0, 1, \ldots, n'-1\}$, as in Section \ref{section:eigenvalues}. Define the permutation $\pi$ on the set $\mathbb Z_{n^{\prime}}$ by setting $\pi(t) = s_t$, $0 \le t < n^{\prime}$. This permutation $\pi$ automatically yields a permutation matrix $P_{\pi}$ as in Lemma \ref{res:permu}. Consider the positions of $\delta_v$ for $v \in S_j$ in the product $P_{\pi}^TE_{k,n^{\prime}}\Delta_{0,n^{\prime},n^{\prime}}P_{\pi}$. Let $N_{j-1}=\sum_{t=0}^{j-1}|S_t|$. We know that $S_j = \{r_jk^x \text{ mod } n^{\prime},\ x \ge 0\}$ for some integer $r_j$.
Thus, \[ \pi^{-1}\left( r_jk^{t-1} \mbox{ mod } n^{\prime} \right) = N_{j-1}+t,\ \ 1 \le t \le n_j, \] so that the position of $\delta_v$ for $v=r_jk^{t-1} \mbox{ mod } n^{\prime} $, $1 \le t \le n_j$, in $P_{\pi}^T E_{k,n^{\prime}}\Delta_{0,n^{\prime},n^{\prime}}P_{\pi}$ is given by \[ \left( \pi^{-1}(r_jk^{t} \mbox{ mod } n^{\prime} ), \pi^{-1}(r_jk^{t-1} \mbox{ mod } n^{\prime} ) \right) = \left\{ \begin{array}{ll} \left( N_{j-1}+t+1,\ N_{j-1}+t \right) & \mbox{if } 1 \le t < n_j, \\ \left( N_{j-1}+1,\ N_{j-1}+n_j \right) & \mbox{if } t = n_j. \\ \end{array} \right. \] Hence, \[ P_{\pi}^TE_{k,n^{\prime}}\Delta_{0,n^{\prime},n^{\prime}}P_{\pi} = \text{diag} \left( L_0,\ L_1,\ \ldots \right), \] where, for $j \ge 0$, if $n_j=1$ then $L_j = \left[ \delta_{r_j} \right]$ is a $1 \times 1$ matrix, and if $n_j > 1$, then \[ L_j = \left[ \begin{array}{cccccc} 0 & 0 & 0 & \ldots & 0 & \delta_{r_jk^{n_j-1} \text{ mod } n^{\prime} } \\ \delta_{r_j \text{ mod } n^{\prime}} & 0 & 0 & \ldots & 0 & 0 \\ 0 & \delta_{r_jk \text{ mod } n^{\prime}} & 0 & \ldots & 0 & 0\\ & & & \vdots & \\ 0 & 0 & 0 & \ldots & \delta_{r_jk^{n_j-2} \text{ mod } n^{\prime}} & 0 \\ \end{array}\right]. \] Clearly, $\chi(L_j) = \lambda^{n_j} - \Pi_j$. Now the result follows from the identity \[ \chi \left( E_{k,n^{\prime}} \Delta_{0,n^{\prime},n^{\prime}} \right) = \prod_{j \ge 0} \chi(L_j) = \prod_{j \ge0} ( \lambda^{n_j} - \Pi_j ). \] Now let us prove the result for the general case. Recall that $n=n^{\prime} \times \prod_{q=1}^{c} p_q^{\beta_q}$. Then, again using Lemma \ref{res:2}, \[ \chi (A_{k,n}) = \chi(E_{k,n} \Delta_{0,n,n}).
\] Recalling equation (\ref{eq:modrelation}), Lemma \ref{res:2} and using Lemma \ref{res:3} repeatedly, \[ \begin{array}{lcl} \chi (A_{k,n}) & = & \chi (E_{k,n} \Delta_{0,n,n})\\ & =& \lambda^{n-n^{\prime}} \chi(E_{k,n^{\prime}} \Delta_{m,n,n^{\prime}})\\ & = & \lambda^{n-n^{\prime}} \chi(E_{k,n^{\prime}} \Delta_{m+j,n,n^{\prime}}) \ \ [\text{ for all } j \ge 0]\\ & = & \lambda^{n-n^{\prime}} \chi \left(E_{k,n^{\prime}} \times \text{diag}\left( \delta_{[0]_{0,n}},\ \delta_{[y]_{0,n}},\ \delta_{[2y]_{0,n}},\ldots, \delta_{[(n^{\prime}-1)y]_{0,n}} \right) \right) \ \ [ \text{where } y = n/n^{\prime}]. \end{array} \] Replacing $\Delta_{0,n^{\prime},n^{\prime}}$ by $\text{diag} \left( \delta_{[0]_{0,n}},\ \delta_{[y]_{0,n}},\ \delta_{[2y]_{0,n}},\ldots, \delta_{[(n^{\prime}-1)y]_{0,n}} \right)$, we can mimic the rest of the proof given for $A_{k,n^{\prime}}$ to complete the proof in the general case. \end{proof} \footnotesize \noindent Address for correspondence:\\ \noindent Arup Bose\\ Stat-Math Unit\\ Indian Statistical Institute\\ 203 B. T. Road\\ Kolkata 700108\\ INDIA\\ email: [email protected] \end{document}
\begin{document} \author{V.A.~Vassiliev} \address{Steklov Mathematical Institute of Russian Academy of Sciences; \newline \hspace*{3mm} National Research University Higher School of Economics} \email{[email protected]} \thanks{Research supported by the Russian Science Foundation grant, project 16-11-10316} \title{Multiplicities of bifurcation sets of Pham singularities} \maketitle \section{Introduction} Let $f:({\mathbb C}^n,0) \to ({\mathbb C},0)$ be a holomorphic function defined in a neighborhood of the origin in ${\mathbb C}^n$ and having an isolated singularity at $0$; let $F:({\mathbb C}^n\times {\mathbb C}^k,0) \to ({\mathbb C},0)$ be an arbitrary deformation of $f$, that is, a family of functions $f_\lambda$ depending holomorphically on the parameter $\lambda \in {\mathbb C}^k$ with $f_0\equiv f$, see \cite{AVGL}. \begin{definition}[see e.g. \cite{AVGL}] \rm The {\em caustic} $\Delta \subset {\mathbb C}^k$ of the deformation $F$ is the set of all parameter values $\lambda \in {\mathbb C}^k$ such that the corresponding function $f_\lambda$ has a non-Morse critical point close to the origin in ${\mathbb C}^n$. The {\em Maxwell set} of this deformation is the closure of the set of all parameter values $\lambda$ such that $f_\lambda$ has equal critical values at different critical points close to the origin.
\end{definition} \begin{definition} \rm The {\em mixed Stokes' set} (respectively, the {\em pure Stokes' set}) of the deformation $F$ is the closure of the set of all parameter values $\lambda$ such that $f_\lambda$ has three different critical values $\alpha, \alpha', \beta$ satisfying the equality $$\alpha+\alpha'=2\beta,$$ (respectively, has four different critical values $\alpha, \alpha', \beta, \beta'$ satisfying \begin{equation}\alpha+\alpha'=\beta+\beta' \ \ ).\label{csdef}\end{equation} \end{definition} If our deformation $F$ is large enough, that is, it is a {\em versal} deformation of $f$, then the local multiplicities of the caustic, of the Maxwell set, and of both Stokes' sets at the point $0 \in {\mathbb C}^k$ do not depend on the choice of this deformation. We calculate the local multiplicities of the Maxwell sets and mixed Stokes' sets for all {\em Pham singularities}, that is, for singularities of the form \begin{equation} f(z_1, \dots, z_n) = z_1^{a_1+1} + \dots + z_n^{a_n+1}. \label{pham} \end{equation} We also calculate the multiplicities of pure Stokes' sets for some Pham singularities, including all singularities in one variable. The multiplicities of caustics of Pham singularities are calculated in \cite{VFA} (see formula (\ref{caus}) below), where a two-sided estimate of the multiplicities of Maxwell sets was also given. Our present result replaces this estimate by an equality, proving that its upper side is sharp. These multiplicities provide upper bounds for the complexity of certain programs enumerating all topologically distinct morsifications of complicated real function singularities. Moreover, they allow us to write the stop rules in these programs, see \S \ref{motivat} at the end of this article.
All these multiplicities depend semicontinuously on the singularity $f$, therefore our calculations also give upper estimates of them for arbitrary isolated holomorphic function singularities: the multiplicity of any of these kinds for a singularity $f$ does not exceed that for any Pham singularity such that $f$ occurs as its perturbation. \section{Statements of main theorems} \subsection{Maxwell set and caustic} Assume that $a_1 \geq a_2 \geq \dots \geq a_n$ in (\ref{pham}), and define the number \begin{equation} L(a_1, \dots, a_n) \equiv \sum_{i=1}^n (a_1 \cdot \ \dots \ \cdot a_{i-1}) (a_i^2-1) (a_{i+1} \cdot \ \dots \ \cdot a_n)^2. \label{Mset} \end{equation} \begin{theorem} \label{ma2} The local multiplicities $C(f)$ and $M(f)$ of the caustic and the Maxwell set of any versal deformation of the isolated singularity $f$ given by $($\ref{pham}$)$ satisfy the equation \begin{equation} 3C(f) + 2M(f) = L(a_1, \dots, a_n). \label{mnest}\end{equation} \end{theorem} On the other hand, according to \cite{VFA}, we have \begin{equation} C(f) = (a_1 \cdot \ \dots \ \cdot a_n)\left(\frac{a_1-1}{a_1}+ \dots + \frac{a_n-1}{a_n}\right) \ ,\label{caus} \end{equation} which allows us to express $M(f)$ from (\ref{mnest}). {\bf A problem.} Is it true that for an arbitrary isolated function singularity the local multiplicity of the caustic does not exceed $n\mu(f)$, where $\mu(f)$ is the Milnor number of $f$? \subsection{Mixed Stokes' set} \begin{theorem} \label{mss} The multiplicity at $0 \in {\mathbb C}^k$ of the mixed Stokes' set of the Pham singularity $($\ref{pham}$)$ is equal to \begin{equation} \label{bbbb} \binom{a_1 \dots a_n}{1,2}\frac{a_1+1}{a_1} + \sum_{j=2}^n (a_1 \dots a_{j-1})\binom{a_j \dots a_n}{1,2}\frac{a_{j-1}-a_j}{a_{j-1}a_j} \ , \end{equation} where $\binom{A}{1,2}\equiv A(A-1)(A-2)/2$.
\end{theorem} \subsection{Pure Stokes' set} \begin{theorem} \label{cs11} If $n=1,$ $f=z^{d+1}$, and $d$ is odd, then the multiplicity of the pure Stokes' set is equal to \begin{equation} (d+1)(d-1)(d-2)(d-3)/8. \end{equation} \end{theorem} \begin{theorem} \label{cs12} If $n=1$, $f=z^{d+1},$ and $d$ is even, then the multiplicity of the pure Stokes' set is equal to \begin{equation} \label{cs12f} (d-2)((d+1)(d-1)(d-3)+1)/8. \end{equation} \end{theorem} \begin{theorem} \label{cs2} Suppose that $n=2$ and $f=x^{a+1}+y^{b+1},$ $a\geq b$, where both $a$ and $b$ are odd. Then the local multiplicity of the pure Stokes' set is equal to \begin{equation} \label{anscM2} \frac{a+1}{2a} \binom{ab}{2,2} + \frac{a-b}{2b} \binom{b}{2,2} + \frac{a-b}{a} \binom{a}{2} \binom{b}{2}(b-1) + \frac{1}{a} \binom{a}{2} \binom{b}{2}. \end{equation} \end{theorem} {\bf Problem.} Calculate the multiplicities of all these sets in terms of Newton diagrams for the generic functions $f$ with these diagrams. \subsection{Corollaries for the homogeneous case} In the asymptotic formulas of the following proposition we assume that $n$ is fixed and the degrees are growing; recall that the Milnor number $\mu(f)$ of a Pham singularity (\ref{pham}) is equal to $a_1 \cdots a_n$.
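The equality $\mu(f)=a_1\cdots a_n$ can be seen directly from the monomial basis $\{z^\alpha : 0\le\alpha_i\le a_i-1\}$ of the local algebra of (\ref{pham}); the following minimal sketch (our illustration) just enumerates these multi-indices.

```python
from itertools import product

def milnor_number(exponents):
    """Milnor number of the Pham singularity z_1^{a_1+1} + ... + z_n^{a_n+1},
    computed by enumerating the monomial basis
    { z^alpha : 0 <= alpha_i <= a_i - 1 } of its local algebra."""
    basis = list(product(*(range(a) for a in exponents)))
    return len(basis)

print(milnor_number((3, 2)))  # 6 = 3 * 2
```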
\begin{proposition} If $f$ has the form $($\ref{pham}$)$ and all exponents $a_1, \dots, a_n$ are equal to one another $($and are denoted by $a)$, then we have $$C(f) = n\,a^{n-1}(a-1) \sim n \mu(f),$$ $$M(f) = a^{n-1}((a+1)(a^n-1)-3n(a-1))/2 \sim \mu^2(f)/2;$$ the multiplicity of the mixed Stokes' set is $ \sim \mu^3(f)/2;$ \\ if additionally $n=1$, or $n=2$ and $a_1=a_2,$ then the multiplicity of the pure Stokes' set is $ \sim \mu^4(f)/8.$ \end{proposition} \section{Proof of Theorem \ref{ma2}.} \label{proofma2} First of all, we replace the function (\ref{pham}) by \begin{equation} \label{pham1} f = \frac{1}{a_1+1} z_1^{a_1+1} + \dots + \frac{1}{a_n+1} z_n^{a_n+1} \end{equation} (which can be done by a dilation of coordinates) because it simplifies the calculations very much. We will use the versal deformation of this function (\ref{pham1}) consisting of all polynomial functions \begin{equation} \label{vers} f_\lambda\equiv f+ \sum_\alpha \lambda_\alpha z^\alpha,\end{equation} where $\alpha=(\alpha_1, \dots, \alpha_n) \in {\mathbb Z}^n$ are multi-indices with integer values \begin{equation}\alpha_i \in [0, a_i-1],\label{parall} \end{equation} $z^\alpha \equiv z_1^{\alpha_1} \cdot \ \dots \ \cdot z_n^{\alpha_n}$, and $\lambda_\alpha$ are the parameters, $\lambda = \{\lambda_\alpha\}$. In this case $k = \mu(f)=a_1 \cdot \ \dots \ \cdot a_n$. Any function $f_\lambda$, where $\lambda \in {\mathbb C}^k \setminus \Delta$ is sufficiently close to $0\in {\mathbb C}^k$, has exactly $\mu(f)$ different critical points close to the origin in ${\mathbb C}^n$. Consider the complex-valued function on ${\mathbb C}^k\setminus \Delta$, whose value at any point $\lambda$ is equal to the product (over all $2\binom{\mu(f)}{2}$ ordered pairs of different critical points of $f_\lambda$ close to the origin) of differences of the corresponding two critical values.
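In the simplest case $n=1$, $f=z^{a+1}/(a+1)$, this product of differences of critical values can be evaluated numerically along the line $\{f-\varepsilon z\}$; its order of vanishing at $\varepsilon=0$ comes out as $a^2-1$, which is the value of $L$ from (\ref{Mset}) in one variable. This is a sanity check of ours, not part of the proof.

```python
import numpy as np

def log_abs_D(a, eps):
    """log of the product of |differences of critical values| over all
    ordered pairs of critical points of z^(a+1)/(a+1) - eps*z."""
    # critical points: the a-th roots of eps
    crit = np.roots([1.0] + [0.0] * (a - 1) + [-eps])
    vals = crit ** (a + 1) / (a + 1) - eps * crit
    s = 0.0
    for i in range(a):
        for j in range(a):
            if i != j:
                s += np.log(abs(vals[i] - vals[j]))
    return s

for a in (3, 4, 5):
    e1, e2 = 1e-3, 1e-5
    slope = (log_abs_D(a, e1) - log_abs_D(a, e2)) / np.log(e1 / e2)
    print(a, round(slope, 3))  # close to a^2 - 1
```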
This function is holomorphic and regular close to generic points of the caustic (these points $\lambda$ correspond to the functions $f_\lambda$ having exactly one non-Morse critical point of type $A_2$, while the non-generic points of the caustic form a set of complex codimension $\geq 2$ in ${\mathbb C}^k$). Therefore this function can be extended to a holomorphic function $D$ in an entire neighborhood of the origin in ${\mathbb C}^k$. \begin{theorem} \label{mp} The degree of the restriction of the function $D$ to a generic line through the origin in ${\mathbb C}^k$ is equal to $($\ref{Mset}$)$. \end{theorem} Theorem \ref{ma2} follows immediately from this one, because this function $D$ vanishes with multiplicity 3 at generic points of the caustic, and with multiplicity 2 at generic points of the Maxwell set. \subsection{Proof of Theorem \ref{mp}} \label{pro6} Any line through the origin in ${\mathbb C}^k$ consists of functions of the form $f-\varepsilon \varphi$, where $\varepsilon \in {\mathbb C}$ is the parameter of the line, and $\varphi$ is a polynomial containing only the monomials $z^\alpha$ with $\alpha=(\alpha_1, \dots, \alpha_n)$ satisfying the conditions (\ref{parall}). We can and will assume that $\varphi(0)=0$, because $D(\lambda)=D(\lambda')$ if $\lambda$ and $\lambda'$ differ only in the coordinate $\lambda_0$, that is, $f_\lambda-f_{\lambda'}$ is a constant function. Let \begin{equation} \varphi_0=q_1z_1 + \dots + q_nz_n \label{linprt} \end{equation} be the linear part of $\varphi$. Consider the space ${\mathbb C}P^{k-2}$ of all lines of this form \begin{equation}\{f-\varepsilon \varphi\} \label{lines}\end{equation} in the subspace ${\mathbb C}^{k-1} \subset {\mathbb C}^k$ distinguished by the condition $\lambda_0=0$. The lines of this form, which are generic with respect to the function $D$ (that is, the degree at 0 of the restriction of $D$ to these lines is the minimal possible), fill in a Zariski open subset in ${\mathbb CP}^{k-2}$.
It is enough to prove the assertion of Theorem \ref{mp} for an arbitrary line from this subset, therefore we can and will assume that the linear part (\ref{linprt}) of our function $\varphi$ satisfies the following conditions: \begin{itemize} \item $q_1=1$ (which can be achieved by the choice of the parameter $\varepsilon$); \item all coefficients $q_i$ in (\ref{linprt}) are positive and $\leq 1$; and \item $q_{i+1} \ll q_i$ if $a_{i+1} =a_i$. \end{itemize} Let us fix an arbitrary such function $\varphi$ and the corresponding line (\ref{lines}). It is not obvious a priori that the line consisting of functions $f-\varepsilon \varphi_0$, where $\varphi_0$ is the linear part of $\varphi$, is also generic with respect to the function $D$. (Moreover, in \S \ref{proCS} we will see that in a similar problem concerning the pure Stokes' sets no perturbation $f-\varepsilon \varphi$ with linear $\varphi$ can be generic.) Still we will calculate the degree of the restriction of the function $D$ to such a line, and find that this degree is equal to (\ref{Mset}). Then we connect $\varphi_0$ and $\varphi$ by a sequence of polynomials $\varphi_1, \dots, \varphi_{n-1}, \varphi_n \equiv \varphi$ and prove that this degree is the same for any two consecutive terms of this sequence. Namely, we define $\varphi_j$ as the sum of all monomials of the polynomial $\varphi$, which either are of degree 1, or depend on the variables $z_1, \dots, z_j$ only.
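The nested clustering of critical values that underlies this induction can be seen numerically in the smallest two-dimensional example $a_1=3$, $a_2=2$, $\varphi=\varphi_0=z_1+q z_2$ (our illustration; $q=0.7$ is an arbitrary admissible coefficient): the six critical values split into three clusters of two, with inter-cluster distances $\asymp \varepsilon^{1+1/3}$ and intra-cluster distances $\asymp \varepsilon^{1+1/2}$.

```python
import numpy as np

A1, A2, Q1, Q2 = 3, 2, 1.0, 0.7  # exponents and the coefficients of phi_0

def clustered_values(eps):
    """Critical values of z1^4/4 + z2^3/3 - eps*(Q1*z1 + Q2*z2),
    grouped by the z1-coordinate of the critical point."""
    r1 = [(Q1 * eps) ** (1 / A1) * np.exp(2j * np.pi * k / A1) for k in range(A1)]
    r2 = [(Q2 * eps) ** (1 / A2) * np.exp(2j * np.pi * k / A2) for k in range(A2)]
    val = lambda z1, z2: (z1 ** (A1 + 1) / (A1 + 1) + z2 ** (A2 + 1) / (A2 + 1)
                          - eps * (Q1 * z1 + Q2 * z2))
    return [[val(z1, z2) for z2 in r2] for z1 in r1]

def spreads(eps):
    cl = clustered_values(eps)
    intra = max(abs(v - w) for c in cl for v in c for w in c if v is not w)
    inter = min(abs(v - w) for i in range(A1) for j in range(A1) if i != j
                for v in cl[i] for w in cl[j])
    return intra, inter

e1, e2 = 1e-6, 1e-9
s1, s2 = spreads(e1), spreads(e2)
intra_slope = np.log(s1[0] / s2[0]) / np.log(e1 / e2)
inter_slope = np.log(s1[1] / s2[1]) / np.log(e1 / e2)
print(round(intra_slope, 2), round(inter_slope, 2))  # close to 1.5 and 1.33
```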
\begin{lemma} \label{mainll} For any $j=0,1, \dots, n$, the critical points of the function $f-\varepsilon \varphi_j$ with sufficiently small $|\varepsilon|$ can be split into $a_1$ collections ``of depth 1'' with $a_2 \cdots a_n$ points in each, and the distances between the critical values at these points from different collections decreasing as $\asymp |\varepsilon|^{(a_1+1)/a_1}$ when $\varepsilon$ tends to 0; any of these $a_1$ collections can be subdivided into $a_2$ collections ``of depth 2'' with $a_3 \cdots a_n$ critical points in each and distances between the critical values at the points from different such subcollections $($inside one collection of depth 1$)$ decreasing as $\asymp |\varepsilon|^{(a_2+1)/a_2} ,$ etc: for any $i=1, \dots, n$ any of the $a_1 \cdots a_{i-1}$ collections of depth $i-1$ distinguished in the previous steps can be split into $a_i$ collections of depth $i$ with $a_{i+1}\cdots a_n$ critical points in each and distances between the values at the points from different such collections of depth $i$ $($inside one collection of depth $i-1)$ decreasing as $\asymp |\varepsilon|^{(a_i+1)/a_i}.$ \end{lemma} In what follows it will be convenient to consider the set of all $a_1 \cdots a_n$ critical points of such a function as the collection ``of depth $0$''. \begin{figure} \caption{Critical values for $a_1=6, a_2=4$} \label{critvalpic} \end{figure} \begin{corollary} For any $j=0,1, \dots, n$, the degree of the restriction of the function $D$ to the line $\{f-\varepsilon \varphi_j\} \subset {\mathbb C}^{k-1}$ is equal to $($\ref{Mset}$)$. \end{corollary} {\it Proof.} For any $i=1, \dots, n$, the product of differences of critical values over all ordered pairs of critical points of $f-\varepsilon \varphi_j$, which belong to the same collections of depth $i-1$, but not of depth $i$, vanishes as $|\varepsilon|^{s_i}$, where $s_i$ is the $i$-th summand in (\ref{Mset}).
$\Box$ For $j=n$ the assertion of this corollary gives us Theorem \ref{mp}. {\bf Basis of induction: proof of Lemma \ref{mainll} for $\varphi\equiv\varphi_0$.} The function $f-\varepsilon \varphi_0$ has $a_1\dots a_n$ critical points with coordinates \begin{equation} \label{valf0} z_1 = ( q_1\varepsilon) ^{1/a_1}, \dots,z_n = (q_n\varepsilon)^{1/a_n}, \end{equation} where each of the expressions $( q_i\varepsilon)^{1/a_i}$ runs over all $a_i$ values of this root. The critical values at these points are equal to $$ \varepsilon \sum_{i=1}^n c_i \cdot (q_i\varepsilon)^{1/a_i},$$ where $c_i = -\frac{a_i}{a_i+1}\, q_i.$ Then the collections of depth $i$ of critical points from Lemma \ref{mainll} are just the points (\ref{valf0}) with coinciding coordinates $z_1, \dots, z_i$. $\Box$ The step of induction in the proof of Lemma \ref{mainll} will be as follows: we show that the passage from $f-\varepsilon \varphi_{j-1}$ to $f-\varepsilon \varphi_j$ translates all collections of critical values of depth $j$ (i.e. consisting of $a_{j+1} \cdots a_n$ elements) as whole bodies by distances of size $O_{|\varepsilon| \to +0} (|\varepsilon|^{1+1/{a_j}+1/{a_1}}),$ thus not changing the orders of distances between the points from different subcollections of this depth (these orders are $\asymp |\varepsilon|^{1+1/a_i}$, $i \leq j$) and not changing at all the distances between the critical values from one and the same such collection. \begin{lemma} \label{mainlll} For $|\varepsilon|>0$ small enough, there is a natural and depending continuously on $\varepsilon$ one-to-one correspondence between the sets of critical points of all functions $f-\varepsilon \varphi_j,$ $j=0, \dots, n$. For any $j, j' \in \{0, \dots, n\}$ the difference of values of any coordinate $z_i$, $i=1, \dots, n$, at the critical points of the functions $f-\varepsilon \varphi_j$ and $f-\varepsilon \varphi_{j'}$ related by this correspondence decreases as $O_{|\varepsilon| \to +0}(|\varepsilon|^{1/{a_i}+1/{a_1}})$.
\end{lemma} {\it Proof.} Consider the function $z_i^{a_i} - q_i$ in one variable, that is, the expression of the function $\partial(f-\varphi_0)/\partial z_i$. Its derivative is different from 0 at all its zero points $z_{0,i}$, i.e. at the $a_i$-th roots of $q_i$. Therefore any such point admits a neighbourhood of some radius $T>0$ in ${\mathbb C}^1$, and a constant $c>0$ such that $|z_i^{a_i}-q_i|\geq c|z_i-z_{0,i}|$ if $|z_i-z_{0,i}|\leq T$. Let us choose the constants $T$ and $c$ in such a way that this inequality holds for all $i=1, \dots, n$. Then by a dilation of the coordinate $z_i$ we obtain that \begin{equation} \label{est2} |z_i^{a_i}-q_i\varepsilon| \geq c |\varepsilon|^{\frac{a_i-1}{a_i}} | z_i-z_{0,i,\varepsilon}| \quad \mbox{if} \quad |z_i-z_{0,i,\varepsilon}|\leq |\varepsilon|^{\frac{1}{a_i}}T \ , \end{equation} where $z_{0,i,\varepsilon}$ is an arbitrary root of the polynomial $z_i^{a_i}-q_i\varepsilon$. Let us fix some number $M>0$. For any critical point $Z_\varepsilon=(z_{0,1,\varepsilon}, \dots, z_{0,n,\varepsilon})$ of the function $f - \varepsilon \varphi_0$, consider its neighbourhood $U_M(Z_\varepsilon)$ containing all the points $(z_1, \dots, z_n)$ such that \begin{equation} \label{est3} |z_i -z_{0,i,\varepsilon}| \leq M |\varepsilon|^{1/a_i+1/a_1}\end{equation} for any $i=1, \dots, n$. If $\varepsilon$ is small enough, then all these neighbourhoods belong to the polydisc where $|z_i|\leq 1$ for any $i$, and do not have common points; in particular, each of them contains exactly one critical point (\ref{valf0}). The boundary of the neighbourhood $U_M(Z_\varepsilon)$ consists of $n$ pieces, on any of which one of the inequalities (\ref{est3}) becomes an equality. Any function $\partial(\varphi_j-\varphi_0)/\partial z_k$ can be represented in the form $\sum_{i=1}^n z_i \psi_{i,j,k}(z)$, where the $\psi_{i,j,k}$ are some polynomials.
Let $C=\max_{i,j,k}|\psi_{i,j,k}(z)|$, where $i,j,k \in \{1, \dots, n\}$, $i \leq j$, and $z$ belongs to the polydisc where all $|z_i|\leq 1$. By the triangle inequality, in any of our neighbourhoods $U_M(Z_\varepsilon)$ we have $|z_i| \leq |\varepsilon|^{1/a_i}+M|\varepsilon|^{1/a_i + 1/a_1},$ and hence \begin{equation} \label{est1} \left|\frac{\partial(\varphi_j-\varphi_0)}{\partial z_i}\right| < nC(1+M|\varepsilon|^{1/a_i})|\varepsilon|^{1/a_1}. \end{equation} Suppose that \begin{equation} \label{estt} c M > nC(1+M|\varepsilon|^{1/a_i}).\end{equation} Applying (\ref{est2}), (\ref{est3}) and (\ref{est1}) at the points of such a piece of the boundary of our neighborhood $U_M(Z_\varepsilon)$, where the $i$-th inequality (\ref{est3}) becomes an equality, we get \begin{equation} \left|\frac{\partial (f-\varepsilon \varphi_0)}{\partial z_i}\right| \geq c M |\varepsilon|^{1+1/a_1} > nC(1+M|\varepsilon|^{1/a_i}) |\varepsilon|^{1+1/a_1} \geq \left| \frac{\partial(\varepsilon \varphi_j-\varepsilon \varphi_0)}{\partial z_i} \right|. \label{est4} \end{equation} Therefore by the ``argument principle'' the indices of the vector fields $\mbox{grad} (f-\varepsilon \varphi_0)$ and $\mbox{grad} (f-\varepsilon \varphi_j)$ on the entire boundary of any such neighborhood coincide (and are equal to 1, as for the first of them). In particular, there is exactly one critical point of the function $f-\varepsilon \varphi_j$ inside this neighborhood. It remains to prove that all our assumptions used to get this conclusion are compatible, namely, the following statement. \begin{lemma} There exists $M>0$ such that for all $\varepsilon$ with $|\varepsilon|$ small enough the following three conditions are satisfied: the inequality $($\ref{estt}$)$, the right-hand condition in $($\ref{est2}$)$, and the condition that all our neighborhoods $U_M(Z_\varepsilon)$ of the critical points of $f-\varepsilon \varphi_0$ belong to the polydisc $\max |z_i|\leq 1$.
\end{lemma} {\it Proof.} We can choose $M=2nC/c$, and then impose three restrictions on $|\varepsilon|$: $nC|\varepsilon|^{1/a_i}< c/2$, $|\varepsilon|^{1/a_1}<T/M$ and $|\varepsilon|^{1/a_1}+M|\varepsilon|^{2/a_1}<1.$ These three restrictions imply our three conditions, respectively. $\Box$ $\Box$ \subsection{Induction step in the proof of Lemma \ref{mainll}} Let $Z_\varepsilon(j-1)$ and $Z_\varepsilon(j)$ be some critical points of the functions $f-\varepsilon \varphi_{j-1}$ and $f-\varepsilon \varphi_j$, related to one another by the correspondence from Lemma \ref{mainlll}, that is, lying in one and the same neighbourhood $U_M(Z_\varepsilon)$ considered in the proof of this lemma. We need to estimate the difference of the corresponding critical values, i.e. the number $$ |\varepsilon||\varphi_j(Z_\varepsilon(j)) -\varphi_{j-1}(Z_\varepsilon(j-1))| . $$ Consider the family of functions \begin{equation} \label{link} f-\varepsilon(\varphi_{j-1}+t (\varphi_j-\varphi_{j-1})) \end{equation} depending on the parameter $t \in [0,1]$. By Lemma \ref{mainlll} any of these functions has a unique critical point in the domain $U_M(Z_\varepsilon)$. Define the function $u:[0,1]\to {\mathbb C}$, whose value at the point $t$ is the critical value of the corresponding function (\ref{link}) at this critical point. The desired difference is equal to \begin{equation} \label{int} |u(1)-u(0)| = \left|\int_{0}^1 \frac{\partial u(t)}{\partial t} dt \right|. \end{equation} \begin{lemma}[see e.g. Lemma 9.7.5 in \cite{LL}] The derivative in the integral $($\ref{int}$)$ is equal to the value of the function $-\varepsilon(\varphi_j -\varphi_{j-1})$ at the considered critical point of the function $f-\varepsilon(\varphi_{j-1}+t (\varphi_j-\varphi_{j-1}))$.
$\Box$ \end{lemma} The absolute value of the function $\varphi_j-\varphi_{j-1}$ in $U_M(Z_\varepsilon)$ is uniformly estimated from above by $\mbox{const} \times |\varepsilon|^{{1/a_1}+{1/a_j}}$, hence the difference (\ref{int}) decreases as $O_{\varepsilon \to 0}(|\varepsilon|^{1+1/a_1+1/a_j})$. Therefore moving from $\varphi_{j-1}$ to $\varphi_j$ does not affect the order of decrease of differences between the critical values from different collections of depth $\leq j$ (this order is equal to $|\varepsilon|^{1+1/a_i}$, where $i \in \{1, \dots, j\}$ is the first level of depth of collections separating these critical values). On the other hand, this move also does not affect the distances inside the collections of critical values of depth $m>j$. Indeed, any such collection of critical values of the function $f-\varepsilon \varphi_0$ is nothing else than the set of critical values of the function \begin{equation}\sum_{r=m+1}^n \left(\frac{1}{a_r+1} z_r^{a_r+1} - \varepsilon q_r z_r\right) \label{sumtail} \end{equation} added to some single critical value of the function $$\sum_{r=1}^m \left(\frac{1}{a_r+1} z_r^{a_r+1} - \varepsilon q_r z_r\right).$$ The function $\varphi_j - \varphi_0$ does not contain any monomials depending on the variables $z_r,$ $r \geq m$, therefore the corresponding collection for the function $f-\varepsilon \varphi_j$ is just the same set of critical values of the function (\ref{sumtail}) added to some critical value of the function in $m$ variables which is the sum of all monomials of the function $f-\varepsilon \varphi_j$ depending on these coordinates. Lemma \ref{mainll} is proved. $\Box$ $\Box$ $\Box$ \section{Proof of Theorem \ref{mss}} Consider again the parameter space ${\mathbb C}^k$ of the deformation (\ref{vers}) of $f$, remove from it its caustic $\Delta$, and define the following complex-valued function $Y$ on the remaining space.
Given a point $\lambda \in {\mathbb C}^k \setminus \Delta$, take all $\binom{\mu(f)}{1,2}$ choices of three critical points $Z_1, Z_2, Z_3$ of $f_\lambda$ (the first of which is distinguished, and the other two are not, so that the choice $(Z_1,Z_2, Z_3)$ is equal to $(Z_1, Z_3, Z_2)$), and define $Y(\lambda)$ as the product of all the corresponding numbers \begin{equation} \label{jjj} 2f_\lambda(Z_1) -f_\lambda(Z_2)-f_\lambda(Z_3). \end{equation} The obtained function can be extended to a holomorphic function close to the generic points of the caustic, and hence also to a whole neighborhood of the origin in ${\mathbb C}^k$. It vanishes exactly on the mixed Stokes' set, and has multiplicity 1 at its generic points. Therefore the multiplicity of the mixed Stokes' set is equal to the degree of this function at the point $\lambda=0$ or, which is the same, of its restriction to a generic line through the origin in ${\mathbb C}^k$, consisting of polynomials $f -\varepsilon \varphi$, $\varepsilon \in {\mathbb C}^1$. The restriction of the function $Y$ to this line is a holomorphic function in the coordinate $\varepsilon$; let us calculate the order of its zero at the point $\varepsilon=0$. Suppose first that $\varphi$ is linear and satisfies the conditions from \S \ref{pro6}. The set of critical values of $f-\varepsilon \varphi$ has the structure described in Lemma \ref{mainll}, see also Fig.~\ref{critvalpic}. Consider these values as functions of $\varepsilon$. It is easy to see that the absolute value of the difference (\ref{jjj}) decreases as $|\varepsilon|^{1+1/a_j}$ with $\varepsilon \to 0$, where $j$ is the smallest depth of collections of critical points such that not all three points $Z_1, Z_2, Z_3$ belong to one and the same collection of this depth. In particular, all $\binom{a_1 \cdots a_n}{1,2}$ factors of the function $Y(\varepsilon)$ give the contribution at least $1+1/a_1$ to the exponent of this function in $\varepsilon$.
These contributions give us the first summand in (\ref{bbbb}). Moreover, there are some $a_1 \times \binom{a_2\cdots a_n}{1,2}$ choices which give the contribution at least $1+1/a_2$: these are all possible triples which belong to one and the same collection of depth 1. Accounting for this addition (equal to $\frac{1}{a_2}-\frac{1}{a_1}=\frac{a_1-a_2}{a_1a_2}$) over all of these choices, we get the second summand in (\ref{bbbb}) (corresponding to $j=2$), etc. Finally, we get that the restriction of the function $Y$ to the generic line $\{f-\varepsilon \varphi\}$ with linear $\varphi$ is a monomial with the exponent equal to (\ref{bbbb}). Moreover, the same considerations as in \S \ref{proofma2} prove that replacing the generic linear function $\varphi$ by a non-linear one with the same linear part does not change the orders of decrease of all our factors (\ref{jjj}), so that the order (\ref{bbbb}) remains the same. $\Box$ \begin{remark} \rm By analogy with the formula (\ref{bbbb}), the formula (\ref{Mset}) can be rewritten as $$2\binom{a_1 \cdots a_n}{2} \frac{a_1+1}{a_1} + 2 \sum_{j=2}^n a_1 \cdots a_{j-1} \binom{a_j \cdots a_n}{2} \frac{a_{j-1}-a_j}{a_{j-1}a_j} \ .$$ \end{remark} \section{Proofs for pure Stokes' sets} \label{proCS} We proceed exactly as in the previous two sections and define the following function $\Omega(\lambda)$ on the space ${\mathbb C}^k \setminus \Delta$. Given $\lambda \in {\mathbb C}^k \setminus \Delta$, consider all possible choices of four unordered critical points $Z_1, \dots, Z_4 \in {\mathbb C}^n$ of $f_\lambda$ separated somehow into two pairs with a fixed order of these pairs: in total $\binom{\mu(f)}{2,2}$ choices. For any such choice we take the corresponding difference \begin{equation} \label{uuuu} f_\lambda(Z_1) + f_\lambda(Z_2) - f_\lambda (Z_3)-f_\lambda(Z_4) \end{equation} (i.e.
the sum of critical values at the critical points from the first pair minus the similar sum for the second one) and define the function $\Omega(\lambda)$ as the product of these differences over all our choices. This function vanishes exactly on the pure Stokes' set with multiplicity 2. To calculate its degree, consider again a line through the origin in ${\mathbb C}^k$ consisting of functions $f-\varepsilon \varphi$, where $\varphi$ is a linear combination of monomials $z^\alpha$ with exponents $\alpha$ as in (\ref{vers}) and $\varphi(0)=0$. \subsection{Proof of Theorem \ref{cs11}} Suppose that $n=1$ and $a_1 \equiv d$ is odd. Take first $\varphi\equiv z$. In this case all the factors (\ref{uuuu}) of the restriction of the function $\Omega$ to our line vanish exactly as $\varepsilon^{1+1/d}$. Therefore this restriction has a root of multiplicity $\frac{d+1}{d} \binom{d}{2,2} \equiv (d+1)(d-1)(d-2)(d-3)/4$. Moreover, the same arguments as in the proof of Theorem \ref{mp} show that in this case adding the non-linear monomials to $\varphi$ does not change the order of decrease of our factors and keeps the multiplicity of this root unchanged. The multiplicity of the pure Stokes' set is a half of this number, which proves Theorem \ref{cs11}. $\Box$ \subsection{Proof of Theorem \ref{cs12}} In the case of $n=1$ and even $d>2$, the line consisting of functions $f-\varepsilon \varphi$ with linear $\varphi$ (that is, of functions $z^{d+1}-\varepsilon z$) will not be generic with respect to the pure Stokes' set. Indeed, if $(Z_1 , Z_2)$ and $(Z_3,Z_4)$ are two different pairs of opposite critical points of such a function, then the factor (\ref{uuuu}) is equal to zero for all values of $\varepsilon$. Therefore let us choose the tentative function $\varphi$ in the form $z+\alpha z^2+\dots$, $\alpha \neq 0$.
It is easy to calculate that in this case a majority of the factors (\ref{uuuu}) still vanish as $\varepsilon^{1+1/a_1}$, and the remaining $\frac{d}{2} \left( \frac{d}{2}-1\right)$ factors (corresponding to all possible choices of ordered pairs of unordered pairs of opposite roots of the polynomial $f'-\varepsilon$) vanish as $\varepsilon^{1+2/d}.$ Adding all these exponents and dividing the sum by 2 (i.e. by the multiplicity of the function $\Omega$ at the generic point of the pure Stokes' set) we get the number (\ref{cs12f}). $\Box$ \subsection{Proof of Theorem \ref{cs2}.} Now we have $f= x^{a+1} + y^{b+1}$, $a$ and $b$ odd, and $\varphi_0=x+q y$, where $q$ is positive, and $q \ll 1$ if $a=b$. The critical values of $f-\varepsilon \varphi_0$ split into $a$ collections, with $b$ points in each (see Fig. \ref{critvalpic}, where however the case of {\em even} $a$ and $b$ is shown). We have $\binom{ab}{2,2}$ possible choices of quadruples of critical points $Z_1,\dots, $ $Z_4$ divided somehow into two numbered pairs. For a majority of these choices the expression (\ref{uuuu}) decreases as $\varepsilon^{1+1/a}$. However, there are the following three possible degenerations increasing this exponent: 1) all the points $Z_i$ belong to one and the same collection of $b$ values. This happens for $a \binom{b}{2,2}$ choices; the expression (\ref{uuuu}) in this case vanishes as $\varepsilon^{1+1/b}$; 2) some two of the points $Z_1, \dots, Z_4$, belonging to different pairs, lie in one collection of $b$ critical points, and the two other points lie in some other such collection. This happens for $4\binom{a}{2}\binom{b}{2}^2$ choices. In a majority of these cases, namely when the two segments connecting the points inside these pairs are not the diagonals of a parallelogram, the expression (\ref{uuuu}) also vanishes as $\varepsilon^{1+1/b}$. 3) this is the exceptional subclass of the previous case, when we have such a parallelogram. This happens for $2\binom{a}{2} \binom{b}{2}$ choices.
The expression (\ref{uuuu}) is then equal to zero, which means in particular that a line in ${\mathbb C}^k$ spanned by a linear function $\varphi$ is never generic with respect to the pure Stokes' set of our deformation of $f$. Let us study what happens when we add a generic function of degree $\geq 2$ to $\varphi_0$. Adding the monomials $x^r$ and $y^r$ with arbitrary coefficients keeps our functions $f-\varepsilon \varphi$ split into the sums of two functions depending on $x$ and $y$ only. The set of critical values of such a function is the Minkowski sum of the sets of critical values of these two functions, and therefore still contains parallelograms. So, the first possibility to move the corresponding expressions (\ref{uuuu}) away from zero is to add the monomials proportional to $xy$. To avoid the details, consider the particular case $\varphi = (a+1)x+(b+1)y + xy$. Any parallelogram of critical values of the function $f-\varepsilon \varphi_0$, $\varepsilon >0$, in this case consists of its values at four points equal to $u_{1,2}\varepsilon^{1/a} +v_{1,2}\varepsilon^{1/b}$, where $u_{1,2}$ are some two different values of $1^{1/a}$, and $v_{1,2}$ some two values of $1^{1/b}$. Adding the monomial $xy$ to $\varphi_0$ we get the additions to the corresponding critical values, which are proportional, in the first approximation, to the values of the monomial $\varepsilon xy$ at these critical points. The expression (\ref{uuuu}) therefore changes from zero to asymptotically $ ((u_1v_1 + u_2v_2)-(u_1v_2+u_2v_1))\varepsilon^{1+1/a+1/b} =(u_1-u_2)(v_1-v_2)\varepsilon^{1+1/a+1/b}$. Adding the monomials of higher degrees divisible by $xy$ makes additions of higher orders in $\varepsilon$ to this expression and thus does not change its order of decrease. Finally, we get that any of our (deformed) parallelograms gives the contribution $1+1/a+1/b$ to the exponent of $\Omega(\varepsilon)$.
Summing up all these exponents with their multiplicities, and then dividing the sum by 2, we get the formula (\ref{anscM2}). $\Box$ \begin{remark} \rm It is not very difficult to continue these calculations, considering also the pure Stokes' sets for functions $x^a+y^b$ with other parities of $a$ and $b$. \end{remark} \section{Appendix: motivations in algorithmic singularity theory} \label{motivat} The calculation of the multiplicities considered above is needed for the optimization of a program enumerating all topologically different perturbations of real function singularities. For a description of this program see \S V.8 of the book \cite{APLT}; however, the web reference given there leads to an obsolete version of it (written in 1984 and described first in Chapter 5 of \cite{AVGL}). The actual versions of this program are available at \\ \verb"https://www.hse.ru/mirror/pubs/share/185895886" (for singularities of corank $\leq 2$) and \\ \verb"https://www.hse.ru/mirror/pubs/share/185895827" (for singularities of arbitrary ranks). The further versions of the program, which will also use the results of the present paper, will appear at the bottom of the page \verb"https://www.hse.ru/en/org/persons/1297545#sci". The main idea of the algorithm is as follows. Any Morse perturbation of a complicated singularity can be described in terms of topological invariants related to the set of its critical values (including the imaginary ones), such as the order in ${\mathbb R}^1$ of their real parts, the Morse indices of the corresponding critical points, and the intersection matrices of related vanishing cycles. Such a collection of topological data is called a ``virtual morsification''. The standard topological surgeries (such as collisions of critical points or critical values) can be modelled on the level of reconstructions of these collections. The algorithm starts from the data of a real morsification and applies to it all sequences of such admissible reconstructions.
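The enumeration loop just described can be sketched as follows. This is a schematic of ours: the state encoding and the \verb"reconstructions" generator are hypothetical stand-ins for the virtual-morsification data and the admissible surgeries, and \verb"max_surgeries" stands for the a-priori bound supplied by the multiplicities computed above.

```python
from collections import deque

def enumerate_morsifications(start, reconstructions, max_surgeries):
    """Breadth-first enumeration of virtual morsifications.

    `start` is the initial collection of topological data, `reconstructions`
    maps a state to the states reachable by one admissible surgery, and
    `max_surgeries` is the stop rule: the a-priori bound on the number of
    surgeries needed, supplied by the multiplicities of the bifurcation sets.
    All names here are hypothetical placeholders, not the actual program.
    """
    seen = {start}
    frontier = deque([start])
    for _ in range(max_surgeries):
        next_frontier = deque()
        while frontier:
            state = frontier.popleft()
            for s in reconstructions(state):
                if s not in seen:
                    seen.add(s)
                    next_frontier.append(s)
        frontier = next_frontier
    return seen

# toy example: states are integers, a "surgery" adds or subtracts 1
found = enumerate_morsifications(0, lambda s: (s - 1, s + 1), 3)
print(sorted(found))  # [-3, -2, -1, 0, 1, 2, 3]
```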
The virtual morsification of any real one surely occurs at some step of this algorithm. However, in the case of sufficiently complicated singularities these admissible sequences are not necessarily finite, because they can include the changes of bases of vanishing cycles defined by the rotations of imaginary critical values around one another, and the occurring set of intersection matrices may not be finite. The results of this article give us a priori estimates of the number of steps after which any existing topological type of real perturbations will be attained, so that we can stop our algorithm. Namely, the multiplicity of the caustic (the Maxwell set, the pure Stokes' set) gives us an upper bound for the necessary number of collisions of critical points (respectively, of collisions of critical values not related to the collisions of the corresponding critical points; of changes of the basis of vanishing cycles related to the rotations of imaginary critical values). The study of the mixed Stokes' set helps us in the situation when two real critical values undergo the Morse surgery, go into the complex domain, travel there somehow and then return to the real line at some other place among the real critical values. In the theory of real analytic function singularities, the mixed Stokes' set considered above is represented by (and usually coincides with the algebraic closure of) the set of parameters $\lambda \in {\mathbb R}^k$ such that the complexification of the function $f_\lambda$ has two complex conjugate critical values whose real parts coincide with its critical value at some real critical point. Correspondingly, the real version of the pure Stokes' set of such a deformation is the closure of the set of parameters $\lambda$ such that the complexification of $f_\lambda$ has two pairs of conjugate critical values with equal real parts.
For the geometry of real Stokes' sets of singularities of small codimensions, see \cite{BH}, which also gives a link to the literature discussing the physical applications of this notion. The estimates of the sufficient numbers of virtual surgeries in our algorithm are justified by the following easy statement. \begin{proposition} \label{triv} If the local multiplicity of a real algebraic hypersurface $X \subset {\mathbb R}^k$ at the point $0 \in {\mathbb R}^k$ is equal to $d$, then any two points of its complement in a neighborhood of $0$ can be connected by a generic path intersecting this hypersurface at most $d$ times. \end{proposition} {\it Proof.} Consider a line through the origin in ${\mathbb R}^k$ which is generic with respect to our hypersurface. There is a segment in this line whose unique intersection point with $X$ is its middle point $0 \in {\mathbb R}^k$. Choose two small balls in ${\mathbb R}^k$ centered at the endpoints of this segment and not intersecting the hypersurface $X$. Given two points in ${\mathbb R}^k \setminus X$ close to $0$, we can connect them by two paths, each of which is transversal to $X$ and consists of five parts: the first and the last of these parts connect our two points in ${\mathbb R}^k \setminus X$ to some two points very close to $0$; the second and the fourth ones are segments parallel to the chosen line and ending in one of the two chosen balls; and the middle part connects the two obtained points inside this ball. The sum of the intersection numbers of these two paths with $X$ is at most $2d$; therefore at least one of these two numbers is not greater than $d$. $\Box$ \end{document}
\begin{document} \title{A footnote on Expanding maps} \author{Carlangelo Liverani} \address{Carlangelo Liverani\\ Dipartimento di Matematica\\ II Universit\`{a} di Roma (Tor Vergata)\\ Via della Ricerca Scientifica, 00133 Roma, Italy.} \email{{\tt [email protected]}} \thanks{ It is a pleasure to thank Oliver Butterley for pointing out the need for this note, for many interesting discussions related to this type of problem and for carefully reading a preliminary version of this note. I also thank Luigi Ambrosio for helpful references and Viviane Baladi and the anonymous referee for helpful comments. Work supported by the European Advanced Grant Macroscopic Laws and Dynamical Systems (MALADY) (ERC AdG 246953).} \begin{abstract} I introduce Banach spaces on which it is possible to precisely characterize the spectrum of the transfer operator associated to a piecewise expanding map with H\"older weight. \end{abstract} \keywords{Expanding maps, decay of correlations, Transfer operator.} \subjclass[2000]{37A05, 37A50, 37D50} \maketitle \section{Introduction} Lately there has been some renewed interest in different norms which allow one to analyze transfer operators associated to expanding maps. Such an interest has several motivations, one of the most relevant being the study of semiflows arising from Lorenz-like models (e.g., see \cite{AGP}). At the same time, the extension of transfer operator methods to the hyperbolic setting \cite{BKL, GL1, GL2, BT1, BT2, BG1, BG2, DL, Li2, Ts, BL}, just to mention a few, has revitalized the subject. Particular attention has been devoted to the case in which the map is piecewise smooth and its derivative or the weight has low regularity (H\"older instead of $\cC^1$). Several possibilities have been and are currently being explored in an attempt to improve on the classical BV scheme \cite{Ry, Co, Li1} or its relevant variants \cite{Ke, Sa}. Two recent interesting contributions are \cite{Th,Bu}. 
The purpose of this note is to comment on an old proposal of mine, put forward in footnote 12 of \cite[page 193]{Li}, that has gone mostly unnoticed and/or misunderstood. Here I show that it can be easily applied to many relevant situations, yielding the strongest results so far. I present the approach in the expanding one-dimensional case, but I see no obstacles in extending it to higher dimensions (following \cite{Li1}) or, with some more work, to the hyperbolic setting. In the next section I will detail the proposed Banach space, which is a weakening of $BV$ in the spirit of fractional-order Sobolev spaces while completely avoiding definitions based on Fourier transforms. In the final section I will detail some settings where the above strategy can be applied and I will give the main result of the paper. \section{ The Banach space} Let $\fkM$ be the Banach space of complex valued Borel measures on $[0,1]$ equipped with the total variation norm. For each $\vf\in \cC^1([0,1],\bC)$, $\mu\in \fkM$, let us define, for each $\alpha\in[0,1]$, \[ \begin{split} &|\vf|_\alpha=\sup_{x\in [0,1]}|\vf(x)|+\sup_{x,y\in[0,1]}\frac{|\vf(x)-\vf(y)|}{|x-y|^\alpha}\\ &\|\mu\|_\alpha=\sup_{\{\vf\in\cC^1\;:\;|\vf|_\alpha\leq 1\}}|\mu(\vf')|. \end{split} \] We can then define $\cB_\alpha=\{\mu\in\fkM\;:\;\|\mu\|_\alpha<\infty\}$. Note that $\cB_0$ is the space of absolutely continuous measures with density in $BV$, while $\cB_1=\fkM$. \begin{rem} In the following we give, for the reader's convenience, a self-contained proof of the relevant properties of the spaces $\cB_\alpha$. Note however that some results are known in larger generality than needed here \cite{Gr, Zu}. There is also a connection between our spaces and the theory of BV functions on snowflake spaces \cite{Zu}. \end{rem} \begin{lem} If $\alpha\in [0,1)$ and $\mu\in\cB_\alpha$, then $\mu$ is absolutely continuous with respect to Lebesgue and, calling $h$ its density,\footnote{ In this note the $L^p$ spaces are all w.r.t. 
the Lebesgue measure.} \[ |h|_{L^{\frac 1{\alpha}}}\leq 2 \|\mu\|_\alpha. \] In addition $\cB_\alpha$ is a Banach space. \end{lem} \begin{proof} Let $\vf\in \cC^0$ and $\mu\in \cB_\alpha$, and define $\phi(x)=\int_0^x\vf$. Then \[ \mu( \vf)=\mu\left( \phi'\right). \] Since $\left|\phi(x)\right|\leq |\vf|_{L^{\frac1{1-\alpha}}}$ and \[ |\phi(x)-\phi(y)|=\left|\int_x^y\vf\right|\leq |\vf|_{L^{\frac1{1-\alpha}}} |x-y|^\alpha, \] it follows that $|\phi|_\alpha\leq 2 |\vf|_{L^{\frac1{1-\alpha}}}$, hence \[ |\mu(\vf)|\leq 2\|\mu\|_\alpha|\vf|_{L^{\frac 1{1-\alpha}}}. \] Thus $\mu$ belongs to the dual of $L^{\frac1{1-\alpha}}$, i.e. $L^{\frac 1{\alpha}}$, hence $\mu$ is absolutely continuous with respect to Lebesgue; let $h$ be its density. Then \[ |h|_{L^{\frac 1\alpha}}=\sup_{|\vf|_{L^{\frac1{1-\alpha}}}\leq 1}\mu(\vf)\leq 2\|\mu\|_\alpha. \] To verify that $\cB_\alpha$ is a Banach space it suffices to see that it is complete. Let $\{\mu_n\}\subset \cB_\alpha$ be a Cauchy sequence in $\cB_\alpha$ and $\{h_n\}$ be the respective densities. Then $\{h_n\}$ is a Cauchy sequence in $L^{\frac 1\alpha}$; let $h$ be its limit. Setting $\mu(\vf):=\int_0^1 h\vf$, for each $\vf\in\cC^1$,\footnote{ In this note I use $C_\#$ to designate a generic constant.} \[ |\mu(\vf')|=\lim_{n\to\infty}|\mu_n(\vf')|\leq C_\#|\vf|_\alpha. \] Thus, $\mu\in\cB_\alpha$. On the other hand, for each $\ve>0$, there exists $n\in\bN$ such that $\|\mu_n-\mu_m\|_\alpha\leq \ve$ for all $m\geq n$. Then, for each $\vf\in\cC^1$ and $m>n$, \[ |\mu_n(\vf')-\mu(\vf')|\leq \ve |\vf|_\alpha+|\mu_m(\vf')-\mu(\vf')|. \] Taking the limit as $m\to \infty$, it follows that $\mu$ is the limit of $\{\mu_n\}$ in $\cB_\alpha$. \end{proof} Given the above Lemma we can as well consider the space of densities equipped with the norm \[ \|h\|_\alpha=\sup_{|\vf|_\alpha\leq 1}\left|\int_0^1 h\vf'\right|. \] By a slight abuse of notation we will call such a Banach space $\cB_\alpha$ as well. 
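For instance (a simple sanity check of the definition, not needed in what follows): if $h$ equals $1$ on $[0,\frac 12]$ and $0$ elsewhere, then for each $\vf\in\cC^1$ with $|\vf|_\alpha\leq 1$ we have \[ \left|\int_0^1 h\vf'\right|=\left|\vf(\tfrac 12)-\vf(0)\right|\leq 2^{-\alpha}, \] so that $\|h\|_\alpha\leq 2^{-\alpha}$, consistently with the fact that $BV$ densities belong to every $\cB_\alpha$. 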
Since $\cB_\alpha\subset L^1([0,1])$ it is then convenient to use $L^1$ rather than $\fkM$ as a weak space.\footnote{ Note that the norm is exactly the same.} \begin{lem}\label{lem:compactness} For each $\alpha\in (0,1)$ the unit ball of $\cB_\alpha$ is relatively compact in $L^1([0,1])$. \end{lem} \begin{proof} To start, we need a little preliminary result. Since the functions in the set $\{\int_0^x\vf\}_{|\vf|_\infty\leq 1}$ are uniformly Lipschitz, they are, by Ascoli-Arzel\`a, relatively compact in the $\alpha$-H\"older topology. Thus, for each $\ve>0$ there exists a set $S_\ve:=\{\phi_i\}_{i=1}^{n_\ve}$, $|\phi_i|_{\cC^1}<\infty$ and $\sup_i|\phi_i|_\alpha\leq 1$, such that, for each $\vf\in L^\infty$, $|\vf|_\infty\leq 1$, setting $\Phi(x)=\int_0^x \vf$, we have \[ \inf_{i\in\{1,\dots,n_\ve\}}\left|\Phi-\phi_i\right|_\alpha\leq \ve. \] Accordingly, for each $h\in\cB_\alpha$, \[ \int h\vf=\int h\Phi'\leq \ve\|h\|_\alpha+\sum_{i=1}^{n_\ve}\left|\int h\phi_i'\right|. \] Taking the sup over $\vf$ we then have \[ |h|_{L^1}\leq \ve\|h\|_\alpha+\sum_{i=1}^{n_\ve}\left|\int h\phi_i'\right|. \] We are now ready to conclude the proof. Since we deal with metric spaces it suffices to check sequential compactness. Let $\{h_n\}_{n\in\bN}\subset \{ h\in\cB_\alpha\;:\;\|h\|_\alpha\leq 1\}$. Define $\overline S=\cup_{j\in\bN}S_{2^{-j}}$. Note that \[ \sup_{\substack{\phi\in \overline S}}\left|\int h_n \phi'\right|\leq 1. \] Hence, by the Tychonoff Theorem, we can extract a subsequence $\{n_j\}$ such that $\int h_{n_j}\phi'$ converges, as $j\to\infty$, for all $\phi\in\overline S$. It follows that $\{h_{n_j}\}$ is Cauchy in $L^1$ since, for all $\ve>0$, \[ |h_{n_j}-h_{n_k}|_{L^1}\leq 2\ve+\sum_{i=1}^{n_\ve}\left|\int [h_{n_j}-h_{n_k}]\phi_i'\right|<3\ve, \] where we have chosen $j,k$ large enough. 
\end{proof} \begin{lem}\label{lem:lip} For each $\alpha\in (0,1)$, $h\in\cB_\alpha$ and $\vf$ Lipschitz we have\footnote{ Note that, by Rademacher's Theorem, $\vf$ is almost everywhere differentiable with bounded derivative, hence the integral is meaningful.} \[ \int_0^1 h\vf'\leq |\vf|_\alpha \|h\|_\alpha. \] \end{lem} \begin{proof} It is convenient to extend $h$ to be zero and $\vf$ to be continuous and constant outside $[0,1]$, so that we can regard all the integrals as integrals over $\bR$. Let $j_\ve$ be a smooth mollifier; then\footnote{ As usual $j*h(x)=\int_{\bR}j(x-y)h(y)dy$.} \[ \int_0^1 h\vf'=\lim_{\ve\to 0}\int_{\bR}\left(j_\ve* h\right)\vf'=\lim_{\ve\to 0}\int_{\bR}h(j_\ve* \vf)'\leq \lim_{\ve\to 0}\|h\|_\alpha |j_\ve*\vf|_\alpha\leq \|h\|_\alpha |\vf|_\alpha. \] \end{proof} \section{Piecewise differentiable maps and H\"older continuous weights} Let $f$ be an (almost everywhere defined) map of the interval $[0,1]$ into itself and $\cP$ a (possibly infinite) collection of open subintervals of $[0,1]$. We assume that $\cup_{p\in\cP}\,p$ has full Lebesgue measure in $[0,1]$. Also, assume that $f\in\cC^1(p,\bR)$ and $f, \frac 1{f'}\in\cC^0(\bar p,\bR)$ for each $p\in\cP$. Moreover, we assume the map to be expanding: \[ \inf_{p\in\cP}\inf_{x\in p} |f'(x)|>1. \] Let $\xi:[0,1]\to\bC$ be a function that we will call the {\em weight}. We assume that there exists $\beta\in (0,1]$ such that, for each $p\in\cP$, $\xi \in \cC^\beta(\bar p,\bR)$, with uniform $\beta$-H\"older constant. In addition, we require that $|f'|^r\in L^1$ for some $r\geq 0$, that $\xi \cdot f'\in L^\infty$, and that there exists $\gamma\in [0,1)$ such that\footnote{ Here and in the following $x^t$, $x, t\in\bR$, is meant as a complex number and $|\cdot|$ is used both for the absolute value and the complex modulus.} \begin{equation}\label{eq:optimal?} \sum_{p\in\cP}\sup_{z\in p}|\xi (z)f'(z)^\gamma|^{\frac 1{1-\gamma}}<\infty. 
\end{equation} Next, let $\cL_\xi$ be the transfer operator (see \cite{Ba} for the relevance of such operators) defined by \[ \cL_\xi h(x)=\sum_{y\in f^{-1}(x)}\xi(y)h(y). \] The main result of this note is the following. \begin{thm}\label{thm:main} In the above setting, if $\beta>\frac 1{r+1}$, then, for each \[ 1>\alpha>\max\left\{\gamma, 1-\beta, \frac {1-\beta}{1-\beta(1-r)}\right\}, \] the spectral radius of $\cL_\xi$, when acting on $\cB_\alpha$, is bounded from above by $|\xi f'|_\infty$, while the essential spectral radius is bounded by $\min\{|\xi f'|_\infty,8|\xi (f')^{\alpha}|_\infty\}$. \end{thm} Before discussing the proof of the above result let us indulge in several remarks. \begin{rem} The proof will nowhere use the condition $|f'|>1$. Notice, however, that if such a condition is not satisfied, then the theorem may easily be empty, since the bound for the essential spectral radius might equal the bound for the spectral radius. Still, a small generalization is possible; we leave it to the interested reader. \end{rem} \begin{rem} Note that, as stated, for $\gamma=0, \beta=1$ the Theorem does not cover the case $\alpha=0$. This is the usual BV case and it is already well known. Also, in the case $\beta=0$ there is no reason to expect good spectral properties for $\cL_\xi$. \end{rem} \begin{rem} As usual, by applying Theorem \ref{thm:main} to a large power of $\cL_\xi$, much sharper estimates of the spectral and essential spectral radius can be obtained (in particular the $8$ in the Theorem is superficial; this is why I did not strive to improve it). I leave this exercise to the reader (see \cite{GL} for relevant results). \end{rem} \begin{rem} To compare the above result with the literature, note that \cite{Ke}, at least in the published version, applies only to the case $\xi=\frac 1{f'}$ and when the partition $\cP$ is finite. The results in \cite{Th} apply only to the case $f'\in L^\infty$. 
Finally, in \cite{Bu} it is assumed that \eqref{eq:optimal?} holds with $\gamma=0$. Note that \[ \sum_p|\xi (f')^\gamma |_{L^\infty(p)}^{\frac{1}{1-\gamma}}\leq |\xi f'|_\infty^{\frac{\gamma}{1-\gamma}}\sum_p|\xi |_{L^\infty(p)}\leq C_\#\sum_p|\xi |_{L^\infty(p)}, \] thus the present condition is weaker. Also, in \cite{Bu} there appears the condition $f'\in L^r$, $r\geq 1$, with $\beta>\frac 1r$, which here is replaced by $r\geq 0$, $\beta>\frac 1{r+1}$. In particular, if $\beta>\frac 12$, one can treat the case $r=1$, which is the natural condition when the partition is finite. In the case of infinite partitions $r=1$ is not natural anymore (think of the Gauss map), yet an $r<1$ may suffice. Note however that, most likely, the above cited results can be improved with some extra work. In particular, the norms in \cite{Ke} could provide a bound in which no condition on $f'$ is required \cite{private}, while \cite{Th} could probably be improved by using a partition of unity in the spirit of \cite{Ba1, BT1}. Even so, the present norms seem to be an interesting candidate for extensions to the hyperbolic setting. \end{rem} \begin{rem} \label{rem:infty}The reader should be advised that the goal of this note is not to treat the most general case but to show that the $\cB_\alpha$ spaces can be conveniently used to investigate a vast class of problems. The optimal conditions under which Theorem \ref{thm:main} holds depend heavily on the situation. The present treatment is especially adapted to the case of finite partitions with weights that can also be zero. 
In the case of infinite partitions it could be more natural to consider weights of the form $\xi=e^{\phi}$, where $\phi$ is called the {\em potential}, and impose the H\"older condition on the potential, see Theorem \ref{thm:main2}.\footnote{ If $\xi$ vanishes somewhere one can still use such a setting by introducing countably many {\em artificial} partition elements, in the spirit of billiard {\em homogeneity strips}.} \end{rem} Theorem \ref{thm:main} follows in a standard way (see \cite{Ba}) from Lemma \ref{lem:compactness} and the next Lasota-Yorke inequality. \begin{lem}\label{LY} If $\beta>\frac 1{r+1}$, then for each $1>\alpha>\max\{\gamma, 1-\beta, \frac{1-\beta}{1-\beta(1-r)}\}$, there exists $B>0$ such that, for all $h\in\cB_\alpha$, \[ \begin{split} &|\cL h|_{L^1}\leq |\xi f'|_\infty |h|_{L^1}\\ &\|\cL h\|_\alpha\leq 8|\xi\cdot (f')^{\alpha}|_\infty \|h\|_\alpha+ B |h|_{L^1}. \end{split} \] \end{lem} \begin{proof} For each $h\in L^1, \vf\in L^\infty$ we have, by a change of variable on each $p\in\cP$, \[ \int \cL_\xi h\cdot \vf =\int h\cdot \xi \cdot f'\cdot \vf\circ f, \] from which the first inequality of the Lemma readily follows.\newline Let $h\in \cB_\alpha$. For each $\vf\in\cC^1$ such that $|\vf|_\alpha\leq 1$, we have \[ \int \cL_\xi h\cdot \vf' =\sum_{p\in\cP}\int_p h \xi (\vf\circ f)'. \] First of all, we want to take care of the fact that $f'$ may blow up at the boundaries of $p\in\cP$, so that $\vf\circ f$ may fail to be Lipschitz on $\bar p$ (preventing us from using Lemma \ref{lem:lip}). At the same time we would like to approximate $\xi$ by more regular functions, since during the computation we will need to take the derivative of the weight and $\xi$ is only H\"older. A nice possibility is to use piecewise constant functions $\xi_k$, so that the problem of taking derivatives can be handled just by refining the partition $\cP$. 
This procedure must be done with some care since in the following it is essential to retain the property $\xi_k\cdot f'\in L^\infty$. Since, by hypothesis, $|\xi (\vf\circ f)'|_{\infty}\leq |\xi f'|_{\infty}\leq C_\#$, it follows that $\xi$ is zero where $f'$ blows up. For each $\ve>0$ we can then consider the functions $\bar\xi_{\ve}(z)=\max\{|\xi(z)|-\ve, 0\}$ and $\tilde\xi_\ve=\frac{\xi}{|\xi|}\cdot\bar \xi_\ve$. Clearly $\tilde\xi_\ve$ is zero in a neighborhood of the points at which $f'$ explodes; moreover $|\tilde \xi_\ve|\leq |\xi|$ and the $\beta$-H\"older constants of the $\tilde\xi_\ve$ are uniformly (in $\ve$) proportional to that of $\xi$. By the Lebesgue Dominated Convergence Theorem, for each $h\in\cB_\alpha,\vf\in\cC^1$, $|\vf|_\alpha\leq 1$ there exists $\ve$ such that \begin{equation}\label{eq:part0} \left|\sum_{p\in\cP}\int_p h (\xi-\tilde\xi_\ve) (\vf\circ f)'\right|\leq |h|_{L^1}. \end{equation} Next, for each $k\in\bN$, let $\cP_k$ be a refinement of $\cP$ such that all the elements of $\cP$ of length larger than $2^{-k+1}$ are partitioned in elements of length between $2^{-k+1}$ and $2^{-k}$. Let $\cP^{\text{long}}_k=\{p\in\cP_k \;:\; p\not\in\cP\}$ (this is the collection of elements that come from the refinement and hence are longer than $2^{-k}$; note that they are finitely many) and $\cP^{\text{short}}_k=\{p\in\cP_k \;:\; p\in\cP\}$ (these are the shorter elements). For each $p\in\cP_k$ let $x_p\in\bar p$ be such that $|\tilde\xi_{\ve}(x_p)|=\inf_{z\in p}|\tilde\xi_{\ve}(z)|$. For each $k\in\bN$ let $\xi_k(x)=\tilde\xi_\ve(x_p)$ for all $x\in p\in\cP_k$. By construction, $\xi_k\in L^\infty$ and \[ \left|\xi_k-\tilde\xi_\ve\right|_\infty\leq C_\# 2^{-\beta k}. \] It is now convenient to define $\rho_k=\xi_{k+1}-\xi_{k}$. Note that, for all $k_0\in\bN$, \begin{equation}\label{eq:rho} \begin{split} &\sum_{k\geq k_0}\rho_k=\tilde\xi_\ve-\xi_{k_0},\\ &|\rho_k|_\infty\leq C_\# 2^{-\beta k}. 
\end{split} \end{equation} Hence,\footnote{ From now on we will write $(\vf\circ f)'$ for $\sum_{p\in\cP_{k}}\Id_p(\vf\circ f)'$, i.e. the derivative is meant in the strong sense but only where it is defined. By the way, given a set $A$, the indicator function $\Id_A$ is defined by $\Id_A(x)=1$ if $x\in A$ and zero otherwise.} \begin{equation}\label{eq:ly-0} \begin{split} &\sum_{p\in\cP_{k_0}}\int_p h \tilde\xi_\ve(\vf\circ f)'=\sum_{k\geq k_0}\sum_{p\in\cP_{k_0}}\int_p h(\vf\circ f)'\rho_k+\sum_{p\in\cP_{k_0}}\int_p h (\vf\circ f)'\xi_{k_0}\\ &=\sum_{p\in\cP_{k_0}}\int_p h (\vf\circ f\cdot \xi_{k_0})' +\sum_{k\geq k_0}\int_0^1 h\frac d{dx}\left[\int_0^x\sum_{p\in\cP_{k_0}}\Id_p(\vf\circ f)'\rho_k\right] , \end{split} \end{equation} where the convergence of the series on the first line follows because $f'$ is bounded on the support of $\tilde\xi_\ve$, and hence on the support of the $\rho_k$. To continue, let $\tilde\ell$ be linear on each $p\in\cP^{\text{long}}_{k_0}$ and equal to $\vf\circ f\cdot \xi_{k_0}$ on $\partial p$. Then we define \[ \zeta_{k_0}(x)=\begin{cases}\vf\circ f\cdot \xi_{k_0}(x)-\tilde \ell(x)\quad&\forall x\in p\in\cP^{\text{long}}_{k_0}\\ 0&\text{otherwise}. \end{cases} \] Note that $\zeta_{k_0} \in \cC^0([0,1],\bR)$, in fact Lipschitz. In addition, for $x,y\in p\in\cP$, \begin{equation}\label{eq:basic} \left|\vf\circ f(x)-\vf\circ f(y)\right|\leq \left|\int_x^yf'(z)dz\right|^\alpha\leq |f'(w)|^\alpha|x-y|^\alpha \end{equation} for some $w\in [x,y]$. Thus for each $x,y\in p\in\cP^{\text{long}}_{k_0}$, \[ \left|\zeta_{k_0}(x)-\zeta_{k_0}(y)\right|\leq 2|\xi (f')^\alpha|_\infty|x-y|^\alpha . \] On the other hand, if $x$ and $y$ belong to different elements of $\cP^{\text{long}}_{k_0}$, let $b_1, b_2\in [x,y]$ be the boundaries of the elements to which $x$ and $y$ belong, respectively. 
Since, by construction, $\zeta_{k_0}=0$ at the boundaries of the elements of $\cP_{k_0}$, we have\footnote{ In the last line we use the H\"older inequality: $\sum_i a_i b_i\leq \left[\sum_i a_i^\frac 1\alpha\right]^{\alpha}\left[\sum_i b_i^{\frac 1{1-\alpha}}\right]^{1-\alpha}$.} \[ \begin{split} \left|\zeta_{k_0}(x)-\zeta_{k_0}(y)\right|&\leq 2 |\xi(f')^\alpha|_\infty (|x-b_1|^\alpha+|y-b_2|^\alpha)\\ &\leq 2^{2-\alpha} |\xi(f')^\alpha|_\infty|x-y|^\alpha. \end{split} \] Putting together the above facts we have \begin{equation}\label{eq:first} \begin{split} &\left|\zeta_{k_0}\,\right|_\infty\leq 2 |\xi |_\infty\\ &\left|\zeta_{k_0}\,\right|_{\alpha}\leq 6 |\xi \cdot(f')^\alpha|_\infty. \end{split} \end{equation} We can then write the first term of the second line of \eqref{eq:ly-0} as \[ \begin{split} &\sum_{p\in\cP_{k_0}}\int_p h (\vf\circ f\cdot \xi_{k_0})'=\int_0^1 h\zeta_{k_0}'+\sum_{p\in\cP^{\text{long}}_{k_0}}\int_p h\tilde\ell'+\sum_{p\in\cP^{\text{short}}_{k_0}}\int_p h (\vf\circ f\cdot \xi_{k_0})'\\ &\leq 6|\xi \cdot(f')^\alpha|_\infty\|h\|_\alpha+ C_{k_0} |h|_{L^1}+\int_0^1 h \frac d{dx}\int_0^x\sum_{p\in\cP^{\text{short}}_{k_0}}\Id_p(\vf\circ f\cdot \xi_{k_0})'. \end{split} \] Note that we cannot avoid the separation between the short and long pieces: had we defined $\tilde\ell$ as a linear interpolation on all the intervals, then $\sup_p|\tilde\ell'|_{\cC^0(\bar p,\bR)}$ could have been infinite. As is made clear by the above expression, to handle the short pieces we use a by now standard idea: we estimate using the strong norm. 
Thus we must compute the norm of the test function: \[ \begin{split} &\left|\int _x^y\sum_{p\in\cP^{\text{short}}_{k_0}}\Id_p(\vf\circ f\cdot \xi_{k_0})'\right|\leq \sum_{\substack{p\in\cP^{\text{short}}_{k_0}\\p\cap [x,y]\neq \emptyset}}\sup_{z\in p}|\xi (z) f'(z)^\alpha| |p\cap [x,y]|^\alpha\\ &\leq \left[\sum_{p\in\cP^{\text{short}}_{k_0}}\sup_{z\in p}|\xi (z) f'(z)^\alpha|^{\frac 1{1-\alpha}}\right]^{1-\alpha}|x-y|^\alpha \\ &\leq C_\#|\xi f'|_\infty^{\frac{\alpha-\gamma}{1-\gamma}}\left[\sum_{\{p\in \cP\;:\; |p|\leq 2^{-k_0}\}}\sup_{z\in p}|\xi (z)f'(z)^\gamma|^{\frac{1}{1-\gamma}}\right]^{1-\alpha}\hskip-.5cm |x-y|^\alpha\leq \frac{|\xi \cdot(f')^\alpha|_\infty}2|x-y|^\alpha \end{split} \] where we have used the hypothesis $\alpha\geq \gamma$ and condition \eqref{eq:optimal?}, and chosen $k_0$ large enough (so that the tail of the convergent series is as small as needed). Accordingly \begin{equation}\label{eq:ly-1} \sum_{p\in\cP_{k_0}}\int_p h (\vf\circ f\cdot \xi_{k_0})'\leq 7\,|\xi \cdot(f')^\alpha|_\infty\|h\|_\alpha+ C_{k_0} |h|_{L^1}. \end{equation} To estimate the second term in the second line of \eqref{eq:ly-0} note that, by construction, $\rho_k$ is zero on the elements $p\in \cP^{\text{short}}_{k+1}$. We can then consider a piecewise linear approximation $\ell_k$ of $\vf\circ f$, constructed by taking linear pieces on each $p\in\cP^{\text{long}}_{k+1}$ and such that the two functions are equal on $\partial\cP^{\text{long}}_{k+1}$. Then, we define $\eta_k(x)=0$ if $x\in p\in \cP^{\text{short}}_{k+1}$ and $\eta_k=\rho_k(\vf\circ f-\ell_k)$ otherwise. Note that $\eta_k\in\cC^0$, and in fact Lipschitz. We can write the test function in the second term in the second line of \eqref{eq:ly-0} as \begin{equation}\label{eq:psik} \psi_k(x):=\int_0^x(\vf\circ f)'\rho_k=\int_0^x\eta_k'+ \int_0^x\ell_k'\rho_k. \end{equation} We are left with the task of estimating $|\psi_k|_\alpha$. We will treat the two terms separately. 
To start, note that (by \eqref{eq:rho}) \[ \left|\int_x^y\eta_k'\right|=\left|\sum_{p\in\cP^{\text{long}}_{k+1}}\int_{p\cap [x,y]}\eta_k'\right|\leq C_\#2^{-\beta k} \] since at most two of the elements of the sum are nonzero, given that $\eta_k$ is zero at the boundaries of $\cP_{k+1}$. On the other hand, if $p=[a,b]$ and $p\cap [x,y]=[a',b']$ we have\footnote{ To obtain the last line note that the second term in the curly bracket of the second line is bounded by $C_\#|p|^{\alpha-1}|p\cap [x,y]|$ and $|p\cap [x,y]|\leq |p|^{1-\alpha}|x-y|^\alpha$.} \[ \begin{split} \left|\int_{p\cap[x,y]}\eta_k'\right|&\leq |\rho_k(a)|\left \{|\vf(f(b'))-\vf(f(a'))|+\frac{|\vf(f(b))-\vf(f(a))|}{|p|}|p\cap [x,y]|\right\}\\ &\leq |\rho_k(a)|^{1-\alpha}2^\alpha\left \{\left[\int_{p\cap[x,y]}|\xi f'|\right]^\alpha+\left[\int_{p}|\xi f'|\right]^\alpha\frac{|p\cap [x,y]|}{|p|}\right\}\\ &\leq C_\# 2^{-\beta(1-\alpha) k} |x-y|^\alpha \end{split} \] where we have used the hypothesis $\xi f'\in L^\infty$ and \eqref{eq:basic}. Accordingly, \begin{equation}\label{eq:part-one} \left|\int_0^{(\cdot)} \eta_k'\right|_\alpha\leq C_\# 2^{-\beta(1-\alpha) k}. \end{equation} Next, we must estimate the second term in \eqref{eq:psik}: \begin{equation}\label{eq:start-last} \begin{split} \left|\int_x^y\ell'_k\rho_k\right|&\leq \sum_{\substack{p\in\cP^{\text{long}}_{k+1}\\p\cap[x,y]\neq\emptyset}}|\rho_k|_{L^\infty(p)}\left|\int_pf'\right|^\alpha\frac{|p\cap[x,y]|}{|p|}\\ &\leq C_\# 2^{-\beta(1-\alpha) k}\sum_{\substack{p\in\cP^{\text{long}}_{k+1}\\p\cap[x,y]\neq\emptyset}}\left|\int_p|\xi f'|\right|^\alpha\frac{|p\cap[x,y]|}{|p|}\\ &\leq C_\# 2^{-\beta(1-\alpha) k+(1-\alpha)k}|x-y|\leq C_\# 2^{-\epsilon(1-\alpha) k} |x-y|^\alpha, \end{split} \end{equation} provided \begin{equation}\label{eq:xy-small} |x-y|\leq 2^{-[\epsilon +(1-\beta)]k}. \end{equation} To treat the $x,y$ for which \eqref{eq:xy-small} fails we estimate the first line of \eqref{eq:start-last} differently. 
To do so it is convenient to divide the discussion into two cases. If $r\leq 1$, then \begin{equation}\label{eq:almost1} \begin{split} \left|\int_x^y\ell'_k\rho_k\right|&\leq \sum_{\substack{p\in\cP^{\text{long}}_{k+1}\\p\cap[x,y]\neq\emptyset}}|\rho_k|_{L^\infty(p)}^{1-\alpha(1-r)}\left|\int_p|\xi|^{1-r}f'\right|^\alpha\frac{|p\cap[x,y]|^{1-\alpha}}{|p|^{1-\alpha}}\\ &\leq C_\#2^{-\beta(1-\alpha(1-r))k+(1-\alpha)k}\sum_{p\in\cP_{k+1}}\left|\int_p |f'|^r\right|^\alpha |p\cap[x,y]|^{1-\alpha}\\ &\leq C_\#2^{-\beta(1-\alpha(1-r))k+(1-\alpha)k}\left|\int_0^1 |f'|^r\right|^\alpha |y-x|^{1-\alpha}. \end{split} \end{equation} If $\alpha\leq \frac 12$, then, for some $\epsilon>0$, \begin{equation}\label{eq:part2} \left|\int_x^y\ell'_k\rho_k\right|\leq C_\# 2^{-\epsilon k}|y-x|^{\alpha}, \end{equation} provided $\alpha>\frac{1-\beta}{1-\beta(1-r)}$. If, instead, $\alpha >\frac 12$, then we can continue the estimate in \eqref{eq:almost1} by using \eqref{eq:xy-small} to yield \[ \left|\int_x^y\ell'_k\rho_k\right|\leq C_\# 2^{-\beta(1-\alpha(1-r))k+(1-\alpha)k} 2^{[\epsilon+(1-\beta)](2\alpha-1)k}|x-y|^\alpha\leq C_\# 2^{-\epsilon k}|x-y|^\alpha \] provided $\beta>\frac {1}{1+r}$ and $\epsilon$ has been chosen small enough. On the other hand, if $r>1$, we have \[ \begin{split} \left|\int_x^y\ell'_k\rho_k\right|&\leq \sum_{\substack{p\in\cP^{\text{long}}_{k+1}\\p\cap[x,y]\neq\emptyset}}|\rho_k|_{L^\infty(p)}\left|\int_p|f'|^r\right|^{\frac{\alpha}r}|p\cap[x,y]|\, |p|^{\alpha(1-\frac 1r)-1}\\ &\leq\sum_{\substack{p\in\cP^{\text{long}}_{k+1}\\p\cap[x,y]\neq\emptyset}}2^{-\beta k+(1-\alpha)k}\left|\int_p|f'|^r\right|^{\frac{\alpha}r}|p\cap[x,y]|^{1-\frac\alpha r}\\ &\leq C_\# 2^{-\beta k+(1-\alpha)k}|x-y|^{1-\frac\alpha r}\leq C_\#2^{-\epsilon k}|x-y|^\alpha, \end{split} \] provided \[ \frac r{1+r}\geq \alpha>1-\beta. 
\] If instead $\alpha>\frac r{1+r}$, then \begin{equation}\label{eq:part3} \left|\int_x^y\ell'_k\rho_k\right|\leq C_\# 2^{-\beta k+(1-\alpha)k}2^{(\alpha+\frac{\alpha}r-1)[(1-\beta)+\epsilon]k} |x-y|^{\alpha}\leq 2^{-\epsilon k}|x-y|^{\alpha}, \end{equation} provided $\beta>\frac 1{1+r}$ and $\epsilon$ has been chosen small enough. Collecting equations \eqref{eq:part-one}, \eqref{eq:part2} and \eqref{eq:part3} we have that, for $\beta>\frac 1{1+r}$ and $\alpha>\frac{1-\beta}{1-\beta(1-r)}$, \[ |\psi_k|_\alpha\leq C_\#2^{-\epsilon k}. \] Thus, by choosing $k_0$ large enough, we have \begin{equation}\label{eq:ly-2} \sum_{k\geq k_0}\int_0^1 h\frac d{dx}\left[\int_0^x\sum_{p\in\cP_{k_0}}\Id_p(\vf\circ f)'\rho_k\right] \leq |\xi \cdot(f')^\alpha|_\infty\|h\|_\alpha. \end{equation} Finally, collecting \eqref{eq:part0}, \eqref{eq:ly-0}, \eqref{eq:ly-1} and \eqref{eq:ly-2}, we have \begin{equation}\label{eq:ly-3} \int_0^1 h \xi(\vf\circ f)'\leq 8\,|\xi \cdot(f')^\alpha|_\infty\|h\|_\alpha+ C_\#|h|_{L^1}, \end{equation} from which the Lemma follows. \end{proof} We conclude with an alternative result, in order to give a taste of the available possibilities mentioned in Remark \ref{rem:infty}. \begin{thm}\label{thm:main2} If the potential is uniformly $\beta$-H\"older on the elements of the partition (see Remark \ref{rem:infty}), then we do not need to impose any condition on the integrability of $f'$, and Theorem \ref{thm:main} holds under the single condition $1>\alpha>\max\{\gamma,1-\beta\}$. \end{thm} \begin{proof} We just need to prove Lemma \ref{LY} under the new condition. All the previous arguments remain valid; the only difference is that now we do not need to introduce $\tilde\xi_\ve$, since $\xi$ is bounded away from zero on the elements of the partition and hence the derivative cannot explode. The approximation scheme yields \[ |\rho_k(x)|\leq C_\# 2^{-\beta k}|\xi(x)| \] rather than $|\rho_k|_\infty\leq C_\# 2^{-\beta k}$ as before. 
Accordingly, we can easily conclude the proof of Lemma \ref{LY} as follows: \[ \left|\int_x^y\ell_k' \rho_k\right|\leq C_\# \sum_{p\in\cP^{\text{long}}_{k+1}}\int_{p\cap[x,y]}2^{-\beta k}\frac{\left(\int_p |f'\xi|\right)^\alpha}{|p|}\leq C_\# 2^{-(\alpha+\beta-1)k}|x-y|. \] \end{proof} \end{document}
\begin{document} \title{True Online Emphatic TD($\lambda$): \\ Quick Reference and Implementation Guide} \maketitle This document is a guide to the implementation of \emph{true online emphatic TD($\lambda$)}, a model-free temporal-difference algorithm for learning to make long-term predictions which combines the emphasis idea (Sutton, Mahmood \& White 2015) and the true-online idea (van Seijen \& Sutton 2014). The setting used here includes linear function approximation, the possibility of off-policy training, and all the generality of general value functions (Maei \& Sutton 2010), as well as the emphasis algorithm's notion of ``interest". Conventional TD($\lambda$) is of course the core model-free algorithm for learning value functions in reinforcement learning (Sutton 1988, Sutton \& Barto 1998). The emphasis idea is to dynamically rescale the updates made by temporal-difference algorithms such that convergence is ensured under off-policy training (Yu 2015) and such that the asymptotic accuracy of the approximation is improved. The true-online idea extends TD($\lambda$) to make it more data efficient and less sensitive to step-size settings, at minimal computational expense (van Seijen, Mahmood, Pilarski \& Sutton 2015). The way that these ideas have been combined to produce true online emphatic TD($\lambda$) was modelled after how van Hasselt, Mahmood, and Sutton (2014) combined the true-online idea and the gradient-TD idea (Maei 2011, Sutton et al.~2009) to produce true online GTD($\lambda$). \section{Setting and requirements} We consider the setting of general value functions, or GVFs (Maei \& Sutton 2010, Sutton et al.~2011, White 2015, Sutton, Mahmood \& White 2015). Here we present these ideas without assuming access to an underlying state (as in Modayil, White \& Sutton 2014). The algorithm is meant to be called at regular intervals with data from a time series, from which it learns to make a prediction. 
The time series includes a feature vector $\bm\phi_t\in\Re^n$ and a cumulant signal $R_t\in\Re$: \[ \bm\phi_0, R_1, \bm\phi_1, R_2, \bm\phi_2, R_3, \bm\phi_3, \ldots \] The prediction at each time is linear in the feature vector. That is, the prediction at time $t\ge0$ is of the form \[ \bm\phi_t\tr\bm\theta_t = \sum_{i=1}^n \bm\phi_t(i) \bm\theta_t(i), \] where $\bm\theta_t\in\Re^n$ is a learned weight vector at time $t$, and $\bm\phi_t(i)$ and $\bm\theta_t(i)$ are of course the $i$th components of the corresponding vectors. The learning process results in the prediction at each time $t$ coming to approximate the outcome, or target, that would follow it: \[ \bm\phi_t\tr\bm\theta_t \approx \sum_{k=t+1}^\infty \! R_k \prod_{j=t+1}^{k-1} \gamma_j \] if actions were selected according to policy $\pi$, and where $\gamma_t\in[0,1]$ is a sequence of discount factors. We see from this equation why the signal $R_t$ is termed the ``cumulant"; all of its values are added up, or \emph{accumulated}, within the temporal envelope specified by the $\gamma_j$. In the special case in which the cumulant is a reward and the $\gamma_j$ are constant, the GVF reduces to a conventional value function from reinforcement learning. To make the GVF problem well defined, the user must provide $\pi$ and the $\gamma_j$. The policy $\pi$ is not provided directly, but in the form of a sequence of importance sampling ratios \[ \rho_t = \frac{\pi(A_t|S_t)}{\mu(A_t|S_t)}, \] where $S_t$ and $A_t$ are the state and action actually taken at time $t$, and $\pi(A_t|S_t)$ and $\mu(A_t|S_t)$ are the probabilities of $A_t$ in $S_t$ under policies $\pi$ and $\mu$ respectively. The policy $\pi$ is called the \emph{target policy}, because it is the one under which we are trying to predict the outcome, as stated above, and $\mu$ is called the \emph{behavior policy}, because it is the one that actually generates the behavior and the time series. 
Because only the ratio of the two probabilities is required, there is often no need to work directly with states or action probabilities. For example, in the on-policy case the target and behavior policies are the same, and the ratio is always one. The discount factors are often taken to be constant, but are allowed to depend arbitrarily on the time series, as long as $\prod_{j=t+1}^\infty \gamma_j = 0$ for all $t$. In some publications concerning general value functions, a fourth sequence pertaining to the prediction problem---the ``terminal pseudo reward" $Z_t$---is also specified, giving a final signal to be added in with the cumulants at termination. More recently it has been recognized that this functionality can be obtained with just the cumulant $R_t$ by appropriately setting the discount sequence $\gamma_t$ (see Modayil, White \& Sutton 2014). For example, if one wanted a terminal pseudo reward of $Z_t$ only upon termination, then one would use a cumulant of $R_t = (1-\gamma_t)Z_t$. In addition to the time series of feature vectors and cumulant signals, the user must provide three sequences characterizing the nature of the approximation to be found by the algorithm: \begin{itemize} \item $I_t\ge0$; the \emph{interest sequence} specifies the interest in, or importance of, accurately predicting at time $t\ge0$. For example, in episodic problems one may care only about the value of the first state of the episode; this is specified by setting $I_t=1$ for the first state of each episode and $I_t=0$ at all other times. (Or, as suggested by the work of Thomas (2014), one may want to use $I_t=\gamma^t$, where $t$ here is the time since the beginning of the episode.) In a discounted continuing task, on the other hand, one often cares about all the states equally, which is specified by setting $I_t=1$ for all $t$.
In general, if one has any reason to be more concerned with the approximation being accurate at some times than at others, this can be expressed through the interest sequence. \item $\lambda_t\in[0,1]$; the \emph{bootstrapping} sequence specifies the degree of bootstrapping at each time. \item $\alpha_t\ge 0$; the step-size sequence specifies the size of the step at each time. One common choice is a constant step-size parameter, e.g., $\alpha_t = 0.1/\max_t \bm\phi_t\tr\bm\phi_t$. Another common choice is a step-size parameter that decreases to zero slowly over time. More sophisticated step-size adaptation methods could also be used to determine the step-size sequence (e.g., Mahmood et al.~2012, Dabney \& Barto 2012, Riedmiller \& Braun 1993). \end{itemize} \section{Algorithm Specification} Internal to the learning algorithm are the learned weight vector, $\bm\theta_t\in\Re^n$, and an auxiliary shorter-term-memory vector ${\bm e}_t\in{\Re}^n$ with ${\bm e}_t\ge\bm 0$. In addition, there are the scalars $M_t\ge 0$ and $F_t\ge 0$. The emphasis $M_t$ and the TD error $\delta_t$ are purely temporary variables. The true online emphatic TD($\lambda$) algorithm is fully specified by the following equations: \begin{align} \delta_t &= R_{t+1} + \gamma_{t+1}\bm\theta_t\tr\bm\phi_{t+1} - \bm\theta_t\tr\bm\phi_t\\ F_t &= \rho_{t-1}\gamma_t F_{t-1} + I_t, \hspace*{150pt}\text{with~}F_{-1}=0 \\ M_t &= \lambda_t \, I_t + (1-\lambda_t) F_t \\ {\bm e}_t &= \rho_t\gamma_t\lambda_t{\bm e}_{t-1} + \rho_t\alpha_t M_t(1-\rho_t\gamma_t\lambda_t\bm\phi_t\tr{\bm e}_{t-1})\bm\phi_t\text{~~~~~~~~with~} {\bm e}_{-1}=\bm 0\\ \bm\theta_{t+1} &= \bm\theta_t + \delta_t{\bm e}_t + ({\bm e}_t-\alpha_t M_t\rho_t\bm\phi_t)(\bm\theta_t-\bm\theta_{t-1})\tr\bm\phi_t \end{align} \section{Pseudocode} The following pseudocode characterizes the algorithm and its efficient implementation in C++.
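Before the optimized pseudocode, the five equations above can be transcribed literally. The Python sketch below is illustrative only (the class and argument names are ours, not from the accompanying code files); it takes $\gamma_t$ as an explicit argument to stay close to equations (1)--(5), whereas the pseudocode instead remembers $\gamma$ between calls.

```python
# Minimal sketch of equations (1)-(5), written for clarity, not efficiency.

class TrueOnlineEmphaticTD:
    def __init__(self, n):
        self.theta = [0.0] * n        # theta_t (arbitrary init)
        self.theta_prev = [0.0] * n   # theta_{t-1}
        self.e = [0.0] * n            # trace vector, e_{-1} = 0
        self.F_prev = 0.0             # F_{t-1}, with F_{-1} = 0
        self.rho_prev = 0.0           # rho_{t-1}

    def predict(self, phi):
        return sum(th * p for th, p in zip(self.theta, phi))

    def learn(self, alpha, I, lam, phi, rho, gamma, R, phi_next, gamma_next):
        dot = lambda a, b: sum(x * y for x, y in zip(a, b))
        # (1) TD error
        delta = R + gamma_next * dot(self.theta, phi_next) - dot(self.theta, phi)
        # (2) follow-on trace and (3) emphasis
        F = self.rho_prev * gamma * self.F_prev + I
        M = lam * I + (1.0 - lam) * F
        # (4) dutch-style trace update
        scale = rho * alpha * M * (1.0 - rho * gamma * lam * dot(phi, self.e))
        e = [rho * gamma * lam * ei + scale * p for ei, p in zip(self.e, phi)]
        # (5) weight update, with D = (theta_t - theta_{t-1})^T phi_t
        D = dot([a - b for a, b in zip(self.theta, self.theta_prev)], phi)
        theta_new = [th + delta * ei + (ei - alpha * M * rho * p) * D
                     for th, ei, p in zip(self.theta, e, phi)]
        self.theta_prev, self.theta = self.theta, theta_new
        self.e, self.F_prev, self.rho_prev = e, F, rho
```

With $\gamma=\lambda=0$ and $\rho=I=1$ each step reduces to a one-step supervised update, which gives a quick sanity check of the update equations.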
First the {\tt init} function should be called with argument $n$ (the number of components of $\bm\theta$ and $\bm\phi$): \def\tr{^\top\!} \def\u#1{{\underbar{$#1$}}} \noindent\fbox{ \begin{varwidth}{\dimexpr\linewidth-2\fboxsep-2\fboxrule\relax} \begin{tabbing} ~~~\=\kill {\tt init($n$):}\\ \>store $n$ \\ \>${\bm e} \gets \bm 0$ \\ \>$\bm\theta \gets \bm 0$ ~~~~~(or arbitrary)\\ \>$F \gets D \gets \gamma \gets 0$ \end{tabbing} \end{varwidth} } \noindent On each step, $t=0, 1, 2, \ldots$, the {\tt learn} function is called with arguments $\alpha_t, I_{t}, \lambda_{t}, \bm\phi_t, \rho_{t}$, $R_{t+1}, \bm\phi_{t+1}, \gamma_{t+1}$: \noindent\fbox{ \begin{varwidth}{\dimexpr\linewidth-2\fboxsep-2\fboxrule\relax} \begin{tabbing} ~~~\=~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\=\kill {\tt learn($\alpha, I, \lambda, \bm\phi, \rho, R, \bm\phi', \gamma'$):} \>\>; $\alpha$ thru $\rho$ are at $t$, the rest are at $t+1$\\[3pt] \> $\delta \gets R + \gamma'\bm\theta\tr\bm\phi' - \bm\theta\tr\bm\phi$ \> ; or, do all 3 inner products in a single loop\\[3pt] \> $F \gets F + I$ \>; $F$ was $\rho_{t-1}\gamma_t F_{t-1}$; now it is $F_t$ \\[3pt] \> $M \gets \lambda I + (1-\lambda)F$ \\[3pt] \> $S \gets \rho\,\alpha M(1-\rho\gamma\lambda\bm\phi\tr{\bm e})$ \>; scalar $S$ saves computation \\[3pt] \> ${\bm e} \gets \rho\,\gamma\lambda{\bm e} + S\bm\phi$ \> ; this + next 3 lines can be done in a single loop\\[3pt] \> $\Delta \gets \delta{\bm e} + D({\bm e}-\rho\,\alpha M\bm\phi)$ \>; $D$ here is $(\bm\theta_t-\bm\theta_{t-1})\tr\bm\phi_t$\\[3pt] \> $\bm\theta \gets \bm\theta + \Delta$ \\[3pt] \> $D \gets \Delta\tr\bm\phi'$ \\[3pt] \> $F \gets \rho\,\gamma' F$ \\[3pt] \> $\gamma \gets \gamma'$ \end{tabbing} \end{varwidth} } \goodbreak \noindent Finally, to obtain a prediction based
on the learned weights, pass a feature vector to the {\tt predict} function: \noindent\fbox{ \begin{varwidth}{\dimexpr\linewidth-2\fboxsep-2\fboxrule\relax} \begin{tabbing} ~~~\=\kill {\tt predict($\bm\phi$):}\\ \>return $\bm\theta\tr\bm\phi$ \end{tabbing} \end{varwidth} } If the task is episodic in the classical sense, then the terminal state should be represented as a special additional state at which $\gamma=0$, $\bm\phi=\bm 0$, and with outgoing transitions to the distribution of start states. As far as {\tt learn} is concerned, there is still just a single sequence. \section{Code} Implementations that closely follow the pseudocode are provided for various programming languages in separate files. Where we have seen it as convenient and non-obfuscating, the implementations are in an object-oriented style in which one creates an instance of the algorithm that contains all of its internal variables. \section*{References} \parindent=0pt \parskip=6pt \hangindent=0.15in Dabney, W., Barto, A. G. (2012). Adaptive step-size for online temporal difference learning. In \emph{Proceedings of the Conference of the Association for the Advancement of Artificial Intelligence} (AAAI). \hangindent=0.15in Maei, H.~R. (2011). \emph{Gradient Temporal-Difference Learning Algorithms}. PhD thesis, University of Alberta. \hangindent=0.15in Maei, H.~R., Sutton, R.~S. (2010). GQ($\lambda$): A general gradient algorithm for temporal-difference prediction learning with eligibility traces. In \emph{Proceedings of the Third Conference on Artificial General Intelligence}, pp.~91--96. Atlantis Press. \hangindent=0.15in Mahmood, A. R., Sutton, R. S., Degris, T., Pilarski, P. M. (2012). Tuning-free step-size adaptation. In \emph{Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing} (ICASSP), pp.~2121--2124. IEEE Press.
\hangindent=0.15in Modayil, J., White, A., Sutton, R.~S. (2014). Multi-timescale nexting in a reinforcement learning robot. \emph{Adaptive Behavior 22}(2):146--160. \hangindent=0.15in Riedmiller, M., Braun, H. (1993). A direct adaptive method for faster backpropagation learning: The RPROP algorithm. In \emph{Proceedings of the IEEE International Conference on Neural Networks}, pp.~586--591. IEEE Press. \hangindent=0.15in Sutton, R. S. (1988). Learning to predict by the methods of temporal differences. \emph{Machine Learning 3}:9--44. \hangindent=0.15in Sutton, R. S., Barto, A. G. (1998). \emph{Reinforcement Learning: An Introduction}. MIT Press. \hangindent=0.15in Sutton, R.~S., Maei, H.~R., Precup, D., Bhatnagar, S., Silver, D., Szepesv{\'a}ri, {Cs}., Wiewiora, E. (2009). Fast gradient-descent methods for temporal-difference learning with linear function approximation. In \emph{Proceedings of the 26th International Conference on Machine Learning}, pp.~993--1000. ACM. \hangindent=0.15in Sutton, R. S., Mahmood, A. R., White, M. (2015). An emphatic approach to the problem of off-policy temporal-difference learning. ArXiv:1503.04269. \hangindent=0.15in Sutton, R.~S., Modayil, J., Delp, M., Degris, T., Pilarski, P.~M., White, A., Precup, D. (2011). Horde: A scalable real-time architecture for learning knowledge from unsupervised sensorimotor interaction. In \emph{Proceedings of the 10th International Conference on Autonomous Agents and Multiagent Systems}, pp.~761--768. \hangindent=0.15in Thomas, P. (2014). Bias in natural actor--critic algorithms. In \emph{Proceedings of the 31st International Conference on Machine Learning}. JMLR W\&CP 32(1):441--448. \hangindent=0.15in van Hasselt, H., Mahmood, A. R., Sutton, R. S. (2014). Off-policy TD($\lambda$) with a true online equivalence. In \emph{Proceedings of the 30th Conference on Uncertainty in Artificial Intelligence}, Quebec City, Canada.
\hangindent=0.15in van Seijen, H., Sutton, R. S. (2014). True online TD($\lambda$). In \emph{Proceedings of the 31st International Conference on Machine Learning}, Beijing, China. JMLR: W\&CP volume 32. \hangindent=0.15in van Seijen, H., Mahmood, A. R., Pilarski, P. M., Sutton, R. S. (2015). An empirical evaluation of true online TD($\lambda$). In \emph{Proceedings of the 2015 European Workshop on Reinforcement Learning}. \hangindent=0.15in Yu, H. (2015). On convergence of emphatic temporal-difference learning. In \emph{Proceedings of the Conference on Computational Learning Theory}. \hangindent=0.15in White, A. (2015). \emph{Developing a Predictive Approach to Knowledge}. PhD thesis, University of Alberta. \end{document}
\begin{document} \title{\bf Weighted sum formulas of multiple $t$-values with even arguments} \author{ { Zhonghua Li$^{a,}$\thanks{Email: zhonghua\[email protected]}\quad Ce Xu$^{b,c,}$\thanks{Email: [email protected]; [email protected]}}\\[1mm] \small a. School of Mathematical Sciences, Tongji University\\ \small Shanghai 200092, P.R. China\\ \small b. Multiple Zeta Research Center, Kyushu University \\ \small Motooka, Nishi-ku, Fukuoka 819-0389, Japan\\ \small c. School of Mathematical Sciences, Xiamen University\\ \small Xiamen 361005, P.R. China} \date{} \maketitle \noindent{\bf Abstract} In this paper, we study the weighted sums of multiple $t$-values and of multiple $t$-star values at even arguments. Some general weighted sum formulas are given, where the weight coefficients are given by (symmetric) polynomials of the arguments. \\[2mm] \noindent{\bf Keywords}: Multiple $t$-values, Multiple $t$-star values, Multiple zeta values, Multiple zeta-star values, Bernoulli numbers, Weighted sum formulas. \noindent{\bf AMS Subject Classifications (2010):} 11M32, 11B68. \section{Introduction} We begin with some basic notation. A finite sequence ${\bf k} = (k_1,\ldots, k_n)$ of positive integers is called an index. We put \[|{\bf k}|:=k_1+\cdots+k_n,\quad d({\bf k}):=n,\] and call them the weight and the depth of ${\bf k}$, respectively. If $k_1>1$, then ${\bf k}$ is called admissible. Let $I(k,n)$ denote the set of all indices of weight $k$ and depth $n$.
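For concreteness (an illustration only, not part of the paper's argument), the index sets $I(k,n)$ are exactly the compositions of $k$ into $n$ positive parts, and can be enumerated as follows; the function name is ours.

```python
# Sketch: enumerate I(k, n), the indices (k_1, ..., k_n) of positive
# integers with k_1 + ... + k_n = k (compositions of k into n parts).

def index_set(k, n):
    if n == 1:
        return [(k,)] if k >= 1 else []
    return [(k1,) + rest
            for k1 in range(1, k - n + 2)   # leave at least n-1 for the rest
            for rest in index_set(k - k1, n - 1)]
```

For instance, $I(4,2)=\{(1,3),(2,2),(3,1)\}$, and $|I(k,n)|=\binom{k-1}{n-1}$.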
For an admissible index ${\bf k}=(k_1,\ldots,k_n)$, the multiple zeta value and the multiple zeta-star value are defined by \begin{align*} \zeta({\bf k})\equiv\zeta(k_1,k_2,\ldots,k_n):=\sum\limits_{m_1>m_2>\cdots>m_n>0 } \frac{1}{m_1^{k_1}m_2^{k_2}\cdots m_n^{k_n}} \end{align*} and \begin{align*} \zeta^\star({\bf k})\equiv\zeta^\star(k_1,k_2,\ldots,k_n):=\sum\limits_{m_1\geqslant m_2\geqslant \cdots\geqslant m_n\geqslant 1 } \frac{1}{m_1^{k_1}m_2^{k_2}\cdots m_n^{k_n}}, \end{align*} respectively. The systematic study of multiple zeta values began in the early 1990s with the works of Hoffman \cite{H1992} and Zagier \cite{DZ1994}. Since then they have attracted a great deal of research over the last three decades (see, for example, the book of Zhao \cite{Z2016}). Let $m,k,n$ be positive integers with $k\geqslant n$, and let $f(x_1,\ldots,x_n)\in \mathbb{Q}[x_1,\ldots,x_n]$ be a symmetric polynomial. Set \[E_f(2m,k,n):=\sum\limits_{(k_1,k_2,\ldots,k_n)\in I(k,n)}f(k_1,k_2,\ldots,k_n)\zeta(2mk_1,2mk_2,\ldots,2mk_n),\] which is a weighted sum of multiple zeta values with even arguments of weight $2mk$ and depth $n$. If $f(x_1,\ldots,x_n)=1$, we denote $E_f(2m,k,n)$ by $E(2m,k,n)$. The evaluations of these weighted sums $E_f(2m,k,n)$ have attracted the attention of many researchers. In \cite{GKZ2006}, Gangl, Kaneko and Zagier proved that $E(2,k,2)=\frac{3}{4} \zeta(2k)$. Later, Nakamura gave a different proof of this result in \cite{N2009}. Shen and Cai \cite{SC2012} studied the sums $E(2,k,3)$ and $E(2,k,4)$, and evaluated them in terms of $\zeta(2k)$ and $\zeta(2)\zeta(2k-2)$. Using different methods, Hoffman \cite{H2017} and Gen${\check{\rm c}}$ev \cite{G2016} gave the explicit formula of $E(2,k,n)$. Furthermore, Gen${\check{\rm c}}$ev \cite{G2016} proposed a conjecture on the weighted sum $E(4,k,n)$.
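The Gangl--Kaneko--Zagier evaluation $E(2,k,2)=\frac{3}{4}\zeta(2k)$ can be checked numerically by truncating the defining series; the sketch below (illustration only; the truncation bound and tolerance are ad hoc choices of ours) does this for $k=3$, i.e.\ $\zeta(2,4)+\zeta(4,2)=\frac34\zeta(6)$.

```python
# Numerical sanity check of zeta(2,4) + zeta(4,2) = (3/4) zeta(6),
# an instance of E(2,k,2) = (3/4) zeta(2k). Truncation bound N is ad hoc.

import math

def mzv2(e1, e2, N=4000):
    """Truncated double zeta value: sum over N >= m1 > m2 >= 1."""
    total = 0.0
    inner = 0.0                      # running sum of 1/m2^e2 for m2 < m1
    for m1 in range(1, N + 1):
        if m1 > 1:
            inner += (m1 - 1) ** (-e2)
        total += inner * m1 ** (-e1)
    return total

zeta6 = math.pi ** 6 / 945           # Euler's evaluation of zeta(6)
lhs = mzv2(2, 4) + mzv2(4, 2)
```

The truncation error is dominated by the tail of $\zeta(2,4)$, of order $\zeta(4)/N$, so agreement to about $10^{-3}$ is expected at $N=4000$.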
More general sums $E(2m,k,n)$ with $m\geqslant 2$ have been considered by Komori, Matsumoto and Tsumura \cite{KMT2014}, and the explicit evaluation formula of $E(2m,k,n)$ was obtained recently in \cite{ELO2017,ELO2018} and \cite{LQ2016}. Later in \cite{GL2015}, Guo, Lei and Zhao considered the weighted sums $E_f(2,k,2)$ and $E_f(2,k,3)$, and found that they can be evaluated by zeta values at even arguments. Moreover, they conjectured that \begin{align*} E_f(2,k,n)=\sum\limits_{l=0}^T c_{f,l}(k)\zeta(2l)\zeta(2k-2l), \end{align*} where $T={\rm max}\{[(r+n-2)/2],[(n-1)/2]\}$ and $c_{f,l}(x)\in \mathbb{Q}[x]$ depends only on $l$ and $f$, with $\deg c_{f,l}(x)\leqslant \deg_{x_1}f(x_1,\ldots,x_n)$. Here $r=\deg f(x_1,\ldots,x_n)$, and for a real number $\alpha$ we denote by $[\alpha]$ the greatest integer not exceeding $\alpha$. Recently, this conjecture was proved by the first author and Qin in \cite{LQ2019} with the restriction $\deg c_{f,l}(x)\leqslant r+n-2l-1$. A similar weighted sum formula for the multiple zeta-star values with even arguments was obtained simultaneously by the first author and Qin in \cite{LQ2019}. In a recent paper \cite{H2016}, Hoffman introduced and studied an odd variant of the multiple zeta values, defined for an admissible index ${\bf k}=(k_1,k_2,\ldots,k_n)$ as \begin{align*} t({\bf k})\equiv t(k_1,k_2,\ldots,k_n):=\sum\limits_{m_1>m_2>\cdots>m_n>0\atop m_i:\text{odd}} \frac{1}{m_1^{k_1}m_2^{k_2}\cdots m_n^{k_n}}, \end{align*} and called a multiple $t$-value. Similarly, one can define a multiple $t$-star value by \begin{align*} t^\star({\bf k})\equiv t^\star(k_1,k_2,\ldots,k_n):=\sum\limits_{m_1\geqslant m_2\geqslant\cdots\geqslant m_n\geqslant 1\atop m_i:\text{odd}} \frac{1}{m_1^{k_1}m_2^{k_2}\cdots m_n^{k_n}}.
\end{align*} As with multiple zeta values, for any positive integers $m,k,n$ with $k\geqslant n$ and any symmetric polynomial $f(x_1,\ldots,x_n)\in\mathbb{Q}[x_1,\ldots,x_n]$, we define the weighted sums of multiple $t$-values and of multiple $t$-star values by \[T_f(2m,k,n):=\sum\limits_{(k_1,k_2,\ldots,k_n)\in I(k,n)}f(k_1,k_2,\ldots,k_n)t(2mk_1,2mk_2,\ldots,2mk_n)\] and \[T_f^\star(2m,k,n):=\sum\limits_{(k_1,k_2,\ldots,k_n)\in I(k,n)}f(k_1,k_2,\ldots,k_n)t^\star(2mk_1,2mk_2,\ldots,2mk_n).\] If $f(x_1,\ldots,x_n)=1$, we set \[T(2m,k,n)=T_f(2m,k,n),\qquad T^{\star}(2m,k,n)=T^\star_f (2m,k,n).\] There is some existing work on the evaluations of the sums $T(2m,k,n)$. For example, using similar but more complicated ideas than those of \cite{SC2012}, Shen and Cai gave a few sum formulas of $T(2,k,n)$ for $n\leqslant 5$ in \cite{SC2011}. In \cite{Z2015}, Zhao gave two explicit formulas of $T(2,k,n)$. Furthermore, Shen and Jia \cite{SJ2017} gave some explicit evaluation formulas of $T(2m,k,n)$. We remark that, as for multiple zeta values, one can obtain the evaluation formulas of $T(2m,k,n)$ and $T^{\star}(2m,k,n)$ algebraically. In fact, using \cite[Theorem 2.3]{H2016} and \cite[Proposition 3.26]{LQ2016}, one can express $T(2m,k,n)$ and $T^{\star}(2m,k,n)$ in terms of $t(2m,\ldots,2m)$ and $t^{\star}(2m,\ldots,2m)$. Then one gets evaluation formulas of $T(2m,k,n)$ and $T^{\star}(2m,k,n)$ from those of the multiple $t$- and $t$-star values with all arguments equal to the same even number $2m$. In this paper, using a method similar to that of \cite{LQ2019}, we study the weighted sums $T_f(2,k,n)$ and $T_f^\star(2,k,n)$. Our main result is the following theorem. \begin{thm}\label{thm2} Let $n,k$ be positive integers with $k\geqslant n$, and let $f(x_1,\ldots,x_n)\in \mathbb{Q}[x_1,\ldots,x_n]$ be a symmetric polynomial of degree $r$.
Then we have \begin{align}\label{1.2} T_f(2,k,n)=\sum\limits_{l=0}^{{\rm min}\{T,k\}} c_{f,l}(k)\zeta(2l)t(2k-2l) \end{align} and \begin{align}\label{1.3} T_f^\star(2,k,n)=\sum\limits_{l=0}^{{\rm min}\{T,k\}} c_{f,l}^\star(k)\zeta(2l)t(2k-2l), \end{align} where $T={\rm max}\{[(r+n-2)/2],[(n-1)/2]\}$, and $c_{f,l}(x),c_{f,l}^\star(x)\in \mathbb{Q}[x]$ depend only on $l$ and $f$, with $\deg c_{f,l}(x), \deg c_{f,l}^\star(x)\leqslant r+n-2l-1$. \end{thm} To prove Theorem \ref{thm2}, we use the symmetric sum formulas of multiple $t$-values and of multiple $t$-star values \cite[Theorems 2.5 and 2.8]{H2016}. We then find that it is sufficient to study the weighted sums of products of $t$-values at even integers, and prove the following theorem. \begin{thm}\label{thm1} Let $n,k$ be positive integers with $k\geqslant n$, and let $f(x_1,\ldots,x_n)\in\mathbb{Q}[x_1,\ldots,x_n]$ be a polynomial of degree $r$. Then we have \begin{align}\label{1.1} \sum\limits_{k_1+\cdots+k_n=k\atop k_j\geqslant 1}f(k_1,\ldots,k_n)t(2k_1)\cdots t(2k_n)=\sum\limits_{l=0}^{\min\{T,k\}}e_{f,l}(k)\zeta(2l)t(2k-2l), \end{align} where $T=\max\{[(r+n-2)/2],[(n-1)/2]\}$, and $e_{f,l}(x)\in\mathbb{Q}[x]$ depends only on $l$ and $f$, with $\deg e_{f,l}(x)\leqslant r+n-2l-1$. \end{thm} Note that the polynomial $f(x_1,\ldots,x_n)$ in Theorem \ref{thm1} is not necessarily symmetric. To prove Theorem \ref{thm1}, we use Euler's formula \cite[Eq.\ (1.4)]{H2016}, which expresses $t(2k)$ in terms of the Bernoulli numbers. It is then enough to treat the weighted sums of products of the Bernoulli numbers, and we do this by using the generating function of the Bernoulli numbers. We give the proofs of Theorem \ref{thm1} and Theorem \ref{thm2} in Section \ref{Sec:Proof}.
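As an illustration of the single-variable input to Theorem \ref{thm1} (a sketch of ours; rather than quoting Eq.\ (1.4) of \cite{H2016} verbatim, it uses the standard identities $t(2k)=(1-2^{-2k})\zeta(2k)$ and Euler's evaluation of $\zeta(2k)$ via Bernoulli numbers), $t(2k)$ can be computed exactly from $B_{2k}$:

```python
# Illustration only: t(2k) = (1 - 2^{-2k}) zeta(2k), together with
#   zeta(2k) = (-1)^{k+1} B_{2k} (2 pi)^{2k} / (2 (2k)!),
# gives t(2k) in terms of the Bernoulli number B_{2k}.

import math
from fractions import Fraction

def bernoulli(N):
    """B_0, ..., B_N via the recurrence sum_{j=0}^{m} binom(m+1, j) B_j = 0."""
    B = [Fraction(1)]
    for m in range(1, N + 1):
        s = sum(Fraction(math.comb(m + 1, j)) * B[j] for j in range(m))
        B.append(-s / (m + 1))
    return B

def t_even(k, B):
    """t(2k) computed from B_{2k}; returns a float."""
    zeta_2k = (-1) ** (k + 1) * float(B[2 * k]) * (2 * math.pi) ** (2 * k) \
              / (2 * math.factorial(2 * k))
    return (1 - 2.0 ** (-2 * k)) * zeta_2k
```

For example, $t(2)=\pi^2/8$, which agrees with the defining series over the odd integers.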
Although the weighted sum formulas \eqref{1.1}--\eqref{1.3} are not so concrete, one can get the explicit formulas for given positive integers $n,k$ and a given polynomial $f$ by following the procedure of our proof. We list some weighted sum formulas as examples in Appendix \ref{Sec:Example}. \section{Proofs}\label{Sec:Proof} \subsection{Preliminary knowledge} We begin with the definition of the Bernoulli numbers. The generating function of the Bernoulli numbers $\{B_i\}$ is $$\sum\limits_{i=0}^{\infty} \frac{B_i}{i!}x^i=\frac{x}{e^x-1}.$$ Let $\beta_i:=(2^i-1)B_i$ and \begin{align*} &F(x):=\frac{x}{2}-\frac{x}{e^x+1}. \end{align*} Then since $B_0=1$, $B_1=-\frac{1}{2}$ and $B_i=0$ for odd $i\geqslant 3$, we find that \begin{align*} &F(x)=\sum\limits_{i=1}^\infty\frac{\beta_{2i}}{(2i)!}x^{2i}=\sum\limits_{i=0}^\infty\frac{\beta_{2i}}{(2i)!}x^{2i}. \end{align*} Hence, $F(x)$ is an even function. Let $D:=x\frac{d}{dx}$ and $H(x):=\frac{x}{e^x+1}$. Then we have $$D H(x)=(1-x)H(x)+H(x)^2.$$ Therefore, one can get the following theorem without difficulty. \begin{thm}\label{thm2.1} For any nonnegative integer $m$, \begin{align} D^mF(x)=\sum\limits_{i=0}^{m+1}F_{mi}(x)H(x)^i. \label{Eq:Diff-f} \end{align} Here the $F_{mi}(x)$ are polynomials determined by $F_{00}(x)=\frac{x}{2}$, $F_{01}(x)=-1$ and the recurrence relations \begin{align} \begin{cases} F_{m0}(x)=xF_{m-1,0}'(x) & \text{for\;} m\geqslant 1,\\ F_{m,m+1}(x)=mF_{m-1,m}(x) & \text{for\;} m\geqslant 1,\\ F_{mi}(x)=xF'_{m-1,i}(x)+i(1-x)F_{m-1,i}(x)+(i-1)F_{m-1,i-1}(x) & \text{for\;} 1\leqslant i\leqslant m.
\end{cases} \label{Eq:Recursive-fmi} \end{align} \end{thm} In particular, for any integers $m,i$ with $1\leqslant i\leqslant m+1$, we have $F_{mi}(x)\in\mathbb{Z}[x]$. From \eqref{Eq:Recursive-fmi}, we deduce that for any nonnegative integer $m$, $$F_{m0}(x)=\frac{x}{2},\quad F_{m,m+1}(x)=-m!.$$ In general, we have the following result. \begin{pro} For any integers $m,i$ with $1\leqslant i\leqslant m+1$, we have $\deg F_{mi}(x)=m+1-i$, and the leading coefficient $c_{mi}$ of $F_{mi}(x)$ satisfies $(-1)^{m+i}c_{mi}>0$. \end{pro} \noindent \it{Proof.}\rm\quad We prove this result by induction on $m$. The case $m=0$ follows from $F_{01}(x)=-1$. Assume that $m\geqslant 1$. The result for $i=m+1$ follows from $F_{m,m+1}(x)=-m!$. Now assume $1\leqslant i\leqslant m$, and $$F_{m-1,i}(x)=c_{m-1,i}x^{m-i}+\text{lower degree terms}$$ with $(-1)^{m-1+i}c_{m-1,i}>0$. Let $c_{m0}=\frac{1}{2}$. Then we obtain $$F_{mi}(x)=(-ic_{m-1,i}+(i-1)c_{m-1,i-1})x^{m+1-i}+\text{lower degree terms},$$ and \begin{align*} &(-1)^{m+i}(-ic_{m-1,i}+(i-1)c_{m-1,i-1})\\ =&i(-1)^{m-1+i}c_{m-1,i}+(i-1)(-1)^{m-1+i-1}c_{m-1,i-1}>0, \end{align*} from which one can deduce the desired result. $\square$ Now we have $$F_{m0}(x)=c_{m0}x$$ with $c_{m0}=\frac{1}{2}$, and for any integers $m,i$ with $1\leqslant i\leqslant m+1$, we have $$F_{mi}(x)=c_{mi}x^{m+1-i}+\text{lower degree terms},$$ with the recurrence relation $$c_{mi}=-ic_{m-1,i}+(i-1)c_{m-1,i-1},\quad (1\leqslant i\leqslant m)$$ and $c_{m,m+1}=-m!$. In particular, according to the recurrence relation above, we deduce that $c_{m1}=(-1)^{m+1}$ for every nonnegative integer $m$. The following result will be used later.
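The recurrence \eqref{Eq:Recursive-fmi} is easy to run mechanically; the following sketch (an illustrative cross-check of ours, not part of the proof) computes the $F_{mi}(x)$ with exact rational coefficients, starting from $F_{00}(x)=x/2$ and $F_{01}(x)=-1$.

```python
# Compute the polynomials F_{mi}(x) from the recurrence (Eq:Recursive-fmi).
# Polynomials are coefficient lists (constant term first), exact rationals.

from fractions import Fraction

def poly_mul_x(p):
    return [Fraction(0)] + p

def poly_deriv(p):
    return [Fraction(i) * c for i, c in enumerate(p)][1:] or [Fraction(0)]

def poly_add(p, q):
    n = max(len(p), len(q))
    p = p + [Fraction(0)] * (n - len(p))
    q = q + [Fraction(0)] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

def poly_scale(c, p):
    return [Fraction(c) * a for a in p]

def poly_eval(p, x):
    return sum(float(c) * x ** j for j, c in enumerate(p))

def F_polys(M):
    """F[m][i] is the coefficient list of F_{mi}(x), for 0 <= i <= m+1."""
    F = [[[Fraction(0), Fraction(1, 2)], [Fraction(-1)]]]  # F_00, F_01
    for m in range(1, M + 1):
        prev = F[-1]
        row = [poly_mul_x(poly_deriv(prev[0]))]            # F_m0 = x F'_{m-1,0}
        for i in range(1, m + 1):                          # middle recurrence
            term = poly_mul_x(poly_deriv(prev[i]))         # x F'_{m-1,i}
            term = poly_add(term, poly_scale(i, poly_add(  # + i(1-x)F_{m-1,i}
                prev[i], poly_scale(-1, poly_mul_x(prev[i])))))
            term = poly_add(term, poly_scale(i - 1, prev[i - 1]))
            row.append(term)
        row.append(poly_scale(m, prev[m]))                 # F_{m,m+1} = m F_{m-1,m}
        F.append(row)
    return F
```

For instance $F_{11}(x)=x-1$ and $F_{23}(x)=-2!$, and the output satisfies both the identity $\sum_{i=1}^{m+1}F_{mi}(x)x^{i-1}=-1$ and \eqref{Eq:Diff-f} evaluated numerically.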
\begin{pro} For any nonnegative integer $m$, we have \begin{align} \sum\limits_{i=1}^{m+1}F_{mi}(x)x^{i-1}=-1, \label{Eq:Sum-fmi} \end{align} and \begin{align} \sum\limits_{i=1}^{m+1}c_{mi}=-\delta_{m,0}, \label{Eq:Sum-cmi} \end{align} where $\delta_{i,j}$ is Kronecker's delta. \end{pro} \noindent \it{Proof.}\rm\quad We prove \eqref{Eq:Sum-fmi} by induction on $m$. The case of $m=0$ follows from the fact that $F_{01}(x)=-1$. Now assume that $m\geqslant 1$. Using the recurrence formula \eqref{Eq:Recursive-fmi}, we arrive at \begin{align*} \sum\limits_{i=1}^{m+1}F_{mi}(x)x^{i-1}=&\sum\limits_{i=1}^mF_{m-1,i}'(x)x^{i}+\sum\limits_{i=1}^{m}iF_{m-1,i}(x)x^{i-1}\\ &-\sum\limits_{i=1}^miF_{m-1,i}(x)x^i+\sum\limits_{i=1}^m(i-1)F_{m-1,i-1}(x)x^{i-1}-m!x^m\\ =&\sum\limits_{i=1}^m(F_{m-1,i}(x)x^{i})'-mF_{m-1,m}(x)x^m-m!x^m\\ =&\frac{d}{dx}\sum\limits_{i=1}^mF_{m-1,i}(x)x^{i}. \end{align*} Then we get \eqref{Eq:Sum-fmi} from the inductive hypothesis. Comparing the coefficients of $x^m$ on both sides of \eqref{Eq:Sum-fmi}, we obtain the desired result \eqref{Eq:Sum-cmi}. $\square$ Now we use matrix computations to express $H(x)^i$ in terms of the $D^m F(x)$. First, for any nonnegative integer $m$, we define an $(m+1)\times (m+1)$ matrix $A_m(x)$ by $$A_m(x)=\begin{pmatrix} F_{01}(x) & &&\\ F_{11}(x) & F_{12}(x) &&\\ \vdots & \vdots & \ddots &\\ F_{m1}(x) & F_{m2}(x) & \cdots & F_{m,m+1}(x) \end{pmatrix}.$$ It is clear that for $m\geqslant 1$, we have $$A_m(x)=\begin{pmatrix} A_{m-1}(x) & 0\\ \alpha_m(x) & -m! \end{pmatrix}$$ with $\alpha_m(x)=(F_{m1}(x),\ldots,F_{mm}(x))$.
Hence, the identity \eqref{Eq:Diff-f} can be rewritten as \begin{align} \begin{pmatrix} F(x)\\ DF(x)\\ \vdots\\ D^mF(x) \end{pmatrix}-\frac{1}{2}x\begin{pmatrix} 1\\ 1\\ \vdots\\ 1 \end{pmatrix}=A_m(x)\begin{pmatrix} H(x)\\ H(x)^2\\ \vdots\\ H(x)^{m+1} \end{pmatrix}, \label{Eq:G-h-Matrix} \end{align} and the identity \eqref{Eq:Sum-fmi} can be rewritten as $$A_{m}(x)\begin{pmatrix} 1\\ x\\ x^2\\ \vdots\\ x^m \end{pmatrix}=-\begin{pmatrix} 1\\ 1\\ \vdots\\ 1 \end{pmatrix}.$$ Since the matrix $A_m(x)$ is invertible, we find \begin{align} \begin{pmatrix} H(x)\\ H(x)^2\\ \vdots\\ H(x)^{m+1} \end{pmatrix}=A_m(x)^{-1}\begin{pmatrix} F(x)\\ DF(x)\\ \vdots\\ D^mF(x) \end{pmatrix}+\frac{1}{2}x\begin{pmatrix} 1\\ x\\ x^2\\ \vdots\\ x^m \end{pmatrix}.\label{2.6} \end{align} Therefore, we need a description of $A_m(x)^{-1}$. From linear algebra, we know that the matrix $\begin{pmatrix} A & 0\\ C & B \end{pmatrix}$ is invertible with $$\begin{pmatrix} A & 0\\ C & B \end{pmatrix}^{-1}=\begin{pmatrix} A^{-1} & 0\\ -B^{-1}CA^{-1} & B^{-1} \end{pmatrix},$$ provided that $A$ and $B$ are invertible square matrices. Hence, by induction on $m$, we find that the inverses $A_m(x)^{-1}$ satisfy the recursive formula \begin{align} A_m(x)^{-1}=\begin{pmatrix} A_{m-1}(x)^{-1} & 0\\ \frac{1}{m!}\alpha_m(x)A_{m-1}(x)^{-1} & \frac{-1}{m!} \end{pmatrix},\quad (m\geqslant 1). \label{Eq:Recursive-AmInverse} \end{align} For any nonnegative integer $m$, set $$A_m(x)^{-1}=\begin{pmatrix} G_{01}(x) & &&\\ G_{11}(x) & G_{12}(x) &&\\ \vdots & \vdots & \ddots &\\ G_{m1}(x) & G_{m2}(x) & \cdots & G_{m,m+1}(x) \end{pmatrix}.$$ Then from \eqref{2.6}, for any positive integer $i$, we get \begin{align} H(x)^i=\sum\limits_{j=1}^iG_{i-1,j}(x)D^{j-1}F(x)+\frac{1}{2}x^i.
\label{Eq:h-i} \end{align} Moreover, it is easy to prove that for any nonnegative integer $m$, the functions $$1,F(x),D F(x),\ldots,D^m F(x)$$ are linearly independent over the rational function field $\mathbb{Q}(x)$. For a proof one can refer to \cite[Lemma 2.6]{LQ2019}. Next, we give some properties of the polynomials $G_{ij}(x)$. \begin{pro} Let $m$ and $i$ be integers. \begin{itemize} \item [(1)] For any $m\geqslant 0$, we have $G_{m,m+1}(x)=-\frac{1}{m!}$; \item [(2)] for $1\leqslant i\leqslant m$, we have the recursive formula \begin{align} G_{mi}(x)=\frac{1}{m!}\sum\limits_{j=i}^{m}F_{mj}(x)G_{j-1,i}(x); \label{Eq:Recursive-gmi} \end{align} \item [(3)] for $1\leqslant i\leqslant m+1$, we have $G_{mi}(x)\in\mathbb{Q}[x]$ with $\deg G_{mi}(x)\leqslant m+1-i$; \item [(4)] for $1\leqslant i\leqslant m+1$, set $$G_{mi}(x)=d_{mi}x^{m+1-i}+\text{lower degree terms}.$$ Then we have $d_{m,m+1}=-\frac{1}{m!}$ and \begin{align} d_{mi}=\frac{1}{m!}\sum\limits_{j=i}^{m}c_{mj}d_{j-1,i} \label{Eq:Recursive-dmi} \end{align} for $1\leqslant i\leqslant m$. \end{itemize} \end{pro} \noindent \it{Proof.}\rm\quad The assertions in items (1) and (2) follow from \eqref{Eq:Recursive-AmInverse}. To prove item (3), we proceed by induction on $m$. For the case of $m=0$, we get the result from $G_{01}(x)=-1$. Assume that $m\geqslant 1$; then $G_{m,m+1}(x)=-\frac{1}{m!}\in\mathbb{Q}[x]$ with degree zero.
For $1\leqslant i\leqslant j\leqslant m$, by the induction assumption, we may set $$G_{j-1,i}(x)=d_{j-1,i}x^{j-i}+\text{lower degree terms}\in\mathbb{Q}[x].$$ Since $$F_{mj}(x)=c_{mj}x^{m+1-j}+\text{lower degree terms}\in\mathbb{Z}[x],$$ we get $$F_{mj}(x)G_{j-1,i}(x)=c_{mj}d_{j-1,i}x^{m+1-i}+\text{lower degree terms}\in\mathbb{Q}[x].$$ Using \eqref{Eq:Recursive-gmi}, we finally get $$G_{mi}(x)=\left(\frac{1}{m!}\sum\limits_{j=i}^{m}c_{mj}d_{j-1,i}\right)x^{m+1-i}+\text{lower degree terms}\in\mathbb{Q}[x].$$ Item (4) follows from the above proof. $\square$ \begin{cor} For any nonnegative integer $m$, we have $d_{m1}=-1$. \end{cor} \noindent \it{Proof.}\rm\quad We use induction on $m$; the case $m=0$ holds since $d_{01}=-1$. If $m\geqslant 1$, using \eqref{Eq:Recursive-dmi} and the induction assumption, we get $$d_{m1}=-\frac{1}{m!}\sum\limits_{j=1}^mc_{mj}.$$ By \eqref{Eq:Sum-cmi}, we have $$d_{m1}=-\frac{1}{m!}(-\delta_{m,0}-c_{m,m+1}),$$ which implies the result. $\square$ \subsection{A weighted sum formula of the Bernoulli numbers} Let $n$ be a fixed positive integer, and let $m_1,\ldots,m_n$ be fixed nonnegative integers. Set $|{\mathbf{m}}|_n:=m_1+m_2+\cdots+m_n+n$. We now evaluate $D^{m_1}F(x)\cdots D^{m_n}F(x)$. First, using the fact that $$D^mF(x)=\sum\limits_{i=1}^{\infty}(2i)^m\frac{\beta_{2i}}{(2i)!}x^{2i}=\sum\limits_{i=0}^{\infty}(2i)^m\frac{\beta_{2i}}{(2i)!}x^{2i},$$ we get \begin{align*} D^{m_1}F(x)\cdots D^{m_n}F(x)=\sum\limits_{k=n}^\infty \left( \sum\limits_{(k_1,\ldots,k_n)\in I(k,n)} (2k_1)^{m_1}\cdots (2k_n)^{m_n} \frac{\beta_{2k_1}\cdots \beta_{2k_n}}{(2k_1)!\cdots (2k_n)!}\right)x^{2k}.
\end{align*} Therefore, for any positive integer $k$ with $k\geqslant n$, the coefficient of $x^{2k}$ in $D^{m_1}F(x)\cdots D^{m_n}F(x)$ is \begin{align} \sum\limits_{(k_1,\ldots,k_n)\in I(k,n)}(2k_1)^{m_1}\cdots(2k_n)^{m_n}\frac{\beta_{2k_1}\cdots \beta_{2k_n}}{(2k_1)!\cdots(2k_n)!}. \label{Eq:Coeff-Left} \end{align} Next, using \eqref{Eq:Diff-f}, we have $$D^{m_1}F(x)\cdots D^{m_n}F(x)=\sum\limits_{i=0}^{|\mathbf{m}|_n}F_i(x)H(x)^i,$$ with $$F_i(x)=\sum\limits_{i_1+\cdots+i_n=i\atop 0\leqslant i_j\leqslant m_j+1}F_{m_1i_1}(x)\cdots F_{m_ni_n}(x).$$ \begin{pro} We have $$F_0(x)=\left(\frac{x}{2}\right)^n,$$ and $\deg F_i(x)\leqslant |\mathbf{m}|_n-i$ for any nonnegative integer $i$. \end{pro} \noindent \it{Proof.}\rm\quad For integers $i_1,\ldots,i_n$ with $i_1+\cdots+i_n=i$ and $0\leqslant i_j\leqslant m_j+1$, we have $$\deg(F_{m_1i_1}(x)\cdots F_{m_ni_n}(x))\leqslant \sum\limits_{j=1}^n(m_j+1-i_j)=|\mathbf{m}|_n-i,$$ which implies that $\deg F_i(x)\leqslant|\mathbf{m}|_n-i$. $\square$ Then using \eqref{Eq:h-i}, we get \begin{align*} &D^{m_1}F(x)\cdots D^{m_n}F(x)\\ =&\sum\limits_{i=1}^{|\mathbf{m}|_n}F_i(x)\left(\sum\limits_{j=1}^iG_{i-1,j}(x)D^{j-1}F(x)+\frac{1}{2}x^i\right)+F_0(x)\\ =&\sum\limits_{j=1}^{|\mathbf{m}|_n}R_j(x)D^{j-1}F(x)+R_0(x) \end{align*} with $$R_0(x):=F_0(x)+\frac{1}{2}\sum\limits_{i=1}^{|\mathbf{m}|_n}F_i(x)x^i$$ and $$R_j(x):=\sum\limits_{i=j}^{|\mathbf{m}|_n}F_i(x)G_{i-1,j}(x),\quad (1\leqslant j\leqslant |\mathbf{m}|_n).$$ \begin{pro}\label{pro7} Let $j$ be a nonnegative integer with $j\leqslant |\mathbf{m}|_n$.
Then
\begin{itemize}
\item [(1)] the function $R_j(x)$ is even;
\item [(2)] we have
$$R_0(x)=\frac{1}{2^{n+1}}\left(x^n+(-x)^n \right).$$
In particular, $\deg R_0(x)\leqslant n$;
\item [(3)] for $j>0$, we have $\deg R_j(x)\leqslant |\mathbf{m}|_n-j$. Moreover, we have $\deg R_1(x)\leqslant |\mathbf{m}|_n-2$ provided that $n$ is even or $m_1,\ldots,m_n$ are not all zero.
\end{itemize}
\end{pro}
\noindent \it{Proof.}\rm\quad Since $D^{m}F(x)$ is even, we have
$$\sum\limits_{j=1}^{|\mathbf{m}|_n}R_j(x)D^{j-1}F(x)+R_0(x)=\sum\limits_{j=1}^{|\mathbf{m}|_n}R_j(-x)D^{j-1}F(x)+R_0(-x).$$
Using the fact that the functions $1,F(x),D F(x),\ldots,D^{|\mathbf{m}|_n-1} F(x)$ are linearly independent over the rational function field $\mathbb{Q}(x)$, we conclude that all the $R_j(x)$ are even functions.
By the definition of $F_i(x)$, we have
$$\sum\limits_{i=0}^{|\mathbf{m}|_n}F_i(x)x^i=\prod\limits_{j=1}^n\sum\limits_{i_j=0}^{m_j+1}F_{m_ji_j}(x)x^{i_j}.$$
Using \eqref{Eq:Sum-fmi}, we find
$$\sum\limits_{i=0}^{|\mathbf{m}|_n}F_i(x)x^i=\prod\limits_{j=1}^n(F_{m_j0}(x)-x).$$
Then we get (2) from the fact that $F_{m0}(x)=\frac{1}{2}x$ and the expression of $F_0(x)$.
Since
$$\deg F_i(x)G_{i-1,j}(x)\leqslant (|\mathbf{m}|_n-i)+(i-j)=|\mathbf{m}|_n-j,$$
we get $\deg R_j(x)\leqslant |\mathbf{m}|_n-j$.
If we set
$$\widetilde{c}_{mi}=\begin{cases}
\frac{1}{2}\delta_{m,0} & \text{if\;} i=0,\\
c_{mi} & \text{if\;} i\neq 0,
\end{cases}$$
then the coefficient of $x^{m+1-i}$ in $F_{mi}(x)$ is $\widetilde{c}_{mi}$ for any integers $m,i$ with the condition $0\leqslant i\leqslant m+1$.
Since
$$R_1(x)=\sum\limits_{i=1}^{|\mathbf{m}|_n}\sum\limits_{i_1+\cdots+i_n=i\atop 0\leqslant i_j\leqslant m_j+1}F_{m_1i_1}(x)\cdots F_{m_ni_n}(x)G_{i-1,1}(x)$$
and $d_{i-1,1}=-1$, we find that the coefficient of $x^{|\mathbf{m}|_n-1}$ in $R_1(x)$ is
\begin{align*}
-\sum\limits_{i=1}^{|\mathbf{m}|_n}\sum\limits_{i_1+\cdots+i_n=i\atop 0\leqslant i_j\leqslant m_j+1}\widetilde{c}_{m_1i_1}\cdots \widetilde{c}_{m_ni_n}
=\widetilde{c}_{m_10}\cdots \widetilde{c}_{m_n0}-\prod\limits_{j=1}^n\sum\limits_{i_j=0}^{m_j+1}\widetilde{c}_{m_ji_j},
\end{align*}
which, by \eqref{Eq:Sum-cmi}, equals
$$\widetilde{c}_{m_10}\cdots \widetilde{c}_{m_n0}-\prod\limits_{j=1}^n(\widetilde{c}_{m_j0}-\delta_{m_j,0}).$$
Hence the coefficient of $x^{|\mathbf{m}|_n-1}$ in $R_1(x)$ is
$$\left(\frac{1}{2}\right)^n(1-(-1)^n)\delta_{m_1,0}\cdots\delta_{m_n,0},$$
which is zero if $n$ is even or at least one $m_i$ is not zero. $\square$

Let $a_{jl}\in \mathbb{Q}$ be the coefficient of $x^{2l}$ in the even polynomial $R_j(x)$; then we have
\begin{align}
R_j(x)=\sum\limits_{l\geqslant 0}a_{jl}x^{2l}=\sum\limits_{l=0}^{\left[(|\mathbf{m}|_n-j)/{2}\right]}a_{jl}x^{2l}.
\label{Eq:Fj}
\end{align}
Moreover, from Proposition \ref{pro7}, it is clear that if $n$ is even or $m_1,\ldots,m_n$ are not all zero, then
$$R_1(x)=\sum\limits_{l=0}^{\left[(|\mathbf{m}|_n-2)/{2}\right]}a_{1l}x^{2l}.$$
Hence we have
$$D^{m_1}F(x)\cdots D^{m_n}F(x)=\sum\limits_{j=1}^{|\mathbf{m}|_n}\sum\limits_{l=0}^{\left[(|\mathbf{m}|_n-j)/{2}\right]}a_{jl}x^{2l}D^{j-1}F(x)+R_0(x).$$
Changing the order of summation yields
$$D^{m_1}F(x)\cdots D^{m_n}F(x)=\sum\limits_{l=0}^{T}\sum\limits_{j=1}^{|\mathbf{m}|_n-2l}a_{jl}x^{2l}D^{j-1}F(x)+R_0(x),$$
where
$$T=\begin{cases}
\left[\frac{n-1}{2}\right] & \text{if\;} m_1=\cdots=m_n=0,\\
&\\
\left[\frac{|\mathbf{m}|_n-2}{2}\right] & \text{otherwise}.
\end{cases}$$
Since
$$D^{j-1}F(x)=\sum\limits_{i=0}^\infty(2i)^{j-1}\frac{\beta_{2i}}{(2i)!}x^{2i},$$
we get
\begin{align*}
&D^{m_1}F(x)\cdots D^{m_n}F(x)\\
=&\sum\limits_{k=0}^\infty\sum\limits_{l=0}^{\min\{T,k\}}\left(\sum\limits_{j=1}^{|\mathbf{m}|_n-2l}a_{jl}(2k-2l)^{j-1}\right)\frac{\beta_{2k-2l}}{(2k-2l)!}x^{2k}+R_0(x).
\end{align*}
Then the coefficient of $x^{2k}$ in $D^{m_1}F(x)\cdots D^{m_n}F(x)$ is
\begin{align}
\sum\limits_{l=0}^{\min\{T,k\}}\left(\sum\limits_{j=1}^{|\mathbf{m}|_n-2l}2^{j-1}a_{jl}(k-l)^{j-1}\right)\frac{\beta_{2k-2l}}{(2k-2l)!},
\label{Eq:Coeff-Right}
\end{align}
provided that $k\geqslant n$. Finally, comparing \eqref{Eq:Coeff-Right} with \eqref{Eq:Coeff-Left}, we get a weighted sum formula of the Bernoulli numbers.
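The simplest instance of the resulting formula, $n=2$ and $m_1=m_2=0$, reads $\sum_{k_1+k_2=k,\,k_j\geqslant 1}\frac{\beta_{2k_1}\beta_{2k_2}}{(2k_1)!(2k_2)!}=-(2k-1)\frac{\beta_{2k}}{(2k)!}$ (see the appendix). A minimal sketch, not part of the proof, checking this case with exact rational arithmetic (the bound $K$ is an arbitrary choice):

```python
# Sanity check (not part of the proof): verify the n = 2, m_1 = m_2 = 0 case
# of the weighted sum formula for Bernoulli numbers with exact rationals.
from fractions import Fraction
from math import comb, factorial

def bernoulli(N):
    """Bernoulli numbers B_0, ..., B_N as exact fractions (B_1 = -1/2)."""
    B = [Fraction(0)] * (N + 1)
    B[0] = Fraction(1)
    for m in range(1, N + 1):
        B[m] = -sum(Fraction(comb(m + 1, j)) * B[j] for j in range(m)) / (m + 1)
    return B

K = 10                      # arbitrary upper bound for the check
B = bernoulli(2 * K)
beta = [(2**i - 1) * B[i] for i in range(2 * K + 1)]   # beta_i = (2^i - 1) B_i

for k in range(2, K + 1):
    lhs = sum(beta[2 * k1] * beta[2 * (k - k1)]
              / (factorial(2 * k1) * factorial(2 * (k - k1)))
              for k1 in range(1, k))
    rhs = -(2 * k - 1) * beta[2 * k] / factorial(2 * k)
    assert lhs == rhs, (k, lhs, rhs)
```
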
\begin{thm}\label{Thm:WeightedSum-Bernoulli}
Let $n,k$ be positive integers with $k\geqslant n$. Then for any nonnegative integers $m_1,\ldots,m_n$, we have
\begin{align}
&\sum\limits_{k_1+\cdots+k_n=k\atop k_j\geqslant 1}k_1^{m_1}\cdots k_n^{m_n}\frac{\beta_{2k_1}\cdots \beta_{2k_n}}{(2k_1)!\cdots(2k_n)!}\nonumber\\
=&\sum\limits_{l=0}^{\min\{T,k\}}\left(\sum\limits_{j=1}^{|\mathbf{m}|_n-2l}\frac{a_{jl}(k-l)^{j-1}}{2^{m_1+\cdots+m_n-j+1}}\right)\frac{\beta_{2k-2l}}{(2k-2l)!},
\label{Eq:WeightedSum-Bernoulli}
\end{align}
where $T=\max\{[(|\mathbf{m}|_n-2)/2],[(n-1)/2]\}$ and the $a_{jl}$ are determined by \eqref{Eq:Fj}.
\end{thm}

\subsection{Proof of Theorem \ref{thm1}}

Now we prove the weighted sum formula \eqref{1.1} of $t$-values at even arguments. Using Euler's formula for $\zeta(2k)$, we have
\begin{align}
t(2k)=(-1)^{k+1}\frac{\beta_{2k}}{2(2k)!}\pi^{2k}.
\label{Eq:Euler-Formula}
\end{align}
Then from Theorem \ref{Thm:WeightedSum-Bernoulli}, we get the following weighted sum formula for $t$-values at even arguments.
\begin{thm}\label{Thm:WeightedSum-Zeta}
Let $n,k$ be positive integers with $k\geqslant n$. Then for any nonnegative integers $m_1,\ldots,m_n$, we have
\begin{align}
&\sum\limits_{k_1+\cdots+k_n=k\atop k_j\geqslant 1}k_1^{m_1}\cdots k_n^{m_n}t(2k_1)\cdots t(2k_n)\nonumber\\
&=(-1)^n\sum\limits_{l=0}^{\min\{T,k\}}\frac{(2l)!}{B_{2l}}\left(\sum\limits_{j=1}^{|\mathbf{m}|_n-2l}\frac{a_{jl}(k-l)^{j-1}}{2^{|\mathbf{m}|_n+2l-j-1}}\right)\zeta(2l)t(2k-2l),
\label{Eq:WeightedSum-Zeta}
\end{align}
where $\zeta(0)=-1/2$ and $t(0)=0$, $T=\max\{[(|\mathbf{m}|_n-2)/2],[(n-1)/2]\}$ and the $a_{jl}$ are determined by \eqref{Eq:Fj}.
\end{thm}
Finally, Theorem \ref{thm1} follows from Theorem \ref{Thm:WeightedSum-Zeta}. $\square$

\subsection{Proof of Theorem \ref{thm2}}

Next, we use the symmetric sum formulas of Hoffman \cite[Theorems 2.5 and 2.8]{H2016} to prove Theorem \ref{thm2}. For a partition $\Pi=\{P_1,P_2,\ldots,P_i\}$ of the set $\{1,2,\ldots,n\}$, let $l_j=\sharp P_j$ and
$$c(\Pi)=\prod\limits_{j=1}^i (l_j-1)!,\quad \tilde{c}(\Pi)=(-1)^{n-i}c(\Pi).$$
We also denote by $\mathcal{P}_n$ the set of all partitions of the set $\{1,2,\ldots,n\}$. Then the symmetric sum formulas for multiple $t$-values are
\begin{align}
\sum\limits_{\sigma\in S_n}t(k_{\sigma(1)},\ldots,k_{\sigma(n)})=\sum\limits_{\Pi\in\mathcal{P}_n}\tilde{c}(\Pi)t(\mathbf{k},\Pi)
\label{Eq:SymSum-MZV}
\end{align}
and
\begin{align}
\sum\limits_{\sigma\in S_n}t^{\star}(k_{\sigma(1)},\ldots,k_{\sigma(n)})=\sum\limits_{\Pi\in\mathcal{P}_n}c(\Pi)t(\mathbf{k},\Pi),
\label{Eq:SymSum-MZSV}
\end{align}
where $\mathbf{k}=(k_1,\ldots,k_n)$ is an index with all $k_i>1$, $S_n$ is the symmetric group of degree $n$, and for a partition $\Pi=\{P_1,\ldots,P_i\}\in\mathcal{P}_n$,
$$t(\mathbf{k},\Pi)=\prod\limits_{j=1}^i t\left(\sum\limits_{l\in P_j}k_l\right).$$
Now let $\mathbf{k}=(2k_1,\ldots,2k_n)$ with all $k_i$ positive integers.
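In depth two, \eqref{Eq:SymSum-MZV} reduces to $t(a,b)+t(b,a)=t(a)t(b)-t(a+b)$, coming from the two partitions of $\{1,2\}$. A minimal numerical sketch of this identity (truncation bounds and the helper names `t1`, `t2` are our own choices, not notation from \cite{H2016}):

```python
# Numerical check of the depth-two symmetric sum formula
# t(a,b) + t(b,a) = t(a) t(b) - t(a+b), all sums running over odd integers.
def t1(s, N=200000):
    """t(s) = sum over odd n of n^{-s}, truncated at the N-th odd integer."""
    return sum((2 * n - 1) ** -s for n in range(1, N + 1))

def t2(a, b, N=20000):
    """t(a, b) = sum_{m > n >= 1, m, n odd} m^{-a} n^{-b}, via a prefix sum."""
    total, prefix = 0.0, 0.0
    for i in range(1, N + 1):
        m = 2 * i - 1
        total += m ** -a * prefix   # prefix = sum of n^{-b} over odd n < m
        prefix += m ** -b
    return total

lhs = t2(4, 2) + t2(2, 4)
rhs = t1(4) * t1(2) - t1(6)
assert abs(lhs - rhs) < 1e-3    # tolerance accounts for truncation error
```
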
Using \eqref{Eq:SymSum-MZV} and \eqref{Eq:SymSum-MZSV}, we have
\begin{align}
&\sum\limits_{\sigma\in S_n}t(2k_{\sigma(1)},\ldots,2k_{\sigma(n)})\nonumber\\
=&\sum\limits_{i=1}^n(-1)^{n-i}\sum\limits_{l_1+\cdots+l_i=n\atop l_1\geqslant \cdots\geqslant l_i\geqslant 1}\prod\limits_{j=1}^i(l_j-1)!\sum\limits_{\Pi=\{P_1,\ldots,P_i\}\in\mathcal{P}_n\atop \sharp{P_j}=l_j}t(\mathbf{k},\Pi)
\label{Eq:SymSum-2-MZV}
\end{align}
and
\begin{align}
&\sum\limits_{\sigma\in S_n}t^{\star}(2k_{\sigma(1)},\ldots,2k_{\sigma(n)})\nonumber\\
=&\sum\limits_{i=1}^n\sum\limits_{l_1+\cdots+l_i=n\atop l_1\geqslant \cdots\geqslant l_i\geqslant 1 }\prod\limits_{j=1}^i(l_j-1)!\sum\limits_{\Pi=\{P_1,\ldots,P_i\}\in\mathcal{P}_n\atop \sharp{P_j}=l_j}t(\mathbf{k},\Pi).
\label{Eq:SymSum-2-MZSV}
\end{align}
From now on, let $k,n$ be fixed positive integers with $k\geqslant n$, and let $f(x_1,\ldots,x_n)$ be a fixed symmetric polynomial with rational coefficients. It is easy to see that
\begin{align*}
&\sum\limits_{(k_1,\ldots,k_n)\in I(k,n)}f(k_1,\ldots,k_n)\sum\limits_{\sigma\in S_n}t(2k_{\sigma(1)},\ldots,2k_{\sigma(n)})\\
=&n!\sum\limits_{(k_1,\ldots,k_n)\in I(k,n)}f(k_1,\ldots,k_n)t(2k_1,\ldots,2k_n)=n!T_f(2,k,n)
\end{align*}
and
\begin{align*}
&\sum\limits_{(k_1,\ldots,k_n)\in I(k,n)}f(k_1,\ldots,k_n)\sum\limits_{\sigma\in S_n}t^{\star}(2k_{\sigma(1)},\ldots,2k_{\sigma(n)})\\
=&n!\sum\limits_{(k_1,\ldots,k_n)\in I(k,n)}f(k_1,\ldots,k_n)t^{\star}(2k_1,\ldots,2k_n)=n!T_f^\star(2,k,n).
\end{align*}
On the other hand, for a partition $\Pi=\{P_1,\ldots,P_i\}\in\mathcal{P}_n$ with $\sharp P_j=l_j$, we have
\begin{align}
&\sum\limits_{(k_1,\ldots,k_n)\in I(k,n)}f(k_1,\ldots,k_n)t(\mathbf{k},\Pi)\nonumber\\
=&\sum\limits_{s_1+\cdots+s_i=k\atop s_j\geqslant 1}\sum\limits_{{{k_1+\cdots+k_{l_1}=s_1\atop\vdots}\atop k_{l_1+\cdots+l_{i-1}+1}+\cdots+k_n=s_i}\atop k_j\geqslant 1}f(k_1,\ldots,k_n)t(2s_1)\cdots t(2s_i).
\label{Eq:F-times-zeta}
\end{align}
To treat the inner sum over $f(k_1,\ldots,k_n)$ on the right-hand side of \eqref{Eq:F-times-zeta}, we need the following lemma.
\begin{lem}[{\cite[Lemma 4.2]{LQ2019}}]\label{Lem:PowerSum}
Let $k$ and $n$ be integers with $k\geqslant n\geqslant 1$, and let $p_1,\ldots,p_n$ be nonnegative integers. Then there exists a polynomial $g(x)\in\mathbb{Q}[x]$ of degree $p_1+\cdots+p_n+n-1$, such that
$$\sum\limits_{k_1+\cdots+k_n=k\atop k_j\geqslant 1}k_1^{p_1}\cdots k_n^{p_n}=g(k).$$
\end{lem}
Using Lemma \ref{Lem:PowerSum}, there exists a polynomial $g_{s_1,\ldots,s_i}(x_1,\ldots,x_i)\in\mathbb{Q}[x_1,\ldots,x_i]$ of degree $\deg f+n-i$, such that
\begin{align*}
&\sum\limits_{(k_1,\ldots,k_n)\in I(k,n)}f(k_1,\ldots,k_n)t(\mathbf{k},\Pi)\\
=&\sum\limits_{s_1+\cdots+s_i=k\atop s_j\geqslant 1}g_{s_1,\ldots,s_i}(s_1,\ldots,s_i)t(2s_1)\cdots t(2s_i).
\end{align*}
Therefore we get
\begin{align*}
T_f(2,k,n)=&\frac{1}{n!}\sum\limits_{i=1}^n(-1)^{n-i}\sum\limits_{l_1+\cdots+l_i=n\atop l_1\geqslant\cdots \geqslant l_i\geqslant 1}\prod\limits_{j=1}^i(l_j-1)!\,n(l_1,\ldots,l_i)\\
&\times\sum\limits_{s_1+\cdots+s_i=k\atop s_j\geqslant 1}g_{s_1,\ldots,s_i}(s_1,\ldots,s_i)t(2s_1)\cdots t(2s_i)
\end{align*}
and
\begin{align*}
T^{\star}_f(2,k,n)=&\frac{1}{n!}\sum\limits_{i=1}^n\sum\limits_{l_1+\cdots+l_i=n\atop l_1\geqslant\cdots\geqslant l_i\geqslant 1}\prod\limits_{j=1}^i(l_j-1)!\,n(l_1,\ldots,l_i)\\
&\times\sum\limits_{s_1+\cdots+s_i=k\atop s_j\geqslant 1}g_{s_1,\ldots,s_i}(s_1,\ldots,s_i)t(2s_1)\cdots t(2s_i),
\end{align*}
where
$$n(l_1,\ldots,l_i)=\frac{n!}{\prod\limits_{j=1}^il_j!\prod\limits_{j=1}^n\sharp\{m\mid 1\leqslant m\leqslant i,\,l_m=j\}!}$$
is the number of partitions $\Pi=\{P_1,\ldots,P_i\}\in\mathcal{P}_n$ with the conditions $\sharp P_j=l_j$ for $j=1,2,\ldots,i$. Thus, applying Theorem \ref{thm1}, we prove the weighted sum formulas \eqref{1.2} and \eqref{1.3}. $\square$

\appendix
\section{Some weighted sum formulas through depth $4$}\label{Sec:Example}

In this appendix, we list some explicit weighted sum formulas of depth $n\leqslant 4$. For any positive integers $k,n$ with $k\geqslant n$, we set
$$\sum\nolimits^{(n)}=\sum_{(k_1,\ldots,k_n)\in I(k,n)}.$$

\subsection{Weighted sum formulas of the Bernoulli numbers}

Recall that $\beta_{2i}=(2^{2i}-1)B_{2i}$.
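These explicit identities can be double-checked with exact rational arithmetic. A minimal sketch for the first depth-three identity listed below, $\sum\nolimits^{(3)}\frac{\beta_{2k_1}\beta_{2k_2}\beta_{2k_3}}{(2k_1)!(2k_2)!(2k_3)!}=(k-1)(2k-1)\frac{\beta_{2k}}{(2k)!}+\frac{1}{4}\frac{\beta_{2k-2}}{(2k-2)!}$ (the bound $K$ is an arbitrary choice):

```python
# Exact-arithmetic check of the first depth-three weighted sum identity.
from fractions import Fraction
from math import comb, factorial

def bernoulli(N):
    """Bernoulli numbers B_0, ..., B_N as exact fractions (B_1 = -1/2)."""
    B = [Fraction(0)] * (N + 1)
    B[0] = Fraction(1)
    for m in range(1, N + 1):
        B[m] = -sum(Fraction(comb(m + 1, j)) * B[j] for j in range(m)) / (m + 1)
    return B

K = 9                                   # arbitrary upper bound for the check
B = bernoulli(2 * K)

def w(i):
    """beta_{2i} / (2i)! with beta_{2i} = (2^{2i} - 1) B_{2i}."""
    return (2**(2 * i) - 1) * B[2 * i] / factorial(2 * i)

for k in range(3, K + 1):
    lhs = sum(w(k1) * w(k2) * w(k - k1 - k2)          # over k1 + k2 + k3 = k
              for k1 in range(1, k) for k2 in range(1, k - k1))
    rhs = (k - 1) * (2 * k - 1) * w(k) + Fraction(1, 4) * w(k - 1)
    assert lhs == rhs, k
```
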
If $n=2$, we have
\begin{align*}
&\sum\nolimits^{(2)} \frac{\beta_{2k_1}\beta_{2k_2}}{(2k_1)!(2k_2)!}=-(2k-1)\frac{\beta_{2k}}{(2k)!},\\
&\sum\nolimits^{(2)} k_1\frac{\beta_{2k_1}\beta_{2k_2}}{(2k_1)!(2k_2)!}=-\frac{1}{2}k(2k-1)\frac{\beta_{2k}}{(2k)!},\\
&\sum\nolimits^{(2)} k_1^2\frac{\beta_{2k_1}\beta_{2k_2}}{(2k_1)!(2k_2)!}=-\frac{1}{12}k(2k-1)(4k-1)\frac{\beta_{2k}}{(2k)!}-\frac{1}{24}(2k-3)\frac{\beta_{2k-2}}{(2k-2)!},\\
&\sum\nolimits^{(2)} k_1 k_2\frac{\beta_{2k_1}\beta_{2k_2}}{(2k_1)!(2k_2)!}=-\frac{1}{12}k(2k-1)(2k+1)\frac{\beta_{2k}}{(2k)!}+\frac{1}{24}(2k-3)\frac{\beta_{2k-2}}{(2k-2)!},\\
&\sum\nolimits^{(2)} k_1^3\frac{\beta_{2k_1}\beta_{2k_2}}{(2k_1)!(2k_2)!}=-\frac{1}{8}k^2(2k-1)^2\frac{\beta_{2k}}{(2k)!}-\frac{1}{16}k(2k-3)\frac{\beta_{2k-2}}{(2k-2)!},\\
&\sum\nolimits^{(2)} k_1^2k_2\frac{\beta_{2k_1}\beta_{2k_2}}{(2k_1)!(2k_2)!}=-\frac{1}{24}k^2(2k-1)(2k+1)\frac{\beta_{2k}}{(2k)!}+\frac{1}{48}k(2k-3)\frac{\beta_{2k-2}}{(2k-2)!},\\
&\sum\nolimits^{(2)} k_1^4\frac{\beta_{2k_1}\beta_{2k_2}}{(2k_1)!(2k_2)!}=-\frac{1}{240}k(2k-1)(4k-1)(12k^2-6k-1)\frac{\beta_{2k}}{(2k)!}\\
&\qquad\quad -\frac{1}{96}(2k-3)(8k^2-6k+5)\frac{\beta_{2k-2}}{(2k-2)!}+\frac{1}{480}(2k-5)\frac{\beta_{2k-4}}{(2k-4)!},\\
&\sum\nolimits^{(2)} k_1^3k_2\frac{\beta_{2k_1}\beta_{2k_2}}{(2k_1)!(2k_2)!}=-\frac{1}{240}k(2k-1)(2k+1)(6k^2-1)\frac{\beta_{2k}}{(2k)!}\\
&\qquad\quad
+\frac{1}{96}(2k-3)(2k^2-6k+5)\frac{\beta_{2k-2}}{(2k-2)!}-\frac{1}{480}(2k-5)\frac{\beta_{2k-4}}{(2k-4)!},\\
&\sum\nolimits^{(2)} k_1^2k_2^2\frac{\beta_{2k_1}\beta_{2k_2}}{(2k_1)!(2k_2)!}=-\frac{1}{240}k(2k-1)(2k+1)(4k^2+1)\frac{\beta_{2k}}{(2k)!}\\
&\qquad\quad +\frac{1}{96}(2k-3)(6k-5)\frac{\beta_{2k-2}}{(2k-2)!}+\frac{1}{480}(2k-5)\frac{\beta_{2k-4}}{(2k-4)!},\\
&\sum\nolimits^{(2)} k_1^5\frac{\beta_{2k_1}\beta_{2k_2}}{(2k_1)!(2k_2)!}=-\frac{1}{96}k^2(2k-1)^2(8k^2-4k-1)\frac{\beta_{2k}}{(2k)!}\\
&\qquad\quad -\frac{5}{192}k(2k-3)(4k^2-6k+5)\frac{\beta_{2k-2}}{(2k-2)!}+\frac{1}{192}k(2k-5)\frac{\beta_{2k-4}}{(2k-4)!},\\
&\sum\nolimits^{(2)} k_1^4k_2\frac{\beta_{2k_1}\beta_{2k_2}}{(2k_1)!(2k_2)!}=-\frac{1}{480}k^2(2k-1)(2k+1)(8k^2-3)\frac{\beta_{2k}}{(2k)!}\\
&\qquad\quad +\frac{1}{192}k(2k-3)(4k^2-18k+15)\frac{\beta_{2k-2}}{(2k-2)!}-\frac{1}{320}k(2k-5)\frac{\beta_{2k-4}}{(2k-4)!},\\
&\sum\nolimits^{(2)} k_1^3k_2^2\frac{\beta_{2k_1}\beta_{2k_2}}{(2k_1)!(2k_2)!}=-\frac{1}{480}k^2(2k-1)(2k+1)(4k^2+1)\frac{\beta_{2k}}{(2k)!}\\
&\qquad\quad +\frac{1}{192}k(2k-3)(6k-5)\frac{\beta_{2k-2}}{(2k-2)!}+\frac{1}{960}k(2k-5)\frac{\beta_{2k-4}}{(2k-4)!}.
\end{align*}
If $n=3$, we have
\begin{align*}
&\sum\nolimits^{(3)}\frac{\beta_{2k_1}\beta_{2k_2}\beta_{2k_3}}{(2k_1)!(2k_2)!(2k_3)!}=(k-1)(2k-1)\frac{\beta_{2k}}{(2k)!}+\frac{1}{4}\frac{\beta_{2k-2}}{(2k-2)!},\\
&\sum\nolimits^{(3)} k_1\frac{\beta_{2k_1}\beta_{2k_2}\beta_{2k_3}}{(2k_1)!(2k_2)!(2k_3)!}=\frac{1}{3}k(k-1)(2k-1)\frac{\beta_{2k}}{(2k)!}+\frac{1}{12}k\frac{\beta_{2k-2}}{(2k-2)!},\\
&\sum\nolimits^{(3)} k_1^2\frac{\beta_{2k_1}\beta_{2k_2}\beta_{2k_3}}{(2k_1)!(2k_2)!(2k_3)!}=\frac{1}{12}k(k-1)(2k-1)^2\frac{\beta_{2k}}{(2k)!}+\frac{1}{24}(4k^2-11k+9)\frac{\beta_{2k-2}}{(2k-2)!},\\
&\sum\nolimits^{(3)} k_1k_2\frac{\beta_{2k_1}\beta_{2k_2}\beta_{2k_3}}{(2k_1)!(2k_2)!(2k_3)!}=\frac{1}{24}k(k-1)(2k-1)(2k+1)\frac{\beta_{2k}}{(2k)!}\\
&\qquad\qquad\qquad\qquad\qquad\qquad -\frac{1}{48}(k-1)(2k-9)\frac{\beta_{2k-2}}{(2k-2)!},\\
&\sum\nolimits^{(3)} k_1^3\frac{\beta_{2k_1}\beta_{2k_2}\beta_{2k_3}}{(2k_1)!(2k_2)!(2k_3)!}=\frac{1}{120}k(k-1)(2k-1)(12k^2-12k+1)\frac{\beta_{2k}}{(2k)!}\\
&\qquad\qquad +\frac{1}{48}(8k^3-24k^2+17k+3)\frac{\beta_{2k-2}}{(2k-2)!}+\frac{1}{240}(2k-5)\frac{\beta_{2k-4}}{(2k-4)!},\\
&\sum\nolimits^{(3)} k_1^2k_2\frac{\beta_{2k_1}\beta_{2k_2}\beta_{2k_3}}{(2k_1)!(2k_2)!(2k_3)!}=\frac{1}{240}k(k-1)(2k-1)(2k+1)(4k-1)\frac{\beta_{2k}}{(2k)!}\\
&\qquad\qquad +\frac{1}{96}(k-1)(2k+3)\frac{\beta_{2k-2}}{(2k-2)!}-\frac{1}{480}(2k-5)\frac{\beta_{2k-4}}{(2k-4)!},\\
&\sum\nolimits^{(3)}
k_1k_2k_3\frac{\beta_{2k_1}\beta_{2k_2}\beta_{2k_3}}{(2k_1)!(2k_2)!(2k_3)!}=\frac{1}{120}k(k-1)(k+1)(2k-1)(2k+1)\frac{\beta_{2k}}{(2k)!}\\
&\qquad\qquad -\frac{1}{48}(k-3)(k-1)(2k-1)\frac{\beta_{2k-2}}{(2k-2)!}+\frac{1}{240}(2k-5)\frac{\beta_{2k-4}}{(2k-4)!},\\
&\sum\nolimits^{(3)} k_1^4\frac{\beta_{2k_1}\beta_{2k_2}\beta_{2k_3}}{(2k_1)!(2k_2)!(2k_3)!}=\frac{1}{240}k(k-1)(2k-1)^2(8k^2-8k-1)\frac{\beta_{2k}}{(2k)!}\\
&\qquad\qquad+\frac{1}{96}(16k^4-64k^3+96k^2-79k+39)\frac{\beta_{2k-2}}{(2k-2)!}\\
&\qquad\qquad +\frac{1}{480}(2k-5)(3k+1)\frac{\beta_{2k-4}}{(2k-4)!},\\
&\sum\nolimits^{(3)} k_1^3k_2\frac{\beta_{2k_1}\beta_{2k_2}\beta_{2k_3}}{(2k_1)!(2k_2)!(2k_3)!}=\frac{1}{480}k(k-1)(2k-1)(2k+1)(4k^2-2k-1)\frac{\beta_{2k}}{(2k)!}\\
&\qquad+\frac{1}{192}(k-1)(16k^2-46k+39)\frac{\beta_{2k-2}}{(2k-2)!}-\frac{1}{960}(k+1)(2k-5)\frac{\beta_{2k-4}}{(2k-4)!},\\
&\sum\nolimits^{(3)} k_1^2k_2^2\frac{\beta_{2k_1}\beta_{2k_2}\beta_{2k_3}}{(2k_1)!(2k_2)!(2k_3)!}=\frac{1}{1440}k(k-1)(2k-1)(2k+1)(8k^2-4k+3)\frac{\beta_{2k}}{(2k)!}\\
&\qquad\qquad+\frac{1}{576}(k-1)(8k^3-64k^2+168k-117)\frac{\beta_{2k-2}}{(2k-2)!}\\
&\qquad\qquad-\frac{1}{2880}(2k-5)(7k-3)\frac{\beta_{2k-4}}{(2k-4)!},\\
&\sum\nolimits^{(3)} k_1^2k_2k_3\frac{\beta_{2k_1}\beta_{2k_2}\beta_{2k_3}}{(2k_1)!(2k_2)!(2k_3)!}=\frac{1}{360}k^2(k-1)(k+1)(2k-1)(2k+1)\frac{\beta_{2k}}{(2k)!}\\
&\qquad\quad
-\frac{1}{144}k(k-3)(k-1)(2k-1)\frac{\beta_{2k-2}}{(2k-2)!}+\frac{1}{720}k(2k-5)\frac{\beta_{2k-4}}{(2k-4)!}.
\end{align*}
If $n=4$, we have
\begin{align*}
&\sum\nolimits^{(4)}\frac{\beta_{2k_1}\beta_{2k_2}\beta_{2k_3}\beta_{2k_4}}{(2k_1)!(2k_2)!(2k_3)!(2k_4)!}=-\frac{1}{3}(k-1)(2k-3)(2k-1)\frac{\beta_{2k}}{(2k)!}\\
&\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad -\frac{1}{3}(2k-3)\frac{\beta_{2k-2}}{(2k-2)!},\\
&\sum\nolimits^{(4)} k_1\frac{\beta_{2k_1}\beta_{2k_2}\beta_{2k_3}\beta_{2k_4}}{(2k_1)!(2k_2)!(2k_3)!(2k_4)!}=-\frac{1}{12}k(k-1)(2k-3)(2k-1)\frac{\beta_{2k}}{(2k)!}\\
&\qquad\qquad\qquad\qquad\qquad\qquad\qquad -\frac{1}{12}k(2k-3)\frac{\beta_{2k-2}}{(2k-2)!},\\
&\sum\nolimits^{(4)} k_1^2\frac{\beta_{2k_1}\beta_{2k_2}\beta_{2k_3}\beta_{2k_4}}{(2k_1)!(2k_2)!(2k_3)!(2k_4)!}=-\frac{1}{120}k(k-1)(2k-3)(2k-1)(4k-3)\frac{\beta_{2k}}{(2k)!}\\
&\qquad\qquad -\frac{1}{12}(2k-3)(k^2-3k+3)\frac{\beta_{2k-2}}{(2k-2)!}-\frac{1}{160}(2k-5)\frac{\beta_{2k-4}}{(2k-4)!},\\
&\sum\nolimits^{(4)} k_1k_2\frac{\beta_{2k_1}\beta_{2k_2}\beta_{2k_3}\beta_{2k_4}}{(2k_1)!(2k_2)!(2k_3)!(2k_4)!}=-\frac{1}{120}k(k-1)(2k-3)(2k-1)(2k+1)\frac{\beta_{2k}}{(2k)!}\\
&\qquad\qquad -\frac{1}{12}(k-1)(2k-3)\frac{\beta_{2k-2}}{(2k-2)!}+\frac{1}{480}(2k-5)\frac{\beta_{2k-4}}{(2k-4)!},\\
&\sum\nolimits^{(4)} k_1^3\frac{\beta_{2k_1}\beta_{2k_2}\beta_{2k_3}\beta_{2k_4}}{(2k_1)!(2k_2)!(2k_3)!(2k_4)!}=-\frac{1}{240}k(k-1)(2k-3)(2k-1)(4k^2-6k+1)\frac{\beta_{2k}}{(2k)!}\\
&\qquad\qquad\qquad
-\frac{1}{96}(2k-3)(6k^3-21k^2+17k+6)\frac{\beta_{2k-2}}{(2k-2)!}\\
&\qquad\qquad\qquad -\frac{1}{960}(2k-5)(13k-21)\frac{\beta_{2k-4}}{(2k-4)!},\\
&\sum\nolimits^{(4)} k_1^2k_2\frac{\beta_{2k_1}\beta_{2k_2}\beta_{2k_3}\beta_{2k_4}}{(2k_1)!(2k_2)!(2k_3)!(2k_4)!}=-\frac{1}{720}k(k-1)(2k-3)(2k-1)^2(2k+1)\frac{\beta_{2k}}{(2k)!}\\
&\qquad -\frac{1}{288}(k-1)(2k-3)(2k^2-k+6)\frac{\beta_{2k-2}}{(2k-2)!}+\frac{7}{2880}(k-3)(2k-5)\frac{\beta_{2k-4}}{(2k-4)!},\\
&\sum\nolimits^{(4)} k_1k_2k_3\frac{\beta_{2k_1}\beta_{2k_2}\beta_{2k_3}\beta_{2k_4}}{(2k_1)!(2k_2)!(2k_3)!(2k_4)!}=-\frac{1}{720}k(k-1)(k+1)(2k-3)(2k-1)(2k+1)\frac{\beta_{2k}}{(2k)!}\\
&\qquad +\frac{1}{288}(k-6)(k-1)(2k-3)(2k-1)\frac{\beta_{2k-2}}{(2k-2)!}-\frac{1}{2880}(2k-5)(4k-21)\frac{\beta_{2k-4}}{(2k-4)!}.
\end{align*}

\subsection{Weighted sum formulas of $t$-values}

If $n=2$, we have
\begin{align*}
&\sum\nolimits^{(2)} t(2k_1)t(2k_2)=\frac{1}{2}(2k-1)t(2k),\\
&\sum\nolimits^{(2)} k_1t(2k_1)t(2k_2)=\frac{1}{4}k(2k-1)t(2k),\\
&\sum\nolimits^{(2)} k_1^2t(2k_1)t(2k_2)=\frac{1}{24}k(2k-1)(4k-1)t(2k)-\frac{1}{8}(2k-3)\zeta(2)t(2k-2),\\
&\sum\nolimits^{(2)} k_1k_2t(2k_1)t(2k_2)=\frac{1}{24}k(2k-1)(2k+1)t(2k)+\frac{1}{8}(2k-3)\zeta(2)t(2k-2),\\
&\sum\nolimits^{(2)} k_1^3t(2k_1)t(2k_2)=\frac{1}{16}k^2(2k-1)^2t(2k)-\frac{3}{16}k(2k-3)\zeta(2)t(2k-2),\\
&\sum\nolimits^{(2)} k_1^2k_2t(2k_1)t(2k_2)=\frac{1}{48}k^2(2k-1)(2k+1)t(2k)+\frac{1}{16}k(2k-3)\zeta(2)t(2k-2),\\
&\sum\nolimits^{(2)} k_1^4t(2k_1)t(2k_2)=\frac{1}{480}k(2k-1)(4k-1)(12k^2-6k-1)t(2k)\\
&\qquad\qquad -\frac{1}{32}(2k-3)(8k^2-6k+5)\zeta(2)t(2k-2)-\frac{3}{32}(2k-5)\zeta(4)t(2k-4),\\
&\sum\nolimits^{(2)} k_1^3k_2t(2k_1)t(2k_2)=\frac{1}{480}k(2k-1)(2k+1)(6k^2-1)t(2k)\\
&\qquad\qquad +\frac{1}{32}(2k-3)(2k^2-6k+5)\zeta(2)t(2k-2)+\frac{3}{32}(2k-5)\zeta(4)t(2k-4),\\
&\sum\nolimits^{(2)} k_1^2k_2^2t(2k_1)t(2k_2)=\frac{1}{480}k(2k-1)(2k+1)(4k^2+1)t(2k)\\
&\qquad\qquad +\frac{1}{32}(2k-3)(6k-5)\zeta(2)t(2k-2)-\frac{3}{32}(2k-5)\zeta(4)t(2k-4),\\
&\sum\nolimits^{(2)} k_1^5t(2k_1)t(2k_2)=\frac{1}{192}k^2(2k-1)^2(8k^2-4k-1)t(2k)\\
&\qquad\qquad -\frac{5}{64}k(2k-3)(4k^2-6k+5)\zeta(2)t(2k-2)-\frac{15}{64}k(2k-5)\zeta(4)t(2k-4),\\
&\sum\nolimits^{(2)}
k_1^4k_2t(2k_1)t(2k_2)=\frac{1}{960}k^2(2k-1)(2k+1)(8k^2-3)t(2k)\\
&\qquad\qquad +\frac{1}{64}k(2k-3)(4k^2-18k+15)\zeta(2)t(2k-2)+\frac{9}{64}k(2k-5)\zeta(4)t(2k-4),\\
&\sum\nolimits^{(2)} k_1^3k_2^2t(2k_1)t(2k_2)=\frac{1}{960}k^2(2k-1)(2k+1)(4k^2+1)t(2k)\\
&\qquad\qquad +\frac{1}{64}k(2k-3)(6k-5)\zeta(2)t(2k-2)-\frac{3}{64}k(2k-5)\zeta(4)t(2k-4).
\end{align*}
If $n=3$, we have
\begin{align*}
&\sum\nolimits^{(3)} t(2k_1)t(2k_2)t(2k_3)=\frac{1}{4}(k-1)(2k-1)t(2k)-\frac{3}{8}\zeta(2)t(2k-2),\\
&\sum\nolimits^{(3)} k_1t(2k_1)t(2k_2)t(2k_3)=\frac{1}{12}k(k-1)(2k-1)t(2k)-\frac{1}{8}k\zeta(2)t(2k-2),\\
&\sum\nolimits^{(3)} k_1^2t(2k_1)t(2k_2)t(2k_3)=\frac{1}{48}k(k-1)(2k-1)^2t(2k)\\
&\qquad\qquad\qquad\qquad\qquad\qquad -\frac{1}{16}(4k^2-11k+9)\zeta(2)t(2k-2),\\
&\sum\nolimits^{(3)} k_1k_2t(2k_1)t(2k_2)t(2k_3)=\frac{1}{96}k(k-1)(2k-1)(2k+1)t(2k)\\
&\qquad\qquad\qquad\qquad\qquad\qquad +\frac{1}{32}(k-1)(2k-9)\zeta(2)t(2k-2),\\
&\sum\nolimits^{(3)} k_1^3t(2k_1)t(2k_2)t(2k_3)=\frac{1}{480}k(k-1)(2k-1)(12k^2-12k+1)t(2k)\\
&\qquad\qquad -\frac{1}{32}(8k^3-24k^2+17k+3)\zeta(2)t(2k-2)+\frac{3}{32}(2k-5)\zeta(4)t(2k-4),\\
&\sum\nolimits^{(3)} k_1^2k_2t(2k_1)t(2k_2)t(2k_3)=\frac{1}{960}k(k-1)(2k-1)(2k+1)(4k-1)t(2k)\\
&\qquad\qquad -\frac{1}{64}(k-1)(2k+3)\zeta(2)t(2k-2)-\frac{3}{64}(2k-5)\zeta(4)t(2k-4),\\
&\sum\nolimits^{(3)} k_1k_2k_3t(2k_1)t(2k_2)t(2k_3)=\frac{1}{480}k(k-1)(k+1)(2k-1)(2k+1)t(2k)\\
&\qquad\qquad +\frac{1}{32}(k-3)(k-1)(2k-1)\zeta(2)t(2k-2)+\frac{3}{32}(2k-5)\zeta(4)t(2k-4),\\
&\sum\nolimits^{(3)} k_1^4t(2k_1)t(2k_2)t(2k_3)=\frac{1}{960}k(k-1)(2k-1)^2(8k^2-8k-1)t(2k)\\
&\quad -\frac{1}{64}(16k^4-64k^3+96k^2-79k+39)\zeta(2)t(2k-2)+\frac{3}{64}(2k-5)(3k+1)\zeta(4)t(2k-4),\\
&\sum\nolimits^{(3)} k_1^3k_2t(2k_1)t(2k_2)t(2k_3)=\frac{1}{1920}k(k-1)(2k-1)(2k+1)(4k^2-2k-1)t(2k)\\
&\quad -\frac{1}{128}(k-1)(16k^2-46k+39)\zeta(2)t(2k-2)-\frac{3}{128}(k+1)(2k-5)\zeta(4)t(2k-4),\\
&\sum\nolimits^{(3)} k_1^2k_2^2t(2k_1)t(2k_2)t(2k_3)=\frac{1}{5760}k(k-1)(2k-1)(2k+1)(8k^2-4k+3)t(2k)\\
&\qquad\qquad\qquad -\frac{1}{384}(k-1)(8k^3-64k^2+168k-117)\zeta(2)t(2k-2)\\
&\qquad\qquad\qquad-\frac{1}{128}(2k-5)(7k-3)\zeta(4)t(2k-4),\\
&\sum\nolimits^{(3)} k_1^2k_2k_3t(2k_1)t(2k_2)t(2k_3)=\frac{1}{1440}k^2(k-1)(k+1)(2k-1)(2k+1)t(2k)\\
&\qquad +\frac{1}{96}k(k-3)(k-1)(2k-1)\zeta(2)t(2k-2)+\frac{1}{32}k(2k-5)\zeta(4)t(2k-4).
\end{align*}
If $n=4$, we have
\begin{align*}
&\sum\nolimits^{(4)} t(2k_1)t(2k_2)t(2k_3)t(2k_4)=\frac{1}{24}(k-1)(2k-3)(2k-1)t(2k)\\
&\qquad\qquad\qquad\qquad\qquad-\frac{1}{4}(2k-3)\zeta(2)t(2k-2),\\
&\sum\nolimits^{(4)} k_1t(2k_1)t(2k_2)t(2k_3)t(2k_4)=\frac{1}{96}k(k-1)(2k-3)(2k-1)t(2k)\\
&\qquad\qquad\qquad\qquad\qquad-\frac{1}{16}k(2k-3)\zeta(2)t(2k-2),\\
&\sum\nolimits^{(4)} k_1^2t(2k_1)t(2k_2)t(2k_3)t(2k_4)=\frac{1}{960}k(k-1)(2k-3)(2k-1)(4k-3)t(2k)\\
&\qquad\qquad-\frac{1}{16}(2k-3)(k^2-3k+3)\zeta(2)t(2k-2)+\frac{9}{128}(2k-5)\zeta(4)t(2k-4),\\
&\sum\nolimits^{(4)} k_1k_2t(2k_1)t(2k_2)t(2k_3)t(2k_4)=\frac{1}{960}k(k-1)(2k-3)(2k-1)(2k+1)t(2k)\\
&\qquad\qquad-\frac{1}{16}(k-1)(2k-3)\zeta(2)t(2k-2)-\frac{3}{128}(2k-5)\zeta(4)t(2k-4),\\
&\sum\nolimits^{(4)} k_1^3t(2k_1)t(2k_2)t(2k_3)t(2k_4)=\frac{1}{1920}k(k-1)(2k-3)(2k-1)(4k^2-6k+1)t(2k)\\
&\qquad\qquad\qquad-\frac{1}{128}(2k-3)(6k^3-21k^2+17k+6)\zeta(2)t(2k-2)\\
&\qquad\qquad\qquad+\frac{3}{256}(2k-5)(13k-21)\zeta(4)t(2k-4),\\
&\sum\nolimits^{(4)} k_1^2k_2t(2k_1)t(2k_2)t(2k_3)t(2k_4)=\frac{1}{5760}k(k-1)(2k-3)(2k-1)^2(2k+1)t(2k)\\
&\quad-\frac{1}{384}(k-1)(2k-3)(2k^2-k+6)\zeta(2)t(2k-2)-\frac{7}{256}(k-3)(2k-5)\zeta(4)t(2k-4),\\
&\sum\nolimits^{(4)} k_1k_2k_3t(2k_1)t(2k_2)t(2k_3)t(2k_4)=\frac{1}{5760}k(k-1)(k+1)(2k-3)(2k-1)(2k+1)t(2k)\\
&\qquad\qquad\qquad+\frac{1}{384}(k-6)(k-1)(2k-3)(2k-1)\zeta(2)t(2k-2)\\
&\qquad\qquad\qquad+\frac{1}{256}(2k-5)(4k-21)\zeta(4)t(2k-4).
\end{align*}

\subsection{Weighted sum formulas of multiple $t$-values}

If $n=2$, we have
\begin{align*}
&\sum\nolimits^{(2)} t(2k_1,2k_2)=\frac{1}{4}t(2k),\\
&\sum\nolimits^{(2)}(k_1^2+k_2^2) t(2k_1,2k_2)=\frac{1}{8}k(2k-1)t(2k)-\frac{1}{8}(2k-3)\zeta(2)t(2k-2),\\
&\sum\nolimits^{(2)} k_1k_2 t(2k_1,2k_2)=\frac{1}{16}kt(2k)+\frac{1}{16}(2k-3)\zeta(2)t(2k-2),\\
&\sum\nolimits^{(2)}(k_1^3+k_2^3) t(2k_1,2k_2)=\frac{1}{16}k^2(4k-3)t(2k)-\frac{3}{16}k(2k-3)\zeta(2)t(2k-2),\\
&\sum\nolimits^{(2)}(k_1^4+k_2^4) t(2k_1,2k_2)=\frac{1}{32}k(2k-1)(4k^2-2k-1)t(2k)\\
&\qquad\qquad -\frac{1}{32}(2k-3)(8k^2-6k+5)\zeta(2)t(2k-2)-\frac{3}{32}(2k-5)\zeta(4)t(2k-4),\\
&\sum\nolimits^{(2)}(k_1^3k_2+k_1k_2^3) t(2k_1,2k_2)=\frac{1}{32}k(2k^2-1)t(2k)\\
&\qquad\qquad +\frac{1}{32}(2k-3)(2k^2-6k+5)\zeta(2)t(2k-2)+\frac{3}{32}(2k-5)\zeta(4)t(2k-4),\\
&\sum\nolimits^{(2)} k_1^2k_2^2 t(2k_1,2k_2)=\frac{1}{64}kt(2k)+\frac{1}{64}(2k-3)(6k-5)\zeta(2)t(2k-2)\\
&\qquad\qquad -\frac{3}{64}(2k-5)\zeta(4)t(2k-4),\\
&\sum\nolimits^{(2)}(k_1^5+k_2^5) t(2k_1,2k_2)=\frac{1}{64}k^2(16k^3-20k^2+5)t(2k)\\
&\qquad\qquad -\frac{5}{64}k(2k-3)(4k^2-6k+5)\zeta(2)t(2k-2)-\frac{15}{64}k(2k-5)\zeta(4)t(2k-4),\\
&\sum\nolimits^{(2)}(k_1^4k_2+k_1k_2^4) t(2k_1,2k_2)=\frac{1}{64}k^2(4k^2-3)t(2k)\\
&\qquad\qquad +\frac{1}{64}k(2k-3)(4k^2-18k+15)\zeta(2)t(2k-2)+\frac{9}{64}k(2k-5)\zeta(4)t(2k-4)
\end{align*}
and
\begin{align*}
&\sum\nolimits^{(2)} t^{\star}(2k_1,2k_2)=\frac{1}{4}(4k-3)t(2k),\\
&\sum\nolimits^{(2)}(k_1^2+k_2^2) t^{\star}(2k_1,2k_2)=\frac{1}{24}k(2k-1)(8k-5)t(2k)-\frac{1}{8}(2k-3)\zeta(2)t(2k-2),\\ &\sum\nolimits^{(2)} k_1k_2 t^{\star}(2k_1,2k_2)=\frac{1}{48}k(8k^2-5)t(2k)+\frac{1}{16}(2k-3)\zeta(2)t(2k-2),\\ &\sum\nolimits^{(2)}(k_1^3+k_2^3) t^{\star}(2k_1,2k_2)=\frac{1}{16}k^2(8k^2-12k+5)t(2k)-\frac{3}{16}k(2k-3)\zeta(2)t(2k-2),\\ &\sum\nolimits^{(2)}(k_1^4+k_2^4) t^{\star}(2k_1,2k_2)=\frac{1}{480}k(2k-1)(96k^3-132k^2+34k+17)t(2k)\\ &\qquad\qquad -\frac{1}{32}(2k-3)(8k^2-6k+5)\zeta(2)t(2k-2)-\frac{3}{32}(2k-5)\zeta(4)t(2k-4),\\ &\sum\nolimits^{(2)}(k_1^3k_2+k_1k_2^3) t^{\star}(2k_1,2k_2)=\frac{1}{480}k(48k^4-50k^2+17)t(2k)\\ &\qquad\qquad +\frac{1}{32}(2k-3)(2k^2-6k+5)\zeta(2)t(2k-2)+\frac{3}{32}(2k-5)\zeta(4)t(2k-4),\\ &\sum\nolimits^{(2)} k_1^2k_2^2 t^{\star}(2k_1,2k_2)=\frac{1}{960}k(32k^4-17)t(2k)+\frac{1}{64}(2k-3)(6k-5)\zeta(2)t(2k-2)\\ &\qquad\qquad\qquad\qquad -\frac{3}{64}(2k-5)\zeta(4)t(2k-4),\\ &\sum\nolimits^{(2)}(k_1^5+k_2^5) t^{\star}(2k_1,2k_2)=\frac{1}{192}k^2(64k^4-144k^3+100k^2-17)t(2k)\\ &\qquad\qquad -\frac{5}{64}k(2k-3)(4k^2-6k+5)\zeta(2)t(2k-2)-\frac{15}{64}k(2k-5)\zeta(4)t(2k-4),\\ &\sum\nolimits^{(2)}(k_1^4k_2+k_1k_2^4) t^{\star}(2k_1,2k_2)=\frac{1}{960}k^2(64k^4-100k^2+51)t(2k)\\ &\qquad\qquad +\frac{1}{64}k(2k-3)(4k^2-18k+15)\zeta(2)t(2k-2)+\frac{9}{64}k(2k-5)\zeta(4)t(2k-4).
\end{align*} If $n=3$, we have \begin{align*} &\sum\nolimits^{(3)} t(2k_1,2k_2,2k_3)=\frac{1}{8}t(2k)-\frac{1}{16}\zeta(2)t(2k-2),\\ &\sum\nolimits^{(3)} (k_1^2+k_2^2+k_3^2)t(2k_1,2k_2,2k_3)=\frac{1}{32}k(4k-3)t(2k)-\frac{1}{32}(2k^2-3)\zeta(2)t(2k-2),\\ &\sum\nolimits^{(3)} (k_1k_2+k_1k_3+k_2k_3)t(2k_1,2k_2,2k_3)=\frac{3}{64}kt(2k)-\frac{3}{64}\zeta(2)t(2k-2),\\ &\sum\nolimits^{(3)} (k_1^3+k_2^3+k_3^3)t(2k_1,2k_2,2k_3)=\frac{1}{128}k(16k^2-18k+3)t(2k)\\ &\qquad\qquad-\frac{1}{128}(8k^3-18k+3)\zeta(2)t(2k-2)+\frac{3}{128}(2k-5)\zeta(4)t(2k-4),\\ &\sum\nolimits^{(3)} \sum\limits_{1\leqslant i<j\leqslant 3}(k_i^2k_j+k_ik_j^2)t(2k_1,2k_2,2k_3)=\frac{3}{128}k(2k-1)t(2k)\\ &\qquad\qquad-\frac{3}{128}(2k-1)\zeta(2)t(2k-2)-\frac{3}{128}(2k-5)\zeta(4)t(2k-4),\\ &\sum\nolimits^{(3)} k_1k_2k_3t(2k_1,2k_2,2k_3)=\frac{1}{128}kt(2k)-\frac{1}{128}\zeta(2)t(2k-2)\\ &\qquad\qquad\qquad\qquad\qquad +\frac{1}{128}(2k-5)\zeta(4)t(2k-4),\\ &\sum\nolimits^{(3)} (k_1^4+k_2^4+k_3^4)t(2k_1,2k_2,2k_3)=\frac{1}{128}k(16k^3-24k^2+6k+3)t(2k)\\ &\qquad-\frac{1}{128}(8k^4-36k^2+42k-21)\zeta(2)t(2k-2)+\frac{3}{128}(2k-5)(2k-3)\zeta(4)t(2k-4),\\ &\sum\nolimits^{(3)} \sum\limits_{1\leqslant i<j\leqslant 3}(k_i^3k_j+k_ik_j^3)t(2k_1,2k_2,2k_3)=\frac{3}{128}k(k-1)(2k+1)t(2k)\\ &\qquad\qquad-\frac{3}{128}(k-1)(6k-7)\zeta(2)t(2k-2)-\frac{3}{128}(k-3)(2k-5)\zeta(4)t(2k-4),\\ &\sum\nolimits^{(3)} (k_1^2k_2^2+k_1^2k_3^2+k_2^2k_3^2)t(2k_1,2k_2,2k_3)=-\frac{1}{256}k(2k-3)t(2k)\\
&\qquad+\frac{1}{256}(12k^2-34k+21)\zeta(2)t(2k-2)-\frac{1}{256}(2k-5)(2k+9)\zeta(4)t(2k-4) \end{align*} and \begin{align*} &\sum\nolimits^{(3)} t^{\star}(2k_1,2k_2,2k_3)=\frac{1}{8}(4k^2-10k+5)t(2k)-\frac{1}{16}\zeta(2)t(2k-2),\\ &\sum\nolimits^{(3)} (k_1^2+k_2^2+k_3^2)t^{\star}(2k_1,2k_2,2k_3)=\frac{1}{96}k(24k^3-80k^2+78k-25)t(2k)\\ &\qquad\qquad\qquad\qquad -\frac{1}{32}(6k^2-22k+21)\zeta(2)t(2k-2),\\ &\sum\nolimits^{(3)} (k_1k_2+k_1k_3+k_2k_3)t^{\star}(2k_1,2k_2,2k_3)=\frac{1}{192}k(24k^3-40k^2-18k+25)t(2k)\\ &\qquad\qquad\qquad\qquad+\frac{1}{64}(4k^2-22k+21)\zeta(2)t(2k-2),\\ &\sum\nolimits^{(3)} (k_1^3+k_2^3+k_3^3)t^{\star}(2k_1,2k_2,2k_3)=\frac{1}{640}k(6k-1)(16k^3-64k^2+76k-29)t(2k)\\ &\qquad\qquad-\frac{1}{128}(24k^3-96k^2+86k+9)\zeta(2)t(2k-2)+\frac{9}{128}(2k-5)\zeta(4)t(2k-4),\\ &\sum\nolimits^{(3)} \sum\limits_{1\leqslant i<j\leqslant 3}(k_i^2k_j+k_ik_j^2)t^{\star}(2k_1,2k_2,2k_3)=\frac{1}{1920}k(2k-1)(96k^3-152k^2-76k+87)t(2k)\\ &\qquad\qquad-\frac{1}{128}(8k^2-2k-9)\zeta(2)t(2k-2)-\frac{9}{128}(2k-5)\zeta(4)t(2k-4),\\ &\sum\nolimits^{(3)} k_1k_2k_3t^{\star}(2k_1,2k_2,2k_3)=\frac{1}{1920}k(16k^4-60k^2+29)t(2k)\\ &\qquad\qquad +\frac{1}{384}(8k^3-36k^2+40k-9)\zeta(2)t(2k-2)+\frac{3}{128}(2k-5)\zeta(4)t(2k-4),\\ &\sum\nolimits^{(3)} (k_1^4+k_2^4+k_3^4)t^{\star}(2k_1,2k_2,2k_3)=\frac{1}{1920}k(192k^5-960k^4+1560k^3-1000k^2+108k\\ &\qquad\qquad +85)t(2k)-\frac{1}{128}(24k^4-128k^3+228k^2-200k+99)\zeta(2)t(2k-2)\\ &\qquad\qquad+\frac{3}{128}(2k-5)(4k+5)\zeta(4)t(2k-4),\\ &\sum\nolimits^{(3)}
\sum\limits_{1\leqslant i<j\leqslant 3}(k_i^3k_j+k_ik_j^3)t^{\star}(2k_1,2k_2,2k_3)=\frac{1}{1920}k(k-1)(96k^4-144k^3-144k^2\\ &\qquad\qquad +106k+85)t(2k)-\frac{1}{128}(k-1)(32k^2-110k+99)\zeta(2)t(2k-2)\\ &\qquad\qquad-\frac{3}{128}(k+5)(2k-5)\zeta(4)t(2k-4),\\ &\sum\nolimits^{(3)} (k_1^2k_2^2+k_1^2k_3^2+k_2^2k_3^2)t^{\star}(2k_1,2k_2,2k_3)=\frac{1}{3840}k(64k^5-160k^4+120k^3-124k\\ &\qquad\qquad +85)t(2k)-\frac{1}{768}(16k^4-144k^3+500k^2-672k+297)\zeta(2)t(2k-2)\\ &\qquad\qquad -\frac{3}{256}(2k-5)(4k-5)\zeta(4)t(2k-4). \end{align*} If $n=4$, we have \begin{align*} &\sum\nolimits^{(4)} t(2k_1,2k_2,2k_3,2k_4)=\frac{5}{64}t(2k)-\frac{3}{64}\zeta(2)t(2k-2),\\ &\sum\nolimits^{(4)} (k_1^2+k_2^2+k_3^2+k_4^2)t(2k_1,2k_2,2k_3,2k_4)=\frac{1}{128}k(10k-9)t(2k)\\ &\qquad\qquad-\frac{3}{128}(2k^2-k-2)\zeta(2)t(2k-2)+\frac{3}{256}(2k-5)\zeta(4)t(2k-4),\\ &\sum\nolimits^{(4)} \sum\limits_{1\leqslant i<j\leqslant 4}k_ik_j t(2k_1,2k_2,2k_3,2k_4)=\frac{9}{256}kt(2k)\\ &\qquad\qquad-\frac{3}{256}(k+2)\zeta(2)t(2k-2)-\frac{3}{512}(2k-5)\zeta(4)t(2k-4),\\ &\sum\nolimits^{(4)} (k_1^3+k_2^3+k_3^3+k_4^3)t(2k_1,2k_2,2k_3,2k_4)=\frac{1}{512}k(40k^2-54k+15)t(2k)\\ &\qquad\qquad-\frac{3}{512}(8k^3-6k^2-10k+3)\zeta(2)t(2k-2)+\frac{9}{512}k(2k-5)\zeta(4)t(2k-4),\\ &\sum\nolimits^{(4)} \sum\limits_{1\leqslant i<j\leqslant 4}(k_i^2k_j+k_ik_j^2) t(2k_1,2k_2,2k_3,2k_4)=\frac{3}{512}k(6k-5)t(2k)\\ &\qquad\qquad-\frac{3}{512}(2k^2+2k-3)\zeta(2)t(2k-2)-\frac{3}{512}k(2k-5)\zeta(4)t(2k-4),\\ &\sum\nolimits^{(4)}
(k_1k_2k_3+k_1k_2k_4+k_1k_3k_4+k_2k_3k_4) t(2k_1,2k_2,2k_3,2k_4)=\frac{5}{512}kt(2k)\\ &\qquad\qquad\qquad\qquad\qquad -\frac{1}{512}(2k+3)\zeta(2)t(2k-2) \end{align*} and \begin{align*} &\sum\nolimits^{(4)} t^{\star}(2k_1,2k_2,2k_3,2k_4)=\frac{1}{192}(4k-7)(8k^2-28k+15)t(2k)\\ &\qquad\qquad\qquad\qquad\qquad -\frac{1}{64}(4k-9)\zeta(2)t(2k-2),\\ &\sum\nolimits^{(4)} (k_1^2+k_2^2+k_3^2+k_4^2)t^{\star}(2k_1,2k_2,2k_3,2k_4)=\frac{1}{1920}k(128k^4-840k^3+1880k^2\\ &\qquad\qquad -1680k+527)t(2k)-\frac{1}{384}(32k^3-210k^2+469k-348)\zeta(2)t(2k-2)\\ &\qquad\qquad-\frac{1}{256}(2k-5)\zeta(4)t(2k-4),\\ &\sum\nolimits^{(4)} \sum\limits_{1\leqslant i<j\leqslant 4}k_ik_j t^{\star}(2k_1,2k_2,2k_3,2k_4)=\frac{1}{3840}k(192k^4-840k^3+680k^2+630k\\ &\qquad\qquad -527)t(2k)+\frac{1}{768}(8k^3-156k^2+469k-348)\zeta(2)t(2k-2)\\ &\qquad\qquad +\frac{1}{512}(2k-5)\zeta(4)t(2k-4),\\ &\sum\nolimits^{(4)} (k_1^3+k_2^3+k_3^3+k_4^3)t^{\star}(2k_1,2k_2,2k_3,2k_4)=\frac{1}{7680}k(256k^5-2016k^4+5640k^3\\ &\qquad\qquad -6720k^2+3464k-609)t(2k)-\frac{1}{512}(32k^4-232k^3+534k^2-344k\\ &\qquad\qquad -69)\zeta(2)t(2k-2)+\frac{3}{512}(2k-5)(5k-26)\zeta(4)t(2k-4),\\ &\sum\nolimits^{(4)} \sum\limits_{1\leqslant i<j\leqslant 4}(k_i^2k_j+k_ik_j^2) t^{\star}(2k_1,2k_2,2k_3,2k_4)=\frac{1}{7680}k(256k^5-1344k^4+1880k^3\\ &\qquad\qquad -1356k+609)t(2k)-\frac{1}{1536}(32k^4-144k^3+274k^2-360k\\ &\qquad\qquad +207)\zeta(2)t(2k-2)-\frac{1}{512}(2k-5)(17k-78)\zeta(4)t(2k-4),\\ &\sum\nolimits^{(4)} (k_1k_2k_3+k_1k_2k_4+k_1k_3k_4+k_2k_3k_4) t^{\star}(2k_1,2k_2,2k_3,2k_4)=\frac{1}{23040}k(128k^5-336k^4\\
&\qquad\qquad -520k^3+1260k^2+302k-609)t(2k)+\frac{1}{1536}(16k^4-152k^3+404k^2-352k\\ &\qquad\qquad +69)\zeta(2)t(2k-2)+\frac{1}{256}(2k-5)(3k-13)\zeta(4)t(2k-4). \end{align*} \noindent {\bf Acknowledgments.} The authors express their deep gratitude to Professor Masanobu Kaneko for valuable discussions and comments. The second author is supported by the China Scholarship Council (No. 201806310063). \end{document}
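The weighted sum formulas above are exact identities, so they can be spot-checked numerically. The sketch below assumes the standard conventions for (multiple) $t$-values, $t(s)=\sum_{n\geq 1\text{ odd}} n^{-s}$ and $t(s_1,s_2)=\sum_{n_1>n_2\geq 1\text{ odd}} n_1^{-s_1}n_2^{-s_2}$, and verifies the depth-two identity $\sum^{(2)} t(2k_1,2k_2)=\frac14 t(2k)$ for $k=2,3$; the truncation points and function names are ours.

```python
from math import pi

def t1(s, N=200_001):
    # single t-value t(s): sum over odd n < N of 1/n^s (truncated)
    return sum(1.0 / n**s for n in range(1, N, 2))

def t2(s1, s2, N=4001):
    # double t-value t(s1, s2): sum over odd n1 > n2 >= 1 with n1 < N
    total, inner = 0.0, 0.0
    for n in range(1, N, 2):
        total += inner / n**s1   # inner = sum over odd n2 < n of 1/n2^s2
        inner += 1.0 / n**s2
    return total

# sanity check: t(2k) = (1 - 2^(-2k)) zeta(2k), e.g. t(4) = pi^4/96
assert abs(t1(4) - pi**4 / 96) < 1e-12

# k = 2: the only composition is (k_1, k_2) = (1, 1), so t(2,2) = t(4)/4
assert abs(t2(2, 2) - t1(4) / 4) < 1e-3

# k = 3: t(2,4) + t(4,2) = t(6)/4
assert abs(t2(2, 4) + t2(4, 2) - t1(6) / 4) < 1e-3
```

The $k=2$ case also follows directly from the stuffle relation $t(2)^2 = 2t(2,2) + t(4)$ together with $t(2)=\pi^2/8$ and $t(4)=\pi^4/96$.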
\begin{document} \maketitle \begin{abstract} We prove a formula for the motive of the stack of vector bundles of fixed rank and degree over a smooth projective curve in Voevodsky's triangulated category of mixed motives with rational coefficients. \end{abstract} \tableofcontents \section{Introduction} Let $\Bun_{n,d}$ denote the moduli stack of rank $n$, degree $d$ vector bundles on a smooth projective geometrically connected curve $C$ of genus $g$ over a field $k$. In this paper, we prove the following formula for the motive of $\Bun_{n,d}$ in Voevodsky's triangulated category $\mathrm{DM}(k):=\mathrm{DM}(k,\mathbb{Q})$ of mixed motives over $k$ with $\mathbb{Q}$-coefficients. \begin{thm}\label{main thm} Suppose that $C(k) \neq \emptyset$; then in $\mathrm{DM}(k,\mathbb{Q})$, we have \[M(\Bun_{n,d}) \simeq M(\Jac(C)) \otimes M(B\GG_m) \otimes \bigotimes_{i=1}^{n-1} Z(C, \mathbb{Q}\{i\}),\] where $Z(C,\mathbb{Q}\{i\}):=\bigoplus_{j=0}^{\infty} M(C^{(j)})\otimes \mathbb{Q}\{ij\}$ is a motivic zeta function and $\mathbb{Q}\{i\} := \mathbb{Q}(i)[2i]$. \end{thm} In particular, this implies a decomposition on Chow groups and $\ell$-adic cohomology and, as explained below, this formula is compatible with previous cohomological descriptions of $\Bun_{n,d}$.
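Although point counts play no role in the argument, the shape of this formula can be spot-checked against stacky point counts over $\mathbb{F}_q$ in the simplest nontrivial case $C=\mathbb{P}^1$, $n=2$, $d=0$, where every bundle is $\mathcal{O}(a)\oplus\mathcal{O}(-a)$ and the count $\sum_E 1/\#\Aut(E)$ can be computed by hand. The sketch below (the function names and the closed-form tail are ours) compares this direct count with the product suggested by the compactly supported version of the formula (stated as a corollary later in this introduction): in genus $0$ the Jacobian contributes $1$, $M^c(B\GG_m)$ counts as $1/(q-1)$, the Tate twist as $q^{-3}$, and $Z(C,\mathbb{Q}\{-2\})$ as $Z_{\mathbb{P}^1}(q^{-2})$.

```python
from fractions import Fraction

def direct_count(q):
    # Stacky count of Bun_{2,0}(F_q) for C = P^1: every rank-2 bundle is
    # O(a) + O(-a) with a >= 0, weighted by 1/#Aut.
    q = Fraction(q)
    gl2 = (q**2 - 1) * (q**2 - q)          # a = 0: Aut = GL_2(F_q)
    # a >= 1: #Aut = (q-1)^2 q^(2a+1); geometric series summed in closed form
    tail = 1 / ((q - 1)**2 * q * (q**2 - 1))
    return 1 / gl2 + tail

def predicted_count(q):
    # Counting shadow of the formula: |Jac| = 1 in genus 0, M^c(B G_m) counts
    # as 1/(q-1), the twist {-3} as q^(-3), and Z(C, Q{-2}) as Z_{P^1}(q^(-2)).
    q = Fraction(q)
    zeta = 1 / ((1 - q**-2) * (1 - q**-1))  # Z_{P^1}(u) = 1/((1-u)(1-qu)) at u = q^(-2)
    return q**-3 / (q - 1) * zeta

for q in (2, 3, 4, 5, 7, 8, 9):
    assert direct_count(q) == predicted_count(q)
```

Both sides simplify to $1/((q-1)^2(q^2-1))$, consistent with Harder's formula for the stacky point count.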
This paper is a continuation of our previous work \cite{HPL} in which we define and study the motive $M(\Bun_{n,d}) \in \mathrm{DM}(k,R)$ for any coefficient ring $R$ (provided the exponential characteristic of $k$ is invertible in $R$); more generally, we introduce there the notion of an exhaustive stack and define motives of smooth exhaustive stacks by generalising a construction of Totaro for quotient stacks \cite{totaro} (see \cite[Definitions 2.15 and 2.17]{HPL} for details). \subsection{Overview of our previous results} In \cite[Theorem 3.5]{HPL}, we work with a general coefficient ring $R$ (for which the exponential characteristic is invertible) and give the following description of the motive of the stack $\Bun_{n,d}$ in terms of smooth projective Quot schemes by following a geometric argument for computing the $\ell$-adic cohomology of this stack in \cite{BGL}. \begin{thm}\label{thm old main} For any effective divisor $D>0$ on $C$, we have in $\mathrm{DM}(k,R)$ \[M(\Bun_{n,d})\simeq \hocolim_{l\in\NN} M(\Div_{n,d}(lD)),\] where $\Div_{n,d}(D)=\{ \mathcal{E} \subset \cO_C(D)^{\oplus n} : \rk(\mathcal{E}) =n, \deg(\mathcal{E}) = d\}$ is a smooth Quot scheme. \end{thm} Our approach in \cite{HPL} to describing the motives $M(\Div_{n,d}(lD))$ is to use Bia{\l}ynicki-Birula decompositions \cite{BB_original} associated to an action of a generic one-parameter subgroup $\GG_m\subset \GL_n$ on these Quot schemes, whose fixed loci are disjoint unions of products of symmetric powers of $C$.
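For orientation, in the simplest case of the standard $\GG_m$-action on $\PP^n$, whose fixed points are isolated, the motivic Bia{\l}ynicki-Birula decomposition (available for Chow motives by work of del Ba\~{n}o and Brosnan) reduces to the cell decomposition
\[
M(\PP^n)\;\simeq\;\bigoplus_{i=0}^{n}\mathbb{Q}\{i\},
\]
where each fixed point contributes a Tate twist by the dimension of its attracting (or repelling, depending on conventions) cell. For the Quot schemes above the fixed components are positive-dimensional, so each component $F$ contributes a twisted motive $M(F)\{m_F\}$ rather than a single Tate motive.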
To use these decompositions to compute the motive of $\Bun_{n,d}$, one needs to understand the behaviour of the transition maps $i_l: \Div_{n,d}(lD) \hookrightarrow \Div_{n,d}((l+1)D)$ in the inductive system in Theorem \ref{thm old main} with respect to the motivic Bia{\l}ynicki-Birula decompositions; this is very complicated, as although the closed immersion $i_l$ is $\GG_m$-equivariant, the closed subscheme $\Div_{n,d}(lD) \hookrightarrow \Div_{n,d}((l+1)D)$ does not intersect the Bia{\l}ynicki-Birula strata transversally. We conjecture a precise description of these transition maps \cite[Conjecture 3.9]{HPL} and show that the formula for the motive of $\Bun_{n,d}$ appearing in Theorem \ref{main thm} follows from this conjectural description of the transition maps. \subsection{Summary of the results and methods in this paper} In this paper, we prove the conjectural formula in \cite{HPL} under the assumption that $R = \mathbb{Q}$. The main idea is to replace the Quot schemes with Flag-Quot schemes, which are generalisations of Quot schemes that allow flags of sheaves, and then to describe the transition maps using these Flag-Quot schemes without using Bia{\l}ynicki-Birula decompositions. The idea to use Flag-Quot schemes was inspired by a result of Laumon in \cite{laumon} and its application in a paper of Heinloth to study the cohomology of the moduli space of Higgs bundles using Hecke modification stacks \cite{Heinloth_LaumonBDay}. To prove Theorem \ref{main thm}, our starting point is Theorem \ref{thm old main}, where, as we assume that $C$ has a rational point $x$, we can take the divisor $D := x$ and write $\Div_{n,d}(l):= \Div_{n,d}(lx)$.
We replace the Quot schemes $\Div_{n,d}(l)$ with smooth projective Flag-Quot schemes \[ \FDiv_{n,d}(l) = \{ \mathcal{E}_{nl -d} \subsetneq \cdots \subsetneq \mathcal{E}_1 \subsetneq \mathcal{E}_{0} = \cO_C(lx)^{\oplus n} : \rk(\mathcal{E}_i) = n \text{ and } \deg(\mathcal{E}_i) = nl-i \}. \] The natural map $ \FDiv_{n,d}(l) \ra \Div_{n,d}(l)$ is small and is an $S_{nl-d}$-principal bundle over the open subset consisting of subsheaves $\mathcal{E} \subset \cO_C(lx)^{\oplus n}$ whose torsion quotient is supported at $nl-d$ distinct points. Using these facts, we relate the motives of these two varieties as follows. \begin{thm}\label{thm intro FDiv} There is an induced $S_{nl-d}$-action on $M(\FDiv_{n,d}(l))$ and isomorphisms \[M(\Div_{n,d}(l)) \simeq M(\FDiv_{n,d}(l))^{S_{nl-d}} \simeq (M(C \times \PP^{n-1})^{\otimes nl-d})^{S_{nl-d}} \simeq \Sym^{nl-d}(M(C \times \PP^{n-1})).\] \end{thm} An isomorphism $M(\Div_{n,d}(l)) \simeq \Sym^{nl-d}(M(C \times \PP^{n-1}))$ is constructed by del Ba\~{n}o \cite[Theorem 4.2]{Del_Bano_motives_moduli} using associated motivic Bia{\l}ynicki-Birula decompositions (see also \cite[$\S$3.2]{HPL}). However, in del Ba\~{n}o's description, we do not understand the transition maps $M(i_l)$.
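The symmetric-power description has a concrete counting shadow over a finite field. With rational coefficients $M(\Sym^m X)\simeq\Sym^m M(X)$, and for $C=\PP^1$, $n=2$ the variety $X=C\times\PP^{n-1}=\PP^1\times\PP^1$ is pure Tate with counting polynomial $1+2q+q^2$, so $|\Sym^m X(\mathbb{F}_q)|$ computed from the Weil zeta function $Z_X(t)=\exp\big(\sum_{r\geq 1}|X(\mathbb{F}_{q^r})|t^r/r\big)$ must agree with the coefficients of $1/((1-t)(1-qt)^2(1-q^2t))$. A quick sketch (function names are ours):

```python
from fractions import Fraction

def sym_counts_from_zeta(q, M):
    # |Sym^m X(F_q)| for m <= M via Z_X(t) = exp(sum_r N_r t^r / r),
    # with X = P^1 x P^1, so N_r = (q^r + 1)^2; uses m z_m = sum_r N_r z_{m-r}.
    z = [Fraction(1)]
    for m in range(1, M + 1):
        s = sum(Fraction((q**r + 1)**2) * z[m - r] for r in range(1, m + 1))
        z.append(s / m)
    return z

def sym_counts_from_tate(q, M):
    # X is pure Tate with counting polynomial 1 + 2q + q^2, so
    # sum_m |Sym^m X| t^m = 1/((1-t)(1-qt)^2(1-q^2 t)); expand the product.
    coeffs = [Fraction(1)] + [Fraction(0)] * M
    for a in (0, 1, 1, 2):  # one geometric factor 1/(1 - q^a t) per Tate summand
        new = [Fraction(0)] * (M + 1)
        for m in range(M + 1):
            new[m] = coeffs[m] + (q**a * new[m - 1] if m > 0 else 0)
        coeffs = new
    return coeffs

q, M = 3, 6
assert sym_counts_from_zeta(q, M) == sym_counts_from_tate(q, M)
```

The agreement is an exact identity of power series (Newton's identities relating power sums to complete homogeneous symmetric functions), not merely a numerical coincidence.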
In fact, we deduce Theorem \ref{thm intro FDiv} as a special case of a more general result (Theorem \ref{thm Sl action on THecke}), where we replace $\cO_C(lx)^{\oplus n} \ra C/k$ with a family of vector bundles $\mathcal{E} \ra T \times C/T$ parametrised by a smooth $k$-scheme $T$ and then study the motives of schemes of (iterated) Hecke correspondences as (Flag-)Quot schemes over $T$. This work was inspired by a beautiful description of the cohomology of these schemes due to Heinloth (see the proof of \cite[Proposition 11]{Heinloth_LaumonBDay}, which uses ideas of Laumon \cite[Theorem 3.3.1]{laumon}). In fact, we lift Heinloth's cohomological description of schemes of (iterated) Hecke correspondences to $\mathrm{DM}(k)$. To prove this result, in $\S\ref{sec small maps}$ we study the invariant piece of a motive with a finite group action, which is why we need to work with rational coefficients; the main result is Theorem \ref{thm:actions}, which states that for a small proper map $f : X \twoheadrightarrow Y$ of smooth projective $k$-varieties which is a principal $G$-bundle on the locus with finite fibres, we have an isomorphism $M(X)^G \cong M(Y)$. In $\S$\ref{sec motives Hecke schemes}, we study the geometry and motives of schemes of (iterated) Hecke correspondences in order to prove Theorem \ref{thm Sl action on THecke}. Furthermore, we obtain a formula for the motive of the Quot scheme of length $l$ torsion quotients of a rank $n$ locally free sheaf $\mathcal{E}$ on $C$, which is independent of $\mathcal{E}$ (Corollary \ref{cor motive torsion quot}); this complements recent analogous results in the Grothendieck ring of varieties \cite{BFP,Ricolfi}.
In $\S$\ref{sec lift trans}, we lift the transition maps $i_l : \Div_{n,d}(l) \ra \Div_{n,d}(l+1)$ to the schemes $\FDiv_{n,d}(l)$. It turns out to be much simpler to describe the motivic behaviour of the lifts of the transition maps to Flag-Quot schemes, as these are iterated projective bundles over products of the curve. By symmetrising this description, we deduce the corresponding behaviour for the maps $M(i_l)$, which enables us to prove Theorem \ref{main thm} in $\S$\ref{sec proof1}. Finally, in $\S$\ref{sec proof2}, we give a second proof of this formula for $M(\Bun_{n,d})$ which follows more closely the ideas in our previous work \cite{HPL}. It remains an interesting open question whether Theorem \ref{main thm} holds integrally. One may expect this to be the case, as Atiyah and Bott \cite{atiyah_bott} gave an integral description of the cohomology of $\Bun_{n,d}$. In fact, in future work we plan to remove the assumption that $C$ has a rational point by giving a more canonical construction of the isomorphism in Theorem \ref{main thm} inspired by \cite{atiyah_bott}. By Poincar\'{e} duality, we obtain a formula for the compactly supported motive $M^c(\Bun_{n,d})$, which compares nicely with previous results, such as the Behrend--Dhillon formula for the virtual class of $\Bun_{n,d}$ in the Grothendieck ring of varieties \cite{BD} and Harder's formula for the stacky point count over a finite field \cite{Harder} (see the discussion in \cite[$\S$4.2]{HPL}). \begin{cor} Assume $C(k) \neq \emptyset$; then the compactly supported motive of $\Bun_{n,d}$ is given by \[M^c(\Bun_{n,d}) \simeq M(\Jac C)\otimes M^c(B\GG_m)\{(n^2 -1 )(g-1) \} \otimes \bigotimes_{i=2}^n Z(C, \mathbb{Q}\{-i\}).
\] \end{cor} \subsection{Background on motives} Let us briefly recall some basic properties of $\mathrm{DM}(k):=\mathrm{DM}(k,\mathbb{Q})$. It is a monoidal $\mathbb{Q}$-linear triangulated category. To a separated scheme $X$ of finite type over $k$, we can associate a motive $M(X)\in \mathrm{DM}(k)$, which is covariantly functorial in $X$ and behaves like a homology theory. The motive $M(\Spec k):=\mathbb{Q}\{0\}$ is the unit for the monoidal structure, and there are Tate motives $\mathbb{Q}\{n\}:=\mathbb{Q}(n)[2n] \in \mathrm{DM}(k)$ for all $n\in\ZZ$. For any motive $M$ and $n \in \ZZ$, we write $M\{n\}:=M\otimes \mathbb{Q}\{n\}$. In $\mathrm{DM}(k)$, there are K\"{u}nneth isomorphisms, $\mathbb{A}^1$-homotopy invariance, Gysin distinguished triangles, projective bundle formulae and Poincar\'{e} duality isomorphisms, as well as realisation functors (to compare with Betti, de Rham and $\ell$-adic cohomology) and descriptions of Chow groups as homomorphism groups in $\mathrm{DM}(k)$. For a precise statement of these results, we refer the reader to the summary in \cite[$\S$2]{HPL}. In this paper, unlike in \cite{HPL}, we need to use categories of relative motives over varying base schemes, and the associated ``six operations'' formalism.
We only need a small portion of the machinery, which we summarise here; for more details, see \cite[\S 3]{Ayoub_Survey}. Given a base scheme $S$, which in this paper will always be separated and of finite type over the field $k$, there is a monoidal $\mathbb{Q}$-linear triangulated category $\mathrm{DM}(S)$, which we take to be the category $\mathrm{DA}^{\mathrm{\acute{e}t}}(S,\mathbb{Q})$ of \cite{Ayoub_Survey} and \cite[\S 3]{Ayoub_etale}. The monoidal unit of $\mathrm{DM}(S)$ is denoted by $\mathbb{Q}_S$ (in particular, $\mathbb{Q}_k:=\mathbb{Q}\{0\}\in\mathrm{DM}(k)$). Given a morphism $f:S\ra T$ between two such base schemes (so that $f$ is automatically separated and of finite type), there are two adjunctions \[ f^*:\mathrm{DM}(T)\leftrightarrows \mathrm{DM}(S):f_* \quad \quad \text{and} \quad \quad f_!:\mathrm{DM}(S)\leftrightarrows \mathrm{DM}(T):f^! \] which satisfy the same formal properties as the corresponding adjunctions $(f^*,Rf_*)$ and $(Rf_!,f^!)$ in the setting of derived categories of $\ell$-adic sheaves. In particular, we have natural isomorphisms $f_*\simeq f_!$ for $f$ proper, and $f^*\simeq f^!$ for $f$ \'etale. We also have proper base change (in the general form of \cite[Theorem 3.9]{Ayoub_Survey}) and a purity isomorphism $f^!\simeq f^*(-)\{d\}$ for $f$ smooth of relative dimension $d$.
Many constructions in $\mathrm{DM}(k)$ have an alternative description in terms of the six operations formalism: for a $k$-scheme $X$ with structure map $\pi_X$, we have \[ M(X)\simeq \pi_{X!}\pi_X^!\mathbb{Q}_k\quad \quad\text{and} \quad \quad M^c(X)\simeq \pi_{X*}\pi_X^!\mathbb{Q}_k. \] \noindent \textbf{Acknowledgements.} We thank Elden Elmanto, Michael Gr\"{o}chenig, Jochen Heinloth, Frances Kirwan and Marc Levine for useful discussions. \section{Small maps and induced actions on motives}\label{sec small maps} \subsection{Properties of small maps} Let us recall the following definition. \begin{defn}\label{def:small} Let $X$ and $Y$ be algebraic varieties\footnote{Here, as in the rest of the paper, a variety means a finite type separated scheme over $k$, not necessarily irreducible.} over $k$, and let $f:X\ra Y$ be a proper morphism. For $\delta\in\NN$, define \[ Y_{f,\delta}:=\{y\in Y \mid \dim(f^{-1}(y))=\delta\}. \] This is a locally closed subscheme of $Y$, and so its codimension in $Y$ makes sense. We say $f$ is \begin{enumerate}[label={\upshape(\roman*)}] \item semismall if $\codim_{Y}(Y_{f,\delta})\geq 2\delta$ for all $\delta\geq 0$; \item small if $f$ is semismall and $\codim_{Y}(Y_{f,\delta})> 2\delta$ for all $\delta> 0$. \end{enumerate} \end{defn} \begin{rmk} This formulation in terms of codimension also makes sense when $X,Y$ are algebraic stacks and $f:X\ra Y$ is a proper representable morphism. \end{rmk} \begin{lemma}\label{lem:small_bc} Proper (semi)small morphisms are stable under flat base change. \end{lemma} \begin{proof} Let $f:X\ra Y$ be a proper morphism and $g:Z\ra Y$ be a flat morphism. Write $\tilde{f}:X\times_{Y}Z\ra Z$ for the base change of $f$. With the notation of Definition \ref{def:small}, for all $\delta\in \NN$ we have $Z_{\tilde{f},\delta}=g^{-1}(Y_{f,\delta})$.
Since $g$ is flat, we deduce that \[ \codim_{Z}(Z_{\tilde{f},\delta}) = \codim_Z (g^{-1}(Y_{f,\delta}))\geq \codim_Y (Y_{f,\delta}) \] which implies the result. \end{proof} \begin{rmk} This property also holds for proper representable (semi-)small morphisms between algebraic stacks, with the same proof. \end{rmk} The key property of (semi-)small morphisms for this paper is the following lemma. \begin{lemma}\cite[Proposition 2.1.1, Remark 2.1.2]{dCM_small}\label{lem:small_dim} Let $f:X\ra Y$ be a proper morphism of varieties. For $\delta\in \NN$, let $Y_{f,\delta}$ be as in Definition \ref{def:small} and $X_{f,\delta}:=f^{-1}(Y_{f,\delta})$. \begin{enumerate}[label={\upshape(\roman*)}] \item \label{dim} The morphism $f$ is semismall if and only if \[ \dim(X\times_{Y}X)\leq \dim(Y). \] \item \label{dim_surj} If $f$ is semismall and surjective, then $\dim(X\times_{Y}X)=\dim(X).$ \item \label{comp} If $f$ is small and surjective, then the irreducible components of dimension $\dim(X)$ of $X\times_{Y}X$ are the closures of the irreducible components of $X_{f,0}\times_{Y_{f,0}}X_{f,0}$ and in particular dominate $Y$. \end{enumerate} \end{lemma} \subsection{Endomorphisms of motives of small maps} Given a morphism of schemes $f:X\ra Y$, we denote by $\Aut_Y(X)$ the group of automorphisms of $X$ as a $Y$-scheme. For a $k$-scheme $X$ and an integer $i\in\NN$, we denote by $Z_i(X)$ the group of $i$-dimensional cycles with rational coefficients on $X$, and $\mathrm{CH}_i(X)$ the $i$-th Chow group, i.e., the quotient of $Z_i(X)$ by rational equivalence. \begin{prop}\label{prop:relative_action} Let $f:X\ra Y$ be a proper morphism with $X$ smooth equidimensional of dimension $d\in\NN$. 
Then there exists an isomorphism \[ \phi_{f}:\mathrm{CH}_{d}(X\times_{Y}X)\simeq \End_{\mathrm{DM}(Y)}(f_{*}\mathbb{Q}_{X}) \] such that, if $e:U\hookrightarrow Y$ is an \'etale morphism, $\tilde{e}:V\hookrightarrow X$ is its base change along $f$ and $\tilde{f}:V\ra U$ is the base change of $f$ along $e$, we have a commutative diagram \[ \xymatrix{ \mathrm{CH}_d(X\times_Y X) \ar[d]^{(\tilde{e}\times\tilde{e})^*} \ar[r]_{\phi_f} & \End_{\mathrm{DM}(Y)}(f_{*}\mathbb{Q}_X) \ar[d]^{e^*} \\ \mathrm{CH}_d(V\times_U V) \ar[r]_{\phi_{\tilde{f}}} & \End_{\mathrm{DM}(U)}(\tilde{f}_{*}\mathbb{Q}_{V}). } \] \end{prop} \begin{proof} Write $p_1,p_2:X\times_Y X\ra X$ for the two projections. For a $k$-scheme $Z$, write $\pi_Z:Z\ra \Spec(k)$ for its structure map. We start with the isomorphisms \[ \mathrm{CH}_d(X\times_Y X)\simeq \Hom_{\mathrm{DM}(k)}(\mathbb{Q}\{d\},M^c(X\times_Y X))\simeq \Hom_{\mathrm{DM}(X\times_Y X)}(\mathbb{Q}_{X\times_Y X},\pi^!_{X\times_Y X}\mathbb{Q}\{-d\}) \] where we have used the description of Chow groups for general varieties in $\mathrm{DM}(k)$, the formula for $M^c$ in terms of the six operations and the adjunction $(\pi_{X\times_Y X}^*,\pi_{X\times_Y X *})$.
We then write \begin{flalign*} \Hom_{\mathrm{DM}(X\times_Y X)}(\mathbb{Q}_{X\times_Y X},\pi^!_{X\times_Y X}\mathbb{Q}_k\{-d\}) & \simeq \Hom_{\mathrm{DM}(X\times_Y X)}(\mathbb{Q}_{X\times_Y X},p_1^!\pi_{X}^{!}\mathbb{Q}_k\{-d\}) \\ & \simeq \Hom_{\mathrm{DM}(X\times_Y X)}(\mathbb{Q}_{X\times_Y X}, p_1^!\mathbb{Q}_X) \\ & \simeq \Hom_{\mathrm{DM}(X)}(\mathbb{Q}_X,p_{1*}p_1^!\mathbb{Q}_{X}) \\ & \simeq \Hom_{\mathrm{DM}(X)}(\mathbb{Q}_{X},f^{!}f_{*}\mathbb{Q}_{X}) \\ & \simeq \End_{\mathrm{DM}(Y)}(f_{*}\mathbb{Q}_{X}) \end{flalign*} where the first isomorphism follows from $\pi_{X\times_{Y}X}=\pi_{X} \circ p_{1}$, the second follows from relative purity for the smooth morphism $\pi_{X}$, the third is the adjunction $(p_{1}^{*},p_{1*})$, the fourth is proper base change and the fifth uses the adjunction $(f_{!},f^{!})$ and the properness of $f$. The isomorphism $\phi_f$ is defined as the composition of the sequence of isomorphisms above. Its compatibility with pullback by an \'etale morphism $e$ is a matter of carefully going through the construction and using the natural isomorphism $e^!\simeq e^*$ and proper base change.
\end{proof} \begin{rmk} Since the target of $\phi_f$ is clearly a $\mathbb{Q}$-algebra, the proposition endows $\mathrm{CH}_d(X\times_Y X)$ with a $\mathbb{Q}$-algebra structure. The multiplication can be described using refined Gysin morphisms, but we will not need this. \end{rmk} \begin{prop}\label{prop:end_ring_iso} Let $f:X\ra Y$ be a surjective proper small morphism with $X$ and $Y$ smooth varieties. Let $f^\circ:X^\circ\ra Y^\circ$ be the restriction of $f$ to the locus with finite fibers, and $j:Y^\circ\ra Y$ the corresponding open immersion. Then the natural map \[ j^*:\End_{\mathrm{DM}(Y)}(f_!f^!\mathbb{Q}_Y)\ra \End_{\mathrm{DM}(Y^\circ)}(f^\circ_!f^{\circ !}\mathbb{Q}_{Y^\circ}) \] is an isomorphism of rings. \end{prop} \begin{proof} First, let us explain how $j^*$ is defined. Write $\tilde{\jmath}:X^\circ\ra X$ for the corresponding open immersion of $X$. Then we have \[j^*f_!f^!\mathbb{Q}_Y\simeq f^\circ_!\tilde{\jmath}^*f^!\mathbb{Q}_Y\simeq f^\circ_!\tilde{\jmath}^!f^!\mathbb{Q}_Y\simeq f^\circ_!(f^\circ)^!j^!\mathbb{Q}_Y\simeq f^\circ_!(f^\circ)^!j^*\mathbb{Q}_Y\simeq f^\circ_!(f^\circ)^!\mathbb{Q}_{Y^\circ}\] where we have used proper base change, compatibility of $(-)^{!}$ with composition and the fact that $e^!\simeq e^*$ for $e$ \'etale.
Then $j^*$ is defined as \[ \End_{\mathrm{DM}(Y)}(f_!f^!\mathbb{Q}_Y)\stackrel{j^*}{\ra}\End_{\mathrm{DM}(Y^\circ)}(j^*f_!f^!\mathbb{Q}_Y) \simeq \End_{\mathrm{DM}(Y^\circ)}(f^\circ_!f^{\circ !}\mathbb{Q}_{Y^\circ}). \] The map $j^*$ is clearly compatible with addition and composition, hence is a homomorphism of rings. It remains to show that it is bijective. Since $X$ and $Y$ are both smooth of dimension $d$ over $k$, we can use purity isomorphisms to obtain an isomorphism \begin{equation} \label{f_purity} f^{!}\mathbb{Q}_{Y}\simeq f^{!}\pi_{Y}^{!}\mathbb{Q}_{k}\{-d\}\simeq \pi_{X}^{!}\mathbb{Q}_{k}\{-d\}\simeq \mathbb{Q}_{X}. \end{equation} We deduce that $f_*\mathbb{Q}_X\simeq f_*f^!\mathbb{Q}_Y\simeq f_!f^!\mathbb{Q}_Y$, and similarly that $f^\circ_*\mathbb{Q}_{X^\circ}\simeq f^\circ_!f^{\circ !}\mathbb{Q}_{Y^\circ}$. These two isomorphisms are compatible with restriction along $j$.
Combining this observation with Proposition \ref{prop:relative_action}, we have the commutative diagram with horizontal isomorphisms \[ \xymatrix{ \mathrm{CH}_d(X\times_{Y} X) \ar[d]_{(j\times j)^*} \ar[r]^{\sim}_{\phi_{f}} & \End_{\mathrm{DM}(Y)}(f_*\mathbb{Q}_X) \ar[d]_{j^*} \ar[r]^{\sim} & \End_{\mathrm{DM}(Y)}(f_!f^{ !}\mathbb{Q}_{Y}) \ar[d]_{j^*} \\ \mathrm{CH}_d(X^\circ \times_{Y^\circ} X^\circ) \ar[r]^{\sim}_{\phi_{f^\circ}} & \End_{\mathrm{DM}(Y^\circ)}(f^\circ_*\mathbb{Q}_{X^\circ}) \ar[r]^{\sim} & \End_{\mathrm{DM}(Y^\circ)}(f^\circ_!f^{\circ !}\mathbb{Q}_{Y^\circ}). } \] On a variety of dimension $d$, we have $\mathrm{CH}_d=Z_d$, i.e., rational equivalence is trivial on top-dimensional cycles. By Lemma \ref{lem:small_dim} \ref{dim_surj}, this implies $\mathrm{CH}_d(X\times_Y X)\simeq Z_d(X\times_Y X)$ and also $\mathrm{CH}_d(X^\circ\times_{Y^\circ}X^\circ)\simeq Z_d(X^\circ\times_{Y^\circ}X^\circ)$. By Lemma \ref{lem:small_dim} \ref{comp}, the restriction morphism $Z_d(X\times_Y X)\ra Z_d(X^\circ\times_{Y^\circ} X^\circ)$ is a bijection. We deduce that the left vertical map in the diagram above is a bijection, and conclude that the right vertical map is a bijection. \end{proof} \begin{lemma}\label{lem:psi} Let $f:X\ra Y$ be a finite type separated morphism with $Y$ smooth.
Then there exists a morphism of $\mathbb{Q}$-algebras \[ \psi_f:\End_{\mathrm{DM}(Y)}(f_!f^!\mathbb{Q}_Y)\ra \End_{\mathrm{DM}(k)}(M(X)) \] such that, for $e:U\hookrightarrow Y$ an \'etale morphism, $\tilde{e}:V\hookrightarrow X$ its base change along $f$ and $\tilde{f}:V\ra U$ the base change of $f$ along $e$, we have a commutative diagram \[ \xymatrix{ \End_{\mathrm{DM}(Y)}(f_!f^!\mathbb{Q}_Y) \ar[d]^{e^*} \ar[r]_{\psi_f} & \End_{\mathrm{DM}(k)}(M(X)) \ar[d]^{e^*} \\ \End_{\mathrm{DM}(U)}(\tilde{f}_!\tilde{f}^!\mathbb{Q}_U) \ar[r]_{\psi_{\tilde{f}}} & \End_{\mathrm{DM}(k)}(M(V)). } \] \end{lemma} \begin{proof} Recall that, for $Z$ a smooth variety of dimension $e$ over $k$, we have a canonical purity isomorphism $\pi_{Z}^{!}\mathbb{Q}_{k}\simeq \mathbb{Q}_{Z}\{e\}$. By working with each connected component of $Y$ separately, we can assume that $Y$ is equidimensional of dimension $d$. We deduce that \[ M(X):= \pi_{X!}\pi^{!}_{X}\mathbb{Q}_{k}\simeq \pi_{Y!}f_{!}f^{!}\pi_{Y}^{!}\mathbb{Q}_{k}\simeq \pi_{Y!}f_{!}f^!\mathbb{Q}_{Y}\{d\} \] by using the purity isomorphism for the smooth morphism $\pi_Y$.
We define $\psi_f$ as the composition \[\End_{\mathrm{DM}(Y)}(f_!f^!\mathbb{Q}_Y)\stackrel{\pi_{Y!}(-)\{d\}}{\longrightarrow} \End_{\mathrm{DM}(k)}(\pi_{Y!}f_!f^!\mathbb{Q}_Y\{d\})\simeq \End_{\mathrm{DM}(k)}(M(X)).\] The compatibility with pullbacks by \'etale morphisms again follows easily from the natural isomorphism $e^!\simeq e^*$ for an \'etale morphism $e$. \end{proof} \subsection{Group actions on motives of small maps} Let $S$ be a scheme, $M\in\mathrm{DM}(S)$ a motive and $G$ a group. An action of $G$ on $M$ is a morphism of groups $a:G\ra \Aut_{\mathrm{DM}(S)}(M)$. In particular, given a morphism $f:X\ra Y$, we have an action $\Aut_Y(X)\ra \Aut_{\mathrm{DM}(Y)}(f_!f^!\mathbb{Q}_Y)$. Assuming further that $G$ is finite, let \[ \Pi_{a}:=\frac{1}{|G|}\sum_{g\in G}a(g)\in \End_{\mathrm{DM}(S)}(M) \] which makes sense since $\mathrm{DM}(S)$ is $\mathbb{Q}$-linear. Then $\Pi_{a}$ is idempotent, and since $\mathrm{DM}(S)$ is idempotent-complete we define the invariant motive $M^{G}\in\mathrm{DM}(S)$ as the image of $\Pi_{a}$. \begin{ex}\label{ex:sym} Important examples for this paper are motives of symmetric products. For a quasi-projective variety $X$ over $k$ and $n\in\NN$, we have a morphism $f:X^n\ra \Sym^n(X)$.
The symmetric group $S_n$ acts on $X^n$ over $\Sym^n(X)$, so that we get an induced action on $M(X^n)$ such that $M(f):M(X^n)\ra M(\Sym^n(X))$ factors via $M(X^n)^{S_n}\ra M(\Sym^n(X))$. Since $S_n$ acts transitively on the geometric fibers of $f$, this second morphism is an isomorphism $M(X^n)^{S_n}\simeq M(\Sym^n(X))$ by \cite[Corollaire 2.1.166]{Ayoub_these_1}. \end{ex} The main result of this section is a generalisation of the previous example in which there is no global group action on $X$ and $f$ is not necessarily finite but only small. \begin{thm}\label{thm:actions} Let $f:X\ra Y$ be a small surjective proper morphism between smooth connected varieties. Assume that the restriction $f^\circ:X^\circ\ra Y^\circ$ to the locus with finite fibers is a principal $G$-bundle. Then the action of $G$ on $M(X^\circ)$ extends to an action on $M(X)$ which induces an isomorphism $M(X)^{G}\simeq M(Y)$; moreover, we have a commutative diagram \[ \xymatrix{ M(X^\circ) \ar[r] \ar[d] & M(X^\circ)^{G} \ar[r]^{\sim} \ar[d] & M(Y^\circ)\ar[d] \\ M(X) \ar[r] & M(X)^{G} \ar[r]^{\sim} & M(Y). \\ } \] \end{thm} \begin{proof} Since $Y$ is connected, it is in particular equidimensional; write $d=\dim(X)=\dim(Y)$. Since $f^\circ:X^\circ\ra Y^\circ$ is a principal $G$-bundle, we have a morphism of groups $G\ra \Aut_{Y^\circ}(X^\circ)$. We deduce a morphism of groups $G\ra \Aut_{\mathrm{DM}(Y^\circ)}(f^\circ_!f^{\circ !}\mathbb{Q}_{Y^\circ})$. By Proposition \ref{prop:end_ring_iso}, this yields a morphism of groups $G\ra \Aut_{\mathrm{DM}(Y)}(f_!f^!\mathbb{Q}_Y)$.
We compose with the morphism $\psi_f$ of Lemma \ref{lem:psi} and get a morphism of groups $G\ra\Aut_{\mathrm{DM}(k)}(M(X))$, which is the required action. Let us check that the morphism $M(f):M(X)\ra M(Y)$ factors through $M(X)^G$. Given its construction, it suffices to show that the counit morphism $f_!f^!\mathbb{Q}_Y\ra \mathbb{Q}_Y$ factors through $(f_!f^!\mathbb{Q}_Y)^G$. For this, it suffices to show that, for any $g\in G$, the composition $f_!f^!\mathbb{Q}_Y\stackrel{g}{\ra}f_!f^!\mathbb{Q}_Y\ra \mathbb{Q}_Y$ coincides with the counit of the adjunction $(f_!,f^!)$. By the same adjunction, this amounts to comparing two maps $f^!\mathbb{Q}_Y\ra f^!\mathbb{Q}_Y$. By equation \eqref{f_purity}, we have $f^!\mathbb{Q}_Y\simeq \mathbb{Q}_X$.
By \cite[Proposition 11.1]{Ayoub_etale}, and using the fact that $X^\circ$ is dense in $X$, we have \[ \Hom_{\mathrm{DM}(X)}(\mathbb{Q}_X,\mathbb{Q}_X)\simeq \mathbb{Q}^{\pi_0(X)}\hookrightarrow \mathbb{Q}^{\pi_0(X^{\circ})}\simeq \Hom_{\mathrm{DM}(X^\circ)}(\mathbb{Q}_{X^\circ},\mathbb{Q}_{X^\circ}) \] hence we can check the required equality after restriction to $X^\circ$; that is, we must show that for any $g\in G$, the composition $f^\circ_!f^{\circ !}\mathbb{Q}_{Y^\circ}\stackrel{g}{\ra}f^\circ_!f^{\circ !}\mathbb{Q}_{Y^\circ}\ra \mathbb{Q}_{Y^\circ}$ coincides with the counit of the adjunction $(f^\circ_!,f^{\circ !})$. This is clear since $G$ acts through $\Aut_{Y^\circ}(X^\circ)$. By construction, to show that the induced map $M(X)^G\ra M(Y)$ is an isomorphism, it suffices to show that the morphism $(f_! f^!\mathbb{Q}_Y)^G\ra \mathbb{Q}_Y$ is an isomorphism. Let $\Pi_G\in \End_{\mathrm{DM}(Y)}(f_!f^!\mathbb{Q}_Y)$ denote the projector onto $(f_! f^!\mathbb{Q}_Y)^G$.
Since $X$ and $Y$ are smooth of the same dimension $d$, the purity isomorphisms yield an isomorphism $f^!\mathbb{Q}_Y\simeq \mathbb{Q}_X$ (equation \eqref{f_purity}). Moreover, this isomorphism is compatible with restriction to $Y^\circ$, in the sense that after applying $\tilde{\jmath}^!=\tilde{\jmath}^*$ for $\tilde{\jmath}:X^{\circ}\ra X$, it coincides with the simpler isomorphism $f^{\circ !}\mathbb{Q}_{Y^\circ}\simeq f^{\circ *}\mathbb{Q}_{Y^\circ}\simeq \mathbb{Q}_{X^\circ}$ (using that $f^\circ$ is \'etale). Consider the composition \[ \Pi':f_!f^!\mathbb{Q}_Y\stackrel{\eta_!}{\ra} \mathbb{Q}_Y\stackrel{\frac{1}{|G|}}{\ra}\mathbb{Q}_Y\stackrel{\epsilon_*}{\ra} f_*\mathbb{Q}_X\simeq f_!f^!\mathbb{Q}_Y \] where $\epsilon_*$ is the unit of the adjunction $(f^*,f_*)$ and $\eta_!$ is the counit of the adjunction $(f_!,f^!)$. By \cite[Lemme 2.1.165]{Ayoub_these_1}, we see that $j^{*}\Pi'$ is a projector which coincides with $j^{*}\Pi_{G}$. By the injectivity of $j^{*}$ (Proposition \ref{prop:end_ring_iso}), this implies that $\Pi'=\Pi_G$; thus $\Pi'$ is a projector, and to conclude it remains to show that the counit morphism $f_!f^!\mathbb{Q}_Y\ra \mathbb{Q}_Y$ identifies the image of $\Pi'$ with $\mathbb{Q}_Y$.
For this, it is clearly enough to show that the composition \[ \mathbb{Q}_Y\stackrel{\epsilon_*}{\ra} f_*\mathbb{Q}_X\simeq f_!f^!\mathbb{Q}_Y\stackrel{\eta_!}{\ra} \mathbb{Q}_Y \] coincides with multiplication by $|G|$. Since $Y$ and $Y^\circ$ are connected, by \cite[Proposition 11.1]{Ayoub_etale} we have \[\Hom_{\mathrm{DM}(Y)}(\mathbb{Q}_Y,\mathbb{Q}_Y)\simeq \mathbb{Q}\simeq \Hom_{\mathrm{DM}(Y^\circ)}(\mathbb{Q}_{Y^\circ},\mathbb{Q}_{Y^\circ})\] hence it is enough to check this after restriction to $Y^\circ$. The corresponding composition is \[ \mathbb{Q}_{Y^\circ}\stackrel{\epsilon_*}{\ra} f^\circ_*\mathbb{Q}_{X^\circ}\simeq f^\circ_!f^{\circ !}\mathbb{Q}_{Y^\circ}\stackrel{\eta_!}{\ra} \mathbb{Q}_{Y^\circ} \] which coincides with multiplication by $|G|$ by \cite[Lemme 2.1.165]{Ayoub_these_1}. \end{proof} \begin{rmk}\label{rmk:functoriality} Consider a commutative diagram \[ \xymatrix{ X \ar[r]^{h} \ar[d]_{f} & X' \ar[d]^{f'} \\ Y \ar[r]^{g} & Y' } \] with $f$ and $f'$ satisfying the assumptions of Theorem \ref{thm:actions} with groups $G$, $G'$.
If $g$ does not send the locus $Y^\circ$ into $(Y')^\circ$, it is not clear how to formulate conditions which would make the morphism $M(X)\ra M(X')$ equivariant with respect to some given homomorphism $G\ra G'$. However, in the application in $\S$\ref{sec formula}, we have an alternative description of the actions which makes a certain equivariance property clear (see Proposition \ref{prop THecke trans}). \end{rmk} \section{Motives of schemes of Hecke correspondences}\label{sec motives Hecke schemes} In this section, we introduce generalisations of the schemes of matrix divisors $\Div_{n,d}(D)$ and of their flag versions $\FDiv_{n,d}(D)$, and study their motives. The main result in this section is inspired by work of Laumon \cite{laumon} and Heinloth \cite{Heinloth_LaumonBDay}. \subsection{Definitions and basic properties} For a family $\mathcal{E}$ of vector bundles on $C$ parametrised by a $k$-scheme $T$, we write $\rk(\mathcal{E}) = n$ and $\deg(\mathcal{E}) = d$ if the fibrewise rank and degree of this family are $n$ and $d$ respectively.
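Before turning to Hecke schemes, we note that the invariant-motive construction $M^G$ of the previous subsection has a concrete linear-algebra shadow, which may help fix ideas. The following sketch (an illustration only; all names in it are ours, and it is not the categorical construction) takes $G=S_3$ acting on $\mathbb{Q}^3$ by permutation matrices: the average $\frac{1}{|G|}\sum_{g}a(g)$ is an idempotent whose image is the invariant subspace, just as $\Pi_a$ cuts out $M^G$.

```python
import numpy as np
from itertools import permutations

# Toy shadow of the averaging idempotent Pi_a (illustration only):
# G = S_3 acts on Q^3 by permutation matrices; indexing the rows of the
# identity matrix by a permutation p gives the matrix of p.
n = 3
perm_matrices = [np.eye(n)[list(p)] for p in permutations(range(n))]

# Pi = (1/|G|) * sum_g a(g), which makes sense over Q.
Pi = sum(perm_matrices) / len(perm_matrices)

# Pi is idempotent, so it splits off the "invariant" summand.
assert np.allclose(Pi @ Pi, Pi)

# Its image is exactly the invariants: rank 1, spanned by (1, 1, 1).
assert np.linalg.matrix_rank(Pi) == 1
assert np.allclose(Pi @ np.array([1.0, 1.0, 1.0]), [1.0, 1.0, 1.0])
```

Here the idempotent-completeness of $\mathrm{DM}(S)$ plays the role of the (automatic) existence of the image of a projector in linear algebra.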
\begin{defn} For $l \in \NN$ and a family $\mathcal{E}$ of rank $n$ degree $d$ vector bundles over $C$ parametrised by a $k$-scheme $T$, we define two $T$-schemes $\Hecke^l_{\mathcal{E}/T}$ and $\THecke^l_{\mathcal{E}/T}$ as follows: for $g: S \ra T$, the $S$-points of these schemes are given by \[ \Hecke^l_{\mathcal{E}/T} (S):= \left\{\phi : \cF \hookrightarrow (g \times \mathrm{id}_C)^*\mathcal{E} : \begin{array}{c} \cF \ra S \times C \text{ family of vector bundles on } C \\ \rk(\cF) = n, \deg(\cF)=d-l, \rk(\phi)=n \end{array} \right\} \] and \[ \THecke^l_{\mathcal{E}/T} (S):= \left\{ \cF_l \hookrightarrow \cF_{l-1} \hookrightarrow \cdots \hookrightarrow \cF_0 := (g \times \mathrm{id}_C)^*\mathcal{E} : \begin{array}{c} \cF_i \ra S \times C \text{ family of vector bundles} \\ \rk(\cF_i) = n, \deg(\cF_i)=d-i \\ \rk(\cF_i \ra \cF_{i-1})=n \text{ for } i = 1, \cdots, l \end{array} \right\}. \] We refer to $\Hecke^l_{\mathcal{E}/T}$ as the $T$-scheme of length $l$ Hecke correspondences of $\mathcal{E}$ and to $\THecke^l_{\mathcal{E}/T}$ as the $T$-scheme of $l$-iterated Hecke correspondences of $\mathcal{E}$. \end{defn} Let us first explain why these are both schemes over $T$.
The scheme of length $l$ Hecke correspondences $\Hecke^l_{\mathcal{E}/T}$ is the Quot scheme over $T$ \[\Hecke^l_{\mathcal{E}/T} = \quot_{T \times C /T}^{(0,l)}(\mathcal{E})\] parametrising families of quotients of $\mathcal{E}$ of rank $0$ and degree $l$, which is a projective $T$-scheme. Similarly, $\THecke^l_{\mathcal{E}/T}$ is a generalisation of a Quot scheme allowing flags of arbitrary length, called a Flag-Quot or Drap scheme (see \cite[Appendix 2A]{HL}); thus $\THecke^l_{\mathcal{E}/T}$ is also projective over $T$. In fact, as we are considering torsion quotients of vector bundles on a smooth projective curve, both $\Hecke^l_{\mathcal{E}/T}$ and $\THecke^l_{\mathcal{E}/T}$ are smooth $T$-schemes (see \cite[Propositions 2.2.8 and 2.A.12]{HL}). In particular, if $T/k$ is smooth (resp. projective), then both these schemes are smooth (resp. projective) over $k$. \begin{ex} Let $T = \spec (k)$ and $\mathcal{E} = \cO_C(D)^{\oplus n}$ for a divisor $D$ on $C$; then \[ \Hecke^{n\deg(D) - d}_{\mathcal{E}/T} = \Div_{n,d}(D) \quad \text{and} \quad \THecke^{n \deg(D) -d}_{\mathcal{E}/T} = \FDiv_{n,d}(D), \] which are both smooth and projective. \end{ex} We introduce some notation and properties of these Hecke schemes in the following remark.
\begin{rmk}\label{rmk structure of Hecke schemes} Let $\mathcal{E}$ be a family of rank $n$ degree $d$ vector bundles over $C$ parametrised by $T$. \begin{enumerate}[label={\upshape(\roman*)}] \item\label{rmk Hecke 1} For $l=0$, we note that $\THecke^0_{\mathcal{E}/T} = \Hecke^0_{\mathcal{E}/T} = T$ and for $l = 1$, we have \[ \THecke^1_{\mathcal{E}/T} = \Hecke^1_{\mathcal{E}/T} \cong \PP(\mathcal{E}) \ra T \times C, \] where this projection is given by taking the support of the family of degree $1$ torsion sheaves. Indeed, an elementary modification of a vector bundle $E \ra C$ at $x \in C$ is equivalent to a surjection $E_x \twoheadrightarrow \kappa(x)$ (up to scalar multiplication). \item\label{rmk Hecke 2} Since $\THecke^l_{\mathcal{E}/T}$ is a Flag-Quot scheme, there is a universal flag of vector bundles \[ \mathcal{U}^l_l \hookrightarrow \mathcal{U}^l_{l-1} \hookrightarrow \cdots \hookrightarrow \mathcal{U}^l_{1} \hookrightarrow \mathcal{U}^l_0 := p_2^*\mathcal{E} \] over $\THecke^l_{\mathcal{E}/T} \times_T (T \times C) \cong \THecke^l_{\mathcal{E}/T} \times C$.
In fact, Flag-Quot schemes, and in particular schemes of iterated Hecke correspondences, are constructed as iterated relative Quot schemes. More precisely, we have \[ \pi_l: \THecke^l_{\mathcal{E}/T} \cong \Hecke^1_{\mathcal{U}^{l-1}_{l-1}/\THecke^{l-1}_{\mathcal{E}/T}} \cong \PP(\mathcal{U}^{l-1}_{l-1}) \ra \THecke^{l-1}_{\mathcal{E}/T} \times_T (T \times C) \cong \THecke^{l-1}_{\mathcal{E}/T} \times C, \] where $\pi_l(\cF_l \subsetneq \cF_{l-1} \subsetneq \cdots \subsetneq \cF_0) := (\cF_{l-1} \subsetneq \cdots \subsetneq \cF_0, \supp(\cF_{l-1}/\cF_l))$. \item\label{rmk Hecke 3} There is a map $P_l : \THecke^l_{\mathcal{E}/T} \ra T \times C^l$ obtained by composing the maps $\pi_j$ for $ 1 \leq j \leq l$: \[ \quad \quad \quad P_{l} : \THecke^l_{\mathcal{E}/T} \ra \THecke^{l-1}_{\mathcal{E}/T} \times_T (T \times C) \ra \THecke^{l-2}_{\mathcal{E}/T} \times_T (T \times C)^{\times_T \: 2} \ra \cdots \ra T \times_T (T \times C)^{\times_T \: l} .\] Explicitly, we have $P_l(\cF_l \subsetneq \cF_{l-1} \subsetneq \cdots \subsetneq \cF_0) = (\supp(\cF_{0}/\cF_1),\ldots , \supp(\cF_{l-1}/\cF_l))$.
\item\label{rmk Hecke 4} For $1 \leq j \leq l$, we let $\pr_{j}^l : \THecke^l_{\mathcal{E}/T} \ra T \times C$ denote the composition of $P_{l}$ with the projection onto the $j$th copy of $T \times C$; that is, \[ \pr_{j}^l(\cF_l \subsetneq \cF_{l-1} \subsetneq \cdots \subsetneq \cF_0) = \supp(\cF_{j-1}/\cF_j).\] \item\label{rmk Hecke 5} Let $p_l : \THecke^l_{\mathcal{E}/T} \ra \THecke^{l-1}_{\mathcal{E}/T} $ denote the composition of $\pi_l$ with the projection to the first factor; then for $1 \leq j \leq l-1$, we have $(p_l \times \mathrm{id}_C)^*\mathcal{U}_{j}^{l-1} = \mathcal{U}_j^l$. \end{enumerate} \end{rmk} \begin{lemma}\label{lemma it proj bdle} Let $\mathcal{E}$ be a family of rank $n$ degree $d$ vector bundles over $C$ parametrised by a scheme $T$; then the scheme $\THecke^l_{\mathcal{E}/T}$ is an $l$-iterated $\PP^{n-1}$-bundle over $T \times C^l$. More precisely, we have the following sequence of projective bundles \[ \THecke^l_{\mathcal{E}/T} \cong \PP(\mathcal{U}^{l-1}_{l-1}) \ra \THecke^{l-1}_{\mathcal{E}/T} \times C \cong \PP(\mathcal{U}^{l-2}_{l-2}) \times C \ra \cdots \ra \THecke^1_{\mathcal{E}/T}\times C^{l-1} \cong \PP(\mathcal{E}) \times C^{l-1} \ra T \times C^l.
\] \end{lemma} \begin{proof} This follows by induction from Remark \ref{rmk structure of Hecke schemes} \ref{rmk Hecke 1} and \ref{rmk Hecke 2}. \end{proof} By repeatedly applying the projective bundle formula, we obtain the following corollary. \begin{cor}\label{cor motive THecke} Let $\mathcal{E}$ be a family of rank $n$ degree $d$ vector bundles over $C$ parametrised by a scheme $T$. Then \[ M(\THecke^l_{\mathcal{E}/T}) \cong M(T) \otimes M(C \times \PP^{n-1})^{\otimes l}. \] \end{cor} In fact, we will need to identify this isomorphism explicitly. For a rank $n$ vector bundle $\cV$ over a scheme $X$, the projective bundle $\pi : \PP(\cV) \ra X$ is equipped with a line bundle $\cL:=\cO_{\PP(\cV)}(1)$. The first Chern class of this line bundle defines a map $c_1(\cL) : M(\PP(\cV)) \ra \mathbb{Q}\{1\}$ and for $i \geq 0$ it induces maps \[ c_1(\cL)^{\otimes i} : M(\PP(\cV)) \stackrel{M(\Delta)}{\longrightarrow} M(\PP(\cV))^{\otimes i} \stackrel{ c_1(\cL)^{\otimes i} }{\longrightarrow} \mathbb{Q}\{i\}\] which together define a map $[c_1(\cL)]:=\oplus_{i=0}^{n-1} c_1(\cL)^{\otimes i} : M(\PP(\cV)) \ra \oplus_{i=0}^{n-1}\mathbb{Q}\{i\} \simeq M(\PP^{n-1})$. Then the projective bundle formula isomorphism can be explicitly written as the composition \[ \PB(\cL) : M(\PP(\cV)) \stackrel{M(\Delta)}{\longrightarrow} M(\PP(\cV))^{\otimes 2} \stackrel{M(\pi) \otimes [c_1(\cL)] }{\xrightarrow{\hspace*{1cm}}} M(X) \otimes M(\PP^{n-1}).
\] \begin{rmk}\label{rmk line bundles THecke} On $\THecke^l:=\THecke^l_{\mathcal{E}/T}$, we can inductively define $l$ line bundles $\cL^l_1, \ldots, \cL^l_l$ by \begin{enumerate}[label={\upshape(\roman*)}] \item $\cL^l_l:=\cO(1) \ra \PP(\mathcal{U}_{l-1}^{l-1})$, \item $\cL^l_j:= p_l^* \cL^{l-1}_j$ for $ 1 \leq j \leq l-1$, where $p_l : \THecke^l \ra \THecke^{l-1}$. \end{enumerate} These $l$ line bundles on $\THecke^l$ induce a morphism \[ \PB(\cL_\bullet^l) : M(\THecke^l_{\mathcal{E}/T}) \stackrel{M(\Delta)}{\longrightarrow} M(\THecke^l_{\mathcal{E}/T})^{\otimes l+1} \stackrel{M(P_l) \otimes [c_1(\cL^l_\bullet)]}{\longrightarrow} M(T \times C^l) \otimes M( \PP^{n-1})^{\otimes l}, \] where $ [c_1(\cL^l_\bullet)] = \otimes_{i=1}^l [c_1(\cL^l_i)]$. Furthermore, on $\THecke^l$ we have two universal objects: \begin{enumerate}[label={\upshape(\roman*)}] \item a surjection $\pi_l^*\mathcal{U}^{l-1}_{l-1} \twoheadrightarrow \cL^l_l$ over $\THecke^l$ (as $\THecke^l\cong \PP(\mathcal{U}^{l-1}_{l-1})$ by Remark \ref{rmk structure of Hecke schemes} \ref{rmk Hecke 2}), \item a short exact sequence $0 \ra \mathcal{U}_l^l \ra \mathcal{U}^l_{l-1} \ra \cT_l^l \ra 0$ over $\THecke^l \times C$.
\end{enumerate} Since $ \mathcal{U}^l_{l-1} = (p_l \times \mathrm{id}_C)^*\mathcal{U}^{l-1}_{l-1}$, the relationship between the line bundle $\cL^l_l \ra \THecke^l$ and the family of degree $1$ torsion sheaves $\cT^l_l$ on $C$ parametrised by $\THecke^l$ is \[\cL^l_l \cong ((\mathrm{id}_{\THecke^l} \times \pi_l) \circ \Delta_{\THecke^l})^*\cT_l^l,\] for $(\mathrm{id}_{\THecke^l} \times \pi_l) \circ \Delta_{\THecke^l}: \THecke^l \stackrel{\Delta_{\THecke^l}}{\longrightarrow} \THecke^l \times_{\THecke^{l-1}} \THecke^l \stackrel{\mathrm{id} \times \pi_l}{\longrightarrow} \THecke^l \times_{\THecke^{l-1}} (\THecke^{l-1} \times C) \simeq \THecke^l \times C$. In fact, for $ 1 \leq j \leq l$, we can define maps \[ r^l_j = (\mathrm{id}_{\THecke^l} \times \pr^l_j) \circ \Delta_{\THecke^l} : \THecke^l \ra \THecke^l \times_T \THecke^l \ra {\THecke^l} \times_T (T \times C) \cong {\THecke^l} \times C \] such that $r_l^l = (\mathrm{id}_{\THecke^l} \times \pi_l) \circ \Delta_{\THecke^l}$. For $j < l$, the family of degree $1$ torsion sheaves $\cT^l_j:= \mathcal{U}^l_{j-1}/\mathcal{U}^l_{j}$ on $C$ parametrised by $\THecke^l$ is obtained as a pullback of $\cT^{l-1}_j$ via the map $p_l \times \mathrm{id}_C$. Hence, for $ 1 \leq j \leq l$, we have isomorphisms relating the line bundles and families of torsion sheaves \begin{equation}\label{eq line bundles and torsion quotients} \cL^l_j \cong (r^l_j)^*\cT^l_j. \end{equation} \end{rmk} We can now give a precise description of the isomorphism in Corollary \ref{cor motive THecke}.
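Before doing so, we record a decategorified sanity check on Corollary \ref{cor motive THecke} (an illustration only, not used in the sequel): passing from motives to Poincar\'e polynomials, the iterated projective bundle structure of Lemma \ref{lemma it proj bdle} multiplies the Poincar\'e polynomial by that of $C\times \PP^{n-1}$ at each step, so that the Euler characteristic of $\THecke^l_{\mathcal{E}/T}$ (for $T$ a point) is $\big((2-2g)\,n\big)^{l}$. The following sketch checks this numerically; the numerical inputs $g,n,l$ are ours.

```python
# Decategorified shadow of Corollary "cor motive THecke" (illustration only):
# with Poincare polynomials in place of motives, an l-iterated
# P^{n-1}-bundle over T x C^l satisfies
#   P(THecke^l) = P(T) * (P(C) * P(P^{n-1}))^l.

def poly_mul(p, q):
    """Multiply polynomials given as coefficient lists in the variable t."""
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def P_curve(g):
    """Poincare polynomial 1 + 2g*t + t^2 of a smooth projective genus g curve."""
    return [1, 2 * g, 1]

def P_projective_space(m):
    """Poincare polynomial 1 + t^2 + ... + t^(2m) of P^m."""
    p = [0] * (2 * m + 1)
    p[::2] = [1] * (m + 1)
    return p

g, n, l = 2, 3, 4   # hypothetical numerical inputs
P = [1]             # take T = point, so P(T) = 1

# Build P(THecke^l) one projective bundle at a time, as in the lemma.
for _ in range(l):
    P = poly_mul(P, poly_mul(P_curve(g), P_projective_space(n - 1)))

# The Euler characteristic chi = P(-1) factors as ((2 - 2g) * n)^l.
chi = sum(c * (-1) ** i for i, c in enumerate(P))
assert chi == ((2 - 2 * g) * n) ** l
```

Of course, the corollary is a much stronger statement than this numerical shadow: it gives an isomorphism of motives, not merely an equality of Poincar\'e polynomials.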
\begin{lemma}\label{lemma PB for THecke with line bundles} The tuple $\cL_\bullet^l = (\cL^l_1, \cdots , \cL^l_l)$ of line bundles on $\THecke^l_{\mathcal{E}/T}$ induces a morphism \[ \PB(\cL_\bullet^l) : M(\THecke^l_{\mathcal{E}/T}) \stackrel{M(\Delta)}{\longrightarrow} M(\THecke^l_{\mathcal{E}/T})^{\otimes l+1} \stackrel{M(P_l) \otimes [c_1(\cL^l_\bullet)]}{\xrightarrow{\hspace*{1cm}}} M(T \times C^l) \otimes M(\PP^{n-1})^{\otimes l}, \] which coincides with the composition \[ M(\THecke^l_{\mathcal{E}/T}) \stackrel{\PB(\cL_l^l)}{\longrightarrow} M(\THecke^{l-1}_{\mathcal{E}/T}) \otimes M( C \times \PP^{n-1}) \stackrel{\PB(\cL_{l-1}^{l-1}) \otimes M(\mathrm{id})}{\longrightarrow} \cdots \longrightarrow M(T \times C^l) \otimes M( \PP^{n-1})^{\otimes l}\] and thus is an isomorphism. \end{lemma} \begin{proof} For this, one uses that Chern classes are compatible with pullbacks, so that $c_1(\cL^{l-1}_j) \circ M(p_l) = c_1(\cL^l_{j})$ for $1 \leq j \leq l-1$, as $p_l^*(\cL_j^{l-1})=\cL_j^l$. Then one uses that $P_l$ is defined as the composition of the maps $\pi_i$ for $i \leq l$ together with the fact that for any morphism $f : X \ra Y_1 \times Y_2$, we have the following commutative diagram \[ \xymatrix{ M(X) \ar[r]^{M(\Delta)\quad\quad} \ar[d]_{M(f)} & M(X) \otimes M(X) \ar[d]^{M(f_1) \otimes M(f_2)}\\ M(Y_1 \times Y_2) \ar[r]^{\simeq \quad} & M(Y_1) \otimes M(Y_2),}\] where $f_i := \mathrm{pr}_i \circ f : X \ra Y_i$ and the lower map in this square is the K\"{u}nneth isomorphism.
\end{proof} \subsection{The motive of the scheme of Hecke correspondences} There is a forgetful map \[ f : \THecke^l_{\mathcal{E}/T} \ra \Hecke^l_{\mathcal{E}/T} \] that we will use to relate the motive of $\Hecke^l_{\mathcal{E}/T}$ to that of $\THecke^l_{\mathcal{E}/T}$, which we computed above. In fact, we plan to use the above section to compare these motives, as the map $f$ is small. To prove that $f$ is a small map, we will describe it as the pullback of a small map along a flat morphism by generalising an argument of Heinloth \cite[Proposition 11]{Heinloth_LaumonBDay}. Let $\Coh_{0,l}$ denote the stack of rank $0$ degree $l$ coherent sheaves on $C$ and let $\widetilde{\Coh}_{0,l}$ denote the stack which associates to a scheme $S$ the groupoid \[ \widetilde{\Coh}_{0,l} (S) = \langle \cT_1 \hookrightarrow \cT_2 \hookrightarrow \cdots \hookrightarrow \cT_l : \cT_i \in \Coh_{0,i}(S) \rangle.\] The forgetful map $f' : \widetilde{\Coh}_{0,l} \ra \Coh_{0,l}$ fits into the following commutative diagram \begin{equation}\label{diag p q} \xymatrix{ \THecke^l_{\mathcal{E}/T} \ar[d]^{f} \ar[r]^{\tilde{\gr}} & T \times \widetilde{\Coh}_{0,l} \ar[d]^{\mathrm{id}_T \times f'} \ar[r] & T \times C^l \ar[d]\\ \Hecke^l_{\mathcal{E}/T} \ar[r]^{{\gr}} & T \times \Coh_{0,l} \ar[r] & T \times C^{(l)} } \end{equation} such that the left square in this diagram is Cartesian. Furthermore, by \cite[Theorem 3.3.1]{laumon}, the map $f'$ is small and generically an $S_l$-covering. By Lemma \ref{lem:small_bc}, $\mathrm{id}_T \times f'$ is small and generically an $S_l$-covering.
Since the morphism $\gr$ is smooth and thus flat (see the proof of \cite[Proposition 11]{Heinloth_LaumonBDay}), we deduce by Lemma \ref{lem:small_bc} that $f$ is small and generically an $S_l$-covering. By Theorem \ref{thm:actions}, there is an induced $S_l$-action on $M(\THecke^l_{\mathcal{E}/T})$ and we can now prove the following result. \begin{thm}\label{thm Sl action on THecke} Let $\mathcal{E}$ be a family of rank $n$ degree $d$ vector bundles over $C$ parametrised by a smooth $k$-scheme $T$. Then via the isomorphism $M(\THecke^l_{\mathcal{E}/T}) \cong M(T) \otimes M(C \times \PP^{n-1})^{\otimes l}$ of Corollary \ref{cor motive THecke}, the $S_l$-action permutes the $l$ copies of $M(C \times \PP^{n-1})$. Moreover, we have \[ M(\Hecke^l_{\mathcal{E}/T}) \cong M(T) \otimes M(\Sym^l(C \times \PP^{n-1})). \] \end{thm} \begin{proof} We note that as $T$ is smooth, both $\Hecke^l_{\mathcal{E}/T}$ and $\THecke^l_{\mathcal{E}/T}$ are smooth over $k$. By Lemma \ref{lemma PB for THecke with line bundles}, there is an isomorphism \[ M(\THecke^l_{\mathcal{E}/T}) \cong M(T) \otimes M(C \times \PP^{n-1})^{\otimes l} \] induced by $l$ line bundles $\cL_1^l, \dots, \cL_l^l$ on $\THecke^l_{\mathcal{E}/T}$ (which are the pullbacks of the ample bundles on each projective bundle) and the projection $P_l : \THecke^l_{\mathcal{E}/T} \ra T \times C^l$.
The $S_l$-action on $M(\THecke^l_{\mathcal{E}/T})$ from Theorem \ref{thm:actions} is induced by the $S_l$-action on the open subset $\THecke^{l,\circ}_{\mathcal{E}/T} = p^{-1}(\Hecke^{l,\circ}_{\mathcal{E}/T})$, where $\Hecke^{l,\circ}_{\mathcal{E}/T}$ parametrises length $l$ Hecke correspondences whose degree $l$ torsion quotient has support consisting of $l$ distinct points. The $S_l$-action on $\THecke^{l,\circ}_{\mathcal{E}/T}$ corresponds to permuting the $l$ universal degree $1$ torsion quotients $\cT_1^l, \dots, \cT_l^l$. By Remark \ref{rmk line bundles THecke}, this corresponds to permuting the $l$ line bundles $\cL_i^l$ on $\THecke^l_{\mathcal{E}/T}$ (see equation \eqref{eq line bundles and torsion quotients}). Therefore, the induced $S_l$-action on $M(\THecke^l_{\mathcal{E}/T})$ permutes the $l$ copies of $M(C \times \PP^{n-1})$. As $f$ is a small proper surjective map of smooth varieties, Theorem \ref{thm:actions} yields an isomorphism \[ M(\THecke^l_{\mathcal{E}/T})^{S_l} \cong M(\Hecke^l_{\mathcal{E}/T}).\] Finally, by Example \ref{ex:sym} we have $\Sym^{l}M(C \times \PP^{n-1})\simeq M(\Sym^{l}(C\times \PP^{n-1}))$. \end{proof} In particular, if we apply this to $T = \Spec k$ and $\mathcal{E} = \cO_C(D)^{\oplus n}$ for a divisor $D$ on $C$, we obtain Theorem \ref{thm intro FDiv} as a special case of this result.
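As a quick consistency check (our addition, not in the original), consider the rank one case $n=1$: then $\PP^{n-1}=\PP^0$ is a point and a length $l$ Hecke modification of a line bundle is exactly an effective divisor of degree $l$ on $C$, so the theorem above reduces to the motive of a symmetric power of the curve:

```latex
% Sanity check for n = 1 (our addition): here \PP^{0} = \mathrm{pt}, so
\[
  M\big(\Hecke^{l}_{\mathcal{E}/T}\big) \;\cong\;
  M(T) \otimes M\big(\Sym^{l}(C \times \mathrm{pt})\big)
  \;=\; M(T) \otimes M\big(C^{(l)}\big),
\]
% in agreement with the classical identification of length l modifications
% L' \subseteq L of a line bundle with points of the symmetric power C^{(l)}.
```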
Furthermore, the motive of the Quot scheme of length $l$ torsion quotients of a locally free sheaf $\mathcal{E}$ over $T \times C/T$ only depends on the rank of $\mathcal{E}$; we explicitly state this as a corollary for $T = \spec k$, as there are similar recent results concerning the class of such Quot schemes in the Grothendieck ring of varieties \cite{BFP, Ricolfi}. \begin{cor}\label{cor motive torsion quot} Let $\mathcal{E}$ be a rank $n$ locally free sheaf on $C$ and let $l \in \NN$. Then the motive of the Quot scheme $\quot_{C/k}^{(0,l)}(\mathcal{E})$ parametrising length $l$ torsion quotients of $\mathcal{E}$ is \[ M(\quot_{C/k}^{(0,l)}(\mathcal{E})) \cong M(\Sym^l(C \times \PP^{n-1})).\] In particular, this motive only depends on the rank $n$ of $\mathcal{E}$. \end{cor} \section{The formula for the motive of the stack of vector bundles}\label{sec formula} \subsection{The transition maps in the inductive system}\label{sec lift trans} Throughout this section, we fix $x \in C(k)$ and let $s_x : \Spec k \ra C$ be the inclusion of $x$. The inclusion $\cO_C \hookrightarrow \cO_C(x)$ defines an inductive sequence of morphisms $i_l : \Div_{n,d}(l) \ra \Div_{n,d}(l+1)$ indexed by $l \in \NN$. In this section, we will lift the maps $i_l : \Div_{n,d}(l) \ra \Div_{n,d}(l+1)$ to the schemes of iterated Hecke correspondences and compute the induced maps of motives.
We recall that \[ \Div_{n,d}(l) = \Hecke^{nl -d}_{\cO_C(lx)^{\oplus n}/ \Spec k} \quad \text{and} \quad \FDiv_{n,d}(l) = \THecke^{nl -d}_{\cO_C(lx)^{\oplus n}/ \Spec k}\] and we will drop the subscripts for Hecke schemes throughout the rest of this section. The inclusion $\cO_C \hookrightarrow \cO_C(x)$ induces an inclusion $\cO_C^{\oplus n} \hookrightarrow \cO_C(x)^{\oplus n}$. Any full flag \[ \cF_\bullet =(\cO_C^{\oplus n}=\cF_0 \subsetneq \cF_1 \subsetneq \cdots \subsetneq \cF_{n-1} \subsetneq \cF_n = \cO_C(x)^{\oplus n})\] determines, for $l \in \NN$, a morphism $A_l({\cF_\bullet}) : \FDiv_{n,d}(l) \ra \FDiv_{n,d}(l+1) $ lifting the morphism $ \Div_{n,d}(l) \ra \Div_{n,d}(l+1)$. Recall that we have maps $P_{nl-d} : \FDiv_{n,d}(l) = \THecke^{nl-d} \ra C^{nl-d}$ defined in Remark \ref{rmk structure of Hecke schemes}. The morphism $A_l({\cF_\bullet})$ sits in a commutative diagram \begin{equation}\label{commutative diag A a} \xymatrixcolsep{3pc} \xymatrix{ \FDiv_{n,d}(l) \ar[d]_{P_{nl-d}} \ar[r]^-{A_l({\cF_\bullet})} &\FDiv_{n,d}(l+1) \ar[d]^{P_{n(l+1)-d}}\\ C^{nl-d} \ar[r]^-{c_l} &C^{n(l+1) -d}. } \end{equation} where $c_l:= s_{x}^n \times \mathrm{id}_{C^{nl-d}} $. Recall that $\pr_j^{nl-d} : \FDiv_{n,d}(l) \ra C^{nl-d} \ra C$ denotes the composition of $P_{nl-d}$ with the projection onto the $j$th factor. We have \begin{equation}\label{eq A_l and projections} \pr_j^{n(l+1)-d} \circ A_l({\cF_\bullet}) = \left \{ \begin{array}{ll} t_x & \text{if } 1 \leq j \leq n\\ \pr^{nl-d}_{j-n }& \text{if } n+1 \leq j \leq n(l+1)-d, \end{array} \right. \end{equation} where $t_x : \FDiv_{n,d}(l) \ra \spec k \ra C$ is the composition of the structure map with $s_x$. Similarly, a tuple $p:=(p_1, \cdots , p_n) \in (\PP^{n-1})^n$ induces $b_l(p) : (\PP^{n-1})^{nl-d} \ra (\PP^{n-1})^{n(l+1)-d}$ which is the identity on the last $nl-d$ factors. 
We define \[ a_l(p):=c_l \times b_l(p) : (C \times \PP^{n-1})^{nl-d} \ra (C \times \PP^{n-1})^{n(l+1)-d}.\] \begin{lemma} Every choice of flag $\cF_\bullet$ induces the same map of motives \[ M(A_l):= M(A_l({\cF_\bullet})) : M(\FDiv_{n,d}(l)) \ra M( \FDiv_{n,d}(l+1))\] and every choice of tuple $p \in (\PP^{n-1})^n$ induces the same map of motives \[ M(b_l)=M(b_l(p)) : M(\PP^{n-1})^{\otimes nl-d} \ra M(\PP^{n-1})^{\otimes n(l+1)-d}.\] \end{lemma} \begin{proof} A flag $\cF_\bullet$ as above is specified by a full flag in $k^n$, which is parametrised by the flag variety $\GL_n/B$; this variety is $\mathbb{A}^1$-chain connected, so all flags induce the same map of motives. The second statement follows similarly, as projective spaces are also $\mathbb{A}^1$-chain connected. \end{proof} As we are only interested in studying these maps motivically, we will drop the choice of flag $\cF_\bullet$ and tuple $p$ from the notation and simply write $A_l$, $b_l$ and $a_l$ for these morphisms. By Lemma \ref{lemma PB for THecke with line bundles}, there is an $S_{nl-d}$-equivariant isomorphism \[\PB(\cL^{nl-d}_\bullet): M(\FDiv_{n,d}(l)) = M( \THecke^{nl-d}) \ra M(C \times \PP^{n-1})^{\otimes nl-d}\] determined by the line bundles $\cL_j^{nl-d}$ on $\THecke^{nl-d}$ for $1 \leq j \leq nl-d$. Moreover, we have homomorphisms $\varphi_l : S_{nl-d} \hookrightarrow S_{n(l+1)-d}$ such that the maps $c_l : C^{nl-d} \ra C^{n(l+1)-d}$ are equivariant.
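Since $c_l = s_x^n \times \mathrm{id}_{C^{nl-d}}$ places $C^{nl-d}$ in the last $nl-d$ coordinates, one natural choice of $\varphi_l$ making $c_l$ equivariant (our reading, spelled out for concreteness) fixes the first $n$ letters and shifts the $S_{nl-d}$-action by $n$:

```latex
% One explicit choice of \varphi_l (our addition), compatible with
% c_l = s_x^n \times \mathrm{id}:
\[
  \varphi_l(\sigma)(j) \;=\;
  \begin{cases}
    j & \text{if } 1 \leq j \leq n,\\
    n + \sigma(j-n) & \text{if } n+1 \leq j \leq n(l+1)-d,
  \end{cases}
  \qquad \sigma \in S_{nl-d},
\]
% so that c_l(\sigma \cdot z) = \varphi_l(\sigma) \cdot c_l(z) for all
% z \in C^{nl-d}.
```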
\begin{prop}\label{prop THecke trans} For each $l$, we have a commutative diagram \begin{equation}\label{eq prop THecke tran} \xymatrixcolsep{3pc} \xymatrix{ M( \FDiv_{n,d}(l)) \ar[d]_{\PB(\cL^{nl-d}_\bullet)}^{\wr} \ar[r]^-{M(A_l)} &M( \FDiv_{n,d}(l+1) )\ar[d]^{\PB(\cL^{n(l+1)-d}_\bullet)}_{\wr}\\ M(C \times \PP^{n-1})^{\otimes nl-d} \ar[r]^-{M(a_l )} &M(C \times \PP^{n-1})^{\otimes n(l+1) -d} } \end{equation} such that the horizontal maps are equivariant with respect to $\varphi_l : S_{nl-d} \hookrightarrow S_{n(l+1)-d}$. \end{prop} \begin{proof} We claim that the pullbacks via $A_l$ (for any flag $\cF_\bullet$) of the line bundles $\cL_j^{n(l+1)-d}$ satisfy \begin{equation}\label{eq pullback line bundles under A} A_l^* \cL^{n(l+1)-d}_j = \left \{ \begin{array}{ll} \cO_{\THecke^{nl-d}} & \text{if } 1 \leq j \leq n \\ \cL^{nl-d}_{j-n} & \text{if } n+1 \leq j \leq n(l+1)-d. \end{array} \right. \end{equation} We recall that we have $n(l+1)-d$ families of degree $1$ torsion sheaves on $C$ parametrised by $\FDiv_{n,d}(l+1)=\THecke^{n(l+1)-d}$ given by the successive quotients of the universal flag of vector bundles on $\THecke^{n(l+1)-d} \times C$; these families of torsion sheaves are denoted by \[ \cT^{n(l+1)-d}_j:= \mathcal{U}^{n(l+1)-d}_{j-1}/\mathcal{U}^{n(l+1)-d}_{j} \quad \text{ for } \: 1 \leq j \leq n(l+1)-d .\] The pullbacks of these families of torsion sheaves along $A_l$ (for any flag $\cF_\bullet$) are as follows: \begin{equation}\label{eq pullback torsion under A} (A_l \times \mathrm{id}_C)^* \cT^{n(l+1)-d}_j = \left \{ \begin{array}{ll} p_C^*k_x & \text{if } 1 \leq j \leq n \\ \cT^{nl-d}_{j-n} & \text{if } n+1 \leq j \leq n(l+1)-d, \end{array} \right. \end{equation} where $p_C : \THecke^{n(l+1)-d} \times C \ra C$ denotes the projection and $k_x$ is the skyscraper sheaf at $x$.
Consequently, Claim \eqref{eq pullback line bundles under A} follows from equations \eqref{eq line bundles and torsion quotients}, \eqref{eq A_l and projections} and \eqref{eq pullback torsion under A}. Similarly, if we let $\mathcal{M}_j^{nl-d}$ denote the line bundle on $(C \times \PP^{n-1})^{nl-d}$ obtained by pulling back $\cO_{\PP^{n-1}}(1)$ via the $j$th projection, we have \begin{equation*} a_l^* \mathcal{M}^{n(l+1)-d}_j = \left \{ \begin{array}{ll} \cO_{(C \times \PP^{n-1})^{nl-d}} & \text{if } 1 \leq j \leq n \\ \mathcal{M}^{nl-d}_{j-n} & \text{if } n+1 \leq j \leq n(l+1)-d. \end{array} \right. \end{equation*} Since the action of the symmetric groups on these motives corresponds to permuting the order of these line bundles, we see that $M(A_l)$ and $M(a_l)$ are both equivariant with respect to $\varphi_l$. Finally, let us prove the commutativity of the square \eqref{eq prop THecke tran}. For this we require the explicit formula for the iterated projective bundle isomorphisms given in Lemma \ref{lemma PB for THecke with line bundles}: \[ \PB(\cL_\bullet^{nl-d}) =(M(P_{nl-d}) \otimes [c_1(\cL_\bullet^{nl-d})]) \circ M(\Delta_{\THecke^{nl-d}}),\] where $[c_1(\cL_\bullet^{nl-d})] : M(\THecke^{nl-d}) \ra M(\PP^{n-1})^{\otimes nl-d}$ is the map induced by powers of the first Chern classes of the line bundles $\cL_j^{nl-d}$ for $1 \leq j \leq nl-d$. If we insert $n$ copies of the structure sheaf on $\THecke^{nl-d}$ into this family, we obtain a map \[ [c_1(\cO, \dots , \cO,\cL_\bullet^{nl-d})] : M(\THecke^{nl-d}) \ra M(\PP^{n-1})^{\otimes n(l+1)-d}.
\] In fact, since $c_1(\cO)$ is the zero map, we see that $[c_1(\cO)] : M(\THecke^{nl-d}) \ra M(\PP^{n-1})$ is the composition of the structure map $M(\THecke^{nl-d}) \ra \mathbb{Q}\{ 0 \}$ with the inclusion $\mathbb{Q}\{ 0 \} \hookrightarrow M(\PP^{n-1})$ of any point in $\PP^{n-1}$. Therefore, we can write the lower diagonal composition in \eqref{eq prop THecke tran} as \[ M(a_l) \circ \PB(\cL_\bullet^{nl-d}) = (M(c_l \circ P_{nl-d}) \otimes [c_1(\cO, \dots , \cO, \cL_\bullet^{nl-d})] )\circ M(\Delta_{\THecke^{nl-d}}). \] Then by \eqref{eq pullback line bundles under A}, we have \[ [c_1(\cO, \dots , \cO, \cL_\bullet^{nl-d})] = [c_1(\cL_\bullet^{n(l+1)-d})] \circ M(A_l)\] and as diagram \eqref{commutative diag A a} commutes, we deduce that \[ \PB(\cL_\bullet^{n(l+1)-d}) \circ M(A_l) = (M(c_l \circ P_{nl-d}) \otimes [c_1(\cO, \dots , \cO, \cL_\bullet^{nl-d}) ])\circ M(\Delta_{\THecke^{nl-d}}), \] which completes the proof that the square \eqref{eq prop THecke tran} commutes. \end{proof} Since $a_l : (C \times \PP^{n-1})^{nl-d} \ra (C \times \PP^{n-1})^{n(l+1)-d}$ is equivariant with respect to $\varphi_l : S_{nl-d} \hookrightarrow S_{n(l+1) -d}$, we obtain an induced map between the associated symmetric products \[ \xymatrixcolsep{5pc} \xymatrix{ (C \times \PP^{n-1})^{nl-d} \ar[r]^{a_l} \ar[d] & (C \times \PP^{n-1})^{n(l+1)-d} \ar[d] \\ \Sym^{nl-d}(C \times \PP^{n-1}) \ar[r]^-{\Sym(a_l)} & \Sym^{n(l+1)-d}(C \times \PP^{n-1}).}\] By Theorem \ref{thm intro FDiv}, there is an isomorphism \[ e_l : M(\Div_{n,d}(l)) \cong M(\FDiv_{n,d}(l))^{S_{nl-d}} \cong \Sym^{nl-d}M(C \times \PP^{n-1}) \] where the second isomorphism is induced by the $S_{nl-d}$-equivariant isomorphism $\PB(\cL^{nl-d}_\bullet)$.
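On closed points the symmetrised map has a transparent description (our paraphrase, added for orientation): it adds a fixed effective zero-cycle of degree $n$ supported over $x$,

```latex
% Pointwise description of \Sym(a_l) (our addition): with
% p = (p_1, \dots, p_n) \in (\PP^{n-1})^n the chosen tuple,
\[
  \Sym(a_l) : \Sym^{nl-d}(C \times \PP^{n-1}) \ra \Sym^{n(l+1)-d}(C \times \PP^{n-1}),
  \qquad
  D \longmapsto D + \sum_{i=1}^{n} (x, p_i),
\]
% i.e. every effective cycle of degree nl-d is enlarged by the fixed
% degree n cycle \sum_i (x, p_i).
```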
\begin{cor}\label{cor1 transition maps div} The following diagram commutes \[ \xymatrixcolsep{5pc} \xymatrix{ M(\Div_{n,d}(l)) \ar[r]^{M(i_l)} \ar[d]_{e_l}^{\wr} & M(\Div_{n,d}(l+1)) \ar[d]^{e_{l+1}}_{\wr} \\ \Sym^{nl-d}M(C \times \PP^{n-1}) \ar[r]^-{M(\Sym(a_l))} & \Sym^{n(l+1)-d}M(C \times \PP^{n-1}).}\] \end{cor} \begin{proof} By the equivariance property of $M(A_{l})$ observed in Proposition \ref{prop THecke trans} and the fact that $A_l$ lifts $i_l$, the isomorphisms of Theorem \ref{thm intro FDiv} fit in a commutative diagram \[ \xymatrixcolsep{5pc} \xymatrix{ M(\FDiv_{n,d}(l))^{S_{nl-d}} \ar[r]^{M(A_l)} \ar[d]_{\wr} & M(\FDiv_{n,d}(l+1))^{S_{n(l+1)-d}} \ar[d]_{\wr} \\ M(\Div_{n,d}(l)) \ar[r]^{M(i_l)} & M(\Div_{n,d}(l+1)) } \] The corollary then follows from combining this diagram with the diagram of Proposition \ref{prop THecke trans}. \end{proof} \subsection{A proof of the formula}\label{sec proof1} The rational point $x \in C(k)$ gives rise to a decomposition $M(C) = \mathbb{Q}\{ 0 \} \oplus \overline{M}(C)$, where $\overline{M}(C) = M_1(\Jac(C)) \oplus \mathbb{Q}\{ 1\}$, see \cite[Proposition 4.2.5]{AHEW}. The motive of $\Jac(C)$ can be recovered from the motive $M_1(\Jac(C))$ using \cite[Proposition 4.3.5]{AHEW}: \[ M(\Jac(C)) = \bigoplus_{i=0}^{2g} \Sym^i(M_1(\Jac(C))) = \bigoplus_{i=0}^{\infty} \Sym^i(M_1(\Jac(C))) .
\] We can then write \[ M(C \times \PP^{n-1}) = M(C) \otimes \left(\bigoplus_{i=0}^{n-1} \mathbb{Q}\{i \}\right) = \mathbb{Q}\{ 0 \} \oplus \overline{M}(C) \oplus \bigoplus_{i=1}^{n-1} M(C) \{i \}.\] Let $M_{C,n}:= \overline{M}(C) \oplus \bigoplus_{i=1}^{n-1}M(C)\{i \}$; then (for example, by \cite[Lemma B.3.1]{AHEW}) \[ \Sym^{nl-d}(M(C \times \PP^{n-1})) = \Sym^{nl-d}(\mathbb{Q}\{ 0 \} \oplus M_{C,n})= \bigoplus_{i=0}^{nl-d} \Sym^i(M_{C,n}). \] \begin{lemma}\label{lemma tran map Div using MC} There is a commutative diagram \[ \xymatrixcolsep{5pc} \xymatrix{ M(\Div_{n,d}(l)) \ar[r]^{M(i_l)} \ar[d]^{\wr} & M(\Div_{n,d}(l+1)) \ar[d]_{\wr} \\ \bigoplus\limits_{i=0}^{nl-d} \Sym^i(M_{C,n}) \ar[r] & \bigoplus\limits_{i=0}^{n(l+1)-d} \Sym^i(M_{C,n})}\] where the lower map is the obvious inclusion. \end{lemma} \begin{proof} Let us start with the description of the transition map given in Corollary \ref{cor1 transition maps div}. We see that the map $a_l : (C \times \PP^{n-1})^{nl-d} \ra (C \times \PP^{n-1})^{n(l+1)-d}$ can be described motivically as \[ M(a_l) : M(C \times \PP^{n-1})^{\otimes nl-d} \cong \mathbb{Q}\{ 0 \}^{\otimes n} \otimes M(C \times \PP^{n-1})^{\otimes (nl-d)} \stackrel{\iota^{\otimes n} \otimes M(\mathrm{id})}{\xrightarrow{\hspace*{1cm}}} M(C \times \PP^{n-1})^{\otimes n(l+1)-d} \] where $\iota : \mathbb{Q}\{0 \} \ra M(C \times \PP^{n-1}) = \mathbb{Q}\{0 \} \oplus M_{C,n}$ is the natural inclusion of this direct factor. It thus follows that the symmetrised map $M(\Sym(a_l))$ is the claimed inclusion.
\end{proof} \begin{thm} If $C(k) \neq \emptyset$, then the motive of $\Bun_{n,d}$ satisfies \[ M(\Bun_{n,d}) \simeq \hocolim_{l} \left( \bigoplus_{i=0}^{nl-d} \Sym^i(M_{C,n}) \right) \simeq \bigoplus_{i=0}^{\infty} \Sym^i(M_{C,n}).\] More precisely, we have \[M(\Bun_{n,d}) \simeq M(\Jac(C)) \otimes M(B\GG_m) \otimes \bigotimes_{i=1}^{n-1} Z(C, \mathbb{Q}\{i\}). \] \end{thm} \begin{proof} The first claim follows from Lemma \ref{lemma tran map Div using MC} and Theorem \ref{thm old main}. For the second claim, we introduce the notation $\Sym^*(M):= \oplus_{i=0}^{\infty} \Sym^i(M)$ for any motive $M$; then \begin{enumerate}[label={\upshape(\roman*)}] \item $\Sym^*(M_1 \oplus M_2) = \Sym^*(M_1) \otimes \Sym^*(M_2)$ (by \cite[Lemma B.3.1]{AHEW}), \item $Z(C,\mathbb{Q}\{i\}) = \Sym^*(M(C)\{i \})$ (by definition of the motivic Zeta function), \item $\Sym^*(\mathbb{Q}\{1\})= M(B\GG_m)$ (see \cite[Example 2.21]{HPL} based on \cite[Lemma 8.7]{totaro}), \item $\Sym^*(M_1(\Jac(C))) = M(\Jac(C))$ (by \cite[Proposition 4.3.5]{AHEW}), \end{enumerate} and the formula follows from these observations. \end{proof} \subsection{An alternative proof using previous results}\label{sec proof2} We will give a second proof of this formula for $M(\Bun_{n,d})$, also based on Corollary \ref{cor1 transition maps div} but which follows more closely our previous work \cite{HPL}. The idea is to describe the unsymmetrised transition maps $M(a_l)$ by decomposing the motives $M(C \times \PP^{n-1})^{\otimes nl-d}$ using $M(\PP^{n-1}) = \oplus_{i=0}^{n-1} \mathbb{Q}\{ i \}$. \begin{rmk} By returning to the decomposition $M(\PP^{n-1}) = \oplus_{i=0}^{n-1} \mathbb{Q}\{ i \}$, we can describe the maps $M(a_l)$ explicitly.
Indeed, we have a decomposition of $M(C \times \PP^{n-1})^{\otimes nl-d}$ indexed by ordered tuples $I = (i_1, \cdots, i_{nl-d}) \in \mathcal{I}_l:= \{ 0, \cdots ,n-1\}^{\times \: nl -d }$ of the form \[ M(C \times \PP^{n-1})^{\otimes nl-d} = \bigoplus_{I \in \mathcal{I}_l} \bigotimes_{j=1}^{nl-d} M(C) \{i_j\} = \bigoplus_{{I \in \mathcal{I}_l}} M(C^{nl-d}) \{ | I | \},\] where $| I | = \sum_{j=1}^{nl-d} i_j$. There is a map $h_l: \mathcal{I}_l \ra \mathcal{I}_{l+1}$ given by $I \mapsto (0,\dots, 0,I)$ (inserting $n$ zeros) such that the map $M(a_l): M(C \times \PP^{n-1})^{\otimes nl-d} \ra M(C \times \PP^{n-1})^{\otimes n(l+1)-d}$ sends the direct summand indexed by $I \in \mathcal{I}_l$ to the direct summand indexed by the tuple $h_l(I) \in \mathcal{I}_{l+1}$ via the map \begin{equation}\label{eq beh unsym trans} M(c_l)\{| I |\} : M(C^{nl-d})\{ | I | \} \ra M(C^{n(l+1)-d})\{ | I | \} = M(C^{n(l+1)-d})\{ | (0,\dots, 0,I) | \}. \end{equation} \end{rmk} The $S_{nl-d}$-action on $M(C \times \PP^{n-1})^{\otimes nl-d}$ permutes these direct summands via the obvious action of $S_{nl-d}$ on $\mathcal{I}_l$. The invariant part is the motive of $\Sym^{nl-d}(C \times \PP^{n-1})$, which has an associated decomposition. The index set for this decomposition is \[ \cB_l:=\left\{ m = (m_0, \dots , m_{n-1}) \in \NN^n : \sum_{i=0}^{n-1} m_i = nl-d \right\}.
\] Moreover, for $I \in \mathcal{I}_l$, we let $\tau_l(I)_r= \# \{ j : i_j = r \}$; then $\tau_l(I) = (\tau_l(I)_0, \dots , \tau_l(I)_{n-1}) \in \cB_l$ and the map $\tau_l : \mathcal{I}_l \ra \cB_l$ is $S_{nl-d}$-invariant with $| I | = \sum_{i=0}^{n-1} i \tau_l(I)_i$. By grouping together the factors with the same values of $i_j$, there is a map \begin{equation}\label{eq map induced by s} C^{nl-d} \ra \prod_{i=0}^{n-1} \Sym^{\tau_l(I)_i}(C) \end{equation} which is the quotient by the natural action of $\Stab(I) \cong \prod_{i=0}^{n-1} S_{\tau_l(I)_i}$. \begin{lemma}\label{lemma sym transition maps} For each $l$, we have a decomposition \[ M( \Sym^{nl-d}(C \times \PP^{n-1})) = \bigoplus_{m \in \cB_l} \bigotimes_{i=0}^{n-1} \Sym^{m_i}(M(C)) \{i m_i \}\] such that the following statements hold. \begin{enumerate}[label={\upshape(\roman*)}] \item \label{Sym decomp} For each $m \in \cB_l$, we have a commutative diagram \[ \xymatrix{ M(C \times \PP^{n-1})^{\otimes nl-d} \ar[r] \ar[d] & M( \Sym^{nl-d}(C \times \PP^{n-1})) \ar[d] \\ \bigoplus\limits_{I \in \tau_l^{-1}(m)} M(C^{nl-d}) \{ | I | \} \ar[r] & \bigotimes\limits_{i=0}^{n-1} \Sym^{m_i}(M(C)) \{i m_i \} } \] where the lower maps are induced by the maps \eqref{eq map induced by s}. \item \label{Sym trans} The transition maps $M(\Sym(a_l))$ decompose as maps \[ \kappa_{m,m'} : \bigotimes_{i=0}^{n-1} \Sym^{m_i}(M(C)) \{i m_i \} \ra \bigotimes_{i=0}^{n-1} \Sym^{m'_i}(M(C)) \{i m'_i \} \] for $m \in \cB_l$ and $m' \in \cB_{l+1}$ with $\kappa_{m,m'} = 0$ unless $m' = m + (n,0,\dots, 0)$, in which case this map is induced by the morphism of varieties \[ \prod_{i=0}^{n-1}\Sym^{m_i}(C)\ra \prod_{i=0}^{n-1}\Sym^{m'_i}(C) \] which is the map $\Sym(s_x^n \times \mathrm{id}_{C^{m_0}})$ on the $0$th factor and the identity on all other factors.
\end{enumerate} \end{lemma} \begin{proof} We will give the decomposition and the proof of \ref{Sym decomp} simultaneously, by collecting the direct summands in the decomposition of $M(C \times \PP^{n-1})^{\otimes nl-d}$ which are preserved by the $S_{nl-d}$-action and taking their invariant parts. For this, we recall that there is an $S_{nl-d}$-action on $\mathcal{I}_l$ and the map $\tau_l : \mathcal{I}_l \ra \cB_l$ is $S_{nl-d}$-invariant and the fibres consist of single orbits. For $I \in \mathcal{I}_l$ with $m = \tau_l(I)$, we note that the quotient of the associated action of $\Stab(I) = \prod_{i=0}^{n-1} S_{m_i}$ on $C^{nl-d}$ is isomorphic to $\prod_{i=0}^{n-1} \Sym^{m_i}(C)$. Therefore, the motive appearing in the lower left corner of the diagram in statement \ref{Sym decomp} is a direct summand of $M(C \times \PP^{n-1})^{\otimes nl-d}$ that is preserved by the $S_{nl-d}$-action and its $S_{nl-d}$-invariant piece is precisely the motive appearing in the lower right corner. This proves the first statement and the decomposition. To describe the behaviour of the symmetrised transition maps with respect to this decomposition, we recall that the unsymmetrised transition maps send the direct summand indexed by $I \in \mathcal{I}_l$ to the one indexed by $h_l(I)=(0,\dots, 0,I) \in \mathcal{I}_{l+1}$. The unsymmetrised transition maps on these direct summands are described by \eqref{eq beh unsym trans} and so it remains to describe the induced map on the invariant parts for the actions of the symmetric groups.
Since $h_l : \mathcal{I}_l \ra \mathcal{I}_{l+1}$ is equivariant for the actions of the symmetric groups via the homomorphism $\varphi_l : S_{nl-d} \hookrightarrow S_{n(l+1)-d}$, it descends to a map \[ \overline{h} : \cB_l \ra \cB_{l+1} \quad \text{where} \quad \overline{h}(m) = m + (n,0, \dots, 0). \] Thus, $\kappa_{m,m'}$ is zero unless $m' = \overline{h}(m)$. For $m' = \overline{h}(m)$, $I \in \tau_l^{-1}(m)$ and $I' \in \tau_{l+1}^{-1}(m')$ note that \[ | I | = | I'| = \sum_{i=0}^{n-1} i m_i = \sum_{i=0}^{n-1} i m_i' \] and \[ \Stab(I) = \prod_{i=0}^{n-1} S_{m_i} \quad \text{and} \quad \Stab(I') = \prod_{i=0}^{n-1} S_{m'_i} = S_{m_0 + n} \times \prod_{i=1}^{n-1} S_{m_i}. \] In particular, the map $c_l = s_x^n \times \mathrm{id} : C^{nl-d} \ra C^{n(l+1)-d}$ is equivariant for the induced actions of $\Stab(I)$ and $\Stab(I')$ and there is a map between the quotients \[ \xymatrix{ C^{nl-d} \ar[r]^{c_l} \ar[d] & C^{n(l+1)-d} \ar[d] \\ \prod\limits_{i=0}^{n-1} \Sym^{m_i}(C) \ar[r] & \prod\limits_{i=0}^{n-1} \Sym^{m'_i}(C) } \] which is $\Sym(s_x^n \times \mathrm{id}_{C^{m_0}})$ on the $0$th factor and the identity on the other factors. Combined with \ref{Sym decomp}, this concludes the proof of \ref{Sym trans}. \end{proof} \begin{cor}\label{cor2 transition maps div} The transition maps $M(i_l) : M(\Div_{n,d}(l)) \ra M(\Div_{n,d}(l+1))$ fit in the following commutative diagram \[ \xymatrixcolsep{5pc} \xymatrix{ M(\Div_{n,d}(l)) \ar[r]^{M(i_l)} \ar[d]^{\wr} & M(\Div_{n,d}(l+1)) \ar[d]_{\wr} \\ \bigoplus\limits_{m \in \cB_l} \bigotimes\limits_{i=0}^{n-1} \Sym^{m_i}(M(C)) \{i m_i \} \ar[r]^-{\bigoplus\limits_{m,m' } \kappa_{m,m'}} & \bigoplus\limits_{m' \in \cB_{l+1}} \bigotimes\limits_{i=0}^{n-1} \Sym^{m'_i}(M(C)) \{i m'_i \}}\] where the maps $\kappa_{m,m'}$ are as in Lemma \ref{lemma sym transition maps}.
\end{cor} \begin{proof} This follows from Lemma \ref{lemma sym transition maps} and Corollary \ref{cor1 transition maps div}. \end{proof} This looks very similar to \cite[Conjecture 3.9]{HPL}, except that we do not know whether the vertical maps in this commutative diagram coincide with the maps given by the Bia{\l}ynicki-Birula decompositions used in the formulation of this conjecture. Nevertheless, with the description of the transition maps in Corollary \ref{cor2 transition maps div}, one can apply the proof of \cite[Theorem 3.18]{HPL} to obtain an alternative proof of the formula for the motive of $\Bun_{n,d}$ appearing in Theorem \ref{main thm}. \noindent{Freie Universit\"{a}t Berlin, Arnimallee 3, 14195 Berlin, Germany} \noindent{\texttt{[email protected], [email protected]}} \end{document}
\begin{document} \maketitle \begin{abstract} In this work, the author shows a necessary and sufficient condition for an integer of the form $\displaystyle{\frac{z^n-y^n}{z-y}}$ to be divisible by some perfect $m$th power $p^m$, where $p$ is an odd prime and $m$ is a positive integer. A constructive method for producing integers of this type is explained in detail, with examples. Links between the main result and known ideas such as Fermat's last theorem, the Goormaghtigh conjecture and Mersenne numbers are discussed. Other related ideas, examples and applications are provided. \end{abstract} \noindent {\it AMS Subj. Class.: 11A07; 11D41} \vskip2mm \noindent {\it Keywords:} primitive root modulo an integer; prime integer; perfect $n$th power; Fermat's last theorem; Goormaghtigh conjecture. \begin{section}{Introduction} Contrary to our expectations, while trying to prove Fermat's last theorem by showing that if $y$ and $z$ are relatively prime and $n$ and $p$ are odd prime integers, then $p^n$ does not divide $\displaystyle{\frac{z^n-y^n}{z-y}}$, we found that for almost every $p$ we can construct infinitely many integers of the form $\displaystyle{\frac{z^n-y^n}{z-y}}$ each of which is divisible by $p^n$. Not only that, but no matter how large the positive integer $m$ is, we can always construct integers of the form $\displaystyle{\frac{z^n-y^n}{z-y}}$ that are divisible by $p^m$. The main tool of our analysis in this work is the concept of a primitive root modulo an integer. Given a positive integer $p$, we say that $r$ is a primitive root modulo $p$ if $r$ is an integer relatively prime to $p$ and the smallest positive integer $a$ such that $r^a \equiv 1 ~~ (mod~~p)$ is $\phi(p)$, where $\phi$ denotes the well-known Euler function. A positive integer $n$ possesses a primitive root if and only if $n= 2, 4, p^t$ or $2p^t,$ where $p$ is an odd prime and $t$ is a positive integer \cite[Theorem 8.14]{KR}.
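The two computational facts driving this introduction are easy to check by machine. The sketch below (our illustration, not part of the paper) implements the multiplicative order and the primitive-root test straight from the definition above, and then exhibits one coprime pair $(z,y)$ for which $(z^3-y^3)/(z-y)$ is divisible by $7^3$, found by a brute-force search of the kind the paper's construction makes systematic.

```python
from math import gcd

def mult_order(r, n):
    """Smallest positive a with r^a = 1 (mod n); requires gcd(r, n) == 1."""
    assert gcd(r, n) == 1
    a, power = 1, r % n
    while power != 1:
        power = (power * r) % n
        a += 1
    return a

def phi(n):
    """Euler's totient function, computed by trial division (fine for small n)."""
    result, m, p = n, n, 2
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            result -= result // p
        p += 1
    if m > 1:
        result -= result // m
    return result

def is_primitive_root(r, n):
    """r is a primitive root mod n iff its multiplicative order equals phi(n)."""
    return gcd(r, n) == 1 and mult_order(r, n) == phi(n)

# 3 is a primitive root modulo 7, while 8 = 2^3 has no primitive root at all,
# consistent with the characterisation n = 2, 4, p^t or 2p^t quoted above.
assert is_primitive_root(3, 7)
assert not any(is_primitive_root(r, 8) for r in range(1, 8))

def quotient(z, y, n):
    """The integer (z^n - y^n)/(z - y) studied in the paper."""
    return (z**n - y**n) // (z - y)

# Brute-force search for a coprime pair (z, y) with 7^3 | (z^3 - y^3)/(z - y).
found = [(z, y) for y in range(1, 20) for z in range(y + 1, 400)
         if gcd(z, y) == 1 and quotient(z, y, 3) % 7**3 == 0]
print(found[0], quotient(*found[0], 3))  # → (18, 1) 343, and 343 = 7^3 exactly
```

Here the pair $(z,y)=(18,1)$ gives $(18^3-1)/17 = 343 = 7^3$, a minimal instance of the divisibility phenomenon described above.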
Another important fact about primitive roots is given by the following theorem, which we state as a lemma for its use in the proof of the main result. \begin{lemma}\label{l1} \cite[Theorem 8.9]{KR} Let $p$ be an odd prime. Then $p^k$ has a primitive root for every positive integer $k$. Moreover, if $r$ is a primitive root modulo $p^2$, then $r$ is a primitive root modulo $p^k$ for every positive integer $k$. \end{lemma} Note that there are some rare cases where a primitive root modulo $p$ is not a primitive root modulo $p^2$. As an example, the prime $p=487$ has the primitive root $r=10$, which is not a primitive root modulo $487^2$ \cite[Section 8.3]{KR}. More elementary facts about primitive roots modulo integers can be found in number theory textbooks such as \cite{DM}, \cite{KC}, \cite{KR}, \cite{NZM}, \cite{Ore} and \cite{UD}. Throughout the paper, the greatest common divisor of two integers $a$ and $b$ is denoted $(a,b)$. \end{section} \begin{section}{Main result} \hskip7mm \begin{comment} We start this section by reminding the reader of some important facts about primitive roots. Let $p$ be an odd prime and let $n$ be an integer strictly greater than $2$. It is well known that an integer $r$ that is a primitive root modulo $p^2$ is also a primitive root modulo $p^m$. It is also well known that, in general, there are some rare cases where a primitive root modulo $p$ is not necessarily a primitive root modulo $p^2$. As an example, the prime $p=487$ has the primitive root $r=10$, which is not a primitive root modulo $487^2$ \cite[Section 8.3]{KR}. Another important fact about primitive roots is given by the following theorem, which we state as a lemma for its use in the proof of the main result.
\begin{lemma}\label{l4} \cite[Theorem 8.9]{KR} Let $p$ be an odd prime. Then $p^k$ has a primitive root for every positive integer $k$. Moreover, if $r$ is a primitive root modulo $p^2$, then $r$ is a primitive root modulo $p^k$ for every positive integer $k$. \end{lemma} \end{comment} \begin{comment} \begin{lemma}\label{l5} Let $p$ be an odd prime and let $n$ be a positive integer with $n\geq 2$. If the integer $r$ is a primitive root modulo $p^n$, then $r$ is also a primitive root modulo $p^{n-1}, p^{n-2}, \dots, p$. \end{lemma} \begin{proof} First, recall that for every perfect $n$th power $p^n$, $n=1, 2, \dots$, of some odd prime $p$, there is a primitive root modulo $p^n$ \cite[Theorem 8.14]{KR}. Let $t$ be an integer relatively prime to $p$. Hence $t$ is relatively prime to $p^n$ for $n=2, 3, \dots~$ Suppose that $t$ is not a primitive root modulo $p^n$. Then there exists a positive integer $k$ strictly less than $\phi(p^n) = (p-1)p^{n-1}$ such that $t^k \equiv 1 \pmod{p^n}$. Let us show that $t$ is not a primitive root modulo $p^{n+1}$. If $t^k \equiv 1 \pmod{p^{n+1}}$, then, obviously, $t$ is not a primitive root modulo $p^{n+1}$. Suppose that $t^k \not\equiv 1 \pmod{p^{n+1}}$. Then \begin{equation}\label{array1} \begin{array}{ccl} t^{kp}-1 & = & (t^k-1) \displaystyle{\sum_{j=0}^{p-1} t^{jk}}\\ & = & (t^k-1) \displaystyle{\sum_{j=0}^{p-1} (t^{jk}-1+1)}\\ & = & (t^k-1) \Big(\displaystyle{\sum_{j=1}^{p-1} (t^{jk}-1) + p}\Big).\\ \end{array} \end{equation} By our assumption, $p^n$ divides $t^k -1$. Hence $p^n$ divides $t^{jk}-1$ for $j=1, 2, \dots$, and $p$ divides $\displaystyle{\sum_{j=1}^{p-1} (t^{jk}-1) + p}$. Combining these two facts with the last line of (\ref{array1}), we have that $t^{kp} \equiv 1 \pmod{p^{n+1}}$.
Since $kp < (p-1)p^{n}=\phi(p^{n+1})$, the integer $t$ cannot be a primitive root modulo $p^{n+1}$. This is equivalent to saying that if $t$ is a primitive root modulo $p^{n+1}$, then $t$ is a primitive root modulo $p^n$. Using induction, we conclude that $t$ is also a primitive root modulo $p^j$ for $j=n-1, n-2, \dots, 1$. \end{proof} \begin{cor}\label{c1} Let $r$ be a positive integer and let $p$ be an odd prime. If $r$ is a primitive root modulo $p^2$, then $r$ is a primitive root modulo $p^n$ for every positive integer $n$. \end{cor} \begin{proof} It is known that, for $m \geq 2$, $r$ is a primitive root modulo $p^m$ if and only if $r$ is a primitive root modulo $p^{m+1}$ \cite[Theorem 8.9]{KR}. The proof is then completed by Lemma \ref{l1}. \end{proof} \end{comment} The following lemma is needed in the proof of the main result and contains some ideas that are well known to mathematicians working on Fermat's last theorem. Nevertheless, we prefer to provide a proof because we could not find a reference where all three assertions of the lemma are proved together. \begin{lemma}\label{l2} Let $y$ and $z$ be two relatively prime integers with $z\neq y$ and let $n$ be an odd prime.\\ $~~~~~~~$ 1. If $n$ divides $z-y$, then $\Big(z-y,\, \frac{z^n-y^n}{z-y}\Big) = n$.\\ $~~~~~~~$ 2. If $n$ does not divide $z-y$, then $n$, $z-y$ and $\frac{z^n-y^n}{z-y}$ are pairwise relatively prime.\\ $~~~~~~~$ 3. $n^2$ does not divide $\frac{z^n-y^n}{z-y}$.
\end{lemma} \begin{proof} We have $$z^n = (z-y+y)^n = \sum_{i=2}^{n}\binom{n}{i}(z-y)^i\,y^{n-i} + n(z-y)y^{n-1} + y^n,$$ from which $$ \begin{array}{lcl} z^n - y^n & = & (z-y) \displaystyle{\Big[\sum_{i=2}^{n}\binom{n}{i}(z-y)^{i-1}\,y^{n-i} + ny^{n-1}\Big]}\\ & = & (z-y) \displaystyle{\Big[ (z-y) \Big\{\sum_{i=2}^{n}\binom{n}{i}(z-y)^{i-2}\,y^{n-i}\Big\} + ny^{n-1}\Big]}, \end{array} $$ so that \begin{equation}\label{f32} \frac{z^n - y^n}{z-y} = (z-y) \Big\{\sum_{i=2}^{n}\binom{n}{i}(z-y)^{i-2}\,y^{n-i}\Big\} + ny^{n-1}. \end{equation} Since $y$ and $z$ are relatively prime, the power $y^{n-1}$ and $z-y$ are relatively prime. Hence Formula (\ref{f32}) implies that $$ \Big(z-y,\, \frac{z^n-y^n}{z-y}\Big) = n \quad \text{if } n \text{ divides } z-y,$$ and $$ \Big(z-y,\, \frac{z^n-y^n}{z-y}\Big) = 1 \quad \text{if } n \text{ is relatively prime to } z-y.$$ Moreover, (\ref{f32}) can be rewritten as \begin{equation}\label{f33} \frac{z^n - y^n}{z-y} = (z-y)^{n-1} + \Big\{\sum_{i=1}^{n-1}\binom{n}{i}(z-y)^{i-1}\,y^{n-i}\Big\}. \end{equation} Since $n$ is prime, we have \begin{equation}\label{fh5} \Big(n, \binom{n}{i}\Big) = n \quad \text{for } i=1, 2, \dots, n-1. \end{equation} From (\ref{f33}) and (\ref{fh5}), we get \begin{equation}\label{fh4} \frac{z^n-y^n}{z-y} \equiv (z-y)^{n-1} \pmod{n}. \end{equation} It follows from (\ref{fh4}) that if $n$ is relatively prime to $z-y$, then $n$ and $\frac{z^n-y^n}{z-y}$ are relatively prime. This proves the second assertion. The third assertion of the lemma follows directly from the second one if $n$ does not divide $z-y$. Otherwise, suppose that $n$ divides $z-y$.
Then from (\ref{f32}) and (\ref{fh5}), we see easily that, in this case, \begin{equation}\label{fh6} \frac{z^n-y^n}{z-y} \equiv ny^{n-1} \pmod{n^2}. \end{equation} If $n^2$ divides $\frac{z^n-y^n}{z-y}$, then (\ref{fh6}) implies that $n$ divides $y$, so that $n$ also divides $z$ since it divides $z-y$. This contradicts our assumption that $y$ and $z$ are relatively prime. \end{proof} \begin{remark} The first two assertions of Lemma \ref{l2} apply to the case $n=2$, but the third one does not. For example, if we take $z=5$, $y=3$ and $n=2$, then $2^2$ divides $\frac{5^2-3^2}{5-3}=8$. \end{remark} Next we state and prove the main result. \begin{theorem}\label{t14} Let $y$ and $z$ be two distinct nonnegative integers and let $n$ be an odd prime. Let $p$ be an odd prime that is different from $n$ and relatively prime to $y$. Let $r$ be a primitive root modulo $p^2$ and let $m$ be a positive integer. Then $p^m$ divides $\frac{z^n-y^n}{z-y}$ if and only if $$n \text{ divides } p-1 \quad \text{and} \quad z \equiv y\,r^{cp^{m-1}} \pmod{p^{m}},$$ where $c$ is any integer that satisfies: \begin{enumerate} \item $0<c<p-1$. \item $p-1$ divides $nc$. \end{enumerate} \end{theorem} \begin{proof} First recall that, by Lemma \ref{l1}, $r$ is also a primitive root modulo $p^m$ for $m=1$ as well as for $m=3, 4, \dots$ Suppose that $n$ divides $p-1$ and \begin{equation}\label{fgg3} z \equiv y\,r^{cp^{m-1}} \pmod{p^{m}}, \end{equation} for some integer $c$ such that $0<c<p-1$ and $p-1$ divides $nc$. Formula (\ref{fgg3}) implies that $z^n \equiv y^n r^{ncp^{m-1}} \pmod{p^{m}}$. Since $p-1$ divides $nc$, it follows that $\phi(p^{m})$, which is equal to $(p-1)p^{m-1}$, divides $ncp^{m-1}$, and therefore \begin{equation}\label{fgg4} z^n \equiv y^n \pmod{p^{m}}.
\end{equation} Also, Formula (\ref{fgg3}) implies that $z \equiv y\,r^{cp^{m-1}} \pmod{p}$, which is equivalent to $z \equiv y\,r^c\,r^{c(p^{m-1}-1)} \pmod{p}$. Since $\phi(p)$, which is equal to $p-1$, divides $c(p^{m-1}-1)$, it follows that $z \equiv y\,r^c \pmod{p}$. By Lemma \ref{l1}, $r$ is a primitive root modulo $p$, and since $0<c<p-1$, we have $r^c \not\equiv 1 \pmod{p}$. Hence \begin{equation} \label{fgg5} z \not\equiv y \pmod{p}. \end{equation} It follows from (\ref{fgg4}) and (\ref{fgg5}) that $\frac{z^n-y^n}{z-y}$ is divisible by $p^{m}$.\\ Conversely, we have two different cases. \begin{enumerate} \item Case 1: $z$ and $y$ are relatively prime.\\ Suppose that $p^{m}$ divides $\frac{z^n-y^n}{z-y}$. Then $p^{m}$ divides $z^n-y^n$ or, equivalently, \begin{equation}\label{fgg6} z^n \equiv y^n \pmod{p^{m}}. \end{equation} Since $p$ is different from $n$ and divides $\frac{z^n-y^n}{z-y}$, Lemma \ref{l2} implies that $p$ does not divide $z-y$. Hence there exists an integer $k$ such that \begin{equation}\label{fhh1} 0<k<(p-1)p^{m-1} \end{equation} and \begin{equation}\label{fh1} z \equiv y\,r^{k} \pmod{p^{m}}. \end{equation} This implies \begin{equation}\label{fgg7} z^n \equiv y^n r^{nk} \pmod{p^{m}}. \end{equation} From (\ref{fgg6}) and (\ref{fgg7}) we have $y^n(1-r^{nk}) \equiv 0 \pmod{p^{m}}$, which leads to $1-r^{nk} \equiv 0 \pmod{p^{m}}$ since $y$ and $p$ are relatively prime. Therefore, \begin{equation}\label{fgg8} \phi(p^{m}), \text{ which is equal to } (p-1)p^{m-1}, \text{ divides } nk. \end{equation} Since $p\neq n$, the above expression implies that $p^{m-1}$ divides $k$ and, because $0<k<(p-1)p^{m-1}$, there exists an integer $c$ such that $0<c<p-1$ and \begin{equation}\label{fh2} k=c p^{m-1}.
\end{equation} From (\ref{fh2}) and (\ref{fgg8}), we have that $(p-1)p^{m-1}$ divides $n c p^{m-1}$. Thus, \begin{equation}\label{fh14} (p-1) \text{ divides } nc. \end{equation} Since $0<c<p-1$ and $n$ is prime, Formula (\ref{fh14}) implies that \begin{equation}\label{fh3} n \text{ divides } p-1. \end{equation} We complete the proof of this case by substituting (\ref{fh2}) into (\ref{fh1}) to obtain \begin{equation} z \equiv y\,r^{cp^{m-1}} \pmod{p^m}. \end{equation} \item Case 2: $(z,y) = q >1$.\\ Let $y'$ and $z'$ be such that $y = qy'$ and $z=qz'$. Then $(z',y') = 1$ and \begin{equation} \frac{z^n-y^n}{z-y} = q^{n-1}\, \frac{z'^n-y'^n}{z'-y'}. \end{equation} If $p^m$ divides $\frac{z^n-y^n}{z-y}$ with $p$ and $y$ relatively prime, then $p^m$ divides $\frac{z'^n-y'^n}{z'-y'}$. It follows, by Case 1, that $n$ divides $p-1$ and $z' \equiv y'r^{cp^{m-1}} \pmod{p^m}$, so that $z \equiv yr^{cp^{m-1}} \pmod{p^m}$, where $c$ is an integer such that $0<c<p-1$ and $p-1$ divides $nc$. \end{enumerate} \end{proof} \begin{remark}\label{r2} The integer $c$ is even and different from $\frac{p-1}{2}$. If $c_1$ satisfies $n\,c_1 = p-1$, then the integer $c$ takes all the values $c_1$, $c_2 = 2c_1$, $c_3 = 3c_1$, \dots, $c_{n-1} = (n-1)c_1$. That makes a total of $n-1$ values. Notice also that if $z = y r^{c_i p^{m-1}}$ for some index $i \in \{1, 2, \dots, n-1\}$, then, by the symmetry between $z$ and $y$, we have $y = z r^{c_j p^{m-1}}$ for some $j \in \{1, 2, \dots, n-1\}$ such that $c_i + c_j = p-1$. Moreover, $c_i \neq c_j$, for if they were equal, then we would have $c = \frac{p-1}{2}$, which is impossible, as already mentioned. \end{remark} \begin{remark}\label{r3} Note that, in the statement of Theorem \ref{t14}, the condition $p\neq n$ needs to be stated for the case $m=1$ only.
If $m \geq 2$ and $p^m$ divides $\frac{z^n-y^n}{z-y}$, then the third assertion of Lemma \ref{l2} ensures that $p \neq n$. \end{remark} \begin{remark}\label{r1} Observe that $n$ divides $\frac{p-1}{2}$ in Theorem \ref{t14}. Therefore, if $p < 2n+1$, then $p^m$ does not divide $\frac{z^n-y^n}{z-y}$. \end{remark} \begin{remark} If an odd prime $q$ divides $\frac{z^n-y^n}{z-y}$ but $n$ does not divide $q-1$, then by Theorem \ref{t14} and Lemma \ref{l2}, $q$ is equal to $n$ and divides $z-y$. \end{remark} \begin{example} The Goormaghtigh conjecture states that the Diophantine equation $$ \frac{x^{n_1}-1}{x-1}=\frac{y^{n_2}-1}{y-1}, \quad x>y>1 \text{ and } n_1,n_2>2,$$ is satisfied in only two trivial cases: $$\frac{5^{3}-1}{5-1} = \frac{2^{5}-1}{2-1} = 31$$ and $$\frac{90^{3}-1}{90-1} = \frac{2^{13}-1}{2-1} = 8191.$$ The condition imposed by Theorem \ref{t14}, that $n$ divides $p-1$, is satisfied in both cases. In the first case, $n_1=3$ divides $p-1 = 30 = (2)(3)(5)$. In the second case, $p=8191$ is a prime number and each of $n_1=3$ and $n_2=13$ divides $p-1=8190=(2)(3^2)(5)(7)(13)$. \end{example} \begin{example} A Mersenne number is an integer of the form $2^n-1$; therefore, it is of the form $\frac{z^n-y^n}{z-y}$. It is well known that if a prime $p$ divides $2^n-1$, where $n$ is an odd prime, then $n$ divides $p-1$. This fact is in accordance with Theorem \ref{t14}. It means that for every odd prime $n$, there is another prime $p$ strictly larger than $n$. As is known, this idea implies the infinitude of the primes. \end{example} Two particular cases of Theorem \ref{t14} are $m=1$ and $m=n$. We state the second one as a corollary because of its connection with Fermat's last theorem. \begin{cor} \label{fb1} Let $y$ and $z$ be two distinct nonnegative integers.
Let $p$ be an odd prime relatively prime to $y$, let $r$ be a primitive root modulo $p^2$, and let $n$ be an odd prime. Then $p^n$ divides $\frac{z^n-y^n}{z-y}$ if and only if $$n \text{ divides } p-1 \quad \text{and} \quad z \equiv y\,r^{cp^{n-1}} \pmod{p^{n}},$$ where $c$ is any integer that satisfies: \begin{enumerate} \item $0<c<p-1$. \item $p-1$ divides $nc$. \end{enumerate} \end{cor} \begin{cor}\label{c2} Let $y$ and $z$ be two distinct nonnegative integers and let $n$ be an odd prime. Let $p$ be an odd prime different from $n$, relatively prime to $y$, and of the form $p = 2^k +1$ for some positive integer $k$. Then $p$ does not divide $\frac{z^n-y^n}{z-y}$. \end{cor} \begin{proof} This follows immediately from Theorem \ref{t14}, since no odd prime $n$ divides $p-1 = 2^k$. \end{proof} As a complement to Theorem \ref{t14}, we show that integers of the form $\frac{z^n-y^n}{z-y}$ are not divisible by $2$, provided that $z$ and $y$ are not both even and $n$ is an odd prime. \begin{theorem}\label{t12} Let $y$ and $z$ be two distinct nonnegative integers, not both even, and let $n$ be an odd prime. Then $2$ does not divide $\frac{z^n - y^n}{z-y}$. \end{theorem} \begin{proof} It suffices to show that $\frac{z^n-y^n}{z-y}$ is an odd integer. If one of $y$ and $z$ is odd and the other is even, then both $z^n-y^n$ and $z-y$ are odd integers; hence their quotient $\frac{z^n-y^n}{z-y}$ is also odd. If each of $y$ and $z$ is odd, then $z-y$ is even. Hence $\frac{z^n-y^n}{z-y}$ has to be odd since, by Lemma \ref{l2}, $\Big(\frac{z^n-y^n}{z-y},\, z-y\Big) = 1$ or $n$.
\end{proof} \end{section} \begin{section}{Some applications of Theorem \ref{t14}} \begin{subsection}{Construction of integers of the form $\frac{z^n-y^n}{z-y}$ divisible by $p^m$} Theorem \ref{t14}, besides being a characterization, is also constructive. In other words, if $y, p, n, m$ are as in Theorem \ref{t14}, $c_1=\frac{p-1}{n}$ and $r$ is a primitive root modulo $p^2$, then we can construct the set $\xi(y,p,n,m,c_1)$ of integers of the form $\frac{z^n-y^n}{z-y}$ that are divisible by $p^m$: \begin{equation}\label{fh12} \xi(y,p,n,m,c_1) = \Big\{ \frac{z^n-y^n}{z-y} ~\Big|~ z \equiv y\,r^{c_1 p^{m-1}} \pmod{p^m}\Big\}. \end{equation} As explained in Remark \ref{r2}, the integer $c_1$ can be replaced by $c_i = i\, c_1$ for $i =1, 2, \dots, n-1$, so that we can construct the sets \begin{equation}\label{fh13} \xi(y,p,n,m,c_i) = \Big\{ \frac{z^n-y^n}{z-y} ~\Big|~ z \equiv y\,r^{c_i p^{m-1}} \pmod{p^m}\Big\}, \quad i=1, 2, \dots, n-1. \end{equation} The union, over $i$, of the above sets is \begin{equation} \xi(y,p,n,m) = \bigcup_{i=1}^{n-1} \xi(y,p,n,m,c_i). \end{equation} Let $\xi(p,n,m)$ be the set of all integers of the form $\frac{z^n-y^n}{z-y}$ that are divisible by $p^m$, with $y$ relatively prime to $p$. Then $\xi(p,n,m)$ is obtained by taking the union of the sets $\xi(y,p,n,m)$ over all possible values of $y$: \begin{equation} \xi(p,n,m) = \bigcup_{\substack{y \in \mathbb{N}\\p \nmid y}}\xi(y,p,n,m) = \bigcup_{\substack{y \in \mathbb{N}\\ p \nmid y}} \bigcup_{i=1}^{n-1} \xi(y,p,n,m,c_i). \end{equation} \begin{remark} Unless $p=3$, there are many primitive roots that are incongruent modulo $p^2$.
However, we do not consider $r$ to be a parameter in the construction of $\xi(p,n,m)$, since this set remains invariant if we replace $r$ by another primitive root modulo $p^2$. This can be easily verified. \end{remark} Suppose that $\frac{z^n-y^n}{z-y} \in \xi(y,p,n,m,c_2)$. Then $$z \equiv y\,r^{c_2 p^{m-1}} \pmod{p^m}.$$ Since $c_2 = 2c_1$, the above congruence can be rewritten as $$z \equiv \big(y\,r^{c_1 p^{m-1}}\big)\,r^{c_1 p^{m-1}} \pmod{p^m}.$$ Letting $y' = y\,r^{c_1 p^{m-1}}$, we obtain $$z \equiv y'\,r^{c_1 p^{m-1}} \pmod{p^m},$$ so that $\frac{z^n - y'^n}{z-y'} \in \xi(y',p,n,m,c_1)$. The above reasoning shows that \begin{equation} \bigcup_{\substack{y \in \mathbb{N}\\p \nmid y}} \bigcup_{i=1}^{n-1} \xi(y,p,n,m,c_i) =\bigcup_{\substack{y\in \mathbb{N}\\p \nmid y}} \xi(y,p,n,m,c_1). \end{equation} Therefore, we have the following corollary. \begin{cor}\label{c9} Let $p$ be an odd prime for which there exists another odd prime $n$ such that $p-1 = n\,c$ for some positive integer $c$. Let $r$ be a primitive root modulo $p^2$. Then \begin{equation}\label{fb2} \xi(p,n,m) = \bigcup_{\substack{y\in \mathbb{N}\\p \nmid y}} \Big\{ \frac{z^n-y^n}{z-y} ~\Big|~ z \equiv y\,r^{c\,p^{m-1}} \pmod{p^m}\Big\} \end{equation} is the set of all integers of the form $\frac{z^n-y^n}{z-y}$ that are divisible by $p^m$, where $p$ does not divide $y$ and $m$ is a positive integer. \end{cor} \begin{remark} Notice that no matter how large the integer $m$ is, we can construct infinitely many integers of the form $\frac{z^n-y^n}{z-y}$ divisible by $p^m$.
Notice also that $$ \xi(p,n,m) \subseteq \xi(p,n,m'), \quad \text{for } 1\leq m' < m.$$ \end{remark} \begin{example}\label{e1} Let us construct an integer of the form $\frac{z^3-y^3}{z-y}$ that is divisible by $7^3$. Take $p=7$, $r=3$, $n=3$, $c=2$ and $y=1$. We have that $n=3$ divides $p-1=6$ and $nc=6=p-1$. Construct the integer $z=r^{c\,p^{m-1}} = 3^{98}$. Then, by Theorem \ref{t14}, $$ 7^3 = 343 \text{ divides } \frac{(3^{98})^3-1}{3^{98}-1}.$$ Of course, this is a huge number. But Theorem \ref{t14} ensures that we can use positive numbers that are smaller than, and congruent to, $z$ modulo $p^m$. Using a calculator, we easily find that $3^{98} \equiv 324 \pmod{7^3}$. Indeed, $$\frac{324^3-1}{324-1} = 105301 = (307)(7^3).$$ \end{example} Now let us ask a question:\\ Is it true that, for an odd prime $n$, there are infinitely many odd primes $p$ such that $n$ divides $p-1$?\\ Consider the set $$\xi(y=1,p,n,m=1) = \Big\{ \frac{z^n-1}{z-1} ~\Big|~ z \equiv r^c \pmod{p}\Big\}$$ and let $E$ be the set of all odd primes $p$ such that $p$ divides some element of $\xi(y=1,p,n,m=1)$. Since, by Theorem \ref{t14}, $n$ divides $p-1$ for every $p \in E$, an affirmative answer to the above question can be obtained if we prove that there are infinitely many elements in $E$. This seems plausible because any two elements of $\xi(y=1,p,n,m=1)$ most likely have different prime decompositions. \end{subsection} \begin{subsection}{Proving a general fact about congruences modulo $p^m$} Besides its constructive aspect, Theorem \ref{t14} has other applications, such as the following. \begin{cor}\label{c4} Let $p$ be an odd prime for which there exists another prime $n$ such that $n$ divides $p-1$. Let $r$ be a primitive root modulo $p^2$. Let $c$ be an integer such that $0<c<p-1$ and $p-1$ divides $nc$.
Then, for every positive integer $m$, we have \begin{equation}\label{e4} \sum_{k=0}^{n-1} r^{kcp^{m-1}} \equiv 0 \pmod{p^m}. \end{equation} In particular, for $m=1$, we have \begin{equation}\label{e5} \sum_{k=0}^{n-1} r^{kc} \equiv 0 \pmod{p}. \end{equation} \end{cor} \begin{proof} We choose an integer $y$ relatively prime to $p$, and we construct the integer \begin{equation}\label{fh7} z = y r^{cp^{m-1}}. \end{equation} By Theorem \ref{t14}, we have $\frac{z^n-y^n}{z-y} \equiv 0 \pmod{p^m}$, which is equivalent to \begin{equation}\label{fh8} \sum_{k=0}^{n-1}z^k\,y^{n-k-1} \equiv 0 \pmod{p^m}. \end{equation} Substituting (\ref{fh7}) into (\ref{fh8}), we obtain \begin{equation}\label{fh9} y^{n-1}\sum_{k=0}^{n-1}r^{kcp^{m-1}} \equiv 0 \pmod{p^m}. \end{equation} Since $y$ and $p$ are relatively prime, it follows from (\ref{fh9}) that \begin{equation} \sum_{k=0}^{n-1}r^{kcp^{m-1}} \equiv 0 \pmod{p^m}. \end{equation} \end{proof} \begin{remark} It is well known that if $r$ is a primitive root modulo $p$, then \begin{equation}\label{e3} \sum_{k=1}^{p-1}r^k \equiv 0 \pmod{p}. \end{equation} To see this, recall that $r^1, r^2, \dots, r^{p-1}$ form a reduced residue system modulo $p$, so the sum is congruent to $1+2+\dots+(p-1) = \frac{p(p-1)}{2} \equiv 0 \pmod{p}$. A question that arises is: do we have a similar formula for an integer $t$ that is not a primitive root modulo $p$? The above corollary gives a partial answer to this question by means of Formula (\ref{e5}), which can be considered as an extension of Formula (\ref{e3}). In fact, if $t = r^{c}$, then $t$ is not a primitive root modulo $p$, since $0<c<p-1$ and $\big(c,p-1\big)\neq 1$. Formula (\ref{e5}) then becomes \begin{equation}\label{e6} \sum_{k=0}^{n-1} t^k \equiv 0 \pmod{p}. \end{equation} Note that $n < \frac{p}{2}$.
That is, the number of summands in (\ref{e6}) is less than half of that in (\ref{e3}). \end{remark} \begin{example}\label{e2} As in Example \ref{e1}, we take $p=7$, $r= 3$, $n=3$ and $c=2$. If we let $m=n=3$, then we have $$ \begin{array}{ccl} \sum_{k=0}^{n-1}r^{kcp^{m-1}} & = & \sum_{k=0}^{2}3^{98k}\\ & = & 1+3^{98}+3^{196}\\ & \equiv & 1 + 324 + 324^2 \pmod{7^3}\\ & \equiv & 1 + 324 + (-19)^2 \pmod{343}\\ & \equiv & 1 + 324 + 361 \pmod{343}\\ & \equiv & 0 \pmod{7^3}.\\ \end{array} $$ By the same reasoning, if $m=1$, then $$\sum_{k=0}^{2} 3^{kc} = 3^0+3^2+3^4 = 91 \equiv 0 \pmod{7}.$$ Another primitive root of $7$ is the integer $5$. For $m=1$, we have $$\sum_{k=0}^{2} 5^{kc} = 5^0+5^2+5^4 = 651 \equiv 0 \pmod{7}.$$ \end{example} \end{subsection} \begin{subsection}{The case where the $m$th power of a composite integer divides $\frac{z^n-y^n}{z-y}$} Let $p_1, p_2, \dots, p_k$ be $k$ distinct odd primes, each of which is different from $n$ and relatively prime to $y$. Suppose that the product $\prod_{i=1}^{k} p_i^{m_i}$ divides $\frac{z^n-y^n}{z-y}$, where $m_1, m_2, \dots, m_k$ are $k$ positive integers. According to Theorem \ref{t14}, this holds if and only if $n$ divides $p_i -1$ for $i=1, 2, \dots, k$ and \begin{align*} &z \equiv y\,r_1^{c_1 p_1^{m_1-1}} \pmod{p_1^{m_1}}&\\ &z \equiv y\,r_2^{c_2 p_2^{m_2-1}} \pmod{p_2^{m_2}}&\\ &\dots&\\ &z \equiv y\,r_k^{c_k p_k^{m_k-1}} \pmod{p_k^{m_k}},&\\ \end{align*} where, for $i=1, 2, \dots, k$, $r_i$ is a primitive root modulo $p_i^2$, the integer $c_i$ satisfies $0<c_i< p_i-1$, and $p_i-1$ divides $nc_i$.
By the Chinese remainder theorem, the above system of congruences holds if and only if \begin{align} z & \equiv \sum_{i=1}^k \Big(y\,r_i^{c_i p_i^{m_i-1}}\Big)\Big(M_i\,q_i\Big) \pmod{p_1^{m_1} p_2^{m_2}\cdots p_k^{m_k}} \notag \\ & \equiv y \sum_{i=1}^k M_i\,q_i\, r_i^{c_i p_i^{m_i-1}} \pmod{p_1^{m_1} p_2^{m_2}\cdots p_k^{m_k}}, \end{align} where $M_i = \frac{\prod_{j=1}^k p_j^{m_j}}{p_i^{m_i}}$ and $q_i$ is any integer that satisfies $M_i q_i \equiv 1 \pmod{p_i^{m_i}}$. The following corollary summarizes the above result. \begin{cor}\label{c8} Let $y$ and $z$ be two relatively prime integers and let $n$ be an odd prime. Let $p_1, p_2, \dots, p_k$ be $k$ distinct odd primes, each of which is different from $n$, and let $r_1, r_2, \dots, r_k$ be, respectively, primitive roots modulo $p_1^2, p_2^2, \dots, p_k^2$. Let $m_1, m_2, \dots, m_k$ be $k$ positive integers. Then the product $\prod_{i=1}^k p_i^{m_i}$ divides $\frac{z^n-y^n}{z-y}$ if and only if \begin{equation} n~\text{divides}~p_i-1, \quad \text{for } i=1, 2, \dots, k, \end{equation} and \begin{equation} z \equiv y \sum_{i=1}^k M_i\,q_i\, r_i^{c_i p_i^{m_i-1}} \pmod{p_1^{m_1} p_2^{m_2}\cdots p_k^{m_k}}, \end{equation} where \begin{enumerate} \item $M_i = \frac{\prod_{j=1}^k p_j^{m_j}}{p_i^{m_i}}$,\\ \item $q_i$ is any integer that satisfies $M_i q_i \equiv 1 \pmod{p_i^{m_i}}$,\\ \item $c_i$ is an integer such that $0 < c_i < p_i -1$ and $p_i-1$ divides $nc_i$.
\end{enumerate} \end{cor} \end{subsection} \end{section} \begin{section}{Connection with Fermat's last theorem} Fermat's last theorem \cite{TW} states: \begin{theorem}\label{flt} For every positive integer $n$ with $n \geq 3$, no positive integers $x, y$ and $z$ satisfy $$z^n = x^n + y^n.$$ \end{theorem} This theorem, which was proved around 1995 \cite{TW}, implies the following fact. \begin{cor}\label{c7} Let $z$ and $y$ be two relatively prime positive integers, and let $n$ be an odd prime. If $z-y$ is a perfect $n$th power, then $\frac{z^n-y^n}{z-y}$ is not a perfect $n$th power. In particular, if $z-y=1$, then $\frac{z^n-y^n}{z-y}$ is not a perfect $n$th power. \end{cor} We have proved in this work that $\frac{z^n-y^n}{z-y}$ can be a multiple of some perfect $n$th power $p^n$. But we do not know whether $\frac{z^n-y^n}{z-y}$ itself can be a perfect $n$th power of the form $(p_1p_2\dots)^n$ for not necessarily distinct primes $p_1, p_2, \dots$ Going back to Formula (\ref{fb2}) and looking at how large the set $\xi(p,n,m)$ is, and at the degree of freedom that we have in constructing such a set by varying the parameters $p$ and $n$, one may believe that there is a chance for some elements of $\xi(p,n,m)$ to be perfect $n$th powers. For instance, consider the number $a= \frac{16^3 - 5^3}{16-5}$, which is equal to $19^2$. Of course, the integer $a$ is not a perfect $n$th power, since here $n=3$. But it is indeed a perfect power; moreover, it is a perfect power of a prime. Since such a number $a$ exists and is remarkably small, we believe that nothing impedes the existence of a perfect $n$th power of the form $\frac{z^n-y^n}{z-y}$.
However, it may turn out that the smallest of these numbers is tremendously large and therefore difficult to reach with a computer. Perhaps a constructive proof is the best way to find such integers, if they exist.\\ For mathematicians seeking a proof of Fermat's last theorem by classical methods, we have a little result that may be of some use, and which is a consequence of Theorem \ref{t14}. \begin{theorem} Suppose that there are pairwise relatively prime positive integers $x, y, z$ such that $z^n = x^n+y^n$, where $n$ is an odd prime. If $p$ is an odd prime such that $p\neq n$ and $p$ divides $\frac{x^ny^nz^n}{(z-x)(z-y)(x+y)}$, then $n$ divides $p-1$. \end{theorem} \begin{proof} Since $x^n = z^n - y^n$, $y^n = z^n - x^n$ and $z^n = x^n - (-y)^n$, the prime $p$ divides one of the integers $\frac{x^n}{z-y} = \frac{z^n-y^n}{z-y}$, $\frac{y^n}{z-x} = \frac{z^n-x^n}{z-x}$ and $\frac{z^n}{x+y} = \frac{x^n-(-y)^n}{x-(-y)}$. Then, by Theorem \ref{t14}, $n$ divides $p-1$. \end{proof} \end{section} \begin{section}{Conclusion} We believe that much can be done with the integers of the form $\frac{z^n-y^n}{z-y}$. A better understanding of this type of integer may lead to a more accessible proof of Fermat's last theorem, as well as to the solutions of other Diophantine equations. For instance, observations made on a few integers of the form $\frac{z^n-y^n}{z-y}$ give us the impression that there may always be some prime divisor $p$ of $\frac{z^n-y^n}{z-y}$ that is greater than each of $|z|$ and $|y|$. We strongly believe that this observation holds for all integers of the form $\frac{z^n-y^n}{z-y}$; therefore, we state it as a conjecture. \begin{conjecture} Let $z$ and $y$ be two relatively prime positive integers with $z>y$ and let $n$ be an odd prime. There is a prime divisor $p$ of $\frac{z^n-y^n}{z-y}$ such that $p>z$.
\end{conjecture} If this conjecture is true, then Fermat's last theorem will be an immediate consequence of it. \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|} \hline $y$ & $z$ &$\frac{z^3-y^3}{z-y}$ & prime decomp & $p, ~ p>z$ & &$y$ & $z$ &$\frac{z^5-y^5}{z-y}$ & prime decomp & $p, ~ p>z$ \\ \hline $5$ & $6$ & $91$ & $7*13$ & $13$ & &$7$ & $8$ & $15961$ & $11*1451$ & $1451 $ \\ \hline $5$ & $7$ & $109$ & prime & $109$ & &$7$ & $9$ & $21121$ & prime & $21121$ \\ \hline $5$ & $8$ & $129$ & $3*43$ & $43$ & &$7$ & $10$ & $27731$ & $11*2521$ & $2521$ \\ \hline $5$ & $9$ & $151$ & prime & $151$ & &$7$ & $11$ & $36061$ & prime & $36061$ \\ \hline $5$ & $11$ & $201$ & $3*67$ & $67$ & &$7$ & $12$ & $46405$ & $5*9281$ & $9281$ \\ \hline $5$ & $12$ & $229$ & prime & $229$ & &$7$ & $13$ & $59081$ & $11*41*131$ & $131, 41$ \\ \hline $5$ & $13$ & $259$ & $7*37$ & $37$ & &$7$ & $15$ & $92821$ & prime & $92821$ \\ \hline $5$ & $14$ & $291$ & $3*97$ & $97$ & &$7$ & $16$ & $114641$ & prime & $114641$ \\ \hline $5$ & $16$ & $361$ & $19*19$ & $19$ & &$7$ & $17$ & $140305$ & $5*11*2551$ & $2551$ \\ \hline $5$ & $17$ & $399$ & $3*7*19$ & $19$ & &$7$ & $18$ & $170251$ & $61*2791$ & $2791$ \\ \hline $5$ & $18$ & $439$ & prime & $439$ & &$7$ & $19$ & $204941$ & $11*31*601$ & $601, 31$ \\ \hline $5$ & $19$ & $481$ & $13*37$ & $37$ & &$7$ & $20$ & $244861$ & prime & $244861$ \\ \hline $5$ & $21$ & $571$ & prime & $571$ & &$7$ & $22$ & $342455$ & $5*68491$ & $68491$ \\ \hline $5$ & $22$ & $619$ & prime & $619$ & &$7$ & $23$ & $401221$ & $71*5651$ & $71, 5651$ \\ \hline $5$ & $23$ & $669$ & $3*223$ & $223$ & &$7$ & $24$ & $467401$ & $11*42491$& $42491$ \\ \hline $5$ & $24$ & $721$ & $7*103$ & $103$ & &$7$ & $25$ & $541601$ & $31*17471$& $31, 17471$\\ \hline $5$ & $26$ & $831$ & $3*277$ & $277$ & &$7$ & $26$ & $624451$ & prime & $624451$ \\ \hline $5$ & $27$ & $889$ & $7*127$ & $127$ & &$7$ & $27$ & $716605$ & $5*251*571$ &
$251, 571$ \\ \hline $5$ & $28$ & $949$ & $13*73$ & $73$ & & $7$ & $29$ & $931561$ & $41*22721$ & $41, 22721$ \\ \hline $5$ & $29$ & $1011$ & $3*337$ & $337$ & & $7$ & $30$ & $1055791$ & $11*41*2341$ & $41, 2341$ \\ \hline $5$ & $31$ & $1141$ & $7*163$ & $163$ & & $7$ & $31$ & $1192181$ & prime & $1192181$ \\ \hline \end{tabular} \vskip3mm Table 1: The prime decomposition of some small numbers of the form $\displaystyle{\frac{z^n-y^n}{z-y}}$.\\ Each one of them has a prime divisor that is larger than $z$. \end{center} \end{section} {\bf \Large Acknowledgements} \vskip3mm The author would like to thank Abdullah Laaradji from King Fahd University of Petroleum and Minerals for his very useful comments and suggestions.\\ \begin{thebibliography}{30} \bibitem{DM} D. M. Burton, Elementary Number Theory, Allyn and Bacon, Inc., Boston, 1980. \bibitem{Gauss} C. F. Gauss, Disquisitiones Arithmeticae, English translation, Yale University Press, New Haven, 1986. \bibitem{KC} R. Kumanduri and C. Romero, Number Theory with Computer Applications, Prentice Hall, New Jersey, 1998. \bibitem{KR} K. H. Rosen, Elementary Number Theory and Its Applications, Addison-Wesley, Massachusetts, 1984. \bibitem{NZM} I. Niven, H. S. Zuckerman, and H. L. Montgomery, An Introduction to the Theory of Numbers, 5th ed., John Wiley and Sons, New York, 1991. \bibitem{Ore} O. Ore, Number Theory and Its History, Dover Publications, Inc., New York, 1988. \bibitem{PR} P. Ribenboim, Fermat's Last Theorem for Amateurs, Springer-Verlag, New York, 1999. \bibitem{TW} R. Taylor and A. Wiles, Ring-theoretic properties of certain Hecke algebras, Annals of Math. 141 (1995), 553--572. \bibitem{UD} U. Dudley, Elementary Number Theory, W. H. Freeman and Company, San Francisco, 1969. \end{thebibliography} \end{document}
\begin{document} \title[Equations in three singular moduli]{Equations in three singular moduli: the equal exponent case} \author{Guy Fowler} \address{Leibniz Universit\"{a}t Hannover, Institut für Algebra, Zahlentheorie und Diskrete Mathematik, Welfengarten 1, 30167 Hannover, Germany.} \email{\href{mailto:[email protected]}{[email protected]}} \urladdr{\url{https://www.guyfowler.uk/}} \date{\today} \thanks{\textit{Acknowledgements}: The author would like to thank Jonathan Pila for helpful comments and the referee for a careful reading of this paper. The author has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 945714).} \begin{abstract} Let $a \in \mathbb{Z}_{>0}$ and $\epsilon_1, \epsilon_2, \epsilon_3 \in \{\pm 1\}$. We classify explicitly all singular moduli $x_1, x_2, x_3$ satisfying either $\epsilon_1 x_1^a + \epsilon_2 x_2^a + \epsilon_3 x_3^a \in \mathbb{Q}$ or $(x_1^{\epsilon_1} x_2^{\epsilon_2} x_3^{\epsilon_3})^{a} \in \mathbb{Q}^{\times}$. In particular, we show that all the solutions in singular moduli $x_1, x_2, x_3$ to the Fermat equations $x_1^a + x_2^a + x_3^a= 0$ and $x_1^a + x_2^a - x_3^a= 0$ satisfy $x_1 x_2 x_3 = 0$. Our proofs use a generalisation of a result of Faye and Riffaut on the fields generated by sums and products of two singular moduli, which we also establish. \end{abstract} \maketitle \section{Introduction}\label{sec:intro} Denote by $\mathbb{H}$ the complex upper half plane. Let $j \colon \mathbb{H} \to \mathbb{C}$ be the modular $j$-function. A singular modulus is a complex number $j(\tau)$, where $\tau \in \mathbb{H}$ is such that $[\mathbb{Q}(\tau) : \mathbb{Q}]=2$. Equivalently, a singular modulus is the $j$-invariant of an elliptic curve (over $\mathbb{C}$) with complex multiplication. Singular moduli are algebraic integers which generate the ring class fields of imaginary quadratic fields. For background, see e.g.
\cite[Chapter~3]{Cox89}. Let $n \in \mathbb{Z}_{>0}$. A CM-point of $\mathbb{C}^n$ is a point $(\sigma_1, \ldots, \sigma_n) \in \mathbb{C}^n$ such that each $\sigma_i$ is a singular modulus. A special subvariety of $\mathbb{C}^n$ is an irreducible component of some subvariety of $\mathbb{C}^n$ which is defined by equations of the form $x_i = \sigma$ and $\Phi_N(x_l, x_k)=0$, where $\sigma$ is a singular modulus and $N \in \mathbb{Z}_{>0}$. Here $\Phi_N$ denotes the $N$th classical modular polynomial (see \cite[\S11]{Cox89}). In particular, a CM-point is a zero-dimensional special subvariety. Let $V \subset \mathbb{C}^n$ be a subvariety. The Andr\'e--Oort conjecture for $\mathbb{C}^n$, which was proved by Pila \cite{Pila11} (see Pila's paper and the references therein for background on the conjecture), states that $V$ contains only finitely many maximal special subvarieties. In particular, $V$ contains only finitely many CM-points which do not lie on positive-dimensional special subvarieties of $V$. Pila's theorem is ineffective; it does not provide an effective means of finding all the special subvarieties of a given $V$. For $n=2$, an effective version of the Andr\'e--Oort conjecture was proved by K\"uhne \cite{Kuhne12, Kuhne13} and Bilu, Masser, and Zannier \cite{BiluMasserZannier13} (see also Andr\'e's earlier ineffective proof \cite{Andre98} of the $n=2$ case). For $n > 2$, effective forms of the Andr\'e--Oort conjecture are known only for some restricted classes of subvarieties \cite{BiluKuhne20, Binyamini19}. Alongside this work, there have also been a number of results which classify explicitly all the CM-points lying on particular families of algebraic curves contained in $\mathbb{C}^2$, e.g. \cite{AllombertBiluMadariaga15, BiluLucaMadariaga16, BiluLucaMadariaga20, LucaRiffaut19, Riffaut19}. In this article, we prove the following two results, which consider instead two families of algebraic surfaces in $\mathbb{C}^3$. 
\begin{thm}\label{thm:sum} Let $a \in \mathbb{Z}_{>0}$ and $\epsilon_1, \epsilon_2, \epsilon_3 \in \{\pm 1\}$. Let $x_1, x_2, x_3$ be singular moduli. Then \[ \epsilon_1 x_1^a + \epsilon_2 x_2^a + \epsilon_3 x_3^a \in \mathbb{Q} \] if and only if one of the following holds: \begin{enumerate} \item $x_1, x_2, x_3 \in \mathbb{Q}$; \item some $x_i \in \mathbb{Q}$ and, for the remaining $x_j, x_k$, one has that $x_j = x_k$ and $\epsilon_j + \epsilon_k =0$; \item some $x_i \in \mathbb{Q}$ and the remaining $x_j, x_k$ are distinct, of degree $2$, and conjugate over $\mathbb{Q}$ with $\epsilon_j = \epsilon_k$; \item $\epsilon_1 = \epsilon_2 = \epsilon_3$ and $x_1, x_2, x_3$ are pairwise distinct, of degree $3$, and conjugate over $\mathbb{Q}$. \end{enumerate} \end{thm} \begin{thm}\label{thm:prod} Let $x_1, x_2, x_3$ be singular moduli and $a \in \mathbb{Z} \setminus \{0\}$. Then \[(x_1 x_2 x_3)^a \in \mathbb{Q}^\times\] if and only if one of the following holds: \begin{enumerate} \item $x_1, x_2, x_3 \in \mathbb{Q}^\times$; \item some $x_i \in \mathbb{Q}^\times$ and the remaining $x_j, x_k$ are distinct, of degree $2$, and conjugate over $\mathbb{Q}$; \item $x_1, x_2, x_3$ are pairwise distinct, of degree $3$, and conjugate over $\mathbb{Q}$. \end{enumerate} \end{thm} Since $0=j(e^{2\pi i /3})$ is a singular modulus, we exclude the case of product $0$ in Theorem~\ref{thm:prod}. There are $13$ rational singular moduli (including $0$), $58$ singular moduli of degree $2$, and $75$ singular moduli of degree $3$. The list of these may be computed straightforwardly in PARI \cite{PARI2}. Theorems~\ref{thm:sum} and \ref{thm:prod} are thus completely explicit. The $a=1$ case of Theorem~\ref{thm:prod} was proved previously in the author's paper \cite{Fowler20}. To our knowledge, this is the only other explicit result about CM-points lying on some algebraic surfaces in $\mathbb{C}^3$ to have appeared previously. 
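As a small independent check of the Fermat-type consequence stated in Corollary~\ref{cor:sum} below, one can brute-force the equations over the $13$ rational singular moduli. The following Python sketch is ours, not part of the paper (whose computations use PARI); the hard-coded values are the standard $j$-invariants of the imaginary quadratic orders of class number one.

```python
from itertools import product

# The 13 rational singular moduli (standard values, supplied here as an
# assumption; the paper does not list them explicitly).
RATIONAL_J = [
    0, 1728, -3375, 8000, -32768, 54000, 287496, -884736,
    -12288000, 16581375, -884736000, -147197952000,
    -262537412640768000,
]

def fermat_solutions(a):
    """Triples (x1, x2, x3) of nonzero rational singular moduli with
    x1^a + x2^a + x3^a == 0 or x1^a + x2^a == x3^a."""
    nonzero = [x for x in RATIONAL_J if x != 0]
    sols = []
    for x1, x2, x3 in product(nonzero, repeat=3):
        if x1**a + x2**a + x3**a == 0 or x1**a + x2**a == x3**a:
            sols.append((x1, x2, x3))
    return sols

# Consistent with Corollary 1.3: no nonzero solutions for small exponents.
for a in range(1, 5):
    assert fermat_solutions(a) == []
```

This only covers the rational case, of course; the content of the corollary is that the same conclusion holds for singular moduli of arbitrary degree.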
We prove the following corollary of Theorem~\ref{thm:sum}, which gives a ``Fermat's last theorem'' for singular moduli. \begin{cor}\label{cor:sum} Let $x_1, x_2, x_3$ be singular moduli. Suppose that there exists $a \in \mathbb{Z}_{>0}$ such that either $x_1^a + x_2^a + x_3^a =0$ or $x_1^a + x_2^a = x_3^a$. Then $x_1 x_2 x_3 = 0$. \end{cor} We also obtain the following generalisation of Theorem~\ref{thm:prod}. To state it, we need a definition. Weinberger \cite[Theorem~1]{Weinberger73} proved that there is at most one imaginary quadratic field $K$ such that the class group of $K$ is isomorphic to $(\mathbb{Z} / 2 \mathbb{Z})^n$ for some $n \in \mathbb{Z}_{>0}$ and the fundamental discriminant $D_K$ of $K$ satisfies $\lvert D_K \rvert > 5460$. \begin{definition}\label{def:tat} Let $K_*$ be the single exceptional imaginary quadratic field which has class group isomorphic to $(\mathbb{Z} / 2 \mathbb{Z})^n$ for some $n \in \mathbb{Z}_{>0}$ and fundamental discriminant $D_*$ such that $\lvert D_* \rvert > 5460$, if such a field exists. \end{definition} Our result is then as follows. \begin{thm}\label{thm:quot} Let $a \in \mathbb{Z} \setminus \{0\}$ and $\epsilon_1, \epsilon_2, \epsilon_3 \in \{\pm 1\}$. Let $x_1, x_2, x_3$ be singular moduli.
If \[ (x_1^{\epsilon_1} x_2^{\epsilon_2} x_3^{\epsilon_3})^a \in \mathbb{Q}^\times, \] then one of the following holds: \begin{enumerate} \item $x_1, x_2, x_3 \in \mathbb{Q}^\times$; \item some $x_i \in \mathbb{Q}^\times$ and, for the remaining $x_j, x_k$, one has that $x_j = x_k$ and $\epsilon_j + \epsilon_k =0$; \item some $x_i \in \mathbb{Q}^\times$ and the remaining $x_j, x_k$ are distinct, of degree $2$, and conjugate over $\mathbb{Q}$ with $\epsilon_j = \epsilon_k$; \item $\epsilon_1 = \epsilon_2 = \epsilon_3$ and $x_1, x_2, x_3$ are pairwise distinct, of degree $3$, and conjugate over $\mathbb{Q}$; \item all of the following hold: \begin{enumerate} \item some $x_i, x_j$ are distinct and conjugate over $\mathbb{Q}$; \item $x_i = j(\nu)$ for some $\nu \in \mathbb{H}$ such that $\mathbb{Q}(\nu) \neq K_*$; \item the remaining $x_k$ is equal to $j(\tau)$ for some $\tau \in \mathbb{H}$ such that $\mathbb{Q}(\tau) = K_*$; \item $\epsilon_i=\epsilon_j=-\epsilon_k$; \item $\mathbb{Q}(x_k) \subset \mathbb{Q}(x_i)= \mathbb{Q}(x_j)$; \item $[\mathbb{Q}(x_i) : \mathbb{Q}(x_k)]=2$ and $[\mathbb{Q}(x_k) : \mathbb{Q}] \geq 128$. \end{enumerate} \end{enumerate} \end{thm} It is possible that the field $K_*$ does not exist. For example, the Generalised Riemann Hypothesis implies that there is no such field $K_*$. In this case, Theorem~\ref{thm:quot} would hold with case (5) removed, and would evidently then be an ``if and only if''. The plan of this article is as follows. In Section~\ref{sec:background}, we recall those results about singular moduli which we will need. Section~\ref{sec:prim} contains some primitive element theorems for singular moduli, which will be central to our proofs. These generalise results of Faye and Riffaut \cite{FayeRiffaut18}. Theorem~\ref{thm:sum} is proved in Sections~\ref{sec:sum} and \ref{sec:diff}. Corollary~\ref{cor:sum} is proved in Section~\ref{sec:fermat}. 
Theorem~\ref{thm:prod} is then proved in Section~\ref{sec:prod}, and finally the proof of Theorem~\ref{thm:quot} is contained in Section~\ref{sec:quot}. Our proofs make extensive use of computations in PARI \cite{PARI2}; scripts are available from \url{https://github.com/guyfowler/three_singular_moduli}. While this article was under review, Bilu, Gun, and Tron \cite[Theorem~1.1]{BiluGunTron22} proved a result related to Theorem~\ref{thm:quot}. They established explicit bounds on the triples $(x_1, x_2, x_3)$ of distinct singular moduli such that $x_1^a x_2^b x_3^c \in \mathbb{Q}^\times$ for some $a,b,c \in \mathbb{Z} \setminus \{0\}$. They list all the known examples of such triples $(x_1, x_2, x_3)$, but currently their bounds do not suffice to show that these are the only examples. In the particular case of Theorem~\ref{thm:quot}, a result of Bilu, Gun, and Tron \cite[Corollary~2.12]{BiluGunTron22} implies that case (5) of Theorem~\ref{thm:quot} cannot occur. Thus, Theorem~\ref{thm:quot} holds with case (5) removed and is thus evidently an ``if and only if''. One therefore obtains a complete list of those triples $(x_1, x_2, x_3)$ of singular moduli such that $x_1^a x_2^b x_3^c \in \mathbb{Q}^\times$ for some $a, b, c \in \mathbb{Z}\setminus \{0\}$ with $\lvert a \rvert = \lvert b \rvert = \lvert c \rvert$. \section{Background}\label{sec:background} \subsection{Basic properties of singular moduli}\label{subsec:singmod} For background on singular moduli, the reader is referred to e.g. \cite[Chapter~3]{Cox89}. We will use the properties of singular moduli which are stated in this section throughout this article, often without special reference. Let $x = j(\tau)$ be a singular modulus, so that $\tau \in \mathbb{H}$ is such that $[\mathbb{Q}(\tau) : \mathbb{Q}] = 2$. The discriminant $\Delta$ of $x$ is defined by $\Delta = b^2 - 4 a c$, where $(a,b,c) \in \mathbb{Z}^3 \setminus \{(0,0,0)\}$ are such that $a \tau^2 + b \tau + c = 0$ and $\gcd(a,b,c)=1$. 
Observe that $\Delta<0$ and $\Delta \equiv 0, 1 \bmod 4$. We may write $\Delta = f^2 D$ for some unique $f \in \mathbb{Z}_{>0}$, where $D$ is the (fundamental) discriminant of the imaginary quadratic field $\mathbb{Q}(\sqrt{\Delta}) = \mathbb{Q}(\tau)$. Let $x_1, \dots, x_n$ be the list of all the singular moduli of discriminant $\Delta$. The Hilbert class polynomial $H_\Delta(z)$ is defined to be \[H_\Delta(z) = (z- x_1) \cdots (z-x_n).\] Remarkably, $H_\Delta(z)$ is a monic polynomial with coefficients in $\mathbb{Z}$ which is irreducible over $\mathbb{Q}$ and over $\mathbb{Q}(\sqrt{\Delta})$. In particular, every singular modulus is an algebraic integer. Further, the singular moduli of a given discriminant $\Delta$ form a complete set of Galois conjugates both over $\mathbb{Q}$ and over $\mathbb{Q}(\sqrt{\Delta})$. The number of singular moduli of a given discriminant $\Delta$ is equal to $h(\Delta)$, the class number of the (unique) imaginary quadratic order of discriminant $\Delta$. In particular, for $x$ a singular modulus of discriminant $\Delta$, one thus has that \[ h(\Delta) = [\mathbb{Q}(x) : \mathbb{Q}] = [ \mathbb{Q}(\sqrt{\Delta}, x) : \mathbb{Q}(\sqrt{\Delta})].\] The field $\mathbb{Q}(\sqrt{\Delta}, x)$ is the ring class field (see \cite[\S9]{Cox89}) of the imaginary quadratic order of discriminant $\Delta$. The extension $\mathbb{Q}(\sqrt{\Delta}, x) / \mathbb{Q}$ is always a Galois extension \cite[\S3.2]{AllombertBiluMadariaga15}. The singular moduli of a given discriminant $\Delta$ may be explicitly described \cite[Proposition~2.5]{BiluLucaMadariaga16}. Write $T_\Delta$ for the set of triples $(a,b,c) \in \mathbb{Z}^3$ such that: $\gcd(a,b,c)=1$, $\Delta = b^2-4ac$, and either $-a < b \leq a < c$ or $0 \leq b \leq a = c$. Then there is a bijection, given by $(a,b,c) \mapsto j((b + \sqrt{\Delta})/2a)$, between $T_\Delta$ and the singular moduli of discriminant $\Delta$. 
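The description of $T_\Delta$ translates directly into an enumeration procedure. As an illustration (our own Python sketch, in place of the PARI computations used in the paper), the following code lists $T_\Delta$, so that its size gives $h(\Delta)$; summing over discriminants reproduces the counts of $13$ rational singular moduli, $58$ of degree $2$, and $75$ of degree $3$ quoted in the introduction.

```python
from math import gcd, isqrt

def reduced_triples(D):
    """List T_D: triples (a, b, c) with b^2 - 4ac = D, gcd(a, b, c) = 1 and
    either -a < b <= a < c or 0 <= b <= a = c.  Its size is h(D)."""
    assert D < 0 and D % 4 in (0, 1)
    triples = []
    b = D % 2
    while 3 * b * b <= -D:          # b <= a <= c forces 3b^2 <= |D|
        m = (b * b - D) // 4        # m = a * c
        for a in range(max(b, 1), isqrt(m) + 1):
            if m % a == 0:
                c = m // a
                if gcd(gcd(a, b), c) == 1:
                    triples.append((a, b, c))
                    if 0 < b < a < c:        # (a, -b, c) is also reduced
                        triples.append((a, -b, c))
        b += 2
    return triples

def class_number(D):
    return len(reduced_triples(D))

# Count singular moduli of degree 1, 2, 3.  Every discriminant with
# h(D) <= 3 satisfies |D| <= 907, so scanning |D| < 1000 is enough.
counts = {1: 0, 2: 0, 3: 0}
for D in range(-3, -1000, -1):
    if D % 4 in (0, 1):
        h = class_number(D)
        if h in counts:
            counts[h] += h          # h singular moduli of discriminant D
```

For example, `reduced_triples(-15)` returns `[(1, 1, 4), (2, 1, 2)]`, matching $h(-15)=2$.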
This description follows from the definition of the discriminant $\Delta$ and the fact that the $j$-function restricts to a bijection $F_j \to \mathbb{C}$, where $F_j$ is the usual fundamental domain for the action of $\mathrm{SL}_2(\Z)$ on $\mathbb{H}$, i.e. \[ F_j = \{ z \in \mathbb{H} : -\frac{1}{2} < \re z \leq \frac{1}{2}, \lvert z \rvert \geq 1, \mbox{ and } \lvert z \rvert > 1 \mbox{ if } \re z < 0 \}.\] Observe that $(b + \sqrt{\Delta})/2a \in F_j$ for every $(a,b,c) \in T_\Delta$. We require the following elementary lemma. \begin{lem}\label{lem:dom} For a given discriminant $\Delta$, there exists: \begin{enumerate} \item a unique singular modulus corresponding to a triple with $a=1$; \item at most two singular moduli corresponding to triples with $a=2$, and: \begin{enumerate} \item if $\Delta \equiv 0, 4 \bmod 16$, then there are no such singular moduli; \item if $\Delta \equiv 1 \bmod 8$ and $\Delta \notin \{-7, -15\}$, then there are exactly two such singular moduli; \end{enumerate} \item at most two singular moduli corresponding to triples with $a=3$; \item at most two singular moduli corresponding to triples with $a=4$; \item at most two singular moduli corresponding to triples with $a=5$; \item at most four singular moduli corresponding to triples with $a=6$; \item at most two singular moduli corresponding to triples with $a=7$; \item at most four singular moduli corresponding to triples with $a=8$; \item at most three singular moduli corresponding to triples with $a=9$; \item at most four singular moduli corresponding to triples with $a=10$; \item at most two singular moduli corresponding to triples with $a=11$. \end{enumerate} \end{lem} \begin{proof} (1) and (2) are \cite[Proposition~2.6]{BiluLucaMadariaga16}\footnote{That $\Delta = -15$ must be excluded in 2(b) is not stated in \cite{BiluLucaMadariaga16}, but is necessary since $h(-15)=2$.}. (3)--(5) are proved in \cite[Lemma~2.1]{Fowler20}. 
(6)--(11) then follow by the same argument as in \cite{Fowler20}, which uses the fact that $b_1^2-b_2^2 \equiv 0 \bmod 4a$ for any two tuples $(a, b_1, c_1), (a, b_2, c_2) \in T_{\Delta}$. \end{proof} The singular modulus corresponding to the unique tuple $(a,b,c) \in T_{\Delta}$ with $a=1$ is called the ``dominant'' singular modulus of discriminant $\Delta$. A singular modulus corresponding to a tuple $(a, b, c) \in T_\Delta$ with $a=2$ is called a ``subdominant'' singular modulus of discriminant $\Delta$. \subsection{Bounds on singular moduli}\label{subsec:bd} The $j$-function has a Fourier expansion \[ j(z) = q^{-1} + 744 + \sum_{n = 1}^\infty c_n q^n,\] where $q= \exp(2 \pi i z)$ and the $c_i \in \mathbb{Z}_{>0}$. Observe that it follows from this Fourier expansion that the dominant singular modulus of a given discriminant is always real. Suppose that $x$ is the singular modulus corresponding to some triple $(a, b, c) \in T_\Delta$, so that $x = j((b + \sqrt{\Delta})/2a)$ and $(b + \sqrt{\Delta})/2a \in F_j$. It follows \cite[Lemma~1]{BiluMasserZannier13} from the Fourier expansion for the $j$-function that \begin{align}\label{eq:ineq1} (e^{\pi \lvert \Delta \rvert^{1/2}/a} - 2079) \leq \lvert x \rvert \leq (e^{\pi \lvert \Delta \rvert^{1/2}/a} + 2079). \end{align} This inequality explains the choice of the terms ``dominant'' and ``subdominant'' for singular moduli corresponding to triples $(a,b,c) \in T_\Delta$ with $a=1$ and $a=2$ respectively. This terminology was first used in \cite{AllombertBiluMadariaga15}. In particular, the dominant singular modulus of a given discriminant $\Delta$ is much larger in absolute value than all the other singular moduli of discriminant $\Delta$. \begin{prop}[{\cite[Lemma~3.5]{AllombertBiluMadariaga15}}]\label{prop:dom} Let $x$ be the dominant singular modulus of discriminant $\Delta$. Suppose that $x'$ is a singular modulus of discriminant $\Delta$ with $x' \neq x$. Then $\lvert x' \rvert \leq 0.1 \lvert x \rvert$. 
\end{prop} (In \cite{AllombertBiluMadariaga15}, it is required that $\lvert \Delta \rvert \geq 11$. Nonetheless, the result holds without this restriction, since $\lvert \Delta \rvert < 11$ implies that $x \in \mathbb{Q}$ and for such $x$ the result is trivial.) \begin{prop}\label{prop:dominc} Let $x$ be the dominant singular modulus of discriminant $\Delta$ and $x' \neq x$ a singular modulus of discriminant $\Delta'$, where $\lvert \Delta' \rvert \leq \lvert \Delta \rvert$. Then $\lvert x' \rvert < \lvert x \rvert$. \end{prop} \begin{proof} If $\Delta = \Delta'$, then this follows from Proposition~\ref{prop:dom}. So suppose that $\lvert \Delta' \rvert < \lvert \Delta \rvert$. Hence, $\lvert \Delta' \rvert \leq \lvert \Delta \rvert - 1$. We thus have that \[ \lvert x \rvert \geq (e^{\pi \lvert \Delta \rvert^{1/2}} - 2079)\] and \[ \lvert x' \rvert \leq (e^{\pi (\lvert \Delta \rvert - 1)^{1/2}} + 2079).\] In particular, one may verify that $\lvert x' \rvert < \lvert x \rvert$ if $\lvert \Delta \rvert \geq 9$. The result then follows by inspecting the list of singular moduli with discriminant $<9$ in absolute value. \end{proof} We also make use of the following lower bound for non-zero singular moduli, due to Bilu, Luca, and Pizarro-Madariaga \cite{BiluLucaMadariaga16} (see also its generalisation to the differences of distinct singular moduli in \cite{BiluFayeZhu19}). \begin{lem}[{\cite[\S3.1]{BiluLucaMadariaga16}}]\label{lem:lower} Let $x$ be a non-zero singular modulus of discriminant $\Delta$. Then \[ \lvert x \rvert \geq \min \{ 4.4 \times 10^{-5}, 3500 \lvert \Delta \rvert^{-3}\}. \] \end{lem} \begin{remark} The proof of this inequality in \cite{BiluLucaMadariaga16} is a step in a larger proof, in which the authors have previously assumed that the singular moduli considered are of degree at least $3$. 
This degree assumption though is not used anywhere in the proof of the inequality in Lemma~\ref{lem:lower}; this inequality thus holds for all non-zero singular moduli. \end{remark} \subsection{Fields generated by singular moduli}\label{subsec:fields} Singular moduli generate the ring class fields of imaginary quadratic fields. In particular, it is a strong condition for two distinct singular moduli to generate closely related fields. We now collect some explicit results along these lines, which we need for our proof. The first result was proved mostly in \cite{AllombertBiluMadariaga15}, as Corollary~4.2 and Proposition~4.3 there. For the ``further'' claim in (2), see \cite[\S3.2.2]{BiluLucaMadariaga16}. \begin{lem}\label{lem:samefield} Let $x_1, x_2$ be singular moduli with discriminants $\Delta_1, \Delta_2$ respectively. Suppose that $\mathbb{Q}(x_1) = \mathbb{Q}(x_2)$, and denote this field $L$. Then $h(\Delta_1) = h(\Delta_2)$, and we have that: \begin{enumerate} \item If $\mathbb{Q}(\sqrt{\Delta_1}) \neq \mathbb{Q}(\sqrt{\Delta_2})$, then the possible fields $L$ are listed in \cite[Table~2]{AllombertBiluMadariaga15}. Further, the extension $L / \mathbb{Q}$ is Galois and the discriminant of any singular modulus $x$ with $\mathbb{Q}(x) = L$ is also listed in this table. \item If $\mathbb{Q}(\sqrt{\Delta_1}) = \mathbb{Q}(\sqrt{\Delta_2})$, then either: $L = \mathbb{Q}$ and $\Delta_1, \Delta_2 \in \{-3, -12, -27\}$; or: $\Delta_1 / \Delta_2 \in \{1, 4, 1/4\}$. Further, if $\Delta_1 = 4 \Delta_2$, then $\Delta_2 \equiv 1 \bmod 8$. \end{enumerate} \end{lem} The next two results are from \cite{Fowler20}. \begin{lem}[{\cite[Lemma~2.3]{Fowler20}}]\label{lem:subfieldsamefund} Let $x_1, x_2$ be singular moduli such that $\mathbb{Q}(x_1) \supset \mathbb{Q}(x_2)$ with $[\mathbb{Q}(x_1) : \mathbb{Q}(x_2)]=2$. Suppose that $\mathbb{Q}(\sqrt{\Delta_1}) = \mathbb{Q}(\sqrt{\Delta_2})$. 
Then either $x_2 \in \mathbb{Q}$, or $\Delta_1 \in \{9 \Delta_2 / 4, 4\Delta_2, 9 \Delta_2, 16\Delta_2\}$. \end{lem} \begin{lem}\label{lem:subfielddiffund} Let $x_1, x_2$ be singular moduli such that $\mathbb{Q}(x_1) \supset \mathbb{Q}(x_2)$ with $[\mathbb{Q}(x_1) : \mathbb{Q}(x_2)]=2$. Suppose that $\mathbb{Q}(\sqrt{\Delta_1}) \neq \mathbb{Q}(\sqrt{\Delta_2})$. Then there exists $i \in \{1,2\}$ such that: the extension $\mathbb{Q}(x_i) / \mathbb{Q}$ is Galois and either $\Delta_i$ is listed in \cite[Table~1]{AllombertBiluMadariaga15} or $h(\Delta_i) \geq 128$. \end{lem} \begin{proof} This is a slightly stronger statement than \cite[Lemma~2.4]{Fowler20}, but is in fact exactly what is proved there. \end{proof} We also need a result on the exceptional field $K_*$, which was defined in Definition~\ref{def:tat}. \begin{lem}\label{lem:tat} If $x$ is a singular modulus of discriminant $\Delta$ such that the extension $\mathbb{Q}(x) / \mathbb{Q}$ is Galois and $h(\Delta) \geq 128$, then $\mathbb{Q}(\sqrt{\Delta}) = K_*$. \end{lem} \begin{proof} This is immediate from \cite[\S2.2, Remark~2.3, and Corollary~3.3]{AllombertBiluMadariaga15}. \end{proof} \subsection{Explicit Andr\'e--Oort results in two dimensions}\label{subsec:2dim} We recall here those explicit Andr\'e--Oort results in two dimensions which we will make use of in the sequel. These are due to Riffaut \cite{Riffaut19} and Luca and Riffaut \cite{LucaRiffaut19}. \begin{thm}[{\cite[Theorem~1.3]{LucaRiffaut19}}]\label{thm:sum2} Let $x_1, x_2$ be distinct singular moduli and $m, n \in \mathbb{Z}_{>0}$. Suppose that $A x_1^m + B x_2^n = C$ for some $A, B, C \in \mathbb{Q}^\times$. Then $\mathbb{Q}(x_1)=\mathbb{Q}(x_2)$ and this number field has degree at most $2$ over $\mathbb{Q}$. \end{thm} \begin{thm}[{\cite[Theorem~1.7]{Riffaut19}}]\label{thm:prod2} Let $x_1, x_2$ be non-zero singular moduli. Suppose that $x_1^m x_2^n \in \mathbb{Q}$ for some $m, n \in \mathbb{Z} \setminus \{0\}$. 
Then one of the following holds: \begin{enumerate} \item $x_1, x_2 \in \mathbb{Q}$; \item $x_1 = x_2$ and $m+n =0$; \item $m=n$ and $x_1, x_2$ are distinct, of degree $2$, and conjugate over $\mathbb{Q}$. \end{enumerate} \end{thm} The $m=n=1$ cases of these two results were proved previously in \cite{AllombertBiluMadariaga15} and \cite{BiluLucaMadariaga16} respectively. For partial results on the $x_1 = x_2$ case of Theorem~\ref{thm:sum2}, see \cite{BiluLucaMadariaga20}. \section{Primitive element theorems for singular moduli}\label{sec:prim} In \cite{FayeRiffaut18}, Faye and Riffaut proved the following primitive element theorems for sums and products of singular moduli. See also the generalisation of Theorem~\ref{thm:fieldsum} obtained in \cite{BiluFayeZhu19}. \begin{thm}[{\cite[Theorem~4.1]{FayeRiffaut18}}]\label{thm:fieldsum} Let $x_1, x_2$ be distinct singular moduli of discriminants $\Delta_1, \Delta_2$ respectively. Let $\epsilon \in \{\pm 1\}$. Then $\mathbb{Q}(x_1 + \epsilon x_2) = \mathbb{Q}(x_1, x_2)$ unless $\Delta_1 = \Delta_2$ and $\epsilon =1 $. If $\Delta_1 = \Delta_2$, then $[\mathbb{Q}(x_1, x_2) : \mathbb{Q}(x_1 + x_2)] \leq 2$. \end{thm} \begin{thm}[{\cite[Theorem~5.1]{FayeRiffaut18}}]\label{thm:fieldprod} Let $x_1, x_2$ be distinct non-zero singular moduli of discriminants $\Delta_1, \Delta_2$ respectively. Let $\epsilon \in \{\pm 1\}$. Then $\mathbb{Q}(x_1 x_2^{\epsilon}) = \mathbb{Q}(x_1, x_2)$ unless $\Delta_1 = \Delta_2$ and $\epsilon =1 $. If $\Delta_1 = \Delta_2$, then $[\mathbb{Q}(x_1, x_2) : \mathbb{Q}(x_1 x_2)] \leq 2$. \end{thm} We also have the following result of Riffaut \cite{Riffaut19}. (In \cite{Riffaut19}, it is required that the discriminant $\Delta$ of $x$ satisfies $\lvert \Delta \rvert \geq 11$. We may dispense with this condition because the result is trivial for $\lvert \Delta \rvert < 11$, since all singular moduli having such discriminant are rational.) 
\begin{prop}[{\cite[Lemma~2.6]{Riffaut19}}]\label{prop:fieldpower} Let $x$ be a singular modulus and $a \in \mathbb{Z} \setminus \{0\}$. Then $\mathbb{Q}(x^a) = \mathbb{Q}(x)$. \end{prop} In this section, we prove the following two results, which can be seen as combining the results of \cite{FayeRiffaut18} with Proposition~\ref{prop:fieldpower}. The proofs of Theorems~\ref{thm:fieldsumpower} and \ref{thm:fieldprodpower} both follow the approach of Faye and Riffaut. \begin{thm}\label{thm:fieldsumpower} Let $x_1, x_2$ be distinct singular moduli of discriminants $\Delta_1, \Delta_2$ respectively. Let $\epsilon \in \{\pm 1\}$ and $a \in \mathbb{Z}_{>0}$. Then $\mathbb{Q}(x_1^a + \epsilon x_2^a) = \mathbb{Q}(x_1, x_2)$, unless $\Delta_1 = \Delta_2$ and $\epsilon =1 $. If $\Delta_1 = \Delta_2$, then $[\mathbb{Q}(x_1, x_2) : \mathbb{Q}(x_1^a + x_2^a)] \leq 2$. \end{thm} \begin{thm}\label{thm:fieldprodpower} Let $x_1, x_2$ be distinct non-zero singular moduli of discriminants $\Delta_1, \Delta_2$ respectively. Let $\epsilon \in \{\pm 1\}$ and $a \in \mathbb{Z} \setminus \{0\}$. Then $\mathbb{Q}(x_1^a x_2^{\epsilon a}) = \mathbb{Q}(x_1, x_2)$, unless $\Delta_1 = \Delta_2$ and $\epsilon =1 $. If $\Delta_1 = \Delta_2$, then $[\mathbb{Q}(x_1, x_2) : \mathbb{Q}(x_1^a x_2^a)] \leq 2$. \end{thm} The proof of the $a=1$ case of our Theorem~\ref{thm:prod} in \cite{Fowler20} used Theorem~\ref{thm:fieldprod}. In this paper, our proofs of Theorems~\ref{thm:sum} and \ref{thm:prod} will use Theorems~\ref{thm:fieldsumpower} and \ref{thm:fieldprodpower} in a corresponding way. \subsection{Proof of Theorem~\ref{thm:fieldsumpower}} Let $x_1, x_2$ be distinct singular moduli of discriminants $\Delta_1, \Delta_2$ respectively. Write $D_1, D_2$ for their corresponding fundamental discriminants. Let $\epsilon \in \{\pm 1\}$ and $a \in \mathbb{Z}_{>0}$. 
Observe that Proposition~\ref{prop:fieldpower} implies that \[ \mathbb{Q}(x_1, x_2) = \mathbb{Q}(x_1^a, x_2^a).\] Let $L$ be the Galois closure of $\mathbb{Q}(x_1, x_2) / \mathbb{Q}$. Define \[G= \mathrm{Gal}(L / \mathbb{Q}(x_1^a + \epsilon x_2^a))\] and \[ H = \mathrm{Gal}(L/ \mathbb{Q}(x_1, x_2)).\] So $H \leq G$, and $H=G$ if and only if $\mathbb{Q}(x_1, x_2) = \mathbb{Q}(x_1^a + \epsilon x_2^a)$. Let $\sigma \in G$. Then \[ x_1^a + \epsilon x_2^a = \sigma(x_1^a) + \epsilon \sigma(x_2^a).\] So $\sigma(x_1^a) = x_1^a$ if and only if $\sigma(x_2^a) = x_2^a$. Since $\mathbb{Q}(x_1^a)=\mathbb{Q}(x_1)$, we have that $\sigma(x_1^a) = x_1^a$ if and only if $\sigma(x_1) = x_1$. Similarly, $\sigma(x_2^a) = x_2^a$ if and only if $\sigma(x_2) = x_2$. Consequently, \begin{align*} H &= \{ \sigma \in G : \sigma(x_1^a)=x_1^a\}= \{ \sigma \in G : \sigma(x_1)=x_1\}\\ &= \{ \sigma \in G : \sigma(x_2^a)=x_2^a\}= \{ \sigma \in G : \sigma(x_2)=x_2\}. \end{align*} If either $x_1 \in \mathbb{Q}$ or $x_2 \in \mathbb{Q}$, then the desired result follows from Proposition~\ref{prop:fieldpower}. So we will assume that $x_1, x_2 \notin \mathbb{Q}$. \subsubsection{The case where $\Delta_1 = \Delta_2$.} Write $\Delta = \Delta_1 = \Delta_2$. We may assume, after applying an automorphism of $\mathbb{C}$ if necessary, that $x_1$ is dominant, and so $x_2$ is not dominant. By \eqref{eq:ineq1}, one therefore has that \begin{align}\label{eq:ineq2} \lvert x_1^a + \epsilon x_2^a \rvert &\geq \lvert x_1 \rvert^a - \lvert x_2 \rvert^a \nonumber \\ &\geq (e^{\pi \lvert \Delta \rvert^{1/2}} - 2079)^a - (e^{\pi \lvert \Delta \rvert^{1/2}/2} + 2079)^a. \end{align} First, assume that $\epsilon = 1$. Suppose then that $[\mathbb{Q}(x_1, x_2) : \mathbb{Q}(x_1^a + x_2^a)] > 2$. Then $[G : H] > 2$. So there exists $\sigma \in G$ such that $\sigma(x_1^a) \neq x_1^a$ and $\sigma(x_1^a) \neq x_2^a$. Since $x_1^a + x_2^a = \sigma(x_1^a) + \sigma(x_2^a)$, we thus have that $\sigma(x_2^a) \neq x_1^a$. 
In particular, $\sigma(x_1), \sigma(x_2) \neq x_1$, and so neither of $\sigma(x_1), \sigma(x_2)$ is dominant. Therefore, \begin{align}\label{eq:ineq3} \lvert x_1^a + x_2^a \rvert &= \lvert \sigma(x_1)^a + \sigma(x_2)^a \rvert \nonumber\\ &\leq 2 (e^{\pi \lvert \Delta \rvert^{1/2}/2} + 2079)^a. \end{align} The bounds \eqref{eq:ineq2} and \eqref{eq:ineq3} are incompatible once \[ (e^{\pi \lvert \Delta \rvert^{1/2}} - 2079)^a > 3 (e^{\pi \lvert \Delta \rvert^{1/2}/2} + 2079)^a.\] In particular, they are incompatible when \[ \frac{e^{\pi \lvert \Delta \rvert^{1/2}} - 2079}{e^{\pi \lvert \Delta \rvert^{1/2}/2} + 2079} > 3,\] which happens for all $\lvert \Delta \rvert \geq 9$. Since $\lvert \Delta \rvert < 9$ implies that $x_1, x_2 \in \mathbb{Q}$, we are done in this case. Now for $\epsilon = -1$. Suppose that $\mathbb{Q}(x_1, x_2) \neq \mathbb{Q}(x_1^a - x_2^a)$. Then $G \neq H$. So there exists $\sigma \in G$ such that $\sigma(x_1) \neq x_1$. Suppose that $\sigma(x_2) = x_1$. Then the equality \[ x_1^a - x_2^a = \sigma(x_1)^a - \sigma(x_2)^a\] implies that \[ 2 x_1^a = x_2^a + \sigma(x_1)^a.\] Since $x_1$ is dominant and neither of $x_2, \sigma(x_1)$ is dominant, the absolute value of the right hand side of this equation is at most $2 (0.1 \lvert x_1 \rvert)^a$ by Proposition~\ref{prop:dom}. Clearly, this cannot happen. So $\sigma(x_2) \neq x_1$. In particular, $\sigma(x_2)$ is not dominant. Thus, by \eqref{eq:ineq1}, \begin{align}\label{eq:ineq4} \lvert x_1^a - x_2^a \rvert &= \lvert \sigma(x_1)^a - \sigma(x_2)^a \rvert \nonumber\\ &\leq 2 (e^{\pi \lvert \Delta \rvert^{1/2}/2} + 2079)^a. \end{align} The bounds \eqref{eq:ineq2} and \eqref{eq:ineq4} are incompatible once \[ (e^{\pi \lvert \Delta \rvert^{1/2}} - 2079)^a > 3 (e^{\pi \lvert \Delta \rvert^{1/2}/2} + 2079)^a,\] and in particular for all $\lvert \Delta \rvert \geq 9$. As above, this completes the proof in this case, since $\lvert \Delta \rvert < 9$ implies that $x_1, x_2 \in \mathbb{Q}$. 
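The threshold $\lvert \Delta \rvert \geq 9$ used twice above amounts to a one-line numerical check, which can be sketched as follows (our verification, not one of the paper's PARI scripts):

```python
from math import exp, pi, sqrt

def ratio(abs_disc):
    """(e^{pi sqrt|D|} - 2079) / (e^{pi sqrt|D|/2} + 2079)."""
    return ((exp(pi * sqrt(abs_disc)) - 2079)
            / (exp(pi * sqrt(abs_disc) / 2) + 2079))

# The displayed upper and lower bounds are incompatible as soon as
# ratio(|D|) > 3.  This already holds at |D| = 9, and the ratio grows
# roughly like e^{pi sqrt|D|/2}, so it only improves for larger |D|.
assert ratio(9) > 3
assert all(ratio(d) > 3 for d in range(9, 500))
```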
\subsubsection{The case where $\Delta_1 \neq \Delta_2$.} Without loss of generality, we may assume that $\lvert \Delta_1 \rvert > \lvert \Delta_2 \rvert$. Suppose that $\mathbb{Q}(x_1, x_2) \neq \mathbb{Q}(x_1^a + \epsilon x_2^a)$. Then $G \neq H$. So there exists $\sigma \in G$ such that $\sigma(x_1) \neq x_1$ and $\sigma(x_2) \neq x_2$. Since \[ x_1^a + \epsilon x_2^a = \sigma(x_1)^a + \epsilon \sigma(x_2)^a,\] we obtain that \[ \mathbb{Q}(x_1^a - \sigma(x_1)^a) = \mathbb{Q}(x_2^a - \sigma(x_2)^a).\] Thus, by the already established equal discriminant case, \[ \mathbb{Q}(x_1, \sigma(x_1)) = \mathbb{Q}(x_2, \sigma(x_2)).\] Suppose first that $D_1 \neq D_2$. Then, by \cite[Corollary~3.3]{FayeRiffaut18}, $\mathbb{Q}(x_1)=\mathbb{Q}(x_2)$, and so $\Delta_1, \Delta_2$ are listed in \cite[Table~2]{AllombertBiluMadariaga15} by Lemma~\ref{lem:samefield}. We may assume that $x_1$ is dominant. So $\sigma(x_1)$ is not dominant. Observe that \[1+ \epsilon \Big (\frac{x_2}{x_1}\Big)^a= \Big(\frac{\sigma(x_1)}{x_1}\Big)^a + \epsilon \Big(\frac{\sigma(x_2)}{x_1}\Big)^a.\] Note that \[ \Big\lvert 1+ \epsilon \Big(\frac{x_2}{x_1}\Big)^a \Big\rvert \geq 1 - \frac{\lvert x_2 \rvert}{\lvert x_1 \rvert},\] since $\lvert x_2 \rvert < \lvert x_1 \rvert$. Further, \[ \Big\lvert \Big(\frac{\sigma(x_1)}{x_1}\Big)^a + \epsilon \Big(\frac{\sigma(x_2)}{x_1}\Big)^a \Big\rvert \leq \frac{\lvert \sigma(x_1) \rvert}{\lvert x_1 \rvert} + \frac{\lvert \sigma(x_2) \rvert}{\lvert x_1 \rvert}, \] since $\lvert \sigma(x_1) \rvert, \lvert \sigma(x_2) \rvert < \lvert x_1 \rvert$. Write $x_2'$ for the dominant singular modulus of discriminant $\Delta_2$. Let $x_1'$ be a (not necessarily unique) singular modulus of discriminant $\Delta_1$ such that the only singular modulus of discriminant $\Delta_1$ with strictly larger absolute value than $x_1'$ is the dominant singular modulus $x_1$. (Such a singular modulus must exist since by assumption $x_1 \notin \mathbb{Q}$.) 
To complete the proof in this case, it suffices to verify that \[ 1- \frac{\lvert x_2' \rvert}{\lvert x_1 \rvert} > \frac{\lvert x_1' \rvert}{\lvert x_1 \rvert} + \frac{\lvert x_2' \rvert}{\lvert x_1 \rvert}\] for every ordered pair $(\Delta_1, \Delta_2)$ such that: $\Delta_1, \Delta_2$ appear in the same row of \cite[Table~2]{AllombertBiluMadariaga15} with $\lvert \Delta_1 \rvert > \lvert \Delta_2 \rvert$ and $h(\Delta_1) > 1$. This may be done by a routine computation in PARI. Now suppose that $D_1 = D_2$. In this case (see e.g. \cite[Lemma~7.2]{BiluFayeZhu19}), the identity $\mathbb{Q}(x_1, \sigma(x_1)) = \mathbb{Q}(x_2, \sigma(x_2))$ implies that $\Delta_1 = 4 \Delta_2$. Write $\Delta = \Delta_2$. As usual, take $x_1$ to be dominant. Then \[ \lvert x_1^a + \epsilon x_2^a \rvert \geq (e^{2 \pi \lvert \Delta \rvert^{1/2}} - 2079)^a - (e^{\pi \lvert \Delta \rvert^{1/2}} + 2079)^a.\] We also have that \begin{align*} \lvert x_1^a + \epsilon x_2^a \rvert &= \lvert \sigma(x_1)^a + \epsilon \sigma(x_2)^a \rvert\\ &\leq 2(e^{\pi \lvert \Delta \rvert^{1/2}} + 2079)^a, \end{align*} since $\sigma(x_1) \neq x_1$. These upper and lower bounds are incompatible once \[ (e^{2 \pi \lvert \Delta \rvert^{1/2}} - 2079)^a > 3 (e^{\pi \lvert \Delta \rvert^{1/2}} + 2079)^a.\] In particular, they are incompatible when \[ \frac{e^{2 \pi \lvert \Delta \rvert^{1/2}} - 2079}{e^{\pi \lvert \Delta \rvert^{1/2}} + 2079} > 3,\] which happens for all $\lvert \Delta \rvert \geq 3$, i.e. for all possible $\Delta$. \subsection{Proof of Theorem~\ref{thm:fieldprodpower}} Let $x_1, x_2$ be distinct non-zero singular moduli of discriminants $\Delta_1, \Delta_2$ respectively. Let $\epsilon \in \{\pm 1\}$ and $a \in \mathbb{Z} \setminus \{0\}$. Clearly, it is enough to consider $a \geq 1$. Observe that Proposition~\ref{prop:fieldpower} implies that \[ \mathbb{Q}(x_1, x_2) = \mathbb{Q}(x_1^a, x_2^a).\] Let $L$ be the Galois closure of $\mathbb{Q}(x_1, x_2) / \mathbb{Q}$. 
Define \[G= \mathrm{Gal}(L / \mathbb{Q}(x_1^a x_2^{\epsilon a}))\] and \[ H = \mathrm{Gal}(L/ \mathbb{Q}(x_1, x_2)).\] So $H \leq G$, and $H=G$ if and only if $\mathbb{Q}(x_1, x_2) = \mathbb{Q}(x_1^a x_2^{\epsilon a})$. By analogous arguments to those in the proof of Theorem~\ref{thm:fieldsumpower}, we have that \begin{align*} H &= \{ \sigma \in G : \sigma(x_1^a)=x_1^a\}= \{ \sigma \in G : \sigma(x_1)=x_1\}\\ &= \{ \sigma \in G : \sigma(x_2^a)=x_2^a\}= \{ \sigma \in G : \sigma(x_2)=x_2\}. \end{align*} One may now prove Theorem~\ref{thm:fieldprodpower} by following the proof of \cite[Theorem~5.1]{FayeRiffaut18}, thanks to the elementary fact that \[ \lvert (xy)^a \rvert < \lvert (wz)^a \rvert \mbox{ if and only if } \lvert xy \rvert < \lvert wz \rvert.\] Suppose first that $\Delta_1 = \Delta_2$ and $\epsilon =1$. Write $\Delta = \Delta_1 = \Delta_2$. In this case, suppose that $[\mathbb{Q}(x_1, x_2) : \mathbb{Q}(x_1^a x_2^a)] > 2$. Then $[G : H] > 2$. So there exists $\sigma \in G$ such that $\sigma(x_1^a) \neq x_1^a$ and $\sigma(x_1^a) \neq x_2^a$. We may assume that $x_1$ is dominant. Since $x_1^a x_2^a = \sigma(x_1^a) \sigma(x_2^a)$, we have that $\sigma(x_2^a) \neq x_1^a$. In particular, $\sigma(x_1), \sigma(x_2) \neq x_1$, and so neither of $\sigma(x_1), \sigma(x_2)$ is dominant. We therefore obtain, exactly as in \cite[\S5.1.1]{FayeRiffaut18}, that $\lvert \Delta \rvert \leq 395$. To deal with the case where $\lvert \Delta \rvert \leq 395$, we do the following. Since $x_1^a x_2^a = \sigma(x_1^a) \sigma(x_2^a)$, we have that \[ \frac{x_1 x_2}{\sigma(x_1) \sigma(x_2)}\] is a root of unity, which is an element of the field $L$. For each possible choice of singular modulus $x_2$ of discriminant $\Delta$ such that $x_2 \neq x_1$ and for each conjugate $(x_1', x_2')$ of $(x_1, x_2)$ such that $x_1', x_2' \neq x_1$, we may verify in PARI whether \[ \frac{x_1 x_2}{x_1' x_2'}\] is a root of unity in $L$. 
If none are roots of unity, then we may eliminate the corresponding discriminant $\Delta$. In this way, we complete the proof of Theorem~\ref{thm:fieldprodpower} in this case. Now suppose that $\Delta_1 = \Delta_2$ and $\epsilon =-1$. Write $\Delta = \Delta_1 = \Delta_2$. Suppose that $\mathbb{Q}(x_1, x_2) \neq \mathbb{Q}(x_1^a x_2^{-a})$. Then we obtain, exactly as in \cite[\S5.1.2]{FayeRiffaut18}, that $\lvert \Delta \rvert \leq 395$. These small values of $\Delta$ may then be handled in an analogous way to that described above. The cases where $\Delta_1 \neq \Delta_2$ are then treated similarly. We bound $\Delta_1, \Delta_2$ by the same argument as in \cite[\S5]{FayeRiffaut18}, and then eliminate these remaining small discriminants by the process described above. This completes the proof of Theorem~\ref{thm:fieldprodpower}. \section{The $\epsilon_1=\epsilon_2=\epsilon_3$ case of Theorem~\ref{thm:sum}}\label{sec:sum} We now begin the proof of Theorem~\ref{thm:sum}. The ``if'' direction is immediate, so it remains to prove the ``only if'' direction. In this section, we do so in the case that $\epsilon_1=\epsilon_2=\epsilon_3$. Suppose then that $x_1, x_2, x_3$ are singular moduli such that \[ \epsilon_1 x_1^a + \epsilon_2 x_2^a + \epsilon_3 x_3^a \in \mathbb{Q}\] for some $a \in \mathbb{Z}_{>0}$ and $\epsilon_1=\epsilon_2=\epsilon_3 \in \{\pm 1\}$. Clearly, we may assume that $\epsilon_1=\epsilon_2=\epsilon_3=1$. Write $\Delta_i$ for the discriminant of $x_i$ and $h_i = h(\Delta_i)$ for the corresponding class number. We will show that we must be in one of cases (1), (3), or (4) of Theorem~\ref{thm:sum}. If $x_1 = x_2 = x_3$, then, by Proposition~\ref{prop:fieldpower}, $x_1, x_2, x_3 \in \mathbb{Q}$. If $x_i = x_j \neq x_k$, then $2 x_i^a + x_k^a \in \mathbb{Q}$.
Then, by Theorem~\ref{thm:sum2}, either $x_1, x_2, x_3 \in \mathbb{Q}$ or $\mathbb{Q}(x_i) = \mathbb{Q}(x_k)$ and this field has degree $2$ over $\mathbb{Q}$. In the first case, we are done. We now show that the second case cannot occur. Suppose then that $\mathbb{Q}(x_i) = \mathbb{Q}(x_k)$ and this field has degree $2$ over $\mathbb{Q}$. If $\Delta_i = \Delta_k$, then $x_i^a + x_k^a \in \mathbb{Q}$, since $x_i, x_k$ are the two roots of the polynomial $H_{\Delta_i}$. But $x_i^a \notin \mathbb{Q}$ by Proposition~\ref{prop:fieldpower}. Hence, $2 x_i^a + x_k^a \notin \mathbb{Q}$, a contradiction. So we must have that $\Delta_i \neq \Delta_k$. Since $x_i, x_k$ are both degree $2$, they each have a unique conjugate, which we denote $x_i', x_k'$ respectively. Since $\mathbb{Q}(x_i) = \mathbb{Q}(x_k)$, one has that \[ 2 x_i^a + x_k^a = 2 (x_i')^a + (x_k')^a.\] Suppose first that $\lvert \Delta_i \rvert > \lvert \Delta_k \rvert$. Without loss of generality, we may assume that $x_i$ is dominant. We have that \[ 2 = 2 \Big(\frac{x_i'}{x_i}\Big)^a + \Big(\frac{x_k'}{x_i}\Big)^a - \Big(\frac{x_k}{x_i}\Big)^a.\] By Proposition~\ref{prop:dominc}, $\lvert x_i \rvert > \lvert x_i' \rvert, \lvert x_k \rvert, \lvert x_k' \rvert$. Thus, the absolute value of the right hand side of the above equation must be \[\leq 2 \frac{\lvert x_i' \rvert}{\lvert x_i \rvert} + \frac{\lvert x_k' \rvert}{\lvert x_i \rvert} + \frac{\lvert x_k \rvert}{\lvert x_i \rvert}.\] We may however verify in PARI that this expression is always $< 2$, a contradiction. Suppose then that $\lvert \Delta_i \rvert < \lvert \Delta_k \rvert$. In this case, we may, without loss of generality, assume that $x_k$ is dominant. Then \[ 1 = 2 \Big(\frac{x_i'}{x_k}\Big)^a - 2 \Big(\frac{x_i}{x_k}\Big)^a + \Big(\frac{x_k'}{x_k}\Big)^a.\] By Proposition~\ref{prop:dominc}, $\lvert x_k \rvert > \lvert x_i \rvert, \lvert x_i' \rvert, \lvert x_k' \rvert$. 
Thus, the absolute value of the right hand side of the above equation must be \[\leq 2 \frac{\lvert x_i' \rvert}{\lvert x_k \rvert} + 2 \frac{\lvert x_i \rvert}{\lvert x_k \rvert} + \frac{\lvert x_k' \rvert}{\lvert x_k \rvert}.\] We may however verify in PARI that this expression is always $< 1$, a contradiction. We have thus shown that the second case cannot occur. Hence, we may assume from now on that $x_1, x_2, x_3$ are pairwise distinct. Suppose next that some $x_i \in \mathbb{Q}$. Then $x_j^a + x_k^a \in \mathbb{Q}$. By Theorem~\ref{thm:fieldsumpower}, if $\Delta_j \neq \Delta_k$, then \[\mathbb{Q} = \mathbb{Q}( x_j^a + x_k^a) = \mathbb{Q}(x_j, x_k)\] and so $x_j, x_k \in \mathbb{Q}$. If $\Delta_j = \Delta_k$, then $x_j, x_k$ are distinct, conjugate singular moduli and, again by Theorem~\ref{thm:fieldsumpower}, $[\mathbb{Q}(x_j, x_k) : \mathbb{Q}] \leq 2$; we thus must have that $x_j, x_k$ are distinct conjugate singular moduli of degree $2$. Thus, we are in case (3) of the theorem. So subsequently we assume also that no $x_i$ is rational. In particular, $h_i \geq 2$ for all $i$. Without loss of generality, we may assume that $h_1 \geq h_2 \geq h_3$. Let $A \in \mathbb{Q}$ be such that \[ x_1^a + x_2^a + x_3^a = A. \] Observe that $\mathbb{Q}(x_1) = \mathbb{Q}(x_1^a) = \mathbb{Q}(x_2^a + x_3^a)$, where the first equality follows from Proposition~\ref{prop:fieldpower}. Thus, by Theorem~\ref{thm:fieldsumpower}, we have that \[[\mathbb{Q}(x_1) : \mathbb{Q}] = [\mathbb{Q}(x_2^a + x_3^a) : \mathbb{Q}]= \begin{cases} \mbox{ either } [\mathbb{Q}(x_2, x_3) : \mathbb{Q}], \\ \mbox{ or } \frac{1}{2}[\mathbb{Q}(x_2, x_3) : \mathbb{Q}]. \end{cases}\] We now argue as in \cite{Fowler20}. Since $h_2 = [\mathbb{Q}(x_2) : \mathbb{Q}]$ and $h_3 = [\mathbb{Q}(x_3) : \mathbb{Q}]$ each divide $[\mathbb{Q}(x_2,x_3) : \mathbb{Q}]$, we obtain that $h_2, h_3 \mid 2 [\mathbb{Q}(x_1) : \mathbb{Q}]$. So $h_2, h_3 \mid 2 h_1$. Symmetrically, $h_1, h_2 \mid 2h_3$ and $h_1, h_3 \mid 2 h_2$. 
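These divisibility relations, combined with the ordering $h_1 \geq h_2 \geq h_3 \geq 2$, leave only three possible shapes for $(h_1, h_2, h_3)$. As a sanity check, the following Python sketch (our illustration, not part of the original argument; the cut-off $N$ is arbitrary) confirms this by brute force.

```python
def shape_survey(N=100):
    """Exhaustively list the shapes of triples h1 >= h2 >= h3 >= 2 with
    h2, h3 | 2*h1, h1, h2 | 2*h3 and h1, h3 | 2*h2, for h1 up to N."""
    shapes = set()
    for h1 in range(2, N + 1):
        for h2 in range(2, h1 + 1):
            for h3 in range(2, h2 + 1):
                # (a, b) below encodes the condition b | 2*a.
                pairs = [(h1, h2), (h1, h3), (h3, h1), (h3, h2), (h2, h1), (h2, h3)]
                if any((2 * a) % b for a, b in pairs):
                    continue
                if h1 == h2 == h3:
                    shapes.add("(h, h, h)")
                elif h1 == h2 == 2 * h3:
                    shapes.add("(2h, 2h, h)")
                elif h1 == 2 * h2 == 2 * h3:
                    shapes.add("(2h, h, h)")
                else:
                    shapes.add("other")
    return shapes

# Only the three shapes enumerated in the text occur:
assert shape_survey() == {"(h, h, h)", "(2h, 2h, h)", "(2h, h, h)"}
```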
By assumption, $h_1 \geq h_2 \geq h_3$. Therefore, one of the following holds: \begin{enumerate} \item $h_1 = h_2 = h_3$, \item $h_1 = h_2 = 2h_3$, \item $h_1 = 2 h_2 = 2 h_3$. \end{enumerate} We will consider each of these cases in turn. \subsection{The case where $h_1 = h_2 = h_3$} Suppose first that $h_1 = h_2 = h_3$. Write $h$ for this class number. We make another case distinction. \subsubsection{The subcase where $\Delta_1 = \Delta_2 = \Delta_3$}\label{subsubsec0} Suppose that $\Delta_1 = \Delta_2 = \Delta_3$. Write $\Delta$ for this common discriminant. Note that we must have that $h \geq 3$, since the $x_i$ are pairwise distinct. If $h=3$, then this is case (4) of Theorem~\ref{thm:sum}. So assume that $h \geq 4$. Taking conjugates as necessary, we may assume that $x_1$ is dominant. So $x_2, x_3$ are not dominant. Thus, by \eqref{eq:ineq1}, \begin{align}\label{eq:ineq5} \lvert A \rvert \geq (e^{\pi \lvert \Delta \rvert^{1/2}} -2079)^a - 2 (e^{\pi \lvert \Delta \rvert^{1/2}/2} + 2079)^a. \end{align} Since $h \geq 4$, there is a conjugate $(x_1', x_2', x_3')$ of $(x_1, x_2, x_3)$ such that none of $x_1', x_2', x_3'$ is dominant. Therefore, by \eqref{eq:ineq1}, \begin{align}\label{eq:ineq6} \lvert A \rvert &= \lvert (x_1')^a + (x_2')^a + (x_3')^a \rvert \nonumber\\ &\leq 3 (e^{\pi \lvert \Delta \rvert^{1/2}/2} + 2079)^a. \end{align} (In fact, a better bound holds, since at least one of $x_1', x_2', x_3'$ is not subdominant, because there are at most two subdominant singular moduli of discriminant $\Delta$. For our purposes, however, the given bound suffices.) In particular, the two bounds \eqref{eq:ineq5} and \eqref{eq:ineq6} are incompatible whenever \[ (e^{\pi \lvert \Delta \rvert^{1/2}} -2079)^a > 5 (e^{\pi \lvert \Delta \rvert^{1/2}/2} + 2079)^a,\] and so certainly whenever \[ \frac{e^{\pi \lvert \Delta \rvert^{1/2}} -2079}{e^{\pi \lvert \Delta \rvert^{1/2}/2} + 2079} > 5,\] which happens if $\lvert \Delta \rvert \geq 10$. 
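This threshold inequality, and the variant with $e^{2\pi \lvert \Delta \rvert^{1/2}}$ in the numerator that appears later in this section, are easily checked numerically. A minimal Python sketch (ours; the scan ranges are arbitrary cut-offs):

```python
import math

def ratio(c_num, c_den, d):
    """(e^{c_num * pi * d^{1/2}} - 2079) / (e^{c_den * pi * d^{1/2}} + 2079)."""
    return ((math.exp(c_num * math.pi * math.sqrt(d)) - 2079)
            / (math.exp(c_den * math.pi * math.sqrt(d)) + 2079))

# (e^{pi d^{1/2}} - 2079) / (e^{pi d^{1/2}/2} + 2079) > 5 for all d >= 10:
assert all(ratio(1, 0.5, d) > 5 for d in range(10, 2000))
# (e^{2 pi d^{1/2}} - 2079) / (e^{pi d^{1/2}} + 2079) > 5 for all d >= 3:
assert all(ratio(2, 1, d) > 5 for d in range(3, 2000))
```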
Since $\lvert \Delta \rvert < 10$ would contradict the assumption that $h \geq 4$, we are done in this case. \subsubsection{The subcase where the $\Delta_i$ are not all equal} \label{subsubsec1} Next, suppose that the $\Delta_i$ are not all equal. Without loss of generality, we may assume that $\lvert \Delta_1 \rvert > \lvert \Delta_2 \rvert$ and $\lvert \Delta_1 \rvert \geq \lvert \Delta_3 \rvert \geq \lvert \Delta_2 \rvert$. Then, by Proposition~\ref{prop:fieldpower} and Theorem~\ref{thm:fieldsumpower}, \[ \mathbb{Q}(x_3) = \mathbb{Q}(x_3^a)= \mathbb{Q}(x_1^a + x_2^a) = \mathbb{Q}(x_1, x_2),\] since $\Delta_1 \neq \Delta_2$. So \[ \mathbb{Q}(x_1), \mathbb{Q}(x_2) \subset \mathbb{Q}(x_3).\] Since $h_1 = h_2 = h_3$, we must then have that \begin{align}\label{eq:1} \mathbb{Q}(x_1) = \mathbb{Q}(x_2) = \mathbb{Q}(x_3). \end{align} If $\mathbb{Q}(\sqrt{\Delta_i}) \neq \mathbb{Q}(\sqrt{\Delta_j})$ for some $i \neq j$, then all possibilities for $(\Delta_1, \Delta_2, \Delta_3)$ are known by Lemma~\ref{lem:samefield}. We now explain how we eliminate these possibilities. For now, assume that $h = 2$. Then, by \eqref{eq:1}, there is a unique non-trivial Galois conjugate $(x_1', x_2', x_3')$ of $(x_1, x_2, x_3)$ and this satisfies $x_i' \neq x_i$ for all $i$. One has that \[(x_1')^a + (x_2')^a + (x_3')^a = A.\] If $\Delta_1 = \Delta_3$, then $x_1, x_3$ are distinct, conjugate singular moduli of degree $2$, so one has that $x_1 = x_3'$ and $x_3 = x_1'$. Thus, in this case, one obtains that $(x_2)^a - (x_2')^a=0$, but this contradicts Theorem~\ref{thm:prod2}. Hence, $\Delta_1 \neq \Delta_3$. Similarly, one shows that $\Delta_2 \neq \Delta_3$. So we must have that $\lvert \Delta_1 \rvert > \lvert \Delta_3 \rvert > \lvert \Delta_2 \rvert$. 
Observe that \[ x_1^a = (x_1')^a -x_2^a + (x_2')^a - x_3^a + (x_3')^a,\] and thus \[ 1 = \Big(\frac{x_1'}{x_1}\Big)^a -\Big(\frac{x_2}{x_1}\Big)^a +\Big(\frac{x_2'}{x_1}\Big)^a -\Big(\frac{x_3}{x_1}\Big)^a +\Big(\frac{x_3'}{x_1}\Big)^a.\] We may assume that $x_1$ is dominant. Thus, $\lvert x_1' \rvert, \lvert x_2 \rvert, \lvert x_2' \rvert, \lvert x_3 \rvert, \lvert x_3' \rvert < \lvert x_1 \rvert$ by Proposition~\ref{prop:dominc}, since $\lvert \Delta_1 \rvert > \lvert \Delta_2 \rvert, \lvert \Delta_3 \rvert$ and $x_1'$ is not dominant. Hence, \begin{align*} 1 =& \Big\lvert \Big(\frac{x_1'}{x_1}\Big)^a -\Big(\frac{x_2}{x_1}\Big)^a +\Big(\frac{x_2'}{x_1}\Big)^a -\Big(\frac{x_3}{x_1}\Big)^a +\Big(\frac{x_3'}{x_1}\Big)^a \Big\rvert\\ \leq &\Big\lvert \Big(\frac{x_1'}{x_1}\Big) \Big\rvert + \Big\lvert \Big(\frac{x_2}{x_1}\Big) \Big\rvert +\Big\lvert \Big(\frac{x_2'}{x_1}\Big) \Big\rvert +\Big\lvert \Big(\frac{x_3}{x_1}\Big) \Big\rvert +\Big\lvert \Big(\frac{x_3'}{x_1}\Big) \Big\rvert. \end{align*} However, we may verify in PARI that this last expression is $<1$ for every possible choice of $x_2, x_3$ for each of the relevant triples $(\Delta_1, \Delta_2, \Delta_3)$. We may thereby eliminate each of these possible $(\Delta_1, \Delta_2, \Delta_3)$. We now assume that $h>2$. Then inspection of \cite[Table~2]{AllombertBiluMadariaga15} gives us that $h \geq 4$. Assume that $x_1$ is dominant. Since $h \geq 4$, there exists a conjugate $(x_1', x_2', x_3')$ of $(x_1, x_2, x_3)$ such that no $x_i'$ is dominant.
Proceeding as above, we have that \[ 1 = \Big(\frac{x_1'}{x_1}\Big)^a -\Big(\frac{x_2}{x_1}\Big)^a +\Big(\frac{x_2'}{x_1}\Big)^a -\Big(\frac{x_3}{x_1}\Big)^a +\Big(\frac{x_3'}{x_1}\Big)^a.\] By Proposition~\ref{prop:dominc}, $\lvert x_1' \rvert, \lvert x_2 \rvert, \lvert x_2' \rvert, \lvert x_3 \rvert, \lvert x_3' \rvert < \lvert x_1 \rvert$, since: $\lvert \Delta_1 \rvert > \lvert \Delta_2 \rvert$; $\lvert \Delta_1 \rvert \geq \lvert \Delta_3 \rvert$; no $x_i'$ is dominant; and if $\Delta_3 = \Delta_1$, then $x_3$ is not dominant (because $x_3 \neq x_1$). Consequently, \begin{align*} 1= & \Big\lvert \Big(\frac{x_1'}{x_1}\Big)^a -\Big(\frac{x_2}{x_1}\Big)^a +\Big(\frac{x_2'}{x_1}\Big)^a -\Big(\frac{x_3}{x_1}\Big)^a +\Big(\frac{x_3'}{x_1}\Big)^a \Big\rvert\\ \leq &\Big\lvert \Big(\frac{x_1'}{x_1}\Big) \Big\rvert + \Big\lvert \Big(\frac{x_2}{x_1}\Big) \Big\rvert +\Big\lvert \Big(\frac{x_2'}{x_1}\Big) \Big\rvert +\Big\lvert \Big(\frac{x_3}{x_1}\Big) \Big\rvert +\Big\lvert \Big(\frac{x_3'}{x_1}\Big) \Big\rvert. \end{align*} Once again, we may verify in PARI that this last expression is $<1$ for every possible choice of $x_2, x_3$ for each of the relevant triples $(\Delta_1, \Delta_2, \Delta_3)$. We may thereby eliminate this case. Now suppose instead that $\mathbb{Q}(\sqrt{\Delta_1}) = \mathbb{Q}(\sqrt{\Delta_2})= \mathbb{Q}(\sqrt{\Delta_3})$. Then Lemma~\ref{lem:samefield} implies that $\Delta_1 = 4 \Delta_2$ and $\Delta_3 \in \{\Delta_1, \Delta_2\}$. Write $\Delta = \Delta_2$. Suppose first that $\Delta_1 = \Delta_3 = 4 \Delta$ and $\Delta_2 = \Delta$. Taking conjugates, we may assume that $x_1$ is dominant. So $x_3$ is not dominant. Then, by \eqref{eq:ineq1}, \begin{align}\label{eq:ineq7} \lvert A \rvert &\geq (e^{2 \pi \lvert \Delta \rvert^{1/2}} -2079)^a - (e^{\pi \lvert \Delta \rvert^{1/2}} + 2079)^a - (e^{2 \pi \lvert \Delta \rvert^{1/2}/2} + 2079)^a \nonumber\\ & \geq (e^{2 \pi \lvert \Delta \rvert^{1/2}} -2079)^a - 2 (e^{\pi \lvert \Delta \rvert^{1/2}} + 2079)^a.
\end{align} If $h = 2$, then $x_1, x_3$ are two distinct, conjugate singular moduli of degree $2$, so $x_1^a + x_3^a \in \mathbb{Q}$ and hence $x_2 \in \mathbb{Q}$ (since $\mathbb{Q}(x_2^a) = \mathbb{Q}(x_2)$ by Proposition~\ref{prop:fieldpower}). This cannot happen, since $h = 2$. So we must have that $h \geq 3$. Consequently, there is a conjugate $(x_1', x_2', x_3')$ with neither $x_1', x_3'$ dominant. Such a conjugate gives rise to the upper bound \begin{align}\label{eq:ineq8} \lvert A \rvert &\leq 2 (e^{2 \pi \lvert \Delta \rvert^{1/2}/2} +2079)^a + (e^{\pi \lvert \Delta \rvert^{1/2}} + 2079)^a \nonumber \\ &\leq 3 (e^{\pi \lvert \Delta \rvert^{1/2}} + 2079)^a. \end{align} In particular, the bounds \eqref{eq:ineq7} and \eqref{eq:ineq8} are incompatible whenever \[ (e^{2 \pi \lvert \Delta \rvert^{1/2}} -2079)^a > 5 (e^{\pi \lvert \Delta \rvert^{1/2}} + 2079)^a,\] and so certainly whenever \[ \frac{e^{2 \pi \lvert \Delta \rvert^{1/2}} -2079}{e^{\pi \lvert \Delta \rvert^{1/2}} + 2079} > 5,\] which happens if $\lvert \Delta \rvert \geq 3$, i.e. for all discriminants $\Delta$. So this case is proved. Suppose next that $\Delta_1 = 4 \Delta$ and $\Delta_2 = \Delta_3 = \Delta$. Taking conjugates, we may assume that $x_1$ is dominant. At most one of $x_2, x_3$ is dominant, so \begin{align*} \lvert A \rvert &\geq (e^{2 \pi \lvert \Delta \rvert^{1/2}} -2079)^a - (e^{\pi \lvert \Delta \rvert^{1/2}} + 2079)^a - (e^{\pi \lvert \Delta \rvert^{1/2}/2} + 2079)^a \nonumber\\ &\geq (e^{2 \pi \lvert \Delta \rvert^{1/2}} -2079)^a - 2 (e^{\pi \lvert \Delta \rvert^{1/2}} + 2079)^a. \end{align*} Since $h \geq 2$, there is a conjugate $(x_1', x_2', x_3')$ with $x_1'$ not dominant. This conjugate gives rise to the upper bound \begin{align*} \lvert A \rvert &\leq (e^{2 \pi \lvert \Delta \rvert^{1/2}/3} +2079)^a + (e^{\pi \lvert \Delta \rvert^{1/2}} + 2079)^a + (e^{\pi \lvert \Delta \rvert^{1/2}/2} + 2079)^a\\ &\leq 3 (e^{\pi \lvert \Delta \rvert^{1/2}} + 2079)^a. 
\end{align*} We thus complete the proof of this case exactly as in the previous case. \subsection{The case where $h_1 = h_2 = 2 h_3$} Now we come to the case that $h_1 = h_2 = 2 h_3$. In particular, $\Delta_2 \neq \Delta_3$. Thus, by Theorem~\ref{thm:fieldsumpower}, \[ \mathbb{Q}(x_1) = \mathbb{Q}(x_1^a)= \mathbb{Q}(x_2^a + x_3^a) = \mathbb{Q}(x_2, x_3) \supset \mathbb{Q}(x_2), \mathbb{Q}(x_3).\] And so \[ \mathbb{Q}(x_1) = \mathbb{Q}(x_2) \supsetneq \mathbb{Q}(x_3)\] with $[\mathbb{Q}(x_1) : \mathbb{Q}(x_3)]=2$. If $\Delta_1 \neq \Delta_2$, then \[\mathbb{Q}(x_3) = \mathbb{Q}(x_3^{a})= \mathbb{Q}(x_1^a + x_2^a) = \mathbb{Q}(x_1, x_2) = \mathbb{Q}(x_1),\] a contradiction. So $\Delta_1 = \Delta_2$. Observe also that $h_1 = h_2 \geq 4$ since $h_3 \geq 2$. \subsubsection{The subcase where $\mathbb{Q}(\sqrt{\Delta_1})=\mathbb{Q}(\sqrt{\Delta_3})$} Suppose that $\mathbb{Q}(\sqrt{\Delta_1})=\mathbb{Q}(\sqrt{\Delta_3})$. Then Lemma~\ref{lem:subfieldsamefund} implies that $\Delta_1 \in \{9 \Delta_3/4, 4 \Delta_3, 9 \Delta_3, 16 \Delta_3\}$. First, suppose that $\Delta_1 = \Delta_2 = 9 \Delta/4$ and $\Delta_3 = \Delta$. We may assume that $x_1$ is dominant, and hence $x_2$ is not dominant. We obtain that \begin{align}\label{eq:ineq9} \lvert A \rvert &\geq (e^{3 \pi \lvert \Delta \rvert^{1/2}/2} -2079)^a - (e^{3 \pi \lvert \Delta \rvert^{1/2}/4} + 2079)^a - (e^{\pi \lvert \Delta \rvert^{1/2}} + 2079)^a \nonumber\\ &\geq (e^{3 \pi \lvert \Delta \rvert^{1/2}/2} -2079)^a - 2 (e^{\pi \lvert \Delta \rvert^{1/2}} + 2079)^a. \end{align} Since $h_1 \geq 4$, there exists a conjugate $(x_1', x_2', x_3')$ with $x_1', x_2'$ both not dominant. Thus, \begin{align}\label{eq:ineq10} \lvert A \rvert &\leq 2 (e^{3 \pi \lvert \Delta \rvert^{1/2}/4} +2079)^a + (e^{\pi \lvert \Delta \rvert^{1/2}} + 2079)^a \nonumber\\ &\leq 3 (e^{\pi \lvert \Delta \rvert^{1/2}} + 2079)^a. 
\end{align} In particular, the bounds \eqref{eq:ineq9} and \eqref{eq:ineq10} are incompatible whenever \[ (e^{3 \pi \lvert \Delta \rvert^{1/2} / 2} -2079)^a > 5 (e^{\pi \lvert \Delta \rvert^{1/2}} + 2079)^a,\] and so certainly whenever \[ \frac{e^{3 \pi \lvert \Delta \rvert^{1/2} /2} -2079}{e^{\pi \lvert \Delta \rvert^{1/2}} + 2079} > 5,\] which happens if $\lvert \Delta \rvert \geq 5$. Since $\lvert \Delta \rvert < 5$ would contradict the assumption that $h_3 \geq 2$, we are done in this case. Second, suppose that $\Delta_1 = \Delta_2 = 4 \Delta$ and $\Delta_3 = \Delta$. Then, taking $x_1$ dominant, \begin{align}\label{eq:ineq11} \lvert A \rvert &\geq (e^{2 \pi \lvert \Delta \rvert^{1/2}} -2079)^a - 2 (e^{ \pi \lvert \Delta \rvert^{1/2}} + 2079)^a. \end{align} Since $h_1 \geq 4$, there exists a conjugate $(x_1', x_2', x_3')$ with $x_1', x_2'$ both not dominant. Thus, \begin{align} \label{eq:ineq12} \lvert A \rvert \leq 3 (e^{ \pi \lvert \Delta \rvert^{1/2}} +2079)^a. \end{align} The bounds \eqref{eq:ineq11} and \eqref{eq:ineq12} are incompatible whenever \[ (e^{2 \pi \lvert \Delta \rvert^{1/2}} -2079)^a > 5 (e^{\pi \lvert \Delta \rvert^{1/2}} + 2079)^a,\] which we have already seen is true for all discriminants $\Delta$. Third, suppose that $\Delta_1 = \Delta_2 = 9 \Delta$ and $\Delta_3 = \Delta$. Then, taking $x_1$ dominant, \begin{align}\label{eq:ineq13} \lvert A \rvert &\geq (e^{3 \pi \lvert \Delta \rvert^{1/2}} -2079)^a - (e^{3 \pi \lvert \Delta \rvert^{1/2}/2} + 2079)^a - (e^{\pi \lvert \Delta \rvert^{1/2}} + 2079)^a \nonumber\\ &\geq (e^{3 \pi \lvert \Delta \rvert^{1/2}} -2079)^a - 2 (e^{3 \pi \lvert \Delta \rvert^{1/2}/2} + 2079)^a. \end{align} Since $h_1 \geq 4$, there exists a conjugate $(x_1', x_2', x_3')$ with $x_1', x_2'$ both not dominant. 
Thus, \begin{align}\label{eq:ineq14} \lvert A \rvert &\leq 2 (e^{3 \pi \lvert \Delta \rvert^{1/2}/2} +2079)^a + (e^{\pi \lvert \Delta \rvert^{1/2}} + 2079)^a \nonumber \\ &\leq 3(e^{3 \pi \lvert \Delta \rvert^{1/2}/2} +2079)^a . \end{align} The bounds \eqref{eq:ineq13} and \eqref{eq:ineq14} are incompatible whenever \[ (e^{3 \pi \lvert \Delta \rvert^{1/2}} -2079)^a > 5 (e^{3 \pi \lvert \Delta \rvert^{1/2}/2} + 2079)^a,\] and so certainly whenever \[\frac{e^{3 \pi \lvert \Delta \rvert^{1/2}} -2079}{e^{3 \pi \lvert \Delta \rvert^{1/2}/2} + 2079} > 5,\] which holds for all $\lvert \Delta \rvert \geq 2$, i.e. for all discriminants $\Delta$. Finally suppose that $\Delta_1 = \Delta_2 = 16 \Delta$ and $\Delta_3 = \Delta$. Then, taking $x_1$ dominant, \begin{align}\label{eq:ineq15} \lvert A \rvert &\geq (e^{4 \pi \lvert \Delta \rvert^{1/2}} -2079)^a - (e^{2 \pi \lvert \Delta \rvert^{1/2}} + 2079)^a - (e^{\pi \lvert \Delta \rvert^{1/2}} + 2079)^a\nonumber \\ &\geq (e^{4 \pi \lvert \Delta \rvert^{1/2}} -2079)^a - 2(e^{2 \pi \lvert \Delta \rvert^{1/2}} + 2079)^a. \end{align} Since $h_1 \geq 4$, there exists a conjugate $(x_1', x_2', x_3')$ with $x_1', x_2'$ both not dominant. Thus, \begin{align}\label{eq:ineq16} \lvert A \rvert &\leq 2 (e^{2 \pi \lvert \Delta \rvert^{1/2}} +2079)^a + (e^{\pi \lvert \Delta \rvert^{1/2}} + 2079)^a \nonumber\\ &\leq 3 (e^{2 \pi \lvert \Delta \rvert^{1/2}} +2079)^a. \end{align} The bounds \eqref{eq:ineq15} and \eqref{eq:ineq16} are incompatible whenever \[ (e^{4 \pi \lvert \Delta \rvert^{1/2}} -2079)^a > 5 (e^{2 \pi \lvert \Delta \rvert^{1/2}} + 2079)^a,\] and so certainly whenever \[\frac{e^{4 \pi \lvert \Delta \rvert^{1/2}} -2079}{e^{2 \pi \lvert \Delta \rvert^{1/2}} + 2079} > 5,\] which holds for all $\lvert \Delta \rvert \geq 1$, i.e. for all discriminants $\Delta$. \subsubsection{The subcase where $\mathbb{Q}(\sqrt{\Delta_1}) \neq \mathbb{Q}(\sqrt{\Delta_3})$} Suppose that $\mathbb{Q}(\sqrt{\Delta_1}) \neq \mathbb{Q}(\sqrt{\Delta_3})$. 
Then Lemma~\ref{lem:subfielddiffund} implies that either one of $\Delta_1, \Delta_3$ is listed and the corresponding extension $\mathbb{Q}(x_i) / \mathbb{Q}$ is Galois, or $h_1 \geq 128$. We begin with the first case. In this case, we may find all possibilities for $(\Delta_1, \Delta_2, \Delta_3)$. When $\Delta_1$ is listed, PARI finds\footnote{The command which does this, delta1\textunderscore listed\textunderscore triples(), is contained in the script general.gp.} 330 possible triples $(\Delta_1, \Delta_2, \Delta_3)$. All but $9$ of these have $\lvert \Delta_1 \rvert > \lvert \Delta_3 \rvert$. So for now assume that $\lvert \Delta_1 \rvert > \lvert \Delta_3 \rvert$. Our approach is basically the same as previously. We may assume that $x_1$ is dominant, and so $x_2$ is not dominant. Since $x_1^a + x_2^a + x_3^a = A$, for every conjugate $(x_1', x_2', x_3')$ of $(x_1, x_2, x_3)$, we have that \[ (x_1')^a + (x_2')^a + (x_3')^a = A\] as well. Thus, \[ 1 = \Big(\frac{x_1'}{x_1}\Big)^a -\Big(\frac{x_2}{x_1}\Big)^a +\Big(\frac{x_2'}{x_1}\Big)^a -\Big(\frac{x_3}{x_1}\Big)^a +\Big(\frac{x_3'}{x_1}\Big)^a.\] Since $h_1 \geq 4$, we may always choose $(x_1', x_2', x_3')$ such that neither of $x_1', x_2'$ is dominant. Therefore, $\lvert x_1' \rvert, \lvert x_2 \rvert, \lvert x_2' \rvert, \lvert x_3 \rvert, \lvert x_3' \rvert < \lvert x_1 \rvert$ by Proposition~\ref{prop:dominc}, since $\lvert \Delta_1 \rvert > \lvert \Delta_3 \rvert$ and $x_1', x_2, x_2'$ are not dominant. Hence, \begin{align*} 1= & \Big\lvert \Big(\frac{x_1'}{x_1}\Big)^a -\Big(\frac{x_2}{x_1}\Big)^a +\Big(\frac{x_2'}{x_1}\Big)^a -\Big(\frac{x_3}{x_1}\Big)^a +\Big(\frac{x_3'}{x_1}\Big)^a \Big\rvert\\ \leq &\Big\lvert \Big(\frac{x_1'}{x_1}\Big) \Big\rvert + \Big\lvert \Big(\frac{x_2}{x_1}\Big) \Big\rvert +\Big\lvert \Big(\frac{x_2'}{x_1}\Big) \Big\rvert +\Big\lvert \Big(\frac{x_3}{x_1}\Big) \Big\rvert +\Big\lvert \Big(\frac{x_3'}{x_1}\Big) \Big\rvert. 
\end{align*} However, we may verify in PARI that this last expression is $<1$ for every possible choice of $x_2, x_3$ for all but one of the $321$ triples $(\Delta_1, \Delta_2, \Delta_3)$, and so we may eliminate those triples. The one exception is the triple $(\Delta_1, \Delta_2, \Delta_3) = (-240, -240, -235)$, which has $h(\Delta_1) = 4$. The problem here is that, since $-240, -235$ are so close to one another, one has that \[ \frac{\lvert x_3 \rvert}{\lvert x_1 \rvert} \approx 0.6\] if $x_1, x_3$ are the dominant singular moduli of respective discriminants $-240, -235$. So the approach described above does not work, because $x_3, x_3'$ could both be dominant. However, we can avoid this issue in the following way. Recall that $\mathbb{Q}(x_3) \subset\mathbb{Q}(x_1)$ with $[\mathbb{Q}(x_1) : \mathbb{Q}(x_3)]=2$ and the extension $\mathbb{Q}(x_1) /\mathbb{Q}$ is Galois. Further, $x_1$ is dominant by assumption, and so $x_2$ is not dominant. Suppose first that $x_3$ is also dominant. Since $h_1=4$, there are exactly $4$ conjugates of $(x_1, x_2, x_3)$, including $(x_1, x_2, x_3)$ itself. Observe that $x_3$ occurs exactly twice among the third coordinates of these $4$ conjugates, and by assumption one of these occurrences is in $(x_1, x_2, x_3)$ itself. Note that $2$ of the $3$ non-trivial conjugates of $(x_1, x_2, x_3)$ have neither their first nor their second coordinate dominant. At most $1$ of these $2$ conjugates has its third coordinate dominant. Thus, there exists a conjugate $(x_1', x_2', x_3')$ of $(x_1, x_2, x_3)$ such that none of $x_1', x_2', x_3'$ are dominant. Now suppose that $x_3$ is not dominant. Exactly $2$ of the $3$ non-trivial conjugates of $(x_1, x_2, x_3)$ have neither their first nor their second coordinate dominant. Further, exactly $2$ of the $3$ non-trivial conjugates of $(x_1, x_2, x_3)$ have their third coordinate dominant. 
Thus, there exists a conjugate $(x_1', x_2', x_3')$ of $(x_1, x_2, x_3)$ such that $x_3'$ is dominant and neither of $x_1', x_2'$ is dominant. In either case, we may always find a conjugate $(x_1', x_2', x_3')$ of $(x_1, x_2, x_3)$ such that $x_1', x_2'$ are not dominant and at most one of $x_3, x_3'$ is dominant. For such $(x_1, x_2, x_3)$ and $(x_1', x_2', x_3')$, we may verify in PARI that \[\Big\lvert \Big(\frac{x_1'}{x_1}\Big) \Big\rvert + \Big\lvert \Big(\frac{x_2}{x_1}\Big) \Big\rvert +\Big\lvert \Big(\frac{x_2'}{x_1}\Big) \Big\rvert +\Big\lvert \Big(\frac{x_3}{x_1}\Big) \Big\rvert +\Big\lvert \Big(\frac{x_3'}{x_1}\Big) \Big\rvert <1.\] We may thus eliminate the triple $(-240, -240, -235)$. We now rule out the $9$ triples $(\Delta_1, \Delta_2, \Delta_3)$ with $\lvert \Delta_1 \rvert < \lvert \Delta_3 \rvert$ analogously. We may assume that \[ x_1^a + x_2^a +x_3^a = A\] with $x_3$ dominant. We may find a conjugate $(x_1', x_2', x_3')$ of $(x_1, x_2, x_3)$ such that $x_3'$ is not dominant. We then verify in PARI that \[\Big\lvert \Big(\frac{x_1}{x_3}\Big) \Big\rvert + \Big\lvert \Big(\frac{x_1'}{x_3}\Big) \Big\rvert +\Big\lvert \Big(\frac{x_2}{x_3}\Big) \Big\rvert +\Big\lvert \Big(\frac{x_2'}{x_3}\Big) \Big\rvert +\Big\lvert \Big(\frac{x_3'}{x_3}\Big) \Big\rvert <1.\] This method allows us to eliminate these $9$ triples. Now suppose that $\Delta_3$ is listed. Once again, we may find in PARI all the possible triples $(\Delta_1, \Delta_2, \Delta_3)$. We then eliminate each triple in the above way, taking either $x_1$ or $x_3$ to be dominant, according to whether $\lvert \Delta_1 \rvert > \lvert \Delta_3 \rvert$ or $\lvert \Delta_1 \rvert < \lvert \Delta_3 \rvert$. The one triple in this case which we cannot immediately eliminate in this way is $(-240, -240, -235)$. However, we have already seen how to eliminate this triple. The proof of this case is thus complete. So we reduce to the second case, where $h_1 \geq 128$.
Observe that either $\lvert \Delta_1 \rvert > \lvert \Delta_3 \rvert$ or $\lvert \Delta_1 \rvert < \lvert \Delta_3 \rvert$. Let $i \in \{1,3\}$ be such that $\Delta_i$ is the discriminant with strictly larger absolute value. Write $\Delta= \lvert \Delta_i \rvert$. Taking $x_i$ to be dominant, we obtain that \begin{align}\label{eq:ineq17} \lvert A \rvert \geq (e^{\pi \Delta^{1/2}} -2079)^a - (e^{\pi \Delta^{1/2}/2} + 2079)^a - (e^{\pi (\Delta - 1)^{1/2}} + 2079)^a. \end{align} Since $h_1, h_2, h_3 \geq 64$, we may find a conjugate $(x_1', x_2', x_3')$ with none of $x_1', x_2', x_3'$ dominant, so that \begin{align} \label{eq:ineq18} \lvert A \rvert \leq 3 (e^{\pi \Delta^{1/2}/2} +2079)^a. \end{align} The bounds \eqref{eq:ineq17} and \eqref{eq:ineq18} are incompatible when \[ (e^{\pi \Delta^{1/2}} -2079)^a - (e^{\pi (\Delta - 1)^{1/2}} + 2079)^a > 4 (e^{\pi \Delta^{1/2}/2} +2079)^a,\] and, in particular, whenever (without loss of generality, we take $\Delta \geq 6$, so that $e^{\pi \Delta^{1/2}}-2079>0$) \[ 1 - \Big(\frac{e^{\pi (\Delta - 1)^{1/2}} + 2079}{e^{\pi \Delta^{1/2}} -2079}\Big)^a > 4 \Big(\frac{e^{\pi \Delta^{1/2}/2} +2079}{e^{\pi \Delta^{1/2}} -2079}\Big)^a.\] For $\Delta \geq 9$, we have that \[0 < \frac{e^{\pi (\Delta - 1)^{1/2}} + 2079}{e^{\pi \Delta^{1/2}} -2079}, \frac{e^{\pi \Delta^{1/2}/2} +2079}{e^{\pi \Delta^{1/2}} -2079} <1. \] Therefore, for the given bounds to be incompatible, it will suffice to have $\Delta \geq 9$ such that \[ 1 - \frac{e^{\pi (\Delta - 1)^{1/2}} + 2079}{e^{\pi \Delta^{1/2}} -2079}> 4 \frac{e^{\pi \Delta^{1/2}/2} +2079}{e^{\pi \Delta^{1/2}} -2079},\] since $1-y^a \geq 1-y$ and $y^a \leq y$ for $0<y<1$ and $a \geq 1$. Equivalently, to have $\Delta \geq 9$ such that \[(e^{\pi \Delta^{1/2}} -2079) - (e^{\pi (\Delta - 1)^{1/2}} + 2079) > 4 (e^{\pi \Delta^{1/2}/2} +2079),\] which happens for all $\Delta \geq 12$. Since $\Delta < 12$ would imply that $x_i \in \mathbb{Q}$, we are done. 
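The last displayed inequality is elementary to check numerically. A minimal Python sketch (the function name is ours) confirming that it holds from $\Delta = 12$ onwards over a long initial range, and fails at $\Delta = 11$:

```python
import math

def margin(D):
    """Left-hand side minus right-hand side of the final inequality,
    with D playing the role of Delta = |Delta_i|."""
    lhs = (math.exp(math.pi * math.sqrt(D)) - 2079) \
        - (math.exp(math.pi * math.sqrt(D - 1)) + 2079)
    rhs = 4 * (math.exp(math.pi * math.sqrt(D) / 2) + 2079)
    return lhs - rhs

print(all(margin(D) > 0 for D in range(12, 500)))  # True: holds for 12 <= D < 500
print(margin(11) > 0)                              # False: fails at D = 11
```

For larger $\Delta$ the left-hand side grows like $e^{\pi \Delta^{1/2}}$ against $e^{\pi \Delta^{1/2}/2}$ on the right, so the finite check illustrates the claim.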
\subsection{The case where $h_1= 2 h_2= 2 h_3$} Finally, we come to the case that $h_1 = 2 h_2 = 2 h_3$. In particular, $\Delta_1 \neq \Delta_2, \Delta_3$. Thus, by Theorem~\ref{thm:fieldsumpower}, \[ \mathbb{Q}(x_2) = \mathbb{Q}(x_2^a)= \mathbb{Q}(x_1^a + x_3^a) = \mathbb{Q}(x_1, x_3) \supset \mathbb{Q}(x_1).\] But this contradicts the fact that $h_1 = 2 h_2 > h_2$. The proof of the $\epsilon_1=\epsilon_2=\epsilon_3$ case of Theorem~\ref{thm:sum} is thus complete. \section{The general case of Theorem~\ref{thm:sum}}\label{sec:diff} In this section, we prove the ``only if'' direction of Theorem~\ref{thm:sum} in general. Let $\epsilon_1, \epsilon_2, \epsilon_3 \in \{\pm 1\}$ and $a \in \mathbb{Z}_{>0}$. Suppose that $x_1, x_2, x_3$ are singular moduli such that \[ \epsilon_1 x_1^a + \epsilon_2 x_2^a + \epsilon_3 x_3^a \in \mathbb{Q}.\] Write $\Delta_i$ for the discriminant of $x_i$ and $h_i = h(\Delta_i)$ for the corresponding class number. We will show that we must be in one of cases (1)--(4) of Theorem~\ref{thm:sum}. If $x_1 = x_2 = x_3$, then clearly $x_1, x_2, x_3 \in \mathbb{Q}$ by Proposition~\ref{prop:fieldpower}. If $x_i = x_j \neq x_k$, then either $\epsilon_i = - \epsilon_j$ and so $x_k \in \mathbb{Q}$ by Proposition~\ref{prop:fieldpower}, or $\epsilon_i = \epsilon_j$ and so $2 \epsilon_i x_i^a + \epsilon_k x_k^a \in \mathbb{Q}$. The former possibility is case (2) of Theorem~\ref{thm:sum}. So suppose we are in the latter case. Then, by Theorem~\ref{thm:sum2}, either $x_i, x_k \in \mathbb{Q}$ (and we are in case (1) of Theorem~\ref{thm:sum}), or $\mathbb{Q}(x_i) = \mathbb{Q}(x_k)$ and this number field has degree $2$ over $\mathbb{Q}$. Suppose that we are in the second case. If $\epsilon_i = \epsilon_k$, then we showed how to eliminate this case at the beginning of Section~\ref{sec:sum}. So we may assume that $\epsilon_i \neq \epsilon_k$. 
Let $A \in \mathbb{Q}$ be such that \[2 x_i^a - x_k^a = A.\] Since $\mathbb{Q}(x_i)=\mathbb{Q}(x_k)$ has degree $2$, there exists a unique conjugate $(x_i', x_k')$ of $(x_i, x_k)$. One has that \[2 (x_i')^a - (x_k')^a = A.\] Suppose that $\Delta_i = \Delta_k$. Then $x_i' = x_k$ and $x_k' = x_i$. One thus obtains that \[2x_i^a - x_k^a = 2 x_k^a -x_i^a,\] and so $x_i^a = x_k^a$. This contradicts Theorem~\ref{thm:prod2}. So we may suppose that $\Delta_i \neq \Delta_k$. One then uses the equality \[ 2 x_i^a - x_k^a = 2 (x_i')^a - (x_k')^a\] to eliminate all the possibilities using PARI, in the same way as explained at the beginning of Section~\ref{sec:sum}. So we may assume from now on that $x_1, x_2, x_3$ are pairwise distinct. Suppose next that some $x_i \in \mathbb{Q}$. Then $\epsilon_j x_j^a + \epsilon_k x_k^a \in \mathbb{Q}$. By Theorem~\ref{thm:fieldsumpower}, \[\mathbb{Q} = \mathbb{Q}(\epsilon_j x_j^a + \epsilon_k x_k^a) = \mathbb{Q}(x_j, x_k)\] and so $x_j, x_k \in \mathbb{Q}$, unless $\epsilon_j = \epsilon_k$ and $\Delta_j = \Delta_k$. In the exceptional case, $[\mathbb{Q}(x_j, x_k) : \mathbb{Q}] \leq 2$, and so $x_j, x_k$ are distinct, conjugate singular moduli of degree $2$. Since $\epsilon_j = \epsilon_k$, this falls under case (3) of Theorem~\ref{thm:sum}. So we also assume subsequently that $x_1, x_2, x_3 \notin \mathbb{Q}$. If $\epsilon_1 = \epsilon_2 = \epsilon_3$, then we are done by the $\epsilon_1 = \epsilon_2 = \epsilon_3$ case of Theorem~\ref{thm:sum}, which was proved in Section~\ref{sec:sum}. So we may assume that the $\epsilon_i$ are not all equal. Clearly, it suffices to consider the case where $\epsilon_1 = \epsilon_2 = - \epsilon_3 = 1$. 
Let $A \in \mathbb{Q}$ be such that \[ x_1^a + x_2^a - x_3^a = A.\] By Theorem~\ref{thm:fieldsumpower}, we have that \[ \mathbb{Q}(x_1) = \mathbb{Q}(x_1^a)= \mathbb{Q}(x_2^a - x_3^a) = \mathbb{Q}(x_2, x_3)\] and \[\mathbb{Q}(x_2) = \mathbb{Q}(x_2^a)=\mathbb{Q}(x_1^a - x_3^a) = \mathbb{Q}(x_1, x_3).\] So \[ \mathbb{Q}(x_1, x_2)= \mathbb{Q}(x_1) = \mathbb{Q}(x_2) \supset \mathbb{Q}(x_3).\] Further, $\mathbb{Q}(x_3) =\mathbb{Q}(x_3^a) = \mathbb{Q}(x_1^a + x_2^a)$ and $[\mathbb{Q}(x_1, x_2) : \mathbb{Q}(x_1^a + x_2^a)] \leq 2$. \subsection{The case where $\mathbb{Q}(x_1) = \mathbb{Q}(x_2) = \mathbb{Q}(x_3)$}\label{subsec:diffeq} Suppose first that $\mathbb{Q}(x_1) = \mathbb{Q}(x_2) = \mathbb{Q}(x_3)$. Write $h = h(\Delta_1) = h(\Delta_2) = h(\Delta_3)$. Without loss of generality, we may assume that $\lvert \Delta_1 \rvert \geq \lvert \Delta_2 \rvert$. If $\mathbb{Q}(\sqrt{\Delta_i}) \neq \mathbb{Q}(\sqrt{\Delta_j})$ for some $i \neq j$, then, by Lemma~\ref{lem:samefield}, all possible $(\Delta_1, \Delta_2, \Delta_3)$ are listed. We now explain how we eliminate these possibilities. To begin with, assume that $h = 2$. Then there is a unique non-trivial conjugate $(x_1', x_2', x_3')$ of $(x_1, x_2, x_3)$ and one has that \[(x_1')^a + (x_2')^a - (x_3')^a = A.\] If $\Delta_1 = \Delta_2$, then $x_1, x_2$ are distinct, conjugate singular moduli of degree $2$, so one has that $x_1 = x_2'$ and $x_2 = x_1'$. Thus, in this case, one obtains that $(x_3)^a - (x_3')^a=0$, but this contradicts Theorem~\ref{thm:prod2} since $x_3 \neq x_3'$. Hence, $\Delta_1 \neq \Delta_2$, and so $\lvert \Delta_1 \rvert > \lvert \Delta_2 \rvert$. Suppose first that $\Delta_1 = \Delta_3$. Then $x_1' = x_3$ and $x_3' = x_1$. So \[ 2 x_1^a = 2 x_3^a - x_2^a + (x_2')^a.\] We may then use this equality, as at the beginning of Section~\ref{sec:sum}, to eliminate this choice of discriminants by taking $x_1$ to be dominant. Suppose then that $\Delta_1 \neq \Delta_3$. 
Then \begin{align}\label{eq:2} x_1^a = (x_1')^a - x_2^a + (x_2')^a + x_3^a - (x_3')^a \end{align} and \begin{align}\label{eq:3} x_3^a = x_1^a -(x_1')^a + x_2^a - (x_2')^a + (x_3')^a. \end{align} If $\lvert \Delta_1 \rvert > \lvert \Delta_3 \rvert$, then take $x_1$ dominant and use \eqref{eq:2}. If $\lvert \Delta_1 \rvert < \lvert \Delta_3 \rvert$, then take $x_3$ dominant and use \eqref{eq:3}. In each case, the approach is the same as that used in \S\ref{subsubsec1}. So we may now assume that $h>2$. Then inspection of \cite[Table~2]{AllombertBiluMadariaga15} gives us that $h \geq 4$. Recall that, by assumption, $\lvert \Delta_1 \rvert \geq \lvert \Delta_2 \rvert$. Assume first that $\lvert \Delta_1 \rvert \geq \lvert \Delta_3 \rvert$ as well. Suppose that $x_1$ is dominant. Since $h \geq 4$, there exists a conjugate $(x_1', x_2', x_3')$ of $(x_1, x_2, x_3)$ such that no $x_i'$ is dominant. Proceeding as above, we have that \[ 1 = \Big(\frac{x_1'}{x_1}\Big)^a -\Big(\frac{x_2}{x_1}\Big)^a +\Big(\frac{x_2'}{x_1}\Big)^a +\Big(\frac{x_3}{x_1}\Big)^a -\Big(\frac{x_3'}{x_1}\Big)^a.\] Further, $\lvert x_1' \rvert, \lvert x_2 \rvert, \lvert x_2' \rvert, \lvert x_3 \rvert, \lvert x_3' \rvert < \lvert x_1 \rvert$ by Proposition~\ref{prop:dominc}, since: $\lvert \Delta_1 \rvert \geq \lvert \Delta_2 \rvert, \lvert \Delta_3 \rvert$; no $x_i'$ is dominant; and, for $i \in \{2,3\}$, if $\Delta_i = \Delta_1$, then $x_i$ is not dominant since $x_i \neq x_1$. Consequently, \begin{align*} 1= & \Big\lvert \Big(\frac{x_1'}{x_1}\Big)^a -\Big(\frac{x_2}{x_1}\Big)^a +\Big(\frac{x_2'}{x_1}\Big)^a +\Big(\frac{x_3}{x_1}\Big)^a -\Big(\frac{x_3'}{x_1}\Big)^a \Big\rvert\\ \leq &\Big\lvert \Big(\frac{x_1'}{x_1}\Big) \Big\rvert + \Big\lvert \Big(\frac{x_2}{x_1}\Big) \Big\rvert +\Big\lvert \Big(\frac{x_2'}{x_1}\Big) \Big\rvert +\Big\lvert \Big(\frac{x_3}{x_1}\Big) \Big\rvert +\Big\lvert \Big(\frac{x_3'}{x_1}\Big) \Big\rvert. 
\end{align*} Once again, we may verify in PARI that this last expression is $<1$ for every possible choice of $x_2, x_3$ for each of the relevant triples $(\Delta_1, \Delta_2, \Delta_3)$. We may thereby eliminate this case. Now suppose that $\lvert \Delta_3 \rvert > \lvert \Delta_1 \rvert$. Then we proceed in exactly the same way, but taking $x_3$ dominant instead. We may thus eliminate each of the relevant triples $(\Delta_1, \Delta_2, \Delta_3)$. So suppose that $\mathbb{Q}(\sqrt{\Delta_1}) = \mathbb{Q}(\sqrt{\Delta_2}) = \mathbb{Q}(\sqrt{\Delta_3})$. Then, by Lemma~\ref{lem:samefield} again, $\Delta_i / \Delta_j \in \{1/4, 1, 4\}$ for all $i, j$. We thus have, since $\lvert \Delta_1 \rvert \geq \lvert \Delta_2 \rvert$ by assumption, one of: \begin{enumerate} \item either $\Delta_1 = \Delta_2$ and one of: \begin{enumerate} \item $\Delta_3 = \Delta_1 / 4$, \item $\Delta_3 = \Delta_1$, \item $\Delta_3 = 4 \Delta_1$; \end{enumerate} \item or $\Delta_1 = 4 \Delta_2$ and one of: \begin{enumerate} \item $\Delta_3 = \Delta_1$, \item $\Delta_3 = \Delta_2$. \end{enumerate} \end{enumerate} Cases 1(a) and 2(a) reduce to the second case of \S\ref{subsubsec1} by means of the inequalities \[ \lvert x_1^a + x_2^a - x_3^a \rvert \geq \lvert x_1 \rvert^a - \lvert x_2 \rvert^a - \lvert x_3 \rvert^a\] and \[\lvert x_1^a + x_2^a - x_3^a \rvert \leq \lvert x_1 \rvert^a + \lvert x_2 \rvert^a + \lvert x_3 \rvert^a.\] In case 1(b), we must have that $h \geq 3$ since $x_1, x_2, x_3$ are three distinct Galois conjugates. If $h=3$, then $x_1, x_2, x_3$ are the three roots of the Hilbert class polynomial $H_{\Delta_1} \in \mathbb{Z}[x]$. Hence, \[ x_1^a + x_2^a + x_3^a \in \mathbb{Q}.\] Since $x_1^a + x_2^a - x_3^a \in \mathbb{Q}$ also, we obtain that \[ x_3^a = \frac{(x_1^a + x_2^a + x_3^a)-(x_1^a+x_2^a-x_3^a)}{2} \in \mathbb{Q}.\] But then Proposition~\ref{prop:fieldpower} implies that $x_3 \in \mathbb{Q}$, a contradiction. So we may assume that $h \geq 4$. 
We may now reduce case 1(b) to the case of \S\ref{subsubsec0} by using the inequalities from the previous paragraph. Case 1(c) is new. Here, we may take $x_3$ to be dominant and use the lower bound coming from \[ \lvert x_1^a + x_2^a - x_3^a \rvert \geq \lvert x_3 \rvert^a - \lvert x_1 \rvert^a - \lvert x_2 \rvert^a.\] The resulting lower bound is incompatible for all $\lvert \Delta_1 \rvert \geq 3$ (i.e. for all $\Delta_1$) with the upper bound obtained from \[\lvert (x_1')^a + (x_2')^a - (x_3')^a \rvert \leq \lvert x_1' \rvert^a + \lvert x_2' \rvert^a + \lvert x_3' \rvert^a,\] where $(x_1', x_2', x_3')$ is a conjugate of $(x_1, x_2, x_3)$ with $x_3'$ not dominant. Such a conjugate exists since $h \geq 2$. We treat case 2(b) in directly analogous fashion to case 1(c), swapping the roles of $x_1$ and $x_3$. We thus complete the proof in this case. \subsection{The case where $\mathbb{Q}(x_1) = \mathbb{Q}(x_2) \supsetneq \mathbb{Q}(x_3)$}\label{subsec:diffdiff} So now suppose that $\mathbb{Q}(x_1) = \mathbb{Q}(x_2) \supsetneq \mathbb{Q}(x_3)$. Then $[\mathbb{Q}(x_1, x_2) : \mathbb{Q}(x_3)]=2$. If $\Delta_1 \neq \Delta_2$, then \[\mathbb{Q}(x_3) = \mathbb{Q}(x_3^a) = \mathbb{Q}(x_1^a + x_2^a) = \mathbb{Q}(x_1, x_2) = \mathbb{Q}(x_1),\] a contradiction. So $\Delta_1 = \Delta_2$. Observe also that $h_1 = h_2 \geq 4$ since $h_3 \geq 2$ and $h_1 = h_2 = 2 h_3$. Suppose first that $\mathbb{Q}(\sqrt{\Delta_1})=\mathbb{Q}(\sqrt{\Delta_3})$. Then Lemma~\ref{lem:subfieldsamefund} implies that $\Delta_1 \in \{9 \Delta_3/4, 4 \Delta_3, 9 \Delta_3, 16 \Delta_3\}$. These cases may be dealt with in exactly the same way as the corresponding cases in Section~\ref{sec:sum}. So suppose that $\mathbb{Q}(\sqrt{\Delta_1}) \neq \mathbb{Q}(\sqrt{\Delta_3})$. Then Lemma~\ref{lem:subfielddiffund} implies that either one of $\Delta_1, \Delta_3$ is listed, or $h_1 \geq 128$. In the first case, we may find all possibilities for $(\Delta_1, \Delta_2, \Delta_3)$. 
Each of these may be eliminated in exactly the same way as was done in Section~\ref{sec:sum}. So we reduce to the second case, where $h_1 \geq 128$. This case may also be handled exactly as was done in Section~\ref{sec:sum}. The proof of Theorem~\ref{thm:sum} is thus complete. \section{Fermat's last theorem for singular moduli}\label{sec:fermat} In this section, we prove that there are no triples $(x_1, x_2, x_3)$ with $x_1, x_2, x_3$ all non-zero singular moduli which satisfy either \[ x_1^a + x_2^a + x_3^a = 0\] or \[ x_1^a + x_2^a - x_3^a = 0,\] where $a \in \mathbb{Z}_{>0}$. This proves a ``Fermat's last theorem'' for singular moduli and gives a completely explicit form of Andr\'e--Oort for the two corresponding algebraic surfaces. For fixed $a$, we note that effective bounds on the discriminants of singular moduli $x_1, x_2, x_3$ satisfying either of the above equations follow from Binyamini's effective Andr\'e--Oort result for ``hdnd hypersurfaces'' \cite[Corollary~4]{Binyamini19}. \begin{cor}\label{cor:diff} Let $a \in \mathbb{Z}_{>0}$. Suppose that $x_1, x_2, x_3$ are singular moduli such that $x_1^a + x_2^a - x_3^a = 0$. Then $\{0, x_3\}=\{x_1, x_2\}$. \end{cor} \begin{proof} Suppose that $x_1, x_2, x_3$ are singular moduli such that $x_1^a + x_2^a - x_3^a = 0$. If $x_3 \in \{x_1, x_2\}$, then clearly $\{x_1, x_2\} = \{0, x_3\}$. Suppose next that $x_1 = x_2 \neq x_3$. Then $2 x_1^a = x_3^a$. So $x_1, x_3 \neq 0$ and \[ \frac{x_1^a}{x_3^a} = \frac{1}{2}.\] We thus must have that $x_1, x_3 \in \mathbb{Q}$ by Theorem~\ref{thm:prod2}. An inspection of the list of rational singular moduli shows that this is impossible. We may thus assume that $x_1, x_2, x_3$ are pairwise distinct. We will show that this is impossible. Suppose first that some $x_i = 0$. Then $x_j, x_k \neq 0$ and $x_j^a + \epsilon x_k^a =0$ for some $\epsilon \in \{\pm 1\}$. 
Hence one has that \[ \Big(\frac{x_j}{x_k}\Big)^{2a} = 1,\] and so, by Theorem~\ref{thm:prod2}, one has that $x_j, x_k \in \mathbb{Q}$. So $x_j/ x_k$ is a rational root of unity (i.e. $\pm 1$), so $x_j = - x_k$ since $x_j \neq x_k$. Inspecting the list of rational singular moduli, we see that this is impossible. Hence, we may assume that $x_1, x_2, x_3 \neq 0$ also. We will show that this leads to a contradiction. By Theorem~\ref{thm:sum}, we have that either $x_1, x_2, x_3 \in \mathbb{Q}$ or $x_3 \in \mathbb{Q}$ and $x_1, x_2$ are degree $2$ and conjugate. Suppose first that $x_1, x_2, x_3 \in \mathbb{Q}$ (and hence $\in \mathbb{Z}$). Then Fermat's last theorem (Wiles \cite{Wiles95} and Taylor--Wiles \cite{TaylorWiles95}) implies that $a \leq 2$. For $a \leq 2$, we may verify in PARI that the statement holds. Suppose then that $x_3 \in \mathbb{Q}$ and $x_1, x_2$ are degree $2$ and conjugate. Without loss of generality, we may assume that $x_1$ is dominant, and so $x_2$ is not dominant. So $\lvert x_2 \rvert \leq 0.1 \lvert x_1 \rvert$ by Proposition~\ref{prop:dom}. We thus have that \begin{align*} \lvert x_3 \rvert^a &\geq \lvert x_1 \rvert^a - \lvert x_2 \rvert^a\\ &\geq \lvert x_1 \rvert^a - (0.1 \lvert x_1 \rvert)^a\\ &= (1- 0.1^a) \lvert x_1 \rvert^a\\ &\geq (0.9 \lvert x_1 \rvert)^a. \end{align*} Hence, $\lvert x_3 \rvert \geq 0.9 \lvert x_1 \rvert$. Clearly, \[ \lvert x_3 \rvert^a \leq \lvert x_1 \rvert^a + \lvert x_2 \rvert^a \leq (\lvert x_1 \rvert + \lvert x_2 \rvert)^a,\] and so $\lvert x_3 \rvert \leq \lvert x_1 \rvert + \lvert x_2 \rvert$. We may verify in PARI that these two inequalities are never both satisfied. \end{proof} \begin{remark} In particular, for every $a \in \mathbb{Z}_{>0}$, the sum (respectively, difference) of the $a$th powers of two non-zero (respectively, non-zero and distinct) singular moduli is never equal to the $a$th power of a singular modulus. 
Differences of singular moduli are themselves objects of considerable interest, see e.g. \cite{GrossZagier85}. \end{remark} \begin{cor}\label{cor:vanish} Let $n \leq 3$ and let $x_1, \ldots, x_n$ be singular moduli (which are not necessarily pairwise distinct). Suppose that $x_1^a + \dots + x_n^a = 0$ for some $a \in \mathbb{Z}_{>0}$. Then $x_i=0$ for all $i \in \{1, \ldots, n\}$. \end{cor} \begin{proof} The case $n=1$ is trivial. Suppose $n=2$. If $x_1 = x_2$, then the result is immediate. So suppose $x_1 \neq x_2$ and $x_1^a + x_2^a =0$. The same argument as in the case where some $x_i=0$ in the proof of Corollary~\ref{cor:diff} shows that this is impossible. Now for the $n=3$ case. Suppose that $x_1, x_2, x_3$ are singular moduli such that $x_1^a + x_2^a + x_3^a = 0$ for some $a \geq 1$. If $x_1 = x_2 = x_3$, then clearly $x_1=x_2=x_3=0$. Suppose that $x_1=x_2 \neq x_3$. Then $2 x_1^a = -x_3^a$. So $x_1, x_3 \neq 0$. Then Theorem~\ref{thm:prod2} implies that $x_1, x_3 \in \mathbb{Q}$. An inspection of the list of rational singular moduli shows that this is impossible. So we may assume that $x_1, x_2, x_3$ are pairwise distinct. Suppose $x_3=0$ say. Then $x_1^a + x_2^a = 0$. By the same argument as before, this is impossible. So we may assume also that $x_1, x_2, x_3 \neq 0$. Theorem~\ref{thm:sum} implies that one of the following holds: \begin{enumerate} \item $x_1, x_2, x_3 \in \mathbb{Q}$, \item some $x_i, x_j$ are conjugate and of degree $2$ and the remaining $x_k \in \mathbb{Q}$, \item $x_1, x_2, x_3$ are conjugate and of degree $3$. \end{enumerate} Suppose first that $x_1, x_2, x_3 \in \mathbb{Q}$. Then Fermat's last theorem \cite{Wiles95, TaylorWiles95} implies that $a \leq 2$. If $a =2$, then $- x_3^2 = x_1^2 + x_2^2 >0$, which is impossible. So we must have that $a=1$. But $a=1$ may be eliminated by a computation. Suppose next that $x_1, x_2$ are conjugate and of degree $2$ and $x_3 \in \mathbb{Q}$. 
This case may be dealt with as in Corollary~\ref{cor:diff}. We assume that $x_1$ is dominant and use the inequalities \[0.9 \lvert x_1 \rvert \leq \lvert x_3 \rvert \leq \lvert x_1 \rvert + \lvert x_2 \rvert. \] Suppose finally that $x_1, x_2, x_3$ are conjugate and of degree $3$. We may assume that $x_1$ is dominant, so $x_2, x_3$ are not dominant. Then \begin{align*} 0=\lvert x_1^a + x_2^a + x_3^a \rvert &\geq \lvert x_1 \rvert^a - \lvert x_2 \rvert^a - \lvert x_3 \rvert^a\\ &\geq \lvert x_1 \rvert^a - (0.1 \lvert x_1 \rvert)^a - (0.1 \lvert x_1 \rvert)^a\\ &\geq (1- 2 \times 0.1^a) \lvert x_1 \rvert^a\\ &>0, \end{align*} which is a contradiction. The proof is thus complete. \end{proof} Corollary~\ref{cor:sum} follows immediately from Corollaries~\ref{cor:diff} and \ref{cor:vanish}. \section{Powers of a product of three singular moduli}\label{sec:prod} Now we come to the proof of Theorem~\ref{thm:prod}. Again the ``if'' direction is immediate, so we just need to prove the ``only if''. Suppose that $x_1, x_2, x_3$ are singular moduli of respective discriminants $\Delta_1, \Delta_2, \Delta_3$ such that \[(x_1 x_2 x_3)^a = A\] for some $a \in \mathbb{Z} \setminus \{0\}$ and $A \in \mathbb{Q}^{\times}$. Write $h_i$ for the respective class numbers $h(\Delta_i)$. Clearly, we may assume that $a \geq 1$. We will show that one of cases (1)--(3) of Theorem~\ref{thm:prod} must hold. \subsection{The trivial cases}\label{subsec:trivial} If $x_1 = x_2 = x_3$, then we must have that $x_1, x_2, x_3 \in \mathbb{Q}^{\times}$ by Proposition~\ref{prop:fieldpower}. If the set $\{x_1, x_2, x_3\}$ has cardinality $2$, then we are done by Theorem~\ref{thm:prod2}. So we may and do assume that $x_1, x_2, x_3$ are pairwise distinct. If some $x_i \in \mathbb{Q}$, then again the result follows from Theorem~\ref{thm:prod2}. So we assume from now on that $x_1, x_2, x_3 \notin \mathbb{Q}$. In particular, $h_1, h_2, h_3 \geq 2$. 
If $\Delta_1 = \Delta_2 = \Delta_3$, then we must have that $h_1 = h_2 = h_3 \geq 3$, since $x_1, x_2, x_3$ are all distinct. If $\Delta_1 = \Delta_2 = \Delta_3$ and $h_1 = h_2 = h_3 = 3$, then we are in case (3) of Theorem~\ref{thm:prod}. So from now on, assume that we do not have both $\Delta_1 = \Delta_2 = \Delta_3$ and $h_1 = h_2 = h_3 = 3$. \subsection{Bounding the non-trivial cases effectively}\label{subsec:bdnontriv} We are not then in any of cases (1)--(3) of Theorem~\ref{thm:prod}. We will show that this cannot happen. Our first step is to reduce all the possibilities for $(\Delta_1, \Delta_2, \Delta_3)$ to a finite list. This we do in this subsection. In the next subsection, we will explain how we can eliminate each of the cases in this list in PARI. Without loss of generality, we may assume that $h_1 \geq h_2 \geq h_3 \geq 2$. By Proposition~\ref{prop:fieldpower}, we have that \[ \mathbb{Q}(x_1) = \mathbb{Q}(x_1^{-a}) = \mathbb{Q}(x_2^a x_3^a).\] Hence, by Theorem~\ref{thm:fieldprodpower}, either $\mathbb{Q}(x_1) = \mathbb{Q}(x_2, x_3)$ or $[\mathbb{Q}(x_2, x_3) : \mathbb{Q}(x_1)] = 2$. Obviously, we obtain the same swapping the roles of $x_1, x_2, x_3$. Since $[\mathbb{Q}(x_i) : \mathbb{Q}] = h_i$, we may thus conclude that one of the following holds: \begin{enumerate} \item $h_1 = h_2 = h_3$, \item $h_1 = h_2 = 2 h_3$, \item $h_1 = 2 h_2 = 2 h_3$. \end{enumerate} We may now proceed to bound $\lvert \Delta_1 \rvert, \lvert \Delta_2 \rvert, \lvert \Delta_3 \rvert$ exactly as in \cite{Fowler20}. In particular, we obtain the same bounds as there thanks to the elementary fact that, for $s, t\geq 0$, one has that $s^a > t^a$ if and only if $s > t$. We thus reduce to the same possibilities for $(\Delta_1, \Delta_2, \Delta_3)$ as in \cite{Fowler20}, namely those recorded in the list at the beginning of \S4 of \cite{Fowler20}. 
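The PARI eliminations below ultimately rest on evaluating singular moduli to sufficient numerical precision. As an illustration, the following minimal Python sketch (the function name and truncation level are ours) computes $j(\tau)$ from the $q$-expansions of $E_4$ and the modular discriminant; it recovers the rational singular modulus $j((1+\sqrt{-7})/2) = -3375$, whose absolute value is indeed within $2079$ of $e^{\pi \sqrt{7}}$, in line with the size estimates used throughout:

```python
import cmath, math

def j_invariant(tau, terms=60):
    """Klein j-function, computed as E4(tau)^3 / Delta(tau) from q-expansions.
    The series converge rapidly when Im(tau) >= sqrt(3)/2."""
    q = cmath.exp(2j * cmath.pi * tau)
    # Eisenstein series E4 = 1 + 240 * sum_{n >= 1} n^3 q^n / (1 - q^n)
    e4 = 1 + 240 * sum(n ** 3 * q ** n / (1 - q ** n) for n in range(1, terms))
    # Modular discriminant Delta = q * prod_{n >= 1} (1 - q^n)^24
    delta = q
    for n in range(1, terms):
        delta *= (1 - q ** n) ** 24
    return e4 ** 3 / delta

x = j_invariant((1 + cmath.sqrt(-7)) / 2)   # singular modulus of discriminant -7
print(round(x.real))                        # -3375
print(abs(abs(x) - math.exp(math.pi * math.sqrt(7))) < 2079)  # True
```

Production verifications would instead use PARI's built-in facilities at controlled precision; the sketch only illustrates that such checks reduce to rapidly convergent series.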
\subsection{Eliminating the non-trivial cases}\label{subsec:elimnontriv} For a triple $(\Delta_1, \Delta_2, \Delta_3)$ belonging to this list, we may eliminate it using a PARI script in the following way. Suppose that $x_1, x_2, x_3$ are pairwise distinct, non-zero singular moduli of discriminants $\Delta_1, \Delta_2, \Delta_3$ such that $(x_1 x_2 x_3)^a = A$ for some $a \in \mathbb{Z}_{>0}$ and $A \in \mathbb{Q}^\times$. Let $L$ be a Galois extension of $\mathbb{Q}$ containing $x_1, x_2, x_3$. Then, for every $\sigma \in \mathrm{Gal}(L / \mathbb{Q})$, we have that \[(\sigma(x_1) \sigma(x_2) \sigma(x_3))^a = A,\] and thus \[ \frac{x_1 x_2 x_3}{\sigma(x_1) \sigma(x_2) \sigma(x_3)}\] is a root of unity in $L$. Therefore, if we can find an automorphism $\sigma \in \mathrm{Gal}(L / \mathbb{Q})$ such that \[ \frac{x_1 x_2 x_3}{\sigma(x_1) \sigma(x_2) \sigma(x_3)}\] is not among the roots of unity in $L$ for each possible choice of $x_1, x_2, x_3$, then we may eliminate the triple $(\Delta_1, \Delta_2, \Delta_3)$. Using PARI, we eliminate in this way all the triples $(\Delta_1, \Delta_2, \Delta_3)$ belonging to the list from Subsection~\ref{subsec:bdnontriv}, and thereby complete the proof of Theorem~\ref{thm:prod}. \begin{remark}\label{rmk:multfmt} The obvious multiplicative analogue of the results in Section~\ref{sec:fermat} would concern solutions in singular moduli to the equation $(x_1 x_2 x_3)^a =1$, where $a \in \mathbb{Z} \setminus \{0\}$. One may easily show, using Theorems~\ref{thm:prod} and \ref{thm:prod2} together with some easy computations, that there are no such solutions. However, the non-existence of such solutions is also an immediate consequence of Bilu, Habegger, and K\"uhne's result \cite{BiluHabeggerKuhne18} that no singular modulus is an algebraic unit. 
Indeed, the result of \cite{BiluHabeggerKuhne18} implies that there are no solutions in singular moduli to the equation $(\prod_{i=1}^n x_i)^a = 1$ for any $n \in \mathbb{Z}_{>0}$ and $a \in \mathbb{Z} \setminus \{0\}$. \end{remark} \section{The proof of Theorem~\ref{thm:quot}}\label{sec:quot} Now suppose that $x_1, x_2, x_3$ are singular moduli of respective discriminants $\Delta_1, \Delta_2, \Delta_3$ such that \[(x_1^{\epsilon_1} x_2^{\epsilon_2} x_3^{\epsilon_3})^a = A\] for some $a \in \mathbb{Z} \setminus \{0\}$, $A \in \mathbb{Q}^\times$, and $\epsilon_i \in \{\pm 1\}$. As usual, write $h_i = h(\Delta_i)$. Clearly, we may assume that $a \geq 1$. If $\epsilon_1 = \epsilon_2 = \epsilon_3$, then the desired result is just Theorem~\ref{thm:prod}. So we may assume that the $\epsilon_i$ are not all equal. If the $x_i$ are not pairwise distinct, then the result follows easily from Proposition~\ref{prop:fieldpower} and Theorem~\ref{thm:prod2}. So we may suppose that $x_1, x_2, x_3$ are pairwise distinct. In addition, if some $x_i \in \mathbb{Q}$, then the result also follows immediately from Theorem~\ref{thm:prod2}. So we assume also that no $x_i$ is rational. In particular, $h_1, h_2, h_3 \geq 2$. We are thus not in any of the trivial cases (1)--(4) of Theorem~\ref{thm:quot}. We will show this leads to a contradiction, apart from in the one exceptional case (5). Without loss of generality, we may assume that $\epsilon_1 = \epsilon_2 = - \epsilon_3 = 1$, so that \[\Big(\frac{x_1 x_2}{x_3}\Big)^a = A.\] By Proposition~\ref{prop:fieldpower} and Theorem~\ref{thm:fieldprodpower}, we have that \[ \mathbb{Q}(x_1) = \mathbb{Q}(x_1^a) = \mathbb{Q}\Big(\frac{x_2^a}{x_3^a}\Big) = \mathbb{Q}(x_2, x_3)\] and \[ \mathbb{Q}(x_2) = \mathbb{Q}(x_2^a) = \mathbb{Q}\Big(\frac{x_1^a}{x_3^a}\Big) = \mathbb{Q}(x_1, x_3).\] Thus, $\mathbb{Q}(x_1) = \mathbb{Q}(x_2) \supset \mathbb{Q}(x_3)$. 
Also, the field $\mathbb{Q}(x_3) =\mathbb{Q}(x_3^a) = \mathbb{Q}(x_1^a x_2^a)$ is equal to the field $\mathbb{Q}(x_1, x_2)$ if $\Delta_1 \neq \Delta_2$ and satisfies $[ \mathbb{Q}(x_1, x_2) : \mathbb{Q}(x_3)] \leq 2$ if $\Delta_1 = \Delta_2$. We now distinguish two cases: $\mathbb{Q}(x_1) = \mathbb{Q}(x_2) = \mathbb{Q}(x_3)$ and $\mathbb{Q}(x_1) = \mathbb{Q}(x_2) \supsetneq \mathbb{Q}(x_3)$. In our proof, we repeatedly use the following fact. Let $L$ be a Galois extension containing $x_1, x_2, x_3$. Then, for any $\sigma \in \mathrm{Gal}(L/\mathbb{Q})$, we have that \[ \Big ( \frac{\sigma(x_1) \sigma(x_2)}{\sigma(x_3)} \Big )^a = A\] and hence \[ \Big\lvert \frac{\sigma(x_1) \sigma(x_2)}{\sigma(x_3)} \Big\rvert = \Big\lvert \frac{x_1 x_2}{x_3} \Big\rvert = \lvert A \rvert^{1/a}.\] In Subsections~\ref{subsecA} and \ref{subsecB}, we will show that either we are in case (5) of Theorem~\ref{thm:quot}, or the triple $(\Delta_1, \Delta_2, \Delta_3)$ belongs to a finite list of possible exceptions, which we may find effectively. In Subsection~\ref{subsecC}, we will then show that in fact none of these finitely many possible exceptions can occur. This will complete the proof of Theorem~\ref{thm:quot}. \subsection{The case where $\mathbb{Q}(x_1) = \mathbb{Q}(x_2) = \mathbb{Q}(x_3)$.}\label{subsecA} \subsubsection{The subcase where $\Delta_1 = \Delta_2 = \Delta_3$.} Write $\Delta = \Delta_1 = \Delta_2 = \Delta_3$ and $h = h_1 = h_2 = h_3$. Note that $h \geq 3$ since the $x_i$ are all distinct. Taking conjugates, we assume that $x_1$ is dominant, and thus obtain, using \eqref{eq:ineq1} and Lemma~\ref{lem:lower}, the lower bound \[\lvert A \rvert^{1/a} \geq \frac{(e^{\pi \lvert \Delta \rvert^{1/2}}-2079)\min \{4.4 \times 10^{-5}, 3500 \lvert \Delta \rvert^{-3}\}}{e^{\pi \lvert \Delta \rvert^{1/2}/2}+2079}.\] Conjugating again, we assume that $x_3$ is dominant. 
We obtain the upper bound \[\lvert A \rvert^{1/a} \leq \frac{(e^{\pi \lvert \Delta \rvert^{1/2}/2}+2079)^2}{e^{\pi \lvert \Delta \rvert^{1/2}}-2079}.\] These two bounds are incompatible when $\lvert \Delta \rvert \geq 43$. \subsubsection{The subcase where $\mathbb{Q}(\sqrt{\Delta_i}) \neq \mathbb{Q}(\sqrt{\Delta_j})$ for some $i, j$.} Then Lemma~\ref{lem:samefield} implies that $\Delta_1, \Delta_2, \Delta_3$ are all listed in \cite[Table~2]{AllombertBiluMadariaga15}. \subsubsection{The subcase where $\Delta_1, \Delta_2, \Delta_3$ are not all equal, but $\mathbb{Q}(\sqrt{\Delta_1})= \mathbb{Q}(\sqrt{\Delta_2})= \mathbb{Q}(\sqrt{\Delta_3})$.}\label{ss} Then Lemma~\ref{lem:samefield} implies that $\Delta_i / \Delta_j \in \{1/4, 1, 4\}$ for all $i, j$. Without loss of generality, assume that $\lvert \Delta_1 \rvert \geq \lvert \Delta_2 \rvert$. Then one of the following must hold: \begin{enumerate} \item $\Delta_1 = \Delta_2 = 4 \Delta_3$, \item $4 \Delta_1 = 4 \Delta_2 = \Delta_3$, \item $\Delta_1 = 4 \Delta_2 = \Delta_3$, \item $\Delta_1 = 4 \Delta_2 = 4 \Delta_3$. \end{enumerate} First, suppose that $\Delta_1 = \Delta_2 = 4 \Delta_3$. Write $\Delta = \Delta_3$. Then Lemma~\ref{lem:samefield} implies also that $\Delta \equiv 1 \bmod 8$. Therefore, by Lemma~\ref{lem:dom}, there are no subdominant singular moduli of discriminant $4 \Delta$ and there are precisely two subdominant singular moduli of discriminant $\Delta$, provided $\Delta \notin \{-7, -15\}$. If $\Delta = -7$, then $x_3 \in \mathbb{Q}$, which is ruled out by assumption. Suppose that $\Delta = -15$. Then $h(4 \Delta) = h(\Delta) = 2$. So $x_1, x_2$ are the two roots of $H_{4 \Delta} \in \mathbb{Z}[z]$, and hence $x_1 x_2 \in \mathbb{Q}$. Thus $x_3^a \in \mathbb{Q}$, and so $x_3 \in \mathbb{Q}$ by Proposition~\ref{prop:fieldpower}. So we may assume that $\Delta \notin \{-7, -15\}$. 
Since $\mathbb{Q}(x_1) = \mathbb{Q}(x_2) = \mathbb{Q}(x_3)$, each conjugate of $x_i$ occurs precisely once as the $i$th coordinate of a conjugate $(x_1', x_2', x_3')$ of $(x_1, x_2, x_3)$. In particular, for each $i$, there is exactly one conjugate $(x_1', x_2', x_3')$ with $x_i'$ dominant. There is therefore a conjugate $(x_1', x_2', x_3')$ with $x_3'$ not dominant and one of $x_1', x_2'$ dominant. This gives the lower bound \[ \lvert A \rvert^{1/a} \geq \frac{(e^{2 \pi \lvert \Delta \rvert^{1/2}} - 2079)\min \{4.4 \times 10^{-5}, 3500 \times 4^{-3} \lvert \Delta \rvert^{-3}\}}{e^{\pi \lvert \Delta \rvert^{1/2}/2}+2079}.\] Suppose for now that the conjugate $(x_1'', x_2'', x_3'')$ with $x_3''$ dominant has one of $x_1'', x_2''$ dominant. Since there are two subdominant singular moduli of discriminant $\Delta$, there is then another conjugate $(x_1''', x_2''', x_3''')$ with $x_3'''$ subdominant and neither $x_1''', x_2'''$ dominant. This conjugate $(x_1''', x_2''', x_3''')$ gives the upper bound \[ \lvert A \rvert^{1/a} \leq \frac{(e^{2 \pi \lvert \Delta \rvert^{1/2}/3} +2079)^2}{e^{\pi \lvert \Delta \rvert^{1/2}/2} - 2079}.\] These two bounds are incompatible whenever $\lvert \Delta \rvert \geq 29$. Now suppose that the conjugate $(x_1'', x_2'', x_3'')$ with $x_3''$ dominant has neither of $x_1'', x_2''$ dominant. Then this conjugate gives the upper bound \[ \lvert A \rvert^{1/a} \leq \frac{(e^{2 \pi \lvert \Delta \rvert^{1/2}/3} +2079)^2}{e^{\pi \lvert \Delta \rvert^{1/2}} - 2079}.\] This is incompatible with the previous lower bound if $\lvert \Delta \rvert \geq 14$. Thus, in either case, we must have that $\lvert \Delta \rvert \leq 28$. Next suppose that $\Delta_3 = 4 \Delta_1 = 4 \Delta_2$. Write $\Delta = \Delta_1 = \Delta_2$. By Lemma~\ref{lem:samefield}, $\Delta \equiv 1 \bmod 8$. Thus, by Lemma~\ref{lem:dom}, there are no subdominant singular moduli of discriminant $4 \Delta$. Taking conjugates, we may assume that $x_3$ is dominant. 
Thus, \[ \lvert A \rvert^{1/a} \leq \frac{(e^{\pi \lvert \Delta \rvert^{1/2}}+2079)(e^{\pi \lvert \Delta \rvert^{1/2}/2}+2079)}{e^{2 \pi \lvert \Delta \rvert^{1/2}} - 2079}.\] There is also a conjugate $(x_1', x_2', x_3')$ with $x_3'$ not dominant and one of $x_1', x_2'$ dominant. We obtain that \[ \lvert A \rvert^{1/a} \geq \frac{(e^{\pi \lvert \Delta \rvert^{1/2}}-2079)\min \{4.4 \times 10^{-5}, 3500 \lvert \Delta \rvert^{-3}\}}{e^{2 \pi \lvert \Delta \rvert^{1/2}/3} + 2079}.\] The two bounds are incompatible whenever $\lvert \Delta \rvert \geq 19$. Now suppose that $\Delta_1 = 4 \Delta_2 = \Delta_3$. Write $\Delta = \Delta_2$. By Lemma~\ref{lem:samefield}, $\Delta \equiv 1 \bmod 8$ and so, by Lemma~\ref{lem:dom}, there are no subdominant singular moduli of discriminant $4 \Delta$. Assuming that $x_1$ is dominant (and so $x_3$ is neither dominant nor subdominant), we have that \[\lvert A \rvert^{1/a} \geq \frac{(e^{2 \pi \lvert \Delta \rvert^{1/2}}-2079)\min \{4.4 \times 10^{-5}, 3500 \lvert \Delta \rvert^{-3}\}}{e^{2 \pi \lvert \Delta \rvert^{1/2}/3} + 2079}.\] Let $(x_1', x_2', x_3')$ be the conjugate such that $x_3'$ is dominant (and so $x_1'$ is neither dominant nor subdominant). Then we have that \[ \lvert A \rvert^{1/a} \leq \frac{(e^{2 \pi \lvert \Delta \rvert^{1/2}/3}+2079)(e^{\pi \lvert \Delta \rvert^{1/2}}+2079)}{e^{2 \pi \lvert \Delta \rvert^{1/2}} - 2079}. \] These bounds are incompatible whenever $\lvert \Delta \rvert \geq 8$. Finally, suppose that $\Delta_1 = 4 \Delta_2 = 4 \Delta_3$. Write $\Delta = \Delta_2 = \Delta_3$. By Lemma~\ref{lem:samefield}, $\Delta \equiv 1 \bmod 8$. Since $x_2 \notin \mathbb{Q}$, we have that $\Delta \neq -7$. Suppose that $\Delta = -15$. Then $h(4 \Delta) = h(\Delta) = 2$. The unique non-trivial Galois conjugate of $(x_1, x_2, x_3)$ is $(x_1', x_3, x_2)$, where $x_1'$ is the unique non-trivial conjugate of $x_1$. We thus have that $x_1^a = (x_1')^a$, which contradicts Proposition~\ref{prop:dom}. So $\Delta \neq -15$. 
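Claims of the form ``these bounds are incompatible whenever $\lvert \Delta \rvert \geq \dots$'' are elementary numerical checks. As an illustration (our own, not the paper's script), for the pair of bounds displayed above in the subcase $\Delta_1 = \Delta_2 = 4\Delta$ with threshold $29$:

```python
# Lower and upper bounds for |A|^{1/a} from the subcase D1 = D2 = 4*D
# (d stands for |Delta|); illustration of the incompatibility check only.
import math

def lower_bound(d):
    # conjugate with x3' not dominant and one of x1', x2' dominant
    num = (math.exp(2 * math.pi * math.sqrt(d)) - 2079) \
        * min(4.4e-5, 3500 * 4 ** -3 * d ** -3)
    return num / (math.exp(math.pi * math.sqrt(d) / 2) + 2079)

def upper_bound(d):
    # conjugate with x3''' subdominant, neither x1''' nor x2''' dominant
    num = (math.exp(2 * math.pi * math.sqrt(d) / 3) + 2079) ** 2
    return num / (math.exp(math.pi * math.sqrt(d) / 2) - 2079)
```

One checks that the lower bound exceeds the upper bound at $\lvert \Delta \rvert = 29$ but not yet at $28$; since the ratio of the two grows with $\lvert \Delta \rvert$, the incompatibility persists for all $\lvert \Delta \rvert \geq 29$.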
So, by Lemma~\ref{lem:dom}, there are no subdominant singular moduli of discriminant $4 \Delta$, and there are precisely two subdominant singular moduli of discriminant $\Delta$. We may assume that $(x_1, x_2, x_3)$ has $x_3$ dominant (and so $x_2$ not dominant). Suppose for now that $x_1$ is not dominant. Then we have the upper bound \[ \lvert A \rvert^{1/a} \leq \frac{(e^{2 \pi \lvert \Delta \rvert^{1/2}/3}+2079)(e^{\pi \lvert \Delta \rvert^{1/2}/2}+2079)}{e^{\pi \lvert \Delta \rvert^{1/2}} - 2079}.\] And taking the conjugate $(x_1', x_2', x_3')$ with $x_1'$ dominant (and hence $x_3'$ not dominant since $(x_1', x_2', x_3') \neq (x_1, x_2, x_3)$), we have that \[ \lvert A \rvert^{1/a} \geq \frac{(e^{2 \pi \lvert \Delta \rvert^{1/2}}-2079)\min \{4.4 \times 10^{-5}, 3500 \lvert \Delta \rvert^{-3}\}}{e^{ \pi \lvert \Delta \rvert^{1/2}/2} + 2079}.\] These are incompatible when $\lvert \Delta \rvert \geq 13$. Now suppose that $x_1$ is dominant. Then $(x_1, x_2, x_3)$ has both $x_1, x_3$ dominant and gives the lower bound \[ \lvert A \rvert^{1/a} \geq \frac{(e^{2 \pi \lvert \Delta \rvert^{1/2}}-2079)\min \{4.4 \times 10^{-5}, 3500 \lvert \Delta \rvert^{-3}\}}{e^{ \pi \lvert \Delta \rvert^{1/2}} + 2079}.\] In this case, there exists a conjugate $(x_1'', x_2'', x_3'')$ with $x_3''$ subdominant, $x_1''$ not dominant, and $x_2''$ not dominant. Such a conjugate exists since there are two subdominant singular moduli of discriminant $\Delta$, so there are two conjugates of $(x_1, x_2, x_3)$ which have their third coordinate subdominant; neither of these has its first coordinate dominant, and at most one of them has its second coordinate dominant. This conjugate $(x_1'', x_2'', x_3'')$ gives rise to the bound \[\lvert A \rvert^{1/a} \leq \frac{(e^{2 \pi \lvert \Delta \rvert^{1/2}/3}+2079)(e^{\pi \lvert \Delta \rvert^{1/2}/2}+2079)}{e^{\pi \lvert \Delta \rvert^{1/2}/2} - 2079}.\] These two bounds are incompatible when $\lvert \Delta \rvert \geq 92$. 
Thus, in either case, we obtain that $\lvert \Delta \rvert \leq 91$. \subsection{The case where $\mathbb{Q}(x_1) = \mathbb{Q}(x_2) \supsetneq \mathbb{Q}(x_3)$.}\label{subsecB} By the discussion at the start of the proof, we must have that $\Delta_1 = \Delta_2$ in this case. \subsubsection{The subcase where $\mathbb{Q}(\sqrt{\Delta_1}) \neq \mathbb{Q}(\sqrt{\Delta_3})$.} Suppose that $\mathbb{Q}(\sqrt{\Delta_1}) \neq \mathbb{Q}(\sqrt{\Delta_3})$. Then, by Lemma~\ref{lem:subfielddiffund}, either at least one of $\Delta_1, \Delta_3$ is listed in \cite[Table~1]{AllombertBiluMadariaga15} or some $h_i \geq 128$ and the corresponding field $\mathbb{Q}(x_i)$ is Galois. In the first case, we may find all the possible $(\Delta_1, \Delta_2, \Delta_3)$ using a PARI script. So suppose that neither of $\Delta_1, \Delta_3$ is listed. If $\mathbb{Q}(x_1)$ is Galois, then $\mathbb{Q}(x_3)$ is also Galois, and hence at least one of $\Delta_1, \Delta_3$ is listed in \cite[Table~1]{AllombertBiluMadariaga15} by \cite[Corollaries~2.2 \& 3.3]{AllombertBiluMadariaga15} since $\mathbb{Q}(\sqrt{\Delta_1}) \neq \mathbb{Q}(\sqrt{\Delta_3})$. We are thus actually in the first case again. So we may assume that $\mathbb{Q}(x_1)$ is not Galois, and hence $\mathbb{Q}(x_3)$ is Galois and $h(\Delta_3) \geq 128$. By Lemma~\ref{lem:tat}, we thus have that $\mathbb{Q}(\sqrt{\Delta_3})= K_*$, where $K_*$ is the exceptional field defined in Definition~\ref{def:tat}. We are thus in the exceptional case (5) of Theorem~\ref{thm:quot}. \subsubsection{The subcase where $\mathbb{Q}(\sqrt{\Delta_1}) = \mathbb{Q}(\sqrt{\Delta_3})$.} Suppose that $\mathbb{Q}(\sqrt{\Delta_1}) = \mathbb{Q}(\sqrt{\Delta_3})$. Then, by Lemma~\ref{lem:subfieldsamefund}, we have that $\Delta_1 \in \{9 \Delta_3 / 4, 4 \Delta_3, 9 \Delta_3, 16 \Delta_3\}$. Write $\Delta = \Delta_3$. We will consider each of these four cases in turn. Our approach, which we now explain, will be the same for all of them. 
We may assume that $x_1$ is dominant, in order to obtain a lower bound for $\lvert A \rvert^{1/a}$. We will then show that if $h_3$ (equivalently, $h_1$) is sufficiently large, then there exists a conjugate $(x_1', x_2', x_3')$ of $(x_1, x_2, x_3)$ which gives rise to an upper bound for $\lvert A \rvert^{1/a}$ that is incompatible with the obtained lower bound for large enough values of $\lvert \Delta \rvert$. Thus, either $h_1$ is bounded, or $\lvert \Delta \rvert$ is bounded. In particular, there are only finitely many possibilities for $\Delta$, and we may find these. In Subsection~\ref{subsecC}, we will eliminate each of these finitely many possibilities for $\Delta$ using a PARI script. The time taken to eliminate a possible value of $\Delta$ increases rapidly with $h_3 =h(\Delta)$. It is therefore computationally efficient to reduce the number of possible $\Delta$ with large class number as much as possible. For this reason, in each case we will obtain not one, but several, upper bounds for $\lvert A \rvert^{1/a}$, each of which is valid for a different range of $h_3$. Crucially, the implied bound on $\lvert \Delta \rvert$ becomes much sharper as $h_3$ increases. This allows us to reduce considerably the number of possible $\Delta$ with large class number, and so significantly reduces the computational task which faces us in Subsection~\ref{subsecC}. In particular, the final bound on $\lvert \Delta \rvert$ which we obtain in each case will be sufficiently sharp to rule out there being any $\Delta$ with a class number so large as to be computationally infeasible to handle. Now we begin with the four cases themselves. First, suppose that $\Delta_1 = \Delta_2 = 9 \Delta / 4$. 
Assuming that $x_1$ is dominant, we have that \[\lvert A \rvert^{1/a} \geq \frac{(e^{3 \pi \lvert \Delta \rvert^{1/2}/2}-2079)\min \{4.4 \times 10^{-5}, 3500 \times (\frac{9}{4})^{-3}\lvert \Delta \rvert^{-3}\}}{e^{ \pi \lvert \Delta \rvert^{1/2}} + 2079}.\] Let $k, m_1, m_2$ be as given in Table~\ref{tbl:9/4}. If $h_1 \geq k$, then we may, by Lemma~\ref{lem:dom}, find a conjugate $(x_1', x_2', x_3')$ where the associated triples $(a_i', b_i', c_i') \in T_{\Delta_i}$ satisfy $a_1' \geq m_1$ and $a_2' \geq m_2$. This conjugate gives rise to the upper bound \[\lvert A \rvert^{1/a} \leq \frac{(e^{3 \pi \lvert \Delta \rvert^{1/2}/2 m_1}+2079)(e^{3 \pi \lvert \Delta \rvert^{1/2}/2 m_2}+2079)}{\min \{4.4 \times 10^{-5}, 3500 \lvert \Delta \rvert^{-3}\}}.\] These upper bounds are incompatible with the above lower bound for sufficiently large values of $\lvert \Delta \rvert$ (which are also recorded in Table~\ref{tbl:9/4}). Observe that the larger $h_1$ is, the sharper the resulting upper bound on $\lvert \Delta \rvert$. We thus obtain that either $h_1 \leq 28$ (and so $h_3 \leq 14$) or $\lvert \Delta \rvert$ is bounded as in Table~\ref{tbl:9/4}. \begin{table} \caption{Conjugates when $\Delta_1 = \Delta_2 = 9 \Delta /4$.} \begin{tabular}{ c | c | c | c } $k$ & $m_1$ & $m_2$ & $\lvert \Delta \rvert \leq$ \\ \hline $29$ & $7$ & $8$ & $22443$\\ $31$ & $8$ & $8$ & $11576$\\ $35$ & $8$ & $9$ & $7484$\\ $39$ & $9$ & $9$ & $5076$\\ $42$ & $9$ & $10$ & $3820$\\ $45$ & $10$ & $10$ & $2929$\\ $49$ & $10$ & $11$ & $2384$\\ $53$ & $11$ & $11$ & $1957$\\ $55$ & $11$ & $12$ & $1669$\\ $57$ & $12$ & $12$ & $1430$\\ \end{tabular} \label{tbl:9/4} \end{table} Second, suppose that $\Delta_1 = \Delta_2 = 4 \Delta$. 
As in \cite[\S3.2.2]{BiluLucaMadariaga16}, we note that, for $m \in \mathbb{Z}_{>0}$, the class number formula \cite[Corollary~7.28]{Cox89} implies that \[h(m^2 \Delta) = m \prod_{p \mid m} \Big(1 - \frac{1}{p}\Big(\frac{\Delta}{p}\Big)\Big) h (\Delta),\] where $(\Delta / \cdot)$ is the Kronecker symbol. Since $h_1 = 2 h_3$, we thus obtain that $(\Delta/2)=0$. In particular, $4 \Delta \equiv 0 \bmod 16$, and so there are no subdominant singular moduli of discriminant $4 \Delta$ by Lemma~\ref{lem:dom}. Assuming $x_1$ is dominant, we have that \[\lvert A \rvert^{1/a} \geq \frac{(e^{2 \pi \lvert \Delta \rvert^{1/2}}-2079)\min \{4.4 \times 10^{-5}, 3500 \times 4^{-3}\lvert \Delta \rvert^{-3}\}}{e^{ \pi \lvert \Delta \rvert^{1/2}} + 2079}.\] Now let $k, m_1, m_2$ be as given in Table~\ref{tbl:4}. As before, provided that $h_1 \geq k$, we may find a conjugate $(x_1', x_2', x_3')$, where $x_i'$ has associated $(a_i', b_i', c_i') \in T_{\Delta_i}$, such that $a_1' \geq m_1$ and $a_2' \geq m_2$. Such a conjugate gives rise to the upper bound \[\lvert A \rvert^{1/a} \leq \frac{(e^{2 \pi \lvert \Delta \rvert^{1/2}/m_1 }+2079)(e^{2 \pi \lvert \Delta \rvert^{1/2}/ m_2}+2079)}{\min \{4.4 \times 10^{-5}, 3500 \lvert \Delta \rvert^{-3}\}}.\] We thus obtain that either $h_1 \leq 10$ (and so $h_3 \leq 5$) or $\lvert \Delta \rvert$ is bounded as in Table~\ref{tbl:4}. \begin{table} \caption{Conjugates when $\Delta_1 = \Delta_2 = 4 \Delta$.} \begin{tabular}{ c | c | c | c } $k$ & $m_1$ & $m_2$ & $\lvert \Delta \rvert \leq$ \\ \hline $11$ & $5$ & $5$ & $3397$\\ $13$ & $5$ & $6$ & $1393$\\ $15$ & $6$ & $6$ & $650$\\ $19$ & $6$ & $7$ & $403$\\ $23$ & $7$ & $7$ & $293$\\ $25$ & $7$ & $8$ & $236$\\ $27$ & $8$ & $8$ & $194$\\ \end{tabular} \label{tbl:4} \end{table} Third, suppose that $\Delta_1 = \Delta_2 = 9 \Delta$. 
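As an aside before continuing: the specialization of the class number formula quoted above is simple to evaluate exactly. A sketch (our own illustration, valid for fundamental $\Delta < -4$):

```python
# Ratio h(m^2*D) / h(D) = m * prod_{p | m} (1 - (D/p)/p) from the class
# number formula, with (D/.) the Kronecker symbol; our illustration,
# assuming D < -4 fundamental (so no unit-index correction is needed).
from fractions import Fraction

def kronecker(D, p):
    """Kronecker symbol (D/p) for a prime p."""
    if p == 2:
        if D % 2 == 0:
            return 0
        return 1 if D % 8 in (1, 7) else -1
    if D % p == 0:
        return 0
    return 1 if pow(D % p, (p - 1) // 2, p) == 1 else -1

def h_ratio(D, m):
    """h(m^2*D)/h(D) as an exact rational, via the class number formula."""
    ratio, rest, p = Fraction(m), m, 2
    while rest > 1:
        if rest % p == 0:
            ratio *= 1 - Fraction(kronecker(D, p), p)
            while rest % p == 0:
                rest //= p
        p += 1
    return ratio
```

For $\Delta \equiv 1 \bmod 8$ one gets $h(4\Delta) = h(\Delta)$ and $h(16\Delta) = 2\,h(\Delta)$, consistent with the identities used in this subsection.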
Assuming that $x_1$ is dominant, we have that \[\lvert A \rvert^{1/a} \geq \frac{(e^{3 \pi \lvert \Delta \rvert^{1/2}}-2079)\min \{4.4 \times 10^{-5}, 3500 \times 9^{-3}\lvert \Delta \rvert^{-3}\}}{e^{ \pi \lvert \Delta \rvert^{1/2}} + 2079}.\] Let $k, m_1, m_2$ be as given in Table~\ref{tbl:9}. Provided $h_1 \geq k$, we may as usual find a conjugate $(x_1', x_2', x_3')$, where $x_i'$ has associated $(a_i', b_i', c_i') \in T_{\Delta_i}$ with $a_1' \geq m_1$ and $a_2' \geq m_2$. This conjugate gives rise to the upper bound \[\lvert A \rvert^{1/a} \leq \frac{(e^{3 \pi \lvert \Delta \rvert^{1/2}/m_1 }+2079)(e^{3 \pi \lvert \Delta \rvert^{1/2}/ m_2}+2079)}{\min \{4.4 \times 10^{-5}, 3500 \lvert \Delta \rvert^{-3}\}}.\] We thus obtain that either $h_1 \leq 8$ (and so $h_3 \leq 4$) or $\lvert \Delta \rvert$ is bounded as in Table~\ref{tbl:9}. \begin{table} \caption{Conjugates when $\Delta_1 = \Delta_2 = 9 \Delta $.} \begin{tabular}{ c | c | c | c } $k$ & $m_1$ & $m_2$ & $\lvert \Delta \rvert \leq$ \\ \hline $9$ & $3$ & $4$ & $2131$\\ $11$ & $4$ & $4$ & $255$\\ $13$ & $4$ & $5$ & $126$\\ $15$ & $5$ & $5$ & $71$\\ \end{tabular} \label{tbl:9} \end{table} Finally, suppose that $\Delta_1 = \Delta_2 = 16 \Delta$. In this case, the class number formula implies that \[h(16 \Delta) = 4 \Big(1- \frac{1}{2}\Big(\frac{\Delta}{2}\Big)\Big)h(\Delta).\] Since $h_1 = 2 h_3$, we obtain that $(\Delta/2)=1$ and so $\Delta \equiv 1 \bmod 8$. In particular, $16 \Delta \equiv 0 \bmod 16$, and hence there are no subdominant singular moduli of discriminant $16 \Delta$ by Lemma~\ref{lem:dom}. Assuming that $x_1$ is dominant, we have that \[\lvert A \rvert^{1/a} \geq \frac{(e^{4 \pi \lvert \Delta \rvert^{1/2}}-2079)\min \{4.4 \times 10^{-5}, 3500 \times 16^{-3}\lvert \Delta \rvert^{-3}\}}{e^{ \pi \lvert \Delta \rvert^{1/2}} + 2079}.\] Let $k, m_1, m_2$ be as given in Table~\ref{tbl:16}. 
Provided $h_1 \geq k$, we may also find a conjugate $(x_1', x_2', x_3')$, where $x_i'$ has associated $(a_i', b_i', c_i') \in T_{\Delta_i}$ with $a_1' \geq m_1$ and $a_2' \geq m_2$. Such a conjugate gives rise to the upper bound \[\lvert A \rvert^{1/a} \leq \frac{(e^{4 \pi \lvert \Delta \rvert^{1/2}/m_1 }+2079)(e^{4 \pi \lvert \Delta \rvert^{1/2}/ m_2}+2079)}{\min \{4.4 \times 10^{-5}, 3500 \lvert \Delta \rvert^{-3}\}}.\] We thus obtain that either $h_1 \leq 4$ (and so $h_3 = 2$) or $\lvert \Delta \rvert$ is bounded as in Table~\ref{tbl:16}. \begin{table} \caption{Conjugates when $\Delta_1 = \Delta_2 = 16 \Delta $.} \begin{tabular}{ c | c | c | c } $k$ & $m_1$ & $m_2$ & $\lvert \Delta \rvert \leq$ \\ \hline $5$ & $3$ & $4$ & $143$\\ $7$ & $4$ & $4$ & $48$\\ $9$ & $4$ & $5$ & $28$ \end{tabular} \label{tbl:16} \end{table} \subsection{Eliminating the exceptional cases}\label{subsecC} If we are not in one of cases (1)--(5) of Theorem~\ref{thm:quot}, then our arguments in the previous two subsections have reduced the possibilities for $(\Delta_1, \Delta_2, \Delta_3)$ to a known finite list. We may eliminate all the entries on this list using a PARI script. The obvious modification of the algorithm described in Subsection~\ref{subsec:elimnontriv} may be used to do this. A slight technical difficulty arises in the case where $\mathbb{Q}(x_1) = \mathbb{Q}(x_2) \supsetneq \mathbb{Q}(x_3)$ and $\Delta_1 = \Delta_2 = 9 \Delta_3 /4$. Here PARI finds 30 possibilities for $(\Delta_1, \Delta_2, \Delta_3)$, of which $19$ have $h_1 \geq 30$. The run time of our usual algorithm increases with $h_1$, and it is therefore not practical to use this algorithm to check triples $(\Delta_1, \Delta_2, \Delta_3)$ with $h_1$ greater than around 30 (at least with modest computational resources). In this case, we therefore first filter these 30 triples with the following process. Fix such a triple $(\Delta_1, \Delta_2, \Delta_3)$. (Note that $\Delta_1 = \Delta_2$.) 
The singular moduli of discriminant $\Delta_1$ may be enumerated \[ \{y_1, \ldots, y_{h_3}, y_{h_3 + 1}, \ldots, y_{2 h_3}\},\] where $\lvert y_i \rvert \geq \lvert y_{i+1} \rvert$ for all $i$ (in particular, $y_1$ is dominant). We may also enumerate the singular moduli of discriminant $\Delta_3$ as \[\{w_1, \ldots, w_{h_3}\},\] where $\lvert w_i \rvert \geq \lvert w_{i+1} \rvert$ for all $i$. This is straightforward in PARI. Suppose then that $x_1, x_2, x_3$ are pairwise distinct singular moduli of respective discriminants $\Delta_1, \Delta_2, \Delta_3$. It must always be possible to find a conjugate $(\sigma(x_1), \sigma(x_2))$ of $(x_1, x_2)$ such that $(\sigma(x_1), \sigma(x_2)) = (y_i, y_j)$ with $i, j \geq h_3$ and $i \neq j$. We thus must have that \[ \lvert A \rvert^{1/a} \leq \frac{\lvert y_{h_3} \rvert \lvert y_{h_3 + 1} \rvert}{\lvert w_{h_3} \rvert}.\] We may also find a conjugate $(\sigma_0(x_1), \sigma_0(x_2))$ with $\sigma_0(x_1) = y_1$. Hence, \[ \lvert A \rvert^{1/a} \geq \frac{\lvert y_{1} \rvert \lvert y_{2 h_3} \rvert}{\lvert w_{1} \rvert}.\] We thus obtain upper and lower bounds for $\lvert A \rvert^{1/a}$. Note that we obtain the same bounds for any choice of $x_1, x_2, x_3$ as pairwise distinct singular moduli of respective discriminants $\Delta_1, \Delta_2, \Delta_3$. Thus, if these upper and lower bounds for $\lvert A \rvert^{1/a}$ are incompatible, then we may reject the triple $(\Delta_1, \Delta_2, \Delta_3)$. Implementing this approach, we are able to reject all but one of the 30 triples $(\Delta_1, \Delta_2, \Delta_3)$. The single triple $(\Delta_1, \Delta_2, \Delta_3)$ we cannot reject using this algorithm has $h_1 = 6$. This triple, though, has $h_1$ sufficiently small to be eliminated by the usual algorithm. The proof of Theorem~\ref{thm:quot} is thus completed in this way. \end{document}
\begin{document} \begin{abstract} We study ASEP in a spatially inhomogeneous environment on a torus $ \mathbb{T}^{(N)} = \mathbf{Z}/N\mathbf{Z}$ of $ N $ sites. A given inhomogeneity $ \mathbb{a} t(x)\in(0,\infty) $, $ x\in\mathbb{T} $, perturbs the overall asymmetric jumping rates $ r<\ell\in(0,1) $ at bonds, so that particles jump from site $x$ to $x+1$ with rate $r \mathbb{a} t(x)$ and from $x+1$ to $x$ with rate $\ell \mathbb{a} t(x)$ (subject to the exclusion rule in both cases). Under the limit $ N\to\infty $, we suitably tune the asymmetry $ (\ell-r) $ to zero like $N^{-\frac{1}{2}}$ and the inhomogeneity $ \mathbb{a} t $ to unity, so that the two compete on equal footing. At the level of the G\"{a}rtner (or microscopic Hopf--Cole) transform, we show convergence to a new SPDE --- the Stochastic Heat Equation with a mix of spatial and spacetime multiplicative noise. Equivalently, at the level of the height function we show convergence to the Kardar--Parisi--Zhang equation with a mix of spatial and spacetime additive noise. Our method applies to a general class of $ \mathbb{a} t(x) $, which, in particular, includes i.i.d., long-range correlated, and periodic inhomogeneities. The key technical component of our analysis consists of a host of estimates on the \emph{kernel} of the semigroup $ \mathcal{Q} (t):=e^{t \mathcal{H} } $ for a Hill-type operator $ \mathcal{H} := \frac12\partial_{xx} + \mathbb{A} lim'(x) $, and its discrete analog, where $ \mathbb{A} lim $ (and its discrete analog) is a generic H\"{o}lder continuous function. \end{abstract} \maketitle \section{Introduction} \label{intro} In this article we study the \ac{ASEP} in a spatially inhomogeneous environment where the inhomogeneity perturbs the rate of jumps across bonds, while maintaining the asymmetry (i.e., the ratio of the left and right rates across the bond). 
Quenching the inhomogeneity, we run the \ac{ASEP} and study its resulting Markov dynamics. Even without inhomogeneities, \ac{ASEP} demonstrates an interesting scaling limit to the \ac{KPZ} equation when the asymmetry is tuned weakly~\cite{bertini97}. It is ultimately interesting to determine how the inhomogeneous rates modify the dynamics of such systems, and scaling limits thereof. In this work we tune the strengths of the asymmetry and the inhomogeneity to compete on equal levels, and we find that the latter introduces a new spatial noise into the limiting equation. At the level of the G\"{a}rtner (or microscopic Hopf--Cole) transform (see~\eqref{eq:Z}), we obtain a new equation of \ac{SHE}-type, with a mix of spatial and spacetime multiplicative noise. At the level of the height function, we obtain a new equation of \ac{KPZ}-type, with a mix of spatial and spacetime additive noise. We now define the inhomogeneous \ac{ASEP}. The process runs on a discrete $N$-site torus $ \mathbb{T}^{(N)} := \mathbf{Z}/N\mathbf{Z} $ where we identify $ \mathbb{T}^{(N)} $ with $ \{0,1,\ldots,N-1\} $, and, for $ x,y\in\mathbb{T} $, understand $ x+y $ to be mod $ N $. To alleviate heavy notation, we will often omit dependence on $ N $ and write $ \mathbb{T} $ in place of $ \mathbb{T}^{(N)} $, and similarly for notation to come. For fixed homogeneous jumping rates $ r<\ell\in(0,1) $ with $ r+\ell=1 $, and for fixed inhomogeneities $ \mathbb{a} t(x)\in(0,\infty) $, $ x\in\mathbb{T} $, the inhomogeneous \ac{ASEP} consists of particles performing continuous time random walks on $ \mathbb{T} $. Jumping from $ x $ to $ x+1 $ occurs at rate $ \mathbb{a} t(x)r $, jumping from $ x+1 $ to $ x $ occurs at rate $ \mathbb{a} t(x)\ell $, and attempts to jump into occupied sites are forbidden. See Figure~\ref{fig:asepRing}. 
\begin{figure}[h] \centering \begin{subfigure}{.35\textwidth} \psfrag{L}[r]{$ \ell \mathbb{a} t(x-1) $} \psfrag{R}[c][t]{$ r \mathbb{a} t(x) $} \includegraphics[width=\textwidth]{asepRing} \caption{Inhomogeneous \ac*{ASEP}} \label{fig:asepRing} \end{subfigure} \hfil \begin{subfigure}{.45\textwidth} \includegraphics[width=\textwidth]{height} \caption{The height function} \label{fig:height} \end{subfigure} \caption{Inhomogeneous \ac*{ASEP} on $ \mathbb{T} $ and its height function. (a): The particle at $x$ jumps to $x-1$ at rate $ \mathbb{a} t(x-1) \ell$ or to $x+1$ at rate $ \mathbb{a} t(x) r$; meanwhile the particle at $1$ may not jump to the occupied site $2$. (b): The particle dynamics are coupled with a height function as shown.} \end{figure} We will focus on the height function (also known as integrated current), denoted $ h(t,x) $. To avoid technical difficulties, throughout this article we assume the particle system to be \textbf{half-filled} so that $ N $ is even, and there are exactly $ \frac{N}{2} $ particles. Under this setup, letting \begin{align*} \eta(t,x) := \left\{\begin{array}{l@{,}l} 1 &\text{ if the site } x \text{ is occupied at time } t, \\ 0 &\text{ if the site } x \text{ is empty at time } t, \end{array}\right. \end{align*} denote the occupation variables, we define the height function $ h: [0,\infty)\times\mathbb{T}\to\mathbf{R} $ at $ t=0 $ to be \begin{align*} h(0,x) &:= \sum_{0<y\leq x} \big( 2\eta(0,y) - 1 \big), \qquad x=0,1,\ldots,N-1. \end{align*} Then, for $ t \geq 0 $, each jump of a particle from $ x $ to $ x+1 $ decreases $ h(t,x) $ by $ 2 $, and each jump of a particle from $ x+1 $ to $ x $ increases $ h(t,x) $ by $ 2 $, as depicted in Figure~\ref{fig:height}. We now provide two key definitions needed to state and prove our main results. 
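Before giving those definitions, here is a minimal continuous-time simulation of the dynamics and height-function coupling just described (our own illustrative code; only the rate conventions and the coupling rule come from the text):

```python
# Gillespie-type simulation of inhomogeneous ASEP on T = Z/NZ, coupled
# with the height function h: a jump x -> x+1 (rate rt[x]*r) decreases
# h(x) by 2, a jump x+1 -> x (rate rt[x]*l) increases h(x) by 2.
import random

def simulate(N, r, l, rt, t_max, seed=0):
    rng = random.Random(seed)
    eta = [x % 2 for x in range(N)]               # half-filled start
    h = [0] * N                                   # h(0,0) = 0 (empty sum)
    for x in range(1, N):                         # h(0,x) = sum_{0<y<=x} (2*eta - 1)
        h[x] = h[x - 1] + 2 * eta[x] - 1
    t = 0.0
    while True:
        moves = []                                # allowed jumps across bond (x, x+1)
        for x in range(N):
            y = (x + 1) % N
            if eta[x] == 1 and eta[y] == 0:
                moves.append((x, -2, rt[x] * r))  # x -> x+1: h(x) -= 2
            elif eta[x] == 0 and eta[y] == 1:
                moves.append((x, +2, rt[x] * l))  # x+1 -> x: h(x) += 2
        total = sum(m[2] for m in moves)
        t += rng.expovariate(total)               # exponential holding time
        if t > t_max:
            return eta, h
        u = rng.random() * total                  # pick one move proportionally
        for x, dh, rate in moves:
            u -= rate
            if u <= 0:
                y = (x + 1) % N
                eta[x], eta[y] = eta[y], eta[x]
                h[x] += dh
                break
```

The exclusion rule preserves the particle number, and the coupling preserves the parity $h(t,x) \equiv x \bmod 2$.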
The \textbf{G\"{a}rtner (or microscopic Hopf--Cole) transform} of inhomogeneous ASEP is defined to be \begin{align} \label{eq:Z} Z(t,x) := \tau^{ \frac12h(t,x)}e^{\nu t}, \qquad \tau := r/\varepsilonll, \qquad \nu := 1-2\sqrt{r\varepsilonll}. \varepsilonnd{align} The other key definition is \textbf{weak asymmetry} scaling in which the system size $N$ controls the asymmetry and scaling of $Z$ as \begin{align} \label{eq:was} \varepsilonll = \tfrac12(1+N^{-\frac12}), \ r = \tfrac12(1-N^{-\frac12}), \qquad Z_N(t,x) := Z( t N^2,xN). \varepsilonnd{align} G\"{a}rtner \cite{gaertner87} introduced his eponymous transform in the context of \varepsilonmph{homogeneous} \ac{ASEP} (i.e., $ \mathbb{a} t(x)\varepsilonquiv 1 $), where he observed that \varepsilonqref{eq:Z} linearizes the drift of the microscopic equation, so that $ Z(t,x) $ solves a microscopic \ac{SHE}: \begin{align} \label{eq:dshe} dZ(t,x) = \sqrt{r\varepsilonll}\Delta Z(t,x) + dM(t,x), \qquad \Delta Z(t,x) := Z(t,x+1)+Z(t,x-1) - 2Z(t,x), \varepsilonnd{align} where $ M(t,x) $ is an explicit martingale in $ t $. Using G\"{a}rtner's transform as a starting point, Bertini and Giacomin \cite{bertini97} showed that a \ac{SPDE} arises under the weak asymmetry scaling. Namely, under the scaling~\varepsilonqref{eq:was}, the process $ Z_N $ converges to the solution of the \ac{SHE}: \begin{align} \label{eq:SHE} \mathbb{T}al_t \mathcal{Z} = \tfrac12\mathbb{T}al_{xx} \mathcal{Z} + \xi\mathcal{Z}, \varepsilonnd{align} where $ \mathcal{Z}=\mathcal{Z}(t,x) $, $ (t,x)\in[0,\infty)\times\mathbf{R} $, and $ \xi=\xi(t,x) $ denotes the Gaussian spacetime white noise (see, e.g.,~\cite{walsh86}). In fact, the result of~\cite{bertini97} is on the full-line $ \mathbf{Z} $, and in that context $ \varepsilon\to 0 $ represents lattice spacing, which is identified with $ N^{-1} $ here. Also, the work of \cite{bertini97} assumes near stationary initial conditions similar to the ones considered here in~\varepsilonqref{eq:nearst}. 
Other initial conditions were considered later in \cite{ACQ}. The first key observation of our present paper is that for inhomogeneities of the form introduced above, G\"{a}rtner's transform remains essentially valid. In particular, the Laplacian term $\sqrt{r\ell}\Delta$ in the discrete \ac{SHE} \eqref{eq:dshe} is replaced by the spatially inhomogeneous operator $$ \mathbb{H} := \sqrt{r\ell} \, \mathbb{a} t(x)\Delta - \nu \, \big( \mathbb{a} t(x)-1\big), $$ which involves a mixture of the \textbf{Bouchaud trap model} generator and a \textbf{parabolic Anderson model} type potential (see the sketch of the proof later in this introduction for further explanation and references for these terms). Besides this change, the martingale is also modified. Armed with the G\"{a}rtner transform, we investigate the effect of $ \mathbb{a} t(x) $ at large scales in the $ N\to\infty $ limit. In doing so, we focus on the case where the effect of $ \mathbb{a} t(x) $ is compatible with the aforementioned \ac{SPDE} limit. A prototype of our study is when \begin{align*} \mathbb{a} t(x) = 1 + \tfrac{1}{\sqrt{N}} \mathbb{b} (x), \qquad \{ \mathbb{b} (x):x\in\mathbb{T}\} \text{ i.i.d., bounded, with } \mathbf{E} [ \mathbb{b} (x)]=0. \end{align*} For this example of i.i.d.\ inhomogeneities, the $ N^{-\frac12} $ scaling is weak enough to have an \ac{SPDE} limit, while still strong enough to modify the nature of said limit. To demonstrate the generality of our approach, we will actually consider a much more general class of inhomogeneities. Let us first prepare some notation. For $ x,x'\in\mathbb{T} $, let $ [x,x']\subset \mathbb{T} $ denote the closed interval on $ \mathbb{T} $ that goes counterclockwise (see Figure~\ref{fig:asepRing} for the orientation) from $ x $ to $ x' $, and similarly for open and half-open intervals. 
With $ |I| $ denoting the cardinality of (i.e., number of points within) an interval $ I\subset\mathbb{T} $, we define the \textbf{geodesic distance} \begin{align*} \mathrm{dist} _{\mathbb{T}}(x,x') := |(x,x']| \wedge |(x',x]|. \end{align*} We will also be considering the continuum torus $ \mathcal{T} := \mathbf{R}/\mathbf{Z} \simeq [0,1) $, which is to be viewed as the $ N\to\infty $ limit of $ \frac{1}{N}\mathbb{T} $. Similarly for the continuum torus $ \mathcal{T} $, we let $ [x,x']\subset\mathcal{T} $ denote the interval going from $ x $ to $ x' $ counterclockwise, let $ |[x,x']| $ denote the length of the interval, and let $ \mathrm{dist} _{\mathcal{T}}(x,x') $, $ x,x'\in\mathcal{T} $ denote the analogous geodesic distance on $ \mathcal{T} $. For $ u\in[0,1] $, let $ C^{u}[0,1] $ denote the space of $ u $-H\"{o}lder continuous functions $ f:[0,1]\to\mathbf{R} $, equipped with the norm \begin{align} \label{eq:Hold} \norm{f}_{C^u[0,1]} := \norm{f}_{L^\infty[0,1]}+ [f]_{C^u(\mathcal{T})}, \qquad [f]_{C^u(\mathcal{T})} := \sup_{[x, x']\subset\mathcal{T}} \Big( \frac{1}{|[x,x']|^u} \Big| \int_{[x,x']\setminus\{0\}} d f(y)\Big| \Big), \end{align} where the integral is in the Riemann--Stieltjes sense. The integral excludes $ 0 $ so that the possible jump of $ f $ there will not be picked up. We now define the type of inhomogeneities to be studied. Throughout this article, we will consider possibly random $ ( \mathbb{a} t^{(N)}(x))_{x\in\mathbb{T}} $ that may depend on $ N $. Set $ \mathbb{a} ^{(N)}(x) := \mathbb{a} t^{(N)}(x)-1 $, and put \begin{align} \label{eq:Rt} \mathbb{A} ^{(N)}(x,x') := -\frac12 \sum_{y\in(x,x']} \mathbb{a} ^{(N)}(y), \qquad x,x'\in\mathbb{T}. \end{align} As announced previously, we will often write $ \mathbb{a} t^{(N)} = \mathbb{a} t $, $ \mathbb{a} ^{(N)}= \mathbb{a} $, $ \mathbb{A} ^{(N)}= \mathbb{A} $, etc., to simplify notation. When $ x=0 $, we will write $ \mathbb{A} (x):= \mathbb{A} (0,x) $. 
Consider also the scaled partial sums $ \mathbb{A} _N(x,x') := \mathbb{A} (xN,x'N) $ and $ \mathbb{A} _N(x) := \mathbb{A} (xN) $, which are linearly interpolated to be functions on $ \mathcal{T}^2 $ and $ [0,1) $, respectively. For $ f:\mathbb{T}^2\to\mathbf{R} $, we define a seminorm that is analogous to $ [\Cdot]_{C^u(\mathcal{T})} $ in~\eqref{eq:Hold}: \begin{align} \label{eq:hold} \hold{f}_{u,N} := \sup_{[x,x']\subset\mathbb{T}} \Big( \frac{1}{(|(x,x']|/N)^u} |f(x,x')| \Big). \end{align} Throughout this article we assume $ \{ \mathbb{a} t(x):x\in\mathbb{T}\} $ satisfies: \begin{assumption}\label{assu:rt} \begin{enumerate}[label=(\alph*),leftmargin=7ex] \item[] \item \label{assu:rt:bdd} For some fixed constant $ c\in(0,\infty) $, $ \frac{1}{c} \leq \mathbb{a} t(x) \leq c $. \item \label{assu:rt:holder} For some $ u_{\mathbf{R}t} >0 $, the partial sum $ \mathbb{A} _N(x,x') $ is $ u_{\mathbf{R}t} $-H\"{o}lder continuous: \begin{align*} \lim_{\Lambda\to\infty}\liminf_{N\to\infty} \mathbf{P} \Big[ \hold{ \mathbb{A} _N}_{ u_{\mathbf{R}t} ,N} \leq \Lambda \Big] =1. \end{align*} \item \label{assu:rt:limit}For the same $ u_{\mathbf{R}t} >0 $ as in \ref{assu:rt:holder} there exists a $ C^{ u_{\mathbf{R}t} }[0,1] $-valued process $ \mathbb{A} lim $ such that \begin{align*} \sup_{ x\in[0,\frac{N-1}{N}) } | \mathbb{A} _N(x)- \mathbb{A} lim(x)| \longrightarrow_\text{P} 0, \qquad \text{as } N\to\infty, \end{align*} where $ \to_\text{P} $ denotes convergence in probability. \end{enumerate} \end{assumption} \begin{remark}\label{rmk:rt} \begin{enumerate}[label=(\alph*),leftmargin=7ex] \item [] \item Assumption~\ref{assu:rt}\ref{assu:rt:bdd} ensures the rate $ \mathbb{a} t(x) $ is always nonnegative so that the process is well-defined. \item Note that we do \emph{not} assume $ ( \mathbb{a} (0)+\cdots+ \mathbb{a} (N-1))=0 $ or $ \mathbb{A} lim(1)=0 $. 
\item Under Assumption~\ref{assu:rt}\ref{assu:rt:limit}, the microscopic processes $\big\{ \mathbb{A} ^{(N)}\big\}_{N}$ (and likewise $\big\{ \mathbb{a} ^{(N)}\big\}_{N}$) and limiting process $ \mathbb{A} lim $ are \emph{coupled} on the same probability space. \end{enumerate} \end{remark} Here we list a few examples that fit into our working Assumption~\ref{assu:rt}. \begin{example}[i.i.d.\ inhomogeneities] \label{ex:iid} Consider $ \mathbb{a} ^{(N)}(x)=\tfrac{1}{\sqrt{N}} \mathbb{b} (x) $, where $\{ \mathbb{b} (x):x\in\mathbb{T}\}$ are i.i.d., bounded, with $ \mathbf{E} [ \mathbb{b} (x)]=0 $ and $ \mathbf{E} [ \mathbb{b} (x)^2] =: \sigma^2>0 $. Indeed, Assumptions \ref{assu:rt}\ref{assu:rt:bdd}--\ref{assu:rt:holder} are satisfied for any $ u_{\mathbf{R}t} \in(0,\frac12) $ (and $ N $ large enough). The invariance principle asserts that $ \mathbb{A} _N(x) $ converges in distribution to $ \frac12\sigma B(x) $ in $ C[0,1] $, where $ B(x) $ denotes a standard Brownian motion. By Skorokhod's representation theorem, after suitable extension of the probability space, we can couple $ \{ \mathbb{A} ^{(N)}\}_N$ and $B$ together so that Assumption~\ref{assu:rt}\ref{assu:rt:limit} holds. \end{example} \begin{example}[fractional Brownian motion] \label{ex:fbm} Let $ B^\alpha(x) $, $ x \geq 0 $, denote a fractional Brownian motion of a fixed Hurst exponent $ \alpha\in(0,1) $. For $ x\in \mathbb{T}^{(N)}\simeq \{0,1,\ldots,N-1\} $, set $ \widehat{ \mathbb{a} }^{(N)}(x) = B^\alpha(\frac{x+1}{N})-B^\alpha(\frac{x}{N}) $, and $ \mathbb{a} ^{(N)}(x) := \widehat{ \mathbb{a} }^{(N)}(x) \mathbf{1} _\set{|\widehat{ \mathbb{a} }^{(N)}(x)|<1/2} $. To be clear, we define $\widehat{ \mathbb{a} }^{(N)}(N-1) = B^\alpha(1)-B^\alpha(\frac{N-1}{N})$. The indicator $ \mathbf{1} _\set{|\widehat{ \mathbb{a} }^{(N)}(x)|<1/2} $ forces Assumption~\ref{assu:rt}\ref{assu:rt:bdd} to hold. 
Since each $ \widehat{ \mathbb{a} }^{(N)}(x) $ is a mean-zero Gaussian of variance $ N^{-2\alpha} $, we necessarily have that \begin{align*} \mathbf{P} \big[ \mathbb{a} ^{(N)}(x) = \widehat{ \mathbb{a} }^{(N)}(x), \ \forall x\in\mathbb{T} \big] \longrightarrow 1, \qquad \text{as } N\to\infty. \end{align*} Given this, it is standard to verify that Assumptions~\ref{assu:rt}\ref{assu:rt:holder}--\ref{assu:rt:limit} hold for $ u_{\mathbf{R}t} \in(0,\alpha) $ and $ \mathbb{A} lim=-\frac12 B^\alpha $. \end{example} \begin{example}[Alternating] \label{ex:alt} Fix any $ \delta>0 $ and let $ \mathbb{a} ^{(N)}(x)=N^{-\delta} $ for $ x=0,2,4,\ldots,N-2 $ and $ \mathbb{a} ^{(N)}(x)=-N^{-\delta} $ for $ x=1,3,\ldots,N-1 $. It is readily verified that Assumptions~\ref{assu:rt}\ref{assu:rt:bdd}--\ref{assu:rt:limit} hold for $ u_{\mathbf{R}t} \in(0,\delta] $ and $ \mathbb{A} lim\equiv 0 $. \end{example} Roughly speaking, our main result asserts that, for inhomogeneous \ac{ASEP} under Assumption~\ref{assu:rt}, $ Z_N(t,x) $ (defined via \eqref{eq:Z} and \eqref{eq:was}) converges in distribution to the solution of the following \ac{SPDE}: \begin{align} \label{eq:spde} \partial_t \mathcal{Z} = \mathcal{H} \mathcal{Z} + \xi \mathcal{Z}, \qquad \mathcal{H} := \tfrac12 \partial_{xx} + \mathbb{A} lim'(x). \end{align} To state our result precisely, we first recall a result from~\cite{fukushima77} on the Schr\"{o}dinger operator with a rough potential. It is shown therein that, for any bounded Borel function $ f:[0,1]\to\mathbf{R} $, the expression $ \frac12 \partial_{xx} + f'(x) $ defines a self-adjoint operator on $ L^{2}[0,1] $ with Dirichlet boundary conditions. This construction readily generalizes to $ \mathcal{T} $ (i.e., $ [0,1] $ with periodic boundary condition) considered here.
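The alternating example can also be checked numerically. The following sketch is an illustration only, not part of the paper's argument: the sign convention $ \mathbb{A} (x) = -\tfrac12( \mathbb{a} (0)+\cdots+ \mathbb{a} (x-1)) $ is an assumption (consistent with the relation $ \mathbb{a} (x)=-2( \mathbb{A} (x)- \mathbb{A} (x-1)) $ invoked in Section~\ref{sect:hc}, and the bounds checked below are insensitive to it), and the torus size is an arbitrary choice. It verifies that the partial sums of Example~\ref{ex:alt} vanish uniformly (consistent with $ \mathbb{A} lim\equiv 0 $) and have H\"{o}lder seminorm, with exponent $ u=\delta $, equal to $ \tfrac12 $.

```python
def partial_sums(a):
    # A(x) := -(a(0)+...+a(x-1))/2; the sign is an assumed convention,
    # and the bounds checked below do not depend on it
    A, s = [0.0], 0.0
    for v in a:
        s += v
        A.append(-0.5 * s)
    return A

def holder_seminorm(A, N, u):
    # discrete analogue of the seminorm [.]_{u,N}:
    # sup over x != x' of |A(x) - A(x')| / (|x - x'|/N)^u
    return max(
        abs(A[x] - A[y]) / ((abs(x - y) / N) ** u)
        for x in range(len(A)) for y in range(len(A)) if x != y
    )

# Example (Alternating): a(x) = +N^{-delta} at even x, -N^{-delta} at odd x
N, delta = 200, 0.3
a = [N ** (-delta) if x % 2 == 0 else -N ** (-delta) for x in range(N)]
A = partial_sums(a)
# the partial sums oscillate between 0 and -N^{-delta}/2, so sup|A| -> 0,
# while the u = delta Hoelder seminorm equals 1/2 (attained at distance 1)
```

Since neighboring terms cancel, only the last unpaired site contributes, which is exactly why any exponent $ u\in(0,\delta] $ works in this example.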
In Section~\ref{sect:Sg}, for a given $ \mathbb{A} lim\in C^{ u_{\mathbf{R}t} }[0,1] $, we construct the semigroup $ \mathcal{Q} (t)=e^{t \mathcal{H} } $ by giving an explicit formula for the kernel $ \mathcal{Q} (t;x,\widetilde{x}) $. We say that a $ C([0,\infty),C(\mathcal{T})) $-valued process $ \mathcal{Z} $ is a \textbf{mild solution} of~\eqref{eq:spde} with initial condition $ \mathcal{Z}^\mathrm{ic} \in C(\mathcal{T}) $, if \begin{align} \label{eq:spde:mild} \mathcal{Z}(t,x) = \int_{\mathcal{T}} \mathcal{Q} (t;x,\widetilde{x}) \mathcal{Z}^\mathrm{ic}(\widetilde{x})d\widetilde{x} + \int_0^t \int_{\mathcal{T}} \mathcal{Q} (t-s;x,\widetilde{x}) \mathcal{Z}(s,\widetilde{x}) \xi(s,\widetilde{x}) dsd\widetilde{x}. \end{align} \begin{remark} \label{rmk:Rtfixed} In~\eqref{eq:spde:mild}, $ \mathcal{Q} (t;x,\widetilde{x}) $ is taken to be independent of the driving noise $ \xi $. This being the case, throughout this article, for the analysis that involves the limiting \ac{SPDE}~\eqref{eq:spde}--\eqref{eq:spde:mild}, we will assume without loss of generality that $ \mathcal{Q} (t;x,\widetilde{x}) $ is deterministic, and interpret the stochastic integral $ \int(\ldots)\xi(s,x) dsdx $ in the It\^{o} sense. \end{remark} \noindent Using standard Picard iteration, we show in Proposition~\ref{prop:unique} that~\eqref{eq:spde:mild} admits at most one solution for a given $ \mathcal{Z}^\mathrm{ic} \in C(\mathcal{T}) $. Existence follows from Theorem~\ref{thm:main} below. Fix $ u_\mathrm{ic} >0 $. Throughout this article we will also fix a sequence of \emph{deterministic} initial conditions for the \ac{ASEP} height function $ \big\{ h^{\mathrm{ic},(N)}(\Cdot) \big\}_N $, and let $ Z^{\mathrm{ic},(N)}(x) $ be defined in terms of $ h^{\mathrm{ic},(N)} $ via \eqref{eq:Z} and \eqref{eq:was} at $t=0$. We make the assumption that the initial conditions are \textbf{near stationary}.
This is most easily stated in terms of $ Z^{\mathrm{ic},(N)}$ and posits that there exists a finite constant $ c<\infty $ such that, with the shorthand notation $ Z^{\mathrm{ic}}:=Z^{\mathrm{ic},(N)} $, \begin{align} \label{eq:nearst} Z^\mathrm{ic}(x) \le c, \qquad | Z^\mathrm{ic}(x)-Z^\mathrm{ic}(x')| \le c \ \big(\tfrac{ \mathrm{dist} _{\mathbb{T}}(x,x')}{N}\big)^{ u_\mathrm{ic} }, \qquad \forall x,x'\in\mathbb{T}, \ N\in\mathbf{Z}_{>0}. \end{align} Recall the scaled process $ Z_N(t,x) $ from~\eqref{eq:was}, and similarly scale $ Z^\mathrm{ic}_N(x) := Z^{\mathrm{ic},(N)}(xN) $. We linearly interpolate the process $ Z_N(t,x) $ in $ x $ so that it is $ D([0,\infty),C(\mathcal{T})) $-valued. We endow the space $ C(\mathcal{T}) $ with the uniform norm $ \norm{\,\Cdot\,}_{C(\mathcal{T})} $ (and hence the uniform topology), and, for each $ T<\infty $, endow the space $ D([0,T],C(\mathcal{T})) $ with Skorokhod's $ J_1 $-topology. We use $ \Rightarrow $ to denote weak convergence of probability laws. Our main result is the following: \begin{theorem} \label{thm:main} Consider a half-filled inhomogeneous \ac{ASEP} on $ \mathbb{T}^{(N)}$, with deterministic, near stationary initial condition as described in the preceding. If, for some $ \mathcal{Z}^\mathrm{ic}\in C(\mathcal{T}) $, \begin{align*} \norm{Z^\mathrm{ic}_N - \mathcal{Z}^\mathrm{ic}}_{C(\mathcal{T})} \longrightarrow 0, \qquad \text{as }N\to\infty, \end{align*} then, under the scaling~\eqref{eq:was}, \begin{align*} Z_N \Longrightarrow \mathcal{Z} \text{ in } D([0,T],C(\mathcal{T})), \qquad \text{as }N\to\infty, \end{align*} for each $ T<\infty $, where $ \mathcal{Z} $ is the mild solution of~\eqref{eq:spde} with initial condition $ \mathcal{Z}^\mathrm{ic} $.
\end{theorem} \begin{remark} Though we formulate all of our results at the level of \ac{SHE}-type equations, they can also be interpreted in terms of convergence of the \ac{ASEP} height function (under suitable centering and scaling) to a \ac{KPZ}-type equation which is formally written as \begin{align}\label{KPZtypeformal} \partial_t \mathcal{H}(t,x) = \tfrac{1}{2}\partial_{xx} \mathcal{H}(t,x) - \tfrac{1}{2} \big(\partial_x \mathcal{H}(t,x)\big)^2 + \xi(t,x) - \mathbb{A} lim'(x). \end{align} The solution to this equation should be (as in the case where $ \mathbb{A} lim'(x)\equiv 0$) defined via $\mathcal{H}(t,x) = -\log \mathcal{Z}(t,x)$. One could also try to prove well-posedness of this inhomogeneous \ac{KPZ} equation directly, though this is outside the scope of our present investigation and unnecessary for our aim. \end{remark} \subsection*{Steps in the proof of Theorem \ref{thm:main}} Given that Theorem~\ref{thm:main} concerns convergence at the level of $ \mathcal{Z} $, our proof naturally goes through the microscopic transform~\eqref{eq:Z}. As mentioned earlier, for \emph{homogeneous} \ac{ASEP}, $ Z $ solves the microscopic \ac{SHE} \eqref{eq:dshe}. On the other hand, in the presence of inhomogeneities, it was not at all clear that G\"{a}rtner's transform applies. As noted in \cite[Remark~4.5]{borodin14}, transforms of the type~\eqref{eq:Z} are tied up with Markov duality. The inhomogeneous \ac{ASEP} considered here lacks a certain type of Markov duality, so that one cannot infer a useful transform from duality considerations. More specifically, referring to the notation in Remark~\ref{rmk:rt} and \cite{borodin14}, the inhomogeneous \ac{ASEP} does enjoy a Markov duality for the observable $ \widetilde{Q}(t,\vec{x})$ (which is essentially $\eta(t,x)Z(t,x)$ in our notation), but not for $ Q(t,\vec{x}) $ (which is essentially $Z(t,x)$ in our notation).
The latter is crucial for inferring a transform of the type~\eqref{eq:Z}. The first step of the proof is to observe that, despite the (partial) loss of Markov duality, $ Z $ still solves an \ac{SHE}-type equation (\eqref{eq:Lang} in the following), with two significant changes compared to~\eqref{eq:dshe}. \begin{enumerate}[label=\roman*),leftmargin=30pt] \item \label{enu:Bouchaud} First, the discrete Laplacian is replaced by the generator of an inhomogeneous random walk. Interestingly, this walk is exactly Bouchaud's model \cite{bouchaud92}, which is often studied with heavy-tailed $ \mathbb{a} t(x) $ (as opposed to Assumption~\ref{assu:rt}) in the context of randomly trapped walks. \item \label{enu:pontential} Additionally, a potential term (the term $ \nu \mathbb{a} (x)Z(t,x)dt $ in~\eqref{eq:Lang}) appears due to the unevenness of the quenched expected growth. For homogeneous \ac{ASEP} with near stationary initial condition, the height function grows at a constant expected speed, and the term $ e^{\nu t} $ in~\eqref{eq:Z} is in place to balance such a constant growth. Due to the presence of the inhomogeneity, in our case the quenched expected growth is no longer constant and varies among sites. This results in a fluctuating potential that acts on $ Z(t,x) $. \end{enumerate} The two terms in~\ref{enu:Bouchaud}--\ref{enu:pontential} together make up an operator $ \mathbb{H} $ (defined in~\eqref{eq:Lang}) of Hill type that governs the microscopic equation. Correspondingly, the semigroup $ \mathbb{Q} (t):=e^{t \mathbb{H} } $ now plays the role of the standard heat kernel from the case of homogeneous \ac{ASEP}. We refer to $ \mathbb{Q} (t):=e^{t \mathbb{H} } $ and its continuum analog $ \mathcal{Q} (t) $ as \ac{PAM} semigroups. Much of our analysis consists of estimating the kernels of $ \mathbb{Q} (t) $ and $ \mathcal{Q} (t) $.
These estimates are crucial in order to adapt and significantly extend the core argument of \cite{bertini97}. To prepare for the analysis of $ \mathcal{Q} (t) $, in Section~\ref{sect:hk}, we establish bounds on the transition kernel of Bouchaud's walk, which was described in \ref{enu:Bouchaud}, and show that the kernel is well-approximated by that of the simple random walk. The transition kernel of Bouchaud's walk has been studied with heavy-tailed inhomogeneities in the context of trapping models --- see the beginning of Section~\ref{sect:hk} for more discussion of this literature. Here we consider \emph{bounded} and \emph{vanishing} inhomogeneities, which are technically much simpler to deal with than heavy-tailed ones. Given the vast literature (some of which is surveyed at the beginning of Section~\ref{sect:hk}) on heat kernel estimates, it is quite possible that some of the bounds (Proposition~\ref{prop:hk}\ref{cor:hk:hkasup}--\ref{cor:hk:hkahold::}) obtained in this paper could be derived from, or follow similarly from, existing techniques. However, for the sake of being self-contained, we provide a short and elementary derivation, via Picard iteration, of the heat kernel bounds we use. Based on the bounds in Section~\ref{sect:hk} on the kernel of Bouchaud's walk, in Section~\ref{sect:Sgsg} we express the PAM semigroup $ \mathbb{Q} (t) $ using the Feynman--Kac formula, with Bouchaud's walk being the underlying measure. We then expand the Feynman--Kac formula, and develop techniques to bound the resulting expansion to obtain estimates on the kernel of $ \mathbb{Q} (t) $ and its continuum analog. For the operator $ \frac12\Delta-V $ with a singular potential $ V $, accessing the corresponding semigroup has been a classic subject of study in mathematical physics.
In particular, semigroup kernels and the Feynman--Kac formulas have been studied in \cite{mckean77,simon82}, and the expansion we use is similar to the one considered in \cite[Section 14, Chapter V]{simon79}. The major difference is that our potential $ \mathbb{A} lim' $ has negative regularity, and hence is not function-valued. Armed with the heat kernel bounds from Section~\ref{sect:hk} and the semigroup kernel expansion from Section~\ref{sect:Sgsg}, the final two sections adapt the key ideas from the work of Bertini and Giacomin \cite{bertini97} to the inhomogeneous setting, to prove tightness (Section \ref{sect:mom}) and to identify the limiting \ac{SPDE} (Section \ref{sect:pfmain}). \subsection*{Further directions} There are a number of directions involving inhomogeneous \ac{ASEP} which could warrant further investigation. At a very basic level, in this article we limit our scope to half-filled systems on the torus with near stationary initial conditions so as to simplify the analysis. However, we expect similar results should be provable via our methods when one relaxes these conditions. Putting aside the weak asymmetry scaling, it is compelling to consider the nature of the long-time hydrodynamic limit (i.e., functional law of large numbers) or fluctuations (i.e., central limit type theorems) for inhomogeneous \ac{ASEP}. For the homogeneous \ac{ASEP} the hydrodynamic limit is dictated by Hamilton--Jacobi PDEs and the fluctuations are characterized by the \ac{KPZ} universality class --- does any of this survive the introduction of inhomogeneities? These questions are complicated by the lack of an explicit invariant measure for our inhomogeneous \ac{ASEP}, as well as a lack of any apparent exact solvability. There are other types of inhomogeneities which can be introduced into \ac{ASEP} and it is natural to consider whether different choices lead to similar long-time scaling limits or demonstrate different behaviors.
Our choice of inhomogeneities stemmed from the fact that, upon applying G\"{a}rtner's transform, one obtains an \ac{SHE}-type equation. On the other hand, our methods seem not to apply to site (instead of bond) inhomogeneities (so that jumps out of $ x $ occur at rates $\ell \mathbb{a} t(x)$ and $r \mathbb{a} t(x) $). They also do not seem to extend to non-nearest-neighbor systems. A more direct approach (e.g.\ regularity structures \cite{Hairer13}; energy solutions \cite{GJ2014a, GP2015a}; paracontrolled distributions \cite{GIP15,GP17}; or renormalization group \cite{Kupiainen16}) at the level of the height function may eventually prove useful in dealing with these generalizations. For the homogeneous non-nearest-neighbor ASEP (or other homogeneous generalizations which maintain a product invariant measure), the energy solution method has proved quite useful for demonstrating \ac{KPZ} equation limit results --- see, for example, \cite{GJ14}. Another type of inhomogeneity would involve jumps out of $ x $ at rates $ \mathbb{a} t(x) +b $ to the left and $ \mathbb{a} t(x)-b $ to the right. A special case of this type is studied in \cite{franco16}, where the authors consider a single slow bond (i.e., $ \mathbb{a} t(x)\equiv \mathbb{a} t_* $ for $ x\neq 0 $ and $ \mathbb{a} t(0)< \mathbb{a} t_* $). In that case, they show that the inhomogeneity preserves the product Bernoulli invariant measure (note that the inhomogeneities we consider do not preserve this property). (In fact, the argument in \cite{franco16} for this preservation of the invariant measure may be generalized to more than just a single-site inhomogeneity.) Using energy solution methods, \cite{franco16} shows that, depending on the strength of the asymmetry and the slow bond, one either obtains a Gaussian limit with a possible effect of the slow bond, or the \ac{KPZ} equation without the effect of the slow bond.
It would be interesting to see if this type of inhomogeneity (at every bond, not just restricted to a single site) could lead to a similar sort of \ac{KPZ} equation with inhomogeneous spatial noise such as that derived herein. The works \cite{covert97,rolla08,calder15} characterized the hydrodynamic limit for \ac{ASEP} and TASEP with inhomogeneities that vary at a \emph{macroscopic} scale. Those methods do not seem amenable to rough or rapidly varying parameters (such as the i.i.d.\ or other examples considered herein) and it would be interesting to determine their effect. A special case of spatial inhomogeneity is to have a slow bond at the origin. The slow bond problem is traditionally considered for the TASEP, with particular interest in how the strength of the slow-down affects the hydrodynamic limit of the flux; see \cite{janowsky92,basu14} and the references therein. As mentioned previously, this problem has been further considered in the context of weakly asymmetric \ac{ASEP} in \cite{franco16}. There are other studies of TASEP (or equivalently last passage percolation) with inhomogeneities in \cite{gravner02, gravner02a, lin12, emrah15, emrah16, borodin17}. The type of inhomogeneities in those works is of a rather different nature than those considered here. In terms of TASEP, their inhomogeneities mean that the $i^{th}$ jump of the $j^{th}$ particle occurs at rate $\pi_i+\widehat{\pi}_j$ for inhomogeneity parameters $\{\pi_i\}$ and $\{\widehat{\pi}_j\}$. Such inhomogeneities do not seem to result in a temporally constant (but spatially varying) noise in the limit. Thus, the exact methods which are applicable in those works do not seem likely to grant access to the fluctuations or phenomena surrounding our inhomogeneous process or limiting equation. As mentioned previously, upon applying G\"{a}rtner's transform we obtain an \ac{SHE}-type equation with the generator of Bouchaud's walk.
Our particular result involves tuning the waiting time rate near unity, and under such scaling the inhomogeneous walk approximates the standard random walk. On the other hand, Bouchaud's model (introduced in \cite{bouchaud92} in relation to aging in disordered systems; see also \cite{benarous06,benarous15} for example) is often studied under the assumption of heavy-tailed waiting parameters. In such a regime, one expects to see the effect of trapping, and in particular the FIN diffusion \cite{fontes99} is a scaling limit that exhibits the trapping effect. It would be interesting to consider a scaling limit of inhomogeneous \ac{ASEP} in which the FIN diffusion arises. See the beginning of Section \ref{sect:hk} for further discussion and references related to Bouchaud's model. For the case $ \mathbb{A} lim'(x)=B'(x) $ (spatial white noise), the operator $ \mathcal{H} $ that goes into the \ac{SPDE}~\eqref{eq:spde} is known as Hill's operator. There has been much interest in the spectral properties of this and similar random Schr\"{o}dinger-type operators. In particular, \cite{frisch60, halperin65, fukushima77, mckean94, cambronero99, cambronero06} studied the ground state energy in great depth, and recently \cite{dumaz17} proved results on the point process of the lowest few energies, as well as the localization of the eigenfunctions. On the other hand, the semigroup $ \mathcal{Q} (t) := e^{t \mathcal{H} } $ is the solution operator of the (continuum) \ac{PAM} (see \cite{carmona94, koenig16} and the references therein for extensive discussion of the discrete and continuum \ac{PAM}). A compelling challenge is to understand how this spectral information translates into the long-time behavior of our \ac{SPDE}. For instance, what can be said about the intermittency of this \ac{SPDE}? \subsection*{Outline} In Section~\ref{sect:hc}, we derive the microscopic (\ac{SHE}-type) equation for $ Z(t,x) $.
As seen therein, the equation is governed by a Hill-type operator $ \mathbb{H} $ that involves the generator of a (Bouchaud-type) inhomogeneous walk. Subsequently, in Sections~\ref{sect:hk} and \ref{sect:Sgsg} we develop the necessary estimates on the transition kernel of the inhomogeneous walk and on the Hill-type operator. Given these estimates, we proceed to prove Theorem~\ref{thm:main} in two steps: by first establishing tightness of $ \{Z_N\}_N $ and then characterizing its limit point. Tightness is settled in Section~\ref{sect:mom} via moment bounds. To characterize the limit point, in Section~\ref{sect:pfmain}, we develop the corresponding martingale problem, and prove that the process $ Z_N(t,x) $ solves the martingale problem. \subsection*{Acknowledgment} We thank the anonymous referees for their very useful comments, which helped us improve the presentation and the historical treatment of results in this article. During the conference `Stochastic Analysis, Random Fields and Integrable Probability, The 12th Mathematical Society of Japan, Seasonal Institute', Takashi Kumagai generously walked us through references on heat kernel estimates in related settings. We appreciate Kumagai's help and the hospitality of the conference organizers. We thank Yu Gu and Hao Shen for useful discussions during the writing of the first version of this manuscript, and particularly acknowledge Hao Shen for pointing out to us the argument in \cite[Proof of Proposition~3.8]{labbe17}. Ivan Corwin was partially supported by the Packard Fellowship for Science and Engineering, and by the NSF through DMS-1811143 and DMS-1664650. Li-Cheng Tsai was partially supported by the Simons Foundation through a Junior Fellowship and by the NSF through DMS-1712575. \subsection*{Notation} We use $ c(u,v,\ldots)\in (0,\infty) $ to denote a generic, positive, finite, deterministic constant, that may change from line to line (or even within a line), but depends only on the designated variables $ u,v,\ldots $.
We use the subscript $ N $ to denote scaled processes/spaces/functions, e.g., $ Z_N $ in~\eqref{eq:was}. Intrinsic (not from scaling) dependence on $ N $ will be designated by a superscript $ (N) $, e.g., $ \mathbb{T}^{(N)}$, though, to alleviate heavy notation, we will often omit such dependence, e.g., $ \mathbb{T} := \mathbb{T}^{(N)} $. For processes/spaces/functions that have discrete and continuum versions, we often use the same letter but in different fonts: blackboard bold for the discrete and calligraphic for the continuum, e.g., $ \mathbb{A} (x) $ and $ \mathbb{A} lim(x) $. We list some of the recurring notation below for the convenience of the reader. The left column is for discrete notation and the right for the analogous continuum notation. Some notation only occurs in one of the two contexts. \begin{minipage}{.49\linewidth} \begin{itemize}[leftmargin=0pt] \item [] \item [] \item [] $\mathbb{T}= \mathbf{Z}/N\mathbf{Z}$: discrete torus \item [] $ \mathrm{dist} _\mathbb{T}(\Cdot,\Cdot) $: geodesic distance on $ \mathbb{T} $ \item [] \item [] \item [] \item [] $ \mathbb{a} t(x) $: the inhomogeneity \item [] $ \mathbb{a} (x) := \mathbb{a} t(x)-1 $ \item [] $ \mathbb{A} (x,x') $: partial sum of $ \mathbb{a} (x) $, see~\eqref{eq:Rt}.
\item [] $ \mathbb{A} (x) := \mathbb{A} (0,x) $ \item [] $ \mathbf{E} rt[\,\Cdot \,] := \mathbf{E} [ \,\Cdot \, | \mathbb{a} (x),x\in\mathbb{T}] $ \item [] $ \mathbb{p} (t) $: random-walk semigroup on $ \mathbb{T} $ \item [] $ \mathbb{p} a(t) $: inhomogeneous random-walk semigroup on $ \mathbb{T} $ \item [] $ \mathbb{p} r(t) := \mathbb{p} a(t) - \mathbb{p} (t) $ \item [] $ \mathbb{H} $: the discrete \ac{PAM} operator \item [] $ \mathbb{Q} (t) := e^{t \mathbb{H} } $ \item [] $ \mathbb{Q} r(t) := \mathbb{Q} (t)- \mathbb{p} (t) $ \item [] $ \mathbb{Q} ra(t) := \mathbb{Q} (t)- \mathbb{p} a(t) $ \end{itemize} \end{minipage} \begin{minipage}{.49\linewidth} \begin{itemize}[leftmargin=5pt] \item [] \item [] \item [] $ \mathcal{T} := \mathbf{R}/\mathbf{Z} $ (continuum) torus \item [] $ \mathrm{dist} _\mathcal{T}(\Cdot,\Cdot) $: geodesic distance on $ \mathcal{T} $ \item [] $ C[0,1],C(\mathcal{T}) $: continuous functions on $ [0,1] $ and $ \mathcal{T} $ \item [] $ C^u[0,1] $: $ u $-H\"{o}lder continuous functions on $ [0,1] $ \item [] $ H^k(\mathcal{T}) $: the $ k $-th Sobolev space on $ \mathcal{T} $ \item [] \item [] \item [] \item [] $ \mathbb{A} lim(x) $: limit of $ \mathbb{A} (Nx) $, see Assumption~\ref{assu:rt}\ref{assu:rt:limit}. \item [] \item [] $ \mathcal{P} (t) := e^{\frac12 t\Delta} $, the heat semigroup on $ \mathcal{T} $ \item [] \item [] \item [] $ \mathcal{H} := \frac12\partial_{xx} + \mathbb{A} lim'(x) $, the continuum \ac{PAM} operator \item [] $ \mathcal{Q} (t) := e^{t \mathcal{H} } $ \item [] $ \mathcal{Q} r(t) := \mathcal{Q} (t)- \mathcal{P} (t) $ \item [] \end{itemize} \end{minipage} \section{Microscopic Equation for $ Z(t,x) $} \label{sect:hc} In this section we derive the microscopic equation for $ Z(t,x) $. In doing so, we view $ \{ \mathbb{a} t(x):x\in\mathbb{T}\} $ as being fixed (quenched), and consider only the randomness due to the dynamics of our process.
In deriving this equation for $Z(t,x)$ we will also treat $N$ as fixed and hence drop it from the notation. The inhomogeneous \ac{ASEP} can be constructed as a continuous time Markov process with a finite state space $ \{0,1\}^\mathbb{T} $, where $ \{0,1\} $ indicates whether a given site is empty or occupied. Here we build the inhomogeneous \ac{ASEP} via the graphical construction (see~\cite[Section~2.1.1]{liggett12,corwin12}). For each $ x\in\mathbb{T} $, let $ \{P_\rightarrow(t,x)\}_{t\geq 0} $ and $ \{P_\leftarrow(t,x)\}_{t\geq 0} $ be independent Poisson processes of rates $r \mathbb{a} t(x)$ and $\ell \mathbb{a} t(x)$ respectively. A particle attempts a jump across the bond $ (x,x+1) $ to the right (resp.\ left) whenever $ P_\rightarrow(\Cdot,x) $ (resp.\ $ P_\leftarrow(\Cdot,x) $) increases. The jump is executed if the destination is empty; otherwise the particle stays put. Let \begin{align} \label{eq:filZ} \mathscr{F} (t) := \sigma(P_\leftarrow(s,x),P_\rightarrow(s,x), \mathbb{a} (x): s\leq t, x\in\mathbb{T}) \end{align} denote the corresponding filtration. Recall from~\eqref{eq:Z} that $ \tau:=\frac{r}{\ell} $. Consider when a particle jumps from $ x $ to $ x+1 $. Such a jump occurs only if $ \eta(t,x)(1-\eta(t,x+1))=1 $, and, with $ Z(t,x) $ defined in~\eqref{eq:Z}, such a jump changes $ Z(t,x) $ by $ (\tau^{-1}-1)Z(t,x) $. Likewise, a jump from $ x+1 $ to $ x $ occurs only if $ \eta(t,x+1)(1-\eta(t,x))=1 $, and changes $ Z(t,x) $ by $ (\tau-1)Z(t,x) $. Taking into account the continuous growth due to the term $ e^{\nu t} $ in~\eqref{eq:Z}, we have that \begin{align} \notag dZ(t,x) =& \ \eta(t,x)(1-\eta(t,x+1)) (\tau^{-1}-1) Z(t,x) d P_\rightarrow(t,x) \\ \label{eq:Lang1} &+\eta(t,x+1)(1-\eta(t,x)) (\tau-1) Z(t,x) d P_\leftarrow(t,x) + \nu Z(t,x) dt. \end{align} The differential in $ dZ(t,x) $ acts on the $ t $ variable.
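For concreteness, the graphical construction just described can be sketched in a few lines of code (a minimal illustration, not part of the paper's argument; the torus size, rates, and time horizon in the usage below are arbitrary choices): each bond $(x,x+1)$ carries two Poisson clocks, and a ring of a clock triggers an attempted jump that is executed only if the destination site is empty.

```python
import random

def simulate_asep(eta, at, r, l, T, seed=0):
    """Graphical construction of inhomogeneous ASEP on the torus of size
    len(eta): the bond (x, x+1) carries Poisson clocks of rates r*at[x]
    (right jumps) and l*at[x] (left jumps)."""
    rng = random.Random(seed)
    N = len(eta)
    eta = list(eta)
    events = []  # all clock rings up to time T, as (time, bond, direction)
    for x in range(N):
        for rate, direction in ((r * at[x], +1), (l * at[x], -1)):
            t = 0.0
            while True:
                t += rng.expovariate(rate)
                if t > T:
                    break
                events.append((t, x, direction))
    for _, x, direction in sorted(events):
        y = (x + 1) % N
        if direction == +1 and eta[x] == 1 and eta[y] == 0:    # jump x -> x+1
            eta[x], eta[y] = 0, 1
        elif direction == -1 and eta[y] == 1 and eta[x] == 0:  # jump x+1 -> x
            eta[x], eta[y] = 1, 0
    return eta
```

Particle number is conserved by construction, and with $ \mathbb{a} t\equiv 1$ this reduces to the homogeneous \ac{ASEP}.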
We may extract the expected growth $ \mathbb{a} t(x)rt $ and $ \mathbb{a} t(x)\ell t $ from the Poisson processes $ P_\rightarrow(\Cdot,x) $ and $ P_\leftarrow(\Cdot,x) $, so that the processes \begin{align*} Q_\rightarrow(t,x) := P_\rightarrow(t,x) - \mathbb{a} t(x)rt, \qquad Q_\leftarrow(t,x) := P_\leftarrow(t,x) - \mathbb{a} t(x)\ell t \end{align*} are martingales. We then rewrite~\eqref{eq:Lang1} as \begin{align} \notag dZ(t,x) &= \Big( \mathbb{a} t(x)\eta(t,x)(1-\eta(t,x+1)) (\tau^{-1}-1) r + \mathbb{a} t(x)\eta(t,x+1)(1-\eta(t,x)) (\tau-1) \ell +\nu \Big) Z(t,x) dt + dM(t,x) \\ \label{eq:Lang2} &= \Big( \mathbb{a} t(x)(\ell-r)\big( \eta(t,x)-\eta(t,x+1)\big) +\nu \Big) Z(t,x) dt + dM(t,x), \end{align} where $ M(t,x) $ is an $ \mathscr{F} $-martingale given by \begin{align} \label{eq:mg} M(t,x) := \int_0^{t} Z(s,x)\Big( \eta(s,x)(1-\eta(s,x+1))(\tau^{-1}-1)dQ_\rightarrow(s,x) + \eta(s,x+1)(1-\eta(s,x))(\tau-1)dQ_\leftarrow(s,x) \Big). \end{align} Recall from~\eqref{eq:Z} that $ \nu := 1-2\sqrt{\ell r} $. Let $ \Delta f(x) := f(x+1)+f(x-1)-2f(x) $ denote the discrete Laplacian. By considering separately the four cases corresponding to $ (\eta(x),\eta(x+1)) \in \{0,1\}\times\{0,1\} $, it is straightforward to verify that \begin{align*} ( \ell-r ) \big(\eta(t,x)-\eta(t,x+1)\big) Z(t,x) = \sqrt{\ell r}\Delta Z(t,x) - \nu Z(t,x). \end{align*} Inserting this identity into~\eqref{eq:Lang2}, we obtain the following Langevin equation for $ Z(t,x) $: \begin{align} \label{eq:Lang} dZ(t,x) &= \mathbb{H} Z(t,x) dt + dM(t,x), \\ \label{eq:ham} \mathbb{H} &:= \sqrt{r\ell} \, \mathbb{a} t(x)\Delta - \nu \, \mathbb{a} (x). \end{align} Recall that $ \mathbb{A} (x) := \mathbb{A} (0,x) $.
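The four-case verification mentioned above is elementary, and the following sketch records it numerically (an illustration only; it assumes, consistently with the jump rules following \eqref{eq:filZ}, that $Z \propto \tau^{h/2}$ with $\tau = r/\ell$, height gradients $h(x+1)-h(x) = 2\eta(x+1)-1$ and $h(x)-h(x-1) = 2\eta(x)-1$, and the rate normalization $\ell + r = 1$; these conventions are assumptions not restated here).

```python
import math

def check_identity(l, r):
    """Check, for the four configurations (eta(x), eta(x+1)), that
      (l - r)(eta(x) - eta(x+1)) Z(x) = sqrt(l*r) * Delta Z(x) - nu * Z(x),
    assuming Z proportional to tau^{h/2} with tau = r/l, nu = 1 - 2 sqrt(l*r),
    l + r = 1, and gradients h(x+1)-h(x) = 2 eta(x+1) - 1 and
    h(x)-h(x-1) = 2 eta(x) - 1 (assumed conventions, see lead-in)."""
    tau, nu = r / l, 1.0 - 2.0 * math.sqrt(l * r)
    for ex in (0, 1):          # eta(x)
        for ex1 in (0, 1):     # eta(x+1)
            right = tau ** ((2 * ex1 - 1) / 2)   # Z(x+1)/Z(x)
            left = tau ** (-(2 * ex - 1) / 2)    # Z(x-1)/Z(x)
            lhs = (l - r) * (ex - ex1)
            rhs = math.sqrt(l * r) * (right + left - 2.0) - nu
            assert abs(lhs - rhs) < 1e-12, (ex, ex1, lhs, rhs)
    return True
```

Running `check_identity(0.6, 0.4)` (or any pair of rates with $\ell+r=1$) confirms the identity in all four cases.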
From~\eqref{eq:Rt} we have $ \mathbb{a} (x) = -2 \,( \mathbb{A} (x)- \mathbb{A} (x-1)) $, and, under the weak asymmetry scaling~\eqref{eq:was}, we have $ \nu = \frac1{2N} + O(N^{-2}) $. We hence expect (and will justify) that $ \mathbb{H} $ behaves like $ \mathcal{H} =\frac12\partial_{xx} + \mathbb{A} lim'(x) $. This explains why $ \mathcal{H} $ appears in the limiting equation~\eqref{eq:spde}. For~\eqref{eq:spde} to be the limit of~\eqref{eq:Lang}, the martingale increment $ dM(t,x) $ should behave like $ \xi \mathcal{Z} $. To see why this should be true, let us calculate the quadratic variation of $ M(t,x) $. The compensated Poisson processes $\big\{Q_\rightarrow(\Cdot,x), Q_\leftarrow(\Cdot,x)\big\}_{x\in \mathbb{T}}$ are all independent of each other. Thus, from~\eqref{eq:mg}, we have that \begin{align} \notag &d\langle M(t,x), M(t,\widetilde{x}) \rangle\\ \notag &\quad= \mathbf{1} _\set{x=\widetilde{x}}Z^2(t,x) \Big( \eta(t,x)(1-\eta(t,x+1))(\tau^{-1}-1)^2 \mathbb{a} t(x)r + \eta(t,x+1)(1-\eta(t,x))(\tau-1)^{2} \mathbb{a} t(x)\ell \Big)dt \\ \label{eq:qv} &\quad= \mathbf{1} _\set{x=\widetilde{x}}Z^2(t,x) (r-\ell)^2 \mathbb{a} t(x) \Big( \tfrac{1}{r}\eta(t,x) + \tfrac{1}{\ell}\eta(t,x+1) - \big(\tfrac1r+\tfrac1\ell\big)\eta(t,x)\eta(t,x+1) \Big)dt, \end{align} where $ \mathbf{1} _A(\Cdot) $ denotes the indicator function of a given set $ A $. Under the weak asymmetry scaling~\eqref{eq:was}, $ (r-\ell)^2 = \frac1N + O(N^{-2}) $ acts as the relevant scaling factor for the quadratic variation. In addition to this scaling factor, we should also consider the quantities that involve $ \eta(t,x) $ and $ \eta(t,x+1) $.
Informally speaking, since the system is half-filled (i.e., having $ N/2 $ particles), we expect $ \eta(t,x) $ and $ \eta(t,x+1) $ to self-average (in $ t $) to $ \frac12 $, and expect $ \eta(t,x)\eta(t,x+1) $ to self-average to $ \frac14 $. With $ r,\ell\to\frac12 $ and $ \mathbb{a} t(x) \to 1 $, we expect $ d\langle M(t,x), M(t,\widetilde{x}) \rangle $ to behave like $ N^{-1} \mathbf{1} _\set{x=\widetilde{x}}Z^2(t,x) dt $, and hence $ dM(t,x) $ to behave like $ \xi\mathcal{Z} $, as $ N\to\infty $. Equation~\eqref{eq:Lang} gives the microscopic equation in differential form. For subsequent analysis, it is more convenient to work with the integrated equation. Consider the semigroup $ \mathbb{Q} (t):=e^{t \mathbb{H} } $, which is well-defined and has a kernel $ \mathbb{Q} (t;x,\widetilde{x}) $ because $ \mathbb{H} $ acts on the \emph{finite}-dimensional space $ \{f:\mathbb{T}\to \mathbf{R}\} $. Using Duhamel's principle in~\eqref{eq:Lang} gives \begin{align} \label{eq:Lang:int} Z(t,x) = \sum_{\widetilde{x}\in\mathbb{T}} \mathbb{Q} (t;x,\widetilde{x})Z_\mathrm{ic}(\widetilde{x}) + \int_0^t \sum_{\widetilde{x}\in\mathbb{T}} \mathbb{Q} (t-s;x,\widetilde{x})dM(s,\widetilde{x}). \end{align} More generally, initiating the process from time $ t_* \geq 0 $ instead of $ 0 $, we have \begin{align} \label{eq:Lang:int:} Z(t,x) = \sum_{\widetilde{x}\in\mathbb{T}} \mathbb{Q} (t-t_*;x,\widetilde{x})Z(t_*,\widetilde{x}) + \int_{t_*}^t \sum_{\widetilde{x}\in\mathbb{T}} \mathbb{Q} (t-s;x,\widetilde{x})dM(s,\widetilde{x}), \qquad t \geq t_*. \end{align} The semigroup $ \mathbb{Q} (t) $ admits an expression by the Feynman--Kac formula as \begin{align} \label{eq:feynmankac} \big( \mathbb{Q} (t)f \big)(x) = \mathbf{E} _x\Big[ e^{-\int_0^t \nu \mathbb{a} (X^ \mathbb{a} (s))ds} f(X^ \mathbb{a} (t)) \Big].
\varepsilonnd{align} Hereafter $ \mathbf{E} _x[\,\Cdot\,] $ (and similarly $ \mathbf{P} _x[\,\Cdot\,] $) denotes expectation with respect to a reference process starting at $ x $. Here the reference process $ X^ \mathbb{a} (t) $ is a continuous-time walk on $ \mathbb{T} $ that attempts jumps from $ X^ \mathbb{a} (t) $ to $ X^ \mathbb{a} (t)\pm 1 $, each at rate $ \sqrt{r\varepsilonll} \, \mathbb{a} t(X^ \mathbb{a} (t)) $. \begin{remark} It is natural to wonder whether it is possible to directly (without appealing to the G\"{a}rtner transform and \ac{SHE}-type equation) see convergence of the height function to the \ac{KPZ}-type equation \varepsilonqref{KPZtypeformal}. While a direct proof is certainly beyond the scope of this paper, we will briefly explain at a very heuristic level how the structure of the \ac{KPZ}-type equation \varepsilonqref{KPZtypeformal} arises from the microscopic evolution of the height function. There are two changes that can occur for the ASEP height function $h(t,x)$. Using the notation $\nabla f(x)=f(x+1)-f(x)$, we have that $h(t,x)$ can increase by $2$ at rate $\varepsilonll \mathbb{a} t(x)$ provided that $\nabla h(t,x)=1$ and $\nabla h(t,x-1)=-1$; and $h(t,x)$ can decrease by $2$ at rate $r \mathbb{a} t(x)$ provided that $\nabla h(t,x)=-1$ and $\nabla h(t,x-1)=1$. Since $\nabla h$ only takes values in $\{-1,1\}$, we can encode the indicator functions as linear functions of $\nabla h$. This gives rise to the following evolution equation: $$ dh(t,x) = \Big( 2\varepsilonll \mathbb{a} t(x) \frac{1+\nabla h(t,x)}{2}\frac{1-\nabla h(t,x-1)}{2} - 2r \mathbb{a} t(x) \frac{1-\nabla h(t,x)}{2}\frac{1+\nabla h(t,x-1)}{2} \Big) dt + d\widetilde{M}(t,x), $$ where $ \widetilde{M}(t,x)$ is an explicit martingale.
Recalling that $\Delta f(x) = \nabla f(x)-\nabla f(x-1)$, we can rewrite the above in the suggestive form $$ dh(t,x) = \Big( \frac{(\varepsilonll +r) \mathbb{a} t(x)}{2} \Delta h(t,x) - \frac{(\varepsilonll-r) \mathbb{a} t(x)}{2} \nabla h(t,x)\nabla h(t,x-1) + \frac{(\varepsilonll-r) \mathbb{a} t(x)}{2} \Big) dt + d\widetilde{M}(t,x). $$ While it is still highly non-trivial to prove convergence (with appropriate renormalization) of these terms to those in \varepsilonqref{KPZtypeformal}, it is now apparent that the new spatial noise $ \mathbb{A} lim'(x)$ comes from the term $\frac{(\varepsilonll-r) \mathbb{a} t(x)}{2}$. \varepsilonnd{remark} \section{Transition Probability of the Inhomogeneous Walk $ X^ \mathbb{a} (t) $} \label{sect:hk} The focus of this section and Section~\ref{sect:Sgsg} is to control the semigroup $ \mathbb{Q} (t) $ and its continuum counterpart $ \mathcal{Q} (t) $. As the first step, in this section we establish estimates on the transition kernel \begin{align} \label{eq:hka} \mathbb{p} a(t;x,\widetildex) := \mathbf{P} _{x}\big[ X^ \mathbb{a} (t)=\widetildex \big] \varepsilonnd{align} of the inhomogeneous walk $ X^ \mathbb{a} (t) $. Note that $ X^ \mathbb{a} (t) $ and $ \mathbb{p} a $ depend on $ N $ through $ \mathbb{a} (x)= \mathbb{a} ^{(N)}(x) $ and through the underlying torus $ \mathbb{T}=\mathbb{T}^{(N)} $, but we omit such dependence in the notation. The kernel $ \mathbb{p} a $ has been studied for heavy-tailed $ \mathbb{a} (x) $ in the context of trapped models. For heavy-tailed $ \mathbb{a} (x) $, \cite[Lemma~3.1]{cerny06} obtained bounds on $ \mathbb{p} a(t;x,x) $, and \cite[Lemma~3.2]{cerny06} and \cite{cabezas15} demonstrated stretched exponential tails in large deviations of $ X^ \mathbb{a} (t) $, which confirmed a prediction \cite{bertin03} based on the trapping nature of Bouchaud's walk. Estimates on the analogous continuum kernel (i.e., for the FIN diffusion) have been obtained in~\cite{croydon19}.
As mentioned previously, here we consider bounded and vanishing $ \mathbb{a} (x) $, which is technically much simpler than heavy-tailed $ \mathbb{a} (x) $. The bounds mentioned above essentially imply Proposition \ref{prop:hk}(a). Due to the technical nature of how $ \mathbb{p} a $ enters our subsequent analysis, we need detailed bounds. Specifically, we will derive in Proposition~\ref{prop:hk} bounds on $ \mathbb{p} a(t,\Cdot) $, its H\"{o}lder continuity, its gradients, and its difference from the homogeneous walk kernel. Put in a broader context, the bounds we seek to obtain on $ \mathbb{p} a(t,\Cdot) $ and its H\"{o}lder continuity go under the name of Nash--Aronson bounds \cite{nash58,aronson67} in elliptic PDEs (we thank one of the anonymous referees for pointing us to this literature). Bounds of this type have since been pursued and generalized in various contexts such as Riemannian manifolds, metric measure spaces, and fractals. We point to \cite{grigor92,saloffcoste02} and the references therein. On Riemannian manifolds, bounds on the gradient of heat kernels have been derived in, for example, \cite{cheng81,li86}. Modern works in probability have been investigating the analogous heat kernel in discrete settings. For the random conductance model, heat kernel estimates have been derived at various levels of generality: for conductances bounded below in \cite{barlow10}, and for general conductances with certain integrability assumptions in \cite{folz11,andres15,andres19}. It was shown in \cite{berger08,biskup11} that anomalies in heat-kernel decay may occur without integrability assumptions. The works \cite{deuschel19,deuschel19a} consider layered random conductance models and establish kernel estimates. For an overview on the random conductance model we point to~\cite{biskup11}. We also point out that a gradient estimate on the Green's function of percolation clusters has been obtained in~\cite[Theorem~6]{benjamini15}.
The starting point of our analysis is the backward Kolmogorov equation \begin{align} \label{eq:bkol:} \mathbb{T}al_t \mathbb{p} a(t;x,\widetildex) = \sqrt{r\varepsilonll} \, \mathbb{a} t(x) \Delta_{x} \mathbb{p} a(t;x,\widetildex), \qquad \mathbb{p} a(0;x,\widetildex) = \mathbf{1} _\set{\widetildex}(x), \varepsilonnd{align} where $ \mathbf{1} _A(\Cdot) $ denotes the indicator function of a given set $ A $. Under the scaling~\varepsilonqref{eq:was}, we have $ \sqrt{r\varepsilonll}\to\frac12 $ as $ N\to\infty $. Indeed, the coefficient $ \sqrt{r\varepsilonll} $ can be scaled to $ \frac12 $ by a change-of-variable $ t\mapsto 2\sqrt{\varepsilonll r}t $, so without loss of generality, we alter the coefficient $ \sqrt{r\varepsilonll} $ in~\varepsilonqref{eq:bkol:} and consider \begin{align} \tag{\ref*{eq:bkol:}'} \label{eq:bkol} \mathbb{T}al_t \mathbb{p} a(t;x,\widetildex) = \tfrac12 \mathbb{a} t(x) \Delta_{x} \mathbb{p} a(t;x,\widetildex), \qquad \mathbb{p} a(0;x,\widetildex) = \mathbf{1} _\set{\widetildex}(x). \varepsilonnd{align} As announced previously, we use $ c(u,v,\ldots)\in (0,\infty) $ to denote a generic, positive, finite, deterministic constant, that may change from line to line (or even within a line), but depends only on the designated variables $ u,v,\ldots $. Recall that $ \mathbb{a} t(x)=1+ \mathbb{a} (x) $. Our strategy of analyzing $ \mathbb{p} a $ is perturbative. We solve~\varepsilonqref{eq:bkol} iteratively, viewing $ \mathbb{a} (x) $ as a perturbation. Such an iteration scheme begins with the unperturbed equation \begin{align} \label{eq:lhe} \mathbb{T}al_t \mathbb{p} (t;x,\widetildex) = \tfrac12 \Delta_{x} \mathbb{p} (t;x,\widetildex), \qquad \mathbb{p} (0;x,\widetildex) = \mathbf{1} _\set{\widetildex}(x), \varepsilonnd{align} which is solved by the transition probability $ \mathbb{p} (t;x,\widetildex) = \mathbf{P} _{x}[ X(t)=\widetildex ] $ of the continuous time symmetric simple random walk $ X(t) $ on $ \mathbb{T} $. 
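For a concrete (toy) illustration of~\varepsilonqref{eq:bkol}, one can solve the equation on a small torus by matrix exponentiation of the generator $ \tfrac12 \mathbb{a} t(x)\Delta_x $ and check that $ \mathbb{p} a(t;x,\Cdot) $ is a probability distribution for each $ x $. The perturbation profile below is a hypothetical choice, used only for illustration:

```python
import numpy as np
from scipy.linalg import expm  # matrix exponential

N = 8
a = 0.1 * np.cos(2 * np.pi * np.arange(N) / N)  # hypothetical small perturbation a(x)
ahat = 1.0 + a                                   # \hat{a}(x) = 1 + a(x)

# Generator of (1/2) \hat{a}(x) \Delta_x on the torus T = Z/NZ (acting on the x variable):
# jump from x to each neighbor at rate (1/2) \hat{a}(x)
G = np.zeros((N, N))
for x in range(N):
    G[x, (x - 1) % N] += 0.5 * ahat[x]
    G[x, (x + 1) % N] += 0.5 * ahat[x]
    G[x, x] -= ahat[x]

pa = expm(2.0 * G)  # pa[x, y] = p^a(t=2; x, y), the solution of the backward equation
print(np.allclose(pa.sum(axis=1), 1.0), (pa > 0).all())  # rows are probability vectors
```

Since $ G $ has nonnegative off-diagonal entries and zero row sums, each row of $ e^{tG} $ is a probability vector, matching the interpretation of $ \mathbb{p} a $ as a transition kernel.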
Here, we record some useful bounds on $ \mathbb{p} $. Let $ \nabla f(x) := f(x+1)-f(x) $ denote the forward discrete gradient. When needed we write $ \nabla_x $ or $ \Delta_x $ to highlight which variable the operator acts on. Given any $ u\in(0,1] $ and $ T<\infty $, \begin{subequations} \label{eq:hk} \begin{align} \label{eq:hk:sup} | \mathbb{p} (t;x,\widetildex)| &\leq \frac{c(T)}{\sqrt{t+1}}, \\ \label{eq:hkgd} | \mathbb{p} (t;x,\widetildex)- \mathbb{p} (t;x',\widetildex)| & \leq c(u,T) \frac{ \mathrm{dist} _{\mathbb{T}}(x,x')^{u}}{(t+1)^{(1+u)/2}}, \\ \label{eq:hkgd:sum} \sum_{\widetildex\in\mathbb{T}} | \mathbb{p} (t;x,\widetildex)- \mathbb{p} (t;x',\widetildex)| & \leq c(u,T) \frac{ \mathrm{dist} _{\mathbb{T}}(x,x')^{u}}{(t+1)^{u/2}}, \\ \label{eq:hk:lap:sum} \sum_{\widetildex\in\mathbb{T}} |\Delta_{x} \mathbb{p} (t;x,\widetildex)| &\leq \frac{c(T)}{t+1}, \\ \label{eq:hk:lap:sum:} \sum_{x\in\mathbb{T}} |\Delta_{x} \mathbb{p} (t;x,\widetildex)| &\leq \frac{c(T)}{t+1}, \\ \label{eq:hk:hold:sup} | \mathbb{p} (t;x,\widetildex)| \mathrm{dist} _{\mathbb{T}}(x,\widetildex)^{u} &\leq c(T)(t+1)^{-(1-u)/2}, \\ \label{eq:hk:hold} \sum_{\widetildex\in\mathbb{T}}| \mathbb{p} (t;x,\widetildex)| \mathrm{dist} _{\mathbb{T}}(x,\widetildex)^{u} &\leq c(T)(t+1)^{u/2}, \\ \label{eq:hk:hold:} \sum_{\widetildex\in\mathbb{T}}|\nabla_x \mathbb{p} (t;x,\widetildex)| \mathrm{dist} _{\mathbb{T}}(x,\widetildex)^{u} & \leq \frac{c(u,T)}{(1+t)^{(1-u)/2}}, \\ \label{eq:hk:hold::} \sum_{\widetildex\in\mathbb{T}}|\nabla_{\widetildex} \mathbb{p} (t;x,\widetildex)| \mathrm{dist} _{\mathbb{T}}(x,\widetildex)^{u} & \leq \frac{c(u,T)}{(1+t)^{(1-u)/2}}, \varepsilonnd{align} \varepsilonnd{subequations} for all $ x,x',\widetildex\in\mathbb{T} $ and $ t\leq N^2T $. These bounds~\varepsilonqref{eq:hk:sup}--\varepsilonqref{eq:hk:hold::} follow directly from known results on the analogous kernel on $ \mathbf{Z} $.
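The $ (t+1)^{-1/2} $ decay in~\varepsilonqref{eq:hk:sup} is easy to observe numerically: quadrupling $ t $ should roughly halve $ \max_{\widetildex} \mathbb{p} (t;x,\widetildex) $, provided $ t\ll N^2 $. A minimal sketch (the torus size and times are ad hoc choices):

```python
import numpy as np
from scipy.linalg import expm

N = 200  # torus size, chosen so that the times below satisfy t << N^2
G = np.zeros((N, N))  # generator of (1/2)\Delta: jump to each neighbor at rate 1/2
for x in range(N):
    G[x, (x - 1) % N] += 0.5
    G[x, (x + 1) % N] += 0.5
    G[x, x] -= 1.0

p_t  = expm(100.0 * G)[0]  # p(100; 0, .)
p_4t = expm(400.0 * G)[0]  # p(400; 0, .)
ratio = p_t.max() / p_4t.max()
print(round(ratio, 2))  # close to 2, matching the (t+1)^{-1/2} decay
```

The local central limit theorem gives $ \mathbb{p} (t;0,0)\approx (2\pi t)^{-1/2} $ in this regime, so the ratio is $ \sqrt{400/100}=2 $ up to $ O(1/t) $ corrections.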
Indeed, with $ \mathbb{p} ^\mathbf{Z}(t;x-\widetildex):= \mathbf{P} _{x}[ X^\mathbf{Z}(t)=\widetildex ] $ denoting the transition kernel of continuous time symmetric simple random walk $ X^\mathbf{Z}(t) $ on the full-line $ \mathbf{Z} $, we have \begin{align} \label{eq:hk:hkZ} \mathbb{p} (t;x,\widetildex) = \sum_{i\in\mathbf{Z}} \mathbb{p} ^\mathbf{Z}(t;x-\widetildex+iN), \qquad x,\widetildex \in \{0,\ldots,N-1\}. \varepsilonnd{align} The full-line kernel $ \mathbb{p} ^\mathbf{Z} $ can be analyzed by standard Fourier analysis, as in, e.g., \cite[Equations~(A.11)--(A.14)]{dembo16}. Relating these known bounds on $ \mathbb{p} ^\mathbf{Z} $ to $ \mathbb{p} $ gives~\varepsilonqref{eq:hk:sup}--\varepsilonqref{eq:hk:hold::}. We now study perturbations around the solution to~\varepsilonqref{eq:lhe}. Let $ \Gamma(v) $ denote the Gamma function, and let \begin{align} \label{eq:Sigman} \Sigma_n(t) := \big\{(s_0,\ldots,s_n)\in(0,\infty)^{n+1}: s_0+\ldots+s_n=t\big\}. \varepsilonnd{align} In subsequent analysis, we will make frequent use of the Dirichlet formula \begin{align} \label{eq:dirichlet} \int_{\Sigma_n(t)} \prod_{i=0}^n s_i^{v_i-1} d^n\vec{s} = t^{(v_0+\ldots+v_n)-1} \frac{\prod_{i=0}^n\Gamma(v_i)}{\Gamma(v_0+\ldots+v_n)}, \qquad v_0,\ldots,v_n >0. \varepsilonnd{align} Note that the constraint in~\varepsilonqref{eq:Sigman} removes one dimension from the $ (n+1) $-dimensional variable $ (s_0,\ldots,s_n) $. In particular, the integration in~\varepsilonqref{eq:dirichlet} is $ n $-dimensional, and we adopt the notation \begin{align} \label{eq:ds} d^n\vec{s} = (ds_1\cdots ds_n) = (ds_0ds_2\cdots ds_n) = \cdots = \prod_{i\in\{0,\ldots,n\}\setminus\set{i_0}} ds_i, \qquad i_0\in\{0,\ldots,n\}. \varepsilonnd{align} In the following we view $ \mathbb{p} a $ as a perturbation of $ \mathbb{p} $, and set \begin{align} \label{eq:hkr} \mathbb{p} r(t;x,\widetildex) := \mathbb{p} a(t;x,\widetildex)- \mathbb{p} (t;x,\widetildex).
\varepsilonnd{align} \begin{lemma} \label{lem:hk} Given any $ u,v\in(0,1] $ and $ T<\infty $, \begin{enumerate}[label=(\alph*),leftmargin=7ex] \item \label{lem:hka:sup} \ $ \displaystyle | \mathbb{p} r(t;x,\widetildex)| \leq \frac{1}{\sqrt{t+1}} \sum_{n=1}^{\infty} \frac{(c(v,T)N^v\norm{ \mathbb{a} }_{L^\infty})^n}{\Gamma(\frac{nv+1}2)}, $ \item \label{lem:hk:sum} \ $ \displaystyle \sum_{\widetildex\in\mathbb{T}} | \mathbb{p} r(t;x,\widetildex)| \leq \sum_{n=1}^{\infty} \Big(c(T)\norm{ \mathbb{a} }_{L^\infty(\mathbb{T})}\log(N+1)\Big)^n, $ \item \label{lem:hkagd:sum} \ $ \displaystyle \sum_{\widetildex\in\mathbb{T}}| \mathbb{p} r(t;x,\widetildex)- \mathbb{p} r(t;x',\widetildex)| \leq \frac{ \mathrm{dist} _{\mathbb{T}}(x,x')^u}{(t+1)^{u/2}} \sum_{n=1}^{\infty} \frac{(c(u,v,T)N^v\norm{ \mathbb{a} }_{L^\infty})^n}{\Gamma(\frac{2-u+nv}2)}, $ \item \label{lem:hkagd:sup} \ $ \displaystyle | \mathbb{p} r(t;x,\widetildex)- \mathbb{p} r(t;x',\widetildex)| \leq \frac{ \mathrm{dist} _{\mathbb{T}}(x,x')^u}{(t+1)^{(1+u)/2}} \sum_{n=1}^{\infty} \frac{(c(u,v,T)N^v\norm{ \mathbb{a} }_{L^\infty})^n}{\Gamma(\frac{1-u+nv}2)}, $ \varepsilonnd{enumerate} for all $ x,x',\widetildex\in\mathbb{T} $, $ t\in[0,N^{2}T] $. \varepsilonnd{lemma} \begin{proof} The starting point of the proof is the backward Kolmogorov equation~\varepsilonqref{eq:bkol}. We split the inhomogeneous Laplacian $ \tfrac12 \mathbb{a} t(x) \Delta_x $ into $ \frac12 \Delta_x + \frac12 \mathbb{a} (x) \Delta_x $, and rewrite \varepsilonqref{eq:bkol} as \begin{align} \label{eq:bkol::} \mathbb{p} a(t;x,\widetildex) = \mathbb{p} (t;x,\widetildex) + \int_{\Sigma_1(t)} \sum_{x_1\in\mathbb{T}} \mathbb{p} (s_0;x,x_1) \frac{ \mathbb{a} (x_1)}{2} \Delta_{x_1} \mathbb{p} a(s_1;x_1,\widetildex) ds_1.
\varepsilonnd{align} Through Picard iteration we obtain \begin{align} \label{eq:hk:chaos} \mathbb{p} r(t;x,\widetildex) = \sum_{n=1}^\infty \mathbb{p} r_n(t;x,\widetildex), \varepsilonnd{align} where, under the convention $ x_0:=x $ and $ x_{n+1}:=\widetildex $, and the notation~\varepsilonqref{eq:Sigman} and~\varepsilonqref{eq:ds}, \begin{align} \label{eq:hk:chaosn} \mathbb{p} r_n(t;x,\widetildex) &:= \int_{\Sigma_n(t)} \ \sum_{x_1,\ldots,x_n\in\mathbb{T}} \mathbb{p} (s_0;x_{0},x_1) \prod_{i=1}^{n} \frac{ \mathbb{a} (x_i)}{2} \Big( \Delta_{x_i} \mathbb{p} (s_i;x_{i},x_{i+1}) \Big) d^n\vec{s}. \varepsilonnd{align} Indeed, the infinite series in~\varepsilonqref{eq:hk:chaos} converges for fixed $ (t,x) $. To see this, in~\varepsilonqref{eq:hk:chaosn}, (crudely) bound \begin{align*} | \mathbb{p} r_n(t;x,\widetildex)| \leq N^n \norm{\tfrac12 \mathbb{a} }^n_{L^\infty(\mathbb{T})} \big(4\norm{ \mathbb{p} }_{L^\infty([0,t]\times\mathbb{T})}\big)^{n+1} \int_{\Sigma_n(t)} \ d^n\vec{s} \leq c(N, \mathbb{a} ,t)^n \tfrac{1}{(n+1)!}. \varepsilonnd{align*} Given the expression~\varepsilonqref{eq:hk:chaos}--\varepsilonqref{eq:hk:chaosn}, we proceed to prove the bounds~\ref{lem:hka:sup}--\ref{lem:hkagd:sup}. \ref{lem:hka:sup} In~\varepsilonqref{eq:hk:chaosn}, use~\varepsilonqref{eq:hk:sup} to bound $ \mathbb{p} (s_0;x_0,x_1) $ by $ \frac{c}{\sqrt{s_0}} $, and then sum over $ x_1,\ldots,x_n $ in order, using~\varepsilonqref{eq:hk:lap:sum:}. This yields \begin{align} \label{eq:hka:sup:chaosn} | \mathbb{p} r_n(t;x,\widetildex)| \leq \big( c \norm{ \mathbb{a} }_{L^\infty(\mathbb{T})} \big)^n \int_{\Sigma_n(t)} \frac{1}{\sqrt{s_0}} \prod_{i=1}^n \frac{ds_i}{s_i+1}. 
\varepsilonnd{align} To bound the last expression, for the given $ v\in(0,1) $, we write $ \frac{1}{s_i+1} \leq c(v)s_i^{v/2-1} $, and apply the Dirichlet formula~\varepsilonqref{eq:dirichlet} with $ (v_0,\ldots,v_n)=(1/2,v/2,\ldots,v/2) $ to get \begin{align} \label{eq:hka:sup:chaosn:} | \mathbb{p} r_n(t;x,\widetildex)| \leq \big( c(v) \norm{ \mathbb{a} }_{L^\infty(\mathbb{T})} \big)^n \int_{\Sigma_n(t)} s_{0}^{-\frac12}\prod_{i=1}^n s_i^{v/2-1} ds_i = \frac{1}{\sqrt{t}} \frac{(t^{v/2}c(v)\norm{ \mathbb{a} }_{L^\infty(\mathbb{T})})^n}{\Gamma(\frac{nv+1}{2})}. \varepsilonnd{align} Referring back to~\varepsilonqref{eq:hka:sup:chaosn}, we see that $ | \mathbb{p} r_n(t;x,\widetildex)| $ is bounded by $ ( c \norm{ \mathbb{a} }_{L^\infty(\mathbb{T})})^n $ when $ t\leq 1 $, uniformly over $ x,\widetildex\in\mathbb{T} $. This being the case, by making the constant $ c(v) $ larger in~\varepsilonqref{eq:hka:sup:chaosn:}, we replace the factor $ \frac{1}{\sqrt{t}} $ with $ \frac{1}{\sqrt{t+1}} $. Since $ t^{v/2} \leq (TN^2)^{v/2} = c(v,T)N^{v} $, summing over $ n\geq 1 $ yields the desired bound. \ref{lem:hk:sum} Given the expansion~\varepsilonqref{eq:hk:chaos}, our goal is to bound $ \sum_{\widetildex\in\mathbb{T}}| \mathbb{p} r_n(t;x,\widetildex)| $, for $ n=1,2,\ldots $. To this end, sum both sides of~\varepsilonqref{eq:hk:chaosn} over $ \widetildex\in\mathbb{T} $. Under the convention $ x_0:=x $ and the relabeling $ x_{n+1}=\widetildex $, we bound \begin{align} \label{eq:hkn:sum} \sum_{\widetildex\in\mathbb{T}} | \mathbb{p} r_n(t;x,\widetildex)| \leq \norm{ \mathbb{a} }^n_{L^\infty(\mathbb{T})} \int_{\Sigma_n(t)} \sum_{x_1,\ldots,x_{n+1}\in\mathbb{T}} \mathbb{p} (s_0;x_0,x_1) \prod_{i=1}^n \big| \Delta_{x_i} \mathbb{p} (s_i;x_{i},x_{i+1}) \big| d^n \vec{s}. 
\varepsilonnd{align} In~\varepsilonqref{eq:hkn:sum}, sum over $ x_{n+1},\ldots,x_2,x_1 $ in order, using the bound~\varepsilonqref{eq:hk:lap:sum} for the sum over $ x_{n+1},\ldots,x_2 $ and using $ \sum_{x_1} \mathbb{p} (s_0;x_0,x_1)=1 $ for the sum over $ x_1 $. We then obtain \begin{align*} \sum_{\widetildex\in\mathbb{T}} | \mathbb{p} r_n(t;x,\widetildex)| \leq \big( c\norm{ \mathbb{a} }_{L^\infty(\mathbb{T})} \big)^n \int_{\Sigma_n(t)} \prod_{i=1}^n \frac{ds_i}{s_i+1}. \varepsilonnd{align*} To bound the last integral, performing the change of variables $ s_i = ts'_i $, we see that \begin{align*} \sum_{\widetildex\in\mathbb{T}} | \mathbb{p} r_n(t;x,\widetildex)| &\leq \big( c\norm{ \mathbb{a} }_{L^\infty(\mathbb{T})} \big)^n\int_{\Sigma_n(1)} \prod_{i=1}^n \frac{ds_i}{s_i+t^{-1}} \leq \big( c\norm{ \mathbb{a} }_{L^\infty(\mathbb{T})} \big)^n \int_{\Sigma_n(1)} e^{1-s_1-\ldots-s_n} \prod_{i=1}^n \frac{ds_i}{s_i+t^{-1}} \\ &\leq e \big( c\norm{ \mathbb{a} }_{L^\infty(\mathbb{T})} \big)^n \prod_{i=1}^n\int_0^\infty \frac{e^{-s_i}}{s_i+t^{-1}} ds_i \leq \big( c\norm{ \mathbb{a} }_{L^\infty(\mathbb{T})} \big)^n(1+(\log t)_+)^n. \varepsilonnd{align*} With $ t \leq N^2T $, summing both sides over $ n\geq 1 $ gives the desired result. \ref{lem:hkagd:sum} Taking the difference of~\varepsilonqref{eq:hk:chaosn} evaluated at $ x $ and at $ x' $, under the relabeling $ x_{n+1}=\widetildex $, we have \begin{align*} \sum_{\widetildex\in\mathbb{T}} | \mathbb{p} r_n(t;x,\widetildex)- \mathbb{p} r_n(t;x',\widetildex)| \leq \norm{ \mathbb{a} }^n_{L^\infty(\mathbb{T})} \int_{\Sigma_n(t)} \sum_{x_1,\ldots,x_{n+1}\in\mathbb{T}} | \mathbb{p} (s_0;x,x_1)- \mathbb{p} (s_0;x',x_1)| \prod_{i=1}^n \big| \Delta_{x_i} \mathbb{p} (s_i;x_{i},x_{i+1}) \big| d^n \vec{s}. \varepsilonnd{align*} Sum over $ x_{n+1},\ldots,x_2,x_1 $ in order, using the bound~\varepsilonqref{eq:hk:lap:sum} for the sum over $ x_{n+1},\ldots,x_2 $, and using the bound~\varepsilonqref{eq:hkgd:sum} for the sum over $ x_1 $.
From this we obtain \begin{align*} \sum_{\widetildex\in\mathbb{T}} | \mathbb{p} r_n(t;x,\widetildex)- \mathbb{p} r_n(t;x',\widetildex)| \leq \big( c(u)\norm{ \mathbb{a} }_{L^\infty(\mathbb{T})} \big)^n \int_{\Sigma_n(t)} \frac{ \mathrm{dist} _{\mathbb{T}}(x,x')^u}{s^{u/2}_0}\prod_{i=1}^n \frac{ds_i}{s_i+1}. \varepsilonnd{align*} To bound the last integral, for the given $ v\in(0,1) $, we write $ \frac{1}{s_i+1} \leq c(v)s_i^{v/2-1} $, and apply the Dirichlet formula~\varepsilonqref{eq:dirichlet} with $ (v_0,\ldots,v_n)=(1-u/2,v/2,\ldots,v/2) $ to get \begin{align} \label{eq:hkagd:sup:chaosn:} \sum_{\widetildex\in\mathbb{T}} | \mathbb{p} r_n(t;x,\widetildex)- \mathbb{p} r_n(t;x',\widetildex)| &\leq \mathrm{dist} _{\mathbb{T}}(x,x')^u \big( c(u,v)\norm{ \mathbb{a} }_{L^\infty(\mathbb{T})} \big)^n \int_{\Sigma_n(t)} s^{-u/2}_0\prod_{i=1}^n s_i^{v/2-1}ds_i \\ \notag &= \mathrm{dist} _{\mathbb{T}}(x,x')^u \frac{1}{t^{u/2}}\frac{( t^{v/2}c(u,v)\norm{ \mathbb{a} }_{L^\infty(\mathbb{T})})^n}{\Gamma(\frac{2-u+nv}{2})}. \varepsilonnd{align} Referring back to~\varepsilonqref{eq:hka:sup:chaosn}, we see that the l.h.s.\ of~\varepsilonqref{eq:hkagd:sup:chaosn:} is bounded by $ ( c(u) \norm{ \mathbb{a} }_{L^\infty(\mathbb{T})})^n $ when $ t\leq 1 $, uniformly over $ x,x'\in\mathbb{T} $. This being the case, by making the constant $ c(u,v) $ larger in~\varepsilonqref{eq:hkagd:sup:chaosn:}, we may replace the factor $ \frac{1}{t^{u/2}} $ with $ \frac{1}{(t+1)^{u/2}} $. Since $ t^{v/2} \leq (TN^2)^{v/2} = c(v,T)N^{v} $, summing the result over $ n\geq 1 $ gives the desired bound.
\ref{lem:hkagd:sup} Taking the difference of~\varepsilonqref{eq:hk:chaosn} evaluated at $ x $ and at $ x' $, here we have \begin{align*} | \mathbb{p} r_n(t;x,\widetildex)- \mathbb{p} r_n(t;x',\widetildex)| \leq \norm{ \mathbb{a} }^n_{L^\infty(\mathbb{T})} \int_{\Sigma_n(t)} \sum_{x_1,\ldots,x_{n}\in\mathbb{T}} | \mathbb{p} (s_0;x,x_1)- \mathbb{p} (s_0;x',x_1)| \prod_{i=1}^n \big| \Delta_{x_i} \mathbb{p} (s_i;x_{i},x_{i+1}) \big| d^n \vec{s}. \varepsilonnd{align*} Use~\varepsilonqref{eq:hkgd} to bound the expression $ | \mathbb{p} (s_0;x,x_1) - \mathbb{p} (s_0;x',x_1) | $ by $ c(u) \mathrm{dist} _{\mathbb{T}}(x,x')^u (s_0)^{-\frac{1+u}2} $, and then sum over $ x_1,\ldots,x_{n} $ using~\varepsilonqref{eq:hk:lap:sum:}. We then obtain \begin{align*} | \mathbb{p} r_n(t;x,\widetildex)- \mathbb{p} r_n(t;x',\widetildex)| \leq \big( c(u) \norm{ \mathbb{a} }_{L^\infty(\mathbb{T})} \big)^n \int_{\Sigma_{n}(t)} \frac{ \mathrm{dist} _{\mathbb{T}}(x,x')^u}{s_0^{(1+u)/2}} \prod_{i=1}^n \frac{ds_i}{s_i+1}. \varepsilonnd{align*} To bound the last expression, for the given $ v\in(0,1) $, we write $ \frac{1}{s_i+1} \leq c(v)s_i^{v/2-1} $, and apply the Dirichlet formula~\varepsilonqref{eq:dirichlet} with $ (v_0,\ldots,v_n)=((1-u)/2,v/2,\ldots,v/2) $ to get \begin{align} \label{eq:hkagd:sup:chaosn::} | \mathbb{p} r_n(t;x,\widetildex)- \mathbb{p} r_n(t;x',\widetildex)| &\leq \mathrm{dist} _{\mathbb{T}}(x,x')^{u} \big(c(u,v) \norm{ \mathbb{a} }_{L^\infty(\mathbb{T})} \big)^n \int_{\Sigma_n(t)} s_{0}^{-(1+u)/2}\prod_{i=1}^n s_i^{v/2-1} ds_i \\ \notag &= \frac{ \mathrm{dist} _{\mathbb{T}}(x,x')^u}{t^{\frac{1+u}{2}}} \frac{(t^{v/2}c(u,v)\norm{ \mathbb{a} }_{L^\infty(\mathbb{T})})^n}{\Gamma(\frac{1-u+nv}{2})}. \varepsilonnd{align} Referring back to~\varepsilonqref{eq:hka:sup:chaosn}, we see that the l.h.s.\ of~\varepsilonqref{eq:hkagd:sup:chaosn::} is bounded by $ ( c(u) \norm{ \mathbb{a} }_{L^\infty(\mathbb{T})})^n $ when $ t\leq 1 $, uniformly over $ x,x',\widetildex\in\mathbb{T} $.
This being the case, by making the constant $ c(u,v) $ larger in~\varepsilonqref{eq:hkagd:sup:chaosn::}, we may replace the factor $ \frac{1}{t^{(1+u)/2}} $ with $ \frac{1}{(t+1)^{(1+u)/2}} $. Since $ t^{v/2} \leq (TN^2)^{v/2} = c(v,T)N^{v} $, summing the result over $ n\geq 1 $ gives the desired bound. \varepsilonnd{proof} We now combine Lemma~\ref{lem:hk} with the assumed properties of $ \mathbb{a} (x) $ from Assumption~\ref{assu:rt}. To simplify notation, we will say that events $ \{\Omega_{\Lambda,N}\}_{\Lambda,N} $ hold `with probability $ \to_{\Lambda,N} 1 $' if \begin{align} \label{eq:wp1} \lim_{\Lambda\to\infty} \liminf_{N\to\infty} \mathbf{P} \big[ \Omega_{\Lambda,N} \big] = 1. \varepsilonnd{align} \begin{proposition}\label{prop:hk} For given $ T<\infty $, $ u\in(0,1] $ and $ v\in(0, u_{\mathbf{R}t} ) $, the following hold with probability $ \to_{\Lambda,N} 1 $: \begin{enumerate}[label=(\alph*),leftmargin=7ex] \item \label{cor:hk:hkasup} \ $ \displaystyle | \mathbb{p} a(t;x,\widetildex)| \leq \frac{1}{\sqrt{t+1}}\Lambda, \qquad \forall t\in[0,N^2T], \ x,\widetildex\in\mathbb{T}, $ \item \label{cor:hk:hkagdsup} \ $ \displaystyle | \mathbb{p} a(t;x,\widetildex)- \mathbb{p} a(t;x',\widetildex)| \leq \frac{ \mathrm{dist} _{\mathbb{T}}(x,x')^u}{(t+1)^{(1+u)/2}}\Lambda, \qquad \forall t\in[0,N^2T], \ x,x'\in\mathbb{T}, $ \item \label{cor:hk:hkagdsum} \ $ \displaystyle \sum_{\widetildex\in\mathbb{T}}| \mathbb{p} a(t;x,\widetildex)- \mathbb{p} a(t;x',\widetildex)| \leq \frac{ \mathrm{dist} _{\mathbb{T}}(x,x')^u}{(t+1)^{u/2}}\Lambda, \qquad \forall t\in[0,N^2T], \ x,x'\in\mathbb{T}, $ \item \label{cor:hk:hkahold:sup} \ $ \displaystyle \mathbb{p} a(t;x,\widetildex) \mathrm{dist} _{\mathbb{T}}(x,\widetildex)^v \leq (t+1)^{-(1-v)/2}\Lambda, \qquad \forall t\in[0,N^2T], \ x,\widetildex\in\mathbb{T}, $ \item \label{cor:hk:hkahold} \ $ \displaystyle \sum_{\widetildex\in\mathbb{T}} \mathbb{p} a(t;x,\widetildex) \mathrm{dist} _{\mathbb{T}}(x,\widetildex)^v \leq (t+1)^{v/2}\Lambda, \qquad \forall
t\in[0,N^2T], \ x\in\mathbb{T}, $ \item \label{cor:hk:hkahold:} \ $ \displaystyle \sum_{\widetildex\in\mathbb{T}}|\nabla_x \mathbb{p} a(t;x,\widetildex)|\Big(\frac{ \mathrm{dist} _{\mathbb{T}}(x,\widetildex)}{N}\Big)^v \leq \Lambda, \qquad \forall t\in[0,N^2T], \ x\in\mathbb{T}, $ \item \label{cor:hk:hkahold::} \ $ \displaystyle \sum_{\widetildex\in\mathbb{T}}|\nabla_{\widetildex} \mathbb{p} a(t;x,\widetildex)|\Big(\frac{ \mathrm{dist} _{\mathbb{T}}(x,\widetildex)}{N}\Big)^v \leq \Lambda, \qquad \forall t\in[0,N^2T], \ x\in\mathbb{T}, $ \item \label{cor:hk:hkrsup} \ $ \displaystyle | \mathbb{p} r(t;x,\widetildex)| \leq \frac{N^{-v}}{\sqrt{t+1}}\Lambda, \qquad \forall t\in[0,N^2T], \ x,\widetildex \in\mathbb{T}, $ \item \label{cor:hk:hkrsum} \ $ \displaystyle \sum_{\widetildex\in\mathbb{T}} | \mathbb{p} r(t;x,\widetildex)| \leq N^{-v}\Lambda, \qquad \forall t\in[0,N^2T], \ x\in\mathbb{T}, $ \item \label{cor:hk:hkrgsum} \ $ \displaystyle \sum_{\widetildex\in\mathbb{T}}| \mathbb{p} r(t;x,\widetildex)- \mathbb{p} r(t;x',\widetildex)| \leq \frac{( \mathrm{dist} _{\mathbb{T}}(x,x'))^{u}}{(t+1)^{u/2}} N^{-v}\Lambda, \qquad \forall t\in[0,N^2T], \ x,x'\in\mathbb{T}, $ \item \label{cor:hk:hkrgsup} \ $ \displaystyle | \mathbb{p} r(t;x,\widetildex)- \mathbb{p} r(t;x',\widetildex)| \leq \frac{( \mathrm{dist} _{\mathbb{T}}(x,x'))^{u}}{(t+1)^{(u+1)/2}} N^{-v}\Lambda, \qquad \forall t\in[0,N^2T], \ x,x',\widetildex\in\mathbb{T}. $ \varepsilonnd{enumerate} \varepsilonnd{proposition} \begin{proof} Recall the definition of $ \mathbb{A} (x,x') $ from~\varepsilonqref{eq:Rt} and recall the seminorm $ \hold{\,\Cdot\,}_{ u_{\mathbf{R}t} ,N} $ from~\varepsilonqref{eq:hold}. With $ \mathbb{a} (x)= \mathbb{A} (0,x)- \mathbb{A} (0,x-1) $, we have \begin{align} \label{eq:rtbd} | \mathbb{a} (x)| \leq N^{- u_{\mathbf{R}t} } \hold{ \mathbb{A} _N }_{ u_{\mathbf{R}t} ,N}. 
\varepsilonnd{align} In particular, under Assumption~\ref{assu:rt}\ref{assu:rt:holder}, $ \norm{ \mathbb{a} }_{L^\infty(\mathbb{T})} \leq N^{- u_{\mathbf{R}t} }\Lambda $ with probability $ \to_{\Lambda,N} 1 $. This being the case, applying Lemma~\ref{lem:hk} with the parameter $ v $ there taken to be $ v'= u_{\mathbf{R}t} -v $, and summing over $ n\geq 1 $ therein, we see that the events \ref{cor:hk:hkrsup}--\ref{cor:hk:hkrgsup} hold with probability $ \to_{\Lambda,N} 1 $. With $ \mathbb{p} a= \mathbb{p} + \mathbb{p} r $, \ref{cor:hk:hkasup}--\ref{cor:hk:hkagdsum} follow by combining the bounds (which we just showed hold with probability $ \to_{\Lambda,N} 1 $) in \ref{cor:hk:hkrsup}, \ref{cor:hk:hkrgsum} and \ref{cor:hk:hkrgsup} with those in \varepsilonqref{eq:hk:sup}--\varepsilonqref{eq:hkgd:sum}. As for~\ref{cor:hk:hkahold:sup}--\ref{cor:hk:hkahold::}, with $ \mathbb{p} a= \mathbb{p} + \mathbb{p} r $ and $ \frac{ \mathrm{dist} _\mathbb{T}(x,\widetildex)}{N} \leq 1 $, we write \begin{align} \label{eqpf:hkhold:sup} \mathbb{p} a(t;x,\widetildex) \mathrm{dist} _\mathbb{T}(x,\widetildex)^v &\leq \mathbb{p} (t;x,\widetildex) \mathrm{dist} _\mathbb{T}(x,\widetildex)^v + | \mathbb{p} r(t;x,\widetildex)| \, N^v, \\ \label{eqpf:hkhold} \sum_{\widetildex\in\mathbb{T}} \mathbb{p} a(t;x,\widetildex) \mathrm{dist} _\mathbb{T}(x,\widetildex)^v &\leq \sum_{\widetildex\in\mathbb{T}} \mathbb{p} (t;x,\widetildex) \mathrm{dist} _\mathbb{T}(x,\widetildex)^v + \sum_{\widetildex\in\mathbb{T}}| \mathbb{p} r(t;x,\widetildex)| \, N^v, \varepsilonnd{align} and, for $ y=x,\widetildex $, \begin{align} \notag \sum_{\widetildex\in\mathbb{T}}|\nabla_{y} \mathbb{p} a(t;x,\widetildex)|\Big(\frac{ \mathrm{dist} _\mathbb{T}(x,\widetildex)}{N}\Big)^{v} &\leq \sum_{\widetildex\in\mathbb{T}}|\nabla_{y} \mathbb{p} (t;x,\widetildex)|\Big(\frac{ \mathrm{dist} _\mathbb{T}(x,\widetildex)}{N}\Big)^{v} + \sum_{\widetildex\in\mathbb{T}}|\nabla_{y} \mathbb{p} r(t;x,\widetildex)| \\ \label{eqpf:hkhold:} &\leq \sum_{\widetildex\in\mathbb{T}}|\nabla_{y} \mathbb{p} (t;x,\widetildex)|\Big(\frac{ \mathrm{dist} _\mathbb{T}(x,\widetildex)}{N}\Big)^{v} + \sum_{\widetildex\in\mathbb{T}}2| \mathbb{p} r(t;x,\widetildex)|. \varepsilonnd{align} Applying~\varepsilonqref{eq:hk:hold:sup}--\varepsilonqref{eq:hk:hold::} as well as \ref{cor:hk:hkrsup} and \ref{cor:hk:hkrsum} (which we have already established in this proposition), we can bound the corresponding terms in~\varepsilonqref{eqpf:hkhold:sup}--\varepsilonqref{eqpf:hkhold:}. From this and by using $ (t+1)^{-1/2} \leq (t+1)^{-(1-v)/2} $ we obtain the desired results for~\ref{cor:hk:hkahold:sup}--\ref{cor:hk:hkahold::}. \varepsilonnd{proof} \section{The Semigroups $ \mathcal{Q} (t) $ and $ \mathbb{Q} (t) $} \label{sect:Sgsg} Our goal in this section is to establish the relevant properties of the semigroups $ \mathcal{Q} (t)=e^{t \mathcal{H} } $ and $ \mathbb{Q} (t)=e^{t \mathbb{H} } $. In particular, in Section~\ref{sect:Sg}, for any given potential $ \mathbb{A} lim' $, we will \varepsilonmph{construct} $ \mathcal{Q} (t)=e^{t \mathcal{H} } $ and establish bounds using integration-by-parts techniques. Then, in Section~\ref{sect:sg}, we generalize these techniques to the microscopic setting to establish bounds on $ \mathbb{Q} (t) $. As mentioned in the introduction, we will utilize an expansion of the Feynman--Kac formula, which is similar to the expansion considered in \cite[Section 14, Chapter V]{simon79} for singular but function-valued potentials. With the potential $ \mathbb{A} lim'(x) $ being non-function-valued, we need to extract the smoothing effect of the heat semigroup to compensate for the roughness of $ \mathbb{A} lim'(x) $. This is done by integration by parts in Lemmas~\ref{lem:Ips} and \ref{lem:ips}. \subsection{Macroscopic} \label{sect:Sg} Recall that $ \mathcal{H} =\frac12\mathbb{T}al_{xx}+ \mathbb{A} lim'(x) $.
As previously explained in Remark~\ref{rmk:Rtfixed}, for the analysis within this subsection (which pertains to the \varepsilonmph{limiting} \ac{SPDE}), the randomness of $ \mathbb{A} lim $ plays no role, and we will assume without loss of generality that $ \mathbb{A} lim $ is a deterministic function in $ C^{ u_{\mathbf{R}t} }[0,1] $. We begin by recalling the classical construction of $ \mathcal{H} $ from~\cite{fukushima77}. Note that, even though~\cite{fukushima77} treats $ \mathcal{H} $ on the closed interval $ [0,1] $ with Dirichlet boundary condition, the (relevant) argument carries through for $ \mathcal{T} $ as well. Write $ H^1(\mathcal{T}):=\{f:\mathcal{T}\to\mathbf{R}: f,f'\in L^2(\mathcal{T})\} $ for the Sobolev space, equipped with the norm $ \norm{f}^2_{H^1(\mathcal{T})} := \norm{f}^2_{L^2(\mathcal{T})} + \norm{f'}^2_{L^2(\mathcal{T})} $. For $ f,g\in L^2(\mathcal{T}) $, write $ \langle f,g\rangle = \langle f,g\rangle_{L^2(\mathcal{T})}:= \int_{\mathcal{T}} fg dx $ for the inner product in $ L^2(\mathcal{T}) $, and similarly $ \langle f,g\rangle_{H^1(\mathcal{T})} := \int_{\mathcal{T}} (fg + f'g') dx $. Consider the symmetric quadratic form \begin{align*} F_ \mathbb{A} lim: H^1(\mathcal{T})\times H^1(\mathcal{T})\to\mathbf{R}, \qquad F_ \mathbb{A} lim(f,g): = \tfrac12 \langle f',g' \rangle - f(1)g(1) \mathbb{A} lim(1) + \int_{0}^1 (f'g+fg')(x) \mathbb{A} lim(x) dx. \varepsilonnd{align*} If $ \mathbb{A} lim $ were smooth, integration by parts would give $ F_ \mathbb{A} lim(f,g)= -\langle f, \mathcal{H} g\rangle $. We now appeal to \cite[Definition 12.14]{grubb08} to define $ \mathcal{H} $ to be the operator associated to $ F_ \mathbb{A} lim $. Let $ D( \mathcal{H} ) $ denote the domain of $ \mathcal{H} $. From the preceding definition we have that \begin{align} \label{eq:ham:form} D( \mathcal{H} ) \subset H^1(\mathcal{T}); \qquad\quad - \langle \mathcal{H} f, g \rangle = F_ \mathbb{A} lim(f,g), \qquad \forall (f,g)\in D(F_ \mathbb{A} lim)\times H^1(\mathcal{T}).
\qquad \varepsilonnd{align} On $ \mathcal{T} $ we have the following elementary bound \begin{align} \label{fn:LinfinH1} \norm{\,\Cdot\,}_{L^\infty(\mathcal{T})} \leq \sqrt{2}\norm{\,\Cdot\,}_{H^1(\mathcal{T})} \varepsilonnd{align} To see this, for $ x,y\in\mathcal{T} $, write $ |f(x)| \leq |f(y)| + \int_{[x,y]} |f'(\widetildey)| d\widetildey $, and integrate in $ y\in\mathcal{T} $ to get $ |f(x)| \leq \norm{f}_{L^1(\mathcal{T})} + \norm{f'}_{L^1(\mathcal{T})} \leq \sqrt{2}\norm{f}_{H^1(\mathcal{T})} $. Now, with $ \mathbb{A} lim $ being bounded, using \varepsilonqref{fn:LinfinH1} it is readily checked (see~\cite[Lemma~1]{fukushima77}) that \begin{align*} F_ \mathbb{A} lim(f,g) + c \langle f, g\rangle_{L^2(\mathcal{T})} \geq \tfrac{1}{c} \langle f, g\rangle_{H^1(\mathcal{T})}, \qquad\quad F_ \mathbb{A} lim(f,g) \leq c \langle f, g\rangle_{H^1(\mathcal{T})}, \qquad f,g\in H^1(\mathcal{T}), \varepsilonnd{align*} for some constant $ c=c( \mathbb{A} lim) $ depending only on $ \mathbb{A} lim $. Given these properties, and that $ F_ \mathbb{A} lim $ is symmetric, it then follows that (see \cite[Theorem 12.18, Corollary 12.19]{grubb08}) $ \mathcal{H} $ is a self-adjoint, closed operator, with $ D( \mathcal{H} ) $ being dense in $ L^2(\mathcal{T}) $. Having constructed $ \mathcal{H} $, we now turn to the semigroup $ \mathcal{Q} (t)=e^{t \mathcal{H} } $. \varepsilonmph{Heuristically}, the semigroup should be given by the Feynman--Kac formula \begin{align*} \big( \mathcal{Q} (t) f \big)(x) ``=" \mathbf{E} _x\Big[ e^{\int_0^t \mathbb{A} lim'(B(s))ds} f(B(t)) \Big], \varepsilonnd{align*} where $ B $ denotes a Brownian motion on $ \mathcal{T} $ starting from $ x $. The issue with this formula is that, under our assumptions, $ \mathbb{A} lim \in C^{ u_{\mathbf{R}t} }[0,1] $ is not necessarily differentiable, namely the potential $ \mathbb{A} lim' $ may not be function-valued. 
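To make the relation between $ F_ \mathbb{A} lim $ and $ \mathcal{H} $ concrete, here is a small numerical sketch: for a \varepsilonmph{smooth}, hypothetical choice of $ \mathbb{A} lim $ with $ \mathbb{A} lim(0)= \mathbb{A} lim(1)=0 $ and periodic test functions $ f,g $, quadrature values of $ F_ \mathbb{A} lim(f,g) $ and of $ -\langle f, \tfrac12 g'' + \mathbb{A} lim' g\rangle $ agree, as the integration-by-parts identity predicts. All concrete functions below are illustrative choices of ours, not the (rough) $ \mathbb{A} lim $ of the paper:

```python
import numpy as np

# Hypothetical smooth choices on the unit torus (for illustration only):
#   R(x) = (1 - cos(2 pi x)) / (2 pi), so R(0) = R(1) = 0 and R'(x) = sin(2 pi x);
#   f(x) = sin(2 pi x), g(x) = cos(4 pi x), both 1-periodic.
x = np.linspace(0.0, 1.0, 200001)
R   = (1.0 - np.cos(2 * np.pi * x)) / (2 * np.pi)
Rp  = np.sin(2 * np.pi * x)
f   = np.sin(2 * np.pi * x);  fp  = 2 * np.pi * np.cos(2 * np.pi * x)
g   = np.cos(4 * np.pi * x);  gp  = -4 * np.pi * np.sin(4 * np.pi * x)
gpp = -(4 * np.pi) ** 2 * np.cos(4 * np.pi * x)

def I(h):
    # trapezoid-rule quadrature on [0, 1]
    return float(np.sum((h[:-1] + h[1:]) * np.diff(x) / 2.0))

# F_R(f,g) = (1/2)<f',g'> - f(1)g(1)R(1) + int (f'g + f g') R dx
F = 0.5 * I(fp * gp) - f[-1] * g[-1] * R[-1] + I((fp * g + f * gp) * R)
# for smooth R, integration by parts gives F_R(f,g) = -<f, (1/2)g'' + R'g>
rhs = -I(f * (0.5 * gpp + Rp * g))
print(abs(F - rhs) < 1e-6)  # True; both sides equal 1/4 for these test functions
```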
As mentioned previously, for function-valued potentials, such Feynman--Kac formulas have been given rigorous meaning in \cite{mckean77,simon79,simon82}. Continuing, for the moment, with the informal Feynman--Kac formula, we Taylor-expand the exponential function $ \varepsilonxp(\int_0^t \mathbb{A} lim'(B(s))ds) $, and exchange the expectation $ \mathbf{E} _x[\Cdot] $ with the integrals. This yields \begin{align*} ( \mathcal{Q} (t) f)(x) &``=" \mathbf{E} _x\Big[ \sum_{n=0}^\infty \frac{1}{n!}\int_{[0,t]^n} \Big(\prod_{i=1}^n \mathbb{A} lim'(B(t_i))dt_i\Big) f(B(t)) \Big] \\ &= \mathbf{E} _x\Big[ \sum_{n=0}^\infty \int_{0<t_1<\ldots<t_n<t} \Big(\prod_{i=1}^n \mathbb{A} lim'(B(t_i))dt_i\Big) f(B(t)) \Big] ``=" \int_{\mathcal{T}} \mathcal{Q} (t;x,\widetildex)f(\widetildex) d\widetildex, \varepsilonnd{align*} where $ \mathcal{Q} $ is defined as follows. With the notation $ \Sigma_{n}(t) $ from~\varepsilonqref{eq:Sigman}, $ d^n\vec{s} $ from~\varepsilonqref{eq:ds}, the convention $ x_0:=x $, $ \widetildex:=x_{n+1} $, and with \begin{align} \label{eq:HK} \mathcal{P} (t;x,\widetildex) = \sum_{i\in\mathbf{Z}}\frac{1}{\sqrt{2\pi t}} e^{-\frac{|x-\widetildex+i|^2}{2t}}, \qquad x,\widetildex \in[0,1) \varepsilonnd{align} denoting the standard heat kernel on $ \mathcal{T} $, we define \begin{align} \label{eq:Sgker} \mathcal{Q} (t;x,\widetildex) &:= \mathcal{P} (t;x,\widetildex) + \sum_{n=1}^\infty \mathcal{Q} r_n(t;x,\widetildex), \\ \label{eq:Sgr} \mathcal{Q} r_n(t;x,\widetildex) &:= \int_{\Sigma_n(t)} \mathcal{Q} rI_n(\vec{s};x,\widetildex) d^n\vec{s}, \\ \label{eq:SgrI} \mathcal{Q} rI_n(\vec{s};x,\widetildex) &= \mathcal{Q} rI_n(s_0,\ldots,s_n;x,\widetildex) := \int_{\mathcal{T}^n} \prod_{i=0}^n \mathcal{P} (s_i;x_i,x_{i+1})\ \prod_{i=1}^n d \mathbb{A} lim(x_i).
\varepsilonnd{align} Note that, for each fixed $ (s_0,\ldots,s_n)\in \Sigma_n(t) $, the function $ \prod_{i=0}^n \mathcal{P} (s_i;x_i,x_{i+1}) $ is $ C^\infty(\mathcal{T}^{n+2}) $ in $ (x_0,\ldots,x_{n+1}) $, so~\varepsilonqref{eq:SgrI} is actually a well-defined Riemann–Stieltjes integral. Namely, despite the heuristic nature of the preceding calculations, the function $ \mathcal{Q} (t;x,\widetildex) $ in \varepsilonqref{eq:Sgker} is well-defined as long as the summations and integrals in \varepsilonqref{eq:Sgker}--\varepsilonqref{eq:Sgr} converge absolutely. This construction will be carried out in Proposition~\ref{prop:Sg}: there we \varepsilonmph{define} $ \mathcal{Q} (t) $ via~\varepsilonqref{eq:Sgker}--\varepsilonqref{eq:SgrI}, check that the result defines a bounded operator for each $ t \in[0,\infty) $, and verify that the result is indeed the semigroup generated by $ \mathcal{H} $. \begin{remark} \label{rmk:notchaos} In the case when $ \mathbb{A} lim $ is equal to a Brownian motion $ B $, one can also consider the chaos expansion of $ \mathcal{Q} (t;x,\widetildex) $ (see, e.g.,~\cite{janson97}). That is, for each $ t,x,\widetildex $, one views $ \mathcal{Q} (t;x,\widetildex) $ as a random variable (with randomness over $ B $), and decomposes it into terms that belong to $ n $-th order Wiener chaoses of $ B $. Such an expansion has been carried out in \cite{gu18} for \ac{PAM} in two dimensions, and it is conceivable that their method carries over to one dimension. We clarify that our expansion~\varepsilonqref{eq:Sgker}--\varepsilonqref{eq:SgrI} is \varepsilonmph{not} the chaos expansion. For example, it is readily checked that $ \mathbf{E} [ \mathcal{Q} r_1(t;x,\widetildex) \mathcal{Q} r_2(t;x,\widetildex)] \neq 0 $, where the expectation is taken with respect to $ B $.
\varepsilonnd{remark} To prepare for the construction in Proposition~\ref{prop:Sg}, in Lemma~\ref{lem:Ips} we establish an integration-by-parts identity, and, based on this identity, in Lemmas~\ref{lem:Ipsbd}--\ref{lem:SgrIbd}, we obtain bounds on the relevant integrals in \varepsilonqref{eq:Sgker}--\varepsilonqref{eq:Sgr}. To state Lemma~\ref{lem:Ips}, we begin with some notation. Recall that we write $ [x,\widetildex] $, $ x,\widetildex\in\mathcal{T} $, for the interval on $ \mathcal{T} $ that goes counterclockwise from $ x $ to $ \widetildex $. For given $ y_1\neq y_2\in\mathcal{T} $, let $ \mathrm{Mid}_2(y_1,y_2)\in[y_1,y_2] $ denote the midpoint of the interval $ [y_1,y_2] $, and let $ \mathrm{Mid}_1(y_1,y_2)\in[y_2,y_1] $ denote the midpoint of the interval $ [y_2,y_1] $. Set $ \mathcal{T}_1(y_1,y_2) := [\mathrm{Mid}_1(y_1,y_2),\mathrm{Mid}_2(y_1,y_2)) \subset \mathcal{T} $ and $ \mathcal{T}_2(y_1,y_2) := [\mathrm{Mid}_2(y_1,y_2),\mathrm{Mid}_1(y_1,y_2)) \subset \mathcal{T} $. By construction, $ \mathcal{T}_1(y_1,y_2), \mathcal{T}_2(y_1,y_2) $ form a partition of $ \mathcal{T} $, with the property \begin{align} \label{eq:Parti} \mathrm{dist} _{\mathcal{T}}(y_j,x) \leq \mathrm{dist} _{\mathcal{T}}(y_{j+1},x), \ \forall x\in \mathcal{T}_j(y_1,y_2), \qquad \forall j=1,2, \qquad \text{where } y_3 := y_1. \varepsilonnd{align} See Figure~\ref{fig:Midpt} for an illustration.
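The partition property \varepsilonqref{eq:Parti} can be verified numerically on the unit torus (an illustration only, with hypothetical sample points; not part of the argument):

```python
import numpy as np

def dist(a, b):
    """Geodesic distance on the unit torus T = R/Z."""
    d = np.abs(a - b) % 1.0
    return np.minimum(d, 1.0 - d)

def ccw_len(a, b):
    """Length of the arc going counterclockwise from a to b."""
    return (b - a) % 1.0

def in_ccw(x, a, b):
    """Is x in the interval [a, b) taken counterclockwise?"""
    return ccw_len(a, x) < ccw_len(a, b)

rng = np.random.default_rng(1)
y1, y2 = 0.15, 0.7                          # two distinct points on T
mid2 = (y1 + ccw_len(y1, y2) / 2) % 1.0     # midpoint of [y1, y2]
mid1 = (y2 + ccw_len(y2, y1) / 2) % 1.0     # midpoint of [y2, y1]

x = rng.random(10_000)
in_T1 = in_ccw(x, mid1, mid2)               # T_1 = [Mid_1, Mid_2), counterclockwise
# On T_1 the point y1 is the closer of the two, on T_2 it is y2:
assert np.all(dist(y1, x[in_T1]) <= dist(y2, x[in_T1]) + 1e-12)
assert np.all(dist(y2, x[~in_T1]) <= dist(y1, x[~in_T1]) + 1e-12)
```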
\begin{figure}[h] \centering \psfrag{X}{$ y_1 $} \psfrag{Y}{$ y_2 $} \psfrag{T}[c]{$ \mathcal{T}_1(y_1,y_2) $} \psfrag{S}{$ \mathcal{T}_2(y_1,y_2) $} \psfrag{M}[r]{$ \mathrm{Mid}_1(y_1,y_2) \quad $} \psfrag{N}{$ \mathrm{Mid}_2(y_1,y_2) $} \includegraphics[width=.4\textwidth]{mid} \caption{The points $ \mathrm{Mid}_1(y_1,y_2), \mathrm{Mid}_2(y_1,y_2) $ are equidistant from $y_1$ and $y_2$; the intervals $ \mathcal{T}_1(y_1,y_2), \mathcal{T}_2(y_1,y_2) $ are composed of all points closer to $y_1$ and $y_2$ respectively.} \label{fig:Midpt} \varepsilonnd{figure} Recall that $ [x_1,x_2] \subset \mathcal{T} $ denotes the interval going from $ x_1 $ to $ x_2 $ counterclockwise. We define the macroscopic analog of $ \mathbb{A} (x_1,x_2) $ (see~\varepsilonqref{eq:Rt}) as \begin{align} \label{eq:RtLim} \mathbb{A} lim(x_1,x_2) = \int_{[x_1,x_2]\setminus\{0\}} d \mathbb{A} lim(x) = \left\{\begin{array}{l@{,}l} \mathbb{A} lim(x_2)- \mathbb{A} lim(x_1) &\text{ when } x_1\leq x_2\in[0,1), \\ \mathbb{A} lim(x_2)- \mathbb{A} lim(0) + \mathbb{A} lim(1)- \mathbb{A} lim(x_1) &\text{ when } x_2<x_1\in[0,1). \varepsilonnd{array}\right. \varepsilonnd{align} Note that the integral excludes $ 0 $ so that the possible jump of $ \mathbb{A} lim(x) $ there will not be picked up by $ \mathbb{A} lim(x_1,x_2) $. Hereafter we adopt the standard notation $ f(x)|^{x=b}_{x=a} := f(b)-f(a) $. Lemma~\ref{lem:Ips} gives an integration-by-parts formula for $ \mathcal{U} $.
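The proof below leans on two basic properties of the wrapped heat kernel $ \mathcal{P} $ from~\varepsilonqref{eq:HK}: unit total mass and the Chapman--Kolmogorov (semigroup) property. Both can be sanity-checked numerically by truncating the image sum (an illustration only, with hypothetical parameter values):

```python
import numpy as np

def P(t, x, y):
    """Wrapped heat kernel on T = R/Z, truncating the image sum to |i| <= 10."""
    i = np.arange(-10, 11)
    return np.sum(np.exp(-np.abs(x - y + i) ** 2 / (2 * t)) / np.sqrt(2 * np.pi * t))

n = 2000
grid = (np.arange(n) + 0.5) / n             # midpoint quadrature grid on T
t, s, x = 0.07, 0.05, 0.3

# Unit total mass: int_T P(t; x, y) dy = 1.
mass = sum(P(t, x, y) for y in grid) / n
assert abs(mass - 1.0) < 1e-6

# Semigroup property: int_T P(t; x, y) P(s; y, z) dy = P(t + s; x, z).
lhs = sum(P(t, x, y) * P(s, y, 0.8) for y in grid) / n
rhs = P(t + s, x, 0.8)
assert abs(lhs - rhs) < 1e-4
```

The midpoint rule on a smooth periodic integrand converges rapidly, so the tolerances above are generous.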
\begin{lemma} \label{lem:Ips} For $ y_1\neq y_2\in\mathcal{T} $, set \begin{align} \label{eq:Ips:} \begin{split} \mathcal{U} &(s,s';y_1,y_2) := \sum_{j=1}^2 \Bigg( \mathcal{P} (s;y_1,x) \mathbb{A} lim(y_j,x) \mathcal{P} (s';x,y_2)\big|_{x=\mathrm{Mid}_j(y_1,y_2)}^{x=\mathrm{Mid}_{j+1}(y_1,y_2)} \\ &- \int_{\mathcal{T}_j(y_1,y_2)} (\mathbb{T}al_{x} \mathcal{P} (s;y_1,x)) \mathbb{A} lim(y_j,x) \mathcal{P} (s';x,y_2) dx - \int_{\mathcal{T}_j(y_1,y_2)} \mathcal{P} (s;y_1,x) \mathbb{A} lim(y_j,x) \mathbb{T}al_{x} \mathcal{P} (s';x,y_2) dx \Bigg), \varepsilonnd{split} \varepsilonnd{align} where, by convention, we set $ \mathrm{Mid}_{3}(y_1,y_2) :=\mathrm{Mid}_{1}(y_1,y_2) $. Then we have \begin{align} \label{eq:Ips} \mathcal{Q} rI_n(\vec{s};x,\widetildex) = \int_{\mathcal{T}^{n+1}} \mathcal{P} (\tfrac{s_0}{2};x,y_1) dy_1 \Big( \prod_{i=1}^n \mathcal{U} (\tfrac{s_{i-1}}{2},\tfrac{s_{i}}{2};y_{i},y_{i+1}) dy_{i+1} \Big) \mathcal{P} (\tfrac{s_{n}}{2};y_{n+1},\widetildex). \varepsilonnd{align} \varepsilonnd{lemma} \begin{remark} The value of $ \mathcal{U} (s,s';y_1,y_2) $ on the diagonal $ y_1=y_2 $ in~\varepsilonqref{eq:Ips} is irrelevant since this set has zero Lebesgue measure. \varepsilonnd{remark} \begin{proof} In~\varepsilonqref{eq:SgrI}, use the semigroup property $ \mathcal{P} (s_i;x_i,x_{i+1})=\int_{\mathcal{T}} \mathcal{P} (\frac{s_i}{2};x_i,y_i) \mathcal{P} (\frac{s_i}{2};y_i,x_{i+1}) dy_i $ to rewrite \begin{align} \label{eqlem:Ips} & \mathcal{Q} rI_n(\vec{s};x,\widetildex) = \int_{\mathcal{T}^{n+1}} \mathcal{P} (\tfrac{s_0}{2};x,y_1) dy_1 \Big( \prod_{i=1}^n \widetilde{ \mathcal{U} }_i(y_i,s_{i-1},y_{i+1},s_i) dy_{i+1} \Big) \mathcal{P} (\tfrac{s_{n}}{2};y_{n+1},\widetildex), \\ \label{eqlem:Ips:} &\widetilde{ \mathcal{U} }_i(y_i,s_{i-1},y_{i+1},s_i) := \int_{\mathcal{T}} \mathcal{P} (\tfrac{s_{i-1}}{2};y_{i},x) \, d \mathbb{A} lim(x) \, \mathcal{P} (\tfrac{s_{i}}{2};x,y_{i+1}).
\varepsilonnd{align} The integral in \varepsilonqref{eqlem:Ips:} is over $ x\in\mathcal{T} $ in the Riemann–Stieltjes sense. Split the integral into integrals over $ \mathcal{T}_1(y_i,y_{i+1}) $ and $ \mathcal{T}_2(y_{i},y_{i+1}) $. This gives $ \widetilde{ \mathcal{U} }_i=\widetilde{ \mathcal{U} }_{i,1}+\widetilde{ \mathcal{U} }_{i,2} $, \begin{align} \notag \widetilde{ \mathcal{U} }_{i,j}(y_i,s_{i-1},y_{i+1},s_i) &:= \int_{\mathcal{T}_j(y_i,y_{i+1})} \mathcal{P} (\tfrac{s_{i-1}}{2};y_{i},x) \, d \mathbb{A} lim(x) \, \mathcal{P} (\tfrac{s_{i}}{2};x,y_{i+1}) \\ \label{eq:Ips1} &= \int_{\mathcal{T}_j(y_{i},y_{i+1})} \mathcal{P} (\tfrac{s_{i-1}}{2};y_{i},x) \, d \mathbb{A} lim(y_{i+j-1},x) \, \mathcal{P} (\tfrac{s_{i}}{2};x,y_{i+1}), \varepsilonnd{align} where the equality in~\varepsilonqref{eq:Ips1} follows since $ y_i $ and $ y_{i+1} $ are \varepsilonmph{fixed} (the integral is in $ x $). Then, in~\varepsilonqref{eq:Ips1}, integrate by parts (in $ x $), and add the results for $ j=1,2 $ together. This gives $ \widetilde{ \mathcal{U} }_{i}= \mathcal{U} (\frac{s_{i-1}}{2},\frac{s_{i}}{2};y_i,y_{i+1}) $. Inserting this back into~\varepsilonqref{eqlem:Ips} completes the proof. \varepsilonnd{proof} Equation~\varepsilonqref{eq:Ips} expresses $ \mathcal{Q} rI_n $ in terms of $ \mathcal{U} $. We proceed to establish bounds on the latter. Here we list a few bounds on $ \mathcal{P} (t;x,x') $ that will be used in the subsequent analysis. They are readily checked from the explicit expression~\varepsilonqref{eq:HK} of $ \mathcal{P} $ (in the spirit of \varepsilonqref{eq:hk}). 
Given any $ u\in(0,1] $ and $ T<\infty $, for all $ x,x',\widetildex\in\mathcal{T} $ and $ s\in[0,T] $, \begin{subequations} \begin{align} \label{eq:HK:int} \int_{\mathcal{T}} \mathcal{P} (s;x,\widetildex) d\widetildex &=1, \\ \label{eq:HKgx:int} \int_{\mathcal{T}} |\mathbb{T}al_{\widetildex} \mathcal{P} (s;x,\widetildex)| \mathrm{dist} _{\mathcal{T}}(x,\widetildex)^u d\widetildex &\leq c(u,T) s^{-\frac{1-u}{2}}, \\ \label{eq:HKgx':int} \int_{\mathcal{T}} |\mathbb{T}al_{x} \mathcal{P} (s;x,\widetildex)| \mathrm{dist} _{\mathcal{T}}(x,\widetildex)^u d\widetildex &\leq c(u,T) s^{-\frac{1-u}{2}}, \\ \label{eq:HK:hold} \int_{\mathcal{T}} \mathcal{P} (s;x,\widetildex) \mathrm{dist} _{\mathcal{T}}(x,\widetildex)^u d\widetildex &\leq c(T) s^{-\frac{u}2}, \\ \label{eq:HK:uni} \mathcal{P} (s;x,\widetildex) &\leq c(T) s^{-\frac12}, \\ \label{eq:HK:uni:} \mathcal{P} (s;x,\widetildex) \mathrm{dist} _{\mathcal{T}}(x,\widetildex)^u &\leq c(T) s^{-\frac{1-u}2}, \\ \label{eq:HKg:int} \int_{\mathcal{T}} | \mathcal{P} (s;x,\widetildex)- \mathcal{P} (s;x',\widetildex)| d\widetildex &\leq c(u,T) \mathrm{dist} _{\mathcal{T}}(x,x')^{u}s^{-\frac{u}{2}}, \\ \label{eq:HKg:uni} | \mathcal{P} (s;x,\widetildex)- \mathcal{P} (s;x',\widetildex)| &\leq c(u,T) \mathrm{dist} _{\mathcal{T}}(x,x')^{u} s^{-\frac{1+u}{2}}. \varepsilonnd{align} \varepsilonnd{subequations} \begin{lemma} \label{lem:Ipsbd} Given any $ v\in(0, u_{\mathbf{R}t} ) $ and $ T<\infty $, for all $s,s' \in [0,T]$, \begin{align*} \int_{\mathcal{T}} | \mathcal{U} (s,s';y_1,y_2)| dy_2 \leq c(v,T) \norm{ \mathbb{A} lim}_{C^{ u_{\mathbf{R}t} }[0,1]} \big( {s}^{-(1-v)/2}+{s'}^{-(1-v)/2} \big). \varepsilonnd{align*} \varepsilonnd{lemma} \begin{proof} Recall the definition of $ \norm{\Cdot}_{C^{u}[0,1]} $ and $ [\Cdot]_{C^u(\mathcal{T})} $ from~\varepsilonqref{eq:Hold}.
From the expression~\varepsilonqref{eq:RtLim} of $ \mathbb{A} lim(x_1,x_2) $ and the property~\varepsilonqref{eq:Parti}, it is straightforward to check that \begin{align*} | \mathbb{A} lim(y_j,x)| \leq \mathrm{dist} _{\mathcal{T}}(y_j,x)^{ u_{\mathbf{R}t} } \, [ \mathbb{A} lim ]_{C^{ u_{\mathbf{R}t} }(\mathcal{T})} \leq \mathrm{dist} _{\mathcal{T}}(y_j,x)^{v} \, \norm{ \mathbb{A} lim }_{C^{ u_{\mathbf{R}t} }[0,1]}, \qquad \forall x\in\mathcal{T}_j(y_1,y_2), \ j=1,2. \varepsilonnd{align*} Inserting this bound into~\varepsilonqref{eq:Ips:} gives \begin{align} \label{eqlem:Ipsbd:1} | \mathcal{U} (s,s';y_1,y_2)| \leq \norm{ \mathbb{A} lim }_{C^{ u_{\mathbf{R}t} }[0,1]} \sum_{j=1}^2 \Bigg( &\sum_{x\in\{\mathrm{Mid}_k(y_1,y_2)\}_{k=1}^2} \mathcal{P} (s;y_1,x) \, \mathrm{dist} _{\mathcal{T}}(y_j,x)^v \, \mathcal{P} (s';x,y_2) \\ \label{eqlem:Ipsbd:2} &+ \int_{\mathcal{T}_j(y_1,y_2)} \,\big|\mathbb{T}al_{x} \mathcal{P} (s;y_1,x)\big| \mathrm{dist} _{\mathcal{T}}(y_j,x)^v \, \mathcal{P} (s';x,y_2) dx \\ \label{eqlem:Ipsbd:3} &+ \int_{\mathcal{T}_j(y_1,y_2)} \mathcal{P} (s;y_1,x) \, \mathrm{dist} _{\mathcal{T}}(y_j,x)^v \big|\mathbb{T}al_{x} \mathcal{P} (s';x,y_2)\big| \, dx \Bigg). \varepsilonnd{align} In~\varepsilonqref{eqlem:Ipsbd:2} use~\varepsilonqref{eq:Parti} to bound $ \mathrm{dist} _{\mathcal{T}}(y_j,x) $ by $ \mathrm{dist} _{\mathcal{T}}(y_1,x) $, and in~\varepsilonqref{eqlem:Ipsbd:3} use~\varepsilonqref{eq:Parti} to bound $ \mathrm{dist} _{\mathcal{T}}(y_j,x) $ by $ \mathrm{dist} _{\mathcal{T}}(y_2,x) $. This way $ \mathrm{dist} _{\mathcal{T}} $ has the same $ y $ variable as $ \mathbb{T}al_{x} \mathcal{P} $.
We now have \begin{align} \label{eq:Ips:bd:} &| \mathcal{U} (s,s';y_1,y_2)| \leq \norm{ \mathbb{A} lim }_{C^{ u_{\mathbf{R}t} }[0,1]} \Bigg( 2 \sum_{x\in\{\mathrm{Mid}_k(y_1,y_2)\}_{k=1}^2} \mathcal{P} (s;y_1,x) \mathrm{dist} _{\mathcal{T}}(y_k,x)^v \mathcal{P} (s';x,y_2) \\ \label{eq:Ips:bd::} & + \int_{\mathcal{T}} \, \big|\mathbb{T}al_{x} \mathcal{P} (s;y_1,x)\big| \mathrm{dist} _{\mathcal{T}}(y_1,x)^v \, \mathcal{P} (s';x,y_2) dx + \int_{\mathcal{T}} \mathcal{P} (s;y_1,x) \, \mathrm{dist} _{\mathcal{T}}(y_2,x)^v \big|\mathbb{T}al_{x} \mathcal{P} (s';x,y_2)\big| \, dx \Bigg). \varepsilonnd{align} Integrate~\varepsilonqref{eq:Ips:bd:}--\varepsilonqref{eq:Ips:bd::} over $ y_2\in\mathcal{T} $, use \varepsilonqref{eq:HK:int}, \varepsilonqref{eq:HK:uni}--\varepsilonqref{eq:HK:uni:} to bound the terms in~\varepsilonqref{eq:Ips:bd:}, and use \varepsilonqref{eq:HK:int}--\varepsilonqref{eq:HKgx':int} to bound the terms in~\varepsilonqref{eq:Ips:bd::}. We then conclude the desired result \begin{align*} \int_{\mathcal{T}} | \mathcal{U} (s,s';y_1,y_2)| dy_2 \leq c(v,T) \norm{ \mathbb{A} lim}_{C^{ u_{\mathbf{R}t} }[0,1]} \big( s^{-(1-v)/2} + {s'}^{-(1-v)/2} + s^{-(1-v)/2} + {s'}^{-(1-v)/2} \big). \varepsilonnd{align*} \varepsilonnd{proof} Based on Lemmas~\ref{lem:Ips}--\ref{lem:Ipsbd}, we now establish bounds on $ \mathcal{Q} rI_n $. Recall the notation $ \Sigma_{n}(t) $ from~\varepsilonqref{eq:Sigman} and $ d^n\vec{s} $ from~\varepsilonqref{eq:ds}.
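The proof of the next lemma repeatedly invokes the Dirichlet formula~\varepsilonqref{eq:dirichlet}, whose $ n=1 $ instance is the classical Beta integral $ \int_0^1 s^{a-1}(1-s)^{b-1}ds = \Gamma(a)\Gamma(b)/\Gamma(a+b) $. As a quick numerical sanity check (illustration only, with hypothetical exponents; not part of the argument):

```python
import math
import numpy as np

# Midpoint-rule check of the Beta integral with mildly singular endpoints.
a, b = 0.7, 0.7
n = 2_000_000
s = (np.arange(n) + 0.5) / n                       # midpoints of a uniform grid on (0, 1)
integral = np.mean(s ** (a - 1) * (1 - s) ** (b - 1))
exact = math.gamma(a) * math.gamma(b) / math.gamma(a + b)

assert abs(integral - exact) < 1e-3
```

The integrable endpoint singularities $ s^{-0.3} $, $ (1-s)^{-0.3} $ contribute an error well below the stated tolerance at this grid resolution.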
\begin{lemma} \label{lem:SgrIbd} Given any $ u\in(0,1] $, $ v\in(0, u_{\mathbf{R}t} ) $, and $ T<\infty $, we have, for all $ x,x',\widetildex\in\mathcal{T} $, $ t\in[0,T] $, and $ n \geq 1 $, \begin{enumerate}[label=(\alph*),leftmargin=7ex] \item \label{lem:SgrI:int} $ \displaystyle \int_{\Sigma_n(t)} \int_{\mathcal{T}} | \mathcal{Q} rI_n(\vec{s};x,\widetildex)| d^n\vec{s}\, d\widetildex \leq t^{\frac{(1+v)n}{2}}\frac{ (c(v,T) \norm{ \mathbb{A} lim}_{C^{ u_{\mathbf{R}t} }[0,1]} )^n}{\Gamma(\frac{(1+v)n+2}{2})}, $ \item \label{lem:SgrI:gint} $ \displaystyle \int_{\Sigma_n(t)} \int_{\mathcal{T}} | \mathcal{Q} rI_n(\vec{s};x,\widetildex)- \mathcal{Q} rI_n(\vec{s};x',\widetildex)| d^n\vec{s}\,d\widetildex \leq \mathrm{dist} _{\mathcal{T}}(x,x')^{u} t^{\frac{(1+v)n-u}{2}} \frac{(c(u,v,T) \norm{ \mathbb{A} lim}_{C^{ u_{\mathbf{R}t} }[0,1]})^n }{\Gamma(\frac{(1+v)n+2-u}{2})}, $ \item \label{lem:SgrI:sup} $ \displaystyle \int_{\Sigma_n(t)} | \mathcal{Q} rI_n(\vec{s};x,\widetildex)| d^n\vec{s} \leq t^{\frac{(1+v)n-1}{2}} \frac{ (c(v,T) \norm{ \mathbb{A} lim}_{C^{ u_{\mathbf{R}t} }[0,1]} )^n}{\Gamma(\frac{(1+v)n+1}{2})}, $ \item \label{lem:SgrI:gsup} $ \displaystyle \int_{\Sigma_n(t)} | \mathcal{Q} rI_n(\vec{s};x,\widetildex)- \mathcal{Q} rI_n(\vec{s};x',\widetildex)| d^n\vec{s} \leq \mathrm{dist} _{\mathcal{T}}(x,x')^{u} t^{\frac{(1+v)n-1-u}{2}} \frac{(c(u,v,T) \norm{ \mathbb{A} lim}_{C^{ u_{\mathbf{R}t} }[0,1]} )^n }{\Gamma(\frac{(1+v)n+1-u}{2})}.
$ \varepsilonnd{enumerate} \varepsilonnd{lemma} \begin{proof} The proof begins with the given expression~\varepsilonqref{eq:Ips} of $ \mathcal{Q} rI_n $: \begin{align} \begin{split} \label{eqSglem:SgrIbd:Ips} & \mathcal{Q} rI_n(\vec{s};x,\widetildex) = \int_{\mathcal{T}^{n+1}} \mathcal{P} (\tfrac{s_0}{2};x,y_1) \, dy_1 \, \mathcal{U} (\tfrac{s_{0}}{2},\tfrac{s_{1}}{2};y_{1},y_{2}) \, dy_2 \cdots \\ &\hphantom{ \mathcal{Q} rI_n(\vec{s};x,\widetildex)=\int_{\mathcal{T}^{n+1}} \mathcal{P} (\tfrac{s_0}{2};x,y_1) \, dy_1 \,} \, dy_{n} \, \mathcal{U} (\tfrac{s_{n-1}}{2},\tfrac{s_{n}}{2};y_{n},y_{n+1}) \, dy_{n+1} \, \mathcal{P} (\tfrac{s_{n}}{2};y_{n+1},\widetildex), \varepsilonnd{split} \\ \begin{split} & \mathcal{Q} rI_n(\vec{s};x,\widetildex) - \mathcal{Q} rI_n(\vec{s};x',\widetildex) = \int_{\mathcal{T}^{n+1}} \big( \mathcal{P} (\tfrac{s_0}{2};x,y_1)- \mathcal{P} (\tfrac{s_0}{2};x',y_1) \big) \, dy_1 \\ & \label{eqSglem:SgrIbd:Ipsg:} \hphantom{ \mathcal{Q} rI_n(\vec{s};x,\widetildex) - \mathcal{Q} rI_n(\vec{s};x',\widetildex)=} \mathcal{U} (\tfrac{s_{0}}{2},\tfrac{s_{1}}{2};y_{1},y_{2}) \, dy_2 \cdots \, dy_{n} \, \mathcal{U} (\tfrac{s_{n-1}}{2},\tfrac{s_{n}}{2};y_{n},y_{n+1}) \, dy_{n+1} \, \mathcal{P} (\tfrac{s_{n}}{2};y_{n+1},\widetildex). \varepsilonnd{split} \varepsilonnd{align} For~\ref{lem:SgrI:int}--\ref{lem:SgrI:gint}, integrate \varepsilonqref{eqSglem:SgrIbd:Ips}--\varepsilonqref{eqSglem:SgrIbd:Ipsg:} over $ \widetildex,y_{n+1},\ldots, y_1\in\mathcal{T} $ \varepsilonmph{in order}. Use~\varepsilonqref{eq:HK:int} for the integral over $ \widetildex $, use Lemma~\ref{lem:Ipsbd} subsequently for the integrals over $ y_{n+1},\ldots,y_{2} $, and use~\varepsilonqref{eq:HK:int} and \varepsilonqref{eq:HKg:int} for the integral over $ y_1 $.
We then obtain \begin{subequations} \label{eqSglem:} \begin{align} \label{eqSglem:a} \int_{\mathcal{T}} | \mathcal{Q} rI_n(\vec{s};x,\widetildex)| d\widetildex &\leq \big( c(v,T) \norm{ \mathbb{A} lim}_{C^{ u_{\mathbf{R}t} }[0,1]} \big)^n \prod_{i=1}^n \big( s_{i-1}^{-(1-v)/2}+s_{i}^{-(1-v)/2} \big), \\ \label{eqSglem:b} \int_{\mathcal{T}} | \mathcal{Q} rI_n(\vec{s};x,\widetildex)- \mathcal{Q} rI_n(\vec{s};x',\widetildex)| d\widetildex &\leq \mathrm{dist} _{\mathcal{T}}(x,x')^{u} \big( c(u,v,T)\norm{ \mathbb{A} lim }_{C^{ u_{\mathbf{R}t} }[0,1]} \big)^n s_0^{-\frac{u}{2}} \prod_{i=1}^n \big( s_{i-1}^{-(1-v)/2}+s_{i}^{-(1-v)/2} \big). \varepsilonnd{align} \varepsilonnd{subequations} For~\ref{lem:SgrI:sup}--\ref{lem:SgrI:gsup}, in~\varepsilonqref{eqSglem:SgrIbd:Ips}--\varepsilonqref{eqSglem:SgrIbd:Ipsg:}, use~\varepsilonqref{eq:HK:uni} to bound $ \mathcal{P} (\frac{s_n}{2};y_{n+1},\widetildex) $ by $ c s_{n}^{-1/2} $, and then integrate the result over $ y_{n+1},\ldots, y_1\in\mathcal{T} $ in order. Similarly to the preceding, we have \begin{align} \tag{\ref*{eqSglem:}c} | \mathcal{Q} rI_n(\vec{s};x,\widetildex)| &\leq \big( c(v,T)\norm{ \mathbb{A} lim }_{C^{ u_{\mathbf{R}t} }[0,1]} \big)^n \prod_{i=1}^n \big( s_{i-1}^{-(1-v)/2}+s_{i}^{-(1-v)/2} \big) s_n^{-\frac12}, \\ \tag{\ref*{eqSglem:}d} \label{eqSglem:d} | \mathcal{Q} rI_n(\vec{s};x,\widetildex)- \mathcal{Q} rI_n(\vec{s};x',\widetildex)| &\leq \mathrm{dist} _{\mathcal{T}}(x,x')^{u} \big( c(u,v,T)\norm{ \mathbb{A} lim }_{C^{ u_{\mathbf{R}t} }[0,1]} \big)^n s_0^{-\frac{u}{2}} \prod_{i=1}^n \big( s_{i-1}^{-(1-v)/2}+s_{i}^{-(1-v)/2} \big) s_n^{-\frac12}. \varepsilonnd{align} Next we will explain how to integrate the r.h.s.\ of~\varepsilonqref{eqSglem:a}--\varepsilonqref{eqSglem:d} to establish the desired bounds. We begin with~\varepsilonqref{eqSglem:a}.
Expand the $ n $-fold product on the r.h.s.\ of~\varepsilonqref{eqSglem:a} into a sum of size $ 2^n $: \begin{align} \label{eq:sumexpansion} \prod_{i=1}^n \big( s_{i-1}^{-(1-v)/2}+s_{i}^{-(1-v)/2} \big) = \sum_{\vec{b}} \prod_{i=0}^n s_{i}^{-\frac{1-v}{2}( \mathbf{1} \set{b_{i-1/2}=i}+ \mathbf{1} \set{b_{i+1/2}=i})}, \varepsilonnd{align} where the sum goes over $ \vec{b}=(b_{1/2},b_{3/2},\ldots,b_{n-1/2})\in \{0,1\}\times\{1,2\}\times\cdots\times\{n-1,n\} $, with the convention that $ b_{-1/2}:=-1 $ and $ b_{n+1/2}:=n+1 $. Insert~\varepsilonqref{eq:sumexpansion} into~\varepsilonqref{eqSglem:a}, and integrate both sides over $ \vec{s}\in\Sigma_n(t) $. With the aid of the Dirichlet formula~\varepsilonqref{eq:dirichlet}, we obtain \begin{align*} \int_{\Sigma_n(t)\times\mathcal{T}} | \mathcal{Q} rI_n(\vec{s};x,\widetildex)| d^n\vec{s} d\widetildex \leq \big( c(v,T)\norm{ \mathbb{A} lim }_{C^{ u_{\mathbf{R}t} }[0,1]} \big)^n \sum_{\vec{b}} \frac{t^{(1+v)n/2} \prod_{i=0}^n\Gamma\big(1-\frac{1-v}{2}( \mathbf{1} \set{b_{i-1/2}=i}+ \mathbf{1} \set{b_{i+1/2}=i})\big)}{\Gamma(\frac{1+v}{2}n+1)}. \varepsilonnd{align*} Since $ \Gamma(x) $ is decreasing for $ x\in(0,1] $, we have $ \Gamma(1-\frac{1-v}{2}( \mathbf{1} \set{b_{i-1/2}=i}+ \mathbf{1} \set{b_{i+1/2}=i})) \leq \Gamma(v) $. From this we conclude the desired result for~\ref{lem:SgrI:int}: \begin{align*} \int_{\Sigma_n(t)\times\mathcal{T}} | \mathcal{Q} rI_n(\vec{s};x,\widetildex)| d^n\vec{s} d\widetildex \leq \big( c(v,T)\norm{ \mathbb{A} lim }_{C^{ u_{\mathbf{R}t} }[0,1]} \big)^n 2^{n} \frac{t^{(1+v)n/2}\Gamma(v)^{n+1}}{\Gamma(\frac{1+v}{2}n+1)} \leq t^{\frac{(1+v)n}{2}}\frac{ (c(v,T) \norm{ \mathbb{A} lim}_{C^{ u_{\mathbf{R}t} }[0,1]} )^n}{\Gamma(\frac{(1+v)n+2}{2})}. 
\varepsilonnd{align*} As for~\ref{lem:SgrI:gint}--\ref{lem:SgrI:gsup}, integrating \varepsilonqref{eqSglem:b}--\varepsilonqref{eqSglem:d} over $ \vec{s}\in\Sigma_n(t) $, with the aid of the Dirichlet formula~\varepsilonqref{eq:dirichlet}, one obtains the desired results via the same procedure as in the preceding. We do not repeat the argument. \varepsilonnd{proof} Given Lemma~\ref{lem:SgrIbd}, we now construct the semigroup $ \mathcal{Q} (t) $, thereby verifying the heuristic given earlier. Recall $ \mathcal{Q} r_n $ from~\varepsilonqref{eq:Sgr}. \begin{proposition} \label{prop:Sg} Fix $ u\in(0,1] $ and $ T<\infty $. The series $ \mathcal{Q} r(t;x,\widetildex) := \sum_{n=1}^\infty \mathcal{Q} r_n(t;x,\widetildex) $ converges uniformly over $ x,\widetildex\in\mathcal{T} $ and $ t\in[0,T] $, and satisfies the bounds \begin{enumerate}[label=(\alph*),leftmargin=7ex] \item \label{prop:Sg:int} $ \displaystyle \int_{\mathcal{T}} | \mathcal{Q} r(t;x,\widetildex)| d\widetildex \leq c(T), $ \item \label{prop:Sgg:int} $ \displaystyle \int_{\mathcal{T}} | \mathcal{Q} r(t;x,\widetildex)- \mathcal{Q} r(t;x',\widetildex)| d\widetildex \leq \mathrm{dist} _{\mathcal{T}}(x,x')^u c(u,T), $ \item \label{prop:Sg:sup} $ \displaystyle | \mathcal{Q} r(t;x,\widetildex)| \leq c(T), $ \item \label{prop:Sgg:sup} $ \displaystyle | \mathcal{Q} r(t;x,\widetildex)- \mathcal{Q} r(t;x',\widetildex)| \leq \frac{ \mathrm{dist} _{\mathcal{T}}(x,x')^u}{t^{u/2}} c(u,T), $ \varepsilonnd{enumerate} for all $ x,\widetildex\in\mathcal{T} $ and $ t\in[0,T] $.
Furthermore, with $ \mathcal{Q} (t;x,\widetildex) := \mathcal{P} (t;x,\widetildex)+ \mathcal{Q} r(t;x,\widetildex) $ (as in~\varepsilonqref{eq:Sgker}), \begin{align*} \big( \mathcal{Q} (t) f \big)(x) := \int_{\mathcal{T}} \mathcal{Q} (t;x,\widetildex) f(\widetildex) d\widetildex \varepsilonnd{align*} defines an operator $ \mathcal{Q} (t): C(\mathcal{T})\to C(\mathcal{T}) $ for each $ t\in[0,\infty) $, and $ \mathcal{Q} (t) $ is, in fact, the semigroup of $ \mathcal{H} $. \varepsilonnd{proposition} \begin{proof} By assumption, $ \norm{ \mathbb{A} lim}_{C^{ u_{\mathbf{R}t} }[0,1]}<\infty $. For given $ v\in(0, u_{\mathbf{R}t} ) $, $ \delta>0 $, and $ c<\infty $, $ \sum_{n=1}^\infty \frac{c^n}{\Gamma(v n+\delta)}<\infty $. From these two observations the claimed bounds \ref{prop:Sg:int}--\ref{prop:Sgg:sup} follow straightforwardly from Lemma~\ref{lem:SgrIbd}. It remains to check that the so defined operators $ \mathcal{Q} (t) $, $ t\geq 0 $, form the semigroup of $ \mathcal{H} $. Fixing $ t,s \in[0,\infty) $, we begin by checking the semigroup property. Writing $ \mathcal{Q} _0(t;x,y):= \mathcal{P} (t;x,y) $ and $ \mathcal{Q} _n := \mathcal{Q} r_n $ for $ n\geq 1 $ to streamline notation, we have \begin{align} \label{eqprop:Sg:s+t} \big( \mathcal{Q} (t) \mathcal{Q} (s) \big)(x,\widetildex) := \int_{\mathcal{T}} \Big( \sum_{n=0}^\infty \mathcal{Q} _n(t;x,y) \Big)\Big( \sum_{n=0}^\infty \mathcal{Q} _n(s;y,\widetildex) \Big) dy = \sum_{n=0}^\infty \sum_{n_1+n_2=n}\int_{\mathcal{T}} \mathcal{Q} _{n_1}(t;x,y) \mathcal{Q} _{n_2}(s;y,\widetildex) dy. \varepsilonnd{align} Here we rearranged the product of two infinite sums into iterated sums, which is justified by the bounds from Lemma~\ref{lem:SgrIbd}. Fix $ n\geq 0 $, and consider generic $ n_1,n_2\geq 0 $ with $ n_1+n_2=n $.
From the given expressions~\varepsilonqref{eq:Sgr}--\varepsilonqref{eq:SgrI} of $ \mathcal{Q} r_n $, we have \begin{align*} \int_{\mathcal{T}} & \mathcal{Q} _{n_1}(t;x,y) \mathcal{Q} _{n_2}(s;y,\widetildex) dy \\ &= \int_{\mathcal{T}^{n+1}\times\Sigma_{n_1}(t)\times\Sigma_{n_2}(s)} \Big( \prod_{i=0}^{n_1} \mathcal{P} (t_i;x_i,x_{i+1})\ \prod_{i=1}^{n_1} d \mathbb{A} lim(x_i) \Big) dy \Big( \prod_{i=0}^{n_2} \mathcal{P} (s_i;x'_i,x'_{i+1})\ \prod_{i=1}^{n_2} d \mathbb{A} lim(x'_i) \Big) d^{n_1}\vec{t} \ d^{n_2}\vec{s}, \varepsilonnd{align*} with the convention $ x_0:=x $, $ x_{n_1+1}:=y $, $ x'_0:=y $, and $ x'_{n_2+1}:=\widetildex $. Integrate over $ y $, using $ \int_{\mathcal{T}} \mathcal{P} (t_{n_1};x_{n_1},y) \mathcal{P} (s_{0};y,x'_1)dy = \mathcal{P} (t_{n_1}+s_0;x_{n_1},x'_1) $. Renaming variables as $ (x_{n_1+1},\ldots,x_{n}):=(x'_1,\ldots,x'_{n_2}) $ and $ (t_{n_1},\ldots,t_{n}):=(t_{n_1}+s_0,s_1,\ldots,s_{n_2}) $, we obtain \begin{align} \label{eqprop:Sg:1} \int_{\mathcal{T}} \mathcal{Q} _{n_1}(t;x,y) \mathcal{Q} _{n_2}(s;y,\widetildex) dy = \int_{\mathcal{T}^{n}\times\Sigma_{n}(t+s)} \Big( \prod_{i=0}^{n_1+n_2} \mathcal{P} (t_i;x_i,x_{i+1})\ \prod_{i=1}^{n_1+n_2} d \mathbb{A} lim(x_i) \Big) \mathbf{1} _{\Sigma'_{n_1,n_2}(t,s)}(\vec{t}\,) \, d^{n}\vec{t}, \varepsilonnd{align} where $ \Sigma'_{n_1,n_2}(t,s) := \{ t_0+\ldots+t_{n_1-1}<t, \ t_{n_1+1}+\ldots+t_{n_1+n_2}<s \}. $ It is straightforward to check that \begin{align*} \sum_{n_1+n_2=n} \mathbf{1} _{\Sigma'_{n_1,n_2}(t,s)}(\vec{t}\,) = \mathbf{1} _{\Sigma_{n}(t+s)}(\vec{t}\,), \quad \text{for Lebesgue almost every } \vec{t} \in (0,\infty)^n.
\varepsilonnd{align*} Given this property, we sum \varepsilonqref{eqprop:Sg:1} over $ n_1+n_2=n $ to obtain \begin{align*} \sum_{n_1+n_2=n} \int_{\mathcal{T}} \mathcal{Q} _{n_1}(t;x,y) \mathcal{Q} _{n_2}(s;y,\widetildex) dy = \int_{\mathcal{T}^{n}\times\Sigma_{n}(t+s)} \Big( \prod_{i=0}^{n} \mathcal{P} (t_i;x_i,x_{i+1})\ \prod_{i=1}^{n} d \mathbb{A} lim(x_i) \Big) \, d^{n}\vec{t} = \mathcal{Q} _n(t+s;x,\widetildex). \varepsilonnd{align*} Inserting this back into~\varepsilonqref{eqprop:Sg:s+t} confirms the semigroup property: $ \mathcal{Q} (t) \mathcal{Q} (s)= \mathcal{Q} (t+s) $. We now turn to showing that $ \lim_{t\downarrow 0} \frac{1}{t} ( \mathcal{Q} (t)g-g) = \mathcal{H} g $, for all $ g\in D( \mathcal{H} ) $. Recall that $ \mathcal{H} $ satisfies \varepsilonqref{eq:ham:form}. This being the case, it suffices to show \begin{align} \label{eqprop:Sg:generator} \lim_{t\downarrow 0} \tfrac{1}{t} \big( \langle f, \mathcal{Q} (t)g\rangle -\langle f, g \rangle \big) = -F_ \mathbb{A} lim(f,g) = -\tfrac12 \langle f',g' \rangle + f(1)g(1) \mathbb{A} lim(1) - \int_{0}^1 (f'g+fg')(x) \mathbb{A} lim(x) dx, \varepsilonnd{align} for all $ f,g\in H^1(\mathcal{T}) $. The operator $ \mathcal{Q} (t) $, by definition, is given by the series~\varepsilonqref{eq:Sgker}. This being the case, we consider separately the contribution from each term in the series. First, for the heat kernel, with $ f,g\in H^1(\mathcal{T}) $, it is standard to show that \begin{align} \label{eqprop:Sg:HK} \lim_{t\downarrow 0} \tfrac{1}{t} \big( \langle f, \mathcal{P} (t)g\rangle -\langle f, g \rangle \big) = -\tfrac12 \langle f',g' \rangle. \varepsilonnd{align} Next we turn to the $ n=1 $ term. Recall the given expressions~\varepsilonqref{eq:Sgr}--\varepsilonqref{eq:SgrI} for $ \mathcal{Q} r_1 $.
With the notation $ \phi(t;x) := \int_{\mathcal{T}} \mathcal{P} (t;x,y)\phi(y)dy $ for a given function $ \phi $, we write \begin{align*} \frac{1}{t} \langle f, \mathcal{Q} r_1(t)g\rangle = \frac{1}{t} \int_{0}^t \Big( \int_{\mathcal{T}} f(s;x) d \mathbb{A} lim(x) g(t-s;x) \Big) ds. \varepsilonnd{align*} Integrating by parts in $ x $ gives \begin{align} \label{eqprop:Sg:Sgr1} \frac{1}{t} \langle f, \mathcal{Q} r_1(t)g\rangle = \frac{1}{t} \int_{0}^t \Bigg( \mathbb{A} lim(1)f(s;1) g(t-s;1) - \int_{0}^1 \big( (\mathbb{T}al_x f(s;x)) g(t-s;x) + f(s;x) (\mathbb{T}al_x g(t-s;x)) \big) \mathbb{A} lim(x) dx \Bigg) ds. \varepsilonnd{align} For $ \phi\in H^1(\mathcal{T}) $, it is straightforward to check that $ \Vert \phi(t;\Cdot) - \phi(\Cdot) \Vert_{H^1(\mathcal{T})} \to 0 $ as $ t\downarrow 0 $. Also, by \varepsilonqref{fn:LinfinH1} and since $ \mathcal{T} $ has unit (and hence finite) Lebesgue measure, $ L^2 $-norms and $ L^\infty $-norms are controlled by the $ H^1 $-norm: \begin{align*} \norm{\psi}_{L^2(\mathcal{T})},\norm{\psi}_{L^\infty(\mathcal{T})} \leq \sqrt{2}\norm{\psi}_{H^1(\mathcal{T})}, \varepsilonnd{align*} so $ \Vert \phi(t;\Cdot) - \phi(\Cdot) \Vert_{L^2(\mathcal{T})} \to 0 $ and $ \Vert \phi(t;\Cdot) - \phi(\Cdot) \Vert_{L^\infty(\mathcal{T})} \to 0 $, as $ t\downarrow 0 $. Using these properties for $ \phi=f,g $ in~\varepsilonqref{eqprop:Sg:Sgr1}, together with $ \mathbb{A} lim \in L^\infty[0,1] $, we send $ t\downarrow 0 $ to obtain \begin{align} \label{eqprop:Sg:Sgr1:} \lim_{t\downarrow 0} \tfrac{1}{t} \langle f, \mathcal{Q} r_1(t)g\rangle = \mathbb{A} lim(1)f(1) g(1) - \int_{0}^1 \big( f'g + fg' \big)(x) \mathbb{A} lim(x) dx. \varepsilonnd{align} Finally we consider the $ n\geq 2 $ terms. Given the expressions~\varepsilonqref{eq:Sgr}--\varepsilonqref{eq:SgrI} for $ \mathcal{Q} r_n $, we write \begin{align} \label{eqprop:Sg:Sgrn:1} \langle f, \mathcal{Q} r_n(t)g\rangle = \int_{\mathcal{T}^2} f(x) \Big( \int_{\Sigma_n(t)} \mathcal{Q} rI_n(\vec{s};x,\widetildex) d^n\vec{s} \Big) g(\widetildex) dxd\widetildex.
\varepsilonnd{align} With $ f,g\in H^1(\mathcal{T}) $, we have $ \Vert f \Vert_{L^\infty(\mathcal{T})},\Vert g \Vert_{L^\infty(\mathcal{T})} <\infty $. Thus, in~\varepsilonqref{eqprop:Sg:Sgrn:1}, bound $ f(x) $ and $ g(\widetildex) $ by their supremum, and use Lemma~\ref{lem:SgrIbd}\ref{lem:SgrI:int} for fixed $ v\in(0, u_{\mathbf{R}t} ) $. This gives \begin{align*} |\langle f, \mathcal{Q} r_n(t)g\rangle| \leq \Vert f \Vert_{L^\infty(\mathcal{T})} \Vert g \Vert_{L^\infty(\mathcal{T})} t^{\frac{(1+v)n}{2}}\frac{ (c(v,T) \norm{ \mathbb{A} lim}_{C^{ u_{\mathbf{R}t} }[0,1]})^n}{\Gamma(\frac{(1+v)n+2}{2})}. \varepsilonnd{align*} Sum this inequality over $ n\geq 2 $, and divide the result by $ t $. This gives, for all $ t\leq 1 $, \begin{align} \label{eqprop:Sg:Sgrn:2} \frac{1}{t}\sum_{n\geq 2} |\langle f, \mathcal{Q} r_n(t)g\rangle| \leq c(f,g,v,T) t^{v/2}. \varepsilonnd{align} The r.h.s.\ of~\varepsilonqref{eqprop:Sg:Sgrn:2} indeed converges to $ 0 $ as $ t\downarrow 0 $. Combining \varepsilonqref{eqprop:Sg:HK}, \varepsilonqref{eqprop:Sg:Sgr1:}, and \varepsilonqref{eqprop:Sg:Sgrn:2} concludes the desired result~\varepsilonqref{eqprop:Sg:generator}. \varepsilonnd{proof} We close this subsection by showing the uniqueness of mild solutions~\varepsilonqref{eq:spde:mild} of \varepsilonqref{eq:spde}. (Recall that existence follows from Theorem~\ref{thm:main}.) The argument follows the standard Picard iteration, in the same way as for the \ac{SHE}. \begin{proposition} \label{prop:unique} For any given $ \mathcal{Z}^\mathrm{ic} \in C^{ u_\mathrm{ic} }(\mathcal{T}) $ and a fixed $ \mathbb{A} lim \in C^{ u_{\mathbf{R}t} }[0,1] $, there exists at most one $ C([0,\infty),C(\mathcal{T})) $-valued mild solution~\varepsilonqref{eq:spde:mild}. \varepsilonnd{proposition} \begin{proof} Let $ \mathcal{Z}\in C([0,\infty),C(\mathcal{T})) $ be a mild solution~\varepsilonqref{eq:spde:mild} solving \varepsilonqref{eq:spde}.
Iterating~\varepsilonqref{eq:spde:mild} $ m $ times gives \begin{align*} \mathcal{Z}(t,x) = \sum_{n=0}^m\mathcal{Z}_n(t,x) + \mathcal{W}_m(t,x), \varepsilonnd{align*} where, with the notation $ [0,t]^n_{<} := \{ (t_1,\ldots,t_n)\in(0,\infty)^n : 0<t_1<\ldots<t_n<t_{n+1}:=t\} $, \begin{align*} \mathcal{Z}_n(t,x) &:= \int_{[0,t]^{n}_{<}\times\mathcal{T}^{n+1}} \Big( \prod_{i=1}^n \mathcal{Q} (t_{i+1}-t_i;x_{i+1},x_i) \xi(t_i,x_i) dt_idx_i \Big) \mathcal{Q} (t_1;x_1,x_0) \mathcal{Z}^\mathrm{ic}(x_0) dx_0, \\ \mathcal{W}_m(t,x) &:= \int_{[0,t]^{m+1}_{<}\times\mathcal{T}^{m+1}} \Big( \prod_{i=1}^{m+1} \mathcal{Q} (t_{i+1}-t_i;x_{i+1},x_i) \xi(t_i,x_i) dt_idx_i \Big) \mathcal{Z}(t_1,x_1). \varepsilonnd{align*} For given $ \Lambda<\infty $, let $ \tau_{\Lambda} := \inf\{t \geq 0: \sup_{x\in\mathcal{T}} \mathcal{Z}(t,x)^2 > \Lambda \} $ denote the first hitting time of $ \mathcal{Z}^2 $ at level $ \Lambda $. Recall that $ \mathcal{Q} (t) $ is deterministic since $ \mathbb{A} lim $ is assumed to be deterministic (throughout this section). Evaluating the second moment of $ \mathcal{W}_m(t\wedge\tau_\Lambda,x) $ gives \begin{align*} \mathbf{E} \big[\mathcal{W}_m(t\wedge\tau_\Lambda,x)^2\big] &= \mathbf{E} \Big[ \int_{[0,t\wedge\tau_\Lambda]^{m+1}_{<}\times\mathcal{T}^{m+1}} \Big( \prod_{i=1}^{m+1} \mathcal{Q} ^2(t_{i+1}-t_i;x_{i+1},x_i) dt_idx_i \Big) \mathcal{Z}^2(t_1,x_1) \Big] \\ &\leq \Lambda \int_{[0,t]^{m+1}_{<}\times\mathcal{T}^{m+1}} \prod_{i=1}^{m+1} \mathcal{Q} ^2(t_{i+1}-t_i;x_{i+1},x_i) dt_idx_i. \varepsilonnd{align*} Further applying bounds from Proposition~\ref{prop:Sg}\ref{prop:Sg:int}, \ref{prop:Sg:sup} gives \begin{align*} \mathbf{E} \big[\mathcal{W}_m(t\wedge\tau_\Lambda,x)^2\big] &\leq \Lambda c(t)^{m+1} \int_{[0,t]^{m+1}_{<}\times\mathcal{T}^{m+1}} \prod_{i=1}^{m+1} dt_idx_i = \Lambda \frac{c(t)^{m+1}}{(m+1)!}. \varepsilonnd{align*} Sending $ m\to\infty $, the r.h.s.\ vanishes, so $ \mathbf{E} [\mathcal{W}_m(t\wedge\tau_\Lambda,x)^2] \to 0 $.
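For completeness, the factorial decay in the last bound is just the volume of the ordered simplex (the spatial integrals over $ \mathcal{T}^{m+1} $ each contribute a factor $ 1 $, as $ \mathcal{T} $ has unit measure): since the cube $ [0,t]^{m+1} $ is, up to a null set, the disjoint union of the $ (m+1)! $ orderings of its coordinates, \begin{align*} \int_{[0,t]^{m+1}_{<}} \prod_{i=1}^{m+1} dt_i = \frac{1}{(m+1)!} \int_{[0,t]^{m+1}} \prod_{i=1}^{m+1} dt_i = \frac{t^{m+1}}{(m+1)!}. \varepsilonnd{align*}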
With $ \mathcal{Z} $ being continuous on $ [0,\infty)\times\mathcal{T} $ by assumption, we have $ \mathbf{P} [ \tau_{\Lambda} > t ]\to 1 $, as $ \Lambda\to\infty $. Hence, after passing to a suitable sequence $ \Lambda_m\to\infty $, we conclude $ \mathcal{W}_m(t,x) \to_\text{P} 0 $, as $ m\to\infty $, for each fixed $ (t,x) $. This gives \begin{align*} \mathcal{Z}(t,x) = \lim_{m\to\infty} \sum_{n=0}^m\mathcal{Z}_n(t,x), \varepsilonnd{align*} for each $ (t,x) $. Since each $ \mathcal{Z}_n $ is a function of $ \mathcal{Z}^\mathrm{ic} $ and $ \xi $, uniqueness of $ \mathcal{Z}(t,x) $ follows. \varepsilonnd{proof} \subsection{Microscopic} \label{sect:sg} Our goal is to bound the kernel $ \mathbb{Q} (t;x,\widetildex) $ of the microscopic semigroup. Recall the operator $ \mathbb{H} $ defined in~\varepsilonqref{eq:ham} and the definition of $ \nu $ from~\varepsilonqref{eq:Z}. Various operators and parameters (e.g., $ \mathbb{H} $, $ \nu $) depend on $ N $ but, to alleviate heavy notation, we often omit this dependence. Under the weak asymmetry scaling~\varepsilonqref{eq:was}, \begin{align} \label{eq:nuN} \nu=\tfrac{1}{N} + O(\tfrac{1}{N^2}). \varepsilonnd{align} Set $ f= \mathbf{1} _\set{\widetildex} $ in the Feynman--Kac formula~\varepsilonqref{eq:feynmankac} to get \begin{align*} \mathbb{Q} (t;x,\widetildex) = \big( \mathbb{Q} (t) \mathbf{1} _\set{\widetildex} \big)(x) = \mathbf{E} _x\Big[ e^{\int_0^t \nu \mathbb{a} (X^ \mathbb{a} (s))ds} \mathbf{1} _\set{\widetildex}(X^ \mathbb{a} (t)) \Big], \varepsilonnd{align*} where $ X^ \mathbb{a} (t) $ denotes the inhomogeneous walk defined in Section~\ref{sect:hc}.
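When expanding the exponential in this formula, it is convenient to recall the standard time-ordering identity \begin{align*} \frac{1}{n!} \Big( \int_0^t h(s) ds \Big)^n = \int_{0<u_1<\ldots<u_n<t} \prod_{i=1}^n h(u_i) \, du_1\cdots du_n, \varepsilonnd{align*} applied with $ h(s) = \nu \, \mathbb{a} (X^ \mathbb{a} (s)) $, followed by the change of variables $ u_i = s_0+\ldots+s_{i-1} $ that parametrizes the ordered times by their increments $ \vec{s}\in\Sigma_n(t) $.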
Taylor-expanding the exponential function and exchanging the expectation with the sums and integrals yields \begin{align} \notag \mathbb{Q} (t;x,\widetildex) &= \mathbf{E} _{x}\big[ \mathbf{1} _\set{\widetildex}(X^ \mathbb{a} (t))\big] + \sum_{n=1}^\infty \int_{\Sigma_n(t)} \mathbf{E} _{x}\Big[ \prod_{i=1}^n \nu \, \mathbb{a} (X^ \mathbb{a} (s_0+\ldots+s_{i-1})) \mathbf{1} _\set{\widetildex}(X^ \mathbb{a} (t)) \Big] d^n\vec{s} \\ \label{eq:sgker} &= \mathbb{p} a(t;x,\widetildex) + \sum_{n=1}^\infty \mathbb{Q} r_n(t;x,\widetildex), \varepsilonnd{align} where we assume (and will show below) that the sum in~\varepsilonqref{eq:sgker} converges, and the term $ \mathbb{Q} r_n(t;x,\widetildex) $ is defined as \begin{align} \label{eq:sgr} \mathbb{Q} r_n(t;x,\widetildex) &:= \int_{\Sigma_{n}(t)} \mathbb{Q} rI_n(\vec{s};x,\widetildex) d^n\vec{s}, \\ \label{eq:sgrI} \mathbb{Q} rI_n(\vec{s};x,\widetildex) &:= \sum_{x_1,\ldots,x_n\in\mathbb{T}} \prod_{i=0}^n \mathbb{p} a(s_i;x_i,x_{i+1}) \prod_{i=1}^n\nu \mathbb{a} (x_i). \varepsilonnd{align} Note that, unlike in the continuum case in Section~\ref{sect:Sg}, the expansion~\varepsilonqref{eq:sgker} here is rigorous, provided that the sums and integrals in~\varepsilonqref{eq:sgr}--\varepsilonqref{eq:sgrI} converge. We now proceed to establish bounds that will guarantee the convergence of these sums and integrals. Our treatment here parallels Section~\ref{sect:Sg}, starting with a summation-by-parts formula. Similarly to our treatment in Section~\ref{sect:Sg}, here we need to partition $ \mathbb{T} $ into two pieces according to a given pair $ y,\widetildey\in\mathbb{T} $. Unlike in the macroscopic (i.e., continuum) case, here we cannot ignore $ y=\widetildey $.
Given $ y,\widetildey \in \mathbb{T} $, we define \begin{align*} \mathbb{T}_1(y,\widetildey) := \big\{ x\in \mathbb{T}: \mathrm{dist} _{\mathbb{T}}(x,y) \leq \mathrm{dist} _{\mathbb{T}}(x,\widetildey) \wedge \tfrac{N}2 \big\}, \qquad \mathbb{T}_2(y,\widetildey) := \big\{ x\in \mathbb{T}: \mathrm{dist} _{\mathbb{T}}(x,\widetildey) < \mathrm{dist} _{\mathbb{T}}(x,y) \wedge \tfrac{N}2 \big\}. \varepsilonnd{align*} The intervals $ \mathbb{T}_1(y,\widetildey) $ and $ \mathbb{T}_2(y,\widetildey) $ are the microscopic analogs of $ \mathcal{T}_1(y,\widetildey) $ and $ \mathcal{T}_2(\widetildey,y) $, respectively. In particular, $ \mathbb{T}_1(y,\widetildey) $ and $ \mathbb{T}_2(y,\widetildey) $ partition $ \mathbb{T} $ into two pieces, with \begin{align} \label{eq:parti:dist} \begin{split} & \mathrm{dist} _{\mathbb{T}}(y_1,x) \leq \mathrm{dist} _{\mathbb{T}}(y_2,x)+1 \leq 2 \mathrm{dist} _{\mathbb{T}}(y_2,x),\ \forall x\in \mathbb{T}_1(y_1,y_2), \\ & \mathrm{dist} _{\mathbb{T}}(y_2,x) \leq \mathrm{dist} _{\mathbb{T}}(y_1,x),\ \forall x\in \mathbb{T}_2(y_1,y_2). \varepsilonnd{split} \varepsilonnd{align} Write $ \mathrm{mid}_1(y,\widetildey),\mathrm{mid}_2(y,\widetildey)\in\mathbb{T} $ for the boundary points of $ \mathbb{T}_1(y,\widetildey) $ and $ \mathbb{T}_2(y,\widetildey) $. More precisely, $ \mathbb{T}_1(y,\widetildey) = [ \mathrm{mid}_1(y,\widetildey),\mathrm{mid}_2(y,\widetildey) ) $ and $ \mathbb{T}_2(y,\widetildey) = [ \mathrm{mid}_2(y,\widetildey),\mathrm{mid}_1(y,\widetildey) ) $. Recall the definition of $ \mathbb{A} (y,x) $ from~\varepsilonqref{eq:Rt}.
\begin{lemma} \label{lem:ips} Set \begin{align} \label{eq:ips:} \begin{split} \mathbb{U} (s,s';y_1,y_2) := \sum_{j=1}^2 \Big(& \mathbb{p} a(s;y_1,x) \, \nu \, \mathbb{A} (y_j,x) \mathbb{p} a(s';x-1,y_2)\big|_{x=\mathrm{mid}_j(y_1,y_2)}^{x=\mathrm{mid}_{j+1}(y_1,y_2)+1} \\ &- \sum_{x\in\mathbb{T}_1(y_1,y_2)} \big(\nabla_{x} \mathbb{p} a(s;y_1,x)\big) \, \nu \, \mathbb{A} (y_j,x) \mathbb{p} a(s';x+1,y_2) \\ &- \sum_{x\in\mathbb{T}_2(y_1,y_2)} \mathbb{p} a(s;y_1,x) \, \nu \, \mathbb{A} (y_j,x) \nabla_{x} \mathbb{p} a(s';x,y_2) \Big), \varepsilonnd{split} \varepsilonnd{align} where, by convention, we let $ \mathrm{mid}_{3}(y_1,y_2) := \mathrm{mid}_{1}(y_1,y_2) $. Then we have that \begin{align} \label{eq:ips} \mathbb{Q} rI_n(\vec{s};x,\widetildex) = \sum_{y_1,\ldots,y_{n+1}\in\mathbb{T}} \mathbb{p} a(\tfrac{s_0}{2};x,y_1) \Big( \prod_{i=1}^n \mathbb{U} (\tfrac{s_{i-1}}{2},\tfrac{s_{i}}{2};y_{i},y_{i+1}) \Big) \mathbb{p} a(\tfrac{s_{n}}{2};y_{n+1},\widetildex). \varepsilonnd{align} \varepsilonnd{lemma} \begin{proof} Use the semigroup property $ \mathbb{p} a(s_i;x_i,x_{i+1})=\sum_{y_i\in\mathbb{T}} \mathbb{p} a(\frac{s_i}{2};x_i,y_i) \mathbb{p} a(\frac{s_i}{2};y_i,x_{i+1}) $ in~\varepsilonqref{eq:sgrI} to rewrite \begin{align} \label{eqlem:ips} & \mathbb{Q} rI_n(\vec{s};x,\widetildex) = \sum_{y_1,\ldots,y_{n+1}\in\mathbb{T}} \mathbb{p} a(\tfrac{s_0}{2};x,y_1) \,\widetilde{ \mathbb{U} }_1 (s_{0},y_1,s_1,y_{2}) \cdots \widetilde{ \mathbb{U} }_n(s_{n-1},y_n,s_n,y_{n+1}) \, \mathbb{p} a(\tfrac{s_{n}}{2};y_{n+1},\widetildex), \\ \label{eqlem:ips:} &\widetilde{ \mathbb{U} }_i(s_{i-1},y_i,s_i,y_{i+1}) := \sum_{x\in\mathbb{T}} \mathbb{p} a(\tfrac{s_{i-1}}{2};y_{i},x) \, \nu \, \mathbb{a} (x) \, \mathbb{p} a(\tfrac{s_{i}}{2};x,y_{i+1}).
\varepsilonnd{align} In~\varepsilonqref{eqlem:ips:}, divide the sum over $ \mathbb{T} $ into sums over $ \mathbb{T}_1(y_i,y_{i+1}) $ and $ \mathbb{T}_2(y_i,y_{i+1}) $ to get $ \widetilde{ \mathbb{U} }_i=\widetilde{ \mathbb{U} }_{i,1}+\widetilde{ \mathbb{U} }_{i,2} $, where \begin{align*} \widetilde{ \mathbb{U} }_{i,j}(s_{i-1},y_i,s_i,y_{i+1}) &:= \sum_{x\in\mathbb{T}_j(y_i,y_{i+1})} \mathbb{p} a(\tfrac{s_{i-1}}{2};y_{i},x) \, \nu \, \mathbb{a} (x) \, \mathbb{p} a(\tfrac{s_{i}}{2};x,y_{i+1}) \\ &= \sum_{x\in\mathbb{T}_j(y_i,y_{i+1})} \mathbb{p} a(\tfrac{s_{i-1}}{2};y_{i},x) \, \nu \, \big( \nabla_{x} \mathbb{A} (y_{i+j-1},x-1) \big) \, \mathbb{p} a(\tfrac{s_{i}}{2};x,y_{i+1}). \varepsilonnd{align*} Apply summation by parts \begin{align*} \sum_{x\in[x_1,x_2)} f(x)\nabla g(x-1) = - \sum_{x\in[x_1,x_2)} \big( \nabla f(x) \big) g(x) + f(x_2+1)g(x_2) - f(x_1)g(x_1-1) \varepsilonnd{align*} with $ f(x)= \mathbb{p} a(\tfrac{s_{i-1}}{2};y_{i},x) \mathbb{p} a(\tfrac{s_{i}}{2};x,y_{i+1}) $ and $ g(x)= \mathbb{A} (y_{i+j-1},x) $ for $ j=1,2 $, and add the results together. We then conclude $ \widetilde{ \mathbb{U} }_{i}(s_{i-1},y_i,s_i,y_{i+1})= \mathbb{U} (\frac{s_{i-1}}{2},\frac{s_{i}}{2};y_i,y_{i+1}) $. Inserting this back into~\varepsilonqref{eqlem:ips} completes the proof. \varepsilonnd{proof} Given the summation-by-parts formula \varepsilonqref{eq:ips:}, we proceed to establish bounds on $ \mathbb{U} $. Unlike in the macroscopic case, where we assume $ \mathbb{A} lim $ to be deterministic, the treatment of the microscopic semigroup needs to address the randomness of $ \mathbb{a} $. Recall the terminology `with probability $ \to_{\Lambda,N} 1 $' from~\varepsilonqref{eq:wp1}.
\begin{lemma} \label{lem:ipsbd} Given any $ v\in(0, u_{\mathbf{R}t} ) $ and $ T<\infty $, the following holds with probability $ \to_{\Lambda,N} 1 $: \begin{align*} \sum_{y'\in\mathbb{T}} | \mathbb{U} (s,s';y,y')| \leq \frac{ \Lambda \, c(v,T) }{ N^{1+v} } \big( (1+s)^{-(1-v)/2}+(1+s')^{-(1-v)/2} \big), \qquad \forall s,s'\in[0,N^2T], y\in \mathbb{T}. \varepsilonnd{align*} \varepsilonnd{lemma} \begin{proof} Recall the definition of the seminorm $ \hold{\,\Cdot\,}_{u,N} $ from~\varepsilonqref{eq:hold}. With $ v\leq u_{\mathbf{R}t} $, we have $ | \mathbb{A} (y_j,x)| \leq (\frac{|(y_j,x]|}{N})^{v} \hold{ \mathbb{A} }_{ u_{\mathbf{R}t} ,N} $. Further, by~\varepsilonqref{eq:parti:dist}, we have $ |(y_j,x]| \leq 2 \mathrm{dist} _{\mathbb{T}}(y_j,x) $, for all $ x\in\mathbb{T}_j(y_1,y_2) $. Hence \begin{align*} | \mathbb{A} (y_j,x)| \leq 2 \mathrm{dist} _{\mathbb{T}}(y_j,x)^{v} N^{-v} \, \hold{ \mathbb{A} }_{ u_{\mathbf{R}t} ,N}, \qquad \forall x\in\mathbb{T}_j(y_1,y_2). \varepsilonnd{align*} Using this bound in~\varepsilonqref{eq:ips:}, together with $ |\nu| \leq \frac{c}{N} $ (from~\varepsilonqref{eq:nuN}), we obtain \begin{align} \label{eqlem:ipsbd:1} | \mathbb{U} (s,s';y_1,y_2)| \leq \frac{ \hold{ \mathbb{A} }_{ u_{\mathbf{R}t} ,N} }{ N^{1+v} } \sum_{j=1}^2 \Bigg(& \sum_{ x\in\{\mathrm{mid}_i(y_1,y_2)\}_{i=1}^2 } \mathbb{p} a(s;y_1,x) \, \mathrm{dist} _{\mathbb{T}}(y_j,x)^{v} \, \mathbb{p} a(s';x,y_2) \\ \label{eqlem:psbd:2} &+ \sum_{x\in\mathbb{T}_1(y_1,y_2)} \big|\nabla_{x} \mathbb{p} a(s;y_1,x)\big| \mathrm{dist} _{\mathbb{T}}(y_j,x)^{v} \, \mathbb{p} a(s';x+1,y_2) \\ \label{eqlem:psbd:3} & + \sum_{x\in\mathbb{T}_2(y_1,y_2)} \mathbb{p} a(s;y_1,x) \, \mathrm{dist} _{\mathbb{T}}(y_j,x)^{v} \big|\nabla_{x} \mathbb{p} a(s';x,y_2)\big| \Bigg).
\varepsilonnd{align} In~\varepsilonqref{eqlem:psbd:2}, use~\varepsilonqref{eq:parti:dist} to bound $ \mathrm{dist} _{\mathbb{T}}(y_j,x)^{v} $ by $ 2 \mathrm{dist} _{\mathbb{T}}(y_1,x)^v $, and in~\varepsilonqref{eqlem:psbd:3}, use~\varepsilonqref{eq:parti:dist} to bound $ \mathrm{dist} _{\mathbb{T}}(y_j,x)^{v} $ by $ 2 \mathrm{dist} _{\mathbb{T}}(y_2,x)^v $. This gives \begin{align} \label{eq:ips:bd:} \begin{split} &| \mathbb{U} (s,s';y_1,y_2)| \leq \frac{ \hold{ \mathbb{A} }_{ u_{\mathbf{R}t} ,N} }{ N^{1+v} } \Bigg( 2 \sum_{j=1}^2 \sum_{ x\in\{\mathrm{mid}_i(y_1,y_2)\}_{i=1}^2 } \mathbb{p} a(s;y_1,x) \, \mathrm{dist} _{\mathbb{T}}(y_j,x)^{v} \, \mathbb{p} a(s';x,y_2) \\ & + 2\sum_{x\in\mathbb{T}} \big|\nabla_{x} \mathbb{p} a(s;y_1,x)\big| \mathrm{dist} _{\mathbb{T}}(y_1,x)^{v} \, \mathbb{p} a(s';x+1,y_2) + 2\sum_{x\in\mathbb{T}} \mathbb{p} a(s;y_1,x) \, \mathrm{dist} _{\mathbb{T}}(y_2,x)^{v} \, \big|\nabla_{x} \mathbb{p} a(s';x,y_2)\big| \Bigg). \varepsilonnd{split} \varepsilonnd{align} For the kernel $ \mathbb{p} (t,x,\widetildex) $ of the homogeneous walk, we indeed have $ \sum_{\widetildex\in\mathbb{T}} \mathbb{p} (t,x,\widetildex) = 1 $. Since $ \mathbb{p} a= \mathbb{p} + \mathbb{p} r $, the preceding identity together with the bound from Proposition~\ref{prop:hk}\ref{cor:hk:hkrsum} yields, with probability $ \to_{\Lambda,N} 1 $, \begin{align} \label{eq:hk:hka:summ} \sum_{\widetildex\in\mathbb{T}} \mathbb{p} a(t,x,\widetildex) \leq 2. \varepsilonnd{align} Now sum~\varepsilonqref{eq:ips:bd:} over $ y_2\in\mathbb{T} $, and use \varepsilonqref{eq:hk:hka:summ} and Proposition~\ref{prop:hk}\ref{cor:hk:hkasup}, \ref{cor:hk:hkahold:sup}--\ref{cor:hk:hkahold::} to bound the result. This gives \begin{align*} &\sum_{y_2\in\mathbb{T}}| \mathbb{U} (s,s';y_1,y_2)| \leq \frac{ c(v,T) \hold{ \mathbb{A} }_{ u_{\mathbf{R}t} ,N} }{ N^{1+v} } \big( (s+1)^{-(1-v)/2} + (s'+1)^{-(1-v)/2} \big).
\varepsilonnd{align*} Recalling Assumption~\ref{assu:rt}\ref{assu:rt:holder} on $ \hold{ \mathbb{A} }_{ u_{\mathbf{R}t} ,N} $, we conclude the desired result. \varepsilonnd{proof} Based on Lemmas~\ref{lem:ips}--\ref{lem:ipsbd}, we now establish bounds on $ \mathbb{Q} rI_n $. \begin{lemma} \label{lem:sgrIbd} Given any $ u\in(0,1] $, $ v\in(0, u_{\mathbf{R}t} ) $, and $ T<\infty $, the following hold with probability $ \to_{\Lambda,N} 1 $: \begin{enumerate}[label=(\alph*),leftmargin=7ex] \item \label{lem:sgrI:sum} $ \displaystyle \sum_{\widetildex\in\mathbb{T}} \int_{\Sigma_n(t)} | \mathbb{Q} rI_n(\vec{s};x,\widetildex)| d^n\vec{s} \leq (tN^{-2})^{\frac{1+v}2} \frac{ \Lambda^n}{\Gamma(\frac{(1+v)n+2}{2})} $, \qquad $ t\in[0,N^2T] $, $ \forall x\in\mathbb{T} $, $ n\in\mathbf{Z}_{>0} $, \item \label{lem:sgrI:gsum} $ \displaystyle \sum_{\widetildex\in\mathbb{T}} \int_{\Sigma_n(t)} | \mathbb{Q} rI_n(\vec{s};x,\widetildex)- \mathbb{Q} rI_n(\vec{s};x',\widetildex)| d^n\vec{s}\ \leq \frac{ \mathrm{dist} _{\mathbb{T}}(x,x')^{u}}{(t+1)^{u/2}} (tN^{-2})^{\frac{1+v}2} \frac{\Lambda^n }{\Gamma(\frac{(1+v)n+2-u}{2})}, $ \newline $ t\in[0,N^2T] $, $ \forall x,x'\in\mathbb{T} $, $ n\in\mathbf{Z}_{>0} $, \item \label{lem:sgrI:sup} $ \displaystyle \int_{\Sigma_n(t)} | \mathbb{Q} rI_n(\vec{s};x,\widetildex)| d^n\vec{s} \leq \frac{(tN^{-2})^{\frac{1+v}2} }{(t+1)^{1/2}} \frac{\Lambda^n}{\Gamma(\frac{(1+v)n+1}{2})}, $ \qquad $ t\in[0,N^2T] $, $ \forall x,\widetildex\in\mathbb{T} $, $ n\in\mathbf{Z}_{>0} $, \item \label{lem:sgrI:gsup} $ \displaystyle \int_{\Sigma_n(t)} | \mathbb{Q} rI_n(\vec{s};x,\widetildex)- \mathbb{Q} rI_n(\vec{s};x',\widetildex)| d^n\vec{s} \leq \frac{ \mathrm{dist} _{\mathbb{T}}(x,x')^{u}}{(t+1)^{(1+u)/2}}(tN^{-2})^{\frac{1+v}2} \frac{\Lambda^n}{\Gamma(\frac{(1+v)n+1-u}{2})}, $ \qquad $ t\in[0,N^2T] $, $ \forall x,x',\widetildex\in\mathbb{T} $, $ n\in\mathbf{Z}_{>0} $.
\varepsilonnd{enumerate} \varepsilonnd{lemma} \begin{proof} The proof follows by the same line of calculation as in the proof of Lemma~\ref{lem:SgrIbd}, with $ \mathbb{p} a $, $ \mathbb{U} $, $ \mathbb{Q} rI_n $ replacing $ \mathcal{P} $, $ \mathcal{U} $, $ \mathcal{Q} rI_n $, and with sums replacing integrals accordingly. In particular, in place of~\varepsilonqref{eqSglem:a}--\varepsilonqref{eqSglem:d}, here we have, with probability $ \to_{\Lambda,N} 1 $, \begin{subequations} \label{eqsglem:} \begin{align} \label{eqsglem:a} \sum_{\widetildex\in\mathbb{T}} | \mathbb{Q} rI_n(\vec{s};x,\widetildex)| &\leq \big( N^{-1-v}\Lambda \big)^n \prod_{i=1}^n \big( s_{i-1}^{-(1-v)/2}+s_{i}^{-(1-v)/2} \big), \\ \label{eqsglem:b} \sum_{\widetildex\in\mathbb{T}} | \mathbb{Q} rI_n(\vec{s};x,\widetildex)- \mathbb{Q} rI_n(\vec{s};x',\widetildex)| &\leq \big( N^{-1-v}\Lambda \big)^n \mathrm{dist} _{\mathbb{T}}(x,x')^{u} s_0^{-\frac{u}{2}} \prod_{i=1}^n \big( s_{i-1}^{-(1-v)/2}+s_{i}^{-(1-v)/2} \big), \\ | \mathbb{Q} rI_n(\vec{s};x,\widetildex)| &\leq \big( N^{-1-v}\Lambda \big)^n \prod_{i=1}^n \big( s_{i-1}^{-(1-v)/2}+s_{i}^{-(1-v)/2} \big) s_n^{-\frac12}, \\ \label{eqsglem:d} | \mathbb{Q} rI_n(\vec{s};x,\widetildex)- \mathbb{Q} rI_n(\vec{s};x',\widetildex)| &\leq \mathrm{dist} _{\mathbb{T}}(x,x')^{u} \big( N^{-1-v}\Lambda \big)^n s_0^{-\frac{u}{2}} \prod_{i=1}^n \big( s_{i-1}^{-(1-v)/2}+s_{i}^{-(1-v)/2} \big) s_n^{-\frac12}. \varepsilonnd{align} \varepsilonnd{subequations} Given~\varepsilonqref{eqsglem:a}--\varepsilonqref{eqsglem:d}, the rest of the proof follows by applying the Dirichlet formula~\varepsilonqref{eq:dirichlet}. We omit repeating the argument. \varepsilonnd{proof} We now proceed to establish bounds on $ \mathbb{Q} r $. In the following, we will often decompose $ \mathbb{Q} $ into the sum of $ \mathbb{p} $, the kernel of the homogeneous walk on $ \mathbb{T} $, and a remainder term $ \mathbb{Q} r := \mathbb{Q} - \mathbb{p} $.
Recall from~\varepsilonqref{eq:hkr} that $ \mathbb{p} a(t;x,\widetildex)= \mathbb{p} (t;x,\widetildex)+ \mathbb{p} r(t;x,\widetildex) $. Referring to \varepsilonqref{eq:sgker}, we see that \begin{align*} \mathbb{Q} r(t;x,\widetildex) := \mathbb{Q} (t;x,\widetildex)- \mathbb{p} (t;x,\widetildex) = \mathbb{p} r(t;x,\widetildex) + \sum_{n=1}^\infty \mathbb{Q} r_n(t;x,\widetildex). \varepsilonnd{align*} \begin{proposition} \label{prop:sg} Fix $ u\in(0,1] $, $ v\in(0, u_{\mathbf{R}t} ) $ and $ T<\infty $. The following hold with probability $ \to_{\Lambda,N} 1 $: \begin{enumerate}[label=(\alph*),leftmargin=7ex] \item \label{prop:sg:sum} $ \displaystyle \sum_{\widetildex\in\mathbb{T}} \mathbb{Q} (t;x,\widetildex) \leq \Lambda, $ \qquad $ t\in[0,N^2T] $, $ \forall x\in\mathbb{T} $, \item \label{prop:sg:sup} $ \displaystyle \mathbb{Q} (t;x,\widetildex) \leq \frac{\Lambda}{\sqrt{t+1}}, $ \qquad $ t\in[0,N^2T] $, $ \forall x,\widetildex\in\mathbb{T} $, \item \label{prop:sg:gsup} $ \displaystyle | \mathbb{Q} (t;x,\widetildex)- \mathbb{Q} (t;x',\widetildex)| \leq \frac{ \mathrm{dist} _{\mathbb{T}}(x,x')^u}{(t+1)^{(u+1)/2}}\Lambda, $ \qquad $ t\in[0,N^2T] $, $ \forall x,x',\widetildex\in\mathbb{T} $, \item \label{prop:sg:gsum} $ \displaystyle \sum_{\widetildex\in\mathbb{T}}| \mathbb{Q} (t;x,\widetildex)- \mathbb{Q} (t;x',\widetildex)| \leq \frac{ \mathrm{dist} _{\mathbb{T}}(x,x')^u}{(t+1)^{u/2}}\Lambda, $ \qquad $ t\in[0,N^2T] $, $ \forall x,x'\in\mathbb{T} $, \item \label{prop:sgr:sum} $ \displaystyle \sum_{\widetildex\in\mathbb{T}} | \mathbb{Q} r(t;x,\widetildex)| \leq (tN^{-2})^{v/2}\Lambda, $ \qquad $ t\in[0,N^2T] $, $ \forall x\in\mathbb{T} $, \item \label{prop:sgr:gsum} $ \displaystyle \sum_{\widetildex\in\mathbb{T}}| \mathbb{Q} r(t;x,\widetildex)- \mathbb{Q} r(t;x',\widetildex)| \leq \Big( \frac{ \mathrm{dist} _{\mathbb{T}}(x,x')^u}{(1+t)^{u/2}}N^{-v} + \Big(\frac{ \mathrm{dist} _{\mathbb{T}}(x,x')}{N}\Big)^u \Big)\Lambda, $ \quad $ t\in[0,N^2T] $, $ \forall
x,x'\in\mathbb{T} $, \item \label{prop:sgr:gsup} $ \displaystyle | \mathbb{Q} r(t;x,\widetildex)- \mathbb{Q} r(t;x',\widetildex)| \leq \Big( \frac{ \mathrm{dist} _{\mathbb{T}}(x,x')^u}{(1+t)^{(u+1)/2}}N^{-v} + \frac{( \mathrm{dist} _{\mathbb{T}}(x,x')/N)^u}{(1+t)^{1/2}} \Big)\Lambda, $ \quad $ t\in[0,N^2T] $, $ \forall x,x',\widetildex\in\mathbb{T} $, \item \label{prop:sg:loc} $ \displaystyle \sup_{t'\in[t,t+1]} \mathbb{Q} (t';x,\widetildex) \leq \Lambda \mathbb{Q} (t+1;x,\widetildex), $ \qquad $ t\in[0,N^2T] $, $ \forall x,\widetildex\in\mathbb{T} $. \varepsilonnd{enumerate} \varepsilonnd{proposition} \begin{proof} Let $ \widetilde{ \mathbb{Q} r}(t;x,\widetildex) := \sum_{n\geq 1} \mathbb{Q} r_n(t;x,\widetildex) $. Summing the r.h.s.\ of Lemma~\ref{lem:sgrIbd}\ref{lem:sgrI:sum}--\ref{lem:sgrI:gsup} gives, with probability $ \to_{\Lambda,N} 1 $, \begin{enumerate}[label=(\mathbf{R}oman*),leftmargin=7ex] \item \label{sgr:sum} $ \displaystyle \sum_{\widetildex\in\mathbb{T}} |\widetilde{ \mathbb{Q} r}(t;x,\widetildex)| \leq (tN^{-2})^{\frac{1+v}2} \Lambda $, \item \label{sgr:gsum} $ \displaystyle \sum_{\widetildex\in\mathbb{T}} |\widetilde{ \mathbb{Q} r}(t;x,\widetildex)-\widetilde{ \mathbb{Q} r}(t;x',\widetildex)| \leq \frac{ \mathrm{dist} _{\mathbb{T}}(x,x')^{u}}{(t+1)^{u/2}} (tN^{-2})^{\frac{1+v}2} \Lambda $, \item \label{sgr:sup} $ \displaystyle |\widetilde{ \mathbb{Q} r}(t;x,\widetildex)| \leq \frac{(tN^{-2})^{\frac{1+v}2} }{(t+1)^{1/2}} \Lambda $, \item \label{sgr:gsup} $ \displaystyle |\widetilde{ \mathbb{Q} r}(t;x,\widetildex)-\widetilde{ \mathbb{Q} r}(t;x',\widetildex)| \leq \frac{ \mathrm{dist} _{\mathbb{T}}(x,x')^{u}}{(t+1)^{(1+u)/2}}(tN^{-2})^{\frac{1+v}2} \Lambda $.
\varepsilonnd{enumerate} Given that $ \mathbb{Q} (t)= \mathbb{p} a(t)+\widetilde{ \mathbb{Q} r}(t) $: \begin{itemize} \item \ref{prop:sg:sum} follows by combining $ \sum_{\widetildex} \mathbb{p} a(t;x,\widetildex)=1 $ and \ref{sgr:sum}; \item \ref{prop:sg:sup} follows by combining Proposition~\ref{prop:hk}\ref{cor:hk:hkasup} and \ref{sgr:sup}; \item \ref{prop:sg:gsup} follows by combining Proposition~\ref{prop:hk}\ref{cor:hk:hkagdsup} and \ref{sgr:gsup}; \item \ref{prop:sg:gsum} follows by combining Proposition~\ref{prop:hk}\ref{cor:hk:hkagdsum} and \ref{sgr:gsum}. \varepsilonnd{itemize} Given that $ \mathbb{Q} r(t)= \mathbb{p} r(t)+\widetilde{ \mathbb{Q} r}(t) $, \begin{itemize} \item \ref{prop:sgr:sum} follows by combining Proposition~\ref{prop:hk}\ref{cor:hk:hkrsum} and \ref{sgr:sum} (note that $ (tN^{-2})^{(v+1)/2} \leq c(T)(tN^{-2})^{v/2} $). \item With $ t\leq N^2T $, we have $ \frac{1}{(t+1)^{u/2}} (tN^{-2})^{\frac{1+v}2} \leq c(T) N^{-u} $. Hence, by \ref{sgr:gsum}, with probability $ \to_{\Lambda,N} 1 $, \begin{align*} \sum_{\widetildex\in\mathbb{T}} |\widetilde{ \mathbb{Q} r}(t;x,\widetildex)-\widetilde{ \mathbb{Q} r}(t;x',\widetildex)| \leq \frac{ \mathrm{dist} _{\mathbb{T}}(x,x')^u}{N^u} \Lambda. \varepsilonnd{align*} Combining this with Proposition~\ref{prop:hk}\ref{cor:hk:hkrgsum} gives~\ref{prop:sgr:gsum}. \item Similarly to the preceding, by \ref{sgr:gsup}, with probability $ \to_{\Lambda,N} 1 $, \begin{align*} |\widetilde{ \mathbb{Q} r}(t;x,\widetildex)-\widetilde{ \mathbb{Q} r}(t;x',\widetildex)| \leq \frac{( \mathrm{dist} _{\mathbb{T}}(x,x')/N)^u}{(1+t)^{1/2}} \Lambda. \varepsilonnd{align*} Combining this with Proposition~\ref{prop:hk}\ref{cor:hk:hkrgsup} gives~\ref{prop:sgr:gsup}. \item Finally, to show~\ref{prop:sg:loc}, we fix $ t'\in[t,t+1] $, and set $ \delta:=t+1-t'\leq 1 $.
With $ \mathbb{Q} (t;x,y) \geq 0 $, we write \begin{align} \label{eq:sgloc:} \mathbb{Q} (t+1;x,\widetildex) = \sum_{y\in\mathbb{T}} \mathbb{Q} (\delta;x,y) \mathbb{Q} (t';y,\widetildex) \geq \mathbb{Q} (\delta;x,x) \mathbb{Q} (t';x,\widetildex). \varepsilonnd{align} Given that $ \delta \leq 1 $, we indeed have $ \mathbb{p} (\delta;x,x) \geq \mathbf{P} _x[ X(s)=x, \forall s\in[0,1] ] \geq \frac{1}{c} $. With $ \mathbb{Q} (\delta)= \mathbb{p} (\delta)+ \mathbb{p} r(\delta)+\widetilde{ \mathbb{Q} r}(\delta) $, combining the preceding lower bound on $ \mathbb{p} (\delta;x,x) $ with Proposition~\ref{prop:hk}\ref{cor:hk:hkrsup} and \ref{sgr:sup}, we now have, with probability $ \to_{\Lambda,N}1 $, $ \mathbb{Q} (\delta;x,x) \geq \frac1c - N^{-v}\Lambda \to \frac1c >0 $. Inserting this back into~\varepsilonqref{eq:sgloc:} yields~\ref{prop:sg:loc}. \varepsilonnd{itemize} \varepsilonnd{proof} We conclude this section by establishing the convergence of the microscopic semigroup $ \mathbb{Q} (t) $ to its macroscopic counterpart $ \mathcal{Q} (t) $. Recall from Assumption~\ref{assu:rt}\ref{assu:rt:limit} that $ \mathbb{A} lim $ and $ \mathbb{A} $ are \varepsilonmph{coupled}. The semigroups $ \mathcal{Q} (t) $ and $ \mathbb{Q} (t) $ being constructed from $ \mathbb{A} lim $ and $ \mathbb{A} $, the coupling in Assumption~\ref{assu:rt}\ref{assu:rt:limit} induces a coupling of $ \mathcal{Q} (t) $ and $ \mathbb{Q} (t) $. \begin{proposition} \label{prop:sgtoSg} Set $ \mathbb{Q} _N(t;x,\widetildex) := N \mathbb{Q} (tN^2;Nx,N\widetildex) $, and linearly interpolate in $ x $ and $ \widetildex $ so that $ \mathbb{Q} _N(t;x,\widetildex) $ defines a kernel on $ \mathcal{T} $. Given any $ T<\infty $, $ u>0 $, and $ f\in C(\mathcal{T}) $, we have that \begin{align*} \sup_{x\in\mathcal{T}} \sup_{t\in[0,T]} \Big| \big( \mathcal{Q} (t)f- \mathbb{Q} _N(t)f \big)(x) \Big| \longrightarrow_\text{P} 0. 
\varepsilonnd{align*} \varepsilonnd{proposition} \begin{proof} Set $ \mathbb{p} a_N(t;x,\widetildex):= N \mathbb{p} a(N^2t;Nx,N\widetildex) $, $ \mathbb{p} _N(t;x,\widetildex):= N \mathbb{p} (N^2t;Nx,N\widetildex) $, $ \mathbb{p} r_N(t;x,\widetildex):= N \mathbb{p} r(N^2t;Nx,N\widetildex) $, and $ \mathbb{Q} r_{n,N}(t;x,\widetildex) := N \mathbb{Q} r_n(tN^2;Nx,N\widetildex) $, and linearly interpolate these kernels in $ x $ and $ \widetildex $. Recall from~\varepsilonqref{eq:Sgker} and \varepsilonqref{eq:sgker} that $ \mathcal{Q} (t) $ and $ \mathbb{Q} (t) $ are given in terms of $ \mathcal{Q} r_n(t) $ and $ \mathcal{P} (t) $, and $ \mathbb{Q} r_n(t) $ and $ \mathbb{p} a $, respectively. Finally, recall that $ \mathbb{p} a(t)= \mathbb{p} (t)+ \mathbb{p} r(t) $. We may write \begin{align*} \Big| \big( \mathcal{Q} (t)f- \mathbb{Q} _N(t)f \big)(x) \Big| \leq &\Big| \int_{\mathcal{T}} \big( \mathcal{P} (t;x,\widetildex) - \mathbb{p} _N(t,x,\widetildex) \big) f(\widetildex) d\widetildex \Big| + \norm{f}_{L^\infty(\mathcal{T})} \int_{\mathcal{T}} | \mathbb{p} r_N(t;x,\widetildex)| d\widetildex \\ &+ \norm{f}_{L^\infty(\mathcal{T})} \sum_{n=1}^\infty \sup_{x\in\mathcal{T}} \sup_{t\in[0,T]} \int_{\mathcal{T}}| \mathcal{Q} r_n(t;x,\widetildex)- \mathbb{Q} r_{n,N}(t;x,\widetildex)|d\widetildex. \varepsilonnd{align*} Given that $ f\in C(\mathcal{T}) $, with the aid of Lemma~\ref{lem:hk}, it is standard to check that: \begin{align*} \sup_{x\in\mathcal{T}} \Big| \int_{\mathcal{T}} \big( \mathcal{P} (t;x,\widetildex) - \mathbb{p} _N(t;x,\widetildex) \big) f(\widetildex) d\widetildex \Big| \longrightarrow 0, \qquad \textrm{as } N\to\infty. \varepsilonnd{align*} By Proposition~\ref{prop:hk}\ref{cor:hk:hkrsum}, we have \begin{align*} \sup_{x\in\mathcal{T}} \sup_{t\in[0,T]} \int_{\mathcal{T}} | \mathbb{p} r_N(t;x,\widetildex)| d\widetildex \longrightarrow_\text{P} 0, \qquad \textrm{as } N\to\infty.
\varepsilonnd{align*} Further, by Lemmas~\ref{lem:SgrIbd}\ref{lem:SgrI:int} and \ref{lem:sgrIbd}\ref{lem:sgrI:sum}, we have, with probability $ \to_{\Lambda,N} 1 $, \begin{align*} \sum_{n \geq 1}\sup_{x\in\mathcal{T}} \sup_{t\in[0,T]} \int_{\mathcal{T}}| \mathcal{Q} r_n(t;x,\widetildex)|d\widetildex <\Lambda, \qquad \textrm{and} \qquad \sum_{n \geq 1}\sup_{x\in\mathbb{T}} \sup_{t\in[0,N^2T]} \sum_{\widetildex\in\mathbb{T}}| \mathbb{Q} r_{n}(t;x,\widetildex)| <\Lambda. \varepsilonnd{align*} Given this, it suffices to check, for each fixed $ n \geq 1 $, \begin{align*} \sup_{x\in\mathcal{T}} \sup_{t\in[0,T]} \int_{\mathcal{T}}\big| \mathcal{Q} r_n(t;x,\widetildex)- \mathbb{Q} r_{n,N}(t;x,\widetildex) \big|d\widetildex \longrightarrow_\text{P} 0, \qquad \textrm{as } N\to\infty. \varepsilonnd{align*} Such a statement is straightforward (though tedious) to check from the given expressions \varepsilonqref{eq:Sgr}--\varepsilonqref{eq:SgrI}, \varepsilonqref{eq:Ips} and \varepsilonqref{eq:sgr}--\varepsilonqref{eq:sgrI}, \varepsilonqref{eq:ips} of $ \mathcal{Q} r_n $ and $ \mathbb{Q} r_{n} $, with the aid of Lemmas~\ref{lem:hk}, \ref{lem:SgrIbd}, and \ref{lem:sgrIbd}. We omit the details here. \varepsilonnd{proof} \section{Moment bounds and tightness} \label{sect:mom} Recall that $ Z_N(t,x)=Z(tN^{2},xN) $ denotes the scaled process in~\varepsilonqref{eq:Z}. The goal of this section is to show the tightness of $ \{Z_N\}_N $. For the case of the homogeneous \ac{ASEP}, tightness is shown by establishing moment bounds on $ Z_N $ through iterating the microscopic equation (analogous to~\varepsilonqref{eq:Lang:int}); see \cite[Section~4]{bertini97} and also \cite[Section~3]{corwin18}. Here we proceed under the same general strategy. A major difference here is that the kernel $ \mathbb{Q} (t;x,x') $ (which governs the microscopic equation~\varepsilonqref{eq:Lang:int}) is itself random. We hence proceed by conditioning.
For given $ u\in(0,1] $, $ v\in(0, u_{\mathbf{R}t} ) $, $ \Lambda,T<\infty $, let \begin{align} \label{eq:Omega} \Omega(u,v,\Lambda,T,N):=\{ \text{properties in Proposition~\ref{prop:sg} hold and } \hold{ \mathbb{A} }_{ u_{\mathbf{R}t} ,N} \leq \Lambda \}. \varepsilonnd{align} Recall $ M(s,x) $ from~\varepsilonqref{eq:mg}. \begin{lemma} \label{lem:BDG} Fix $ k>1 $. Write $ \mathbf{E} rt[\,\Cdot \,]:= \mathbf{E} [ \, \Cdot \, | \mathbb{a} (x),x\in\mathbb{T}] $ for the conditional expectation quenching the inhomogeneity, and write $ \normrt{\,\Cdot\,}{k} := ( \mathbf{E} rt[\,(\Cdot)^k\,])^{1/k} $ for the corresponding norm. Given any deterministic $ f:[0,\infty)\times\mathbb{T}\to\mathbf{R} $, \begin{align*} \mathbb{N}ormrt{ \int_{i}^{i'} \sum_{x\in\mathbb{T}} f(s,x)dM(s,x) }{k}^2 \leq \frac{c(k)}{N} \sum_{i\leq j<i'} \sum_{x\in\mathbb{T}}\Big(\sup_{s\in[j,j+1]} f^2(s,x)\Big) \normrt{ Z(j,x) }{k}^2, \varepsilonnd{align*} for all $ i<i'\in\mathbf{Z}_{\geq 0} $. \varepsilonnd{lemma} \begin{proof} The conditional expectation $ \mathbf{E} rt[\,\Cdot \,]:= \mathbf{E} [ \, \Cdot \, | \mathbb{a} (x),x\in\mathbb{T}] $ amounts to fixing a realization of $ \{ \mathbb{a} (x)\}_{x\in\mathbb{T}} $ that satisfies Assumption~\ref{assu:rt}. In fact, only Assumption~\ref{assu:rt}\ref{assu:rt:bdd} will be relevant toward the proof. With this in mind, throughout this proof we view $ \mathbb{a} (x) $ as \varepsilonmph{deterministic} functions satisfying~Assumption~\ref{assu:rt}\ref{assu:rt:bdd}. For fixed $ i\in\mathbf{Z}_{\geq 0} $, consider the discrete-time martingale $ \widetilde{M}(i') := \sum_{j=i}^{i'-1} J(j) $, $ i'=i+1,i+2,\ldots $, with increment $ J(j):= \int_{j}^{j+1} \sum_{x\in\mathbb{T}} f(s,x)dM(s,x) $. Write $ \mathscr{F}(i'):=\sigma(J(i),\ldots,J(i')) $ for the canonical filtration.
Burkholder's inequality applied to $ \widetilde{M} $ gives \begin{align} \label{eq:burkholder} \normrt{ \widetilde{M}(i') }{k}^2 \leq c(k) \mathbb{N}ormrt{ \sum_{i\leq j<i'} \mathbf{E} rt\big[ J(j)^2 \big| \mathscr{F}(j) \big] }{k}. \varepsilonnd{align} We may compute $ \mathbf{E} rt[ J(j)^2 | \mathscr{F}(j) ] = \mathbf{E} rt[ \int_{j}^{j+1} \sum_{x,x'} f(s,x)f(s,x') d\langle M(s,x),M(s,x')\rangle | \mathscr{F}(j) ] $. The quadratic variation $ \langle M(s,y),M(s,y')\rangle $ is calculated in~\varepsilonqref{eq:qv}. Under Assumption~\ref{assu:rt}\ref{assu:rt:bdd}, $ \mathbb{a} t(x) $ is uniformly bounded, and the weak asymmetry scaling~\varepsilonqref{eq:was} gives $ (\tau-1)^{2}, (\tau^{-1}-1)^2 \leq \frac{c}{N} $. Using these properties in~\varepsilonqref{eq:qv} gives \begin{align} \label{eq:qv:bd} |\tfrac{d~}{dt}\langle M(t,x), M(t,x') \rangle | \leq \tfrac{c}{N} \mathbf{1} _\set{x=x'}Z^2(t,x), \varepsilonnd{align} whereby \begin{align} \label{eq:bdg:J} \mathbf{E} rt[ J(j)^2 | \mathscr{F}(j) ] \leq \frac{c}{N} \sum_{x\in\mathbb{T}} \mathbf{E} rt\Big[ \int_{j}^{j+1} f(s,x)^2 Z(s,x)^2 ds \Big| \mathscr{F}(j)\Big]. \varepsilonnd{align} Fix $ x\in\mathbb{T} $. Assumption~\ref{assu:rt}\ref{assu:rt:bdd} asserts that the Poisson clocks $ P_\leftarrow(t,x) $ and $ P_\rightarrow(t,x) $ that dictate jumps between $ x $ and $ x+1 $ have bounded rates. Each jump changes $ Z(t,x) $ by a factor of $ \varepsilonxp(\pm\frac{c}{\sqrt{N}}) $ (see~\varepsilonqref{eq:Z} and~\varepsilonqref{eq:was}). This being the case, we have \begin{align} \label{eq:locZbd>} Z(s,x) &\leq e^{\frac{X(j,x)}{\sqrt{N}}} Z(j,x), \qquad s\in[j,j+1), \\ \label{eq:locZbd<} Z(s,x) &\geq e^{-\frac{\widetilde{X}(j,x)}{\sqrt{N}}} Z(j,x), \qquad s\in[j,j+1), \varepsilonnd{align} for some $ X(j,x),\widetilde{X}(j,x) $ that are stochastically dominated by Poisson($ c $), and are \varepsilonmph{independent} of the sigma algebra $ \mathscr{F} (t) $ defined in~\varepsilonqref{eq:filZ}.
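Let us record why the Poisson factor in~\varepsilonqref{eq:locZbd>} is harmless: if $ P $ is stochastically dominated by Poisson($ c $) and $ k $ is fixed, the Poisson moment generating function gives, uniformly in $ N $, \begin{align*} \mathbf{E} \big[ e^{kP/\sqrt{N}} \big] \leq \varepsilonxp\big( c ( e^{k/\sqrt{N}} - 1 ) \big) \leq \varepsilonxp\big( c ( e^{k} - 1 ) \big), \qquad \forall N \geq 1. \varepsilonnd{align*} Together with the independence of $ X(j,x) $ from the conditioning, this allows the factor $ e^{2X(j,x)/\sqrt{N}} $ arising from~\varepsilonqref{eq:locZbd>} to be absorbed into the constant $ c $ in the next display.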
Now, use \varepsilonqref{eq:locZbd>} in \varepsilonqref{eq:bdg:J} to get \begin{align*} \mathbf{E} rt[ J(j)^2 | \mathscr{F}(j) ] \leq \frac{c}{N} \sum_{x\in\mathbb{T}}\Big(\sup_{s\in[j,j+1]} f(s,x)^2 \Big) Z(j,x)^2. \varepsilonnd{align*} Inserting this back into~\varepsilonqref{eq:burkholder} yields the desired result. \varepsilonnd{proof} Recall from~\varepsilonqref{eq:nearst} that $ u_\mathrm{ic} >0 $ is the H\"{o}lder exponent of $ Z_\mathrm{ic}(\Cdot) $. \begin{proposition}\label{prop:mom} Fix $ u\in(0,1) $, $ v\in(0, u_{\mathbf{R}t} ) $, $ k>1 $, and $ \Lambda,T<\infty $. Let $ \mathbf{E} rt[\,\Cdot \,] $ be as in Lemma~\ref{lem:BDG}, and further, write $ \mathbf{E} O[\,\Cdot \,]:= \mathbf{E} rt[ (\, \Cdot \, ) \mathbf{1} _{\Omega(u,v,\Lambda,T,N)}]= \mathbf{E} rt[ \, \Cdot \, ] \mathbf{1} _{\Omega(u,v,\Lambda,T,N)} $ (the equality holds because the event $ \Omega(u,v,\Lambda,T,N) $ is determined by $ \{ \mathbb{a} (x)\}_{x\in\mathbb{T}} $), and let $ \normO{\,\Cdot\,}{k}:= \mathbf{E} O[(\,\Cdot\,)^k]^{1/k} $ denote the corresponding norm. There exists $ c=c(u,v,k,\Lambda,T) $ such that, for all $ x,x'\in\mathbb{T} $ and $ t,t'\in[0, N^2T] $, \begin{subequations} \begin{align} \label{eq:Zmom} \normO{ Z(t,x) }{k} &\leq c, \\ \label{eq:gZmom} \normO{ Z(t,x)-Z(t,x') }{k} &\leq c \Big(\frac{ \mathrm{dist} _\mathbb{T}(x,x')}{N}\Big)^{\frac{u}{2}\wedge u_\mathrm{ic} \wedge v}, \\ \label{eq:tZmom} \normO{ Z(t',x)-Z(t,x) }{k} &\leq c \Big(\frac{|t'-t|\vee 1}{N^2}\Big)^{\frac{u}{4}\wedge\frac{ u_\mathrm{ic} }{2}\wedge\frac{v}{2}}, \varepsilonnd{align} \varepsilonnd{subequations} \varepsilonnd{proposition} \begin{proof} With $ u\in(0,1) $, $ v\in(0, u_{\mathbf{R}t} ) $, $ k>1 $, and $ \Lambda,T<\infty $ fixed, throughout this proof we write $ c=c(u,v,k,T,\Lambda) $ to simplify notation. As declared previously, the value of the constant may change from line to line.
Following the same convention as in the proof of Lemma~\ref{lem:BDG}, throughout this proof we view $ \mathbb{a} (x) $ and $ \mathbb{Q} (t;x,\widetildex) $ as \varepsilonmph{deterministic} functions and, on the event $ \Omega(u,v,\Lambda,T,N) $ from~\varepsilonqref{eq:Omega}, assume that the properties in Proposition~\ref{prop:sg}\ref{prop:sg:sum}--\ref{prop:sg:loc} hold. Let us begin by considering discrete time $ i\in\mathbf{Z}\cap[0,N^{2}T] $. The starting point of the proof is the microscopic, mild equation~\varepsilonqref{eq:Lang:int}. Recall that $ Z_\mathrm{ic}(x) $ is deterministic by assumption. In~\varepsilonqref{eq:Lang:int}, set $ t=i $, take $ \normO{\,\Cdot\,}{k} $ on both sides, and square the result. We have \begin{align} \label{eq:Lang:iter} \normO{ Z(i,x) }{k}^2 \leq 2 \Big( \sum_{\widetildex\in\mathbb{T}} \mathbb{Q} (i;x,\widetildex)Z_\mathrm{ic}(\widetildex) \Big)^2 + 2 \mathbb{N}ormO{ \int_0^i \sum_{\widetildex\in\mathbb{T}} \mathbb{Q} (i-s;x,\widetildex)dM(s,\widetildex) }{k}^2. \varepsilonnd{align} To bound the last term in~\varepsilonqref{eq:Lang:iter}, apply Lemma~\ref{lem:BDG} with $ (i,i')\mapsto (0,i) $ and $ f(s,\widetildex)= \mathbb{Q} (i-s;x,\widetildex) $ (recall that $ \mathbb{Q} $ is deterministic here), and then use Proposition~\ref{prop:sg}\ref{prop:sg:loc} to bound $ \sup_{s\in[j,j+1]} \mathbb{Q} (i-s;x,\widetildex)^2 $ by $ c\, \mathbb{Q} (i-j;x,\widetildex)^2 $. This gives \begin{align} \label{eq:Lang:iter:} \normO{ Z(i,x) }{k}^2 \leq 2 \Big( \sum_{\widetildex\in\mathbb{T}} \mathbb{Q} (i;x,\widetildex)Z_\mathrm{ic}(\widetildex) \Big)^2 + \frac{c}{N} \sum_{j=0}^{i-1} \sum_{\widetildex\in\mathbb{T}} \mathbb{Q} (i-j;x,\widetildex)^2 \normO{ Z(j,\widetildex) }{k}^2.
\varepsilonnd{align} Using the assumption $ Z_\mathrm{ic}(x) \leq c $ from~\varepsilonqref{eq:nearst} and the bound from Proposition \ref{prop:sg}\ref{prop:sg:sum}, we have $ \sum_{\widetildex\in\mathbb{T}} \mathbb{Q} (i;x,\widetildex)Z_\mathrm{ic}(\widetildex) \leq c $, and using the bound from Proposition \ref{prop:sg}\ref{prop:sg:sup}, we write $ \mathbb{Q} (i-j;x,\widetildex)^2 \leq \mathbb{Q} (i-j;x,\widetildex) c\,(i-j)^{-1/2} $. Inserting these bounds into~\varepsilonqref{eq:Lang:iter:}, we arrive at \begin{align} \label{eq:Lang:iter:1} \normO{ Z(i,x) }{k}^2 \leq c + c \sum_{j=0}^{i-1} \frac{N^{-2}}{\sqrt{N^{-2}(i-j)}} \sum_{\widetildex\in\mathbb{T}} \mathbb{Q} (i-j;x,\widetildex) \normO{ Z(j,\widetildex) }{k}^2. \varepsilonnd{align} Iterating~\varepsilonqref{eq:Lang:iter:1} gives \begin{align} \label{eq:Lang:iter:2} \normO{ Z(i,x) }{k}^2 \leq c + \sum_{n=1}^\infty c^n \sum_{\vec{\varepsilonll}\in \sigma_n(i)} \prod_{j=0}^n \frac{N^{-2}}{\sqrt{N^{-2}\varepsilonll_j}} \sum_{x_1,\ldots,x_n\in\mathbb{T}} \prod_{j=1}^n \mathbb{Q} (\varepsilonll_j;x_{j-1},x_{j}). \varepsilonnd{align} Here, we adopt the convention that $ x_0:=x $, and $ \sigma_n(i) :=\{ (\varepsilonll_0,\ldots,\varepsilonll_n) \in \mathbf{Z}_{>0}^{n+1} : \varepsilonll_0+\ldots+\varepsilonll_n=i \} $. In~\varepsilonqref{eq:Lang:iter:2}, we sum over $ x_{n},\ldots,x_1 $ in order, and use the bound from Proposition~\ref{prop:sg}\ref{prop:sg:sum} at each step to bound the result by $ c $. Then, approximating the sum over $ \vec{\varepsilonll}\in \sigma_n(i) $ by an integral, we have \begin{align} \label{eq:Lang:iter:3} \normO{ Z(i,x) }{k}^2 \leq c + \sum_{n=1}^\infty c^n \int_{\Sigma_n(iN^{-2})} \prod_{j=0}^n s_j^{-\frac12} \cdot d^n\vec{s}. \varepsilonnd{align} We now apply the Dirichlet integral formula~\varepsilonqref{eq:dirichlet} with $ v_0=\ldots=v_n=\frac12 $.
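For completeness, here is how the evaluation goes (a sketch, under the assumption that \varepsilonqref{eq:dirichlet} is the standard Dirichlet integral formula over the simplex $ \Sigma_n(t):=\{\vec{s}\in(0,\infty)^{n+1}: s_0+\ldots+s_n=t\} $):
\[
\int_{\Sigma_n(t)} \prod_{j=0}^n s_j^{-\frac12}\, d^n\vec{s} \,=\, t^{\frac{n+1}{2}-1}\, \frac{\Gamma(\tfrac12)^{n+1}}{\Gamma(\tfrac{n+1}{2})} \,=\, t^{\frac{n-1}{2}}\, \frac{\pi^{\frac{n+1}{2}}}{\Gamma(\tfrac{n+1}{2})}.
\]
Since $ \Gamma(\tfrac{n+1}{2})^{-1} $ decays faster than any exponential in $ n $, the resulting series in $ n $ converges for any fixed value of $ t $.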
Given that $ i \leq TN^{2} $, upon summing the result over $ n=1,2,\ldots $, we obtain \begin{align} \tag{\ref*{eq:Zmom}'} \label{eq:Zmom:} \normO{ Z(i,x) }{k} &\leq c. \varepsilonnd{align} This is exactly the first desired bound~\varepsilonqref{eq:Zmom} for $ t=i\in\mathbf{Z}_{>0} $, and hence the label~\varepsilonqref{eq:Zmom:}. We will have similar labels for \varepsilonqref{eq:gZmom}--\varepsilonqref{eq:tZmom}. We now turn to the gradient moment estimates~\varepsilonqref{eq:gZmom}. Set \begin{align} \label{eq:mom:I} I(x)&:=\sum_{\widetildex\in\mathbb{T}} \mathbb{Q} (i;x,\widetildex)Z_\mathrm{ic}(\widetildex), \\ \label{eq:mom:J} J(x,x')&:= \frac{1}{N} \sum_{j=0}^{i-1} \sum_{\widetildex\in\mathbb{T}} | \mathbb{Q} (i-j;x,\widetildex)- \mathbb{Q} (i-j;x',\widetildex)|^2 \normO{ Z(j,\widetildex) }{k}^2. \varepsilonnd{align} Following the same procedure leading to~\varepsilonqref{eq:Lang:iter:}, but starting with $ Z(i,x)-Z(i,x') $ instead of $ Z(i,x) $, here we have \begin{align} \label{eq:Lang:iter::} \normO{ Z(i,x) - Z(i,x') }{k}^2 \leq 2 \big( I(x)-I(x') \big)^2 + c \, J(x,x'). \varepsilonnd{align} To bound the term $ J(x,x') $, in~\varepsilonqref{eq:mom:J}, use \begin{align*} | \mathbb{Q} (i-j;x,\widetildex)- \mathbb{Q} (i-j;x',\widetildex)|^2 \leq \Big( \sup_{\widetildex}| \mathbb{Q} (i-j;x,\widetildex)- \mathbb{Q} (i-j;x',\widetildex)| \Big) \Big( \mathbb{Q} (i-j;x,\widetildex)+ \mathbb{Q} (i-j;x',\widetildex) \Big). \varepsilonnd{align*} Then, sum over $ \widetildex\in\mathbb{T} $, using the bound \varepsilonqref{eq:Zmom:} on $ \normO{ Z(j,\widetildex) }{k}^2 $ and the bounds from Proposition~\ref{prop:sg}\ref{prop:sg:sum} and \ref{prop:sg:gsup} on $ \mathbb{Q} $. With $ i\leq N^2T $, we have \begin{align} \label{eq:mom:J:} J(x,x') \leq \frac{c}{N} \sum_{j=0}^{i-1} \frac{ \mathrm{dist} _\mathbb{T}(x,x')^u}{(i-j+1)^{(u+1)/2}} \leq c\Big(\frac{ \mathrm{dist} _\mathbb{T}(x,x')}{N}\Big)^u. \varepsilonnd{align} We now proceed to bound $ I(x)-I(x') $.
Recall that $ \mathbb{Q} (t)= \mathbb{p} (t)+ \mathbb{Q} r(t) $. Decompose $ I(x)=I_ \mathbb{p} (x)+I_ \mathbb{Q} r(x) $ into the corresponding contributions of $ \mathbb{p} (t) $ and $ \mathbb{Q} r(t) $: $ I_ \mathbb{p} (x):=\sum_{\widetildex\in\mathbb{T}} \mathbb{p} (i;x,\widetildex)Z_\mathrm{ic}(\widetildex) $ and $ I_ \mathbb{Q} r(x):=\sum_{\widetildex\in\mathbb{T}} \mathbb{Q} r(i;x,\widetildex)Z_\mathrm{ic}(\widetildex) $. For $ I_ \mathbb{p} $, using translation invariance of $ \mathbb{p} $ (i.e., $ \mathbb{p} (t;x,\widetildex)= \mathbb{p} (t;x+y,\widetildex+y) $), we have $ I_ \mathbb{p} (x)-I_ \mathbb{p} (x')=\sum_{\widetildex\in\mathbb{T}} \mathbb{p} (i;x,\widetildex)(Z_\mathrm{ic}(\widetildex)-Z_\mathrm{ic}(\widetildex+(x'-x))) $. Given this expression, together with the H\"{o}lder continuity of $ Z_\mathrm{ic}(\Cdot) $ from our assumption \varepsilonqref{eq:nearst}, we have \begin{align} \label{eq:mom:Ihk} \big| I_ \mathbb{p} (x)-I_ \mathbb{p} (x') \big| \leq \big(\tfrac{ \mathrm{dist} _\mathbb{T}(x,x')}{N}\big)^{ u_\mathrm{ic} } c. \varepsilonnd{align} As for $ I_ \mathbb{Q} r $, using the bound from Proposition~\ref{prop:sg}\ref{prop:sgr:gsum} for $ u=v $ and the boundedness of $ Z_\mathrm{ic}(x) $ gives \begin{align} \label{eq:mom:Isgr} \big| I_ \mathbb{Q} r(x)-I_ \mathbb{Q} r(x') \big| \leq \big(\tfrac{ \mathrm{dist} _\mathbb{T}(x,x')}{N}\big)^v c. \varepsilonnd{align} Combining~\varepsilonqref{eq:mom:J:}--\varepsilonqref{eq:mom:Isgr} with~\varepsilonqref{eq:Lang:iter::} yields \begin{align} \tag{\ref*{eq:gZmom}'} \label{eq:gZmom:} \normO{Z(i,x)-Z(i,x') }{k} \leq \Big( \big(\tfrac{ \mathrm{dist} _\mathbb{T}(x,x')}{N}\big)^u + (\tfrac{ \mathrm{dist} _\mathbb{T}(x,x')}{N})^{2 u_\mathrm{ic} } + (\tfrac{ \mathrm{dist} _\mathbb{T}(x,x')}{N})^{2v} \Big)^{1/2}c \leq \big(\tfrac{ \mathrm{dist} _\mathbb{T}(x,x')}{N}\big)^{\frac{u}{2}\wedge u_\mathrm{ic} \wedge v} c. \varepsilonnd{align} Finally, we turn to the temporal moment estimate~\varepsilonqref{eq:tZmom}.
Fix $ i<i'\in\mathbf{Z}\cap[0,N^2T] $, $ x\in\mathbb{T} $, and set \begin{align*} \widetilde{I}(i,i',x):=\sum_{\widetildex\in\mathbb{T}} \mathbb{Q} (i'-i;x,\widetildex)Z(i,\widetildex) - Z(i,x), \qquad \widetilde{J}(i,i',x) := \frac1N\sum_{j=i}^{i'-1} \sum_{\widetildex\in\mathbb{T}} \mathbb{Q} (i'-j;x,\widetildex)^2 \normO{ Z(j,\widetildex) }{k}^2. \varepsilonnd{align*} To alleviate heavy notation, hereafter we omit dependence on $ (i,i',x) $ and write $ \widetilde{I} $ and $ \widetilde{J} $ in place of $ \widetilde{I}(i,i',x) $ and $ \widetilde{J}(i,i',x) $. Following the same procedure leading to~\varepsilonqref{eq:Lang:iter:}, starting from $ t=i $ instead of $ t=0 $, here we have \begin{align} \label{eq:Lang:iter:::} \normO{ Z(i',x)-Z(i,x) }{k}^2 \leq 2 \normO{ \widetilde{I} }{k}^2 + c \widetilde{J}. \varepsilonnd{align} Using the bound~\varepsilonqref{eq:Zmom:} on $ \normO{ Z(j,\widetildex) }{k} $ and the bounds from Proposition~\ref{prop:sg}\ref{prop:sg:sum} and \ref{prop:sg:sup} for $ u=1 $ on $ \mathbb{Q} $, we have \begin{align*} \widetilde{J} \leq \frac{c}{N} \sum_{j=i}^{i'-1} \frac{1}{\sqrt{i'-j+1}} \leq \Big(\frac{i'-i}{N^2}\Big)^{\frac12} c. \varepsilonnd{align*} As for $ \widetilde{I} $, decompose it into $ \widetilde{I}=\widetilde{I}_ \mathbb{p} +\widetilde{I}_ \mathbb{Q} r $, where \begin{align*} \widetilde{I}_ \mathbb{p} &:=\sum_{\widetildex\in\mathbb{T}} \mathbb{p} (i'-i;x,\widetildex)Z(i,\widetildex) - Z(i,x) = \sum_{\widetildex\in\mathbb{T}} \mathbb{p} (i'-i;x,\widetildex)\big(Z(i,\widetildex) - Z(i,x)\big), \\ \widetilde{I}_ \mathbb{Q} r &:=\sum_{\widetildex\in\mathbb{T}} \mathbb{Q} r(i'-i;x,\widetildex)Z(i,\widetildex). \varepsilonnd{align*} Taking $ \normO{\,\Cdot\,}{k} $ of $ \widetilde{I}_ \mathbb{p} $, with the aid of~\varepsilonqref{eq:gZmom:}, we have $ \normO{\widetilde{I}_ \mathbb{p} }{k} \leq c \sum_{\widetildex\in\mathbb{T}} \mathbb{p} (i'-i;x,\widetildex)( \mathrm{dist} _\mathbb{T}(x,\widetildex)/N)^{\frac{u}{2}\wedge u_\mathrm{ic} \wedge v} $.
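The kernel moment bound invoked next can be obtained from Jensen's inequality (a sketch, under the assumption that $ \mathbb{p} (t;x,\Cdot) $ is a probability kernel whose associated walk has diffusive second moment $ \sum_{\widetildex\in\mathbb{T}} \mathbb{p} (t;x,\widetildex) \mathrm{dist} _\mathbb{T}(x,\widetildex)^2 \leq ct $):
\[
\sum_{\widetildex\in\mathbb{T}} \mathbb{p} (t;x,\widetildex)\, \mathrm{dist} _\mathbb{T}(x,\widetildex)^u \,\leq\, \Big( \sum_{\widetildex\in\mathbb{T}} \mathbb{p} (t;x,\widetildex)\, \mathrm{dist} _\mathbb{T}(x,\widetildex)^2 \Big)^{\frac{u}{2}} \,\leq\, (ct)^{\frac{u}{2}}, \qquad u\in(0,2],
\]
where the first inequality holds because $ r\mapsto r^{u/2} $ is concave for $ u\leq 2 $.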
For $ \mathbb{p} $ it is straightforward to show that $ \sum_{\widetildex\in\mathbb{T}} \mathbb{p} (i'-i;x,\widetildex) \mathrm{dist} _\mathbb{T}(x,\widetildex)^u \leq c(u) (i'-i)^{u/2} $, so $ \normO{\widetilde{I}_ \mathbb{p} }{k} \leq (\tfrac{i'-i}{N^2})^{\frac{u}{4}\wedge\frac{ u_\mathrm{ic} }{2}\wedge\frac{v}{2}}c. $ As for $ \widetilde{I}_ \mathbb{Q} r $, taking $ \normO{\,\Cdot\,}{k} $ using~\varepsilonqref{eq:Zmom:} and the bound from Proposition~\ref{prop:sg}\ref{prop:sgr:sum} gives $ \normO{\widetilde{I}_ \mathbb{Q} r}{k} \leq (\tfrac{i'-i}{N^2})^vc. $ Inserting the preceding bounds on $ \widetilde{J} $, $ \widetilde{I}_ \mathbb{p} $, and $ \widetilde{I}_ \mathbb{Q} r $ into~\varepsilonqref{eq:Lang:iter:::}, we obtain \begin{align} \tag{\ref{eq:tZmom}'} \label{eq:tZmom:} \normO{ Z(i',x)-Z(i,x) }{k} \leq \Big( \big(\tfrac{i'-i}{N^2}\big)^{\frac12} + \big(\tfrac{i'-i}{N^2}\big)^{\frac{u}{2}\wedge u_\mathrm{ic} \wedge v}+\big(\tfrac{i'-i}{N^2}\big)^{2v} \Big)^{1/2}c \leq \big(\tfrac{i'-i}{N^2}\big)^{\frac{u}{4}\wedge\frac{ u_\mathrm{ic} }{2}\wedge\frac{v}{2}} c. \varepsilonnd{align} So far we have obtained the relevant bounds~\varepsilonqref{eq:Zmom:}--\varepsilonqref{eq:tZmom:} for integer time. To go from integer to continuum, we consider generic $ t\in[0,N^{2}T] $, and estimate $ \normO{ Z(t,x)-Z(\lfloor t\rfloor, x)}{k} $. To this end, recall we have the local (in time) bounds~\varepsilonqref{eq:locZbd>}--\varepsilonqref{eq:locZbd<} on the growth of $ Z(s,y) $, where $ X(j,x),\widetilde{X}(j,x) $ are stochastically dominated by Poisson($ c $) and are \varepsilonmph{independent} of $ \mathscr{F} (t) $ (defined in~\varepsilonqref{eq:filZ}).
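The $ N^{-1/2} $ rate appearing next rests on an elementary moment estimate for Poisson-dominated variables; a sketch (with $ X\geq 0 $ stochastically dominated by Poisson($ c $)): since $ e^a-1\leq a e^a $ for $ a\geq 0 $,
\[
\normrt{ e^{\frac{X}{\sqrt{N}}}-1 }{k} \,\leq\, \tfrac{1}{\sqrt{N}}\, \normrt{ X e^{X} }{k} \,\leq\, \tfrac{c(k)}{\sqrt{N}},
\]
where the last inequality uses that the Poisson distribution has finite exponential moments of all orders.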
In~\varepsilonqref{eq:locZbd>}--\varepsilonqref{eq:locZbd<}, subtract $ Z(j,x) $ from both sides and take $ \normO{ \,\Cdot\,}{k} $ to get \begin{align} \notag &\mathbb{N}ormO{ \sup_{t\in[j,j+1]}|Z(t,x)-Z(j, x)|}{k} \leq \mathbb{N}ormO{ (e^{\frac{X(j,x)}{\sqrt{N}}}-1)Z(j,x) }{k} + \mathbb{N}ormO{ (1-e^{-\frac{\widetilde{X}(j,x)}{\sqrt{N}}})Z(j, x) }{k} \\ \label{eq:locZbd} &\quad\quad\quad = \mathbb{N}ormO{ (e^{\frac{X(j,x)}{\sqrt{N}}}-1) }{k} \, \normO{ Z(j, x) }{k} + \mathbb{N}ormO{ (1-e^{-\frac{\widetilde{X}(j,x)}{\sqrt{N}}})}{k} \, \normO{ Z(j, x) }{k} \leq \tfrac{1}{\sqrt{N}}c. \varepsilonnd{align} Since $ (\frac{ \mathrm{dist} _\mathbb{T}(x,x')}{N})^{\frac{u}{2}\wedge u_\mathrm{ic} \wedge v}, (\frac{|t-t'|\vee 1}{N^2})^{\frac{u}{4}\wedge\frac{ u_\mathrm{ic} }{2}\wedge\frac{v}{2}} \geq \frac{1}{\sqrt{N}} $ for all $ x\neq x' $ and $ t,t'\geq 0 $, we may use~\varepsilonqref{eq:locZbd} to approximate $ Z(\lfloor t\rfloor,x) $ with $ Z(t,x) $, and hence infer \varepsilonqref{eq:Zmom}--\varepsilonqref{eq:tZmom} from \varepsilonqref{eq:Zmom:}--\varepsilonqref{eq:tZmom:}. \varepsilonnd{proof} Recall that $ D([0,T],C(\mathcal{T})) $ denotes the space of right-continuous-with-left-limits functions $ [0,T]\to C(\mathcal{T}) $, equipped with Skorohod's $ J_1 $-topology. \begin{corollary} \label{cor:tight} For any given $ T<\infty $, $ \{Z_N\}_{N} $ is tight in $ D([0,T],C(\mathcal{T})) $, and its limits concentrate in $ C([0,T],C(\mathcal{T})) $. \varepsilonnd{corollary} \begin{proof} First, to avoid the jumps (in $ t $) of $ Z_N(t,x) $, consider the process $ \widetilde{Z}_N(t,x) := Z_N(t,x) $, for $ t\in \frac{1}{N^2}\mathbf{Z}_{\geq 0} $, and linearly interpolate in $ t\in[0,\infty) $.
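Combining \varepsilonqref{eq:gZmom} and \varepsilonqref{eq:tZmom} via the triangle inequality gives, with $ \alpha:=\frac{u}{2}\wedge u_\mathrm{ic} \wedge v $,
\[
\normO{ Z(t,x)-Z(t',x') }{k} \,\leq\, c\Big( \big(\tfrac{ \mathrm{dist} _\mathbb{T}(x,x')}{N}\big)^{\alpha} + \big(\tfrac{|t-t'|\vee 1}{N^2}\big)^{\alpha/2} \Big),
\]
which, in macroscopic coordinates, is a H\"{o}lder-type moment bound with exponent $ \alpha $ in space and $ \alpha/2 $ in parabolic time. Choosing $ k $ large enough that $ k\alpha/2>2 $, the two-parameter Kolmogorov--Chentsov criterion applies; this is the mechanism behind the tightness claim that follows (a sketch only; the exponent bookkeeping is indicative).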
For fixed $ v,\Lambda,T $ as in Proposition~\ref{prop:mom}, the moment bounds obtained in Proposition~\ref{prop:mom}, together with the Kolmogorov continuity theorem, imply that $ \{\widetilde{Z}_N \mathbf{1} _{\Omega(1,v,\Lambda,T,N)}\}_{N} $ is tight in $ C([0,T]\times\mathcal{T})=C([0,T],C(\mathcal{T})) $. Further, Proposition~\ref{prop:sg} asserts that $ \mathbf{P} [\Omega(1,v,\Lambda,T,N)] \to 1 $ under the iterative limit $ (\lim_{\Lambda\to\infty} \lim_{N\to\infty}\Cdot) $, so $ \{\widetilde{Z}_N \}_{N} $ is tight in $ C([0,T],C(\mathcal{T})) $. To relate $ Z_N $ to $ \widetilde{Z}_N $, we proceed to bound the difference $ \widetilde{Z}_N - Z_N $. Fix $ u\in(0,1) $, $ v\in(0, u_{\mathbf{R}t} ) $ and set $ I_j:=[\frac{j}{N^2},\frac{j+1}{N^2}] $. From~\varepsilonqref{eq:locZbd}, we have that \begin{align} \notag \mathbf{E} O&\big[ \ \norm{ \widetilde{Z}_N - Z_N }_{L^\infty(I_j\times\mathcal{T})}^k \ \big] \\ \label{eq:markov:} &:= \mathbf{E} \Big[ \ \norm{ \widetilde{Z}_N - Z_N }_{L^\infty(I_j\times\mathcal{T})}^k \mathbf{1} _{\Omega(u,v,\Lambda,T,N)} \ \Big| \mathbb{a} (x),x\in\mathbb{T} \Big] \leq c(u,v,\Lambda,k,T) N^{-k/2}, \qquad j=0,1,\ldots, \lceil TN^2 \rceil. \varepsilonnd{align} The r.h.s.\ of~\varepsilonqref{eq:markov:} is deterministic (i.e., not depending on $ \mathbb{a} $). This being the case, take $ \mathbf{E} [\,\Cdot\,] $ in~\varepsilonqref{eq:markov:}, and apply the Markov inequality $ \mathbf{P} [|X|>\varepsilon] \leq \varepsilon^{-k} \mathbf{E} [|X|^k] $ with $ X=\norm{ \widetilde{Z}_N - Z_N }_{L^\infty(I_j\times\mathcal{T})} \mathbf{1} _{\Omega(u,v,\Lambda,T,N)} $. We obtain \begin{align*} \mathbf{P} \Big[ \ \norm{ \widetilde{Z}_N - Z_N }_{L^\infty(I_j\times\mathcal{T})} \mathbf{1} _{\Omega(u,v,\Lambda,T,N)} > \varepsilon \Big] \leq c(u,v,\Lambda,k,T) \varepsilon^{-k} N^{-k/2}, \qquad j=0,1,\ldots, \lceil TN^2 \rceil.
\varepsilonnd{align*} Setting $ k=5 $ and taking a union bound over $ j=0,1,\ldots, \lceil TN^2 \rceil $ yields \begin{align} \label{eqcor:tight} \mathbf{P} \big[ \norm{ \widetilde{Z}_N-Z_N }_{L^\infty([0,T]\times\mathcal{T})} \mathbf{1} _{\Omega(u,v,\Lambda,T,N)} > \varepsilon \big] \leq c(u,v,\Lambda,T,\varepsilon) N^{-1/2}. \varepsilonnd{align} Further, Proposition~\ref{prop:sg} asserts that $ \mathbf{P} [\Omega(u,v,\Lambda,T,N)] \to 1 $ under the iterative limit $ (\lim_{\Lambda\to\infty} \lim_{N\to\infty}\Cdot) $. Hence, passing~\varepsilonqref{eqcor:tight} to the limit $ N\to\infty $ along a suitable sequence $ \Lambda=\Lambda_n\to\infty $ gives \begin{align*} \lim_{N\to\infty} \mathbf{P} \big[ \norm{ \widetilde{Z}_N-Z_N }_{L^\infty([0,T]\times\mathcal{T})} > \varepsilon \big] = 0. \varepsilonnd{align*} From this, we conclude that $ Z_N $ and $ \widetilde{Z}_N $ must have the same limit points in $ D([0,T],C(\mathcal{T})) $. Knowing that $ \{\widetilde{Z}_N \}_{N} $ is tight in $ C([0,T],C(\mathcal{T})) $, we thus conclude the desired result. \varepsilonnd{proof} \section{Proof of Theorem~\ref{thm:main}} \label{sect:pfmain} Given Corollary~\ref{cor:tight}, to prove Theorem~\ref{thm:main}, it suffices to identify limit points of $ \{Z_N\}_N $. We achieve this via a martingale problem. \subsection{Martingale problem} \label{sect:mgpb} Recall that, even though $ \mathcal{H} $ and its semigroup $ \mathcal{Q} (t):= e^{t \mathcal{H} } $ are possibly random, they are independent of the driving noise $ \xi $. This being the case, conditioning on a generic realization of $ \mathbb{A} lim $, throughout this subsection, we assume $ \mathcal{Q} (t) $ and $ \mathcal{H} $ are \varepsilonmph{deterministic} (constructed from a deterministic $ \mathbb{A} lim\in C^{ u_{\mathbf{R}t} }[0,1] $). It is shown in~\cite[Section~2]{fukushima77} that, for bounded $ \mathbb{A} lim $, the self-adjoint operator $ \mathcal{H} = \frac12\partial_{xx}+ \mathbb{A} lim'(x) $ has discrete spectrum.
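Since $ \mathbb{A} lim'(x) $ exists only as a distribution, the operator $ \mathcal{H} $ is understood through its quadratic form; a sketch of the standard construction (see the cited reference for details): formally integrating $ \int \mathbb{A} lim'(x)f(x)g(x)dx $ by parts gives, for $ f,g\in H^1(\mathcal{T}) $,
\[
-\langle \mathcal{H} f, g\rangle \,=\, \frac12\int_{\mathcal{T}} f'(x)g'(x)\,dx \,+\, \int_{\mathcal{T}} \mathbb{A} lim(x)\,\big(f(x)g(x)\big)'\,dx,
\]
which, for bounded $ \mathbb{A} lim $, is a closed, semibounded symmetric form and hence defines a self-adjoint operator.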
More explicitly, $ \mathcal{H} \varphi _n = \lambda _n \varphi _n $, $ n=1,2,\ldots $, with $ \varphi _n \in D( \mathcal{H} ) \subset H^1(\mathcal{T}) $ and $ \lambda _1\geq \lambda _2 \geq \cdots \to-\infty $, and with $ \{ \varphi _n\}_{n=1}^\infty $ forming a Hilbert basis (i.e., a complete orthonormal set) of $ L^2(\mathcal{T}) $. Let $ \langle \{\varepsilonigf_n\} \rangle := \{ \sum_{i=1}^m \alpha_i \varphi _i: m\in\mathbf{Z}_{>0},\alpha_1,\ldots,\alpha_m\in\mathbf{R} \} $ denote the linear span of eigenfunctions. Recall that $ \langle f, g\rangle := \int_{\mathcal{T}} f(x) g(x) dx $ denotes the inner product on $ L^2(\mathcal{T}) $. We say that a $ C([0,\infty)\times\mathcal{T}) $-valued process $ \mathcal{Z} $ solves the \textbf{martingale problem} corresponding to~\varepsilonqref{eq:spde} if, for any $ f\in \langle \{\varepsilonigf_n\} \rangle $, \begin{align} \label{eq:linmg} \mathcal{M} (t;f) &:= \langle f, \mathcal{Z}(s) \rangle \Big|_{s=0}^{s=t} - \int_{0}^t \langle \mathcal{H} f, \mathcal{Z}(s) \rangle ds , \\ \label{eq:qdmg} \mathcal{M} g(t;f) &:= ( \mathcal{M} (t;f))^2 - \int_{0}^t \langle f^2, \mathcal{Z}^2(s) \rangle ds \varepsilonnd{align} are local martingales in $ t $. \begin{proposition} \label{prop:mgpb} A $ C([0,\infty)\times\mathcal{T}) $-valued process $ \mathcal{Z} $ that solves the aforementioned martingale problem is a mild solution (in the sense of~\varepsilonqref{eq:spde:mild}) of the \ac{SPDE}~\varepsilonqref{eq:spde}. \varepsilonnd{proposition} \begin{proof} Fix $ \mathcal{Z}\in C([0,\infty)\times\mathcal{T}) $ that solves the martingale problem. The first step is to show that $ \mathcal{Z} $ is a weak solution.
That is, extending the probability space if necessary, there exists a white noise measure $ \xi(t,x)dtdx $ such that, for any given $ f\in \langle \{\varepsilonigf_n\} \rangle $, \begin{align} \label{eq:weak} \mathcal{M} (t;f) = \langle f, \mathcal{Z}(s) \rangle \big|_{s=0}^{s=t} - \int_{0}^t \langle \mathcal{H} f, \mathcal{Z}(s) \rangle ds = \int_0^t\int_{\mathcal{T}} f(x) \mathcal{Z}(s,x) \xi(s,x)dxds. \varepsilonnd{align} With $ \langle \{\varepsilonigf_n\} \rangle $ being dense in $ L^2(\mathcal{T}) $, the statement is proven by the same argument as in \cite[Proposition~4.11]{bertini97}. We do not repeat it here. Next, for given $ n \geq 1 $, consider the process $ F(t):=e^{- \lambda _n t} \langle \varphi _n, \mathcal{Z}(t) \rangle $. Using It\^{o} calculus, with the aid of~\varepsilonqref{eq:weak} (for $ f= \varphi _n $), we have \begin{align*} F(t)-F(0) = \int_0^t \big( - \lambda _n F(s) + e^{- \lambda _n s} \langle \mathcal{H} \varphi _n, \mathcal{Z}(s) \rangle \big) ds + \int_0^t \int_{\mathcal{T}} e^{- \lambda _n s} \varphi _n(x) \mathcal{Z}(s,x) \xi(s,x)dxds. \varepsilonnd{align*} With $ \mathcal{H} \varphi _n= \lambda _n \varphi _n $, the first term on the r.h.s.\ is zero. This being the case, multiplying both sides by $ e^{ \lambda _n t} $ gives \begin{align*} \langle \varphi _n, \mathcal{Z}(t) \rangle - \langle e^{t \lambda _n} \varphi _n, \mathcal{Z}(0) \rangle = \int_0^t \int_{\mathcal{T}} e^{ \lambda _n (t-s)} \varphi _n(\widetildex) \mathcal{Z}(s,\widetildex) \xi(s,\widetildex)d\widetildex ds.
\varepsilonnd{align*} Further, writing $ e^{t \lambda _n} \varphi _n= \mathcal{Q} (t) \varphi _n $ and $ e^{ \lambda _n (t-s)} \varphi _n(\widetildex) = \int_{\mathcal{T}} \mathcal{Q} (t-s;x,\widetildex) \varphi _n(x) dx $, and using the fact that $ \mathcal{Q} (t-s;x,\widetildex)= \mathcal{Q} (t-s;\widetildex,x) $, we have \begin{align} \label{eq:mild1} \langle f, \mathcal{Z}(t) \rangle - \langle \mathcal{Q} (t)f, \mathcal{Z}(0) \rangle = \Big\langle f, \int_0^t \int_{\mathcal{T}} \mathcal{Q} (t-s;\Cdot,\widetildex) \mathcal{Z}(s,\widetildex) \xi(s,\widetildex) d\widetildex ds \Big\rangle, \qquad f= \varphi _1, \varphi _2,\ldots. \varepsilonnd{align} Equation~\varepsilonqref{eq:mild1} being linear in $ f $ readily generalizes to all $ f\in \langle \{\varepsilonigf_n\} \rangle $. With $ \langle \{\varepsilonigf_n\} \rangle $ being dense in $ L^2(\mathcal{T}) $ and hence in $ C(\mathcal{T}) $, we conclude that $ \mathcal{Z} $ satisfies~\varepsilonqref{eq:spde:mild}. \varepsilonnd{proof} For convenience of subsequent analysis, let us rewrite the martingale problem \varepsilonqref{eq:linmg}--\varepsilonqref{eq:qdmg} in a slightly different but equivalent form: for all $ n,n' \geq 1 $, \begin{align} \tag{\ref*{eq:linmg}'} \label{eq:linmg:} \mathcal{M} _n(t) &:= \mathcal{M} (t; \varphi _n) = \langle \varphi _n, \mathcal{Z}(s) \rangle \big|_{s=0}^{s=t} - \lambda _n\int_{0}^t \langle \varphi _n, \mathcal{Z}(s) \rangle ds, \\ \tag{\ref*{eq:qdmg}'} \label{eq:quadmg:} \mathcal{M} g_{n,n'}(t) &:= \mathcal{M} (t; \varphi _n) \mathcal{M} (t; \varphi _{n'}) - \int_{0}^t \langle \varphi _n \varphi _{n'}, \mathcal{Z}^2(s) \rangle ds \varepsilonnd{align} are local martingales in $ t $. As stated previously, to prove Theorem~\ref{thm:main}, it now suffices to identify limit points of $ \{Z_N\}_N $. This being the case, after passing to a subsequence, hereafter we assume $ Z_N \mathbf{R}ightarrow \mathcal{Z} $, for some $ C([0,\infty),C(\mathcal{T})) $-valued process $ \mathcal{Z} $.
By Skorokhod's representation theorem, extending the probability space if necessary, we further assume $ \mathcal{Z} $ and $ Z_N $ inhabit the same probability space, with \begin{align} \label{eq:Zcnvg} \norm{Z_N - \mathcal{Z} }_{L^\infty([0,T]\times\mathcal{T})} \longrightarrow_\text{P} 0, \varepsilonnd{align} for each given $ T<\infty $. Our goal is to show that $ \mathcal{Z} $ solves the martingale problem~\varepsilonqref{eq:linmg:}--\varepsilonqref{eq:quadmg:}. We further refer to \varepsilonqref{eq:linmg:} and \varepsilonqref{eq:quadmg:} as the linear and quadratic martingale problems, respectively. \subsection{Linear martingale problem} \label{sect:linmg} Here we show that $ \mathcal{Z} $ solves the linear martingale problem~\varepsilonqref{eq:linmg:}. Let \begin{align*} \big\langle f, g \big\rangle_N := \frac{1}{N} \sum_{x\in\mathbb{T}} f(\tfrac{x}{N}) g(\tfrac{x}{N}) \varepsilonnd{align*} denote the discrete analog of $ \langle f,g\rangle $, $ \Delta_N f (x) := N^2 (f(x+\frac{1}N)+f(x-\frac1N)-2f(x)) $ denote the scaled discrete Laplacian, and $ \mathbb{H} _N := \frac12 \Delta_N + N^2 \nu \mathbb{a} (Nx) $ denote the scaled operator. Multiply both sides of~\varepsilonqref{eq:Lang} by $ \varphi _n(Nx) $, integrate over $ s\in[0,N^2t] $ and sum over $ x\in\mathbb{T} $. We have that \begin{align} \label{eq:mgn} M_n(N^2t) := \int_0^{N^2t} \frac1N \sum_{x\in\mathbb{T}} \varphi _n(Nx) dM(s,x) = \langle \varphi _n, Z_N(s) \rangle_N \Big|_{s=0}^{s=t} - \int_0^t \langle \mathbb{H} _N \varphi _n, Z_N(s) \rangle_N ds \varepsilonnd{align} is a martingale. Indeed, the r.h.s.\ of~\varepsilonqref{eq:mgn} resembles the r.h.s.\ of~\varepsilonqref{eq:linmg:}, and one would hope to show convergence of the former to the latter in order to establish $ \mathcal{M} _n(t) $ being a local martingale. For the case of homogeneous \ac{ASEP}, we have $ \frac12\Delta_N $ in place of $ \mathbb{H} _N $, and the eigenfunctions $ \varphi _n $ are $ C^2 $.
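For $ f\in C^2 $, a second-order Taylor expansion makes the convergence of the discrete Laplacian explicit:
\[
\Delta_N f(x) \,=\, N^2\Big( f(x+\tfrac1N)+f(x-\tfrac1N)-2f(x) \Big) \,=\, f''(x) + o(1), \qquad N\to\infty,
\]
uniformly in $ x $, by the uniform continuity of $ f'' $ on the compact torus.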
In this case, using Taylor expansion it is straightforward to show that $ \int_0^t \langle \frac12\Delta_N \varphi _n, Z_N(s) \rangle_N ds $ converges to its continuum counterpart $ \int_0^t \langle \frac12 \varphi _n'', \mathcal{Z}(s) \rangle ds $. Here, on the other hand, we only have $ \varphi _n \in H^1(\mathcal{T}) $, and $ \mathbb{a} (x) $ and $ Z_N(t,x) $ lack differentiability in $ x $. Given the situation, a direct proof of $ \int_0^t \langle \mathbb{H} _N \varphi _n, Z_N(s) \rangle_N ds $ converging to its continuum counterpart seems challenging. To circumvent the aforementioned issue, we route through the integrated (i.e., mild) equation~\varepsilonqref{eq:Lang:int:}. For a given $ t \geq 0 $ and $ k\in\mathbf{Z}_{>0} $, put $ t_i:= \frac{i}{k}t $, set $ (t_*,t)=(N^2t_{i-1},N^2t_i) $ in~\varepsilonqref{eq:Lang:int:}, and subtract $ Z(N^2t_{i-1},x) $ from both sides. This gives \begin{align*} Z(s,x)\big|_{s=N^2t_{i-1}}^{s=N^2t_i} = \Big(\big( \mathbb{Q} (N^2\tfrac{t}{k}) -\text{Id}\big) Z(N^2t_{i-1}) \Big)(x) + \sum_{\widetildex\in\mathbb{T}}\int_{N^2t_{i-1}}^{N^2t_i} \mathbb{Q} (N^2t_i-s;x,\widetildex) dM(s,\widetildex), \varepsilonnd{align*} where `$ \text{Id} $' denotes the identity operator. Multiply both sides by $ \varphi _n(Nx) $, and sum over $ x\in\mathbb{T} $ and $ i=1,\ldots,k $.
After appropriate scaling, we obtain \begin{align} \label{eq:linmg:pf} \big\langle \varphi _n, Z_N(s)\big\rangle_N \big|_{s=0}^{s=t} - G _{k,N}(t) - \sum_{i=1}^k \int_{N^2t_{i-1}}^{N^2t_i} \frac1N \sum_{x,\widetildex\in\mathbb{T}} \varphi _n(Nx) \mathbb{Q} (N^2t_i-s;x,\widetildex) dM(s,\widetildex) = 0, \varepsilonnd{align} where, with $ ( \mathbb{Q} _N(t) f)(x) := \frac{1}{N}\sum_{\widetildex\in\frac1N\mathbb{T}} N \mathbb{Q} (N^2t;Nx,N\widetildex) f(\widetildex) $ denoting the scaled semigroup, \begin{align} \label{eq:linmgD} G _{k,N}(t) := \sum_{i=1}^k \Big\langle \varphi _n, ( \mathbb{Q} _N(\tfrac{t}{k})-\text{Id}) Z_N(t_{i-1}) \Big\rangle_N. \varepsilonnd{align} Further adding and subtracting $ M_n(N^2t) $ on both sides of \varepsilonqref{eq:linmg:pf} gives \begin{align} \label{eq:linmg:pf:} &\langle \varphi _n, Z_N(s)\rangle_N \big|_{s=0}^{s=t} - G _{k,N}(t) - H _{k,N}(t) = M_n(N^2t), \\ \label{eq:lingmgR} & H _{k,N}(t) := \sum_{i=1}^k \int_{N^2t_{i-1}}^{N^2t_i} \frac1N\sum_{\widetildex\in\mathbb{T}}\Big( \sum_{x\in\mathbb{T}} \varphi _n(Nx) \mathbb{Q} (N^2t_i-s;x,\widetildex) - \varphi _n(N\widetildex) \Big) dM(s,\widetildex). \varepsilonnd{align} In the following we will invoke convergence to zero in probability under \varepsilonmph{iterated} limits. For random variables $ X_{k,N} $ indexed by $ k,N $, we write \begin{align*} \lim_{k\to\infty} \lim_{N\to\infty} X_{k,N} \stackrel{\text{P}}{=} 0 \varepsilonnd{align*} if $ \limsup\limits_{k\to\infty} \limsup\limits_{N\to\infty} \mathbf{P} [ |X_{k,N}| > \varepsilon ] =0 $, for each $ \varepsilon >0 $.
Given~\varepsilonqref{eq:linmg:pf:}, we proceed to show \begin{lemma} \label{lem:linmg} For any given $ T<\infty $, \begin{enumerate}[label=(\alph*),leftmargin=7ex] \item \label{lem:linmg1} \ $ \displaystyle \lim_{N\to\infty} \Big( \sup_{t\in[0,T]} \big| \langle \varphi _n, Z_N(t)\rangle_N - \langle \varphi _n, \mathcal{Z}(t)\rangle \big| \Big) \stackrel{\text{P}}{=} 0, $ \item \label{lem:linmg2} \ $ \displaystyle \lim_{k\to\infty}\lim_{N\to\infty} \sup_{t\in[0,T]} \big| G _{k,N}(t) - \lambda _n \int_0^t \langle \varphi _n, \mathcal{Z}(s)\rangle ds \big| \stackrel{\text{P}}{=} 0, $ \item \label{lem:linmg3} \ $ \displaystyle \lim_{k\to\infty}\lim_{N\to\infty} \sup_{t\in[0,T]} \big| H _{k,N}(t) \big| \stackrel{\text{P}}{=} 0. $ \varepsilonnd{enumerate} \varepsilonnd{lemma} \begin{proof} \ref{lem:linmg1} Given~\varepsilonqref{eq:Zcnvg} and $ \varphi _n \in H^1(\mathcal{T}) \subset C(\mathcal{T}) $, this follows straightforwardly. \ref{lem:linmg2} Given~\varepsilonqref{eq:Zcnvg} and Proposition~\ref{prop:sgtoSg}, we have, for each $ s,\delta\in[0,\infty) $, \begin{align} \label{eqlem:lingmg2} \lim_{N\to\infty} \norm{ \big( \mathbb{Q} _N(\delta)-\text{Id}\big) Z_N(s)- \big( \mathcal{Q} (\delta)-\text{Id}\big) \mathcal{Z}(s) }_{L^\infty(\mathcal{T})} \stackrel{\text{P}}{=} 0. \varepsilonnd{align} Using~\varepsilonqref{eqlem:lingmg2} for $ s=t_{i-1} $ and $ \delta=\frac{t}{k} $, and plugging it into~\varepsilonqref{eq:linmgD}, together with $ \varphi _n \in H^1(\mathcal{T}) \subset L^1(\mathcal{T}) $, we have \begin{align*} &\lim_{N\to\infty} \sup_{t\in[0,T]} | G _{k,N}(t) - \mathcal{G} _k(t)| \stackrel{\text{P}}{=} 0, \\ & \mathcal{G} _k(t) := \sum_{i=1}^k \Big\langle \varphi _n, ( \mathcal{Q} (\tfrac{t}{k})-\text{Id}) \mathcal{Z}(t_{i-1}) \Big\rangle = \sum_{i=1}^k (e^{\frac{t}{k} \lambda _n}-1) \big\langle \varphi _n, \mathcal{Z}(t_{i-1}) \big\rangle.
\varepsilonnd{align*} Further taking the $ k\to\infty $ limit using the continuity of $ \mathcal{Z}(t) $ gives \begin{align*} \lim_{k\to\infty} \sup_{t\in[0,T]} \Big| \mathcal{G} _k(t) - \lambda _n \int_0^t \big\langle \varphi _n, \mathcal{Z}(s) \big\rangle ds\Big| \stackrel{\text{P}}{=} 0. \varepsilonnd{align*} This concludes the proof for~\ref{lem:linmg2}. \ref{lem:linmg3} Given the moment bounds from Proposition~\ref{prop:mom}, it is not hard to check that $ \{ H _{k,N}(\Cdot)\}_{k,N} $ is tight in $ D[0,T] $. This being the case, it suffices to establish one-point convergence: \begin{align} \label{eq:linmggoal} \lim_{k\to\infty}\limsup_{N\to\infty} \big| H _{k,N}(t) \big| \stackrel{\text{P}}{=} 0. \varepsilonnd{align} To this end, fix $ u\in(0,1) $, $ v\in(0, u_{\mathbf{R}t} ) $, $ \Lambda,T<\infty $, recall the definition of $ \Omega=\Omega(u,v,\Lambda,T,N) $ from~\varepsilonqref{eq:Omega}, and recall the notation $ \mathbf{E} rt[\,\Cdot\,]:= \mathbf{E} [\,\Cdot\,| \mathbb{a} (x),x\in\mathbb{T}] $. Multiply $ H _{k,N}(t) $ from~\varepsilonqref{eq:lingmgR} by $ \mathbf{1} _{\Omega} \mathbf{1} _\set{ \lambda _n<\Lambda} $, and calculate its second moment (with respect to $ \mathbf{E} rt[\,\Cdot\,] $).
With the aid of~\varepsilonqref{eq:qv:bd} and the moment bounds from Proposition~\ref{prop:mom}, we have \begin{align} \label{eq:linmg2} \mathbf{E} rt&\big[ H _{k,N}(t)^2\big] \mathbf{1} _{\Omega} \mathbf{1} _\set{\lambda_n<\Lambda} \\ \notag &\leq c(u,v,T,\Lambda) \sum_{i=1}^k \int_{N^2t_{i-1}}^{N^2t_i} \frac{1}{N^3}\sum_{\widetildex\in\mathbb{T}} \Bigg( \Big( \sum_{x\in\mathbb{T}} \varphi _n(Nx) \mathbb{Q} (N^2t_i-s;x,\widetildex) - \varphi _n(N\widetildex) \Big)^2 \mathbf{E} rt\big[Z(s,\widetildex)^2\big] \mathbf{1} _{\Omega} \mathbf{1} _\set{\lambda_n<\Lambda} \Bigg) ds \\ \label{eq:linmg3} &\leq c(u,v,T,\Lambda) \sum_{i=1}^k \frac{1}{N^2}\int_{N^2t_{i-1}}^{N^2t_i} \frac{1}{N}\sum_{\widetildex\in\mathbb{T}}\Big( \sum_{x\in\mathbb{T}} \varphi _n(Nx) \mathbb{Q} (N^2t_i-s;x,\widetildex) - \varphi _n(N\widetildex) \Big)^2 ds \mathbf{1} _\set{\lambda_n<\Lambda}. \varepsilonnd{align} Let $ N\to\infty $ in~\varepsilonqref{eq:linmg3}, after the change of variables $ s\mapsto N^2s $ in the time integral. Given that $ \varphi _n \in H^1(\mathcal{T}) \subset L^1(\mathcal{T}) $, with the aid of Proposition~\ref{prop:sgtoSg}, we have \begin{align*} \sum_{x\in\mathbb{T}} \varphi _n(Nx) \mathbb{Q} (N^2(t_i-s);x,N\widetildex) \to \int_{\mathcal{T}} \varphi _n(x) \mathcal{Q} (t_i-s;x,\widetildex) dx, \qquad \text{uniformly in } \widetildex\in\mathcal{T}. \varepsilonnd{align*} Hence \begin{align} \notag \limsup_{N\to\infty} \varepsilonqref{eq:linmg3} &\leq c(u,v,T,\Lambda) \sum_{i=1}^k \int_{t_{i-1}}^{t_i} \int_{\mathcal{T}}\Big( \int_{\mathcal{T}} \varphi _n(x) \mathcal{Q} (t_i-s;x,\widetildex) dx - \varphi _n(\widetildex) \Big)^2 d\widetildex \mathbf{1} _\set{\lambda_n<\Lambda} \\ \notag &= c(u,v,T,\Lambda) \sum_{i=1}^k \int_{t_{i-1}}^{t_i} \int_{\mathcal{T}}\Big( (e^{(t_i-s) \lambda _n}-1) \varphi _n(\widetildex) \Big)^2 d\widetildex \mathbf{1} _\set{\lambda_n<\Lambda} \\ \label{eq:linmg5} &= c(u,v,T,\Lambda) k \int_{0}^{\frac{t}{k}} (e^{s \lambda _n}-1)^2 ds \leq k^{-2} c(u,v,T,\Lambda).
\end{align} Now, combine~\eqref{eq:linmg2}--\eqref{eq:linmg5}, take $ \mathbf{E} [\,\Cdot\,] $ of the result, and let $ k\to\infty $. We arrive at \begin{align} \label{eq:linmg6} \lim_{k\to\infty} \limsup_{N\to\infty} \mathbf{E} \big[ H _{k,N}(t)^2 \mathbf{1} _{\Omega} \mathbf{1} _\set{ |\lambda _n|<\Lambda} \big] = 0. \end{align} Indeed, $ \mathbf{P} [\set{ |\lambda _n|<\Lambda}] \to 1 $ as $ \Lambda \to \infty $, and Proposition~\ref{prop:sg} asserts that $ \mathbf{P} [\Omega]= \mathbf{P} [\Omega(u,v,\Lambda,T,N)] \to 1 $ under the iterated limit $ (\lim_{\Lambda\to\infty} \lim_{N\to\infty}\Cdot) $. Combining these properties with \eqref{eq:linmg6} yields the desired result~\eqref{eq:linmggoal}. \end{proof} Lemma~\ref{lem:linmg} together with~\eqref{eq:linmg:pf:} gives \begin{align} \label{eq:mgcnvg} \sup_{t\in[0,T]} | \mathcal{M} _n(t)-M_n(t)| \longrightarrow_\text{P} 0. \end{align} Knowing that~$ M_n(t) $ is an $ \mathscr{F} $-martingale, we conclude that $ \mathcal{M} _n(t) $ is a local martingale. \subsection{Quadratic martingale problem} \label{sect:quadmg} Our goal here is to show that $ \mathcal{Z} $ solves the quadratic martingale problem~\eqref{eq:qdmg}. With $ M_n(t) $ given in~\eqref{eq:mgn}, the first step is to calculate the cross variation of $ M_n(t) $ and $ M_{n'}(t) $: \begin{align} \label{eq:qdmg:qv} \langle M_n, M_{n'} \rangle(N^2t) = \int_0^{N^2t} \frac{1}{N^2} \sum_{x,x'\in\mathbb{T}} \varphi _n(Nx) \varphi _{n'}(Nx') d\langle M(s,x), M(s,x') \rangle. \end{align} Given~\eqref{eq:qv}, the r.h.s.\ of~\eqref{eq:qdmg:qv} admits an explicit expression in terms of $ \eta(s,x) $ and $ Z(s,x) $. Relevant to our purpose here is an expansion of this expression that exposes the $ N\to\infty $ asymptotics.
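Before carrying out this expansion, it is worth recording the elementary asymptotics that drive it (a side calculation; the values $ r=\frac{1-1/\sqrt{N}}{2} $, $ \ell=\frac{1+1/\sqrt{N}}{2} $, $ \tau:=r/\ell $ are those recalled in the next paragraph): \begin{align*} (r-\ell)^2 = \frac1N, \qquad \tau = \frac{1-N^{-1/2}}{1+N^{-1/2}} = 1-2N^{-\frac12}+O(N^{-1}), \qquad \tau^{1/2}-\tau^{-1/2} = -2N^{-\frac12}+O(N^{-\frac32}). \end{align*} In particular the combination $ (\tau^{1/2}-\tau^{-1/2})^{-1} $ is of order $ \sqrt{N} $, which is what makes the gradient terms in the identities below contribute at the same order as $ Z $ itself, and ultimately produces the overall factor $ \frac1N $ in the expansion of the bracket.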
To this end, with $ Z(t,x) $ defined in~\eqref{eq:Z}, note that \begin{align} \label{eq:taylor} \eta(t,x)Z(t,x) &= \tfrac{\tau^{1/2}-1}{\tau^{1/2}-\tau^{-1/2}} Z(t,x) + \tfrac{1}{\tau^{1/2}-\tau^{-1/2}} \nabla Z(t,x-1), \\ \label{eq:taylor:} \eta(t,x+1)Z(t,x) &= \tfrac{1-\tau^{-1/2}}{\tau^{1/2}-\tau^{-1/2}} Z(t,x) + \tfrac{1}{\tau^{1/2}-\tau^{-1/2}} \nabla Z(t,x). \end{align} Recall the filtration~$ \mathscr{F} (t) $ from \eqref{eq:filZ}. In the following we use $ \mathcal{B}(t,x)=\mathcal{B}^{(N)}(t,x) $ to denote a \emph{generic} $ \mathscr{F} $-adapted process that may change from line to line (or even within a line), but is bounded uniformly in $ t,x,N $. Set \begin{align} \label{eq:W} W(t,x) := N(\nabla Z(t,x))(\nabla Z(t,x-1)). \end{align} Using the identities~\eqref{eq:taylor}--\eqref{eq:taylor:} in~\eqref{eq:qv}, together with $ r=\frac{1-1/\sqrt{N}}{2} $, $ \ell=\frac{1+1/\sqrt{N}}{2} $, $ \tau:=r/\ell $ and $ | \mathbb{a} t(x)| \leq c $ (from \eqref{eq:Z}, \eqref{eq:was}, and Assumption~\ref{assu:rt}\ref{assu:rt:bdd}), we have \begin{align} \notag \tfrac{d~}{ds}&\langle M(s,x), M(s,x') \rangle =(r-\ell)^2 \mathbb{a} t(x) \Big( \tfrac{1}{\ell}\eta(s,x) + \tfrac{1}{r}\eta(s,x+1) - \big(\tfrac1r+\tfrac1\ell\big)\eta(s,x)\eta(s,x+1) \Big)Z(s,x)^2 \\ \label{eq:qdmg:qv::} &= \tfrac{ \mathbb{a} t(x)}{N} \Big( \big( Z^2(s,x) + W(s,x) \big) + N^{-\frac12}\mathcal{B}(s,x) Z^2(s,x) \Big). \end{align} From~\eqref{eq:taylor}--\eqref{eq:taylor:}, it is readily checked that \begin{align} \label{eq:Wapbd} |W(t,x)| \leq c Z^2(t,x).
\end{align} In~\eqref{eq:qdmg:qv::}, write $ \mathbb{a} t(x)=1+ \mathbb{a} (x) $, and use~\eqref{eq:Wapbd} to get $ \frac{ \mathbb{a} (x)}{N}(Z^2(s,x)+W(s,x)) = \frac{ \mathbb{a} (x)}{N} \mathcal{B}(s,x)Z^2(s,x) $. Also, since $ \mathbb{a} t(x) $ is bounded (from Assumption~\ref{assu:rt}\ref{assu:rt:bdd}), we have $ \frac{ \mathbb{a} t(x)}{N}N^{-\frac12}\mathcal{B}(s,x) Z^2(s,x) = \frac{1}{N} N^{-\frac12} \mathcal{B}(s,x) Z^2(s,x) $. From this discussion we obtain \begin{align} \tfrac{d~}{ds}\langle M(s,x), M(s,x') \rangle \label{eq:qdmg:qv:} &= \tfrac{1}{N} \Big( \big( Z(s,x)^2 + W(s,x) \big) + ( \mathbb{a} (x) + N^{-\frac12}) \mathcal{B}(s,x) Z^2(s,x) \Big). \end{align} Inserting~\eqref{eq:qdmg:qv:} into~\eqref{eq:qdmg:qv} gives \begin{subequations} \label{eq:qvv} \begin{align} \label{eq:qdmg:qvv} \langle M_n, M_{n'} \rangle(N^2t) &= \frac{1}{N^2}\int_0^{N^2t} \frac{1}{N} \sum_{x\in\mathbb{T}} \varphi _n(Nx) \varphi _{n'}(Nx) Z^2(s,x) ds + L _1(t) + L _2(t), \\ \label{eq:qdmg:qvv1} L _1(t) &:= \frac{1}{N^2}\int_0^{N^2t} \frac{1}{N} \sum_{x\in\mathbb{T}} \varphi _n(Nx) \varphi _{n'}(Nx) ( \mathbb{a} (x) + N^{-\frac12}) \mathcal{B}(s,x) Z^2(s,x) ds, \\ \label{eq:qdmg:qvv2} L _2(t) &:= \frac{1}{N^2}\int_0^{N^2t} \frac{1}{N} \sum_{x\in\mathbb{T}} \varphi _n(Nx) \varphi _{n'}(Nx) W(s,x) ds. \end{align} \end{subequations} Indeed, the r.h.s.\ of~\eqref{eq:qdmg:qvv} is the discrete analog of $ \int_0^{t} \langle \varphi _n \varphi _{n'}, \mathcal{Z}^2(s) \rangle ds $ that appears in~\eqref{eq:quadmg:}. By~\eqref{eq:rtbd}, $ \norm{ \mathbb{a} }_{L^\infty(\mathbb{T})} \leq N^{- u_{\mathbf{R}t} } $ with probability $ \to_{\Lambda,N} 1 $. With the aid of the moment bounds from Proposition~\ref{prop:mom}, it is not hard to show that $ L _1(t) $ converges in $ C[0,T] $ to zero in probability. On the other hand, $ W(s,x) $ does \emph{not} converge to zero for fixed $ (s,x) $.
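To indicate why $ L _1(t) $ vanishes, here is a sketch of the estimate (suppressing the localizing indicators, and assuming, as the one-dimensional Sobolev embedding permits, that $ \varphi _n, \varphi _{n'} \in L^\infty(\mathcal{T}) $): on the event $ \norm{ \mathbb{a} }_{L^\infty(\mathbb{T})} \leq N^{- u_{\mathbf{R}t} } $, the bound $ \mathbf{E} rt[Z^2(s,x)]\leq c $ from Proposition~\ref{prop:mom} gives \begin{align*} \mathbf{E} rt\Big[ \sup_{t\in[0,T]} | L _1(t)| \Big] \leq \big( N^{- u_{\mathbf{R}t} } + N^{-\frac12} \big) \frac{c}{N^2} \int_0^{N^2T} \frac{1}{N}\sum_{x\in\mathbb{T}} \mathbf{E} rt\big[ Z^2(s,x) \big] ds \leq c\,T \big( N^{- u_{\mathbf{R}t} } + N^{-\frac12} \big), \end{align*} which tends to zero, so $ \sup_{t\in[0,T]}|L_1(t)| \to 0 $ in probability by Markov's inequality.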
In order to show that $ L _2(t) $ converges to zero, we capitalize on the spacetime averaging in~\eqref{eq:qdmg:qvv2}. The main step toward establishing such an averaging is a decorrelation estimate on $ W(s,x) $, stated in Proposition~\ref{prop:selfav}. To prove the decorrelation estimate, we follow the general strategy of \cite{bertini97}. The idea here is to develop an integral equation for $ \mathbf{E} O[ W(t+s,x) | \mathscr{F} (s) ] $ and try to `close' the equation. Closing the equation means bounding terms on the r.h.s.\ of the integral equation, so as to end up with an integral inequality for $ \mathbf{E} O[ W(t+s,x) | \mathscr{F} (s) ] $. Crucial to the success of this strategy are certain nontrivial inequalities involving the kernel $ \mathbb{Q} (t;x,\widetilde{x}) $, which we now establish. These are considerably more difficult to demonstrate in the inhomogeneous case (versus the homogeneous case). \begin{remark} \label{rmk:selfav} Self-averaging properties like Proposition~\ref{prop:selfav} are often encountered in the context of convergence of particle systems to \acp{SPDE}. In particular, in addition to the approach of \cite{bertini97} that we are following, alternative approaches have been developed in different contexts. These include hydrodynamic replacement \cite{quastel11} and the Markov duality method \cite{corwin18a}. The last two approaches do not seem to apply in the current context. For hydrodynamic replacement, one needs two-block estimates to relate the fluctuation of $ h(t,x) $ to the quantity $ W(t,x) $. Inhomogeneous \ac{ASEP} under Assumption~\ref{assu:rt} sits beyond the scope of existing proofs of two-block estimates. As for the duality method, it is known \cite{borodin14} that inhomogeneous \ac{ASEP} enjoys a duality via the function $ \widetilde{Q}(t,\vec{x}):=\prod_{i=1}^n \eta(t,x_i)\tau^{h(t,x_i)} $.
(Even though \cite{borodin14} treats \ac{ASEP} on the full line $ \mathbf{Z} $, duality, being a local property, readily generalizes to $ \mathbb{T} $.) For the method of~\cite{corwin18a} to apply, however, one also needs a duality via the function $ Q(t,\vec{x}):=\prod_{i=1}^n \tau^{h(t,x_i)} $, which is lacking for the inhomogeneous \ac{ASEP}. \end{remark} In what follows, for $ f,g: [0,\infty)\times\mathbb{T}^2\to\mathbf{R} $, we write \begin{align} \label{eq:sgmg:sgmgg} \mathbb{Q} mg_{f,g}(t;x,\widetilde{x}) := (\nabla_{x}f(t;x,\widetilde{x}))(\nabla_x g(t;x-1,\widetilde{x})), \qquad \mathbb{Q} mgg_{f,g}(t;x) := \sum_{\widetilde{x}\in\mathbb{T}}| \mathbb{Q} mg_{f,g}(t;x,\widetilde{x})|. \end{align} Recall also that $ \mathbb{Q} r(t):= \mathbb{Q} (t)- \mathbb{p} (t) $ denotes the difference between $ \mathbb{Q} (t)=e^{t \mathbb{H} } $ and $ \mathbb{p} (t) $, the kernel of the homogeneous walk. \begin{lemma} \label{lem:sgkey} Fix $ u\in(\frac12,1) $, $ v\in(0, u_{\mathbf{R}t} ) $, $ \Lambda,T<\infty $.
We have, for all $ t\in[0,N^2T] $ and $ x\in\mathbb{T} $, \begin{enumerate}[label=(\alph*),leftmargin=7ex] \item \label{lem:sgrkey=0} \ $ \displaystyle \int_0^{N^2T} \mathbb{Q} mgg_{f,g}(s;x) ds \, \mathbf{1} _{\Omega(u,v,\Lambda,T,N)} \leq c(u,v,T,\Lambda)N^{-(u\wedge v)}\log(N+1) \qquad \textrm{when }(f,g)=( \mathbb{p} , \mathbb{Q} r), ( \mathbb{Q} r, \mathbb{p} ), ( \mathbb{Q} r, \mathbb{Q} r), $ \item \label{lem:sgrkeytail} \ $ \displaystyle \mathbb{Q} mgg_{ \mathbb{Q} , \mathbb{Q} }(t;x) \mathbf{1} _{\Omega(u,v,\Lambda,T,N)} \leq c(u,v,T,\Lambda)(1+t)^{-(u+\frac12)}, $ \item \label{lem:sgkey=0} \ $ \displaystyle \Big| \int_0^{t} \sum_{\widetilde{x}\in\mathbb{T}} \mathbb{Q} mg_{ \mathbb{p} , \mathbb{p} }(s;x,\widetilde{x})ds \Big| \leq \frac{c}{\sqrt{t+1}}, $ \item \label{lem:sgkey<1} \ There exist a universal $ \beta<1 $ and $ N_0=N_0(u,v,T,\Lambda) $ such that\\ $ \displaystyle \int_0^{N^2T} \mathbb{Q} mgg_{ \mathbb{Q} , \mathbb{Q} }(s;x) ds \, \mathbf{1} _{\Omega(u,v,\Lambda,T,N)} \leq \beta $ for all $ N\geq N_0 $. \end{enumerate} \end{lemma} \begin{proof} Throughout this proof we assume that $ s,t\leq TN^{2} $, and, to simplify notation, write $ c=c(u,v,\Lambda,T) $ and $ \Omega=\Omega(u,v,\Lambda,T,N) $. At times below we will apply earlier lemmas or propositions wherein variables were labeled $x$ or $u$. We will not, however, always apply them with the values of $x$ and $u$ specified in our proof (for instance, we may want to apply a result with $u=1$). In order to avoid confusion, when we specify the value $\Cdot$ of $x$ or $u$ (or of another variable) used in such an application of an earlier result, we write $x\mapsto \Cdot$ or $u\mapsto \Cdot$. \ref{lem:sgrkey=0} Our first aim is to bound the expression $ \sum_{\widetilde{x}} |\nabla f(s;x,\widetilde{x})| |\nabla g(s;x',\widetilde{x})| $ when $ (f,g)=( \mathbb{p} , \mathbb{Q} r), ( \mathbb{Q} r, \mathbb{p} ), ( \mathbb{Q} r, \mathbb{Q} r) $.
To this end, bound $ |\nabla f(s;x,\widetilde{x})| $ by its supremum over $ \widetilde{x}\in\mathbb{T} $, using \eqref{eq:hkgd} or Proposition~\ref{prop:sg}\ref{prop:sgr:gsup} (with $ x'\mapsto x-1 $ and $u\mapsto u$), and for the remaining sum use \eqref{eq:hkgd:sum} or Proposition~\ref{prop:sg}\ref{prop:sgr:gsum}. This gives \begin{align} \label{eqlem:sgrkey=0:} \sum_{\widetilde{x}\in\mathbb{T}} |\nabla f(s;x,\widetilde{x})\,\nabla g(s;x',\widetilde{x})| \mathbf{1} _{\Omega} &\leq c\,\Bigg( \frac{1}{s+1} \bigg( \frac{N^{-v}}{(s+1)^{u/2}}+N^{-u} \bigg) + \bigg( \frac{N^{-v}}{(s+1)^{(u+1)/2}}+\frac{N^{-u}}{\sqrt{s+1}} \bigg) \frac{1}{\sqrt{s+1}} \\ \label{eqlem:sgrkey=0::} &\hphantom{\leq c\Big(\frac{1}{s+1} } + \bigg( \frac{N^{-v}}{(s+1)^{(u+1)/2}}+\frac{N^{-u}}{\sqrt{s+1}} \bigg) \bigg( \frac{N^{-v}}{(s+1)^{u/2}}+N^{-u} \bigg) \Bigg), \end{align} when $ (f,g)=( \mathbb{p} , \mathbb{Q} r), ( \mathbb{Q} r, \mathbb{p} ), ( \mathbb{Q} r, \mathbb{Q} r) $. Expand the terms on the r.h.s.\ of~\eqref{eqlem:sgrkey=0:}, and (using $ u<1 $) bound $ N^{-v}/(s+1)^{1+\frac{u}2} \leq N^{-v}/(s+1)^{u+\frac12} $. In~\eqref{eqlem:sgrkey=0::}, use $ u>\frac12 $ and $ s\leq TN^2 $ to bound $ N^{-u}/\sqrt{s+1} \leq c\,(s+1)^{-1} $. We then have, when $ (f,g) = ( \mathbb{p} , \mathbb{Q} r), ( \mathbb{Q} r, \mathbb{p} ), ( \mathbb{Q} r, \mathbb{Q} r) $, \begin{align} \label{eqlem:sgrkey=0} \sum_{\widetilde{x}\in\mathbb{T}} |\nabla f(s;x,\widetilde{x})\,\nabla g(s;x',\widetilde{x})| \mathbf{1} _{\Omega} \leq \frac{cN^{-v}}{(s+1)^{u+\frac12}} + \frac{cN^{-u}}{s+1}. \end{align} Integrate~\eqref{eqlem:sgrkey=0} over $ s\in[0,t] $. Given that $ u>\frac12 $, we have $ \int_0^t N^{-v}/(s+1)^{u+\frac12} ds \leq c N^{-v} $; given that $ t\leq N^2T $, we have $ \int_0^t N^{-u}/(s+1) ds \leq c N^{-u}\log(N+1) $. From these considerations we conclude the desired bound.
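For completeness, the two elementary integrals invoked here read, using $ u>\frac12 $ for the first and $ t\leq N^2T $ for the second, \begin{align*} \int_0^t \frac{N^{-v}\,ds}{(s+1)^{u+\frac12}} \leq N^{-v}\int_0^\infty \frac{ds}{(s+1)^{u+\frac12}} = \frac{N^{-v}}{u-\frac12}, \qquad \int_0^t \frac{N^{-u}\,ds}{s+1} \leq N^{-u}\log(N^2T+1) \leq c(T)\, N^{-u}\log(N+1). \end{align*}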
\ref{lem:sgrkeytail} Using~\eqref{eq:hkgd}--\eqref{eq:hkgd:sum} gives $ \sum_{\widetilde{x}\in\mathbb{T}} |\nabla \mathbb{p} (t;x,\widetilde{x})\,\nabla \mathbb{p} (t;x',\widetilde{x})| \leq c\,(t+1)^{-3/2} $. Combining this with~\eqref{eqlem:sgrkey=0}, and using $ N^{-u}/(t+1) \leq c\,(t+1)^{-(1+\frac{u}{2})} $, we conclude the desired result. \ref{lem:sgkey=0} Recall that $ \mathbb{p} $ solves the lattice heat equation~\eqref{eq:lhe}. Multiply both sides of~\eqref{eq:lhe} by $ \mathbb{p} (s;x',\widetilde{x}) $, sum over $ \widetilde{x}\in\mathbb{T} $, and integrate over $ s\in[0,\infty) $. We have \begin{align*} \sum_{\widetilde{x}\in\mathbb{T}} \int_0^\infty \tfrac12\partial_s\big( \mathbb{p} (s;x',\widetilde{x}) \mathbb{p} (s;x,\widetilde{x}) \big) ds &= \int_0^\infty \sum_{\widetilde{x}\in\mathbb{T}} \tfrac14 \big( \mathbb{p} (s;x',\widetilde{x})\Delta_x \mathbb{p} (s;x,\widetilde{x}) + (\Delta_{x'} \mathbb{p} (s;x',\widetilde{x}))\, \mathbb{p} (s;x,\widetilde{x}) \big) ds \\ &= -\int_0^\infty \sum_{\widetilde{x}\in\mathbb{T}} \tfrac12 \nabla_{x'} \mathbb{p} (s;x',\widetilde{x}) \nabla_x \mathbb{p} (s;x,\widetilde{x})ds. \end{align*} With $ \mathbb{p} (0;x,\widetilde{x})= \mathbf{1} _\set{x=\widetilde{x}} $ and $ \mathbb{p} (\infty;x,\widetilde{x})=\frac{1}{N} $, the l.h.s.\ is equal to $ \frac12(\frac{1}{N}- \mathbf{1} _\set{x=x'}) $. This gives \begin{align*} \int_0^\infty \sum_{\widetilde{x}\in\mathbb{T}} \nabla_{x'} \mathbb{p} (s;x',\widetilde{x}) \nabla_x \mathbb{p} (s;x,\widetilde{x})ds = \mathbf{1} _\set{x=x'} - \frac1N. \end{align*} Setting $ x'\mapsto x-1 $ gives $ \int_0^t \sum_{\widetilde{x}\in\mathbb{T}} \mathbb{Q} mg_{ \mathbb{p} , \mathbb{p} }(s;x,\widetilde{x})ds = -\frac1N-\int_{t}^\infty \sum_{\widetilde{x}\in\mathbb{T}} \mathbb{Q} mg_{ \mathbb{p} , \mathbb{p} }(s;x,\widetilde{x})ds $.
To bound the last term, use~\eqref{eq:hkgd}--\eqref{eq:hkgd:sum} (with $ u\mapsto 1 $) to get \begin{align*} \Big| \int_0^t \sum_{\widetilde{x}\in\mathbb{T}} \mathbb{Q} mg_{ \mathbb{p} , \mathbb{p} }(s;x,\widetilde{x})ds \Big| \leq \frac1N+ \Big| \int_{t}^\infty \sum_{\widetilde{x}\in\mathbb{T}} \mathbb{Q} mg_{ \mathbb{p} , \mathbb{p} }(s;x,\widetilde{x})ds \Big| \leq \frac1N+ \int_{t}^\infty \frac{c}{(s+1)^{3/2}} ds \leq \frac1N + \frac{c}{\sqrt{t+1}}. \end{align*} This together with $ \frac1N \leq \frac{c}{\sqrt{t+1}} $ completes the proof. \ref{lem:sgkey<1} Since $ \mathbb{Q} = \mathbb{p} + \mathbb{Q} r $, we have $ \mathbb{Q} mgg_{ \mathbb{Q} , \mathbb{Q} }(s,x)\leq ( \mathbb{Q} mgg_{ \mathbb{p} , \mathbb{p} }+ \mathbb{Q} mgg_{ \mathbb{Q} r, \mathbb{Q} }+ \mathbb{Q} mgg_{ \mathbb{Q} , \mathbb{Q} r}+ \mathbb{Q} mgg_{ \mathbb{Q} r, \mathbb{Q} r})(s,x) $. The bounds established in part \ref{lem:sgrkey=0} of this lemma give \begin{align*} \sup_{x\in\mathbb{T}} \int_0^{N^2T} \big( \mathbb{Q} mgg_{ \mathbb{Q} r, \mathbb{Q} }+ \mathbb{Q} mgg_{ \mathbb{Q} , \mathbb{Q} r}+ \mathbb{Q} mgg_{ \mathbb{Q} r, \mathbb{Q} r} \big)(s,x) ds \mathbf{1} _{\Omega} \longrightarrow_\text{P} 0. \end{align*} Granted this, it now suffices to show that there exist a universal constant $\beta'<1$ and $N_0=N_0(u,v,T,\Lambda)$ such that \begin{align} \label{eqlem:sgkey<1} \int_0^{N^2T} \mathbb{Q} mgg_{ \mathbb{p} , \mathbb{p} }(s;x) ds := \int_0^{N^2T} \sum_{\widetilde{x}\in\mathbb{T}} |\nabla \mathbb{p} (s;x,\widetilde{x}) \, \nabla \mathbb{p} (s;x-1,\widetilde{x})| ds \leq \beta', \qquad \textrm{for }N\geq N_0. \end{align} Recall that $ \mathbb{p} ^\mathbf{Z}(t;x) $ denotes the kernel $ \mathbb{p} $ of the homogeneous walk on $ \mathbf{Z} $, and that $ \mathbb{p} $ is expressed in terms of $ \mathbb{p} ^\mathbf{Z} $ by~\eqref{eq:hk:hkZ}.
Let $ I := (-\frac{N}{2},\frac{N}{2}]\cap\mathbf{Z} \subset\mathbf{Z} $ denote an interval in $ \mathbf{Z} $ of length $ N $ centered at $ 0 $. Under this setup we have \begin{align} \notag \sum_{\widetilde{x}\in\mathbb{T}} |\nabla \mathbb{p} (s;x,\widetilde{x}) \, \nabla \mathbb{p} (s;x-1,\widetilde{x})| &= \sum_{y\in I} \Big|\sum_{j\in\mathbf{Z}}\nabla \mathbb{p} ^\mathbf{Z}(s;y+jN) \, \sum_{j'\in\mathbf{Z}}\nabla \mathbb{p} ^\mathbf{Z}(s;y-1+j'N)\Big| \\ \label{eq:nabnab} &\leq \sum_{y\in I}\sum_{j,j'\in\mathbf{Z}} \Big|\nabla \mathbb{p} ^\mathbf{Z}(s;y+jN) \, \nabla \mathbb{p} ^\mathbf{Z}(s;y-1+j'N)\Big|. \end{align} Within \eqref{eq:nabnab}, the diagonal terms $ j=j'$, after being summed over $ y\in I $, jointly contribute \begin{align*} V(s) := \sum_{y\in \mathbf{Z}}\big|\nabla \mathbb{p} ^\mathbf{Z}(s;y) \, \nabla \mathbb{p} ^\mathbf{Z}(s;y-1)\big|. \end{align*} Denote the contribution of the off-diagonal terms by \begin{align} \label{eq:nabnabS} S(s) := \sum_{y\in I}\sum_{j\neq j'\in\mathbf{Z}} \big|\nabla \mathbb{p} ^\mathbf{Z}(s;y+jN) \, \nabla \mathbb{p} ^\mathbf{Z}(s;y-1+j'N)\big|. \end{align} Integrating \eqref{eq:nabnab} over $ s\in[0,N^2T] $ then gives \begin{align} \label{eq:nabnab:} \int_0^{N^2T}\sum_{\widetilde{x}\in\mathbb{T}} |\nabla \mathbb{p} (s;x,\widetilde{x}) \, \nabla \mathbb{p} (s;x-1,\widetilde{x})| ds \leq \int_0^{N^2T} V(s) ds + \int_0^{N^2T} S(s) ds. \end{align} For the first term on the r.h.s.\ of~\eqref{eq:nabnab:}, it is known~\cite[Lemma~A.3]{bertini97} that \begin{align} \label{eq:bgkey} \int_0^{N^2T} V(s) ds \leq \int_0^\infty \sum_{y\in\mathbf{Z}} |\nabla \mathbb{p} ^\mathbf{Z}(s;y) \, \nabla \mathbb{p} ^\mathbf{Z}(s;y-1)| ds =: \beta'' <1.
\end{align} To bound the last term in~\eqref{eq:nabnab:}, we use the bound from \cite[Eq~(A.13)]{dembo16}, which in our notation reads $ |\nabla \mathbb{p} ^\mathbf{Z}(s;y+iN)| \leq \frac{1}{s+1} e^{-\frac{|y+iN|}{\sqrt{s+1}}} $. Further, for all $ y\in I $ we have $ |y|\leq \frac{N}{2} $, which gives $ |y+iN| \geq \frac{1}{c} (|y|+ |i|N) $, and hence $ |\nabla \mathbb{p} ^\mathbf{Z}(s;y+iN)| \leq \tfrac{1}{s+1} e^{-\frac{|y|+|i|N}{c\sqrt{s+1}}}, $ for all $ y\in I $. Using this bound on the r.h.s.\ of~\eqref{eq:nabnabS} gives \begin{align*} S(s) \leq c\,\Big( \sum_{j\neq j'\in\mathbf{Z}} e^{-\frac{(|j|+|j'|)N}{c\sqrt{s+1}}} \Big) \Big( \sum_{y\in \mathbf{Z}} \frac{ e^{-\frac{|y|}{c\sqrt{s+1}}} }{(s+1)^2} \Big) \leq c \, e^{-\frac{N}{c\sqrt{s+1}}} \, (s+1)^{-3/2}. \end{align*} Integrating this inequality over $ s\in[0,N^2T] $, and combining the result with~\eqref{eq:nabnab:}--\eqref{eq:bgkey}, yields \begin{align*} \int_0^{N^2T}\sum_{\widetilde{x}\in\mathbb{T}} |\nabla \mathbb{p} (s;x,\widetilde{x}) \, \nabla \mathbb{p} (s;x-1,\widetilde{x})| ds \leq \beta'' + c \int_0^{N^2 T} e^{-\frac{N}{c\sqrt{s+1}}}(s+1)^{-3/2} ds. \end{align*} Fix $ \alpha \in (0,1) $ and divide the last integral into integrals over $ s\in[0,N^{2-\alpha}] $ and $ s\in[N^{2-\alpha},N^2T] $. On the first interval $ e^{-\frac{N}{c\sqrt{s+1}}} \leq \exp(-\frac{1}{c} N^{\alpha/2}) $, while on the second $ \int (s+1)^{-3/2} ds \leq c N^{-1+\frac{\alpha}{2}} $, whence $ \int_0^{N^2 T} \exp(-\frac{N}{c\sqrt{s+1}})(s+1)^{-3/2} ds \leq c(\exp(-\tfrac{1}c N^{\alpha/2})+ N^{-1+\frac{\alpha}{2}}) \to 0 $. Hence we conclude~\eqref{eqlem:sgkey<1} for $ \beta'=\frac{\beta''+1}{2} <1 $. \end{proof} Given Lemma~\ref{lem:sgkey}, we now proceed to establish an integral inequality for the conditional expectation of $ W(t+s,x) $. \begin{lemma} \label{lem:selfav:} Fix $ u\in(\frac12,1) $, $ v\in(0, u_{\mathbf{R}t} ) $, $ \Lambda,T<\infty $.
Let $ \Omega':=\Omega(u,v,T,\Lambda,N)\cap \Omega(1,v,T,\Lambda,N) $ and $ \mathbf{E} O[\,\Cdot\,] := \mathbf{E} [\,\Cdot\,| \mathbb{a} (x),x\in\mathbb{T}] \mathbf{1} _{\Omega'} $. We have, for all $ s,t\in[0,N^2T] $ and $ x\in\mathbb{T} $, \begin{align} \label{eq:decorIter} \begin{split} \mathbf{E} O&\Big[\,\big| \mathbf{E} O\big[ W(t+s,x) \big| \mathscr{F} (s) \big] \big| \, \Big] \leq c(u,v,\Lambda,T) \big( N^{-(\frac{u}2\wedge u_\mathrm{ic} \wedge v)}\log(N+1) + \tfrac{1}{\sqrt{t+1}} + \tfrac{N}{t+1} \big) \\ &+ \int_0^t \sum_{\widetilde{x}\in\mathbb{T}} \mathbb{Q} mg_{ \mathbb{Q} , \mathbb{Q} }(t-t';x,\widetilde{x}) \mathbf{E} O\Big[\,\big| \mathbf{E} O\big[ W(t'+s,\widetilde{x}) \big| \mathscr{F} (s) \big] \big| \,\Big] dt'. \end{split} \end{align} \end{lemma} \begin{proof} Throughout this proof we assume $ s,t\leq TN^{2} $, and, to simplify notation, we write $ c=c(u,v,\Lambda,T) $. Recall from~\eqref{eq:W} that $ W(t+s,x) $ involves $ x $-gradients of $ Z $. The idea is to derive equations for $ \nabla_x Z(t,x) $. To this end, set $ (t_*,t)\mapsto(s,s+t) $ in~\eqref{eq:Lang:int:} and take $ \nabla_x $ on both sides to get \begin{align} \label{eq:gZeq} &\nabla_x Z(t+s,x) = D(x) + F(x), \\ \label{eq:DF} & D(x) := \sum_{\widetilde{x}\in\mathbb{T}}\nabla_x \mathbb{Q} (t;x,\widetilde{x})Z(s,\widetilde{x}), \qquad F(x) := \int_{s}^{t+s} \sum_{\widetilde{x}\in\mathbb{T}}\nabla_x \mathbb{Q} (t+s-\tau;x,\widetilde{x})dM(\tau,\widetilde{x}). \end{align} Note that we have omitted the dependence on $ s,t $ in the notation $ D(x), F(x) $; the same convention is used in the sequel. Use~\eqref{eq:gZeq} twice, with $ x\mapsto x $ and $ x\mapsto x-1 $, to express $ W $ in terms of $ D $ and $ F $.
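Spelling this out, \eqref{eq:W} and \eqref{eq:gZeq} give \begin{align*} W(t+s,x) = N\big(D(x)+F(x)\big)\big(D(x-1)+F(x-1)\big), \end{align*} where $ D(x-1), F(x-1) $ denote the corresponding quantities with $ x\mapsto x-1 $. Since $ D(x), D(x-1) $ are $ \mathscr{F} (s) $-measurable while $ F(x), F(x-1) $ are stochastic integrals against martingale increments after time $ s $ (so that $ \mathbf{E} O[F(x)| \mathscr{F} (s)]=0 $), the two cross terms vanish upon conditioning on $ \mathscr{F} (s) $.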
Since $ F(x) $ is a martingale integral and since $ D(x) $ is $ \mathscr{F} (s) $-measurable, upon taking $ \mathbf{E} O[\,\Cdot\,| \mathscr{F} (s)] $, we have \begin{align} \label{eq:quadmg:ExW} & \mathbf{E} O[W(t+s,x)| \mathscr{F} (s)] = N D(x)D(x-1) + N \mathbf{E} O[F(x)F(x-1)| \mathscr{F} (s)]. \end{align} To evaluate the last term in~\eqref{eq:quadmg:ExW}, recall that $ \mathcal{B}(t,x) $ denotes a generic $ \mathscr{F} $-adapted uniformly bounded process, and note that, from~\eqref{eq:rtbd}, we have $ | \mathbb{a} (x)| \leq \Lambda N^{- u_{\mathbf{R}t} } $ under $ \Omega $. Recall the notation $ \mathbb{Q} mg_{f,g}, \mathbb{Q} mgg_{f,g} $ from~\eqref{eq:sgmg:sgmgg}. Using~\eqref{eq:qdmg:qv:} we write \begin{align*} N \mathbf{E} O[F(x)F(x-1)| \mathscr{F} (s)] = F_1(x)+F_2(x)+F_3(x), \end{align*} where \begin{align} \label{eq:quadmg:F1} F_1(x) &:= \int_{0}^{t} \sum_{\widetilde{x}\in\mathbb{T}} \mathbb{Q} mg_{ \mathbb{Q} , \mathbb{Q} }(t-t';x,\widetilde{x}) \mathbf{E} O[ Z^2(s+t',\widetilde{x})| \mathscr{F} (s)] dt', \\ \notag F_2(x) &:= \int_{0}^{t} \sum_{\widetilde{x}\in\mathbb{T}} \mathbb{Q} mg_{ \mathbb{Q} , \mathbb{Q} }(t-t';x,\widetilde{x}) \mathbf{E} O[ W(s+t',\widetilde{x})| \mathscr{F} (s)] dt', \\ \label{eq:quadmg:F3} F_3(x) &:= N^{-(\frac12\wedge u_{\mathbf{R}t} )} \int_{0}^{t} \sum_{\widetilde{x}\in\mathbb{T}} \mathbb{Q} mg_{ \mathbb{Q} , \mathbb{Q} }(t-t';x,\widetilde{x}) \mathbf{E} O[ \mathcal{B}(s+t',\widetilde{x})Z^2(s+t',\widetilde{x})| \mathscr{F} (s)] dt'. \end{align} Note that $ F_2(x) $ is expressed in terms of $ \mathbf{E} O[W(t'+s,\widetilde{x})| \mathscr{F} (s)] $, $ t'\geq 0 $. Insert \eqref{eq:quadmg:F1}--\eqref{eq:quadmg:F3} into the last term in~\eqref{eq:quadmg:ExW}, and take $ \mathbf{E} O[\,|\Cdot|\,] $ on both sides.
We now obtain \begin{align} \label{eq:quadmg:iter} \begin{split} \mathbf{E} O&\big[\, \big| \mathbf{E} O[W(t+s,x)| \mathscr{F} (s)] \big| \, \big] \leq N \mathbf{E} O\big[ \, |D(x)D(x-1)| \, \big] + \mathbf{E} O\big[\,|F_1(x)| \, \big] \\ &\quad + \int_{0}^{t} \sum_{\widetilde{x}\in\mathbb{T}} \mathbb{Q} mg_{ \mathbb{Q} , \mathbb{Q} }(t-t';x,\widetilde{x}) \mathbf{E} O\big[\, | \mathbf{E} O[ W(s+t',\widetilde{x})| \mathscr{F} (s)]| \, \big] dt' + \mathbf{E} O\big[\,|F_3(x)|\,\big]. \end{split} \end{align} To proceed, we bound the residual terms in~\eqref{eq:quadmg:iter} that involve $ D $, $ F_1 $, and $ F_3 $. We begin with the term $ N \mathbf{E} O[\,|D(x)D(x-1)|\,] $. Using the expression~\eqref{eq:DF} for $ D(x) $, we take $ \normO{\,\Cdot\,}{2} $ on both sides, and write $ \mathbb{Q} (t)= \mathbb{p} (t)+ \mathbb{Q} r(t) $. With the aid of the moment bound on $ \normO{Z(s,\widetilde{x})}{2} $ from Proposition~\ref{prop:mom}, we obtain \begin{align} \normO{D(x)}{2} \leq \sum_{\widetilde{x}\in\mathbb{T}} |\nabla_x \mathbb{Q} (t;x,\widetilde{x})| \, \normO{Z(s,\widetilde{x})}{2} \leq c\sum_{\widetilde{x}\in\mathbb{T}} |\nabla_x \mathbb{p} (t;x,\widetilde{x})| + c\sum_{\widetilde{x}\in\mathbb{T}} |\nabla_x \mathbb{Q} r(t;x,\widetilde{x})|. \end{align} Further using~\eqref{eq:hkgd:sum} (with $ x'\mapsto x-1 $ and $ u\mapsto 1 $) and using the bound from Proposition~\ref{prop:sg}\ref{prop:sgr:gsum} (with $ u\mapsto 1 $ and $v\mapsto v$) gives $ \normO{D(x)}{2} \leq ( \tfrac{1}{\sqrt{t+1}} + N^{-1} )c \leq \tfrac{c}{\sqrt{t+1}}, $ where, in the last inequality, we used $ t \leq TN^{2} $. We were able to take $u\mapsto 1$ in our application of \eqref{eq:hkgd:sum} because we are on the event $\Omega':=\Omega(u,v,T,\Lambda,N)\cap\Omega(1,v,T,\Lambda,N)$.
Given this bound, applying the Cauchy--Schwarz inequality we have \begin{align} \label{eq:quadmg:Dbd:} N \mathbf{E} O\big[\,|D(x)D(x-1)|\,\big] \leq N\normO{D(x)}{2} \normO{D(x-1)}{2} \leq \tfrac{cN}{t+1}. \end{align} Next we turn to bounding $ \mathbf{E} O[\,|F_1(x)|\,] $. First, given the decomposition $ \mathbb{Q} mg_{ \mathbb{Q} , \mathbb{Q} } = \mathbb{Q} mg_{ \mathbb{p} , \mathbb{p} }+ \mathbb{Q} mg_{ \mathbb{p} , \mathbb{Q} r}+ \mathbb{Q} mg_{ \mathbb{Q} r, \mathbb{p} }+ \mathbb{Q} mg_{ \mathbb{Q} r, \mathbb{Q} r} $, we write $ F_1(x) = F_{11}(x)+F_{12}(x) $, where \begin{align*} F_{11}(x) &= \int_{0}^{t} \sum_{\widetilde{x}\in\mathbb{T}}\big( \mathbb{Q} mg_{ \mathbb{p} , \mathbb{Q} r}+ \mathbb{Q} mg_{ \mathbb{Q} r, \mathbb{p} }+ \mathbb{Q} mg_{ \mathbb{Q} r, \mathbb{Q} r}\big)(t-t';x,\widetilde{x}) \mathbf{E} O[ Z^2(s+t',\widetilde{x})| \mathscr{F} (s)] dt', \\ F_{12}(x) &= \int_{0}^{t} \sum_{\widetilde{x}\in\mathbb{T}} \mathbb{Q} mg_{ \mathbb{p} , \mathbb{p} }(t-t';x,\widetilde{x}) \mathbf{E} O[ Z^2(s+t',\widetilde{x})| \mathscr{F} (s)] dt'. \end{align*} For $ F_{11}(x) $, we use the bounds from Lemma~\ref{lem:sgkey}\ref{lem:sgrkey=0} and the moment bounds from Proposition~\ref{prop:mom} to get $ \mathbf{E} O[\,|F_{11}(x)|\,] \leq cN^{-(u\wedge v)}\log(N+1) $. As for $ F_{12}(x) $, we further decompose $ F_{12}(x)=F_{121}(x)+F_{122}(x) $, where \begin{align*} F_{121}(x) &= \mathbf{E} O[ Z^2(s+t,x)| \mathscr{F} (s)] \int_{0}^{t} \sum_{\widetilde{x}\in\mathbb{T}} \mathbb{Q} mg_{ \mathbb{p} , \mathbb{p} }(t-t';x,\widetilde{x}) dt', \\ F_{122}(x) &= \int_{0}^{t} \sum_{\widetilde{x}\in\mathbb{T}} \mathbb{Q} mg_{ \mathbb{p} , \mathbb{p} }(t-t';x,\widetilde{x}) \mathbf{E} O[ Z^2(s+t',\widetilde{x})-Z^2(s+t,x)| \mathscr{F} (s)] dt'.
\end{align*} For $ F_{121}(x) $, taking $ \mathbf{E} O[\,|\,\Cdot\,|\,] $ and using the moment bound on $ \normO{Z(s+t,\widetilde{x})}{2} $ from Proposition~\ref{prop:mom}, followed by Lemma~\ref{lem:sgkey}\ref{lem:sgkey=0}, we have $ \mathbf{E} O[\,|F_{121}(x)|\,] \leq \frac{c}{\sqrt{t+1}} $. As for $ F_{122}(x) $, write \begin{align*} |Z^2(s+t',\widetilde{x})-Z^2(s+t,x)| \leq \big(Z(s+t',\widetilde{x})+Z(s+t,x)\big) \big( |Z(s+t',\widetilde{x})-Z(s+t,\widetilde{x})| + |Z(s+t,\widetilde{x})-Z(s+t,x)| \big). \end{align*} Set $ \alpha=\frac{u}{2}\wedge u_\mathrm{ic} \wedge v $ to simplify notation. Using the moment bounds from Proposition~\ref{prop:mom}, we have \begin{align*} \mathbf{E} O\big[\,|F_{122}(x)|\,\big] \leq c \int_{0}^{t} \sum_{\widetilde{x}\in\mathbb{T}}| \mathbb{Q} mg_{ \mathbb{p} , \mathbb{p} }(t-t';x,\widetilde{x})| \Big( \Big(\frac{ \mathrm{dist} _\mathbb{T}(x,\widetilde{x})}{N}\Big)^{\alpha} + \Big(\frac{|t-t'|\vee 1}{N^2}\Big)^{\frac{\alpha}{2}} \Big) dt'. \end{align*} Further using the bounds~\eqref{eq:hkgd}--\eqref{eq:hkgd:sum} (with $ x'\mapsto x-1 $ and $ u\mapsto 1 $) and \eqref{eq:hk:hold:} (with $ u\mapsto \alpha $) gives \begin{align*} \mathbf{E} O\big[\,|F_{122}(x)|\,\big] \leq c \int_{0}^{t} \frac{1}{(t-t'+1)} \frac{N^{-\alpha}}{(t-t'+1)^{(1-\alpha)/2}} dt' \leq c N^{-\alpha} =c N^{-(\frac{u}2\wedge u_\mathrm{ic} \wedge v)}. \end{align*} Collecting the preceding bounds on the $ F $'s, we conclude \begin{align} \label{eq:quadmg:F1bd} \mathbf{E} O\big[\,|F_1(x)|\,\big] \leq \frac{c}{\sqrt{t+1}} + c N^{-(\frac{u}2\wedge u_\mathrm{ic} \wedge v)} \log(N+1). \end{align} As for $ F_3(x) $, recall that $ \mathcal{B}(t,x) $ denotes a (generic) uniformly bounded process.
Taking $ \mathbf{E} O[\,|\,\Cdot\,|\,] $ in~\eqref{eq:quadmg:F3} and using the moment bounds from Proposition~\ref{prop:mom} together with Lemma~\ref{lem:sgkey}\ref{lem:sgkey<1}, we have \begin{align} \label{eq:quadmg:F3bd} \mathbf{E} O\big[\,|F_3(x)|\,\big] \leq cN^{-(\frac12\wedge u_{\mathbf{R}t} )}. \end{align} Inserting~\eqref{eq:quadmg:Dbd:}--\eqref{eq:quadmg:F3bd} into~\eqref{eq:quadmg:iter} completes the proof. \end{proof} We now establish the required decorrelation estimate on $ W $. \begin{proposition} \label{prop:selfav} Let $ u,v,\Lambda,T, \Omega' $, $ \mathbf{E} O[\,\Cdot\,] $ be as in Lemma~\ref{lem:selfav:}. There exists $ c=c(u,v,\Lambda,T) $ such that, for all $ s,t\in[0,N^2T]$ and $ x\in\mathbb{T} $, \begin{align} \label{eq:selfav:goal} \mathbf{E} O\Big[\,\big| \mathbf{E} O\big[ W(t+s,x) \big| \mathscr{F} (s) \big]\big| \, \Big] \leq c\, \Big( N^{-(\frac{u}{2}\wedge u_\mathrm{ic} \wedge v)}\log(N+1) + \tfrac{1}{\sqrt{t+1}} + \tfrac{N}{t+1} \Big). \end{align} \end{proposition} \begin{proof} Throughout the proof, we write $ c=c(u,v,T,\Lambda) $ to simplify notation, and assume $ t\in[0,N^2T] $. For fixed $ s \in[0,N^2T] $, set $ w(t,x) := \mathbf{E} O[\,| \mathbf{E} O[ W(t+s,x) \big| \mathscr{F} (s) ]|\,] $ and $ w(t) := \sup_{x\in\mathbb{T}} w(t,x) $, the latter being the quantity we aim to bound. Bounding the conditional expectation inside the time integral in~\eqref{eq:decorIter} by its supremum over the spatial variable gives \begin{align*} w(t,x) \leq c \Big( N^{-(\frac{u}2\wedge u_\mathrm{ic} \wedge v)}\log(N+1) + \tfrac{1}{\sqrt{t+1}} + \tfrac{N}{t+1} \Big) + \int_0^t \mathbb{Q} mgg_{ \mathbb{Q} , \mathbb{Q} }(t-t';x) w(t') dt'.
\end{align*} Iterating this inequality gives \begin{align} \label{eq:decorIter:} w(t,x) \leq c \Big( N^{-(\frac{u}2\wedge u_\mathrm{ic} \wedge v)}\log(N+1) + \tfrac{1}{\sqrt{t+1}} + \tfrac{N}{t+1} \Big) + \sum_{n=1}^\infty \big(w_{1,n}(t,x) + w_{2,n}(t,x) + w_{3,n}(t,x)\big), \end{align} where, with the notation $ \Sigma_n(t) $ from~\eqref{eq:Sigman} and $ d^n\vec{s} $ from~\eqref{eq:ds}, we have \begin{align*} w_{i,n}(t,x) := \int_{\Sigma_n(t)} \Big(\prod_{j=1}^n \mathbb{Q} mgg_{ \mathbb{Q} , \mathbb{Q} }(s_j;x) \Big) \cdot \left\{\begin{array}{l@{,}l} N^{-(\frac{u}2\wedge u_\mathrm{ic} \wedge v)}\log(N+1) & \text{ for } i=1 \\ \frac{1}{\sqrt{s_0+1}} & \text{ for } i=2 \\ \frac{N}{s_0+1} & \text{ for } i=3 \end{array} \right\} \cdot d^n\vec{s}. \end{align*} Let $ \beta:= \sup_{x\in\mathbb{T}} \int_0^{N^2T} \mathbb{Q} mgg_{ \mathbb{Q} , \mathbb{Q} }(t,x) dt $, which, by Lemma~\ref{lem:sgkey}\ref{lem:sgkey<1}, is strictly less than $ 1 $ (uniformly in $ N\geq N_0 $). For $ w_{1,n}(t,x) $, noting that the integrand does not involve the variable $ s_0 $, we bound \begin{align} \label{eq:w1bd} \begin{split} \sum_{n=1}^\infty w_{1,n}(t,x) &\leq N^{-(\frac{u}2\wedge u_\mathrm{ic} \wedge v)} \log(N+1) \sum_{n=1}^\infty \int_{[0,N^2T]^n} \prod_{j=1}^n \mathbb{Q} mgg_{ \mathbb{Q} , \mathbb{Q} }(s_j;x) ds_j \\ &= N^{-(\frac{u}2\wedge u_\mathrm{ic} \wedge v)} \log(N+1) \tfrac{\beta}{1-\beta} = c N^{-(\frac{u}2\wedge u_\mathrm{ic} \wedge v)}\log(N+1). \end{split} \end{align} To bound $ w_{2,n} $ and $ w_{3,n} $, we invoke the argument from \cite[Proof of Proposition~3.8]{labbe17}. We begin with $ w_{2,n} $.
Split $ w_{2,n}(t,x) $ into integrals over $ \Sigma_{n}(t)\cap\set{s_0>\frac{t}{n+1}} $ and over $ \Sigma_{n}(t)\cap\set{s_0\leq\frac{t}{n+1}} $, i.e., $ w_{2,n}(t,x)=w'_{2,n}(t,x)+w''_{2,n}(t,x) $, where \begin{align*} w'_{2,n}(t,x) := \hspace{-10pt} \int\limits_{\Sigma_n(t)\cap\set{s_0>\frac{t}{n+1}}} \hspace{-10pt} \Big(\prod_{i'=1}^n \mathbb{Q} mgg_{ \mathbb{Q} , \mathbb{Q} }(s_{i'};x) \Big) \cdot \frac{1}{\sqrt{s_0+1}} d^n\vec{s},& & w''_{2,n}(t,x) := \hspace{-10pt} \int\limits_{\Sigma_n(t)\cap\set{s_0\leq\frac{t}{n+1}}} \hspace{-10pt} \Big(\prod_{i'=1}^n \mathbb{Q} mgg_{ \mathbb{Q} , \mathbb{Q} }(s_{i'};x) \Big) \cdot \frac{1}{\sqrt{s_0+1}} \cdot d^n\vec{s}. \end{align*} For $ w'_{2,n} $, we bound $ \frac{1}{\sqrt{s_0+1}} $ by $ c(\frac{n+1}{t+1})^{1/2} $, which is valid on the domain of integration. Doing so releases the $ s_0 $ variable from the integration, yielding \begin{align} \label{eq:w'} w'_{2,n}(t,x) \leq c \Big(\frac{n+1}{t+1}\Big)^{1/2} \int_{[0,N^2T]^n} \Big(\prod_{i'=1}^n \mathbb{Q} mgg_{ \mathbb{Q} , \mathbb{Q} }(s_{i'};x) \Big) d^n\vec{s} \leq c \beta^n \Big(\frac{n+1}{t+1}\Big)^{1/2}. \end{align} As for $ w''_{2,n} $, we note that the integration domain is necessarily a subset of $ \Sigma_n(t)\cap\bigcup_{i=1}^n\set{s_i>\frac{t}{n+1}} $. At each encounter of $ s_i>\frac{t}{n+1} $, we invoke the bound from Lemma~\ref{lem:sgkey}\ref{lem:sgrkeytail}. This gives \begin{align} \label{eq:w''} w''_{2,n}(t,x) \leq c\sum_{i=1}^n \Big( \frac{n+1}{t+1} \Big)^{u+\frac12} \int_{\Sigma_n(t)} \Big(\prod_{i'\in\set{1,\ldots,n}\setminus\set{i}} \mathbb{Q} mgg_{ \mathbb{Q} , \mathbb{Q} }(s_{i'};x) \Big) \frac{1}{\sqrt{s_0+1}} d^n\vec{s}. \end{align} For each $ i=1,\ldots,n $, the integrand in~\eqref{eq:w''} does not involve the variable $ s_i $. 
We then bound \begin{align} \label{eq:w'':} w''_{2,n}(t,x) \leq c \sum_{i=1}^n \Big( \frac{n+1}{t+1} \Big)^{u+\frac12} \int_{[0,t]^n} \Big(\prod_{i'\in\set{1,\ldots,n}\setminus\set{i}} \mathbb{Q} mgg_{ \mathbb{Q} , \mathbb{Q} }(s_{i'};x) ds_{i'} \Big) \frac{1}{\sqrt{s_0+1}} ds_0 \leq \frac{c(n+1)^{u+\frac32}\beta^{n-1}}{(t+1)^{u}}. \end{align} Combine~\eqref{eq:w'} and~\eqref{eq:w'':}, and sum the result over $ n\geq 1 $. With $ \beta<1 $ and $ u>\frac12 $, we conclude \begin{align} \label{eq:w2bd} \sum_{n=1}^\infty w_{2,n}(t,x) \leq \frac{c}{\sqrt{t+1}}. \end{align} As for $ w_{3,n}(t,x) $, the same calculations as in the preceding give \begin{align*} w_{3,n}(t,x) \leq cN\cdot \Big( \beta^n \frac{n+1}{t+1} + \frac{c(n+1)^{u+\frac32}\beta^{n-1}}{(t+1)^{u+\frac12}} \log(t+2) \Big) \leq cN\cdot\frac{(n+1)^{u+\frac32} \beta^{n-1}}{t+1}, \end{align*} where the factor $ \log(t+2) $ arises from integrating $ \frac{1}{s_0+1} $, and the second inequality follows since $ u>\frac12 $. Summing over $ n\geq 1 $, with $ \beta<1 $, we have \begin{align} \label{eq:w3bd} \sum_{n=1}^\infty w_{3,n}(t,x) \leq \frac{cN}{t+1}. \end{align} Inserting~\eqref{eq:w1bd} and \eqref{eq:w2bd}--\eqref{eq:w3bd} into~\eqref{eq:decorIter:} completes the proof. \end{proof} Having established the decorrelation estimate in Proposition~\ref{prop:selfav}, we proceed to prove that $ \mathcal{Z} $ solves the quadratic martingale problem~\eqref{eq:quadmg:}. Recall the definition of $ M_n(t) $ from~\eqref{eq:mgn}. Consider the discrete analog $ L _{n,n'}(t) $ of $ \mathcal{M} g_{n,n'}(t) $ (defined in~\eqref{eq:quadmg:}): \begin{align*} L _{n,n'}(t) := M_n(t)M_{n'}(t) - \int_0^t \langle \varphi _{n} \varphi _{n'}, Z^2_N(s) \rangle_N ds. \end{align*} Recall from \eqref{eq:mgcnvg} that $ M_n $ converges in $ C[0,T] $ to $ \mathcal{M} _n $ in probability. 
Also, from~\eqref{eq:Zcnvg}, $ \int_0^t \langle \varphi _{n} \varphi _{n'}, Z^2_N(s) \rangle_N ds $ converges in $ C[0,T] $ to its continuum counterpart $ \int_0^t \langle \varphi _{n} \varphi _{n'}, \mathcal{Z}^2(s) \rangle ds $ in probability. Consequently, $ L _{n,n'} $ converges in $ C[0,T] $ to $ \mathcal{M} g_{n,n'} $ in probability. On the other hand, we know that $ L '_{n,n'}(t) := M_n(t)M_{n'}(t) - \langle M_n,M_{n'} \rangle (t) $ is a martingale, and, given the expansion~\eqref{eq:qvv}, we have $ L _{n,n'}(t) - L '_{n,n'}(t) = L _1(t) + L _2(t). $ Given \eqref{eq:mgcnvg} and the fact that $ L '_{n,n'}(t) $ is a martingale, the process in~\eqref{eq:quadmg:} is a local martingale once we know that $ L _1 $ and $ L _2 $ converge in $ C[0,T] $ to zero in probability. It therefore remains only to show that $ L _1 $ and $ L _2 $ converge in $ C[0,T] $ to zero in probability. Given the moment bounds of Proposition~\ref{prop:mom}, it is not hard to check that $ L _1 $ and $ L _2 $ are tight in $ C[0,T] $. This being the case, it suffices to establish one-point convergence: \begin{lemma} For each fixed $ t\in\mathbf{R}_{\geq 0} $, we have $ L _1(t), L _2(t) \to_\text{P} 0 $. \end{lemma} \begin{proof} Fixing $ u\in(0,1) $, $ v\in(0, u_\mathrm{ic} ) $, $ t\in[0,T] $, and $ \Lambda<\infty $, throughout this proof we write $ \Omega=\Omega(u,v,\Lambda,T,N) $, $ \Omega'=\Omega(u,v,\Lambda,T,N)\cap\Omega(1,v,\Lambda,T,N) $, $ c=c(u,v,T,\Lambda) $, and $ \mathbf{E} rt[\,\Cdot\,]:= \mathbf{E} [\,\Cdot\,| \mathbb{a} (x),x\in\mathbb{T}] $. We begin with $ L _1 $. Recall that $ \mathcal{B} $ denotes a generic uniformly bounded process. By~\eqref{eq:rtbd}, $ \norm{ \mathbb{a} }_{L^\infty(\mathbb{T})} \leq N^{- u_{\mathbf{R}t} } $. 
Given this, taking $ \mathbf{E} rt[\,|\,\Cdot\,| \mathbf{1} _{\Omega}\,] $ in~\eqref{eq:qdmg:qvv1}, using the moment bound on $ Z(t,x) $ from Proposition~\ref{prop:mom}, and using $ \norm{ \varphi _n}_{L^\infty(\mathcal{T})} \leq c\norm{ \varphi _n}_{H^1(\mathcal{T})} $ (from~\ref{fn:LinfinH1}), we have \begin{align*} \mathbf{E} rt[| L _1(t)| \mathbf{1} _{\Omega}] := \mathbf{E} \big[\,| L _1(t) | \mathbf{1} _{\Omega}\,\big|\, \mathbb{a} (x), x\in\mathbb{T} \big] \leq c N^{-(\frac12\wedge u_{\mathbf{R}t} )}\,\norm{ \varphi _n}_{H^1(\mathcal{T})}\norm{ \varphi _{n'}}_{H^1(\mathcal{T})}. \end{align*} Set $ \Gamma = \Gamma(n,n',\Lambda):= \set{\,\norm{ \varphi _n}_{H^1(\mathcal{T})}\norm{ \varphi _{n'}}_{H^1(\mathcal{T})} \leq \Lambda} $. Multiply both sides by $ \mathbf{1} _\Gamma $, and take $ \mathbf{E} [\,\Cdot\,] $ on both sides to get $ \mathbf{E} [\,| L _1(t) | \mathbf{1} _{\Omega} \mathbf{1} _{\Gamma}\,] \leq c N^{-(\frac12\wedge u_{\mathbf{R}t} )}. $ This gives $ | L _1(t) | \mathbf{1} _{\Omega} \mathbf{1} _{\Gamma} \to_\text{P} 0 $ as $ N\to\infty $. More explicitly, writing $ L _1(t)= L _1(t;N) $, $ \Omega=\Omega(u,v,T,\Lambda,N) $, and $ \Gamma=\Gamma(\Lambda) $, we have, for each fixed $ \varepsilon>0 $, \begin{align} \label{eq:mgg1to0} \lim_{N\to\infty} \mathbf{P} \big[ \set{| L _1(t;N)|>\varepsilon}\cap\Omega(u,v,T,\Lambda,N)\cap\Gamma(\Lambda) \big] =0. \end{align} Moreover, with $ n,n' $ being fixed, we have \begin{align} \label{eq:Gammato1} \lim_{\Lambda\to\infty} \mathbf{P} [\,\Gamma(\Lambda)^\text{c}\,] = \lim_{\Lambda\to\infty} \mathbf{P} [\,\norm{ \varphi _n}_{H^1(\mathcal{T})}\norm{ \varphi _{n'}}_{H^1(\mathcal{T})} > \Lambda\,] = 0. \end{align} Also, Proposition~\ref{prop:sg} asserts that \begin{align} \label{eq:iterlimit} \limsup_{\Lambda\to\infty} \limsup_{N\to\infty} \mathbf{P} [\Omega(u,v,T,\Lambda,N)^\text{c}]=0. 
\end{align} Use the union bound to write \begin{align*} \mathbf{P} \big[ | L _1(t;N)|>\varepsilon \big] \leq \mathbf{P} \big[ \set{| L _1(t;N)|>\varepsilon}\cap\Omega(u,v,T,\Lambda,N)\cap\Gamma(\Lambda) \big] + \mathbf{P} [\Omega(u,v,T,\Lambda,N)^\text{c}] + \mathbf{P} [\,\Gamma(\Lambda)^\text{c}\,], \end{align*} and send $ N\to\infty $ and $ \Lambda\to\infty $ \emph{in order} on both sides. With the aid of \eqref{eq:mgg1to0}--\eqref{eq:iterlimit}, we conclude that $ \lim_{N\to\infty} \mathbf{P} [ | L _1(t;N)|>\varepsilon ] = 0 $ for each $ \varepsilon >0 $. That is, $ L _1(t;N) \to_\text{P} 0 $ as $ N\to\infty $. Turning to $ L _2 $, in~\eqref{eq:qdmg:qvv2}, we take $ \mathbf{E} rt[(\,\Cdot\,)^2 \mathbf{1} _{\Omega'}] $ on both sides to get \begin{align*} \mathbf{E} rt[( L _2(t))^2 \mathbf{1} _{\Omega'}] \leq \big( \norm{ \varphi _n}_{H^1(\mathcal{T})}\norm{ \varphi _{n'}}_{H^1(\mathcal{T})} \big)^2 \frac{2}{N^4}\int_{s_1<s_2\in[0,N^2t]^2} \frac{1}{N^2} \sum_{x_1,x_2\in\mathbb{T}} | \mathbf{E} rt[ W(s_2,x_2) W(s_1,x_1) \mathbf{1} _{\Omega'} ]| ds_1ds_2. \end{align*} Multiplying both sides by $ \mathbf{1} _{\Gamma} $, we replace $ (\norm{ \varphi _n}_{H^1(\mathcal{T})}\norm{ \varphi _{n'}}_{H^1(\mathcal{T})})^2 $ with the constant $ \Lambda^2 $ on the r.h.s.\ to get \begin{align} \label{eq:mgg2:bd} \mathbf{E} rt[( L _2(t))^2 \mathbf{1} _{\Omega'} \mathbf{1} _\Gamma] \leq \frac{c}{N^4}\int_{s_1<s_2\in[0,N^2t]^2} \frac{1}{N^2} \sum_{x_1,x_2\in\mathbb{T}} | \mathbf{E} rt[ W(s_2,x_2) W(s_1,x_1) \mathbf{1} _{\Omega'} ]| ds_1ds_2. 
\end{align} To bound the expectation on the r.h.s.\ of~\eqref{eq:mgg2:bd}, we fix a threshold $ \kappa>0 $ and split the expectation into $ \mathbf{E} rt[ W(s_2,x_2) W(s_1,x_1) \mathbf{1} _{\Omega'} ] = f_1+ f_2 $, where \begin{align*} f_1:= \mathbf{E} rt[ W(s_2,x_2) W(s_1,x_1) \mathbf{1} _{\Omega'} \mathbf{1} _{|W(s_1,x_1)|\leq\kappa} ], \qquad f_2:= \mathbf{E} rt[ W(s_2,x_2) W(s_1,x_1) \mathbf{1} _{\Omega'} \mathbf{1} _{|W(s_1,x_1)|>\kappa} ]. \end{align*} For $ f_1 $, insert the conditional expectation $ \mathbf{E} [\,\Cdot\,| \mathscr{F} (s_1)] $, and then use Proposition~\ref{prop:selfav} to show \begin{align*} |f_1| \leq \kappa\, \mathbf{E} rt\big[\,\big| \mathbf{E} rt\big[ W(s_2,x_2) \mathbf{1} _{\Omega'} \big| \mathscr{F} (s_1) \big] \big|\,\big] \leq c\kappa \, \Big( N^{-(\frac{u}{2}\wedge u_\mathrm{ic} \wedge v)}\log(N+1)+\tfrac{1}{\sqrt{s_2-s_1+1}} + \tfrac{N}{s_2-s_1+1} \Big). \end{align*} As for $ f_2 $, apply Markov's inequality followed by~\eqref{eq:Wapbd} to get \begin{align*} |f_2| \leq c \kappa^{-1} \sup_{s\in[0,N^2T]} \sup_{x\in\mathbb{T}} \mathbf{E} rt[ Z^4(s,x) \mathbf{1} _{\Omega'} ] \leq c\kappa^{-1}, \end{align*} where the last inequality follows from the moment bound on $ Z(s,x) $ from Proposition~\ref{prop:mom}. Inserting the bounds on $ |f_1| $ and $ |f_2| $ into~\eqref{eq:mgg2:bd} now gives \begin{align} \label{eq:mgg2:bd:} \begin{split} \mathbf{E} rt[( L _2(t))^2 \mathbf{1} _{\Omega'} \mathbf{1} _\Gamma] &\leq \frac{c}{N^4}\int\limits_{s_1<s_2\in[0,N^2t]^2} \hspace{-15pt} \Big( \kappa\,\Big( N^{-(\frac{u}{2}\wedge u_\mathrm{ic} \wedge v)}\log(N+1)+\frac{1}{\sqrt{s_2-s_1+1}} + \frac{N}{s_2-s_1+1} \Big) + \kappa^{-1} \Big) ds_1ds_2 \\ & \leq c \kappa N^{-(\frac{u}{2}\wedge u_\mathrm{ic} \wedge v)}\log(N+1) + c\kappa^{-1}. 
\end{split} \end{align} Now, choose $ \kappa= N^{\frac{u}{4}\wedge\frac{ u_\mathrm{ic} }{2}\wedge\frac{v}{2}} $, which balances the two terms on the r.h.s.\ of~\eqref{eq:mgg2:bd:} up to the factor $ \log(N+1) $, and take $ \mathbf{E} [\,\Cdot\,] $ on both sides of~\eqref{eq:mgg2:bd:}. This gives \begin{align*} \mathbf{E} [( L _2(t))^2 \mathbf{1} _{\Omega'} \mathbf{1} _\Gamma] \leq c N^{-(\frac{u}{4}\wedge\frac{ u_\mathrm{ic} }{2}\wedge\frac{v}{2})}\,\log(N+1) \to 0. \end{align*} Given this, similarly to the preceding, after passing $ \Lambda $ to a suitable sequence $ \Lambda_N\to\infty $ as in~\eqref{eq:mgg1to0}, we conclude that $ L _2(t) \to_\text{P} 0 $. \end{proof} \end{document}
\begin{document} \title{Another introduction to the geometry \\ of metric spaces} \author{Stephen Semmes \\ Rice University} \date{} \maketitle \renewcommand{\thefootnote}{} \footnotetext{The present notes are somewhat complementary to \cite{s2}, and the two can be read in either order.} \begin{abstract} Here Lipschitz conditions are used as a primary tool, for studying curves in metric spaces in particular. \end{abstract} \tableofcontents \section{Basic notions} \label{basic notions} \setcounter{equation}{0} A \emph{metric space} is a nonempty set $M$ equipped with a distance function $d(x, y)$ defined for $x, y \in M$ such that $d(x, y)$ is a nonnegative real number which is equal to $0$ exactly when $x = y$, \begin{equation} d(y, x) = d(x, y) \end{equation} for every $x, y \in M$, and \begin{equation} d(x, z) \le d(x, y) + d(y, z) \end{equation} for every $x, y, z \in M$. This last condition is known as the \emph{triangle inequality}. As usual, ${\bf R}$ denotes the real line, and the \emph{absolute value} $|r|$ of $r \in {\bf R}$ is defined to be $r$ when $r \ge 0$ and $-r$ when $r \le 0$. It is easy to check that \begin{equation} \label{|r + t| le |r| + |t|} |r + t| \le |r| + |t| \end{equation} for every $r, t \in {\bf R}$, which implies that $|r - t|$ is a metric on ${\bf R}$. Let $(M, d(x, y))$ be a metric space. For each $x \in M$ and $r > 0$, the \emph{open ball} with center $x$ and radius $r$ is defined by \begin{equation} B(x, r) = \{y \in M : d(x, y) < r\}, \end{equation} and the corresponding \emph{closed ball} is defined to be \begin{equation} \overline{B}(x, r) = \{y \in M : d(x, y) \le r\}. \end{equation} A set $E \subseteq M$ is said to be \emph{bounded} if $E$ is contained in a ball in $M$. For any $p, q \in M$ and $r > 0$, \begin{equation} B(p, r) \subseteq B(q, r + d(p, q)) \end{equation} and \begin{equation} \overline{B}(p, r) \subseteq \overline{B}(q, r + d(p, q)), \end{equation} by the triangle inequality. 
It follows that a bounded set $E \subseteq M$ is contained in a ball centered at any point in $M$. \section{Norms on ${\bf R}^n$} \label{norms} \setcounter{equation}{0} For each positive integer $n$, ${\bf R}^n$ is the space of $n$-tuples $x = (x_1, \ldots, x_n)$ of real numbers, i.e., $x_1, \ldots, x_n \in {\bf R}$. This is a vector space with respect to coordinatewise addition and scalar multiplication by real numbers. Suppose that $N(x)$ is a function defined on ${\bf R}^n$ with values in the nonnegative real numbers. We say that $N(x)$ is a \emph{norm} on ${\bf R}^n$ if $N(x) = 0$ exactly when $x = 0$, \begin{equation} \label{N(r x) = |r| N(x)} N(r \, x) = |r| \, N(x) \end{equation} for every $r \in {\bf R}$ and $x \in {\bf R}^n$, and \begin{equation} \label{N(x + y) le N(x) + N(y)} N(x + y) \le N(x) + N(y) \end{equation} for every $x, y \in {\bf R}^n$. If $N(x)$ is a norm on ${\bf R}^n$, then \begin{equation} \label{d_N(x, y) = N(x - y)} d_N(x, y) = N(x - y) \end{equation} is a metric on ${\bf R}^n$. The absolute value function is a norm on the real line, and any norm on ${\bf R}$ can be expressed as $a \, |x|$ for some $a > 0$. The standard Euclidean norm on ${\bf R}^n$ is defined by \begin{equation} |x| = \Big(\sum_{j = 1}^n x_j^2\Big)^{1/2}, \end{equation} and determines the standard metric on ${\bf R}^n$. It is not completely obvious that this satisfies the triangle inequality, and one way to show this will be mentioned in the next section. It is much easier to check directly that \begin{equation} \|x\|_1 = \sum_{j = 1}^n |x_j| \end{equation} and \begin{equation} \|x\|_\infty = \max(|x_1|, \ldots, |x_n|) \end{equation} are norms on ${\bf R}^n$. Note that the standard norm may also be denoted $\|x\|_2$. 
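For a concrete comparison of these norms, consider $x = (3, 4)$ in ${\bf R}^2$. Then \begin{equation} \|x\|_1 = 7, \qquad \|x\|_2 = 5, \qquad \|x\|_\infty = 4, \end{equation} so that the three norms are genuinely different on ${\bf R}^2$, although they are comparable to each other, as described next.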
If $N(x)$ is any norm on ${\bf R}^n$, then \begin{equation} N(x) \le \max(N(e_1), \ldots, N(e_n)) \, \|x\|_1, \end{equation} where $e_1, \ldots, e_n$ are the standard basis vectors in ${\bf R}^n$, which is to say that the $j$th coordinate of $e_\ell$ is equal to $1$ when $j = \ell$ and to $0$ otherwise. Indeed, \begin{equation} x = \sum_{j = 1}^n x_j \, e_j, \end{equation} and therefore \begin{equation} N(x) \le \sum_{j = 1}^n N(e_j) \, |x_j|. \end{equation} In particular, \begin{equation} \|x\|_\infty \le \|x\|_2 \le \|x\|_1 \end{equation} for every $x \in {\bf R}^n$, since the first inequality holds by inspection. One can also get the second inequality by observing that \begin{equation} \sum_{j = 1}^n x_j^2 \le \|x\|_1 \, \|x\|_\infty, \end{equation} since $\|x\|_\infty \le \|x\|_1$ also holds by inspection. In the other direction, it is easy to see that \begin{equation} \|x\|_1 \le n \, \|x\|_\infty \end{equation} and \begin{equation} \|x\|_2 \le \sqrt{n} \, \|x\|_\infty, \end{equation} and one can use the convexity of $\phi(t) = t^2$ on the real line to show that \begin{equation} \|x\|_1 \le \sqrt{n} \, \|x\|_2. \end{equation} \section{Convex sets in ${\bf R}^n$} \label{convex sets in R^n} \setcounter{equation}{0} A set $E \subseteq {\bf R}^n$ is said to be \emph{convex} if for every $x, y \in E$ and every real number $t$, $0 \le t \le 1$, \begin{equation} t \, x + (1 - t) \, y \in E. \end{equation} For example, open and closed balls with respect to the metric associated to a norm on ${\bf R}^n$ are convex. Conversely, suppose that $N(x)$ is a nonnegative real-valued function on ${\bf R}^n$ that satisfies $N(x) > 0$ when $x \ne 0$ and the homogeneity condition (\ref{N(r x) = |r| N(x)}). If \begin{equation} \label{B_N = {x in {bf R}^n : N(x) le 1}} B_N = \{x \in {\bf R}^n : N(x) \le 1\} \end{equation} is convex, then $N$ satisfies the triangle inequality (\ref{N(x + y) le N(x) + N(y)}) and hence is a norm. 
To see this, let $x, y \in {\bf R}^n$ be given, and let us check (\ref{N(x + y) le N(x) + N(y)}). The inequality is trivial when $x = 0$ or $y = 0$, and so we may suppose that $x, y \ne 0$. Put \begin{equation} x' = \frac{x}{N(x)}, \quad y' = \frac{y}{N(y)}, \end{equation} which automatically satisfy \begin{equation} N(x') = N(y') = 1. \end{equation} For $0 \le t \le 1$, convexity of $B_N$ implies that \begin{equation} \label{N(t x' + (1 - t) y') le 1} N(t \, x' + (1 - t) \, y') \le 1. \end{equation} If \begin{equation} t = \frac{N(x)}{N(x) + N(y)}, \end{equation} then \begin{equation} 1 - t = \frac{N(y)}{N(x) + N(y)} \end{equation} and \begin{equation} t \, x' + (1 - t) \, y' = \frac{x + y}{N(x) + N(y)}, \end{equation} which means that (\ref{N(x + y) le N(x) + N(y)}) follows from (\ref{N(t x' + (1 - t) y') le 1}). One can use the convexity of the function $\phi(r) = r^2$ on the real line to show directly that the closed unit ball with respect to the standard Euclidean norm is a convex set in ${\bf R}^n$, and hence that the Euclidean norm satisfies the triangle inequality and is therefore a norm. For each real number $p$, $1 \le p < \infty$, put \begin{equation} \|x\|_p = \Big(\sum_{j = 1}^n |x_j|^p \Big)^{1/p}. \end{equation} One can use the convexity of the function $\phi_p(r) = |r|^p$ on the real line to show that the closed unit ball associated to $\|x\|_p$ is a convex set in ${\bf R}^n$, and therefore that $\|x\|_p$ is a norm on ${\bf R}^n$. By inspection, \begin{equation} \|x\|_\infty \le \|x\|_p \end{equation} for every $x \in {\bf R}^n$ and $1 \le p < \infty$. If $1 \le p < q < \infty$, then \begin{equation} \sum_{j = 1}^n |x_j|^q \le \Big(\sum_{j = 1}^n |x_j|^p \Big) \, \|x\|_\infty^{q - p}, \end{equation} which implies that \begin{equation} \|x\|_q \le \|x\|_p^{p/q} \, \|x\|_\infty^{1 - (p/q)} \end{equation} and thus \begin{equation} \|x\|_q \le \|x\|_p \end{equation} for every $x \in {\bf R}^n$. 
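As a simple illustration of this monotonicity, take $x = (1, \ldots, 1)$ in ${\bf R}^n$, for which \begin{equation} \|x\|_p = n^{1/p} \end{equation} is strictly decreasing in $p$ and tends to $\|x\|_\infty = 1$ as $p \to \infty$. Taking $x = e_1$ instead, we get that $\|x\|_p = 1$ for every $p$, so that the constant $1$ in the inequality $\|x\|_q \le \|x\|_p$ cannot be improved.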
\section{Lipschitz conditions, 1} \label{lipschitz conditions, 1} \setcounter{equation}{0} Let $(M_1, d_1(x, y))$, $(M_2, d_2(u, v))$ be metric spaces. A mapping $f : M_1 \to M_2$ is said to be \emph{Lipschitz} with constant $C \ge 0$ or $C$-Lipschitz if \begin{equation} d_2(f(x), f(y)) \le C \, d_1(x, y) \end{equation} for every $x, y \in M_1$. More precisely, $f$ is Lipschitz of order $1$ if this holds for some $C \ge 0$, and we shall discuss Lipschitz conditions of any order $\alpha > 0$ a bit later. Note that $f$ is Lipschitz with $C = 0$ if and only if $f$ is constant, and that Lipschitz mappings are automatically uniformly continuous. If $M_2$ is the real line with the standard metric, then the preceding Lipschitz condition is equivalent to \begin{equation} f(x) \le f(y) + C \, d_1(x, y). \end{equation} This follows by interchanging the order of $x$ and $y$. In particular, \begin{equation} f_p(x) = d_1(x, p) \end{equation} is Lipschitz with $C = 1$ for every $p \in M_1$, by the triangle inequality. If $f$, $\widetilde{f}$ are real-valued Lipschitz functions on $M_1$ with constants $C$, $\widetilde{C}$, respectively, then $f + \widetilde{f}$ is Lipschitz with constant $C + \widetilde{C}$. Moreover, $a \, f$ is Lipschitz with constant $|a| \, C$ for every $a \in {\bf R}$. The product of bounded real-valued Lipschitz functions is also Lipschitz. If $(M_3, d_3(w, z))$ is another metric space, and $f_1 : M_1 \to M_2$ and $f_2 : M_2 \to M_3$ are Lipschitz mappings with constants $C_1$, $C_2$, respectively, then the composition $f_2 \circ f_1 : M_1 \to M_3$ defined by \begin{equation} (f_2 \circ f_1)(x) = f_2(f_1(x)) \end{equation} is Lipschitz with constant $C_1 \, C_2$. For any mapping $f : M_1 \to M_2$ and set $A \subseteq M_1$, \begin{equation} f(A) = \{f(x) : x \in A\} \subseteq M_2. \end{equation} Let $B_1(x, r)$ and $B_2(p, t)$ be the open balls in $M_1$, $M_2$ with centers $x \in M_1$, $p \in M_2$ and radii $r, t > 0$, respectively. 
It is easy to see that $f : M_1 \to M_2$ is Lipschitz with constant $C > 0$ if and only if \begin{equation} f(B_1(x, r)) \subseteq B_2(f(x), C \, r) \end{equation} for every $x \in M_1$ and $r > 0$. This is also equivalent to the analogous condition \begin{equation} f(\overline{B}_1(x, r)) \subseteq \overline{B}_2(f(x), C \, r) \end{equation} for closed balls. In particular, if $A$ is a bounded set in $M_1$, then $f(A)$ is bounded in $M_2$. Suppose that $f$ is a real-valued function on the real line, equipped with the standard metric. If $f$ is differentiable at a point $x \in {\bf R}$, and $f$ is $C$-Lipschitz for some $C \ge 0$, then the derivative $f'(x)$ of $f$ at $x$ satisfies \begin{equation} \label{|f'(x)| le C} |f'(x)| \le C. \end{equation} This follows from the definition of the derivative. Conversely, if $f$ is differentiable and satisfies (\ref{|f'(x)| le C}) at every point in ${\bf R}$, then $f$ is $C$-Lipschitz, by the mean value theorem. Note that $f(x) = |x|$ is $1$-Lipschitz on ${\bf R}$ and not differentiable at $x = 0$. \section{Lipschitz curves} \label{lipschitz curves} \setcounter{equation}{0} Let $(M, d(x, y))$ be a metric space, and suppose that $a$, $b$ are real numbers with $a \le b$. As usual, the closed interval $[a, b]$ in the real line consists of the $r \in {\bf R}$ such that $a \le r \le b$. Suppose also that $p : [a, b] \to M$ is Lipschitz with constant $k$ for some $k \ge 0$. If $\{t_j\}_{j = 0}^n$ is a finite sequence of real numbers such that \begin{equation} a = t_0 < t_1 < \cdots < t_n = b, \end{equation} then \begin{equation} \sum_{j = 1}^n d(p(t_j), p(t_{j - 1})) \le k \, \sum_{j = 1}^n (t_j - t_{j - 1}) = k \, (b - a). \end{equation} This is often described by saying that the curve determined by $p(t)$, $a \le t \le b$, has length $\le k \, (b - a)$. Of course, one can use translations on the real line to shift the interval on which a path is defined without changing the Lipschitz constant. 
One can use affine mappings on ${\bf R}$ to change the length of the interval on which a path is defined, with a corresponding change in the Lipschitz constant. The product of the Lipschitz constant and the length of the interval would remain the same. If $c \in {\bf R}$, $c \ge b$, $q : [b, c] \to M$ is $k$-Lipschitz, and $p(b) = q(b)$, then the mapping from $[a, c]$ into $M$ defined by combining $p$ and $q$ is $k$-Lipschitz too. This is easy to verify, directly from the definitions. If the Lipschitz constants for $p$ and $q$ are different, then it may be preferable to rescale the intervals so that the Lipschitz constants are the same. If $p$ is constant on an interval $[a_1, b_1] \subseteq [a, b]$, then one can remove $(a_1, b_1)$ from $[a, b]$ and combine the remaining pieces to get a curve with the same Lipschitz constant on a smaller interval. \section{Minimality} \label{minimality} \setcounter{equation}{0} Let $(M, d(x, y))$ be a metric space in which closed and bounded sets are compact. Suppose that $x, y \in M$ can be connected by a Lipschitz curve in $M$. This means that there is a Lipschitz mapping $p : [0, 1] \to M$ such that $p(0) = x$ and $p(1) = y$. Using the Arzel\`a--Ascoli theorem, one can show that there is such a path whose Lipschitz constant is as small as possible. For suppose that $p_1, p_2, \ldots$ is a sequence of Lipschitz mappings from $[0, 1]$ into $M$ whose Lipschitz constants $k_1, k_2, \ldots$, respectively, converge to the infimum $k$ of the possible Lipschitz constants. By passing to a subsequence, we may suppose that the sequence of mappings converges uniformly on $[0, 1]$. The limiting mapping sends $0$ to $x$ and $1$ to $y$, and it is easy to check that it is Lipschitz with constant $k$. Note that $k \ge d(x, y)$. 
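The last step can be verified directly: if the $p_j$'s converge to $p$ uniformly on $[0, 1]$, then \begin{equation} d(p(s), p(t)) = \lim_{j \to \infty} d(p_j(s), p_j(t)) \le \lim_{j \to \infty} k_j \, |s - t| = k \, |s - t| \end{equation} for every $s, t \in [0, 1]$, using the continuity of the metric in each of its arguments.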
\section{Affine paths in ${\bf R}^n$} \label{affine paths in R^n} \setcounter{equation}{0} Fix a positive integer $n$, and consider an affine mapping $p : {\bf R} \to {\bf R}^n$, given by $p(r) = u + r \, v$ for some $u, v \in {\bf R}^n$. If $N$ is any norm on ${\bf R}^n$, then $p$ is Lipschitz with constant $N(v)$ with respect to the standard metric on ${\bf R}$ and the metric associated to $N$ on ${\bf R}^n$, since \begin{equation} N(p(r) - p(t)) = |r - t| \, N(v) \end{equation} for every $r, t \in {\bf R}$. For any $a, b \in {\bf R}$ with $a \le b$, the restriction of $p$ to $[a, b]$ is a Lipschitz curve connecting $p(a)$ to $p(b)$ with constant $N(v)$, and $N(v)$ is the smallest possible Lipschitz constant for such a curve on $[a, b]$. A norm $N$ on ${\bf R}^n$ is said to be \emph{strictly convex} if the corresponding closed unit ball $B_N$ as in (\ref{B_N = {x in {bf R}^n : N(x) le 1}}) is strictly convex. This means that for every $x, y \in {\bf R}^n$ with $N(x) = N(y) = 1$ and $x \ne y$, we have that \begin{equation} N(t \, x + (1 - t) \, y) < 1 \end{equation} when $0 < t < 1$. Equivalently, if $w, z \in {\bf R}^n$ and \begin{equation} N(w + z) = N(w) + N(z), \end{equation} then either $w = 0$, $z = 0$, or $z = r \, w$ for some $r > 0$. This follows from an argument like the one used in Section \ref{convex sets in R^n} to show that convexity of $B_N$ implies the triangle inequality for $N$ when $N$ is homogeneous. One can show that the standard Euclidean norm on ${\bf R}^n$ is strictly convex, using strict convexity of the function $\phi(r) = r^2$ on the real line. Similarly, $\|x\|_p$ is strictly convex on ${\bf R}^n$ when $1 < p < \infty$, as a consequence of the strict convexity of $\phi_p(r) = |r|^p$. In particular, the absolute value is strictly convex as a norm on ${\bf R}$, if not in the ordinary sense for arbitrary functions, because equality holds in (\ref{|r + t| le |r| + |t|}) only when $r, t \ge 0$ or $r, t \le 0$. 
However, $\|x\|_1$ and $\|x\|_\infty$ are not strictly convex norms on ${\bf R}^n$ when $n \ge 2$. Suppose that $N$ is a strictly convex norm on ${\bf R}^n$. If $x, y, z \in {\bf R}^n$ satisfy \begin{equation} d_N(x, z) = d_N(x, y) + d_N(y, z), \end{equation} where $d_N$ is as defined in (\ref{d_N(x, y) = N(x - y)}), then \begin{equation} y = r \, x + (1 - r) \, z \end{equation} for some $r \in [0, 1]$. If $q : [a, b] \to {\bf R}^n$ is $k$-Lipschitz and \begin{equation} N(q(b) - q(a)) = k \, (b - a), \end{equation} then \begin{equation} d_N(q(a), q(b)) = d_N(q(a), q(t)) + d_N(q(t), q(b)) \end{equation} for every $t \in [a, b]$. One can use this to show that $q(t)$ is affine. This does not work for the norms $\|x\|_1$, $\|x\|_\infty$ on ${\bf R}^n$ when $n \ge 2$. For example, there is a $1$-Lipschitz path from $[0, 2]$ into ${\bf R}^2$ equipped with the norm $\|x\|_1$ that connects $(0, 0)$ to $(1, 1)$ by following the horizontal segment to $(1, 0)$ and then the vertical segment to $(1, 1)$. If $\phi : [0, 1] \to {\bf R}$ is any $1$-Lipschitz function with respect to the standard metric on the real line which satisfies $\phi(0) = \phi(1) = 0$, then $\Phi(t) = (t, \phi(t))$ is a $1$-Lipschitz mapping from $[0, 1]$ into ${\bf R}^2$ equipped with the norm $\|x\|_\infty$ that connects $(0, 0)$ to $(1, 0)$. \section{$C^1$ paths in ${\bf R}^n$} \label{C^1 paths in R^n} \setcounter{equation}{0} Let $N$ be a norm on ${\bf R}^n$. As in Section \ref{lipschitz conditions, 1}, the triangle inequality implies that $N$ is $1$-Lipschitz with respect to the associated metric $d_N$. As in Section \ref{norms}, one can show that $N$ is less than or equal to a constant multiple of the standard Euclidean norm on ${\bf R}^n$. It follows that $N$ is also a Lipschitz function with respect to the standard metric on ${\bf R}^n$. Let $p : [a, b] \to {\bf R}^n$ be a continuously-differentiable curve with derivative $p'(t)$. This implies that $N(p'(t))$ is a continuous function on $[a, b]$. 
By the fundamental theorem of calculus, \begin{equation} p(t) - p(r) = \int_r^t p'(u) \, du \end{equation} when $a \le r \le t \le b$. Hence \begin{equation} N(p(t) - p(r)) \le \int_r^t N(p'(u)) \, du, \end{equation} using an extension of the triangle inequality from sums to integrals. If \begin{equation} \label{N(p'(u)) le k} N(p'(u)) \le k \end{equation} for every $u \in [a, b]$, then it follows that $p$ is $k$-Lipschitz with respect to the metric associated to $N$ on ${\bf R}^n$. Alternatively, let $\epsilon > 0$ be given. For each $r \in [a, b]$, \begin{equation} p(t) - p(r) - p'(r) \, (t - r) \end{equation} is $\epsilon$-Lipschitz as a function of $t$ on sufficiently small neighborhoods of $r$ in $[a, b]$, since $p$ is continuously-differentiable. Under the hypothesis (\ref{N(p'(u)) le k}), we get that $p$ is $(k + \epsilon)$-Lipschitz with respect to $N$ on sufficiently small neighborhoods of every point in $[a, b]$. One can use this to show that $p$ is $(k + \epsilon)$-Lipschitz on $[a, b]$, and therefore $k$-Lipschitz because $\epsilon > 0$ is arbitrary. Note that (\ref{N(p'(u)) le k}) holds when $p$ is $k$-Lipschitz with respect to $N$ on $[a, b]$. In order for the product of the Lipschitz constant and the length of the parameter interval to be as small as possible, it would be nice to have $N(p')$ constant on $[a, b]$. As in the classical situation, one can try to get this by reparameterizing $p$. This is easy to do when $p'(t) \ne 0$ for every $t \in [a, b]$. Specifically, \begin{equation} \phi(t) = \int_a^t N(p'(u)) \, du \end{equation} is a continuously-differentiable function on $[a, b]$ with \begin{equation} \phi'(t) = N(p'(t)) > 0 \end{equation} for each $t \in [a, b]$. If $q = p \circ \phi^{-1}$, then \begin{equation} N(q'(r)) = 1 \end{equation} when $\phi(a) \le r \le \phi(b)$. \section{Lipschitz conditions, 2} \label{lipschitz conditions, 2} \setcounter{equation}{0} Let $(M_1, d_1(x, y))$ and $(M_2, d_2(u, v))$ be metric spaces. 
A mapping $f : M_1 \to M_2$ is said to be \emph{Lipschitz of order $\alpha > 0$} with constant $C \ge 0$ if \begin{equation} d_2(f(x), f(y)) \le C \, d_1(x, y)^\alpha \end{equation} for every $x, y \in M_1$. As before, this holds with $C = 0$ if and only if $f$ is constant, and Lipschitz mappings of any order are uniformly continuous. If a real-valued function on the real line is Lipschitz of order $\alpha > 1$, then it is constant, because it has derivative $0$, although one could also show this more directly. It follows that a Lipschitz mapping of order $\alpha > 1$ from an interval in the real line into any metric space is constant as well, by composing with real-valued Lipschitz functions of order $1$ on the range, such as the distance to a fixed point. Suppose that $0 < \beta < 1$. If $r, t \ge 0$, then \begin{equation} \max(r, t) \le (r^\beta + t^\beta)^{1/\beta}. \end{equation} Therefore \begin{equation} r + t \le \max(r, t)^{1 - \beta} \, (r^\beta + t^\beta) \le (r^\beta + t^\beta)^{1/\beta}, \end{equation} or equivalently \begin{equation} (r + t)^\beta \le r^\beta + t^\beta. \end{equation} This is also very easy to check algebraically when $\beta = 1/2$, for instance. If $(M, d(w, z))$ is a metric space, then it follows that $d(w, z)^\beta$ is a metric on $M$ too when $0 < \beta < 1$. This does not work when $\beta > 1$, even for the real line. Observe that $f : M_1 \to M_2$ is Lipschitz of order $\alpha$ with respect to $d_1(x, y)$ on $M_1$ if and only if $f$ is Lipschitz of order $\alpha/\beta$ with respect to $d_1(x, y)^\beta$, keeping $d_2(u, v)$ fixed on $M_2$. Similarly, $f$ is Lipschitz of order $\alpha$ with respect to $d_2(u, v)$ on $M_2$ if and only if $f$ is Lipschitz of order $\alpha \, \beta$ with respect to $d_2(u, v)^\beta$ on $M_2$, keeping $d_1(x, y)$ fixed on $M_1$. A curve $p : [a, b] \to M$ in a metric space $(M, d(w, z))$ parameterized by a Lipschitz mapping of order $\alpha < 1$ can be quite different from the case where $\alpha = 1$. 
The length of $p$ can be infinite, and moreover $p([a, b])$ can be fractal. This includes common examples of snowflake curves in the plane. Instead, one can show that the $(1/\alpha)$-dimensional Hausdorff measure of $p([a, b])$ is finite. \end{document}
\begin{document} \title{The resultants of quadratic binomial complete intersections} \date{} \begin{abstract} We compute the resultants for quadratic binomial complete intersections. As an application we show that any quadratic binomial complete intersection can have the set of square-free monomials as a vector space basis if the generators are put in a normal form. \end{abstract} \section{Introduction} Consider the homogeneous polynomial of degree 5 in 5 variables \[F=12vwxyz+ t_1+t_2, \] where \[t_1=-2(p_1v^3yz + p_2vw^3z + p_3vwx^3 + p_4wxy^3 + p_5xyz^3),\] \[t_2=p_1p_3v^3x^2 + p_2p_4w^3y^2 + p_3p_5x^3z^2 + p_1p_4v^2y^3 + p_2p_5w^2z^3.\] The polynomial $F$, as the Macaulay dual generator of an Artinian Gorenstein algebra in Macaulay's inverse system, defines a flat family of Gorenstein algebras with Hilbert function $(1 \ 5 \ 10 \ 10 \ 5 \ 1)$. If $p_1p_2p_3p_4p_5 + 1 \neq 0$, then the ideal which $F$ defines is $5$-generated, and if $p_1p_2p_3p_4p_5 + 1 = 0$, it is $7$-generated. All the members of this family possess the strong Lefschetz property. This is known from the computation of the 1st and the 2nd Hessians of the form $F$. On the other hand there exists a similar, but slightly more complicated, form $G$ (see \S6), involving $5$ parameters $p_1, \ldots, p_5$, which has generic Hilbert function $(1 \ 5 \ 10 \ 10 \ 5 \ 1)$, but if $p_1p_2p_3p_4 p_5+1=0$ then the Hilbert function reduces to $(1 \ 5 \ 5 \ 5 \ 5 \ 1)$. It seems remarkable that the second Hessian of $G$ is divisible by $(p_1p_2p_3p_4 p_5+1)^5$. A striking fact is that it is precisely equal to the resultant of the complete intersection that is the generic member of this family. In this paper we do not pursue the Hessians; instead, we study the resultant for quadratic complete intersections. Generally speaking the resultant of $n$ forms in $n$ variables is a very complicated polynomial in the coefficients of the forms. 
The significance of the resultant is that it is non-vanishing if and only if the ideal generated by the $n$ forms is a complete intersection. Thus if the $n$ forms are monomials, the resultant is easy to describe. Through the investigation of the strong Lefschetz property of complete intersections, we found it necessary to be able to calculate the resultant of complete intersections. The theory of resultants has a long history, and a method to obtain them is described completely in the book \cite{GKZ}. Generic computation is impossible except in a few cases because of the huge number of variables involved and the high degree of the resultant. Hence, if one wishes to deal with the resultant in an arbitrary number of variables, it is reasonable to restrict attention to a special class of homogeneous polynomials. The purpose of this paper is to describe the resultant of quadratic binomials in $n$ variables. We restrict our attention to quadratic forms because, needless to say, the degree-two condition makes the computation simple. In addition, we can expect that a lot of information can be obtained from the knowledge of quadratic complete intersections. Indeed the authors~\cite{hww_1} showed that a certain family of complete intersections is obtained as subrings of quadratic complete intersections. In addition, McDaniel~\cite{cris_mcdaniel} showed that many complete intersections can be embedded in quadratic complete intersections. The manner in which their results were proved suggests that a vast class of complete intersections can be obtained as subrings of quadratic complete intersections sharing the socle and general linear elements. Suppose that $I=(f_1, \ldots, f_n)$ is an ideal generated by $n$ homogeneous polynomials of the same degree $\lambda$ in the polynomial ring $R$. 
Then $I$ is a complete intersection if and only if $I_{n\lambda-n+1}=R_{n\lambda -n+1-\lambda}I_{\lambda}$ is the whole homogeneous space $R_{n\lambda - n + 1}$. With the aid of an idea of Gelfand et al.~\cite{GKZ}, it is possible to pick out certain polynomials from among the elements of $I_{n\lambda-n+1}$ such that their span is the whole space. We apply their method to the particular situation where the generators are quadratic binomials. It turns out that the same method can be used to obtain the conditions which guarantee that, for smaller values of $\lambda$, the elements of $I_{\lambda}$ generate the subspace of $R_{\lambda}$ complementary to the space spanned by the square-free monomials. Thus our computation unexpectedly gives us a proof that the complete intersections defined by quadratic binomials can have the square-free monomials as a vector space basis. We state it as Theorem~\ref{main_theorem}. This follows from Theorem~\ref{from_gelfand's_book}, which is the main result of this paper. This paper is organized as follows. In \S2 we show that any $n$-dimensional quadratic space admits a normal form. This is an elementary observation but it seems that it has never been used systematically before. It reduces the amount of the computation of the resultant considerably, even in the generic case. In \S3 we investigate some properties of the determinants of square matrices of ``binomial type'', which we need in the sequel. In \S4 we introduce matrices whose rows are the coefficient vectors of the polynomials in $I_{\lambda}$. Then we apply the result of \S2 to finding the determinants of these matrices. In \S5 we give some examples of quintics in five variables and in \S6 we briefly discuss the relation between the 2nd Hessian of the Macaulay dual and the resultant of the quadratic binomials. We refer the reader to \cite[Definition~3.75]{HMMNWW} or \cite{maeno_watanabe} for the definition of the 2nd Hessian. 
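As a tiny illustration of this complete intersection criterion (a hypothetical sketch, not taken from the paper): for $n=2$, $\lambda=2$, and quadratic binomials $f_1 = a_1x^2 + b_1xy$, $f_2 = a_2y^2 + b_2xy$, the critical degree is $n\lambda - n + 1 = 3$, and $I_3 = R_3$ exactly when the $4\times 4$ coefficient matrix of $xf_1, yf_1, xf_2, yf_2$ is invertible.

```python
def det(m):
    """Determinant by cofactor expansion along the first row
    (adequate for tiny matrices)."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j]
               * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def critical_matrix(a1, b1, a2, b2):
    """Coefficient matrix of x*f1, y*f1, x*f2, y*f2 in the monomial
    basis (x^3, x^2 y, x y^2, y^3) of R_3, where f1 = a1*x^2 + b1*x*y
    and f2 = a2*y^2 + b2*x*y."""
    return [[a1, b1, 0, 0],   # x*f1
            [0, a1, b1, 0],   # y*f1
            [0, b2, a2, 0],   # x*f2
            [0, 0, b2, a2]]   # y*f2

a1, b1, a2, b2 = 2.0, 3.0, 5.0, 7.0
d = det(critical_matrix(a1, b1, a2, b2))
# The determinant factors as a1*a2*(a1*a2 - b1*b2), so the ideal
# (f1, f2) is a complete intersection iff this quantity is non-zero.
assert abs(d - a1 * a2 * (a1 * a2 - b1 * b2)) < 1e-9
```

The monomial factor $a_1a_2$ and the binomial factor $a_1a_2 - b_1b_2$ already display, in miniature, the shape of the resultants computed later in the paper.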
\section{The normal form of a quadratic vector space} \begin{definition} A binomial is a polynomial of the form $f=\alpha M + \beta N$, where $M,N$ are distinct monomials in certain variables and $\alpha, \beta$ are some elements in a field. \end{definition} \begin{notation} \normalfont We denote by $R=K[x_1, x_2, \ldots, x_n]$ the polynomial ring in $n$ variables over a field $K$. We denote by $R_{\lambda}$ the homogeneous space of degree $\lambda$ of $R$. \end{notation} \begin{proposition} \normalfont \label{quadratic_space_in_normal_form} Let $V \subset R_2$ be an $n$-dimensional vector subspace of $R_2$. Then by a linear transformation of the variables we may choose a basis $f_1, f_2, \ldots, f_n$ for $V$ such that \[f_i=x_i^2 + \fbox{\mbox{\rm linear combination of square-free monomials}}, \quad i=1,2, \ldots, n.\] \end{proposition} \begin{proof} We induct on $n$. If $n=1$, then the assertion is trivial. Assume that $n > 1$. Suppose that $V$ is spanned by \[f_1, f_2, \ldots, f_n.\] Let $M=(m_{ij})$ be the matrix defined by \[m_{ij} = \fbox{ \mbox{the coefficient of $x_j ^2$ in $f_i$} }.\] By a linear transformation of the variables if necessary, we may assume that $f_1, \ldots, f_{n-1}$ remain linearly independent after the reduction $x_n=0$. Then by the induction hypothesis we may assume that $M$ takes the form: \[M= \left( \begin{array}{ccccc} 1&0&0&0&*\\ 0&1&0&0&*\\ & &\ddots& & \\ 0&0&\cdots &1&*\\ *&*&\cdots &*&* \end{array} \right). \] First assume $\det M \neq 0$. Define \[\begin{pmatrix} g_1\\g_2\\ \vdots \\ g_n\end{pmatrix} = M^{-1}\begin{pmatrix} f_1\\f_2\\ \vdots \\ f_n \end{pmatrix}.\] Then $g_1, \ldots, g_n$ form a desired basis for $V$. Next assume that $\det M = 0$. This means that we may make the last row of $M$ a zero vector by subtracting a linear combination of $f_1, \ldots, f_{n-1}$ from $f_n$. 
Consider the linear transformation of the variables \begin{align*} {}& x_i \mapsto x_i+ \xi _i x_n, \mbox{ for } i=1, 2, \ldots, n-1, \\ {}& x_n \mapsto x_n. \end{align*} With this transformation only the last column of $M$ is affected, and a non-zero element appears at the $(n,n)$ entry. (This is because $f_n$ contains some square-free monomial with a non-zero coefficient.) Thus we are in the situation where $\det M\neq 0$. \end{proof} \begin{definition} Suppose that $f_1, \ldots, f_n \in R_2$ are linearly independent. We will say that these elements are in a {\bf normal form} as a basis for a quadratic vector space if \[f_i=x_i^2 + \fbox{\mbox{\rm linear combination of square-free monomials}}, \quad i=1,2, \ldots, n.\] \end{definition} \begin{example} \normalfont Consider $R=K[x,y,z]$, $V=\langle x^2, y^2, xy \rangle$. Then we have the matrix expression: \[\begin{pmatrix} x^2\\y^2\\xy \end{pmatrix}= \begin{pmatrix} 1&0&0&0&0&0 \\ 0&1&0&0&0&0 \\ 0&0&0&1&0&0 \end{pmatrix} \begin{pmatrix} x^2\\y^2\\z^2\\xy\\xz\\yz \end{pmatrix} \] Apply the transformation $\sigma \in {\rm GL}(3)$ given by $x\mapsto x + \xi z$, $y \mapsto y + \eta z$ and $z \mapsto z$. Then we have the isomorphism of vector spaces $V \cong \langle f,g,h \rangle$, where \[\begin{pmatrix} f\\g\\h \end{pmatrix}= \begin{pmatrix} 1&0&\xi ^2&0&2\xi&0 \\ 0&1&\eta^2&0&0&2 \eta \\ 0&0&\xi \eta &1&\eta & \xi \end{pmatrix} \begin{pmatrix} x^2\\y^2\\z^2\\xy\\xz\\yz \end{pmatrix}. \] Furthermore we have the equality of vector spaces: $\langle f,g ,h \rangle = \langle f',g' ,h' \rangle$, where \[\begin{pmatrix} f'\\g'\\h' \end{pmatrix}= \begin{pmatrix} 1&0&\frac{-\xi}{\eta} \\ 0&1&\frac{-\eta}{\xi} \\ 0&0&\frac{1}{\xi\eta}\end{pmatrix} \begin{pmatrix} f\\g\\h \end{pmatrix}. 
\] So the vector space $V$ is transformed into $\langle f',g',h' \rangle$, where \[\begin{pmatrix} f'\\g'\\h' \end{pmatrix}= \begin{pmatrix} 1&0&0&\frac{-\xi}{\eta}&\xi&\frac{-\xi ^2}{\eta} \\ 0&1&0&\frac{-\eta}{\xi}&\frac{-\eta ^2}{\xi}& \eta \\ 0&0&1&\frac{1}{\xi\eta } &\frac{1}{\xi}&\frac{1}{\eta} \end{pmatrix} \begin{pmatrix} x^2\\y^2\\z^2\\xy\\xz\\yz \end{pmatrix}. \] It follows that we have the isomorphism of algebras $K[x,y,z]/(V)\cong K[x,y,z]/(f',g',h')$, where the elements $f', g', h'$ are in a normal form. \end{example} For later use we prove some propositions in linear algebra. \begin{proposition} \label{binomial_matrix} \normalfont Let $A:=\{a_1, a_2, \ldots, a_N\}$ and $B:=\{b_1, b_2, \ldots, b_N\}$ be two independent sets of variables. Suppose that $P$ is an $N \times N$ square matrix satisfying the following conditions: \begin{enumerate} \item The $i$th row of $P$ contains $a_i$ and $b_i$ as entries and all the other entries are zero. \item Any column of $P$ contains an element in $A$ and an element in $B$. \end{enumerate} Then the following conditions are equivalent. \begin{enumerate} \item[(1)] $\det P$ is an irreducible polynomial as an element in the polynomial ring \[R:=K[a_1, \ldots, a_N, b_1, \ldots, b_N].\] \item[(2)] The matrix $P$ is irreducible, i.e., it does not split into blocks like $\begin{pmatrix}P_1&O\\O&P_2 \end{pmatrix}$ under permutation of rows and columns. \item[(3)] $\det P=\pm(a_1a_2\cdots a_N +(-1)^{N+1} b_1b_2\cdots b_N)$. \end{enumerate} \end{proposition} \begin{proof} Assume that $P$ splits into blocks. Then obviously $\det P$ is reducible. This proves that (1) implies (2). Assume that $P$ is irreducible. Permuting the columns does not affect the conditions on $P$. So we may assume that the diagonal entries of $P$ are $(a_1, a_2, \ldots, a_N)$. 
Furthermore we may conjugate $P$ by a permutation matrix so that $a_1, \ldots, a_N$ are the diagonal entries and the elements of $B$ are distributed in the super-diagonal entries and at the $(N,1)$-position. This enables us to compute the determinant of $P$ as in (3). Thus we have proved that (2) implies (3). It remains to show that (3) implies (1). In other words we have to show that $d:=a_1 \cdots a_N \pm b_1 \cdots b_N$ is an irreducible polynomial in $R$. It is easy to see that we have the isomorphism \[K[a_1, \ldots,a_N, b_1, \ldots, b_N]/(d, b_1-1, b_2-1, \ldots, b_N -1)\cong K[a_1, \ldots, a_N]/(a_1\cdots a_N \pm 1),\] and that this algebra is an integral domain of Krull dimension $N-1$. This shows that \[d, b_1-1, \ldots, b_N-1 \] is a regular sequence in $R$ and furthermore that $R/(d)$ is an integral domain. \end{proof} In the next proposition we slightly weaken the conditions on $P$ and prove similar properties. \begin{proposition} \label{binomial_matrix_2} \normalfont Let $A:=\{a_1, a_2, \ldots, a_N\}$ and $B:=\{b_1, b_2, \ldots, b_N\}$ be independent sets of variables. Suppose that $P$ is an $N \times N$ square matrix which satisfies the following conditions. \begin{enumerate} \item The $i$th row of $P$ contains $a_i$ and $b_i$ as entries and all the other entries are zero. \item No column of $P$ contains two elements in $A$. \end{enumerate} Then we have: \begin{enumerate} \item[(1)] The variable $a_i$ divides $\det P$ if $a_i$ is the only non-zero element in the column that contains $a_i$. \item[(2)] $\det P$ is independent of $b_i$ if $a_i$ is the only non-zero element in the column that contains $a_i$. \item[(3)] $\det P$ factors into a product of a monomial in $a_1, \ldots, a_N$ and binomials of the form $a_{i_1}a_{i_2}\cdots a_{i_r} \pm b_{i_1}b_{i_2}\cdots b_{i_r}$. 
\item[(4)] If a binomial $a_{i_1}a_{i_2}\cdots a_{i_r} \pm b_{i_1}b_{i_2}\cdots b_{i_r}$ is a factor of $\det P$, then we can arrange the order of the variables so that $a_{i_j}$ and $b_{i_j}$ are in the same row and $a_{i_{j}}$ and $b_{i_{j-1}}$ are in the same column. (We let $b_{i_0}=b_{i_r}$.) \end{enumerate} \end{proposition} For the proof we use a digraph associated to the matrix $P$, as defined in the next definition. \begin{definition} \label{digraph} In the notation of Proposition~\ref{binomial_matrix_2}, we put $X=A \sqcup B$ and call it the set of vertices. We define the set $E$ of arcs to be the union of the two sets \[\{a_i \to b_j \mid \mbox{$a_i$ and $b_j$ are in the same row} \} \] and \[\{b_j \to a_i \mid \mbox{ $b_j$ and $a_i$ are in the same column} \}.\] We call $(X,E)$ the {\bf digraph associated to $P$}. (Since we have assumed that the $i$th row contains $a_i$ and $b_i$, there is an arc $a_i \to b_j$ if and only if $i=j$.) \end{definition} Suppose $ \{a_{j_1}, a_{j_2} , \ldots, a_{j_r} \} \subset A$ and $\{ b_{j_1}, b_{j_2} , \ldots , b_{j_r} \} \subset B$ are subsets both consisting of $r$ elements. We call them a {\bf circuit} of $P$ if $a_{j_i}$ and $b_{j_i}$ are contained in one row and $b_{j_i}$ and $a_{j_{i+1}}$ are in one column for all $i=1,2,\ldots, r$. (We let $a_{j_{r+1}}=a_{j_{1}}$.) A circuit will be represented as \[a_{j_1} \to b_{j_1} \to a_{j_2} \to b_{j_2} \to \cdots \to a_{j_r} \to b_{j_r} \to a_{j_1}.\] If we drop the last term from a circuit, we call it a {\bf chain} of $P$. A chain is {\bf maximal} if it cannot be embedded into a circuit or a longer chain. \begin{proof}[Proof of Proposition~\ref{binomial_matrix_2}] \begin{enumerate} \item[(1)] Suppose that $a_i$ is the only non-zero entry of a column. Then obviously $a_i$ divides $\det P$. \item[(2)] Recall that $a_i$ and $b_i$ are in the $i$th row of $P$. If $a_i$ is the single non-zero element in a column, then we may perform a column operation to clear $b_i$. 
Hence $\det P$ does not involve $b_i$. \item[(3)] Let $(X,E)$ be the digraph introduced in Definition~\ref{digraph}. It is easy to see that $(X,E)$ decomposes into a disjoint union of circuits and maximal chains. The first element of a maximal chain is an element of $A$ which is the single non-zero entry of its column. Indeed, every element of $B$ can be preceded by an element of $A$, while an element of $A$ cannot be preceded by an element of $B$ if and only if it is the single non-zero entry of its column. If an element $a_i$ is the single non-zero entry of its column, we have $\det P=a_i\det P'$, where $P'$ is the matrix obtained from $P$ by deleting the row and the column that contain $a_i$. Thus it is enough to treat the case where every row and every column of $P$ contains exactly two non-zero elements. In this case $(X,E)$ decomposes as a disjoint union of circuits. A circuit in $(X,E)$ such as \[a^{(1)}\to b^{(1)} \to a^{(2)}\to b^{(2)} \to \cdots \to a^{(k-1)}\to b^{(k-1)} \to a^{(k)}\to b^{(k)} \to a^{(1)}\] gives us a binomial as a divisor of $\det P$. This completes the proof of (3). \item[(4)] This follows immediately from (3). \end{enumerate} \end{proof} \begin{example} \normalfont \[\det \begin{pmatrix}a_1&0&0&b_1 \\ 0&a_2&0&b_2 \\ 0&0&a_3&b_3 \\ 0&0&b_4&a_4 \end{pmatrix} = a_1a_2(a_3a_4 - b_3b_4).\] \end{example} \section{The quadratic binomial complete intersections} \label{new_definition} \label{def_of_Ms} In this section we denote by $E=K\langle x_1, x_2, \ldots, x_n \rangle$ the graded vector space spanned by the square-free monomials in the variables $x_1, x_2, \ldots, x_n$ over a field $K$. In analogy with $R_{\lambda}$, we denote by $E_{\lambda}=K\langle x_1, x_2, \ldots, x_n \rangle _{\lambda}$ the homogeneous space spanned by the square-free monomials of degree $\lambda$ over $K$. 
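The two determinant patterns above, the irreducible binomial type of the proposition and the reducible example, can be checked numerically. The following Python sketch is illustrative only and not part of the original text.

```python
import random

def det(m):
    """Determinant by cofactor expansion (adequate for 4x4 matrices)."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j]
               * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

random.seed(0)
a = [random.uniform(1.0, 2.0) for _ in range(4)]
b = [random.uniform(1.0, 2.0) for _ in range(4)]

# Irreducible binomial-type matrix (N = 4): a_i on the diagonal,
# b_i on the super-diagonal, with b_4 wrapping around to position (4, 1).
P = [[a[0], b[0], 0, 0],
     [0, a[1], b[1], 0],
     [0, 0, a[2], b[2]],
     [b[3], 0, 0, a[3]]]
lhs = det(P)
rhs = a[0] * a[1] * a[2] * a[3] - b[0] * b[1] * b[2] * b[3]
assert abs(lhs - rhs) < 1e-9   # det P = a1 a2 a3 a4 + (-1)^{N+1} b1 b2 b3 b4

# The reducible example from the text:
Q = [[a[0], 0, 0, b[0]],
     [0, a[1], 0, b[1]],
     [0, 0, a[2], b[2]],
     [0, 0, b[3], a[3]]]
assert abs(det(Q) - a[0] * a[1] * (a[2] * a[3] - b[2] * b[3])) < 1e-9
```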
\begin{definition}\normalfont \label{notation_of_set_of_monomials} For all positive integers $\lambda=1,2,3, \ldots , $ we define the sets of monomials $M_1(\lambda)$, $M_2(\lambda)$, $\ldots$, $M_n(\lambda)$ of degree $\lambda$ as follows: \begin{align*} {} &M_1(\lambda) =\{ x_1^{\lambda _1}x_2^{\lambda _2}x_3^{\lambda _3}\cdots x_n ^{\lambda _n} \mid \sum _{j=1}^n \lambda _j = \lambda \}, \\ & M_2(\lambda) =\{ x_1^{\lambda _1}x_2^{\lambda _2}x_3^{\lambda _3}\cdots x_n ^{\lambda _n} \mid \sum _{j=1}^n \lambda _j= \lambda, \ \lambda _1 < 2 \}, \\ &M_3(\lambda) =\{ x_1^{\lambda _1}x_2^{\lambda _2}x_3^{\lambda _3}\cdots x_n ^{\lambda _n} \mid \sum _{j=1}^n \lambda _j= \lambda, \ \lambda _1 <2 , \ \lambda _2 < 2 \}, \\ & \hspace{10ex} \vdots \\ &M_n(\lambda) =\{ x_1^{\lambda _1}x_2^{\lambda _2}x_3^{\lambda _3} \cdots x_n^{\lambda _n} \mid \sum _{j=1}^n \lambda _j= \lambda, \ \lambda _1 <2 , \ \lambda _2 < 2, \ldots, \lambda _{n-1} < 2 \}. \end{align*} \end{definition} \begin{proposition} \label{basic_property} \normalfont \begin{enumerate} \item[(1)] The span of $M_1(\lambda)$ is $K[x_1,x_2,\ldots, x_n]_{\lambda}$. \item[(2)] $M_1(\lambda) \supset M_2(\lambda) \supset \cdots \supset M_{n-1}(\lambda) \supset M_{n}(\lambda)$. \item[(3)] For all $\lambda \geq 1$, the set $\bigsqcup _{j=1}^n M_j(\lambda)\otimes e_j$ is in one-to-one correspondence with $M_1(\lambda +2) \setminus K\langle x_1,x_2,\ldots, x_n \rangle _{\lambda +2}$. ($\{e_1, \ldots, e_n\}$ is a set of indeterminates used to separate the monomials.) \item[(4)] For all $\lambda \geq n-1$, the set $\bigsqcup _{j=1}^n M_j(\lambda)\otimes e_j$ is in one-to-one correspondence with the set of all the monomials in $R_{\lambda +2}$. \end{enumerate} \end{proposition} \begin{proof} \begin{enumerate} \item[(1)] and (2) are obvious. 
\item[(3)] Consider the correspondence $\bigsqcup _{j=1}^n M_j(\lambda)\otimes e_j \to M_{1}(\lambda+2)$ defined by $M_j(\lambda)\otimes e_j \ni m \otimes e_j \mapsto m x_j^2$. The inverse map is constructed as follows. Let $m:=x_1 ^{\lambda _1}\cdots x_n^{\lambda _n} \in K[x_1, \ldots, x_n]_{\lambda +2} \setminus K\langle x_1, \ldots, x_n \rangle _{\lambda +2} $. Then some exponent $\lambda _j$ is greater than $1$. Let $j$ be the smallest index such that $\lambda _j \geq 2$. Then we let $m \mapsto (m/x_j^2) \otimes e_j \in M_j(\lambda) \otimes e_j$. \item[(4)] Since $\lambda \geq n-1$, we have $\lambda +2 \geq n+1$. In such degrees the homogeneous part of $K\langle x_1, \ldots, x_n \rangle$ vanishes. So (1) and (3) imply (4). \end{enumerate} \end{proof} \begin{remark} \label{important_remark} \begin{enumerate} \item[(1)] We put $M_1(0)= \cdots = M_n(0)=\{1\}$. \item[(2)] If $\lambda =1$, then $M_1(\lambda)= \cdots = M_n(\lambda)=\{x_1, x_2, \ldots, x_n\}$. \item[(3)] \label{important_remark_item_2} For all $\lambda \geq 1$, $M_{j}(\lambda)x_n \subset M_j(\lambda +1)$ for all $j=1,2, \ldots, n$. (This will play a crucial role in the proof of Proposition~\ref{crucial_prop}.) \end{enumerate} \end{remark} \begin{theorem} \label{main_theorem} \normalfont Let $R=K[x_1, x_2, \ldots, x_n]$ be the polynomial ring over a field $K$ and $A=R/I$ a quadratic binomial complete intersection where the generators of $I$ are put in a normal form. Then the set of square-free monomials is a basis of $A$. \end{theorem} \begin{proof} We fix the generators for $I$ as follows. \[f_1=a_1x_1^2+ b_1m_1,\] \[f_2=a_2x_2^2+ b_2m_2,\] \[ \vdots \] \[f_n=a_n x_n ^2+ b_{n} m_n.\] ($m_j$ is a square-free monomial of degree two.) First we assume that $a_1, \ldots, a_n, b_1, \ldots, b_n$ are indeterminates. This means we work over $K=\pi(a_1, \ldots, a_n, b_1, \ldots, b_n )$, where $\pi$ is a prime field. 
Since the growth of the dimension of $I_{\lambda}$ is the same as that of the monomial ideal $(x_1^2, x_2^2, \ldots, x_n^2)$, we have \[\mu(\mathfrak{m} ^{\lambda -2} I) = \dim _K I_{\lambda}= \dim _K R_{\lambda} - {n \choose \lambda}.\] ($\mu$ denotes the number of generators of an ideal.) Put $N:=\mu(\mathfrak{m} ^{\lambda - 2} I)$. We want to specify $N$ polynomials in $\mathfrak{m} ^{\lambda -2}I$ suitable for our purpose. For a minimal set of generators for $\mathfrak{m} ^{\lambda -2} I$ we can choose the set of polynomials \[S:=M_1(\lambda -2)f_1 \cup M_2(\lambda -2)f_2 \cup \cdots \cup M_{n-1}(\lambda -2)f_{n-1} \cup M_n(\lambda -2)f_n .\] Note that these unions are in fact disjoint unions and furthermore that these elements are linearly independent. To see this, set $b_1=b_2= \cdots = b_n=0$. Then one sees easily that $S$ contains all the monomials in $(x_1^2, \ldots, x_n^2) \cap R_{\lambda}$. These are $\dim _K R_{\lambda} - {n \choose \lambda}$ in number. On the other hand, the number of elements $|S|$ can be as large as this number only if the union is disjoint. If we drop the condition $b_1 = \cdots = b_n=0$, linear independence follows a fortiori, since specialization can only decrease the rank. We rewrite the set $S$ as $S=\{g_1, g_2, \ldots, g_N\}$. Index the monomials in $R_{\lambda}$ in such a way that the last ${n \choose \lambda}$ are the square-free monomials. Recall that $\dim _K (I_{\lambda}) + {n \choose \lambda} = \dim_K R _{\lambda}$. Let $C'=(c_{ij})$ be the matrix consisting of the coefficients of the polynomials in $S$. So $C'$ satisfies the following equality: \[ \begin{pmatrix} g_1 \\ g_2 \\ \vdots \\ g_N \end{pmatrix} =C'\begin{pmatrix} w_1 \\ w_2 \\ \vdots \\ w_N \\ w_{N+1} \\ \vdots \\ w_{N'} \end{pmatrix}, \] where $w_1, w_2, \ldots , w_{N'}$ are all the monomials in $R_{\lambda}$ and $N'=N+{n \choose \lambda}$. Let $C$ be the submatrix of $C'$ consisting of rows $1, 2, \ldots, N$ and columns $1, 2, \ldots, N$. 
By Theorem~\ref{from_gelfand's_book}, which we prove in the next section, we have $\det C \neq 0$. Define the polynomials $g_1', g_2', \ldots, g_N'$ as follows: \[\begin{pmatrix}g_1'\\ g_2'\\ \vdots \\ g_N ' \end{pmatrix}=C^{-1} \begin{pmatrix}g_1 \\ g_2 \\ \vdots \\ g_N \end{pmatrix}=C^{-1} C'\begin{pmatrix} w_1\\ w_2 \\ \vdots \\ w_{N'} \end{pmatrix}.\] This matrix notation shows that \[g_k'-w_k= \fbox{ \mbox{ linear combination of square-free monomials} } \] for all $k=1,2, \ldots, N$. Since the elements $g_1', \ldots, g_N'$ are a $K$-basis for $I_{\lambda}$, it follows that any element of $R_{\lambda}$ can be expressed, modulo $I_{\lambda}$, as a linear combination of square-free monomials of degree $\lambda$. Now the proof is complete for the generic case. Next we assume that $R$ is the polynomial ring over an arbitrary field $K$. Suppose that $I$ is an ideal of $R$ obtained by substituting elements of $K$ for the variables $a_i, b_i$. The ideal $I$ is a complete intersection if and only if the resultant is non-zero. Note that we have established a rewriting rule which expresses any monomial $m \in R_{\lambda}$, modulo $I$, as a linear combination of square-free monomials in $R_{\lambda}$, for every $\lambda = 2, \ldots, n+1$. For each $\lambda \geq 2$ we have used the matrices $C'$ and $C$, so we index them as $C'(\lambda)$ and $C(\lambda)$, $\lambda = 2,3, \ldots, n+1$. Define the matrix $C'(2)$ as the coefficient matrix for $f_1, \ldots, f_n$ and $C(2)$ as the first $n \times n$ submatrix of $C'(2)$, which is automatically the diagonal matrix with diagonal entries $(a_1, a_2, \ldots, a_n)$. By Theorem~\ref{from_gelfand's_book}, if the resultant is non-zero, then all of $C(3), C(4), \ldots, C(n+1)$ are invertible. Hence the proof is complete. 
\end{proof} \begin{remark} \begin{enumerate} \item It seems conceivable that any quadratic complete intersection defined by quadrics put in a normal form can have the set of square-free monomials as a vector space basis. Theorem~\ref{main_theorem} should be regarded as a case where this can be verified. \item If each of the quadrics $f_i$ is a product of linear forms, the elements $f_i$ are in a normal form if we adopt the variables $x_1, \ldots, x_n$ as linear factors of $f_1, \ldots, f_n$ respectively. Abedelfatah~\cite{abed_abedelfatah} proved that the Artinian algebra defined by such forms can have the square-free monomials as a vector space basis. \end{enumerate} \end{remark} \begin{remark} \label{additional_remark} In Definition~\ref{notation_of_set_of_monomials} we introduced the sets of monomials $M_1(\lambda),M_2(\lambda),\ldots, M_n(\lambda)$ for all $\lambda \geq 1$ and used them to define the set $S$ in the proof of Theorem~\ref{main_theorem}. Suppose that \[\sigma=\begin{pmatrix} 1&2&\cdots &n \\ 1'&2'&\cdots & n' \end{pmatrix}\] is a permutation of the indices. If we use the order \[1' < 2' < \cdots < n', \] for the definition of $M_i(\lambda)$, instead of the natural order \[1 < 2 < \cdots < n, \] we have the different sets of monomials \[M_1'(\lambda),M_2'(\lambda) , \ldots ,M_n'(\lambda).\] The flag of subspaces \[M_1'(\lambda) \supset M_2'(\lambda) \supset \cdots \supset M_n'(\lambda)\] is different from \[M_1(\lambda) \supset M_2(\lambda) \supset \cdots \supset M_n(\lambda).\] In this case we should adopt as the set $S$ \[S'=M_1'(\lambda - 2)f_{1'} \cup M_2'(\lambda - 2)f_{2'} \cup \cdots \cup M_n'(\lambda-2)f_{n'}.\] The consideration of the set $S'$ is important in the definition of the resultant of $f_1, \ldots, f_n$ for the binomial complete intersection. See the proof of Theorem~\ref{from_gelfand's_book}(4). $M_k'(\lambda)$ should not be confused with $M_{k'}(\lambda)$. 
It is important that $M_k'(\lambda-2)f_{k'}$ contains the polynomial $x_{k'}^{\lambda -2}f_{k'}$ for all $k$. \end{remark} \section{Some remarks on the coefficient matrices of generic complete intersections} We work with the {\em generic} complete intersection generated by $f_i=a_ix_i^2 + b_i m_i$, where $m_i$ is a square-free monomial of degree 2. Recall that we have defined the matrices \[C'(\lambda) \mbox{ and } C(\lambda)\] for \[\lambda =2,3, \ldots, n+1.\] To define them we used the subsets $M_i(\lambda) \subset R_{\lambda}$ of monomials for $i=1,2, \ldots, n$ and all $\lambda = 2,3, \ldots, n+1 $. Actually it is possible to define these sets for all $\lambda > n+1$, although we do not need them. From now on the matrices $C(\lambda)$ are defined for all $\lambda \geq 2.$ \begin{lemma}\label{key_lemma} \label{lemma_on_C_and_C'} \normalfont Let $A=\{a_1, \ldots, a_n\}$, $B=\{b_1, \ldots, b_n\}$. The matrices $C(\lambda)$ and $C'(\lambda)$ have the following properties. \begin{enumerate} \item[(1)] $C(\lambda)$ is a square matrix of size $\dim _K R_{\lambda} - {n \choose \lambda}$. \item[(2)] Each row of $C'(\lambda)$ contains exactly one element of the set $A$ and one element of $B$. \item[(3)] Each row of $C(\lambda)$ contains exactly one element of the set $A$ and at most one element of $B$. \item[(4)] Each column of $C(\lambda)$ contains exactly one element of the set $A$. (It may contain none, or several, of the elements of $B$.) \item[(5)] For any $i$ and $\lambda \geq 2$, $a_i$ divides $\det C(\lambda)$. \end{enumerate} \end{lemma} \begin{proof} (1) was proved earlier. Recall that a row of the matrix $C'(\lambda)$ is defined as the coefficient vector of $w_kf_i$ for some $i$ and some monomial $w_k$ of degree $\lambda -2$. This proves (2). Recall that the last ${n \choose \lambda}$ columns of $C'(\lambda)$ are indexed by square-free monomials. 
On the other hand $a_i$ can appear only in columns indexed by monomials divisible by $x_i^2$, and such monomials are never square-free; hence the entry $a_i$ of each row survives in $C(\lambda)$, while the entry $b_i$ may be lost. This proves (3). Suppose that $a_i$ appears in a column indexed by a monomial $w$ with $w=x_1^{\lambda _1}x_2^{\lambda _2} \cdots x_n^{\lambda _n}$. This implies that \[\lambda _1 < 2, \ \lambda _2 < 2, \ \ldots, \ \lambda _{i-1} < 2 \leq \lambda _i. \] Hence $a_k$ cannot appear in this column if $k\neq i$. This proves (4). The column of $C(\lambda)$ indexed by the monomial $x_i^{\lambda}$ contains $a_i$ and all other entries are $0$. This proves (5). \end{proof} We define a circuit in $C(\lambda)$ in the same way as for the matrix $P$ considered in Proposition~\ref{binomial_matrix_2}. Namely, a {\bf circuit} in $C(\lambda)$ is a sequence of elements in $A \sqcup B$ \[a^{(1)} \to b^{(1)} \to a^{(2)} \to b^{(2)} \to \cdots \to a^{(r)} \to b^{(r)} \to a^{(1)} \] such that $a^{(k)}$ and $b^{(k)}$ are in the same row and $b^{(k)}$ and $a^{(k+1)}$ are in the same column of $C(\lambda)$. (The matrix $C(\lambda)$ differs from $P$ in that the same element may occur in several rows. Thus repetitions may exist in a circuit.) It is easy to see that if there is a circuit in $C(\lambda)$, it gives us a binomial \[a_{i_1}^{\alpha _{i_1}}a_{i_2}^{\alpha _{i_2}} \cdots a_{i_r}^{\alpha _{i_r}} \pm b_{i_1}^{\alpha _{i_1}}b_{i_2}^{\alpha _{i_2}} \cdots b _{i_r}^{\alpha _{i_r}}\] as a factor of the determinant of $C(\lambda)$. We will say that {\bf two circuits are the same} if they give the same determinant. Note that a submatrix of $C(\lambda)$ whose determinant is a binomial is another name for a circuit. In this sense $C(\lambda)$ and $C(\lambda')$, $\lambda \neq \lambda '$, can have the same circuit. \begin{proposition} \label{crucial_prop} \normalfont Suppose that a circuit exists in $C(\lambda)$. Then $C(\lambda+1)$ contains the same circuit. 
\end{proposition} \begin{proof} Recall that the columns of $C(\lambda)$ are indexed by certain monomials. Since the elements $g_1, g_2, \ldots, g_N$ were defined to be the set $S$, we may index the rows of $C(\lambda)$ by the elements of $\bigsqcup_{j=1}^n M_j(\lambda - 2)\otimes e_j$. If we multiply the elements (as indices) by $x_n$, they remain indices for $C(\lambda +1)$, since multiplication by $x_n$ does not change the exponents of the monomials except the exponent of $x_n$ itself. (See Remark~\ref{important_remark}(3).) Suppose that $\{U_1, U_2, \ldots, U_r\}$ are indices of rows and $\{W_1, W_2, \ldots, W_r\}$ are indices of columns which give us a circuit of $C(\lambda)$. Then the submatrix of $C(\lambda +1)$ consisting of the rows and columns indexed by $\{U_1x_n, U_2x_n, \ldots, U_rx_n\}$ and $\{W_1x_n, W_2x_n, \ldots, W_rx_n\}$ gives us the same circuit in $C(\lambda +1)$. This proves the assertion. \end{proof} We denote by ${\rm Res}(f_1, \ldots, f_n)$ the resultant of $f_1, \ldots, f_n$. The resultant is a polynomial in the coefficient variables $a_1, \ldots, a_n, b_1, \ldots, b_n$; we continue to call it the resultant even after elements of $K$ are substituted for the coefficients. The ideal $I$ obtained by substitution is a complete intersection if and only if the resultant does not vanish. For details see Gelfand et al.\ \cite{GKZ}. Define $\Delta _{\lambda}: =\det C(\lambda)$. Now we can prove the main theorem of this paper. \begin{theorem} \label{from_gelfand's_book} \normalfont \begin{enumerate} \item[(0)] $a_1a_2\cdots a_n \mid \Delta _{\lambda}$ for any $\lambda \geq 2$. \item[(1)] $a_1a_2\cdots a_n \mid {\rm Res}(f_1, \ldots, f_n)$. \item[(2)] $ {\rm Res}(f_1, \ldots, f_n) \mid \Delta_{n+1}$. \item[(3)] $\sqrt{\Delta_2} \mid \sqrt{\Delta_3} \mid \cdots \mid \sqrt{\Delta_{n+1}}$. ($\sqrt{\Delta}$ means that the exponents are replaced by $1$ in the factorization of $\Delta$.) 
\item[(4)] A circuit that appears in some $\Delta_{\lambda}$ is a factor of ${\rm Res}(f_1, \ldots, f_n)$. \item[(5)] $\sqrt{{\rm Res}(f_1, f_2, \ldots, f_n)}= \sqrt{\Delta_{n+1}}$. \end{enumerate} \end{theorem} \begin{proof} \begin{enumerate} \item[(0)] This is proved in Lemma~\ref{lemma_on_C_and_C'}(5). \item[(1)] It is easy to see that if we set $a_i = 0$ for some $i$, the ideal $(f_1, \ldots, f_n)$ cannot be a complete intersection. Therefore $a_1\cdots a_n$ divides ${\rm Res}(f_1, \ldots, f_n)$. \item[(2)] See \cite[p.~429]{GKZ}. \item[(3)] It is easy to see that $\Delta_2 =a_1a_2\cdots a_n$. For $\lambda > 2$, it is also easy to see that $a_i$ divides $\Delta_{\lambda}$. Suppose that a binomial is a factor of $\Delta_{\lambda}$. Then it is a factor of $\Delta_{\lambda+1}$ by Proposition~\ref{crucial_prop}. \item[(4)] In \S\ref{def_of_Ms} we constructed the sets $\{M_j(\lambda)\}$. They depend on the order of the indices $1 < 2 < \cdots < n$. Suppose that \[\sigma:=\begin{pmatrix} 1 & 2 & \cdots & n \\ 1' & 2' & \cdots & n' \end{pmatrix} \] is a permutation of the indices. Then with the order $1' < 2' < \cdots < n'$ we can obtain another sequence of determinants: \[\Delta_2^{\sigma}, \Delta_3^{\sigma}, \ldots, \Delta_{n+1}^{\sigma}.\] It is known that the GCD of $\{\Delta_{n+1}^{\sigma}\}$, where $\sigma$ runs over all cycles of length $n$, \[\sigma=\begin{pmatrix} 1 & 2 & \cdots & n-1 & n \\ k & k+1 & \cdots& k-2 & k-1 \end{pmatrix}, \] gives us the resultant ${\rm Res}(f_1, \ldots, f_n)$. (See \cite[p.~429]{GKZ}.) Thus to prove the claim it suffices to show that if a circuit appears in one of the $\Delta_{\lambda}$ with $\lambda \leq n$, then that circuit also appears in $\Delta_{n+1}^{\sigma}$, whatever the permutation $\sigma$ is.
This is easy to see: since $C'(n+1)=C(n+1)$ up to a permutation of rows and columns, every circuit in $C'(\lambda)$ for $\lambda \leq n$ is contained in $C(n+1)$. \item[(5)] Put $r={\rm Res}(f_1, \ldots, f_n)$. By (2) we have $\sqrt{r} \mid \sqrt{\Delta_{n+1}}$. A factor of $\Delta_{n+1}$ is either a factor of $a_1\cdots a_n$ or a circuit in $C(n+1)$. We have seen that each $a_i$ divides $r$. On the other hand, if a circuit appears in $C(n+1)$, it divides $r$ by (4). This completes the proof. \end{enumerate} \end{proof} \section{Some examples} In this section we set $K=\pi(a_1, \ldots, a_5, p_1,\ldots, p_5 )$, the rational function field over the prime field $\pi$, and $R=K[x_1, \ldots, x_5]$. By $\sigma=\begin{pmatrix} 1&2&3&4&5 \\ 2&3&4&5&1\end{pmatrix}$, we denote the cyclic permutation of the indices. If $f \in R$, we denote by $f^{\sigma}$ the polynomial obtained from $f$ by substituting each index $i$ by $\sigma(i)$. In Example~\ref{ex_1}, the polynomials $f_i$ are determined by $f_1$ via the rule $f_{i}=f_{i-1}^{\sigma}$ for $i=2,3,4,5$. \begin{example} \label{ex_1} \normalfont \begin{enumerate} \mbox{} \item If $f_1=a_1x_1^2+ p_1x_1x_2$, we have ${\rm Res}(f_1, \ldots, f_5) = (a_1a_2a_3a_4a_5)^{15}(a_1a_2a_3a_4a_5 + p_1p_2p_3p_4p_5)$. \item If $f_1=a_1x_1^2+ p_1x_2x_3$, then ${\rm Res}(f_1, \ldots, f_5) = (a_1a_2a_3a_4a_5)^{5}(a_1a_2a_3a_4a_5 + p_1p_2p_3p_4p_5)^{11}$. \item If $f_1=a_1x_1^2+ p_1x_2x_5$, then ${\rm Res}(f_1, \ldots, f_5) = (a_1a_2a_3a_4a_5)^{11}(a_1a_2a_3a_4a_5 + p_1p_2p_3p_4p_5)^5$. \end{enumerate} \end{example} There are $10$ square-free monomials of degree $2$ in $R$. In all cases the resultant is one of the above three types. It is known that the resultant is a polynomial of degree $80$. If all $f_j$ factor into two linear forms, the resultant can be computed by \cite[Chapter~13, Proposition~1.3]{GKZ}. This was also computed by Abedelfatah~\cite{abed_abedelfatah} without referring to the resultant.
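As a quick sanity check on the divisibility statement (1) and the shape of the factorizations above, the smallest case $n=2$ can be verified in a computer algebra system. The following SymPy sketch (an illustration added here, not part of the computations reported in this paper) dehomogenizes at $x_2=1$; since $f_2(x_1,1)$ drops to degree $1$, the homogeneous resultant equals $a_1$ times the Sylvester resultant of the dehomogenized polynomials.

```python
# Smallest case n = 2: f1 = a1*x1^2 + p1*x1*x2, f2 = a2*x2^2 + p2*x1*x2.
# Hypothetical sanity check with SymPy; illustration only.
import sympy as sp

a1, a2, p1, p2, x1 = sp.symbols('a1 a2 p1 p2 x1')
f1 = a1*x1**2 + p1*x1   # f1(x1, 1)
f2 = p2*x1 + a2         # f2(x1, 1); degree drops from 2 to 1

# Correct for the degree drop by the leading coefficient a1 of f1:
res = sp.factor(a1 * sp.resultant(f1, f2, x1))
# res has the shape (a1*a2)^alpha * (a1*a2 - p1*p2)^beta with alpha = beta = 1,
# so in particular a1*a2 divides it, as in part (1) of the theorem.
```

The sign of the second factor differs from the $n=5$ examples; only the square-free factorization pattern is being checked here.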
In the following table $\alpha, \beta$ are the integers such that \[{\rm Res}(f_1, \ldots, f_5)=(a_1a_2a_3a_4a_5)^{\alpha}(a_1a_2a_3a_4a_5+ p_1p_2p_3p_4p_5)^{\beta}. \] \vspace{1ex} \[\begin{array}{|c|c|c|c|} \hline \mbox{monomial in $f_1$} & f_1 & \alpha & \beta \\ \hline x_1x_2 & a_1x_1^2+ p_1 x_1x_2 & 15 & 1 \\ \hline x_1x_3 & a_1x_1^2+ p_1 x_1x_3 & 15 & 1 \\ \hline x_1x_4 & a_1x_1^2+ p_1 x_1x_4 & 15 & 1 \\ \hline x_1x_5 & a_1x_1^2+ p_1 x_1x_5 & 15 & 1 \\ \hline x_2x_3 & a_1x_1^2+ p_1 x_2x_3 & 5 & 11 \\ \hline x_2x_4 & a_1x_1^2+ p_1 x_2x_4 & 5 & 11 \\ \hline x_2x_5 & a_1x_1^2+ p_1 x_2x_5 & 11 & 5 \\ \hline x_3x_4 & a_1x_1^2+ p_1 x_3x_4 & 11 & 5 \\ \hline x_3x_5 & a_1x_1^2+ p_1 x_3x_5 & 5 & 11 \\ \hline x_4x_5 & a_1x_1^2+ p_1 x_4x_5 & 5 & 11 \\ \hline \end{array} \] \begin{example} \normalfont In this example we chose the square-free monomials rather randomly. Put \[f_1=a_1x_1 ^2 + p_1x_2x_3,\] \[f_2=a_2x_2 ^2 + p_2x_3x_5,\] \[f_3=a_3x_3 ^2 + p_3x_4x_5,\] \[f_4=a_4x_4 ^2 + p_4x_1x_3,\] \[f_5=a_5x_5 ^2 + p_5x_1x_2.\] Then \[{\rm Res}(f_1,\ldots, f_5)=a_1^9a_2^8a_3^6a_4^{11}a_5^7(a_1^7a_2^8a_3^{10}a_4^5a_5^9+p_1^7p_2^8p_3^{10}p_4^5p_5^9).\] \end{example} \section{Relevance to the 2nd Hessian} \begin{example} \label{ex_3} \normalfont Let $K$ and $R$ be the same as in the previous section. We use the notation $v=x_1, w=x_2, \ldots, z=x_5$ interchangeably. We consider the polynomial \[G=120 vwxyz+s_1+s_2+s_3+s_4+s_5,\] where \begin{align*} {}&s_1=- (p_1^3p_3p_4v^5 + p_2^3p_4p_5w^5 + p_1p_3^3p_5x^5 + p_1p_2p_4^3y^5 + p_2p_3p_5^3z^5), \\ {}&s_2= -20(p_1v^3wz + p_2vw^3x + p_3wx^3y + p_4xy^3z + p_5vyz^3), \\ {}&s_3= 20(p_1^2p_3p_4v^3xy + p_2^2p_4p_5w^3yz + p_1p_3^2p_5vx^3z + p_1p_2p_4^2vwy^3 + p_2p_3p_5^2wxz^3),\\ {}&s_4= 30(p_1p_3v^2wx^2 + p_2p_4w^2xy^2 + p_3p_5x^2yz^2 + p_1p_4v^2y^2z + p_2p_5vw^2z^2), \\ {}&s_5 =-30(p_1p_2p_4v^2w^2y + p_2p_3p_5w^2x^2z + p_1p_3p_4vx^2y^2 + p_2p_4p_5wy^2z^2 + p_1p_3p_5v^2xz^2).
\end{align*} We consider $G$ as a polynomial in the polynomial ring $R=K[v,w,x,y,z]$. $G$ was obtained as the Macaulay dual generator of the complete intersection $I=(f_1,f_2,f_3,f_4,f_5)$, where \[f_1=v^2 + p_1wz,\] \[f_2=w^2 + p_2xv,\] \[f_3=x^2 + p_3yw,\] \[f_4=y^2 + p_4zx,\] \[f_5=z^2 + p_5vw.\] As we said in the introduction, the resultant of these elements is \[(p_1p_2p_3p_4p_5+1)^5.\] (The polynomial $G$ was computed by the computer algebra system Mathematica~\cite{mathematica}.) It is not difficult to verify that ${\rm Ann}_R(G) \supset (f_1, \ldots, f_5)$. If $p_1p_2p_3p_4p_5 + 1 \neq 0$, then since ${\rm Ann}_R(G)$ is a Gorenstein ideal containing $f_1, \ldots, f_5$, it follows that they coincide: ${\rm Ann}_R(G)=(f_1, \ldots , f_5)$, and $A:=K[v,w,x,y,z]/{\rm Ann}_R(G)$ has the Hilbert function $(1\ 5 \ 10 \ 10 \ 5 \ 1)$. If $p_1p_2p_3p_4p_5+1 = 0$, then we can calculate that the algebra $A=K[v,w,x,y,z]/{\rm Ann}_R(G)$ has the Hilbert function $(1\ 5 \ 5 \ 5 \ 5 \ 1)$. Since we know that the square-free monomials are linearly independent, the second Hessian matrix of $G$ is, in this case, computed as the $10 \times 10$ matrix \[H^2(G)=\left( \frac{\partial^4 G}{\partial x_i \partial x_j \partial x_k \partial x_l} \right)_{(1 \leq i < j \leq 5),(1 \leq k < l \leq 5)}.\] For details of higher Hessians, see \cite{maeno_watanabe}. Let ${\rm hess}^2(G)$ be the determinant of $H^2(G)$. It is a polynomial in $v, \ldots, z, p_1, \ldots, p_5$. We may regard ${\rm hess}^2(G)$ as a polynomial in $v, w, x, y, z$ with coefficients in $\pi[p_1, p_2, p_3, p_4, p_5]$. Let $P$ be the ideal in the polynomial ring $\pi[p_1, p_2, p_3, p_4, p_5]$ generated by the coefficients of ${\rm hess}^2(G)$, where $\pi$ is the prime field. It has many complicated generators, but surprisingly enough, it turns out that the ideal $P$ is a principal ideal generated by $(1+p_1p_2p_3p_4p_5)^5$.
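The assembly of the second Hessian matrix from fourth-order partials, indexed by pairs $i<j$, is mechanical and can be reproduced in any computer algebra system. The following minimal SymPy sketch does this for a toy quartic in three variables (the toy form and all names are ours, chosen for illustration; it is not the $G$ above):

```python
# Assemble the second Hessian H^2 of a toy quartic in three variables.
# Illustration only: the polynomial below is NOT the G of this example.
import sympy as sp
from itertools import combinations

v, w, x = sp.symbols('v w x')
variables = (v, w, x)
G = v**2*w**2 + w**2*x**2 + v**2*x**2   # toy form of degree 4

pairs = list(combinations(range(len(variables)), 2))  # row/column indices i < j

def entry(a, b):
    # fourth-order partial derivative with respect to the two index pairs
    i, j = pairs[a]
    k, l = pairs[b]
    return sp.diff(G, variables[i], variables[j], variables[k], variables[l])

H2 = sp.Matrix(len(pairs), len(pairs), entry)
hess2 = H2.det()   # nonzero here: H2 = 4*Id, so hess2 = 64
```

For the $G$ of this example one would take the ten pairs from five variables and obtain the $10\times10$ matrix $H^2(G)$ above.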
The computation was done also by Mathematica~\cite{mathematica}. \end{example} \begin{example} \label{ex_4} \normalfont Let $F$ be the polynomial in the first paragraph of the Introduction. $F$ was in fact obtained as the Macaulay dual generator of the complete intersection $I=(f_1, \ldots, f_5)$, where \[f_1=v^2 + p_1wx,\] \[f_2=w^2 + p_2xy,\] \[f_3=x^2 + p_3yz,\] \[f_4=y^2 + p_4vz,\] \[f_5=z^2 + p_5vw.\] As in the previous example, let $H^2(F)$ be the second Hessian of $F$. Then the coefficient ideal in $\pi[p_1, \ldots, p_5]$ turns out to be the unit ideal. Hence the algebra $\pi[p_1, \ldots, p_5][v,w,x,y,z]/I$ gives a flat family of Artinian Gorenstein algebras over $\pi[p_1, \ldots, p_5]$. The fiber is a complete intersection if and only if $p_1p_2p_3p_4p_5+1 \neq 0$, and otherwise it is a Gorenstein algebra defined by a $7$-generated ideal. (This is a computational result.) \end{example} \begin{thebibliography}{99} \bibitem{anderson} I.\ Anderson, Combinatorics of Finite Sets, Dover Publications, Inc., Mineola (2002). Corrected reprint of the 1989 edition. \bibitem{aigner} M.\ Aigner, Combinatorial Theory, Grundlehren der Mathematischen Wissenschaften 234, Springer-Verlag, New York, 1979. \bibitem{bollobas} B.\ Bollob\'{a}s, Combinatorics: Set Systems, Hypergraphs, Families of Vectors and Combinatorial Probability, Cambridge University Press, Cambridge, 1986. \bibitem{debruijn_kruyswiek_tengbergen} N.\ G.\ de Bruijn, C.\ A.\ van E.\ Tengbergen and D.\ R.\ Kruyswijk, \emph{On the set of divisors of a number}, Nieuw Arch.\ Wisk.\ (2) \textbf{23} (1951), 191--193. \bibitem{david_cook_1} D.\ Cook, II, \emph{The Lefschetz properties of monomial complete intersections in positive characteristic}, J.\ Algebra \textbf{369} (2012), 42--58. \bibitem{david_cook_2} D.\ Cook, II and U.\ Nagel, \emph{The weak Lefschetz property, monomial ideals, and Lozenges}, Illinois J.\ Math.\ \textbf{55} (2011), no.\ 1, 377--395.
\bibitem{eisenbud_green_harris} D.\ Eisenbud, M.\ Green and J.\ Harris, \emph{Cayley--Bacharach theorems and conjectures}, Bull.\ Amer.\ Math.\ Soc.\ \textbf{33} (1996), 295--325. \bibitem{HMNW} T.\ Harima, J.\ Migliore, U.\ Nagel and J.\ Watanabe, \emph{The weak and strong Lefschetz properties for Artinian $K$-algebras}, J.\ Algebra \textbf{262} (2003), no.\ 1, 99--126. \bibitem{greene_kleitman} C.\ Greene and D.\ J.\ Kleitman, Proof techniques in the theory of finite sets, in: Studies in Combinatorics, MAA Studies in Mathematics, vol.\ 17, pp.\ 22--79, Mathematical Association of America, Washington (1978). \bibitem{Ikeda_Watanabe_1} H.\ Ikeda and J.\ Watanabe, \emph{The Dilworth lattice of Artinian rings}, J.\ Commut.\ Algebra \textbf{1} (2009), no.\ 2, 315--326. \bibitem{MMMNW} J.\ Migliore, R.\ M.\ Mir\'{o}-Roig, S.\ Murai, and J.\ Watanabe, \emph{On ideals with the Rees property}, Arch.\ Math.\ \textbf{101} (2013), 445--454. \bibitem{migliore_miro_roig} J.\ Migliore and R.\ M.\ Mir\'{o}-Roig, \emph{Ideals of general forms and the ubiquity of the weak Lefschetz property}, J.\ Pure Appl.\ Algebra \textbf{182} (2003), no.\ 1, 79--107. \bibitem{migliore_nagel} J.\ Migliore and U.\ Nagel, \emph{A tour of the weak and strong Lefschetz properties}, J.\ Commut.\ Algebra \textbf{5} (2013), 329--358. \bibitem{sperner} E.\ Sperner, \emph{Ein Satz \"{u}ber Untermengen einer endlichen Menge}, Math.\ Z.\ \textbf{27} (1928), 544--548. \bibitem{GKZ} I.\ M.\ Gelfand, M.\ M.\ Kapranov and A.\ V.\ Zelevinsky, \emph{Discriminants, Resultants, and Multidimensional Determinants}, Birkh\"auser, Boston, 1994. \end{thebibliography} \end{document}
\begin{document} \title{Variation of the uncentered maximal characteristic function} \author{Julian Weigt} \affil{Department of Mathematics and Systems Analysis, Aalto University, Finland, \texttt{[email protected]}} \maketitle \begin{abstract} Let \(\M\) be the uncentered Hardy-Littlewood maximal operator or the dyadic maximal operator and \(d\geq1\). We prove that for a set \(E\subset\mathbb{R}^d\) of finite perimeter the bound \(\var\M\ind E\leq C_d\var\ind E\) holds. We also prove this for the local maximal operator. \end{abstract} \begingroup \renewcommand\thefootnote{}\footnotetext{ 2020 \textit{Mathematics Subject Classification.} 42B25, 26B30.\\% \textit{Key words and phrases.} Maximal function, variation, dyadic cubes. } \addtocounter{footnote}{-1} \endgroup \section*{Introduction} The uncentered Hardy-Littlewood maximal function of a non-negative locally integrable function \(f\) is given by \[\M f(x)=\sup_{B\ni x}\f1{\lm{B}}\int_B f,\] where the supremum is taken over all open balls \(B\subset\mathbb{R}^d\) that contain \(x\). Various versions of this maximal operator have been investigated. There is the (centered) Hardy-Littlewood maximal operator, where the supremum is taken only over those balls that are centered in \(x\), and the dyadic maximal operator, which maximizes over dyadic cubes instead of balls. These operators also have local versions, where for some open set \(\Omega\subset\mathbb{R}^d\) the supremum is taken only over those balls or cubes that are contained in \(\Omega\). For example, the local dyadic maximal function with respect to \(\Omega\) of \(f\in L^1_\loc(\Omega)\) at \(x\in\Omega\) is given by \[\M f(x)=\sup_{x\in Q\subset \Omega}\f1{\lm{Q}}\int_Q f,\] where the supremum is taken over all half open dyadic cubes \(Q\subset\mathbb{R}^d\) with \(x\in Q\subset\Omega\). It is well known that many maximal operators are bounded on \(L^p(\mathbb{R}^d)\) if and only if \(p>1\).
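For orientation, the uncentered maximal function of a characteristic function can be computed by hand in dimension one; the following worked example (added for illustration, in the notation above) already exhibits the behavior studied in this paper. For \(E=[0,1]\subset\mathbb{R}\) one finds

```latex
\[
\M\ind E(x)=
\begin{cases}
1, & x\in[0,1],\\
\f1x, & x>1,\\
\f1{1-x}, & x<0,
\end{cases}
\]
```

since for \(x>1\) the ratio \(|I\cap[0,1]|/|I|\) over open intervals \(I\ni x\) is maximized in the limit \(I\rightarrow(0,x)\), and symmetrically for \(x<0\). In particular \(\var\M\ind E=2=\var\ind E\), consistent with the variation bounds for characteristic functions stated in the abstract.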
The regularity of the maximal operator was first studied in \cite{MR1469106}, where Kinnunen proved for the Hardy-Littlewood maximal operator that for \(p>1\) and \(f\in W^{1,p}(\mathbb{R}^d)\) also the bound \[\|\nabla\M f\|_p\leq C_{d,p}\|\nabla f\|_p\] holds, from which it follows that the Hardy-Littlewood maximal operator is bounded on \(W^{1,p}(\mathbb{R}^d)\). The proof combines the pointwise bound \(|\nabla\M f|\leq\M|\nabla f|\) with the \(L^p(\mathbb{R}^d)\)-bound of the maximal operator. Since the maximal operator is not bounded on \(L^1(\mathbb{R}^d)\), this approach fails for \(p=1\). For \(p>1\) the gradient \(L^p(\mathbb{R}^d)\)-bound or some corresponding version is valid for most maximal operators. However, so far no counterexamples have been found for \(p=1\). So in 2004, Haj\l{}asz and Onninen posed the following question in \cite{MR2041705}: For the Hardy-Littlewood maximal operator \(\M\), is \(f\mapsto|\nabla\M f|\) a bounded mapping \(W^{1,1}(\mathbb{R}^d)\rightarrow L^1(\mathbb{R}^d)\)? This question, for various maximal operators, has since become a well-known problem and has been the subject of a great deal of research. In one dimension the gradient bound on \(L^1(\mathbb{R})\) was proved in \cite{MR1898539} by Tanaka for the uncentered maximal function, and later in \cite{MR3310075} by Kurka for the centered Hardy-Littlewood maximal function. The latter proof turned out to be much more complicated. In \cite{MR3800463} Luiro proved the gradient bound for radial functions in \(L^1(\mathbb{R}^d)\) for the uncentered maximal operator. More research on this question, and more generally on the endpoint regularity of maximal operators, can be found in \cite{MR2868961,MR2276629,MR2356061,MR3091605,MR3624402,MR3695894,MR2550181,MR3592548}. However, so far the question has remained essentially open in dimensions larger than one for any maximal operator.
In this paper we prove that for \(\M\) being the dyadic or the uncentered Hardy-Littlewood maximal operator and \(E\subset\mathbb{R}^d\) being a set with finite perimeter, we have \[\var\M\ind E\leq C_d\var\ind E.\] This answers the question of Haj\l{}asz and Onninen in a special case, and is, to the best of our knowledge, the first truly higher dimensional result for \(p=1\). We furthermore prove a localized version, as stated in \Cref{theo_goaldyadic,theo_goal}. The uncentered Hardy-Littlewood maximal function and the dyadic maximal function have in common that their level sets \(\{\M f>\lambda\}\) can be written as the union of all balls/dyadic cubes \(X\) with \(\int_X f>\lambda\lm X\). Our proof relies on this. Since this is not true for the centered Hardy-Littlewood maximal function, a different approach has to be found for that maximal operator. Also related topics for various exponents \(1\leq p\leq\infty\) have been studied, such as the continuity of the maximal operator in Sobolev spaces \cite{MR1724375} and bounds for the gradient of other maximal operators, such as fractional, convolution, discrete, local and bilinear maximal operators \cite{MR3809456,MR2431055,MR3063097,MR3319617,MR1650343,MR1979008,2017arXiv171007233L,2019arXiv190904375R}. I would like to thank my supervisor, Juha Kinnunen, for all of his support, Panu Lahti for discussions on the theory of sets of finite perimeter, his suggested proof of \Cref{cla_largeboundaryinball}, and repeated reading of and advice on the manuscript, and Carlos Mudarra for discussions on the proof of \Cref{cla_surfacedistanceboxingball}. I am indebted to the anonymous referees for their careful reading and the large number of improvements they suggested, both in style and in mathematical substance. The author has been supported by the Vilho, Yrj\"o and Kalle V\"ais\"al\"a Foundation of the Finnish Academy of Science and Letters.
\section{Preliminaries and main result}\label{sec_preliminaries} We work in the setting of sets of finite perimeter, as in Evans-Gariepy \cite{MR3409135}, Section~5. For a measurable set \(E\subset\mathbb{R}^d\) we denote by \(\lm E\) its Lebesgue measure and by \(\sm E\) its \((d-1)\)-dimensional Hausdorff measure. For an open set \(\Omega\subset\mathbb{R}^d\), a function \(f\in L^1_\loc(\Omega)\) is said to have locally bounded variation if for each open set \(U\) that is compactly contained in \(\Omega\) we have \[\sup\Bigl\{\int_Uf\div\varphi:\varphi\in C^1_{\tx c}(U;\mathbb{R}^d),\ |\varphi|\leq1\Bigr\}<\infty.\] Such a function comes with a measure \(\mu\) and a function \(\nu:\Omega\rightarrow\mathbb{R}^d\) with \(|\nu|=1\) \(\mu\)-a.e.\ such that for all \(\varphi\in C^1_{\tx c}(\Omega;\mathbb{R}^d)\) we have \[\int_\Omega f\div\varphi=\int_\Omega\varphi\nu\intd \mu.\] We define the variation of \(f\) in \(\Omega\) by \[\var_\Omega f=\mu(\Omega).\] For a measurable set \(E\subset\mathbb{R}^d\) we define the measure theoretic boundary by \[\mb E=\Bigl\{x:\limsup_{r\rightarrow0}\f{\lm{B(x,r)\setminus E}}{r^d}>0,\ \limsup_{r\rightarrow0}\f{\lm{B(x,r)\cap E}}{r^d}>0\Bigr\}.\] The following coarea formula is our strategy to approach the variation of the maximal function. \begin{lem}[Theorem~5.9 in \cite{MR3409135}]\label{lem_coareabv} Let \(\Omega\subset\mathbb{R}^d\) be open and let \(f\in L^1_\loc(\Omega)\). Then \[\var_\Omega f=\int_\mathbb{R}\sm{\mb{\{f>\lambda\}}\cap \Omega}\intd\lambda.\] \end{lem} We say that a measurable set \(E\subset\mathbb{R}^d\) has locally finite perimeter if its characteristic function \(\ind E\) has locally bounded variation. For \(f=\ind E\) we call \(\var_\Omega\ind E\) the perimeter of \(E\) and \(\nu\) from above the outer normal of \(E\).
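As a quick illustration of how \Cref{lem_coareabv} is used (a worked example added here; \(\sigma_d\) denotes the Lebesgue measure of the unit ball, as in \Cref{eq_unitvolumeratio}): for the cone function \(f(x)=\max\{0,1-|x|\}\) on \(\Omega=\mathbb{R}^d\) the level sets are balls, \(\{f>\lambda\}=B(0,1-\lambda)\) for \(0<\lambda<1\), so

```latex
\[
\var_{\mathbb{R}^d} f
=\int_0^1\sm{\partial B(0,1-\lambda)}\intd\lambda
=\int_0^1 d\sigma_d(1-\lambda)^{d-1}\intd\lambda
=\sigma_d ,
\]
```

which matches \(\int_{\mathbb{R}^d}|\nabla f|=\lm{B(0,1)}=\sigma_d\).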
\Cref{lem_coareabv} implies \[\var_\Omega\ind E=\sm{\mb E\cap \Omega}.\] Recall the definition of the set of dyadic cubes \[\bigcup_{n\in\mathbb{Z}}\bigl\{[x_1,x_1+2^n)\times\ldots\times[x_d,x_d+2^n):x_i\in2^n\mathbb{Z}\text{ for }i=1,\ldots,d\bigr\}.\] The maximal function of a characteristic function can be written as \[\M \ind E(x)=\sup_{x\in X\subset \Omega}\f{\lm{E\cap X}}{\lm{X}},\] where \(X\) ranges over balls for the uncentered maximal operator, and over dyadic cubes for the dyadic maximal operator. Now we are ready to state the main results of this paper. \begin{theo}\label{theo_goaldyadic} Let \(\M\) be the local dyadic maximal operator with respect to an open set \(\Omega\subset\mathbb{R}^d\). Let \(E\subset\mathbb{R}^d\) be a set with locally finite perimeter. Then \[\var_\Omega\M\ind E\leq C_d\sm{\mb E\cap \Omega},\] where \(C_d\) depends only on the dimension \(d\). \end{theo} \begin{theo}\label{theo_goal} Let \(\M\) be the local uncentered maximal operator with respect to an open set \(\Omega\subset\mathbb{R}^d\). Let \(E\subset\mathbb{R}^d\) be a set with locally finite perimeter. Then \[\var_\Omega\M\ind E\leq C_d\sm{\mb E\cap \Omega},\] where \(C_d\) depends only on the dimension \(d\). \end{theo} We can for example take \(\Omega=\mathbb{R}^d\). Denote \(\{\M\ind E>\lambda\}=\{x\in\Omega:\M\ind E(x)>\lambda\}\). We reduce \Cref{theo_goaldyadic,theo_goal} to the following results. \begin{pro}\label{eq_levelsetsdyadic} Let \(\M\) be the local dyadic maximal operator with respect to some open set \(\Omega\subset\mathbb{R}^d\). Let \(E\subset\mathbb{R}^d\) be a set with locally finite perimeter and let \(\lambda\in(0,1)\). Then \[\sm{\mb{\{\M\ind E> \lambda\}}\cap \Omega}\leq C_d\lambda^{-\f{d-1}d}\sm{\mb E\cap \Omega}.\] \end{pro} By \Cref{lem_finMf} we have \(\mc E\cap\Omega\subset\mc{\{\M\ind E>\lambda\}}\), so that we may intersect the right-hand side with \(\mc{\{\M\ind E>\lambda\}}\).
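The factor \(\lambda^{-\f{d-1}d}\) has the right order of magnitude, as the following heuristic (added here, with dimensional constants suppressed) shows. For \(E=B(0,\varepsilon)\) and \(\Omega=\mathbb{R}^d\), every ball \(B(0,r)\) with \(r\geq\varepsilon\) gives \(\M\ind E\geq(\varepsilon/r)^d\) on \(B(0,r)\), so \(\{\M\ind E>\lambda\}\) contains, and is in fact comparable to, the ball \(B(0,\varepsilon\lambda^{-\f1d})\). Hence

```latex
\[
\sm{\mb{\{\M\ind E>\lambda\}}}
\approx d\sigma_d\bigl(\varepsilon\lambda^{-\f1d}\bigr)^{d-1}
=\lambda^{-\f{d-1}d}\sm{\mb E} ,
\]
```

which is consistent with the optimality of this rate proved in \Cref{sec_optimal}.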
\begin{pro}\label{eq_levelsets} Let \(\M\) be the local uncentered maximal operator with respect to an open set \(\Omega\subset\mathbb{R}^d\). Let \(E\subset\mathbb{R}^d\) be a set with locally finite perimeter and let \(\lambda\in(0,1)\). Then \[\sm{\mb{\{\M\ind E> \lambda\}}\cap\Omega}\leq C_d\lambda^{-\f{d-1}d}(1-\log\lambda)\sm{\mb E\cap\{\M\ind E>\lambda\}}.\] \end{pro} The constants \(C_d\) that appear in \Cref{theo_goal,theo_goaldyadic} and \Cref{eq_levelsetsdyadic,eq_levelsets} are not equal. Since the proofs of \Cref{theo_goaldyadic,theo_goal} are almost the same, we give them simultaneously. \begin{proof}[Proof of \Cref{theo_goaldyadic,theo_goal}] By \Cref{lem_coareabv} and \Cref{eq_levelsetsdyadic,eq_levelsets} we have \begin{align*} \var_\Omega\M\ind E&=\int_0^1\sm{\mb{\{\M\ind E> \lambda\}}\cap \Omega}\intd\lambda\\ &\leq C_d\int_0^1\lambda^{-\f{d-1}d}(1-\log\lambda)\sm{\mb E\cap \Omega}\intd\lambda\\ &=d(d+1)C_d\sm{\mb E\cap \Omega}. \end{align*} \end{proof} In \Cref{sec_both,sec_dyadic,sec_uncentered} we prove \Cref{eq_levelsetsdyadic,eq_levelsets}. In \Cref{sec_optimal} we prove \Cref{eq_levelsets_optimal}, which is \Cref{eq_levelsets} without the factor \(1-\log\lambda\). The rate \(\lambda^{-\f{d-1}d}\) is optimal. We introduce some notation we will use throughout the paper. By \(a\lesssim b\) we mean that there exists a constant \(C_d\), depending only on the dimension \(d\), such that \(a\leq C_d b\). For a set \(\B\) of subsets of \(\mathbb{R}^d\) we write \[\bigcup\B=\bigcup_{B\in\B}B.\] For a ball \(B=B(x,r)\subset\mathbb{R}^d\) and \(c>0\) we denote \(cB=B(x,cr)\). If \(\B\) is a set of balls we denote \[c\B=\{cB:B\in\B\}.\] For a set \(E\subset\mathbb{R}^d\) and a point \(x\in\mathbb{R}^d\) we denote \[ \dist(x,E) = \inf_{y\in E}|x-y| . \] We also need more measure theoretic quantities.
We define the measure theoretic interior by \begin{align*} \mi E&=\Bigl\{x:\limsup_{r\rightarrow0}\f{\lm{B(x,r)\setminus E}}{r^d}=0\Bigr\},\\ \intertext{the measure theoretic closure by} \mc E&=\Bigl\{x:\limsup_{r\rightarrow0}\f{\lm{B(x,r)\cap E}}{r^d}>0\Bigr\} \end{align*} and the measure theoretic boundary by \[ \mb E = \mc E \setminus \mi E . \] \begin{lem}\label{lem_boundaryofunion} Let \(A,B\subset\mathbb{R}^d\) be measurable. Then \[\mb{(A\cup B)}\subset(\mb A\setminus\mc B)\cup(\mb B\setminus\mc A)\cup(\mb A\cap\mb B).\] \end{lem} \begin{proof} Let \(x\in\mb{(A\cup B)}\). Then \begin{align*} \limsup_{r\rightarrow0}\f{\lm{B(x,r)\cap(A\cup B)}}{r^d}&>0\\ \intertext{ and } \limsup_{r\rightarrow0}\f{\lm{B(x,r)\setminus(A\cup B)}}{r^d}&>0. \end{align*} By symmetry it suffices to consider the case that \[\limsup_{r\rightarrow0}\f{\lm{B(x,r)\cap A}}{r^d}>0.\] Then \[\limsup_{r\rightarrow0}\f{\lm{B(x,r)\setminus A}}{r^d}\geq\limsup_{r\rightarrow0}\f{\lm{B(x,r)\setminus(A\cup B)}}{r^d}>0\] which means \(x\in\mb A\). Analogously, if \[\limsup_{r\rightarrow0}\f{\lm{B(x,r)\cap B}}{r^d}>0\] then \(x\in\mb B\) so we get \(x\in\mb A\cap\mb B\). Otherwise \[\limsup_{r\rightarrow0}\f{\lm{B(x,r)\cap B}}{r^d}=0\] and we can conclude \(x\in\mb A\setminus\mc B\). \end{proof} Let \(E\subset\mathbb{R}^d\) be measurable and let \(\mu\) be the measure from the definition of \(\var\ind E\) and \(\nu\) the outer normal. We define the reduced boundary \(\rb E\) of \(E\subset\mathbb{R}^d\) as the set of all points \(x\in\mathbb{R}^d\) such that for all \(r>0\) we have \(\mu(B(x,r))>0\), \[\lim_{r\rightarrow0}\avint_{B(x,r)}\nu\intd\mu=\nu(x),\] and \(|\nu(x)|=1\). This is Definition~5.4 in \cite{MR3409135}. By Lemma~5.5 in \cite{MR3409135} we have \(\rb E\subset\mb E\) and \(\sm{\mb E\setminus\rb E}=0\). Thus it suffices to consider only the reduced boundary when estimating the perimeter of a set. But most of the time we will formulate the results for the measure theoretic boundary. 
The exception is \Cref{lem_finMf}, which we could only prove for the reduced boundary because there we make use of Theorem~5.13 in \cite{MR3409135}, which states the following. \begin{lem}[Theorem~5.13 in \cite{MR3409135}]\label{lem_blowuprb} Let \(E\subset\mathbb{R}^d\) be a measurable set. Assume \(0\in\rb E\) with \(\nu(0)=(1,0,\ldots,0)\). Then for \(r\rightarrow0\) we have \(\ind{\f1rE}\rightarrow\ind{\{x:x_1<0\}}\) in \(L^1_\loc(\mathbb{R}^d)\). \end{lem} A central tool used here is the relative isoperimetric inequality, see Theorem~5.11 in \cite{MR3409135}. It states that for a ball \(B\) and any measurable set \(E\subset\mathbb{R}^d\) we have \begin{equation}\label{eq_isoperimetric} \min\{\lm{E\cap B},\lm{B\setminus E}\}^{d-1}\lesssim\sm{\mb E\cap B}^d. \end{equation} However we need the relative isoperimetric inequality also for other sets than balls. An open bounded set \(A\) is called a John domain if there are a constant \(K\) and a point \(x\in A\) from which every other point \(y\in A\) can be reached via a path \(\gamma\) such that for all \(t\) we have \begin{equation}\label{eq_twistedcone} \dist(\gamma(t),A^\comp)\geq K^{-1}|y-\gamma(t)|. \end{equation} \begin{figure} \caption{A John domain.} \label{fig_Acone} \end{figure} This is called the cone condition, see \Cref{fig_Acone}. Theorem~107 in the lecture notes \cite{lecturenoteshajlasz} by Piotr Haj\l{}asz states that all John domains admit a relative isoperimetric inequality. \begin{lem}\label{lem_isoperimetric} Let \(A\subset\mathbb{R}^d\) be a John domain with constant \(K\). Then \(A\) satisfies a relative isoperimetric inequality with a constant \(C_{K,d}\) depending only on \(K\) and the dimension \(d\), \[\min\{\lm{E\cap A},\lm{A\setminus E}\}^{d-1}\leq C_{K,d}\sm{\mb E\cap A}^d.\] \end{lem} For example a ball and an open cube are John domains. Another basic tool is the Vitali covering lemma, see for example Theorem~1.24 in \cite{MR3409135}.
\begin{lem}[Vitali covering lemma]\label{lem_disjointcover} Let \(\B\) be a set of balls in \(\mathbb{R}^d\) with diameter bounded by some \(R\in\mathbb{R}\). Then it has a countable subset \(\tilde\B\) of disjoint balls such that \[\bigcup\B\subset\bigcup5\tilde\B.\] \end{lem} Instead of considering \(\{\M\ind E>\lambda\}\) we will only consider a finite union of balls/cubes. In order to pass from there to the whole set \(\{\M\ind E>\lambda\}\) we will use an approximation result. We say that a sequence \((A_n)_n\) of sets in \(\mathbb{R}^d\) converges to some set \(A\) in \(L^1_\loc(\mathbb{R}^d)\) if \((\ind{A_n})_n\) converges to \(\ind A\) in \(L^1_\loc(\mathbb{R}^d)\). \begin{lem}[Theorem~5.2 in \cite{MR3409135} for characteristic functions]\label{lem_l1approx} Let \(\Omega\subset\mathbb{R}^d\) be an open set and let \((A_n)_n\) be subsets of \(\mathbb{R}^d\) with locally finite perimeter that converge to \(A\) in \(L^1_\loc(\Omega)\). Then \[\sm{\mb A\cap \Omega}\leq\liminf_{n\rightarrow\infty}\sm{\mb A_n\cap \Omega}.\] \end{lem} \begin{lem} \label{eq_unitvolumeratio} Let \(\sigma_d=\lm{B(0,1)}\) be the Lebesgue measure of the \(d\)-dimensional unit ball. Then \[ \sqrt{\f{2\pi}{d+1}} \leq \f{\sigma_d}{\sigma_{d-1}} \leq \sqrt{\f{2\pi}d} . \] \end{lem} \begin{proof} By the logarithmic convexity of the \(\Gamma\)-function, for all \(x>\f12\) we have \begin{align*} \f{\Gamma(x)}{\Gamma(x+1/2)} &\leq \f{\sqrt{\Gamma(x-1/2)\Gamma(x+1/2)}}{\Gamma(x+1/2)} = \sqrt{\f{\Gamma(x-1/2)}{\Gamma(x+1/2)}} = \f1{\sqrt{x-1/2}} , \\ \f{\Gamma(x)}{\Gamma(x+1/2)} &\geq \f{\Gamma(x)}{\sqrt{\Gamma(x)\Gamma(x+1)}} = \sqrt{\f{\Gamma(x)}{\Gamma(x+1)}} = \f1{\sqrt x} , \end{align*} and the result follows from \( \sigma_d =\pi^{\f d2}/ \Gamma(d/2+1) . \) \end{proof} We will need some facts about convex sets. \begin{lem} \label{lem_surfaceconvex} The following properties hold for all convex and bounded sets \(A,B\subset\mathbb{R}^d\). 
\begin{enumerate} \item The set \(A\cap B\) is convex. \label{it_intersectionconvex} \item If \(A\subset B\) then \( \sm{\partial A} \leq \sm{\partial B} . \) \label{it_perimeterconvex} \item For every \(\varepsilon>0\) we have \( \lm{\{x\in A:0<\dist(x,A^\comp)\leq\varepsilon\}} \leq\varepsilon \sm{\partial A} . \) \label{it_perimeterblowupcovex} \end{enumerate} \end{lem} \begin{proof} \Cref{it_intersectionconvex} follows from the definition of convexity. For every \(x\in\partial B\) there is a point \(z\in\partial A\) with \[ |z-x| = \min_{y\in\partial A}|y-x| . \] A straightforward computation shows that if \(z'\in\partial A\) with \(|x-z'|=\min_{y\in\partial A}|x-y|\) then \( |x-(z+z')/2| \leq \min_{y\in\partial A}|x-y| \) and the inequality is strict if \(z'\neq z\). Hence we must have \(z'=z\) because \((z+z')/2\in\cl A\) by convexity. We denote \(p(x)=z\). Since \(A\) is convex, at every point \(z\in\partial A\) there is a hyperplane \(H\) which contains \(z\) and such that for all \(y\in\partial A\) we have \(\langle y-z,n\rangle\leq0\), where \(n\) is the normal of \(H\). Because \(B\) is bounded, there is an \(r\geq0\) such that \(z+rn\in\partial B\). It is easy to see that \(p(z+rn)=z\). That means \(p:\partial B\rightarrow\partial A\) is surjective. Let \(x_1,x_2\in\partial B\). For \(i=1,2\) denote \(z_i=p(x_i)\) and let \(H_i\) be the hyperplane with normal \(x_i-z_i\) which contains \(z_i\). Then \( \langle z_2-z_1,x_1-z_1\rangle \leq0 \) because otherwise it is straightforward to find a \(t>0\) small enough with \((1-t)z_1+tz_2\in\cl A\) which is closer to \(x_1\) than \(z_1\), contradicting \(p(x_1)=z_1\). Similarly we must have \( \langle z_1-z_2,x_2-z_2\rangle \leq0 . \) We can conclude \[ |z_1-z_2||x_1-x_2| \geq \langle z_1-z_2,x_1-x_2\rangle = \langle z_1-z_2,x_1-z_1\rangle + \langle z_2-z_1,x_2-z_2\rangle + \langle z_1-z_2,z_1-z_2\rangle \geq |z_1-z_2|^2 .
\] This means that the map \(p:\partial B\rightarrow\partial A\) is \(1\)-Lipschitz, and we obtain \cref{it_perimeterconvex} because the Hausdorff measure does not increase under \(1\)-Lipschitz maps by \cite[Theorem~2.8]{MR3409135}. For every \(\lambda\geq0\) denote \(A_\lambda=\{x\in A:\dist(x,A^\comp)\geq\lambda\}\). Then \(A_\lambda\) is convex and by \cite[Theorem~3.14]{MR3409135} and \cref{it_perimeterconvex} we have \[ \lm{\{x\in A:0<\dist(x,A^\comp)\leq\varepsilon\}} = \int_0^\varepsilon \sm{\partial A_\lambda} \intd\lambda \leq \int_0^\varepsilon \sm{\partial A} \intd\lambda = \varepsilon \sm{\partial A} . \] \end{proof} \section{Tools for both maximal operators}\label{sec_both} We start with a couple of tools that are used for both maximal operators. \begin{lem}\label{lem_isoperimetricconsequence} Let \(X\subset\mathbb{R}^d\) be an open set with finite measure and finite perimeter which satisfies a relative isoperimetric inequality, and denote \(c=\sm{\partial X}^d/\lm X^{d-1}\). Let \(0<\lambda\leq1-\varepsilon<1\) and let \(E\) be a measurable set such that \(\lambda\leq\lm{E\cap X}/\lm X\leq1-\varepsilon\). Then \[\sm{\mb E\cap X}\gtrsim c^{-1/d}\varepsilon^{d-1}\lambda^{\f{d-1}d}\sm{\partial X}.\] \end{lem} Note that \(c\) is invariant under scaling of \(X\). \begin{proof} We first prove \begin{equation} \label{eq_isopereps} \lm{E\cap X}^{d-1} \lesssim\varepsilon^{-(d-1)} \sm{\mb E\cap X}^d. \end{equation} If \(\varepsilon\geq\f12\) then \cref{eq_isopereps} follows directly from the relative isoperimetric inequality for \(X\). For \(\varepsilon<\f12\) we obtain \cref{eq_isopereps} from the relative isoperimetric inequality as follows \[ \sm{\mb E\cap X}^d\gtrsim\lm{X\setminus E}^{d-1} \geq\varepsilon^{d-1} \lm{X}^{d-1} \geq\varepsilon^{d-1} \lm{E\cap X}^{d-1} .
\] From \cref{eq_isopereps} we conclude \[\varepsilon^{-(d-1)}\sm{\mb E\cap X}\gtrsim\lm{E\cap X}^{\f{d-1}d}\geq\lambda^{\f{d-1}d}\lm X^{\f{d-1}d}\geq c^{-1/d}\lambda^{\f{d-1}d}\sm{\partial X}.\] \end{proof} \begin{lem}[Boxing inequality, cf.\ Theorem 3.1 in Kinnunen, Korte, Shanmugalingam, Tuominen \cite{MR2400262}]\label{lem_boxing} Let \(E\subset\mathbb{R}^d\) be a set with finite measure that is contained in the union of a set \(\B\) of balls \(B\) with \(\lm{E\cap B}\leq\lm B/2\). Then there is a set \(\F\) of balls \(F\) with \(\lm{F\cap E}=\lm F/2\) whose union covers almost all of \(E\). Furthermore, each \(F\in\F\) is contained in a ball \(B\in\B\). \end{lem} \begin{proof} It suffices to show that for every ball \(B(x_1,r_1)\in\B\), every Lebesgue point \(x\in\mi E\) with \(x\in B(x_1,r_1)\) is contained in a ball \(F\subset B(x_1,r_1)\) with \(\lm{F\cap E}=\lm F/2\). By assumption \begin{align*} \lm{E\cap B(x_1,r_1)}&\leq\f{\lm{B(x_1,r_1)}}2\\ \intertext{and since \(x\) is a Lebesgue point there is a ball \(B(x_0,r_0)\) with \(x\in B(x_0,r_0)\subset B(x_1,r_1)\) and} \lm{E\cap B(x_0,r_0)}&\geq\f{\lm{B(x_0,r_0)}}2. \end{align*} Define \(x_t=(1-t)\cdot x_0+t\cdot x_1\) and \(r_t=(1-t)\cdot r_0+t\cdot r_1\) so that \(t\mapsto B(x_t,r_t)\) is a continuous transformation of balls. That means there is a \(t\in[0,1]\) with \[\lm{E\cap B(x_t,r_t)}=\f{\lm{B(x_t,r_t)}}2.\] Since \(x\in B(x_0,r_0)\subset B(x_t,r_t)\subset B(x_1,r_1)\), this is the desired ball. \end{proof} We will prove a more specialized version of \Cref{lem_boxing}. \begin{lem}\label{cla_surfacedistanceboxingball} Let \(X\) be an open cube or a ball in \(\mathbb{R}^d\) and \(E\) a set with \(\lm{E\cap X}\geq\lambda\lm X\). 
Then there is a cover \(\C\) of \(\partial X\setminus\mc E\) consisting of balls \(C\) with \(\diam C\leq2\diam X\) and \begin{equation}\label{eq_ballcontainsmuchboundary} \smb{\mb E\cap\Bigl\{y\in C:\dist(y,X^\comp)> \f{\lambda\diam C}{4dc_d}\Bigr\}}\gtrsim\lambda^{\f{d-1}d}\sm{\partial C} , \end{equation} where \(c_d=2^d\) if \(X\) is a ball and \(c_d=d^{\f d2}\sigma_d\) if \(X\) is a cube. \end{lem} \begin{figure} \caption{The regions in \Cref{cla_surfacedistanceboxingball}} \end{figure} The constants in \Cref{cla_surfacedistanceboxingball} are not important and one could also impose a stronger bound on the diameter of the balls \(C\in\C\) for \(\lambda\) near 1. \begin{proof}[Proof of \Cref{cla_surfacedistanceboxingball}] It suffices to show that for each \(x\in\partial X\setminus\mc E\) there is a ball \(C\) centered at \(x\) that satisfies \cref{eq_ballcontainsmuchboundary}. Let \(x\in\partial X\setminus\mc E\) and for \(0<r\leq\diam X\) define \[ A(r) = \Bigl\{y\in B(x,r):\dist(y,X^\comp)> \f{\lambda r}{2dc_d}\Bigr\} . \] We first show that \(A(r)\) is a John domain. Consider the case that \(X\) is a ball. Then there is a point \(z\in X\cap B(x,r)\) such that \(B(z,\f r2)\subset X\cap B(x,r)\). That means \[ B\Bigl(z,\f r4\Bigr) \subset B\Bigl(z,\f r2-\f{\lambda r}{2dc_d}\Bigr) \subset A(r) . \] Now let \(X\) be a cube. Then \(X\cap B(x,r)\) contains a cube with diameter at least \(r\), i.e.\ sidelength at least \(\f r{\sqrt d}\). Thus, \(A(r)\) contains a cube with sidelength at least \[ \f r{\sqrt d}-2\f{\lambda r}{2dc_d} \geq \f r{\sqrt d}\Bigl(1-\f1{\sqrt dc_d}\Bigr) \geq \f r{2\sqrt d} \] which in turn contains a ball \(B\) with radius \(\f r{4\sqrt d}\). The last inequality holds because \(1^{\f{1+1}2}\sigma_1=2\) and \(\sqrt dc_d=d^{\f{d+1}2}\sigma_d\) is increasing in \(d\) by \Cref{eq_unitvolumeratio}. 
We have shown that there is a point \(z\in A(r)\) such that \begin{equation} \label{eq_Arlikeball} B\Bigl(z,\f r{4\sqrt d}\Bigr) \subset A(r) , \end{equation} whether \(X\) is a cube or a ball. For any \(y\in A(r)\) we have \(\dist(y,z)\leq\diam(A(r))\leq 2r\). Because \(A(r)\) is convex by \Cref{lem_surfaceconvex}\cref{it_intersectionconvex}, it contains the convex hull of \(B\bigl(z,\f r{4\sqrt d}\bigr)\cup\{y\}\). We can conclude that \(A(r)\) is a John domain with \(K=\f{2r}{r/(4\sqrt d)}=8\sqrt d\). We have \begin{align} \nonumber \lm{B(x,r)\setminus A(r)} &\leq \lmb{\Bigl\{y:0<\dist(y,(B(x,r)\cap X)^\comp)\leq \f{\lambda r}{2dc_d}\Bigr\}} \\ \nonumber &\leq \f{\lambda r}{2dc_d}\sm{\partial(B(x,r)\cap X)} \\ \nonumber &\leq \f{\lambda r}{2dc_d}\sm{\partial B(x,r)} \\ \nonumber &= \f\lambda{2c_d}\lm{B(x,r)} \\ \label{eq_awayfromboundary} &\leq \f\lambda2\lm{B(x,r)\cap X} , \end{align} where the last inequality holds because, as observed above, \(B(x,r)\cap X\) contains a ball with radius \(\f r2\) if \(X\) is a ball, and a cube with sidelength \(\f r{\sqrt d}\) if \(X\) is a cube. Then from \[\f{\lm{X\cap E}}{\lm X}\geq\lambda\] and \cref{eq_awayfromboundary} with \(r=\diam X\) we get \[\f{\lm{A(\diam X)\cap E}}{\lm{A(\diam X)}}\geq\f{\lm{A(\diam X)\cap E}}{\lm X}\geq\lambda-\f\lambda2=\f\lambda2.\] Since \(x\not\in\mc E\) we have \(\lm{E\cap B(x,r)}/r^d\rightarrow0\) for \(r\rightarrow0\). By \cref{eq_Arlikeball} this implies that there is an \(r_0\) with \[\f{\lm{A(r_0)\cap E}}{\lm{A(r_0)}}\leq\f\lambda2.\] By continuity we conclude that there is an \(r_0\leq r\leq\diam X\) such that \[\f{\lm{A(r)\cap E}}{\lm{A(r)}} =\f\lambda2.\] By \cref{eq_Arlikeball} and \Cref{lem_surfaceconvex}\cref{it_perimeterconvex} we have \begin{equation} \label{eq_dBdA} \sm{\partial B(x,r)} \lesssim \smb{\partial B\Bigl(z,\f r{4\sqrt d}\Bigr)} \leq \sm{\partial A(r)} . 
\end{equation} Because \(A(r)\) is a John domain it satisfies a relative isoperimetric inequality by \Cref{lem_isoperimetric}, so that we can apply \Cref{lem_isoperimetricconsequence} with \(X=A(r)\) and \(\varepsilon=\f12\) and obtain \begin{equation} \label{eq_Aisoperimetric} \sm{\partial A(r)} \lesssim \lambda^{-\f{d-1}d}\sm{\mb E\cap A(r)} . \end{equation} Combining \cref{eq_dBdA} and \cref{eq_Aisoperimetric} we obtain \cref{eq_ballcontainsmuchboundary}, which finishes the proof. \end{proof} Note that the following \Cref{lem_finMf} addresses the reduced boundary \(\rb E\) and not the measure theoretic boundary \(\mb E\). \begin{lem}\label{lem_finMf} Let \(\Omega\subset\mathbb{R}^d\) be an open set and let \(E\subset\mathbb{R}^d\) be measurable. Then for both the dyadic and the uncentered maximal operator with domain \(\Omega\) we have \(\mi E\cap\Omega\subset\{\M\ind E=1\}\). For the uncentered maximal operator we furthermore have \(\rb E\cap\Omega\subset\{\M\ind E=1\}\). \end{lem} This is a slightly more precise version of \(\M f\geq f\) almost everywhere for characteristic functions. \begin{proof} Let \(x\in\mi E\cap\Omega\). Then for every \(\varepsilon>0\) there is a ball \(B\subset\Omega\) with center \(x\) and with \(\lm{B\setminus E}\leq\varepsilon\lm{B}\) and a dyadic cube \(Q\) with \(x\in Q\subset B\) and \(\lm{Q}\gtrsim\lm{B}\). That means \(\lm{Q\setminus E}\leq\varepsilon\lm{B}\lesssim\varepsilon\lm{Q}\). We can conclude \(\M\ind E(x)=1\). Let \(x\in\rb E\cap\Omega\). It suffices to consider \(x=0\) and \[\lim_{r\rightarrow0}\avint_{B(0,r)}\nu_E=(1,0,\ldots,0).\] Then for \(r\) small enough we have \(0\in B_r=B((-r,0,\ldots,0),r+r^2)\subset\Omega\), and so by \Cref{lem_blowuprb} we obtain \[ \lim_{r\rightarrow0} \avint_{B_r}\ind E = \lim_{r\rightarrow0} \f{\lm{\{y\in B_r:y_1<0\}}}{\lm{B_r}} = \lim_{r\rightarrow0} \f{\lm{\{y\in B(0,r+r^2):y_1<r\}}}{\lm{B(0,r+r^2)}} =1 . 
\] \end{proof} \section{The dyadic maximal function}\label{sec_dyadic} In this section we discuss the argument for the dyadic maximal operator. It already showcases the main idea of the proof for the uncentered maximal operator. For the superlevel set of the dyadic maximal operator we have \[\{\M\ind E>\lambda\}=\bigcup\{\tx{dyadic cube }Q:\lm{E\cap Q}>\lambda\lm Q\}.\] The first step in the proof of \Cref{eq_levelsetsdyadic} is to consider only a finite set \(\Q\) of cubes \(Q\) with \(\lm{E\cap Q}>\lambda\lm Q\) instead of the whole set, because it allows us to write \[\sm{\mb\bigcup\Q}\leq\sum_{Q\in\Q}\sm{\partial Q\cap\mb\bigcup\Q}.\] From there we use approximation results to extend to the union of all cubes \(Q\) with \(\lm{E\cap Q}>\lambda\lm Q\). The strategy for the uncentered maximal operator is similar, but with cubes replaced by balls. The main argument is \Cref{lem_varav}, which is more or less \Cref{eq_levelsetsdyadic} for the case that \(\{\M\ind E>\lambda\}\) consists of only one cube. \begin{pro}\label{lem_varav} Let \(0<\lambda\leq1\), let \(Q\) be a cube and let \(E\subset\mathbb{R}^d\) be a measurable set with \(\lm{E\cap Q}\geq\lambda\lm Q\). Then \[\sm{\partial Q\setminus\mc E}\lesssim\lambda^{-\f{d-1}d}\sm{\mb E\cap\ti Q}.\] \end{pro} \begin{proof}[Proof of \Cref{lem_varav}] We apply \Cref{cla_surfacedistanceboxingball} to \(X=\ti Q\) and for the resulting cover use \Cref{lem_disjointcover} to extract a disjoint subcollection \(\C\) such that \(5\C\) still covers \(\partial Q\setminus\mc E\). Then by \Cref{lem_surfaceconvex}\cref{it_intersectionconvex,it_perimeterconvex} and \Cref{cla_surfacedistanceboxingball} we have \begin{align*} \sm{\partial Q\setminus\mc E} &\leq \sum_{C\in\C}\sm{\partial Q\cap5C} \\ &\leq \sum_{C\in\C}\sm{\partial 5C} \\ &\lesssim\lambda^{-\f{d-1}d}\sum_{C\in\C}\sm{\mb E\cap C\cap\ti Q}\\ &\leq\lambda^{-\f{d-1}d}\sm{\mb E\cap\ti Q}. 
\end{align*} \end{proof} \begin{rem}\label{rem_varavball} For \(\lambda\leq\f12\), \Cref{lem_varav} also follows directly from the relative isoperimetric inequality \cref{eq_isoperimetric} for \(Q\). \Cref{lem_varav} also holds when \(Q\) is a ball. \end{rem} \begin{proof}[Proof of \Cref{eq_levelsetsdyadic}] For each \(x\in\{\M\ind E>\lambda\}\cap \Omega\) there is a dyadic cube \(Q\subset\Omega\) with \(x\in Q\) and \(\lm{E\cap Q}>\lambda\lm Q\). Since there are only countably many dyadic cubes we can enumerate them as \(Q_1,Q_2,\ldots\). For each \(n\) let \[ \Q_n = \{Q_i:i\leq n\tx{ and }\forall j=1,\ldots,n\tx{ with }j\neq i\tx{ we have }Q_i\not\subset Q_j\} . \] Then \(\bigcup\Q_n=Q_1\cup\ldots\cup Q_n\) and thus \[ \bigcup_nQ_n=\{\M\ind E>\lambda\} . \] Because \(E\) and \(\mi E\) agree up to measure zero and \(\mi E\subset\{\M\ind E>\lambda\}\) by \Cref{lem_finMf}, we have that \(\bigcup\Q_n\cup E\) converges to \(\{\M\ind E>\lambda\}\) in \(L^1_\loc(\Omega)\). Therefore, by \Cref{lem_l1approx,lem_boundaryofunion} we obtain \begin{align} \nonumber\sm{\mb{\{\M\ind E>\lambda\}}\cap \Omega}&\leq\limsup_{n\rightarrow\infty}\sm{\mb{(\bigcup\Q_n\cup E)}\cap \Omega}\\ \label{eq_summandtoomany}&\leq\limsup_{n\rightarrow\infty}\sm{(\mb{\bigcup\Q_n}\setminus\mc E)\cap \Omega}+\sm{\mb E\cap \Omega}. \end{align} Although it is not necessary here, in the line corresponding to \cref{eq_summandtoomany} in the proof for the uncentered Hardy-Littlewood maximal function the term \(\sm{\mb E\cap \Omega}\) can actually be eliminated thanks to \Cref{lem_finMf}; see \cref{eq_summandtoomanyagain} in \Cref{sec_uncentered} and the subsequent comment. Here this is not so clear because for the dyadic maximal function \Cref{lem_finMf} is weaker. In any case, it suffices to estimate the first term on the right-hand side of \cref{eq_summandtoomany}. 
We invoke \Cref{lem_varav} and use that the cubes in \(\Q_n\) are disjoint and obtain \begin{align*} \sm{(\mb{\bigcup\Q_n}\setminus\mc E)\cap \Omega}&\leq\sum_{Q\in\Q_n}\sm{(\mb Q\setminus\mc E)\cap \Omega}\\ &\lesssim\sum_{Q\in\Q_n}\lambda^{-\f{d-1}d}\sm{\mb E\cap Q}\\ &\leq\lambda^{-\f{d-1}d}\sm{\mb E\cap \Omega\cap\{\M\ind E>\lambda\}}. \end{align*} \end{proof} \Cref{lem_varav} readily implies \Cref{eq_levelsetsdyadic} because \(\{\M\ind E>\lambda\}\) is a disjoint union of such cubes. Two balls, however, can have nontrivial intersections, which is why the proof for the uncentered Hardy-Littlewood maximal operator is much more complicated than the proof for the dyadic maximal operator. \section{The uncentered maximal function}\label{sec_uncentered} In this section we prove \Cref{eq_levelsets}. The main step is \Cref{pro_levelsets_finite_g}. It is \Cref{lem_varav} for a set \(\B\) of finitely many balls \(B\) with \(\lm{B\cap E}>\lambda\lm B\) instead of one cube. \Cref{pro_levelsets_finite_g} comes with an additional but harmless factor \((1-\log\lambda)\). We will show in \Cref{sec_optimal} that this factor can be removed. \begin{lem}\label{cla_largeboundaryinball} Let \(K>0\), let \(C\) be a ball and let \(\B\) be a finite set of balls \(B\) with \(\diam(B)\geq K\diam(C)\). Then \[\sm{\mb{\bigcup\B}\cap C}\lesssim (K^{-d}+1)\sm{\partial C}.\] \end{lem} \begin{figure} \caption{The objects in \Cref{cla_largeboundaryinball}} \end{figure} The rate \(K^{-d}\) does not play a role in the application. \begin{proof} By translation and scaling it suffices to consider the case \(C=B(0,1)\). Let \(B(x,r)\) be a ball with \(|x|\geq4d+1\) whose boundary intersects \(B(0,1)\), which means \(|x|-1<r<|x|+1\) and in particular \(r>4d\). For any point \(y=(y_1,\ldots,y_d)\in\mathbb{R}^d\) denote \(\proj y=(y_1,\ldots,y_{d-1})\). Assume that \(x_d=\max\{|x_1|,\ldots,|x_d|\}\), so that \[ |\proj x|^2=|x|^2-x_d^2\leq\Bigl(1-\f1d\Bigr)|x|^2 . 
\] Then for every \(y\in B(0,1)\) we have \begin{align*} |\proj y-\proj x| &\leq |\proj x|+1 \leq \sqrt{1-\f1d}|x|+1 \leq \Bigl(1-\f1{2d}\Bigr)(r+1)+1 =r \Bigl(1+\f2r-\f1{2d}-\f1{2dr}\Bigr) \leq r , \end{align*} and \[ x_d-y_d \geq \sqrt{\f1d}|x| -1 \geq \f{r-(\sqrt d+1)}{\sqrt d} >0 . \] Therefore the function \[ \proj y\mapsto\varphi(\proj y) = x_d-\sqrt{r^2-|\proj{y-x}|^2} \] is well defined for \(y\in B(0,1)\); we have \[ B(x,r)\cap B(0,1) = \{y\in B(0,1):y_d>\varphi(\proj y)\} , \] and for \(y\in\partial B(x,r)\cap B(0,1)\) the gradient of \(\varphi\) in \(y_1,\ldots,y_{d-1}\) is bounded by \[ |\nabla\varphi(\proj y)| = \f{|\proj{x-y}|}{\sqrt{r^2-|\proj{x-y}|^2}} = \f{|\proj{x-y}|}{|x_d-y_d|} \leq \f{\sqrt dr}{r-(\sqrt d+1)} \leq \f{4d^{\f32}}{4d-(\sqrt d+1)} \leq 2\sqrt d . \] If all balls \(B\in\B\) have radius at least \(4d\), then the boundary of the union of all balls of the above form is a piece of the infimum of \(2\sqrt d\)-Lipschitz graphs, and thus itself a piece of a \(2\sqrt d\)-Lipschitz graph. We can conclude that \begin{align*} \smb{\partial \bigcup\bigl\{B(x,r)\in\B:x_d=\max\{|x_1|,\ldots,|x_d|\}\bigr\}\cap B(0,1) } &\leq \sqrt{4d+1}\sigma_{d-1} \\ &= \f{\sqrt{4d+1}\sigma_{d-1}}{d\sigma_d} \sm{\partial B(0,1)} . \end{align*} By rotation we obtain the same bound for the union of those balls \(B(x,r)\in\B\) with \(\pm x_i=\max\{|x_1|,\ldots,|x_d|\}\) for any \(i=1,\ldots,d\) and any sign. This finishes the proof for \(K\geq4d\). If \(K<4d\) then we cover \(B(0,1)\) by \(\lesssim\bigl(\f{4d}K\bigr)^d\) many balls \(C_1,C_2,\ldots\) so that \(\diam(B)\geq 4d\diam(C_i)\) for each \(i\) and each \(B\in\B\). Then \[ \sm{\mb{\bigcup\B}\cap B(0,1)} \leq \sum_i \sm{\mb{\bigcup\B}\cap C_i} \lesssim \sum_i \sm{\partial C_i} \lesssim \Bigl(\f{4d}K\Bigr)^d\sm{\partial B(0,1)}. \] \end{proof} \input{surface_boxing.tex} Now we extend \Cref{pro_levelsets_finite_g} to the whole set \(\{\M\ind E>\lambda\}\). 
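As a concrete illustration of such a superlevel set (this numerical check is our addition, not part of the argument, and all names in it are ours): for \(E=B(0,1)\subset\mathbb{R}^2\) and \(x=(R,0)\) with \(R>1\), the ball of radius \((R+1)/2\) centered at \(((R-1)/2,0)\) covers all of \(E\) and, after an arbitrarily small enlargement, contains \(x\), so \(\M\ind E(x)\geq(2/(R+1))^2\). Consequently \(\{\M\ind E>\lambda\}\) contains the disc \(B(0,2/\sqrt\lambda-1)\), whose perimeter is of order \(\lambda^{-1/2}\sm{\partial E}\), i.e.\ the rate \(\lambda^{-\f{d-1}d}\) for \(d=2\).

```python
import math

def frac_in_unit_disc(center, radius):
    """Exact value of |B(center, radius) ∩ B(0,1)| / |B(center, radius)|
    in the plane, via the standard circular-lens area formula."""
    d = math.hypot(center[0], center[1])
    r, R = 1.0, radius
    if d >= r + R:                 # discs are disjoint
        inter = 0.0
    elif d <= abs(R - r):          # one disc contains the other
        inter = math.pi * min(r, R) ** 2
    else:                          # genuine lens-shaped intersection
        a = r * r * math.acos((d * d + r * r - R * R) / (2 * d * r))
        b = R * R * math.acos((d * d + R * R - r * r) / (2 * d * R))
        c = 0.5 * math.sqrt((-d + r + R) * (d + r - R) * (d - r + R) * (d + r + R))
        inter = a + b - c
    return inter / (math.pi * R * R)

# The ball of radius (R+1)/2 centered at ((R-1)/2, 0) contains E = B(0,1),
# so the average of 1_E over it is exactly (2/(R+1))^2.
R = 5.0
frac = frac_in_unit_disc(((R - 1) / 2, 0.0), (R + 1) / 2)
assert abs(frac - (2 / (R + 1)) ** 2) < 1e-12

# Hence {M 1_E > lam} contains B(0, 2/sqrt(lam) - 1): the ratio of its
# perimeter to sm(∂E) = 2π grows like lam^{-1/2}, i.e. lam^{-(d-1)/d}, d = 2.
for lam in [0.1, 0.01, 0.001]:
    print(lam, 2 / math.sqrt(lam) - 1)
```

The same lower bound in \(\mathbb{R}^d\) yields a perimeter of order \(\lambda^{-\f{d-1}d}\sm{\partial E}\), which is the content of the optimality remark in the next section.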
\begin{proof}[Proof of \Cref{eq_levelsets}] Note that \[\{\M\ind E>\lambda\}=\bigcup\{B\subset \Omega:\lm{B\cap E}>\lambda\lm B\}.\] First we pass to a countable set of balls. By the Lindel\"of property, for example Proposition~1.5 in \cite{MR2867756}, there is a sequence of balls with \[\{\M\ind E>\lambda\}= B_1\cup B_2\cup\ldots\] such that for each \(i\) we have \(\lm{E\cap B_i}>\lambda\lm{B_i}\). Denote \(\B_n=\{B_1,\ldots,B_n\}\). Then \(\bigcup\B_n\) converges to \(\{\M\ind E>\lambda\}\) in \(L^1_\loc(\Omega)\). Furthermore, by \Cref{lem_finMf} we have \[ \bigcup\B_n \subset \bigcup\B_n\cup\mi E \subset \{\M\ind E>\lambda\} , \] which means that \(\bigcup\B_n\cup E\) also converges to \(\{\M\ind E>\lambda\}\) in \(L^1_\loc(\Omega)\). Since \(E\) and \(\mi E\) agree up to a set of measure zero we have \(\mc{(\mi E)}=\mc E\) and \(\mb{(\mi E)}=\mb E\). We apply the approximation using \Cref{lem_l1approx} and then divide the boundary using \Cref{lem_boundaryofunion} and obtain \begin{align} \nonumber\sm{\mb{\{\M\ind E>\lambda\}}\cap \Omega}&\leq\limsup_{n\rightarrow\infty}\sm{\mb{(\bigcup\B_n\cup\mi E)}\cap \Omega}\\ \label{eq_summandtoomanyagain}&\leq\limsup_{n\rightarrow\infty}\sm{(\mb{\bigcup\B_n}\setminus\mc E)\cap \Omega}+\smb{(\mb E\setminus\mib{\bigcup\B_n})\cap \Omega}. \end{align} By \Cref{lem_finMf} the second summand is bounded by \(\sm{\mb E\cap \Omega\cap \{\M\ind E>\lambda\}}\). In fact, if \(\sm{\mb E\cap \Omega\cap \{\M\ind E>\lambda\}}\) is finite then the second summand in \cref{eq_summandtoomanyagain} even goes to \(0\) for \(n\rightarrow\infty\). This is due to \Cref{lem_finMf} for the uncentered maximal function, because \[\mib{\bigcup\B_n}\supset\bigcup\B_n,\] and the right-hand side is an increasing sequence in \(n\) that exhausts \(\{\M\ind E>\lambda\}\). 
In any case, it remains to estimate the first summand in \cref{eq_summandtoomanyagain}, which we do using \Cref{pro_levelsets_finite_g}: \begin{align*} \sm{\mb{\bigcup\B_n}\setminus\mc E}&\lesssim\lambda^{-\f{d-1}d}(1-\log\lambda)\sm{\mb E\cap\bigcup\B_n}\\ &\leq\lambda^{-\f{d-1}d}(1-\log\lambda)\sm{\mb E\cap\{\M\ind E>\lambda\}}. \end{align*} \end{proof} \section{The optimal rate in \texorpdfstring{\(\lambda\)}{lambda}}\label{sec_optimal} In this section we prove the following improvement of \Cref{eq_levelsets}. \begin{pro}\label{eq_levelsets_optimal} Let \(\M\) be the local uncentered maximal operator. Let \(E\subset\mathbb{R}^d\) be a set with locally finite perimeter and let \(\lambda\in(0,1)\). Then \[\sm{\mb{\{\M\ind E> \lambda\}}\cap\Omega}\lesssim\lambda^{-\f{d-1}d}\sm{\mb E\cap\{\M\ind E>\lambda\}}.\] \end{pro} Perhaps more important than the statement of \Cref{eq_levelsets_optimal} is the proof strategy. It may be helpful when attempting to generalize \Cref{theo_goal} to \(\var\M f\lesssim\var f\) for general functions \(f\) with bounded variation. \begin{rem} Taking \(\Omega=\mathbb{R}^d\) and \(E=B(0,1)\) shows that the rate \(\lambda^{-\f{d-1}d}\) in \Cref{eq_levelsets_optimal} is optimal. \end{rem} In order to prove \Cref{eq_levelsets_optimal} it suffices to prove the following improvement of \Cref{pro_levelsets_finite_g}. \begin{pro}\label{pro_levelsets_finite_l_local} Let \(\lambda\in[0,1/2)\), let \(E\subset\mathbb{R}^d\) be a set of locally finite perimeter and let \(\B\) be a finite set of balls such that for each \(B\in\B\) we have \(\lambda\lm B<\lm{E\cap B}\leq\f12\lm B\). Then \[\sm{\mb{\bigcup\B} }\lesssim\lambda^{-\f{d-1}d}\sm{\mb E \cap\bigcup\B }.\] \end{pro} \begin{proof}[Proof of \Cref{eq_levelsets_optimal}] Let \(\B\) be a finite set of balls \(B\) with \(\lm{B\cap E}\geq\lambda\lm B\). 
Then \begin{align*} \sm{\partial\bigcup\B\setminus\mc E} &\leq\sm{\partial\bigcup\{B\in\B:\lm{B\cap E}>\lm B/2\}\setminus\mc E}\\ &\quad+\sm{\partial\bigcup\{B\in\B:\lambda\lm B<\lm{B\cap E}\leq\lm B/2\}\setminus\mc E} . \end{align*} By \Cref{pro_levelsets_finite_g} the first summand in the previous display is bounded by a dimensional constant times \(\sm{\mb E\cap\bigcup\B}\) and by \Cref{pro_levelsets_finite_l_local} the second summand is bounded by a dimensional constant times \(\lambda^{-\f{d-1}d}\sm{\mb E\cap\bigcup\B}\). We conclude \[\sm{\partial\bigcup\B\setminus\mc E}\lesssim\lambda^{-\f{d-1}d}\sm{\mb E\cap\bigcup\B},\] which is \Cref{pro_levelsets_finite_g} without the factor \(1-\log\lambda\). Now we can repeat the proof of \Cref{eq_levelsets} verbatim without the factor \(1-\log\lambda\). \end{proof} There is a weaker version of \Cref{pro_levelsets_finite_l_local} which has a simpler proof, but already suffices to prove \Cref{eq_levelsets_optimal} for \(\Omega=\mathbb{R}^d\). \begin{pro}\label{pro_levelsets_finite_l} There is an \(\varepsilon>0\) depending only on the dimension such that for all \(\lambda\in[0,\varepsilon)\) the following holds. Let \(E\subset\mathbb{R}^d\) be a set of locally finite perimeter and let \(\B\) be a finite set of balls such that for each \(B\in\B\) we have \(\lambda\lm B<\lm{E\cap B}\leq\varepsilon\lm B\). Then there is a finite superset \(\tilde\B\) of \(\B\) consisting of balls \(B\) with \(\lm{E\cap B}>\lambda\lm B\) that satisfies \[\sm{\mb{\bigcup\tilde\B} }\lesssim\lambda^{-\f{d-1}d}\sm{\mb E \cap\bigcup\B }.\] \end{pro} \begin{proof}[Proof of \Cref{eq_levelsets_optimal} for \(\Omega=\mathbb{R}^d\)] Take \(\varepsilon>0\) from \Cref{pro_levelsets_finite_l}. For \(\lambda\geq\varepsilon\), \Cref{eq_levelsets_optimal} already follows from \Cref{eq_levelsets}. It suffices to consider the case that there is an \(x_0\in\mathbb{R}^d\) with \(\lambda<\M\ind E(x_0)\leq\varepsilon\). Let \(x\in\mathbb{R}^d\) with \(\M\ind E(x)>\lambda\). 
Then there is a ball \(C\ni x\) with \(\lm{E\cap C}>\lambda\lm C\), while \(\lm{E\cap B(x_0,|x-x_0|+1)}\leq\varepsilon\lm{B(x_0,|x-x_0|+1)}\). By continuously transforming \(C\) into \(B(x_0,|x-x_0|+1)\) we can conclude that \(\{\M\ind E>\lambda\}\) is a union of balls \(B\) with \(\lambda\lm B<\lm{E\cap B}\leq\varepsilon\lm B\). Thus by the Lindel\"of property there is a sequence of balls \((B_n)_n\) with \(\lambda\lm{B_n}<\lm{E\cap B_n}\leq\varepsilon\lm{B_n}\) such that \(\{\M\ind E>\lambda\}=B_1\cup B_2\cup\ldots\). Let \(\tilde\B_n\) be the finite superset of \(\B_n=\{B_1,\ldots,B_n\}\) from \Cref{pro_levelsets_finite_l}. Then \[\bigcup\B_n\subset\bigcup\tilde\B_n\subset\{\M\ind E>\lambda\},\] which means that \(\bigcup\tilde\B_n\) converges to \(\{\M\ind E>\lambda\}\) in \(L^1_\loc(\Omega)\). Thus we get as in the proof of \Cref{eq_levelsets} that \[ \sm{\mb{\{\M\ind E>\lambda\}}}\leq\limsup_{n\rightarrow\infty}\sm{\mb{\bigcup\tilde\B_n}}. \] By \Cref{pro_levelsets_finite_l} we have \begin{align*} \sm{\mb{\bigcup\tilde\B_n}} &\lesssim\lambda^{-\f{d-1}d}\sm{\mb E\cap\bigcup\B_n}\\ &\leq\lambda^{-\f{d-1}d}\sm{\mb E\cap\{\M\ind E>\lambda\}}. \end{align*} \end{proof} \subsection{The global case \texorpdfstring{\(\Omega=\mathbb{R}^d\)}{Omega=Rd}} In this subsection we present a proof of \Cref{pro_levelsets_finite_l}. It already contains some of the ideas for the general local case \Cref{pro_levelsets_finite_l_local}. \input{small_average.tex} \subsection{The general local case \texorpdfstring{\(\Omega\subset\mathbb{R}^d\)}{Omega is a subset of Rd}} In this subsection we present a proof of \Cref{pro_levelsets_finite_l_local}. It requires a few more steps than the proof of \Cref{pro_levelsets_finite_l}. \input{small_average_local.tex} \nocite{*} \end{document}
\begin{document} \title{Implicative algebras II:\\ completeness w.r.t. Set-based triposes} \author{Alexandre Miquel} \address{Instituto de Matem\'atica y Estad\'istica Prof. Ing. Rafael Laguardia\\ Facultad de Ingenier\'ia -- Universidad de la Rep\'ublica\\ Julio Herrera y Reissig 565 -- Montevideo C.P. 11300 -- URUGUAY} \date{November, 2020} \begin{abstract} We prove that all $\mathbf{Set}$-based triposes are implicative triposes. \end{abstract} \maketitle \section{Introduction} In~\cite{Miq20}, we introduced the notion of \emph{implicative algebra}, a simple algebraic structure that is intended to factorize the model constructions underlying forcing and realizability, both in intuitionistic and classical logic. We showed that this algebraic structure induces a large class of ($\mathbf{Set}$-based) triposes---the \emph{implicative triposes}---, that encompasses all (intuitionistic and classical) forcing triposes, all classical realizability triposes (both in the sense of Streicher~\cite{Str13} and Krivine~\cite{Miq20}) as well as all the intuitionistic realizability triposes induced by partial combinatory algebras (in the style of Hyland, Johnstone and Pitts~\cite{HJP80}). The aim of this paper is to prove that the class of implicative triposes actually encompasses all $\mathbf{Set}$-based triposes, in the sense that: \begin{theorem}\label{th:Thm} Each $\mathbf{Set}$-based tripos is (isomorphic to) an implicative tripos. \end{theorem} For that, we first recall some notions about triposes and implicative algebras. \subsection{$\mathbf{Set}$-based triposes} \label{ss:SetBasedTriposes} In what follows, we write: \begin{itemize} \item $\mathbf{Set}$ the category of sets equipped with all functions; \item $\mathsf{P}os$ the category of posets equipped with monotonic functions; \item $\mathbf{HA}$ the category of Heyting algebras equipped with the corresponding morphisms. 
\end{itemize} In the category~$\mathbf{Set}$, we write: \begin{itemize} \item $1$ the terminal object (i.e. a fixed singleton); \item $1_X:X\to 1$ the unique map from a given set~$X$ to~$1$; \item $X\times Y$ the Cartesian product of two sets~$X$ and $Y$, and \item $\pi_{X,Y}:X\times Y\to X$ and $\pi'_{X,Y}:X\times Y\to Y$ the associated projections. \item Finally, given two functions $f:Z\to X$ and $g:Z\to Y$, we write $\langle f,g\rangle:Z\to X\times Y$ the unique function such that $\pi_{X,Y}\circ\langle f,g\rangle=f$ and $\pi'_{X,Y}\circ\langle f,g\rangle=g$. \end{itemize} \begin{definition}[$\mathbf{Set}$-based triposes]\label{d:Tripos} A \emph{$\mathbf{Set}$-based tripos} is a functor $\mathsf{P}:\mathbf{Set}^{\mathrm{op}}\to\mathbf{HA}$ that fulfills the following three conditions: \begin{enumerate} \item For each map $f:X\to Y$ ($X,Y\in\mathbf{Set}$), the corresponding map $\mathsf{P}{f}:\mathsf{P}{Y}\to\mathsf{P}{X}$ has left and right adjoints in~$\mathsf{P}os$, that are monotonic maps $\exists{f},\forall{f}:\mathsf{P}{X}\to\mathsf{P}{Y}$ such that $$\begin{array}{rcl} \exists{f}(p)\le q&\Leftrightarrow&p\le\mathsf{P}{f}(q) \\[3pt] q\le\forall{f}(p)&\Leftrightarrow&\mathsf{P}{f}(q)\le p \\ \end{array}\eqno(\text{for all}~p\in\mathsf{P}{X},~q\in\mathsf{P}{Y})$$ \item Beck-Chevalley condition. 
Each pullback square in $\mathbf{Set}$ (on the left-hand side) induces the following two commutative diagrams in~$\mathsf{P}os$ (on the right-hand side): $$\begin{array}{@{}c@{}} \xymatrix{ X\pullback{6pt}\ar[r]^{f_1}\ar[d]_{f_2}& X_1\ar[d]^{g_1} \\ X_2\ar[r]_{g_2} & Y \\ }\\ \end{array}\qquad{\Rightarrow}\qquad \begin{array}{c@{\qquad}c} \xymatrix{ \mathsf{P}{X}\ar[r]^{\exists{f_1}}& \mathsf{P}{X_1} \\ \mathsf{P}{X_2}\ar[u]^{\mathsf{P}{f_2}}\ar[r]_{\exists{g_2}} & \mathsf{P}{Y}\ar[u]_{\mathsf{P}{g_1}} \\ } & \xymatrix{ \mathsf{P}{X}\ar[r]^{\forall{f_1}}& \mathsf{P}{X_1} \\ \mathsf{P}{X_2}\ar[u]^{\mathsf{P}{f_2}}\ar[r]_{\forall{g_2}} & \mathsf{P}{Y}\ar[u]_{\mathsf{P}{g_1}} \\ }\\ \end{array}$$ That is:\quad $\exists{f_1}\circ\mathsf{P}{f_2}~=~\mathsf{P}{g_1}\circ\exists{g_2}$\quad and\quad $\forall{f_1}\circ\mathsf{P}{f_2}~=~\mathsf{P}{g_1}\circ\forall{g_2}$. \item The functor $\mathsf{P}:\mathbf{Set}^{\mathrm{op}}\to\mathbf{HA}$ has a \emph{generic predicate}, that is: a predicate $\textit{tr}_{\Sigma}\in\mathsf{P}\Sigma$ (for some set~$\Sigma$) such that for all sets~$X$, the following map is surjective: $$\begin{array}{r@{~{}~}c@{~{}~}l} \Sigma^X&\to&\mathsf{P}{X} \\ \sigma&\mapsto&\mathsf{P}\sigma(\textit{tr}_{\Sigma}) \\ \end{array}$$ \end{enumerate} \end{definition} \begin{remarks} (1)~Given a map $f:X\to Y$, the left and right adjoints $\exists{f},\forall{f}:\mathsf{P}{X}\to\mathsf{P}{Y}$ of the substitution map $\mathsf{P}{f}:\mathsf{P}{Y}\to\mathsf{P}{X}$ are always monotonic functions (due to the adjunction), but in general they are not morphisms of Heyting algebras. 
Moreover, both correspondences $f\mapsto\exists{f}$ and $f\mapsto\forall{f}$ are functorial, in the sense that $$\begin{array}{c@{\qquad\qquad}c} \exists(g\circ f)~=~\exists{g}\circ\exists{f}& \exists\mathrm{id}_X~=~\mathrm{id}_{\mathsf{P}{X}}\\ \forall(g\circ f)~=~\forall{g}\circ\forall{f}& \forall\mathrm{id}_X~=~\mathrm{id}_{\mathsf{P}{X}}\\ \end{array}$$ for all sets~$X$, $Y$, $Z$ and for all maps $f:X\to Y$ and $g:Y\to Z$. Hence we can see~$\exists$ and $\forall$ as covariant functors from~$\mathbf{Set}$ to~$\mathsf{P}os$, whose action on sets is given by $\exists{X}=\forall{X}=\mathsf{P}{X}$.\par (2)~When defining the notion of tripos, some authors~\cite{Pit02} require that the Beck-Chevalley condition hold only for the pullback squares of the form $$\begin{array}{@{}c@{}} \xymatrix@C=36pt{ Z\times X\pullback{6pt} \ar[r]^{\pi_{Z,X}}\ar[d]_{f\times\mathrm{id}_X}&Z\ar[d]^{f} \\ Z'\times X\ar[r]_{\pi_{Z',X}}&Z'\\ } \end{array}\eqno(\text{in}~\mathbf{Set})$$ Here, we follow~\cite{HJP80,Miq20} by requiring that the Beck-Chevalley condition hold more generally for all pullback squares in~$\mathbf{Set}$.\par (3)~The meaning of the generic predicate $\textit{tr}_{\Sigma}\in\mathsf{P}\Sigma$ will be explained in Section~\ref{ss:GenPred}. \end{remarks} Let us also recall that: \begin{definition}[Isomorphism of triposes] Two triposes $\mathsf{P},\mathsf{P}':\mathbf{Set}^{\mathrm{op}}\to\mathbf{HA}$ are \emph{isomorphic} when there is a natural isomorphism $\phi:\mathsf{P}\Rightarrow\mathsf{P}'$. 
\end{definition} \begin{remark} By a natural isomorphism $\phi:\mathsf{P}\Rightarrow\mathsf{P}'$, we mean any family of isomorphisms $\phi_X:\mathsf{P}{X}\to\mathsf{P}'{X}$ (indexed by $X\in\mathbf{Set}$) such that for all maps $f:X\to Y$ ($X,Y\in\mathbf{Set}$), the following diagram commutes: $$\xymatrix{ X\ar[d]_f & \mathsf{P}{X}\ar[r]^{\phi_X}_{\sim} & \mathsf{P}'{X} \\ Y & \mathsf{P}{Y}\ar[u]^{\mathsf{P}{f}}\ar[r]_{\phi_Y}^{\sim} & \mathsf{P}'{Y}\ar[u]_{\mathsf{P}'{f}} \\ }$$ Note that here, the notion of isomorphism can be taken indifferently in the sense of $\mathbf{HA}$ or $\mathsf{P}os$, since a map $\phi_X:\mathsf{P}{X}\to\mathsf{P}'{X}$ is an isomorphism of Heyting algebras if and only if it is an isomorphism between the underlying posets. \end{remark} To conclude this presentation of triposes, we recall a few facts about left and right adjoints: \begin{lemma} For all maps $f:X\to Y$ and for all predicates $p,p'\in\mathsf{P}{X}$, we have: $$\begin{array}{l@{\qquad\qquad}l} \exists{f}(p\lor p')~=~\exists{f}(p)\lor\exists{f}(p')& \exists{f}(\bot_X)~=~\bot_Y \\[3pt] \forall{f}(p\land p')~=~\forall{f}(p)\land\forall{f}(p')& \forall{f}(\top_X)~=~\top_Y \\ \end{array}$$ \end{lemma} \begin{proof} Let us treat the case of $\exists{f}$. For all $q\in\mathsf{P}{Y}$, we have $$\begin{array}{r@{\quad}c@{\quad}l} \exists{f}(p\lor p')\le q &\text{iff}& p\lor p'\le\mathsf{P}{f}(q) \\ &\text{iff}& p\le\mathsf{P}{f}(q)~{}~\text{and}~{}~p'\le\mathsf{P}{f}(q)\\ &\text{iff}& \exists{f}(p)\le q~{}~\text{and}~{}~ \exists{f}(p')\le q\\ &\text{iff}& \exists{f}(p)\lor\exists{f}(p')\le q\\ \end{array}$$ hence $\exists{f}(p\lor p')=\exists{f}(p)\lor\exists{f}(p')$. Moreover, we have $\bot_X\le\mathsf{P}{f}(\bot_Y)$, hence $\exists{f}(\bot_X)\le\bot_Y$, and thus $\exists{f}(\bot_X)=\bot_Y$. The proofs of the corresponding properties for $\forall{f}$ proceed dually. 
\end{proof} \begin{lemma}\label{l:SimplAdj} Given a map $f:X\to Y$: \begin{enumerate} \item If $f$ has an inverse (i.e.\ $f$ is bijective), then $\exists{f}$ and $\forall{f}$ are the inverse of $\mathsf{P}{f}$: $$\exists{f}~=~\forall{f}~=~\mathsf{P}{f^{-1}}~=~(\mathsf{P}{f})^{-1}\,.$$ \item If $f$ has a right inverse, then $\exists{f}$ and~$\forall{f}$ are left inverses of $\mathsf{P}{f}$: $$\exists{f}\circ\mathsf{P}{f}~=~\forall{f}\circ\mathsf{P}{f}~=~\mathrm{id}_{\mathsf{P}{Y}}\,.$$ \item If $f$ has a left inverse, then $\exists{f}$ and~$\forall{f}$ are right inverses of $\mathsf{P}{f}$: $$\mathsf{P}{f}\circ\exists{f}~=~\mathsf{P}{f}\circ\forall{f}~=~\mathrm{id}_{\mathsf{P}{X}}\,.$$ \end{enumerate} \end{lemma} \begin{proof} (1)~If $f:X\to Y$ is bijective, then for all $p\in\mathsf{P}{X}$ and $q\in\mathsf{P}{Y}$, we have \begin{itemize} \item $\exists{f}(p)\le q ~{}~\text{iff}~{}~p\le\mathsf{P}{f}(q) ~{}~\text{iff}~{}~(\mathsf{P}{f})^{-1}(p)\le q$,\quad hence:\quad $\exists{f}(p)=(\mathsf{P}{f})^{-1}(p)=\mathsf{P}{f^{-1}}(p).$ \item $q\le\forall{f}(p) ~{}~\text{iff}~{}~\mathsf{P}{f}(q)\le p ~{}~\text{iff}~{}~q\le(\mathsf{P}{f})^{-1}(p)$,\quad hence:\quad $\forall{f}(p)=(\mathsf{P}{f})^{-1}(p)=\mathsf{P}{f^{-1}}(p).$ \end{itemize} (2)~Let $g:Y\to X$ such that $f\circ g=\mathrm{id}_Y$. By functoriality, we get $\mathsf{P}{g}\circ\mathsf{P}{f}=\mathrm{id}_{\mathsf{P}{Y}}$, hence the map $\mathsf{P}{f}:\mathsf{P}{Y}\to\mathsf{P}{X}$ is an embedding of posets. For all $q,q'\in\mathsf{P}{Y}$, we thus have: \begin{itemize} \item $\exists{f}(\mathsf{P}{f}(q))\le q' ~{}~\text{iff}~{}~\mathsf{P}{f}(q)\le\mathsf{P}{f}(q') ~{}~\text{iff}~{}~q\le q'$,\quad hence\quad $\exists{f}\circ\mathsf{P}{f}=\mathrm{id}_{\mathsf{P}{Y}}$. \item $q'\le\forall{f}(\mathsf{P}{f}(q)) ~{}~\text{iff}~{}~\mathsf{P}{f}(q')\le\mathsf{P}{f}(q) ~{}~\text{iff}~{}~q'\le q$,\quad hence\quad $\forall{f}\circ\mathsf{P}{f}=\mathrm{id}_{\mathsf{P}{Y}}$. 
\end{itemize} (3)~Let $g:Y\to X$ such that $g\circ f=\mathrm{id}_X$. By functoriality, we get $\forall{g}\circ\forall{f}=\exists{g}\circ\exists{f}=\mathrm{id}_{\mathsf{P}{X}}$, hence $\exists{f},\forall{f}:\mathsf{P}{X}\to\mathsf{P}{Y}$ are embeddings of posets. For all $p,p'\in\mathsf{P}{X}$, we thus have: \begin{itemize} \item $p'\le\mathsf{P}{f}(\exists{f}(p)) ~{}~\text{iff}~{}~\exists{f}(p')\le\exists{f}(p) ~{}~\text{iff}~{}~p'\le p$,\quad hence\quad $\mathsf{P}{f}\circ\exists{f}=\mathrm{id}_{\mathsf{P}{X}}$. \item $\mathsf{P}{f}(\forall{f}(p))\le p' ~{}~\text{iff}~{}~\forall{f}(p)\le\forall{f}(p') ~{}~\text{iff}~{}~p\le p'$,\quad hence\quad $\mathsf{P}{f}\circ\forall{f}=\mathrm{id}_{\mathsf{P}{X}}$.\qedhere \end{itemize} \end{proof} \subsection{Implicative algebras}\label{ss:ImpAlg} Recall that: \begin{itemize} \item An \emph{implicative structure} is a complete (meet-semi)lattice $(\mathscr{A},{\preccurlyeq})$ equipped with a binary operation $({\to}):\mathscr{A}^2\to\mathscr{A}$ such that: \begin{enumerate} \item[(1)] If $a'\preccurlyeq a$ and $b\preccurlyeq b'$,\ \ then\ \ $(a\to b)\preccurlyeq(a'\to b')$. \item[(2)] For all $a\in\mathscr{A}$ and $B\subseteq\mathscr{A}$, we have:\quad $\displaystyle a\to\bigcurlywedge_{b\in B}\!\!b~=~\bigcurlywedge_{b\in B}\!(a\to b)$. \end{enumerate} \item A \emph{separator} of an implicative structure $(\mathscr{A},{\preccurlyeq},{\to})$ is a subset $S\subseteq\mathscr{A}$ such that: \begin{enumerate} \item[(1)] If $a\in S$ and $a\preccurlyeq a'$, then $a'\in S$. \item[(2)] $\bigcurlywedge_{a,b\in\mathscr{A}}(a\to b\to a) ~{}~({=}~\mathbf{K}^{\mathscr{A}})~\in~S$\quad and\\ $\bigcurlywedge_{a,b,c\in\mathscr{A}}((a\to b\to c)\to(a\to b)\to a\to c) ~{}~({=}~\mathbf{S}^{\mathscr{A}})~\in~S$. \item[(3)] If $(a\to b)\in S$ and $a\in S$, then $b\in S$.
\end{enumerate} Moreover, we say that a separator $S\subseteq\mathscr{A}$ is \emph{classical} when \begin{enumerate} \item[(4)] $\bigcurlywedge_{a,b\in\mathscr{A}}(((a\to b)\to a)\to a) ~{}~({=}~\mathbf{cc}^{\mathscr{A}})~\in~S$. \end{enumerate} \medbreak \item An \emph{implicative algebra} is a quadruple $(\mathscr{A},{\preccurlyeq},{\to},S)$ where $(\mathscr{A},{\preccurlyeq},{\to})$ is an implicative structure and where $S\subseteq\mathscr{A}$ is a separator. An implicative algebra is \emph{classical} when the underlying separator $S\subseteq\mathscr{A}$ is classical. \end{itemize} \bigbreak In~\cite[\S4]{Miq20}, we have seen that each implicative algebra $(\mathscr{A},{\preccurlyeq},{\to},S)$ induces a $\mathbf{Set}$-based tripos $\mathsf{P}:\mathbf{Set}^{\mathrm{op}}\to\mathbf{HA}$ that is defined as follows: \begin{itemize} \item For each set $X$, the corresponding Heyting algebra $\mathsf{P}{X}$ is defined as the poset reflection of the preordered set $(\mathscr{A}^X,\vdash_{S[X]})$ whose preorder $\vdash_{S[X]}$ is given by $$a\vdash_{S[X]}b\quad\Leftrightarrow\quad \bigcurlywedge_{x\in X}(a_x\to b_x)~\in~S \eqno(\text{for all}~a,b\in\mathscr{A}^X)$$ The quotient $\mathscr{A}^X/{\mathrel{{\dashv}{\vdash}}_{S[X]}}~({=}~\mathsf{P}{X})$ is also written $\mathscr{A}^X/S[X]$. \item For each map $f:X\to Y$, the corresponding substitution map $\mathsf{P}{f}:\mathsf{P}{Y}\to\mathsf{P}{X}$ is the (unique) morphism of Heyting algebras that factors the map $\mathscr{A}^f:\mathscr{A}^Y\to\mathscr{A}^X$ through the quotients $\mathsf{P}{Y}=\mathscr{A}^Y/S[Y]$ and $\mathsf{P}{X}=\mathscr{A}^X/S[X]$. \end{itemize} The aim of this paper is thus to show that all $\mathbf{Set}$-based triposes are of this form. \section{Anatomy of a $\mathbf{Set}$-based tripos} The results presented in this section are essentially taken from~\cite{HJP80,Pit81}. 
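Before dissecting the tripos, it may help to see the construction recalled above at work on the smallest non-trivial case. The following Python sketch is an illustration only: the four-element Boolean frame, the Heyting implication, and the singleton separator $S=\{\top\}$ are our choices, not part of the formal development. It checks the separator axioms $\mathbf{K}^{\mathscr{A}},\mathbf{S}^{\mathscr{A}},\mathbf{cc}^{\mathscr{A}}\in S$ and implements the entailment $\vdash_{S[X]}$ of the induced tripos.

```python
from itertools import chain, combinations

# Finite complete Heyting algebra: the powerset of {0, 1},
# ordered by inclusion. It is an implicative structure for the
# Heyting implication, with arbitrary meets given by intersections.
BASE = frozenset({0, 1})
A = [frozenset(s) for s in chain.from_iterable(
    combinations(sorted(BASE), r) for r in range(len(BASE) + 1))]
TOP = BASE

def imp(a, b):
    # Heyting implication: the largest c with c /\ a <= b.
    # In a powerset algebra this is (complement of a) \/ b.
    return (TOP - a) | b

def Meet(elems):
    # Arbitrary meet; the empty meet is the top element.
    m = TOP
    for e in elems:
        m = m & e
    return m

# Axiom (2) of separators: the combinators K and S belong to S = {TOP}.
K = Meet(imp(a, imp(b, a)) for a in A for b in A)
S_comb = Meet(imp(imp(a, imp(b, c)), imp(imp(a, b), imp(a, c)))
              for a in A for b in A for c in A)
# Axiom (4): Peirce's law cc, making the separator classical.
cc = Meet(imp(imp(imp(a, b), a), a) for a in A for b in A)

# Entailment of the induced tripos on families sigma, tau : X -> A
# (X is the key set of the dictionaries).
def entails(sigma, tau):
    return Meet(imp(sigma[x], tau[x]) for x in sigma) == TOP
```

In this Boolean frame one finds `K == TOP`, `S_comb == TOP` and `cc == TOP`, and `entails` is reflexive and transitive, i.e.\ a preorder on $\mathscr{A}^X$, as required for the quotient $\mathscr{A}^X/S[X]$ to make sense.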
\subsection{The generic predicate} \label{ss:GenPred} From now on, we work with a fixed tripos $\mathsf{P}:\mathbf{Set}^{\mathrm{op}}\to\mathbf{HA}$. From the definition, $\mathsf{P}$ has a \emph{generic predicate}, that is: a predicate $\textit{tr}_{\Sigma}\in\mathsf{P}\Sigma$ (for some set $\Sigma$) such that for each set~$X$, the corresponding `decoding map' is surjective: $$\begin{array}{r@{~{}~}c@{~{}~}l} \sem{\_}_X~:~\Sigma^X&\to&\mathsf{P}{X} \\ \sigma&\mapsto&\mathsf{P}\sigma(\textit{tr}_{\Sigma}) \\ \end{array}$$ Intuitively, $\Sigma$ can be thought of as the set of (codes of) propositions, whereas $\Sigma^X$ can be thought of as the set of \emph{propositional functions over~$X$}. The above condition thus expresses that each predicate $p\in\mathsf{P}{X}$ is represented by at least one propositional function $\sigma\in\Sigma^X$ such that $\sem{\sigma}_X=p$, which we shall call a \emph{code} for the predicate~$p$. \begin{notation} Given a code $\sigma\in\Sigma^X$, the predicate $\sem{\sigma}_X\in\mathsf{P}{X}$ will often be written $\sem{\sigma_x}_{x\in{X}}$. In particular, given an individual code $\xi\in\Sigma$, we write $(\xi)_{\_\in1}\in\Sigma^1$ for the corresponding constant family indexed by the singleton~$1$, and $\sem{\xi}_{\_\in1}\in\mathsf{P}{1}$ for the associated predicate.
\end{notation} \begin{fact}[Naturality of $\sem{\_}_X$] The decoding map $\sem{\_}_X:\Sigma^X\to\mathsf{P}{X}$ is natural in~$X$, in the sense that for each map $f:X\to Y$, the following diagram commutes: $$\begin{array}{@{}c@{}} \xymatrix{ \Sigma^X\ar[r]^{\sem{\_}_X}&\mathsf{P}{X}\\ \Sigma^Y\ar[r]_{\sem{\_}_Y}\ar[u]^{\_\circ f}& \mathsf{P}{Y}\ar[u]_{\mathsf{P}{f}} \\ }\\ \end{array}\qquad \sem{\sigma\circ f}_X~=~\mathsf{P}{f}(\sem{\sigma}_Y) \eqno(\sigma\in\Sigma^Y)$$ \end{fact} \begin{proof} For all $\sigma\in\Sigma^Y$, we have\ \ $\sem{\sigma\circ f}_X=\mathsf{P}(\sigma\circ f)(\textit{tr}_{\Sigma})= \mathsf{P}{f}(\mathsf{P}\sigma(\textit{tr}_{\Sigma}))=\mathsf{P}{f}(\sem{\sigma}_Y)$. \end{proof} \begin{remarks}[Non-uniqueness of generic predicates] It is important to observe that in a $\mathbf{Set}$-based tripos~$\mathsf{P}$, the generic predicate is never unique. \begin{itemize} \item Indeed, given a generic predicate $\textit{tr}_{\Sigma}\in\mathsf{P}\Sigma$ and a surjection $h:\Sigma'\to\Sigma$, we can always construct another generic predicate $\textit{tr}_{\Sigma'}\in\mathsf{P}\Sigma'$ by letting $\textit{tr}_{\Sigma'}=\mathsf{P}{h}(\textit{tr}_{\Sigma})$\footnote{To prove that $\textit{tr}_{\Sigma'}\in\mathsf{P}\Sigma'$ is another generic predicate, we actually need a right inverse of $h:\Sigma'\to\Sigma$, which exists by~(AC). Without (AC), the same argument works by replacing `surjective' with `having a right inverse'.}. \item More generally, if $\textit{tr}_{\Sigma}\in\mathsf{P}\Sigma$ and $\textit{tr}_{\Sigma'}\in\mathsf{P}\Sigma'$ are two generic predicates of the same tripos~$\mathsf{P}$, then there always exist two conversion maps $h:\Sigma'\to\Sigma$ and $h':\Sigma\to\Sigma'$ such that $\textit{tr}_{\Sigma'}=\mathsf{P}{h}(\textit{tr}_{\Sigma})$ and $\textit{tr}_{\Sigma}=\mathsf{P}{h'}(\textit{tr}_{\Sigma'})$. \end{itemize} In what follows, we will work with a fixed generic predicate $\textit{tr}_{\Sigma}\in\mathsf{P}\Sigma$. \end{remarks} \subsection{Defining connectives on~$\Sigma$} \label{ss:DefConnSigma} We want to show that the operations $\land$, $\lor$ and $\to$ on each Heyting algebra $\mathsf{P}{X}$ ($X\in\mathbf{Set}$) can be derived from analogous operations on the generic set $\Sigma$ of propositions. For that, we pick codes $({\mathbin{\dot{{\land}}}}),({\mathbin{\dot{{\lor}}}}),({\mathbin{\dot{{\to}}}})\in\Sigma^{\Sigma\times\Sigma}$ such that $$\begin{array}{r@{~{}~}c@{~{}~}l} \sem{\mathbin{\dot{{\land}}}}_{\Sigma\times\Sigma}&=& \sem{\pi}_{\Sigma\times\Sigma}\land \sem{\pi'}_{\Sigma\times\Sigma}\\[3pt] \sem{\mathbin{\dot{{\lor}}}}_{\Sigma\times\Sigma}&=& \sem{\pi}_{\Sigma\times\Sigma}\lor \sem{\pi'}_{\Sigma\times\Sigma}\\[3pt] \sem{\mathbin{\dot{{\to}}}}_{\Sigma\times\Sigma}&=& \sem{\pi}_{\Sigma\times\Sigma}\to \sem{\pi'}_{\Sigma\times\Sigma}\\ \end{array}\eqno\begin{array}{r@{}} ({\in}~\mathsf{P}(\Sigma\times\Sigma))\\[3pt] ({\in}~\mathsf{P}(\Sigma\times\Sigma))\\[3pt] ({\in}~\mathsf{P}(\Sigma\times\Sigma))\\ \end{array}$$ where $\pi,\pi':\Sigma\times\Sigma\to\Sigma$ denote the two projections from $\Sigma\times\Sigma$ to $\Sigma$.
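In the subobject tripos of $\mathbf{Set}$ (where $\mathsf{P}{X}=\mathfrak{P}(X)$, $\mathsf{P}{f}=f^{-1}$, $\Sigma=\{0,1\}$ and $\textit{tr}_{\Sigma}=\{1\}$), such codes can simply be chosen to be the Boolean operations on $\{0,1\}$. The following Python sketch of this instance is an illustration of the definitions, not part of the formal development; the sample set $X$ and families $\sigma,\tau$ are our choices.

```python
# Subobject tripos over Set: P(X) = powerset of X, predicates are subsets.
SIGMA = (0, 1)          # set of (codes of) propositions
TR = frozenset({1})     # generic predicate tr_Sigma in P(Sigma)

def decode(sigma, X):
    # [[sigma]]_X = P(sigma)(tr_Sigma) = preimage of TR under sigma
    return frozenset(x for x in X if sigma[x] in TR)

# Codes for the connectives, as maps Sigma x Sigma -> Sigma
def c_and(a, b): return min(a, b)
def c_or(a, b):  return max(a, b)
def c_imp(a, b): return int((not a) or b)

# Pointwise application of the codes decodes to the corresponding
# operation on predicates (here: on subsets of X).
X = range(4)
sigma = {x: int(x % 2 == 0) for x in X}   # decodes to {0, 2}
tau   = {x: int(x < 2) for x in X}        # decodes to {0, 1}
p, q = decode(sigma, X), decode(tau, X)

conj = decode({x: c_and(sigma[x], tau[x]) for x in X}, X)
disj = decode({x: c_or(sigma[x], tau[x]) for x in X}, X)
impl = decode({x: c_imp(sigma[x], tau[x]) for x in X}, X)
```

One checks that `conj == p & q`, `disj == p | q`, and that `impl` is the Heyting implication $(X\setminus p)\cup q$ of the powerset algebra, i.e.\ pointwise application of a code computes the corresponding connective of $\mathsf{P}{X}$.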
\begin{proposition}\label{p:ConnSigma} Let $X$ be a set. For all codes $\sigma,\tau\in\Sigma^X$, we have $$\begin{array}{r@{~{}~}c@{~{}~}l} \sem{\sigma_x\mathbin{\dot{{\land}}}\tau_x}_{x\in X} &=&\sem{\sigma}_X\land\sem{\tau}_X\\[3pt] \sem{\sigma_x\mathbin{\dot{{\lor}}}\tau_x}_{x\in X} &=&\sem{\sigma}_X\lor\sem{\tau}_X\\[3pt] \sem{\sigma_x\mathbin{\dot{{\to}}}\tau_x}_{x\in X} &=&\sem{\sigma}_X\to\sem{\tau}_X\\ \end{array}\eqno\begin{array}{r@{}} ({\in}~\mathsf{P}{X})\\[3pt] ({\in}~\mathsf{P}{X})\\[3pt] ({\in}~\mathsf{P}{X})\\ \end{array}$$ \end{proposition} \begin{proof} Let us treat (for instance) the case of implication: $$\begin{array}[b]{r@{~{}~}c@{~{}~}l} \sem{\sigma_x\mathbin{\dot{{\to}}}\tau_x}_{x\in X} &=&\sem{({\mathbin{\dot{{\to}}}})\circ\langle\sigma,\tau\rangle}_X ~=~\mathsf{P}\langle\sigma,\tau\rangle\sem{{\mathbin{\dot{{\to}}}}}_{\Sigma\times\Sigma} \\ &=&\mathsf{P}\langle\sigma,\tau\rangle\bigl(\sem{\pi}_{\Sigma\times\Sigma}\to \sem{\pi'}_{\Sigma\times\Sigma}\bigr)\\ &=&\mathsf{P}\langle\sigma,\tau\rangle\bigl(\sem{\pi}_{\Sigma\times\Sigma}\bigr)\to \mathsf{P}\langle\sigma,\tau\rangle\bigl(\sem{\pi'}_{\Sigma\times\Sigma}\bigr)\\ &=&\sem{\pi\circ\langle\sigma,\tau\rangle}_X\to \sem{\pi'\circ\langle\sigma,\tau\rangle}_X ~=~\sem{\sigma}_X\to\sem{\tau}_X\,.\\ \end{array}\eqno\mbox{\qedhere}$$ \end{proof} Similarly, we pick codes $\dot{\bot},\dot{\top}\in\Sigma$ such that $$\sem{\dot{\bot}}_{\_\in 1}=\bot_1\in\mathsf{P}(1)\qquad\text{and}\qquad \sem{\dot{\top}}_{\_\in 1}=\top_1\in\mathsf{P}(1)\,.$$ Again, we easily check that: \begin{proposition}\label{p:UnitSigma} For each set $X$, we have: $$\begin{array}{r@{~{}~}r@{~{}~}c@{~{}~}l} \sem{\dot{\bot}}_{x\in X}&=&\bot_X\\[3pt] \sem{\dot{\top}}_{x\in X}&=&\top_X\\ \end{array}\eqno\begin{array}{r@{}} ({\in}~\mathsf{P}{X})\\[3pt] ({\in}~\mathsf{P}{X})\\ \end{array}$$ \end{proposition} \begin{proof} Let us treat (for instance) the case of $\top$: $$\sem{\dot{\top}}_{x\in X} ~=~\sem{(\dot{\top})_{\_\in 1}\circ 
1_X}_X ~=~\mathsf{P}{1_X}(\sem{\dot{\top}}_{\_\in 1}) ~=~\mathsf{P}{1_X}(\top_1)~=~\top_X$$ (where $1_X:X\to 1$ denotes the unique map from $X$ to $1$). \end{proof} \subsection{Defining quantifiers on~$\Sigma$} \label{ss:DefQuantSigma} In this section, we propose to show that the adjoints $$\exists{f},\forall{f}~:~\mathsf{P}{X}\to\mathsf{P}{Y} \eqno(\text{for each}~f:X\to Y)$$ can be derived from suitable \emph{quantifiers} on the generic set~$\Sigma$ of propositions. For that, we consider the membership relation $E:=\{(\xi,s):\xi\in s\}\subseteq\Sigma\times\mathfrak{P}(\Sigma)$ and write $e_1:E\to\Sigma$ and $e_2:E\to\mathfrak{P}(\Sigma)$ for the corresponding projections. We pick two codes $\mathbin{\dot{{\bigvee}}},\mathbin{\dot{{\bigwedge}}}\in\Sigma^{\mathfrak{P}(\Sigma)}$ such that $$\begin{array}{r@{~~}c@{~~}l} \sem{\mathbin{\dot{{\bigvee}}}}_{\mathfrak{P}(\Sigma)}&=&\exists{e_2}(\sem{e_1}_E)\\[3pt] \sem{\mathbin{\dot{{\bigwedge}}}}_{\mathfrak{P}(\Sigma)}&=&\forall{e_2}(\sem{e_1}_E)\\ \end{array}\eqno\begin{array}{r@{}} ({\in}~\mathsf{P}(\mathfrak{P}(\Sigma)))\\[3pt] ({\in}~\mathsf{P}(\mathfrak{P}(\Sigma)))\\ \end{array}$$ \begin{proposition}\label{p:QuantSigma} Given a code $\sigma\in\Sigma^X$ and a map $f:X\to Y$, we have: $$\begin{array}{r@{~{}~}c@{~{}~}l} \bigsem{\mathbin{\dot{{\bigvee}}}\{\sigma_x:x\in f^{-1}(y)\}}_{y\in Y} &=&\exists{f}(\sem{\sigma}_X)\\[6pt] \bigsem{\mathbin{\dot{{\bigwedge}}}\{\sigma_x:x\in f^{-1}(y)\}}_{y\in Y} &=&\forall{f}(\sem{\sigma}_X)\\ \end{array}\eqno\begin{array}{r@{}} ({\in}~\mathsf{P}{Y})\\[6pt] ({\in}~\mathsf{P}{Y})\\ \end{array}$$ \end{proposition} \begin{proof} Let us define the map $h:Y\to\mathfrak{P}(\Sigma)$ by $h(y):=\{\sigma_x:x\in f^{-1}(y)\}$ for all $y\in Y$.
From this definition and from the definitions of $\mathbin{\dot{{\bigvee}}}$, $\mathbin{\dot{{\bigwedge}}}$, we get $$\begin{array}{rcl} \sem{\mathbin{\dot{{\bigvee}}}\{\sigma_x:x\in f^{-1}(y)\}}_{y\in Y} &=&\sem{{\mathbin{\dot{{\bigvee}}}}\circ h}_Y ~=~\mathsf{P}{h}\bigl(\sem{\mathbin{\dot{{\bigvee}}}}_{\mathfrak{P}(\Sigma)}\bigr) ~=~\mathsf{P}{h}\bigl(\exists{e_2}(\sem{e_1}_E)\bigr) \\[3pt] \sem{\mathbin{\dot{{\bigwedge}}}\{\sigma_x:x\in f^{-1}(y)\}}_{y\in Y} &=&\sem{{\mathbin{\dot{{\bigwedge}}}}\circ h}_Y ~=~\mathsf{P}{h}\bigl(\sem{\mathbin{\dot{{\bigwedge}}}}_{\mathfrak{P}(\Sigma)}\bigr) ~=~\mathsf{P}{h}\bigl(\forall{e_2}(\sem{e_1}_E)\bigr) \\ \end{array}$$ Let us now consider the set $G\subseteq\Sigma\times Y$ defined by $G:=\{(\sigma_x,f(x)):x\in X\}$ as well as the two functions $g:G\to Y$ and $g':G\to E$ given by $$g(\xi,y)~:=~y\qquad\text{and}\qquad g'(\xi,y)~:=~(\xi,h(y)) \eqno(\text{for all}~(\xi,y)\in G)$$ We observe that the following diagram is a pullback in~$\mathbf{Set}$: $$\xymatrix{ G\ar[r]^{g}\ar[d]_{g'}\pullback{6pt} & Y\ar[d]^{h} \\ E\ar[r]_{e_2}&\mathfrak{P}(\Sigma) \\ }$$ Hence $\mathsf{P}{h}\circ\exists{e_2}=\exists{g}\circ\mathsf{P}{g'}$ and $\mathsf{P}{h}\circ\forall{e_2}=\forall{g}\circ\mathsf{P}{g'}$ (Beck-Chevalley), and thus: $$\begin{array}{rcl} \bigsem{\mathbin{\dot{{\bigvee}}}\{\sigma_x:x\in f^{-1}(y)\}}_{y\in Y} &=&(\mathsf{P}{h}\circ\exists{e_2})(\sem{e_1}_E) ~=~(\exists{g}\circ\mathsf{P}{g'})(\sem{e_1}_E)\\[3pt] \bigsem{\mathbin{\dot{{\bigwedge}}}\{\sigma_x:x\in f^{-1}(y)\}}_{y\in Y} &=&(\mathsf{P}{h}\circ\forall{e_2})(\sem{e_1}_E) ~=~(\forall{g}\circ\mathsf{P}{g'})(\sem{e_1}_E)\\ \end{array}$$ Now we consider the map $q:X\to G$ defined by $q(x)=(\sigma_x,f(x))$ for all $x\in X$. Since~$q$ is surjective, it has a right inverse by (AC), hence we have $\exists{q}\circ\mathsf{P}{q}=\forall{q}\circ\mathsf{P}{q}=\mathrm{id}_{\mathsf{P}{G}}$ by Lemma~\ref{l:SimplAdj}~(2). 
Therefore: $$\begin{array}{r@{~~}c@{~~}l} \bigsem{\mathbin{\dot{{\bigvee}}}\{\sigma_x:x\in f^{-1}(y)\}}_{y\in Y} &=&(\exists{g}\circ\mathsf{P}{g'})(\sem{e_1}_E) ~=~(\exists{g}\circ\exists{q}\circ\mathsf{P}{q}\circ\mathsf{P}{g'})(\sem{e_1}_E)\\ &=&\bigl(\exists(g\circ q)\circ\mathsf{P}(g'\circ q)\bigr)(\sem{e_1}_E) ~=~\exists{f}\bigl(\mathsf{P}(g'\circ q)(\sem{e_1}_E)\bigr) \\ &=&\exists{f}\bigl(\sem{e_1\circ g'\circ q}_X\bigr) ~=~\exists{f}(\sem{\sigma}_X) \\ \end{array}$$ (since $g\circ q=f$ and $e_1\circ g'\circ q=\sigma$). And similarly for ${\forall}$. \end{proof} \subsection{Defining the `filter' $\Phi$} \label{ss:DefFilterSigma} In Sections~\ref{ss:DefConnSigma} and~\ref{ss:DefQuantSigma}, we introduced codes $({\mathbin{\dot{{\land}}}}),({\mathbin{\dot{{\lor}}}}),(\mathbin{\dot{{\to}}})\in\Sigma^{\Sigma\times\Sigma}$ and $\mathbin{\dot{{\bigvee}}},\mathbin{\dot{{\bigwedge}}}\in\Sigma^{\mathfrak{P}(\Sigma)}$ such that for all sets $X$ and for all predicates $p,q\in\mathsf{P}{X}$: \begin{itemize} \item If $\sigma,\tau\in\Sigma^X$ are codes for $p,q\in\mathsf{P}{X}$, respectively, then: $$\begin{array}{l@{\quad}l@{\qquad}l@{\qquad}l@{\quad}l} (\sigma_x\mathbin{\dot{{\land}}}\tau_x)_{x\in X}&({\in}~\Sigma^X) &\text{is a code for}&p\land q&({\in}~\mathsf{P}{X}) \\[3pt] (\sigma_x\mathbin{\dot{{\lor}}}\tau_x)_{x\in X}&({\in}~\Sigma^X) &\text{is a code for}&p\lor q&({\in}~\mathsf{P}{X}) \\[3pt] (\sigma_x\mathbin{\dot{{\to}}}\tau_x)_{x\in X}&({\in}~\Sigma^X) &\text{is a code for}&p\to q&({\in}~\mathsf{P}{X}) \\ \end{array}$$ \item If $\sigma\in\Sigma^X$ is a code for $p\in\mathsf{P}{X}$ and if $f:X\to Y$ is any map, then: $$\begin{array}{l@{\quad}l@{\qquad}l@{\qquad}l@{\quad}l} \Bigl({\textstyle\mathbin{\dot{{\bigvee}}}} \bigl\{\sigma_x:x\in f^{-1}(y)\bigr\}\Bigr)_{y\in Y}&({\in}~\Sigma^Y) &\text{is a code for}&\exists{f}(p)&({\in}~\mathsf{P}{Y}) \\[6pt] \Bigl({\textstyle\mathbin{\dot{{\bigwedge}}}} \bigl\{\sigma_x:x\in f^{-1}(y)\bigr\}\Bigr)_{y\in Y}&({\in}~\Sigma^Y) &\text{is a code for}&\forall{f}(p)&({\in}~\mathsf{P}{Y}) \\ \end{array}$$ \end{itemize} It now remains to characterize the ordering on each Heyting algebra $\mathsf{P}{X}$ from the above constructions in~$\Sigma$. For that, we consider the set $\Phi\subseteq\Sigma$ defined by $\Phi:=\{\xi\in\Sigma:\sem{\xi}_{\_\in1}=\top_1\}$, writing~$\top_1$ for the top element of $\mathsf{P}{1}$. We check that: \begin{proposition}\label{p:OrderPhi} For all $X\in\mathbf{Set}$ and $\sigma,\tau\in\Sigma^X$, we have $$\sem{\sigma}_X\le\sem{\tau}_X\qquad\text{iff}\qquad {\textstyle\mathbin{\dot{{\bigwedge}}}}\{\sigma_x\mathbin{\dot{{\to}}}\tau_x:x\in X\}\in\Phi\,.$$ \end{proposition} \begin{proof} Writing $1_X:X\to 1$ for the unique map from~$X$ to~$1$, we have: $$\begin{array}[b]{r@{\quad}c@{\quad}l} \sem{\sigma}_X\le\sem{\tau}_X &\text{iff}&\top_X\le\sem{\sigma}_X\to\sem{\tau}_X \\ &\text{iff}&\mathsf{P}{1_X}(\top_1)\le\sem{\sigma_x\mathbin{\dot{{\to}}}\tau_x}_{x\in X}\\ &\text{iff}&\top_1\le\forall{1_X} \bigl(\sem{\sigma_x\mathbin{\dot{{\to}}}\tau_x}_{x\in X}\bigr)\\ &\text{iff}&\top_1\le\bigsem{\mathbin{\dot{{\bigwedge}}} \{\sigma_x\mathbin{\dot{{\to}}}\tau_x:x\in X\}}_{\_\in 1}\\ &\text{iff}&\mathbin{\dot{{\bigwedge}}}\{\sigma_x\mathbin{\dot{{\to}}}\tau_x:x\in X\}\in\Phi\\ \end{array}\eqno\mbox{\qedhere}$$ \end{proof} We can thus complete the above correspondence between the operations on the Heyting algebra $\mathsf{P}{X}$ and the analogous operations on the set~$\Sigma$ by concluding that: \begin{itemize} \item If $\sigma,\tau\in\Sigma^X$ are codes for $p,q\in\mathsf{P}{X}$, respectively, then: $$p\le q\quad({\in}~\mathsf{P}{X})\qquad\text{iff}\qquad \mathop{\mathbin{\dot{{\bigwedge}}}}\limits_{x\in X} (\sigma_x\mathbin{\dot{{\to}}}\tau_x)~\in~\Phi \quad({\subseteq}~\Sigma)$$ \end{itemize} \begin{remark} At this stage, it would be tempting to see the set $\Sigma$ as a complete Heyting
algebra whose structure is given by the operations ${\mathbin{\dot{{\land}}}},{\mathbin{\dot{{\lor}}}},{\mathbin{\dot{{\to}}}},\mathbin{\dot{{\bigvee}}},\mathbin{\dot{{\bigwedge}}}$, while seeing the subset $\Phi\subseteq\Sigma$ as a particular filter of~$\Sigma$. Alas, the above operations come with absolutely no algebraic property, since in general we have $$\xi\mathbin{\dot{{\land}}}\xi\neq\xi,\qquad \xi\mathbin{\dot{{\land}}}\xi'\neq\xi'\mathbin{\dot{{\land}}}\xi,\qquad \xi\mathbin{\dot{{\to}}}\xi\neq\dot{\top},\qquad {\textstyle\mathbin{\dot{{\bigvee}}}}\{\xi\}\neq {\textstyle\mathbin{\dot{{\bigwedge}}}}\{\xi\}\neq \xi,\qquad\text{etc.}$$ In the end, these operations thus fail to endow the set~$\Sigma$ with the structure of a (complete) Heyting algebra---although they are sufficient in practice to characterize the whole structure of the tripos $\mathsf{P}:\mathbf{Set}^{\mathrm{op}}\to\mathbf{HA}$ via the pseudo-filter $\Phi\subseteq\Sigma$. However, we shall see in Section~\ref{s:ImpAlg} that the set $\Sigma$ generates an implicative algebra~$\mathscr{A}$ whose operations (arbitrary meets and implication) capture the whole structure of the tripos~$\mathsf{P}$ in a more natural way. \end{remark} \subsection{Miscellaneous properties}\label{ss:MiscProp} In this section, we present some properties of the set~$\Sigma$ that will be useful to construct the implicative algebra~$\mathscr{A}$.
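Before establishing these properties, the quantifier codes and the characterization of the ordering via the pseudo-filter can be instantiated in the two-element subobject model of $\mathbf{Set}$. The Python sketch below is an illustration under our own modelling assumptions ($\Sigma=\{0,1\}$, $\mathsf{P}{X}=\mathfrak{P}(X)$, $\mathsf{P}{f}=f^{-1}$), not part of the formal development.

```python
# Two-element subobject model: predicates on X are subsets of X.
def decode(sigma, X):
    # a code sigma : X -> {0,1} decodes to the subset of X
    # of which it is the characteristic function
    return frozenset(x for x in X if sigma[x] == 1)

# Codes for the quantifiers, as maps P({0,1}) -> {0,1}
def c_sup(s): return int(1 in s)        # note: c_sup(empty) = 0
def c_inf(s): return int(0 not in s)    # note: c_inf(empty) = 1

# Left and right adjoints of P(f) = preimage, for f : X -> Y
def ex(f, p, X, Y):
    return frozenset(f[x] for x in X if x in p)       # direct image
def fa(f, p, X, Y):
    return frozenset(y for y in Y
                     if all(x in p for x in X if f[x] == y))

X, Y = range(4), range(2)
f = {x: x % 2 for x in X}
sigma = {0: 1, 1: 1, 2: 0, 3: 1}
p = decode(sigma, X)

# The quantifier codes compute the adjoints fiberwise:
lhs_ex = decode({y: c_sup({sigma[x] for x in X if f[x] == y}) for y in Y}, Y)
lhs_fa = decode({y: c_inf({sigma[x] for x in X if f[x] == y}) for y in Y}, Y)

# The pseudo-filter: a code belongs to it iff it decodes to the true
# predicate, so here it is {1}; the ordering of P(X) is captured by
# c_inf of the pointwise implication codes.
def c_imp(a, b): return int((not a) or b)
def leq(sigma, tau, X):
    return c_inf({c_imp(sigma[x], tau[x]) for x in X}) == 1
```

Here `lhs_ex == ex(f, p, X, Y)` and `lhs_fa == fa(f, p, X, Y)`, and `leq(sigma, tau, X)` holds exactly when `decode(sigma, X)` is a subset of `decode(tau, X)`, matching the fiberwise description of the adjoints and the ordering criterion via the pseudo-filter.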
\begin{proposition}[Merging quantifications] \label{p:MergeQuantSigma} $$\begin{array}{r@{~{}~}c@{~{}~}l} \bigsem{{\mathbin{\dot{{\bigvee}}}}\bigl\{{\mathbin{\dot{{\bigvee}}}}s:s\in S\bigr\}} _{S\in\mathfrak{P}(\mathfrak{P}(\Sigma))}&=& \bigsem{{\mathbin{\dot{{\bigvee}}}}\bigl({\bigcup}S\bigr)} _{S\in\mathfrak{P}(\mathfrak{P}(\Sigma))}\\[3pt] \bigsem{{\mathbin{\dot{{\bigwedge}}}}\bigl\{{\mathbin{\dot{{\bigwedge}}}}s:s\in S\bigr\}} _{S\in\mathfrak{P}(\mathfrak{P}(\Sigma))}&=& \bigsem{{\mathbin{\dot{{\bigwedge}}}}\bigl({\bigcup}S\bigr)} _{S\in\mathfrak{P}(\mathfrak{P}(\Sigma))}\\ \end{array}$$ \end{proposition} \begin{proof} Let us consider: \begin{itemize} \item The membership relation $E:=\{(\xi,s):\xi\in s\}\subseteq\Sigma\times\mathfrak{P}(\Sigma)$ equipped with the projections $e_1:E\to\Sigma$ and $e_2:E\to\mathfrak{P}(\Sigma)$ (see Section~\ref{ss:DefQuantSigma} above); \item The membership relation $F:=\{(s,S):s\in S\}\subseteq\mathfrak{P}(\Sigma)\times\mathfrak{P}(\mathfrak{P}(\Sigma))$ equipped with the projections $f_1:F\to\mathfrak{P}(\Sigma)$ and $f_2:F\to\mathfrak{P}(\mathfrak{P}(\Sigma))$; \item The composed membership relation $G:=\{(\xi,s,S):\xi\in s\in S\}\subseteq \Sigma\times\mathfrak{P}(\Sigma)\times\mathfrak{P}(\mathfrak{P}(\Sigma))$ equipped with the functions $g_1:G\to E$ and $g_2:G\to F$ defined by $g_1(\xi,s,S)=(\xi,s)$ and $g_2(\xi,s,S)=(s,S)$ for all $(\xi,s,S)\in G$. \end{itemize} By construction, we have the following pullback: $$\xymatrix{ G\pullback{6pt}\ar[d]_{g_1}\ar[r]^{g_2}&F\ar[r]^{f_2}\ar[d]^{f_1}& \mathfrak{P}\rlap{$(\mathfrak{P}(\Sigma))$} \\ E\ar[d]_{e_1}\ar[r]_{e_2}&\mathfrak{P}(\Sigma) \\ \Sigma }$$ hence $\mathsf{P}{f_1}\circ\exists{e_2}=\exists{g_2}\circ\mathsf{P}{g_1}$ and $\mathsf{P}{f_1}\circ\forall{e_2}=\forall{g_2}\circ\mathsf{P}{g_1}$ (Beck-Chevalley). 
Therefore: $$\begin{array}[b]{@{}r@{~{}~}c@{~{}~}l@{\hskip-10pt}} \bigsem{\mathbin{\dot{{\bigvee}}}\bigl\{{\mathbin{\dot{{\bigvee}}}}s:s\in S\bigr\}} _{S\in\mathfrak{P}(\mathfrak{P}(\Sigma))} &=&\bigsem{\mathbin{\dot{{\bigvee}}}\bigl\{{\mathbin{\dot{{\bigvee}}}}f_1(z): z\in f_2^{-1}(S)\bigr\}}_{S\in\mathfrak{P}(\mathfrak{P}(\Sigma))} ~=~\exists{f_2}\bigl(\sem{\mathbin{\dot{{\bigvee}}}\circ f_1}_{F}\bigr)\\ &=&(\exists{f_2}\circ\mathsf{P}{f_1}) \bigl(\sem{\mathbin{\dot{{\bigvee}}}}_{\mathfrak{P}(\Sigma)}\bigr) ~=~(\exists{f_2}\circ\mathsf{P}{f_1}\circ\exists{e_2}) \bigl(\sem{e_1}_E\bigr)\\ &=&(\exists{f_2}\circ\exists{g_2}\circ\mathsf{P}{g_1}) \bigl(\sem{e_1}_E\bigr) ~=~\exists(f_2\circ g_2)\bigl(\sem{e_1\circ g_1}_G\bigr)\\ &=&\bigsem{\mathbin{\dot{{\bigvee}}}\{(e_1\circ g_1)(z): z\in(f_2\circ g_2)^{-1}(S)\}}_{S\in\mathfrak{P}(\mathfrak{P}(\Sigma))}\\ &=&\bigsem{{\mathbin{\dot{{\bigvee}}}}\bigl(\bigcup{S}\bigr)}_{S\in\mathfrak{P}(\mathfrak{P}(\Sigma))}\\ \end{array}$$ And similarly for $\forall$. \end{proof} The following proposition expresses that the codes for universal quantification and implication fulfill the usual property of distributivity (on the right-hand side of implication): \begin{proposition}\label{p:DistrForallImp} We have:\quad $\bigsem{{\mathbin{\dot{{\bigwedge}}}}\{\theta\mathbin{\dot{{\to}}}\xi~:~\xi\in s\}} _{(\theta,s)\in\Sigma\times\mathfrak{P}(\Sigma)} ~=~\bigsem{\theta\mathbin{\dot{{\to}}}{\mathbin{\dot{{\bigwedge}}}}s} _{(\theta,s)\in\Sigma\times\mathfrak{P}(\Sigma)}$. \end{proposition} \begin{proof} Let us consider the set $G:=\{(\theta,\xi,s):\xi\in s\}\subseteq \Sigma\times\Sigma\times\mathfrak{P}(\Sigma)$ with the corresponding projections $g_1,g_2:G\to\Sigma$ and $g_3:G\to\mathfrak{P}(\Sigma)$. We also write $\pi:\Sigma\times\mathfrak{P}(\Sigma)\to\Sigma$ for the first projection from $\Sigma\times\mathfrak{P}(\Sigma)$ to~$\Sigma$.
For all $p\in\mathsf{P}(\Sigma\times\mathfrak{P}(\Sigma))$, we observe that: $$\begin{array}[b]{l@{\quad}l} &p~\le~\bigsem{{\mathbin{\dot{{\bigwedge}}}}\{\theta\mathbin{\dot{{\to}}}\xi~:~\xi\in s\}} _{(\theta,s)\in\Sigma\times\mathfrak{P}(\Sigma)}\\ \text{iff}&p~\le~\bigsem{{\mathbin{\dot{{\bigwedge}}}}\bigl\{ g_1(z)\mathbin{\dot{{\to}}} g_2(z)~:~z\in\langle g_1,g_3\rangle^{-1}(\theta,s) \bigr\}}_{(\theta,s)\in\Sigma\times\mathfrak{P}(\Sigma)}\\ \text{iff}&p~\le~\forall\langle g_1,g_3\rangle\bigl( \sem{g_1}_G\to\sem{g_2}_G\bigr) \\ \text{iff}&\mathsf{P}\langle g_1,g_3\rangle(p)~\le~\sem{g_1}_G\to\sem{g_2}_G \\ \text{iff}&\mathsf{P}\langle g_1,g_3\rangle(p)\land\sem{g_1}_G~\le~\sem{g_2}_G \\ \text{iff}&\mathsf{P}\langle g_1,g_3\rangle(p)\land\sem{\pi\circ\langle g_1,g_3\rangle}_G ~\le~\sem{g_2}_G \\ \text{iff}&\mathsf{P}\langle g_1,g_3\rangle\bigl(p\land \sem{\pi}_{\Sigma\times\mathfrak{P}(\Sigma)}\bigr)~\le~\sem{g_2}_G \\ \text{iff}&p\land\sem{\pi}_{\Sigma\times\mathfrak{P}(\Sigma)} ~\le~\forall\langle g_1,g_3\rangle(\sem{g_2}_G) \\ \text{iff}&p~\le~ \sem{\pi}_{\Sigma\times\mathfrak{P}(\Sigma)}\to \forall\langle g_1,g_3\rangle(\sem{g_2}_G)\\ \text{iff}&p~\le~ \sem{\theta}_{(\theta,s)\in\Sigma\times\mathfrak{P}(\Sigma)}\to \bigsem{{\mathbin{\dot{{\bigwedge}}}}\bigl\{g_2(z)~:~ z\in\langle g_1,g_3\rangle^{-1}(\theta,s) \bigr\}}_{(\theta,s)\in\Sigma\times\mathfrak{P}(\Sigma)}\\ \text{iff}&p~\le~ \sem{\theta}_{(\theta,s)\in\Sigma\times\mathfrak{P}(\Sigma)}\to \bigsem{{\mathbin{\dot{{\bigwedge}}}}\bigl\{\xi~:~\xi\in s \bigr\}}_{(\theta,s)\in\Sigma\times\mathfrak{P}(\Sigma)}\\ \text{iff}&p~\le~ \bigsem{\theta\mathbin{\dot{{\to}}}{\mathbin{\dot{{\bigwedge}}}}s} _{(\theta,s)\in\Sigma\times\mathfrak{P}(\Sigma)}\\ \end{array}$$ From which we get the desired equality.
\end{proof} \begin{corollary}\label{c:DistrForallImp} Given a set $X$ and two families $\sigma\in\Sigma^X$ and $t\in\mathfrak{P}(\Sigma)^X$, we have: $$\textstyle \bigsem{{\mathbin{\dot{{\bigwedge}}}}\{\sigma_x\mathbin{\dot{{\to}}}\xi~:~\xi\in t_x\}}_{x\in X} ~=~\bigsem{\sigma_x\mathbin{\dot{{\to}}}{\mathbin{\dot{{\bigwedge}}}}t_x}_{x\in X}$$ \end{corollary} \begin{proof} Indeed, we have $$\begin{array}[b]{r@{~{}~}c@{~{}~}l} \bigsem{{\mathbin{\dot{{\bigwedge}}}}\{\sigma_x\mathbin{\dot{{\to}}}\xi~:~\xi\in t_x\}}_{x\in X} &=&\mathsf{P}(\langle\sigma,t\rangle)\bigl( \bigsem{{\mathbin{\dot{{\bigwedge}}}}\{\theta\mathbin{\dot{{\to}}}\xi~:~\xi\in s\}} _{(\theta,s)\in\Sigma\times\mathfrak{P}(\Sigma)}\bigr) \\ &=&\mathsf{P}(\langle\sigma,t\rangle)\bigl( \bigsem{\theta\mathbin{\dot{{\to}}}{\mathbin{\dot{{\bigwedge}}}}s} _{(\theta,s)\in\Sigma\times\mathfrak{P}(\Sigma)}\bigr) \\ &=&\bigsem{\sigma_x\mathbin{\dot{{\to}}}{\mathbin{\dot{{\bigwedge}}}}t_x}_{x\in X}\\ \end{array}\eqno\begin{tabular}[b]{r@{}} (by Prop.~\ref{p:DistrForallImp})\\ \mbox{\qedhere}\\ \end{tabular}$$ \end{proof} From now on, we consider the inclusion relation $F:=\{(s,s'):s\subseteq s'\}\subseteq \mathfrak{P}(\Sigma)\times\mathfrak{P}(\Sigma)$ together with the corresponding projections $f_1:F\to\mathfrak{P}(\Sigma)$ and $f_2:F\to\mathfrak{P}(\Sigma)$. The following proposition expresses that the operators ${\mathbin{\dot{{\bigvee}}}}:\mathfrak{P}(\Sigma)\to\Sigma$ and ${\mathbin{\dot{{\bigwedge}}}}:\mathfrak{P}(\Sigma)\to\Sigma$ are respectively monotonic and antitonic w.r.t.\ the domain of quantification: \begin{proposition}\label{p:SubQuantSigma} $\bigsem{{\mathbin{\dot{{\bigvee}}}}\circ f_1}_F~\le~ \bigsem{{\mathbin{\dot{{\bigvee}}}}\circ f_2}_F$\ \ and\ \ $\bigsem{{\mathbin{\dot{{\bigwedge}}}}\circ f_1}_F~\ge~ \bigsem{{\mathbin{\dot{{\bigwedge}}}}\circ f_2}_F$. 
\end{proposition} \begin{proof} Let us consider the set $G:=\{(\xi,\xi',(s,s')):\xi\in s,~\xi'\in s',~(s,s')\in F\} \subseteq\Sigma\times\Sigma\times F$ equipped with the three projections $g_1,g_2:G\to\Sigma$ and $g_3:G\to F$. We have $$\begin{array}{r@{~{}~}c@{~{}~}l} \bigsem{{\mathbin{\dot{{\bigvee}}}}\circ f_1}_F &=&\bigsem{{\mathbin{\dot{{\bigvee}}}} \bigl\{g_1(z):z\in g_3^{-1}(s,s')\bigr\}}_{(s,s')\in F} ~=~\exists{g_3}(\sem{g_1}_G) \\ \bigsem{{\mathbin{\dot{{\bigvee}}}}\circ f_2}_F &=&\bigsem{{\mathbin{\dot{{\bigvee}}}} \bigl\{g_2(z):z\in g_3^{-1}(s,s')\bigr\}}_{(s,s')\in F} ~=~\exists{g_3}(\sem{g_2}_G) \\ \bigsem{{\mathbin{\dot{{\bigwedge}}}}\circ f_1}_F &=&\bigsem{{\mathbin{\dot{{\bigwedge}}}} \bigl\{g_1(z):z\in g_3^{-1}(s,s')\bigr\}}_{(s,s')\in F} ~=~\forall{g_3}(\sem{g_1}_G) \\ \bigsem{{\mathbin{\dot{{\bigwedge}}}}\circ f_2}_F &=&\bigsem{{\mathbin{\dot{{\bigwedge}}}} \bigl\{g_2(z):z\in g_3^{-1}(s,s')\bigr\}}_{(s,s')\in F} ~=~\forall{g_3}(\sem{g_2}_G) \\ \end{array}$$ Let us now consider the function $g:G\to G$ defined by $g(\xi,\xi',(s,s'))=(\xi,\xi,(s,s'))$ for all $(\xi,\xi',(s,s'))\in G$.
Since $g_2\circ g=g_1$, we have $\mathsf{P}{g}(\sem{g_2}_G)=\sem{g_2\circ g}_G=\sem{g_1}_G$ and thus $$\exists{g}(\sem{g_1}_G)~\le~\sem{g_2}_G ~\le~\forall{g}(\sem{g_1}_G) \eqno\text{(by left/right adjunction)}$$ Combining the above inequalities with the fact that $g_3\circ g=g_3$, we deduce that: $$\begin{array}[b]{l} \bigsem{{\mathbin{\dot{{\bigvee}}}}\circ f_1}_F ~=~\exists{g_3}(\sem{g_1}_G) ~=~\exists{g_3}(\exists{g}(\sem{g_1}_G)) ~\le~\exists{g_3}(\sem{g_2}_G) ~=~\bigsem{{\mathbin{\dot{{\bigvee}}}}\circ f_2}_F\\ \bigsem{{\mathbin{\dot{{\bigwedge}}}}\circ f_1}_F ~=~\forall{g_3}(\sem{g_1}_G) ~=~\forall{g_3}(\forall{g}(\sem{g_1}_G)) ~\ge~\forall{g_3}(\sem{g_2}_G) ~=~\bigsem{{\mathbin{\dot{{\bigwedge}}}}\circ f_2}_F\,.\\ \end{array}\eqno\mbox{\qedhere}$$ \end{proof} \begin{corollary}\label{c:SubQuantSigma} Given a set~$X$ and two families $a,b\in\mathfrak{P}(\Sigma)^X$ such that $a_x\subseteq b_x$ for all $x\in X$, we have: $\bigsem{{\mathbin{\dot{{\bigvee}}}}\circ a}_X~\le~ \bigsem{{\mathbin{\dot{{\bigvee}}}}\circ b}_X$\ \ and\ \ $\bigsem{{\mathbin{\dot{{\bigwedge}}}}\circ a}_X~\ge~ \bigsem{{\mathbin{\dot{{\bigwedge}}}}\circ b}_X$. \end{corollary} \begin{proof} Let $c:=(a_x,b_x)_{x\in X}\in F^X$. 
From Prop.~\ref{p:SubQuantSigma} we get: $$\begin{array}[b]{@{}l@{}} \bigsem{{\mathbin{\dot{{\bigvee}}}}\circ a}_X =\bigsem{{\mathbin{\dot{{\bigvee}}}}\circ f_1\circ c}_X =\mathsf{P}{c}\bigl(\bigsem{{\mathbin{\dot{{\bigvee}}}}\circ f_1}_F\bigr) \le\mathsf{P}{c}\bigl(\bigsem{{\mathbin{\dot{{\bigvee}}}}\circ f_2}_F\bigr) =\bigsem{{\mathbin{\dot{{\bigvee}}}}\circ f_2\circ c}_X =\bigsem{{\mathbin{\dot{{\bigvee}}}}\circ b}_X \\ \bigsem{{\mathbin{\dot{{\bigwedge}}}}\circ a}_X =\bigsem{{\mathbin{\dot{{\bigwedge}}}}\circ f_1\circ c}_X =\mathsf{P}{c}\bigl(\bigsem{{\mathbin{\dot{{\bigwedge}}}}\circ f_1}_F\bigr) \ge\mathsf{P}{c}\bigl(\bigsem{{\mathbin{\dot{{\bigwedge}}}}\circ f_2}_F\bigr) =\bigsem{{\mathbin{\dot{{\bigwedge}}}}\circ f_2\circ c}_X =\bigsem{{\mathbin{\dot{{\bigwedge}}}}\circ b}_X \\ \end{array}\eqno\mbox{\qedhere}$$ \end{proof} \section{Extracting the implicative algebra} \label{s:ImpAlg} In Section~\ref{ss:DefFilterSigma}, we have seen that the structure of the tripos $\mathsf{P}:\mathbf{Set}^{\mathrm{op}}\to\mathbf{HA}$ can be fully characterized from the binary operations $({\mathbin{\dot{{\land}}}}),({\mathbin{\dot{{\lor}}}}),({\mathbin{\dot{{\to}}}}):\Sigma\times\Sigma\to\Sigma$ and the infinitary operations $({\mathbin{\dot{{\bigvee}}}}),({\mathbin{\dot{{\bigwedge}}}}):\mathfrak{P}(\Sigma)\to\Sigma$ via some subset $\Phi\subseteq\Sigma$ (the `pseudo-filter'). However, these operations fail to endow the set~$\Sigma$ itself with the structure of a complete Heyting algebra, for they lack the most basic algebraic properties that are required by such a structure. In this section, we shall construct a particular implicative structure $\mathscr{A}=(\mathscr{A},{\preccurlyeq},{\to})$ from the set~$\Sigma$ equipped only with the operations $({\mathbin{\dot{{\to}}}}):\Sigma\times\Sigma\to\Sigma$ and $({\mathbin{\dot{{\bigwedge}}}}):\mathfrak{P}(\Sigma)\to\Sigma$, using a construction that is reminiscent of the construction of graph models~\cite{Eng81}.
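The failure of equational laws on $\Sigma$ is easy to reproduce as soon as the set of codes is redundant, which the non-uniqueness of generic predicates always permits. The following toy Python illustration is entirely our own (three codes for two truth values, and an arbitrary choice of implication code, which is precisely the point):

```python
# Three codes for two truth values: 0 codes "false", 1 and 2 both code "true".
SIGMA = (0, 1, 2)
def decode(xi): return xi != 0        # semantic value of a code

# An arbitrary (but semantically correct) choice of code for implication:
# it returns the code 2 whenever the result is "true".
def c_imp(a, b):
    return 2 if (not decode(a)) or decode(b) else 0

TOP_CODE = 1                          # a chosen code for "true"

# Semantically, xi -> xi is always "true" ...
assert all(decode(c_imp(xi, xi)) for xi in SIGMA)
# ... yet as an ELEMENT of Sigma, the code (xi -> xi) differs from the
# chosen code for "true": no equation such as xi -> xi = top holds.
assert c_imp(TOP_CODE, TOP_CODE) != TOP_CODE
```

Only after quotienting by semantic equivalence, or, as in this section, after completing $\Sigma$ into the implicative structure $\mathscr{A}$, do the expected laws hold on the nose.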
As we shall see, the main interest of such a construction is that: \begin{enumerate} \item The carrier set $\mathscr{A}$ can be used as an alternative set of propositions, for it possesses its own generic predicate $\textit{tr}_{\mathscr{A}}\in\mathsf{P}\mathscr{A}$. \item The two operations $({\to}):\mathscr{A}\times\mathscr{A}\to\mathscr{A}$ and $({\bigcurlywedge}):\mathfrak{P}(\mathscr{A})\to\mathscr{A}$ that naturally come with the implicative structure~$\mathscr{A}$ constitute codes (in the sense of the new generic predicate $\textit{tr}_{\mathscr{A}}\in\mathsf{P}\mathscr{A}$) for implication and universal quantification in the tripos~$\mathsf{P}$. \item The ordering on each Heyting algebra $\mathsf{P}{X}$ (for $X\in\mathbf{Set}$) can be characterized from the above two operations via a particular separator $S\subseteq\mathscr{A}$. \end{enumerate} From the above properties, we shall easily conclude that the tripos~$\mathsf{P}:\mathbf{Set}^{\mathrm{op}}\to\mathbf{HA}$ is isomorphic to the tripos induced by the implicative algebra $(\mathscr{A},{\preccurlyeq},{\to},S)$. \subsection{Defining the set $\mathscr{A}_0$ of atoms} \label{ss:Atoms} The construction of the implicative structure~$\mathscr{A}$ is achieved in two steps. First we construct from the set~$\Sigma$ of propositions a (larger) set $\mathscr{A}_0$ of \emph{atoms} equipped with a preorder~$\le$. Then we let $\mathscr{A}:=\mathfrak{P}^{\uparrow}(\mathscr{A}_0)$, where $\mathfrak{P}^{\uparrow}(\mathscr{A}_0)$ denotes the set of all upwards closed subsets of~$\mathscr{A}_0$ w.r.t.\ the preorder~$\le$. Formally, the set $\mathscr{A}_0$ of atoms is inductively defined from the following two clauses: \begin{enumerate} \item If $\xi\in\Sigma$, then $\dot\xi\in\mathscr{A}_0$ \footnote{Here, we use the dot notation $\dot\xi$ only to distinguish the code $\xi\in\Sigma$ from its image $\dot\xi\in\mathscr{A}_0$.}\quad (base case).
\item If $s\in\mathfrak{P}(\Sigma)$ and $\alpha\in\mathscr{A}_0$, then $(s\mapsto\alpha)\in\mathscr{A}_0$\quad (inductive case). \end{enumerate} In other words, each atom $\alpha\in\mathscr{A}_0$ is a finite list of subsets of~$\Sigma$ terminated by an element of~$\Sigma$, that is:\ \ $\alpha=s_1\mapsto\cdots\mapsto s_n\mapsto\dot\xi$\ \ for some $s_1,\ldots,s_n\in\mathfrak{P}(\Sigma)$ and $\xi\in\Sigma$. The set $\mathscr{A}_0$ is equipped with the binary relation $\alpha\le\beta$ that is inductively defined from the two rules $$\infer{\dot\xi\le\dot\xi}{}\qquad\qquad \infer{s\mapsto\alpha~\le~s'\mapsto\alpha'}{ s\subseteq s'&\quad \alpha\le\alpha' }$$ \begin{fact} The relation $\alpha\le\beta$ is a preorder on $\mathscr{A}_0$. \end{fact} \begin{proof} By induction on $\alpha\in\mathscr{A}_0$, we successively prove (1) that $\alpha\le\alpha$ and (2) that $\alpha\le\beta$ and $\beta\le\gamma$ together imply that $\alpha\le\gamma$, for all $\beta,\gamma\in\mathscr{A}_0$. \end{proof} Finally, the set $\mathscr{A}_0$ is equipped with a conversion function $\phi_0:\mathscr{A}_0\to\Sigma$ that is defined by $$\phi_0(\dot\xi)~:=~\xi\qquad\text{and}\qquad \phi_0(s\mapsto\alpha)~:=~ \bigl({\textstyle\mathbin{\dot{{\bigwedge}}}{s}}\bigr)\mathbin{\dot{{\to}}}\phi_0(\alpha)$$ (Note that by construction, this function is surjective.) \begin{remark}[Relationship with graph models] In the theory of graph models~\cite{Eng81}, the set of atoms~$\mathscr{A}_0$ would be naturally defined from the grammar $$\alpha,\beta\in\mathscr{A}_0\quad::=\quad \dot{\xi}\quad|\quad\{\alpha_1,\ldots,\alpha_n\}\mapsto\beta \eqno(\xi\in\Sigma)$$ that is, as the least solution of the set-theoretic equation $\mathscr{A}_0=\Sigma+\mathfrak{P}fin(\mathscr{A}_0)\times\mathscr{A}_0$. 
However, such a construction would yield an \emph{applicative} structure upon the set $\mathscr{A}=\mathfrak{P}up(\mathscr{A}_0)$---it would even constitute a ($D_{\infty}$-like) model of the $\lambda$-calculus---, but it would not yield an \emph{implicative} structure, for the restriction to the finite subsets of $\mathscr{A}_0$ in the left-hand side of the construction $\{\alpha_1,\ldots,\alpha_n\}\mapsto\beta$ actually prevents defining an implication with the desired properties.\par To fix this problem, it would be natural to relax the condition of finiteness, by considering instead the equation $\mathscr{A}_0=\Sigma+\mathfrak{P}(\mathscr{A}_0)\times\mathscr{A}_0$. Alas, such an equation has no solution (for obvious cardinality reasons), so that the trick consists in replacing arbitrary subsets of $\mathscr{A}_0$ (in the left-hand side of the construction $s\mapsto\beta$) by arbitrary subsets of~$\Sigma$, using the fact that the subsets of~$\mathscr{A}_0$ can always be converted (element-wise) into subsets of~$\Sigma$, via the conversion function $\phi_0:\mathscr{A}_0\to\Sigma$. In the end, we thus obtain the set-theoretic equation $\mathscr{A}_0=\Sigma+\mathfrak{P}(\Sigma)\times\mathscr{A}_0$, whose least solution is precisely the set $\mathscr{A}_0$ we defined above. \end{remark} \subsection{Defining the implicative structure $(\mathscr{A},{\preccurlyeq},{\to})$} \label{ss:ImpStruct} Let $\mathscr{A}:=\mathfrak{P}up(\mathscr{A}_0)$ be the set of upwards closed subsets of~$\mathscr{A}_0$ (w.r.t.\ the preorder $\le$ on~$\mathscr{A}_0$), equipped with the ordering $\preccurlyeq$ defined by $a\preccurlyeq b$ iff $a\supseteq b$ (reverse inclusion) for all $a,b\in\mathscr{A}$. It is clear that: \begin{proposition} The poset $(\mathscr{A},{\preccurlyeq})=(\mathfrak{P}up(\mathscr{A}_0),{\supseteq})$ is a complete lattice. \end{proposition} Note that in this complete lattice, (finitary or infinitary) meets and joins are respectively given by unions and intersections.
In particular, we have $\bot_{\mathscr{A}}=\mathscr{A}_0$ and $\top_{\mathscr{A}}=\varnothing$. Let $\tilde{\phi}_0:\mathscr{A}\to\mathfrak{P}(\Sigma)$ be the function defined by $\tilde{\phi}_0(a)=\{\phi_0(\alpha):\alpha\in a\}$ for all $a\in\mathscr{A}$. For each set of codes $s\in\mathfrak{P}(\Sigma)$, we write $s^{\subseteq}:=\{s'\in\mathfrak{P}(\Sigma):s\subseteq s'\}$. We now equip the complete lattice $(\mathscr{A},{\preccurlyeq})$ with the implication defined by $$a\to b~:=~\bigl\{s\mapsto\beta~:~ s\in\tilde{\phi}_0(a)^{\subseteq},~\beta\in b\bigr\} \eqno(\text{for all}~a,b\in\mathscr{A})$$ Note that by construction, we have $(a\to b)\in\mathscr{A}~({=}~\mathfrak{P}up(\mathscr{A}_0))$ for all $a,b\in\mathscr{A}$. \begin{proposition} The triple $(\mathscr{A},{\preccurlyeq},{\to})$ is an implicative structure. \end{proposition} \begin{proof} \emph{Variance of implication.}\quad Let $a,a',b,b'\in\mathscr{A}$ be such that $a'\preccurlyeq a$ and $b\preccurlyeq b'$, that is: $a\subseteq a'$ and $b'\subseteq b$.
We observe that $$\begin{array}{r@{~{}~}c@{~{}~}l} a'\to b'&=&\bigl\{s\mapsto\beta~:~ s\in\tilde{\phi}_0(a')^{\subseteq},~\beta\in b'\bigr\}\\ &\subseteq&\bigl\{s\mapsto\beta~:~ s\in\tilde{\phi}_0(a)^{\subseteq},~\beta\in b\bigr\} ~=~a\to b \\ \end{array}$$ (since $\tilde{\phi}_0(a)\subseteq\tilde{\phi}_0(a')$ and $b'\subseteq b$), which means that $(a\to b)\preccurlyeq(a'\to b')$.\par\noindent \emph{Distributivity of meets w.r.t.\ $\to$.}\quad Given $a\in\mathscr{A}$ and $B\subseteq\mathscr{A}$, we observe that $$\begin{array}[b]{r@{~{}~}c@{~{}~}l} a\to\bigcurlywedge\!B &=&\displaystyle\bigl\{s\mapsto\beta~:~ s\in\tilde{\phi}_0(a)^{\subseteq},~ \beta\in{\textstyle\bigcup}B\bigr\}\\[6pt] &=&\displaystyle\bigcup_{b\in B}\bigl\{s\mapsto\beta~:~ s\in\tilde{\phi}_0(a)^{\subseteq},~\beta\in b\bigr\} ~=~\bigcurlywedge_{\!\!b\in B\!\!}(a\to b) \\ \end{array}\eqno\mbox{\qedhere}$$ \end{proof} \subsection{Viewing $\mathscr{A}$ as a new set of propositions} Let us now consider the two conversion functions $\phi:\mathscr{A}\to\Sigma$ and $\psi:\Sigma\to\mathscr{A}$ defined by $$\begin{array}{r@{~{}~}c@{~{}~}l} \phi(a)&:=&\mathbin{\dot{{\bigwedge}}}\tilde{\phi}_0(a) ~=~\mathbin{\dot{{\bigwedge}}}\{\phi_0(\alpha):\alpha\in a\}\\[3pt] \psi(\xi)&:=&\{\dot\xi\}\\ \end{array}\eqno\begin{array}{r@{}} (\text{for all}~a\in\mathscr{A})\\[3pt] (\text{for all}~\xi\in\Sigma)\\ \end{array}$$ as well as the predicate $\textit{tr}_{\mathscr{A}}:=\mathsf{P}\phi(\textit{tr}_{\Sigma})~({=}~\sem{\phi}_{\mathscr{A}})\in\mathsf{P}\mathscr{A}$.\ \ We easily check that: \begin{lemma} $\mathsf{P}\psi(\textit{tr}_{\mathscr{A}})=\textit{tr}_{\Sigma}$. 
\end{lemma} \begin{proof} Since $\phi(\psi(\xi))={\mathbin{\dot{{\bigwedge}}}}\{\xi\}$ for all $\xi\in\Sigma$, we have: $$\begin{array}[b]{r@{~{}~}c@{~{}~}l} \mathsf{P}\psi(\textit{tr}_{\mathscr{A}}) &=&\mathsf{P}\psi(\sem{\phi}_{\mathscr{A}}) ~=~\sem{\phi\circ\psi}_{\Sigma} ~=~\bigsem{\mathbin{\dot{{\bigwedge}}}\{\xi\}}_{\xi\in\Sigma} ~=~\bigsem{\mathbin{\dot{{\bigwedge}}}\bigl\{\mathrm{id}(\xi'): \xi'\in\mathrm{id}^{-1}(\xi)\bigr\}}_{\xi\in\Sigma}\\ &=&\forall\mathrm{id}_{\Sigma}\bigl(\sem{\mathrm{id}_{\Sigma}}_{\Sigma}\bigr) ~=~\sem{\mathrm{id}_{\Sigma}}_{\Sigma} ~=~\mathsf{P}\mathrm{id}_{\Sigma}(\textit{tr}_{\Sigma})~=~\textit{tr}_{\Sigma}\,. \end{array}\eqno\mbox{\qedhere}$$ \end{proof} Therefore: \begin{proposition} The predicate $\textit{tr}_{\mathscr{A}}\in\mathsf{P}\mathscr{A}$ is a generic predicate for the tripos~$\mathsf{P}$. \end{proposition} \begin{proof} For each set $X$, we want to show that the function $\csem{\_}_X:\mathscr{A}^X\to\mathsf{P}{X}$ defined by $\csem{a}_X=\mathsf{P}{a}(\textit{tr}_{\mathscr{A}})$ for all $a\in\mathscr{A}^X$ is surjective. For that, we take $p\in\mathsf{P}{X}$ and pick a code $\sigma\in\Sigma^X$ such that $p=\sem{\sigma}_X=\mathsf{P}\sigma(\textit{tr}_{\Sigma})$. From the above lemma, we get: $$p~=~\mathsf{P}{\sigma}(\textit{tr}_{\Sigma})~=~\mathsf{P}{\sigma}(\mathsf{P}\psi(\textit{tr}_{\mathscr{A}})) ~=~\mathsf{P}(\psi\circ\sigma)(\textit{tr}_{\mathscr{A}})~=~\csem{\psi\circ\sigma}_X$$ hence $a:=\psi\circ\sigma\in\mathscr{A}^{X}$ is a code for $p$ in the sense of the predicate~$\textit{tr}_{\mathscr{A}}\in\mathsf{P}\mathscr{A}$. \end{proof} To sum up, we now have two sets of propositions~$\Sigma$ and~$\mathscr{A}$, two generic predicates $\textit{tr}_{\Sigma}\in\mathsf{P}\Sigma$ and $\textit{tr}_{\mathscr{A}}\in\mathsf{P}\mathscr{A}$, as well as two decoding functions $\sem{\_}_X:\Sigma^X\to\mathsf{P}{X}$ and $\csem{\_}_X:\mathscr{A}^X\to\mathsf{P}{X}$. 
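Although $\mathscr{A}_0$ and $\mathscr{A}$ are infinite in general, for a \emph{finite} toy choice of $\Sigma$ the atoms of bounded depth, their upward closure, and the implication $a\to b$ all become computable. The following Python sketch is our own illustration (all names are ours; `dot_wedge` and `dot_imp` are arbitrary placeholders for the operations on $\Sigma$, which is harmless here since the laws being checked are purely set-theoretic):

```python
from itertools import combinations

# Finite stand-in for the set Sigma of propositions.  The operations
# dot_wedge and dot_imp are arbitrary placeholders: the assertions below
# are purely set-theoretic and do not depend on these choices.
SIGMA = (0, 1, 2)

def dot_wedge(s):                 # placeholder for the infinitary meet on Sigma
    return min(s, default=0)

def dot_imp(x, y):                # placeholder for the implication on Sigma
    return (2 * x + y) % 3

# All subsets of SIGMA, i.e. the (finite) powerset P(Sigma).
PSIGMA = [frozenset(c) for r in range(len(SIGMA) + 1)
          for c in combinations(SIGMA, r)]

# Atoms: ('dot', xi) is the base case, (s, alpha) the inductive case.
def phi0(alpha):
    """Conversion function from atoms to Sigma."""
    if alpha[0] == 'dot':
        return alpha[1]
    s, rest = alpha
    return dot_imp(dot_wedge(s), phi0(rest))

def up(atoms):
    """Upward closure w.r.t. the preorder on atoms:
    s |-> alpha <= s' |-> alpha'  iff  s ⊆ s' and alpha <= alpha'."""
    closed = set()
    for alpha in atoms:
        if alpha[0] == 'dot':
            closed.add(alpha)
        else:
            s, rest = alpha
            closed |= {(s2, r2) for s2 in PSIGMA if s <= s2
                       for r2 in up({rest})}
    return closed

def imp(a, b):
    """a -> b := { s |-> beta : phi0~(a) ⊆ s, beta in b }."""
    phi_a = frozenset(phi0(alpha) for alpha in a)
    return {(s, beta) for s in PSIGMA if phi_a <= s for beta in b}

# Some upward-closed sets (elements of the toy A); recall the ordering on A
# is reverse inclusion, so a2 below is *smaller* than a w.r.t. that ordering.
a  = up({('dot', 0)})
a2 = a | up({('dot', 1)})
b  = up({(frozenset({1}), ('dot', 2))})
b2 = b | up({('dot', 0)})

assert imp(a, b) == up(imp(a, b))                  # a -> b lands in A again
assert imp(a2, b) <= imp(a, b2)                    # variance of implication
assert imp(a, b | b2) == imp(a, b) | imp(a, b2)    # meets (unions) distribute
```

Each assertion mirrors one step of the proof that $(\mathscr{A},{\preccurlyeq},{\to})$ is an implicative structure; this is only a finite sanity check, not a substitute for the proof.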
As usual, we write $\phi^X:\mathscr{A}^X\to\Sigma^X$ and $\psi^X:\Sigma^X\to\mathscr{A}^X$ (for $X\in\mathbf{Set}$) the natural transformations induced by the two maps $\phi:\mathscr{A}\to\Sigma$ and $\psi:\Sigma\to\mathscr{A}$. We easily check that: \begin{proposition} For each set~$X$, the following two diagrams commute: $$\xymatrix{ \Sigma^X\ar[r]^{\sem{\_}_X}&\mathsf{P}{X}\ar@{=}[d] \\ \mathscr{A}^X\ar[u]^{\phi^X}\ar[r]_{\csem{\_}_X}&\mathsf{P}{X} \\ }\qquad\qquad \xymatrix{ \Sigma^X\ar[d]_{\psi^X}\ar[r]^{\sem{\_}_X}&\mathsf{P}{X}\ar@{=}[d] \\ \mathscr{A}^X\ar[r]_{\csem{\_}_X}&\mathsf{P}{X} \\ }$$ \end{proposition} \begin{proof} For all $a\in\mathscr{A}^X$, we have\ \ $\sem{\phi^X(a)}_X=\sem{\phi\circ a}_X =\mathsf{P}{a}(\mathsf{P}\phi(\textit{tr}_{\Sigma}))=\mathsf{P}{a}(\textit{tr}_{\mathscr{A}})=\csem{a}_X$.\\ And for all $\sigma\in\Sigma^X$, we have\ \ $\csem{\psi^X(\sigma)}_X=\csem{\psi\circ\sigma}_X =\mathsf{P}{\sigma}(\mathsf{P}\psi(\textit{tr}_{\mathscr{A}}))=\mathsf{P}{\sigma}(\textit{tr}_{\Sigma}) =\sem{\sigma}_X$. \end{proof} \subsection{Universal quantification in~$\mathscr{A}$} \label{ss:QuantA} By analogy with the construction performed in Section~\ref{ss:DefQuantSigma}, we now consider the membership relation $E':=\{(a,A):a\in A\}\subseteq\mathscr{A}\times\mathfrak{P}(\mathscr{A})$ together with the corresponding projections $e'_1:E'\to\mathscr{A}$ and $e'_2:E'\to\mathfrak{P}(\mathscr{A})$. The following proposition states that the operator $({\bigcurlywedge}):\mathfrak{P}(\mathscr{A})\to\mathscr{A}$ is a code for universal quantification in the sense of the generic predicate $\textit{tr}_{\mathscr{A}}\in\mathsf{P}\mathscr{A}$: \begin{proposition}\label{p:QuantA0} $\bigcsem{{\bigcurlywedge}A}_{A\in\mathfrak{P}(\mathscr{A})}= \forall{e'_2}\bigl(\csem{e'_1}_{E'}\bigr)$.
\end{proposition} \begin{proof} Indeed, we have: $$\begin{array}[b]{r@{~{}~}c@{~{}~}l@{\hskip-5mm}} \bigcsem{{\bigcurlywedge}A}_{A\in\mathfrak{P}(\mathscr{A})} &=&\bigcsem{{\bigcup}A}_{A\in\mathfrak{P}(\mathscr{A})} ~=~\bigsem{\phi\bigl({\bigcup}A\bigr)}_{A\in\mathfrak{P}(\mathscr{A})} ~=~\bigsem{{\mathbin{\dot{{\bigwedge}}}}\tilde{\phi}_0\bigl({\bigcup}A\bigr)} _{A\in\mathfrak{P}(\mathscr{A})}\\ &=&\bigsem{{\mathbin{\dot{{\bigwedge}}}}{\bigcup}\mathfrak{P}\tilde{\phi}_0(A)}_{A\in\mathfrak{P}(\mathscr{A})} ~=~\mathsf{P}(\mathfrak{P}\tilde{\phi}_0)\bigl(\bigsem{{\mathbin{\dot{{\bigwedge}}}}{\bigcup}S} _{S\in\mathfrak{P}(\mathfrak{P}(\Sigma))}\bigr) \\ &=&\mathsf{P}(\mathfrak{P}\tilde{\phi}_0)\bigl(\bigsem{{\mathbin{\dot{{\bigwedge}}}} \bigl\{{\mathbin{\dot{{\bigwedge}}}}s:s\in S\bigr\}} _{S\in\mathfrak{P}(\mathfrak{P}(\Sigma))}\bigr)\\ &=&\bigsem{{\mathbin{\dot{{\bigwedge}}}} \bigl\{{\mathbin{\dot{{\bigwedge}}}}\tilde{\phi}_0(a):a\in A\bigr\}}_{A\in\mathfrak{P}(\mathscr{A})} ~=~\bigsem{{\mathbin{\dot{{\bigwedge}}}} \bigl\{\phi(a):a\in A\bigr\}}_{A\in\mathfrak{P}(\mathscr{A})}\\ &=&\bigsem{{\mathbin{\dot{{\bigwedge}}}} \bigl\{\phi(e'_1(p)):p\in{e'}_2^{-1}(A)\}}_{A\in\mathfrak{P}(\mathscr{A})}\\ &=&\forall{e'_2}\bigl(\sem{\phi\circ e'_1}_{E'}\bigr) ~=~\forall{e'_2}\bigl(\csem{e'_1}_{E'}\bigr)\,. \\ \end{array}\eqno\begin{tabular}[b]{r@{}} (by Prop.~\ref{p:MergeQuantSigma})\\\\\\ \mbox{\qedhere}\\ \end{tabular}$$ \end{proof} From the above result, we deduce that: \begin{proposition}\label{p:QuantA} Given a code $a\in\mathscr{A}^X$ and a map $f:X\to Y$, we have: $$\textstyle\bigcsem{\bigcurlywedge\{a_x:x\in f^{-1}(y)\}}_{y\in Y} ~=~\forall{f}(\csem{a}_X)\eqno({\in}~\mathsf{P}{Y})$$ \end{proposition} \begin{proof} Same argument as for Prop.~\ref{p:QuantSigma} p.~\pageref{p:QuantSigma}. 
\end{proof} \subsection{Implication in~$\mathscr{A}$} \label{ss:ImpA} It now remains to show that the operation $({\to}):\mathscr{A}\times\mathscr{A}\to\mathscr{A}$ is a code for implication in the sense of the generic predicate $\textit{tr}_{\mathscr{A}}\in\mathsf{P}\mathscr{A}$. For that, we first need to prove the following technical lemma: \begin{lemma}\label{l:ImpTech} $\Bigsem{{\mathbin{\dot{{\bigwedge}}}} \Bigl\{\bigl({\mathbin{\dot{{\bigwedge}}}}s'\bigr)\mathbin{\dot{{\to}}}\xi~:~ s'\in s^{\subseteq},~\xi\in t\Bigr\}} _{(s,t)\in\mathfrak{P}(\Sigma)^2}= \Bigsem{{\mathbin{\dot{{\bigwedge}}}} \Bigl\{\bigl({\mathbin{\dot{{\bigwedge}}}}s\bigr)\mathbin{\dot{{\to}}}\xi~:~\xi\in t\Bigr\}} _{(s,t)\in\mathfrak{P}(\Sigma)^2}$. \end{lemma} \begin{proof} Let us consider the set $G:=\{(s,t,s',\xi):s'\supseteq s,~\xi\in t\} \subseteq\mathfrak{P}(\Sigma)\times\mathfrak{P}(\Sigma)\times\mathfrak{P}(\Sigma)\times\Sigma$ equipped with the four projections $g_1,g_2,g_3:G\to\mathfrak{P}(\Sigma)$, $g_4:G\to\Sigma$ and the function $g:G\to G$ defined by $g(s,t,s',\xi)=(s,t,s,\xi)$ for all $(s,t,s',\xi)\in G$.
We observe that $$\begin{array}{l@{~{}~}l} &\Bigsem{{\mathbin{\dot{{\bigwedge}}}} \Bigl\{\bigl({\mathbin{\dot{{\bigwedge}}}}s'\bigr)\mathbin{\dot{{\to}}}\xi~:~ s'\in s^{\subseteq},~\xi\in t\Bigr\}} _{(s,t)\in\mathfrak{P}(\Sigma)^2}\\ =&\Bigsem{{\mathbin{\dot{{\bigwedge}}}} \Bigl\{\bigl({\mathbin{\dot{{\bigwedge}}}}g_3(z)\bigr)\mathbin{\dot{{\to}}} g_4(z)~:~ z\in\langle g_1,g_2\rangle^{-1}(s,t)\Bigr\}}_{(s,t)\in\mathfrak{P}(\Sigma)^2}\\ =&\forall\langle g_1,g_2\rangle\Bigl( \bigsem{{\mathbin{\dot{{\bigwedge}}}}\circ g_3}_G\to\sem{g_4}_G\Bigr)\\ \end{array}$$ whereas $$\begin{array}{l@{~{}~}l} &\Bigsem{{\mathbin{\dot{{\bigwedge}}}} \Bigl\{\bigl({\mathbin{\dot{{\bigwedge}}}}s\bigr)\mathbin{\dot{{\to}}}\xi~:~\xi\in t\Bigr\}} _{(s,t)\in\mathfrak{P}(\Sigma)^2}\\ =&\Bigsem{{\mathbin{\dot{{\bigwedge}}}} \Bigl\{\bigl({\mathbin{\dot{{\bigwedge}}}}g_1(z)\bigr)\mathbin{\dot{{\to}}} g_4(z)~:~ z\in\langle g_1,g_2\rangle^{-1}(s,t)\Bigr\}}_{(s,t)\in\mathfrak{P}(\Sigma)^2}\\ =&\forall\langle g_1,g_2\rangle\Bigl( \bigsem{{\mathbin{\dot{{\bigwedge}}}}\circ g_1}_G\to\sem{g_4}_G\Bigr)\\ \end{array}$$ So that we have to prove that\ \ $\forall\langle g_1,g_2\rangle\Bigl( \bigsem{{\mathbin{\dot{{\bigwedge}}}}\circ g_3}_G\to\sem{g_4}_G\Bigr)~=~ \forall\langle g_1,g_2\rangle\Bigl( \bigsem{{\mathbin{\dot{{\bigwedge}}}}\circ g_1}_G\to\sem{g_4}_G\Bigr)$. \smallbreak\noindent $({\le})$\quad We observe that $$\textstyle \mathsf{P}{g}\Bigl(\bigsem{{\mathbin{\dot{{\bigwedge}}}}\circ g_3}_G\to\sem{g_4}_G\Bigr) ~=~\bigsem{{\mathbin{\dot{{\bigwedge}}}}\circ g_3\circ g}_G\to\sem{g_4\circ g}_G ~=~\bigsem{{\mathbin{\dot{{\bigwedge}}}}\circ g_1}_G\to\sem{g_4}_G\,,$$ since $g_3\circ g=g_1$ and $g_4\circ g=g_4$.
By right adjunction we thus get $$\begin{array}[b]{r@{~{}~}c@{~{}~}l} \bigsem{{\mathbin{\dot{{\bigwedge}}}}\circ g_3}_G\to\sem{g_4}_G&\le& \forall{g}\Bigl( \bigsem{{\mathbin{\dot{{\bigwedge}}}}\circ g_1}_G\to\sem{g_4}_G\Bigr) \\ \forall\langle g_1,g_2\rangle\Bigl( \bigsem{{\mathbin{\dot{{\bigwedge}}}}\circ g_3}_G\to\sem{g_4}_G\Bigr)&\le& (\forall\langle g_1,g_2\rangle\circ\forall{g})\Bigl( \bigsem{{\mathbin{\dot{{\bigwedge}}}}\circ g_1}_G\to\sem{g_4}_G\Bigr)\\ \forall\langle g_1,g_2\rangle\Bigl( \bigsem{{\mathbin{\dot{{\bigwedge}}}}\circ g_3}_G\to\sem{g_4}_G\Bigr)&\le& \forall\langle g_1,g_2\rangle\Bigl( \bigsem{{\mathbin{\dot{{\bigwedge}}}}\circ g_1}_G\to\sem{g_4}_G\Bigr)\,,\\ \end{array}\leqno\begin{tabular}[b]{@{}l} hence\\[3pt]and thus\\ \end{tabular}$$ using the fact that $\langle g_1,g_2\rangle\circ g=\langle g_1\circ g,g_2\circ g\rangle=\langle g_1,g_2\rangle$. \smallbreak\noindent $({\ge})$\quad From Coro.~\ref{c:SubQuantSigma}, we get $\bigsem{{\mathbin{\dot{{\bigwedge}}}}\circ g_3}_G\le \bigsem{{\mathbin{\dot{{\bigwedge}}}}\circ g_1}_G$ (since $g_1(z)\subseteq g_3(z)$ for all $z\in G$). Hence\ \ $\bigsem{{\mathbin{\dot{{\bigwedge}}}}\circ g_3}_G\to\sem{g_4}_G~\ge~ \bigsem{{\mathbin{\dot{{\bigwedge}}}}\circ g_1}_G\to\sem{g_4}_G$,\ \ and thus: $$\textstyle\forall\langle g_1,g_2\rangle\Bigl( \bigsem{{\mathbin{\dot{{\bigwedge}}}}\circ g_3}_G\to\sem{g_4}_G\Bigr)~\ge~ \forall\langle g_1,g_2\rangle\Bigl( \bigsem{{\mathbin{\dot{{\bigwedge}}}}\circ g_1}_G\to\sem{g_4}_G\Bigr)\,. \eqno\mbox{\qedhere}$$ \end{proof} We can now state the desired property: \begin{proposition}\label{p:ImpA0} $\csem{a\to b}_{(a,b)\in\mathscr{A}^2}= \csem{\pi}_{\mathscr{A}^2}\to\csem{\pi'}_{\mathscr{A}^2} ~({\in}~\mathsf{P}(\mathscr{A}\times\mathscr{A}))$,\\ writing $\pi,\pi':\mathscr{A}^2\to\mathscr{A}$ the two projections from~$\mathscr{A}^2$ to~$\mathscr{A}$.
\end{proposition} \begin{proof} Indeed, we have: $$\begin{array}[b]{r@{~{}~}l@{~{}~}l@{\hskip-10mm}} \csem{a\to b}_{(a,b)\in\mathscr{A}^2} &=&\sem{\phi(a\to b)}_{(a,b)\in\mathscr{A}^2} \\ &=&\Bigsem{\phi\Bigl(\bigl\{s'\mapsto\beta~:~ s'\in\tilde{\phi}_0(a)^{\subseteq},~\beta\in b\bigr\}\Bigr)} _{(a,b)\in\mathscr{A}^2} \\ &=&\Bigsem{{\mathbin{\dot{{\bigwedge}}}} \Bigl\{\bigl({\mathbin{\dot{{\bigwedge}}}}s'\bigr)\mathbin{\dot{{\to}}}\xi~:~ s'\in\tilde{\phi}_0(a)^{\subseteq},~\xi\in\tilde{\phi}_0(b)\Bigr\}} _{(a,b)\in\mathscr{A}^2} \\ &=&\mathsf{P}(\tilde{\phi}_0\times\tilde{\phi}_0)\left( \Bigsem{{\mathbin{\dot{{\bigwedge}}}} \Bigl\{\bigl({\mathbin{\dot{{\bigwedge}}}}s'\bigr)\mathbin{\dot{{\to}}}\xi~:~ s'\in s^{\subseteq},~\xi\in t\Bigr\}} _{(s,t)\in\mathfrak{P}(\Sigma)^2}\right) \\ &=&\mathsf{P}(\tilde{\phi}_0\times\tilde{\phi}_0)\left( \Bigsem{{\mathbin{\dot{{\bigwedge}}}} \Bigl\{\bigl({\mathbin{\dot{{\bigwedge}}}}s\bigr)\mathbin{\dot{{\to}}}\xi~:~\xi\in t\Bigr\}} _{(s,t)\in\mathfrak{P}(\Sigma)^2}\right) \\ &=&\mathsf{P}(\tilde{\phi}_0\times\tilde{\phi}_0)\left( \bigsem{\bigl({\mathbin{\dot{{\bigwedge}}}}s\bigr)\mathbin{\dot{{\to}}}\bigl({\mathbin{\dot{{\bigwedge}}}}t\bigr)} _{(s,t)\in\mathfrak{P}(\Sigma)^2}\right) \\ &=&\bigsem{\bigl({\mathbin{\dot{{\bigwedge}}}}\tilde{\phi}_0(a)\bigr)\mathbin{\dot{{\to}}} \bigl({\mathbin{\dot{{\bigwedge}}}}\tilde{\phi}_0(b)\bigr)}_{(a,b)\in\mathscr{A}^2} ~=~\sem{\phi(a)\mathbin{\dot{{\to}}}\phi(b)}_{(a,b)\in\mathscr{A}^2}\\ &=&\sem{\phi\circ\pi}_{\mathscr{A}^2}\to\sem{\phi\circ\pi'}_{\mathscr{A}^2} ~=~\csem{\pi}_{\mathscr{A}^2}\to\csem{\pi'}_{\mathscr{A}^2} \\ \end{array}\eqno\begin{tabular}[b]{r@{}} (by Lemma~\ref{l:ImpTech})\\[5pt] (by Coro.~\ref{c:DistrForallImp})\\[3pt]\\ \mbox{\qedhere}\\ \end{tabular}$$ \end{proof} \begin{proposition}\label{p:ImpA} Let $X$ be a set.
For all codes $a,b\in\mathscr{A}^X$, we have $$\textstyle \csem{a_x\to b_x}_{x\in X}~=~\csem{a}_X\to\csem{b}_X \eqno({\in}~\mathsf{P}{X})$$ \end{proposition} \begin{proof} Same argument as for Prop.~\ref{p:ConnSigma} p.~\pageref{p:ConnSigma}. \end{proof} \subsection{Defining the separator $S\subseteq\mathscr{A}$} By analogy with the construction of the pseudo-filter $\Phi\subseteq\Sigma$ (cf.\ Section~\ref{ss:DefFilterSigma}), we let $$S~:=~\bigl\{a\in\mathscr{A}~:~\csem{a}_{\_\in1}=\top_1\bigr\} \eqno({\subseteq}~\mathscr{A})$$ writing $\top_1$ the top element of $\mathsf{P}{1}$. Note that by construction, we have $$S~=~\bigl\{a\in\mathscr{A}~:~\sem{\phi(a)}_{\_\in1}=\top_1\bigr\} ~=~\bigl\{a\in\mathscr{A}~:~\phi(a)\in\Phi\bigr\} ~=~\phi^{-1}(\Phi)\,.$$ \begin{proposition} The subset $S\subseteq\mathscr{A}$ is a separator of the implicative structure $(\mathscr{A},{\preccurlyeq},{\to})$. \end{proposition} \begin{proof} \emph{$S$ is upwards closed.}\quad Let $a,b\in\mathscr{A}$ such that $a\in S$ and $a\preccurlyeq b$ (that is: $b\subseteq a$). From these assumptions, we have $\csem{a}_{\_\in1}=\top_1$ and $\tilde{\phi}_0(b)\subseteq\tilde{\phi}_0(a)$, hence $$\textstyle\top_1~=~\csem{a}_{\_\in 1} ~=~\bigsem{{\mathbin{\dot{{\bigwedge}}}}\tilde{\phi}_0(a)}_{\_\in 1} ~=~\bigsem{{\mathbin{\dot{{\bigwedge}}}}\circ(\tilde{\phi}_0(a))_{\_\in 1}}_1 ~\le~\bigsem{{\mathbin{\dot{{\bigwedge}}}}\circ(\tilde{\phi}_0(b))_{\_\in 1}}_1 ~=~\csem{b}_{\_\in 1}$$ (from Coro.~\ref{c:SubQuantSigma}) and thus $\csem{b}_{\_\in 1}=\top_1$. Therefore $b\in S$.
\smallbreak\noindent \emph{$S$ contains $\mathbf{K}^{\mathscr{A}}$ and $\mathbf{S}^{\mathscr{A}}$}.\quad We observe that $$\begin{array}{r@{~{}~}c@{~{}~}l} \bigcsem{\mathbf{K}^{\mathscr{A}}}_{\_\in1} &=&\Bigcsem{\bigcurlywedge_{a\in\mathscr{A}}\bigcurlywedge_{b\in\mathscr{A}} (a\to b\to a)}_{\_\in1}\\[6pt] &=&\forall{\pi_{1,\mathscr{A}}}\Bigl( \Bigcsem{\bigcurlywedge_{b\in\mathscr{A}}(a\to b\to a)} _{(\_\,,a)\in1\times\mathscr{A}}\Bigr)\\[6pt] &=&\forall{\pi_{1,\mathscr{A}}}\bigl(\forall{\pi_{1\times\mathscr{A},\mathscr{A}}}\bigl( \bigcsem{a\to b\to a} _{((\_\,,a),b)\in(1\times\mathscr{A})\times\mathscr{A}}\bigr)\bigr)\\[3pt] &=&\forall{\pi_{1,\mathscr{A}}}\bigl(\forall{\pi_{1\times\mathscr{A},\mathscr{A}}} \bigl(\top_{(1\times\mathscr{A})\times\mathscr{A}}\bigr)\bigr) ~=~\top_1\\ \end{array}$$ hence $\mathbf{K}^{\mathscr{A}}\in S$. The proof that $\mathbf{S}^{\mathscr{A}}\in S$ is analogous. \smallbreak\noindent \emph{$S$ is closed under modus ponens}.\quad Suppose that $(a\to b)\in S$ and $a\in S$. This means that $\csem{a\to b}_{\_\in1}= \csem{a}_{\_\in 1}\to\csem{b}_{\_\in 1}=\top_1$ and $\csem{a}_{\_\in1}=\top_1$. Hence $\csem{b}_{\_\in1}=\top_1$, and thus $b\in S$.
\end{proof} Similarly to Prop.~\ref{p:OrderPhi} p.~\pageref{p:OrderPhi}, the following proposition characterizes the ordering on each Heyting algebra $\mathsf{P}{X}$ from the operations of~$\mathscr{A}$ and the separator $S\subseteq\mathscr{A}$: \begin{proposition}\label{p:CharacSeparA} For all sets $X$ and for all codes $a,b\in\mathscr{A}^X$, we have: $$\csem{a}_X\le\csem{b}_X\quad\text{iff}\quad \bigcurlywedge_{x\in X}(a_x\to b_x)~\in~S\,.$$ \end{proposition} \begin{proof} Writing $1_X:X\to 1$ the unique map from~$X$ to~$1$, we have: $$\begin{array}[b]{r@{\quad}c@{\quad}l} \csem{a}_X\le\csem{b}_X &\text{iff}&\top_X\le\csem{a}_X\to\csem{b}_X \\ &\text{iff}&\mathsf{P}{1_X}(\top_1)\le\csem{a_x\to b_x}_{x\in X}\\ &\text{iff}&\top_1\le\forall{1_X} \bigl(\csem{a_x\to b_x}_{x\in X}\bigr)\\ &\text{iff}&\top_1\le\bigcsem{{\bigcurlywedge} \{a_x\to b_x:x\in X\}}_{\_\in 1}\\ &\text{iff}&\displaystyle\bigcurlywedge_{x\in X}(a_x\to b_x)~\in~S\,. \\ \end{array}\eqno\mbox{\qedhere}$$ \end{proof} \subsection{Constructing the isomorphism} Let us now write $\mathsf{P}':\mathbf{Set}^{\mathrm{op}}\to\mathbf{HA}$ the tripos induced by the implicative algebra $(\mathscr{A},{\preccurlyeq},{\to},S)$ (Section~\ref{ss:ImpAlg}). Recall that for each set $X$, the Heyting algebra $\mathsf{P}'{X}:=\mathscr{A}^X/S[X]$ is the poset reflection of the preordered set $(\mathscr{A}^X,\vdash_{S[X]})$ whose preorder $\vdash_{S[X]}$ is given by $$a\vdash_{S[X]}b\quad\text{iff}\quad \bigcurlywedge_{x\in X}(a_x\to b_x)~\in~S \eqno(\text{for all}~a,b\in\mathscr{A}^X)$$ It now remains to show that: \begin{proposition} The implicative tripos~$\mathsf{P}'$ is isomorphic to the tripos~$\mathsf{P}$. \end{proposition} \begin{proof} Let us consider the family of maps $\rho_X:=\csem{\_}_X:\mathscr{A}^X\to\mathsf{P}{X}$, which is clearly natural in the parameter set~$X$. 
From Prop.~\ref{p:CharacSeparA}, we have $$a\vdash_{S[X]}b\quad\text{iff}\quad \bigcurlywedge_{x\in X}(a_x\to b_x)~\in~S \quad\text{iff}\quad\rho_X(a)\le\rho_X(b) \eqno(\text{for all}~a,b\in\mathscr{A}^X)$$ hence $\rho_X:\mathscr{A}^X\to\mathsf{P}{X}$ induces an embedding of posets $\tilde{\rho}_X:\mathsf{P}'{X}\to\mathsf{P}{X}$ through the quotient $\mathsf{P}'{X}:=\mathscr{A}^X/S[X]$. Moreover, the map $\tilde{\rho}_X:\mathsf{P}'{X}\to\mathsf{P}{X}$ is surjective (since $\rho_X$ is), therefore it is an isomorphism from the tripos~$\mathsf{P}'$ to the tripos~$\mathsf{P}$. \end{proof} The proof of Theorem~\ref{th:Thm} p.~\pageref{th:Thm} is now complete. \subsection{The case of classical triposes} In~\cite{Miq20}, we showed that each classical implicative tripos (that is: a tripos induced by a classical implicative algebra~$\mathscr{A}$) is isomorphic to a Krivine tripos (that is: a tripos induced by an abstract Krivine structure). Combining this result with Theorem~\ref{th:Thm}, we deduce that classical realizability provides a complete description of all classical triposes $\mathsf{P}:\mathbf{Set}^{\mathrm{op}}\to\mathbf{BA}$ (writing $\mathbf{BA}\subset\mathbf{HA}$ the full subcategory of Boolean algebras): \begin{corollary} Each classical tripos $\mathsf{P}:\mathbf{Set}^{\mathrm{op}}\to\mathbf{BA}$ is isomorphic to a Krivine tripos. \end{corollary} \end{document}
\begin{document} \title{Extremal Restraints for Graph Colourings} \author{Jason I. Brown\footnote{Communicating author.}~, Aysel Erey and Jian Li \\ Department of Mathematics and Statistics\\ Dalhousie University\\ Halifax, Nova Scotia, Canada B3H 3J5} \date{} \maketitle \begin{abstract} A {\em restraint} on a (finite undirected) graph $G = (V,E)$ is a function $r$ on $V$ such that $r(v)$ is a finite subset of ${\mathbb N}$; a proper vertex colouring $c$ of $G$ is {\em permitted} by $r$ if $c(v) \notin r(v)$ for all vertices $v$ of $G$ (we think of $r(v)$ as the set of colours {\em forbidden} at $v$). Given a large number of colours, for restraints $r$ with exactly one colour forbidden at each vertex, the smallest number of colourings is permitted when $r$ is a constant function, but the problem of determining which restraints permit the largest number of colourings is more difficult. We determine such extremal restraints for complete graphs and trees. \end{abstract} \vskip 3 ex {\bf Keywords:} graph colouring, chromatic polynomial, restraint, restrained chromatic polynomial \vskip 3 ex \section{Introduction} A (proper vertex) {\em k-colouring} of a finite undirected graph $G$ is a function $f:V(G) \rightarrow \{1,2,\ldots,k\}$ such that for every edge $e = uv$ of $G$, $f(u) \neq f(v)$ (we will denote by $[k] = \{1,2,\ldots,k\}$ the set of {\em colours}). There are variants of vertex colourings that have been of interest. In a {\em list colouring}, for each vertex $v$ there is a finite list (that is, set) $L(v)$ of colours available for use, and then one wishes to properly colour the vertices such that the colour of $v$ is from $L(v)$. If $|L(v)|=k$ for every vertex $v$, then a list colouring is called a {\em k-list colouring}. There is a vast literature on list colourings (see, for example, \cite{alon} and \cite{chartrand}, Section 9.2).
We are going to consider a complementary problem, namely colouring the vertices of a graph $G$ where each vertex $v$ has a {\em forbidden} finite set of colours, $r(v) \subset {\mathbb N}$ (we allow $r(v)$ to be equal to the empty set); we call the function $r$ a {\em restraint} on the graph. A restraint $r$ is called an {\em $m$-restraint} if $|r(u)| \leq m$ for every $u\in V(G)$, and $r$ is called a {\em standard $m$-restraint} if $|r(u)| = m$ for every $u\in V(G)$. If $m = 1$ (that is, we forbid at most one colour at each vertex) we omit $m$ from the notation and use the word {\em simple} when discussing such restraints. A $k$-colouring $c$ of $G$ is {\em permitted} by restraint $r$ (or $c$ is a colouring {\em with respect to $r$}) if for all vertices $v$ of $G$, $c(v) \notin r(v)$. Restrained colourings arise in a natural way as a graph is sequentially coloured, since the colours already assigned to vertices induce a set of forbidden colours on their uncoloured neighbours. Restrained colourings can also arise in scheduling problems where certain time slots are unavailable for certain nodes (c.f.\ \cite{kubale}). Moreover, restraints are of use in the construction of critical graphs (with respect to colourings) \cite{toft}; a $k$-chromatic graph $G = (V,E)$ is said to be {\em $k$-amenable} iff every non-constant simple restraint $r:V \rightarrow \{1,2,\ldots,k\}$ permits a $k$-colouring \cite{amenable,roberts}. Finally, observe that if each vertex $v$ of a graph $G$ has a list of available colours $L(v)$, and, without loss of generality, \[ L = \bigcup_{v \in V(G)} L(v) \subseteq [k] \] then setting $r(v) = \{1,2,\ldots,k\} - L(v)$ we see that $G$ is list colourable with respect to the lists $L(v)$ iff $G$ has a $k$-colouring permitted by $r$. The well known {\em chromatic polynomial} $\pi(G,x)$ (see, for example, \cite{chrompolybook}) counts the number of $x$-colourings of $G$.
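The correspondence between lists and restraints noted above is easy to check by brute force on small instances. The following Python sketch is our own illustration (the adjacency-dictionary graph encoding and all function names are ours, not the paper's):

```python
from itertools import product

def count_list_colourings(graph, lists):
    """Brute-force number of proper colourings c with c(v) in lists[v].
    graph: dict mapping each vertex to the set of its neighbours."""
    verts = sorted(graph)
    count = 0
    for choice in product(*(sorted(lists[v]) for v in verts)):
        col = dict(zip(verts, choice))
        # 'u < w' visits each undirected edge exactly once.
        if all(col[u] != col[w] for u in graph for w in graph[u] if u < w):
            count += 1
    return count

def count_restrained_colourings(graph, restraint, k):
    """Number of proper k-colourings c with c(v) not in restraint[v],
    via the complementary lists L(v) = [k] - r(v)."""
    lists = {v: set(range(1, k + 1)) - restraint[v] for v in graph}
    return count_list_colourings(graph, lists)

# The triangle C3 with lists inside [4], and the complementary restraints
# r(v) = [4] - L(v): both counts agree, as observed above.
C3 = {1: {2, 3}, 2: {1, 3}, 3: {1, 2}}
L = {1: {1, 2}, 2: {2, 3, 4}, 3: {1, 4}}
r = {v: set(range(1, 5)) - L[v] for v in C3}
assert count_list_colourings(C3, L) == count_restrained_colourings(C3, r, 4)
```

This exponential enumeration is only meant to illustrate the reduction, not to be an efficient algorithm.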
Given a restraint $r$ on a graph $G$, we define the {\em restrained chromatic polynomial} of $G$ with respect to $r$, $\pi_{r}(G,x)$, to be the number of $x$-colourings permitted by restraint $r$. Note that this function extends the definition of the chromatic polynomial, as if $r(v) = \emptyset$ for all vertices $v$, then $\pi_r(G,x) = \pi(G,x)$. Using standard techniques (i.e.\ the deletion/contraction formula), we can show that the restrained chromatic polynomial $\pi_{r}(G,x)$ is a polynomial function of $x$ for $x$ sufficiently large, and like chromatic polynomials, the restrained chromatic polynomial of a graph $G$ of order $n$ is monic of degree $n$ with integer coefficients that alternate in sign, but unlike chromatic polynomials, the constant term need not be 0 (we can show that the constant term for any restraint $r$ on $\overline{K_{n}}$ is $(-1)^{n}\prod_{v \in V(\overline{K_{n}})} |r(v)|$). Also, note that if $r$ is a constant standard $m$-restraint, say $r(v) = S$ for all $v \in V$, then $\pi_{r}(G,x) = \pi(G,x-m)$ for $x$ at least as large as $\mbox{max}(S)$. Observe that if $r'$ arises from $r$ by a permutation of colours, then $\pi_r(G,x)=\pi_{r'}(G,x)$ for all $x$ sufficiently large. Thus if $\displaystyle{k = \sum_{v \in V(G)} |r(v)|}$ then we can assume (as we shall do for the rest of this paper) that each $r(v) \subseteq [k]$, and so there are only finitely many restrained chromatic polynomials on a given graph $G$. Hence past some point (past the roots of all of the differences of such polynomials), one polynomial exceeds (or is exceeded by) all of the rest, no matter what $x$ is. As an example, consider the cycle $C_{3}$.
There are essentially three different kinds of standard simple restraints on $C_{3}$, namely $r_{1}= [\{1\}, \{1\}, \{1\}]$, $r_{2} = [\{1\}, \{2\}, \{1\}]$ and $r_{3}= [\{1\}, \{2\}, \{3\}]$ (if the vertices of $G$ are ordered as $v_1,v_2,\ldots,v_n$, then we usually write $r$ in the form $[r(v_1),r(v_2),\ldots,r(v_n)]$). For $x\geq 3$, the restrained chromatic polynomials with respect to these restraints can be calculated as \begin{eqnarray*} \pi_{r_1}(C_3,x) & = & (x-1)(x-2)(x-3),\\ \pi_{r_2}(C_3,x) & = & (x-2)(x^2-4x+5), \mbox{ and} \\ \pi_{r_3}(C_3,x) & = & 2(x-2)^2+(x-2)(x-3)+(x-3)^3, \end{eqnarray*} where $\pi_{r_1}(C_3,x)<\pi_{r_2}(C_3,x)<\pi_{r_3}(C_3,x)$ holds for $x>3$. Our focus in this paper is on the following: given a graph $G$ and $x$ large enough, what standard simple restraints permit the largest/smallest number of $x$-colourings? In the next section, we give a complete answer to the minimization part of this question, and then turn our attention to the more difficult maximization problem, and in the case of complete graphs and trees, describe the standard simple restraints which permit the largest number of colourings. \section{Standard Restraints permitting the extremal number of colourings} The standard $m$-restraints that permit the smallest number of colourings are easy to describe, and are, in fact, the same for all graphs. In \cite{carsten} (see also \cite{donner}) it was proved that if a graph $G$ of order $n$ has a list of at least $k$ available colours at every vertex, then the number of list colourings is at least $\pi(G,k)$ for any natural number $k\geq n^{10}$. As we already pointed out, given a standard $m$-restraint $r$ on a graph $G$ and a natural number $x\geq mn$, we can consider an $x$-colouring permitted by $r$ as a list colouring $L$ where each vertex $v$ has a list $L(v)=[x]-r(v)$ of $x-m$ available colours.
Therefore, we derive that for a standard $m$-restraint $r$ on graph $G$, $\pi_r(G,x) \geq \pi(G,x-m)$ for any natural number $x\geq n^{10}+mn$. But $\pi(G,x-m)$ is precisely $\pi_{r_{const}^m}(G,x)$, the number of colourings permitted by the {\em constant} standard $m$-restraint $r_{const}^m$ in which $\{1,2,\ldots,m\}$ is restrained at each vertex. In particular, for any graph $G$, the constant standard simple restraints always permit the smallest number of colourings (provided the number of colours is large enough). The more difficult question is which standard $m$-restraints permit the largest number of colourings; even for standard simple restraints it appears difficult, so we will focus on this question. As we shall see, the extremal simple restraints differ from graph to graph. We investigate the extremal problem for two important families of graphs: complete graphs and trees. \subsection{Complete graphs} First, we prove that for complete graphs, the standard simple restraints that allow for the largest number of colourings are obtained when all vertices have different restrained colours. \begin{theorem}\label{completethm} Let $r: \{ v_{1}, v_{2}, \ldots, v_{n} \} \longrightarrow [n]$ be any standard simple restraint on $K_{n}$. Then for all $x \geq n$, $ \pi_{r}(K_{n}, x) \le \pi_{r'}(K_{n}, x)$, where $r'(v_{i})=i$ for all $i \le n$. \end{theorem} \begin{proof} We show that if two vertices of a complete graph have the same forbidden colour, then we can improve the situation, colouring-wise, by reassigning the restraint at one of these vertices to a colour not forbidden elsewhere. Let $r_{1}: \{ v_{1}, v_{2}, \ldots, v_{n} \} \longrightarrow [n]$ be a standard simple restraint on $K_{n}$ with $r_{1}(v_{i}) = r_{1}(v_{j})= t$ for some $i \neq j$; since $r_{1}$ is then not one-to-one, there is an element $l \in [n]$ such that $l \notin r_{1}(V(K_{n}))$. Then setting \[ r'_{1}(v_{s}) = \left\{ \begin{array}{ll} r_{1}(v_{s}) & \mbox{ if } s \neq j\\ l & \mbox{ if } s = j \end{array} \right.
\] we will show that $\pi_{r_{1}}(K_{n}, x) \le \pi_{r'_{1}}(K_{n}, x)$ for $x \ge n$. Let $c$ be a proper $x$-colouring of $K_{n}$ permitted by $r_{1}$. We produce for each such $c$ another proper $x$-colouring $c'$ of $K_{n}$ permitted by $r'_{1}$, in a 1--1 fashion. We take cases based on $c$. \begin{itemize} \item case 1: $c(v_{j}) \neq l$. The proper $x$-colouring $c$ is also permitted by $r'_{1}$, so take $c'=c$. \item case 2: $c(v_{j}) = l$ and $t$ is not used by $c$ on the rest of $K_{n}$. Let $c'$ be the proper $x$-colouring of $K_{n}$ with $c'(v_{u})= c(v_{u})$ if $u \neq j$ and $c'(v_{j})= t$. This gives us a proper $x$-colouring $c'$ permitted by $r'_{1}$. \item case 3: $c(v_{j}) = l$ and $t$ is used somewhere on the rest of $K_{n}$ by $c$, say at vertex $v_{k}$. Let $c'$ be the proper $x$-colouring of $K_{n}$ with $c'(v_{u}) = c(v_{u})$ if $u \neq j,k$, $c'(v_{j})= t$ and $c'(v_{k})=l$. This gives us a proper $x$-colouring $c'$ permitted by $r'_{1}$. \end{itemize} No colouring from one case is a colouring in another case, and different colourings $c$ give rise to different colourings $c'$ within each case. Therefore, we have $\pi_{r_{1}}(K_{n}, x) \le \pi_{r'_{1}}(K_{n}, x)$ for $x \ge n$. If $r$ is not 1--1, we start with $r_{1} = r$ and repeat the argument until we arrive at a simple restraint $r^{\ast}$ that is 1--1 on $V(G)$ and satisfies $\pi_{r}(K_{n}, x) \le \pi_{r^{\ast}}(K_{n}, x)$ for $x \ge n$. Clearly $r^{\ast}$ arises from $r'$ by a permutation of colours, so $\pi_{r}(K_{n}, x) \le \pi_{r^{\ast}}(K_{n}, x) = \pi_{r'}(K_{n}, x)$ for $x \ge n$ and we are done. \end{proof} \subsection{Trees} We now consider extremal simple restraints for trees, but first we need some notation. Suppose $G$ is a connected bipartite graph with bipartition $(A,B)$. Then a standard simple restraint is called an \textit{alternating restraint}, denoted $r_{alt}$, if $r_{alt}$ is constant on both $A$ and $B$ individually but $r_{alt}(A) \neq r_{alt}(B)$.
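Theorem~\ref{completethm} can also be verified by brute force on small complete graphs; the sketch below (an illustration of ours, with the graph size and number of colours chosen arbitrarily) checks that the injective restraint $r'$ is maximal among all $4^4$ simple restraints on $K_4$:

```python
from itertools import product

def count_K(n, restraint, x):
    """Proper x-colourings of K_n (all colours distinct) with c(v_i) != restraint[i]."""
    total = 0
    for c in product(range(1, x + 1), repeat=n):
        if len(set(c)) == n and all(c[v] != restraint[v] for v in range(n)):
            total += 1
    return total

n, x = 4, 6
best = count_K(n, [1, 2, 3, 4], x)          # r'(v_i) = i, all restrained colours distinct
for r in product(range(1, n + 1), repeat=n):
    assert count_K(n, list(r), x) <= best   # Theorem 1 for K_4, x = 6 >= n
```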
We show that for trees alternating restraints permit the largest number of colourings. Before we begin, though, we will need some notation and a lemma. If $r$ is a restraint on $G$ and $H$ is an induced subgraph of $G$ then $r|_H$, the {\em restriction of $r$ to $H$}, denotes the restraint function induced by $r$ on the vertex set of $H$ (if $A$ is a vertex subset of $G$ then $G_A$ is the subgraph induced by $A$). \begin{lemma}\label{tree3} Let $T$ be a tree on $n$ vertices and let $r:V(T)\rightarrow 2^{[n]}$ be a $2$-restraint such that there is at most one vertex $w$ of $T$ with $|r(w)| = 2$. Then for any $k \geq \max\{3,n\}$, $\pi_r(T,k) > 0$. \end{lemma} \begin{proof} The proof is by induction on $n$. For $n= 1$ the proof is trivial, so we assume that $n \geq 2$. As $T$ has at least two leaves, we can choose a leaf $u$ of $T$ with $|r(u)|\leq 1$; let $v$ be the stem of $u$. By induction we can colour $T-u$ with respect to $r|_{T-u}$. As $k \geq 3$, there is a colour different from both $r(u)$ and the colour assigned to $v$, so we can extend the colouring to one permitted by $r$ on all of $T$. \end{proof} \begin{theorem}\label{treemaxmin} Let $T$ be a tree on $n$ vertices and let $r:V(T)\rightarrow [n]$ be a standard simple restraint that is not an alternating restraint. Then for $k \geq n$, $$\pi_r(T,k) < \pi_{r_{alt}}(T,k).$$ \end{theorem} \begin{proof} We proceed by induction on $n$. We leave it to the reader to check the basis step $n=2$. Suppose that $n\geq 3$, let $u$ be a leaf of $T$ and let $v$ be the neighbour of $u$. Also, let $v_1,v_2,\ldots,v_m$ be the vertices of the set $N(v)-\{u\}$. Let $T'=T-u$ and $T''=T-\{u,v\}$. Let $T^i$ be the connected component of $T''$ which contains the vertex $v_i$.
Given a simple restraint $r$ on $T$, we consider two cases: \begin{itemize} \item case 1: $r(u)=r(v).$ \newline Once all the vertices of $T'$ are coloured with respect to $r|_{T'}$, $u$ has $k-2$ choices, because it can get neither the colour $r(u)$ nor the colour assigned to $v$, which is different from $r(u)$. Thus, \begin{equation}\label{treecase1} \pi_r(T,k)=(k-2)\pi_{r|_{T'}}(T',k). \end{equation} \item case 2: $r(u) \neq r(v).$ \newline In this case we define $x_{n-1}^r$ (respectively $y_{n-1}^r$) to be the number of $k$-colourings of $T'$ permitted by $r|_{T'}$ where $v$ gets (respectively does not get) the colour $r(u)$. Now it can be verified that $\pi_{r|_{T'}}(T',k)=x_{n-1}^r+y_{n-1}^r$ and $\pi_r(T,k)=(k-1)x_{n-1}^r+(k-2)y_{n-1}^r$. In other words, \begin{equation}\label{treecase2} \pi_r(T,k)=(k-2)\pi_{r|_{T'}}(T',k)+x_{n-1}^r. \end{equation} Also, let us define a restraint function $r_i:V(T^i)\rightarrow {\mathbb N}$ on each component $T^i$ for $i=1,\ldots,m$ as follows: \begin{enumerate} \item If $r(v_i)=r(u)$ then $r_i(w):=r(w)$ for each $w\in V(T^i)$. \item If $r(v_i) \neq r(u)$ then \[r_i(w) := \left\{ \begin{array}{ll} \{r(v_i),r(u)\} & \mbox{ if $w=v_i$}\\ r(w) & \mbox{ if $w \neq v_i$} \end{array} \right. \] \textit{for each } $w\in V(T^i).$ \end{enumerate} Now, $\displaystyle{x_{n-1}^r=\prod_{i=1}^m \pi_{r_i}(T^i,k)}$, which is strictly larger than $0$ by Lemma~\ref{tree3}. \end{itemize} By comparing Equations (\ref{treecase1}) and (\ref{treecase2}), it is clear that, since $x_{n-1}^r>0$, $\pi_r(T,k)$ will be maximized in case 2. Since $r(V(T^i))\subseteq r_i(V(T^i))$, $\pi_{r_i}(T^i,k)$ is maximized when $r(v_i)=r(u)$, that is, when $r_i$ and $r|_{T^i}$ are equal to each other for each $i=1,\ldots,m$. Moreover, $\pi_{r|_{T^i}}(T^i,k)$ is maximized when $r|_{T^i}$ is alternating on $T^i$ for each $i=1,\ldots,m$, and $\pi_{r|_{T'}}(T',k)$ is maximized when $r|_{T'}$ is alternating on $T'$, by the induction hypothesis.
Hence, $\pi_r(T,k)$ attains its maximum value when $r$ is alternating on $T$, and this value is strictly larger than all the others. Therefore, the result follows. \end{proof} \section{Concluding remarks and open problems} It is worth noting that for complete graphs and trees the simple restraints which maximize the restrained chromatic polynomials are all minimal colourings, that is, colourings with the smallest number of colours. One might wonder, therefore, whether this always holds, but unfortunately this is not the case. Consider the graph $G$ in Figure~\ref{twotriangles}, which has chromatic number $3$. It is easy to see that there is essentially only one standard simple restraint ($r_2=[1,2,3,1,2,3]$) which is a proper colouring of the graph with three colours. If $r_1=[1,2,3,1,2,4]$, then some direct computations show that $$\pi_{r_1}(G,x)-\pi_{r_2}(G,x)=(x-3)^2>0$$ for all $x$ large enough. It follows that the simple restraint which maximizes the restrained chromatic polynomial of $G$ cannot be a minimal colouring of the graph. \begin{figure} \caption{Graph whose standard simple restraint permitting the largest number of colourings is not a minimal colouring.} \label{twotriangles} \end{figure} We believe, however, that for bipartite graphs the simple restraint which maximizes the restrained chromatic polynomial is a minimal colouring of the graph. More specifically, we propose the following: \begin{conjecture}\label{bipartiteconjecture} Let $r: \{ v_{1}, v_{2}, \ldots, v_{n} \} \longrightarrow [n]$ be any standard simple restraint on a bipartite graph $G$ and let $x$ be large enough. Then $\pi_{r}(G, x) \le \pi_{r_{alt}}(G, x)$. \end{conjecture} We verified that the conjecture above is correct for all such graphs of order at most $6$.
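Small cases of Theorem~\ref{treemaxmin} can likewise be checked by enumeration; the sketch below (our own illustration, with the tree and number of colours chosen by us) verifies for the path $P_4$ with $k=6$ colours that the alternating restraints are exactly the maximizers:

```python
from itertools import product

def count_tree(edges, restraint, k):
    """Proper k-colourings of the graph with c(v_i) != restraint[i]."""
    n = len(restraint)
    total = 0
    for c in product(range(1, k + 1), repeat=n):
        if all(c[u] != c[v] for u, v in edges) and \
           all(c[v] != restraint[v] for v in range(n)):
            total += 1
    return total

path = [(0, 1), (1, 2), (2, 3)]           # P_4, bipartition {v1, v3} and {v2, v4}
k = 6
alt = count_tree(path, [1, 2, 1, 2], k)   # an alternating restraint
for r in product(range(1, 5), repeat=4):  # all simple restraints V -> [4]
    c = count_tree(path, list(r), k)
    assert c <= alt
    # non-alternating restraints are strictly worse (Theorem on trees, k >= n)
    if not (r[0] == r[2] and r[1] == r[3] and r[0] != r[1]):
        assert c < alt
```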
Indeed, we know that among all graphs of order at most $6$, there are only two graphs for which the standard simple restraint which maximizes the restrained chromatic polynomial is not a minimal colouring of the graph. Therefore, we suggest the following interesting problem: \begin{problem} Is it true that for almost all graphs the standard simple restraint which maximizes the restrained chromatic polynomial is a minimal colouring of the graph? \end{problem} \vskip0.4in \noindent {\bf \large Acknowledgments} \\ The authors would like to thank the referee for his help and insightful comments. This research was partially supported by a grant from NSERC. \end{document}
\begin{document} \title{NMR investigation of contextuality in a quantum harmonic oscillator \\ via pseudospin mapping} \author{Hemant Katiyar} \email{[email protected]} \affiliation{NMR Research Center, Indian Institute of Science Education and Research, Pune 411008, India} \author{C. S. Sudheer Kumar} \email{[email protected]} \affiliation{NMR Research Center, Indian Institute of Science Education and Research, Pune 411008, India} \author{T. S. Mahesh} \email{[email protected]} \affiliation{NMR Research Center, Indian Institute of Science Education and Research, Pune 411008, India} \begin{abstract} Physical potentials are routinely approximated by harmonic potentials so as to analytically solve the system dynamics. Often it is important to know when a quantum harmonic oscillator (QHO) behaves quantum mechanically and when classically. Recently Su \textit{et al.} [Phys. Rev. A {\bf 85}, 052126 (2012)] have theoretically shown that a QHO exhibits quantum contextuality (QC) for a certain set of pseudospin observables. In this work, we encode the first four energy eigenstates of a QHO onto four Zeeman product states of a pair of spin-1/2 nuclei. Using the techniques of NMR quantum information processing, we then demonstrate the violation of a state-dependent inequality arising from the noncontextual hidden variable model, under specific experimental arrangements. We also experimentally demonstrate the violation of a state-independent inequality by thermal equilibrium states of nuclear spins, thereby assessing their quantumness. \end{abstract} \keywords{nuclear magnetic resonance, contextuality, quantum harmonic oscillator} \maketitle \section{Introduction} Quantum contextuality (QC) states that the outcome of a measurement depends not only on the system and the observable but also on the context of the measurement, i.e., on the other compatible observables which are measured along with it \cite{peres_context_1pg,Peres,quant_theory_peres,KS}.
Let us consider a pair of space-like separated entangled particles, with local observables $A$ and $C$ belonging to the first particle, and $B$ and $D$ to the second. We assume that these observables are dichotomic (i.e., can take values $\pm 1$) and that the pairs $ (A,B) , ~ (B,C), ~ (C,D), $ and $(D,A)$ commute. Classically, one assigns objective properties to the particles such that $D$ behaves identically on the state of the system irrespective of whether it is measured in the context of $A$ or in the context of $C$, even though $A$ and $C$ are not compatible \cite{Mermin,EPR_original_paper}. Such measurements are said to be \textit{context independent}. Classically, one can pre-assign values $(a,c)$ to $(A,C)$ of the first particle independent of the measurement carried out on the second particle. Similarly, for the second particle one can pre-assign values $(b,d)$ to $(B,D)$ independent of the measurement carried out on the first particle. In these pre-assignments, implicit is the assumption of noncontextual hidden variables, which predict definite measurement outcomes independent of the measuring arrangement. If we pre-assign values $A,B,C,D = \pm 1$ to the observables, it follows that $AB+BC+CD-AD=\pm 2$, and hence the expectation value satisfies \begin{eqnarray} \mathrm{\textbf{I}}~ && =\expec{AB+BC+CD-AD} \nonumber \\ && = \expec{AB} + \expec{BC} + \expec{CD} - \expec{AD} \leq 2 \label{I_main} \end{eqnarray} \cite{quant_info_neilson_chuang}. This inequality, often known as the CHSH inequality, arises from the noncontextual hidden variable (NCHV) model and must be satisfied by all classical particles. Now let us see the implication of the quantum theory. Let Alice and Bob share a large number of singlet states: $(\ket{01}-\ket{10})/\sqrt{2}=-(\ket{+-}-\ket{-+})/\sqrt{2}$, where $\ket{0}$ and $\ket{1}$ are eigenkets of the Pauli-$z$ operator ($\sigma_z$) and $\ket{\pm}=(\ket{0}\pm\ket{1})/\sqrt{2}$.
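The claim that any noncontextual preassignment forces $AB+BC+CD-AD = \pm 2$ can be checked by enumerating all $2^4$ value assignments (a one-line check of ours, not part of the paper):

```python
from itertools import product

# all dichotomic preassignments (a, b, c, d) with values +1 or -1
vals = {a * b + b * c + c * d - d * a
        for a, b, c, d in product([-1, 1], repeat=4)}
assert vals == {-2, 2}  # the CHSH combination never exceeds 2 in absolute value
```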
Alice measures on her qubit either $\sigma_x^A$ or $\sigma_z^A$, while Bob always measures $\sigma_x^B$. Let us compare the results of only those measurements in which Alice has obtained the outcome $+1$. If Alice measures $\sigma_z^A$, then Bob's qubit collapses to $\ket{1}=(\ket{+}-\ket{-})/\sqrt{2}$. In this context (i.e., $\sigma_z^A$), Bob will get both outcomes $\pm 1$ with equal probability. On the other hand, if Alice measures $\sigma_x^A$ on her qubit, then Bob's qubit collapses to $\ket{-}$ and in this context (i.e., $\sigma_x^A$), Bob will always get the outcome $-1$. Hence the context dependency. Here we experimentally investigate the QC of a quantum harmonic oscillator (QHO). There are a variety of quantum systems whose potentials are approximated by a QHO; consider for example the quantized electromagnetic field used to manipulate a qubit in cavity quantum electrodynamics \cite{cQED_review}. Recently, QC in a QHO has been theoretically studied by Su \textit{et al.} \cite{Cont_theory} by mapping the four lowermost QHO states onto four pseudospin states. Such states can be encoded by qubit states, and QC can be studied by realizing the measurements of appropriate observables. We realize this study using a nuclear magnetic resonance (NMR) quantum simulator \cite{Corynmr_1st}. In the following section we shall revisit the formulation of Su \textit{et al.}, and in Section III we describe the experimental demonstration of state-dependent and state-independent QC using an NMR system. Finally we conclude in Section IV.
\section{Theory} Hong-Yi Su \textit{et al.} \cite{Cont_theory} have theoretically studied the QC of eigenstates of the $1$D-QHO by introducing two sets of pseudospin operators, \begin{eqnarray} {\bf \Gamma } = (\Gamma_x,\Gamma_y,\Gamma_z),~~ {\bf \Gamma '} = (\Gamma_x',\Gamma_y',\Gamma_z') \nonumber \end{eqnarray} with components \begin{eqnarray} \Gamma_x = \sigma_x \otimes \mathbbm{1}, ~\Gamma_y = \sigma_z \otimes \sigma_y, ~\Gamma_z = -\sigma_y \otimes \sigma_y, \nonumber \\ \Gamma_x' = \sigma_x \otimes \sigma_z, ~\Gamma_y' = \mathbbm{1} \otimes \sigma_y, ~\Gamma_z' = -\sigma_x \otimes \sigma_x, \label{Gammas defined} \end{eqnarray} where $ \mathbbm{1} $ is the $2\times 2$ identity matrix. Using these operators they defined the following observables, \begin{eqnarray} A&=&\Gamma_x = \sigma_x \otimes \mathbbm{1}, \nonumber \\ B&=&\Gamma_x' \cos \beta + \Gamma_z' \sin \beta =\sigma_x\otimes(\sigma_z\cos \beta - \sigma_x\sin \beta ), \nonumber \\ C&=&\Gamma_z = -\sigma_y \otimes \sigma_y, \nonumber \\ D&=&\Gamma_x'\cos \eta + \Gamma_z'\sin\eta=\sigma_x\otimes(\sigma_z\cos \eta - \sigma_x\sin \eta ). \label{A,B,C,D defined} \end{eqnarray} The products which form the inequality expression (Eq. \ref{I_main}) are \begin{eqnarray} AB &=&\mathbbm{1}\otimes(\cos\beta~\sigma_z-\sin\beta~\sigma_x),\nonumber\\ BC &=&-\sigma_z\otimes(\cos\beta~\sigma_x+\sin\beta~\sigma_z),\nonumber\\ CD &=&-\sigma_z\otimes(\cos\eta~\sigma_x+\sin\eta~\sigma_z),\nonumber\\ DA &=& \mathbbm{1}\otimes(\cos\eta~\sigma_z-\sin\eta~\sigma_x). \label{AB,BC,..evalted} \end{eqnarray} The following commutation relations hold: $ [\Gamma_i,\Gamma_j'] = 0~(i,j=x,y,z)$, $[\Gamma_x,\Gamma_y]=2i~\Gamma_z$, $[\Gamma'_x,\Gamma'_y]=2i~\Gamma'_z$, and cyclic permutations of $x,y,z$. $ A,B,C,D $ have eigenvalues $ \pm 1 $, with $ (A,B)$, $(B,C)$, $(C,D)$ and $(D,A) $ forming compatible pairs. $A,~B,~C$ and $D$ are Hermitian, hence observables. One can verify that they are also unitary operators.
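The stated algebraic relations are easy to verify numerically; the following sketch (our own check, using numpy) confirms that the two pseudospin sets mutually commute and obey the su(2)-type commutators quoted above:

```python
import numpy as np

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1, -1]).astype(complex)
kron = np.kron

# the two pseudospin triples from Eq. (2)
Gx, Gy, Gz = kron(sx, I2), kron(sz, sy), -kron(sy, sy)
Gxp, Gyp, Gzp = kron(sx, sz), kron(I2, sy), -kron(sx, sx)

comm = lambda P, Q: P @ Q - Q @ P
# [Gamma_i, Gamma'_j] = 0 for all i, j
for P in (Gx, Gy, Gz):
    for Q in (Gxp, Gyp, Gzp):
        assert np.allclose(comm(P, Q), 0)
# each triple obeys [Gamma_x, Gamma_y] = 2i Gamma_z
assert np.allclose(comm(Gx, Gy), 2j * Gz)
assert np.allclose(comm(Gxp, Gyp), 2j * Gzp)
```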
Hong-Yi Su \textit{et al.} \cite{Cont_theory} have shown that \begin{eqnarray} \mathrm{\textbf{I}}_{\ket{l}_{QHO}}^{QM} = 2\sqrt{2} > 2 ,~\mathrm{when}~(\beta,\eta)_l = \begin{cases} (-\pi/4 , -3\pi/4)_0 \\ (3\pi/4, \pi/4)_1 \\ (\pi/4, 3\pi/4)_2 \\ (-3\pi/4, -\pi/4)_3 \end{cases} \label{max vio angles} \end{eqnarray} where $\mathrm{\textbf{I}}_{\ket{l}_{QHO}}^{QM}$ is the expression on the lhs of inequality (\ref{I_main}), $ l=0,1,2,3 $, and $ \ket{0}_{QHO}, \ket{1}_{QHO}, \ket{2}_{QHO}$ and $ \ket{3}_{QHO}$ are the first four energy eigenstates of the 1D-QHO. Thus the QHO violates the inequality (\ref{I_main}) for certain observables and thereby exhibits QC. It is well known that only certain two-particle states violate the CHSH inequality (\ref{I_main}). As shown in \cite{D_Home_book,capasso}, factorable states always satisfy inequality (\ref{I_main}) for local observables (observables of the form $P \otimes \mathbbm{1}$ or $\mathbbm{1} \otimes Q$ \cite{Audruch_entangled_sys}). With the maximally mixed state ($\mathbbm{1}\otimes\mathbbm{1}/4$) the inequality (\ref{I_main}) is satisfied even with nonlocal observables (observables of the form $P \otimes Q$ \cite{Audruch_entangled_sys}) in eq. (\ref{A,B,C,D defined}), which is obvious from the fact that all the products in eq. (\ref{AB,BC,..evalted}) are traceless. However, if the initial state is nonfactorable, we can always find observables such that inequality (\ref{I_main}) is violated \cite{D_Home_book}. Although the pseudospin states $\{\ket{00},\ket{01},\ket{10},\ket{11}\}$ are factorable, they still violate (\ref{I_main}) since the observables in eq. (\ref{A,B,C,D defined}) are nonlocal. Thus, we observe that even when a system is in a nonentangled state, measurements of nonlocal observables may lead to a violation of a noncontextuality inequality \cite{Cabello_NCHV_ineqlty}.
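The maximal violation $2\sqrt{2}$ at the angle pairs listed above can be reproduced numerically for the encoded states $\{\ket{00},\ket{01},\ket{10},\ket{11}\}$; the sketch below (our own check) evaluates the lhs of the inequality directly:

```python
import numpy as np

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1, -1]).astype(complex)
k = np.kron

A = k(sx, I2)                                   # A = Gamma_x
C = -k(sy, sy)                                  # C = Gamma_z
B = lambda b: k(sx, np.cos(b) * sz - np.sin(b) * sx)
D = lambda e: k(sx, np.cos(e) * sz - np.sin(e) * sx)

angles = {0: (-np.pi/4, -3*np.pi/4), 1: (3*np.pi/4, np.pi/4),
          2: (np.pi/4, 3*np.pi/4), 3: (-3*np.pi/4, -np.pi/4)}
for l, (b, e) in angles.items():
    psi = np.zeros(4)
    psi[l] = 1.0                                # |l> encoded as a Zeeman product state
    M = A @ B(b) + B(b) @ C + C @ D(e) - D(e) @ A
    assert np.isclose(np.real(psi @ M @ psi), 2 * np.sqrt(2))
```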
\textbf{State independent QC:} There exist stronger inequalities, obtained from NCHV models, which are violated by all states, including separable or maximally mixed states. If the initial state is maximally mixed, entanglement cannot be created by measuring any observable, local or nonlocal. This shows that entanglement is not necessary, even in a bipartite system, to exhibit QC. Hence we conclude that QC is more fundamental, or more general, than entanglement. Any system whose Hilbert space has dimension $>2$ exhibits QC \cite{KS}. Even a \textit{single} spin-1 particle (where entanglement has no meaning as far as the spin degree of freedom is concerned) can exhibit QC \cite{spin_1_QC}. \section{Experiment} \subsection{State dependent contextuality} For experimentally studying eq. (\ref{I_main}), we need: (i) a physical representation of the first four energy eigenstates $ \{ \ket{0}_{QHO}, \ket{1}_{QHO}, \ket{2}_{QHO}, \ket{3}_{QHO} \}$ of the 1D-QHO, and (ii) a way to find out the expectation values of the operators $ AB,~ BC, ~CD,$ and $DA $. We encode the first four energy eigenstates of the 1D-QHO onto the four energy eigenstates (under the secular approximation) of a pair of spin-1/2 nuclei precessing in an external static magnetic field: $\{\ket{00},\ket{01},\ket{10},\ket{11}\}$, which we call Zeeman product states. In fact, any four arbitrarily chosen energy eigenstates of the 1D-QHO, and also their superposition states, exhibit QC \cite{Cont_theory}. The circuit shown in Fig. \ref{moussa} is called the Moussa protocol \cite{Moussa}, and is used to extract the expectation value of observables in a joint measurement. This has subsequently been generalized by Joshi \textit{et al.} \cite{joshi_frankcond} to unitary operators. \begin{figure} \caption{Moussa protocol for finding out the expectation value of the joint observable $ X_1X_2X_3 $, i.e.
$\expec{X_1X_2X_3}$.} \label{moussa} \end{figure} The three qubits for this experiment were provided by the three $^{19}$F nuclear spins of trifluoroiodoethylene dissolved in acetone-D6. Fig. \ref{molecule}(a) shows the structure of trifluoroiodoethylene, with the Hamiltonian parameters in Fig. \ref{molecule}(b). The effective $^{19}$F spin-spin (T$ _2^* $) and spin-lattice (T$ _1 $) relaxation time constants were about $ 0.8 $ and $ 6.3 $ s, respectively. The experiments were carried out at an ambient temperature of 290 K on a 500 MHz Bruker UltraShield NMR spectrometer. \begin{figure} \caption{(a) Molecular structure, (b) chemical shifts (diagonal elements) and J-couplings (off-diagonal elements) in Hz of trifluoroiodoethylene, and (c) pulse sequence for pseudopure state preparation. The amplitude and phase of each shaded pulse are written above it, and the unshaded pulses are $ (\pi)_x $ pulses. The J-evolution period is $1/(2J_{23})$.} \label{molecule} \end{figure} The thermal equilibrium state of the three-spin system in the eigenbasis of the total Hamiltonian (under the secular approximation: $\{\ket{000},\ket{001},...\}$) is \begin{equation} \rho_{\mathrm{eq}} = \frac{\mathbbm{1}_8}{8} +\epsilon \sum_{i=1}^3 I_{iz} \label{rho_eq} \end{equation} where $\mathbbm{1}_8$ is the $8\times 8$ identity matrix, $I_{iz}$ are spin angular momentum operators, and the purity factor $\epsilon = \hbar \omega_0/(8kT)$ is the ratio of the Zeeman energy gap to the thermal energy \cite{cavanagh}. A unitary operation has no effect on the identity part, but modifies only the traceless deviation part. By applying a series of unitary and nonunitary operators (pulse sequence shown in Fig.
\ref{molecule} \cite{Avik}), it is possible to transform the equilibrium state to a pseudopure state \begin{eqnarray} \rho_\mathrm{pps}= (1-\epsilon)\frac{\mathbbm{1}_8}{8} + \epsilon \outpr{000}{000} = \frac{\mathbbm{1}_8}{8} + \epsilon \Delta \rho_{\ket{000}}, \end{eqnarray} which is isomorphic to the pure state $\ket{000}$ \cite{Corynmr_1st}. In the pseudopure state, the traceless deviation part has the form \begin{eqnarray} \Delta\rho_{\ket{000}} = && \frac{1}{4}(I_{1z}+I_{2z}+I_{3z} + 2I_{1z}I_{2z} \nonumber\\ &&+ 2I_{2z}I_{3z} + 2I_{1z}I_{3z} + 4I_{1z}I_{2z}I_{3z}). \label{Delta rho 000} \end{eqnarray} The first spin, F$_1$, is used as an ancilla qubit, and the other spins, F$_2$ and F$_3$, as the system qubits (see Fig. \ref{moussa}). The initial Hadamard gate on the first spin prepares $\rho_{\ket{+00}}$. To measure $\expec{AB}_{\ket{00}}$, we apply the corresponding controlled operations $A$ and $B$ as indicated in the circuit of Fig. \ref{moussa}. The transverse magnetization of the ancilla qubit will be proportional to the expectation value $\expec{AB}_{\ket{00}}$. The absolute value of $\expec{AB}_{\ket{00}}$ is estimated by normalizing the value obtained in the above experiment with that obtained from a reference experiment having no controlled operations. Similarly we can measure the other expectation values $\expec{BC}_{\ket{00}}$, $\expec{CD}_{\ket{00}}$, and $\expec{AD}_{\ket{00}}$, and determine the value of ${\bf I}_0$. The other values ${\bf I}_l$ are obtained by preparing the corresponding pseudopure states $\rho_{\ket{+01}}$, $\rho_{\ket{+10}}$, and $\rho_{\ket{+11}}$ and applying the circuit of Fig. \ref{moussa} in each case. In our experiments, all the controlled operations were realized by numerically optimized radio frequency (RF) pulses obtained using the GRAPE technique \cite{Khaneja}.
Each pair of controlled operations in the circuit of Fig. \ref{moussa} was realized by a GRAPE sequence with a duration of about 23 ms (having RF segments of duration $5~\upmu s$) and an average Hilbert-Schmidt fidelity better than 0.99 over a 10\% variation in RF amplitude. \begin{figure*} \caption{Experimentally estimated values of $\mathrm{\textbf{I}}_l$ for the four encoded eigenstates as $\beta$ and $\eta$ are varied.} \label{results} \end{figure*} We estimated the values of $\textbf{I}_l$ (\ref{I_main}) for all four encoded eigenstates, independently varying both $\beta$ and $\eta$ over the range $[-\pi,\pi]$ in increments of $\pi/4$. The results are shown in Fig. \ref{results}. The maximum theoretical violation is $ 2\sqrt{2} \approx 2.83$. The experimental values of the maximum violations for $\textbf{I}_0$, $\textbf{I}_1$, $\textbf{I}_2$, and $\textbf{I}_3$ are $ 2.40 \pm 0.02,~ 2.45 \pm 0.02,~ 2.39 \pm 0.02$, and $2.42 \pm 0.03$, respectively. \subsection{State independent contextuality} Hong-Yi Su \textit{et al.} also studied state independent contextuality \cite{Cont_theory,Cabello_state_ind_context}. They considered the inequality (arising from the NCHV model) \begin{eqnarray} &&\expec{P_{11}P_{12}P_{13}} + \expec{P_{21}P_{22}P_{23}} + \expec{P_{31}P_{32}P_{33}} \nonumber \\ &&+\expec{P_{11}P_{21}P_{31}}+\expec{P_{12}P_{22}P_{32}}-\expec{P_{13}P_{23}P_{33}} \le 4~~~~~~ \label{stateIn} \end{eqnarray} where $ P_{ij} $ are the elements of the matrix $P$, \begin{equation} P=\left( \begin{array}{ccc} \Gamma_z & \Gamma_z' & \Gamma_z\Gamma_z' \\ \Gamma_x' & \Gamma_x & \Gamma_x\Gamma_x'\\ \Gamma_z \Gamma_x' & \Gamma_x \Gamma_z' & \Gamma_y\Gamma_y' \end{array} \right). \end{equation} The operators in each row of the matrix $P$ commute with each other, and similarly in each column. The $P_{ij}$ are dichotomic observables with measurement outcomes $\pm 1$. One obtains the classical bound of inequality (\ref{stateIn}) by preassigning the values $\pm 1$ to each observable $P_{ij}$.
Now, introducing the operators from expressions (\ref{Gammas defined}), we find that the product of each row of the matrix $P$ is the identity (i.e., $ P_{j1}P_{j2}P_{j3} = \mathbbm{1}$), having eigenvalue $+1$. Similarly, the products along each of the first two columns are again the identity. However, the product along the last column is $P_{13}P_{23}P_{33} = -\mathbbm{1}$, having the eigenvalue $-1$. No preassignment of $\pm 1$ to the various elements of $P$ can satisfy the condition that the product along each row and along the first two columns be $+1$ while that along the last column is $-1$. This shows that quantum theory is not compatible with the NCHV model. Further, for the above choice of operators, the expectation values of the first five terms in expression (\ref{stateIn}) are all $+1$ while that of the last term is $-1$. Therefore, the quantum value of the lhs of expression (\ref{stateIn}) is $6$, independent of the initial state of the system. To investigate state-independent QC, we need to measure joint expectation values of three observables. We again use the circuit of Fig. \ref{moussa} for this purpose. Taking advantage of the state-independent property of the above mentioned inequality, we choose the thermal equilibrium state (\ref{rho_eq}) as the initial state. A $(\pi/2)_y $ pulse was applied on the first spin to prepare the ancilla in a superposition state. Then the state (\ref{rho_eq}) takes the form $ (1-4\epsilon) \mathbbm{1}_8/8 + \epsilon (\outpr{+}{+}\otimes\mathbbm{1}\otimes\mathbbm{1})+\epsilon(I_{2z}+I_{3z})$. All the controlled $P_{ij}$ operations were realized using GRAPE sequences having average fidelities better than $ 0.99 $ over a 10\% variation in RF amplitude. The total duration of the RF sequences (for each term in (\ref{stateIn})) was about 40 ms. The experimentally obtained value of the lhs of inequality (\ref{stateIn}) is $ 4.81 \pm 0.02$. Thus we observed a clear violation of the classical bound of $4$, although the value is still lower than the quantum limit of $6$.
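The row and column product structure of $P$, which underlies the state independence, can be checked directly (our own numerical verification):

```python
import numpy as np

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1, -1]).astype(complex)
k = np.kron

Gx, Gy, Gz = k(sx, I2), k(sz, sy), -k(sy, sy)
Gxp, Gyp, Gzp = k(sx, sz), k(I2, sy), -k(sx, sx)

P = [[Gz, Gzp, Gz @ Gzp],
     [Gxp, Gx, Gx @ Gxp],
     [Gz @ Gxp, Gx @ Gzp, Gy @ Gyp]]

I4 = np.eye(4)
for j in range(3):     # every row multiplies to +identity
    assert np.allclose(P[j][0] @ P[j][1] @ P[j][2], I4)
for j in range(2):     # the first two columns multiply to +identity
    assert np.allclose(P[0][j] @ P[1][j] @ P[2][j], I4)
# the last column multiplies to -identity, blocking any +/-1 preassignment
assert np.allclose(P[0][2] @ P[1][2] @ P[2][2], -I4)
```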
The reduced violation is attributed to decoherence ($ \mathrm{T}_2 $ decay) and imperfections in the RF pulses. \section{Conclusion} We have experimentally demonstrated the quantum contextuality exhibited by the first four energy eigenstates of a one-dimensional quantum harmonic oscillator through the violation of an inequality obtained from a noncontextual hidden variable model. The continuous observables of the harmonic oscillator are mapped onto pseudospin observables which are then experimentally realized on a pair of qubits. We used the Moussa protocol to retrieve the joint expectation values of observables using an ancillary qubit. Our quantum register was based on three mutually interacting spin-1/2 nuclei controlled by NMR techniques. We also demonstrated a violation of an inequality formulated to study state-independent contextuality, by measuring a set of expectation values on the thermal equilibrium states of the nuclear spins. The results of the experiment not only establish the validity of the quantum theoretical calculations, but also highlight the success of NMR systems as quantum simulators. \end{document}
\begin{document} \title{Dynamical moderate deviations for the Curie-Weiss model} \author{ \renewcommand{\thefootnote}{\arabic{footnote}} Francesca Collet\footnotemark[1] \, and Richard C. Kraaij\footnotemark[2] } \footnotetext[1]{ Delft Institute of Applied Mathematics, Delft University of Technology, Mekelweg 4, 2628 CD Delft, The Netherlands, E-mail: \texttt{[email protected]}. } \footnotetext[2]{ Fakultät für Mathematik, Ruhr-University of Bochum, Postfach 102148, 44721 Bochum, Germany, E-mail: \texttt{[email protected]}. } \maketitle \begin{abstract} We derive moderate deviation principles for the trajectory of the empirical magnetization of the standard Curie-Weiss model via a general analytic approach based on convergence of generators and uniqueness of viscosity solutions for associated Hamilton-Jacobi equations. The moderate asymptotics depend crucially on the phase under consideration. \\ \noindent \emph{Keywords:} moderate deviations; interacting particle systems; mean-field interaction; viscosity solutions; Hamilton-Jacobi equation \\ \noindent \emph{MSC[2010]:} 60F10; 60J99 \end{abstract} \section{Introduction} The study of the normalized sum of random variables and its asymptotic behavior plays a central role in probability and statistical mechanics. Whenever the variables are independent and have finite variance, the central limit theorem ensures that the sum with square-root normalization converges to a Gaussian distribution. The generalization of this result to dependent variables is particularly interesting in statistical mechanics, where the random variables are correlated through an interaction Hamiltonian. Ellis and Newman characterized the distribution of the normalized sum of spins (\emph{empirical magnetization}) for a wide class of mean-field Hamiltonians of Curie-Weiss type \cite{ElNe78a,ElNe78b,ElNeRo80}.
They found conditions, in terms of thermodynamic properties, that lead in the infinite volume limit to Gaussian behavior, and those which lead to a higher order exponential probability distribution. A natural further step was the investigation of large and moderate fluctuations of the magnetization. The large deviation principle is due to Ellis \cite{Ell85}. Moderate deviation properties have been treated by Eichelsbacher and L{\"o}we in \cite{EiLo04}. A moderate deviation principle is technically a large deviation principle and consists of a refinement of a (standard or non-standard) central limit theorem, in the sense that it characterizes the exponential decay of deviations from the average on a smaller scale. In \cite{EiLo04}, it was shown that the physical phase transition in Curie-Weiss type models is reflected by a radical change in the asymptotic behavior of moderate deviations. Indeed, whereas the rate function is quadratic at non-critical temperatures, it becomes non-quadratic at criticality. All the results mentioned so far have been derived at equilibrium; in contrast, we are interested in describing the time evolution of fluctuations, obtaining non-equilibrium properties. Fluctuations for the standard Curie-Weiss model were studied on the level of a path-space large deviation principle by Comets \cite{Co89} and Kraaij \cite{Kr16b}, and on the level of a path-space central limit theorem by Collet and Dai Pra in \cite{CoDaP12}. The purpose of the present paper is to study dynamical moderate deviations to complete the analysis of fluctuations of the empirical magnetization. We apply the generator convergence approach to large deviations by Feng-Kurtz \cite{FK06} to characterize the most likely behavior of the trajectories of fluctuations around the stationary solution(s) in the various regimes. The moderate asymptotics depend crucially on the phase we are considering.
The criticality of the inverse temperature $\beta=1$ shows up at this level via a sudden change in the speed and rate function of the moderate deviation principle for the magnetization. In particular, our findings indicate that fluctuations are Gaussian-like in the sub- and super-critical regimes, while they are not at the critical point. \\ Moreover, we analyze the deviation behaviour when the temperature is size-dependent and increases to the critical point. In this case, the rate function inherits features of both the unique-phase and the multiple-phase regimes: it is the combination of the critical and non-critical rate functions. To conclude, it is worth mentioning that our statements are in agreement with the results found in \cite{EiLo04}.\\ The outline of the paper is as follows: in Section~\ref{sct:results} we formally introduce the Curie-Weiss model and we state our main results. All the proofs, if not immediate, are postponed to Section~\ref{sct:proofs}. Appendix~\ref{sct:app:LDPviaHJequation} is devoted to the derivation of a large deviation principle via the solution of a Hamilton-Jacobi equation; it is included to make the paper as self-contained as possible. \section{Model and main results}\label{sct:results} \subsection{Notation and definitions} Before we give our main results, we introduce some notation. We start with the definition of good rate-functions and what it means for random variables to satisfy a large deviation principle. \begin{definition} Let $X_1,X_2,\dots$ be random variables on a Polish space $F$. Furthermore let $I : F \rightarrow [0,\infty]$. \begin{enumerate} \item We say that $I$ is a \textit{good rate-function} if for every $c \geq 0$, the set $\{x \, | \, I(x) \leq c\}$ is compact. \item We say that the sequence $\{X_n\}_{n\geq 1}$ is \textit{exponentially tight} at speed $r(n)$ if for every $a \geq 0$ there is a compact set $K_a \subseteq F$ such that $\limsup_n \frac{1}{r(n)} \log \mathbb{P}[X_n \in K^c_a] \leq - a$.
\item We say that the sequence $\{X_n\}_{n\geq 1}$ satisfies the \textit{large deviation principle} with rate $r(n)$ and good rate-function $I$, denoted by \begin{equation*} \mathbb{P}[X_n \approx a] \sim e^{-r(n) I(a)}, \end{equation*} if we have for every closed set $A \subseteq F$ \begin{equation*} \limsup_{n \rightarrow \infty} \frac{1}{r(n)} \log \mathbb{P}[X_n \in A] \leq - \inf_{x \in A} I(x), \end{equation*} and for every open set $U \subseteq F$ \begin{equation*} \liminf_{n \rightarrow \infty} \frac{1}{r(n)} \log \mathbb{P}[X_n \in U] \geq - \inf_{x \in U} I(x). \end{equation*} \end{enumerate} \end{definition} Throughout the paper $\mathcal{A}\mathcal{C}$ will denote the set of absolutely continuous curves in $\mathbb{R}$. \begin{definition} A curve $\gamma: [0,T] \to \mathbb{R}$ is absolutely continuous if there exists a function $g \in L^1[0,T]$ such that for $t \in [0,T]$ we have $\gamma(t) = \gamma(0) + \int_0^t g(s) \mathrm{d} s$. We write $g = \dot{\gamma}$. A curve $\gamma: \mathbb{R}^+ \to \mathbb{R}$ is absolutely continuous if the restriction to $[0,T]$ is absolutely continuous for every $T \geq 0$. \end{definition} \subsection{Glauber dynamics for the Curie-Weiss model} Let $\sigma = \left( \sigma_i \right)_{i=1}^n \in \{-1,+1\}^n$ be a configuration of $n$ spins and denote by \begin{equation*} m_n(\sigma) = n^{-1} \sum_{i=1}^n \sigma_i \end{equation*} the empirical magnetization. The stochastic process $\{\sigma(t)\}_{t \geq 0}$ is described as follows. For $\sigma \in \{-1,+1\}^n$, let $\sigma^j$ denote the configuration obtained from $\sigma$ by flipping the $j$-th spin. The spins will be assumed to evolve with Glauber spin-flip dynamics: at any time $t$, the system may experience a transition $\sigma \to \sigma^j$ at rate $\exp \{ - \beta \sigma_j(t) m_n(t)\}$, where $\beta > 0$ represents the inverse temperature and where, by abuse of notation, $m_n(t) := m_n(\sigma(t))$.
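The spin-flip dynamics just described can be simulated directly with a Gillespie-type scheme; the following minimal sketch (the function name and parameter values are ours, not from the paper) samples a trajectory of the empirical magnetization.

```python
import math
import random

def glauber_trajectory(n, beta, t_max, seed=0):
    """Gillespie simulation of the Glauber spin-flip dynamics:
    spin j flips at rate exp(-beta * sigma_j * m_n(sigma))."""
    rng = random.Random(seed)
    sigma = [rng.choice([-1, 1]) for _ in range(n)]
    t, path = 0.0, []
    while t < t_max:
        m = sum(sigma) / n                       # empirical magnetization
        rates = [math.exp(-beta * s * m) for s in sigma]
        total = sum(rates)
        t += rng.expovariate(total)              # exponential waiting time
        # pick the flipping spin proportionally to its rate
        u, acc = rng.random() * total, 0.0
        for j, r in enumerate(rates):
            acc += r
            if acc >= u:
                break
        sigma[j] *= -1
        path.append((t, sum(sigma) / n))
    return path
```

Each recorded magnetization value lies on the grid $E_n$, and consecutive values differ by $\pm 2/n$, matching the jump structure used below.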
More formally, we can say that $\{ \sigma(t) \}_{t \geq 0}$ is a Markov process on $\{-1,+1\}^n$, with infinitesimal generator \begin{equation}\label{CW:micro:gen} \mathcal{G}_n f (\sigma) = \sum_{i=1}^n e^{- \beta \sigma_i m_n(\sigma)} \left[ f \left( \sigma^i \right) - f(\sigma)\right]. \end{equation} Let \[ E_n := m_n \left( \{-1,+1\}^n \right) = \left\{ -1, -1+ \frac{2}{n}, \dots, 1 - \frac{2}{n}, 1 \right\} \subseteq [-1,1] \] be the set of possible values taken by the magnetization (we will keep using this notation for the state space of the magnetization). The Glauber dynamics \eqref{CW:micro:gen} on the configurations induce Markovian dynamics for the process $\{ m_n(t) \}_{t \geq 0}$ on $E_n$, which in turn evolves with generator \begin{equation*} \mathcal{A}_nf(x) = n \frac{1-x}{2} e^{\beta x} \left[f\left(x + \frac{2}{n}\right) - f(x)\right] + n \frac{1+x}{2} e^{-\beta x} \left[f\left(x - \frac{2}{n}\right) - f(x)\right]. \end{equation*} This generator can be derived in two ways from \eqref{CW:micro:gen}. First of all, each microscopic jump induces a change of size $\frac{2}{n}$ in the empirical magnetization. The jump rate from $x$ to $x + \frac{2}{n}$ corresponds to any $-1$ spin switching to $+1$ with rate $e^{\beta x}$. The total number of $-1$ spins can be computed from the empirical magnetization $x$ and equals $n\frac{1-x}{2}$. A similar computation yields the jump rate from $x$ to $x - \frac{2}{n}$. A second way to see that $\mathcal{A}_n$ is the generator of the empirical magnetization is via the martingale problem and the property that $\mathcal{A}_n f(m_n(\sigma)) = \mathcal{G}_n(f \circ m_n)(\sigma)$. If the initial condition $m_n(0)$ obeys a large deviation principle, then it can be shown that $\{m_n(t)\}_{t \geq 0}$ obeys a large deviation principle on the Skorohod space of c{\`a}dl{\`a}g functions $D_\mathbb{R}(\mathbb{R}^+)$.
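The identity $\mathcal{A}_n f(m_n(\sigma)) = \mathcal{G}_n(f \circ m_n)(\sigma)$ is an exact finite-$n$ statement and can be verified by brute force on a small system; a sketch (all function names are ours) checking every configuration for $n = 4$:

```python
import math
from itertools import product

def G_n(f, sigma, beta):
    """Microscopic generator (CW:micro:gen) applied to f at configuration sigma."""
    n = len(sigma)
    m = sum(sigma) / n
    total = 0.0
    for i in range(n):
        flipped = list(sigma)
        flipped[i] *= -1
        total += math.exp(-beta * sigma[i] * m) * (f(tuple(flipped)) - f(sigma))
    return total

def A_n(f, x, n, beta):
    """Induced generator of the magnetization process on E_n."""
    up = n * (1 - x) / 2 * math.exp(beta * x)
    down = n * (1 + x) / 2 * math.exp(-beta * x)
    return up * (f(x + 2 / n) - f(x)) + down * (f(x - 2 / n) - f(x))

# check A_n f(m_n(sigma)) == G_n (f o m_n)(sigma) for all sigma with n = 4
n, beta = 4, 1.3
f = lambda x: x ** 2
for sigma in product([-1, 1], repeat=n):
    m = sum(sigma) / n
    assert abs(A_n(f, m, n, beta) - G_n(lambda s: f(sum(s) / n), sigma, beta)) < 1e-10
```

The agreement is exact (up to rounding) because flipping a $-1$ spin moves the magnetization from $m$ to $m + 2/n$ at rate $e^{\beta m}$, and there are $n\frac{1-m}{2}$ such spins.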
We refer to \cite{EK86} for the definition and properties of Skorohod spaces and to \cite{Co89,DuRaWu16} for the proof of the large deviation principle. Moreover, see \cite[Theorem 1]{Kr16b} for a LDP obtained by using similar techniques as in this paper. This path-space large deviation principle allows us to derive the infinite volume dynamics for our model: if $m_n(0)$ converges weakly to the constant $m_0$, then the empirical magnetization process $\{m_n(t)\}_{t \geq 0}$ converges weakly, as $n \to \infty$, to the solution of \begin{equation}\label{CW:macro:dyn} \dot{m}(t) = - 2 \, m(t) \cosh (\beta m(t)) + 2 \sinh (\beta m(t)) \end{equation} with initial condition $m_0$. It is well known that the dynamical system \eqref{CW:macro:dyn} exhibits a phase transition at the critical value $\beta=1$. The solution $m=0$ is an equilibrium of \eqref{CW:macro:dyn} for all values of the parameters. For $\beta \leq 1$, it is globally stable; whereas, for $\beta > 1$, it loses stability and two new stable fixed points $m = \pm m_\beta$, $m_\beta > 0$, bifurcate. We refer the reader to \cite{Ell85}. For later convenience, let us introduce the notation \[ G_{1,\beta}(x) = \cosh(\beta x) - x \sinh(\beta x) \; \mbox{ and } \; G_{2,\beta}(x) = \sinh(\beta x) - x \cosh(\beta x). \] Observe that the equilibria of \eqref{CW:macro:dyn} are solutions to $G_{2,\beta}(x)=0$.\\ \subsection{Main results} We want to discuss the moderate deviations behavior of the magnetization around its limiting stationary points in the various regimes. We have the following three results, which can be obtained as particular cases of the more general Theorem~\ref{theorem:mdp_1d_arbitrarypotential} stated and proven in Section~\ref{subsct:MDP&CLT:arbitrary:potential}. The first of our statements is mainly of interest for sub-critical inverse temperatures $\beta < 1$, but is indeed valid for all $\beta \geq 0$. The results for the critical and super-critical regimes follow afterwards.
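The bifurcation of \eqref{CW:macro:dyn} at $\beta = 1$, around whose stable points the theorems below expand, is easy to reproduce numerically; a minimal sketch (function name and parameter values are ours), using Euler integration:

```python
import math

def flow(m0, beta, dt=1e-3, t_max=50.0):
    """Euler integration of the limiting dynamics (CW:macro:dyn):
    dm/dt = -2 m cosh(beta m) + 2 sinh(beta m) = 2 G_{2,beta}(m)."""
    m = m0
    for _ in range(int(t_max / dt)):
        m += dt * (-2.0 * m * math.cosh(beta * m) + 2.0 * math.sinh(beta * m))
    return m

# equilibria are roots of G_{2,beta}
G2 = lambda x, b: math.sinh(b * x) - x * math.cosh(b * x)

# beta <= 1: m = 0 is globally stable
assert abs(flow(0.9, beta=0.5)) < 1e-8
# beta > 1: m = 0 loses stability; trajectories settle at a root m_beta > 0
m_inf = flow(0.1, beta=2.0)
assert m_inf > 0.5 and abs(G2(m_inf, 2.0)) < 1e-8
```

For $\beta = 2$ the trajectory converges to $m_\beta \approx 0.957$, the positive solution of $m = \tanh(\beta m)$.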
\begin{theorem}[Moderate deviations around $0$] \label{theorem:moderate_deviations_CW_subcritical} Let $\{b_n\}_{n\geq 1}$ be a sequence of positive real numbers such that $b_n \to \infty$ and $b_n^{2} n^{-1} \to 0$. Suppose that $b_n m_n(0)$ satisfies the large deviation principle with speed $n b_n^{-2}$ on $\mathbb{R}$ with rate function $I_0$. Then the trajectories $\left\{b_n m_n(t)\right\}_{t \geq 0}$ satisfy the large deviation principle on $D_\mathbb{R}(\mathbb{R}^+)$: \begin{equation*} \mathbb{P}\left[\left\{b_n m_n(t)\right\}_{t \geq 0} \approx \{\gamma(t)\}_{t \geq 0} \right] \sim e^{-n b_n^{-2} I(\gamma)}, \end{equation*} where $I$ is the good rate function \begin{equation}\label{CW:0MD:RF} I(\gamma) = \begin{cases} I_0(\gamma(0)) + \int_0^\infty \mathcal{L} (\gamma(s),\dot{\gamma}(s)) \mathrm{d} s & \text{if } \gamma \in \mathcal{A}\mathcal{C}, \\ \infty & \text{otherwise}, \end{cases} \end{equation} with \[ \mathcal{L}(x,v) = \frac{1}{8} \left|v + 2x(1-\beta) \right|^2. \] \end{theorem} \begin{theorem}[Moderate deviations: critical temperature $\beta = 1$] \label{theorem:moderate_deviations_CW_critical} Let $\{b_n\}_{n\geq 1}$ be a sequence of positive real numbers such that $b_n \to \infty$ and $b_n^{4} n^{-1} \to 0$. Suppose that $b_n m_n(0)$ satisfies the large deviation principle with speed $n b_n^{-4}$ on $\mathbb{R}$ with rate function $I_0$. 
Then the trajectories $\left\{b_n m_n(b_n^2 t)\right\}_{t \geq 0}$ satisfy the large deviation principle on $D_\mathbb{R}(\mathbb{R}^+)$: \begin{equation*} \mathbb{P}\left[\left\{b_n m_n(b_n^2 t)\right\}_{t \geq 0} \approx \{\gamma(t)\}_{t \geq 0} \right] \sim e^{-n b_n^{-4} I(\gamma)}, \end{equation*} where $I$ is the good rate function \begin{equation}\label{CW:criticalMD:RF} I(\gamma) = \begin{cases} I_0(\gamma(0)) + \int_0^\infty \mathcal{L}(\gamma(s),\dot{\gamma}(s)) \mathrm{d} s & \text{if } \gamma \in \mathcal{A}\mathcal{C}, \\ \infty & \text{otherwise}, \end{cases} \end{equation} with \[ \mathcal{L}(x,v) = \frac{1}{8} \left|v + \frac{2}{3}x^3 \right|^2. \] \end{theorem} \begin{theorem}[Moderate deviations: super-critical temperatures $\beta > 1$] \label{theorem:moderate_deviations_CW_supercritical} Let \mbox{$m \in \{-m_\beta,+m_\beta\}$} be a non-zero solution of $G_{2,\beta}(x) = 0$. Moreover, let $\{b_n\}_{n\geq 1}$ be a sequence of positive real numbers such that $b_n \to \infty$ and $b_n^{2} n^{-1} \to 0$. Suppose that $b_n (m_n(0) - m)$ satisfies the large deviation principle with speed $n b_n^{-2}$ on $\mathbb{R}$ with rate function $I_0$. Then the trajectories $\left\{b_n (m_n(t) - m)\right\}_{t \geq 0}$ satisfy the large deviation principle on $D_\mathbb{R}(\mathbb{R}^+)$: \begin{equation*} \mathbb{P}\left[\left\{b_n (m_n(t) - m) \right\}_{t \geq 0} \approx \{\gamma(t)\}_{t \geq 0} \right] \sim e^{-n b_n^{-2} I(\gamma)}, \end{equation*} where $I$ is the good rate function \begin{equation}\label{CW:supercriticalMD:RF} I(\gamma) = \begin{cases} I_0(\gamma(0)) + \int_0^\infty \mathcal{L} (\gamma(s),\dot{\gamma}(s)) \mathrm{d} s & \text{if } \gamma \in \mathcal{A}\mathcal{C}, \\ \infty & \text{otherwise}, \end{cases} \end{equation} with \[ \mathcal{L}(x,v) = \frac{(v-2xG_{2,\beta}'(m))^2}{8 G_{1,\beta}(m)}. \] \end{theorem} The rate functions \eqref{CW:0MD:RF} and \eqref{CW:supercriticalMD:RF} have a similar structure. 
Indeed, whenever $\beta < 1$, $m = 0$ is the unique solution of $G_{2,\beta}(x) = 0$; moreover, we have $G_{1,\beta}(0) = 1$ and $G_{2,\beta}'(0) = \beta - 1$. \\ By choosing the sequence $b_n = n^\alpha$, with $\alpha > 0$, we can rephrase Theorems~\ref{theorem:moderate_deviations_CW_subcritical}, \ref{theorem:moderate_deviations_CW_critical} and \ref{theorem:moderate_deviations_CW_supercritical} in terms of more familiar ``moderate'' scalings involving powers of the volume. We therefore get estimates for the probability of a typical trajectory on a scale that is between a law of large numbers and a central limit theorem. We give a schematic summary of these special results in Table~\ref{tab:CW:deviations} below. For any of the three cases above, we define the \textit{Hamiltonian} $H : \mathbb{R} \times \mathbb{R} \rightarrow \mathbb{R}$ by taking the Legendre transform of $\mathcal{L}$: $H(x,p) = \sup_v \left[ pv - \mathcal{L}(x,v) \right]$. It is well known that the rate function $S$ of the stationary measures of the Markov processes, also known as the quasi-potential, solves the equation $H(x,S'(x)) = 0$, cf. Theorem 5.4.3 in \cite{FW98}. We use this property to show that our results are consistent with the moderate deviation principles obtained for the stationary measures in \cite{EiLo04}. The Hamiltonians in the three cases above read: \begin{enumerate}[(a)] \item \textit{sub-critical temperatures}, Theorem \ref{theorem:moderate_deviations_CW_subcritical}: $H(x,p) = 2x(\beta - 1)p + 2p^2$, \item \textit{critical temperature}, Theorem \ref{theorem:moderate_deviations_CW_critical}: $H(x,p) = - \frac{2}{3}x^3p + 2p^2$, \item \textit{super-critical temperatures}, Theorem \ref{theorem:moderate_deviations_CW_supercritical}: $H(x,p) = 2xG_{2,\beta}'(m)p + 2 G_{1,\beta}(m) p^2$.
\end{enumerate} The stationary rate function $S$ in each of these three cases, obtained in \cite[Theorem 1.18]{EiLo04}, is given by \begin{enumerate}[(a)] \item \textit{sub-critical temperatures}, $S(x) = \frac{1}{2}(1-\beta)x^2$, \item \textit{critical temperature}, $S(x) = \frac{1}{12}x^4$, \item \textit{super-critical temperatures}, $S(x) = \frac{1}{2}cx^2$, where $c := (\phi''(\beta m))^{-1} - \beta$, with $m$ a solution of $G_{2,\beta}(x) = 0$ and $\phi(x) = \log \left(\cosh(x)\right)$. \end{enumerate} For (a) and (b), it is clear that $H(x,S'(x)) = 0$ for all $x$. For (c), since $\phi''(x) = 1 - \tanh^2(x)$ and $m = \tanh(\beta m)$, we obtain $\phi''(\beta m) = 1 - m^2$. Therefore, we have \begin{multline*} c = (\phi''(\beta m))^{-1} - \beta = \left(1 - m \tanh(\beta m)\right)^{-1} - \beta \\ = - \left[\frac{\beta G_{1,\beta}(m) - \cosh(\beta m)}{G_{1,\beta}(m)} \right] = - \frac{G_{2,\beta}'(m)}{G_{1,\beta}(m)}, \end{multline*} which implies that also in this case $H(x,S'(x)) = 0$ for all $x$. The next theorem complements the results in \cite[Proposition~2.2]{CoDaP12} for the subcritical regime and shows that also in the supercritical case the fluctuations around a ferromagnetic stationary point converge to a diffusion process. As before, the statement is a direct consequence of a more general central limit theorem given in Theorem~\ref{theorem:CLT_1d_arbitrarypotential} below. \begin{theorem}[Central limit theorem: super-critical temperatures $\beta > 1$] \label{theorem:CLT_CW_supercritical} Let \mbox{$m \in \{-m_\beta,+m_\beta\}$} be a non-zero solution of $G_{2,\beta}(x) = 0$. Suppose that $n^{1/2} (m_n(0) - m)$ converges in law to $\nu$.
Then the process $n^{1/2}(m_n(t) - m)$ converges in law on $D_\mathbb{R}(\mathbb{R}^+)$ to the unique solution of: \begin{equation}\label{CLT_CW_supercritical} \begin{cases} \mathrm{d} Y(t) = 2Y(t)G_{2,\beta}'(m) \mathrm{d} t + 2\sqrt{G_{1,\beta}(m)} \, \mathrm{d} W(t) \\ Y(0) \sim \nu, \end{cases} \end{equation} where $W(t)$ is a standard Brownian motion on $\mathbb{R}$. \end{theorem} We conclude the analysis by considering moderate deviations and a non-standard central limit theorem for volume-dependent inverse temperatures decreasing to the critical point. In the sequel let $\{ m_n^\beta(t) \}_{t \geq 0}$ denote the process evolving at inverse temperature~$\beta$. \begin{theorem}[Moderate deviations: critical temperature $\beta = 1$, temperature rescaling] \label{theorem:moderate_deviations_CW_critical_temperature_rescaling} Let $\kappa \geq 0$ and let $\{b_n\}_{n\geq 1}$ be a sequence of positive real numbers such that $b_n \rightarrow \infty$ and $b_n^4 n^{-1}\rightarrow 0$. Suppose that $b_n m_n^{1 + \kappa b_n^{-2}}(0)$ satisfies the large deviation principle with speed $n b_n^{-4}$ on $\mathbb{R}$ with rate function $I_0$. Then the trajectories $\left\{b_n m_n^{1 + \kappa b_n^{-2}}(b_n^{2} t) \right\}_{t \geq 0}$ satisfy the large deviation principle on $D_\mathbb{R}(\mathbb{R}^+)$: \begin{equation*} \mathbb{P}\left[\left\{b_n m_n^{1 + \kappa b_n^{-2}}(b_n^2t)\right\}_{t \geq 0} \approx \{\gamma(t)\}_{t \geq 0} \right] \sim e^{-n b_n^{-4}I(\gamma)}, \end{equation*} where $I$ is the good rate function \begin{equation}\label{CW:criticalMD:temp_resc:RF} I(\gamma) = \begin{cases} I_0(\gamma(0)) + \int_0^\infty \mathcal{L}(\gamma(s),\dot{\gamma}(s)) \mathrm{d} s & \text{if } \gamma \in \mathcal{A}\mathcal{C}, \\ \infty & \text{otherwise}, \end{cases} \end{equation} with \[ \mathcal{L}(x,v) = \frac{1}{8} \left| v - 2\left( \kappa x - \frac{1}{3}x^3 \right) \right|^2\,.
\] \end{theorem} Notice that in this borderline case the moderate deviations rate function \eqref{CW:criticalMD:temp_resc:RF} is a mixture of the one at criticality \eqref{CW:criticalMD:RF} and the Gaussian rate function~\eqref{CW:0MD:RF}. \begin{theorem}[Critical fluctuations: critical temperature $\beta=1$, temperature rescaling] \label{theorem:CLT_CW_critical_temperature_rescaling} Let $\kappa \geq 0$ and suppose that $n^{1/4} m_n^{1 + \kappa n^{-1/2}}(0)$ converges in law to $\nu$. Then the process $n^{1/4}m_n^{1 + \kappa n^{-1/2}}(n^{1/2}t) $ converges in law on $D_\mathbb{R}(\mathbb{R}^+)$ to the unique solution of: \begin{equation}\label{CLT_CW_critical_temperature_rescaling} \begin{cases} \mathrm{d} Y(t) = 2 \left[ \kappa Y(t) - \frac{1}{3} Y(t)^3\right] \mathrm{d} t + 2 \, \mathrm{d} W(t) \\ Y(0) \sim \nu, \end{cases} \end{equation} where $W(t)$ is a standard Brownian motion on $\mathbb{R}$. \end{theorem} The proof of Theorem~\ref{theorem:CLT_CW_critical_temperature_rescaling} is a simple adaptation of the proofs of Theorems~\ref{theorem:CLT_1d_arbitrarypotential} and \ref{theorem:moderate_deviations_CW_critical_temperature_rescaling} and is therefore omitted.\\ The results in the present section, together with the large deviation principle in \cite[Theorem 1]{Kr16b} and the study of fluctuations at $\beta=1$ in \cite[Theorem 2.10]{CoDaP12}, give a complete picture of the behaviour of fluctuations for the Curie-Weiss model. Indeed, all possible scales are covered. We summarize our findings in Table~\ref{tab:CW:deviations}.
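The consistency check $H(x, S'(x)) = 0$ carried out above is purely algebraic and can also be confirmed numerically; the following sketch (all names are ours) evaluates the three Hamiltonians along the corresponding stationary gradients $S'$:

```python
import math

beta = 2.0
# the positive root m_beta of m = tanh(beta m), via fixed-point iteration
m = 0.5
for _ in range(200):
    m = math.tanh(beta * m)

G1 = lambda x: math.cosh(beta * x) - x * math.sinh(beta * x)
dG2 = lambda x: (beta - 1.0) * math.cosh(beta * x) - beta * x * math.sinh(beta * x)

def H(x, p, case):
    """Limiting Hamiltonians of the three moderate deviation regimes."""
    if case == "sub":      # around 0 (the identity holds for every beta)
        return 2.0 * x * (beta - 1.0) * p + 2.0 * p ** 2
    if case == "crit":     # beta = 1
        return -2.0 / 3.0 * x ** 3 * p + 2.0 * p ** 2
    if case == "super":    # around m_beta, beta > 1
        return 2.0 * x * dG2(m) * p + 2.0 * G1(m) * p ** 2

def dS(x, case):
    """Derivatives of the stationary rate functions of Eichelsbacher-Loewe."""
    if case == "sub":
        return (1.0 - beta) * x     # S(x) = (1 - beta) x^2 / 2
    if case == "crit":
        return x ** 3 / 3.0         # S(x) = x^4 / 12
    if case == "super":
        return -dG2(m) / G1(m) * x  # S(x) = c x^2 / 2, c = -G2'(m)/G1(m)

for case in ("sub", "crit", "super"):
    for x in (-1.0, -0.3, 0.0, 0.7, 2.0):
        assert abs(H(x, dS(x, case), case)) < 1e-10
```

In each case the quasi-potential equation is satisfied identically in $x$, not just at the stationary point.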
For completeness, we also give the Hamiltonian of the large deviation principle for the dynamics of $m_n(t)$ around its limiting trajectory \eqref{CW:macro:dyn}: \begin{equation} \label{eqn:Ham_LDP} \begin{aligned} H(x,p) & = \frac{1-x}{2} e^{\beta x} \left[e^{2p} - 1\right] + \frac{1+x}{2} e^{-\beta x}\left[e^{-2p} - 1\right] \\ & = \left[\cosh(2p) -1 \right]G_{1,\beta}(x) + \sinh(2p) G_{2,\beta}(x). \end{aligned} \end{equation} The displayed conclusions are drawn under the assumption that in each case either the initial condition satisfies a large deviation principle at the correct speed or the initial measure converges weakly.\\ \begin{table}[h!] \caption{Fluctuations for the empirical magnetization of the Curie-Weiss spin-flip dynamics} \begin{center} \begin{tabular}{|c|c|c|c|} \hline \rowcolor{Gray} \parbox[c][1cm][c]{1.5cm}{\scshape \footnotesize \centering Scaling Exponent} & \parbox[c][1cm][c]{2.5cm}{\scshape \footnotesize \centering Temperature} & \parbox[c][1cm][c]{2.8cm}{\scshape \footnotesize \centering Rescaled Process} & \parbox[c][1cm][c]{4cm}{\scshape \footnotesize \centering Limiting Theorem}\\ \hline \hline $\alpha = 0$ & all $\beta$ & $m_n(t)$ & \parbox[c][1.3cm][c]{4cm}{\footnotesize \centering LDP at speed $n$ with Hamiltonian as in \eqref{eqn:Ham_LDP} \\ (see \cite{Co89,Kr16b})}\\ \hline\hline \multicolumn{4}{|c|}{{\bf \scriptsize \cellcolor{LightGray} NON-CRITICAL CASES}}\\ \hline\hline \multirow{3}{*}{$\alpha \in \left( 0, \frac{1}{2} \right)$} & all $\beta$ & $n^\alpha m_n(t)$ & \parbox[c][1.5cm][c]{4cm}{\footnotesize \centering LDP at speed $n^{1-2\alpha}$ with rate function \eqref{CW:0MD:RF}}\\ & $\beta > 1$ & $n^\alpha (m_n(t) \pm m_\beta)$ & \parbox[c][1.5cm][c]{4cm}{\footnotesize \centering LDP at speed $n^{1-2\alpha}$ with rate function \eqref{CW:supercriticalMD:RF}} \\ \hline \multirow{3}{*}{$\alpha = \frac{1}{2}$} & all $\beta$ & $n^{1/2} m_n(t)$ & \parbox[c][2.7cm][c]{4cm}{\footnotesize \centering CLT \\ weak convergence to the
unique solution of {\scriptsize \[ \mathrm{d} Y(t) = 2(\beta-1) Y(t) \mathrm{d} t + 2 \mathrm{d} W(t) \] (see \cite{CoDaP12})}}\\ & $\beta > 1$ & $n^{1/2} (m_n(t) \pm m_\beta)$ & \parbox[c][1.5cm][c]{4cm}{\footnotesize \centering CLT \\ weak convergence to the unique solution of \eqref{CLT_CW_supercritical}} \\ \hline\hline \multicolumn{4}{|c|}{{\bf \scriptsize \cellcolor{LightGray} CRITICAL CASES}}\\ \hline \hline \multirow{3}{*}{$\alpha \in \left( 0, \frac{1}{4} \right)$} & $\beta = 1$ & $ n^{\alpha} m_n \left( n^{2\alpha} t \right)$ & \parbox[c][1.3cm][c]{4cm}{\footnotesize \centering LDP at speed $n^{1-4\alpha}$ with rate function \eqref{CW:criticalMD:RF}}\\ & \parbox{2.5cm}{\centering $\beta = 1 + \kappa n^{-2\alpha}$ \mbox{ \footnotesize (with $\kappa \geq 0$)}} & $n^\alpha m_n \left( n^{2\alpha}t \right)$ & \parbox[c][1.3cm][c]{4cm}{\footnotesize \centering LDP at speed $n^{1-4\alpha}$ with rate function \eqref{CW:criticalMD:temp_resc:RF}} \\ \hline \multirow{3}{*}{$\alpha = \frac{1}{4}$} & $\beta = 1$ & $ n^{1/4} m_n \left( n^{1/2} t \right)$ & \parbox[c][2.3cm][c]{4cm}{\footnotesize \centering weak convergence to the unique solution of {\scriptsize \[ \mathrm{d} Y(t) = -\frac{2}{3} Y(t)^3 \mathrm{d} t + 2 \mathrm{d} W(t) \] (see \cite{CoDaP12})}}\\ & \parbox{2.5cm}{\centering $\beta = 1 + \kappa n^{-1/2}$ \mbox{ \footnotesize (with $\kappa \geq 0$)}} & $n^{1/4} m_n \left( n^{1/2}t \right)$ & \parbox[c][1.3cm][c]{4cm}{\footnotesize \centering weak convergence to the unique solution of \eqref{CLT_CW_critical_temperature_rescaling}} \\ \hline \end{tabular} \end{center} \label{tab:CW:deviations} \end{table} \section{Proofs}\label{sct:proofs} \subsection{Strategy of the proof} \label{section:strategy_of_proof} We will analyze the large/moderate deviation behaviour following the Feng-Kurtz approach to large deviations \cite{FK06}.
This method is based on three observations: \begin{enumerate} \item If the processes are exponentially tight, it suffices to establish the large deviation principle for finite dimensional distributions. \item The large deviation principle for finite dimensional distributions can be established by proving that the semigroup of log-Laplace transforms of the conditional probabilities converges to a limiting semigroup. \item One can often rewrite the limiting semigroup as a variational semigroup, which allows one to rewrite the rate-function on the Skorohod space in Lagrangian form. \end{enumerate} The strategy to prove a large deviation principle with speed $r(n)$ for a sequence of Markov processes $\{X_n\}_{n \geq 1}$, having generators $\{ A_n \}_{n \geq 1}$, formally works as follows: \begin{enumerate} \item \emph{Identification of a limiting Hamiltonian $H$.} The semigroups of log-Laplace transforms of the conditional probabilities \begin{equation*} V_n(t)f(x) = \frac{1}{r(n)} \log \mathbb{E}\left[e^{r(n)f(X_n(t))} \, \middle| X_n(0) = x \right] \end{equation*} formally have generators $H_nf = r(n)^{-1}e^{-r(n)f}A_n e^{r(n)f}$. Then one verifies that the sequence $\{H_n\}_{n \geq 1}$ converges to a limiting operator $H$; i.e. one shows that, for any $f \in \mathcal{D}(H)$, there exists $f_n \in \mathcal{D}(H_n)$ such that $f_n \to f$ and $H_n f_n \to Hf$, as $n \to \infty$. \item \emph{Exponential tightness.} Provided one can verify the exponential compact containment condition, the convergence of the sequence $\{H_n\}_{n \geq 1}$ gives exponential tightness. \item \emph{Verification of a comparison principle.} The theory of viscosity solutions gives applicable conditions for proving that the limiting Hamiltonian generates a semigroup.
If for all $\lambda > 0$ and bounded continuous functions $h$, the Hamilton-Jacobi equation $f - \lambda H f = h$ admits a unique solution, one can extend the generator $H$ so that the extension satisfies the conditions of the Crandall-Liggett theorem and thus generates a semigroup $V(t)$. Additionally, it follows that the semigroups $V_n(t)$ converge to $V(t)$, giving the large deviation principle. Uniqueness of the solution of the Hamilton-Jacobi equation can be established via a comparison principle for sub- and super-solutions. \item \emph{Variational representation of the limiting semigroup.} By Legendre transforming the limiting Hamiltonian $H$, one can define a ``Lagrangian'' which can be used to define a variational semigroup and a variational resolvent. It can be shown that the variational resolvent provides a solution of the Hamilton-Jacobi equation and therefore, by uniqueness of the solution, identifies the resolvent of $H$. As a consequence, an approximation procedure yields that the variational semigroup and the limiting semigroup $V(t)$ agree. A standard argument is then sufficient to give a Lagrangian form of the path-space rate function. \end{enumerate} We refer to Appendix~\ref{sct:app:LDPviaHJequation} for an overview of the derivation of a large deviation principle via the solution of a Hamilton-Jacobi equation.\\ We proceed with the verification of the model-specific conditions needed to apply the results from the appendix. The treatment is carried out in a more abstract way than strictly necessary to single out important arguments that might get lost if all the objects are explicit. The appendix and the following definitions are written to prove a path-space large deviation principle on $D_E(\mathbb{R}^+)$, the Skorohod space of paths taking values in the closed set $E \subseteq \mathbb{R}^d$ which is contained in the $\mathbb{R}^d$-closure of its $\mathbb{R}^d$-interior.
Two types of functions are of importance for this purpose: good penalization and good containment functions. \begin{definition} We say that $\{\Psi_\alpha\}_{\alpha >0}$, with $\Psi_\alpha : E^2 \rightarrow \mathbb{R}$, are \textit{good penalization functions} if there are extensions of $\Psi_\alpha$ to an open neighbourhood of $E^2$ in $\mathbb{R}^d \times \mathbb{R}^d$ (also denoted by $\Psi_\alpha$) so that \begin{enumerate}[($\Psi$a)] \item For all $\alpha > 0$, we have $\Psi_\alpha \geq 0$ and $\Psi_\alpha(x,y) = 0$ if and only if $x = y$. Additionally, $\alpha \mapsto \Psi_\alpha$ is increasing and \begin{equation*} \lim_{\alpha \rightarrow \infty} \Psi_\alpha(x,y) = \begin{cases} 0 & \text{if } x = y \\ \infty & \text{if } x \neq y. \end{cases} \end{equation*} \item $\Psi_\alpha$ is twice continuously differentiable in both coordinates for all $\alpha > 0$, \item $(\nabla \Psi_\alpha(\cdot,y))(x) = - (\nabla \Psi_\alpha(x,\cdot))(y)$ for all $\alpha > 0$. \end{enumerate} \end{definition} \begin{definition} We say that $\Upsilon : E \rightarrow \mathbb{R}$ is a \textit{good containment function} (for $H$) if there is an extension of $\Upsilon$ to an open neighbourhood of $E$ in $\mathbb{R}^d$ (also denoted by $\Upsilon$) so that \begin{enumerate}[($\Upsilon$a)] \item $\Upsilon \geq 0$ and there exists a point $x_0 \in E$ such that $\Upsilon(x_0) = 0$, \item $\Upsilon$ is twice continuously differentiable, \item for every $c \geq 0$, the set $\{x \in E \, | \, \Upsilon(x) \leq c\}$ is compact, \item we have $\sup_{z \in E} H(z,\nabla \Upsilon(z)) < \infty$. \end{enumerate} \end{definition} In the rest of the paper, we will denote by $C_c^2(E)$ the set of functions that are constant outside some compact set in $E$ and twice continuously differentiable on a neighbourhood of $E$ in~$\mathbb{R}^d$. \\ Let us denote by $E_n$, a closed subset of $E \subseteq \mathbb{R}^d$, the set where the finite-$n$ process takes values.
Our large or moderate deviation principles will all follow from the application of Theorem~\ref{theorem:Abstract_LDP} after having checked the following conditions: \begin{enumerate}[(a)] \item For all $f \in C_c^2(E)$ and compact sets $K \subseteq E$, we have \begin{equation*} \lim_{n \rightarrow \infty} \sup_{x \in K \cap E_n} \left|H_n f(x) - Hf(x) \right| = 0. \end{equation*} \item There exists a good containment function $\Upsilon$ for $H$. \item For all $\lambda > 0$ and $h \in C_b(E)$, the comparison principle holds for $f - \lambda Hf = h$. \end{enumerate} Rigorous definitions of the Hamilton-Jacobi equation, viscosity solutions and the comparison principle will be given in Appendix~\ref{sct:app:LDPviaHJequation}.\\ The limiting Hamiltonians we will encounter are all of quadratic type and the space $E$ will always equal $\mathbb{R}$. The following two known results establish (b) and (c) for Hamiltonians of this type, and are given for completeness. We postpone the verification of condition (a) for our various cases to Subsections~\ref{subsct:MDP&CLT:arbitrary:potential} and \ref{subsct:MDP:critical:temp_rescaling}. \begin{definition} Let $E \subseteq \mathbb{R}^d$ be a closed set. We say that a vector field $\mathbf{F} : E \rightarrow \mathbb{R}^d$ is one-sided Lipschitz if there exists a constant $M \geq 0$ such that, for all $x,y \in E$, we have \begin{equation*} \ip{x-y}{\mathbf{F}(x) - \mathbf{F}(y)} \leq M|x-y|^2. \end{equation*} \end{definition} \begin{lemma}\label{lemma:FW_one_sided_lipschitz_containment_function} Let $\mathbf{F}: \mathbb{R}^d \rightarrow \mathbb{R}^d$ be one-sided Lipschitz, $A$ a positive-definite matrix and $c \geq 0$. Assume $\mathbf{F}(0) = 0$. Consider the Hamiltonian $H$ with domain $C_c^2(\mathbb{R}^d)$ and of the form $Hf(x) = H(x,\nabla f(x))$, where \begin{equation*} H(x,p) = \ip{p}{\mathbf{F}(x)} + c \, \ip{Ap}{p}.
\end{equation*} Then $\Upsilon(x) = \log (1+ \frac{1}{2}|x|^2)$ is a good containment function for $H$. \end{lemma} \begin{proof} The proof is based on a simplification of Example 4.23 in \cite{FK06}. Since the $i$-th component of the gradient of $\Upsilon$ is given by \begin{equation*} (\nabla \Upsilon(x))_i = \frac{x_i}{1+\frac{1}{2}|x|^2}, \end{equation*} we obtain \[ H(x,\nabla \Upsilon(x)) = \frac{\ip{x}{\mathbf{F}(x)}}{1+\frac{1}{2}|x|^2} + \frac{c \ip{Ax}{x}}{(1+\frac{1}{2}|x|^2)^2} . \] Notice that, by the one-sided Lipschitz property of $\mathbf{F}$, there exists a constant $M$ such that $\ip{x}{\mathbf{F}(x)} = \ip{x- 0}{\mathbf{F}(x) - \mathbf{F}(0)} \leq M |x|^2$. Moreover, by the Cauchy-Schwarz inequality, we have $\ip{Ax}{x} \leq |Ax||x| \leq \|A\| |x|^2$, where $\|A\| := \sup_{|x|=1} \, |Ax|$. Therefore, we get the estimate \[ H(x,\nabla \Upsilon(x)) \leq \frac{M |x|^2 }{1+\frac{1}{2}|x|^2} + \frac{c \|A\| |x|^2}{\left( 1+\frac{1}{2}|x|^2 \right)^2} \leq 4 (M +c \|A\|), \] which gives $\sup_x H(x,\nabla \Upsilon(x)) < \infty$, implying that $\Upsilon$ is a good containment function. \end{proof} \begin{proposition} \label{proposition:FW_one_sided_lipschitz_comparison_principle} Let $\mathbf{F}: \mathbb{R}^d \rightarrow \mathbb{R}^d$ be one-sided Lipschitz, $A$ a positive-definite matrix and $c \in \mathbb{R}$. Assume $\mathbf{F}(0) = 0$. Consider the Hamiltonian $H$ with domain $C_c^2(\mathbb{R}^d)$ and of the form $Hf(x) = H(x,\nabla f(x))$, where \begin{equation*} H(x,p) = \ip{p}{\mathbf{F}(x)} + c \, \ip{Ap}{p}. \end{equation*} Then, for every $\lambda > 0$ and $h \in C_b(\mathbb{R}^d)$, the comparison principle is satisfied for $f - \lambda H f = h$. \end{proposition} \begin{proof} We apply Proposition~\ref{proposition:comparison_conditions_on_H}. We have to check \eqref{condH:negative:liminf}.
We use the good containment function introduced in Lemma~\ref{lemma:FW_one_sided_lipschitz_containment_function} and the collection of good penalization functions $\{\Psi_\alpha \}_{\alpha>0}$, with $\Psi_\alpha(x,y) = \frac{\alpha}{2}|x-y|^2$. We fix $\varepsilon > 0$ and write $x_\alpha, y_\alpha$ instead of $x_{\alpha,\varepsilon},y_{\alpha,\varepsilon}$ to lighten the notation. As the term $\ip{Ap}{p}$ does not depend on $x$ or $y$, we have \begin{align*} &H(x_\alpha, \alpha(x_\alpha - y_\alpha)) - H(y_\alpha, \alpha(x_\alpha - y_\alpha)) \\ &\qquad \qquad = \ip{\alpha(x_\alpha - y_\alpha)}{\mathbf{F}(x_\alpha)} -\ip{\alpha(x_\alpha - y_\alpha)}{\mathbf{F}(y_\alpha)} \leq 2 M \Psi_{\alpha}(x_\alpha,y_\alpha). \end{align*} By Lemma~\ref{lemma:doubling_lemma}, we obtain $\lim_{\alpha \to \infty} \Psi_{\alpha}(x_\alpha,y_\alpha) = 0$ and the conclusion follows. \end{proof} \begin{remark} Lemma~\ref{lemma:FW_one_sided_lipschitz_containment_function} and Proposition~\ref{proposition:FW_one_sided_lipschitz_comparison_principle} can be suitably adapted (by centering~$\Upsilon$) to deal with the case when $\mathbf{F}(x_s) = 0$ at some $x_s \neq 0$. \end{remark} \subsection{Proof of Theorems~\ref{theorem:moderate_deviations_CW_subcritical}, \ref{theorem:moderate_deviations_CW_critical}, \ref{theorem:moderate_deviations_CW_supercritical} and \ref{theorem:CLT_CW_supercritical}} \label{subsct:MDP&CLT:arbitrary:potential} We introduce a general version of dynamics \`a la Curie-Weiss where the evolution of the magnetization is driven by a sufficiently smooth arbitrary potential.
We characterize moderate deviations and a central limit theorem for this generalization, obtaining the statements of Theorems \ref{theorem:moderate_deviations_CW_subcritical}--\ref{theorem:CLT_CW_supercritical} as corollaries.\\ Let $U$ be a continuously differentiable potential and consider the dynamics such that the empirical magnetization $\{m_n(t)\}_{t \geq 0}$ is a Markov process on $E_n$, with generator \begin{multline}\label{eqn:CWgenerator_arbitrarypotential} \mathcal{A}_nf(x) = \frac{n(1-x)}{2} \, e^{U'(x)} \left[f\left(x + \frac{2}{n}\right) - f(x)\right] \\ + \frac{n(1+x)}{2} \, e^{-U'(x)} \left[f\left(x - \frac{2}{n}\right) - f(x)\right]. \end{multline} The infinite-volume dynamics corresponding to the Markov process with generator \eqref{eqn:CWgenerator_arbitrarypotential} may be derived from the large deviation principle in \cite[Theorem 1]{Kr16b}. In particular, the stationary points for the limiting dynamics are the solutions of $G_2(x)=0$, where \[ G_{2}(x) := \sinh(U'(x)) - x \cosh(U'(x)). \] For later convenience, we also define $G_{1}(x) := \cosh(U'(x)) - x \sinh(U'(x))$. In this setting, we have a moderate deviation principle and a weak convergence result. \begin{theorem}[Moderate deviations, arbitrary potential and stationary point] \label{theorem:mdp_1d_arbitrarypotential} Let $m$ be a solution of $G_2(x) = 0$. Let $k \in \mathbb{N} \cup \{0\}$ and suppose that $U'$ is $2k+1$ times continuously differentiable. Additionally, suppose that \begin{enumerate}[(a)] \item $G_2^{(l)}(m) = 0$ for $l \leq 2k$, \item if $k > 0$, then $G_2^{(2k+1)}(m) \leq 0$. \end{enumerate} Let $\{b_n\}_{n\geq 1}$ be a sequence of positive real numbers such that \begin{equation*} b_n \rightarrow \infty, \qquad \frac{b_n^{2(k+1)}}{n} \rightarrow 0. \end{equation*} Suppose that $b_n (m_n(0) - m)$ satisfies the large deviation principle with speed $n b_n^{-2(k+1)}$ on $\mathbb{R}$ with rate function $I_0$.
Then the trajectories $\left\{b_n(m_n(b_n^{2k} t) - m)\right\}_{t \geq 0}$ satisfy the large deviation principle on $D_\mathbb{R}(\mathbb{R}^+)$: \begin{equation*} \mathbb{P}\left[\left\{b_n(m_n(b_n^{2k} t) - m)\right\}_{t \geq 0} \approx \{\gamma(t)\}_{t \geq 0} \right] \sim e^{-n b_n^{-2(k+1)}I(\gamma)}, \end{equation*} where $I$ is the good rate function \begin{equation*} I(\gamma) = \begin{cases} I_0(\gamma(0)) + \int_0^\infty \mathcal{L}(\gamma(s),\dot{\gamma}(s)) \mathrm{d} s & \text{if } \gamma \in \mathcal{A}\mathcal{C}, \\ \infty & \text{otherwise}, \end{cases} \end{equation*} and \begin{equation*} \mathcal{L}(x,v) = \frac{\left(v-\frac{2x^{2k+1}}{(2k+1)!}G_{2}^{(2k+1)}(m) \right)^2}{8 G_{1}(m)}. \end{equation*} \end{theorem} \begin{proof} The generator $A_n$ of the process $b_n(m_n(b_n^{2k}t) - m)$ can be deduced from \eqref{eqn:CWgenerator_arbitrarypotential} and is given by \begin{align*} A_n f (x) &= b_n^{2k} n \frac{1-m - xb_n^{-1}}{2} e^{U'(m + xb_n^{-1})} \left[f\left(x + 2b_n n^{-1}\right) - f(x)\right] \\ &+ b_n^{2k} n \frac{1 + m +xb_n^{-1}}{2} e^{- U'(m + xb_n^{-1})} \left[ f\left(x - 2b_n n^{-1}\right) - f(x) \right]. \end{align*} Therefore the Hamiltonian \begin{equation*} H_nf = b_n^{2(k+1)}n^{-1} e^{-n b_n^{-2(k+1)}f} A_n e^{n b_n^{-2(k+1)}f} \end{equation*} is given by \begin{multline*} H_nf(x) = b_n^{4k+2} \frac{1-m - xb_n^{-1}}{2} e^{U'(m + xb_n^{-1})} \left[e^{n b_n^{-2(k+1)}\left(f\left(x + 2b_n n^{-1}\right) - f(x)\right)}-1\right] \\ + b_n^{4k+2} \frac{1 + m +xb_n^{-1}}{2} e^{- U'(m + xb_n^{-1})} \left[e^{n b_n^{-2(k+1)}\left(f\left(x - 2b_n n^{-1}\right) - f(x)\right)}-1\right]. \end{multline*} We now prove the convergence of the sequence $H_nf$ for $f \in C_c^2(\mathbb{R})$.
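This convergence can be illustrated numerically. A minimal sketch, assuming the concrete (purely illustrative) choices $U(x) = x^2/4$, stationary point $m = 0$ and $k = 0$, with a Gaussian standing in for a compactly supported test function; for these choices $G_2'(0) = -1/2$ and $G_1(0) = 1$, so the limit identified below reduces to $H(x,p) = -xp + 2p^2$:

```python
import math

# Hedged sketch, not part of the proof: U(x) = x**2/4 (beta = 1/2, subcritical),
# m = 0, k = 0, so the limit Hamiltonian is H(x, p) = -x*p + 2*p**2, p = f'(x).
def f(x):
    return math.exp(-x * x)          # Gaussian in place of a C_c^2 test function

def fprime(x):
    return -2.0 * x * math.exp(-x * x)

def H_n(x, n, b):
    """Prelimit Hamiltonian H_n f(x) for U(x) = x**2/4, m = 0, k = 0."""
    lam = n / b ** 2                 # n * b_n^{-2(k+1)} with k = 0
    h = 2.0 * b / n                  # jump size of the rescaled process
    up = 0.5 * b ** 2 * (1.0 - x / b) * math.exp(0.5 * x / b) \
        * math.expm1(lam * (f(x + h) - f(x)))
    down = 0.5 * b ** 2 * (1.0 + x / b) * math.exp(-0.5 * x / b) \
        * math.expm1(lam * (f(x - h) - f(x)))
    return up + down

def H_limit(x):
    p = fprime(x)
    return -x * p + 2.0 * p ** 2

n, b = 10 ** 8, 100.0                # b_n^{2(k+1)}/n = 1e4/1e8 is small
errs = [abs(H_n(x, n, b) - H_limit(x)) for x in (-1.0, -0.4, 0.3, 1.0)]
print(max(errs))                     # small, consistent with H_n f -> H f
```

Using `math.expm1` avoids cancellation in the bracketed terms $e^{\lambda\Delta f}-1$, which is exactly where the $O(1)$ limit is extracted from two nearly cancelling $O(b_n)$ contributions.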
To compensate for the $b_n^{4k+2}$ prefactor, we Taylor expand the exponential containing $f$ up to terms of order $O(b_n^{-4k-2})$: \begin{multline*} \exp\left\{n b_n^{-2(k+1)}\left(f(x \pm 2b_n n^{-1}) -f(x)\right)\right\} -1 \\ = \pm 2 b_n^{-2k -1} f'(x) + 2 b_n^{-4k-2}(f'(x))^2 + o(b_n^{-4k-2}). \end{multline*} Observe that the above expansion holds since $b_n^{-2k}n^{-1} = o(b_n^{-4k-2})$ by hypothesis. Thus, combining the terms with $f'$ and the terms with $(f')^2$, we find that \begin{equation*} H_nf(x) = 2 b_n^{2k +1} G_{2}(m + x b_n^{-1}) f'(x) + 2 G_{1}(m + x b_n^{-1}) (f'(x))^2 + o(1). \end{equation*} Next, we Taylor expand $G_1,G_2$ around $m$. For $G_1$ it is clear that only the zeroth-order term remains. For $G_2$, we use that the terms of order $\leq 2k$ vanish. This gives \begin{align*} H_nf(x) & = \frac{2x^{2k+1}}{(2k+1)!} G_{2}^{(2k+1)}(m) f'(x) + 2 G_{1}(m) (f'(x))^2 + o(1), \end{align*} where the $o(1)$ is uniform on compact sets. Thus, for $f \in C_c^2(\mathbb{R})$, $H_nf$ converges uniformly to $Hf(x) = H(x,f'(x))$ where \begin{equation*} H(x,p) = \frac{2x^{2k+1}}{(2k+1)!}G_{2}^{(2k+1)}(m) p + 2 G_{1}(m) p^2. \end{equation*} The large deviation result follows by Theorem \ref{theorem:Abstract_LDP}, Lemma \ref{lemma:FW_one_sided_lipschitz_containment_function} and Proposition \ref{proposition:FW_one_sided_lipschitz_comparison_principle}. Note that condition (b) in the statement guarantees that the vector field is one-sided Lipschitz. The Lagrangian is found by taking a Legendre transform of~$H$. \end{proof} \begin{theorem}[Central limit theorem, general potential, arbitrary stable stationary point] \label{theorem:CLT_1d_arbitrarypotential} Let $m$ be a solution of $G_2(x) = 0$. Let $k \in \mathbb{N} \cup \{0\}$ and suppose that $U'$ is $2k+1$ times continuously differentiable. Additionally, suppose that \begin{enumerate}[(a)] \item $G_2^{(l)}(m) = 0$ for $l \leq 2k$, \item if $k > 0$, then $G_2^{(2k+1)}(m) \leq 0$.
\end{enumerate} Suppose that $n^{\frac{1}{2k+2}} (m_n(0) - m)$ converges in law to $\nu$. Then the process \[ n^{\frac{1}{2k+2}} \left( m_n \left( n^{\frac{k}{k+1}}t \right) - m \right) \] converges in law on $D_\mathbb{R}(\mathbb{R}^+)$ to the unique solution of \begin{equation}\label{limiting_diffusion_CLT_CW} \begin{cases} \mathrm{d} Y(t) = \frac{2}{(2k+1)!} \, Y(t)^{2k+1} G_{2}^{(2k+1)}(m) \, \mathrm{d} t + 2\sqrt{G_{1}(m)} \, \mathrm{d} W(t) \\ Y(0) \sim \nu, \end{cases} \end{equation} where $W(t)$ is a standard Brownian motion on $\mathbb{R}$. \end{theorem} The limiting diffusion \eqref{limiting_diffusion_CLT_CW} admits a unique invariant measure with density proportional to $\exp \left\{ - c y^{2k+2}/(2k+2)! \right\}$, with $c = \vert G_2^{(2k+1)}(m) \vert / G_1(m)$. Observe that this is precisely the limiting distribution prescribed by the analysis of equilibrium fluctuations performed in \cite{ElNe78b}. The proof of the theorem given below is in the spirit of the proofs of the moderate deviation principles and is based on a combination of the compact containment condition and the convergence of the generators. We first prove the compact containment condition. Let \[ X_n(t) := n^{\frac{1}{2k+2}} \left( m_n \left( n^{\frac{k}{k+1}} t \right) - m \right) \] be the space-time rescaled fluctuation process. We introduce the family $\{ \tau_n^C \}_{n \geq 1}$ of stopping times, defined by \[ \tau_n^C := \inf \left\{ t \geq 0 : \left\vert X_n(t) \right\vert \geq C \right\}\,. \] We start by studying the asymptotic behavior of the sequence $\{ \tau_n^C \}_{n \geq 1}$. \begin{lemma}\label{lmm:asymptotics_stopping_times} For any $T \geq 0$ and $\varepsilon > 0$, there exist $n_\varepsilon \geq 1$ and $C_\varepsilon > 0$ such that \[ \sup_{n \geq n_\varepsilon} \, \mathbb{P} \left( \tau_n^{C_\varepsilon} \leq T \right) \leq \varepsilon\,. \] \end{lemma} \begin{proof} Let $C$ be a strictly positive constant.
First observe that \begin{align} \mathbb{P} \left( \tau_n^C \leq T \right) &\leq \mathbb{P} \left( \sup_{0 \leq t \leq T \wedge \tau_n^C} \left\vert X_n(t) \right\vert \geq C \right). \label{target_probability} \end{align} We will obtain bounds for \eqref{target_probability} and show that it can be made arbitrarily small whenever $n$ is large enough. The idea is to get the estimate by considering a martingale decomposition for $\tilde{f}(X_n)$, where $\tilde{f} \in C^3(\mathbb{R})$ has bounded derivatives and is such that $\tilde{f}(x) = \tilde{f}(-x)$ and $\lim_{x \rightarrow \infty} \tilde{f}(x) = \infty$. \\ For any $n \geq 1$ and $j \in \{-1,+1\}$, let $N_n(j, \mathrm{d} t)$ be the Poisson process counting the number of flips of spins with value $j$ up to time $n^{\frac{k}{k+1}} t$. The intensity of $N_n(j, \mathrm{d} t)$ is $R_n(j,t) \mathrm{d} t$ with \[ R_n (j,t) = \frac{n^{\frac{2k+1}{2k+2}}}{2} \,\left[ 1 + j \left( m + X_n(t) n^{-\frac{1}{2k+2}} \right) \right] \, e^{-j U' \left( m + X_n(t) n^{-\frac{1}{2k+2}} \right)}. \] Moreover, we define \begin{equation}\label{eqn:Poisson_process_minus_intensity} \widetilde{N}_n(j, \mathrm{d} t) := N_n(j, \mathrm{d} t) - R_n (j, t) \, \mathrm{d} t. \end{equation} Let $\tilde{f}(x) = \log \sqrt{1+x^2}$ and consider the semi-martingale decomposition \[ \tilde{f}(X_n(t)) = \tilde{f}(X_n(0)) + \int_0^t A_n \tilde{f}(X_n(s)) \, \mathrm{d} s + M^{(1)}_n(t), \] where $M^{(1)}_n$ is the local martingale given by \[ M^{(1)}_n (t) := \int_0^t \sum_{j = \pm 1} \overline{\nabla}_j \, \tilde{f}(X_n(s)) \, \widetilde{N}_n(j, \mathrm{d} s) \] and \[ \overline{\nabla}_j \tilde{f}(X_n(t)) := \log \sqrt{1+ \left( X_n (t) - 2 j n^{-\frac{2k+1}{2k+2}}\right)^2} - \log \sqrt{1+X_n^2 (t)}.
\] We have \begin{multline*} \mathbb{P} \left( \sup_{0 \leq t \leq T \wedge \tau_n^C} \left\vert X_n(t) \right\vert \geq C \right) \leq \mathbb{P} \left( \sup_{0 \leq t \leq T \wedge \tau_n^C} \tilde{f}(X_n(t)) \geq \tilde{f}(C) \right) \\ \leq \mathbb{P} \left( \sup_{0 \leq t \leq T \wedge \tau_n^C} \tilde{f}(X_n(0)) \geq \frac{\tilde{f}(C)}{3} \right) + \mathbb{P} \left( \sup_{0 \leq t \leq T \wedge \tau_n^C} A_n \tilde{f}(X_n(t)) \geq \frac{\tilde{f}(C)}{3T} \right) \\ + \mathbb{P} \left( \sup_{0 \leq t \leq T \wedge \tau_n^C} M^{(1)}_n(t) \geq \frac{\tilde{f}(C)}{3} \right). \end{multline*} We estimate the three terms on the right-hand side of the previous inequality. All the constants appearing in the bounds below are \emph{independent of $n$}. \begin{itemize} \item Convergence in law of the initial condition implies $\mathbb{P} \left( \tilde{f}(X_n(0)) \geq c_0 (\varepsilon) \right) \leq \frac{\varepsilon}{3}$ for a sufficiently large $c_0(\varepsilon)$ and for all $n$. \item Since we are stopping the process $X_n(t)$ whenever it leaves the compact set $[-C,C]$, we find that $\tilde{f}(X_n(t))$ is bounded on the set of interest. Therefore, we can apply \eqref{eqn:inf_gen_space-time_rescaled_fluct_CLT} proven below to obtain \begin{equation}\label{eqn:generator:ftilde} A_n \tilde{f} (x) = \frac{2 G_2^{(2k+1)}(m)}{(2k+1)!} \frac{x^{2k+2}}{1+x^2} + 2 G_1(m) \frac{1-x^2}{(1+x^2)^2} + o(1), \end{equation} which, for $t \leq \tau_n^C$, implies \begin{equation}\label{estimate_generator} A_n \tilde{f}(X_n (t)) \leq c_1, \end{equation} for a sufficiently large $c_1>0$, independent of $C$. Note that the first term on the right-hand side of \eqref{eqn:generator:ftilde} is bounded if $k=0$ and negative if $k > 0$.
\item Since \begin{align} \left\vert R_n (j,t) \right\vert &\leq n^{\frac{2k+1}{2k+2}} \, e^{-j U' \left( m + X_n(t) n^{-\frac{1}{2k+2}} \right)} \nonumber\\ &\leq n^{\frac{2k+1}{2k+2}} \, \left( e^{U'(m)} + O \left( n^{-\frac{1}{2k+2}}\right) \right)\label{estimate_intensity} \end{align} and \begin{equation}\label{estimate_gradient} \left( \overline{\nabla}_j \, \tilde{f}(X_n(t)) \right)^2 = \left(\int_{X_n(t)}^{X_n (t) - 2 j n^{-\frac{2k+1}{2k+2}}} \frac{y}{1+y^2} \, \mathrm{d} y\right)^2 \leq c_2 \, n^{-\frac{4k+2}{2k+2}}, \end{equation} we obtain \begin{align}\label{estimate_quadratic_variation} \mathbb{E} \left[ \left( M_n^{(1)} \left(T \wedge \tau_n^C\right)\right)^2 \right] &= \mathbb{E} \left[ \int_0^{T \wedge \tau_n^C} \sum_{j = \pm 1} \left( \overline{\nabla}_j \, \tilde{f}(X_n(s)) \right)^2 R_n(j,s) \mathrm{d} s \right] \leq c_3 \, T \end{align} for sufficiently large $n$ and suitable $c_2, c_3 > 0$, independent of $C$. By Doob's maximal inequality, we can conclude \[ \mathbb{P} \left( \sup_{0 \leq t \leq T \wedge \tau_n^C} M_n^{(1)} (t) \geq \frac{\tilde{f}(C)}{3} \right) \leq \frac{9Tc_3}{\tilde{f}(C)^2} \,. \] \end{itemize} Therefore, for any $\varepsilon > 0$ and for sufficiently large $n$, by choosing the constant $C_\varepsilon \geq \max \left\{ c_0(\varepsilon), \sqrt{3c_1}, \tilde{f}^{-1} \left(\frac{27 T c_3}{\varepsilon} \right) \right\}$, we obtain $\sup_{n \geq n_\varepsilon} \mathbb{P} \left( \tau_n^{C_\varepsilon} \leq T \right) \leq \varepsilon$ as desired. \end{proof} \begin{proof}[Proof of Theorem \ref{theorem:CLT_1d_arbitrarypotential}] As introduced above, let $X_n(t) := n^{\frac{1}{2k+2}} \left( m_n \left( n^{\frac{k}{k+1}} t \right) - m \right)$ be the space-time rescaled process. The infinitesimal generator of the process $X_n(t)$ can be easily deduced from \eqref{eqn:CWgenerator_arbitrarypotential}.
It yields \begin{multline*} A_n f(x) = n^{\frac{2k+1}{k+1}} \, \frac{1 - m - xn^{-\frac{1}{2k+2}}}{2} \, e^{U' \big( m + xn^{-\frac{1}{2k+2}} \big)} \left[f \left( x + 2n^{-\frac{2k+1}{2k+2}} \right) -f(x) \right] \\ + n^{\frac{2k+1}{k+1}} \, \frac{1 + m + xn^{-\frac{1}{2k+2}}}{2} \, e^{-U' \big( m + xn^{-\frac{1}{2k+2}} \big)}\left[ f \left( x - 2n^{-\frac{2k+1}{2k+2}} \right) -f(x) \right]. \end{multline*} We want to characterize the limit of the sequence $A_n f$ for $f \in C^3_c(\mathbb{R})$, the set of three times continuously differentiable functions that are constant outside of a compact set. We first Taylor expand $f$ up to the second order \begin{equation*} f \left( x \pm 2n^{-\frac{2k+1}{2k+2}} \right) -f(x) = \pm 2n^{-\frac{2k+1}{2k+2}} f'(x) + 2n^{-\frac{2k+1}{k+1}} f''(x) + o \left( n^{-\frac{2k+1}{k+1}} \right) \end{equation*} and then, by combining the terms with $f'$ and the terms with $f''$, we obtain \begin{multline*} A_n f(x) = 2 n^{\frac{2k+1}{2k+2}} \, G_2 \left( m + x n^{-\frac{1}{2k+2}} \right) f'(x) \\ + 2 \, G_1 \left( m + x n^{-\frac{1}{2k+2}} \right) f''(x) + o (1). \end{multline*} Now we Taylor expand $G_1$ and $G_2$ around $m$. For $G_2$ we use that the terms of order $\leq 2k$ in the expansion vanish. As for $G_1$, only the zeroth-order term matters. Therefore we obtain \begin{equation}\label{eqn:inf_gen_space-time_rescaled_fluct_CLT} A_n f(x) = \frac{2}{(2k+1)!} G_2^{(2k+1)}(m) \, x^{2k+1} f'(x) + 2 \, G_1( m) f''(x) + o(1). \end{equation} In other words, we conclude that for $f \in C_c^3(\mathbb{R})$ we have $\lim_n \vn{A_nf - Af} = 0$. To prove the weak convergence result, we verify the conditions for Corollary 4.8.16 in \cite{EK86}. The martingale problem for the operator $(A,C_c^3(\mathbb{R}))$ has a unique solution by Theorem 8.2.6 in \cite{EK86}. Additionally, the set $C_c^3(\mathbb{R})$ is an algebra that separates points.
Finally, by Lemma \ref{lmm:asymptotics_stopping_times} the sequence $\{X_n\}_{n \geq 1}$ satisfies the compact containment condition. Thus the result follows by an application of Corollary 4.8.16 in \cite{EK86}. \end{proof} The results in Section~\ref{sct:results} are recovered by setting $U(x) = \frac{\beta x^2}{2}$, with $\beta > 0$. Note that simply by choosing $U(x) = \frac{\beta x^2}{2} + B x$, with $\beta, B > 0$, we get the corresponding results for the Curie-Weiss model with magnetic field. \subsection{Proof of Theorem \ref{theorem:moderate_deviations_CW_critical_temperature_rescaling}} \label{subsct:MDP:critical:temp_rescaling} The infinitesimal generator $A_n$ of the process $b_n m_n^{1 + \kappa b_n^{-2}}(b_n^2t)$ can be easily deduced from \eqref{eqn:CWgenerator_arbitrarypotential} by using $U(x) = \frac{(1 + \kappa b_n^{-2})x^2}{2}$.
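As in the previous subsection, the Hamiltonian arises from $A_n$ through the elementary conjugation identity for jump generators. A quick numerical cross-check of this identity, with purely hypothetical rates and parameters chosen only for illustration:

```python
import math

# Hedged sketch: for a two-sided jump generator
#   A g(x) = r_plus*(g(x+h) - g(x)) + r_minus*(g(x-h) - g(x)),
# the conjugation (1/lam) e^{-lam f} A e^{lam f}(x) equals
#   (1/lam) * sum_{+/-} r_{+/-} * (e^{lam (f(x +/- h) - f(x))} - 1).
def A(g, x, r_plus, r_minus, h):
    """Two-sided jump generator applied to g at x."""
    return r_plus * (g(x + h) - g(x)) + r_minus * (g(x - h) - g(x))

def H_via_conjugation(f, x, r_plus, r_minus, h, lam):
    expf = lambda y: math.exp(lam * f(y))
    return A(expf, x, r_plus, r_minus, h) / (lam * expf(x))

def H_closed_form(f, x, r_plus, r_minus, h, lam):
    return (r_plus * math.expm1(lam * (f(x + h) - f(x)))
            + r_minus * math.expm1(lam * (f(x - h) - f(x)))) / lam

f = lambda y: math.sin(y)
args = (f, 0.3, 1.7, 0.4, 0.25, 2.0)   # x, r_plus, r_minus, h, lam: arbitrary
gap = abs(H_via_conjugation(*args) - H_closed_form(*args))
print(gap)   # agreement up to floating-point rounding
```

The two expressions are algebraically identical; the check only guards against bookkeeping mistakes when the rates and the speed $\lambda = n b_n^{-4}$ are filled in.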
The Hamiltonian \begin{equation*} H_nf = b_n^{4}n^{-1} e^{-n b_n^{-4}f} A_n e^{n b_n^{-4}f} \end{equation*} is given by \begin{multline*} H_nf(x) = b_n^{6} \frac{1 - xb_n^{-1}}{2} \,e^{(1+\kappa b_n^{-2})xb_n^{-1}} \left[e^{n b_n^{-4}\left(f\left(x + 2b_n n^{-1}\right) - f(x)\right)}-1\right] \\ + b_n^{6} \frac{1 +xb_n^{-1}}{2} \, e^{- (1+\kappa b_n^{-2})xb_n^{-1}} \left[e^{n b_n^{-4}\left(f\left(x - 2b_n n^{-1}\right) - f(x)\right)}-1\right]. \end{multline*} We start by studying the limiting behaviour of the sequence $H_nf$ for $f \in C_c^2(\mathbb{R})$. To compensate for the $b_n^6$ prefactor, we Taylor expand the exponential containing $f$ up to terms of order $O(b_n^{-6})$: \[ \exp\left\{n b_n^{-4}\left(f(x \pm 2b_n n^{-1}) -f(x)\right)\right\} -1 = \pm 2 b_n^{-3} f'(x) + 2 b_n^{-6}(f'(x))^2 + o(b_n^{-6}). \] Thus, combining the terms with $f'$ and the terms with $(f')^2$, we find that \begin{multline*} H_nf(x) = 2 \left[ b_n^3 \sinh \left( x b_n^{-1} + x \kappa b_n^{-3} \right) - x b_n^2 \cosh \left( x b_n^{-1} + x \kappa b_n^{-3} \right) \right]f'(x) \\ + 2 (f'(x))^2 + o(1). \end{multline*} By Taylor expanding the hyperbolic functions \[ \sinh \left( x b_n^{-1} + x \kappa b_n^{-3} \right) = x b_n^{-1} + x \kappa b_n^{-3} + \frac{1}{6}(x b_n^{-1} + x \kappa b_n^{-3})^3 + o \left( b_n^{-3} \right) \] \[ \cosh \left( x b_n^{-1} + x \kappa b_n^{-3} \right) = 1 + \frac{1}{2}(x b_n^{-1} + x\kappa b_n^{-3})^2 + o \left( b_n^{-2} \right), \] we get \begin{equation*} H_nf(x) = 2 \left[ \kappa x - \frac{1}{3} x^3 \right] f'(x) + 2 (f'(x))^2 + o(1), \end{equation*} where the $o(1)$ is uniform on compact sets.
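The hyperbolic expansion above can be checked numerically: for large $b$, the drift coefficient $b^3\sinh(x b^{-1} + \kappa x b^{-3}) - x b^2 \cosh(x b^{-1} + \kappa x b^{-3})$ should be close to $\kappa x - x^3/3$. A minimal sketch (the sample values of $\kappa$, $x$ and $b$ are arbitrary):

```python
import math

# Hedged numerical check of the hyperbolic expansion:
# b^3*sinh(x/b + kappa*x/b^3) - x*b^2*cosh(x/b + kappa*x/b^3) -> kappa*x - x^3/3.
def drift(x, kappa, b):
    u = x / b + kappa * x / b ** 3
    return b ** 3 * math.sinh(u) - x * b ** 2 * math.cosh(u)

kappa, b = 0.7, 200.0
errs = [abs(drift(x, kappa, b) - (kappa * x - x ** 3 / 3.0))
        for x in (-1.5, -0.5, 0.8, 2.0)]
print(max(errs))   # small; the remaining gap is of order b**-2
```

Note the cancellation structure: each of the two terms is of order $b^2$, while their difference is $O(1)$, mirroring how the $b_n^6$ prefactor is absorbed in the display above.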
Thus, for $f \in C_c^2(\mathbb{R})$, $H_nf$ converges uniformly to $Hf(x) = H(x,f'(x))$ where \begin{equation*} H(x,p) = 2 \left[ \kappa x - \frac{1}{3} x^3 \right] p + 2 p^2. \end{equation*} The large deviation result follows by Theorem~\ref{theorem:Abstract_LDP}, Lemma~\ref{lemma:FW_one_sided_lipschitz_containment_function} and Proposition~\ref{proposition:FW_one_sided_lipschitz_comparison_principle}. The Lagrangian is found by taking a Legendre transform of $H$. \appendix \section{Appendix: Large deviation principle via the Hamilton-Jacobi equation} \label{sct:app:LDPviaHJequation} In this appendix, we explain the basic steps to prove the path-space large deviation principle via uniqueness of solutions to the Hamilton-Jacobi equation. These steps follow the proofs in \cite{CIL92,FK06,DFL11} and have also been used in the proof of the large deviation principle for the dynamics of variants of the Curie-Weiss model via well-posedness of the Hamilton-Jacobi equation in \cite{Kr16b}. First, we prove an abstract result on how to obtain uniqueness of viscosity solutions of the Hamilton-Jacobi equation via the comparison principle. Then, we state a result on how uniqueness, together with an exponential compact containment condition, yields the large deviation principle. The verification of the conditions for this result has already been carried out in Section \ref{section:strategy_of_proof}. We remark that the requirements on our space $E$ in Section \ref{section:appendix_ldp} are more stringent than the ones in Sections \ref{section:appendix_definitions} and \ref{section:abstract_proof_of_comparison_principle}. The definitions of good penalization functions and of a good containment function are unchanged for the next two sections. \subsection{Viscosity solutions for the Hamilton-Jacobi equation} \label{section:appendix_definitions} Fix some $d \geq 1$.
In this section, and in Section \ref{section:abstract_proof_of_comparison_principle}, let $E$ be a subset of $\mathbb{R}^d$ that is contained in the $\mathbb{R}^d$-closure of its $\mathbb{R}^d$-interior. Additionally, assume that $E$ is a Polish space when equipped with its subspace topology. \begin{remark} The assumption that $E$ is contained in the $\mathbb{R}^d$-closure of its $\mathbb{R}^d$-interior is needed to ensure that the gradient $\nabla f(x)$ of a function $f \in C_b^1(\mathbb{R}^d)$ at a point $x \in E$ is determined by the values of $f$ on $E$. \end{remark} \begin{remark} We say that $A \subseteq \mathbb{R}^d$ is a $G_\delta$ set if it is the countable intersection of open sets in $\mathbb{R}^d$. By Theorems 4.3.23 and 4.3.24 in \cite{En89}, we find that $E$ is Polish if and only if it is a $G_\delta$ set in $\mathbb{R}^d$. \end{remark} Let $H : E \times \mathbb{R}^d \rightarrow \mathbb{R}$ be a continuous map. For $\lambda > 0$ and $h \in C_b(E)$, we will solve the \textit{Hamilton-Jacobi} equation \begin{equation} \label{eqn:differential_equation_intro} f(x) - \lambda H(x, \nabla f(x)) = h(x) \qquad x \in E, \end{equation} in the \textit{viscosity} sense. \begin{definition} \label{definition:viscosity} We say that $u$ is a \textit{(viscosity) subsolution} of equation \eqref{eqn:differential_equation_intro} if $u$ is bounded, upper semi-continuous and if, for every $f \in \mathcal{D}(H)$ such that $\sup_x u(x) - f(x) < \infty$ and every sequence $x_n \in E$ such that \begin{equation*} \lim_{n \rightarrow \infty} u(x_n) - f(x_n) = \sup_x u(x) - f(x), \end{equation*} we have \begin{equation*} \limsup_{n \rightarrow \infty} u(x_n) - \lambda Hf(x_n) - h(x_n) \leq 0.
\end{equation*} We say that $v$ is a \textit{(viscosity) supersolution} of equation \eqref{eqn:differential_equation_intro} if $v$ is bounded, lower semi-continuous and if, for every $f \in \mathcal{D}(H)$ such that $\inf_x v(x) - f(x) > - \infty$ and every sequence $x_n \in E$ such that \begin{equation*} \lim_{n \rightarrow \infty} v(x_n) - f(x_n) = \inf_x v(x) - f(x), \end{equation*} we have \begin{equation*} \liminf_{n \rightarrow \infty} v(x_n) - \lambda Hf(x_n) - h(x_n) \geq 0. \end{equation*} \end{definition} At various points, we will refer to \cite{FK06}. The notion of viscosity solution used here corresponds to the notion of \textit{strong viscosity solution} in \cite{FK06}. For operators of the form $Hf(x) = H(x,\nabla f(x))$ these two notions are equivalent. See also Lemma 9.9 in \cite{FK06}. \begin{definition} We say that equation \eqref{eqn:differential_equation_intro} satisfies the \textit{comparison principle} if for every subsolution $u$ and supersolution $v$ we have $u \leq v$. \end{definition} Note that if the comparison principle is satisfied, then a viscosity solution is unique. To prove the comparison principle, we extend our scope by considering viscosity sub- and supersolutions to the Hamilton-Jacobi equation with two different operators that extend the original Hamiltonian in a suitable way. Let $M(E,\overline{\mathbb{R}})$ be the set of measurable functions from $E$ to $\overline{\mathbb{R}} := \mathbb{R} \cup \{ \infty \}$. \begin{definition} We say that $H_\dagger \subseteq M(E,\overline{\mathbb{R}}) \times M(E,\overline{\mathbb{R}})$ is a \textit{viscosity sub-extension} of $H$ if $H \subseteq H_\dagger$ and if for every $\lambda >0$ and $h \in C_b(E)$ a viscosity subsolution to $f-\lambda H f = h$ is also a viscosity subsolution to $f - \lambda H_\dagger f = h$. Similarly, we define a \textit{viscosity super-extension}.
\end{definition} \begin{definition} Consider two operators $H_\dagger, H_\ddagger \subseteq M(E,\overline{\mathbb{R}}) \times M(E,\overline{\mathbb{R}})$ and pick $h \in C_b(E)$ and $\lambda > 0$. We say that the equations \begin{equation*} f - \lambda H_\dagger f = h, \qquad f - \lambda H_\ddagger f = h \end{equation*} satisfy the comparison principle if any subsolution $u$ to the first and any supersolution $v$ to the second equation satisfy $u \leq v$. \end{definition} We have the following straightforward result. \begin{lemma} Suppose that $H_\dagger$ and $H_\ddagger$ are a sub- and super-extension of $H$, respectively. Fix $\lambda > 0$ and $h \in C_b(E)$. If the comparison principle is satisfied for the equations \begin{equation*} f - \lambda H_\dagger f = h, \qquad f - \lambda H_\ddagger f = h, \end{equation*} then the comparison principle is satisfied for \begin{equation*} f - \lambda H f = h. \end{equation*} \end{lemma} \subsection{Abstract proof of the comparison principle} \label{section:abstract_proof_of_comparison_principle} We introduce two convenient viscosity extensions of a particular Hamiltonian $H$ in terms of good penalization functions $\{\Psi_\alpha\}_{\alpha \geq 0}$ and a good containment function $\Upsilon$. \begin{align*} \mathcal{D}(H_\dagger) & := C^1_b(E) \cup \left\{x \mapsto (1-\varepsilon)\Psi_\alpha(x,y) + \varepsilon \Upsilon(x) +c \, \middle| \, \alpha,\varepsilon > 0, \, c \in \mathbb{R}, \, y \in E \right\}, \\ \mathcal{D}(H_\ddagger) & := C^1_b(E) \cup \left\{y \mapsto - (1+\varepsilon)\Psi_\alpha(x,y) - \varepsilon \Upsilon(y) +c \, \middle| \, \alpha,\varepsilon > 0, \, c \in \mathbb{R}, \, x \in E \right\}. \end{align*} For $f \in \mathcal{D}(H_\dagger)$, set $H_\dagger f(x) = H(x,\nabla f(x))$ and for $f \in \mathcal{D}(H_\ddagger)$, set $H_\ddagger f(x) = H(x,\nabla f(x))$.
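It may help to record the gradients of the non-smooth test functions explicitly; a routine computation, assuming the quadratic penalizations $\Psi_\alpha(x,y) = \frac{\alpha}{2}|x-y|^2$ used earlier:

```latex
\begin{align*}
\nabla_x \big[ (1-\varepsilon)\Psi_\alpha(x,y) + \varepsilon \Upsilon(x) + c \big]
  &= (1-\varepsilon)\,\alpha(x-y) + \varepsilon\,\nabla\Upsilon(x), \\
\nabla_y \big[ -(1+\varepsilon)\Psi_\alpha(x,y) - \varepsilon \Upsilon(y) + c \big]
  &= (1+\varepsilon)\,\alpha(x-y) - \varepsilon\,\nabla\Upsilon(y),
\end{align*}
```

so that $H_\dagger f(x) = H\big(x,(1-\varepsilon)\alpha(x-y) + \varepsilon\nabla\Upsilon(x)\big)$ and $H_\ddagger f(y) = H\big(y,(1+\varepsilon)\alpha(x-y) - \varepsilon\nabla\Upsilon(y)\big)$; these are the momenta that enter the comparison-principle estimates.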
\begin{lemma} \label{lemma:viscosity_extension} The operator $(H_\dagger,\mathcal{D}(H_\dagger))$ is a viscosity sub-extension of $H$ and $(H_\ddagger,\mathcal{D}(H_\ddagger))$ is a viscosity super-extension of $H$. \end{lemma} In the proof we need Lemma 7.7 from \cite{FK06}. We recall it here for the sake of readability. Let $M_\infty(E,\overline{\mathbb{R}})$ denote the set of measurable functions $f : E \rightarrow \mathbb{R} \cup \{\infty\}$ that are bounded from below. \begin{lemma}[Lemma 7.7 in \cite{FK06}] \label{lemma:extension_lemma_7.7inFK} Let $H$ and $H_\dagger \subseteq M_\infty(E,\overline{\mathbb{R}}) \times M(E,\overline{\mathbb{R}})$ be two operators. Suppose that for all $(f,g) \in H_\dagger$ there exists a sequence $\{(f_n,g_n)\} \subseteq H$ that satisfies the following conditions: \begin{enumerate}[(a)] \item For all $n$, the function $f_n$ is lower semi-continuous. \item For all $n$, we have $f_n \leq f_{n+1}$ and $f_n \rightarrow f$ point-wise. \item If $x_n \in E$ is a sequence such that $\sup_n f_n(x_n) < \infty$ and $\inf_n g_n(x_n) > - \infty$, then $\{x_n\}_{n \geq 1}$ is relatively compact and if a subsequence $x_{n(k)}$ converges to $x \in E$, then \begin{equation*} \limsup_{k \rightarrow \infty} g_{n(k)}(x_{n(k)}) \leq g(x). \end{equation*} \end{enumerate} Then $H_\dagger$ is a viscosity sub-extension of $H$.\\ An analogous result holds for super-extensions $H_\ddagger$ by taking $f_n$ a decreasing sequence of upper semi-continuous functions and by replacing requirement (c) with \begin{enumerate} \item[(c$^{\prime}$)] If $x_n \in E$ is a sequence such that $\inf_n f_n(x_n) > - \infty$ and $\sup_n g_n(x_n) < \infty$, then $\{x_n\}_{n \geq 1}$ is relatively compact and if a subsequence $x_{n(k)}$ converges to $x \in E$, then \begin{equation*} \liminf_{k \rightarrow \infty} g_{n(k)}(x_{n(k)}) \geq g(x).
\end{equation*} \end{enumerate} \end{lemma} \begin{proof}[Proof of Lemma \ref{lemma:viscosity_extension}] We only prove the sub-extension part. Consider a collection of smooth functions $\phi_n : \mathbb{R} \rightarrow \mathbb{R}$ defined so that $\phi_n(x) = x$ if $x \leq n$ and $\phi_n(x) = n+1$ for $x \geq n+1$. Note that $\phi_{n + 1} \geq \phi_n$ for all $n$. Fix a function $f \in \mathcal{D}(H_\dagger)$ of the type $f(x) = (1-\varepsilon)\Psi_\alpha(x,y)+\varepsilon \Upsilon(x) + c$ and write $g = H_\dagger f$. Because $\Psi_\alpha$ and $\Upsilon$ are good penalization and good containment functions, $f$ has a twice continuously differentiable extension to a neighbourhood of $E$ in $\mathbb{R}^d$. We will denote this extension also by $f$. Set $f_n = \phi_n \circ f$. Since $f$ is bounded from below and $\Upsilon$ has compact level sets, we find that $f_n$ is constant outside a compact set in $E$. Furthermore, $f$ and $\phi_n$ are twice continuously differentiable, so that we find $f_n \in C_c^2(E)$. We obtain $f_n \in \mathcal{D}(H)$ and write $g_n = H f_n$. We verify conditions (a)-(c) of Lemma \ref{lemma:extension_lemma_7.7inFK}. (a) has been verified above. (b) is a consequence of the fact that $n \mapsto \phi_n$ is increasing. For (c), let $\{x_n\}_{n \geq 1}$ be a sequence such that $\sup_n f_n(x_n) = M < \infty$. It follows by the compactness of the level sets of $\Upsilon$ and the positivity of $\Psi_\alpha$ that the set \begin{equation*} K := \{z \in E \, | \, f(z) \leq M\} \end{equation*} is compact. Thus the sequence $\{x_n\}$ is relatively compact, and in particular, there exist converging subsequences $x_{n(k)}$ with limits $x \in K$. For any such subsequence, we show that $\limsup_k g_{n(k)} (x_{n(k)}) \leq g(x)$.
As $\Psi_\alpha$ and $\Upsilon$, and thus $f$, are twice continuously differentiable up to a neighbourhood $U$ of $E$ in $\mathbb{R}^d$, we find that the set \begin{equation*} V := \{z \in U \, | \, f(z) < M+1\} \end{equation*} is open in $\mathbb{R}^d$ and contains $K$. For two arbitrary continuously differentiable functions $h_1,h_2$ on $U$, if $h_1(z) = h_2(z)$ for all $z \in V$, then $\nabla h_1(z) = \nabla h_2(z)$ for all $z \in V$. Now suppose $x_{n(k)}$ is a subsequence in $K$ converging to some point $x \in K$. As $f$ is bounded on $V$, there exists a sufficiently large $N$ such that for all $n \geq N$ and $y \in V$, we have $f_n(y) = f(y)$. We conclude $\nabla f_n(y) = \nabla f(y)$ for $y \in K \subseteq V$ and hence \begin{equation*} g_n(y) = H(y,\nabla f_n(y)) = H(y,\nabla f(y)) = g(y). \end{equation*} In particular, we find $\limsup_{k} g_{n(k)}(x_{n(k)}) \leq g(x)$. \end{proof} We have the following variants of Lemma 9.2 in \cite{FK06} and Proposition 3.7 in \cite{CIL92}. Note that the presence of the containment function $\Upsilon$ makes sure that the suprema are attained. This motivates the name containment function: $\Upsilon$ forces the maxima to be in some compact set. \begin{lemma}\label{lemma:doubling_lemma} Let $u$ be bounded and upper semi-continuous, let $v$ be bounded and lower semi-continuous, let $\Psi_\alpha : E^2 \rightarrow \mathbb{R}^+$ be good penalization functions and let $\Upsilon$ be a good containment function. Fix $\varepsilon > 0$. 
For every $\alpha >0$ there exist points $x_{\alpha,\varepsilon},y_{\alpha,\varepsilon} \in E$, such that \begin{multline*} \frac{u(x_{\alpha,\varepsilon})}{1-\varepsilon} - \frac{v(y_{\alpha,\varepsilon})}{1+\varepsilon} - \Psi_\alpha(x_{\alpha,\varepsilon},y_{\alpha,\varepsilon}) - \frac{\varepsilon}{1-\varepsilon}\Upsilon(x_{\alpha,\varepsilon}) -\frac{\varepsilon}{1+\varepsilon}\Upsilon(y_{\alpha,\varepsilon}) \\ = \sup_{x,y \in E} \left\{\frac{u(x)}{1-\varepsilon} - \frac{v(y)}{1+\varepsilon} - \Psi_\alpha(x,y) - \frac{\varepsilon}{1-\varepsilon}\Upsilon(x) - \frac{\varepsilon}{1+\varepsilon}\Upsilon(y)\right\}. \end{multline*} Additionally, for every $\varepsilon > 0$ we have that \begin{enumerate}[(a)] \item The set $\{x_{\alpha,\varepsilon}, y_{\alpha,\varepsilon} \, | \, \alpha > 0\}$ is relatively compact in $E$. \item All limit points of $\{(x_{\alpha,\varepsilon},y_{\alpha,\varepsilon})\}_{\alpha > 0}$ are of the form $(z,z)$ and for these limit points we have $u(z) - v(z) = \sup_{x \in E} \left\{u(x) - v(x) \right\}$. \item Suppose $\Psi_\alpha$ can be written as $\Psi_\alpha = \alpha \Psi$, where $\Psi \geq 0$. Then we have \[ \lim_{\alpha \rightarrow \infty} \alpha \Psi(x_{\alpha,\varepsilon},y_{\alpha,\varepsilon}) = 0. \] \end{enumerate} \end{lemma} \begin{proof} The proof essentially follows the one of Proposition 3.7 in \cite{CIL92}. \\ Fix $\varepsilon > 0$. As $\Upsilon$ is a good containment function, its level sets are compact. This property, combined with the boundedness of $u$ and $v$ and the non-negativity of $\Psi_\alpha$, implies that the supremum can be restricted to a compact set $K_\varepsilon \subseteq E \times E$ that is independent of $\alpha > 0$. As $u$ is upper semi-continuous, and $v$, $\Psi_\alpha$ and $\Upsilon$ are lower semi-continuous, the supremum is attained for some pair $(x_{\alpha,\varepsilon},y_{\alpha,\varepsilon}) \in K_\varepsilon$. This proves (a). We proceed with the proof of (b). 
Let $(x_0,y_0)$ be a limit point of $\{(x_{\alpha,\varepsilon},y_{\alpha,\varepsilon})\}_{\alpha > 0}$ such that $x_0 \neq y_0$. Without loss of generality, assume that $(x_{\alpha,\varepsilon},y_{\alpha,\varepsilon}) \rightarrow (x_0,y_0)$. By property $(\Psi a)$, the map $\alpha \mapsto \Psi_\alpha$ is increasing. Thus, for all $\alpha_0$ we have that \begin{equation*} \liminf_{\alpha \rightarrow \infty} \Psi_\alpha(x_{\alpha,\varepsilon},y_{\alpha,\varepsilon}) \geq \liminf_{\alpha \rightarrow \infty} \Psi_{\alpha_0}(x_{\alpha,\varepsilon},y_{\alpha,\varepsilon}) \geq \Psi_{\alpha_0}(x_0,y_0) \end{equation*} by the lower semi-continuity of $\Psi_{\alpha_0}$. Thus, we conclude that \begin{equation*} \liminf_{\alpha \rightarrow \infty} \Psi_\alpha(x_{\alpha,\varepsilon},y_{\alpha,\varepsilon}) = \infty \end{equation*} as $\lim_{\alpha \rightarrow \infty} \Psi_\alpha(x,y) = \infty$ for all $x \neq y$. This contradicts the boundedness of $u$ and $v$. We now prove (c). Let us define the constants \begin{align*} M_\alpha & := \frac{u(x_{\alpha,\varepsilon})}{1-\varepsilon} - \frac{v(y_{\alpha,\varepsilon})}{1+\varepsilon} - \Psi_\alpha(x_{\alpha,\varepsilon},y_{\alpha,\varepsilon}) - \frac{\varepsilon}{1-\varepsilon}\Upsilon(x_{\alpha,\varepsilon}) -\frac{\varepsilon}{1+\varepsilon}\Upsilon(y_{\alpha,\varepsilon}) \\ & = \sup_{x,y \in E} \left\{\frac{u(x)}{1-\varepsilon} - \frac{v(y)}{1+\varepsilon} - \Psi_\alpha(x,y) - \frac{\varepsilon}{1-\varepsilon}\Upsilon(x) - \frac{\varepsilon}{1+\varepsilon}\Upsilon(y)\right\}. \end{align*} Observe that the sequence $M_\alpha$ is decreasing as $\alpha \mapsto \Psi_\alpha$ is increasing point-wise. Moreover, the limit $\lim_{\alpha \rightarrow \infty} M_\alpha$ exists, since $u$ and $v$ are bounded, which bounds $M_\alpha$ from below uniformly in $\alpha$.
For any $\alpha > 0$, we obtain \begin{align*} M_{\alpha/2} & \geq \frac{u(x_{\alpha,\varepsilon})}{1-\varepsilon} - \frac{v(y_{\alpha,\varepsilon})}{1+\varepsilon} - \Psi_{\alpha/2}(x_{\alpha,\varepsilon},y_{\alpha,\varepsilon}) - \frac{\varepsilon}{1-\varepsilon}\Upsilon(x_{\alpha,\varepsilon}) -\frac{\varepsilon}{1+\varepsilon}\Upsilon(y_{\alpha,\varepsilon}) \\ & \geq M_\alpha + \Psi_\alpha \left( x_{\alpha,\varepsilon}, y_{\alpha, \varepsilon} \right) - \Psi_{\alpha/2} \left( x_{\alpha,\varepsilon}, y_{\alpha, \varepsilon} \right)\\ & \geq M_\alpha +\frac{\alpha}{2} \, \Psi \left( x_{\alpha,\varepsilon}, y_{\alpha, \varepsilon} \right)\\ & \geq M_\alpha, \end{align*} which implies $\frac{\alpha}{2} \, \Psi \left( x_{\alpha,\varepsilon}, y_{\alpha, \varepsilon} \right) \to 0$ as $\alpha \rightarrow \infty$, since $M_{\alpha/2}$ and $M_\alpha$ converge to the same limit. \end{proof} \begin{proposition} \label{proposition:comparison_conditions_on_H} Fix $\lambda >0$, $h \in C_b(E)$ and let $u$ and $v$ be a sub- and a super-solution to $f - \lambda Hf = h$. Let $\{\Psi_\alpha\}_{\alpha > 0}$ be a family of good penalization functions and $\Upsilon$ be a good containment function. Moreover, for every $\alpha,\varepsilon >0$ let $x_{\alpha,\varepsilon},y_{\alpha,\varepsilon} \in E$ be such that \begin{multline} \label{eqn:comparison_principle_proof_choice_of_sequences} \frac{u(x_{\alpha,\varepsilon})}{1-\varepsilon} - \frac{v(y_{\alpha,\varepsilon})}{1+\varepsilon} - \Psi_\alpha(x_{\alpha,\varepsilon},y_{\alpha,\varepsilon}) - \frac{\varepsilon}{1-\varepsilon}\Upsilon(x_{\alpha,\varepsilon}) -\frac{\varepsilon}{1+\varepsilon}\Upsilon(y_{\alpha,\varepsilon}) \\ = \sup_{x,y \in E} \left\{\frac{u(x)}{1-\varepsilon} - \frac{v(y)}{1+\varepsilon} - \Psi_\alpha(x,y) - \frac{\varepsilon}{1-\varepsilon}\Upsilon(x) - \frac{\varepsilon}{1+\varepsilon}\Upsilon(y)\right\}.
\end{multline} Suppose that \begin{multline}\label{condH:negative:liminf} \liminf_{\varepsilon \rightarrow 0} \liminf_{\alpha \rightarrow \infty} H\left(x_{\alpha,\varepsilon},\nabla \Psi_\alpha(\cdot,y_{\alpha,\varepsilon})(x_{\alpha,\varepsilon})\right) \\ - H\left(y_{\alpha,\varepsilon},\nabla \Psi_\alpha(\cdot,y_{\alpha,\varepsilon})(x_{\alpha,\varepsilon})\right) \leq 0. \end{multline} Then $u \leq v$; in other words, $f - \lambda H f = h$ satisfies the comparison principle. \end{proposition} \begin{proof} By Lemma \ref{lemma:viscosity_extension} we get immediately that $u$ is a sub-solution to $f - \lambda H_\dagger f = h$ and $v$ is a super-solution to $f - \lambda H_\ddagger f = h$. Thus, it suffices to verify the comparison principle for the equations involving the extensions $H_\dagger$ and $H_\ddagger$. Let $x_{\alpha,\varepsilon},y_{\alpha,\varepsilon} \in E$ be such that \eqref{eqn:comparison_principle_proof_choice_of_sequences} is satisfied. Then, for all $\alpha$ we obtain that \begin{align} & \sup_x u(x) - v(x) \notag\\ & = \lim_{\varepsilon \rightarrow 0} \sup_x \frac{u(x)}{1-\varepsilon} - \frac{v(x)}{1+\varepsilon} \notag\\ & \leq \liminf_{\varepsilon \rightarrow 0} \sup_{x,y} \frac{u(x)}{1-\varepsilon} - \frac{v(y)}{1+\varepsilon} - \Psi_\alpha(x,y) - \frac{\varepsilon}{1-\varepsilon} \Upsilon(x) - \frac{\varepsilon}{1+\varepsilon}\Upsilon(y) \notag\\ & = \liminf_{\varepsilon \rightarrow 0} \frac{u(x_{\alpha,\varepsilon})}{1-\varepsilon} - \frac{v(y_{\alpha,\varepsilon})}{1+\varepsilon} - \Psi_\alpha(x_{\alpha,\varepsilon},y_{\alpha,\varepsilon}) - \frac{\varepsilon}{1-\varepsilon}\Upsilon(x_{\alpha,\varepsilon}) -\frac{\varepsilon}{1+\varepsilon}\Upsilon(y_{\alpha,\varepsilon}) \notag \\ & \leq \liminf_{\varepsilon \rightarrow 0} \frac{u(x_{\alpha,\varepsilon})}{1-\varepsilon} - \frac{v(y_{\alpha,\varepsilon})}{1+\varepsilon}, \label{eqn:basic_inequality_on_sub_super_sol} \end{align} as $\Upsilon$ and $\Psi_\alpha$ are
non-negative functions. Since $u$ is a sub-solution to $f - \lambda H_\dagger f = h$ and $v$ is a super-solution to $f - \lambda H_\ddagger f = h$, we find by our particular choice of $x_{\alpha,\varepsilon}$ and $y_{\alpha,\varepsilon}$ that \begin{align} & u(x_{\alpha,\varepsilon}) - \lambda H\left(x_{\alpha,\varepsilon}, (1-\varepsilon)\nabla \Psi_\alpha(\cdot,y_{\alpha,\varepsilon})(x_{\alpha,\varepsilon}) + \varepsilon \nabla \Upsilon(x_{\alpha,\varepsilon})\right) \leq h(x_{\alpha,\varepsilon}), \label{eqn:ineq_comp_proof_1}\\ & v(y_{\alpha,\varepsilon}) - \lambda H\left(y_{\alpha,\varepsilon},-(1+\varepsilon)\nabla \Psi_\alpha(x_{\alpha,\varepsilon},\cdot)(y_{\alpha,\varepsilon}) - \varepsilon \nabla \Upsilon(y_{\alpha,\varepsilon})\right) \geq h(y_{\alpha,\varepsilon}).\label{eqn:ineq_comp_proof_2} \end{align} For all $z \in E$, the map $p \mapsto H(z,p)$ is convex. Thus, \eqref{eqn:ineq_comp_proof_1} implies that \begin{multline} \label{eqn:ineq_comp_proof_1_convexity} u(x_{\alpha,\varepsilon}) \leq h(x_{\alpha,\varepsilon}) + (1-\varepsilon) \lambda H(x_{\alpha,\varepsilon}, \nabla \Psi_\alpha(\cdot,y_{\alpha,\varepsilon})(x_{\alpha,\varepsilon})) \\ + \varepsilon \lambda H(x_{\alpha,\varepsilon},\nabla \Upsilon(x_{\alpha,\varepsilon})). \end{multline} For the second inequality, first note that because $\Psi_\alpha$ are good penalization functions, we have $- ( \nabla \Psi_\alpha(x_{\alpha,\varepsilon},\cdot))(y_{\alpha,\varepsilon}) = \nabla \Psi_\alpha(\cdot, y_{\alpha,\varepsilon})(x_{\alpha,\varepsilon})$.
Next, we need a more sophisticated bound using the convexity of $H$: \begin{align*} & H(y_{\alpha,\varepsilon}, \nabla \Psi_\alpha(\cdot, y_{\alpha,\varepsilon})(x_{\alpha,\varepsilon})) \\ & \leq \frac{1}{1+\varepsilon} H(y_{\alpha,\varepsilon},(1+\varepsilon)\nabla \Psi_\alpha(\cdot, y_{\alpha,\varepsilon})(x_{\alpha,\varepsilon}) - \varepsilon \nabla \Upsilon(y_{\alpha,\varepsilon})) + \frac{\varepsilon}{1+\varepsilon} H(y_{\alpha,\varepsilon}, \nabla \Upsilon(y_{\alpha,\varepsilon})). \end{align*} Thus, \eqref{eqn:ineq_comp_proof_2} gives us \begin{equation} \label{eqn:ineq_comp_proof_2_convexity} v(y_{\alpha,\varepsilon}) \geq h(y_{\alpha,\varepsilon}) + \lambda (1+\varepsilon) H(y_{\alpha,\varepsilon},\nabla\Psi_\alpha(\cdot,y_{\alpha,\varepsilon})(x_{\alpha,\varepsilon})) - \varepsilon \lambda H(y_{\alpha,\varepsilon},\nabla \Upsilon(y_{\alpha,\varepsilon})). \end{equation} By combining \eqref{eqn:basic_inequality_on_sub_super_sol} with \eqref{eqn:ineq_comp_proof_1_convexity} and \eqref{eqn:ineq_comp_proof_2_convexity}, we find \begin{align} & \sup_x u(x) - v(x) \nonumber\\ & \leq \liminf_{\varepsilon \rightarrow 0} \liminf_{\alpha \rightarrow \infty} \left\{ \frac{h(x_{\alpha,\varepsilon})}{1 - \varepsilon} - \frac{h(y_{\alpha,\varepsilon})}{1+\varepsilon} \right. \label{eqn:eqn:comp_proof_final_bound:line1}\\ & \qquad + \frac{\varepsilon}{1-\varepsilon}H(x_{\alpha,\varepsilon}, \nabla \Upsilon(x_{\alpha,\varepsilon})) + \frac{\varepsilon}{1+\varepsilon}H(y_{\alpha,\varepsilon}, \nabla\Upsilon(y_{\alpha,\varepsilon})) \label{eqn:eqn:comp_proof_final_bound:line2}\\ & \left. \qquad + \lambda \left[H(x_{\alpha,\varepsilon},\nabla\Psi_\alpha(\cdot,y_{\alpha,\varepsilon})(x_{\alpha,\varepsilon})) - H(y_{\alpha,\varepsilon},\nabla\Psi_\alpha(\cdot,y_{\alpha,\varepsilon})(x_{\alpha,\varepsilon}))\right] \vphantom{\sum} \right\}.\label{eqn:eqn:comp_proof_final_bound:line3} \end{align} The term \eqref{eqn:eqn:comp_proof_final_bound:line3} vanishes by assumption. 
Now observe that, for fixed $\varepsilon$ and varying $\alpha$, the sequence $(x_{\alpha,\varepsilon},y _{\alpha,\varepsilon})$ takes its values in a compact set and, hence, admits converging subsequences. All these subsequences converge to points of the form $(z,z)$. Therefore, as $\alpha \rightarrow \infty$, we find \[ \liminf_{\varepsilon \rightarrow 0} \liminf_{\alpha \rightarrow \infty} \frac{h(x_{\alpha,\varepsilon})}{1 - \varepsilon} - \frac{h(y_{\alpha,\varepsilon})}{1+\varepsilon} \leq \liminf_{\varepsilon \rightarrow 0} \vn{h} \frac{2\varepsilon}{1-\varepsilon^2} = 0, \] giving that also the term in \eqref{eqn:eqn:comp_proof_final_bound:line1} converges to zero. The term in \eqref{eqn:eqn:comp_proof_final_bound:line2} vanishes as well, due to the uniform bounds on $H(z,\nabla \Upsilon(z))$ by property ($\Upsilon$d). We conclude that the comparison principle holds for $f - \lambda H f = h$. \end{proof} \subsection{Compact containment and the large deviation principle} \label{section:appendix_ldp} To connect the Hamilton-Jacobi equation to the large deviation principle, we introduce some additional concepts. Fix some $d \geq 1$. In this section, we assume that $E$ is a \textit{closed} subset of $\mathbb{R}^d$ that is contained in the $\mathbb{R}^d$-closure of its $\mathbb{R}^d$-interior. Additionally, we have closed subspaces $E_n \subseteq E$ for all $n$ and assume that $E = \lim_{n \to \infty} E_n$, i.e. for every $x \in E$ there exist $x_n \in E_n$ such that $x_n \rightarrow x$. We consider the following notion of operator convergence. \begin{definition} Suppose that for each $n$ we have an operator $(B_n,\mathcal{D}(B_n))$, $B_n : \mathcal{D}(B_n) \subseteq C_b(E_n) \rightarrow C_b(E_n)$. 
The \textit{extended limit} $ex-\lim_n B_n$ is defined as the collection of pairs $(f,g) \in C_b(E) \times C_b(E)$ such that there exist $f_n \in \mathcal{D}(B_n)$ satisfying, for every compact set $K \subseteq E$, \begin{equation} \label{eqn:convergence_condition} \lim_{n \rightarrow \infty} \sup_{x \in K \cap E_n} \left|f_n(x) - f(x)\right| + \left|B_n f_n(x) - g(x)\right| = 0. \end{equation} For an operator $(B,\mathcal{D}(B))$, we write $B \subseteq ex-\lim_n B_n$ if the graph $\{(f,Bf) \, | \, f \in \mathcal{D}(B) \}$ of $B$ is a subset of $ex-\lim_n B_n$. \end{definition} \begin{remark} The notion of extended limit can be generalized further, e.g. for limiting spaces like in the previous two sections. Such abstract generalizations are carried out in Definition 2.5 and Condition 7.11 of \cite{FK06}. Our definition for a \textit{closed} limiting space $E$ is the simplest version of this abstract machinery. \end{remark} \begin{assumption} \label{assumption:LDP_assumption} Fix some $d \geq 1$. Let $E$ be a closed subset of $\mathbb{R}^d$ that is contained in the $\mathbb{R}^d$-closure of its $\mathbb{R}^d$-interior and let $E_n$ be closed subsets of $E$ such that $E = \lim_{n \rightarrow \infty} E_n$. Assume that for each $n \geq 1$, we have $A_n \subseteq C_b(E_n) \times C_b(E_n)$ and existence and uniqueness holds for the $D_{E_n}(\mathbb{R}^+)$ martingale problem for $(A_n,\mu)$ for each initial distribution $\mu \in \mathcal{P}(E_n)$. Letting $\mathbb{P}_{y}^n \in \mathcal{P}(D_{E_n}(\mathbb{R}^+))$ be the solution to $(A_n,\delta_y)$, assume that the mapping $y \mapsto \mathbb{P}_y^n$ is measurable for the weak topology on $\mathcal{P}(D_{E_n}(\mathbb{R}^+))$. Let $X_n$ be the solution to the martingale problem for $A_n$ and set \begin{equation*} H_n f = \frac{1}{r(n)} e^{-r(n)f}A_n e^{r(n)f} \qquad e^{r(n)f} \in \mathcal{D}(A_n), \end{equation*} for some sequence of speeds $\{r(n)\}_{n \geq 1}$, with $\lim_{n \rightarrow \infty} r(n) = \infty$.
Suppose that we have an operator $H : \mathcal{D}(H) \subseteq C_b(E) \rightarrow C_b(E)$ with $\mathcal{D}(H) = C^2_c(E)$ of the form $Hf(x) = H(x,\nabla f(x))$ which satisfies $H \subseteq ex-\lim H_n$. \end{assumption} \begin{proposition} \label{proposition:exp_compact_containment} Suppose Assumption \ref{assumption:LDP_assumption} is satisfied and assume that $\Upsilon$ is a good containment function. Suppose that the sequence $\{X_n(0)\}_{n \geq 1}$ is exponentially tight with speed $\{r(n)\}_{n \geq 1}$. Then the sequence $\{X_n\}_{n \geq 1}$ satisfies the exponential compact containment condition with speed $\{r(n)\}_{n \geq 1}$: for every $T > 0$ and $a \geq 0$, there exists a compact set $K_{a,T} \subseteq E$ such that \begin{equation*} \limsup_{n \rightarrow \infty} \frac{1}{r(n)} \log \mathbb{P}\left[X_n(t) \notin K_{a,T} \text{ for some } t \leq T \right] \leq -a. \end{equation*} \end{proposition} In the proof of this proposition, we apply Lemma 4.22 from \cite{FK06}. We recall it here for the sake of readability. \begin{lemma}[Lemma 4.22 in \cite{FK06}] \label{lemma:compact_containment_FK} Let $X_n$ be solutions of the martingale problem for $A_n$ and suppose that $\{X_n(0)\}_{n \geq 1}$ is exponentially tight with speed $\{r(n)\}_{n \geq 1}$. Let $K$ be compact and let $G \supseteq K$ be open. For each $n$, suppose we have $(f_n,g_n) \in H_n$. Define \begin{equation*} \beta(K,G) := \liminf_{n \rightarrow \infty} \left( \inf_{x \in G^c} f_n(x) - \sup_{x \in K} f_n(x)\right) \mbox{ and } \gamma(G) := \limsup_{n \rightarrow \infty} \sup_{x \in G} g_n(x). \end{equation*} Then \begin{multline} \label{eqn:compact_containment_bound} \limsup_{n \rightarrow \infty} \frac{1}{r(n)} \log \mathbb{P}\left[X_n(t) \notin G \text{ for some } t \leq T \right] \\ \leq \max \left\{-\beta(K,G) + T \gamma(G), \limsup_{n\rightarrow \infty} \frac{1}{r(n)} \log \mathbb{P}\left[X_n(0) \notin K\right] \right\}.
\end{multline} \end{lemma} Note that when the closure of $G$ is compact, this result is suitable for proving the compact containment condition. This is what we will use below. \begin{proof}[Proof of Proposition \ref{proposition:exp_compact_containment}] Fix $a \geq 0$ and $T > 0$. We will construct a compact set $K'$ such that \begin{equation*} \limsup_{n \rightarrow \infty} \frac{1}{r(n)} \log \mathbb{P}\left[X_n(t) \notin K' \text{ for some } t \leq T \right] \leq -a. \end{equation*} As $X_n(0)$ is exponentially tight with speed $\{r(n)\}_{n \geq 1}$, we can find a sufficiently large $R \geq 0$ so that \begin{equation*} \limsup_{n\rightarrow \infty} \frac{1}{r(n)} \log \mathbb{P}\left[X_n(0) \notin \overline{B(x_0,R)}\right] \leq - a, \end{equation*} where $x_0$ is a point such that $\Upsilon(x_0) = 0$ and $\overline{B(x_0,R)}$ is the closed ball with radius $R$ and center $x_0$. Thus, by Lemma \ref{lemma:compact_containment_FK} it suffices to find $(f_n,g_n) \in H_n$, a compact $K$ and an open set $G$ such that $-\beta(K,G) + T\gamma(G) \leq - a$. Set $\gamma := \sup_{z} H(z,\nabla \Upsilon(z))$ and $c_1 := \sup_{z \in \overline{B(x_0,R)}} \Upsilon(z)$. Observe that $\gamma < \infty$ by assumption ($\Upsilon$d) and $c_1 < \infty$ by compactness. Now choose $c_2$ such that \begin{equation} \label{eqn:proof_compact_containment_choice_c2} -[c_2 - c_1] + T\gamma = -a \end{equation} and take $K = \{z \in E \, | \, \Upsilon(z) \leq c_1\}$ and $G = \{z \in E \, | \, \Upsilon(z) < c_2\}$. Let $\theta : [0,\infty) \rightarrow [0,\infty)$ be a compactly supported smooth function with the property that $\theta(z) = z$ for $z \leq c_2$. For each $n$, define $f_n := \theta \circ \Upsilon$ and $g_n := H_n f_n$. By Assumption \ref{assumption:LDP_assumption}, $g_n \rightarrow H f$, with $f := \theta \circ \Upsilon$, in the sense of \eqref{eqn:convergence_condition} and moreover, by construction $\beta(K,G) = c_2 - c_1$ and $\gamma(G) = \gamma$.
Thus by \eqref{eqn:proof_compact_containment_choice_c2} and Lemma \ref{lemma:compact_containment_FK} we obtain \begin{equation*} \limsup_{n \rightarrow \infty} \frac{1}{r(n)} \log \mathbb{P}\left[X_n(t) \notin G \text{ for some } t \leq T \right] \leq -a \end{equation*} and the compact containment condition holds with $K_{a,T} = \overline{G}$. \end{proof} \begin{theorem}[Large deviation principle] \label{theorem:Abstract_LDP} Suppose Assumption \ref{assumption:LDP_assumption} is satisfied and assume that $\Upsilon$ is a good containment function for $H$. Suppose that for all $\lambda > 0$ and $h \in C_b(E)$ the comparison principle holds for $f - \lambda H f = h$, and suppose furthermore that the sequence $\{X_n(0)\}_{n \geq 1}$ satisfies the large deviation principle with speed $\{r(n)\}_{n \geq 1}$ on $E$ with good rate function $I_0$. Then the large deviation principle with speed $\{r(n)\}_{n \geq 1}$ holds for $\{X_n\}_{n \geq 1}$ on $D_E(\mathbb{R}^+)$ with good rate function $I$. Additionally, suppose that the map $p \mapsto H(x,p)$ is convex and differentiable for every $x$ and that the map $(x,p) \mapsto \frac{\mathrm{d}}{\mathrm{d} p} H(x,p)$ is continuous. Then the rate function $I$ is given by \begin{equation*} I(\gamma) = \begin{cases} I_0(\gamma(0)) + \int_0^\infty \mathcal{L}(\gamma(s),\dot{\gamma}(s)) \mathrm{d} s & \text{if } \gamma \in \mathcal{A}\mathcal{C}, \\ \infty & \text{otherwise}, \end{cases} \end{equation*} where $\mathcal{L} : E \times \mathbb{R}^d \rightarrow \mathbb{R}$ is defined by $\mathcal{L}(x,v) = \sup_p \ip{p}{v} -H(x,p)$. \end{theorem} \begin{proof} The large deviation result follows from Theorem 7.18 in \cite{FK06}. Referring to the notation therein, we are using $H_\dagger = H_\ddagger = H$. The representation of the rate function in terms of the Lagrangian can be carried out by using Theorem 8.27 and Corollary 8.28 in \cite{FK06}.
See for example Section 10.3 of \cite{FK06} for this representation in the setting of Freidlin-Wentzell theory. Alternatively, an application of this result in a compact setting has also been carried out in \cite{Kr16b}. The only extra condition compared to Theorem 6 in \cite{Kr16b}, due to the non-compactness, is the verification of Condition 8.9.(4) in \cite{FK06}: \textit{For each compact $K \subseteq E$, $T > 0$ and $0 \leq M < \infty$, there exists a compact set $\hat{K} = \hat{K}(K,T,M) \subseteq E$ such that if $\gamma \in \mathcal{A}\mathcal{C}$ with $\gamma(0) \in K$ and \begin{equation} \label{eqn:bound_on_lagrangian_cost} \int_{0}^T \mathcal{L}(\gamma(s),\dot{\gamma}(s)) \mathrm{d} s \leq M, \end{equation} then $\gamma(t) \in \hat{K}$ for all $t \leq T$.} This condition can be verified as follows. Recall that the level sets of $\Upsilon$ are compact, so it suffices to control the growth of $\Upsilon$ along $\gamma$. Let $\gamma \in \mathcal{A}\mathcal{C}$ satisfy the conditions given above. Then \begin{align*} \Upsilon(\gamma(t)) & = \Upsilon(\gamma(0)) + \int_0^t \ip{\nabla\Upsilon(\gamma(s))}{\dot{\gamma}(s)} \mathrm{d} s \\ & \leq \Upsilon(\gamma(0)) + \int_0^t \mathcal{L}(\gamma(s),\dot{\gamma}(s)) + H(\gamma(s),\nabla \Upsilon(\gamma(s))) \mathrm{d} s \\ & \leq \sup_{y \in K} \Upsilon(y) + M + \int_0^T \sup_z H(z,\nabla \Upsilon(z)) \mathrm{d} s \\ & =: C < \infty. \end{align*} Thus, we can take $\hat{K} = \{z \in E \, | \, \Upsilon(z) \leq C\}$. \end{proof} \textbf{Acknowledgement} The authors are supported by The Netherlands Organisation for Scientific Research (NWO): RK via grant 600.065.130.12N109 and FC via TOP-1 grant 613.001.552. \printbibliography \end{document}
\begin{document} \title{Improvement of the Cascadic Multigrid Algorithm with a Gauss Seidel Smoother to Efficiently Compute the Fiedler Vector of a Graph Laplacian} \author{Shivam Gandhi - Tufts University Department of Mathematics \thanks{[email protected]}} \date{November 2015} \maketitle \section{Abstract} In this paper, we detail an improvement of the Cascadic Multigrid algorithm through the addition of the Gauss-Seidel method, in order to compute the Fiedler vector of a graph Laplacian, i.e.\ the eigenvector corresponding to the second smallest eigenvalue. This vector has applications in graph partitioning, particularly in the spectral clustering algorithm. The algorithm is algebraic and employs heavy edge coarsening, which was developed for the first cascadic multigrid algorithm. We present numerical tests of the algorithm on a variety of matrices of different sizes and properties. We then test the algorithm on a range of square matrices with uniform properties in order to demonstrate its linear complexity. \section{Introduction} The Fiedler vector has seen numerous applications within computational mathematics, primarily in the fields of graph partitioning and graph drawing [1]. In particular, we require eigenvalues and eigenvectors for a successful run of the spectral clustering algorithm that partitions a network into clusters [2]. Although many languages have built-in eigenvalue methods, the spectral clustering method requires specialized eigenvalue algorithms to account for massive network sizes. In fact, spectral clustering becomes infeasible for networks of size over 1000, as computing the eigenvalues through matrix inversion becomes inefficient. Therefore, we require specialized multigrid algorithms that find the eigenvectors and eigenvalues in less than $O(N^3)$ time.
The Cascadic Multigrid Algorithm is an effective method for computing the second smallest eigenvalue and its eigenvector, which is called the Fiedler vector. The main classical methods for calculating eigenvalues and eigenvectors are the Lanczos method and the power method. However, these methods become infeasible for large matrices ($|V| > 1000$), and many networks have over $1000$ nodes, which corresponds to a matrix of dimension higher than $1000$. For this reason, we require the cascadic multigrid algorithm, as it solves the eigenvalue problem on coarser levels and projects the solution upwards until the solution reaches the original matrix [3]. When calculating the eigenvalues and eigenvectors of symmetric positive definite matrices more generally, the Jacobi and PCG methods provide good approximations. These can be extended to calculating the Fiedler vector and its eigenvalue [1]. In this paper, we improve upon the previous Cascadic Multigrid Algorithm by introducing a Gauss-Seidel smoother on each level. We employ the previously established heavy edge coarsening, which selects the edge with the heaviest weight between two vertices. The refinement procedure continues to use power iteration on a modified matrix. This method does not require the inversion of matrices, unlike Rayleigh-quotient iteration, which makes it considerably more efficient. As before, the eigenvector calculated on a coarse level is projected to a finer level with interpolation matrices. The eigenvector calculated and projected up to a higher level serves as the initial guess for the next Gauss-Seidel iteration on the finer level. Finally, on the finest level, we calculate the Rayleigh quotient to obtain the eigenvalue. The paper is organized to provide a logical introduction to the algorithm. In section 3, we provide definitions and background knowledge required to understand multigrid methods.
We also introduce heavy edge coarsening, power iteration, and the Gauss-Seidel method. This culminates in a presentation of the algorithm developed. In section 4, we introduce the numerical tests of the algorithm. We compare the algorithm to the previous cascadic multigrid algorithm and to various other multigrid algorithms designed to compute the Fiedler vector. We also compare spectral clustering using the built-in eigenvalue function in MATLAB against spectral clustering employing our algorithm, to demonstrate its efficiency. In the final section, we wrap up the paper and discuss future improvements to multigrid algorithms that calculate Fiedler vectors. \section{Modified Cascadic MG Method for Computing the Fiedler Vector} First we formally introduce the concepts of the graph Laplacian and the Fiedler vector. A weighted graph $G = (V,E,w)$ is undirected if the edges are unoriented. \textbf{Definition 2.1} Let $G = (V,E,w)$ be a weighted graph. The Laplacian of $G$, $L(G) \in \mathbb{R}^{n \times n}$, shortened to $L$, where $n = |V|$, is defined as follows: \begin{displaymath} L(G)_{(i,j)} = \left\{ \begin{array}{ll} d_{v_i} & \text{if } i = j,\\ -w_{(i,j)} & \text{if } i \neq j,\\ \end{array} \right. \end{displaymath} where $d_{v_i}$ is the (weighted) degree of vertex $v_i$ and $w_{(i,j)}$ is the weight of the edge connecting $v_i$ and $v_j$. This Laplacian is positive semi-definite and diagonally dominant, and the sum of any row or column of $L$ is zero. This makes the smallest eigenvalue 0, with corresponding eigenvector $[1,1,...,1]^T$. We are particularly interested in the second smallest eigenvalue and its eigenvector. \textbf{Definition 2.2} The second smallest eigenvalue of the Laplacian of a graph $G$ is called the algebraic connectivity. This eigenvalue must be greater than or equal to 0. The corresponding eigenvector $\phi_2$ is called the Fiedler vector of $G$. The importance of the Fiedler vector is detailed in [4, 5].
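As a concrete illustration of Definition 2.1, the following plain-Python sketch builds the Laplacian of a small weighted graph and checks the row-sum property stated above. The helper function and the toy 4-cycle are our own illustrative choices, not taken from the paper:

```python
# Build the graph Laplacian L of a weighted undirected graph on n vertices,
# where the graph is given as a dict {(i, j): weight} with i < j.
def graph_laplacian(n, edges):
    L = [[0.0] * n for _ in range(n)]
    for (i, j), w in edges.items():
        L[i][j] -= w          # off-diagonal entry: -w_(i,j)
        L[j][i] -= w          # symmetric entry
        L[i][i] += w          # diagonal entry: weighted degree d_{v_i}
        L[j][j] += w
    return L

# A weighted 4-cycle as a toy example.
edges = {(0, 1): 1.0, (1, 2): 2.0, (2, 3): 1.0, (0, 3): 2.0}
L = graph_laplacian(4, edges)

# Every row sums to zero, so [1,1,...,1]^T is an eigenvector for eigenvalue 0.
print([sum(row) for row in L])  # → [0.0, 0.0, 0.0, 0.0]
```

The zero row sums confirm that the constant vector lies in the kernel, which is why the algorithm targets the *second* smallest eigenvalue.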
It is important to note that the coarsest graph must be very small, at around $|V| < 25$. A direct power iteration is used at this coarsest level to obtain an eigenvector. Afterwards, the eigenvector is projected upwards and then smoothed using Gauss-Seidel. We now introduce heavy edge coarsening for our cascadic algorithm. In our algorithm, $L^i \in \mathbb{R}^{n_{i} \times n_{i}}$. Heavy edge coarsening is iterated on the graph Laplacian in order to create multiple levels for solving. This algorithm makes up the setup phase. \begin{algorithm} \caption{Heavy Edge Coarsening} \begin{algorithmic}[1] \Procedure {HEC}{L} \State $c \leftarrow 0$ \State $p \leftarrow randperm(n_i)$ \State $q \leftarrow zeros(n_i,1)$ \For {$i=1 \rightarrow n_i$} \If {$q(p(i)) = 0$} \State $m \leftarrow argmin(L(:,p(i)))$ \If {$q(m) = 0$} \State $c \leftarrow c+1$ \State $q(m) = c$ \State $q(p(i)) = c$ \Else \State $q(p(i)) = q(m)$ \EndIf \EndIf \EndFor \State $I_{i}^{i+1} \leftarrow zeros(c,n_i)$ \For {$i=1 \rightarrow n_i$} \State $I_{i}^{i+1}(q(i),i) = 1$ \EndFor \EndProcedure \end{algorithmic} \end{algorithm} Heavy edge coarsening is further detailed in [3], where several properties of the algorithm are proved as well. Next, we formally introduce the Gauss-Seidel method. This method solves a linear system starting from a guess vector. In our algorithm, we use the vector projected upwards from the coarser level as the guess. This mirrors the original algorithm, where the projected vector likewise served as the first guess for power iteration. The values $A$ and $b$ are the original values in the linear system $Ax = b$, $X0$ is our initial guess for the solution of this system, $N$ denotes the maximum number of iterations allowed, and $tol$ represents the error tolerance. The algorithm outputs a solution to $Ax = b$ within the specified error.
\begin{algorithm} \caption{Gauss-Seidel} \begin{algorithmic}[1] \Procedure {G-S}{A, b, X0, tol, N} \State $k \leftarrow 1$ \While {$k \leq N$} \For {$i = 1 \rightarrow n$} \State $x_i = 1/a_{ii}[- \sum\limits_{j = 1}^{i-1} (a_{ij}x_j) - \sum\limits_{j = i+1}^{n}(a_{ij}X0_j) + b_i]$ \EndFor \If {$\|x - X0\| < tol$} \State output $[x_1, x_2, ..., x_n]$ \EndIf \For {$i = 1 \rightarrow n$} \State $X0_i = x_i$ \EndFor \State $k = k+1$ \EndWhile \State Output $[x_1, x_2, ..., x_n]$ \EndProcedure \end{algorithmic} \end{algorithm} We discuss two theorems that confirm that the Gauss-Seidel method will converge to a solution in our multigrid algorithm. \textbf{Theorem 2.1}: The Gauss-Seidel method converges if $A$ is symmetric positive definite or if $A$ is strictly or irreducibly diagonally dominant. \textbf{Theorem 2.2}: Let $A$ be a symmetric positive definite matrix. Then the Gauss-Seidel method converges for any arbitrary choice of initial approximation $x$. A proof of these theorems can be found in [6]. Our graph Laplacians on all levels are symmetric, diagonally dominant and positive semi-definite, and on the subspace orthogonal to the constant vector they act as positive definite operators; therefore the Gauss-Seidel smoother is well behaved on all levels. With our component algorithms defined and sufficiently detailed, we can now outline the procedure for our algorithm. We begin with a setup phase in which heavy edge coarsening sets up the levels on which we do computations. After this, we solve the eigenvalue problem on the coarsest level. We then begin projecting our eigenvector upwards and applying Gauss-Seidel on finer and finer levels until we reach the finest level, our original matrix. At this level, we use Gauss-Seidel one last time to yield the Fiedler vector and then calculate the Rayleigh quotient for the algebraic connectivity. We input the finest level graph Laplacian, and the algorithm outputs the Fiedler vector and the corresponding eigenvalue. 
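For concreteness, here is a hedged Python sketch of the Gauss-Seidel sweep of Algorithm 2 (our own illustration, with the convergence test performed after a full sweep):

```python
def gauss_seidel(A, b, x0, tol=1e-10, N=1000):
    """Iteratively solve Ax = b.  Within a sweep, entry i uses the already
    updated entries x_j for j < i and the previous iterate for j > i,
    which is exactly the Gauss-Seidel update rule."""
    n = len(A)
    x = list(x0)
    for _ in range(N):
        x_old = list(x)
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
        if max(abs(x[i] - x_old[i]) for i in range(n)) < tol:
            break
    return x
```

Note that a multigrid smoother typically runs only a fixed small number of sweeps rather than iterating to full convergence; the singular Laplacian systems are handled on the subspace orthogonal to the constant vector.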
\begin{algorithm} \caption{Gauss-Seidel Cascadic Multigrid} \begin{algorithmic}[1] \Procedure {Step 1: Setup Phase}{L} \State $i = 0$ \While {$n_i > 25$} \State $I_i^{i+1} \leftarrow HEC(L^i)$ \State $L^{i+1} = I_{i}^{i+1}L^{i}(I_{i}^{i+1})^T$ \State $i = i+1$ \EndWhile \State $J \leftarrow i$ \EndProcedure \Procedure {Step 2: Coarsest Level Solving Phase}{$L^J$} \State $y^{(J)} \leftarrow GS(L^J, rand(n_J))$ \EndProcedure \Procedure{Step 3: Cascadic Refinement Phase}{$y^{(J)}$, $L$} \For {$j = J-1 \rightarrow 0$} \State $y^{(j)} = (I_{j}^{j+1})^{T}y^{(j+1)}$ \State $y^{(j)} \leftarrow GS(L^j, y^{(j)})$ \EndFor \EndProcedure \end{algorithmic} \end{algorithm} Structurally, this algorithm is similar to other multigrid algorithms in that it begins with a setup phase and solves from the coarsest level upwards. It is nearly identical to the Cascadic Multigrid Algorithm, with the sole difference that Gauss-Seidel replaces power iteration. \section{Numerical Tests} We perform numerical tests on a variety of graphs listed in Table 4.1. The graphs were taken from the University of Florida Sparse Matrix Collection [7]. The computations were performed on an HP Envy with a 2.40 GHz Intel Core i7 Processor with 8.00 GB of RAM. We consider the performance of the Gauss-Seidel Cascadic Multigrid Algorithm on matrices with over 8000 nodes. We use the stopping tolerance $(u^k,u^{k-1}) > 1 - 10^{-6}$. \begin{tabular}{l||l|l|l} Matrix Name & Matrix Size & Matrix Edges & CGMG runtime (s) \\ \hline barth5 & 15606 & 45878 & 0.371467 \\ \hline bcsstk32 & 44609 & 985046 & 1.242307 \\ \hline bcsstk33 & 8738 & 291583 & 0.381135 \\ \hline brack2 & 62631 & 366559 & 1.307903 \\ \hline copter1 & 17222 & 96921 & 0.42307 \\ \hline ct2010 & 67578 & 168176 & 1.265944 \\ \hline halfb & 224617 & 6081602 & 6.694857 \\ \hline srb1 & 54924 & 1453614 & 1.582835 \\ \hline wing\_nodal & 10937 & 75488 & 0.40845 \end{tabular} Next, we show that the algorithm is $O(N)$. 
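The final Rayleigh-quotient step of the refinement phase can be sketched as follows (our own illustration, using a dense Laplacian and first deflating the constant vector, which spans the kernel of $L$):

```python
def rayleigh_quotient(L, y):
    """Return y^T L y / y^T y after projecting out the all-ones kernel
    vector of the graph Laplacian; for the smoothed fine-level vector y
    this approximates the algebraic connectivity."""
    n = len(y)
    mean = sum(y) / n
    y = [v - mean for v in y]                         # deflate the kernel
    Ly = [sum(L[i][j] * y[j] for j in range(n)) for i in range(n)]
    return sum(y[i] * Ly[i] for i in range(n)) / sum(v * v for v in y)

# Path graph on 3 vertices: the Laplacian eigenvalues are 0, 1 and 3,
# and [1, 0, -1] is exactly the Fiedler vector.
L3 = [[1, -1, 0], [-1, 2, -1], [0, -1, 1]]
print(rayleigh_quotient(L3, [1, 0, -1]))   # -> 1.0, the algebraic connectivity
```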
We run the algorithm on uniform square grids of various sizes and show that the runtime increases linearly with the matrix size. The number of nodes and edges increases linearly; therefore we can expect the runtime of the algorithm to also increase linearly. Because multigrid algorithms run in linear time, it is important that the Gauss-Seidel smoother does not change the asymptotic runtime; otherwise it would be an inferior algorithm to use. The correlation coefficient $r$ is very close to $1$, indicating that the algorithm does in fact have $O(N)$ complexity. \begin{tabular}{l|l} Matrix Nodes & Time (seconds) \\ \hline 106276 & 1.921614 \\ \hline 178929 & 3.220836 \\ \hline 232324 & 4.088426 \\ \hline 276676 & 5.344172 \\ \hline 303601 & 5.684314 \\ \hline 374544 & 7.178143 \\ \hline 425104 & 7.811554 \\ \hline 564001 & 10.565033 \\ \hline 657721 & 11.704087 \\ \hline 705600 & 12.936846 \\ \hline 736164 & 13.768696 \\ \hline 753424 & 13.843865 \\ \hline 762129 & 14.799933 \\ \hline 788544 & 14.613115 \\ \hline 795664 & 15.51262 \\ \hline 799236 & 16.808463 \\ \hline 848241 & 16.922279 \\ \hline 851929 & 16.233831 \\ \hline 915849 & 17.257426 \\ \hline 956484 & 19.349795 \end{tabular} \includegraphics[scale = 0.5] {linearity.png} \section{Conclusion} In this paper, we have presented an improvement on the existing Cascadic Multigrid Algorithm by introducing a Gauss-Seidel smoother in place of a power iteration smoother on each level. The algorithm is effective in calculating the algebraic connectivity and the Fiedler vector and is able to partition graphs quickly. Having shown that the Gauss-Seidel Cascadic Multigrid Algorithm runs in linear time, we can now discuss its benefits and pitfalls. If our initial graph Laplacian is not sparse, then Gauss-Seidel will perform poorly as a smoother, since it is inherently meant to work on sparse matrices; in this case, other multigrid algorithms would be preferable. However, the Gauss-Seidel smoother works well for most Laplacians, as most Laplacians are sparse. 
Furthermore, we showed that the algorithm is effective in calculating the Fiedler vector of a variety of different graphs. Future work may modify the smoother further; one improvement could be replacing Gauss-Seidel with a Lanczos smoother. Krylov subspace methods are costly for calculating the eigenvalues and eigenvectors of large matrices but produce accurate results. Future work could also include a convergence analysis of the cascadic multigrid algorithm at a more general level, taking the Gauss-Seidel method into account in the convergence. Of particular interest is our algorithm's convergence with respect to elliptic eigenvalue problems. \end{document}
\begin{document} \newtheorem{thm}{Theorem}[section] \newtheorem{cor}{Corollary}[section] \newtheorem{lem}{Lemma}[section] \newtheorem{definition}{Definition}[section] \newtheorem{con}{Conjecture}[section] \begin{center} {\Large\bf The super-connectivity of Kneser graph KG(n,3)} \\[20pt] {Yulan\ Chen $^a$ \quad Yuqing\ Lin $^b$ \footnote{Corresponding Author.} \quad Weigen\ Yan $^a$ \footnote{Partially supported by NSFC Grant (12071180; 11571139).\\ {\it Email address:} y\_l\[email protected] (Y. Chen), [email protected], [email protected] (W. Yan)}} \\[5pt] \footnotesize {$^a$ School of Sciences, Jimei University, Xiamen 361021, China \\ $^b$ School of Electrical Engineering, the University of Newcastle, Australia} \end{center} \begin{abstract} A vertex cut $S$ of a connected graph $G$ is a subset of vertices of $G$ whose deletion makes $G$ disconnected. A super vertex cut $S$ of a connected graph $G$ is a subset of vertices of $G$ whose deletion makes $G$ disconnected with no isolated vertex in any component of $G-S$. The super-connectivity of a graph $G$ is the size of a minimum super vertex cut of $G$. Let $KG(n,k)$ be the Kneser graph whose vertex set is the set of $k$-subsets of $\{1,\cdots,n\}$, where $k$ is the number of labels of each vertex. We aim to show that the conjecture of Boruzanli and Gauci \cite{EG19} on the super-connectivity of the Kneser graph $KG(n,k)$ is true when $k=3$. \\% \\% {\sl Keywords:}\quad Super-connectivity; Kneser graph; Super connected; Super vertex cut. \end{abstract} \section{Introduction} \ Let $[n]=\{1,\cdots,n\}$ be a set of $n$ labels. The Kneser graph $G=KG(n,k)$ is the graph whose vertices are the $k$-subsets of $[n]$; two vertices are adjacent if these two $k$-subsets are disjoint, i.e. the two vertices share no labels. Let $V(G)$ denote the set of vertices of $G$; it is clear that $V(KG(n,k)) = {[n] \choose k}$ and that $KG(n,k)$ is regular of degree ${n-k \choose k}$. 
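As a sanity check of these basic facts, one can build $KG(n,k)$ with Python's standard library and verify the vertex count and the regularity degree. The helper `kneser_graph` below is a hypothetical illustration, not part of the paper.

```python
from itertools import combinations
from math import comb

def kneser_graph(n, k):
    """Adjacency lists of KG(n,k): vertices are k-subsets of {1,...,n},
    adjacent exactly when the two subsets are disjoint."""
    vertices = list(combinations(range(1, n + 1), k))
    adj = {v: [] for v in vertices}
    for u, v in combinations(vertices, 2):
        if set(u).isdisjoint(v):      # share no labels
            adj[u].append(v)
            adj[v].append(u)
    return adj

adj = kneser_graph(7, 3)
assert len(adj) == comb(7, 3)                              # C(n,k) vertices
assert all(len(nb) == comb(4, 3) for nb in adj.values())   # degree C(n-k,k)
```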
A vertex cut $S$ of a connected graph $G$ is a subset of vertices of $G$ whose deletion disconnects $G$. The connectivity $\kappa$ of $G$ is the size of a minimum vertex cut of $G$. If the deletion of any vertex cut of size $\kappa$ in $G$ isolates a vertex, then $G$ is super-connected. A vertex cut which isolates a single vertex is called a trivial vertex cut of $G$. When $G$ is super-connected, it makes sense to determine the size of a minimum nontrivial vertex cut of $G$, that is, the super-connectivity $\kappa_{1}$ of $G$; a smallest nontrivial vertex cut is called a super vertex cut of $G$. A complete graph $K_n$ is a simple graph with $n$ vertices and an edge between every pair of vertices. The concept of the Kneser graph was proposed by Kneser in 1955 \cite{Kneser55}. Structural properties of Kneser graphs have been studied extensively, for example Hamiltonicity, the chromatic number and matchings in Kneser graphs. Chen and Lih proved that the Kneser graph is symmetric, vertex-transitive and edge-transitive \cite{CL87}. Using this property, Boruzanli and Gauci showed that the connectivity of the Kneser graph $KG(n,k)$ is ${n-k \choose k}$ \cite{EG19}. Harary proposed the concept of super-connectivity in 1983 \cite{Harary83}. Subsequently, Balbuena, Marcote and Garc\'{\i}a-V\'{a}zquez defined a similar concept, the restricted connectivity of graphs \cite{BMG05}. In this paper, we investigate the super-connectivity of the Kneser graph. It is known that if $n < 2k$, then $KG(n,k)$ contains no edges, and if $n = 2k$, then $KG(n,k)$ is a set of independent edges. The Kneser graph $KG(n, 1)$ is the complete graph on $n$ vertices. Boruzanli and Gauci made the following conjecture \cite{EG19}: \begin{con} Let $n\geq2k+1$, then the super-connectivity $\kappa_{1}$ of $KG(n,k)$ is \[ \kappa_{1}=\left\{\begin{array}{lcl} 2\left({n-k \choose k}-1 \right) & if & 2k+1\leq n<3k, \\ 2\left({n-k \choose k}-1 \right)-{n-2k\choose k} & if & n \geq 3k. 
\end{array} \right. \] \end{con} Boruzanli and Gauci proved that this conjecture holds when $k=2$ \cite{EG19}; in this work, we consider the case $k=3$. \section{Super-Connectivity of $KG(n,3)$} In this section, we determine the super-connectivity of $KG(n,3)$ when $n \geq 7$ and confirm that Conjecture 1.1 is true for $k=3$. \begin{thm} The super-connectivity of the Kneser graph $KG(n,3)$ is \[ \kappa_{1}=\left\{\begin{array}{lcl} 2\left({n-3 \choose 3}-1 \right) & if & 7\leq n \leq 8, \\ 2\left({n-3 \choose 3}-1 \right)-{n-6\choose 3} & if & n \geq 9. \end{array} \right. \] \end{thm} \textit{Proof}. Let $S\subseteq V(G)$ be a super vertex cut of $G$. Suppose $n \geq 9$ and $|S| < 2\left({n-3 \choose 3}-1 \right)-{n-6\choose 3}$; then we have \begin{align*} |G-S| & > {n\choose 3}-2\left[{n-3\choose 3}-1\right]+{n-6 \choose 3} = \frac{54n-204}{6}=9n-34. \end{align*} This means that if $\kappa_{1}$ were less than the bound stated in the conjecture, then there would be more than $9n-34$ vertices in $G-S$. In the following, we show that $G-S$ has to be connected if it has more than $9n-34$ vertices. Since $S$ is a super vertex cut, $G-S$ has at least two components and each component has at least $2$ vertices. If $G-S$ has a component containing only two vertices, then it is straightforward that $|S| = \kappa_{1}$, since $S$ has to contain all the neighbours of these two vertices, and it is easy to see that there is no isolated vertex in $G-S$. Now we assume that each component of $G-S$ has at least $3$ vertices. We may assume that $G-S$ has two parts $C_{1},C_{2}$, with $C_2=G-S-C_1$. Note that $C_2$ might not be connected; if $C_{2}$ is not connected, then $C_{2}$ is the union of connected components, each of which has at least $3$ vertices. Since $C_1$ has at least three vertices, let them be $v_{1},v_{2},v_{3}$. 
These three vertices can form two different graphs: either a complete graph $K_3$ or a path $P_3$ of length $2$. If the three vertices form a path, then there are two possibilities: either the two non-adjacent vertices share only one common label, which we refer to as a Type 1 path, or the two non-adjacent vertices share two common labels, which we refer to as a Type 2 path. We make the following three claims. Claim 1: If there is a $K_3$ in $C_1$, then there are at most $27$ vertices in $C_2$. Let the three vertices in $C_1$ be $v_{1}=\{1,2,3\}$, $v_{2}=\{4,5,6\}$, $v_{3}=\{7,8,9\}$. Since $C_1$ and $C_2$ are disconnected, every vertex in $C_{2}$ has at least one label in common with every vertex in $C_{1}$, i.e. any vertex of $C_2$ has to have a label from $\{1,2,3\}$, a label from $\{4,5,6\}$ and a label from $\{7,8,9\}$; hence the number of vertices in $C_{2}$ is at most $3^{3}=27$. Claim 2: If there is a Type 1 path in $C_1$, then there are at most $3n+3$ vertices in $C_2$. Let the three vertices in $C_1$ be $v_{1}=\{1,2,3\}$, $v_{2}=\{4,5,6\}$, $v_{3}=\{1,7,8\}$, so the common label of the two end vertices is $1$. Similar to the proof of Claim 1, at most $3(n-2)$ vertices in $C_{2}$ contain label $1$, since the vertices of $C_2$ in this case have to use a label from $\{4,5,6\}$; in this calculation we have double counted the $3$ vertices $\{1,4,5\}$, $\{1,4,6\}$, $\{1,5,6\}$, so there are at most $3(n-2)-3$ vertices containing label $1$ in $C_2$. There are at most $2\cdot3\cdot2$ vertices in $C_{2}$ that do not contain label $1$. Hence the number of vertices in $C_{2}$ is at most $3(n-2)-3+12=3n+3$. Claim 3: If there is a Type 2 path $P_3$ in $C_1$, then there are at most $6n-18$ vertices in $C_2$. Let the three vertices in $C_1$ be $v_{1}=\{1,2,3\}$, $v_{2}=\{4,5,6\}$, $v_{3}=\{1,2,7\}$, so the set of common labels of the end vertices is $\{1,2\}$. 
Similar to the previous argument, at most $3(n-3)$ vertices in $C_{2}$ contain label $1$ but not label $2$; similarly, at most $3(n-3)$ vertices in $C_{2}$ contain label $2$ but not label $1$. There are at most $3$ vertices in $C_{2}$ containing both labels $\{1,2\}$, and at most $3$ vertices in $C_{2}$ containing neither label $1$ nor label $2$. Since we have double counted the $6$ vertices $\{1,4,5\}$, $\{1,4,6\}$, $\{1,5,6\}$, $\{2,4,5\}$, $\{2,4,6\}$, $\{2,5,6\}$, the number of vertices in $C_{2}$ is at most $2\cdot3(n-3)+6-6=6n-18$. Next we show that $|C_1 \cup C_2| \leq 9n-34$, which implies that $G-S$ has to be connected if it contains more than $9n-34$ vertices. We consider the following cases. Case 1: There are $K_3$s in both $C_1$ and $C_2$. If the three vertices in $C_{1}$ form a complete graph $K_3$, let them be $v_{1}=\{1,2,3\}$, $v_{2}=\{4,5,6\}$, $v_{3}=\{7,8,9\}$; then by Claim 1 the number of vertices in $C_{2}$ is at most $27$. If $C_2$ also contains a $K_3$, for example $\{1,4,7\}$, $\{2,5,8\}$, $\{3,6,9\}$, then $C_1 \cup C_2$ has at most $54$ vertices. Among these $54$ vertices, we have double counted the $6$ vertices $\{1,5,9\}$, $\{1,6,8\}$, $\{2,4,9\}$, $\{2,6,7\}$, $\{3,4,8\}$, $\{3,5,7\}$. Moreover, the vertex $\{2,3,7\}$ can only be in $C_1$ or $S$, and $\{1,4,8\}$ can only be in $C_2$ or $S$; however they are adjacent, so one of them must be in $S$. The same holds for the pairs $\{5,6,7\}$ and $\{1,4,9\}$, $\{2,3,4\}$ and $\{1,5,7\}$; thus $C_1 \cup C_2$ has at most $45$ vertices. When $n \geq 9$, $9n-34$ is larger than $45$, so $G-S$ must be connected, i.e. $C_1$ and $C_2$ must be connected to each other in this case, a contradiction. So $C_1$ and $C_2$ cannot both contain a $K_3$. See Figure \ref{case1} as an example. \begin{figure} \caption{The case in $C_1$ and $C_2$} \label{case1} \end{figure} Case 2: There is a $K_3$ in $C_1$ or $C_2$, but not in both. 
Suppose $C_1$ contains a $K_3$ and $C_2$ does not. Let the three vertices in $C_1$ be $v_{1}=\{1,2,3\}$, $v_{2}=\{4,5,6\}$, $v_{3}=\{7,8,9\}$. From Claim 1, we know that there are at most $27$ vertices in $C_2$. If all $27$ vertices are present in $C_2$, it is easy to verify that there are $36$ $K_3$s in $C_2$, and no two $K_3$s share an edge; however, $4$ $K_3$s will share a vertex, for example $\{1,5,7\}$, $\{2,4,8\}$, $\{3,6,9\}$; $\{1,5,7\}$, $\{2,4,9\}$, $\{3,6,8\}$; $\{1,5,7\}$, $\{2,6,8\}$, $\{3,4,9\}$; and $\{1,5,7\}$, $\{2,6,9\}$, $\{3,4,8\}$ (see Figure \ref{K3s}). It is straightforward to see that at least $9$ vertices have to be excluded from these $27$ vertices so that there is no $K_3$ in $C_2$; thus there are at most $27-9=18$ vertices in $C_2$. If there are exactly $18$ vertices in $C_2$, this implies that the $9$ removed vertices all contain a certain label, for example label $3$; otherwise, more than $9$ vertices would have to be excluded to leave no $K_3$ in $C_2$. \begin{figure} \caption{The $K_{3}$s in $C_2$} \label{K3s} \end{figure} There must be a path $P_3$ in $C_2$, either Type 1 or Type 2; otherwise there would be an isolated vertex or a $K_2$ in $C_2$, contradicting the assumption that each component of $C_2$ has at least $3$ vertices. For the first case, without loss of generality, assume the common label of the two end vertices of the path is $1$, and the middle vertex of the path contains label $2$. We may further assume that the three vertices on the path are $\{1,4,x\}$, $\{2,5,y\}$, $\{1,6,z\} \in C_2$, where $x\neq y\neq z$ and $x,y,z \in \{7,8,9\}$. From the proof of Claim 2, we know that there are at most $3n+3$ vertices in $C_1$. However, we have double counted the $7$ vertices $\{1,4,y\}$, $\{1,5,7\}$, $\{1,5,8\}$, $\{1,5,9\}$, $\{1,6,y\}$, $\{2,4,z\}$, $\{2,6,x\}$, which should be in either $C_1$ or $C_2$ but not in both. 
Thus, overall, $C_1 \cup C_2$ has no more than $18+3n+3-7=3n+14$ vertices, which is less than $9n-34$ when $n \geq 9$, so $C_1$ and $C_2$ have to be connected to each other, a contradiction. See Figure \ref{case2.1} for an illustration. \begin{figure} \caption{The case in $C_1$ and $C_2$} \label{case2.1} \end{figure} For the second case, assume the path consists of the three vertices $\{1,4,x\}$, $\{2,5,y\}$, $\{1,4,z\} \in C_2$, where $x\neq y\neq z$ and $x,y,z \in \{7,8,9\}$. From the proof of Claim 3, we know that there are at most $6n-18$ vertices in $C_1$. We have double counted the $8$ vertices $\{1,4,y\}$, $\{1,5,7\}$, $\{1,5,8\}$, $\{1,5,9\}$, $\{1,6,y\}$, $\{2,4,7\}$, $\{2,4,8\}$, $\{2,4,9\}$; these vertices should be in either $C_1$ or $C_2$ but not in both. Thus, overall, $C_1 \cup C_2$ has no more than $18+6n-18-8=6n-8$ vertices, which is less than $9n-34$ when $n \geq 9$, so $C_1$ and $C_2$ have to be connected to each other, a contradiction. See Figure \ref{case2.2} for an illustration. \begin{figure} \caption{The case in $C_1$ and $C_2$} \label{case2.2} \end{figure} Case 3: There is no $K_3$ in either $C_1$ or $C_2$, i.e. the components $C_1$ and $C_2$ contain $P_3$s. We consider the following three sub-cases based on the types of the paths. Suppose there is a Type 1 path in $C_{1}$; let the three vertices be $v_{1}=\{1,2,3\}$, $v_{2}=\{4,5,6\}$, $v_{3}=\{1,7,8\}$. By Claim 2, the number of vertices in $C_{2}$ is at most $3n+3$. Now consider these vertices in $C_2$: there are at most $12$ vertices that do not contain label $1$, and if all of them are in $C_2$, i.e. none of them are included in $S$, then these $12$ vertices form two cycles of length $6$. Of course, if some of them are in $S$, then the remaining vertices of each cycle form a set of paths. The remaining vertices in $C_2$ all contain label $1$, and are thus not connected to each other; they are connected to the vertices which do not contain label $1$. 
Then, we claim that either there is a Type 1 path, for example $\{1,4,7\}$, $\{2,5,8\}$, $\{1,6,9\}$, or there are no more than $2n+4$ vertices in $C_2$. To see this, suppose we have no such desired path; there are up to $n-2$ vertices containing both labels $\{1,4\}$, and there could be up to $n-2$ vertices in $C_2$ containing both labels $\{1,5\}$. Clearly not all $12$ vertices without label $1$ are in $C_2$, since among those $12$ vertices, ones such as $\{2,6,7\}$, $\{2,6,8\}$, $\{3,6,7\}$, $\{3,6,8\}$ would give us a desired path; thus at most $8$ of these $12$ vertices can be in $C_2$. Note also that some of these $12$ vertices must be contained in $C_2$, otherwise we would have a set of isolated vertices in $C_2$. Then the number of vertices in $C_2$ is at most $2(n-2)+8=2n+4$. If there are vertices containing both labels $\{1,6\}$ in $C_2$, then we certainly see the desired path. If we have the desired Type 1 path in $C_2$, let the three vertices be $\{1,4,x\}$, $\{2,5,y\}$, $\{1,6,z\}$, where $x\neq y\neq z$ and $x,z \in \{3,7,\cdots,n\}$, $y \in \{7,8\}$. Then, based on the proof of Claim 2, $C_1$ has at most $3n+3$ vertices, so $C_1 \cup C_2$ has at most $6n+6$ vertices. Notice also that we have double counted the vertices of the form $\{1,5,a\}$, where $a\in \{2,3,4,6,\cdots,n\}$, and the vertices $\{1,2,4\}$, $\{1,2,6\}$, $\{1,4,y\}$, $\{1,6,y\}$, which appear in both $C_1$ and $C_2$ in our calculation. Meanwhile, $\{2,4,6\}$ is only in $C_1$ or $S$, while the vertex $\{3,5,7\}$ is either in $C_2$ or $S$. Depending on the choice of $x, y, z$, the vertex $\{3,5,7\}$ could also appear in $C_1$, for example in the case $x=3,y=8,z=7$. If $\{3,5,7\}$ is either in $C_2$ or $S$, then since $\{2,4,6\}$ and $\{3,5,7\}$ are adjacent, one of them must be in $S$. If $\{3,5,7\}$ is in $C_1$, then $\{3,5,7\}$ is not in $C_2$, and we know the size of $C_2$ has to be one less than the maximum possible. 
The same holds for $\{1,2,7\}$ and $\{3,5,8\}$, and for $\{1,2,8\}$ and $\{3,6,7\}$. Therefore, there are no more than $5n+1$ vertices in $C_1 \cup C_2$, and $9n-34$ is larger than $5n+1$ when $n \geq 9$, so $C_1$ and $C_2$ have to be connected to each other, a contradiction. See Figure \ref{case3.1} for an illustration. \begin{figure} \caption{The case in $C_1$ and $C_2$} \label{case3.1} \end{figure} If there is no desired Type 1 path, then $C_2$ has at most $2n+4$ vertices, and we know there is a Type 2 path in $C_2$. Let the two shared labels be $\{1,4\}$ and let the three vertices be $\{1,4,x\}$, $\{2,5,y\}$, $\{1,4,z\}$ as shown in Figure \ref{case3.2}, where $x\neq y\neq z$ and $x,z \in \{3,6,\cdots,n\}$, $y \in \{7,8\}$. Based on the proof of Claim 3, there are at most $6n-18$ vertices in $C_1$. We have double counted the vertices of the form $\{1,5,a\}$, where $a\in \{2,3,4,6,\cdots,n\}$, and the vertices $\{1,2,4\}$, $\{1,2,6\}$, $\{1,4,y\}$, $\{1,6,y\}$, $\{2,4,7\}$, $\{2,4,8\}$, which appear in both $C_1$ and $C_2$ in our calculation. Therefore, there are no more than $7n-18$ vertices in $C_1 \cup C_2$, and $9n-34$ is larger than $7n-18$ when $n \geq 9$, so $C_1$ and $C_2$ have to be connected to each other, a contradiction. \begin{figure} \caption{The case in $C_1$ and $C_2$} \label{case3.2} \end{figure} Now assume that there is no Type 1 path in $C_1$; then there must be a Type 2 path in $C_1$. Let the three vertices on the path be $v_{1}=\{1,2,3\}$, $v_{2}=\{4,5,6\}$, $v_{3}=\{1,2,7\}$; then by Claim 3 the number of vertices in $C_{2}$ is at most $6n-18$. The case where there is a Type 2 path in $C_1$ and a Type 1 path in $C_2$ is the same as the case where there is a Type 1 path in $C_1$ and a Type 2 path in $C_2$, which we have considered above, so here we only consider the case where there is a Type 2 path in both $C_1$ and $C_2$. 
Suppose there are vertices containing both labels $\{1,4\}$ and vertices containing both labels $\{2,5\}$ in $C_2$. Then there is no vertex containing both labels $\{1,6\}$ and no vertex containing both labels $\{2,6\}$ in $C_2$; furthermore, there is no vertex containing neither label $1$ nor label $2$ in $C_2$, which implies that there is no vertex containing both labels $\{1,2\}$ in $C_2$, since the vertices containing both $1$ and $2$ only connect to the vertices containing neither $1$ nor $2$ in $C_2$; otherwise a Type 1 path or a $K_3$ would appear in $C_2$. Then the number of vertices in $C_2$ is at most $4(n-3)-2$: at most $n-3$ vertices contain both labels $\{1,4\}$, at most $n-3$ vertices contain both labels $\{1,5\}$, at most $n-3$ vertices contain both labels $\{2,4\}$ and at most $n-3$ vertices contain both labels $\{2,5\}$, and we have double counted the vertices $\{1,4,5\}$ and $\{2,4,5\}$. The Type 2 path in $C_2$ can then be $\{1,4,x\}$, $\{2,5,y\}$, $\{1,4,z\}$, where $x\neq y\neq z$ and $x,y,z \in \{3,6,\cdots,n\}$; based on the proof of Claim 3, there are at most $6n-18$ vertices in $C_1$. We have double counted the vertices of the form $\{1,5,a\}$ and $\{2,4,b\}$, where $a\in \{2,3,4,6,\cdots,n\}$ and $b\in \{1,3,5,\cdots,n\}$, which appear in both $C_1$ and $C_2$ in our calculation. Therefore, there are no more than $8n-28$ vertices in $C_1 \cup C_2$, and $9n-34$ is larger than $8n-28$ when $n \geq 9$, so $C_1$ and $C_2$ have to be connected to each other, a contradiction. See Figure \ref{case3.3} for an illustration. \begin{figure} \caption{The case in $C_1$ and $C_2$} \label{case3.3} \end{figure} Thus we have shown that if $n\ge 9$, $G-S$ has to be connected, and therefore the conjecture is true for $n \geq 9$. When $n=7$, only Type 2 paths are possible in the graph. 
Let the three vertices in $C_1$ be $v_{1}=\{1,2,3\}$, $v_{2}=\{4,5,6\}$, $v_{3}=\{1,2,7\}$. Based on the proof of Claim 3, the number of vertices of $C_{2}$ is at most $6n-18=24$: $9$ vertices contain label $1$ but not label $2$, $9$ vertices contain label $2$ but not label $1$, $3$ vertices contain both labels $\{1,2\}$ and $3$ vertices contain neither label $1$ nor label $2$. As $C_2$ has at least three vertices, and only Type 2 paths are possible, let the three vertices on the path be $\{1,4,x\}$, $\{2,5,y\}$, $\{1,4,z\}$, where $x\neq y\neq z$ and $\{x,y,z\}=\{3,6,7\}$. There are then three possible paths, depending on the choice of $y$: $\{1,4,3\}$, $\{2,5,6\}$, $\{1,4,7\}$; or $\{1,4,6\}$, $\{2,5,3\}$, $\{1,4,7\}$; or $\{1,4,3\}$, $\{2,5,7\}$, $\{1,4,6\}$. If the first path is present in $C_2$, then based on the proof of Claim 3, $C_1$ has at most $6n-18=24$ vertices, and we have double counted the $16$ vertices $\{1,2,4\}$, $\{1,2,5\}$, $\{1,2,6\}$, $\{1,3,5\}$, $\{1,3,6\}$, $\{1,4,5\}$, $\{1,4,6\}$, $\{1,5,6\}$, $\{1,5,7\}$, $\{1,6,7\}$, $\{2,3,4\}$, $\{2,4,5\}$, $\{2,4,6\}$, $\{2,4,7\}$, $\{3,5,7\}$, $\{3,6,7\}$. Meanwhile, $\{4,5,7\}$ can only be in $C_1$ or $S$, and $\{2,3,6\}$ can only be in $C_2$ or $S$, but they are adjacent, so one of them must be in $S$; the same holds for $\{3,4,5\}$ and $\{2,6,7\}$, $\{3,4,6\}$ and $\{2,5,7\}$, $\{4,6,7\}$ and $\{2,3,5\}$. Thus, overall, there are no more than $24+24-16-4=28$ vertices, which is less than $|G|-\kappa_1=29$, so $C_1$ and $C_2$ have to be connected to each other, a contradiction. We can discuss the second and third paths in the same way and arrive at the same conclusion: the number of vertices in $C_1 \cup C_2$ is at most $24+24-16-4=28$, which is less than $|G|-\kappa_1=29$, so $C_1$ and $C_2$ have to be connected, a contradiction. 
When $n=8$, the three vertices in $C_{1}$ form a path $P_3$ of length $2$, and it is possible for $C_1$ and $C_2$ to contain a Type 1 path or a Type 2 path, so we have to examine each case. First, suppose $C_1$ has a Type 1 path $v_{1}=\{1,2,3\}$, $v_{2}=\{4,5,6\}$, $v_{3}=\{1,7,8\}$. Based on the proof of Claim 2, the number of vertices in $C_{2}$ is at most $3n+3=27$: at most $15$ vertices contain label $1$ and at most $12$ vertices do not contain label $1$. If we have a Type 1 path in $C_2$, for example $\{1,4,x\}$, $\{2,5,y\}$, $\{1,6,z\}$, where $x\neq y\neq z$ and $x,z \in \{3,7,8\}$, $y \in \{7,8\}$, then there are four possible paths, namely $\{1,4,3\}$, $\{2,5,7\}$, $\{1,6,8\}$; or $\{1,4,8\}$, $\{2,5,7\}$, $\{1,6,3\}$; or $\{1,4,3\}$, $\{2,5,8\}$, $\{1,6,7\}$; or $\{1,4,7\}$, $\{2,5,8\}$, $\{1,6,3\}$. If the first path is present in $C_2$, then based on the proof of Claim 2, $C_1$ has at most $3n+3=27$ vertices, and we have double counted the $13$ vertices $\{1,2,4\}$, $\{1,2,5\}$, $\{1,2,6\}$, $\{1,3,5\}$, $\{1,4,5\}$, $\{1,4,7\}$, $\{1,5,6\}$, $\{1,5,7\}$, $\{1,5,8\}$, $\{1,6,7\}$, $\{2,4,8\}$, $\{3,5,8\}$, $\{3,6,7\}$. Meanwhile, $\{1,2,7\}$ can only be in $C_1$ or $S$, and $\{3,4,8\}$ can only be in $C_2$ or $S$, but they are adjacent, so one of them must be in $S$; the same holds for the pairs $\{2,4,6\}$ and $\{3,5,7\}$, $\{4,5,8\}$ and $\{2,6,7\}$, $\{4,7,8\}$ and $\{1,3,6\}$. Thus, overall, there are no more than $27+27-13-4=37$ vertices, which is less than $|G|-\kappa_1=38$, so $C_1$ and $C_2$ have to be connected to each other, a contradiction. We can discuss the second, third and fourth paths in the same way and arrive at the same contradiction. If we have a Type 2 path in $C_2$, for example $\{1,4,x\}$, $\{2,5,y\}$, $\{1,4,z\}$, where $x\neq y\neq z$ and $x,z \in \{3,6,7,8\}$, $y \in \{7,8\}$, then based on the proof of Claim 3, $C_1$ has at most $6n-18=30$ vertices. 
We have double counted the $13$ vertices $\{1,2,4\}$, $\{1,2,5\}$, $\{1,2,6\}$, $\{1,3,5\}$, $\{1,4,5\}$, $\{1,4,y\}$, $\{1,5,6\}$, $\{1,5,7\}$, $\{1,5,8\}$, $\{1,6,y\}$, $\{2,4,7\}$, $\{2,4,8\}$, $\{3,4,y\}$. Meanwhile, $\{1,2,7\}$ can only be in $C_1$ or $S$, while the vertex $\{3,5,8\}$ is either in $C_2$ or $S$. Depending on the choice of $x, y, z$, the vertex $\{3,5,8\}$ could also appear in $C_1$, for example when $x=3,y=7,z=8$. If $\{3,5,8\}$ is either in $C_2$ or $S$, then since $\{1,2,7\}$ and $\{3,5,8\}$ are adjacent, one of them must be in $S$. If $\{3,5,8\}$ is in $C_1$, then we know the size of $C_2$ has to be one less than the maximum possible. The same holds for $\{1,2,8\}$ and $\{3,5,7\}$, $\{3,4,5\}$ and $\{2,6,7\}$, $\{4,5,7\}$ and $\{2,6,8\}$, $\{4,5,8\}$ and $\{3,6,7\}$, $\{2,4,5\}$ and $\{3,6,8\}$. Also, $\{4,7,8\}$ can only be in $C_1$ or $S$, and $\{1,3,6\}$ can only be in $C_2$ or $S$, but they are adjacent, so one of them must be in $S$. Thus, overall, there are no more than $27+30-13-7=37$ vertices, which is less than $|G|-\kappa_1=38$, so $C_1$ and $C_2$ have to be connected to each other, a contradiction. Second, suppose $C_1$ has a Type 2 path $v_{1}=\{1,2,3\}$, $v_{2}=\{4,5,6\}$, $v_{3}=\{1,2,7\}$. Based on the proof of Claim 3, the number of vertices of $C_{2}$ is at most $6n-18=30$: at most $15$ vertices contain label $1$ but not label $2$, at most $15$ vertices contain label $2$ but not label $1$, at most $3$ vertices contain both labels $\{1,2\}$ and at most $3$ vertices contain neither label $1$ nor label $2$, with $6$ vertices double counted. The case where there is a Type 2 path in $C_1$ and a Type 1 path in $C_2$ is similar to the case where there is a Type 1 path in $C_1$ and a Type 2 path in $C_2$. The latter we have considered already, so here we only consider the case where there is a Type 2 path in both $C_1$ and $C_2$. 
Suppose there are vertices containing both labels $\{1,4\}$ and vertices containing both labels $\{2,5\}$ in $C_2$. Then there is no vertex containing both labels $\{1,6\}$ in $C_2$, and there is no vertex containing both labels $\{2,6\}$ in $C_2$; otherwise a Type 1 path would appear in $C_2$. Then the number of vertices in $C_2$ is at most $4\cdot5+6-2=24$: at most $5$ vertices contain both labels $\{1,4\}$, at most $5$ vertices contain both labels $\{1,5\}$, at most $5$ vertices contain both labels $\{2,4\}$, at most $5$ vertices contain both labels $\{2,5\}$, at most $3$ vertices contain both labels $\{1,2\}$ and at most $3$ vertices contain neither label $1$ nor label $2$, and we have double counted the vertices $\{1,4,5\}$ and $\{2,4,5\}$. The Type 2 path in $C_2$ can then be $\{1,4,x\}$, $\{2,5,y\}$, $\{1,4,z\}$, where $x\neq y\neq z$ and $x,y,z \in \{3,6,7,8\}$; based on the proof of Claim 3, there are at most $6n-18=30$ vertices in $C_1$. We have double counted the $14$ vertices $\{1,2,4\}$, $\{1,2,5\}$, $\{1,2,6\}$, $\{1,3,5\}$, $\{1,4,5\}$, $\{1,4,y\}$, $\{1,5,6\}$, $\{1,5,7\}$, $\{1,5,8\}$, $\{2,3,4\}$, $\{2,4,5\}$, $\{2,4,6\}$, $\{2,4,7\}$, $\{2,4,8\}$, which appear in both $C_1$ and $C_2$ in our calculation. Meanwhile, $\{3,4,5\}$ can only be in $C_1$ or $S$, while the vertex $\{1,6,7\}$ is either in $C_2$ or $S$. Depending on the choice of $x, y, z$, the vertex $\{1,6,7\}$ could also appear in $C_1$, for example when $x=6,y=7,z=8$. If $\{1,6,7\}$ is either in $C_2$ or $S$, then since $\{3,4,5\}$ and $\{1,6,7\}$ are adjacent, one of them must be in $S$. If $\{1,6,7\}$ is in $C_1$, then we know the size of $C_2$ has to be one less than the maximum possible. The same holds for the pairs $\{4,5,7\}$ and $\{1,6,8\}$, $\{4,5,8\}$ and $\{1,3,6\}$, $\{1,2,8\}$ and $\{3,5,7\}$. Therefore, there are no more than $24+30-14-4=36$ vertices in $C_1 \cup C_2$, and $|G|-\kappa_1=38$ is larger than $36$, so $C_1$ and $C_2$ have to be connected to each other, a contradiction. 
In summary, we have proved that the conjecture is true when $k=3$, and that the bound is achieved only in the case where one of the disconnected components consists of just two vertices joined by an edge. \end{document}
\begin{document} \begin{abstract} For dimensions $N \geq 4$, we consider the Br\'ezis-Nirenberg variational problem of finding \[ S(\epsilon V) := \inf_{0\not\equiv u\in H^1_0(\Omega)} \frac{\int_\Omega |\nabla u|^2 \, dx +\epsilon \int_\Omega V\, |u|^2 \, dx}{\left(\int_\Omega |u|^q \, dx \right)^{2/q}}, \] where $q=\frac{2N}{N-2}$ is the critical Sobolev exponent, $\Omega \subset \mathbb{R}^N$ is a bounded open set and $V:\overline{\Omega}\to \mathbb{R}$ is a continuous function. We compute the asymptotics of $S(0) - S(\epsilon V)$ to leading order as $\epsilon \to 0+$. We give a precise description of the blow-up profile of (almost) minimizing sequences and, in particular, we characterize the concentration points as being extrema of a quotient involving the Robin function. This complements the results from our recent paper in the case $N = 3$. \end{abstract} \maketitle \section{\bf Introduction and main results} \subsection{Setting of the problem} Let $N \geq 4$ and let $\Omega \subset \mathbb{R}^N$ be a bounded open set. For $\epsilon > 0$ and a function $V \in C(\overline{\Omega})$, Br\'ezis and Nirenberg study in their famous paper \cite{BrNi} the quotient functional \begin{equation} \label{var-prob} \mathcal S_{\epsilon V}[u] := \frac{\int_\Omega |\nabla u|^2 \, dx +\epsilon \int_\Omega V\, |u|^2 \, dx}{\left(\int_\Omega |u|^q \, dx \right)^{2/q}}, \qquad q=\frac{2N}{N-2} \, , \end{equation} and the corresponding variational problem of finding \begin{equation} \label{var-prob-inf} S(\epsilon V) := \inf_{0\not\equiv u\in H^1_0(\Omega)} \mathcal S_{\epsilon V}[u] \,. \end{equation} This number is to be compared with $$ S_N = \pi N (N-2) \left(\frac{\Gamma(N/2)}{\Gamma(N)} \right)^{2/N}\, ,$$ the sharp constant \cite{Rod,Ro,Au,Ta} in the Sobolev inequality. Indeed, in \cite{BrNi} it is shown that $S(\epsilon V) < S_N$ as soon as \begin{equation} \label{N-def} \mathcal N(V):= \{x\in \Omega: V(x) < 0\} \end{equation} is non-empty.
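As an independent sanity check on the closed form for $S_N$ (a sketch only, not part of any proof), one can evaluate the Sobolev quotient of the bubble $U(y)=(1+|y|^2)^{-(N-2)/2}$ in closed form via Beta functions and compare it with $\pi N(N-2)\left(\Gamma(N/2)/\Gamma(N)\right)^{2/N}$:

```python
from math import gamma, pi

def omega(N):
    # surface area of the unit sphere S^{N-1} in R^N
    return 2 * pi ** (N / 2) / gamma(N / 2)

def sobolev_constant(N):
    # the stated closed form: S_N = pi N (N-2) (Gamma(N/2)/Gamma(N))^(2/N)
    return pi * N * (N - 2) * (gamma(N / 2) / gamma(N)) ** (2 / N)

def bubble_quotient(N):
    # Sobolev quotient of U(y) = (1+|y|^2)^(-(N-2)/2); the radial integrals
    # reduce to Beta functions:
    #   int |grad U|^2 = (N-2)^2 omega_N (1/2) B(N/2+1, N/2-1),
    #   int U^q       = omega_N (1/2) B(N/2, N/2),   q = 2N/(N-2).
    q = 2 * N / (N - 2)
    num = (N - 2) ** 2 * omega(N) * 0.5 * gamma(N / 2 + 1) * gamma(N / 2 - 1) / gamma(N)
    den = (omega(N) * 0.5 * gamma(N / 2) ** 2 / gamma(N)) ** (2 / q)
    return num / den

for N in range(4, 11):
    assert abs(bubble_quotient(N) - sobolev_constant(N)) < 1e-10 * sobolev_constant(N)
```

The agreement for each $N$ reflects the fact that the bubbles are the optimizers of the Sobolev inequality.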
This behavior is in stark contrast to the case $N = 3$ also treated in \cite{BrNi}, where there is an $\epsilon_V>0$ such that $S(\epsilon V) = S_N$ for all $\epsilon \in (0, \epsilon_V]$ even if $\mathcal N (V)$ is non-empty. The purpose of this paper is, for $N \geq 4$, to describe the asymptotics of $S_N- S(\epsilon V)$ to leading order as $\epsilon \to 0$, as well as the asymptotic behavior of corresponding (almost) minimizing sequences and, in particular, their concentration behavior. This is the higher-dimensional complement to our recent paper \cite{FrKoKo}, where analogous results are shown in the more difficult case $N = 3$. \textbf{Notation. } To prepare the statement of our main results, we now introduce some key objects for the following analysis. An important role is played by the Green's function of the Dirichlet Laplacian on $\Omega$, which in the sense of distributions satisfies, in the normalization of \cite{rey2}, \begin{equation} \label{G-pde} \left\{ \begin{array}{l@{\quad}l} -\Delta_x\, G(x,y) = (N-2)\, \omega_N\, \delta_y & \quad \text{in} \ \ \Omega , \\ & \\ G(x,y) = 0 & \quad \text{on} \ \ \partial\Omega, \end{array} \right. \end{equation} where $\omega_N$ is the surface area of the unit sphere in $\mathbb{R}^N$, and $\delta_y$ denotes the Dirac delta function centered at $y$. We denote by \begin{equation} \label{h-function} H(x,y) = \frac{1}{|x-y|^{N-2}} - G(x,y) \end{equation} the regular part of $G$. The function $H(x, \cdot)$, defined on $\Omega \setminus \{x\}$, extends to a continuous function on $\Omega$ and we may define the \emph{Robin function} \begin{equation} \label{phi-function} \phi(x) := H(x,x) \,.
\end{equation} Using this function, we define the numbers \begin{align*} \sigma_N(\Omega, V) & := \sup_{x\in\mathcal N(V)} \left( \phi(x)^{-\frac{2}{N-4}}\ |V(x)|^{\frac{N-2}{N-4}} \right), & N \geq 5 \, ,\\ \sigma_4(\Omega, V) & := \sup_{x\in\mathcal N(V)} \left( \phi(x)^{-1} |V(x)|\right) , & N = 4 \,, \end{align*} which will turn out to be essentially the coefficients of the leading order term in $S_N- S(\epsilon V)$. Another central role is played by the family of functions \begin{equation} \label{u-function} U_{x,\lambda} (y) = \frac{\lambda^{(N-2)/2}}{(1+\lambda^2 |x-y|^2 )^{(N-2)/2}}\, , \quad x\in \mathbb{R}^N, \, \lambda > 0. \end{equation} It is well-known that the $U_{x, \lambda}$ are exactly the optimizers of the Sobolev inequality on $\mathbb{R}^N$. Since \eqref{var-prob} is a perturbation of the Sobolev quotient, it is reasonable to expect the $U_{x,\lambda}$ to be nearly optimal functions for \eqref{var-prob-inf}. However, since \eqref{var-prob-inf} is set on $H^1_0(\Omega)$, we consider, as in \cite{BaCo}, the functions $PU_{x, \lambda} \in H^1_0(\Omega)$ uniquely determined by the properties \begin{equation} \label{eq-pu} -\Delta PU_{x,\lambda} = -\Delta U_{x,\lambda}\ \ \ \text{ in } \Omega, \qquad PU_{x,\lambda} = 0 \ \ \ \text{ on } \partial \Omega \,. \end{equation} Moreover, let $$ T_{x, \lambda} := \text{ span}\, \big\{ PU_{x, \lambda}, \partial_\lambda PU_{x, \lambda}, \partial_{x_i} PU_{x, \lambda}\, (i=1,2,\ldots, N) \big\} $$ and let $T_{x, \lambda}^\perp$ be the orthogonal complement of $T_{x,\lambda}$ in $H^1_0(\Omega)$ with respect to the inner product $\int_\Omega \nabla u \cdot \nabla v\,dy$. In what follows we denote by $\|\cdot\|$ the $L^2$-norm on $\Omega$. Finally, given a set $X$ and two functions $f_1,\, f_2: X\to\mathbb{R}$, we write $f_1 \lesssim f_2$ if there exists a numerical constant $c$ such that $f_1(x) \leq c\, f_2(x)$ for all $x\in X$.
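The bubbles \eqref{u-function} solve $-\Delta U_{x,\lambda} = N(N-2)\,U_{x,\lambda}^{q-1}$ on $\mathbb{R}^N$ (this is \eqref{u-eq} below). A quick finite-difference spot-check of this radial identity, a sketch only, with $x=0$, $\lambda=1$ and arbitrary sample radii and step size:

```python
def U(r, N):
    # the bubble from the family above with x = 0, lambda = 1, r = |y|
    return (1 + r * r) ** (-(N - 2) / 2)

def radial_laplacian(f, r, N, h=1e-4):
    # Delta f = f'' + (N-1)/r f' for a radial function f on R^N,
    # approximated by central differences with step h
    f2 = (f(r + h) - 2 * f(r) + f(r - h)) / h ** 2
    f1 = (f(r + h) - f(r - h)) / (2 * h)
    return f2 + (N - 1) / r * f1

for N in (4, 5, 6):
    q = 2 * N / (N - 2)
    for r in (0.5, 1.0, 2.0):
        lhs = -radial_laplacian(lambda s: U(s, N), r, N)
        rhs = N * (N - 2) * U(r, N) ** (q - 1)
        assert abs(lhs - rhs) < 1e-4 * rhs
```

The check confirms, at the sampled radii, that the bubbles are critical points of the Sobolev quotient.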
\subsection{Main results} Throughout this paper and without further mention we assume that the following properties are satisfied. \begin{assumption} The set $\Omega \subset \mathbb{R}^N$, $N \geq 4$, is open and bounded and has a $C^2$ boundary. Moreover, $V \in C(\overline{\Omega})$ and $\mathcal N(V) \neq \emptyset$, with $\mathcal N(V)$ given by \eqref{N-def}. \end{assumption} Here is our first main result. It gives the asymptotics of $S_N - S(\epsilon V)$ to leading order in $\epsilon$. \begin{thm} \label{thm expansion} As $\epsilon\to 0+$, we have \begin{equation} \label{eq-thm ngeq5} S(\epsilon V) = S_N - C_N\, \sigma_N(\Omega, V)\ \epsilon^{\frac{N-2}{N-4}} + o(\epsilon^{\frac{N-2}{N-4}}) \qquad\qquad\ \text{\rm if} \ N \geq 5 \end{equation} and \begin{equation} \label{eq-thm n4} S(\epsilon V) = S_4 - \exp\Big( - \frac{4}{\epsilon} \left(1 +o(1)\right) \sigma_4(\Omega, V)^{-1} \Big) \qquad \qquad \text{\rm if} \ N = 4. \end{equation} Here the constants $C_N$ are defined in \eqref{cn} below. \end{thm} Our second main result shows that the blow-up profile of an arbitrary almost minimizing sequence $(u_\epsilon)$ is given to leading order by the family of functions $PU_{x, \lambda}$. Moreover, we give a precise characterization of the blow-up speed $\lambda = \lambda_\epsilon$ and of the point $x_0$ around which the $u_\epsilon$ concentrate. \begin{thm} \label{thm-minimizers} Let $(u_\epsilon)\subset H^1_0(\Omega)$ be a family of functions such that \begin{equation} \label{appr-min} \lim_{\epsilon\to 0} \frac{\mathcal S_{\epsilon V}[u_\epsilon] - S(\epsilon V)}{S_N-S(\epsilon V)} = 0 \qquad \text{and} \qquad \int_\Omega |u_\epsilon|^q \,dx = \left( \frac {S_N}{N(N-2)}\right)^{\frac{q}{q-2}} \,.
\end{equation} Then there are $(x_\epsilon)\subset\Omega$, $(\lambda_\epsilon)\subset(0,\infty)$, $(\alpha_\epsilon)\subset\mathbb{R}$ and $(w_\epsilon) \subset H^1_0(\Omega)$ with $w_\epsilon \in T_{x_\epsilon, \lambda_\epsilon}^\perp$ such that \begin{equation} \label{u-eps-final} u_\epsilon = \alpha_\epsilon \left( PU_{x_\epsilon, \lambda_\epsilon} + w_\epsilon \right) \end{equation} and, along a subsequence, $x_\epsilon \to x_0$ for some $x_0\in \mathcal N(V)$. Moreover, \begin{align*} & \begin{cases} \phi(x_0)^{-\frac{2}{N-4}}\ |V(x_0)|^{\frac{N-2}{N-4}} = \sigma_N(\Omega, V) , & N \geq 5, \\ \phi(x_0)^{-1}|V(x_0)| = \sigma_4(\Omega, V) \,, & N = 4, \end{cases} \\ & \begin{cases} \|\nabla w_\epsilon\|=o(\epsilon^{\frac{N-2}{2N-8}}), & N \geq 5, \\ \|\nabla w_\epsilon\|\leq \exp\Big( - \frac{2}{\epsilon} \left(1 +o(1)\right) \sigma_4(\Omega, V)^{-1} \Big) , & N = 4, \end{cases} \\ & \begin{cases} \lim_{\epsilon \to 0}\, \epsilon \, \lambda_\epsilon^{N-4} = \frac{N\, (N-2)^2\, a_N \,\phi(x_0)}{2 \,b_N\, |V(x_0)|} \,, & N \geq 5, \\ \lim_{\epsilon \to 0}\, \epsilon \, \ln \lambda_\epsilon = \frac{2\, \phi(x_0)}{ |V(x_0)|}\, , & N = 4, \end{cases} \\ & \begin{cases} \alpha_\epsilon = s \left( 1 + D_N \epsilon^{\frac{N-2}{N-4}} + o(\epsilon^{\frac{N-2}{N-4}}) \right), & N \geq 5, \\ \alpha_\epsilon = s \left(1 + \exp\Big( - \frac{4}{\epsilon} \left(1 +o(1)\right) \sigma_4(\Omega, V)^{-1} \Big) \right) , & N=4, \end{cases} \qquad\text{for some}\ s\in\{\pm 1\} \,. \end{align*} Here the constants $a_N$, $b_N$ and $D_N$ are defined in \eqref{anbn} and \eqref{dn} below.
\end{thm} The coefficients appearing in Theorems \ref{thm expansion} and \ref{thm-minimizers} are \begin{equation} \label{anbn} a_N := \int_{\mathbb{R}^N} \frac{ dz}{(1+z^2)^{(N+2)/2}}, \qquad b_N := \begin{cases} \int_{\mathbb{R}^N} \frac{ dz}{(1+z^2)^{N-2}}, & N \geq 5, \\ \omega_4, & N = 4, \end{cases} \end{equation} as well as \begin{equation} \label{cn} C_N := S_N^{\frac{2-N}{2}}\, (N(N-2))^{\frac{N-2}{2}}\ \frac{N-4}{N-2} \, \left(\frac{N (N-2)^2}{2}\right)^{\frac{2}{4-N}}\, a_N^{-\frac{2}{N-4}}\, b_N^{\frac{N-2}{N-4}}, \qquad N \geq 5, \end{equation} and \begin{equation} \label{dn} D_N := a_N ^{-\frac{2}{N-4}} b_N ^{\frac{N-2}{N-4}} S_N^{-\frac{N}{2}} \left( N(N-2) \right)^{\frac{N}{2} - \frac{N-2}{N-4}} \left( \frac{N-2}{2} \right)^{-\frac{N-2}{N-4}}\,, \qquad N \geq 5. \end{equation} A simple computation using beta functions yields the numerical values \[ a_N = \frac{\omega_N}{N}, \quad N \geq 4, \quad \quad \text{ and } \quad \quad b_N = \omega_N \, \frac{\Gamma\left(\frac N2\right)\, \Gamma\left(\frac N2-2\right)}{2\, \Gamma(N-2)}, \quad N \geq 5. \] \subsection{Discussion} Let us put our main results, Theorems \ref{thm expansion} and \ref{thm-minimizers}, into perspective with respect to existing results in the literature. Of course, minimizers of the variational problem \eqref{var-prob-inf} satisfy the corresponding Euler--Lagrange equation. It is natural to study general positive solutions of this equation, even if they do not arise as minimizers of \eqref{var-prob-inf}. In the special case where $V$ is a negative constant, Br\'ezis and Peletier \cite{BrPe} discussed the concentration behavior of such general solutions and made some conjectures, which were later proved by Han \cite{Ha} and Rey \cite{rey1}. One could probably use their precise concentration results to give an alternative proof of our main results in the special case where $V$ is constant, and perhaps even extend the analysis of Han and Rey to the case of non-constant $V$.
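The beta-function values of $a_N$ and $b_N$ stated above can be cross-checked by direct numerical quadrature of the defining radial integrals; this is only a sketch, and the substitution $r=t/(1-t)$ and the step count are ad hoc choices:

```python
from math import gamma, pi

def omega(N):
    # surface area of the unit sphere S^{N-1} in R^N
    return 2 * pi ** (N / 2) / gamma(N / 2)

def radial_integral(N, power, steps=100000):
    # omega_N * int_0^infty r^(N-1) (1+r^2)^(-power) dr, computed with the
    # midpoint rule after the substitution r = t/(1-t), t in (0,1)
    h = 1.0 / steps
    total = 0.0
    for i in range(steps):
        t = (i + 0.5) * h
        r = t / (1 - t)
        total += r ** (N - 1) * (1 + r * r) ** (-power) / (1 - t) ** 2
    return omega(N) * total * h

# a_N = omega_N / N for all N >= 4
for N in (4, 5, 6):
    assert abs(radial_integral(N, (N + 2) / 2) - omega(N) / N) < 1e-3

# b_N = omega_N * Gamma(N/2) Gamma(N/2 - 2) / (2 Gamma(N - 2)) for N >= 5
for N in (5, 6):
    closed = omega(N) * gamma(N / 2) * gamma(N / 2 - 2) / (2 * gamma(N - 2))
    assert abs(radial_integral(N, N - 2) - closed) < 1e-3 * closed
```

The tolerances are loose because the midpoint rule, not the quadrature itself, is the point of the check.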
Our approach here is different and, we believe, simpler for the problem at hand. We work directly with the variational problem \eqref{var-prob-inf} and \emph{not} with the Euler--Lagrange equation. Therefore, our concentration results are not only true for minimizers but even for `almost minimizers' in the sense of \eqref{appr-min}. We believe that this is interesting in its own right. On the other hand, a disadvantage of our method compared to the Han--Rey method is that it gives concentration results only in $H^1$ norm and not in $L^\infty$ norm and that it is restricted to energy minimizing solutions of the Euler--Lagrange equation. In the special case where $V$ is a negative constant, our results are very similar to results obtained by Takahashi \cite{Tak}, who combined elements from the Han--Rey analysis (see, e.g., \cite[Equation (2.4) and Lemma 2.6]{Tak}) with variational ideas adapted from Wei's treatment \cite{We} of a closely related problem; see also \cite{FlWe}. Takahashi obtains the energy asymptotics in Theorem \ref{thm expansion} as well as the characterization of the concentration point and the concentration scale in Theorem \ref{thm-minimizers} under the assumption that $u_\epsilon$ is a minimizer for \eqref{var-prob-inf}. Thus, in our paper we generalize Takahashi's results to non-constant $V$ and to almost minimizing sequences and we give an alternative, self-contained proof which does not rely on the works of Han and Rey. The present work is a companion paper to \cite{FrKoKo}, relying on the techniques developed there in the three dimensional case. In particular, Theorems \ref{thm expansion} and \ref{thm-minimizers} should be compared with \cite[Theorems 1.3 and 1.7]{FrKoKo}, respectively. Although the expansions for $N \geq 4$ have the same structure as in the case $N = 3$, the latter case is more involved.
In fact, when $N = 3$, the coefficient of the leading order term, namely the term of order $\epsilon$, vanishes and one has to expand the energy to the next order, namely $\epsilon^2$. Besides the extensions of known results that we achieve here, we also think it is worthwhile from a methodological point of view to present our arguments again in the conceptually easier case $N\geq 4$. In the three-dimensional case the basic technique is iterated twice, which to some extent obscures the underlying simple idea. Moreover, we hope our work sheds some new light on the similarities and discrepancies between the two cases. The structure of this paper is as follows. In Section \ref{sec upperbd} we prove the upper bound from Theorem \ref{thm expansion} by inserting the $PU_{x, \lambda}$ as test functions. The proof of the corresponding lower bound is prepared in Sections \ref{sec lowerbd pre} and \ref{sec lowerbd exp}, where we derive a crude asymptotic expansion for a general almost minimizing sequence $(u_\epsilon)$ and the corresponding expansion of $\mathcal S_{\epsilon V}[u_\epsilon]$. Section \ref{sec pfmain} contains the proof of Theorems \ref{thm expansion} and \ref{thm-minimizers}. A crucial ingredient there is the coercivity inequality \eqref{eq-rey} from \cite{rey2}, which allows us to estimate the remainder terms and to refine the aforementioned expansion of $u_\epsilon$. Finally, an appendix contains two auxiliary technical results. \section{\bf Upper bound} \label{sec upperbd} The computation of the upper bound to $S(\epsilon V)$ uses the functions $PU_{x, \lambda}$, with suitably chosen $x$ and $\lambda$, as test functions. The following theorem gives a precise expansion of the value $\mathcal S_{\epsilon V}[PU_{x, \lambda}]$. To state it, we introduce the distance to the boundary of $\Omega$ as $$ d(x) = \text{dist}(x,\partial\Omega), \qquad x\in\Omega. 
$$ \begin{thm} \label{thm expansion PU} Let $x = x_\lambda$ be a sequence of points such that $d(x) \lambda \to \infty$. Then as $\lambda \to \infty$, we have \begin{align} \int_\Omega | \nabla PU_{x, \lambda}|^2 \, dy &= N(N-2) \left(\frac{S_N}{N(N-2)} \right) ^\frac{q}{q-2} + N(N-2) \, a_N\, \lambda^{2-N}\, \phi(x) + \mathcal O((d(x)\lambda)^{\frac{4}{3}-N})\, , \label{exp-NablaPU} \\ \int_\Omega V PU_{x, \lambda}^2 \, dy &= \begin{cases} \lambda^{-2}\, b_N\, V(x) + \mathcal{O}\left((d(x) \lambda)^{2-N} \right) + o(\lambda^{-2}) , & N \geq 5, \label{exp-epsVPU}\\ \frac{\log \lambda}{\lambda^2}\ b_4\, V(x) + \mathcal{O}\left((d(x) \lambda)^{-2}\right) + o\left(\frac{\log \lambda}{\lambda^2}\right) & N = 4, \end{cases} \end{align} and \begin{equation} \label{exp-PUq} \int_\Omega | PU_{x, \lambda}|^q \, dy =\left( \frac{S_N}{N(N-2)} \right) ^\frac{q}{q-2} - q\, a_N\, \lambda^{2-N}\, \phi(x) + o ((d(x) \lambda)^{2-N}). \end{equation} \noindent In particular, as $\lambda \to \infty$, \begin{equation} \label{exp-quot-PU} \mathcal S_{\epsilon V}[PU_{x, \lambda}] = \begin{cases} S_N \!+ \!\left( \!\frac{S_N}{N(N-2)}\!\right)^{\frac{2}{2-q}}\!\! \left( \!\frac{N(N-2)\, a_N \, \phi(x)}{\lambda^{N-2}} +b_N\, \epsilon\, \frac{V(x)}{\lambda^{2}} \!\right) + o ((d(x)\lambda)^{2-N}) + o (\epsilon \lambda^{-2}) , & N \geq 5, \\ S_4 + \frac {8}{S_4} \left( \frac{8\, a_4 \, \phi(x)}{\lambda^{2}} +b_4\, \epsilon\, \frac{V(x)\, \log\lambda}{\lambda^{2}} \right) + o ((d(x)\lambda)^{-2}) + o (\epsilon \frac{\log \lambda}{\lambda^2}), & N = 4. \end{cases} \end{equation} \end{thm} In view of Proposition \ref{prop-app-min} below, the assumption $d(x) \lambda \to \infty$ in Theorem \ref{thm expansion PU} is no restriction, even when dealing with general almost minimizing sequences. 
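In the proof of the corollary below, the upper bound for $N \geq 5$ comes from minimizing $\lambda \mapsto C_1\lambda^{2-N} - C_2\,\epsilon\,\lambda^{-2}$, where $C_1 = N(N-2)\,a_N\,\phi(z_0)$ and $C_2 = b_N\,|V(z_0)|$. The following sketch, with arbitrary placeholder constants, numerically confirms that the minimum is attained at $\lambda^{N-4} = (N-2)C_1/(2C_2\epsilon)$, in agreement with \eqref{lambda-eps}:

```python
def minimizer_check(N, C1, C2, eps):
    # f(lam) = C1 lam^(2-N) - C2 eps lam^(-2) for N >= 5; the claimed
    # unique interior minimum is at lam^(N-4) = (N-2) C1 / (2 C2 eps)
    lam_star = ((N - 2) * C1 / (2 * C2 * eps)) ** (1.0 / (N - 4))
    def f(lam):
        return C1 * lam ** (2 - N) - C2 * eps * lam ** (-2)
    # f must be strictly larger at perturbed points around lam_star
    for factor in (0.9, 0.99, 1.01, 1.1):
        assert f(lam_star * factor) > f(lam_star)
    return lam_star

# placeholder constants; in the proof C1 = N(N-2) a_N phi(z_0), C2 = b_N |V(z_0)|
for N in (5, 6, 7):
    minimizer_check(N, C1=3.7, C2=1.2, eps=1e-3)
```

Since $f$ blows up as $\lambda \to 0+$ and tends to $0$ from below as $\lambda \to \infty$, the unique critical point found above is indeed the global minimum.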
\begin{cor} \label{cor-upperb} As $\epsilon\to 0+$, we have \begin{equation} \label{eq-upperb1} S(\epsilon V) \leq S_N - C_N\, \sigma_N(\Omega, V)\ \epsilon^{\frac{N-2}{N-4}} + o(\epsilon^{\frac{N-2}{N-4}}) \qquad\qquad\ \text{\rm if} \ N \geq 5 \end{equation} and \begin{equation} \label{eq-upperb2} S(\epsilon V) \leq S_4 - \exp\Big( - \frac{4}{\epsilon} \left(1 +o(1)\right) \sigma_4(\Omega, V)^{-1} \Big) \qquad \qquad \text{\rm if} \ N = 4. \end{equation} \end{cor} \begin{proof} [Proof of Corollary \ref{cor-upperb}] By \cite[(2.8)]{rey2}, we have \begin{equation} \label{phi near bdry} d(x)^{2-N} \lesssim \phi(x) \lesssim d(x) ^{2-N}. \end{equation} (Note that this bound uses the $C^2$ assumption on $\Omega$.) Since, moreover, $V=0$ on $\partial\mathcal N(V)\setminus\partial\Omega$, the function $\phi^{-\frac{2}{N-4}}\ |V|^{\frac{N-2}{N-4}}$ can be extended to a continuous function on $\overline{\mathcal N(V)}$ which vanishes on $\partial\mathcal N(V)$. Thus there is $z_0 \in \mathcal N(V)$ such that \begin{equation} \label{z0-1} \sigma_N(\Omega, V) = \phi(z_0)^{-\frac{2}{N-4}}\ |V(z_0)|^{\frac{N-2}{N-4}}, \qquad N \geq 5 . \end{equation} The corollary for $N \geq 5$ now follows by choosing $x = z_0$ in \eqref{exp-quot-PU} and optimizing the quantity $\frac{N(N-2)\, a_N \, \phi(z_0)}{\lambda^{N-2}} +b_N\, \epsilon\, \frac{V(z_0)}{\lambda^{2}}$ in $\lambda$. The optimal choice is \begin{equation} \label{lambda-eps} \lambda(\epsilon) = \left(\frac{N\, (N-2)^2\, a_N \,\phi(z_0)}{2 \,b_N\, |V(z_0)|}\right)^{\frac{1}{N-4}}\, \epsilon^{-\frac{1}{N-4}} \, , \end{equation} and \eqref{eq-upperb1} follows from a straightforward computation. Similarly, if $N = 4$, since $\frac{\phi(y)}{|V(y)|}$ is a positive continuous function on $\mathcal N(V)$ which goes to $+\infty$ as $y \to \partial \mathcal N(V)$, it attains its infimum at some $z_0 \in \mathcal N(V)$, so that \begin{equation} \label{z0-2} \sigma_4(\Omega, V)^{-1} = \frac{\phi(z_0)}{|V(z_0)|} \, .
\end{equation} Thus we may choose $x = z_0$ in \eqref{exp-quot-PU} and optimize the quantity $A \lambda^{-2} - B \epsilon \lambda^{-2} \log \lambda$ in $\lambda > 0$, where $A = 8\, a_4 \, \phi(z_0) + o(1)$ and $B = b_4\, |V(z_0)| + o(1)$. The optimal choice is \begin{equation} \label{lambda-eps-2} \lambda(\epsilon) = \sqrt{e} \exp \left(\frac{A}{B \epsilon} \right). \end{equation} Inserting this into \eqref{exp-quot-PU}, we get \begin{align*} S(\epsilon V) \leq \mathcal S_{\epsilon V}[PU_{x, \lambda(\epsilon)}] &= S_4 - \frac{4 b_4}{e S_4} \epsilon |V(z_0)| \exp\left(- \frac{16\, a_4 \, (\phi(z_0) + o(1))}{\left(b_4\, |V(z_0)| + o(1)\right) \epsilon}\right) \\ &= S_4 - \exp\Big( - \frac{4}{\epsilon} \left(1 +o(1)\right) \inf_{x\in\mathcal N(V)} \frac{\phi(x)}{|V(x)|} \Big)\,, \end{align*} where we have used the fact that \begin{equation} \label{eq-revised} \epsilon\, b \exp\Big(-\frac{a}{\epsilon}\Big) = \exp\Big(-\frac{a}{\epsilon} +o\Big(\frac{1}{\epsilon} \Big)\Big) , \qquad \epsilon\to 0+ \end{equation} holds for all $a\geq 0$ and all $b>0$. This completes the proof of \eqref{eq-upperb2}, and thus of Corollary \ref{cor-upperb}. \end{proof} \begin{proof} [Proof of Theorem \ref{thm expansion PU}] We prove equations \eqref{exp-NablaPU}--\eqref{exp-PUq} separately. Then expansion \eqref{exp-quot-PU} follows by a straightforward Taylor expansion of the quotient functional $\mathcal S_{\epsilon V}[PU_{x, \lambda}]$. \emph{Proof of \eqref{exp-NablaPU}. } Since the $U_{x, \lambda}$ satisfy the equation \begin{equation} \label{u-eq} -\Delta_y U_{x,\lambda}(y) = N(N-2)\, U_{x,\lambda}(y)^{q-1}, \quad y \in \mathbb{R}^N, \end{equation} it follows using integration by parts that $$ \int_\Omega | \nabla P U_{x, \lambda}|^2 \, dy = N(N-2) \int_\Omega U^{q-1}_{x, \lambda} P U_{x, \lambda}\, dy .
$$ On the other hand, by \cite[Prop.~1]{rey2} we know that \begin{equation} \label{u-split} P U_{x, \lambda} = U_{x, \lambda} -\varphi_{x,\lambda}, \qquad \varphi_{x,\lambda} = \frac{H(x,\cdot)}{\lambda^{(N-2)/2}} + f_{x,\lambda}, \end{equation} where \begin{equation} \label{sup-f} \|f_{x,\lambda}\|_{L^\infty(\Omega)} = \mathcal{O} \left( \lambda^{-(N+2)/2}\, d(x)^{-N}\right) , \qquad \lambda\to\infty. \end{equation} By putting the above equations together we obtain \begin{equation} \label{eq-1} \int_\Omega | \nabla PU_{x, \lambda}|^2 \, dy = N(N-2) \left( \int_\Omega U^{q}_{x, \lambda} \, dy -\lambda^{\frac{2-N}{2}} \int_\Omega U^{q-1}_{x, \lambda} \, H(x, \cdot) \, dy - \int_\Omega U^{q-1}_{x, \lambda}\, f_{x,\lambda} \, dy \, \right). \end{equation} A direct calculation shows that \begin{equation} \label{uq} \int_\Omega U_{x, \lambda}^q \, dy = \int_{\mathbb{R}^N} U_{x, \lambda}^q \, dy + \mathcal{O}((d(x)\lambda)^{-N}) = \left( \frac{S_N}{N(N-2)}\right)^{\frac{q}{q-2}} + \mathcal{O}((d(x)\lambda)^{-N}). \end{equation} Moreover, for any $x\in\Omega$ we have \begin{equation}\label{sup-h} d(x)^{2-N} \ \lesssim\ \| H(x, \cdot) \|_{L^\infty(\Omega)} \ \lesssim\ d(x)^{2-N} \end{equation} and \begin{equation}\label{sup-h'} \sup_{y\in\Omega} | \nabla_y H(x, y) | \ \lesssim\ d(x)^{1-N}, \end{equation} see \cite[Sec.~2 and Appendix]{rey2}. Now let $\rho \in (0, \frac{d(x)}{2})$. 
A direct calculation using \eqref{u-function}, \eqref{sup-h} and \eqref{sup-h'} shows that \begin{align*} \int_{B_\rho(x)} U^{q-1}_{x, \lambda} \, H(x, \cdot) \, dy &= \lambda^{1+\frac N2}\, \big(\phi(x)+\mathcal{O}(\rho\, d(x)^{1-N})\big) \int_{B_\rho(x)}\frac{dy}{(1+\lambda^2|x-y|^2)^{(N+2)/2} } \\ & = \lambda^{1-\frac N2}\, a_N\, \left(\phi(x)+\mathcal{O}(\rho\, d(x)^{1-N})\right) ( 1+\mathcal{O}((\lambda\, \rho)^{-2})) \end{align*} and \begin{align*} \int_{\Omega \setminus B_\rho(x)} U^{q-1}_{x, \lambda} \, H(x, \cdot) \, dy &= \lambda^{1+\frac N2}\, \mathcal{O}(d(x)^{2-N}) \int_\rho^\infty \frac{r^{N-1}\, dr}{(1+\lambda^2\, r^2)^{\frac{N+2}{2}}} \\ & = \lambda^{1-\frac N2}\, \mathcal{O}(d(x)^{2-N}) \int_{\rho\lambda}^\infty \frac{t^{N-1}\, dt}{(1+ t^2)^{\frac{N+2}{2}}} \\ & = \lambda^{1-\frac N2}\, \mathcal{O}\left(d(x)^{2-N}\, (\lambda\, \rho)^{-2}\right). \end{align*} Hence for the second term on the right hand side of \eqref{eq-1} we get \begin{align} \label{u5h} \lambda^{\frac{2-N}{2}} \int_\Omega U^{q-1}_{x, \lambda} \, H(x, \cdot) \, dy & = \lambda^{2-N}\, a_N\, \phi(x) + \lambda^{2-N} \, \mathcal{O}\left(\rho \, d(x)^{1-N}\right) + \lambda^{2-N} \, \mathcal{O}\left(d(x)^{2-N}\, (\lambda\, \rho)^{-2}\right). \end{align} As for the last term on the right hand side of \eqref{eq-1}, we note that in view of \eqref{sup-f} $$ \left | \int_\Omega U^{q-1}_{x, \lambda}\, f_{x,\lambda} \, dy \, \right | \, \leq \|f_{x,\lambda}\|_{L^\infty(\Omega)} \int_{\mathbb{R}^N} U^{q-1}_{x, \lambda} \, dy= \|f_{x,\lambda}\|_{L^\infty(\Omega)} \, a_N\, \lambda^{1-\frac N2} = \mathcal{O}\left( (\lambda\, d(x))^{-N}\right). $$ The claim thus follows from \eqref{eq-1} by choosing $\rho = d(x)^{1/3} \lambda^{-2/3}$ in \eqref{u5h}. (Notice that $\rho = d(x) (d(x) \lambda)^{-2/3} \leq \frac{d(x)}{2}$ for $\lambda$ large enough.) \emph{Proof of \eqref{exp-epsVPU}.
} We have \begin{equation} \label{eq-V} \int_\Omega V\, PU_{x, \lambda}^2 \, dy = \int_\Omega V\, U_{x, \lambda}^2 \, dy + \int_\Omega V\, (\varphi_{x, \lambda}^2- 2\ U_{x, \lambda}\, \varphi_{x, \lambda} )\, dy \, . \end{equation} By \cite[Prop.~1]{rey2}, \begin{equation}\label{eq-rey-2} 0 \, \leq \, \varphi_{x,\lambda}(y) \, \leq\, U_{x,\lambda}(y) \qquad \forall\ y\in\Omega, \end{equation} which, together with \eqref{u-split}, \eqref{sup-f} and \eqref{sup-h}, yields the following upper bound on the last integral in \eqref{eq-V}, \begin{align*} \Big | \int_\Omega V\, (\varphi_{x, \lambda}^2- 2\ U_{x, \lambda}\, \varphi_{x, \lambda} ) \, dy \, \Big| & \leq 2\, \|V\|_{L^\infty(\Omega)}\, \|\varphi_{x, \lambda}\|_{L^\infty(\Omega)} \int_\Omega U_{x, \lambda} \, dy = \mathcal{O}\left((d(x) \lambda)^{2-N} \right) \, . \end{align*} To treat the first term on the right hand side of \eqref{eq-V}, first assume $N \geq 5$. Choose a sequence $\rho = \rho_\lambda$ such that $\rho \leq d(x)$, $\rho \to 0$ and $\rho \lambda \to \infty$ as $\lambda \to \infty$. (This is always possible, whether or not $d(x) \to 0$.) Then, by continuity of $V$, \begin{align*} \int_\Omega V\, U_{x, \lambda}^2 \, dy &= (V(x)+ o(1)) \int_{B_\rho(x)} U_{x, \lambda}^2 \, dy + \int_{\Omega \setminus B_\rho(x)} V \, U_{x, \lambda}^2 \, dy \\ &= \lambda^{-2}\, b_N\, V(x) + o(\lambda^{-2}) + \mathcal O \left( \int_{\Omega \setminus B_\rho(x)} U_{x, \lambda}^2 \, dy \right) \\ & = \lambda^{-2}\, b_N\, V(x) + o(\lambda^{-2}) + \mathcal O \left(\lambda^{-2} (\rho \lambda)^{-N +4} \right) = \lambda^{-2}\, b_N\, V(x) + o(\lambda^{-2}). \end{align*} Similarly, in the case $N = 4$ we let $B_\tau(x)$ and $B_R(x)$ be two balls centered at $x$ with radii $\tau$ and $R$ chosen such that $B_\tau(x) \subset \Omega \subset B_R(x)$ and split the last integral into two parts as follows.
Extending $V$ by zero to $B_R(x) \setminus \Omega$ we get \begin{align} \int_{\Omega\setminus B_\tau(x)} V\, U_{x, \lambda}^2 \, dy & = \int_{B_R(x)\setminus B_\tau(x)} V\, U_{x, \lambda}^2 \, dy \leq \, \omega_4 \, \|V\|_{L^\infty(\Omega)} \int_{\tau}^R \frac{\lambda^2}{(1+\lambda^2 r^2)^2}\ r^3\, dr \nonumber \\ & = \omega_4 \, \|V\|_{L^\infty(\Omega)}\, \lambda^{-2} \int_{\tau\lambda}^{R\lambda} \frac{t^3}{(1+t^2)^2}\ \, dt = \mathcal{O}(\lambda^{-2}\, \log(R/\tau)) \label{compl}. \end{align} On the other hand, denoting by $o_\tau(1)$ a quantity that vanishes as $\tau\to 0$ and assuming that $\tau\lambda\to \infty$ we get \begin{align*} \int_{B_\tau(x)} V\, U_{x, \lambda}^2 \, dy & =b_4\, V(x)\ \int_0^{\tau} \frac{\lambda^2\, r^3\, dr}{(1+\lambda^2 r^2)^2}\ + o_\tau(1)\, \int_0^{\tau} \frac{\lambda^2\, r^3\, dr}{(1+\lambda^2 r^2)^2} \\ & = b_4 \, \lambda^{-2}\, V(x) \int_0^{\tau\lambda} \frac{ t^3\, dt}{(1+t^2)^2} + \lambda^{-2}\, o_\tau(1)\, \int_0^{\tau\lambda} \frac{ t^3\, dt}{(1+t^2)^2} \\ & = b_4\, \frac{\log \lambda}{\lambda^2}\ V(x) + o_\tau(1)\, \mathcal{O}\left(\frac{\log \lambda}{\lambda^2}\right) + \mathcal{O}\left(\frac{\log \tau}{\lambda^2}\right). \end{align*} By choosing $\tau= \frac{1}{\log \lambda}$ and taking into account \eqref{compl} we arrive at \eqref{exp-epsVPU} in the case $N = 4$. \emph{Proof of \eqref{exp-PUq}. } Recall that $q>2$. Hence from the Taylor expansion of the function $t\mapsto t^q$ on an interval $[0, b]$ it follows that for any $a\in [0,b]$ we have \begin{equation} \label{taylor} | \, b^q -(b-a)^q -q\, b^{q-1}\, a\, | \, \leq \frac{q (q-1)}{2}\ b^{q-2}\, a^2.
\end{equation} Because of \eqref{eq-rey-2} and \eqref{u-split} we can apply \eqref{taylor} with $b=U_{x,\lambda}(y)$ and $a= \varphi_{x,\lambda}(y)$ to obtain the following pointwise upper bound: \begin{align} \label{taylor-1} \big |\, PU_{x, \lambda} ^q - U_{x,\lambda}^q + q\, U_{x,\lambda}^{q-1}\, \varphi_{x,\lambda}\, \, \big | &\ \leq\ \frac{q(q-1)}{2}\ U_{x,\lambda}^{q-2}\, \varphi_{x,\lambda}^2 \end{align} Together with estimate \eqref{eq-b} this gives \begin{align} \left | \int_\Omega \left( PU_{x, \lambda} ^q - U_{x,\lambda}^q + q\, U_{x,\lambda}^{q-1}\, \varphi_{x,\lambda} \right) \, dy \, \right | & = \mathcal{O}\left((d(x)\, \lambda )^{-N}\right)\, . \label{taylor-2} \end{align} On the other hand, the calculations in the proof of \eqref{exp-NablaPU} show that $$ \int_\Omega U_{x,\lambda}^{q-1}\, \varphi_{x,\lambda} \, dy = \lambda^{2-N}\, a_N\, \phi(x) + \mathcal{O}\left((d(x)\lambda)^{\frac43 -N} \right) = \lambda^{2-N}\, a_N\, \phi(x) + o \left((d(x)\lambda)^{2 -N} \right). $$ In view of \eqref{uq} and \eqref{taylor-2} this completes the proof. \end{proof} \section{\bf Lower bound. Preliminaries } \label{sec lowerbd pre} \noindent As a starting point for the proof of the lower bound on $S(\epsilon V)$, we derive a crude asymptotic form of almost minimizers of $\mathcal S_{\epsilon V}$. The following result is essentially well-known. We have recalled the proof in \cite[Appendix B]{FrKoKo} in the case $N = 3$, but the same argument carries over to $N \geq 4$. \begin{prop} \label{prop-app-min} Let $(u_\epsilon)\subset H^1_0(\Omega)$ be a sequence of functions satisfying \begin{equation} \label{eq:appminass} \mathcal S_{\epsilon V}[u_\epsilon] = S_N+o(1) \,, \qquad \int_\Omega |u_\epsilon|^q\,dx = \left( \frac{S_N}{N(N-2)} \right) ^\frac{q}{q-2} \,.
\end{equation} Then, along a subsequence, \begin{equation} \label{u-rey} u_\epsilon = \alpha_\epsilon \left( PU_{x_\epsilon,\lambda_\epsilon} + w_\epsilon \right) , \end{equation} where \begin{equation} \begin{aligned} \label{lim-eps} \alpha_\epsilon & \to s \qquad \text{for some}\ s\in\{-1,+1\} \,,\\ x_\epsilon & \to x_0 \qquad\text{for some}\ x_0\in\overline\Omega \,, \\ \lambda_\epsilon d_\epsilon & \to \infty \,,\\ \| \nabla w_\epsilon \| &\to 0 \qquad\text{and}\qquad w_\epsilon \in T_{x_\epsilon,\lambda_\epsilon}^\bot \,. \end{aligned} \end{equation} Here $d_\epsilon= \text{dist }(x_\epsilon,\partial\Omega)$. \end{prop} \subsection*{Convention} From now on we will assume that \begin{equation} \label{appr-min0} S(\epsilon V)< S_N \qquad\text{for all}\ \epsilon>0 \end{equation} and that $(u_\epsilon)$ satisfies \eqref{appr-min}. In particular, assumption \eqref{eq:appminass} is satisfied. We will always work with a sequence of $\epsilon$'s for which the conclusions of Proposition \ref{prop-app-min} hold. To enhance readability, we will drop the index $\epsilon$ from $\alpha_\epsilon$, $x_\epsilon$, $\lambda_\epsilon$, $d_\epsilon$ and $w_\epsilon$. \section{\bf Lower bound. The main expansion } \label{sec lowerbd exp} In this section we expand $\mathcal S_{\epsilon V}[u_\epsilon]$ by using the decomposition \eqref{u-rey} of $u_\epsilon$. We shall show the following result. \begin{prop} \label{prop-lowerbd} Let $(u_\epsilon)\subset H^1_0(\Omega)$ satisfy \eqref{u-rey} and \eqref{lim-eps}.
Then \begin{align} |\alpha|^{-2} \int_\Omega |\nabla u_\epsilon|^2 \, dy & = \int_\Omega |\nabla P U_{x, \lambda}|^2 \, dy + \int_\Omega |\nabla w|^2 \, dy \, , \label{str-1} \\ |\alpha|^{-q} \int_\Omega |u_\epsilon|^q \, dy & = \int_\Omega P U_{x, \lambda}^q \, dy +\frac{q(q-1)}{2}\, \int_\Omega U_{x, \lambda}^{q-2}\, w^2 \, dy + o\left(\int_\Omega |\nabla w|^2+ (\lambda d)^{2-N}\right) \, , \label{str-2} \\ |\alpha|^{-2} \epsilon \int_\Omega V u_\epsilon^2 \, dy & = \epsilon \int_\Omega V P U_{x, \lambda}^2 \, dy + \mathcal{O}\left(\epsilon \int_\Omega |\nabla w|^2 \, dy + \epsilon \sqrt{\int_\Omega |\nabla w|^2 \, dy}\, \sqrt{ \int_\Omega |V|\, P U_{x, \lambda}^2 \, dy } \right) . \label{str-3} \end{align} In particular, \begin{align} \mathcal S_{\epsilon V}[u_\epsilon] &= \mathcal S_{\epsilon V}[PU_{x, \lambda}] + I[w] + \mathcal{O}\left(\epsilon\, \sqrt{\int_\Omega |\nabla w|^2 \, dy}\ \sqrt{ \int_\Omega |V| P U_{x, \lambda}^2 \, dy} \right) \nonumber \\ & \quad + o\left(\int_\Omega |\nabla w|^2 \, dy + (\lambda d)^{2-N}\right), \label{exp-quot-ueps} \end{align} where \begin{equation} \label{definition I} I[w] := \left(\int_{\Omega} U_{x, \lambda}^q \, dy \right)^{-\frac 2q} \left( \int_\Omega |\nabla w|^2 \, dy - N(N+2) \int_\Omega U_{x, \lambda}^{q-2}\, w^2 \, dy \right) . \end{equation} \end{prop} \begin{proof} We prove equations \eqref{str-1}--\eqref{str-3} separately. Then the expansion \eqref{exp-quot-ueps} follows by a straightforward Taylor expansion of the quotient functional $\mathcal S_{\epsilon V}$, using $\mathcal S_{\epsilon V}[u_\epsilon] =\mathcal S_{\epsilon V}[|\alpha|^{-1} u_\epsilon]$. In the sequel we denote by $c_1, c_2, \dots $ various positive constants which are independent of $\epsilon$. \emph{Proof of \eqref{str-1}. } This follows by \eqref{u-rey} and $w \in T_{x,\lambda}^\bot$. \emph{Proof of \eqref{str-2}. } Recall that $\alpha^{-1} u_\epsilon = U_{x, \lambda} + (w - \varphi_{x, \lambda})$ by \eqref{u-split} and \eqref{u-rey}. 
We use the associated pointwise estimate \begin{align*} & \left | |\alpha|^{-q} |u_\epsilon|^q - U_{x, \lambda}^q - q \, U_{x, \lambda}^{q-1} (w- \varphi_{x, \lambda}) -\frac{q(q-1)}{2}\, U_{x, \lambda}^{q-2} (w-\varphi_{x, \lambda})^2 \right | \\ & \qquad \leq c_1 \left( |w-\varphi_{x, \lambda}|^q + |w- \varphi_{x, \lambda}|^{q-(q-3)_+} U_{x, \lambda}^{(q-3)_+} \right), \end{align*} where $(q-3)_+= \max\{q-3, 0\}$. Using \eqref{taylor-1}, it follows that \begin{align*} & \qquad \left | |\alpha|^{-q} |u_\epsilon|^q - PU_{x, \lambda}^q - q \, U_{x, \lambda}^{q-1} w -\frac{q(q-1)}{2}\, U_{x, \lambda}^{q-2} w^2 \right | \\ & \leq \, c_2 \left ( |w- \varphi_{x, \lambda}|^q + |w- \varphi_{x, \lambda}|^{q-(q-3)_+} U_{x, \lambda}^{(q-3)_+} + U_{x, \lambda}^{q-2} \varphi_{x, \lambda}\, |w| + U_{x, \lambda}^{q-2} \varphi_{x, \lambda}^2 \right) \\ & \leq \, c_3 \left( |w|^q +\varphi_{x, \lambda}^q + |w|^{q-(q-3)_+} U_{x, \lambda}^{(q-3)_+} + \varphi_{x, \lambda}^{q-(q-3)_+} U_{x, \lambda}^{(q-3)_+} + U_{x, \lambda}^{q-2} \varphi_{x, \lambda}\, |w| + U_{x, \lambda}^{q-2} \varphi_{x, \lambda}^2 \right) \\ & \leq \, c_4 \left( |w|^q + |w|^{q-(q-3)_+} U_{x, \lambda}^{(q-3)_+} + U_{x, \lambda}^{q-2} \varphi_{x, \lambda}\, |w| + U_{x, \lambda}^{q-2} \varphi_{x, \lambda}^2 \right). \end{align*} In the last inequality we used \eqref{eq-rey-2} to simplify the form of the remainder terms. Now we use the identity $$ N\, (N-2) \int_\Omega U_{x, \lambda}^{q-1} w \, dy = \int_\Omega \nabla U_{x, \lambda} \cdot \nabla w \, dy = \int_\Omega \nabla P U_{x, \lambda} \cdot \nabla w \, dy =0, $$ which follows from \eqref{eq-pu}, \eqref{u-eq} and $w \in T_{x,\lambda}^\bot$, and the fact that $\int_\Omega |w|^q \to 0$, which follows from \eqref{lim-eps} and the Sobolev inequality. 
Therefore, with the help of the H\"older inequality, we find \begin{align*} & \quad \, \left | \int_\Omega \left( \, |\alpha|^{-q} |u_\epsilon|^q - P U_{x, \lambda}^q - \frac{q(q-1)}{2}\, U_{x, \lambda}^{q-2} w^2 \right) dy \, \right | \\ & \leq \, c_4 \Bigg[ \int_\Omega |w|^q \, dy + \left(\int_\Omega |w|^q \, dy \right)^{\frac{q-(q-3)_+}{q}} \left( \int_\Omega U_{x, \lambda}^{q} \, dy \right)^{\frac{(q-3)_+}{q}} \\ & \quad \qquad + \left( \int_\Omega U_{x, \lambda}^{\frac{q(q-2)}{q-1}}\, \varphi_{x, \lambda}^{\frac{q}{q-1}} \, dy \right)^{\frac{q-1}{q}}\, \left(\int_\Omega |w|^q \, dy \right)^{\frac{1}{q}} + \int_\Omega U_{x, \lambda}^{q-2}\, \varphi_{x, \lambda}^2 \, dy \Bigg] \\ & \leq \, c_5 \Bigg[ \left(\int_\Omega |\nabla w|^2 \, dy \right)^{\frac{q-(q-3)_+}{2}} + \left( \int_\Omega U_{x, \lambda}^{\frac{q(q-2)}{q-1}}\, \varphi_{x, \lambda}^{\frac{q}{q-1}} \, dy \right)^{\frac{q-1}{q}}\, \left(\int_\Omega |\nabla w|^2 \, dy \right)^{\frac{1}{2}} + \int_\Omega U_{x, \lambda}^{q-2}\, \varphi_{x, \lambda}^2 \, dy \Bigg] \,. \end{align*} In the last inequality, we used the Sobolev inequality and \eqref{lim-eps} for $w$, together with $$ \int_\Omega U_{x, \lambda}^{q} \, dy \ \leq \ \int_{\mathbb{R}^N} U_{x, \lambda}^{q} \, dy \ =\ \left( \frac{S_N}{N(N-2)}\right) ^{\frac{q}{q-2}} \,. $$ It follows from Lemma \ref{lem-tech} and \eqref{lim-eps} that \begin{align*} \left( \int_\Omega U_{x, \lambda}^{\frac{q(q-2)}{q-1}}\, \varphi_{x, \lambda}^{\frac{q}{q-1}} \, dy \right)^{\frac{q-1}{q}} \ & = \ o\left((d \lambda)^{\frac{2-N}{2}}\right)\, , \\ \int_\Omega U_{x, \lambda}^{q-2}\, \varphi_{x, \lambda}^2 \, dy \ & = \ o\left((d\, \lambda)^{2-N}\right). \end{align*} Thus, we conclude that, as $\epsilon \to 0$, \begin{align*} & \left | \int_\Omega \left( \, |\alpha|^{-q} |u_\epsilon|^q - \, P U_{x, \lambda}^q - \frac{q(q-1)}{2}\, U_{x, \lambda}^{q-2} w^2 \right) \, dy \, \right | = o\left(\int_\Omega |\nabla w|^2 \, dy \, + (\lambda d)^{2-N}\right).
\end{align*} \emph{Proof of \eqref{str-3}. } We write \begin{equation} \label{eq-V-2} |\alpha|^{-2} \int_\Omega V u_\epsilon^2 \, dy = \int_\Omega V\, P U_{x, \lambda}^2 \, dy + 2\, \int_\Omega V \, P U_{x, \lambda}\, w \, dy + \int_\Omega V w^2 \, dy \, . \end{equation} By the H\"older and Sobolev inequalities we have $$ \left |\, \int_\Omega V w^2 \, dy \, \right | \leq \left(\int_\Omega |V|^{\frac N2} \, dy \right)^{\frac 2N} \left(\int_\Omega |w|^q \, dy \right)^{\frac 2q} \leq S_N^{-1} \left(\int_\Omega |V|^{\frac N2}\, dy \right)^{\frac 2N} \int_\Omega |\nabla w|^2 \, dy \, , $$ and \begin{align*} \left |\, \int_\Omega V P U_{x, \lambda}\, w \, dy \, \right |& \leq \left( \int_\Omega |V| \, P U_{x, \lambda}^2 \, dy \right)^{\frac 12}\, \left(\int_\Omega |V|\, w^2 \, dy \right)^{\frac 12} \\ & \leq S_N^{-1/2} \left( \int_\Omega |V| \, P U_{x, \lambda}^2 \, dy \right)^{\frac 12}\, \left(\int_\Omega |V|^{\frac N2} \, dy \right)^{\frac 1N} \left (\int_\Omega |\nabla w|^2 \, dy \right)^{\frac 12}. \end{align*} Hence \eqref{str-3} follows by inserting these estimates into \eqref{eq-V-2}. \end{proof} \section{\bf Proof of the main results} \label{sec pfmain} We now deduce Theorems \ref{thm expansion} and \ref{thm-minimizers} from Proposition \ref{prop-lowerbd}. To do so, we make crucial use of the following coercivity bound proved in \cite[Appendix D]{rey2}. \begin{prop} \label{prop-rey} For all $x \in \Omega$, $\lambda > 0$ and $v \in T_{x, \lambda}^\perp$, one has \begin{equation} \label{eq-rey} \int_\Omega |\nabla v|^2 \, dy - N(N+2) \, \int_\Omega U_{x, \lambda}^{q-2}\,v^2 \, dy \ \geq \, \frac{4}{N+4} \int_\Omega |\nabla v|^2\, dy \, .
\end{equation} \end{prop} \begin{cor} \label{cor-lowerbd} For all $\epsilon > 0$ small enough, we have, if $N\geq 5$, \begin{align} 0 \geq (1 + o(1)) (S_N - S(\epsilon V)) &+ \left( \frac{S_N}{N(N-2)}\right)^{\frac{2}{2-q}} \left( \frac{N(N-2)\, a_N \, \phi(x)}{\lambda^{N-2}} +b_N\, \epsilon\, \frac{V(x)}{\lambda^{2}} \right) \nonumber \\ & + c \int_\Omega |\nabla w|^2 \, dy + o((\lambda d)^{2-N}) + o(\epsilon \lambda^{-2}) \label{lowerbd-cor} \end{align} and, if $N=4$, \begin{align} 0 \geq (1 + o(1)) (S_4 - S(\epsilon V)) &+ \frac{8}{S_4} \left( \frac{8 a_4 \phi(x)}{\lambda^{2}} + b_4 V(x) \frac{ \epsilon \log \lambda }{\lambda^2} \right) \nonumber \\ & + c \int_\Omega |\nabla w|^2 \, dy + o((\lambda d)^{-2}) + o(\epsilon \lambda^{-2} \log \lambda) \,. \label{lowerbd-cor-N4} \end{align} \end{cor} \begin{proof} Firstly, it follows directly from \eqref{eq-rey} and the definition of $I[w]$ in \eqref{definition I} that there is a $c >0$ such that for all $\epsilon> 0$ small enough, we have \begin{equation} \label{coercivity of I} I[w] \geq 4c \int_\Omega |\nabla w|^2 \, dy \,. \end{equation} Using Proposition \ref{prop-lowerbd} and \eqref{coercivity of I} it follows that for $\epsilon$ small enough one has \begin{align*} \mathcal S_{\epsilon V} [u_\epsilon] \geq \mathcal S_{\epsilon V} [PU_{x, \lambda}] + 2c \int_\Omega |\nabla w|^2 \, dy + \mathcal{O}\left(\epsilon\, \sqrt{\int_\Omega |\nabla w|^2\, dy }\ \sqrt{ \int_\Omega |V|\, P U_{x, \lambda}^2\, dy } \ \right) + o\left( (\lambda d)^{2-N}\right).
\end{align*} Since $$ \epsilon\, \sqrt{\int_\Omega |\nabla w|^2\, dy }\ \sqrt{ \int_\Omega |V| \, P U_{x, \lambda}^2\, dy } \ \leq \ c \int_\Omega |\nabla w|^2\, dy + \frac{ \epsilon^2}{4c}\, \int_\Omega |V|\, P U_{x, \lambda}^2\, dy \, , $$ this further implies that for $\epsilon>0$ small enough \begin{align*} \mathcal S_{\epsilon V} [u_\epsilon] & \geq \mathcal S_{\epsilon V} [PU_{x, \lambda}] + c \int_\Omega |\nabla w|^2 \, dy + \mathcal{O}\left(\epsilon^2 \int_\Omega |V|\, P U_{x, \lambda}^2 \, dy \right) + o\left( (\lambda d)^{2-N}\right). \end{align*} Using \eqref{exp-epsVPU} for the potential term and recalling \eqref{lim-eps}, we obtain \begin{align*} \mathcal S_{\epsilon V} [u_\epsilon] \geq \begin{cases} \mathcal S_{\epsilon V} [PU_{x, \lambda}] + c \int_\Omega |\nabla w|^2 \, dy + o (\epsilon \lambda^{-2}) + o((\lambda d)^{2-N}), & N \geq 5, \\ \mathcal S_{\epsilon V} [PU_{x, \lambda}] + c \int_\Omega |\nabla w|^2 \, dy + o (\epsilon \lambda^{-2} \log \lambda) + o((\lambda d)^{-2}), & N = 4. \end{cases} \end{align*} Now the fact that $S_N - \mathcal S_{\epsilon V} [u_\epsilon] = (1 + o(1)) (S_N - S(\epsilon V))$ by \eqref{appr-min}, together with the expansion of $\mathcal S_{\epsilon V} [PU_{x, \lambda}]$ from Theorem \ref{thm expansion PU}, implies the claimed bounds \eqref{lowerbd-cor} and \eqref{lowerbd-cor-N4}. \end{proof} In the next lemma, we prove that the limit point $x_0$ lies in the set $\mathcal N(V)$. \begin{lem} \label{lemma bdry conc} We have $x_0 \in \mathcal N(V)$. In particular, $d^{-1} = \mathcal O(1)$ as $\epsilon \to 0$ and $x \in \mathcal N(V)$ for $\epsilon$ small enough. \end{lem} \begin{proof} We first treat the case $N \geq 5$.
In \eqref{lowerbd-cor}, we drop the non-negative gradient term and write the remaining lower order terms as \begin{align*} & \quad \left( \frac{S_N}{N(N-2)}\right)^{\frac{2}{2-q}} \left( \frac{N(N-2)\, a_N \, \phi(x)}{\lambda^{N-2}} +b_N\, \epsilon\, \frac{V(x)}{\lambda^{2}} \right) + o((\lambda d)^{2-N}) + o(\epsilon \lambda^{-2}) \\ &= \left( \frac{S_N}{N(N-2)}\right)^{\frac{2}{2-q}} \left( A (d\lambda)^{2-N} - B \epsilon (d \lambda)^{-2} \right), \end{align*} where \begin{equation} \label{def AB lemma} A = N(N-2)\, a_N \, \phi(x) d^{N-2} + o(1), \qquad B = - b_N V(x_0) d^2 + o(1). \end{equation} Notice that since $\phi(x) \gtrsim d^{2-N}$ by \eqref{phi near bdry}, the quantity $A$ is positive and bounded away from zero. Moreover, by \eqref{lowerbd-cor} and the fact that $S(\epsilon V) < S_N$, which follows from Corollary \ref{cor-upperb}, we must have $B > 0$. Optimizing in $d\lambda$ yields the lower bound \begin{equation} \label{lowerbd remainders} A (d\lambda)^{2-N} - B \epsilon (d \lambda)^{-2} \geq - c A^{-\frac{2}{N-4}} B^\frac{N-2}{N-4} \epsilon^\frac{N-2}{N-4}, \end{equation} for some explicit constant $c > 0$ independent of $\epsilon$. On the other hand, by Corollary \ref{cor-upperb}, there is $\rho > 0$ such that the leading term in \eqref{lowerbd-cor} is bounded from below by \begin{equation} \label{lowerbd leading} (1 + o(1)) (S_N - S(\epsilon V)) \geq \rho\, \epsilon^\frac{N-2}{N-4} \end{equation} for all $\epsilon > 0$ small enough. Plugging \eqref{lowerbd remainders} and \eqref{lowerbd leading} into \eqref{lowerbd-cor} and rearranging terms, we thus deduce that \begin{equation} \label{lowerbd B} B \geq \rho^\frac{N-4}{N-2} A^\frac{2}{N-2} c^{-\frac{N-4}{N-2}}. \end{equation} As observed above, the quantity $A$ is bounded away from zero and therefore \eqref{lowerbd B} implies that $B$ is bounded away from zero. Hence, in view of \eqref{def AB lemma}, $d$ is bounded away from zero and $V(x_0)< 0$.
The fact that $x \in \mathcal N(V)$ for $\epsilon$ small enough is a consequence of the continuity of $V$. This completes the proof in case $N \geq 5$. Now we consider the case $N = 4$ in a similar way. In \eqref{lowerbd-cor-N4}, we drop the non-negative gradient term and write the remaining lower order terms as \begin{align} & \quad \frac{8}{S_4} \left( \frac{8 a_4 \phi(x)}{\lambda^{2}} + b_4 V(x) \frac{ \epsilon \log \lambda }{\lambda^2} \right) + o((\lambda d)^{-2}) + o(\epsilon \lambda^{-2} \log \lambda) \nonumber \\ & = \frac{8}{S_4} \left( A (d\lambda)^{-2} - B \epsilon (d\lambda)^{-2} \log (d\lambda) \right) , \label{A-B n4} \end{align} where \begin{equation} \label{def AB lemma N=4} A = 8 a_4 \phi(x) d^{2} + o(1), \qquad B = - b_4 (V(x_0)+o(1)) d^2 (1 - \frac{\log d}{\log (d\lambda)}) . \end{equation} Since $\phi(x) \gtrsim d(x)^{-2}$ by \eqref{phi near bdry}, the quantity $A$ is positive and bounded away from zero. Moreover, by \eqref{lowerbd-cor-N4} and the fact that $S(\epsilon V) < S_4$, we must have $B > 0$. Optimizing \eqref{A-B n4} in $d \lambda$ yields the lower bound \begin{equation} \label{lowerbd remainders n4} A (d\lambda)^{-2} - B \epsilon (d\lambda)^{-2} \log (d\lambda) \geq - \frac{B \epsilon}{2e } \exp \left( -\frac{2A}{B \epsilon} \right) = - \exp \left( -\frac{2A}{B \epsilon} + \log(\frac{B \epsilon}{2e}) \right) . \end{equation} On the other hand, by Corollary \ref{cor-upperb}, there is $\rho > 0$ such that the leading term in \eqref{lowerbd-cor-N4} is bounded from below by \begin{equation} \label{lowerbd leading n4} (1 + o(1)) (S_4 - S(\epsilon V)) \geq \exp(-\frac{\rho}{\epsilon}).
\end{equation} Plugging \eqref{lowerbd remainders n4} and \eqref{lowerbd leading n4} into \eqref{lowerbd-cor-N4}, we thus deduce that \[ 0 \geq \exp(-\frac{\rho}{\epsilon}) - \exp \left( -\frac{2A}{B \epsilon} + \log(\frac{B \epsilon}{2e}) \right)\, , \] which leads to \begin{equation} \label{AB intermediate} -\frac{2A}{B} + \epsilon \log(\frac{B \epsilon}{2e}) \geq - \rho . \end{equation} Since $\phi(x) \gtrsim d^{-2}$ by \eqref{phi near bdry}, the quantity $A$ is bounded away from zero and moreover $B$ is bounded. Using this fact, the left-hand side of \eqref{AB intermediate} can be written as \[ -\frac{2A}{B} (1 - \frac{B \epsilon \log B}{2 A}) + \epsilon \log \frac{\epsilon}{2e} = - \frac{2A}{B} \left(1 + o(1)\right) + o(1). \] Together with \eqref{AB intermediate}, this easily implies, if $\epsilon > 0$ is small enough, that \[ B \geq \frac{A}{\rho}. \] As before, in view of \eqref{def AB lemma N=4}, we deduce that $d$ is bounded away from zero and that $V(x_0) < 0$. The fact that $x \in \mathcal N(V)$ for $\epsilon$ small enough is again a consequence of the continuity of $V$. \end{proof} \begin{proof} [Proof of Theorem \ref{thm expansion}] We first treat the case $N \geq 5$. In view of Lemma \ref{lemma bdry conc}, the lower bound \eqref{lowerbd-cor} can be written as (upon dropping the non-negative gradient term) \begin{align*} 0 & \geq (1 + o(1)) (S_N - S(\epsilon V)) + \!\left( \frac{S_N}{N(N-2)}\right)^{\frac{2}{2-q}} \!\!\left( \!\frac{N(N-2)\, a_N \, (\phi(x_0)+ o(1))}{\lambda^{N-2}} +b_N\, \epsilon\, \frac{V(x_0)+o(1)}{\lambda^{2}} \! \right) \\ & \geq (1 + o(1)) (S_N - S(\epsilon V)) - C_N (\phi(x_0)+o(1))^{-\frac{2}{N-4}}\ |V(x_0) + o(1)|^{\frac{N-2}{N-4}} \epsilon^\frac{N-2}{N-4} \end{align*} by optimization in $\lambda$.
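For the reader's convenience, we record this elementary optimization explicitly; the value of the constant below is a direct computation and is only used up to the constant $C_N$, which collects the remaining dimensional factors. For $a, b > 0$ the function $g(\lambda) = a\lambda^{2-N} - b\epsilon\lambda^{-2}$ attains its minimum on $(0,\infty)$ at $\lambda_*$ with $\lambda_*^{N-4} = \frac{(N-2)a}{2b\epsilon}$, and
\[
g(\lambda_*) = -\frac{N-4}{N-2}\left(\frac{2}{N-2}\right)^{\frac{2}{N-4}} a^{-\frac{2}{N-4}}\, b^{\frac{N-2}{N-4}}\, \epsilon^{\frac{N-2}{N-4}} \,.
\]
Applied with $a = N(N-2)\, a_N\, (\phi(x_0)+o(1))$ and $b = b_N\, |V(x_0)+o(1)|$ (recall $V(x_0) < 0$), this gives the second inequality above.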
Therefore \[ S(\epsilon V) \geq S_N - C_N \phi(x_0)^{-\frac{2}{N-4}}\ |V(x_0)|^{\frac{N-2}{N-4}} \epsilon^\frac{N-2}{N-4} + o(\epsilon^\frac{N-2}{N-4}) \geq S_N - C_N \sigma_N(\Omega, V) \epsilon^\frac{N-2}{N-4} + o(\epsilon^\frac{N-2}{N-4}), \] where the last inequality uses the fact that $x_0 \in \mathcal N(V)$ by Lemma \ref{lemma bdry conc}. Since the matching upper bound has already been proved in Theorem \ref{thm expansion PU}, the proof in case $N \geq 5$ is complete. Similarly, we can handle the case $N = 4$. In view of Lemma \ref{lemma bdry conc}, the lower bound \eqref{lowerbd-cor-N4} can be written as (upon dropping the non-negative gradient term) \begin{align*} 0 &\geq (1 + o(1)) (S_4 - S(\epsilon V)) + \frac{8}{S_4} \left( \frac{8 a_4 (\phi(x_0)+o(1))}{\lambda^{2}} + b_4 (V(x_0) + o(1)) \frac{ \epsilon \log \lambda }{\lambda^2} \right) \\ & \geq (1 + o(1)) (S_4 - S(\epsilon V)) - \frac{4 b_4}{e S_4} \epsilon |V(x_0) + o(1)| \exp \left(- \frac{4 (\phi(x_0) + o(1))}{\epsilon |V(x_0) + o(1)|} \right) \end{align*} by optimization in $\lambda$. Therefore \[ S(\epsilon V) \geq S_4 - \exp\left( - \frac 4\epsilon \left(1 +o(1)\right) \frac{\phi(x_0)}{|V(x_0)|} \right) \geq S_4 - \exp\left( - \frac 4\epsilon \left(1 +o(1)\right) \sigma_4(\Omega,V)^{-1} \right)\, , \] where the last inequality uses the fact that $x_0 \in \mathcal N(V)$ by Lemma \ref{lemma bdry conc}. Since the matching upper bound has already been proved in Theorem \ref{thm expansion PU}, the proof in case $N = 4$ is complete. \end{proof} \begin{proof} [Proof of Theorem \ref{thm-minimizers}] We start again with the bounds from Corollary \ref{cor-lowerbd}, but this time we need to take into account the various nonnegative remainder terms more carefully. \emph{Proof for $N \geq 5$.
} We rewrite \eqref{lowerbd-cor}, using Lemma \ref{lemma bdry conc}, as \begin{equation} \label{apprminproof start} 0 \geq (1 + o(1)) (S_N - S(\epsilon V)) - C_N (\phi(x_0)+o(1))^{-\frac{2}{N-4}}\ |V(x_0)+o(1)|^{\frac{N-2}{N-4}} \epsilon^\frac{N-2}{N-4} + \mathcal R \end{equation} with \[\mathcal R = \left( \frac{A_\epsilon}{\lambda^{N-2}} - B_\epsilon \frac{\epsilon}{\lambda^2} + C_N A_\epsilon^{-\frac{2}{N-4}} B_\epsilon^\frac{N-2}{N-4} \epsilon^\frac{N-2}{N-4} \right) + c \int_\Omega |\nabla w|^2 \, dy \, , \] where we have set \[ A_\epsilon = \left( \frac{S_N}{N(N-2)}\right)^{\frac{2}{2-q}} \left( N(N-2)\, a_N \, (\phi(x_0)+ o(1)) \right) , \quad B_\epsilon = \left( \frac{S_N}{N(N-2)}\right)^{\frac{2}{2-q}} b_N\, | V(x_0)+o(1) | \, . \] Notice that both summands of $\mathcal R$ are separately nonnegative. Inserting the upper bound from Corollary \ref{cor-upperb} into \eqref{apprminproof start}, we get \[ 0 \geq C_N \left(\sigma_N(\Omega, V) - \phi(x_0)^{-\frac{2}{N-4}}\ |V(x_0)|^{\frac{N-2}{N-4}} \right) \epsilon^\frac{N-2}{N-4} + \mathcal R + o(\epsilon^\frac{N-2}{N-4})\, . \] Since each one of the first two summands on the right-hand side is nonnegative, we deduce that \[ \phi(x_0)^{-\frac{2}{N-4}}\ |V(x_0)|^{\frac{N-2}{N-4}} = \sup_{x \in \mathcal N(V)} \phi(x)^{-\frac{2}{N-4}}\ |V(x)|^{\frac{N-2}{N-4}} = \sigma_N(\Omega, V)\] and \begin{equation} \label{R is small} \mathcal R = o(\epsilon^\frac{N-2}{N-4}). \end{equation} In particular, \eqref{R is small} implies that \begin{equation} \label{apprminproof w} \|\nabla w\|_2^2 = o(\epsilon^\frac{N-2}{N-4}). \end{equation} Denote by \[ \lambda_0(\epsilon) = \left(\frac{(N-2)A_\epsilon}{2B_\epsilon } \right)^\frac{1}{N-4} \epsilon^{\frac{1}{4-N}} \] the unique value of $\lambda$ for which the first summand of $\mathcal R$ vanishes.
Using Lemma \ref{lem-taylor}, the bound \eqref{R is small} implies that \[ \epsilon (\lambda^{-1} - \lambda_0(\epsilon)^{-1})^2 = o(\epsilon^\frac{N-2}{N-4}), \] which is equivalent to \begin{equation} \label{apprminproof lambda} \lambda = \lambda_0(\epsilon) + o(\epsilon^{-\frac{1}{N-4}}) = \left(\frac{N\, (N-2)^2\, a_N \, \phi(x_0)}{2 \,b_N\, |V(x_0)|}\right)^{\frac{1}{N-4}}\, \epsilon^{-\frac{1}{N-4}} + o(\epsilon^{-\frac{1}{N-4}}) . \end{equation} Finally, to obtain the asymptotics of $\alpha$, by \eqref{str-2}, \eqref{appr-min}, \eqref{exp-PUq} and \eqref{apprminproof w}, we have that \begin{equation} \label{apprminproof alpha start} |\alpha|^{-q} \left( \frac{S_N}{N(N-2)}\right)^{\frac{q}{q-2}} = \left( \frac{S_N}{N(N-2)}\right)^{\frac{q}{q-2}} - q a_N \lambda^{2-N} \phi(x_0) +\frac{q(q-1)}{2}\, \int_\Omega U_{x, \lambda}^{q-2}\, w^2 \, dy + o(\lambda^{2-N})\, . \end{equation} Moreover, by the H\"older and Sobolev inequalities, \begin{equation} \label{apprminproof alpha bound w} \int_\Omega U_{x, \lambda}^{q-2} w^2 \, dy \lesssim \|\nabla w\|^2. \end{equation} We easily conclude from \eqref{apprminproof w}--\eqref{apprminproof alpha bound w} that \[ |\alpha| = 1 + D_N \sigma_N(\Omega, V) \epsilon^\frac{N-2}{N-4} + o(\epsilon^\frac{N-2}{N-4}) \] with $D_N$ given in \eqref{dn}. This completes the proof of Theorem \ref{thm-minimizers} in the case $N \geq 5$. \emph{Proof for $N = 4$.
} We rewrite \eqref{lowerbd-cor-N4}, using Lemma \ref{lemma bdry conc}, as \begin{equation} \label{apprminproof start n4} 0 \geq (1 + o(1)) (S_4 - S(\epsilon V)) - \frac{B_\epsilon \epsilon}{2 e} \exp\left(- \frac{2A_\epsilon}{B_\epsilon \epsilon} \right) + \mathcal R \end{equation} with $$ \mathcal R = \left( \frac{A_\epsilon}{\lambda^{2}} - B_\epsilon \frac{\epsilon \log \lambda}{\lambda^2} + \frac{B_\epsilon \epsilon}{2 e} \exp\left(- \frac{2A_\epsilon}{B_\epsilon \epsilon} \right) \right) + c \int_\Omega |\nabla w|^2 \, dy, $$ where we have set \[ A_\epsilon = \frac{64}{S_4} a_4 (\phi(x_0) +o(1)), \qquad B_\epsilon = \frac{8}{S_4} b_4 |V(x_0)+o(1)| \, . \] Notice that both summands of $\mathcal R$ are separately nonnegative. Inserting the upper bound from Corollary \ref{cor-upperb} into \eqref{apprminproof start n4}, we get \begin{equation} \label{eq:proofn4} 0 \geq (1 + o(1))\exp\left( - \frac 4\epsilon \left(1 +o(1)\right) \sigma_4(\Omega,V)^{-1} \right) - \frac{B_\epsilon \epsilon}{2 e} \exp\left(- \frac{2A_\epsilon}{B_\epsilon \epsilon} \right) + \mathcal R \,. \end{equation} Dropping the nonnegative term $\mathcal R$ from the right side and taking the logarithm of the resulting inequality, we obtain $$ - \frac{2A_\epsilon}{B_\epsilon \epsilon} + \log\frac{B_\epsilon \epsilon}{2 e} \geq - \frac 4\epsilon \left(1 +o(1)\right) \sigma_4(\Omega,V)^{-1} + \log(1+o(1)) \,. $$ Multiplying by $\epsilon$ and passing to the limit we infer, since $a_4/b_4 = 1/4$, $$ - \frac{\phi(x_0)}{|V(x_0)|} \geq - \sigma_4(\Omega,V)^{-1} \,. $$ By definition of $\sigma_4(\Omega,V)$, this implies \begin{equation} \label{eq:proofn41} \frac{|V(x_0)|}{\phi(x_0)} = \sigma_4(\Omega,V) \,, \end{equation} as claimed.
With this information at hand, we return to \eqref{eq:proofn4} and drop the nonnegative first term on the right side to infer that $$ \mathcal R \leq \frac{B_\epsilon \epsilon}{2 e} \exp\left(- \frac{2A_\epsilon}{B_\epsilon \epsilon} \right). $$ Keeping only the second term in the definition of $\mathcal R$ and using \eqref{eq:proofn41} we deduce, in particular, that \begin{equation} \label{w is small n4} \|\nabla w \|_2^2 \leq \exp\left( - \frac 4\epsilon \left(1 +o(1)\right) \sigma_4(\Omega,V)^{-1} \right). \end{equation} We now keep only the first term in the definition of $\mathcal R$ and obtain from \eqref{eq:proofn4}, multiplied by $(2e/(B_\epsilon \epsilon))\exp(2A_\epsilon/(B_\epsilon \epsilon))$, \begin{align*} 1 - (1+o(1)) \frac{2 e}{B_\epsilon \epsilon} \exp\left( \frac{2A_\epsilon}{B_\epsilon \epsilon} - \frac 4\epsilon \left(1 +o(1)\right) \sigma_4(\Omega,V)^{-1} \right) & \geq \frac{2 e}{B_\epsilon \epsilon} \exp\left(\frac{2A_\epsilon}{B_\epsilon \epsilon} \right) \mathcal R \\ & \geq \frac{2 e}{B_\epsilon \epsilon} \exp\left(\frac{2A_\epsilon}{B_\epsilon \epsilon} \right) \left( \frac{A_\epsilon}{\lambda^2} - B_\epsilon \frac{\epsilon \log\lambda}{\lambda^2} \right) + 1 \\ & = 1+ y\, e^{y+1} \end{align*} with $y = \frac{2}{B_\epsilon \epsilon} (A_\epsilon - \epsilon B_\epsilon \log \lambda)$. In view of \eqref{eq:proofn41} and \eqref{eq-revised} we have $$ (1+o(1)) \frac{2 e}{B_\epsilon \epsilon} \exp\left( \frac{2A_\epsilon}{B_\epsilon \epsilon} - \frac 4\epsilon \left(1 +o(1)\right) \sigma_4(\Omega,V)^{-1} \right) = \exp \left( o \left( \frac{1}{\epsilon} \right)\right), $$ and therefore $$ - \exp \left( o \left( \frac{1}{\epsilon} \right)\right) \geq y \,e^{y+1} \,.
$$ This implies $$ 0 < -y \leq o \left( \frac{1}{\epsilon} \right), $$ which is the same as $$ \frac{A_\epsilon}{B_\epsilon \epsilon} < \log \lambda \leq \frac{A_\epsilon}{B_\epsilon \epsilon} + o \left( \frac{1}{\epsilon} \right). $$ Recalling \eqref{eq:proofn41} we obtain \begin{equation} \label{lambda asymptotics proof} \lambda = \exp\left( - \frac 2\epsilon \left(1 +o(1)\right) \sigma_4(\Omega,V)^{-1} \right), \end{equation} as claimed. Finally, to obtain the asymptotics of $\alpha$, we deduce from \eqref{apprminproof alpha start} and \eqref{apprminproof alpha bound w}, together with the bounds \eqref{w is small n4} and \eqref{lambda asymptotics proof}, that \[ |\alpha| = 1 + \exp\left( - \frac 4\epsilon \left(1 +o(1)\right) \sigma_4(\Omega,V)^{-1} \right). \] This completes the proof of Theorem \ref{thm-minimizers} in the case $N = 4$. \end{proof} \appendix \section{Auxiliary results} \label{sec-app} The proof of the following lemma is similar to the computation in \cite[Appendix A]{rey2}. We provide here details for the sake of completeness. \begin{lem} \label{lem-tech} Let $x = x_\lambda$ be a sequence of points in $\Omega$ such that $d(x) \lambda \to\infty$. Then \begin{equation} \label{eq-a} \left( \int_\Omega U_{x,\lambda}^{\frac{q(q-2)}{q-1}}\, \varphi_{x,\lambda}^{\frac{q}{q-1}} \, dy \right)^{\frac{q-1}{q}} \ = \begin{cases} \mathcal{O}\left((d(x)\, \lambda)^{\frac{-2-N}{2}}\right) & \text{ if } N > 6, \\ \mathcal{O}\left((d(x)\, \lambda)^{-4} \log(d(x)\lambda)\right) & \text{ if } N = 6, \\ \mathcal{O}\left((d(x)\, \lambda)^{2-N}\right) & \text{ if } N =4,5 \end{cases} \end{equation} and \begin{equation} \label{eq-b} \int_\Omega U_{x,\lambda}^{q-2}\, \varphi_{x,\lambda}^2 \, dy \ = \mathcal{O}\left((d(x)\, \lambda)^{-N}\right) \, . \end{equation} \end{lem} \begin{proof} We write $d = d(x)$ for short in the following proof. \emph{Proof of \eqref{eq-a}.
} By equations \eqref{u-split}, \eqref{sup-f} and \eqref{sup-h}, \begin{equation} \label{holder-in} \int_{B_d(x)} U_{x,\lambda}^{\frac{q(q-2)}{q-1}}\, \varphi_{x,\lambda}^{\frac{q}{q-1}}\, dy \, \leq \|\varphi_{x,\lambda}\|^{\frac{q}{q-1}}_{L^\infty(\Omega)}\, \int_{B_d(x)} U_{x,\lambda}^{\frac{q(q-2)}{q-1}} \, dy = \mathcal{O}\left( (d^{2-N}\, \lambda^{\frac{2-N}{2}})^{\frac{q}{q-1}}\right) \, \int_{B_d(x)} U_{x,\lambda}^{\frac{q(q-2)}{q-1}} \, dy \, . \end{equation} Moreover, since $\frac{q(q-2)}{q-1}\, \frac{N-2}{2} = \frac{4N}{N+2}$, from \eqref{u-function} we obtain \begin{align} \int_{B_d(x)} U_{x,\lambda}^{\frac{q(q-2)}{q-1}} \, dy & = \mathcal{O}\left( \lambda^{\frac{4N}{N+2}}\right)\, \int_0^d \frac{r^{N-1}\, dr}{(1+\lambda^2\, r^2)^{\frac{4N}{N+2}}} = \mathcal{O}\left( \lambda^{\frac{2N-N^2}{N+2}}\right)\, \int_0^{\lambda d} \frac{t^{N-1}\, dt}{(1+ t^2)^{\frac{4N}{N+2}}} \nonumber \\ & = \mathcal{O}\left( \lambda^{\frac{2N-N^2}{N+2}}\right)\, \left(\int_1^{\lambda d} t^{\frac{N(N-6)}{N+2}} \, t^{-1} \, dt +\mathcal{O}(1)\right). \label{1-ball-in} \end{align} If $N>6$, then $$ \int_1^{\lambda d} t^{\frac{N(N-6)}{N+2}} \, t^{-1} \, dt = \mathcal{O}\left( ( d\, \lambda)^{\frac{N(N-6)}{N+2}} \right). $$ If $N = 6$, then $$ \int_1^{\lambda d} t^{\frac{N(N-6)}{N+2}} \, t^{-1} \, dt = \mathcal{O}\left( \log ( d\, \lambda) \right) $$ and if $N = 4,5$, then $$ \int_1^{\lambda d} t^{\frac{N(N-6)}{N+2}} \, t^{-1} \, dt = \mathcal{O}\left(1 \right). $$ This gives the bound claimed in \eqref{eq-a} in each case, provided we can bound the integral on the complement $\Omega \setminus B_d(x)$.
On this region, we have by H\"older \begin{align*} \left( \int_{\Omega \setminus B_d(x)} U_{x,\lambda}^{\frac{q(q-2)}{q-1}}\, \varphi_{x,\lambda}^{\frac{q}{q-1}}\, dy \right)^{\frac{q-1}{q}} & \leq \, \left( \int_\Omega \varphi_{x,\lambda}^{\frac{2N}{N-2}} \, dy \right)^{\frac{N-2}{2N}}\, \left( \int_{\mathbb{R}^N\setminus B_ d(x)} U_{x,\lambda}^{\frac{2N}{N-2}} \, dy \right)^{\frac 2N} \\ & = \mathcal{O}\left((d\, \lambda)^{\frac{2-N}{2}} \right)\, \left( \int_{\mathbb{R}^N\setminus B_ d(x)} U_{x,\lambda}^{\frac{2N}{N-2}} \, dy \right)^{\frac 2N} \\ & = \mathcal{O}\left((d\, \lambda)^{\frac{2-N}{2}} \right)\, \left( \int_{ d \lambda}^\infty \frac{dt}{t^{N+1}} \right)^{\frac 2N} \\ & = \mathcal{O}\left((d\, \lambda)^{\frac{2-N}{2}}\right) \, \mathcal{O}\left(( d\, \lambda)^{-2}\right), \end{align*} where we have used \eqref{u-function} and the fact that \begin{equation} \label{rey-pr1} \left( \int_\Omega \varphi_{x,\lambda}^{\frac{2N}{N-2}} \, dy \right)^{\frac{N-2}{2N}}\, = \mathcal{O}\left((d\, \lambda)^{\frac{2-N}{2}} \right) \end{equation} by \cite[Prop.~1(c)]{rey2}. Combining all the estimates, we deduce \eqref{eq-a}. \emph{Proof of \eqref{eq-b}. } We split the domain of integration $\Omega$ again into $B_d(x)$ and $\Omega \setminus B_d(x)$. On $B_d(x)$, by \eqref{u-split}, \begin{align} &\qquad \int_{B_d(x)} U_{x,\lambda}^{q-2}\, \varphi_{x,\lambda}^2 \, dy \leq \|\varphi_{x,\lambda}\|^2_{L^\infty(\Omega)} \left( \int_{B_ d(x)} U_{x,\lambda}^{q-2} \, dy \right) \nonumber \\ & = \ \mathcal{O}\left( d(x)^{4-2N}\, \lambda^{2-N}\right) \left( \lambda^{2-N} \int_0^{ d \lambda} \frac{t^{N-1}\, dt}{(1+t^2)^2} \right) = \mathcal O ((d\lambda)^{-N}). 
\label{b-ball} \end{align} On $\Omega \setminus B_d(x)$, by H\"older and \eqref{rey-pr1}, \begin{align} \label{b-out} \int_{\Omega \setminus B_d(x)} U_{x,\lambda}^{q-2}\, \varphi_{x,\lambda}^2 \, dy \leq \left( \int_{\Omega} \varphi_{x,\lambda}^q \, dy \right)^\frac{2}{q} \left( \int_{\mathbb{R}^N\setminus B_d(x)} U_{x,\lambda}^q \, dy \right)^{\frac{q-2}{q}} = \mathcal{O}\left((d(x)\, \lambda)^{2-N}\right) \, \mathcal{O}\left(( d\, \lambda)^{-2} \right). \end{align} Combining \eqref{b-ball} and \eqref{b-out}, we obtain \eqref{eq-b}. \end{proof} \begin{lem} \label{lem-taylor} Let $f_\epsilon: (0, \infty) \to \mathbb{R}$ be given by \[ f_\epsilon(\lambda) = \frac{A_\epsilon}{\lambda^{N-2}} - B_\epsilon \frac{\epsilon}{\lambda^2}\] with $A_\epsilon, B_\epsilon > 0$ uniformly bounded away from 0 and $\infty$. Denote by \[ \lambda_0 = \lambda_0(\epsilon) = \left(\frac{(N-2)A_\epsilon}{2B_\epsilon } \right)^\frac{1}{N-4} \epsilon^{\frac{1}{4-N}} \] the unique global minimum of $f_\epsilon$. Then there is a $c_0 > 0$ such that for all $\epsilon > 0$ we have \[ f_\epsilon(\lambda) - f_\epsilon(\lambda_0) \geq \begin{cases} c_0 \epsilon \left( \lambda^{-1} - \lambda_0(\epsilon)^{-1}\right)^2 & \text{ if } \quad (\frac{A_\epsilon}{B_\epsilon})^{\frac{1}{N-4}} \epsilon^{-\frac{1}{N-4}} \lambda^{-1} \leq 2 (\frac{2}{N-2})^\frac{1}{N-4} , \\ c_0 \epsilon^\frac{N-2}{N-4} & \text{ if } \quad (\frac{A_\epsilon}{B_\epsilon})^{\frac{1}{N-4}} \epsilon^{-\frac{1}{N-4}} \lambda^{-1} > 2 (\frac{2}{N-2})^\frac{1}{N-4}. \end{cases} \] \end{lem} \begin{proof} Let $F(t):= t^{N-2} - t^2$ and denote by $t_0 := (\frac{2}{N-2})^\frac{1}{N-4}$ the unique global minimum on $(0, \infty)$ of $F$. Then it is easy to see that there is $c > 0$ such that \[ F(t) - F(t_0) \geq \begin{cases} c(t - t_0)^2 & \text{ if } \quad 0 < t \leq 2 t_0, \\ c t_0^{N-2} & \text{ if } \quad t > 2 t_0. \end{cases} \] The assertion of the lemma now follows by rescaling.
Indeed, it suffices to observe that \[ f_\epsilon(\lambda) = A_\epsilon^{-\frac{2}{N-4}} B_\epsilon^\frac{N-2}{N-4} \epsilon^\frac{N-2}{N-4} F\left( (\frac{A_\epsilon}{B_\epsilon})^{\frac{1}{N-4}} \epsilon^{-\frac{1}{N-4}} \lambda^{-1} \right) \] and to use the boundedness of $A_\epsilon$ and $B_\epsilon$. \end{proof} \end{document}
\begin{document} \title{Allocating Limited Resources to Protect a Massive Number of Targets using a Game Theoretic Model} \author{\centering \IEEEauthorblockN{Xu Liu, Xiaoqiang Di$^*$, Jinqing Li, Huan Wang, Jianping Zhao, Huamin Yang, Ligang Cong} \IEEEauthorblockA{\textit{School of Computer Science and Technology, Changchun University of Science and Technology, Changchun, China} \\ \textit{Jilin Province Key Laboratory of Network and Information Security, Changchun, China}\\ $^*$Corresponding author: [email protected]} \IEEEauthorblockN{ Yuming Jiang} \IEEEauthorblockA{\textit{Department of Information Security and Communication Technology} \\ \textit{Norwegian University of Science and Technology, Trondheim, Norway}} } \maketitle \begin{abstract} Resource allocation is the process of optimally distributing scarce resources. In the area of security, how to allocate limited resources to protect a massive number of targets is especially challenging. This paper addresses this resource allocation issue by constructing a game theoretic model. The defender and the attacker are the players, and their interaction is formulated as a trade-off between protecting targets and consuming resources. The action cost, an essential component of resource consumption, is incorporated into the proposed model. Additionally, a bounded rationality behavior model (Quantal Response, QR), which simulates the behavior of a human attacker, is introduced to refine the proposed model. To validate the proposed model, we compare different utility functions and resource allocation strategies. The comparison results suggest that the proposed resource allocation strategy outperforms the alternatives in terms of utility and resource effectiveness.
\end{abstract} \begin{IEEEkeywords} limited resource allocation, action cost, game theoretic model, quantal response, target security \end{IEEEkeywords} \section{Introduction}\label{sec:introduction} Resource allocation has always been a complex problem, especially when driven by security requirements. How to devise a mechanism to control the trade-off between the cost of protection and the achieved security utility is an open challenge \cite{cloudissue2014}. At AWS re:Invent 2014, an AWS engineer stated that Amazon had nearly 28 locations across the world, each of which has one or more data centers, with a typical facility containing 50,000 to 80,000 servers \cite{vmNum}. To protect these servers against attack and maintain their consistent operation, cloud providers implement security strategies. For example, they can protect targets (e.g., virtual machines, VMs) by setting up resource reservations to analyze the operation of targets and then respond to attacks quickly, which entails substantial resource consumption \cite{url1}. Therefore, a trade-off problem can be abstracted between consuming resources and protecting targets. In particular, when the number of available resources or the resource budget is fixed and limited for all the targets, how to allocate limited resources to protect a massive number of targets is a vital issue in the security area. The extreme approach may be to allocate security resources to cover all the targets \cite{all}. For instance, setting up full resource reservations for all the VMs would lead to almost double the resource consumption. The common approach may be to protect the targets with the most value \cite{alert}. For instance, setting up resource reservations for the VMs that store the most data or the most sensitive data (e.g., financial data). 
The former approach fails to consider resource constraints and effectiveness: on the one hand, the available resources may not be sufficient to protect all the targets; on the other hand, resources allocated to empty targets may be wasted. The latter approach does not account for the adversarial nature and perspective-taking ability of the attacker. An attacker who can learn about a defender's possible target protection strategy can exploit this knowledge to launch an attack on the targets that the defender does not protect. This paper focuses on developing a general resource allocation method to address the trade-off between security gain and resource consumption. The goal is to resolve the problem of how to utilize limited resources to efficiently protect massive targets against attack. Building a mathematical model to describe this problem is the key, which raises two questions: (1) How to maintain security while allocating resources? (2) How to simulate an attacker of an adversarial nature? In previous studies \cite{alert,securityresource,lim3,lim6,reward,16}, the number of allocated resources is measured by the defense probability, but such a scheme weakens the importance and emphasis of resource allocation. In general, performing different actions on a target will result in different outcomes. If an action is successful, the actor will obtain some benefit as a reward; otherwise, the actor will lose some assets as a penalty. Regardless of whether an action is successful, the actor will incur some cost by taking the action. Recent studies of deterrence effort and risk preferences in security games \cite{Budget2017,Risk2017} have analyzed the impact of risk preference on the defense effort and deterrence level, and the impact of the defender's cost on its investment strategy. 
Meanwhile, statistics show that a large data center costs between \$10 million and \$25 million per year, and the corresponding maintenance costs account for nearly 80\% of the total \cite{MTence}. It is therefore clear that the action cost is an important factor that cannot be ignored. By combining the rewards, penalties, costs and probabilities of actions in a suitable manner, our problem can be described mathematically. Game theory, an important tool for analyzing real-world resource allocation problems, such as the assignment of cyber analysts \cite{alert} and patrolling strategies \cite{patrolling08,patrolling09}, provides an alternative solution. However, in most of the previous studies \cite{securityresource, lim3, lim6, reward} on game theoretic resource allocation, only the reward and penalty associated with an action have been included in the game utility function, while the action cost has been ignored. In the real world, no matter what one wants to do, an action cost is usually incurred. This cost might be measured in monetary units, physical resources, abstract resources and so on; whatever it is, it can be abstracted as a mathematical expression. Hence, we additionally include the action cost in the Stackelberg game \cite{11} utility function, and analyze the impact of different parameter value configurations on the defender's utility. Since both the defender and the attacker are intelligent and have perspective-taking ability, we consider an interaction in which the defender designs a resource allocation strategy first and the attacker subsequently develops an attack strategy. Although the attacker has the ability to consider the situation from the perspective of the defender, the attacker might also take abrupt actions that lie outside the defender's expectations. This type of attacker, who is of an adversarial nature, can be simulated by the quantal response (QR) model, which has received widespread support in the literature on modeling human behavior in games \cite{20}. 
In this paper, we introduce the QR model into the proposed Game Theoretic Resource Allocation (GTRA) model to simulate adversarial reasoning. The efficient resource allocation strategy for the defender is obtained from an optimization algorithm. Three indicators, namely vulnerability, coverage and effectiveness, are designed to evaluate the effectiveness of our strategy. We compare the equilibrium strategy based on the proposed GTRA model with the one based on a game utility function that does not consider the action cost, and also with four baseline resource allocation strategies, namely the average allocation strategy, the partial allocation strategy, the random allocation strategy and the full-coverage strategy. The experimental results demonstrate the effectiveness of the proposed GTRA model. The contributions of this paper can be summarized as follows: \begin{itemize} \item[(1)] The action cost is emphasized in resource consumption. The players' action costs are included in the game utility function as an independent term. The numerical analyses show that this type of resource measurement can improve the utility and effectiveness. \item[(2)] Target security and resource consumption are better balanced. The obtained Nash equilibrium strategy is selected as the defender's resource allocation strategy because it outperforms the other baseline resource allocation strategies in terms of both security and effectiveness. \item[(3)] The constructed GTRA model provides advice based on the target parameters to assist in determining the appropriate quantity of resources to protect a massive number of targets. \end{itemize} The remainder of this paper is organized as follows. Section \ref{sec:related} and Section \ref{sec:problem} describe the related work on resource allocation and our problem, respectively. The game theoretic model, the QR model and the proposed algorithm are presented in Section \ref{sec:model}. The numerical analyses are discussed in Section \ref{sec:experiment}. 
The final section summarizes the paper and outlines directions for future work. \section{Related Work}\label{sec:related} Resource allocation is defined as the economical distribution of resources among competing groups of people or programs \cite{cloudissue2014}. Game theory has been applied in resource allocation to better capture the interaction between resource provider and user, and to reflect the economic nature of resource allocation. The previous studies can be roughly classified into two categories based on the participants considered: security-driven resource allocation between a resource provider and an attacker, and demand-driven resource allocation between a resource provider and a legitimate user. Demand-driven resource allocation can be further subdivided into cost-scheme-based, performance-scheme-based and mixed-scheme-based resource allocation. The original pricing schemes were used for the allocation of resources of a single type, such as bandwidth \cite{FairBandwidth, CloudBandwidth, DatacenterBandwidth}, offload \cite{Offload,OffloadScheduling}, or cache \cite{CacheAllocation}. With the development of the Internet, resource providers can now provide nearly all the resources that users need; for example, cloud computing providers offer on-demand resources including storage, memory and bandwidth. Multi-resource pricing schemes \cite{MultiAllocation}, such as the cost-optimized scheme considering multiple resources \cite{Cost-Optimized}, have emerged. Meanwhile, since user requests have become central to service provision, some research has focused on user-demand-driven resource allocation \cite{auctionSLA,PerfAllocation,qosResource}. Later, the cost-optimized and performance-based schemes were combined to allow a resource provider to achieve a win-win objective in which the provider obtains the maximum profit while the user receives the best experience \cite{optimizing_satisfiability,MTence,PerfPricing,Liu2017Analysis}. 
However, during the pursuit of the best experience and the maximum benefit, security issues have increased, and security-driven resource allocation has become a research hotspot, especially when resources are limited and cannot cover all the targets that require protection. The American institute Teamcore conducted a project with the theme of ``AI and game theory for public safety and security'', and its achievements have been applied in various areas. ARMOR \cite{22} was deployed to develop randomized checkpoints and a patrol route strategy at Los Angeles International Airport. GUARDS \cite{23} was developed to assist airports in allocating limited air police resources to protect more than 400 United States airports. Federal Air Marshals used IRIS \cite{28} to provide scheduling coverage for potential attacks. PROTECT \cite{27} was deployed to generate randomized patrolling schedules for the US Coast Guard. These cases are typical instances of limited security resource allocation using game theoretic models. Other game theoretic studies have also produced good results. One study \cite{alert} investigated an intelligent allocation method for assigning limited cyber analysts to analyze a massive number of security alerts in a network. Another work \cite{16} developed new models and algorithms that could scale to highly complex instances of limited security resource allocation games; the new methods performed faster than known algorithms when solving massive security games. In further research \cite{securityresource} based on a previous work \cite{lim3}, efficient algorithms were developed to compute the best responses of security forces to different adversary models when resources are limited, and it was proven that the proposed response strategy was superior because it relaxed the assumption of perfect rationality. 
An additional study \cite{lim6} proposed a game theoretic scheme for developing dynamic and randomized security strategies that consider an adversary's surveillance capabilities. The experimental results showed that the proposed algorithm outperformed the existing approaches. Although these works have utilized the nature and principles of game theory to determine optimal resource allocation strategies, most of them considered only rewards and penalties in their allocation strategies. Recent works \cite{Budget2017,Risk2017} specifically examined the effect of risk preferences on deterrence and analyzed the impact of the defender's cost on its investment, which demonstrated that the cost of actions cannot be ignored. Nonetheless, in the previous works \cite{alert,securityresource,lim3,lim6,reward,16}, the action cost was simply measured by the defense probability, which emphasizes the impact of defense rather than the action cost itself. Therefore, a game theoretic approach that includes the action cost as an independent term is required to perform resource allocation in the security area. \section{Problem Description}\label{sec:problem} This paper considers a common scenario with a defender and an attacker. The defender's responsibility is to protect the security of $N$ targets using $M$ resources, so it allocates resources to targets as its action. By contrast, the attacker's intention is to attack the targets, and such attacks also cost resources. For both sides, the benefit of consuming resources can be measured in terms of the security gain. The resources can be computing, storage, energy or even monetary units, and the security gain indicates the return of protecting the targets by consuming resources. Although the units of resources and returns are different, they can be abstracted into numerical values by mathematical methods. 
In this paper, we put emphasis on analyzing the relationship between them by setting various parameter configurations to simulate different scenarios. For example, if the defender allocates resources to a target $i$, this target will be relatively more secure than a target not covered by resources, which can be configured with a larger security gain. Therefore, the defender obtains a security gain by expending resources, which can be abstracted as a limited resource allocation problem, that is, the problem of how the defender should allocate $M$ resources to protect $N$ targets when $M$ is far less than $N$. The defender wants to achieve the greatest security gain while minimizing resource consumption; therefore, this is a trade-off problem between protecting targets and consuming resources. Table \ref{tab1} lists the parameters used in this paper. $T=\{1,...,i,...,N\}$ is the set of active targets; $i$ denotes one target; $R_i^m$ and $P_i^m$ are the defender's reward and penalty, respectively, for an attack on this target; and $C_i^a$ and $C_i^m$ are the resources required to be expended by the attacker and the defender, respectively, to best protect target $i$. $A$ is the attacker, who commits to a strategy $\textbf{\textit{p}} = \{p_1,p_2,...,p_N\}$, where $p_i$ is the probability of an attack on target $i$. $D$ is the defender, who commits to a strategy $\textbf{\textit{q}} = \{q_1,q_2,...,q_N\}$, where $q_i$ is the probability of protecting target $i$. We take $\sum_{i \in T} q_i C_i^m \le M$ to represent the constraint on the defender's available resources, where $q_i C_i^m$ represents the resources allocated to target $i$ and $M$ represents the maximum quantity of available resources. 
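As an illustration of this strategy representation and of the resource constraint $\sum_{i \in T} q_i C_i^m \le M$, the following Python sketch generates hypothetical target parameters and checks a candidate strategy against the budget. The function names (`make_targets`, `respects_budget`) and the sampling ranges are illustrative assumptions consistent with the parameter ranges used later in the paper, not part of the model itself.

```python
import random

def make_targets(n, gamma=0.1, seed=0):
    """Generate hypothetical target parameters: attack reward R_i^a in [1, 10],
    attack penalty P_i^a in [-10, -1], and action costs C_i^m, C_i^a drawn from
    ranges proportional to gamma (illustrative assumption)."""
    rng = random.Random(seed)
    return [{"R_a": rng.uniform(1, 10),
             "P_a": rng.uniform(-10, -1),
             "C_m": rng.uniform(0, 2 * gamma),
             "C_a": rng.uniform(0, gamma)} for _ in range(n)]

def respects_budget(q, targets, M):
    """Check the defender's resource constraint: sum_i q_i * C_i^m <= M."""
    assert len(q) == len(targets)
    return sum(qi * t["C_m"] for qi, t in zip(q, targets)) <= M

targets = make_targets(200, gamma=0.1)
M = 0.1 * len(targets)          # budget proportional to N: M = gamma * N
q_none = [0.0] * len(targets)   # protecting nothing trivially fits the budget
print(respects_budget(q_none, targets, M))  # True
```

A full-coverage strategy $q_i = 1$ for all $i$ may or may not fit the budget depending on the drawn costs, which is exactly the tension the model formalizes.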
\begin{table}[h] \centering \caption{\bf Parameter descriptions} \begin{tabular}{|c|l|} \hline {\bf Parameter} & {\bf Description} \\ \hline $T$ & set of targets \\ $N$ & number of targets in $T$ \\ $A$ & attacker \\ $D$ & defender \\ $p_i$ & attack probability for target $i$ \\ $q_i$ & defense probability for target $i$ \\ $C_i^m$ & resources allocated to protect target $i$ \\ $M$ & maximum quantity of available resources\\ \hline \end{tabular} \label{tab1} \end{table} In this way, our problem is transformed into computing a reasonable defense probability distribution subject to the defender's resource constraints based on known parameters, including the resource constraint, the number of targets, the reward of protection, the penalty of protection, the cost of protection and the cost of attack for the set of targets. \section{Model Formulation}\label{sec:model} To solve the given problem, we construct a Game Theoretic Resource Allocation (GTRA) model, as shown in Fig. \ref{fig1}. The input parameters are discussed in the above section. After the parameters are input, the proposed GTRA model computes the defender's possible defense probability distribution and the attacker's possible attack probability distribution. \begin{figure} \caption{\textbf{The game theoretic resource allocation interface.}} \label{fig1} \end{figure} For the computation process, a Stackelberg game is used to model the interaction between the defender and the attacker. Then, the game payoff functions are built from the input parameters and are designed as strategic rules. Next, the QR model is used to simulate an attacker of an adversarial nature. In addition, an iterative genetic algorithm is utilized to obtain the equilibrium game strategy. 
\subsection{Stackelberg Game} Game theory \cite{11} is widely used to analyze problems in which all players in a conflict attempt to maximize their payoffs by changing their strategies based on the reactions of their adversaries. A Stackelberg game is a common game instance in which players select strategies sequentially: the leader moves first, and the follower responds accordingly. In this paper, the defender and the attacker are the two rival roles. They are in conflict over the targets' security, and both attempt to maximize their own payoffs while allocating the fewest resources to the targets. Through game theoretic deduction, the defender first decides how to allocate resources to cover the targets; then, the attacker selects the targets to attack after observing the defender's strategy. The rivalry, the pursuit of maximum payoffs and the sequence of actions make our problem fit naturally into the framework of a Stackelberg game; thus, the GTRA model is built on a Stackelberg game. In a Stackelberg game, each player selects the action with the greatest payoff, which is defined as the player's return after taking the selected action. This payoff usually consists of reward, penalty and cost. In the proposed GTRA model, both the defender and the attacker can take two actions, so their payoffs for a target $i$ can be represented by a $2 \times 2$ payoff matrix, as shown in Table \ref{tab2}. Clearly, there are four cases corresponding to the attacker's two actions (Attack or Not) and the defender's two actions (Protect or Not), which are represented by the four cells. Each cell contains two values separated by a comma: the first is the attacker's payoff, and the second is the defender's. In contrast to previous payoff matrices, we include the action cost to measure resource allocation directly. 
\begin{table}[bhtp] \footnotesize \newcommand{\tabincell}[2]{\begin{tabular}{@{}#1@{}}#2\end{tabular}} \caption{\textbf{Payoffs of the two players for target \textit{i}}} \label{tab2} \centering \begin{tabular}{|c|c|c|} \hline ~ & \tabincell{c}{\textbf{Protect $(q_i)$}}&\tabincell{c}{\textbf{Not Protect $(1-q_i)$}} \\ \hline \tabincell{c} {\textbf{Attack}\\ $(p_i)$} & \tabincell{c}{$-\alpha P_i^a+(1-\alpha) R_i^a-C_i^a$, \\ $\alpha P_i^a-(1-\alpha) R_i^a-C_i^m$}& \tabincell{c}{$R_i^a-C_i^a$,\\ $-R_i^a$}\\ \hline \tabincell{c} {\textbf{Not Attack} \\ $(1-p_i)$} & {0 , $-C_i^m$ }&{0 , 0}\\ \hline \end{tabular} \end{table} {\bf Case 1:} $\{$Attack, Protect$\}$. The attacker launches an attack on target $i$, and the defender protects it simultaneously. In this case, the attacker's benefit is $-\alpha P_i^a + (1 - \alpha )R_i^a$, and the defender's benefit is $\alpha P_i^a - (1 - \alpha )R_i^a$, where $\alpha$ is the accuracy of attack prediction, $P_i^a$ is the attack penalty, and $R_i^a$ is the attack reward. {\bf Case 2:} $\{$Attack, Not Protect$\}$. The attacker launches an attack on target $i$, and the defender does not protect it. In this case, the attacker will not be punished, and its payoff is the difference between $R_i^a$ and its cost. The defender's payoff is $-R_i^a$ alone, without a cost, because no protection is attempted. {\bf Case 3:} $\{$Not Attack, Protect$\}$. The attacker does not launch an attack on target $i$, but the defender protects it. In this case, the attacker's payoff is zero due to the absence of an attack, and the defender's payoff is the negative value corresponding to the cost of the consumed security resources, $-C_i^m$, with no benefit. {\bf Case 4:} $\{$Not Attack, Not Protect$\}$. The attacker does not launch an attack on target $i$, and the defender does not protect it. In this case, each player's payoff is zero because neither performs an action. 
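To make the bookkeeping of Table \ref{tab2} concrete, the sketch below combines the four cases with the action probabilities for a single target and checks the result against the simplified closed forms of Eqs. (1) and (2). This is an illustrative Python sketch; the function name `payoffs` and the sample parameter values are ours, not part of the paper's notation.

```python
def payoffs(p, q, alpha, R_a, P_a, C_a, C_m):
    """Expected single-target payoffs (attacker, defender) built from the
    2x2 matrix of Table 2 by weighting each case with its probability."""
    a_ap = -alpha * P_a + (1 - alpha) * R_a - C_a   # {Attack, Protect}, attacker
    d_ap = alpha * P_a - (1 - alpha) * R_a - C_m    # {Attack, Protect}, defender
    a_an, d_an = R_a - C_a, -R_a                    # {Attack, Not Protect}
    d_np = -C_m                                     # {Not Attack, Protect}, defender
    # {Not Attack, Not Protect} contributes zero to both players.
    U_A = p * q * a_ap + p * (1 - q) * a_an
    U_M = p * q * d_ap + p * (1 - q) * d_an + (1 - p) * q * d_np
    return U_A, U_M

# Sanity check against the simplified forms of Eqs. (1) and (2):
p, q, alpha = 0.3, 0.6, 0.8
R_a, P_a, C_a, C_m = 5.0, -4.0, 0.5, 0.7
ua, um = payoffs(p, q, alpha, R_a, P_a, C_a, C_m)
assert abs(um - (q * (alpha * p * (P_a + R_a) - C_m) - p * R_a)) < 1e-9
assert abs(ua - p * (-alpha * q * (P_a + R_a) + (R_a - C_a))) < 1e-9
```

The two assertions verify the algebraic simplification used in the text: expanding the case-weighted sums collapses to $q_i[\alpha p_i(P_i^a + R_i^a) - C_i^m] - p_i R_i^a$ for the defender and $p_i[-\alpha q_i(P_i^a + R_i^a) + (R_i^a - C_i^a)]$ for the attacker.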
To distinguish different targets, one work \cite{3} considered targets with different noncorrelated security assets. Motivated by that study, we label targets with different security assets in the form of distinct rewards, penalties and action costs for the defender and attacker. A player's total payoff is the combination of the four separate cases. In combination with the attack probability, defense probability and payoff items shown in Table \ref{tab2}, the total payoff functions of the defender and the attacker are given in \eqref{equ:1} and \eqref{equ:2}, respectively. These two utility functions differ from those used in many previous studies because the players' action costs are directly included in them. \begin{align} \begin{split} &\scalebox{0.9}{$U_M = \sum\limits_{i \in {\rm T}}{{{p_i}{q_i}[\alpha P_i^a-(1-\alpha )R_i^a-C_i^m]}-{p_i}(1-{q_i})R_i^a}$} \\ &\scalebox{0.9}{$-(1-{p_i}){q_i}C_i^m = \sum\limits_{i \in {\rm T}} {{q_i}[\alpha {p_i}(P_i^a + R_i^a) - C_i^m] - {p_i}R_i^a}$} \end{split} \label{equ:1} \end{align} \begin{align} \begin{split} &\scalebox{0.9}{$ {U_A = \sum\limits_{i \in {\rm T}} {{p_i}} {q_i}[ - \alpha P_i^a + (1 - \alpha )R_i^a - C_i^a]} + {p_i}(1 - {q_i})*$} \\ &\scalebox{0.9}{$ (R_i^a - C_i^a) = \sum\limits_{i \in {\rm T}} {{p_i}} [ - \alpha {q_i}(P_i^a + R_i^a) + (R_i^a - C_i^a)]$} \end{split} \label{equ:2} \end{align} The objective of each player is to maximize its own payoff by designing an optimal strategy. When both players achieve their maximum payoffs, the corresponding solution to the problem is called the Nash equilibrium \cite{11}. {\bf Definition} Consider a game $G=\{s_1,...,s_n;u_1,...,u_n\}$ with $n$ players. 
If, for a strategy profile $\{s_1^*,...,s_n^*\}$, the strategy $s_i^*$ of every player $i$ is either the optimal strategy for that player or a strategy that is no worse than any of the other $(n-1)$ strategies, then that strategy profile is called a Nash equilibrium (NE) strategy profile. The NE of a Stackelberg game can be derived by applying backward induction \cite{2}, which involves reasoning from the end of a situation to determine the sequence of optimal strategies. In this context, we first characterize the attacker's best response and then deduce the defender's protection strategy from it, as follows. Follower: Attacker side. The attacker observes the defender's strategy and designs a greedy strategy to maximize its payoff. Formally, for any given $\textbf{\textit{q}} \in S_M$, the attacker's task is to solve the optimization problem in \eqref{equ:3}. \begin{eqnarray} \begin{split} & \textbf{\textit{p}}(\textbf{\textit{q}}) = \mathop{\arg\max}\limits_{\textbf{\textit{p}}} \; {U_A}(\textbf{\textit{p}},\textbf{\textit{q}}) \\ & \text{subject to} \qquad 0 \le {p_i} \le 1 \end{split} \label{equ:3} \end{eqnarray} Leader: Defender side. The defender knows that the attacker will respond greedily. Therefore, the defender designs a protection strategy based on the attacker's potential best response. Formally, the defender needs to solve the optimization problem in \eqref{equ:4}. The first constraint states that the total quantity of resources available to the defender is no more than $M$. \begin{eqnarray} \begin{split} & \textbf{\textit{q}} = \mathop{\arg\max}\limits_{\textbf{\textit{q}}} \; {U_M}(\textbf{\textit{p}}(\textbf{\textit{q}}),\textbf{\textit{q}}) \\ & \text{subject to} \qquad \sum\limits_{i \in T} {{q_i}C_i^m} \le {M} \\ & \qquad \qquad \qquad \;\, 0 \le {q_i} \le 1 \end{split} \label{equ:4} \end{eqnarray} We derive the NE from the above two sequential steps. 
We derive $\textbf{\textit{q}}^\textbf{*}$ by solving \eqref{equ:4}; then, $\textit{\textbf{p}}^*$ is derived as $\textbf{\textit{p}}(\textbf{\textit{q}}^*)$ by solving \eqref{equ:3}. Finally, the strategy combination $(\textit{\textbf{p}}^*,\textit{\textbf{q}}^*)$ is the equilibrium game strategy for the Stackelberg game, and the final quantity of resources allocated to target $i$ is $q_i^* C_i^m$. \subsection{Quantal Response} The previous analysis is performed under the assumption that the attacker is perfectly rational and develops its strategy with complete knowledge of the defender's strategy. However, in the real world, the attacker will not always be perfectly rational, since the attacker cannot always know the defender's strategy. Consequently, the defender is unsure whether the attacker will operate according to the predicted strategy $\textbf{\textit{p}(\textit{q})}$. If the attacker is not perfectly rational and chooses a strategy that deviates slightly from the rational one, the defender's payoff may decrease. Clearly, the defender is unwilling to accept a lower payoff while doing nothing. To simulate a boundedly rational adversary, many behavior models have been proposed, including quantal response (QR), SUQR and prospect theory (PT), all of which are commonly used. In our previous work \cite{liu2017response}, we studied the defender's response to these models in a game theoretic setting and found that the response to the QR model is the most cautious, in that the defense probability is relatively higher than under the other two bounded rationality models. Hence, the QR model is introduced into the GTRA model to simulate the attacker's adversarial nature and to improve the proposed model. When the QR model is applied, the noise in a boundedly rational attacker's strategy is controlled by $\lambda$. 
$\lambda=0$ represents a uniform random probability distribution over the attacker's possible strategies, while $\lambda \to \infty$ represents a perfectly rational attacker. Thus, the attacker's probability of attacking target $i$ becomes \eqref{equ:5}. \begin{eqnarray} {p_i} = \frac{{{e^{\lambda {U_A}({q_i})}}}}{{\sum\nolimits_{j = 1}^n {{e^{\lambda {U_A}({q_j})}}} }} \label{equ:5} \end{eqnarray} Furthermore, the defender's utility function becomes \eqref{equ:8}. \begin{align} \begin{split} &\scalebox{0.9}{$U_M = \sum\limits_{i \in T} {q_i} [\alpha {p_i}(P_i^a + R_i^a) - C_i^m] - {p_i} R_i^a$} \\ &\scalebox{1}{$= \sum\limits_{i \in T} \left[\frac{[\alpha {q_i}(P_i^a + R_i^a) - R_i^a]\, e^{-\lambda[\alpha {q_i}(P_i^a + R_i^a) - (R_i^a - C_i^a)]}}{\sum\nolimits_{j = 1}^n e^{-\lambda[\alpha {q_j}(P_j^a + R_j^a) - (R_j^a - C_j^a)]}} - {q_i} C_i^m\right]$} \end{split} \label{equ:8} \end{align} In summary, the proposed GTRA model is used to solve the defender's problem of how to allocate $M$ units of resources to maximize the defender's utility function, as illustrated in \eqref{equ:9}. \begin{eqnarray} \max\limits_{\textbf{\textit{q}}}\;U_M \qquad \qquad {\rm s.t.}\left\{ \begin{array}{l} \sum\limits_{i \in T} {{q_i} C_i^m \le M } \\ 0 \le {q_i} \le 1,{\rm{ }}\forall i \end{array} \right. \label{equ:9} \end{eqnarray} \subsection{Algorithm} Since the defender's objective function in \eqref{equ:9} corresponds to a nonlinear optimization problem, the optimal solution is extremely difficult to find. 
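Before turning to the search procedure, the QR response of Eq. (5) and the resulting defender objective of Eq. (8) can be sketched numerically. This is an illustrative Python sketch; the target tuples, parameter values and function names are our assumptions, not part of the paper.

```python
import math

# Hypothetical targets: (R_a, P_a, C_a, C_m) per target; values are illustrative.
targets = [(5.0, -4.0, 0.5, 0.7), (8.0, -2.0, 0.3, 0.4), (3.0, -6.0, 0.2, 0.1)]

def qr_attack_probs(q, alpha, lam):
    """Eq. (5): p_i proportional to exp(lambda * U_A(q_i)), where
    U_A(q_i) = -alpha*q_i*(P_a + R_a) + (R_a - C_a)."""
    u = [-alpha * qi * (P_a + R_a) + (R_a - C_a)
         for qi, (R_a, P_a, C_a, C_m) in zip(q, targets)]
    m = max(lam * x for x in u)          # subtract the max for numerical stability
    w = [math.exp(lam * x - m) for x in u]
    s = sum(w)
    return [x / s for x in w]

def defender_utility(q, alpha, lam):
    """Eq. (8): U_M with the QR probabilities substituted for p."""
    p = qr_attack_probs(q, alpha, lam)
    return sum(pi * (alpha * qi * (P_a + R_a) - R_a) - qi * C_m
               for pi, qi, (R_a, P_a, C_a, C_m) in zip(p, q, targets))

q = [0.5, 0.2, 0.8]
print(qr_attack_probs(q, alpha=0.8, lam=0.0))  # lambda = 0: uniform distribution
```

As $\lambda$ grows, the distribution concentrates on the target with the highest attacker utility, recovering the perfectly rational best response; `defender_utility` is exactly the nonlinear objective that the iterative search below must maximize subject to the budget.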
As a classic algorithm for searching for an approximately optimal solution \cite{NFV}, the genetic algorithm (GA) provides an alternative approach. GA is a stochastic global search and optimization method that mimics natural biological evolution. However, a single GA run typically finds a near-optimal solution rather than a globally optimal one. Therefore, in this paper, we utilize Algorithm \ref{algorithm1} to compute the defender's equilibrium strategy in the proposed GTRA model. \begin{algorithm}[!h] \footnotesize \caption{Iterative Genetic Algorithm (IGA)} \label{algorithm1} \begin{algorithmic}[1] \STATE \textbf{Initialization:} \\number of targets $\rightarrow N$; \\number of iterations $\rightarrow times = 10$; \\resource constraint $\rightarrow M$; \\iteration counter $\rightarrow i = 0$; \\$Ud^* \leftarrow -\infty$ \STATE \textbf{Iteration:} \WHILE{$i < times $} \STATE $(q_i,Ud_i) \leftarrow GA(MultiObj, N, M)$ \IF{$Ud_i > Ud^*$} \STATE $Ud^* = Ud_i$; \STATE $q^* = q_i$; \ENDIF \STATE $i \leftarrow i + 1$ \ENDWHILE \RETURN $(q^*,Ud^*)$ \end{algorithmic} \end{algorithm} In addition to the parameters of the utility function discussed in the previous section, the number of targets, the number of iterations and the resource constraint are initialized before the iteration process. In each iteration, we find a locally optimal strategy $q_i$ and the corresponding utility $Ud_i$ using the $GA()$ function in MATLAB. Then, we record the current maximum after each iteration. When the iteration number $i$ reaches the given maximum $times$, the best strategy found, $q^*$, and the corresponding utility $Ud^*$ are obtained. In general, the probability of reaching the global optimum increases as the number of iterations $times$ increases. To better understand the equilibrium game strategy, we illustrate the evolutionary behavior of the defender by adopting the phase plane of replicator dynamics \cite{replicator}. 
First, to describe the payoffs of the defender and the attacker in each case tersely, Table \ref{tab2} is simplified into Table \ref{tab3}, where $a = -\alpha P_i^a+(1-\alpha) R_i^a-C_i^a$, $b = \alpha P_i^a-(1-\alpha) R_i^a-C_i^m$, $c = R_i^a-C_i^a$, $d = -R_i^a$ and $f = -C_i^m$. Then, the replicator dynamics equations of the attacker and the defender are expressed as \eqref{dr}, where $\bar{U}_A$ and $\bar{U}_M$ represent the average payoffs. The evolutionary equilibrium can then be obtained by solving the equations $\dot{p}=0$ and $\dot{q}=0$. \begin{table}[!h] \centering \caption{\bf Simplified payoffs for target $i$} \begin{tabular}{|c|c|c|} \hline ~ & {\bf Protect $(q)$} & {\bf Not Protect $(1-q)$} \\ \hline Attack ($p$) & $a,b$ & $c,d$ \\ \hline Not Attack ($1-p$) & $0,f$ & $0,0$ \\ \hline \end{tabular} \label{tab3} \end{table} \begin{eqnarray} \setlength{\abovedisplayskip}{0pt} \begin{split} \label{dr} & \dot{p}=p(U_A - \bar{U}_A)=p(1-p)[qa+(1-q)c] \\ & \dot{q}=q(U_M - \bar{U}_M)=q(1-q)[p(b-d)+(1-p)f] \end{split} \end{eqnarray} \begin{figure} \caption{\textbf{Evolution process of the defender's behavior.}} \label{fig3} \end{figure} Fig. \ref{fig3} presents the evolutionary equilibrium of the defender's strategy, which can be seen as the process of adapting the initial strategy to the NE strategy. The smallest circle around the NE point $(0.012,0.2851)$ is the entire feasible region of the solution. \section{Numerical Study}\label{sec:experiment} Since the focus of this paper is to explore the impact of different parameter configurations, we perform a numerical analysis directly to validate the proposed method. We first compare the proposed utility function with a utility function that does not consider the action cost. Then, we compare the NE strategy computed from the proposed utility function with four other resource allocation strategies. 
In each group of experiments, 100 game instances under the same conditions are considered, and the average value is taken as the result. In each game instance, the number of iterations in Algorithm \ref{algorithm1} is set to 10\footnote{We conducted an experiment with 100 iterations and found that the maximum value was usually found within the first ten iterations.}, and the maximum value is taken. \subsection{Comparison of Utility Functions} To assess the impact of the action cost on the strategy, we compare utility functions with and without the action cost. Specifically, to explore the influence of the relationship between the two players' action costs on each player's strategy, we design four groups of experiments, as shown in Table \ref{tab:Ufuncs}. \begin{table}[!h] \centering \caption{\bf Four utility function scenarios} \begin{tabular}{|c|c|c|c|} \hline {No.} &{Relation}&{$C^m$}& {$C^a$} \\ \hline 1 & $C_i^m > C_i^a$ & $\gamma< C_i^m <2\gamma$ & $ 0< C_i^a <\gamma$\\ \hline 2 & $C_i^m < C_i^a$ & $ 0< C_i^m <\gamma$ & $\gamma< C_i^a <2\gamma$ \\ \hline 3 & $C_i^m = C_i^a$ & $ 0< C_i^m <\gamma$ & $C_i^a = C_i^m$ \\ \hline 4 & $NoCost$ & $C_i^m =0$ & $C_i^a =0$ \\ \hline \end{tabular} \label{tab:Ufuncs} \end{table} \noindent The first scenario corresponds to a utility function in which the cost of defense is greater than the cost of attack. Conversely, the second scenario corresponds to a utility function in which the cost of attack is greater than the cost of defense. The third scenario corresponds to a utility function in which the cost of attack and the cost of defense are equal and greater than 0. The fourth scenario corresponds to the utility function used in previous works \cite{alert,securityresource,lim3,lim6,reward,16}, in which the action costs $C_i^a$ and $C_i^m$ are $0$ and the resource consumption is simply the sum of the defense probabilities. 
\begin{figure} \caption{\textbf{Solution quality comparison of different utility functions.}} \label{subfig:Um_n} \label{subfig:Um_r} \label{subfig:Re_n} \label{subfig:Re_r} \label{subfig:Effect_n} \label{subfig:Effect_r} \label{fig:Solution} \end{figure} We measure the solution quality in each scenario in terms of the defender's average utility and the average effectiveness over all 100 game instances, where the \textit{effectiveness} is defined as the average number of protected targets per resource, as shown in \eqref{equ:eff}. The growth rate, defined in \eqref{equ:effGrowth}, measures the difference in solution quality between utility function scenarios, where $a$ denotes the solution quality for the utility function without action cost and $b$ denotes the solution quality for a utility function that includes the action cost. The parameters used in this paper are the same as those used in the previous study \cite{reward}: the reward $R_i^a$ is chosen randomly from a uniform distribution over $[1,10]$, the penalty $P_i^a$ is chosen randomly from a uniform distribution over $[-10,-1]$, and the resource constraint is proportional to the number of targets, $M = \gamma N$. We assume that the total available resources, including the total cost, are insufficient to protect all targets; therefore, the value of the parameter $\gamma$ is set to less than 1. \begin{eqnarray} effectiveness = \frac{number\_of\_covered\_targets}{consumed\_resources} \label{equ:eff} \end{eqnarray} \begin{eqnarray} Growth\_Rate = \frac{a - b}{b} \label{equ:effGrowth} \end{eqnarray} Fig. \ref{fig:Solution} shows the solution quality results for the various utility function scenarios introduced in Table \ref{tab:Ufuncs} with different parameter configurations. The defender's average utility, the defender's resource consumption and the effectiveness are displayed on the y-axes. On the x-axes, Figs. 
\ref{subfig:Um_n}, \ref{subfig:Re_n} and \ref{subfig:Effect_n} show the results of varying the number of targets ($N$) while keeping the ratio ($\gamma$) of resources ($M$) to $N$ fixed at $0.1$. Figs. \ref{subfig:Um_r}, \ref{subfig:Re_r} and \ref{subfig:Effect_r} show the results of varying the ratio of resources to targets while keeping the number of targets fixed at $200$. The corresponding solution qualities in the various utility function scenarios are presented as groups of bars. Figs. \ref{subfig:Um_n}, \ref{subfig:Re_n} and \ref{subfig:Effect_n} show the following. (1) The defender's utility ($U_m$) increases as the number of targets ($N$) increases. The utility is larger in the first scenario than in the other scenarios under the same conditions, and it is nearly stable in the fourth scenario, regardless of $N$, which indicates that a greater defense cost yields a larger payoff under the same conditions. (2) The defender's resource consumption increases as the number of targets ($N$) increases. The resource consumption is larger in the first scenario than in the second and third scenarios, and it is strongly proportional to $N$ in the fourth scenario. This illustrates the cumulative impact of action costs over a massive number of targets. (3) The effectiveness does not vary regularly with the number of targets ($N$); it varies inversely with the resource consumption in the first scenario, in which the action cost is considered in the utility function, while a nearly constant effectiveness is maintained in the fourth scenario. This suggests that, for the same resource expenditure, the fourth scenario, in which the action costs are not considered in the utility function, protects the fewest targets. Figs. \ref{subfig:Um_r}, \ref{subfig:Re_r} and \ref{subfig:Effect_r} show the following. (1) The defender's utility ($U_m$) increases as the ratio of resources to the number of targets increases. 
The defender's utility is larger than in the second and third scenarios under the same conditions when $C_i^m > C_i^a$, and it is nearly stable when there is no action cost in the utility function, regardless of the resource-to-target ratio. This reveals that utility functions including the action cost provide more utility than those without it. Moreover, the number of resources has a positive effect on the defender's utility. (2) The defender's resource consumption increases as the resource-to-target ratio increases; it is larger in the first scenario than in the second and third scenarios, and it is strongly proportional to the resource-to-target ratio in the fourth scenario. This again shows the cumulative impact of action costs over a massive number of targets. (3) The effectiveness decreases with an increasing resource-to-target ratio. The effectiveness varies inversely with resource consumption in all four scenarios, and it is smaller in the first scenario than in the second and third scenarios. This suggests that the effectiveness decreases with an increasing number of resources under the same conditions and is lowest when the utility does not include the action cost. \begin{figure} \caption{\textbf{Difference in solution quality when the utility function does not consider the action cost.}} \label{subfig:Um_nG} \label{subfig:Um_rG} \label{subfig:Re_nG} \label{subfig:Re_rG} \label{subfig:Effect_nG} \label{subfig:Effect_rG} \label{fig:SolutionGrowth} \end{figure} Furthermore, we illustrate the difference in solution quality when the utility function does not consider the action cost in Fig. \ref{fig:SolutionGrowth}. Equation \eqref{equ:effGrowth} shows that if the growth rate is less than zero, then $a$ is less than $b$. An overall comparative analysis of the results shown in Fig. 
\ref{fig:SolutionGrowth} indicates that when the utility function does not include the action cost, the defender's utility and the effectiveness are lower, and the defender's resource consumption is larger. The difference becomes especially evident as the number of resources increases (the resource-to-target ratio increases) or the number of targets increases ($N$ increases). Overall, a utility function that considers the action cost, regardless of the relationship between defense and attack costs, provides the defender with a larger payoff and higher effectiveness. Additionally, although it is possible to represent the resources allocated to targets using the defense probabilities, as done in the utility functions implemented in many previous studies, sometimes the resource metric is not the same as the defense probability. For example, when 10 GB of storage is required to run intrusion prevention servers to protect a base station, this requirement cannot be represented as a probability. However, we can set $C_i^m = 10$ directly, and the defense probability then represents the probability that this target may be covered by this server. Hence, adding the action cost to the utility function is beneficial. In the following sections, we present a series of comparative analyses of various strategies based on our proposed utility function that considers the action cost. \subsection{Comparison of Allocation Strategies} A system with high security requirements is considered; e.g., government systems usually require a high level of consistency and need to be able to resist various attacks. The defender is usually equipped with high-performance defense modules with powerful processing capabilities, so a relatively large protection reward and a small protection penalty, which can be represented as $P_i^a >R_i^a $ \cite{reward}, are chosen in our study. 
Since all three scenarios regarding the relationship between the defense cost and the attack cost have a similar impact on the solution quality, we perform our further study based on the case in which the defense cost is less than the attack cost, represented by $C_i^m < C_i^a$. In the previous experiments, the reward and penalty varied from 1 to 10 and the action cost from 0.1 to 0.4, so the numerical gap between the reward (or penalty) and the cost was large. Here, we limit this gap by randomly choosing values from $C_i^m \in [0.01,0.02]$, $C_i^a \in [0.02,0.03]$, $P_i^a \in [1.4,1.6]$, and $P_i^m \in [0.4,0.6]$. These values correspond to a scenario in which a unit of defense resource can protect at most 100 targets and a unit of attack resource can attack at most 50 targets. If an attack fails, the attacker incurs a penalty of approximately 1.4, and if the protection fails, the defender incurs a penalty of approximately 0.4. In this case, the attacker can be seen as a risk-averse player who aims to minimize the risk loss \cite{Risk2017}. To further evaluate the utility function that includes the action cost, we simulate four extreme resource allocation strategies in which the defender does not follow the NE strategy. \subsubsection{PartOneS strategy} The defender cannot protect all the targets due to resource limitations, so it selects at most $k$ targets to protect. The remaining $N-k$ targets are not protected. The defense probability distribution is obtained from \eqref{partones}. In this strategy, all $M$ available units of resources are consumed. \begin{eqnarray} \scalebox{0.9}{$ q_i = \left\{ \begin{array}{l} 1,\quad i = 1,...,k - 1;\\ ({M} - \sum\nolimits_{j = 1}^{k - 1}{q_j * C_j^m})/{C_i^m} ,\quad q_j=1,\quad i = k; \\ 0,\quad i = k + 1,...,n. \end{array} \right. $} \label{partones} \end{eqnarray} \subsubsection{Rand strategy} The defender protects targets following a random probability distribution according to \eqref{rand}. 
In this strategy, the quantity of resources consumed is less than $M$. \begin{equation} \scalebox{1}{$ q_i = \frac{Rand({q_i})*{M}}{\sum\nolimits_{j = 1}^n {Rand({q_j})*C_j^m}} , i=1,2,...,n $} \label{rand} \end{equation} \subsubsection{Average strategy} Resources are allocated to each target equally; the defense probability distribution obeys \eqref{average}. In this strategy, all $M$ available units of resources are consumed. \begin{eqnarray} q_i*C_i^m={M}/{n},\quad i=1,2,...,n \label{average} \end{eqnarray} \subsubsection{AllOneS strategy} The resource limitation is relaxed, and the defender protects all targets, as expressed in \eqref{allones}; we abstract this strategy as \textit{AllOneS}. In this strategy, the quantity of resources consumed is greater than $M$. \begin{eqnarray} q_i=1,\quad i=1,2,...,n \label{allones} \end{eqnarray} Our strategy, obtained from the proposed GTRA model, is summarized in \eqref{our}. It is computed using the IGA given in Algorithm \ref{algorithm1}. In our strategy, the quantity of resources consumed is no more than $M$. \begin{eqnarray} \textit{\textbf{q}}=IGA(U_M,n,M), \quad \sum q_i * C_i^m \leq M \label{our} \end{eqnarray} We start by comparing the utility of the defender with that of the attacker. The defender's resources are considered to be limited, whereas the attacker's resources are unlimited. One hundred game instances, with the number of targets ranging from 10 to 1000 in increments of 10, are considered. The defender's utility results for the different strategies are displayed in Fig. \ref{fig4}. The vertical axis represents the defender's utility $U_M$, and the horizontal axis shows the number of targets $N$. \begin{figure} \caption{\textbf{Comparison of the defender's utility.}} \label{fig4} \end{figure} When the number of targets is below 120, \textit{AllOneS} provides the defender with the greatest utility. 
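For reference, the baseline allocation rules \eqref{partones}, \eqref{rand}, \eqref{average} and \eqref{allones} can be sketched as follows. Details such as the cost vector are our own illustrative assumptions; the NE strategy itself comes from the iterative genetic algorithm (Algorithm \ref{algorithm1}) and is omitted here.

```python
import random

# Sketch of the four baseline allocation rules; q[i] is the defense
# probability of target i, C[i] its defense cost, M the resource budget.
def part_ones(C, M):
    # Eq. (partones): fully protect targets until the budget runs out,
    # giving the boundary target k a fractional probability.
    q, spent = [0.0] * len(C), 0.0
    for i, c in enumerate(C):
        if spent + c <= M:
            q[i], spent = 1.0, spent + c
        else:
            q[i] = (M - spent) / c   # fractional coverage for target k
            break
    return q

def rand_alloc(C, M, seed=0):
    # Eq. (rand): random weights normalized against the budget.
    rng = random.Random(seed)
    r = [rng.random() for _ in C]
    denom = sum(ri * ci for ri, ci in zip(r, C))
    return [ri * M / denom for ri in r]

def average(C, M):
    # Eq. (average): each target receives M/n units of resource.
    return [M / (len(C) * c) for c in C]

def all_ones(C):
    # Eq. (allones): protect everything, ignoring the budget.
    return [1.0] * len(C)

C = [0.015] * 10   # hypothetical per-target defense costs
M = 0.06           # budget covering only part of the targets
spent_part = sum(qi * ci for qi, ci in zip(part_ones(C, M), C))
spent_avg = sum(qi * ci for qi, ci in zip(average(C, M), C))
```

As the paper notes, \textit{PartOneS} and \textit{Average} exhaust the budget exactly, while \textit{AllOneS} overshoots it.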
However, when the number of targets exceeds 312, \textit{AllOneS} provides the defender with the smallest utility because of its higher resource consumption. The NE strategy based on our GTRA model outperforms the other four strategies when the number of targets is greater than 120. Interestingly, the defender's utility decreases with an increasing number of targets in Fig. \ref{fig4}, while the opposite trend is seen in Fig. \ref{fig:Solution}. The difference between these two configurations is the range of parameters. When the reward or penalty is much larger than the cost, the defender's utility is larger, and the utility increases with the number of targets. By contrast, when the reward or penalty is only slightly larger than the cost, the defender's utility is smaller and potentially even negative; in this case, the utility decreases as the number of targets increases. Hence, the parameter configuration, such as the gap between the reward (or penalty) and the cost, influences the defender's utility. Regardless of the parameter configuration, the NE strategy based on our GTRA model is better than the other strategies in terms of the defender's utility. \subsection{Comparison in Terms of Various Evaluation Criteria} The NE strategy is obtained by finding the maximum utility for both players. In this subsection, we evaluate the vulnerability, coverage and effectiveness of our equilibrium strategy and the other four strategies. \subsubsection{Vulnerability} We first evaluate the vulnerability of the defender's various strategies. The $vulnerability$ is defined as a risk indicator for the targets, as shown in \eqref{equ:vul} \cite{vul}. \begin{eqnarray} vulnerability = \frac{success-failure}{success+failure} \label{equ:vul} \end{eqnarray} \noindent where $success$ and $failure$ denote the numbers of targets for which a launched attack is not detected or is detected, respectively. 
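The scalar metrics used in this comparison are direct to transcribe. The sketch below (our own naming, not the paper's code) implements the effectiveness \eqref{equ:eff}, the growth rate \eqref{equ:effGrowth} and the vulnerability \eqref{equ:vul} just defined.

```python
# Minimal transcription of the evaluation metrics; names are ours.
def effectiveness(covered_targets, consumed_resources):
    # Eq. (equ:eff): protected targets per unit of consumed resource.
    return covered_targets / consumed_resources

def growth_rate(a, b):
    # Eq. (equ:effGrowth): relative gap between quality a (no action
    # cost) and b (with action cost); negative means a < b.
    return (a - b) / b

def vulnerability(success, failure):
    # Eq. (equ:vul): success = undetected attacks, failure = detected
    # attacks; ranges over [-1, 1], lower is safer for the defender.
    return (success - failure) / (success + failure)

eff = effectiveness(80, 16)         # 5 protected targets per resource unit
vul_allones = vulnerability(0, 40)  # every attack detected, as for AllOneS
```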
The assumption is made that if the defender allocates resources to protect a target, then that target will be successfully protected against attack by the continuously operating defense system; otherwise, the attack will be successful. Clearly, a greater $success$ value and a lower $failure$ value indicate a more vulnerable strategy. Hence, the lower the $vulnerability$ is, the better the strategy. \begin{figure} \caption{\textbf{Vulnerability of 100 groups of instances.}} \label{fig6} \end{figure} Fig. \ref{fig6} shows that the $vulnerability$ of \textit{AllOneS} is $-1$, which implies that the number of successes is $0$. In this situation, the defender's protections cover all the targets, resulting in the most secure environment. When the number of targets (\textit{N}) is small, the \textit{NE} strategy achieves the most secure state; as \textit{N} increases, the $vulnerability$ increases because the available resources become insufficient to protect all targets. Additionally, once \textit{N} is greater than 400, the $vulnerability$ of \textit{NE} varies only slightly. These analyses reveal that \textit{NE} can be scaled up to protect a large number of targets. Furthermore, compared with \textit{PartOneS}, \textit{Rand}, and \textit{Average}, as the number of targets to protect increases such that there are insufficient resources to protect all of them, \textit{NE} performs better. It enables control of the trade-off between the security benefit and the resource consumption and focuses on protecting targets that are more likely to be attacked. As a result, \textit{NE} performs better than all the other strategies except \textit{AllOneS} in terms of the $vulnerability$ of the targets. \subsubsection{Coverage} We next evaluate the allocated resources' coverage of the targets. 
The \textit{coverage} is defined as the proportion of protected targets among the total targets, as shown in \eqref{equ:protec}, where the protected targets are those that are attacked by the attacker and also protected by the defender, those that are not attacked but protected, and those that are neither attacked nor protected, denoted by \textit{AP}, \textit{NP} and \textit{NF}, respectively, in Table \ref{tab4}. \begin{table}[h] \centering \caption{\bf Protection type of target $i$} \begin{tabular}{|c|c|c|} \hline \multicolumn{1}{|l|}{\bf ~ } & \multicolumn{1}{|l|}{\bf Protect}& \multicolumn{1}{|l|}{\bf Fail to Protect} \\ \hline Attack & AP & AF \\ \hline Not Attack & NP & NF\\ \hline \end{tabular} \label{tab4} \end{table} \begin{eqnarray} coverage = \frac{AP + NP + NF}{N} \label{equ:protec} \end{eqnarray} \begin{figure} \caption{\textbf{Coverage in 100 groups of instances.}} \label{fig7} \end{figure} Fig. \ref{fig7} shows the coverage results for the five strategies for 100 groups of experimental instances. \emph{AllOneS} covers all targets by protecting all of them, whereas \emph{NE} covers almost the fewest targets because \emph{NE} protects only risky targets attractive to the attacker. Thus, the number of protected targets is smaller than with the other strategies. The disadvantage of this strategy is that it cannot guarantee the absolute security of the targets, in contrast to \emph{AllOneS}. However, it may be useful for saving resources or improving the effectiveness with which those resources are used, especially when resources are valuable or limited. \subsubsection{Effectiveness} We now evaluate the effectiveness of the five strategies. The greater the effectiveness is, the better the strategy. For 100 groups of experimental instances, the quantities of resources consumed by each strategy are plotted in Fig. \ref{subfig:Res}. It is worth noting that \emph{AllOneS} consumes the most resources, and \emph{NE} consumes the least. 
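Likewise, the coverage metric \eqref{equ:protec} follows immediately from the outcome counts of Table \ref{tab4}; the counts in the sketch below are illustrative, not measured data.

```python
# Sketch of the coverage metric, Eq. (equ:protec), using the outcome
# counts of Table (tab4): AP, AF, NP, NF.
def coverage(AP, AF, NP, NF):
    # Fraction of targets counted as protected: attacked-and-protected,
    # not-attacked-but-protected, or neither attacked nor protected.
    N = AP + AF + NP + NF
    return (AP + NP + NF) / N

cov_allones = coverage(30, 0, 50, 20)  # AllOneS-like: no failed protections
cov_mixed = coverage(25, 10, 40, 25)   # some attacks slip through (AF > 0)
```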
The resources consumed by \textit{PartOneS} and \textit{Average} are equivalent since these two strategies use all the available resources. When the number of targets is increased to 1000, the quantity of resources consumed by \textit{AllOneS} is close to four times that consumed by \textit{NE}. When these values are applied to the real world, they represent a large amount of material or financial resources that must be expended by the defender. Consequently, our strategy aims to provide high effectiveness. \begin{figure} \caption{\textbf{Defender's effectiveness.}} \label{subfig:Effec} \label{subfig:Res} \label{fig:ReEffect} \end{figure} In Fig. \ref{subfig:Effec}, there is an evident upward trend in the effectiveness of \emph{NE} when the number of targets is less than 200, after which it gradually drops to a stable value with an increasing number of targets. Moreover, \emph{NE} has the highest effectiveness among all five strategies. These results suggest that, beyond a certain point, increasing the number of targets does not affect the effectiveness. In addition, although \textit{AllOneS} protects the most targets, its effectiveness is lower than that of \textit{NE} because it consumes more resources. \textit{AllOneS} may protect some targets that are not likely to be attacked, which may cause resources to be consumed without gaining benefits, thus decreasing the defender's effectiveness. Now, we combine the number of targets, coverage and vulnerability in Fig. \ref{subfig:NCV} and combine the number of targets, effectiveness and vulnerability in Fig. \ref{subfig:NEV}. From Fig. \ref{fig:NCEV}, it can be concluded that more targets must be protected to maintain a low vulnerability or to decrease the vulnerability. However, if the defender increases the number of protected targets, more resources will be required. Take \textit{AllOneS} as an example. 
The vulnerability of the targets is near zero, and the number of protected targets is the largest, but the number of covered targets per resource is low because of the high resource consumption. In this situation, to improve the security of the targets, the NE strategy obtained from the proposed GTRA model, which balances the security utility and the resource consumption, is the best choice for allowing the defender to utilize limited resources effectively. \begin{figure} \caption{\textbf{Comprehensive analysis of the number of targets, coverage, vulnerability and effectiveness.}} \label{subfig:NCV} \label{subfig:NEV} \label{fig:NCEV} \end{figure} \subsection{Parameter Analysis} The security utilities of the defender and the attacker are related not only to their strategies but also to certain specific parameters: the resource constraint $M$, the prediction accuracy $\alpha$ and the noise $\lambda$ in the attacker's rationality. \subsubsection{Resource constraint $M$} We generate 100 random game instances with 1000 targets and consider different quantities of resources to assess the impact of the resource constraint $M$ on the players' utilities. In Fig. \ref{fig:DifM}, the x-axes show the proportion of available resources relative to the maximum resources required, and the y-axes represent the players' utilities. Fig. \ref{fig:DifM} shows that when the resource proportion is zero, the defender's utility is the lowest, and the attacker's utility is the highest. As the proportion of available resources increases, the defender's utility increases, and the attacker's utility decreases. When the proportion reaches 40\%, the utilities of both players become stable. Hence, we conclude that for the case in which $R_i^m > P_i^m$ and $P_i^m > C_i^m$, 40\% of the maximum resources is an efficient rate of utilization for the defender. When the proportion is greater than 40\%, both players' utilities remain approximately stable. 
The jitter in the raw data is due to the aggregated analysis of resource consumption; it demonstrates that spending more resources to protect targets may be less risky, but beyond a point the cost of the resources consumed will exceed the benefit. The proposed GTRA model can compute the corresponding best resource proportions for different combinations of reward, penalty and cost. Therefore, the proposed model offers the defender an alternative means of gaining greater utility while saving resources, thereby improving the defender's outcome from the perspective of economics. When the defender needs to estimate the overall quantity of resources required to protect a massive number of targets, the proposed GTRA model can be used to compute the approximate quantity based on the configurations of all the targets and thus provide the defender with a game theoretical reference value. \begin{figure} \caption{\textbf{Impact of the quantity of resources on the players' utilities.}} \label{subfig:Um_M} \label{subfig:Ua_M} \label{fig:DifM} \end{figure} \subsubsection{Prediction accuracy $\alpha$} We generate 10 random game instances with 100 targets and vary the prediction accuracy $\alpha$ to assess its impact on the players' utilities. $\alpha$ is the accuracy with which attacks are predicted by the defender. Figs. \ref{subfig:Um_a} and \ref{subfig:Ua_a} show the differences in the players' utilities with varying $\alpha$ values (ranging from 0 to 1). The prediction accuracy $\alpha$ is plotted on the horizontal axis, and the player's utility is plotted on the vertical axis. \begin{figure} \caption{\textbf{Impact of $\alpha$ on the players' utilities.}} \label{subfig:Um_a} \label{subfig:Ua_a} \label{fig:Difa} \end{figure} As the prediction accuracy increases, the defender's utility increases, and the attacker's utility decreases. For a typical state in which $\alpha$ is 0.8, the defender's utility is -0.5323, and the attacker's utility is 0.411. 
The sum of the defender's utility and the attacker's utility is not equal to zero because our game is a non-zero-sum game. In this paper, we assume that the predictions are not fully accurate, so we take $\alpha$ to be 0.8 unless otherwise stated. \subsubsection{Noise $\lambda$ in the attacker's rationality} We generate 30 random game instances with 100 targets and vary $\lambda$ to assess its impact on the players' utilities. $\lambda$ represents the noise in the attacker's rationality during strategy planning. We vary $\lambda$ from 0 to 15 in increments of 0.5. In Fig. \ref{fig16}, the noise $\lambda$ in the attacker's rationality is the independent variable, and the player's utility is the dependent variable. Fig. \ref{fig16} shows that the larger $\lambda$ is, the greater the utility of the attacker and the lower the utility of the defender. Additionally, when $\lambda$ is greater than 4, both players' utilities remain nearly stable as $\lambda$ increases, especially that of the attacker. We may argue that if the attacker is sufficiently rational ($\lambda$ is sufficiently high), then the players' utilities are nearly constant. In this paper, to model irrational attack behavior, the value of $\lambda$ is set to 1.5 in the analysis \cite{27}. \begin{figure} \caption{\textbf{Impact of $\lambda$ on both players' utilities.}} \label{fig16} \end{figure} \section{Conclusion}\label{sec:conclusion} \subsection{Summary} In this paper, we investigate how to allocate resources to efficiently protect targets when the number of targets is greater than the number of resources. A game theoretic resource allocation (GTRA) model is constructed based on a Stackelberg game. 
In the proposed model, unlike in previous studies, an independent term (the action cost) is included in the game utility function, which makes the resource allocation more flexible and convenient. The proposed method correlates resource allocation with security by means of the game utility function, simulates the behavior of an adversarial attacker through the introduction of the QR model, and enables the computation of the Nash equilibrium (NE) strategy through an iterative genetic algorithm. In addressing these challenges, we draw the following conclusions: \begin{itemize} \item Including the action cost in the utility function provides the defender with greater utility and higher effectiveness, regardless of the relationship between the defense cost and the attack cost. \item The size of the gap between different parameters affects the defender's utility and the trend of variation in the defender's utility with the number of targets. Regardless of the parameter configuration, the NE strategy based on our GTRA model outperforms the other four resource allocation strategies considered for comparison. \item When the available resources are not sufficient to protect all the targets, our strategy performs better than the random allocation strategy, the average allocation strategy and the partial protection strategy. It can effectively balance security and resource consumption. \item When the resource constraint is relaxed, although our strategy cannot maintain the best target security, it nevertheless achieves higher effectiveness than the one allocating resources to all the targets. Thus, it can optimize the consumption of resources for protecting targets. \item The quantity of resources and the security of the targets are not directly related. Given a set of targets and their corresponding asset values, the proposed model provides advice on the quantity of resources required to effectively protect the targets. 
\end{itemize} Given these findings, the security of targets can be better protected by considering the cost of protection when planning resource allocation. Last but not least, we hope that this study can serve as a theoretical reference for the allocation of security resources in multiple arenas. \subsection{Future work} Our current work focuses on designing an efficient resource allocation strategy to protect a massive number of targets using limited resources. Next, we plan to apply the current research in an application leveraging the ideas of software-defined networking (SDN) and network function virtualization (NFV), which is suitable not only for common networks but also for computing environments such as cloud computing. \section*{Conflict of interest} The authors declare that they have no conflicts of interest. \section*{Acknowledgments} This work was partially supported by the Science and Technology Development Plan Projects (20150204081GX and 20180414024GH) of Jilin Province of China, and the 13th Five-Year Science and Technology Research Project of the Education Department of Jilin Province under Grant No. JJKH20190598KJ. \end{document}
\begin{document} \title{\huge Coexistence of Continuous Variable Quantum Key Distribution and 7$\times$12.5 Gbit/s Classical Channels } \author{ \IEEEauthorblockN{Tobias A. Eriksson$^{(1)}$, Takuya Hirano$^{(2)}$, Motoharu Ono$^{(2)}$, Mikio Fujiwara$^{(1)}$, Ryo Namiki$^{(2)}$,\\ Ken-ichiro Yoshino$^{(3)}$, Akio Tajima$^{(3)}$, Masahiro Takeoka$^{(1)}$, and Masahide Sasaki$^{(1)}$} \IEEEauthorblockA{\small ${(1)}$ \textit{National Institute of Information and Communications Technology (NICT)}, \textit{4-2-1 Nukui-kitamachi, Koganei, Tokyo 184-8795, Japan.} \\ ${(2)}$ \textit{Department of Physics, Gakushuin University, 1-5-1 Mejiro,Toshima-ku,Tokyo, 171-8588, Japan.}\\ ${(3)}$ \textit{IoT Devices Research Labs., NEC Corporation, 1753 Shimonumabe, Nakahara-ku, Kawasaki 211-8666, Japan.}\\ e-mail: [email protected] } } \maketitle \begin{abstract} We study coexistence of CV-QKD and 7 classical 12.5 Gbit/s on-off keying channels in WDM transmission over the C-band. We demonstrate key generation with a distilled secret key rate between 20 and 50 kbit/s in experiments running continuously over 24 hours. \end{abstract} \begin{IEEEkeywords} Quantum Key Distribution \end{IEEEkeywords} \section{Introduction} Quantum key distribution (QKD) is the first quantum technology to find commercial application, and it is the only known solution to the problem of sharing a random key between two distant parties with proven security against any possible eavesdropping attack. QKD relies on quantum mechanical properties of light, using either single photons (discrete variable (DV)) or weak coherent pulses (continuous variable (CV)). With QKD, information theoretic security can be guaranteed, which is in stark contrast to the current generation of encryption techniques, which rely on computationally hard problems. The first proposed QKD scheme is the BB84 protocol \cite{BB84}, which utilizes the polarization states of single photons to convey a secret key between two parties. 
Today, many different protocols exist for DV-QKD, with the main common factor being that they rely on single photon detection \cite{DiamantiPratical}. Several different DV-QKD techniques have been demonstrated over installed fiber \cite{SasakiTokyoQKD} as well as over free-space links \cite{SchmittManderbach}. CV-QKD, on the other hand, can to a large extent be implemented with commercially available telecommunication components. Further, it is also compatible with photonic integration techniques, making CV-QKD a promising technology for future QKD systems. The potential of CV-QKD has been demonstrated in several experimental investigations \cite{Lodewyck, JouguetNature, Huang, Nakazawa}. One key challenge for QKD systems is to manage integration into the current network topology, to avoid having to develop a separate QKD network \cite{DiamantiPratical}. This means that CV-QKD has to be able to coexist with classical wavelength division multiplexed signals in fiber optical links. Here, CV-QKD has a big advantage due to the mode selection capability when using detection techniques based on homodyne or heterodyne receivers with a local oscillator. Coexistence of DV-QKD at 1300~nm and 64-QAM channels in the C-band has been reported in \cite{Wang}. In \cite{Kumar}, the influence of continuous wave lasers on CV-QKD channels multiplexed in the C-band is experimentally investigated. Further, the excess noise in a 20 channel wavelength division multiplexing (WDM) system has been investigated in \cite{Karinou}. In this paper, we demonstrate WDM co-propagation in the C-band of a 10 MHz repetition rate CV-QKD system together with 7 neighboring classical on-off keying data channels at 12.5 Gbit/s each. We show stable key generation over 10~km of fiber for 24~hours. \section{Experimental Setup} The outline of the CV-QKD system is shown in Fig.~\ref{fig:CVQKDsetup}(a); see the figure caption for details. 
The system applies the four state CV-QKD protocol together with reverse reconciliation. Not shown in the figure is the synchronization signal, transmitted at 1300~nm, that adjusts the sampling instance of Bob's analog-to-digital converter. For more detailed information on the CV-QKD system and the key distillation process, please see \cite{Hirano}. \begin{figure} \caption{Rough overview of the optical part of the CV-QKD system. The transmitter laser is pulsed at 10~MHz; 99\% of the intensity is coupled into one polarization to be used as a local oscillator (LO), while the remaining 1\% is coupled into the state preparation stage, where four states are modulated by a phase modulator driven by a digital-to-analog converter (DAC). At Bob's receiver, active polarization tracking is performed before the quantum signal and the LO are split using a polarization beam splitter. The LO path is directed to a phase modulator, which randomly chooses which quadrature to measure in the homodyne detector. The signal path is directed to an optical switch, which can block the incoming CV-QKD signal to characterize the shot noise of the receiver. The signal and LO are mixed before homodyne (single-quadrature) detection is performed with a balanced receiver module.} \label{fig:CVQKDsetup} \end{figure} \begin{figure} \caption{WDM transmission experiment setup. 7 ECLs placed on a 100~GHz frequency grid. The CV-QKD system uses the 6th channel of the WDM setup. } \label{fig:expSetup} \end{figure} \begin{figure*} \caption{Distilled secret key rate over 24 hours. } \label{fig:24hours} \end{figure*} The overview of the experimental setup is shown in Fig.~\ref{fig:expSetup}. The WDM system uses a channel spacing of 100~GHz. We use 7 external cavity lasers (ECLs) that are modulated with 12.5~Gbit/s on-off keying data using a pseudo random bit sequence of length $2^{15}-1$, constituting the channels 1-5 and 7-8 of the WDM system. 
The CV-QKD signal uses a wavelength of 1549.2~nm and is transmitted in the 6th channel of the WDM system. The transmitted spectrum is shown in Fig.~\ref{fig:expSetup}(c). The launch powers of the channels differ by less than 1.5~dB and are on average -4.5~dBm. The data channels and the CV-QKD channel are coupled together using a 3~dB coupler. Further, the CV-QKD system transmits a synchronization pulse for the clock signal at 1300~nm. This signal is coupled together with the C-band channels using a WDM coupler. \begin{figure} \caption{Variance as a function of interfering channel where one modulated channel is turned on at a time. } \label{fig:variance} \end{figure} The signals are transmitted over 10~km of conventional single-mode fiber. At the receiver, the CV-QKD sync.\ pulse at 1300~nm is first demultiplexed using a WDM coupler and sent to the CV-QKD receiver. Then, the data channels and the CV-QKD signal are demultiplexed using an 8-channel WDM demultiplexer. The data channels can be switched to a 10~GHz photodetector and digitized by an 8~GHz bandwidth oscilloscope. The CV-QKD wavelength is always directed to the CV-QKD receiver, where the secret key rate (SKR) is evaluated. \begin{figure} \caption{Secret key rate as a function of time when the 7 co-propagating classical channels are turned on and off in intervals of 10 minutes. } \label{fig:onfoff} \end{figure} \section{Results} Our initial experiment deviates slightly from the experimental setup in Fig.~\ref{fig:expSetup}. In this case we modulate a single wavelength before the WDM multiplexer; Alice's CV-QKD signal also goes through channel 6 of the WDM multiplexer. In this experiment we measure the variance of the received signals when we turn on different co-propagating WDM channels, one at a time. The results are shown in Fig.~\ref{fig:variance}. We average the variance over data taken for at least 3 minutes. 
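As an illustration of this variance evaluation, the following Python sketch (not part of the experimental system; the sample data, function name, and the 5\% excess-variance figure are hypothetical) estimates the received quadrature variance in shot-noise units, using a shot-noise calibration trace of the kind recorded with the receiver's optical switch blocking the signal path:

```python
import numpy as np

def variance_in_snu(quadrature_samples, shot_noise_var):
    """Return the variance of homodyne samples in shot-noise units (SNU).

    quadrature_samples: 1-D array of homodyne measurement outcomes.
    shot_noise_var: variance measured with the signal path blocked
                    (the optical switch in the receiver enables this).
    """
    return np.var(quadrature_samples) / shot_noise_var

# Toy data: a shot-noise calibration run and a run with 5% excess variance.
rng = np.random.default_rng(0)
shot = rng.normal(0.0, 1.0, 200_000)              # calibration trace
signal = rng.normal(0.0, np.sqrt(1.05), 200_000)  # hypothetical signal trace

v = variance_in_snu(signal, np.var(shot))
print(f"variance in shot-noise units: {v:.3f}")
```

Averaging such variance estimates over several minutes of data, as done above, reduces the statistical uncertainty of the comparison between interfering channels.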
The maximum change in SKR when turning on one of the channels is 0.75\%, which is within our measurement reliability. These results encouraged us to perform the experiment with all 7 co-propagating channels on, using the setup in Fig.~\ref{fig:expSetup}. We first investigate the impact of turning the 7 co-propagating channels on and off. The SKR as a function of time is shown in Fig.~\ref{fig:onfoff}, where we turn the 7 channels on and off at intervals of 10 minutes. We do not notice any influence on the SKR from the co-propagating channels. Some drops in the SKR are observed, but these occur both when the co-propagating channels are on and when they are off. These jumps are attributed to the system being disturbed by external conditions in the labs. In Fig.~\ref{fig:24hours}, we plot the SKR over 24 hours with all 7 co-propagating classical channels on. The 7 classical channels are not affected by co-propagation with the pulsed local oscillator of the CV-QKD system. Although we do not specifically measure the bit error rate, we can conclude this from the eye diagrams after the link for all classical channels, which are shown in Fig.~\ref{fig:expSetup}(b). We observe no apparent difference in the eye diagrams when we turn the CV-QKD system on or off. The average distilled SKR is in the range of 20 to 50~kbit/s over the 24 hours. The SKR changes over time, and we attribute these fluctuations to external conditions, such as temperature changes in the room. This claim is supported by the fact that during the night, when no one enters the lab, the SKR is very stable. \section{Conclusions} We have demonstrated co-propagation of CV-QKD and 7 classical 12.5~Gbit/s OOK signals over 10~km of fiber. We have demonstrated continuous secret key generation between 20 and 50~kbit/s over 24 hours. These results show that CV-QKD is a strong candidate for co-integration with classical channels in future quantum-enabled networks. 
{\noindent \footnotesize \emph{This work was partly funded by ImPACT Program of Council for Science, Technology and Innovation (Cabinet Office, Government of Japan). } } \end{document}
\begin{document} \title {Quantum Information Transfer between Topological and Superconducting Qubits} \author{Fang-Yu Hong} \email[Email address:]{[email protected]} \email[Tel:]{86-571-86843468} \affiliation{Department of Physics, Center for Optoelectronics Materials and Devices, Zhejiang Sci-Tech University, Hangzhou, Zhejiang 310018, China} \author{Jing-Li Fu} \affiliation{Department of Physics, Center for Optoelectronics Materials and Devices, Zhejiang Sci-Tech University, Hangzhou, Zhejiang 310018, China} \author{Zhi-Yan Zhu} \affiliation{Department of Physics, Center for Optoelectronics Materials and Devices, Zhejiang Sci-Tech University, Hangzhou, Zhejiang 310018, China} \date{\today} \begin{abstract} We describe a scheme that enables a strong Jaynes-Cummings coupling between a topological qubit and a superconducting flux qubit. The coupling strength depends on the phase difference between two superconductors on a topological insulator and may be expediently controlled by a phase controller. With this coherent coupling and single-qubit rotations, arbitrary unitary operations on the two-qubit hybrid system of topological and flux qubits can be performed. Numerical simulations show that quantum state transfer and entanglement distribution between the topological and superconducting flux qubits may be performed with high fidelity. \end{abstract} \pacs{03.67.Lx, 03.65.Vf, 74.45.+c, 85.25.-j} \keywords{topological qubit, superconducting qubit, quantum interface} \maketitle \section{INTRODUCTION} The decoherence of quantum states by the environment is the main obstacle on the way towards realizing quantum computers. To circumvent this difficulty, some interesting topological quantum computation schemes \cite{ayki,cnsh} have been suggested, where quantum information is stored in nonlocal (topological) degrees of freedom of topologically ordered systems. 
These nonlocal degrees of freedom are decoupled from local perturbations, enabling the topological approach to quantum information processing to attain its exceptional fault tolerance and to have a tremendous advantage over conventional ones. The simplest non-Abelian excitation for topological qubits is the zero-energy Majorana bound state (MBS) \cite{fwil}, which is predicted to exist in spin lattice systems \cite{ayki}, in $p+ip$ superconductors \cite{nrdg}, in the filling fraction $\nu=5/2$ fractional quantum Hall system \cite{cnsh}, in the superconductor Sr$_2$RuO$_4$ \cite{sscn}, in topological insulators \cite{lfck,mhck}, and in some semiconductors with strong spin-orbit interaction \cite{jsrl,jali,yogr,rljs,vmou}. However, the local decoupling makes measuring and manipulating topological states difficult, because they can only be manipulated by global braiding operations, i.e., by physical exchange of the associated local quasiparticle non-Abelian excitations \cite{aste,daiv}. Moreover, topologically protected braiding operations for Ising anyons alone are not sufficient for universal quantum computation and have to be supplemented with topologically unprotected operations \cite{pbon,pbond}. Within a topological system, unprotected operations prove very challenging because of significant nonuniversal effects \cite{pbrl}. 
On the other hand, conventional quantum information processing systems have been advancing steadily, as witnessed by the recent progress in quantum networks using single atoms in optical cavities \cite{srcn}, in long coherence times of nuclear spins in a diamond crystal \cite{pmgk,mglc}, in high-fidelity manipulations of trapped ions \cite{rbdw} and of superconducting qubits \cite{jcfw}, and in the generation of entanglement between single atoms at a distance \cite{dmpm} and between a photon and a solid-state spin qubit \cite{etyc}. Thus it is highly desirable to combine the advantages of conventional qubits with those of topological qubits to construct hybrid systems, where the necessary topologically unprotected gates can be imported from the conventional quantum systems (CQS) and topological states can be transferred to the CQS for high-fidelity readout. Such hybrid systems have been considered recently for anyons in optical lattices \cite{ljia,magu} and for Majorana anyons coupled to superconducting flux qubits \cite{fhaa,jsst,ljck} or to a semiconductor double-dot qubit \cite{pbrl}. Here we propose a scheme for quantum information transfer between a superconducting flux qubit \cite{jemo,zxss,jcfw} and a topological qubit encoded on Majorana fermions (MFs) at the junctions among three superconductors mediated by a topological insulator (TI) \cite{lfck}. The strong Jaynes-Cummings (JC) coupling between the topological and superconducting flux qubits can be obtained on the basis of the interaction between two MFs located at the two ends of a linear superconductor-TI-superconductor (STIS) junction, and be coherently controlled by the phase difference between the two superconductors of the STIS junction. With this strong coupling at hand, arbitrary quantum information transfer and quantum entanglement distribution between the topological and the flux qubits can be accomplished with near-unit fidelity. 
\begin{figure}[t] \includegraphics[width=8cm]{1} \caption{\label{fig1}(color online). Schematics for a hybrid system of topological and superconducting flux qubits. A flux qubit is made up of four Josephson junctions ($j_{1,2,3,4}$) and four superconducting islands ($a,b,c,d$) patterned on the surface of a topological insulator, enclosing an external flux $\Phi\approx h/4e$. A topological qubit consists of two pairs of Majorana fermions ($(\gamma_1,\gamma_2)$ and $(\gamma_3,\gamma_4)$). Island $d$ is shared by the topological and the flux qubits. Two Majorana fermions (marked with circles) at two superconducting trijunctions are coupled through a STIS quantum wire with a coupling strength dependent on the phase $\phi_d$ of island $d$ relative to $\phi_u=-\pi$. } \end{figure} \section {Hybrid system} The prototype hybrid quantum system shown in Fig.~\ref{fig1} is made up of a superconducting flux qubit and a topological qubit encoded on four MFs. The flux qubit consists of a loop of four Josephson junctions ($j_{1,2,3,4}$) and four superconducting islands $a,b,c,d$, enclosing an externally applied magnetic flux $\Phi\approx\frac{h}{4e}$. The MFs are described by Majorana fermion operators $\gamma_i(i=1,2,3,4)$, which are self-Hermitian, $\gamma_i^\dagger=\gamma_i$, and satisfy the fermionic anticommutation relation $\{\gamma_i,\gamma_j\}=\delta_{ij}$. The Majorana fermion $\gamma_i$ is localized at trijunction $i(i=1,2,3,4)$, which comprises three superconductors divided by a TI \cite{lfck}. A pair of MF operators $\gamma_i,\gamma_j$ connected by a STIS wire of length $L$ can form a Dirac fermion operator $f_{ij}=(\gamma_i-i\gamma_j)/\sqrt{2}$, which creates a fermion, and $f_{ij}^\dagger f_{ij}=n_{ij}=0,1$ describes the occupation of the corresponding state. Combining two such fermion states gives the two logical states of the topological qubit, $\ket{0}_t=\ket{0_{12}0_{34}}$ and $\ket{1}_t=\ket{1_{12}1_{34}}$. 
The flux qubit is made up of four Josephson junctions with Josephson coupling energies $E_{J,1}=E_{J,2}=E_J$, $E_{J,3}=\alpha E_J$, and $E_{J,4}=\beta E_J$, where $0.5<\alpha<1$ and $\beta\gg1$. For these parameters and an externally applied flux $\Phi=h/4e$, the system has two stable states $\ket{0}_f$ and $\ket{1}_f$ for the flux qubit. Corresponding to these two states there are persistent circulating currents of opposite direction, with the corresponding superconducting phase $\phi_d=\phi_c +\sigma_f^z\theta+\zeta\frac{a+a^\dagger}{\sqrt{2}}$ of island $d$ \cite{ljck}, where $\sigma_f^z=(\ket{0}\bra{0}-\ket{1}\bra{1})_f$, $\theta=\frac{\sqrt{4\alpha^2-1}}{2\alpha\beta}$ is the phase difference across Josephson junction $j_4$, $a$ is the annihilation operator for the flux qubit, $\zeta=(\frac{8E_C}{E_J})^{\frac{1}{4}}\beta^{-\frac{1}{2}}$ is the magnitude of quantum fluctuations, and the phase $\phi_c$ of island $c$ is fixed relative to the phase $\phi_u=-\pi$ of island $u$ by a phase controller \cite{ljck,ljclk}. The Hamiltonian for the hybrid system can be written in the form ($\hbar=1$) $H=a^\dagger a\omega_f-\frac{1}{2}E(\phi_d)\sigma_t^z $, where $\omega_f=\sqrt{8E_JE_C}$, $\sigma^z_{t}=(\ket{0}\bra{0}-\ket{1}\bra{1})_t$, and the coupling strength $E(\phi_d)$ has the approximate form \cite{ljck} \begin{equation}\label{eq2} E(\phi_d)\approx-1.9(\Lambda_{\phi_d}-0.5)v_F/L\quad \text{ for} \,\, \Lambda_{\phi_d}\leq -5 \end{equation} and \begin{equation}\label{eq3} E(\phi_d)\approx2\Delta_0\sin\frac{\phi_d}{2} e^{-\Lambda_{\phi_d}}\sim0 \quad \text{ for}\quad \Lambda_{\phi_d}\gg1, \end{equation} where $ \Lambda_{\phi_d}\equiv\frac{\Delta_0L}{v_F}\sin\frac{\phi_d}{2}$ with the effective Fermi velocity $v_F$ and the proximity-induced superconducting gap $\Delta_0$. \begin{figure}[t] \includegraphics[width=8cm]{2} \caption{\label{fig2}(color online). 
a) Numerical simulation of the process of the state transfer, $\ket{\uparrow0}\rightarrow-i\ket{\downarrow1}$. The state transfer fidelity is $F_1=0.993$. b) Numerical simulation of quantum entanglement generation, $\ket{\uparrow0}\rightarrow(\ket{\uparrow0}-i\ket{\downarrow1})/\sqrt{2}$. The generated entanglement has a fidelity $F_2=0.996$. The parameters used are $g/2\pi=-2$~GHz, $g'/2\pi=-1$~GHz, $T_{f,1}=900$~ns, $T_{f,2}=20$~ns, and $\omega_f/2\pi=E(\phi_{on})/2\pi=50$~GHz. The corresponding matrix elements of the density matrix $\rho$ of the hybrid system are $\rho_{11}=\bra{\downarrow1}\rho\ket{\downarrow1}$, $\rho_{22}=\bra{\uparrow0}\rho\ket{\uparrow0}$, $\rho_{21}=\bra{\uparrow0}\rho\ket{\downarrow1}$, $\rho_{12}=\bra{\downarrow1}\rho\ket{\uparrow0}$.} \end{figure} Expanding the coupling strength $E(\phi_d)$ to first order in the small parameters $\frac{\theta}{\omega_f}\frac{dE(\phi)}{d\phi}|_{\phi=\phi_c}$ and $\frac{\zeta}{\omega_f}\frac{dE(\phi)}{d\phi}|_{\phi=\phi_c}$ gives the Hamiltonian \begin{equation} \label{eq1} H=a^\dagger a\omega_f-\frac{1}{2}E(\phi_c)\sigma_t^z-\frac{g'}{2}\sigma_f^z \sigma_t^z-\frac{1}{2}g(a^\dagger+a)\sigma_t^z, \end{equation} where \begin{eqnarray}\label{eq10} g&=&\left.\frac{\zeta}{\sqrt{2}}\frac{dE(\phi)}{d\phi}\right|_{\phi=\phi_c},\notag \\ g'&=&\left.\theta\frac{dE(\phi)}{d\phi}\right|_{\phi=\phi_c}. \end{eqnarray} By rewriting Hamiltonian (\ref{eq1}) in terms of $\ket{\downarrow}=\frac{1}{\sqrt{2}}(\ket{0}+\ket{1})_t$ and $\ket{\uparrow}=\frac{1}{\sqrt{2}}(\ket{0}-\ket{1})_t$ and applying the rotating-wave approximation in the interaction picture, we obtain \begin{eqnarray} \label{eq6} H_I&=&-\frac{1}{2}g(a^\dagger\sigma_t^-+a\sigma_t^+)-\frac{g'}{2}\sigma_f^z (\sigma_t^+e^{iE(\phi_c)t}\notag\\ &+&\sigma_t^-e^{-iE(\phi_c)t}), \end{eqnarray} where $\sigma_t^+=\ket{\uparrow}\bra{\downarrow}$ and $\sigma_t^-=\ket{\downarrow}\bra{\uparrow}$ are the raising and lowering operators, respectively, and the resonance condition $\omega_f=E(\phi_c)$ has been assumed for simplicity. {\it Discussion.}---The first term in $H_I$ \eqref{eq6} describes the JC coupling between the topological and the flux qubits, which is just what we want. The last term causes the total number of excitations in the hybrid system to change and would contaminate the fidelity of the quantum information transfer; thus we suppress its influence via the conditions \begin{equation}\label{eq11} g/g'=\frac{\sqrt{2\beta}\alpha}{\sqrt{4\alpha^2-1}}(\frac{8E_C}{E_J})^{\frac{1}{4}}\gg1 \end{equation} and $E(\phi_c)/g\gg1$. However, because of the factors $e^{\pm iE(\phi_c)t}$, the influence of this non-JC term is very limited, even for the case $g<g'$, as shown in the following numerical simulation. According to Eqs.~(\ref{eq2}, \ref{eq3}), the JC coupling strength $g$ can be coherently controlled: $g\sim0$ if $\phi_c$ is tuned to $\phi_{\text{off}}$ satisfying $\frac{\Delta_0L}{v_F}\sin\frac{\phi_{\text{off}}}{2} \gg1$, and $g\approx-\Delta_0\frac{\zeta}{\sqrt{2}}\cos\frac{\phi_{\text{on}}}{2}$ if $\phi_c$ is adiabatically adjusted to $\phi_{\text{on}}$ satisfying $\frac{\Delta_0L}{v_F}\sin\frac{\phi_{\text{on}}}{2}\leq-5$. 
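As an illustration of this on/off switching, the piecewise approximation of $E(\phi)$ in Eqs.~(\ref{eq2}) and (\ref{eq3}) can be evaluated directly. The following Python sketch (an illustration only, with parameter values taken from the numerical-simulation section; the function name is ours) recovers the resonance value $E(\phi_{\text{on}})/2\pi\approx50$~GHz:

```python
import math

def coupling_E(phi, Delta0, vF, L):
    """Piecewise approximation of the Majorana coupling E(phi).

    Delta0: proximity-induced gap (angular-frequency units),
    vF: effective Fermi velocity, L: STIS wire length.
    """
    Lam = Delta0 * L / vF * math.sin(phi / 2)
    if Lam <= -5:                     # "on" regime
        return -1.9 * (Lam - 0.5) * vF / L
    if Lam >= 1:                      # "off" regime: exponentially suppressed
        return 2 * Delta0 * math.sin(phi / 2) * math.exp(-Lam)
    raise ValueError("intermediate regime not covered by the approximations")

# Parameter values quoted in the text.
Delta0 = 2 * math.pi * 32.5e9   # rad/s
vF, L = 1e5, 5e-6               # m/s, m
phi_on = -1.73
E_on = coupling_E(phi_on, Delta0, vF, L)
print(E_on / (2 * math.pi * 1e9))   # ~50 GHz, matching omega_f/2*pi
```

Evaluating the same function at a phase with $\Lambda_{\phi}\gg1$ confirms that the coupling drops by several orders of magnitude, which is the basis of the adiabatic on/off control described above.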
By adiabatically turning on the coupling for a duration corresponding to a $\pi$ pulse, $\int g(t)dt=-\pi$, we can perform the unitary transformation \begin{equation} \mu\ket{\downarrow0}+\nu\ket{\uparrow0}\rightarrow \mu\ket{\downarrow0}-i\nu\ket{\downarrow1}, \end{equation} accomplishing a quantum state transfer from the topological qubit to the flux qubit, followed by a single-qubit rotation on the latter, where $\mu$ and $\nu$ are arbitrary complex numbers satisfying $|\mu|^2+|\nu|^2=1$. If we choose $\int g(t)dt=-\pi/2$, we can generate a maximally entangled state, $\ket{\uparrow0}\rightarrow(\ket{\uparrow0}-i\ket{\downarrow1})/\sqrt{2}$. Up to a single-qubit rotation, a $\sqrt{\text{SWAP}}$ gate, the square root of the SWAP gate, can be obtained by choosing $\int g(t)dt=-3\pi/2$. With $\sqrt{\text{SWAP}}$ gates and single-qubit $90^\circ$ rotations about $\hat{z}$, denoted by $\text{R}_z(90)$, we can obtain the controlled-phase ($\text{CP}_{t,f}$) gate \begin{equation} \text{CP}_{t,f}=\text{R}_{z,t}(90)\text{R}_{z,f}(-90)\sqrt{\text{SWAP}} \text{R}_{z,t}(180) \sqrt{\text{SWAP}} \end{equation} for the hybrid system. With $\text{CP}_{t,f}$ gates and single-qubit rotations, an arbitrary unitary transformation on the hybrid system is available \cite{mnic}. \begin{figure}[t] \includegraphics[width=8cm]{3} \caption{\label{fig3} a) The effect of the decoherence source $\eta_1=1/2T_{f,1}$ on the fidelity of the state transfer operation $\ket{\uparrow0}\rightarrow-i\ket{\downarrow1}$ with different non-JC couplings $g'$ (from top to bottom: $g'/g=0$,1,2,3,4,5,6.). Other parameters are as in Fig.~\ref{fig2}. b) The same plot for the influence of $\eta_2=1/T_{f,2}$. 
} \end{figure} To sufficiently suppress the influence of the non-JC coupling, $g/g'\geq1/3$ is required (as we explain later in detail), which may be fulfilled by choosing $\beta\gg1$, $\alpha\rightarrow0.5$; e.g., we have $g/g'\approx2$ for the case where $\beta=15$, $\alpha=0.8$, $E_J/E_C=80$. The corresponding flux quantum fluctuation is $\zeta=0.14$ and the phase difference of Josephson junction $j_4$ is $\theta=0.05$, which is within the reach of a phase controller \cite{ljclk}. Apart from the non-JC coupling, there are other relevant imperfections for the hybrid system. The tunneling between $\ket{0}_f$ and $\ket{1}_f$ with tunneling rate $r\sim\omega_f \text{exp}(-\sqrt{E_J/E_C})$ decreases the coherence time of the superconducting flux qubit. The coupling strength $g$ should be strong enough to suppress the unwanted tunneling probability $(r/g)^2$ \cite{ljck}. Low temperature is required to exponentially decrease the probability of occupation of the excitation modes of the quantum wire by the factor $\text{exp}(\frac{-v_F}{k_BTL})$ \cite{ljck}. \section{Numerical simulations} Taking the decoherence sources into account, the dynamical evolution of the hybrid system is described by the Lindblad master equation \begin{eqnarray}\label{eq7} \frac{\partial\rho}{\partial t}&=&-i[H_I,\rho]+\frac{1}{2T_{f,1}}(2a\rho a^\dagger-a^\dagger a\rho-\rho a^\dagger a)\notag\\ &+&\frac{1}{T_{f,2}}(\sigma_f^z\rho\sigma_f^z-\rho), \end{eqnarray} where the decoherence of the topological qubit has been neglected owing to this qubit's long coherence time, and $T_{f,1}$ and $T_{f,2}$ are the relaxation and dephasing times of the superconducting flux qubit, respectively \cite{ymgs}. To study the quantum information transfer between the topological and the superconducting flux qubits under realistic conditions, we numerically simulate the master equation \eqref{eq7}. 
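The structure of such a simulation can be made concrete with a minimal, self-contained Python sketch. It integrates Eq.~\eqref{eq7} with a fourth-order Runge-Kutta scheme, under simplifying assumptions: the non-JC $g'$ term is dropped, the Hilbert space is truncated to the three levels $\ket{\downarrow0}$, $\ket{\uparrow0}$, $\ket{\downarrow1}$, and the resonant interaction-picture Hamiltonian is used with the parameter values quoted in the text. It is an idealized illustration, not the full simulation reported here:

```python
import numpy as np

# Basis ordering: |down,0>, |up,0>, |down,1>
dim = 3
def ket(i):
    return np.eye(dim)[:, i].reshape(-1, 1)

g = -2 * np.pi * 2e9          # JC coupling strength (rad/s)
T1, T2 = 900e-9, 20e-9        # flux-qubit relaxation / dephasing times (s)

# Resonant interaction-picture JC Hamiltonian -(g/2)(a^dag s^- + a s^+),
# restricted to the subspace above (the g' term is neglected here).
H = np.zeros((dim, dim))
H[1, 2] = H[2, 1] = -g / 2

a = np.zeros((dim, dim)); a[0, 2] = 1.0   # photon loss |down,1> -> |down,0>
sz = np.diag([1.0, 1.0, -1.0])            # flux-qubit sigma_z (photon parity)

def lindblad_rhs(rho):
    """Right-hand side of the master equation."""
    d = -1j * (H @ rho - rho @ H)
    d += (1 / (2 * T1)) * (2 * a @ rho @ a.conj().T
                           - a.conj().T @ a @ rho - rho @ a.conj().T @ a)
    d += (1 / T2) * (sz @ rho @ sz - rho)
    return d

# RK4 propagation of the pi pulse |up,0> -> -i|down,1>
rho = ket(1) @ ket(1).conj().T
t_final, n_steps = np.pi / abs(g), 2000
dt = t_final / n_steps
for _ in range(n_steps):
    k1 = lindblad_rhs(rho)
    k2 = lindblad_rhs(rho + dt / 2 * k1)
    k3 = lindblad_rhs(rho + dt / 2 * k2)
    k4 = lindblad_rhs(rho + dt * k3)
    rho = rho + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Population of |down,1>; the -i phase is irrelevant for this fidelity.
F1 = rho[2, 2].real
print(f"state-transfer fidelity: {F1:.3f}")   # close to the quoted F1 = 0.993
```

Because the pulse duration $\pi/|g|$ is much shorter than $T_{f,1}$ and $T_{f,2}$, the sketch already reproduces a near-unit transfer fidelity; the full simulation additionally retains the non-JC term and a larger Hilbert space.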
We may set $\alpha=0.8$, $\beta=15$, $E_J/E_C=80$, $E_J/2\pi=158$~GHz, $\omega_f/2\pi=50$~GHz, $T_{f,1}=900$~ns, and $T_{f,2}=20$~ns for the superconducting flux qubit \cite{ljck,icyn}; the parameters for the topological qubit may be assumed to be $\Delta_0/2\pi=32.5$~GHz, $v_F=10^5$~m/s, $L=5\,\mu$m \cite{ljck,vmou}. The resonance condition gives $E(\phi_{\text{on}})/2\pi=\omega_f/2\pi=50$~GHz, resulting in $\phi_{\text{on}}=-1.73$ according to \eqref{eq2} with $\Lambda_{\phi_{\text{on}}}=-7.75 $. Equations (\ref{eq10}) then give the coupling strengths $g/2\pi=-2$~GHz and $g'/2\pi=-1$~GHz. The evolution of the state transfer \begin{equation}\label{eq8} \ket{\uparrow0}\xrightarrow{\int^{t_{f1}} g(t)dt=-\pi}\ket{\psi_1}\equiv-i\ket{\downarrow1} \end{equation} and the generation of a maximally entangled state \begin{equation}\label{eq9} \ket{\uparrow0}\xrightarrow{\int^{t_{f2}} g(t)dt=-\pi/2}\ket{\psi_2}\equiv(\ket{\uparrow0}-i\ket{\downarrow1})/\sqrt{2} \end{equation} are shown in Figs.~\ref{fig2}a) and b), respectively, with the corresponding fidelities $F_1=\bra{\psi_1}\rho(t_{f1})\ket{\psi_1}=0.993$ and $F_2=\bra{\psi_2}\rho(t_{f2})\ket{\psi_2}=0.996$. Fig.~\ref{fig3} shows the influence of the decoherence sources $\eta_1=1/2T_{f,1}$ and $\eta_2=1/T_{f,2}$ and of $g'$ on the state transfer fidelity $F_1$. From Fig.~\ref{fig3} we see that the influence of $g'$ on the state transfer is small: $F_1=0.982$ for the case where $g'=3g=-6(2\pi)$~GHz; $E(\phi_{\text{on}})$, $\omega_f$, $T_{f,1}$, $T_{f,2}$, $v_F$, $L$, and $\Lambda_{\phi_{\text{on}}} $ remain the same as in Fig.~\ref{fig2}, while the other parameters are $\alpha=0.97$, $\beta=10$, $E_J/E_C=30000$ \cite{jmsn, ljclk}, $\theta=0.086$, $\zeta=0.04$, $E_J=3.1$~THz, $\phi_{\text{on}}=-0.646$, and $\Delta_0=78$~GHz. 
Apart from the aforesaid decoherence sources, there exist processes that may influence the interaction between the two Majorana fermions, such as dynamic modulations of the superconducting gap and variations of the electromagnetic environment owing to charge fluctuations. We estimate their influence on the operation fidelity by assuming unknown errors in $E(\phi_{\text{on}})$, $g'$, and $g$, and find that the corresponding fidelity $F_1$ decreases from 0.993 only to 0.968 even for 10\% unknown errors in $E(\phi_{\text{on}})$, $g'$, and $g$. The recent proposal \cite{ljck} applies in the parameter regime $g'\gg g$; in contrast, our scheme works well in the parameter regime $g'\leq 3 g$. With the Jaynes-Cummings coupling, quantum state transfer and quantum entanglement distribution between the topological and flux qubits can be more conveniently accomplished. \section{CONCLUSIONS} In summary, we have presented a scheme for quantum information transfer between topological and superconducting flux qubits. A strong Jaynes-Cummings coupling between topological and flux qubits is achieved. With this scheme, quantum state transfer, quantum entanglement generation, and arbitrary unitary transformations in the topological-flux hybrid system may be accomplished with near-unit fidelity. This quantum interface enables us to store quantum information on topological qubits for long times, to efficiently read out topological-qubit states, and to implement partially protected universal topological quantum computation, where single-qubit states of the flux qubit can be prepared with high accuracy and transferred to the topological qubit to compensate for the topological qubit's inability to generate certain single-qubit states. This work was supported by the National Natural Science Foundation of China (Grants No. 11072218 and No. 11272287), by the Zhejiang Provincial Natural Science Foundation of China (Grant No. Y6110314), and by the Scientific Research Fund of Zhejiang Provincial Education Department (Grant No. 
Y200909693). \begin{references} \bibitem{ayki}A.Y. Kitaev, Ann. Phys. (N.Y.) {\bf 303}, 2 (2003). \bibitem{cnsh}C. Nayak, S.H. Simon, A. Stern, M. Freedman, and S. Das Sarma, Rev. Mod. Phys. {\bf 80}, 1083 (2008). \bibitem{fwil}F. Wilczek, Nature Phys. {\bf 5}, 614 (2009). \bibitem{nrdg}N. Read and D. Green, Phys. Rev. B {\bf 61}, 10267 (2000). \bibitem{sscn}S. Das Sarma, C. Nayak, and S. Tewari, Phys. Rev. B {\bf 73}, 220502(R) (2006). \bibitem{lfck}L. Fu and C.L. Kane, Phys. Rev. Lett. {\bf 100}, 096407 (2008). \bibitem{mhck}M. Z. Hasan and C. L. Kane, Rev. Mod. Phys. {\bf 82}, 3045 (2010). \bibitem{jsrl}J. D. Sau, R. M. Lutchyn, S. Tewari, and S. Das Sarma, Phys. Rev. Lett. {\bf 104}, 040502 (2010). \bibitem{jali}J. Alicea, Phys. Rev. B {\bf 81}, 125318 (2010). \bibitem{yogr}Y. Oreg, G. Refael, and F. von Oppen, Phys. Rev. Lett. {\bf 105}, 177002 (2010). \bibitem{rljs}R. M. Lutchyn, J. D. Sau, and S. Das Sarma, Phys. Rev. Lett. {\bf 105}, 077001 (2010). \bibitem{vmou}V. Mourik, K. Zuo, S. M. Frolov, S. R. Plissard, E. P. A. M. Bakkers, and L. P. Kouwenhoven, Science {\bf 336}, 1003 (2012). \bibitem{daiv}D. A. Ivanov, Phys. Rev. Lett. {\bf 86}, 268 (2001). \bibitem{aste}A. Stern, Nature (London) {\bf 464}, 187 (2010). \bibitem{pbon}P. Bonderson, Phys. Rev. Lett. {\bf 103}, 110403 (2009). \bibitem{pbond}P. Bonderson, D.J. Clarke, C. Nayak, and K. Shtengel, Phys. Rev. Lett. {\bf 104}, 180505 (2010). \bibitem{pbrl}P. Bonderson and R.M. Lutchyn, Phys. Rev. Lett. {\bf 106}, 130505 (2011). \bibitem{srcn}S. Ritter, C. N\"{o}lleke, C. Hahn, A. Reiserer, A. Neuzner, M. Uphoff, M. M\"{u}cke, E. Figueroa, J. Bochmann, and G. Rempe, Nature (London) {\bf 484}, 195 (2012). \bibitem{mglc}M.V. Gurudev Dutt, L. Childress, L. Jiang, E. Togan, J. Maze, F. Jelezko, A.S. Zibrov, P.R. Hemmer, and M.D. Lukin, Science {\bf 316}, 1312 (2007). \bibitem{pmgk}P.C. Maurer, G. Kucsko, C. Latta, L. Jiang, N.Y. Yao, S.D. Bennett, F. Pastawski, D. Hunger, N. Chisholm, M. Markham, D.J. 
Twitchen, J.I. Cirac, and M.D. Lukin, Science {\bf 336}, 1283 (2012). \bibitem{rbdw}R. Blatt and D. Wineland, Nature (London) {\bf 453}, 1008 (2008). \bibitem{jcfw}J. Clarke and F.K. Wilhelm, Nature (London) {\bf 453}, 1031 (2008). \bibitem{dmpm}D.L. Moehring, P. Maunz, S. Olmschenk, K.C. Younge, D. N. Matsukevich, L.-M. Duan, and C. Monroe, Nature (London) {\bf 449}, 68 (2007). \bibitem{etyc}E. Togan, Y. Chu, A.S. Trifonov, L. Jiang, J. Maze, L. Childress, M.V.G. Dutt, A.S. S{\o}rensen, P.R. Hemmer, A.S. Zibrov, and M.D. Lukin, Nature (London) {\bf 466}, 730 (2010). \bibitem{ljia}L. Jiang, G.K. Brennen, A.V. Gorshkov, K. Hammerer, M. Hafezi, E. Demler, M.D. Lukin, and P. Zoller, Nature Phys. {\bf 4}, 482 (2008). \bibitem{magu}M. Aguado, G. K. Brennen, F. Verstraete, and J. I. Cirac, Phys. Rev. Lett. {\bf 101}, 260501 (2008). \bibitem{fhaa}F. Hassler, A.R. Akhmerov, C.-Y. Hou, and C.W. J. Beenakker, New J. Phys. {\bf 12}, 125002 (2010). \bibitem{jsst}J.D. Sau, S. Tewari, and S. Das Sarma, Phys. Rev. A {\bf 82}, 052322 (2010). \bibitem{ljck}L. Jiang, C.L. Kane, and J. Preskill, Phys. Rev. Lett. {\bf 106}, 130504 (2011). \bibitem{jemo}J.E. Mooij, T.P. Orlando, L. Levitov, L. Tian, C.H. van der Wal, and S. Lloyd, Science {\bf 285}, 1036 (1999). \bibitem{zxss}X. Zhu, S. Saito, A. Kemp, K. Kakuyanagi, S. Karimoto, H. Nakano, W.J. Munro, Y. Tokura, M.S. Everitt, K. Nemoto, M. Kasu, N. Mizuochi, and K. Semba, Nature (London) {\bf 478}, 221 (2011). \bibitem{ljclk}L. Jiang, C. L. Kane, and J. Preskill, arXiv:1010.5862v2. \bibitem{mnic}M.A. Nielsen and I.L. Chuang, Quantum Computation and Quantum Information (Cambridge University Press, Cambridge, England, 2010). \bibitem{ymgs}Y. Makhlin, G. Sch\"{o}n, and A. Shnirman, Rev. Mod. Phys. {\bf 73}, 357 (2001). \bibitem{icyn}I. Chiorescu, Y. Nakamura, C.J.P.M. Harmans, and J.E. Mooij, Science {\bf 299}, 1869 (2003). \bibitem{jmsn}J. M. Martinis, S. Nam, J. Aumentado, and C. Urbina, Phys. Rev. Lett. {\bf 89}, 117901 (2002). 
\end{references} \end{document}
\begin{document} \bstctlcite{IEEEexample:BSTcontrol} \title{Predictive Prescription of Unit Commitment Decisions Under Net Load Uncertainty \thanks{Mr. Yurdakul gratefully acknowledges the support of the German Federal Ministry of Education and Research and the Software Campus program under Grant 01IS17052. } } \author{\IEEEauthorblockN{Ogun Yurdakul\IEEEauthorrefmark{1}, Feng Qiu\IEEEauthorrefmark{2}, and Sahin Albayrak\IEEEauthorrefmark{1}} \IEEEauthorblockA{\IEEEauthorrefmark{1}Department of Electrical Engineering and Computer Science, Technical University of Berlin, Berlin, Germany} \IEEEauthorblockA{\IEEEauthorrefmark{2}Energy Systems Division, Argonne National Laboratory, Lemont, IL 60439, USA}} \maketitle \begin{abstract} To take unit commitment (UC) decisions under uncertain net load, most studies utilize a stochastic UC (SUC) model that adopts a \textit{one-size-fits-all} representation of uncertainty. Disregarding contextual information such as weather forecasts and temporal information, these models are typically plagued by a poor out-of-sample performance. To effectively exploit contextual information, in this paper, we formulate a \textit{conditional} SUC problem that is solved \textit{given} a covariate observation. The presented problem relies on the true conditional distribution of net load and so cannot be solved in practice. To approximate its solution, we put forward a predictive prescription framework, which leverages a machine learning model to derive weights that are used in solving a reweighted sample average approximation problem. In contrast with existing predictive prescription frameworks, we manipulate the weights that the learning model delivers based on the specific dataset, present a method to select pertinent covariates, and tune the hyperparameters of the framework based on the out-of-sample cost of its policies. We conduct extensive numerical studies, which lay out the relative merits of the framework vis-à-vis various benchmarks. 
\end{abstract} \begin{IEEEkeywords} contextual stochastic optimization, unit commitment, ensemble learning \end{IEEEkeywords} \section{Introduction}\label{1} Taking unit commitment (UC) decisions under uncertain net load (i.e., load minus renewable generation) is a cornerstone of ensuring the economical and reliable operation of systems with deep penetration of renewables. To this end, grid operators (GOs) typically draw upon contextual information (e.g., historical realizations of net load, weather forecasts) as features to train machine learning (ML) algorithms that generate point predictions for net load, which are subsequently utilized in solving a deterministic UC problem. Despite capitalizing on contextual information, such an approach fails to capture the stochastic nature of net load and suffers from isolating the ML algorithm from the downstream optimization problem. In contrast, the well-touted stochastic optimization (SO) models explicitly represent uncertainty, usually by making assumptions on the probability distribution of net load. Nevertheless, SO models exhibit a poor out-of-sample performance if the assumed probability distribution is wrong, and they cannot effectively exploit covariate observations, resorting to a \textit{one-size-fits-all} representation of uncertainty. \par Recently, a paradigm termed predictive prescriptions has emerged in the operations research literature, which aims to address these shortcomings by jointly leveraging supervised ML algorithms and a conditional SO model. The approach put forward in \cite{bert} trains an ML model to derive weights for historical observations of the uncertain parameter and uses the weights in solving a reweighted sample average approximation (SAA) problem. Predictive prescription frameworks have found applications in power systems as well \cite{pt_ifo, fd_pof, morales}. 
The study in \cite{pt_ifo} seeks to maximize the profit of a renewable resource trading in the day-ahead market by training trees with a task-based loss, whereas \cite{fd_pof} leverages linear regression models to determine the renewable generation forecasts that lead to UC decisions with minimal total cost. \par In this paper, we initially map out in Section \ref{sec2} a conditional SO formulation for taking UC decisions under uncertain net load \textit{given} a covariate observation. In Section \ref{sec3}, we put forward a predictive prescription framework, which leverages the random forest (RF) algorithm to derive weights that are used in solving a reweighted SAA problem. Section \ref{sec3} further lays out three principal contributions of this paper. First, we put forth a method that manipulates the weights derived from the RF algorithm based on the size and the information-richness of the dataset. Second, we present an approach to tuning the hyperparameters of the framework based on the out-of-sample cost of its prescriptions. Finally, we suggest a method for pinpointing the pertinent covariates of net load. In Section \ref{sec4}, we demonstrate the application of the framework using data harvested from the California Independent System Operator (CAISO) grid and investigate its out-of-sample and computational performance. Section \ref{sec5} concludes the paper. \section{Problem Description}\label{sec2} We start out with the analytical description of the problem. \subsection{Analytical underpinnings}\label{sec2a} We study the UC problem under the uncertainty in net load, solved by the GO at an hourly granularity for a scheduling horizon of 24 hours. The study period for each day $d$ is denoted by the set $\mathscr{H}_d \coloneqq \{h \colon h=1,...,24\}$, where the term $h$ is the index for each hourly period. 
We denote by ${Y}_{d} \in \mathscr{Y} \subseteq \mathbb{R}^{d_y}$ the uncertain net load across all system buses and all 24 hours in $\mathscr{H}_d$, and we represent its observation by $Y_d = y_d$. Assume that the GO has at its disposal historical observations on net load for $D$ days. Define the set $\mathscr{D}\coloneqq\{d \colon d=1,\ldots,D\}$. \par Typically, it is not possible to precisely set forth the probability distribution of net load ${Y}_{d}$ or provide a perfectly accurate forecast for the materialized net load levels ${Y}_{d} = y_d$. Nevertheless, there is a broad array of contextual information that could prove useful to these ends. For instance, temperature, solar irradiance, and wind speed measurements, temporal information such as the month of the year and the day of the week, as well as lagged observations on net load may have a direct bearing on the net load realization ${Y}_{d} = y_d$. Our framework capitalizes on the observations of these covariates in assessing the uncertainty in net load. Note that such covariates are precisely the features that are leveraged in developing ML models so as to forecast net load. We express by $X_d \in \mathscr{X} \subseteq \mathbb{R}^{d_{x}}$ the random covariate associated with ${Y}_{d}$ and denote its observation by $X_d = x_d$. We expound upon the candidate covariates drawn upon in our framework in Section \ref{sec4}. \subsection{Conditional stochastic unit commitment problem} The contextual information associated with net load can be effectively exploited in a conditional stochastic programming framework in taking UC decisions after observing the contextual information $X= \bar{x}$. 
If it were possible to know the true, underlying conditional distribution of the net load ${Y}$ given $X = \bar{x}$, we could formulate the following ``gold standard'' conditional stochastic unit commitment ($\mathsf{CSUC}$) problem: \begin{IEEEeqnarray}{lll} \hspace{-0.75cm} \underset{z \in \mathscr{Z}}{\text{min}} &\hspace{-0.5cm} \mathbb{E}\big[\mathcal{C}({z};{Y})|{X=\bar{x}} ]\coloneqq&\hspace{.2cm}\eta^{\mathsf{T}}{z} + \mathbb{E}\big[\mathcal{Q}({z};{Y})|{X=\bar{x}}\big]\label{objfs}\\ \hspace{-0.75cm}\text{where} &&\nonumber\\ \hspace{-0.75cm}\mathcal{Q}(z; \bar{y}) \coloneqq & \hspace{0.1cm} \underset{\zeta}{\text{min}}& \hspace{-0.7cm} c^{\mathsf{T}}\zeta \label{objss}\\ \hspace{-0.75cm}& \hspace{0.1cm}\text{subject to} & \hspace{-0.7cm} W\zeta \leq b - Tz - M\bar{y}.\label{css} \end{IEEEeqnarray} The $\mathsf{CSUC}$ problem is a two-stage conditional SO problem with a mixed-integer linear programming formulation. The objective \eqref{objfs} of the first-stage problem is to minimize the commitment and startup costs plus the expected dispatch and load curtailment costs. The first-stage decisions comprise the binary commitment, startup, and shutdown variables and are represented by $z$. The set $\mathscr{Z}$ denotes the feasible region of the first-stage decisions, which is defined by the logical constraints that relate the commitment, startup, and shutdown variables as well as the minimum uptime and downtime constraints.\par For a specific vector of first-stage decisions $z$ and materialized net load values $Y = \bar{y}$, the value function $\mathcal{Q}({z};\bar{y})$ is evaluated by solving the second-stage problem \eqref{objss}--\eqref{css} with the objective \eqref{objss} to minimize the dispatch costs and the penalty cost due to load curtailment. The second-stage variables are denoted by $\zeta$ and are composed of the power dispatch levels of generators and the curtailed load for all hours $h \in \mathscr{H}_d$. 
We succinctly represent in \eqref{css} the power generation and ramping limits for generators as well as the transmission constraints using injection shift factors based on the DC power flow model, wherein $W$, $T$, and $M$ are constant matrices and $b$ is the right-hand-side vector of all second-stage constraints.\par Note that the $\mathsf{CSUC}$ formulation draws upon the true conditional distribution of $Y | X=\bar{x}$, which cannot be known, thus rendering $\mathsf{CSUC}$ solely a hypothetical, ideal formulation. Nevertheless, GOs have data on the historical realizations of random net load and the associated covariates. The predictive prescription framework introduced in Section \ref{sec3} utilizes these observations to construct the training set $\mathscr{S}_{D}\coloneqq\{({x}_d, {y}_d)\}_{d=1}^{D}$, which is used to solve a surrogate problem for $\mathsf{CSUC}$. \section{Proposed Framework}\label{sec3} We next introduce our predictive prescription framework. \subsection{Surrogate problem formulation}\label{sec3a} The principal objective of the proposed framework is to approximate as closely as possible the optimal $\mathsf{CSUC}$ solution after observing $X = \bar{x}$. Motivated by \cite{bert}, we approximate the $\mathsf{CSUC}$ problem by the reweighted SAA problem \begin{IEEEeqnarray}{l} \mathsf{w-CSUC}\colon\,\hat{z}_{D}(\bar{x})\in\text{arg}\,\underset{z \in \mathscr{Z}}{\text{min}}\hspace{0.2cm}\sum_{d=1}^{D}{\omega}_{D, d}(\bar{x}) \mathcal{C}({z};{y_d}),\label{objwSUC} \end{IEEEeqnarray} where ${\omega}_{D, d}(\bar{x})$, $\forall d \in \mathscr{D}$, are weight functions obtained from the training set that adjust the influence of each historical observation on the objective function of the reweighted SAA problem \eqref{objwSUC}. In \cite{bert}, the authors derive the weights ${\omega}_{D, d}(\bar{x})$ by directly using ML algorithms and spell out how different ML algorithms can be leveraged to that end.
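As a minimal illustration of the reweighted SAA problem \eqref{objwSUC}, consider the following hypothetical sketch. The candidate commitment set, the scalar cost function, the scenarios, and the weights are all toy stand-ins invented for illustration, not the mixed-integer model of Section II:

```python
# Hypothetical sketch of the reweighted SAA problem (w-CSUC): pick the
# first-stage decision z that minimizes the weighted sum of scenario costs.

def weighted_saa(candidates, scenarios, weights, cost):
    """Return argmin_z sum_d w_d * cost(z, y_d) over a finite candidate set."""
    def objective(z):
        return sum(w * cost(z, y) for w, y in zip(weights, scenarios))
    return min(candidates, key=objective)

# Toy instance: committing z units of capacity costs 2 per unit up front,
# and uncovered net load y - z is curtailed at a penalty of 10 per unit.
cost = lambda z, y: 2 * z + 10 * max(y - z, 0.0)
scenarios = [3.0, 5.0, 8.0]     # historical net-load observations y_d
weights = [0.1, 0.2, 0.7]       # weights w_{D,d}(x_bar), summing to one
z_star = weighted_saa([3, 5, 8], scenarios, weights, cost)
```

Note how the weights steer the decision: with most mass on the high-load scenario, the cheapest policy is to commit enough capacity to cover it, whereas weights concentrated on the low-load scenario favor a smaller commitment.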
In the proposed framework, we draw upon the RF model in conjunction with a nonlinear function to derive the weights. \subsection{Evaluation of the empirical weights}\label{sec3b} We start off by training the RF model to predict the net load in the next 24 hours, for which we use the covariate observations in $\mathscr{S}_{D}$ as features and the net load values in $\mathscr{S}_{D}$ as labels. Next, for each new covariate observation $X=\bar{x}$, we use the trained RF model to quantify the similarity between the new observation and each historical observation in $\mathscr{S}_{D}$.\par To quantify the similarity between observations, we record the leaf that the new observation $\bar{x}$ is mapped into in each tree of the RF and subsequently identify the historical covariate observations that fall into the same leaf node with $\bar{x}$. Central to our approach is to assign the weight for observation $x_d$, that is, ${\omega}_{D, d}(\bar{x})$, based on the number of trees in which $x_d$ and $\bar{x}$ are assigned to the same leaf node. 
To this end, Bertsimas and Kallus \cite{bert} propose that the weight for $x_d$ increase linearly with the number of trees in which $\bar{x}$ and $x_d$ fall into the same leaf node, normalized by the total number of covariate observations assigned to the same leaf node as $\bar{x}$, which yields the \textit{empirical} weights \begin{IEEEeqnarray}{c} \hat{\omega}_{D, d}(\bar{x}) =\frac{1}{T}\sum_{\tau=1}^{T}\frac{\mathbb{I} \big[x_d \in \mathcal{X}^{\tau}_{l(\bar{x})}\big]}{\big|\big\{d' \colon x_{d'} \in \mathcal{X}^{\tau}_{l(\bar{x})} \big\}\big|}, \end{IEEEeqnarray} where $T$ denotes the number of trees in the forest, $\mathbb{I}(\cdot)$ the indicator function, and $\mathcal{X}^{\tau}_{l(\bar{x})}$ the set of covariate observations assigned, in tree $\tau$, to the same leaf as $\bar{x}$.\par \subsection{Deriving the final weights}\label{sec3c} Existing approaches in the literature plug the empirical weights $\hat{\omega}_{D, d}(\bar{x})$ into the $\mathsf{w-CSUC}$ problem without taking into account the size or the prescriptive content of the training set. Nevertheless, the empirical weights obtained with a small training set and/or a training set with little informative content may fail to afford an accurate characterization of the similarity between observations. In contrast, we can utilize a particular historical observation with greater confidence if it is deemed similar to a new observation under a large training set with high prescriptive power. As such, we introduce the function $\varphi\big(\hat{\omega}_{D, d}(\bar{x}); \xi, D \big)\coloneqq$ $\hat{\omega}_{D, d}(\bar{x}) ^{\frac{D}{\xi}}$, which serves to manipulate the empirical weights based on the training set size $D$ and the weight modification parameter $\xi$. The parameter $\xi$ could be viewed as a proxy for the information richness and the prescriptive power of the training set.
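The two steps above, computing the empirical weights and transforming them with $\varphi(\cdot)$, can be sketched as follows. This is a hypothetical illustration with toy leaf assignments; in practice the leaf indices would come from a trained random forest (e.g., scikit-learn's \verb|RandomForestRegressor.apply()|), and the renormalization of the transformed weights is our reading, not stated explicitly in the text:

```python
# leaf[tau][d] is the leaf that tree tau assigns to historical observation
# x_d; leaf_new[tau] is the leaf it assigns to the new observation x_bar.

def empirical_weights(leaf, leaf_new):
    """w_hat_{D,d}(x_bar): average over trees of the normalized indicator
    that x_d shares a leaf with x_bar (toy stand-in for the RF-based weights)."""
    T, D = len(leaf), len(leaf[0])
    w = [0.0] * D
    for tau in range(T):
        # historical observations falling into the same leaf as x_bar
        same = [d for d in range(D) if leaf[tau][d] == leaf_new[tau]]
        for d in same:
            w[d] += 1.0 / (T * len(same))
    return w

def transform_weights(w, xi, D):
    """phi(w; xi, D) = w^(D/xi), renormalized to sum to one (assumption)."""
    phi = [wd ** (D / xi) for wd in w]
    s = sum(phi)
    return [p / s for p in phi]
```

With a small $\xi$ relative to $D$ the exponent exceeds one and the largest weights are amplified, while a large $\xi$ pushes the exponent below one and the weights toward a uniform level, matching the behavior of $\varphi(\cdot)$ described above.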
As $\xi$ decreases and $D$ increases, $\varphi(\cdot)$ amplifies the weights of the data points that are assessed to be strongly similar to a new observation and brings down the empirical weights of the points that are markedly dissimilar to a new observation. Further, for a small $D$ and a large $\xi$, $\varphi(\cdot)$ smooths out significantly high and low weight values and brings the weights toward a uniform level. Clearly, a key challenge to this end is to home in on a judicious value of $\xi$. To this end, we treat $\xi$ as a hyperparameter of the overall framework and set its value by assessing its influence on a separate validation set. \subsection{Task-based hyperparameter tuning}\label{sec3d} The proper tuning of an ML model's hyperparameters can play a decisive role in its performance. The classical approach to hyperparameter tuning is to assess the performance of an ML model under different hyperparameter values based on a statistical loss function. In our framework, however, a specific selection of RF hyperparameter values may bring forth a lower prediction error without leading to UC decisions that drive down the total out-of-sample cost, which underscores the need to tune the RF model's hyperparameters based on the ultimate task for which it is trained. As such, we treat the hyperparameters of the RF model as those of the overall framework and set their values based on the total out-of-sample cost of the optimal policy $\hat{z}_{D}(\bar{x})$ obtained with different hyperparameter values. \par At the outset, we use grid search to exhaustively generate candidate values for the hyperparameters reported in Table \ref{hyperpa}. Next, we construct a separate validation set containing pairs of covariate and net load observations $\mathscr{V}_{\bar{D}}\coloneqq\{(\bar{x}_i,\bar{y}_i)\}_{i=1}^{\bar{D}}$.
We use each covariate observation $\bar{x}_i$ to compute the optimal policy $\hat{z}_{D}(\bar{x}_i)$ and subsequently the out-of-sample cost $\mathcal{C}(\hat{z}_{D}(\bar{x}_{i}); \bar{y}_{i})$ obtained under the actual net load observation $\bar{y}_i$. For each set of candidate hyperparameter values, we compute the total out-of-sample cost over the validation set, i.e., $\sum_{i=1}^{\bar{D}} \mathcal{C}(\hat{z}_{D}(\bar{x}_{i}); \bar{y}_{i})$. Ultimately, we pick the hyperparameter values that deliver the lowest total out-of-sample cost. \begin{table}[h] \centering {\fontsize{8.5}{11.05}\selectfont \setlength{\tabcolsep}{7pt} \renewcommand{\arraystretch}{1.2} \caption{Hyperparameters} \label{hyperpa} \centering \begin{tabular}{c | c } \hline \hline {hyperparameter} & candidate values\\ \hline \hline max tree depth & 3, 6, 10\\ \hline number of features considered & \multirow{2}{*}{$\sqrt{d_{x}}$, $(0.3){d_{x}}$, $(0.6){d_{x}}$} \\ for node splitting & \\ \hline weight modification parameter $\xi$ & $\frac{D}{10}$, $\frac{D}{4}$, ${D}$, ${4D}$, $10D$\\ \hline \hline \end{tabular}} \end{table} \subsection{Selection of the covariates}\label{sec3e} A key thrust of the framework is to pinpoint information-rich covariates that aid in effectively grasping the uncertainty in net load. While there is a plethora of factors that can potentially influence a net load realization, ruling out the covariates that afford little or no information can help the trained RF model better assess the similarity between covariate observations, thereby yielding final weights that reduce the out-of-sample costs. Further, working with fewer covariates allows for expediting the training and testing of the RF model.\par To identify the covariates, we employ a hybrid approach comprising a filter and a wrapper feature selection method. We denote the support of the initial set of candidate covariates by $\mathscr{X}^{r} \subseteq \mathbb{R}^{d_{x^{r}}}$.
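The task-based tuning loop of Section \ref{sec3d} can be sketched as follows. This is a hypothetical illustration: \verb|prescribe| and \verb|cost| are toy stand-ins for solving $\mathsf{w-CSUC}$ and evaluating $\mathcal{C}(\hat{z}_{D}(\bar{x}_i);\bar{y}_i)$, and the grid and validation pairs are invented:

```python
from itertools import product

def tune(grid, validation, prescribe, cost):
    """Return the hyperparameter combination that minimizes the total
    out-of-sample cost over validation pairs (x_i, y_i)."""
    best, best_cost = None, float("inf")
    for values in product(*grid.values()):
        params = dict(zip(grid.keys(), values))
        total = sum(cost(prescribe(x, params), y) for x, y in validation)
        if total < best_cost:
            best, best_cost = params, total
    return best

# Toy instance: the "policy" scales the covariate by a hyperparameter, and
# the cost penalizes shortfall against the realized net load.
grid = {"scale": [0.8, 1.0, 1.2]}
validation = [(4.0, 4.4), (5.0, 5.5)]
prescribe = lambda x, p: p["scale"] * x
cost = lambda z, y: 2 * z + 10 * max(y - z, 0.0)
best = tune(grid, validation, prescribe, cost)
```

The point of scoring candidates by downstream cost rather than by prediction error is precisely the task-based criterion described above: a setting with a slightly worse statistical fit can still win if its prescriptions curtail less load.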
We start out by computing the Pearson correlation coefficient (PCC) for each candidate covariate and the net load observation, which measures the linear correlation between two variables. The PCC attains values between $-1$ and $1$, with $1$ (resp. $-1$) indicating a complete positive (resp. negative) correlation and $0$ signifying no linear correlation. Customarily, when the absolute value of the PCC is greater than or equal to $0.6$, it is interpreted as the variables being strongly correlated with one another \cite{pcc_num}. As such, we rule out all candidate features that yield a PCC value strictly between $-0.6$ and $0.6$ and obtain $d_{x^{p}}$ covariates supported on the set $\mathscr{X}^{p} \subseteq \mathbb{R}^{d_{x^{p}}}$. Note that the PCC measures only the linear correlation between the variables, and it does not assess how the covariates integrate with the utilized ML model. As a remedy, we additionally implement recursive feature elimination (RFE), which is a wrapper method that takes an ML model as a parameter. RFE trains the selected ML model with the initial set of features, ranks the features on the basis of their importance, and recursively eliminates the least important features until the desired number of features is reached. We run RFE with the RF model and ultimately obtain the final set of covariates with support $\mathscr{X}^{f} \subseteq \mathbb{R}^{d_{x^{f}}}$. \section{Numerical Experiments}\label{sec4} We next demonstrate the application of the proposed framework in a real-life setting. \subsection{Datasets and covariate selection}\label{sec4a} In our experiments, we draw upon the net load values recorded in the CAISO grid between June 1, 2018 and August 31, 2019 \cite{ols:caiso}. We use the measurements recorded in the first year (i.e., June 1, 2018--May 31, 2019) to construct the training sets.
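The filter step of the covariate selection in Section \ref{sec3e} can be sketched as follows. The series below are invented toy data; in the actual experiments the inputs would be the candidate covariates and net load measurements:

```python
from math import sqrt

def pcc(u, v):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    su = sqrt(sum((a - mu) ** 2 for a in u))
    sv = sqrt(sum((b - mv) ** 2 for b in v))
    return cov / (su * sv)

def pcc_filter(covariates, net_load, threshold=0.6):
    """Keep only covariates strongly correlated with net load (|PCC| >= 0.6)."""
    return [name for name, series in covariates.items()
            if abs(pcc(series, net_load)) >= threshold]

net_load = [10.0, 12.0, 15.0, 11.0, 18.0]
covariates = {
    "temperature": [20.0, 24.0, 30.0, 22.0, 36.0],   # tracks net load
    "noise":       [5.0, 1.0, 4.0, 2.0, 3.0],        # unrelated series
}
kept = pcc_filter(covariates, net_load)
```

The wrapper step (RFE with the RF model) would then be run on the surviving covariates; since it requires a trained model, it is not reproduced in this sketch.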
As set forth below, we vary the size of the training set in different experiments so as to assess its influence on the performance of the methods. Nevertheless, to compare their performance on a consistent basis, we use the same validation set and the same test set in all experiments. Specifically, we utilize the measurements recorded in June 2019 as the validation set and the measurements recorded from July 1 to August 31, 2019 as the test set. We use the IEEE 14-bus system in the experiments, which has 5 generators with an aggregate capacity of 765.31 MW. We scale the net load values so that the highest net load value is equal to 90\% of the aggregate capacity of the generators.\par To select the covariates for PV and wind generation, we assess the spatial distribution of PV and wind installations with their respective capacities across California, and we accordingly select locations from which we harvest data on global horizontal irradiance (GHI) and wind speed (magnitude and direction). We study the total population and population density of the counties of California and identify locations from which we use temperature measurements so as to capture the influence of temperature on system load.\footnote{We provide the data and the source code of the simulations in the online companion to this paper located at \url{https://github.com/oyurdakul/isgtna23}.} We leverage as candidate covariates the GHI, wind speed, and temperature measurements reported by the National Renewable Energy Laboratory \cite{nrel} for the selected locations in the past 24 hours. We further use as candidate covariates 24 lagged realizations of net load, as well as the 24 lagged realizations of the daily, weekly, and monthly moving average of net load. Finally, we define categorical variables to indicate whether a day falls on a weekend and on a public holiday and use one-hot encoding for their representation.
We ultimately obtain $d_ {x^{r}}= 440$ candidate covariates and follow the covariate selection method presented in Section \ref{sec3e} to derive $d_{x^{f}}=25$ covariates. \subsection{Benchmarks}\label{sec4b} To highlight the relative merits of the proposed framework, we draw upon different decision-making methods to obtain alternative policies and investigate their performance. One such method is the naive stochastic unit commitment ($\mathsf{NSUC}$) model, which treats the net load observations in the training set as equiprobable scenarios and disregards the covariate observation $X = \bar{x}$, stated as \begin{IEEEeqnarray}{lll} \mathsf{NSUC:}\hspace{0.25cm} \underset{z \in \mathscr{Z}}{\text{min}} &\hspace{0.25cm} \frac{1}{D}\sum_{d=1}^{D}\mathcal{C}({z};{y_{d}}). \label{objnsuc} \end{IEEEeqnarray} We solve the following reweighted SAA problem using the empirical weights as suggested in \cite{bert} so as to investigate the impact of transforming the weights: \begin{IEEEeqnarray}{lll} \mathsf{ew-CSUC:}\hspace{0.25cm} \underset{z \in \mathscr{Z}}{\text{min}} &\hspace{0.25cm} \sum_{d=1}^{D}{\hat{\omega}}_{D, d}(\bar{x}) \mathcal{C}({z};{y_d}). \label{objewuc} \end{IEEEeqnarray} We further use the point forecast of the trained RF model, i.e., $\hat{f}_{D}^{RF}(\bar{x})$, in solving the following deterministic UC problem: \begin{IEEEeqnarray}{lll} \mathsf{PFUC:}\hspace{0.25cm} \underset{z \in \mathscr{Z}}{\text{min}} &\hspace{0.25cm} \mathcal{C}({z};\hat{f}_{D}^{RF}(\bar{x})). \label{objpfuc} \end{IEEEeqnarray} To obtain the minimum out-of-sample cost that could be ideally attained, we solve the following ideal UC ($\mathsf{IUC}$) problem, which has a perfect foresight of the net load observation $\bar{y}$: \begin{IEEEeqnarray}{lll} \mathsf{IUC:}\hspace{0.25cm} \underset{z \in \mathscr{Z}}{\text{min}} &\hspace{0.25cm} \mathcal{C}({z};\bar{y}).
\label{objiuc} \end{IEEEeqnarray} \subsection{Results} We conduct the experiments on a 64 GB-RAM computer containing an Apple M1 Max chip with a 10-core CPU. We build the RF models and select the covariates under Python using scikit-learn 1.1.2. To model the UC instances, we extend the \verb|UnitCommitment.jl| package \cite{UCjl} to the two-stage stochastic setting, and we solve all UC problems under Julia 1.6.1 with Gurobi 9.5.0 as the solver. The penalty cost for load curtailment is set at \$10{,}000/MWh in all experiments. \par We initially construct the training set with the first 100 observations, i.e., $D=100$. We use the validation set to tune the hyperparameters of the proposed predictive prescription framework (indicated as $\mathsf{w-CSUC}$) as well as those of the $\mathsf{PFUC}$ and $\mathsf{ew-CSUC}$ methods. To assess how the methods perform out-of-sample, we use the measurements in the test set to determine how each method would have committed the generators for the corresponding days in the test set, then we observe the actual net load levels that materialized, and ultimately use the resulting total cost and the mean unserved energy (MUE) to score the performance of each method. In Table \ref{oos_100}, we report for each method the average of the total cost and the MUE computed over all 62 observations in the test set. We separately tabulate the MUE results in addition to the total cost as the latter may greatly vary with the choice of the penalty cost for load curtailment.
\par \begin{table}[h] \centering {\fontsize{8.5}{11.05}\selectfont \setlength{\tabcolsep}{7pt} \renewcommand{\arraystretch}{1.2} \caption{Out-of-sample costs and MUE levels} \label{oos_100} \centering \begin{tabular}{c | c | c} \hline \hline {method} & total cost (\$) & MUE (MWh) \\ \hline \hline $\mathsf{IUC}$ & 380089.8 & 0.0\\ \hline $\mathsf{w-CSUC}$ & 401377.3 & 0.0\\ \hline $\mathsf{ew-CSUC}$ & 414566.0 & 0.0\\ \hline $\mathsf{NSUC}$ & 416429.5 & 0.0\\ \hline $\mathsf{PFUC}$ & 453638.1 & 7.2\\ \hline \hline \end{tabular}} \end{table} \begin{figure*} \caption{Out-of-sample performances. In \ref{res_1a}, the total cost of each method, the tuned value of $\xi$, and the $\mathsf{w-CSUC}$ solution time under different training set sizes $D$; in \ref{res_1b}, the out-of-sample cost and the weight evaluation time under different sets of covariates.} \label{res_1a} \label{res_1b} \label{test} \end{figure*} The results in Table \ref{oos_100} make clear the monetary benefits that can be reaped by implementing $\mathsf{w-CSUC}$, which delivers the lowest total cost among all methods except the perfect-foresight policy, yielding a total cost that is 3.86\% higher than that of $\mathsf{IUC}$. We highlight that the lower cost under $\mathsf{w-CSUC}$ in comparison with that under the runner-up method $\mathsf{ew-CSUC}$ provides an empirical justification for manipulating the empirical weights before using them in solving the reweighted SAA problem. The $\mathsf{NSUC}$ method fails to outperform $\mathsf{w-CSUC}$ and comes on the heels of $\mathsf{ew-CSUC}$, which we ascribe to $\mathsf{NSUC}$ utilizing equiprobable scenarios without taking the covariate observations into account. The results further make evident the shortcomings of drawing upon deterministic forecasts and ignoring the stochastic nature of net load in solving the UC problem, as the policies under $\mathsf{PFUC}$ deliver the highest total cost and necessitate involuntary load curtailment.\par In certain practical applications, we may fail to collect a large number of observations that can be used in constructing the training set.
At the same time, working with a larger training set requires solving the $\mathsf{w-CSUC}$ problem with a greater number of scenarios, which drives up the computational burden. As such, we assess the performance of each method under different values of $D$. In doing so, we keep all the hyperparameters except $\xi$ constant at the values determined for $D=100$ and use grid search to tune the value of $\xi$ on the validation set. Fig. \ref{res_1a} visualizes the total cost delivered by each method, and it illustrates for the proposed framework the value of $\xi$ determined through grid search and the time to solve the $\mathsf{w-CSUC}$ problem so as to obtain the policy $\hat{z}_{D}(\bar{x})$. \par The plots in Fig. \ref{res_1a} echo the order of performance in Table \ref{oos_100}, as across most values of $D$, the policies of $\mathsf{w-CSUC}$ beat $\mathsf{ew-CSUC}$, which in turn outperforms $\mathsf{NSUC}$. Note that the policies under $\mathsf{PFUC}$ exhibit the worst out-of-sample performance for most investigated training set sizes. We further point out that the total costs obtained under the proposed framework are tightly clustered around their mean values and less spread out compared with the benchmark methods.\par We remark upon the tight coupling between the training set size, the solution time, and the out-of-sample cost. Increasing $D$ from $10$ to $100$ markedly improves (decreases) the out-of-sample cost of $\mathsf{w-CSUC}$, albeit with diminishing returns as $D$ grows from $50$ to $100$. We also observe that the out-of-sample performance saturates around $D=100$ and sporadically deteriorates (increases) as $D$ grows beyond $100$, while the solution time precipitously increases. \par One can draw from Fig. \ref{res_1a} valuable insights into the value of $\xi$ determined via grid search. Most notably, the empirical weights are amplified and suppressed the most under $D=365$, signifying an information-rich training set.
This observation drives home that training the RF model using a full year's data enables an accurate characterization of the similarity between observations. Note that the empirical weights for $D \in \{50, 100, 200, 300\}$ are also boosted and attenuated, though not as much as for $D=365$, whereas those for $D \in \{10, 20\}$ are used as is, indicating that amplifying and suppressing the empirical weights obtained with such small datasets is not warranted. \par We next investigate the influence of the covariate selection method laid out in Section \ref{sec3e} on the out-of-sample performance and the computation time. To this end, we repeat the experiments for $D=100$ under the sets of covariates supported on $\mathscr{X}^r$ and $\mathscr{X}^p$. We tune the hyperparameters for each set of covariates using the validation set and leverage the test set to compute the out-of-sample performances. For each set of covariates, we measure the time for computing the weights ${\omega}_{D, d}(\bar{x})$ over 30 simulation runs, which comprises the time for training the RF model and that for evaluating the weights for all observations in the test set. Fig. \ref{res_1b} bears out the relative merits of the proposed covariate selection method, which notches a 3.90\% reduction in the average time for evaluating the weights ${\omega}_{D, d}(\bar{x})$ vis-à-vis those under the initial set of features without compromising on the out-of-sample performance. \section{Conclusion}\label{sec5} In this paper, we worked out a predictive prescription framework that jointly leverages the random forest (RF) algorithm with a conditional stochastic optimization model so as to take unit commitment decisions under uncertain net load. We put forth a method to manipulate the empirical weights derived from the RF model based on the size and the prescriptive power of the training set, and we suggested a hybrid method to select pertinent covariates for net load.
By treating the hyperparameters of the RF model as those of the overall framework, we tune them based on the ultimate task for which the framework is developed, that is, bringing forth a lower out-of-sample cost. The extensive numerical studies conducted illustrate the capabilities of the framework in reducing not only the out-of-sample cost and load curtailment, but also the computation time compared with various benchmarks. \end{document}
\begin{document} \title{On the complement of the Richardson orbit} \author[Baur]{Karin Baur} \address{Department of Mathematics \\ ETH Z\"urich R\"amistrasse 101 \\ CH-8092 Z\"urich \\ Switzerland } \email{[email protected]} \author[Hille]{Lutz Hille} \address{Mathematisches Institut \\ Fachbereich Mathematik und Informatik der Universit\"at M\"unster \\ Einsteinstrasse 62 \\ D-48149 M\"unster \\ Germany } \email{[email protected]} \keywords{Parabolic groups, Richardson orbit, nilradical} \subjclass[2000]{20G05,17B45,14L35} \thanks{This research was supported through the programme ``Research in Pairs'' by the Mathematisches Forschungsinstitut Oberwolfach in 2009. The second author was supported by the DFG priority program SPP 1388 representation theory.} \begin{abstract} We consider parabolic subgroups of a general algebraic group over an algebraically closed field $k$ whose Levi part has exactly $t$ factors. By a classical theorem of Richardson, the nilradical of a parabolic subgroup $P$ has an open dense $P$-orbit. In the complement to this dense orbit, there are infinitely many orbits as soon as the number $t$ of factors in the Levi part is $\ge 6$. In this paper, we describe the irreducible components of the complement. In particular, we show that there are at most $t-1$ irreducible components. We are also able to determine their codimensions. \end{abstract} \maketitle \tableofcontents \section{Introduction and notations} Let $P$ be a parabolic subgroup of a reductive algebraic group $G$ over an algebraically closed field $k$. Let ${\mathfrak p}$ be its Lie algebra and let ${\mathfrak p} = {\mathfrak l} \oplus {\mathfrak n}$ be the Levi decomposition of ${\mathfrak p}$, i.e. ${\mathfrak n}$ is the nilpotent radical of ${\mathfrak p}$. A classical result of Richardson~\cite{ri} says that $P$ has an open dense orbit in the nilradical. We will call this $P$-orbit the {\em Richardson orbit for $P$}. 
However, in general there are infinitely many $P$-orbits in ${\mathfrak n}$. For classical $G$, the cases where there are finitely many $P$-orbits in ${\mathfrak n}$ have been classified in~\cite{hr1}. Also, the $P$-action on the derived Lie algebras of ${\mathfrak n}$ has been studied in a series of papers, and the cases with finitely many orbits have been classified, cf.~\cite{bh1},~\cite{bh2},~\cite{bh3}, ~\cite{bhr}. If $G$ is a general linear group, $G=\mathrm{GL}_n$, then the parabolic subgroup $P$ can be described by the lengths of the blocks in the Levi factor: Write $P=L N$ where $L$ is a Levi factor and $N$ is the unipotent radical of $P$. Then we can assume that $L$ consists of matrices which have non-zero entries in square blocks on the diagonal. Similarly, the Levi factor ${\mathfrak l}$ of ${\mathfrak p}$ consists of the $n\times n$-matrices with non-zero entries lying in squares of size $d_i\times d_i$ ($i=1,\dots,t$) on the diagonal and ${\mathfrak n}$ are the matrices which only have non-zero entries above and to the right of these square blocks. Let $t$ be the number of such blocks and $d_1,\dots,d_t$ their lengths, $\sum d_i=n$ (with $d_i>0$ for all $i$). So $d$ is a composition of $n$. We will call such a $d=(d_1,\dots,d_t)$ a {\em dimension vector}. We write $P(d)$ for the corresponding parabolic subgroup and ${\mathfrak n}(d)$ for the nilpotent radical of $P(d)$; the Richardson orbit of $P(d)$ is denoted by ${\mathcal{O}}(d)$. Its partition will be $\lambda(d)$. Once $d$ is fixed, we will often just use $P$, ${\mathfrak n}$ and $\lambda$ if there is no ambiguity. Recall that the nilpotent $\mathrm{GL}_n$-orbits are parametrised by partitions of $n$. We will use $C(\mu)$ to denote the nilpotent $\mathrm{GL}_n$-orbit for the partition $\mu$ ($\mu$ a partition of $n$). And we will usually denote $P$-orbits in ${\mathfrak n}$ by a calligraphic O, i.e.
we will write ${\mathcal{O}}$ or ${\mathcal{O}}(\mu)$ if $\mu$ is the partition of the nilpotency class of the $P$-orbit. Now, the nilradical ${\mathfrak n}$ is a disjoint union of the intersections ${\mathfrak n}\cap C(\mu)$ of the nilradical with all nilpotent $\mathrm{GL}_n$-orbits. By Richardson's result, ${\mathfrak n}\cap C(\lambda)={\mathcal{O}}(\lambda)$ is a single $P$-orbit. In particular, the Richardson orbit consists exactly of the elements of the nilpotency class $\lambda$. However, for $\mu\le \lambda$, the intersection ${\mathfrak n}\cap C(\mu)$ might be reducible (cf. Proposition~\ref{prop:comps}). In the case where ${\mathfrak n}$ is the nilradical of a Borel subalgebra of the Lie algebra of a simple algebraic group $G$, Spaltenstein first studied the varieties ${\mathfrak n}\cap (G\cdot e)$ for $G\cdot e$ a nilpotent orbit under the adjoint action (\cite{sp}). In \cite{ghr}, the authors study the action of a Borel subgroup $B$ of a simple algebraic group on the closure ${\mathfrak n}\cap C(\mu)$ for the subregular nilpotency class $C(\mu)$ and characterize the cases where $B$ has only finitely many orbits under the adjoint action. The main goal of this article is to describe the irreducible components of the complement $Z:={\mathfrak n}\setminus {\mathcal{O}}(d)$ of the Richardson orbit in ${\mathfrak n}$. They occur in intersections ${\mathfrak n}\cap C(\mu)$ for certain partitions $\mu=\mu(i,j)\le\lambda$. We have two descriptions of the irreducible components of $Z$. On one hand, we give rank conditions on the matrices of ${\mathfrak n}$; on the other hand, we use tableaux $T(i,j)$ for certain $(i,j)$ with $1\le i<j\le t$ and associate irreducible components ${\mathfrak n}(T(i,j))$ of the intersections ${\mathfrak n}\cap C(\mu(i,j))$ to them. Before we can state the two results we now introduce the necessary notation.
Let $d=(d_1,\dots,d_t)$ be a dimension vector, ${\mathfrak n}$ the nilradical of the corresponding parabolic subalgebra. For $A\in{\mathfrak n}$ and $1\le i,j\le t$ we write $A_{ij}$ to describe the matrix formed by taking the entries of $A$ lying in the rectangle formed by rows $d_1+\dots + d_{i-1}+1$ up to $d_1+\dots + d_i$ and columns $d_1+\dots + d_{j-1}+1$ up to $d_1+\dots + d_j$ and with zeroes everywhere else. For $i\ge j$, this is just the zero matrix. Figure~\ref{fig:blocks} shows the blocks $A_{ij}$ for $d=(2,4,7)$. \begin{figure} \caption{The block decomposition of the matrix $A$ for $d=(2,4,7)$} \end{figure} We set $A[i,j]$ to be the matrix formed by the $(A_{kl})_{i\le k\le j,i\le l\le j}$, i.e. by the rectangles to the right of and below $A_{ii}$ and to the left of and above $A_{jj}$. For instance, $A[i,i]$ is just $A_{ii}$ and $A[1,t]$ has the same entries as $A$. More generally, $A[i,j]$ is a square matrix of size $(d_i+\dots+d_j)\times (d_i+\dots+d_j)$ with $A_{ii},\dots,A_{jj}$ on its diagonal. We are now ready to explain the rank conditions. For the rest of this section, we will always assume that a pair $(i,j)$ satisfies $1\le i<j\le t$. We write $X(d)$ for an element of ${\mathcal{O}}(d)$. For $k\ge 1$ define \begin{eqnarray*} r_{ij}^k & := & \operatorname{rk} (X(d)[i,j]^k) \\ \kappa(i,j) & := & 1 + \#\{l\mid i<l<j,\ d_l\ge\min(d_i,d_j)\}\, . \end{eqnarray*} Observe that the numbers $r_{ij}^k$ are independent of the choice of an element of the Richardson orbit. With this, we can define two subsets of ${\mathfrak n}$ as our candidates for irreducible components of $Z$. \begin{definition}\label{def:Z-ij} Let $d=(d_1,\dots,d_t)$ be a dimension vector and ${\mathfrak n}$ the nilradical of the parabolic subgroup $P$ of $\mathrm{GL}_n$.
We set \begin{eqnarray*} Z_{ij}^k & := & \{A\in{\mathfrak n}\mid \operatorname {rk} A[ij]^k<r_{ij}^k\} \\ Z_{ij} & := & Z_{ij}^{\kappa(i,j)} \end{eqnarray*} to be the set of elements $A$ of ${\mathfrak n}$ for which the rank of the $k$th power of the matrix $A[ij]$ is defective, respectively the set of those $A$ for which the rank of the $\kappa(i,j)$th power is defective. \end{definition} To any dimension vector $d=(d_1,\dots,d_t)$ we associate the following subsets $\Gamma(d)$ and $\Lambda(d)$ of the set $\{(i,j)\mid 1\le i<j\le t\}$; in Section~\ref{s:rank-cond} we will show that the complement $Z$ of the open dense orbit is the union of the sets $Z_{ij}$ for $(i,j)\in \Lambda(d)$. \begin{eqnarray*} \Gamma(d) & := & \{(i,j)\mid d_l<\min(d_i,d_j)\ \mbox{or}\ d_l>\max(d_i,d_j)\ \forall\ i<l<j\}\, , \\ \Lambda(d) & := & \left\{ (i,j)\in \Gamma(d)\mid d_i=d_j\right\} \cup \\ & & \left\{ (i,j)\in \Gamma(d)\mid d_i\ne d_j \mbox{ and }\right. \\ & & \left. \begin{array}{lcl} \quad\quad & (i) & d_k\le\min(d_i,d_j)\ \mbox{or}\ d_k\ge\max(d_i,d_j)\ \forall\ k\\ & (ii) & d_k\ne d_j\ \mbox{for $k<i$} \\ & (iii) & d_k\ne d_i\ \mbox{for $k>j$} \end{array} \right\}. \end{eqnarray*} Let us describe the latter in words: for $(i,j)$ to be in $\Lambda(d)$, we require that every $d_l$ with $i<l<j$ is smaller than the minimum of $d_i$ and $d_j$ or larger than the maximum of them. If $d_i\ne d_j$, we require in addition that every $d_k$ is at most $\min(d_i,d_j)$ or at least $\max(d_i,d_j)$, that $d_j$ is different from $d_1,d_2,\dots,d_{i-1}$, and that $d_i$ is different from $d_{j+1},\dots,d_t$. In general, $\Gamma(d)$ is different from $\Lambda(d)$, as we illustrate now. \begin{ex}\label{ex:Lambda} \begin{itemize} \item[(a)] If $d=(1,3,4,2)$ then $\Gamma(d)=\{(1,2),(2,3),(3,4),(2,4),(1,4)\}$ and $\Lambda(d)=\{(2,3),(2,4),(1,4)\}$. \item[(b)] For $d=(1,2,3,2)$, $\Gamma(d)=\{(1,2),(2,3),(3,4),(2,4)\}$, $\Lambda(d)=\{(1,2),(2,4)\}$.
\item[(c)] If $d=(d_1,\dots,d_t)$ is increasing or decreasing, then \\ $\Gamma(d)=\Lambda(d)$ $=\{(1,2),(2,3),\dots,(t-1,t)\}$. \item[(d)] The fourth example will be our running example throughout the paper: if $d=(7,5,2,3,5,1,2,6,5)$ then we have $\Gamma(d)=\{(i,i+1) \mid 1\le i\le 8\}$ \\ $\cup\ \{(1,8),(2,4),$ $(2,5),(3,6),(3,7),(4,6),(4,7), (5,7),(5,8),(5,9),(7,9)\}$ and $\Lambda(d)=\{(1,8),(2,5),(3,7),(5,9)\}$. \end{itemize} \end{ex} We claim that the irreducible components of $Z={\mathfrak n}\setminus {\mathcal{O}}(d)$ are the $Z_{ij}$ with $(i,j)$ from the parameter set $\Lambda(d)$: \begin{theorem} (Theorem~\ref{thm:Z-ij}) Let $d=(d_1,\dots,d_t)$ be a composition of $n$ and $\lambda=\lambda(d)$ the partition of the Richardson orbit corresponding to $d$. Then $$ Z=\bigcup_{(i,j)\in\Lambda(d)} Z_{ij} $$ is the decomposition of $Z$ into irreducible components. \end{theorem} For the second description of the irreducible components we let $T(d)$ be the unique Young tableau obtained by filling the Young diagram of $\lambda$ with $d_1$ ones, $d_2$ twos, etc. (for details, we refer to Subsection~\ref{ss:young-tab}). Now for each pair $(i,j)$ we write $s(i,j)$ for the last row of $T(d)$ containing $i$ and $j$, and we let $T(i,j)$ be the tableau obtained from $T(d)$ by removing the box containing the number $j$ from row $s(i,j)$ and inserting it at the next possible position in order to obtain another tableau. The tableau $T(i,j)$ corresponds to an irreducible component of the intersection of ${\mathfrak n}$ with a nilpotent $\mathrm{GL}_n$-orbit, as is explained in Section~\ref{s:tableaux} (Proposition~\ref{prop:comps}). We write ${\mathfrak n}(T(i,j))\subseteq{\mathfrak n}$ for the irreducible component in ${\mathfrak n}\cap C(\mu(i,j))$ corresponding to the tableau $T(i,j)$. We claim that these correspond to irreducible components of $Z$ exactly for the $(i,j)\in\Lambda(d)$.
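The definitions of $\Gamma(d)$ and $\Lambda(d)$ are elementary enough to be checked by machine. The following sketch (in Python, with 1-based pairs $(i,j)$; the function names are ours, not part of the paper) reproduces the sets of Example~\ref{ex:Lambda}.

```python
def gamma(d):
    """Pairs (i, j), 1 <= i < j <= t, such that every d_l with i < l < j
    is below min(d_i, d_j) or above max(d_i, d_j)."""
    t = len(d)
    out = set()
    for i in range(1, t + 1):
        for j in range(i + 1, t + 1):
            lo, hi = min(d[i - 1], d[j - 1]), max(d[i - 1], d[j - 1])
            if all(d[l - 1] < lo or d[l - 1] > hi for l in range(i + 1, j)):
                out.add((i, j))
    return out

def Lambda(d):
    """Subset of gamma(d): either d_i = d_j, or d_i != d_j and the
    conditions (i)-(iii) of the definition hold."""
    t = len(d)
    out = set()
    for (i, j) in gamma(d):
        di, dj = d[i - 1], d[j - 1]
        if di == dj:
            out.add((i, j))
            continue
        lo, hi = min(di, dj), max(di, dj)
        cond_i = all(dk <= lo or dk >= hi for dk in d)             # (i)
        cond_ii = all(d[k - 1] != dj for k in range(1, i))         # (ii)
        cond_iii = all(d[k - 1] != di for k in range(j + 1, t + 1))  # (iii)
        if cond_i and cond_ii and cond_iii:
            out.add((i, j))
    return out
```

For the running example $d=(7,5,2,3,5,1,2,6,5)$ this returns exactly the four pairs $(1,8),(2,5),(3,7),(5,9)$ of part (d).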
\begin{theorem} (Corollary~\ref{cor:tableaux}) Let $d=(d_1,\dots,d_t)$ be a dimension vector and $\lambda=\lambda(d)$ the partition of the Richardson orbit corresponding to $d$. Then $$ Z=\bigcup_{(i,j)\in\Lambda(d)} {\mathfrak n}(T(i,j)) $$ is the decomposition of $Z$ into irreducible components. \end{theorem} As a consequence, we obtain that $Z$ has at most $t-1$ irreducible components (cf. Corollary~\ref{cor:components}) and we can describe their codimensions in ${\mathfrak n}$ (Corollary~\ref{cor:codim}). To be more precise, if $d$ is increasing or decreasing or if all the $d_i$ are different, then $Z$ has exactly $t-1$ irreducible components. In particular, this applies to the Borel case where $d=(1,\dots,1)$. An example with $t=9$ in which there are only four irreducible components is our running example, see Example~\ref{ex:T(d)}. Note that the techniques we use are similar to the ones of~\cite{bah}, where we describe the complement of the generic orbit in a representation space of a directed quiver of type $\mathrm{A}_t$. However, the indexing sets are different and cannot be derived from each other. The paper is organised as follows: in Section \ref{s:rank-cond} we explain how to obtain the rank conditions. We first describe line diagrams associated to a composition $d$ of $n$. Line diagrams will be used to describe elements of the corresponding nilradical ${\mathfrak n}$. In Subsection~\ref{ss:Lambda-Gamma} we prove that the elements of $\Lambda(d)$ give the irreducible components. For this, we show that if $(i,j)$ does not belong to $\Gamma(d)$ then the variety $Z_{ij}$ is contained in a union of $Z_{k_sl_s}$ for a subset of elements $(k_s,l_s)$ of $\Gamma(d)$ (Lemma~\ref{lm:not-Gamma}). Next, if $(i,j)$ is in $\Gamma(d)\setminus\Lambda(d)$, then we can find $(k,l)\in\Lambda(d)$ such that $Z_{ij}$ is contained in $Z_{kl}$ (Corollary~\ref{cor:not-Lambda}). In Section~\ref{s:tableaux}, we recall Young diagrams and their fillings.
Then we consider Young tableaux associated to a composition $d$ of $n$ and a nilpotency class $\mu\le\lambda(d)$. In a next step, we consider Young tableaux $T(i,j)$ associated to the elements of the parameter set $\Lambda(d)$. To each of these tableaux $T(i,j)$ we associate an irreducible variety ${\mathfrak n}(T(i,j))$: it is defined as the irreducible component in $\mathfrak n \cap C(\mu(i,j))$ corresponding to the tableau $T(i,j)$. The ${\mathfrak n}(T(i,j))$ are known to be irreducible by work of the second author, \cite{hhab}. By showing that ${\mathfrak n}(T(i,j))$ is equal to $Z_{ij}$ from Section~\ref{s:rank-cond} for the elements $(i,j)$ of the parameter set $\Lambda(d)$ we can complete the decomposition of the complement of the Richardson orbit in ${\mathfrak n}$ into irreducible components. \section{Components via rank conditions}\label{s:rank-cond} \subsection{Line diagrams} Let $d=(d_1,\dots,d_t)$ be a dimension vector for a parabolic subalgebra of ${\mathfrak{gl}}_n$ and ${\mathfrak n}$ the corresponding nilradical. We recall a pictorial way to represent elements of ${\mathfrak n}$ and, in particular, to obtain an element of the Richardson orbit ${\mathcal{O}}(d)$. This can be found in~\cite[Section 2]{bhrr} and in~\cite[Section 3]{ba1}. We draw $t$ top-adjusted columns of $d_1$, $d_2$, $\dots, d_t$ vertices. The vertices are connected using edges between vertices of different columns. If two vertices lie at the same height and there is no third vertex between them at that height, then we call the two vertices {\em neighbors}. The {\em complete line diagram} for $d$, $L_R(d)$, is the diagram with horizontal edges between all neighbored vertices (such as the second and the third diagram of Example~\ref{ex:line-diagrams}). A {\em line diagram $L(d)$} for $d$ is a diagram with arbitrary edges between different columns (possibly with branching). A collection of connected edges is called a {\em chain of edges} (see the example below).
If no branching occurs in a line diagram, then a chain consisting of $l$ edges connects $l+1$ vertices. In that case we can define the length of a chain: the {\em length of a chain of edges} in a line diagram (without branching) is the number of edges the chain contains. A chain of length $0$ is a vertex that is not connected to any other vertex. In Example~\ref{ex:line-diagrams}, we show a branched line diagram and the complete line diagram for $d=(3,1,2,4)$, as well as the complete line diagram for the running example $d=(7,5,2,3,5,1,2,6,5)$. \begin{ex}\label{ex:line-diagrams} a) A line diagram with branching and the complete line diagram $L_R(d)$ for $d=(3,1,2,4)$ are shown here. To the right of the latter we give the lengths of the chains in the diagram. \begin{center} \includegraphics[scale=.5]{lines-partial.eps} \hskip 20pt \includegraphics[scale=.5]{lines-small.eps} \end{center} b) Now we consider our running example $d=(7,5,2,3,5,1,2,6,5)$. Its complete line diagram $L_R(d)$ is shown here, with the lengths of the chains to the right. \begin{center} \includegraphics[scale=.5]{running-diagram.eps} \end{center} \end{ex} We will see in the next subsection that the line diagram $L_R(d)$ determines an element of the Richardson orbit in ${\mathfrak n}$. In general, line diagrams give rise to elements of the nilradical of nilpotency class smaller than or equal to $\lambda=\lambda(d)$ with respect to the Bruhat order. Any line diagram (complete or not) gives rise to an element $A$ of ${\mathfrak n}$: the sizes of the columns of a line diagram correspond to the sizes of the square blocks in the Levi factor of ${\mathfrak p}$. An edge between column $i$ and column $j$ (with $i<j$) of the diagram corresponds to a non-zero entry in the block $A_{ij}$ of the matrix $A$. A chain of two joined edges between three columns $i_0<i_1<i_2$ gives rise to a non-zero entry in the block $(A^2)_{i_0i_2}$ of the matrix $A^2$, etc. This can be made explicit, as we explain in the next subsection.
\subsection{From line diagrams to the nilradical} The elements of the nilradical ${\mathfrak n}$ for the dimension vector $d=(d_1,\dots, d_t)$ are nilpotent endomorphisms of $k^n$, for $n=\sum d_i$. In particular, if we write $e_1,\dots, e_n$ for a basis of $k^n$, then the elements of ${\mathfrak n}$ are sums $\sum_{i<j} a_{ij}E_{ij}$ for some $a_{ij}\in k$, where the elementary matrix $E_{ij}$ sends $e_j$ to $e_i$. We now describe a map associating an element of the nilradical to a given line diagram. We view the vertices of a line diagram $L(d)$ as labelled by the numbers $1,2,\dots, n$, starting at the top left vertex, with $1,2,\dots, d_1$ in the first column, $d_1+1,\dots,d_1+d_2$ in the second column, etc. We denote an edge between two vertices $i$ and $j$ ($i<j\le n$) of the diagram by $e(i,j)$ and associate to it the elementary matrix $E_{ij}\in{\mathfrak n}$. This can be extended to a map from the set of line diagrams for $d$ to the nilradical ${\mathfrak n}$ by linearity. For later use, we denote this map by $\Phi$: $$ \Phi: \{\mbox{line diagrams for $d$}\}\longrightarrow {\mathfrak n}, \ L(d)\mapsto \sum_{e(i,j)\in L(d)}E_{ij}\,. $$ If $L(d)$ is a line diagram without branching, then the partition of the image of $L(d)$ under $\Phi$ can be read off from the diagram directly as follows: if $L(d)$ has $s$ chains of lengths $c_1, c_2, \dots, c_s$ (all $\ge 0$), then $\sum_{j=1}^s (c_j+1)=\sum_{i=1}^t d_i=n$. \begin{remark}\label{re:line-part} Let $L(d)$ be a line diagram without branching and let $c_1,\dots, c_s$ be the lengths of the chains of $L(d)$. Let $\mu=(\mu_1,\dots,\mu_s)$ be the partition obtained by ordering the numbers $c_j+1$ by size. Then $\mu$ is the partition of $\Phi(L(d))$.
\end{remark} In particular, $\Phi(L_R(d))$ is an element of the Richardson orbit ${\mathcal{O}}(d)$, since the partition of $L_R(d)$ is just the dual partition of the dimension vector $d$ and this is equal to $\lambda(d)$ (cf. Section 3 in~\cite{ba1}). If $L(d)$ is any other line diagram for $d$ (without branching), with lengths of chains $c_1,\dots,c_s$ and $\mu_i:=c_i+1$ ordered by size, then we always have $\sum_{j=1}^k\mu_j\le\sum_{j=1}^k\lambda_j(d)$ for all $k$, and so the partition of $\Phi(L(d))$ is smaller than or equal to the partition of $\Phi(L_R(d))$ under the Bruhat order. To summarize, we have the following: \begin{lemma} Let $d$ be a dimension vector. Then $\Phi(L(d))$ is an element of the nilradical ${\mathfrak n}$ of nilpotency class $\mu\le\lambda(d)$. In other words, $\Phi(L(d))$ lies in ${\mathfrak n}\cap C(\mu)$. \end{lemma} \begin{ex} Let $d=(3,1,2,4)$ be as in Example~\ref{ex:line-diagrams} (a). The lengths of the chains of $L_R(d)$ are $3,2,1,0$, so the Richardson orbit has partition $(4,3,2,1)$. We compute the matrix of the complete line diagram $L_R(d)$ and the powers of this matrix. Let $X(d):=\Phi(L_R(d))$. Then $X(d)$ and its powers are \begin{eqnarray*} X(d)\ & = & E_{14}+E_{45}+E_{57}+E_{26}+E_{68}+E_{39}\\ X(d)^2 & = & E_{15}+E_{47}+E_{28} \\ X(d)^3 & = & E_{17} \\ X(d)^k & = & 0 \ \mbox{for $k>3$.} \end{eqnarray*} \end{ex} Recall that we have defined the varieties $Z_{ij}^k$ by comparing the ranks of certain submatrices of elements in the nilradical ${\mathfrak n}$ to the corresponding rank $r_{ij}^k$ of a Richardson element, cf. Definition~\ref{def:Z-ij}. We thus need to be able to compute the rank of the submatrix $X(d)[ij]$ of an element $X(d)$ of the Richardson orbit ${\mathcal{O}}(d)$ and of its powers. For this, we can use the line diagram $L_R(d)$. Let $X(d)=\sum_{e(k,l)\in L_R(d)}E_{kl}$ be the Richardson element given by $L_R(d)$.
To compute the rank $r_{1t}^k$ of $X(d)^k$, it is enough to count the chains of exactly $k$ edges in the line diagram $L_R(d)$ (a maximal chain of length $c$ contains $c-k+1$ such subchains). Analogously, to find the rank $r_{ij}^k$ of the $k$th power of the submatrix $X(d)[ij]$, one has to count the chains of exactly $k$ edges between the $i$th and $j$th column in $L_R(d)$: Let $1\le u<v\le n$ be such that the image $\Phi(e(u,v))$ of the edge $e(u,v)$ is in $X(d)[ij]$. That means we are considering edges $e(u,v)$ starting in some column $i_1\ge i$ and ending in some column $i_2\le j$. Thus, in computing $r_{ij}^k$, we really consider the $k$th power of the matrix which arises from columns $i,i+1,\dots,j$ of $L_R(d)$. We now introduce the notation to refer to the subdiagram consisting of these columns. We denote by $L_R(d)[ij]$ the subdiagram of $L_R(d)$ of all vertices from the $i$th up to the $j$th column and of all edges starting strictly after the $(i-1)$st column resp. ending strictly before the $(j+1)$st column. In other words, we remove columns $1,2,\dots,i-1$ and columns $j+1,\dots,t$ together with all edges incident to them. With this notation we have \begin{equation}\label{eq:s-r} r_{ij}^k = \#\{\mbox{chains of exactly $k$ edges in $L_R(d)[ij]$}\} \end{equation} for $1\le i<j\le t$, $k\ge 1$. Similarly, if $L(d)$ is a line diagram for $d$, we write $L(d)[ij]$ to denote the subdiagram of $L(d)$ of columns $i$ to $j$. \begin{ex} The subdiagram $L_R(d)[47]$ for $d=(7,5,2,3,5,1,2,6,5)$ of the diagram $L_R(d)$ from (b) of Example~\ref{ex:line-diagrams} is shown here (dotted lines and empty circles indicate the removed parts): \begin{center} \includegraphics[scale=.5]{line_47.eps} \end{center} \end{ex} \subsection{The varieties $Z_{ij}$}\label{ss:Lambda-Gamma} As explained earlier, we want to show that the irreducible components of $Z$ are indexed by the parameter set $\Lambda(d)$. With this in mind, we now discuss the properties of the varieties $Z_{ij}^k$.
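Since the complete line diagram has no branching, $X(d)=\Phi(L_R(d))$ is a sum of elementary matrices with pairwise distinct row indices and pairwise distinct column indices, so it can be handled as a partial map $j\mapsto i$ on basis vectors, and the ranks $r_{ij}^k$ become path counts. The following sketch (in Python; the function names are ours) rebuilds $L_R(d)$ and the ranks for the example $d=(3,1,2,4)$ above.

```python
def complete_diagram_map(d):
    """The Richardson element X(d) = Phi(L_R(d)) as a partial map j -> i:
    phi[j] = i encodes the edge e(i, j) of L_R(d), i.e. the matrix E_{ij}."""
    offsets = [0]
    for dc in d:
        offsets.append(offsets[-1] + dc)
    phi = {}
    for h in range(1, max(d) + 1):
        cols = [c for c in range(len(d)) if d[c] >= h]  # columns reaching height h
        for a, b in zip(cols, cols[1:]):                # neighbors at height h
            phi[offsets[b] + h] = offsets[a] + h
    return phi

def rank_of_power(phi, k, idx=None):
    """Rank of the k-th power of the partial permutation matrix phi,
    restricted (rows and columns) to the index set idx if given."""
    count = 0
    for j in phi:
        if idx is not None and j not in idx:
            continue
        v = j
        for _ in range(k):
            v = phi.get(v)
            if v is None or (idx is not None and v not in idx):
                v = None
                break
        if v is not None:
            count += 1
    return count
```

For $d=(3,1,2,4)$ this recovers $X(d)=E_{14}+E_{45}+E_{57}+E_{26}+E_{68}+E_{39}$ and the ranks $6,3,1,0$ of $X(d),X(d)^2,X(d)^3,X(d)^4$; restricting to the columns $2,\dots,4$ (global indices $4,\dots,10$) gives the ranks of the powers of $X(d)[2,4]$.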
We will prove that for $l\ne\kappa(i,j)$, the variety $Z_{ij}^l$ is either empty or contained in $Z_{ij}$ or in a union $Z_{ij_0}\cup Z_{i_0j}$ for some $i_0\le j_0$. Later in this section we will see that not all $(i,j)$ with $1\le i<j\le t$ are needed to describe the complement $Z$. The following notations will be useful: \begin{eqnarray*} d_{<}[ij] & := & \{l\mid i<l<j,\ d_l<\min(d_i,d_j)\} \\ d_{\ge}[ij] & := & \{l\mid i<l<j,\ d_l\ge\min(d_i,d_j)\} \, . \end{eqnarray*} If $d=(7,5,2,3,5,1,2,6,5)$, then $d_<[25]=\{3,4\}$, $d_<[26]=\emptyset$ and $\# d_{\ge}[26]=3$. \begin{remark}\label{re:kappa} Observe that \begin{eqnarray*} \kappa(i,j) & = & 1+ \# d_{\ge}[ij] \\ & = & j-i-\# d_{<}[ij]\,. \end{eqnarray*} In particular, $\kappa(i,j)=j-i$ if and only if $d_{<}[ij]=\emptyset$. Figure~\ref{fig:no-min} illustrates this. \end{remark} \begin{figure} \caption{Our running example has $d_{<}[26]=\emptyset$}\label{fig:no-min} \end{figure} \begin{lemma}\label{lm:emptyset} Let $d=(d_1,\dots,d_t)$ be a dimension vector and $1\le i<j\le t$. Then for $k>0$ we have $$ Z_{ij}^k=\emptyset \mbox{ if and only if } k> j-i \,. $$ \end{lemma} \begin{proof} One has $r_{ij}^k=\operatorname {rk} X(d)[ij]^k>0$ exactly for $k\le j-i$, and $0\in Z_{ij}^k$ if and only if $r_{ij}^k>0$. \end{proof} It remains to consider the cases where $l$ is smaller than $\kappa(i,j)$ or where $l$ lies between $\kappa(i,j)$ and $j-i$. This is covered by the next two statements. \begin{lemma}\label{lm:l<kappa} For $1\le l< \kappa(i,j)$ the following holds: $$ Z_{ij}^l\subsetneq Z_{ij}\,. $$ \end{lemma} \begin{proof} We may assume $d_i\le d_j$. For any $B\in{\mathfrak n}$ the rank of $B[ij]^l$ is independent of the order of $d_i,d_{i+1},\dots,d_j$: in computing the rank, we need to know the number of (independent) chains of $l$ edges in the line diagram of $B[ij]$. Hence we may reorder $d_i,\dots,d_j$ to obtain $d_{s_1},\dots, d_{s_{j-i+1}}$ with $d_{s_k}\le d_{s_{k+1}}$ for $k=1,\dots,j-i$.
One computes $r_{ij}^l=\operatorname {rk} X(d)[ij]^l$ as the sum of the $j-i+1-l$ smallest entries among $d_i,\dots,d_j$, i.e. $r_{ij}^l=\sum_{k=1}^{j-i+1-l}d_{s_k}$. Let $A$ belong to $Z_{ij}^l$ for some $l<\kappa(i,j)$. Thus $\operatorname {rk} A[ij]^l<r_{ij}^l=\operatorname {rk} X(d)[ij]^l$. But then also the rank of $A[ij]^k$ is smaller than $r_{ij}^k$ for $k=l+1,\dots,\kappa(i,j)$. In particular, $A\in Z_{ij}$. The strictness of the inclusion is clear. \end{proof} \begin{lemma}\label{lm:l>kappa} For $\kappa(i,j)<l\le j-i$ the following holds: there exist $i_0\le j_0$ in $d_{<}[ij]$, with $d_{i_0}$, $d_{j_0}< \min(d_i,d_j)$ maximal, such that $$ Z_{ij}^l\subseteq Z_{ij_0} \cup Z_{i_0j}\,. $$ \end{lemma} \begin{proof} We first observe that for elements of the Richardson orbit, the rank $r_{ij}^l$ is $$ r_{ij}^l=\sum_{i_0=i}^{j-l}\max_{i_0<i_1<\dots<i_{l}\le j} \min\{d_{i_0},d_{i_1},\dots,d_{i_l}\}\,. $$ (1) Let us first consider the case where $d_{<}[ij]$ only has one element, say $d_{<}[ij]=\{i_0\}$ (see Figure~\ref{fig:one-min}). Then $\kappa(i,j)=j-i-1$ and so $l=j-i$. For $A\in{\mathfrak n}$ to be an element of $Z_{ij}^{l}$, the rank of $A[ij]^l$ has to be smaller than $r_{ij}^l$. Since $d_{i_0}$ is minimal among all $d_i,\dots,d_j$, this implies $\operatorname {rk} A[ii_0]^l<r_{ij}^l$ or $\operatorname {rk} A[i_0j]^l<r_{ij}^l$ and we are done. \begin{figure} \caption{The case $\#d_<[ij]=1$: in the running example, we have $d_{<}[ij]=\{i_0\}$}\label{fig:one-min} \end{figure} (2) The case where $d_{<}[ij]$ has at least two elements only needs a slight modification of the argument.
Take $i_0$, $j_0$ from $d_{<}[ij]$ with $d_{i_0}$, $d_{j_0}$ maximal, with $i_0$ the smallest among these indices and $j_0$ the largest one (we do not distinguish between the two possibilities $d_{i_0}=d_{j_0}$ and $d_{i_0}\ne d_{j_0}$); see Figure~\ref{fig:more-min}. With a similar reasoning as in part (1) of the proof, $A$ then lies in $Z_{ij_0}$ or in $Z_{i_0j}$. \begin{figure} \caption{The case $i_0\ne j_0\in d_{<}[ij]$}\label{fig:more-min} \end{figure} \end{proof} \begin{lemma}\label{lm:Z-union} The complement $Z$ decomposes as follows: $$ Z=\bigcup_{1\le i<j\le t} Z_{ij}= \bigcup_{1\le i<j\le t}\, \bigcup_{k\ge 1} Z_{ij}^k\, . $$ \end{lemma} \begin{proof} The inclusion $\subseteq$ of the second equality is clear. To obtain the inclusion $\supseteq$, one uses Lemmata~\ref{lm:emptyset},~\ref{lm:l<kappa} and~\ref{lm:l>kappa}. Consider the first equality: by definition, $A\in Z$ if and only if $A\notin{\mathcal{O}}(d)$. The latter is the case if and only if there exist $1\le i<j\le t$ and $k\le j-i$ such that $A\in Z_{ij}^k$: to see this, one uses the formula for the dimension of the stabilizer of $A\in {\mathfrak{gl}}_n$, see \cite{kp}. This formula uses the dimensions of the kernels of the maps $A^k$, $k\ge 1$. The stabilizer of $A$ has minimal dimension if and only if $A$ is an element of ${\mathcal{O}}(d)$. \end{proof} It now remains to see that the $(i,j)\in\Lambda(d)$ are enough to describe the irreducible components of $Z$. In a first step (Lemma~\ref{lm:not-Gamma}), we start with $(i,j)\notin \Gamma(d)$ and show that in that case $Z_{ij}$ is contained in a union of $Z_{kl}$'s such that the corresponding $(k,l)$ all lie in $\Gamma(d)$. Then we consider an element $(i,j)$ of $\Gamma(d)\setminus\Lambda(d)$ and show that we can find $(k,l)\in\Lambda(d)$ with $Z_{ij}\subseteq Z_{kl}$ (Lemma~\ref{lm:not-Lambda} and Corollary~\ref{cor:not-Lambda}). As always, we assume that $1\le i<j\le t$ and $1\le k<l\le t$. \begin{lemma}\label{lm:not-Gamma} Assume that $(i,j)$ does not belong to $\Gamma(d)$.
Then there exists $\Gamma'(d)\subseteq\Gamma(d)$ such that $$ Z_{ij}\subseteq \bigcup_{(k,l)\in\Gamma'(d)} Z_{kl}\,. $$ \end{lemma} \begin{proof} It is enough to show that we can find an $l$, $i<l<j$, with $\min(d_i,d_j)\le d_l\le \max(d_i,d_j)$, such that $$ Z_{ij}\subseteq Z_{il}\cup Z_{lj}\, . $$ By iterating this, we will eventually end up with a subset $\Gamma'(d)\subseteq\Gamma(d)$ as in the statement of the lemma. So choose an $l$, $i<l<j$, with $\min(d_i,d_j)\le d_l\le \max(d_i,d_j)$ (such an $l$ exists since $(i,j)\notin\Gamma(d)$). Take $A\in Z_{ij}$ arbitrary. By assumption, $A[ij]^{\kappa(i,j)}$ is defective, i.e. $\operatorname {rk} A[ij]^{\kappa(i,j)}<r_{ij}^{\kappa(i,j)}$. Since $\min(d_i,d_j)\le d_l\le\max(d_i,d_j)$, the defectiveness is inherited from $A[il]$ or from $A[lj]$, and $A\in Z_{il}$ or $A\in Z_{lj}$ accordingly. \end{proof} Let us remark that when removing an edge of a chain of $L_R(d)$ in the proof above, we ensured that the matrix $A$ has a zero entry at the corresponding position. In general, the diagram of a matrix in $Z_{il}$ resp. in $Z_{lj}$ has more non-zero entries than the ones obtained after removing one edge from $L_R(d)$: this is illustrated by the dashed lines in Figure~\ref{fig:not-Gamma}. \begin{figure} \caption{Examples for $A\in Z_{il}$ resp. $A\in Z_{lj}$}\label{fig:not-Gamma} \end{figure} The following lemma states that for any $(i,j)$ from $\Gamma(d)\setminus\Lambda(d)$ there exists $(k,l)$ from $\Lambda(d)$ with $k\le i<j\le l$ such that $Z_{ij}\subseteq Z_{kl}$. \begin{lemma} \label{lm:not-Lambda} Assume that $(i,j)\in\Gamma(d)\setminus \Lambda(d)$. Then one of the following holds: $$ \begin{array}{ll} & \mbox{there exists $k>j$ with $Z_{ij}\subseteq Z_{ik}$} \\ \mbox{or} & \mbox{there exists $l<i$ with $Z_{ij}\subseteq Z_{lj}$.} \end{array} $$ \end{lemma} \begin{proof} First observe that $d_i\ne d_j$, since otherwise $(i,j)$ would belong to $\Lambda(d)$. Without loss of generality, we assume $d_i<d_j$.
We have three cases to consider: \begin{itemize} \item[(i)] There is $k_1\in\{1,\dots,i-1\}\cup\{j+1,\dots,t\}$ with $d_i<d_{k_1}<d_j$. \item[(ii)] There exists $k_2<i$ with $d_{k_2}=d_j$. \item[(iii)] There exists $k_3>j$ with $d_{k_3}=d_i$. \end{itemize} \begin{figure} \caption{For the running example, $(7,8)$ is in $\Gamma(d)\setminus\Lambda(d)$}\label{fig:not-Lambda} \end{figure} The three cases are illustrated in Figure~\ref{fig:not-Lambda}: if $(i,j)\in\Gamma(d)$ but not in $\Lambda(d)$, then one of the following has to occur: there has to be a $k$ with $d_k$ inside the shaded area, or with $d_k$ lying at the same height as $d_j$ (if $k<i$) resp. at the same height as $d_i$ (if $k>j$). \noindent Case (i) with $k_1>j$: Among the $k_1>j$ with $d_i<d_{k_1}<d_j$ choose one with $d_{k_1}-d_i$ minimal, and $k_1$ minimal (i.e. as close to $j$ as possible). Note that we have $\kappa(i,j)\le \kappa(i,k_1)$. Now $A\in Z_{ij}$ means that $A[ij]^{\kappa(i,j)}$ is defective. Since $d_i<d_{k_1}$, this defectiveness has to be inherited from $A[i,k_1]$, i.e. $\operatorname {rk} A[i,k_1]^{\kappa(i,k_1)}<r_{i,k_1}^{\kappa(i,k_1)}$, and so $Z_{ij}\subseteq Z_{i,k_1}$. \noindent Case (i) with $k_1<i$: here, we choose $k_1$ analogously such that $d_j-d_{k_1}$ is minimal and $k_1<i$ maximal among those (i.e. as close to $i$ as possible). One checks that $\kappa(i,j)\le\kappa(k_1,j)$. Similarly as before, one gets $Z_{ij}\subseteq Z_{k_1,j}$. \noindent Case (ii): Among the $k_2<i$ with $d_{k_2}=d_j$, choose the maximal one (i.e. the one closest to $i$). We have $\kappa(i,j)\le\kappa(k_2,j)$ and we get $Z_{ij}\subseteq Z_{k_2,j}$. Case (iii) is completely analogous to case (ii). \end{proof} Observe that $(k_2,j)$ and $(i,k_3)$ from cases (ii) and (iii) above are elements of $\Lambda(d)$. \begin{cor}\label{cor:not-Lambda} For any $(i,j)\in\Gamma(d)\setminus\Lambda(d)$ there exists $(k,l)\in\Lambda(d)$ such that $$ Z_{ij}\subseteq Z_{kl}\, .
$$ \end{cor} \begin{proof} Without loss of generality, we can assume $d_i<d_j$. By the observation after the proof of Lemma~\ref{lm:not-Lambda}, we are done if there exists $k'<i$ with $d_{k'}=d_j$ or $k''>j$ with $d_{k''}=d_i$. Using similar arguments, one sees that if there exist $k'<i$ and $k''>j$ with $d_i <d_{k'}=d_{k''}< d_j$, then $(k',k'')\in\Lambda(d)$ and $Z_{ij}\subseteq Z_{k',k''}$. Thus, assume that there exists $k\in\{1,\dots,i-1\}\cup \{j+1,\dots,t\}$ with $d_i <d_k< d_j$ and such that there is no $k'<i$ with $d_{k'}=d_j$ and no $k''>j$ with $d_{k''}=d_i$. \\ If $k>j$, we choose $k$ such that $d_k-d_i$ is minimal and take the minimal $k>j$ among these (i.e. $k$ is as close to $j$ as possible). There are two possibilities: \\ Either we have $d_{k'}>d_k$ for all $k'<i$. Then $(k',k)\in\Lambda(d)$ and one checks that $Z_{ij}\subseteq Z_{k',k}$. \\ Or there exists $k'<i$ with $d_i<d_{k'}<d_k$. In that case, among the $k'<i$ with this property, we choose one with $d_k-d_{k'}$ minimal and such that $k'<i$ is maximal (i.e. $k'$ is as close to $i$ as possible). Again, we get $(k',k)\in\Lambda(d)$ and $Z_{ij}\subseteq Z_{k',k}$. \\ The case $k<i$ is analogous. \end{proof} \section{Components via tableaux}\label{s:tableaux} Let $d=(d_1,\dots,d_t)$ be a composition of $n$, let ${\mathcal{O}}(d)$ be the corresponding Richardson orbit in ${\mathfrak n}$ and let $\lambda=\lambda(d)$ be the partition of the Richardson orbit. The second description of the irreducible components of $Z={\mathfrak n}\setminus{\mathcal{O}}(d)$ uses partitions $\mu(i,j)$ for $(i,j)\in\Lambda(d)$ and tableaux corresponding to them. Observe that $\lambda_1=t$, that $\lambda_2$ is the number of entries $d_i\ge 2$ appearing in $d$, that $\lambda_3=\#\{i\mid d_i\ge 3\}$, and so on. Let us introduce the necessary notation.
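The observation above says that $\lambda(d)$ is the conjugate (dual) partition of $d$: the $k$th part of $\lambda(d)$ counts the entries $d_i\ge k$. A quick sketch (in Python; the function name is ours):

```python
def richardson_partition(d):
    """lambda(d): the k-th part is the number of entries d_i >= k,
    i.e. the conjugate of the partition obtained by sorting d."""
    lam = []
    k = 1
    while any(di >= k for di in d):
        lam.append(sum(1 for di in d if di >= k))
        k += 1
    return lam
```

For the running example $d=(7,5,2,3,5,1,2,6,5)$ this gives the partition $(9,8,6,5,5,2,1)$ of $36$ appearing in the example of the next subsection, and for $d=(3,1,2,4)$ it recovers $(4,3,2,1)$.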
If $\lambda=(\lambda_1\ge\lambda_2\ge\dots\ge \lambda_s\ge 1)$ is a partition of $n$, we will also use $\lambda$ to denote the Young diagram of shape $\lambda$. It has $s$ rows, with $\lambda_1$ boxes in the top row, $\lambda_2$ boxes in the second row, etc., up to $\lambda_s$ boxes in the last row. That means that we view Young diagrams as a number of left-adjusted rows of boxes, attached to the top left corner, and decreasing in length from top to bottom. A standard reference for this is the book~\cite{fu} by Fulton. \subsection{The Young tableaux ${\mathcal{T}}(\mu,d)$}\label{ss:young-tab} Let $\mu\le\lambda(d)$ be a partition of $n$ (unless mentioned otherwise, we will always deal with partitions of $n$). \begin{definition} We define a {\em Young tableau} of shape $\mu$ and of dimension vector $d$ to be a filling of the Young diagram of $\mu$ with $d_1$ ones, $d_2$ twos, etc. We write ${\mathcal{T}}(\mu,d)$ for the set of all Young tableaux of shape $\mu$ and dimension vector $d$. \end{definition} Recall that the rules for fillings of a Young diagram are that the numbers in a row strictly increase from left to right and that the numbers in a column weakly increase from top to bottom. In general, there might be several Young tableaux of a given shape for a given $d$. There is exactly one Young tableau of shape $\lambda=\lambda(d)$ for $d$, so ${\mathcal{T}}(\lambda(d),d)$ only has one element. To abbreviate, we will just call it $T(d)$. The entries of the boxes of its first row are $1,2,\dots,t$. \begin{ex} The partition of the composition $d=(7,5,2,3,5,1,2,6,5)$ of $36$ is $\lambda(d)=(9,8,6,5,5,2,1)$. The partition $\mu=(9,8,6,5,4,3,1)$ is smaller than $\lambda(d)$ and ${\mathcal{T}}(\mu,d)$ consists of one element $T(\mu,d)$. We include $T(d)$ and $T(\mu,d)$ here.
$$ \psfragscanon \psfrag{1}{$_{1}$} \psfrag{3}{$_{3}$} \psfrag{2}{$_{2}$} \psfrag{4}{$_{4}$} \psfrag{5}{$_{5}$} \psfrag{6}{$_{6}$} \psfrag{7}{$_{7}$} \psfrag{8}{$_{8}$} \psfrag{9}{$_{9}$} T(d)\ \includegraphics[scale=.5]{T_d.eps} \hspace{1cm} T(\mu,d)\ \includegraphics[scale=.5]{T_25.eps} $$ \end{ex} In order to understand the irreducible components of the complement $Z={\mathfrak n}\setminus {\mathcal{O}}(d)$, we have to consider the intersections ${\mathfrak n}\cap C(\mu)$ for $\mu<\lambda(d)$. Each irreducible component of $Z$ corresponds to an irreducible component in such an intersection. Here, we can use a result of the second author (cf. Section 4.2 of~\cite{hhab}). First, one observes that the irreducible components of ${\mathfrak n}\cap C(\mu)$ are given by sequences $\mu^1,\dots,\mu^t$ where $\mu^i$ is a partition of $\sum_{j=1}^i d_j$, with $\mu^t=\mu$ and such that $0\le\mu_j^{i+1}-\mu_j^i\le 1$ (for all $j$, for $1\le i<t$). The latter correspond to tableaux of shape $\mu$ with $d_i$ entries $i$, i.e. to the elements of ${\mathcal{T}}(\mu,d)$ in our notation. \begin{prop}\label{prop:comps} Let $\mu\le \lambda(d)$ be a partition of $n$. Then the irreducible components of ${\mathfrak n}\cap C(\mu)$ are in natural bijection with the tableaux in ${\mathcal{T}}(\mu,d)$. \end{prop} \begin{proof} This is Satz 4.2.8 in~\cite{hhab}. \end{proof} \begin{ex} \label{ex:O(d)-T(d)} Let $d=(d_1,\dots,d_t)$ be a dimension vector and $\lambda=\lambda(d)$. We know that ${\mathfrak n}\cap C(\lambda)={\mathcal{O}}(d)$ is the Richardson orbit. On the other hand, ${\mathcal{T}}(\lambda,d)=\{T(d)\}$ has exactly one element. We now explain how to relate the complete line diagram $L_R(d)$ to the tableau $T(d)$. The lengths of the chains in $L_R(d)$, each increased by one, are the entries of the partition $\lambda$ and hence give the shape of $T(d)$.
The filling of $T(d)$ can now be obtained from $L_R(d)$ by labelling each vertex of the $i$-th column in $L_R(d)$ by an $i$. These numbers are then copied row by row, from left to right, into the Young diagram of shape $\lambda$ to get $T(d)$. $$ \psfragscanon \psfrag{1}{$_{1}$} \psfrag{3}{$_{3}$} \psfrag{2}{$_{2}$} \psfrag{4}{$_{4}$} \psfrag{label columns}{\tiny{\mbox{label columns}}} \psfrag{copy into T(d)}{\tiny{\mbox{copy into $T(d)$}}} \includegraphics[scale=.6]{L_2324_.eps}\quad\quad \includegraphics[scale=.6]{2324.eps}\quad\quad \includegraphics[scale=.6]{2-3-2-4.eps} $$ \end{ex} From this connection between the line diagram $L_R(d)$ and $T(d)$ one deduces the following useful observation. Every pair $(i,j)$ with $1\le i<j\le t$ determines a unique row of $T(d)$, namely the last row of $T(d)$ containing both $i$ and $j$. Such a row always exists, as the first row just consists of the boxes with numbers $1,2,3,\dots,t$. We denote this row by $s(i,j)$. \begin{lemma}\label{lm:kappa-T(d)} The number of boxes between $i$ and $j$ in row $s(i,j)$ of $T(d)$ is equal to $\kappa(i,j)-1$. \end{lemma} Proposition~\ref{prop:comps} describes the irreducible components of the intersections ${\mathfrak n}\cap C(\mu)$ for $\mu\le \lambda$: they are given by the Young tableaux in ${\mathcal{T}}(\mu,d)$, i.e. by all possible fillings of the diagram $\mu$ by the numbers given by $d$. Clearly, not all irreducible components of the different intersections ${\mathfrak n}\cap C(\mu)$ give rise to an irreducible component of $Z$. If $\mu_2\le \mu_1$ and $T_i\in {\mathcal{T}}(\mu_i,d)$ are tableaux such that $T_2$ can be obtained from $T_1$ by moving down boxes successively, then the irreducible component corresponding to $T_2$ is already contained in the irreducible component corresponding to $T_1$ and thus does not give rise to a new irreducible component of the complement $Z$ of the Richardson orbit.
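Lemma~\ref{lm:kappa-T(d)} can be verified mechanically: row $r$ of $T(d)$ consists exactly of the labels $i$ with $d_i\ge r$ in increasing order, so $s(i,j)$ is row $\min(d_i,d_j)$, and the boxes strictly between $i$ and $j$ in that row are counted by $\kappa(i,j)-1$. A sketch (in Python; the function names are ours):

```python
def rows_of_T(d):
    """The rows of T(d): row r lists, in increasing order, the labels i
    (1-based) with d_i >= r -- the unique filling of lambda(d) by d."""
    rows, r = [], 1
    while any(di >= r for di in d):
        rows.append([i + 1 for i, di in enumerate(d) if di >= r])
        r += 1
    return rows

def kappa(d, i, j):
    """kappa(i, j) = 1 + #{l : i < l < j, d_l >= min(d_i, d_j)}."""
    lo = min(d[i - 1], d[j - 1])
    return 1 + sum(1 for l in range(i + 1, j) if d[l - 1] >= lo)

def boxes_between(d, i, j):
    """Boxes strictly between i and j in row s(i,j) of T(d), where
    s(i,j) is the last row of T(d) containing both i and j."""
    rows = rows_of_T(d)
    s = max(r for r, row in enumerate(rows) if i in row and j in row)
    return rows[s].index(j) - rows[s].index(i) - 1
```

Running this over all pairs $1\le i<j\le t$ of the running example $d=(7,5,2,3,5,1,2,6,5)$ confirms the equality of the lemma in every case.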
This is in particular the case if $T_1$ is obtained from the tableau $T(d)$ of the Richardson orbit by moving down a single box and $T_2$ is a degeneration of $T_1$ (obtained by moving down boxes from $T_1$). Thus, the only candidates for irreducible components are the ones given by tableaux which can be obtained from $T(d)$ by moving down a single box to the closest possible row. We call such a degeneration a {\em minimal movement}. \subsection{The Young tableaux $T(i,j)$}\label{ss:young-tab} To describe minimal movements, we now define certain tableaux $T(i,j)$. \begin{definition} The tableau $T(i,j)$ is the tableau obtained from $T(d)$ by removing the box containing the number $j$ from row $s(i,j)$ and inserting it in the nearest row in order to obtain another tableau. In other words: among the possible rows where this box could be inserted, we choose the one that is closest to row $s(i,j)$. We denote the partition of the resulting tableau $T(i,j)$ by $\mu(i,j)$. \end{definition} \begin{definition} For a tableau $T(i,j)$ we define ${\mathfrak n}(T(i,j))\subseteq {\mathfrak n}$ to be the irreducible component of ${\mathfrak n}\cap C(\mu(i,j))$ whose tableau is $T(i,j)$. \end{definition} We claim that ${\mathfrak n}(T(i,j))$ gives rise to an irreducible component of the complement $Z$ exactly when $(i,j)$ belongs to the parameter set $\Lambda(d)$. For completeness, we recall the definition of the tableau $T$ for an irreducible component in $C(\mu) \cap \mathfrak n$. Consider a maximal flag $V_0 \subset V_1 \subset \ldots \subset V_t$ of vector spaces that is stabilized by $P(d)$. Take any matrix $A$ in the open subset of an irreducible component of $C(\mu) \cap \mathfrak n$ where $A$ restricted to $V_i$ has constant Jordan type. Then the Young diagram of $A|_{V_{i}}$ is the partition obtained from $T$ by deleting all boxes with entries $i+1,\ldots,t$.
So the subdiagram consisting of all boxes with entries at most $i$ measures the generic Jordan type of $A$ restricted to the subspace $V_i$. In particular, the equations defining the component corresponding to $T(i,j)$ can only involve the entries of $A[1,j]$. Even stronger, we will see in Lemma~\ref{lm:T-gleich-Z} that the equations involve only entries in $A[i,j]$ for $(i,j)$ in $\Gamma(d)$. To prepare for Lemma~\ref{lm:T-gleich-Z} we observe that for $(1,t)\in\Gamma(d)$ the component ${\mathfrak n}(T(1,t))$ coincides with $C(\mu(1,t)) \cap \mathfrak n$ since there is only one tableau for the partition $\mu(1,t)$ with dimension vector $d$. Consequently, this component is defined by the equation $\operatorname {rk} A[1,t]^{\kappa(1,t)} <r_{1,t}^{\kappa(1,t)}$ defining $C(\mu(1,t))\cap{\mathfrak n}$ inside ${\mathfrak n}$. By definition, the tableau $T(i,j)$ is obtained from $T(d)$ through a minimal movement. Its partition $\mu(i,j)$ is clearly smaller than $\lambda=\lambda(d)$, as the lengths of the rows of a tableau are the parts of the corresponding partition. In particular, these lengths form a decreasing sequence of positive numbers. Thus, moving down a box from a row of length $k$ to a lower row of length at most $k-2$ results in a partition which is smaller than the original partition. Note, however, that different elements $(i,j)$ and $(k,l)$ can lead to the same partition $\mu(i,j)=\mu(k,l)$, e.g. $\mu(2,5)=\mu(5,9)$ in Example~\ref{ex:T(d)} below. \begin{ex}\label{ex:T(d)} Let $d=(7,5,2,3,5,1,2,6,5)$ be a dimension vector, $n=36$. To illustrate the construction of $T(i,j)$ we compute these tableaux for all $(i,j)\in\Lambda(d)=\{(1,8),(2,5),(3,7),(5,9)\}$. They are presented in Figure~\ref{fig:T(d)}. In the picture showing the line diagram $L_R(d)$ we have indicated the connections between the columns $i$ and $j$ for all $(i,j)\in\Lambda(d)$ by shaded areas.
\begin{figure} \caption{The tableaux $T(d)$, $T(i,j)$ and $L_R(d)$ for Example~\ref{ex:T(d)}}\label{fig:T(d)} \end{figure} \end{ex} \begin{lemma}\label{lm:T-gleich-Z} Let $d=(d_1,\dots,d_t)$ be a dimension vector, $(i,j)\in\Gamma(d)$. Then $$ {\mathfrak n}(T(i,j))= Z_{ij}\,. $$ In particular, $Z_{i,j}$ is irreducible. \end{lemma} \begin{proof} We show that ${\mathfrak n}(T(i,j))=\{A\in {\mathfrak n}\mid\operatorname {rk} A[i,j]^{\kappa(i,j)}< r_{i,j}^{\kappa(i,j)}\}=Z_{ij}$. The second equality is just the definition of $Z_{ij}$. \\ We first prove the lemma for a special case: replace $d_1,\ldots, d_{i-1}$ and $d_{j+1},\ldots,d_t$ by zero; thus we get a new, shorter dimension vector $e := (d_i,\ldots,d_j) = (e_1,\ldots,e_{j-i+1})$. Note that $(i,j)$ is in $\Gamma(d)$ precisely when $(1,j-i+1)$ is in $\Gamma(e)$. Also note that the codimension of $Z_{i,j}$ for $d$ coincides with the codimension of $Z_{1,j-i+1}$ for $e$; the first variety is just a product of the latter with an affine space. Consequently, $Z_{i,j}$ for $d$ is irreducible precisely when $Z_{1,j-i+1}$ is irreducible for $e$. Finally, we compare the component ${\mathfrak n} (T(i,j))$ for $d$ with the unique component ${\mathfrak n}(T(1,j-i+1))$ for $e$ that coincides with $\mathfrak n \cap C(\mu(1,j-i+1))$ for $e$. Again, both are just given by the equation $\operatorname {rk} A[i,j]^{\kappa(i,j)} < r_{i,j}^{\kappa(i,j)}$ for $d$, respectively $\operatorname {rk} A[1,j-i+1]^{\kappa(1,j-i+1)} < r_{1,j-i+1}^{\kappa(1,j-i+1)}$ for $e$. This finally shows that both varieties coincide. \end{proof} \section{The irreducible components of $Z$} We are now ready to finish the proof of the descriptions of the decomposition of the complement $Z={\mathfrak n}\setminus{\mathcal{O}}(d)$ of the Richardson orbit into irreducible components. Again, let $d=(d_1,\dots,d_t)$ be a dimension vector, $\lambda=\lambda(d)$ the partition of the Richardson orbit and $(i,j)$ a pair with $1\le i<j\le t$.
Recall that the $T(i,j)$ are elements of ${\mathcal{T}}(\mu(i,j),d)$. By Proposition~\ref{prop:comps} the $T(i,j)$ correspond to irreducible components of ${\mathfrak n}\cap C(\mu(i,j))$. So the corresponding ${\mathfrak n}(T(i,j))$ are irreducible. \begin{thm}\label{thm:Z-ij} $$ Z=\bigcup_{(i,j)\in\Lambda(d)} Z_{ij} $$ is the decomposition of $Z$ into irreducible components. \end{thm} \begin{proof} We know that $Z$ is the union of all $Z_{ij}$ over all $(i,j)$ with $1\le i<j\le t$ from Lemma~\ref{lm:Z-union}. By Lemma~\ref{lm:not-Gamma}, $$ Z=\bigcup_{(k,l)\in\Gamma'(d)} Z_{kl} $$ for some subset $\Gamma'(d)\subseteq\Gamma(d)$. And finally, Corollary~\ref{cor:not-Lambda} tells us that for each $(k,l)$ in this subset $\Gamma'(d)$, there exists $(i,j)\in\Lambda(d)$ such that $Z_{kl}$ is contained in $Z_{ij}$. It remains to see that $Z_{ij}\not\subseteq Z_{kl}$ and $Z_{ij}\not\supseteq Z_{kl}$ for all $(i,j)\ne(k,l)$ in $\Lambda(d)$. This follows since for $(i,j)\ne (k,l)$ in $\Lambda(d)$, one can find matrices $A$ in $Z_{ij}$ which do not satisfy the conditions for $Z_{kl}$ and vice versa: Assume $(i,j)\ne(k,l)\in \Lambda(d)$. Removing from the line diagram $L_R(d)$ one edge of the lowest chain connecting columns $i$ and $j$, and reconnecting the resulting loose ends, if possible, with lower rows to the left and right (as with the dashed lines in Figure~\ref{fig:not-Gamma}), produces an element $A$ of $Z_{ij}$ (under $\Phi$) with $A\notin Z_{kl}$. It is completely analogous to find $B\in Z_{kl}$, $B\notin Z_{ij}$. The irreducibility follows now since $Z_{ij}={\mathfrak n}(T(i,j))$ (Lemma~\ref{lm:T-gleich-Z}). \end{proof} \begin{cor}\label{cor:components} The complement $Z={\mathfrak n}\setminus {\mathcal{O}}(d)$ has at most $t-1$ irreducible components. \end{cor} \begin{proof} If $d$ is increasing or decreasing then clearly $\Lambda(d)$ has size $t-1$, cf. Example~\ref{ex:Lambda}. The same is true if the $d_i$ are all different.
In all other cases there are $d_i=d_j$ with $|j-i|>1$ such that there exists an index $i<l<j$ with $d_l\ne d_i$. If $d_l>d_i$ is minimal among these, then neither $(i,l)$ nor $(l,j)$ belongs to $\Lambda(d)$ and thus $\Lambda(d)$ has at most $t-2$ elements. The same is true for $d_l<d_i$, with $d_l$ maximal among those. \end{proof} Furthermore, we can describe the codimension of $Z_{ij}$ in ${\mathfrak n}$ as follows. Recall that $T(i,j)$ is obtained from $T(d)$ through a minimal movement (see Subsection~\ref{ss:young-tab}). Let $c(i,j)$ be the number of rows the box with label $j$ moves down, i.e. $j$ goes from row $s(i,j)$ to row $s(i,j)+c(i,j)$. It is known that for every row that a box in a Young diagram is moved down, the dimension of the GL$_n$-orbit of the corresponding nilpotent elements decreases by two. This can be seen using the formula for the dimension of the stabilizer from~\cite{kp}. The change in dimension in the nilradical is half of this. Thus, the resulting ${\mathfrak n}(T(i,j))$ has codimension $c(i,j)$ in the nilradical ${\mathfrak n}$ and we get: \begin{cor}\label{cor:codim} For $(i,j)\in\Gamma(d)$, $Z_{ij}$ has codimension $c(i,j)$ in ${\mathfrak n}$. \end{cor} The second description of the irreducible components of $Z$ is now an immediate consequence of Theorem~\ref{thm:Z-ij} and Lemma~\ref{lm:T-gleich-Z}: \begin{cor} \label{cor:tableaux} $$ Z=\bigcup_{(i,j)\in\Lambda(d)} {\mathfrak n}(T(i,j)) $$ is the decomposition of $Z$ into irreducible components. \end{cor} \section{An application} In the last section, we illustrate our work on an example. We work with $G=\mathrm{GL}_5$ and consider the parabolic subgroups of different dimension vectors. \\ \noindent A) If $d=(1,1,1,1,1)$ then $P=B$ is a Borel subgroup. Note that $\Lambda(d)=\Gamma(d)=\{(1,2),(2,3),(3,4),(4,5)\}$, so Theorem~\ref{thm:Z-ij} describes the complement $Z$ as the union \[ Z= Z_{12}\cup Z_{23}\cup Z_{34}\cup Z_{45} \] of four irreducible components.
In this example, we have that the blocks $A[i,j]=(a_{ij})$ are all $1\times 1$-matrices. The Richardson orbit is the intersection of the regular nilpotent orbit with the set of upper triangular matrices in ${\mathfrak{gl}}_5$. The regular nilpotent elements are the nilpotent $5\times 5$-matrices whose 4th power is non-zero. So the Richardson orbit consists of the strictly upper triangular matrices $A=(a_{ij})_{ij}$ with \begin{eqnarray*} A[1,5]^4 & = & \begin{pmatrix} 0 & 0 & 0 & 0 & a_{12}a_{23}a_{34}a_{45} \\ & 0 & 0 & 0 & 0\\ & & 0 & 0 & 0 \\ & & & 0 & 0 \\ &&&& 0 \end{pmatrix} \quad \mbox{with $a_{12}a_{23}a_{34}a_{45}\ne 0$.} \end{eqnarray*} For $A$ to be in the complement $Z$ of the Richardson orbit, the product $a_{12}a_{23}a_{34}a_{45}$ has to be zero, i.e. $A[1,5]^4=0$. Then clearly, $A\in Z_{i,i+1}$ for some $i\le 4$, as $Z_{i,i+1}$ is the set of matrices with $A[i,i+1]=0$. Thus, $A$ lies in one of the components $Z_{ij}$ with $(i,j)\in \Lambda(d)$. \\ \noindent B) If $d=(1,1,1,2)$ then $\Lambda(d)=\Gamma(d)=\{(1,2),(2,3),(3,4)\}$. The Richardson orbit is determined by the conditions $\operatorname {rk} A[12]=\operatorname {rk} A[23]=\operatorname {rk} A[34]=1$, $\operatorname {rk} A[13]^2=\operatorname {rk} A[24]^2=1$, $\operatorname {rk} A[14]^3=1$ (for $A\in {\mathfrak n}$). For $A$ to be in the complement, one of these ranks has to be zero. By Theorem~\ref{thm:Z-ij}, we should have $$ Z=Z_{12}\cup Z_{23}\cup Z_{34} $$ where the component $Z_{12}$ consists of the matrices $A\in{\mathfrak n}$ with $a_{12}=0$, the component $Z_{23}$ of the $A$ with $a_{23}=0$ and $Z_{34}$ of the $A$ with $a_{34}=a_{35}=0$.
Let us first compute $A^2$ and $A^3$ for $A\in{\mathfrak n}$ (we omit the zero entries in the opposite nilradical): \[ A=\begin{pmatrix} 0 & a_{12} & a_{13} & a_{14} & a_{15} \\ & 0 & a_{23} & a_{24} & a_{25} \\ & & 0 & a_{34} & a_{35} \\ & & & 0 & 0 \\ & & & & 0 \end{pmatrix} \] \[ A^2=\begin{pmatrix} 0 & 0 & a_{12}a_{23} & a_{12}a_{24}+a_{13}a_{34} & a_{12}a_{25}+a_{13}a_{35} \\ & 0 & 0 & a_{23}a_{34} & a_{23}a_{35} \\ & & 0 & 0 & 0 \\ & & & 0 & 0 \\ & & & & 0 \end{pmatrix} \] \[ A^3=\begin{pmatrix} 0 & 0 & 0 & a_{12}a_{23}a_{34} & a_{12}a_{23}a_{35} \\ & 0 & 0 & 0 & 0 \\ & & 0 & 0 & 0 \\ & & & 0 & 0\\ & & & & 0 \end{pmatrix} \] Then we see that $A[14]^3=A^3=0$ if and only if $a_{12}a_{23}a_{34}=0$ {\em and} $a_{12}a_{23}a_{35}=0$. Thus, $A$ clearly belongs to one of the three components described above. Now, $A[13]^2=0$ if and only if $a_{12}a_{23}=0$, as this is the only non-zero entry of $A[13]^2$. Similarly, $A[24]^2=0$ if and only if $a_{23}a_{34}=0$ {\em and} $a_{23}a_{35}=0$. In all cases, $A$ is contained in one of the three components. The case of $d=(2,1,1,1)$ is completely analogous. \\ \noindent C) The first interesting case appears for $d=(1,1,2,1)$. Here, $\Lambda(d)=\{(1,2),(2,4)\}\ne \Gamma(d)$. So we expect two irreducible components: $Z_{12}$, consisting of the matrices $A$ with $a_{12}=0$, and $Z_{24}$, consisting of the $A$ with $\operatorname {rk} A[24]^2=0$.
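The powers displayed for case B) above can be checked numerically; the following sketch fills the nilradical for $d=(1,1,1,2)$ (so $a_{45}=0$) with random entries and verifies the shape of $A^3$.

```python
# Numerical check of the displayed A^3 for d = (1,1,1,2): since a45 = 0,
# the cube should have only the entries (1,4) = a12*a23*a34 and
# (1,5) = a12*a23*a35.
import random

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

random.seed(0)
# a[(i, j)] holds the matrix entry a_{i+1, j+1} (0-based indexing)
a = {(i, j): random.uniform(1, 2) for i in range(5) for j in range(5) if j > i}
a[(3, 4)] = 0.0                      # a45 = 0 inside the block of size 2
A = [[a.get((i, j), 0.0) for j in range(5)] for i in range(5)]
A3 = matmul(matmul(A, A), A)
```

All other entries of `A3` vanish, matching the displayed matrix.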
We first compute $A$, $A^2$ and $A^3$ for $A\in {\mathfrak n}$: \[ A=\begin{pmatrix} 0 & a_{12} & a_{13} & a_{14} & a_{15} \\ & 0 & a_{23} & a_{24} & a_{25} \\ & & 0 & 0 & a_{35} \\ & & & 0 & a_{45} \\ & & & & 0 \end{pmatrix} \ \ \ A^2=\begin{pmatrix} 0 & 0 & a_{12}a_{23} & a_{12}a_{24} & a_{12}a_{25}+a_{13}a_{35}+a_{14}a_{45} \\ & 0 & 0 & 0 & a_{23}a_{35}+a_{24}a_{45} \\ & & 0 & 0 & 0 \\ & & & 0 & 0 \\ & & & & 0 \end{pmatrix} \] \[ A^3=\begin{pmatrix} 0 & 0 & 0 & 0 &a_{12}(a_{23}a_{35} + a_{24}a_{45}) \\ & 0 & 0 & 0 & 0 \\ & & 0 & 0 & 0 \\ & & & 0 & 0\\ & & & & 0 \end{pmatrix} \] The elements $A$ of the Richardson orbit have non-zero $a_{12}$, and $\operatorname {rk} A[23]=\operatorname {rk} A[34]=1$, $\operatorname {rk} A[13]^2=\operatorname {rk} A[24]^2=\operatorname {rk} A[14]^3=\operatorname {rk} A^3=1$. Clearly, when $a_{12}=0$, then $A\in Z_{12}$. And when $\operatorname {rk} A[23]\cdot\operatorname {rk} A[34]=0$, $A$ belongs to $Z_{24}$. Now $A[14]^3=0$ if and only if $a_{12}=0$ or $a_{23}a_{35}+a_{24}a_{45}=0$, which is equivalent to $A\in Z_{12}$ or $A\in Z_{24}$, respectively. Furthermore, $A[13]^2=0$ if and only if $a_{12}a_{23}=0$ {\em and} $a_{12}a_{24}=0$, which is equivalent to $A\in Z_{12}\cup Z_{24}$. The matrices $A$ satisfying $A[24]^2=0$ form $Z_{24}$ by definition. The case $d=(1,2,1,1)$ is analogous. \\ \noindent D) Let $d=(2,2,1)$, with $\Lambda(d)=\Gamma(d)=\{(1,2),(2,3)\}$; the complement should be $Z_{12}\cup Z_{23}$. The Richardson orbit is given as the matrices $A$ with $\operatorname {rk} A[12]=2$ and $\operatorname {rk} A[23]=\operatorname {rk} A[13]^2=1$.
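Case C) above can be cross-checked in the same way: for $d=(1,1,2,1)$ the nilradical has $a_{34}=0$, and recomputing the cube directly shows that its only nonzero entry is in position $(1,5)$, with value $a_{12}(a_{23}a_{35}+a_{24}a_{45})$. A numerical sketch:

```python
# Numerical check for d = (1,1,2,1): here a34 = 0, and A^3 should have a
# single nonzero entry at (1,5), equal to a12*(a23*a35 + a24*a45).
import random

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

random.seed(1)
# a[(i, j)] holds the matrix entry a_{i+1, j+1} (0-based indexing)
a = {(i, j): random.uniform(1, 2) for i in range(5) for j in range(5) if j > i}
a[(2, 3)] = 0.0                      # a34 = 0 inside the block of size 2
A = [[a.get((i, j), 0.0) for j in range(5)] for i in range(5)]
A3 = matmul(matmul(A, A), A)
expected = a[(0, 1)] * (a[(1, 2)] * a[(2, 4)] + a[(1, 3)] * a[(3, 4)])
```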
We compute $A$ and $A^2$: \[ A=\begin{pmatrix} 0 & 0 & a_{13} & a_{14} & a_{15} \\ & 0 & a_{23} & a_{24} & a_{25} \\ & & 0 & 0 & a_{35} \\ & & & 0 & a_{45} \\ & & & & 0 \end{pmatrix} \ \ \ A^2=\begin{pmatrix} 0 & 0 & 0 & 0 & a_{13}a_{35}+a_{14}a_{45} \\ & 0 & 0 & 0 & a_{23}a_{35} + a_{24}a_{45} \\ & & 0 & 0 & 0 \\ & & & 0 & 0 \\ & & & & 0 \end{pmatrix} \] Let $A$ be a matrix with $A[13]^2=0$. If $a_{35}=a_{45}=0$, then $A$ is an element of $Z_{23}$. So let $\operatorname {rk} A[23]\ne 0$. Solving the two equations $a_{13}a_{35}+a_{14}a_{45}=0$ and $a_{23}a_{35} + a_{24}a_{45}=0$ then shows that the rank of $A[12]$ is at most one. Thus $Z_{13}$ is already contained in $Z_{12}\cup Z_{23}$. The case $d=(1,2,2)$ is analogous.\\ \noindent E) The second interesting case is $d=(2,1,2)$, with $\Lambda(d)=\{(1,3)\}$ and $\Gamma(d)=\{(1,3), (1,2), (2,3)\}$. Here we only obtain one irreducible component in the complement! The Richardson orbit is defined by $\operatorname {rk} A[13]^2=\operatorname {rk} A^2=1$ and $\operatorname {rk} A=3$: the dimension of its stabilizer has to be equal to the dimension of the Levi factor. Using the formulae from~\cite{kp} then gives this description of the Richardson orbit. For the complement, we are looking at matrices $A$ with $\operatorname {rk} A[12]=0$ or $\operatorname {rk} A[23]=0$ or $\operatorname {rk} A[13]^2=0$. If $A$ satisfies $A[12]=0$ then $A^2$ is also zero, so $A\in Z_{13}$ by definition. Similarly, matrices with $A[23]=0$ square to zero and hence lie in $Z_{13}$. \\ \noindent F) In the case $d=(1,3,1)$ we have $\Lambda(d)=\{(1,3)\}$, so again we only have one component in the complement of the open dense orbit. For matrices of the Richardson orbit, we have $\operatorname {rk} A[12]=\operatorname {rk} A[23]=\operatorname {rk} A[13]^2=1$. For the complement, we take matrices where one of these ranks is zero. If it is $\operatorname {rk} A[12]=0$ or $\operatorname {rk} A[23]=0$ then clearly $A[13]^2=0$, so $A\in Z_{13}$.
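The rank argument in case D) above can also be tested numerically: whenever the two displayed equations hold with $(a_{35},a_{45})\neq(0,0)$, the determinant $a_{13}a_{24}-a_{14}a_{23}$ of the block $A[12]$ vanishes. A sketch:

```python
# Numerical check of case D): solutions of a13*a35 + a14*a45 = 0 and
# a23*a35 + a24*a45 = 0 with (a35, a45) != (0, 0) force the 2x2 block
# [[a13, a14], [a23, a24]] to have zero determinant, i.e. rank <= 1.
import random

random.seed(2)
max_det = 0.0
for _ in range(100):
    a35, a45 = random.uniform(1, 2), random.uniform(1, 2)
    a14, a24 = random.uniform(-1, 1), random.uniform(-1, 1)
    a13 = -a14 * a45 / a35           # solve the first equation for a13
    a23 = -a24 * a45 / a35           # solve the second equation for a23
    max_det = max(max_det, abs(a13 * a24 - a14 * a23))
```

Both rows of $A[12]$ are forced to be orthogonal to $(a_{35},a_{45})^{\top}$, so they span a line; this is exactly what the determinant check detects.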
The cases $d=(3,1,1)$ and $d=(1,1,3)$ behave similarly to $d=(2,1,1,1)$ and $d=(1,1,1,2)$. We omit them here. \\ \noindent G) The remaining cases are $d=(4,1)$ and $d=(1,4)$. Here, the complement of the Richardson orbit is given by $A[12]=0$, i.e. it consists only of the zero matrix. \end{document}
\begin{document} \begin{frontmatter} \title{An Infinite-dimensional KAM Theorem with Normal Degeneracy} \author{{Jiayin Du$^{a}$ \footnote{ E-mail address : [email protected]} ,~ Lu Xu$^{a,*}$ \footnote{*Corresponding author's e-mail address : [email protected]},~ Yong Li$^{a,b}$ \footnote{E-mail address : [email protected]} }\\ {$^{a}$College of Mathematics, Jilin University,} {Changchun 130012, P. R. China.}\\ {$^{b}$School of Mathematics and Statistics, and Center for Mathematics and Interdisciplinary Sciences, Northeast Normal University, Changchun 130024, P. R. China.} } \begin{abstract} In this paper, we consider a classical Hamiltonian normal form with degeneracy in the normal direction. In previous results, one needs to assume that the perturbation satisfies certain non-degeneracy conditions in order to remove the degeneracy in the normal form. Instead, we introduce a topological degree condition and a weak convexity condition, which are easy to verify, and we prove the persistence of lower dimensional tori without any restriction on the perturbation other than smallness and analyticity.
\end{abstract} \begin{keyword} Infinite dimensional Hamiltonian; normal degeneracy; KAM theory \end{keyword} \end{frontmatter} \tableofcontents \section{Introduction and Main Theorem} \setcounter{equation}{0} \setcounter{definition}{0} \setcounter{proposition}{0} \setcounter{lemma}{0} \setcounter{remark}{0} \subsection{Introduction}\label{sec:1} In the present paper, we consider Melnikov persistence for an infinite dimensional Hamiltonian with degeneracy in certain normal directions, described by the following form \begin{equation}\label{eq1} H(x,y,u,\bar{u},\xi)=N(y,u,\bar{u},\xi)+\varepsilon P(x,y,u,\bar{u},\xi,\varepsilon), \end{equation} where $(x,y,u,\bar{u})\in\mathcal{P}^{\textsf{a},\textsf{p}}=\mathbb{T}^n\times{\mathbb{C}^n}\times \ell^{\textsf{a},\textsf{p}}\times \ell^{\textsf{a},\textsf{p}},$ $\xi\in\Pi\subset\mathbb{R}^n$, $n\ge1$ is a given integer, $\mathbb{T}^{n}$ is the standard $n$-torus, $\textsf{a}\geq0$ and $\textsf{p}\geq1$ are given constants, $\ell^{\textsf{a},\textsf{p}}$ is the Hilbert space defined below, and $\varepsilon>0$ is a small parameter. The Hamiltonian is real analytic in $(x,y,u,\bar{u})$ and Lipschitz in the parameter $\xi$. More specifically, the integrable part $N$ has the form \begin{align*} N=\langle\omega(\xi), y\rangle+\langle u,\Omega^*(\xi)\bar{u}\rangle+g(z,\xi), \end{align*} where $\Omega^*(\xi)=\diag(\Omega_0(\xi),\Omega(\xi))$ and $g(z,\xi)$ consists of higher order terms which will be specified later.
For a given integer $b>0$, denote $\mathcal{N}=\{j_1<j_2<\cdots<j_b\}\subset\mathbb{N}_+$, and $$\Omega_0(\xi)=\diag(\Omega_0^{j_1},\cdots,\Omega_0^{j_b})\in\mathbb{R}^{b\times b},~~\Omega(\xi)=\diag(\Omega^j\in\mathbb{R}:j\in\mathbb{N}_+\backslash\mathcal{N}),$$ with $\Omega_0^{j_i}\equiv0,~~i=1,\cdots,b.$ Divide the vectors $(u,\bar{u})$ as $u=:(w_0,w)$, $\bar{u}=:(\bar{w}_0,\bar{w})$ with \begin{eqnarray*} &&w_0=(w_{j_i}\in\mathbb{C},i=1,\cdots,b),~~w=(w_j\in\mathbb{C},j\in\mathbb{N}_+\backslash\mathcal{N}),\\ &&\bar{w}_0=(\bar{w}_{j_i}\in\mathbb{C},i=1,\cdots,b),~~\bar{w}=(\bar{w}_j\in\mathbb{C},j\in\mathbb{N}_+\backslash\mathcal{N}). \end{eqnarray*} Denote $z=:(w_0,\bar{w}_0)$ and $g(z,\xi)=o(\|z\|_{\textsf{a},\textsf{p}}^2).$ Associated with the standard symplectic structure ${\rm d}y\wedge {\rm d}x+\sqrt{-1}{\rm d}\bar{u}\wedge {\rm d}u$, $\mathcal{T}_0^n=\mathbb{T}^n\times\{y=0\}\times\{u=0\}\times\{\bar{u}=0\}$ is an $n$-dimensional invariant rotational torus for the unperturbed Hamiltonian. Studying the persistence of $\mathcal{T}_0^n$ under small perturbation is a classical problem, but it becomes more challenging when the normal form is degenerate. In the 1980s, Kuksin \cite{kuksin1} and Wayne \cite{wayne} first applied the KAM iterative process to construct lower dimensional tori in infinite dimensional Hamiltonian systems and proved the existence of quasi-periodic solutions for the nonlinear Schr\"{o}dinger and wave equations. Similar results were also proved by P\"{o}schel in \cite{poschel1,poschel2}. Bourgain \cite{bourgain1,bourgain2,bourgain3} obtained a sharp persistence result in both the finite and infinite dimensional cases only under the first Melnikov conditions, which therefore allows multiplicity of the normal frequencies. However, when degeneracy occurs in normal directions, lower dimensional tori do not necessarily survive under small perturbation, even for finite dimensional Hamiltonians.
As a result, one needs to attach certain non-degeneracy assumptions to the perturbation $P$. For instance, the authors in \cite{han} considered the persistence of lower dimensional tori for the following Hamiltonian \begin{eqnarray} \label{hlyh} H(I,\theta,z,\varepsilon)=\langle\omega,I\rangle+\langle A(\omega)z,z\rangle+h(z)+\varepsilon P(I,\theta,z,\omega,\varepsilon), \end{eqnarray} where $(I,\theta,z)\in{\mathbb R}^{d}\times{\mathbb T}^{d}\times{\mathbb R}^{2n}$, $A(\omega)$ admits zero eigenvalues and $h(z)=O(|z|^3)$. Under the assumption \begin{eqnarray}\label{hlassumption2} \frac{\partial [P(\cdot,0,\omega,\varepsilon)]}{\partial z}=0,~\quad\quad~\det\frac{\partial^2 [P(\cdot,0,\omega,\varepsilon)]}{\partial z^2}\ne 0, \end{eqnarray} it was proved that most of the $d$-tori $T_{\omega}=\{\omega\}\times\{I=0\}\times\{z=0\}$ persist under small perturbation. In fact, assumption (\ref{hlassumption2}) on the perturbation can remove the singularity of $A(\omega)$; hence, it yields the persistence of the majority of invariant tori under a suitable weak Melnikov non-resonance condition. Similar assumptions can be found in \cite{li} and \cite{xly}. Recently, Wu and Yuan \cite{wu} investigated the existence of KAM tori for infinite dimensional Hamiltonian systems with a finite number of zeros among the normal frequencies. The authors introduced a vector consisting of the first order terms of the perturbation $P$ arising at each iterative step, and they proved that there exists a KAM torus if the sum of those vectors tends to zero. Here, our assumptions provide sufficient conditions ensuring the existence of a normal equilibrium during the KAM process. More precisely, the topological degree condition in \textbf{(A0)} is a transversality condition, and the weak convexity condition in \textbf{(A0)} controls the size of the perturbed normal equilibrium.
In fact, Hamiltonian (\ref{eq1}) can be seen as an integral function of Hamiltonian lattice equations with degeneracy in certain directions; studying the existence of lower dimensional tori contributes to describing the dynamical stability of such equations. However, the perturbations in certain applications are of complicated forms, so that it is difficult to verify the assumptions in \cite{han,wu,xly}. Motivated by that, we construct a topological degree condition as well as a weak convexity condition on $g(z,\xi)$, which allow degeneracy in the normal direction and, at the same time, are easy to verify. See \textbf{(A0)} in Section \ref{sec:result} for details. It should be pointed out that the topological degree can be used to study frequency preservation in KAM theory, see \cite{du,tong,xu2010}. The weak convexity condition in \textbf{(A0)} is also indispensable; we show a counterexample in the smooth case in the last section. Besides the small divisor problem, the other difficulty during the KAM iterative process is to eliminate the first order terms of the perturbation averaged with respect to the angle variable $x$; that is, we need to solve an average equation in the $(u,\bar{u})$-direction. Assumption \textbf{(A0)} guarantees that the average equation is solved by an equilibrium in the normal direction. Consequently, we prove that Hamiltonian (\ref{eq1}) admits a family of lower dimensional tori parameterized by $\xi$ varying in a certain Cantor set, without any restriction on the perturbation $P$ except smallness and analyticity. \subsection{Notations} In order to state our main result, we need some notations.
\begin{itemize} \item[{(i)}] {Use $|\cdot|$ to denote the supremum norm of vectors, its induced matrix norm, the absolute value of functions, and the Lebesgue measure of sets.} \item[{(ii)}] Endow the Hilbert space $\ell^{\textsf{a},\textsf{p}}$ with the following norm \begin{align*} ||u||_{\textsf{a},\textsf{p}}^2=||{w}_0||_{\textsf{a},\textsf{p}}^2+||{w}||_{\textsf{a},\textsf{p}}^2<\infty,~~~~u=({w}_0,w)\in\ell^{\textsf{a},\textsf{p}}, \end{align*} where $||{w}_0||_{\textsf{a},\textsf{p}}^2=\sum_{j_i\in\mathcal{N}}|w_{j_i}|^2{j_i}^{2\textsf{p}}{\rm e}^{2\textsf{a}j_i}$, $||{w}||_{\textsf{a},\textsf{p}}^2=\sum_{j\in\mathbb{N}_+\backslash\mathcal{N}}|w_j|^2j^{2\textsf{p}}{\rm e}^{2\textsf{a}j}$. \item[{(iii)}] Denote the complex neighborhoods of $\mathcal{T}^n_0$ \begin{align*} D(s,r)=\{(x,y,u,\bar{u}):|\textrm{Im}x|<s,|y|<r^2,||z||_{\textsf{a},\textsf{p}}<r,||w||_{\textsf{a},\textsf{p}}<r^a,||\bar{w}||_{\textsf{a},\textsf{p}}<r^a\} \end{align*} with $0<s, r<1$ and $a\geq 2$ as defined in (\ref{a}). \item[{(iv)}] For $r>0$ and $\bar{\textsf{p}}\geq \textsf{p}$, we define the weighted phase space norms \begin{align*} \pmb{\|}W\pmb{\|}_r=\pmb{\|}W\pmb{\|}_{ \bar{\textsf{p}},r}=\frac{|X|}{r^{a-2}}+\frac{|Y|}{r^a}+\frac{||\tilde U||_{\textsf{a},\bar {\textsf{p}}}}{r^{a-1}}+||U_{\bar{w}}||_{\textsf{a},\bar {\textsf{p}}}+||V_{{w}}||_{\textsf{a},\bar {\textsf{p}}}, \end{align*} for $W=(X,Y,U,V)\in\mathcal{P}^{\textsf{a},\bar{\textsf{p}}}$, $U=(U_{\bar w_0},U_{\bar w})$, $V=({V}_{w_0},{V}_{w})$, $\tilde U=(U_{\bar w_0},{V}_{w_0})$. And we denote $$\pmb{\|}W\pmb{\|}_{r,D(s,r)}=\sup_{D(s,r)}\pmb\|W\pmb\|_r.$$ If $W$ is Lipschitz in the parameter $\xi$, we denote its Lipschitz semi-norm by $$\pmb\|W\pmb\|_r^{\mathcal{L}}=\sup_{\xi\neq\zeta,~ \xi,\zeta\in \Pi}\frac{\pmb\|\Delta_{\xi\zeta}W\pmb\|_r}{|\xi-\zeta|},$$ where $\Delta_{\xi\zeta}W=W(\cdot,\xi)-W(\cdot,\zeta)$.
Moreover, the Lipschitz semi-norms of $\omega$ and $\Omega$, i.e., $|\omega|_{\Pi}^{\mathcal{L}}$ and $\pmb|\Omega\pmb|_{r,\Pi}^{\mathcal{L}}$, are defined analogously to $\pmb\|W\pmb\|_r^{\mathcal{L}}$, where $\pmb|\Omega\pmb|_{r}=\sup_j|\Omega_j|j^{r}$. \item[{(v)}] For $\lambda\geq0$, we define $\pmb\|\cdot\pmb\|_r^\lambda=\pmb\|\cdot\pmb\|_r+\lambda\pmb\|\cdot\pmb\|_r^{\mathcal{L}}$. Also, $\pmb\|\cdot\pmb\|_r^*$ stands for $\pmb\|\cdot\pmb\|_r$ or $\pmb\|\cdot\pmb\|_r^{\mathcal{L}}$. \item[{(vi)}] We introduce the notations $$\langle l\rangle_d=\max(1, |\sum_j j^dl_j|),~~~\mathcal{Z}=\{(k,l)\neq0, |l|\leq2\}\subset\mathbb{Z}^n\times\mathbb{Z}^\infty,~~~J=\left(\begin{array}{ccc} 0&I_{b}\\ -I_{b}&0\\ \end{array}\right),$$ where $I_{b}$ is the $b\times b$ identity matrix. \end{itemize} \subsection{Statement of results}\label{sec:result} We consider (\ref{eq1}), i.e., \begin{equation*} \left\{ \begin{array}{ll} H:\mathbb{T}^n\times{G}\times \ell^{\textsf{a},\textsf{p}}\times \ell^{\textsf{a},\textsf{p}}\times \Pi\rightarrow \mathbb{R}^1,\\ H(x,y,u,\bar{u},\xi)=\langle\omega(\xi), y\rangle+\langle w, \Omega(\xi)\bar{w}\rangle+{g(z,\xi)}+\varepsilon P(x,y,u,\bar{u},\xi), \end{array} \right. \end{equation*} where $u=(w_0,w)$, $\bar u=(\bar w_0,\bar w)$, $z=(w_0,\bar w_0)$, $g=o(||z||_{\textsf{a},\textsf{p}}^2)$, $\omega$, $\Omega$, $g$ and $P$ are Lipschitz in the parameter $\xi$, $g$ is real analytic in $z$, $P$ is real analytic in $(x,y,u,\bar{u})$, and $\varepsilon>0$ is a small parameter.
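To make the weighted norm of notation (ii) concrete, here is a small sketch computing $\|w\|_{\textsf{a},\textsf{p}}$ for a finitely supported sequence (indexing starts at $j=1$; the function name is illustrative only):

```python
# Sketch of the weighted norm from notation (ii):
# ||w||_{a,p}^2 = sum_j |w_j|^2 * j^(2p) * e^(2aj), for a finitely
# supported sequence w = (w_1, w_2, ...).
import math

def weighted_norm(w, a, p):
    return math.sqrt(sum(abs(wj) ** 2 * j ** (2 * p) * math.exp(2 * a * j)
                         for j, wj in enumerate(w, start=1)))
```

Larger $\textsf{a}$ and $\textsf{p}$ penalize high modes more strongly, which is why membership in $\ell^{\textsf{a},\textsf{p}}$ encodes decay of analyticity type for the normal coordinates.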
First, we make the following assumptions: \begin{itemize} \item[(\textbf{A0})] {For fixed $\zeta_0=0\in O^o\subset \mathbb{C}^{2b}$, where $O$ is a bounded closed domain and {$O^o:=O\setminus\partial O$}, there are $\sigma>0$, $L\geq2$ such that \begin{align*} &\deg(\nabla{g(z)}-\nabla{g(\zeta_0)}, O^o, 0)\neq0,\\ &||\nabla{g(z)}-\nabla{g(z_*)}||_{\textsf{a},\bar {\textsf{p}}}\geq\sigma||z-z_*||_{\textsf{a},\textsf{p}}^L,~~z,z_*\in O, \end{align*} where $\bar{\textsf{p}}\geq \textsf{p}$ and $\nabla g(z)=(\partial_{z_1}g(z), \partial_{z_2}g(z), \cdots, \partial_{z_{2b}}g(z))$.} \item[(\textbf{A1})] {The mapping $\xi\rightarrow\omega(\xi)$ is a lipeomorphism, that is, a homeomorphism which is Lipschitz continuous in both directions. Moreover, for all integer vectors $(k,\ell)\in\mathbb{Z}^n\times\mathbb{Z}^\infty$ with $1\leq|\ell|\leq2$, $$|\{\xi:\langle k,\omega(\xi)\rangle+\langle\ell,\Omega(\xi)\rangle=0\}|=0,$$ where $|\cdot|$ denotes the Lebesgue measure of sets, $|\ell|=\sum_{j}|\ell_j|$ for integer vectors, and $\langle\cdot,\cdot\rangle$ is the usual scalar product.} \item[(\textbf{A2})] There exist $d\geq1$ and $\delta<d-1$ such that $$\Omega_j(\xi)=j^d+\cdots+O(j^\delta),$$ where the dots stand for fixed lower order terms in $j$, allowing also negative exponents. More precisely, there exists a fixed, parameter-independent sequence $\bar\Omega$ with $\bar\Omega_j=j^d+\cdots$ such that the tails $\tilde \Omega_j=\Omega_j-\bar\Omega_j$ give rise to a Lipschitz map $$\tilde\Omega:\Pi\rightarrow\ell_\infty^{-\delta},$$ where $\ell_\infty^{-\delta}$ is the space of all real sequences with finite norm $\pmb|\tilde\Omega\pmb|_{-\delta}=\sup_j|\tilde\Omega_j|j^{-\delta}$. \item[(\textbf{A3})] The perturbation $P$ is real analytic in the space coordinates and Lipschitz in the parameters.
For each $\xi\in \Pi$, its Hamiltonian vector field $X_P=(P_y, -P_x, \sqrt{-1}P_{\bar u}, -\sqrt{-1}P_{u})^\top$ defines near $\mathcal{T}_0^n$ a real analytic map $$X_P:\mathcal{P}^{\textsf{a},\textsf{p}}\rightarrow\mathcal{P}^{\textsf{a},\bar {\textsf{p}}},~~~\left\{\begin{array}{cc} \bar {\textsf{p}}\geq \textsf{p} ~~\text{for}~d>1,\\ \bar {\textsf{p}}> \textsf{p} ~~\text{for}~d=1. \end{array} \right.$$ \end{itemize} Now, we can state our main result: \begin{theorem}\label{th1} {Consider the Hamiltonian (\ref{eq1}) and assume $(\textbf{A0})$-$(\textbf{A3})$ hold. Let $m\geq L+\frac{\sqrt{4L^2+2L}}{2}$, $\tau\geq n-1$, let $\mu$ be a fixed positive integer such that $(1+\frac{1}{2m})^\mu\geq2$, and \begin{align}\label{a} a=\left\{\begin{array}{lll} [\frac{m+1}{3}]+1, ~~\text{if}~ \frac{m+1}{3}~ \text{is not an integer},\\ \frac{m+1}{3},~~~~~~~~~\text{if}~ \frac{m+1}{3}~ \text{is an integer}. \end{array}\right. \end{align} Then, there exist a sufficiently small $\varepsilon_0>0$, a Cantor set $\Pi_\varepsilon\subset\Pi$, a Lipschitz continuous family of torus embeddings $\Psi^*:\mathbb{T}^n\times\Pi_{\varepsilon}\rightarrow\mathcal{P}^{\textsf{a},\bar {\textsf{p}}}$, which is real analytic on $|{\rm Im}\, x|<\frac{s}{2}$ and close to the identity, and a Lipschitz continuous map $\omega_*:\Pi_\varepsilon\rightarrow\mathbb{R}^n$, such that for each $\xi\in\Pi_\varepsilon$, the map $\Psi^*$ restricted to $\mathbb{T}^n\times\{\xi\}$ is a real analytic embedding of a rotational torus with frequencies $\omega_*(\xi)$ for the Hamiltonian $H$ at $\xi$. Moreover, the Cantor set $\Pi_{\varepsilon}$ and $\omega_*$ satisfy the following estimates, \begin{eqnarray} &&|{\rm meas}~(\Pi\setminus \Pi_{\varepsilon})|\to 0,\quad {\rm~as~}\varepsilon\to 0,\\ &&|\omega_*-\omega|_{\Pi}+\frac{\gamma}{2M}|\omega_*-\omega|_{\Pi}^{\mathcal{L}}\leq c\varepsilon^{\frac{3m}{32\mu(m+1)(m-a)(\tau+1)}}.
\varepsilonnd{eqnarray} where $\gamma:=\varepsilon^{\frac{1}{32(2b)^{m+2}\mu(m+1)(m-a)(\tau+1)}}$ and $M:=|\omega|_{\Pi}^{\mathcal{L}}+\pmb|\Omega\pmb|_{-\delta,\Pi}^{\mathcal{L}}<\infty$. } \varepsilonnd{theorem} \begin{remark}\label{remark1} { We mention that assumption (\textbf{A0}) allows $d$-dimensional degeneracy. Consider the infinite dimensional Hamiltonian (\ref{eq1}) with the normal term $g(z)$ defined as follows: \begin{align}\label{gzlizi} g(z)=g(w_0,\bar w_0):=\frac{1}{2p}\sum_{i=1}^{b}|w_{0i}|^{2p}+\frac{1}{2q}\sum_{i=1}^{b}|\bar w_{0i}|^{2q}, \varepsilonnd{align} where $w_0=(w_{0,1},\cdots,w_{0,b}),\bar w_0=(\bar w_{0,1},\cdots,\bar w_{0,b})$, $p,q>1$. Functions of the form (\ref{gzlizi}) are typical degenerate normal terms satisfying assumption (\textbf{A0}). } \varepsilonnd{remark} \begin{remark} As mentioned above, the weak convexity condition in \textbf{(A0)} is indispensable, as the following counterexample shows: \begin{proposition}\label{example} Consider the infinite dimensional Hamiltonian (\ref{eq1}), for $b=1$, with $$\nabla g(z)=(\nabla g_1(w_0),\nabla g_2(\bar w_0)),~~~\varepsilon P=P_0(\varepsilon)\bar w_0,$$ where \begin{align*} \nabla g_1(w_0)&=\left\{\begin{array}{lll} -(-w_0-1)^{\sigma},~~~&w_0\in(-2,-1),\\ 0,~~~&w_0\in[-1,1],\\ (w_0-1)^{\sigma},~~~&w_0\in(1,2), \varepsilonnd{array}\right.\\ \nabla g_2(\bar w_0)&=\bar w_0, ~~~\bar w_0\in(-2,2), \varepsilonnd{align*} $\sigma$ is a positive integer, and $$P_0(\varepsilon)=\left\{\begin{array}{lll} ~~~~~~~~~0,&&\varepsilon=0, \\ \varepsilon^\varepsilonll\sin\frac{1}{\varepsilon},&&\varepsilon\neq0,\,\varepsilonll\in \mathbb{Z}^+\setminus\{0\}. \varepsilonnd{array}\right.$$ Then the weak convexity condition in \textbf{(A0)} fails. Moreover, no low-dimensional invariant torus can be preserved, no matter how small the perturbation is. \varepsilonnd{proposition} \varepsilonnd{remark} The paper is organized as follows.
In Section \ref{sec:result}, we state our main result, Theorem \ref{th1}. In Section \ref{sec:KAM}, we describe the quasi-linear iterative scheme and give the detailed construction and estimates for one cycle of KAM steps. Note that we prove, in Subsection \ref{326}, that the averaged equation can be solved by an equilibrium under assumption \textbf{(A0)}. In Section \ref{sec:proof}, we complete the proof of Theorem \ref{th1} by deriving an iteration lemma and showing the convergence of the KAM iteration. In Section~\ref{sec:example}, we present a Hamiltonian lattice equation as an application of Theorem \ref{th1}. Finally, in the Appendix, we demonstrate by a counterexample that the weak convexity condition is indispensable. \section{KAM Step}\label{sec:KAM} In this section, we give the detailed construction and estimates for one cycle of KAM steps, which is the core of the KAM scheme; see \cite{chow,li,poschel1,poschel2}. \subsection{Description of the 0-th KAM step.} {{Recall the integer $m$ satisfying \begin{align}\label{m} m\geq L+\frac{\sqrt{4L^2+2L}}{2}, \varepsilonnd{align} where $L\geq2$ is defined as in \textbf{(A0)}. Denote $\Xi=8\mu(m+1)(m-a)(\tau+1)$, where $\mu$ is a fixed positive integer such that $(1+\frac{1}{2m})^\mu\geq2$.
Then \begin{equation}\label{gamma}\gamma=\varepsilon^{\frac{1}{4(2b)^{m+2}\Xi}}.\varepsilonnd{equation}}} We consider $(\ref{eq1})$ and define the following $0$-th KAM step parameters: \begin{align}\label{s0} s_0&=s,~~\gamma_0=\gamma, ~~{\rho_0<\frac{s_0}{6}},~~\varepsilonta_0=\gamma_0^{2(2b)^{m+2}}\varepsilon^{\frac{1}{\Xi}},~~~r_0=\frac{s\gamma_0}{(K_1+1)^{\tau+1}},\\\label{h0y} \omega_0(\xi)&=\omega(\xi),~~~\Omega_0(\xi)=\Omega(\xi),~~~g_0(z)=g(z),~~~f_0=0,~~~M_0=M,\\ D(s_0,r_0)&=\{(x,y,z,w,\bar{w}):|\textrm{Im}x|<s_0,|y|<r_0^2,||z||_{a,p}<r_0,||w||_{a,p}<r_0^a,||\bar{w}||_{a,p}<r_0^a\}, \varepsilonnd{align} where $0<r_0,\gamma_0\leq 1$, $0\ll s\leq1$, $0<\varepsilonta_0\leq\frac{1}{16}$, \begin{align}\label{K1} K_1=([\log\frac{1}{\varepsilonta_0^{m+1}}]+1)^{3\mu}. \varepsilonnd{align} Therefore, we have that \begin{align*} H_0&=: H(x,y,u,\bar{u},\xi)=N_0+P_0,\\ N_0&=:\langle\omega_0(\xi), y\rangle+\langle w, \Omega_0(\xi)\bar{w}\rangle+g_0(z,\xi)+f_0,\\ P_0&=: \varepsilon P(x,y,u,\bar{u},\xi), \varepsilonnd{align*} with $$\pmb\|X_{P_0}\pmb\|^{\lambda_0}_{r_0,D(s_0,r_0)}\leq \gamma_0^{2(2b)^{m+2}}r_0^{m-a}\varepsilonta_0^m,$$ where $0<\lambda_0<\frac{\gamma_0^{(2b)^{m+2}}}{M_0}$. We first prove a crucial estimate. \begin{lemma} \begin{equation}\label{P0} \pmb\|X_{P_0}\pmb\|^{\lambda_0}_{r_0,D(s_0,r_0)}\leq \gamma_0^{2(2b)^{m+2}}r_0^{m-a}\varepsilonta_0^m. 
\varepsilonnd{equation} \varepsilonnd{lemma} \begin{proof} {The proof is standard, but we give the explicit process due to the presence of the degenerate order $m$.} Using the fact that $\gamma_0=\varepsilon^{\frac{1}{4(2b)^{m+2}\Xi}}$, $\varepsilonta_0=\gamma_0^{2(2b)^{m+2}}\varepsilon^{\frac{1}{\Xi}}$ and $[\log\frac{1}{\varepsilonta_0^{m+1}}]+1<\frac{1}{\varepsilonta_0^{m+1}}$, we have \begin{align}\label{gre} \gamma_0^{2(2b)^{m+2}}r_0^{m-a}\varepsilonta_0^m&>\frac{\gamma_0^{m-a+2(2b)^{m+2}}\varepsilonta_0^{3\mu(m+1)(m-a)(\tau+1)+m}}{2^{(\tau+1)(m-a)}}\\ &>\frac{\gamma_0^{m-a+2(2b)^{m+2}(1+m+3\mu(m+1)(m-a)(\tau+1))}\varepsilon^{\frac{3\mu(m+1)(m-a)(\tau+1)+m}{\Xi}}}{2^{(\tau+1)(m-a)}}\notag\\ &>\frac{\varepsilon^{\frac{1}{32\mu(2b)^{m+2}(m+1)(\tau+1)}+\frac{1}{16\mu(m-a)(\tau+1)}+\frac{3}{16}+\frac{3}{8}+\frac{1}{8\mu(m-a)(\tau+1)}}}{2^{(\tau+1)(m-a)}}\notag\\ &>\frac{\varepsilon^{\frac{1}{32\mu(2b)^{m+2}(m+1)(\tau+1)}+\frac{9}{16}}}{2^{(\tau+1)(m-a)}}.\notag \varepsilonnd{align} Moreover, let $\varepsilon_0>0$ be small enough so that \begin{equation}\label{vare0} \varepsilon_0^{\frac{1}{16}-\frac{1}{32\mu(2b)^{m+2}(m+1)(\tau+1)}}\pmb\|X_P\pmb\|^{\lambda_0}_{r_0,D(s_0,r_0)}\frac{2^{(\tau+1)(m-a)}}{s^{m-a}}\leq 1 \varepsilonnd{equation} with $0<\lambda_0<\frac{\gamma_0^{(2b)^{m+2}}}{M_0}$ and for any $0<\varepsilon<\varepsilon_0$, \begin{equation*} \varepsilon^{\frac{1}{16}-\frac{1}{32\mu(2b)^{m+2}(m+1)(\tau+1)}}\pmb\|X_P\pmb\|^{\lambda_0}_{r_0,D(s_0,r_0)}\frac{2^{(\tau+1)(m-a)}}{s^{m-a}}\leq 1, \varepsilonnd{equation*} i.e., \begin{align}\label{P} \varepsilon^{\frac{1}{16}}\pmb\|X_P\pmb\|^{\lambda_0}_{r_0,D(s_0,r_0)}\leq \frac{s^{m-a}\varepsilon^{\frac{1}{32\mu(2b)^{m+2}(m+1)(\tau+1)}}}{2^{(\tau+1)(m-a)}}. 
\varepsilonnd{align} Then by (\ref{gre}) and (\ref{P}), \begin{align*} \pmb\|X_{P_0}\pmb\|^{\lambda_0}_{r_0,D(s_0,r_0)}=\varepsilon^{\frac{9}{16}} \varepsilon^{\frac{7}{16}}\pmb\|X_P\pmb\|^{\lambda_0}_{r_0,D(s_0,r_0)}\leq \varepsilon^{\frac{9}{16}}\varepsilon^{\frac{6}{16}}\frac{s^{m-a}\varepsilon^{\frac{1}{32\mu(2b)^{m+2}(m+1)(\tau+1)}}}{2^{(\tau+1)(m-a)}}\leq\gamma_0^{2(2b)^{m+2}}r_0^{m-a}\varepsilonta_0^m, \varepsilonnd{align*} which implies (\ref{P0}). The proof is complete. \varepsilonnd{proof} \subsection{Induction from the $\nu$-th KAM step} \subsubsection{Description of the $\nu$-th KAM step} We now define the $\nu$-th KAM step parameters: \begin{align}\label{snu} r_\nu&=\varepsilonta_{\nu-1}r_{\nu-1},~\varepsilonta_{\nu}=\varepsilonta_{\nu-1}^{1+\frac{1}{2m}},~{s_\nu=s_{\nu-1}-6\rho_{\nu-1}},~\rho_{\nu-1}=\frac{\rho_0}{2^{\nu-1}},\\\label{gammanu} \gamma_\nu&=\frac{\gamma_{\nu-1}}{2}+\frac{\gamma_0}{4},~M_\nu=M_0(2-\frac{1}{2^\nu}). \varepsilonnd{align} Suppose that at the $\nu$-th step, we have arrived at the following real analytic Hamiltonian: \begin{align}\label{eq2} H_\nu&=N_\nu+P_\nu, \varepsilonnd{align} with {\begin{align}\label{Nnu} N_\nu&={e_\nu(\xi)+\langle\omega_\nu(\xi),y\rangle+\langle w,\Omega_\nu(\xi)\bar w\rangle+g_\nu(z,\xi)+f_\nu(y,z,w,\bar w,\xi),}\\ g_\nu(z,\xi)&=g(z)+\sum_{j=0}^{\nu-1}\gamma_j^{2(2b)^{m+2}}r_j^{m-1}\varepsilonta_j^mO(\pmb\|z\pmb\|_{\textsf{a},\textsf{p}}^2),\\\label{fnu} f_\nu(y,z,w,\bar w,\xi)&=\sum_{4\leq2|\imath'|\leq m}f_{\imath'000} y^{\imath'}+\sum_{2|\imath'|+|\jmath'|\leq m,1\leq|\imath'|,|\jmath'|}f_{\imath'\jmath'00} y^{\imath'} z^{\jmath'}+\sum_{0<2|\imath'|+|\jmath'|\leq m}f_{\imath'\jmath'11} y^{\imath'} z^{\jmath'} w\bar w, \varepsilonnd{align}} defined on $D(s_\nu,r_\nu)$ and \begin{equation}\label{XP} \pmb\| X_{P_\nu}\pmb\|^{\lambda_\nu}_{r_\nu,D(s_\nu,r_\nu)}\leq\gamma_\nu^{2(2b)^{m+2}}r_\nu^{m-a}\varepsilonta_\nu^m, \varepsilonnd{equation} with $0<\lambda_\nu<\frac{\gamma_\nu^{(2b)^{m+2}}}{M_\nu}$.
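The recursion (\ref{snu})-(\ref{gammanu}) can be checked numerically. The following sketch (with illustrative values $m=5$, $s_0=1$, $\rho_0=s_0/24$, $\gamma_0=1/2$, $M_0=1$, $\varepsilonta_0=1/16$, $r_0=0.1$, all chosen for demonstration only and not taken from the estimates above) iterates the step parameters and exhibits the super-exponential decay of $\varepsilonta_\nu$ and $r_\nu$ together with the limits $s_\nu\to s_0-12\rho_0$, $\gamma_\nu\to\gamma_0/2$, $M_\nu\to2M_0$:

```python
# Toy iteration of the nu-th KAM step parameters; all numerical values are
# illustrative assumptions, not the constants of the paper.
def kam_parameters(steps, m=5, s0=1.0, rho0=1.0 / 24, gamma0=0.5, M0=1.0,
                   eta0=1.0 / 16, r0=0.1):
    s, rho, gamma, eta, r, M = s0, rho0, gamma0, eta0, r0, M0
    for nu in range(1, steps + 1):
        r *= eta                         # r_nu = eta_{nu-1} r_{nu-1}
        eta = eta ** (1 + 1 / (2 * m))   # eta_nu = eta_{nu-1}^{1+1/(2m)}
        s -= 6 * rho                     # s_nu = s_{nu-1} - 6 rho_{nu-1}
        rho /= 2                         # rho_nu = rho_{nu-1}/2
        gamma = gamma / 2 + gamma0 / 4   # fixed point gamma0/2
        M = M0 * (2 - 2.0 ** (-nu))      # M_nu -> 2 M0
    return s, gamma, eta, r, M
```

With $\rho_0=s_0/24$ the analyticity widths shrink exactly to $s_0/2$, matching the domain $|{\rm Im}~x|<\frac{s}{2}$ of Theorem \ref{th1}, while $\varepsilonta_\nu=\varepsilonta_0^{(1+\frac{1}{2m})^\nu}$ decays super-exponentially.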
{Unless otherwise stated, we will omit the index for all quantities of the present KAM step (the $\nu$-th step), use $+$ to index all quantities (Hamiltonian, domains, normal form, perturbation, transformation, etc.) of the next KAM step (the $(\nu+1)$-th step), and use $-$ to index all quantities of the previous KAM step (the $(\nu-1)$-th step).} To simplify the notation, we will not specify the dependence of $P$, $P_+$, etc. All the constants {{$c_1$-$c_3$}} below are positive and independent of the iteration process, and we will also use $c$ to denote any intermediate positive constant which is independent of the iteration process. Define \begin{align*} \varepsilonta_+&=\varepsilonta^{1+\frac{1}{2m}},\\ r_{+}&=\varepsilonta r,\\ \rho_{+}&=\frac{\rho}{2},\\ s_{+}&=s-6\rho,\\ \sigma_+&=\frac{\sigma}{2}+\frac{\sigma_0}{4},\\ \gamma_+&=\frac{\gamma}{2}+\frac{\gamma_0}{4},\\ M_+&=M_0(2-\frac{1}{2^{\nu+1}}),\\ K_{+}&=([\log(\frac{1}{\varepsilonta^{m+1}})]+1)^{3\mu},\\ D_{+}&=D(s_{+}, r_{+}),\\ {\Pi_+}&=\{\xi\in\Pi:|\langle k,\omega(\xi)\rangle+\langle\varepsilonll,\Omega(\xi)\rangle|\geq\frac{\gamma\langle \varepsilonll\rangle_d}{(1+|k|)^\tau}, |k|\leq K_+, |\varepsilonll|\leq2, |k|+|\varepsilonll|\neq0 \}. \varepsilonnd{align*} \subsubsection{Construction of the symplectic transformation} We will construct a symplectic transformation $\Phi_{+}:D(s_{+},r_{+})\rightarrow D(s,r)$ which transforms the Hamiltonian (\ref{eq2}) into the Hamiltonian of the next KAM cycle (at the $(\nu+1)$-th step), i.e., \begin{equation}\label{H+} H_{+}=H\circ\Phi_{+}=N_{+}+P_{+}, \varepsilonnd{equation} where $N_{+}$ and $P_{+}$ have similar properties as $N$ and $P$, respectively, on $D(s_{+},r_{+})$. Next, we show the detailed construction of $\Phi_+$ and the estimate of $P_+$.
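The small-divisor conditions defining $\Pi_+$ can be tested numerically over a finite range of $k$ and $\varepsilonll$. The following sketch restricts to the first $J$ normal frequencies and assumes the convention $\langle\varepsilonll\rangle_d=\max(1,|\sum_j\varepsilonll_j j^d|)$ (an assumption here; the precise definition of $\langle\varepsilonll\rangle_d$ is given elsewhere in the paper), with all numerical values purely illustrative:

```python
import itertools

def in_Pi_plus(omega, Omega, gamma, tau, d, K, J):
    """Finite check of |<k,omega> + <l,Omega>| >= gamma*<l>_d/(1+|k|)^tau
    for |k| <= K, |l| <= 2, |k|+|l| != 0, testing only Omega[0..J-1]."""
    n = len(omega)

    def bracket(l):  # assumed convention: <l>_d = max(1, |sum_j l_j j^d|)
        return max(1.0, abs(sum(lj * (j + 1) ** d for j, lj in enumerate(l))))

    ells = [l for l in itertools.product([-2, -1, 0, 1, 2], repeat=J)
            if sum(abs(x) for x in l) <= 2]
    for k in itertools.product(range(-K, K + 1), repeat=n):
        for l in ells:
            if not any(k) and not any(l):
                continue  # the case |k| + |l| = 0 is excluded
            lhs = abs(sum(ki * wi for ki, wi in zip(k, omega))
                      + sum(lj * Omega[j] for j, lj in enumerate(l)))
            rhs = gamma * bracket(l) / (1 + sum(map(abs, k))) ** tau
            if lhs < rhs:
                return False
    return True
```

For instance, with $n=1$, $\omega=(\sqrt2)$ and $\Omega_j=j^2$ the conditions hold for a small $\gamma$, while the resonant choice $\omega=(1)$ fails already at $k=1$, $\varepsilonll=-e_1$.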
\subsubsection{Truncation} Consider the Taylor-Fourier series of $P$: \begin{equation*} P=\sum_{k\in \mathbb{Z}^n,~\imath,\jmath,\varepsilonll_1,\varepsilonll_2\in \mathbb{Z}_+^n,\varepsilonll=(\varepsilonll_1,\varepsilonll_2)}p_{k\imath\jmath\varepsilonll}y^{\imath}z^{\jmath} w^{\varepsilonll_1}\bar{w}^{\varepsilonll_2} {\rm e}^{\sqrt{-1}\langle k,x\rangle}, \varepsilonnd{equation*} and let $R$ be the truncation of $P$ of the form \begin{align*} R&=\sum_{|k|\leq K_+,~2|\imath|+|\jmath|\leq m,~|\varepsilonll|=|\varepsilonll_1|+|\varepsilonll_2|\leq2}p_{k\imath\jmath\varepsilonll}y^{\imath}z^{\jmath} w^{\varepsilonll_1}\bar{w}^{\varepsilonll_2} {\rm e}^{\sqrt{-1}\langle k,x\rangle},\\ [R]&=\sum_{2|\imath|+|\jmath|\leq m,~|\varepsilonll|=|\varepsilonll_1|+|\varepsilonll_2|\leq2}p_{0\imath\jmath\varepsilonll}y^{\imath}z^{\jmath} w^{\varepsilonll_1}\bar{w}^{\varepsilonll_2}. \varepsilonnd{align*} Next we show that, with the truncation chosen appropriately, the norm of $X_P-X_R$ is much smaller than the norm of $X_P$; see the following lemma. \begin{lemma}\label{le1} Assume that $$\textbf{\textsc{(H1)}}~ K_+^n{\rm e}^{-K_+\rho} <\varepsilonta^{m+1}.$$ Then there is a constant $c_1$ such that \begin{align}\label{XP-R} \pmb\|X_P-X_R\pmb\|^*_{\varepsilonta r,D(s-\rho,8\varepsilonta r)}&\leq c_1\varepsilonta^{m-a+1} \pmb\|X_P\pmb\|^*_{r,D(s,r)},\\\label{XR} \pmb\|X_R\pmb\|^*_{r,D(s-\rho,8\varepsilonta r)}&\leq c_1 \pmb\|X_P\pmb\|^*_{r,D(s,r)}. \varepsilonnd{align} \varepsilonnd{lemma} \begin{proof} {Notice the $m$-th order degeneracy and the choice of $a$.
}Denote \begin{align*} I&=\sum_{2|\imath|+|\jmath|\geq m+1}p_{k\imath\jmath\varepsilonll}y^{\imath}z^{\jmath} w^{\varepsilonll_1}\bar{w}^{\varepsilonll_2} {\rm e}^{\sqrt{-1}\langle k,x\rangle},~~~~II=\sum_{|k|>K_+,2|\imath|+|\jmath|\leq m,|\varepsilonll_1|+|\varepsilonll_2|\leq2}p_{k\imath\jmath\varepsilonll}y^{\imath}z^{\jmath} w^{\varepsilonll_1}\bar{w}^{\varepsilonll_2} {\rm e}^{\sqrt{-1}\langle k,x\rangle},\\ III&=\sum_{|k|\leq K_+,2|\imath|+|\jmath|\leq m,|\varepsilonll_1|+|\varepsilonll_2|\geq3}p_{k\imath\jmath\varepsilonll}y^{\imath}z^{\jmath}w^{\varepsilonll_1}\bar{w}^{\varepsilonll_2} {\rm e}^{\sqrt{-1}\langle k,x\rangle}. \varepsilonnd{align*} Then \begin{align}\label{P-R} P-R=I+II+III. \varepsilonnd{align} We claim that \begin{align}\label{XI} \pmb\|X_{I}\pmb\|_{\varepsilonta r,D(s,8\varepsilonta r)}&=\sup_{D(s,8\varepsilonta r)}\{\frac{|I_y|}{(\varepsilonta r)^{a-2}}+\frac{|I_x|}{(\varepsilonta r)^a}+\frac{||I_{z}||_{\bar{p}}}{(\varepsilonta r)^{a-1}}+||I_{w}||_{\bar{p}}+||I_{\bar{w}}||_{\bar{p}}\}\\ &<c\varepsilonta^{m+1-a} \pmb\|X_P\pmb\|_{r,D(s,r)},\notag \varepsilonnd{align} and \begin{align}\label{XI*} \pmb\|X_{I}\pmb\|^{\mathcal{L}}_{\varepsilonta r,D(s,8\varepsilonta r)}<c\varepsilonta^{m+1-a} \pmb\|X_P\pmb\|^{\mathcal{L}}_{r,D(s,r)}. \varepsilonnd{align} Indeed, \begin{align*} |I_x|_{D(s,8\varepsilonta r)}&=|\sum_{2|\imath|+|\jmath|\geq m+1}\frac{\partial p_{k\imath\jmath\varepsilonll}{\rm e}^{\sqrt{-1}\langle k,x\rangle}}{\partial x}y^\imath z^{\jmath}w^{\varepsilonll_1}\bar w^{\varepsilonll_2}|_{D(s,8\varepsilonta r)}\\ &\leq c\sum_{2|\imath|+|\jmath|\geq m+1}\frac{|8\varepsilonta r|^{2|\imath|+|\jmath|+|\varepsilonll|}|P_x|_{D(s,r)}}{(r-8\varepsilonta r)^{2|\imath|+|\jmath|+|\varepsilonll|}}\\ &\leq c\sum_{2|\imath|+|\jmath|\geq m+1}\varepsilonta^{m+1}r^a\pmb\|X_P\pmb\|_{r,D(s,r)}, \varepsilonnd{align*} where the second inequality follows from Cauchy estimate and the last inequality follows from the definition of $\pmb\|X_P\pmb\|_{r,D(s,r)}$. 
Namely, \begin{align*} \frac{|I_x|_{D(s,8\varepsilonta r)}}{(\varepsilonta r)^a}\leq c\varepsilonta^{m+1-a}\pmb\|X_P\pmb\|_{r,D(s,r)}. \varepsilonnd{align*} For any $\xi,\zeta\in\Pi$, \begin{align*} \frac{|I_x|^{\mathcal{L}}_{D(s,8\varepsilonta r)}}{(\varepsilonta r)^a}&= \frac{|I_x(\xi)-I_x(\zeta)|_{D(s,8\varepsilonta r)}}{|\xi-\zeta|(\varepsilonta r)^a}\\ &=\frac{1}{|\xi-\zeta|(\varepsilonta r)^a}|\sum_{2|\imath|+|\jmath|\geq m+1}\frac{\partial (p_{k\imath\jmath\varepsilonll}(\xi)-p_{k\imath\jmath\varepsilonll}(\zeta)){\rm e}^{\sqrt{-1}\langle k,x\rangle}}{\partial x}y^\imath w_0^{\jmath_1}\bar{w}_0^{\jmath_2}w^{\varepsilonll_1}\bar w^{\varepsilonll_2}|_{D(s,8\varepsilonta r)}\\ &\leq c\sum_{2|\imath|+|\jmath|\geq m+1}\frac{|8\varepsilonta r|^{2|\imath|+|\jmath|+|\varepsilonll|}|P_x(\xi)-P_x(\zeta)|_{D(s,r)}}{|\xi-\zeta|(\varepsilonta r)^a(r-8\varepsilonta r)^{2|\imath|+|\jmath|+|\varepsilonll|}}\\ &\leq c\varepsilonta^{m+1-a}\pmb\|X_P\pmb\|_{r,D(s,r)}^{\mathcal{L}}. \varepsilonnd{align*} Similarly, we can prove \begin{align*} \frac{|I_y|_{D(s,8\varepsilonta r)}}{(\varepsilonta r)^{a-2}},~~\frac{||I_{z}||_{\bar{p}}}{(\varepsilonta r)^{a-1}},~~ ||I_{w}||_{\bar{p}},~~||I_{\bar{w}}||_{\bar{p}}\leq c\varepsilonta^{m+1-a}\pmb\|X_P\pmb\|_{r,D(s,r)}, \varepsilonnd{align*} and \begin{align*} \frac{|I_y|^{\mathcal{L}}_{D(s,8\varepsilonta r)}}{(\varepsilonta r)^{a-2}},~~\frac{||I_{z}||^{\mathcal{L}}_{\bar{p}}}{(\varepsilonta r)^{a-1}},~~ ||I_{w}||^{\mathcal{L}}_{\bar{p}},~~||I_{\bar{w}}||^{\mathcal{L}}_{\bar{p}}\leq c\varepsilonta^{m+1-a}\pmb\|X_P\pmb\|^{\mathcal{L}}_{r,D(s,r)}. \varepsilonnd{align*} Thus (\ref{XI}) and (\ref{XI*}) hold. We claim that \begin{align}\label{XIII} \pmb\|X_{III}\pmb\|^*_{\varepsilonta r,D(s,8\varepsilonta r)} &<c\varepsilonta^{m+1-a} \pmb\|X_P\pmb\|^*_{r,D(s,r)}. 
\varepsilonnd{align} Indeed, \begin{align*} |III_x|_{D(s,8\varepsilonta r)}&=|\sum_{|k|\leq K_+,2|\imath|+|\jmath|\leq m,|\varepsilonll_1|+|\varepsilonll_2|\geq3}\frac{\partial p_{k\imath\jmath\varepsilonll}{\rm e}^{\sqrt{-1}\langle k,x\rangle}}{\partial x}y^\imath z^{\jmath}w^{\varepsilonll_1}\bar w^{\varepsilonll_2}|_{D(s,8\varepsilonta r)}\\ &\leq c\sum_{|k|\leq K_+,2|\imath|+|\jmath|\leq m,|\varepsilonll_1|+|\varepsilonll_2|\geq3}\frac{|8\varepsilonta r|^{2|\imath|+|\jmath|+a|\varepsilonll|}|P_x|_{D(s,r)}}{(r-8\varepsilonta r)^{2|\imath|+|\jmath|+a|\varepsilonll|}}\\ &\leq c \varepsilonta^{3a}r^{a}\pmb\|X_P\pmb\|_{r,D(s,r)}, \varepsilonnd{align*} i.e., {{\begin{align*} \frac{|III_x|_{D(s,8\varepsilonta r)}}{(\varepsilonta r)^a}\leq c\varepsilonta^{2a}\pmb\|X_P\pmb\|_{r,D(s,r)}\leq c\varepsilonta^{m+1-a}\pmb\|X_P\pmb\|_{r,D(s,r)}. \varepsilonnd{align*}}} Similarly, we can prove \begin{align*} \frac{|III_y|_{D(s,8\varepsilonta r)}}{(\varepsilonta r)^{a-2}},~~\frac{||III_{z}||_{\bar{p}}}{(\varepsilonta r)^{a-1}},~~ ||III_{w}||_{\bar{p}},~~||III_{\bar{w}}||_{\bar{p}}\leq c\varepsilonta^{m+1-a}\pmb\|X_P\pmb\|_{r,D(s,r)}, \varepsilonnd{align*} and \begin{align*} \frac{|III_y|^{\mathcal{L}}_{D(s,8\varepsilonta r)}}{(\varepsilonta r)^{a-2}},~~\frac{||III_{z}||^{\mathcal{L}}_{\bar{p}}}{(\varepsilonta r)^{a-1}},~~ {||III_{w}||^{\mathcal{L}}_{\bar{p}}},~~{||III_{\bar{w}}||^{\mathcal{L}}_{\bar{p}}}\leq c\varepsilonta^{m+1-a}\pmb\|X_P\pmb\|_{r,D(s,r)}. \varepsilonnd{align*} We now estimate $\pmb\|X_{II}\pmb\|^*_{\varepsilonta r,D(s-\rho,8\varepsilonta r)}$. According to the definition of $II$, Lemma A.2
in \cite{poschel2} and (\ref{XI}), we have \begin{align*} |II_x|_{D(s-\rho,8\varepsilonta r)}&=|\frac{\partial(P-I-III)}{\partial x}-\frac{\partial R}{\partial x}|_{D(s-\rho,8\varepsilonta r)}\leq cK_+^n{\rm e}^{-K_+\rho}|\frac{\partial(P-I-III)}{\partial x}|_{D(s,8\varepsilonta r)}\\ &\leq cK_+^n{\rm e}^{-K_+\rho}(r^a\pmb\|X_P\pmb\|_{r,D(s,r)}+\varepsilonta^{m+1}r^a\pmb\|X_{P}\pmb\|_{r,D(s,r)}+\varepsilonta^{m+1}r^a\pmb\|X_{P}\pmb\|_{ r,D(s, r)})\\ &\leq cK_+^n{\rm e}^{-K_+\rho}r^a(1+\varepsilonta^{m+1}+\varepsilonta^{m+1})\pmb\|X_{P}\pmb\|_{r,D(s,r)}, \varepsilonnd{align*} i.e., \begin{align}\label{IIx} \frac{|II_x|_{D(s-\rho,8\varepsilonta r)}}{(\varepsilonta r)^a}&\leq c\frac{K_+^n{\rm e}^{-K_+\rho}}{\varepsilonta^a}\pmb\|X_{P}\pmb\|_{r,D(s,r)}\leq c\varepsilonta^{m+1-a}\pmb\|X_{P}\pmb\|_{r,D(s,r)}, \varepsilonnd{align} where the last inequality follows from (\textbf{H1}). Similarly, we can get \begin{align}\label{IIy} \frac{|II_y|_{D(s-\rho,8\varepsilonta r)}}{(\varepsilonta r)^{a-2}},~~\frac{||II_{z}||_{\bar{p}}}{(\varepsilonta r)^{a-1}},~~ ||II_{w}||_{\bar{p}},~~||II_{\bar{w}}||_{\bar{p}}\leq c\varepsilonta^{m+1-a}\pmb\|X_P\pmb\|_{r,D(s,r)}, \varepsilonnd{align} and \begin{align}\label{IIy*} \frac{|II_y|^{\mathcal{L}}_{D(s-\rho,8\varepsilonta r)}}{(\varepsilonta r)^{a-2}},~~\frac{||II_{z}||^{\mathcal{L}}_{\bar{p}}}{(\varepsilonta r)^{a-1}},~~ ||II_{w}||^{\mathcal{L}}_{\bar{p}},~~||II_{\bar{w}}||^{\mathcal{L}}_{\bar{p}}\leq c\varepsilonta^{m+1-a}\pmb\|X_P\pmb\|_{r,D(s,r)}. \varepsilonnd{align} Then by (\ref{IIx}), (\ref{IIy}) and (\ref{IIy*}) \begin{align}\label{XII} \pmb\|X_{II}\pmb\|^*_{\varepsilonta r,D(s-\rho,8\varepsilonta r)} <c\varepsilonta^{m+1-a} \pmb\|X_P\pmb\|^*_{r,D(s,r)}. 
\varepsilonnd{align} Therefore, it follows from (\ref{P-R}), (\ref{XI}), (\ref{XIII}) and (\ref{XII}) that \begin{align*} \pmb\|X_P-X_R\pmb\|^*_{\varepsilonta r,D(s-\rho,8\varepsilonta r)}&\leq c\varepsilonta^{m+1-a} \pmb\|X_P\pmb\|^*_{r,D(s,r)}, \varepsilonnd{align*} and \begin{align*} \pmb\|X_R\pmb\|^*_{r,D(s-\rho,8\varepsilonta r)} &\leq \pmb\|X_P\pmb\|^*_{r,D(s-\rho,8\varepsilonta r)}+ \pmb\|X_{I}+X_{II}+X_{III}\pmb\|^*_{r,D(s-\rho,8\varepsilonta r)}\\ &\leq \pmb\|X_P\pmb\|^*_{r,D(s-\rho,8\varepsilonta r)}+ \pmb\|X_{P}-X_{R}\pmb\|^*_{r,D(s-\rho,8\varepsilonta r)}\\ &\leq\pmb\|X_P\pmb\|^*_{r,D(s,r)}+c\varepsilonta^{m-a+1}\pmb\|X_P\pmb\|^*_{r,D(s,r)}\\ &\leq c\pmb\|X_P\pmb\|^*_{r,D(s,r)}. \varepsilonnd{align*} \varepsilonnd{proof} \subsubsection{Homological Equation} We first construct a generalized Hamiltonian $F$ of the form \begin{align}\label{eq3} F&=\sum_{|k|\leq K_+,~2|\imath|+|\jmath|\leq m,|\varepsilonll_1|+|\varepsilonll_2|\leq2, |k|+|\varepsilonll_1|+|\varepsilonll_2|\neq0}F_{k\imath\jmath\varepsilonll_1\varepsilonll_2}y^{\imath}z^\jmath w^{\varepsilonll_1}\bar w^{\varepsilonll_2} {\rm e}^{\sqrt{-1}\langle k,x\rangle}, \varepsilonnd{align} which satisfies the equation \begin{equation}\label{eq4} \{N,F\}+R-[R]-Q=0, \varepsilonnd{equation} where $[R]=\frac{1}{(2\pi)^n}\int_{\mathbb{T}^n}R(x,y,z,w,\bar w){\rm d}x$ is the average of the truncation $R$, and the correction term \begin{align}\label{Q} Q=(\partial_zg+\partial_zf)J\partial_zF|_{2|\imath|+|\jmath|> m, or |\varepsilonll|> 2}. \varepsilonnd{align} { Notice that \begin{align}\label{NF} \{N,F\}=-\partial_yN\partial_xF+\partial_xN\partial_yF+\partial_zNJ\partial_zF-\sqrt{-1}\partial_{\bar w}N\partial_{w}F+\sqrt{-1}\partial_wN\partial_{\bar w}F. 
\varepsilonnd{align} Recall that \begin{align*} f&=\sum_{4\leq2|\imath'|\leq m}f_{\imath'000} y^{\imath'}+\sum_{2|\imath'|+|\jmath'|\leq m,1\leq|\imath'|,|\jmath'|}f_{\imath'\jmath'00} y^{\imath'} z^{\jmath'}+\sum_{0<2|\imath'|+|\jmath'|\leq m}f_{\imath'\jmath'11} y^{\imath'} z^{\jmath'} w\bar w\\ &=:f_1+f_2, \varepsilonnd{align*} where $f_1=\sum_{4\leq2|\imath'|\leq m}f_{\imath'000} y^{\imath'}$ and $f_2=f-f_1$, and $$g_\nu=\sum_{2\leq|\beta|\leq m}g_{0\beta00}z^{\beta},~~~~~1\leq\nu.$$ Substituting (\ref{Nnu}) and (\ref{eq3}) into (\ref{NF}), we obtain \begin{align}\label{NF1} \{N,F\}=&-\sqrt{-1}\langle\omega+\partial_yf_1,k\rangle F_{k\imath\jmath\varepsilonll_1\varepsilonll_2}y^{\imath}z^\jmath w^{\varepsilonll_1}\bar w^{\varepsilonll_2} {\rm e}^{\sqrt{-1}\langle k,x\rangle}+\partial_yf_2\partial_xF+(\partial_zg+\partial_zf)J\partial_zF\notag\\ &-\sqrt{-1}\langle\Omega,\varepsilonll_1-\varepsilonll_2\rangle F_{k\imath\jmath\varepsilonll_1\varepsilonll_2}y^{\imath}z^\jmath w^{\varepsilonll_1}\bar w^{\varepsilonll_2} {\rm e}^{\sqrt{-1}\langle k,x\rangle}\\ &-\sqrt{-1}\langle\partial_{w\bar w}f,\varepsilonll_1-\varepsilonll_2\rangle F_{k\imath\jmath\varepsilonll_1\varepsilonll_2}y^{\imath}z^\jmath w^{\varepsilonll_1}\bar w^{\varepsilonll_2} {\rm e}^{\sqrt{-1}\langle k,x\rangle}.\notag \varepsilonnd{align} To simplify the notation, we sometimes omit the subscript of $\sum$ and simply write $\sum$ for the sum over the corresponding range of indices.
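Before computing the individual terms, the mode-by-mode structure behind (\ref{eq4}) can be illustrated in the simplest possible setting: for $N=\langle\omega,y\rangle$ with no normal directions, the equation reduces to $\omega\cdot\partial_xF=R-[R]$, solved Fourier mode by Fourier mode through division by the small divisors $\sqrt{-1}\langle k,\omega\rangle$. The following sketch (with $n=1$ and an illustrative $R$; this is a toy model, not the construction of the paper) verifies the solution numerically:

```python
import cmath

# One-dimensional caricature of the homological equation:
# omega * F'(x) = R(x) - [R], solved by F_k = p_k / (i k omega), k != 0.

def solve_homological(p, omega):
    """p: dict k -> complex Fourier coefficient of R (k != 0)."""
    return {k: pk / (1j * k * omega) for k, pk in p.items()}

def eval_fourier(coeffs, x):
    return sum(ck * cmath.exp(1j * k * x) for k, ck in coeffs.items()).real

omega = 2 ** 0.5
# R(x) = cos x + 0.5 sin 2x  (real and of zero average, so [R] = 0)
p = {1: 0.5, -1: 0.5, 2: -0.25j, -2: 0.25j}
F = solve_homological(p, omega)

# check  omega * F'(x) = R(x)  by central differences at sample points
h = 1e-5
err = max(abs(omega * (eval_fourier(F, x + h) - eval_fourier(F, x - h)) / (2 * h)
              - eval_fourier(p, x))
          for x in [0.3, 1.1, 2.5])
```

In the quasi-linear setting of (\ref{he1})-(\ref{he3}) the scalar divisor $\sqrt{-1}k\omega$ is replaced by the matrices $B_{\imath\jmath\varepsilonll}$, whose invertibility on $\Pi_+$ is established below.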
We begin to calculate the second term (\ref{NF1}): \begin{align}\label{A} \partial_y f_2 \partial_x F=&\sum_{2|\imath'|+|\jmath'|\leq m;1\leq|\imath'|,|\jmath'|;|\varepsilonll'|=0,1}f_{\imath'\jmath'\varepsilonll'\varepsilonll'} \partial_y(y^{\imath'}) z^{\jmath'} w^{\varepsilonll'}\bar w^{\varepsilonll'} \\ &\sum_{|k|\leq K_+,~2|\imath|+|\jmath|\leq m,|\varepsilonll_1|+|\varepsilonll_2|\leq2, |k|+|\varepsilonll_1|+|\varepsilonll_2|\neq0}F_{k\imath\jmath\varepsilonll_1\varepsilonll_2}y^{\imath}z^\jmath w^{\varepsilonll_1}\bar w^{\varepsilonll_2} \partial_x({\rm e}^{\sqrt{-1}\langle k,x\rangle})\notag\\ =&\sum\mathcal{A}_{\textsf{b}(\textsf{c}+1)\varepsilonll'\varepsilonll'}F_{k(\imath-\textsf{b}+1)(\jmath-\textsf{c}-1)(\varepsilonll_1-\varepsilonll')(\varepsilonll_2-\varepsilonll')}y^{\imath}z^\jmath w^{\varepsilonll_1}\bar w^{\varepsilonll_2} {\rm e}^{\sqrt{-1}\langle k,x\rangle},\notag \varepsilonnd{align} where $\mathcal{A}_{\textsf{b}(\textsf{c}+1)\varepsilonll'\varepsilonll'}$ is concerned with $\partial_y f_2$ and $k$, and $2|\textsf{b}|+\textsf{c}+1\leq m$, $1\leq|\textsf{b}|$, $0\leq |\textsf{c}|$, $|\varepsilonll'|=0,1$, ${|k|\leq K_+,~2|\imath|+|\jmath|\leq m,|\varepsilonll_1|+|\varepsilonll_2|\leq2, |k|+|\varepsilonll_1|+|\varepsilonll_2|\neq0}$. 
Next we calculate the third term of (\ref{NF1}): \begin{align}\label{mathcalB} (\partial_zg+\partial_zf)J\partial_zF&=(\partial_zg+\partial_zf)J\partial_zF|_{2|\imath|+|\jmath|\leq m, |\varepsilonll|\leq 2}+(\partial_zg+\partial_zf)J\partial_zF|_{2|\imath|+|\jmath|> m, or |\varepsilonll|> 2}\\ &=:(\partial_zg+\partial_zf)J\partial_zF|_{2|\imath|+|\jmath|\leq m, |\varepsilonll|\leq 2}+Q.\notag \varepsilonnd{align} Specially, \begin{align*} \partial_zgJ\partial_zF|_{2|\imath|+|\jmath|\leq m, |\varepsilonll|\leq 2} &=\partial_zgJ\sum_{|k|\leq K_+,~2|\imath|+|\jmath|\leq m,|\varepsilonll_1|+|\varepsilonll_2|\leq2, |k|+|\varepsilonll_1|+|\varepsilonll_2|\neq0} F_{k\imath\jmath\varepsilonll_1\varepsilonll_2}y^{\imath}\partial_z(z^{\jmath}) w^{\varepsilonll_1}\bar w^{\varepsilonll_2} {\rm e}^{\sqrt{-1}\langle k,x\rangle}\\ &=:\sum_{}S_{\imath\jmath\varepsilonll}F_{k\imath\jmath\varepsilonll_1\varepsilonll_2}y^\imath z^{\jmath}w^{\varepsilonll_1}\bar w^{\varepsilonll_2}{\rm e}^{\sqrt{-1}\langle k,x\rangle}+\sum_{}\mathcal{B}_iF_{k\imath(\jmath-i+1)\varepsilonll_1\varepsilonll_2}y^\imath z^{\jmath}w^{\varepsilonll_1}\bar w^{\varepsilonll_2}{\rm e}^{\sqrt{-1}\langle k,x\rangle} , \varepsilonnd{align*} where $S_{\imath\jmath\varepsilonll}$ is a ($|\imath|+|\jmath|+|\varepsilonll|$) order tensor and concerned with $\partial_z^2g(0)$ and $J$, $\mathcal{B}_i$ is concerned with $\partial_z^2g(z)-\partial_z^2g(0)$ and $J$, and $2\leq i$, ${|k|\leq K_+,~2|\imath|+|\jmath|\leq m,|\varepsilonll_1|+|\varepsilonll_2|\leq2, |k|+|\varepsilonll_1|+|\varepsilonll_2|\neq0}$. 
And \begin{align*} \partial_zfJ\partial_zF|_{2|\imath|+|\jmath|\leq m, |\varepsilonll|\leq 2}=&\partial_zf J \sum_{|k|\leq K_+,~2|\imath|+|\jmath|\leq m,|\varepsilonll_1|+|\varepsilonll_2|\leq2, |k|+|\varepsilonll_1|+|\varepsilonll_2|\neq0} F_{k\imath\jmath\varepsilonll_1\varepsilonll_2}y^{\imath}\partial _z(z^{\jmath}) w^{\varepsilonll_1}\bar w^{\varepsilonll_2} {\rm e}^{\sqrt{-1}\langle k,x\rangle}\\ =:&\sum \mathcal{B}_{\textsf{b}(\textsf{c}+1)\varepsilonll'\varepsilonll'}F_{k(\imath-\textsf{b})(\jmath-\textsf{c}+1)(\varepsilonll_1-\varepsilonll')(\varepsilonll_2-\varepsilonll')}y^{\imath}z^{\jmath}w^{\varepsilonll_1}\bar w^{\varepsilonll_2}{\rm e}^{\sqrt{-1}\langle k,x\rangle}, \varepsilonnd{align*} where $\mathcal{B}_{\textsf{b}(\textsf{c}+1)\varepsilonll'\varepsilonll'}$ is concerned with $\partial_{y}^{\textsf{b}}\partial_z^{\textsf{c}+2}\partial_w^{\varepsilonll'}\partial_{\bar w}^{\varepsilonll'}f(0,0,0,0)$, $|\varepsilonll'|=0, 1$ and $J$, and $2|\textsf{b}|+\textsf{c}+1\leq m$, $1\leq|\textsf{b}|$, $0\leq |\textsf{c}|$, ${|k|\leq K_+,~2|\imath|+|\jmath|\leq m,|\varepsilonll_1|+|\varepsilonll_2|\leq2, |k|+|\varepsilonll_1|+|\varepsilonll_2|\neq0}$. 
Now, we calculate the fifth term of (\ref{NF1}): \begin{align}\label{C} &-\sum_{|k|\leq K_+,~2|\imath|+|\jmath|\leq m,|\varepsilonll_1|+|\varepsilonll_2|\leq2, |k|+|\varepsilonll_1|+|\varepsilonll_2|\neq0}\sqrt{-1}\langle\partial_{w\bar w}f,\varepsilonll_1-\varepsilonll_2\rangle F_{k\imath\jmath\varepsilonll_1\varepsilonll_2}y^{\imath}z^\jmath w^{\varepsilonll_1}\bar w^{\varepsilonll_2} {\rm e}^{\sqrt{-1}\langle k,x\rangle}\\ &=\sum \mathcal{C}_{\textsf{b}(\textsf{c}+1)}F_{k(\imath-\textsf{b})(\jmath-\textsf{c}-1)\varepsilonll_1\varepsilonll_2}y^{\imath}z^\jmath w^{\varepsilonll_1}\bar w^{\varepsilonll_2} {\rm e}^{\sqrt{-1}\langle k,x\rangle},\notag \varepsilonnd{align} where $\mathcal{C}_{\textsf{b}(\textsf{c}+1)}$ is concerned with $\partial_y^{\textsf{b}}\partial_z^{\textsf{c}+1}\partial_{w}\partial_{\bar w}f(0,0,0,0)$, $2|\textsf{b}|+\textsf{c}+1\leq m$, $1\leq|\textsf{b}|$, $0\leq |\textsf{c}|$, ${|k|\leq K_+,~2|\imath|+|\jmath|\leq m,|\varepsilonll_1|+|\varepsilonll_2|\leq2, |k|+|\varepsilonll_1|+|\varepsilonll_2|\neq0}$. Substituting (\ref{A}), (\ref{mathcalB}) and (\ref{C}) into (\ref{NF1}), combining (\ref{NF1}) with (\ref{eq4}), and comparing the coefficients above, we then obtain the following quasi-linear equations: \begin{itemize} \item[1.] For $\varepsilonll_1=\varepsilonll_2$, $|\varepsilonll_1|=0$, $k\neq0$, \begin{align}\label{he1} &\sqrt{-1}(\langle\omega+\partial_yf_1,k\rangle I_{(2b)^{\imath+\jmath}}+S_{\imath\jmath\varepsilonll})F_{k\imath\jmath00}+\mathcal{A}_{\textsf{b}(\textsf{c}+1)00}F_{k(\imath-\textsf{b}+1)(\jmath-\textsf{c}-1)00}\\ &+\mathcal{B}_{\textsf{b}(\textsf{c}+1)00}F_{k(\imath-\textsf{b})(\jmath-\textsf{c}+1)00}+\mathcal{B}_iF_{k\imath(\jmath-i+1)00}+\mathcal{C}_{\textsf{b}(\textsf{c}+1)}F_{k(\imath-\textsf{b})(\jmath-\textsf{c}-1)00}=p_{k\imath\jmath00}.\notag\varepsilonnd{align} \item[2.] 
For $\varepsilonll_1\neq\varepsilonll_2$, \begin{align}\label{he2} &\sqrt{-1}((\langle\omega+\partial_yf_1,k\rangle +\langle\Omega,\varepsilonll_1-\varepsilonll_2\rangle) I_{(2b)^{\imath+\jmath+\varepsilonll_1+\varepsilonll_2}}+S_{\imath\jmath\varepsilonll})F_{k\imath\jmath\varepsilonll_1\varepsilonll_2}\\ &+\mathcal{A}_{\textsf{b}(\textsf{c}+1)\varepsilonll'\varepsilonll'}F_{k(\imath-\textsf{b}+1)(\jmath-\textsf{c}-1)(\varepsilonll_1-\varepsilonll')(\varepsilonll_2-\varepsilonll')}+\mathcal{B}_{\textsf{b}(\textsf{c}+1)\varepsilonll'\varepsilonll'}F_{k(\imath-\textsf{b})(\jmath-\textsf{c}+1)(\varepsilonll_1-\varepsilonll')(\varepsilonll_2-\varepsilonll')}+\mathcal{B}_iF_{k\imath(\jmath-i+1)\varepsilonll_1\varepsilonll_2}\notag\\ &+\mathcal{C}_{\textsf{b}(\textsf{c}+1)}F_{k(\imath-\textsf{b})(\jmath-\textsf{c}-1)\varepsilonll_1\varepsilonll_2}=p_{k\imath\jmath\varepsilonll_1\varepsilonll_2}\notag. \varepsilonnd{align} \item[3.] For $\varepsilonll_1=\varepsilonll_2$, $|\varepsilonll_1|=1$, $k\neq0$, \begin{align}\label{he3} &\sqrt{-1}(\langle\omega+\partial_yf_1,k\rangle I_{(2b)^{\imath+\jmath+2}}+S_{\imath\jmath\varepsilonll})F_{k\imath\jmath11}+\mathcal{A}_{\textsf{b}(\textsf{c}+1)00}F_{k(\imath-\textsf{b}+1)(\jmath-\textsf{c}-1)11}\\ &+\mathcal{A}_{\textsf{b}(\textsf{c}+1)11}F_{k(\imath-\textsf{b}+1)(\jmath-\textsf{c}-1)00}+\mathcal{B}_{\textsf{b}(\textsf{c}+1)00}F_{k(\imath-\textsf{b})(\jmath-\textsf{c}+1)11}+\mathcal{B}_{\textsf{b}(\textsf{c}+1)11}F_{k(\imath-\textsf{b})(\jmath-\textsf{c}+1)00}\notag\\ &+\mathcal{B}_iF_{k\imath(\jmath-i+1)11}+\mathcal{C}_{\textsf{b}(\textsf{c}+1)}F_{k(\imath-\textsf{b})(\jmath-\textsf{c}-1)11}=p_{k\imath\jmath11}.\notag\varepsilonnd{align} \varepsilonnd{itemize} Here the notations $\textsf{b}, \textsf{c}, i, \varepsilonll', \imath, \jmath, k$ are defined as above, $\Omega=(\Omega^j), j\in\mathbb{N}_+\setminus\mathcal{N}$, \\ $\mathcal{A}_{\textsf{b}(\textsf{c}+1)00}F_{k(\imath-\textsf{b}+1)(\jmath-\textsf{c}-1)00}$ stands for 
$\sum_{2|\textsf{b}|+\textsf{c}+1\leq m, 1\leq|\textsf{b}|, 0\leq |\textsf{c}|}\mathcal{A}_{\textsf{b}(\textsf{c}+1)00}F_{k(\imath-\textsf{b}+1)(\jmath-\textsf{c}-1)00}$, and the remaining terms are defined analogously. } We claim that the quasi-linear equations (\ref{he1})-(\ref{he3}) are solvable under suitable conditions. We denote $${\Pi_+}=\{\xi\in\Pi:|\langle k,\omega(\xi)\rangle+\langle\varepsilonll,\Omega(\xi)\rangle|\geq\frac{\gamma\langle \varepsilonll\rangle_d}{(1+|k|)^\tau}, |k|\leq K_+, |\varepsilonll|\leq2, |k|+|\varepsilonll|\neq0 \}.$$ Then we can solve equations (\ref{he1})-(\ref{he3}) on $\Pi_+$. The details are given in the following lemma: \begin{lemma}\label{le2} Assume that \begin{align*} &{\textbf{\textsc{(H2)}} ~8r<\frac{\langle\varepsilonll\rangle_d(\gamma-\gamma_+)}{(K_++1)^{\tau+1}}.} \varepsilonnd{align*} Then the quasi-linear equations (\ref{he1})-(\ref{he3}) have a solution $F_{k\imath\jmath\varepsilonll}$ satisfying \begin{align}\label{XF} \pmb\|X_{F}\pmb\|^{\lambda}_{r,D(s-\rho,{r})} &\leq c_2A_\rho r^{m-a}\varepsilonta^m, \varepsilonnd{align} where $0<\lambda<\frac{\gamma^{(2b)^{\imath+\jmath+\varepsilonll}}}{M}$, and \begin{align}\label{Arho} A_\rho^2=(\sum_{0<|k|\leq K_+,~2|\imath|+|\jmath|\leq m, ~|\varepsilonll|\leq2}(\frac{(1+|k|)^{1+(2b)^{\imath+\jmath+\varepsilonll}\tau}}{(\langle\varepsilonll\rangle_d)^{(2b)^{\imath+\jmath+\varepsilonll}}})^2{\rm e}^{-2|k|\rho}). \varepsilonnd{align} Moreover, \begin{align*} \pmb\|DX_{F}\pmb\|_{r,r,D(s-2\rho,\frac{r}{2})}\leq c_2\frac{1}{\rho r^a}\pmb\|X_F\pmb\|_{r,D(s-\rho,r)}. \varepsilonnd{align*} \varepsilonnd{lemma} \begin{proof} {For any $y\in D(r)$, by \textbf{(H2)}, \begin{align*} |\partial_yf_1|&\leq cr<\frac{\gamma\langle\varepsilonll\rangle_d}{8(|k|+1)^{\tau+1}}.
\varepsilonnd{align*} Denote $$L_k=\langle k,\omega+\partial_yf_1\rangle+\langle \varepsilonll,\Omega\rangle+\tilde\lambda,$$ where $\tilde\lambda$ is the eigenvalue of $ S_{\imath\jmath\varepsilonll}$ with the smallest absolute value, and $$|\tilde\lambda|\leq|\partial_z^2g(0)|\leq\gamma_-^{2(2b)^{m+2}}r_-^{m-2}\varepsilonta_-^m\leq\gamma_-^{2(2b)^{m+2}}r^{m-2}\varepsilonta^2\leq\frac{\gamma\langle\varepsilonll\rangle_d}{8(|k|+1)^{\tau+1}}.$$ Then \begin{align*} |L_k|&\geq|\langle k,\omega\rangle+\langle\varepsilonll,\Omega\rangle|-|\tilde\lambda|-|\langle k,\partial_yf_1\rangle|\\ &\geq\frac{\gamma\langle\varepsilonll\rangle_d}{(1+|k|)^\tau}-\frac{\gamma\langle\varepsilonll\rangle_d}{8(1+|k|)^\tau}-\frac{\gamma\langle\varepsilonll\rangle_d}{8(1+|k|)^\tau}\\ &\geq\frac{\gamma\langle\varepsilonll\rangle_d}{2(1+|k|)^\tau}, \varepsilonnd{align*} and \begin{align}\label{LI} |\det L_kI_{(2b)^{\imath+\jmath+\varepsilonll}}|\geq(\frac{\gamma\langle\varepsilonll\rangle_d}{2(1+|k|)^\tau})^{(2b)^{\imath+\jmath+\varepsilonll}}. \varepsilonnd{align} Define the coefficient matrix of (\ref{he1})-(\ref{he3}) by $B_{\imath\jmath\varepsilonll}$. Then by (\ref{LI}), \begin{align}\label{B} |\det B_{\imath\jmath\varepsilonll}|\geq \frac{(\gamma\langle\varepsilonll\rangle_d)^{(2b)^{\imath+\jmath+\varepsilonll}}}{2^{(2b)^{\imath+\jmath+\varepsilonll}}(|k|+1)^{\tau(2b)^{\imath+\jmath+\varepsilonll}}}. \varepsilonnd{align} Note that \begin{align*} |B_{\imath\jmath\varepsilonll}^{-1}|=|\frac{\textrm{adj} B_{\imath\jmath\varepsilonll}}{\det B_{\imath\jmath\varepsilonll}}|\leq c\frac{(|k|+1)^{\tau(2b)^{\imath+\jmath+\varepsilonll}+(2b)^{\imath+\jmath+\varepsilonll}-1}}{(\gamma\langle\varepsilonll\rangle_d)^{(2b)^{\imath+\jmath+\varepsilonll}}}.
\varepsilonnd{align*} Applying the identity \begin{align*} \partial_y^jB_{\imath\jmath\varepsilonll}^{-1}=-\sum_{|j'|=1}^{|j|}\left(\begin{array}{c} j\\ j' \varepsilonnd{array}\right)(\partial_y^{j-j'}B_{\imath\jmath\varepsilonll}^{-1}\partial_y^{j'}B_{\imath\jmath\varepsilonll})B_{\imath\jmath\varepsilonll}^{-1} \varepsilonnd{align*} inductively, we have \begin{align}\label{B-} |\partial_y^j B_{\imath\jmath\varepsilonll}^{-1}|_{ D(s)\times G_+}&\leq c(|k|+1)^{|j|}|B_{\imath\jmath\varepsilonll}^{-1}|^{|j|+1}\\ &\leq c\frac{(1+|k|)^{|j|+(|j|+1)(2b)^{\imath+\jmath+\varepsilonll}\tau}}{(\gamma\langle\varepsilonll\rangle_d)^{(|j|+1)(2b)^{\imath+\jmath+\varepsilonll}}},~~2|j|\leq m. \varepsilonnd{align} Then \begin{align*} \|F_z\|_{D(s-\rho,r)} &\leq \|\sum_{0<|k|\leq K_+,~2|\imath|+|\jmath|\leq m}B_{\imath\jmath\varepsilonll}^{-1}\partial_z(p_{k\imath\jmath\varepsilonll}z^\jmath)y^{\imath}w^{\varepsilonll_1}\bar w^{\varepsilonll_2}\|_{D(s-\rho,r)}\\ &\leq \|\sum_{0<|k|\leq K_+,~2|\imath|+|\jmath|\leq m}\frac{(1+|k|)^{1+(2b)^{\imath+\jmath+\varepsilonll}\tau}}{(\gamma\langle\varepsilonll\rangle_d)^{(2b)^{\imath+\jmath+\varepsilonll}}}|\partial_z(p_{k\imath\jmath\varepsilonll}z^{\jmath})y^{\imath}w^{\varepsilonll_1}\bar w^{\varepsilonll_2}|{\rm e}^{|k|(s-\rho)}\|_{D(s-\rho,r)}\\ &\leq(\sum_{0<|k|\leq K_+,~2|\imath|+|\jmath|\leq m}(\frac{(1+|k|)^{1+(2b)^{\imath+\jmath+\varepsilonll}\tau}}{(\gamma\langle\varepsilonll\rangle_d)^{(2b)^{\imath+\jmath+\varepsilonll}}})^2{\rm e}^{-2|k|\rho})^{\frac{1}{2}}\\ &~~~~(\sum_{0<|k|\leq K_+,2|\imath|+|\jmath|\leq m}|\partial_z(p_{k\imath\jmath\varepsilonll}z^{\jmath})y^\imath w^{\varepsilonll_1}\bar w^{\varepsilonll_2}|^2{\rm e}^{2|k|s})^{\frac{1}{2}}\\ &\leq \frac{A_\rho}{\gamma^{(2b)^{m+2}}}\|R_z\|_{D(s,r)}, \varepsilonnd{align*} i.e., \begin{align}\label{Fz} \frac{\|F_z\|_{D(s-\rho,r)}}{r^{a-1}}&\leq\frac{A_\rho}{r^{a-1}\gamma^{(2b)^{m+2}}}\|R_z\|_{D(s,r)}\leq\frac{A_\rho}{r^{a-1}}\gamma^{(2b)^{m+2}}r^{m-1}\varepsilonta^m.
\varepsilonnd{align} To control the Lipschitz semi-norm of $F_z$, let $\Delta=\Delta_{\xi\zeta}$ for $\xi$, $\zeta\in\Pi$. Note that \begin{align*} \Delta F_z&=\sum_{0<|k|\leq K_+,~2|\imath|+|\jmath|\leq m,~|\varepsilonll|\leq2}\Delta\partial_z(F_{k\imath\jmath\varepsilonll}z^\jmath)y^\imath w^{\varepsilonll_1}\bar w^{\varepsilonll_2}{\rm e}^{\sqrt{-1}\langle k,x\rangle}\\ &=\sum_{0<|k|\leq K_+,~2|\imath|+|\jmath|\leq m,~|\varepsilonll|\leq2}\Delta(B_{\imath\jmath\varepsilonll}^{-1}\partial_z(p_{k\imath\jmath\varepsilonll}z^\jmath))y^\imath w^{\varepsilonll_1}\bar w^{\varepsilonll_2}{\rm e}^{\sqrt{-1}\langle k,x\rangle}\\ &=\sum_{0<|k|\leq K_+,~2|\imath|+|\jmath|\leq m,~|\varepsilonll|\leq2}(B_{\imath\jmath\varepsilonll}^{-1}(\xi)\Delta \partial_z(p_{k\imath\jmath\varepsilonll}z^\jmath)+\Delta B_{\imath\jmath\varepsilonll}^{-1}\partial_z(p_{k\imath\jmath\varepsilonll}(\zeta)z^\jmath))y^\imath w^{\varepsilonll_1}\bar w^{\varepsilonll_2}{\rm e}^{\sqrt{-1}\langle k,x\rangle}\\ &=:U1+U2, \varepsilonnd{align*} where \begin{align*} U1&=\sum_{0<|k|\leq K_+,~2|\imath|+|\jmath|\leq m,~|\varepsilonll|\leq2}B_{\imath\jmath\varepsilonll}^{-1}(\xi)\Delta \partial_z(p_{k\imath\jmath\varepsilonll}z^\jmath) y^\imath w^{\varepsilonll_1}\bar w^{\varepsilonll_2}{\rm e}^{\sqrt{-1}\langle k,x\rangle},\\ U2&=\sum_{0<|k|\leq K_+,~2|\imath|+|\jmath|\leq m,~|\varepsilonll|\leq2}\Delta B_{\imath\jmath\varepsilonll}^{-1}\partial_z(p_{k\imath\jmath\varepsilonll}(\zeta)z^\jmath) y^\imath w^{\varepsilonll_1}\bar w^{\varepsilonll_2}{\rm e}^{\sqrt{-1}\langle k,x\rangle}.
\varepsilonnd{align*} Notice by (\ref{B-}) that \begin{align*} \|U1\|_{D(s-\rho,r)}&\leq \sum_{0<|k|\leq K_+,~2|\imath|+|\jmath|\leq m,~|\varepsilonll|\leq2}\frac{(1+|k|)^{(2b)^{\imath+\jmath+\varepsilonll}\tau}}{(\gamma\langle\varepsilonll\rangle_d)^{(2b)^{\imath+\jmath+\varepsilonll}}}|\Delta \partial_z(p_{k\imath\jmath\varepsilonll}z^\jmath) y^\imath w^{\varepsilonll_1}\bar w^{\varepsilonll_2}|{\rm e}^{|k|(s-\rho)}\\ &\leq \frac{A_\rho}{\gamma^{(2b)^{m+2}}}\|\Delta R_z\|_{D(s,r)}, \varepsilonnd{align*} and \begin{align*} \|U2\|_{D(s-\rho,r)}&\leq \sum_{0<|k|\leq K_+,~2|\imath|+|\jmath|\leq m,~|\varepsilonll|\leq2}M\frac{(1+|k|)^{(2b)^{\imath+\jmath+\varepsilonll}\tau}}{(\gamma\langle\varepsilonll\rangle_d)^{(2b)^{\imath+\jmath+\varepsilonll}}}\partial_z(p_{k\imath\jmath\varepsilonll}(\zeta)z^\jmath) y^\imath w^{\varepsilonll_1}\bar w^{\varepsilonll_2}{\rm e}^{|k|(s-\rho)}\\ &\leq \frac{MA_\rho}{\gamma^{(2b)^{m+2}}}\|R_z\|_{D(s,r)}, \varepsilonnd{align*} where $M=|\omega|_{\Pi}^\mathcal{L}+\pmb|\Omega\pmb|_{-\delta,\Pi}^\mathcal{L}$. Then \begin{align*} \|\Delta F_z\|_{D(s-\rho,r)}&\leq \frac{A_\rho}{\gamma^{(2b)^{m+2}}}(\|\Delta R_z\|_{D(s,r)}+{M}\|R_z\|_{D(s,r)}). 
\varepsilonnd{align*} Dividing by $|\xi-\zeta|$ and taking the supremum over $\xi\neq\zeta$ in $\Pi$ we arrive at \begin{align}\label{FzL} \frac{1}{r^{a-1}}\|F_z\|_{D(s-\rho,r)}^{\mathcal{L}}&\leq\frac{A_\rho}{\gamma^{(2b)^{m+2}}}(\pmb\|X_R\pmb\|^{\mathcal{L}}+{M}\pmb\|X_R\pmb\|_{D(s,r)})\\ &\leq \frac{MA_\rho}{\gamma^{(2b)^{m+2}}}(\gamma^{(2b)^{m+2}}r^{m-a+1}\varepsilonta^m+\gamma^{2(2b)^{m+2}}r^{m-a}\varepsilonta^m)\notag\\ &\leq \frac{MA_\rho}{\gamma^{(2b)^{m+2}}} \gamma^{(2b)^{m+2}}r^{m-a}\varepsilonta^m.\notag \varepsilonnd{align} Similarly, we have \begin{align}\label{Fy}\frac{|F_y|_{D(s-\rho,r)}}{r^{a-2}}\leq cA_\rho r^{m-a}\varepsilonta^m,~~~~\frac{1}{r^{a-2}}|F_y|_{D(s-\rho,r)}^{\mathcal{L}}\leq c\frac{MA_\rho}{\gamma^{(2b)^{m+2}}} r^{m-a}\varepsilonta^m, \varepsilonnd{align} \begin{align}\label{Fx}\frac{|F_x|_{D(s-\rho,r)}}{r^a}\leq cA_\rho r^{m-a}\varepsilonta^m,~~~~\frac{1}{r^{a}}|F_x|_{D(s-\rho,r)}^{\mathcal{L}}\leq c\frac{MA_\rho}{\gamma^{(2b)^{m+2}}} r^{m-a}\varepsilonta^m. \varepsilonnd{align} Next we estimate $\|F_{w}\|_{\bar{p}}$, $\|F_{\bar w}\|_{\bar{p}}$. Using the Lemma 1 in \cite{poschel2}, we have \begin{align}\label{Fw}\|F_{w}\|_{\bar{p}}\leq cA_\rho r^{m-a}\varepsilonta^m,~~~~\|F_{w}\|_{\bar{p}}^{\mathcal{L}}\leq c\frac{MA_\rho}{\gamma^{(2b)^{m+2}}} r^{m-a}\varepsilonta^m. \varepsilonnd{align} \begin{align}\label{Fbw}\|F_{\bar w}\|_{\bar{p}}\leq cA_\rho r^{m-a}\varepsilonta^m,~~~~\|F_{\bar w}\|_{\bar{p}}^{\mathcal{L}}\leq c\frac{MA_\rho}{\gamma^{(2b)^{m+2}}} r^{m-a}\varepsilonta^m. \varepsilonnd{align} Hence, in view of (\ref{Fz}), (\ref{FzL}), (\ref{Fy}), (\ref{Fx}), (\ref{Fw}) and (\ref{Fbw}), \begin{align*} \pmb\|X_{F}\pmb\|_{r,D(s-\rho,{r})}+\frac{\gamma^{(2b)^{m+2}}}{M}\pmb\|X_F\pmb\|_{r,D(s-\rho,{r})}^{\mathcal{L}} &\leq cA_\rho r^{m-a}\varepsilonta^m. 
\varepsilonnd{align*} {{By the generalized Cauchy estimate, we have \begin{align*} \pmb\|DX_F\pmb\|_{r,r,D(s-2\rho,\frac{r}{2})}<\frac{2^a}{\rho r^a}\pmb\|X_F\pmb\|_{r,D(s-\rho,r)}, \varepsilonnd{align*} where on the left we use the operator norm \begin{align*} \pmb\|L\pmb\|_{r,r'}=\sup_{W\neq0}\frac{\pmb\|LW\pmb\|_{\bar p,r}}{\pmb\|W\pmb\|_{p,r'}}. \varepsilonnd{align*}}} The proof is complete.} \varepsilonnd{proof} Next we apply the above transformation $\phi_F^1$ to the Hamiltonian $H$, i.e., \begin{align*} H\circ\phi_F^1&=(N+R)\circ\phi_F^1+(P-R)\circ\phi_F^1\\ &=(N+R)+\{N,F\}+\int_0^1\{(1-t)\{N,F\}+ R,F\}\circ\phi_F^tdt+(P-R)\circ\phi_F^1\\ &=N+[R]+\int_0^1\{R_t,F\}\circ\phi_F^tdt+(P-R)\circ\phi_F^1+Q\\ &=:\bar N_++\bar P_+, \varepsilonnd{align*} and \begin{align}\label{eq15} \bar N_+&= N+[R]\notag\\ &=e+\langle\omega,y\rangle+\langle\Omega w,\bar w\rangle+g(z)+f(y,z,w,\bar w)+[R](y,z,w,\bar w),\\\label{eq16} \bar P_+&=\int_0^1\{R_t,F\}\circ\phi_F^tdt+(P-R)\circ\phi_F^1+Q,\\ R_t&=(1-t)Q+(1-t)[R]+tR.\notag \varepsilonnd{align} \subsubsection{Translation} In this subsection, we eliminate the first-order terms in $z$. Consider the translation $$\phi:x\rightarrow x,~~~~~y\rightarrow y,~~~~~z\rightarrow z+\zeta_+-\zeta,$$ where $z=(w_0,\bar w_0)^\top$, and $\zeta_+\in B_{ r}(\zeta)$ is to be determined.
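A brief remark, included for completeness (it assumes only that the symplectic structure in use has constant coefficients in the variables $(x,y,w,\bar w)$, as is standard in this setting): since $\phi$ shifts $z=(w_0,\bar w_0)^\top$ by a constant, $D\phi=Id$, so \begin{align*} \phi^*\alpha=\alpha \varepsilonnd{align*} for every constant-coefficient two-form $\alpha$. In particular $\phi$ is symplectic, and hence the composition $\phi_F^1\circ\phi$ of the Hamiltonian time-one flow with this translation is again a symplectic transformation.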
Let $$\Phi_+=\phi_F^1\circ\phi.$$ Then \begin{align} H\circ\Phi_+&=N_++P_+,\notag\\\label{eq17} N_+&=\bar N_+\circ\phi,\\\label{eq18} P_+&=\bar P_+\circ\phi, \varepsilonnd{align} with \begin{align*} N_+&=\bar{N}_+\circ\phi=(N+[R])\circ\phi\\ &=(e+\langle\omega,y\rangle+\langle\Omega w,\bar w\rangle+g(z)+f(y,z,w,\bar w)+[R](y,z,w,\bar w))\circ\phi\\ &=e+\langle\omega,y\rangle+\langle\Omega w,\bar w\rangle+g(z+\zeta_+-\zeta)+f(y,z+\zeta_+-\zeta,w,\bar w)+[R](y,z+\zeta_+-\zeta,w,\bar w)\\ &=:e_++\langle\omega_+,y\rangle+\langle\Omega_+ w,\bar w\rangle +g_++f_+, \varepsilonnd{align*} where \begin{align}\label{e+} e_+&=e+g(\zeta_+-\zeta)+[R](0,\zeta_+-\zeta,0,0),\\\label{omega+} \omega_+&=\omega+\partial_y[R](0,0,0,0),\\\label{Omega+} \Omega_+&=\Omega+\partial_{w,\bar w}[R](0,0,0,0),\\\label{g+} g_+&={g(z+\zeta_+-\zeta)-g(\zeta_+-\zeta)+[R](0,z+\zeta_+-\zeta,0,0)-[R](0,\zeta_+-\zeta,0,0),}\\\label{f+} f_+&=f(y,z+\zeta_+-\zeta,w,\bar w)+[R](y,z+\zeta_+-\zeta,w,\bar w)-[R](0,z+\zeta_+-\zeta,0,0)\\ &~~~~-\langle\partial_y[R](0,0,0,0),y\rangle-\langle\partial_{w,\bar w}[R](0,0,0,0)w,\bar w\rangle.\notag \varepsilonnd{align} \subsubsection{Eliminating the first-order terms}\label{326} In this subsection, we choose $\zeta_+$ appropriately so that the first-order terms in $z$ disappear. The details are given in the following lemma. \begin{lemma}\label{le3} Let \begin{align}\label{eq47} \nabla g_+(0)=\nabla g(\zeta_+-\zeta)+\nabla_z[R](0,\zeta_+-\zeta,0,0). \varepsilonnd{align} There exists $\zeta_+\in B_{ (r_-^{m-1}\varepsilonta_-^m)^\frac{1}{L}}(\zeta)$ such that \begin{align*} \nabla g_+(0)=\nabla g(0)=\cdots=\nabla g_0(0)=0. \varepsilonnd{align*} \varepsilonnd{lemma} \begin{proof} The proof will be completed by an induction on $\nu$. We start with the case $\nu=0$.
It follows from $g(z)=o(\|z\|_{\textsf{a},\textsf{p}}^2)$ and \textbf{(A0)} that \begin{align}\label{0} &\nabla g_0(\zeta_0)=\nabla g_0(0)=0,~~\zeta_0\in O\\\label{degg0} &\deg(\nabla g_0(\cdot)-\nabla g_0(0),O^o,0)\neq0,\\\label{nag0} &\|\nabla g_0(z)-\nabla g_0(z_*)\|_{\bar{p}}\geq\sigma_0\|z-z_*\|_{{p}}^L,~~z,z_*\in O. \varepsilonnd{align} Now assume that for some $\nu\geq1$ we have got \begin{align}\label{1} &\nabla g_i(0)=\nabla g_{i-1}(0)=0,~~\zeta_i\in B_{ (r_{i-2}^{m-1}\varepsilonta^m_{i-2})^\frac{1}{L}}(\zeta_{i-1}),\\\label{degg} &\deg(\nabla g_i(\cdot)-\nabla g_i(0),O^o,0)\neq0,\\\label{nag} &\|\nabla g_i(z)-\nabla g_i(z_*)\|_{\bar{p}}\geq\sigma_i\|z-z_*\|_{{p}}^L, ~~z\in O\backslash B_{(r^{m-1}\varepsilonta^m)^{\frac{1}{L}}}(z_*), z_*\in O, \varepsilonnd{align} where $i=1,2,\cdots,\nu.$ Then we need to find $\zeta_+$ near $\zeta$ such that $\nabla g_+(0)=\nabla g(0)$. Consider homotopy $H_t(z):[0,1]\times O\rightarrow \varepsilonll^{a,\bar{p}}\times\varepsilonll^{a,\bar{p}}$, \begin{align*} H_t(z)&=:\nabla g(z-\zeta)-\nabla g(0)+t\nabla_z[R](0,z-\zeta,0,0). \varepsilonnd{align*} Notice by (\ref{XR}) that \begin{align}\label{ezr} &\|\nabla_z[R](y,z,w,\bar w)\|_{\bar{p}}\leq r^{a-1}\pmb\|X_R\pmb\|_{r,D(s-\rho,8\varepsilonta r)}\leq\gamma^{2(2b)^{m+2}}r^{m-1}\varepsilonta^m. \varepsilonnd{align} For any $z\in\partial O$, $t\in[0,1]$, by (\ref{nag}) and (\ref{ezr}), we have \begin{align*} \|H_t(z)\|_{\bar{p}} &\geq\|\nabla g(z-\zeta)-\nabla g(0)\|_{\bar{p}}-\|\nabla_z[R](0,z-\zeta,0,0)\|_{\bar{p}}\\ &\geq\sigma\|z-\zeta\|_{p}^L-\gamma^{2(2b)^{m+2}}r^{m-1}\varepsilonta^m\\ &>\frac{\sigma\delta^L}{2}, \varepsilonnd{align*} where $\delta:=\min\{\|z-\zeta\|_{p}, \forall z\in\partial O\}$. So, it follows from the homotopy invariance and (\ref{degg}) that \begin{align}\label{H1} \deg(H_1(\cdot),O^o,0)=\deg(H_0(\cdot),O^o,0)\neq0. 
\varepsilonnd{align} We note by (\ref{nag}) and (\ref{ezr}) that for any $z\in O\backslash B_{(r_{-}^{m-1}\varepsilonta_{-}^m)^{\frac{1}{L}}}(\zeta)$, \begin{align*} \|H_1(z)\|_{\bar{p}}&=\|\nabla g(z-\zeta)-\nabla g(0)+\nabla_z[R](0,z-\zeta,0,0)\|_{\bar{p}}\\ &\geq\|\nabla g(z-\zeta)-\nabla g(0)\|_{\bar{p}}-\|\nabla_z[R](0,z-\zeta,0,0)\|_{\bar{p}}\\ &\geq\sigma\|z-\zeta\|_p^L-\gamma_0^{2(2b)^{m+2}}r^{m-1}\varepsilonta^m\\ &\geq\sigma r_{-}^{m-1}\varepsilonta_{-}^m-\gamma_0^{2(2b)^{m+2}}r^{m-1}\varepsilonta^m\\ &\geq\frac{\sigma}{2}r_{-}^{m-1}\varepsilonta_{-}^m. \varepsilonnd{align*} Hence by excision and (\ref{H1}), \begin{align*} \deg(H_1(\cdot),B_{ (r_{-}^{m-1}\varepsilonta_{-}^m)^{\frac{1}{L}}}(\zeta),0)=\deg(H_1(\cdot),O^o,0)\neq0, \varepsilonnd{align*} then there exists at least one $\zeta_+\in B_{ (r_{-}^{m-1}\varepsilonta_{-}^m)^{\frac{1}{L}}}(\zeta)$ such that \begin{align*} H_1(\zeta_+)=0, \varepsilonnd{align*} i.e., \begin{align*} \nabla g(\zeta_+-\zeta)+\nabla_z[R](0,\zeta_+-\zeta,0,0)=\nabla g(0), \varepsilonnd{align*} thus \begin{align}\label{g+=g0} \nabla g_+(0)=\nabla g(0)=\cdots=\nabla g_0(0)=0. \varepsilonnd{align} Next we need to prove \begin{align}\label{degg+} &\deg(\nabla g_+(\cdot)-\nabla g_+(0),O^o,0)\neq0,\\\label{nag+} &\|\nabla g_+(z)-\nabla g_+(z_*)\|_{\bar{p}}\geq\sigma_+\|z-z_*\|_{{p}}^L. \varepsilonnd{align} By (\ref{g+}), \begin{align*} \nabla g_+(z)=\nabla g(z+\zeta_+-\zeta)+\nabla[R](0,z+\zeta_+-\zeta,0,0).
\varepsilonnd{align*} Then \begin{align}\label{g+-g} \nabla g_+(z)-\nabla g(z)&=\nabla g(z+\zeta_+-\zeta)-\nabla g(z)+\nabla[R](0,z+\zeta_+-\zeta,0,0), \varepsilonnd{align} and \begin{align}\label{g+-g+} \nabla g_+(z)-\nabla g_+(z_*)&=\nabla g(z+\zeta_+-\zeta)-\nabla g(z_*+\zeta_+-\zeta)+\nabla[R](0,z+\zeta_+-\zeta,0,0)\\ &~~~-\nabla[R](0,z_*+\zeta_+-\zeta,0,0).\notag \varepsilonnd{align} In view of (\ref{ezr}), (\ref{g+-g}), and $\zeta_+\in B_{(r_{-}^{m-1}\varepsilonta_{-}^m)^\frac{1}{L}}(\zeta)$, we get \begin{align}\label{nag+-nag} \|\nabla g_+(z)-\nabla g(z)\|_{\bar{p}}\leq c(r_{-}^{m-1}\varepsilonta_{-}^m)^\frac{1}{L}, \varepsilonnd{align} so, it follows from the property of degree, (\ref{degg}) and (\ref{g+=g0}) that (\ref{degg+}) holds, i.e., \begin{align*} \deg(\nabla g_+(\cdot)-\nabla g_+(0),O^o,0)=\deg(\nabla g_+(\cdot)-\nabla g(\cdot)+\nabla g(\cdot)-\nabla g(0),O^o,0)\neq0. \varepsilonnd{align*} According to (\ref{nag}), (\ref{ezr}) and (\ref{g+-g+}), we have for any $z\in O\backslash B_{(r^{m-1}\varepsilonta^m)^{\frac{1}{L}}}(z_*)$, \begin{align*} \|\nabla g_+(z)-\nabla g_+(z_*)\|_{\bar{p}}\geq\sigma\|z-z_*\|_{{p}}^L-4\gamma_0^{2(2b)^{m+2}}r^{m-1}\varepsilonta^m\geq\sigma_+\|z-z_*\|_{{p}}^L, \varepsilonnd{align*} which implies (\ref{nag+}). The proof is complete. \varepsilonnd{proof} \subsubsection{Frequency Property} In view of (\ref{omega+}), (\ref{Omega+}) and $\pmb\|X_{[R]}\pmb\|\leq c\pmb\|X_{P}\pmb\|$, we can get $|\omega_+-\omega|<c\pmb\|X_P\pmb\|_r$ and $\|(\Omega_+-\Omega)w\|_{\bar p}<cr^a\pmb\|X_P\pmb\|_r$ on $D(s,r)$, hence $\pmb|\Omega_+-\Omega\pmb|_{\bar p-p}<c\pmb\|X_P\pmb\|_r$ on $\Pi$. The same holds for their Lipschitz semi-norms with $-\delta\leq\bar p-p$, and we get \begin{align}\label{om+-om} |\omega_+-\omega|+\pmb|\Omega_+-\Omega\pmb|_{-\delta}<c\pmb\|X_P\pmb\|_r,~~|\omega_+-\omega|^{\mathcal{L}}+\pmb|\Omega_+-\Omega\pmb|_{-\delta}^{\mathcal{L}}<c\pmb\|X_P\pmb\|_r^{\mathcal{L}}. 
\varepsilonnd{align} In order to bound the small divisors for the new frequencies $\omega_+$ and $\Omega_+$ for $|k|<K_+$, we observe that $|\varepsilonll|_\delta\leq|\varepsilonll|_{d-1}\leq 2\langle\varepsilonll\rangle_d$, hence $$|\langle k,\omega_+-\omega\rangle+\langle\varepsilonll,\Omega_+-\Omega\rangle|\leq|k||\omega_+-\omega|+|\varepsilonll|_\delta\pmb|\Omega_+-\Omega\pmb|_{-\delta}<K_+\langle\varepsilonll\rangle_d\pmb\|X_P\pmb\|_r\leq(\gamma-\gamma_+)\frac{\langle\varepsilonll\rangle_d}{(1+|k|)^\tau},$$ where $\gamma-\gamma_+>cK_+\max_{|k|\leq K_+}(1+|k|)^\tau\pmb\|X_P\pmb\|_r$. The new frequencies then satisfy \begin{align*} |\langle k,\omega_+\rangle+\langle \varepsilonll,\Omega_+ \rangle|\geq \gamma_+\frac{\langle\varepsilonll\rangle_d}{A_k} \varepsilonnd{align*} on $\Pi$. \subsubsection{Estimate on $\Phi_+$} \begin{lemma}\label{le7} In addition to \textbf{\textsc{(H1)}}-\textbf{\textsc{(H2)}}, assume that \begin{align*} &\textbf{\textsc{(H3)}}~{A_\rho}r^{m-1}\varepsilonta^{m}<\rho,\\ &\textbf{\textsc{(H4)}}~A_\rho r^{m-2a}\varepsilonta^{m-a}<1. \varepsilonnd{align*} Then the following hold. \begin{itemize} \item[(1)]For all $0\leq t\leq 1$, \begin{align}\label{eq26} \phi_F^t&:D(s-5\rho,2\varepsilonta r)\rightarrow D(s-4\rho,4\varepsilonta r),\\\label{eq27} \phi&:D(s-6\rho,\varepsilonta r)\rightarrow D(s-5\rho,2\varepsilonta r), \varepsilonnd{align} are well defined. \item[(2)]$\Phi_+:D_+=D(s_+,r_+)=D(s-6\rho,\varepsilonta r)\rightarrow D(s,r).$ \item[(3)]There is a constant $c_3$ such that \begin{align*} \pmb\|\phi_F^t-id\pmb\|^{*}_{r,D(s-5\rho,2\varepsilonta r)}&\leq c_3\pmb\|X_F\pmb\|_{r,D(s-2\rho,4\varepsilonta r)}^*\leq c_3A_\rho r^{m-a}\varepsilonta^m,\\ \pmb\|D\phi_F^t-Id\pmb\|^{*}_{r,r,D(s-6\rho,\varepsilonta r)}&\leq c_3\frac{1}{\rho r^a}\pmb\|X_F\pmb\|_{r,D(s-2\rho,4\varepsilonta r)}^*\leq c_3\frac{A_\rho}{\rho}r^{m-2a}\varepsilonta^{m-a}.
\varepsilonnd{align*} \item[(4)] \begin{align*} \pmb\|\Phi_+-id\pmb\|^{*}_{r,D(s-5\rho,2\varepsilonta r)}&\leq c_3\pmb\|X_F\pmb\|_{r,D(s-6\rho,4\varepsilonta r)}^*\leq c_3A_\rho r^{m-a}\varepsilonta^m,\\ \pmb\|D\Phi_+-Id\pmb\|^{*}_{r,r,D(s-4\rho,\varepsilonta r)}&\leq c_3\frac{1}{\rho r^a}\pmb\|X_F\pmb\|_{r,D(s-2\rho,4\varepsilonta r)}^*\leq c_3\frac{A_\rho}{\rho}r^{m-2a}\varepsilonta^{m-a}. \varepsilonnd{align*} \varepsilonnd{itemize} \varepsilonnd{lemma} \begin{proof} (1)~~(\ref{eq27}) immediately follows from $\zeta_+\in B_{(r_{-}^{m-1}\varepsilonta_{-}^m)^\frac{1}{L}}(\zeta)$ in Lemma \ref{le3}. Indeed, for any $(x,y,z,w,\bar w)\in D(s-6\rho,\varepsilonta r)$, $\phi(x,y,z,w,\bar w)=(x,y,z+\zeta_+-\zeta,w,\bar w)$, then \begin{align*} \|z+\zeta_+-\zeta\|_p&<\varepsilonta r+c(r_{-}^{m-1}\varepsilonta_{-}^m)^\frac{1}{L}<2\varepsilonta r, \varepsilonnd{align*} as $m\geq L+\frac{\sqrt{4L^2+2L}}{2}$. To verify (\ref{eq26}), we denote $\phi_{F_1}^t$, $\phi_{F_2}^t$, $\phi_{F_3}^t$, $\phi_{F_4}^t$, $\phi_{F_5}^t$ as the components of $\phi_{F}^t$ in the $x$, $y$, $z$, $w$, $\bar w$ planes, respectively. Then \begin{align}\label{eq28} \phi_F^t=id+\int_0^tX_F\circ \phi_F^\lambda d\lambda,~~~~0\leq t\leq 1. \varepsilonnd{align} For any $(x,y,z,w,\bar w)\in D(s-5\rho,2\varepsilonta r)$, we let $t_*=\sup\{t\in[0,1]:\phi_F^t(x,y,z,w,\bar w)\in D(s-4\rho,4\varepsilonta r)\}$.
Then for any $0\leq t\leq t_*$, by the definition of $\pmb\|\cdot\pmb\|_{r,D(s,r)}$, (\ref{XP}), (\ref{XR}), (\ref{XF}), (\ref{eq28}), \textbf{(H3)} and \textbf{(H4)}, we get \begin{align*} |\phi_{F_1}^t(x,y,z)|_{D(s-5\rho,2\varepsilonta r)}&\leq|x|_{D(s-5\rho,2\varepsilonta r)}+\int_0^t|F_y\circ\phi_F^\lambda|_{D(s-5\rho,2\varepsilonta r)}d\lambda\\ &\leq s-5\rho+r^{a-1}\pmb\|X_F\pmb\|_{r,D(s-4\rho,4\varepsilonta r)}\\ &<s-5\rho+cA_\rho r^{m-1}\varepsilonta^m\\ &\leq s-4\rho,\\ |\phi_{F_2}^t(x,y,z)|_{D(s-5\rho,2\varepsilonta r)}&\leq|y|_{D(s-5\rho,2\varepsilonta r)}+\int_0^t|-F_x\circ\phi_F^\lambda|_{D(s-5\rho,2\varepsilonta r)}d\lambda\\ &\leq (2\varepsilonta r)^2+r^a\pmb\|X_F\pmb\|_{r,D(s-4\rho,4\varepsilonta r)}\\ &<(2\varepsilonta r)^2+c A_\rho r^m\varepsilonta^m\\ &<(4\varepsilonta r)^2,\\ |\phi_{F_3}^t(x,y,z)|_{D(s-5\rho,2\varepsilonta r)}&\leq|z|_{D(s-5\rho,2\varepsilonta r)}+\int_0^t|\tilde{J}F_z\circ\phi_F^\lambda|_{D(s-5\rho,2\varepsilonta r)}d\lambda\\ &\leq 2\varepsilonta r+r\pmb\|X_F\pmb\|_{r,D(s-4\rho,4\varepsilonta r)}\\ &\leq 2\varepsilonta r+cA_\rho r^{m-a+1}\varepsilonta^m\\ &<4\varepsilonta r,\\ |\phi_{F_4}^t(x,y,z)|_{D(s-5\rho,2\varepsilonta r)}&\leq|w|_{D(s-5\rho,2\varepsilonta r)}+\int_0^t|iF_{\bar w}\circ\phi_F^\lambda|_{D(s-5\rho,2\varepsilonta r)}d\lambda\\ &\leq (2\varepsilonta r)^a+\pmb\|X_F\pmb\|_{r,D(s-4\rho,4\varepsilonta r)}\\ &\leq(2\varepsilonta r)^a+cA_\rho r^{m-a}\varepsilonta^m\\ &<(4\varepsilonta r)^a,\\ |\phi_{F_5}^t(x,y,z)|_{D(s-5\rho,2\varepsilonta r)}&\leq|\bar w|_{D(s-5\rho,2\varepsilonta r)}+\int_0^t|-iF_{ w}\circ\phi_F^\lambda|_{D(s-5\rho,2\varepsilonta r)}d\lambda\\ &\leq (2\varepsilonta r)^a+\pmb\|X_F\pmb\|_{r,D(s-4\rho,4\varepsilonta r)}\\ &\leq(2\varepsilonta r)^a+cA_\rho r^{m-a}\varepsilonta^m\\ &<(4\varepsilonta r)^a. \varepsilonnd{align*} Thus, $\phi_F^t\in D(s-4\rho,4\varepsilonta r)\subset D(s,r)$, i.e. $t_*=1$ and (1) holds. (2)~~It follows from (1) that (2) holds. 
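For clarity, the chain of inclusions behind step (2), obtained by combining (\ref{eq27}) with (\ref{eq26}) at $t=1$, reads \begin{align*} D(s_+,r_+)=D(s-6\rho,\varepsilonta r)\xrightarrow{\phi}D(s-5\rho,2\varepsilonta r)\xrightarrow{\phi_F^1}D(s-4\rho,4\varepsilonta r)\subset D(s,r), \varepsilonnd{align*} so $\Phi_+=\phi_F^1\circ\phi$ indeed maps $D_+$ into $D(s,r)$.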
(3)~~By (\ref{XP}), (\ref{XR}), (\ref{XF}) and (\ref{eq28}), \begin{align*} \pmb\|\phi_F^t-id\pmb\|_{r,D(s-5\rho,2\varepsilonta r)}&=\pmb\|\int_0^tX_F\circ \phi_F^\lambda d\lambda\pmb\|_{r,D(s-5\rho,2\varepsilonta r)}\leq c\pmb\|X_F\pmb\|_{r,D(s-4\rho,4\varepsilonta r)}. \varepsilonnd{align*} Using Lemma A.4 in \cite{poschel2}, we have $$\pmb\|\phi_F^t-id\pmb\|^{\mathcal{L}}_{r,D(s-5\rho,2\varepsilonta r)}\leq exp(\pmb\|DX_F\pmb\|_{D(s-4\rho,4\varepsilonta r)})\pmb\|X_F\pmb\|^{\mathcal{L}}_{D(s-4\rho,4\varepsilonta r)}\leq c_4\pmb\|X_F\pmb\|_{D(s-4\rho,4\varepsilonta r)}^{\mathcal{L}}.$$ {{By the generalized Cauchy estimate, \begin{align}\label{Dphi-I} \pmb\|D\phi_F^t-I\pmb\|^{*}_{r,r,D(s-6\rho,\varepsilonta r)}&<\frac{\pmb\|\phi_F^t-id\pmb\|^{*}_{r,D(s-5\rho,2\varepsilonta r)}}{\rho(\varepsilonta r)^a}<c\frac{\pmb\|X_F\pmb\|^*_{D(s-4\rho,4\varepsilonta r)}}{\rho(\varepsilonta r)^a}=c\frac{A_\rho}{\rho}r^{m-2a}\varepsilonta^{m-a}. \varepsilonnd{align}}} (4) now follows from (3). \varepsilonnd{proof} \subsubsection{Estimate on $P_+$} In the following, we estimate the next step $P_+$. \begin{lemma}\label{le8} Assume $\textbf{\textsc{(H1)}}$-$\textbf{\textsc{(H4)}}$ and \begin{align*} &\textbf{\textsc{(H5)}}~ \Delta=:(\frac{A_\rho^2}{\rho^2}r^{m-2a}\varepsilonta^{-\frac{1}{2}} +c\varepsilonta^{\frac{1}{2}}\gamma^{(2b)^{m+2}}+c\frac{A_{\rho}}{\rho}\varepsilonta^{\frac{3}{2}})\leq\gamma_+^{2(2b)^{m+2}}. \varepsilonnd{align*} Then \begin{equation}\label{eXP+} \pmb\|X_{P_+}\pmb\|^{\lambda}_{r_+,D(s_+,r_+)}\leq \gamma_+^{2(2b)^{m+2}}r_+^{m-a}\varepsilonta_+^m, \varepsilonnd{equation} where $0<\lambda<\frac{\gamma^{(2b)^{m+2}}}{M}$. \varepsilonnd{lemma} \begin{proof} Recall (\ref{eq16}) and (\ref{eq18}), i.e., \begin{align*} P_+&=\int_0^1\{R_t,F\}\circ\Phi_+dt+(P-R)\circ\Phi_++Q\circ\phi,\\ R_t&=(1-t)Q+(1-t)[R]+tR. 
\varepsilonnd{align*} Hence the new perturbing vector field is \begin{align}\label{XP+} X_{P_+}=\int_{0}^1(\Phi_+)^*[X_{R_t},X_{F}]dt+(\Phi_+)^*(X_P-X_R)+\phi^*X_{Q}, \varepsilonnd{align} where $(\Phi_+)^*(X_P-X_R)=D\Phi_+^{-1}(X_P-X_R)\circ\Phi_+$, $[X_{R_t},X_{F}]=JD\nabla R_tX_F-JD\nabla FX_{R_t}$. Recall that $\Phi_{+}$ maps $D(s-6\rho,\varepsilonta r)$ into $D(s-4\rho, 4\varepsilonta r)$. Using the method of \cite{poschel2}, pages 14-15, we can prove that for $0<\lambda<\frac{\gamma^{(2b)^{m+2}}}{M}$, \begin{align}\label{Phi*} \pmb\|\Phi_{+}^*Y\pmb\|_{{\varepsilonta r},D(s-6\rho,\varepsilonta r)}&<c\pmb\|Y\pmb\|_{\varepsilonta r,D(s-4\rho,4\varepsilonta r)},\\\label{phi*l} \pmb\|\Phi_{+}^*Y\pmb\|^{\lambda}_{{\varepsilonta r},D(s-6\rho,\varepsilonta r)}&<c\pmb\|Y\pmb\|^{\lambda}_{\varepsilonta r,D(s-3\rho,5\varepsilonta r)}. \varepsilonnd{align} Then in view of (\ref{XP-R}) and (\ref{Phi*}), we can prove \begin{align}\label{Phi*XP} \pmb\|(\Phi_+)^*(X_P-X_R)\pmb\|^{\lambda}_{\varepsilonta r,D(s-6\rho,\varepsilonta r)}\leq c\pmb\|X_P-X_R\pmb\|^{\lambda}_{\varepsilonta r,D(s-4\rho,4\varepsilonta r)}\leq c\varepsilonta^{m+1-a}\pmb\|X_P\pmb\|^{\lambda}_{ r,D(s,r)}. \varepsilonnd{align} Recall the definition of $Q$ in (\ref{Q}), i.e., \begin{align*} Q=(\partial_zg+\partial_zf)J\partial_zF|_{2|\imath|+|\jmath|> m~\text{or}~|\varepsilonll|\geq 2}. \varepsilonnd{align*} In the following, we estimate \begin{align*} \pmb\|X_Q\pmb\|_{\varepsilonta r,D(s-2\rho,6\varepsilonta r)}=\sup_{D(s-2\rho,6\varepsilonta r)}\{\frac{|Q_y|}{(\varepsilonta r)^{a-2}}+\frac{|Q_x|}{(\varepsilonta r)^a}+\frac{||JQ_z||_{\bar p}}{(\varepsilonta r)^{a-1}}+\|Q_{w}\|+\|Q_{\bar w}\|\}.
\varepsilonnd{align*} We calculate \begin{align}\label{Qy} \frac{|Q_y|_{D(s-2\rho,6\varepsilonta r)}}{(\varepsilonta r)^{a-2}}&<\frac{|Q|_{D(s-2\rho,8\varepsilonta r)}}{(\varepsilonta r)^a}<\frac{|(\partial_zg+\partial_zf)J\partial_zF|_{D(s-2\rho,8\varepsilonta r)}}{(\varepsilonta r)^a}<c\frac{r^{a-1}\pmb\|X_F\pmb\|_{r,{D(s-2\rho,8\varepsilonta r)}}}{(\varepsilonta r)^{a-2}}\\ &<cA_\rho r^{m-a+1}\varepsilonta^{m-a+2},\notag \varepsilonnd{align} and \begin{align}\label{QyL} \frac{|Q_y|^{\mathcal{L}}_{D(s-2\rho,6\varepsilonta r)}}{(\varepsilonta r)^{a-2}}&<cr^{a-1}\pmb\|X_F\pmb\|+cr^{a-1}\pmb\|X_F\pmb\|^{\mathcal{L}}<c\frac{M}{\gamma^{(2b)^{2\imath+\jmath+\varepsilonll}}}A_\rho r^{m-a+1}\varepsilonta^{m-a+2}. \varepsilonnd{align} Similarly, we can prove \begin{align}\label{Qz} \frac{|Q_x|_{D(s-2\rho,6\varepsilonta r)}}{(\varepsilonta r)^a}, \frac{||Q_z||_{\bar p,D(s-2\rho,6\varepsilonta r)}}{(\varepsilonta r)^{a-1}},||Q_w||_{\bar p,D(s-2\rho,6\varepsilonta r)},||Q_{\bar w}||_{\bar p,D(s-2\rho,6\varepsilonta r)}\leq c\frac{A_\rho}{\rho}r^{m-a+1}\varepsilonta^{m-a+2}, \varepsilonnd{align} and \begin{align} \frac{|Q_x|^{\mathcal{L}}_{D(s-2\rho,6\varepsilonta r)}}{(\varepsilonta r)^a}, \frac{||Q_z||^{\mathcal{L}}_{\bar p,D(s-2\rho,6\varepsilonta r)}}{(\varepsilonta r)^{a-1}},||Q_w||^{\mathcal{L}}_{\bar p,D(s-2\rho,6\varepsilonta r)},||Q_{\bar w}||^{\mathcal{L}}_{\bar p,D(s-2\rho,6\varepsilonta r)}\leq c\frac{A_\rho}{\rho}\frac{M}{\gamma^{(2b)^{2\imath+\jmath+\varepsilonll}}}r^{m-a+1}\varepsilonta^{m-a+2}. \varepsilonnd{align} Then \begin{align}\label{XQ} \pmb\|X_Q\pmb\|^{\lambda}_{\varepsilonta r,D(s-2\rho,6\varepsilonta r)}\leq c\frac{A_{\rho}}{\rho}r^{m-a+1}\varepsilonta^{m-a+2}.
\varepsilonnd{align} By (\ref{XR}) and (\ref{XQ}), we can check that \begin{align}\label{XRt} \pmb\|X_{R_t}\pmb\|_{\varepsilonta r,D(s-2\rho,6\varepsilonta r)}&\leq \pmb\|X_{Q}\pmb\|_{\varepsilonta r,D(s-2\rho,6\varepsilonta r)}+\pmb\|X_{[R]}\pmb\|_{\varepsilonta r,D(s-\rho,8\varepsilonta r)}+\pmb\|X_{R}\pmb\|_{\varepsilonta r,D(s-\rho,8\varepsilonta r)}\notag\\ &\leq c\frac{A_{\rho}}{\rho}r^{m-a+1}\varepsilonta^{m-a+2}+c\gamma^{2(2b)^{m+2}}r^{m-a}\varepsilonta^{m},\notag\\ &\leq c\frac{A_{\rho}}{\rho}r^{m-a}\varepsilonta^{m}, \varepsilonnd{align} and \begin{align} \pmb\|X_{R_t}\pmb\|^\mathcal{L}_{\varepsilonta r,D(s-2\rho,6\varepsilonta r)}&\leq c\frac{A_\rho}{\rho}\frac{M}{\gamma^{(2b)^{2\imath+\jmath+\varepsilonll}}}r^{m-a}\varepsilonta^{m}. \varepsilonnd{align} Using the generalized Cauchy estimate, Lemmas \ref{le1} and \ref{le2}, we get \begin{align}\label{DXF} \pmb\|DX_F\pmb\|^*_{r,r,D(s-3\rho, 5\varepsilonta r)}&\leq\frac{\pmb\|X_F\pmb\|^*_{r,D(s-2\rho,6\varepsilonta r)}}{\rho(\varepsilonta r)^a},~~~ \pmb\|DX_{R_t}\pmb\|^*_{r,r,D(s-3\rho, 5\varepsilonta r)}\leq\frac{\pmb\|X_{R_t}\pmb\|^*_{r,D(s-2\rho,6\varepsilonta r)}}{\rho(\varepsilonta r)^a}. 
\varepsilonnd{align} Then (\ref{XR}), (\ref{XF}), (\ref{XRt}), (\ref{DXF}) together with the definition of $[\cdot,\cdot]$ yield \begin{align}\label{[XRt]} \pmb\|[X_{R_t},X_F]\pmb\|_{\varepsilonta r,D(s-3\rho, 5\varepsilonta r)}&\leq\pmb\|DX_{R_t}\cdot X_F\pmb\|_{\varepsilonta r,D(s-3\rho, 5\varepsilonta r)}+\pmb\|DX_F\cdot X_{R_t} \pmb\|_{\varepsilonta r,D(s-3\rho, 5\varepsilonta r)}\\ &\leq\pmb\|DX_{R_t}\pmb\|_{\varepsilonta r,\varepsilonta r}\pmb\|X_F\pmb\|_{\varepsilonta r,D(s-3\rho, 5\varepsilonta r)}+\pmb\|DX_F \pmb\|_{\varepsilonta r,\varepsilonta r}\pmb\|X_{R_t} \pmb\|_{\varepsilonta r,D(s-3\rho, 5\varepsilonta r)}\notag\\ &\leq\frac{\pmb\|X_F\pmb\|_{\varepsilonta r,D(s-2\rho, 6\varepsilonta r)}\pmb\|X_{R_t} \pmb\|_{\varepsilonta r,D(s-2\rho, 6\varepsilonta r)}}{\rho (\varepsilonta r)^a}\notag\\ &\leq c\frac{A_\rho^2}{\rho^2}r^{2m-3a}\varepsilonta^{2m-a}.\notag \varepsilonnd{align} Similarly, \begin{align}\label{[XRtL]} \pmb\|[X_{R_t},X_F]\pmb\|^{\mathcal{L}}_{\varepsilonta r,D(s-3\rho, 5\varepsilonta r)}&\leq\pmb\|DX_{R_t}\cdot X_F\pmb\|^{\mathcal{L}}_{\varepsilonta r,D(s-3\rho, 5\varepsilonta r)}+\pmb\|DX_F\cdot X_{R_t} \pmb\|^{\mathcal{L}}_{\varepsilonta r,D(s-3\rho, 5\varepsilonta r)}\\ &\leq\pmb\|DX_{R_t}\pmb\|^{\mathcal{L}}_{\varepsilonta r,\varepsilonta r}\pmb\|X_F\pmb\|_{\varepsilonta r,D(s-3\rho, 5\varepsilonta r)}+\pmb\|DX_{R_t}\pmb\|_{\varepsilonta r,\varepsilonta r}\pmb\|X_F\pmb\|^{\mathcal{L}}_{\varepsilonta r,D(s-3\rho, 5\varepsilonta r)}\notag\\ &~~~~+\pmb\|DX_F \pmb\|^{\mathcal{L}}_{\varepsilonta r,\varepsilonta r}\pmb\|X_{R_t} \pmb\|_{\varepsilonta r,D(s-3\rho, 5\varepsilonta r)}+\pmb\|DX_F \pmb\|_{\varepsilonta r,\varepsilonta r}\pmb\|X_{R_t} \pmb\|^{\mathcal{L}}_{\varepsilonta r,D(s-3\rho, 5\varepsilonta r)}\notag\\ &\leq c\frac{A_\rho^2}{\rho^2}\frac{M}{\gamma^{(2b)^{m+2}}}r^{2m-3a}\varepsilonta^{2m-a}.\notag \varepsilonnd{align} So, by (\ref{phi*l}), (\ref{[XRt]}) and (\ref{[XRtL]}), we have \begin{align}\label{Phi*XRt} 
\pmb\|(\Phi_+)^*[X_{R_t},X_{F}]\pmb\|^{\lambda}_{\varepsilonta r,D(s-6\rho,\varepsilonta r)}&\leq c\pmb\|[X_{R_t},X_{F}]\pmb\|^{\lambda}_{\varepsilonta r,D(s-3\rho,5\varepsilonta r)}\leq c\frac{A_\rho^2}{\rho^2}r^{2m-3a}\varepsilonta^{2m-a}. \varepsilonnd{align} Recalling (\ref{XP+}) and collecting the terms (\ref{Phi*XP}), (\ref{XQ}) and (\ref{Phi*XRt}), we arrive at the estimate \begin{align*} \pmb\|X_{P_+}\pmb\|^{\lambda}_{\varepsilonta r,D(s-5\rho,\varepsilonta r)}&\leq c\frac{A_\rho^2}{\rho^2}r^{2m-3a}\varepsilonta^{2m-a} +c\varepsilonta^{m-a+1}\gamma^{(2b)^{m+2}}r^{m-a}\varepsilonta^m+c\frac{A_{\rho}}{\rho}r^{m-a+1}\varepsilonta^{m-a+2}\\ &\leq cr_+^{m-a}\varepsilonta_+^m(\frac{A_\rho^2}{\rho^2}r^{m-2a}\varepsilonta^{-\frac{1}{2}} +c\varepsilonta^{\frac{1}{2}}\gamma^{(2b)^{m+2}}+c\frac{A_{\rho}}{\rho}\varepsilonta^{\frac{3}{2}})\\ &\leq\Delta r_+^{m-a}\varepsilonta_+^m\\ &\leq \gamma_+^{2(2b)^{m+2}}r_+^{m-a}\varepsilonta_+^m, \varepsilonnd{align*} where the last inequality follows from \textbf{\textsc{(H5)}}. The proof is complete. \varepsilonnd{proof} This completes one cycle of KAM steps. \section{Proof of Main Results}\label{sec:proof} \subsection{Iteration Lemma} In this section, we will prove an iteration lemma which guarantees the inductive construction of the transformations in all KAM steps. Let $r_0,s_0,\gamma_0, \varepsilonta_0,H_0,N_0,P_0$ be given at the beginning of Section \ref{sec:KAM} and let $D_0=D(s_0,r_0)$, $K_0=0$, $\Phi_0=id$.
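Before listing the iteration sequences, we record a simple consistency check (a standard observation; it uses only the definitions of $\rho_\nu$ and $s_\nu$ given below): since $\rho_\nu=\frac{\rho_0}{2^\nu}$ and $s_\nu=s_{\nu-1}-6\rho_{\nu-1}$, \begin{align*} s_\nu=s_0-6\sum_{i=0}^{\nu-1}\rho_i\geq s_0-6\rho_0\sum_{i=0}^{\infty}\frac{1}{2^i}=s_0-12\rho_0, \varepsilonnd{align*} so the analyticity widths $s_\nu$ remain bounded away from zero provided $\rho_0<\frac{s_0}{12}$, while $\varepsilonta_\nu=\varepsilonta_{\nu-1}^{1+\frac{1}{2m}}$ decays super-exponentially, which forces the domains $D_\nu$ to shrink rapidly to their limit.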
We define the following sequence inductively for all $\nu=1,2,\cdots$: \begin{align*} \rho_{\nu}&=\frac{\rho_0}{2^\nu},\\ \varepsilonta_\nu&=\varepsilonta_{\nu-1}^{1+\frac{1}{2m}},\\ r_\nu&=\varepsilonta_{\nu-1}r_{\nu-1},\\ s_\nu&=s_{\nu-1}-6\rho_{\nu-1},\\ \gamma_\nu&=\gamma_0(\frac{1}{2}+\frac{1}{2^\nu}),\\ \sigma_\nu&=\sigma_0(\frac{1}{2}+\frac{1}{2^\nu}),\\ M_\nu&=M_0(2-\frac{1}{2^\nu}),\\ K_\nu&=([\log(\frac{1}{\varepsilonta_{\nu-1}^{m+1}})]+1)^{3\mu},\\ D_\nu&=D(s_\nu, r_\nu),\\ {\Pi_\nu}&=\{\xi\in\Pi_{\nu-1}:|\langle k,\omega_{\nu-1}(\xi)\rangle+\langle\varepsilonll,\Omega_{\nu-1}(\xi)\rangle|\geq\frac{\gamma_{\nu-1}\langle \varepsilonll\rangle_d}{(1+|k|)^\tau}, \\ &~~~~~~|k|\leq K_+, |\varepsilonll|\leq2, |k|+|\varepsilonll|\neq0 \}. \varepsilonnd{align*} \begin{lemma}\label{le9} Denote \begin{align*} \varepsilonta_*^2=\varepsilonta_0^m. \varepsilonnd{align*} If $\varepsilon$ is small enough, then the KAM step described in Section \ref{sec:KAM} is valid for all $\nu=0,1,\cdots$, resulting the sequences $$e_\nu, \omega_\nu, \Omega_\nu, g_\nu, f_\nu, P_\nu, \Phi_\nu, H_\nu$$ $\nu=1,2,\cdots,$ with the following properties: \begin{itemize} \item[(1)] \begin{align}\label{eq30} |{e_{\nu+1}}-{e_{\nu}}|_{\Pi_\nu}&\leq \frac{\varepsilonta_*^\frac{1}{2}}{2^{\nu-1}},\\\label{eq31} |{e_{\nu+1}}-{e_{0}}|_{\Pi_\nu}&\leq2\varepsilonta_*^{\frac{1}{2}},\\\label{eq32} |{\omega_{\nu+1}}-{\omega_{\nu}}|^{\lambda_\nu}_{\Pi_\nu}&\leq \frac{\varepsilonta_*^\frac{1}{2}}{2^\nu},\\\label{eq33} |{\omega_{\nu+1}}-{\omega_{0}}|^{\lambda_\nu}_{\Pi_\nu}&\leq2\varepsilonta_*^{\frac{1}{2}},\\\label{eq34} \pmb|{\Omega_{\nu+1}}-{\Omega_{\nu}}\pmb|^{\lambda_\nu}_{-\delta,\Pi_\nu}&\leq \frac{\varepsilonta_*^\frac{1}{2}}{2^\nu},\\\label{eq35} \pmb|{\Omega_{\nu+1}}-{\Omega_{0}}\pmb|^{\lambda_\nu}_{-\delta,\Pi_\nu}&\leq2\varepsilonta_*^{\frac{1}{2}},\\\label{eq36} |{g_{\nu+1}}-{g_{\nu}}|_{D(s_\nu,r_{\nu})}&\leq \frac{\varepsilonta_*^\frac{1}{2}}{2^{\nu-1}},\\\label{eq37} 
|{g_{\nu+1}}-{g_{0}}|_{D(s_\nu,r_{\nu})}&\leq2\varepsilonta_*^{\frac{1}{2}},\\\label{eq38} |{f_{\nu+1}}-{f_{\nu}}|_{D(s_\nu,r_{\nu})}&\leq \frac{\varepsilonta_*^\frac{1}{2}}{2^{\nu-1}},\\\label{eq39} |{f_{\nu+1}}-{f_{0}}|_{D(s_\nu,r_{\nu})}&\leq 2\varepsilonta_*^{\frac{1}{2}},\\\label{eq40} \pmb\|X_{P_\nu}\pmb\|^{\lambda_\nu}_{r_\nu,D(s_\nu,r_{\nu})}&\leq\frac{\varepsilonta_*^\frac{1}{2}}{2^\nu},\\\label{eq41} ||\zeta_{\nu+1}-\zeta_{\nu}||_{p}&\leq \frac{\varepsilonta_*^\frac{1}{2}}{2^\nu}. \varepsilonnd{align} \item[(2)] There exists a Lipschitz family of real analytic symplectic coordinate transformations $\Phi_{\nu+1}:{D}_{\nu+1}\times \Pi_{\nu+1}\rightarrow {D}_{\nu}$ and a closed subset $$\Pi_{\nu+1}=\Pi_\nu\setminus\cup_{|k|>K_\nu}\mathcal{R}_{kl}^{\nu+1}(\gamma_{\nu+1})$$ of $\Pi_\nu$, where $$\mathcal{R}_{kl}^{\nu+1}(\gamma_{\nu+1})=\{\xi\in\Pi_\nu:|\langle k,\omega_{\nu+1}\rangle+\langle\varepsilonll,\Omega_{\nu+1}\rangle|<\frac{\gamma_{\nu+1}\langle\varepsilonll\rangle_d}{(1+|k|)^\tau}\},$$ such that on $D_{\nu+1}\times\Pi_{\nu+1}$, \begin{equation*} H_{\nu+1}=H_\nu\circ\Phi_{\nu+1}=N_{\nu+1}+P_{\nu+1}, \varepsilonnd{equation*} \begin{align}\label{Peq36} ||\Phi_{\nu+1}-id||_{{D}_{\nu+1}}\leq\frac{\varepsilonta_*^\frac{1}{2}}{2^\nu}, \varepsilonnd{align} and the same estimates as above are satisfied with $\nu+1$ in place of $\nu$, that is, \begin{align} \label{M+} |\omega_{\nu+1}(\xi)|_{\Pi_{\nu+1}}^\mathcal{L}+\pmb|\Omega_{\nu+1}(\xi)\pmb|_{-\delta,\Pi_{\nu+1}}^\mathcal{L}\leq M_{\nu+1},\varepsilonnd{align} \begin{align}\label{XP+2}\pmb\|X_{P_{\nu+1}}\pmb\|^{\lambda_{\nu+1}}_{D(s_{\nu+1},r_{\nu+1}),\Pi_{\nu+1}}<\gamma_{\nu+1}^{2(2b)^{m+2}}r_{\nu+1}^{m-a}\varepsilonta_{\nu+1}^m,\varepsilonnd{align} and \begin{align}\label{Pi+}|\Pi_{\nu}\setminus\Pi_{\nu+1}|<c\gamma_0\frac{1}{1+K_{\nu-1}}.\varepsilonnd{align} \varepsilonnd{itemize} \varepsilonnd{lemma} \begin{proof} The proof amounts to the verification of $\textbf{(H1)}$-$\textbf{(H5)}$ for all $\nu$.
According to the definition of $r_\nu$ and $\eta_\nu$, we note that \begin{align*} r_\nu=\eta_0^{2m((1+\frac{1}{2m})^\nu-1)+m},~~~\eta_\nu=\eta_0^{(1+\frac{1}{2m})^\nu}. \end{align*} In the following, we prove $\textbf{(H1)}$-$\textbf{(H5)}$. $\textbf{(H1)}$: Since $(1+\frac{1}{2m})^\mu>2$, we have \begin{align*} \frac{\rho_0}{2^{\nu}}([\log\frac{1}{\eta^{m+1}}]+1)^\mu&=\frac{\rho_0}{2^{\nu}}((1+\frac{1}{2m})^\nu\log\frac{1}{\eta_0^{m+1}}+1)^\mu\\ &\geq\frac{\rho_0}{2^{\nu}}2^{\nu}(\log\frac{1}{\eta_0^{m+1}})^\mu\\ &\geq1. \end{align*} It follows from the above that \begin{align*} &3\mu n\log([\log{\frac{1}{\eta^{m+1}}}]+1)-\frac{\rho_0}{2^\nu}([\log\frac{1}{\eta^{m+1}}]+1)^{3\mu}\\ &\leq3\mu n\log([\log{\frac{1}{\eta^{m+1}}}]+1)-(\log\frac{1}{\eta^{m+1}})^{2\mu}\\ &\leq-\log\frac{1}{\eta^{m+1}}, \end{align*} provided $\eta$ is small enough, which is ensured by making $\varepsilon$ small. Thus, \begin{equation*} K_{\nu+1}^{n}{\rm e}^{-K_{\nu+1}\rho_\nu}\leq \eta_\nu^{m+1}, \end{equation*} i.e., $\textbf{(H1)}$ holds. $\textbf{(H2)}$: By a method similar to that of \cite{chow}, we can prove \textbf{(H2)}. It is easy to see that \textbf{(H3)} and \textbf{(H4)} hold by (\ref{<<1}) below. $\textbf{(H5)}$: We recall the quantity $\Delta$ defined in \textbf{(H5)} and estimate it term by term.
In view of the definition of $A_\rho$ in (\ref{Arho}) and $\rho_\nu=\frac{\rho_0}{2^\nu}$, we calculate \begin{align}\label{<<1} \frac{A_\rho\eta_0^{\frac{1}{4}(1+\frac{1}{2m})^\nu}}{\rho_\nu}&=\frac{(2^n\sum_{0<|k|<K_+}|k|^{4\tau+2}{\rm e}^{\frac{-2|k|\rho_0}{2^\nu}})^\frac{1}{2}2^\nu\eta_0^{\frac{1}{4}(1+\frac{1}{2m})^\nu}}{\rho_0}\notag\\ &\leq\frac{(2^n(\frac{2^{\nu-1}}{\rho_0})^{4\tau+3}(4\tau+2)!)^\frac{1}{2}2^\nu\eta_0^{\frac{1}{4}(1+\frac{1}{2m})^\nu}}{\rho_0}\notag\\ &\leq\frac{(2^{n+2}(4\tau+2)!)^\frac{1}{2}(2^{2\tau+\frac{5}{2}})^{\nu-1}\eta_0^{\frac{1}{4}(1+\frac{1}{2m})^{\nu-1}}\eta_0^{\frac{1}{4}(1+\frac{1}{2m})}}{\rho_0^{2\tau+\frac{5}{2}}}\notag\\ &\leq\frac{(2^{n+2}(4\tau+2)!\eta_0^{\frac{1}{2}(1+\frac{1}{2m})})^\frac{1}{2}}{\rho_0^{2\tau+\frac{5}{2}}}(2^{2\tau+\frac{5}{2}})^{\nu-1}\eta_0^{\frac{1}{4}(1+\frac{1}{2m})^{\nu-1}}\notag\\ &\ll1. \end{align} Recalling that $\gamma_0=\varepsilon^{\frac{1}{4(2b)^{m+2}\Xi}}$ and $\eta_0=\gamma_0^{2(2b)^{m+2}}\varepsilon^{\frac{1}{\Xi}}$, we have $$\eta_0<\gamma_0^{2(2b)^{m+2}}.$$ Observe that $\eta_\nu=\eta_0^{(1+\frac{1}{2m})^\nu}$ and $\gamma_\nu=\gamma_0(\frac{1}{2}+\frac{1}{2^\nu})$. Then $$\eta_\nu<\gamma_\nu^{2(2b)^{m+2}}$$ and \begin{align}\label{eta+} \eta_\nu^{1+\frac{1}{4}}<\gamma_{\nu+1}^{2(2b)^{m+2}}.
\end{align} So \begin{align}\label{P+3}c\frac{A_{\rho_\nu}}{\rho_\nu}\eta_\nu^{\frac{3}{2}}\leq\gamma_{\nu+1}^{2(2b)^{m+2}}.\end{align} Obviously, \begin{align}\label{P+2}c\eta_\nu^{\frac{1}{2}}\gamma_\nu^{(2b)^{m+2}}\leq\gamma_{\nu+1}^{2(2b)^{m+2}}.\end{align} In view of the definition of $r_\nu$ and $\eta_\nu$, we have $$r_\nu^{m-2a}\eta_\nu^{-\frac{9}{4}}\leq \eta_0^{(m(m-2a)-\frac{9}{4})(1+\frac{1}{2m})^\nu}<1$$ for $m>4$; this together with (\ref{<<1}) and (\ref{eta+}) yields \begin{align}\label{P+1} \frac{A_{\rho_\nu}^2\eta_\nu^{\frac{1}{2}}}{\rho_\nu^2}r_\nu^{m-2a}\eta_\nu^{-1}\leq r_\nu^{m-2a}\eta_\nu^{-\frac{9}{4}}\eta_\nu^{1+\frac{1}{4}}\leq\gamma_{\nu+1}^{2(2b)^{m+2}}. \end{align} Combining (\ref{P+3}) and (\ref{P+2}) with (\ref{P+1}), (\textbf{H5}) holds. Altogether, the KAM steps described in Section \ref{sec:KAM} are valid for all $\nu$, which gives the desired sequences stated in the lemma. Let $\theta\gg1$ be fixed and $\eta_0$ be small enough so that \begin{equation}\label{eta0eq38} \eta_0<(\frac{1}{\theta})^{2m}<1. \end{equation} Then \begin{align} \eta_1&=\eta_0^{1+\frac{1}{2m}}<\frac{1}{\theta}\eta_0<1,\notag\\ \eta_2&=\eta_1^{1+\frac{1}{2m}}<\frac{1}{\theta}\eta_1<\frac{1}{\theta^2}\eta_0,\notag\\ \vdots\notag\\\label{etanueq39} \eta_\nu&=\eta_{\nu-1}^{1+\frac{1}{2m}}<\cdots<\frac{1}{\theta^\nu}\eta_0. \end{align} Let $\theta\geq2$ in (\ref{etanueq39}). We have that for all $\nu\geq1$ \begin{align}\label{cetanu} c_0\eta_\nu&\leq\frac{\eta_0}{2^\nu}\leq\frac{\eta_*^\frac{1}{2}}{2^\nu}.
\end{align} Now, (\ref{eq30}), (\ref{eq36}) and (\ref{eq38}) follow from (\ref{e+}), (\ref{g+}), (\ref{f+}), $\zeta_+\in B_{(r_-^{m-1}\eta_-^m)^{\frac{1}{L}}}(\zeta)$ in Lemma \ref{le3}, and (\ref{cetanu}); by adding up (\ref{eq30}), (\ref{eq36}) and (\ref{eq38}) for all $\nu=0,1,\cdots$, we obtain (\ref{eq31}), (\ref{eq37}) and (\ref{eq39}), respectively; (\ref{eq40}) follows from (\ref{XP+}) in Lemma \ref{le8} and (\ref{cetanu}); (\ref{eq41}) follows from $\zeta_+\in B_{(r_-^{m-1}\eta_-^m)^{\frac{1}{L}}}(\zeta)$ in Lemma \ref{le3} and (\ref{cetanu}); (\ref{eq32}) and (\ref{eq34}) follow from (\ref{omega+}), (\ref{Omega+}), (\ref{XR}) and (\ref{cetanu}); by adding up (\ref{eq32}) and (\ref{eq34}) for all $\nu=0,1,\cdots$, we obtain (\ref{eq33}) and (\ref{eq35}). $(2)$ follows from Lemma \ref{le7}. Since $$\Pi_{\nu+1}=\Pi_\nu\setminus\cup_{K_{\nu-1}<|k|\leq K_\nu,|\ell|\leq2}\mathcal{R}_{kl}^{\nu+1}(\gamma_{\nu+1}),$$ we have \begin{align} |\Pi_{\nu}\setminus\Pi_{\nu+1}|\leq\sum_{K_{\nu-1}<|k|\leq K_{\nu},~ |\ell|\leq2}|\mathcal{R}_{kl}^{\nu+1}(\gamma_{\nu+1})|\leq\sum_{K_{\nu-1}<|k|\leq K_{\nu},~ |\ell|\leq2}\frac{\gamma_0}{(|k|+1)^\tau}<c\gamma_0\frac{1}{1+K_{\nu-1}}. \end{align} The detailed proof can be found in \cite{wu}. In view of (\ref{omega+}) and (\ref{Omega+}), the Lipschitz semi-norm of the new frequencies can be bounded as follows: \begin{align*} |\omega_{\nu+1}(\xi)|_{\Pi_{\nu+1}}^\mathcal{L}+\pmb|\Omega_{\nu+1}(\xi)\pmb|_{-\delta,\Pi_{\nu+1}}^\mathcal{L}\leq M_{\nu}+c\pmb\|X_P\pmb\|_r^{\mathcal{L}} \leq M_{\nu+1}. \end{align*} The proof is complete. \end{proof} \subsection{Convergence} The convergence argument is standard; see \cite{poschel2}. For the sake of completeness, we briefly outline the proof. Let \begin{align*} \Psi^\nu=\Phi_1\circ\Phi_2\circ\cdots\circ\Phi_\nu,~~~~\nu=1,2,\cdots.
\end{align*} By Lemma \ref{le9}, we have \begin{align*} D_{\nu+1}&\subset D_\nu,\\ \Psi^\nu&:{D}_\nu\rightarrow {D}_0,\\ H_0\circ\Psi^\nu&=H_\nu=N_\nu+P_\nu,~~~\nu=0,1,\cdots, \end{align*} where $\Psi^0=id$. To prove the convergence of the sequence $\Psi^\nu$, we note that the operator norm $\pmb\|\cdot\pmb\|_{r,s}$ satisfies \begin{align*} \pmb\|AB\pmb\|_{r,s}\leq \pmb\|A\pmb\|_{r,r}\pmb\|B\pmb\|_{s,s}, ~~~r\geq s. \end{align*} By the mean value theorem, we thus obtain \begin{align}\label{psinu+1} \pmb\|\Psi^{\nu+1}-\Psi^\nu\pmb\|_{r_0,D_{\nu+1}}\leq\pmb\|D\Psi^\nu\pmb\|_{r_0,r_\nu,D_\nu}\pmb\|\Phi_{\nu+1}-id\pmb\|_{r_\nu,D_{\nu+1}}. \end{align} In view of the chain rule $D\Psi^\nu=D\Phi_1\circ\cdots\circ D\Phi_\nu$ and (\ref{Dphi-I}), we have \begin{align}\label{dpsi} \pmb\|D\Psi^\nu\pmb\|_{r_0,r_\nu,D_\nu}\leq\prod_{i=0}^{\nu}\pmb\|D\Phi_i\pmb\|_{r_i,r_i,D_i}\leq\prod_{i\geq0}(1+\frac{A_\rho}{\rho_i}r_i^{m-2a}\eta_i^{m-a})\leq2, \end{align} for all $\nu\geq0$. Using (\ref{eq36}), (\ref{psinu+1}), (\ref{dpsi}) and the identity \begin{align*} \Psi^\nu=id+\sum_{i=1}^\nu(\Psi^i-\Psi^{i-1}), \end{align*} it is easy to verify that $\Psi^\nu$ converges uniformly; denote its limit by $\Psi^*$. By Lemma \ref{le9}, we see that $e_\nu$, $\omega_\nu$, $\Omega_\nu$, $g_\nu$, $f_\nu$ and $\zeta_\nu$ converge uniformly; denote their limits by $e_*$, $\omega_*$, $\Omega_*$, $g_*$, $f_*$ and $\zeta_*$, respectively.
It follows from Lemma \ref{le3} that $$\nabla g_*(0)=\cdots=\nabla g_\nu(0)=\cdots=\nabla g_0(0)=0.$$ Then $N_\nu$ converges uniformly to \begin{align*} N_*=e_*(\xi)+\langle\omega_*(\xi), y\rangle+\langle w, \Omega_*(\xi)\bar{w}\rangle+g_*(z,\xi)+f_*(y, z, w, \bar w), \end{align*} with \begin{align*} g_*(z,\xi)&=g(z,\xi)+\varepsilon^{\frac{3m}{32\mu(m+1)(m-a)(\tau+1)}}O(||z||_{\textsf{a},\textsf{p}}^2),\\ f_*(y,z,w,\bar w,\xi)&=\sum_{4\leq2|\imath|\leq m}f_{\imath000} y^{\imath}+\sum_{2|\imath|+|\jmath|\leq m,1\leq|\imath|,|\jmath|}f_{\imath\jmath00} y^{\imath} z^{\jmath}+\sum_{0<2|\imath|+|\jmath|\leq m}f_{\imath\jmath11} y^{\imath} z^{\jmath} w\bar w. \end{align*} Hence \begin{align*} P_\nu=H_0\circ\Psi^\nu-N_\nu \end{align*} converges uniformly to \begin{equation*} P_*=H_0\circ\Psi^*-N_*. \end{equation*} On the embedded tori, the flow of the perturbed Hamiltonian $H$ can be computed as follows. Note that \begin{align*} \pmb\|X_H\circ\Psi^\nu-D\Psi^\nu\cdot X_{N_\nu}\pmb\|\leq\pmb\|D\Psi^\nu\pmb\|_{r_0,r_\nu,D_\nu}\pmb\|(\Psi^\nu)^*X_H-X_{N_\nu}\pmb\|_{r_\nu,D_\nu}\leq c\pmb\|X_{P_\nu}\pmb\|_{r_\nu,D_\nu}, \end{align*} whence in the limit, $X_H\circ\Psi^*=D\Psi^*\cdot X_{N_*}$. Following Wu and Yuan \cite{wu}, the measure estimate can be completed; we omit the details here. \section{Example}\label{sec:example} In this section, we introduce an example to demonstrate the existence of quasi-periodic solutions via Theorem~1. Consider the following Hamiltonian lattice: \begin{eqnarray}\label{HL} H&=&\sum_{j\in{{\mathbb Z}^+}\setminus \Lambda}\frac{\alpha^2_j}{2}q^2_{j}+\frac{1}{2}p^2_j +\sum_{j\in\Lambda}\beta_{j}W(q_j,p_j)+\varepsilon \sum_{j\in{\mathbb Z}^{+}} V(q_{j+1}-q_j), \end{eqnarray} where $q_j,~p_j\in {\mathbb R}$. For fixed positive integers $n_1$ and $n_2$, the set $\Lambda$ is defined as $\Lambda:=\{j|~n_1< j\le n_2,~j\in{\mathbb Z}^+\}$.
Moreover, let $\alpha:=(\alpha_1,\cdots,\alpha_{n_1})^{\top}$ be the vector-parameter varying in a certain closed region ${\cal O}\subset{\mathbb R}^{n_1}$, and let $\beta_{j}$, $j\in\Lambda$, and $\alpha_j$, $j>n_2$, be fixed constants. The functions are defined as \begin{eqnarray*} && W(q_j,p_j):=q_j^4-6q_j^2p_j^2+p^4_j, \quad j\in\Lambda,\\ && V(q_{j+1}-q_j):=\frac{1}{\alpha+1}(q_{j+1}-q_j)^{\alpha+1},\quad j\in{\mathbb Z}^+, \end{eqnarray*} for fixed $\alpha>0$. When $\Lambda$ is empty, Hamiltonian (\ref{HL}) can be seen as the energy function of the following Newton's cradle lattice with a small Hertzian interaction: \begin{eqnarray}\label{newtoncradle} \ddot{q}_j+\alpha^2_jq_j=\varepsilon{\rm D}V(q_{j+1}-q_j)-\varepsilon{\rm D}V(q_j-q_{j-1}). \end{eqnarray} The Newton's cradle lattice is known as a simplified model for granular chains consisting of linear pendula with nonlinear interactions in the form of Hertz forces. The existence of (quasi-)periodic breathers and the corresponding stability were studied in \cite{geng,GJ,GJ2}. The methods used in these works include both numerical simulations and theoretical proofs, such as KAM and Nash--Moser iterations. We mention that, from the viewpoint of infinite-dimensional Hamiltonian normal forms, the Hamiltonian lattice corresponding to equation (\ref{newtoncradle}) is non-degenerate in the normal direction. In contrast, Hamiltonian (\ref{HL}) allows degeneracy in the direction of $(q_j,p_j)$ for $j\in\Lambda$. Introduce the standard action-angle variables $(x,y)$, where $x\in{\mathbb T}^{n_1}$,~$y\in{\mathbb R}^{n_1}$, and the normal variables $u,\bar{u}\in{l}^{\textsf{a},\textsf{p}}$. We also denote as above $u=(w_0,w)^{\top}$ and $\bar{u}=(\bar{w}_0,\bar{w})$, where $w_0,~\bar{w}_0\in{\mathbb R}^{n_2-n_1}$ and $w,\bar{w}$ are infinite-dimensional vectors.
The transformations are as follows: \begin{eqnarray*} &&q_j:=\sqrt{\frac{2}{\alpha_j}}\sqrt{y_j}\cos\theta_j,\quad\quad\quad p_{j}:=\sqrt{2\alpha_j}\sqrt{y_j}\sin\theta_j,\quad\quad\quad 1\le j\le n_1,\\ &&q_{j}:=\frac{w_{0,j-n_1}+\bar{w}_{0,j-n_1}}{2},\quad\quad~~~ p_{j}:=\frac{\bar{w}_{0,j-n_1}-w_{0,j-n_1}}{2\sqrt{-1}},\quad\quad~~~ n_1+1\le j\le n_2,\\ &&q_{j}:=\frac{1}{\sqrt{2\alpha_j}}(w_{j-n_1}+\bar{w}_{j-n_1}),\quad p_{j}:=\sqrt{\frac{\alpha_j}{-2}}(\bar{w}_{j-n_1}-w_{j-n_1}),\quad n_2+1\le j. \end{eqnarray*} Then Hamiltonian (\ref{HL}) can be reduced to the following form: \begin{eqnarray}\label{reducedHL} H(x,y,u,\bar{u},\alpha):=\langle\omega(\alpha),y\rangle+\langle w,\Omega\bar{w}\rangle+g(w_0,\bar{w}_0)+\varepsilon P(x,y,u,\bar{u},\alpha), \end{eqnarray} where \begin{align*} &\omega(\alpha):=\alpha,\\ &\Omega:={\rm diag}\{\alpha_{n_2+1},\alpha_{n_2+2},\cdots,\alpha_{n_2+j},\cdots\},\\ &g(w_0,\bar w_0):=\sum_{j=n_1+1}^{n_2}\beta_{j}(\frac{|w_{0,j-n_1}|^4}{2}+\frac{|\bar{w}_{0,j-n_1}|^4}{2}), \end{align*} and the perturbation reads \begin{eqnarray*} P(x,y,u,\bar{u},\alpha)&:=&\frac{y_1}{\alpha_1}\cos\theta_1+\sum^{n_1}_{j=2}\frac{2y_j}{\alpha_j}\cos\theta_j -\sqrt{\frac{y_{n_1}}{2\alpha_{n_1}}}\cos\theta_{n_1}(w_{0,1}+\bar{w}_{0,1})\\ &~&-\sum_{j=1}^{n_1-1}2\sqrt{\frac{y_{j+1}y_j}{\alpha_{j+1}\alpha_{j}}}\cos\theta_{j+1}\cos\theta_{j}+O(|(u,\bar{u})|^2), \end{eqnarray*} by choosing $\alpha=1$ in (\ref{HL}). It is obvious that $g(w_0,\bar{w}_0)$ is of the same form as that mentioned in Remark~\ref{remark1} with $p=q=2$, so that it satisfies assumption (\textbf{A0}).
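The first line of the transformation above can be verified directly: it carries the oscillator energy $\frac{1}{2}(\alpha_j^2q_j^2+p_j^2)$ into $\alpha_j y_j$, which is the origin of the $\langle\omega(\alpha),y\rangle$ term in (\ref{reducedHL}). A minimal numerical check (the sample values of $y$, $\theta$, $\alpha$ are arbitrary):

```python
import math

def to_qp(y, theta, alpha):
    """Action-angle substitution q = sqrt(2/alpha) sqrt(y) cos(theta),
    p = sqrt(2 alpha) sqrt(y) sin(theta)."""
    q = math.sqrt(2.0 / alpha) * math.sqrt(y) * math.cos(theta)
    p = math.sqrt(2.0 * alpha) * math.sqrt(y) * math.sin(theta)
    return q, p

def oscillator_energy(q, p, alpha):
    """Energy of a single linear oscillator in (HL)."""
    return 0.5 * (alpha**2 * q**2 + p**2)
```

Since $\cos^2\theta+\sin^2\theta=1$, the energy equals $\alpha y$ for every angle, independent of $\theta$.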
Moreover, choosing a fixed point $y_*=(y_{*1},~\cdots,~y_{*n_1})^{\top}$ with $y_{*j}\ne 0$, the perturbation $P$ is real analytic with respect to $(x,y,u,\bar{u})$ in the complex neighborhood $$ D(s,r)=\{(x,y,u,\bar{u}):~|{\rm Im}~x|<s,~\|y-y_*\|<r^2,~\|(w_0,\bar{w}_0)\|<r,~\|(w,\bar{w})\|<r^a\}, $$ where $0<s,r\ll1$ and $a\ge2$ is defined as in (\ref{a}). It is obvious that (\textbf{A3}) is satisfied. Then we obtain the following result. \begin{corollary} Consider the Hamiltonian lattice (\ref{HL}), as well as the reduced system (\ref{reducedHL}), and assume that the tangent and normal frequencies $\omega(\alpha)$ and $\Omega$ satisfy assumptions \textbf{\textsc{(A1)}}-\textbf{\textsc{(A2)}}. It follows from Theorem 1 that the Hamiltonian lattice (\ref{HL}) admits a family of real analytic embedded $n_1$-dimensional tori for the majority of $\alpha\in{\cal O}$. \end{corollary} \begin{remark} We mention that, as a demonstration of Theorem 1, we simply chose $\alpha=1$. However, $\alpha$ need not be an integer in certain applications, in which case the perturbation is not real analytic with respect to $(u,~\bar{u})$ near the origin, so the KAM iteration in the present paper cannot be applied directly. Hence, we hope to generalize the result to systems which not only allow degeneracy in the normal direction but are also merely $C^{1+\alpha}$ with respect to the normal variables. \end{remark} \section{Appendix A. \textbf{Proof of Proposition \ref{example}}}\label{pro} \begin{proof} Obviously, for all $z\in (-2,2)\times(-2,2)$, $$\nabla g(-z)=-\nabla g(z),~~~~\nabla g(0)=0,$$ and for all $z\in \partial((-2,2)\times(-2,2))$, $$\nabla g(z)\neq0.$$ Using Borsuk's theorem, we have $$\deg(\nabla g(z),(-2,2)\times(-2,2),0)\neq0,$$ i.e., the topological degree condition in \textbf{(A0)} holds.
For all $z, z_*\in[-1,1]$ with $z\neq z_*$, we have $$\nabla g(z)-\nabla g(z_*)=0,$$ but $$|z-z_*|^L>0,~~~\forall L\geq2,$$ which shows that the weak convexity condition in \textbf{(A0)} fails. Note that the perturbed equation of motion in the direction of $w_0$ is $$\dot{w_0}=\bar w_0+\varepsilon^\ell\sin\frac{1}{\varepsilon}.$$ In order to ensure the existence of low-dimensional invariant tori, we need to solve the equation $$\bar w_0+\varepsilon^\ell\sin\frac{1}{\varepsilon}=0,$$ which implies that $\bar w_0$ is discontinuous and alternately appears on $(-2,-1)$ and $(1,2)$ as $\varepsilon\rightarrow0_+$. Thus, this example shows that the weak convexity condition in \textbf{(A0)} is necessary. \end{proof} \section*{Acknowledgments} The second author is supported by the National Natural Science Foundation of China (grant No.~12271204) and the Project of Science and Technology Development of Jilin Province, China (grant No.~20200201265JC). The third author is supported by the National Basic Research Program of China (grant No.~2013CB834100), the National Natural Science Foundation of China (grant Nos.~11571065, 11171132, 12071175), the Project of Science and Technology Development of Jilin Province, China (grant Nos.~2017C0281, 20190201302JC), and the Natural Science Foundation of Jilin Province (grant No.~20200201253JC). \begin{thebibliography}{11} \bibitem{bourgain1} J. Bourgain, Construction of quasi-periodic solutions for Hamiltonian perturbations of linear equations and applications to nonlinear PDE, Internat. Math. Res. Notices 11 (1994), 475-497. \bibitem{bourgain2} J. Bourgain, On Melnikov's persistency problem, Math. Res. Lett. 4 (1997), 445-458.
\bibitem{bourgain3} J. Bourgain, Quasi-periodic solutions of Hamiltonian perturbations of $2$D linear Schr\"odinger equations, Ann. of Math. (2) 148 (1998), 363-439. \bibitem{chow} S.N. Chow, Y. Li, Y.F. Yi, Persistence of invariant tori on submanifolds in Hamiltonian systems, J. Nonlinear Sci. 12 (2002), 585-617. \bibitem{du} J.Y. Du, Y. Li, H.K. Zhang, Kolmogorov's theorem for degenerate Hamiltonian systems with continuous parameters, \href{https://arxiv.org/abs/2206.05461}{arXiv:2206.05461}. \bibitem{du23} J.Y. Du, Y. Li, Melnikov's persistence in degeneracy of high co-dimension, \href{https://arxiv.org/abs/2301.00206}{arXiv:2301.00206}. \bibitem{geng} J.S. Geng, C.F. Ge, Y.F. Yi, Quasi-periodic breathers in Newton's cradle, J. Math. Phys. 63 (2022), 082703. \bibitem{GJ} G. James, Nonlinear waves in Newton's cradle and the discrete p-Schr\"odinger equation, Math. Models Methods Appl. Sci. 21 (2011), 2335-2377. \bibitem{GJ2} G. James, Periodic travelling waves and compactons in granular chains, J. Nonlinear Sci. 22 (2012), 813-848.
\bibitem{han} Y.C. Han, Y. Li, Y.F. Yi, Degenerate lower-dimensional tori in Hamiltonian systems, J. Differential Equations 227 (2006), 670-691. \bibitem{kuksin1} S.B. Kuksin, Hamiltonian perturbations of infinite dimensional linear systems with an imaginary spectrum, Funktsional. Anal. i Prilozhen. 21 (1987), 22-37. \bibitem{li} Y. Li, Y.F. Yi, Persistence of lower dimensional tori of general types in Hamiltonian systems, Trans. Amer. Math. Soc. 357 (2005), 1565-1600. \bibitem{poschel1} J. P\"{o}schel, On elliptic lower-dimensional tori in Hamiltonian systems, Math. Z. 202 (1989), 559-608. \bibitem{poschel2} J. P\"{o}schel, A KAM-theorem for some nonlinear partial differential equations, Ann. Scuola Norm. Sup. Pisa Cl. Sci. 23 (1996), 119-148. \bibitem{tong} Z.C. Tong, J.Y. Du, Y. Li, KAM theorem on modulus of continuity about parameter, Sci. China Math. (Accepted), \href{https://arxiv.org/abs/2210.04383}{arXiv:2210.04383}. \bibitem{wayne} C.E. Wayne, Periodic and quasi-periodic solutions of nonlinear wave equations via KAM theory, Comm. Math. Phys. 127 (1990), 479-528.
\bibitem{wu} Y. Wu, X.P. Yuan, A KAM theorem for the Hamiltonian with finite zero normal frequencies and its applications (in memory of Professor Walter Craig), J. Dynam. Differential Equations 33 (2021), 1427-1474. \bibitem{xly} L. Xu, Y. Li, Y.F. Yi, Lower-dimensional tori in multi-scale, nearly integrable Hamiltonian systems, Ann. Henri Poincar\'{e} 18 (2017), 53-83. \bibitem{xu2010} J.X. Xu, J.G. You, Persistence of the non-twist torus in nearly integrable Hamiltonian systems, Proc. Amer. Math. Soc. 138 (2010), 2385-2395. \end{thebibliography} \end{document}
\begin{document} \title{Robust optical readout and characterization of nuclear spin transitions in nitrogen-vacancy ensembles in diamond} \author{A.~Jarmola} \email{[email protected]} \affiliation{ Department of Physics, University of California, Berkeley, California 94720, USA } \affiliation{ U.S. Army Research Laboratory, Adelphi, Maryland 20783, USA } \author{I.~Fescenko} \affiliation{ Center for High Technology Materials and Department of Physics and Astronomy, University of New Mexico, Albuquerque, New Mexico 87106, USA } \author{V.~M.~Acosta} \affiliation{ Center for High Technology Materials and Department of Physics and Astronomy, University of New Mexico, Albuquerque, New Mexico 87106, USA } \author{M.~W.~Doherty} \affiliation{ Laser Physics Centre, Research School of Physics, Australian National University, Canberra 2601, Australia } \author{F.~K.~Fatemi} \affiliation{ U.S. Army Research Laboratory, Adelphi, Maryland 20783, USA } \author{T.~Ivanov} \affiliation{ U.S. Army Research Laboratory, Adelphi, Maryland 20783, USA } \author{D.~Budker} \affiliation{ Department of Physics, University of California, Berkeley, California 94720, USA } \affiliation{Helmholtz Institut Mainz, Johannes Gutenberg University, 55128 Mainz, Germany } \author{V.~S.~Malinovsky} \affiliation{ U.S. Army Research Laboratory, Adelphi, Maryland 20783, USA } \date{\today} \begin{abstract} Nuclear spin ensembles in diamond are promising candidates for quantum sensing applications, including rotation sensing. Here we perform a characterization of the optically detected nuclear-spin transitions associated with the $^{14}$N nuclear spin within diamond nitrogen vacancy (NV) centers. We observe nuclear-spin-dependent fluorescence with the contrast of optically detected $^{14}$N nuclear Rabi oscillations comparable to that of the NV electron spin. 
Using Ramsey spectroscopy, we investigate the temperature and magnetic-field dependence of the nuclear spin transitions in the 77.5$-$420\,K and 350$-$675\,G ranges, respectively. The nuclear quadrupole coupling constant $Q$ was found to vary with temperature $T$, yielding $d|Q|/dT=-35.0(2)$\,Hz/K at $T=297$\,K. The temperature and magnetic field dependencies reported here are important for quantum sensing applications such as rotation sensing and potentially for applications in quantum information processing. \end{abstract} \maketitle Quantum sensors based on nitrogen-vacancy (NV) spin qubits in diamond are used in a number of sensing modalities, including magnetometry, electrometry, and thermometry~\cite{DOH2013, RON2014,DEG2017, BAR2019SEN}. Typically, the qubit used for sensing applications is formed from the NV electron spin levels due to their high sensitivity to environmental perturbations. However, nuclear spins can be more suitable for applications where sensitivity to magnetic noise and temperature variations is undesirable, such as rotation sensing~\cite{LED2012,AJO2012,MAC2012}. Of particular interest are the nitrogen nuclear spins intrinsic to NV centers. These spins can be efficiently optically polarized and read out via the NV electron spins. Consider the example of using the intrinsic $^{14}$N nuclear spins of an ensemble of NV centers for rotation sensing. The $^{14}$N nuclear spins are prepared in a superposition state and precess about their quantization axis with nuclear precession rate $\omega_0$. If the diamond rotates about this axis with a rate $\omega$, the nuclear precession rate in the diamond reference frame is $\omega_0-\omega$.
The minimum detectable change in $\omega$ is given by: \begin{equation} \begin{aligned} \label{eq:Sensitivity} \delta\omega\approx\frac{1}{C\sqrt{\eta NT^{*}_{2}\tau}}, \end{aligned} \end{equation} where $C$ is the fractional contrast of spin-state-dependent fluorescence, $\eta$ is the photon-collection efficiency, $N$ is the number of interrogated spins, $T^{*}_{2}$ is the spin-coherence time, and $\tau$ is the total integration time. From Eq.~\eqref{eq:Sensitivity}, it can be seen that intrinsic $^{14}$N nuclear spins offer an advantage over electron spins owing to their 10$^3$-fold longer coherence time~\cite{JAS2019} at the same number density. Nuclear spins also have the advantage of having a $10^3\mbox{--}10^4$ times smaller gyromagnetic ratio than electron spins, which minimizes frequency shifts due to fluctuations in magnetic field. A remaining challenge is to realize a high spin readout contrast, $C$, without introducing additional sources of technical noise. One avenue that has been explored is to use conditional microwave pulses to map nuclear spin states onto NV electron-spin states~\cite{SME2009, STE2010PRB,NEU2010SCIENCE}. This approach was shown~\cite{JAS2019} to achieve a readout contrast $C\gtrsim10^{-2}$, approaching the contrast realized with NV electron spin ensembles~\cite{BAR2019SEN}. However, environmental influences, such as magnetic field and temperature variations, affect the electron-spin transition frequency, which limits the robustness of this technique~\cite{JAS2019}. Optical readout of the nuclear spin state can also be accomplished directly without the use of microwave mapping pulses in the vicinity of the excited-state level anticrossing (ESLAC)~\cite{SME2009, STE2010PRB, JAS2019}. The advantage of this technique is that it directly provides information about the nuclear spin states without precise knowledge of the electron spin transition frequencies. 
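Equation~\eqref{eq:Sensitivity} can be evaluated directly. The sketch below compares nuclear and electron spins at equal $C$, $\eta$, $N$, and $\tau$: the $10^3$-fold longer coherence time translates into a $\sqrt{10^3}\approx32$-fold sensitivity gain. All parameter values are illustrative assumptions, not measurements from this work.

```python
import math

def min_detectable_rotation(C, eta, N, T2_star, tau):
    """Shot-noise-limited rotation sensitivity of Eq. (1):
    delta_omega = 1 / (C * sqrt(eta * N * T2* * tau))."""
    return 1.0 / (C * math.sqrt(eta * N * T2_star * tau))

# illustrative numbers: C = 2%, 1% collection, 10^12 spins, 1 s integration
d_nuclear  = min_detectable_rotation(0.02, 0.01, 1e12, 0.5e-3, 1.0)
d_electron = min_detectable_rotation(0.02, 0.01, 1e12, 0.5e-6, 1.0)
```

The ratio `d_electron / d_nuclear` equals $\sqrt{T^{*}_{2,\mathrm{n}}/T^{*}_{2,\mathrm{e}}}$, showing how the advantage enters only through the square root of the coherence time.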
While this technique was previously demonstrated, its readout contrast has not been systematically analyzed. In this work, we characterize the optical readout mechanism of $^{14}$N nuclear spin ensembles. We find that the contrast of nuclear spin Rabi oscillations exceeds 2\,\% in a broad range of magnetic fields, from approximately 450 to 550\,G. Using Ramsey spectroscopy, we investigate the temperature and magnetic-field dependence of the nuclear spin transitions. At 297\,K, we find that the temperature dependence of the nuclear quadrupole coupling constant is $d|Q|/dT=-35.0(2)$\,Hz/K, which is about 2000 times smaller than the temperature dependence of the NV electron-spin zero-field splitting $D$~\cite{ACO2010}. Our results hold promise for quantum sensing applications requiring minimal magnetic field and temperature dependence, including gyroscopes and clocks~\cite{HOD2013}. \begin{figure} \caption{\label{fig:NVlevels}} \end{figure} A schematic of the relevant energy levels and transitions in diamond NV centers is presented in Fig.~\ref{fig:NVlevels}(a). Application of light with wavelength shorter than that of the zero-phonon line of the $^{3}A_2\rightarrow{}^{3}E$ transition (at 637\,nm) induces optical polarization of the NV centers into the $m_S=0$ sublevel of the ground electronic state~\cite{MAN2006}. If a magnetic field $\textbf{B}$ is applied along the axis of the NV center, the $m_S=\pm1$ sublevels of the ground and excited states experience a Zeeman shift. At $B\approx500$\,G the $m_S=0$ and $m_S=-1$ sublevels in the excited state become nearly degenerate. This condition is referred to as the excited-state level anticrossing (ESLAC). A peculiar feature of the ESLAC is that in its vicinity, electron polarization is effectively transferred to the nuclei, so that nearly complete $^{14}$N polarization can be achieved in a relatively wide range of magnetic fields~\cite{JAC2009,FIS2013PRB}.
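The $B\approx500$\,G figure follows from equating the linear Zeeman shift of the excited-state $m_S=-1$ sublevel with the excited-state zero-field splitting. A back-of-the-envelope sketch; the numerical constants are standard literature values for the NV center, not taken from this paper:

```python
D_ES_MHZ = 1420.0         # excited-state zero-field splitting, ~1.42 GHz (literature value)
GAMMA_E_MHZ_PER_G = 2.80  # NV electron gyromagnetic ratio, ~2.8 MHz/G (literature value)

def eslac_field_gauss():
    """Field at which the excited-state m_S = -1 level, shifted down by
    gamma_e * B, meets the m_S = 0 level: gamma_e * B = D_es."""
    return D_ES_MHZ / GAMMA_E_MHZ_PER_G
```

The estimate lands near 507\,G, consistent with the approximate value quoted in the text.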
A transfer of electron spin polarization to nuclei is also observed at the ground-state level anticrossing in the vicinity of 1024\,G, but this is not discussed in this work (see Ref.~\cite{AUZ2019} and references therein). Nuclear spin polarization at the ESLAC mediated by NV centers in diamond has been described, for example, in Refs.~\cite{JAC2009,SME2009,FIS2013PRB,STE2010PRB} and is only briefly summarized here. Near the ESLAC, strong hyperfine coupling in the excited state allows energy-conserving electron-nuclear-spin flip-flop processes to occur between coupled electron-nuclear spin states, denoted $|m_{S},m_{I}\rangle$. Specifically, such processes can lead to flip-flops between the $|0,-1\rangle$ and $|-1,0\rangle$ states, as well as between the $|0,0\rangle$ and $|-1,+1\rangle$ states, Fig.~\ref{fig:NVlevels}(b,c). Under optical illumination near the ESLAC the system is polarized into the $|0,+1\rangle$ spin state, Fig.~\ref{fig:NVlevels}(d). The mechanism responsible for nuclear spin polarization leads to nuclear-spin-dependent fluorescence and provides the means for direct optical readout of the nuclear spin. The polarized $|0,+1\rangle$ spin state produces maximum fluorescence, because it is not affected by mixing in the excited state and does not pass through the dark singlet states as often. As depicted in Figs.~\ref{fig:NVlevels}(b,c) respectively, the $|0,-1\rangle$ and $|0,0\rangle$ states undergo an electron-nuclear-spin flip-flop process in the excited state, which changes their electron spin projection to $m_{S}=-1$ and causes them to pass through the dark singlet states, reducing their fluorescence. The degree of mixing in the excited state, and therefore the fluorescence rate, is different for the $|0,0\rangle$ and $|0,-1\rangle$ states and depends on the applied magnetic field~\cite{SUP2019}.
\begin{figure*} \caption{\label{fig:ODNMR}} \end{figure*} We used a custom-built confocal-microscopy setup to measure optically detected nuclear magnetic resonance (ODNMR) in an ensemble of NV centers. The sample used in our experiments is a [100]-cut high-pressure high-temperature grown diamond with an initial nitrogen concentration of $\sim50$\,ppm. NV centers were created by irradiating the sample with 10\,MeV electrons at a dose of $\sim10^{18}$\,cm$^{-2}$ and subsequent annealing in vacuum at 800\,$^\circ$C for three hours. The diamond sample was mounted inside a continuous-flow microscopy cryostat. Pulses of 532\,nm laser light (20\,mW, 20\,$\upmu$s duration) were focused on the diamond using a microscope objective with 0.6 numerical aperture. Fluorescence was collected through the same objective, passed through a 650-800\,nm bandpass filter, and detected with a fiber-coupled Si avalanche photodiode. Radio-frequency and microwave magnetic fields were delivered using a 100\,$\upmu$m diameter copper wire placed on the diamond surface next to the optical focus. A static magnetic field $B$ was applied along one of the NV axes using a neodymium permanent magnet. To perform ODNMR spectroscopy we applied the pulse sequence illustrated in Fig.~\ref{fig:ODNMR}(a). A radio-frequency pulse with a typical duration of 200\,$\upmu$s was applied between the optical pump and probe pulses, and the fluorescence response of the system was recorded as a function of the radio frequency. Figure~\ref{fig:ODNMR}(b) shows an example of a $^{14}$N ODNMR spectrum recorded at a magnetic field $B=503$\,G. Two resonances were observed at frequencies $f_1$ and $f_2$ that correspond to the $|0,+1\rangle \rightarrow |0,0\rangle$ and $|0,-1\rangle \rightarrow |0,0\rangle$ transitions, respectively. The amplitudes of the resonances indicate a strong nuclear polarization of the system into the $|0,+1\rangle$ state.
We used the pulse sequence illustrated in Fig.~\ref{fig:ODNMR}(c) to resonantly drive the nuclear-spin transition with a radio-frequency pulse of varying duration $\tau_{RF}$. Figure~\ref{fig:ODNMR}(d) shows an example of optically detected nuclear Rabi oscillations between the $|0,+1\rangle$ and $|0,0\rangle$ states, where $C$ is the contrast of the Rabi oscillations. We measured the fluorescence responses of the NV centers selectively initialized in the three nuclear-spin states $|0,+1\rangle$, $|0,0\rangle$, and $|0,-1\rangle$ at 503\,G, Fig.~\ref{fig:ODNMR}(e). The $|0,+1\rangle$ state was always initialized first by optical pumping; the $|0,0\rangle$ state was prepared by transferring the population from the $|0,+1\rangle$ state with a radio-frequency $\pi$ pulse resonant with the $f_1$ transition; and the $|0,-1\rangle$ state was prepared by sequentially applying two $\pi$ pulses at the radio frequencies $f_1$ and $f_2$, transferring the population from the optically pumped $|0,+1\rangle$ state. To gain further insight into the optical detection of nuclear spin states, we performed a detailed study of the relative fluorescence responses for the $|0,+1\rangle$ and $|0,0\rangle$ states ($f_1$ transition) as a function of the applied magnetic field strength near the ESLAC. Figure~\ref{fig:ODNMR}(f) shows $C$ for the $f_1$ transition as a function of the magnetic field applied along the NV axis. The contrast of the nuclear Rabi oscillations exceeds 2\,\% from approximately 450 to 550\,G, reaching its maximum value of $\sim3.8$\,\% at $\sim 485$\,G. \begin{figure} \caption{\label{fig:BDep}} \end{figure} \begin{figure} \caption{\label{fig:TDep}} \end{figure} The nuclear-spin transition frequencies $f_{1}$ and $f_{2}$ were measured as a function of magnetic field and temperature using a Ramsey interferometry technique.
Figures~\ref{fig:BDep}(a,b) depict the pulse timing diagrams for measuring $f_{1}$ and $f_{2}$, respectively. The $^{14}$N NV nuclear spins are prepared either in the $|0,+1\rangle$ state (to measure $f_{1}$) or in the $|0,0\rangle$ state (for $f_{2}$). Subsequently, a $\pi/2 - \tau - \pi/2$ pulse sequence is applied with the radio frequency tuned near the expected nuclear-spin transition. The fluorescence is recorded as a function of $\tau$, and the resulting Ramsey interference fringes, Fig.~\ref{fig:BDep}(c), are fit to an exponentially decaying sinusoidal function to reveal the detuning $\delta$ of the transition frequency $f$ with respect to the applied radio frequency $RF$. From the fit to the Ramsey data, which includes an exponential decay $e^{-\tau/T_{2}^{*}}$, we infer the $^{14}$N nuclear spin-coherence time $T_{2}^{*}$. The experimentally measured values of $T_{2}^{*}$ are in the range from 0.5 to 0.8\,ms over all temperatures and magnetic fields used in this work. We plot the experimental values of $(f_{1}+f_{2})/2$ and $(f_{1}-f_{2})/2B$ as a function of magnetic field in Figs.~\ref{fig:BDep}(d) and (e), respectively. Unlike for an ideal nuclear spin, these values are not constant but rather decrease with increasing magnetic field strength throughout the studied field range. This is due to mixing of the electron and nuclear spin states via the transverse magnetic hyperfine interaction characterized by the constant $A_{\perp}$. The average of the nuclear-spin transition frequencies $f_{1}$ and $f_{2}$ is described by~\cite{SUP2019}: \begin{equation} \begin{aligned} \label{eq:f1+f2} \frac{f_{1}+f_{2}}{2}=\left|{Q+\frac{A^{2}_{\perp}D}{D^{2}-\gamma^{2}_{e}B^{2}}}\right|.
\end{aligned} \end{equation} The effective nuclear gyromagnetic ratio is determined from: \begin{equation} \begin{aligned} \label{eq:f1-f2} \frac{f_{1}-f_{2}}{2B}=\gamma_{n}\left (1-\frac{\gamma_{e}}{\gamma_{n}}\frac{A^{2}_{\perp}}{D^{2}-\gamma^{2}_{e}B^{2}}\right ), \end{aligned} \end{equation} where $\gamma_{e}$ and $\gamma_{n}$ are the electron and nuclear gyromagnetic ratios, respectively. We fit the data plotted in Fig.~\ref{fig:BDep}(d) to Eq.~(\ref{eq:f1+f2}) with the following parameters fixed: $D=2870$\,MHz, $\gamma_{e}=2.803$\,MHz/G, $A_{\perp}=-2.62$\,MHz~\cite{CHE2015}. From the fit we obtained the value $Q=-4.9457(3)$\,MHz, which is in agreement with previously reported values~\cite{SME2009,STE2010PRB} and represents an order-of-magnitude improvement in precision. We use Eq.~(\ref{eq:f1-f2}) to fit the data in Fig.~\ref{fig:BDep}(e) and extract the $^{14}$N gyromagnetic ratio $\gamma_{n}=307.5(3)$\,Hz/G. The obtained value agrees with the literature data~\cite{HAR2002} and confirms the validity of our theoretical model. The error bar of $Q$ represents the combination of the statistical uncertainty and the uncertainty in $A_{\perp}$, while in the case of $\gamma_{n}$, the main uncertainty is associated with that in the magnetic field measurement~\cite{SUP2019}. Next, we measure $f_{1}$ and $f_{2}$ as a function of temperature and use Eq.~(\ref{eq:f1+f2}) and $D(T)$~\cite{SUP2019} to determine $Q(T)$. Figure~\ref{fig:TDep}(a) shows the experimentally measured values of $(f_{1}+f_{2})/2$ and the value of $|Q|$ inferred from Eq.~(\ref{eq:f1+f2}) as a function of temperature. $|Q|$ is shifted from $(f_{1}+f_{2})/2$ by $\sim3$\,kHz and is found to decrease smoothly by $\sim10$\,kHz with temperature increasing from 77.5\,K to 420\,K. We fit the experimentally determined $|Q|$ data to the fourth-order polynomial function: \begin{equation} \label{eq:QvsT} |Q(T)|=\sum_{n=0}^{4}{a_{n}T^{n}}.
\end{equation} The fit values of the coefficients are: $a_{0}=4949.473$\,kHz, $a_{1}=-9.32 \times 10^{-3}$\,kHz/K, $a_{2}=9.2597 \times 10^{-5}$\,kHz/K$^{2}$, $a_{3}=-4.6294 \times 10^{-7}$\,kHz/K$^{3}$, and $a_{4}=3.983 \times 10^{-10}$\,kHz/K$^{4}$. The temperature slope of the nuclear quadrupole coupling constant $Q$ at 297\,K is $d|Q|/dT=-35.0(2)$\,Hz/K, which is $\sim2000$ times smaller than the corresponding slope of the zero-field splitting parameter $D$ of the electron spin transitions, $dD/dT=-74.2(7)$\,kHz/K~\cite{ACO2010}. A recent preprint~\cite{SOS2018} used a different technique to infer a value of $d|Q|/dT=-24(4)$\,Hz/K, which is lower than our result but has an order-of-magnitude larger uncertainty. We find that the fractional changes of $Q$ and $D$ as a function of temperature match almost perfectly up to a constant factor of 3.6 [see Fig.~\ref{fig:TDep}(b)]. Thus, the qualitatively identical temperature variations of $Q$ and $D$ suggest that they may arise from a common mechanism. Such a mechanism is not obvious, because $Q$ and $D$ arise from different interactions that depend on different aspects of the NV center's electron orbitals: $D$ arises from the magnetic dipolar interaction between the NV center's two unpaired electrons occupying its $e_{x}$ and $e_{y}$ molecular orbitals, which are composed exclusively of carbon atomic orbitals, whilst $Q$ arises from the interaction of the $^{14}$N electric quadrupole moment with the electric field gradient at the nucleus generated by the electrons occupying the nitrogen's $p_{z}$ atomic orbital directed along the NV center's axis. The equivalent temperature variations of $D$ and $Q$ present an interesting theoretical problem whose solution could reveal new microscopic understanding of the NV center; accordingly, it should be pursued in the future with ab initio calculations.
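As a numerical cross-check (an illustrative sketch, not code from the original work), one can insert the fitted constants quoted above into Eqs.~(\ref{eq:f1+f2}), (\ref{eq:f1-f2}), and (\ref{eq:QvsT}):

```python
# Illustrative sketch: evaluate Eqs. (eq:f1+f2) and (eq:f1-f2) at B = 503 G
# and the polynomial fit (eq:QvsT) at 297 K, using the constants quoted in
# the text.
D = 2870.0            # MHz, zero-field splitting
gamma_e = 2.803       # MHz/G, electron gyromagnetic ratio
A_perp = -2.62        # MHz, transverse hyperfine constant
Q = -4.9457           # MHz, fitted quadrupole coupling constant
gamma_n = 307.5e-6    # MHz/G, fitted 14N gyromagnetic ratio
B = 503.0             # G

denom = D**2 - (gamma_e * B)**2                      # MHz^2
f_avg = abs(Q + A_perp**2 * D / denom)               # (f1 + f2)/2, MHz
gamma_n_eff = gamma_n * (1.0 - (gamma_e / gamma_n) * A_perp**2 / denom)

shift_kHz = (abs(Q) - f_avg) * 1e3        # shift of (f1+f2)/2 from |Q|, kHz
reduction = 1.0 - gamma_n_eff / gamma_n   # fractional reduction of gamma_n

# Fourth-order polynomial |Q(T)| and its temperature slope:
a = [4949.473, -9.32e-3, 9.2597e-5, -4.6294e-7, 3.983e-10]  # kHz, kHz/K, ...

def Q_abs(T):
    """|Q(T)| in kHz."""
    return sum(a_n * T**n for n, a_n in enumerate(a))

def dQ_abs_dT(T):
    """d|Q|/dT in kHz/K."""
    return sum(n * a_n * T**(n - 1) for n, a_n in enumerate(a) if n > 0)

slope_Hz_per_K = dQ_abs_dT(297.0) * 1e3   # convert kHz/K -> Hz/K
```

At 503\,G this reproduces the $\sim3$\,kHz offset between $|Q|$ and $(f_{1}+f_{2})/2$ and a $\sim1\%$ reduction of the effective gyromagnetic ratio, and the polynomial slope at 297\,K comes out at $\approx-35$\,Hz/K, consistent with the quoted $d|Q|/dT=-35.0(2)$\,Hz/K.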
The discussions presented in~\cite{SUP2019} may serve as intuition to guide those calculations. In this work, motivated by the development of diamond-based rotation sensors, we investigated the nonlinear temperature and magnetic field dependence of the $^{14}$N hyperfine spin transitions in an ensemble of diamond NV centers. These measurements were enabled by a direct optical readout technique (without the use of microwave transitions) optimized in this work. The fluorescence contrast of the nuclear Rabi oscillations depends on the magnetic field and reaches its maximum value of $\sim3.8$\,\% at around 485\,G. Such a high contrast is comparable to that of the electron-spin transitions. From the magnetic field dependence of the nuclear-spin transition frequencies, we determine the values of the nuclear quadrupole coupling constant $Q$ and the gyromagnetic ratio $\gamma_{n}$ of the NV $^{14}$N nucleus, which are in agreement with the published values. While the measured temperature dependence of the nuclear-spin transitions is smaller than the corresponding dependence of the electron-spin transitions in both absolute (by a factor of $\sim2000$) and relative (by a factor of $\sim3.6$) measure, this temperature dependence can still prove problematic for precision sensors. It can be further reduced by re-configuring the measurement to sense the interval between the $m_I=\pm 1$ levels directly, in analogy with how this is done for electronic states~\cite{FAN2013}. We also note that the $^{15}$N nucleus does not have a quadrupole moment and therefore exhibits no quadrupole splitting. The relative advantages and disadvantages of using $^{15}$N vs. $^{14}$N centers for gyroscopic applications require a separate consideration. The authors are grateful to Chih-Wei Lai and Pauli Kehayias for useful discussions.
This work was supported in part by the EU FET-OPEN Flagship Project ASTERIQS (action 820394) and the German Federal Ministry of Education and Research (BMBF) within the Quantumtechnologien program (FKZ 13N14439). A. J. acknowledges support from the Army Research Laboratory under Cooperative Agreement No. W911NF-16-2-0008. M. D. acknowledges support from the Australian Research Council (DE170100169).

\begin{thebibliography}{23}

\bibitem{DOH2013} M. W. Doherty, N. B. Manson, P. Delaney, F. Jelezko, J. Wrachtrup, and L. C. Hollenberg, ``The nitrogen-vacancy colour centre in diamond,'' Phys. Rep. \textbf{528}, 1 (2013).

\bibitem{RON2014} L. Rondin, J.-P. Tetienne, T. Hingant, J.-F. Roch, P. Maletinsky, and V. Jacques, Rep. Prog. Phys. \textbf{77}, 056503 (2014).

\bibitem{DEG2017} C. L. Degen, F. Reinhard, and P. Cappellaro, Rev. Mod. Phys. \textbf{89}, 035002 (2017).

\bibitem{BAR2019SEN} J. F. Barry, J. M. Schloss, E. Bauch, M. J. Turner, C. A. Hart, L. M. Pham, and R. L. Walsworth, ``Sensitivity optimization for NV-diamond magnetometry,'' arXiv:1903.08176 (2019).

\bibitem{LED2012} M. P. Ledbetter, K. Jensen, R. Fischer, A. Jarmola, and D. Budker, Phys. Rev. A \textbf{86}, 052116 (2012).

\bibitem{AJO2012} A. Ajoy and P. Cappellaro, Phys. Rev. A \textbf{86}, 062104 (2012).

\bibitem{MAC2012} D. Maclaurin, M. W. Doherty, L. C. L. Hollenberg, and A. M. Martin, Phys. Rev. Lett. \textbf{108}, 240403 (2012).

\bibitem{JAS2019} J.-C. Jaskula, K. Saha, A. Ajoy, D. Twitchen, M. Markham, and P. Cappellaro, Phys. Rev. Applied \textbf{11}, 054010 (2019).

\bibitem{SME2009} B. Smeltzer, J. McIntyre, and L. Childress, Phys. Rev. A \textbf{80}, 050302 (2009).

\bibitem{STE2010PRB} M. Steiner, P. Neumann, J. Beck, F. Jelezko, and J. Wrachtrup, Phys. Rev. B \textbf{81}, 035205 (2010).

\bibitem{NEU2010SCIENCE} P. Neumann, J. Beck, M. Steiner, F. Rempp, H. Fedder, P. R. Hemmer, J. Wrachtrup, and F. Jelezko, Science \textbf{329}, 542 (2010).

\bibitem{ACO2010} V. M. Acosta, E. Bauch, M. P. Ledbetter, A. Waxman, L.-S. Bouchard, and D. Budker, Phys. Rev. Lett. \textbf{104}, 070801 (2010).

\bibitem{HOD2013} J. S. Hodges, N. Y. Yao, D. Maclaurin, C. Rastogi, M. D. Lukin, and D. Englund, Phys. Rev. A \textbf{87}, 032118 (2013).

\bibitem{MAN2006} N. B. Manson, J. P. Harrison, and M. J. Sellars, Phys. Rev. B \textbf{74}, 104303 (2006).

\bibitem{JAC2009} V. Jacques, P. Neumann, J. Beck, M. Markham, D. Twitchen, J. Meijer, F. Kaiser, G. Balasubramanian, F. Jelezko, and J. Wrachtrup, Phys. Rev. Lett. \textbf{102}, 057403 (2009).

\bibitem{FIS2013PRB} R. Fischer, A. Jarmola, P. Kehayias, and D. Budker, Phys. Rev. B \textbf{87}, 125207 (2013).

\bibitem{AUZ2019} M. Auzinsh, A. Berzins, D. Budker, L. Busaite, R. Ferber, F. Gahbauer, R. Lazda, A. Wickenbrock, and H. Zheng, Phys. Rev. B \textbf{100}, 075204 (2019).

\bibitem{SUP2019} See Supplemental Material for details.

\bibitem{HAL2016} L. T. Hall, P. Kehayias, D. A. Simpson, A. Jarmola, A. Stacey, D. Budker, and L. C. L. Hollenberg, Nat. Commun. \textbf{7}, 10211 (2016).

\bibitem{CHE2015} Q. Chen, I. Schwarz, F. Jelezko, A. Retzker, and M. B. Plenio, Phys. Rev. B \textbf{92}, 184420 (2015).

\bibitem{HAR2002} R. K. Harris, E. D. Becker, S. M. C. de Menezes, R. Goodfellow, and P. Granger, Solid State Nucl. Magn. Reson. \textbf{22}, 458 (2002).

\bibitem{SOS2018} V. V. Soshenko, V. V. Vorobyov, O. Rubinas, B. Kudlatsky, A. I. Zeleneev, S. V. Bolshedvorskii, V. N. Sorokin, A. N. Smolyaninov, and A. V. Akimov, ``Temperature drift rate for nuclear terms of NV center ground state Hamiltonian,'' arXiv:1807.08100 (2018).

\bibitem{FAN2013} K. Fang, V. M. Acosta, C. Santori, Z. Huang, K. M. Itoh, H. Watanabe, S. Shikata, and R. G. Beausoleil, Phys. Rev. Lett. \textbf{110}, 130802 (2013).

\end{thebibliography}

\begin{center} \textbf{\large Supplemental material: Robust optical readout and characterization of nuclear spin transitions in nitrogen-vacancy ensembles in diamond} \end{center}

\setcounter{equation}{0} \setcounter{section}{0} \setcounter{figure}{0} \setcounter{table}{0} \setcounter{page}{1}

\renewcommand{\thetable}{S\arabic{table}} \renewcommand{\theequation}{S\arabic{equation}} \renewcommand{\thefigure}{S\arabic{figure}} \renewcommand{\thesection}{S\Roman{section}}

\makeatletter \renewcommand{\bibnumfmt}[1]{[S#1]} \renewcommand{\citenumfont}[1]{S#1} \makeatother

\section{NV ground-state transitions} \label{sec:SIham} The relevant spin Hamiltonian of the NV ground state in the presence of an axial magnetic field $B_{z}$ can be written as: \begin{equation} \begin{aligned} \label{eq:Hamiltonian} H=D\left (S^{2}_{z}-\frac{1}{3}\textbf{S}^{2}\right )+\gamma_{e}S_{z}B_{z}+Q\left (I^{2}_{z}-\frac{1}{3}\textbf{I}^{2}\right )-\\ -\gamma_{n}I_{z}B_{z}+A_{\parallel}S_{z}I_{z}+\frac{A_{\perp}}{2}(S_{+}I_{-}+S_{-}I_{+}), \end{aligned} \end{equation} where $\textbf{S}$ and $\textbf{I}$ are the dimensionless electron and nuclear spin operators, respectively, $D$ is the zero-field electron spin-spin interaction parameter, $\gamma_{e}$ and $\gamma_{n}$ are the electron and nuclear gyromagnetic ratios, respectively, $Q$ is the nuclear quadrupole coupling constant, and $A_{\parallel}$ and $A_{\perp}$ are the axial and transverse magnetic hyperfine constants.
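The second-order expressions used in the main text can be cross-checked against an exact diagonalization of the Hamiltonian of Eq.~(\ref{eq:Hamiltonian}). The sketch below is illustrative only (not code from the original work); the value $A_{\parallel}\approx-2.16$\,MHz is an assumed literature value, which does not affect the $m_{S}=0$ manifold at first order, while the other constants are those quoted in the main text:

```python
# Illustrative sketch: exact diagonalization of the ground-state spin
# Hamiltonian (spin-1 electron, spin-1 14N nucleus), compared with the
# second-order result (f1 + f2)/2 = |Q + A_perp^2 D / (D^2 - gamma_e^2 B^2)|.
# A_par = -2.16 MHz is an assumed literature value; the other constants are
# quoted in the text.
import numpy as np

D, Q = 2870.0, -4.9457               # MHz
gamma_e, gamma_n = 2.803, 3.075e-4   # MHz/G
A_par, A_perp = -2.16, -2.62         # MHz
B = 503.0                            # G

# Spin-1 operators in the basis m = +1, 0, -1 (same matrices for S and I)
Sz = np.diag([1.0, 0.0, -1.0])
Sp = np.sqrt(2.0) * np.diag([1.0, 1.0], k=1)   # raising operator
Sm = Sp.T                                      # lowering operator
E3 = np.eye(3)

H = (D * np.kron(Sz @ Sz - (2.0 / 3.0) * E3, E3)   # S^2 = 2 for spin 1
     + gamma_e * B * np.kron(Sz, E3)
     + Q * np.kron(E3, Sz @ Sz - (2.0 / 3.0) * E3)
     - gamma_n * B * np.kron(E3, Sz)
     + A_par * np.kron(Sz, Sz)
     + 0.5 * A_perp * (np.kron(Sp, Sm) + np.kron(Sm, Sp)))

evals, evecs = np.linalg.eigh(H)

def energy(mS, mI):
    """Energy of the eigenstate with the largest overlap with |mS, mI>."""
    idx = (1 - mS) * 3 + (1 - mI)   # basis index for m = +1, 0, -1 ordering
    k = np.argmax(np.abs(evecs[idx, :]))
    return evals[k]

f1 = abs(energy(0, +1) - energy(0, 0))   # |0,+1> -> |0,0> transition (MHz)
f2 = abs(energy(0, -1) - energy(0, 0))   # |0,-1> -> |0,0> transition (MHz)

f_avg_exact = 0.5 * (f1 + f2)
f_avg_2nd = abs(Q + A_perp**2 * D / (D**2 - (gamma_e * B)**2))
```

At 503\,G the exact and second-order averages agree to well below a kilohertz, confirming that the perturbative treatment is adequate far from the GSLAC.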
Treating the transverse magnetic hyperfine interaction as a perturbation, the nuclear spin Hamiltonian of the $m_{S}=0$ manifold is, to second order: \begin{equation} \begin{aligned} \label{eq:Hamiltonian2} H_{0}=\left ( Q+\frac{A^{2}_{\perp}D}{D^{2}-\gamma^{2}_{e}B^{2}_{z}}\right )\left (I^{2}_{z}-\frac{2}{3}\right )-\\ -\gamma_{n}\left (1-\frac{\gamma_{e}}{\gamma_{n}}\frac{A^{2}_{\perp}}{D^{2}-\gamma^{2}_{e}B^{2}_{z}}\right )I_{z}B_{z}, \end{aligned} \end{equation} from which we can determine the average of the nuclear-spin transition frequencies $f_{1}$ and $f_{2}$: \begin{equation} \begin{aligned} \label{eq:Q} \frac{f_{1}+f_{2}}{2}=\left|{Q+\frac{A^{2}_{\perp}D}{D^{2}-\gamma^{2}_{e}B^{2}_{z}}}\right| \end{aligned} \end{equation} and the effective nuclear gyromagnetic ratio $\gamma_{n}^{\mathit{eff}}$: \begin{equation} \begin{aligned} \label{eq:gamma} \gamma_{n}^{\mathit{eff}}=\frac{f_{1}-f_{2}}{2B_{z}}=\gamma_{n}\left (1-\frac{\gamma_{e}}{\gamma_{n}}\frac{A^{2}_{\perp}}{D^{2}-\gamma^{2}_{e}B^{2}_{z}}\right ). \end{aligned} \end{equation} These expressions are valid sufficiently far from the ground-state level-anticrossing (GSLAC), where the term with the resonant denominator is a small correction. We also note in passing that the vicinity of the GSLAC is an interesting regime in which to study the temperature and magnetic field dependence of $A_{\perp}$, which will be a subject of future work. \section{Nuclear-spin-dependent fluorescence} \label{sec:Traces} \begin{figure} \caption{\label{fig:Traces}} \end{figure} Figure~\ref{fig:Traces}(a) shows averaged and normalized fluorescence responses (0.5\,$\mu$s readout time) of the NV centers selectively initialized in the three nuclear-spin states of the $m_{S}=0$ manifold, $|0,+1\rangle$, $|0,0\rangle$, and $|0,-1\rangle$, as a function of magnetic field.
The fluorescence rate for the $|0,+1\rangle$ state is the highest over the studied magnetic field range, while the relative fluorescence rates for the $|0,0\rangle$ and $|0,-1\rangle$ states depend on the magnetic field strength. The fluorescence rate for the $|0,-1\rangle$ state is higher than that for the $|0,0\rangle$ state at magnetic field strengths below $\approx495$\,G and lower at magnetic field strengths above $\approx495$\,G. This relative fluorescence-response behaviour reflects the degree of mixing in the excited state depending on the magnetic field. Figure~\ref{fig:Traces}(b) demonstrates differences in the fluorescence responses for the selectively initialized nuclear-spin states $|0,+1\rangle$, $|0,0\rangle$, and $|0,-1\rangle$ at 503\,G, 484\,G, and 424\,G. \section{Magnetic field alignment and measurement} \label{sec:Bmeas} \begin{figure} \caption{\label{fig:Bcal} Pulsed ODMR signals used for the alignment and calibration of the magnetic field.} \end{figure} A static magnetic field $\textbf{B}$ was applied along the NV axis using a neodymium permanent magnet mounted on a three-axis translation stage. The alignment of the applied magnetic field was done by overlapping the three NV electron-spin resonances corresponding to the three NV subensembles that are not aligned with the field. The alignment of $\textbf{B}$ along the [111] axis is estimated to be better than 0.2 degrees. The electron-spin resonances associated with the NV subensemble aligned with $\textbf{B}$ were used to determine the strength of the applied field. Pulsed ODMR signals were recorded for both the $m_{S}=0 \rightarrow m_{S}=-1$ and $m_{S}=0 \rightarrow m_{S}=+1$ transitions (Fig.~\ref{fig:Bcal}). The duration of the microwave $\pi$ pulse in the pulsed ODMR measurement was on the order of 2\,$\mu$s. Each ODMR signal was fitted with three Lorentzians separated by 2.2 MHz. The relative amplitudes of the fitted Lorentzians indicate a high degree of nuclear spin polarization, $>$95\%, even at $\approx$150\,G away from the ESLAC.
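The Lorentzian-triplet fit described above can be illustrated on synthetic data. In the sketch below (Python with NumPy/SciPy; the line center, amplitudes, linewidth, and noise level are hypothetical values chosen for illustration) three Lorentzians separated by 2.2\,MHz are fitted and the nuclear polarization is estimated from the relative amplitudes:

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic ODMR triplet split by the 2.2 MHz axial hyperfine constant.
# Center frequency, amplitudes, linewidth and noise are hypothetical.
A_split = 2.2        # MHz
f0 = 2750.0          # MHz, hypothetical line center
width = 0.3          # MHz, half width at half maximum

def triplet(f, fc, a_m, a_0, a_p, w):
    L = lambda c, a: a * w**2 / ((f - c)**2 + w**2)
    return L(fc - A_split, a_m) + L(fc, a_0) + L(fc + A_split, a_p)

f = np.linspace(f0 - 6.0, f0 + 6.0, 600)
rng = np.random.default_rng(0)
data = triplet(f, f0, 0.02, 0.02, 0.96, width) \
       + 0.002 * rng.standard_normal(f.size)

p, _ = curve_fit(triplet, f, data, p0=[f0 + 0.2, 0.1, 0.1, 0.5, 0.2])
# Polarization = weight of the dominant hyperfine line
polarization = p[3] / (p[1] + p[2] + p[3])
print(p[0], polarization)
```

With amplitudes strongly concentrated in one hyperfine line, the fitted relative amplitude directly reports the degree of nuclear polarization, mirroring the analysis described in the text.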
We used the fit values of the central Lorentzian frequency for both transitions, $f_{-}$ and $f_{+}$, to determine the strength of the magnetic field: \begin{equation} \begin{aligned} \label{eq:Bcal} B=\frac{f_{+}-f_{-}}{2\gamma_{e}}, \end{aligned} \end{equation} where $\gamma_{e}=2.803$\,MHz/G. We estimate the statistical uncertainty of the magnetic field measurement to be better than 0.1\,G. Nevertheless, the drift of the magnetic field during the nuclear Ramsey measurements can introduce a systematic uncertainty in $\textbf{B}$. This magnetic field drift, associated with the permanent magnet's sensitivity to ambient temperature fluctuations, represents the main source of uncertainty in the measurement of the nuclear gyromagnetic ratio $\gamma_{n}$. We estimate the maximum systematic error introduced by the magnetic field drift to be less than 0.3\,G. In the ODMR signal we also observe polarization of the $^{13}$C nuclear spins proximal to the NV centers with a hyperfine interaction strength of 13–14\,MHz~\cite{FIS2013PRB}. \section{Temperature dependence of \textit{D}} \label{sec:DvsT} \begin{figure} \caption{\label{fig:DofT} Measured values of $D$ as a function of temperature.} \end{figure} Figure~\ref{fig:DofT} shows the experimentally measured values of $D$ as a function of temperature for a diamond sample similar to that used in the present work ([N]$\approx50$\,ppm, [NV]$\approx16$\,ppm) in the range from 5 to 390\,K. These results were obtained employing a standard ODMR technique. The data for the temperature range from 5 to 300\,K were published in~\cite{DOH2014PRB}, while the rest of the data remained unpublished. We use these data to plot the fractional shift of $D$ in Fig.\,4(b) of the main text in the range from 90 to 390\,K. \section{\textit{D(T)} vs. \textit{Q(T)}} \label{sec:DvsQ} To find some link between $D$ and $Q$, we need to also consider how $D$ depends on individual atomic orbitals.
As shown in Ref.~\cite{DOH2014PRL}, $D$ is approximately determined by the separation of the mean positions of electrons in the dangling $sp^{3}$ carbon orbitals around the vacancy. These mean positions depend on the carbon lattice positions and the $sp$-hybridization of their dangling orbitals. The lattice positions and hybridization are correlated because if the carbons move relative to their nearest neighbors, then their atomic orbitals must rehybridize to maintain bonds with those neighbors (this is often called bond bending). The same is true of the hybridization of the nitrogen atom's orbitals, and so the occupation of the nitrogen's $p_{z}$ orbital is correlated with the nitrogen's lattice position. Thus, $D$ and $Q$ are linked if there is a common displacement (up to a proportionality factor) of the carbon and nitrogen atoms around the vacancy. As shown in Ref.~\cite{DOH2014PRB}, the temperature dependence of $D$ has two contributions: thermal expansion and quadratic electron-phonon interactions. These contributions are generalizable across the different types of resonances of the NV center (i.e., visible and infrared)~\cite{DOH2014PRB} and so are also expected to govern the temperature dependence of $Q$ (although in that case it will be nuclear-phonon interactions). Both of these contributions represent lattice displacements: either static (thermal expansion) or dynamic (interactions with phonons). Hence, if there are common displacements of the nitrogen and carbon atoms, this may result in the qualitatively identical temperature variations of $D$ and $Q$. \end{document}
\begin{document} \title{A Reformulation of the Riemann Hypothesis} \date{November 28, 2020} \author{Jose Risomar Sousa} \maketitle \usetagform{Tags} \begin{abstract} We present some novelties on the Riemann zeta function. Using the analytic continuation we created for the polylogarithm, $\mathrm{Li}_{k}(e^{m})$, we extend the zeta function from $\Re(k)>1$ to the complex half-plane, $\Re(k)>0$, by means of the Dirichlet eta function. More strikingly, we offer a reformulation of the Riemann hypothesis through a zeta's cousin, $\varphi(k)$, a pole-free function defined on the entire complex plane whose non-trivial zeros coincide with those of the zeta function. \end{abstract} \section{Introduction} The Riemann Hypothesis is a long-standing problem in mathematics, which involves the zeros of the analytic continuation of its most famous Dirichlet series, the zeta function. This Dirichlet series, along with its analytic continuation, constitutes a so-called $L$-function, whose zeros encode information about the location of the prime numbers. Riemann provided insight into this connection through his unnecessarily convoluted prime counting functions\citesup{Prime}.\\ The zeta function as a Dirichlet series is given by, \begin{equation} \nonumber \zeta(k)=\sum_{j=1}^{\infty}\frac{1}{j^k} \text{,} \end{equation} and throughout here we use $k$ for the variable instead of the usual $s$, to keep the notation consistent with previous papers on generalized harmonic numbers and progressions and interrelated subjects.\\ This series only converges for $\Re(k)>1$, but it can be analytically continued to the whole complex plane. For the purpose of analyzing the zeros of the zeta function, though, we produce its analytic continuation on the complex half-plane only, $\Re{(k)}>0$, by means of the alternating zeta function, known as the Dirichlet eta function, $\eta{(k)}$. It's a well-known fact that all the non-trivial zeros of $\zeta(k)$ lie in the critical strip ($0<\Re{(k)}<1$).
The Riemann hypothesis is then the conjecture that all such zeros have $\Re{(k)}=1/2$. A somewhat convincing argument for the Riemann hypothesis was given in \citena{RH}, though it's reasonable to think it will take some time for it to be acknowledged or dismissed.\\ We start from the formula we found for the analytic continuation of the polylogarithm function, discussed in paper \citena{AC}. The polylogarithm is a generalization of the zeta function, and it has the advantage of encompassing the Dirichlet eta function.\\ We then greatly simplify the convoluted expressions and take the complex numbers out of the picture, going from four-dimensional chaos ($\mathbb{C}\rightarrow\mathbb{C}$) to a manageable two-dimensional relation. \section{The polylogarithm, $\mathrm{Li}_{k}(e^{m})$} As seen in \citena{AC}, the following expression for the polylogarithm holds for all complex $k$ with positive real part, $\Re{(k)}>0$, and all complex $m$ (except $m$ such that $\Re{(m)}\geq 0$ and $\abs{\Im{(m)}}>2\pi$ -- though for $\abs{\Im{(m)}}=2\pi$ one must have $\Re{(k)}>1$): \begin{multline} \nonumber \mathrm{Li}_{k}(e^{m})=-\frac{m^{k}}{2k!}-\frac{m^{k-1}\left(1+\log{(-m)}\right)}{(k-1)!}\\ -\frac{m^{k}}{2(k-1)!}\int_{0}^{1}(1-u)^{k-1}\coth{\frac{m u}{2}}+\frac{2\left(1-m^{-k}(m-\log{(1-u)})^k\right)}{k\,u^2}\,du \end{multline}\\ \indent From this formula, we can derive two different formulae for $\zeta(k)$, using $m=0$ or $m=2\pi\bm{i}$, but both are only valid when $\Re{(k)}>1$. Using $m=2\pi\bm{i}$ is effortless: we just need to replace $m$ with $2\pi\bm{i}$ in the above.
Using $m=0$ is not as direct: we need to take the limit of $H_k(n)$\footnote{A formula derived from the partial sums of $\mathrm{Li}_{k}(e^{m})$, as explained in \citena{AC}.} as $n$ tends to infinity: \begin{equation} \nonumber \zeta(k)=\frac{1}{k!}\int_{0}^{1}\frac{\left(-\log{u}\right)^k}{(1-u)^2}\,du \text{} \end{equation} \subsection{The analytic continuation of $\zeta{(k)}$} The analytic continuation of the zeta function to the complex half-plane can be achieved using the Dirichlet eta function, as below: \begin{equation} \nonumber \zeta(k)=\frac{1}{1-2^{1-k}}\eta(k)=\frac{1}{1-2^{1-k}}\sum_{j=1}^{\infty}\frac{(-1)^{j+1}}{j^{k}} \text{,} \end{equation}\\ \noindent which is valid in the only region that matters to the zeta function's non-trivial zeros, the critical strip. The exceptions to this mapping are the zeros of $1-2^{1-k}$, which are also zeros of $\eta(k)$, thus yielding an undefined product.\\ For the purpose of studying the zeros of the zeta function, we can focus only on $\eta(k)$ and ignore its multiplier. The formula below should hold whenever $\Re{(k)}>0$: \begin{multline} \nonumber \eta{(k)}=\frac{(\bm{i}\pi)^{k}}{2k!}+\frac{(\bm{i}\pi)^{k-1}\left(1+\log{(-\bm{i}\pi)}\right)}{(k-1)!}\\ +\frac{(\bm{i}\pi)^{k}}{2(k-1)!}\int_{0}^{1}-\bm{i}(1-u)^{k-1}\cot{\frac{\pi u}{2}}+\frac{2\left(1-\pi^{-k}(\pi+\bm{i}\log{(1-u)})^k\right)}{k\,u^2}\,du \end{multline} \section{Simplifying the problem} Before starting to solve equation $\eta{(k)}=0$, let's simplify it by letting $h(k)=2\,k!(\bm{i}\pi)^{-k}\eta(k)$.\\ Now, $\eta{(k)}=0$ if and only if $h(k)=0$, which implies the below equation: \begin{multline} \nonumber -1+\frac{2\bm{i}\,k\left(1+\log{(-\bm{i}\pi)}\right)}{\pi} =\int_{0}^{1}-\bm{i}\,k(1-u)^{k-1}\cot{\frac{\pi u}{2}}+\frac{2\left(1-\pi^{-k}(\pi+\bm{i}\log{(1-u)})^k\right)}{u^2}\,du \end{multline}\\ \indent Now we need to make another transformation, with the goal of separating the real and imaginary parts.
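The integral formula above for $\zeta(k)$ can be checked numerically for integer $k>1$; a minimal sketch (Python with SciPy, assumed available):

```python
from math import factorial, log, pi

from scipy.integrate import quad

def zeta_integral(k):
    # zeta(k) = (1/k!) * Integral_0^1 (-log u)^k / (1 - u)^2 du, for Re(k) > 1
    val, _ = quad(lambda u: (-log(u)) ** k / (1.0 - u) ** 2, 0.0, 1.0)
    return val / factorial(k)

print(zeta_integral(2))   # ~ pi^2/6 = 1.644934...
print(zeta_integral(3))   # ~ 1.202057... (Apery's constant)
```

The identity follows from expanding $1/(1-u)^2=\sum_{n\geq 1} n\,u^{n-1}$ and integrating term by term, since $\int_0^1 u^{n-1}(-\log u)^k\,du=k!/n^{k+1}$.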
Let's set $k=r+\bm{i}\,t$, expressed in polar form, and change the integral's variable using the relation $\log{(1-u)}=\pi\tan{v}$, chosen for convenience. \begin{equation} \nonumber k=r+\bm{i}\,t=\sqrt{r^2+t^2}\exp\left(\bm{i}\arctan{\frac{t}{r}}\right) \end{equation}\\ \indent With that, taking into account the Jacobian of the transformation, our equation becomes: \begin{multline} \nonumber \frac{\pi(r-1)-2\,t(1+\log{\pi})}{\pi}+\bm{i}\frac{\pi\,t+2\,r(1+\log{\pi})}{\pi}=\\ \pi\int_{0}^{\pi/2}(\sec{v})^2(-\bm{i}\sqrt{r^2+t^2}\tan{\frac{\pi e^{-\pi\tan{v}}}{2}}\exp\left(-\pi\,r\tan{v}+\bm{i}\left(\arctan{\frac{t}{r}}-\pi\,t\tan{v}\right)\right)\\+\frac{1}{2}\left(\csch{\frac{\pi\tan{v}}{2}}\right)^2\left(1-\exp\left(-r\log{\cos{v}}+t\,v+\bm{i}(-t\log{\cos{v}}-r\,v)\right)\right))\,dv \end{multline}\\ \indent Though this expression is very complicated, it can be simplified, as we do next. Since the parameters are real, we can separate the real and imaginary parts. \subsection{The real part equation} Below we have the equation one can derive for the real part: \begin{multline} \nonumber \frac{\pi(r-1)-2\,t(1+\log{\pi})}{\pi}=\\ \pi\int_{0}^{\pi/2}(\sec{v})^2(\sqrt{r^2+t^2}\tan{\frac{\pi e^{-\pi\tan{v}}}{2}}\exp\left(-\pi\,r\tan{v}\right)\sin{\left(\arctan{\frac{t}{r}}-\pi\,t\tan{v}\right)}\\+\frac{1}{2}\left(\csch{\frac{\pi\tan{v}}{2}}\right)^2(1-\exp\left(-r\log{\cos{v}}+t\,v\right)\cos{(t\log{\cos{v}}+r\,v)}))\,dv \end{multline}\\ \indent Any positive odd integer $r$ satisfies this equation when $t=0$.\\ If $r+\bm{i}\,t$ is a zero of the zeta function, so is its conjugate, $r-\bm{i}\,t$.
Hence, noting that the first term inside the integral is an odd function in $t$, we can further simplify the above by adding the equations for $t$ and $-t$, as follows: \begingroup \small \begin{multline} \nonumber 2(r-1)=\frac{\pi}{2}\int_{0}^{\pi/2}\left(\sec{v}\csch{\frac{\pi\tan{v}}{2}}\right)^2\left(2-(\sec{v})^r\left(e^{t\,v}\cos{(t\log{\cos{v}}+r\,v)}+e^{-t\,v}\cos{(t\log{\cos{v}}-r\,v)}\right)\right)\,dv \end{multline} \endgroup\\ \indent Let's call the function on the right-hand side of the equation $f(r,t)$. After this transformation, the positive odd integers $r$ remain solutions of $f(r,0)=2(r-1)$. \subsection{The imaginary part equation} Likewise, for the imaginary part we have: \begin{multline} \nonumber \frac{\pi\,t+2\,r(1+\log{\pi})}{\pi}=\\ \pi\int_{0}^{\pi/2}(\sec{v})^2(-\sqrt{r^2+t^2}\tan{\frac{\pi e^{-\pi\tan{v}}}{2}}\exp\left(-\pi\,r\tan{v}\right)\cos{\left(\arctan{\frac{t}{r}}-\pi\,t\tan{v}\right)}\\+\frac{1}{2}\left(\csch{\frac{\pi\tan{v}}{2}}\right)^2\exp\left(-r\log{\cos{v}}+t\,v\right)\sin{(t\log{\cos{v}}+r\,v)})\,dv \end{multline}\\ \indent Coincidentally, any positive even integer $r$ satisfies this equation when $t=0$, so the two equations (real and imaginary) are never satisfied simultaneously for any positive integer.\\ Now the first term inside the integral is an even function in $t$, so to simplify it we need to subtract the equations for $t$ and $-t$, obtaining: \begin{multline} \nonumber 2\,t=\frac{\pi}{2}\int_{0}^{\pi/2}(\sec{v})^{r+2}\left(\csch{\frac{\pi\tan{v}}{2}}\right)^2\left(e^{t\,v}\sin{(t\log{\cos{v}}+r\,v)}+e^{-t\,v}\sin{(t\log{\cos{v}}-r\,v)}\right)\,dv \end{multline}\\ \indent Let's call the function on the right-hand side of the equation $g(r,t)$. After this transformation, any $r$ satisfies $g(r,0)=0$, whereas the roots of $f(r,0)=2(r-1)$ are still the positive odd integers $r$.
This means that when $t=0$, these transformations have introduced the positive odd integers $r$ as new zeros of the system, which weren't there before. \section{Riemann hypothesis reformulation} If we take a linear combination of the equations for the real and imaginary parts, such as $2(r-1)-2\,t\,\bm{i}=f(r,t)-\bm{i}\,g(r,t)$, we can turn the system of equations into a simpler single equation: \begin{equation} \nonumber k-1=\frac{\pi}{2}\int_{0}^{\pi/2}\left(\sec{v}\csch{\frac{\pi\tan{v}}{2}}\right)^2\left(1-\frac{\cos{k\,v}}{(\cos{v})^k}\right)\,dv \end{equation}\\ \indent Going a little further, with a simple transformation ($u=\tan{v}$) we can deduce the following theorem.\\ \textbf{Theorem} $k$ is a non-trivial zero of the Riemann zeta function if and only if $k$ is a non-trivial zero of: \begin{equation} \nonumber \varphi(k)=1-k+\frac{\pi}{2}\int_{0}^{\infty}\left(\csch{\frac{\pi\,u}{2}}\right)^2\left(1-(1+u^2)^{k/2}\cos{(k\arctan{u})}\right)\,du \end{equation}\\ \indent Hence the Riemann hypothesis is the statement that the zeros of $\varphi(k)$ located on the critical strip have $\Re(k)=1/2$.\\ \textbf{Proof} All the roots of $\varphi(k)$ should also be roots of the zeta function, except for the positive odd integers and the trivial zeros of the eta function ($1+2\pi\bm{i}\,j/\log{2}$, for any integer $j$), though this might not be true since we transformed the equations (that is, there might be other $k$ such that $\varphi(k)=0$ but $\zeta(k)\neq 0$).\\ However, a little empirical research reveals the following relationship between $\zeta(k)$ and $\varphi(k)$: \begin{equation} \label{eq:FE} -\frac{2\,\Gamma(k+1)\left(2^{1-k}-1\right)}{\pi^k}\cos{\frac{\pi k}{2}}\,\zeta(k)=\varphi(k) \text{, for all complex }k\text{ except where undefined.} \end{equation}\\ \indent This relationship was derived from the observation that $\varphi(k)=\Re{(h(k))}$ for all real $k$. 
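The theorem's integral representation can be sanity-checked numerically. The sketch below (Python with SciPy, assumed available) evaluates $\varphi(k)$ directly from its definition; consistently with the functional equation above, it reproduces, e.g., $\varphi(2)=-1/3$ and $\varphi(3)=0$:

```python
import numpy as np
from scipy.integrate import quad

def phi(k):
    # phi(k) = 1 - k + (pi/2) * Integral_0^inf csch(pi*u/2)^2
    #          * (1 - (1+u^2)^(k/2) * cos(k*arctan(u))) du
    def integrand(u):
        if u > 400.0:   # csch^2 underflows to zero long before this point
            return 0.0
        csch2 = 1.0 / np.sinh(np.pi * u / 2.0) ** 2
        return csch2 * (1.0 - (1.0 + u * u) ** (k / 2.0)
                        * np.cos(k * np.arctan(u)))
    val, _ = quad(integrand, 0.0, np.inf)
    return 1.0 - k + 0.5 * np.pi * val

print(phi(2))   # ~ -1/3
print(phi(3))   # ~ 0: the positive odd integers are zeros of phi
```

For $k=2$ the check can even be done by hand: $(1+u^2)\cos(2\arctan u)=1-u^2$, so the integrand reduces to $u^2\csch^2(\pi u/2)$, whose integral is $4/(3\pi)$, giving $\varphi(2)=-1+2/3=-1/3$.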
$\square$\\ Note this functional equation breaks down at the negative integers ($k!$ diverges while $\zeta(k)=0$ or $\cos{\pi k/2}=0$, whereas $\varphi(k)\neq 0$) and at $1$. There is a trade-off of zeros between these two functions (the negative even integers for the positive odd integers). Note also that while the convergence domain of $h(k)$ is $\Re{(k)}>0$, the domain of $\varphi(k)$ is the whole complex plane.\\ If we combine equation \eqrefe{FE} with Riemann's functional equation, we can obtain the following simpler equivalence, valid for all complex $k$ except the zeta pole: \begin{equation} \nonumber 2(k-1)(1-2^{-k})\zeta(k)=\varphi(1-k) \text{,} \end{equation} \noindent which in turn implies Riemann's functional equation when combined with \eqrefe{FE}. \subsection{Particular values of $\varphi(k)$ when $k$ is integer} From the functional equations, we can easily find out the values of $\varphi(\pm k)$ when $k$ is a non-negative integer: \begin{equation} \nonumber \begin{cases} \varphi(2k)=\left(2-2^{2k}\right)B_{2k}\\ \varphi(2k+1)=0\\ \varphi(-k)=2k\left(1-2^{-k-1}\right)\zeta(k+1) \end{cases} \end{equation}\\ \indent And from these formulae we conclude that for large positive real $k$, $\varphi(-k)\sim 2k$.\\ One can also create a generating function for $\varphi(k)$, based on the following identities: \begin{equation} \nonumber \cos{\arctan{u}}=\frac{1}{\sqrt{1+u^2}} \text{, and } \cos{(k\,\arctan{u})}=T_k(\cos{\arctan{u}}) \text{,} \end{equation} \noindent where $T_k(x)$ is the Chebyshev polynomial of the first kind.\\ Therefore, using the generating function of $T_k(x)$ available in the literature, for the positive integers we have: \begin{equation} \nonumber \sum_{k=0}^{\infty}\left(x\sqrt{1+u^2}\right)^k\,T_k(\cos{\arctan{u}})=\frac{1-x}{(1-x)^2+(x\,u)^2} \text{,} \end{equation}\\ \noindent from which it's possible to produce the generating function of $\varphi(k)$ (let it be $q(x)$): \begin{equation} \nonumber
\sum_{k=0}^{\infty}x^k\,\varphi(k)=\frac{2}{1-x}-\frac{1}{(1-x)^2}+\frac{\pi\,x^2}{2(1-x)}\int_{0}^{\infty}\left(\csch{\frac{\pi\,u}{2}}\right)^2\frac{u^2}{(1-x)^2+(x\,u)^2}\,du \end{equation}\\ \indent The $k$-th derivative of $q(x)$ at $x=0$ yields the value of $\varphi(k)$, and obtaining it is not very hard (we just need to decompose the functions in $x$ into a sum of fractions whose denominators have degree 1, if the roots are simple -- so we can easily generalize their $k$-th derivative). After we perform all the calculations and simplifications we find that: \begin{equation} \nonumber \varphi(k)=\frac{q^{(k)}(0)}{k!}=1-k-\frac{\pi\,k!}{2}\int_{0}^{\infty}\left(\csch{\frac{\pi\,u}{2}}\right)^2\sum_{j=1}^{\floor{k/2}}\frac{(-1)^j\,u^{2j}}{(2j)!(k-2j)!}\,du \text{,} \end{equation} \noindent where $\floor{k/2}$ means the integer division.\\ Here we went from an expression that holds for all $k$ to an expression that only holds for $k$ a positive integer, the opposite of analytic continuation. \section{Graphics plotting} First we plot the curves obtained with the imaginary part equation, $g(r,t)$, with values of $r$ starting at $1/8$ with $1/8$ increments, up to $7/8$, for a total of 7 curves plus the $2\,t$ line. The points where the line crosses the curves are candidates for zeros of the Riemann zeta function (they also need to satisfy the real part equation, $f(r,t)=2(r-1)$).\\ Let's see what we obtain when we plot these curves with $t$ varying from 0 to 15. In the graph below, higher curves usually have greater $r$ (below the $x$-axis it's the other way around) -- but generally, the more outward the curve, the greater the $r$: \begin{center} \includegraphics{grafico1.png} \end{center} As we can see, it seems the line crosses the curve for $r=1/2$ at its local maximum, which must be the first non-trivial zero (that is, its imaginary part). The line also crosses 3 other curves (all of which have $r>1/2$), but these are probably not zeros due to the real part equation.
Also, it seems there must be a line that unites the local maximum points of all the curves, though that is just a wild guess.\\ A first conclusion is that one equation seems to be enough for $r=1/2$: the line seems to cross this curve only at the zeta zeros. Another conclusion is that apparently curves with $r<1/2$ don't even meet the first requirement, and also that apparently $r=1/2$ is just right. A third conclusion is that all curves seem to have the same inflection points.\\ In the graph below we plotted $g(r,t)$ for the minimum, middle and maximum points of the critical strip (0, $1/2$ and 1), with $t$ varying from 0 to 26, for further comparison (0 is pink, 1 is green): \begin{center} \includegraphics{grafico2.png} \end{center} Now, the graph below shows plots for the curves $-2(r-1)+f(r,t)$ and $-2t+g(r,t)$ together. The plots were created for $r=0$ (red), $r=1/2$ (green) and $r=1$ (blue) (curves with the same color have the same $r$). A point is a zero of the zeta function when both graphs cross the $x$-axis at the same point (three zeta zeros are shown). \begin{center} \includegraphics{grafico3.png} \end{center} And finally, graphs for the difference of the two functions, $-2(r-1)+f(r,t)+2t-g(r,t)$, were created for the same $r$'s as before and with the same colors as before (but now we also have pink ($r=1/4$) and cyan ($r=3/4$)). \begin{center} \includegraphics{grafico4.png} \end{center} \end{document}
\begin{document} \title{Decoherence and recoherence from vacuum fluctuations near a conducting plate} \author{Francisco D.\ Mazzitelli$^{1}$, Juan Pablo Paz$^{1,2}$, Alejandro Villanueva$^1$} \affiliation{(1): Departamento de F\'\i sica {\it J.J. Giambiagi}, FCEyN UBA, Ciudad Universitaria, Pabell\' on I, 1428 Buenos Aires, Argentina} \affiliation{(2): Theoretical Division, MS B213 LANL, Los Alamos, NM 87545, USA} \date{\today} \begin{abstract} {The interaction between particles and the electromagnetic field induces decoherence, generating a small suppression of fringes in an interference experiment. We show that if a double slit--like experiment is performed in the vicinity of a conducting plane, the fringe visibility depends on the position (and orientation) of the experiment relative to the conductor's plane. This phenomenon is due to the change in the structure of the vacuum induced by the conductor and is closely related to the Casimir effect. We estimate the fringe visibility both for charged particles and for neutral particles with a permanent dipole moment. The presence of the conductor may tend to increase decoherence in some cases and to reduce it in others. A simple explanation for this peculiar behavior is presented.} \end{abstract} \pacs{PACS number(s):03.65.Yz,03.75.Dg} \maketitle The interaction of a quantum system with its environment is responsible for the process of decoherence, which is one of the main ingredients for understanding the quantum--classical transition \cite{decoherence}. In some cases, the interaction with the environment cannot be switched off. This is the case for charged particles, which unavoidably interact with the electromagnetic field. As this interaction is fundamental, its effect is present in any interference experiment.
In this letter we will analyze the influence of a conducting boundary on the decay of the visibility of interference fringes in a double slit experiment performed with charged particles (or neutral particles with a dipole moment). The reduction of fringe visibility is induced by the interaction between the particles and the electromagnetic field. Some aspects of this problem have been analyzed before. In fact, it is known that for charged particles, the interaction between the system (the particle) and the environment (the electromagnetic field) induces a rather small decoherence effect even if the initial state of the field is the vacuum \cite{stern1,stern2,barone,breuer,vourdas,ford,hu}. A particularly simple expression for the decay in the fringe visibility was obtained in \cite{stern1,stern2}: Assuming an electron in harmonic motion (with frequency $\Omega$) along the relevant trajectories of the double slit experiment, the fringe visibility decays by a factor $(1-P)^2$, where $P$ is the probability that a dipole $p=eR$ oscillating at frequency $\Omega$ emits a photon ($R$ is the characteristic size of the trajectory). This result is in accordance with the idea that decoherence becomes effective when a record of the state of the system is irreversibly imprinted in the environment. In this case, after photon emission, if the electron follows the trajectory $\vec X_1(t)$ of the double slit experiment (see Figure 1) it becomes correlated with a state of the environment $|E_1(t)\rangle$. This state is different from the one with which the electron correlates if it follows the trajectory $\vec X_2(t)$. The absolute value of the overlap between these two different states is precisely given by $(1-P)^2$. In this letter we will analyze how the fringe visibility is modified when performing a double slit interference experiment in the vicinity of a conducting plane.
Our analysis will serve not only to correct previous results \cite{ford} but also to show that the effect of the conductor is quite remarkable and simple to understand. As we will see, the presence of the conducting plane may produce more decoherence in some cases and less decoherence in others. For example, we will show that if a conducting plate is placed perpendicular to the trajectories of the interfering charge, the fringe visibility decreases with respect to the vacuum case (absence of conducting plate). However, if the plate lies parallel to the electron's trajectories, the contrast increases (the system recoheres!). We will show that this peculiar behavior can be understood in simple terms and that the magnitude of the effect can be easily estimated. There are several interesting physical effects connected with the one we are analyzing here. Thus, it is well known that a conducting boundary modifies the properties of the zero point fluctuations, and therefore could affect the interference experiments of particles that interact with the electromagnetic field. Other consequences of the presence of nontrivial boundary conditions are the Casimir force between two conductors \cite{casimir} and the Casimir--Polder force \cite{casimir-polder} affecting a probe particle in the vicinity of a conductor. These phenomena, which have been experimentally verified \cite{casimir-experiments}, are close relatives of the process we are studying here. In fact, the Casimir--Polder force can be thought of as the dispersive counterpart of the decoherence effect we will discuss. The influence of boundaries on the electromagnetic vacuum is also responsible for changes in atomic lifetimes and interference phenomena for light emitted by atoms near conducting surfaces \cite{atomic-experiments}. \begin{figure} \caption{Scheme for a double slit--like experiment near a conducting plane.
The component of the velocity in the direction from the source to the detector is assumed to be constant.} \label{fig1} \end{figure} Let us first outline a simple method to compute the effect of electromagnetic interactions on the fringe contrast. We consider two electron wave packets that follow well defined trajectories $\vec{X} _1(t)$ and $\vec{X} _2(t)$ that coincide at initial ($t=0$) and final ($t=T$) times as shown in Figure 1. In the absence of an environment, the interference depends on the relative phase between both wave packets at $t=T$. Because of the interaction with the quantum electromagnetic field, the interference pattern is affected. This effect can be calculated as follows: We assume an initial state of the combined particle--field system of the form $|\Psi(0)\rangle= (|\phi_1\rangle+|\phi_2\rangle)\otimes|E_0\rangle$. Here $|E_0\rangle$ is the initial (vacuum) state of the field and $|\phi_{1,2}\rangle$ are two states of the electron that are localized around the initial point and that in the absence of other interactions continue to be localized along the trajectories $X_{1,2}(t)$ respectively. At later times, due to the particle--field interaction the state of the combined system becomes $|\Psi(t)\rangle= (|\phi_1(t)\rangle\otimes |E_1(t)\rangle+|\phi_2(t)\rangle \otimes|E_2(t)\rangle)$. Thus, the two localized states $|\phi_1(t)\rangle$ and $|\phi_2(t)\rangle$ become correlated with two different states of the field. Therefore, the probability of finding a particle at a given position turns out to be \begin{eqnarray} prob(\vec X,t)&=&|\phi_1(\vec X,t)|^2+|\phi_2(\vec X,t)|^2 \nonumber\\ &+& 2{\rm Re}(\phi_1(\vec X,t)\phi_2^*(\vec X,t)\langle E_2(t)|E_1(t)\rangle). \label{probability} \end{eqnarray} The overlap factor $F=\langle E_2(t)|E_1(t)\rangle$ is responsible for two effects: Its phase produces a shift of the interference fringes.
The absolute value $|F|$ is responsible for the decay in the fringe contrast, which is the phenomenon we will analyze here. The calculation of the factor $F$ is conceptually simple since it is nothing but the overlap between two states of the field that arise from the vacuum under the influence of two different sources (this factor is identical to the Feynman--Vernon influence functional \cite{Feynman-Vernon}). Each of the two states of the field can be written as $|E_a(t)\rangle=T\left(\exp(-i\int d^4x\, J_a^\mu (x)A_\mu (x))\right)|E_0\rangle$, where $J_a^\mu (x)$ is the conserved $4$--current generated by the particle following the classical trajectory $\vec X_a(t)$, i.e. $J_a^\mu (\vec X,t)=\left(e,e\dot{\vec X}_a (t)\right)\times\delta^3(\vec X-\vec X_a(t))$, ($a=1,2$). Using this, it is simple to derive an expression for the absolute value of the overlap. Denoting $|F|=\exp(-W_c)$, we get \begin{equation} W_c={1\over 2}\int d^4x\int d^4y (J_1-J_2)^\mu(x)D_{\mu\nu}(x,y)(J_1-J_2)^\nu(y), \label{Wcharges} \end{equation} where $D_{\mu\nu}$ is the expectation value of the anti--commutator of two field operators: $D_{\mu\nu}(x,y)={1\over 2}\langle\{A_\mu(x),A_\nu(y)\}\rangle$. It is easy to show that the square of the overlap has a simple interpretation: $|F|^2$ is equal to the probability for vacuum persistence in the presence of a source $j_{\mu}=(J_1-J_2)_\mu$, which corresponds to a time dependent electric dipole ${e}(\vec X_1(t)-\vec X_2(t))$ \cite{breuer}. A conceptually similar and physically interesting problem can be analyzed along the same lines: the decoherence of neutral particles with a non--vanishing permanent dipole moment. In such a case we can model the particle--field interaction using a Lagrangian $L_{int}=P_{\mu\nu}(x)F^{\mu\nu}(x)$. Here $F_{\mu\nu}$ is the field strength tensor and $P_{\mu\nu}$ is a totally antisymmetric tensor whose non--vanishing components are given in terms of the electric and magnetic dipole densities.
For particles with electric dipole $\vec p$ and magnetic dipole $\vec m$ moving along a trajectory $\vec X(t)$, the dipolar tensor is such that $P_{0i}=p_i(t)\delta^3(\vec X-\vec X(t))/2$ and $P_{ij}=\epsilon_{ijk}m_k (t)\delta^3(\vec X-\vec X(t))/2$. In this case we can perform a calculation which is similar to the one above and show that the overlap $F=\exp(-W_d)$ is \begin{eqnarray} W_d&=&{1\over 2}\int\int d^4x d^4y \ (P_1-P_2)^{\mu\nu}(x)K_{\mu\nu\rho\sigma}(x,y)\times\nonumber\\ &\times& (P_1-P_2)^{\rho\sigma}(y), \label{Wdipole} \end{eqnarray} where the kernel is $K_{\mu\nu\rho\sigma}(x,y)=\langle\{F_{\mu\nu}(x),F_{\rho\sigma}(y)\}\rangle$. In what follows we will present results for the {\sl decoherence factors} $W_c$ and $W_d$ (the subscripts stand for ``charges'' and ``dipoles''). To compute $W_c$ we need the two point function appearing in (\ref{Wcharges}). In the Feynman gauge and in the absence of conducting plates it is \begin{equation} D^{(0)}_{\mu\nu}(x,y)=-\eta_{\mu\nu}\int {d^3\vec k\over{(2\pi)^32k}} \ {\rm e}^{i\vec k(\vec x-\vec y)}\cos(k(x_0-y_0)), \label{Dmunu} \end{equation} where the superscript $(0)$ identifies this as the vacuum contribution. We will assume that the trajectories are such that $\vec X_1(t)=-\vec X_2(t)=x(t) \hat{\rm x}$. This is enough to describe a typical double slit experiment from the point of view of an observer moving at constant velocity from the source to the detector. In such a case we obtain the following relatively simple expression for $W_c$: \begin{equation} W_c^{(0)}=e^2 \int {d^3\vec k\over 8\pi^3 k}(1-{k_x^2\over k^2})\vert \int_{-\infty}^{\infty} dt\,\dot x(t) \,\cos[k_x x(t)] \, e^{i k t}\vert^2. \label{Wc0} \end{equation} This result, obtained first in \cite{stern1}, can be simplified further by assuming the validity of the dipole approximation (which is consistent in the nonrelativistic limit). Doing this, one can evaluate the decoherence factor for some special trajectories.
In fact, for adiabatic trajectories, where $x(t)= R \exp [-t^2/T^2]$, we find that $W_c^{(0)}=2e^2 v^2/3\pi$, where $v=R/T$ is a characteristic velocity. This result is finite and free of any cutoff dependence. However, for trajectories evolving over a finite time the situation is different. Thus, assuming that the motion starts at $t=0$, ends at $t=T$, and that it is composed of periods of constant velocity $v$, or constant acceleration $v/\tau$, we obtain a result that diverges logarithmically when $\tau\rightarrow 0$: $W_c^{(0)}={2 e^2 v^2} \log[T/\tau]/\pi^2$ (if $\tau/T\ll 1$). Previous results \cite{ford,breuer} were obtained for trajectories with discontinuous velocity using a natural UV cutoff arising from the finite size of the electron. The results of \cite{breuer} agree with ours if the high frequency cutoff is identified with $1/\tau$. Thus, the cutoff dependence disappears in the adiabatic case and is a consequence of abrupt changes in velocity and the instantaneous preparation of the initial state. If the two wave packets are superposed after oscillating $N$ times, it is possible to define a decoherence rate (the amount by which the decoherence factor grows in a single oscillation). Thus, if the time to complete one oscillation is much shorter than the period between oscillations we can show that, for large $N$, the decoherence factor is proportional to $N$: $W_c^{(0)}=N W_c^{(0)}(1)$ where $W_c^{(0)}(1)$ is the decoherence factor in a single oscillation. We will now show how this result is modified by the presence of a perfect conductor located in the plane $z=0$. To consider the effect of the conductor we only need to use the appropriate two point function $D_{\mu\nu}$, which is the sum of two terms \cite{Brown}: $D_{\mu\nu}=D_{\mu\nu}^{(0)}+D_{\mu\nu}^{(B)}$. The vacuum term is the same as in (\ref{Dmunu}).
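The adiabatic result quoted above can be verified numerically. The following sketch is our own check, not code from the original analysis: it evaluates (\ref{Wc0}) in the dipole approximation, $\cos[k_x x(t)]\to 1$, for the Gaussian trajectory $x(t)=R\exp[-t^2/T^2]$, using the analytic time integral $\vert\int dt\,\dot x\,e^{ikt}\vert^2=\pi k^2R^2T^2e^{-k^2T^2/2}$ and the angular integral $\int d\Omega\,(1-k_x^2/k^2)=8\pi/3$; units with $e=1$ and illustrative values of $R$ and $T$ are assumed.

```python
import numpy as np

# Hedged numerical check (ours, not the paper's code) of the adiabatic result
# W_c^(0) = 2 e^2 v^2 / (3 pi), in units e = 1.  In the dipole approximation,
# cos[k_x x(t)] -> 1 and, for x(t) = R exp(-t^2/T^2),
#   |int dt xdot(t) e^{i k t}|^2 = pi k^2 R^2 T^2 exp(-k^2 T^2 / 2),
# while the angular integral of (1 - k_x^2/k^2) equals 8 pi / 3.
R, T = 0.3, 2.0                      # illustrative trajectory parameters
v = R / T                            # characteristic velocity

k = np.linspace(0.0, 20.0 / T, 20001)           # radial k grid
integrand = (8 * np.pi / 3) / (8 * np.pi**3) * k**3 \
    * np.pi * R**2 * T**2 * np.exp(-k**2 * T**2 / 2)
# trapezoidal rule on the uniform grid
Wc0_numeric = float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(k)))
Wc0_closed = 2 * v**2 / (3 * np.pi)             # the quoted closed form
print(Wc0_numeric, Wc0_closed)                  # the two values agree
```

The quadrature reproduces $2e^2v^2/3\pi$ to the accuracy of the grid, confirming that the adiabatic decoherence factor is indeed finite and cutoff-independent.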
The contribution of the boundary conditions (identified by the superscript (B)) can be obtained by the method of images and is: \begin{eqnarray} D^{(B)}_{\mu\nu}(x,y)&=&(\eta_{\mu\nu}+2n_\mu n_\nu)\int {d^3\vec k\over{(2\pi)^32k}} \times\nonumber\\ &\times&\exp(i\vec k(\vec x-{\vec y}'))\cos(k(x_0-y_0)). \label{Dplate} \end{eqnarray} Here $n^\mu$ is the normal to the plane and ${\vec y}'$ is the position of the image point of $\vec y$ (a prime denotes a vector reflected with respect to the plane, i.e. ${\vec y}'=(y_x,y_y,-y_z)$). Using (\ref{Dplate}) we can derive a formula for the contribution of the boundary to the decay of the interference fringes. The complete equation is involved and will be given elsewhere \cite{us-next}. Here we will restrict ourselves to the case where the trajectories are either perpendicular or parallel to the conductor's plane. Thus, we will write $\vec X_{1,2}=z_0 \hat z \pm x(t) \hat {\rm \j} $ where $\hat {\rm \j} $ defines a fixed vector aligned either along the $\hat z$--axis or along the plane perpendicular to it. In such case, the conductor's contribution to decoherence is \begin{eqnarray} W_c^{(B)}&=& -\,\hat {\rm \j} {\hat {\rm \j} }'\, e^2 \int {d^3\vec k\over 8\pi^3k}\, (1-{k_j^2\over k^2})\, e^{2ik_z z_0}\times\nonumber\\ &\times& \vert\int_0^tdt' \, \dot x(t') \, \cos[k_j x(t')] \, e^{i k t'}\vert^2. \label{WcB} \end{eqnarray} The sign of $W_c^{(B)}$ is determined by the orientation of ${\hat{\rm \j}}'$ relative to $\hat{\rm \j} $. $W_c^{(B)}$ is negative when the trajectories are parallel to the conductor's plane (since in that case ${\hat {\rm \j}}'=\hat{\rm \j}$). On the other hand, $W_c^{(B)}$ is positive when the trajectories are perpendicular to the plane (where ${\hat {\rm \j}}'=-\hat{\rm \j}$). At small distances to the plane ($z_0\simeq 0$) we can see from (\ref{WcB}) that $\vert W_c^{(B)}\vert\simeq W_c^{(0)}$.
Therefore, if the trajectories are perpendicular to the plane, in the limit of small distances the decoherence factor is $W_c=W_c^{(0)}+W_c^{(B)}\simeq 2 W_c^{(0)}$: The effect of the conductor is to double the decoherence factor. However, if the trajectories are parallel to the conductor the effect is exactly the opposite: As $W_c^{(B)}$ is negative, the conductor produces {\sl recoherence}, increasing the contrast of the fringes. In fact, for small distances the decoherence factor tends to vanish since $W_c=W_c^{(0)}+W_c^{(B)}\simeq 0$. These results can be understood using the method of images taking into account that decoherence in empty space is related to the probability of photon emission for a varying dipole $p=e x(t)$. When the conducting plane is parallel to the dipole, the image dipole is $\vec p_{im}=-\vec p$. Therefore the total dipole moment vanishes, and so does the probability to emit a photon. The image dipole cancels the effect of the real dipole and this produces the recovery of the fringe contrast. On the other hand, when the conductor is perpendicular to the trajectories, the image dipole is equal to the real dipole $\vec p_{im}=+\vec p$. Therefore, the total dipole is twice the original one. This in principle would lead us to conclude that the total decoherence factor $W_c=W_c^{(0)}+W_c^{(B)}$ should be four times larger than $W_c^{(0)}$. However, one should take into account that in the presence of a perfect mirror photons can only be emitted in the $z\geq 0$ region. This introduces an additional factor of $1/2$ that gives rise to the final result $W_c\simeq 2 W_c^{(0)}$. The impact of conducting boundaries on the fringe visibility for interference experiments performed with charged particles was previously examined in \cite{ford}. However, the results obtained in those papers are not correct due to inconsistent approximations that violate the conservation of the $4$--current.
Thus, the expressions obtained there differ from ours in several ways: not only are they not proportional to $v^2$, but they also violate the positivity of the total decoherence factor $W_c$. Let us now describe the results for the case of neutral particles with permanent dipole moments. The calculation is tedious and details will be given elsewhere \cite{us-next}. Here we will analyze it under somewhat simplified assumptions. We will assume that the dipole moments $\vec p$ and $\vec m$ remain constant along the trajectories. If $\vec X_1(t)=-\vec X_2(t)=x(t)\hat{\rm \j}$ we find: \begin{eqnarray} W_d^{(0)}&=& \int {d^3\vec k\over 8\pi^3}\,k \,\{\vec p^2(1-{k_p^2\over k^2})+\vec m^2(1-{k_m^2\over k^2})\}\times\nonumber\\ &\times&\vert\int_0^tdt'\sin[k_j x(t')] \, e^{i k t'}\vert^2. \label{Wd0} \end{eqnarray} Using again the dipole approximation, for the adiabatic trajectory the decoherence factor is such that $W_d^{(0)}/W_c^{(0)}\simeq p^2/e^2T^2$ for a purely electric dipole. This ratio is typically much smaller than one. The effect of the conductor can also be taken into account using the method described above. For simplicity we will only consider trajectories that are parallel to the plane (i.e., $\vec X_{1,2}=z_0\hat z \pm x(t) \hat {\rm \j}$) and assume that the dipole moments are either perpendicular or parallel to the conductor (the general case is more complex but the essential features can be seen here). Using this we obtain \begin{eqnarray} W_d^{(B)}&=& -\, \int {d^3\vec k\over 32\pi^3}\,k \,\{\vec p {\vec p}'(1-{k_p^2\over k^2})-\vec m {\vec m}'(1-{k_m^2\over k^2})\}\, \times\nonumber\\ &\times& e^{2ik_zz_0}\vert\int_0^tdt'\sin[k_j x(t')] \, e^{i k t'}\vert^2. \label{WdB} \end{eqnarray} Thus, if the reflected dipole ${\vec p}'$ has the direction opposite to $\vec p$ (which is the case when $\vec p$ is parallel to the plate) the conductor tends to increase decoherence (since the contribution of the electric dipole to $W_d^{(B)}$ is positive).
Likewise, when $\vec p$ is perpendicular to the plane, $\vec p={\vec p}'$ and the contribution of the electric dipole to $W_d^{(B)}$ is negative. Therefore, in this case the conductor produces {\sl recoherence} instead of decoherence. The opposite effect is found for the magnetic dipole. Indeed, when $\vec m'=-\vec m$ (magnetic dipole perpendicular to the plane) the conductor produces recoherence, while more decoherence is produced if the magnetic dipole is parallel to the plane. These features can also be understood by thinking in terms of the image dipoles that are generated by the conductor. Thus, both when $\vec p$ is perpendicular to the plane and when $\vec m$ is parallel to it, the directions of the image dipoles coincide with those of the source dipoles. In such case the decoherence increases. In the opposite situation ($\vec p$ parallel or $\vec m$ perpendicular to the plane) the effect of the conductor is to introduce recoherence. Again, in the limit of small distances the absolute values of $W_d^{(0)}$ and $W_d^{(B)}$ coincide and therefore the decoherence factor doubles with respect to the vacuum case. For the two cases we considered (charges and dipoles) one can show that the boundary contribution to the decoherence factor decays algebraically with the distance to the conductor (in the limit of large distances). For small separations, explicit expressions for the decoherence factor can be obtained. For example, for charges moving close and parallel to the conductor, the lowest order contribution to $W_c$ depends quadratically on $z_0$. As expected, it exactly coincides with the decoherence factor produced by an electric dipole $p=2ez_0$ in vacuum (with an additional factor of $1/2$ that takes into account that photons can only be emitted with $z\geq 0$).
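The sign structure discussed above can also be illustrated numerically. The sketch below is our own construction (with $e=1$, the adiabatic trajectory, and the dipole approximation applied to (\ref{WcB})): it evaluates the ratio $W_c^{(B)}/W_c^{(0)}$ for motion parallel to the plate ($\hat{\rm\j}\,{\hat{\rm\j}}'=1$), where, after averaging over the azimuthal angle, the angular factor reduces to $(1+\mu^2)\cos(2kz_0\mu)/2$ with $\mu=\cos\theta$. At $z_0=0$ the ratio equals $-1$ (full recoherence) and its magnitude decays with the distance to the plate.

```python
import numpy as np

def trap(y, x):
    """Trapezoidal rule along the last axis."""
    return np.sum(0.5 * (y[..., 1:] + y[..., :-1]) * np.diff(x), axis=-1)

def ratio_WcB_Wc0(z0, T=1.0, nk=1200, nmu=1501):
    """W_c^(B)/W_c^(0) for trajectories parallel to the plate (jhat.jhat'=1),
    adiabatic trajectory, dipole approximation; the charge e drops out of the
    ratio.  Our hedged sketch, not code from the original analysis."""
    k = np.linspace(0.0, 12.0 / T, nk)
    mu = np.linspace(-1.0, 1.0, nmu)
    K, MU = np.meshgrid(k, mu, indexing="ij")
    radial = K**3 * np.exp(-K**2 * T**2 / 2)     # k-dependence common to both
    ang_B = 0.5 * (1 + MU**2) * np.cos(2 * K * z0 * MU)
    ang_0 = 0.5 * (1 + MU**2)
    num = trap(trap(radial * ang_B, mu), k)
    den = trap(trap(radial * ang_0, mu), k)
    return -num / den                            # overall minus from (WcB)

print(ratio_WcB_Wc0(0.0))   # -> -1.0: W_c ~ 0, full recoherence at the plate
print(ratio_WcB_Wc0(5.0))   # small magnitude: the effect decays with z_0
```

The oscillating factor $\cos(2kz_0\mu)$ is what suppresses the boundary contribution at large $z_0$, in line with the algebraic decay stated above.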
In conclusion, our work shows that the effect of conducting boundaries on interference experiments has a rather simple interpretation: The way in which decoherence is affected is similar to the manner in which atomic emission properties are modified by the presence of conducting boundaries. Thus, the effect of the boundaries does not have a well defined sign and may produce either more decoherence or complete recoherence (i.e. smaller or higher fringe visibility than in vacuum) depending on the orientation of the relevant trajectories with respect to the conductor's plane. The effect discussed here is conceptually important due to its fundamental origin (i.e., it is always present) but its magnitude is too small to be within the reach of current experiments involving interference of neutral atoms in the vicinity of conducting planes \cite{atom-chip}. This work was supported by UBA, CONICET, Fundaci\'on Antorchas and ANPCyT, Argentina. F.D.M. thanks ICTP for hospitality during completion of this work. \end{document}
\begin{document} \maketitle \centerline{\scshape Harry Dankowicz$^*$} {\footnotesize \centerline{Department of Mechanical Science and Engineering} \centerline{University of Illinois at Urbana-Champaign} \centerline{Urbana, IL 61801, USA} } \centerline{\scshape Jan Sieber} {\footnotesize \centerline{College of Engineering, Mathematics and Physical Sciences} \centerline{University of Exeter} \centerline{Exeter EX4 4QF, United Kingdom} } \begin{abstract} This paper presents a rigorous framework for the continuation of solutions to nonlinear constraints and the simultaneous analysis of the sensitivities of test functions to constraint violations at each solution point using an adjoint-based approach. By the linearity of a problem Lagrangian in the associated Lagrange multipliers, the formalism is shown to be directly amenable to analysis using the \textsc{coco} software package, specifically its paradigm for staged problem construction. The general theory is illustrated in the context of algebraic equations and boundary-value problems, with emphasis on periodic orbits in smooth and hybrid dynamical systems, and quasiperiodic invariant tori of flows. In the latter case, normal hyperbolicity is used to prove the existence of continuous solutions to the adjoint conditions associated with the sensitivities of the orbital periods to parameter perturbations and constraint violations, even though the linearization of the governing boundary-value problem lacks a bounded inverse, as required by the general theory. An assumption of transversal stability then implies that these solutions predict the asymptotic phases of trajectories based at initial conditions perturbed away from the torus. Example \textsc{coco} code is used to illustrate the minimal additional investment in setup costs required to append sensitivity analysis to regular parameter continuation.
\end{abstract} \section{Introduction} In the search for optimal designs of engineering structures, the adjoint method is a commonplace tool for computing the sensitivities of response functions to variations in the design parameters given a set of constraints that respect physical laws and discrete design decisions (see the review \cite{Tortorelli94} and extensive references cited therein, as well as the more recent literature, e.g., \cite{rubino2018adjoint}). As observed in this literature, compared to direct approaches that necessitate the inversion of a problem linearization, the adjoint approach is advantageous in cases where the number of response functions is small relative to the number of design variables. In cases where the constraints are in the form of initial-value problems, the adjoint method results in adjoint differential equations that must be solved in backward time in order to determine the desired sensitivities (see, e.g., \cite{TraversoMagri2019}, where such an adjoint approach is used to perform real-time data assimilation for a predictive reduced-order model of a problem in nonlinear thermoacoustics). Modifications to such a formalism may be derived to handle so-called hybrid systems, in which continuously differentiable time-dependence is interrupted by event-driven state-space jumps and vector field discontinuities associated, e.g., with collisions in a multibody mechanical system~\cite{Sandu20} and switches in a relay. A reduced form of the adjoint formalism occurs in the computation of the asymptotic phase of a limit cycle~\cite{Govaerts06}. Generalizations to systems with delay~\cite{Ahsan21,novivcenko2012phase} or event-driven discontinuities~\cite{Park18,Shirisaka17} address the necessary modifications to the general theory and the associated adjoint conditions. A further generalization of the reduced formalism occurs in the analysis of asymptotic phases associated with perturbations off invariant tori in~\cite{demir2010}. 
With its origin in constrained-design optimization, the adjoint approach derives from the analysis of a Lagrangian (functional) in terms of the design variables, the response variables, and an auxiliary set of Lagrange multipliers. The adjoint conditions are then obtained by imposing vanishing infinitesimal variations of the Lagrangian with respect to the design and response variables. Due to the linearity of the Lagrangian in the Lagrange multipliers, the adjoint conditions are also linear in these variables, albeit with coefficients that depend nonlinearly on the design and response variables. This linearity lends itself to a staged approach to problem construction in which constraints and the associated terms in the adjoint conditions are added successively according to an ordering that is sensible to the designer. Such a staged approach also naturally respects a modular approach to constraints, in which constraints are grouped and composed through the application of glue. The \textsc{Matlab}-based software package \textsc{coco} \cite{dankowicz2013recipes,COCO} supports such a staged construction of both constraints and adjoint conditions. It is, therefore, able to integrate sensitivity analysis with parameter continuation along implicitly-defined solution manifolds, almost out of the box. In recent papers~\cite{ahsan2020optimization,li2017staged,li2020optimization}, this functionality was used to demonstrate a successive continuation approach to locating extrema of a single objective function along families of solutions to nonlinear boundary-value problems. In the present paper\footnote{\jrem{The code included in this paper constitutes fully executable scripts. Complete code, including that used to generate the results in Fig.~\ref{fig:invc}, is available at \url{https://github.com/jansieber/adjoint-sensitivity2022-supp}.}}, we demonstrate this functionality in the context of the more general problem of sensitivity analysis.
With this past work in mind, the purpose of this paper is to collect, and present original rigorous derivations of, earlier results, specifically those pertaining to phase reduction analysis near limit cycles and quasiperiodic invariant tori. Here, and in some contrast to the above-cited references, the emphasis is on a formalism that respects its origin in a problem Lagrangian and that, thereby, provides new interpretations of the results obtained in the existing literature. Particular care is taken in investigating the solvability of the adjoint conditions, especially in the case of quasiperiodic invariant tori. Several example low-level encodings in \textsc{coco} demonstrate the utility of its core functionality, as well as the ways in which the analysis may be generalized to more complicated examples. The remainder of the paper is organized as follows. In Section~\ref{sec:adjoint-based sensitivity analysis}, we develop the theoretical foundation and illustrate its application to an algebraic problem using explicit analysis and with an implementation in \textsc{coco}. Section~\ref{sec:Examples} contrasts the direct approach with the adjoint method for deriving well-known results regarding sensitivities associated with an initial-value problem, a final-value problem, and a periodic boundary-value problem in smooth dynamical systems. It further considers the implications of event-driven discontinuities of state and vector field and presents results for a periodic orbit with a single state-space discontinuity. A more demanding analysis of two-dimensional quasiperiodic invariant tori is considered in Section~\ref{sec:quasiperiodic invariant tori}. Here, normal hyperbolicity is relied upon to establish the solvability of the adjoint conditions and transversal stability is assumed only to derive the asymptotic phase dynamics. 
A discussion of the regularity of the problem linearization in Section~\ref{sec:regularity} shows that while the linear problem lacks a bounded inverse, the sensitivities computed using the adjoint method represent components of the problem Jacobian with bounded inverses. The theoretical results are illustrated in the context of continuation along a family of invariant curves of a two-dimensional map in Section~\ref{sec:invc}. Section~\ref{sec:construction} reviews the construction paradigm of \textsc{coco} with particular emphasis on the staged introduction of constraints and contributions to the adjoint conditions. These principles are illustrated in Section~\ref{sec:invc:coco} using an explicit \textsc{coco} encoding of the invariant curve problem from Section~\ref{sec:invc}. The paper concludes in Section~\ref{sec:conclusions} with a brief outlook on ideas worth exploring. \section{Adjoint-based sensitivity analysis} \label{sec:adjoint-based sensitivity analysis} We precede our treatment of periodic orbits and quasiperiodic invariant tori with a general theoretical formalism and an algebraic example that illustrates the promise of the adjoint-based analysis and its implementation in the \textsc{coco} software package. The section assumes some familiarity with \textsc{coco}'s design principles and command structure, as can be gleaned from extensive tutorial documentation included with the \textsc{coco} release and the monograph \cite{dankowicz2013recipes}.
\subsection{Theoretical preliminaries} \label{sec:preliminaries} Consider the Lagrangian \begin{equation} \label{eq:genlag} L(u,\mu,\lambda,\eta):=\langle\lambda,\Phi(u)\rangle_{\mathcal{R}_\Phi}+\langle\eta,\Psi(u)-\mu\rangle_{\mathbb{R}^{n_\Psi}} \end{equation} in terms of the \textit{continuation variables} $u\in\mathcal{U}_\Phi$, \textit{zero functions} $\Phi:\mathcal{U}_\Phi\rightarrow\mathcal{R}_\Phi$, \textit{monitor functions} $\Psi:\mathcal{U}_\Phi\rightarrow\mathbb{R}^{n_\Psi}$, \textit{continuation parameters} $\mu\in\mathbb{R}^{n_\Psi\times 1}$, and \textit{adjoint variables} $\lambda\in\mathcal{R}^\ast_\Phi$ and $\eta\in\mathbb{R}^{1\times n_\Psi}$, where $\mathcal{R}_\Phi^*$ denotes the dual space of linear functionals on $\mathcal{R}_\Phi$. The variations of $L$ with respect to $u$, $\lambda$, and $\eta$ vanish at a point $(u,\mu,\lambda,\eta)$ provided that \begin{equation}\label{eq:gensys} \Phi(u)=0,\,\Psi(u)-\mu=0,\, (D\Phi(u))^\ast\lambda+ (D\Psi(u))^\ast\eta=0. \end{equation} Consistent with the \textsc{coco} syntax, we refer to the first two of these equations as an \textit{extended continuation problem}. The final equation is the corresponding set of \textit{adjoint conditions}, which are linear in the adjoint variables. Suppose that the \textit{zero problem} $\Phi(u)=0$ is regular\footnote{The equation $\Phi=0$ on $\mathcal{U}$ is said to be \textit{regular with dimensional deficit} $d$ at a solution point $\tilde{u}$ if there exists a function $\Psi:\mathcal{U}\rightarrow\mathbb{R}^d$ such that the map $F:u\mapsto(\Phi(u),\Psi(u))$ is continuously Fr\'{e}chet differentiable on a neighborhood of $\tilde{u}$ and $DF(\tilde{u})$ has a bounded inverse.} with dimensional deficit $d<n_\Psi$ at a solution $\tilde{u}$ and let $\tilde{\mu}:=\Psi(\tilde{u})$.
Let $\mathbb{I}$, $\mathbb{J}_1$, and $\mathbb{J}_2$ be three disjoint subsets of $\{1,\ldots,n_\Psi\}$ such that $|\mathbb{I}\cup\mathbb{J}_1|=d$, $\mathbb{J}_1\cup\mathbb{J}_2=\{1,\ldots,n_\Psi\}\setminus\mathbb{I}$, and the reduced continuation problem \begin{equation} \Phi(u)=0,\,\Psi_{\mathbb{I}\,\cup\,\mathbb{J}_1}(u)-\tilde{\mu}_{\mathbb{I}\,\cup\,\mathbb{J}_1}=0 \end{equation} is regular at $\tilde{u}$ with zero dimensional deficit. Finally, denote by $(\tilde{\lambda},\tilde{\eta})$ the solution to the adjoint conditions \begin{equation}\label{gen:adjoint} (D\Phi(\tilde{u}))^\ast\lambda+(D\Psi(\tilde{u}))^\ast\eta=0 \end{equation} with $\tilde{\eta}_{\mathbb{J}_2\setminus k}=0$ and $\tilde{\eta}_k=1$ for some $k\in\mathbb{J}_2$. Then, we show below that the components of $(\tilde{\lambda},\tilde{\eta}_{\mathbb{I}\,\cup\,\mathbb{J}_1})$ describe the sensitivities of the monitor function $\Psi_k(u)$ to violations of the zero problem and perturbations in $\mu_{\mathbb{I}\,\cup\,\mathbb{J}_1}$, respectively, that perturb $u$ away from $\tilde{u}$.
Indeed, for small $(\delta\Phi,\delta\mu_{\mathbb{I}\,\cup\,\mathbb{J}_1})\in\mathcal{R}_\Phi\times\mathbb{R}^{n_\Psi}$, the perturbed problem \begin{equation} \Phi(u)=\delta\Phi,\,\Psi_{\mathbb{I}\,\cup\,\mathbb{J}_1}(u)-\tilde{\mu}_{\mathbb{I}\cup\mathbb{J}_1}=\delta\mu_{\mathbb{I}\,\cup\,\mathbb{J}_1} \end{equation} has a locally unique solution $u=\tilde{u}+\delta u$, where \begin{equation}\label{gen:implicit:diff} \delta\Phi=D\Phi(\tilde{u})\delta u+\mathcal{O}(\|\delta u\|^2),\quad \delta\mu_{\mathbb{I}\,\cup\,\mathbb{J}_1}=D\Psi_{\mathbb{I}\,\cup\,\mathbb{J}_1}(\tilde{u})\delta u+\mathcal{O}(\|\delta u\|^2)\mbox{.} \end{equation} The perturbation $\delta u$ results in a perturbation to the value of the monitor function $\Psi_k$ given by \begin{align}\label{gen:diff:insert} \delta\Psi_k&=D\Psi_k(\tilde{u})\delta u+\mathcal{O}(\|\delta u\|^2)=\langle(D\Psi_k(\tilde{u}))^\ast,\delta u\rangle_{\mathcal{U}_\Phi}+\mathcal{O}(\|\delta u\|^2), \end{align} where the second equality follows from the formal definition of the adjoint. We may determine the sensitivity of $\Psi_k$ to violations of the zero problem and perturbations in $\mu_{\mathbb{I}\,\cup\,\mathbb{J}_1}$ by solving \eqref{gen:implicit:diff} for $\delta u$ in terms of $\delta\Phi$ and $\delta\mu_{\mathbb{I}\,\cup\,\mathbb{J}_1}$, substituting the result into the middle expression of \eqref{gen:diff:insert} and identifying the coefficients in front of $\delta\Phi$ and $\delta\mu_{\mathbb{I}\,\cup\,\mathbb{J}_1}$, respectively. We call this the \emph{direct differentiation} approach.
Alternatively, upon inserting the solution $(\tilde{\lambda},\tilde{\eta})$ from \eqref{gen:adjoint} with $\tilde{\eta}_{\mathbb{J}_2\setminus k}=0$ and $\tilde{\eta}_k=1$ in the rightmost expression of \eqref{gen:diff:insert}, again applying the formal definition of adjoints, and using \eqref{gen:implicit:diff}, we arrive at the well-known adjoint sensitivity formula \begin{align}\label{gen:adj:sensitivity} \delta\Psi_k&=-\langle\tilde{\lambda},\delta\Phi\rangle_{\mathcal{R}_\Phi}-\langle\tilde{\eta}_{\mathbb{I}\,\cup\,\mathbb{J}_1},\delta\mu_{\mathbb{I}\,\cup\,\mathbb{J}_1}\rangle_{\mathbb{R}^{n_\Psi}}+\mathcal{O}(\|(\delta\Phi,\delta \mu_{\mathbb{I}\,\cup\,\mathbb{J}_1})\|^2). \end{align} This confirms the claimed relationship between the components of $(\tilde{\lambda},\tilde{\eta}_{\mathbb{I}\,\cup\,\mathbb{J}_1})$ and the sensitivities of $\Psi_k(u)$ to violations of the zero problem and perturbations in $\mu_{\mathbb{I}\,\cup\,\mathbb{J}_1}$, respectively, that perturb $u$ away from $\tilde{u}$. The \emph{adjoint-based} analysis thus produces the sensitivities directly from the solution of \eqref{gen:adjoint} without the need to first invert \eqref{gen:implicit:diff}. In the context of parameter continuation using the \textsc{coco} package, elements of $\mathbb{I}$ index \textit{inactive} continuation parameters that impose permanent constraints on the continuation variables during a given continuation run. In contrast, elements of $\mathbb{J}_1\cup\mathbb{J}_2$ index \textit{active} continuation parameters that track the values of the corresponding monitor functions during continuation. Different choices of $\mathbb{J}_2$ and $k$ in the assignment of values to elements of $\tilde{\eta}$ then yield different combinations of sensitivities. The \textsc{coco} construction paradigm does not commit the user to a particular choice of values for the adjoint variables $\eta$. 
Instead, much like with the dimension of the solution manifold, this decision can be made at runtime (e.g., using the function \mcode{coco_set_parival} applied to the corresponding complementary continuation parameters), after problem construction. This is a consequence of the general formalism which produces terms in the adjoint conditions associated with each individual zero or monitor function but omits the conditions that would follow from also performing variations with respect to elements of $\mu$. Given $\tilde{u}$, once the adjoint conditions have been derived, a particular choice of $\mathbb{J}_2$ and $k$ has been made, and the components of $\eta_{\mathbb{J}_2}$ have been assigned accordingly to $0$ or $1$, a solution for $\tilde{\lambda}$ and $\tilde{\eta}_{\mathbb{I}\,\cup\,\mathbb{J}_1}$ may be obtained by solving a linear problem. If a solution has been found for one such $\tilde{u}$, we may continue such a solution under variations in $\mu_{\mathbb{J}_1\cup\,\mathbb{J}_2}$. Since \textsc{coco} treats the adjoint conditions as part of an augmented continuation problem, which is assumed to be nonlinear, a solution to the adjoint conditions is typically computed with \textsc{coco} using continuation under variations in $\eta_k$ from $0$ to $1$. For the same reason, \textsc{coco} does not currently take advantage of the linear form in order to solve a matrix version of the adjoint conditions with columns of $\eta_{\mathbb{J}_2}$ representing different choices of $k$. Simultaneous analysis in the current release of \textsc{coco} for different choices of $\mathbb{J}_2$ and $k$ instead requires duplicate copies of the adjoint conditions in the augmented continuation problem. 
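Before turning to an example in \textsc{coco}, the relation between the adjoint conditions \eqref{gen:adjoint} and the sensitivity formula \eqref{gen:adj:sensitivity} may be checked on a minimal toy problem. The sketch below (plain Python for illustration only; \textsc{coco} itself is \textsc{Matlab}-based) takes $\Phi(u)=u_1^2+u_2^2-1$ on $\mathbb{R}^2$ and $\Psi(u)=(u_1,u_2)$, so that $n_\Psi=2$ and $d=1$, and chooses $\mathbb{I}\cup\mathbb{J}_1=\{1\}$, $\mathbb{J}_2=\{2\}$, and $k=2$.

```python
import numpy as np

# Toy illustration of the formalism (our sketch, not COCO): zero function
# Phi(u) = u1^2 + u2^2 - 1, monitor functions Psi(u) = (u1, u2), so the
# dimensional deficit is d = 1.  We ask for the sensitivity of Psi_2 = u2
# to perturbations of mu_1 and to violations of the zero problem.
u = np.array([0.6, 0.8])                    # a point on the circle

# Adjoint conditions (DPhi)^T lam + (DPsi)^T eta = 0 with eta_2 = 1:
#   2 u1 lam + eta1 = 0,   2 u2 lam + 1 = 0.
lam = -1.0 / (2 * u[1])
eta1 = -2 * u[0] * lam                      # = u1/u2

# Direct check by finite differences: perturb mu_1 (i.e. u1) or the zero
# problem, resolve u2 from u2 = sqrt(1 + dPhi - u1^2), and compare slopes
# with the adjoint prediction dPsi_2 = -lam*dPhi - eta1*dmu_1.
h = 1e-6
dPsi2_dmu1 = (np.sqrt(1 - (u[0] + h)**2) - u[1]) / h
dPsi2_dPhi = (np.sqrt(1 + h - u[0]**2) - u[1]) / h

print(-eta1, dPsi2_dmu1)    # sensitivity to mu_1: both ~ -u1/u2 = -0.75
print(-lam, dPsi2_dPhi)     # sensitivity to dPhi: both ~ 1/(2 u2) = 0.625
```

The adjoint route yields all sensitivities from one linear solve, whereas the direct route requires re-solving the perturbed problem for each perturbation direction.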
\subsection{An algebraic example} \label{sec:illustration} We illustrate the ideas of the previous section by considering the Lagrangian \begin{align} L&=\lambda_{\mathrm{am,1}}\left( c_1^2-a_1^2-b_1^2\right)+\lambda_{\mathrm{am,2}}\left(c_2^2-a_2^2-b_2^2\right)+\lambda_\mathrm{fr}\left(\omega_1-\omega_2-\varepsilon\right)\nonumber\\ &\qquad+\lambda_{\mathrm{de},1,\mathrm{re}}\left((1-\omega_1^2)a_1+2\zeta\omega_1b_1-1\right)+\lambda_{\mathrm{de},1,\mathrm{im}}\left((1-\omega_1^2)b_1-2\zeta\omega_1a_1\right)\nonumber\\ &\qquad+\lambda_{\mathrm{de},2,\mathrm{re}}\left( (1-\omega_2^2)a_2+2\zeta\omega_2b_2-1\right)+\lambda_{\mathrm{de},2,\mathrm{im}}\left((1-\omega_2^2)b_2-2\zeta\omega_2a_2\right)\nonumber\\ &\qquad+\eta_\mathrm{da}\left(c_1-c_2-\Delta\right)+\eta_\mathrm{av}\left(\frac{\omega_1+\omega_2}{2}-\bar{\omega}\right)+\eta_\varepsilon\left(\varepsilon-\varepsilon_0\right)+\eta_\zeta\left(\zeta-\zeta_0\right) \end{align} in terms of the continuation variables $a_1$, $b_1$, $c_1$, $a_2$, $b_2$, $c_2$, $\omega_1$, $\omega_2$, $\zeta$, and $\varepsilon$, continuation parameters $\Delta$, $\bar{\omega}$, $\varepsilon_0$, and $\zeta_0$, and adjoint variables $\lambda_{\mathrm{am,1}}$, $\lambda_{\mathrm{am,2}}$, $\lambda_\mathrm{fr}$, $\lambda_{\mathrm{de},1,\mathrm{re}}$, $\lambda_{\mathrm{de},1,\mathrm{im}}$, $\lambda_{\mathrm{de},2,\mathrm{re}}$, $\lambda_{\mathrm{de},2,\mathrm{im}}$, $\eta_\mathrm{da}$, $\eta_\mathrm{av}$, $\eta_\varepsilon$, and $\eta_\zeta$.
The corresponding nonlinear algebraic zero problem \begin{gather} c_1^2-a_1^2-b_1^2=0,\,c_2^2-a_2^2-b_2^2=0,\,\omega_1-\omega_2-\varepsilon=0,\label{eq:const1}\\ (1-\omega_1^2)a_1+2\zeta\omega_1b_1-1=0,\,(1-\omega_1^2)b_1-2\zeta\omega_1a_1=0,\\ (1-\omega_2^2)a_2+2\zeta\omega_2b_2-1=0,\,(1-\omega_2^2)b_2-2\zeta\omega_2a_2=0.\label{eq:const3} \end{gather} corresponds to the search for harmonic solutions of the form \begin{align} x_1(t)=c_1\cos(\omega_1 t+\phi_1) =a_1\cos\omega_1 t+b_1\sin\omega_1 t,\\x_2(t)=c_2\cos(\omega_2 t+\phi_2)=a_2\cos\omega_2 t+b_2\sin\omega_2 t \end{align} of the linear differential equations \begin{equation} \label{eq:linosc} \ddot{x}_1+2\zeta\dot{x}_1+x_1=\cos\omega_1 t,\,\ddot{x}_2+2\zeta\dot{x}_2+x_2=\cos\omega_2 t, \end{equation} for excitation frequencies $\omega_1$ and $\omega_2$ that differ by $\varepsilon$. Restricting attention to $c_1,c_2>0$, straightforward analysis of \eqref{eq:const1}-\eqref{eq:const3} yields the relationship \begin{align} \Delta&= \frac{1}{\sqrt{(1-(\bar{\omega}+\varepsilon_0/2)^2)^2+4\zeta_0^2(\bar{\omega}+\varepsilon_0/2)^2}}\nonumber\\ &\qquad-\frac{1}{\sqrt{(1-(\bar{\omega}-\varepsilon_0/2)^2)^2+4\zeta_0^2(\bar{\omega}-\varepsilon_0/2)^2}} \label{eq:Deltaexp} \end{align} between the four continuation parameters. We may obtain the sensitivity of $\Delta$ with respect to $\bar{\omega}$, $\zeta_0$, or $\varepsilon_0$ by direct differentiation of the right-hand side. For example, the sensitivity of $\Delta$ with respect to $\bar{\omega}$ behaves as $o(\varepsilon_0)$ in the limit as $\varepsilon_0\rightarrow 0$ provided that \begin{equation} 3\bar{\omega}^6+5(2\zeta_0^2-1)\bar{\omega}^4+(16\zeta_0^4-16\zeta_0^2+1)\bar{\omega}^2+1-2\zeta_0^2=0, \end{equation} corresponding to an inflection point in the frequency response curve for the differential equation \begin{equation} \ddot{x}+2\zeta_0\dot{x}+x=\cos\bar{\omega}t.
\end{equation} For $\zeta_0=0.1$, two such inflection points are located at $\bar{\omega}\approx 0.92$ and $1.06$. Alternatively, in the absence of an explicit solution, we may consider the linearization \begin{gather} 2c_1\delta_{c_1}-2a_1\delta_{a_1}-2b_1\delta_{b_1}=\delta_{\mathrm{am},1},\label{eq:sec2lin1}\\ 2c_2\delta_{c_2}-2a_2\delta_{a_2}-2b_2\delta_{b_2}=\delta_{\mathrm{am},2},\\ \delta_{\omega_1}-\delta_{\omega_2}-\delta_\varepsilon=\delta_\mathrm{fr},\\ -2\omega_1a_1\delta_{\omega_1}+(1-\omega_1^2)\delta_{a_1}+2\omega_1b_1\delta_\zeta+2\zeta b_1\delta_{\omega_1}+2\zeta\omega_1\delta_{b_1}=\delta_{\mathrm{de},1,\mathrm{re}},\\ -2\omega_1b_1\delta_{\omega_1}+(1-\omega_1^2)\delta_{b_1}-2\omega_1a_1\delta_\zeta-2\zeta a_1\delta_{\omega_1}-2\zeta\omega_1\delta_{a_1}=\delta_{\mathrm{de},1,\mathrm{im}},\\ -2\omega_2a_2\delta_{\omega_2}+(1-\omega_2^2)\delta_{a_2}+2\omega_2b_2\delta_\zeta+2\zeta b_2\delta_{\omega_2}+2\zeta\omega_2\delta_{b_2}=\delta_{\mathrm{de},2,\mathrm{re}},\\ -2\omega_2b_2\delta_{\omega_2}+(1-\omega_2^2)\delta_{b_2}-2\omega_2a_2\delta_\zeta-2\zeta a_2\delta_{\omega_2}-2\zeta\omega_2\delta_{a_2}=\delta_{\mathrm{de},2,\mathrm{im}},\\ \delta_{c_1}-\delta_{c_2}=\delta_\Delta,\, \frac{\delta_{\omega_1}+\delta_{\omega_2}}{2}=\delta_{\bar{\omega}},\, \delta_\varepsilon=\delta_{\varepsilon_0},\,\delta_\zeta=\delta_{\zeta_0}\label{eq:sec2lin8} \end{gather} around some solution to the extended continuation problem.
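As a numerical illustration (our own sketch in plain Python, not \textsc{coco} code, with parameter values chosen purely for illustration), the linearized system \eqref{eq:sec2lin1}-\eqref{eq:sec2lin8} can be assembled and solved for $\delta_\Delta$ with $\delta_{\bar{\omega}}=1$ and all remaining right-hand sides set to zero; the resulting value reproduces the derivative of the closed-form expression \eqref{eq:Deltaexp} with respect to $\bar{\omega}$.

```python
import numpy as np

# Closed-form response of eqs. (const1)-(const3): for given omega w and zeta z,
#   a = (1-w^2)/D,  b = 2 z w / D,  c = 1/sqrt(D),  D = (1-w^2)^2 + 4 z^2 w^2.
def response(w, z):
    D = (1 - w**2)**2 + 4 * z**2 * w**2
    return (1 - w**2) / D, 2 * z * w / D, 1 / np.sqrt(D)

wbar, eps0, zeta0 = 0.8, 0.01, 0.1          # illustrative parameter values
w1, w2, z = wbar + eps0 / 2, wbar - eps0 / 2, zeta0
a1, b1, c1 = response(w1, z)
a2, b2, c2 = response(w2, z)

# Unknowns: [da1, db1, dc1, da2, db2, dc2, dw1, dw2, dz, de, dDelta];
# rows follow the linearized equations with delta_wbar = 1 and all other
# right-hand sides (constraint violations, parameter perturbations) zero.
A = np.zeros((11, 11)); rhs = np.zeros(11)
A[0, [0, 1, 2]] = [-2 * a1, -2 * b1, 2 * c1]
A[1, [3, 4, 5]] = [-2 * a2, -2 * b2, 2 * c2]
A[2, [6, 7, 9]] = [1, -1, -1]
A[3, [0, 1, 6, 8]] = [1 - w1**2, 2 * z * w1, -2 * w1 * a1 + 2 * z * b1, 2 * w1 * b1]
A[4, [0, 1, 6, 8]] = [-2 * z * w1, 1 - w1**2, -2 * w1 * b1 - 2 * z * a1, -2 * w1 * a1]
A[5, [3, 4, 7, 8]] = [1 - w2**2, 2 * z * w2, -2 * w2 * a2 + 2 * z * b2, 2 * w2 * b2]
A[6, [3, 4, 7, 8]] = [-2 * z * w2, 1 - w2**2, -2 * w2 * b2 - 2 * z * a2, -2 * w2 * a2]
A[7, [2, 5, 10]] = [1, -1, -1]               # dc1 - dc2 = dDelta
A[8, [6, 7]] = [0.5, 0.5]; rhs[8] = 1.0      # (dw1 + dw2)/2 = delta_wbar = 1
A[9, 9] = 1.0                                # de = delta_eps0 = 0
A[10, 8] = 1.0                               # dz = delta_zeta0 = 0
dDelta_lin = np.linalg.solve(A, rhs)[10]     # sensitivity d Delta / d wbar

# Compare with a finite difference of the closed form (Deltaexp).
def Delta(wb):
    return response(wb + eps0 / 2, zeta0)[2] - response(wb - eps0 / 2, zeta0)[2]
h = 1e-6
dDelta_fd = (Delta(wbar + h) - Delta(wbar - h)) / (2 * h)
print(dDelta_lin, dDelta_fd)                 # the two values agree
```

Note that this direct route solves an eleven-dimensional linear system for each right-hand side of interest, in line with the comparison drawn in the text between direct differentiation and the adjoint conditions.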
The sensitivities of $\Delta$ with respect to $\bar{\omega}$, $\varepsilon_0$, $\zeta_0$, and the constraint violations $\delta_{\mathrm{am},1}$, $\delta_{\mathrm{am},2}$, $\delta_\mathrm{fr}$, $\delta_{\mathrm{de},1,\mathrm{re}}$, $\delta_{\mathrm{de},1,\mathrm{im}}$, $\delta_{\mathrm{de},2,\mathrm{re}}$, and $\delta_{\mathrm{de},2,\mathrm{im}}$ may then be obtained by solving for $\delta_\Delta$ and inspecting the coefficients in front of $\delta_{\bar{\omega}}$, $\delta_{\varepsilon_0}$, $\delta_{\zeta_0}$, $\delta_{\mathrm{am},1}$, $\delta_{\mathrm{am},2}$, $\delta_\mathrm{fr}$, $\delta_{\mathrm{de},1,\mathrm{re}}$, $\delta_{\mathrm{de},1,\mathrm{im}}$, $\delta_{\mathrm{de},2,\mathrm{re}}$, and $\delta_{\mathrm{de},2,\mathrm{im}}$, respectively. As shown in the previous section, these sensitivities may also be obtained directly by solving the adjoint conditions \begin{gather} 2c_1\lambda_{\mathrm{am},1}+\eta_\mathrm{da}=0,\label{eq:adjobj}\\ 2c_2\lambda_{\mathrm{am},2}-\eta_\mathrm{da}=0,\\ -2a_1\lambda_{\mathrm{am},1}+(1-\omega_1^2)\lambda_{\mathrm{de},1,\mathrm{re}}-2\zeta\omega_1\lambda_{\mathrm{de},1,\mathrm{im}}=0,\\ -2a_2\lambda_{\mathrm{am},2}+(1-\omega_2^2)\lambda_{\mathrm{de},2,\mathrm{re}}-2\zeta\omega_2\lambda_{\mathrm{de},2,\mathrm{im}}=0,\\ -2b_1\lambda_{\mathrm{am},1}+2\zeta\omega_1\lambda_{\mathrm{de},1,\mathrm{re}}+(1-\omega_1^2)\lambda_{\mathrm{de},1,\mathrm{im}}=0,\\ -2b_2\lambda_{\mathrm{am},2}+2\zeta\omega_2\lambda_{\mathrm{de},2,\mathrm{re}}+(1-\omega_2^2)\lambda_{\mathrm{de},2,\mathrm{im}}=0,\\ \lambda_\mathrm{fr}+2(\zeta b_1-\omega_1a_1)\lambda_{\mathrm{de},1,\mathrm{re}}-2(\zeta a_1+\omega_1b_1)\lambda_{\mathrm{de},1,\mathrm{im}}+\eta_\mathrm{av}/2=0,\\ -\lambda_\mathrm{fr}+2(\zeta b_2-\omega_2a_2)\lambda_{\mathrm{de},2,\mathrm{re}}-2(\zeta a_2+\omega_2b_2)\lambda_{\mathrm{de},2,\mathrm{im}}+\eta_\mathrm{av}/2=0,\\ 2\omega_1 b_1\lambda_{\mathrm{de},1,\mathrm{re}}-2\omega_1 a_1\lambda_{\mathrm{de},1,\mathrm{im}}+2\omega_2
b_2\lambda_{\mathrm{de},2,\mathrm{re}}-2\omega_2 a_2\lambda_{\mathrm{de},2,\mathrm{im}}+\eta_\zeta=0,\\ -\lambda_\mathrm{fr}+\eta_\varepsilon=0, \label{eq:adjobjlast} \end{gather} with $\eta_\mathrm{da}=1$, for the remaining adjoint variables. Notably, the linearized equations \eqref{eq:sec2lin1}-\eqref{eq:sec2lin8} impose eleven linear constraints on the eleven unknowns $\delta_{a_1}$, $\delta_{b_1}$, $\delta_{c_1}$, $\delta_{a_2}$, $\delta_{b_2}$, $\delta_{c_2}$, $\delta_{\omega_1}$, $\delta_{\omega_2}$, $\delta_\zeta$, $\delta_\varepsilon$, and $\delta_\Delta$, but the only expression of interest is the solution for $\delta_\Delta$, and this is obtained only as a linear combination of constraint violations and parameter perturbations. In contrast, the adjoint conditions impose ten linear constraints on the ten unknowns $\lambda_{\mathrm{am,1}}$, $\lambda_{\mathrm{am,2}}$, $\lambda_\mathrm{fr}$, $\lambda_{\mathrm{de},1,\mathrm{re}}$, $\lambda_{\mathrm{de},1,\mathrm{im}}$, $\lambda_{\mathrm{de},2,\mathrm{re}}$, $\lambda_{\mathrm{de},2,\mathrm{im}}$, $\eta_\mathrm{av}$, $\eta_\varepsilon$, and $\eta_\zeta$, all of which are of interest and immediately represent the sought sensitivities of $\Delta$ with respect to the individual constraint violations and perturbations of the continuation parameters $\bar{\omega}$, $\varepsilon_0$, and $\zeta_0$. We proceed to implement this analysis in \textsc{coco} \jrem{with fixed $\varepsilon=0.01$ and $\zeta=0.1$.
In this case, the derivative of the right-hand side of \eqref{eq:Deltaexp} with respect to $\bar{\omega}$ vanishes for $\bar{\omega}\approx 0.92043919$ and $1.06205837$ with residuals of $2\times 10^{-7}$ and $4\times 10^{-7}$, respectively.} \textsc{coco}-compatible encodings of the zero function $\Phi$ and its Jacobian\footnote{In the absence of explicit encodings of second derivatives, \textsc{coco} relies on a suitable finite-difference approximation of these derivatives, as necessary.} $D\Phi$ are as follows. \begin{lstlisting}[language=coco-highlight] function [data, f] = phi(prob, data, u) v = num2cell(u); [a1, b1, c1, a2, b2, c2, o1, o2, ze, ep] = deal(v{:}); f = [c1^2-a1^2-b1^2; c2^2-a2^2-b2^2; o1-o2-ep; (1-o1^2)*a1+2*ze*o1*b1-1; (1-o1^2)*b1-2*ze*o1*a1; (1-o2^2)*a2+2*ze*o2*b2-1; (1-o2^2)*b2-2*ze*o2*a2]; end \end{lstlisting} \begin{lstlisting}[language=coco-highlight] function [data, J] = dphi(prob, data, u) v = num2cell(u); [a1, b1, c1, a2, b2, c2, o1, o2, ze, ep] = deal(v{:}); J = [-2*a1,-2*b1,2*c1,0,0,0,0,0,0,0; 0,0,0,-2*a2,-2*b2,2*c2,0,0,0,0; 0,0,0,0,0,0,1,-1,0,-1; 1-o1^2,2*ze*o1,0,0,0,0,-2*o1*a1+2*ze*b1,0,2*o1*b1,0; -2*ze*o1,1-o1^2,0,0,0,0,-2*o1*b1-2*ze*a1,0,2*o1*a1,0; 0,0,0,1-o2^2,2*ze*o2,0,0,-2*o2*a2+2*ze*b2,2*o2*b2,0; 0,0,0,-2*ze*o2,1-o2^2,0,0,-2*o2*b2-2*ze*a2,2*o2*a2,0]; end \end{lstlisting} Similarly, \textsc{coco}-compatible encodings of the monitor function $\Psi$ and its Jacobian $D\Psi$ are given below. 
\begin{lstlisting}[language=coco-highlight] function [data, f] = psi(prob, data, u) v = num2cell(u); [a1, b1, c1, a2, b2, c2, o1, o2, ze, ep] = deal(v{:}); f = [c1-c2; (o1+o2)/2; ep; ze]; end \end{lstlisting} \begin{lstlisting}[language=coco-highlight] function [data, J] = dpsi(prob, data, u) v = num2cell(u); [a1, b1, c1, a2, b2, c2, o1, o2, ze, ep] = deal(v{:}); J = [0,0,1,0,0,-1,0,0,0,0; 0,0,0,0,0,0,1/2,1/2,0,0; 0,0,0,0,0,0,0,0,0,1; 0,0,0,0,0,0,0,0,1,0]; end \end{lstlisting} We proceed to construct and initialize the corresponding extended continuation problem using the following sequence of commands. \begin{lstlisting}[language=coco-highlight] >> prob = coco_prob(); >> prob = coco_add_func(prob, 'phi', @phi, @dphi, [], 'zero', ... 'u0', [-0.49; 4.9; 4.9; 0; 5; 5; 1.01; 1; .1; .01]); >> prob = coco_add_func(prob, 'psi', @psi, @dpsi, [], 'inactive', ... {'da', 'av', 'ep', 'ze'}, 'uidx', 1:10); \end{lstlisting} Here, an initial solution guess for the continuation variables is encapsulated in the vector array following the \mcode{'u0'} flag in the first call to \mcode{coco_add_func}. The \mcode{'inactive'} flag in the second call to \mcode{coco_add_func} implies that the corresponding continuation parameters, here designated by the string labels \mcode{'da'}, \mcode{'av'}, \mcode{'ep'}, and \mcode{'ze'}, are initially inactive. The commands \begin{lstlisting}[language=coco-highlight] >> prob = coco_add_adjt(prob, 'phi'); >> prob = coco_add_adjt(prob, 'psi', {'e.da','e.av','e.ep','e.ze'}, ... 'aidx', 1:10); \end{lstlisting} append the corresponding adjoint conditions with additional \textit{complementary continuation parameters}, designated by the string labels \mcode{'e.da'}, \mcode{'e.av'}, \mcode{'e.ep'}, and \mcode{'e.ze'}, and associated with complementary monitor functions whose values equal $\eta_\mathrm{da}$, $\eta_\mathrm{av}$, $\eta_\varepsilon$, and $\eta_\zeta$, respectively, in the notation of this section. These are inactive by default.
All the adjoint variables are initialized with their default value $0$ by this construction. To obtain the sensitivities of $\Delta$ with $\mathbb{I}=\{3,4\}$, $\mathbb{J}_1=\{2\}$, and $\mathbb{J}_2=\{1\}$, we first perform continuation along the one-dimensional solution manifold obtained by allowing \mcode{'da'}, \mcode{'e.da'}, \mcode{'e.av'}, \mcode{'e.ep'}, and \mcode{'e.ze'} to vary, while keeping \mcode{'av'}, \mcode{'ep'}, and \mcode{'ze'} fixed. \begin{lstlisting}[language=coco-highlight] >> coco(prob, 'run', [], 1, {'da', 'e.da', 'e.av', 'e.ep', 'e.ze'}, ... {[], [0 1]}) \end{lstlisting} \begin{lstlisting}[language=coco-small] STEP DAMPING NORMS COMPUTATION TIMES IT SIT GAMMA ||d|| ||f|| ||U|| F(x) DF(x) SOLVE 0 2.40e-01 1.00e+01 0.0 0.0 0.0 1 1 1.00e+00 3.72e-02 6.80e-04 1.00e+01 0.0 0.0 0.0 2 1 1.00e+00 9.75e-05 4.76e-09 1.00e+01 0.0 0.0 0.0 3 1 1.00e+00 6.83e-10 3.56e-15 1.00e+01 0.0 0.0 0.0 ... LABEL TYPE da e.da e.av e.ep e.ze ... 1 EP -7.3832e-02 0.0000e+00 0.0000e+00 0.0000e+00 0.0000e+00 ... 2 -7.3832e-02 2.5752e-01 1.2060e+00 1.8906e+00 -3.1450e-01 ... 3 -7.3832e-02 5.6770e-01 2.6587e+00 4.1678e+00 -6.9333e-01 ... 4 -7.3832e-02 8.7788e-01 4.1114e+00 6.4451e+00 -1.0722e+00 ... 5 EP -7.3832e-02 1.0000e+00 4.6833e+00 7.3416e+00 -1.2213e+00 \end{lstlisting} We proceed to extract the solution data from the point obtained with \mcode{'e.da'} equal to $1$ and use this to reconstruct and reinitialize the augmented continuation problem. \begin{lstlisting}[language=coco-highlight] >> prob = coco_prob(); >> chart = coco_read_solution('phi', 'run', 5, 'chart'); >> prob = coco_add_func(prob, 'phi', @phi, @dphi, [], 'zero', ... 'u0', chart.x); >> prob = coco_add_func(prob, 'psi', @psi, @dpsi, [], 'inactive', ... 
{'da', 'av', 'ep', 'ze'}, 'uidx', 1:10); >> chart = coco_read_adjoint('phi', 'run', 5, 'chart'); >> prob = coco_add_adjt(prob, 'phi', 'l0', chart.x); >> chart = coco_read_adjoint('psi', 'run', 5, 'chart'); >> prob = coco_add_adjt(prob, 'psi', {'e.da','e.av','e.ep','e.ze'}, ... 'aidx', 1:10, 'l0', chart.x); \end{lstlisting} Continuation along the solution manifold obtained for fixed \mcode{'ep'}, \mcode{'ze'}, and \mcode{'e.da'}, and for \mcode{'av'} in the interval $[0.5,2.5]$ is then triggered by the commands \begin{lstlisting}[language=coco-highlight] >> prob = coco_set(prob, 'cont', 'ItMX', 500, 'NPR', 100); >> coco(prob, 'run', [], 1, {'da', 'av', 'e.av', 'e.ep', 'e.ze'}, ... {[], [0.5 2.5]}) \end{lstlisting} where the \mcode{'ItMX'} and \mcode{'NPR'} options regulate the number of continuation steps in each direction from the initial solution and the screen output frequency. The resultant output is shown below \begin{lstlisting}[language=coco-small] STEP DAMPING NORMS COMPUTATION TIMES IT SIT GAMMA ||d|| ||f|| ||U|| F(x) DF(x) SOLVE 0 1.06e-14 1.90e+01 0.0 0.0 0.0 ... LABEL TYPE da av e.av e.ep e.ze ... 1 EP -7.3832e-02 1.0050e+00 4.6833e+00 7.3416e+00 -1.2213e+00 ... 2 FP -2.0587e-01 1.0621e+00 -1.7633e-06 2.0541e+01 -3.9743e+00 ... 3 -1.2158e-01 1.1556e+00 -9.0966e-01 1.2164e+01 -1.0384e+00 ... 4 EP -1.7965e-03 2.5000e+00 -2.6831e-03 1.7965e-01 -3.4810e-04 ... LABEL TYPE da av e.av e.ep e.ze ... 5 EP -7.3832e-02 1.0050e+00 4.6833e+00 7.3416e+00 -1.2213e+00 ... 6 1.7834e-01 9.2050e-01 2.9412e-03 -1.7795e+01 3.7095e+00 ... 7 FP 1.7834e-01 9.2044e-01 9.2729e-07 -1.7795e+01 3.7069e+00 ... 8 EP 1.6854e-02 5.0000e-01 -7.5097e-02 -1.6857e+00 1.8071e-02 \end{lstlisting} Here, the points denoted by \mcode{FP} are fold points (local extrema) in the quantity $\Delta$ and coincide with the loci of sign changes in the sensitivity of $\Delta$ with respect to $\bar{\omega}$, i.e., approximate inflection points. 
The corresponding values of \mcode{'av'} \jrem{(computed with the default residual tolerance of $10^{-6}$ and rounded off to $0.92044$ and $1.0621$)} agree with those predicted by the closed-form analysis in the first part of this section. \section{Sensitivities along solutions to ODEs} \label{sec:Examples} In this section, we derive several known results about the sensitivity of quantities associated with the behavior of smooth and hybrid dynamical systems to violations of a governing set of differential and algebraic constraints using either linearization of the governing constraints or the adjoint-based approach. Notably, we interpret the adjoint conditions liberally as matrix equalities, enabling simultaneous derivation of the sensitivities of a vector of monitor functions to constraint violations and perturbations in the remaining continuation parameters. \subsection{A single trajectory segment} \label{sec:Forward dynamics} Consider the flow $F(t,x,p)$ corresponding to the autonomous vector field $f:\mathbb{R}^n\times\mathbb{R}^q\rightarrow\mathbb{R}^n$, such that $x(\tau;T,x_0,p):=F(T\tau,x_0,p)$ is the unique solution to the initial-value problem with rescaled time: \begin{equation}\label{forw:ode} x'=Tf(x,p),\quad x(0)=x_0. \end{equation} We apply the formalism of Section~\ref{sec:preliminaries} to quantities involved in \eqref{forw:ode} in order to evaluate the sensitivities of $x(1;T,x_0,p)$ to its arguments. As in the previous section, we first perform direct differentiation and compare intermediate steps and final results to the predictions of the adjoint analysis. 
\subsubsection*{Direct differentiation} It follows directly by differentiation at $\tau=1$ that \begin{equation} \partial_{T}x(1;T,x_0,p)=\partial_tF(T,x_0,p)=f\left(F(T,x_0,p),p\right)=f(x(1;T,x_0,p),p), \end{equation} and we obtain the Jacobians $\partial_{x_0}x(1;T,x_0,p)=\partial_xF(T,x_0,p)$ and $\partial_{p}x(1;T,x_0,p)=\partial_pF(T,x_0,p)$ as the solutions at $\tau=1$ to the standard first-order variational initial-value problems \begin{equation} \label{eq:vareq} X'=T\partial_xf\left(x(\tau;T,x_0,p),p\right)X,\quad X(0)=I_n \end{equation} and \begin{equation} \label{eq:vareqp} P'=T\partial_xf\left(x(\tau;T,x_0,p),p\right)P+T\partial_pf\left(x(\tau;T,x_0,p),p\right),\quad P(0)=0, \end{equation} respectively, where $I_n$ denotes the corresponding identity matrix. Consider, for example, a perturbation of $x_0$ along the vector $f(x_0,p)$. Since \begin{equation} \frac{d}{d\tau}f(x(\tau;T,x_0,p),p)=T\partial_xf\left(x(\tau;T,x_0,p),p\right)f(x(\tau;T,x_0,p),p), \end{equation} it follows that \begin{equation} \label{eq:vfmap} f(x(\tau;T,x_0,p),p)=X(\tau)f(x_0,p), \end{equation} i.e., that, to linear order, $x(1;T,x_0,p)$ is perturbed by the vector $f(x(1;T,x_0,p),p)$. Moreover, by the variation-of-parameters formula, we obtain \begin{equation} P(\tau)=TX(\tau)\int_0^{\tau} X(\sigma)^{-1}\partial_pf\left(x(\sigma;T,x_0,p),p\right)\,\mathrm{d}\sigma. \end{equation} This gives the sensitivity $P(1)$ of $x(1;T,x_0,p)$ with respect to $p$ directly in terms of a convolution integral.
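For a linear vector field $f(x,p)=Ax$, the flow and the variational solution are available in closed form ($X(\tau)=e^{\tau TA}$), so the identity \eqref{eq:vfmap} and the formula $\partial_Tx(1;T,x_0,p)=f(x(1;T,x_0,p),p)$ can be checked directly. The following Python sketch (ours, not part of the paper's tooling) does so for a damped oscillator:

```python
import numpy as np

def expm(A, t):
    # matrix exponential of t*A via eigendecomposition (A assumed diagonalizable)
    w, V = np.linalg.eig(A)
    return ((V * np.exp(t * w)) @ np.linalg.inv(V)).real

A = np.array([[0.0, 1.0], [-1.0, -0.2]])   # linear damped oscillator
T = 1.7
x0 = np.array([1.0, 0.5])
f = lambda x: A @ x

x1 = expm(A, T) @ x0    # x(1; T, x0) = F(T, x0)
X1 = expm(A, T)         # X(1), the variational solution of eq:vareq at tau = 1

# eq:vfmap: perturbing x0 along f(x0) perturbs x(1) along f(x(1))
err_vfmap = np.linalg.norm(X1 @ f(x0) - f(x1))

# dx(1)/dT = f(x(1)); compare against a central finite difference
h = 1e-6
dT = (expm(A, T + h) @ x0 - expm(A, T - h) @ x0) / (2 * h)
err_dT = np.linalg.norm(dT - f(x1))
```

Both residuals vanish to within numerical precision, as expected.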
\subsubsection*{Regularity} Alternatively, in notation consistent with that of the previous section, let $\mathcal{U}_\Phi=C^1([0,1];\mathbb{R}^n)\times\mathbb{R}\times\mathbb{R}^q$ be the space of continuation variables, and define \begin{equation} \mathcal{U}\ni u=(x(\cdot),T,p)\mapsto \Phi(u):=x'(\cdot)-Tf(x(\cdot),p)\in C^0([0,1];\mathbb{R}^n)=\mathcal{R}_\Phi \end{equation} and \begin{equation} \label{eq:monitorforward} \Psi(u):=\begin{pmatrix}x(1)\\x(0)\\T\\p\end{pmatrix}\in\mathbb{R}^{2n+1+q}. \end{equation} In this case, every solution to the zero problem is regular with dimensional deficit $n+1+q$. Indeed, given a solution $(\tilde{x}(\cdot),T,p)$ and small constraint violation $\delta_\mathrm{ode} (\cdot)$, the linearized equation \begin{equation} \delta_\mathrm{ode}=\delta'_x-T\partial_xf(\tilde{x},p)\delta_x-f(\tilde{x},p)\delta_T-T\partial_pf(\tilde{x},p)\delta_p \end{equation} implies that \begin{align} \label{eq:linfor} \delta_x(\tau)&=X(\tau)\int_0^{\tau} X(\sigma)^{-1}\delta_\mathrm{ode}(\sigma)\,\mathrm{d}\sigma+X(\tau)\delta_x(0)+\tau f(\tilde{x}(\tau),p)\delta_T+P(\tau)\delta_p. \end{align} We thus obtain $\delta_x(\cdot)$ in terms of the $n+1+q$ quantities $\delta_x(0)$, $\delta_T$, and $\delta_p$. \subsubsection*{Adjoint analysis} Following \eqref{eq:genlag}, we write the Lagrangian \begin{align} \label{eq:lagrangetrajseg} L&=\int_0^1\lambda^{\mathsf{T}}(\tau)\left(x'(\tau)-Tf(x(\tau),p)\right)\,\mathrm{d}\tau+\eta_{x(1)}^{\mathsf{T}}\left(x(1)-\mu_{x(1)}\right)\nonumber\\&\quad+\eta_{x(0)}^{\mathsf{T}}\left(x(0)-\mu_{x(0)}\right)+\eta_{T}\left(T-\mu_{T}\right)+\eta_{p}^{\mathsf{T}}\left(p-\mu_{p}\right) \end{align} in terms of the adjoint variables\footnote{Here, the dual space $\mathcal{R}^\ast_\Phi$ is the space of functions of bounded variation.
We restrict attention to the subspace of continuously differentiable functions $\lambda(\cdot)$ to allow the use of integration by parts when evaluating variations of $L$.} $\lambda(\cdot)\in C^1([0,1];\mathbb{R}^n)$, $\eta_{x(1)},\eta_{x(0)}\in\mathbb{R}^n$, $\eta_T\in\mathbb{R}$, and $\eta_p\in\mathbb{R}^q$. The adjoint conditions \eqref{gen:adjoint} may then be written \begin{align} \label{eq:adjforfirst} \delta x(\cdot)&: &0&=-\lambda^{\prime\,{\mathsf{T}}}-T\lambda^{\mathsf{T}}\partial_xf(x,p),\\ \delta x(1)&:&0&=\lambda^{\mathsf{T}}(1)+\eta_{x(1)}^{\mathsf{T}},\\ \delta x(0)&:&0&=-\lambda^{\mathsf{T}}(0)+\eta_{x(0)}^{\mathsf{T}},\\ \delta T&:&0&=-\int_0^1\lambda^{\mathsf{T}}(\tau)f(x(\tau),p)\,\mathrm{d}\tau+\eta_T,\\ \delta p&:&0&=-\int_0^1\lambda^{\mathsf{T}}(\tau)T\partial_pf(x(\tau),p)\,\mathrm{d}\tau+\eta_p^{\mathsf{T}}, \label{eq:adjforlast} \end{align} where we have labeled each condition by the corresponding variation $\delta u$. We let $\mathbb{I}=\emptyset$, $\mathbb{J}_1=\{n+1,\ldots,2n+1+q\}$, and $\mathbb{J}_2=\{1,\ldots,n\}$. With reference to \eqref{eq:monitorforward}, this choice corresponds to computing the sought sensitivities of $x(1)$ with respect to violations of the governing differential constraint and perturbations to $T$, $x(0)$, and $p$, respectively. It follows by inspection using \eqref{forw:ode}, \eqref{eq:vareq}, \eqref{eq:vareqp}, and \eqref{eq:adjforfirst}-\eqref{eq:adjforlast} that \begin{align} \label{eq:adjfornext1} \tilde{\lambda}^{\mathsf{T}}f(x,p)&\equiv\eta_T=-\eta^{\mathsf{T}}_{x(1)}f(x(1),p),\\ \label{eq:adjfornext2}\tilde{\lambda}^{\mathsf{T}}X&\equiv\eta^{\mathsf{T}}_{x(0)}=-\eta^{\mathsf{T}}_{x(1)}X(1),\\ \label{eq:adjfornext3}\tilde{\lambda}^{\mathsf{T}}P\big|_{\tau=1}&=\eta_p^{\mathsf{T}}=-\eta^{\mathsf{T}}_{x(1)}P(1).
\end{align} We set $\eta_{x(1)}=I_n$ and obtain the sensitivities of $x(1)$ with respect to variations in $T$, $x(0)$, and $p$ from the quantities $f(x(1),p)$, $X(1)$, and $P(1)$, respectively, as expected from the results obtained using direct differentiation. In this case, \eqref{eq:adjfornext1} and \eqref{eq:adjfornext2} also imply that \begin{equation} X(1)f(x(0),p)=f(x(1),p), \end{equation} consistent with \eqref{eq:vfmap}. \subsection{A single segment with a Poincar\'{e} section} \label{sec:Poincaresection} \jrem{We append the zero function $(x(\cdot),T,p)\mapsto h_\mathrm{ps}(x(1),p)$ to the previous construction by adding the term $\lambda_\mathrm{ps}h_\mathrm{ps}(x(1),p)$ to the corresponding Lagrangian \eqref{eq:lagrangetrajseg}. We use the notation $(\cdot)_\mathrm{ps}$ to indicate the association with a Poincar\'{e} section \begin{equation} \label{eq:poincare:seection} \{x:h_\mathrm{ps}(x,p)=0\}. \end{equation} Then, every solution to the zero problem with nonzero Lie derivative \begin{equation} \mathcal{L}_fh_\mathrm{ps}(x(1),p):=\partial_xh_\mathrm{ps}(x(1),p)f(x(1),p) \end{equation} (i.e., that intersects the Poincar\'{e} section transversally)} is regular with dimensional deficit $n+q$. Indeed, consider the additional small constraint violation $\delta_h$, such that \begin{equation} \partial_xh_\mathrm{ps}(x(1),p)\delta_x(1)+\partial_ph_\mathrm{ps}(x(1),p)\delta_p=\delta_h. \end{equation} Then, \eqref{eq:linfor} implies that \begin{align} \label{eq:linfor3} &\mathcal{L}_fh_\mathrm{ps}(x(1),p)\delta_T=\delta_h-\partial_xh_\mathrm{ps}(x(1),p)X(1)\int_0^{1} X(\sigma)^{-1}\delta_\mathrm{ode}(\sigma)\,\mathrm{d}\sigma\nonumber\\ &\qquad- \partial_xh_\mathrm{ps}(x(1),p)X(1)\delta_x(0)-\left(\partial_ph_\mathrm{ps}(x(1),p)+ \partial_xh_\mathrm{ps}(x(1),p)P(1)\right)\delta_p, \end{align} which may be uniquely solved for $\delta_T$ if $\mathcal{L}_fh_\mathrm{ps}(x(1),p)\ne 0$.
Consequently, $\delta_x(\cdot)$ and $\delta_T$ may be obtained in terms of the $n+q$ quantities $\delta_x(0)$ and $\delta_p$. With the additional constraint, the adjoint conditions become \begin{align} \delta x(\cdot)&: &0&=-\lambda_\mathrm{de}^{\prime\,{\mathsf{T}}}-T\lambda_\mathrm{de}^{\mathsf{T}}\partial_xf(x,p),\\ \delta x(1)&: &0&=\lambda_\mathrm{de}^{\mathsf{T}}(1)+\lambda_\mathrm{ps}\partial_xh_\mathrm{ps}(x(1),p)+\eta_{x(1)}^{\mathsf{T}},\\ \delta x(0)&: &0&=-\lambda_\mathrm{de}^{\mathsf{T}}(0)+\eta_{x(0)}^{\mathsf{T}},\\ \delta T&: &0&=-\int_0^1\lambda_\mathrm{de}^{\mathsf{T}}(\tau)f(x(\tau),p)\,\mathrm{d}\tau+\eta_T,\\ \delta p&: &0&=-\int_0^1\lambda_\mathrm{de}^{\mathsf{T}}(\tau)T\partial_pf(x(\tau),p)\,\mathrm{d}\tau+\lambda_\mathrm{ps}\partial_ph_\mathrm{ps}(x(1),p)+\eta_p^{\mathsf{T}}. \end{align} This time, we let $\mathbb{I}=\emptyset$, $\mathbb{J}_1=\{n+1,\ldots,2n,2n+2,\ldots,2n+1+q\}$, and $\mathbb{J}_2=\{1,\ldots,n,2n+1\}$ in order to capture the sensitivities of $x(1)$ and $T$ with respect to constraint violations and perturbations in $x(0)$ and $p$.
It follows by steps identical to those in the previous section that \begin{align} \lambda_\mathrm{de}^{\mathsf{T}}f(x,p)&\equiv \eta_T=-\left(\eta^{\mathsf{T}}_{x(1)}+\lambda_\mathrm{ps}\partial_xh_\mathrm{ps}(x(1),p)\right)f(x(1),p),\\ \lambda_\mathrm{de}^{\mathsf{T}}X&\equiv\eta^{\mathsf{T}}_{x(0)}=-\left(\eta^{\mathsf{T}}_{x(1)}+\lambda_\mathrm{ps}\partial_xh_\mathrm{ps}(x(1),p)\right)X(1),\\ \lambda_\mathrm{de}^{\mathsf{T}}P\big|_{\tau=1}&=\lambda_\mathrm{ps}\partial_ph_\mathrm{ps}(x(1),p)+\eta_p^{\mathsf{T}}=-\left(\eta^{\mathsf{T}}_{x(1)}+\lambda_\mathrm{ps}\partial_xh_\mathrm{ps}(x(1),p)\right)P(1). \end{align} By considering the case when $\eta_T=0$ and $\eta_{x(1)}=I_n$, we find that the sensitivities of $x(1)$ equal \begin{equation} \frac{f(x(1),p)}{\mathcal{L}_fh_\mathrm{ps}(x(1),p)} \end{equation} with respect to $h_\mathrm{ps}(x(1),p)$, \begin{equation} \Pi(x(1),p)X(1) \end{equation} with respect to $x(0)$, and \begin{equation} \Pi(x(1),p)P(1)-\frac{f(x(1),p)\partial_ph_\mathrm{ps}(x(1),p)}{\mathcal{L}_fh_\mathrm{ps}(x(1),p)} \end{equation} with respect to $p$, where the nullspaces of the projection matrix \begin{equation} \Pi(x(1),p):= I_n-\frac{f(x(1),p)\partial_xh_\mathrm{ps}(x(1),p)}{\mathcal{L}_fh_\mathrm{ps}(x(1),p)} \end{equation} and its transpose are spanned by $f(x(1),p)$ and $\left(\partial_xh_\mathrm{ps}(x(1),p)\right)^{\mathsf{T}}$, respectively. Similarly, by considering the case when $\eta_T=1$ and $\eta_{x(1)}=0$, we find that the sensitivities of $T$ equal \begin{equation} \frac{1}{\mathcal{L}_fh_\mathrm{ps}(x(1),p)} \end{equation} with respect to $h_\mathrm{ps}(x(1),p)$, \begin{equation} -\frac{\partial_xh_\mathrm{ps}(x(1),p)}{\mathcal{L}_fh_\mathrm{ps}(x(1),p)}X(1) \end{equation} with respect to $x(0)$, and \begin{equation} -\frac{\partial_xh_\mathrm{ps}(x(1),p)P(1)+\partial_ph_\mathrm{ps}(x(1),p)}{\mathcal{L}_fh_\mathrm{ps}(x(1),p)} \end{equation} with respect to $p$.
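The stated properties of the projection matrix $\Pi(x(1),p)$ (idempotency and the two nullspaces) are easy to confirm numerically; in the following Python check, random vectors stand in for $f(x(1),p)$ and $\partial_xh_\mathrm{ps}(x(1),p)$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
f = rng.standard_normal(n)    # stands in for f(x(1), p)
dh = rng.standard_normal(n)   # stands in for the gradient row of h_ps at x(1)
Lf = dh @ f                   # Lie derivative; nonzero for generic data

# Pi = I - f dh / Lf
Pi = np.eye(n) - np.outer(f, dh) / Lf

res_idem = np.linalg.norm(Pi @ Pi - Pi)   # Pi is a projection
res_right = np.linalg.norm(Pi @ f)        # f spans the nullspace of Pi
res_left = np.linalg.norm(dh @ Pi)        # dh^T spans the nullspace of Pi^T
```

All three residuals are zero up to round-off.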
These conclusions also follow from implicit differentiation of the constraints \begin{equation} h_\mathrm{ps}(x(1),p)=H,\,x(1)=F(T,x(0),p) \end{equation} with respect to the independent variables $H$, $x(0)$, and $p$, or by solving the linearized equations \eqref{eq:linfor} and \eqref{eq:linfor3} for $\delta_x(1)$ and $\delta_T$. \subsection{Periodic orbits} \label{sec:Periodic orbits} \jrem{Next, append the zero function $(x(\cdot),T,p)\mapsto x(0)-x(1)$ to the previous construction by adding the term $\lambda_\mathrm{po}^{\mathsf{T}}(x(0)-x(1))$ to the corresponding Lagrangian. Here, the notation $(\cdot)_\mathrm{po}$ reflects the corresponding imposition of the periodicity constraint $x(1)=x(0)$.} Then, every solution to the zero problem with $\mathcal{L}_fh_\mathrm{ps}(x(1),p)\ne 0$, and for which the eigenvalue $1$ of the monodromy matrix $X(1)$ is simple, is regular with dimensional deficit $q$. Here, consider the additional small constraint violation $\delta_\mathrm{po}$, such that \begin{equation} \delta_x(0)-\delta_x(1)=\delta_\mathrm{po}. \end{equation} Then, \eqref{eq:linfor} and \eqref{eq:linfor3} imply that \begin{align} &\left(I_n-\Pi(x(1),p)X(1)\right)\delta_x(0)=\delta_\mathrm{po}+\Pi(x(1),p)X(1)\int_0^{1} X(\sigma)^{-1}\delta_\mathrm{ode}(\sigma)\,\mathrm{d}\sigma\nonumber\\ &\qquad+ \frac{f(x(1),p)}{\mathcal{L}_fh_\mathrm{ps}(x(1),p)}\delta_h+ \left(\Pi(x(1),p)P(1)-\frac{f(x(1),p)\partial_ph_\mathrm{ps}(x(1),p)}{\mathcal{L}_fh_\mathrm{ps}(x(1),p)}\right)\delta_p, \end{align} which may be uniquely solved for $\delta_x(0)$ provided that $\Pi(x(1),p)X(1)$ has no eigenvalues equal to $1$. Indeed, if there exists a vector $v$ such that $\Pi(x(1),p)X(1)v=v$, then $X(1)v-v$ must be parallel to $f(x(1),p)$ in violation of the assumption on $X(1)$ that the eigenvalue $1$ is simple.
When invertibility holds, it follows after substitution in \eqref{eq:linfor3} and some simplification that \begin{align} \label{eq:perorbsen} \mathcal{L}_fh_\mathrm{ps}(x(1),p)\delta_T&=w^{\mathsf{T}}\left(\delta_\mathrm{po}+P(1)\delta_p+\int_0^{1} X(\sigma)^{-1}\delta_\mathrm{ode}(\sigma)\,\mathrm{d}\sigma\right), \end{align} where the prefactor \begin{equation} w^{\mathsf{T}}:=-\partial_xh_\mathrm{ps}(x(1),p)X(1)\left(I_n-\Pi(x(1),p)X(1)\right)^{-1} \end{equation} is the unique left eigenvector of $X(1)$ corresponding to the eigenvalue $1$ such that $w^{\mathsf{T}}f(x(1),p)=-\mathcal{L}_fh_\mathrm{ps}(x(1),p)$. With the additional periodicity constraint, the adjoint conditions now become \begin{align} \delta x(\cdot)&: &0&=-\lambda_\mathrm{de}^{\prime\,{\mathsf{T}}}-T\lambda_\mathrm{de}^{\mathsf{T}}\partial_xf(x,p),\\ \delta x(1)&: &0&=\lambda_\mathrm{de}^{\mathsf{T}}(1)+\lambda_\mathrm{ps}\partial_xh_\mathrm{ps}(x(1),p)-\lambda_\mathrm{po}^{\mathsf{T}}+\eta_{x(1)}^{\mathsf{T}},\\ \delta x(0)&: &0&=-\lambda_\mathrm{de}^{\mathsf{T}}(0)+\lambda_\mathrm{po}^{\mathsf{T}}+\eta_{x(0)}^{\mathsf{T}},\\ \delta T&: &0&=-\int_0^1\lambda_\mathrm{de}^{\mathsf{T}}(\tau)f(x(\tau),p)\,\mathrm{d}\tau+\eta_T,\\ \delta p&: &0&=-\int_0^1\lambda_\mathrm{de}^{\mathsf{T}}(\tau)T\partial_pf(x(\tau),p)\,\mathrm{d}\tau+\lambda_\mathrm{ps}\partial_ph_\mathrm{ps}(x(1),p)+\eta_p^{\mathsf{T}}. \end{align} This time, we let $\mathbb{I}=\emptyset$, $\mathbb{J}_1=\{2n+2,\ldots,2n+1+q\}$, and $\mathbb{J}_2=\{1,\ldots,2n+1\}$.
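The characterization of $w$ below \eqref{eq:perorbsen} can also be verified on synthetic data: build a matrix with a simple eigenvalue $1$ whose right eigenvector plays the role of $f(x(1),p)$ (for a periodic orbit, $X(1)f(x(1),p)=f(x(1),p)$), pick a generic row vector for $\partial_xh_\mathrm{ps}$, and check that the stated formula produces a left eigenvector with the stated normalization. A hedged Python sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
# synthetic monodromy matrix with a simple eigenvalue 1; the corresponding
# right eigenvector plays the role of f(x(1), p)
S = rng.standard_normal((n, n))
X1 = S @ np.diag([1.0, 0.4, -0.3]) @ np.linalg.inv(S)
f = S[:, 0]                    # X1 @ f = f
dh = rng.standard_normal(n)    # stands in for dh_ps/dx at x(1); transversal generically
Lf = dh @ f

Pi = np.eye(n) - np.outer(f, dh) / Lf
w = -dh @ X1 @ np.linalg.inv(np.eye(n) - Pi @ X1)
# w is a left eigenvector of X1 for eigenvalue 1, normalized so that w f = -Lf
```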
It follows again by inspection that \begin{align} \lambda_\mathrm{de}^{\mathsf{T}}f(x,p)&\equiv \eta_T=-\left(\eta^{\mathsf{T}}_{x(1)}+\lambda_\mathrm{ps}\partial_xh_\mathrm{ps}(x(1),p)-\lambda_\mathrm{po}^{\mathsf{T}}\right)f(x(1),p)\nonumber\\ &=(\lambda_\mathrm{po}^{\mathsf{T}}+\eta^{\mathsf{T}}_{x(0)})f(x(1),p),\\ \lambda_\mathrm{de}^{\mathsf{T}}X&\equiv\lambda_\mathrm{po}^{\mathsf{T}}+\eta^{\mathsf{T}}_{x(0)}\nonumber\\ &=-\left(\eta^{\mathsf{T}}_{x(1)}+\lambda_\mathrm{ps}\partial_xh_\mathrm{ps}(x(1),p)-\lambda_\mathrm{po}^{\mathsf{T}}\right)X(1),\\ \lambda_\mathrm{de}^{\mathsf{T}}P\big|_{\tau=1}&=\lambda_\mathrm{ps}\partial_ph_\mathrm{ps}(x(1),p)+\eta_p^{\mathsf{T}}\nonumber\\&=-\left(\eta^{\mathsf{T}}_{x(1)}+\lambda_\mathrm{ps}\partial_xh_\mathrm{ps}(x(1),p)-\lambda_\mathrm{po}^{\mathsf{T}}\right)P(1). \end{align} By considering the case when $\eta_T=1$ and $\eta_{x(0)}=\eta_{x(1)}=0$, we obtain \begin{equation} \lambda_\mathrm{ps}=0,\,\lambda_\mathrm{po}^{\mathsf{T}}f(x(1),p)=1,\,\lambda_\mathrm{po}^{\mathsf{T}}=\lambda_\mathrm{po}^{\mathsf{T}}X(1),\text{ and }\eta_p^{\mathsf{T}}=\lambda_\mathrm{po}^{\mathsf{T}}P(1), \end{equation} and conclude that the sensitivities of $T$ with respect to $h_\mathrm{ps}(x(1),p)$, $x(0)-x(1)$, and $p$ equal $0$, the unique left eigenvector $w^{\mathsf{T}}$ of the monodromy matrix $X(1)$ corresponding to the eigenvalue $1$ and such that $w^{\mathsf{T}}f(x(1),p)=-1$, and the product $w^{\mathsf{T}}P(1)$, respectively, consistent with the result in \eqref{eq:perorbsen}. \subsection{Segmented trajectories in hybrid systems} \label{sec:Hybrid dynamics} We broaden the perspective to hybrid dynamical systems that include discrete jumps and switches between different vector fields. The results in this section generalize to trajectories consisting of any number of consecutive solution segments. 
\jrem{For example, consider a time history of a hybrid dynamical system that evolves along a trajectory of a flow $F_1$ until a transversal intersection at $(x_0,p_0)$ with an \emph{event surface} \begin{equation} \{x:h_\mathrm{es}(x,p)=0\}, \end{equation} followed by evolution along a trajectory of a flow $F_2$ from a point $g(x_0,p_0)$ for some map $g$. In particular, consistent with the discussion in Section~\ref{sec:Poincaresection}, assume that $\partial_xh_\mathrm{es}(x_0,p_0)f_1(x_0,p_0)\ne 0$.} Consider the composition \begin{equation} D(x,p)=F_2(-\sigma(x,p),g(F_1(\sigma(x,p),x,p),p),p), \end{equation} where $h_\mathrm{es}(F_1(\sigma(x,p),x,p),p)=0$ defines $\sigma(x,p)\approx 0$ uniquely for $x$ and $p$ in some neighborhood of $x_0$ and $p_0$, respectively, such that $\sigma(x_0,p_0)=0$. The function $D$ is referred to as the \textit{zero-time discontinuity mapping}~\cite{PWS08} associated with the \textit{jump function} $g$ and the transversal intersection with the event surface $h_\mathrm{es}=0$. As a special case, $D(x_0,p_0)=g(x_0,p_0)$. 
We obtain the sensitivities $\partial_xD(x_0,p_0)$ and $\partial_pD(x_0,p_0)$ by analyzing the zero problem \begin{eqnarray} x_1'-\sigma f_1(x_1,p)=0,\,x_2'+\sigma f_2(x_2,p)=0,\\h_\mathrm{es}(x_1(1),p)=0,\,x_2(0)-g(x_1(1),p)=0\label{eq:bcs} \end{eqnarray} and monitor functions \begin{equation} \Psi(u):=\begin{pmatrix}x_2(1)\\x_1(0)\\p\end{pmatrix}, \end{equation} which yield the adjoint conditions \begin{align} \delta x_1(\cdot)&: &0&=-\lambda_{\mathrm{de},1}^{\prime\,{\mathsf{T}}}-\lambda_{\mathrm{de},1}^{\mathsf{T}}\sigma f_1(x_1,p),\\ \delta x_2(\cdot)&: &0&=-\lambda_{\mathrm{de},2}^{\prime\,{\mathsf{T}}}+\lambda_{\mathrm{de},2}^{\mathsf{T}}\sigma f_2(x_2,p),\\ \delta x_1(1)&: &0&=\lambda_{\mathrm{de},1}^{\mathsf{T}}(1)+\lambda_\mathrm{es}\partial_xh_\mathrm{es}(x_1(1),p)-\lambda_\mathrm{jf}^{\mathsf{T}}\partial_xg(x_1(1),p),\\ \delta x_1(0)&: &0&=-\lambda_{\mathrm{de},1}^{\mathsf{T}}(0)+\eta^{\mathsf{T}}_{x_1(0)},\\ \delta x_2(1)&: &0&=\lambda_{\mathrm{de},2}^{\mathsf{T}}(1)+\eta^{\mathsf{T}}_{x_2(1)},\\ \delta x_2(0)&: &0&=-\lambda_{\mathrm{de},2}^{\mathsf{T}}(0)+\lambda_\mathrm{jf}^{\mathsf{T}},\\ \delta \sigma&: &0&=-\int_0^1\lambda_{\mathrm{de},1}^{\mathsf{T}}f_1(x_1,p)\,\mathrm{d}\tau+\int_0^1\lambda_{\mathrm{de},2}^{\mathsf{T}}f_2(x_2,p)\,\mathrm{d}\tau,\\ \delta p&: &0&=-\int_0^1\lambda_{\mathrm{de},1}^{\mathsf{T}}\sigma\partial_pf_1(x_1,p)\,\mathrm{d}\tau+\int_0^1\lambda_{\mathrm{de},2}^{\mathsf{T}}\sigma\partial_pf_2(x_2,p)\,\mathrm{d}\tau\nonumber\\ &&&\qquad+\lambda_\mathrm{es}\partial_ph_\mathrm{es}(x_1(1),p)-\lambda_\mathrm{jf}^{\mathsf{T}}\partial_p g(x_1(1),p)+\eta_p^{\mathsf{T}}, \end{align} where $\eta=(\eta_{x_2(1)},\eta_{x_1(0)},\eta_p)$, and $\lambda_\mathrm{es}$ and $\lambda_\mathrm{jf}$ are adjoint variables associated with the boundary conditions \eqref{eq:bcs}.
Since $\sigma=0$ for $x_1(0)=x_0$ and $p=p_0$, it follows by inspection that $x_1\equiv x_0$, $x_2\equiv g(x_0,p_0)$, \begin{align} \label{eq:adjdiscmapsimpl1} \lambda_{\mathrm{de},1}^{\mathsf{T}}&\equiv\eta^{\mathsf{T}}_{x_1(0)}=\lambda_\mathrm{jf}^{\mathsf{T}}\partial_xg(x_0,p_0)-\lambda_\mathrm{es}\partial_xh_\mathrm{es}(x_0,p_0),\\ \lambda_{\mathrm{de},2}^{\mathsf{T}}&\equiv-\eta^{\mathsf{T}}_{x_2(1)}=\lambda_\mathrm{jf}^{\mathsf{T}},\\ \eta_p^{\mathsf{T}}&=\lambda_\mathrm{jf}^{\mathsf{T}}\partial_pg(x_0,p_0)-\lambda_\mathrm{es}\partial_ph_\mathrm{es}(x_0,p_0), \end{align} and \begin{equation} \label{eq:adjdiscmapsimpl4} \eta^{\mathsf{T}}_{x_1(0)}f_1(x_0,p_0)=-\eta^{\mathsf{T}}_{x_2(1)}f_2(g(x_0,p_0),p_0). \end{equation} Guided by the general theory, we let $\mathbb{I}=\emptyset$, $\mathbb{J}_1=\{n+1,\ldots,2n+q\}$, and $\mathbb{J}_2=\{1,\ldots,n\}$ consistent with computing the sensitivities of $x_2(1)=D(x_1(0),p)$ with respect to $x_1(0)$ and $p$ at $x_1(0)=x_0$ and $p=p_0$. With $\eta_{x_2(1)}=I_n$, we obtain \begin{align} \partial_xD(x_0,p_0)&=-\eta^{\mathsf{T}}_{x_1(0)}=\partial_xg(x_0,p_0)+\lambda_\mathrm{es}\partial_xh_\mathrm{es}(x_0,p_0),\\ \partial_pD(x_0,p_0)&=-\eta_p^{\mathsf{T}}=\partial_pg(x_0,p_0)+\lambda_\mathrm{es}\partial_ph_\mathrm{es}(x_0,p_0), \end{align} where \begin{equation} \lambda_\mathrm{es}=\frac{f_2(g(x_0,p_0),p_0)-\partial_xg(x_0,p_0)f_1(x_0,p_0)}{\partial_xh_\mathrm{es}(x_0,p_0)f_1(x_0,p_0)} \end{equation} is obtained by multiplying \eqref{eq:adjdiscmapsimpl1} by $f_1(x_0,p_0)$ and simplifying using \eqref{eq:adjdiscmapsimpl4}. The same expressions may be obtained by implicit differentiation of the constraint \begin{equation} h_\mathrm{es}(F_1(\sigma(x,p),x,p),p)=0 \end{equation} and the defining expression for $D(x,p)$.
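Substituting the expression for $\lambda_\mathrm{es}$ into $\partial_xD(x_0,p_0)$ recovers the familiar saltation-matrix identity $\partial_xD(x_0,p_0)\,f_1(x_0,p_0)=f_2(g(x_0,p_0),p_0)$. The algebra can be confirmed on random placeholder data; in the following illustrative Python sketch, $f_1$, $f_2$, $\partial_xg$, and $\partial_xh_\mathrm{es}$ are arbitrary stand-ins, and the adjoint values are applied componentwise in keeping with the matrix interpretation of the adjoint conditions:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
f1 = rng.standard_normal(n)        # f1(x0, p0)
f2 = rng.standard_normal(n)        # f2(g(x0, p0), p0)
Dg = rng.standard_normal((n, n))   # Jacobian of the jump function g at (x0, p0)
dh = rng.standard_normal(n)        # gradient row of h_es at (x0, p0); transversal generically

# one adjoint value lambda_es per row of the matrix equality
lam_es = (f2 - Dg @ f1) / (dh @ f1)
DxD = Dg + np.outer(lam_es, dh)    # saltation matrix dD/dx at (x0, p0)
# DxD maps f1 to f2: Dg f1 + (f2 - Dg f1) (dh.f1)/(dh.f1) = f2
```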
As a second example from the theory of hybrid systems, we consider a zero problem for a two-segment periodic orbit of period $T$ obtained by gluing together two individual zero problems of the second form considered in Section~\ref{sec:Forward dynamics} using additional boundary conditions. Specifically, we assume that \begin{eqnarray} \label{eq:twoseg1} x'_1-\sigma f(x_1,p)=0,\,h_\mathrm{es}(x_1(1),p)=0,\\x'_2-(T-\sigma)f(x_2,p)=0,\,h_\mathrm{ps}(x_2(1),p)=0,\\ x_2(0)-g(x_1(1),p)=0,\,x_1(0)-x_2(1)=0,\label{eq:twoseg3} \end{eqnarray} where the terminal point of the first segment (of total duration $\sigma$) is constrained to the event surface $h_\mathrm{es}=0$ and mapped by the jump function $g$ to the initial point on the second segment (of total duration $T-\sigma$). In this case, any solution that intersects both the event surface and the Poincar\'{e} section at $h_\mathrm{ps}=0$ transversally, i.e., such that \begin{align} \partial_x h_\mathrm{es}(x_1(1),p)f(x_1(1),p)&\ne 0,\label{eq:trans1}\\ \partial_xh_\mathrm{ps}(x_2(1),p)f(x_2(1),p)&\ne 0,\label{eq:trans2} \end{align} is regular with dimensional deficit $q$. 
If we choose the monitor functions \begin{equation} \Psi(u):=\begin{pmatrix}T\\p\end{pmatrix}, \end{equation} we obtain the adjoint conditions \begin{align} \delta x_1(\cdot)&: &0&=-\lambda_{\mathrm{de},1}^{\prime\,{\mathsf{T}}}-\lambda_{\mathrm{de},1}^{\mathsf{T}}\sigma\partial_xf(x_1,p),\\ \delta x_2(\cdot)&: &0&=-\lambda_{\mathrm{de},2}^{\prime\,{\mathsf{T}}}-\lambda_{\mathrm{de},2}^{\mathsf{T}}(T-\sigma)\partial_xf(x_2,p),\\ \delta x_1(1)&: &0&=\lambda_{\mathrm{de},1}^{\mathsf{T}}(1)+\lambda_\mathrm{es}\partial_xh_\mathrm{es}(x_1(1),p)-\lambda_\mathrm{jf}^{\mathsf{T}}\partial_x g(x_1(1),p),\\ \delta x_1(0)&: &0&=-\lambda_{\mathrm{de},1}^{\mathsf{T}}(0)+\lambda_\mathrm{po}^{\mathsf{T}},\\ \delta x_2(1)&: &0&=\lambda_{\mathrm{de},2}^{\mathsf{T}}(1)+\lambda_\mathrm{ps}\partial_xh_\mathrm{ps}(x_2(1),p)-\lambda_\mathrm{po}^{\mathsf{T}},\\ \delta x_2(0)&: &0&=-\lambda_{\mathrm{de},2}^{\mathsf{T}}(0)+\lambda_\mathrm{jf}^{\mathsf{T}},\\ \delta T&: &0&=-\int_0^1\lambda_{\mathrm{de},2}^{\mathsf{T}}(\tau)f(x_2(\tau),p)\,\mathrm{d}\tau+\eta_T,\\ \delta \sigma&: &0&=-\int_0^1\lambda_{\mathrm{de},1}^{\mathsf{T}}(\tau)f(x_1(\tau),p)\,\mathrm{d}\tau+\int_0^1\lambda_{\mathrm{de},2}^{\mathsf{T}}(\tau)f(x_2(\tau),p)\,\mathrm{d}\tau,\\ \delta p&: &0&=-\int_0^1\lambda_{\mathrm{de},1}^{\mathsf{T}}(\tau)\sigma\partial_pf(x_1(\tau),p)\,\mathrm{d}\tau+\lambda_\mathrm{es}\partial_ph_\mathrm{es}(x_1(1),p)\nonumber\\ &&&\qquad-\int_0^1\lambda_{\mathrm{de},2}^{\mathsf{T}}(\tau)(T-\sigma)\partial_pf(x_2(\tau),p)\,\mathrm{d}\tau+\lambda_\mathrm{ps}\partial_ph_\mathrm{ps}(x_2(1),p)\nonumber\\ &&&\qquad-\lambda_\mathrm{jf}^{\mathsf{T}}\partial_pg(x_1(1),p)+\eta_p^{\mathsf{T}}, \end{align} where $\eta=(\eta_T,\eta_p)$, and $\lambda_\mathrm{es}$, $\lambda_\mathrm{ps}$, $\lambda_\mathrm{jf}$, and $\lambda_\mathrm{po}$ are adjoint variables associated with the four boundary conditions.
As before, the functions $\lambda_{\mathrm{de},1}^{\mathsf{T}}f(x_1,p)$ and $\lambda_{\mathrm{de},2}^{\mathsf{T}}f(x_2,p)$ are constant and both equal to $\eta_T$. By transversality \eqref{eq:trans2}, it follows that $\lambda_\mathrm{ps}=0$ and, consequently, that $\lambda_\mathrm{po}=\lambda_{\mathrm{de},1}(0)=\lambda_{\mathrm{de},2}(1)$. Similarly, from \eqref{eq:trans1}, it follows that \begin{equation} \lambda_\mathrm{es}=\lambda_{\mathrm{de},2}^{\mathsf{T}}(0)\frac{\partial_x g(x_1(1),p)f(x_1(1),p)-f(x_2(0),p)}{\partial_x h_\mathrm{es}(x_1(1),p)f(x_1(1),p)} \end{equation} and, consequently, \begin{equation} \lambda_{\mathrm{de},1}^{\mathsf{T}}(1)=\lambda_{\mathrm{de},2}^{\mathsf{T}}(0)\partial_xD(x_1(1),p). \end{equation} Since \begin{equation} \lambda_{\mathrm{de},1}^{\mathsf{T}}(0)=\lambda_{\mathrm{de},1}^{\mathsf{T}}(1)\partial_x F(\sigma,x_1(0),p) \end{equation} and \begin{equation} \lambda_{\mathrm{de},2}^{\mathsf{T}}(0)=\lambda_{\mathrm{de},2}^{\mathsf{T}}(1)\partial_xF(T-\sigma,x_2(0),p), \end{equation} it follows that \begin{equation} \label{eq:compeigvec} \lambda_{\mathrm{de},2}^{\mathsf{T}}(1)=\lambda_{\mathrm{de},2}^{\mathsf{T}}(1)\partial_xF(T-\sigma,x_2(0),p)\partial_xD(x_1(1),p)\partial_x F(\sigma,x_1(0),p). \end{equation} Further, from the final adjoint condition, after some manipulation we obtain \begin{align} \eta_p^{\mathsf{T}}&=\lambda_{\mathrm{de},2}^{\mathsf{T}}(1)\bigg(\partial_xF(T-\sigma,x_2(0),p)\partial_xD(x_1(1),p)\partial_pF(\sigma,x_1(0),p)\nonumber\\ &\qquad+\partial_xF(T-\sigma,x_2(0),p)\partial_pD(x_1(1),p)+\partial_pF(T-\sigma,x_2(0),p)\bigg). \end{align} Inspired by these expressions, we use the discontinuity mapping to define the period-$T$ flow map $G_T$ for fixed $\sigma$ and small changes in $x_1(0)$ and $p$ as follows: \begin{equation} G_T(x_1(0),p):=F(T-\sigma,D(F(\sigma,x_1(0),p),p),p).
\end{equation} It then follows by comparison with \eqref{eq:compeigvec} that $\lambda_\mathrm{po}^{\mathsf{T}}$ is the unique left eigenvector of the \emph{monodromy matrix} \begin{equation} \partial_xG_T(x_1(0),p) \end{equation} corresponding to the eigenvalue $1$ and such that $\lambda_\mathrm{po}^{\mathsf{T}}f(x_1(0),p)=\eta_T$, and that \begin{equation} \eta_p^{\mathsf{T}}=\lambda_\mathrm{po}^{\mathsf{T}}\partial_pG_T(x_1(0),p), \end{equation} just like in the case of a smooth periodic orbit. If we let $\mathbb{I}=\emptyset$, $\mathbb{J}_1=\{2,\ldots,q+1\}$, and $\mathbb{J}_2=\{1\}$, we obtain the sensitivities of $T$ with respect to $h_\mathrm{es}$, $h_\mathrm{ps}$, $x_2(0)-g(x_1(1),p)$, $x_1(0)-x_2(1)$, and $p$ from $-\lambda_\mathrm{es}$, $-\lambda_\mathrm{ps}$, $-\lambda_\mathrm{jf}^{\mathsf{T}}$, $-\lambda_\mathrm{po}^{\mathsf{T}}$, and $-\eta_p^{\mathsf{T}}$, respectively. \section{Quasiperiodic invariant tori} \label{sec:quasiperiodic invariant tori} \jrem{In this section, we demonstrate the utility of the adjoint approach for a special class of normally hyperbolic invariant manifolds, namely transversally stable, quasiperiodic invariant tori. We show that this approach may be used to compute the linearization of the corresponding stable fiber projection, as a generalization of the concept of asymptotic phase for periodic orbits. Indeed, in this case, stable fibers span the entire neighborhood of the invariant torus, such that the asymptotic phase can be easily observed, for example, in numerical simulations.} In contrast to the treatment in previous sections, the discussion is here concerned with an infinite-dimensional problem for which determining the regularity of the zero problem is a nontrivial task. 
\subsection{Revisiting the periodic orbit}\label{sec:po:revisit} Before turning to the infinite-dimensional case, however, we return again to the analysis in Section~\ref{sec:Periodic orbits} of a periodic orbit in a smooth vector field and consider its implications for the dynamics of nearby trajectories. To distinguish the periodic orbit from such nearby trajectories, we denote the former by $\tilde{x}(\tau)$. Similarly, let $\tilde{\lambda}_{\mathrm{de},T}(\tau)$ denote the corresponding solution to the adjoint conditions obtained with $\eta_T=1$ and $\eta_{x(0)}=\eta_{x(1)}=0$, such that $\tilde{\lambda}_{\mathrm{de},T}^{\mathsf{T}}(\tau)=\tilde{\lambda}_{\mathrm{de},T}^{\mathsf{T}}(\tau+1)$. It follows that the vectors $\tilde{\lambda}_{\mathrm{de},T}^{\mathsf{T}}(\tau)$ and $f(\tilde{x}(\tau),p)$ are left and right nullvectors of the operator \begin{equation} \Gamma_\tau:= \tilde{X}(\tau+1)\tilde{X}^{-1}(\tau)-I_n, \end{equation} where $\tilde{X}(\tau)$ satisfies the variational initial-value problem \begin{equation} X'(\tau)=T\partial_x f(\tilde{x}(\tau),p)X(\tau),\,X(0)=I_n. \end{equation} Moreover, since $\tilde{\lambda}_{\mathrm{de},T}^{\mathsf{T}}(\tau)f(\tilde{x}(\tau),p)=1$, \jrem{using the notation $(\cdot)_\mathrm{tg}$ for \emph{tangential},} the formula \begin{equation} q_\mathrm{tg}(\tau):=f(\tilde{x}(\tau),p)\tilde{\lambda}_{\mathrm{de},T}^{\mathsf{T}}(\tau) \end{equation} defines a projection (since $q_\mathrm{tg}q_\mathrm{tg}=q_\mathrm{tg}$) such that \begin{equation} q_\mathrm{tg}(\tau+1)=q_\mathrm{tg}(\tau),\,\tilde{X}(\tau)q_\mathrm{tg}(0)=q_\mathrm{tg}(\tau)\tilde{X}(\tau), \end{equation} and $\Gamma_\tau q_\mathrm{tg}(\tau)=q_\mathrm{tg}(\tau)\Gamma_\tau=0$, i.e., such that $\Gamma_\tau$ maps the image of $q_\mathrm{tr}(\tau):=I_n-q_\mathrm{tg}(\tau)$ to itself and $\Gamma_\tau q_\mathrm{tr}(\tau)=\Gamma_\tau$.
\jrem{The notation $(\cdot)_\mathrm{tr}$ indicates that this projection is onto a subspace \emph{transversal} to the tangent of the orbit.} Provided that the right nullspace of $\Gamma_\tau$ is one-dimensional, it follows that $\Gamma_\tau$ is invertible on this image. But this is a consequence of the regularity assumption on the monodromy matrix $\tilde{X}(1)$ in Section~\ref{sec:Periodic orbits}, since \begin{equation} \tilde{X}(\tau+1)\tilde{X}^{-1}(\tau)v=v\Leftrightarrow \tilde{X}(1)\tilde{X}^{-1}(\tau)v=\tilde{X}^{-1}(\tau)v \end{equation} follows from $\tilde{X}(\tau+1)=\tilde{X}(\tau)\tilde{X}(1)$. For $x\approx \tilde{x}(\tau)$ for some $\tau$, the scalar equation \begin{equation} \label{eq:localcoords} \tilde{\lambda}_{\mathrm{de},T}^{\mathsf{T}}(\sigma)(x-\tilde{x}(\sigma))=0 \end{equation} defines $\sigma$ uniquely on a neighborhood of $\tau$, such that $\sigma=\tau$ when $x=\tilde{x}(\tau)$. Indeed, the derivative of the left-hand side of \eqref{eq:localcoords} with respect to $\sigma$, evaluated at $x=\tilde{x}(\tau)$ and $\sigma=\tau$, equals \begin{equation} -T\tilde{\lambda}_{\mathrm{de},T}^{\mathsf{T}}(\tau)f(\tilde{x}(\tau),p) \end{equation} which reduces to the nonzero scalar $-T$. It follows by implicit differentiation of \eqref{eq:localcoords} with respect to $x$, evaluated at $\sigma=\tau$ and $x=\tilde{x}(\tau)$, that \begin{equation} \partial_x \sigma(\tilde{x}(\tau))=\frac{1}{T}\tilde{\lambda}_{\mathrm{de},T}^{\mathsf{T}}(\tau). \end{equation} Then, for $x=\tilde{x}(\tau)+\delta$ with $\left\|\delta\right\|\ll 1$, it follows that $x=\tilde{x}(\sigma(x))+x_\mathrm{tr}(x)$, where $q_\mathrm{tg}(\sigma(x))x_\mathrm{tr}(x)=0$ \jrem{(i.e., $x_\mathrm{tr}$ is the projection onto the transversal subspace)} and \begin{align} \sigma(x)&=\tau+\frac{1}{T}\tilde{\lambda}_{\mathrm{de},T}^{\mathsf{T}}(\tau)\delta+O(\left\|\delta\right\|^2),\\ x_\mathrm{tr}(x)&=q_\mathrm{tr}(\tau)\delta+O(\left\|\delta\right\|^2).
\end{align} For an arbitrary curve $x(\tau)$ that remains near the limit cycle for all $\tau$, we obtain the unique decomposition \begin{equation} \label{eq:assumedform} x(\tau)=\tilde{x}(\sigma(\tau))+x_\mathrm{tr}(\tau), \end{equation} where $q_\mathrm{tg}(\sigma(\tau))x_\mathrm{tr}(\tau)=0$ for all $\tau$. Suppose, for example, that this is true with $\|x_\mathrm{tr}(\tau)\|=O(\varepsilon)$ for a solution $x(\tau)$ of the perturbed differential equation \begin{equation} \label{ds:periodicperturbed} x'=Tf(x,p)+\delta_\mathrm{ode}(\tau) \end{equation} with $\|\delta_\mathrm{ode}(\tau)\|=O(\varepsilon)$. Substitution of \eqref{eq:assumedform} into \eqref{ds:periodicperturbed} yields \begin{equation} \label{eq:persub} Tf(\tilde{x}(\sigma),p)(\sigma'-1)+x'_\mathrm{tr}=T\partial_xf(\tilde{x}(\sigma),p)x_\mathrm{tr}+\delta_\mathrm{ode}+O(\varepsilon^2), \end{equation} from which it follows that $\sigma'=1+O(\varepsilon)$. Multiplication by $q_\mathrm{tr}(\sigma)$ results in \begin{equation} \label{eq:subs2} q_\mathrm{tr}(\sigma)x_\mathrm{tr}'=Tq_\mathrm{tr}(\sigma)\partial_x f(\tilde{x}(\sigma),p)x_\mathrm{tr}+q_\mathrm{tr}(\sigma)\delta_\mathrm{ode}+O(\varepsilon^2). \end{equation} But from the form of the rate $\sigma'$, we find \begin{equation} q_\mathrm{tr}(\sigma)x_\mathrm{tr}'=x_\mathrm{tr}'-q'_\mathrm{tr}(\sigma)x_\mathrm{tr}+O(\varepsilon^2) \end{equation} and, consequently, \begin{equation} \label{eq:subs3} x_\mathrm{tr}'=Tq_\mathrm{tr}(\sigma)\partial_x f(\tilde{x}(\sigma),p)x_\mathrm{tr}+q'_\mathrm{tr}(\sigma)x_\mathrm{tr}+q_\mathrm{tr}(\sigma)\delta_\mathrm{ode}+O(\varepsilon^2). \end{equation} Now recall from before that \begin{equation} \label{eq:qstVcommuteper} \tilde{X}(\sigma)q_\mathrm{tr}(0)=q_\mathrm{tr}(\sigma)\tilde{X}(\sigma).
\end{equation} Differentiation with respect to $\sigma$ and use of \eqref{eq:qstVcommuteper} then yields \begin{equation} T\partial_x f(\tilde{x}(\sigma),p)q_\mathrm{tr}(\sigma)=q'_\mathrm{tr}(\sigma)+Tq_\mathrm{tr}(\sigma)\partial_x f(\tilde{x}(\sigma),p). \end{equation} Multiplication with $x_\mathrm{tr}$ then results in \begin{equation} \label{eq:dqstxstcomm} T\partial_x f(\tilde{x}(\sigma),p)x_\mathrm{tr}=q'_\mathrm{tr}(\sigma)x_\mathrm{tr}+Tq_\mathrm{tr}(\sigma)\partial_x f(\tilde{x}(\sigma),p)x_\mathrm{tr} \end{equation} and, consequently, \begin{equation} \label{eq:subs4per} x_\mathrm{tr}'=T\partial_x f(\tilde{x}(\sigma),p)x_\mathrm{tr}+q_\mathrm{tr}(\sigma)\delta_\mathrm{ode}+O(\varepsilon^2). \end{equation} Substitution in \eqref{eq:persub} and multiplication by $\tilde{\lambda}_{\mathrm{de},T}^{\mathsf{T}}(\sigma)$ finally yields \begin{equation}\label{eq:phaseper} \sigma'=1+\frac{1}{T}\tilde{\lambda}_{\mathrm{de},T}^{\mathsf{T}}(\sigma)\delta_\mathrm{ode}+O(\varepsilon^2). \end{equation} We proceed to assume that the eigenvalues of the monodromy matrix $\tilde{X}(1)$ away from $1$ all lie within the unit circle, i.e., that the periodic orbit is normally hyperbolic and orbitally asymptotically stable. Then, \begin{equation} \|\tilde{X}(k)q_\mathrm{tr}(0)\|=\left\|\left(\tilde{X}(1)-f(\tilde{x}(0),p)\tilde{\lambda}_{\mathrm{de},T}^{\mathsf{T}}(0)\right)^k\right\|\le C_\mathrm{tr}e^{-k/\tau_\mathrm{tr}} \end{equation} for some positive constants $C_\mathrm{tr}$ and $\tau_\mathrm{tr}$. It follows from \eqref{eq:subs4per} that if $\|x_\mathrm{tr}(\tau)\|$ is $O(\varepsilon)$ at $\tau=0$, then it remains so for all $\tau$ and \eqref{eq:phaseper} remains valid as well for all $\tau$.
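The characterization of $\tilde{\lambda}_{\mathrm{de},T}$ as the eigenvalue-one left eigenvector of the monodromy matrix, normalized by $\tilde{\lambda}_{\mathrm{de},T}^{\mathsf{T}}(0)f(\tilde{x}(0),p)=1$, can be checked numerically on a concrete limit cycle. The sketch below uses a hypothetical example, not taken from the text: the planar Hopf normal form, whose isochrons are radial lines, so that the adjoint solution at $\tilde{x}(0)=(1,0)$ is $(0,1)$.

```python
import numpy as np

# Hypothetical example: planar Hopf normal form x' = T f(x) in rescaled time,
#   f(x) = (-x2 + x1(1 - r^2), x1 + x2(1 - r^2)),  r^2 = x1^2 + x2^2.
# The unit circle is an attracting limit cycle with period T = 2*pi.
T = 2.0 * np.pi

def f(x):
    r2 = x[0]**2 + x[1]**2
    return np.array([-x[1] + x[0] * (1.0 - r2), x[0] + x[1] * (1.0 - r2)])

def df(x):
    x1, x2 = x
    return np.array([[1.0 - 3.0 * x1**2 - x2**2, -1.0 - 2.0 * x1 * x2],
                     [1.0 - 2.0 * x1 * x2, 1.0 - x1**2 - 3.0 * x2**2]])

def orbit(tau):                        # exact periodic orbit in rescaled time
    return np.array([np.cos(T * tau), np.sin(T * tau)])

def monodromy(periods=1, steps_per_period=2000):
    # RK4 for the variational problem X' = T df(x(tau)) X, X(0) = I_n
    X, h = np.eye(2), 1.0 / steps_per_period
    A = lambda t: T * df(orbit(t))
    for k in range(periods * steps_per_period):
        t = k * h
        k1 = A(t) @ X
        k2 = A(t + h / 2) @ (X + h / 2 * k1)
        k3 = A(t + h / 2) @ (X + h / 2 * k2)
        k4 = A(t + h) @ (X + h * k3)
        X = X + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return X

X1 = monodromy(1)
f0 = f(orbit(0.0))                     # = (0, 1)
# left eigenvector for eigenvalue 1 via the exponentially convergent limit
# lambda^T(0) = lim_k f^T / ||f||^2 X(k); k = 2 periods already suffices here
lam = f0 @ monodromy(2) / (f0 @ f0)
```

For this example the limit converges at the rate $e^{-4\pi}$ per period, so two periods already reproduce the analytic value $(0,1)$ to within the integration error.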
In this case, for $\delta_\mathrm{ode}(\tau)=0$ and some perturbation $\delta$ to the initial condition $\tilde{x}(\sigma_0)$, it follows that $\sigma(\tau)=\sigma(0)+\tau+O(\|\delta\|^2)$, and we find that \begin{align} F\left(T\tau,\tilde{x}(\sigma_0)+\delta,p\right)-F\left(T\tau,\tilde{x}\left(\sigma_0+\frac{1}{T}\tilde{\lambda}_{\mathrm{de},T}^{\mathsf{T}}(\sigma_0)\delta\right),p\right) \end{align} behaves as $O(\|\delta\|^2)+O\left(\exp(-\tau/\tau_\mathrm{tr})\right)$ for large $\tau$. The quantity $\sigma_0+\frac{1}{T}\tilde{\lambda}_{\mathrm{de},T}^{\mathsf{T}}(\sigma_0)\delta$ is the linear (in $\delta$) approximation to the corresponding \emph{asymptotic phase}. By the assumption on the eigenvalues of $\tilde{X}(1)$, it further follows that \begin{equation} \tilde{X}(k)=f(\tilde{x}(0),p)\tilde{\lambda}_{\mathrm{de},T}^{\mathsf{T}}(0)+O\left(\exp(-k/\tau_\mathrm{tr})\right) \end{equation} for large $k$. We obtain \begin{equation} \tilde{\lambda}_{\mathrm{de},T}^{\mathsf{T}}(0)=\lim_{k\rightarrow\infty}\frac{f^{\mathsf{T}}(\tilde{x}(0),p)}{\|f(\tilde{x}(0),p)\|^2}\tilde{X}(k), \end{equation} where the limit has exponential convergence with rate $1/\tau_\mathrm{tr}$. \subsection{Torus functions} \label{sec:torus functions} Let $\mathbb{S}\sim[0,1]$ with endpoints identified, such that the image of a continuous, injective $\mathbb{R}^n$-valued function on $\mathbb{S}$ is a topological circle. We seek to generalize the treatment in previous sections to the infinite-dimensional boundary-value problem \begin{equation} \label{eq:torusbvp} \partial_\tau v(\phi,\tau)=Tf(v(\phi,\tau),p),\,v(\phi,1)=v(\phi+\rho,0) \end{equation} for the continuously differentiable function $v:\mathbb{S}\times[0,1]\rightarrow\mathbb{R}^n$ in terms of the \emph{a priori} unknown quantities $T$, $\rho$, and $p$. If such a solution exists, then \begin{equation} \label{eq:partialvperiodicity} \partial_\tau v(\phi,1)=\partial_\tau v(\phi+\rho,0),\,\partial_\phi v(\phi,1)=\partial_\phi v(\phi+\rho,0).
\end{equation} The boundary condition in \eqref{eq:torusbvp} ensures that the image of $v$ is an invariant topological torus that is covered by a parallel flow described by the rotation number $\rho$. Indeed, given a solution to \eqref{eq:torusbvp}, we may define the \emph{torus function} $u:\mathbb{S}\times\mathbb{S}\rightarrow\mathbb{R}^n$ such that \begin{equation} \label{eq:udef} u(\theta_1,\theta_2):=v(\theta_1-\rho\theta_2,\theta_2)\mbox{ and }v(\phi,\tau)=u(\phi+\rho\tau,\tau). \end{equation} It follows that $x(\tau)=u(\theta_1(\tau),\theta_2(\tau))$ satisfies $x'=Tf(x,p)$ if $\theta_1'=\rho$ and $\theta_2'=1$. Thus, in terms of the angular coordinates $\theta_1$ and $\theta_2$, the flow on the invariant torus is a rigid rotation. If $\rho$ is irrational, trajectories on the invariant torus are \textit{quasiperiodic} and cover the torus densely. We may use the torus function $u$ to construct the two-dimensional family \begin{equation} v_{(\theta_1,\theta_2)}:(\phi,\tau)\mapsto u(\theta_1+\phi+\rho\tau,\theta_2+\tau) \label{eq:2dimtorusfamily} \end{equation} of solutions to \eqref{eq:torusbvp}, defined for $(\phi,\tau)\in\mathbb{S}\times\mathbb{R}$ and parameterized by the initial condition $u(\theta_1+\phi,\theta_2)$ for arbitrary constants $\theta_1,\theta_2\in\mathbb{S}$. We obtain the original solution in the special case that $\theta_1=\theta_2=0$ and omit subscripts in this case. Using this definition, we obtain $v(\phi,\tau+1)=v(\phi+\rho,\tau)$ for all $(\phi,\tau)\in\mathbb{S}\times\mathbb{R}$, and \eqref{eq:partialvperiodicity} generalizes to \begin{equation} \label{eq:partvquasi} \partial_\tau v(\phi,\tau+1)=\partial_\tau v(\phi+\rho,\tau),\,\partial_\phi v(\phi,\tau+1)=\partial_\phi v(\phi+\rho,\tau) \end{equation} for all $(\phi,\tau)\in\mathbb{S}\times\mathbb{R}$.
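The relations \eqref{eq:torusbvp} and \eqref{eq:udef} can be made concrete on a simple example. The sketch below uses hypothetical data, not taken from the text: two uncoupled oscillators in Hopf normal form with angular frequencies $\omega_1$ and $\omega_2$, for which the product of the two unit circles is an invariant $2$-torus, $\rho=\omega_1/\omega_2$, and $T=2\pi/\omega_2$.

```python
import numpy as np

# Hypothetical example: two uncoupled Hopf oscillators.  The torus function
# u(theta1, theta2) parameterizes the product of the two unit circles, and
# v(phi, tau) = u(phi + rho*tau, tau) should solve
#   d/dtau v = T f(v),  v(phi, 1) = v(phi + rho, 0).
w1, w2 = 1.0, np.sqrt(2.0)
rho, T = w1 / w2, 2.0 * np.pi / w2

def f(x):
    def hopf(z, w):
        r2 = z[0]**2 + z[1]**2
        return w * np.array([-z[1] + z[0] * (1.0 - r2),
                             z[0] + z[1] * (1.0 - r2)])
    return np.concatenate([hopf(x[:2], w1), hopf(x[2:], w2)])

def u(theta1, theta2):                 # torus function (angles in units of 1)
    return np.array([np.cos(2 * np.pi * theta1), np.sin(2 * np.pi * theta1),
                     np.cos(2 * np.pi * theta2), np.sin(2 * np.pi * theta2)])

def v(phi, tau):
    return u(phi + rho * tau, tau)

# check the boundary condition v(phi, 1) = v(phi + rho, 0) ...
phis = np.linspace(0.0, 1.0, 7)
bc_err = max(np.max(np.abs(v(p, 1.0) - v(p + rho, 0.0))) for p in phis)

# ... and the differential equation d/dtau v = T f(v) by central differences
eps = 1e-6
ode_err = max(np.max(np.abs((v(p, 0.3 + eps) - v(p, 0.3 - eps)) / (2 * eps)
                            - T * f(v(p, 0.3)))) for p in phis)
```

Both residuals vanish to within finite-difference accuracy, confirming that this $v$ solves the boundary-value problem with rotation number $\rho=\omega_1/\omega_2$.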
Let $V(\phi,\tau)$ be the solution to the variational initial-value problem \begin{equation} \label{eq:variationaltorus} \partial_\tau V=T\partial_xf(v,p)V,\,V(\cdot,0)=I_n, \end{equation} such that \begin{equation} \label{eq:Vprod} V(\phi,\tau+1)=V(\phi+\rho,\tau)V(\phi,1) \end{equation} follows from \eqref{eq:torusbvp}. Differentiation of both sides of the following equalities with respect to $\tau$ then shows that \begin{equation} \label{eq:Xpartphiv} \partial_\phi v(\phi,\tau)=V(\phi,\tau)\partial_\phi v(\phi,0) \end{equation} and \begin{equation} \label{eq:Xparttauv} \partial_\tau v(\phi,\tau)=V(\phi,\tau)\partial_\tau v(\phi,0). \end{equation} We conclude that $\partial_\tau v(\cdot,\tau)$ and $\partial_\phi v(\cdot,\tau)$ are right nullvectors of the linear operator \begin{equation}\label{def:glin} \Gamma_{\rho,\tau}:\delta(\cdot)\mapsto V(\cdot-\rho,\tau+1)V^{-1}(\cdot-\rho,\tau)\delta(\cdot-\rho)-\delta(\cdot). \end{equation} The non-uniqueness implied by \eqref{eq:2dimtorusfamily} may be removed by appending two \emph{phase conditions} to \eqref{eq:torusbvp}. In general, these take the form \begin{align} h_1(v(\cdot,\cdot),p)=h_2(v(\cdot,\cdot),p)=0\label{eq:torusphase} \end{align} in terms of two functionals $h_1(\cdot,p)$ and $h_2(\cdot,p)$ that satisfy a suitable non-degeneracy condition. Here, we assume that these functionals are chosen so that the square matrix \begin{equation}\label{phase:ndeg:h} \int_\mathbb{S}\int_0^1\begin{bmatrix}\partial_vh_1(v(\cdot,\cdot),p)\\\partial_vh_2(v(\cdot,\cdot),p)\end{bmatrix}(\phi,\tau)\begin{bmatrix}\partial_\tau v(\phi,\tau) & \partial_\phi v(\phi,\tau)\end{bmatrix} \mathrm{d}\tau\,\mathrm{d}\phi \end{equation} is nonsingular. \subsection{Normal hyperbolicity} \label{sec:normal hyperbolicity} \jrem{We assume henceforth that the solution $v$ of \eqref{eq:torusbvp} is \textit{normally hyperbolic}.
For our scenario, this implies the existence of an invariant continuous family of \emph{tangential} projections \begin{equation} \label{eq:qtgdef} q_\mathrm{tg}(\phi,\tau):=\partial_\tau v(\phi,\tau)q_\tau^{\mathsf{T}}(\phi,\tau)+\partial_\phi v(\phi,\tau)q_\phi^{\mathsf{T}}(\phi,\tau) \end{equation} onto the tangent spaces of the torus, and a complementary family of \emph{transversal} projections $q_\mathrm{tr}(\phi,\tau):=I_n-q_\mathrm{tg}(\phi,\tau)$, such that \begin{equation} \label{tg:qper:inv} q_\mathrm{tg}(\phi,\tau+1)=q_\mathrm{tg}(\phi+\rho,\tau),\quad q_\mathrm{tg}(\phi,\tau)V(\phi,\tau)=V(\phi,\tau)q_\mathrm{tg}(\phi,0). \end{equation} Moreover, the map $\Gamma_{\rho,\tau}$ is a bijection with bounded inverse on the space of transversal perturbations $\phi\mapsto q_\mathrm{tr}(\phi,\tau)\delta(\phi)$ for arbitrary continuous periodic $\delta(\phi)$.} By \eqref{eq:Vprod}, we obtain the conjugacy \begin{equation} \label{eq:Gsimilar} V^{-1}(\phi,\tau)\Gamma_{\rho,\tau}\left[\delta(\cdot)\right](\phi,\tau)=\Gamma_{\rho,0}\left[V^{-1}(\cdot,\tau)\delta(\cdot)\right](\phi,\tau) \end{equation} between the operators $\Gamma_{\rho,\tau}$ and $\Gamma_{\rho,0}$. Given the continuous families of projections $q_\mathrm{tg}(\phi,\tau)$ and $q_\mathrm{tr}(\phi,\tau)$, normal hyperbolicity then follows if and only if the map $\Gamma_{\rho,0}$ is a bijection with bounded inverse on the space of functions $\phi\mapsto q_\mathrm{tr}(\phi,0)\delta(\phi)$ for arbitrary continuous periodic $\delta(\phi)$. In other words, invertibility of $\Gamma_{\rho,\tau}\vert_{\rg q_\mathrm{tr}(\cdot,\tau)}$ for all $\tau$ is equivalent to invertibility of $\Gamma_{\rho,0}\vert_{\rg q_\mathrm{tr}(\cdot,0)}$. General theorems about persistence of normally hyperbolic manifolds \cite{HPS77} imply, for example, that torus functions $v$ with non-rigid rotation (such that $\rho(\phi)$ is non-constant periodic) persist. 
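For a concrete transversally attracting torus, the invariance \eqref{tg:qper:inv} can be checked directly. The sketch below uses a hypothetical example, not taken from the text: the product of two Hopf oscillators, for which the tangential projection is the orthogonal projection onto the tangent plane of the torus (the transversal directions are radial in each circle factor), and $V(\phi,\tau)$ is obtained by integrating \eqref{eq:variationaltorus} with a Runge--Kutta scheme.

```python
import numpy as np

# Hypothetical example: product of two Hopf oscillators with frequencies
# w1, w2; we verify q_tg(phi,tau) V(phi,tau) = V(phi,tau) q_tg(phi,0).
w1, w2 = 1.0, np.sqrt(2.0)
rho, T = w1 / w2, 2.0 * np.pi / w2

def v(phi, tau):                       # on-torus solution
    a1, a2 = 2 * np.pi * (phi + rho * tau), 2 * np.pi * tau
    return np.array([np.cos(a1), np.sin(a1), np.cos(a2), np.sin(a2)])

def df(x):                             # block-diagonal Jacobian of f
    def block(z, w):
        x1, y1 = z
        return w * np.array([[1 - 3 * x1**2 - y1**2, -1 - 2 * x1 * y1],
                             [1 - 2 * x1 * y1, 1 - x1**2 - 3 * y1**2]])
    J = np.zeros((4, 4))
    J[:2, :2], J[2:, 2:] = block(x[:2], w1), block(x[2:], w2)
    return J

def q_tg(phi, tau):
    # orthogonal projection onto the span of the two unit tangents
    a1, a2 = 2 * np.pi * (phi + rho * tau), 2 * np.pi * tau
    t1 = np.array([-np.sin(a1), np.cos(a1), 0.0, 0.0])
    t2 = np.array([0.0, 0.0, -np.sin(a2), np.cos(a2)])
    return np.outer(t1, t1) + np.outer(t2, t2)

def V(phi, tau, steps=1000):
    # RK4 for d/ds X = T df(v(phi, s)) X, X(0) = I_n, integrated to s = tau
    X, h = np.eye(4), tau / steps
    A = lambda s: T * df(v(phi, s))
    for k in range(steps):
        t = k * h
        k1 = A(t) @ X
        k2 = A(t + h / 2) @ (X + h / 2 * k1)
        k3 = A(t + h / 2) @ (X + h / 2 * k2)
        k4 = A(t + h) @ (X + h * k3)
        X = X + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return X

phi0, tau0 = 0.2, 0.37
Vt = V(phi0, tau0)
lhs = q_tg(phi0, tau0) @ Vt
rhs = Vt @ q_tg(phi0, 0.0)
```

For this example the variational flow decouples in polar coordinates, so the tangential and radial subspaces are each invariant under $V$, and the two sides agree to integration accuracy.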
Since \jrem{the tangential projection} $q_\mathrm{tg}$ is \jrem{applied pointwise}, $q_\mathrm{tg}q_\mathrm{tg}=q_\mathrm{tg}$ and $q_\mathrm{tg}q_\mathrm{tr}=q_\mathrm{tr}q_\mathrm{tg}=0$ hold everywhere. It follows from \eqref{eq:qtgdef} that \begin{align} \label{eq:orthogqtauqphi} \begin{bmatrix} 1 &0\\ 0&1 \end{bmatrix}= & \begin{bmatrix} q_\phi^{\mathsf{T}}(\phi,\tau)\\ q_\tau^{\mathsf{T}}(\phi,\tau) \end{bmatrix} \begin{bmatrix} \partial_\phi v(\phi,\tau) & \partial_\tau v(\phi,\tau) \end{bmatrix} \end{align} for all $(\phi,\tau)\in\mathbb{S}\times\mathbb{R}$. Moreover, by \eqref{eq:partialvperiodicity}, \eqref{eq:Xpartphiv}, \eqref{eq:Xparttauv}, and \eqref{tg:qper:inv}, we obtain \begin{equation} \label{eq:perqtauqphi} q_\tau(\phi,\tau+1)=q_\tau(\phi+\rho,\tau),\,q_\phi(\phi,\tau+1)=q_\phi(\phi+\rho,\tau), \end{equation} and \begin{equation} q_\tau^{\mathsf{T}}(\phi,\tau)V(\phi,\tau)=q_\tau^{\mathsf{T}}(\phi,0),\,q_\phi^{\mathsf{T}}(\phi,\tau)V(\phi,\tau)=q_\phi^{\mathsf{T}}(\phi,0). \end{equation} It follows that \begin{equation} \label{eq:shiftqtau} 0=q_\tau^{\mathsf{T}}(\phi+\rho,\tau)V(\phi,\tau+1)V^{-1}(\phi,\tau)-q_\tau^{\mathsf{T}}(\phi,\tau) \end{equation} and \begin{equation} \label{eq:shiftqphi} 0=q_\phi^{\mathsf{T}}(\phi+\rho,\tau)V(\phi,\tau+1)V^{-1}(\phi,\tau)-q_\phi^{\mathsf{T}}(\phi,\tau).
\end{equation} Multiplication of each of these equalities by an arbitrary function $\delta(\phi)$ and integration over $\mathbb{S}$ then yields \begin{align} 0&=\int_\mathbb{S}\left(q_\tau^{\mathsf{T}}(\phi+\rho,\tau)V(\phi,\tau+1)V^{-1}(\phi,\tau)-q_\tau^{\mathsf{T}}(\phi,\tau)\right)\delta(\phi)\,\mathrm{d}\phi\nonumber\\ &=\int_\mathbb{S}q_\tau^{\mathsf{T}}(\phi,\tau)\left(V(\phi-\rho,\tau+1)V^{-1}(\phi-\rho,\tau)\delta(\phi-\rho)-\delta(\phi)\right)\,\mathrm{d}\phi\nonumber\\ &=\int_\mathbb{S}q_\tau^{\mathsf{T}}(\phi,\tau)\Gamma_{\rho,\tau}\left[\delta(\cdot)\right](\phi)\,\mathrm{d}\phi \end{align} and \begin{align} 0&=\int_\mathbb{S}\left(q_\phi^{\mathsf{T}}(\phi+\rho,\tau)V(\phi,\tau+1)V^{-1}(\phi,\tau)-q_\phi^{\mathsf{T}}(\phi,\tau)\right)\delta(\phi)\,\mathrm{d}\phi\nonumber\\ &=\int_\mathbb{S}q_\phi^{\mathsf{T}}(\phi,\tau)\left(V(\phi-\rho,\tau+1)V^{-1}(\phi-\rho,\tau)\delta(\phi-\rho)-\delta(\phi)\right)\,\mathrm{d}\phi\nonumber\\ &=\int_\mathbb{S}q_\phi^{\mathsf{T}}(\phi,\tau)\Gamma_{\rho,\tau}\left[\delta(\cdot)\right](\phi)\,\mathrm{d}\phi, \end{align} i.e., that the linear functionals $\int_\mathbb{S}q_\tau^{\mathsf{T}}(\phi,\tau)\left(\cdot\right)\,\mathrm{d}\phi$ and $\int_\mathbb{S}q_\phi^{\mathsf{T}}(\phi,\tau)\left(\cdot\right)\,\mathrm{d}\phi$ lie in the left nullspace of the operator $\Gamma_{\rho,\tau}$. With the help of \eqref{eq:partvquasi}, \eqref{eq:Vprod}, \eqref{eq:Xpartphiv}, \eqref{eq:Xparttauv}, and \eqref{def:glin}, it follows from \eqref{eq:qtgdef} and \eqref{tg:qper:inv} that \begin{align} &q_\mathrm{tg}(\phi,\tau)\Gamma_{\rho,\tau}\left[\delta(\cdot)\right](\phi,\tau)=\Gamma_{\rho,\tau}\left[q_\mathrm{tg}(\cdot,\tau)\delta(\cdot)\right](\phi,\tau)=-q_\mathrm{tg}(\phi,\tau)\delta(\phi)\nonumber\\ &\qquad+\left(\partial_\tau v(\phi,\tau)q_\tau^{\mathsf{T}}(\phi-\rho,\tau)+\partial_\phi v(\phi,\tau)q_\phi^{\mathsf{T}}(\phi-\rho,\tau)\right)\delta(\phi-\rho).
\end{align} Then, \eqref{eq:orthogqtauqphi} implies that \begin{equation} q_\mathrm{tg}(\phi,\tau)\Gamma_{\rho,\tau}\left[q_\mathrm{tr}(\cdot,\tau)\delta(\cdot)\right](\phi,\tau)=0, \end{equation} i.e., that the space of functions $\phi\mapsto q_\mathrm{tr}(\phi,\tau)\delta(\phi)$ for arbitrary continuous periodic functions $\delta(\phi)$ is, in fact, invariant under $\Gamma_{\rho,\tau}$. That $\Gamma_{\rho,\tau}$ is a bijection on this space with a bounded inverse is then equivalent to the existence of a bounded inverse of the map $\hat{\Gamma}_{\rho,\tau}$ given by \begin{equation} \label{nhyp:gst:def} \hat{\Gamma}_{\rho,\tau}\left[\delta(\cdot)\right](\phi,\tau):=\Gamma_{\rho,\tau}\left[ q_\mathrm{tr}(\cdot,\tau)\delta(\cdot)\right](\phi,\tau)-q_\mathrm{tg}(\phi,\tau)\delta(\phi) \end{equation} on the space of continuous periodic functions $\delta(\phi)$. From \eqref{eq:Gsimilar}, we obtain \begin{equation} V^{-1}(\phi,\tau)\hat{\Gamma}_{\rho,\tau}\left[\delta(\cdot)\right](\phi,\tau)=\hat{\Gamma}_{\rho,0}\left[ V^{-1}(\cdot,\tau)\delta(\cdot)\right](\phi,\tau), \end{equation} i.e., that it again suffices to study the restriction to the case $\tau=0$. In particular, normal hyperbolicity implies the existence of a unique continuous periodic solution $\delta=\hat{\Gamma}_{\rho,0}^{-1}\left[\delta_\mathrm{rhs}(\cdot)\right]$ for continuous periodic $\delta_\mathrm{rhs}$ (with the norm of $\delta$ bounded by a fixed multiple of the norm of $\delta_\mathrm{rhs}$) such that \begin{equation} \label{torus:transversal:hyp} V(\phi-\rho,1)q_\mathrm{tr}(\phi-\rho,0)\delta(\phi-\rho)-\delta(\phi)=\delta_\mathrm{rhs}(\phi). \end{equation} Suppose, for example, that $\delta(\cdot)$ is an eigenfunction of $\hat{\Gamma}_{\rho,0}$ with eigenvalue $z$, i.e., such that \begin{equation} V(\phi-\rho,1)q_\mathrm{tr}(\phi-\rho,0)\delta(\phi-\rho)=(1+z)\delta(\phi) \end{equation} for all $\phi$. 
Then, by induction and liberal use of \eqref{eq:Vprod} and \eqref{tg:qper:inv}, \begin{equation} V(\phi-k\rho,k)q_\mathrm{tr}(\phi-k\rho,0)\delta(\phi-k\rho)=(1+z)^k\delta(\phi) \end{equation} and, consequently, \begin{equation} |1+z|^k\le\|V(\cdot,k)q_\mathrm{tr}(\cdot,0)\|. \end{equation} If, in addition to $v$ being normally hyperbolic, the sequence $V(\cdot,k)q_\mathrm{tr}(\cdot,0)$ is bounded by \begin{align} \label{eq:normdecay} \|V(\cdot,k)q_\mathrm{tr}(\cdot,0)\|\leq C_\mathrm{tr}\exp(-k/\tau_\mathrm{tr}) \end{align} for some positive constants $C_\mathrm{tr}$ and $\tau_\mathrm{tr}$, we say that $v$ is \emph{transversally stable}. In this case \begin{equation} |1+z|\le \exp(-1/\tau_\mathrm{tr}) \end{equation} and the spectral radius of $\hat{\Gamma}_{\rho,0}^{-1}$ must be bounded by $1/(1-\exp(-1/\tau_\mathrm{tr}))$. In general, \begin{equation} V(\phi,k)q_\mathrm{tg}(\phi,0)=\partial_\tau v(\phi+k\rho,0)q_\tau^{\mathsf{T}}(\phi,0)+\partial_\phi v(\phi+k\rho,0)q_\phi^{\mathsf{T}}(\phi,0). \end{equation} For a transversally stable $v$, it follows from $q_\mathrm{tr}=I_n-q_\mathrm{tg}$ that \begin{equation} \label{qtg:limk} V(\phi,k)=\partial_\tau v(\phi+k\rho,0)q_\tau^{\mathsf{T}}(\phi,0)+\partial_\phi v(\phi+k\rho,0)q_\phi^{\mathsf{T}}(\phi,0)+O(\exp(-k/\tau_\mathrm{tr})). 
\end{equation} Provided that the $2\times2$ matrix \begin{align} A(\phi):= \begin{bmatrix} \partial_\phi v^{\mathsf{T}}(\phi,0)\partial_\phi v(\phi,0)& \partial_\phi v^{\mathsf{T}}(\phi,0)\partial_\tau v(\phi,0)\\ \partial_\tau v^{\mathsf{T}}(\phi,0)\partial_\phi v(\phi,0)& \partial_\tau v^{\mathsf{T}}(\phi,0)\partial_\tau v(\phi,0) \end{bmatrix} \end{align} has a uniformly bounded inverse (in $\phi$), we may multiply \eqref{qtg:limk} by $\partial_\phi v^{\mathsf{T}}(\phi+k\rho,0)$ and $\partial_{\tau} v^{\mathsf{T}}(\phi+k\rho,0)$, respectively, to obtain \begin{align} \begin{bmatrix} \partial_\phi v^{\mathsf{T}}(\phi+k\rho,0)\\ \partial_{\tau} v^{\mathsf{T}}(\phi+k\rho,0) \end{bmatrix} V(\phi,k)&=A(\phi+k\rho) \begin{bmatrix} q_\phi^{\mathsf{T}}(\phi,0)\\[0.5ex]q_\tau^{\mathsf{T}}(\phi,0) \end{bmatrix}+O(\exp(-k/\tau_\mathrm{tr})). \end{align} Consequently, \begin{align} q_\mathrm{tg}(\phi,0)=\begin{bmatrix} \partial_\phi v(\phi,0)&\partial_\tau v(\phi,0) \end{bmatrix}\lim_{k\to\infty}A(\phi+k\rho)^{-1} \begin{bmatrix} \partial_\phi v^{\mathsf{T}}(\phi+k\rho,0)\\ \partial_{\tau} v^{\mathsf{T}}(\phi+k\rho,0) \end{bmatrix} V(\phi,k), \end{align} where the limit has exponential convergence with rate $1/\tau_\mathrm{tr}$.
\subsection{Adjoint conditions}\label{sec:torus:adj} Inspired by the analysis in previous sections, we next consider the Lagrangian \begin{align} L&=\int_\mathbb{S}\int_0^1\lambda_\mathrm{de}^{\mathsf{T}}(\phi,\tau)(\partial_\tau v(\phi,\tau)-Tf(v(\phi,\tau),p))\,\mathrm{d}\tau\,\mathrm{d}\phi\nonumber\\ &\qquad+\int_\mathbb{S} \lambda_\mathrm{bc}^{\mathsf{T}}(\phi)(v(\phi+\rho,0)-v(\phi,1))\,\mathrm{d}\phi+\lambda_\mathrm{ps}^{\mathsf{T}}h(v(\cdot,\cdot),p)\nonumber\\ &\qquad\qquad+\eta_\rho(\rho-\mu_\rho)+\eta_T(T-\mu_T)+\eta^{\mathsf{T}}_p(p-\mu_p), \end{align} from which we derive the adjoint conditions \begin{align}\label{phase:var:dv} &\mbox{$\delta v(\phi,\tau)$:}&0&=-\partial_\tau \lambda_\mathrm{de}^{\mathsf{T}}-T\lambda_\mathrm{de}^{\mathsf{T}}\partial_xf(v,p)+\lambda_\mathrm{ps}^{\mathsf{T}}\partial_v h(v(\cdot,\cdot),p),\\ \label{phase:var:dv1} &\mbox{$\delta v(\phi,1)$:}& 0&=\phantom{-}\lambda_\mathrm{de}^{\mathsf{T}}(\phi,1)-\lambda_\mathrm{bc}^{\mathsf{T}}(\phi),\\ \label{phase:var:dv0} &\mbox{$\delta v(\phi,0)$:}& 0&=-\lambda_\mathrm{de}^{\mathsf{T}}(\phi,0)+\lambda_\mathrm{bc}^{\mathsf{T}}(\phi-\rho),\\ \label{phase:var:dT} &\mbox{$\delta T$:}& 0&=-\int_\mathbb{S}\int_0^1\lambda_\mathrm{de}^{\mathsf{T}}(\phi,\tau)f(v(\phi,\tau),p)\,\mathrm{d}\tau\,\mathrm{d}\phi+\eta_T,\\ \label{phase:var:drho} &\mbox{$\delta\rho$:}& 0&=\int_\mathbb{S}\lambda_\mathrm{bc}^{\mathsf{T}}(\phi)\partial_\phi v(\phi+\rho,0)\,\mathrm{d}\phi+\eta_\rho,\\ \label{phase:var:dp} &\mbox{$\delta p$:}& 0&=-\int_\mathbb{S}\int_0^1\lambda_\mathrm{de}^{\mathsf{T}}(\phi,\tau)T\partial_pf(v(\phi,\tau),p)\,\mathrm{d}\tau\,\mathrm{d}\phi+\eta_p^{\mathsf{T}}\nonumber\\ &&&\qquad+\lambda_\mathrm{ps}^{\mathsf{T}}\partial_ph(v(\cdot,\cdot),p).
\end{align} For every solution to these conditions, by elimination of $\lambda_\mathrm{bc}$ from \eqref{phase:var:dv1} and \eqref{phase:var:dv0}, it must hold that \begin{equation}\label{phase:lde:periodic} \lambda_\mathrm{de}(\phi,1)=\lambda_\mathrm{de}(\phi+\rho,0) \end{equation} analogously to \eqref{eq:torusbvp}. Multiplying \eqref{phase:var:dv} by the solution $V(\phi,\tau)$ to \eqref{eq:variationaltorus} yields \begin{equation} \lambda_\mathrm{ps}^{\mathsf{T}}\partial_vh(v(\cdot,\cdot),p)V(\phi,\tau)=\partial_\tau\left(\lambda_\mathrm{de}^{\mathsf{T}}(\phi,\tau)V(\phi,\tau)\right) \end{equation} and integrating with respect to $\tau$ from $0$ to $1$ then results in \begin{equation} \label{eq:lambdapscond} \lambda_\mathrm{ps}^{\mathsf{T}}\int_0^1\partial_vh(v(\cdot,\cdot),p)V(\phi,\tau)\,\mathrm{d}\tau=\lambda_\mathrm{de}^{\mathsf{T}}(\phi+\rho,0)V(\phi,1)-\lambda_\mathrm{de}^{\mathsf{T}}(\phi,0), \end{equation} since $V(\phi,0)=I_n$. Multiplication of both sides of \eqref{eq:lambdapscond} by $\partial_\tau v(\phi,0)$ or $\partial_\phi v(\phi,0)$, followed by integration over $\phi$ now yields \begin{align} &\lambda_\mathrm{ps}^{\mathsf{T}}\int_\mathbb{S}\int_0^1\partial_vh(v(\cdot,\cdot),p)V(\phi,\tau)\partial_\tau v(\phi,0)\,\mathrm{d}\tau\mathrm{d}\phi\nonumber\\ &\qquad=\int_\mathbb{S}\lambda_\mathrm{de}^{\mathsf{T}}(\phi,0)\left(V(\phi-\rho,1)\partial_\tau v(\phi-\rho,0)-\partial_\tau v(\phi,0)\right)\,\mathrm{d}\phi\nonumber\\ &\qquad=\int_\mathbb{S}\lambda_\mathrm{de}^{\mathsf{T}}(\phi,0)\Gamma_{\rho,0}\left[\partial_\tau v(\cdot,0)\right](\phi)\,\mathrm{d}\phi=0\label{eq:lambdaps1} \end{align} and \begin{align} &\lambda_\mathrm{ps}^{\mathsf{T}}\int_\mathbb{S}\int_0^1\partial_vh(v(\cdot,\cdot),p)V(\phi,\tau)\partial_\phi v(\phi,0)\,\mathrm{d}\tau\mathrm{d}\phi\nonumber\\ &\qquad=\int_\mathbb{S}\lambda_\mathrm{de}^{\mathsf{T}}(\phi,0)\left(V(\phi-\rho,1)\partial_\phi
v(\phi-\rho,0)-\partial_\phi v(\phi,0)\right)\,\mathrm{d}\phi\nonumber\\ &\qquad=\int_\mathbb{S}\lambda_\mathrm{de}^{\mathsf{T}}(\phi,0)\Gamma_{\rho,0}\left[\partial_\phi v(\cdot,0)\right](\phi)\,\mathrm{d}\phi=0,\label{eq:lambdaps2} \end{align} where the final equalities follow from the observation at the end of the previous section that $\partial_\tau v(\cdot,0)$ and $\partial_\phi v(\cdot,0)$ are right nullvectors of $\Gamma_{\rho,0}$. By the nonsingularity of \eqref{phase:ndeg:h}, the equalities \eqref{eq:lambdaps1} and \eqref{eq:lambdaps2} imply that $\lambda_\mathrm{ps}=0$ and, consequently, that $\lambda_\mathrm{de}^{\mathsf{T}}(\phi,\tau)V(\phi,\tau)$ must be constant. In particular, by \eqref{eq:Vprod}, \begin{equation} \label{eq:deXper} \lambda_\mathrm{de}^{\mathsf{T}}(\phi+\rho,\tau)V(\phi,\tau+1)V^{-1}(\phi,\tau)-\lambda_\mathrm{de}^{\mathsf{T}}(\phi,\tau)=0. \end{equation} Multiplication of this equality by an arbitrary function $\delta(\phi)$ and integration over $\mathbb{S}$ then yields the equality \begin{align} \label{phase:lde:left} 0&=\int_\mathbb{S}\lambda_\mathrm{de}^{\mathsf{T}}(\phi,\tau)\left(V(\phi-\rho,\tau+1)V^{-1}(\phi-\rho,\tau)\delta(\phi-\rho)-\delta(\phi)\right)\,\mathrm{d}\phi\nonumber\\&\qquad=\int_\mathbb{S}\lambda_\mathrm{de}^{\mathsf{T}}(\phi,\tau)\Gamma_{\rho,\tau}\left[\delta(\cdot)\right](\phi)\,\mathrm{d}\phi, \end{align} i.e., that the linear functional $\int_\mathbb{S}\lambda_\mathrm{de}^{\mathsf{T}}(\phi,\tau)\left(\cdot\right)\,\mathrm{d}\phi$ must lie in the left nullspace of the operator $\Gamma_{\rho,\tau}$.
Conversely, for every periodic function $\lambda_\mathrm{bc}(\cdot)$ satisfying \begin{equation} \label{eq:lambdabccond} \lambda_\mathrm{bc}^{\mathsf{T}}(\phi)V(\phi,1)-\lambda^{\mathsf{T}}_\mathrm{bc}(\phi-\rho)=0, \end{equation} such that $\int_\mathbb{S}\lambda_\mathrm{bc}^{\mathsf{T}}(\phi-\rho)\left(\cdot\right)\,\mathrm{d}\phi$ is in the left nullspace of the operator $\Gamma_{\rho,0}$, the function \begin{equation} \lambda_\mathrm{de}^{\mathsf{T}}(\phi,\tau):=\lambda_\mathrm{bc}^{\mathsf{T}}(\phi)V(\phi,1)V^{-1}(\phi,\tau) \end{equation} satisfies the adjoint boundary value problem \eqref{phase:var:dv}--\eqref{phase:var:dv0} with $\lambda_\mathrm{ps}=0$. It follows from \eqref{eq:shiftqtau} and \eqref{eq:shiftqphi} that the assignment $\lambda_\mathrm{bc}(\phi)=c_\tau q_\tau(\phi+\rho,0)+c_\phi q_\phi(\phi+\rho,0)$ for arbitrary constants $c_\tau$ and $c_\phi$ satisfies \eqref{eq:lambdabccond} and that the corresponding $\lambda_\mathrm{de}^{\mathsf{T}}(\phi,\tau)$ equals $c_\tau q_\tau^{\mathsf{T}}(\phi,\tau)+c_\phi q_\phi^{\mathsf{T}}(\phi,\tau)$. Substitution in \eqref{phase:var:dT} and \eqref{phase:var:drho} then yields $(\eta_T,\eta_\rho)=(c_\tau/T,-c_\phi)$. In the context of the general theory, the solution to the adjoint equations, if it exists, depends on the choice of assignments of $0$'s and $1$'s to the elements of $\eta_{\mathbb{J}_2}$. Here, we focus on two such possibilities, namely that with $\eta_T=1$ and $\eta_\rho=0$, which we denote by an additional subscript $_T$, and that with $\eta_T=0$ and $\eta_\rho=1$, which we denote by the additional subscript $_\rho$. We thus obtain the corresponding pair of solutions $\lambda_{\mathrm{de},T}(\phi,\tau)=Tq_\tau(\phi,\tau)$ and $\lambda_{\mathrm{de},\rho}(\phi,\tau)=-q_\phi(\phi,\tau)$.
The corresponding values of $\eta_p^{\mathsf{T}}$ equal \begin{equation} \label{eq: etaT_T} T\int_\mathbb{S}q_\tau^{\mathsf{T}}(\phi,1)P(\phi,1)\,\mathrm{d}\phi \end{equation} and \begin{equation} \label{eq: etaT_rho} -\int_\mathbb{S}q_\phi^{\mathsf{T}}(\phi,1)P(\phi,1)\,\mathrm{d}\phi, \end{equation} respectively, where \begin{equation} \label{eq:Pquasi} P(\phi,\tau):=TV(\phi,\tau)\int_0^\tau V(\phi,\sigma)^{-1}\partial_pf(v(\phi,\sigma),p)\,\mathrm{d}\sigma\mbox{.} \end{equation} In general, since $\lambda_\mathrm{de}^{\mathsf{T}}(\phi,\tau)V(\phi,\tau)$ must be constant in $\tau$, it follows from \eqref{eq:Xpartphiv} and \eqref{eq:Xparttauv} that the products \begin{equation} \lambda_\mathrm{de}^{\mathsf{T}}(\phi,\tau)\partial_\tau v(\phi,\tau)\mbox{ and }\lambda_\mathrm{de}^{\mathsf{T}}(\phi,\tau)\partial_\phi v(\phi,\tau) \end{equation} must also be constant in $\tau$. Thus, if the rotation number $\rho$ is irrational (such that every trajectory on the torus covers it densely) and $\lambda_\mathrm{bc}$ (and, hence, $\lambda_\mathrm{de}$) is continuous in $\phi$, the scaling conditions \eqref{phase:var:dT} and \eqref{phase:var:drho} imply \begin{align} \label{eq:orthoggen} \lambda_\mathrm{de}^{\mathsf{T}}(\phi,\tau)\partial_\tau v(\phi,\tau) = T\eta_T,\, \lambda_\mathrm{de}^{\mathsf{T}}(\phi,\tau)\partial_\phi v(\phi,\tau)&=-\eta_\rho \end{align} for all $(\phi,\tau)$. As an immediate consequence, \begin{equation} \label{eq:lambdaqtg} \lambda_\mathrm{de}^{\mathsf{T}}(\phi,\tau)q_\mathrm{tg}(\phi,\tau)=T\eta_Tq_\tau^{\mathsf{T}}(\phi,\tau)-\eta_\rho q_\phi^{\mathsf{T}}(\phi,\tau).
\end{equation} Furthermore, from \eqref{eq:deXper} we find \begin{align} \label{eq:lambdadeqst1} \lambda_\mathrm{de}^{\mathsf{T}}(\phi+\rho,\tau)q_\mathrm{tr}(\phi+\rho,\tau)V(\phi,\tau+1)V^{-1}(\phi,\tau)-\lambda_\mathrm{de}^{\mathsf{T}}(\phi,\tau)q_\mathrm{tr}(\phi,\tau)=0 \end{align} and, consequently, that the functional $\int_\mathbb{S}\lambda_\mathrm{de}^{\mathsf{T}}(\phi,\tau)q_\mathrm{tr}(\phi,\tau)(\cdot)\,\mathrm{d}\phi$ must lie in the left nullspace of the operator $\hat{\Gamma}_{\rho,\tau}$. Since $\hat{\Gamma}_{\rho,\tau}$ has a bounded inverse, we conclude that \begin{equation} \label{phase:lde:st0} \lambda_\mathrm{de}^{\mathsf{T}}(\phi,\tau)q_\mathrm{tr}(\phi,\tau)=0\mbox{} \end{equation} must hold for all $(\phi,\tau)$. Considering the function\footnote{The indicator function $\mathbbm{1}_r:\mathbb{S}\rightarrow\{0,1\}$ is nonzero on $|\phi|<r/2$ (appropriately defined in the metric on $\mathbb{S}$).} \begin{equation} \delta(\phi)=\mathbbm{1}_r(\phi-\phi_0)\bigl[q_\mathrm{tr}(\phi,\tau)+q_\mathrm{tg}(\phi,\tau)\bigr]\delta_0, \end{equation} which equals $\delta_0$ on the ball $B_r$ with diameter $r$ around some arbitrary $\phi_0$ in $\mathbb{S}$ and $0$ elsewhere, \eqref{eq:lambdaqtg} and \eqref{phase:lde:st0} imply that \begin{align} \int_\mathbb{S}\lambda_\mathrm{de}^{\mathsf{T}}(\phi,\tau)\delta(\phi)\,\mathrm{d}\phi &=\int_{B_r}\lambda_\mathrm{de}^{\mathsf{T}}(\phi,\tau)\,\mathrm{d}\phi \,\delta_0\nonumber\\ &=\left[T\eta_T\int_{B_r}q_\tau^{\mathsf{T}}(\phi,\tau)\,\mathrm{d}\phi-\eta_\rho\int_{B_r}q_\phi^{\mathsf{T}}(\phi,\tau)\,\mathrm{d}\phi\right]\delta_0. \end{align} Dividing by $r$ and assuming continuity of $\lambda_\mathrm{de}$ in $\phi$, we obtain \begin{align} \label{eq:adjointsol} \lambda_\mathrm{de}(\phi,\tau)=T\eta_Tq_\tau(\phi,\tau)-\eta_\rho q_\phi(\phi,\tau), \end{align} since the above integral equality holds for all $\delta_0$, arbitrarily small $r>0$, and arbitrary $\phi_0$.
We conclude that for irrational $\rho$, the functions $Tq_\tau(\phi,\tau)$ and $-q_\phi(\phi,\tau)$ are, in fact, the unique continuous solutions for $\lambda_{\mathrm{de},T}(\phi,\tau)$ and $\lambda_{\mathrm{de},\rho}(\phi,\tau)$, respectively. It follows from \eqref{eq:orthogqtauqphi} that \begin{equation} \label{eq:ortho1} \lambda_{\mathrm{de},T}^{\mathsf{T}}(\phi,\tau)\partial_\tau v(\phi,\tau)=T,\,\lambda_{\mathrm{de},\rho}^{\mathsf{T}}(\phi,\tau)\partial_\phi v(\phi,\tau)=-1, \end{equation} and \begin{equation} \label{eq:ortho2} \lambda_{\mathrm{de},T}^{\mathsf{T}}(\phi,\tau)\partial_\phi v(\phi,\tau)=\lambda_{\mathrm{de},\rho}^{\mathsf{T}}(\phi,\tau)\partial_\tau v(\phi,\tau)=0 \end{equation} for all $(\phi,\tau)$, consistent with \eqref{eq:orthoggen}. \subsection{Asymptotic analysis} As in the analysis of the periodic orbit in Section~\ref{sec:po:revisit}, for $x\approx v(\phi,\tau)$ for some $\phi$ and $\tau$, the scalar equations \begin{equation} \label{eq:toruslocalcoords} \lambda^{\mathsf{T}}_{\mathrm{de},T}(\varphi,\sigma)(x-v(\varphi,\sigma))=0,\,\lambda^{\mathsf{T}}_{\mathrm{de},\rho}(\varphi,\sigma)(x-v(\varphi,\sigma))=0 \end{equation} define $\varphi$ and $\sigma$ uniquely on a neighborhood of $\phi$ and $\tau$, such that $\varphi=\phi$ and $\sigma=\tau$ when $x=v(\phi,\tau)$.
Indeed, the Jacobian of the left-hand sides of \eqref{eq:toruslocalcoords} with respect to $\varphi$ and $\sigma$, evaluated at $x=v(\phi,\tau)$, $\varphi=\phi$, and $\sigma=\tau$ equals \begin{equation} \begin{bmatrix}-\lambda^{\mathsf{T}}_{\mathrm{de},T}(\phi,\tau)\partial_\phi v(\phi,\tau) & -\lambda^{\mathsf{T}}_{\mathrm{de},T}(\phi,\tau)\partial_\tau v(\phi,\tau)\\-\lambda^{\mathsf{T}}_{\mathrm{de},\rho}(\phi,\tau)\partial_\phi v(\phi,\tau) & -\lambda^{\mathsf{T}}_{\mathrm{de},\rho}(\phi,\tau)\partial_\tau v(\phi,\tau)\end{bmatrix} \end{equation} which reduces to the nonsingular matrix \begin{equation} \begin{bmatrix}0 & -T\\1 & 0\end{bmatrix} \end{equation} by \eqref{eq:ortho1} and \eqref{eq:ortho2}. It follows by implicit differentiation of \eqref{eq:toruslocalcoords} with respect to $x$, evaluated at $\varphi=\phi$, $\sigma=\tau$, and $x=v(\phi,\tau)$, that \begin{equation} \partial_x\varphi(v(\phi,\tau))=-\lambda^{\mathsf{T}}_{\mathrm{de},\rho}(\phi,\tau),\,\partial_x\sigma(v(\phi,\tau))=\frac{1}{T}\lambda^{\mathsf{T}}_{\mathrm{de},T}(\phi,\tau). \end{equation} If we define $x_\mathrm{tr}:=x-v(\varphi(x),\sigma(x))$, such that $q_\mathrm{tg}(\varphi(x),\sigma(x))x_\mathrm{tr}(x)=0$, then for $x=v(\phi,\tau)+\delta$ with $\|\delta\|\ll 1$, it follows that $x=v(\varphi(x),\sigma(x))+x_\mathrm{tr}(x)$, where \begin{align} \varphi(x)=\phi-\lambda^{\mathsf{T}}_{\mathrm{de},\rho}(\phi,\tau)\delta+O(\|\delta\|^2),\\ \sigma(x)=\tau+\frac{1}{T}\lambda^{\mathsf{T}}_{\mathrm{de},T}(\phi,\tau)\delta+O(\|\delta\|^2). \end{align} For an arbitrary curve $x(\tau)$ that remains near the invariant torus for all $\tau$, we obtain the unique decomposition \begin{align}\label{forcing:decompose} x(\tau)=v(\varphi(\tau),\sigma(\tau))+x_\mathrm{tr}(\tau), \end{align} where $q_\mathrm{tg}(\varphi(\tau),\sigma(\tau))x_\mathrm{tr}(\tau)=0$ for all $\tau$.
Suppose, for example, that this is true with $\|x_\mathrm{tr}(\tau)\|=O(\varepsilon)$ for a solution $x(\tau)$ of the perturbed differential equation \begin{align} \label{ds:perturbed} x'&=Tf(x,p)+\delta_\mathrm{ode}(\tau), \end{align} where $\delta_\mathrm{ode}(\tau)=O(\varepsilon)$ for all $\tau$. Substitution of \eqref{forcing:decompose} into \eqref{ds:perturbed} and use of \eqref{eq:torusbvp} yields \begin{equation} \label{eq:subs1} \partial_\phi v\varphi'+\partial_\sigma v(\sigma'-1)+x_\mathrm{tr}'=T\partial_x f(v,p)x_\mathrm{tr}+\delta_\mathrm{ode}+O(\varepsilon^2). \end{equation} We conclude that $\varphi'=O(\varepsilon)$ and $\sigma'=1+O(\varepsilon)$. Following the same arguments for $x_\mathrm{tr}$ as in Section~\ref{sec:po:revisit}, we obtain \begin{equation} \label{eq:subs4} x_\mathrm{tr}'=T\partial_x f(v,p)q_\mathrm{tr}x_\mathrm{tr}+q_\mathrm{tr}\delta_\mathrm{ode}+O(\varepsilon^2). \end{equation} Substitution in \eqref{eq:subs1} and multiplication by $\lambda_{\mathrm{de},T}^{\mathsf{T}}(\varphi,\sigma)$ and $\lambda_{\mathrm{de},\rho}^{\mathsf{T}}(\varphi,\sigma)$, respectively, then yields the generalization of \eqref{eq:phaseper} for the torus: \begin{align} \varphi'=-\lambda_{\mathrm{de},\rho}^{\mathsf{T}}(\varphi,\sigma)\delta_\mathrm{ode}+O(\varepsilon^2),\label{eq:varphiprime}\\ \sigma'=1+\lambda_{\mathrm{de},T}^{\mathsf{T}}(\varphi,\sigma)\frac{\delta_\mathrm{ode}}{T}+O(\varepsilon^2).\label{eq:sigmaprime} \end{align} Finally, we revisit the assumed uniform $O(\varepsilon)$ bound on the magnitude of $x_\mathrm{tr}$. As in Section~\ref{sec:po:revisit}, this follows provided that $v$ is transversally stable, in which case \eqref{eq:varphiprime} and \eqref{eq:sigmaprime} hold for all $\tau$.
In contrast to the resultant bounded behavior of $x_\mathrm{tr}$, the variables $\sigma(\tau)-\tau$ and $\varphi(\tau)$ may have non-trivial dynamics, determining, for example, potential locking near the invariant torus graph $v(\cdot,\cdot)$. For the special case of $\delta_\mathrm{ode}=0$ and a perturbation $\delta$ to the initial condition $v(\phi_0,\sigma_0)$, transversal stability implies that \begin{equation} \varphi(\tau)=\varphi(0)+O(\|\delta\|^2),\,\sigma(\tau)=\sigma(0)+\tau+O(\|\delta\|^2), \end{equation} and we find that \begin{multline}\label{phase:linear} F(\tau,v(\phi_0,\sigma_0)+\delta,p)\\-F\left(\tau,v\left(\phi_0-\lambda_{\mathrm{de},\rho}^{\mathsf{T}}(\phi_0,\sigma_0)\delta,\sigma_0+\frac{1}{T}\lambda_{\mathrm{de},T}^{\mathsf{T}}(\phi_0,\sigma_0)\delta\right),p\right) \end{multline} behaves as $O(\|\delta\|^2)+O(\exp(-\tau/\tau_\mathrm{tr}))$ for large $\tau$. The quantities $\phi_0-\lambda_{\mathrm{de},\rho}^{\mathsf{T}}(\phi_0,\sigma_0)\delta$ and $\sigma_0+\frac{1}{T}\lambda_{\mathrm{de},T}^{\mathsf{T}}(\phi_0,\sigma_0)\delta$ are the linear (in $\delta$) approximations to the corresponding asymptotic phases. \subsection{Regularity} \label{sec:regularity} We conclude the general discussion of the quasiperiodic invariant torus by exploring the extent to which the predictions of the adjoint analysis agree with those obtained from the linearization of the zero problem \eqref{eq:torusbvp} and \eqref{eq:torusphase}.
We recall from the general theory in Section~\ref{sec:preliminaries} that the values of the adjoint variables $\lambda$ and $\eta_{\mathbb{I}\,\cup\,\mathbb{J}_1}$ capture the sensitivities of the monitor function $\Psi_k(u)$ to violations of the zero problem $\Phi(u)=0$ and variations in $\Psi_{\mathbb{I}\,\cup\,\mathbb{J}_1}(u)$, respectively, provided that the reduced continuation problem \begin{equation} \Phi(u)=0,\,\Psi_{\mathbb{I}\,\cup\,\mathbb{J}_1}(u)-\Psi_{\mathbb{I}\,\cup\,\mathbb{J}_1}(\tilde{u})=0 \end{equation} is regular at $u=\tilde{u}$ with zero dimensional deficit. In this case, \begin{equation} \begin{pmatrix}\tilde{\lambda} & \tilde{\eta}_{\mathbb{I}\,\cup\,\mathbb{J}_1}\end{pmatrix}=-D\Psi_k(\tilde{u})\begin{pmatrix}D\Phi(\tilde{u})\\D\Psi_{\mathbb{I}\,\cup\,\mathbb{J}_1}(\tilde{u})\end{pmatrix}^{-1},\label{eq:lambdasol} \end{equation} where the inverse on the right is assumed to be bounded. As we show below, this assumption fails for the zero problem given by \eqref{eq:torusbvp} and \eqref{eq:torusphase} with irrational $\rho$. Nevertheless, the solutions obtained for $\lambda_\mathrm{de}$, $\lambda_\mathrm{bc}$, $\lambda_\mathrm{ps}$, and $\eta_p$ (whose existence and uniqueness within the space of continuous $\lambda_\mathrm{bc}$ follow from normal hyperbolicity) for $k=1$ ($\eta_\rho=1$ and $\eta_T=0$) and $k=2$ ($\eta_\rho=0$ and $\eta_T=1$), respectively, do represent the sensitivities of $T$ and $\rho$ to constraint violations and variations in $p$. 
To show this, given constraint violations $\delta_\mathrm{ode}(\cdot,\cdot)$, $\delta_\mathrm{bc}(\cdot)$, and $\delta_h$, consider the linearized equations \begin{align} \delta_\mathrm{ode}&=\partial_\tau\delta_v-T\partial_xf(v,p)\delta_v-f(v,p)\delta_T-T\partial_pf(v,p)\delta_p,\\ \delta_\mathrm{bc}&=\delta_v(\cdot+\rho,0)-\delta_v(\cdot,1)+\partial_\phi v(\cdot+\rho,0)\delta_\rho,\label{eq:linquasi2}\\ \delta_h&=\int_\mathbb{S}\int_0^1\partial_v h(v(\cdot,\cdot),p)\delta_v(\phi,\tau)\,\mathrm{d}\tau\,\mathrm{d}\phi+\partial_p h(v(\cdot,\cdot),p)\delta_p.\label{eq:phasecondlin} \end{align} The first of these implies that \begin{align} \delta_v(\cdot,\tau)&=V(\cdot,\tau)\int_0^\tau V(\cdot,\sigma)^{-1}\delta_\mathrm{ode}(\cdot,\sigma)\,\mathrm{d}\sigma\nonumber\\ &\qquad+V(\cdot,\tau)\delta_v(\cdot,0)+\tau f(v(\cdot,\tau),p)\delta_T+P(\cdot,\tau)\delta_p,\label{eq:deltavsol} \end{align} where $P(\cdot,\tau)$ was given in \eqref{eq:Pquasi}. We substitute this solution with $\tau=1$ into \eqref{eq:linquasi2}, shift the argument $\phi$ to $\phi-\rho$, replace $f(v(\phi-\rho,1),p)$ by $\frac{1}{T}\partial_\tau v(\phi,0)$ and isolate the constraint violations on one side of the equation to obtain \begin{align} \label{eq:Glineq} \delta_\mathrm{rhs}(\phi-\rho)&=\partial_\phi v(\phi,0)\delta_\rho-\frac{1}{T}\partial_\tau v(\phi,0)\delta_T\nonumber\\ &\qquad-P(\phi-\rho,1)\delta_p-\Gamma_{\rho,0}[\delta_v(\cdot,0)](\phi), \end{align} where \begin{equation} \delta_\mathrm{rhs}(\phi):=\delta_\mathrm{bc}(\phi)+V(\phi,1)\int_0^1 V(\phi,\sigma)^{-1}\delta_\mathrm{ode}(\phi,\sigma)\,\mathrm{d}\sigma.
\end{equation} Let $\delta_{\tau/\phi}(\phi):=q_{\tau/\phi}^{\mathsf{T}}(\phi,0)\delta_v(\phi,0)$ and $\delta_{\mathrm{tr}}(\phi):=q_{\mathrm{tr}}(\phi,0)\delta_v(\phi,0)$ such that \begin{align} \delta_v(\phi,0)&=\partial_\tau v(\phi,0)\delta_\tau(\phi)+\partial_\phi v(\phi,0)\delta_\phi(\phi)+ q_\mathrm{tr}(\phi,0)\delta_\mathrm{tr}(\phi) \end{align} and \eqref{eq:Glineq} can be written as \begin{align} \delta_\mathrm{rhs}(\phi-\rho)&=\partial_\phi v(\phi,0)\left(\delta_\rho+\delta_\phi(\phi)-\delta_\phi(\phi-\rho)\right)\nonumber\\ &\qquad+\partial_\tau v(\phi,0)\left(-\frac{\delta_T}{T}+\delta_\tau(\phi)-\delta_\tau(\phi-\rho)\right)\nonumber\\ &\qquad\qquad-P(\phi-\rho,1)\delta_p-\Gamma_{\rho,0}[q_\mathrm{tr}(\cdot,0)\delta_\mathrm{tr}(\cdot)](\phi). \end{align} It follows that \begin{align} q_\mathrm{tr}(\phi,0)\left(\delta_\mathrm{rhs}(\phi-\rho)+P(\phi-\rho,1)\delta_p\right)&=-\hat{\Gamma}_{\rho,0}[q_\mathrm{tr}(\cdot,0)\delta_\mathrm{tr}(\cdot)](\phi),\\ q_\tau^{\mathsf{T}}(\phi,0)\left(\delta_\mathrm{rhs}(\phi-\rho)+P(\phi-\rho,1)\delta_p\right)&=-\frac{\delta_T}{T}+\delta_\tau(\phi)-\delta_\tau(\phi-\rho),\label{eq:deltatau}\\ q_\phi^{\mathsf{T}}(\phi,0)\left(\delta_\mathrm{rhs}(\phi-\rho)+P(\phi-\rho,1)\delta_p\right)&=\delta_\rho+\delta_\phi(\phi)-\delta_\phi(\phi-\rho).\label{eq:deltaphi} \end{align} Since $\hat{\Gamma}_{\rho,0}$ has a bounded inverse, the first of these equations may be solved uniquely for $\delta_\mathrm{tr}$ independently of $\delta_\tau$ and $\delta_\phi$ in terms of $\delta_\mathrm{rhs}$ and $\delta_p$.
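The solution formula \eqref{eq:deltavsol} is an instance of variation of constants. The following minimal sketch (our own illustration, not part of the paper's computations; the scalar coefficient $a$ and forcing $g$ are arbitrary choices) checks the structure $x(t)=V(t)\bigl(x_0+\int_0^t V(s)^{-1}g(s)\,\mathrm{d}s\bigr)$ against a closed-form solution:

```python
import numpy as np

# Scalar variation-of-constants check: for x' = a*x + g(t) with
# fundamental solution V(t) = exp(a*t), the solution is
#   x(t) = V(t) * (x0 + int_0^t V(s)^{-1} g(s) ds),
# mirroring the structure of the linearized torus equations.
a, x0, t_end = -0.7, 1.3, 2.0
g = lambda s: np.cos(3.0 * s)

s = np.linspace(0.0, t_end, 20001)
h = s[1] - s[0]
f = np.exp(-a * s) * g(s)
integral = h * (f.sum() - 0.5 * (f[0] + f[-1]))   # trapezoidal rule
x_voc = np.exp(a * t_end) * (x0 + integral)

# Closed-form solution of x' = a*x + cos(3 t) for comparison:
# particular solution A*cos(3t) + B*sin(3t) with A = -a/(9+a^2), B = 3/(9+a^2).
A, B = -a / (9.0 + a**2), 3.0 / (9.0 + a**2)
x_exact = np.exp(a * t_end) * (x0 - A) + A * np.cos(3.0 * t_end) + B * np.sin(3.0 * t_end)
print("variation-of-constants error:", abs(x_voc - x_exact))
```

The same bookkeeping, with $V(\cdot,\tau)$ a matrix solution and the additional $\delta_T$ and $\delta_p$ columns, produces \eqref{eq:deltavsol}.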
By integrating the second and third equations over $\mathbb{S}$, we obtain (recall that $q_{\tau/\phi}(\phi,0)=q_{\tau/\phi}(\phi-\rho,1)$) \begin{align} \label{eq:taumean} \int_\mathbb{S}q_\tau^{\mathsf{T}}(\phi-\rho,1)\delta_\mathrm{rhs}(\phi-\rho)\,\mathrm{d}\phi+\int_\mathbb{S}q_\tau^{\mathsf{T}}(\phi-\rho,1)P(\phi-\rho,1)\,\mathrm{d}\phi\,\delta_p=-\frac{\delta_T}{T}, \end{align} and \begin{align} \label{eq:phimean} \int_\mathbb{S}q_\phi^{\mathsf{T}}(\phi-\rho,1)\delta_\mathrm{rhs}(\phi-\rho)\,\mathrm{d}\phi+\int_\mathbb{S}q_\phi^{\mathsf{T}}(\phi-\rho,1)P(\phi-\rho,1)\,\mathrm{d}\phi\,\delta_p=\delta_\rho, \end{align} such that, consistent with \eqref{eq: etaT_T} and \eqref{eq: etaT_rho}, $\eta_p^{\mathsf{T}}\delta_p=-\delta_T$ (if $\delta_\mathrm{rhs}=0$ and $(\eta_T,\eta_\rho)=(1,0)$) or $\eta_p^{\mathsf{T}}\delta_p=-\delta_\rho$ (if $\delta_\mathrm{rhs}=0$ and $(\eta_T,\eta_\rho)=(0,1)$). We note, for example, that a necessary condition for the well-posedness of the zero problem \eqref{eq:torusbvp},\,\eqref{eq:torusphase} along an $m$-dimensional family of quasiperiodic invariant tori with fixed rotation number $\rho$ in $m+1$ parameters is the presence of a nonzero element of the row matrix $\int_\mathbb{S}q_\phi^{\mathsf{T}}(\phi,0)P(\phi-\rho,1)\,\mathrm{d}\phi$. The tangent space of the family of invariant tori in parameter space is then given by the orthogonal complement to this row matrix (since $\delta_\rho=0$ as $\rho$ is fixed). By \eqref{eq: etaT_rho}, this row matrix equals $-\eta_p^{\mathsf{T}}$ for the adjoint solution with $(\eta_T,\eta_\rho)=(0,1)$.
We attempt to solve for $\delta_\tau$ and $\delta_\phi$ by expanding all terms in \eqref{eq:deltatau} and \eqref{eq:deltaphi} in Fourier series per the formal ans\"{a}tze \begin{equation} \delta_\tau(\phi)=\sum_{k=-\infty}^\infty e^{2\pi\mathrm{i} k\phi}\delta_{\tau,k},\,\delta_\phi(\phi)=\sum_{k=-\infty}^\infty e^{2\pi\mathrm{i} k\phi}\delta_{\phi,k}, \label{eq:unknownFourier} \end{equation} \begin{equation} q_{\tau/\phi}^{\mathsf{T}}(\phi,0)\delta_\mathrm{rhs}(\phi-\rho)=\sum_{k=-\infty}^\infty e^{2\pi\mathrm{i} k\phi}\mathrm{rhs}_{\tau/\phi,k}, \end{equation} and \begin{equation} q_{\tau/\phi}^{\mathsf{T}}(\phi,0)P(\phi-\rho,1)=\sum_{k=-\infty}^\infty e^{2\pi\mathrm{i} k\phi}P_{\tau/\phi,k}. \end{equation} Substitution in \eqref{eq:deltatau} and \eqref{eq:deltaphi} then results in the equations \begin{align} \label{forward:freq} \left(1-\mathrm{e}^{-2\pi\mathrm{i} k\rho}\right)\delta_{\tau/\phi,k}&=P_{\tau/\phi,k}\delta_p+\mathrm{rhs}_{\tau/\phi,k} \end{align} for all $k\ne 0$. Notably, $\delta_{\tau,0}$ and $\delta_{\phi,0}$ do not appear in these conditions, but can instead be solved for uniquely in terms of the remaining Fourier coefficients of $\delta_\tau$, $\delta_\phi$, and $\delta_\mathrm{tr}$ from \eqref{eq:phasecondlin} after substitution of \eqref{eq:deltavsol}, since the corresponding coefficient matrix is the nonsingular matrix \eqref{phase:ndeg:h}. Although the coefficients $\delta_{\tau,k}$ and $\delta_{\phi,k}$ for $k\ne 0$ may be uniquely determined from \eqref{forward:freq}, arbitrarily small divisors $1-\mathrm{e}^{-2\pi\mathrm{i} k\rho}$ occur for large $k$ and convergence of the Fourier series in \eqref{eq:unknownFourier} cannot be guaranteed.
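The small divisors are easy to observe numerically. The following sketch (our illustration; the rotation number is the golden-mean-type value $(\sqrt{5}-1)/2$, which matches $\theta/2\pi$ in the invariant-curve example below) tracks the running minimum of $|1-\mathrm{e}^{-2\pi\mathrm{i} k\rho}|$:

```python
import numpy as np

# Illustrative sketch: the divisors 1 - exp(-2*pi*i*k*rho) for an
# irrational rotation number rho = (sqrt(5)-1)/2.  The Fibonacci
# denominators of the continued-fraction convergents of rho produce
# divisors that become arbitrarily small as k grows.
rho = (np.sqrt(5.0) - 1.0) / 2.0
k = np.arange(1, 100000)
divisors = np.abs(1.0 - np.exp(-2j * np.pi * k * rho))

# Running minimum of the divisor magnitude: it decays without bound,
# so no termwise bound on the Fourier coefficients is available.
running_min = np.minimum.accumulate(divisors)
print("divisor at k=1:", divisors[0])
print("smallest divisor for k < 1e5:", running_min[-1])
```

The unbounded growth of $1/|1-\mathrm{e}^{-2\pi\mathrm{i} k\rho}|$ is exactly the obstruction to bounding the series in \eqref{eq:unknownFourier} term by term.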
This violates the assumed existence of the bounded inverse on the right-hand side of \eqref{eq:lambdasol} and, further, the assumption of regularity relied upon in the standard implicit-function theorem\footnote{\jrem{See the reviews \cite{delaLlave2001tutorial,moser1966} for the use of generalized implicit function theorems and Diophantine conditions on $\rho$ (i.e., $|\exp(2\pi\mathrm{i} k\rho)-1|\geq C_\mathrm{Diop}|k|^{-\nu}$ for all $k\neq0$ and some constants $C_\mathrm{Diop}>0$ and $\nu>0$) to establish existence of invariant tori with parallel flows and irrational rotation numbers. A general treatment for when formal expansions (such as \eqref{forward:freq}) permit one to establish the existence of invariant manifolds with a certain degree of regularity is given in \cite{cabre2003:I,cabre2003:II,cabre2005:III}. The more recent monograph \cite{haro2016parameterization} develops numerical algorithms with rigorous error bounds for computing invariant manifolds (such as quasiperiodic tori) in the presence of unbounded inverses in \eqref{eq:lambdasol} and small divisors.}} to imply local solvability of the zero problem. That the subsystem \eqref{eq:taumean} and \eqref{eq:phimean} is well-posed, however, implies that the corresponding rows of the inverse in \eqref{eq:lambdasol} are bounded. In particular, \eqref{eq:taumean} and \eqref{eq:phimean} imply vanishing sensitivity of $T$ or $\rho$ to perturbations $\delta_h$ to the phase conditions, consistent with the observation that $\lambda_\mathrm{ps}=0$. \subsection{An invariant curve} \label{sec:invc} The preceding analysis has considered two-dimensional invariant tori for flows, for which normal hyperbolicity and irrational rotation numbers guarantee the existence of two distinct asymptotic phases associated with the tangent vector fields $\partial_\tau v(\phi,\tau)$ and $\partial_\phi v(\phi,\tau)$.
We illustrate the key results of this treatment using the simpler problem of an invariant curve of a perturbation of the normal form map for the Neimark-Sacker bifurcation \cite{kuznetsov2013elements} given by \begin{align} \label{invc:map} M(x,p)&= \begin{bmatrix} \cos\theta &-\sin\theta\\ \sin\theta &\phantom{-}\cos\theta \end{bmatrix} \left((1+\alpha)x+ \begin{bmatrix} a & -b\\ b &\phantom{-}a \end{bmatrix}|x|^2x\right)+ \begin{bmatrix} r_1x_1\\r_2 x_2 \end{bmatrix} \end{align} with $r_1=0$, $\alpha=1/4$, $a=-1/4$, $\theta=\pi\left(\sqrt{\smash[b]{5}}-1\right)$, and $p=(r_2,b)^{\mathsf{T}}$. For $p=0$, the circle $v:\mathbb{S}\rightarrow\mathbb{R}^2$ with radius $\sqrt{-\alpha/a}=1$ and centered on the origin is invariant and attracting. In particular, $M(v(\phi),0)=v(\phi+\theta/2\pi)$, i.e., $\theta/2\pi$ is the corresponding rotation number. As seen in the top-left panel of Fig.~\ref{fig:invc}, there exists a unique one-dimensional family of invariant curves with rotation number $\theta/2\pi$ under simultaneous variations in both components of $p$. The panels in the middle column show two such invariant curves and illustrate a loss of smoothness (i.e., increasing higher-order derivatives of the curve function $v$) along this family. \begin{figure}\label{fig:invc} \end{figure} To eliminate the degeneracy associated with arbitrary phase shifts, we append a scalar phase condition to obtain the zero problem \begin{equation} \label{invc:zeroproblem} M(v(\phi),p)=v(\phi+\rho),\,h(v(\cdot),p)=0 \end{equation} in terms of the unknown function $v:\mathbb{S}\rightarrow\mathbb{R}^2$ and scalar $\rho$, and assume that \begin{equation} \label{invc:nondeg} \int_\mathbb{S}\partial_v h(v(\cdot),p)(\phi) v'(\phi)\,\mathrm{d}\phi\ne 0. \end{equation} Here, $V(\phi):=\partial_xM(v(\phi),p)$ plays the role of the matrix $V(\phi,1)$ in previous sections.
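For $p=0$, the stated invariance of the unit circle is straightforward to verify numerically. The following sketch (our own illustration) checks $M(v(\phi),0)=v(\phi+\theta/2\pi)$ and, via a finite-difference Jacobian, the radial contraction by a factor $1/2$ (a direct computation on the circle, not a value quoted in the text) that makes the circle attracting:

```python
import numpy as np

# Sketch (our illustration): for p = (r2, b) = 0 the unit circle
# v(phi) = (cos 2*pi*phi, sin 2*pi*phi) is invariant under the
# normal-form map M with rotation number rho = theta/(2*pi).
alpha, a = 0.25, -0.25
theta = np.pi * (np.sqrt(5.0) - 1.0)
rho = theta / (2.0 * np.pi)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

def M(x, r2=0.0, b=0.0):
    cubic = np.array([[a, -b], [b, a]]) @ x * (x @ x)
    return R @ ((1.0 + alpha) * x + cubic) + np.array([0.0, r2 * x[1]])

def v(phi):
    return np.array([np.cos(2.0*np.pi*phi), np.sin(2.0*np.pi*phi)])

phi = 0.2137                        # arbitrary phase
inv_err = np.linalg.norm(M(v(phi)) - v(phi + rho))

# Finite-difference Jacobian at v(phi); on the circle it equals
# R(theta) (I - 0.5 v v^T), so the radial direction is mapped to
# 0.5 v(phi + rho): a contraction consistent with attractivity.
eps = 1e-7
J = np.column_stack([(M(v(phi) + eps*e) - M(v(phi))) / eps for e in np.eye(2)])
rad_err = np.linalg.norm(J @ v(phi) - 0.5 * v(phi + rho))
print("invariance error:", inv_err, " radial contraction error:", rad_err)
```

Both errors should be at the level of rounding and finite-difference truncation, respectively.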
The corresponding operator $\Gamma_\rho$ is then given by \begin{equation} \label{invc:gro} \Gamma_\rho:\delta(\cdot)\mapsto V(\cdot-\rho)\delta(\cdot-\rho)-\delta(\cdot) \end{equation} and, in particular, $v'(\phi)$ is a right nullvector of $\Gamma_\rho$. Normal hyperbolicity implies the existence of a continuous family of projections \begin{equation} q_\mathrm{tg}(\phi):= v'(\phi)q_\phi^{\mathsf{T}}(\phi),\,q_\mathrm{tr}(\phi):=I_2-q_\mathrm{tg}(\phi) \end{equation} such that \begin{equation} q_\mathrm{tg/tr}(\phi+\rho)V(\phi)=V(\phi)q_\mathrm{tg/tr}(\phi)\Leftrightarrow q_\phi^{\mathsf{T}}(\phi+\rho)V(\phi)=q_\phi^{\mathsf{T}}(\phi) \end{equation} and $\Gamma_\rho$ is a bijection with bounded inverse on the space of functions $\phi\mapsto q_\mathrm{tr}(\phi)\delta(\phi)$ for arbitrary continuous periodic $\delta(\phi)$. As in previous sections, we conclude that the linear functional $\int_\mathbb{S}q_\phi^{\mathsf{T}}(\phi)(\cdot)\,\mathrm{d}\phi$ lies in the left nullspace of $\Gamma_\rho$. By normal hyperbolicity, the map \begin{equation} \label{invc:grohat} \hat{\Gamma}_\rho:\delta(\cdot)\mapsto V(\cdot-\rho)q_\mathrm{tr}(\cdot-\rho)\delta(\cdot-\rho)-\delta(\cdot) \end{equation} has a bounded inverse on the space of continuous periodic functions $\delta(\phi)$. For a transversally stable invariant curve, eigenvalues of $\hat{\Gamma}_\rho$ lie inside a circle centered on $-1$ and of radius less than $1$. This prediction is verified by the lower-left panel of Fig.~\ref{fig:invc}, which also shows that $\Gamma_\rho$ has eigenvalues accumulating at $0$. In analogy with the results in previous sections, \begin{equation} q_\phi^{\mathsf{T}}(\phi)=\lim_{k\rightarrow\infty}\frac{v^{\prime\,{\mathsf{T}}}(\phi+k\rho)}{\| v'(\phi+k\rho)\|^2}V(\phi)^k \end{equation} with exponential convergence.
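Both the invariance $q_\phi^{\mathsf{T}}(\phi+\rho)V(\phi)=q_\phi^{\mathsf{T}}(\phi)$ and the limit formula can be checked on the $p=0$ circle of the normal-form example, where $V(\phi)=R(\theta)(I-\tfrac{1}{2}v(\phi)v^{\mathsf{T}}(\phi))$ in closed form and the transverse bundle is radial, so that $q_\phi^{\mathsf{T}}(\phi)=v'^{\mathsf{T}}(\phi)/\|v'(\phi)\|^2$. The following is a sketch of our own, in which we read $V(\phi)^k$ as the product of Jacobians along the orbit:

```python
import numpy as np

# Sketch (our illustration) on the p = 0 invariant circle of the
# normal-form map: q_phi^T(phi) = v'(phi)^T / ||v'(phi)||^2, and the
# product of Jacobians V(phi+(k-1)rho) ... V(phi) realizes the limit
# formula for q_phi^T.  Generically the error decays like 2^{-k};
# here it vanishes immediately because the bundles are orthogonal.
theta = np.pi * (np.sqrt(5.0) - 1.0)
rho = theta / (2.0 * np.pi)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

v  = lambda phi: np.array([np.cos(2*np.pi*phi),  np.sin(2*np.pi*phi)])
dv = lambda phi: 2*np.pi*np.array([-np.sin(2*np.pi*phi), np.cos(2*np.pi*phi)])
V  = lambda phi: R @ (np.eye(2) - 0.5*np.outer(v(phi), v(phi)))

q_phi = lambda phi: dv(phi) / (dv(phi) @ dv(phi))   # row vector q_phi^T

phi0, k = 0.37, 30
# Invariance q_phi^T(phi+rho) V(phi) = q_phi^T(phi):
inv_err = np.linalg.norm(q_phi(phi0 + rho) @ V(phi0) - q_phi(phi0))

# Limit formula with the Jacobian product in place of V(phi)^k:
cocycle = np.eye(2)
for j in range(k):
    cocycle = V(phi0 + j*rho) @ cocycle
w = dv(phi0 + k*rho)
lim_err = np.linalg.norm((w / (w @ w)) @ cocycle - q_phi(phi0))
print("invariance error:", inv_err, " limit-formula error:", lim_err)
```

Both residuals should be at the level of rounding error.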
Alternatively, for irrational $\rho$, $q_\phi(\phi)$ may be obtained from the unique continuous solution $\lambda_\mathrm{map}(\phi-\rho)$ of the adjoint conditions \begin{align} 0&=\lambda_\mathrm{map}^{\mathsf{T}}(\phi)V(\phi)-\lambda_\mathrm{map}^{\mathsf{T}}(\phi-\rho)+\lambda_\mathrm{ps}\partial_v h(v(\cdot),p)(\phi)\label{invc:delv}\\ 0&=-\int_\mathbb{S}\lambda_\mathrm{map}^{\mathsf{T}}(\phi)v'(\phi+\rho)\,\mathrm{d}\phi+\eta_\rho\label{invc:delrho}\\ 0&=\int_\mathbb{S}\lambda_\mathrm{map}^{\mathsf{T}}(\phi)\partial_p M(v(\phi),p)\,\mathrm{d}\phi+\lambda_\mathrm{ps}\partial_p h(v(\cdot),p)+\eta_p^{\mathsf{T}}\label{invc:delp} \end{align} with $\eta_\rho=1$. Indeed, multiplication of \eqref{invc:delv} by $v'(\phi)$ and integration over $\mathbb{S}$ shows that $\lambda_\mathrm{ps}=0$ and, consequently, that $\lambda_\mathrm{map}^{\mathsf{T}}(\phi)V(\phi)=\lambda_\mathrm{map}^{\mathsf{T}}(\phi-\rho)$ and $\int_\mathbb{S}\lambda_\mathrm{map}^{\mathsf{T}}(\phi-\rho)(\cdot)\,\mathrm{d}\phi$ lies in the left nullspace of $\Gamma_\rho$. A forward linear sensitivity analysis requires solving the linear system \begin{align} \label{invc:forward:Gamma} \delta_\mathrm{rhs}(\phi)&=V(\phi-\rho)\delta_v(\phi-\rho)-\delta_v(\phi)+v'(\phi)\delta_\rho\\ \label{invc:forward:ps} \delta_h&=\int_\mathbb{S}\partial_v h(v(\cdot),p)(\phi)\delta_v(\phi)\,\mathrm{d}\phi+\partial_p h(v(\cdot),p)\delta_p \end{align} with arbitrary $\delta_\mathrm{rhs}(\phi)\in\mathbb{R}^2$ and $\delta_h\in\mathbb{R}$, for $\delta_v(\phi)\in\mathbb{R}^2$, $\delta_\rho\in\mathbb{R}$, and $\delta_p\in\mathbb{R}^2$. Since $\Gamma_\rho$ has a simple eigenvalue $0$ with eigenvector $v'$, the non-degeneracy condition \eqref{invc:nondeg} ensures that the system \eqref{invc:forward:Gamma},\,\eqref{invc:forward:ps} is solvable.
However, as the spectrum of $\Gamma_\rho$ in Fig.~\ref{fig:invc} indicates, the solution $\delta_v$ is not bounded by $\delta_\mathrm{rhs}$ in the same norm due to the small-divisor problem discussed in Section~\ref{sec:regularity}. Finally, we illustrate the predicted asymptotic convergence of forward iterates of $v(\phi_0,p)+\delta_0$ toward forward iterates of $v(\phi_0+q_\phi^{\mathsf{T}}(\phi_0)\delta_0,p)$ per the theory of asymptotic phase. The panels in the right column of Fig.~\ref{fig:invc} show the norm of the difference \begin{multline} \label{invc:asymptotic:phase} M^k(v(\phi_0,p)+\delta_0,p)-M^k(v(\phi_0+q_\phi^{\mathsf{T}}(\phi_0)\delta_0,p),p)\approx\\ M^k(v(\phi_0,p)+\delta_0,p)-M^k(v(\phi_0,p)+\partial_\phi v(\phi_0,p)q_\phi^{\mathsf{T}}(\phi_0)\delta_0,p) \end{multline} for $20$ initial conditions at distances $\|\delta_0\|\approx 10^{-4}$ from the points indicated by black dots in the phase portraits in the middle column in Fig.~\ref{fig:invc}. We observe exponential convergence up to an error of $\|\delta_0\|^2\approx 10^{-7}$, consistent with the theoretical prediction \eqref{phase:linear}. \section{Implementation in \textsc{COCO}} \label{sec:construction} We use this section to further highlight the algorithmic nature of the adjoint approach and its implementation in \textsc{coco}. A general discussion, intended as a reference for users and toolbox developers, is followed by the explicit encoding in \textsc{coco} of the combined sensitivity of $\rho$ with respect to $r_2$ and $b$ along the family of invariant curves in Section~\ref{sec:invc}. \subsection{Principles of construction} As alluded to in Section~\ref{sec:preliminaries}, the construction philosophy of the \textsc{coco} software package naturally lends itself to the Lagrangian approach to computing the sensitivities of different monitor functions with respect to violations of constraints and variations of continuation parameters.
Specifically, by the additive nature of the problem Lagrangian and the associated adjoint conditions, it is possible to arrive at a complete set of defining equations in terms of the adjoint variables in multiple stages, mirroring the staged addition of zero functions and monitor functions to the extended continuation problem. As an illustration of the staged construction paradigm, consider again the analysis of the two-segment periodic orbit of period $T$ in Section~\ref{sec:Hybrid dynamics}. Here, the contributions to the adjoint conditions associated with individual constraints are clearly identified by the corresponding adjoint variables. As constraints are appended individually or in groups, the corresponding contributions to the adjoint conditions may be constructed in terms of linear operations on the corresponding adjoint variables. At any stage of construction, one obtains an extended continuation problem with an associated set of adjoint conditions. We illustrate this principle in Fig.~\ref{fig:staged construction}. \begin{figure} \caption{At each stage of construction of the two-segment periodic orbit problem in \eqref{eq:twoseg1}, one obtains an extended continuation problem with an associated set of adjoint conditions.\label{fig:staged construction}} \end{figure} A key concept of the \textsc{coco} construction philosophy is the idea that a constructor that appends a zero or monitor function must be able to operate on an existing extended continuation problem in order to also expand its domains of continuation variables and/or continuation parameters. We say that a constructor is \textit{embeddable} if this is the case. One way this may be visualized is in terms of an operator on the space of logical matrices of arbitrary dimension, whose columns represent a Cartesian decomposition of the domain of continuation variables and whose rows represent subsets of zero functions or monitor functions introduced at individual stages of construction.
With each call to an embeddable constructor, the matrix grows by any number of additional columns with all zero entries followed by the addition of rows with a combination of zeros and ones in the previously defined columns and only ones in the newly defined columns. An example is shown in Fig.~\ref{fig:booleanconstraint}, where the interpretation is that the zero or monitor functions added by the constructor in this call depend on the variables identified by ones in the corresponding row, and not on those identified by zeros. \begin{figure} \caption{\jrem{A logical matrix representation of the variable dependencies of zero and monitor functions introduced at different stages of problem construction.}\label{fig:booleanconstraint}} \end{figure} The new columns added by a call to an embeddable constructor represent continuation variables that are meaningful to the constructor, but not to \textsc{coco}, which is unable to distinguish among subsets of these variables other than through the cardinal number of the corresponding column. The latter depends on the order of construction and on internal details of the constructor, neither of which can be assumed to be known by subsequent constructors, except in the simplest cases. In \textsc{coco}, this challenge is overcome through a subindexing mechanism, whereby constructor-dependent details are stored in a \textit{function data structure} associated with a particular matrix row and used to extract meaningful subsets of continuation variables associated with this row. This mechanism is illustrated in Fig.~\ref{fig:subindexing}. \begin{figure} \caption{Fields of a function data structure may be used to store context-independent integer indices for subsets of continuation variables associated with a newly constructed function for later reference by subsequent constructor calls.
In the figure, a field such as \mcode{fcndata.x1_idx} stores one such subset of indices.\label{fig:subindexing}} \end{figure} As shown in Section~\ref{sec:preliminaries}, the collection of adjoint variables naturally decomposes through a Cartesian product of dual spaces to each of the ranges of the zero functions or monitor functions introduced at individual stages of construction. As additional zero or monitor functions are introduced, terms are added to existing adjoint conditions or used to initialize new conditions. The analogy with the dependence on previously defined continuation variables and the introduction of new continuation variables in the construction of zero and monitor functions suggests a similar use of a logical matrix representation to track the target locations of new contributions to the adjoint conditions. In each newly added column, rows with ones imply contributions to a particular (existing or newly constructed) adjoint condition. The analogy is further extended by the realization that it is again necessary to use function data for constructor-specific details that identify the associated adjoint conditions independently of the order of construction. This desired functionality is illustrated in Fig.~\ref{fig:booleanadjoint}. \begin{figure} \caption{\jrem{A logical matrix representation of the dependencies of the adjoint conditions on adjoint variables introduced at different stages of problem construction.}\label{fig:booleanadjoint}} \end{figure} \textsc{coco} toolboxes collect predefined embeddable zero and monitor function constructors that exhibit no dependence on previously defined sets of continuation variables, as well as adjoint constructors that, consequently, do not contribute terms to existing adjoint conditions. Since the examples considered in this paper cover classes of problems of a general, largely problem-independent nature, it is sensible to design toolboxes to enable their immediate analysis.
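The logical-matrix bookkeeping described above may be sketched in a few lines of Python (a toy model for illustration only; the function and variable names are ours, and the actual \textsc{coco} internals differ):

```python
# Toy sketch (illustration only; not actual COCO internals) of the
# logical-matrix bookkeeping: each call to an "embeddable constructor"
# appends all-zero columns for the newly introduced variables and one
# row marking the dependencies of the newly added functions.

def add_constructor_row(matrix, dep_cols, n_new):
    """Grow every existing row by n_new zero columns, then append a row
    with ones in dep_cols (existing columns) and in all new columns."""
    old_width = len(matrix[0]) if matrix else 0
    grown = [row + [0] * n_new for row in matrix]
    new_row = [1 if j in dep_cols else 0 for j in range(old_width)]
    grown.append(new_row + [1] * n_new)
    return grown

# First constructor: a zero function in three fresh variables.
m = add_constructor_row([], dep_cols=set(), n_new=3)
# Second constructor: depends on variable 0, introduces two new variables.
m = add_constructor_row(m, dep_cols={0}, n_new=2)
print(m)  # [[1, 1, 1, 0, 0], [1, 0, 0, 1, 1]]
```

The second row illustrates an embeddable constructor that depends on a previously defined variable while introducing new ones, exactly the situation the subindexing mechanism is designed to handle.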
For the forward dynamics problems in Section~\ref{sec:Forward dynamics}, the \textsc{coco}-compatible \mcode{coll} toolbox achieves this objective and may be applied out of the box. For the periodic orbit problem in Section~\ref{sec:Periodic orbits}, the \mcode{bvp} constructors in the \mcode{coll} toolbox achieve the desired result. Alternatively, if the discrete phase condition given by the vanishing of $h(x(1),p)$ is replaced by an integral phase condition, the \textsc{coco}-compatible \mcode{po} toolbox provides the sought support. The first example in Section~\ref{sec:Hybrid dynamics} is best handled with two consecutive calls to \mcode{coll} constructors and the addition of the intersegment boundary conditions using the \textsc{coco} core constructors \mcode{coco_add_func} and \mcode{coco_add_adjt} that were already deployed in Section~\ref{sec:illustration}. For the periodic orbit problem in Section~\ref{sec:Hybrid dynamics} (and, indeed, for arbitrary multi-segment periodic orbit problems), the \mcode{hspo} constructors in the \mcode{po} toolbox implement the desired formalism. For the case of a quasiperiodic invariant torus, the \mcode{bvp} constructors may be applied to a discretization of the PDE constraints in terms of the coefficients of truncated Fourier series for every $\phi$-dependent function. Since these constructors assume that all unknowns are either discretized state variables, interval durations, or problem parameters, the rotation number $\rho$ must be thought of as a problem parameter, even though the vector field is independent of $\rho$. \subsection{Demo for invariant curve} \label{sec:invc:coco} We use explicit \textsc{coco} code to illustrate the paradigm of construction and analysis \jrem{for the invariant curves studied in Section~\ref{sec:invc}.
The dynamics for this example is given by the map $M(\cdot,r_2,b):\mathbb{R}^2\to\mathbb{R}^2$, defined in \eqref{invc:map}.} To this end, we \jrem{let the rotation number $\theta/2\pi$ equal the golden mean $(\sqrt{5}-1)/2$ and approximate this} by the truncated continued fraction expansion $p/q$ with $p=233$ and $q=377$ (with error $\approx 3\times10^{-6}$). The map $M$ and its derivatives $\partial_xM$, $\partial_{r_2}M$, and $\partial_bM$ are then encoded in \textsc{Matlab} using anonymous functions as follows \begin{lstlisting}[language=coco-highlight] >> [numer, denom] = deal(233, 377); >> [theta, alpha, a, r1] = deal(2*pi*numer/denom, 1/4, -1/4, 0); >> mrot = [cos(theta), -sin(theta); sin(theta), cos(theta)]; >> M = @(x,r2,b) ((1+alpha)*mrot + ... (x'*x)*mrot*[a, -b; b, a] + [r1, 0; 0, r2])*x; >> dMx = @(x,r2,b) (1+alpha)*mrot + ... (x.'*x)*mrot*[a, -b; b, a] + [r1, 0; 0, r2] + ... 2*mrot*[a, -b; b, a]*x*x'; >> dMr2 = @(x,r2,b) [0; x(2)]; >> dMb = @(x,r2,b) (x'*x)*mrot*[0, -1; 1, 0]*x; \end{lstlisting} We proceed to discretize the first half of the zero problem \eqref{invc:zeroproblem} in terms of the sequence $\{v_i\}_{i=1}^{q}$ for angles $\phi_i=2\pi(i-1)/q$ such that $v(\phi_i)\approx v_i$ for $i=1,\ldots,q$, and \begin{equation} \label{invc:approxzero} M(v_i,p)-v_{\operatorname{mod}(i+p-1,q)+1}-q\left(v_{\operatorname{mod}(i+p,q)+1}-v_{\operatorname{mod}(i+p-1,q)+1}\right)\delta_\rho \approx 0, \end{equation} where $\delta_{\rho}=\rho-\theta/2\pi$ is assumed to be small. (Here, the coefficient of $\delta_\rho$ is an approximation for $2\pi\partial_\phi v(\phi_{i+p})$, identifying $\phi_{i+q}=\phi_i$ for all $i\in\mathbb{Z}$.) It follows that when $\delta_\rho=0$, the sequence $\{v_i\}_{i=1}^{q}$ is an approximate periodic orbit of period $q$ on the invariant curve that makes $p$ excursions around the invariant curve before repeating.
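As a quick cross-check (in Python rather than the paper's \textsc{Matlab}; variable names are ours), one can verify both the quality of the rational approximation $p/q=233/377$ and that the index map $i\mapsto\operatorname{mod}(i+p-1,q)+1$ used in the discretization permutes $\{1,\ldots,q\}$:

```python
import math

# Illustrative cross-check (not part of the paper's MATLAB code):
# (1) the continued-fraction truncation 233/377 of the golden mean,
# (2) the mod-based index map from the discretized zero problem,
#     which must be a permutation of {1, ..., q}.
p, q = 233, 377
golden_mean = (math.sqrt(5) - 1) / 2
err = abs(p / q - golden_mean)
assert 2e-6 < err < 4e-6            # error is approximately 3e-6

rotated = [((i + p - 1) % q) + 1 for i in range(1, q + 1)]
assert sorted(rotated) == list(range(1, q + 1))  # a permutation
```

That the shifted indices form a permutation of $\{1,\ldots,q\}$ is precisely what makes the sequence $\{v_i\}$ a single periodic orbit visiting every mesh point.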
The commands \begin{lstlisting}[language=coco-highlight] >> [r20, b0, drho0] = deal(0, 0, 0); >> phi = 2*pi*linspace(0, 1-1/denom, denom); >> v0 = [cos(phi); sin(phi)]*sqrt(-alpha/a); \end{lstlisting} generate the corresponding initial solution guess for $(r_2,b)=(0,0)$ with \begin{equation} v_{0,i}=\sqrt{\frac{-\alpha}{a}}\begin{pmatrix}\cos \phi_i \\ \sin \phi_i \end{pmatrix}. \end{equation} We use this sequence to construct the discretized phase condition \begin{align} \label{eq:invc:phascond} \sum_{i=1}^q[v_{0,\operatorname{mod}(i,q)+1}-v_{0,i}]^{\mathsf{T}}[v_i-v_{0,i}]=0 \end{align} as an approximation of the integral condition \begin{equation} \int_\mathbb{S}\partial_\phi v_0^{\mathsf{T}}(\phi)\left(v(\phi)-v_0(\phi)\right)\,\mathrm{d}\phi=0 \end{equation} in terms of the function \begin{equation} v_0(\phi)=\sqrt{\frac{-\alpha}{a}}\begin{pmatrix}\cos \phi \\ \sin \phi \end{pmatrix}. \end{equation} We construct a zero problem consisting of the vanishing left-hand side of \eqref{invc:approxzero} for $i=1,\ldots,q$ and \eqref{eq:invc:phascond} using the staged construction paradigm of \textsc{coco} as follows. We begin by constructing the function \begin{equation} M_\mathrm{res}(u):=M(u_{1,2},u_5,u_6)-u_{3,4} \end{equation} and its Jacobian per the following commands: \begin{lstlisting}[language=coco-highlight] >> [mx, my, mr2, mb, mdrho] = deal(1:2, 3:4, 5, 6, 7); >> Mres = @(u) M(u(mx), u(mr2), u(mb)) - u(my); >> dMres = @(u) [dMx(u(mx), u(mr2), u(mb)), -eye(2), ... dMr2(u(mx),u(mr2),u(mb)),dMb(u(mx),u(mr2),u(mb)),zeros(2,1)]; \end{lstlisting} Then, $M_\mathrm{res}(x,y,r_2,b,\delta_\rho)=0$ implies that $y$ is the image of $x$ under the map $M(\cdot,r_2,b)$. Without loss of generality, we carry $\delta_\rho$ along even in the absence of explicit dependence on this variable.
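Since the correctness of the Jacobian \mcode{dMx} is essential for the adjoint computations, a finite-difference check is a cheap safeguard. The following Python transcription of $M$ and $\partial_xM$ (for illustration only; names, sample point, and tolerances are ours) performs such a check:

```python
import numpy as np

# Python transcription of M and dMx (illustration only; the paper's code
# is MATLAB) with a central finite-difference check of the Jacobian.
theta, alpha, a, r1 = 2 * np.pi * 233 / 377, 0.25, -0.25, 0.0
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

def M(x, r2, b):
    C = np.array([[a, -b], [b, a]])
    return ((1 + alpha) * R + (x @ x) * R @ C + np.diag([r1, r2])) @ x

def dMx(x, r2, b):
    C = np.array([[a, -b], [b, a]])
    return ((1 + alpha) * R + (x @ x) * R @ C + np.diag([r1, r2])
            + 2 * R @ C @ np.outer(x, x))

# Central-difference Jacobian at a sample point agrees with dMx.
x0, r2, b, h = np.array([0.7, 0.3]), 0.1, 0.05, 1e-6
J_fd = np.column_stack([(M(x0 + h * e, r2, b) - M(x0 - h * e, r2, b)) / (2 * h)
                        for e in np.eye(2)])
assert np.allclose(J_fd, dMx(x0, r2, b), atol=1e-8)
```

An analogous check applies to \mcode{dMr2} and \mcode{dMb}.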
The following sequence of commands creates $q$ instances of the zero problem $M_\mathrm{res}=0$, initialized with \begin{align} u_0=(v_{0,i},v_{0,\operatorname{mod}(i+p-1,q)+1},r_{2,0},b_0,0), \end{align} and labeled with the function identifiers \mcode{'M1'}, \mcode{'M2'}, and so on. For each instance, we also construct the corresponding contributions to the adjoint conditions. \begin{lstlisting}[language=coco-highlight] >> fcn = @(f) @(p,d,u) deal(d, f(u)); >> fid = @(s,i)[s,num2str(i)]; >> prob = coco_prob; >> for i=1:denom irot = mod(i+numer-1, denom)+1; prob = coco_add_func(prob, fid('M',i), fcn(Mres), fcn(dMres), ... [], 'zero', 'u0', [v0(:,i); v0(:,irot); r20; b0; drho0]); prob = coco_add_adjt(prob, fid('M',i)); muidx(:,i) = coco_get_func_data(prob, fid('M',i), 'uidx'); maidx(:,i) = coco_get_adjt_data(prob, fid('M',i), 'axidx'); end \end{lstlisting} We store the indices that track the internal ordering of the corresponding continuation variables in the $7\times q$ index array \mcode{muidx}. The column indices for the associated adjoint conditions are correspondingly kept in the $7\times q$ index array \mcode{maidx}. The $q$ zero problems introduced thus far are uncoupled. We achieve the coupling implied by the vanishing of the left-hand side of \eqref{invc:approxzero} using the following sequence of commands. \begin{lstlisting}[language=coco-highlight] >> [bx, bxnext, by, bdrho] = deal(1:2, 3:4, 5:6, 7); >> fbc = @(u) u(bx) + u(bdrho)*denom*(u(bxnext) - u(bx)) - u(by); >> dbc = @(u) [eye(2)*(1-denom*u(bdrho)), eye(2)*denom*u(bdrho), ... -eye(2), denom*(u(bxnext)-u(bx))]; >> for i=1:denom irot = mod(i+numer-1,denom)+1; inext = mod(i+numer,denom)+1; prob = coco_add_func(prob, fid('bc',i), fcn(fbc), fcn(dbc), ... [],'zero', 'uidx', [muidx(mx,irot); muidx(mx,inext); ... muidx(my,i); muidx(mdrho,i)]); prob = coco_add_adjt(prob, fid('bc',i), ... 'aidx', [maidx(mx,irot); maidx(mx,inext); ...
maidx(my,i); maidx(mdrho,i)]); end \end{lstlisting} We use the index array \mcode{muidx} to refer to the previously initialized continuation variables and the index array \mcode{maidx} to refer to previously initialized adjoint conditions. As each of the $q$ zero problems \mcode{'M1'}, \mcode{'M2'}, \ldots depends on its own instances of the parameters $r_2$, $b$, and $\delta_\rho$, we use the \mcode{coco_add_glue} constructor to glue these together across all instances, as shown below. \begin{lstlisting}[language=coco-highlight] >> for i=2:denom prob = coco_add_glue(prob, fid('pglue',i),... muidx([mr2, mb, mdrho], 1), muidx([mr2, mb, mdrho], i)); prob = coco_add_adjt(prob, fid('pglue',i),... 'aidx', [maidx([mr2, mb, mdrho], 1); maidx([mr2, mb, mdrho], i)]); end \end{lstlisting} Finally, we add the phase condition, per the construction \begin{lstlisting}[language=coco-highlight] >> dx0 = denom*(v0(:,[2:end,1])-v0); >> phascond = @(u)dx0(:)'*(u(:)-v0(:)); >> dphascond = @(u)dx0(:)'; >> prob = coco_add_func(prob, 'phasecond', fcn(phascond), ... fcn(dphascond), [], 'zero', 'uidx', muidx(mx,:)); >> prob = coco_add_adjt(prob, 'phasecond', 'aidx', maidx(mx,:)); \end{lstlisting} Following the analysis in previous sections, we append monitor functions that evaluate to the instances of $r_2$, $b$, and $\delta_\rho$ associated with \mcode{'M1'} and label the corresponding continuation parameters by \mcode{'r2'}, \mcode{'b'}, and \mcode{'drho'}. By default these are initially inactive. \begin{lstlisting}[language=coco-highlight] >> prob = coco_add_pars(prob, 'pars', muidx([mr2, mb, mdrho], 1), ... {'r2','b','drho'}); \end{lstlisting} The call to \mcode{coco_add_adjt} below then appends complementary monitor functions whose values equal $\eta_{r_2}$, $\eta_b$, and $\eta_\rho$, respectively. We associate these with additional complementary continuation parameters, designated by the string labels \mcode{'e.r2'}, \mcode{'e.b'}, and \mcode{'e.drho'}. These are also initially inactive.
\begin{lstlisting}[language=coco-highlight] >> prob = coco_add_adjt(prob, 'pars', {'e.r2','e.b','e.drho'}, ... 'aidx', maidx([mr2, mb, mdrho], 1), 'l0', [0; 0; 1]); \end{lstlisting} Here, we set $\eta_\rho=1$, as we intend to determine the sensitivity of $\rho$ with respect to all other variables. We obtain results compatible with Fig.~\ref{fig:invc} by allowing \mcode{'r2'}, \mcode{'b'}, \mcode{'e.r2'}, and \mcode{'e.b'} to vary, while holding \mcode{'drho'} and \mcode{'e.drho'} fixed. \jrem{The commands below define a user-defined solution point at $r_2=-0.16$ (the point labeled A in Fig.~\ref{fig:invc}) and perform continuation of the corresponding augmented continuation problem on the computational domain defined by $r_2\in[-0.9,0]$.} \begin{lstlisting}[language=coco-highlight] >> prob = coco_add_event(prob, 'A', 'r2', -0.16); >> coco(prob, 'run', [], 1, {'r2', 'b', 'e.r2', 'e.b'}, [-0.9,0]); \end{lstlisting} \begin{lstlisting}[language=coco-small] STEP DAMPING NORMS COMPUTATION TIMES IT SIT GAMMA ||d|| ||f|| ||U|| F(x) DF(x) SOLVE 0 1.00e+00 2.75e+01 0.1 0.0 0.0 1 1 1.00e+00 2.44e-01 5.15e-15 2.75e+01 0.2 0.6 0.0 2 1 1.00e+00 1.11e-14 3.04e-15 2.75e+01 0.3 1.1 0.1 ... LABEL TYPE r2 b e.r2 e.b ... 1 EP 0.0000e+00 2.5649e-18 -5.3757e-02 -1.5916e-01 ... 2 -1.3955e-01 4.2507e-02 -5.3742e-02 -1.9276e-01 ... 3 A -1.6000e-01 4.8149e-02 -5.4033e-02 -1.9790e-01 ... 4 -3.2179e-01 8.9201e-02 -5.7266e-02 -2.4255e-01 ... 5 -5.1513e-01 1.3088e-01 -6.0551e-02 -3.1210e-01 ... 6 -7.0947e-01 1.6371e-01 -6.0449e-02 -4.2622e-01 ... 7 -8.8006e-01 1.8328e-01 -6.3439e-02 -7.2503e-01 ...
8 EP -9.0000e-01 1.8498e-01 -7.5930e-02 -9.2511e-01 \end{lstlisting} \jrem{The analysis locates the point A at $(r_2,b)\approx (-0.16,0.04815)$, and finds that $\eta_{r_2}\approx -0.054033$ and $\eta_b\approx -0.19790$ are the corresponding sensitivities of $\delta_\rho$ to variations in $\delta_{r_2}$ and $\delta_b$, respectively, i.e., \begin{equation} \delta_\rho=-\eta_p^{\mathsf{T}}\delta_p=-(\eta_{r_2},\eta_b)\begin{pmatrix} \delta_{r_2}\\\delta_b \end{pmatrix}\approx 0.0540\delta_{r_2}+0.198\delta_b\mbox{,} \end{equation} assuming no violations of the governing constraints. Since $\delta_\rho=0$ along the family of invariant tori, it follows that $\delta_b/\delta_{r_2}\approx -0.273$ at this location. This ratio equals the slope at point A of the curve of invariant tori in the $(r_2,b)$-plane in Fig.~\ref{fig:invc}.} \jrem{The tangents to stable fibers shown in Fig.~\ref{fig:invc} are given by the adjoint variables $\lambda_{\mathrm{map},i}$. These} can now be extracted from the data stored with the solution label \mcode{lab} using the commands \begin{lstlisting}[language=coco-highlight] >> chart = coco_read_adjoint(fid('M',i), 'run', lab, 'chart'); >> getfield(chart, 'x') \end{lstlisting} \jrem{Finally, for the illustration of the spectrum of the linear map $\Gamma_\rho$ in Fig.~\ref{fig:invc}, we evaluate the map $\partial_xM$ on the solutions generated by \textsc{coco}. The map $V(\cdot-\rho)$ is then approximated by \begin{align}\label{invc:Gamma} \mathrm{diag}\left(\partial_xM(x_{\sigma_{-p,q}(1)},r_2,b),\ldots,\partial_xM(x_{\sigma_{-p,q}(q)},r_2,b)\right)\cdot (\sigma_{-p,q}\otimes I_2), \end{align} where $\sigma_{p,q}$ denotes the permutation (and permutation matrix) of the first $q$ integers corresponding to a rotation by $p\in\mathbb{Z}$, and $\mathrm{diag}(A_1,\ldots,A_q)$ denotes the block-diagonal matrix with $A_j$ on the diagonal. This is computed using the following sequence of commands.
First, we read the solution data from the file for the label of point A, extracting the $2\times q$ array for the solution $x$, and the parameters $r_2$ and $b$.} \begin{lstlisting}[language=coco-highlight] >> chart_p = coco_read_solution('run', 3, 'chart'); >> x = chart_p.x(muidx(mx,:)); >> [r2, b] = deal(chart_p.x(5), chart_p.x(6)); \end{lstlisting} \jrem{Then we construct the rotation, corresponding to $x\mapsto x(\cdot-\rho)$.} \begin{lstlisting}[language=coco-highlight] >> perm = diag(ones(numer,1),denom-numer) +... diag(ones(denom-numer,1),-numer); >> sigma = kron(perm, eye(2)); \end{lstlisting} \jrem{Finally, we apply this rotation to $x$, implementing expression \eqref{invc:Gamma}, by applying $\partial_xM$ and multiplying with the rotation matrix again. } \begin{lstlisting}[language=coco-highlight] >> xrot = num2cell(reshape(sigma*x(:), 2, denom), 1); >> dMxval = cellfun(@(x)dMx(x,r2,b), xrot, 'uniformoutput', false); >> blkdiag(dMxval{:})*sigma; \end{lstlisting} \section{Concluding discussion} \label{sec:conclusions} A goal of the discussion in the preceding sections has been to demonstrate the relationship between several known results about the asymptotic phase dynamics near transversally stable limit cycles and quasiperiodic invariant tori, on the one hand, and the adjoint conditions obtained from the analysis of a problem Lagrangian, on the other hand. As an example, we have found that commonly adopted normalization conditions result directly from the choice of monitor functions and sought sensitivities. Indeed, our analysis extends the treatment beyond the orbitally stable case to the more general instance of normal hyperbolicity. In this case, while it may no longer be possible to define a concept of asymptotic phase, the association persists between the solution to the adjoint conditions and a continuous family of complementary projections onto and transversally to the periodic orbit or quasiperiodic invariant torus. 
Although the analysis in Section~\ref{sec:quasiperiodic invariant tori} was restricted to the case of a two-dimensional torus, it generalizes with minimal effort to the case of an $m$-dimensional torus simply by considering a rotation vector $\rho\in\mathbb{R}^{m-1}$ and a vector-valued angle $\phi\in\mathbb{S}^{m-1}$, integrals over $\mathbb{S}^{m-1}$ rather than $\mathbb{S}$, and additional phase conditions. For the transversally stable case, the results reduce to those of~\cite{demir2010}, which are presented there without derivation. A further generalization to the case of a piecewise-smooth dynamical system is open for investigation (cf.~\cite{Szalai09}). For the case of dynamical systems with time delay, a preliminary analysis in~\cite{Ahsan21} showed the relationship between an adjoint variable and the sensitivity of the orbital duration to violations of the periodic orbit constraint. This same analysis can, of course, be generalized according to the treatment in this paper without significant additional effort. More interesting is to consider the adjoint-based sensitivity analysis for a normally hyperbolic quasiperiodic invariant torus of a smooth dynamical system with delay. While the adjoint-based approach was used in~\cite{ahsan2020optimization} to search for optimal quasiperiodic invariant tori in an example system with delay, the analysis lacked the theoretical rigor of the present treatment, in particular as relates to the solvability of the adjoint conditions. This appears to be a worthwhile problem to pursue. Finally, we briefly discuss opportunities associated with further development of \textsc{coco} to support analyses of the form considered here.
For example, while the example in Section~\ref{sec:invc:coco} relied on low-level functionality (\mcode{coco_add_func}, \mcode{coco_add_pars}, \mcode{coco_add_glue}, and \mcode{coco_add_adjt}), it is straightforward to build a composite constructor that encapsulates the individual steps of constraint construction and a separate composite constructor that encapsulates the individual steps of constructing contributions to the adjoint conditions. Such development would then benefit from the encoding of utility functions for generating an initial solution guess, extracting solution data, and graphing families of solutions and sensitivities. Indeed, a similar construction could apply to the case where the map $M$ is only implicitly defined by the solution to a differential equation. In this case, the zero problem $M_\mathrm{res}=0$ is replaced with the collocation problem for the corresponding trajectory segment. Rather than \mcode{coco_add_func} and \mcode{coco_add_adjt}, here calls to the constructors \mcode{ode_isol2coll} and \mcode{adjt_isol2coll} would instantiate individual trajectory segments and the corresponding contributions to the adjoint conditions. Finally, the index arrays \mcode{muidx} and \mcode{maidx} would be constructed from a subset of the indices associated with each trajectory instance, specifically those corresponding to the initial and final points on the trajectory segment and the corresponding parameter values. The implementation in \textsc{coco} of the higher-dimensional case should follow the same pattern, with necessary modifications to the discretization, now over $\mathbb{S}^{m-1}$, and the addition of a required number of phase conditions. \section*{\jrem{Code availability}} \jrem{The code included in this paper constitutes fully executable scripts. 
Complete code, including that used to generate the results in Fig.~\ref{fig:invc}, is available at \url{https://github.com/jansieber/adjoint-sensitivity2022-supp}.} \section*{Acknowledgments} The second author's research is supported by the UK Engineering and Physical Sciences Research Council (EPSRC) grants EP/N023544/1 and EP/V04687X/1. Received xxxx 20xx; revised xxxx 20xx. \end{document}
\begin{document} \begin{abstract} In this paper, we study scalar conservation laws where the flux is driven by a geometric H\"older $p$-rough path for some $p\in (2,3)$ and the forcing is given by an It\^o stochastic integral driven by a Brownian motion. In particular, we derive the corresponding kinetic formulation and define an appropriate notion of kinetic solution. In this context, we are able to establish well-posedness, i.e. existence, uniqueness and the $L^1$-contraction property that leads to continuous dependence on the initial condition. Our approach combines tools from rough path analysis, stochastic analysis and the theory of kinetic solutions for conservation laws. As an application, this allows us to cover the case of a flux driven, for instance, by another (independent) Brownian motion enhanced with L\'evy's stochastic area. \end{abstract} \subjclass[2010]{60H15, 35R60, 35L65} \keywords{scalar conservation laws, rough paths, kinetic formulation, kinetic solution, BGK approximation, method of characteristics} \date{\today} \maketitle \section{Introduction} The goal of the present paper is to develop a well-posedness theory for the following scalar rough conservation law \begin{equation}\label{eq} \begin{split} \mathrm{d} u+\diver\big(A(x,u)\big)\,\mathrm{d} z&=g(x,u)\,\mathrm{d} W,\qquad t\in(0,T),\,x\in\mathbb{R}^N,\\ u(0)&=u_0, \end{split} \end{equation} where $z=(z^1,\dots,z^M)$ is a deterministic rough driving signal, $W=(W^1,\dots,W^K)$ is a Wiener process and the stochastic integral is understood in the It\^o sense. The coefficients $A:\mathbb{R}^N\times\mathbb{R}\rightarrow\mathbb{R}^{N\times M}$, $g:\mathbb{R}^N\times\mathbb{R}\rightarrow\mathbb{R}^K$ satisfy a sufficient regularity assumption introduced in Section \ref{sec:hypotheses}.
The above equation can be rewritten using the Einstein summation convention as follows \begin{equation*} \begin{split} \mathrm{d} u+\partial_{x_i}\big(A_{ij}(x,u)\big)\,\mathrm{d} z^j&=g_k(x,u)\,\mathrm{d} W^k,\qquad t\in(0,T),\,x\in\mathbb{R}^N,\\ u(0)&=u_0. \end{split} \end{equation*} As an application of our analysis, one can replace $z$, for instance, by another Brownian motion $B$, which is independent of $W$, and give meaning to \begin{equation}\label{eq:stoch1} \begin{split} \mathrm{d} u+\diver\big(A(x,u)\big)\circ\mathrm{d} B&=g(x,u)\,\mathrm{d} W,\qquad t\in(0,T),\,x\in\mathbb{R}^N,\\ u(0)&=u_0. \end{split} \end{equation} Conservation laws and related equations have received increasing attention lately and have become a very active area of research, with numerous results now available in both the deterministic and stochastic settings, that is, for conservation laws either of the form \begin{equation}\label{eq:det} \partial_t u+\diver\big(A(u)\big)=0 \end{equation} (see \cite{vov1}, \cite{car}, \cite{vov}, \cite{kruzk}, \cite{lpt1}, \cite{lions}, \cite{perth}, \cite{tadmor}) or \begin{equation}\label{eq:stoch} \mathrm{d} u+\diver\big(A(u)\big)\mathrm{d} t=g(x,u)\mathrm{d} W \end{equation} where the It\^o stochastic forcing is driven by a finite- or infinite-dimensional Wiener process (see \cite{bauzet}, \cite{karlsen}, \cite{debus2}, \cite{debus}, \cite{DV}, \cite{feng}, \cite{bgk}, \cite{holden}, \cite{kim}, \cite{stoica}, \cite{wittbold}). Degenerate parabolic PDEs were studied in \cite{car}, \cite{chen} and in the stochastic setting in \cite{BVW}, \cite{degen2}, \cite{hof}. Since the theory of rough paths, viewed as a tool that allows for a deterministic treatment of stochastic differential equations, has been of growing interest recently, several attempts have already been made to extend this theory to conservation laws as well.
First, Lions, Perthame and Souganidis (see \cite{lps}, \cite{lps1}) developed a pathwise approach for $$\mathrm{d} u+\diver\big(A(x,u)\big)\circ \mathrm{d} W=0$$ where $W$ is a continuous real-valued signal and $\circ$ stands for the Stratonovich product in the Brownian case. Then Friz and Gess (see \cite{friz2}) studied $$\mathrm{d} u+\diver f(t,x,u)\mathrm{d} t=F(t,x,u)\mathrm{d} t+\Lambda_k(x,u,\nabla u)\mathrm{d} z^k$$ where $\Lambda_k$ is affine linear in $u$ and $\nabla u$ and $z=(z^1,\dots,z^K)$ is a rough driving signal, and Gess and Souganidis \cite{gess} considered $$\mathrm{d} u+\diver\big(A(x,u)\big)\mathrm{d} z=0$$ where $z=(z^1,\dots,z^M)$ is a geometric $\alpha$-H\"older path; in \cite{GS} they studied the long-time behavior for $z$ being a Brownian motion. In order to find a suitable concept of solution for problems of the form \eqref{eq}, it was observed already some time ago that, on the one hand, classical $C^1$ solutions do not exist in general and, on the other hand, weak or distributional solutions lack uniqueness. The first claim is a consequence of the fact that any smooth solution has to be constant along characteristic lines, which can intersect in finite time (even in the case of smooth data) and shocks can be produced. The second claim demonstrates the inconvenience that often appears in the study of PDEs and SPDEs: the usual way of weakening the equation leads to the occurrence of nonphysical solutions and therefore additional assumptions need to be imposed in order to select the physically relevant ones and to ensure uniqueness. Hence one needs to find some balance that allows one to establish existence of a unique (physically reasonable) solution.
Towards this end, we pursue the ideas of the kinetic approach, a concept of solution that was first introduced by Lions, Perthame, Tadmor \cite{lions} for deterministic hyperbolic conservation laws and further studied in \cite{vov1}, \cite{chen}, \cite{vov}, \cite{lpt1}, \cite{lions}, \cite{tadmor}, \cite{perth}. This direction also appears in several works on stochastic conservation laws and degenerate parabolic SPDEs, see \cite{degen2}, \cite{debus2}, \cite{debus}, \cite{DV}, \cite{bgk}, \cite{hof}, and in the (rough) pathwise works \cite{GS}, \cite{gess}, \cite{lps}, \cite{lps1}. In comparison to the notion of entropy solution introduced by Kru\v{z}kov \cite{kruzk} and further developed e.g. in \cite{bauzet}, \cite{car}, \cite{feng}, \cite{kim}, \cite{wittbold}, kinetic solutions are more general in the sense that they are well defined even in situations when neither the original conservation law nor the corresponding entropy inequalities can be understood in the sense of distributions, which is part of the definition of an entropy solution. Usually this happens due to lack of integrability of the flux and entropy-flux terms, e.g. $A(u)\notin L^1_{\text{loc}}$. Therefore, further assumptions on the initial data or the flux function $A$ are needed in order to overcome this issue and remain in the entropy setting. It will be seen later on that no such restrictions are necessary in the kinetic approach, as the equation that is actually to be solved -- the so-called kinetic formulation, see \eqref{eq:kinform} -- is in fact linear. In addition, various proofs simplify as methods for linear PDEs are available. Let us now shortly present the main ideas of our approach. Apart from the above mentioned difficulties there is another one that originates in the low regularity of the driving signals and the solution.
Namely, the corresponding rough integrals are not well defined, so we present a formulation, see \eqref{eq:weakkinformul}, that does not include any rough path driven terms and therefore provides a suitable notion of kinetic solution in this context. To this end, we adapt the ideas of \cite{lps}, \cite{lps1}, \cite{gess} where the authors introduced a method of modified test functions to eliminate the rough terms. Our method then combines this approach with the ideas for stochastic conservation laws treated in \cite{debus}, \cite{bgk}. As usual for this class of problems, we define a second notion of solution -- a generalized kinetic solution -- which, roughly speaking, takes values in the set of Young measures. The general idea is that in order to get existence of such a solution, weak convergence (in some $L^p$ space) of the corresponding approximations is sufficient, which allows for an easier proof. The key result is then the Reduction Theorem, which states that any generalized kinetic solution is actually a kinetic one, that is, the Young measure at hand is a parametrized Dirac mass. Concerning the existence part, we make use of the so-called Bhatnagar-Gross-Krook approximation (BGK for short) which allows us to describe the conservation law as the hydrodynamic limit of the corresponding BGK model, as the microscopic scale $\varepsilon$ vanishes. This is nowadays a standard tool in the deterministic setting where the literature is quite extensive (see \cite{vov1}, \cite{vov}, \cite{lpt1}, \cite{lions}, \cite{tadmor}, \cite{perth}). Even though the general concept is analogous in the stochastic case, the techniques are significantly different; the corresponding result was established recently by the author \cite{bgk}.
To be more precise, the key point is to solve the corresponding characteristic system which for the general case of \eqref{eq} reads as follows \begin{equation}\label{eq:charr} \begin{split} \mathrm{d}\varphi^0_t&= -\partial_{x_i}A_{ij}(\varphi_t)\,\mathrm{d} z^j+g_k(\varphi_t)\mathrm{d} W^k,\\ \mathrm{d}\varphi^i_t&=\partial_u A_{ij}(\varphi_t)\,\mathrm{d} z^j,\qquad i=1,\dots,N. \end{split} \end{equation} Already at this level one can understand the difficulties coming from the complex structure of \eqref{eq}. Namely, in the case of \eqref{eq:det} the characteristic system reduces to a set of independent equations (note that the flux function $A$ is independent of the space variable) \begin{equation*} \begin{split} \mathrm{d}\varphi^0_t&=0\\ \mathrm{d}\varphi^i_t&=\partial_u A_{ij}(\varphi^0_t)\,\mathrm{d} t,\qquad i=1,\dots,N, \end{split} \end{equation*} which can be solved immediately and as a consequence the method simplifies. For the stochastic case \eqref{eq:stoch} we obtain \begin{equation*} \begin{split} \mathrm{d}\varphi^0_t&= g_k(\varphi_t)\,\mathrm{d} W^k\\ \mathrm{d}\varphi^i_t&=\sum_{j=1}^M\partial_u A_{ij}(\varphi^0_t)\,\mathrm{d} t,\qquad i=1,\dots,N, \end{split} \end{equation*} hence the first coordinate of the characteristic curve is governed by an SDE and further difficulties arise due to randomness (see \cite{bgk}). If $A$ is also $x$-dependent we observe an additional term in the first equation of the characteristic system \eqref{eq:charr}, however, let us point out that there is a major difference between this and the term coming from the forcing. In particular, if $g=0$ then the flow of diffeomorphisms generated by \eqref{eq:charr} is volume preserving, as can be seen easily by calculating the divergence of the corresponding vector field. This does not hold true anymore if forcing in nonconservative form is present in the equation, unless $g$ is independent of the solution, i.e.
we consider additive noise in \eqref{eq}. The method of BGK approximation has another advantage especially when dealing with rough path driven conservation laws. As already mentioned above, the problem boils down to solving the characteristic system \eqref{eq:charr}, i.e. an ordinary (stochastic, rough) differential equation and the theory for those problems is well-established, unlike the theory for rough path driven equations in infinite dimension, i.e. rough PDEs. Furthermore, the BGK model also provides an explicit formula for the approximate solutions and therefore the necessary estimates independent of the microscopic scale $\varepsilon$ come rather naturally. The exposition is organized as follows. In Section \ref{sec:hypotheses}, we introduce the basic setting, define the notion of kinetic solution and state our main result, Theorem \ref{thm:main}. In order to make the paper more self-contained, Section \ref{sec:rough} provides a brief overview of the relevant concepts from rough path theory. Section \ref{sec:uniqueness} is devoted to the proof of uniqueness, reduction of a generalized kinetic solution to a kinetic one and the $L^1$-contraction property, Theorem \ref{thm:reduction} and Corollary \ref{cor:contraction}. The remainder of the paper deals with the existence part of Theorem \ref{thm:main} which is divided into several parts and finally established through Theorem \ref{thm:bgkconvergence}. Subsections \ref{sec:roughtr} and \ref{sec:bgksol} give a rough-pathwise existence of unique solutions to the BGK approximation. In these two sections we work with a fixed realization of the driving signals or, to be more precise, a fixed realization of a joint lift of $(t,z,W)$. In Subsection \ref{sec:conv}, the stochastic approach is resumed and we pass to the limit and obtain a kinetic solution to \eqref{eq}. \section{Definitions and the main result} \label{sec:hypotheses} Let us now introduce the precise setting of \eqref{eq}. 
We work on a finite time interval $[0,T]$, $T>0$, and on the whole space $\mathbb{R}^N$ for the space variable $x$. Throughout the paper, we assume that $z$ can be lifted to a geometric H\"older $p$-rough path for some $p\in (2,3)$ and denote its lift by $\mathbf{z}=(\mathbf{z}^1,\mathbf{z}^2)$. Next we consider the following joint lift of $(z,W)$: we define $\mathbf{\Lambda}=(\mathbf{\Lambda}^1,\mathbf{\Lambda}^2)$ by \begin{align}\label{jointlift} \begin{aligned} \mathbf{\Lambda}_t^{1,i}&=\begin{cases} z^i_t, & \text{ if }\,i\in\{1,\dots,M\},\\ W_t^{i-M}, &\text{ if }\,i\in\{M+1,\dots,M+K\}, \end{cases}\\ \mathbf{\Lambda}_t^{2,i,j}&=\begin{cases} \mathbf{z}_t^{2,i,j}, & \text{ if }\,i,j\in\{1,\dots,M\},\\ \int_0^t W^{i-M}_r\circ\mathrm{d} W^{j-M}_r, &\text{ if }\,i,j\in\{M+1,\dots,M+K\},\\ \int_0^t z^i_r\mathrm{d} W^{j-M}_r, & \text{ if }\,i\in\{1,\dots,M\},\,j\in\{M+1,\dots,M+K\},\\ z^j_tW^{i-M}_t-\int_0^tz^j_r\mathrm{d} W^{i-M}_r, & \text{ if }\,i\in\{M+1,\dots,M+K\},\,j\in\{1,\dots,M\}. \end{cases} \end{aligned} \end{align} It was shown in \cite{DOR15} that such a stochastic process exists and can be regarded as the canonical joint lift of $z$ and $W$. Note that although in the original equation \eqref{eq} we consider the It\^o stochastic integral, the lift of $W$ used in the construction of $\mathbf{\Lambda}$ above corresponds to the Stratonovich version. Regarding the coefficients in \eqref{eq}, let us fix the following notation.
Let \begin{align*} a&=(a_{ij})=(\partial_u A_{ij}):\mathbb{R}^N\times\mathbb{R}\longrightarrow \mathbb{R}^{N \times M},\\ b&=(b_{j})=(\diver_x A_{\cdot j}):\mathbb{R}^N\times\mathbb{R}\longrightarrow \mathbb{R}^M, \end{align*} and assume that $a,\,b\in \mathrm{Lip}^{\gamma+2}$ and $g\in \mathrm{Lip}^{\gamma+3}$ for some $\gamma>p.$ Here we adopt the notation of \cite[Definition 10.2]{friz}, namely, a mapping $V:\mathbb{R}^e\rightarrow\mathbb{R}^d$ belongs to $\mathrm{Lip}^\beta$ provided it is bounded, $\lfloor \beta \rfloor$-times continuously differentiable with bounded derivatives up to order $\lfloor \beta \rfloor$, and its $\lfloor \beta\rfloor^{\text{th}}$ derivative is $\{\beta\}$-H\"older continuous. Furthermore, we suppose that \begin{equation}\label{eq:null} b(x,0)=0,\quad g(x,0)=0\qquad\forall x\in\mathbb{R}^N, \end{equation} and denote $$G^2(x,\xi)=\sum_{k=1}^K|g_k(x,\xi)|^2,\qquad \forall x\in\mathbb{R}^N,\,\xi\in\mathbb{R}.$$ \subsection{Notations} We adopt the following notation. The brackets $\langle\cdot,\cdot \rangle$ are used to denote the duality between the space of distributions over $\mathbb{R}^N\times\mathbb{R}$ and $C_c^1(\mathbb{R}^N\times\mathbb{R})$. We denote similarly the integral $$\langle f,h\rangle=\int_{\mathbb{R}^N}\int_\mathbb{R} f(x,\xi) h(x,\xi)\,\mathrm{d} x\,\mathrm{d} \xi,\qquad f\in L^p(\mathbb{R}^N\times\mathbb{R}),\;h\in L^q(\mathbb{R}^N\times\mathbb{R}),$$ where $p,q\in[1,\infty]$ are conjugate exponents. By $\mathcal{M}_b([0,T)\times\mathbb{R}^N\times\mathbb{R})$ we denote the space of bounded Borel measures over $[0,T)\times\mathbb{R}^N\times\mathbb{R}$, and $\mathcal{M}^+_b([0,T)\times\mathbb{R}^N\times\mathbb{R})$ denotes its subset of nonnegative bounded Borel measures.
We also use the shorthand $$n(\Phi)=\int_{[0,T)\times\mathbb{R}^N\times\mathbb{R}}\Phi(t,x,\xi)\,\mathrm{d} n(t,x,\xi),\qquad n\in\mathcal{M}_b([0,T)\times\mathbb{R}^N\times\mathbb{R}),\,\Phi \in C_c([0,T)\times\mathbb{R}^N\times\mathbb{R}).$$ Moreover, $C_0([0,T]\times\mathbb{R}^N\times\mathbb{R})$ denotes the space of continuous functions on $[0,T]\times\mathbb{R}^N\times\mathbb{R}$ that vanish at infinity, i.e. for large $(x,\xi)$. The differential operators gradient $\nabla$ and divergence $\diver$ are (unless otherwise stated) understood with respect to the space variable $x$. \subsection{Definitions} \label{subsec:def} As the next step, let us introduce the kinetic formulation of \eqref{eq} as well as the basic definitions concerning the notion of kinetic solution. The motivation behind this approach is given by the nonexistence of strong solutions on the one hand and, on the other hand, the nonuniqueness of weak solutions, even in simple cases. The idea is to establish an additional criterion -- the kinetic formulation -- which is automatically satisfied by any strong solution to \eqref{eq} (in case it exists) and which ensures well-posedness. We start with the definition of kinetic measure.
\begin{defin}[Kinetic measure]\label{def:kinmeasure} A mapping $m$ from $\Omega$ to $\mathcal{M}_b^+([0,T]\times\mathbb{R}^N\times\mathbb{R})$, the set of nonnegative bounded measures over $[0,T]\times\mathbb{R}^N\times\mathbb{R}$, is said to be a kinetic measure provided \begin{enumerate} \item $m$ is measurable in the following sense: for each $\Phi\in C_0([0,T]\times\mathbb{R}^N\times\mathbb{R})$, the mapping $m(\Phi):\Omega\rightarrow\mathbb{R}$ is measurable, \item the mapping $$\int_{[0,T)\times\mathbb{R}^N\times\mathbb{R}}\mathrm{d} m(t,x,\xi):\Omega\to\mathbb{R}$$ is measurable and $$\mathbb{E}\int_{[0,T)\times\mathbb{R}^{N}\times\mathbb{R}}\mathrm{d} m(t,x,\xi)<\infty,$$ \item for any $\Phi\in C_0(\mathbb{R}^N\times\mathbb{R})$, $t\mapsto m(\mathbf{1}_{[0,t]}\Phi)$ is progressively measurable. \end{enumerate} \end{defin} Formally speaking, the kinetic formulation corresponding to the conservation law at hand is given as follows \begin{equation}\label{eq:kinform} \begin{split} \mathrm{d} F+\nabla F\cdot a\,\mathrm{d} z-\partial_\xi F\, b\,\mathrm{d} z&=-\partial_\xi F\, g\,\mathrm{d} W+\frac{1}{2}\partial_\xi(G^2\partial_\xi F)\,\mathrm{d} t+\partial_\xi m,\\ F(0)&=F_0, \end{split} \end{equation} where $F=\mathbf{1}_{u>\xi}$ and $m$ is a kinetic measure\footnote{Here $u$ is a function of $(\omega,t,x)$ so $F(\omega,t,x,\xi)=\mathbf{1}_{u(\omega,t,x)>\xi}$ is well-defined and regarded as a function of four variables $(\omega,t,x,\xi)$.}. However, since the expected regularity of solutions is low and consequently the rough path driven integrals are not well defined, it is necessary to define a suitable notion of weak solution to this problem. This leads us to the notion of kinetic solution to rough path driven conservation laws that we introduce in this work.
Note that it is a consistent extension of the corresponding notion of kinetic solution for the case of a smooth driving signal $z$; for further discussion on this subject we refer the reader to Subsection \ref{subsec:smoothdrivers}. \begin{defin}[Kinetic solution]\label{kinsol} Let $u_0\in L^1\cap L^2(\Omega\times\mathbb{R}^N).$ Then a progressively measurable $$u\in L^2(\Omega;L^2(0,T;L^2(\mathbb{R}^N)))$$ satisfying \begin{equation}\label{fd} \mathbb{E}\esssup_{0\leq t\leq T}\|u(t)\|_{L^1_x}\leq C \end{equation} is said to be a kinetic solution to \eqref{eq} with initial datum $u_0$ provided there exists a kinetic measure $m$ such that the pair $(F=\mathbf{1}_{u>\xi},m)$ satisfies, for all $\Phi\in C^1_c(\mathbb{R}^N\times\mathbb{R})$ and $\alpha\in C^1_c([0,T))$, $\mathbb{P}$-a.s., \begin{align}\label{eq:weakkinformul} \begin{split} \int_0^T&\big\langle F(t),\Phi(\theta_t)\big\rangle\partial_t\alpha(t)\,\mathrm{d} t+\big\langle F_0,\Phi\big\rangle\alpha(0)\\ &=\int_0^T\langle\partial_\xi F(t) g,\Phi(\theta_t)\rangle\alpha(t)\,\mathrm{d} W-\frac{1}{2}\int_0^T\langle\partial_\xi(G^2\partial_\xi F(t)),\Phi(\theta_t)\rangle\alpha(t)\,\mathrm{d} t+ m\big(\alpha\partial_\xi\Phi(\theta_t)\big), \end{split} \end{align} where $\theta=(\theta^0,\theta^x)$ is the inverse flow corresponding to \begin{align}\label{eq:fl1} \begin{aligned} \mathrm{d}\pi^0_t&=-b(\pi_t)\,\mathrm{d} \mathbf{z},\\ \mathrm{d}\pi^x_t&=a(\pi_t)\,\mathrm{d} \mathbf{z}. \end{aligned} \end{align} \end{defin} To be more precise, it follows from \cite[Proposition 11.11]{friz} that under our assumptions \eqref{eq:fl1} possesses a unique solution $\pi$ that defines a flow of $C^2$-diffeomorphisms. We denote by $\pi_{s,t}(x,\xi)$ the solution of \eqref{eq:fl1} starting from $(x,\xi)$ at time $s$.
To simplify the notation, we write $\pi_t$ instead of $\pi_{0,t}$ and we denote the corresponding inverse flow by $\theta$. Then $\theta_{t,s}=\pi_{t,s}^{-1}$ is the unique solution to the time-reversed problem \begin{equation}\label{eq:fl2} \begin{split} \mathrm{d}\theta^0_{t,s}&=-b(\theta_{t,s})\,\mathrm{d} \cev{\mathbf{z}}^s,\\ \mathrm{d}\theta^x_{t,s}&=a(\theta_{t,s})\,\mathrm{d} \cev{\mathbf{z}}^s, \end{split} \end{equation} where $\cev{\mathbf{z}}^s(\cdot)=\mathbf{z}(s-\cdot)$ is the time-reversed path of $\mathbf{z}$. We point out that the flow $\pi$ as well as $\theta$ is volume preserving, as can be seen easily by calculating the divergence of the corresponding vector field and recalling that divergence-free vector fields generate volume-preserving flows. Thus the Jacobian of $\pi_{s,t}$ satisfies $\mathrm{J}\pi_{s,t}\equiv1$, and similarly for $\theta$. Note that, by a classical separability argument, the set of full probability where \eqref{eq:weakkinformul} holds true does not depend on the particular choice of test functions $\Phi,\,\alpha$. We proceed with a reminder of Young measures and the related definition of kinetic function that will eventually lead to the notion of generalized kinetic solution, see Definition \ref{genkinsol}. The concept of Young measures was developed in \cite{young} as a technical tool for describing limits of compositions of nonlinear functions with weakly convergent sequences; for further reading we refer the reader e.g. to \cite{malek}. In what follows, we denote by $\mathcal{P}_1(\mathbb{R})$ the set of probability measures on $\mathbb{R}$. \begin{defin}[Young measure] Let $(X,\lambda)$ be a $\sigma$-finite measure space.
A mapping $\nu:X\rightarrow\mathcal{P}_1(\mathbb{R})$ is called a Young measure provided it is weakly measurable, that is, for all $\Phi\in C_b(\mathbb{R})$ the mapping $z\mapsto \nu_z(\Phi)$ from $X$ to $\mathbb{R}$ is measurable. A Young measure $\nu$ is said to vanish at infinity if \begin{equation*} \int_X\int_\mathbb{R}|\xi|\,\mathrm{d}\nu_z(\xi)\,\mathrm{d}\lambda(z)<\infty. \end{equation*} \end{defin} \begin{defin}[Kinetic function] Let $(X,\lambda)$ be a $\sigma$-finite measure space. A measurable function $f:X\times\mathbb{R}\rightarrow[0,1]$ is called a kinetic function on $X$ if there exists a Young measure $\nu$ on $X$ that vanishes at infinity such that for a.e. $z\in X$ and for all $\xi\in\mathbb{R}$ $$f(z,\xi)=\nu_z(\xi,\infty).$$ \end{defin} \begin{lemma}[Compactness of Young measures]\label{kinetcomp} Let $(X,\lambda)$ be a $\sigma$-finite measure space such that $L^1(X)$ is separable. Let $(\nu^n)$ be a sequence of Young measures on $X$ such that for some $p\in[1,\infty)$ \begin{equation}\label{eq:estyoungm} \sup_{n\in\mathbb{N}}\int_X\int_\mathbb{R}|\xi|^p\,\mathrm{d}\nu^n_z(\xi)\,\mathrm{d} \lambda(z)<\infty. \end{equation} Then there exists a Young measure $\nu$ on $X$ and a subsequence still denoted by $(\nu^n)$ such that for all $h\in L^1(X)$ and all $\Phi\in C_b(\mathbb{R})$ $$\lim_{n\rightarrow \infty}\int_X h(z)\int_\mathbb{R} \Phi(\xi)\,\mathrm{d}\nu^n_z(\xi)\,\mathrm{d}\lambda(z)=\int_X h(z)\int_\mathbb{R} \Phi(\xi)\,\mathrm{d}\nu_z(\xi)\,\mathrm{d}\lambda(z).$$ Moreover, if $f_n,\,n\in\mathbb{N},$ are the kinetic functions corresponding to $\nu^n,\,n\in\mathbb{N},$ such that \eqref{eq:estyoungm} holds true, then there exists a kinetic function $f$ (which corresponds to the Young measure $\nu$ whose existence was ensured by the first part of the statement) and a subsequence still denoted by $(f_n)$ such that $$f_n\overset{w^*}{\longrightarrow} f\quad \text{ in }\quad L^\infty(X\times\mathbb{R}).$$ \begin{proof} Results of this form are classical in the literature; a proof for the case of $(X,\lambda)$ being a finite measure space can be found in \cite[Theorem 5, Corollary 6]{debus}. However, one can observe that this additional assumption is not used in the proof, and therefore the same proof applies to our setting of a $\sigma$-finite measure space $(X,\lambda)$. \end{proof} \end{lemma} \begin{rem}\label{chi} If $f : X \times \mathbb{R}\to [0, 1]$ is a kinetic function corresponding to the Young measure $\nu$ satisfying, for some $p\in(1,\infty)$, $$\int_X\int_\mathbb{R}|\xi|^p\,\mathrm{d}\nu_z(\xi)\,\mathrm{d} \lambda(z)<\infty,$$ then we denote by $\chi_f$ the function defined by $\chi_f(z,\xi) = f(z,\xi)- \mathbf{1}_{0>\xi}$. In contrast to $f$, this modification is integrable on $E\times\mathbb{R}$ whenever $E\subset X$ with $\lambda(E)<\infty$.
Indeed, $$\chi_f(z,\xi)=\begin{cases} -\int_{(-\infty,\xi]}\mathrm{d}\nu_z,&\xi<0,\\ \int_{(\xi,\infty)}\mathrm{d}\nu_z,&\xi\geq0, \end{cases} $$ hence \begin{align*} |\xi|^p\int_X|\chi_f(z,\xi)|\mathrm{d}\lambda(z)\leq \int_X\int_\mathbb{R}|\zeta|^p\mathrm{d}\nu_z(\zeta)\mathrm{d}\lambda(z)<\infty, \end{align*} which implies \begin{align*} \int_E\int_\mathbb{R}|\chi_f(z,\xi)|\mathrm{d}\xi\mathrm{d}\lambda(z)\leq \int_\mathbb{R}\frac{1}{1+|\xi|^p}\mathrm{d}\xi\bigg(\lambda(E)+\int_X\int_\mathbb{R}|\zeta|^p\mathrm{d}\nu_z(\zeta)\mathrm{d}\lambda(z)\bigg)<\infty. \end{align*} Besides, if $f$ is at equilibrium, that is, there exists $u\in L^1(X)$ such that $f=\mathbf{1}_{u>\xi}$ and $\nu=\delta_{u=\xi}$, then we simply write $\chi_u$ instead of $\chi_f$, and it holds true that $$\int_\mathbb{R}|\chi_{u(z)}(\xi)|\mathrm{d}\xi= |u(z)|.$$ \end{rem} \begin{defin}[Generalized kinetic solution]\label{genkinsol} Let $F_0:\Omega\times\mathbb{R}^N\times\mathbb{R}\rightarrow[0,1]$ be a kinetic function. A progressively measurable function $F:\Omega\times[0,T]\times\mathbb{R}^N\times\mathbb{R}\rightarrow[0,1]$ is called a generalized kinetic solution to \eqref{eq} with initial datum $F_0$ if $F(t)$ is a kinetic function for a.e. $t\in[0,T]$, there exists $C>0$ such that \begin{equation}\label{integrov} \mathbb{E}\esssup_{0\leq t\leq T}\int_{\mathbb{R}^N}\int_{\mathbb{R}}|\xi|\,\mathrm{d}\nu_{t,x}(\xi)\,\mathrm{d} x+\mathbb{E}\int_0^T\int_{\mathbb{R}^N}\int_{\mathbb{R}}|\xi|^2\,\mathrm{d}\nu_{t,x}(\xi)\,\mathrm{d} x\,\mathrm{d} t\leq C, \end{equation} where $\nu=-\partial_\xi F$, and if there exists a kinetic measure $m$ such that \eqref{eq:weakkinformul} holds true. \end{defin} \subsection{Kinetic solutions in the case of smooth driver $z$} \label{subsec:smoothdrivers} For the sake of completeness, let us now present a formal derivation of \eqref{eq:kinform} in the case of a sufficiently smooth solution to the conservation law \eqref{eq} driven by a smooth path $z$ and a Wiener process $W$.
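Before turning to the computation, note the elementary chain rule identity that links the flux divergence to the coefficients $a$ and $b$ introduced in Section \ref{sec:hypotheses}: for a smooth function $u=u(x)$ and each $j\in\{1,\dots,M\}$,
\begin{equation*}
\diver_x\big(A_{\cdot j}(x,u(x))\big)=\sum_{i=1}^N\big(\partial_u A_{ij}\big)(x,u)\,\partial_{x_i}u+\big(\diver_x A_{\cdot j}\big)(x,u)=a_{\cdot j}(x,u)\cdot\nabla u+b_j(x,u).
\end{equation*}
This decomposition is used in the second line of the It\^o computation below.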
We denote such a solution by $u$ and note that it satisfies \eqref{eq} pointwise in $x\in\mathbb{R}^N$. Let us fix $x\in\mathbb{R}^N$ and derive the equation satisfied by $\mathbf{1}_{u(\cdot,x)>\xi}$, which is understood as a distribution in $\xi$. Towards this end, we denote by $\langle \cdot,\cdot\rangle_\xi$ the duality between the space of distributions over $\mathbb{R}$ and $C^\infty_c(\mathbb{R})$ and observe that if $\Phi(\xi)=\int_{-\infty}^\xi\Phi_1(\zeta)\,\mathrm{d}\zeta$ for some $\Phi_1\in C_c^\infty(\mathbb{R})$, then $$\langle \mathbf{1}_{u>\xi},\Phi_1\rangle_\xi=\Phi(u)-\Phi(-\infty)=\Phi(u).$$ Therefore, using the It\^o formula, we obtain \begin{align*} \mathrm{d}&\,\langle \mathbf{1}_{u>\xi},\Phi_1\rangle_\xi=-\Phi_1(u)\diver\big(A(x,u)\big)\mathrm{d} z+\Phi_1(u) g(x,u)\mathrm{d} W+\frac{1}{2}\Phi'_1(u)G^2(x,u)\mathrm{d} t\\ &=-\Phi_1(u)a(x,u)\cdot\nabla u\,\mathrm{d} z-\Phi_1(u)b(x,u)\mathrm{d} z+\Phi_1(u) g(x,u)\mathrm{d} W+\frac{1}{2}\Phi'_1(u)G^2(x,u)\mathrm{d} t \end{align*} and since $$\nabla\mathbf{1}_{u>\xi}=\delta_{u=\xi}\nabla u\qquad\text{in}\qquad \mathcal{D}'(\mathbb{R}^N\times\mathbb{R})\quad\text{a.s.}$$ we test the above against $\Phi_2\in C_c^\infty(\mathbb{R}^N)$ and deduce \begin{align*} \mathrm{d}\langle \mathbf{1}_{u>\xi},\Phi_1\Phi_2\rangle&=-\langle a(x,\xi)\nabla\mathbf{1}_{u>\xi},\Phi_1\Phi_2\rangle\mathrm{d} z-\langle b(x,\xi)\delta_{u=\xi},\Phi_1\Phi_2\rangle\mathrm{d} z+\langle g(x,\xi)\delta_{u=\xi},\Phi_1\Phi_2\rangle\mathrm{d} W\\ &\quad+\frac{1}{2}\langle G^2(x,\xi)\delta_{u=\xi},\Phi'_1\Phi_2\rangle\mathrm{d} t\\ &=-\langle a(x,\xi)\nabla\mathbf{1}_{u>\xi},\Phi_1\Phi_2\rangle\mathrm{d} z+\langle b(x,\xi)\partial_\xi \mathbf{1}_{u>\xi},\Phi_1\Phi_2\rangle\mathrm{d} z-\langle g(x,\xi)\partial_\xi \mathbf{1}_{u>\xi},\Phi_1\Phi_2\rangle\mathrm{d} W\\ &\quad+\frac{1}{2}\langle \partial_\xi(G^2(x,\xi)\partial_\xi\mathbf{1}_{u>\xi}),\Phi_1\Phi_2\rangle\mathrm{d} t \end{align*} and conclude that the kinetic formulation \eqref{eq:kinform} with the kinetic measure $m=0$ is valid in the sense of $\mathcal{D}'(\mathbb{R}^N\times\mathbb{R})$ a.s. In general, the kinetic measure is not known in advance and becomes part of the solution that takes account of possible singularities of the solution $u$. In particular, it vanishes in the above computation because we assumed a certain level of regularity of $u$. In order to justify the formulation \eqref{eq:weakkinformul}, note that, due to \cite[Theorem 4]{caruana} (similarly to Proposition \ref{prop:aux}), if $\Phi\in C^1_c(\mathbb{R}^N\times\mathbb{R})$ then the composition $\Phi(\theta_t)$ is the unique strong solution to \begin{align*} \mathrm{d} \Phi(\theta_t)+\nabla \Phi(\theta_t)\cdot a\,\mathrm{d} z-\partial_\xi\Phi(\theta_t)\,b\,\mathrm{d} z&=0,\\ \Phi(\theta_0)&=\Phi. \end{align*} As a consequence, if $(F,m)$ solves \eqref{eq:kinform} in the sense of $\mathcal{D}'(\mathbb{R}^N\times\mathbb{R})$ a.s.\footnote{For instance $(F,m)=(\mathbf{1}_{u>\xi},0)$ from the discussion above.} then formally applying the It\^o formula to the product $\langle F(t),\Phi(\theta_t)\rangle$ we deduce \begin{align}\label{hu} \begin{aligned} \langle F(t),\Phi(\theta_t)\rangle&=\langle F_0,\Phi\rangle -\int_0^t\langle\nabla F\cdot a,\Phi(\theta_t)\rangle\mathrm{d} z+\int_0^t\langle\partial_\xi F\, b,\Phi(\theta_t)\rangle\mathrm{d} z\\ &\quad-\int_0^t\langle\partial_\xi F\, g,\Phi(\theta_t)\rangle\mathrm{d} W+\frac{1}{2}\int_0^t\langle \partial_\xi(G^2\partial_\xi F),\Phi(\theta_t)\rangle\mathrm{d} t- m\big(\mathbf{1}_{[0,t)}\partial_\xi \Phi(\theta_t)\big)\\ &\quad-\int_0^t\langle F,\nabla\Phi(\theta_t)\cdot a\rangle\mathrm{d} z+\int_0^t\langle F,\partial_\xi\Phi(\theta_t)\, b\rangle\mathrm{d} z, \end{aligned} \end{align} where \begin{align*} &-\int_0^t\langle\nabla F\cdot a,\Phi(\theta_t)\rangle\mathrm{d} z+\int_0^t\langle\partial_\xi F\, b,\Phi(\theta_t)\rangle\mathrm{d} z-\int_0^t\langle F,\nabla\Phi(\theta_t)\cdot a\rangle\mathrm{d} z+\int_0^t\langle F,\partial_\xi\Phi(\theta_t)\, b\rangle\mathrm{d} z\\ &\qquad\qquad=-\int_0^t\big\langle\nabla \big[F\Phi(\theta_t)\big],a\big\rangle\mathrm{d} z+\int_0^t\big\langle\partial_\xi\big[F\Phi(\theta_t)\big], b\big\rangle\mathrm{d} z\\ &\qquad\qquad=\int_0^t\big\langle \big[F\Phi(\theta_t)\big],\diver a-\partial_\xi b\big\rangle\mathrm{d} z=0 \end{align*} due to the fact that $\diver a-\partial_\xi b=0$. Thus, \eqref{hu} is a stronger version of \eqref{eq:weakkinformul} that does not require time dependent test functions.
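For the reader's convenience, let us verify the cancellation $\diver a-\partial_\xi b=0$ used in the last step: by the definitions of $a$ and $b$ and the symmetry of mixed derivatives, for every $j\in\{1,\dots,M\}$,
\begin{equation*}
(\diver a)_j-\partial_\xi b_j=\sum_{i=1}^N\partial_{x_i}\partial_\xi A_{ij}(x,\xi)-\partial_\xi\sum_{i=1}^N\partial_{x_i}A_{ij}(x,\xi)=0.
\end{equation*}
The same identity shows that the vector field driving \eqref{eq:fl1}, as well as the one driving the characteristic system \eqref{eq:charr} with $g=0$, is divergence free, which is the reason behind the volume preservation of the associated flows noted earlier.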
To conclude, let us also shortly discuss the connection with the notion of kinetic solution from \cite{debus} in the case of $z_t=t$. As no rough integrals then appear on the left hand side of \eqref{eq:kinform}, the flow transformation method used in Definition \ref{kinsol} was not needed in \cite[Definition 2]{debus}. Consequently, their notion of kinetic solution relied on solving the corresponding version of \eqref{eq:kinform} directly in the sense of distributions. According to the above reasoning, this is possible in the case of sufficiently smooth solution to \eqref{eq}. Since the setting of \cite{debus} differs from ours in several ways: spatial domain, $x$-dependence of the coefficient $A$, integrability assumptions on a kinetic solution as well as on a kinetic measure, we will not compare the two notions of kinetic solution further. \subsection{The main result} \label{subsec:main} Our main result reads as follows. \begin{thm}\label{thm:main} Let $u_0\in L^2(\Omega;L^1(\mathbb{R}^N))\cap L^4(\Omega;L^2(\mathbb{R}^N)) $. Under the above assumptions, the following statements hold true: \begin{enumerate} \item[\emph{(i)}] There exists a unique kinetic solution to \eqref{eq}. In addition, it satisfies \begin{equation}\label{eqqq} \mathbb{E}\sup_{0\leq t\leq T}\|u(t)\|_{L^1_x}^2+\mathbb{E}\bigg|\int_0^T\|u(t)\|_{L^2_{x}}^2\,\mathrm{d} t\bigg|^2\leq C\Big(\mathbb{E}\|u_0\|_{L^1_x}^2+\mathbb{E}\|u_0\|_{L^2_x}^4\Big). \end{equation} \item[\emph{(ii)}] Any generalized kinetic solution is actually a kinetic solution, that is, if $F$ is a generalized kinetic solution to \eqref{eq} with initial datum $\mathbf{1}_{u_0>\xi}$ then there exists a kinetic solution $u$ to \eqref{eq} with initial datum $u_0$ such that $F=\mathbf{1}_{u>\xi}$ for a.e. $(t,x,\xi)$. \item[\emph{(iii)}] If $u_1,\,u_2$ are kinetic solutions to \eqref{eq} with initial data $u_{1,0}$ and $u_{2,0}$, respectively, then for a.e. 
$t\in[0,T]$ \begin{equation*} \mathbb{E}\|(u_1(t)-u_2(t))^+\|_{L^1_x}\leq \mathbb{E}\|(u_{1,0}-u_{2,0})^+\|_{L^1_x}. \end{equation*} \end{enumerate} \end{thm} \subsection{Application to conservation laws with stochastic flux and forcing} It is immediately possible to apply our theory to conservation laws of the form \eqref{eq:stoch1}, where $B$ is a stochastic process that is independent of $W$ and that can be enhanced to a stochastic process $\mathbf B$ for which almost every realization is a geometric H\"older $p$-rough path. For instance, one may consider another Brownian motion but also more general Gaussian or Markov processes (the reader is referred to \cite[Part 3]{friz} for further examples). Let $B$ be a Brownian motion defined on a stochastic basis $(\bar\Omega,\bar{\mathscr{F}},(\bar{\mathscr{F}}_t),\bar\mathbb{P})$ and let $\mathbf B(\bar\omega)$ be a realization of its (Stratonovich) lift. For $\bar\omega$ fixed, we set $$\mathbf z=(\mathbf z^1,\mathbf z^2):=\big(\mathbf B^1(\bar\omega),\mathbf B^2(\bar\omega)\big)$$ and Theorem \ref{thm:main} yields the existence of a unique kinetic solution $$u(\bar\omega)\in L^1(\Omega;L^\infty(0,T;L^1(\mathbb{R}^N)))\cap L^2(\Omega;L^2(0,T;L^2(\mathbb{R}^N)))$$ together with the corresponding $L^1$-contraction property.
Let us now set $$\tilde\Omega=\bar\Omega\times\Omega,\qquad \tilde\mathscr{F}=\bar\mathscr{F}\otimes\mathscr{F},\qquad \tilde\mathscr{F}_t=\bar\mathscr{F}_t\otimes\mathscr{F}_t,\qquad \tilde\mathbb{P}=\bar\mathbb{P}\otimes\mathbb{P}$$ and $$\tilde B(\bar\omega,\omega):=B(\bar\omega),\qquad \tilde W(\bar\omega,\omega):=W(\omega).$$ Both these processes are Brownian motions on $(\tilde \Omega,\tilde\mathscr{F},(\tilde\mathscr{F}_t),\tilde\mathbb{P})$ and if $\tilde {\mathbf B}$ denotes the (Stratonovich) lift of $\tilde B$ then $$\tilde {\mathbf B}(\bar\omega,\omega)=\mathbf B(\bar\omega).$$ Moreover, standard results in rough path theory guarantee that the RDE solutions to \eqref{eq:fl1} and \eqref{eq:fl2} driven by $\mathbf z=\tilde{\mathbf B}(\bar\omega,\omega)$ coincide $\tilde\mathbb{P}$-a.s. with the corresponding SDE solutions and in particular they are $(\tilde\mathscr{F}_t)$-adapted. It remains to verify that the It\^o integral in the definition of kinetic solution \eqref{eq:weakkinformul}, which is now constructed for a.e. $\bar\omega$ as an It\^o integral on $(\Omega,\mathscr{F},(\mathscr{F}_t),\mathbb{P})$, can be extended to $(\tilde\Omega,\tilde\mathscr{F},(\tilde\mathscr{F}_t),\tilde\mathbb{P})$. But that follows directly from its construction: the corresponding Riemann sums converge in probability with respect to $\mathbb{P}$ for a.e. $\bar\omega$; therefore, by Fubini's theorem, they also converge in probability with respect to $\tilde\mathbb{P}.$ As a consequence, \eqref{eq:stoch1}, or more precisely its kinetic formulation \eqref{eq:weakkinformul}, is solved in the natural (stochastic) sense and Theorem \ref{thm:main} applies pathwise in $\bar\omega$. \section{Elements of rough path theory} \label{sec:rough} In this section, we recall some basic notions and results from rough path theory which are used throughout the paper. For the general exposition we refer the reader to \cite{friz}, \cite{lyons1}, \cite{lyons2}.
Let $G^{[p]}(\mathbb{R}^d)$ denote the free nilpotent group of step $[p]$ over $\mathbb{R}^d$ and let us consider the following rough differential equation (RDE) \begin{equation}\label{eq:rde} \begin{split} \mathrm{d} y&=V(y)\mathrm{d}\mathbf{x},\\ y(0)&=y_0, \end{split} \end{equation} where $V=(V_1,\dots,V_d)$ is a family of sufficiently smooth vector fields on $\mathbb{R}^e$ and $\mathbf{x}:[0,T]\rightarrow G^{[p]}(\mathbb{R}^d)$ is a geometric H\"older $p$-rough path, namely, it is a path with values in $G^{[p]}(\mathbb{R}^d)$ satisfying $$\|\mathbf{x}\|_{\frac{1}{p}\text{-H\"ol};[0,T]}=\sup_{0\leq s< t\leq T}\frac{\|\mathbf{x}_{s,t}\|}{|s-t|^{1/p}}<\infty.$$ We denote by $C^{0,\frac{1}{p}\text{-H\"ol}}([0,T];G^{[p]}(\mathbb{R}^d))$ the space of all geometric H\"older $p$-rough paths. The definition of a solution to problems of the form \eqref{eq:rde} is based on Davie's lemma (see \cite[Lemma 10.7]{friz}), which gives uniform estimates for ODE solutions depending only on the rough path regularity (e.g. $p$-variation) of the canonical lift of a regular driving signal $x:[0,T]\rightarrow\mathbb{R}^d.$ As a consequence, a careful limiting procedure yields a reasonable notion of solution to \eqref{eq:rde}. To be more precise, we define a solution to \eqref{eq:rde} as follows. \begin{defin}\label{def:solutionrde} Let $\mathbf{x}\in C^{0,\frac{1}{p}\text{-H\"ol}}([0,T];G^{[p]}(\mathbb{R}^d))$ be a geometric H\"older $p$-rough path and suppose that $(x^n)$ is a sequence of Lipschitz paths with the corresponding step-$[p]$ lifts denoted by $\mathbf{x}^n$, i.e. $$\mathbf{x}^n_t:=S_{[p]}(x^n)_t=\bigg(1,\int_{0<v<t}\mathrm{d} x^n_v,\dots,\int_{0<v_1<\cdots<v_{[p]}<t}\mathrm{d} x^n_{v_1}\otimes\cdots\otimes \mathrm{d} x^n_{v_{[p]}}\bigg),$$ such that $$\mathbf{x}^n\longrightarrow \mathbf{x}$$ uniformly on $[0,T]$ and $\sup_n \|\mathbf{x}^n\|_{\frac{1}{p}\text{-H\"ol};[0,T]}<\infty$.
We say that $y\in C([0,T];\mathbb{R}^e)$, also denoted by $\pi_{(V)}(0,y_0;\mathbf{x})$, is a solution to \eqref{eq:rde} provided it is a limit point (in the uniform topology on $[0,T]$) of $$\big\{\pi_{(V)}(0,y_0;x^n);\,n\geq1\big\},$$ where $\pi_{(V)}(0,y_0;x^n)$ denotes the solution to the ODE \begin{equation*} \begin{split} \mathrm{d} y^n&=V(y^n)\mathrm{d}{x}^n,\\ y^n(0)&=y_0. \end{split} \end{equation*} \end{defin} Under a sufficient regularity assumption on the collection of vector fields $V$, there exists a unique solution to \eqref{eq:rde} which defines a flow of diffeomorphisms. \begin{thm}\label{thm:existence} Let $p\geq 1$. Let $\mathbf{x}$ be a geometric H\"older $p$-rough path and suppose that the vector fields $V=(V_1,\dots,V_d)$ are $\mathrm{Lip}^{\gamma+k}$ for some $\gamma>p$ and some $k\in\mathbb{N}$. Then the following holds true. \begin{enumerate} \item[\emph{(i)}] There exists a unique solution to \eqref{eq:rde}, say $\pi_{(V)}(0,y_0;\mathbf{x})\in C([0,T];\mathbb{R}^e)$, and the map $$\Phi:(t,y)\in[0,T]\times\mathbb{R}^e\mapsto\pi_{(V)}(0,y;\mathbf{x})_t\in\mathbb{R}^e$$ is a flow of $C^k$-diffeomorphisms. \item[\emph{(ii)}] There exists $C>0$ that depends on $|V|_{\text{\emph{Lip}}^{\gamma+k}}$ and $\|\mathbf{x}\|_{\frac{1}{p}\text{\emph{-H\"ol}};[0,T]}$ such that for every multiindex $\alpha$ with $1\leq |\alpha|\leq k$ the following estimates hold true\footnote{Here, $|\cdot|_{\frac{1}{p}\text{{-H\"ol}};[0,T]}$ denotes the H\"older seminorm of a mapping taking values in $\mathbb{R}^e$.} $$\sup_{y\in\mathbb{R}^e}|\partial_\alpha\Phi(y)|_{\frac{1}{p}\text{\emph{-H\"ol}};[0,T]}\leq C,\qquad \sup_{y\in\mathbb{R}^e}|\partial_\alpha\Phi^{-1}(y)|_{\frac{1}{p}\text{\emph{-H\"ol}};[0,T]}\leq C.$$ \end{enumerate} \begin{proof} The proof of these results can be found in \cite[Proposition 11.11]{friz} and in \cite[Lemma 13]{crisan}.
\end{proof} \end{thm} \section{Contraction property} \label{sec:uniqueness} Let us start with the question of uniqueness. Our first result gives the existence of representatives of a generalized kinetic solution that possess certain left- and right- continuity properties. This builds the foundation for the key estimate of this section, Proposition \ref{prop:doubling}, as it allows us to strengthen the sense in which \eqref{eq:weakkinformul} is satisfied, namely, we obtain a weak formulation that is only weak in $x,\,\xi$ (cf. Corollary \ref{cor:strongerversion}) and therefore using time dependent test functions is no longer necessary. \begin{prop}[Left- and right-continuous representatives]\label{limits} Let $F$ be a generalized kinetic solution to \eqref{eq}. Then $F$ admits representatives $F^-$ and $F^+$ which are a.s. left- and right-continuous, respectively, in the sense of $\mathcal{D}'(\mathbb{R}^N\times\mathbb{R})$. More precisely, for all $t^*\in[0,T)$ there exist kinetic functions $F^{*,+}$ on $\Omega\times\mathbb{R}^N$ such that setting $F^+(t^*)=F^{*,+}$ yields $F^+=F$ for a.e. $(\omega,t,x,\xi)$ and \begin{equation*} \big\langle F^+(t^*+\,\varepsilon),\mathbb{P}hi(\theta_{t^*+\varepsilon})\big\rangle\longrightarrow\big\langle F^{+}(t^*),\mathbb{P}hi(\theta_{t^*})\big\rangle\quad\varepsilon\downarrow 0\quad\forall\mathbb{P}hi\in C^1_c(\mathbb{R}^N\times\mathbb{R})\quad \text{a.s}. \end{equation*} Similarly, for all $t^*\in(0,T]$ there exist kinetic functions $F^{*,-}$ on $\Omega\times\mathbb{R}^N$ such that setting $F^-(t^*)=F^{*,-}$ yields $F^-=F$ for a.e. $(\omega,t,x,\xi)$ and \begin{equation*} \big\langle F^-(t^*-\,\varepsilon),\mathbb{P}hi(\theta_{t^*-\varepsilon})\big\rangle\longrightarrow\big\langle F^{-}(t^*),\mathbb{P}hi(\theta_{t^*})\big\rangle\quad\varepsilon\downarrow 0\quad\forall\mathbb{P}hi\in C^1_c(\mathbb{R}^N\times\mathbb{R})\quad \text{a.s}. 
\end{equation*} Furthermore, there exists a set of full measure $\mathcal{A}\subset(0,T)$ such that, for all $t\in\mathcal{A}$, \begin{equation}\label{1s1} \langle F^-(t),\phi(\theta_t)\rangle=\langle F^+(t),\phi(\theta_t)\rangle\qquad\forall\phi\in C^1_c(\mathbb{R}^N\times\mathbb{R})\quad\text{a.s}. \end{equation} \begin{proof} {\em Step 1:} First, we show that $F$ admits representatives $F^-$ and $F^+$ that are left- and right-continuous, respectively, in the required sense. The argument follows the ideas of \cite[Proposition 8]{debus} and \cite[Proposition 3.1]{hof}; nevertheless, as our mixed rough-stochastic setting introduces new difficulties, we present the proof in full detail. As the space $C^1_c(\mathbb{R}^N\times\mathbb{R})$ (endowed with the topology of uniform convergence of functions and their first derivatives on compact sets) is separable, let us fix a countable dense subset $\mathcal{D}_1$. Let $\phi\in \mathcal{D}_1$ and $\alpha\in C^1_c([0,T))$. Integration by parts and the stochastic version of Fubini's theorem applied to \eqref{eq:weakkinformul} yield \begin{equation*} \int_0^T g_\phi(t)\alpha'(t)\mathrm{d} t+\langle F_0,\phi\rangle\alpha(0)=\langle m,\partial_\xi\phi(\theta_t)\rangle(\alpha)\qquad\mathbb{P}\text{-a.s.} \end{equation*} where \begin{equation}\label{fceg} \begin{split} g_\phi(t)&=\big\langle F(t),\phi(\theta_t)\big\rangle+\int_0^t\langle\partial_\xi F g,\phi(\theta_s)\rangle\mathrm{d} W-\frac{1}{2}\int_0^t\langle\partial_\xi(G^2\partial_\xi F),\phi(\theta_s)\rangle\mathrm{d} s. \end{split} \end{equation} Hence $\partial_tg_\phi$ is a (pathwise) Radon measure on $[0,T]$ and, by the Riesz representation theorem, $g_\phi\in BV([0,T])$.
Due to the properties of $BV$-functions, we obtain that $g_\phi$ admits left- and right-continuous representatives which coincide except on an at most countable set. Moreover, apart from the first one, all terms in \eqref{fceg} are almost surely continuous in $t$. Hence, on a set of full measure, denoted by $\Omega_\phi$, $\langle F(t),\phi(\theta_t)\rangle$ also admits left- and right-continuous representatives which coincide except on an at most countable set. Let them be denoted by $\langle F,\phi(\theta_t)\rangle^\pm$ and set $\Omega_0=\cap_{\phi\in\mathcal{D}_1}\Omega_\phi.$ Since $\mathcal{D}_1$ is countable, $\Omega_0$ is also a set of full measure. Besides, for $\phi\in \mathcal{D}_1$, $(t,\omega)\mapsto \langle F(t,\omega),\phi(\theta_t)\rangle^\pm$ has left- and right-continuous trajectories in time, respectively, and is thus measurable with respect to $(t,\omega)$. For $\phi\in C^1_c(\mathbb{R}^N\times\mathbb{R})$, we define $\langle F(t,\omega),\phi(\theta_t)\rangle^\pm$ on $[0,T]\times \Omega_0$ as the limit of $\langle F(t,\omega),\phi_n(\theta_t)\rangle^\pm$ for any sequence $(\phi_n)$ in $\mathcal{D}_1$ converging to $\phi$ in the topology of $C^1_c(\mathbb{R}^N\times\mathbb{R})$. Then, as a pointwise limit of measurable functions, $\langle F(\cdot,\cdot),\phi(\theta_t)\rangle^\pm$ is also measurable in $(t,\omega)$. Moreover, due to the uniform convergence of $\phi_n$ to $\phi$ and the boundedness of $F$, it has left- and right-continuous trajectories, respectively.
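The independence of this definition of the approximating sequence can be seen from the following Cauchy estimate (a sketch; $K$ denotes a fixed compact set containing the supports of all $(\phi_n-\phi_m)(\theta_t)$, which exists due to the convergence in $C^1_c(\mathbb{R}^N\times\mathbb{R})$ and the uniform bound on the displacement of the flow $\theta$):
\begin{equation*}
\big|\langle F(t,\omega),(\phi_n-\phi_m)(\theta_t)\rangle^\pm\big|\leq\|F\|_{L^\infty}\,|K|\,\|\phi_n-\phi_m\|_{\infty},
\end{equation*}
which is uniform in $t$ and $\omega$.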
Consequently, let $\omega\in\Omega_0$ and let $\mathcal{N}_\omega\subset(0,T)$ denote the corresponding countable set of times where $$\langle F(t,\omega),\phi(\theta_t)\rangle^-\neq\langle F(t,\omega),\phi(\theta_t)\rangle^+\qquad\text{for some}\quad\phi\in\mathcal{D}_1.$$ It follows from the Fubini theorem that there exists a set of full measure $\mathcal{A}\subset(0,T)$ such that for all $t\in\mathcal{A}$ it holds true that $t\notin\mathcal{N}_\omega$ for a.e. $\omega\in\Omega$, i.e. $$\langle F(t),\phi(\theta_t)\rangle^-=\langle F(t),\phi(\theta_t)\rangle^+\qquad\forall\phi\in\mathcal{D}_1\quad\text{a.s.,}$$ thus \begin{equation}\label{1s} \langle F(t),\phi(\theta_t)\rangle^-=\langle F(t),\phi(\theta_t)\rangle^+\qquad\forall\phi\in C^1_c(\mathbb{R}^N\times\mathbb{R})\quad\text{a.s}. \end{equation} Let us proceed with the construction of $F^+$, the construction of $F^-$ being analogous. In the sequel, we will frequently write expressions of the type $F(t,\pi_{t})$ as a shorthand for the composition of $F(t,\cdot)$ with $\pi_t$ (similarly for $F^+$ etc.). Therefore, $\langle F(t,\pi_t),\phi\rangle $ is understood as $$\int_{\mathbb{R}^{N+1}}F(t,\pi_t(x,\xi))\phi(x,\xi)\,\mathrm{d} x\,\mathrm{d}\xi$$ and, since the flow $\pi$ is volume preserving, $$\langle F(t,\pi_t),\phi\rangle=\int_{\mathbb{R}^{N+1}}F(t,x,\xi)\phi(\theta_t(x,\xi))\,\mathrm{d} x\,\mathrm{d}\xi=\langle F(t),\phi(\theta_t)\rangle.$$ It is now straightforward to define $F^+(t),\, t\in[0,T),$ by \begin{equation}\label{eq:ff} \big\langle F^+(t),\phi(\theta_t)\big\rangle:=\langle F(t),\phi(\theta_t)\rangle^+,\qquad\phi\in C^1_c(\mathbb{R}^N\times\mathbb{R}) \qquad \text{a.s.} \end{equation} and observe that $F^+(t)$ is right-continuous in the required sense. Note that $F^+(t)$ is a.s.
a well-defined distribution since, due to the flow properties of $\pi$ and $\theta$, the mapping $\phi\mapsto\phi(\theta_t)$ is a bijection on $C^1_c(\mathbb{R}^N\times\mathbb{R})$ with inverse $\phi\mapsto\phi(\pi_t)$. Moreover, it belongs to $L^\infty(\mathbb{R}^N\times\mathbb{R})$ a.s. Indeed, for $\omega\in\Omega_0$ and $\phi\in \mathcal{D}_1$, if $t\in[0,T)$ is such that $$\langle F(t,\omega),\phi(\theta_t)\rangle^+=\langle F(t,\omega),\phi(\theta_t)\rangle,$$ then the claim follows immediately from the fact that $F(t)\in L^\infty(\mathbb{R}^N\times\mathbb{R})$ a.s. For a general $t\in[0,T)$ we take a sequence $t_n\searrow t$ with the property above and use lower semicontinuity. Moreover, seen as a function $F^+ :\Omega\times [0, T ] \to L^p_{loc}(\mathbb{R}^N\times\mathbb{R})$, for some $p\in[1,\infty)$, it is weakly measurable and therefore measurable. Hence, according to the Fubini theorem, $F^+$, as a function of the four variables $\omega,t,x,\xi$, is measurable. Next, we show that $F^+$ is a representative (in time) of $F$, i.e. for a.e. $t^*\in[0,T)$ it holds that $F^+(t^*)=F(t^*)$, where the equality is understood a.e. in $(\omega,x,\xi)$. Indeed, due to the Lebesgue differentiation theorem, \begin{equation*} \lim_{\varepsilon\rightarrow 0}\frac{1}{\varepsilon}\int_{t^*}^{t^*+\varepsilon}F\big(t,\pi_t(x,\xi)\big)\,\mathrm{d} t=F\big(t^*,\pi_{t^*}(x,\xi)\big)\qquad\text{a.e. }(\omega,t^*,x,\xi), \end{equation*} hence by the dominated convergence theorem \begin{equation*} \lim_{\varepsilon\rightarrow 0}\frac{1}{\varepsilon}\int_{t^*}^{t^*+\varepsilon}\big\langle F(t,\pi_t),\phi\big\rangle\,\mathrm{d} t=\big\langle F(t^*,\pi_{t^*}),\phi\big\rangle\qquad\forall\phi\in C_c^1(\mathbb{R}^N\times\mathbb{R})\quad\text{a.e. } (\omega,t^*).
\end{equation*} Since the left hand side is equal to $\big\langle F^+(t^*,\pi_{t^*}),\phi\big\rangle$ for all $t^*\in [0,T]$ and $\omega\in\Omega_0$, it follows that $$\big\langle F^+(t^*),\phi(\theta_{t^*})\big\rangle=\big\langle F(t^*),\phi(\theta_{t^*})\big\rangle\qquad\forall\phi\in C_c^1(\mathbb{R}^N\times\mathbb{R})\quad\text{a.e. } (\omega,t^*),$$ which implies $$ F^+(t^*)= F(t^*)\qquad\text{in}\quad L^\infty(\mathbb{R}^N\times\mathbb{R})\quad\text{a.e. } (\omega,t^*),$$ and the Fubini theorem, together with the measurability of $F^+$ and $F$ regarded as functions of the four variables $\omega,t,x,\xi$, yields $$ F^+(t^*,x,\xi)= F(t^*,x,\xi)\qquad\text{a.e. } (\omega,t^*,x,\xi).$$ {\em Step 2:} Second, we prove that for all $t^*\in[0,T)$ \begin{equation}\label{eq:aaa} \big\langle F^+(t^*\pm\,\varepsilon),\phi\big\rangle\longrightarrow\big\langle F^{+}(t^*),\phi\big\rangle\quad\quad\forall\phi\in C^1_c(\mathbb{R}^N\times\mathbb{R}) \quad\text{a.s}. \end{equation} Towards this end, we verify \begin{equation}\label{eq:aaaa} \big\langle F^+(t^*+\varepsilon,\pi_{t^*}),\phi\big\rangle\longrightarrow\big\langle F^{+}(t^*,\pi_{t^*}),\phi\big\rangle\quad\quad\forall\phi\in C^1_c(\mathbb{R}^N\times\mathbb{R})\quad\text{a.s}. \end{equation} and observe that, since the flow $\pi$ is volume preserving, testing in \eqref{eq:aaaa} by $\phi(\pi_{t^*})\in C^1_c(\mathbb{R}^N\times\mathbb{R})$ yields \eqref{eq:aaa}.
In order to prove \eqref{eq:aaaa}, we write \begin{align*} \big| &\langle F^+(t^*+\varepsilon,\pi_{t^*}),\phi\rangle- \langle F^+(t^*,\pi_{t^*}),\phi\rangle\big|\\ &\quad\leq\big| \langle F^+(t^*+\varepsilon,\pi_{t^*}),\phi\rangle-\big\langle F^+(t^*+\varepsilon,\pi_{t^*+\varepsilon}),\phi\big\rangle\big|+\big|\big\langle F^+(t^*+\varepsilon,\pi_{t^*+\varepsilon}),\phi\big\rangle-\big\langle F^+(t^*,\pi_{t^*}),\phi\big\rangle\big|. \end{align*} The second term on the right hand side vanishes a.s. as $\varepsilon\rightarrow0$ according to {\em Step 1} and, regarding the first one, we have \begin{align*} \big| &\langle F^+(t^*+\varepsilon,\pi_{t^*}),\phi\rangle-\big\langle F^+(t^*+\varepsilon,\pi_{t^*+\varepsilon}),\phi\big\rangle\big|=\big|\big\langle F^+(t^*+\varepsilon),\phi(\theta_{t^*})-\phi(\theta_{t^*+\varepsilon})\big\rangle\big| \end{align*} and we argue by the dominated convergence theorem: according to Theorem \ref{thm:existence}, if $\supp\phi\subset B_{R}$, where $B_{R}\subset\mathbb{R}^N\times\mathbb{R}$ is the ball of radius $R$ centred at $0$, then $\phi(\theta_{t^*})$ and $\phi(\theta_{t^*+\varepsilon})$ remain compactly supported uniformly in $\varepsilon$ and their supports are included in $ B_{R+CT^{1/p}}$. Besides, $$\phi(\theta_{t^*}(x,\xi))-\phi(\theta_{t^*+\varepsilon}(x,\xi))\longrightarrow 0\qquad\forall (x,\xi)\in\mathbb{R}^N\times\mathbb{R},$$ and since $F^+$ takes values in $[0,1]$, the claim follows. {\em Step 3:} Now, it only remains to show that $F^+(t^*)$ is a kinetic function on $X=\Omega\times\mathbb{R}^N$ for all $t^*\in[0,T)$.
Towards this end, we observe that for all $t^*\in[0,T)$ $$F_n(t^*,x,\xi):=\frac{1}{\varepsilon_n}\int_{t^*}^{t^*+\varepsilon_n}F(t,x,\xi)\,\mathrm{d} t$$ is a kinetic function on $X=\Omega\times\mathbb{R}^N$ and, by \eqref{integrov}, the assumptions of Lemma \ref{kinetcomp} are fulfilled\footnote{Note that we may assume without loss of generality that the $\sigma$-field $\mathcal{F}$ is countably generated and hence, according to \cite[Proposition 3.4.5]{cohn}, the space $L^1(\Omega)$ is separable.}. Accordingly, there exists a kinetic function $F^{*,+}$ and a subsequence $(n^*_k)$ such that, on the one hand, \begin{equation*} F_{n_k^*}(t^*)\overset{w^*}{\longrightarrow} F^{*,+}\qquad \text{in}\qquad L^\infty(\Omega\times\mathbb{R}^N\times\mathbb{R}). \end{equation*} Since, on the other hand, we have due to {\em Step 2} that \begin{equation*} F_{n^*_k}(t^*)\longrightarrow F^+(t^*)\qquad \text{in}\qquad \mathcal{D}'(\mathbb{R}^N\times\mathbb{R})\quad\text{a.s.}, \end{equation*} we deduce that $F^+(t^*)=F^{*,+}$ for all $t^*\in[0,T)$, which completes the proof of this step. {\em Step 4:} The proof of existence of the left-continuous representative $F^-$ on $(0,T]$ can be carried out similarly, and \eqref{1s1} then follows immediately from \eqref{1s}. \end{proof} \end{prop} Remark that, according to Proposition \ref{limits}, a stronger version of \eqref{eq:weakkinformul} holds true provided $F$ is replaced by $F^+$ or $F^-$. The precise result is presented in the following corollary. \begin{cor}\label{cor:strongerversion} Let $F$ be a generalized kinetic solution to \eqref{eq} and let $F^-$ and $F^+$ be its left- and right-continuous representatives.
Then the couples $(F^+,m)$ and $(F^-,m)$, respectively, satisfy for all $t\in(0,T)$ and every $\phi\in C^1_c(\mathbb{R}^N\times\mathbb{R})$, a.s., \begin{align*} \langle F^+(t),\phi(\theta_t)\rangle&=\langle F_{0},\phi\rangle-\int_0^t\langle\partial_\xi F g,\phi(\theta_s)\rangle\mathrm{d} W+\frac{1}{2}\int_0^t\langle\partial_\xi(G^2\partial_\xi F),\phi(\theta_s)\rangle\mathrm{d} s-m\big(\mathbf{1}_{[0,t]}\partial_{\xi}\phi(\theta_s)\big) \end{align*} and \begin{align*} \langle F^-(t),\phi(\theta_t)\rangle&=\langle F_{0},\phi\rangle-\int_0^t\langle\partial_\xi F g,\phi(\theta_s)\rangle\mathrm{d} W+\frac{1}{2}\int_0^t\langle\partial_\xi(G^2\partial_\xi F),\phi(\theta_s)\rangle\mathrm{d} s-m\big(\mathbf{1}_{[0,t)}\partial_{\xi}\phi(\theta_s)\big). \end{align*} Furthermore, setting $F^-(0):=F_0$, for every $\phi\in C^1_c(\mathbb{R}^N\times\mathbb{R})$ and $t^*\in [0,T)$ it holds true that \begin{equation}\label{eq:cont1} \big\langle F^+(t^*)-F^-(t^*),\phi(\theta_{t^*})\big\rangle=-m\big(\mathbf{1}_{\{t^*\}}\partial_\xi\phi(\theta_{t^*})\big)\quad\text{a.s}. \end{equation} In particular, if $\mathcal{A}\subset(0,T)$ is the set of full measure constructed in Proposition \ref{limits}, then for all $t^*\in\mathcal{A}$, $m$ does not have an atom at $t^*$ a.s. \begin{proof} Consider \eqref{eq:weakkinformul} with a test function of the form $(s,x,\xi)\mapsto \phi(x,\xi)\alpha_\varepsilon(s)$ where $\phi\in C^1_c(\mathbb{R}^N\times\mathbb{R})$ and $$\alpha_\varepsilon(s)=\begin{cases} 1,&\quad s\leq t,\\ 1-\frac{s-t}{\varepsilon},&\quad t\leq s\leq t+\varepsilon,\\ 0,&\quad t+\varepsilon\leq s.
\end{cases}$$ Due to Proposition \ref{limits}, we obtain convergence of the left hand side of \eqref{eq:weakkinformul} as $\varepsilon\rightarrow0$: \begin{align*} \int_0^T\langle F(s),\phi(\theta_s)\rangle\,\partial_s\alpha_\varepsilon(s)\,\mathrm{d} s=-\frac{1}{\varepsilon}\int_t^{t+\varepsilon}\langle F(s),\phi(\theta_s)\rangle\,\mathrm{d} s\longrightarrow-\big\langle F^+(t),\phi(\theta_t)\big\rangle. \end{align*} Then we have \begin{align*} m\big(\alpha_\varepsilon(s) \partial_\xi\phi(\theta_s)\big)&=\int_{[0,t)\times\mathbb{R}^N\times\mathbb{R}}\partial_\xi\phi(\theta_s)\,\mathrm{d} m+\int_{[t,t+\varepsilon)\times\mathbb{R}^N\times\mathbb{R}}\Big(1-\frac{s-t}{\varepsilon}\Big)\partial_\xi\phi(\theta_s)\,\mathrm{d} m\\ &\longrightarrow \int_{[0,t]\times\mathbb{R}^N\times\mathbb{R}}\partial_\xi\phi(\theta_s)\,\mathrm{d} m, \end{align*} and the convergence of the remaining two terms on the right hand side of \eqref{eq:weakkinformul} is obvious because they are continuous in $t$. Therefore, we have justified the equation for $(F^+,m)$. Concerning $(F^-,m)$, we apply a similar approach but use the function $$\alpha_\varepsilon(s)=\begin{cases} 1,&s\leq t-\varepsilon,\\ \frac{t-s}{\varepsilon},& t-\varepsilon\leq s\leq t,\\ 0,&t\leq s \end{cases}$$ instead. Finally, the formula \eqref{eq:cont1} follows from \eqref{eq:weakkinformul} by testing by $(t,x,\xi)\mapsto\phi(x,\xi)\alpha_\varepsilon(t)$ where $$\alpha_\varepsilon(t)=\frac{1}{\varepsilon}\min\big\{(t-t^*+\varepsilon)^+,(t-t^*-\varepsilon)^-\big\}$$ and sending $\varepsilon\rightarrow 0$. As a consequence, $m$ has an atom at $t^*$ if and only if $F^-(t^*)\neq F^+(t^*)$; hence, in view of \eqref{1s1}, the proof is complete. \end{proof} \end{cor} \begin{lemma}\label{lem:equil} Let $F$ be a generalized kinetic solution to \eqref{eq} and let $F^+$ be its right-continuous representative.
Assume that the initial condition $F_0$ is at equilibrium: there exists $u_0\in L^1(\Omega\times\mathbb{R}^N)$ such that $F_0=\mathbf{1}_{u_0>\xi}$. Then $F^+(0)=F_0$ and, in particular, the corresponding kinetic measure $m$ has no atom at $0$ a.s., i.e. the restriction of $m$ to $\{0\}\times\mathbb{R}^N\times\mathbb{R}$ vanishes a.s. \begin{proof} Let $m_0$ be the restriction of $m$ to $\{0\}\times\mathbb{R}^N\times\mathbb{R}$. Then we deduce from \eqref{eq:cont1} at $t^*=0$ (recall that $F^-(0)=F_0$) that \begin{equation}\label{m0} \langle F^+(0)-\mathbf{1}_{u_0>\xi},\phi\rangle=-m_0\big(\partial_\xi \phi\big)\qquad\forall \phi\in C^1_c(\mathbb{R}^N\times\mathbb{R})\quad\text{a.s}. \end{equation} Let $H_R$ be a smooth truncation on $\mathbb{R}$ such that $0\leq H_R\leq 1$, $H_R(\xi)= 1$ if $|\xi|\leq R$, $H_R(\xi)= 0$ if $|\xi|\geq 2R$ and $|\partial_\xi H_R| \leq 1$. For $\phi\in C^1_c(\mathbb{R}^N)$ we intend to pass to the limit $R\to\infty$ in \begin{equation}\label{fg67} \langle F^+(0)-\mathbf{1}_{u_0>\xi},\phi H_R\rangle=-m_0\big(\phi\partial_\xi H_R\big). \end{equation} Since $m$, and consequently $m_0$, is a finite measure a.s., the right hand side converges to $0$ a.s. as $R\to \infty$ due to the dominated convergence theorem, whereas for the left hand side we write $$\langle F^+(0)-\mathbf{1}_{u_0>\xi},\phi H_R\rangle=\langle F^+(0)-\mathbf{1}_{0>\xi},\phi H_R\rangle-\langle\mathbf{1}_{u_0>\xi}-\mathbf{1}_{0>\xi},\phi H_R\rangle.$$ We make use of Remark \ref{chi} and the shorthand notation $\chi_{u_0}=\chi_{\mathbf{1}_{u_0>\xi}}$ introduced there, which together with the dominated convergence theorem implies the convergence $$\langle\mathbf{1}_{u_0>\xi}-\mathbf{1}_{0>\xi},\phi H_R\rangle\longrightarrow\langle \chi_{u_0},\phi\rangle.$$ Consequently, we deduce that also the remaining term from \eqref{fg67}, i.e.
$$\langle F^+(0)-\mathbf{1}_{0>\xi},\phi H_R\rangle,$$ is converging, since all the other terms converge. Therefore, necessarily, $$\langle F^+(0)-\mathbf{1}_{0>\xi},\phi H_R\rangle\longrightarrow\langle F^+(0)-\mathbf{1}_{0>\xi},\phi\rangle.$$ As a consequence, we obtain $$\int_\mathbb{R} \big(F^+(0,\xi)-\mathbf{1}_{0>\xi}\big)\,\mathrm{d}\xi=u_0\quad\text{a.s.,}$$ and using this fact one can easily observe that $$p(\xi):=\int_{-\infty}^\xi\big(\mathbf{1}_{u_0>\zeta}-F^+(0,\zeta)\big)\,\mathrm{d} \zeta=\int_{-\infty}^\xi\big[(\mathbf{1}_{u_0>\zeta}-\mathbf{1}_{0>\zeta})-(F^+(0,\zeta)-\mathbf{1}_{0>\zeta})\big]\,\mathrm{d} \zeta\geq0\quad\text{a.s.}$$ Indeed, $p(-\infty)=p(\infty)=0$ and $p$ is nondecreasing on $(-\infty,u_0)$ and nonincreasing on $(u_0,\infty)$. However, it follows from \eqref{m0} that $p=-m_0$ and, since $m_0$ is nonnegative, we deduce that $m_0\equiv 0$. \end{proof} \end{lemma} The following result allows us to restart the evolution given by \eqref{eq} at an arbitrary time $t^*\in(0,T)$. \begin{lemma}\label{lemma:neworigin} Let $F$ be a generalized kinetic solution to \eqref{eq} on $[0,T]$ with the initial datum $F_0$. Then for every $t^*\in[0,T)$, $t\mapsto F(t^*+t)$ is a generalized kinetic solution to \eqref{eq} on $[0,T-t^*]$ with the initial datum $F^-(t^*)$. \begin{proof} Let $\alpha\in C^1_c([0,T))$, $\phi\in C^1_c(\mathbb{R}^N\times\mathbb{R})$ and test \eqref{eq:weakkinformul} by $(t,x,\xi)\mapsto\phi(x,\xi)\alpha_\varepsilon(t)$ where $$\alpha_\varepsilon(t)=\begin{cases} \alpha(t),&\quad t\leq t^*,\\ \alpha(t)\Big(1-\frac{t-t^*}{\varepsilon}\Big),&\quad t^*\leq t\leq t^*+\varepsilon,\\ 0,&\quad t^*+\varepsilon\leq t.
\end{cases}$$ In the limit $\varepsilon\rightarrow0$ we infer \begin{align}\label{eq:cont2} \begin{aligned} \int_0^{t^*}&\langle F(t),\phi(\theta_t)\rangle\partial_t\alpha(t)\,\mathrm{d} t-\langle F^+(t^*),\phi(\theta_{t^*})\rangle\alpha(t^*)+\langle F_0,\phi\rangle\alpha(0)\\ &=\int_0^{t^*}\langle\partial_\xi F g,\phi(\theta_t)\rangle\alpha(t)\,\mathrm{d} W-\frac{1}{2}\int_0^{t^*}\langle\partial_\xi(G^2\partial_\xi F),\phi(\theta_t)\rangle\alpha(t)\,\mathrm{d} t+m\big(\mathbf{1}_{[0,t^*]}\alpha\partial_\xi\phi(\theta_t)\big). \end{aligned} \end{align} Subtracting \eqref{eq:cont2} from \eqref{eq:weakkinformul} and using \eqref{eq:cont1} leads to \begin{align*} \begin{aligned} \int_{t^*}^T&\langle F(t),\phi(\theta_t)\rangle\partial_t\alpha(t)\,\mathrm{d} t+\langle F^-(t^*),\phi(\theta_{t^*})\rangle\alpha(t^*)\\ &=\int^T_{t^*}\langle\partial_\xi F g,\phi(\theta_t)\rangle\alpha(t)\,\mathrm{d} W-\frac{1}{2}\int^T_{t^*}\langle\partial_\xi(G^2\partial_\xi F),\phi(\theta_t)\rangle\alpha(t)\,\mathrm{d} t+m\big(\mathbf{1}_{[t^*,T]}\alpha\partial_\xi\phi(\theta_t)\big). \end{aligned} \end{align*} Finally, we take $(t,x,\xi)\mapsto\phi(\pi_{t^*}(x,\xi))\alpha(t)$ as a test function and deduce that \begin{align*} \begin{aligned} \int_{t^*}^T&\langle F(t),\phi(\theta_{t^*,t})\rangle\partial_{t}\alpha(t)\,\mathrm{d} t+\langle F^-(t^*),\phi\rangle\alpha(t^*)\\ &=\int^T_{t^*}\langle\partial_\xi F g,\phi(\theta_{t^*,t})\rangle\alpha(t)\,\mathrm{d} W-\frac{1}{2}\int^T_{t^*}\langle\partial_\xi(G^2\partial_\xi F),\phi(\theta_{t^*,t})\rangle\alpha(t)\,\mathrm{d} t+m\big(\mathbf{1}_{[t^*,T]}\alpha\partial_\xi\phi(\theta_{t^*,t})\big).
\end{aligned} \end{align*} As a consequence, $t\mapsto F(t^*+t)$ is a generalized kinetic solution to \eqref{eq} on $[0,T-t^*]$ with initial condition $F^-(t^*)$. \end{proof} \end{lemma} We proceed with a result that plays the key role in our proof of the reduction of a generalized kinetic solution to a kinetic one, Theorem \ref{thm:reduction}, and of the $L^1$-contraction property, Corollary \ref{cor:contraction}. To be more precise, the relevant estimate that we need to verify in the case of two generalized kinetic solutions $F_1,F_2$ with initial conditions $F_{1,0}$ and $F_{2,0}$, respectively, is the following: $$\int_{\mathbb{R}^N\times\mathbb{R}} F_1(t) (1-F_2(t))\,\mathrm{d} x\,\mathrm{d} \xi\leq\int_{\mathbb{R}^N\times\mathbb{R}} F_{1,0} (1-F_{2,0})\,\mathrm{d} x\,\mathrm{d} \xi.$$ Indeed, setting $F:=F_1=F_2$ then leads to the Reduction Theorem, and using the identity $$\int_{\mathbb{R}} \mathbf{1}_{u_1(t)>\xi} (1-\mathbf{1}_{u_2(t)>\xi})\,\mathrm{d} \xi=(u_1(t)-u_2(t))^+$$ (for fixed $t$ and $x$, the integrand equals $\mathbf{1}_{u_2(t)\leq\xi<u_1(t)}$, so the integral is the length $(u_1(t)-u_2(t))^+$ of the corresponding interval) gives the $L^1$-contraction property. However, it is not possible to multiply the two equations directly, i.e. the equation for $F_1$ and the one for $1-F_2$, as it is necessary to mollify first. This is taken care of in Proposition \ref{prop:doubling}. In order to simplify the notation, we denote $\overline{F}=1-F$. In the sequel, let $(\varrho_\delta)$ be an approximation to the identity on $\mathbb{R}^N\times\mathbb{R}$ and $(\kappa_R)$ a truncation on $\mathbb{R}^N$ such that $\kappa_R\equiv 1$ on $B_R$, $\supp\kappa_R\subset B_{2R}$, $0\leq \kappa_R\leq 1$ and $|\nabla_x\kappa_R|\leq R^{-1}$. \begin{prop}\label{prop:doubling} Let $F_1,\, F_2$ be generalized kinetic solutions to \eqref{eq}. Let $s,\,t\in[0,T)$, $s\leq t$, be such that neither of the kinetic measures $m_1$ and $m_2$ has an atom at $s$ a.s.
Then it holds true that \begin{align}\label{eq:doubling} \begin{aligned} \mathbb{E}\int&\,\kappa_R(x)\langle F_1^+(t),\varrho_\delta((x,\xi)-\theta_{s,t}(\cdot,\cdot))\rangle_{z,\sigma}\langle \overline{F}_2^+(t),\varrho_\delta((x,\xi)-\theta_{s,t}(\cdot,\cdot))\rangle_{y,\zeta}\mathrm{d} x\mathrm{d}\xi\\ &\quad-\mathbb{E}\int\kappa_R(x)\langle F_1^-(s),\varrho_\delta((x,\xi)- (\cdot,\cdot))\rangle_{z,\sigma}\langle \overline{F}_2^-(s),\varrho_\delta((x,\xi)- (\cdot,\cdot))\rangle_{y,\zeta}\mathrm{d} x\mathrm{d}\xi\\ &\leq C(t-s)^{1/p}\bigg(\mathbb{E}\int_{(s,t]}\int\mathrm{d} m_1(r,z,\sigma)+\mathbb{E}\int_{(s,t]}\int\mathrm{d} m_2(r,y,\zeta)\bigg)\\ &\quad+C(t-s)^{1/p}\bigg(\mathbb{E}\int_s^t\int |\zeta|^2\mathrm{d}\nu^2_{r,y}(\zeta)\mathrm{d} y\mathrm{d} r+\mathbb{E}\int_s^t\int |\sigma|^2\mathrm{d}\nu_{r,z}^1(\sigma)\mathrm{d} z\mathrm{d} r\bigg)\\ &\quad+CR\delta(t-s)^{1+1/p}. \end{aligned} \end{align} \begin{proof} {\em Step 1:} Applying Corollary \ref{cor:strongerversion} and Lemma \ref{lemma:neworigin} to the generalized kinetic solutions $F_1,F_2$ and to the mollifier $\varrho_\delta$ in place of a test function, we deduce that for all $(x,\xi)\in \mathbb{R}^N\times\mathbb{R}$ \begin{equation}\label{eq:stronger} \begin{split} \langle& F_1^+(t),\varrho_\delta((x,\xi)-\theta_{s,t}(\cdot,\cdot))\rangle_{z,\sigma}\\ &=\langle F_1^-(s),\varrho_\delta((x,\xi)- (\cdot,\cdot))\rangle_{z,\sigma}-\int_s^t\big\langle\partial_\sigma F_1g,\varrho_\delta((x,\xi)-\theta_{s,r}(\cdot,\cdot))\big\rangle_{z,\sigma}\,\mathrm{d} W\\ &\qquad-m_1\Big(\mathbf{1}_{[s,t]}(\cdot)\partial_{\sigma}\varrho_\delta((x,\xi)-\theta_{s,r}(\cdot,\cdot))\Big)-\frac{1}{2}\int_s^t\big\langle G^2\partial_\sigma F_1,\partial_\sigma\varrho_\delta((x,\xi)-\theta_{s,r}(\cdot,\cdot))\big\rangle_{z,\sigma}\mathrm{d} r \end{split} \end{equation} and similarly \begin{equation}\label{eq:stronger1} \begin{split} \langle&
\overline{F}_2^+(t),\varrho_\delta((x,\xi)-\theta_{s,t}(\cdot,\cdot))\rangle_{y,\zeta}\\ &=\langle \overline{F}_2^-(s),\varrho_\delta((x,\xi)- (\cdot,\cdot))\rangle_{y,\zeta}+\int_s^t\big\langle\partial_\zeta F_2g,\varrho_\delta((x,\xi)-\theta_{s,r}(\cdot,\cdot))\big\rangle_{y,\zeta}\,\mathrm{d} W\\ &\qquad+m_2\Big(\mathbf{1}_{[s,t]}(\cdot)\partial_{\zeta}\varrho_\delta((x,\xi)-\theta_{s,r}(\cdot,\cdot))\Big)+\frac{1}{2}\int_s^t\big\langle G^2\partial_\zeta F_2,\partial_\zeta\varrho_\delta((x,\xi)-\theta_{s,r}(\cdot,\cdot))\big\rangle_{y,\zeta}\mathrm{d} r. \end{split} \end{equation} Our notation in the above is the following: the variable $(x,\xi)$ is fixed in both \eqref{eq:stronger} and \eqref{eq:stronger1}; in \eqref{eq:stronger} we consider the test function $(z,\sigma)\mapsto\varrho_\delta((x,\xi)-\theta_{s,t}(z,\sigma))$, whereas in \eqref{eq:stronger1} we make use of $(y,\zeta)\mapsto\varrho_\delta((x,\xi)-\theta_{s,t}(y,\zeta))$. We denote \begin{align*} \mathcal{I}_1(t)&:=-\int_s^t\big\langle\partial_\sigma F_1g,\varrho_\delta((x,\xi)-\theta_{s,r}(\cdot,\cdot))\big\rangle_{z,\sigma}\,\mathrm{d} W,\\ \mathcal{I}_2(t)&:=\int_s^t\big\langle\partial_\zeta F_2g,\varrho_\delta((x,\xi)-\theta_{s,r}(\cdot,\cdot))\big\rangle_{y,\zeta}\,\mathrm{d} W, \end{align*} and observe that \begin{align*} \mu_1:C_b([s,T])&\rightarrow\mathbb{R},\\ \alpha&\mapsto -m_1\Big(\alpha(\cdot)\,\partial_{\sigma}\varrho_\delta((x,\xi)-\theta_{s,r}(\cdot,\cdot))\Big)-\frac{1}{2}\int_s^T\alpha(r)\big\langle G^2\partial_\sigma F_1,\partial_\sigma\varrho_\delta((x,\xi)-\theta_{s,r}(\cdot,\cdot))\big\rangle_{z,\sigma}\mathrm{d} r, \end{align*} and \begin{align*} \mu_2:C_b([s,T])&\rightarrow\mathbb{R},\\ \alpha&\mapsto m_2\Big(\alpha(\cdot)\,\partial_{\zeta}\varrho_\delta((x,\xi)-\theta_{s,r}(\cdot,\cdot))\Big)+\frac{1}{2}\int_s^T\alpha(r)\big\langle G^2\partial_\zeta
F_2,\partial_\zeta\varrho_\delta((x,\xi)-\theta_{s,r}(\cdot,\cdot))\big\rangle_{y,\zeta}\mathrm{d} r, \end{align*} are Radon measures, hence $$t\mapsto \mu_1([s,t]),\qquad t\mapsto \mu_2([s,t])$$ are c\`adl\`ag functions of bounded variation. Multiplying \eqref{eq:stronger} and \eqref{eq:stronger1}, we have \begin{align}\label{123} \begin{aligned} \mathbb{E}\langle F_1^+(t),&\,\varrho_\delta((x,\xi)-\theta_{s,t}(\cdot,\cdot))\rangle_{z,\sigma}\langle \overline{F}_2^+(t),\varrho_\delta((x,\xi)-\theta_{s,t}(\cdot,\cdot))\rangle_{y,\zeta}\\ &\qquad-\mathbb{E}\langle F_1^-(s),\varrho_\delta((x,\xi)- (\cdot,\cdot))\rangle_{z,\sigma}\langle \overline{F}_2^-(s),\varrho_\delta((x,\xi)- (\cdot,\cdot))\rangle_{y,\zeta}\\ &=\mathbb{E}\langle F_1^-(s),\varrho_\delta((x,\xi)- (\cdot,\cdot))\rangle_{z,\sigma}\mathcal{I}_2(t)+\mathbb{E}\langle F_1^-(s),\varrho_\delta((x,\xi)-(\cdot,\cdot))\rangle_{z,\sigma}\mu_2([s,t])\\ &\quad+\mathbb{E}\mathcal{I}_1(t)\langle \overline{F}_2^-(s),\varrho_\delta((x,\xi)- (\cdot,\cdot))\rangle_{y,\zeta}+\mathbb{E}\mathcal{I}_1(t)\mathcal{I}_2(t)+\mathbb{E}\mathcal{I}_1(t)\mu_2([s,t])\\ &\quad+\mathbb{E}\mu_1([s,t])\langle \overline{F}_2^-(s),\varrho_\delta((x,\xi)-(\cdot,\cdot))\rangle_{y,\zeta}+\mathbb{E}\mu_1([s,t])\mathcal{I}_2(t)+\mathbb{E}\mu_1([s,t])\mu_2([s,t])\\ &=J_1+\cdots+J_8. \end{aligned} \end{align} For $J_5$, we apply the It\^o formula to the product of a continuous martingale and a c\`adl\`ag function of bounded variation, that is, to $\mathcal{I}_1(t)$ and $\mu_2([s,t])$, to deduce the following integration by parts formula (see e.g.
\cite[Chapter II, Theorem 33]{protter}): \begin{align*} \mathcal{I}_1(t)\mu_2([s,t])&=\int_s^t\mu_2([s,r))\,\mathrm{d}\mathcal{I}_1(r)+\int_{(s,t]}\mathcal{I}_1(r)\,\mathrm{d}\mu_2(r), \end{align*} which implies $$ J_5=\mathbb{E}\int_{(s,t]}\mathcal{I}_1(r)\,\mathrm{d}\mu_2(r),$$ and similarly we obtain $$ J_7=\mathbb{E}\int_{(s,t]}\mathcal{I}_2(r)\,\mathrm{d}\mu_1(r).$$ Next, \begin{align*} J_1=\mathbb{E}\int_s^t\langle F_1^-(s),\varrho_\delta((x,\xi)- (\cdot,\cdot))\rangle_{z,\sigma}\big\langle\partial_\zeta F_2g,\varrho_\delta((x,\xi)-\theta_{s,r}(\cdot,\cdot))\big\rangle_{y,\zeta}\,\mathrm{d} W_r=0 \end{align*} and similarly $J_3=0$. $J_4$ can be rewritten as follows: \begin{align*} J_4&=\mathbb{E}\langle\!\langle\mathcal{I}_1(t),\mathcal{I}_2(t)\rangle\!\rangle\\ &=-\mathbb{E}\int_s^t\int g(z,\sigma)\cdot g(y,\zeta)\varrho_\delta((x,\xi)-\theta_{s,r}(z,\sigma))\varrho_\delta((x,\xi)-\theta_{s,r}(y,\zeta))\mathrm{d}\nu^1_{r,z}(\sigma)\mathrm{d}\nu^2_{r,y}(\zeta)\mathrm{d} z\mathrm{d} y\mathrm{d} r. \end{align*} Next, we apply the integration by parts formula for functions of bounded variation (see e.g. \cite[Chapter 0, Proposition 4.5]{revuz}) to $J_8$ and deduce \begin{align*} J_8&=\mathbb{E}\mu_1(\{s\})\mu_2(\{s\})+\mathbb{E}\int_{(s,t]}\mu_1([s,r])\,\mathrm{d}\mu_2(r)+\mathbb{E}\int_{(s,t]}\mu_2([s,r))\,\mathrm{d} \mu_1(r)\\ &=J_{80}+J_{81}+J_{82}. \end{align*} Since the kinetic measures $m_1$ and $m_2$ do not have atoms at $s$ a.s. by assumption, it follows that $J_{80}=0.$ Regarding $J_{81}$, we make use of the formula for $\mu_1([s,r])$ given by \eqref{eq:stronger}. Namely, \begin{align*} J_{81}&=\mathbb{E}\int_{(s,t]}\big\langle F_1^+(r),\varrho_\delta((x,\xi)-\theta_{s,r}(\cdot,\cdot))\big\rangle_{z,\sigma}\,\mathrm{d}\mu_2(r)\\ &\quad-\mathbb{E}\big\langle F^-_{1}(s),\varrho_\delta((x,\xi)-(\cdot,\cdot))\big\rangle_{z,\sigma}\mu_2([s,t])-\mathbb{E}\int_{(s,t]}\mathcal{I}_1(r)\,\mathrm{d}\mu_2(r)=J_{811}-J_2-J_{5}.
\end{align*} Going back to the product of \eqref{eq:stronger} and \eqref{eq:stronger1}, we see that $J_{2}$ and $J_5$ cancel, and for $J_{811}$ we write \begin{align*} J_{811}&=\mathbb{E}\int_{(s,t]}\int F_1^+(r)\varrho_\delta((x,\xi)-\theta_{s,r}(z,\sigma))\partial_\zeta \varrho_\delta((x,\xi)-\theta_{s,r}(y,\zeta))\mathrm{d} m_2(r,y,\zeta)\mathrm{d} z\mathrm{d} \sigma\\ &\quad-\frac{1}{2}\mathbb{E}\int_{s}^t\int F_1(r)G^2(y,\zeta)\varrho_\delta((x,\xi)-\theta_{s,r}(z,\sigma))\partial_\zeta \varrho_\delta((x,\xi)-\theta_{s,r}(y,\zeta))\mathrm{d} \nu^2_{r,y}(\zeta)\mathrm{d} y\mathrm{d} z\mathrm{d} \sigma\mathrm{d} r. \end{align*} Let us now continue with $J_{82}$. We apply the formula for $\mu_2([s,r))$, which can be obtained from the formula for $F^-$ in Corollary \ref{cor:strongerversion} using Lemma \ref{lemma:neworigin}, cf. \eqref{eq:stronger1}. It yields \begin{align*} J_{82}&=\mathbb{E}\int_{(s,t]}\langle \overline{F}_2^-(r),\varrho_\delta((x,\xi)-\theta_{s,r}(\cdot,\cdot))\rangle_{y,\zeta}\,\mathrm{d}\mu_1(r)\\ &\quad-\mathbb{E}\langle \overline{F}^-_{2}(s),\varrho_\delta((x,\xi)- (\cdot,\cdot))\rangle_{y,\zeta}\mu_1([s,t])-\mathbb{E}\int_{(s,t]}\mathcal{I}_2(r)\mathrm{d}\mu_1(r)=J_{821}-J_6-J_7, \end{align*} where $J_{6}$ and $J_7$ cancel, and for $J_{821}$ we have \begin{align*} J_{821}&=-\mathbb{E}\int_{(s,t]}\int \overline{F}_2^-(r)\varrho_\delta((x,\xi)-\theta_{s,r}(y,\zeta))\partial_\sigma \varrho_\delta((x,\xi)-\theta_{s,r}(z,\sigma))\mathrm{d} m_1(r,z,\sigma)\mathrm{d} y\mathrm{d} \zeta\\ &\quad+\frac{1}{2}\mathbb{E}\int_{s}^t\int \overline F_2(r)G^2(z,\sigma) \varrho_\delta((x,\xi)-\theta_{s,r}(y,\zeta))\partial_\sigma\varrho_\delta((x,\xi)-\theta_{s,r}(z,\sigma))\mathrm{d} \nu^1_{r,z}(\sigma)\mathrm{d} z\mathrm{d} y\mathrm{d} \zeta\mathrm{d} r. \end{align*} {\em Step 2:} Finally, we have all in hand to proceed with the proof.
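In summary, since $J_1=J_3=J_{80}=0$, $J_{81}=J_{811}-J_2-J_5$ and $J_{82}=J_{821}-J_6-J_7$, the right hand side of \eqref{123} reduces to
\begin{equation*}
J_1+\cdots+J_8=J_4+J_{811}+J_{821}.
\end{equation*}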
Integrating \eqref{123} with respect to $x,\xi$ we obtain \begin{equation*} \begin{split} \mathbb{E}\int&\, \kappa_R(x)F^+_1(t)\overline{F}^+_2(t)\varrho_\delta((x,\xi)-\theta_{s,t}(z,\sigma))\varrho_\delta((x,\xi)-\theta_{s,t}(y,\zeta))\mathrm{d} x\mathrm{d} \xi\mathrm{d} y\mathrm{d}\zeta\mathrm{d} z\mathrm{d} \sigma\\ &-\mathbb{E} \int \kappa_R(x)F^-_{1}(s)\overline{F}^-_{2}(s)\varrho_\delta((x,\xi)-(z,\sigma))\varrho_\delta((x,\xi)-(y,\zeta))\mathrm{d} x\mathrm{d} \xi\mathrm{d} y\mathrm{d}\zeta\mathrm{d} z\mathrm{d} \sigma=I_1+\cdots+I_5, \end{split} \end{equation*} where \begin{align*} I_1&=\mathbb{E}\int_{(s,t]}\int \kappa_R(x)F_1^+(r)\varrho_\delta((x,\xi)-\theta_{s,r}(z,\sigma))\partial_\zeta\varrho_\delta((x,\xi)-\theta_{s,r}(y,\zeta))\mathrm{d} m_2(r,y,\zeta),\\ I_2 &=-\mathbb{E}\int_{(s,t]}\int \kappa_R(x)\overline{F}^-_2(r) \varrho_\delta((x,\xi)-\theta_{s,r}(y,\zeta))\partial_\sigma\varrho_\delta((x,\xi)-\theta_{s,r}(z,\sigma))\mathrm{d} m_1(r,z,\sigma),\\ I_3&=-\mathbb{E}\int_s^t\int\kappa_R(x) g(z,\sigma)\cdot g(y,\zeta)\varrho_\delta((x,\xi)-\theta_{s,r}(z,\sigma))\varrho_\delta((x,\xi)-\theta_{s,r}(y,\zeta))\mathrm{d}\nu^1_{r,z}(\sigma)\mathrm{d}\nu^2_{r,y}(\zeta),\\ I_4&=-\frac{1}{2}\mathbb{E}\int_{s}^t\int\kappa_R(x) F_1(r)G^2(y,\zeta)\varrho_\delta((x,\xi)-\theta_{s,r}(z,\sigma))\partial_\zeta \varrho_\delta((x,\xi)-\theta_{s,r}(y,\zeta))\mathrm{d} \nu^2_{r,y}(\zeta),\\ I_5&=\frac{1}{2}\mathbb{E}\int_{s}^t\int \kappa_R(x)\overline F_2(r)G^2(z,\sigma) \varrho_\delta((x,\xi)-\theta_{s,r}(y,\zeta))\partial_\sigma\varrho_\delta((x,\xi)-\theta_{s,r}(z,\sigma))\mathrm{d} \nu^1_{r,z}(\sigma). \end{align*} Above as well as in the sequel, when we integrate with respect to all variables $r,x,\xi,z,\sigma,y,\zeta$, we only specify the kinetic measures $m_1,\,m_2$ and the Young measures $\nu^1,\,\nu^2$, the Lebesgue measure being omitted.
Since the technique of estimation of $I_1$ and $I_2$ is similar, let us just focus on $I_1$. It holds that \begin{align*} I_1&=-\mathbb{E}\int_{(s,t]}\int\kappa_R(x) \varrho_\delta((x,\xi)-\theta_{s,r}(z,\sigma))\varrho_\delta((x,\xi)-\theta_{s,r}(y,\zeta))\mathrm{d}\nu^{1,+}_{r,z}(\sigma)\mathrm{d} m_2(r,y,\zeta)\\ &\quad+\mathbb{E}\int_{(s,t]}\int \kappa_R(x)F_1^+ (r) \partial_\sigma\varrho_\delta((x,\xi)-\theta_{s,r}(z,\sigma))\varrho_\delta((x,\xi)-\theta_{s,r}(y,\zeta))\mathrm{d} m_2(r,y,\zeta)\\ &\quad+\mathbb{E}\int_{(s,t]}\int \kappa_R(x)F_1^+ (r) \varrho_\delta((x,\xi)-\theta_{s,r}(z,\sigma))\partial_\zeta\varrho_\delta((x,\xi)-\theta_{s,r}(y,\zeta))\mathrm{d} m_2(r,y,\zeta)\\ &=I_{11}+I_{12}+I_{13}, \end{align*} where $\nu^{1,+}$ is the Young measure corresponding to $F_1^+$ and the first equality holds true because the sum of the first two terms is equal to zero due to\footnote{By $\langle\cdot,\cdot\rangle_\sigma$ we denote the duality between the space of distributions on $\mathbb{R}_\sigma$ and $C^1_c(\mathbb{R}_\sigma)$.} \begin{align}\label{ibp} \begin{aligned} \langle F_1^+(r,z,\cdot),\partial_\sigma\varrho_\delta((x,\xi)-\theta_{s,r}(z,\cdot))\rangle_\sigma&=-\langle\partial_\sigma F_1^+(r,z,\cdot),\varrho_\delta((x,\xi)-\theta_{s,r}(z,\cdot))\rangle_\sigma\\ &=\langle\nu^{1,+}_{r,z}(\cdot),\varrho_\delta((x,\xi)-\theta_{s,r}(z,\cdot))\rangle_\sigma, \end{aligned} \end{align} which holds true for a.e. $(\omega,z)$ and every $(r,x,\xi)$. Consequently, $I_{11}\leq0$. We will show that $I_{12}+I_{13}$ is small if $t-s$ is small.
Towards this end, we observe that \begin{equation}\label{1} \partial_\sigma\varrho_\delta((x,\xi)-\theta_{s,r}(z,\sigma))=-\nabla_{x,\xi}\varrho_\delta((x,\xi)-\theta_{s,r}(z,\sigma))\cdot\partial_\sigma\theta_{s,r}(z,\sigma) \end{equation} and \begin{align}\label{2} \begin{aligned} &\int \kappa_R(x)\nabla_{x,\xi}\varrho_\delta((x,\xi)-\theta_{s,r}(z,\sigma))\varrho_\delta((x,\xi)-\theta_{s,r}(y,\zeta))\mathrm{d} x\mathrm{d}\xi\\ &\quad+\int \kappa_R(x)\varrho_\delta((x,\xi)-\theta_{s,r}(z,\sigma))\nabla_{x,\xi} \varrho_\delta((x,\xi)-\theta_{s,r}(y,\zeta))\mathrm{d} x\mathrm{d}\xi\\ &\qquad=-\int \nabla_{x,\xi} \kappa_R(x) \varrho_\delta((x,\xi)-\theta_{s,r}(z,\sigma)) \varrho_\delta((x,\xi)-\theta_{s,r}(y,\zeta))\mathrm{d} x\mathrm{d}\xi. \end{aligned} \end{align} From \eqref{1} and \eqref{2} it follows that (recall that $\theta^x$ was defined in \eqref{eq:fl2}) \begin{align}\label{eq:estim} \begin{aligned} I_{12}+I_{13}= \mathbb{E}\int_{(s,t]}\int& F_1^+ (r)\kappa_R(x)\big[\partial_\sigma\theta_{s,r}(z,\sigma)-\partial_\zeta\theta_{s,r}(y,\zeta)\big]\\ &\cdot\nabla_{x,\xi}\varrho_\delta((x,\xi)-\theta_{s,r}(z,\sigma))\varrho_\delta((x,\xi)-\theta_{s,r}(y,\zeta))\mathrm{d} m_2(r,y,\zeta)\\ &\hspace{-1.5cm}+\mathbb{E}\int_{(s,t]}\int F_1^+(r)\nabla_{x} \kappa_R(x) \cdot\partial_\sigma\theta^x_{s,r}(z,\sigma)\\ &\times\varrho_\delta((x,\xi)-\theta_{s,r}(z,\sigma)) \varrho_\delta((x,\xi)-\theta_{s,r}(y,\zeta))\mathrm{d} m_2(r,y,\zeta)\\ &\hspace{-1.9cm}=I_{121}+I_{131}. \end{aligned} \end{align} To proceed, we use the ideas from \cite{gess}. Namely, we observe that for every $r\in[s,t]$, $\partial_\xi\theta_{s,r}(\cdot)$ is Lipschitz continuous and its Lipschitz constant can be estimated by $C (t-s)^{1/p}$.
Indeed, according to Theorem \ref{thm:existence} \begin{align*} \sup_{(x,\xi)\in\mathbb{R}^N\times\mathbb{R}}|\mathrm{D}^2\theta_{s,r}(x,\xi)|=\sup_{(x,\xi)\in\mathbb{R}^N\times\mathbb{R}}|\mathrm{D}^2\theta_{s,r}(x,\xi)-\mathrm{D}^2 \theta_{s,s}(x,\xi)|\leq C (r-s)^{1/p}\leq C (t-s)^{1/p}. \end{align*} Besides, we only have to consider the case of $$|(x,\xi)-\theta_{s,r}(z,\sigma)|<\delta,\qquad |(x,\xi)-\theta_{s,r}(y,\zeta)|<\delta,$$ hence $$|\theta_{s,r}(z,\sigma)-\theta_{s,r}(y,\zeta)|<2\delta$$ and consequently \begin{align}\label{eq:4} |(z,\sigma)-(y,\zeta)|&=\big|\Pi_{s,r}\big(\theta_{s,r}(z,\sigma)\big)-\Pi_{s,r}\big(\theta_{s,r}(y,\zeta)\big)\big|\leq C|\theta_{s,r}(z,\sigma)-\theta_{s,r}(y,\zeta)|<C\delta, \end{align} which implies \begin{equation}\label{eq:estim1} \big|\partial_\sigma\theta_{s,r}(z,\sigma)-\partial_\zeta\theta_{s,r}(y,\zeta)\big|\leq C (t-s)^{1/p}\delta. \end{equation} Let us now consider the remaining terms in the integral with respect to $(x,\xi,z,\sigma)$ and use the fact that $\theta_{s,r}$ is volume preserving and $\delta|\nabla_{x,\xi}\varrho_\delta|(\cdot)\leq C\varrho_{2\delta}(\cdot)$. It yields \begin{align}\label{eq:estim2} \begin{aligned} \int & \kappa_R(x)\big|\nabla_{x,\xi}\varrho_\delta((x,\xi)-\theta_{s,r}(z,\sigma))\big|\varrho_\delta((x,\xi)-\theta_{s,r}(y,\zeta))\mathrm{d} x\mathrm{d}\xi\mathrm{d} z\mathrm{d}\sigma\\ &\leq\int \big|\nabla_{x,\xi}\varrho_\delta((x,\xi)-(z,\sigma))\big|\varrho_\delta((x,\xi)-\theta_{s,r}(y,\zeta))\mathrm{d} x\mathrm{d}\xi\mathrm{d} z\mathrm{d}\sigma\\ &\leq\frac{C}{\delta}\int \varrho_{2\delta}((x,\xi)-(z,\sigma))\varrho_\delta((x,\xi)-\theta_{s,r}(y,\zeta))\mathrm{d} x\mathrm{d}\xi\mathrm{d} z\mathrm{d}\sigma=\frac{C}{\delta}. \end{aligned} \end{align} Plugging \eqref{eq:estim1} and \eqref{eq:estim2} into $I_{121}$ we conclude that \begin{align*} I_{121}\leq C (t-s)^{1/p}\mathbb{E}\int_{(s,t]}\int \mathrm{d} m_2(r,y,\zeta).
\end{align*} Besides, due to Theorem \ref{thm:existence}, \begin{equation}\label{e3} \sup_{(x,\xi)\in\mathbb{R}^N\times\mathbb{R}}|\partial_\sigma\theta_{s,r}^x(x,\xi)|=\sup_{(x,\xi)\in\mathbb{R}^N\times\mathbb{R}}|\partial_\sigma\theta_{s,r}^x(x,\xi)-\partial_\sigma\theta_{s,s}^x(x,\xi)|\leq C(t-s)^{1/p}. \end{equation} Using this, the fact that $|\nabla_x\kappa_R(x)|\leq R^{-1}$ and that $\theta_{s,r}$ is volume preserving, we deduce \begin{align*} I_{131}&\leq CR^{-1}(t-s)^{1/p}\mathbb{E}\int_{(s,t]}\int\varrho_{\delta}((x,\xi)-\theta_{s,r}(z,\sigma))\varrho_{\delta}((x,\xi)-\theta_{s,r}(y,\zeta))\mathrm{d} m_2(r,y,\zeta)\\ &\leq CR^{-1}(t-s)^{1/p}\mathbb{E}\int_{(s,t]}\int\mathrm{d} m_2(r,y,\zeta)\leq C(t-s)^{1/p}\mathbb{E}\int_{(s,t]}\int\mathrm{d} m_2(r,y,\zeta). \end{align*} Let us proceed with $I_4$ and $I_5$. Using the ideas from \eqref{1} and \eqref{2} we conclude \begin{align*} I_4&=\frac{1}{2}\mathbb{E}\int_s^t\int F_1(r)G^2(y,\zeta)\kappa_R(x)\big[\partial_\zeta\theta_{s,r}(y,\zeta)-\partial_\sigma\theta_{s,r}(z,\sigma)\big]\\ &\hspace{2cm}\cdot\nabla_{x,\xi}\varrho_\delta((x,\xi)-\theta_{s,r}(y,\zeta))\varrho_\delta((x,\xi)-\theta_{s,r}(z,\sigma))\mathrm{d}\nu^2_{r,y}(\zeta)\\ &\quad-\frac{1}{2}\mathbb{E}\int_s^t\int F_1(r)G^2(y,\zeta)\nabla_x\kappa_R(x)\cdot\partial_\sigma\theta_{s,r}(z,\sigma)\\ &\hspace{2cm}\times\varrho_\delta((x,\xi)-\theta_{s,r}(z,\sigma))\varrho_\delta((x,\xi)-\theta_{s,r}(y,\zeta))\mathrm{d}\nu^2_{r,y}(\zeta)\\ &\quad+\frac{1}{2}\mathbb{E}\int_s^t\int G^2(y,\zeta)\kappa_R(x)\varrho_\delta((x,\xi)-\theta_{s,r}(z,\sigma))\varrho_\delta((x,\xi)-\theta_{s,r}(y,\zeta))\mathrm{d}\nu^1_{r,z}(\sigma)\mathrm{d}\nu^2_{r,y}(\zeta)\\ &=I_{41}+I_{42}+I_{43} \end{align*} and similarly \begin{align*} I_5&=-\frac{1}{2}\mathbb{E}\int_s^t\int \overline F_2(r)G^2(z,\sigma)\kappa_R(x)\big[\partial_\sigma\theta_{s,r}(z,\sigma)-\partial_\zeta\theta_{s,r}(y,\zeta)\big]\\
&\hspace{2cm}\cdot\nabla_{x,\xi}\varrho_\delta((x,\xi)-\theta_{s,r}(z,\sigma))\varrho_\delta((x,\xi)-\theta_{s,r}(y,\zeta))\mathrm{d}\nu^1_{r,z}(\sigma)\\ &\quad+\frac{1}{2}\mathbb{E}\int_s^t\int \overline F_2(r)G^2(z,\sigma)\nabla_x\kappa_R(x)\cdot\partial_\zeta\theta_{s,r}(y,\zeta)\\ &\hspace{2cm}\times\varrho_\delta((x,\xi)-\theta_{s,r}(z,\sigma))\varrho_\delta((x,\xi)-\theta_{s,r}(y,\zeta))\mathrm{d}\nu^1_{r,z}(\sigma)\\ &\quad+\frac{1}{2}\mathbb{E}\int_s^t\int G^2(z,\sigma)\kappa_R(x)\varrho_\delta((x,\xi)-\theta_{s,r}(z,\sigma))\varrho_\delta((x,\xi)-\theta_{s,r}(y,\zeta))\mathrm{d}\nu^1_{r,z}(\sigma)\mathrm{d}\nu^2_{r,y}(\zeta)\\ &=I_{51}+I_{52}+I_{53}. \end{align*} Using an approach similar to that for $I_{121}$, together with the fact that $G^2(y,\zeta)\leq C|\zeta|^2$ which follows from the assumptions on $g$, we deduce that \begin{align*} I_{41}+I_{51}&\leq C(t-s)^{1/p}\bigg(\mathbb{E}\int_s^t\int G^2(y,\zeta)\mathrm{d}\nu^2_{r,y}(\zeta)\mathrm{d} y\mathrm{d} r+\mathbb{E}\int_s^t\int G^2(z,\sigma)\mathrm{d}\nu_{r,z}^1(\sigma)\mathrm{d} z\mathrm{d} r\bigg)\\ &\leq C(t-s)^{1/p}\bigg(\mathbb{E}\int_s^t\int |\zeta|^2\mathrm{d}\nu^2_{r,y}(\zeta)\mathrm{d} y\mathrm{d} r+\mathbb{E}\int_s^t\int |\sigma|^2\mathrm{d}\nu_{r,z}^1(\sigma)\mathrm{d} z\mathrm{d} r\bigg) \end{align*} and similarly to $I_{131}$ we get \begin{align*} I_{42}+I_{52}&\leq CR^{-1}(t-s)^{1/p}\mathbb{E}\int_{s}^t\int|\zeta|^2\mathrm{d} \nu^2_{r,y}(\zeta)\mathrm{d} y\mathrm{d} r+ CR^{-1}(t-s)^{1/p}\mathbb{E}\int_{s}^t\int|\sigma|^2\mathrm{d} \nu^1_{r,z}(\sigma)\mathrm{d} z\mathrm{d} r\\ &\leq C(t-s)^{1/p}\mathbb{E}\int_{s}^t\int|\zeta|^2\mathrm{d} \nu^2_{r,y}(\zeta)\mathrm{d} y\mathrm{d} r+ C(t-s)^{1/p}\mathbb{E}\int_{s}^t\int|\sigma|^2\mathrm{d} \nu^1_{r,z}(\sigma)\mathrm{d} z\mathrm{d} r.
\end{align*} Besides, \begin{align*} I_3&+I_{43}+I_{53}\\ &=\frac{1}{2}\mathbb{E}\int_s^t\int \kappa_R(x)\big|g(z,\sigma)-g(y,\zeta)\big|^2\varrho_\delta((x,\xi)-\theta_{s,r}(z,\sigma))\varrho_\delta((x,\xi)-\theta_{s,r}(y,\zeta))\mathrm{d}\nu^1_{r,z}(\sigma)\mathrm{d}\nu^2_{r,y}(\zeta)\\ &\leq C\delta^2\mathbb{E}\int_s^t\int_{Q_R} \varrho_\delta((x,\xi)-\theta_{s,r}(z,\sigma))\varrho_\delta((x,\xi)-\theta_{s,r}(y,\zeta))\mathrm{d}\nu^1_{r,z}(\sigma)\mathrm{d}\nu^2_{r,y}(\zeta), \end{align*} where $Q_R=B_{2R,x}\times\mathbb{R}_\xi\times B_{2R+CT^{1/p},z}\times\mathbb{R}_\sigma\times B_{2R+CT^{1/p},y}\times\mathbb{R}_\zeta$ and we used \eqref{eq:4} and the Lipschitz continuity of $g$. Next, we estimate using \eqref{ibp}, the fact that $\delta|\nabla_{x,\xi}\varrho_\delta|(\cdot)\leq C\varrho_{2\delta}(\cdot)$, \eqref{e3} and the fact that $\theta$ is volume preserving, as follows \begin{align*} &\int_{B_{2R+CT^{1/p},z}\times\mathbb{R}_\sigma}\varrho_\delta((x,\xi)-\theta_{s,r}(z,\sigma))\mathrm{d}\nu^1_{r,z}(\sigma)\mathrm{d} z\leq \int_{\mathbb{R}^N\times\mathbb{R}}F^{-}_1(r,z,\sigma)\partial_\sigma\varrho_\delta((x,\xi)-\theta_{s,r}(z,\sigma))\mathrm{d} z\mathrm{d}\sigma\\ &\qquad=-\int_{\mathbb{R}^N\times\mathbb{R}}F^{-}_1(r,z,\sigma)\nabla_{x,\xi}\varrho_\delta((x,\xi)-\theta_{s,r}(z,\sigma))\cdot\partial_\sigma\theta_{s,r}(z,\sigma)\mathrm{d} z\mathrm{d}\sigma\\ &\qquad \leq C\delta^{-1}(t-s)^{1/p}\int_{\mathbb{R}^N\times\mathbb{R}}\varrho_{2\delta}((x,\xi)-\theta_{s,r}(z,\sigma))\mathrm{d} z\mathrm{d}\sigma=C\delta^{-1}(t-s)^{1/p}, \end{align*} which holds true for a.e. $(\omega,r)$ and every $(x,\xi)$.
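The mollifier bound $\delta|\nabla_{x,\xi}\varrho_\delta|(\cdot)\leq C\varrho_{2\delta}(\cdot)$, used here and in \eqref{eq:estim2}, follows from scaling; a minimal sketch, assuming $\varrho_\delta(\cdot)=\delta^{-(N+1)}\varrho(\cdot/\delta)$ and a profile $\varrho$ satisfying the pointwise bound $|\nabla\varrho(y)|\leq C\varrho(y/2)$ (which holds for suitable choices of the mollifier profile):

```latex
\delta\,|\nabla\varrho_\delta(y)|
  =\delta\cdot\delta^{-(N+2)}\Big|\nabla\varrho\Big(\frac{y}{\delta}\Big)\Big|
  \leq C\,\delta^{-(N+1)}\,\varrho\Big(\frac{y}{2\delta}\Big)
  =C\,2^{N+1}\,\varrho_{2\delta}(y),
```

since $\varrho_{2\delta}(y)=(2\delta)^{-(N+1)}\varrho(y/(2\delta))$.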
Hence \begin{align*} I_3+I_{43}+I_{53}&\leq C\delta(t-s)^{1/p}\mathbb{E}\int_s^t\int_{B_{2R+CT^{1/p},y}\times\mathbb{R}_\zeta}\int_{B_{2R,x}\times\mathbb{R}_\xi}\varrho_\delta((x,\xi)-\theta_{s,r}(y,\zeta))\mathrm{d} x\mathrm{d}\xi\mathrm{d}\nu^2_{r,y}(\zeta)\mathrm{d} y\\ &\leq C\delta(t-s)^{1/p}\mathbb{E}\int_s^t\int_{B_{2R+CT^{1/p},y}\times\mathbb{R}_\zeta}\mathrm{d}\nu^2_{r,y}(\zeta)\mathrm{d} y\leq CR\delta(t-s)^{1+1/p}, \end{align*} which completes the proof of \eqref{eq:doubling}. \end{proof} \end{prop} Finally, we have everything at hand to prove the Reduction Theorem and the $L^1$-contraction property. \begin{thm}[Reduction Theorem]\label{thm:reduction} Let $u_0\in L^1\cap L^2(\Omega\times\mathbb{R}^N)$ and let $F$ be a generalized kinetic solution to \eqref{eq} with initial datum $F_0=\mathbf{1}_{u_0>\xi}$. Then it is a kinetic solution to \eqref{eq}, that is, there exists $$u\in L^2(\Omega,L^2(0,T;L^2(\mathbb{R}^N)))$$ satisfying \eqref{fd} such that $F(t,x,\xi)=\mathbf{1}_{u(t,x)>\xi}$ a.e. \begin{proof} Let $t\in[0,T)$, $h>0$ and $\{0=t_0<t_1<\cdots<t_n=t\}$ be a partition of $[0,t]$ such that $|t_i-t_{i+1}|\le h$, $i=0,\dots,n-1$. According to Corollary \ref{cor:strongerversion}, we can assume (without loss of generality) that the kinetic measure $m$ has a.s. no atom at $t_i$, $i=1,\dots,n-1,$ that is, we only consider points from the dense set $\mathcal{A}\subset[0,T]$ which was constructed in Proposition \ref{limits}. Besides, due to Lemma \ref{lem:equil}, $m$ has no atom at $0$ a.s.
Furthermore, if $\nu^+$ denotes the Young measure corresponding to $F^+$ regarded as a kinetic function on $\Omega\times[0,T]\times\mathbb{R}^N$, then it follows from \eqref{integrov} that $$\mathbb{E}\int_0^T\int_{\mathbb{R}^N}\int_\mathbb{R}|\xi|^2\mathrm{d}\nu^+_{t,x}(\xi)\mathrm{d} x\mathrm{d} t<\infty.$$ Consequently, we may assume (without loss of generality) that for all $i\in\{1,\dots,n-1\}$ \begin{equation}\label{ref} \mathbb{E}\int_{\mathbb{R}^N}\int_\mathbb{R}|\xi|^2\mathrm{d}\nu^+_{t_i,x}(\xi)\mathrm{d} x<\infty \end{equation} and since $u_0\in L^2(\Omega\times\mathbb{R}^N)$ and $\nu_{t_0,x}^+=\delta_{u_0(x)}$ due to Lemma \ref{lem:equil}, the same holds true also for $i=0$. Application of Proposition \ref{prop:doubling} yields \begin{align}\label{eqq:123} \begin{aligned} & \mathbb{E}\int\,\kappa_R(x)\langle F^+(t_i),\varrho_\delta((x,\xi)-\theta_{t_{i-1},t_i}(\cdot,\cdot))\rangle_{z,\sigma}\langle \overline{F}^+(t_{i}),\varrho_\delta((x,\xi)-\theta_{t_{i-1},t_i}(\cdot,\cdot))\rangle_{y,\zeta}\mathrm{d} x\mathrm{d}\xi\\ &\quad-\mathbb{E}\int\kappa_R(x)\langle F^+(t_{i-1}),\varrho_\delta((x,\xi)-\theta_{t_{i-1},t_{i-1}}(\cdot,\cdot))\rangle_{z,\sigma}\langle \overline{F}^+(t_{i-1}),\varrho_\delta((x,\xi)-\theta_{t_{i-1},t_{i-1}}(\cdot,\cdot))\rangle_{y,\zeta}\mathrm{d} x\mathrm{d}\xi\\ &\leq Ch^{1/p}\mathbb{E}\int_{(t_{i-1},t_i]}\int\mathrm{d} m(r,y,\zeta)+Ch^{1/p}\mathbb{E}\int_{t_{i-1}}^{t_i}\int |\zeta|^2\mathrm{d}\nu_{r,y}(\zeta)\mathrm{d} y\mathrm{d} r+CR\delta h^{1+1/p}. \end{aligned} \end{align} Now, we send $\delta\rightarrow0$ first and then $R\to\infty$.
Regarding the first term on the left hand side of \eqref{eqq:123}, since the flow is volume preserving we have \begin{align*} \langle F^+(t_i),\varrho_\delta((x,\xi)-\theta_{t_{i-1},t_i}(\cdot,\cdot))\rangle_{z,\sigma}=\langle F^+(t_i,\Pi_{t_{i-1},t_i}(\cdot,\cdot)),\varrho_\delta((x,\xi)-(\cdot,\cdot))\rangle_{z,\sigma}, \end{align*} which converges to $F^+(t_i,\Pi_{t_{i-1},t_i}(x,\xi))$ for a.e. $(x,\xi)$ as $\delta\to0$, and similarly for the term including $\overline F^+(t_i)$. Thus, if $i=n$ we apply Fatou's lemma to estimate the $\liminf_{\delta\to 0}$ of the first term on the left hand side from below by \begin{align*} & \mathbb{E}\int\,\kappa_R(x)F^+(t_n,\Pi_{t_{n-1},t_n}(x,\xi))\overline F^+(t_n,\Pi_{t_{n-1},t_n}(x,\xi))\mathrm{d} x\mathrm{d}\xi\\ &\quad=\mathbb{E}\int\,\kappa_R(\theta^x_{t_{n-1},t_n}(x,\xi))F^+(t_n,x,\xi)\overline F^+(t_n,x,\xi)\mathrm{d} x\mathrm{d}\xi. \end{align*} If $i\in\{0,\dots,n-1\}$ we need to pass to the limit. To this end, we note that according to \eqref{ref} and Remark \ref{chi} the functions $\chi_{F^+(t_i)}$ are integrable in $\omega,\xi$ and locally integrable in $x$. Therefore, we estimate \begin{align*} &\langle F^+(t_i),\varrho_\delta((x,\xi)-\theta_{t_{i-1},t_i}(\cdot,\cdot))\rangle_{z,\sigma}\langle \overline{F}^+(t_{i}),\varrho_\delta((x,\xi)-\theta_{t_{i-1},t_i}(\cdot,\cdot))\rangle_{y,\zeta}\\ &\quad\quad\quad\leq\begin{cases} \langle F^+(t_i),\varrho_\delta((x,\xi)-\theta_{t_{i-1},t_i}(\cdot,\cdot))\rangle_{z,\sigma},&\text{ if }\xi\geq 0,\\ \langle \overline{F}^+(t_{i}),\varrho_\delta((x,\xi)-\theta_{t_{i-1},t_i}(\cdot,\cdot))\rangle_{y,\zeta},&\text{ if }\xi<0, \end{cases} \end{align*} and observe that the left hand side converges a.e. in $(\omega,x,\xi)$. Indeed, both functions $F^+(t_i)$ and $\overline{F}^+(t_i)$ are locally integrable in $(\omega,x,\xi)$ so their mollifications converge a.e.
Next, we apply Remark \ref{chi} to deduce that $F^+(t_i)$ is integrable on $\Omega\times B_{2R+CT^{1/p},x}\times[0,\infty)$ whereas $\overline F^+(t_i)$ is integrable on $\Omega\times B_{2R+CT^{1/p},x}\times(-\infty,0).$ As a consequence, \begin{align*} \langle F^+(t_i),\varrho_\delta((x,\xi)-\theta_{t_{i-1},t_i}(\cdot,\cdot))\rangle_{z,\sigma}\longrightarrow F^+(t_i,\Pi_{t_{i-1},t_i}(x,\xi))\quad\text{in}\quad L^1(\Omega\times B_{2R}\times [0,\infty)) \end{align*} and similarly \begin{align*} \langle \overline F^+(t_i),\varrho_\delta((x,\xi)-\theta_{t_{i-1},t_i}(\cdot,\cdot))\rangle_{y,\zeta}\longrightarrow\overline F^+(t_i,\Pi_{t_{i-1},t_i}(x,\xi))\quad\text{in}\quad L^1(\Omega\times B_{2R}\times (-\infty,0)) \end{align*} and for subsequences we obtain convergence a.e. in $(\omega,x,\xi)$. Therefore, we may apply the generalization of the Vitali convergence theorem \cite[Corollaire 4.14]{kavian} to deduce that \begin{align*} &\langle F^+(t_i),\varrho_\delta((x,\xi)-\theta_{t_{i-1},t_i}(\cdot,\cdot))\rangle_{z,\sigma}\langle \overline{F}^+(t_{i}),\varrho_\delta((x,\xi)-\theta_{t_{i-1},t_i}(\cdot,\cdot))\rangle_{y,\zeta}\\ &\qquad\qquad\longrightarrow F^+(t_i,\Pi_{t_{i-1},t_i}(x,\xi))\overline F^+(t_i,\Pi_{t_{i-1},t_i}(x,\xi))\quad\text{in}\quad L^1(\Omega\times B_{2R}\times \mathbb{R}) \end{align*} and accordingly we may pass to the limit as $\delta\to 0$ in the first and second term on the left hand side of \eqref{eqq:123} to obtain terms of the form \begin{align*} & \mathbb{E}\int\,\kappa_R(x)F^+(t_i,\Pi_{t_{i-1},t_i}(x,\xi))\overline F^+(t_i,\Pi_{t_{i-1},t_i}(x,\xi))\mathrm{d} x\mathrm{d}\xi\\ &\quad=\mathbb{E}\int\,\kappa_R(\theta^x_{t_{i-1},t_i}(x,\xi))F^+(t_i,x,\xi)\overline F^+(t_i,x,\xi)\mathrm{d} x\mathrm{d}\xi. \end{align*} In order to pass to the limit as $R\to\infty$ we intend to apply the dominated convergence theorem.
To this end, it is necessary to justify that for all $i=0,\dots,n,$ $$F^+(t_i)\overline F^+(t_i)\in L^1(\Omega\times\mathbb{R}^N\times\mathbb{R}),$$ which can be proved inductively: if $i=0$ then $F^+(0)\overline{F}^+(0)=\mathbf{1}_{u_0>\xi}(1-\mathbf{1}_{u_0>\xi})=0$ hence the claim holds true. If it is true for some $i-1$ where $i\in\{1,\dots,n\}$ then the Fatou lemma together with the discussion above leads to \begin{align*} \mathbb{E}\int F^+(t_i,x,\xi)\overline{F}^+(t_i,x,\xi) \mathrm{d} x\mathrm{d}\xi&\leq \lim_{R\to\infty}\mathbb{E}\int \kappa_R (\theta^x_{t_{i-1},t_i}(x,\xi))F^+(t_{i-1},x,\xi)\overline{F}^+(t_{i-1},x,\xi) \mathrm{d} x\mathrm{d}\xi\\ &\quad+ Ch^{1/p}\mathbb{E}\int_{(t_{i-1},t_i]}\int\mathrm{d} m(r,y,\zeta)+Ch^{1/p}\mathbb{E}\int_{t_{i-1}}^{t_i}\int |\zeta|^2\mathrm{d}\nu_{r,y}(\zeta)\mathrm{d} y\mathrm{d} r\\ &=\mathbb{E}\int F^+(t_{i-1},x,\xi)\overline{F}^+(t_{i-1},x,\xi) \mathrm{d} x\mathrm{d}\xi\\ &\quad+ Ch^{1/p}\mathbb{E}\int_{(t_{i-1},t_i]}\int\mathrm{d} m(r,y,\zeta)+Ch^{1/p}\mathbb{E}\int_{t_{i-1}}^{t_i}\int |\zeta|^2\mathrm{d}\nu_{r,y}(\zeta)\mathrm{d} y\mathrm{d} r. \end{align*} Thus, Definition \ref{def:kinmeasure}(ii) and \eqref{integrov} give the claim for $i$. Altogether, sending $\delta\to0$ and $R\to\infty$ gives for all $i\in\{1,\dots,n\}$ \begin{align*} &\mathbb{E}\int F^+(t_i,x,\xi)\overline{F}^+(t_i,x,\xi) \mathrm{d} x\mathrm{d}\xi-\mathbb{E}\int F^+(t_{i-1},x,\xi)\overline{F}^+(t_{i-1},x,\xi) \mathrm{d} x\mathrm{d}\xi\\ &\quad\leq Ch^{1/p}\mathbb{E}\int_{(t_{i-1},t_i]}\int\mathrm{d} m(r,y,\zeta)+Ch^{1/p}\mathbb{E}\int_{t_{i-1}}^{t_i}\int |\zeta|^2\mathrm{d}\nu_{r,y}(\zeta)\mathrm{d} y\mathrm{d} r.
\end{align*} Consequently, \begin{align*} &\mathbb{E}\int F^+(t,x,\xi)\overline{F}^+(t,x,\xi) \mathrm{d} x\mathrm{d}\xi-\mathbb{E}\int F^+_0(x,\xi)\overline{F}^+_0(x,\xi) \mathrm{d} x\mathrm{d}\xi\\ &=\sum_{i=1}^n\bigg(\mathbb{E}\int F^+(t_i,x,\xi)\overline{F}^+(t_i,x,\xi) \mathrm{d} x\mathrm{d}\xi-\mathbb{E}\int F^+(t_{i-1},x,\xi)\overline{F}^+(t_{i-1},x,\xi) \mathrm{d} x\mathrm{d}\xi\bigg)\\ &\leq Ch^{1/p}\mathbb{E}\int_{(0,t]}\int\mathrm{d} m(r,y,\zeta)+Ch^{1/p}\mathbb{E}\int_{0}^{t}\int |\zeta|^2\mathrm{d}\nu_{r,y}(\zeta)\mathrm{d} y\mathrm{d} r \end{align*} and sending $h\rightarrow0$ yields (using Definition \ref{def:kinmeasure}(ii) and \eqref{integrov}) \begin{align*} \mathbb{E}\int& F^+(t,x,\xi)\overline{F}^+(t,x,\xi)\, \mathrm{d} x\,\mathrm{d}\xi\leq \mathbb{E}\int F_{0}(x,\xi)\overline{F}_{0}(x,\xi)\,\mathrm{d} x\,\mathrm{d} \xi=\mathbb{E}\int \mathbf{1}_{u_0(x)>\xi}(1-\mathbf{1}_{u_0(x)>\xi})\,\mathrm{d} x\,\mathrm{d}\xi=0. \end{align*} Hence $F^+(t)(1-F^+(t))=0$ for a.e. $(\omega,x,\xi)$. Now, the fact that $F^+(t)$ is a kinetic function for all $t\in[0,T)$ gives the conclusion: indeed, by Fubini's theorem, for any $t \in [0, T )$, there is a set $E_t$ of full measure in $\Omega\times\mathbb{R}^N$ such that, for all $(\omega,x)\in E_t$, $F^+(t,x,\xi)\in\{0,1\} $ for a.e. $\xi\in\mathbb{R}$. Recall that $-\partial_\xi F^+(\omega,t,x, \cdot)$ is a probability measure on $\mathbb{R}$ hence, necessarily, there exists $u:\Omega\times [0,T)\times\mathbb{R}^N\rightarrow\mathbb{R}$ measurable such that $F^+(\omega,t,x,\xi)=\mathbf{1}_{u(\omega,t,x)>\xi}$ for a.e. $(\omega,x,\xi)$ and all $t\in[0,T)$.
Moreover, according to \eqref{integrov}, it holds $$\mathbb{E}\esssup_{0\leq t\leq T}\int_{\mathbb{R}^N}|u(t,x)|\,\mathrm{d} x=\mathbb{E}\esssup_{0\leq t\leq T}\int_{\mathbb{R}^N}\int_\mathbb{R}|\xi|\,\mathrm{d}\nu^+_{t,x}(\xi)\,\mathrm{d} x\leq C$$ and $$\mathbb{E}\int_0^T\int_{\mathbb{R}^N}|u(t,x)|^2\,\mathrm{d} x\mathrm{d} t=\mathbb{E}\int_0^T\int_{\mathbb{R}^N}\int_\mathbb{R}|\xi|^2\,\mathrm{d}\nu^+_{t,x}(\xi)\,\mathrm{d} x\mathrm{d} t\leq C,$$ hence $u$ is a kinetic solution. \end{proof} \end{thm} \begin{cor}[$L^1$-contraction property]\label{cor:contraction} Let $u_1$ and $u_2$ be kinetic solutions to \eqref{eq} with initial data $u_{1,0}$ and $u_{2,0}$, respectively. Then there exist representatives $\tilde{u}_1$ and $\tilde{u}_2$ of the equivalence classes of $u_1$ and $u_2$, respectively, such that for all $t\in[0,T)$ \begin{equation*} \mathbb{E}\big\|\big(\tilde{u}_1(t)-\tilde{u}_2(t)\big)^+\big\|_{L^1_x}\leq \mathbb{E}\|(u_{1,0}-u_{2,0})^+\|_{L^1_x}. \end{equation*} \begin{proof} First, we observe that the following identity holds true $$(u_1-u_2)^+=\int_\mathbb{R}\mathbf{1}_{u_1>\xi}\overline{\mathbf{1}_{u_2>\xi}}\,\mathrm{d} \xi,$$ since the integrand equals $\mathbf{1}_{u_2\leq\xi<u_1}$ for a.e. $\xi$, whose $\xi$-measure is precisely $(u_1-u_2)^+$. Let $\tilde{u}_1$ and $\tilde{u}_2$ denote the representatives constructed in Theorem \ref{thm:reduction}. Then proceeding similarly to Theorem \ref{thm:reduction}, we apply Proposition \ref{prop:doubling} and obtain \begin{align*} \mathbb{E}\big\|\big(\tilde u_1(t) -\tilde u_2(t)\big)^+\big\|_{L^1_x}&=\mathbb{E}\int F^+_1(t,x,\xi)\overline{F}^+_2(t,x,\xi)\,\mathrm{d} \xi\mathrm{d} x\\ &\leq\mathbb{E}\int \mathbf{1}_{u_{1,0}(x)>\xi}\overline{\mathbf{1}_{u_{2,0}(x)>\xi}}\,\mathrm{d} \xi\mathrm{d} x=\mathbb{E}\|(u_{1,0}-u_{2,0})^+\|_{L^1_x}, \end{align*} which completes the proof. \end{proof} \end{cor} \section{Existence} \label{sec:existence} In the existence part of the proof of Theorem \ref{thm:main} we make use of the so-called Bhatnagar-Gross-Krook (BGK) approximation, which is a standard tool in the deterministic setting.
Its stochastic counterpart was established in \cite{bgk} and can be understood as an alternative proof of existence to \cite{debus}. Let us now briefly describe this method. Towards the proof of existence of a kinetic solution to \eqref{eq}, which is (formally speaking) a distributional solution to the kinetic formulation \eqref{eq:kinform}, we consider the following BGK model \begin{equation}\label{eq:bgk} \begin{split} \mathrm{d} F^\varepsilon+ \nabla F^\varepsilon\cdot a\,\mathrm{d} z-\partial_\xi F^\varepsilon\,b\,\mathrm{d} z&=-\partial_\xi F^\varepsilon\, g\,\mathrm{d} W+\frac{1}{2}\partial_\xi(G^2\partial_\xi F^\varepsilon)\,\mathrm{d} t+\frac{\mathbf{1}_{u^\varepsilon>\xi}-F^\varepsilon}{\varepsilon}\mathrm{d} t,\\ F^\varepsilon(0)&=F^\varepsilon_0, \end{split} \end{equation} where the local density of particles is defined by $$u^\varepsilon(t,x)=\int_\mathbb{R} \big(F^\varepsilon(t,x,\xi)-\mathbf{1}_{0>\xi}\big)\mathrm{d}\xi.$$ In other words, the unknown kinetic measure is replaced by a right hand side written in terms of $F^\varepsilon$. Heuristically, solving \eqref{eq:bgk} is significantly easier and can be reduced to solving the homogeneous problem \begin{equation}\label{eq:aux1} \begin{split} \mathrm{d} X+\nabla X\cdot a\,\mathrm{d} z-\partial_\xi X\,b\,\mathrm{d} z&=-\partial_\xi X\, g\,\mathrm{d} W+\frac{1}{2}\partial_\xi(G^2\partial_\xi X)\,\mathrm{d} t,\\ X(s)&=X_0, \end{split} \end{equation} establishing the properties of its solution operator and employing Duhamel's principle. The final idea is that, as the microscopic scale $\varepsilon$ vanishes, $F^\varepsilon$ converges to $F$, which is a generalized kinetic solution to \eqref{eq}. The Reduction Theorem, Theorem \ref{thm:reduction}, then applies and as a consequence there exists $u$ such that $F=\mathbf{1}_{u>\xi}$ and $u$ is the unique kinetic solution to \eqref{eq}.
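Schematically, writing $\mathcal{S}(t,s)$ for the solution operator of the homogeneous problem \eqref{eq:aux1} (made precise in the rough-path setting below), Duhamel's principle suggests the mild form of \eqref{eq:bgk}; the following display is only a heuristic guide to the construction:

```latex
F^\varepsilon(t)=\mathcal{S}(t,0)F^\varepsilon_0
  +\frac{1}{\varepsilon}\int_0^t\mathcal{S}(t,r)
  \big(\mathbf{1}_{u^\varepsilon(r)>\xi}-F^\varepsilon(r)\big)\,\mathrm{d} r.
```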
We point out that the expected regularity in $(x,\xi)$ of solutions to \eqref{eq:aux}, \eqref{eq:bgk} is low, namely $L^\infty$, and therefore the rigorous treatment of the above outlined technique requires a suitable notion of weak solution in the context of rough paths. Here the usual definition of distributional solutions encounters several obstacles due to the rough regularity of the driving signals. Motivated by the discussion in Section \ref{sec:hypotheses}, we introduce weak formulations which correspond to solving \eqref{eq:aux} and \eqref{eq:bgk} in the sense of distributions but do not involve any rough path driven terms. In the sequel, we will use the following notation: for $\alpha\in\mathbb{R}$ we denote by $\chi_{\alpha}:\mathbb{R}\rightarrow\mathbb{R}$ the so-called equilibrium function which is defined as \begin{equation}\label{def:equil} \chi_{\alpha}(\xi)=\mathbf{1}_{0<\xi<\alpha}-\mathbf{1}_{\alpha<\xi<0}. \end{equation} \subsection{Rough transport equation} \label{sec:roughtr} This subsection is devoted to the study of the auxiliary equation \eqref{eq:aux1}. Our approach here as well as in Subsection \ref{sec:bgksol} is rough-pathwise and therefore we work with one fixed realization $\omega$ and the probability space $(\Omega,\mathscr{F},\mathbb{P})$ remains hidden. The stochastic integral will then reappear in Section \ref{sec:conv} for the final passage to the limit. It can be seen that the Stratonovich form of \eqref{eq:aux1} reads as follows \begin{equation*} \begin{split} \mathrm{d} X+\nabla X\cdot a\,\mathrm{d} z-\partial_\xi X\,b\,\mathrm{d} z&=-\partial_\xi X\, g\circ\mathrm{d} W+\frac{1}{4}\partial_\xi X\partial_\xi G^2\,\mathrm{d} t. \end{split} \end{equation*} Now we recall that (the stochastic process) $\mathbf{\Lambda}$ is the joint lift of $z$ and $W$ constructed in \eqref{jointlift}. Let us fix one realization $\mathbf{\Lambda}(\omega)$.
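The equilibrium function \eqref{def:equil} is related to the local density introduced above by a standard identity, valid for a.e. $\xi$:

```latex
\mathbf{1}_{\alpha>\xi}-\mathbf{1}_{0>\xi}=\chi_{\alpha}(\xi),
\qquad\text{so that}\qquad
\alpha=\int_\mathbb{R}\chi_{\alpha}(\xi)\,\mathrm{d}\xi,
```

which explains why, at equilibrium $F=\mathbf{1}_{u>\xi}$, the integral defining $u^\varepsilon$ recovers the density itself.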
This leads us to the study of the following rough transport equation \begin{equation*}\label{eq:tr} \begin{split} \mathrm{d} X&=\left(\begin{array}{c} \partial_\xi X\\ \nabla X \end{array}\right)\cdot\left(\begin{array}{ccc} \frac{1}{4}\partial_\xi G^2&b& -g\\ 0&-a&0 \end{array}\right)\,\mathrm{d} (t,\mathbf{\Lambda}(\omega)). \end{split} \end{equation*} However, for notational simplicity (and with a slight abuse of notation) we may rather write \begin{equation}\label{eq:aux} \begin{split} \mathrm{d} X+\nabla X\cdot a\,\mathrm{d} \mathbf{z}-\partial_\xi X\,b\,\mathrm{d} \mathbf{z}&=-\partial_\xi X\, g\,\mathrm{d} \mathbf{w}+\frac{1}{4}\partial_\xi X\partial_\xi G^2\,\mathrm{d} t,\\ X(s)&=X_0, \end{split} \end{equation} where $\mathbf{w}$ denotes the corresponding realization of the Stratonovich lift of $W$ and the cross-iterated integrals between $t$, $z$ and $W$ are not explicitly mentioned. We introduce two notions of solution to \eqref{eq:aux}. The first definition follows the usual approach from rough path theory which is based on the approximation of the driving signals (see \cite[Definition 3]{caruana}).
\begin{defin}\label{def:strongsol} Let $(\mathbf{z},\mathbf{w})$ be a geometric H\"older $p$-rough path and suppose that $(z^n,w^n)$ is a sequence of Lipschitz paths such that $$S_{[p]}(z^n,w^n)\equiv(\mathbf{z}^n,\mathbf{w}^n)\longrightarrow(\mathbf{z},\mathbf{w})$$ uniformly on $[0,T]$ and $$\sup_n\|(\mathbf{z}^n,\mathbf{w}^n)\|_{\frac{1}{p}\text{-H\"ol}}<\infty.$$ Assume that for each $n\in\mathbb{N}$ \begin{equation*} \begin{split} \mathrm{d} X^n+\nabla X^n\cdot a\,\mathrm{d} z^n-\partial_\xi X^n\,b\,\mathrm{d} z^n&=-\partial_\xi X^n\, g\,\mathrm{d} w^n+\frac{1}{4}\partial_\xi X^n\partial_\xi G^2\,\mathrm{d} t\\ X^n(s)&=X_0\in C_b^1(\mathbb{R}^N\times\mathbb{R}) \end{split} \end{equation*} has a unique solution $X^n$ which belongs to $C^{1}_b([0,T]\times\mathbb{R}^N\times\mathbb{R})$. Then any limit point (in the uniform topology) of $(X^n)$ is called a solution of the rough PDE, denoted formally by \eqref{eq:aux}. \end{defin} Such a solution can be constructed by making use of the method of characteristics established in \cite{caruana} provided the initial datum $X_0$ is sufficiently regular. To be more precise, the associated characteristic system is given by \begin{equation}\label{eq:char} \begin{split} \mathrm{d}\varphi^0_t&= -b(\varphi_t)\,\mathrm{d} \mathbf{z}+g(\varphi_t)\,\mathrm{d} \mathbf{w}-\frac{1}{4}\partial_\xi G^2(\varphi_t)\,\mathrm{d} t,\\ \mathrm{d}\varphi^x_t&=a(\varphi_t)\,\mathrm{d} \mathbf{z}, \end{split} \end{equation} where $\varphi^0_t$ and $\varphi^x_t$ describe the evolution of the $\xi$-coordinate and $x$-coordinate, respectively, of the characteristic curve. Let us denote by $\varphi_{s,t}(x,\xi)$ the solution of \eqref{eq:char} starting from $(x,\xi)$ at time $s$. It follows from \cite[Proposition 11.11]{friz} that under our assumptions $\varphi$ defines a flow of $C^2$-diffeomorphisms and we denote by $\Psi$ the corresponding inverse flow.
Our first existence and uniqueness result for \eqref{eq:aux} is taken from \cite[Theorem 4]{caruana} and reads as follows. \begin{prop}\label{prop:aux} Let $X_0\in C_b^1(\mathbb{R}^N\times\mathbb{R})$. Then there exists a unique solution to \eqref{eq:aux} given explicitly by $$X(t,x,\xi;s)=X_0\big(\Psi_{s,t}(x,\xi)\big).$$ \end{prop} We conclude that the solution operator $\mathcal{S}(t,s)X_0=X_0\big(\Psi_{s,t}(x,\xi)\big)$ is well defined on $C^1_b(\mathbb{R}^N\times\mathbb{R})$. Nevertheless, as the right hand side makes sense even for more general initial conditions $X_0$ which do not necessarily fulfill the assumptions of Proposition \ref{prop:aux}, the domain of definition of the operator $\mathcal{S}$ can be extended. In particular, since diffeomorphisms preserve sets of measure zero, the above is well defined also if $X_0$ is only defined almost everywhere. In this case, we define consistently $$\mathcal{S}(t,s)X_0=X_0\big(\Psi_{s,t}(x,\xi)\big),\qquad 0\leq s\leq t\leq T.$$ The aim is to show that this extension is the solution operator to \eqref{eq:aux} in a weak sense. Towards this end, it is necessary to weaken the assumption upon the initial condition and therefore a suitable notion of weak solution is required. In the following proposition we establish basic properties of the operator $\mathcal{S}$. \begin{prop}\label{oper} Let $\mathcal{S}=\{\mathcal{S}(t,s),0\leq s\leq t\leq T\}$ be defined as above. Then \begin{enumerate} \item[\emph{(i)}]\label{item1} for any $p\in[1,\infty]$, $\mathcal{S}$ is a family of linear operators on $L^p(\mathbb{R}^N\times\mathbb{R})$ which is uniformly bounded in the operator norm, i.e.
there exists $C>0$ such that for any $X_0\in L^p(\mathbb{R}^N\times\mathbb{R})$, $0\leq s\leq t\leq T$, \begin{equation}\label{first} \big\|\mathcal{S}(t,s)X_0\big\|_{L^p_{x,\xi}}\leq C\|X_0\|_{L^p_{x,\xi}}, \end{equation} \item[\emph{(ii)}]\label{item4} $\mathcal{S}$ verifies the semigroup law \begin{equation*} \begin{split} \mathcal{S}(t,s)&=\mathcal{S}(t,r)\circ\mathcal{S}(r,s),\qquad 0\leq s\leq r\leq t\leq T,\\ \mathcal{S}(s,s)&=\mathrm{Id},\hspace{3.3cm} 0\leq s\leq T. \end{split} \end{equation*} \end{enumerate} \begin{proof} The proof of (ii), as well as of (i) in the case $p=\infty$, follows immediately from the definition of the operator $\mathcal{S}$ and the flow property of $\Psi$. In order to show (i) for $p\in[1,\infty)$, we observe that due to \cite[Proposition 11.11]{friz} the map $$(s,t,x,\xi)\longmapsto |\mathrm{J}\Psi_{s,t}(x,\xi)|,$$ where $\mathrm{J}$ denotes the Jacobian, is bounded from above and below by positive constants. Consequently, using the notation of Proposition \ref{prop:aux}, we have \begin{equation*} \begin{split} \|X_0\|_{L^p_{x,\xi}}^p&=\int_{\mathbb{R}^N\times\mathbb{R}}\big|X\big(t,\varphi_{s,t}(x,\xi);s\big)\big|^p\,\mathrm{d} x\,\mathrm{d}\xi\\ &=\int_{\mathbb{R}^N\times\mathbb{R}}|X(t,x,\xi;s)|^p\,|\mathrm{J}\Psi_{s,t}(x,\xi)|\,\mathrm{d} x\,\mathrm{d} \xi\geq C\|X\|^p_{L^p_{x,\xi}}. \end{split} \end{equation*} \end{proof} \end{prop} \begin{defin}\label{def:weaksol} $X:[s,T]\times\mathbb{R}^N\times\mathbb{R}\rightarrow\mathbb{R}$ is called a weak solution to \eqref{eq:aux} provided \begin{equation}\label{eq:weaktransport} \begin{split} \mathrm{d} X\big(t,\varphi_{s,t}\big)&=0\\ X(s)&=X_0 \end{split} \end{equation} holds true in the sense of $\mathcal{D}'(\mathbb{R}^N\times\mathbb{R})$, i.e.
for all $\Phi\in C^1_c(\mathbb{R}^N\times\mathbb{R})$ and $t\in[s,T]$ $$\big\langle X\big(t,\varphi_{s,t}\big),\Phi\big\rangle=\langle X_0,\Phi\rangle.$$ \end{defin} \begin{cor}\label{cor:weaksol} Let $X_0\in L^\infty(\mathbb{R}^N\times\mathbb{R})$. Then there exists a unique $X\in L^\infty([s,T]\times\mathbb{R}^N\times\mathbb{R})$ that is a weak solution to \eqref{eq:aux}. Moreover, it is represented by $$X(t)=\mathcal{S}(t,s)X_0.$$ \begin{proof} In order to prove existence, observe that if $X$ is given by the above representation then clearly for all $0\leq s\leq t\leq T$ $$X\big(t,\varphi_{s,t}(x,\xi);s\big)=X_0\big(\Psi_{s,t}\circ\varphi_{s,t}(x,\xi)\big)=X_0(x,\xi)\qquad \text{ for a.e. }(x,\xi)$$ and hence $X$ solves \eqref{eq:weaktransport}. However, let us also give another proof of existence which in addition justifies Definition \ref{def:weaksol} as the appropriate notion of weak solution to \eqref{eq:aux}. To this end, let us consider smooth approximations of $X_0$, namely, let $(\varrho_\delta)$ be an approximation to the identity on $\mathbb{R}^N\times\mathbb{R}$ and set $X_0^\delta=X_0*\varrho_\delta$. According to Proposition \ref{prop:aux}, there exists a unique $X^\delta$ which solves \eqref{eq:aux} with the initial condition $X_0^\delta$ and is given by $X^\delta(t;s)=X^\delta_0(\Psi_{s,t})$, hence \begin{equation}\label{eq:delta} X^\delta\big(t,\varphi_{s,t}(x,\xi);s\big)=X_0^\delta(x,\xi). \end{equation} Moreover, $$X_0^\delta\overset{w^*}{\longrightarrow} X_0\quad\text{ in }\quad L^\infty(\mathbb{R}^N\times\mathbb{R}),$$ so $$X^\delta(t;s)\overset{w^*}{\longrightarrow} X_0(\Psi_{s,t})\quad\text{ in }\quad L^\infty(\mathbb{R}^N\times\mathbb{R})\quad\forall t\in[s, T],$$ which justifies the passage to the limit in \eqref{eq:delta} and completes the proof. Regarding uniqueness, due to linearity it is enough to show that any weak solution with $X_0=0$ vanishes identically.
Let $X$ be such a weak solution, i.e. $$\langle X(t,\varphi_{s,t};s),\Phi\rangle=0\qquad \forall\Phi\in C^1_c(\mathbb{R}^N\times\mathbb{R}).$$ Testing with the mollifier $\varrho_\delta((x,\xi)-(\cdot,\cdot))$ we deduce that for all $(x,\xi)$ \begin{align*} 0&=\int_{\mathbb{R}^N\times\mathbb{R}}X\big(t,\varphi_{s,t}(y,\zeta);s\big)\,\varrho_\delta\big((x,\xi)-(y,\zeta)\big)\,\mathrm{d} y\,\mathrm{d} \zeta \end{align*} and since for a.e. $(x,\xi)$ the right hand side converges to $X\big(t,\varphi_{s,t}(x,\xi);s\big)$, the claim follows. \end{proof} \end{cor} \begin{lemma}\label{indik} For all $0\leq s\leq t\leq T$ it holds true that $$\mathcal{S}(t,s)\mathbf{1}_{0>\xi}-\mathbf{1}_{0>\xi}=0.$$ \begin{proof} It follows from \eqref{eq:null} that for all $x\in\mathbb{R}^N$ the solution to \eqref{eq:char} starting from $(x,0)$ satisfies $\varphi^{0}_{s,t}(x,0)\equiv 0$. Moreover, since the solution to \eqref{eq:char} is unique, we deduce that $$\varphi^{0}_{s,t}(x,\xi)\begin{cases} \geq 0,&\quad \text{if }\;\xi\geq 0,\\ \leq 0,&\quad \text{if }\;\xi\leq 0. \end{cases}$$ Indeed, this can be proved by contradiction: let $\xi>0$ and assume that for some $t\in[s,T]$ and $x\in\mathbb{R}^N$, $\varphi^0_{s,t}(x,\xi)<0$. Since $t\mapsto\varphi_{s,t}^0(x,\xi)$ is continuous and $\varphi_{s,s}^0(x,\xi)>0$, there exists $r\in[s,t]$ such that $\varphi^0_{s,r}(x,\xi)=0$. Now, we may take $\varphi_{s,r}(x,\xi)=(0,\varphi^x_{s,r}(x,\xi))$ as the initial condition for \eqref{eq:char} at time $r$ to obtain, on the one hand, $$\varphi^0_{r,t}(\varphi_{s,r}(x,\xi))=\varphi^0_{s,t}(x,\xi)$$ and, on the other hand, $$\varphi^0_{r,t}(\varphi_{s,r}(x,\xi))=\varphi^0_{r,t}(0,\varphi^x_{s,r}(x,\xi))=0,$$ which is a contradiction. As a consequence, $\mathcal{S}(t,s)\mathbf{1}_{0>\xi}=\mathbf{1}_{0>\xi}$ and the claim follows.
\end{proof} \end{lemma} \subsection{Solution to the BGK model} \label{sec:bgksol} Throughout this section we continue with our rough-pathwise analysis, that is, we consider one fixed realization of the driving path $\mathbf{\Lambda}(\omega)$ and therefore the underlying probability space $(\Omega,\mathscr{F},\mathbb{P})$ remains hidden. We will apply the auxiliary results for the rough transport equation \eqref{eq:aux} and establish existence and uniqueness for the BGK model \eqref{eq:bgk}. First of all, it is necessary to specify in which sense \eqref{eq:bgk} is to be solved. As can be seen in Definition \ref{def:strongsol}, the solution given by Proposition \ref{prop:aux} satisfies the equation \eqref{eq:aux} only on a formal level. This obstacle was overcome by the definition of weak solution (see Definition \ref{def:weaksol}), which also permitted a generalization to less regular initial data, namely $X_0\in L^\infty(\mathbb{R}^N\times\mathbb{R})$. We continue in this fashion and define the notion of weak solution to the BGK model similarly. Concerning the initial data for the BGK model \eqref{eq:bgk}, let us simply set $F^\varepsilon_0=\mathbf{1}_{u_0>\xi}$, which is sufficient for our purposes. Nevertheless, our proof of existence and convergence of \eqref{eq:bgk} would remain valid if we considered $F_0^\varepsilon=\mathbf{1}_{u_0^\varepsilon>\xi}$ where $(u_0^\varepsilon)$ is a suitable approximation of $u_0$. \begin{defin}\label{def:bgk} Let $\varepsilon>0$.
Then $F^\varepsilon\in L^\infty([0,T]\times\mathbb{R}^N\times\mathbb{R})$ satisfying $F^\varepsilon-\mathbf{1}_{0>\xi}\in L^1([0,T]\times\mathbb{R}^N\times\mathbb{R})$ is called a weak solution to the BGK model \eqref{eq:bgk} with initial condition $F_0^\varepsilon$ provided \begin{equation}\label{eq:bgk2} \begin{split} \mathrm{d} F^\varepsilon(t,\varphi_{0,t})&=\frac{\mathbf{1}_{u^\varepsilon(t)>\xi}\circ\varphi_{0,t}-F^\varepsilon(t,\varphi_{0,t})}{\varepsilon}\mathrm{d} t\\ F^\varepsilon(0)&=F_0^\varepsilon \end{split} \end{equation} holds true in the sense of $\mathcal{D}'(\mathbb{R}^N\times\mathbb{R})$\footnote{By $\mathbf{1}_{u^\varepsilon(t)>\xi}\circ\varphi_{0,t}$ we denote the composition of $(x,\xi)\mapsto\mathbf{1}_{u^\varepsilon(t,x)>\xi}$ with $(x,\xi)\mapsto\varphi_{0,t}(x,\xi)$.}, i.e. for all $\Phi\in C^1_c(\mathbb{R}^N\times\mathbb{R})$ and $t\in[0,T]$ $$\big\langle F^\varepsilon(t,\varphi_{0,t}),\Phi\big\rangle=\langle F_0^\varepsilon,\Phi\rangle+\frac{1}{\varepsilon}\int_0^t\big\langle \mathbf{1}_{u^\varepsilon(s)>\xi}\circ\varphi_{0,s}-F^\varepsilon(s,\varphi_{0,s}),\Phi\big\rangle\mathrm{d} s.$$ \end{defin} The result reads as follows. \begin{thm}\label{duhamel} For any $\varepsilon>0$, there exists a unique weak solution of the BGK model \eqref{eq:bgk} and it is represented by \begin{equation}\label{eq:sol} F^\varepsilon(t)=\mathrm{e}^{-\frac{t}{\varepsilon}}\mathcal{S}(t,0)F_0^\varepsilon+\frac{1}{\varepsilon}\int_0^t\mathrm{e}^{-\frac{t-s}{\varepsilon}}\mathcal{S}(t,s)\mathbf{1}_{u^\varepsilon(s)>\xi}\,\mathrm{d} s.
\end{equation} \begin{proof} By Duhamel's principle, the problem \eqref{eq:bgk2} admits an equivalent integral representation \begin{equation*} F^\varepsilon(t,\varphi_{0,t})=\mathrm{e}^{-\frac{t}{\varepsilon}}F_0^\varepsilon+\frac{1}{\varepsilon}\int_0^t\mathrm{e}^{-\frac{t-s}{\varepsilon}}\mathbf{1}_{u^\varepsilon(s)>\xi}(\varphi_{0,s})\,\mathrm{d} s, \end{equation*} which can be rewritten as \eqref{eq:sol}. Recall that the local densities are defined by \begin{equation}\label{dens} u^\varepsilon(t,x)=\int_\mathbb{R} \big(F^\varepsilon(t,x,\xi)-\mathbf{1}_{0>\xi}\big)\,\mathrm{d} \xi, \end{equation} hence the function $F^\varepsilon$ itself is not integrable with respect to $\xi$. For the purpose of the proof it is therefore more convenient to consider $f^\varepsilon(t)=F^\varepsilon(t)-\mathbf{1}_{0>\xi}$, which is integrable, and prove its existence. Moreover, we will show that $f^\varepsilon$ also admits an integral representation similar to \eqref{eq:sol}. Indeed, due to Lemma \ref{indik} and Corollary \ref{cor:weaksol}, $\mathbf{1}_{0>\xi}=\mathcal{S}(t,s)\mathbf{1}_{0>\xi}$ is the unique weak solution to \eqref{eq:aux}, hence $f^\varepsilon$ solves \begin{equation}\label{bgk4} \begin{split} \mathrm{d} f^\varepsilon(t,\varphi_{0,t})&=\frac{\chi_{u^\varepsilon(t)}\circ\varphi_{0,t}-f^\varepsilon(t,\varphi_{0,t})}{\varepsilon}\,\mathrm{d} t\\ f^\varepsilon(0)&=\chi_{u_0} \end{split} \end{equation} in the sense of distributions. By a similar reasoning as above, \eqref{bgk4} has the integral representation \begin{equation}\label{eq:hsol} f^\varepsilon(t)=\mathrm{e}^{-\frac{t}{\varepsilon}}\mathcal{S}(t,0)\chi_{u_0}+\frac{1}{\varepsilon}\int_0^t \mathrm{e}^{-\frac{t-s}{\varepsilon}}\mathcal{S}(t,s)\chi_{u^\varepsilon(s)}\,\mathrm{d} s \end{equation} and thus can be solved by a fixed point method.
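The Picard iteration behind this fixed point can be sketched numerically in a scalar caricature. The assumptions here are ours, not the paper's: the transport operators $\mathcal{S}(t,s)$ are dropped, and the nonlinearity $\chi_{u^\varepsilon}$ is replaced by a hypothetical Lipschitz map $K(f)=\tanh(f)$ with Lipschitz constant $1$, mirroring the identity $\int_\mathbb{R}|\chi_\alpha-\chi_\beta|\,\mathrm{d}\xi=|\alpha-\beta|$. Successive iterates of the Duhamel map then contract at rate $1-\mathrm{e}^{-T/\varepsilon}$.

```python
import numpy as np

eps, T, n = 1.0, 0.5, 1000     # toy parameters; T small enough for a contraction
t = np.linspace(0.0, T, n)
h = t[1] - t[0]
f0 = 0.8                       # scalar stand-in for the initial datum chi_{u_0}

def K(f):
    return np.tanh(f)          # hypothetical 1-Lipschitz nonlinearity

def duhamel(f):
    """Picard map f -> e^{-t/eps} f0 + (1/eps) int_0^t e^{-(t-s)/eps} K(f(s)) ds,
    with the integral computed by the trapezoidal rule on the time grid."""
    Kf = K(f)
    out = np.empty(n)
    for i in range(n):
        w = np.exp(-(t[i] - t[: i + 1]) / eps) / eps * Kf[: i + 1]
        integ = 0.0 if i == 0 else float(np.sum(w[:-1] + w[1:]) * h / 2.0)
        out[i] = np.exp(-t[i] / eps) * f0 + integ
    return out

f = np.zeros(n)
gaps = []                      # sup-norm distances between successive iterates
for _ in range(8):
    f_new = duhamel(f)
    gaps.append(float(np.max(np.abs(f_new - f))))
    f = f_new

rho = 1.0 - np.exp(-T / eps)   # the contraction factor C(1 - e^{-T/eps}), C = 1 here
assert all(gaps[i + 1] <= rho * gaps[i] + 1e-6 for i in range(len(gaps) - 1))
```

Iterating on successive subintervals, as in the proof, extends the toy fixed point to any horizon, since the admissible step length does not depend on the starting value.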
In view of the identity \begin{equation*} \int_\mathbb{R}|\chi_{\alpha}-\chi_{\beta}|\,\mathrm{d} \xi=|\alpha-\beta|,\qquad \alpha,\,\beta\in\mathbb{R}, \end{equation*} a space of $\xi$-integrable functions is well suited to deal with the nonlinear term $\chi_{u^\varepsilon}.$ Let us denote $\mathscr{H}=L^\infty(0,T;L^1(\mathbb{R}^N\times\mathbb{R}))$ and show that the mapping \begin{equation*} \begin{split} \big(\mathscr{K}g\big)&(t)=\mathrm{e}^{-\frac{t}{\varepsilon}}\mathcal{S}(t,0)\chi_{u_0}+\frac{1}{\varepsilon}\int_0^t\mathrm{e}^{-\frac{t-s}{\varepsilon}}\mathcal{S}(t,s)\chi_{v(s)}\,\mathrm{d} s, \end{split} \end{equation*} where the local density $v(s)=\int_\mathbb{R} g(s,\xi)\,\mathrm{d} \xi$ is defined consistently with \eqref{dens}, is a contraction on $\mathscr{H}$. Let $g,\,g_1,\,g_2\in \mathscr{H}$ with corresponding local densities $v,\,v_1,\,v_2$. By Proposition \ref{oper} and the assumptions on the initial data, we arrive at \begin{equation*} \begin{split} \big\|(\mathscr{K}g)(t)\big\|_{L^1_{x,\xi}}&\leq \mathrm{e}^{-\frac{t}{\varepsilon}}\big\|\mathcal{S}(t,0)\chi_{u_0}\big\|_{L^1_{x,\xi}}+\frac{1}{\varepsilon}\int_0^t\mathrm{e}^{-\frac{t-s}{\varepsilon}}\big\|\mathcal{S}(t,s)\chi_{v(s)}\big\|_{L^1_{x,\xi}}\mathrm{d} s\\ &\leq C\Big(\|u_0\|_{L^1_x}+\sup_{0\leq s\leq t}\|v(s)\|_{L^1_x}\Big) \end{split} \end{equation*} with a constant independent of $t$, hence \begin{equation*} \begin{split} \big\|\mathscr{K}g\big\|_{\mathscr{H}}&\leq C\big(\|u_0\|_{L^1_x}+\|g\|_{\mathscr{H}}\big)<\infty.
\end{split} \end{equation*} Next, we estimate \begin{equation*} \begin{split} \big\|(\mathscr{K} g_1)(t)-(\mathscr{K}g_2)(t)\big\|_{L^1_{x,\xi}}&\leq \frac{1}{\varepsilon}\int_0^t\mathrm{e}^{-\frac{t-s}{\varepsilon}}\big\|\mathcal{S}(t,s)(\chi_{v_1(s)}-\chi_{v_2(s)})\big\|_{L^1_{x,\xi}}\mathrm{d} s\\ &\leq\frac{C}{\varepsilon}\int_0^t\mathrm{e}^{-\frac{t-s}{\varepsilon}}\big\|\chi_{v_1(s)}-\chi_{v_2(s)}\big\|_{L^1_{x,\xi}}\mathrm{d} s\\ &\leq \frac{C}{\varepsilon}\int_0^t\mathrm{e}^{-\frac{t-s}{\varepsilon}}\|v_1(s)-v_2(s)\|_{L^1_{x}}\mathrm{d} s\\ &\leq \frac{C}{\varepsilon}\int_0^t\mathrm{e}^{-\frac{t-s}{\varepsilon}}\|g_1(s)-g_2(s)\|_{L^1_{x,\xi}}\mathrm{d} s, \end{split} \end{equation*} so $$\big\|\mathscr{K} g_1-\mathscr{K}g_2\big\|_{\mathscr{H}}\leq C\big (1-\mathrm{e}^{-\frac{T}{\varepsilon}}\big)\|g_1-g_2\|_{\mathscr{H}}$$ and according to the Banach fixed point theorem, the mapping $\mathscr{K}$ has a unique fixed point in $\mathscr{H}$ provided $T$ is small enough. Nevertheless, since the choice of $T$ does not depend on the initial condition, this existence and uniqueness result extends to the whole interval $[0,T]$ by considering the equation on smaller intervals $[0,\tilde T],\,[\tilde T,2\tilde T]$, etc., and repeating the above procedure. As a consequence, we obtain the existence of a unique weak solution to \eqref{eq:bgk} that is given by \eqref{eq:sol} and the proof is complete. \end{proof} \end{thm} \subsection{Convergence of the BGK model} \label{sec:conv} In this final section we establish another weak formulation of the BGK model \eqref{eq:bgk} that actually includes the stochastic integral and is therefore better suited for proving the convergence to \eqref{eq:weakkinformul}. To be more precise, we prove the following result, which will complete the proof of Theorem \ref{thm:main}. \begin{thm}\label{thm:bgkconvergence} Let $f^\varepsilon=F^\varepsilon-\mathbf{1}_{0>\xi}$.
Then there exists $u$ which is a kinetic solution to the conservation law \eqref{eq} and, in addition, $(f^\varepsilon)$ converges weak-star in $L^\infty(\Omega\times[0,T]\times\mathbb{R}^N\times \mathbb{R})$ to the equilibrium function $\chi_u$. \end{thm} We start with an auxiliary result. \begin{prop}\label{lemma:7} $F^\varepsilon\in L^\infty(\Omega\times[0,T]\times\mathbb{R}^N\times\mathbb{R})$ satisfies the following weak formulation of \eqref{eq:bgk}: let $\Phi\in C^2_c(\mathbb{R}^N\times\mathbb{R}),$ then it holds true a.s. \begin{align}\label{eq:bgkweak} \mathrm{d} \langle F^\varepsilon,\Phi(\theta_t)\rangle&=-\langle \partial_\xi F^\varepsilon g,\Phi(\theta_t)\rangle\,\mathrm{d} W+\frac{1}{2}\langle\partial_\xi(G^2\partial_\xi F^\varepsilon),\Phi(\theta_t)\rangle\,\mathrm{d} t+\frac{1}{\varepsilon}\langle\mathbf{1}_{u^\varepsilon>\xi}-F^\varepsilon,\Phi(\theta_t)\rangle\,\mathrm{d} t. \end{align} Moreover, it is progressively measurable. \begin{proof} In order to verify \eqref{eq:bgkweak}, we test \eqref{eq:sol} by $\Phi(\theta_{0,t})$ and obtain \begin{align}\label{eq:5a} \big\langle F^\varepsilon(t),\Phi(\theta_{0,t})\big\rangle=\mathrm{e}^{-\frac{t}{\varepsilon}}\big\langle\mathcal{S}(t,0)F_0^\varepsilon,\Phi(\theta_{0,t})\big\rangle+\frac{1}{\varepsilon}\int_0^t\mathrm{e}^{-\frac{t-s}{\varepsilon}}\big\langle\mathcal{S}(t,s)\mathbf{1}_{u^\varepsilon(s)>\xi},\Phi(\theta_{0,s}(\theta_{s,t}))\big\rangle\,\mathrm{d} s.
\end{align} According to Duhamel's principle, it is enough to verify that if $X_0\in L^\infty(\mathbb{R}^N\times\mathbb{R})$ then the pair $$\big(X(t):=\mathcal{S}(t,s)X_0,\ \Phi(\theta_{0,s}(\theta_{s,t}))\big)$$ solves \begin{align}\label{eq:5} \begin{aligned} \mathrm{d}\big\langle X(t),\Phi(\theta_{0,s}(\theta_{s,t}))\big\rangle&=-\big\langle \partial_\xi X g,\Phi(\theta_{0,s}(\theta_{s,t}))\big\rangle\,\mathrm{d} W+\frac{1}{2}\big\langle\partial_\xi(G^2\partial_\xi X),\Phi(\theta_{0,s}(\theta_{s,t}))\big\rangle\,\mathrm{d} t,\\ \langle X(s),\Phi(\theta_{0,s})\rangle&=\langle X_0,\Phi(\theta_{0,s})\rangle. \end{aligned} \end{align} Indeed, let us calculate the stochastic differential of $\langle F^\varepsilon(t),\Phi(\theta_{0,t})\rangle$ given by the right hand side of \eqref{eq:5a}. Since, according to the It\^o formula applied to a product, \begin{align*} \mathrm{e}^{-\frac{t-s}{\varepsilon}}\big\langle X(t),\Phi(\theta_{0,s}(\theta_{s,t}))\big\rangle&=\big\langle X_0,\Phi(\theta_{0,s})\big\rangle-\frac{1}{\varepsilon}\int_s^t\mathrm{e}^{-\frac{r-s}{\varepsilon}}\big\langle X(r),\Phi(\theta_{0,s}(\theta_{s,r}))\big\rangle\mathrm{d} r\\ &\quad+\int_s^t\mathrm{e}^{-\frac{r-s}{\varepsilon}}\mathrm{d}\big\langle X(r),\Phi(\theta_{0,s}(\theta_{s,r}))\big\rangle, \end{align*} it follows due to \eqref{eq:5} that \begin{align*} \langle F^\varepsilon(t),\Phi(\theta_{0,t})\rangle&=\langle F^\varepsilon_0,\Phi\rangle-\frac{1}{\varepsilon}\int_0^t\mathrm{e}^{-\frac{r}{\varepsilon}}\big\langle \mathcal{S}(r,0)F_0^\varepsilon,\Phi(\theta_{0,r})\big\rangle\mathrm{d} r\\ &\quad-\int_0^t\mathrm{e}^{-\frac{r}{\varepsilon}}\big\langle \partial_\xi [\mathcal{S}(r,0)F_0^\varepsilon] g,\Phi(\theta_{0,r})\big\rangle\,\mathrm{d} W_r+\frac{1}{2}\int_0^t\mathrm{e}^{-\frac{r}{\varepsilon}}\big\langle\partial_\xi(G^2\partial_\xi
[\mathcal{S}(r,0)F_0^\varepsilon]),\Phi(\theta_{0,r})\big\rangle\,\mathrm{d} r\\ &\quad+\frac{1}{\varepsilon}\int_0^t\bigg[\langle\mathbf{1}_{u^\varepsilon(s)>\xi},\Phi(\theta_{0,s})\rangle-\frac{1}{\varepsilon}\int_s^t\mathrm{e}^{-\frac{r-s}{\varepsilon}}\big\langle\mathcal{S}({r,s})\mathbf{1}_{u^\varepsilon(s)>\xi},\Phi(\theta_{0,s}(\theta_{s,r}))\big\rangle\mathrm{d} r\\ &\hspace{2.5cm}-\int_s^t\mathrm{e}^{-\frac{r-s}{\varepsilon}}\big\langle \partial_\xi [\mathcal{S}(r,s)\mathbf{1}_{u^\varepsilon(s)>\xi}] g,\Phi(\theta_{0,s}(\theta_{s,r}))\big\rangle\,\mathrm{d} W_r\\ &\hspace{2.5cm}+\frac{1}{2}\int_s^t\mathrm{e}^{-\frac{r-s}{\varepsilon}}\big\langle\partial_\xi(G^2\partial_\xi [\mathcal{S}(r,s)\mathbf{1}_{u^\varepsilon(s)>\xi}]),\Phi(\theta_{0,s}(\theta_{s,r}))\big\rangle\,\mathrm{d} r\bigg]\mathrm{d} s\\ &=\langle F^\varepsilon_0,\Phi\rangle-\int_0^t\big\langle \partial_\xi F^\varepsilon(r) g,\Phi(\theta_{0,r})\big\rangle\,\mathrm{d} W_r+\frac{1}{2}\int_0^t\big\langle\partial_\xi(G^2\partial_\xi F^\varepsilon(r)),\Phi(\theta_{0,r})\big\rangle\,\mathrm{d} r\\ &\quad -\frac{1}{\varepsilon}\int_0^t\langle F^\varepsilon(s),\Phi(\theta_{0,s})\rangle\mathrm{d} s+\frac{1}{\varepsilon}\int_0^t\langle\mathbf{1}_{u^\varepsilon(s)>\xi},\Phi(\theta_{0,s})\rangle\mathrm{d} s, \end{align*} where the last equality is a consequence of the deterministic and stochastic Fubini theorems and \eqref{eq:sol}. Hence \eqref{eq:bgkweak} is satisfied. Let us now justify \eqref{eq:5}.
It was shown in \cite[Theorem 8]{DOR15} that the RDE solution to the characteristic system \eqref{eq:char} corresponding to the joint lift $\mathbf{\Lambda}$ constructed in \eqref{jointlift} can be obtained as a limit of SDE solutions to \begin{align}\label{eq:sdechar} \begin{aligned} \mathrm{d}\varphi^{n,0}_t&=g(\varphi^{n}_t)\circ\mathrm{d} W-\frac{1}{4}\partial_\xi G^2(\varphi_t^n)\,\mathrm{d} t-b(\varphi^n_t)\,\mathrm{d} z^n,\\ \mathrm{d}\varphi^{n,x}_t&=a(\varphi^n_t)\,\mathrm{d} z^n, \end{aligned} \end{align} where the corresponding lifts $(\mathbf{z}^n)$ approximate $\mathbf{z}$ as in Definition \ref{def:solutionrde}. Let $\Psi^n$ and $\theta^n$, respectively, denote the inverse flows corresponding to \eqref{eq:sdechar} and \begin{align}\label{bbb} \begin{aligned} \mathrm{d}\Pi^{n,0}_t&=-b(\Pi^n_t)\,\mathrm{d} z^n,\\ \mathrm{d}\Pi^{n,x}_t&=a(\Pi^n_t)\,\mathrm{d} z^n, \end{aligned} \end{align} respectively. Let $X_0\in C^1_b(\mathbb{R}^N\times\mathbb{R})$ and $\Phi\in C^2_c(\mathbb{R}^N\times\mathbb{R})$. Then setting $X^n(t)=X_0(\Psi^n_{s,t})$ yields a solution to the stochastic transport equation corresponding to the characteristic system \eqref{eq:sdechar}, which starts at time $s$ from $X_0$. Besides, $\Phi(\theta^n_{0,s}(\theta^n_{s,t}))$ yields a solution to the transport equation corresponding to the characteristic system \eqref{bbb}, which starts at time $s$ from $\Phi(\theta^n_{0,s})$. Therefore, applying the It\^o formula to their product and integrating with respect to $(x,\xi)$, we observe that the integrals driven by $z^n$ cancel due to the fact that $\diver a-\partial_\xi b=0$ (cf.
a similar discussion in Subsection \ref{subsec:smoothdrivers}) and we obtain \begin{align*} \begin{aligned} \mathrm{d}\big\langle X^n(t),\Phi(\theta^n_{0,s}(\theta^n_{s,t}))\big\rangle&=-\big\langle \partial_\xi X^n g,\Phi(\theta^n_{0,s}(\theta^n_{s,t}))\big\rangle\,\mathrm{d} W+\frac{1}{2}\big\langle\partial_\xi(G^2\partial_\xi X^n),\Phi(\theta^n_{0,s}(\theta^n_{s,t}))\big\rangle\,\mathrm{d} t,\\ \langle X^n(s),\Phi(\theta^n_{0,s})\rangle&=\langle X_0,\Phi(\theta^n_{0,s})\rangle. \end{aligned} \end{align*} As mentioned above, due to \cite[Theorem 8]{DOR15}, $\Psi^n_{s,t}(x,\xi)\rightarrow\Psi_{s,t}(x,\xi)$ uniformly in $t\in[s,T]$ in probability, where $\Psi$ is the RDE inverse flow to \eqref{eq:char}. Due to the invariance of the $\mathrm{Lip}^\gamma$-norm under translation, we deduce (for a subsequence) that $\Psi^n\rightarrow\Psi$ uniformly in $t\in[0,T]$ and $(x,\xi)\in\mathbb{R}^N\times\mathbb{R}$ a.s.
Indeed, since $\varphi_{s,t}^n(x,\xi)$ is the SDE solution to \eqref{eq:sdechar} starting from $(x,\xi)$ at time $s$, $\tilde\varphi_{s,t}^n:=\varphi_{s,t}^n(x,\xi)-(x,\xi)$ solves \begin{align*} \begin{aligned} \mathrm{d}\tilde\varphi^{n,0}_t&=g\big((x,\xi)+\tilde\varphi^{n}_t\big)\circ\mathrm{d} W-\frac{1}{4}\partial_\xi G^2\big((x,\xi)+\tilde\varphi_t^n\big)\,\mathrm{d} t-b\big((x,\xi)+\tilde\varphi^n_t\big)\,\mathrm{d} z^n,\\ \mathrm{d}\tilde\varphi^{n,x}_t&=a\big((x,\xi)+\tilde\varphi^n_t\big)\,\mathrm{d} z^n,\\ \tilde\varphi^n(s)&=(0,0), \end{aligned} \end{align*} and similarly $\tilde\varphi_{s,t}:=\varphi_{s,t}(x,\xi)-(x,\xi)$ solves \begin{equation*} \begin{split} \mathrm{d}\tilde\varphi^0_t&= -b\big((x,\xi)+\tilde\varphi_t\big)\,\mathrm{d} \mathbf{z}+g\big((x,\xi)+\tilde\varphi_t\big)\,\mathrm{d} \mathbf{w}-\frac{1}{4}\partial_\xi G^2\big((x,\xi)+\tilde\varphi_t\big)\,\mathrm{d} t,\\ \mathrm{d}\tilde\varphi^x_t&=a\big((x,\xi)+\tilde\varphi_t\big)\,\mathrm{d} \mathbf{z},\\ \tilde\varphi(s)&=(0,0). \end{split} \end{equation*} Since for a family of vector fields $V=(V_1,\dots,V_d)$ on $\mathbb{R}^e$ it holds true that $$\|V(y+\cdot)\|_{\text{Lip}^\gamma}=\|V(\cdot)\|_{\text{Lip}^\gamma},$$ \cite[Theorem 8]{DOR15} yields the convergence $\tilde\varphi^n\to\tilde\varphi$ uniformly in $t\in[s,T]$ in probability, at a rate independent of $(x,\xi)$. Therefore $\varphi^n\to\varphi$ uniformly in $t\in[s,T]$ and $(x,\xi)\in\mathbb{R}^N\times\mathbb{R}$ in probability and, as a consequence, along a subsequence, $\varphi^n\to\varphi$ uniformly in $t\in[s,T]$ and $(x,\xi)\in\mathbb{R}^N\times\mathbb{R}$ a.s.
Finally, according to Theorem \ref{thm:existence}, \begin{align*} \sup_{t,x,\xi}|\Psi^n_{s,t}(x,\xi)-\Psi_{s,t}(x,\xi)|&=\sup_{t,x,\xi}|\Psi^n_{s,t}(\varphi^n_{s,t}(x,\xi))-\Psi_{s,t}(\varphi^n_{s,t}(x,\xi))|\\ &=\sup_{t,x,\xi}|(x,\xi)-\Psi_{s,t}(\varphi_{s,t}(x,\xi)+\varphi^n_{s,t}(x,\xi)-\varphi_{s,t}(x,\xi))|\\ &=\sup_{t,x,\xi}|\Psi_{s,t}(\varphi_{s,t}(x,\xi))-\Psi_{s,t}(\varphi_{s,t}(x,\xi)+\varphi^n_{s,t}(x,\xi)-\varphi_{s,t}(x,\xi))|\\ &\leq \sup_{t,x,\xi}|\mathrm{D} \Psi_{s,t}|\sup_{t,x,\xi}|\varphi^n_{s,t}(x,\xi)-\varphi_{s,t}(x,\xi)|\\ &\leq C\sup_{t,x,\xi}|\varphi^n_{s,t}(x,\xi)-\varphi_{s,t}(x,\xi)|\rightarrow 0\quad\text{a.s.} \end{align*} and the claim follows. Moreover, $\theta^n\rightarrow\theta$ uniformly in $s,\,t\in[0,T]$, $(x,\xi)\in\mathbb{R}^N\times\mathbb{R}$ (see \cite[Theorem 4]{caruana}) and the same holds true for their first and second order derivatives with respect to $\xi$. Therefore, we may pass to the limit, applying the dominated convergence theorem (for both the Lebesgue and the stochastic integral, see \cite[Theorem 32]{protter} for the latter), and \eqref{eq:5} follows. If $X_0\in L^\infty(\mathbb{R}^N\times\mathbb{R})$, then we consider its smooth approximation $X^\delta_0$, apply the previous result and pass to the limit. The progressive measurability follows immediately from the construction. \end{proof} \end{prop} As the next step we prove a stochastic version of Proposition \ref{oper}(i) for $p=1$. \begin{lemma}\label{prop:update} The family $\mathcal{S}=\{\mathcal{S}(t,s),\,0\leq s\leq t\leq T\}$ consists of bounded linear operators on $L^1(\Omega\times\mathbb{R}^N\times\mathbb{R})$. In particular, if $X_0\in L^1(\Omega\times\mathbb{R}^N\times\mathbb{R})$ then \begin{equation}\label{eq:6} \sup_{0\leq s\leq T}\mathbb{E}\sup_{s\leq t\leq T}\|\mathcal{S}(t,s)X_0\|_{L^1_{x,\xi}}\leq C \,\mathbb{E}\|X_0\|_{L^1_{x,\xi}}.
\end{equation} \begin{proof} Assume in addition that $X_0$ is nonnegative, bounded and compactly supported in $(x,\xi)$. In general, \eqref{eq:5} holds true for all $\Phi\in C^1_c(\mathbb{R}^N\times\mathbb{R})$; nevertheless, since $X\in L^1(\mathbb{R}^N\times\mathbb{R})$ a.s., which follows from Proposition \ref{oper}, the assumption on the test function $\Phi$ can be relaxed and we may take $\Phi\equiv 1$. Taking expectation, the stochastic integral vanishes due to the additional assumption and we obtain \begin{equation}\label{eq:7} \mathbb{E}\|\mathcal{S}(t,s)X_0\|_{L^1_{x,\xi}}\leq \mathbb{E}\|X_0\|_{L^1_{x,\xi}}. \end{equation} Besides, the nonnegativity assumption can be immediately removed by splitting $X_0$ into its positive and negative parts. In the general case of $X_0\in L^1(\Omega\times\mathbb{R}^N\times\mathbb{R})$, we approximate $X_0$ in $L^1(\Omega\times\mathbb{R}^N\times\mathbb{R})$ by $X_0^\delta$ bounded and compactly supported such that $$\mathbb{E}\|X_0^\delta\|_{L^1_{x,\xi}}\leq \mathbb{E}\|X_0\|_{L^1_{x,\xi}}$$ and apply \eqref{eq:7} to $X_0^\delta$. Due to the linearity of $\mathcal{S}(t,s)$, it implies that $\mathcal{S}(t,s)X_0^\delta$ is Cauchy in $L^1(\Omega\times\mathbb{R}^N\times\mathbb{R})$; the limit is necessarily $\mathcal{S}(t,s)X_0$, so Fatou's lemma yields \eqref{eq:7} and the first claim follows. To obtain \eqref{eq:6}, we test \eqref{eq:5} again by $\Phi\equiv 1$, take the supremum and expectation.
Applying the Burkholder-Davis-Gundy and weighted Young inequalities together with \eqref{eq:7}, we obtain \begin{align*} \mathbb{E}\sup_{s\leq t\leq T}\|\mathcal{S}(t,s)X_0\|_{L^1_{x,\xi}}&\leq \mathbb{E}\|X_0\|_{L^1_{x,\xi}}+\mathbb{E}\sup_{s\leq t\leq T}\bigg|\int_s^t\langle X,\partial_\xi g\rangle\,\mathrm{d} W\bigg|\\ &\leq\mathbb{E}\|X_0\|_{L^1_{x,\xi}}+C\mathbb{E}\bigg(\int_s^T\|X\|_{L^1_{x,\xi}}^2\,\mathrm{d} t\bigg)^{1/2}\\ &\leq \mathbb{E}\|X_0\|_{L^1_{x,\xi}}+C\mathbb{E}\bigg(\sup_{s\leq t\leq T}\|X\|_{L^1_{x,\xi}}\bigg)^{1/2}\bigg(\int_s^T\|X\|_{L^1_{x,\xi}}\,\mathrm{d} t\bigg)^{1/2}\\ &\leq \mathbb{E}\|X_0\|_{L^1_{x,\xi}}+\frac{1}{2}\mathbb{E}\sup_{s\leq t\leq T}\|X\|_{L^1_{x,\xi}}+C\,\mathbb{E}\int_s^T\|X\|_{L^1_{x,\xi}}\,\mathrm{d} t\\ &\leq C\,\mathbb{E}\|X_0\|_{L^1_{x,\xi}}+\frac{1}{2}\mathbb{E}\sup_{s\leq t\leq T}\|\mathcal{S}(t,s)X_0\|_{L^1_{x,\xi}} \end{align*} and the proof is complete. \end{proof} \end{lemma} \begin{cor}\label{prop:update1} The family $\mathcal{S}=\{\mathcal{S}(t,s),\,0\leq s\leq t\leq T\}$ consists of bounded linear operators on $L^2(\Omega;L^1(\mathbb{R}^N\times\mathbb{R}))$. In particular, if $X_0\in L^2(\Omega;L^1(\mathbb{R}^N\times\mathbb{R}))$ then \begin{equation}\label{eq:61} \sup_{0\leq s\leq T}\mathbb{E}\sup_{s\leq t\leq T}\|\mathcal{S}(t,s)X_0\|^2_{L^1_{x,\xi}}\leq C \,\mathbb{E}\|X_0\|^2_{L^1_{x,\xi}}. \end{equation} \begin{proof} The proof follows the lines of Lemma \ref{prop:update}.
The key observation is that if $X_0$ is nonnegative, bounded and compactly supported in $(x,\xi)$ then, on the one hand, \begin{equation*} \begin{split} \mathbb{E}\|\mathcal{S}(t,s)X_0\|^2_{L^1_{x,\xi}}&\leq C\,\mathbb{E}\|X_0\|^2_{L^1_{x,\xi}}+C\, \mathbb{E}\bigg|\int_s^t \langle X,\partial_\xi g\rangle\,\mathrm{d} W\bigg|^2\\ &\leq C\,\mathbb{E}\|X_0\|^2_{L^1_{x,\xi}}+C\, \mathbb{E}\int_s^t\|X(r)\|_{L^1_{x,\xi}}^2\,\mathrm{d} r\\ \end{split} \end{equation*} and the Gronwall lemma implies \begin{equation}\label{eq:71} \begin{split} \mathbb{E}\|\mathcal{S}(t,s)X_0\|^2_{L^1_{x,\xi}}&\leq C\,\mathbb{E}\|X_0\|^2_{L^1_{x,\xi}}. \end{split} \end{equation} On the other hand, the Burkholder-Davis-Gundy inequality and \eqref{eq:71} imply \begin{align*} \mathbb{E}\sup_{s\leq t\leq T}\|\mathcal{S}(t,s)X_0\|^2_{L^1_{x,\xi}}&\leq C\,\mathbb{E}\|X_0\|^2_{L^1_{x,\xi}}+C\,\mathbb{E}\sup_{s\leq t\leq T}\bigg|\int_s^t\langle X,\partial_\xi g\rangle\,\mathrm{d} W\bigg|^2\\ &\leq C\,\mathbb{E}\|X_0\|^2_{L^1_{x,\xi}}+C\,\mathbb{E}\int_s^T\|X\|_{L^1_{x,\xi}}^2\,\mathrm{d} t\\ &\leq C\,\mathbb{E}\|X_0\|^2_{L^1_{x,\xi}}.
\end{align*} \end{proof} \end{cor} \begin{proof}[Proof of Theorem \ref{thm:bgkconvergence}] In view of Proposition \ref{lemma:7}, we remark that $F^\varepsilon$ also satisfies the following weak formulation of \eqref{eq:bgk2}: let $\Phi\in C_c^2(\mathbb{R}^N\times\mathbb{R})$ and $\alpha\in C^1_c([0,T))$; then \begin{equation}\label{formul} \begin{split} \int_0^T&\big\langle F^\varepsilon(t),\Phi(\theta_t)\big\rangle\partial_t\alpha(t)\,\mathrm{d} t+\big\langle F_0^\varepsilon,\Phi\big\rangle\alpha(0)\\ &=\int_0^T\langle\partial_\xi F^\varepsilon(t) g,\Phi(\theta_t)\rangle\alpha(t)\,\mathrm{d} W(t)-\frac{1}{2}\int_0^T\langle\partial_\xi(G^2\partial_\xi F^\varepsilon(t)),\Phi(\theta_t)\rangle\alpha(t)\,\mathrm{d} t\\ &\hspace{1cm}-\frac{1}{\varepsilon}\int_0^T\big\langle\mathbf{1}_{u^\varepsilon(t)>\xi}-F^\varepsilon(t),\Phi(\theta_t)\big \rangle\alpha(t)\,\mathrm{d} t, \end{split} \end{equation} and a similar expression holds true for $f^\varepsilon$, namely, it satisfies the weak formulation of \eqref{bgk4}. Taking the limit on the left hand side of \eqref{formul} is quite straightforward. Indeed, according to the representation formula \eqref{eq:sol}, the family $(F^\varepsilon)$ is bounded in $L^\infty(\Omega\times[0,T]\times\mathbb{R}^N\times\mathbb{R})$; more precisely, $F^\varepsilon\in[0,1],\,\varepsilon\in(0,1).$ Therefore, by the Banach-Alaoglu theorem, there exists $F\in L^\infty(\Omega\times[0,T]\times\mathbb{R}^N\times\mathbb{R})$ such that, up to subsequences, \begin{equation}\label{weakstar} F^\varepsilon\overset{w^*}{\longrightarrow}F\quad\text{in}\quad L^\infty(\Omega\times[0,T]\times\mathbb{R}^N\times\mathbb{R}).
\end{equation} As a consequence, $$\int_0^T\big\langle F^\varepsilon(t),\Phi(\theta_t)\big\rangle\partial_t\alpha(t)\,\mathrm{d} t\overset{w^*}{\longrightarrow} \int_0^T\big\langle F(t),\Phi(\theta_t)\big\rangle\partial_t\alpha(t)\,\mathrm{d} t\quad\text{in}\quad L^\infty(\Omega),$$ $$\frac{1}{2}\int_0^T\langle\partial_\xi( G^2\partial_\xi F^\varepsilon(t)), \Phi(\theta_t)\rangle\alpha(t)\,\mathrm{d} t\overset{w^*}{\longrightarrow}\frac{1}{2}\int_0^T\langle\partial_\xi( G^2\partial_\xi F(t)), \Phi(\theta_t)\rangle\alpha(t)\,\mathrm{d} t\quad\text{in}\quad L^\infty(\Omega).$$ According to the hypotheses on the initial data, $$\big\langle F_0^\varepsilon,\Phi\big\rangle\alpha(0)=\big\langle \mathbf{1}_{u_0>\xi},\Phi\big\rangle\alpha(0).$$ We intend to prove a similar convergence result for the stochastic term as well. Since $$\big\langle F^\varepsilon(t),\partial_\xi\big(g\Phi(\theta_t)\big)\big\rangle\alpha(t)\overset{w}{\longrightarrow}\big\langle F(t),\partial_\xi\big(g\Phi(\theta_t)\big)\big\rangle\alpha(t)\quad \text{in} \quad L^2(\Omega\times [0,T])$$ and the stochastic integral $\varPhi\mapsto\int_0^T\varPhi\,\mathrm{d} W$, regarded as a bounded linear operator from $L^2(\Omega\times[0,T])$ to $L^2(\Omega)$, is weakly continuous, it follows that $$\int_0^T\langle\partial_\xi F^\varepsilon(t) g,\Phi(\theta_t)\rangle\alpha(t)\,\mathrm{d} W(t)\overset{w}{\longrightarrow}\int_0^T\langle\partial_\xi F(t) g,\Phi(\theta_t)\rangle\alpha(t)\,\mathrm{d} W(t)\quad\text{in}\quad L^2(\Omega).$$ Furthermore, since we have established convergence of all the terms in \eqref{formul} except for the third one on the right hand side, multiplying \eqref{formul} by $\varepsilon$ gives the convergence of this remaining term to $0$, that is, \begin{equation}\label{klad}
\int_0^T\big\langle\mathbf{1}_{u^\varepsilon(t)>\xi}-F^\varepsilon(t),\Phi(\theta_t)\big\rangle\alpha(t)\,\mathrm{d} t\overset{}{\longrightarrow} 0\quad\text{in}\quad L^2(\Omega). \end{equation} As the next step, we show that the same convergence holds true if we replace the test function $\Phi(\theta)\alpha$ by a general function $\beta\in L^1([0,T]\times\mathbb{R}^N\times\mathbb{R})$. To this end, recall that linear combinations of tensor functions of the form $\Phi\alpha$, where $\Phi\in C^2_c(\mathbb{R}^N\times\mathbb{R})$, $\alpha\in C^1_c([0,T))$, are dense in $L^1([0,T]\times\mathbb{R}^N\times\mathbb{R})$. Hence, given $\beta\in L^1([0,T]\times\mathbb{R}^N\times\mathbb{R})$ and $\delta>0$, there exists $\sum_{i=1}^n\Phi_i\alpha_i$ with $\Phi_i\in C^2_c(\mathbb{R}^N\times\mathbb{R})$ and $\alpha_i\in C^1_c([0,T))$, $i=1,\dots,n$, such that $$\bigg\|\beta-\sum_{i=1}^n\Phi_i\alpha_i\bigg\|_{L^1_{t,x,\xi}}<\delta,$$ so, due to the fact that $\mathbf{1}_{u^\varepsilon>\xi}-F^\varepsilon\in [-1,1]$, we obtain \begin{align*} \bigg|\int_0^T\big\langle\mathbf{1}_{u^\varepsilon(t)>\xi}-F^\varepsilon(t),\beta(t,\theta_t)\big\rangle\,\mathrm{d} t\bigg|&\leq \delta+\sum_{i=1}^n\bigg|\int_0^T\big\langle\mathbf{1}_{u^\varepsilon(t)>\xi}-F^\varepsilon(t),\Phi_i(\theta_t)\big\rangle\alpha_i(t)\,\mathrm{d} t\bigg| \end{align*} and thus $$\int_0^T\big\langle\mathbf{1}_{u^\varepsilon(t)>\xi}-F^\varepsilon(t),\beta(t,\theta_t)\big\rangle\,\mathrm{d} t\overset{}{\longrightarrow} 0\quad\text{in}\quad L^2(\Omega).$$ Consequently, we may take $\beta(t,\pi_t)$ instead of $\beta$ to finally deduce $$\mathbf{1}_{u^\varepsilon>\xi}- F^\varepsilon\overset{w^*}{\longrightarrow} 0\quad\text{in}\quad L^\infty(\Omega\times[0,T]\times\mathbb{R}^N\times\mathbb{R})$$ and, in particular, for all $\Phi\in C^1_c(\mathbb{R})$,\footnote{Here $\langle\cdot,\cdot\rangle_\xi$ denotes the duality between the space of distributions on
$\mathbb{R}$ and $C^1_c(\mathbb{R})$.} \begin{equation}\label{lll} \langle\partial_\xi\mathbf{1}_{u^\varepsilon>\xi}-\partial_\xi F^\varepsilon,\Phi\rangle_\xi\overset{w^*}{\longrightarrow} 0\quad\text{in}\quad L^\infty(\Omega\times[0,T]\times\mathbb{R}^N). \end{equation} In order to obtain the convergence in the remaining term of \eqref{formul}, and in view of the kinetic formulation \eqref{eq:weakkinformul}, we need to show that the term $\frac{1}{\varepsilon}(\mathbf{1}_{u^\varepsilon>\xi}-F^\varepsilon)$ can be written as $\partial_\xi m^\varepsilon$, where $m^\varepsilon$ is a random nonnegative measure over $[0,T]\times\mathbb{R}^N\times\mathbb{R}$, bounded uniformly in $\varepsilon$. If we define \begin{equation}\label{meas} \begin{split} m^\varepsilon(\xi)&=\frac{1}{\varepsilon}\int_{-\infty}^\xi \big(\mathbf{1}_{u^\varepsilon>\zeta}-F^\varepsilon(\zeta)\big)\,\mathrm{d}\zeta=\frac{1}{\varepsilon}\int_{-\infty}^\xi \big(\chi_{u^\varepsilon}(\zeta)-f^\varepsilon(\zeta)\big)\,\mathrm{d}\zeta, \end{split} \end{equation} it is easy to check that $m^\varepsilon\geq 0$ a.s.\ since $F^\varepsilon\in[0,1]$. Indeed, $m^\varepsilon(-\infty)=m^\varepsilon(\infty)=0$ and $m^\varepsilon(t,x,\cdot)$ is increasing on $(-\infty,u^\varepsilon(t,x))$ and decreasing on $(u^\varepsilon(t,x),\infty)$. Let us proceed with a uniform estimate for $(u^\varepsilon)$ and $(m^\varepsilon)$. \begin{prop}\label{densities} The set of local densities $(u^\varepsilon)$ satisfies the uniform estimate \begin{equation*} \mathbb{E}\sup_{0\leq t\leq T}\|u^\varepsilon(t)\|^2_{L^1_x}\leq C\,\mathbb{E}\|u_0\|^2_{L^1_x}.
\end{equation*} \begin{proof} It follows from the definition of $u^\varepsilon$ in \eqref{dens} and from \eqref{eq:hsol} that \begin{equation*} \begin{split} u^\varepsilon(t)&=\mathrm{e}^{-\frac{t}{\varepsilon}}\int_\mathbb{R}\mathcal{S}(t,0)\chi_{u_0}\,\mathrm{d}\xi+\frac{1}{\varepsilon}\int_0^t\mathrm{e}^{-\frac{t-s}{\varepsilon}}\int_\mathbb{R}\mathcal{S}(t,s)\chi_{u^\varepsilon(s)}\,\mathrm{d}\xi\,\mathrm{d} s. \end{split} \end{equation*} Let us now define the following auxiliary function $$H(s)=\left|\int_\mathbb{R}\mathcal{S}(t,s)\chi_{u^\varepsilon(s)}\,\mathrm{d}\xi\right|.$$ Then $$H(t)\leq \mathrm{e}^{-\frac{t}{\varepsilon}} H(0)+(1-\mathrm{e}^{-\frac{t}{\varepsilon}})\max_{0\leq s\leq t} H(s)$$ and we conclude that $H(t)\leq H(0),\,t\in[0,T]$. In order to estimate $H(0)$, we apply Corollary \ref{prop:update1} and obtain \begin{equation*} \begin{split} \mathbb{E}\sup_{0\leq t\leq T}\|u^\varepsilon(t)\|^2_{L^1_x}&\leq \mathbb{E}\sup_{0\leq t\leq T}\big\|\mathcal{S}(t,0)\chi_{u_0}\big\|^2_{L^1_{x,\xi}} \leq C\,\mathbb{E}\|u_0\|^2_{L^{1}_{x}}. \end{split} \end{equation*} \end{proof} \end{prop} \begin{prop}\label{kin} For all $t^*\in[0,T]$ it holds true that \begin{equation}\label{kin1} \mathbb{E}\bigg|\int_{[0,t^*]\times\mathbb{R}^N\times\mathbb{R}}\,\mathrm{d} m^\varepsilon(t,x,\xi)\bigg|^2+\mathbb{E}\langle f^\varepsilon(t^*),\xi\rangle^2\leq C\big(\mathbb{E}\|u_0\|_{L^2_x}^4+\mathbb{E}\|u_0\|^2_{L^1_{x}}\big). \end{equation} \begin{proof} Here we follow the ideas of \cite{gess}. Let $\{0=t_0<t_1<\cdots<t_n=t^*\}$ be a partition of $[0,t^*]$ with step size $h$, to be chosen below. 
It follows immediately from the formula \eqref{eq:hsol} that for every $t_i\in[0,t^*)$, $t\mapsto f^\varepsilon(t_i+t)$ is a solution to \eqref{bgk4} on $[0,t^*-t_i]$ with the initial condition $f^\varepsilon(t_i)$, and therefore the corresponding version of \eqref{eq:bgkweak} holds true, namely, for all $\Phi\in C^1_c(\mathbb{R}^N\times\mathbb{R})$ and all $t\in[t_i,t_{i+1}]$ \begin{align*} \mathrm{d} \langle f^\varepsilon(t),\Phi(\theta_{t_i,t})\rangle&=-\langle \partial_\xi f^\varepsilon g,\Phi(\theta_{t_i,t})\rangle\,\mathrm{d} W+\frac{1}{2}\langle\partial_\xi(G^2\partial_\xi f^\varepsilon),\Phi(\theta_{t_i,t})\rangle\,\mathrm{d} t+\langle\partial_\xi m^\varepsilon,\Phi(\theta_{t_i,t})\rangle. \end{align*} Now, we need to test by $\Phi(\xi)=\xi$. Since $f^\varepsilon\in L^\infty(0,T;L^1(\mathbb{R}^N\times\mathbb{R}))$ a.s., we can test by constants; in particular, we do not need compactly supported test functions. Therefore, let $\Phi_R\in C^1(\mathbb{R})$ be an approximation of $\Phi$ which is bounded, monotone increasing, i.e. $\partial_\xi\Phi_R\geq0$, and preserves the sign, i.e. $\xi\sgn\Phi_R(\xi)\geq 0$, and which satisfies $|\partial_\xi\Phi_R|,|\partial^2_\xi\Phi_R|\leq C$ uniformly in $R$. Using the weak formulation for $f^\varepsilon$ above we deduce \begin{align*} \int_{t_i}^{t}\langle m^\varepsilon,\partial_\xi\Phi_R(\theta^0_{t_i,s})\rangle\,\mathrm{d} s+\langle f^\varepsilon(t),\Phi_R(\theta^0_{t_i,t})\rangle&=\langle f^\varepsilon(t_i),\Phi_R\rangle-\int_{t_i}^{t}\langle\partial_\xi f^\varepsilon g,\Phi_R(\theta^0_{t_i,s})\rangle\,\mathrm{d} W_s\\ &\quad+\frac{1}{2}\int_{t_i}^{t}\langle\partial_\xi(G^2\partial_\xi f^\varepsilon),\Phi_R(\theta^0_{t_i,s})\rangle\,\mathrm{d} s.
\end{align*} Observe that $0\leq \sgn(\xi) f^\varepsilon(\xi)\leq 1$ as a consequence of \eqref{eq:hsol}. Since also $\sgn(\xi)\theta^0_{t_i,t}(\xi)\geq 0$ due to the assumption \eqref{eq:null}, the second term on the left hand side is nonnegative. Moreover, on a small time interval $\partial_\xi\theta^0_{t_i,t}$ remains close to its initial value; that is, choosing $h$ sufficiently small (which can be justified using Theorem \ref{thm:existence}) we may assume that for every $i$ $$\inf_{t_i\leq t\leq t_{i+1}}\partial_\xi\theta_{t_i,t}^0\geq \frac{1}{2},$$ and consequently the first term on the left hand side is nonnegative as well. Thus we deduce \begin{align*} \begin{aligned} &\mathbb{E}\bigg|\int_{t_i}^{t}\langle m^\varepsilon,(\partial_\xi\Phi_R)(\theta^0_{t_i,s})\partial_\xi\theta^0_{t_i,s}\rangle\,\mathrm{d} s\bigg|^2+\mathbb{E}\langle f^\varepsilon(t),\Phi_R(\theta^0_{t_i,t})\rangle^2\\ &\leq C\,\mathbb{E}\langle f^\varepsilon(t_i),\Phi_R\rangle^2+C\,\mathbb{E}\bigg|\int_{t_i}^{t}\langle\partial_\xi f^\varepsilon g,\Phi_R(\theta^0_{t_i,s})\rangle\,\mathrm{d} W_s\bigg|^2+C\,\mathbb{E}\bigg|\int_{t_i}^{t}\langle\partial_\xi(G^2\partial_\xi f^\varepsilon),\Phi_R(\theta^0_{t_i,s})\rangle\,\mathrm{d} s\bigg|^2. \end{aligned} \end{align*} Moreover, it follows from \eqref{eq:hsol}, Lemma \ref{prop:update} and Proposition \ref{densities} that \begin{align}\label{eq:9} \mathbb{E}\|f^\varepsilon(t)\|^2_{L^1_{x,\xi}}\leq \sup_{0\leq s\leq t\leq T}\mathbb{E}\|\mathcal{S}(t,s)\chi_{u^\varepsilon(s)}\|^2_{L^1_{x,\xi}}\leq C\,\mathbb{E}\|u_0\|^2_{L^1_{x}} \end{align} and therefore the third term on the right hand side can be estimated as follows: \begin{align*} \mathbb{E}\bigg|\int_{t_i}^{t}\langle\partial_\xi(G^2\partial_\xi f^\varepsilon),\Phi_R(\theta^0_{t_i,s})\rangle\,\mathrm{d} s\bigg|^2&\leq C(t-t_i)^2\,\mathbb{E}\|u_0\|^2_{L^1_{x}},
\end{align*} where the constant $C$ does not depend on $R$, due to the assumption on the derivatives of $\Phi_R$ above. For the stochastic integral we have \begin{align*} \mathbb{E}\bigg|\int_{t_i}^{t}\langle\partial_\xi f^\varepsilon g,\Phi_R(\theta^0_{t_i,s})\rangle\,\mathrm{d} W_s\bigg|^2&=\mathbb{E}\int_{t_i}^{t}\langle\partial_\xi f^\varepsilon g,\Phi_R(\theta^0_{t_i,s})\rangle^2\,\mathrm{d} s\\ &\leq C\,\mathbb{E} \int_{t_i}^t\langle f^\varepsilon,\Phi_R(\theta^0_{t_i,s})\rangle^2\,\mathrm{d} s+C\,\mathbb{E}\int_{t_i}^t \|f^\varepsilon\|_{L^1_{x,\xi}}^2\,\mathrm{d} s\\ &\leq C\,\mathbb{E} \int_{t_i}^t\langle f^\varepsilon,\Phi_R(\theta^0_{t_i,s})\rangle^2\,\mathrm{d} s+C(t-t_i)\,\mathbb{E}\|u_0\|_{L^1_x}^2, \end{align*} and hence the Gronwall lemma yields \begin{align}\label{eq:8a} \begin{aligned} &\mathbb{E}\bigg|\int_{t_i}^{t}\langle m^\varepsilon,(\partial_\xi\Phi_R)(\theta^0_{t_i,s})\partial_\xi\theta^0_{t_i,s}\rangle\,\mathrm{d} s\bigg|^2+\mathbb{E}\langle f^\varepsilon(t),\Phi_R(\theta^0_{t_i,t})\rangle^2\leq C_h\Big(\mathbb{E}\langle f^\varepsilon(t_i),\Phi_R\rangle^2+\mathbb{E}\|u_0\|_{L^1_x}^2\Big). \end{aligned} \end{align} Therefore, if $i=0$ then we estimate the first term on the right hand side of \eqref{eq:8a} by \begin{align}\label{eq:100} \mathbb{E}\langle f^\varepsilon_0,\Phi_R\rangle^2&\leq \mathbb{E} \langle\chi_{u_0},\xi\rangle^2=\frac{1}{4}\,\mathbb{E}\|u_0\|_{L^2_x}^4 \end{align} and obtain by Fatou's lemma \begin{align}\label{eq:10} \begin{aligned} \mathbb{E}\bigg|\int_{0}^{t_{1}}&\langle m^\varepsilon,1\rangle\,\mathrm{d} t\bigg|^2+\mathbb{E}\langle f^\varepsilon(t_{1}),\theta^0_{0,t_{1}}\rangle^2\leq C_h\Big(\mathbb{E}\|u_0\|_{L^2_x}^4+\mathbb{E}\|u_0\|^2_{L^1_{x}}\Big).
\end{aligned} \end{align} In order to get a similar estimate on $[t_1,t_2]$, we go back to \eqref{eq:8a} and assume without loss of generality (using Theorem \ref{thm:existence} again) that $h$ was small enough so that for every $i$ $$\sup_{t_i\leq t\leq t_{i+1}}|\xi-\theta^0_{t_i,t}|\leq 1.$$ Consequently, by \eqref{eq:9} and \eqref{eq:10}, \begin{align*} \mathbb{E}\langle f^\varepsilon(t_1),\xi\rangle^2&\leq C\,\mathbb{E}\langle f^\varepsilon(t_{1}),\theta^0_{0,t_{1}}\rangle^2+C\,\mathbb{E}\langle f^\varepsilon(t_1),(\xi-\theta^0_{0,t_1})\rangle^2\leq C\big(\mathbb{E}\|u_0\|_{L^2_x}^4+\mathbb{E}\|u_0\|_{L^1_{x}}^2\big). \end{align*} Iterating the above technique finitely many times, the claim follows. \end{proof} \end{prop} As a consequence of Proposition \ref{densities}, the assumptions of Lemma \ref{kinetcomp} are satisfied for $\nu^\varepsilon_{t,x}=\delta_{u^\varepsilon(t,x)=\xi}$ and hence there exists a Young measure $\nu_{t,x}$, vanishing at infinity, such that $\nu^\varepsilon\rightarrow \nu$ in the sense given by that lemma. We deduce from \eqref{lll} that $\partial_\xi F=-\nu$, hence $F$ is a kinetic function. Next, we verify the second estimate in \eqref{integrov}. Due to the definition of $m^\varepsilon$ in \eqref{meas}, it follows from \eqref{kin1} that $$\mathbb{E}\bigg|\frac{1}{\varepsilon}\int_0^T\langle\chi_{u^\varepsilon(t)}-f^\varepsilon(t),\xi\rangle\,\mathrm{d} t\bigg|^2+\mathbb{E}\langle f^\varepsilon(t),\xi\rangle^2\leq C\big(\mathbb{E}\|u_0\|_{L^2_x}^4+\mathbb{E}\|u_0\|^2_{L^1_x}\big),$$ which implies \begin{equation}\label{eq:101} \mathbb{E}\bigg|\int_0^T\langle\chi_{u^\varepsilon(t)},\xi\rangle\,\mathrm{d} t\bigg|^2\leq C\big(\mathbb{E}\|u_0\|_{L^2_x}^4+\mathbb{E}\|u_0\|^2_{L^1_x}\big).
\end{equation} Rewriting the left hand side by the same argument as in \eqref{eq:100}, we deduce $$\mathbb{E}\bigg|\int_0^T\|u^\varepsilon(t)\|_{L^2_x}^2\,\mathrm{d} t\bigg|^2\leq C\big(\mathbb{E}\|u_0\|_{L^2_x}^4+\mathbb{E}\|u_0\|^2_{L^1_x}\big)$$ and as a consequence the estimate \eqref{eqqq} follows. Finally, in order to show that $F$ is a generalized kinetic solution to \eqref{eq}, we will prove that there exists a kinetic measure $m$ such that, for all $\Phi\in C^2_c(\mathbb{R}^N\times\mathbb{R})$ and $\alpha\in C^1_c([0,T))$, \begin{equation}\label{nino} \int_0^T\big\langle\partial_\xi m^\varepsilon,\Phi(\theta_t)\big\rangle\alpha(t)\,\mathrm{d} t\overset{w}{\longrightarrow}\int_0^T\big\langle\partial_\xi m,\Phi(\theta_t)\big\rangle\alpha(t)\,\mathrm{d} t\quad\text{in}\quad L^1(\Omega). \end{equation} According to \eqref{kin1} and the fact that $m^\varepsilon\geq0$ a.s., we deduce that each $m^\varepsilon$ is a.s.\ a nonnegative finite measure over $[0,T]\times\mathbb{R}^N\times\mathbb{R}$. Moreover, due to the convergence in \eqref{formul}, we obtain that for all $\Phi\in C^2_c(\mathbb{R}^N\times\mathbb{R})$ and all $\alpha\in C^1_c([0,T))$ the left hand side of \eqref{nino} indeed converges weakly in $L^1(\Omega)$ to some limit. In addition, due to Proposition \ref{kin}, the set of measures $(m^\varepsilon)$ is bounded in $L^2_w(\Omega;\mathcal{M}_b([0,T]\times\mathbb{R}^N\times\mathbb{R}))$, i.e. the space of weak-star measurable mappings from $\Omega$ to $\mathcal{M}_b([0,T]\times\mathbb{R}^N\times\mathbb{R})$ with finite $L^2(\Omega;\mathcal{M}_b([0,T]\times\mathbb{R}^N\times\mathbb{R}))$-norm.
Since $L^2_w(\Omega;\mathcal{M}_b([0,T]\times\mathbb{R}^N\times\mathbb{R}))$ is the dual of the separable space $L^2(\Omega;C_0([0,T]\times\mathbb{R}^N\times\mathbb{R}))$, the Banach--Alaoglu theorem applies and yields the existence of $m\in L^2_w(\Omega;\mathcal{M}_b([0,T]\times\mathbb{R}^N\times\mathbb{R}))$ such that, up to a subsequence, \begin{equation}\label{convm1} m^\varepsilon\overset{w^*}{\longrightarrow}m\quad\text{in}\quad L^2_w(\Omega;\mathcal{M}_b([0,T]\times\mathbb{R}^N\times\mathbb{R})). \end{equation} This in turn verifies the convergence (and the identification of the limit) in \eqref{nino}. It remains to prove that $m$ is indeed a kinetic measure. Clearly, since all the $m^\varepsilon$ are nonnegative, the same remains valid for $m$. The points (i) and (ii) of Definition \ref{def:kinmeasure} follow directly from the construction of $m$ and the uniform estimate \eqref{kin1}. The remaining point, Definition \ref{def:kinmeasure}\,(iii), can be justified as follows. Let $\Phi\in C_0(\mathbb{R}^N\times\mathbb{R})$ and define $$x^\varepsilon(t):=\int_{[0,t]\times\mathbb{R}^N\times\mathbb{R}}\Phi(x,\xi)\,\mathrm{d} m^\varepsilon(s,x,\xi).$$ If $\vartheta\in L^\infty(\Omega)$ and $\gamma\in L^\infty(0,T)$ then by Fubini's theorem $$\mathbb{E}\bigg[\vartheta\int_0^T\gamma(t)x^\varepsilon(t)\,\mathrm{d} t\bigg]=\mathbb{E}\bigg[\vartheta\int_{[0,T]\times\mathbb{R}^N\times\mathbb{R}}\Phi(x,\xi)\varGamma(s)\,\mathrm{d} m^\varepsilon(s,x,\xi)\bigg],$$ where $\varGamma(s)=\int_s^T\gamma(t)\,\mathrm{d} t$ is continuous and $\varGamma(T)=0$.
Since the right hand side converges to $$\mathbb{E}\bigg[\vartheta\int_{[0,T]\times\mathbb{R}^N\times\mathbb{R}}\Phi(x,\xi)\varGamma(s)\,\mathrm{d} m(s,x,\xi)\bigg]$$ due to \eqref{convm1}, we may apply Fubini's theorem again to deduce that $$x(t):=\int_{[0,t]\times\mathbb{R}^N\times\mathbb{R}}\Phi(x,\xi)\,\mathrm{d} m(s,x,\xi)$$ is a weak limit in $L^1(\Omega\times[0,T])$ of progressively measurable processes and is therefore itself progressively measurable. Altogether, we have proved that $m$ is a kinetic measure and $F$ is a generalized kinetic solution to \eqref{eq}. Since any generalized kinetic solution is actually a kinetic one, due to the reduction Theorem \ref{thm:reduction}, it follows that $F=\mathbf{1}_{u>\xi}$ and $\nu=\delta_u$, where $u\in L^4(\Omega;L^2(0,T;L^2(\mathbb{R}^N)))$ is the unique kinetic solution to \eqref{eq}. The weak-star convergence of $f^\varepsilon$ to $\chi_u$ follows immediately from \eqref{weakstar}, and therefore the proof is complete. \end{proof} \section*{Acknowledgment} The author wishes to thank the anonymous referees for providing many useful suggestions. \end{document}
\begin{document} \title{A sieve problem and its application} \author{Andreas Weingartner} \address{Department of Mathematics, Southern Utah University, 351 West University Boulevard, Cedar City, Utah 84720, USA} \email{[email protected]} \begin{abstract} Let $\theta$ be an arithmetic function and let $\mathcal{B}$ be the set of positive integers $n=p_1^{\alpha_1} \cdots p_k^{\alpha_k}$, which satisfy $p_{j+1} \le \theta ( p_1^{\alpha_1}\cdots p_{j}^{\alpha_{j}})$ for $0\le j < k$. We show that $\mathcal{B}$ has a natural density, provide a criterion to determine whether this density is positive, and give various estimates for the counting function of $\mathcal{B}$. When $\theta(n)/n$ is non-decreasing, the set $\mathcal{B}$ coincides with the set of integers $n$ whose divisors $1=d_1< d_2 < \ldots <d_{\tau(n)}=n$ satisfy $d_{j+1} \le \theta( d_j )$ for $1\le j <\tau(n)$. \end{abstract} \maketitle \section{Introduction} Let $\theta$ be an arithmetic function, $\theta: \mathbb{N}\to \mathbb{R}\cup \{\infty\}$. We write $\mathcal{B}$ (or $\mathcal{B}_\theta$) to denote the set of positive integers containing $n=1$ and all those $n \ge 2$ with prime factorization $n=p_1^{\alpha_1} \cdots p_k^{\alpha_k}$, \ $p_1< p_2< \ldots < p_k$, which satisfy \begin{equation}\label{Bdef} p_{j+1} \le \theta (p_1^{\alpha_1}\cdots p_{j}^{\alpha_{j}} ) \qquad (0\le j < k), \end{equation} where $p_1^{\alpha_1}\cdots p_{j}^{\alpha_{j}}$ is understood to be $1$ when $j=0$. Let $B(x)$ (or $B_\theta(x)$) be the number of positive integers $n\le x$ in $\mathcal{B}$. The following list shows some examples of $\theta$ and its corresponding set $\mathcal{B}$. \begin{itemize} \item If $\theta(n)=2n$, then $\mathcal{B}$ is the set of integers with $2$-dense divisors, i.e. integers $n$ which have a divisor in every interval $[y,2y]$ for $1\le y \le n$ (see \cite{Saias1,Ten86,PDD}). 
\item If $\theta(n)=\sigma(n)+1$, where $\sigma(n)$ is the sum-of-divisors function, then $\mathcal{B}$ is the set of practical numbers, i.e. integers $n$ such that every $1\le m\le n$ can be written as a sum of distinct positive divisors of $n$ (see \cite{Saias1,Ten86,PDD} and the references therein). \item If $\theta(n) = n+1$, then $\mathcal{B}$ is the set of even $\varphi$-practical numbers, i.e. even integers $n$ such that the polynomial $X^n-1$ has a divisor in $\mathbb{Z}[X]$ of every degree from $1$ to $n$ (see \cite{PTW,Thom,Thom2}). \end{itemize} Building on earlier work by Tenenbaum \cite{Ten86} and Saias \cite{Saias1}, we found \cite[Theorem 1.2]{PDD} that $$B(x)\sim \frac{c_\theta x}{\log x} \qquad (x \to \infty),$$ for some positive constant $c_\theta$, provided $\theta(n)$ satisfies $$\max(2,n)\le \theta(n) \ll \frac{n \log 2n}{(\log\log 3n)^{1+\varepsilon} }\quad (n\ge 1),$$ for some $\varepsilon>0$. This applies to each of the three examples listed above. In this note, our goal is to investigate the set $\mathcal{B}$ in general, without any restrictions on $\theta$. We will show that $\mathcal{B}$ always has a natural density (Theorem \ref{thmB}) and provide a criterion to determine whether this natural density is positive or zero (Theorem \ref{cor}). We give estimates for $B(x)$ with explicit error terms, first without any assumptions on $\theta$ (Theorem \ref{thmB}), and then under certain conditions on the size of $\theta(n)$ (Corollary \ref{corB} and Theorem \ref{thm4}). As an application, we consider the following set, related to the distribution of divisors. Let $\mathcal{D}$ be the set of positive integers containing $n=1$ and all those $n \ge 2$, whose divisors $1=d_1< d_2 < \ldots <d_{\tau(n)}=n$ satisfy \begin{equation}\label{Ddef} d_{j+1} \le \theta( d_j ) \qquad (1\le j <\tau(n)). \end{equation} We write $D(x)$ for the number of positive integers $n\le x$ in $\mathcal{D}$. 
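The recursive condition \eqref{Bdef} is straightforward to check by machine, which is convenient for experimenting with different choices of $\theta$. The following Python sketch (the helper names are ours, not from the text) walks through the prime powers of $n$ in increasing order of $p$ and tests the chain condition; with $\theta(n)=\sigma(n)+1$ it recovers the practical numbers mentioned above.

```python
def prime_powers(n):
    """Ordered prime-power factorization: 12 -> [(2, 4), (3, 3)]."""
    out, p = [], 2
    while p * p <= n:
        if n % p == 0:
            q = 1
            while n % p == 0:
                n //= p
                q *= p
            out.append((p, q))
        p += 1
    if n > 1:
        out.append((n, n))
    return out

def sigma(n):
    """Sum of divisors of n (naive)."""
    return sum(d for d in range(1, n + 1) if n % d == 0)

def in_B(n, theta):
    """Chain condition (Bdef): p_{j+1} <= theta(p_1^a_1 ... p_j^a_j) for 0 <= j < k."""
    m = 1  # the partial product p_1^a_1 ... p_j^a_j, starting at 1 for j = 0
    for p, q in prime_powers(n):
        if p > theta(m):
            return False
        m *= q
    return True

theta = lambda n: sigma(n) + 1  # for this theta, B is the set of practical numbers
print([n for n in range(1, 31) if in_B(n, theta)])
# [1, 2, 4, 6, 8, 12, 16, 18, 20, 24, 28, 30]
```

The same routine with `theta = lambda n: 2 * n` or `theta = lambda n: n + 1` enumerates the other two example sets.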
Theorem \ref{DB} shows that $\mathcal{B}=\mathcal{D}$ provided $\theta(n)/n$ is non-decreasing, so that all results concerning $B(x)$ also apply to $D(x)$ under this assumption. \section{Statement of results} Let $P^+(n)$ (resp. $P^-(n)$) denote the largest (resp. smallest) prime factor of $n\ge 2$ and put $ P^+(1)=1$, $ P^-(1)=\infty$. Note that replacing $\theta(n)$ by $\max(\theta(n),P^+(n))$ in \eqref{Bdef} leaves the set $\mathcal{B}$ unchanged, because of the assumption $p_1< p_2< \ldots < p_k$. Moreover, if $\theta(1)<2$ then $\mathcal{B}=\{1\}$. Thus, we may assume from now on, without any loss of generality, that \begin{equation}\label{theta} \theta: \mathbb{N}\to \mathbb{R}\cup \{\infty\}, \quad \theta(1)\ge 2, \quad \theta(n)\ge P^+(n) \quad (n\ge 2). \end{equation} Let $\chi(n)$ be the characteristic function of the set $\mathcal{B}$. We shall see in Lemma \ref{lemL} that the series \begin{equation*} L=\sum_{n\ge 1} \frac{\chi(n)}{n} \prod_{p\le \theta(n)} \left(1-\frac{1}{p}\right) \end{equation*} converges to a value $0\le L \le 1$. Theorem \ref{thmB} shows that, for every choice of $\theta$, the set $\mathcal{B}$ has a natural density, which is given by $1-L$. \begin{theorem}\label{thmB} Let $\theta$ satisfy \eqref{theta}. We have \begin{equation}\label{thmB2} B(x) = (1-L)x + o(x). \end{equation} More precisely, \begin{equation}\label{thmB1} B(x)=(1-L)x +O\left(1 + x \sum_{n\ge 1} \frac{\chi(n)}{n \log \theta(n)} \exp\left(-\frac{\max(0,\log (x/n))}{3\log \theta(n)}\right)\right). \end{equation} \end{theorem} Theorem \ref{cor} provides a simple criterion to determine whether $L=1$ and $B(x)=o(x)$, or $L<1$ and $B(x)\sim (1-L)x$. \pagebreak \begin{theorem}\label{cor} Let $\theta$ satisfy \eqref{theta}. \begin{enumerate} \item[(i)] Assume that $\theta(n) \le f(n)$ where $n \log f(n)$ is increasing. If \begin{equation*} \sum_{n\ge 1} \frac{1}{n \log f(n)} \end{equation*} diverges, then $L=1$ and $B(x)=o(x)$. 
\item[(ii)] Assume that $\theta(n) \ge f(n)\ge 2$, $f(n)\gg P^+(n)$, and that for every $t\ge 1$ there exists an $r\in \mathbb{N}$, such that $f(2^r n) \ge t f(n)$ for all $n\ge 1$. If $$ \sum_{n\ge 1} \frac{1}{n \log f(n)} $$ converges, then $L<1$ and $B(x)\sim(1-L)x$. \item[(iii)] $B(x) \sim x $ $ \Leftrightarrow $ $L=0$ $ \Leftrightarrow $ $\theta(n)=\infty $ for all $ n\ge 1$ $ \Leftrightarrow $ $B(x)=[x]$. \end{enumerate} \end{theorem} Note that the three examples of $\theta$ listed in the introduction satisfy $L=1$ and $B(x)=o(x)$. For an instance where $L<1$, consider $\theta(n)=2^n$, for which $L=0.7734...$ by numerical computation. (For $n\ge 30$ we used estimates for the Euler product with effective error bounds due to Rosser and Schoenfeld \cite[Theorem 7]{RS}.) Hence \eqref{thmB2} implies that $B(x)=cx(1+o(1))$ with $c=1-L=0.2265...$, while \eqref{thmB1} shows that we have $B(x)=cx(1+O(1/\log x))$, when $\theta(n)=2^n$. Corollary \ref{corB} generalizes this example to $\log\theta(n) \asymp n^a$, where $a$ is a positive constant. We also consider the case $\log\theta(n) \asymp (\log 2n)^a$, where $a>1$ is constant. (The notation $f(n) \asymp g(n)$ means that $f(n)\ll g(n)$ (i.e. $f(n)=O(g(n))$) and $g(n)\ll f(n)$.) \begin{corollary}\label{corB} Let $\theta$ satisfy \eqref{theta}. \begin{enumerate} \item[(i)] If $\log\theta(n) \asymp n^a$ for some constant $a>0$, then $0<L<1$ and $$B(x) = (1-L) x\left(1 + O\left(\frac{1}{\log x}\right)\right).$$ \item[(ii)] If $\log\theta(n) \asymp (\log 2n)^a$ for some constant $a>1$, then $0<L<1$ and $$B(x) = (1-L) x\left(1 + O\left(\frac{1}{(\log x)^{1-1/a}}\right)\right).$$ \end{enumerate} \end{corollary} The error terms in Corollary \ref{corB} are easily derived from \eqref{thmB1}, using the trivial bound $\chi(n)\le 1$. The claim that $0<L<1$ follows directly from Theorem \ref{cor}, with (i) $f(n)=\max(2,\exp(c n^a))$ and (ii) $f(n)=\max(2,\exp(c (\log 2n)^a))$, where $c>0$ is a suitable constant. 
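For $\theta(n)=2^n$ the constant $L$ can be approximated directly from its defining series. The rough Python sketch below (our own code and naming, not the authors' computation) uses exact Euler products for primes up to $2^{20}$, Mertens' approximation $e^{-\gamma}/\log y$ beyond that, and truncates the series at $n=3000$; this already reproduces $1-L\approx 0.2265$ to about three decimal places.

```python
import math

EGAMMA = math.exp(-0.5772156649015329)  # e^{-gamma}

def primes_upto(limit):
    """Sieve of Eratosthenes."""
    s = bytearray([1]) * (limit + 1)
    s[0:2] = b"\x00\x00"
    for p in range(2, int(limit ** 0.5) + 1):
        if s[p]:
            s[p * p :: p] = bytearray(len(range(p * p, limit + 1, p)))
    return [i for i, b in enumerate(s) if b]

def prime_powers(n):
    out, p = [], 2
    while p * p <= n:
        if n % p == 0:
            q = 1
            while n % p == 0:
                n //= p
                q *= p
            out.append((p, q))
        p += 1
    if n > 1:
        out.append((n, n))
    return out

def chi(n):
    """Characteristic function of B for theta(n) = 2^n."""
    m = 1
    for p, q in prime_powers(n):
        if p > 2 ** m:
            return 0
        m *= q
    return 1

# Exact Euler products prod_{p <= 2^n} (1 - 1/p) for n <= 20.
primes = primes_upto(2 ** 20)
euler, P, i = {}, 1.0, 0
for n in range(1, 21):
    while i < len(primes) and primes[i] <= 2 ** n:
        P *= 1.0 - 1.0 / primes[i]
        i += 1
    euler[n] = P

def euler_prod(n):
    # Mertens' theorem beyond 2^20; relative error O(1/log^2 y) is negligible here.
    return euler[n] if n <= 20 else EGAMMA / (n * math.log(2))

L = sum(chi(n) / n * euler_prod(n) for n in range(1, 3001))
print(round(1 - L, 4))
```

The tail of the series beyond $n=3000$ contributes less than $10^{-3}$, since the summands are $O(1/n^2)$.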
Other examples of $\theta$, for which $L<1$, can be dealt with similarly. When $B(x)=o(x)$, we need a different strategy for obtaining an asymptotic formula for $B(x)$, since the estimate \eqref{thmB1} provides only an upper bound for $B(x)$ whenever $L=1$. We will focus on the case $\theta(n) \asymp n^a$, where $a\ge 1$ is constant. Theorem \ref{cor} shows that $L=1$ and $B(x)=o(x)$. Theorem \ref{thm4} generalizes \cite[Theorem 1.2]{PDD}, where the case $a=1$ is established with $\lambda_1=1$. \begin{theorem}\label{thm4} Let $\theta$ satisfy \eqref{theta} and assume $\theta(n)\asymp n^a$ for some constant $a\ge 1$. Then there are constants $c_\theta>0$ and $\lambda_a\in (0,1]$, such that \begin{equation}\label{Basymp} B(x) =\frac{ c_\theta x}{(\log x)^{\lambda_a}}\left(1+ O\left(\frac{1}{\log x}\right)\right). \end{equation} Here $s=-\lambda_a$ is the unique solution in the interval $[-1,0)$ of the equation \begin{equation}\label{lambdaeq} 0= s + \frac{e^{-\gamma}}{a (a+1)^s} +s \int_1^\infty \bigl( \omega(t) - e^{-\gamma}\bigr) \frac{dt}{(at+1)^{s+1}}, \end{equation} where $\omega(t)$ denotes Buchstab's function and $\gamma$ is Euler's constant. For $a\ge 1$, \begin{equation}\label{LB} \lambda_a>\frac{e^{-\gamma}}{a}\left(1+\frac{e^{-\gamma}\log (a+1)}{a}-\frac{0.16}{a}\right) \end{equation} and \begin{equation}\label{UB} \lambda_a< \frac{e^{-\gamma}}{a}\left(1+\frac{e^{-\gamma}\log (a+1)}{a}+\frac{\log^2(a+1)}{a^2}\right). \end{equation} \end{theorem} Figure \ref{figure1} and Table \ref{table1} show several values of $\lambda_a$, obtained by solving equation \eqref{lambdaeq} numerically. As in \cite[Theorem 1.2]{PDD} with $a=1$, one can consider a less restrictive condition on $\theta$, such as $n^a (\log n)^{-b} \ll \theta(n) \ll n^a (\log n)^{b}$, where $0\le b<1$, and establish the estimate \eqref{Basymp} with the relative error term $O(1/\log x)$ replaced by $O(1/(\log x)^{1-b})$. However, we will not pursue this here. 
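Equation \eqref{lambdaeq} lends itself to direct numerical solution. The sketch below (our own code, not the authors' Mathematica computation) tabulates Buchstab's function by the method of steps and locates the root $s=-\lambda_a$ by bisection; for $a>1$ the root lies strictly inside $(-1,0)$, while for $a=1$ it sits at the endpoint $s=-1$, so the bisection is only run for $a>1$. The values agree with Table \ref{table1}.

```python
import math

EGAMMA = math.exp(-0.5772156649015329)  # e^{-gamma}

def buchstab_table(tmax=25.0, h=1e-3):
    """Tabulate Buchstab's function on [1, tmax] by the method of steps:
    (u w(u))' = w(u-1) for u > 2, with u w(u) = 1 on [1, 2]."""
    per = int(round(1.0 / h))            # grid points per unit interval
    n = int(round((tmax - 1.0) / h))
    ts = [1.0 + i * h for i in range(n + 1)]
    w = [1.0 / ts[i] if i <= per else 0.0 for i in range(n + 1)]
    for i in range(per, n):              # trapezoidal step for t*w(t)
        w[i + 1] = (ts[i] * w[i] + 0.5 * h * (w[i - per] + w[i + 1 - per])) / ts[i + 1]
    return ts, w

def F(s, a, ts, w):
    """Right hand side of (lambdaeq); trapezoidal quadrature on [1, tmax].
    The tail is negligible since |w(t) - e^{-gamma}| <= 1/Gamma(t+1)."""
    integ, prev = 0.0, (w[0] - EGAMMA) / (a * ts[0] + 1.0) ** (s + 1.0)
    for i in range(1, len(ts)):
        cur = (w[i] - EGAMMA) / (a * ts[i] + 1.0) ** (s + 1.0)
        integ += 0.5 * (ts[i] - ts[i - 1]) * (prev + cur)
        prev = cur
    return s + EGAMMA / (a * (a + 1.0) ** s) + s * integ

def lam(a, ts, w):
    """Solve F(s) = 0 on [-1, 0) by bisection and return lambda_a = -s (a > 1)."""
    lo, hi = -1.0, -1e-9
    flo = F(lo, a, ts, w)
    for _ in range(50):
        mid = 0.5 * (lo + hi)
        fmid = F(mid, a, ts, w)
        if flo * fmid <= 0.0:
            hi = mid
        else:
            lo, flo = mid, fmid
    return -0.5 * (lo + hi)

ts, w = buchstab_table()
lambda_2 = lam(2.0, ts, w)
print(round(lambda_2, 4))
```

With the step size above, the computed roots match the truncated table entries (e.g. $\lambda_{1.5}\approx 0.5985$ and $\lambda_2\approx 0.4191$) to within a few units in the fourth decimal place.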
\begin{figure} \caption{Values of $\lambda_a$ together with the bounds \eqref{LB} and \eqref{UB}.} \label{figure1} \end{figure} \begin{table}[h] \begin{tabular}{ | l | l | } \hline $a$ & $\lambda_a$ \\ \hline 1 & 1 \\ \hline 1.1 & 0.8854... \\ \hline 1.2 & 0.7927... \\ \hline 1.3 & 0.7164... \\ \hline 1.4 & 0.6526... \\ \hline 1.5 & 0.5985... \\ \hline 1.6 & 0.5522... \\ \hline 1.7 & 0.5122... \\ \hline 1.8 & 0.4772... \\ \hline 1.9 & 0.4464... \\ \hline 2 & 0.4191... \\ \hline \end{tabular} \qquad \begin{tabular}{ | l | l | } \hline $a$ & $\lambda_a$ \\ \hline 2.5 & 0.3195... \\ \hline 3 & 0.2567... \\ \hline 3.5 & 0.2139... \\ \hline 4 & 0.1829... \\ \hline 4.5 & 0.1595... \\ \hline 5 & 0.1412... \\ \hline 6 & 0.1147... \\ \hline 7 & 0.09639... \\ \hline 8 & 0.08301... \\ \hline 9 & 0.07283... \\ \hline 10 & 0.06484... \\ \hline \end{tabular} \caption{Truncated values of $\lambda_a$.} \label{table1} \end{table} We now turn to the distribution of divisors. Let $\mathcal{D}$ be the set defined in \eqref{Ddef}. When $\theta(n)=tn$, where $t$ is constant, Tenenbaum \cite[Lemma 2.2]{Ten86} showed that $\mathcal{D}=\mathcal{B}$. We want to generalize this result as much as possible. The example $\theta(n)=n+1$, for which $\mathcal{D}=\{1,2\}$ while $\mathcal{B}$ is infinite, illustrates that some condition on $\theta$ is required to ensure equality of these two sets. The condition we need is \begin{equation}\label{thetacond} \theta(n) \le \theta(n+1), \quad m \theta(n) \le \theta(m n)\qquad (n,m\ge 1, \ \gcd(n,m)=1). \end{equation} Note that \eqref{thetacond} is satisfied if $\theta(n)/n$ is non-decreasing. \begin{theorem}\label{DB} Let $\theta$ satisfy \eqref{theta}. \begin{enumerate} \item[(i)] We have $\mathcal{D}\subseteq \mathcal{B}$, hence $D(x)\le B(x)$. \item[(ii)] If $\theta$ satisfies \eqref{thetacond}, then $\mathcal{D}=\mathcal{B}$, hence $D(x)=B(x)$. \end{enumerate} \end{theorem} As an example, consider $\theta(n)=n^2+1$.
Theorems \ref{thm4} and \ref{DB} show that the number of integers $n\le x$ whose divisors $1=d_1< d_2 < \ldots <d_{\tau(n)}=n$ satisfy $d_{j+1} \le d_j^2 +1$ for $1\le j < \tau(n)$, is given by $$D(x) = \frac{c x}{(\log x)^{\lambda_2}}\left(1+ O\left(\frac{1}{\log x}\right)\right),$$ where $c$ is a positive constant and $\lambda_2=0.4191...$ With $\theta(n)=2^n$, Corollary \ref{corB} and Theorem \ref{DB} show that the number of integers $n\le x$ whose divisors $1=d_1< d_2 < \ldots <d_{\tau(n)}=n$ satisfy $d_{j+1} \le 2^{d_j}$ for $1\le j < \tau(n)$, is given by $$D(x) = (1-L) x\left(1 + O\left(\frac{1}{\log x}\right)\right),$$ where $1-L=0.2265...$ The proof of Theorem \ref{thmB} is based on the functional equation in Lemma \ref{mainlemma} and an estimate for the number of integers without small prime factors in Lemma \ref{philemma}. Theorem \ref{cor} is established with the help of Theorem \ref{thmB}. The proof of Theorem \ref{thm4}, which requires the most work, is modeled after \cite[Theorem 1.2]{PDD}, with the added difficulty that the poles of the Laplace transform (i.e. the solutions of equation \eqref{lambdaeq}) now depend on the parameter $a$ (see Lemma \ref{lambdamu}). The proof of Theorem \ref{DB} generalizes that of Tenenbaum \cite[Lemma 2.2]{Ten86}. \section{Preliminaries} Let $$ \Phi(x,y) = \# \big\{ 1\le n\le x : P^-(n)>y \big\} $$ and define $\Phi(x,\infty)=1$ for $x\ge 1$. For $u\ge 1$, Buchstab's function $\omega(u)$ is defined as the unique continuous solution to the equation \begin{equation*} (u\omega(u))' = \omega(u-1) \qquad (u>2) \end{equation*} with initial condition $u\omega(u)=1$ for $1\le u \le 2$. Let $\omega(u)=0$ for $u<1$ and define $\omega$ at 1 and $\omega'$ at 1 and 2 by right-continuity. Let $\Gamma(u)$ denote the usual gamma function.
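For small parameters, the counting function $\Phi(x,y)$ just defined can be compared directly with its Mertens-type main term $x\prod_{p\le y}(1-1/p)$ by brute force. The short Python sketch below (helper names are ours) illustrates this for $x=100$, $y=7$.

```python
def smallest_prime_factor(n):
    """P^-(n) for n >= 2 by trial division; returns n itself when n is prime."""
    p = 2
    while p * p <= n:
        if n % p == 0:
            return p
        p += 1
    return n

def phi(x, y):
    """Phi(x, y) = #{1 <= n <= x : P^-(n) > y}; n = 1 counts, as P^-(1) = infinity."""
    return sum(1 for n in range(1, int(x) + 1) if n == 1 or smallest_prime_factor(n) > y)

# Main term x * prod_{p <= y} (1 - 1/p) for x = 100, y = 7.
main = 100.0
for p in (2, 3, 5, 7):
    main *= 1.0 - 1.0 / p
print(phi(100, 7), round(main, 2))  # the exact count 22 against the main term 22.86
```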
The calculation of the values of $\lambda_a$ in Table \ref{table1} and the approximation of several integrals in the proof of Lemma \ref{lambdamu} require estimates for integrals involving $\omega(t)-e^{-\gamma}$. For that purpose, we used exact formulas for $\omega(t)$ for $0\le t \le 5$, derived with the help of Mathematica. To estimate the contribution from $t>5$, we used a table of zeros and relative extrema of $\omega(t)-e^{-\gamma}$ on the interval $[5, 10.3355]$ due to Cheer and Goldston \cite{CG}, and the estimate (ii) from Lemma \ref{omega} for $t\ge 10.3355$. \begin{lemma}\label{omega} We have \begin{enumerate}[(i)] \item $|\omega'(u)|\le 1/\Gamma(u+1) \quad (u\ge 0)$, \item $|\omega(u)-e^{-\gamma}| \le 1/\Gamma(u+1) \quad (u\ge 0)$. \end{enumerate} \end{lemma} \begin{proof} See \cite[Lemma 2.1]{PDD}. \end{proof} \begin{lemma}\label{philemma} Let $u=\log x / \log y$. \begin{enumerate} \item[(i)] For $x\ge 1$, $y\ge 2$, we have $$ \Phi(x,y)= e^\gamma x \omega(u) \prod_{p\le y} \left(1-\frac{1}{p}\right) + O\left(\frac{y}{\log y} + \frac{x e^{-u/3}}{(\log y)^2}\right).$$ \item[(ii)] For $x\ge y \ge 2$ we have $$ \Phi(x,y)=x \prod_{p\le y} \left(1-\frac{1}{p}\right) + O\left(\frac{x e^{-u/3}}{\log y}\right).$$ \end{enumerate} \end{lemma} \begin{proof} (i) See \cite[Lemma 2.2]{PDD}. Part (ii) follows from (i) and Lemma \ref{omega}. \end{proof} \begin{lemma}\label{mainlemma} Let $\theta$ satisfy \eqref{theta}. For $x\ge 0$ we have \begin{equation*} [x]=\sum_{n\le x} \chi(n) \, \Phi(x/n, \theta(n)) . \end{equation*} \end{lemma} \begin{proof} In \cite[Lemma 2.3]{PDD} we proved this result assuming $\theta(n)<\infty$. The inclusion of $\infty$ in the possible values of $\theta$ has no effect on the proof. The basic idea is that every integer $1\le m\le x$ factors uniquely as $m=nr$, where $n \in \mathcal{B}$ and $r$ is counted in $\Phi(x/n,\theta(n))$. \end{proof} \begin{lemma}\label{lemL} Let $\theta$ satisfy \eqref{theta}.
The series \begin{equation*} L=\sum_{n\ge 1} \frac{\chi(n)}{n} \prod_{p\le \theta(n)} \left(1-\frac{1}{p}\right) \end{equation*} converges and $0\le L \le 1$. \end{lemma} \begin{proof} Lemma \ref{philemma} (ii) implies $\lim_{x\to \infty} \Phi(x,y)/x = \prod_{p\le y} (1-1/p)$. From Lemma \ref{mainlemma}, we have $$ \sum_{1\le n \le N} \frac{\chi(n)}{n} \prod_{p\le \theta(n)} \left(1-\frac{1}{p}\right) \le 1,$$ for every $N\ge 1$. The result now follows since the terms of the series are $\ge0$. \end{proof} \section{Proof of Theorems \ref{thmB} and \ref{cor}.} \begin{proof}[Proof of Theorem \ref{thmB}] Since $\Phi(x/n, \theta(n))=1$ when $n\le x < n\theta(n)$, Lemma \ref{mainlemma} yields \begin{equation*} \begin{split} [x] & = B(x) + \sum_{n\le x} \chi(n) \, \bigl(\Phi(x/n, \theta(n))-1\bigr) \\ & = B(x) + \sum_{n\theta(n) \le x}\chi(n) \, \bigl(\Phi(x/n, \theta(n))-1\bigr) \\ & = B(x) + x \sum_{n\theta(n) \le x} \frac{\chi(n)}{n} \prod_{p\le \theta(n)} \left(1-\frac{1}{p}\right)+ O(E_1(x)), \end{split} \end{equation*} where $$ E_1(x)= x \sum_{n\theta(n) \le x}\frac{\chi(n)}{n \log\theta(n)} \exp\left(-\frac{\log (x/n)}{3\log\theta(n)}\right),$$ by Lemma \ref{philemma} (ii). Thus $$[x]=B(x)+ Lx + O(E_1(x)+E_2(x)),$$ where $$E_2(x)= x\sum_{n\theta(n) > x}\frac{\chi(n)}{n \log\theta(n)}.$$ This completes the proof of \eqref{thmB1}. The estimate \eqref{thmB2} follows, since $E_1(x)+E_2(x)=o(x)$ by Lemma \ref{lemL}. \end{proof} \begin{proof}[Proof of Theorem \ref{cor}] (i) Assume $L<1$ so that $ B(n)\ge c n $ for some $c>0$ and all $n \ge 1$. Let $g(n)=\log f(n)$ and assume $n g(n)$ is increasing. 
Partial summation applied twice yields \begin{equation*} \begin{split} \sum_{n\le N} \frac{\chi(n)}{n g(n)} & = \frac{B(N)}{N g(N)} + \sum_{n\le N-1} B(n) \left(\frac{1}{n g(n)}-\frac{1}{(n+1)g(n+1)} \right) \\ & \ge \frac{c N}{N g(N)} + \sum_{n\le N-1} c n \left(\frac{1}{n g(n)}-\frac{1}{(n+1)g(n+1)} \right) \\ & = \sum_{n\le N} \frac{c}{n g(n)}, \end{split} \end{equation*} which grows unbounded as $N$ increases. But this is impossible, since $$ \sum_{n\le N} \frac{\chi(n)}{n g(n)}\le \sum_{n\le N} \frac{\chi(n)}{n \log \theta(n)} \ll \sum_{n\le N} \frac{\chi(n)}{n} \prod_{p\le \theta(n)} \left(1-\frac{1}{p}\right) \le L \le 1, $$ by Lemma \ref{lemL}. Thus $L=1$ and $B(x)=o(x)$. (ii) Assume that $\theta(n) \ge f(n)\ge 2$, where $f(n) \gg P^+(n)$, and that $$ \sum_{n\ge 1} \frac{1}{n \log f(n)} $$ converges. Then there exists a $t\ge 1$ such that $t f(n) \ge P^+(n)$ and $$ L_{tf}\le \sum_{n\ge 1} \frac{1}{n} \prod_{p\le t f(n)} \left(1-\frac{1}{p}\right) < 1.$$ Theorem \ref{thmB} implies that $B_{tf}(x) \gg x$. According to the hypothesis, there is an $r\in \mathbb{N}$ such that $f(2^r n) \ge t f(n)$ for all $n\ge 1$. Thus, if $n$ is counted in $B_{tf}(x/2^r)$, then $2^r n$ is counted in $B_f(x)$. This yields $$B_\theta(x) \ge B_f(x) \ge B_{tf}(x/2^r) \gg_r x.$$ Part (iii) is an immediate consequence of Theorem \ref{thmB}. \end{proof} \section{Proof of Theorem \ref{thm4}.} Throughout this section, we assume that $\theta$ satisfies \eqref{theta} and that \begin{equation}\label{Best} B(x) \ll \frac{x}{(\log x)^{\nu}} \qquad (x\ge 2), \end{equation} for some suitable $\nu \in [0,1]$, to be determined later. Clearly, $\nu=0$ is admissible. All constants implied by $\ll$ and the big O notation may depend on $\theta$, and therefore on $a$, but are otherwise absolute. Lemmas \ref{lem1} through \ref{lem4} correspond to Lemmas 5.3 through 5.7 of \cite{PDD}. 
The main difference is the assumption on the size of $B(x)$ for the purpose of estimating the error terms, for which we use \eqref{Best} here. \begin{lemma}\label{lem1} For $x\ge e$ we have \begin{multline*} B(x) = \\ x\ -\ x\sum_{n\theta(n)\le x} \frac{\chi(n)}{n} \, e^\gamma \omega\left(\frac{\log x/n}{\log \theta(n)}\right) \prod_{p\le \theta(n)} \left(1-\frac{1}{p}\right) + O\left(\frac{x}{(\log x)^{1+\nu}}\right) \end{multline*} \end{lemma} \begin{proof} As in the proof of Theorem \ref{thmB}, we have \begin{equation}\label{lem0eq} B(x)=[x]-\sum_{n\theta(n)\le x} \chi(n) \, (\Phi(x/n, \theta(n))-1). \end{equation} We apply Lemma \ref{philemma} (i) to estimate each occurrence of $\Phi(x/n, \theta(n))$ in \eqref{lem0eq}. The contribution from the error term $O(y/\log y)$ is \begin{equation*} \begin{split} \ll \sum_{n\theta(n)\le x} \chi(n) \frac{\theta(n)}{\log\theta(n)} & \ll \sum_{n^{1+a}\ll x} \chi(n)\frac{n^a}{\log 2n} \\ & \ll \frac{x^\frac{a}{1+a}}{\log x}\sum_{n\ll x^{1/(1+a)}} \chi(n) \ll \frac{x}{(\log x)^{\nu+1}}, \end{split} \end{equation*} by \eqref{Best}. For the contribution from the error term $O\left( \frac{xe^{-u/3}}{(\log y)^2}\right)$, we split up the range of summation by powers of $2$ and use \eqref{Best} to get \begin{equation*} \begin{split} & \ll \sum_{n\theta(n) \le x} \chi(n) \frac{x}{n (\log \theta(n))^2} \exp\left(-\frac{\log x/n}{3\log \theta(n)}\right) \\ & \ll \sum_{n\ll x^{1/(1+a)}} \frac{x}{n (\log 2n)^{2+\nu}} \exp\left(-\frac{\log 2x}{A\log 2n}\right) \ll \frac{x}{(\log x)^{1+\nu}}, \end{split} \end{equation*} where $A$ is some suitable positive constant. \end{proof} \begin{lemma}\label{lem3} For $x\ge e$ we have \begin{equation*} B(x)= x\sum_{n\ge 1} \frac{\chi(n)}{n \log \theta(n)} \left( e^{-\gamma} - \omega\left(\frac{\log x/n}{\log \theta(n)}\right)\right) + O\left(\frac{x }{(\log x)^{1+\nu}}\right) \end{equation*} \end{lemma} \begin{proof} We have $L=1$ by Theorem \ref{cor}. 
Since $\omega(u)=0$ for $u<1$, Lemma \ref{lem1} shows that \begin{equation*} B(x) = x\sum_{n\ge 1} \frac{\chi(n)}{n} \prod_{p\le \theta(n)} \left(1-\frac{1}{p}\right) \left(1- e^\gamma \omega\left(\frac{\log x/n}{\log \theta(n)}\right)\right) + O\left(\frac{x }{(\log x)^{1+\nu}}\right). \end{equation*} The contribution from $n$ with $\log n \le \sqrt{\log x}$ is $\ll x \exp \left(-\sqrt{\log x}\right)$. For those $n$ for which $\log n > \sqrt{\log x}$, we use the estimate $$ \prod_{p\le \theta(n)} \left(1-\frac{1}{p}\right) = \frac{e^{-\gamma}}{\log \theta(n)} \left(1+O\left(\frac{1}{(\log n)^4}\right)\right).$$ The contribution from the error term is $$\ll x \sum_{\log n > \sqrt{\log x}} \frac{1}{n (\log n)^5} \ll \frac{x}{(\log x)^2} \ll \frac{x}{(\log x)^{1+\nu}}.$$ \end{proof} \begin{lemma}\label{lem3b} For $x\ge e$ we have \begin{equation*} B(x)= x\sum_{n\ge 2} \frac{\chi(n)}{a n \log n} \left( e^{-\gamma} - \omega\left(\frac{\log x/n}{a \log n}\right)\right) + O\left(\frac{x }{(\log x)^{1+\nu}}\right). \end{equation*} \end{lemma} \begin{proof} Since $\theta(n) \asymp n^a$, we have $\log \theta(n) = a\log n + O(1)$. Inserting this estimate for each instance of $\log \theta(n)$ in Lemma \ref{lem3} yields the desired result. For more details on the calculations see \cite[Lemma 5.6]{PDD}, where the case $a=1$ is treated. \end{proof} \begin{lemma}\label{lem4} For $x\ge e$ we have \begin{equation*} B(x)= x\int_{e}^{\infty} \frac{B(y)}{a y^2 \log y} \left( e^{-\gamma} - \omega\left(\frac{\log x/y}{a\log y}\right)\right) \, \mathrm{d}y + O\left(\frac{x }{(\log x)^{1+\nu}}\right). \end{equation*} \end{lemma} \begin{proof} This follows from applying partial summation to the sum in Lemma \ref{lem3b}.
\end{proof} From Lemma \ref{lem4} we have, for $x\ge e$, \begin{equation}\label{inteq} B(x)= x \, \alpha - x \int_{e}^{\infty} \frac{B(y)}{ay^2 \log y} \, \omega\left(\frac{\log x/y}{a\log y}\right)\mathrm{d}y + O\left(\frac{x }{(\log x)^{1+\nu}}\right), \end{equation} where $$ \alpha:=e^{-\gamma} \int_{e}^{\infty} \frac{B(y)}{a y^2 \log y} \, \mathrm{d}y . $$ For $x\ge e$ let $z\ge 0$ be given by $$z=\log \log(x)$$ and let $$ G_\theta(z) := \frac{B\left(\exp(e^z)\right)}{\exp(e^z)} = \frac{B(x)}{x} .$$ Dividing \eqref{inteq} by $x$ and changing variables in the integral via $u=\log\log y$ we get, for $z\ge 0$, \begin{equation}\label{conv} \begin{split} G_\theta(z) & = \alpha - \frac{1}{a} \int_{0}^{z} G_\theta(u) \, \omega\left((e^{z-u} -1)/a\right) \, \mathrm{d}u +E_\theta(z) \\ & = \alpha -\frac{1}{a} \int_{0}^{z} G_\theta(u) \, \Omega_a(z-u) \, \mathrm{d}u +E_\theta(z), \end{split} \end{equation} where \begin{equation}\label{Error} E_\theta(z) \ll e^{-(1+\nu)z} \end{equation} and $$ \Omega_a(u):= \omega\left((e^{u} -1)/a\right) . $$ Equation \eqref{conv} leads to the equation of Laplace transforms $$ \widehat{G}_\theta(s) = \frac{\alpha}{s} -\frac{1}{a} \widehat{G}_\theta(s) \, \widehat{\Omega}_a(s) + \widehat{E}_\theta(s) \qquad (\re s >0),$$ which we solve for $\widehat{G}_\theta(s)$ to get $$\widehat{G}_\theta(s) = \frac{\alpha}{s (1+\frac{1}{a}\widehat{\Omega}_a(s))} + \frac{\widehat{E}_\theta(s)}{1+\frac{1}{a}\widehat{\Omega}_a(s)} \qquad (\re s >0).$$ Let $F_a(z)$ be given by \begin{equation}\label{FaDef} F_a(z) = 1 - \frac{1}{a} \int_{0}^{z} F_a(u) \, \Omega_a(z-u) \, \mathrm{d}u. \end{equation} Equation \eqref{FaDef} is an error-free, rescaled version of equation \eqref{conv}, which depends on $a$, but does not involve $\theta$. Note that the upper limit of the integral could be replaced by $z-\log(a+1)$, since $\omega(t)=0$ for $t<1$. 
Thus $F_a(z)=1$ for $0\le z \le \log(a+1)$, and for $z>\log(a+1)$, $F_a(z)$ is determined by the values of $F_a(u)$ for $u \in [0, z-\log(a+1)]$. Hence \eqref{FaDef} defines the continuous function $F_a(z)$ for $z\ge 0$. As with $G_\theta(z)$, we find that the Laplace transform of $F_a(z)$ is given by $$ \widehat{F}_a(s) = \frac{1}{s (1+\frac{1}{a}\widehat{\Omega}_a(s))} \qquad (\re s >0),$$ and therefore, \begin{equation}\label{LaplaceGF} \widehat{G}_\theta(s) = \alpha \widehat{F}_a(s) + s \widehat{F}_a(s) \widehat{E}_\theta(s). \end{equation} The Laplace transform of $ \Omega_a(u)$, defined for $\re(s)>0$, is given by \begin{equation*} \begin{split} \widehat{\Omega}_a(s) & = \int_0^\infty \omega((e^u-1)/a) \, e^{-us} du = a \int_0^\infty \omega(t) \frac{dt}{(at+1)^{s+1}} \\ &= \frac{e^{-\gamma}}{s(a+1)^s} + a\int_1^\infty \bigl( \omega(t) - e^{-\gamma}\bigr) \frac{dt}{(at+1)^{s+1}} . \end{split} \end{equation*} The last equation extends $ \widehat{\Omega}_a(s)$ to a meromorphic function on $\mathbb{C}$ with a simple pole at $s=0$. We will need to investigate the location of zeros of the entire function $g_a(s)$, defined by \begin{equation*} \begin{split} g_a(s) & = 1/ \widehat{F}_a(s) =s \left(1+\frac{1}{a}\widehat{\Omega}_a(s)\right) \\ & =s + \frac{e^{-\gamma}}{a (a+1)^s} +s \int_1^\infty \bigl( \omega(t) - e^{-\gamma}\bigr) \frac{dt}{(at+1)^{s+1}}. \end{split} \end{equation*} \pagebreak \begin{lemma}\label{lambdamu} For $a\ge 1$, $g_a(s)$ has a simple real zero at $s=-\lambda_a$, where $\lambda_1=1$, $\lambda_a \in (0,1)$ for $a>1$, $$ \lambda_a>\frac{e^{-\gamma}}{a}\left(1+\frac{e^{-\gamma}\log (a+1)}{a}-\frac{0.16}{a}\right) =: l_a $$ and $$ \lambda_a< \frac{e^{-\gamma}}{a}\left(1+\frac{e^{-\gamma}\log (a+1)}{a}+\frac{\log^2(a+1)}{a^2}\right)=: u_a. $$ For $a\ge 1$, $g_a(s)$ has no other zero with $\re(s)\ge -1-u_a$. 
\end{lemma} \begin{proof} We have $g_a(0) = e^{-\gamma}/a >0$ and $$g_a(-1)=-1+e^{-\gamma}(1+1/a) - (2e^{-\gamma}-1)=e^{-\gamma}(1/a -1).$$ Thus $g_1(-1)=0$ and $\lambda_1=1$. Assume $a>1$. We have $g_a(-1)<0$, so that $g_a(s)$ has a zero in the interval $(-1,0)$, since $g_a(s)$ is real if $s$ is real. For $s\in [-1,0]$ we have \begin{equation*} \begin{split} I_a(s) & :=\int_1^\infty \bigl( \omega(t) - e^{-\gamma}\bigr) \frac{dt}{(at+1)^{s+1}} \\ & \le \frac{1}{(a+1)^{s+1}} \int_1^\infty | \omega(t) - e^{-\gamma}| dt < \frac{0.16}{(a+1)^{s+1}}, \end{split} \end{equation*} by numerical computation of the last integral, and \begin{equation*} \begin{split} I_a(s) & = \int_1^{e^\gamma} \bigl( t^{-1} - e^{-\gamma}\bigr) \frac{dt}{(at+1)^{s+1}} + \int_{e^\gamma}^\infty \bigl( \omega(t) - e^{-\gamma}\bigr) \frac{dt}{(at+1)^{s+1}} \\ & \ge \frac{1}{(ae^\gamma+1)^{s+1}} \bigl(\gamma - e^{-\gamma}(e^\gamma -1)\bigr) - \frac{1}{(ae^\gamma+1)^{s+1}} \int_{e^\gamma}^\infty | \omega(t) - e^{-\gamma}| dt \\ & \ge \frac{\gamma -1 + e^{-\gamma} - 0.021}{(ae^\gamma+1)^{s+1}} > \frac{0.11}{(ae^\gamma+1)^{s+1}} . \end{split} \end{equation*} Since $s\le 0$, $$ g_a(s) <s + \frac{e^{-\gamma}}{a (a+1)^s} +s\frac{0.11}{(ae^\gamma+1)^{s+1}} =: g^+_a(s),$$ and $$ g_a(s) >s + \frac{e^{-\gamma}}{a (a+1)^s}+s \frac{0.16}{(a+1)^{s+1}}=:g^-_a(s). $$ Hence $ g_a(-l_a)>g^-_a(-l_a)$ and $g_a(-u_a)<g^+_a(-u_a)$ if $u_a \le 1$. A calculus exercise, made easier with the help of a computer, shows that $g^-_a(-l_a)>0$ and $g^+_a(-u_a)<0$ for $a\ge 1$. Hence $g_a(s)$ has a zero in $(-1,0)\cap (-u_a,-l_a)$. It remains to show that, for $a\ge 1$, $g_a(s)$ has no other zero with $\re(s)\ge -1-u_a$. 
We write $$h_a(s)=g_a(s)-s, \qquad \mu_a=u_a+1.$$ For $\re(s)\ge -\mu_a$, $$|h_a(s)| \le \frac{e^{-\gamma}}{a (a+1)^{-\mu_a}} +|s| \int_1^\infty \bigl| \omega(t) - e^{-\gamma}\bigr| \frac{dt}{(at+1)^{-\mu_a+1}} .$$ Since $\mu_a>1$ and $at+1<(a+1)t$ for $t\ge 1$, the last integral is $$ \le (a+1)^{\mu_a-1} \int_1^\infty \bigl| \omega(t) - e^{-\gamma}\bigr| t^{\mu_a-1} dt.$$ First assume $a\ge 10$. We have $$\mu_a-1 \le \mu_{10}-1<0.1,$$ $$ (a+1)^{\mu_a-1}\le (10+1)^{\mu_{10}-1}< 1.2,$$ and $$ \frac{e^{-\gamma}}{a (a+1)^{-\mu_a}} \le \frac{e^{-\gamma}}{ 10(10+1)^{-\mu_{10}}} < 0.73.$$ Thus, for $a\ge 10$, $$|h_a(s)| \le 0.73 + 1.2 |s| \int_1^\infty \bigl| \omega(t) - e^{-\gamma}\bigr| t^{0.1} dt<0.73+0.21|s|,$$ by numerical computation of the integral. Hence we have $|h_a(s)|<|s|$ provided $|s|>0.93$. On the boundary of the rectangle $R$ with corners $-\mu_a \pm iT$, $\alpha \pm iT$, where $\alpha, T \ge 5$, we have $|s|>0.93$, since $\mu_a>1$. Rouch\'e's theorem implies that $g_a(s)$ and $s$ have the same number of zeros, i.e. exactly one zero, with $\re(s)\ge -\mu_a$. Next, assume $1\le a \le 10$. We have $$\mu_a-1 \le \mu_{1}-1<1.1,$$ $$ (a+1)^{\mu_a-1}\le (1+1)^{\mu_{1}-1}< 2.1,$$ and $$ \frac{e^{-\gamma}}{a (a+1)^{-\mu_a}} \le \frac{e^{-\gamma}}{ (1+1)^{-\mu_{1}}} < 2.4.$$ Thus, for $1\le a\le 10$, $$|h_a(s)| \le 2.4 + 2.1 |s| \int_1^\infty \bigl| \omega(t) - e^{-\gamma}\bigr| t^{1.1} dt<2.4+0.5|s|.$$ Hence $|h_a(s)|<|s|$ provided $|s|>4.8$. On the boundary of the rectangle $R$ we have $|s|\ge 5$, with the possible exception of the segment with endpoints $-\mu_a \pm 5i$. For $s$ on this segment, and $1\le a \le 10$, we evaluate $|h_a(s)/s|$ numerically to get $$\frac{|h_a(s)|}{|s|}=\frac{|h_a(-\mu_a+i\tau)|}{|-\mu_a+i\tau|} < 0.98, \quad (1 \le a \le 10, \ -5 \le \tau \le 5).$$ Hence $|h_a(s)|<|s|$ on the boundary of $R$, and Rouch\'e's theorem implies that $g_a(s)$ has exactly one zero with $\re(s)\ge -\mu_a$. 
\end{proof} \pagebreak \begin{lemma}\label{ga} We write $s=\sigma + i \tau$ and $$ H_a(\sigma):= \frac{1}{a (a+1)^\sigma} +\frac{1}{a} \int_1^\infty |\omega'(t)| \frac{dt}{(at+1)^{\sigma}} .$$ If $g_a(s)=0$ then $|\tau| \le H_a(\sigma)$. For $|\tau| \ge 2 H_a(\sigma)$, we have $$ \frac{1}{g_a(s)}=\frac{1}{s} + O\left(\frac{H_a(\sigma)}{\sigma^2+\tau^2 } \right) .$$ \end{lemma} \begin{proof} Integration by parts shows that $$ g_a(s) = s + \frac{1}{a (a+1)^s} +\frac{1}{a} \int_1^\infty \omega'(t) \frac{dt}{(at+1)^{s}} = s + H_a(\sigma) \xi_a(s), $$ where $\xi_a(s) \in \mathbb{C}$ with $|\xi_a(s)|\le 1$. Thus any zero of $g_a(s)$ must satisfy $|\tau|\le |s| \le H_a(\sigma)$. We have $$ g_a(s) = s \left(1+\frac{H_a(\sigma) \xi_a(s)}{s}\right),$$ from which the estimate for $1/g_a(s)$ follows. \end{proof} \begin{lemma}\label{FaInv} The function $F_a(z)$ defined by \eqref{FaDef} satisfies \begin{equation}\label{FaAsymp} F_a(z)=c_a e^{-\lambda_a z} + O_a\left(e^{-\mu_a z}\right) \quad (z\ge 0), \end{equation} and \begin{equation}\label{FapAsymp} F'_a(z)=\tilde{c}_a e^{-\lambda_a z} + O_a\left(e^{-\mu_a z}\right) \quad (z\ge 0), \end{equation} for constants $c_a$ and $\tilde{c}_a=-\lambda_a c_a$, where $\mu_a>\lambda_a+1$. Here $F'_a(z)$ denotes the right derivative of $F_a(z)$. \end{lemma} \begin{proof} We evaluate the inverse Laplace integral $$ F_a(z) = \frac{1}{2\pi i} \int_{1-i\infty}^{1+i\infty} \widehat{F}_a(s) e^{zs}\, \mathrm{d} s.$$ Let $\mu=\mu_a=u_a+1$ from Lemma \ref{lambdamu} and put $T=\exp(z(\mu+1))$. Since the result is trivial for bounded $z$, we may assume that $z$ is sufficiently large that $T>2H_a(-\mu)$. We have $$ \int_{1+iT}^{1+i\infty} \widehat{F}_a(s) e^{zs}\, \mathrm{d} s = \int_{1+iT}^{1+i\infty} \frac{1}{s} e^{zs}\, \mathrm{d} s + O\left(e^{z}/T \right)=O\left(e^{-\mu z}\right),$$ by Lemma \ref{ga} and integration by parts applied to the last integral. We apply the residue theorem to the rectangle with vertices $-\mu \pm iT$, $1\pm iT$.
The contribution from the horizontal segments can be estimated by Lemma \ref{ga} as $$ \int_{-\mu+iT}^{1+iT} \widehat{F}_a(s) e^{zs}\, \mathrm{d} s \ll \frac{e^{z}}{T} =O\left(e^{-\mu z}\right).$$ For the vertical segment with $\re s = -\mu$ we have $$ \left| \int_{-\mu-i2H_a(-\mu)}^{-\mu+i2H_a(-\mu)} \widehat{F}_a(s) e^{zs}\, \mathrm{d} s \right| \le 4H_a(-\mu) \max_{|\tau|\le 2H_a(-\mu)} \left|\widehat{F}_a(-\mu+i\tau)\right| e^{-\mu z} = O\left(e^{-\mu z}\right)$$ and $$ \int_{-\mu+i2H_a(-\mu)}^{-\mu+iT} \widehat{F}_a(s) e^{zs}\, \mathrm{d} s = \int_{-\mu+i2H_a(-\mu)}^{-\mu+iT} \left(\frac{1}{s}+O(\tau^{-2})\right) e^{zs}\, \mathrm{d} s =O\left(e^{-\mu z}\right).$$ The residue theorem now yields $$ F_a(z)= \res\left( \widehat{F}_a(s) e^{zs}; -\lambda_a \right)+O\left(e^{-\mu z}\right) = c_a e^{-\lambda_a z }+O\left(e^{-\mu z}\right) ,$$ which completes the proof of \eqref{FaAsymp}. If $0\le z < \log(a+1)$, we have $F_a(z)=1$ and hence $F'_a(z)=0$. If $z\ge \log(a+1)$, \eqref{FaDef} implies \begin{equation}\label{FapEq} F'_a(z)= -\frac{1}{a} \int_0^{z-\log(a+1)} F_a(u) \, \omega'\left(\frac{e^{z-u}-1}{a}\right) \frac{e^{z-u}}{a} \, \mathrm{d} u -\frac{1}{a} F_a(z-\log(a+1)). \end{equation} The estimate \eqref{FapAsymp} follows from applying \eqref{FaAsymp} to each occurrence of $F_a$ on the right-hand side of \eqref{FapEq}, and the fact that $\omega'(t) \ll_A t^{-A}$ for any constant $A$, by Lemma \ref{omega}. \end{proof} We are now ready to finish the proof of Theorem \ref{thm4}. Since $\widehat{F'_a}(s)=s\widehat{F}_a(s)-F_a(0)$ and $F_a(0)=1$, \eqref{LaplaceGF} yields $$ \widehat{G}_\theta(s) = \alpha \widehat{F}_a(s) + \widehat{E}_\theta(s) +\widehat{F'_a}(s) \widehat{E}_\theta(s),$$ and thus \begin{equation}\label{last} G_\theta(z)= \alpha F_a(z) + E_\theta(z) + \int_0^z F'_a(z-u) E_\theta(u) du. \end{equation} We estimate $F_a(z)$ and $F'_a(z-u)$ using Lemma \ref{FaInv}. 
Since $E_\theta(u) \ll e^{-(\nu+1)u}$ by \eqref{Error}, $\nu \ge 0$ and $\lambda_a\le 1$, \eqref{last} shows that $G_\theta(z) \ll z e^{-\lambda_az}$. Thus $\nu= \lambda_a-\varepsilon$ is acceptable in \eqref{Best} for every $\varepsilon>0$. With this choice of $\nu$, \eqref{last} shows that $G_\theta(z) \ll e^{-\lambda_az}$, which means that $\nu=\lambda_a$ is acceptable in \eqref{Best}. Hence we assume $\nu = \lambda_a$ and $E_\theta(u) \ll e^{-(\lambda_a+1)u}$ for the remainder of this proof. The contribution to the integral in \eqref{last} from the main term in Lemma \ref{FaInv} is \begin{equation*} \begin{split} & \int_0^z \tilde{c}_a e^{-\lambda_a (z-u)} E_\theta(u) du \\ &= \tilde{c}_a e^{-\lambda_a z} \left( \int_0^\infty e^{\lambda_a u} E_\theta(u) du + O\left( \int_z^\infty e^{\lambda_a u} e^{-(\lambda_a+1)u} du\right) \right) \\ &=\tilde{c}_a e^{-\lambda_a z}\left( \beta + O(e^{-z})\right), \end{split} \end{equation*} say. The contribution from the error term in Lemma \ref{FaInv} to the integral in \eqref{last} is $$ \ll \int_0^z e^{-\mu_a(z-u)} e^{-(\lambda_a+1)u} du \ll e^{-(\lambda_a+1)z}, $$ since $\mu_a>\lambda_a +1$. Hence \eqref{last} implies $$ G_\theta(z)= (\alpha c_a + \beta \tilde{c}_a ) e^{-\lambda_a z} + O\left(e^{-(\lambda_a+1)z}\right).$$ With $c_\theta :=\alpha c_a + \beta \tilde{c}_a$, we get $$ G_\theta(z)=B(x)/x = c_\theta (\log x)^{-\lambda_a} + O \left((\log x)^{-(\lambda_a+1)}\right).$$ It remains to show that $c_\theta >0$ if $a\ge 1$. Since $\theta(n)\gg n^a \ge n$, there exists an integer $k$ such that $\theta(2^k n)\ge 2n$ for all $n\ge 1$. With $\theta_0(n)=2n$, we have $n\in \mathcal{B}_{\theta_0}$ implies $2^k n \in \mathcal{B}_\theta$. Hence $B_\theta(x) \ge B_{\theta_0}(x/2^k) \gg x/\log x$, by \cite[Theorem 1.2]{PDD}. Since $\lambda_a+1>1$, we must have $c_\theta>0$. \section{Proof of Theorem \ref{DB}} (i) Assume that $n=p_1^{\alpha_1} \cdots p_k^{\alpha_k} \in \mathcal{D}$, where $p_1< p_2< \ldots < p_k$. 
Let $0\le j < k$ and write $d_i=p_1^{\alpha_1} \cdots p_{j}^{\alpha_{j}}$. The next larger divisor, $d_{i+1}$, satisfies $d_{i+1}\ge p_{j+1}$, since $d_{i+1}$ must be divisible by at least one of the primes $p_{j+1},\ldots,p_k$. Thus $$ p_{j+1} \le d_{i+1} \le \theta(d_i) = \theta(p_1^{\alpha_1} \cdots p_{j}^{\alpha_{j}}),$$ which means that $n\in \mathcal{B}$. (ii) Assume $\theta$ satisfies \eqref{thetacond}. To show that $n\in \mathcal{B}$ implies $n\in \mathcal{D}$ for all $n\ge 2$, we proceed by induction on $k$, the number of distinct prime factors of $n$. When $k=1$, $n=p^\alpha \in \mathcal{B}$ implies $p\le \theta(1)$. For $1\le j \le \alpha$, we have $$d_{j+1} = p^j = p^{j-1} p \le p^{j-1} \theta(1) \le \theta( p^{j-1}) =\theta(d_j),$$ which means that $n\in \mathcal{D}$. Now assume that, for some $k\ge 1$ and each $m=p_1^{\alpha_1} \cdots p_{k}^{\alpha_{k}}$, if $m \in \mathcal{B}$ then $m\in \mathcal{D}$. Let $n=m p^\alpha \in \mathcal{B}$, where $p>p_k$ and $m=p_1^{\alpha_1} \cdots p_{k}^{\alpha_{k}}$. By the definition of $\mathcal{B}$, we have $m \in \mathcal{B}$, and hence $m\in \mathcal{D}$. Assume that $1=t_1<t_2<\ldots < t_r=m$ are the divisors of $m$. Then the divisors of $n$, say $1=d_1<d_2<\ldots <d_l=n$, are of the form $ d_i = p^\beta t_j $, where $0\le \beta \le \alpha$, $1\le j \le r$. If $j<r$, we have $$ d_{i+1} \le p^\beta t_{j+1} \le p^\beta \theta(t_j) \le \theta(p^\beta t_j)=\theta(d_i), $$ as desired. If $j=r$, then $d_i=p^\beta m$ and we may assume $0\le \beta < \alpha$, that is $d_i<n$. If $p>m$, then $p^{\beta+1}>p^\beta m=d_{i}$. Also, $n\in \mathcal{B}$ implies $p\le \theta(m)$, so $$ d_{i+1}\le p^{\beta+1}\le p^\beta \theta(m) \le \theta( p^\beta m) = \theta(d_i). $$ If $p \le m$, then $t_s \le m/p < t_{s+1}$, for some $s$ with $1\le s < r$. Now $d_i=p^\beta m<p^{\beta+1} t_{s+1} $, so $$ d_{i+1} \le p^{\beta+1} t_{s+1} \le p^{\beta+1} \theta(t_s) \le \theta( p^{\beta+1} t_s). 
$$ Since $t_s \le m/p$, we have $p^{\beta+1} t_s \le m p^\beta = d_i$ and hence $\theta( p^{\beta+1} t_s) \le \theta(d_i)$, as $\theta$ is non-decreasing. Thus $d_{i+1} \le \theta(d_i)$ also holds in this case, which means that $n\in \mathcal{D}$. \end{document}
\begin{document} \title{On the Discrepancy of Random Matrices with Many Columns} \abstract{ Motivated by the Koml\'os conjecture in combinatorial discrepancy, we study the discrepancy of random matrices with $m$ rows and $n$ independent columns drawn from a bounded lattice random variable. It is known that for $n$ tending to infinity and $m$ fixed, with high probability the $\ell_\infty$-discrepancy is at most twice the $\ell_\infty$-covering radius of the integer span of the support of the random variable. However, the easy argument for the above fact gives no concrete bounds on the failure probability in terms of $n$. We prove that the failure probability is inverse polynomial in $m, n$ and some well-motivated parameters of the random variable. We also obtain the analogous bounds for the discrepancy in arbitrary norms. We apply these results to two random models of interest. For random $t$-sparse matrices, i.e. uniformly random matrices with $t$ ones and $m-t$ zeroes in each column, we show that the $\ell_\infty$-discrepancy is at most $2$ with probability $1 - O(\sqrt{ \log n/n})$ for $n = \Omega(m^3 \log^2 m)$. This improves on a bound proved by Ezra and Lovett (Ezra and Lovett, \emph{Approx+Random}, 2015) showing that the same is true for $n$ at least $m^t$. For matrices with random unit vector columns, we show that the $\ell_\infty$-discrepancy is $O(\exp(\sqrt{ n/m^3}))$ with probability $1 - O(\sqrt{ \log n/n})$ for $n = \Omega(m^3 \log^2 m)$. Our approach, in the spirit of Kuperberg, Lovett and Peled (G. Kuperberg, S. Lovett and R. Peled, STOC 2012), uses Fourier analysis to prove that for $m \times n$ matrices $M$ with i.i.d. columns, and $n$ sufficiently large, the distribution of $My$ for random $y \in \{-1,1\}^n$ obeys a local limit theorem.
} \tableofcontents \section{Introduction} The topic of this paper is combinatorial discrepancy, a well-studied parameter of a set system or matrix with many applications to combinatorics, computer science and mathematics \cite{C00, M09}. Discrepancy describes the extent to which the sets in a set system can be simultaneously split into two equal parts, or two-colored in a balanced way. Let $\mathcal{S}$ be a collection (possibly with multiplicity) of subsets of a finite set $\Omega$. The $\ell_\infty$-discrepancy of a two-coloring of the set system $(\Omega, \mathcal{S})$ is the maximum imbalance in color over all sets $S$ in $\mathcal{S}$. The $\ell_\infty$-discrepancy\footnote{often just referred to as the discrepancy} of $(\Omega, \mathcal{S})$ is the minimum discrepancy of any two-coloring of $\Omega$. Formally, $$\operatorname{disc}(\Omega, \mathcal{S}) := \min_{\chi:\Omega \to \{+1, -1\}} \max_{S \in \mathcal{S}} |\chi(S)|,$$ where $\chi(S) = \sum_{x \in S} \chi(x)$. More generally, the discrepancy of a matrix $M \in \operatorname{Mat}_{m\times n}(\mathbb{C})$ or $\operatorname{Mat}_{m\times n}(\mathbb{R})$ is $$\operatorname{disc}(M) = \min_{v \in \{+1, -1\}^n} \|Mv\|_\infty .$$ If $M$ is the incidence matrix of the set system $(\Omega, \mathcal{S})$, then the definitions agree. Using a clever linear-algebraic argument, Beck and Fiala showed that the discrepancy of a set system $(\Omega, \mathcal{S})$ is bounded above by a function of its \emph{maximum degree} $\Delta(\mathcal{S}) := \max_{x \in \Omega} |\{S \in \mathcal{S}: x \in S\}|$. If $\Delta(\mathcal{S})$ is at most $t$, we say $(\Omega, \mathcal{S})$ is \emph{$t$-sparse}. \begin{thm}[Beck-Fiala \cite{BF81}] If $(\Omega, \mathcal{S})$ is $t$-sparse, then $\operatorname{disc}(\Omega,\mathcal{S}) \leq 2t-1$. \end{thm} Beck and Fiala conjectured that $\operatorname{disc}(\mathcal{S})$ is actually $O(\sqrt{t})$ for $t$-sparse set systems $(\Omega, \mathcal{S})$.
Their conjecture would follow from the following stronger conjecture due to Koml\'os: \begin{conj}[Koml\'os Conjecture; see \cite{Sp87}] If every column of $M$ has Euclidean norm at most $1$, then $\operatorname{disc}(M)$ is bounded above by an absolute constant independent of $n$ and $m$. \end{conj} This conjecture is still open. The current record is due to Banaszczyk \cite{Ban98}, who showed $\operatorname{disc}(M) = O(\sqrt{ \log n})$ if every column of $M$ has norm at most $1$. This implies $\operatorname{disc}(\Omega, \mathcal{S}) = O(\sqrt{t \log n})$ if $(\Omega, \mathcal{S})$ is $t$-sparse. \subsection{Discrepancy of random matrices.} Motivated by the Beck-Fiala conjecture, Ezra and Lovett initiated the study of the discrepancy of random $t$-sparse matrices \cite{EL16}. Here, motivated by the Koml\'os conjecture, we study the discrepancy of random $m\times n$ matrices with independent, identically distributed columns. \begin{qn}\label{qn:random_komlos}Suppose $M$ is an $m\times n$ random matrix with independent, identically distributed columns drawn from a vector random variable that is almost surely of Euclidean norm at most one. Is there a constant $C$ independent of $m$ and $n$ such that for every $\varepsilon > 0$, $\operatorname{disc}(M) \leq C$ with probability $1 - \varepsilon$ for $n$ and $m$ large enough? \end{qn} The Koml\'os conjecture, if true, would imply an affirmative answer to this question. We focus on the regime where $n \gg m$, i.e., the number of columns is much larger than the number of rows. A few results are known in the regime $n = O(m)$. The theorems in this direction actually control the possibly larger \emph{hereditary discrepancy}. Define the hereditary discrepancy $\operatorname{herdisc}(M) $ by $$ \operatorname{herdisc}(M) = \max_{Y \subset [n]} \operatorname{disc}(M|_Y),$$ where $M|_Y$ denotes the $m \times |Y|$ matrix whose columns are the columns of $M$ indexed by $Y$. 
Clearly $\operatorname{disc}(M) \leq \operatorname{herdisc}(M)$. Often the Koml\'os conjecture is stated with $\operatorname{disc}$ replaced by $\operatorname{herdisc}$. While the Koml\'os conjecture remains open, some progress has been made for \emph{random $t$-sparse matrices}. To sample a random $t$-sparse matrix $M$, choose each column of $M$ uniformly at random from the set of vectors with $t$ ones and $m-t$ zeroes. Ezra and Lovett showed the following: \begin{thm}[\cite{EL16}] If $M$ is a random $t$-sparse matrix and $n = O(m)$, then $\operatorname{herdisc} (M) = O( \sqrt{t \log t})$ with probability $1 - \exp( - \Omega(t))$. \end{thm} The above does not imply a positive answer to \cref{qn:random_komlos} due to the factor of $\sqrt{\log t}$, but is better than the worst-case bound $\sqrt{t \log n}$ due to Banaszczyk. We now turn to the regime $n \gg m$. It is well-known that if $\operatorname{disc}(M|_Y) \leq C$ holds for all $|Y| \leq m$, then $\operatorname{disc}(M) \leq 2C$ \cite{AS}. However, this observation is not useful for analyzing random matrices in the regime $n \gg m$. Indeed, if $n$ is large enough compared to $m$, the set of submatrices $M|_Y$ for $|Y| \leq m$ is likely to contain a matrix of the largest possible discrepancy among $t$-sparse $m\times m$ matrices, so improving discrepancy bounds via this observation is no easier than improving the Beck-Fiala theorem. The discrepancy of random matrices when $n \gg m$ behaves quite differently than the discrepancy when $n = O(m)$. For example, the discrepancy of a random $t$-sparse matrix with $n = O(m)$ is only known to be $O(\sqrt{t \log t})$, but it becomes $O(1)$ with high probability if $n$ is large enough compared to $m$. \begin{thm}[\cite{EL16}]\label{el_thm_2}Let $M$ be a random $t$-sparse matrix. If $n = \Omega\left(\binom{m}{t} \log \binom{m}{t}\right)$ then $\operatorname{disc} (M) \leq 2$ with probability $1 - \binom{m}{t}^{-\Omega(1)}$.
\end{thm} \subsection{Discrepancy versus covering radius} Before stating our results, we describe a simple relationship between the covering radius of a lattice and a certain variant of discrepancy. We'll need a few definitions. \begin{itemize} \item For $S \subseteq \mathbb{R}^m$, let $\operatorname{span}_\mathbb{R} S$ denote the linear span of $S$, and $\operatorname{span}_\mathbb{Z} S$ denote the integer span of $S$. \item A lattice is a discrete subgroup of $\mathbb{R}^m$. Note that the set $\operatorname{span}_\mathbb{Z} S$ is a subgroup of $\mathbb{R}^m$, but need not be a lattice. If $S$ is linearly independent or lies inside a lattice, $\operatorname{span}_\mathbb{Z} S$ {\em is} a lattice. Say a lattice $\mathcal{L}$ in $\mathbb{R}^m$ is \emph{nondegenerate} if $\operatorname{span}_\mathbb{R} \mathcal{L} = \mathbb{R}^m$. \item For any norm $\| \cdot \|_*$ on $\mathbb{R}^m$, we write $d_{*}(x,y)$ for the associated distance, and for $S \subseteq \mathbb{R}^m$, $d_*(x,S)$ is defined to be $\inf_{y \in S} d_*(x,y)$. \item The {\em covering radius} $\rho_*(S)$ of a subset $S$ with respect to the norm $\| \cdot \|_*$ is $\sup_{x \in \operatorname{span}_{\mathbb{R}}S} d_*(x,S)$ (which may be infinite). \item The discrepancy may be defined in norms other than $\ell_\infty$. If $M$ is an $m \times n$ matrix and $\| \cdot \|_*$ a norm on $\mathbb{R}^m$, define the {\em $*$-discrepancy} $\operatorname{disc}_*(M)$ by $$\operatorname{disc}_*(M):= \min_{y \in \{\pm 1\}^n} \| My\|_*.$$ In particular, $\operatorname{disc}(M)$ is $\operatorname{disc}_{\infty}(M)$. \end{itemize} A natural relaxation of $*$-discrepancy is the \emph{odd$_*$ discrepancy}: instead of assigning $\pm1$ to the columns, one could minimize $\|M\mathbf y\|_*$ for $\mathbf y$ with odd entries.
By writing each odd integer as $1$ plus an even number, it is easy to see that the odd$_*$ discrepancy of $M$ is equal to $$d_* (M \mathbf 1, 2 \mathcal{L}) \leq 2 \rho_*(\mathcal{L}),$$ where $\mathcal{L}$ is the lattice generated by the columns of $M$ and $\mathbf 1$ is the all-ones vector. In fact, by a standard argument which can be found in \cite{LSV86}, the maximum odd$_*$ discrepancy of a matrix whose columns generate $\mathcal{L}$ is sandwiched between $\rho_*(\mathcal{L})$ and $2 \rho_*(\mathcal{L})$. In general, $\operatorname{disc}_*(M)$ can be arbitrarily large compared to the odd$_*$ discrepancy of $M$, even for $m=1, n =2$. If $r$ is a nonnegative integer then $M = [2r + 1, r]$ has $\rho_*(\operatorname{span}_\mathbb{Z} M) = 1/2$ but $\operatorname{disc}_*(M) = r + 1$. However, the discrepancy of a \emph{random} matrix with many columns drawn from $\mathcal{L}$ behaves more like the odd discrepancy. \begin{prop}\label{prop:lattice_soft} Suppose $X$ is a random variable on $\mathbb{R}^m$ whose support generates a lattice $\mathcal{L}$. Then for any $\varepsilon>0$, there is an $n_0(\varepsilon)$ so that for $n > n_0(\varepsilon)$, a random $m \times n$ matrix $M$ with independent columns distributed as $X$ satisfies $$ \operatorname{disc}_*( M) \leq d_* (M \mathbf 1, 2 \mathcal{L}) \leq 2 \rho_*(\mathcal{L})$$ with probability at least $1-\varepsilon$. \end{prop} \begin{proof} Let $S$ be the support of $X$. For every subset $T$ of $S$, let $s_T$ be the sum of the elements of $T$. Let $C$ be large enough that for all $T$, there is an integer combination $v_T$ of elements of $S$ with even coefficients of absolute value at most $C$ such that $\|v_T - s_T\|_* \leq d_*(s_T, 2 \mathcal{L})$. Choose $n_0(\varepsilon)$ large enough so that with probability at least $1-\varepsilon$, if we take $n_0(\varepsilon)$ samples of $X$, every element of $S$ appears at least $C+1$ times. Let $n \geq n_0(\varepsilon)$ and let $M$ be a random matrix obtained by selecting $n$ columns according to $X$.
With probability at least $1-\varepsilon$, every vector in $S$ appears at least $C+1$ times. We claim that if this happens, $\operatorname{disc}_*(M) \leq d_* (M \mathbf 1, 2 \mathcal{L}).$ This is because if $T$ is the subset of $S$ consisting of the columns that appear an odd number of times in $M$, then $d_* (M \mathbf 1, 2 \mathcal{L}) = d_* (s_T, 2 \mathcal{L})$, and because each element of $S$ appears at least $C+1$ times, we may choose $\mathbf y \in \{\pm 1\}^n$ so that $M\mathbf y = s_T - v_T$ with $\|v_T - s_T\|_* \leq d_*(s_T, 2 \mathcal{L})$. \end{proof} \subsection{Our results} The above simple result says nothing about the number of columns required for $M$ to satisfy the desired inequality with high probability. The focus of this paper is on obtaining quantitative upper bounds on the function $n_0(\varepsilon)$. We will consider the case when $\operatorname{span}_\mathbb{Z} \operatorname{supp}(X)$ is a lattice $\mathcal{L}$. The bounds we obtain will be expressed in terms of $m$ and several quantities associated to the lattice $\mathcal{L}$, the random variable $X$ and the norm $\|\cdot \|_*$. Without loss of generality, we assume $X$ is symmetric, i.e.\ $\Pr[X = x] = \Pr[X = -x]$ for all $x$. For a real number $L > 0$ we write $B(L)$ for the set of points in $\mathbb{R}^m$ of (Euclidean) length at most $L$. \begin{itemize} \item The $\|\cdot\|_*$ covering radius $\rho_*(\mathcal{L})$. \item The {\em distortion} $R_*$ of the norm $\|\cdot\|_*$, which is defined to be the maximum Euclidean length of a vector $x$ such that $\|x\|_*=1$. For example, $R_{\infty}=\sqrt{m}$. \item The determinant $\det \mathcal{L}$ of the lattice $\mathcal{L}$, which is the determinant of any matrix whose columns form a basis of $\mathcal{L}$. \item The determinant $\det \Sigma$, where $\Sigma=\mathbb{E}[XX^\dagger]$ is the $m \times m$ covariance matrix of $X$.
\item The smallest eigenvalue $\sigma$ of $\Sigma$. \item The maximum Euclidean length $L=L(Z)$ of a vector in the support of $Z=\Sigma^{-1/2}X$. \item A parameter $s(Z)$ called the \emph{spanningness}. The definition of this crucial parameter is technical and is given in \cref{sec:proof_overview}; roughly speaking, it is large if $Z$ is not heavily concentrated near some proper sublattice of $\mathcal{L}$. \end{itemize} We now state our main quantitative theorem about the discrepancy of random matrices. \begin{thm}[main discrepancy theorem]\label{thm:lattice} Suppose $X$ is a random variable on a nondegenerate lattice $\mathcal{L}$. Let $\Sigma:=\mathbb{E} XX^\dagger$ have least eigenvalue $\sigma$. Suppose $\operatorname{supp} X \subset \Sigma^{1/2} B(L)$ and that $\mathcal{L} = \operatorname{span}_\mathbb{Z} \operatorname{supp} X$. If $n \geq N$ then $$ \operatorname{disc}_*( M) \leq d_* (M \mathbf 1, 2 \mathcal{L}) \leq 2 \rho_*(\mathcal{L})$$ with probability at least $$1 - O\left( L\sqrt{\frac{\log n}{n}}\right).$$ Here $N$, given by \cref{eq:ndisc} in \cref{sec:local}, is a polynomial in the quantities $m$, $s(\Sigma^{-1/2} X)^{-1}$, $L$, $R_*$, $\rho_*(\mathcal{L})$, and $\log \left(\det \mathcal{L}/\det \Sigma\right)$. \end{thm} \begin{rem}[degenerate lattices]Our assumption that $\mathcal{L}$ is nondegenerate is without loss of generality; if $\mathcal{L}$ is degenerate, we may simply restrict to $\operatorname{span}_\mathbb{R} \mathcal{L}$ and apply \cref{thm:lattice}. Further, the assumptions that $\mathcal{L} = \operatorname{span}_\mathbb{Z} \operatorname{supp} X$ and $\mathcal{L}$ is nondegenerate imply $\sigma > 0$. \end{rem} \begin{rem}[weaker moment assumptions] Our original motivation, the Koml\'os conjecture, led us to study the case when the random variable $X$ is bounded. This assumption is not critical.
We can prove a similar result under the weaker assumption that $(\mathbb{E} \|X\|_2^\eta)^{1/\eta} = L< \infty$ for some $\eta > 2$. The proofs do not differ significantly, so we give a brief sketch in \cref{sec:moments}. \end{rem} Obtaining bounds on the spanningness is the most difficult aspect of applying \cref{thm:lattice}. We'll do this for random $t$-sparse matrices, for which we extend \cref{el_thm_2} to the regime $n = \Omega( m^3 \log^2 m)$. For comparison, \cref{el_thm_2} only applies for $n \gg \binom{m}{t} $, which is superpolynomial in $m$ if $\min(t, m-t) = \omega(1)$. \begin{thm}[discrepancy for random $t$-sparse matrices]\label{thm:tsparse} Let $M$ be a random $t$-sparse matrix. If $n =\Omega(m^3 \log^2 m)$ then $$\operatorname{disc}(M) \leq 2$$ with probability at least $1 - O\left(\sqrt{\frac{m\log n}{n}}\right).$ \end{thm} \begin{rem} We refine this theorem later in \cref{thm:tsparse2} of \cref{sec:tsparse} to prove that the discrepancy is, in fact, usually $1$. \end{rem} \refstepcounter{const}\label{const:disc1} Using techniques analogous to the proof of \cref{thm:lattice}, we also prove a similar result for a non-lattice distribution, namely matrices with random unit vector columns. \begin{thm}[random unit vector discrepancy]\label{thm:unit_disc} Let $M$ be a matrix with i.i.d.\ random unit vector columns. If $n = \Omega(m^3 \log^2m),$ then $$\operatorname{disc} M = O(e^{-\sqrt{\frac{n}{m^3}}})$$ with probability at least $1 - O\left(L\sqrt{\frac{\log n}{n}}\right)$. \end{thm} \refstepcounter{const}\label{cubeconst} \refstepcounter{const}\label{expconst} One might hope to conclude a positive answer to \cref{qn:random_komlos} in the regime $n \gg m$ from \cref{thm:lattice}. This seems to require the following weakening of the Koml\'os conjecture: \begin{conj}\label{qn:lattice_komlos} There is an absolute constant $C$ such that for any lattice $\mathcal{L}$ generated by unit vectors, $\rho_\infty(\mathcal{L}) \leq C$.
\end{conj} \subsection{Proof overview}\label{sec:proof_overview} In what follows we focus on the case when $X$ is isotropic, because we may reduce to this case by applying a linear transformation. The discrepancy result for the isotropic case, \cref{thm:identity_discrepancy}, is stated in \cref{sec:local}, and \cref{thm:lattice} is an easy corollary. We now explain how the parameters in \cref{thm:lattice} arise. The theorem is proved via \emph{local central limit theorems} for sums of vector random variables. Suppose $M$ is a fixed $m \times n$ matrix with bounded columns and consider the distribution of $M\mathbf y$ where $\mathbf y$ is chosen uniformly at random from $\{\pm 1\}^n$. Multidimensional versions of the central limit theorem imply that this distribution is approximately normal. We will be interested in local central limit theorems, which provide precise estimates on the probability that $M\mathbf y$ falls in a particular region. By applying an appropriate local limit theorem to a region around the origin, we hope to show that the probability of being close to the origin is strictly positive, which implies that there is a $\pm 1$ assignment of small discrepancy. We do not know suitable local limit theorems that work for all matrices $M$. We will consider random matrices of the form $M=M^X(n)$, where $X$ is a random variable taking values in some lattice $\mathcal{L} \subset \mathbb{R}^m$, and $M^X(n)$ has $n$ columns selected independently according to $X$. We will show that, for suitably large $n$ (depending on the distribution $X$), such a random matrix will, with high probability, satisfy a local limit theorem. The relative error in the local limit theorem will decay with $n$, and our bounds will provide quantitative information on this decay rate. In order to understand our bounds, it helps to understand what properties of $X$ cause the error to decay slowly with $n$.
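For intuition, the quantity being bounded is elementary to compute on tiny instances: one can enumerate all $2^n$ signings directly. The following Python sketch (our own illustration, not part of the proofs; all names are ours) samples a random $t$-sparse matrix and evaluates $\min_{\mathbf y \in \{\pm 1\}^n} \|M\mathbf y\|_\infty$ exactly:

```python
import random
from itertools import product

def random_t_sparse(m, t, n, rng):
    """n columns, each uniform over 0/1 vectors of length m with exactly t ones."""
    cols = []
    for _ in range(n):
        S = set(rng.sample(range(m), t))
        cols.append([1 if i in S else 0 for i in range(m)])
    return cols

def disc_inf(cols):
    """Exact l_infinity discrepancy, by brute force over all sign vectors."""
    m = len(cols[0])
    best = float("inf")
    for y in product((1, -1), repeat=len(cols)):
        rows = [sum(s * col[i] for s, col in zip(y, cols)) for i in range(m)]
        best = min(best, max(abs(v) for v in rows))
    return best

rng = random.Random(0)
M = random_t_sparse(m=4, t=2, n=10, rng=rng)
print(disc_inf(M))  # the exact discrepancy of this sample
```

Already at these sizes the minimum is far below the trivial bound of $n$, in line with the theorems above; the point of the local limit machinery is to certify this for $n$ far beyond the reach of enumeration.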
We'll seek local limit theorems that compare $\Pr_{\mathbf y}[M \mathbf y = w]$ to something proportional to $e^{- \frac{1}{2} w^\dagger (M M^\dagger)^{-1} w}$. One cannot expect such precise control if the lattice is very fine. If the spacing tends to zero, we approach the situation in which $X$ is not on a lattice, in which case the probability of hitting any particular element could always be zero! In fact, in the nonlattice situation the covering radius can be zero but the discrepancy can typically be nonzero. For this reason our bounds will depend on $\log (\det \mathcal{L})$ and on $L$. We also need $\rho_*(\mathcal{L})$ and the distortion $R_*$ to be small in order to ensure that $e^{- \frac{1}{2} w^\dagger (M M^\dagger)^{-1} w}$ is not too small for some vector $w$ that we want to show is hit by $M\mathbf y$ with positive probability over $\mathbf y$. Finally, we need that $X$ does not have most of its mass on or near a smaller sublattice $\mathcal{L}'$. This is the role of spanningness, which is analogous to the spectral gap for Markov chains. Since we assume $X$ is symmetric, choosing the columns of $M$ and then choosing $\mathbf y$ at random is the same as adding $n$ identically distributed copies of $X$. Intuitively, this means that if $M$ is likely to have $M\mathbf y$ distributed according to a lattice Gaussian, then the sum of $n$ copies of $X$ should also tend to the lattice Gaussian on $\mathcal{L}$. If the support of $X$ is contained in a smaller lattice $\mathcal{L}'$, then clearly $X$ cannot obey such a local central limit theorem, because sums of copies of $X$ are also contained in $\mathcal{L}'$. In fact, this is essentially the only obstruction up to translations. We may state the above obstruction in terms of the \emph{dual lattice} and the Fourier transform of $X$.
\begin{defin}[dual lattice]\label{defin:dual} If $\mathcal{L}$ is a lattice, the \emph{dual lattice} $\mathcal{L}^*$ of $\mathcal{L}$ is the set $$\mathcal{L}^*= \{z: \langle z, \lambda \rangle \in \mathbb{Z} \textrm{ for all } \lambda \in \mathcal{L}\}.$$ \end{defin} The Fourier transform $\widehat{X}$ of $X$ is the function defined for $\theta \in \mathbb{R}^m$ by $\widehat{X}(\theta) = \mathbb{E} [\exp(2 \pi i \langle X, \theta \rangle )]$. Note that $|\widehat{X}(\theta)|$ is always $1$ for $\theta \in \mathcal{L}^*$. Conversely, if $|\widehat{X}(\theta)| =1$ \emph{implies} that $\theta \in \mathcal{L}^*$, then the support of $X$ is contained in no translation of a proper sublattice of $\mathcal{L}$. This suggests that, in order to show that a local central limit theorem holds, it is enough to rule out vectors $\theta$ outside the dual lattice with $|\widehat{X}(\theta)| = 1$. In this work, the obstructions are points $\theta$ far from the dual lattice with $\mathbb{E} [|\langle \theta, X \rangle \bmod 1|^2]$ small, where $y \bmod 1$ is taken in $(-1/2, 1/2]$. However, for $\theta$ very close to the dual lattice, writing $l$ for the nearest dual vector, we have $|\langle \theta, x \rangle \bmod 1|^2 = |\langle \theta - l, x \rangle |^2$ for all $x \in \operatorname{supp} X$, so for isotropic $X$ the expectation $\mathbb{E} [|\langle \theta, X \rangle \bmod 1|^2]$ is exactly $d(\theta, \mathcal{L}^*)^2$. The spanningness measures the value of $\mathbb{E} [|\langle \theta, X \rangle \bmod 1|^2]$ where this relationship breaks down. \begin{defin}[Spanningness for isotropic random variables]\label{defin:spanningness} Suppose that $Z$ is an isotropic random variable defined on the lattice $\mathcal{L}$. Let $$\tilde{Z}(\theta):= \sqrt{\mathbb{E} [|\langle \theta, Z \rangle \bmod 1|^2]},$$ where $y \bmod 1$ is taken in $(-1/2, 1/2]$, and say $\theta$ is \emph{pseudodual} if $\tilde{Z}(\theta) \leq d(\theta, \mathcal{L}^*)/2$.
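To make the mod-$1$ quantity concrete, the following sketch (our own illustration; the helper names are not from the paper) evaluates $\sqrt{\mathbb{E}[|\langle \theta, X\rangle \bmod 1|^2]}$ for a uniform $t$-sparse $X$, checking that it vanishes at $\theta = \frac{1}{t}\mathbf 1$, which pairs integrally with every $t$-sparse vector and hence lies in the dual lattice, and that it is strictly positive at a point off the dual lattice:

```python
import math
from itertools import combinations

def sfrac(y):
    """y mod 1, taken in (-1/2, 1/2]."""
    r = y - math.floor(y)
    return r - 1.0 if r > 0.5 else r

def tilde(theta, support):
    """sqrt(E[ (<theta, X> mod 1)^2 ]) for X uniform on `support`."""
    vals = [sfrac(sum(a * b for a, b in zip(theta, x))) ** 2 for x in support]
    return math.sqrt(sum(vals) / len(vals))

m, t = 4, 2
support = [[1 if i in S else 0 for i in range(m)]
           for S in combinations(range(m), t)]

# (1/t) * all-ones pairs integrally with every t-sparse vector, so the
# mod-1 quantity vanishes there
theta_dual = [1.0 / t] * m
assert tilde(theta_dual, support) < 1e-12

# off the dual lattice the quantity is strictly positive
theta = [0.5, 0.0, 0.0, 0.0]
assert abs(tilde(theta, support) - math.sqrt(0.125)) < 1e-12
```

The second assertion is an exact count: three of the six $2$-subsets of $\{1,\dots,4\}$ contain the first coordinate, each contributing $(1/2)^2$ to the average.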
Define the \emph{spanningness} $s(Z)$ of $Z$ by $$ s(Z) := \inf_{\mathcal{L}^* \not\ni \;\theta \textrm{ pseudodual}} \tilde{Z}(\theta).$$ It is a priori possible that $s(Z) = \infty$. \end{defin} Spanningness is, intuitively, a measure of how far $Z$ is from being contained in a proper sublattice of $\mathcal{L}$. Indeed, $s(Z) = 0$ if and only if this is the case. Bounding the spanningness is the most difficult part of applying our main theorem. Our spanningness bounds for $t$-sparse random matrices use techniques from the recent work of Kuperberg, Lovett and Peled \cite{LKP12}, in which the authors proved local limit theorems for $M\mathbf y$ for non-random, highly structured $M$. Our discrepancy bounds also apply to the lattice random variables considered in \cite{LKP12} with the spanningness bounds computed in that paper; this will be made precise in \cref{lem:lkp} of \cref{sec:span}. \subsubsection*{Related work} We submitted a draft of this work in April 2018, and during our revision process Hoberg and Rothvoss posted a paper on arXiv using very similar techniques on a closely related problem \cite{HR18}. They study random $m\times n$ matrices $M$ with independent entries that are $1$ with probability $p$, and show that $\operatorname{disc} M = 1$ with high probability provided $n = \Omega(m^2 \log m)$. The results are closely related but incomparable: our results are more general, but when applied to their setting we obtain a weaker bound of $n = \Omega(m^3 \log^2m)$. \subsubsection*{Organization of the paper} In \cref{sec:local} we build the technical machinery to carry out the strategy from the previous section.
We state our local limit theorem and show how to use it to bound discrepancy. In \cref{sec:tsparse} we recall some techniques for bounding spanningness, the main parameter that controls our local limit theorem, and use these bounds to prove \cref{thm:tsparse} on the discrepancy of random $t$-sparse matrices. In \cref{sec:unit} we use similar techniques to bound the discrepancy of matrices with random unit columns. \cref{sec:proofs} contains the proofs of our local limit theorems. \subsubsection*{Notation} If not otherwise specified, $M$ is a random $m\times n$ matrix with columns drawn independently from a distribution $X$ on a lattice $\mathcal{L}$ that is supported only in a ball $B(L)$, and the integer span of the support of $X$ (denoted $\operatorname{supp} X$) is $\mathcal{L}$. $\Sigma$ denotes $\mathbb{E} XX^\dagger$. $D$ will denote the Voronoi cell of the dual lattice $\mathcal{L}^*$ of $\mathcal{L}$. $\| \cdot \|_2$ denotes the Euclidean norm for vectors and the spectral norm for matrices, and $\| \cdot \|_*$ denotes an arbitrary norm. Throughout the paper there are several constants $c_1, c_2, \dots$. These are assumed to be absolute constants, and we will assume they are large enough (or small enough) when needed. \section{Likely local limit theorem and discrepancy}\label{sec:local} Here we show that with high probability over the choice of $M$, the random variable $My$ resembles a Gaussian on the lattice $\mathcal{L}$. We also show how to use the local limit theorem to bound discrepancy. For ease of reference, we define the rate of growth $n$ must satisfy in order for our local limit theorems to hold. \begin{defin}\label{defin:ndef} Define $N_0 = N_0(m, s(X), L, \det \mathcal{L})$ by \begin{align}N_0:=c_{\ref*{nlower}} \max\left\{m^2 L^2( \log m + \log L)^2, s(X)^{-4}L^{-2},L^2 \log^2 \det \mathcal{L}\right\},\label{eq:nlim} \end{align} where $c_{\ref*{nlower}} $ is a suitably large absolute constant. 
\end{defin} A few definitions will be of use in the next theorem. \begin{defin}\label{def:gauss_psd} For a matrix $M$, define the lattice Gaussian with covariance $\frac{1}{4} M M^\dagger$ by $$G_{M}(\lambda) = \frac{2^{m/2}\det(\mathcal{L})}{\pi^{m/2}\sqrt{\det(MM^\dagger)}} e^{-2 \lambda^{\dagger}( MM^\dagger)^{-1}\lambda}.$$ For two Hermitian matrices $A$ and $B$, $A \succeq B$ means $A - B$ is positive-semidefinite. \end{defin} \begin{thm}\label{thm:lattice_local_limit} Let $X$ be a random variable on a lattice $\mathcal{L}$ such that $\mathbb{E} XX^\dagger = I_m$, $\operatorname{supp} X \subset B(L)$, and $\mathcal{L} = \operatorname{span}_\mathbb{Z} \operatorname{supp} X$. For $n \geq N_0$, with probability at least $1 - c_{\ref*{failconst}} L\sqrt{\frac{\log n}{n}}$ over the choice of columns of $M$, the following two properties of $M$ hold: \begin{enumerate} \item $MM^\dagger \succeq \frac{1}{2}n I_m$. \item For all $\lambda \in \mathcal{L} - \frac{1}{2} M \textbf 1$, \begin{align}\left|\Pr_{\mathbf y \in \{\pm 1/2\}^n} [M\mathbf y = \lambda] - G_{M}(\lambda)\right| \leq G_{M}(0) \cdot \frac{ 2m^2 L^2}{n}, \label{eq:thm_bound} \end{align} where $G_M$ is as in \cref{def:gauss_psd}. In particular, for all $\lambda \in \mathcal{L} - \frac{1}{2} M \textbf 1$ with $e^{-2 \lambda^{\dagger}( MM^\dagger)^{-1}\lambda} > 2m^2 L^2/n$ we have $$\Pr_{\mathbf y \in \{\pm 1/2\}^n}(M\mathbf y = \lambda) > 0.$$ \end{enumerate} \end{thm} Equipped with the local limit theorem, we may now bound the discrepancy. We restate \cref{thm:lattice} using $N_0$. \begin{thm}[discrepancy for isotropic random variables]\label{thm:identity_discrepancy} Suppose $X$ is an isotropic random variable on a nondegenerate lattice $\mathcal{L}$ with $\mathcal{L}=\operatorname{span}_\mathbb{Z} \operatorname{supp} X$ and $\operatorname{supp} X \subset B(L)$.
If $n \geq N_1$ then $$ \operatorname{disc}_*( M) \leq d_* (M \mathbf 1, 2 \mathcal{L}) \leq 2 \rho_*(\mathcal{L})$$ with probability $1 - c_{\ref*{failconst}} L\sqrt{\frac{\log n}{n}}$, where \refstepcounter{const}\label{ndisc} \begin{align}N_1 =c_{\ref*{ndisc}} \max\left\{R^2_* \rho_*(\mathcal{L})^2, N_0\left(m, s(X), L, \det \mathcal{L}\right)\right\} \label{eq:isodisc}\end{align} for $N_0$ as in \cref{eq:nlim}. \end{thm} \begin{proof} By the definition of the covering radius of a lattice, there is a point $\lambda \in \mathcal{L} - \frac{1}{2} M \textbf 1$ with $\|\lambda\|_* \leq d_* (\frac{1}{2} M \mathbf 1, \mathcal{L}) \leq \rho_{*}(\mathcal{L})$. It is enough to show that, with high probability over the choice of $M$, the point $\lambda$ is hit by $M\mathbf y$ with positive probability over $\mathbf y \in \{\pm 1/2\}^n$. If so, $2\mathbf y$ is a coloring of $M$ with discrepancy at most $2 \rho_* (\mathcal{L})$.\\ Because $n$ is at least $N_0(m, s(X), L, \det \mathcal{L})$, the events in \cref{thm:lattice_local_limit} hold with probability at least $1 - c_{\ref*{failconst}} L\sqrt{\frac{\log n}{n}}$. We claim that if the events in \cref{thm:lattice_local_limit} occur, then $\lambda$ is hit by $M\mathbf y$ with positive probability. Indeed, by the final conclusion in \cref{thm:lattice_local_limit}, it is enough to show that $$e^{-2 \lambda^{\dagger}(MM^\dagger)^{-1}\lambda} > 2 m^2 L^2/n.$$ Because $n \geq N_1$, $e^{-1} \geq 2 m^2 L^2/n$. Thus, it is enough to show $\lambda^{\dagger}(MM^\dagger)^{-1}\lambda < \frac{1}{2}$. This holds because $MM^\dagger \succeq \frac{1}{2} n I_m$ gives $\lambda^{\dagger}(MM^\dagger)^{-1}\lambda \leq \frac{2}{n}\|\lambda\|_2^2 \leq \frac{2R^2_*}{n}\|\lambda\|_*^2 \leq \frac{2R^2_*}{n} \rho_*(\mathcal{L})^2$. \end{proof} Now \cref{thm:lattice} is an immediate corollary of \cref{thm:identity_discrepancy}. \begin{thm}[Restatement of \cref{thm:lattice}]\label{thm:lattice_disc}Suppose $X$ is a random variable on a nondegenerate lattice $\mathcal{L}$.
Suppose $\Sigma:=\mathbb{E} [XX^\dagger]$ has least eigenvalue $\sigma$, $\operatorname{supp} X \subset \Sigma^{1/2} B(L)$, and that $\mathcal{L}=\operatorname{span}_\mathbb{Z} \operatorname{supp} X$. If $n \geq N$ then $$ \operatorname{disc}_*( M) \leq d_* (M \mathbf 1, 2 \mathcal{L}) \leq 2 \rho_*(\mathcal{L})$$ with probability at least $1 - c_{\ref*{failconst}} L\sqrt{\frac{\log n}{n}}$, where \begin{align}N =c_{\ref*{ndisc}} \max\left\{\frac{R^2_* \rho_*(\mathcal{L})^2}{\sigma}, N_0\left(m, s(\Sigma^{-1/2}X), L, \frac{\det \mathcal{L}}{\sqrt{\det \Sigma}}\right)\right\} \label{eq:ndisc}\end{align} for $N_0$ as in \cref{eq:nlim}. \end{thm} \begin{proof}Note that $\sigma > 0$: if $\sigma = 0$, then $\operatorname{span}_\mathbb{R}\operatorname{supp} X$ would be a proper subspace of $\mathbb{R}^m$, contradicting the nondegeneracy of $\mathcal{L} = \operatorname{span}_\mathbb{Z} \operatorname{supp} X$. Let $Z:= \Sigma^{-1/2} X$ so that $\mathbb{E} [ZZ^\dagger] = I_m$; we'll apply \cref{thm:identity_discrepancy} to the random variable $Z$, the norm $\|\cdot\|_0$ given by $\| v \|_0:=\|\Sigma^{1/2} v \|_*$, and the lattice $\mathcal{L}' = \Sigma^{-1/2} \mathcal{L}$. The distortion $R_0$ is at most $R_*/\sigma^{1/2}$, the lattice determinant becomes $\det \mathcal{L}' = \det \mathcal{L}/\sqrt{\det \Sigma}$, and $\operatorname{supp} Z \subset B(L)$. The covering radius $\rho_0(\mathcal{L}')$ is exactly $\rho_*(\mathcal{L})$.
Since the choice of $N$ in \cref{eq:ndisc} is $N_1$ of \cref{thm:identity_discrepancy} for $Z, \|\cdot \|_0,$ and $\mathcal{L}'$, we have from \cref{thm:identity_discrepancy} that $$\operatorname{disc}_*(M) = \operatorname{disc}_0(\Sigma^{-1/2} M) \leq 2\rho_0(\mathcal{L}') = 2 \rho_*(\mathcal{L})$$ with probability at least $1 - c_{\ref*{failconst}} L\sqrt{\frac{\log n}{n}}$. \end{proof} \section{Discrepancy of random $t$-sparse matrices}\label{sec:tsparse} Here we will state our spanningness bounds for $t$-sparse matrices and, before proving them, compute the bounds guaranteed by \cref{thm:lattice}. For $S \in \binom{[m]}{t}$, let $1_S \in \mathbb{R}^m$ denote the characteristic vector of $S$. \begin{fact}[random $t$-sparse vector]\label{fact:tsparse} A \emph{random $t$-sparse vector} is $1_S$ for $S$ drawn uniformly at random from $\binom{[m]}{t}$. Let $X$ be a random $t$-sparse vector with $0 < t < m$. The lattice $\mathcal{L} = \operatorname{span}_\mathbb{Z} \operatorname{supp} X \subset \mathbb{Z}^m$ is the lattice of integer vectors whose coordinate sum is divisible by $t$; we have $\rho_\infty(\mathcal{L}) =1$. Observe that $\mathcal{L}^* = \mathbb{Z}^m + \mathbb{Z} \frac{1}{t} \textbf{1}$, where $\textbf{1}$ is the all-ones vector. Since $e_1, \dots, e_{m-1}, \frac{1}{t}\textbf 1$ is a basis for $\mathcal{L}^*$, $\det \mathcal{L} = 1/\det \mathcal{L}^*= t$. The covariance matrix is given by $$\Sigma_{ij} = \mathbb{E}[X X^\dagger]_{ij} = \begin{cases} \dfrac{t}{m} & \text{if } i = j, \\[4pt] \dfrac{t(t-1)}{m(m-1)} & \text{if } i \neq j. \end{cases}$$ The eigenvalues of $\Sigma$ are $\frac{t^2}{m}$ with multiplicity one, and $\frac{t(m-t)}{m(m-1)}$ with multiplicity $m-1$. \end{fact} Proving the next lemma is the goal of the next two subsections.
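The covariance computations recorded in \cref{fact:tsparse} can be verified exactly in rational arithmetic for small parameters; the sketch below (an illustrative check of ours, not part of the proofs) confirms the entries of $\Sigma$ and both eigenvalues for $m=5$, $t=2$:

```python
from fractions import Fraction
from itertools import combinations

m, t = 5, 2
support = [[1 if i in S else 0 for i in range(m)]
           for S in combinations(range(m), t)]
N = len(support)

# exact covariance Sigma = E[X X^T] for X uniform on t-sparse 0/1 vectors
Sigma = [[Fraction(sum(x[i] * x[j] for x in support), N) for j in range(m)]
         for i in range(m)]

assert Sigma[0][0] == Fraction(t, m)
assert Sigma[0][1] == Fraction(t * (t - 1), m * (m - 1))

def matvec(A, v):
    return [sum(A[i][j] * v[j] for j in range(m)) for i in range(m)]

# the all-ones vector is an eigenvector with eigenvalue t^2/m ...
ones = [Fraction(1)] * m
assert matvec(Sigma, ones) == [Fraction(t * t, m)] * m

# ... and e_1 - e_2 has eigenvalue t(m-t)/(m(m-1))
v = [Fraction(1), Fraction(-1)] + [Fraction(0)] * (m - 2)
lam = Fraction(t * (m - t), m * (m - 1))
assert matvec(Sigma, v) == [lam * vi for vi in v]
```

Since the vectors $e_i - e_j$ span the orthogonal complement of $\mathbf 1$, the second check accounts for the full multiplicity $m-1$.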
\begin{lem}\label{lem:tspanning} There is a constant $\refstepcounter{const} \label{spreadconst} c_{\ref*{spreadconst}}$ such that the spanningness is at least $c_{\ref*{spreadconst}} m^{-1}$; that is, $$ s(\Sigma^{-1/2}X) \geq c_{\ref*{spreadconst}} m^{-1}.$$ \end{lem} Before proving this, we plug the spanningness bound into \cref{thm:lattice_disc} to bound the discrepancy of random $t$-sparse matrices. \begin{proof}[Proof of \cref{thm:tsparse}] If $X$ is a random $t$-sparse vector, $\|\Sigma^{-1/2} X\|_2$ is $\sqrt{m}$ with probability one. This is because $\mathbb{E}\|\Sigma^{-1/2} X\|_2^2 = m$, but by symmetry $\|\Sigma^{-1/2} x\|_2$ is the same for every $x \in \operatorname{supp} X$. Hence, we may take $L = \sqrt{m}$. By \cref{fact:tsparse}, $\sigma$ is $\frac{t(m-t)}{m(m-1)}$. Now $N$ from \cref{thm:lattice_disc} is at most \begin{align} c_{\ref*{ndisc}}\max\left\{\underbrace{m \cdot \frac{m(m-1)}{t(m-t)}}_{\frac{R_\infty^2 \rho_\infty(\mathcal{L})^2}{\sigma}}, \underbrace{m^3\log^2 m}_{m^2 L^2( \log m + \log L)^2}, \underbrace{m^3}_{s(X)^{-4}L^{-2}}, \underbrace{m \log^2 t}_{L^2\log^2 \det \mathcal{L}}\right\},\label{eq:ntsparse}\end{align} which is $O(m^3 \log^2 m)$. \end{proof} We can refine this theorem to obtain the limiting distribution for the discrepancy. \begin{thm}[discrepancy of random $t$-sparse matrices]\label{thm:tsparse2} Let $M$ be a random $t$-sparse matrix for $0 < t< m$. Let $Y \sim \operatorname{B}(m, 1/2)$ be a binomial random variable with $m$ trials and success probability $1/2$. Suppose $n =\Omega(m^3 \log^2 m)$.
If $n$ is even, then \begin{align*} \Pr[\operatorname{disc}(M) = 0] &= 2^{-m+1} + O\left(\sqrt{(m/n)\log n}\right)\\ \Pr[\operatorname{disc}(M) = 1] &= (1 - 2^{-m+1}) + O\left(\sqrt{(m/n)\log n}\right) \end{align*} and if $n$ is odd then \begin{align*} \Pr[\operatorname{disc}(M) =0] &= 0\\ \Pr[\operatorname{disc}(M) = 1] &= \Pr[Y \geq t| Y \equiv t \bmod 2] + O\left(\sqrt{(m/n)\log n}\right)\\ \Pr[\operatorname{disc}(M) = 2] &= \Pr[Y < t| Y \equiv t \bmod 2] + O\left(\sqrt{(m/n)\log n}\right) \end{align*} with probability at least $1 - O\left(\sqrt{\frac{m\log n}{n}}\right).$ Note that $$\Pr[Y \leq s| Y \equiv t \bmod 2] =2^{-m + 1} \sum_{\substack{0 \leq k \leq s \\ k \equiv t \bmod 2}} \binom{m}{k}.$$ \end{thm} \begin{proof}[Proof of \cref{thm:tsparse2}] The proof is identical to that of \cref{thm:tsparse} except we do additional work to bound $d_* (M \mathbf 1, 2 \mathcal{L})$ instead of just using $2 \rho_*(\mathcal{L})$. There are two cases: \begin{description} \item[Case 1: $n$ is odd.] The coordinates of $M \mathbf 1$ sum to $nt$. The lattice $2\mathcal{L}$ consists of the integer vectors with even coordinates whose sum is divisible by $2t$. Thus, in order to move $M \mathbf 1$ to $2 \mathcal{L}$, each odd coordinate must be changed to even and the total sum must be changed by an odd number times $t$. The number of odd coordinates has the same parity as $t$, so we may move $M \mathbf 1$ to $2\mathcal{L}$ by changing each coordinate by at most $1$ if and only if the number of odd coordinates is at least $t$. \item[Case 2: $n$ is even.] In this case, the total sum of the coordinates must be changed by an \emph{even} number times $t$. The parity of the number of odd coordinates is even, so the odd coordinates can all be changed to even while preserving the sum of all the coordinates. This shows that we may move $M \mathbf 1$ to $2\mathcal{L}$ by changing each coordinate by at most $1$, and that no change at all is needed if all the coordinates of $M \mathbf 1$ are even.
\end{description} Thus, in the even case the discrepancy is at most $1$ with the same failure probability, and is $0$ with the probability that all the row sums are even; in the odd case the discrepancy is at most $1$ provided the number of odd coordinates of $M \mathbf 1$ is at least $t$. Observe that the vector of row sums of an $m\times n$ random $t$-sparse matrix taken modulo $2$ is distributed as the sum of $n$ random vectors of Hamming weight $t$ in $\mathbb{F}_2^m$. \cref{lem:markovmixing} below shows that this vector is within $O(e^{-n/m + m})$ in total variation distance of a uniformly random vector of Hamming weight of the same parity as $nt$; in particular, its Hamming weight is within the same distance of a binomial $\operatorname{B}(m, 1/2)$ conditioned on having the same parity as $nt$. Because this is dominated by $\sqrt{(m/n)\log n}$ for $n \geq m^3 \log^2m$, the theorem is proved. \end{proof} \begin{lem}[number of odd rows]\label{lem:markovmixing} Suppose $X_n$ is a sum of $n$ uniformly random vectors of Hamming weight $0 <t<m$ in $\mathbb{F}_2^m$ and $Z_n$ is a uniformly random element of $\mathbb{F}_2^m$ with Hamming weight having the same parity as $nt$. If $d_{TV}$ denotes the total variation distance, then $$ d_{TV}(X_n, Z_n) = O(e^{- n/m + m}).$$ \end{lem} \begin{proof} Though we will not use the language of Markov chains, the following calculation consists of showing that the random walk on the group $\mathbb{F}_2^m$ mixes rapidly by showing it has a spectral gap. Let $X$ be a random element of $\mathbb{F}_2^m$ of Hamming weight $t$. Let $f$ be the probability mass function of $X$. Let $h_n$ be the probability mass function of $Z_n$. By the Cauchy-Schwarz inequality, it is enough to show that the probability mass function $f_n$ of the sum of $n$ i.i.d.\ copies of $X$ satisfies \begin{align*}\sum_{x \in \mathbb{F}_2^m} |f_n(x) - h_n(x)|^2 = O(e^{-2n/m}).\end{align*} For $y \in \mathbb{F}_2^m$, let $\chi_y: \mathbb{F}_2^m \to \{\pm 1\}$ be the Walsh function $\chi_y(x) = (-1)^{y \cdot x}$.
The Fourier transform of a function $g: \mathbb{F}_2^m \to \mathbb{R}$ is the function $\widehat{g}: \mathbb{F}_2^m \to \mathbb{R}$ given by $ \widehat{g}(y) = \sum_{x \in \mathbb{F}_2^m} g(x)\chi_y(x).$ The function $f_n$ satisfies $\widehat{f}_n = (\widehat{f}\;)^n$. Note that $\widehat{h}_n(\mathbf 0) =\widehat{f}_n(\mathbf 0)= 1$, $\widehat{h}_n(\mathbf 1) = \widehat{f}_n(\mathbf 1) = (-1)^{nt}$, and $\widehat{h}_n = 0$ elsewhere. By Plancherel's identity, \begin{align}\sum_{x \in \mathbb{F}_2^m}|f_n(x) - h_n(x)|^2 &= \mathbb{E}_{y \in \mathbb{F}_2^m} |\widehat{f}(y)^n - \widehat{h}_n(y)|^2\\ & = \sum_{y \in \mathbb{F}_2^m, y \neq \mathbf0, y \neq \mathbf 1} 2^{-m}|\widehat{f}(y)|^{2n}.\label{walsh} \end{align} Now we claim that $|\widehat{f}(y)| \leq 1 - \frac{1}{m}$ for $y \not \in \{\mathbf 0, \mathbf1\}$, which would imply \cref{walsh} is at most $ (1 - 1/m)^{2n} \leq e^{-2n/m}.$ Indeed, if the Hamming weight of $y$ is $s$, then $\widehat{f}(y)$ is exactly the expectation of $(-1)^{|S \cap T|}$ where $T$ is a random $t$-set and $S$ a fixed $s$-set. By symmetry we may assume $t \leq s$, and since we are only concerned with the absolute value of this quantity, by taking the complement of $S$ we may assume $s \leq m/2$. We may choose the elements of $T$ in order; it is enough to show that the expectation of $(-1)^{|S \cap T|}$ is at most $1 - 1/m$ in absolute value even after conditioning on the choice of the first $t-1$ elements of $T$. Indeed, the value of $(-1)^{|S \cap T|}$ is not determined by this choice, so the conditional expectation is a rational number in $(-1,1)$ with denominator at most $m$, and hence at most $1 - 1/m$ in absolute value.\end{proof} We'll now discuss a general method for bounding the spanningness of lattice random variables. \subsection{Spanningness of lattice random variables}\label{sec:span} Suppose $X$ is a finitely supported random variable on $\mathcal{L}$. We wish to bound the spanningness $s(X)$ below.
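As a toy illustration (our own sketch, not from the paper) of how concentration near a proper sublattice forces the spanningness down, take the one-dimensional variable $X$ equal to $\pm 1$ with probability $p/2$ each and $\pm 2$ with probability $(1-p)/2$ each. Then $\operatorname{span}_\mathbb{Z} \operatorname{supp} X = \mathbb{Z}$, but for small $p$ the mass sits mostly on $2\mathbb{Z}$, and after the isotropic normalization of \cref{defin:spanningness} the point halfway between consecutive dual points is pseudodual with $\tilde{Z}(\theta) = \sqrt{p}/2$:

```python
import math

def sfrac(y):
    """y mod 1, taken in (-1/2, 1/2]."""
    r = y - math.floor(y)
    return r - 1.0 if r > 0.5 else r

# X = +/-1 w.p. p/2 each, +/-2 w.p. (1-p)/2 each; lattice Z, dual lattice Z
p = 0.1
support = [(1, p / 2), (-1, p / 2), (2, (1 - p) / 2), (-2, (1 - p) / 2)]
c = math.sqrt(sum(w * x * x for x, w in support))  # sqrt(Sigma); Z = X/c is isotropic

def tilde_Z(theta):
    return math.sqrt(sum(w * sfrac(theta * x / c) ** 2 for x, w in support))

def dist_to_dual(theta):
    return abs(sfrac(theta / c)) * c  # the dual of the lattice (1/c)Z is cZ

theta = c / 2  # halfway between the consecutive dual points 0 and c
assert dist_to_dual(theta) > 0.4                  # far from the dual lattice ...
assert tilde_Z(theta) <= dist_to_dual(theta) / 2  # ... yet pseudodual
# hence s(Z) <= tilde_Z(theta) = sqrt(p)/2, which is small when p is small
assert abs(tilde_Z(theta) - math.sqrt(p) / 2) < 1e-12
```

Only the rare $\pm 1$ atoms contribute to $\tilde Z(\theta)$ at this $\theta$, which is exactly the mechanism the spanningness is designed to detect.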
The techniques below are nearly identical to those in \cite{LKP12}, in which spanningness is bounded for a very general class of random variables. We may extend spanningness to nonisotropic random variables. \begin{defin}[nonisotropic spanningness]\label{defin:noniso_span} A distribution $X$ with finite, nonsingular covariance $\mathbb{E} XX^\dagger = \Sigma$ determines a metric $d_X$ on $\mathbb{R}^m$ given by $d_X(\theta_1, \theta_2) = \|\theta_1 - \theta_2\|_X$ where the square norm $\|\theta \|^2_X$ is given by $\theta^\dagger \Sigma \theta = \mathbb{E}[ \langle X , \theta \rangle^2]$. Let $$\tilde{X}(\theta):= \sqrt{\mathbb{E} [|\langle \theta, X \rangle \bmod 1|^2]},$$ where $y \bmod 1$ is taken in $(-1/2, 1/2]$, and say $\theta$ is \emph{pseudodual} if $\tilde{X}(\theta) \leq d_X(\theta, \mathcal{L}^*)/2$. Define the \emph{spanningness} $s(X)$ of $X$ by $$ s(X) := \inf_{\mathcal{L}^* \not\ni \;\theta \textrm{ pseudodual}} \tilde{X}(\theta).$$ This definition of spanningness is invariant under invertible linear transformations $X \leftarrow AX$ and $\mathcal{L} \leftarrow A \mathcal{L}$; in particular, $s(X)$ is the same as $s(\Sigma^{-1/2} X)$, for which we have $\|\theta\|_{\Sigma^{-1/2}X} = \|\theta\|_2$. Hence, this definition extends the spanningness of \cref{defin:spanningness}. \end{defin} \subsubsection*{Strategy for bounding spanningness} Our strategy for bounding spanningness below is as follows: we need to show that if $\theta$ is \emph{pseudodual but not dual}, i.e., $0 < \tilde{X}(\theta) \leq d_X(\theta, \mathcal{L}^*)/2$, then $\tilde{X}(\theta)$ is large. We do this in the following two steps. \begin{enumerate} \item Find a $\beta$ such that if $|\langle x, \theta \rangle \bmod 1| < \frac{1}{\beta}$ for all $x \in \operatorname{supp} X$, then $\tilde{X}(\theta) \geq d_X( \theta, \mathcal{L}^*)$. Such $\theta$ cannot be pseudodual without being dual.
\item $X$ is $\alpha$-\emph{spreading}: for all $\theta$, $$\tilde{X}(\theta) \geq \alpha \sup_{x \in \operatorname{supp} X} |\langle x, \theta \rangle \bmod 1|$$ \end{enumerate} Together, if $\theta$ is pseudodual but not dual, then $\tilde{X}(\theta) \geq \alpha/\beta$. To achieve the first item, we use bounded integral spanning sets as in \cite{LKP12}. The following definitions and lemmas are nearly identical to arguments in the proof of Lemma 4.6 in \cite{LKP12}. \begin{defin}[bounded integral spanning set]\label{elvee} Say $B$ is an integral spanning set of a subspace $H$ of $\mathbb{R}^m$ if $B \subset \mathbb{Z}^m$ and $\operatorname{span}_\mathbb{R} B= H$. Say a subspace $H \subset \mathbb{R}^m$ has a \emph{$\beta$-bounded integral spanning set} if $H$ has an integral spanning set $B$ with $\max\{\|b\|_1: b \in B\} \leq \beta.$ \end{defin} \begin{defin}\label{gammavee}Let $A_X$ denote the matrix whose columns are the support of $X$ (in some fixed order). Say $X$ is \emph{$\beta$-bounded} if $\ker A_X$ has a $\beta$-bounded integral spanning set. \end{defin} \begin{lem}\label{dualclose} Suppose $X$ is $\beta$-bounded. Then either $$\max_{x \in \operatorname{supp}(X)}|\langle x, \theta \rangle \bmod{1}| \geq \frac{1}{\beta}$$ or $$\tilde{X}(\theta) \geq d_X(\theta, \mathcal{L}^*).$$ \end{lem} \begin{proof} To prove Lemma \ref{dualclose} we use a claim from \cite{LKP12}, which allows us to deduce that if $\langle x, \theta \rangle$ is very close to an integer for all $x$ then we can ``round" $\theta$ to an element of the dual lattice to get rid of the fractional parts. \begin{claim}[Claim 4.12 of \cite{LKP12}]\label{snapto}Suppose $X$ is $\beta$-bounded, and define $r_x := \langle x, \theta \rangle \bmod{1} \in (-1/2, 1/2]$ and $k_x$ to be the unique integer such that $\langle x, \theta \rangle = k_x + r_x$.
If $$\max_{x \in \operatorname{supp}(X)} |r_x| < 1/\beta$$ then there exists $l \in \mathcal{L}^*$ with $$\langle x, l \rangle = k_x $$ for all $x \in \operatorname{supp}(X)$. \end{claim} Now, suppose $\max_{x \in \operatorname{supp}(X)}|\langle x, \theta \rangle \bmod{1}| = \max_{x \in \operatorname{supp}(X)} |r_x| < 1/\beta$. By Claim \ref{snapto}, there exists $l \in \mathcal{L}^*$ with $\langle x, l \rangle = k_x$ for all $x \in \operatorname{supp}(X)$. By assumption, $$ \tilde{X}(\theta) = \sqrt{\mathbb{E}(\langle X, \theta\rangle \bmod 1)^2} = \sqrt{\mathbb{E} r_X^2}= \sqrt{\mathbb{E} \langle X, \theta - l \rangle^2} \geq d_X(\theta, \mathcal{L}^*),$$ proving Lemma \ref{dualclose}.\end{proof} In order to apply Lemma \ref{dualclose}, we will need to bound $\tilde{X}(\theta)$ below when there is some $x$ with $|\langle x, \theta \rangle|$ fairly large. \begin{defin}\label{alphavee} Say $X$ is $\alpha$-spreading if for all $\theta \in \mathbb{R}^m$, $$\tilde{X}(\theta) \geq \alpha \cdot \sup_{x \in \operatorname{supp} X}| \langle x, \theta \rangle \bmod{1}|.$$ \end{defin} Combining \cref{dualclose} with \cref{alphavee} yields the following bound. \begin{cor}\label{spreadcor} Suppose $X$ is $\beta$-bounded and $\alpha$-spreading. Then $s(X) \geq \frac{\alpha}{\beta}$. \end{cor} A lemma of \cite{LKP12} immediately gives a bound on spanningness for random variables that are uniform on their support. \begin{lem}[Lemma 4.4 from \cite{LKP12}]\label{lem:lkp} Suppose $X$ is uniform on $\operatorname{supp} X \subset B_\infty( L')$ and for any two elements $x, y \in \operatorname{supp} X$ there is an invertible linear transformation $A$ such that $Ax= y$ and $X = AX$.
Then $X$ is $$\Omega \left(\frac{1}{(m \log (L' m))^{3/2}} \right) \textrm{-spreading}.$$ In particular, if $X$ is $\beta$-bounded, then $$ s(X) = \Omega \left(\frac{1}{\beta(m \log (L' m))^{3/2}} \right).$$ \end{lem} \if{false} \begin{proof} We must show that if $\varepsilon^2 < 2 \frac{\alpha}{\beta^2}$ then $\inf_{D_X \setminus B_X(\varepsilon)} \mathbb{E} (\langle \theta, X \rangle \bmod 1)^2 \geq \frac{1}{2} \varepsilon^2.$ Suppose $\theta \in D_X \setminus B_X(\varepsilon)$ so that in particular, $d_X(\theta, \mathcal{L}^*) \geq \varepsilon$. Suppose $\max_{x \in \operatorname{supp}(X)}|\langle x, \theta \rangle \bmod{1}| \geq \frac{1}{\beta}$; by the assumption that $X$ is $\alpha$-spreading we have $$\mathbb{E}[(\langle X, \theta \rangle \bmod{1})^2] \geq \frac{\alpha}{\beta^2}.$$ If $\max_{x \in \operatorname{supp}(X)}|\langle x, \theta \rangle \bmod{1}| < \frac{1}{\beta}$, we must be in the second case of Lemma \ref{dualclose} and so $$\mathbb{E}(\langle X, \theta\rangle \bmod 1)^2 \geq \varepsilon^2.$$ Since $\frac{\alpha}{\beta^2}, \varepsilon^2 \geq \frac{1}{2} \varepsilon^2$, the proof is complete. \end{proof} \fi \subsection{Proof of \cref{lem:tspanning}} Using the techniques from the previous section, we'll prove \cref{lem:tspanning}, which states that $t$-sparse random vectors have spanningness $\Omega(m^{-1})$. In particular, we'll prove that $t$-sparse random vectors are $4$-bounded and $\Omega(m^{-1})$-spreading and apply \cref{spreadcor}. \subsubsection{Random $t$-sparse vectors are $\Omega(m^{-1})$-spreading} Note that \cref{lem:lkp} gives that $t$-sparse vectors are $\Omega \left(\frac{1}{(m \log (m))^{3/2}} \right)\textrm{-spreading}$, but we can do slightly better due to the simplicity of the distribution $X$.
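As a numerical sanity check on what is being claimed (this only samples directions, so it proves nothing; the helper \texttt{spread\_ratio} is ours, not from the text), one can estimate the ratio $\tilde{X}(\theta)/\sup_{x \in \operatorname{supp} X}|\langle x, \theta \rangle \bmod 1|$ that spreading controls:

```python
import itertools
import math
import random

def spread_ratio(m, t, theta):
    # Ratio tilde{X}(theta) / sup_{x in supp X} |<x, theta> mod 1| for the
    # random t-sparse vector X, with v mod 1 taken in [-1/2, 1/2].
    vals = [sum(theta[i] for i in T) for T in itertools.combinations(range(m), t)]
    vals = [v - round(v) for v in vals]
    rms = math.sqrt(sum(v * v for v in vals) / len(vals))
    sup = max(abs(v) for v in vals)
    return rms / sup if sup > 0 else float("inf")

random.seed(0)
m, t = 6, 3
# The spreading bound proved in this subsection (with c = 1/2) guarantees a
# ratio of at least 1/(2m) in every direction; sampled directions do better.
worst = min(spread_ratio(m, t, [random.uniform(-1, 1) for _ in range(m)])
            for _ in range(200))
```

Since the root-mean-square of finitely many values is never larger than their maximum, every ratio lies in $(0, 1]$; the content of the spreading lemma is the lower bound.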
In order to show that $t$-sparse vectors are $c$-spreading, recall that we must show that if a \emph{single} vector $1_S$ has $|\langle \theta, 1_S \rangle \bmod 1| > \delta$, then $\mathbb{E}[|\langle \theta, X \rangle \bmod 1|^2] \geq c^2\delta^2$. We cannot hope for $c = \omega(m^{-1/2})$, because for small enough $\delta$ the vector $\theta = \delta (\frac{1}{t} 1_{[t]} - \frac{1}{m-t} 1_{[m]\setminus [t]})$ has $\langle \theta, 1_{[t]} \rangle \bmod 1 = \delta$ but $\mathbb{E}[|\langle \theta, X \rangle \bmod 1|^2] = \frac{1}{m-1} \delta^2$. Our bound is worse than this, but the term in \cref{eq:ntsparse} depending on the spanningness is not the largest anyway, so this does not hurt our bounds. \begin{lem}\label{sparsespread} There exists an absolute constant \refstepcounter{const} \label{spreadconst}$c_{{\theconst}}>0$ such that random $t$-sparse vectors are $\frac{c_{\ref*{spreadconst}}}{m}$-spreading. \end{lem} \begin{proof} If $t = 0$ or $t = m$, then $t$-sparse vectors are trivially $1$-spreading. Suppose there is some $t$-subset of $[m]$, say $[t]$ without loss of generality, satisfying $|\langle \theta, 1_{[t]}\rangle \bmod 1| = \delta > 0$. For convenience, for $S \in \binom{[m]}{t}$, define $$w(S) := |\langle \theta, 1_{S}\rangle \bmod 1|.$$ We need to show that $w([t]) = \delta$ implies $\mathbb{E}_S w(S)^2 =\Omega(m^{-2}\delta^2)$.
To do this, we will define random integer coefficients $\lambda = \left(\lambda_S: S \in \binom{[m]}{t}\right)$ such that $$1_{[t]} = \sum_{S \in \binom{[m]}{t}} \lambda_S 1_S.$$ Because $|a + b \bmod 1| \leq |a \bmod 1| + |b \bmod 1|$ for our definition of $\bmod 1$, we have the lower bound \begin{align}\delta = w([t]) \leq \mathbb{E}_{\lambda} \sum_{S\in \binom{[m]}{t}} w(S) |\lambda_S|= \sum_{S \in \binom{[m]}{t}} w(S) \cdot \mathbb{E}_\lambda |\lambda_S|.\label{eq:coeffs}\end{align} It is enough to show $\mathbb{E}_\lambda |\lambda_S|$ is small for all $S$ in $\binom{[m]}{t}$, because then $\mathbb{E} [ w(S)]$ is large and $$\mathbb{E}[ w(S)^2] \geq \mathbb{E}[ w(S)]^2.$$ We now proceed to define $\lambda$. Let $\sigma$ be a uniformly random permutation of $[m]$ and let $T = \sigma({[t]})$. We have \begin{align} 1_{[t]} = 1_T + \sum_{i \in {[t]}: i \neq \sigma (i)} e_i - e_{\sigma(i)}, \label{eq:permutation}\end{align} where $e_i$ is the $i^{th}$ standard basis vector. Now for each $i \in {[t]}: i \neq \sigma (i)$ pick $R_i$ at random conditioned on $i \in R_i$ but $\sigma(i) \not\in R_i$. Then \begin{align}e_i - e_{\sigma(i)} = 1_{R_i} - 1_{R_i - i + \sigma(i)}. \label{eq:difference} \end{align}To construct $\lambda$, first define the indicator $e^U$ by $ e^U_S:= 1_{S =U}$ for $U, S \in \binom{[m]}{t}$, and then define $$ \lambda = e^T + \sum_{i \in {[t]}: i \neq \sigma (i)} \left(e^{R_i} - e^{R_i - i + \sigma(i)}\right).$$ By \cref{eq:permutation} and \cref{eq:difference}, this choice satisfies $\sum \lambda_S 1_S = 1_{[t]}$. It remains to bound $\mathbb{E}_\lambda|\lambda_S|$ for each $S$.
We have \begin{align} \mathbb{E}_\lambda |\lambda_S| &\leq \Pr [T = S]\label{eq:tee}\\ &+ \sum_{i = 1}^t \Pr[\sigma(i) \neq i \textrm{ and }R_i = S]\label{eq:rplus}\\ & + \sum_{i = 1}^t \Pr[\sigma(i) \neq i \textrm{ and } R_i - i + \sigma(i) = S].\label{eq:rminus} \end{align} Since $T$ is a uniformly random $t$-set, $\cref{eq:tee} = \binom{m}{t}^{-1}$. Next we have $\Pr[\sigma(i) \neq i \textrm{ and }R_i = S] = \frac{m-1}{m} \Pr[R_i = S]$. However, $R_i$ is chosen uniformly at random among the $t$-sets containing $i$, so $$\Pr[R_i = S] =\binom{m-1}{t-1}^{-1} 1_{i \in S} = \frac{m}{t} \binom{m}{t}^{-1} 1_{i \in S}. $$ Thus $\cref{eq:rplus} \leq (m-1) \binom{m}{t}^{-1}.$ Similarly, $R_i - i + \sigma(i)$ is chosen uniformly at random among sets \emph{not} containing $i$, so $\Pr[ R_i - i + \sigma(i) = S] = \binom{m}{t-1}^{-1} 1_{i \not\in S} = \frac{m-t + 1}{t} \binom{m}{t}^{-1} 1_{i \not\in S}$. Thus $\cref{eq:rminus} \leq m \binom{m}{t}^{-1}.$ Thus, for every $S$ we have $\mathbb{E}_\lambda|\lambda_S| \leq 2 m \binom{m}{t}^{-1}.$ Combining this with \cref{eq:coeffs} we have $$\mathbb{E}[ w(S)^2] \geq \mathbb{E}[ w(S)]^2 \geq (2m)^{-2} \delta^2.$$ We may take $c_{\ref*{spreadconst}} = 1/2$. \end{proof} \subsubsection{Random $t$-sparse vectors are $4$-bounded} Recall that $A_X$ is a matrix whose columns consist of the finite set $\operatorname{supp} X = \left\{1_S: S \in \binom{[m]}{t}\right\}.$ We index the columns of $A_X$ by $\binom{[m]}{t}$. \begin{lem}\label{4bd} $X$ is $4$-bounded. That is, $\ker A_X$ has a $4$-bounded integral spanning set. \end{lem} Before we prove the lemma we establish some notation. We have $A_X:\mathbb{R}^{\binom{[m]}{t}} \to \mathbb{R}^m$. Let $e_S \in \mathbb{R}^{\binom{[m]}{t}}$ denote the standard basis element with a one in the $S$ position and 0 elsewhere.
For $i \in [m]$, $e_i \in \mathbb{R}^m$ denotes the standard basis vector with a one in the $i^{th}$ position and 0 elsewhere. \begin{defin}[the directed graph $G$] For $S, S' \in \binom{[m]}{t}$ we write $S' \to_j S$ if $1 \in S'$, $j \not \in S'$ and $S$ is obtained by replacing 1 by $j$ in $S'$. Let $G$ be the directed graph with $V(G) = \binom{[m]}{t}$ and $S'S \in E(G)$ if and only if $S'\to_j S$ for some $j \in S\setminus S'$. Thus every set containing 1 has out-degree $m-t$ and in-degree 0 and every set not containing 1 has in-degree $t$ and out-degree 0. \end{defin} The following proposition implies Lemma \ref{4bd}. Note that if $S'\to_j S$, then $1_{S'} - 1_{S} = e_1 - e_j$. \begin{prop}\label{spanning} $$\mathcal{S} = \bigcup_{j = 2}^{m}\{e_{S'} - e_S + e_T - e_{T'}: S' \to_j S \textrm{ and } T' \to_j T\}$$ is a spanning set for $\ker A_X$. \end{prop} \begin{proof}[Proof of \cref{spanning}] Clearly $\mathcal{S}$ is a subset of $\ker A_X$: if $S'\to_j S$, then $A_X (e_{S'} - e_S) = 1_{S'} - 1_{S} = e_1 - e_j$, so if $S' \to_j S$ and $T' \to_j T$, then $A_X (e_{S'} - e_S + e_T - e_{T'}) = 0$ and $e_{S'} - e_S + e_T - e_{T'} \in \ker A_X$. \\ Next we prove that $\mathcal{S}$ spans $\ker A_X$. Note that $\dim \ker A_X = \binom{m}{t}-m$, because the column space of $A_X$ is of dimension $m$ (as we have seen, $e_1 - e_j$ are in the column space of $A_X$ for all $1 < j \leq m$; together with some $1_S$ for $1 \notin S \in \binom{[m]}{t}$ we have a basis of $\mathbb{R}^m$). Thus, we need to show $\dim \operatorname{span}_\mathbb{R} \mathcal{S}$ is at least $\binom{m}{t}-m$.\\ For each $j \in \{2, \dots, m\}$, there is some pair $T_j, T_j' \in \binom{[m]}{t}$ such that $T'_j \to_j T_j$; pick such a pair and let $f_j := e_{T'_j} - e_{T_j}$.
As there are only $m-1$ many $f_j$'s, $\dim \operatorname{span} \{f_j: 2 \leq j \leq m\} \leq m-1$. By the previous argument, if $S' \to_j S$, then $e_{S'} - e_S - f_j \in \ker A_X$. Because $\bigcup_{j = 2}^{m}\{e_{S'} - e_S - f_j : S'\to_jS\} \subset \mathcal{S}$, it is enough to show that $$\dim \operatorname{span}_\mathbb{R} \bigcup_{j = 2}^{m}\{e_{S'} - e_S - f_j : S'\to_jS\} \geq \binom{m}{t}-m.$$ We can do this using the next claim, the proof of which we delay. \begin{claim}\label{highdim} $$\dim\operatorname{span}_\mathbb{R}\bigcup_{j=2}^m \{e_{S'} - e_S : S' \to_j S\} = \binom{m}{t}-1.$$ \end{claim} Let's see how to use \cref{highdim} to finish the proof: \begin{eqnarray*} \dim \operatorname{span}_\mathbb{R} \bigcup_{j = 2}^{m}\{e_{S'} - e_S - f_j : S'\to_jS\} \geq\\ \dim \operatorname{span}_\mathbb{R} \bigcup_{j = 2}^{m}\{e_{S'} - e_S: S'\to_jS\} - \dim\operatorname{span}_\mathbb{R}\{f_j: 2 \leq j \leq m\} \geq\\ \binom{m}{t}-1 - (m-1) = \binom{m}{t}-m. \end{eqnarray*} The last inequality is by Claim \ref{highdim}. \end{proof} Now we finish up by proving \cref{highdim}. \begin{proof}[Proof of \cref{highdim}] If a directed graph $H$ on $[l]$ is \emph{weakly connected}, i.e., $H$ is connected when the directed edges are replaced by undirected edges, then $\operatorname{span}\{e_i -e_j: ij \in E(H)\}$ is of dimension $l-1$. To see this, consider a vector $v\in \operatorname{span}_\mathbb{R}\{e_i -e_j: ij \in E(H)\}^\perp$. For any $ij \in E$, we must have that $v_i = v_j$. As $H$ is weakly connected, we must have that $v_i = v_j$ for all $i, j \in [l]$, so $\dim\operatorname{span}_\mathbb{R}\{e_i -e_j: ij \in E(H)\}^\perp \leq 1$. Clearly $\mathbf 1 \in \operatorname{span}_\mathbb{R}\{e_i -e_j: ij \in E(H)\}^\perp$, so $\dim\operatorname{span}_\mathbb{R}\{e_i -e_j: ij \in E(H)\}^\perp = 1$. \\ In order to finish the proof of the claim, we need only show that our digraph $G$ is weakly connected. This is trivially true if $t = 0$ or $t = m$, so we assume $1 \leq t \leq m-1$.
Ignoring direction of edges, the operations we are allowed to use to get between vertices of $G$ (sets in $\binom{[m]}{t}$, that is) are the addition of $1$ and removal of some other element, or the removal of $1$ and addition of some other element. Thus, each set containing $1$ is reachable from some set not containing $1$. If $S$ does not contain $1$ and also does not contain some $i \neq 1$, we can first remove any $j$ from $S$ and add $1$, then remove $1$ and add $i$. This means $S - j + i$ is reachable from $S$. If there is no such $i$, then $S = \{2, \dots, m\}$. This implies the sets not containing $1$ are reachable from one another, so $G$ is weakly connected. \end{proof} \section{Proofs of local limit theorems}\label{sec:proofs} \subsection{Preliminaries} We use a few facts for the proof of \cref{thm:lattice_local_limit}. Throughout this section we assume $X$ is in isotropic position, i.e.\ $\mathbb{E} [XX^\dagger] = I_m$. This means $D_X = D$ and $B_X(\varepsilon) = B(\varepsilon)$. \subsubsection{Fourier analysis} \begin{defin}[Fourier transform] If $Y$ is a random variable on $\mathbb{R}^m$, $\widehat{Y}: \mathbb{R}^m \to \mathbb{C}$ denotes the Fourier transform $$\widehat{Y}(\theta) = \mathbb{E}[e^{2 \pi i \langle Y, \theta\rangle }].$$ \end{defin} We will use the Fourier inversion formula, and our choice of domain will be the Voronoi cell in the dual lattice. \begin{defin}[Voronoi cell]\label{defin:voronoi} Define the Voronoi cell $D$ of the origin in $\mathcal{L}^*$ to be the points as close to the origin as anything else in $\mathcal{L}^*$, or $$D:= \{r \in \mathbb{R}^m: \|r\|_2 \leq \inf_{t \in \mathcal{L}^* \setminus\{0\}} \|r - t\|_2\}.$$ Note that $\operatorname{vol} D = \det \mathcal{L}^* = 1/\det \mathcal{L}$, where $\det \mathcal{L}$ is the volume of any domain whose translates under $\mathcal{L}$ partition $\mathbb{R}^m$.
\end{defin} \begin{fact}[Fourier inversion for lattices, \cite{LKP12}] For any random variable $Y$ taking values on a lattice $\mathcal{L}$ (or even a lattice coset $v + \mathcal{L}$), $$\Pr(Y = \lambda) = \det(\mathcal{L})\int_{D} \widehat{Y}(\theta) e^{-2\pi i \langle \lambda, \theta \rangle} d \theta $$ for all $\lambda \in \mathcal{L}$ (resp. $\lambda \in v + \mathcal{L}$). Here $D$ is the Voronoi cell as in \cref{defin:voronoi}, but we could take $D$ to be any fundamental domain of $\mathcal{L}^*$.\end{fact} \subsubsection{Matrix concentration} We use a special case of a result by Rudelson. \begin{thm}[\cite{Ru99}]\label{thm:rud} Suppose $X$ is an isotropic random vector in $\mathbb{R}^m$ such that $\|X\|_2\leq L$ almost surely. Let the $n$ columns of the matrix $M$ be drawn i.i.d.\ from $X$. For some absolute constant $\refstepcounter{const} c_{{\theconst}}$ independent of $m,n$, $$\mathbb{E}\left\| \frac{1}{n}MM^\dagger - I_m\right\|_2 \leq c_{{\theconst}}L \sqrt{\frac{\log n}{n}}.$$ In particular, there is a constant $\refstepcounter{const}\label{rud_const} c_{{\theconst}}$ such that with probability at least $1 - c_{{\theconst}}L \sqrt{\frac{\log n}{n}}$ we have \begin{align*}&MM^\dagger \preceq 2nI_m \label{concentration}\tag{concentration}\\ \textrm{ and }&MM^\dagger \succeq \frac{1}{2}nI_m \label{anticoncentration} \tag{anticoncentration} \end{align*} \end{thm} \subsection{Dividing into three terms} This section contains the plan for the proof of \cref{thm:lattice_local_limit}. The proof compares the Fourier transform of the random variable $My$ to that of a Gaussian; the integral to compute the difference of the Fourier transforms will be split up into three terms, which we will bound separately. Let $M$ be a matrix whose columns $x_i $ are fixed vectors in $\mathcal{L}$, and let $Y_M$ denote the random variable $My$ for $y$ chosen uniformly at random from $\{\pm 1/2\}^n$.
This choice is made so that the random variable $Y_M$ takes values in the lattice coset $ \mathcal{L} - \frac{1}{2}M \textbf{1}.$ Let $\Sigma_M$ be the covariance matrix of $Y_M$, which is given by $$ \Sigma_M = \frac{1}{4}\sum_{i = 1}^n x_i x_i^\dagger = \frac{1}{4}MM^\dagger.$$ Let $Y$ be a centered Gaussian with covariance matrix $\Sigma_M$. That is, $Y$ has the density $$G_M(\lambda) = \frac{1}{(2\pi)^{m/2} \sqrt{\det \Sigma_M}} e^{ - \frac{1}{2} \lambda^\dagger \Sigma_M^{-1} \lambda}.$$ Observe that \cref{eq:thm_bound} in \cref{thm:lattice_local_limit} is equivalent to $$|\mathbb{P}(Y_M = \lambda) - \det(\mathcal{L}) G_M(\lambda)| \leq \frac{1}{(2\pi)^{m/2} \sqrt{\det \Sigma_M}} \cdot 2 m^2 L^2 n^{-1}$$ for $\lambda \in \mathcal{L} - \frac{1}{2}M \textbf{1}$. To accomplish this, we will show that $\widehat{Y_M}$ and $\widehat{Y}$ are very close. By Fourier inversion, for all $\lambda \in \mathcal{L} - \frac{1}{2}M \textbf{1}$, \begin{eqnarray*} |\mathbb{P}(Y_M = \lambda) - \det(\mathcal{L}) G_M(\lambda)| =\\ \det(\mathcal{L})\left|\int_{ D} \widehat{Y_M}(\theta) e^{-2\pi i \langle \lambda, \theta \rangle} d \theta - \int_{\mathbb{R}^m}\widehat{Y}(\theta) e^{-2\pi i \langle \lambda, \theta \rangle} d \theta \right|; \end{eqnarray*} recall the Voronoi cell $D$ from \cref{defin:voronoi}. Let $B(\varepsilon) \subset \mathbb{R}^m$ denote the Euclidean ball of radius $\varepsilon$ about the origin. If $B( \varepsilon) \subset D$, then for all $\lambda \in \mathcal{L} - \frac{1}{2}M \textbf{1}$, \begin{align} |\mathbb{P}(Y_M = \lambda) - \det(\mathcal{L}) G_M(\lambda)| \nonumber \\ \leq \det(\mathcal{L}) \left(\underbrace{\int_{B(\varepsilon)} |\widehat{Y_M}(\theta)- \widehat{Y}(\theta)| d\theta}_{ I_1} + \underbrace{\int_{\mathbb{R}^m \setminus B(\varepsilon)} |\widehat{Y}(\theta)| d\theta }_{I_2} + \underbrace{\int_{D \setminus B(\varepsilon)} |\widehat{Y_M}(\theta)| d\theta}_{I_3} \right).
\label{eq:the_integral} \end{align} We now show that this decomposition holds for reasonably large $\varepsilon$, i.e.\ that $B(\varepsilon) \subset D$. \begin{lem}\label{isosplit} Suppose $ \varepsilon \leq \frac{1}{2L}$. Then $B( \varepsilon) \subset D$; in particular, \cref{eq:the_integral} holds. \end{lem} \begin{proof} Suppose $\theta \in B(\varepsilon)$; we need to show that any nonzero element of the dual lattice has distance from $\theta$ at least $\varepsilon$. It is enough to show that any such dual lattice element has norm at least $2 \varepsilon$. Suppose $0 \neq \alpha \in \mathcal{L}^*$. As $\operatorname{supp}(X)$ spans $\mathbb{R}^m$, for some $x \in \operatorname{supp}(X)$, we have $0 \neq \langle \alpha, x \rangle \in \mathbb{Z}$, so $\|x\|_2 \|\alpha\|_2 \geq |\langle \alpha, x \rangle | \geq 1;$ in particular $\|\alpha\|_2 \geq \frac{1}{L} \geq 2 \varepsilon$. \end{proof} \subsubsection*{Proof plan} We bound $I_1$ by using the Taylor expansion of $\widehat{Y_M}$ to see that, near the origin, $\widehat{Y_M}$ is very close to the unnormalized Gaussian $\widehat{Y}$. We bound $I_2$ using standard tail bounds for the Gaussian. The bounds for the first two terms hold for \emph{any} matrix $M$ satisfying \cref{concentration} and \cref{anticoncentration} and for the correct choice of $\varepsilon$. Finally, we bound $I_3$ in \emph{expectation} over the choice of $M$. This is the only bound depending on the spanningness. \subsubsection{The term $I_1$: near the origin} Here we show how to compare $\widehat{Y_M}$ to $\widehat{Y}$ near the origin in order to bound $I_1$ from \cref{eq:the_integral}. The Fourier transform of the Gaussian $Y$ is $$\widehat{Y}(\theta) = \exp( - 2\pi^2 \theta^\dagger \Sigma_M \theta). $$ There is a very simple formula for $\widehat{Y_M}$, the Fourier transform of $Y_M$.
\begin{prop}\label{transform} If $M$ has columns $x_1, \dots, x_n$, then \begin{equation}\widehat{Y}_M(\theta) = \prod_{j = 1}^n \cos({\pi \langle x_j, \theta \rangle}).\label{transform_eq}\end{equation} \end{prop} \begin{proof} $\widehat{Y}_M(\theta) = \mathbb{E}_{y \in_R \{\pm 1/2\}^n}[e^{2\pi i \langle \sum_{j =1}^n y_j x_j , \theta \rangle}] = \prod_{j =1}^n \mathbb{E}_{y_j}[e^{2\pi i \langle y_j x_j , \theta \rangle}] = \prod_{j = 1}^n \cos({\pi \langle x_j, \theta \rangle}).$ \end{proof} We can bound the first term by showing that near the origin, $\widehat{Y_M}$ is very close to a Gaussian. Recall that by Proposition \ref{transform}, $$ \widehat{Y_M}(\theta) = \prod_{j = 1}^n \cos({\pi \langle x_j, \theta \rangle}).$$ For $\theta$ near the origin, $\langle x_j, \theta \rangle$ will be very small. We will use the Taylor expansion of cosine near zero. \begin{prop}\label{taylor} For $x \in (-1/2,1/2)$, $\cos(\pi x) = \exp\left({-\frac{\pi^2 x^2}{2}+ O(x^4)}\right)$. \end{prop} \begin{proof} Let $\cos(\pi x) = 1 - y$ where $y \in [0,1)$. Then $\log(\cos(\pi x)) = \log(1 - y) = - y + O(y^2)$. Since $\cos(\pi x) = 1 - \frac{\pi^2 x^2}{2} + O(x^4)$, we have that $y = \frac{\pi^2 x^2}{2} + O(x^4)$. Thus $\log(\cos(\pi x)) = - \frac{\pi^2 x^2}{2} + O(x^4)$. The proposition follows.\end{proof} We may now apply \cref{taylor} for $\|\theta\|_2$ small enough. \begin{lem}\label{isotaylor} Suppose $M$ satisfies \cref{concentration} and $\|\theta\|_2 < \frac{1}{2L}$. Then there exists a constant \refstepcounter{const}\label{tayc}$c_{\theconst}>0$ such that $$ \widehat{Y_M}(\theta) = \exp\left( - 2 \pi^2 \theta^\dagger\Sigma_M \theta + E\right)$$ for some $E$ with $|E| \leq c_{\theconst} n L^2 \|\theta\|^4 $.
\end{lem} \begin{proof} Because for all $i \in [n]$ we have $|\langle x_i, \theta \rangle| \leq \|x_i\|_2 \|\theta\|_2 < 1/2$, Proposition \ref{taylor} applies for all $i \in [n]$ and immediately yields that there is a constant $c$ such that $$\widehat{Y_M}(\theta) = \exp\left( - 2 \pi^2 \theta^\dagger\Sigma_M \theta + E\right)$$ for $|E| \leq c\sum_{j = 1}^n \langle x_j, \theta\rangle^4$. Next we bound the quartic part of $E$ by \begin{align*} \sum_{j = 1}^n \langle x_j, \theta\rangle^4 &\leq \max_{j \in [n]} \| x_j\|_2^2 \|\theta\|_2^2 \sum_{j = 1}^n \langle x_j, \theta\rangle^2\\ &\leq L^2 \|\theta\|_2^2 \theta^\dagger \left( \sum_{j = 1}^n x_j x_j^\dagger \right) \theta\\ &\leq 2 n L^2 \|\theta\|_2^4, \end{align*} and take $c_{\ref*{tayc}} = 2c$. \end{proof} \begin{lem}[First term]\label{i1} Suppose $M$ satisfies \cref{anticoncentration} and \cref{concentration}. Further suppose that $L^2 n \varepsilon^4 < 1$, and that $\varepsilon < \frac{1}{2L}$. There exists \refstepcounter{const}\label{i1c}$c_{{\theconst}}$ with $$ I_1 \leq c_{{\theconst}} \frac{ m^2 L^2 n^{- 1}}{(2\pi)^{m/2}\sqrt{\det(\Sigma_M)}}.$$ \end{lem} \begin{proof} By \cref{concentration} and Lemma \ref{isotaylor}, \begin{eqnarray*} I_1 = \int_{B(\varepsilon)} |\widehat{Y_M}(\theta)- \widehat{Y}(\theta)| d\theta \leq \int_{B(\varepsilon)} \widehat{Y}(\theta)\left|e^{c_{\ref*{tayc}} L^2 n \|\theta\|_2^4 } - 1\right| d \theta. \end{eqnarray*} Let the constant $c$ be such that $|e^{c_{\ref*{tayc}} x} - 1| \leq c|x|$ for $x \in [-1, 1]$. Thus $$ I_1 \leq c L^2 n \int_{B(\varepsilon)} \widehat{Y}(\theta) \|\theta\|_2^4 d \theta. $$ By \cref{anticoncentration}, \begin{equation} I_1 \leq c L^2 n^{-1} \int_{B(\varepsilon)} \widehat{Y}(\theta) \left( \theta^\dagger \Sigma_M \theta \right)^2 d \theta .
\label{i1bd} \end{equation} Note that $(2\pi)^{m/2}\sqrt{\det(\Sigma_M)}\widehat{Y}$ is equal to the density of $W = \frac{1}{2\pi}\Sigma_M^{-1/2}G$, where $G$ is a Gaussian vector with identity covariance matrix. $\Sigma_M^{-1/2}$ exists because \cref{anticoncentration} holds. Further, $W^\dagger \Sigma_M W = \frac{1}{4\pi^2 }\|G\|_2^2.$ Therefore \begin{align*} \int_{\mathbb{R}^m} \widehat{Y}(\theta) \left( \theta^\dagger \Sigma_M \theta \right)^2 d \theta &= \frac{1}{(2\pi)^{m/2}\sqrt{\det(\Sigma_M)}}\mathbb{E}_W\left[\left(W^\dagger \Sigma_M W \right)^2\right]\\ &= \frac{1}{16\pi^4(2\pi)^{m/2}\sqrt{\det(\Sigma_M)}}\mathbb{E}_G\left[ \|G\|_2^4\right]\\ & = \frac{1}{16\pi^4(2\pi)^{m/2} \sqrt{\det(\Sigma_M)}}(2m + m^2)\\ &\leq \frac{3 m^2}{16\pi^4(2\pi)^{m/2}\sqrt{\det(\Sigma_M)}}.\end{align*} Plugging this into \eqref{i1bd} and setting $c_{\ref*{i1c}} = \frac{3}{ \pi^4} c$ completes the proof. \end{proof} \subsubsection{The term $I_2$: Bounding Gaussian mass far from the origin} Here we bound the term $I_2$ of \cref{eq:the_integral}, which is not too difficult. \begin{lem}[Second term]\label{i3}Suppose $M$ satisfies \cref{anticoncentration} and that $\varepsilon^2 \geq \frac{16m}{\pi^2n}$. Then $$ I_2 \leq \frac{e^{-\frac{\pi^2}{8} \varepsilon^2n}}{(2\pi)^{m/2}\sqrt{\det(\Sigma_M)}}.$$ \end{lem} \begin{proof} If $M$ satisfies \cref{anticoncentration}, then $\Sigma_M = \frac{1}{4}MM^\dagger \succeq \frac{n}{8} I_m$, so the set $B_M(\varepsilon/2) := \{\theta: \theta^\dagger \Sigma_M \theta \leq n\varepsilon^2/8\}$ is contained in $B(\varepsilon)$. If we integrate over $\mathbb{R}^m \setminus B_M(\varepsilon/2)$ and change variables, it remains only to calculate how much mass of a standard normal distribution is outside a ball of radius larger than the average norm.
From, say, Lemma 4.14 of \cite{LKP12}, if $\varepsilon^2 \geq \frac{16m}{\pi^2n}$ then $$\int_{\mathbb{R}^m \setminus B_M(\varepsilon/2)} |\widehat{Y}(\theta)| d\theta \leq \frac{e^{-\frac{\pi^2}{8} \varepsilon^2n}}{(2\pi)^{m/2}\sqrt{\det(\Sigma_M)}}.$$ \end{proof} \subsubsection{The term $I_3$: Bounding the Fourier transform far from the origin} It remains only to bound the term $I_3$ of \cref{eq:the_integral} which is given by $$I_3 = \int_{D \setminus B(\varepsilon)} |\widehat{Y_M}(\theta)| d\theta.$$ This is the only part in which spanningness plays a role. If $\varepsilon$ is at most the spanningness (see \cref{defin:spanningness}), we can show $I_3$ is very small with high probability by bounding it in expectation over the choice of $M$. The proof is a simple application of Fubini's theorem. \begin{lem}\label{i2} If $\mathbb{E} XX^\dagger = I_m$ and $\varepsilon \leq s(X)$, then $$\mathbb{E}[ I_3 ] \leq \det(\mathcal{L}^*) e^{-2 \varepsilon^2 n}. $$ \end{lem} \begin{proof} By Fubini's theorem, \begin{align} \mathbb{E}_M[ I_3 ]&= \int_{D \setminus B(\varepsilon)} \mathbb{E}|\widehat{Y_M}(\theta)| d\theta \nonumber\\ & \leq \det(\mathcal{L}^*)\sup\{ \mathbb{E}[|\widehat{Y_M}(\theta)|]:\theta\in D \setminus B(\varepsilon) \}.\label{eq:fubini} \end{align} By Proposition \ref{transform} and the independence of the columns of $M$, $$ \mathbb{E}_M[|\widehat{Y_M}(\theta)|] = \left( \mathbb{E}|\cos({\pi \langle X, \theta \rangle})|\right)^n.$$ Thus, \begin{align}\sup\{ \mathbb{E}[|\widehat{Y_M}(\theta)|]:\theta\in D \setminus B(\varepsilon) \} \leq \left(\sup\{ \mathbb{E}[|\cos({\pi \langle X, \theta \rangle})|]:\theta\in D \setminus B(\varepsilon) \}\right)^n.\label{eq:sup}\end{align} $|\cos(\pi x)|$ is periodic with period $1$, so it is enough to consider $\langle X, \theta \rangle \bmod{1}$, where $x \bmod{1}$ is taken to be in $(-1/2, 1/2]$.
Note that for $|x| \leq 1/2$, $|\cos(\pi x)| = \cos(\pi x) \leq 1 - 4x^2$, so $$ \mathbb{E}[|\cos({\pi \langle X, \theta \rangle})|] \leq 1 - 4\mathbb{E}[(\langle X, \theta \rangle \bmod{1})^2] = 1 - 4\tilde{X}(\theta)^2.$$ By the definition of spanningness and the assumption in the hypothesis that $\varepsilon \leq s(X)$, we know that every vector with $\tilde{X}(\theta) \leq d(\theta, \mathcal{L}^*)/2 = \|\theta\|/2$ is either in $\mathcal{L}^*$ or has $\tilde{X}(\theta) \geq \varepsilon$. Thus, for all $\theta \in D$, $\tilde{X}(\theta) \geq \min \{\|\theta\|/2,\varepsilon\}$, which is at least $\varepsilon/2$ for $\theta \in D \setminus B(\varepsilon)$. Combining this with \cref{eq:sup} and using $1 - x \leq e^{-x}$ implies $$\sup\{ \mathbb{E}[|\widehat{Y_M}(\theta)|]:\theta\in D \setminus B(\varepsilon) \} \leq e^{-2 \varepsilon^2 n}.$$ Plugging this into \cref{eq:fubini} completes the proof. \end{proof} \subsection{Combining the terms} Finally, we can combine each of the bounds to prove \cref{thm:lattice_local_limit}. \begin{proof}[Proof of \cref{thm:lattice_local_limit}] Recall the strategy: we have some conditions (the hypotheses of Lemma \ref{isosplit}) under which we can write the difference between the two probabilities of interest as a sum of three terms, and we have bounds for each of the terms (Lemma \ref{i1}, Lemma \ref{i2}, and Lemma \ref{i3}) respectively. Our expression depends on $\varepsilon$, and so we must choose $\varepsilon$ satisfying the hypotheses of those lemmas. These are as follows: \begin{enumerate}[label = (\roman*)] \item\label{isosplitconst} To apply Lemma \ref{isosplit} we need $\varepsilon \leq \frac{1}{2L},$ \item\label{i1const} for Lemma \ref{i1} we need $L^2 n\varepsilon^4 \leq 1$, \item\label{i3const} to apply Lemma \ref{i3}, we need $\varepsilon^2 \geq \frac{16m}{\pi^2n}$, and \item\label{i2const} for Lemma \ref{i2} we need $\varepsilon \leq s(X)$.
\end{enumerate} It is not hard to check that setting $$\varepsilon = L^{-1/2}n^{-1/4}$$ will satisfy the four constraints provided $n\geq 16 L^2$, $n \geq (16 m L)^2/\pi^4$, and $n \geq s(X)^{-4} L^{-2}$. However, the first condition follows from the second because $L \geq \sqrt{m}$ (this follows from $\mathbb{E} XX^\dagger = I_m$, which implies $\mathbb{E}[\|X\|_2^2] = m$), so $$n \geq (16 m L)^2/\pi^4 \textrm{ and } n \geq s(X)^{-4} L^{-2}$$ suffice. By Lemma \ref{isosplit} we have \begin{align*} |\mathbb{P}(Y_M = \lambda) - \det(\mathcal{L}) G_M(\lambda)| \\ \leq \det(\mathcal{L}) \left(\underbrace{\int_{B(\varepsilon)} |\widehat{Y_M}(\theta)- \widehat{Y}(\theta)| d\theta}_{ I_1} + \underbrace{\int_{\mathbb{R}^m \setminus B(\varepsilon)} |\widehat{Y}(\theta)| d\theta }_{I_2} + \underbrace{\int_{D \setminus B(\varepsilon)} |\widehat{Y_M}(\theta)| d\theta}_{I_3} \right). \end{align*} By \cref{i2} and Markov's inequality, $I_3$ is at most $ e^{-\varepsilon^2 n}$ with probability at least $1 - e^{-\varepsilon^2 n} \det(\mathcal{L}^*)$. By \cref{thm:rud}, \cref{anticoncentration} and \cref{concentration} hold for $M$ with probability at least $1 - c_{\ref*{rud_const}} L\sqrt{( \log n)/n}$. If $n$ is at least a large enough constant times $L^2 \log^2 \det \mathcal{L}$, $e^{-\varepsilon^2 n} \det(\mathcal{L}^*)$ is at most $L\sqrt{( \log n)/n}$. Thus, all three events hold with probability at least \refstepcounter{const}\label{failconst}$1 - c_{\ref*{failconst}} L\sqrt{( \log n)/n}$ over the choice of $M$.
Condition on these three events, and plug in the bounds given by \cref{i1} and \cref{i3} for $I_1$ and $I_2$, and the bound $e^{-\varepsilon^2 n} = e^{- \sqrt{n}/L }$ for $I_3$, to obtain the following: \begin{align} |\mathbb{P}(Y_M = \lambda) - \det(\mathcal{L}) G_M(\lambda)| \nonumber \\ \leq \det(\mathcal{L}) \left(\frac{ m^2 L^2 n^{- 1}}{(2\pi)^{m/2}\sqrt{\det(\Sigma_M)}} + \frac{e^{-\frac{\pi^2}{8} \sqrt{n}/L}}{(2\pi)^{m/2}\sqrt{\det(\Sigma_M)}} + e^{- \sqrt{n}/L }\right) \nonumber\\ \leq \frac{\det(\mathcal{L})}{(2\pi)^{m/2}\sqrt{\det(\Sigma_M)}} \left(m^2 L^2 n^{- 1} + e^{-\frac{\pi^2}{8} \sqrt{n}/L} + (2\pi)^{m/2}\sqrt{\det(\Sigma_M)} e^{- \sqrt{n}/L }\right)\nonumber\\ \leq \frac{\det(\mathcal{L})}{(2\pi)^{m/2}\sqrt{\det(\Sigma_M)}} \left(m^2 L^2 n^{- 1} + 2 e^{\frac{m}{2} \log (4\pi n) - \sqrt{n}/L } \right), \label{eq:the_bound} \end{align} where the last inequality is by \cref{concentration}. If $\refstepcounter{const}\label{nlower} c_{{\theconst}}$ is large enough, the quantity in parentheses in \cref{eq:the_bound} is at most $2m^2 L^2/n$ and the combined failure probability of the three required events is at most $c_{\ref*{failconst}} L\sqrt{\frac{\log n}{n}}$ provided \begin{align} n \geq N_0 = c_{{\theconst}} \max\left\{m^2 L^2( \log m + \log L)^2, s(X)^{-4}L^{-2},L^2 \log^2 \det \mathcal{L}\right\}. \end{align} \end{proof} \subsection{Weaker moment assumptions}\label{sec:moments} We now sketch how to extend the proof of \cref{thm:lattice_local_limit} to the case $(\mathbb{E} \|X\|_2^\eta)^{1/\eta} = L< \infty$ for some $\eta > 2$, weakening the assumption that $\operatorname{supp} X \subset B(L)$. \begin{thm}[lattice local limit theorem for $>2$ moments]\label{thm:moment_local_limit} Let $X$ be a random variable on a lattice $\mathcal{L}$ such that $\mathbb{E} XX^\dagger = I_m$, $(\mathbb{E} \|X\|_2^\eta)^{1/\eta} = L< \infty$ for some $\eta > 2$, and $\mathcal{L} = \operatorname{span}_\mathbb{Z} \operatorname{supp} X$.
Let $G_M$ be as in \cref{thm:lattice_local_limit}. There exists \begin{align}N_2 = \operatorname{poly}(m, s(X), L, \frac{1}{\eta - 2},\log \left(\det \mathcal{L}\right))^{1 + \frac{1}{\eta - 2}}\label{eq:nmoment} \end{align} such that for $n \geq N_2$, with probability at least $1 - 3n^{- \frac{\eta-2}{2 + \eta}}$ over the choice of columns of $M$, the following two properties of $M$ hold: \begin{enumerate} \item $MM^\dagger \succeq \frac{1}{2}n I_m$; that is, $MM^\dagger - \frac{1}{2}n I_m$ is positive-semidefinite. \item For all $\lambda \in \mathcal{L} - \frac{1}{2} M \textbf 1$, \begin{align} \left|\Pr_{y_i \in \{\pm 1/2\}} [M\mathbf y = \lambda] - G_{M}(\lambda)\right| \leq G_M(0)\cdot 2m^2 L^2 n^{- \frac{\eta-2}{2 + \eta}}. \label{eq:thm_bound} \end{align} \end{enumerate} \end{thm} Before proving the theorem, note that it allows us to extend our discrepancy result to this regime. The proof of the next corollary from \cref{thm:moment_local_limit} is identical to the proof of \cref{thm:lattice_disc} from \cref{thm:lattice_local_limit}. \begin{cor}[discrepancy for $>2$ moments]\label{cor:moment_disc}Suppose $X$ is a random variable on a nondegenerate lattice $\mathcal{L}$. Suppose $\Sigma:=\mathbb{E} [XX^\dagger]$ has least eigenvalue $\sigma$, $(\mathbb{E} \|Z\|_2^\eta)^{1/\eta} = L< \infty$ for some $\eta >2$ where $Z:=\Sigma^{-1/2}X$, and that $\mathcal{L}=\operatorname{span}_\mathbb{Z} \operatorname{supp} X$. If $n \geq N_3$ then $$ \operatorname{disc}_*( M) \leq 2 \rho_*(\mathcal{L})$$ with probability at least $1 - 3n^{- \frac{\eta-2}{2 + \eta}}$, where \refstepcounter{const}\label{ndisc} \begin{align}N_3 =c_{\ref*{ndisc}} \max\left\{\frac{R^2_* \rho_*(\mathcal{L})^2}{\sigma}, N_2\left(m, s(\Sigma^{-1/2}X), L, \frac{\det \mathcal{L}}{\sqrt{\det \Sigma}}\right)\right\} \label{eq:ndisc}\end{align} for $N_2$ as in \cref{eq:nmoment}.
\end{cor} \begin{proof}[Proof sketch of \cref{thm:moment_local_limit}] We review each step of the proof of \cref{thm:lattice_local_limit} and show how it must be modified to accommodate the weaker assumptions. Recall that, to prove \cref{thm:lattice_local_limit}, we had some conditions (the hypotheses of Lemma \ref{isosplit}) under which we could write the difference between the two probabilities of interest as a sum of three terms, and we had bounds for each of the terms (Lemma \ref{i1}, Lemma \ref{i2}, and Lemma \ref{i3}, respectively). We also need an analogue of \cref{thm:rud} which tells us that \cref{anticoncentration} and \cref{concentration} hold with high probability. Neither Lemma \ref{i2} nor Lemma \ref{i3} uses bounds on the moments of $\|X\|_2$, so they hold as-is. Let's see how the remaining lemmas must be modified: \begin{description} \item[Matrix concentration:] By Theorem 1.1 in \cite{SV13}, \cref{concentration} and \cref{anticoncentration} hold with probability at least $1 - n^{-\frac{\eta-2}{2 + \eta}}$ provided $n \geq \operatorname{poly}(m, L, \frac{1}{\eta-2})^{1 + \frac{1}{\eta - 2}}$. \item[Lemma \ref{isosplit}:] The bound $\varepsilon \leq \frac{1}{2L}$ becomes $ \varepsilon \leq \frac{1}{4} L^{-\frac{\eta}{\eta-2}}$. To prove Lemma \ref{isosplit} it was enough to show $\|\alpha\|$ was at least twice the desired bound on $\varepsilon$ for nonzero $\alpha \in \mathcal{L}^*$. Here we do the same, but to show $\|\alpha\|$ is large we consider the random variable $Y \geq 1$ defined by conditioning $|\langle \alpha, X \rangle|$ on $\langle \alpha, X \rangle \neq 0$. Recall that we assume $X$ is isotropic. Let $p = \Pr[ \langle \alpha, X \rangle \neq 0]$, so that $\|\alpha\|^2 = p \mathbb{E}[Y^2]$ and $ L\|\alpha\| \geq (\mathbb{E} |\langle \alpha, X \rangle|^{\eta})^{\frac{1}{\eta}} = p^{\frac{1}{ \eta}}(\mathbb{E}[ Y^{\eta}])^{\frac{1}{\eta}} \geq p^{\frac{1}{ \eta}} (\mathbb{E}[Y^{2}])^{\frac{1}{2}}$ by H\"older's inequality.
Cancelling $p$ from the two inequalities and using $Y \geq 1$ yields the desired bound. \item[Lemma \ref{i1}:] The analogue of this lemma will require $L^2 n^{1 + \frac{4 }{2 + \eta}}\varepsilon^4 < 1$ and $\varepsilon < \frac{1}{4L}n^{ - \frac{2}{2 + \eta} }$, and will hold with probability at least $1 - n^{- \frac{\eta-2}{2 + \eta}}$ over the choice of columns of $M$. The numerator of the right-hand side becomes $m^2 L^2 n^{- \frac{\eta-2}{2 + \eta}}$. \cref{i1} followed from \cref{isotaylor}. Here the analogue of \cref{isotaylor} holds with $|E| \leq c_{\ref*{tayc}} n^{1 + \frac{4 }{2 + \eta}} L^2 \|\theta\|^4$ if $\|z_i\| \leq L n^{\frac{2}{2 + \eta} } \textrm{ for all }i \in [n]$, which holds with probability $1 - n^{- \frac{\eta-2}{2 + \eta}}$ by Markov's inequality. The rest of the proof proceeds the same. \end{description} The new constraints on $\varepsilon$ will be satisfied if we take $$ \varepsilon = n^{ - \frac{4 + \eta}{12 + 3\eta} },$$ and $n \geq \max \left\{(4L)^{\frac{12 + 6\eta}{\eta-2}}, \frac{16}{\pi^2} m^{\frac{6 + 3\eta}{2\eta-4}}\right\}.$ The rest of the proof proceeds as for \cref{thm:lattice_local_limit}. \end{proof} \section{Random unit columns}\label{sec:unit} Let $X$ be a uniformly random element of the sphere $\mathbb{S}^{m-1}$. Again, let $M$ be an $m \times n$ matrix with columns drawn independently from $X$. Note that $X$ is not a lattice random variable. This time $\Sigma = \frac{1}{m} I_m,$ and $\|\Sigma^{-1/2} X\|_2$ is always equal to $\sqrt{m}$. We are essentially going to prove a local limit theorem, only this time we will not precisely control the probability of hitting a point but rather the expectation of a particular function. The function will essentially be the indicator of the cube, but modified a bit to make it easier to handle. Let $B$ denote this function; we will specify it later. Recall that, once $M$ is chosen, $Y_M$ is the random variable obtained by summing the columns of $M$ with i.i.d.\ $\pm 1/2$ coefficients.
$\Sigma_M$ is $MM^\dagger/4$, and $Y$ is the Gaussian with covariance matrix $\Sigma_M$. We will try to show that, with high probability over the choice of $M$, $\mathbb{E} B(Y_M) \sim \mathbb{E} B(Y)$. If $B$ is supported only in $[-K, K]^m$, to show that $\operatorname{disc} M < K$ it suffices to show that $$ |\mathbb{E} B(Y_M) - \mathbb{E} B(Y)| < \mathbb{E} B(Y).$$ \subsection{Nonlattice likely local limit} We now investigate a different extreme case, in which $\operatorname{span}_\mathbb{Z} \operatorname{supp} X$ is dense in $\mathbb{R}^m$. In this case the ``dual lattice" is $\{0\}$, so we define pseudodual vectors to be those vectors with $\tilde{X}(\theta) \leq \|\theta\|_X/2$, and the spanningness to be the least value of $\tilde{X}(\theta)$ at a nonzero pseudodual vector. \begin{thm}\label{thm:nonlattice_local_limit} Suppose $\mathbb{E} XX^\dagger = I_m$, $\operatorname{supp} X \subset B(L)$, and that $s(X)$ is positive. Let $B:\mathbb{R}^m \to \mathbb{R}$ be a nonnegative function with $\|B\|_1 \leq 1$ and $\|\widehat{B}\|_1 < \infty$.
If $$ n \geq N_1 = c_{\ref*{unitnlower}} \max\left\{m^2 L^2( \log m + \log L)^2, s(X)^{-4}L^{-2},L^2 \log^2 \|\widehat{B}\|_1\right\},$$ then with probability at least $1 - c_{\ref*{failconst}} L\sqrt{\frac{\log n}{n}}$ over the choice of $M$ we have $$|\mathbb{E}[ B(Y_M)] - \mathbb{E}[B(Y)]| \leq 2 m^2 L^2 n^{-1}$$ and $MM^\dagger \succeq \frac{1}{2}n I_m.$ \end{thm} \begin{proof} By Plancherel's theorem, $$ \mathbb{E}[ B(Y_M)] - \mathbb{E}[B(Y)] = \int_{\mathbb{R}^m} \widehat{B}(\theta) (\widehat{Y_M}(\theta) - \widehat{Y}(\theta)) d \theta.$$ Again, we can split this into three terms: \begin{align} \left| \int_{\mathbb{R}^m} \widehat{B}(\theta)(\widehat{Y_M}(\theta) - \widehat{Y}(\theta)) d\theta \right|\leq \nonumber\\ \underbrace{\int_{B(\varepsilon)} | \widehat{B}(\theta)| |\widehat{Y_M}(\theta) - \widehat{Y}(\theta) | d \theta}_{J_1} + \underbrace{\int_{\mathbb{R}^m \setminus B(\varepsilon)} | \widehat{B}(\theta)\widehat{Y_M}(\theta)|d\theta}_{J_2} + \underbrace{\int_{\mathbb{R}^m \setminus B(\varepsilon)} | \widehat{B}(\theta)\widehat{Y}(\theta)|d\theta}_{J_3}. \end{align} The proofs of the next two lemmas are identical to those of \cref{i1} and \cref{i3}, respectively, except that one uses the assumption $\|B\|_1 \leq 1$, which implies $\|\widehat{B}\|_\infty \leq 1$, to remove $\widehat{B}$ from the integrand. \begin{lem}[First term]\label{j1} Suppose \cref{anticoncentration} and \cref{concentration} hold. Further suppose that $L^2 n \varepsilon^4 < 1$, $\varepsilon < \frac{1}{2L}$, and that $\|B\|_1 \leq 1$. There exists \refstepcounter{const}\label{j1c}$c_{{\theconst}}$ with $$ J_1 \leq c_{{\theconst}} \frac{m^2 L^2 n^{- 1}}{(2\pi)^{m/2}\sqrt{\det(\Sigma_M)}}.$$ \end{lem} \begin{lem}[Third term]\label{j3}Suppose \cref{anticoncentration} holds, $\varepsilon^2 \geq \frac{16m}{\pi^2n}$, and $\|B\|_1 \leq 1$.
Then $$ J_3 \leq \frac{e^{-\frac{\pi^2}{8} \varepsilon^2n}}{(2\pi)^{m/2}\sqrt{\det(\Sigma_M)}}.$$ \end{lem} The proof of the next lemma is the same as that of \cref{i3}, except that in the derivation of \cref{eq:fubini}, instead of integrating over $D$ one must integrate over the whole of $\mathbb{R}^m\setminus B(\varepsilon)$ against $\widehat{B}$; hence $\det \mathcal{L}^*$ is replaced by $\|\widehat{B}\|_1$. \begin{lem}\label{j2} If $X$ is in isotropic position and $\varepsilon \leq s(X)$, then $$\mathbb{E}[ J_2] \leq \|\widehat{B}\|_1 e^{-2 \varepsilon^2 n}. $$ \end{lem} We now proceed to combine the termwise bounds. As before, we may choose $\varepsilon = L^{-1/2} n^{-1/4}$ provided $$n \geq (16 m L)^2/\pi^4 \textrm{ and } n \geq s(X)^{-4} L^{-2},$$ and with probability at least $1- \|\widehat{B}\|_1 e^{-\varepsilon^2 n} - c_{\ref*{rud_const}} L \sqrt{ \frac{\log n}{n}}$ we have that $J_2$ is at most $e^{-\varepsilon^2 n}$ and that \cref{concentration} and \cref{anticoncentration} hold. Condition on these events. As in the proof of \cref{thm:lattice_local_limit}, we have \begin{align} \left| \int_{\mathbb{R}^m} \widehat{B}(\theta)(\widehat{Y_M}(\theta) - \widehat{Y}(\theta)) d\theta \right|\leq \frac{1}{(2\pi)^{m/2}\sqrt{\det(\Sigma_M)}} \left(m^2 L^2 n^{- 1} + 2 e^{\frac{m}{2} \log (4\pi n) - \sqrt{n}/L } \right). \label{eq:unit_bound} \end{align} If $\refstepcounter{const}\label{unitnlower} c_{{\theconst}}$ is large enough, the quantity in parentheses in \cref{eq:unit_bound} is at most $2m^2 L^2/n$ and the combined failure probability of all the required events is at most $c_{\ref*{failconst}} L\sqrt{\frac{\log n}{n}}$ provided \begin{align} n \geq N_1 = c_{{\theconst}} \max\left\{m^2 L^2( \log m + \log L)^2, s(X)^{-4}L^{-2},L^2 \log^2 \|\widehat{B}\|_1\right\}. \end{align} \end{proof} \subsection{Discrepancy for random unit columns} \begin{lem}\label{uniti2ev}Let $X$ be a random unit vector.
Then $$ s( X) \geq \refstepcounter{const}\label{unitev}c_{{\theconst}}$$ for some fixed constant $c_{{\theconst}}$. \end{lem} Before we prove the lemma, let's show how to use it and \cref{thm:nonlattice_local_limit} to prove discrepancy bounds. \begin{proof}[Proof of \cref{thm:unit_disc}] Let $X$ be a random unit vector. We need to choose our function $B$. \begin{defin} For $K > 0$, let $B = \frac{1}{(2K)^{2m}} 1_{[-K,K]^m} * 1_{[-K,K]^m}$. Alternatively, one can think of $B$ as the density of a sum of two random vectors from the cube $[-K,K]^m$. \end{defin} It's not hard to show $\|B\|_1 = 1$ using that $B$ is a density, and that $\|\widehat{B}\|_1 = \frac{1}{(2K)^{m}}$ using Plancherel's theorem. Next, we apply \cref{thm:nonlattice_local_limit} to $Z = \sqrt{m} X$; in order to apply the theorem we need $$ n \geq N_2 := \refstepcounter{const}\label{unitdiscrep} c_{{\theconst}} \max\left\{m^3 \log^2 m, m^{-1}, m^3 \log^2 (1/K)\right\}.$$ Thus, we may take $$n \geq c_{\ref*{cubeconst}} m^3 \log^2m \textrm{ and } K = c_{\ref*{expconst}} e^{-\sqrt{\frac{n}{m^3}}}. $$ We also need a lower bound on $\mathbb{E} [B(Y)]$ in order to use the bound on $|\mathbb{E}[B(Y_M)] - \mathbb{E}[B(Y)]|$ to deduce that $\mathbb{E}[B(Y_M)] > 0$, or equivalently that $\operatorname{disc} M \leq 2K$. The quantity $\mathbb{E} B(Y)$ is at least the least density of $Y$ on the support of $B$. The support of $B$ is contained within a Euclidean ball of radius $2K\sqrt{m}$. Using the property $MM^\dagger \succeq \frac{1}{2}n I_m$, the density of $Y$ takes value at least \begin{align*} \frac{1}{(2\pi)^{m/2}\sqrt{\det(\Sigma_M)}} e^{-2 \sigma_{\min}(MM^\dagger)^{-1} 4 K^2 m} &\geq \frac{1}{(2\pi)^{m/2}\sqrt{\det(\Sigma_M)}} e^{-16 K^2 m/n}.
\end{align*} Since the error term in \cref{thm:nonlattice_local_limit} is at most $\frac{1}{(2\pi)^{m/2}\sqrt{\det(\Sigma_M)}}2 m^2 L^2 n^{-1}$, to deduce $\operatorname{disc} M \leq 2K$ it is enough to show $ e^{-16 K^2 m/n} > 2m^3 n^{-1}$; indeed this is true with $K = c_{\ref*{expconst}} e^{-\sqrt{\frac{n}{m^3}}}$ and $n \geq m^3 \log^2 m$ for suitably large $c_{\ref*{expconst}}$.\end{proof} \subsubsection{Spanningness for random unit vectors} We now lower bound the spanningness for random unit vectors. We use the fact that for large $m$ the distribution of a random unit vector behaves much like a Gaussian upon projection to a one-dimensional subspace. \begin{proof}[Proof of \cref{uniti2ev}] Let $\|\theta\|_X =\frac{1}{\sqrt{m}}\delta > 0$. We split into two cases. In the first, we show that if $\delta = O(\sqrt{m})$, then $\theta$ is not pseudodual. In the second, we show that if $\delta = \Omega(\sqrt{m})$ then $\tilde{X}(\theta)$ is at least a fixed constant. This establishes that $s(X)$ is at least some constant. By rotational symmetry we may assume $\theta$ is $\delta e_1$, where $e_1$ is the first standard basis vector, so $$\tilde{X}(\theta)^2 = \mathbb{E}[(\langle X, \theta \rangle \bmod{1})^2 ] = \mathbb{E}[(\delta X_1 \bmod{1})^2 ].$$ We now show that $\theta$ is not a pseudodual vector if $\delta = O( \sqrt{m})$. Recall that $X$ is a random unit vector; it is easier to work with the density of $X_1$. The probability density function of $\delta X_1$ at $|x| < \delta$ is proportional to $(1-(x/\delta)^2)^{\frac{m-3}{2}} =: f_\delta(x)$. Integrating this density gives us \begin{align*} \int_{-\delta}^\delta \left(1-\left(\frac{x}{\delta}\right)^2\right)^{\frac{m-3}{2}}dx &= \delta\int_{0}^1 (1 - x)^{\frac{m-3}{2}} x^{-\frac{1}{2}} dx\\ &= \frac{\delta \Gamma\left( \frac{m-1}{2}\right) \Gamma\left( \frac{1}{2}\right)}{ \Gamma\left( \frac{m}{2}\right)}\\ &= \frac{\delta \sqrt{2\pi}}{ \sqrt{m}}\left(1 + o(1)\right).
\end{align*} Therefore, we obtain \begin{align*} \mathbb{E}[( \delta X_1 \bmod{1})^2 ] &= \frac{ \Gamma\left( \frac{m}{2}\right)}{\delta \Gamma\left( \frac{m-1}{2}\right) \Gamma\left( \frac{1}{2}\right)} \int_{-\delta}^{\delta} (x \bmod 1)^2 \left(1-\left(\frac{x}{\delta}\right)^2\right)^{\frac{m-3}{2}}dx. \\ \end{align*} If we simply give up on all the $x$ for which $|x| > 1/2$, we obtain the following lower bound for the above quantity: \begin{align*} &\frac{ \Gamma\left( \frac{m}{2}\right)}{\delta \Gamma\left( \frac{m-1}{2}\right) \Gamma\left( \frac{1}{2}\right)}\left[\int_{-\delta}^{\delta} x^2 \left(1-\left(\frac{x}{\delta}\right)^2\right)^{\frac{m-3}{2}}dx - 2\int_{1/2}^{\delta} x^2 \left(1-\left(\frac{x}{\delta}\right)^2\right)^{\frac{m-3}{2}}dx \right]\\ &= \frac{\delta^2}{m} - \frac{2 \Gamma\left( \frac{m}{2}\right)}{\delta \Gamma\left( \frac{m-1}{2}\right) \Gamma\left( \frac{1}{2}\right)} \int_{1/2}^{\delta} x^2 \left(1-\left(\frac{x}{\delta}\right)^2\right)^{\frac{m-3}{2}}dx\\ &\geq \frac{\delta^2}{m} - (1 + o(1))\frac{2\sqrt{m - 3}}{ \delta \sqrt{ \pi}} \int_{1/2}^{\infty} x^2 e^{-\frac{(m-3)x^2}{2 \delta^2}}dx\\ & = \frac{\delta^2}{m} - (1 + o(1)) \frac{2^{3/2}\delta^2 }{ m} \left(\frac{1}{\sqrt{2\pi}}\int_{\frac{\sqrt{m - 3}}{2 \delta}}^{\infty} u^2 e^{-\frac{u^2}{2}}du\right). \end{align*} The integral in parentheses is simply the contribution of the tail to the variance of a standard Gaussian, and can be made an arbitrarily small constant by making $\delta/\sqrt{m}$ small. Thus, for $\delta \leq \refstepcounter{const}\label{mlb1} c_{{\theconst}}\sqrt{m}$, the expression in the last line above is at least $.6^2\frac{\delta^2}{m} = .6^2\| \theta\|_X^2$, so $\theta$ is not pseudodual.\\ Next we must handle $\delta$ larger than $c_{\ref*{mlb1}}\sqrt{m}$; we will show that in this case $\tilde{X}(\theta)$ is at least some fixed constant.
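Before continuing, the Gamma-function normalization used in the displays above can be checked numerically; a minimal Python sketch, where the sample values of $m$ and $\delta$ are arbitrary choices for illustration:

```python
import math

def lhs(m, delta, steps=100000):
    # Midpoint-rule evaluation of the normalizing integral
    # of (1 - (x/delta)^2)^((m-3)/2) over [-delta, delta].
    h = 2 * delta / steps
    return sum((1 - ((-delta + (i + 0.5) * h) / delta) ** 2) ** ((m - 3) / 2)
               for i in range(steps)) * h

def rhs(m, delta):
    # Closed form delta * Gamma((m-1)/2) * Gamma(1/2) / Gamma(m/2),
    # obtained via the Beta-function substitution in the text.
    return delta * math.gamma((m - 1) / 2) * math.gamma(0.5) / math.gamma(m / 2)

# Exact identity, checked at a few sample dimensions.
for m in (5, 12, 25):
    assert abs(lhs(m, 2.0) - rhs(m, 2.0)) < 1e-3

# For large m this normalization scales like delta * sqrt(2*pi/m).
assert abs(rhs(200, 1.0) / math.sqrt(2 * math.pi / 200) - 1) < 0.01
```

The midpoint rule suffices here because the integrand is smooth and vanishes at the endpoints for $m \geq 5$.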
We use the fact that $f_\delta$ is unimodal, so for any $k \neq 0$, $\int_{k - 1/2}^{k + 1/2} (x \bmod 1)^2 f_\delta(x) dx$ is at least the mass of $f_\delta(x)$ between $k-1/2$ and $k$ times the integral of $(x \bmod 1)^2$ on this region (that is, $1/24$). This product is in turn at least one 48th of the mass of $f_\delta(x)$ between $k-1/2$ and $k + 1/2$. Taken together, we see that \begin{equation}\label{leakage}\mathbb{E}[(\delta X_1 \bmod 1)^2] \geq \frac{1}{48} \Pr[|\delta X_1| \geq 1/2].\end{equation} We will lower bound the left-hand side by a small constant for $\delta = \Omega(\sqrt{m})$. We can do so by bounding the ratio of $\int_{-1/2}^{1/2} f_\delta(x) dx$ to $\int_{1/2}^\infty f_\delta(x) dx$. To this end we will translate and scale the function $$g_\delta(x) = \left\{\begin{array}{cc} f_\delta(x) & x \geq 1/2\\ 0 & x < 1/2 \end{array}\right. $$ to dominate $f_\delta(x)$ for $x \in [0, 1/2]$. Let us find the smallest scaling $a > 0$ such that $a g_\delta (x + 1/2) \geq f_\delta(x)$ for $x \in [0,1/2]$; equivalently, $a f_\delta (x + 1/2) \geq f_\delta(x)$ for $x \in [0,1/2]$. If we find such an $a$, we'll have $\int_0^{1/2} {f_\delta(x)} dx \leq a \int_0^\infty {g_\delta(x)} dx$, or $\Pr[x \in [-1/2, 1/2]] \leq a (1 - \Pr[x \in [-1/2, 1/2]]).$ We need \begin{align*} a (1-((x+1/2)/\delta)^2)^{\frac{m-3}{2}} \geq (1-(x/\delta)^2)^{\frac{m-3}{2}}, \end{align*} i.e., \begin{align*}a=\max_{x \in [0,1/2]} \left(\frac{1-(x/\delta)^2}{1-((x+1/2)/\delta)^2}\right)^{\frac{m-3}{2}}\\ = \max_{x \in [0,1/2]} \left(\frac{\delta^2-x^2}{\delta^2-(x+1/2)^2}\right)^{\frac{m-3}{2}}\\ \leq \left(\frac{\delta^2}{\delta^2-1}\right)^{\frac{m-3}{2}} = \left(1-\frac{1}{\delta^2} \right)^{-\frac{m-3}{2}}\\ \leq e^{\frac{(m-3)}{2\delta^2}}. \end{align*} Thus $\Pr[x \in [-1/2, 1/2]] \leq a (1 - \Pr[x \in [-1/2, 1/2]])$, or equivalently, $\Pr[x \in [-1/2, 1/2]] \leq a/(1 + a)$.
Therefore $$\Pr[|x| > 1/2] \geq 1/(1 + a) \geq .5 e^{-\frac{m-3}{2\delta^2}}.$$ If $\delta \geq c_{\ref*{mlb1}} \sqrt{m}$, this and \cref{leakage} imply that $\mathbb{E}[(\delta X_1 \bmod 1)^2] = \mathbb{E}[(\langle \theta, X \rangle \bmod 1)^2]$ is at least some constant. Thus, if $\delta \geq c_{\ref*{mlb1}}\sqrt{m}$ then $\tilde{X}(\theta)$ is at least some constant. \end{proof} \section{Open problems} There is still a gap in understanding for $t$-sparse vectors. \begin{qn} Let $M$ be an $m\times n$ random $t$-sparse matrix. What is the least $N$ such that for all $n\geq N$, the discrepancy of $M$ is at most one with probability at least $1/2$? We know that for $t$ not too large or small, $m \leq N \leq m^3 \log^2 m$. The lower bound is an easy exercise. \end{qn} Next, it would be nice to understand \cref{qn:random_komlos} for more column distributions in other regimes such as $n = O(m)$. In particular, it would be interesting to understand a distribution where combinatorial considerations probably won't work. For example, \begin{qn} Suppose $M$ is a random $t$-sparse matrix plus some Gaussian noise of variance $\sqrt{t/m}$ in each entry. Is $\operatorname{disc} M = o(\sqrt{t\log m})$ with high probability? How much Gaussian noise can existing proof techniques handle? \end{qn} The quality of the nonasymptotic bounds in this paper depends on the spanningness of the distribution $X$, which depends on how far $X$ is from lying in a proper sublattice of $\mathcal{L}$. If $X$ actually \emph{does} lie in a proper sublattice $\mathcal{L}' \subset \mathcal{L}$, we may apply our theorems with $\mathcal{L}'$ instead. This suggests the following: \begin{qn} Is there an $N$ depending only on the parameters in \cref{eq:ndisc} \emph{other than spanningness} such that for all $n \geq N$, $$\operatorname{disc} M \leq \max_{\mathcal{L}' \subset \mathcal{L}} \rho_\infty (\mathcal{L}')$$ with probability at least $1/2$?
\end{qn} Next, the techniques in this paper are suited to show that essentially any point in a certain coset of the lattice generated by the columns may be expressed as the signed discrepancies of a coloring. This is why we obtain twice the $\ell_\infty$-covering radius for our bounds. In order to bound the discrepancy, we must know $\rho_\infty(\mathcal{L})$. However, the following question (\cref{qn:lattice_komlos} from the introduction) is still open, which prevents us from concluding that discrepancy is $O(1)$ for an arbitrary bounded distribution: \begin{conj} There is an absolute constant $C$ such that for any lattice $\mathcal{L}$ generated by unit vectors, $\rho_\infty(\mathcal{L}) \leq C$. \end{conj} We could also study a random version of the above question: \begin{qn}\label{qn:random_lattice_komlos} Let $v_1, \dots, v_m$ be drawn i.i.d from some distribution $X$ on $\mathbb{R}^m$, and let $\mathcal{L}$ be their integer span. Is $\rho_\infty(\mathcal{L}) = O(1)$ with high probability in $m$? \end{qn} Here we also bring attention to a meta-question asked in \cite{LKP12, HR18}. Interestingly, though we use probabilistic tools to deduce the existence of low-discrepancy assignments, the proof does not yield any obvious efficient randomized algorithm to find them. \begin{qn}If an object can be proved to exist by a suitable local central limit theorem, is there an efficient randomized algorithm to find it? \end{qn} \end{document}
\begin{document} \renewcommand{Abstract}{Research Content} \title{New Formulas for the Euler-Mascheroni Constant and other\\ Consequences derived from the Acceptance of Hyperbolicity of Jensen \\ Polynomials and the Analysis of the Turán Moments for the $\xi$-Function} \begin{abstract} \noindent \justifying \color{gray} The Euler-Mascheroni constant is calculated by three novel representations over the following sets, respectively: 1) the Turán moments, 2) the coefficients of the Jensen polynomials for the Taylor series of the Riemann Xi function at $s=1/2+it$, and 3) the even coefficients of the Riemann Xi function around $s=1/2$. These findings support the acceptance of the property of hyperbolicity of Jensen polynomials within the scope of the Riemann Hypothesis, owing to the exactness of the approximations calculated not only for the Euler-Mascheroni constant, but also for the Bernoulli numbers and the even derivatives of the Riemann Xi function at $s=1/2$. The new formulas are linked to similar patterns observed in the formulation of the Akiyama-Tanigawa algorithm based on the Gregory coefficients of second order, and lead to understanding the Riemann zeta function as a bridge between the Gregory coefficients and other relevant sets. \end{abstract} \renewcommand{Abstract}{Abstract} \begin{abstract} \noindent \justifying \color{gray} We calculate the Euler-Mascheroni constant $\gamma$ in this article by a convergent series that requires the computation of only a few Turán moments $\widehat{b}_{n}$, or their equivalent definition in terms of the coefficients $c_{n}$ of the Jensen polynomials $J^{d, N}(x)$ of degree $d$ and shift $N=0$ for the Taylor series of the Riemann $\xi$-function at the special points $s=\frac{x}{2}$ and $s=\frac{1}{2}+i x$.
This fascinating result comes from the consequences of the explicit multiplication of all the factors indicated by the Hadamard product of the $\xi$-function at the complex variable $s$, which leads to a convenient algebraic expansion in $s^{0}, s^{1}, s^{2}, s^{3} \ldots$ that can be compared term by term with the equivalent expansion in $s^{0}, s^{1}, s^{2}, s^{3} \ldots$ derived from the Taylor series for the $\xi$-function around $\frac{1}{2}$; i.e., the Taylor series and the Hadamard product of $\xi$ must be equivalent to each other because both represent the Riemann $\xi$ function. Hence, new summation formulas that support the novel representations for $\gamma$ are presented in this article, successfully relating the coefficients of the Taylor series for the $\xi$ function to the non-trivial zeros of the Riemann zeta function $\zeta$ and to the well-known values $\Gamma\left(\frac{1}{4}\right), \zeta\left(\frac{1}{2}\right), \gamma$ and Lugo's constant within this scope. Furthermore, we provide a thorough inspection of the numerical results for the first twenty-one values of $c_{n}$ and $\widehat{b}_{n}$, for $n=0,1,2, \ldots, 20$, and of their role in the representations proposed for $\gamma$, which supports the formulation of all the $c_{n}$ and $\widehat{b}_{n}$ and their connection with the even derivatives of $\xi$ at $s=\frac{1}{2}$.
As an important conclusion of this work, the new representations for the Euler-Mascheroni constant could be significant consequences of the Riemann Hypothesis, given the high precision achieved for $\gamma$ on the basis of that hypothesis. \end{abstract} \vspace*{\baselineskip} \justifying \textbf{Keywords:} Euler-Mascheroni constant, coefficients of Jensen polynomials, Turán Moments, Riemann Xi-function, Riemann zeta function, non-trivial zeros of the Riemann zeta function, Riemann Hypothesis, Hadamard product, Taylor series, even derivatives of the Xi-function, Lugo constant, Gregory coefficients, Akiyama-Tanigawa’s formula, hyperbolicity, Bernoulli numbers \begin{flushleft} \section*{I. Introduction} \end{flushleft} We have deduced a new formula for the Euler-Mascheroni constant $\gamma$ based on the set of the Turán moments $\widehat{b}_{n}$, whose role could be crucial for the analysis of the Riemann Hypothesis: \[ \gamma=\log (4 \pi)-2+\left(2^{7}\right) \sum_{n=1}^{\infty} \frac{n \widehat{b}_{n}}{(2 n) !}. \] This formula reinforces potential consequences of the validation of the property known as hyperbolicity of the Jensen polynomials for a particular Taylor series for the Riemann Xi function at $s=\frac{1}{2}+i x$. Thus, we provide clear evidence that the real roots $x$ defined by the assumption of hyperbolicity of the Jensen polynomials $J^{d, N}(x)$ necessarily link the expected coefficients $\widehat{b}_{n}$, $c_{n}$ and $a_{2 n}$ with each other, as discussed in this article; this would significantly support the Riemann Hypothesis itself, because these coefficients $\widehat{b}_{n}$, $a_{2 n}$ and $c_{n}$ cannot be computed without assuming that major conjecture on the structure of the roots $s=\frac{1}{2}+i x$ within the Hadamard product and the Taylor series for the Riemann Xi function, as explained later.
We offer numerical data obtained by substituting already computed Turán moments $\widehat{b}_{n}$ into the new formula for $\gamma$, and also by the summation series involving the even-index coefficients $a_{2 n}$ that will be discussed thoroughly later. Thus, the second version of the Euler-Mascheroni constant we deal with in this article is given by \[ \gamma=\log (4 \pi)-2+\left(2^{4}\right) \sum_{n=1}^{\infty} \frac{n a_{2 n}}{2^{2 n}}. \] We will later verify that the coefficients $a_{2 n}$ are related to the Turán moments $\widehat{b}_{n}$ by the relation \[ a_{2 n}=\frac{2^{3}\left(2^{2 n}\right) \widehat{b}_{n}}{(2 n) !}=\frac{8 \cdot\left(2^{2 n}\right) \widehat{b}_{n}}{(2 n) !}, \] and, thanks to a rigorous numerical inspection, we have concluded that the $c_{n}$ are linked to the other coefficients $a_{2 n}$ and $\widehat{b}_{n}$ as follows: \[ c_{n}=2(n !)(-1)^{n} a_{2 n}=(-1)^{n} \frac{n !}{(2 n) !} 2^{2 n} \cdot 2^{4} \cdot \widehat{b}_{n}. \] Therefore, we have found that a third formula for the Euler-Mascheroni constant is defined by the coefficients $c_{n}$ of the Jensen polynomials within our approach as \[ \gamma=\log (4 \pi)-2+\left(2^{3}\right) \sum_{n=1}^{\infty} \frac{n(-1)^{n} c_{n}}{\left(2^{2 n}\right) n !}. \] As a result, the previous three formulas for the Euler-Mascheroni constant, together with the precise ways in which the coefficients $c_{n}$, $a_{2 n}$ and $\widehat{b}_{n}$ are linked to each other, are the main set of new findings for this important constant that we offer in this work.
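These linking relations make the three series termwise identical, which can be checked mechanically. A short Python sketch; the values used for $\widehat{b}_{n}$ are arbitrary placeholders (not the true Turán moments), since only the algebraic relations between the coefficients are being tested:

```python
import math
import random

random.seed(0)
for n in range(1, 21):
    b = random.random()                           # placeholder Turan moment b_n
    a2n = 8 * 4 ** n * b / math.factorial(2 * n)  # a_{2n} = 2^3 * 2^{2n} b_n / (2n)!
    cn = 2 * math.factorial(n) * (-1) ** n * a2n  # c_n = 2 n! (-1)^n a_{2n}
    t1 = 2 ** 7 * n * b / math.factorial(2 * n)   # n-th term of version 1
    t2 = 2 ** 4 * n * a2n / 4 ** n                # n-th term of version 2
    t3 = 2 ** 3 * n * (-1) ** n * cn / (4 ** n * math.factorial(n))  # version 3
    assert math.isclose(t1, t2) and math.isclose(t2, t3)
```

Since the terms agree for every $n$, the three representations stand or fall together with the linking relations for the coefficients.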
Furthermore, we provide a strict numerical evaluation of the approximation of the Euler-Mascheroni constant, which can be easily reproduced by anyone using the data we present for the $c_{n}$, $a_{2 n}$ and $\widehat{b}_{n}$ in Table 1, simply by substituting these coefficients as follows.\\\\ Version 1, using the $\widehat{b}_{n}$ of Table 1: \[ \begin{aligned} \gamma=\log (4 \pi) &-2+\left(2^{7}\right) \sum_{n=1}^{\infty} \frac{n \widehat{b}_{n}}{(2 n) !} \approx 0.53102424697+\left(2^{7}\right)\left(\frac{\widehat{b}_{1}}{2 !}+\frac{2\widehat{b}_{2}}{4 !}+\cdots+\frac{20\, \widehat{b}_{20}}{40 !}\right) \approx \\ & \approx 0.53102424697+\left(2^{7}\right)\left(3.60870452595\left(10^{-4}\right)\right) \approx 0.577215664902. \end{aligned} \] \noindent Version 2, using the $a_{2 n}$ of Table 1: \[ \begin{aligned} \gamma=\log (4 \pi)&-2+\left(2^{4}\right) \sum_{n=1}^{\infty} \frac{n a_{2 n}}{2^{2 n}} \approx 0.53102424697+16\left(\frac{a_{2}}{4}+\frac{2 a_{4}}{16}+\cdots+\frac{20 a_{40}}{2^{40}}\right) \approx \\ &\approx 0.577215664902, \end{aligned} \]\\ Version 3, using the $c_{n}$ of Table 1: \[ \gamma \approx 0.53102424697+\left(2^{3}\right)\left(\frac{-c_{1}}{4}+\frac{2 c_{2}}{16(2 !)}-\cdots+\frac{20 c_{20}}{2^{40}(20 !)}\right) \approx 0.577215664902 . \] The previous representations are consistent and look similar to other known formulas such as the Akiyama-Tanigawa representation, which involves the related pattern below: \[ \gamma=\log (2 \pi)-2-2 \sum_{n=1}^{\infty} \frac{(-1)^{n} G_{n}(2)}{n} . \] The structure of this formula, although built on the numbers $G_{n}(2)$ known as the Gregory coefficients of the second order, is remarkably similar, but not identical, to our formulas based on the $c_{n}$, $a_{2 n}$ and $\widehat{b}_{n}$. This is no coincidence, because our novel representations for the Euler-Mascheroni constant have been inferred from a totally different approach, independent of the Akiyama-Tanigawa formulation.
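The arithmetic in Version 1 can be reproduced directly from the constants quoted above; a minimal Python check (the partial-sum value is the one quoted in the display, and Table 1 itself is not reproduced here):

```python
import math

base = math.log(4 * math.pi) - 2   # log(4*pi) - 2 = 0.53102424697...
partial = 3.60870452595e-4         # quoted value of the partial sum of n*b_n/(2n)!
gamma_v1 = base + 2 ** 7 * partial # Version 1 of the formula for gamma

assert abs(base - 0.53102424697) < 1e-11
assert abs(gamma_v1 - 0.577215664902) < 1e-10
```

The result agrees with the Euler-Mascheroni constant $\gamma \approx 0.5772156649$ to all the digits displayed.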
Furthermore, other findings introduced here are a summation formula linking the non-trivial zeros $s_{r}=\sigma_{r}+i t_{r}$ and $\bar{s}_{r}=\sigma_{r}-i t_{r}$ of the Riemann zeta function to the coefficients $a_{2 n}$, given by:\\ \[ \sum_{r=1}^{\infty} \frac{\sigma_{r}}{s_{r} \bar{s}_{r}}=\left(\frac{4 a_{2}}{2^{2}}+\frac{4(2) a_{4}}{2^{4}}+\frac{4(3) a_{6}}{2^{6}}+\frac{4(4) a_{8}}{2^{8}}+\cdots\right)=\sum_{n=1}^{\infty} \frac{4 n a_{2 n}}{2^{2 n}}, \]\\ and a second summation involving the coefficients $a_{2 n}$ as well: \[ \frac{1}{2}=\left(\frac{a_{0}}{2^{0}}+\frac{a_{2}}{2^{2}}+\frac{a_{4}}{2^{4}}+\frac{a_{6}}{2^{6}}+\cdots\right)=\sum_{n=0}^{\infty} \frac{a_{2 n}}{2^{2 n}} . \] We will discuss later that these last results in fact reduce to the expressions\\ \[ \sum_{r=1}^{\infty} \frac{\frac{1}{2}}{s_{r} \bar{s}_{r}}=\frac{1}{2} \sum_{r=1}^{\infty} \frac{1}{\left(\frac{1}{2}+i t_{r}\right)\left(\frac{1}{2}-i t_{r}\right)}=\sum_{n=1}^{\infty} \frac{4 n a_{2 n}}{2^{2 n}}, \]\\ and \[ \sum_{n=0}^{\infty} \frac{a_{2 n}}{2^{2 n}}=\frac{1}{2}, \]\\ because the numerical evidence of the computed coefficients forces the acceptance of the real part as strictly $\sigma_{r}=\frac{1}{2}$ for every non-trivial zero of the Riemann zeta function. All these results will be explained carefully through our approach of equating the Hadamard product and the Taylor series of the Riemann Xi function. For now, we introduce these crucial formulas, which can be checked simply by substituting the data of Table 1, as revised later.\\\\ We have also discovered another expected pattern for calculating the Bernoulli numbers $B_{2 r}$, whose index $2 r$ is considered for $r=0,1,2, \ldots$, a pattern that involves the coefficients $\widehat{b}_{n}$ and $c_{n}$ once the complete hyperbolicity of the Jensen polynomials for the Taylor series of the Riemann Xi function has been assumed. 
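Both announced summations can be spot-checked at once; the sketch below assumes the coefficient values later transcribed in Table 1 and the standard reference value of $\gamma$, and compares the even-index sums against $\frac{1}{2}$ and against half of the classical quantity $1+\frac{\gamma}{2}-\frac{\log (4\pi)}{2}$:

```python
import math

# a_0, a_2, ..., a_12, transcribed from Table 1 of this article.
a = [0.497120778188, 1.14859721576e-2, 1.23452018071e-4, 8.32355481387e-7,
     3.99222655135e-9, 1.46160257601e-11, 4.27454004553e-14]

# First identity: sum_{n>=0} a_{2n}/2^{2n} should equal 1/2.
half = sum(a2n / 4**n for n, a2n in enumerate(a))

# Second identity: sum_{n>=1} 4n*a_{2n}/2^{2n} should equal half of
# 1 + gamma/2 - log(4*pi)/2, the sum over inverses of the non-trivial zeros.
gamma = 0.5772156649015329  # reference value of the Euler-Mascheroni constant
lhs = (1 + gamma / 2 - math.log(4 * math.pi) / 2) / 2
rhs = sum(4 * n * a2n / 4**n for n, a2n in enumerate(a))
print(half, lhs, rhs)
```

With only seven coefficients, both identities already hold to about ten decimal places.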
We successfully compute various values of $B_{2 r}$ by using the valid coefficients of the Jensen polynomials; i.e., the coefficients $c_{n}$ are obtainable, as expected, once the hyperbolicity property and the unique real part $\frac{1}{2}$ in the formulation of the non-trivial zeros $s_{r}=\frac{1}{2}+i t_{r}$ are accepted. The formulas for the Bernoulli numbers $B_{2 r}$ deduced here are\\ \[ \begin{aligned} &B_{2 r}=\frac{16(-1)^{r-1} \cdot(2 r) ! 2^{-2 r}}{(\pi)^{r}(2 r-1) r !} \sum_{n=0}^{\infty} \frac{2^{2 n} \widehat{b}_{n}\left(2 r-\frac{1}{2}\right)^{2 n}}{(2 n) !}, \\[10pt] &B_{2 r}=\frac{(-1)^{r-1} \cdot(2 r) ! 2^{1-2 r}}{(\pi)^{r}(2 r-1) r !} \sum_{n=0}^{\infty} \frac{(-1)^{n} c_{n}\left(2 r-\frac{1}{2}\right)^{2 n}}{2(n) !}. \end{aligned} \]\\ Of course, by substituting $c_{n}=2(n !)(-1)^{n} a_{2 n}$, one finds a third formula based on the coefficients $a_{2 n}$; thus, anyone wishing to compute particular Bernoulli numbers from the Turán moments $\widehat{b}_{n}$, from the coefficients $c_{n}$ of the Jensen polynomials, or from the even-index coefficients $a_{2 n}$, can use the twenty-one data of Table 1 in order to verify them. As reinforcement of our approach: if these coefficients $a_{2 n}, c_{n}$ and $\widehat{b}_{n}$ had not been properly calculated, then neither the Bernoulli numbers, nor the even derivatives of the Riemann Xi function at $s=1 / 2$, nor any of the formulations for the Euler-Mascheroni constant would ever have been achievable! 
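The first of these formulas can be exercised directly on the moments of Table 1; the following sketch (values transcribed from Table 1, series truncated after seven terms) recovers $B_{2}=\frac{1}{6}$ and $B_{4}=-\frac{1}{30}$:

```python
import math

# Turán moments b_0..b_6, transcribed from Table 1 of this article.
b_hat = [6.214009727353926e-2, 7.178732598482949e-4, 2.314725338818463e-5,
         1.170499895698397e-6, 7.859696022958770e-8, 6.47444266092415e-9,
         6.248509280628118e-10]

def bernoulli_2r(r, moments):
    """B_{2r} from the Turán moments, first formula above (truncated series)."""
    pref = (16 * (-1)**(r - 1) * math.factorial(2 * r) * 2**(-2 * r)
            / (math.pi**r * (2 * r - 1) * math.factorial(r)))
    s = sum(4**n * bn * (2 * r - 0.5)**(2 * n) / math.factorial(2 * n)
            for n, bn in enumerate(moments))
    return pref * s

print(bernoulli_2r(1, b_hat), bernoulli_2r(2, b_hat))  # ≈ 1/6, -1/30
```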
Hence, the validity of all the non-trivial zeros having real part $\frac{1}{2}$ is clear, as stated in the Riemann Hypothesis.\\\\ Regarding the relevance of these results, we propose in the current introduction that findings about new links between the Euler-Mascheroni constant $\gamma=0.57721 \ldots$ and other important numbers in mathematics could help to unravel fundamental conjectures with extraordinary consequences in several fields of the exact sciences and industrial technologies. We propose the use of these lesser known coefficients $a_{2 n}, c_{n}$ and $\widehat{b}_{n}$ as a very important set of numbers within the study of other famous sets, like the Gregory coefficients of higher orders. To this end, recall that the Gregory coefficients $G_{n}=G_{n}(1), n \geq 2$, of basic order 1, are given by the well-known expression\\ \[ G_{n}=-\frac{B_{n}{ }^{(n-1)}}{(n-1) \cdot(n !)}, \]\\ which immediately introduces a new relationship between $G_{2 r}=G_{2 r}(1)$ and the respective $c_{n}$, $a_{2 n}$ and $\widehat{b_{n}}$, because this expression can be combined with the formulas proposed above for $B_{2 r}$,\\ \[ \begin{aligned} &B_{2 r}=\frac{16(-1)^{r-1} \cdot(2 r) ! 2^{-2 r}}{(\pi)^{r}(2 r-1) r !} \sum_{n=0}^{\infty} \frac{2^{2 n} \widehat{b}_{n}\left(2 r-\frac{1}{2}\right)^{2 n}}{(2 n) !}=\sqrt[2 r-1]{-G_{2 r}(2 r-1) \cdot(2 r) !} \\[12pt] &B_{2 r}=\frac{(-1)^{r-1} \cdot(2 r) ! 2^{1-2 r}}{(\pi)^{r}(2 r-1) r !} \sum_{n=0}^{\infty} \frac{(-1)^{n} c_{n}\left(2 r-\frac{1}{2}\right)^{2 n}}{2(n) !}=\sqrt[2 r-1]{-G_{2 r}(2 r-1) \cdot(2 r) !} \end{aligned} \] which lead to formulas for the Gregory coefficients $G_{2 r}$ in terms of either the $c_{n}$ or the $\widehat{b}_{n}$, with the possibility of expressing $G_{2 r}$ in terms of the $a_{2 n}$ as well. So the vital numbers $c_{n}, \widehat{b_{n}}$ and $a_{2 n}$ can act as generators of the fundamental Bernoulli numbers $B_{2 r}$ or of the Gregory numbers $G_{2 r}$. 
These are perhaps the greatest conclusions derived in this article, as such numerical findings had not been noticed before.\\\\ We also want to recall some famous definitions of $\gamma$ that the reader should keep in mind, in order to appreciate that our formulas are far from the only ones; there are many others, e.g., the limit based on the harmonic numbers [1] $H_{n}=\sum\limits_{k=1}^{n} \frac{1}{k}$, where $\log n$ denotes the natural logarithm of $n$, also written $\ln (n)$,\\ \[ \gamma=\lim _{n \rightarrow \infty}\left(H_{n}-\log n\right),\tag{1} \] as well as the convergent series\\ \[ \gamma=\sum_{n=2}^{\infty}(-1)^{n} \frac{\zeta(n)}{n},\tag{2} \]\\ whose formulation in terms of $\zeta(n)$, for $n=2,3,4, \ldots$, draws attention due to the relevance of the Riemann zeta function in number theory and functional analysis. Furthermore, the beautiful expression\\ \[ \sum_{\rho} \frac{1}{\rho}=1+\frac{\gamma}{2}-\frac{\log (4 \pi)}{2},\tag{3} \]\\ defines a summation over the inverses of all the values $\rho=s_{r}$ or $\rho=\bar{s}_{r}$, known as the non-trivial zeros of the Riemann zeta function, with $s_{r}=\sigma_{r}+i t_{r}$ and $\bar{s}_{r}=\sigma_{r}-i t_{r}$, where the count index $r=1,2,3, \ldots$ allows the different zeros $s_{1}, s_{2}, s_{3}, \ldots, \bar{s}_{1}, \bar{s}_{2}, \bar{s}_{3}, \ldots$ to be distinguished from each other. In this regard, one of the most important unsolved problems in mathematics concerns the nature of the real part of these $\rho$, i.e., of each $\sigma_{r}$: all the calculations made so far have failed to yield a non-trivial zero of the Riemann zeta function whose real part differs from $\frac{1}{2}$, e.g., $\sigma_{1}=\frac{2}{3}, \sigma_{2}=\frac{3}{5}, \ldots$ or others. 
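The limit in Eq.(1) converges only like $1/(2n)$; a short sketch (not from this article) applies the standard asymptotic correction terms of the harmonic-number expansion to accelerate it:

```python
import math

# Eq.(1) accelerated by H_n - log n ~ gamma + 1/(2n) - 1/(12n^2) + O(1/n^4).
n = 100000
H = math.fsum(1.0 / k for k in range(1, n + 1))
gamma = H - math.log(n) - 1 / (2 * n) + 1 / (12 * n**2)
print(gamma)  # ≈ 0.5772156649
```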
This article discusses an approach that links Eq.(3) not only to the non-trivial zeros $\rho$, but also to the coefficients $a_{2 n}$ [2] of the Taylor series for the Riemann $\xi$ function around the point $s_{0}=\frac{1}{2}$, given by\\ \[ \xi(s)=\sum_{n=0}^{\infty} a_{2 n}\left(s-s_{0}\right)^{2 n}=\sum_{n=0}^{\infty} a_{2 n}\left(s-\frac{1}{2}\right)^{2 n}.\tag{4} \]\\ A summation involving each $a_{2 n}$, or later each Jensen coefficient $c_{n}$ (or likewise the Turán moments $\widehat{b_{n}}$, written indistinctly $\widehat{b}_{m}$), then leads to a calculation of the Euler-Mascheroni constant with high fidelity.\\\\ In the references on the Riemann $\xi$-function, the first coefficient $a_{0}=-\frac{\Gamma\left(\frac{1}{4}\right) \zeta\left(\frac{1}{2}\right)}{8 \pi^{\frac{1}{4}}}=0.497120 \ldots$ [3] is formulated exactly thanks to the calculation of the integral\\ \[ a_{2 n}=4 \int_{1}^{\infty} \frac{d\left[x^{\frac{3}{2}} \psi^{\prime}(x)\right]}{d x} \frac{\left(\frac{1}{2} \ln x\right)^{2 n}}{(2 n) !} x^{-\frac{1}{4}} d x,\tag{5} \]\\ \noindent at $n=0$, where $a_{2(0)}=a_{0}$ and $\psi(x)=\sum\limits_{m=1}^{\infty} e^{-m^{2} \pi x}=\frac{1}{2}\left[\vartheta_{3}\left(0, e^{-\pi x}\right)-1\right]$, with $\vartheta_{3}$ the Jacobi theta function. Moreover, $A_{0}=a_{0}=\left.\xi\left(\frac{1}{2}+i x\right)\right|_{x=0}=-\frac{\Gamma\left(\frac{1}{4}\right) \zeta\left(\frac{1}{2}\right)}{8 \pi^{\frac{1}{4}}}$, which can be inferred from the representation given by DeFranco [4] in the following Taylor series for $\xi(s)$, at $s=\frac{1}{2}+i x$, with $x$ a real number,\\ \[ \xi\left(\frac{1}{2}+i x\right)=\sum_{n=0}^{\infty}(-1)^{n} A_{n} x^{2 n}=\sum_{n=0}^{\infty}(-1)^{n} a_{2 n} x^{2 n},\tag{6} \]\\ \noindent where $A_{n}=a_{2 n}=\left.\frac{(-1)^{2 n}}{(2 n) !} \frac{d^{2 n}}{d x^{2 n}} \xi\left(\frac{1}{2}+i x\right)\right|_{x=0}$. 
Therefore, each $a_{2 n}$ would require very complicated evaluation steps, which could apparently preclude a complete representation of the Riemann $\xi$ function through the computation of thousands of such coefficients. However, this article argues that, in fact, each $a_{2 n}$ can be represented analytically and numerically by the coefficients $c_{n}$ of the Jensen polynomials $J^{d, N}(x)$ of degree $d$ and shift $N=0$ for the Taylor series of the Riemann $\xi$-function or, instead, by the little known Turán moments $\widehat{b}_{n}$, which were already tabulated almost three decades ago, e.g., the first twenty-one data found in [5]. Furthermore, each $c_{n}$ can be calculated by special summation series, corroborated by the even derivatives of the Riemann $\xi$ function at $\frac{1}{2}$, that involve the computation of all the non-trivial zeros $\rho$, but only if the assertion of the Riemann Hypothesis is accepted, i.e., $\sigma_{r}=\frac{1}{2}$ for $r=1,2,3, \ldots$, as clearly exposed in this work. In fact, with a few tens of $c_{n}$ (and some thousands of non-trivial zeros $s_{r}=\sigma_{r}+i t_{r}$, assuming $\sigma_{r}=\frac{1}{2}$) it is possible to achieve numerically a good representation of the Euler-Mascheroni constant according to the approach introduced in this article, which is verified computationally with extraordinary effectiveness. As a result, each $A_{n}$ or $a_{2 n}$ is exactly defined in terms of the $c_{n}$, and later $\gamma$ will have a representation based on those values as well. Throughout this article, the assumption of the Riemann Hypothesis plays an important role, because it raises consequences never seen before that refine the model of the Taylor series in Eq.(4) and Eq.(6), among other results explained later. 
Nevertheless, in the proposed methodology it will be made clear that the Riemann Hypothesis needs to be neither assumed nor proved in order to equate the representation of the Hadamard product of the Riemann $\xi$-function to the Taylor series in Eq.(4). In this regard, the most fascinating scenario for future research is the deduction of not just one or two, but thousands of formulations relating the $a_{2 n}, c_{n}$ and $\widehat{b}_{n}$ to various sequences based on the non-trivial zeros of the Riemann zeta function, all resulting from the natural comparison between the Hadamard product of the $\xi$-function and its Taylor series.\\\\\\ \textbf{Theory}\\\\ We start from the Taylor series given by Eq.(4) and the representation of the Hadamard product of the Riemann $\xi$-function [6]\\ \[ \xi(s)=\frac{1}{2} \prod_{\rho}\left(1-\frac{s}{\rho}\right),\tag{7} \]\\ where $\rho=s_{r}$ or $\rho=\bar{s}_{r}$ runs over all the non-trivial zeros of the Riemann zeta function, i.e., including all the conjugate pairs $s_{r}=\sigma_{r}+i t_{r}$ and $\bar{s}_{r}=\sigma_{r}-i t_{r}$. Then, Eq.(7) can be conveniently adapted to the algebraically equivalent version deduced by professor Alhargan [7] and published a few months ago,\\ \[ \xi(s)=\frac{1}{2} \prod_{\rho}\left(1-\frac{s}{\rho}\right)=\frac{1}{2} \prod_{r=1}^{\infty}\left(1-\frac{s\left(2 \sigma_{r}-s\right)}{s_{r} \bar{s}_{r}}\right),\tag{8} \]\\ where $\sigma_{r}$ is the real part of the non-trivial zeros of the Riemann zeta function. At this point, it is important to clarify that Eq.(4) and Eq.(8) are absolutely equivalent to each other, because both of them represent the same function, i.e., the Riemann $\xi$-function. 
Therefore, the Hadamard product is developed on the right side of Eq.(9), with the $\frac{1}{2}$ algebraically included only in the first factor, as follows\\ \[ \sum_{n=0}^{\infty} a_{2 n}\left(s-\frac{1}{2}\right)^{2 n}=\left[\frac{1}{2}-\frac{s\left(2 \sigma_{1}-s\right)}{2 s_{1} \bar{s}_{1}}\right]\left[1-\frac{s\left(2 \sigma_{2}-s\right)}{s_{2} \bar{s}_{2}}\right]\left[1-\frac{s\left(2 \sigma_{3}-s\right)}{s_{3} \bar{s}_{3}}\right]\left[1-\frac{s\left(2 \sigma_{4}-s\right)}{s_{4} \bar{s}_{4}}\right] \ldots\tag{9} \]\\ \noindent Then, after multiplying the first two factors $\left[\frac{1}{2}-\frac{s\left(2 \sigma_{1}-s\right)}{2 s_{1} \bar{s}_{1}}\right]$ and $\left[1-\frac{s\left(2 \sigma_{2}-s\right)}{s_{2} \bar{s}_{2}}\right]$ together, \[ \begin{aligned} &\sum_{n=0}^{\infty} a_{2 n}\left(s-\frac{1}{2}\right)^{2 n}=\\[10pt] &=\left[\frac{1}{2}-\frac{s\left(2 \sigma_{2}-s\right)}{2 s_{2} \bar{s}_{2}}-\frac{s\left(2 \sigma_{1}-s\right)}{2 s_{1} \bar{s}_{1}}+\frac{s^{2}\left(2 \sigma_{1}-s\right)\left(2 \sigma_{2}-s\right)}{2 s_{1} \bar{s}_{1} s_{2} \bar{s}_{2}}\right]\left[1-\frac{s\left(2 \sigma_{3}-s\right)}{s_{3} \bar{s}_{3}}\right]\left[1-\frac{s\left(2 \sigma_{4}-s\right)}{s_{4} \bar{s}_{4}}\right] \ldots, \end{aligned}\tag{10}\] \noindent and, when carefully multiplying more and more factors with each other and rearranging the long results, a noticeable pattern begins to emerge, mainly for $s^{0}$ and $s^{1}$, which unambiguously defines the coefficients that accompany the terms $s^{0}, s^{1}, s^{2}, s^{3}, \ldots$ on the right side of Eq.(9) as follows\\ \[ \begin{aligned} \sum_{n=0}^{\infty} a_{2 n}\left(s-\frac{1}{2}\right)^{2 n}=\frac{1}{2} s^{0}-\left(\frac{\sigma_{1}}{s_{1} \bar{s}_{1}}+\frac{\sigma_{2}}{s_{2} \bar{s}_{2}}+\frac{\sigma_{3}}{s_{3} \bar{s}_{3}}+\cdots\right) s^{1}+\left(\frac{1}{2 s_{1} \bar{s}_{1} \ldots}+\frac{1}{2 s_{2} \bar{s}_{2} \ldots}+\frac{1}{2 s_{3} \bar{s}_{3} \ldots}+\right.\\[10pt] \left.+\frac{2 \sigma_{1} \sigma_{2} \ldots}{s_{1} \bar{s}_{1} s_{2} \bar{s}_{2} \ldots}+\cdots\right) s^{2}+\left(-\frac{\sigma_{1} \ldots}{s_{1} \bar{s}_{1} s_{2} \bar{s}_{2} \ldots}-\cdots\right) s^{3}+\cdots+F_{j} s^{j}+F_{j+1} s^{j+1}+\cdots , \end{aligned}\tag{11} \]\\ where the more complex coefficients in Eq.(11), such as those for $s^{2}, s^{3}, \ldots, s^{j}, s^{j+1}, \ldots$, have been indicated implicitly, with some parts in ellipsis and denominations like $F_{j}, F_{j+1}$, as they are not used in the calculation of the Euler-Mascheroni constant. However, they could play a relevant role in broadening this research in the future, where it will become necessary to define them explicitly. Eq.(11) can then be written as\\ \[ \sum_{n=0}^{\infty} a_{2 n}\left(s-\frac{1}{2}\right)^{2 n}=\frac{1}{2} s^{0}-\left(\sum_{r=1}^{\infty} \frac{\sigma_{r}}{s_{r} \bar{s}_{r}}\right) s^{1}+F_{2} s^{2}+F_{3} s^{3}+\cdots+F_{j} s^{j}+F_{j+1} s^{j+1}+\cdots,\tag{12} \]\\ \noindent where $F_{0}=\frac{1}{2}$, $F_{1}=-\sum\limits_{r=1}^{\infty} \frac{\sigma_{r}}{s_{r} \bar{s}_{r}}$, $F_{2}=\frac{1}{2 s_{1} \bar{s}_{1} \ldots}+\frac{1}{2 s_{2} \bar{s}_{2} \ldots}+\cdots$, $F_{3}=-\frac{\sigma_{1} \ldots}{s_{1} \bar{s}_{1} s_{2} \bar{s}_{2} \ldots}-\cdots$, and the successive terms like $F_{j}$ and $F_{j+1}$ remain undefined for future advances.\\\\ Now, the expansion of the left side of Eq.(11) or Eq.(12), up to a certain organized arrangement of the terms $s^{0}, s^{1}, s^{2}, s^{3}$, makes clear why the Hadamard product of the Riemann $\xi$-function corresponds exactly to the Taylor series for the same function. First, the summation on the left side of Eq.(12) is developed according to its first addends $a_{0}\left(s-\frac{1}{2}\right)^{0}, a_{2}\left(s-\frac{1}{2}\right)^{2}, a_{4}\left(s-\frac{1}{2}\right)^{4}, \ldots$ for the indicated power exponent $2 n$. 
This approach represents an intuitive operation that can be validated algebraically, and whose consistency will be confirmed by the reliable formulas obtained in this article and introduced above. Therefore, the left side of Eq.(12) is expanded as follows\\ \[ \begin{aligned} \sum_{n=0}^{\infty} a_{2 n}\left(s-\frac{1}{2}\right)^{2 n}=a_{0}\left(s-\frac{1}{2}\right)^{0}+a_{2}\left(s-\frac{1}{2}\right)^{2}+& a_{4}\left(s-\frac{1}{2}\right)^{4} \ldots=a_{0}+a_{2}\left(s^{2}-s+\frac{1}{4}\right)+\\[10pt] &+a_{4}\left(s^{2}-s+\frac{1}{4}\right)\left(s^{2}-s+\frac{1}{4}\right)+\cdots \end{aligned}\tag{13} \]\\ \noindent and, after a careful factorization of the expressions based on the coefficients $a_{2 n}$ in Eq.(13),\\ \[ \begin{aligned} \sum_{n=0}^{\infty} a_{2 n}\left(s-\frac{1}{2}\right)^{2 n}&=\left(\frac{a_{0}}{2^{0}}+\frac{a_{2}}{2^{2}}+\frac{a_{4}}{2^{4}}+\frac{a_{6}}{2^{6}}+\cdots\right) s^{0}+\left(-\frac{4 a_{2}}{2^{2}}-\frac{4(2) a_{4}}{2^{4}}-\frac{4(3) a_{6}}{2^{6}}-\frac{4(4) a_{8}}{2^{8}}-\cdots\right) s^{1}+ \\[10pt] &+G_{2} s^{2}+G_{3} s^{3}+\cdots+G_{j} s^{j}+G_{j+1} s^{j+1}+\cdots \end{aligned}\tag{14} \] \noindent where $G_{0}=\left(\frac{a_{0}}{2^{0}}+\frac{a_{2}}{2^{2}}+\frac{a_{4}}{2^{4}}+\frac{a_{6}}{2^{6}}+\cdots\right), G_{1}=\left(-\frac{4 a_{2}}{2^{2}}-\frac{4(2) a_{4}}{2^{4}}-\frac{4(3) a_{6}}{2^{6}}-\frac{4(4) a_{8}}{2^{8}}-\cdots\right)$ and the other coefficients in Eq.(14) are represented by $G_{2}, G_{3}, \ldots, G_{j}, G_{j+1}, \ldots$ Therefore, every coefficient indicated by the letter $G$ can be compared to its equivalent counterpart $F$ as follows\\ \begin{align} &F_{0}=\frac{1}{2}=G_{0}=\left(\frac{a_{0}}{2^{0}}+\frac{a_{2}}{2^{2}}+\frac{a_{4}}{2^{4}}+\frac{a_{6}}{2^{6}}+\cdots\right)=\sum_{n=0}^{\infty} \frac{a_{2 n}}{2^{2 n}},\tag{15} \\[10pt] &F_{1}=-\sum_{r=1}^{\infty} \frac{\sigma_{r}}{s_{r} \bar{s}_{r}}=G_{1}=\left(-\frac{4 a_{2}}{2^{2}}-\frac{4(2) a_{4}}{2^{4}}-\frac{4(3) a_{6}}{2^{6}}-\frac{4(4) 
a_{8}}{2^{8}}-\cdots\right), \tag{16}\\[10pt] &F_{2}=\left(\frac{1}{2 s_{1} \bar{s}_{1} \cdots}+\frac{1}{2 s_{2} \bar{s}_{2} \ldots}+\cdots\right)=G_{2}, \tag{17}\\[10pt] &F_{3}=\left(-\frac{\sigma_{1} \ldots}{s_{1} \bar{s}_{1} s_{2} \bar{s}_{2} \ldots}-\cdots\right)=G_{3},\tag{18} \end{align}\\ \noindent and so on for the others, $F_{j}=G_{j}, F_{j+1}=G_{j+1}, \ldots$ In this regard, Eq.(15) can also be rewritten by substituting the value $A_{0}=a_{0}=\left.\xi\left(\frac{1}{2}+i x\right)\right|_{x=0}=-\frac{\Gamma\left(\frac{1}{4}\right) \zeta\left(\frac{1}{2}\right)}{8 \pi^{\frac{1}{4}}}$ as follows\\ \[ \frac{1}{2}+\frac{\Gamma\left(\frac{1}{4}\right)\zeta\left(\frac{1}{2}\right)}{8 \pi^{\frac{1}{4}}}=\frac{1}{2}\left(1+\frac{\Gamma\left(\frac{1}{4}\right)\zeta\left(\frac{1}{2}\right)}{4 \pi^{\frac{1}{4}}}\right)=\left(\frac{a_{2}}{2^{2}}+\frac{a_{4}}{2^{4}}+\frac{a_{6}}{2^{6}}+\cdots\right)=\sum_{n=1}^{\infty} \frac{a_{2 n}}{2^{2 n}},\tag{19} \]\\ \noindent where the summation in Eq.(19) starts at $n=1$ instead of $n=0$. Now, Eq.(16) is rewritten as\\ \[ \sum_{r=1}^{\infty} \frac{\sigma_{r}}{s_{r} \bar{s}_{r}}=\left(\frac{4 a_{2}}{2^{2}}+\frac{4(2) a_{4}}{2^{4}}+\frac{4(3) a_{6}}{2^{6}}+\frac{4(4) a_{8}}{2^{8}}+\cdots\right)=\sum_{n=1}^{\infty} \frac{4 n a_{2 n}}{2^{2 n}}=2^{2} \sum_{n=1}^{\infty} \frac{n a_{2 n}}{2^{2 n}}. \tag{20} \]\\ \vspace*{-0.5\baselineskip} \hspace*{-1.9ex} According to the structure of Eq.(20), the expression $\sum\limits_{r=1}^{\infty} \frac{\sigma_{r}}{s_{r} \bar{s}_{r}}$ is closely related to the famous summation over the inverses of all the non-trivial zeros of the Riemann zeta function given by Eq.(3). 
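The coefficients of $s^{0}$ and $s^{1}$ used above come from the binomial expansion of $\left(s-\frac{1}{2}\right)^{2 n}$: the constant term is $(-1/2)^{2 n}=1 / 4^{n}$ and the linear term is $\binom{2 n}{1}(-1 / 2)^{2 n-1}=-4 n / 4^{n}$. A quick mechanical check of this pattern:

```python
import math

# In (s - 1/2)^{2n}, the s^0 coefficient is (-1/2)^{2n} = 1/4^n and the
# s^1 coefficient is C(2n,1) * (-1/2)^{2n-1} = -4n/4^n, matching Eq.(14).
for n in range(1, 10):
    coeff_s0 = (-0.5)**(2 * n)
    coeff_s1 = math.comb(2 * n, 1) * (-0.5)**(2 * n - 1)
    assert abs(coeff_s0 - 1 / 4**n) < 1e-15
    assert abs(coeff_s1 + 4 * n / 4**n) < 1e-15
print("s^0 and s^1 coefficient patterns verified")
```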
The link between Eq.(3) and Eq.(20) is as follows: given the non-trivial zeros of the Riemann zeta function as $\rho=s_{r}$ or as $\rho=\bar{s}_{r}$, with $s_{r}=\sigma_{r}+i t_{r}$ and $\bar{s}_{r}=\sigma_{r}-i t_{r}$, the conjugate pairs can be arranged as\\ \begin{align} &\sum_{\rho} \frac{1}{\rho}=\left(\frac{1}{s_{1}}+\frac{1}{s_{2}}+\frac{1}{s_{3}}+\cdots\right)+\left(\frac{1}{\bar{s}_{1}}+\frac{1}{\bar{s}_{2}}+\frac{1}{\bar{s}_{3}}+\cdots\right), \tag{21}\\[10pt] &\sum_{\rho} \frac{1}{\rho}=\left(\frac{1}{s_{1}}+\frac{1}{\bar{s}_{1}}\right)+\left(\frac{1}{s_{2}}+\frac{1}{\bar{s}_{2}}\right)+\left(\frac{1}{s_{3}}+\frac{1}{\bar{s}_{3}}\right)+\cdots, \tag{22}\\[10pt] &\sum_{\rho} \frac{1}{\rho}=\left(\frac{s_{1}+\bar{s}_{1}}{s_{1} \bar{s}_{1}}\right)+\left(\frac{s_{2}+\bar{s}_{2}}{s_{2} \bar{s}_{2}}\right)+\left(\frac{s_{3}+\bar{s}_{3}}{s_{3} \bar{s}_{3}}\right)+\cdots, \tag{23}\\[10pt] &\sum_{\rho} \frac{1}{\rho}=\left(\frac{\sigma_{1}+i t_{1}+\sigma_{1}-i t_{1}}{s_{1} \bar{s}_{1}}\right)+\left(\frac{\sigma_{2}+i t_{2}+\sigma_{2}-i t_{2}}{s_{2} \bar{s}_{2}}\right)+\left(\frac{\sigma_{3}+i t_{3}+\sigma_{3}-i t_{3}}{s_{3} \bar{s}_{3}}\right)+\cdots,\tag{24} \end{align} \noindent and then, after the imaginary parts cancel each other and the duplicated real parts add,\\ \[ \sum_{\rho} \frac{1}{\rho}=\left(\frac{2 \sigma_{1}}{s_{1} \bar{s}_{1}}\right)+\left(\frac{2 \sigma_{2}}{s_{2} \bar{s}_{2}}\right)+\left(\frac{2 \sigma_{3}}{s_{3} \bar{s}_{3}}\right)+\cdots=2 \sum_{r=1}^{\infty} \frac{\sigma_{r}}{s_{r} \bar{s}_{r}}=1+\frac{\gamma}{2}-\frac{\log (4 \pi)}{2} .\tag{25} \]\\ \noindent At this point, the summation $\sum_{r=1}^{\infty} \frac{\sigma_{r}}{s_{r} \bar{s}_{r}}$ of Eq.(20) can be substituted into Eq.(25) as follows\\ \[ \sum_{\rho} \frac{1}{\rho}=2 \sum_{r=1}^{\infty} \frac{\sigma_{r}}{s_{r} \bar{s}_{r}}=8 \sum_{n=1}^{\infty} \frac{n a_{2 n}}{2^{2 n}}=1+\frac{\gamma}{2}-\frac{\log (4 \pi)}{2} .\tag{26} \]\\ \noindent As a result, Eq.(26) is 
evidently a link between the Euler-Mascheroni constant and the coefficients $a_{2 n}$ of the Taylor series for the Riemann $\xi$-function around the point $s_{0}=\frac{1}{2}$, given by Eq.(4). At the present time, there is no bibliographic reference to this important finding and its fascinating consequences. Eq.(26) shows not only that the summation over the inverses of the non-trivial zeros of the Riemann zeta function, i.e., $\sum_{\rho} \frac{1}{\rho}$, converges to $1+\frac{\gamma}{2}-\frac{\log (4 \pi)}{2}$, but also that there is an unsuspected relationship between the coefficients $a_{2 n}$ of Eq.(4) and $\sum_{\rho} \frac{1}{\rho}$. In the next sections, the first twenty-one numerical values of the $a_{2 n}$ will be calculated and analysed, confirming the consistency of the formulas presented in the introduction. From Eq.(26), a new representation for the Euler-Mascheroni constant is thus inferred as follows\\ \[ \gamma=\log (4 \pi)-2+\left(2^{4}\right) \sum_{n=1}^{\infty} \frac{n a_{2 n}}{2^{2 n}} .\tag{27} \]\\ Now, a theoretical basis regarding the Turán moments and the coefficients of the Jensen polynomials will be discussed in order to understand their role within this context. An excellent article about the Turán inequalities and their role in the Riemann Hypothesis is cited here: it was written by G. Csordas, T. S. Norfolk and R. S. Varga [5], who calculated the first twenty-one Turán moments $\widehat{b}_{m}$ (so baptized for the purposes of the current article, as they carry no official name in the references) in the early eighties. 
They wrote the entire function $\xi$ in Taylor series form as\\ \[ \frac{1}{8} \xi\left(\frac{x}{2}\right)=\sum_{m=0}^{\infty} \frac{(-1)^{m} \widehat{b}_{m} x^{2 m}}{(2 m) !},\tag{28} \]\\ where the moments $\widehat{b}_{m}$ and the function $\Phi(t)$ are defined as \[ \widehat{b}_{m}:=\int_{0}^{\infty} t^{2 m} \Phi(t) d t, \quad (m=0,1, \ldots), \tag{29} \] \[ \Phi(t)=\sum_{n=1}^{\infty}\left(2 n^{4} \pi^{2} e^{9 t}-3 n^{2} \pi e^{5 t}\right) \exp \left(-n^{2} \pi e^{4 t}\right).\tag{30} \]\\ These authors obtained, as a result, that the Turán inequalities given by\\ \[ \left(\widehat{b}_{m}\right)^{2}>\left(\frac{2 m-1}{2 m+1}\right) \widehat{b}_{m-1} \widehat{b}_{m+1} \quad(m=1,2, \ldots),\tag{31} \]\\ are all valid. Moreover, they pointed out (quoted verbatim): \textit{'if one of these inequalities were to fail for some $m \geq 1$, then the Riemann Hypothesis could not be true!'} [5, p.522]. The present research about the Euler-Mascheroni constant reinforces the validity of the aforementioned by proving that the moments $\widehat{b}_{m}$ lead to an exact calculation of the Euler-Mascheroni constant in Eq.(27), through the precise formulation of the $a_{2 n}$ based on the data of the $\widehat{b}_{m}$, once the Riemann Hypothesis and the hyperbolicity of the Jensen polynomials [8] have been accepted for the Taylor series with coefficients $c_{n}$, associated with the Jensen polynomials, as follows\\ \[ 2 \xi\left(\frac{1}{2}+i x\right)=\sum_{n=0}^{\infty} c_{n} \frac{x^{2 n}}{n !}.\tag{32} \]\\ Eq.(32) is a particular case of Eq.(4) when the complex variable is set as $s=\frac{1}{2}+i x$, and it is equivalent to Eq.(6) when the number 2 is included as\\ \[ 2 \xi\left(\frac{1}{2}+i x\right)=2 \sum_{n=0}^{\infty}(-1)^{n} a_{2 n} x^{2 n}=\sum_{n=0}^{\infty} c_{n} \frac{x^{2 n}}{n !},\tag{33} \]\\ with $a_{2 n}=A_{n}=\left.\frac{(-1)^{2 n}}{(2 n) !} \frac{d^{2 n}}{d x^{2 n}} \xi\left(\frac{1}{2}+i x\right)\right|_{x=0}$. 
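The moment integral of Eq.(29) can be reproduced numerically from the definition of $\Phi(t)$ in Eq.(30); the sketch below (a composite Simpson quadrature on $[0,1]$, which suffices because $\Phi$ decays double-exponentially) recovers the tabulated value of $\widehat{b}_{0}$:

```python
import math

def phi(t, terms=50):
    """Phi(t) of Eq.(30), truncated after `terms` addends of the series."""
    return sum((2 * n**4 * math.pi**2 * math.exp(9 * t)
                - 3 * n**2 * math.pi * math.exp(5 * t))
               * math.exp(-n**2 * math.pi * math.exp(4 * t))
               for n in range(1, terms + 1))

# b_0 = integral of Phi over [0, inf), Eq.(29) with m = 0; the tail beyond
# t = 1 is far below machine precision, so composite Simpson on [0, 1] works.
N = 2000
h = 1.0 / N
s = phi(0.0) + phi(1.0) + sum((4 if i % 2 else 2) * phi(i * h) for i in range(1, N))
b0 = s * h / 3
print(b0)  # ≈ 6.2140097e-2
```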
As a result, the coefficients $c_{n}$ are equivalent to\\ \[ c_{n}=2(n !)(-1)^{n} a_{2 n}=\left.2(n !) \frac{(-1)^{n}}{(2 n) !} \frac{d^{2 n}}{d x^{2 n}} \xi\left(\frac{1}{2}+i x\right)\right|_{x=0} .\tag{34} \]\\ In this regard, the biggest obstacle to the definitive acceptance of the model for the Taylor series in Eq.(32), namely ensuring that only real roots $x$ exist rather than imaginary or complex ones, has been to finish checking the hyperbolicity of all the Jensen polynomials $J^{d, N}(x)=\sum\limits_{h=0}^{d} c_{h}\left(\begin{array}{l}d \\ h\end{array}\right) x^{h}$ [9] of degree $d$ and, at least, shift $N=0$, for the Taylor series of the Riemann $\xi$-function in Eq.(32). The current evidence provided by Griffin et al. [10] supports the possibility that this property, known as hyperbolicity, i.e., the fact of having only real roots, could indeed hold for all those polynomials. In the present paper, taking into consideration that the values of the coefficients $a_{2 n}$ and $c_{n}$ can be calculated through the computation of the Turán moments $\widehat{b_{m}}$, and based on other numerical evidence derived from the assumption of the hypothetical nature of the Hadamard product for $\xi\left(\frac{1}{2}+i x\right)$ that would equate the respective Taylor series seen in Eq.(32) as follows\\ \[ 2 \xi\left(\frac{1}{2}+i x\right)=\prod_{r=1}^{\infty} \frac{\left(t_{r}{ }^{2}-x^{2}\right)}{\left(\frac{1}{4}+t_{r}{ }^{2}\right)}=2 \sum_{n=0}^{\infty}(-1)^{n} a_{2 n} x^{2 n}=\sum_{n=0}^{\infty} c_{n} \frac{x^{2 n}}{n !},\tag{35} \]\\ a significant effort has been made toward the computation of the first three coefficients $c_{n}$, i.e., $c_{0}, c_{1}$ and $c_{2}$, by the thorough analysis of Eq.(35) in the section Materials and Methods and by the convergence of certain summations involving significant sets of the non-trivial zeros $\rho$ with $\sigma_{r}=\frac{1}{2}$ calculated by Odlyzko [11], which leads to compute consistently the same analytical value for\\ 
\[ c_{0}=2(0 !)(-1)^{0} a_{0}=\left.\frac{2(0 !)(-1)^{0}}{(2(0)) !} \frac{d^{0}}{d x^{0}} \xi\left(\frac{1}{2}+i x\right)\right|_{x=0}=2\left(-\frac{\Gamma\left(\frac{1}{4}\right) \zeta\left(\frac{1}{2}\right)}{8 \pi^{\frac{1}{4}}}\right)=-\frac{\Gamma\left(\frac{1}{4}\right) \zeta\left(\frac{1}{2}\right)}{4 \pi^{\frac{1}{4}}},\tag{36} \]\\ \noindent and numerically valid approximations for $c_{1}$ and $c_{2}$. The computation of twenty-one values of $c_{n}$, obtained from the known data for $\widehat{b}_{n}$, is shown later in Table 1. The successive $c_{1}$ and $c_{2}$ are inferred analytically and numerically under the assumption that Eq.(35) contains in its structure a pattern for the non-trivial zeros of the form $s_{r}=\frac{1}{2}+i t_{r}$ and $\bar{s}_{r}=\frac{1}{2}-i t_{r}$, with $\sigma_{r}=\frac{1}{2}$, and this agrees perfectly with the corresponding calculations derived from the data of the Turán moments and from the formulation in Eq.(34), based mainly on the numerical values of the even derivatives $\left.\frac{d^{2 n}}{d x^{2 n}} \xi\left(\frac{1}{2}+i x\right)\right|_{x=0}$. As a result, twenty-one experimental coefficients of the Jensen polynomials are validated and used to calculate, to a good degree of accuracy, the Euler-Mascheroni constant, as well as Lugo's constant. \begin{flushleft} \section*{II. 
Materials and Methods} \end{flushleft} In this section, the exact relationship between the Turán moments $\widehat{b}_{n}$ and the coefficients $c_{n}$ of the Jensen polynomials, for $n=0,1,2,3, \ldots$, is formulated empirically as \[ c_{n}=(-1)^{n} \frac{n !}{(2 n) !} 2^{2 n} \cdot 2^{4} \cdot \widehat{b}_{n}, \tag{37} \] and likewise, between the $\widehat{b}_{n}$ and the $a_{2 n}$, thanks to Eq.(34), $c_{n}=2(n !)(-1)^{n} a_{2 n}$, as \begin{align} &c_{n}=2(n !)(-1)^{n} a_{2 n}=(-1)^{n} \frac{n !}{(2 n) !} 2^{2 n} \cdot 2^{4} \cdot \widehat{b}_{n},\tag{38}\\[10pt] &a_{2 n}=\frac{2^{3}\left(2^{2 n}\right) \widehat{b}_{n}}{(2 n) !}=\frac{8 \cdot\left(2^{2 n}\right) \widehat{b}_{n}}{(2 n) !}.\tag{39} \end{align}\\ Therefore, when checking the first coefficients $c_{0}$ and $a_{0}$, already known from the result in Eq.(36) and from the mentioned reference [3], it is concluded that $\widehat{b}_{0}$ obeys the previous formulas consistently, because the known datum $\widehat{b}_{0} \approx 6.214009727353926\left(10^{-2}\right)$, taken from Table $4.1$ of G. Csordas, T.S. Norfolk and R.S. Varga [5, p.540] or from Table 1 of the current article, can be validated as \[c_{0}=(-1)^{0} \frac{0 !}{(2(0)) !} 2^{2(0)} \cdot 2^{4} \cdot \widehat{b}_{0}=16 \widehat{b_{0}}=-\frac{\Gamma\left(\frac{1}{4}\right) \zeta\left(\frac{1}{2}\right)}{4 \pi^{\frac{1}{4}}} \approx 16\left(6.214009727353926\left(10^{-2}\right)\right),\tag{40}\] which is evidently $c_{0}=-\frac{\Gamma\left(\frac{1}{4}\right) \zeta\left(\frac{1}{2}\right)}{4 \pi^{\frac{1}{4}}} \approx 0.994241556376 \ldots$, because $\zeta\left(\frac{1}{2}\right) \approx-1.460354 \ldots$ and $\Gamma\left(\frac{1}{4}\right)=\sqrt{2 G \sqrt{2 \pi^{3}}} \approx 3.6256099082 \ldots$, where $G \approx 0.8346268 \ldots$ is Gauss's constant [12]. 
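The conversions of Eq.(37)-(39), together with the spot-check of Eq.(40), can be packaged in a few lines; the closed form for $c_{0}$ below uses the known numerical value $\zeta\left(\frac{1}{2}\right) \approx -1.4603545088$:

```python
import math

# Eq.(37): c_n from the Turán moment b_n; Eq.(39): a_{2n} from b_n.
def c_from_bhat(n, bhat):
    return (-1)**n * math.factorial(n) / math.factorial(2 * n) * 4**n * 16 * bhat

def a_from_bhat(n, bhat):
    return 8 * 4**n * bhat / math.factorial(2 * n)

b0, b1 = 6.214009727353926e-2, 7.178732598482949e-4  # from Table 1

# Eq.(40): 16*b_0 should equal -Gamma(1/4)*zeta(1/2)/(4*pi^(1/4)).
zeta_half = -1.4603545088095868  # known numerical value of zeta(1/2)
c0_closed = -math.gamma(0.25) * zeta_half / (4 * math.pi**0.25)
print(c_from_bhat(0, b0), c0_closed)   # both ≈ 0.994241556376
print(a_from_bhat(1, b1))              # ≈ 1.14859721576e-2
```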
Hence, $a_{0}=-\frac{\Gamma\left(\frac{1}{4}\right) \zeta\left(\frac{1}{2}\right)}{8 \pi^{\frac{1}{4}}}=\frac{2^{3}\left(2^{2(0)}\right) \widehat{b}_{0}}{(2(0)) !}=8 \widehat{b}_{0} \approx 0.497120778188 \ldots$ The successive values of the coefficients $\widehat{b}_{n}, c_{n}$ and $a_{2 n}$ have been collected in Table 1, so that the Euler-Mascheroni constant can be computed on any scientific calculator, e.g., an HP 48G+ with 128K of RAM [13]. Thus, Eq.(27) can be applied with $\log (4 \pi)-2 \approx 0.53102424697$, the summation in Eq.(41) starting at $n=1$, with $a_{2} \approx 1.14859721576\left(10^{-2}\right)$, and finishing at $n=20$, with $a_{40} \approx 1.48737634559\left(10^{-55}\right)$: \[\gamma=\log (4 \pi)-2+\left(2^{4}\right) \sum_{n=1}^{\infty} \frac{n a_{2 n}}{2^{2 n}} \approx 0.53102424697+16\left(\frac{a_{2}}{4}+\frac{2 a_{4}}{16}+\cdots+\frac{20 a_{40}}{2^{40}}\right),\tag{41}\]\\ \[\begin{aligned} \gamma \approx 0.53102424697&+16\left(2.88696362077\left(10^{-3}\right)\right) \approx\\[10pt] &\approx 0.53102424697+4.61914179323\left(10^{-2}\right) \approx 0.577215664902 \end{aligned}\tag{42} \] \begin{table}[h] \renewcommand{\arraystretch}{1.5} \centering \begin{tabular}{|P{0.03\linewidth}|P{0.25\linewidth}|P{0.22\linewidth}|P{0.22\linewidth}|} \hline \textbf{n}&\textbf{Turán moments $\boldsymbol{(\widehat{b}_n )}$} & \textbf{Jensen’s $\boldsymbol{c_n^{\ast}}$}& \textbf{Taylor’s $\boldsymbol{a_{2n}^{\ast}}$}\\ \hline 0 & $6.214009727353926\cdot 10^{-2}$ & 0.994241556376 & 0.497120778188 \\ \hline 1 & $7.178732598482949\cdot 10^{-4}$ & $-2.29719443152 \cdot 10^{-2}$ & $1.14859721576\cdot 10^{-2}$\\ \hline 2 & $2.314725338818463\cdot 10^{-5} $ & $4.93808072283\cdot 10^{-4} $ & $1.23452018071 \cdot 10^{-4}$\\ \hline \end{tabular} \caption{The first twenty-one Turán moments $\widehat{b}_{n}$, Jensen's $c_{n}$ and Taylor's $a_{2 n}$} \end{table} \begin{table}[t] \renewcommand{\arraystretch}{1.5} \centering 
\begin{tabular}{|P{0.03\linewidth}|P{0.25\linewidth}|P{0.22\linewidth}|P{0.22\linewidth}|} \hline 3 & $1.170499895698397\cdot 10^{-6} $ & $-9.98826577664\cdot 10^{-6}$ & $8.32355481387\cdot 10^{-7}$\\ \hline 4 & $7.859696022958770\cdot 10^{-8} $ & $1.91626874465\cdot 10^{-7} $ & $3.99222655135\cdot 10^{-9}$\\ \hline 5 & $6.47444266092415\cdot 10^{-9}$ & $-3.50784618243\cdot 10^{-9}$ & $1.46160257601 \cdot 10^{-11}$\\ \hline 6 & $6.248509280628118\cdot 10^{-10}$ & $6.15533766557\cdot 10^{-11}$ & $4.27454004553\cdot 10^{-14}$\\ \hline 7 & $6.857113566031334\cdot 10^{-11}$ & $-1.03921031437\cdot 10^{-12}$ & $1.03096261346 \cdot 10^{-16}$\\ \hline 8 & $8.379562856498463\cdot 10^{-12}$ & $1.69325437329\cdot 10^{-14}$ & $2.09976980815 \cdot 10^{-19}$\\ \hline 9 & $1.122895900525652 \cdot 10^{-12}$ & $-2.66944768148\cdot 10^{-16}$ & $3.67814109551\cdot 10^{-22}$\\ \hline 10 & $1.630766572462173\cdot 10^{-13}$ & $4.08084512257\cdot 10^{-18}$ & $5.62285758732\cdot 10^{-25}$\\ \hline 11 & $2.543075058368090\cdot 10^{-14}$ & $-6.06077541545\cdot 10^{-20}$ & $7.59176013038\cdot 10^{-28}$\\ \hline 12 & $4.226693865498318\cdot 10^{-15}$ & $8.75935173682\cdot 10^{-22}$ & $9.14334287904\cdot 10^{-31}$\\ \hline 13 & $7.441357184567353\cdot 10^{-16}$ & $-1.23371064104\cdot 10^{-23}$ & $9.9061066332\cdot 10^{-34}$\\ \hline 14 & $1.380660423385153 \cdot 10^{-16}$ & $1.69556431188 \cdot 10^{-25}$ & $9.72469343308\cdot 10^{-37}$\\ \hline 15 & $2.687936596475912\cdot 10^{-17}$ & $-2.27655637357\cdot 10^{-27}$ & $8.7045996667\cdot 10^{-40}$\\ \hline 16 & $5.470564386990504\cdot 10^{-18}$ & $2.98923338866\cdot 10^{-29}$ & $7.14348661116\cdot 10^{-43}$\\ \hline 17 & $1.160183185841992\cdot 10^{-18}$ & $-3.84211459038\cdot 10^{-31}$ & $5.40097046858\cdot 10^{-46}$\\ \hline 18 & $2.556698594979872\cdot 10^{-19}$ & $4.83821574529\cdot 10^{-33}$ & $3.7784546542\cdot 10^{-49}$\\ \hline 19 & $5.840019662344811\cdot 10^{-20}$ & $-5.97376698863\cdot 10^{-35}$ & $2.45540797309\cdot 10^{-52}$\\ \hline 20 &
$1.379672872080269\cdot 10^{-20}$ & $7.23728179619\cdot 10^{-37}$ & $1.48737634559\cdot 10^{-55}$ \\ \hline \end{tabular} \end{table} \hspace{6ex}\begin{minipage}{0.825\textwidth} \justifying \noindent \small $^{\ast}c_{n}$ and $a_{2 n}$ were computed with a Hewlett Packard (HP) $48 \mathrm{G}+$ with $128 \mathrm{~K} \mathrm{RAM}$, based on the Turán moments of Table $4.1$ of the article by G. Csordas, T.S. Norfolk and R.S. Varga [5, p.540]. \end{minipage}\\\\\\ Now, Lugo's constant [14] can be defined by the expression\\ \[ L:=\lim _{n \rightarrow \infty}\left[\sum_{i=1}^{n} \sum_{j=1}^{n} \frac{1}{i+j}-(2 \ln (2)) n+\ln (n)\right]=-\frac{1}{2}-\gamma+\ln (2),\tag{43} \]\\ and, by replacing $\gamma$ according to Eq.$(27)$ in Eq.$(43)$, one obtains\\ \[ L=-\frac{1}{2}-\left[\ln (4 \pi)-2+\left(2^{4}\right) \sum_{n=1}^{\infty} \frac{n a_{2 n}}{2^{2 n}}\right]+\ln (2)=\frac{3}{2}-\ln (2 \pi)-\left(2^{4}\right) \sum_{n=1}^{\infty} \frac{n a_{2 n}}{2^{2 n}},\tag{44} \]\\ \[L \approx \frac{3}{2}-\ln (2 \pi)-4.61914179323\left(10^{-2}\right) \approx-0.384068484342,\tag{45}\]\\ where the data from Table 1 have been used again.
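The computations in Eq.(41)–(45) can be reproduced outside a hand calculator. A minimal Python sketch (illustrative only; the twenty Turán moments $\widehat{b}_{1}, \ldots, \widehat{b}_{20}$ are transcribed from Table 1, and the $a_{2n}$ are rebuilt via Eq.(39)):

```python
import math

# gamma and Lugo's constant from the Turan moments b^_1 ... b^_20 of Table 1,
# with the Taylor coefficients a_{2n} reconstructed via Eq.(39).
b = [7.178732598482949e-4, 2.314725338818463e-5, 1.170499895698397e-6,
     7.859696022958770e-8, 6.47444266092415e-9, 6.248509280628118e-10,
     6.857113566031334e-11, 8.379562856498463e-12, 1.122895900525652e-12,
     1.630766572462173e-13, 2.543075058368090e-14, 4.226693865498318e-15,
     7.441357184567353e-16, 1.380660423385153e-16, 2.687936596475912e-17,
     5.470564386990504e-18, 1.160183185841992e-18, 2.556698594979872e-19,
     5.840019662344811e-20, 1.379672872080269e-20]

a = {2 * n: 8 * 4 ** n * bn / math.factorial(2 * n)
     for n, bn in enumerate(b, start=1)}              # Eq.(39)

# Eq.(41): gamma = log(4 pi) - 2 + 2^4 * sum_{n>=1} n a_{2n} / 2^(2n)
gamma = math.log(4 * math.pi) - 2 + 16 * sum(
    n * a[2 * n] / 4 ** n for n in range(1, 21))

# Eq.(43)/(44): Lugo's constant L = -1/2 - gamma + ln 2
L = -0.5 - gamma + math.log(2)
```

Truncation at $n=20$ is harmless here, since the terms decay factorially.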
Now, when using Eq.(39) in Eq.(41), a second new representation for $\gamma$ is possible through the formula\\ \[ \gamma=\log (4 \pi)-2+\left(2^{7}\right) \sum_{n=1}^{\infty} \frac{n \widehat{b}_{n}}{(2 n) !},\tag{46} \]\\ which is based on the Turán moments $\widehat{b}_{n}$.\\\\ A verification of its convergence, using the twenty well-known values of $\widehat{b}_{n}$ from Table 1, i.e., $\widehat{b}_{1}=7.178732598482949\left(10^{-4}\right), \ldots,$ $\widehat{b}_{20}=1.379672872080269\left(10^{-20}\right)$, excluding $\widehat{b}_{0}$, and the same scientific calculator mentioned above, is presented below\\ \[ \begin{aligned} \gamma=\log (4 \pi)-2+\left(2^{7}\right) \sum_{n=1}^{\infty} \frac{n \widehat{b}_{n}}{(2 n) !} \approx 0.53102424697+\left(2^{7}\right)\left(\frac{\widehat{b}_{1}}{2 !}+\frac{2 \widehat{b}_{2}}{4 !}+\cdots+\frac{20 \widehat{b}_{20}}{40 !}\right) \approx \\[10pt] \approx 0.53102424697+\left(2^{7}\right)\left(3.60870452595\left(10^{-4}\right)\right) \approx 0.577215664902 . \end{aligned}\tag{47} \]\\ Then, Eq.(46) also contributes a representation of Lugo's constant, as follows\\ \[ \begin{aligned} L=\frac{3}{2}-\ln (2 \pi)-\left(2^{7}\right) \sum_{n=1}^{\infty} \frac{n \widehat{b}_{n}}{(2 n) !} & \approx-0.33787706641-2^{7}\left(3.60870452595\left(10^{-4}\right)\right) \approx \\[5pt] & \approx-0.384068484342 . \end{aligned}\tag{48} \]\\ Now, when using Eq.$(37)$ to replace the Turán moments $\widehat{b}_{n}$ by the coefficients $c_{n}$ of the Jensen polynomials, a third representation is deduced\\ \[ \gamma=\log (4 \pi)-2+\left(2^{3}\right) \sum_{n=1}^{\infty} \frac{n(-1)^{n} c_{n}}{\left(2^{2 n}\right) n !}, \tag{49} \]\\ and it is possible to calculate, once again, the same approximations for the Euler-Mascheroni and Lugo constants by using the data for $c_{n}$ instead of $\widehat{b}_{n}$ or $a_{2 n}$.
Here are the results\\ \[ \gamma \approx 0.53102424697+\left(2^{3}\right)\left(\frac{-c_{1}}{4}+\frac{2 c_{2}}{16(2 !)}-\cdots+\frac{20 c_{20}}{2^{40}(20 !)}\right) \approx 0.577215664902,\tag{50} \]\\ \[ L=\frac{3}{2}-\ln (2 \pi)-\left(2^{3}\right) \sum_{n=1}^{\infty} \frac{n(-1)^{n} c_{n}}{\left(2^{2 n}\right) n !} \approx-0.384068484342.\tag{51} \]\\ Now, thanks to Eq.(19), a valid series is available that approximates very well the value of $\Gamma\left(\frac{1}{4}\right) \zeta\left(\frac{1}{2}\right)$ by using the Taylor coefficients $a_{2 n}$ (or the $c_{n}$ and $\widehat{b}_{n}$ instead), as follows\\ \[ \Gamma\left(\frac{1}{4}\right) \zeta\left(\frac{1}{2}\right)=\left(8 \pi^{\frac{1}{4}}\right)\left[\left(\frac{a_{2}}{2^{2}}+\frac{a_{4}}{2^{4}}+\frac{a_{6}}{2^{6}}+\cdots\right)-\frac{1}{2}\right]=-64 \pi^{\frac{1}{4}} \widehat{b}_{0} \approx-5.29467577665 .\tag{52} \]\\ The discussion about reinforcing the previous results, specifically the assumption of the Riemann Hypothesis needed to reach the consistent approximations for the Euler-Mascheroni constant, which have proved effective, is based on corroborating the first coefficients $c_{0}, c_{1}, c_{2}, \widehat{b}_{0}, \widehat{b}_{1}, \widehat{b}_{2}$ and $a_{0}, a_{2}$ and $a_{4}$ by computing numerically some summations in Matlab [15], using thousands of the well-known non-trivial zeros of the Riemann zeta function provided by Odlyzko, as cited before.
First, it is necessary to start with Eq.(35) for each $\sigma_{r}=\frac{1}{2}$ by expanding the Hadamard product when multiplying the factors to each other in a similar way as in Eq.(9), but this time for the version given by Eq.(53)\\ \[ 2 \xi\left(\frac{1}{2}+i x\right)=\frac{\left(t_{1}{ }^{2}-x^{2}\right)}{\left(\frac{1}{4}+t_{1}{ }^{2}\right)} \frac{\left(t_{2}{ }^{2}-x^{2}\right)}{\left(\frac{1}{4}+t_{2}{ }^{2}\right)} \ldots \frac{\left(t_{j}{ }^{2}-x^{2}\right)}{\left(\frac{1}{4}+t_{j}{ }^{2}\right)} \ldots=2 \sum_{n=0}^{\infty}(-1)^{n} a_{2 n} x^{2 n}=\sum_{n=0}^{\infty} c_{n} \frac{x^{2 n}}{n !},\tag{53} \]\\ then, by multiplying Eq.(53) by the following factors presented in Eq.(54)\\ \[ \frac{\left(\frac{1}{4}+t_{1}{ }^{2}\right)}{t_{1}{ }^{2}}\frac{\left(\frac{1}{4}+t_{2}{ }^{2}\right)}{t_{2}{ }^{2}}\ldots \frac{\left(\frac{1}{4}+t_{j}{ }^{2}\right)}{t_{j}{ }^{2}} \ldots = \prod\limits_{r=1}^{\infty} \frac{\left(\frac{1}{4}+t_{r}{ }^{2}\right)}{t_{r}{ }^{2}}, \tag{54} \] \[\prod_{r=1}^{\infty} \frac{\left(\frac{1}{4}+t_{r}{ }^{2}\right)}{t_{r}{ }^{2}} \cdot\left[\sum_{n=0}^{\infty} c_{n} \frac{x^{2 n}}{n !}\right]=\prod_{r=1}^{\infty} \frac{\left(\frac{1}{4}+t_{r}{ }^{2}\right)}{t_{r}{ }^{2}} \prod_{r=1}^{\infty} \frac{\left(t_{r}{ }^{2}-x^{2}\right)}{\left(\frac{1}{4}+t_{r}{ }^{2}\right)}=\frac{\left(t_{1}{ }^{2}-x^{2}\right)}{t_1^2}\frac{ \left(t_{2}{ }^{2}-x^{2}\right)}{t_2^2}\ldots \frac{\left(t_{j}{ }^{2}-x^{2}\right)}{t_{j}{ }^{2}} \ldots, \tag{55}\]\\ where each $\left(\frac{1}{4}+t_{r}^{2}\right)$ is cancelled in the previous product. 
Now, by definition, the limit\\ \[\lim _{s \rightarrow \frac{1}{2}} 2 \xi(s)=\lim _{s \rightarrow \frac{1}{2}} \zeta(s)(s-1) s \Gamma\left(\frac{s}{2}\right) \pi^{\frac{-s}{2}}=\lim _{s \rightarrow \frac{1}{2}} \prod_{r=1}^{\infty}\left[1-\frac{s\left(2 \sigma_{r}-s\right)}{s_{r} \bar{s}_{r}}\right]\tag{56}\]\\ is easily calculated by\\ \[\lim _{s \rightarrow \frac{1}{2}} \zeta(s)(s-1) s \Gamma\left(\frac{s}{2}\right) \pi^{\frac{-s}{2}}=\zeta\left(\frac{1}{2}\right)\left(\frac{1}{2}-1\right)\left(\frac{1}{2}\right) \Gamma\left(\frac{1}{4}\right) \pi^{\frac{-1}{4}}=-\frac{1}{4} \Gamma\left(\frac{1}{4}\right) \zeta\left(\frac{1}{2}\right) \pi^{\frac{-1}{4}}=c_{0}=2 a_{0}. \tag{57}\]\\ As a result, if the Riemann Hypothesis is also assumed in this step, each $\sigma_{r}$ would be equal to $\frac{1}{2}$, i.e., Eq.(56) would be reduced to the expression\\\\ \[\lim _{s \rightarrow \frac{1}{2}} \prod_{r=1}^{\infty}\left[1-\frac{s\left(2 \sigma_{r}-s\right)}{s_{r} \bar{s}_{r}}\right]=\prod_{r=1}^{\infty}\left[1-\frac{\frac{1}{2}\left(1-\frac{1}{2}\right)}{\left(\frac{1}{4}+t_{r}^{2}\right)}\right]=\prod_{r=1}^{\infty}\left[1-\frac{\frac{1}{4}}{\left(\frac{1}{4}+t_{r}^{2}\right)}\right]=-\frac{\Gamma\left(\frac{1}{4}\right)\zeta\left(\frac{1}{2}\right)}{4 \pi^{\frac{1}{4}}}=c_{0} \tag{58}\]\\ or\\ \[\prod_{r=1}^{\infty}\left[1-\frac{\frac{1}{4}}{\left(\frac{1}{4}+t_{r}^{2}\right)}\right]=\prod_{r=1}^{\infty}\left[\frac{\left(\frac{1}{4}+t_{r}^{2}\right)-\frac{1}{4}}{\left(\frac{1}{4}+t_{r}^{2}\right)}\right]=\prod_{r=1}^{\infty}\left[\frac{t_{r}^{2}}{\left(\frac{1}{4}+t_{r}^{2}\right)}\right]=-\frac{\Gamma\left(\frac{1}{4}\right)\zeta\left(\frac{1}{2}\right)}{4 \pi^{\frac{1}{4}}}=c_{0}=2 a_{0}.\tag{59}\]\\\\ Keeping Eq.(59) in mind, it will now be shown that the same $c_{0}$, already known from the approach of the cited reference [3] and from Eq.(36), can also be calculated analytically.
However, if the analytical deduction and the numerical computation are attempted independently of the aforementioned, the result is that the supposition of the Riemann Hypothesis, i.e., $\sigma_{r}=\frac{1}{2}$, would yield that the value $-\frac{1}{4} \Gamma\left(\frac{1}{4}\right) \zeta\left(\frac{1}{2}\right) \pi^{\frac{-1}{4}}$ must really be $c_{0}$, or its equivalent $2 a_{0}$. Moreover, this is not the only assertion: the consecutive coefficients $c_{n}$ can also be inferred under the assumption of the Riemann Hypothesis, coinciding with the calculations of Table 1. Thus, Eq.(59) helps to simplify Eq.(55) as follows\\\\ \[\frac{-4 \pi^{\frac{1}{4}}}{\Gamma\left(\frac{1}{4}\right)\zeta\left(\frac{1}{2}\right)} \sum_{n=0}^{\infty} c_{n} \frac{x^{2 n}}{n !}=\frac{\left(t_{1}{ }^{2}-x^{2}\right)}{t_1^2}\frac{\left(t_{2}{ }^{2}-x^{2}\right)}{t_2^2} \ldots\frac{\left(t_{j}{ }^{2}-x^{2}\right)}{t_{j}{ }^{2}} \ldots=\prod_{r=1}^{\infty} \frac{\left(t_{r}{ }^{2}-x^{2}\right)}{t_{r}{ }^{2}}=\prod_{r=1}^{\infty}\left(1-\frac{x^{2}}{t_{r}{ }^{2}}\right).\tag{60}\]\\ If the factors indicated in the product on the right side of Eq.(60) are expanded carefully, the following patterns occur\\\\ \[\begin{aligned} \frac{\left(t_{1}{ }^{2}-x^{2}\right)}{t_{1}{ }^{2}} \frac{\left(t_{2}{ }^{2}-x^{2}\right)}{t_{2}{ }^{2}} \ldots \frac{\left(t_{j}{ }^{2}-x^{2}\right)}{t_{j}{ }^{2}} \ldots =1&-\left(\frac{1}{t_{1}{ }^{2}}+\frac{1}{t_{2}{ }^{2}}+\cdots\right) x^{2}+\left(\frac{1}{t_{1}{ }^{2} t_{2}{ }^{2}}+\frac{1}{t_{1}{ }^{2} t_{3}{ }^{2}}+\cdots\right) x^{4}+\\ &-\left(\frac{1}{t_{1}{ }^{2} t_{2}{ }^{2} t_{3}{ }^{2}}+\frac{1}{t_{1}{ }^{2} t_{2}{ }^{2} t_{4}{ }^{2}}+\cdots\right) x^{6}+\cdots+D_{2 l} x^{2 l}+\cdots, \end{aligned}\tag{61}\] \noindent with implicit terms like $D_{2 l} x^{2 l}$ for higher degrees.
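The expansion pattern of Eq.(61) is the usual one in elementary symmetric functions of the quantities $1/t_{r}^{2}$. A small Python sketch makes this explicit for a finite toy product (the five $t_{r}$ below are assumed sample ordinates, used only for illustration): the $x^{2}$-coefficient is $-\sum_{r} 1/t_{r}^{2}$ and the $x^{4}$-coefficient is the sum over unordered pairs $i<j$.

```python
# Toy check of the expansion Eq.(61): expand prod_r (1 - x^2/t_r^2) for a
# finite set of t_r and compare with the elementary symmetric sums.
t = [14.134725, 21.022040, 25.010858, 30.424876, 32.935062]
u = [1.0 / (tr * tr) for tr in t]          # the quantities 1/t_r^2

# Multiply out prod_r (1 - u_r y), with y = x^2: coefficients of y^0, y^1, ...
poly = [1.0]
for ur in u:
    poly = [p - ur * q for p, q in zip(poly + [0.0], [0.0] + poly)]

e1 = sum(u)                                # sum_r 1/t_r^2
e2 = sum(u[i] * u[j] for i in range(len(u))
         for j in range(i + 1, len(u)))    # sum over unordered pairs i < j
```

After the loop, `poly[1]` equals $-e_{1}$ and `poly[2]` equals $+e_{2}$, mirroring the signs in Eq.(61).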
Then, Eq.(61) is generalized partially as\\ \[\begin{aligned} \frac{-4 \pi^{\frac{1}{4}}}{\Gamma\left(\frac{1}{4}\right) \zeta\left(\frac{1}{2}\right)} \sum_{n=0}^{\infty} c_{n} \frac{x^{2 n}}{n !}=\prod_{r=1}^{\infty} &\left(1-\frac{x^{2}}{t_{r}^{2}}\right)=1-\left(\sum_{r=1}^{\infty} \frac{1}{t_{r}^{2}}\right) x^{2}+\left.\left(\sum_{j=1}^{\infty} \sum_{i=1}^{\infty} \frac{1}{t_{j}^{2} t_{i}^{2}}\right)\right|_{t_{j} \neq t_{i}} x^{4}-\\ &\left(\left.\sum_{i, j, m}^{\infty} \frac{1}{t_{j}^{2} t_{i}^{2} t_{m}^{2}}\right|_{t_{j} \neq t_{i} \neq t_{m}}\right) x^{6}+\cdots+D_{2 l} x^{2 l}+\cdots\end{aligned} \tag{62}\]\\\\ It is important to be cautious with the terms $\left.\left(\sum_{j=1}^{\infty} \sum_{i=1}^{\infty} \frac{1}{t_{j}{ }^{2} t_{i}{ }^{2}}\right)\right|_{t_{j} \neq t_{i}},\left(\left.\sum_{i, j, m}^{\infty} \frac{1}{t_{j}{ }^{2} t_{i}{ }^{2} t_{m}{ }^{2}}\right|_{t_{j} \neq t_{i} \neq t_{m}}\right)$ and the consecutive ones, in order to avoid repeating elements like $t_{i}{ }^{2}$ and $t_{j}{ }^{2}$ and to respect the combinatorial structure derived from the considered expansion (the constrained sums run over distinct, unordered index sets). Thus, from Eq.(62) it is easy to compare side by side and deduce how the respective coefficients pair with each other, as\\ \begin{align} \frac{-4 \pi^{\frac{1}{4}}}{\Gamma\left(\frac{1}{4}\right) \zeta\left(\frac{1}{2}\right)} \cdot c_{0}=1 &\rightarrow c_{0}=\frac{-\Gamma\left(\frac{1}{4}\right) \zeta\left(\frac{1}{2}\right)}{4 \pi^{\frac{1}{4}}} \approx 0.994241556376, \tag{63}\\[10pt] \frac{-4 \pi^{\frac{1}{4}}}{\Gamma\left(\frac{1}{4}\right) \zeta\left(\frac{1}{2}\right)} \cdot c_{1}=-\left(\sum_{r=1}^{\infty} \frac{1}{t_{r}^{2}}\right) &\rightarrow c_{1}=\frac{\Gamma\left(\frac{1}{4}\right) \zeta\left(\frac{1}{2}\right)}{4 \pi^{\frac{1}{4}}}\left(\sum_{r=1}^{\infty} \frac{1}{t_{r}^{2}}\right)=-c_{0}\left(\sum_{r=1}^{\infty} \frac{1}{t_{r}^{2}}\right)\tag{64}\\[10pt] \frac{-4 \pi^{\frac{1}{4}}}{(2 !)
\Gamma\left(\frac{1}{4}\right) \zeta\left(\frac{1}{2}\right)} \cdot c_{2}=\left.\sum_{j=1}^{\infty} \sum_{i=1}^{\infty} \frac{1}{t_{j}{ }^{2} t_{i}{ }^{2}}\right|_{t_{j} \neq t_{i}} &\rightarrow c_{2}=-\left.\frac{(2 !) \Gamma\left(\frac{1}{4}\right) \zeta\left(\frac{1}{2}\right)}{4 \pi^{\frac{1}{4}}}\left(\sum_{j=1}^{\infty} \sum_{i=1}^{\infty} \frac{1}{t_{j}{ }^{2} t_{i}{ }^{2}}\right)\right|_{t_{j} \neq t_{i}}\tag{65}\\[10pt] \frac{-4 \pi^{\frac{1}{4}}}{(3 !) \Gamma\left(\frac{1}{4}\right) \zeta\left(\frac{1}{2}\right)} \cdot c_{3}=-\left.\sum_{i, j, m}^{\infty} \frac{1}{t_{j}{ }^{2} t_{i}{ }^{2} t_{m}{ }^{2}}\right|_{t_{j} \neq t_{i} \neq t_{m}} &\rightarrow c_{3}=\frac{(3 !) \Gamma\left(\frac{1}{4}\right)\zeta\left(\frac{1}{2}\right)}{4 \pi^{\frac{1}{4}}}\left(\left.\sum_{i, j, m}^{\infty} \frac{1}{t_{j}{ }^{2} t_{i}{ }^{2} t_{m}{ }^{2}}\right|_{t_{j} \neq t_{i} \neq t_{m}}\right),\tag{66} \end{align}\\ \noindent and all the successive terms are given by expressions of the form $\frac{-4 \pi^{\frac{1}{4}}}{\Gamma\left(\frac{1}{4}\right) \zeta\left(\frac{1}{2}\right)(l !)} \cdot c_{l}=D_{2 l}$ for each $n=l$. A routine in Matlab, Fig.1, which processed a vector of 40356 imaginary parts of the non-trivial zeros of the Riemann zeta function, has been used to compute the coefficients $c_{0}, c_{1}$ and $c_{2}$. The results are very close approximations of the first coefficients mentioned, consistent with the data of Table 1. There is no doubt that the calculations of such coefficients are valid, backing the approach of professors Csordas, Norfolk and Varga, who obtained the tabulated Turán moments, related to the coefficients of the Jensen polynomials as proved before.
Furthermore, expressions like $\sum_{r=1}^{\infty} \frac{1}{t_{r}{ }^{2}},\left.\sum_{j=1}^{\infty} \sum_{i=1}^{\infty} \frac{1}{t_{j}{ }^{2} t_{i}{ }^{2}}\right|_{t_{j} \neq t_{i}}$ and the successive ones converge once a certain number of imaginary parts has been used, and it is evident that inverses like $\frac{1}{t_{r}{ }^{2}}$ and $\frac{1}{t_{j}{ }^{2} t_{i}{ }^{2}}$ vanish for greater values of $t_{i}, t_{j}$ and the rest, especially those involving long products like $t_{j}{ }^{2} t_{i}{ }^{2} t_{m}{ }^{2}$, which form the denominators of such expressions. The convergence of these approximations does not require exaggerated sets of data; for the experimental purposes of this research, a convenient amount of 40356 zeros is used, although even fewer would be enough. \noindent \begin{figure} \caption{Routine in Matlab for computing the first three coefficients of the Jensen polynomials with shift $N=0$: $c_{0}$, $c_{1}$ and $c_{2}$.} \end{figure}\\ Therefore, the expected numerical coefficients $c_{0}, c_{1}$ and $c_{2}$ in Matlab are\\ \[c_{0}=\frac{-\Gamma\left(\frac{1}{4}\right) \zeta\left(\frac{1}{2}\right)}{4 \pi^{\frac{1}{4}}} \approx 0.9942 \ldots \tag{67}\]\\ \[c_{1}=\frac{\Gamma\left(\frac{1}{4}\right) \zeta\left(\frac{1}{2}\right)}{4 \pi^{\frac{1}{4}}}\left(\sum_{r=1}^{\infty} \frac{1}{t_{r}^{2}}\right) \approx-0.02297 \ldots,\tag{68}\]\\ \[c_{2}=-\left.\frac{(2 !)
\Gamma\left(\frac{1}{4}\right) \zeta\left(\frac{1}{2}\right)}{4 \pi^{\frac{1}{4}}}\left(\sum_{j=1}^{\infty} \sum_{i=1}^{\infty} \frac{1}{t_{j}{ }^{2} t_{i}{ }^{2}}\right)\right|_{t_{j} \neq t_{i}} \approx 0.00049172 \ldots\tag{69}\]\\ As a very interesting finding, a remarkable analytical representation for computing the first Jensen coefficients $c_{n}$ has been introduced, because from Eq.(34) it is possible to infer that, for $n=1$,\\ \[c_{1}=\frac{\Gamma\left(\frac{1}{4}\right) \zeta\left(\frac{1}{2}\right)}{4 \pi^{\frac{1}{4}}}\left(\sum_{r=1}^{\infty} \frac{1}{t_{r}^{2}}\right)=-\left.\frac{d^{2}}{d^{2} x} \xi\left(\frac{1}{2}+i x\right)\right|_{x=0} \approx-0.02297 \ldots\tag{70}\]\\ which means that the second derivative of the Riemann Xi-function evaluated at $\frac{1}{2}+i(0)$ is based on the summation of the inverses of the squares of the imaginary parts of the non-trivial zeros of the Riemann zeta function, as seen in Eq.(70). As a result, several values of the even derivatives of $\xi$ will be calculated using the Turán moments, which can be compared to the approximations of Coffey [16].\\\\ Taking into consideration that Coffey [16] calculated the even derivatives of the Riemann Xi function at the particular point $s=\left.\left(\frac{1}{2}+i x\right)\right|_{x=0}$, the new results presented in the current article comprise a further computation of some known even derivatives and also new ones, as seen in Table 2, up to the even derivative of order 40, i.e., $\left.\frac{d^{40}}{d^{40} x} \xi\left(\frac{1}{2}+i x\right)\right|_{x=0}$, which is rarely found in the references on the Riemann Xi function; this is part of the novelty of this work.
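The convergence behaviour behind Eq.(64) and Eq.(68) can be glimpsed with a short Python sketch (illustrative only, not the article's Matlab routine): with just the first ten zero ordinates, the partial sum $\sum_r 1/t_r^2$ is still well below its limit $-c_1/c_0 \approx 0.0231$, which is why the article uses 40356 zeros.

```python
# Partial-sum illustration of Eq.(64): c_1 = -c_0 * sum_r 1/t_r^2, using only
# the first ten zero ordinates t_r; the Matlab routine of Fig.1 uses 40356.
t = [14.134725, 21.022040, 25.010858, 30.424876, 32.935062,
     37.586178, 40.918719, 43.327073, 48.005151, 49.773832]

c0 = 0.994241556376                        # from Eq.(40)
S10 = sum(1.0 / (tr * tr) for tr in t)     # partial sum of 1/t_r^2
c1_partial = -c0 * S10                     # crude partial estimate of c_1
```

The partial estimate undershoots $c_1 \approx -0.02297$ in absolute value, as expected from the slow tail of the sum over zeros.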
\begin{table}[h] \def1.5{1.5} \centering \begin{tabular}{|P{0.03\linewidth}|P{0.27\linewidth}|P{0.27\linewidth}|P{0.27\linewidth}|} \hline \textbf{n} & $\left.\frac{d^{2 n}}{d^{2 n} x} \xi\left(\frac{1}{2}+i x\right)\right|_{x=0}$ \text {by} $\widehat{\boldsymbol{b}}_{\mathbf{n}}$ & $\left.\frac{d^{2 n}}{d^{2 n} x} \xi\left(\frac{1}{2}+i x\right)\right|_{x=0}$ \text {by} $\boldsymbol{c}_{\mathbf{n}}$ & $\left.\frac{d^{2 n}}{d^{2 n} x} \xi\left(\frac{1}{2}+i x\right)\right|_{x=0}$ \text {by} $\boldsymbol{{a}_{2 \mathbf{n}}}$ \\ \hline 0 & 0.497120778188 & 0.497120778188 & 0.497120778188 \\ \hline 1 & 0.0229719443152 & 0.0229719443152 & 0.0229719443152 \\ \hline 2 & 0.002962848433688 & 0.0029628484337 & 0.0029628484337 \\ \hline 3 & 0.0005992959465976 & 0.000599295946598 & 0.000599295946599 \\ \hline 4 & 0.00016096657455 & 0.000160966574551 & 0.00016096657455 \\ \hline 5 & 0.0000530386342783 & 0.0000530386342783 & 0.0000530386342783 \\ \hline 6 & 0.0000204751152107 & 0.0000204751152108 & 0.0000204751152107 \\ \hline 7 & 0.00000898775589325 & 0.00000898775589327 & 0.00000898775589325 \\ \hline 8 & 0.00000439330425091 & 0.0000043933042509 & 0.00000439330425091 \\ \hline 9 & 0.00000235488338359 & 0.00000235488338359 & 0.00000235488338359 \\ \hline 10 & 0.00000136798615159 & 0.00000136798615159 & 0.00000136798615159 \\ \hline 11 & 0.000000853314391166 & 0.000000853314391166 & 0.000000853314391166 \\ \hline 12 & 0.00000056729724758 & 0.00000056729724758 & 0.00000056729724758 \\ \hline 13 & 0.0000003995048218196 & 0.000000399504821818 & 0.000000399504821818 \\ \hline 14 & 0.000000296494568267 & 0.000000296494568267 & 0.000000296494568267 \\ \hline 15 & 0.0000002308919955117 & 0.000000230891995511 & 0.000000230891995511 \\ \hline 16 & 0.0000001879671610623 & 0.000000187967161062 & 0.000000187967161062 \\ \hline 17 & 0.0000001594543628979 & 0.000000159454362897 & 0.000000159454362897 \\ \hline 18 & 0.0000001405559916949 & 0.000000140555991695 & 0.000000140555991694 \\ \hline 
19 & 0.0000001284233905038 & 0.000000128423390504 & 0.000000128423390503 \\ \hline 20 & 0.0000001213573092303 & 0.000000121357309231 & 0.000000121357309231 \\ \hline \end{tabular} \caption{The first twenty-one approximations of the even derivatives of $\xi\left(\frac{1}{2}+i x\right)$ at $x=0$.} \end{table}\\ \hspace{-0.8ex}\begin{minipage}{0.95\textwidth} \justifying \small \noindent The first twenty-one even derivatives of $\xi\left(\frac{1}{2}+i x\right)$ at $x=0$, calculated in Matlab by using the $\widehat{b}_{\mathbf{n}}, c_{\mathbf{n}}$ and $a_{2 \mathbf{n}}$. It can be appreciated that the three columns contain practically the same results, since they are equivalent ways of computing the values. The precision depends on the decimal places of the coefficients used. \end{minipage}\\\\\\ When inspecting Eq.(34), it can be rewritten in terms of the $a_{2 n}$ as follows\\ \[\left.\frac{d^{2 n}}{d^{2 n} x} \xi\left(\frac{1}{2}+i x\right)\right|_{x=0}=(2 n) !(-1)^{2 n} a_{2 n}=(2 n) ! a_{2 n},\tag{71}\]\\ or also in its equivalent version based on the coefficients of the Jensen polynomials $c_{n}$\\ \[\left.\frac{d^{2 n}}{d^{2 n} x} \xi\left(\frac{1}{2}+i x\right)\right|_{x=0}=(-1)^{n} \frac{(2 n) !}{2(n !)} c_{n},\tag{72}\] and in a third way based on the Turán moments, corresponding to the analysis of Eq.(38) and Eq.(39), which leads to the conclusion that\\ \[\left.\frac{d^{2 n}}{d^{2 n} x} \xi\left(\frac{1}{2}+i x\right)\right|_{x=0}=(2 n) ! a_{2 n}=(2 n) ! \frac{8 \cdot\left(2^{2 n}\right) \widehat{b}_{n}}{(2 n) !}=8\left(2^{2 n}\right) \widehat{b}_{n}.\tag{73} \]\\\\ By using any of these three representations and the data of Table 1, any even derivative of $\xi\left(\frac{1}{2}+i x\right)$ at $x=0$ can be calculated, as seen in Table 2.
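The Turán-moment route of Eq.(73) is the simplest of the three to evaluate. A minimal Python sketch (illustrative; the article's Table 2 was generated in Matlab):

```python
# Even derivatives of the Riemann Xi function at the point of Eq.(73):
# the 2n-th derivative equals 8 * 2^(2n) * b^_n.
b = [6.214009727353926e-2, 7.178732598482949e-4,
     2.314725338818463e-5, 1.170499895698397e-6]    # b^_0 ... b^_3 (Table 1)

deriv = [8 * 4 ** n * bn for n, bn in enumerate(b)]  # Eq.(73)
```

The entries `deriv[1]`, `deriv[2]` and `deriv[3]` reproduce the first three values quoted from Coffey [16].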
Moreover, the consistent approximations of such derivatives prove that the three kinds of coefficients are all valid; otherwise, they would yield wrong approximations in the three cases. Table 2 illustrates the even derivatives from $\left.\frac{d^{(0)}}{d^{(0)} x} \xi\left(\frac{1}{2}+i x\right)\right|_{x=0}$ to $\left.\frac{d^{(40)}}{d^{(40)} x} \xi\left(\frac{1}{2}+i x\right)\right|_{x=0}$; as a result, researchers can validate the first known even derivatives computed by Coffey [16], found on page 529 of the journal where that publication appeared, i.e., $\left.\frac{d^{(2)}}{d^{(2)} x} \xi\left(\frac{1}{2}+i x\right)\right|_{x=0} \cong 0.0229719443$, $\left.\frac{d^{(4)}}{d^{(4)} x} \xi\left(\frac{1}{2}+i x\right)\right|_{x=0} \cong 0.0029628484$ and $\left.\frac{d^{(6)}}{d^{(6)} x} \xi\left(\frac{1}{2}+i x\right)\right|_{x=0} \cong 0.0005992959$. Moreover, the even derivatives are all positive, in accordance with the results of Coffey.\\\\ The m-file for computing the even derivatives, named 'Even derivatives.m', can be requested directly from the author as free code; however, it is also very easy to test with a calculator the coefficients of Table 1 against Eq.(71), Eq.(72) and Eq.(73) in order to generate the data of Table 2.\\\\ Now, the final part of this section shows formulas for the Riemann Xi and Riemann zeta functions based on the three types of coefficients discussed before, as follows\\ \[\xi(s)=\sum_{n=0}^{\infty} a_{2 n}\left(s-\frac{1}{2}\right)^{2 n} \equiv \sum_{n=0}^{\infty}(-1)^{n} \frac{c_{n}}{2(n !)}\left(s-\frac{1}{2}\right)^{2 n} \equiv \sum_{n=0}^{\infty} \frac{8 \cdot\left(2^{2 n}\right) \widehat{b}_{n}}{(2 n) !}\left(s-\frac{1}{2}\right)^{2 n},\tag{74}\]\\ and, using one of the multiple definitions of the Riemann zeta function [17] in terms of the Riemann Xi, as\\ \[\zeta(s)=\xi(s) \frac{\pi^{\frac{s}{2}}}{(s-1) \Gamma\left(1+\frac{s}{2}\right)}=\frac{\pi^{\frac{s}{2}}}{(s-1)
\Gamma\left(1+\frac{s}{2}\right)} \sum_{n=0}^{\infty} a_{2 n}\left(s-\frac{1}{2}\right)^{2 n},\tag{75}\]\\ and hence, the other two representations, based on the coefficients of the Jensen polynomials and on the Turán moments, follow thanks to the equivalences inferred before\\ \[\zeta(s)=\xi(s) \frac{\pi^{\frac{s}{2}}}{(s-1) \Gamma\left(1+\frac{s}{2}\right)}=\frac{\pi^{\frac{s}{2}}}{(s-1) \Gamma\left(1+\frac{s}{2}\right)} \sum_{n=0}^{\infty}(-1)^{n} \frac{c_{n}}{2(n !)}\left(s-\frac{1}{2}\right)^{2 n},\tag{76}\]\\ \[\zeta(s)=\xi(s) \frac{\pi^{\frac{s}{2}}}{(s-1) \Gamma\left(1+\frac{s}{2}\right)}=\frac{\pi^{\frac{s}{2}}}{(s-1) \Gamma\left(1+\frac{s}{2}\right)} \sum_{n=0}^{\infty} \frac{8 \cdot\left(2^{2 n}\right) \widehat{b}_{n}}{(2 n) !}\left(s-\frac{1}{2}\right)^{2 n},\tag{77}\]\\ because Eq.(37), Eq.(38) and Eq.(39) define such equivalences. Moreover, the expressions given by Eq.(15), and the successive ones after it that have been studied previously, like Eq.(25) and Eq.(26), allow the formulation of a new polynomial series not seen before, whose coefficients would be precisely $G_{0}, G_{1}, G_{2}, \ldots, G_{j}, \ldots$, where $G_{0}$ and $G_{1}$ are the first well-known coefficients reviewed previously\\ \[\frac{1}{2}=G_{0}=\left(\frac{a_{0}}{2^{0}}+\frac{a_{2}}{2^{2}}+\frac{a_{4}}{2^{4}}+\frac{a_{6}}{2^{6}}+\cdots\right)=\sum_{n=0}^{\infty} \frac{a_{2 n}}{2^{2 n}},\tag{78}\]\\ \[G_{1}=-\frac{1}{2} \sum_{\rho} \frac{1}{\rho}=\sum_{r=1}^{\infty} \frac{\sigma_{r}}{s_{r} \bar{s}_{r}}=-\sum_{n=1}^{\infty} \frac{4 n a_{2 n}}{2^{2 n}}=-\frac{1}{2}\left(1+\frac{\gamma}{2}-\frac{\log (4 \pi)}{2}\right).\tag{79}\]\\ As a result, the partial definitions of the Riemann Xi and zeta functions are\\ \[\xi(s)=\sum_{n=0}^{\infty} a_{2 n}\left(s-\frac{1}{2}\right)^{2 n} \equiv \frac{1}{2}-\frac{1}{2}\left(1+\frac{\gamma}{2}-\frac{\log (4 \pi)}{2}\right) s+G_{2} s^{2}+G_{3} s^{3}+\cdots+G_{j} s^{j}+\cdots.\tag{80}\]\\ \[\zeta(s)=\frac{\pi^{\frac{s}{2}}}{(s-1)
\Gamma\left(1+\frac{s}{2}\right)}\left[\frac{1}{2}-\frac{1}{2}\left(1+\frac{\gamma}{2}-\frac{\log (4 \pi)}{2}\right) s+G_{2} s^{2}+G_{3} s^{3}+\cdots+G_{j} s^{j}+\cdots\right],\tag{81}\]\\ which are alternative representations obtained when expanding the term $a_{2 n}\left(s-\frac{1}{2}\right)^{2 n}$ via the binomial theorem applied to $\left(s-\frac{1}{2}\right)^{2 n}$. Finally, a graphical representation of the modulus of the Riemann Xi function in Matlab, together with its real and imaginary parts, is presented in Fig.2 for a small region specified by $0<\operatorname{Real}(s)<1$ and $-15<\operatorname{Imag}(s)<15$, in order to show only the region containing the first conjugate pair of non-trivial zeros, $s=\frac{1}{2}+i\,14.1347 \ldots$ and $s=\frac{1}{2}-i\,14.1347 \ldots$. The modulus has been computed within the considered rectangle of the complex domain by using the analytical expression Eq.(74), based on the Taylor even coefficients $a_{2 n}$ calculated in Table 1. As a result, it is a valid approximation of the Riemann Xi function by means of those 21 coefficients, which shows that the coefficients are suitable for a representation of the Riemann Xi function. The algorithm in Matlab generates graphics that can be compared with the same figures that Wolfram [18] makes available to the public; the algorithm produces exactly the expected modulus, real and imaginary parts on the domain considered. Fig. 3 can be checked via Wolfram at the link \url{https://mathworld.wolfram.com/Xi-Function.html} and compared with the m-file named 'Riemann Xi modulus.m'. Moreover, typical values like $\xi\left(\frac{1}{2}\right) \cong 0.497120778188$ and $\xi(0)=\xi(1)=0.5$ are computed by the m-file through the Taylor series given by Eq.
(74).\\ \begin{figure} \caption{The real and imaginary parts and the modulus of the Riemann Xi function, represented in Matlab by using the twenty-one $a_{2 n}$.} \end{figure} \noindent The m-file leads to a consistent representation of the expected figures in small intervals, and at the first non-trivial zero it computes approximately $\xi\left(\frac{1}{2}+i\,14.13 \ldots\right) \cong 0$, as expected.\\ \begin{figure} \caption{The real and imaginary parts of the Riemann Xi function and its modulus computed by Wolfram. The results are similar to Fig.2.} \end{figure}\\ The complete m-file for generating the graphics in Fig.2 can be requested from the authors as well.\\\\ Crucial results that help to validate the computed even coefficients $a_{2 n}$ and $c_{n}$, and their numerical consequences based on the formulas applied to the data of the Turán moments $\widehat{b}_{n}$, are the Bernoulli numbers [19], through Eq.(75) and the famous expression for $\zeta(2 r)$ [20] when the Riemann zeta function is evaluated at every $s=2 r$, for $r=0,1,2,3, \ldots$, as follows\\ \[\zeta(2 r)=(-1)^{r-1} \frac{(2 \pi)^{2 r} B_{2 r}}{2(2 r) !}=\frac{\pi^{r}}{(2 r-1) \Gamma(1+r)} \sum_{n=0}^{\infty} a_{2 n}\left(2 r-\frac{1}{2}\right)^{2 n},\tag{82}\]\\ where $\Gamma(1+r)=r !$ is a well-known property of the Gamma function and the $B_{2 r}$ are the Bernoulli numbers with even index $2 r$.
Then, every Bernoulli number $B_{2 r}$ is written as a function of convergent summations based only on the Taylor coefficients $a_{2 n}$, as follows\\ \[B_{2 r}=\frac{2(-1)^{r-1} \cdot(2 r) ! \pi^{r}}{(2 \pi)^{2 r}(2 r-1) r !} \sum_{n=0}^{\infty} a_{2 n}\left(2 r-\frac{1}{2}\right)^{2 n}=\frac{(-1)^{r-1} \cdot(2 r) !(2)^{1-2 r}}{\pi^{r}(2 r-1) r !} \sum_{n=0}^{\infty} a_{2 n}\left(2 r-\frac{1}{2}\right)^{2 n},\tag{83}\]\\ which can be evaluated easily with any calculator or with software like Matlab for any specific integer $r=0,1,2,3, \ldots$; for example, the first four even-index Bernoulli numbers are computed from the known data of Table 1, with twenty-one $a_{2 n}$, as follows\\ \[B_{0}=2 \sum_{n=0}^{\infty} a_{2 n}\left(\frac{1}{2}\right)^{2 n}=2 \sum_{n=0}^{\infty} \frac{a_{2 n}}{4^{n}} \approx 2\left(\frac{a_{0}}{4^{0}}+\frac{a_{2}}{4^{1}}+\cdots+\frac{a_{40}}{4^{20}}\right) \approx 0.4971207781+\frac{\frac{1.14859721576}{100}}{4}+\cdots,\tag{84}\]\\ $B_{0} \approx 0.999999999999 \approx 1$, as expected.
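Eq.(83) can be checked in a few lines of Python (an illustrative sketch, not the article's calculator computation; the $a_{2n}$ are rebuilt from the Turán moments of Table 1 via Eq.(39)):

```python
import math

# Check of Eq.(83) for r = 0 and r = 1, truncating the series at a_{20}.
b = [6.214009727353926e-2, 7.178732598482949e-4, 2.314725338818463e-5,
     1.170499895698397e-6, 7.859696022958770e-8, 6.47444266092415e-9,
     6.248509280628118e-10, 6.857113566031334e-11, 8.379562856498463e-12,
     1.122895900525652e-12, 1.630766572462173e-13]      # b^_0 ... b^_10
a = [8 * 4 ** n * bn / math.factorial(2 * n) for n, bn in enumerate(b)]

def bernoulli_even(r):
    """B_{2r} from Eq.(83), using the available truncated a_{2n}."""
    s = sum(a2n * (2 * r - 0.5) ** (2 * n) for n, a2n in enumerate(a))
    pref = (2 * (-1.0) ** (r - 1) * math.factorial(2 * r) * math.pi ** r
            / ((2 * math.pi) ** (2 * r) * (2 * r - 1) * math.factorial(r)))
    return pref * s

B0 = bernoulli_even(0)   # expected B_0 = 1
B2 = bernoulli_even(1)   # expected B_2 = 1/6
```

Larger $r$ requires more terms of the series, since the factor $\left(2r-\frac{1}{2}\right)^{2n}$ grows quickly.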
Now, for the next Bernoulli numbers with even index\\ \begin{align} &B_{2}=\frac{1}{\pi^{1}} \sum_{n=0}^{\infty} a_{2 n}\left(\frac{3}{2}\right)^{2 n} \approx \frac{1}{\pi}\left[a_{0}+a_{2}\left(\frac{3}{2}\right)^{2}+\cdots+a_{40}\left(\frac{3}{2}\right)^{40}\right] \approx 0.16666666666658 \approx \frac{1}{6}, \tag{85}\\[10pt] &B_{4}=\frac{-1}{2 \pi^{2}} \sum_{n=0}^{\infty} a_{2 n}\left(\frac{7}{2}\right)^{2 n} \approx \frac{-1}{2 \pi^{2}}\left[a_{0}+a_{2}\left(\frac{7}{2}\right)^{2}+\cdots+a_{40}\left(\frac{7}{2}\right)^{40}\right] \approx-0.03333 \ldots \approx-\frac{1}{30},\tag{86}\\[10pt] &B_{6}=\frac{3}{4 \pi^{3}} \sum_{n=0}^{\infty} a_{2 n}\left(\frac{11}{2}\right)^{2 n} \approx \frac{3}{4 \pi^{3}}\left[a_{0}+a_{2}\left(\frac{11}{2}\right)^{2}+\cdots+a_{40}\left(\frac{11}{2}\right)^{40}\right] \approx 0.0238 \ldots \approx \frac{1}{42}.\tag{87} \end{align} Anyone who tests more Bernoulli numbers $B_{2 r}$ with the help of Eq.(83) will prove consistently that such numbers are inferred in this way. The results for the Eq.(41) and the Eq.(46) are undoubtedly related to those findings. Until now, nobody had discovered that the Bernoulli numbers could have an evident connection with the Turán moments and the coefficients of Jensen polynomials as seen before, which has been numerically derived by the computation and, above all, the proper analysis of such numbers. As a result, Eq.(83) can be written in two ways as well, one for the Turán moments, Eq.(88), and a second one, Eq.(89), over the coefficients of Jensen polynomials, being the Eq.(89) deeply related to the fact of considering the entire function $\xi$ evaluated as $\xi\left(\frac{1}{2}+i x\right)$ when the scenario of hyperbolicity of the Jensen polynomials is accepted according to the recent works of researchers like Griffin et al. 
[21], who proved many significant cases of this property for the Jensen polynomials; given the variety of formulas obtained and presented here under the assumption that the real parts of the non-trivial zeros of the Riemann zeta function lie strictly on the critical line, that property would be extremely difficult to contradict. From this scenario, the Bernoulli numbers $B_{2 r}$ have an important connection with the Turán moments and the coefficients of the Jensen polynomials, as follows\\\\ \[B_{2 r}=\frac{16(-1)^{r-1} \cdot(2 r) !\, 2^{-2 r}}{\pi^{r}(2 r-1)\, r !} \sum_{n=0}^{\infty} \frac{2^{2 n} \widehat{b_{n}}\left(2 r-\frac{1}{2}\right)^{2 n}}{(2 n) !},\tag{88} \]\\ \[B_{2 r}=\frac{(-1)^{r-1} \cdot(2 r) !\, 2^{1-2 r}}{\pi^{r}(2 r-1)\, r !} \sum_{n=0}^{\infty} \frac{(-1)^{n} C_{n}\left(2 r-\frac{1}{2}\right)^{2 n}}{2(n !)}.\tag{89}\]\\ Recalling also the known expression, introduced at the beginning of this article, that links the Gregory coefficients $G_{n}$ of order 1 to the higher-order Bernoulli numbers,\\ \[G_{n}=-\frac{B_{n}{ }^{(n-1)}}{(n-1) \cdot(n !)},\]\\ we were able to find that every $G_{2 r}=G_{2 r}(1)$, with $r=1,2,3$, and the respective $C_{n}$ and $\widehat{b_{n}}$ can be linked as\\ \[\begin{aligned} &B_{2 r}=\frac{16(-1)^{r-1} \cdot(2 r) !\, 2^{-2 r}}{\pi^{r}(2 r-1)\, r !} \sum\limits_{n=0}^{\infty} \frac{2^{2 n} \widehat{b_{n}}\left(2 r-\frac{1}{2}\right)^{2 n}}{(2 n) !}=\sqrt[2 r-1]{-G_{2 r}(2 r-1) \cdot(2 r) !}, \\[10pt] &B_{2 r}=\frac{(-1)^{r-1} \cdot(2 r) !\, 2^{1-2 r}}{\pi^{r}(2 r-1)\, r !} \sum\limits_{n=0}^{\infty} \frac{(-1)^{n} C_{n}\left(2 r-\frac{1}{2}\right)^{2 n}}{2(n !)}=\sqrt[2 r-1]{-G_{2 r}(2 r-1) \cdot(2 r) !}, \end{aligned}\]\\ \noindent where each $B_{0}, B_{2}, B_{4}, \ldots$ can be computed in the same way as in Eq.(83), since Eq.(83) can be rewritten through the equivalences studied before, $C_{n}=2(n !)(-1)^{n} a_{2 n}=\frac{16(-1)^{n}(n !)\, 2^{2 n}}{(2n)!}\widehat{b_{n}}$, in Eq.(38) and Eq.(39). Such expressions are only possible if the entire function $\xi$, evaluated at complex values of the form $\frac{1}{2}+i x$, is consistent with the phenomenon of hyperbolicity of the Jensen polynomials, i.e. with the validity of the Riemann hypothesis. The formulation of the Bernoulli numbers of even index, the calculation of the even derivatives of the Riemann Xi function at the special point $\frac{1}{2}+i(0)$, and the three novel representation formulas for the Euler-Mascheroni constant are important consequences supporting the validity of the Riemann hypothesis; they are presented in this article to point out the extreme difficulty of adjusting the entire function in any way that violates the Riemann hypothesis itself. Fitting the Hadamard product and the Taylor series for $\xi\left(\frac{1}{2}+i x\right)$ in any other way would hardly be feasible against the evidence provided by the data of Table 1 and Table 2 and the formulations derived from them. As a result, Eq.(46), $\gamma=\log (4 \pi)-2+$ $\left(2^{7}\right) \sum\limits_{n=1}^{\infty} \frac{n \widehat{b_{n}}}{(2 n) !}$, is consistent with the known summation formula over the Bernoulli numbers [22], Eq.(90), which approximates the Euler-Mascheroni constant as follows\\ \[\gamma=\frac{1}{2}+\sum_{n=1}^{\infty} \frac{B_{2 n}}{2 n}.\tag{90}\]\\ The unexpected connection between the Euler-Mascheroni constant and the Turán moments is not a surprise once it is noticed that a well-known formula, Eq.(90), already allows an expansion over the Bernoulli numbers. Of course, other infinite sets of relevant numbers, such as the Gregory coefficients [23] and the Cauchy numbers of the second kind [24], can describe special summations which converge to the Euler-Mascheroni constant.
Now, the evidence shows that three new sets of coefficients, $a_{2 n}$, $\widehat{b_{n}}$ and $C_{n}$, can contribute to representing this important constant, as inferred in the current article. Moreover, equating the expressions for $\gamma$ in Eq.(46) and Eq.(90) leads to the possible formulation\\ \[\gamma=\frac{1}{2}+\sum_{n=1}^{\infty} \frac{B_{2 n}}{2 n}=\log (4 \pi)-2+\left(2^{7}\right) \sum_{n=1}^{\infty} \frac{n \widehat{b_{n}}}{(2 n) !},\tag{91}\]\\ which shows that summing the Bernoulli numbers in this way has the same effect as introducing the Turán moments and the number $\pi$ into the other special summation. As a result, many sets of numbers produce relevant summations that represent the same constant.\\\\ The Turán inequalities and the Laguerre inequalities [25] are satisfied as well when the numerical values of the coefficients $C_{n}$ of the Jensen polynomials are examined:\\ \[C_{n}^{2}-C_{n-1} C_{n+1}>0, \quad n \geq 1,\tag{92}\]\\ for example,\\ \[C_{1}^{2}-C_{0} C_{2}>0,\tag{93}\]\\ because, when testing $C_{0}=0.994241556376$, $C_{1}=-2.29719443152\left(10^{-2}\right)$ and $C_{2}=4.93808072283\left(10^{-4}\right)$,\\ \[\left(-2.29719443152\left(10^{-2}\right)\right)^{2}-0.994241556376\left(4.93808072283\left(10^{-4}\right)\right) \approx 3.674\left(10^{-5}\right)>0,\tag{94}\]\\ and for the consecutive values $C_{1}, C_{2}, C_{3}$,\\ \[C_{2}^{2}-C_{1} C_{3}>0,\tag{95}\]\\ \[\left(4.93808072283\left(10^{-4}\right)\right)^{2}-\left(-2.29719443152\left(10^{-2}\right)\right)\left(-9.98826577664\left(10^{-6}\right)\right) \approx 1.4396\left(10^{-8}\right)>0. \tag{96}\]\\ Such inequalities must hold strictly for all the successive coefficients considered, which is the case here because these are the correct coefficients of the Jensen polynomials.\\\\\\ \textbf{Discussion}\\\\ The numerical results presented in this article are evidently good approximations for the coefficients of the Jensen polynomials and the
even Taylor coefficients of the Riemann Xi function, which have been inferred from a solid basis, namely the work of Csordas, Norfolk and Varga mentioned in the references. The formulation of these coefficients has led to the discovery of an interesting formula for the Euler-Mascheroni constant, which is a novel representation. Numerically, the results are a clear indication of the consistency of these numbers, the main evidence being the possibility of representing various values of the Riemann Xi function and, above all, its graphics, i.e., its modulus and its real and imaginary parts. The references in mathematics regarding the Turán moments and their immediate connection with the coefficients of the Jensen polynomials, the Bernoulli numbers and the hyperbolicity of the Jensen polynomials form only a small part of a reduced group of publications on the structure of the Riemann Xi function, and hence of the Riemann zeta function. This is a pity, because few researchers have considered the potential of analysing interesting sets of Taylor coefficients, as seen in this article, and other equivalent numbers that could help to understand the field of number theory in a better way.
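The Turán inequality checks of Eqs.(92)-(96) can be reproduced mechanically from the quoted $C_n$ values; a short sketch (names are illustrative):

```python
# Jensen-polynomial coefficients C_0..C_3 as quoted in Eqs.(93)-(96)
C = [0.994241556376, -2.29719443152e-2, 4.93808072283e-4, -9.98826577664e-6]

def turan(n, c=C):
    """Left-hand side of the Turan inequality Eq.(92): C_n^2 - C_{n-1} C_{n+1}."""
    return c[n] ** 2 - c[n - 1] * c[n + 1]

t1, t2 = turan(1), turan(2)
print(t1, t2)  # both strictly positive, ~3.67e-5 and ~1.44e-8
```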
Regarding possible future directions of this research, the newest ideas are based on the similarity, or parallelism, between two expressions: Eq.(97), for the formulation already seen,\\ \[\frac{\zeta(s)(s-1) \Gamma\left(1+\frac{s}{2}\right)}{8 \pi^{\frac{s}{2}}}-\widehat{b_{0}}=\sum_{n=1}^{\infty} \frac{2^{2 n} \widehat{b_{n}}\left(s-\frac{1}{2}\right)^{2 n}}{(2 n) !},\tag{97}\]\\ and the series form of the hyperbolic cotangent function [26], which involves the Bernoulli numbers as follows\\ \[\left(s-\frac{1}{2}\right) \operatorname{coth}\left(s-\frac{1}{2}\right)-B_{0}=\sum_{n=1}^{\infty} \frac{2^{2 n} B_{2 n}\left(s-\frac{1}{2}\right)^{2 n}}{(2 n) !}.\tag{98}\]\\ From these two equations, it is conjectured that the Bernoulli numbers can enter the representation of a compact formula, still under development in several hypotheses, owing to the tremendous parallelism that these two series present in this context. The only difference between Eq.(97) and Eq.(98) is the use of either the Turán moments $\widehat{b_{n}}$ or the Bernoulli numbers $B_{2 n}$, which suggests a possible connection between the Riemann zeta function $\zeta(s)$ and special functions such as the hyperbolic trigonometric functions.
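Eq.(98) is the classical expansion of $x\coth x$ in terms of the even-index Bernoulli numbers, valid for $|s-\tfrac{1}{2}|<\pi$. A short numerical sketch, with the first five $B_{2n}$ entered as exact rationals, confirms the partial sums:

```python
from fractions import Fraction
from math import cosh, sinh, factorial

# B_2, B_4, ..., B_10 as exact rationals
B_EVEN = [Fraction(1, 6), Fraction(-1, 30), Fraction(1, 42),
          Fraction(-1, 30), Fraction(5, 66)]

def coth_series(x):
    """Partial sum of Eq.(98): x*coth(x) = B_0 + sum_n 2^{2n} B_{2n} x^{2n}/(2n)!."""
    total = 1.0  # the B_0 = 1 term
    for n, b2n in enumerate(B_EVEN, start=1):
        total += 4 ** n * float(b2n) * x ** (2 * n) / factorial(2 * n)
    return total

x = 0.5
exact = x * cosh(x) / sinh(x)
print(exact, coth_series(x))  # agree to roughly 1e-7 already with five terms
```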
This idea is appealing because there is a possible link to the Riesz function [27], again suggesting the use of Eq.(97) and Eq.(98), since the Riesz function has the interesting definition\\ \[R(x)=2 \sum_{n=1}^{\infty} \frac{n^{\bar{n}} x^{n}}{(2 \pi)^{2 n}\left(\frac{B_{2 n}}{2 n}\right)},\tag{99}\]\\ where $n^{\bar{n}}$ denotes the rising factorial power.\\\\ The essence of this function is appreciated when considering $F(x)=\frac{1}{2} \operatorname{Riesz}\left(4 \pi^{2} x\right)$, since its power series is related precisely to the hyperbolic cotangent function, as seen in the expansion\\ \[\left(\frac{x}{2}\right) \operatorname{coth}\left(\frac{x}{2}\right)=\sum_{n=0}^{\infty} c_{n} x^{n}=1+\frac{1}{12} x^{2}-\frac{1}{720} x^{4}+\cdots,\tag{100}\]\\ which lets one define the series form\\ \[F(x)=\frac{1}{2} \operatorname{Riesz}\left(4 \pi^{2} x\right)=\sum_{n=1}^{\infty} \frac{x^{n}}{c_{2 n}(n-1) !}=12 x-720 x^{2}+15120 x^{3}-\cdots,\tag{101}\]\\ which is also written in the references as\\ \[F(x)=\sum_{n=1}^{\infty} \frac{n^{\overline{n+1}} x^{n}}{B_{2 n}}.\tag{102}\]\\ The perspective of this work is to develop a model that expresses the Turán moments, the coefficients of the Jensen polynomials and the even Taylor coefficients strictly in terms of the Bernoulli numbers, or of related sets such as the Gregory coefficients of higher order; this would be a tremendous finding in mathematics, because it would allow the Riemann zeta and Xi functions to be written strictly in terms of the Gregory coefficients of higher orders.
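The coefficients $12$, $-720$, $15120$ appearing in Eq.(101) can be checked exactly from Eq.(102), using $n^{\overline{n+1}}=(2n)!/(n-1)!$; a sketch with exact rational arithmetic (names are illustrative):

```python
from fractions import Fraction
from math import factorial

# B_2, B_4, B_6 as exact rationals
B = {1: Fraction(1, 6), 2: Fraction(-1, 30), 3: Fraction(1, 42)}

# Coefficient of x^n in Eq.(102): n^{rising (n+1)} / B_{2n} = (2n)!/((n-1)! B_{2n})
coeffs = [Fraction(factorial(2 * n), factorial(n - 1)) / B[n] for n in (1, 2, 3)]
print(coeffs)  # the 12, -720, 15120 of Eq.(101)
```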
The initial speculations based on the hyperbolic trigonometric functions, particularly the hyperbolic cotangent, seem to indicate that, at least at some complex points, the hyperbolic cotangent is a candidate for use within a model for the Riemann $\mathrm{Xi}$ and zeta functions.\\\\ A forced equivalence, still under revision for future development, is the idea of equating Eq.(97) and Eq.(98) in order to represent some domains of the Riemann zeta function. In fact, a partial model of a compact analytical definition, heavily based on transcendental functions, introduces a small set of hypothetical parameters, for example $a$ and $b$, which could help to equate the expressions and conveniently cover all the complex domains considered, via\\ \[\frac{\zeta(s)(s-1) \Gamma\left(1+\frac{s}{2}\right)}{{8 \pi^{\frac{s}{2}}}}-\widehat{b_{0}}=\left(a s-\frac{b}{2}\right) \operatorname{coth}\left(a s-\frac{b}{2}\right)-B_{0}=\sum_{n=1}^{\infty} \frac{2^{2 n} B_{2 n}\left(a s-\frac{b}{2}\right)^{2 n}}{(2 n) !}.\tag{103}\]\\ The analysis of expressions of this kind, refined by meticulous theorems, could be the key to resolving the mystery of writing a final closed form for the Riemann Xi and zeta functions. Some tests have shown that transcendental equations could help to determine these parameters, and thereby the form of the Riemann zeta and Xi functions.
This is the future direction of this research.\\\\\\ \textbf{Conclusions}\\\\ The most important conclusion regarding the coefficients of the Taylor series for the Riemann Xi function is the surprising consequence that the convergence of the special summations involving the non-trivial zeros of $\zeta$ appears to define exactly the coefficients of the Jensen polynomials, which supports the idea of the hyperbolicity of this kind of polynomials in the sense of the work of Griffin, Ono and other authors. The current research shows that, after computing the first coefficients of the Jensen polynomials under the assumption that the real part of the non-trivial zeros of the Riemann zeta function is $\frac{1}{2}$, the structure of the Hadamard product of the Riemann Xi function agrees exactly with the Taylor expansion of the Riemann Xi function around $s_{0}=\frac{1}{2}+i \cdot t_{0}$, where $t_{0}$ is the imaginary part of any non-trivial zero of the Riemann zeta function.\\\\ The comparison, never presented before, between the Hadamard product and the Taylor series of the Riemann Xi function is an important equivalence which, according to this article, leads to verifiable consequences when ${s}_{{0}}=\frac{1}{2}+{i} \cdot {t}_{{0}}$, because the formulas depend on this approach. How could the Euler-Mascheroni constant, the Bernoulli numbers, the even derivatives of the Riemann Xi function and the other relationships discovered here be written with such exactness, and be supported by solid numerical results, if the Riemann hypothesis were false? That is the most important conclusion that the readers of this article should take into consideration after examining the complete work presented here. \end{document}
\begin{document} \begin{frontmatter} \title{{\large {Distribution-on-Distribution Regression via Optimal Transport Maps}}} \runtitle{Distribution to Distribution Regression via Optimal Transport Maps} \begin{aug} \author{\fnms{Laya} \snm{Ghodrati}\ead[label=e1]{[email protected]}} \and \author{\fnms{Victor M.} \snm{Panaretos}\ead[label=e2]{[email protected]}} \runauthor{L. Ghodrati \& V.M. Panaretos} \affiliation{Ecole Polytechnique F\'ed\'erale de Lausanne} \address{Institut de Math\'ematiques\\ Ecole Polytechnique F\'ed\'erale de Lausanne\\ \printead{e1}, \printead*{e2}} \end{aug} \begin{abstract} We present a framework for performing regression when both covariate and response are probability distributions on a compact interval. Our regression model is based on the theory of optimal transportation and links the conditional Fr\'echet mean of the response to the covariate via an optimal transport map. We define a Fr\'echet-least-squares estimator of this regression map, and establish its consistency and rate of convergence to the true map, under both full and partial observation of the regression pairs. Computation of the estimator is shown to reduce to a standard convex optimisation problem, and thus our regression model can be implemented with ease. We illustrate our methodology using real and simulated data. \end{abstract} \begin{keyword}[class=AMS] \kwd[Primary ]{62M, 15A99} \kwd[; secondary ]{62M15, 60G17} \end{keyword} \begin{keyword} \kwd{functional regression} \kwd{random measure} \kwd{optimal transport} \kwd{Wasserstein metric} \end{keyword} \end{frontmatter} {{ \footnotesize \tableofcontents }} \section{Introduction} Functional data analysis \citep{hsing2015theoretical} considers statistical inference problems whose sample and parameter spaces constitute function spaces.
This framework encompasses data that are best viewed as realisations of random processes, and presents challenges arising from the infinite dimensionality of the function spaces, typically taken to be separable Hilbert spaces. On the other hand, non-Euclidean statistics \citep{patrangenaru2015nonparametric} treats inference problems whose sample and parameter spaces are finite dimensional manifolds. Such problems present with a different set of challenges, linked with the non-linearity of the corresponding spaces, which often arises due to non-linear constraints satisfied by the data/parameters. When the data/parameters of interest are, in fact, probability distributions, one has a problem that is simultaneously functional and non-Euclidean: on the one hand the data can be seen as random processes, and on the other they satisfy non-linear constraints, such as positivity and integral constraints. Thus, the functional data analysis of probability distributions features interesting challenges stemming from this dual nature of the ambient space, for example the finite measurement of intrinsically infinite dimensional objects, and the lack of a linear structure which is crucial to basic statistical operations, such as averaging or, more generally, regression toward a mean. See \citet{petersen-review} for an excellent overview. One approach to dealing with the non-linear nature of probability distributions is to apply a suitable transformation and map the problem back to a space with a linear structure \citep{kneip2001inference,delicado2011dimensionality,petersen2016functional,kokoszka2019forecasting}. A seemingly more natural approach is to embrace the intrinsic non-linearity, and to analyse the data in their native space, equipped with a canonical metric structure. 
In the case of probability distributions, the Wasserstein metric \citep{panaretos2019statistical,panaretos2020invitation} has been exhibited as a canonical choice \citep{panaretos2016amplitude}, primarily because it captures deformations, which are typically the main form of variation for probability distributions. The case of inferring the Fr\'echet mean of a collection of random elements in the Wasserstein space is by now well understood \citep{panaretos2016amplitude,bigot2018upper,zemel2019frechet,gouic2019fast}. The deep links to convexity and the tangent space structure of the Wasserstein space play an important role in motivating and deriving the analysis of this case. The next step is to understand the notion of regression of one probability distribution on another. The first to do so were \citet{chen2020wasserstein}, and, independently, \citet{zhang2020wasserstein}, the latter paper focussing on autoregression. They used the tangent space structure to define a regression operation: using the log transform, the regressor and response are lifted to suitable tangent spaces, where a (linear) regression model is defined in a more familiar Hilbertian setting \citep{morris2015functional,hall2007methodology}. This allows the authors to use the well-developed toolbox of functional regression, and derive appropriate asymptotic theory. In this paper, we propose an alternative notion of distribution-on-distribution regression, following a different path. Rather than taking a geometrical approach, via the tangent bundle structure, we follow a shape-constraint approach, namely exploiting convexity. Our model is defined directly at the level of the probability distributions, and stipulates that the response distributions are related to the covariate distributions by means of an optimal transport map, and further deformational noise. 
A key advantage of this approach is its clean and transparent interpretation, since the regression operator can be interpreted \emph{pointwise} at the level of the original distributions, and its effect consists in mass transportation, or equivalently, quantile re-arrangement. Further to this, the approach requires minimal regularity conditions, and does not suffer from ill-posedness issues as inverse problems do. Finally, its computational implementation reduces to a standard convex optimisation problem. The usefulness of the approach is exhibited when revisiting the analysis of the mortality data of \citet{chen2020wasserstein}, where the approach is seen to lead to similar (if more expansive) qualitative conclusions, but with the advantage of an arguably improved interpretability. \section{Background on Optimal Transport and Some Notation}{\label{Wasserstein}} In order to define our regression model, we now provide some minimal background on optimal transport and Wasserstein distances, including some relevant notation. For more background see, e.g. \cite{panaretos2020invitation}. Let $\Omega\subseteq\mathbb{R}$ and $\mathcal{W}_2(\Omega)$ be the set of Borel probability measures on $\Omega$, with finite second moment. The 2-Wasserstein distance $W$ between $\mu,\nu \in \mathcal{W}_2(\Omega)$ is defined by $$d^2_{\mathcal{W}}(\nu,\mu):=\underset{\gamma \in \Gamma(\nu,\mu)}{\inf} \int_{\Omega} |x-y|^2 \diff \gamma(x,y),$$ where $\Gamma(\nu,\mu)$ is the set of couplings of $\mu$ and $\nu$, i.e. the set of Borel probability measures on $\Omega \times \Omega$ with marginals $\nu$ and $\mu$. It can be shown that $\mathcal{W}_2(\Omega)$ endowed with $d^2_{\mathcal{W}}$ is a metric space, which we simply call the Wasserstein space of distributions. A coupling $\gamma$ is deterministic if it is the joint distribution of $\{X, T(X)\}$ for some deterministic map $T: \Omega \to \Omega$, called an optimal transport map. 
In such a case, we write $\nu=T\#\mu$ and say that $T$ pushes $\mu$ forward to $\nu$, i.e. $\nu(B)=\mu\{T^{-1}(B)\}$ for any Borel set $B$. Occasionally we denote this as $T_{\mu\to\nu}$, for clarity. When the source distribution $\mu$ is absolutely continuous with respect to Lebesgue measure, the optimal coupling is induced by such a map $T$, then called the optimal transport map. In the one-dimensional case, the map $T$ admits the explicit expression $T=F^{-1}_{\nu}\circ F_\mu$, where $F^{-1}_\nu$ is the quantile function of $\nu$, and $F_\mu$ is the cumulative distribution function of $\mu$. In addition, \begin{equation}\label{w1d} d_{\mathcal{W}}^2(\mu,\nu)=\int_0^1\big|F^{-1}_{\mu}(p) - F^{-1}_{\nu}(p)\big|^2 \diff p. \end{equation} A notion of average of probability distributions can be defined via the Fr\'echet mean with respect to the Wasserstein metric. Namely, let $\Lambda$ be a random measure on $\mathcal{W}_2(\Omega)$ with law $P$. A Fr\'echet mean of $\Lambda$ is a minimizer of the Fr\'echet functional $$F(b)=\frac{1}{2}E d^2_{\mathcal{W}}(b,\Lambda)=\frac{1}{2}\int_{\mathcal{W}_2(\Omega)} d^2_{\mathcal{W}}(b,\lambda)\diff P(\lambda),\quad b \in \mathcal{W}_2(\Omega).$$ The Fr\'echet functional can thus serve as a basis to define a sum-of-squares functional in the context of regression, and this will be done in the next section. We will occasionally use the fact that $\mathcal{W}_2(\mathbb{R})$ is flat in that for $\mu,\nu, b \in \mathcal{W}_2(\mathbb{R})$ it holds that \begin{equation}\label{PC} d_{\mathcal{W}}(\mu,\nu)=\norm{T_{b \to \nu} - T_{b \to \mu}}_{L^2(b)}, \end{equation} whenever the optimal maps involved are well-defined. Finally, we will use the notation $a \lesssim b$ to indicate that there exists a positive constant $C$ for which $a\leq C b$ holds. The support of a function $f$ will be denoted by $\text{supp}(f)$, and for a measure $\mu$, we denote the $L^p$ norm of a function $f:[0,1]\rightarrow \mathbb{R}$ with respect to $\mu$ by $\norm{f}_{L^p(\mu)}$.
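On the real line, Eq.~\eqref{w1d} makes $d_{\mathcal{W}}$ computable from quantile functions alone; for two empirical measures with the same number of equally weighted atoms, the quantile functions are step functions and the integral reduces to matching sorted samples. A minimal sketch (illustrative, not part of the paper's implementation):

```python
import numpy as np

def wasserstein2_1d(x, y):
    """2-Wasserstein distance between two empirical measures on R with
    equal numbers of equally weighted atoms: sort both samples (i.e.
    evaluate the empirical quantile functions on a common grid) and
    average the squared differences, as in the quantile formula."""
    xs, ys = np.sort(np.asarray(x, float)), np.sort(np.asarray(y, float))
    assert xs.shape == ys.shape
    return float(np.sqrt(np.mean((xs - ys) ** 2)))

# Shifting a sample by a constant c moves it by exactly c in W_2
print(wasserstein2_1d([1.0, 2.0, 3.0], [2.0, 3.0, 4.0]))  # 1.0
```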
\section{Distribution-on-Distribution Regression}{\label{Distribution-on-Distribution Regression}} \subsection{Fr\'echet Functionals and Regression Operators}\label{regression_operators} Let $(\mu,\nu)$ be a pair of random elements in $\mathcal{W}_2(\Omega) \times \mathcal{W}_2(\Omega)$ with joint distribution $P$. Then, similar to a standard nonparametric regression model, we can define a regression operator $\Gamma: \mathcal{W}_2(\Omega) \rightarrow \mathcal{W}_2(\Omega)$ as the minimizer of the conditional Fr\'echet functional, viewed as a function of $\mu$, $$ \argmin_b \int_{\mathcal{W}_2(\Omega)} d^2_{\mathcal{W}}(b,\nu)\diff P(\nu\,|\,\mu)=\Gamma(\mu) $$ assuming that for any $\mu$, the Fr\'echet mean of the conditional law $P(\cdot \,|\, \mu)$ of $\nu$ given $\mu$ is unique, which can be enforced by means of regularity assumptions on the pair $(\mu,\nu)$. The difference between the above formulation and the standard regression formulation is that we have replaced the notion of expectation with a Wasserstein-Fr\'echet mean, an approach termed ``Fr\'echet regression'' by \citet{petersen2019frechet}. Postulating a specific form for the regression operator $\Gamma$ amounts to defining a certain type of regression model. If $\Gamma$ is left unconstrained, except for possessing some degree of regularity, then we would speak of a nonparametric regression model. However, assumptions on $\Gamma$ are needed to ensure its identifiability, and simply assuming it is regular will not suffice in this more general context. For instance, the approach of \citet{chen2020wasserstein} and \citet{zhang2020wasserstein} consists in constraining $\Gamma$ to be in a certain sense linear, in that it can be represented as a linear operator at the level of the tangent bundle. Identifiability, and indeed fitting and asymptotic theory, can then be derived by appealing to the inclusion of the tangent spaces in Hilbert spaces.
Here we impose a different constraint on $\Gamma$, and consequently define a different notion of regression. Namely, we impose a \emph{shape constraint}, by assuming that $\Gamma(\mu)=T\#\mu$, where $T$ is an increasing map. This is developed in the next section, which postulates a regression model on the pair $(\mu,\nu)$ that guarantees the uniqueness of the conditional Fr\'echet mean $\Gamma(\mu)$ of $\nu$ given $\mu$, and imposes mild conditions ensuring the identifiability of $\Gamma$. \subsection{The Regression Model and The Fr\'echet-Least-Squares Estimator} {\label{The Model and The Estimator}} Henceforth, we will take the domain $\Omega$ to be a compact interval of $\mathbb{R}$. Let $\{(\mu_i,\nu_i)\}_{i=1}^{N}$ be an independent collection of regressor/response pairs in $\mathcal{W}_2(\Omega)\times \mathcal{W}_2(\Omega)$. Motivated by the discussion in the previous paragraph, we define the regression model \begin{equation}{\label{model}} \nu_{i}=T_{\epsilon_i}\#(T_0\#\mu_i), \quad i=1,\dots,N, \end{equation} where $T_0:\Omega \to \mathbb{R}$ is an unknown optimal map and $\{T_{\epsilon_i}\}_{i=1}^{N}$ is a collection of independent and identically distributed random optimal maps satisfying $E\{T_{\epsilon_i}(x)\}=x$ almost everywhere on $\Omega$. These represent the ``noise" in our model. The regression task will be to estimate the unknown $T_0$ from the observations $\{\mu_i,\nu_i\}_{i=1}^N$. To be able to do so, we need to ensure that $T_0$ is identifiable, and for this we now introduce some conditions. In the spirit of Section \ref{regression_operators}, let $P$ be the probability law induced on $\mathcal{W}_2(\Omega) \times \mathcal{W}_2(\Omega)$ by model \eqref{model}. We denote by $P_{M}$ and $P_N$ the marginal distributions induced on the typical regressor $\mu$ and the typical response $\nu$, respectively. \begin{assumption}{\label{absCont}} Let $\mu$ be a measure in the support of $P_M$.
Then $\mu$ is absolutely continuous with respect to Lebesgue measure on $\Omega$. \end{assumption} Denote by $Q$ the linear average of $P_M$, i.e. $Q(A)=\int_{\mathcal{W}_2(\Omega)} \mu(A)\diff P_M(\mu)$. We also denote by $Q_N$ the empirical counterpart of $Q$, namely $Q_N(A)=\frac{1}{N} \sum_{i=1}^N \mu_i(A)$, where $\{\mu_i\}$ are independent random measures with law $P_M$. Note that all $\mu$ in the support of $P_M$ are dominated by the measure $Q$, i.e. $\mu \ll Q$ almost surely. Define the parameter set of optimal transport maps $\mathcal{T}$ as: $$\mathcal{T}:=\{T :\Omega \to \Omega: 0\leq T'(x) {<\infty} \text{ for } Q \text{-almost every } x \in \Omega \}.$$ Implicit in the definition of $\mathcal{T}$ is that its elements are assumed differentiable $Q$-a.e. In the presence of Assumption \ref{absCont}, the $Q$-a.e. existence of $T'$ is automatically guaranteed, since Lebesgue's theorem on the differentiation of monotone functions states that a monotone function has a derivative Lebesgue-almost everywhere in the interior of $\Omega$, and Assumption \ref{absCont} implies that $Q$ is dominated by Lebesgue measure.\\ \begin{comment} Thus, in the presence of Assumption \ref{absCont}, the only restriction on the set of maps $\mathcal{T}$ further to monotonicity is the boundedness of the derivative. \end{comment} \noindent We will also assume: \begin{assumption}{\label{assumpMaps}} The model (\ref{model}) is induced by a map $T_0$ and random maps $T_{\epsilon}$ that are of class $\mathcal{T}$. \end{assumption} \noindent With these assumptions in place, we can now establish identifiability: \begin{theorem}{\label{identifiability}} Assume that the law $P$ induced by model (\ref{model}) satisfies Assumptions \ref{absCont} and \ref{assumpMaps}. Then the regression operator $\Gamma(\mu)=T_0\#\mu$ in model \eqref{model} is identifiable over the parameter class $\mathcal{T}$ in the $L^2(Q)$ topology.
Specifically, for any $T\in\mathcal{T}$ such that $\|T-T_0\|_{L^2(Q)}>0$, it holds that $$M(T)>M(T_0),$$ where \begin{equation}{\label{population-functional}} M(T):= \frac{1}{2}\int_{\mathcal{W}_2(\Omega)\times \mathcal{W}_2(\Omega)} d^2_{\mathcal{W}}(T\# \mu,\nu) \diff P(\mu,\nu). \end{equation} \end{theorem} \begin{remark}[Identifiability $Q$-almost everywhere] Theorem \ref{identifiability} establishes the identifiability of $T_0$ up to $Q$-null sets, with minimal assumptions on the input measures $\mu$. Consequently, if the random covariate measure $\mu$ is almost surely supported on a strict subset $\Omega_0\subset \Omega$, we can identify $T_0$ on $\Omega_0$ (which coincides with the support of $Q$) but not on $\Omega\setminus\Omega_0$. Of course, if the measure $Q$ is equivalent to Lebesgue measure, in the sense of mutual absolute continuity, identifiability will also hold Lebesgue almost everywhere on $\Omega$. Additional conditions on the law of the random covariate measure $\mu$ can yield this equivalence. A simple condition is to require $\int_{\mathcal{W}_2(\Omega)} \inf_{x\in\Omega} f_\mu(x) \diff P_M(\mu)>0$, yielding that $f_Q(x)>0$, where $f_\mu$ and $f_Q$ are the Lebesgue densities of the measures $\mu$ and $Q$. However, this condition implies that $\textrm{supp}(\mu)=\Omega$ with positive probability, which can be restrictive as we would like our model to encompass situations where none of the covariate measures are fully supported on $\Omega$. A considerably weaker condition that guarantees the equivalence of $Q$ to Lebesgue measure is to require the existence of a cover $\{E_m\}_{m\geq 1}$ of $\Omega$ such that $P_M\{E_m\subseteq\mathrm{supp}(f_\mu)\}>0$ for all $m$ -- intuitively, this enables different covariate measures to give information on $T_0$ on different subsets of $\Omega$, but requires that they collectively provide information on all of $\Omega$.
As an example let $\Omega=[0,1]$ and let $\mu$ be defined as the normalised Lebesgue measure on $S=[U,U+1/3] \mod 1$, where $U$ is a uniform random variable on $[0,1]$. In this case none of the realisations of $\mu$ are supported on $\Omega$, but the ``cover condition'' is satisfied. \end{remark} Further to identifiability, the theorem gives a way to estimate $T_0$ by means of $M$-estimation. We can define an estimator $\hat{T}_N$ as the minimizer of the sample counterpart of $M$, \begin{equation}{\label{functional}} M_N(T):= \frac{1}{2N} \sum_{i=1}^N d^2_{\mathcal{W}}(T\# \mu_i,\nu_i), \quad \quad \hat{T}_N:=\arg\min_{T \in \mathcal{T}} M_N(T), \end{equation} where $(\mu_i,\nu_i)$ are independent samples from $P$ for $i=1,\dots,N$. In effect, this is a ``Fr\'echet-least-squares'' estimator. The existence and uniqueness of a minimizer is not a priori obvious, but we establish both in Section \ref{Existence and Uniqueness of the Estimator}. \begin{remark}[Pure Intercept Model] When all the input measures are equal, $\mu_1=\cdots=\mu_N$, our regression model reduces to a ``pure intercept model'', which is equivalent to the problem of estimating a Fr\'echet mean. To see this, let $\mu_0$ be a fixed measure. From the assumption that $E\{T_{\epsilon_i}(x)\}=x$ a.e., one can deduce that the conditional Fr\'echet mean of the measure $\nu$, given the measure $\mu_0$, is equal to $\nu_0=T_0\# \mu_0$. Estimation of $T_0$ is then equivalent to estimation of the Fr\'echet mean $\nu_0$ of the output measures, since $T_0=T_{\mu_0\to\nu_0}=F_{\nu_0}^{-1}\circ F_{\mu_0}$. \end{remark} \subsection{Interpretation and Comparison} It was argued in the introduction that the proposed regression model has the advantage of being easily interpretable, and now we elaborate on this point.
The fact that the regressor operator $\Gamma(\mu)$ takes the form \begin{equation}\label{our-regressor} \Gamma(\mu)=T_0\# \mu, \end{equation} where $T_0:\Omega\to\Omega$ is a monotone map, has a simple interpretation in terms of mass transport: the effect of the regression, at the level of the conditional Fr\'echet mean, is to transport the probability mass assigned by $\mu$ to a subinterval $(a,b)\subset \Omega$ onto the transformed subinterval $(T_0(a),T_0(b))$. Therefore, the model can be directly interpreted at the level of the quantity that the input/output measures are modelling. In particular, the model can be interpreted at the level of quantiles. Since $$F^{-1}_{T_0\#\mu}(\alpha)=(T_0\circ F^{-1}_{\mu})(\alpha)=T_0\{F^{-1}_{\mu}(\alpha)\},\qquad \alpha\in(0,1),$$ we can see that the mean effect of the regression is to move the $\alpha$-quantile of $\mu$, say $q_\alpha$, to the new location $T_0(q_\alpha)$. Each response distribution $\nu_i$ will then further deviate from its conditional Fr\'echet mean $T_0\#\mu_i$ by means of a random monotone ``error" map $T_{\epsilon}:\Omega\to\Omega$ whose expectation is the identity map, $$F^{-1}_{\nu_i}(\alpha)=T_{\epsilon_i}\big[T_0\{F^{-1}_{\mu}(\alpha)\}\big],\qquad \alpha\in(0,1).$$ This highlights the analogy with a classical regression setup, except that the addition operation is replaced by the composition operation at the level of quantiles, or equivalently, by the push-forward operation at the level of distributions.
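As a quick numerical illustration of the quantile identity $F^{-1}_{T_0\#\mu}=T_0\circ F^{-1}_{\mu}$ (a sketch of ours, not part of the formal development; the map $T_0(x)=x^2$, the sample size, and the variable names are arbitrary choices for illustration):

```python
import numpy as np

# Check that the alpha-quantile of the push-forward T0 # mu equals
# T0 applied to the alpha-quantile of mu, for a monotone T0 on [0, 1].
rng = np.random.default_rng(1)
T0 = lambda x: x**2                  # a monotone map on [0, 1] (illustrative choice)
x = rng.uniform(0, 1, size=200_000)  # sample from mu = U[0, 1]
y = T0(x)                            # sample from T0 # mu

alpha = 0.3
q_push = np.quantile(y, alpha)       # alpha-quantile of the push-forward
q_comp = T0(np.quantile(x, alpha))   # T0 applied to the alpha-quantile of mu
# both should be close to T0(0.3) = 0.09 up to sampling error
```

The agreement holds because a monotone map sends quantiles to quantiles, which is exactly the mass-transport interpretation described above.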
In particular, the assumption that $E\{T_{\epsilon_i}(x)\}=x$ is directly analogous to the classical assumption that the errors have zero mean: one can directly see that $E\{T_{\epsilon_i}(x)\}=x$ for almost all $x\in\Omega$ implies that $$E\{F^{-1}_{\nu_i}(\alpha)\}=E\Big(T_{\epsilon_i}\big [ T_0\{F^{-1}_{\mu}(\alpha)\}\big]\Big)=T_0\{F^{-1}_{\mu}(\alpha)\},\qquad \alpha\in(0,1).$$ Assuming that we have obtained an estimator $\hat T_N$ of the regression map $T_0$ based on $N$ regressor/response pairs, we can then define the \emph{fitted distributions}, $$\hat{\nu}_i=\hat{T}_N\# \mu_i.$$ We can also define the $i$th \emph{residual map} $T_{e_i}:\Omega\to\Omega$ as the optimal transport map $T_{e_i}=T_{\hat\nu_i\to\nu_i}$ that pushes forward the fitted value $\hat\nu_i$ to the observed response $\nu_i$. The residual maps can be plotted in a ``residual plot" and contrasted to the identity map, by analogy with the classical regression case. This can help identify outlying observations, and also help to appreciate in what manner the fitted values differ from the observed values. In particular, it can reveal in which regions of the support of the measures the model provides a good fit, and where less so. It can also serve to identify clusters of observations whose residuals are similar, suggesting the potential presence of a latent indicator variable, i.e. that separate regressions ought to be fit to different groups of observations. Finally, the residual plot can serve as a diagnostic tool for the validity of the model. Since the residual map $T_{e_i}$ can be seen as a proxy for the latent error map $T_{\epsilon_i}$, deviations of the average of the residual maps from the identity can serve as a means to diagnose departures from the assumed model. Note that, contrary to classical regression, where the residuals sum to zero by construction, the residual maps $T_{e_i}$ are \emph{not} constrained to have mean equal to the identity.
By comparison, \citet{chen2020wasserstein} introduce (linear) regression in Wasserstein space by means of a geometric approach, that is in a sense a linear model between tangent spaces. Namely, for $\bar\mu$ and $\bar\nu$, the Fr\'echet means of the regressor and response measures, they postulate a regressor operator of the form \begin{equation}\label{muller-regressor} \Gamma(\mu)=\big\{\mathcal{B}(T_{\bar{\mu}\to \mu}-{I})+I\big\}\# \bar{\nu}, \end{equation} where $I(x)=x$ is the identity map on $\Omega$, and $\mathcal{B}: L^2(\bar{\mu})\to L^2(\bar{\nu})$ is a bounded linear operator satisfying certain assumptions, so that the terms involved are well-defined. Again, linearity guarantees identifiability. The expression appears convoluted, but the geometrical interpretation is simple: $T_{\bar{\mu}\to \mu}-I$ represents the image of $\mu$ under the log map at $\bar{\mu}$ (see Section 2.3 of \citet{panaretos2020invitation}). Equivalently, $T_{\bar{\mu}\to \mu}-I$ is the lifting of $\mu$ to the tangent space $\mathrm{Tan}_{\bar{\mu}}\{\mathcal{W}_2(\Omega)\}\subset L^2(\bar{\mu})$ at $\bar{\mu}$. Once the regressor $\mu$ is lifted onto $\mathrm{Tan}_{\bar{\mu}}\{\mathcal{W}_2(\Omega)\}\subset L^2(\bar{\mu})$, the action of the regression operator is to map it to its image in $L^2(\bar\nu)$ via the bounded linear operator $\mathcal{B}:L^2(\bar{\mu})\to L^2(\bar{\nu})$, as in a standard functional linear model. The final step is to push forward $\bar{\nu}$ by this image plus the identity, i.e. $\mathcal{B}(T_{\bar{\mu}\to \mu}-{I})+I$, which retracts back onto $\mathcal{W}_2(\Omega)$ and yields a measure (if $\mathcal{B}(T_{\bar{\mu}\to \mu}-{I})+I$ is a monotone map, then this is equivalent to exponentiation, see Section 2.3 of \citet{panaretos2020invitation}).
The model is most easily interpretable on the tangent space, where it states that the expected lifting of the response $\nu_i$ at $\bar{\nu}$ is related to the lifting of the regressor $\mu_i$ at $\bar{\mu}$ by means of the linear operator $\mathcal{B}$. Similarly, fitted values are defined on the tangent space, and then can be retracted by the same push-forward operation. The two approaches do not directly compare, and neither captures the other as a special case. Similarly, there is no reason to a priori expect that one model would typically outperform the other in terms of fit, and one can expect this to depend on the specific data set at hand. Thus, our method should be seen as an alternative rather than an attempt at an improved or more general version of regression. An apparent advantage of the regressor function \eqref{our-regressor}, however, is an arguably easier and more direct interpretation of the regression effect, directly at the level of regressor/response, through a monotone re-arrangement of probability mass, as discussed above. Indeed this allows a direct point-wise interpretation of the regression effect. The regressor \eqref{muller-regressor} on the other hand allows for a traditional (functional) regression interpretation via the linear operator $\mathcal{B}$, albeit acting on the logarithms of regressor/response, which makes it harder to interpret the regression effect at the level of the original measures, since there are two transformations involved, one non-linear and one linear. Similar points can be made with regard to the residuals and residual plots. Another potential advantage is at the level of regularity conditions imposed on $\Gamma$ for the purposes of theory. Equation \eqref{muller-regressor} leads to an inverse problem on the tangent space, as is standard with functional linear models, and thus requires more delicate technical assumptions on the problem, in addition to regularisation.
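In the one-dimensional setting, the lifting $T_{\bar{\mu}\to \mu}-I$ underlying \eqref{muller-regressor} is explicit, since $T_{\bar{\mu}\to \mu}=F_{\mu}^{-1}\circ F_{\bar{\mu}}$, so along the quantiles of $\bar\mu$ it reads $F_{\mu}^{-1}(p)-F_{\bar{\mu}}^{-1}(p)$. A minimal sketch of ours (the closed-form quantile functions for $\bar\mu=U[0,1]$ and $\mu=U[0,1/2]$ are illustrative assumptions, not choices made in \citet{chen2020wasserstein}):

```python
import numpy as np

# Log map of mu at mu_bar, evaluated along the quantiles of mu_bar:
# (T_{mu_bar -> mu} - I) o F_{mu_bar}^{-1}(p) = F_mu^{-1}(p) - F_{mu_bar}^{-1}(p).
p = np.linspace(0.01, 0.99, 99)

F_bar_inv = lambda p: p        # quantile function of mu_bar = U[0, 1]
F_mu_inv = lambda p: 0.5 * p   # quantile function of mu = U[0, 1/2]

x = F_bar_inv(p)                   # quantile locations of mu_bar
lift = F_mu_inv(p) - F_bar_inv(p)  # tangent-space element representing mu
# here T_{mu_bar -> mu}(x) = x/2, so the lift should equal -x/2
```

The lift of $\bar\mu$ itself is the zero function, consistent with $\bar\mu$ being the base point of the tangent space.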
By contrast, the shape-constrained approach \eqref{our-regressor} only requires monotonicity of the regression map $T_0$. It also avoids the instabilities of an inverse problem. The utility of our model is illustrated in Section \ref{mortality_data}, which considers an example where the age-at-death distribution $\nu_i$ for country $i$ in 2013 serves as a response distribution, and the age-at-death distribution $\mu_i$ of the same country in 1983 serves as the regressor. Interestingly, it leads to similar fits and qualitative conclusions as the analysis of the same data by \citet{chen2020wasserstein}, while exhibiting a clean and more expansive interpretation. Indeed, our definition of residual maps helps identify effects related to changes in infant mortality not easily detectable when looking only at the fitted distributions, and to identify an interesting clustering of observations. See Section \ref{mortality_data} for more details. \subsection{Existence and Uniqueness of the Estimator} {\label{Existence and Uniqueness of the Estimator}} In this section, we establish the existence and uniqueness of the estimator $\hat{T}_N$. To show the existence, we use a variant of the Weierstrass theorem, namely \citet[Thm 7.3.6]{kurdila2006convex}, stated for convenience as Theorem \ref{theorem for existence} in the Appendix. This requires establishing the convexity and Gateaux differentiability of the functional $M_N$, and this we do in the next lemma: \begin{lemma}[Strict Convexity and Differentiability]{\label{functional-convexity-derivative}} Let $\mathcal{T}$ be the parameter set and suppose we have $N$ independent observations $(\mu_i,\nu_i)$ that are realizations of $P$. Both the empirical functional $M_N(T)$ and the population functional $M(T)$ are strictly convex with respect to $T \in \mathcal{T}$.
Moreover, the functionals $M$ and $M_N$ are Gateaux-differentiable on the set of optimal maps in $\mathcal{T}$ with respect to the $L^2(Q)$ and $L^2(Q_N)$ distances, respectively. The derivative of $M$ in the direction $\eta\in L^2(Q)$ is: \begin{equation} D_\eta M(T)=\int\int_{\Omega} \eta(x)\{T(x)-T_{\mu,\nu}(x)\} \diff \mu(x)\diff P(\mu,\nu), \end{equation} and the derivative of $M_N$ in the direction $\eta\in L^2(Q_N)$ is \begin{equation} D_\eta M_N(T)=\frac{1}{N}\sum_{i=1}^N \int_{\Omega} \eta(x)\{T(x)-T_{\mu_i,\nu_i}(x)\} \diff \mu_i(x), \end{equation} where $T_{\mu,\nu}$ is the optimal map from $\mu$ to $\nu$. \end{lemma} \noindent Since $\mathcal{T}$ is a convex, closed, and bounded subset of $L^2(Q)$ functions, we may now apply the Weierstrass theorem cited above to conclude: \begin{proposition}[Existence and Uniqueness of the Estimator]{\label{Unique-minimizer}} There exists a unique solution $\hat{T}_N\in\mathcal{T}$ to the Fr\'echet sum-of-squares minimization problem \eqref{functional}, with uniqueness being in the $L^2(Q_N)$ sense. \end{proposition} \subsection{Computation}{\label{computation}} \noindent Since the domain $\Omega$ is one-dimensional, we have that $$d^2_{\mathcal{W}}(\nu,\mu)=\int_{0}^1 \big|F^{-1}_\mu(p)-F_\nu^{-1}(p)\big|^2 \diff{p}.$$ Furthermore, since the regressors $\mu_i$ are assumed absolutely continuous (Assumption \ref{absCont}), we can always write $\nu_i=T_{\mu_i \to \nu_i}\#\mu_i$ for an optimal map $T_{\mu_i \to \nu_i}$.
We can therefore manipulate the Fr\'echet sum-of-squares and use a Riemann approximation to write \begin{align} \sum_{i=1}^N d^2_{\mathcal{W}}(T\#\mu_i,\nu_i)=\sum_{i=1}^N \norm{T\circ F^{-1}_{\mu_i} - F^{-1}_{\nu_i}}^2_{L^2} &=\sum_{i=1}^N \int_0^1 \big|T\circ F^{-1}_{\mu_i}(p)-F^{-1}_{\nu_i}(p)\big|^2 \diff p \nonumber \\ &= \sum_{i=1}^N \int_0^1 \big|T\circ F^{-1}_{\mu_i}(p)-T_{\mu_i \to \nu_i} \circ F^{-1}_{\mu_i}(p)\big|^2 \diff p \nonumber \\ &=\sum_{i=1}^N \int_{\Omega} \big|T(x)-T_{\mu_i \to \nu_i}(x)\big|^2 \diff \mu_i (x) \nonumber \\ &\approx \sum_{i=1}^N \sum_{j=1}^m \big|T(x_j)-T_{\mu_i \to \nu_i}(x_j)\big|^2 \mu_i(I_j), \label{Riemann} \end{align} for $m$ user-defined nodes $\{x_j\}_{j=1}^{m}$, with $x_j$ belonging to the $j$th interval of a partition $\{I_j\}_{j=1}^{m}$ of $\Omega$. Writing $y_{ij}=T_{\mu_i \to \nu_i}(x_j)$, $w_{ij}=\mu_i(I_j)$ and $z_j=T(x_j)$, we reduce the above approximate minimization of the Fr\'echet sum-of-squares to the solution of the following convex optimisation problem: \begin{equation} \begin{split} & \text{minimise }f(z)=\sum_{i=1}^N \sum_{j=1}^m w_{ij} h(y_{ij},z_j)\\ &\text{subject to } z_1 \leq z_2 \leq \cdots \leq z_m \end{split} \end{equation} where $h(y_{ij},z_j)=|y_{ij}-z_j|^2$. The above problem resembles an isotonic regression problem with repeated measurements, and can be solved via the Pool-Adjacent-Violators Algorithm (PAVA) \citep{mair2009isotone}. \subsection{Consistency and Rate of Convergence}{\label{consistency and rate of convergence}} In this section, we establish the asymptotic properties of the proposed estimators both in the case of the fully observed set of measures $\{\mu_i,\nu_i\}$ and the case where one only indirectly observes input/output distributions through i.i.d. samples from each.
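Before turning to the asymptotics, the isotonic problem of Section \ref{computation} can be made concrete in a few lines of Python. This is a sketch of ours, not the \texttt{isotone} implementation of \citet{mair2009isotone}; the function names and the toy inputs are arbitrary:

```python
import numpy as np

def pava_weighted(y, w):
    """Weighted isotonic regression via Pool-Adjacent-Violators:
    minimise sum_j w_j (y_j - z_j)^2 subject to z_1 <= ... <= z_m."""
    blocks = []  # each block: [weighted sum, total weight, number of points]
    for yj, wj in zip(y, w):
        blocks.append([yj * wj, wj, 1])
        # pool adjacent blocks while their means violate monotonicity
        while len(blocks) > 1 and blocks[-2][0] * blocks[-1][1] > blocks[-1][0] * blocks[-2][1]:
            s, ww, n = blocks.pop()
            blocks[-1][0] += s
            blocks[-1][1] += ww
            blocks[-1][2] += n
    return np.concatenate([np.full(n, s / ww) for s, ww, n in blocks])

# The repeated-measurements objective collapses column-wise:
# sum_i sum_j w_ij (y_ij - z_j)^2 = sum_j W_j (ybar_j - z_j)^2 + const,
# with W_j = sum_i w_ij and ybar_j the w-weighted column mean.
Y = np.array([[0.1, 0.5, 0.4], [0.2, 0.3, 0.6]])  # y_ij = T_{mu_i -> nu_i}(x_j), toy values
W = np.array([[0.3, 0.4, 0.3], [0.2, 0.5, 0.3]])  # w_ij = mu_i(I_j), toy values
Wj = W.sum(axis=0)
ybar = (W * Y).sum(axis=0) / Wj
z = pava_weighted(ybar, Wj)                       # estimated T at the nodes x_j
```

The column-wise reduction means a single run of PAVA on $m$ points suffices, regardless of the number $N$ of observation pairs.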
A natural risk function to measure the quality of the estimator is the Fr\'echet mean squared error: $$R(T):=\mathbb{E}_{\mu \sim P_M} d^2_{\mathcal{W}}(T_0\# \mu,T\# \mu)=\int_{\mathcal{W}_2(\Omega)} d^2_{\mathcal{W}}(T_0\# \mu,T\#\mu) \diff P_M(\mu).$$ Using equation \eqref{PC} we can rewrite the above risk as follows: \begin{equation*} \begin{split} \int d^2_{\mathcal{W}}(T_0\# \mu,T\#\mu) \diff P_M(\mu) &= \int \norm{T_0-T}_{L^2(\mu)}^2 \diff P_M(\mu) \\ &=\int \int_{\Omega} \big|T_0(x)-T(x)\big|^2 \diff \mu(x) \diff P_M(\mu) \\ &= \norm{T_0-T}^2_{L^2(Q)}. \end{split} \end{equation*} Thus, we can obtain consistency and convergence rates in Fr\'echet mean squared error using the criterion $\|{T_0-\hat{T}_N}\|_{L^2(Q)}$, in particular: \begin{theorem}\label{rate of convergence} In the context of model \eqref{model}, suppose that Assumptions \ref{absCont} and \ref{assumpMaps} hold true. Then, the estimator $\hat{T}_N$ defined in \eqref{functional} is a consistent estimator for $T_0$ satisfying \begin{equation} N^{1/3} \norm{\hat{T}_{N}-T_0}_{L^2(Q)}= O_P(1). \end{equation} \end{theorem} In many practical applications, one does not have access to the measures $(\mu_i,\nu_i)$. Instead, one has to make do with observing random samples from each $\mu_i$ and $\nu_i$. In this case, a standard approach is to use smoothed proxies in lieu of the unobservable measures, usually assuming some more regularity. Let $\mu_i^n$ and $\nu_i^n$ be consistent estimators of $\mu_i$ and $\nu_i$ obtained from smoothing a random sample of size $n$ from each respective measure.
Given such estimators, define a new estimator of $T_0$ as \begin{equation}{\label{estimator-sample}} \hat{T}_{n,N}:=\arg\min_{T \in \mathcal{T}_B} \frac{1}{2N}\sum_{i=1}^N d^2_{\mathcal{W}}(T\# \mu_i^n,\nu_i^n), \end{equation} where $$\mathcal{T}_B:=\{T :\Omega \to \Omega: 0\leq T'(x) < B \text{ for } Q \text{-almost every } x \in \Omega \},$$ so that $\mathcal{T} =\cup_{B>0} \mathcal{T}_B$. Note that here one can use any estimators of $\mu_i$ and $\nu_i$ which are consistent in Wasserstein distance, provided $\mu_i^n$ is absolutely continuous. Then, the rate of convergence of $\hat{T}_{n,N}$ will depend on the rate of convergence of $\mu_i^n$ and $\nu_i^n$ to $\mu_i$ and $\nu_i$, respectively, in the Wasserstein distance: \begin{theorem}{\label{estimator-discrete-samples}} In the context of model \eqref{model}, suppose that Assumption \ref{absCont} holds true, and furthermore that there exists a $B<\infty$ such that $T_0 \in \mathcal{T}_B$, and $T_\epsilon\in \mathcal{T}_B$ almost surely. Then, the estimator $\hat{T}_{n,N}$ defined in \eqref{estimator-sample} satisfies \begin{equation}\label{partially-observed-rate} \norm{\hat{T}_{n,N}-T_0}_{L^2(Q)}\lesssim N^{-1/3}+{r_n}^{-1/2}, \end{equation} where $r_n^{-1}$ is the rate of convergence in the Wasserstein distance of $\mu_i^n$ to $\mu_i$ and $\nu_i^n$ to $\nu_i$. \end{theorem} Precise values of $r_n$ can be obtained by choosing specific estimators and imposing additional regularity on the underlying regressor/response measures. For instance, one can follow the estimation approach of \cite{weed2019estimation} and obtain the minimax rate of convergence over measures with densities in Besov classes. \begin{remark} {Note that $B\in (0,\infty)$ can be any finite constant, however large. Its precise value does not influence the rate \eqref{partially-observed-rate} itself, but only the constants. It is therefore not to be interpreted as a regularisation parameter.
To be strictly faithful to the assumptions of Theorem \ref{estimator-discrete-samples}, the computation could incorporate additional constraints of the form $(z_{i+1}-z_i)\leq B(x_{i+1}-x_i)$, as a discretization of $T' \leq B$. From a practical point of view, though, we always have $(z_{i+1}-z_i)\leq \big({|\Omega|}/{\min_{1\leq j\leq m} |I_j|}\big)(x_{i+1}-x_i)$, since $T:\Omega\rightarrow\Omega$ is monotone. So maintaining the original formulation of Section \ref{computation} implicitly corresponds to some $B>|\Omega|/\min_{1\leq j\leq m} |I_j|$ in Theorem \ref{estimator-discrete-samples} (recall that $m$ is the user-defined number of nodes in the Riemann sum approximation \eqref{Riemann}). } \end{remark} \section{Simulated Examples} In this section we illustrate the estimation framework and finite sample performance of the method by means of some simulations. First we generate random predictors $\{\mu_i\}^N_{i=1}$. We consider random distributions that are mixtures of three independent Beta components. We choose the parameters of the Beta distributions to be uniformly distributed random variables on $[1,10]$, with densities $$f_{\mu_i}(x) = \sum_{j=1}^3 \pi_{j} b_{\alpha_{i,j},\beta_{i,j}}(x), \quad \alpha_{i,j} \sim \text{Uniform}[1,10], \quad \beta_{i,j} \sim \text{Uniform}[1,10].$$ The $\{\pi_j\}_{j=1}^3$ are arbitrary fixed mixture weights in $[0,1]$, such that $\sum_{j=1}^3 \pi_j=1$. As for the noise maps $T_{\epsilon_i}$, we use the class of random optimal maps introduced in \citet{panaretos2016amplitude}. Let $k$ be an integer and define $\zeta_k:[0,1] \to [0,1]$ by $$\zeta_0(x)=x, \quad \zeta_k(x)=x-\frac{\sin(\pi kx)}{|k|\pi}, \qquad k\in \mathbb{Z}\setminus \{0\}.$$ These are strictly increasing smooth functions satisfying $\zeta_k(0)=0$ and $\zeta_k(1)=1$ for any $k$. These maps can be made random by replacing $k$ by an integer-valued random variable $K$.
If the distribution of $K$ is symmetric around zero, then it is straightforward to see that $E[\zeta_K(x)]=x$, for all $x\in[0,1]$, as required in the definition of model \eqref{model}. We generate a discrete family of random maps by the following procedure, which is slightly different from the mixture family of maps introduced in \cite{panaretos2016amplitude}: for $J>1$ let $\{K_j\}^J_{j=1}$ be i.i.d. integer-valued symmetric random variables, and $\{U_{(j)}\}^{J-1}_{j=1}$ be the order statistics of $J-1$ i.i.d. uniform random variables on $[0,1]$, independent of $\{K_j\}^J_{j=1}$. The random maps are then defined as $$T_{\epsilon}(x)=\sum_{j=1}^{J-1} I(U_{(j)}\leq x \leq U_{(j+1)})\Xi(U_{(j)},U_{(j+1)},K_j) (x)$$ where $\Xi(U_{(j)},U_{(j+1)},K_j) (x)$ is defined as the ratio $$\Bigg\{\zeta_{K_j}\bigg(\frac{2x}{U_{(j+1)}-U_{(j)}}-\frac{U_{(j+1)}+U_{(j)}}{U_{(j+1)}-U_{(j)}}\bigg)+\frac{U_{(j+1)}+U_{(j)}}{U_{(j+1)}-U_{(j)}}\Bigg\} \bigg/ \Bigg(\frac{2}{U_{(j+1)}-U_{(j)}}\Bigg).$$ As for the optimal map $T_0$ constituting the regression operator, we set $T_0 = \zeta_4$. After having generated the random $\mu_i$ and $T_{\epsilon_i}$, we generate the response distributions according to model \eqref{model}, i.e. $\nu_i = T_{\epsilon_i}\#T_0\# \mu_i$. Figure (\ref{fig:densities}) depicts representative sample pairs of predictor and response densities.
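The building blocks $\zeta_k$ are straightforward to implement and their stated properties can be checked numerically; a short sketch of ours (function and variable names are illustrative), using the identity $\zeta_k(x)+\zeta_{-k}(x)=2x$, which gives $E[\zeta_K(x)]=x$ exactly for any $K$ symmetric around zero:

```python
import numpy as np

def zeta(k, x):
    """The maps zeta_k: strictly increasing bijections of [0, 1]
    fixing the endpoints, with zeta_0 the identity."""
    x = np.asarray(x, dtype=float)
    if k == 0:
        return x
    return x - np.sin(np.pi * k * x) / (abs(k) * np.pi)

x = np.linspace(0.0, 1.0, 201)
z1 = zeta(3, x)
# endpoints fixed, strictly increasing on the grid;
# symmetry: averaging zeta_k and zeta_{-k} recovers the identity map
mean_map = 0.5 * (zeta(3, x) + zeta(-3, x))
```

In the simulation above, drawing $K$ uniformly from a symmetric set such as $\{-2,-1,1,2\}$ therefore yields noise maps that are unbiased in the sense of the model assumption.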
\begin{figure} \caption{Examples of simulated predictor (blue) and corresponding response (orange) densities.} \label{fig:densities} \end{figure} \begin{figure} \caption{Estimated (yellow) versus true (black) regression map for each of 100 replications of the combinations of $N \in \{10,100,1000\}$ and $n \in \{10,100,1000\}$.} \label{fig:nN} \end{figure} \begin{figure} \caption{Boxplots for the squared $L^2$ deviation between the true regression map and the estimated regression maps based on 100 replications for the nine combinations of $N \in \{10,100,1000\}$ and $n \in \{10,100,1000\}$.} \label{fig:nNBoxplot} \end{figure} For estimation, we consider the case where we only observe $n$ independent samples from each pair of distributions $(\mu_i,\nu_i)^N_{i=1}$. For simplicity, we use kernel density estimation, rather than the estimators in \cite{weed2019estimation}, to obtain the proxies $\mu_i^n$ and $\nu_i^n$ for the distributions $\mu_i$ and $\nu_i$. Subsequently, for each $i$, we estimate the optimal map $Q_i^n$ such that $\nu_i^n= Q_i^n\# \mu_i^n$ and solve the convex optimisation problem described in Section \ref{computation} to obtain the estimator $\hat{T}_{n,N}$. Figure (\ref{fig:nN}) contrasts the estimated and true regression maps in each replication, for all nine combinations of $N \in \{10,100,1000\}$ and $n\in \{10,100,1000\}$. It is apparent that the dominant source of error is the bias due to partial observation, i.e. due to observing the measures through finite samples of size $n$. When $n$ is moderately large (e.g. $n=100$) we see that the agreement between estimated and true map is very good, even for small values of $N$. To quantitatively summarise the behaviour of the mean squared error in $N$, we construct boxplots for the error $\|{\hat{T}_{n,N}-T_0}\|_{L^2}$ in Figure (\ref{fig:nNBoxplot}), each based on 100 replications for the corresponding combination of $n\in\{10,100,1000\}$ and $N\in\{10,100,1000\}$.
The scale used is the same for each value of $n$, in order to focus on the behaviour with respect to $N$. \section{Analysis of Mortality Data}\label{mortality_data} We consider the age-at-death distributions for $N=37$ countries in the years 1983 and 2013, obtained from the Human Mortality Database of UC Berkeley and the Max Planck Institute for Demographic Research, openly accessible at \texttt{www.mortality.org}. Death rates are provided by single years of age up to 109, with an open age interval for 110+. We use Gaussian kernel density smoothing to obtain age-at-death densities from the count data. Denote by $\mu_i$ the age-at-death distribution for the $i$th country at year 1983 and $\nu_i$ the age-at-death distribution for the same country at year 2013. We use the distributions $\mu_i$ and $\nu_i$ as predictor and response distributions respectively. We chose these two years to allow comparison with \citet{chen2020wasserstein}, who illustrate their methodology on the same data set, and same pair of years. \begin{figure} \caption{Estimated Regression Map for the age-at-death distributional regression (black) contrasted to the identity (red).} \label{fig:estimatedMap} \end{figure} \begin{figure} \caption{Residual maps of all the 37 countries (blue) and their average map (orange).} \label{fig:diagnostics} \end{figure} We fit the model (\ref{model}) by means of the approach described in Section \ref{computation} to obtain the estimated regression map based on the $N=37$ countries. This is depicted in Figure \ref{fig:estimatedMap}. The map dominates the identity map pointwise, indicating that the regression effect is to transport the mass of the age-at-death distribution to the right at virtually all locations. Said differently, the map indicates an effect of net improvement in mortality across all ages.
The most pronounced such effect is observed in young ages (between 0-10), where the regression map rises steeply: The proportion of the population dying at ages 0-10 in 1983 is redistributed approximately over the range 0-30 in 2013. The form of the map restricted to $[0,10]\mapsto [0,30]$ is approximately linear, indicating that this redistribution is achieved by conserving the actual shape of the distribution but scaling by a constant. The effect is still visible though less pronounced in the early adult to middle age range: The proportion of the population dying at ages between 20 and 60 in 1983 is approximately redistributed over ages 40-60 in 2013. The regression map is approximately parallel to the identity map on the range 60-80, shifted upwards by about 10 years, indicating a translation of that interval by that number of years between 1983 and 2013, i.e. the proportion of the population dying between 60-80 in 1983 has shifted to ages 70-90, but the shape of the distribution of that proportion over each of these two 20 year periods is approximately conserved. Overall, the regression map approximately resembles a piecewise linear map, allowing one to interpret it locally by translations and dilations. \begin{figure} \caption{Distribution-on-distribution regression for the mortality distributions of Japan, Ukraine, Italy and USA in the year 2013 on those in 1983.
Here WD stands for the Wasserstein distance between the observed and fitted densities at year 2013, indicating goodness-of-fit.} \label{fig:8countries} \end{figure} \begin{figure} \caption{Residual maps $T_{\hat{\nu}_i\to \nu_i}$ for the eight countries of Figure (\ref{fig:8countries}).} \label{fig:8residuals} \end{figure} It is not easy to directly compare the effects expressed via this estimated regression map with the effects reflected by the estimated regression coefficient function $\hat\beta$, that is, the integral kernel of the operator $\mathcal{B}$ in Equation \eqref{muller-regressor} obtained in \citet[Figure 3]{chen2020wasserstein}, when fitting their model to the same data. This is largely due to the fact that $\hat{\beta}$ acts on tangent space elements, and thus is rather subtle to interpret. In interpreting their estimated regression operator, those authors remarked that the estimated $\hat{\beta}(s,t)$ was stratified according to the $s$ argument so that, ``\emph{if the log-transformed predictor is non-negative or non-positive throughout its domain, then the fit for the log-transformed response is determined by the comparison of the absolute values of the log transformed predictor over the positive and negative strata of the estimated coefficient $\beta(\cdot,t)$}". Using the estimated map $\hat{T}_N$ we can then compute the fitted age-at-death distributions for the year 2013, namely $\hat{\nu}_i=\hat{T}_N\#\mu_i$. {Figure (\ref{fig:8countries}) depicts the predictor and response densities as well as fitted response densities for a sample of 8 different countries. The first four of these countries (Japan, Ukraine, Italy and USA) were also selected as representative examples in \citet{chen2020wasserstein}.} All eight countries exhibit a negatively-skewed age-at-death distribution.
Comparing the actual distributions for the years 1983 and 2013 we can observe the decreasing trend in infant death counts and peaks shifting to older ages, as dictated by the fitted regression map. Contrasting observed and fitted distributions for 2013 allows for better comparison with the model output in \cite{chen2020wasserstein}, than does comparing the estimated regression operators. Indeed, the main observations made in \cite{chen2020wasserstein} are also apparent from our fitted model. In the case of our model, besides looking at the shape of the predicted densities, we can also take advantage of the direct interpretability of the residual maps $T_{e_i}=T_{\hat{\nu}_i\to \nu_i}$, where $T_{\hat{\nu}_i\to \nu_i}$ is the optimal map between the fitted response $\hat{\nu}_i$ and actual response $\nu_i$. The collection of residual maps is plotted in Figure (\ref{fig:8residuals}). It is apparent that the pointwise variability declines for progressively older ages, illustrating that it is harder to fit mortality at younger ages. One can then focus on the residual maps of specific countries. For example, doing so in the case of Japan and Ukraine, we reproduce the observation in \cite{chen2020wasserstein} that ``\emph{for Japan, the rightward mortality shift is seen to be more expressed than suggested by the fitted model, so that longevity extension is more than is anticipated, while the mortality distribution for Ukraine seems to shift to the right at a slower pace than the fitted model would suggest}".
Similarly, we recover the same inference as in \cite{chen2020wasserstein} regarding the US: ``\emph{while the evolution of the mortality distributions for Japan and Ukraine can be viewed as mainly a rightward shift over calendar years, this is not the case for USA, where compared with the fitted response, the actual rightward shift of the mortality distribution seems to be accelerated for those above age 75} [note: 65 in our case]\emph{, and decelerated for those below age 70 }[note: 65 in our case]". In terms of fit as measured by the Wasserstein distance between response and fit, both models have a harder time fitting Japan, ours doing slightly worse. On the flip side, our model fits Italy better, and the US and Ukraine considerably better (we only contrast countries explicitly mentioned in \cite{chen2020wasserstein}). Figure \ref{fig:diagnostics} features the overlay of all residual maps, in order to explore the goodness-of-fit of the model as well as the validity of the model assumptions. As the figure shows, the mean of the residuals almost matches the identity map, which provides evidence in support of our model specification, in that the residual effects after correcting for the regression should have mean identity, reflected by the assumption that $E\{T_\epsilon(x)\}=x$. Note that, contrary to usual least squares where the residuals have empirical mean zero, the residual maps need not have mean identity exactly. \begin{figure} \caption{Residual maps (blue), the identity map (red) and the Wasserstein distance between the observed and fitted densities at year 2013 for each country. The countries are clustered in two groups (a) and (b). The list of abbreviations can be found in Table \ref{country-list}.} \label{fig:residuals-east} \label{fig:residuals-west} \end{figure} Finally, we can scrutinise the individual residual maps for each of the 37 countries, which we plot separately in Figures (\ref{fig:residuals-east}) and (\ref{fig:residuals-west}).
The separation into two figures is deliberate, and is based on an apparent clustering: in Figure (\ref{fig:residuals-east}) one can observe more of a rightward shift of fitted mortalities compared to the observed mortalities for the countries concerned. This contrasts with countries in Figure (\ref{fig:residuals-west}), which feature less of a rightward shift than fitted by the model. In a sense, these are clusters of ``underfitted" and ``overfitted" observations. Interestingly, countries in Figure (\ref{fig:residuals-east}) belong mostly to Eastern Europe, plus Portugal, Spain, Italy, Israel, Japan and Taiwan. Countries in Figure (\ref{fig:residuals-west}) are western/northern European countries, plus USA, New Zealand and Australia. Thanks to the pointwise interpretability of the residual maps, one can notice a particular contrast between these two groups of countries in terms of their fitted/observed infant mortality rates. This may be related to the fact that countries in Figure (\ref{fig:residuals-east}) experienced a more pronounced improvement in their health care systems over the period 1983-2013, compared to countries in Figure (\ref{fig:residuals-west}) where healthcare was of comparably high quality already in 1983. It is interesting to note that Japan and Taiwan feature residual maps that everywhere dominate the identity. \section{Proofs} \begin{proof}[Proof of Lemma \ref{functional-convexity-derivative}] Using the closed form of optimal transport maps when $d=1$, one can write: $$M(T)=\frac{1}{2}\int \int_0^1 \big|T\{F^{-1}_{\mu}(p)\}-F^{-1}_{\nu}(p)\big|^2 \diff p \diff P(\mu,\nu).$$ The expression above shows that $M$ is convex with respect to $T$, since the map $x\mapsto x^2$ is convex and integration preserves convexity.
To show the strict convexity we should prove that for all $0<\beta<1$ and all $T_1,T_2$ such that $\norm{T_1-T_2}^2_{L^2(Q)}>0$, $$M\big\{\beta T_1 +(1-\beta)T_2\big\} < \beta M(T_1) + (1-\beta) M(T_2).$$ In fact, by expanding the squares and doing some algebra, one concludes that equality holds if and only if $\norm{T_1-T_2}^2_{L^2(Q)}=0$. Thus $M$, and similarly $M_N$, are strictly convex. Notice that the domain of definition of $M$ can be extended to the space of $L^2(Q)$ functions. Therefore the Gateaux derivative of $M$ in the direction of $\eta \in L^2(Q)$ can be defined as: \begin{equation*} \begin{split} D_\eta M(T) &= \lim_{\epsilon \to 0} \frac{M(T+\epsilon \eta)-M(T)}{\epsilon}. \end{split} \end{equation*} Expanding the first term we have: \begin{equation}{\label{perturb}} \begin{split} M(T+\epsilon \eta) &= M(T) + \epsilon \int \int_0^1 \Big[T\{F^{-1}_{\mu}(p)\}-F^{-1}_{\nu}(p)\Big]\eta\{F^{-1}_{\mu}(p)\}\diff p \diff P(\mu,\nu)\\ &\quad +\frac{\epsilon^2}{2} \int \int_0^1 \big|\eta\{F^{-1}_{\mu}(p)\}\big|^2 \diff p \diff P(\mu,\nu)\\ &= M(T) +\epsilon\int \langle T-F^{-1}_{\nu}\circ F_{\mu},\eta\rangle_{L^2(\mu)}\diff P(\mu,\nu) +\frac{\epsilon^2}{2} \int \norm{\eta}^2_{L^2(\mu)}\diff P(\mu,\nu)\\ &= M(T) +\epsilon\int \langle T-F^{-1}_{\nu}\circ F_{\mu},\eta\rangle_{L^2(\mu)}\diff P(\mu,\nu) +\frac{\epsilon^2}{2} \norm{\eta}^2_{L^2(Q)}. \end{split} \end{equation} The last equality is true since \begin{equation*} \begin{split} \int \norm{\eta}^2_{L^2(\mu)} \diff P(\mu,\nu) &=\int \int_{\Omega} |\eta(x)|^2 \diff \mu(x) \diff P(\mu,\nu)\\ &=\int_{\Omega} |\eta(x)|^2 \diff Q(x)\\ &=\norm{\eta}^2_{L^2(Q)}. \end{split} \end{equation*} Since $\norm{\eta}^2_{L^2(Q)}<\infty$, we can conclude $$D_{\eta} M(T)=\int \langle T-F^{-1}_{\nu}\circ F_{\mu},\eta\rangle_{L^2(\mu)}\diff P(\mu,\nu)=\int \int_{\Omega} \{T(x)-T_{\mu,\nu}(x)\}\eta(x)\diff \mu(x)\diff P(\mu,\nu),$$ where $T_{\mu,\nu}$ is the optimal map from $\mu$ to $\nu$.
One can use a similar argument to derive the derivative of $M_N$. \end{proof} \begin{proof}[Proof of Theorem \ref{identifiability}] We prove that $T_0$ is the unique minimizer of the population functional in $\mathcal{T}$. Suppose $\nu=T_{\epsilon}\#(T_0\#\mu_0)$ for some fixed measure $\mu_0$, where by assumption $\mathbb{E}\{T_{\epsilon}(x)\}=x$ almost everywhere. Thus according to Proposition 3.2.11 of \cite{panaretos2020invitation}, $T_0 \# \mu_0$ is the Fr\'echet mean of the conditional probability law of $\nu$ given $\mu_0$ or equivalently, for any $\mu_0$ $${\arg\inf}_{b\in \mathcal{W}_2(\Omega)} \int_{\mathcal{W}_2(\Omega)} d^2_{\mathcal{W}}(b,\nu) \diff P(\nu|\mu_0)=T_0\# \mu_0,$$ where $P$ is the joint distribution of $(\mu,\nu)$ induced by Model \eqref{model}. Now $T_0$ is a minimizer of the above functional, since for any $T$: \begin{equation*} \begin{split} M(T)&=\int d^2_{\mathcal{W}}(T\# \mu,\nu) \diff P(\mu,\nu)\\ &=\int \int d^2_{\mathcal{W}}(T\#\mu_0,\nu)\diff P(\nu|\mu_0) \diff P(\mu_0)\\ &\geq \int \int d^2_{\mathcal{W}}(T_0\#\mu_0,\nu)\diff P(\nu|\mu_0) \diff P(\mu_0)\\ &=\int d^2_{\mathcal{W}}(T_0\# \mu,\nu) \diff P(\mu,\nu). \end{split} \end{equation*} Also since $d^2_{\mathcal{W}}(T\# \mu,\nu)$ is strictly convex w.r.t. $T \in \mathcal{T}$, and integration preserves strict convexity, the functional $M$ is strictly convex. So $T_0$ is, in fact, the \emph{unique} minimizer. \end{proof} \noindent To establish Proposition \ref{Unique-minimizer}, we will use the following theorem. \begin{theorem}[\citet{kurdila2006convex}, Theorem 7.3.6]{\label{theorem for existence}} Let $X$ be a reflexive Banach space and suppose that $f:M\subseteq X \to \mathbb{R}$ is Gateaux-differentiable on the closed, convex and bounded subset $M$. 
If any of the following three conditions holds true, \begin{enumerate} \item $f$ is convex over $M$, \item $Df$ is monotone over $M$, \item $D^2 f$ is positive over $M$, \end{enumerate} then all three conditions hold, and there exists an $x_0\in M$ such that $$f(x_0)=\inf_{x\in M} f(x).$$ \end{theorem} \begin{proof}[Proof of Proposition \ref{Unique-minimizer}] The set of maps $\mathcal{T}$ is closed, convex and bounded in the Hilbert space of $L^2(Q)$ functions. Thus the existence follows immediately from Lemma \ref{functional-convexity-derivative} and Theorem \ref{theorem for existence}. Uniqueness also follows from the strict convexity of $M$. \end{proof} To establish the consistency and rate of convergence of our estimator, we will make use of the theory of $M$-estimation. To this aim, we restate some key theorems from \citet{van1996weak}. \begin{theorem}[\citet{van1996weak}, Theorem 3.2.3]{\label{M-estimation}} Let $M_n$ be random functions, for positive integers $n$, and let $M$ be a fixed function of $\theta$ such that for any $\epsilon>0$ \begin{equation}{\label{uniform convergence}} \inf_{d(\theta,\theta_0)\geq \epsilon} M(\theta)> M(\theta_0), \end{equation} \begin{equation}{\label{uniqueness-theta}} \sup_{\theta}|M_n(\theta)-M(\theta)|\to 0 \quad \text{in probability}. \end{equation} Then any sequence of estimators $\hat{\theta}_n$ with $M_n(\hat{\theta}_n)\leq M_n(\theta_0)+o_P(1)$ converges in probability to $\theta_0$.
\end{theorem} \begin{theorem}[\citet{van1996weak}, Theorem 3.2.5]{\label{theorem3.2.5}} Let $M_N$ be a stochastic process indexed by a metric space $\Theta$, and let $M$ be a deterministic function, such that for every $\theta$ in a neighborhood of $\theta_0$, $$M(\theta)-M(\theta_0)\gtrsim d^2(\theta,\theta_0).$$ Suppose that, for every $N$ and sufficiently small $\delta$, $$\mathbb{E}^* \sup_{d^2(\theta,\theta_0)<\delta} \sqrt{N}\big|(M_N-M)(\theta)-(M_N-M)(\theta_0)\big|\lesssim \Phi_N(\delta),$$ for functions $\Phi_N$ such that $\delta \to \Phi_N(\delta)/\delta^\alpha$ is decreasing for some $\alpha<2$ (not depending on $N$). Let $$r_N^2\Phi_N\left(\frac{1}{r_N}\right)\leq \sqrt{N}, \quad \text{for every } N.$$ If the sequence $\hat{\theta}_N$ satisfies $M_N(\hat{\theta}_N)\leq M_N(\theta_0)+O_P(r_N^{-2})$, and converges in outer probability to $\theta_0$, then $r_N d(\hat{\theta}_N,\theta_0)=O^*_P(1)$. If the displayed conditions are valid for every $\theta$ and $\delta$, then the condition that $\hat{\theta}_N$ is consistent is unnecessary. \end{theorem} \begin{theorem}[\citet{van1996weak}, Theorem 2.7.5]{\label{metric-entropy-monotone}} The class $\mathcal{F}$ of monotone functions $f:\mathbb{R} \to [0,1]$ satisfies $$\log N_{[]}(\epsilon,\norm{.}_{L^p(Q)},\mathcal{F})\leq K\left(\frac{1}{\epsilon}\right),$$ for every probability measure $Q$, every $p\geq1$, and a constant $K$ that depends only on $p$. \end{theorem} \begin{theorem}[\citet{van1996weak}, Theorem 3.4.2]{\label{chaining}} Let $\mathcal{F}$ be a class of measurable functions such that $P f^2 < \delta^2$ and $\norm{f}_{\infty}<M$ for every $f$ in $\mathcal{F}$.
Then $$\mathbb{E} \sup_{f \in \mathcal{F}} |\sqrt{N} (\hat{P}-P)f|\leq \tilde{J}_{[]}(\delta,\norm{.}_{L^2(P)},\mathcal{F})\Bigg(1+\frac{\tilde{J}_{[]}(\delta,\norm{.}_{L^2(P)},\mathcal{F})}{\delta^2 \sqrt{N}} M \Bigg),$$ where $\tilde{J}_{[]}(\delta,\norm{.}_{L^2(P)},\mathcal{F})=\int_0^\delta \sqrt{1+\log N_{[]}(\epsilon,\norm{.}_{L^2(P)},\mathcal{F})} \diff \epsilon$. \end{theorem} \begin{proof}[Proof of Theorem \ref{rate of convergence}] Recall from Proposition \ref{Unique-minimizer} that $\hat{T}_N$ is the minimizer of the following criterion within the function class $\mathcal{T}$: $$M_N(T):=\frac{1}{2N}\sum_{i=1}^N d^2_{\mathcal{W}}(T\# \mu_i,\nu_i).$$ The ``true'' optimal map $T_0$ is the minimizer of the criterion function $$M(T):=\frac{1}{2}\int d^2_{\mathcal{W}}(T\# \mu,\nu) \diff P(\mu,\nu).$$ First we obtain an adequate upper bound for the bracketing number of the class of functions indexed by $T$ of the form: $$\mathcal{F}_u:=\{f_T(\mu,\nu) = d^2_{\mathcal{W}}(T\#\mu,\nu)-d^2_{\mathcal{W}}(T_0\#\mu,\nu) , \text{ s.t. } T\in \mathcal{T} \text{ and } \norm{T-T_0}_{L^2(Q)}\leq u \},$$ where the domain of each function $f_T \in \mathcal{F}_u$ is $\mathcal{W}_2(\Omega) \times \mathcal{W}_2(\Omega)$. Denote by $\log N_{[]}(\epsilon,\norm{.}_{L^2(Q)},\mathcal{F}_u)$ the bracketing entropy of the function class $\mathcal{F}_u$. One can directly control this bracketing entropy by the bracketing entropy of the class of optimal maps $\mathcal{T}$ since \begin{equation}{\label{lipstchitz}} \begin{split} |d^2_{\mathcal{W}}(T_1\#\mu,\nu)-d^2_{\mathcal{W}}(T_2\#\mu,\nu)| & \leq \norm{T_1-T_2}_{L^2(\mu)}\\ & \leq C \norm{T_1-T_2}_{L^2(Q)}.
\end{split} \end{equation} Since optimal maps are monotone functions, using Theorem \ref{metric-entropy-monotone}, we know $\log N_{[]}(\epsilon,\norm{.}_{L^2(Q)},\mathcal{T})\leq K\left(\frac{1}{\epsilon}\right)$, and thus we conclude $$\log N_{[]}(\epsilon,\norm{.}_{L^2(P)},\mathcal{F}_u)\lesssim\left(\frac{1}{\epsilon}\right).$$ The first line of the inequality (\ref{lipstchitz}) also shows that $$P f_T^2\leq P \norm{T-T_0}^2_{L^2(\mu)}=\norm{T-T_0}^2_{L^2(Q)}\leq u^2,$$ for all $f_T \in \mathcal{F}_u$. To get the rate of convergence, we first show that $M(T)$ has quadratic growth around its minimizer. For any map $T$, we can write $T = T_0 + \eta $, where $\eta = T - T_0$. Thus equation (\ref{perturb}) with $\epsilon=1$, together with the fact that $D_\eta M(T_0)=0$, yields \begin{equation*} \begin{split} M(T)-M(T_0) &= \frac{1}{2} \norm{\eta}^2_{L^2(Q)}\\ &=\frac{1}{2}\norm{T-T_0}^2_{L^2(Q)}. \end{split} \end{equation*} Next, we find a function $\Phi_N(\delta)$ such that \begin{equation*} \begin{split} \mathbb{E} \sup_{\norm{T-T_0}_{L^2(Q)}\leq \delta, T\in \mathcal{T}} \sqrt{N} \Big|(M_{N}-M)(T)-(M_{N}-M)(T_0)\Big| &= \mathbb{E} \sup_{f \in \mathcal{F}_\delta} \sqrt{N} |(P_N-P)f|\\ &\leq \Phi_N(\delta). \end{split} \end{equation*} Since the functions in $\mathcal{F}_\delta$ are uniformly bounded and $Pf^2\leq \delta^2$ for all $f\in\mathcal{F}_\delta$, the conditions of Theorem \ref{chaining} are satisfied and we can choose $$\Phi_N(\delta)=\tilde{J}_{[]}(\delta,\norm{.}_{L^2(P)},\mathcal{F}_\delta)\Bigg(1+\frac{\tilde{J}_{[]}(\delta,\norm{.}_{L^2(P)},\mathcal{F}_\delta)}{\delta^2 \sqrt{N}} \bar{c} \Bigg),$$ where the constant $\bar{c}$ is a uniform upper bound for the functions in class $\mathcal{F}_\delta$.
Since we noted that $\log N_{[]}(\epsilon,\norm{.}_{L^2(P)},\mathcal{F}_u)\lesssim \epsilon^{-1}$ for any $u>0$, we can show $$\tilde{J}_{[]}(\delta,\norm{.}_{L^2(P)},\mathcal{F}_\delta)\leq \int_0^\delta \Big(1+ \sqrt{\log N_{[]}(\epsilon,\norm{.}_{L^2(P)},\mathcal{F}_\delta)}\Big) \diff \epsilon \lesssim \sqrt{\delta}.$$ The above inequality and the required condition $\Phi_N(\delta)\leq \delta_N^2 \sqrt{N}$ give the bound $\delta_N=N^{-1/3}$. \end{proof} To establish the rate of convergence under imperfect observation we will make use of the following lemma. \begin{lemma}{\label{Figalli}} Let $\mu_n$ be a sequence of measures converging in Wasserstein distance to a measure $\mu$ at a rate of convergence $r_n^{-1}$ and let $T\in \mathcal{T}$. Then $d^2_{\mathcal{W}}(T\# \mu_n,T\#\mu)\lesssim r_n^{-2}$. \end{lemma} \begin{proof} For simplicity and without loss of generality assume that $d^2_{\mathcal{W}}( \mu_n,\mu)=r_n^{-2}$ exactly. If $S_n$ is the optimal map from $\mu_n$ to $\mu$, then $$ \int \big|S_n(x)-x\big|^2 \diff \mu_n \leq r_n^{-2}. $$ Since $T$ is differentiable almost everywhere and satisfies $|T'(x)| \leq B $ for almost all $x \in \Omega$, it is Lipschitz continuous with Lipschitz constant at most $B$. Thus \begin{equation} \begin{split} d^2_{\mathcal{W}}(T\# \mu_n,T\#\mu)&\leq \int \big|T\{S_n(x)\}-T(x)\big|^2\diff \mu_n\\ &\leq B^2 \int \big|S_n(x)-x\big|^{2}\diff \mu_n \\ & \lesssim r_n^{-2}. \end{split} \end{equation} \end{proof} \begin{proof}[Proof of Theorem \ref{estimator-discrete-samples}] Define $M_{n,N}(T):=\frac{1}{N}\sum_{i=1}^N d^2_{\mathcal{W}}(T\# \mu_i^n,\nu_i^n)$.
For any map $T \in \mathcal{T}$, \begin{equation}{\label{figalli2}} \begin{split} \mathbb{E} |M_{n,N}(T)-M_N(T)|&=\mathbb{E}\Big|\frac{1}{N}\sum_{i=1}^N d^2_{\mathcal{W}}(T\# \mu_i^n,\nu_i^n)-\frac{1}{N}\sum_{i=1}^N d^2_{\mathcal{W}}(T\# \mu_i,\nu_i)\Big|\\ &\leq \mathbb{E}\big|d^2_{\mathcal{W}}(T\# \mu_i^n,\nu_i^n)-d^2_{\mathcal{W}}(T\# \mu_i,\nu_i)\big| \\ &\leq 2C \Big(\mathbb{E}\big|d_{\mathcal{W}}(T\# \mu_i^n,\nu_i^n)-d_{\mathcal{W}}(T\# \mu_i^n,\nu_i)\big|+\mathbb{E}\big|d_{\mathcal{W}}(T\# \mu_i^n,\nu_i)-d_{\mathcal{W}}(T\# \mu_i,\nu_i)\big|\Big)\\ &\leq 2C\Big(\mathbb{E} d_{\mathcal{W}}( \nu_i^n,\nu_i)+\mathbb{E} d_{\mathcal{W}}(T\# \mu_i^n,T\#\mu_i)\Big)\\ &\lesssim r_n^{-1} \quad \text{(by Lemma \ref{Figalli})}, \end{split} \end{equation} where $C=\sup_{\mu,\nu} d_{\mathcal{W}}(\mu,\nu)$, and $r_n^{-1}$ is the rate of estimation of an absolutely continuous measure from $n$ samples. Thus the above inequality shows the uniform convergence of $M_{n,N}$ to $M_N$ (at a rate independent of $N$). Also, since $\hat{T}_N$ is the unique minimizer of $M_N$, according to Theorem \ref{M-estimation}, $\hat{T}_{n,N}$ is a consistent estimator for $\hat{T}_N$, when $N$ is fixed. Now assuming $N$ is fixed, we again use Theorem \ref{theorem3.2.5} for the functionals $M_{n,N}$ and $M_N$. Since both functionals are differentiable, the first condition of the theorem (quadratic growth) is satisfied. For the second condition we need to find an upper bound for \begin{equation} \begin{split} \mathbb{E} \sup_{\norm{T-\hat{T}_N}_{L^2(Q)}<\delta} \sqrt{n} \big|(M_{n,N}-M_N)(T)-(M_{n,N}-M_N)(\hat{T}_{N})\big| &=\Phi_n(\delta). \end{split} \end{equation} According to \eqref{figalli2}, we have $\Phi_n(\delta_n)\lesssim r_n^{-1} \sqrt{n}.$ We also need $\Phi_n(\delta_n)\leq \sqrt{n} \delta_n^2$, thus $\delta_n^2\sim r_n^{-1}$.
Therefore $$\norm{\hat{T}_{n,N}-\hat{T}_N}_{L^2(Q)}=O_P(\delta_n)=O_P(r_n^{-1/2}),$$ and $$\norm{\hat{T}_{n,N}-T_0}_{L^2(Q)}\leq \norm{\hat{T}_{n,N}-\hat{T}_N}_{L^2(Q)}+\norm{\hat{T}_{N}-T_0}_{L^2(Q)},$$ thus $$\norm{\hat{T}_{n,N}-T_0}_{L^2(Q)} \lesssim r_n^{-1/2}+N^{-1/3}.$$ \end{proof} \begin{table} \caption{Country abbreviations used in Figures \ref{fig:residuals-east} and \ref{fig:residuals-west}} \begin{tabular}{ |p{3cm}||p{3cm}| } \hline \multicolumn{2}{|c|}{Country List Figure (\ref{fig:residuals-east})} \\ \hline Country Name & Country Code\\ \hline Estonia&EST\\ Slovakia&SVK\\ Bulgaria&BGR\\ Hungary&HUN\\ Czechia&CZE\\ Lithuania&LTU\\ East Germany&DEUTE\\ Latvia&LVA\\ Belarus&BLR\\ Ukraine&UKR\\ Israel&ISR\\ Slovenia&SVN\\ Poland&POL\\ Spain&ESP\\ Italy&ITA\\ Portugal&PRT\\ Russia&RUS\\ Japan&JPN\\ Taiwan&TWN\\ Greece&GRC\\ \hline \end{tabular} \quad \begin{tabular}{ |p{3cm}||p{3cm}| } \hline \multicolumn{2}{|c|}{Country List Figure (\ref{fig:residuals-west})} \\ \hline Country Name & Country Code\\ \hline Australia&AUS \\ West Germany&DEUTW\\ Austria&AUT \\ Netherlands&NLD\\ Iceland&ISL\\ Ireland&IRL\\ Belgium&BEL\\ France&FRATNP\\ Finland&FIN\\ New Zealand &NZL-NP\\ Switzerland&CHE\\ Sweden&SWE\\ Norway&NOR\\ United Kingdom&GBR-NP\\ U.S.A.&USA\\ Denmark&DNK\\ Luxembourg&LUX\\ \quad&\quad\\ \quad&\quad\\ \quad&\quad\\ \hline \end{tabular} \label{country-list} \end{table} \end{document}
\begin{document} \title[Banach spaces whose duals are isomorphic to $l_{1}(\Gamma)$]{On Banach spaces whose duals are isomorphic to $l_{1}(\Gamma)$} \author{Daniel M. Pellegrino} \address[Daniel M. Pellegrino]{Depto de Matem\'{a}tica e Estat\'{\i}stica- Caixa Postal 10044- UFCG- Campina Grande-PB-Brazil } \email{[email protected]} \thanks{This work is partially supported by Instituto do Mil\^{e}nio, IMPA} \subjclass{Primary: 46G25; Secondary: 46B60} \begin{abstract} In this paper we present new characterizations of Banach spaces whose duals are isomorphic to $l_{1}(\Gamma),$ extending results of Stegall, Lewis-Stegall and Cilia-D'Anna-Guti\'{e}rrez. \end{abstract} \maketitle \section{Introduction, notation and background} Banach spaces whose duals are isomorphic to $l_{1}(\Gamma)$ were investigated in the works of Stegall \cite{Stegall} and Lewis-Stegall \cite{Lewis}; in a recent paper, Cilia-D'Anna-Guti\'{e}rrez \cite{Cilia} studied polynomial characterizations of such spaces. Our aim is to show that polynomial characterizations of Banach spaces whose duals are isomorphic to $l_{1}(\Gamma)$ are far more common than previously known, and that many of these statements are consequences of simple results concerning polynomial ideals. Our techniques also give alternative non-tensorial proofs for the results of \cite{Cilia}. Throughout this paper $E,E_{1},...,E_{n},F,G,$ will stand for (real or complex) Banach spaces, $B_{E}$ will denote the closed unit ball of $E$ and $\mathbb{N}$ will denote the set of natural numbers.
The space of all continuous $n$-homogeneous polynomials from $E$ into $F$ endowed with the $\sup$ norm is represented by $\mathcal{P}(^{n}E;F)$ and the space of all continuous $n$-linear mappings from $E_{1}\times...\times E_{n}$ into $F$ (with the $\sup$ norm) is denoted by $\mathcal{L}(E_{1},...,E_{n};F).$ When $E_{1}=...=E_{n}=E$, we write $\mathcal{L}(^{n}E;F).$ If $P\in\mathcal{P}(^{n}E;F),$ we use the symbol $\overset{\vee}{P}$ for the (unique) symmetric $n$-linear mapping associated to $P$. On the other hand, if $T\in\mathcal{L}(^{n}E;F)$ we write $\overset{\wedge}{T}(x)=T(x,...,x).$ For $i=1,...,n,$ $\Psi_{i}^{(n)}:\mathcal{L}(E_{1},...,E_{n};F)\rightarrow \mathcal{L}(E_{i};\mathcal{L}(E_{1},\overset{[i]}{...},E_{n};F))$ will represent the canonical isometric isomorphism given by \[ \Psi_{i}^{(n)}(T)(x_{i})(x_{1}\overset{[i]}{...}x_{n})=T(x_{1},...,x_{n}), \] where the notation $\overset{[i]}{...}$ means that the $i$-th coordinate is not involved. An ideal of (homogeneous) polynomials $\mathfrak{P}$ is a subclass of the class of all continuous homogeneous polynomials between Banach spaces such that, for every $n$ and all $E$ and $F$, the components $\mathfrak{P}(^{n}E;F)=\mathcal{P}(^{n}E;F)\cap\mathfrak{P}$ satisfy: (i) $\mathfrak{P}(^{n}E;F)$ is a linear subspace of $\mathcal{P}(^{n}E;F)$ which contains the polynomials of finite type. (ii) If $P\in\mathfrak{P}(^{n}E;F),$ $T_{1}\in\mathcal{L}(G;E)$ and $T_{2}\in\mathcal{L}(F;H),$ then $T_{2}PT_{1}\in\mathfrak{P}(^{n}G;H).$ In this note we will be concerned with two special methods of creating ideals of polynomials: factorization and linearization.
\begin{itemize} \item (Factorization method) If $\mathfrak{I}$ is an operator ideal, a continuous $n$-homogeneous polynomial $P\in\mathcal{P}(^{n}E;F)$ is of type $\mathcal{P}_{\mathcal{L}(\mathfrak{I})}$ if there exist a Banach space $G$, a linear operator $T\in\mathfrak{I}(E;G)$ and $Q\in\mathcal{P}(^{n}G;F)$ such that $P=QT.$ \item (Linearization method) If $\mathfrak{I}$ is an operator ideal, $T\in\mathcal{L}(E_{1},...,E_{n};F)$ is of type $[\mathfrak{I}]$ if $\Psi_{i}^{(n)}(T)\in\mathfrak{I}(E_{i};\mathcal{L}(E_{1},\overset{[i]}{...},E_{n};F))$ for every $i=1,...,n.$ We say that $P\in\mathcal{P}(^{n}E;F)$ is of type $\mathcal{P}_{[\mathfrak{I}]}$ if $\overset{\vee}{P}$ is of type $[\mathfrak{I}]$. \end{itemize} An $n$-homogeneous polynomial is said to be $p$-dominated if there exist $C\geq0$ and a regular probability measure $\mu$ on the Borel $\sigma$-algebra of $B_{E^{\prime}}$ (with the weak star topology) such that \begin{equation} \left\Vert P\left( x\right) \right\Vert \leq C\left[ \int_{B_{E^{\prime}} }\left\vert \varphi\left( x\right) \right\vert ^{p}d\mu\left( \varphi\right) \right] ^{\frac{n}{p}}. \end{equation} We write $\mathcal{P}_{d,p}(^{n}E;F)$ to denote the space of $p$-dominated $n$-homogeneous polynomials from $E$ into $F.$ For $n=1$ we obtain the absolutely $p$-summing operators. We represent the space of all absolutely $p$-summing operators from $E$ into $F$ by $\mathcal{L}_{as,p}(E;F).$ It is well known that $\mathcal{P}_{d,p}(^{n}E;F)=\mathcal{P}_{\mathcal{L} (as,p)}(^{n}E;F)$. For references on $p$-dominated polynomials we mention \cite{irish}, \cite{Matos2} and \cite{studia}, among others. For details concerning polynomials on Banach spaces we mention \cite{Dineen}.
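The correspondences recalled in this section, the restriction $\overset{\wedge}{T}$, the symmetric multilinear map $\overset{\vee}{P}$, and the currying isomorphism $\Psi_{i}^{(n)}$, can be illustrated numerically in finite dimensions. The following Python sketch (a toy of our own for $n=2$ and $E=\mathbb{R}^d$; it is not part of this paper) verifies the polarization identity and the currying relation for a symmetric bilinear map:

```python
import numpy as np

# Finite-dimensional toy (our own construction, n = 2, E = R^d) of the
# correspondences above: a symmetric bilinear map T, the 2-homogeneous
# polynomial P = T^, recovery of P-check by polarization, and Psi_1.

rng = np.random.default_rng(1)
d = 4
A = rng.normal(size=(d, d))
A = (A + A.T) / 2                  # symmetric matrix representing T(x, y) = x^T A y

T = lambda x, y: x @ A @ y         # symmetric bilinear map
P = lambda x: T(x, x)              # P = T^ (restriction to the diagonal)

# Polarization recovers the symmetric bilinear map from P (case n = 2):
P_check = lambda x, y: 0.5 * (P(x + y) - P(x) - P(y))

# Psi_1 curries T into a linear map x -> T(x, .); here Psi_1(T)(x) is the
# functional y -> (A x) . y, so it is represented by the vector A @ x:
Psi1 = lambda x: A @ x

x, y = rng.normal(size=d), rng.normal(size=d)
assert np.isclose(P_check(x, y), T(x, y))   # polarization identity
assert np.isclose(Psi1(x) @ y, T(x, y))     # currying relation
```

In infinite dimensions the same identities hold, with the polarization formula replaced by its general $n$-linear version.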
\section{Results} We shall start with some useful lemmas: \begin{lemma} \label{aaa} If $\mathfrak{I}_{1}$ and $\mathfrak{I}_{2}$ are operator ideals, and \begin{equation} \mathcal{L}_{\mathfrak{I}_{1}}(E;F)\subset\mathcal{L}_{\mathfrak{I}_{2} }(E;F)\text{ for every }F \label{nova} \end{equation} then \[ \mathcal{P}_{\mathcal{L}[\mathfrak{I}_{1}]}(^{m}E;F)\subset\mathcal{P} _{\mathcal{L}[\mathfrak{I}_{2}]}(^{m}E;F)\text{ and }\mathcal{P} _{[\mathfrak{I}_{1}]}(^{m}E;F)\subset\mathcal{P}_{[\mathfrak{I}_{2}]} (^{m}E;F)\text{ for every }F. \] \end{lemma} Proof. If $P\in\mathcal{P}_{\mathcal{L}[\mathfrak{I}_{1}]}(^{m}E;F),$ then $P=Qu,$ with $Q\in\mathcal{P}(^{m}G;F)$ and $u\in\mathcal{L}_{\mathfrak{I} _{1}}(E;G).$ From (\ref{nova}), we have $u\in\mathcal{L}_{\mathfrak{I}_{2} }(E;G)$ and thus $P\in\mathcal{P}_{\mathcal{L}[\mathfrak{I}_{2}]}(^{m}E;F).$ The other case is similar.$\Box$ \begin{lemma} \label{teoee}Let $\mathcal{I}_{0}$, $\mathcal{I}_{1}$ and $\mathcal{I}_{2}$ be ideals of polynomials such that $\mathcal{P}_{\mathcal{I}_{0}}(^{n}E;F)\cap\mathcal{P} _{\mathcal{I}_{1}}(^{n}E;F)\subset\mathcal{P}_{\mathcal{I}_{2}}(^{n}E;F)$ for some natural $n,$ and suppose that the following hold true: \end{lemma} (i) $P\in\mathcal{P}_{\mathcal{I}_{2}}(^{n}E;F)\Rightarrow\overset{\vee} {P}(.,a,...,a)\in$ $\mathcal{L}_{\mathcal{I}_{2}}(E;F)$ for every fixed $a\in E$. (ii) For $j=0,1$ and $m<n,$ if $P\in\mathcal{P}_{\mathcal{I}_{j}}(^{m}E;F)$ and $\varphi\in\mathcal{L}(E;\mathbb{K}),$ then $P.\varphi\in\mathcal{P} _{\mathcal{I}_{j}}(^{m+1}E;F).$ Then $\mathcal{L}_{\mathcal{I}_{0}}(E;F)\cap\mathcal{L}_{\mathcal{I}_{1} }(E;F)\subset\mathcal{L}_{\mathcal{I}_{2}}(E;F).$ Proof. If $T\in\mathcal{L}_{\mathcal{I}_{0}}(E;F)\cap\mathcal{L} _{\mathcal{I}_{1}}(E;F)$, then choose $\varphi\in\mathcal{L}(E;\mathbb{K})$ with $\varphi\neq0$ and $a\in E$ such that $\varphi(a)=1$, and consider the following $n$-homogeneous polynomial: \[ P(x)=T(x)\varphi(x)^{n-1}.
\] By applying (ii), $P\in\mathcal{P}_{\mathcal{I}_{0}}(^{n}E;F)\cap \mathcal{P}_{\mathcal{I}_{1}}(^{n}E;F)\subset\mathcal{P}_{\mathcal{I}_{2} }(^{n}E;F).$ Finally (i) yields that $\overset{\vee}{P}(.,a,...,a)\in \mathcal{L}_{\mathcal{I}_{2}}(E;F)$ and thus \[ \frac{1}{n}T+\frac{n-1}{n}T(a)\varphi\in\mathcal{L}_{\mathcal{I}_{2}}(E;F). \] Since $\varphi\in\mathcal{L}_{\mathcal{I}_{2}}(E;\mathbb{K}),$ we conclude that $T\in\mathcal{L}_{\mathcal{I}_{2}}(E;F).\Box$ \begin{lemma} \label{novo}If $\mathcal{P}_{\mathcal{L}[\mathcal{I}_{0}]}(^{n}E;F)\cap \mathcal{P}_{\mathcal{L}[\mathcal{I}_{1}]}(^{n}E;F)\subset\mathcal{P} _{\mathcal{I}_{2}}(^{n}E;F)$ and $\mathcal{P}_{\mathcal{I}_{2}}(^{n}E;F)$ satisfies the hypothesis (i) of Lemma \ref{teoee}, then \[ \mathcal{L}_{\mathcal{I}_{0}}(E;F)\cap\mathcal{L}_{\mathcal{I}_{1} }(E;F)\subset\mathcal{L}_{\mathcal{I}_{2}}(E;F). \] \end{lemma} Proof. If $T\in\mathcal{L}_{\mathcal{I}_{0}}(E;F)\cap\mathcal{L} _{\mathcal{I}_{1}}(E;F),$ choose a continuous (non null) linear functional $\varphi$ on $F$ and define an $n$-homogeneous polynomial $P:E\rightarrow F$ by $P(x)=T(x)\varphi^{n-1}(T(x)).$ Then $P=Q\circ T$, where $Q:F\rightarrow F$ is given by $Q(y)=y\varphi^{n-1}(y).$ Thus $P\in\mathcal{P}_{\mathcal{L} [\mathcal{I}_{0}]}(^{n}E;F)\cap\mathcal{P}_{\mathcal{L}[\mathcal{I}_{1}]} (^{n}E;F)\subset\mathcal{P}_{\mathcal{I}_{2}}(^{n}E;F)$ and since $\mathcal{P}_{\mathcal{I}_{2}}(^{n}E;F)$ satisfies the hypothesis (i) of Lemma \ref{teoee}, we have that $\overset{\vee}{P}(.,a,...,a)\in$ $\mathcal{L} _{\mathcal{I}_{2}}(E;F)$ (for $a\in E$ such that $\varphi(T(a))\neq0$) and hence $T\in\mathcal{L}_{\mathcal{I}_{2}}(E;F).\Box$ Let us recall the concepts of compact and nuclear polynomials. A polynomial $P:E\rightarrow F$ is said to be compact if $P(B_{E})$ is relatively compact in $F$.
The space of all compact $m$-homogeneous polynomials from $E$ into $F$ will be denoted by $\mathcal{P}_{K}(^{m}E;F).$ For the compact operators from $E$ into $F$ we use the symbol $\mathcal{L}_{K}(E;F).$ We say that $P\in\mathcal{P}(^{m}E;F)$ is nuclear if it is possible to find $(\varphi _{i})\subset E^{\prime}$ and $(y_{i})\subset F$ so that \[ Px=\sum\limits_{i=1}^{\infty}\left[ \varphi_{i}(x)\right] ^{m}y_{i}\text{ and }\sum\limits_{i=1}^{\infty}\left\| \varphi_{i}\right\| ^{m}\left\| y_{i}\right\| <\infty. \] The space of all nuclear $m$-homogeneous polynomials from $E$ into $F$ is denoted by $\mathcal{P}_{N}(^{m}E;F).$ For the linear case we write $\mathcal{L}_{N}(E;F).$ The relation between nuclear and compact operators (polynomials) and Banach spaces whose duals are isomorphic to $l_{1}(\Gamma)$ is given by the following results: \begin{theorem} \label{T2}(Lewis-Stegall \cite{Lewis}/Stegall \cite{Stegall}) Given a Banach space $E$, the following assertions are equivalent: (i) $E^{\prime}$ is isomorphic to $l_{1}(\Gamma)$ for some $\Gamma.$ (ii) For every Banach space $F$, we have $\mathcal{L}_{as,1}(E;F)\subset \mathcal{L}_{N}(E;F).$ (iii) For every Banach space $F$, $\mathcal{L}_{as,1}(E;F)\cap\mathcal{L} _{K}(E;F)\subset\mathcal{L}_{N}(E;F).$ \end{theorem} \begin{theorem} \label{T}(Cilia-D'Anna-Guti\'{e}rrez \cite{Cilia}) Given a Banach space $E$, the following assertions are equivalent: (i) $E^{\prime}$ is isomorphic to $l_{1}(\Gamma)$ for some $\Gamma.$ (ii) For all natural $m$ and every Banach space $F$, we have $\mathcal{P} _{d,1}(^{m}E;F)\subset\mathcal{P}_{N}(^{m}E;F).$ (iii) There is a natural $m$ such that for every $F$ we have $\mathcal{P} _{d,1}(^{m}E;F)\subset\mathcal{P}_{N}(^{m}E;F).$ (iv) There is a natural $m$ such that for every $F$ we have $\mathcal{P} _{d,1}(^{m}E;F)\cap\mathcal{P}_{K}(^{m}E;F)\subset\mathcal{P}_{N}(^{m}E;F).$ \end{theorem} Our results will provide a considerably longer list of characterizations of
Banach spaces whose duals are isomorphic to $l_{1}(\Gamma)$ and present different proofs for each assertion of Theorem \ref{T}. \begin{theorem} Given a Banach space $E$, the following assertions are equivalent: (i) $E^{\prime}$ is isomorphic to $l_{1}(\Gamma)$ for some $\Gamma.$ (ii) For all $m\in\mathbb{N}$ and every $F$, we have $\mathcal{P}_{d,1} (^{m}E;F)\subset\mathcal{P}_{\mathcal{L}[N]}(^{m}E;F).$ (iii) There is $m\in\mathbb{N}$ such that for every $F$ we have $\mathcal{P} _{d,1}(^{m}E;F)\subset\mathcal{P}_{\mathcal{L}[N]}(^{m}E;F).$ (iv) For all $m\in\mathbb{N}$ and every $F$, we have $\mathcal{P}_{d,1} (^{m}E;F)\subset\mathcal{P}_{N}(^{m}E;F).$ (v) There is $m\in\mathbb{N}$ such that for every $F$ we have $\mathcal{P} _{d,1}(^{m}E;F)\subset\mathcal{P}_{N}(^{m}E;F).$ (vi) For all $m\in\mathbb{N}$ and every $F$, we have $\mathcal{P}_{d,1} (^{m}E;F)\subset\mathcal{P}_{[N]}(^{m}E;F).$ (vii) There is $m\in\mathbb{N}$ such that for every $F$ we have $\mathcal{P} _{d,1}(^{m}E;F)\subset\mathcal{P}_{[N]}(^{m}E;F).$ (viii) For all $m\in\mathbb{N}$ and every $F$, we have $\mathcal{P} _{[as,1]}(^{m}E;F)\subset\mathcal{P}_{[N]}(^{m}E;F).$ (ix) There is $m\in\mathbb{N}$ such that for every $F$ we have $\mathcal{P} _{[as,1]}(^{m}E;F)\subset\mathcal{P}_{[N]}(^{m}E;F).$ (x) For all $m\in\mathbb{N}$ and every $F$ we have $\mathcal{P}_{d,1} (^{m}E;F)\cap\mathcal{P}_{K}(^{m}E;F)\subset\mathcal{P}_{N}(^{m}E;F).$ (xi) There is $m\in\mathbb{N}$ such that for every $F$ we have $\mathcal{P} _{d,1}(^{m}E;F)\cap\mathcal{P}_{K}(^{m}E;F)\subset\mathcal{P}_{N}(^{m}E;F).$ (xii) For all $m\in\mathbb{N}$ and every $F$, we have $\mathcal{P} _{[as,1]}(^{m}E;F)\cap\mathcal{P}_{[K]}(^{m}E;F)\subset\mathcal{P}_{[N]} (^{m}E;F).$ (xiii) There is $m\in\mathbb{N}$ such that for every $F$ we have $\mathcal{P}_{[as,1]}(^{m}E;F)\cap\mathcal{P}_{[K]}(^{m}E;F)\subset \mathcal{P}_{[N]}(^{m}E;F).$ (xiv) For all $m\in\mathbb{N}$ and every $F$, we have $\mathcal{P}_{d,1} 
(^{m}E;F)\cap\mathcal{P}_{[K]}(^{m}E;F)\subset\mathcal{P}_{\mathcal{L} [N]}(^{m}E;F).$ (xv) There is $m\in\mathbb{N}$ such that for every $F$ we have $\mathcal{P}_{d,1} (^{m}E;F)\cap\mathcal{P}_{[K]}(^{m}E;F)\subset\mathcal{P}_{\mathcal{L} [N]}(^{m}E;F).$ (xvi) For all $m\in\mathbb{N}$ and every $F$, we have $\mathcal{P} _{d,1}(^{m}E;F)\cap\mathcal{P}_{[K]}(^{m}E;F)\subset\mathcal{P}_{[N]} (^{m}E;F).$ (xvii) There is $m\in\mathbb{N}$ such that for every $F$ we have $\mathcal{P}_{d,1}(^{m}E;F)\cap\mathcal{P}_{[K]}(^{m}E;F)\subset \mathcal{P}_{[N]}(^{m}E;F).$ \end{theorem} Proof. (i)$\Rightarrow$(ii) is a consequence of the Theorem of Lewis-Stegall and Lemma \ref{aaa}. (ii)$\Rightarrow$(iii) is obvious. A direct computation gives $\mathcal{P}_{\mathcal{L}[N]}(^{m}E;F)\subset \mathcal{P}_{N}(^{m}E;F)$, and hence (ii)$\Rightarrow$(iv) and (iii)$\Rightarrow$(v); also (iv)$\Rightarrow$(v) is obvious. It is not hard to check that the ideals of nuclear polynomials and dominated polynomials satisfy (i) and (ii) of Lemma \ref{teoee}, respectively (these facts will be used several times in the present proof). Hence (v) implies $\mathcal{L}_{as,1}(E;F)\subset \mathcal{L}_{N}(E;F)$ and consequently we obtain (i). (ii)$\Rightarrow$(vi) holds because $\mathcal{P}_{\mathcal{L}[N]} (^{m}E;F)\subset\mathcal{P}_{[N]}(^{m}E;F)$ (it is true for arbitrary ideals of polynomials). (vi)$\Rightarrow$(vii) is obvious.
In order to prove (vii)$\Rightarrow$(i) it suffices to show that (vii) implies $\mathcal{L}_{as,1}(E;F)\subset\mathcal{L}_{N}(E;F).$ So, in order to apply Lemma \ref{teoee} we must show that whenever $P\in\mathcal{P}_{[N]}(^{m}E;F)$ we have $\overset{\vee}{P}(.,a,...,a)\in\mathcal{L}_{N}(E;F).$ In fact, if $P\in\mathcal{P}_{[N]}(^{m}E;F)$, we can find $(\varphi_{i})\subset E^{\prime }$ and $(y_{i})\subset\mathcal{L}(^{m-1}E;F)$ so that \[ \Psi_{1}^{(m)}(\overset{\vee}{P})(x)=\sum\limits_{i=1}^{\infty}\left[ \varphi_{i}(x)\right] y_{i}\text{ and }\sum\limits_{i=1}^{\infty}\left\Vert \varphi_{i}\right\Vert \left\Vert y_{i}\right\Vert <\infty. \] Thus \[ \overset{\vee}{P}(x,a,...,a)=\Psi_{1}^{(m)}(\overset{\vee}{P} )(x)(a,...,a)=\sum\limits_{i=1}^{\infty}\left[ \varphi_{i}(x)\right] y_{i}(a,...,a) \] and \[ \sum\limits_{i=1}^{\infty}\left\Vert \varphi_{i}\right\Vert \left\Vert y_{i}(a,...,a)\right\Vert \leq\sum\limits_{i=1}^{\infty}\left\Vert \varphi _{i}\right\Vert \left\Vert y_{i}\right\Vert \left\Vert a\right\Vert ^{m-1}<\infty. \] Hence $\mathcal{P}_{[N]}(^{m}E;F)$ satisfies (i) of Lemma \ref{teoee}. (i)$\Rightarrow$(viii) is due to the result of Lewis-Stegall and Lemma \ref{aaa}. (viii)$\Rightarrow$(ix) is obvious.
For the proof of (ix)$\Rightarrow$(i) one may realize that a standard use of Ky Fan's Lemma yields that a continuous (symmetric) multilinear mapping $T:E\times...\times E\rightarrow F$ is of type $[as,p]$ if, and only if, there exist $C\geq0$ and a regular probability measure $\mu\in P\left( B_{E^{\prime}}\right) ,$ such that \begin{equation} \left\Vert T\left( x_{1},...,x_{n}\right) \right\Vert \leq C\left\Vert x_{1}\right\Vert ...\left\Vert x_{n-1}\right\Vert \left[ \int_{B_{E^{\prime} }}\left\vert \varphi\left( x_{n}\right) \right\vert ^{p}d\mu\left( \varphi\right) \right] ^{\frac{1}{p}} \end{equation} and thus $\mathcal{P}_{[as,1]}(^{m}E;F)$ satisfies (ii) of Lemma \ref{teoee} and we conclude that (ix) implies $\mathcal{L}_{as,1}(E;F)\subset\mathcal{L} _{N}(E;F).$ Thus, the result of Lewis-Stegall completes the proof. The implications (iv)$\Rightarrow$(x)$\Rightarrow$(xi) are trivial. Since the ideal of compact polynomials also satisfies (ii) of Lemma \ref{teoee}, (xi) implies $\mathcal{L}_{as,1}(E;F)\cap\mathcal{L}_{K}(E;F)\subset\mathcal{L}_{N}(E;F)$ for every $F$ and thus we obtain (i). In order to prove (i)$\Rightarrow$(xii) we observe that (i) implies $\mathcal{L}_{as,1}(E;F)\cap\mathcal{L}_{K}(E;F)\subset\mathcal{L}_{N}(E;F)$ for every $F$ and thus Lemma \ref{aaa} asserts that (xii) holds. (xii)$\Rightarrow$(xiii) is obvious. In order to prove (xiii)$\Rightarrow$(i) we must show that $\mathcal{P} _{[K]}(^{m}E;F)$ satisfies (ii) of Lemma \ref{teoee}.
If $P\in\mathcal{P} _{[K]}(^{m}E;F)$ and $\varphi\in\mathcal{L}(E;\mathbb{K})$, we first prove that $R$ defined by $R(x_{1},...,x_{m+1})=\frac{1}{m+1}\varphi(x_{1})\overset{\vee }{P}(x_{2},...,x_{m+1})+\cdots+\frac{1}{m+1}\varphi(x_{m+1})\overset{\vee} {P}(x_{1},...,x_{m})$ is of type $[K].$ In fact, since \[ \Psi_{1}^{(m+1)}(R)(x)=\frac{1}{m+1}\varphi(x).\overset{\vee}{P}+\frac{m} {m+1}\varphi.\Psi_{1}^{(m)}(\overset{\vee}{P})(x), \] and since $\varphi$ and $\Psi_{1}^{(m)}(\overset{\vee}{P})$ are compact mappings, we conclude that $\Psi_{1}^{(m+1)}(R)$ is compact. Thus $R$ is of type $[K]$ and hence $\varphi.P\in\mathcal{P}_{[K]}(^{m+1}E;F)$. So, $\mathcal{P}_{[K]}(^{m}E;F)$ satisfies (ii) of Lemma \ref{teoee} and we conclude that $\mathcal{L}_{as,1}(E;F)\cap\mathcal{L}_{K}(E;F)\subset\mathcal{L} _{N}(E;F)$ and obtain (i). Since the ideal of compact operators is closed, injective and surjective, we have that $\mathcal{P}_{[K]}=\mathcal{P}_{\mathcal{L}[K]}$ and this fact will be used in each one of the next arguments. For the proof of (i)$\Rightarrow $(xiv) note that (i) implies $\mathcal{L}_{as,1}(E;F)\cap\mathcal{L} _{K}(E;F)\subset\mathcal{L}_{N}(E;F)$ and Lemma \ref{aaa} furnishes the proof. The proof that (xiv) implies (xv) is immediate. Since $\mathcal{P} _{\mathcal{L}[N]}(^{m}E;F)\subset\mathcal{P}_{[N]}(^{m}E;F)$ we obtain (xv)$\Rightarrow$(xvi). Finally, (xvi)$\Rightarrow$(xvii) is trivial and (xvii)$\Rightarrow$(i) is obtained by invoking Lemma \ref{novo}. $\Box$ It is worth remarking, for example, that $\mathcal{P}_{d,1}(^{n}E;F)$ and $\mathcal{P}_{[as,1]}(^{n}E;F)$ are different spaces, in general, showing that our results are different from the previous characterizations given in Theorems \ref{T2} and \ref{T}. The following example was suggested by Prof. M. C. Matos.
\begin{example} Define $P:l_{2}\rightarrow\mathbb{K}$ by $P(x)= {\displaystyle\sum\limits_{j=1}^{\infty}} \frac{1}{j^{\alpha}}x_{j}^{2}$ with $\alpha=\frac{1}{2}+\varepsilon$ and $0<\varepsilon<\frac{1}{2}.$ Then $P\in\mathcal{P}_{[as,1]}(^{2}l_{2};\mathbb{K})$ and $P\notin\mathcal{P}_{d,1}(^{2}l_{2};\mathbb{K}).$ \end{example} In fact, $\overset{\vee}{P}:l_{2}\times l_{2}\rightarrow\mathbb{K}$ is given by $\overset{\vee}{P}(x,y)=\sum\limits_{j=1}^{\infty}\frac{1}{j^{\alpha }}x_{j}y_{j}$ and $(\frac{1}{j^{\alpha}})_{j=1}^{\infty}\in l_{2}.$ It suffices to show that $\overset{\vee}{P}$ fails to be $1$-dominated and that $\Psi_{1}^{(2)}(\overset{\vee}{P})\in\mathcal{L}_{as,1}(l_{2};l_{2}).$ Since \[ \left( \sum\limits_{j=1}^{m}\left\Vert \overset{\vee}{P}(e_{j},e_{j} )\right\Vert ^{\frac{1}{2}}\right) ^{2}=\left[ \sum\limits_{j=1}^{m}\left( \frac{1}{j^{\alpha}}\right) ^{\frac{1}{2}}\right] ^{2}\geq\left[ \sum\limits_{j=1}^{m}\left( \frac{1}{m^{\frac{\alpha}{2}}}\right) \right] ^{2}=m^{2-\alpha}, \] if we had \[ \left( \sum\limits_{j=1}^{m}\left\Vert \overset{\vee}{P}(e_{j},e_{j} )\right\Vert ^{\frac{1}{2}}\right) ^{2}\leq C\left\Vert (e_{j})_{j=1} ^{m}\right\Vert _{w,1}^{2}, \] we would obtain $m^{2-\alpha}\leq C(m^{\frac{1}{2}})^{2}=Cm$, a contradiction since $\alpha<1.$ In order to prove that $\Psi_{1}^{(2)}(\overset{\vee}{P})\in\mathcal{L}_{as,1}(l_{2};l_{2}),$ observe that \[ \Psi_{1}^{(2)}(\overset{\vee}{P})((x_{j})_{j=1}^{\infty})=\left( \frac {1}{j^{\alpha}}x_{j}\right) _{j=1}^{\infty}. \] Now, a characterization of Hilbert--Schmidt operators due to Pe\l czy\'{n}ski (see \cite{Pelc}) asserts that it suffices to show that $\Psi_{1}^{(2)}(\overset{\vee}{P})$ is a Hilbert--Schmidt operator. But this is easy to check, since $\sum\limits_{k=1}^{\infty}\left\Vert \Psi_{1}^{(2)}(\overset{\vee} {P})(e_{k})\right\Vert _{l_{2}}^{2}=\sum\limits_{k=1}^{\infty}\left[ \frac {1}{k^{\alpha}}\right] ^{2}<\infty.$ The author wishes to acknowledge Professors M. C.
Matos, G. Botelho and E. \c{C}aliskan for important advice. \end{document}
\begin{document} \title{Suppressing decoherence and improving entanglement by quantum-jump-based feedback control in two-level systems} \author{S. C. Hou} \author{X. L. Huang} \author{X. X. Yi} \affiliation{School of Physics and Optoelectronic Technology,\\ Dalian University of Technology, Dalian 116024 China} \date{\today} \begin{abstract} We study the quantum-jump-based feedback control of the entanglement shared between two qubits, one of which is subject to decoherence while the other is under the control. This situation is very relevant to a quantum system consisting of nuclear and electron spins in the solid state. The possibility to prolong the coherence time of the dissipative qubit is also explored. Numerical simulations show that the quantum-jump-based feedback control can improve the entanglement between the qubits and prolong the coherence time of the qubit subject directly to decoherence. \end{abstract} \pacs{73.40.Gk, 03.65.Ud, 42.50.Pq}\maketitle \section{introduction} Superposition of states and entanglement make quantum information processing much different from its classical counterpart. However, a quantum system unavoidably interacts with its environment, resulting in a degradation of coherence and entanglement. For example, spontaneous emission in atomic qubits \cite{Roos} spoils the coherence of quantum states and limits the entanglement time. Recent experimental advances have enabled individual systems to be monitored and manipulated at the quantum level \cite{Puppe}. This makes quantum feedback control realizable. Among the feedback controls, homodyne-mediated feedback \cite{Wiseman,Wiseman2} and quantum-jump-based feedback have been proposed to generate steady-state entanglement in a cavity \cite{Wang,Carvalho}. These two feedback schemes are Markovian: the feedback signal, proportional to the quantum-jump detection record, is applied without delay.
Besides, these control schemes can also be used to suppress decoherence \cite{Viola,Katz,Ganesan,Zhang}. Meanwhile, researchers are looking for proper systems for the experimental implementation of quantum information processing. Among the various candidates, solid-state quantum devices based on superconductors \cite{Bertet} and lateral quantum dots \cite{Hayashi} are promising ones; however, the decoherence from intrinsic noise originating from two-level fluctuators is hard to engineer \cite{Rebentrost}. For this reason, nuclear spins have attracted considerable attention \cite{Vandersypen} due to their long coherence times \cite{Ladd}. But their weak interactions with other systems make the preparation, control, and detection of them difficult. Thanks to its intrinsic interaction with the nuclear spin, the electron spin can be used as an ancilla to access a single nuclear spin. This naturally leads us to raise the following question: can a feedback strategy be used to suppress decoherence, and to prepare and protect entanglement between the nuclear and electron spins, by controlling the electron spin? In this paper, we study this problem by considering a nuclear spin (as one qubit) coupled to an electron spin (as the other qubit) that is exposed to its environment. We show that a Markovian feedback based on quantum jumps can be used to suppress decoherence, produce entanglement and protect it. The paper is organized as follows: In Sec.~{\rm II}, we describe our model and present the dynamics in the absence of feedback. In Sec.~{\rm III}, we introduce the quantum-jump-based feedback control and give the dynamical equation under the feedback control. The effect of feedback control on decoherence and entanglement is discussed in Sec.~{\rm IV} and Sec.~{\rm V}, respectively. Sec.~{\rm VI} concludes our results. \section{model} Our system consists of a pair of two-level systems, called qubit 1 and qubit 2, where only qubit 2 interacts with its environment.
We present a scheme employing quantum-jump-based feedback control on qubit 2 to affect the decoherence of qubit 1 and to increase entanglement between the two qubits. The Hamiltonian of the system reads \begin{eqnarray} H=\frac{1}{2}\hbar\omega_1\sigma_{1}^z+\frac{1}{2}\hbar\omega_2\sigma_{2}^z+\hbar g(\sigma_1^+\sigma_2^-+\sigma_1^-\sigma_2^+) . \label{eqn:systemhamiltonian} \end{eqnarray} The first two terms represent the free Hamiltonian of the two qubits, and the last term describes their interaction under the rotating-wave approximation. $\omega_1$ and $\omega_2$ are the transition frequencies of the two qubits, and $g$ is their coupling strength. $\sigma_z$ is the Pauli matrix, i.e., $\sigma_z=\ket{e}\bra{e}-\ket{g}\bra{g}$, and $\sigma^+=\ket{e}\bra{g}$, $\sigma^-=\ket{g}\bra{e}.$ The state of this quantum system is described by the density operator $\rho$, obtained by tracing out the environment. The dynamics of open quantum systems can be described by quantum master equations. The most general form of the master equation for the density operator is \cite{quantumoptics,quantumnoise} \begin{eqnarray} \dot{\rho}=-\frac{i}{\hbar}[H,\rho]+\mathcal{L}(\rho), \label{eqn:masterequation} \end{eqnarray} where $H$ is the system Hamiltonian and $\mathcal{L}$ is a superoperator defined by $\mathcal{L}(\rho)=\Sigma_k\gamma_k(L_k\rho L_k^{\dag}-\frac{1}{2}L_k^\dag L_k\rho-\frac{1}{2}\rho L_k^\dag L_k),$ in which different $k$ label different dissipative channels. In our system, the first qubit is assumed to be isolated from the environment. The decoherence comes from the spontaneous emission of qubit 2 (the second qubit). This situation is of relevance to a system consisting of nuclear and electron spins in the aforementioned solid-state devices.
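As a sanity check, the two-qubit Hamiltonian of Eq.~(\ref{eqn:systemhamiltonian}) can be assembled numerically. The sketch below is our own (function name and the basis ordering $\{|e\rangle,|g\rangle\}$ are our choices, not from the paper):

```python
import numpy as np

# Single-qubit operators in the basis {|e>, |g>}
sz = np.array([[1, 0], [0, -1]], dtype=complex)   # sigma_z = |e><e| - |g><g|
sp = np.array([[0, 1], [0, 0]], dtype=complex)    # sigma^+ = |e><g|
sm = sp.conj().T                                  # sigma^- = |g><e|
I2 = np.eye(2, dtype=complex)

def hamiltonian(w1, w2, g, hbar=1.0):
    """Two-qubit Hamiltonian of Eq. (1) in the rotating-wave approximation."""
    return (0.5 * hbar * w1 * np.kron(sz, I2)
            + 0.5 * hbar * w2 * np.kron(I2, sz)
            + hbar * g * (np.kron(sp, sm) + np.kron(sm, sp)))

H = hamiltonian(1.0, 1.0, 1.0)
```

Note that the coupling term only mixes $|eg\rangle$ and $|ge\rangle$, so $H$ is block diagonal in the excitation number.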
The dynamics of such a system then reads \begin{eqnarray} \dot{\rho}=-\frac{i}{\hbar}[H,\rho]+\gamma(\sigma_2^-\rho\sigma_2^+- \frac{1}{2}\sigma_2^+\sigma_2^-\rho-\frac{1}{2}\rho \sigma_2^+\sigma_2^-) . \label{eqn:full} \end{eqnarray} Here $\sigma_2^{\pm}=I_1\otimes\sigma_2^{\pm}$. The second part of Eq.(\ref{eqn:full}) describes the dissipation of our system, with $\gamma$ the decay rate. Though the first qubit is assumed to be isolated from the environment, it still loses coherence due to the coupling to the second qubit. The decoherence process is shown by the decay of the off-diagonal elements of the reduced density matrix of the first qubit. In order to investigate this decoherence, we calculate the evolution of the system density operator $\rho$ and then trace out the second qubit to get the reduced density matrix \begin{eqnarray} \rho_1=\text{Tr}_2(\rho)=\sum_{k=e,g}{ }_2\langle k|\rho|k\rangle_2= \left( \begin{array}{cc} \rho_{ee} & \rho_{eg}\\ \rho_{ge} & \rho_{gg}\\ \end{array} \right). \end{eqnarray} The diagonal elements are the populations in the excited and ground states of the first qubit, and the off-diagonal elements represent its coherence. \section{Quantum-jump-based Feedback control} Quantum feedback control plays an increasingly important role in quantum information processing. It is widely used to create and stabilize entanglement as well as to combat decoherence \cite{Carvalho,Wang,Zhang,Katz}. In our model, the second qubit is used as an ancilla through which the feedback can affect the dynamics of the first qubit, i.e., by employing a feedback control on the second qubit, we control the first qubit. The goal is to suppress the decoherence of the first qubit and enhance the entanglement between the two qubits by a feedback control on the second qubit \cite{Carvalho}. Our feedback control strategy is based on quantum-jump detection. The master equation with feedback can be derived from the general measurement theory \cite{Wiseman2}.
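Equation~(\ref{eqn:full}) and the partial trace $\rho_1=\text{Tr}_2(\rho)$ are straightforward to integrate numerically. The following sketch is our own (a plain fourth-order Runge--Kutta integrator, with $\hbar=1$ and illustrative parameters $\omega_1=\omega_2=g=1$, $\gamma=0.5$); it reproduces the qualitative decay of $|\rho_{eg}|$:

```python
import numpy as np

sz = np.array([[1, 0], [0, -1]], dtype=complex)
sp = np.array([[0, 1], [0, 0]], dtype=complex)
sm = sp.conj().T
I2 = np.eye(2, dtype=complex)

w, g, gamma = 1.0, 1.0, 0.5
H = 0.5*w*np.kron(sz, I2) + 0.5*w*np.kron(I2, sz) \
    + g*(np.kron(sp, sm) + np.kron(sm, sp))
L = np.kron(I2, sm)                      # jump operator: decay of qubit 2

def rhs(rho):
    """Right-hand side of Eq. (3), hbar = 1."""
    diss = L @ rho @ L.conj().T - 0.5*(L.conj().T @ L @ rho + rho @ L.conj().T @ L)
    return -1j*(H @ rho - rho @ H) + gamma*diss

def evolve(rho, t, steps=4000):
    """Fourth-order Runge-Kutta integration of the master equation."""
    dt = t / steps
    for _ in range(steps):
        k1 = rhs(rho)
        k2 = rhs(rho + 0.5*dt*k1)
        k3 = rhs(rho + 0.5*dt*k2)
        k4 = rhs(rho + dt*k3)
        rho = rho + (dt/6)*(k1 + 2*k2 + 2*k3 + k4)
    return rho

def reduced_qubit1(rho):
    """Partial trace over qubit 2: rho_1 = Tr_2(rho)."""
    return rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)

rho0 = np.full((4, 4), 0.25, dtype=complex)   # |+>_1 |+>_2, as in Sec. IV
rho_t = evolve(rho0, 5.0)
```

The trace is preserved along the evolution, while the qubit-1 coherence $|\rho_{eg}|$ drops well below its initial value $1/2$.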
In our paper, Eq.(\ref{eqn:full}) is equivalent to \begin{eqnarray} \rho(t+dt)=\sum_{\alpha=0,1}\Omega_{\alpha}(dt)\rho(t)\Omega_{\alpha}^{\dag}(dt), \label{measure} \end{eqnarray} with \begin{eqnarray} \Omega_{1}(dt)=\sqrt{\gamma dt}\sigma_2^- \\\Omega_{0}(dt)=1-(\frac{i}{\hbar}H+\frac{1}{2}\gamma\sigma_2^+\sigma_2^-)dt. \label{measure2} \end{eqnarray} When the measurement result is $\alpha=1$, a detection occurs, which causes a finite evolution of the system via $\Omega_1(dt)$. This is called a quantum jump. The unnormalized density matrix then becomes $\tilde{\rho}_{\alpha=1}=\sigma_2^-\rho(t)\sigma_2^+dt$. The feedback control is added by giving $\tilde{\rho}_{\alpha=1}$ a finite unitary evolution, after which it becomes $\tilde{\rho}_{\alpha=1}=F\sigma_2^-\rho(t)\sigma_2^+F^{\dag}dt$. In the limit that the feedback acts immediately after a detection and in a very short time (much smaller than the time scale of the system's evolution), the master equation is Markovian, \begin{eqnarray} \dot{\rho}=-\frac{i}{\hbar}[H,\rho]+\gamma(F\sigma_2^-\rho\sigma_2^+F^\dag- \frac{1}{2}\sigma_2^+\sigma_2^-\rho-\frac{1}{2}\rho \sigma_2^+\sigma_2^-). \label{eqn:controlled} \end{eqnarray} Here $F=e^{iH_f}$ and $H_f=-\frac{1}{\hbar}H_f't_f.$ We see that the operator $H_f$ contains a relatively large operator $H_f'$ multiplied by a very short time $t_f$ (Markovian assumption), but the product represents a certain amount of evolution, so it is convenient to discuss $H_f$ instead of $H_f'$ and $t_f$. Here $H_f$ is a $2\times 2$ Hermitian operator which can be decomposed in terms of Pauli matrices, $H_f=A_x\sigma_x+A_y\sigma_y+A_z\sigma_z$ ($A_x,A_y,A_z$ real numbers). So we have, \begin{eqnarray} F=I_1\otimes e^{i\vec{A}\cdot\vec{\sigma}}=I_1\otimes(\cos|\vec{A}|+i\frac{\sin|\vec{A}|}{|\vec{A}|} \vec{A}\cdot\vec{\sigma}).
\label{eqn:feedback} \end{eqnarray} Here $\vec{\sigma}=(\sigma_{x},\sigma_{y},\sigma_{z})$ and $\vec{A}=(A_x,A_y,A_z)$, representing the amplitudes of the $\sigma_x,\sigma_y$ and $\sigma_z$ controls. In order to understand the physical meaning of the feedback operator $F$, we rewrite it as $F=I_1\otimes e^{-i\frac{\omega}{2}\vec{n}\cdot\vec{\sigma}}$, where $\vec{n}=(\sin{\theta}\cos{\phi},\sin{\theta}\sin{\phi}, \cos{\theta})$ and $\vec{\sigma}=(\sigma_x,\sigma_y,\sigma_z)$; this feedback operator is equivalent to a time evolution with evolution operator $F=I_1\otimes e^{iH_f}$. It is clear that the operator $F$ rotates the Bloch vector of the second qubit by the angle $\omega$ around the $\vec{n}$ axis. The relationship between the two forms of $F$ is $A_x=-\frac{\omega}{2}\sin{\theta}\cos{\phi}, A_y=-\frac{\omega}{2}\sin{\theta}\sin{\phi}, A_z=-\frac{\omega}{2}\cos{\theta}.$ So a $\sigma_x$ control ($A_y=0,A_z=0$) means rotating the Bloch vector by a certain angle around the $x$ axis of the Bloch sphere, and similarly for the $A_y$ and $A_z$ controls. Different $\vec{A}$ represent different feedback evolutions, i.e., rotations of the Bloch vector by a particular angle around a particular axis of the Bloch sphere. For simplicity, we discuss the $\sigma_x, \sigma_y,\sigma_z$ controls one by one in the following. This control mechanism has the advantage of being simple to apply in practice, since it does not need real-time state estimation as Bayesian feedback control does \cite{Wiseman3}. The emission of the second qubit is measured by a photodetector, whose signal provides the information to design the control $F$. In this kind of monitoring, the absence of a signal dominates the dynamics, and the control is triggered only after a detection click, i.e. a quantum jump, occurs. \section{Decoherence suppression} Before investigating the influence of the feedback control, we first analyze the evolution of our system without control.
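The closed form of $F$ in Eq.~(\ref{eqn:feedback}) is the standard $SU(2)$ exponential identity, which holds because $(\vec{A}\cdot\vec{\sigma})^2=|\vec{A}|^2 I$. It can be cross-checked against a direct matrix exponential; a minimal sketch (helper names are ours):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def feedback_unitary(A):
    """exp(i A.sigma) = cos|A| I + i (sin|A|/|A|) A.sigma, as in Eq. (8)."""
    A = np.asarray(A, dtype=float)
    a = np.linalg.norm(A)
    if a == 0.0:
        return np.eye(2, dtype=complex)
    A_dot_sigma = A[0]*sx + A[1]*sy + A[2]*sz
    return np.cos(a)*np.eye(2, dtype=complex) + 1j*(np.sin(a)/a)*A_dot_sigma

def expm_herm(M):
    """exp(i M) for Hermitian M, via eigendecomposition (reference value)."""
    vals, vecs = np.linalg.eigh(M)
    return (vecs * np.exp(1j*vals)) @ vecs.conj().T
```

Both routes give the same unitary for any real $\vec{A}$.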
Assume that the two qubits are initially in the same pure superposition state, for example, $|\psi\rangle=\frac{1}{\sqrt{2}}(|e\rangle_1+|g\rangle_1) \otimes\frac{1}{\sqrt{2}}(|e\rangle_2+|g\rangle_2)$. The corresponding density matrix is, \begin{eqnarray} \rho_0=|\psi\rangle\langle\psi|=\frac{1}{4} \left( \begin{array}{cccc} 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 \\ \end{array} \right). \end{eqnarray} We set the Planck constant $\hbar$ to 1, $\omega_1=\omega_2=\omega$ in Eq.(\ref{eqn:systemhamiltonian}), and $g/\omega=1,\gamma/\omega=0.5$. After numerical calculation, we get the evolution of the density matrix of the first qubit without control. Since $\rho_{eg}=\rho_{ge}^*, \rho_{ee}+\rho_{gg}=1$, we only discuss the coherence $|\rho_{eg}|$ and the excited-state population $\rho_{ee}$ for simplicity. The evolution of $|\rho_{eg}|$ and $\rho_{ee}$ without control is depicted in Fig.\ref{FIG:rhovs} (a) and (b) (dashed lines). In Fig.\ref{FIG:rhovs} (a), a fast decay of $|\rho_{eg}|$ (dashed line) can be found. This demonstrates that the first qubit loses coherence due to the second qubit's spontaneous emission and their interaction. Meanwhile, the first qubit loses energy due to the coupling to the second qubit (Fig.\ref{FIG:rhovs} (b), dashed line). The results also show that the populations in the excited and ground states decay away. This is because the first qubit exchanges energy with the second qubit, see Eq.(\ref{eqn:systemhamiltonian}). \begin{figure} \caption{(a) Time evolution of $|\rho_{eg}|$ and (b) $\rho_{ee}$ of the first qubit, without control (dashed lines) and with feedback control (solid lines).} \label{FIG:rhovs} \end{figure} Now we add the feedback control $F$ to our system; the master equation then becomes Eq.(\ref{eqn:controlled}). Our system is initially in the state $\rho_0$, and the other parameters remain unchanged.
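A hedged numerical sketch of the controlled master equation Eq.~(\ref{eqn:controlled}): since $F$ is unitary, the feedback only redirects the quantum jump, so the generator remains trace preserving. All helper names below are ours:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sp = np.array([[0, 1], [0, 0]], dtype=complex)
sm = sp.conj().T
I2 = np.eye(2, dtype=complex)

w, g, gamma = 1.0, 1.0, 0.5
H = 0.5*w*np.kron(sz, I2) + 0.5*w*np.kron(I2, sz) \
    + g*(np.kron(sp, sm) + np.kron(sm, sp))
L = np.kron(I2, sm)

def expm_herm(M):
    """exp(i M) for Hermitian M, via eigendecomposition."""
    vals, vecs = np.linalg.eigh(M)
    return (vecs * np.exp(1j*vals)) @ vecs.conj().T

def controlled_rhs(rho, F):
    """Right-hand side of the feedback master equation Eq. (7)."""
    jump = F @ L @ rho @ L.conj().T @ F.conj().T
    antic = 0.5*(L.conj().T @ L @ rho + rho @ L.conj().T @ L)
    return -1j*(H @ rho - rho @ H) + gamma*(jump - antic)

F = np.kron(I2, expm_herm(1.2*sx))   # sigma_x control with A_x = 1.2, A_y = A_z = 0
```

Plugging this right-hand side into any integrator (such as the Runge--Kutta loop used for the uncontrolled case) gives the controlled evolution.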
\begin{figure} \caption{The evolution of the absolute value of the first qubit's off-diagonal element for different control parameters, with $g/\omega=1,\gamma/\omega=0.5$ and $t$ in units of $\frac{1}{\omega}$.} \label{FIG:rho3} \end{figure} We first analyze the $\sigma_x$ control by choosing the feedback amplitude $A_x=0\thicksim\pi, A_y=A_z=0$. Note that when $A_y=A_z=0,$ the feedback amplitude $A_x$ influences the system's evolution with a period of $\pi$, which comes from the term $F\sigma_2^-\rho\sigma_2^+F^{\dag}$ in Eq.(\ref{eqn:feedback}). It can be proved analytically that $e^{iA_x\sigma_x}\sigma^-\rho_2\sigma^+ e^{-iA_x\sigma_x}=e^{i(A_x+\pi) \sigma_x}\sigma^-\rho_2\sigma^+e^{-i(A_x+\pi)\sigma_x}$ and $e^{iA_y\sigma_y}\sigma^-\rho_2\sigma^+e^{-iA_y\sigma_y}= e^{i(A_y+\pi)\sigma_y}\sigma^-\rho_2\sigma^+e^{-i(A_y+\pi)\sigma_y}$ for any $A_x$ and $A_y$. Here $\rho_2$ is the reduced density matrix of the second qubit. The absolute value of the first qubit's off-diagonal density matrix element evolves as shown in Fig.\ref{FIG:rho3} (a). The figure indicates that, for an appropriate feedback amplitude, $A_x\approx1.3$ or $A_x\approx1.9$, the absolute value of the off-diagonal element can be evidently enhanced compared with the uncontrolled case ($A_x=0$). That means the decoherence is partially suppressed. The improvement of coherence caused by feedback is shown explicitly in Fig.\ref{FIG:rhovs} (a). We plot $|\rho_{eg}|$, representing the coherence of the first qubit, as a function of time with $A_x=1.2,A_y=A_z=0$ (a selected controlled case). In comparison with the uncontrolled case, a stronger oscillation amplitude and a longer decoherence time appear. Meanwhile, $\rho_{ee}$ decays more slowly than in the uncontrolled case, as shown in Fig.\ref{FIG:rhovs} (b). Similarly, the $\sigma_y$ control is also able to slow down the decay of $|\rho_{eg}|$. We take $A_y=0\thicksim\pi, A_x=A_z=0$. The numerical results for $|\rho_{eg}|$ are shown in Fig.\ref{FIG:rho3} (b).
Unlike the $\sigma_x$ and $\sigma_y$ controls, the $\sigma_z$ control ($A_z=0\thicksim\pi, A_x=A_y=0$) has no effect on the evolution of the system, as shown in Fig.\ref{FIG:rho3} (c). This is because $e^{iA_z\sigma_z}\sigma^-\rho_2\sigma^+ e^{-iA_z\sigma_z}=\sigma^-\rho_2\sigma^+$ for any $A_z$. The physics behind this result is that after emitting a photon, the controlled qubit must stay in the ground state, with the Bloch vector pointing to the bottom of the Bloch sphere, so a rotation around the $z$ axis does not change the Bloch vector, i.e., the state of the qubit remains unchanged. The present results show that decoherence of the first qubit can be suppressed by controlling its partner. The decoherence source in our system is the spontaneous emission of the second qubit; once the detector detects a photon, i.e. a quantum jump of the second qubit happens, the feedback instantaneously acts on the second qubit, and the first qubit is then affected through the coupling of the two qubits. The feedback control scheme can reduce the destructive effects of decoherence and slow down the dissipation of energy. The control effect depends on the coupling strength $g$. When $g$ is small, the first qubit is hardly affected by the second qubit, so it is hard to prepare, measure and control the state of the first qubit. As the interaction becomes stronger, the effect of feedback control becomes more evident. For the case discussed in Fig.\ref{FIG:rhovs}, the first qubit is dissipative. We found that when the control parameters are chosen as $A_x=\frac{\pi}{2},A_y=A_z=0$, or $A_y=\frac{\pi}{2},A_x=A_z=0$, with the two qubits initially prepared in the same state, the decoherence dynamics turns into a phase-damping type. The populations in the ground and excited states do not change, while the off-diagonal elements evolve in the same way as in the uncontrolled case. We show this on a Bloch sphere \cite{Altafini} in Fig.\ref{FIG:bloch}.
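Both operator identities invoked above (the $\pi$-periodicity of the $\sigma_x$ and $\sigma_y$ controls, and the complete insensitivity to the $\sigma_z$ control) follow from $e^{i\pi\sigma_{x,y}}=-I$ and from $\sigma^-\rho_2\sigma^+\propto|g\rangle\langle g|$. A quick numerical confirmation on a random single-qubit state (helper names ours):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sp = np.array([[0, 1], [0, 0]], dtype=complex)
sm = sp.conj().T

def expm_herm(M):
    """exp(i M) for Hermitian M."""
    vals, vecs = np.linalg.eigh(M)
    return (vecs * np.exp(1j*vals)) @ vecs.conj().T

def rotated_jump(rho2, M):
    """U (sigma^- rho2 sigma^+) U^dag with U = exp(i M)."""
    U = expm_herm(M)
    return U @ (sm @ rho2 @ sp) @ U.conj().T

rng = np.random.default_rng(1)
A = rng.normal(size=(2, 2)) + 1j*rng.normal(size=(2, 2))
rho2 = A @ A.conj().T
rho2 /= np.trace(rho2)            # random single-qubit density matrix
```

The same check works for $\sigma_y$ in place of $\sigma_x$.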
Here the reduced density matrix of the first qubit can be written as $\rho_1=\frac{1}{2}(I+\vec{P}\cdot\vec{\sigma})$. We can get the polarization vector components $P_x=\text{Tr}(\sigma_x\rho_1)$, $P_y=\text{Tr}(\sigma_y\rho_1)$ and $P_z=\text{Tr}(\sigma_z\rho_1)$. \begin{figure} \caption{Polarization vector evolution on the Bloch sphere for feedback amplitude $A_x=\frac{\pi}{2}$, $A_y=A_z=0$.} \label{FIG:bloch} \end{figure} \section{Entanglement control} \begin{figure} \caption{(a) Concurrence as a function of time and $A_y$. The system is initially in the state $|\psi\rangle=|g\rangle_1|e\rangle_2$, for the parameters $g/\omega=1,\gamma/\omega=0.5$. (b) A controlled evolution for $A_y=0.5\pi,A_x=A_z=0$ vs. the uncontrolled case. The entanglement is improved by choosing an appropriate feedback. $t$ is in units of $\frac{1}{\omega}$.} \label{FIG:cge} \end{figure} Quantum feedback control has recently been used to improve the creation of steady-state entanglement in open quantum systems. A highly entangled state of two qubits in a cavity can be produced with an appropriate selection of the feedback Hamiltonian and detection strategy \cite{Carvalho,Carvalho2}. We will show that the quantum-jump-based feedback scheme can produce and improve entanglement in our model. We choose the concurrence \cite{Wootters} as a measure of entanglement. For a mixed state represented by the density matrix $\rho$, the ``spin-flipped'' density operator reads \begin{eqnarray} \tilde{\rho}=(\sigma_y\otimes\sigma_y)\rho^*(\sigma_y\otimes\sigma_y) \end{eqnarray} where the $*$ denotes the complex conjugate of $\rho$ in the basis $\{|gg\rangle, |ge\rangle, |eg\rangle, |ee\rangle\}$, and $\sigma_y$ is the usual Pauli matrix. The concurrence of the density matrix $\rho$ is defined as \begin{eqnarray} C(\rho)=\max{(\sqrt{\lambda_1}-\sqrt{\lambda_2} -\sqrt{\lambda_3}-\sqrt{\lambda_4},0)}.
\end{eqnarray} where the $\lambda_i$ are the eigenvalues of the matrix $\rho\tilde{\rho}$, sorted in decreasing order $\lambda_1\geq\lambda_2\geq\lambda_3\geq\lambda_4$. The concurrence ranges from 0 to 1, and $C=1$ represents maximal entanglement. In the absence of spontaneous emission, i.e. $\gamma=0$, the system evolves without dissipation. We find that for the system initially in a separable state other than $|\psi\rangle=|e\rangle_1|e\rangle_2$ or $|\psi\rangle=|g\rangle_1|g\rangle_2$ (eigenstates of the system Hamiltonian $H$), an entangled state can be generated due to the interaction between the two qubits. The amount of entanglement depends on the initial state of the system and the coupling strength $g$. But when the spontaneous emission is taken into account, the performance of entanglement preparation gets considerably worse. \begin{figure} \caption{(a) Concurrence as a function of time and $A_y$. The system is initially in the state $|\psi\rangle=|e\rangle_1|e\rangle_2$, for the parameters $g/\omega=1,\gamma/\omega=0.5$. (b) The controlled concurrence evolution for $A_y=1.2,A_x=A_z=0$ vs. the uncontrolled case. $t$ is in units of $\frac{1}{\omega}$.} \label{FIG:cee} \end{figure} Now we investigate whether our feedback control strategy can improve the entanglement preparation in the presence of spontaneous emission of the second qubit. The master equation with control is Eq.(\ref{eqn:controlled}). The effect of the feedback control depends on the choices of the feedback parameters $A_x, A_y, A_z$, the coupling strength $g$, and the initial state. Here we present two typical results for two different initial states. Our first choice is the initial state $|\psi\rangle=|g\rangle_1|e\rangle_2$ with $\sigma_y$ control for $A_y=0\thicksim\pi, A_x=0, A_z=0$.
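Wootters' concurrence is straightforward to implement. The sketch below (our own helper, written in any consistent two-qubit product basis) returns $C=1$ on a Bell state and $C=0$ on a product state:

```python
import numpy as np

sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

def concurrence(rho):
    """Wootters concurrence of a two-qubit density matrix."""
    yy = np.kron(sy, sy)
    rho_tilde = yy @ rho.conj() @ yy              # spin-flipped density operator
    lam = np.linalg.eigvals(rho @ rho_tilde).real
    lam = np.sort(np.sqrt(np.clip(lam, 0.0, None)))[::-1]
    return max(lam[0] - lam[1] - lam[2] - lam[3], 0.0)
```

The `clip` guards against tiny negative eigenvalues produced by floating-point round-off.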
The concurrence evolution is plotted as a function of time and feedback amplitude $A_y$ in Fig.\ref{FIG:cge} (a), and Fig.\ref{FIG:cge} (b) shows the concurrence evolution for a selected feedback amplitude compared with the uncontrolled case. We see that entangled states can be generated for any feedback parameters, but the entanglement decreases with time because of the dissipative effect. When an appropriate feedback amplitude $A_y\approx0.9$ is chosen, the concurrence amplitude is remarkably enhanced, and the entanglement lasts for a long time. For the system initially in the state $|\psi\rangle=|e\rangle_1|e\rangle_2$ with $\sigma_y$ control, the dynamics of the concurrence is shown in Fig.\ref{FIG:cee} (a). Note that in the absence of spontaneous emission this is a steady state of the system: the density matrix elements do not change with time. Fig.\ref{FIG:cee} (a) demonstrates that the dissipation and feedback can produce entanglement. We show this explicitly in Fig.\ref{FIG:cee} (b) by choosing the feedback amplitude $A_y=1.2$. We can see that for a proper feedback amplitude, after an entanglement death, a larger amount of entanglement is regenerated. The above results show that the feedback control strategy can be used to prepare and protect entanglement in our model. The effect of entanglement control strongly depends on the initial state. For a given initial state, we found that the $\sigma_x$ control and the $\sigma_y$ control have similar effects, but the $\sigma_z$ control does not work. \section{Conclusion and remarks} In this paper, we studied the effect of quantum-jump-based feedback control on a system consisting of two qubits where only one of them is subject to decoherence. By numerical simulation, we found that it is possible to suppress decoherence of the first qubit by a local control on the second qubit. We observed that the decoherence time of the first qubit is increased remarkably.
The control scheme can also be used to protect the entanglement between the two qubits. These features can be understood as follows: the feedback control changes the dissipative dynamics of the system through the quantum-jump operators. We would like to note that the Hamiltonian Eq.(\ref{eqn:systemhamiltonian}) does not describe the hyperfine interaction. However, with recent technology we can simulate Hamiltonian Eq.(\ref{eqn:systemhamiltonian}) in nuclear-electron spin systems; in this sense, the scheme presented here is available for nuclear-electron spin systems. On the other hand, by using the hyperfine interaction Hamiltonian, our further simulations show that we can obtain results similar to those for Hamiltonian Eq.(\ref{eqn:systemhamiltonian}). \begin{references} \bibitem{Roos} C. F. Roos, G. P. T. Lancaster, M. Riebe, H. H\"{a}ffner, W. H\"{a}nsel, S. Gulde, C. Becher, J. Eschner, F. Schmidt-Kaler, and R. Blatt, Phys. Rev. Lett. \textbf{92}, 220402 (2004). \bibitem{Puppe} T. Puppe, I. Schuster, A. Grothe, A. Kubanek, K. Murr, P. W. H. Pinkse, and G. Rempe, Phys. Rev. Lett. \textbf{99}, 013002 (2007). \bibitem{Wiseman} H. M. Wiseman and G. J. Milburn, Phys. Rev. Lett. \textbf{70}, 548 (1993). \bibitem{Wiseman2} H. M. Wiseman, Phys. Rev. A \textbf{49}, 2133 (1994). \bibitem{Wang} J. Wang, H. M. Wiseman, and G. J. Milburn, Phys. Rev. A \textbf{71}, 042309 (2005). \bibitem{Carvalho} A. R. R. Carvalho and J. J. Hope, Phys. Rev. A \textbf{76}, 010301 (2007). \bibitem{Viola} L. Viola and S. Lloyd, Phys. Rev. A \textbf{58}, 2733 (1998). \bibitem{Katz} G. Katz, M. A. Ratner, and R. Kosloff, Phys. Rev. Lett. \textbf{98}, 203006 (2007). \bibitem{Ganesan} N. Ganesan and T. Tarn, Phys. Rev. A \textbf{75}, 032323 (2007). \bibitem{Zhang} J. Zhang, C. Li, R. Wu, T. Tarn, and X. Liu, J. Phys. A \textbf{38}, 6587-6601 (2005). \bibitem{Bertet} P. Bertet, I. Chiorescu, G. Burkard, K. Semba, C. J. P. M. Harmans, D. P. DiVincenzo, and J. E. Mooij, Phys. Rev. Lett.
\textbf{95}, 257002 (2005). \bibitem{Hayashi} T. Hayashi, T. Fujisawa, H. D. Cheong, Y. H. Jeong, and Y. Hirayama, Phys. Rev. Lett. \textbf{91}, 226804 (2003). \bibitem{Rebentrost} P. Rebentrost, I. Serban, T. Schulte-Herbr\"{u}ggen, and F. K. Wilhelm, Phys. Rev. Lett. \textbf{102}, 090401 (2009). \bibitem{Vandersypen} L. Vandersypen and I. Chuang, Rev. Mod. Phys. \textbf{76}, 1037 (2004). \bibitem{Ladd} T. Ladd, D. Maryenko, Y. Yamamoto, E. Abe, and K. Itoh, Phys. Rev. B \textbf{71}, 014401 (2005). \bibitem{quantumoptics} M. O. Scully and M. S. Zubairy, {\it Quantum Optics} (Cambridge University Press, Cambridge, 1997). \bibitem{quantumnoise} C. W. Gardiner and P. Zoller, {\it Quantum Noise} (Springer-Verlag, Berlin, 1991). \bibitem{Wiseman3} H. M. Wiseman, S. Mancini, and J. Wang, Phys. Rev. A \textbf{66}, 013807 (2002). \bibitem{Altafini} C. Altafini, J. Math. Phys. \textbf{44}, 2357 (2003). \bibitem{Carvalho2} A. R. R. Carvalho, A. J. S. Reid, and J. J. Hope, Phys. Rev. A \textbf{78}, 012334 (2008). \bibitem{Wootters} W. K. Wootters, Phys. Rev. Lett. \textbf{80}, 2245 (1998). \end{references} \end{document}
\begin{document} \title{The splitting number can be smaller than the matrix chaos number} \author{Heike Mildenberger and Saharon Shelah} \thanks{The first author was supported by a Minerva fellowship.} \thanks{The second author's research was partially supported by the ``Israel Science Foundation'', founded by the Israel Academy of Science and Humanities. This is the second author's work number 753} \address{Heike Mildenberger, Saharon Shelah, Institute of Mathematics, The Hebrew University of Jerusalem, Givat Ram, 91904 Jerusalem, Israel } \email{[email protected]} \email{[email protected]} \begin{abstract} Let $\chi$ be the minimum cardinal of a subset of $2^\omega$ that cannot be made convergent by multiplication with a single Toeplitz matrix. By an application of creature forcing we show that $\gs < \chi$ is consistent. We thus answer a question by \Vo. We give two kinds of models for the strict inequality. The first is the combination of an $\aleph_2$-iteration of some proper forcing with adding $\aleph_1$ random reals. The second kind of models is got by adding $\delta$ random reals to a model of $\MA_{<\kappa}$ for some $\delta \in [\aleph_1,\kappa)$. It was a conjecture of Blass that $\gs=\aleph_1 < \chi = \kappa$ holds in such a model. For the analysis of the second model we again use the creature forcing from the first model. \end{abstract} \subjclass{03E15, 03E17, 03E35, 03D65} \maketitle \newcommand{\LOC}{{\mathbb L}} \setcounter{section}{-1} \section{Introduction} We consider products of $\omega\times \omega$ matrices $A=(a_{i,j})_{i,j\in\omega}$ and functions from $\omega$ to 2 or to some bounded interval of the reals. The product $A \cdot f$ is defined as usual in linear algebra, i.e., $(A \cdot f)(i) = \sum_{j \in \omega} a_{i,j} \cdot f(j)$. We define \[ A\lim f := \lim_{i\to\infty} \sum_{j=0}^{\infty} (a_{i,j}\cdot f(j)).
\] Toeplitz (cf.\ \cite{Cooke}) showed: $A\lim$ is an extension of the ordinary limit iff $A$ is a regular matrix\index{regular matrix}, i.e.\ iff $\exists m \; \forall i\; \sum_{j=0}^{\infty} |a_{i,j}| < m$ and $\lim_{i \to \infty} \sum_{j=0}^{\infty} a_{i,j} = 1$ and $\forall j \; \lim_{i \to \infty} a_{i,j} = 0$. Regular matrices are also called Toeplitz\index{Toeplitz matrix} matrices. We are interested in whether for many $f$'s simultaneously there is one $A$ such that all $A\lim f$ exist, and formulate our question in terms of cardinal characteristics. Let $\ell^\infty$ denote the set of bounded real sequences, and let $\mathbb M$ denote the set of all Toeplitz matrices. \Vo\ \cite{Vojtas88} defined for ${\mathbb A} \subseteq {\mathbb M}$ the chaos relations $\chi_{{\mathbb A}, \infty}$ and their norms $\Vert \chi_{{\mathbb A}, \infty} \Vert$ \begin{eqnarray*} \chi_{{\mathbb A},\infty} &=& \{(A,f) \such A \in {\mathbb A} \;\wedge\; f \in \ell^\infty \;\wedge\; A \lim f \mbox{ does not exist}\},\\ \Vert \chi_{{\mathbb A},\infty} \Vert &=& \min \{ |{\mathcal F}| \such {\mathcal F} \subseteq \ell^\infty \: \wedge \\ && \makebox[2cm]{} (\forall A \in {\mathbb A}) \; (\exists f \in {\mathcal F}) \; A \lim f \mbox{ does not exist}\}. \end{eqnarray*} By replacing $\ell^\infty$ by ${}^\omega 2$, the set of $\omega$-sequences with values in 2, we get the variations $\chi_{{\mathbb A},2}$. In \cite{Mi6} we showed that for the cardinals we are interested in, ${}^\omega 2$ and $\ell^\infty$ give the same result. From now on we shall work with ${}^\omega 2$. \Vo\ (cf.\ \cite{Vo2}) also gave some bounds valid for any ${\mathbb A}$ that contains at least all matrices which have exactly one non-zero entry in each line: \[ \gs \leq \Vert\chi_{{\mathbb A},2}\Vert \leq \gb \cdot \gs.\] We write $\chi$ for $\Vert\chi_{{\mathbb M},2}\Vert$. In \cite{Mi6} we showed that $\chi < \gb \cdot \gs$ is consistent relative to \zfc.
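As a concrete illustration of these definitions (ours, not from the paper): the Cesàro averaging matrix $a_{i,j}=\tfrac{1}{i+1}$ for $j\le i$ (and $0$ otherwise) is regular, and it makes the divergent sequence $0,1,0,1,\dots$ $A$-convergent to $\tfrac12$. A finite truncation in Python:

```python
import numpy as np

def cesaro_row(i, n):
    """Row i of the Cesaro matrix a_{i,j} = 1/(i+1) for j <= i, truncated to n columns."""
    row = np.zeros(n)
    row[:i + 1] = 1.0 / (i + 1)
    return row

n = 10000
f = np.array([j % 2 for j in range(n)], dtype=float)   # divergent 0,1,0,1,...
means = np.array([cesaro_row(i, n) @ f for i in (999, 9999)])
```

Each row sums to $1$ and the entries of any fixed column tend to $0$, so the matrix satisfies Toeplitz's regularity conditions on this truncation.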
Here, we show the complementary consistency result, that $\gs < \chi$ is consistent. We get the convergence with positive matrices. We now recall the definitions of the cardinal characteristics $\gb$ and $\gs$ involved: The order of eventual dominance $\leq^\ast$ is defined as follows: For $f,g \in \omega^\omega$ we say $f \leq^\ast g$ if there is $k \in \omega$ such that for all $n \geq k$ we have $f(n) \leq g(n)$. The unbounding number $\gb$ is the smallest size of a subset ${\mathcal B} \subseteq {}^\omega \omega$ such that for each $f \in {}^\omega \omega$ there is some $b \in {\mathcal B}$ such that $b \not\leq^\ast f$. The splitting number $\gs$ is the smallest size of a subset ${\mathcal S} \subseteq [\omega]^\omega$ such that for each $X \in \potinf$ there is some $S \in {\mathcal S}$ such that $X \cap S$ and $X \setminus S$ are both infinite. The latter is expressed as ``$S$ splits $X$'', and $\mathcal S$ is called a splitting family. For more information on these cardinal characteristics, we refer the reader to the survey articles \cite{Blasshandbook, vanDouwen, Vaughan}. If $A \lim f$ exists, then also $A' \lim f$ exists for any $A'$ that is gotten from $A$ by erasing rows and moving the remaining (infinitely many) rows together. We may further change $A'$ by keeping only finitely many non-zero entries in each row, such that the neglected ones have a negligible absolute sum, and then possibly rescaling the remaining ones such that they again sum up to 1. Hence, after possibly deleting further rows, we may restrict the set of Toeplitz matrices to linear Toeplitz matrices.
A matrix is linear iff each column $j$ has at most one entry $a_{i,j} \neq 0$ and for $j < j'$ the $i$ with $a_{i,j} \neq 0$ is smaller than or equal to the $i$ with $a_{i,j'} \neq 0$ if both exist; in a picture: \begin{small} \begin{equation*} \begin{pmatrix} c_0(0) & \dots c_0(\mup(c_0) -1) & 0 & \dots & 0 & 0 & \dots\\ 0 & \dots 0 & c_1(\mdn(c_1)) & \dots & c_1(\mup(c_1) -1) & 0 & \dots\\ 0 & \dots 0 & 0 &\dots& 0 & c_2(\mdn(c_2)) &\dots \\ \vdots \end{pmatrix} \end{equation*} \end{small} Linear matrices can be naturally (as in the picture) read as $(c_n \such n \in \omega)$ where $c_n \colon [\mdn(c_n), \mup(c_n)) \to [0,1]$, $c_n(j) = a_{n,j}$, give the finitely many non-zero entries in row $n$, and $\mup(c_{n-1}) = \mdn(c_n)$. The $c_n$ are special instances of the weak creatures in the sense of \cite{RoSh:470}. In the next two sections we shall show: The $c_n$'s coming from the trunks of the conditions in the generic filter of our forcing $Q$ give matrices that, after multiplication, make members of ${}^\omega 2$ from the ground model and members of ${}^\omega 2$ of any random extension convergent. \section{A creature forcing}\label{S1} In this section, we give a self-contained description of the creature forcing $Q$ which is the main tool for building the two kinds of models in the next section. Moreover, we explain the connections and give the references to \cite{RoSh:470}, so that the reader can identify it as a special case of an extensive framework. \begin{definition}\label{1.1} a) We define a notion of forcing $Q$. Its members $p$ are of the form $p= (n,c_0,c_1, \dots )= (n^p, c_0^p, c_1^p, \dots )$ such that \begin{myrules} \item[(1)] $n^p \in \omega$. \item[(2)] For each $i \in \omega$ there are $\mdn(c_i) < \mup(c_i) < \omega$ such that\\ $c_i \colon [\mdn(c_i),\mup(c_i)) \to [0,1]$, with $(\forall k \in \dom(c_i))(c_i(k) \cdot k! \in {\mathbb Z})$.
\item[(3)] $w(c_i) = \{ k \in [\mdn(c_i),\mup(c_i)) \such c_i(k) \neq 0 \}$, and $\sum_{k \in w(c_i)} c_i(k) = 1$. We let $\norm(c_i) = \mdn(c_i)$. We denote by $K$ the set of those $c_i$. \item[(4)] $\mup(c_i) = \mdn(c_{i+1})$. \end{myrules} We let $p \leq q$ (``$q$ is stronger than $p$'', we follow the Jerusalem convention) if \begin{myrules} \item[(5)] $n^p \leq n^q$. \item[(6)] $c_0^p = c_0^q, \dots , c^p_{n^p-1} = c^q_{n^p-1}$. \item[(7)] there are $n^p \leq k_{n^p} < k_{n^p+1} < \dots $ and, for each $n \geq n^p$, a non-empty set $u \subseteq [k_n,k_{n+1})$ and rationals $d_\ell >0$ for $\ell \in u$ such that $c^q_n = \sum\{d_\ell \cdot c_\ell^p \such \ell \in u\}$ and $\sum_{\ell \in u } d_\ell = 1$. We let $\Sigma(\langle c_\ell \such \ell \in [k_n, k_{n+1}) \rangle)$ denote the collection of all $c_n$ obtained with any $u \subseteq [k_n,k_{n+1})$ and any weights $d_\ell$ for $\ell \in u$. Thus $\mdn(c^q_n) = \mdn(c^p_{k_n})$ and $\mup(c^q_n)= \mup(c^p_{k_{n+1} -1})$. \end{myrules} b) We write $p \leq_i q$ iff $n^p = n^q$ and $c_j^q = c_j^p$ for $j < n^p + i$. \end{definition} \begin{remark} The notation we used in \ref{1.1} is natural to describe our forcing in a compact manner. However, it does not coincide with the notation given for the general framework in \cite{RoSh:470}. Here is a translation: We write \\ $((c_0^p,c_1^p, \dots, c_{n-1}^p), c_n^p, c_{n+1}^p, \dots)$ instead of $(n^p, c_0^p, c_1^p, \dots)$, which contains the same information. Then we write \begin{equation}\tag{$\ast$}\label{ast} ((c_0^p,c_1^p, \dots, c_{n-1}^p), c_n^p, c_{n+1}^p, \dots) = (w^p, t_0^p, t_1^p, \dots). \end{equation} Then the $t_i^p$ are (simple cases of) components of weak creatures in the sense of \cite[1.1.1 to 1.1.10]{RoSh:470}.
If we write $\bf t = (\norm({\bf t}), \val({\bf t}), \dis({\bf t}))$ for a weak creature in the sense of \cite{RoSh:470}, then we have that $\dis$ is the empty function, and $t_i$ is part of such a ${\bf t}$ in the following sense: $\norm({\bf t}) = \mdn(t_i)$, $\rge(\val({\bf t})) = \{ t_i\}$. We set ${\bf H}(i) = \left\{0,\frac{1}{i!},\frac{2}{i!}, \dots , \frac{i! - 1}{i!}, 1\right\}$ and $t_i \in \prod_{m \in [\mdn(t_i), \mup(t_i))} {\bf H}(i)$. $K$ is a collection of weak creatures, and $\Sigma$ from \ref{1.1}(7) is a composition operation. Thus our $Q$ is $Q^*_{s\infty}(K,\Sigma)$ in Ros\l anowski's and Shelah's framework and is finitary and nice and satisfies some norm-conditions. We do not give the definitions of these properties, because we are working with our specific case. The interested reader should consult \cite{RoSh:470}. We use $w$, $w^p$, $w^q$ for the trunks in the representation as in \eqref{ast}. \end{remark} In order to make our work self-contained, we write a proof that $Q$ allows continuous reading of names and hence is proper. In this section, we use the notation as in \eqref{ast}, because it is more suitable. 
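Clauses (2), (3) and (7) of Definition \ref{1.1} can be illustrated on a toy example. The following sketch (the concrete blocks and the weights $d_\ell=\frac12$ are our own choices, purely for illustration) checks that a convex combination of two creatures as in \ref{1.1}(7) again satisfies clauses (2) and (3):

```python
from fractions import Fraction
import math

# A creature is stored as a map k -> c(k) on its support; clause (2) asks
# c(k) * k! to be an integer, and clause (3) asks the values to sum to 1.
def is_creature(c):
    return (sum(c.values()) == 1
            and all((v * math.factorial(k)).denominator == 1 for k, v in c.items()))

c_a = {4: Fraction(1, 2), 5: Fraction(1, 2)}
c_b = {6: Fraction(1, 2), 7: Fraction(1, 2)}
assert is_creature(c_a) and is_creature(c_b)

# A convex combination as in 1.1(7), with weights d = 1/2 for both blocks:
d = Fraction(1, 2)
combo = {k: d * v for c in (c_a, c_b) for k, v in c.items()}
assert is_creature(combo)  # weights sum to 1 and entries stay in Z / k!
print(sorted(combo.items()))
```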
\begin{definition} \label{1.3} $q=(w^q, t_0^q, \dots)$ approximates $\mathunderaccent\tilde-3 {\tau}$ at $t_n^q$ iff for all $r$: if $q \leq r$ and $r$ forces a value to $\mathunderaccent\tilde-3 {\tau}$, then $r^{q,n}$ forces this, where $t_i^{r^{q,n}} = t_i^r$ for $i < n$ and $\{ t_i^{r^{q,n}} \such i \geq n \} = \{ t_i^q \such i < \omega, \mdn(t_i^q) \geq \mup(t^{r}_{n-1}) \}.$ \end{definition} \begin{definition}\label{1.4} For $w\in\bigcup\limits_{m<\omega}\prod\limits_{i<m}{\bf H}(i)$ and ${\mathcal S}\in [K]^{\textstyle {\leq}\omega}$ we define the set $\pos(w,{\mathcal S})$ of possible extensions of $w$ from the point of view of ${\mathcal S}$ (with respect to $(K,\Sigma)$) as: \begin{eqnarray*} \pos^*(w,{\mathcal S}) & =& \Sigma({\mathcal S})\\ (&=& \{u: (\exists s\in\Sigma({\mathcal S}))(\langle w,u\rangle\in\val[s])\} \\ &&\mbox{for a general creature forcing}), \end{eqnarray*} \[\hspace{-1cm} \begin{array}{ll} \pos(w,{\mathcal S})=\{u:&\!\!\!\!\mbox{there are disjoint sets }{\mathcal S}_i\mbox{ (for $i<m<\omega$) with }\bigcup\limits_{i<m}{\mathcal S}_i={\mathcal S}\\ \ &\mbox{and a sequence }0<\ell_0<\ldots<\ell_{m-1}<\lh(u)\mbox{ such that}\\ \ & u{\restriction} \ell_0\in\pos^*(w,{\mathcal S}_0)\ \&\ \\ \ & u{\restriction} \ell_1\in\pos^*(u{\restriction}\ell_0,{\mathcal S}_1)\ \&\ \ldots\ \&\ u\in\pos^*(u{\restriction} \ell_{m-1},{\mathcal S}_{m-1})\}.\\ \end{array} \] \end{definition} \begin{lemma}\label{1.5} \label{deciding} (The case $\ell=0$ of \cite[Theorem 2.1.4]{RoSh:470}) $Q$ has continuous reading of names, i.e.\ if $p \Vdash \mathunderaccent\tilde-3 {\tau} \colon \omega \to V \mbox{ (old universe) }$, then there is $q=(w^q,s_0, s_1, \dots )$ such that \begin{myrules} \item[$(\alpha)$] $p \leq_0 q \in Q$, \item[$(\beta)$] if $n < \omega$ and $m \leq \mup(s_{n-1})$ then the condition $q$ approximates $\mathunderaccent\tilde-3 {\tau}(m)$ at $s_n$.
\nothing{ d $q \leq r$ and $r$ forces a value to $\mathunderaccent\tilde-3 {\tau}(i)$ and $i < \mup(t^r_{n_r -1})$, then $r^q$ forces this, where $n^{r^q} = n^r$, $t_n^{r^q} = t_n^r$ for $n < n^r$ and $\{ t_n^{r^q} \such n \geq n^r \} = \{ t_n^q \such n < \omega, \mdn(t_n^q) > \mup(t^{r^q}_{n^{r^q}-1} \}$. } \end{myrules} \end{lemma} \proof Let $p = (w^p,t_0^p,t_1^p, \dots )$. Let $w^q = w^p$. Now, by induction on $n \geq 0$ we define $q_n, s_n, t^n_{n+1}, t^n_{n+2}, \dots$ such that: \begin{myrules} \item[(i)] $q_0 = p$, \item[(ii)] $q_{n+1} = ( w^p,s_0,\dots,s_n,t^n_{n+1},t^n_{n+2},\dots) \in Q$, \item[(iii)] $q_n \leq_n q_{n+1}$, \item[(iv)] if $w_1 \in {\rm pos}(w^p,s_0,\dots s_{n-1})$, and $m \leq \mup(s_{n-1})$ and there is a condition $r \in Q$, $r \geq_0 (w_1,s_n,t^n_{n+1},t^n_{n+2}, \dots )$ which decides the value of $\mathunderaccent\tilde-3 {\tau}(m)$ then the condition $(w_1,s_n,t^n_{n+1},t^n_{n+2}, \dots )$ already does it. \end{myrules} Arriving at stage $n \geq 0$ we have defined $$q_n=(w^p,s_0,s_1,\dots,s_{n-1},t^{n-1}_n,t^{n-1}_{n+1},\dots).$$ Let $\langle (w^n_i, m^n_i): i< K_n\rangle$ be an enumeration of \[{\rm pos}(w^p,s_0,\ldots,s_{n-1})\times(\mup(s_{n-1})+1)\] (since each ${\bf H}(m)$ is finite, $K_n$ is finite). Next choose by induction on $k\leq K_n$ conditions $q_{n,k}\in Q$ such that: \begin{myrules} \item[$(\alpha)$] $q_{n, 0}=q_n$. \item[$(\beta)$] $q_{n,k}$ is of the form $(w^p,s_0,\dots,s_{n-1},t_n^{n,k}, t^{n, k}_{n+1}, t^{n, k}_{n + 2},\ldots)$. We set $w^n_k = (w^p,s_0, \dots s_{n-1})$. \item[$(\gamma)$] $q_{n,k}\leq_n q_{n,k+1}$. \item[$(\delta)$] If, in $Q$, there is a condition $r\geq_0 (w^n_k,t^{n,k}_n,t^{n,k}_{n+1},t^{n,k}_{n+2},\ldots)$ which decides (in $Q$) the value of $\mathunderaccent\tilde-3 {\tau}(m^n_k)$, then \[(w^n_k,t^{n,k+1}_n,t^{n,k+1}_{n+1},t^{n,k+1}_{n+2},\ldots)\in Q\] is a condition which forces a value to $\mathunderaccent\tilde-3 {\tau}(m^n_k)$. 
\end{myrules} For this part of the construction we need our standard assumption that we may iterate the process in \ref{1.1}(7). Note that, choosing $(w^n_k,t^{n,k+1}_n,t^{n,k+1}_{n+1},t^{n,k+1}_{n+2},\ldots)$, we want to be sure that \[(w^p,s_0,\ldots,s_{n-1},t^{n,k+1}_n,t^{n,k+1}_{n+1},t^{n,k+1}_{n+2},\ldots) \in Q.\] Next, the condition $q_{n+1}\stackrel{\rm def}{=} q_{n,K_n}\in Q$ satisfies (iv): the key points are clause ($\delta$) and the fact that \[(w^n_k,t^{n,k+1}_n,t^{n,k+1}_{n+1},t^{n,k+1}_{n+2},\ldots)\leq (w^n_k,t^{n,K_n}_n, t^{n,K_n}_{n+1},t^{n,K_n}_{n+2},\ldots)\in Q.\] Thus $s_n\stackrel{\rm def}{=} t^{n,K_n}_n$, $t^{n+1}_{n+k}\stackrel{\rm def}{=} t^{n,K_n}_{n+k}$ and $q_{n+1}= (w^p,s_0, \dots, s_n,t^{n+1}_{n+1}, \dots )$ are as required. Now, by a fusion argument \[q\stackrel{\rm def}{=}(w^p,s_0,s_1,\ldots,s_l,s_{l+1},\ldots)=\lim_n q_n\in Q.\] It is easily seen that $q$ satisfies the assertions of the theorem. \proofend \nothing{ \begin{definition}\label{1.6} For $p,q \in Q$ we write $p \leq_{apr} q$ if there is some $n$, $w^q$ such that $q=(w^q,t_n^p,t_{n+1}^p,\dots)$ and $p \leq q$. That is, $q$ is only in the trunk stronger than $p$. \end{definition} } \begin{lemma}(\cite[Corollary 2.1.6]{RoSh:470}) \begin{myrules} \item[(a)] Suppose that $\mathunderaccent\tilde-3 {\tau}_n$ are $Q$-names for ordinals and $q\in Q$ is a condition satisfying $(\beta)$ of \ref{deciding}. Further assume that $q\leq r\in Q$ and $r\Vdash $``$\mathunderaccent\tilde-3 {\tau}_m=\alpha$'' (for some ordinal $\alpha$).\\ Then $q'= r^{q,m}$ forces this. \item[(b)] The forcing notion $Q$ is proper. \end{myrules} \end{lemma} \proof (a) is a special case of the previous lemma. For (b), we use the equivalent definition of properness given in \cite[III.2.13]{Sh:h}, and the fact that $\{q'\!\in Q: (\exists r \geq q)(\exists n) q'=r^{q,n}\}$ is countable provided $\bigcup\limits_{i<\omega}{\bf H}(i)$ is countable. \proofend
\section{The effect of $Q$ on random reals}\label{S2} Let $G$ be $Q$-generic over $V$. We set $c^G_n = c_n^q$ for $q \in G$ and $n^q >n$. This is well defined. Let $\mathunderaccent\tilde-3 {c_n}$ be a name for it. Our aim is to show that multiplication by the matrix whose $n$-th row is $c_n$ makes any real from the ground model, and even any real from a random extension of the ground model, convergent. For background information about random reals we refer the reader to \cite[\S 42]{Jech}. The Lebesgue measure is denoted by $\Leb$. By ``adding $\kappa$ random reals'' we mean forcing with the measure algebra $R_\kappa$ on $2^{\omega \times\kappa}$, that is, adding $\kappa$ random reals at once or ``side-by-side'' and not successively. \begin{definition}\label{2.1} \begin{myrules} \item[(1)] Let $\may_k(p) = \{ c_n^r \such p \leq_k r, n \geq n^p + k\}$. \item[(2)] For a creature $c$ and $\eta \in {}^\omega 2$ let $\aver(\eta,c) = \sum_{k \in w(c)} c(k) \eta(k)$. \end{myrules} \end{definition} \begin{mainlemma}\label{2.2} Assume that \begin{myrules} \item[(A)] $\mathunderaccent\tilde-3 {\eta}$ is a random name of a member of ${}^\omega 2$, $ \mathunderaccent\tilde-3 {\eta} = f(\mathunderaccent\tilde-3 {r})$ where $f$ is Borel and $\mathunderaccent\tilde-3 {r}$ is a name of the random generic real, \item[(B)] $p \in Q$, \item[(C)] $k^* < \omega$.
\end{myrules} Then for every $k \geq k^*$ there is some $q(k) \in Q$ such that \begin{myrules} \item[($\alpha$)]$p \leq_{k^*} q(k)$, \item[$(\beta)$] for all $\ell$, if $k^* \leq k < \ell < \omega$ and $c_0, c_1 \in \may_\ell(q(k))$ then $$ \frac{1}{\ell!} > \Leb\left\{r \such \frac{3}{2^k} \leq \left| \aver(f(r), c_1) - \aver(f(r), c_0) \right| \right\}.$$ \end{myrules} \proof For $q \in Q$ and $k, \ell \in \omega$, $i \in \{0,1, \dots, 2^k\}$ we set \begin{eqnarray*} \err_{k,i}(\mathunderaccent\tilde-3 {\eta},c) &=& \Expect\left(\left|\aver(\mathunderaccent\tilde-3 {\eta},c) - \frac{i}{2^k}\right|\right)\\ & = & \int_0^1 \left|\aver(f(r),c) - \frac{i}{2^k} \right| \; d \Leb(r),\\ \eee^\ell_{k,i}(\mathunderaccent\tilde-3 {\eta},q) &=& \inf \{ \err_{k,i}(\mathunderaccent\tilde-3 {\eta},c) \such c \in \may_\ell(q)\}. \end{eqnarray*} Note that $\err_{k,i}(\mathunderaccent\tilde-3 {\eta},c)$ is a real and no longer a random name. So the infimum is well-defined. Now, if $\ell_1 < \ell_2$ then $\may_{\ell_1}(q) \supseteq \may_{\ell_2}(q)$ and hence \begin{equation*}\label{mono} \eee^{\ell_1}_{k,i}(\mathunderaccent\tilde-3 {\eta},q) \leq \eee^{\ell_2}_{k,i}(\mathunderaccent\tilde-3 {\eta},q). \end{equation*} So $\langle \eee^\ell_{k,i}(\mathunderaccent\tilde-3 {\eta},q) \such \ell \in \omega \rangle$ is an increasing bounded sequence and \begin{equation*} \eee^*_{k,i}(\mathunderaccent\tilde-3 {\eta},q) = \lim \langle \eee^\ell_{k,i}(\mathunderaccent\tilde-3 {\eta},q) \such \ell \in \omega \rangle \end{equation*} is well-defined. We fix $i \leq 2^k$ until Subclaim 4, where we start looking at all $i$ together.
\nothing{, and later we shall vary $k$ as well.} Subclaim 1: There is some $q^{k,i}_1=q_1 \geq_{k^\ast} p$ such that for $\ell \geq k^\ast$ $$\eee^*_{k,i}(\mathunderaccent\tilde-3 {\eta}, p) - \frac{1}{\ell} \leq \err_{k,i}(\mathunderaccent\tilde-3 {\eta}, c^{q_1}_\ell) \leq \eee^*_{k,i}(\mathunderaccent\tilde-3 {\eta}, p) + \frac{1}{\ell}.$$ Moreover, if $\mdn(c_{\ell'}^{q_1}) = \mdn(c^p_{\ell})$ then $\eee^{\ell'}_{k,i}(\mathunderaccent\tilde-3 {\eta},q_1) \geq \eee_{k,i}^*(\mathunderaccent\tilde-3 {\eta},p)-\frac{1}{\ell}$. Why? We choose $c_\ell^{q_1}$ by induction on $\ell$: For $\ell \leq n^p + k^\ast$, we take $c_\ell^{q_1} = c_\ell^p$. Suppose that we have chosen $c_m^{q_1}$ for $m < \ell$ and that we are to choose $c_\ell^{q_1}$, $\ell > n^p + k^\ast$. We set $\eps = \frac{1}{\ell}$. By possibly end-extending $c_{\ell -1}^{q_1}$ by zeroes we may assume that $\mup(c_{\ell -1}^{q_1}) = \mup(c_{\ell'}^p)$ for some $\ell' \geq \ell$ so large that for all $\ell'' \geq \ell'$, $\eee_{k,i}^{\ell''}(\mathunderaccent\tilde-3 {\eta},p) \geq \eee_{k,i}^\ast(\mathunderaccent\tilde-3 {\eta},p) - \eps$. Then we take $c_\ell = c_\ell^{q_1} \in \may_{\ell'}(p)$ such that $\err_{k,i}(\mathunderaccent\tilde-3 {\eta},c_\ell^{q_1}) \leq \eee_{k,i}^{\ell'}(\mathunderaccent\tilde-3 {\eta},p) + \eps \leq \eee_{k,i}^\ast(\mathunderaccent\tilde-3 {\eta},p) + \eps$. On the other hand we have that $\err_{k,i}(\mathunderaccent\tilde-3 {\eta},c_\ell^{q_1}) \geq \eee_{k,i}^{\ell'}(\mathunderaccent\tilde-3 {\eta},p) \geq \eee_{k,i}^\ast(\mathunderaccent\tilde-3 {\eta},p) - \eps$. The fact that this holds also for $\ell' \leq \ell$ if $\mdn(c_{\ell'}^{q_1}) = \mdn(c^p_{\ell})$ yields the ``moreover'' part.
Subclaim 2: In Subclaim 1, if $\ell \geq k^*$ and $q^{k,i}_1 \leq_\ell q_2$ then $$\eee^*_{k,i}(\mathunderaccent\tilde-3 {\eta},q_1) - \frac{1}{\ell} \leq \err_{k,i}(\mathunderaccent\tilde-3 {\eta}, c_\ell^{q_2}) \leq \eee^*_{k,i}(\mathunderaccent\tilde-3 {\eta},q_1) + \frac{1}{\ell}.$$ Why? By the definition it suffices to show: \begin{equation}\tag{$\otimes$}\label{otimes} \begin{split} & \mbox{if } \ell_1 < \cdots < \ell_t < \omega \mbox{ and } d_1 , \dots, d_t \geq 0 \mbox{ and } d_1 + \cdots + d_t =1,\\ & \mbox{and } c_\ell^{q_2} = d_1 c_{\ell_1}^{q_1} + \cdots + d_t c_{\ell_t}^{q_1},\\ &\mbox{then } \eee^*_{k,i}(\mathunderaccent\tilde-3 {\eta},q_1) - \frac{1}{\ell} \leq \err_{k,i}(\mathunderaccent\tilde-3 {\eta}, c_\ell^{q_2}) \leq \eee^*_{k,i}(\mathunderaccent\tilde-3 {\eta},q_1) + \frac{1}{\ell}. \end{split} \end{equation} The first inequality holds by the ``moreover'' part of the previous subclaim. For the second inequality it suffices to show that $$\err_{k,i}(\mathunderaccent\tilde-3 {\eta}, c_\ell^{q_2}) \leq \sum_{s=1}^t d_s \err_{k,i}(\mathunderaccent\tilde-3 {\eta}, c_{\ell_s}^{q_1}).$$ For this it suffices to show that $$\Expect\left(\left|\aver(\mathunderaccent\tilde-3 {\eta},c_\ell^{q_2}) - \frac{i}{2^k} \right|\right) \leq \sum_{s=1}^t d_s \Expect\left(\left|\aver(\mathunderaccent\tilde-3 {\eta},c_{\ell_s}^{q_1}) - \frac{i}{2^k} \right|\right), $$ and writing this explicitly, noting that $\Expect$ is actually a Lebesgue integral and that $d_s \geq 0$ and that $\sum_{s} d_s = 1$, we finish by the triangle inequality. Subclaim 3: Let $q^{k,i}_1$ be as in Subclaim 1. For all $\ell$, if $c_0, c_1 \in \may_\ell(q^{k,i}_1)$, then $$ \frac{2^{k+1}}{\ell} \geq \Leb\left\{ r \such \aver(f(r), c_0) \geq \frac{i+1}{2^k} \wedge \aver(f(r),c_1) \leq \frac{i-1}{2^k} \right\}.$$ Why? Consider $c = \frac{1}{2} c_0 + \frac{1}{2} c_1 \in \may_\ell(q_1)$. Write \\ $A=\left\{ r \such \aver(f(r), c_0) \geq \frac{i+1}{2^k} \wedge \aver(f(r),c_1) \leq \frac{i-1}{2^k} \right\}$.
\begin{eqnarray*} \frac{2}{\ell} & \geq & \frac{1}{2} \err_{k,i}(\mathunderaccent\tilde-3 {\eta},c_0) + \frac{1}{2} \err_{k,i}(\mathunderaccent\tilde-3 {\eta},c_1) -\err_{k,i}(\mathunderaccent\tilde-3 {\eta},c)\\ & = & \int_0^1 \left(\frac{1}{2}\left|\aver(f(r),c_0) - \frac{i}{2^k} \right| +\right. \frac{1}{2}\left|\aver(f(r),c_1) - \frac{i}{2^k} \right| -\\ && \left.\left|\aver(f(r),c) - \frac{i}{2^k} \right|\right) d \Leb(r)\\ &\geq& \int_A \left(\frac{1}{2}\left|\aver(f(r),c_0) - \frac{i}{2^k} \right| +\right. \frac{1}{2}\left|\aver(f(r),c_1) - \frac{i}{2^k} \right| -\\ && \left.\left|\aver(f(r),c) - \frac{i}{2^k} \right|\right) d \Leb(r)\\ &\geq& \frac{1}{2^{k}} \Leb(A). \end{eqnarray*} \nothing{ We divide the truth value in the defintion of $\err_{k,i}(\mathunderaccent\tilde-3 {\eta},c)$ by the events above. Outside it is $\leq \frac{1}{2} \err_{k,i}(\mathunderaccent\tilde-3 {\eta}, c_1) + \frac{1}{2} \err_{k,i}(\mathunderaccent\tilde-3 {\eta},c_2) \leq \err^*_{k,i} + \frac{1}{\ell}$. Inside the result drops by $\Leb$(event above) $\times \frac{1}{2^{k-1}}$ as $d_1 \leq -\frac{1}{2^k}$ and $d_2 \geq \frac{1}{2^k}$ implies that $d_1 + d_2| \leq |d_1| +|d_2| - \frac{1}{2^{k_1}}$. } Subclaim 4: For every $q \in Q$ and $k^*$ we can find $q^{k}$ such that \begin{myrules} \item[$\alpha$)] $q \leq_{k^*} q^{k}$, \item[$\beta$)] if $\ell \in [k,\omega)$ and $c_0,c_1 \in \may_\ell(q^{k})$ and $i \in \{1,2, \dots, 2^k-1\}$ then $\frac{2^{k+1}}{\ell} > \Leb\left\{ r \such \aver(f(r), c_0) \geq \frac{i+1}{2^k} \wedge \aver(f(r),c_1) \leq \frac{i-1}{2^k} \right\}.$ \item[$\gamma$)] This holds also for every $q^{*} \geq q^{k}$. \end{myrules} Why? Repeat Subclaims 1, 2 and 3, choosing $q^{k,i}$, $i = 0,1, \dots, 2^k$. We let $q^{k,0} = q$ and choose $q^{k,i+1}$ such that it relates to $q^{k,i}$ like $q_1$ to $q$. Now $q^{k}=q^{k,2^k}$ is as desired. Note that according to \eqref{otimes} thinning and averaging can only help. Subclaim 5: Let $q^k$ be as in Subclaim 4.
For $\ell \geq k$ there is $q(k,\ell)\geq_{\ell-1}q^k$ such that for $c_0,c_1 \in \may_\ell(q(k,\ell))$, $$ \frac{1}{\ell!} > \Leb\left\{r \such \frac{3}{2^k} \leq \left| \aver(f(r), c_1) - \aver(f(r), c_0) \right| \right\}.$$ Why? The event $\frac{3}{2^k} \leq \left| \aver(f(r), c_1) - \aver(f(r), c_0) \right|$ implies that for some $i \in \{1,2,\dots,2^k-1\}$ we have $\aver(f(r), c_1) \geq \frac{i+1}{2^k} \wedge \aver(f(r), c_0) \leq \frac{i-1}{2^k}$ or vice versa. So it is included in the union of $2 \times (2^k -1)$ events, each of measure $\leq \frac{2^{k+1}}{\ell}$. Hence it itself has measure $\leq \frac{2^{2k+2}}{\ell}$. By thinning out $q^k$ (by moving the former $\ell$ far out by putting in a lot of zeroes and thus having as new $c_\ell$'s weak creatures that were formerly labelled with a much larger $\ell$ and thus giving a much smaller quotient according to Subclaim 4) we replace $\frac{2^{2k+2}}{\ell}$ by $\frac{1}{\ell!}$. Subclaim 6: Finally we come to the $q(k)$ from part $(\beta)$ of the lemma: For any $k$ there is $q(k)$ such that $p \leq_{k^*} q(k)$ and for any $\ell\geq k$ and any $c_0, c_1 \in \may_\ell(q(k))$, $$ \frac{1}{\ell!} > \Leb\left\{r \such \frac{3}{2^k} \leq \left| \aver(f(r), c_1) - \aver(f(r), c_0) \right| \right\}.$$ Why? As in the previous subclaim we choose inductively $q(k,\ell)$ such that $q(k,k) = q^k$ and $q(k,\ell+1) \geq_\ell q(k,\ell)$ and $(q(k,\ell+1),q(k,\ell), \ell)$ are like $(q(k,\ell), q, \ell)$ from Subclaim 5, but for larger and larger $\ell$. Now $$q(k) =(n^p + k, c_0^p, \dots, c_{n^p + k}^p, c_{n^p + k +1}^{q(k,n^p+k+1)}, c_{n^p +k+2}^{q(k,n^p+k+2)}, \dots) $$ is as required in $(\alpha)$ and $(\beta)$ of the conclusion; we even have $q(k) \geq_k p$.
\proofend \begin{conclusion}\label{2.3} $\Vdash_Q $ ``if $\mathunderaccent\tilde-3 {\eta} \in V$ is a random name of a member in $2^\omega$ (i.e.\ a name for a real in $V^{R_\omega}$) then ``$\Vdash_{R_\omega} \langle \aver(\mathunderaccent\tilde-3 {\eta}, \mathunderaccent\tilde-3 {c}_n) \such n \in \omega \rangle$ converges'' '' \end{conclusion} \proof Let $q \in Q$ and $\eps > 0$ be given. Let $\mathunderaccent\tilde-3 {\eta} = f(\mathunderaccent\tilde-3 {r})$, $f \in V$, be a random name for a real. We take $k_0$ such that $\frac{3}{2^{k_0}} < \eps$. Then we take $q(k_0)\geq q$ as in the Main Lemma. We set $$A_{k,c_0,c_1} = \left\{ r \such \frac{3}{2^k} > \left| \aver(f(r), c_1) - \aver(f(r), c_0) \right| \right\}.$$ Since $\sum_{\ell \geq 1} \frac{1}{\ell !} < \infty$, we can apply the Borel--Cantelli lemma and get: For any sequence $\langle c_\ell \such \ell \in \omega \rangle$ such that $c_\ell \in \may_\ell(q(k_0))$ we have that $$\Leb \left(\bigcup_{K \in [k_0,\omega)} \bigcap_{\ell \geq K} A_{k_0,c_\ell,c_{\ell+1}} \right) =1.$$ So $r \in \bigcap_{\ell \geq K} A_{k_0,c_\ell,c_{\ell+1}}$ for some $K \geq k_0$. So $q(k_0)$ forces that $\langle c_\ell \such \ell \in \omega \rangle$ describes a matrix whose product with $\eta$ lies eventually within an $\eps$-interval. Now we take smaller and smaller $\eps$'s and a density argument. \proofend \begin{conclusion} \label{2.4} Let $P_{\omega_2}=\langle P_i, \mathunderaccent\tilde-3 {Q}_j \such i \leq \omega_2, j < \omega_2 \rangle$ be a countable support iteration of $\mathunderaccent\tilde-3 {Q}_i$, where $Q_i$ is $Q$ defined in $V^{P_i}$, and let $\mathunderaccent\tilde-3 {R}_{\omega_1}$ be a $P_{\omega_2}$ name of the $\aleph_1$-random algebra. Then in $V^{P_{\omega_2} \ast \mathunderaccent\tilde-3 {R}_{\omega_1}}$ we have $ \gs = \aleph_1$ and $\chi > \aleph_1$. \end{conclusion} \proof Dow proves in \cite[Lemma 2.3]{Dow} that $\gs = \aleph_1$ after adding $\aleph_1$ or more random reals, over any ground model.
In order to show $\chi > \aleph_1$, let $\eta_i$, $i < \omega_1$, be reals in $V^{P_{\omega_2} \ast \mathunderaccent\tilde-3 {R}_{\omega_1}}$. Over $V^{P_{\omega_2}}$, each $\eta_i$ has an $R_{\omega_1}$-name $\mathunderaccent\tilde-3 {\eta_i}$. Since the random algebra is c.c.c., there are w.l.o.g.\ only countably many of the $\aleph_1$ random reals mentioned in $\mathunderaccent\tilde-3 {\eta_i}$. Let $\mathunderaccent\tilde-3 {\eta'_i}$ be obtained from $\mathunderaccent\tilde-3 {\eta_i}$ by replacing these countably many by the first $\omega$ ones and then treating them as just one random real. This is possible because $R_1$ and $R_\omega$ are equivalent forcings. Since the random algebra is c.c.c., the name $\mathunderaccent\tilde-3 {\eta'_i}$ can be coded as a single real $r_i$ in $V^{P_{\omega_2}}$. Now, by \cite[V.4.4.]{Properforcing} and by the properness of the $ \mathunderaccent\tilde-3 {Q_j}$, this name $r_i$ appears at some stage $\alpha(\eta_i) <\aleph_2$ in the iteration $P_{\omega_2}$. We take the supremum $\alpha$ of all the $\alpha(\eta_i)$, $i < \omega_1$. We apply the Main Lemma to the $\mathunderaccent\tilde-3 {\eta'_i}$. Thus $Q_\alpha$ adds a Toeplitz matrix that, after multiplication, makes all the $\eta'_i$ convergent. Since the Main Lemma applies to all random algebras simultaneously, this matrix also makes the $\eta_i$ convergent. \proofend \begin{definition} \label{2.5} \begin{myrules} \item[(1)] $Q_{pr}= \{p \in Q \such n^p = 0\}$ is called the pure part of $Q$. \item[(2)] We write $p \leq^* q$ if there are some $w$, $n$ such that $p \leq (w,t_n^q,t_{n+1}^q, \dots )$. So $p \leq q$ holds up to a finite ``mistake''. \end{myrules} \end{definition} \begin{fact}\label{2.6} If $\langle p_i \such i < \gamma \rangle$ is $\leq^*$-increasing in $Q$ and $\MA_{|\gamma|}$ holds, then there is $p \in Q_{pr}$ such that for all $i < \gamma$, $p_i \leq^* p$.
\end{fact} \proof We apply $\MA_{|\gamma|}$ to the following partial order $P$: Conditions are $(s,F)$ where $s=( t_0^p, \dots, t_n^p)$ is an initial segment of a condition in $Q_{pr}$ and $F \subset \gamma$ is a finite set. We let $(s,F) \leq_P (t,G)$ iff $s \trianglelefteq t$ and $F \subseteq G$ and $(\forall n \in [\lgg(s), \lgg(t)))(\forall \alpha \in F)$ (if $n$ lies beyond all mistakes between the $p_\alpha$'s, then $t_n \in \Sigma(\langle c_i^{p_\alpha} \such i \in {\mathcal S}(\alpha,n)\rangle)$ for a suitable ${\mathcal S}(\alpha,n)$). This forcing is c.c.c., because conditions with the same first component are compatible and because there are only countably many possibilities for the first component. It is easy to see that for $\alpha < \gamma$ the sets $D_\alpha = \{ (s,F) \such \alpha \in F \}$ are dense and that for $n \in \omega$ the sets $D^n =\{ (s,F) \such \lgg(s) \geq n \}$ are dense. Hence if $G$ is generic, then $p= \bigcup\{s \such \exists F\, (s,F) \in G \} \geq^* p_\alpha$ for all $\alpha$. \proofend \begin{conclusion}\label{2.7} If $V \models \MA_\kappa$ and $\kappa > \delta > \aleph_0$, then in $V^{R_\delta}$ the matrix number is $\geq \kappa$ and the splitting number is $\aleph_1$. \end{conclusion} \proof As mentioned, \cite{Dow} shows the result on the splitting number. For the matrix number, let random names $\mathunderaccent\tilde-3 {\eta_i}$, $i < \gamma$, be given in $V$, $\gamma < \kappa$. We fix $\eps > 0$ and $K$ as in the proof of \ref{2.3}. We choose for $i < \gamma$ conditions $p^i = \langle c^i_k \such k \in \omega \rangle$ as in the end of the proof of \ref{2.3} for $\mathunderaccent\tilde-3 {\eta_i}$, use Fact \ref{2.6} $\gamma+1$ times iteratively, and find a pure condition $p = \langle c_k \such k \in \omega\rangle \geq^\ast p^i$ for all $i < \gamma$, which gives the rows of a matrix that brings everything into an $\eps$-range. We denote these $c_k$ by $c_k = c_k(\eps)$.
Now by induction we choose $c_k$: $c_0 = c_0(1)$, and $c_k = c_{k'}(\frac{1}{k'+1})$, where $k' > k$ is the first index such that $\mdn(c_{k'}(\frac{1}{k'+1})) > \mdn(c_{k-1})$. The matrix with $c_k$ in the $k$th row acts as desired. (Now $\mup(c_k) > \mdn(c_{k+1})$ is possible but this does not do any harm.) \proofend {\bf Acknowledgement:} The first author would like to thank Andreas Blass for discussions on the subject and for reading and commenting. \end{document}
\begin{document} \title{Existence and computation of Riemann--Stieltjes integrals through Riemann integrals} \begin{center} {\large Rodrigo L\'opez Pouso \\ Departamento de An\'alise Matem\'atica\\ Facultade de Matem\'aticas,\\Universidade de Santiago de Compostela, Campus Sur\\ 15782 Santiago de Compostela, Spain. } \end{center} \begin{abstract} We study the existence of Riemann--Stieltjes integrals of bounded functions against a given integrator. We are also concerned with the possibility of computing the resulting integrals by means of related Riemann integrals. In particular, we present a new generalization of the well-known formula for continuous integrands and continuously differentiable integrators. \end{abstract} \section{Introduction} Let $f:[a,b]\longrightarrow {{\Bbb R}}$ be continuous and let $G:[a,b]\longrightarrow {{\Bbb R}}$ have bounded variation over the interval $[a,b]$. A standard result \cite[Exercise 30 (g), page 281]{str} then guarantees that \begin{equation} \label{rs} \int_a^b{f(x) \, dG(x)} \end{equation} exists in the Riemann--Stieltjes sense. Moreover, the formula \begin{equation} \label{for1} \int_a^b{f(x) \, dG(x)}=\int_a^b{f(x)G'(x) \, dx} \end{equation} holds true if $G$ has a continuous derivative in $[a,b]$, thus reducing the computation of Riemann--Stieltjes integrals to Riemann ones. \bigbreak We can find deeper information in Stromberg \cite[Exercise 30 (k), page 281]{str}: the formula (\ref{for1}) is fulfilled when $f$ is Riemann integrable and $G$ is absolutely continuous. This result relies on the Fundamental Theorem of Calculus for the Lebesgue integral, and the proof suggested by Stromberg uses approximations of $f$ by step functions. Notice that, in this case, the integral on the right-hand side of (\ref{for1}) is a Lebesgue integral which need not be, in general, a Riemann integral.
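Formula (\ref{for1}) is easy to check numerically. The sketch below (the concrete choices $f(x)=x$ and $G(x)=x^2$, so $G'(x)=2x$, are ours, purely for illustration) compares a left-tagged Riemann--Stieltjes sum of $f$ against $G$ with the corresponding Riemann sum of $fG'$; both approach $\int_0^1 2x^2\,dx = 2/3$:

```python
def riemann_stieltjes_sum(f, G, a, b, n):
    # Left-tagged Riemann-Stieltjes sum of f against G on a uniform partition.
    h = (b - a) / n
    return sum(f(a + k * h) * (G(a + (k + 1) * h) - G(a + k * h)) for k in range(n))

def riemann_sum(F, a, b, n):
    # Left-tagged Riemann sum of F on a uniform partition.
    h = (b - a) / n
    return sum(F(a + k * h) * h for k in range(n))

f = lambda x: x
G = lambda x: x * x          # so G'(x) = 2x
rs = riemann_stieltjes_sum(f, G, 0.0, 1.0, 10000)
r = riemann_sum(lambda x: f(x) * 2 * x, 0.0, 1.0, 10000)
print(rs, r)  # both close to 2/3
```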
\medbreak In this note we are concerned with the twofold problem of the existence of (\ref{rs}) in the Riemann--Stieltjes sense and the possibility of computing it via (\ref{for1}) when $f$ is merely a bounded function (not necessarily Lebesgue measurable). Obviously, the conditions on the integrator $G$ have to be reinforced. Specifically, in this paper we prove the following result. \begin{theorem} \label{th} Let $f:[a,b]\longrightarrow {{\Bbb R}}$ be bounded, let $g:[a,b]\longrightarrow {{\Bbb R}}$ be Riemann integrable and let $G(x)=c+\int_a^x{g(y) \, dy}$ ($x \in [a,b]$) for some $c \in {{\Bbb R}}$. A necessary and sufficient condition for the function $f$ to be Riemann--Stieltjes integrable with respect to $G$ over $[a,b]$ is that the product $f g$ be Riemann integrable over $[a,b]$ and, in that case, \begin{equation} \label{rsf} \int_a^b{f(x) \, dG(x)}=\int_a^b{f(x) g(x) \, dx}. \end{equation} \end{theorem} The proof of Theorem \ref{th}, which occupies Section 2, is based on the following sharp version of the mean value theorem for Riemann integrals. \begin{theorem} \label{corpro}{\bf \cite[Corollary 4.6]{rlp}} If $h:[a,b] \longrightarrow {{\Bbb R}}$ is Riemann integrable on $[a,b]$ then there exist points $c_1,\, c_2 \in (a,b)$ such that \begin{equation} \nonumber h(c_1)(b-a) \le {\int_a^b}{h(x) \, dx} \le h(c_2)(b-a). \end{equation} \end{theorem} Even though our conditions on the function $f$ are very general, our proofs turn out to be quite easy (this fact being another interesting point). In particular, we remain in the realm of Riemann integration theory. \section{Proof of Theorem \ref{th}} We need the following lemma on mixed Riemann sums (see \cite{cat} for more information on mixed sums). Its proof is very easy, but we include it for completeness and for the convenience of the reader. \begin{lemma} \label{mixed} Let $f:[a,b]\longrightarrow {{\Bbb R}}$ be bounded and let $g:[a,b]\longrightarrow {{\Bbb R}}$ be Riemann integrable.
If the product $f g$ is Riemann integrable on $[a,b]$ then for each $\varepsilon>0$ there exists $\delta>0$ such that for every partition $a=x_0<x_1<\dots<x_n=b$ whose norm is less than $\delta$ we have $$\left|\sum_{k=1}^n{f(y_k)g(z_k)(x_k-x_{k-1})-\int_a^b{f(x)g(x) \, dx}} \right|<\varepsilon,$$ for any choice of points $y_k,z_k \in [x_{k-1},x_k]$ ($k=1,2,\dots,n$). \end{lemma} \noindent {\bf Proof.} For a given partition $a=x_0<x_1<\dots<x_n=b$ and points $y_k,z_k \in [x_{k-1},x_k]$ ($k=1,2,\dots,n$) we have \begin{align*} \left|\sum_{k=1}^nf(y_k)g(z_k)\right.&\left.(x_k-x_{k-1})-\int_a^b{f(x)g(x) \, dx} \right| \\ &\le \left|\sum_{k=1}^n{f(y_k)[g(z_k)-g(y_k)](x_k-x_{k-1})}\right|\\ & \qquad +\left|\sum_{k=1}^n{f(y_k)g(y_k)(x_k-x_{k-1})} -\int_a^b{f(x)g(x) \, dx} \right| \\ &\le \sup_{a \le x \le b}|f(x)|\sum_{k=1}^n{\mbox{osc}(g,[x_{k-1},x_k])(x_k-x_{k-1})}\\ &\qquad+\left| \sum_{k=1}^n{f(y_k)g(y_k)(x_k-x_{k-1})}-\int_a^b{f(x)g(x) \, dx} \right|, \end{align*} where $$\mbox{osc}(g,[x_{k-1},x_k])=\sup_{x_{k-1}\le x \le x_k} g(x) -\inf_{x_{k-1}\le x \le x_k} g(x)$$ is the oscillation of $g$ in $[x_{k-1},x_k]$. Since $g$ and $fg$ are Riemann integrable, the last term in the previous inequality is as small as we wish if the norm of the partition is sufficiently small (and this does not depend on the choice of $y_k, z_k \in [x_{k-1},x_k]$).\hbox to 0pt{} $\rlap{$\sqcap$}\sqcup$\medbreak \bigbreak \noindent {\bf Proof of Theorem \ref{th}.} We first show that the condition is sufficient, so we assume that $fg$ is Riemann integrable on $[a,b]$. Let $\varepsilon>0$ be fixed and let $\delta>0$ be as in Lemma \ref{mixed}. 
We are going to prove that for any partition of $[a,b]$, say $P=\{x_0,x_1,\dots,x_n\}$, with norm less than $\delta$, and any choice of points $y_k \in [x_{k-1},x_k]$ ($k=1,2,\dots,n$), we have \begin{equation} \label{eq1} \left|\sum_{k=1}^n{f(y_k)[G(x_k)-G(x_{k-1})]}-\int_a^b{f(x)g(x) \, dx} \right|< \varepsilon, \end{equation} thus finishing the first part of the proof of Theorem \ref{th}. Let $P$ and the $y_k$'s be as above. Theorem \ref{corpro} guarantees that for each $k \in \{1,2,\dots,n\}$ there is some $z_k \in (x_{k-1},x_k)$ such that $$f(y_k) \, \int_{x_{k-1}}^{x_k}{g(y) \, dy} \le f(y_k)g(z_k)(x_k-x_{k-1})$$ (Theorem \ref{corpro} provides both a lower and an upper estimate for the integral, so $z_k$ can be chosen according to the sign of $f(y_k)$). Hence \begin{align*} \sum_{k=1}^n{f(y_k)[G(x_{k})-G(x_{k-1})]}&=\sum_{k=1}^n{f(y_k)\int_{x_{k-1}}^{x_k}{g(y) \, dy}} \\ &\le \sum_{k=1}^n{f(y_k)g(z_k)(x_k-x_{k-1})}\\ &< \int_a^b{f(x)g(x)\, dx}+\varepsilon \quad \mbox{(by Lemma \ref{mixed}).} \end{align*} We deduce in an analogous way that $$\sum_{k=1}^n{f(y_k)[G(x_{k})-G(x_{k-1})]}> \int_a^b{f(x)g(x)\, dx}-\varepsilon,$$ so (\ref{eq1}) obtains. \bigbreak We now prove that the condition is necessary, and therefore we assume that $f$ is Riemann--Stieltjes integrable with respect to $G$ over $[a,b]$. Let us consider an arbitrary partition $a=x_0<x_1<\dots<x_n=b$ and arbitrary points $y_k \in [x_{k-1},x_k]$ ($k=1,2,\dots,n$). We have \begin{align} \label{in1} &\left| \sum_{k=1}^n { f(y_k)g(y_k)(x_k-x_{k-1}) } - \sum_{k=1}^n{f(y_k)[G(x_{k})-G(x_{k-1})]} \right| \\ \nonumber & \qquad=\left|\sum_{k=1}^n {f(y_k)\left[g(y_k)(x_k-x_{k-1})- \int_{x_{k-1}}^{x_k}{g(y) \, dy} \right]}\right| \\ \label{in2} & \qquad \le \sup_{a \le x \le b}|f(x)| \sum_{k=1}^n{\left| g(y_k)(x_k-x_{k-1}) -\int_{x_{k-1}}^{x_k}{g(y) \, dy} \right|.
} \end{align} For each $k \in \{1,2,\dots,n\}$ we have $$ \inf_{x_{k-1}\le x \le x_k}g(x) \, (x_k-x_{k-1}) \le \int_{x_{k-1}}^{x_k}{g(y) \, dy} \le \sup_{x_{k-1}\le x \le x_k}g(x) \, (x_k-x_{k-1}),$$ and, since $y_k \in [x_{k-1},x_k]$, we deduce that \begin{align*} &\left| g(y_k)(x_k-x_{k-1}) -\int_{x_{k-1}}^{x_k}{g(y) \, dy} \right| \\ & \qquad \le \left[ \sup_{x_{k-1}\le x \le x_k}g(x)-\inf_{x_{k-1}\le x \le x_k}g(x) \right] (x_k-x_{k-1})\\ & \qquad =\mbox{osc}(g,[x_{k-1},x_k])(x_k-x_{k-1}). \end{align*} Going with this information back to the inequality (\ref{in1})--(\ref{in2}), we get \begin{align*} &\left| \sum_{k=1}^n { f(y_k)g(y_k)(x_k-x_{k-1}) } - \sum_{k=1}^n{f(y_k)[G(x_{k})-G(x_{k-1})]} \right| \\ & \quad \le \sup_{a \le x \le b}|f(x)| \sum_{k=1}^n{\mbox{osc}(g,[x_{k-1},x_k])(x_k-x_{k-1})}, \end{align*} which tends to zero when the norm of the partition tends to zero (and this does not depend on the $y_k$'s). To sum up, if the norm of the partition is sufficiently small then the Riemann sums of $fg$ are as close as we wish to the Riemann--Stieltjes sums of $f$ with respect to $G$, which, in turn, are as close as we wish to the corresponding Riemann--Stieltjes integral. Hence the Riemann sums of $fg$ converge to the Riemann--Stieltjes integral of $f$ with respect to $G$ as the norm of the partition tends to zero, which proves that $fg$ is Riemann integrable with the stated value. \hbox to 0pt{} $\rlap{$\sqcap$}\sqcup$\medbreak \section{Concluding remarks} Theorem \ref{th} is not a particular case of \cite[Exercise 30 (k)]{str}, which requires $f$ to be Riemann integrable on $[a,b]$. Examples in the conditions of Theorem \ref{th} which do not fulfill the assumptions in \cite[Exercise 30 (k)]{str} can be easily constructed: it suffices to consider bounded functions $f$ which are not Riemann integrable on a subinterval of $[a,b]$ where $g$ is almost everywhere equal to zero.
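The identity (\ref{rsf}) can also be observed numerically in concrete cases. The following sketch (our own illustration, not part of the paper's argument) takes a step function $f$, $g(x)=x$ and $G(x)=x^2/2$ on $[0,1]$, and compares a Riemann--Stieltjes sum of $f$ with respect to $G$ against the exact value of $\int_0^1 f(x)g(x)\,dx$:

```python
# Numerical sketch of the formula (rsf): for f bounded and G an indefinite
# integral of g, Riemann--Stieltjes sums of f against G approach the Riemann
# integral of f*g.  Here f is a step function, g(x) = x, G(x) = x^2 / 2.
from math import floor

def f(x):
    # bounded, with jump discontinuities at multiples of 0.2
    return 1.0 if floor(5 * x) % 2 == 1 else 0.0

def G(x):
    return x * x / 2  # indefinite integral of g(x) = x

n = 20000
xs = [k / n for k in range(n + 1)]
# Riemann--Stieltjes sum with midpoint tags y_k in [x_{k-1}, x_k]
rs = sum(f((xs[k - 1] + xs[k]) / 2) * (G(xs[k]) - G(xs[k - 1]))
         for k in range(1, n + 1))
# exact value: integral of f(x) * x over [0.2, 0.4) and [0.6, 0.8)
exact = 0.06 + 0.14
assert abs(rs - exact) < 1e-3
```

With a merely bounded (possibly non-measurable) $f$, of course, no finite computation applies; the sketch only illustrates the formula for a tame integrand.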
If we combine Theorem \ref{th} with the {\it integration by parts formula} for the Riemann--Stieltjes integral, see \cite[Exercise 30 (h)]{str}, we obtain the following new sufficient condition for Riemann--Stieltjes integrability. \begin{corollary} \label{coth} In the conditions of Theorem \ref{th}, the function $G$ is Riemann--Stieltjes integrable with respect to $f$ over $[a,b]$ and $$\int_a^b{G(x) \, df(x)}=G(b)f(b)-G(a)f(a)-\int_a^b{g(x)f(x) \, dx}.$$ \end{corollary} Finally, Theorem \ref{th} and Corollary \ref{coth} yield the following more ``symmetric'' test. \begin{corollary} Let $\alpha, \, \beta:[a,b] \longrightarrow {{\Bbb R}}$ be Riemann integrable functions. If one of them is an indefinite Riemann integral, then each is Riemann--Stieltjes integrable with respect to the other over $[a,b]$ and the corresponding Riemann--Stieltjes integrals reduce to Riemann integrals. \end{corollary} \end{document}
\begin{document} \everymath{\displaystyle} \title{Tame pseudofinite theories with wild pseudofinite dimensions} \author{Alexander Van Abel} \maketitle \begin{abstract} We construct two pseudofinite theories which are tame from a neostability perspective, yet have pathological fine pseudofinite dimension in all models. These theories serve as counterexamples to potential converses of results by Garcia, Macpherson and Steinhorn relating pseudofinite dimension to tameness. We demonstrate that pseudofinite cardinality in these theories is well behaved with regard to definability, and provide a novel method of proving quantifier elimination using pseudofinite cardinality. \end{abstract} \section{Introduction} In their paper ``Pseudofinite structures and simplicity'' \cite{psas}, authors Dario Garcia, Dugald Macpherson and Charles Steinhorn proved a number of results relating stability-theoretic notions in an infinite ultraproduct of finite structures to conditions on the dimension operator $\delta = \delta_{fin}$, which is the ``fine pseudofinite dimension'' introduced by Hrushovski in his paper \cite{hrush}. One way to define $\delta$ is as follows. Let $(M_i : i \in I)$ be a family of finite $L$-structures. Let $M = \prod_{i \to \mathcal{U}} M_i$ be an ultraproduct. Take two formulas $\varphi(\bar{x},\bar{y})$ and $\psi(\bar{x},\bar{y})$, and tuples $\bar{a}$ and $\bar{b}$ of the same length as $\bar{y}$. We say \[\delta(\varphi(M,\bar{a})) = \delta(\psi(M,\bar{b}))\] if there is some fixed natural number $n$ such that for almost all $i$ we have $\frac{1}{n} \leq \frac{|\varphi(M_i, \bar{a}_i)|}{|\psi(M_i, \bar{b}_i)|} \leq n$, and we say \[\delta(\varphi(M, \bar{a})) < \delta(\psi(M,\bar{b}))\] if for every natural number $n \in \mathbb{N}$, we have $n \cdot |\varphi(M_i, \bar{a}_i)| < |\psi(M_i, \bar{b}_i)|$ for almost all $i$. In this sense, $\delta$ is comparing asymptotic growth rates of the sizes of the defined subsets in the finite structures $M_i$.
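Stripped of the ultrafilter, these two clauses are statements about ratios of set sizes along the index sequence. A small sketch (our own illustration; the function names are hypothetical) checks them for cardinality sequences given as functions of the index:

```python
# Sketch of the delta comparisons, with the ultrafilter simplified away:
# we test the defining inequalities along the whole index sequence, so
# "almost all i" becomes "all sufficiently large i".

def same_delta(a, b, indices, bound):
    # delta equal: the ratio a(i)/b(i) stays within [1/bound, bound]
    return all(1 / bound <= a(i) / b(i) <= bound for i in indices)

def smaller_delta(a, b, indices, max_n=10):
    # delta(a) < delta(b): for every n, n * a(i) < b(i) for all large i
    # (sketch: we only test n up to max_n)
    return all(all(n * a(i) < b(i) for i in indices if i > n)
               for n in range(1, max_n))

idx = range(1, 200)
assert same_delta(lambda i: 2 * i**2, lambda i: i**2, idx, bound=2)  # bounded ratio
assert smaller_delta(lambda i: i**2, lambda i: i**3, idx)            # ratio tends to 0
```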
In \cite{psas}, the authors specify the following conditions on the sequence $(M_i : i \in I)$, which we call \emph{GMS conditions} in this paper. Let $\varphi(\bar{x},\bar{y})$ be a formula. Then GMS condition $(A)_\varphi$ says the following: there is no sequence of parameters $\bar{a}_1, \bar{a}_2, \ldots$ such that \[\delta\big{(}\varphi(M, \bar{a}_1)\big{)} > \delta\big{(}\varphi(M, \bar{a}_1) \wedge \varphi(M, \bar{a}_2)\big{)} > \delta\big{(} \varphi(M, \bar{a}_1) \wedge \varphi(M, \bar{a}_2) \wedge \varphi(M,\bar{a}_3)\big{)} > \ldots.\] The authors of that paper then show that if the sequence $(M_i : i \in I)$ satisfies $(A)_\varphi$ for every formula $\varphi$ (which is what it means to satisfy GMS condition (A)) then the ultraproduct $M$ has a simple and low theory \cite[Theorem 3.2.2]{psas}. The authors demonstrate that this implication does not reverse, in the following way (\cite[Example 4.1.3]{psas}). Take $T$ to be the theory of an equivalence relation with infinitely many infinite classes. This is a stable, hence simple, theory. It is also pseudofinite: it is the theory of any ultraproduct $\prod_{i \to \mathcal{U}} M_i$ of finite equivalence structures such that both the number of equivalence classes and the size of the smallest class tend to infinity with respect to the ultrafilter. The authors construct such a sequence of structures $(M_n : n \in \omega)$ in a way that violates the GMS condition $(A)_\varphi$ for the formula $\varphi(x,y) := $ ``$\neg x E y$''. For $n \in \omega$, let $M_n$ have $n$ equivalence classes, one of size $n^i$ for $i = 1, \ldots, n$ (so that $|M_n| = \sum_{i=1}^n n^i$). For $1 \leq k \leq n$ let $c^k_n$ be an element of the $k$th largest class in $M_n$, the one of size $n^{n-(k-1)}$. Then $\big{|}\{x \in M_n : (\neg x E c^1_n) \wedge \ldots \wedge (\neg x E c^k_n)\} \big{|}$ is $\sum_{i=1}^{n-k} n^i$, which is approximately $\frac{1}{n^k} |M_n|$.
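The counting in this example can be verified exactly with big-integer arithmetic; a minimal sketch (our own check of the estimate above):

```python
# In M_n the class sizes are n, n^2, ..., n^n.  Removing the k largest classes
# leaves sum_{i=1}^{n-k} n^i points, which should be about |M_n| / n^k; this
# uniform drop by a factor of n^k is what makes the deltas strictly decrease.
n = 10
total = sum(n**i for i in range(1, n + 1))         # |M_n|
for k in range(1, 4):
    left = sum(n**i for i in range(1, n - k + 1))  # points avoiding the k largest classes
    ratio = left * n**k / total                    # should be close to 1
    assert 0.9 < ratio <= 1.0
```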
This implies that in the ultraproduct, we have \[\delta\big{(} \varphi(M, c^1) \wedge \ldots \wedge \varphi(M, c^k) \big{)} > \delta\big{(} \varphi(M, c^1) \wedge \ldots \wedge \varphi(M, c^k) \wedge \varphi(M, c^{k+1})\big{)}\] for each $k$, where $c^k$ is the element of the ultraproduct represented by the sequence $(c^k_1, c^k_2, c^k_3,\ldots)$. Therefore, GMS condition $(A)_\varphi$ fails, even though the theory is simple and low (superstable, even). However, we can find a model of $T$ which is an ultraproduct of finite equivalence structures $M'_n$ where GMS condition $(A)_\varphi$ \emph{is} satisfied for every formula $\varphi$. We can take $M'_n$ to have $n$ equivalence classes, each of size $n$. Then the $\delta$ dimension is highly well-behaved -- in particular $(A)_\varphi$ is satisfied for every $\varphi$ (giving a rather roundabout proof that $T$ is a simple theory). This led the author of this document to the following question. Let $T$ be a pseudofinite simple and low theory (which includes the class of stable theories). Since $T$ is pseudofinite, it may be modelled by an ultraproduct of finite structures $\prod_{i \to \mathcal{U}} M_i$. As shown above, it may be the case that the sequence $(M_i : i \in I)$ does not satisfy GMS condition $(A)_\varphi$ for every $\varphi$. But will it be the case that there is \emph{some} sequence $(M_i : i \in I)$ of finite structures and ultrafilter $\mathcal{U}$ such that $\prod_{i \to \mathcal{U}} M_i \models T$ and $(M_i : i \in I)$ satisfies $(A)_\varphi$ for all $\varphi$? We can ask a similar question for supersimple $T$ and the GMS condition (SA), to be introduced in the next section, inspired by the result in \cite{psas} that if an infinite ultraproduct of finite structures satisfies (SA) then the theory of the ultraproduct is supersimple \cite[Theorem 3.3]{psas}. In this paper, we give negative answers to both of these questions. 
In the section ``Supersimple does not imply (SA)'' we exhibit a pseudofinite theory which is supersimple of $U$-rank 1 such that no infinite ultraproduct of finite structures which models $T$ will satisfy GMS condition (SA), and in the section ``Simple does not imply (A)'', we exhibit a pseudofinite theory which is simple and low (stable, in fact) such that no ultraproduct of finite structures which models $T$ satisfies GMS condition (A). The author thanks Alf Dolich, Alice Medvedev, Charles Steinhorn, Dario Garcia, Alex Kruckman, and Cameron Donnay Hill for their comments and suggestions. \section{Counting pairs and dimension conditions} The way that we formalize pseudofinite dimension $\delta$ is through the notion of \emph{pseudofinite cardinality}, which is itself formalized by what Garcia terms a \emph{counting pair}. For a (first-order, single-sorted) language $L$, let $L^+$ be the two-sorted expansion $(L,OF)$, where $OF = (0,1,+,-,\cdot,<)$ is the language of ordered fields. Symbols from $L$ apply to the home sort, and symbols from $OF$ apply to the new second sort. Additionally, for every partitioned formula $\varphi(\bar{x},\bar{y})$ of $L$, we add a cross-sorted function $f_{\varphi(\bar{x},\bar{y})}$ which maps $|\bar{y}|$-tuples from the home sort to elements of the $OF$ sort. Let $M_0$ be a finite $L$-structure. We extend $M_0$ to an $L^+$-structure $M_0^+$ by letting the $OF$ sort contain a copy of the ordered field $\mathbb{R}$, and for each partitioned formula $\varphi(\bar{x},\bar{y})$ and each $\bar{b} \in M_0^{|\bar{y}|}$ letting $f_{\varphi(\bar{x},\bar{y})}(\bar{b})$ be $|\varphi(M_0^{|\bar{x}|},\bar{b})| \in \mathbb{N} \subseteq \mathbb{R}$. If $M$ is an ultraproduct of finite structures $\prod_{i \to \mathcal{U}} M_i$, then we expand $M$ to an $L^+$-structure $M^+$ by expanding each $M_i$ to $M_i^+$, and then taking $M^+$ to be the two-sorted ultraproduct $\prod_{i \to \mathcal{U}} M_i^+$.
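For a single finite structure the expansion $M_0^+$ simply tabulates the sizes of definable sets. A toy sketch (our own encoding, with $M_0$ a finite equivalence structure and $\varphi(x,y) := x E y$, so that $f_\varphi(b)$ is the size of $b$'s class):

```python
# Sketch: the counting function f_phi of a counting pair M_0^+, computed for
# phi(x, y) = "x E y" over a small finite equivalence structure M_0.
classes = [{0, 1, 2}, {3, 4}, {5}]          # M_0: three E-classes
universe = {x for c in classes for x in c}

def E(x, y):
    return any(x in c and y in c for c in classes)

def f_phi(b):
    # f_phi(b) = |phi(M_0, b)| = size of the class of b
    return sum(1 for x in universe if E(x, b))

assert [f_phi(b) for b in (0, 3, 5)] == [3, 2, 1]
```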
We denote the $OF$-sort of $M^+$ by $\mathbb{R}^\star$. The structure in this sort is an ultrapower of the ordered field of real numbers. When $X \subseteq M$ is definable, say by $\varphi(\bar{x},\bar{b})$, we let $|X|$ denote the value $f_\varphi(\bar{b}) \in \mathbb{R}^\star$. We note that this is a well-defined notation, in that if $\varphi(M^{|\bar{x}|},\bar{b}) = X = \varphi'(M^{|\bar{x}|},\bar{c})$ then this equality holds in almost all $M_i$, whence $f_\varphi(\bar{b}) = f_{\varphi'}(\bar{c})$. We denote the set of nonstandard integers $\{(r_i)_{i \to \mathcal{U}} \in \mathbb{R}^\star : r_i \in \mathbb{Z}$ for $\mathcal{U}$-almost many $i\}$ by $\mathbb{Z}^\star$, and note that for all definable sets $X$ we have $|X| \in \mathbb{Z}^\star$. We note that the expansion $M^+$ does not depend purely on the structure $M$, but rather on the sequence of structures $M_i$ and the ultrafilter $\mathcal{U}$. That is, if $M = \prod_{i \to \mathcal{U}} M_i$ is isomorphic to $M' = \prod_{i \to \mathcal{U}'} M_i'$, it may not be the case that $M^+$ is isomorphic to $(M')^+$ -- as in the example given in the introductory section. In this framework of pseudofinite cardinality, we can simplify the definition of $\delta$ to $\delta(X) = \delta(Y)$ if there is some $n \in \mathbb{N}$ with $\frac{1}{n}|Y| \leq |X| \leq n|Y|$, and $\delta(X) < \delta(Y)$ if for all $n \in \mathbb{N}$ we have $n |X| < |Y|$, for definable sets $X$ and $Y$. Alternatively, $\delta(X) < \delta(Y)$ if and only if $\frac{|X|}{|Y|}$ is an infinitesimal element of $\mathbb{R}^\star$. \begin{definition} Let $M := \prod_{i \to \mathcal{U}} M_i$ be a pseudofinite ultraproduct, and let $M^+$ be its counting pair expansion. Let $\varphi(\bar{x},\bar{y})$ be an $L$-formula.
We say that $M^+$ satisfies the GMS condition $(A)_\varphi$ if there is \emph{no} sequence of $|\bar{y}|$-tuples $\bar{a}_1,\bar{a}_2,\ldots \in M$ such that $\delta(\varphi(\bar{x},\bar{a}_1)) > \delta(\varphi(\bar{x},\bar{a}_1) \wedge \varphi(\bar{x},\bar{a}_2)) > \ldots$. We say that $M^+$ satisfies the GMS condition $(A)$ if it satisfies $(A)_\varphi$ for all $L$-formulas $\varphi(\bar{x},\bar{y})$. We say $M^+$ satisfies the GMS condition (SA) if there is no sequence of definable sets $M \supseteq X_1 \supset X_2 \supset X_3 \supset \ldots$ such that $\delta(X_i) > \delta(X_{i+1})$ for each $i$. \end{definition} We usually abuse notation and say that $M$ satisfies (A) or (SA) rather than $M^+$, while keeping in mind that these properties depend on the particular ultrafilter and family of finite structures such that $M = \prod_{i \to \mathcal{U}} M_i$. In \cite{psas}, the authors prove that if $M$ satisfies (A) then $Th(M)$ is simple, and if $M$ satisfies (SA) then $Th(M)$ is supersimple. Pseudofinite cardinality inherits first-order expressible properties from finite cardinality. In particular, we will use the following fact about cardinalities of sets with constant-sized fibers, the proof of which follows directly from the finite case: \begin{lemma} \label{constfiber} Let $X \subseteq M^n$ be a definable set in the pseudofinite structure $M$. Suppose there is $0 < m < n$ and a hyperreal $c \in \mathbb{R}^\star$ such that for all tuples $\bar{a}$ of length $m$, if $\{\bar{b} : (\bar{a}\bar{b}) \in X\}$ is nonempty then $|\{\bar{b} : (\bar{a}\bar{b}) \in X\}| = c$. Then $|X| = c \cdot |\{\bar{a} : \exists \bar{y} (\bar{a} \bar{y}) \in X\}|$. \end{lemma} The author is generally interested in the model theory of counting pairs for models of a given theory. Let $L, L'$ be disjoint first-order languages and let $M$ be an $L$-structure and $N$ be an $L'$-structure.
The \emph{disjoint union} $M \sqcup N$ is the two-sorted structure in the language $L \cup L'$, where the first sort is the $L$-structure $M$ and the second sort is the $L'$-structure $N$, with no additional relations defined. Definable subsets in $M \sqcup N$ are Boolean combinations of sets $X \times Y$, where $X \subseteq M$ and $Y \subseteq N$ are definable in $L$ and $L'$, respectively. Loosely speaking, $M \sqcup N$ is the ``minimal'' simultaneous two-sorted expansion of $M$ and $N$. In particular, every relation strictly on $M$ definable in $M \sqcup N$ is already definable by an $L$-formula in $M$, and similarly for $N$. Let $\varphi(\bar{x},\bar{y})$ be a partitioned $L$-formula. Let $(\psi_1(\bar{y}),\ldots,\psi_m(\bar{y}))$ be a tuple of $L$-formulas (possibly with parameters) and let $(c_1,\ldots,c_m)$ be a tuple of hyperreals. We say that $(c_1,\ldots,c_m)$ and $(\psi_1(\bar{y}),\ldots,\psi_m(\bar{y}))$ \emph{give and define the cardinalities of } $\varphi(\bar{x},\bar{y})$ if \begin{itemize} \item For each $\bar{b} \in M^{|\bar{y}|}$, there is an $i \in \{1,\ldots,m\}$ such that $|\varphi(M^{|\bar{x}|},\bar{b})| = c_i$, and \item For each $i$, $\psi_i(M^{|\bar{y}|}) = \{\bar{b} : |\varphi(M^{|\bar{x}|},\bar{b})| = c_i\}$. \end{itemize} \begin{proposition} \label{defunion} Let $M = \prod_{i \to \mathcal{U}} M_i$ be a pseudofinite ultraproduct. Let $A \subseteq M$ and $B \subseteq \mathbb{R}^\star = \prod_{i \to \mathcal{U}} \mathbb{R}$. Then the structure $M^+$ is interdefinable with the disjoint union $M \sqcup \mathbb{R}^\star$ over $A \cup B$ if and only if for every formula $\varphi(x,\bar{y})$ with $x$ a single variable, there are formulas $\psi_1(\bar{y}),\ldots,\psi_n(\bar{y})$ with parameters from $A$ and hyperreals $c_1,\ldots,c_n \in \mathbb{R}^\star$ algebraic over $B$ which give and define the cardinalities of $\varphi(x,\bar{y})$.
\end{proposition} \begin{proof} The disjoint union $M \sqcup \mathbb{R}^\star$ is definable in any multi-sorted structure containing $M$ and $\mathbb{R}^\star$ as sorts. We show that the condition in the proposition is equivalent to $M^+$ being definable in the disjoint union. Suppose $M^+$ is definable in $M \sqcup \mathbb{R}^\star$ over $A \cup B$, and let $\varphi(x,\bar{y})$ be an $L$-formula. By assumption, the relation ``$f_{\varphi(x,\bar{y})}(\bar{b}) = c$'' as a property of $(\bar{b},c)$ is definable in $M \sqcup \mathbb{R}^\star$ over $A \cup B$. Every definable subset of $M \sqcup \mathbb{R}^\star$ is a Boolean combination of sets $X \times Y$, where $X$ is a definable subset of $M$ and $Y$ is a definable subset of $\mathbb{R}^\star$. Putting this Boolean combination into disjunctive normal form, we obtain $L(A)$-formulas $\psi_1(\bar{y}),\ldots,\psi_n(\bar{y})$ and $OF(B)$-formulas $\theta_1(t),\ldots,\theta_n(t)$ such that for all $\bar{b} \in M^{|\bar{y}|}$ and $c \in \mathbb{R}^\star$, \[f_{\varphi(x,\bar{y})}(\bar{b}) = c \mbox{ if and only if } M \sqcup \mathbb{R}^\star \models \bigvee_{i=1}^n [\psi_i(\bar{b}) \wedge \theta_i(c)].\] We may assume each $\psi_i(M^{|\bar{y}|})$ is nonempty. It then follows that each $\theta_i(\mathbb{R}^\star)$ is a singleton. Let $c_i$ be this unique element. Then for all $\bar{b} \in M^{|\bar{y}|}$, the pseudofinite cardinality $|\varphi(M,\bar{b})| = f_{\varphi(x,\bar{y})}(\bar{b})$ is $c_i$ for some $i$. Since $\theta_i$ uniquely defines $c_i$ over $B$, we obtain that $c_i$ is algebraic over $B$. If $c_i = c_j$ for some $i \neq j$, we replace $\psi_i(\bar{y})$ with $\psi_i(\bar{y}) \vee \psi_j(\bar{y})$ and remove $\psi_j, \theta_j$; in this way, we may assume the hyperreals $c_1,\ldots,c_n$ are pairwise distinct. Then it follows that for each $i$, the set $\{\bar{b} \in M^{|\bar{y}|} : |\varphi(M,\bar{b})| = c_i\}$ is defined by $\psi_i(\bar{y})$, proving the forward direction of the proposition.
Now suppose that for every formula $\varphi(x,\bar{y})$ with $x$ a single variable, there are formulas $\psi_1(\bar{y}),\ldots,\psi_n(\bar{y})$ with parameters from $A$ and hyperreals $c_1,\ldots,c_n \in \mathbb{R}^\star$ algebraic over $B$ which give and define the cardinalities of $\varphi(x,\bar{y})$. We will show that we can remove the restriction that $x$ is a single variable, by induction on $|\bar{x}|$. The case $|\bar{x}| = 1$ is true by assumption. Suppose the statement is true of $|\bar{x}|$ and consider the partitioned formula $\varphi(w \bar{x}, \bar{y})$. Applying the single-variable case to the repartitioned formula $\varphi(w,\bar{x}\bar{y})$, we obtain formulas $\psi_1(\bar{x}\bar{y}), \ldots, \psi_m(\bar{x}\bar{y})$ and hyperreals $c_1,\ldots,c_m \in \mathbb{R}^\star$ which give and define the cardinalities of $\varphi(w,\bar{x}\bar{y})$. Then by the inductive hypothesis, there are formulas $\theta_{i,j}(\bar{y})$ and cardinalities $c_{i,j} \in \mathbb{R}^\star$ for $1 \leq i \leq m$ and $1 \leq j \leq m_i$ which give and define the cardinalities of $\psi_i(\bar{x},\bar{y})$. For a tuple $\sigma = (\sigma_1,\ldots,\sigma_m) \in \prod_{i=1}^m [1,\ldots,m_i]$ let $\theta_\sigma(\bar{y})$ be the formula $\bigwedge_i \theta_{i, \sigma_i}(\bar{y})$. As the formulas $\theta_{i,1}(\bar{y}),\ldots,\theta_{i,m_i}(\bar{y})$ partition $M^{|\bar{y}|}$ for each $i$, the formulas $\theta_{\sigma}(\bar{y})$ partition $M^{|\bar{y}|}$ as $\sigma$ ranges over $\prod_{i=1}^m [1,\ldots,m_i]$, although some of the formulas may not be realized in $M$. If $M \models \theta_\sigma(\bar{b})$, then $|\varphi(M^{|\bar{x}| + 1},\bar{b})| = \sum_{i=1}^m |\{(c,\bar{a}) : M \models \varphi(c,\bar{a},\bar{b})$ and $M \models \psi_i(\bar{a},\bar{b})\}|$. By Lemma \ref{constfiber}, this is equal to $\sum_{i=1}^m c_i \cdot |\psi_i(M^{|\bar{x}|},\bar{b})|$, which is $c_\sigma := \sum_{i=1}^m c_i c_{i, \sigma_i}$ as $\bar{b} \in \theta_{i,\sigma_i}(M^{|\bar{y}|})$.
Identifying tuples $\sigma$ and $\sigma'$ such that $c_\sigma = c_{\sigma'}$ (and combining $\theta_\sigma$ and $\theta_{\sigma'}$ into the disjunction $\theta_\sigma \vee \theta_{\sigma'}$), and re-indexing the tuples with numbers, we obtain formulas $\theta_1(\bar{y}),\ldots,\theta_n(\bar{y})$ and hyperreals $d_1,\ldots,d_n \in \mathbb{R}^\star$ which give and define the cardinalities of $\varphi(w\bar{x},\bar{y})$. Therefore for every formula $\varphi(\bar{x},\bar{y})$, there are formulas $\psi_1(\bar{y}),\ldots,\psi_n(\bar{y})$ with parameters from $A$ and hyperreals $c_1,\ldots,c_n \in \mathbb{R}^\star$ algebraic over $B$ which give and define the cardinalities of $\varphi(\bar{x},\bar{y})$. Then the relation ``$f_{\varphi(\bar{x},\bar{y})}(\bar{y}) = t$'' in $M^+$ is definable in $M \sqcup \mathbb{R}^\star$, by the formula $\bigvee_{i=1}^n \psi_i(\bar{y}) \wedge t = c_i$. Parameters in the formulas $\psi_i$ come from $A$, and the hyperreals $c_i$ are algebraic over $B$, proving the backwards direction of the proposition. \end{proof} \section{Supersimple does not imply (SA)} In this section, we give a supersimple pseudofinite theory $T$ such that every infinite ultraproduct of finite models which satisfies $T$ fails to satisfy (SA). Our language $L$ contains \begin{itemize} \item A unary predicate $U_\sigma$ for every string $\sigma \in \omega^{<\omega}$ \item A binary relation $B_{\sigma,\tau}$ for every pair of strings $\sigma,\tau \in \omega^{<\omega}$ of the same length.
\end{itemize} Our theory $T$ contains the following axiom (a) and axiom schemata (b)-(h): \begin{enumerate}[label=(\alph*)] \item $U_\emptyset$ is the universe; \item $U_\sigma \neq \emptyset$, for each string $\sigma$; \item $U_\sigma \supset U_\tau$, for strings $\sigma$ and $\tau$ such that $\sigma$ is an initial segment of $\tau$; \item $U_{\sigma i} \cap U_{\sigma j} = \emptyset$, for each string $\sigma$ and distinct numbers $i,j$; \item $B_{\sigma, \tau}$ is (the graph of) a bijection from $U_{\sigma}$ to $U_{\tau}$, for strings $\sigma$ and $\tau$ of the same length; \item $B_{\sigma,\tau}(x,y)$ if and only if $B_{\tau,\sigma}(y,x)$, for strings $\sigma$ and $\tau$ of the same length; \item If $B_{\sigma,\tau}(x,y)$ and $B_{\tau,\rho}(y,z)$ then $B_{\sigma,\rho}(x,z)$, for strings $\sigma, \tau$ and $\rho$ of the same length; \item If $U_{\sigma i}(x)$ and $U_\tau(y)$, then $B_{\sigma, \tau}(x,y)$ if and only if $U_{\tau i}(y)$ and $B_{\sigma i, \tau i}(x,y)$, for strings $\sigma$, $\tau$ of the same length and each number $i$. \end{enumerate} \begin{proposition} $T$ has quantifier elimination. \end{proposition} \begin{proof} It suffices to show that each formula $\exists x\, \Phi(x,\bar{y})$ is equivalent to a quantifier-free formula $\pi(\bar{y})$, where $\Phi$ is a conjunction of literals (atomic formulas and their negations). We may assume that each literal contains the variable $x$, and we may also assume that each variable $y_i$ in $\bar{y}$ appears exactly once, as the general case will follow from this (for example, if $\exists x B_{0,1}(x,y_1) \wedge B_{0,2}(x,y_2)$ is equivalent to $\pi(y_1,y_2)$, then $\exists x B_{0,1}(x,y) \wedge B_{0,2}(x,y)$ is equivalent to $\pi(y,y)$).
By the axioms of $T$, such a quantifier-free formula is equivalent up to re-arrangement of variables to $U(x) \wedge B(x,\bar{y})$, where $U(x)$ is \[U_\sigma(x) \wedge \neg U_{\sigma \tau_1}(x) \wedge \ldots \wedge \neg U_{\sigma \tau_n}(x),\] and $B(x,\bar{y})$ is \[B_{\sigma, \rho_1}(x,y_1) \wedge \ldots \wedge B_{\sigma, \rho_m}(x,y_m) \wedge \neg B_{\sigma, \theta_1}(x,y_{m+1}) \wedge \ldots \wedge \neg B_{\sigma, \theta_k}(x,y_{m+k}),\] where $\sigma, \tau_i, \rho_i$ and $\theta_i$ are strings, with $\sigma, \rho_i$ and $\theta_i$ all of the same length (and possibly with $n, m$ or $k$ equal to $0$, in which case the associated conjunction is empty). Then for a tuple $\bar{b} = (b_1,\ldots,b_{m+k})$ of elements, not necessarily distinct, we have $M \models \exists x [U(x) \wedge B(x,\bar{b})]$ if and only if the following three sets of conditions hold of the tuple $\bar{b}$: \begin{itemize} \item $B_{\rho_i, \rho_j}(b_i, b_j)$ for $i,j$ distinct numbers in $\{1,\ldots,m\}$ \item $\neg B_{\rho_i, \theta_j}(b_i,b_{m+j})$ for $i$ in $\{1,\ldots,m\}$ and $j$ in $\{1,\ldots,k\}$, and \item $\neg U_{\rho_j \tau_i}(b_j)$ for $i$ in $\{1,\ldots,n\}$ and $j$ in $\{1,\ldots,m\}$. \end{itemize} These conditions are all together given by a single quantifier-free formula in $y_1,\ldots,y_{m+k}$, proving quantifier elimination. \end{proof} Let $M$ be a model of $T$, and $A \subset M$. By quantifier elimination, every type $p(x) \in S_1(A)$ is determined by the sets \[U(p) = \{\sigma \in \omega^{<\omega} : U_\sigma(x) \in p\},\] and \[B(p) = \{(a,\tau) \in M \times \omega^{<\omega} : B_{\sigma', \tau}(x,a) \in p \mbox{ for some } \sigma' \in U(p) \}.\] $p$ is an algebraic type if and only if $B(p)$ is nonempty. It follows that a non-algebraic type is determined by its restriction in $S(\emptyset)$. \begin{proposition} \label{weakmin} Let $M$ be a model of $T$. Let $A \subseteq B \subseteq M$ and let $p \in S(A)$ be a non-algebraic type.
Then $p$ has a unique extension to a non-algebraic type $q \in S(B)$. \end{proposition} \begin{proof} By quantifier-elimination, a type $p \in S(A)$ is determined by which atomic formulas or their negations are in it. $p$ is non-algebraic if and only if the formula ``$\neg B_{\sigma, \tau}(x,a)$'' is in $p$ for every $a \in A$ and every pair of strings $\sigma, \tau$ of the same length. Hence the unique non-algebraic extension $q \in S(B)$ of $p$ is obtained by adding to $p$ the formulas ``$\neg B_{\sigma, \tau}(x,b)$'' for every $b \in B$ and every such pair of strings; the unary formulas $U_\sigma(x)$ in $q$ are already determined by $p$. \end{proof} \begin{corollary} $T$ is superstable of $U$-rank 1. \end{corollary} \begin{proof} Superstability can be seen from type-counting, as it follows from Proposition \ref{weakmin} that $|S(A)| = |S(\emptyset)| + |A|$ for every set $A$. Therefore every type has a $U$-rank, and the fact that each non-algebraic type has a unique non-algebraic extension implies that each non-algebraic type has $U$-rank 1. \end{proof} \begin{proposition} $T$ is pseudofinite. \end{proposition} \begin{proof} We construct a sequence of finite structures $M_1, M_2, M_3, \ldots$ such that each axiom of $T$ is satisfied in $M_n$ for sufficiently large $n$. Let $M_n$ have as its universe the set of all strings in $\{0,\ldots,n-1\}^n$. To avoid clashing notation with the language $L$, we denote elements of this universe by letter variables $a = a_1 a_2 \ldots a_n$. For $\sigma \in \{0,\ldots,n-1\}^{\leq n}$, we let $a \in U_\sigma(M_n)$ when $\sigma$ is an initial segment of $a$. Let $\sigma, \tau \in \{0,\ldots,n-1\}^{\leq n}$ have the same length. Suppose that $a \in U_\sigma(M_n)$ and $b \in U_\tau(M_n)$, so that $a = \sigma a'$ and $b = \tau b'$ for strings $a', b' \in \{0,\ldots,n-1\}^{\leq n}$. Then we set $B_{\sigma, \tau}(a,b)$ if and only if $a' = b'$. If $\sigma \in \omega^{<\omega} \setminus \{0,\ldots,n-1\}^{\leq n}$, we let $U_\sigma, B_{\sigma, \tau}$ and $B_{\tau, \sigma}$ be empty relations.
One easily verifies that the axioms for $T$ hold in $M_n$ whenever the strings $\sigma, \tau, \rho$ appearing in the axiom are elements of $\{0,1,\ldots,n-1\}^{\leq n}$ (or in the case of axiom schema (h), elements of $\{0,1,\ldots,n-1\}^{<n}$) and the numbers $i,j$ are elements of $\{0,1,\ldots,n-1\}$. Therefore each axiom holds in $M_n$ for sufficiently large $n$. \end{proof} \begin{proposition} Let $M$ be an ultraproduct of finite structures which satisfies $T$. Let $f \in \omega^{\omega}$. For each $n$ let $\sigma_n$ be the initial segment of $f$ consisting of the first $n$ entries. Then $\delta(M) > \delta(U_{\sigma_1}(M)) > \delta(U_{\sigma_2}(M)) > \ldots$. In particular, $M$ fails to satisfy condition (SA). \end{proposition} \begin{proof} We show that for every string $\sigma \in \omega^{<\omega}$ and every number $k \in \omega$, $\delta(U_\sigma(M)) > \delta(U_{\sigma k}(M))$, which proves the proposition. For all numbers $j \in \omega$, the relation $B_{\sigma k, \sigma j}$ gives a bijection between $U_{\sigma k}(M)$ and $U_{\sigma j}(M)$. Therefore, taking pseudofinite cardinalities, we have $|U_{\sigma k}(M)| = |U_{\sigma j}(M)|$ for all $j \in \omega$. Let $n \in \omega$. Then $U_{\sigma 0}(M), \ldots, U_{\sigma n}(M)$ are disjoint subsets of $U_\sigma(M)$, whence $|U_\sigma(M)| \geq |U_{\sigma 0}(M)| + \ldots + |U_{\sigma n}(M)| = (n+1)|U_{\sigma k}(M)| > n |U_{\sigma k}(M)|$. Since this is true of every $n$, it follows that $\delta(U_\sigma(M)) > \delta(U_{\sigma k}(M))$. \end{proof} \begin{proposition} \label{defdisjoint} Let $M = \prod_{i \to \mathcal{U}} M_i$ be a pseudofinite ultraproduct which satisfies $T$ and let $M^+$ be its counting pair extension. Then $M^+$ is parametrically definable in the disjoint union $M \sqcup \mathbb{R}^\star$. \end{proposition} \begin{proof} We use Proposition \ref{defunion}. Let $\varphi(x,\bar{y})$ be an $L$-formula.
By quantifier elimination, $\varphi(x,\bar{y})$ is equivalent to a disjunction of conjunctions of atomic formulas and their negations. In fact, it suffices to prove the statement in Proposition \ref{defunion} when $\varphi(x,\bar{y})$ is merely a single conjunction of atomic formulas (not negations), as pseudofinite cardinality satisfies inclusion-exclusion properties. For example, if we have defined $f_\varphi$ for each $\varphi$ a conjunction of formulas from $\{\phi, \psi, \theta\}$, then we can define $f_{\phi \vee (\psi \wedge \neg \theta)}$ as $f_{\phi} + f_{\psi \wedge \neg \theta} - f_{\phi \wedge \psi \wedge \neg \theta} = f_\phi + (f_{\psi} - f_{\psi \wedge \theta}) - (f_{\phi \wedge \psi} - f_{\phi \wedge \psi \wedge \theta})$. So it suffices to consider a formula $\varphi(x,\bar{y})$ that is a conjunction of formulas of the form $U_\sigma(x)$ and $B_{\tau, \rho}(x,y_i)$. A conjunction of predicates $U_\sigma(x)$ is equivalent to a single $U_\sigma(x)$. If there is an instance of $B_{\tau, \rho}$ in the conjunction, then the defined set $\varphi(M,\bar{y})$ is either empty or a singleton, whence $(c_1,c_2) = (0,1)$ and $(\psi_1(\bar{y}), \psi_2(\bar{y})) = (\neg \exists x\, \varphi(x,\bar{y}), \exists x\, \varphi(x,\bar{y}))$ give and define the cardinalities of $\varphi(x,\bar{y})$. Otherwise $\varphi(x,\bar{y})$ is equivalent to the single predicate $U_\sigma(x)$; letting $c = |U_\sigma(M)|$, we get that $(c)$ and $(\bar{y} = \bar{y})$ give and define the cardinalities of $\varphi(x,\bar{y})$. \end{proof} \section{Simple does not imply (A)} The theory in this result is inspired by Example 4.1.3 in \cite{psas}, which is the theory of a single equivalence relation with infinitely many equivalence classes. As explained in the introductory section, the condition $(A)_\phi$ fails in this ultraproduct for the formula $\phi(x,y) :=$ ``$\neg x E y$'', although the theory of the ultraproduct is stable.
In this way they show that the direct converse to their result ``$(A)$ implies a simple and low theory'' is false. However, there are many ways to satisfy this theory in a product of finite structures -- any sequence of finite equivalence relations such that both the number of classes and the size of the smallest class go to infinity will work. In particular, we can consider $\prod_{n \to \mathcal{U}} M_n$ where $M_n$ has $n$ equivalence classes, each of size $n$. In this case, the dimension operator $\delta$ is extremely well-behaved, and the condition (A) is satisfied (and so is the stronger condition (SA)). Thus it still seemed possible to give a weaker potential converse: that given a simple pseudofinite theory $T$, one can find some pseudofinite ultraproduct $M \models T$ such that (A) is satisfied in $M$. The example in this section, which was suggested by Alex Kruckman and Cameron Donnay Hill in conversation with the author, shows that this weakened converse is still false. Our theory is an expansion of the theory of an equivalence relation with infinitely many infinite classes, with additional functions which \emph{force} the cardinalities of the equivalence classes to grow at a rate fast enough for (A) to fail at the formula ``$\neg x E y$'' as above. We do this by introducing a pairing function which bijects the largest equivalence class with the set of \emph{ordered pairs} from the second largest, bijects that class with the set of ordered pairs from the third largest, and so on. We realize this pairing function as a pair of unary functions $f$ and $g$, so that $x \mapsto (f(x), g(x))$ is our bijection. We give our construction in detail. Our language $L$ contains: \begin{itemize} \item A binary relation $E$, and \item Two unary functions $f$ and $g$.
\end{itemize} We frequently refer to $L$-terms as $\sigma(x)$, where $\sigma$ is a string in $\{f,g\}^{<\omega}$.\\ The theory $T_0$ says: \begin{enumerate}[label=(\alph*)] \item $E$ is an equivalence relation; \item $f$ and $g$ are well-defined on $E$-equivalence classes, and project down to the same function on $M / E$. That is, if $x E y$ then $f(x) E g(y)$ (which implies $f(x) E f(y)$ and $g(x) E g(y)$); \item There is a single $E$-class $C_{fin}$ such that $C_{fin} = \{x : f(x) = x\} = \{x : g(x) = x\}$; \item There is a single $E$-class $C_{init}$ such that $C_{init} = \{x : f^{-1}(x) = \emptyset\} = \{x : g^{-1}(x) = \emptyset\}$; \item If $x,y \notin C_{fin}$ and $f(x) E f(y)$ then $x E y$ (that is, $f,g$ are injective on equivalence classes except for $C_{fin}$ and $f^{-1}(C_{fin})$). \item For each equivalence class $C$ which is not $C_{init}$ and each pair $(x, y) \in C^2$ there is a unique $z$ (if $C = C_{fin}$ then there is a unique $z \notin C_{fin}$) such that $f(z) = x$ and $g(z) = y$. $T_0$ is finitely axiomatizable and has finite models (see Proposition \ref{eqpairpseudo}). Our theory $T$ is the theory $T_0$ together with the following axiom schemata: \item $\neg (f^k(x) E x)$ for each $k \in \omega$; \item $E$ has infinitely many classes; \item The equivalence classes of $E$ are infinite. \end{enumerate} Later on, the following abbreviation will be helpful. \begin{definition} Let $M \models T$ and let $k \in \omega$. Then $C_{init+k}$ denotes the equivalence class $f^k(C_{init})$, and $C_{fin-k}$ denotes the equivalence class $f^{-k}(C_{fin}) \setminus f^{-(k-1)}(C_{fin})$. \end{definition} Clearly each $C_{init+k}$ and $C_{fin-k}$ is $\emptyset$-definable. When we prove quantifier elimination for $T$, we will add unary predicates for each $C_{init+k}$ to our language, since each of these classes is not otherwise quantifier-free definable. Here we describe a method for constructing models of $T$.
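Before turning to that construction, the finite models of $T_0$ (invoked later in Proposition \ref{eqpairpseudo}) can be realized concretely. The following Python sketch is purely illustrative and uses an encoding of our own choosing, not anything fixed by the text: class $j$ (with $j = 0$ playing the role of $C_{init}$ and $j = n-1$ that of $C_{fin}$) consists of strings of length $2^{(n-1)-j}$ over an alphabet of size $m$, with $f$ reading the first half of a string and $g$ the second half.

```python
from itertools import product

def build_model(n, m):
    """Sketch of the finite model M_{n,m} of T_0: class j (j = 0 is
    C_init, j = n-1 is C_fin) consists of strings of length 2**((n-1)-j)
    over an alphabet of size m; f reads the first half of a string and g
    the second half, except on C_fin, where both are the identity."""
    classes = [[(j, s) for s in product(range(m), repeat=2 ** ((n - 1) - j))]
               for j in range(n)]

    def f(e):
        j, s = e
        return e if j == n - 1 else (j + 1, s[:len(s) // 2])

    def g(e):
        j, s = e
        return e if j == n - 1 else (j + 1, s[len(s) // 2:])

    return classes, f, g

n, m = 3, 2
classes, f, g = build_model(n, m)
universe = [e for c in classes for e in c]

# |M_{n,m}| = sum_{i=0}^{n-1} m^(2^i), the count used later in the
# pseudofiniteness argument.
assert len(universe) == sum(m ** (2 ** i) for i in range(n))

# Axiom (f): for each class C other than C_init and each (x, y) in C^2,
# there is a unique z outside C_fin with f(z) = x and g(z) = y.
for j in range(1, n):
    for x, y in product(classes[j], repeat=2):
        sols = [z for z in universe
                if z[0] != n - 1 and f(z) == x and g(z) == y]
        assert len(sols) == 1
```

The final loop checks axiom (f) exhaustively for this small model: the unique $z$ with $f(z) = x$ and $g(z) = y$ is simply the concatenation of the strings encoding $x$ and $y$.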
First let us define a simpler language and theory, which we will also use in proving quantifier elimination for $T$. \begin{definition} \label{tstar} The language $L^\star$ has a unary function $S$ and two constant symbols $c_{init}, c_{fin}$. The theory $T^\star$ says \begin{itemize} \item If $x \neq c_{init}$ then there is a $y$ with $S(y) = x$. If $x \neq c_{fin}$ then this $y$ is unique and $y \neq x$; if $x = c_{fin}$ then $S(x) = x$ and there is a unique $y \neq x$ with $S(y) = x$. There is no $y$ with $S(y) = c_{init}$. \item If $x \neq c_{fin}$ then $S^n(x) \neq x$ (for all $n$). \end{itemize} \end{definition} One class of models of $T^\star$ is \[(\mathcal{M}; S; c_{init}; c_{fin}) = ([a,b]; x \mapsto \min(succ(x),b); a; b),\] where $[a,b]$ is an infinite closed interval of a discrete linear order, and $succ(x)$ is the successor function. $T^\star$ is strongly minimal and is known to have quantifier elimination (a slight variation on, for example, Exercise 3.4.3 in \cite{marker}). We shall use the quantifier elimination of $T^\star$ to give a quantifier elimination for $T$. If $M$ is an $L$-structure which satisfies $T$, we can define an $L^\star$-structure $M^\star$ in $M^{eq}$ on the collection of equivalence classes $M / E$, where $c_{init}$ is $C_{init}$, $c_{fin}$ is $C_{fin}$, and $S([a]_E) = [f(a)]_E = [g(a)]_E$. Then $M^\star$ is a model of $T^\star$. Let $I$ be a model of $T^\star$, let $X$ be an infinite set, and let $\pi : X \to X \times X$ be a bijection of $X$ with its own square. We define the $L$-structure $M_{I,\pi}$ as follows. Let the universe of $M_{I,\pi}$ be $I \times X$. Let $C_{init}(M)$ be $\{c_{init}\} \times X$ and let $C_{fin}(M)$ be $\{c_{fin}\} \times X$. Let $(i,x) E (i',x')$ exactly when $i = i'$. If $i = c_{fin}$ let $f((i,x)) = g((i,x)) = (i,x)$. If $i \neq c_{fin}$, let $f((i,x)) = (S(i), \pi_1(x))$ and $g((i,x)) = (S(i), \pi_2(x))$, where $\pi_1, \pi_2 : X \to X$ are the unary functions such that $\pi(x) = (\pi_1(x),\pi_2(x))$.
The resulting structure is a model of $T$. We observe, tangentially, that if $\pi, \pi'$ are different pairing functions on $X$, the structures $M_{I,\pi}$ and $M_{I,\pi'}$ may not be isomorphic. Let $X = \omega$. Suppose there is an $a \in X$ such that $\pi(a) = (a,a)$. Then $\sigma((c_{init},a)) = \tau((c_{init},a))$ for all $\sigma, \tau \in \{f,g\}^{<\omega}$ of the same length. Suppose $\pi'$ has the property that whenever $\pi'(x) = (y,y)$ (so that $\pi_1'(x) = \pi_2'(x)$) we must have $y < x$. Then the type $\{C_{init}(x)\} \cup \{\sigma(x) = \tau(x) : \sigma, \tau \in \{f,g\}^{<\omega}$ of the same length$\}$ is not realized in $M_{I,\pi'}$ (for otherwise there is $a \in X$ with $\pi'(a) = (b,b)$, and $\pi'(b) = (c,c)$, and so on, giving a descending sequence $a > b > c > \ldots$), and so $M_{I,\pi}$ and $M_{I,\pi'}$ are not isomorphic. \begin{proposition} \label{eqpairpseudo} 1. $T$ is pseudofinite. 2. Let $M = \prod_{i \to \mathcal{U}} M_i$ be a pseudofinite ultraproduct which satisfies $T$. Let $a_0,a_1,\ldots \in M$ be such that $a_i \in f^{i}(C_{init})$. Then $\delta(\neg(x E a_0)) > \delta(\neg(x E a_0) \wedge \neg(x E a_1)) > \ldots$. In particular, $M$ fails to satisfy condition (A$)_\phi$ for the formula $\phi(x,y) =$``$\neg x E y$", and therefore $M$ fails to satisfy condition (A). \end{proposition} \begin{proof} 1. For every pair of numbers $n,m$, there is a unique (up to isomorphism) finite structure $M_{n,m} \models T_0$ which has $n$ equivalence classes such that $|C_{fin}(M_{n,m})| = m$. In that case, $|M_{n,m}| = \sum_{i=0}^{n-1} m^{2^i}$. Each instance of the axiom schemata for $T$ is clearly satisfied in $M_{n,m}$ for sufficiently large $n,m$. Therefore $T$ is pseudofinite. If $\mathcal{U}$ is an ultrafilter on $\omega \times \omega$ then $\prod_{(n,m) \to \mathcal{U}} M_{n,m} \models T_0$ always, and $\prod_{(n,m) \to \mathcal{U}} M_{n,m} \models T$ iff $\{(n,m) : n \geq j$ and $m \geq k\}$ is $\mathcal{U}$-big for all $j,k \in \omega$. 2.
For each $k$, let $\varphi_k(x)$ be the formula $\bigwedge_{i=0}^k x \notin f^{i}(C_{init})$. For $k,m \in \omega$ let $D_{k,m}$ be the number $\sum_{i=0}^k m^{2^i}$. Then for $n > k$ we have that $|\varphi_k(M_{n,m})| = D_{(n-k-2),m}$, as $|M_{n,m}|$ is $D_{n-1,m}$ and $f^{j}(C_{init}(M_{n,m}))$ has size $m^{2^{(n-1)-j}}$ for $0 \leq j \leq k$. If $\epsilon > 0$ is real and $j < k$ then $(1 - \epsilon) m^{2^k-2^j} D_{j,m} < D_{k,m}$ for sufficiently large $m$, as both sides of this inequality are degree-$2^k$ polynomials in $m$. Therefore if $N \in \omega$ then $N \cdot |\varphi_k(M_{n,m})| < |\varphi_j(M_{n,m})|$ for sufficiently large $m$ -- that is, for finite models of $T_0$ with $|C_{init}| > c_{N,j,k}$ for some $c_{N,j,k} \in \omega$. So if $(M_\lambda : \lambda \in \Lambda)$ is a family of finite models of $T_0$ and $\mathcal{U}$ is an ultrafilter on $\Lambda$ such that $M := \prod_{\lambda \to \mathcal{U}} M_\lambda \models T$, then $\{\lambda \in \Lambda : |C_{init}(M_\lambda)| > c_{N,j,k}\} \in \mathcal{U}$, and so $\{\lambda \in \Lambda : N \cdot |\varphi_k(M_\lambda)| \leq |\varphi_j(M_\lambda)|\} \in \mathcal{U}$. Since this is true for every $N$, we obtain that $\delta(\varphi_k(M)) < \delta(\varphi_j(M))$. If $a_0,a_1,\ldots \in M$ are such that $a_i \in f^i(C_{init}(M))$ for each $i$, then $\varphi_k(x)$ is equivalent to $\bigwedge_{i=0}^k \neg x E a_i$, which proves the failure of $(A)_\phi$ for the formula $\phi(x,y) := \neg x E y$. \end{proof} The following lemma is the core property of $T$ from which we obtain quantifier elimination, as well as a more general result about the cardinalities of definable sets. Intuitively, it says that we may ``coordinatize'' an element of an equivalence class $C$ by freely choosing $2^n$ elements of $f^n(C)$, generalizing axiom (f) of $T_0$. \begin{lemma} \label{pairlem} Let $\{s_1,\ldots,s_{2^n}\}$ be a complete enumeration of the strings in $\{f,g\}^n$.
Let $b_1,\ldots,b_{2^n} \in C$ for some equivalence class $C \notin \{C_{init}, \ldots, C_{init+n-1}\}$. Then if $C \neq C_{fin}$, there is a unique $a \in M$ such that $s_i(a) = b_i$ for each $i$ (clearly, $a \in f^{-n}(C)$). If $C = C_{fin}$, there is a unique solution $a \in C_{fin-k}$ for each $k$ such that $b_i = b_j$ whenever $s_i$ and $s_j$ agree on their final $k$ bits (in particular, there is a unique solution in $C_{fin-n}$), and these are the only solutions in $M$. \end{lemma} \begin{proof} We prove this by induction on $n$. For $n = 1$, this is Axiom (f) for $C \neq C_{fin}$, and follows from Axioms (f) and (c) for $C = C_{fin}$. Suppose the claim has been shown for $n$. Let us enumerate the strings in $\{f,g\}^{n+1}$ as $\{f s_1, f s_2,\ldots,f s_{2^n}, g s_1, g s_2, \ldots, g s_{2^n}\}$, where $\{s_1,\ldots,s_{2^n}\}$ enumerates $\{f,g\}^n$. Let \[b_1,\ldots,b_{2^n},b_{2^n+1},\ldots,b_{2^{n+1}} \in C.\] If $C \neq C_{fin}$ then by induction, there are unique elements $c_1,c_2 \in M$ such that $s_i(c_1)= b_i$ and $s_i(c_2) = b_{2^n + i}$ for $i \leq 2^n$, and there is a unique $a \in M$ such that $f(a) = c_1$ and $g(a) = c_2$. It follows that $a$ is the unique element of $M$ such that $f s_i(a) = b_i$ and $g s_i(a) = b_{2^n + i}$ for each $1 \leq i \leq 2^n$. Suppose $C = C_{fin}$. If $s_i(x) \in C_{fin}$, we must have $x \in C_{fin-k}$ for some $k \leq n$. If $x \in C_{fin - k}$ then, for any $s \in \{f,g\}^n$, $s(x) = \tau(x)$ where $\tau$ is the final segment of $s$ of length $k$. It follows that if $s_i, s_j \in \{f,g\}^n$ agree on their final $k$ bits and $b_i \neq b_j$, then there is no solution to $s_i(x) = b_i \wedge s_j(x) = b_j$ in $C_{fin-k}$. On the other hand, suppose that $b_i = b_j = b_\tau$ whenever $s_i$ and $s_j$ both have $\tau \in \{f,g\}^k$ as a final segment. Then for $x \in C_{fin-k}$, the formula $\bigwedge_{i=1}^{2^n} s_i(x) = b_i$ is equivalent to $\bigwedge_{\tau \in \{f,g\}^k} \tau(x) = b_\tau$, which has a unique solution in $C_{fin-k}$.
\end{proof} We note that Lemma \ref{pairlem} implies that for each $k$, we have $a = b$ if and only if $a E b$ and $s(a) = s(b)$ for all $s \in \{f,g\}^k$. This is used for syntactical manipulation of formulas (Lemma \ref{intermedform}) in our proof of quantifier elimination. The stability of $T$ is an easy consequence of quantifier elimination in $T$ (Proposition \ref{qe}). The proof of quantifier elimination is rather involved, so we defer it until the end of the section. \begin{proposition} \label{stable} $T$ is stable. \end{proposition} \begin{proof} Let $M$ be a model of $T$, and let $A \subset M$ be a subset. We will show that all 1-types over $A$ are definable over $A$. Let $p \in S_1(A)$. By quantifier elimination in the language $L \cup \{C_{init},C_{init+1},\ldots\}$ (Proposition \ref{qe}), it suffices to show that the sets \[X_{\sigma,\tau} = \{a \in A : \mbox{``}\sigma(x) E \tau(a)\mbox{"} \in p\}\] and \[Y_{\sigma,\tau} = \{a \in A : \mbox{``}\sigma(x) = \tau(a)\mbox{"} \in p\}\] are definable over $A$ for all $\sigma, \tau \in \{f,g\}^{<\omega}$. The sets $Y_{\sigma, \tau}$ are either empty or of the form $\{a \in A : \tau(a) = \tau(a')\}$ for some fixed $a' \in A$, so they are definable over $A$. Similarly, the sets $X_{\sigma,\tau}$, if nonempty, are of the form $\{a \in A : \tau(a) E \tau(a')\}$ for some fixed $a' \in A$, which are definable over $A$ as well. Therefore all 1-types over $A$ are definable over $A$, which is one of the many equivalent conditions for the stability of $T$. \end{proof} By Proposition \ref{stable} (using the fact that every stable theory is simple and low, see \cite[Remark 2.2]{buech}) and Proposition \ref{eqpairpseudo}, $T$ is a simple and low pseudofinite theory for which every pseudofinite ultraproduct satisfying $T$ fails to satisfy the GMS condition (A). The rest of this section is spent proving quantifier elimination in $T$.
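The coordinatization of Lemma \ref{pairlem} can be checked exhaustively on a small finite model. The sketch below uses an illustrative string encoding of our own (not fixed by the text): class $j$ holds strings of length $2^{(n-1)-j}$ over an alphabet of size $m$, a word over $\{f,g\}$ acts rightmost letter first, $f$ reads the first half of a string and $g$ the second. It verifies, for $k = 2$, that every assignment of $2^k$ elements of a class $C \neq C_{fin}$ to the words in $\{f,g\}^k$ is realized by exactly one element.

```python
from itertools import product

# Illustrative finite model (our own encoding, not from the text):
# class j holds strings of length 2**((n-1)-j) over an alphabet of
# size m; f reads the first half of a string, g the second.
n, m, k = 4, 2, 2

def cls(j):
    return list(product(range(m), repeat=2 ** ((n - 1) - j)))

def apply_word(w, e):
    """Apply a word w over {'f','g'} to an element e not in C_fin;
    the rightmost letter acts first, as in the composition s(x)."""
    for ch in reversed(w):
        half = len(e) // 2
        e = e[:half] if ch == 'f' else e[half:]
    return e

words = [''.join(w) for w in product('fg', repeat=k)]  # {f,g}^k
target = cls(k)   # the class C = f^k(C_init); here C != C_fin since k < n-1
domain = cls(0)   # candidate solutions a, with s(a) in C for s in {f,g}^k

# Coordinatization: every assignment of 2^k elements of C to the words
# in {f,g}^k is realized by exactly one element a of class 0.
for bs in product(target, repeat=len(words)):
    sols = [a for a in domain
            if all(apply_word(w, a) == b for w, b in zip(words, bs))]
    assert len(sols) == 1
```

The loop ranges over all $|C|^{2^k}$ assignments and confirms uniqueness; the unique solution is the concatenation of the chosen coordinates in the block order determined by the words.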
First we must expand our language $L$, adding unary predicates for the equivalence classes $C_{init}, C_{init+1},\ldots$ and $C_{fin}, C_{fin-1},\ldots$. These equivalence classes are $\emptyset$-definable in the language $\{f,g,E,=\}$, but the classes $C_{init+k}$ are not definable by quantifier-free formulas: the formula ``$C_{init}(x)$", for example, can be defined by the formula ``$\neg \exists z f(z) = x$". The final classes $C_{fin-k}$ are quantifier-free definable, but we include them anyway to make notation easier. An outline of our argument is as follows. First, we define the property ``$\varphi$ has definable polynomial cardinality in $\theta$ over $R$", where $\varphi, \theta$ are formulas and $R$ is a subring of $\mathbb{R}^\star$. In Lemma \ref{standard}, we isolate some additional properties which, together with definable polynomial cardinality, may be used to show quantifier elimination. We then define a class of formulas (the ``basic" formulas) and show that the hypotheses of Lemma \ref{standard} hold of all basic formulas (Lemma \ref{basic}). We extend this result to the wider class of ``intermediate" formulas (Lemma \ref{intermediate}), and then finally to certain conjunctions of atomic formulas and their negations (Lemma \ref{literals}), from which quantifier elimination follows (Proposition \ref{qe}). Here we define what we mean by ``definable polynomial cardinality". \begin{definition} Let $(M,\mathbb{R}^\star)$ be a counting pair. Let $\varphi(\bar{x},\bar{y})$ and $\theta(\bar{z},\bar{y})$ be $L$-formulas, and let $R$ be a subring of $\mathbb{R}^\star$.
Then we say $\varphi$ \emph{has (quantifier-free) definable polynomial cardinality in $\theta$ over $R$} if there are nonzero polynomials $F_1(X),\ldots,F_r(X)$ with coefficients from $R$ and (quantifier-free) formulas $\pi_1(\bar{y}),\ldots,\pi_r(\bar{y})$ so that \begin{itemize} \item If $M \models \pi_i(\bar{b})$ then $|\varphi(M^{|\bar{x}|},\bar{b})| = F_i(|\theta(M^{|\bar{z}|},\bar{b})|)$ \item If $M \models \bigwedge_i \neg \pi_i(\bar{b})$ then $\varphi(M^{|\bar{x}|},\bar{b}) = \emptyset$. \end{itemize} \end{definition} Under certain additional assumptions, q.f.-definable polynomial cardinality can be used to obtain quantifier elimination results, via the following lemma. \begin{lemma} \label{standard} Let $(M,\mathbb{R}^\star)$ be a counting pair. Suppose $\varphi$ has definable polynomial cardinality in $\theta$ over $\mathbb{R}$ (standard reals), witnessed by $F_1(X),\ldots,F_r(X)$ and $\pi_1(\bar{y}),\ldots,\pi_r(\bar{y})$. Suppose that whenever $M \models \pi_i(\bar{b})$ the set $\theta(M^{|\bar{z}|},\bar{b})$ is infinite. Then the formula $\exists \bar{x} \varphi(\bar{x},\bar{y})$ is equivalent in $M$ to the formula $\bigvee_i \pi_i(\bar{y})$. \end{lemma} \begin{proof} If $M \models \neg \bigvee_i \pi_i(\bar{b})$ then, by De Morgan's law, $M \models \bigwedge_i \neg \pi_i(\bar{b})$, whence $\varphi(M^{|\bar{x}|},\bar{b}) = \emptyset$ and $M \models \neg \exists \bar{x} \varphi(\bar{x},\bar{b})$. If $M \models \pi_i(\bar{b})$ for some $i$, then by assumption $|\theta(M^{|\bar{z}|},\bar{b})|$ is an infinite hyperinteger. The polynomial $F_i$ is nonzero and has standard coefficients, so $F_i(|\theta(M^{|\bar{z}|},\bar{b})|)$ is nonzero -- for no nonzero polynomial with standard coefficients has an infinite hyperinteger as a root, since all roots in $\mathbb{R}$, and therefore in $\mathbb{R}^\star$, are bounded by a finite integer. Therefore $|\varphi(M^{|\bar{x}|},\bar{b})| \neq 0$, whence $M \models \exists \bar{x} \varphi(\bar{x},\bar{b})$. \end{proof} Next we define our family of \emph{basic} formulas. \begin{definition} Let $k$ be a positive integer.
A \emph{$k$-basic} formula $\phi(x,y_0,\ldots,y_{m-1})$ is a formula of the form $\bigwedge_{i = 0}^{m-1} s_i(x) = y_i$, where $(s_0,\ldots,s_{m-1})$ is an $m$-tuple of strings $s_i \in \{f,g\}^k$. \end{definition} In the following lemma, we show that basic formulas have quantifier-free definable polynomial cardinality over a formula which satisfies the conditions of Lemma \ref{standard}. \begin{lemma} \label{basic} Let $\phi(x,\bar{y})$ be a $k$-basic formula. Then $\phi$ has q.f.-definable polynomial cardinality in the formula ``$z E y_0$" over $\mathbb{Z}$. \end{lemma} \begin{proof} Let $\phi(x,\bar{y})$ be $\bigwedge_{i=0}^{m-1} s_i(x) = y_i$, where each $s_i$ is a string in $\{f,g\}^k$. If $M \models \phi(a,\bar{b})$, then the following must hold: \begin{itemize} \item $b_i E b_j$ for all $i,j < m$ (since each string $s$ has the same length), \item $b_i = b_j$ whenever $s_i = s_j$, and \item $b_i \notin C_{init+l}$ for $i < m$ and $l < k$. \end{itemize} Let $\Psi(\bar{y})$ be the conjunction of the above conditions. Suppose $M \models \Psi(\bar{b})$. Let $S$ be the set of strings $\{s_i : i < m\}$. Suppose first that $b_0 \notin C_{fin}$ (and so $b_i \notin C_{fin}$, since all $b_i$ are $E$-equivalent as per $\Psi$). Then $\phi(M,\bar{b})$ is in definable bijection with $[b_0]_E^{2^k - |S|}$. To see this, we consider elements of this latter set as tuples indexed by strings in $\{f,g\}^k \setminus S$, and we pair $(c_s)_{s \notin S}$ with the unique $a$ such that $s_i(a) = b_i$ for $i < m$ and $s(a) = c_s$ for $s \notin S$ -- such an $a$ exists and is unique by Lemma \ref{pairlem}. We note that in this case, $\phi(M,\bar{b})$ is a subset of $f^{-k}([b_0]_E)$. If $b_0 \in C_{fin}$ then, as in the proof of Lemma \ref{pairlem}, the situation is slightly more complicated. We have $\phi(M,\bar{b}) \subset C_{fin} \cup \ldots \cup C_{fin-k}$. 
The set $\phi(M,\bar{b})$ has nonempty intersection with $C_{fin - l}$ precisely when $\bar{b}$ satisfies the following condition: if $s_i, s_j \in S$ and the length-$l$ final segment of $s_i$ equals the length-$l$ final segment of $s_j$, then $b_i = b_j$. Let $\rho_l(\bar{y})$ be a quantifier-free formula expressing this condition on $\bar{b}$. When $M \models \rho_l(\bar{b})$, the intersection $\phi(M,\bar{b}) \cap C_{fin-l}$ is in definable bijection with $C_{fin}^{2^l - |S'|}$, where $S'$ is the set of all length-$l$ final segments of elements of $S$: indeed, $C_{fin-l}$ is in definable bijection with $C_{fin}^{2^l}$ by the same argument used in this proof, and the conjuncts of $\phi$ fix $|S'|$ of these coordinates. In particular, $\phi(M,\bar{b}) \cap C_{fin-l}$ is in definable bijection with a power of $C_{fin}$. Therefore $\phi(M,\bar{b})$ is in definable bijection with a power of $[b_0]_E$ or (if $[b_0]_E$ is $C_{fin}$) with one of finitely many disjoint unions of powers of $C_{fin}$, all determined by quantifier-free conditions on $\bar{b}$. Since definable bijections preserve pseudofinite cardinality, we obtain that $\phi$ has q.f.-definable polynomial cardinality in the formula ``$z E y_0$" over $\mathbb{Z}$. \end{proof} A brief modification of the above proof gets us a similar result for a wider class of formulas. \begin{lemma} \label{basicplus} Let $\phi(x,\bar{y}) := \bigwedge_i s_i(x) = y_i$ be a $k$-basic formula and let $\psi(x)$ be a conjunction of formulas $s(x) = t(x)$ with $s,t \in \{f,g\}^k$. Then $\varphi(x,\bar{y}) := \phi(x,\bar{y}) \wedge \psi(x)$ has q.f.-definable polynomial cardinality in ``$z E y_0$" over $\mathbb{Z}$. \end{lemma} \begin{proof} Without loss of generality, $\psi(x)$ is of the form $\bigwedge_{t \sim t'} t(x) = t'(x)$, where $\sim$ is some equivalence relation on $\{f,g\}^k$. Now, $\varphi(M,\bar{b})$ is nonempty if and only if $\phi(M,\bar{b})$ is nonempty and $\bar{b}$ additionally satisfies the property that $b_i = b_j$ whenever $s_i \sim s_j$.
Let $[t_0]_\sim,\ldots,[t_{l-1}]_\sim$ be the equivalence classes, ordered so that for some $j \leq l$ we have $\{[s_i]_\sim : i < m\} = \{[t_j]_\sim, \ldots, [t_{l-1}]_\sim\}$. Assume that all elements of $\bar{b}$ are from the same equivalence class $C$ (or else $\phi(M,\bar{b})$ is empty). Then, as in Lemma \ref{basic}, the set $\varphi(M,\bar{b})$ is in definable bijection with $C^j$ if $C \neq C_{fin}$, as we may freely choose elements of $C$ for each equivalence class of strings $[t]_\sim$ which is not specified by a formula $s_i(x) = y_i$. If $C = C_{fin}$ then, as before, $\varphi(M,\bar{b})$ is in definable bijection with one of finitely many disjoint unions of powers of $C$, depending on some quantifier-free condition on $\bar{b}$. \end{proof} \begin{definition} A \emph{$k$-intermediate} formula $\phi(x,y_1,\ldots,y_m)$ is a conjunction of formulas of the form $s(x) = s'(x)$ and $s(x) = t(y_i)$, where $s, s'$ and $t$ are all strings in $\{f,g\}^{<\omega}$, with each string $s$ and $s'$ having length $k$. \end{definition} In this definition, we do not require every $\bar{y}$ variable to occur in a conjunct, and we do not require that both types of conjunct appear. We do not consider the empty conjunction to be $k$-intermediate, but we note that it is equivalent to the $k$-intermediate formula ``$x = x$". Equivalently, a $k$-intermediate formula is a formula $\varphi(x,\bar{y})$ of the form $\psi(x)$ or the form $\phi(x,t_1(y_{i_1}),\ldots,t_n(y_{i_n})) \wedge \psi(x)$, where $\phi(x,y_1,\ldots,y_n)$ is a $k$-basic formula and $\psi(x)$ is a conjunction of formulas $s(x) = s'(x)$, with each $s, s'$ being a string of length $k$. The fact that this is equivalent depends on the function symbols in $L$ being unary, so that every $L$-term is of the form $t(v)$ for some variable $v$ and some string $t \in \{f,g\}^{<\omega}$.
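The counting behind Lemma \ref{basic} -- away from $C_{fin}$, a $k$-basic formula whose set $S$ of distinct strings has $|S|$ elements cuts out exactly $|C|^{2^k - |S|}$ solutions -- can likewise be checked on a small finite model. The sketch below is self-contained and again uses an illustrative encoding of our own choosing (class $j$ is the set of strings of length $2^{(n-1)-j}$ over an alphabet of size $m$; words act rightmost letter first).

```python
from itertools import product

# Illustrative finite model (our own encoding): class j holds strings
# of length 2**((n-1)-j) over an alphabet of size m.
n, m, k = 4, 2, 2

def cls(j):
    return list(product(range(m), repeat=2 ** ((n - 1) - j)))

def apply_word(w, e):
    # rightmost letter acts first; f = first half, g = second half
    for ch in reversed(w):
        half = len(e) // 2
        e = e[:half] if ch == 'f' else e[half:]
    return e

C = cls(k)        # the class of the parameters b_i; not C_fin since k < n-1
domain = cls(0)   # solutions x live k levels above C

# phi(x, y0, y1) := (ff)(x) = y0  &  (gg)(x) = y1, so S = {ff, gg}
S = ['ff', 'gg']
b = (C[1], C[2])  # an arbitrary choice of parameters in C
count = sum(1 for a in domain
            if all(apply_word(s, a) == bi for s, bi in zip(S, b)))

# Away from C_fin, the solution set is in bijection with C^(2^k - |S|).
assert count == len(C) ** (2 ** k - len(S))
```

Constraining $|S| = 2$ of the $2^k = 4$ coordinate blocks leaves the remaining blocks free, which is exactly the definable bijection used in the proof.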
The way we prove that certain conjunctions of atomic formulas have definable polynomial cardinality is by splitting the conjunctions into a part in the language $\{f,g,=\}$, and a part in the language $L$ without equality. The latter we call \emph{equivalence formulas}. \begin{definition} \label{eqdef} An \emph{equivalence formula} is an $L$-formula which does not use the equality symbol. \end{definition} Atomic equivalence formulas are ``$C_{init+i}(v)$", ``$C_{fin-i}(v)$", and ``$s(v) E t(w)$", with a natural number $i \in \omega$, strings $s,t \in \{f,g\}^{<\omega}$, and variables $v,w$. An easy induction on formula length shows that if $\phi(\bar{x})$ is an equivalence formula and $\bar{a}, \bar{a}' \in M^{|\bar{x}|}$ with $a_i E a'_i$ for each $i$, then $M \models \phi(\bar{a})$ if and only if $M \models \phi(\bar{a}')$. In fact, a stronger phenomenon occurs. Recall the language $L^\star = \{c_{init},c_{fin},S\}$ and the theory $T^\star$ (Definition \ref{tstar}). Recall also the remarks after that definition, that we may define $M^\star$, a model of $T^\star$, on the set of equivalence classes $[x]_E$ by interpreting $c_{init}$ as $C_{init}$, $c_{fin}$ as $C_{fin}$, and letting $S([a]_E)$ be $[f(a)]_E$. \begin{lemma} \label{bi-interp} For every equivalence formula $\phi(\bar{x})$ there is an $L^\star$-formula $\phi^\star(\bar{x})$ so that for all $\bar{a} \in M^{n}$, we have $M \models \phi(a_0,\ldots,a_{n-1})$ if and only if $M^\star \models \phi^\star([a_0]_E,\ldots,[a_{n-1}]_E)$. \end{lemma} \begin{proof} If $\phi(x_i)$ is $C_{init+k}(x_i)$, let $\phi^\star(x_i)$ be $x_i = S^k(c_{init})$. If $\phi(x_i)$ is $C_{fin-k}(x_i)$, let $\phi^\star(x_i)$ be $S^k(x_i) = c_{fin} \wedge S^{k-1}(x_i) \neq c_{fin}$. If $\phi(x_i,x_j)$ is $s(x_i) E t(x_j)$, let $\phi^\star(x_i,x_j)$ be $S^{|s|}(x_i) = S^{|t|}(x_j)$. Boolean connectives and quantifiers pass up directly.
\end{proof} As a consequence of this lemma, we obtain \begin{lemma} \label{qelight} Let $\Phi(x,\bar{y})$ be a quantifier-free equivalence formula. Then the formula $\exists x \Phi(x,\bar{y})$ is equivalent to a quantifier-free formula $\theta(\bar{y})$. \end{lemma} \begin{proof} Let $\Phi^\star(x,\bar{y})$ be the $L^\star$-formula as in Lemma \ref{bi-interp}. Since the theory $T^\star$ has quantifier elimination, the formula $\exists x \Phi^\star(x,\bar{y})$ is equivalent in $T^\star$ to a quantifier-free formula $\theta^\star(\bar{y})$. Via the interpretation of $M^\star$ in $M$, there is an $L$-formula $\theta(\bar{y})$ so that $M \models \theta(b_0,\ldots,b_{m-1})$ if and only if $M^\star \models \theta^\star([b_0]_E,\ldots,[b_{m-1}]_E)$. This happens if and only if $M^\star \models \exists x \Phi^\star(x,[b_0]_E,\ldots,[b_{m-1}]_E)$, and that happens if and only if $M \models \exists x \Phi(x,b_0,\ldots,b_{m-1})$. \end{proof} We now show that intermediate formulas satisfy the hypotheses of Lemma \ref{standard}. We also demonstrate that sets definable by intermediate formulas intersect equivalence classes in a definable way, which will be used when we incorporate equivalence formulas into our analysis of formulas with definable polynomial cardinality, as per the remarks before Definition \ref{eqdef}. \begin{lemma} \label{intermediate} Let $\phi(x,\bar{y})$ be a $k$-intermediate formula with a conjunct of the form $s(x) = t(y_i)$. Then $\phi$ has q.f.-definable polynomial cardinality in the formula ``$z E t(y_i)$".
Furthermore, $\phi(x,\bar{b})$ is a subset of $f^{-k}([t(b_i)]_E)$, and there are quantifier-free formulas $\eta_1(\bar{y}), \ldots, \eta_p(\bar{y})$ in the language $\{f,g,=\}$ and numbers $j_1, \ldots, j_p \in \{0,\ldots,k\}$ so that \begin{itemize} \item If $M \models \exists x \phi(x,\bar{b})$ and $t(b_i) \in C_{fin}$ then $M \models \eta_q(\bar{b})$ for some $q$, and $\phi(M,\bar{b})$ intersects exactly the equivalence classes $C_{fin-k}, \ldots, C_{fin-j_q}$. \end{itemize} \end{lemma} \begin{proof} The formula $\phi(x,\bar{y})$ is equal to $\varphi(x,t_0(y_{i_0}),\ldots,t_{l-1}(y_{i_{l-1}})) \wedge \psi(x)$, where $\varphi$ is $k$-basic and $\psi$ is $k$-intermediate in the single variable $x$. We may arrange the variables in $\varphi$ so that $t_0(y_{i_0})$ is the term $t(y_i)$ assumed in the hypotheses of the lemma. Then the first statement of the lemma follows directly from Lemma \ref{basicplus}. For the second statement of the lemma, we note that if $a \in \phi(M,\bar{b})$, then $M \models s(a) = t(b_i)$ and so $a \in f^{-|s|}([t(b_i)]_E)$. The rest of the lemma follows directly from the proof of Lemma \ref{basic}. \end{proof} In order to turn Lemma \ref{intermediate} into our general quantifier-elimination argument, we use the following syntactical lemma. \begin{lemma} \label{intermedform} Let $\phi(x,\bar{y})$ be a conjunction of atomic $L$-formulas, each of which contains the variable $x$. Let $k$ be any number larger than the length of the longest $\{f,g\}$-string appearing in a term in $\phi$. Then $\phi(x,\bar{y})$ is equivalent to $\rho(x,\bar{y}) \wedge \eta(x,\bar{y})$, where $\rho$ is a $k$-intermediate formula and $\eta(x,\bar{y})$ is a quantifier-free equivalence formula. \end{lemma} \begin{proof} We note that a conjunction of $k$-intermediate formulas is again a $k$-intermediate formula, and a conjunction of equivalence formulas is clearly another equivalence formula.
Therefore it suffices to prove the lemma for a single atomic formula $\phi(x,y)$. If $\phi(x,y)$ has no equality symbol in it, we are done (our $k$-intermediate formula may be ``$f^k(x) = f^k(x)$"). If $\phi(x,y)$ is $s(x) = t(y)$, let $k > \max\{ |s|, |t|\}$. Then by Lemma \ref{pairlem}, $\phi(x,y)$ is equivalent to \[s(x) E t(y) \wedge \bigwedge_{u \in \{f,g\}^{k - |s|}} us(x) = ut(y).\] Suppose $\phi(x,y)$ is $s(x) = t(x)$, and again let $k > \max\{|s|,|t|\}$. If $|s| = |t|$ then as in the case of $s(x) = t(y)$, the formula $s(x) = t(x)$ is equivalent to \[\bigwedge_{u \in \{f,g\}^{k - |s|}} us(x) = ut(x)\] (the missing conjunct ``$s(x) E t(x)$" is always true). If $|s| > |t|$, let $u_1$ be any string in $\{f,g\}^{k-|s|}$ and let $u_2$ be any string in $\{f,g\}^{k - |t|}$. Then ``$s(x) = t(x)$" is equivalent to ``$[u_1s(x) = u_2t(x)] \wedge [t(x) \in C_{fin}]$". To see this, we note that if $s(x) = t(x)$ then we must have $t(x) \in C_{fin}$, for if $s'$ is the length-$|t|$ final segment of $s$ then $t(x) E s'(x)$, and if $s'(x)$ is not an element of the final class $C_{fin}$ then $s(x)$ will be in a different equivalence class from $s'(x)$. Then we have $u_1s(x) = s(x)$ and $u_2t(x) = t(x)$, since our unary functions $f,g$ are the identity on $C_{fin}$. On the other hand, if $t(x) \in C_{fin}$ and $u_1s(x) = u_2t(x)$ then $s(x) \in C_{fin}$, and so $s(x) = u_1s(x) = u_2t(x) = t(x)$. \end{proof} \begin{lemma} \label{atomic} Let $\phi(x,\bar{y})$ be a conjunction of atomic formulas, with at least one conjunct of the form $s(x) = t(y_i)$ or $s(x) E t(y_i)$. Then $\phi$ has q.f.-definable polynomial cardinality in ``$z E t'(y_i)$" over $\mathbb{Z}$, for some $t' \in \{f,g\}^{<\omega}$. \end{lemma} \begin{proof} Let us write $\phi(x,\bar{y})$ as $\rho(x,\bar{y}) \wedge \xi(x,\bar{y})$, where $\rho$ is in the language $\{f,g,=\}$ and $\xi$ is an equivalence formula.
By Lemma \ref{intermedform}, there is a $k$ so that $\rho(x,\bar{y})$ is equivalent to the conjunction of a $k$-intermediate formula $\varphi(x,\bar{y})$ and another equivalence formula $\xi'(x,\bar{y})$. Inspecting the proof of that lemma, we see that if $\rho(x,\bar{y})$ contains a conjunct of the form $s(x) = t(y_i)$, then $\varphi$ contains a conjunct of the form $s'(x) = t'(y_i)$ for some strings $s', t'$ which are initial extensions of $s$ and $t$. Then by Lemma \ref{intermediate}, the formula $\varphi(x,\bar{y})$ has q.f.-definable polynomial cardinality in the formula ``$z E t'(y_i)$" over $\mathbb{Z}$, as witnessed by polynomials $F_1(X),\ldots,F_r(X)$ and quantifier-free formulas $\pi_1(\bar{y}),\ldots,\pi_r(\bar{y})$. Furthermore, there are quantifier-free formulas $\eta_1(\bar{y}),\ldots,\eta_p(\bar{y})$ so that ``$\bigvee_j \eta_j(\bar{y})$" is equivalent to \[\exists x \varphi(x,\bar{y}) \wedge [t'(y_i) \in C_{fin}]\] and if $M \models \eta_j(\bar{b})$ then solutions to $\varphi(x,\bar{b})$ lie exactly in some finite set of final classes $C_{fin-l}$. In sum, letting $\eta_0(\bar{y})$ be the formula ``$t'(y_i) \notin C_{fin}$", we obtain formulas $\sigma_0(w,y_i),\sigma_1(w),\ldots,\sigma_p(w)$, where $\sigma_0(w,y_i)$ is the formula ``$s'(w) E t'(y_i)$" and $\sigma_j(w)$ is a disjunction of formulas $C_{fin-l}(w)$ for $j > 0$, so that if $M \models \exists x \varphi(x,\bar{b})$ then $M \models \bigvee_{j=0}^p \eta_j(\bar{b})$, and if $M \models \eta_j(\bar{b})$ then $\varphi(M,\bar{b})$ intersects $[d]_E$ exactly when $M \models \sigma_j(d)$. Now we can incorporate the equivalence formula $\xi'(x,\bar{y}) \wedge \xi(x,\bar{y})$. The precise details here are particularly tedious, so we sketch the argument. To find the cardinality of $\phi(M,\bar{b})$, we look at the set $\varphi(M,\bar{b})$. The formulas $\pi_j(\bar{y})$ define the cardinality of this set, and the formulas $\eta_j(\bar{y})$ determine which equivalence classes this set intersects.
If $M \models \eta_j(\bar{b})$, then there is a solution to $\varphi(x,\bar{b}) \wedge \xi'(x,\bar{b}) \wedge \xi(x,\bar{b})$ precisely if there is a solution to the equivalence formula $\sigma_j(w,\bar{b}) \wedge \xi(w,\bar{b}) \wedge \xi'(w,\bar{b})$. By Lemma \ref{qelight}, this is equivalent to a quantifier-free condition $\tau(\bar{y})$ on $\bar{b}$. If the solution set is nonempty, then the cardinality is exactly the cardinality of $\varphi(M,\bar{b})$ if this set is contained in a single equivalence class (as in the case where $t'(b_i) \notin C_{fin}$), or is otherwise some smaller cardinality which is still polynomial in the cardinality of $C_{fin} = [t'(b_i)]_E$, with quantifier-free defining formulas given by a conjunction of a $\pi_j(\bar{y})$, an $\eta_{j'}(\bar{y})$, and the quantifier-free formula $\tau(\bar{y})$. \end{proof} \begin{lemma} \label{literals} Let $\phi(x,\bar{y})$ be a conjunction of literals, with at least one conjunct of the form $s(x) = t(y_i)$ or $s(x) E t(y_i)$. Then $\phi$ has q.f.-definable polynomial cardinality in ``$z E t(y_i)$" over $\mathbb{Z}$. \end{lemma} \begin{proof} Let $\phi(x,\bar{y})$ be $\rho(x,\bar{y}) \wedge \bigwedge_i \neg \eta_i(x,\bar{y})$, where $\rho$ is a conjunction of atomic formulas, one of which is ``$s(x) = t(y_i)$" or ``$s(x) E t(y_i)$", and each $\eta_i$ is atomic (possibly with no negated conjuncts at all). Then $\phi$ is equivalent to $\rho(x,\bar{y}) \wedge \neg \bigvee_i \eta_i(x,\bar{y})$. Then for any $\bar{b}$, we have $|\phi(M,\bar{b})| = |\rho(M,\bar{b})| - |\rho(M,\bar{b}) \wedge \bigvee_i \eta_i(M,\bar{b})| = |\rho(M,\bar{b})| - |\bigvee_i [\rho(M,\bar{b}) \wedge \eta_i(M,\bar{b})]|$. Now we appeal to the inclusion-exclusion principle from basic combinatorics. Given finite subsets $A,B$ of some set, the cardinality $|A \cup B|$ is $|A| + |B| - |A \cap B|$. 
More generally, given finite sets $A_1, \ldots, A_n$, the set $A_1 \cup \ldots \cup A_n$ has cardinality \[\sum_{k=1}^n (-1)^{k+1}(\sum_{1 \leq i_1 < \ldots < i_k \leq n} |A_{i_1} \cap \ldots \cap A_{i_k}|).\] Therefore the cardinality $|\bigvee_i [\rho(M,\bar{b}) \wedge \eta_i(M,\bar{b})]|$ is an alternating sum of cardinalities of the form $|\rho(M,\bar{b}) \wedge \eta_{i_1}(M,\bar{b}) \wedge \ldots \wedge \eta_{i_k}(M,\bar{b})|$. By Lemma \ref{atomic}, each of these formulas has q.f.-definable polynomial cardinality in ``$z E t(y_i)$" over $\mathbb{Z}$. It follows that $\bigvee_i [\rho(M,\bar{b}) \wedge \eta_i(M,\bar{b})]$ and therefore $\phi(x,\bar{y})$ do as well. \end{proof} The final ingredient in our quantifier elimination proof is an observation about $\emptyset$-definable sets. \begin{lemma} \label{emptydef} Let $\phi(x)$ be a quantifier-free formula in the single variable $x$. Then there is a quantifier-free formula $\theta(w)$ in the unary relational language $\{C_{init}, C_{init+1}, \ldots\} \cup \{C_{fin},C_{fin-1},\ldots\}$ so that for each equivalence class $[d]_E$, the set $\phi(M)$ intersects $[d]_E$ if and only if $M \models \theta(d)$. \end{lemma} \begin{proof} Atomic formulas in the single variable $x$ come in the forms ``$C_{init+i}(x)$", ``$C_{fin-i}(x)$", ``$s(x) E t(x)$", and ``$s(x) = t(x)$". When $s$ and $t$ are strings of the same length, the formula $s(x) E t(x)$ is always true. When $|s| > |t|$, $s(x) E t(x)$ happens if and only if $[s(x)]_E = [t(x)]_E = C_{fin}$, which is true if and only if $x \in \bigcup_{i \leq |t|} C_{fin-i}$. So any Boolean combination of single-variable formulas in $x$ without the equality symbol is equivalent to a formula in the language $\{C_{init+i} : i \in \omega\} \cup \{C_{fin-i} : i \in \omega\}$. Suppose $k > \max( |s|, |t|)$. Then by Lemma \ref{intermedform}, ``$s(x) = t(x)$" is equivalent to $\rho(x) \wedge \eta(x)$, where $\rho$ is $k$-intermediate and $\eta$ is a quantifier-free equivalence formula. 
By definition, $\rho$ is a conjunction of the form $\bigwedge_i s_i(x) = t_i(x)$, where $s_i, t_i \in \{f,g\}^k$ for each $i$. It follows that there exists a $k$ such that $\phi(x)$ is equivalent to a Boolean combination of formulas of the form ``$C_{init+i}(x)$", ``$C_{fin-i}(x)$" and ``$s(x) = t(x)$", where all $s,t \in \{f,g\}^k$. The statement of the lemma passes up through disjunctions, so we may assume $\phi(x)$ is a conjunction of such formulas and their negations. Let us write $\phi(x)$ as $\rho(x) \wedge \eta(x)$, where $\rho$ is in the language $\{f,g,=\}$ and $\eta$ is in the language $\{C_{init+i}(x) : i \in \omega\} \cup \{C_{fin-i}(x) : i \in \omega\}$. Without loss of generality, $\rho(x)$ is $\bigwedge_{s \sim t} s(x) = t(x) \wedge \bigwedge_j s'_j(x) \neq t'_j(x)$ where $\sim$ is an equivalence relation on $\{f,g\}^k$ and all $s'_j,t'_j$ are strings in $\{f,g\}^k$. Then $\rho(M)$ is nonempty if and only if $s'_j \not \sim t'_j$ for all $j$. If $\rho(M)$ is nonempty and $C$ is an equivalence class other than $C_{fin-i}$ for every $i < k$, then $\rho(C)$ is nonempty, and for $i < k$ the set $\rho(C_{fin-i})$ is nonempty if and only if, for each $j$, the strings $s'_j$ and $t'_j$ do not have the same length-$i$ final segment. In all, there is a quantifier-free formula $\pi(w)$ in the language $\{C_{fin-i}(w) : i \in \omega\}$ such that for all $d \in M$, $\rho([d]_E)$ is nonempty if and only if $M \models \pi(d)$. Then $\phi([d]_E)$ is nonempty if and only if $M \models \pi(d) \wedge \eta(d)$. This proves the lemma, with $\theta(w) = \pi(w) \wedge \eta(w)$. \end{proof} We now finish our proof of quantifier elimination. \begin{proposition} \label{qe} $T$ has quantifier elimination in the expanded language \[\{f,g,E\} \cup \{C_{init}, C_{init+1}, C_{init+2}, \ldots\}.\] \end{proposition} \begin{proof} We recall that we have been working in the expanded language \[\{f,g,E\} \cup \{C_{init}, C_{init+1},\ldots\} \cup \{C_{fin}, C_{fin-1},\ldots\}\] since the remarks after Proposition \ref{stable}. 
The predicates $C_{fin-k}$ are all quantifier-free definable in $\{f,g,E\}$; for example, ``$C_{fin}(x)$" is equivalent to ``$f(x) = x$", and we may define the other final predicates recursively, as ``$C_{fin-k}(x)$" holds precisely when ``$C_{fin-(k-1)}(f(x)) \wedge \neg C_{fin-(k-1)}(x)$" holds. It suffices to show that $\exists x \Phi(x,\bar{y})$ is equivalent to a quantifier-free formula whenever $\Phi(x,\bar{y})$ is a conjunction of literals. We may assume that the variable $x$ appears in each literal. If there is a literal in $\Phi$ of the form $s(x) = t(y_i)$ or $s(x) E t(y_i)$ then we apply Lemma \ref{literals} and Lemma \ref{standard} to obtain quantifier-free formulas $\pi_1(\bar{y}),\ldots,\pi_r(\bar{y})$ so that in $M$, the formula $\exists x \Phi(x,\bar{y})$ is equivalent to the quantifier-free formula $\bigvee_i \pi_i(\bar{y})$. If not, then let us write $\Phi$ as $\psi(x) \wedge \varphi(x,\bar{y}) \wedge \bigwedge_i s_i(x) \neq t_i(y_{j_i})$, where $\psi(x)$ is a conjunction of literals in the variable $x$ and $\varphi(x,\bar{y})$ is an equivalence formula. By Lemma \ref{emptydef}, there is a quantifier-free formula $\eta(w)$ in the language $\{C_{init}(w), C_{init+1}(w), \ldots\} \cup \{C_{fin}(w), C_{fin-1}(w), \ldots\}$ so that for all $d \in M$, the set $\psi(M)$ intersects $[d]_E$ if and only if $M \models \eta(d)$. Therefore $M \models \exists x [\psi(x) \wedge \varphi(x,\bar{y})]$ if and only if $M \models \exists w [\eta(w) \wedge \varphi(w,\bar{y})]$. If we have $M \models \exists x [\psi(x) \wedge \varphi(x,\bar{y})]$ then $M \models \exists x \Phi(x,\bar{y})$, since we may choose a witness for $x$ in $\psi(x) \wedge \varphi(x,\bar{y})$ so that $s_i(x) \neq t_i(y_{j_i})$ for each $i$. So $M \models \exists x \Phi(x,\bar{y})$ if and only if $M \models \exists w [\eta(w) \wedge \varphi(w,\bar{y})]$. 
Since $\eta \wedge \varphi$ is an equivalence formula, Lemma \ref{qelight} gives us a quantifier-free equivalence formula $\theta(\bar{y})$ so that $\exists w [\eta(w) \wedge \varphi(w,\bar{y})]$ is equivalent to $\theta(\bar{y})$. Therefore $\exists x \Phi(x,\bar{y})$ is equivalent to $\theta(\bar{y})$. \end{proof} \end{document}